id
stringlengths
9
16
raw
stringlengths
0
3.38M
cleaned
stringlengths
0
3.38M
gr-qc/0606075
Tetrads in low-energy weak interactions Alcides Garat11 1. Instituto de F´ısica, Facultad de Ciencias, Igu´a 4225, esq. Mataojo, Montevideo, Uruguay.a) (Dated: June 15th, 2006) Tetrads are introduced in order to study the relationship between tetrad gauge states of spacetime and particle interactions, specially in weak processes at low energy. Through several examples like inverse Muon decay, elastic Neutrino-Electron scatter- ing, it is explicitly shown how to assign to each vertex of the corresponding low-order Feynman diagram in a weak interaction, a particular set of tetrad vectors. The re- lationship between the tetrads associated to different vertices is exhibited explicitly to be generated by a SU(2) local tetrad gauge transformation. We are establishing a direct link between standard gauge and tetrad gauge states of spacetime using the quantum field theories perturbative formulations. a)garat.alcides@gmail.com 1 arXiv:gr-qc/0606075v2 16 Oct 2025 I. INTRODUCTION We are trying to understand the underlying symmetries of different field architectures, by showing explicitly the local geometrical structure of different kinds of groups of transforma- tions of the Standard Model, specifically in their relationship to spacetime through specially defined tetrads. In references1,2 we studied the local geometrical meaning of electromagnetic local gauge transformations. In references3–5 we studied the local geometrical meaning of SU(2) × U(1) and SU(3) × SU(2) × U(1) local groups of gauge transformations. Isomor- phisms and homomorphisms were found that relate the standard model groups of local gauge transformations with new groups of local geometrical transformations in four-dimensional curved Lorentz spacetimes. These relationships can be explicitly displayed through the use of appropriately defined tetrads. It is the purpose of this work, to make use of already de- fined tetrads of different kinds1–5, in order to briefly show in an explicit way, the invariance of the metric tensor associated to a low-energy weak interaction, under different kinds of transformations. For instance, the invariance under electromagnetic local gauge transforma- tions, the invariance under SU(2) local gauge transformations, the invariance under local gauge transformations of the spinor fields6,7, etc. Since we are trying to “geometrize” the local gauge theories, it is interesting in its own right, to understand as well, the geometries that involve the standard fields associated with microparticle interactions. To that end, we introduce what we call “tetrad Feynman calculus”. We are able to explicitly show how to build a tetrad associated to a Feynman low-order diagram in low-energy weak interactions. The massive weak interactions bosons must have an associated gravitational field as elec- trons, muons and neutrinos also do have and even though these gravitational fields might be weak, they possess the necessary geometrical structure that enables the local symmetries of the standard model to be realized in an explicit fashion as it was analyzed in previous manuscripts1–5. In high energy interactions where virtual phenomena becomes relevant, a different approach is needed as we will discuss later on. We proceed to show how to assign a tetrad to each vertex, for instance in inverse Muon decay, and elastic Neutrino-Electron scattering. In the first two sections II and III we deal as an introduction with these tetrads defined in a general curved four-dimensional Lorentzian spacetime. These two sections and because of the construction nature of the new tetrads can also be identically developed in flat Minkowski spacetimes. In section IV we will limit our analysis to flat spacetimes as an 2 example compatible with the foundations of quantum field theories. As a general thought we strongly believe that the construction of tetrad fields, and metric tensors that explicitly display the local symmetries of microparticle interactions, are hinting us over a possible re- lationship or link, between General Relativity and Quantum Theories. In both subsections IV A 1-IV A 2 of section IV we also demonstrate that it is possible to transform the tetrad associated to a vertex in a particular diagram to the tetrad assigned to another vertex in the same Feynman diagram through a local SU(2) tetrad gauge transformation at the same spacetime point. It is clear that we envisage the spacetime of the interaction process as a common spacetime for all participating objects in the region of interaction even though in this first manuscript on the subject we are addressing the processes on a flat spacetime background. Throughout the paper we use the conventions of references1,3,8. In particular we use a metric with sign conventions -+++. The only difference in notation with8 will be that we will call our geometrized electromagnetic potential Aα, where fµν = Aν;µ−Aµ;ν is the geometrized electromagnetic field fµν = (G1/2/c2)Fµν. Analogously, f k µν are the geometrized Yang-Mills field components, f k µν = (G1/2/c2) F k µν. II. OVERVIEW OF NEW TETRADS AND SYMMETRIES FOR THE ABELIAN CASE The new tetrads have been designed on the basis of existence of antisymmetric second rank tensors. In the particular case were Abelian non-null electromagnetic fields are present in spacetime in addition to curvature or a gravitational field, or even when spacetime is flat the new method involves a local duality rotation of these gauge fields. We then proceed to introduce at every point in spacetime a duality rotation by an angle −α that transforms a non-null electromagnetic field fµν into an extremal field ξµν, ξµν = e−∗αfµν = cos(α) fµν −sin(α) ∗fµν, (1) where ∗fµν = 1 2 ϵµνστ f στ is the dual tensor of fµν. The local scalar α is the complexion of the electromagnetic field and it is a local gauge invariant quantity. Extremal fields are essentially electric fields and they satisfy, ξµν ∗ξµν = 0 . (2) 3 Equation (2) is imposed as a condition on (1) and then we find the expression for the complexion that results in tan(2α) = −fµν ∗f µν/fλρ f λρ. We can also prove just using identities valid in four-dimensional Lorentzian spacetimes that the condition (2) can be rewritten as ξαµ ∗ξµν = 0. In order to prove this equivalence between conditions we will need an identity8 valid for two second rank antisymmetric fields in a four-dimensional Lorentzian spacetime. This identity is given by, Aµα Bνα −∗Bµα ∗Aνα = 1 2 δ ν µ Aαβ Bαβ , (3) When this identity (3) is considered for the case Aµα = ξµα and Bνα = ξνα we obtain, ξµα ξνα −∗ξµα ∗ξνα = 1 2 δ ν µ Q , (4) where Q = ξµν ξµν = − q TµνT µν according to equations (39) in reference8. Q is assumed not to be zero, because we are dealing with non-null electromagnetic fields. It can be proved that condition (2) plus the general identity (3), when applied to the case Aµα = ξµα and Bνα = ∗ξνα provides the equivalent condition to (2), ξαµ ∗ξµν = 0 , (5) which is equation (64) in reference8. In geometrodynamics, the Einstein-Maxwell equa- tions, Rµν = fµλ f λ ν + ∗fµλ ∗f λ ν (6) f µν ;ν = 0 (7) ∗f µν ;ν = 0 , (8) reveal the existence of two potential vector fields9 Aν and ∗Aν, fµν = Aν;µ −Aµ;ν (9) ∗fµν = ∗Aν;µ −∗Aµ;ν . (10) 4 The symbol “;′′ stands for covariant derivative with respect to the metric tensor gµν and the star in ∗Aν is just nomenclature, not the dual operator, with the meaning that ∗Aν;µ = (∗Aν);µ. The duality rotation given by equation (59) in8 fµν = ξµν cos α+∗ξµν sin α, enables us to reexpress the stress-energy tensor in terms of the extremal field, Tµν = ξµλ ξ λ ν + ∗ξµλ ∗ξ λ ν . (11) It is now the right time to introduce the new tetrads that will diagonalize locally and covariantly the stress-energy tensor (11). U α = ξαλ ξρλ Xρ / ( q −Q/2 q Xµ ξµσ ξνσ Xν ) (12) V α = ξαλ Xλ / ( q Xµ ξµσ ξνσ Xν ) (13) Zα = ∗ξαλ Yλ / ( q Yµ ∗ξµσ ∗ξνσY ν ) (14) W α = ∗ξαλ ∗ξρλ Y ρ / ( q −Q/2 q Yµ ∗ξµσ ∗ξνσY ν ) . (15) With all these elements put together, particularly equations (4-5) it becomes trivial to prove that the tetrad (12-15) is orthonormal and diagonalizes1 the stress-energy tensor (11). The vectors (12-13) with eigenvalue Q/2 and the vectors (14-15) with eigenvalue −Q/2. At every point in spacetime the timelike and one spacelike vectors that for some geometries like Reissner-Nordstr¨om are (12-13) generate a plane that we called blade one1,10. The other two spacelike vectors (14-15) generate a local orthogonal plane that we called blade two. These vectors are constructed with the local extremal field8 (1), its dual, the very metric tensor and a pair of vector fields Xα and Y α that represent a generic gauge choice as long as the tetrad vectors do not become trivial. We are aware then, that we still have to introduce the vectors Xµ and Y µ. Let us introduce some names. The tetrad vectors have two essential structure components. For instance in vector U α there are two main structures. First, the skeleton, in this case ξαλ ξρλ, and second, the gauge vector Xρ. These do not include the normalization factor 1/ ( q −Q/2 q Xµ ξµσ ξνσ Xν ). The gauge vectors must be anything that do not make the tetrad vectors trivial and we mean by this that the tetrad (12-15) diagonalizes the stress-energy tensor for any non-trivial gauge vectors Xµ and Y µ. It is then possible to make different choices for Xµ and Y µ. The potential vector fields introduced in equations (9-10) represent a possible choice in geometrodynamics for the vectors Xα = Aα 5 and Y α = ∗Aα. We do not mean that the two vector fields have independence from each other, it is just a convenient choice. With this particular choice for the two gauge vector fields we can define then, U α = ξαλ ξρλ Aρ / ( q −Q/2 q Aµ ξµσ ξνσ Aν ) (16) V α = ξαλ Aλ / ( q Aµ ξµσ ξνσ Aν ) (17) Zα = ∗ξαλ ∗Aλ / ( q ∗Aµ ∗ξµσ ∗ξνσ ∗Aν ) (18) W α = ∗ξαλ ∗ξρλ ∗Aρ / ( q −Q/2 q ∗Aµ ∗ξµσ ∗ξνσ ∗Aν ) , (19) where the four vectors (16-19) satisfy the following algebraic properties, −U α Uα = V α Vα = Zα Zα = W α Wα = 1 . (20) Using the equations (4-5) it is simple to prove that (16-19) are orthogonal vectors. We think then about local electromagnetic gauge transformations. We notice that we can in- terpret the independent local gauge transformations of the vector potentials introduced in equations (9-10), that is, Aα →Aα + Λ,α and ∗Aα →∗Aα + ∗Λ,α as new choices for the two gauge vector fields Xµ and Y µ. The first local gauge transformation leaves fµν invariant and the second one leaves ∗fµν invariant, as long as the functions Λ and ∗Λ are scalars. Accord- ing to Schouten, for non-null electromagnetic fields in Einstein-Maxwell spacetimes there is a two-bladed or two-plane structure10 at every point in spacetime. These blades are the planes determined by the pairs (U α, V α) and (Zα, W α). In manuscript1 it was demonstrated that the transformation Aα →Aα + Λ,α generates a Lorentz transformation (except for one discrete reflection) of the tetrad vectors (U α, V α) into ( ˜U α, ˜V α) in such a way that these “rotated” vectors ( ˜U α, ˜V α) remain in the plane or blade one generated by (U α, V α). In the same reference1 it was also proven that the transformation ∗Aα →∗Aα + ∗Λ,α generates a “rotation” of the tetrad vectors (Zα, W α) into ( ˜Zα, ˜W α) such that these “rotated” vectors ( ˜Zα, ˜W α) remain in the plane or blade two generated by (Zα, W α). In manuscript1 it was demonstrated that the group of local electromagnetic gauge transformations is isomorphic to the group LB1 of boosts plus two discrete transformations on blade one, and independently to LB2, the group of spatial rotations on blade two. Equations like 6 U α (ϕ) = cosh(ϕ) U α + sinh(ϕ) V α (21) V α (ϕ) = sinh(ϕ) U α + cosh(ϕ) V α , (22) on the local plane one represent a local electromagnetic gauge transformation of the vectors (U α, V α). The transformation of the two vectors (U α, V α) on blade one, given in (16-17) by the “angle” ϕ in (21-22) is a proper transformation, that is, a boost. For discrete improper transformations the result follows the same lines, see reference1. Analogously on the local plane two, Zα (φ) = cos(φ) Zα −sin(φ) W α (23) W α (φ) = sin(φ) Zα + cos(φ) W α . (24) Equations (23-24) represent a local electromagnetic gauge transformation of the vectors (Zα, W α), the transformation of the two tetrad vectors (Zα, W α) on blade two, given in (18- 19), by the “angle” φ. It is straightforward to check that the equalities U[α (ϕ) V β] (ϕ) = U [α V β] and Z[α (φ) W β] (φ) = Z[α W β] are true. These equalities mean that these antisymmetric tetrad objects are gauge invariant. In the Abelian case it was proved that the local group of electromagnetic gauge transformations is isomorphic to both the local groups LB1 and LB2, separately, independently. LB1 on the local plane one is a group composed by the tetrad boosts SO(1, 1) and two different kinds of discrete transformations. One of the discrete transformations is the full inversion or minus the identity two by two. The other discrete transformation is not Lorentzian1 because it is a reflection or flip, a two by two matrix with zeroes in the diagonal and ones off-diagonal. LB2 on plane two is the group of spatial tetrad rotations on this plane, that is SO(2). It is worth reminding ourselves about a point in mathematical language that could be loose or inaccurate but nonetheless immediate to understand. With the purpose of exemplifying we can mention the isomorphisms in the Abelian case1 between the local group of electromagnetic gauge transformations and the local groups of tetrad transformations LB1 and LB2, separately, independently. The isomorphisms strictly speaking are homomorphisms between the local algebra of scalars associated to the local group of electromagnetic gauge transformations and the local groups LB1 and LB2, independently. We know that between the local algebra of scalars and the local group of 7 electromagnetic gauge transformations there is a homomorphism, a homomorphism between the real numbers R that is, the algebra of local scalars associated to the local group of electromagnetic gauge transformations and U(1), that is, the local group of electromagnetic gauge transformations. We give this relationship as implicitly understood even though we talk about isomorphisms between the local group of electromagnetic gauge transformations and the local groups of tetrad transformations LB1 and LB2, separately, independently. We must also stress that the local transformations (21-22) are not imposed local boosts on the vectors that define the local plane one. They are the result of local gauge transformations of the vectors (U α, V α). For example, from reference1 a particular boost after the gauge transformation would look like, ˜V α (1) q −˜V β (1) ˜V(1)β = (1 + C) q (1 + C)2 −D2 V α (1) q −V β (1) V(1)β + D q (1 + C)2 −D2 V α (2) q V β (2) V(2)β (25) ˜V α (2) q˜V β (2) ˜V(2)β = D q (1 + C)2 −D2 V α (1) q −V β (1) V(1)β + (1 + C) q (1 + C)2 −D2 V α (2) q V β (2) V(2)β . (26) In equations (25-26) the following notation has been used, C = (−Q/2)V(1)σΛσ/(V(2)βV β (2)), D = (−Q/2)V(2)σ Λσ/(V(1)β V β (1) ) and [(1+C)2 −D2] > 0 must be satisfied. The notation Λα has been used for Λ,α where Λ is the local scalar generating the local gauge transformation. U α = V α (1) q −V β (1) V(1)β and V α = V α (2) q V β (2) V(2)β according to the notation used in paper1, V α (1) = ξαλ ξρλ Aρ (27) V α (2) = q −Q/2 ξαλ Aλ (28) V α (3) = q −Q/2 ∗ξαλ ∗Aλ (29) V α (4) = ∗ξαλ ∗ξρλ ∗Aρ . (30) For the particular case when 1 + C > 0, the transformations (25-26) manifest that an electromagnetic gauge transformation on the vector field Aα →Aα + Λα, that leaves invariant the electromagnetic field fµν, generates a boost transformation on the normalized tetrad vector fields   V α (1) q −V β (1) V(1)β , V α (2) q V β (2) V(2)β  . In this case cosh(ϕ) = (1+C) √ (1+C)2−D2, see also equation (21). This was just one of the possible cases in LB1. Similar analysis for the vector 8 transformations (23-24) in the local plane two generated by (Zα, W α). See reference1 for the detailed analysis of all possible cases. Back to our main line of work we can write the electromagnetic field in terms of these tetrad vectors, fαβ = −2 q −Q/2 cos α U[α Vβ] + 2 q −Q/2 sin α Z[α Wβ] . (31) Equation (31) entails the maximum simplification in the expression of the electromagnetic field. The true degrees of freedom are the local scalars q −Q/2 and α. We can also present both local degrees of freedom as q −Q/2 cos α and q −Q/2 sin α. The object U[αVβ] remains invariant1 under a “rotation” of the tetrad vectors U α and V α by a scalar angle ϕ like in (21-22) on blade one. This is the way in which local gauge invariance is manifested explicitly on this local plane. Analogous for discrete transformations on blade one. Similar analysis on blade two. A spatial “rotation” generated by a gauge transformation of the tetrad vectors Zα and W α through an “angle” φ as in (23-24), such that the object Z[α Wβ] remains invariant1. All this formalism clearly provides a technique to maximally simplify the expression for the electromagnetic field strength as in equation (31). It is block diagonalized automatically by the tetrad (16-19). This is not the case for the non-Abelian SU(2) field strength. We do not have an automatic block diagonalization. To this purpose a new algorithm was developed in reference11. In the next section we study the construction of tetrads similar to the Abelian case but in the non-Abelian environment. III. OVERVIEW OF NEW TETRADS AND SYMMETRIES FOR THE NON-ABELIAN CASE For the non-Abelian case we first would like to present a set of examples on how to build extremal fields that are locally invariant under SU(2) gauge transformations. Let us remember that in the Abelian case in the tetrads (12-15) the only dependence on gauge came through the gauge vectors Xµ and Y µ. The tetrad skeletons as it was mentioned previously are local gauge invariants. The advantage of this method is that when we introduce local gauge transformations, the vectors that span the local plane one, do not leave this plane after the transformation and analogous in the local plane two. This fact implies in turn that the metric tensor, be non-flat or flat will not change and will remain invariant. It is then 9 important to show explicitly that we can construct extremal fields invariant under both the Abelian and the non-Abelian gauge transformations. One example could be for instance given by, ζµν = cos β fµν −sin β ∗fµν , (32) Following the Abelian pattern we can define the new complexion β, and to this end we will impose the new SU(2) local invariant condition, Tr[ζµν ∗ζµν] = ζk µν ∗ζkµν = 0 , (33) where the summation convention is applied on the internal index k. We are just using a generalized duality transformation, and defining through it, this new local scalar complexion β. Therefore, the complexion condition (33) is not an additional condition for the field strength. We simply introduced a possible generalization of the definition for the Abelian complexion, found through a new duality transformation as well. Then, we find the local SU(2) invariant complexion β to be, tan(2β) = −f k µν ∗f kµν/f p λρ f pλρ , (34) where once again the summation convention was applied on both k and p. We can also consider gauge covariant derivatives since they will become useful in the ensuing analysis. For example, the gauge covariant derivatives of the three extremal field internal components, ζkµν|ρ = ζkµν ; ρ + g ϵklp Alρ ζpµν . (35) where ϵklp is the completely skew-symmetric tensor in three dimensions with ϵ123 = 1, with g the coupling constant. As in the previous section II the symbol “;” stands for the usual covariant derivative associated with the metric tensor gµν. Next we consider the Einstein-Maxwell-Yang-Mills vacuum field equations, Rµν = T (ym) µν + T (em) µν (36) 10 f µν ;ν = 0 (37) ∗f µν ;ν = 0 (38) f kµν |ν = 0 (39) ∗f kµν |ν = 0 . (40) The field equations (37-38) provide two electromagnetic potentials9, not independent from each other, but due to the symmetry of the equations, available for our construction. Aµ and ∗Aµ are the two electromagnetic potentials, see the comments made about the Abelian potentials and the star nomenclature ∗Aµ in section II. Similar for the two Non- Abelian equations (39-40). The Non-Abelian potential Akµ is available for our construction as well12–19. With all these elements put together, we can proceed to define the auxiliary antisymmetric field, ωµν = Tr(∗ζ στ ζµν + ζ στ ∗ζµν) Tr(ζσρ |ρ ∗ζτλ |λ) . (41) This particular antisymmetric auxiliary field in our construction could also be alterna- tively chosen to be, ωµν = Tr(ζ στ ζµν) Tr(ζσρ |ρ ∗ζτλ |λ) . (42) We can choose this antisymmetric auxiliary field ωµν in many different ways, we just show two examples. It is clear that (41) or (42) are invariant under SU(2) local gauge transformations. Expressions (41) or (42) are nothing but explicit examples among many, see for example reference3. Once our choice is made we perform a local duality rotation in order to obtain the new extremal field. We remind ourselves through the algorithm created in section II and reference1 that extremal fields are found through local duality rotations of second rank antisymmetric tensors like in equation (1) because then we can use the equations analogous to (4-5) to define an orthogonal tetrad. That is the core of this algorithm. ϵµν = cos ϑ ωµν −sin ϑ ∗ωµν . (43) As always we choose this complexion ϑ to be defined by the condition, 11 ϵµν ∗ϵµν = 0 . (44) Thus we find the new local scalar complexion analogously to section II to be, tan(2ϑ) = −ωµν ∗ωµν/ωλρ ωλρ . (45) We used our new algorithm to find a new kind of local SU(2) gauge invariant extremal tensor ϵµν, that enables the construction of the new tetrad, Sµ (1) = ϵµλ ϵρλ Xρ (46) Sµ (2) = q −Qym/2 ϵµλ Xλ (47) Sµ (3) = q −Qym/2 ∗ϵµλ Yλ (48) Sµ (4) = ∗ϵµλ ∗ϵρλ Y ρ , (49) where Qym = ϵµν ϵµν and we assume this local scalar not to be zero. With the help of identity (3), when applied to the case Aµα = ϵµα and Bνα = ∗ϵνα we obtain as in section II the equivalent condition to (44), ϵαν ∗ϵµν = 0 , (50) It is a simple excercise using (3) for Aµα = ϵµα and Bνα = ϵνα, and (50), to prove that the vectors (46-49) are orthogonal. As we did before in section II we will call for future reference ϵµλϵρλ the skeleton of the tetrad vector Sµ (1), and Xρ the gauge vector. In the case of Sµ (3), the skeleton is ∗ϵµλ, and Yλ the gauge vector. It is clear now that skeletons are gauge invariant under SU(2) × U(1) as we announced at the start of this section. This property guarantees that the vectors under local U(1) or SU(2) gauge transformations will not leave their original planes or blades, keeping therefore the metric tensor explicitly invariant. Our final task in this construction will be to define the gauge vectors Xσ and Y σ for the tetrad (46-49). A non- trivial although useful choice that we can make is Xσ = Y σ = Tr[Σαβ E ρ α E λ β ∗ξ σ ρ ∗ξλτ Aτ]. The nature of the object Σαβ is explained in section VI, Appendix II in reference3 and also 12 section VI. The object Σαβ is basically built with the Pauli matrices and the identity two by two. The tetrad vectors E ρ α inside the expression Tr[Σαβ E ρ α E λ β ∗ξ σ ρ ∗ξλτ Aτ] can be chosen to be the tetrad vectors that we already know from manuscript1 and section II for electromagnetic fields in curved space-times. Following the same notation as in1 and equations (16-19), we call E ρ (o) = U ρ, E ρ (1) = V ρ, E ρ (2) = Zρ, E ρ (3) = W ρ. The electromagnetic extremal tensor ξρσ, and its dual ∗ξρσ are also already known from reference1 and section II. We make use of the already defined tetrads built for Einstein-Maxwell spacetimes in order to enable the use of the object Σαβ which is key in our construction. The key lies in the translating property of this object between SU(2) local gauge transformations S and local Lorentz transformations Λα γ, see reference3 and notice from section VI that S−1 Σαβ S = Λα γ Λβ δ Σγδ. We would like to study one more property of these chosen gauge vector fields Xσ = Y σ = Tr[Σαβ E ρ α E λ β ∗ξ σ ρ ∗ξλτ Aτ]. The structure E [ρ α E λ] β ∗ξρσ ∗ξλτ is invariant under U(1) local gauge transformations. The electromagnetic extremal field property1,8, ξµσ ∗ξµτ = 0 is useful in the contractions E ρ α E λ β ∗ξ σ ρ ∗ξλτ. Because it is leaving in the contraction of E ρ α E λ β with ∗ξρσ ∗ξλτ only the antisymmetric object E [ρ 2 E λ] 3 , which is locally U(1) gauge invariant. Precisely because of property (5). Let us remember that the object Σαβ is antisymmetric and contracted with the electromagnetic tetrads as Σαβ E ρ α E λ β inside the local gauge vector, see section III. In the first paper1 we proved that the group U(1) is isomorphic to the local group of boosts plus discrete transformations on blade one that we called LB1. The same group U(1) is isomorphic to SO(2), that we also called LB2 since it is related to local tetrad rotations on blade two. This is a fundamental result in group theory alone, let alone in physics. We proved in references3,4 that the local group of SU(2) gauge transformations is isomorphic to the tensor product of three LB1 groups. Second, the local group of SU(2) gauge trans- formations is isomorphic to the tensor product of three LB2 or SO(2) groups. All the local gauge groups of the Standard Model have been proven to be isomorphic to local groups of tetrad transformations in four-dimensional Lorentzian curved or flat spacetimes. The no-go theorems of the sixties20–22 have been proven to be incorrect. Not because of their internal logic but for the assumptions made at the outset of these theorems. We read in reference22 “S (the scattering matrix) is said to be Lorentz-invariant if it possesses a sym- metry group locally isomorphic to the Poincar`e group P.. . . A symmetry transformation is said to be an internal symmetry transformation if it commutes with P. This implies that it 13 acts only on particle-type indices, and has no matrix elements between particles of different four-momentum or different spin. A group composed of such transformations is called an internal symmetry group”. The local electromagnetic gauge group of transformations U(1) has been proven to be isomorphic to local groups of tetrad transformations LB1 and LB2 on both the orthogonal planes one and two. These local groups of transformations LB1 and LB2= SO(2) are composed of Lorentz transformations except in LB1 for an improper discrete reflection, see reference1. Therefore the local Lorentz group of spacetime transfor- mations cannot commute with LB1 or LB2 since Lorentz transformations on a local plane do not necessarily commute with Lorentz transformations on another local plane at the same point in spacetime. The local internal groups of transformations do not necessarily commute with the local Lorentz transformations, because they are isomorphic to local groups of tetrad transformations. Analogous results were proven for the non-Abelian cases SU(2)×U(1) and SU(3) × SU(2) × U(1) Yang-Mills, see references3–5. IV. SPACETIME FEYNMAN CALCULUS It is of fundamental importance to understand the geometry of spacetime when particle interactions are taking place. Using the accumulated analysis for different kinds of gauge theories carried out in1–5,11,23, we will show explicitly how to assign to different Feynman diagrams in weakly interacting processes, different sets of tetrad vectors. The massive weak interactions boson mediators have an associated gravitational field as well as electrons, muons and neutrinos and even though these gravitational fields might be weak, they possess the necessary geometrical structure that enables the local symmetries of the standard model to be realized in an explicit fashion as it was analyzed in previous manuscripts1–5. We judge relevant to understand that the transformations of colliding particles into the same or other emerging particles can occur through the local transformation properties of gravitational fields which will differ for different settings even though they exhibit analogous manifest local invariance under the internal symmetries of the standard model. We remind ourselves that in manuscripts1–5,11,23 all the local internal gauge symmetries of the standard model have been proved isomorphic to local groups of tetrad transformations in four-dimensional curved Lorentz spacetimes. However, in order not to get foundational contradictions with quantum field theories at this stage of analysis we will assume that the kind of tetrads introduced in 14 sections II and III are defined in Minkowski spacetime. This choice of spacetime involves no contradiction since the basic operation of local duality field transformation can be performed in flat Minkowski spacetimes as well. We can define the tetrads in Minkowski spacetime and prove that these tetrads define locally two orthogonal planes of stress-energy diagonalization. These geometrical structures exist not only in curved spacetimes but also in flat spacetimes. In essence local tetrad gauge states of spacetime would represent different microparticles in their spacetime manifestation, that can transform through local gauge transformations into other microparticles or other local tetrad gauge states of spacetime. The notation is a replica of the notation in24, so we refer the reader to this reference. We also refer the reader to24–28 for abundant literature citation, specially in the field of particle physics. A. Weak interactions The existence of mediators as it was shown in1–4 is irreplaceable as far as we are concerned with the construction of these kind of tetrads in weak interactions. In this case it is the existence of local SU(2) “extremal” fields that allow us to build tetrads in weak processes. There are interactions involving the massive mediators where any virtual effect is negligible. For instance the W −as the mediator in inverse Muon decay. The Zo mediator in elastic Neutrino-Electron scattering. This is important because the existence of virtual processes would require a different approach. We will analyze these processes through the use of appropriately defined tetrads. 1. Inverse Muon decay Let us consider the process e−(1) + νµ(2) →νe(3) + µ−(4). There are two vertices. We invoke then the existence of the SU(2) tetrads introduced in3,4, specially the general tetrad structure presented in the section “Gauge geometry” and also section III. We called these general SU(2) tetrad vectors Sµ (1) · · · Sµ (4) and the structure of these latter tetrads was introduced in equations (46-49) in the section III dedicated to the overview of these objects. There was a remaining freedom in the choice of two vector fields, Xρ and Y ρ. It is exactly through an appropriate choice for these two vector fields that we can identify a tetrad set for each vertex at the same spacetime point. In addition to the previously introduced notation 15 and structures, let us call the non-null electromagnetic tetrads, following again the notation in references1–4 and sections II-III, E ρ α . There are local non-null electromagnetic tetrads in both vertices at the same spacetime point since in one vertex we have an electron and in the other vertex a muon. The indices α and β are reserved for locally inertial coordinate systems. Then, we can proceed to define for the first vertex the two gauge vector fields, Xρ = Y ρ = u(3) γα (1 −γ5) u(1) Eρ α . (51) We are basically associating to the first vertex a current24 jα −= u(3) γα (1 −γ5) u(1). This current describes the process e−→νe + W −. For the second vertex we can choose for instance, Xρ = Y ρ = u(4) γα (1 −γ5) u(2) Eρ α . (52) Again, we are assigning to the second vertex a current24 jα −= u(4) γα (1 −γ5) u(2) describing the process νµ + W −→µ−. It is evident from all the analysis in3,4 that the geometrical transition from vertex one to vertex two and vice-versa, is an SU(2) generated local gauge transformation. That is only allowed through the existence of massive mediators. Following the ideas in3 we can start by choosing for instance, Xρ = Y ρ = Tr[Σαβ E σ α E λ β ∗ξ ρ σ ∗ξλτ Aτ] (53) The Σαβ objects are analyzed in appendix II in reference3 and sections III-VI in this present paper, ξσρ are the electromagnetic “extremal” fields introduced in reference1 and section II in this present paper, etc. Through a local SU(2) gauge transformation on blade one, we can “rotate” the normalized version of vectors (46-47) on blade one, until Xρ in (53) becomes Xρ in (51). We can also “rotate” the normalized version of vectors (48-49) on blade two, until Y ρ in (53) becomes Y ρ in (51). Let us remember that the tetrad skeletons are locally gauge invariant under U(1)×SU(2). This is just a sample of local gauge transforma- tions of the normalized version of vectors (46-49). We proved in references1,3,4 that the maps that both in the local plane one and two send the tetrad vectors that generate these planes from an initial gauge vector into another gauge vector are injective and surjective maps. The 16 map that assigns local groups of gauge transformations into local groups of tetrad trans- formations on either local orthogonal planes of stress-energy symmetry are isomorphisms. Again we can start with (53) and appropriately “rotate” the tetrad vectors on blade one, until they become the ones corresponding to Xρ given in (52). Similar for Y ρ in this second case (52). It is evident then that (51) and (52) are connected through local SU(2) gauge transformations on blades one and two, that in turn, leave invariant the metric tensor. That is, these local gauge transformations exist because of transitivity. The local groups of gauge transformations have been proven to be isomorphic to the local groups of tetrad transfor- mations on the local orthogonal planes of symmetry. Given two sets of tetrads on the local plane one, then there is a local gauge transformation that sends one set into the other and vice-versa. Similar in the local othogonal plane two. These local orthogonal planes we re- mind ourselves are the local planes of covariant diagonalization of the stress-energy tensor, for the Abelian case and also for the non-Abelian case, see references1–5,11. We can also notice that the vector fields (51-52) are not strictly vectors but pseudovectors under local parity transformations, see reference24. But the metric tensor remains unaltered under these local parity transformations. It is as if the geometry associated to the e−(1) and νe(3) can be transformed through the existence of a massive mediator into the geometry associated to the νµ(2) and µ−(4) without altering the spacetime. The vertices are local tetrad gauge states of the same flat spacetime. 2. Elastic Neutrino-Electron scattering Now, we are considering neutral currents. In particular the interaction process νµ(1) + e−(2) →νµ(3) + e−(4). As before we can assign to the first vertex the choice, Xρ = Y ρ = u(3) γα (1 −γ5) u(1) Zρ α . (54) The current jα −= u(3) γα (1 −γ5) u(1), represents the process νµ(1) →νµ(3) + Zo. The tetrad Zρ α is built as follows. Following again the notation in24 we know we have available a local vector field Zµ that results from the Weinberg rotation through the angle θw, in addition to the standard electromagnetic local vector field Aµ. The rotation can be written, 17 Aµ = Bµ cos θw + W 3 µ sin θw (55) Zµ = −Bµ sin θw + W 3 µ cos θw . (56) The local tetrad field Zρ α is present in both vertices at the same spacetime point, since the massive neutral mediator is present in both local vertices and Zρ α is a local tetrad associated to this flat spacetime. The electro-weak mixing involves a weak isotriplet of intermediate vector bosons W coupled to three weak isospin currents, and an isosinglet intermediate vector boson Bµ coupled to the weak hypercharge current. If we follow all the steps in reference1 and the method developed in section II in the present paper, we can build out of the curl Zµ,ν −Zν,µ a new tetrad. This auxiliary local tetrad Zρ α present in the definition of gauge-vectors (54) would once more in its own construction involve the choice of two gauge- vector fields, see reference1. We can choose for instance Zµ and Bµ as these two vector fields needed in turn in the definition and construction of the local auxiliary tetrad Zρ α. Then, the tetrad that couples to the neutrino current is associated to the massive Zo. The second vertex could be assigned a choice, Xρ = Y ρ = u(4) γα (cV −cA γ5) u(2) Eρ α , (57) representing e−(2) + Zo →e−(4). For this particular interaction cV = −1 2 + 2 sin θw, and cA = −1 2, where θw is again the Weinberg angle24. The massive mediator allows again for a SU(2) local gauge transformation between the tetrad vectors chosen for vertex one and the ones chosen for vertex two at the same spacetime point. The neutral current works as a geometry mediator between the scattered particles keeping the spacetime invariant at the same spacetime point. The vertices function as local tetrad gauge states of the same spacetime. V. CONCLUSIONS We have explored the possibility of assigning tetrads to Feynman diagrams. In interac- tions where we can assume the existence of particles with associated local fields like Abelian or non-Abelian gauge fields. At no step of analysis we have specified the tetrad themselves 18 making all these geometrical properties outstanding since they can be put forward with all generality without the need to study case by case as long as the gauge fields are non-null for example. We can think the massive weak interactions boson mediators to have associ- ated gravitational fields as well as electrons, muons and neutrinos do and even though these gravitational fields might be weak, they possess the necessary geometrical structure that enables the local symmetries of the standard model to be realized in an explicit fashion as it was studied thoroughly in previous manuscripts1–5. However we decided to consider a background flat spacetime since gravitational fields would entail foundational contradictions with standard quantum field theories. New concepts would have to be introduced and we do not want to do this at this stage in analysis. We deem fundamental to understand that the transformations of colliding particles into the same or other emerging particles through elastic or inelastic processes can occur through the local transformation properties of space- time which will differ for different settings through the new notion of tetrad gauge states of spacetime. Having done this explicitly, a number of questions naturally arise. We want these concluding remarks to be a summary of these open questions. • The order of the formulations24−31. We have worked out the low-order diagrams. Then, what happens with higher order diagrams ?. The tetrads admit the choice of two gauge vector fields Xρ and Y ρ, and the higher orders are additive exactly as in the quantum theories in these vector fields available as a choice. But there is more to understand. Do the higher order diagrams represent contributions coming from higher order perturbative theories of a full relativistic formulation of these interactions23 in- volving perturbations of the electromagnetic field, etc, for instance, or just perturba- tive expansions in the gauge vector fields?. As an example of this kind of situation we might want to qualitatively consider the quark decays b →s γ, see chapter XIV-7 in reference32 for instance. There are several possibilities but there are certainly higher order contributions to these kind of processes. Let us focus on the contribution that involves a W −boson mediator in rare decays. There is the b →W −+ c vertex and the subsequent c + W −→s vertex for example. Each one of these has an associated current vector, let us call them for short jα [bc] and jα [cs]. Both vertices have particles with electric charge so there is at each vertex an associated electromagnetic tetrad Eρ [bc]α and Eρ [cs]α respectively. Then we can associate to each vertex in this higher order diagram 19 gauge vectors Xρ [bc] = Y ρ [bc] = jα [bc] Eρ [bc]α and Xρ [cs] = Y ρ [cs] = jα [cs] Eρ [cs]α. The jα [bc] Eρ [bc]α contribution can then be added to the non-Abelian tetrad gauge vectors for vertex [bc] and similar for jα [cs] Eρ [cs]α at the vertex [cs]. In this contribution there is also a photon involved. This photon emission will change the stress-energy tensor and therefore will be associated to the perturbed extremal field which is in turn a local duality rotation of the perturbed electromagnetic field. These perturbation in the extremal field will in turn perturb the tetrad skeletons. Therefore, the vertices in the diagram will be associated to tetrad gauge states of the spacetime and the photon emission to tetrad skeleton perturbations. The b →s γ decays might involve contributions with top quarks and the analysis will be similar, the b →s γ decays could include a loop of the kind cc for example and we will proceed similarly as well. We will add to the gauge vectors associated to the corresponding vertex, higher and higher contributions with a corresponding expansion parameter. The currents at the corresponding vertex are the key objects necessary to produce gauge vectors associated to vertices. There could be that both, the perturbations in the skeletons and the perturbations in gauge vectors proceed simultaneously as in the b →s γ case. On one hand the local orthogonal planes of symmetry will tilt and on the other hand the tetrad vectors that span these local planes will rotate inside them. • A point outside the scope of this manuscript. The issue of “gauge gravity”. Since in references1,3 it was explicitly proved that the Abelian and non-Abelian gauge theories represent special symmetries of the gravitational field, we can ask about the meaning of “gauge gravity”. The electromagnetic field is associated to the LB1 and LB2 symmetries of the gravitational field, see reference1. The SU(2) group of local gauge transformations is associated to the symmetries of the tensor product of three LB1 or three LB2 groups of transformations, see references3,4. Analogous for SU(3), see reference5. Then, it is not obvious to understand what is the meaning of a statement like, “casting the theory of gravity into a Yang-Mills formulation”. We have reason to believe that we can truly cast the theory of gravity into a Yang-Mills formulation but as said, it is not obvious and requires a whole new work. • Another point outside the scope of this manuscript. The issue of quantum gravity. It has been proved explicitly that metric tensors can be associated with microparticle in- 20 teractions. These constructions are possible by means of non-null Abelian tetrad fields, and by means of SU(2) local non-Abelian tetrad fields. Perturbative formulations of these tetrad field structures as in reference23 can take care of quantum fluctuations as well. The quantum is connected through the existence of these tetrad fields to grav- ity. A treatment for a curved spacetime where a gravitational field is present would entail several new notions that we would like not to introduce at this stage of analysis. Nonetheless we can advance that in curved spacetimes there will be local orthogonal planes one and two and the isomorphisms between local gauge groups Abelian and non-Abelian and local groups of tetrad transformations LB1 and LB2 would also ap- ply in a similar fashion as to flat spacetimes1–5. The quantization will be reflected through interactions that alter the local plane-symmetry structure by tilting these local planes of symmetry. Continuously for continuous perturbations and discretely in quantum settings situations. The main idea behind these quantum formulations in curved spacetimes is that the local planes of stress-energy diagonalization are al- ways and during quantum interactions local orthogonal planes of symmetry, because quantum problems are basically confronted through perturbations analysis and these perturbations lead to continuous or discrete evolution of the local planes of symmetry either in flat or curved spacetimes. Continuous or discrete tilt symmetry evolution, see reference23. We also established that during interactions of microparticles the tetrad vectors that span the local orthogonal planes one and two might rotate inside them. The tetrads of different nature that we were able to build in1–5 and the present work, establish a link between the standard locally inertial flat field environment of the tra- ditional standard quantum theories in weak interactions on one hand, and the curved spacetime of gravity on the other hand. The point is the following, why are we using in quantum gravity similar conceptual foundations to theories that are not formulated in curved spacetimes but flat spacetimes ?. • Another point outside the limited scope of this paper. The issue of the Higgs mecha- nism. It is a device conceived in its relationship with the nature of mass, for instance of the mass mediators. In the present tetrad environment we can ask if it is necessary, or the mass comes into existence due to the presence of gravity ?. Is it possible that the local Higgs field and its quantum fluctuations are related to the perturbations of 21 the local gravitational weak field scalar approximation on a flat Minkowski background associated to the asymptotically curved spacetimes that in turn we can associate to elementary microparticles ?. • The issue of symmetry-breaking. It was proved in the general manuscripts1–4 that the gravitational field when built with tetrads along the lines of expressions (46-49) are manifestly invariant under local electromagnetic gauge transformations, and local SU(2) gauge transformations as well. But when assigning a tetrad set to a vertex in a low-energy weak process diagram in Minkowski spacetime, we make a particular choice for the two gauge vectors Xρ and Y ρ. For instance, through associated currents we choose a particular gauge, and a different one for each vertex, like in inverse Muon decay or elastic Neutrino-Electron scattering. Then, we wonder if this gauge fixing procedure could be the geometrical form of the standard symmetry-breaking process. Hereby, we can see that it is the tetrad fields that bridge the two gauges associated to the two vertices, through a local SU(2) gauge transformation, that in turn, leaves invariant the metric tensor. VI. APPENDIX I This appendix is introducing the object Σαβ that according to the matrix definitions introduced in the references is Hermitic. The use of this object in the construction of tetrads in section III enables the local SU(2) gauge transformations S, to get in turn transformed into purely geometrical transformations. That is, local rotations of the U(1) electromagnetic tetrads. The object σαβ is defined6,7 as σαβ = σα + σβ −−σβ + σα −. The object σα ± arises when building the Weyl representation for left handed and right handed spinors. According to reference7, it is defined as σα ± = (1, ±σi), where σi are the Pauli matrices for i = 1 · · · 3. Under the (1 2, 0) and (0, 1 2) spinor representations of the Lorentz group this object transforms as, S−1 (1/2) σα ± S(1/2) = Λα γ σγ ± . (58) Equation (58) means that under the spinor representation of the Lorentz group, σα ± transform as vectors. In (58), the matrices S(1/2) are local objects, as well as7 Λα γ. The 22 SU(2) elements can be considered to belong to the Weyl spinor representation of the Lorentz group. Since the group SU(2) is homomorphic to SO(3), they just represent local space rotations. It is also possible to define the object σ†αβ = σα −σβ + −σβ −σα + in a similar fashion. Therefore, we can write, ı  σαβ + σ†αβ =      0 if α = 0 and β = i 4 ϵijk σk if α = i and β = j , σαβ −σ†αβ =      −4 σi if α = 0 and β = i 0 if α = i and β = j . We then may call Σαβ ROT = ı  σαβ + σ†αβ , and Σαβ BOOST = ı  σαβ −σ†αβ and a possible choice for the object Σαβ could be for instance Σαβ = Σαβ ROT +Σαβ BOOST. This is a good choice when we consider proper Lorentz transformations of the tetrad vectors nested within the structure of the gauge vectors Xµ and Y µ. For spatial rotations of the U(1) electromagnetic tetrad vectors which in turn are nested within the structure of the two gauge vectors Xµ and Y µ, as is the case under study in section III, we can simply consider Σαβ = Σαβ ROT. These possible choices also make sure the Hermiticity of gauge vectors. Since when defining the gauge vectors Xµ and Y µ we are taking the trace, then Xµ and Y µ are real. REFERENCES 1A. Garat, Tetrads in geometrodynamics, J. Math. Phys. 46, 102502 (2005). A. Garat, Erratum: Tetrads in geometrodynamics, J. Math. Phys. 55, 019902 (2014). 2A. Garat, New tetrads in Riemannian geometry and new ensuing results in group theory, gauge theory and fundamental physics in particle physics, general relativity and astro- physics, Int. J. Mod. Phys. Conf. Ser., Vol. 45, (2017), 1760004. 3A. Garat, Tetrads in Yang-Mills geometrodynamics, Gravitation and Cosmology, (2014) Vol. 20 No. 1, pp. 116-126. Pleiades Publishing Ltd. arXiv:gr-qc/0602049. 4A. Garat, The new electromagnetic tetrads, infinite tetrad nesting and the non-trivial emer- gence of complex numbers in real theories of gravitation, Int. J. Geom. Methods Mod. Phys., Vol. 14, No. 9 (2017), 1750132. 5A. Garat, Tetrads in SU(3) × SU(2) × U(1) Yang-Mills geometrodynamics, Int. J. Geom. Methods Mod. Phys., Vol. 15 no. 3 (2018), 1850045. arXiv:1207.0912. 23 6M. Kaku, Quantum Field Theory: A Modern Introduction (Oxford University Press, 1993). 7L. ´Alvarez-Gaum´e and M. A. V´azquez-Mozo, Introductory Lectures on Quantum Field Theory (arXiv:hep-th/0510040). 8C. Misner and J. A. Wheeler, Classical physics as geometry, Annals of Physics 2, 525 (1957). 9N. Cabibbo and E. Ferrari, Nuovo Cim. 23, 1147 (1962). 10J. A. Schouten, Ricci Calculus: An Introduction to Tensor Calculus and Its Geometrical Applications (Springer, Berlin, 1954). 11A. Garat, “Gauge invariant method for maximum simplification of the field strength in non-Abelian Yang-Mills theories”, Int. J. Geom. Methods Mod. Phys., Vol. 12, No. 10 (2015), 1550104. arXiv:1306.2174. 12R. Gilmore, Lie Groups, Physics and Geometry (Cambridge University Press, 2008). 13R. Gilmore, Lie Groups, Lie Algebras, and Some of Their Applications (John Wiley & Sons, 1974). 14J. Stillwell, Naive Lie Theory (Springer Science + Business Media, L.L.C., 2010). 15N. Carter, Visual Group Theory (The Mathematical Association of America, Inc, 2009). 16M. Carmeli, Classical Fields: General Relativity and Gauge Theory (J. Wiley & Sons, New York, 1982). 17C. N. Yang and R. L. Mills, Phys. Rev. 96, 191 (1954). 18R. Utiyama, Phys. Rev., 101, 1597 (1956). 19T. W. B. Kibble, J. Math. Phys., 2, 212 (1961). 20S. Weinberg, Comments on relativistic supermultiplet theories, Phys. Rev. 139, B597 (1965). 21L. O’Raifeartagh, Lorentz invariance and internal symmetry, Phys. Rev. 139, B1052 (1965). 22S. Coleman and J. Mandula, All possible symmetries of the S matrix, Phys. Rev. 159, N5 1251 (1967). 23A. Garat, “Dynamical symmetry breaking in geometrodynamics”, TMF, 195:2, (2018), 313-328; Theoret. and Math. Phys., 195:2, (2018), 764-776. arXiv:1306.0602. 24D. Griffiths , Introduction to elementary particles (John Wiley & Sons, Inc. , 1987). 25T. P. Cheng and L. F. Li , Gauge Theory of Elementary Particle Physics (Oxford University Press, 1989). 24 26W. Greiner and B. Mueller, Gauge Theory of Weak Interactions (Springer Verlag Gmbh, 1996). 27W. Greiner and B. Mueller, Quantum Mechanics, Symmetries (Springer Verlag, 1989). 28G. ,t Hooft, Renormalization of Gauge Theories (Lecture notes Erice, 1998). 29M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory (Perseus Books Publishing L.L.C., 1995). 30M. Srednicki, Quantum Field Theory (Cambridge University Press, New York, 2007). 31R. Jackiw, Fifty Years of Yang-Mills Theory and my Contribution to it (arXiv:physics/0403109, 2004). 32J. F. Donoghue, E. Golowich and B. R. Holstein, Dynamics of the Standard Model (Cam- bridge monographs on particle physics, nuclear physics and cosmology) (Cambridge Uni- versity Press, Cambridge, 2014). 25
Tetrads in low-energy weak interactions Alcides Garat11 1. Instituto de F ́ısica, Facultad de Ciencias, Igu ́a 4225, esq. Mataojo, Montevideo, Uruguay.a) (Dated: June 15th, 2006) Tetrads are introduced in order to study the relationship between tetrad gauge states of spacetime and particle interactions, specially in weak processes at low energy. Through several examples like inverse Muon decay, elastic Neutrino-Electron scattering, it is explicitly shown how to assign to each vertex of the corresponding low-order Feynman diagram in a weak interaction, a particular set of tetrad vectors. The relationship between the tetrads associated to different vertices is exhibited explicitly to be generated by a SU(2) local tetrad gauge transformation. We are establishing a direct link between standard gauge and tetrad gauge states of spacetime using the quantum field theories perturbative formulations. 1 arXiv:gr-qc/0606075v2 16 Oct 2025 I. INTRODUCTION We are trying to understand the underlying symmetries of different field architectures, by showing explicitly the local geometrical structure of different kinds of groups of transformations of the Standard Model, specifically in their relationship to spacetime through specially defined tetrads. In references1,2 we studied the local geometrical meaning of electromagnetic local gauge transformations. In references3-5 we studied the local geometrical meaning of SU(2) × U(1) and SU(3) × SU(2) × U(1) local groups of gauge transformations. Isomorphisms and homomorphisms were found that relate the standard model groups of local gauge transformations with new groups of local geometrical transformations in four-dimensional curved Lorentz spacetimes. These relationships can be explicitly displayed through the use of appropriately defined tetrads. It is the purpose of this work, to make use of already defined tetrads of different kinds1-5, in order to briefly show in an explicit way, the invariance of the metric tensor associated to a low-energy weak interaction, under different kinds of transformations. For instance, the invariance under electromagnetic local gauge transformations, the invariance under SU(2) local gauge transformations, the invariance under local gauge transformations of the spinor fields6,7, etc. Since we are trying to "geometrize" the local gauge theories, it is interesting in its own right, to understand as well, the geometries that involve the standard fields associated with microparticle interactions. To that end, we introduce what we call "tetrad Feynman calculus". We are able to explicitly show how to build a tetrad associated to a Feynman low-order diagram in low-energy weak interactions. The massive weak interactions bosons must have an associated gravitational field as electrons, muons and neutrinos also do have and even though these gravitational fields might be weak, they possess the necessary geometrical structure that enables the local symmetries of the standard model to be realized in an explicit fashion as it was analyzed in previous manuscripts1-5. In high energy interactions where virtual phenomena becomes relevant, a different approach is needed as we will discuss later on. We proceed to show how to assign a tetrad to each vertex, for instance in inverse Muon decay, and elastic Neutrino-Electron scattering. In the first two sections II and III we deal as an introduction with these tetrads defined in a general curved four-dimensional Lorentzian spacetime. These two sections and because of the construction nature of the new tetrads can also be identically developed in flat Minkowski spacetimes. In section IV we will limit our analysis to flat spacetimes as an 2 example compatible with the foundations of quantum field theories. As a general thought we strongly believe that the construction of tetrad fields, and metric tensors that explicitly display the local symmetries of microparticle interactions, are hinting us over a possible relationship or link, between General Relativity and Quantum Theories. In both subsections IV A 1-IV A 2 of section IV we also demonstrate that it is possible to transform the tetrad associated to a vertex in a particular diagram to the tetrad assigned to another vertex in the same Feynman diagram through a local SU(2) tetrad gauge transformation at the same spacetime point. It is clear that we envisage the spacetime of the interaction process as a common spacetime for all participating objects in the region of interaction even though in this first manuscript on the subject we are addressing the processes on a flat spacetime background. Throughout the paper we use the conventions of references1,3,8. In particular we use a metric with sign conventions -+++. The only difference in notation with8 will be that we will call our geometrized electromagnetic potential Aα, where fμν = Aν;μ-Aμ;ν is the geometrized electromagnetic field fμν = (G1/2/c2)Fμν. Analogously, f k μν are the geometrized Yang-Mills field components, f k μν = (G1/2/c2) F k μν. II. OVERVIEW OF NEW TETRADS AND SYMMETRIES FOR THE ABELIAN CASE The new tetrads have been designed on the basis of existence of antisymmetric second rank tensors. In the particular case were Abelian non-null electromagnetic fields are present in spacetime in addition to curvature or a gravitational field, or even when spacetime is flat the new method involves a local duality rotation of these gauge fields. We then proceed to introduce at every point in spacetime a duality rotation by an angle -α that transforms a non-null electromagnetic field fμν into an extremal field ξμν, ξμν = e-∗αfμν = cos(α) fμν -sin(α) ∗fμν, (1) where ∗fμν = 1 2 εμνστ f στ is the dual tensor of fμν. The local scalar α is the complexion of the electromagnetic field and it is a local gauge invariant quantity. Extremal fields are essentially electric fields and they satisfy, ξμν ∗ξμν = 0 . (2) 3 Equation (2) is imposed as a condition on (1) and then we find the expression for the complexion that results in tan(2α) = -fμν ∗f μν/fλρ f λρ. We can also prove just using identities valid in four-dimensional Lorentzian spacetimes that the condition (2) can be rewritten as ξαμ ∗ξμν = 0. In order to prove this equivalence between conditions we will need an identity8 valid for two second rank antisymmetric fields in a four-dimensional Lorentzian spacetime. This identity is given by, Aμα Bνα -∗Bμα ∗Aνα = 1 2 δ ν μ Aαβ Bαβ , (3) When this identity (3) is considered for the case Aμα = ξμα and Bνα = ξνα we obtain, ξμα ξνα -∗ξμα ∗ξνα = 1 2 δ ν μ Q , (4) where Q = ξμν ξμν = - q TμνT μν according to equations (39) in reference8. Q is assumed not to be zero, because we are dealing with non-null electromagnetic fields. It can be proved that condition (2) plus the general identity (3), when applied to the case Aμα = ξμα and Bνα = ∗ξνα provides the equivalent condition to (2), ξαμ ∗ξμν = 0 , (5) which is equation (64) in reference8. In geometrodynamics, the Einstein-Maxwell equations, Rμν = fμλ f λ ν + ∗fμλ ∗f λ ν (6) f μν ;ν = 0 (7) ∗f μν ;ν = 0 , (8) reveal the existence of two potential vector fields9 Aν and ∗Aν, fμν = Aν;μ -Aμ;ν (9) ∗fμν = ∗Aν;μ -∗Aμ;ν . (10) 4 The symbol ";′′ stands for covariant derivative with respect to the metric tensor gμν and the star in ∗Aν is just nomenclature, not the dual operator, with the meaning that ∗Aν;μ = (∗Aν);μ. The duality rotation given by equation (59) in8 fμν = ξμν cos α+∗ξμν sin α, enables us to reexpress the stress-energy tensor in terms of the extremal field, Tμν = ξμλ ξ λ ν + ∗ξμλ ∗ξ λ ν . (11) It is now the right time to introduce the new tetrads that will diagonalize locally and covariantly the stress-energy tensor (11). U α = ξαλ ξρλ Xρ / ( q -Q/2 q Xμ ξμσ ξνσ Xν ) (12) V α = ξαλ Xλ / ( q Xμ ξμσ ξνσ Xν ) (13) Zα = ∗ξαλ Yλ / ( q Yμ ∗ξμσ ∗ξνσY ν ) (14) W α = ∗ξαλ ∗ξρλ Y ρ / ( q -Q/2 q Yμ ∗ξμσ ∗ξνσY ν ) . (15) With all these elements put together, particularly equations (4-5) it becomes trivial to prove that the tetrad (12-15) is orthonormal and diagonalizes1 the stress-energy tensor (11). The vectors (12-13) with eigenvalue Q/2 and the vectors (14-15) with eigenvalue -Q/2. At every point in spacetime the timelike and one spacelike vectors that for some geometries like Reissner-Nordstr ̈om are (12-13) generate a plane that we called blade one1,10. The other two spacelike vectors (14-15) generate a local orthogonal plane that we called blade two. These vectors are constructed with the local extremal field8 (1), its dual, the very metric tensor and a pair of vector fields Xα and Y α that represent a generic gauge choice as long as the tetrad vectors do not become trivial. We are aware then, that we still have to introduce the vectors Xμ and Y μ. Let us introduce some names. The tetrad vectors have two essential structure components. For instance in vector U α there are two main structures. First, the skeleton, in this case ξαλ ξρλ, and second, the gauge vector Xρ. These do not include the normalization factor 1/ ( q -Q/2 q Xμ ξμσ ξνσ Xν ). The gauge vectors must be anything that do not make the tetrad vectors trivial and we mean by this that the tetrad (12-15) diagonalizes the stress-energy tensor for any non-trivial gauge vectors Xμ and Y μ. It is then possible to make different choices for Xμ and Y μ. The potential vector fields introduced in equations (9-10) represent a possible choice in geometrodynamics for the vectors Xα = Aα 5 and Y α = ∗Aα. We do not mean that the two vector fields have independence from each other, it is just a convenient choice. With this particular choice for the two gauge vector fields we can define then, U α = ξαλ ξρλ Aρ / ( q -Q/2 q Aμ ξμσ ξνσ Aν ) (16) V α = ξαλ Aλ / ( q Aμ ξμσ ξνσ Aν ) (17) Zα = ∗ξαλ ∗Aλ / ( q ∗Aμ ∗ξμσ ∗ξνσ ∗Aν ) (18) W α = ∗ξαλ ∗ξρλ ∗Aρ / ( q -Q/2 q ∗Aμ ∗ξμσ ∗ξνσ ∗Aν ) , (19) where the four vectors (16-19) satisfy the following algebraic properties, -U α Uα = V α Vα = Zα Zα = W α Wα = 1 . (20) Using the equations (4-5) it is simple to prove that (16-19) are orthogonal vectors. We think then about local electromagnetic gauge transformations. We notice that we can interpret the independent local gauge transformations of the vector potentials introduced in equations (9-10), that is, Aα →Aα + Λ,α and ∗Aα →∗Aα + ∗Λ,α as new choices for the two gauge vector fields Xμ and Y μ. The first local gauge transformation leaves fμν invariant and the second one leaves ∗fμν invariant, as long as the functions Λ and ∗Λ are scalars. According to Schouten, for non-null electromagnetic fields in Einstein-Maxwell spacetimes there is a two-bladed or two-plane structure10 at every point in spacetime. These blades are the planes determined by the pairs (U α, V α) and (Zα, W α). In manuscript1 it was demonstrated that the transformation Aα →Aα + Λ,α generates a Lorentz transformation (except for one discrete reflection) of the tetrad vectors (U α, V α) into ( ̃U α, ̃V α) in such a way that these "rotated" vectors ( ̃U α, ̃V α) remain in the plane or blade one generated by (U α, V α). In the same reference1 it was also proven that the transformation ∗Aα →∗Aα + ∗Λ,α generates a "rotation" of the tetrad vectors (Zα, W α) into ( ̃Zα, ̃W α) such that these "rotated" vectors ( ̃Zα, ̃W α) remain in the plane or blade two generated by (Zα, W α). In manuscript1 it was demonstrated that the group of local electromagnetic gauge transformations is isomorphic to the group LB1 of boosts plus two discrete transformations on blade one, and independently to LB2, the group of spatial rotations on blade two. Equations like 6 U α (φ) = cosh(φ) U α + sinh(φ) V α (21) V α (φ) = sinh(φ) U α + cosh(φ) V α , (22) on the local plane one represent a local electromagnetic gauge transformation of the vectors (U α, V α). The transformation of the two vectors (U α, V α) on blade one, given in (16-17) by the "angle" φ in (21-22) is a proper transformation, that is, a boost. For discrete improper transformations the result follows the same lines, see reference1. Analogously on the local plane two, Zα (φ) = cos(φ) Zα -sin(φ) W α (23) W α (φ) = sin(φ) Zα + cos(φ) W α . (24) Equations (23-24) represent a local electromagnetic gauge transformation of the vectors (Zα, W α), the transformation of the two tetrad vectors (Zα, W α) on blade two, given in (1819), by the "angle" φ. It is straightforward to check that the equalities U[α (φ) V β] (φ) = U [α V β] and Z[α (φ) W β] (φ) = Z[α W β] are true. These equalities mean that these antisymmetric tetrad objects are gauge invariant. In the Abelian case it was proved that the local group of electromagnetic gauge transformations is isomorphic to both the local groups LB1 and LB2, separately, independently. LB1 on the local plane one is a group composed by the tetrad boosts SO(1, 1) and two different kinds of discrete transformations. One of the discrete transformations is the full inversion or minus the identity two by two. The other discrete transformation is not Lorentzian1 because it is a reflection or flip, a two by two matrix with zeroes in the diagonal and ones off-diagonal. LB2 on plane two is the group of spatial tetrad rotations on this plane, that is SO(2). It is worth reminding ourselves about a point in mathematical language that could be loose or inaccurate but nonetheless immediate to understand. With the purpose of exemplifying we can mention the isomorphisms in the Abelian case1 between the local group of electromagnetic gauge transformations and the local groups of tetrad transformations LB1 and LB2, separately, independently. The isomorphisms strictly speaking are homomorphisms between the local algebra of scalars associated to the local group of electromagnetic gauge transformations and the local groups LB1 and LB2, independently. We know that between the local algebra of scalars and the local group of 7 electromagnetic gauge transformations there is a homomorphism, a homomorphism between the real numbers R that is, the algebra of local scalars associated to the local group of electromagnetic gauge transformations and U(1), that is, the local group of electromagnetic gauge transformations. We give this relationship as implicitly understood even though we talk about isomorphisms between the local group of electromagnetic gauge transformations and the local groups of tetrad transformations LB1 and LB2, separately, independently. We must also stress that the local transformations (21-22) are not imposed local boosts on the vectors that define the local plane one. They are the result of local gauge transformations of the vectors (U α, V α). For example, from reference1 a particular boost after the gauge transformation would look like, ̃V α (1) q - ̃V β (1) ̃V(1)β = (1 + C) q (1 + C)2 -D2 V α (1) q -V β (1) V(1)β + D q (1 + C)2 -D2 V α (2) q V β (2) V(2)β (25) ̃V α (2) q ̃V β (2) ̃V(2)β = D q (1 + C)2 -D2 V α (1) q -V β (1) V(1)β + (1 + C) q (1 + C)2 -D2 V α (2) q V β (2) V(2)β . (26) In equations (25-26) the following notation has been used, C = (-Q/2)V(1)σΛσ/(V(2)βV β (2)), D = (-Q/2)V(2)σ Λσ/(V(1)β V β (1) ) and [(1+C)2 -D2] > 0 must be satisfied. The notation Λα has been used for Λ,α where Λ is the local scalar generating the local gauge transformation. U α = V α (1) q -V β (1) V(1)β and V α = V α (2) q V β (2) V(2)β according to the notation used in paper1, V α (1) = ξαλ ξρλ Aρ (27) V α (2) = q -Q/2 ξαλ Aλ (28) V α (3) = q -Q/2 ∗ξαλ ∗Aλ (29) V α (4) = ∗ξαλ ∗ξρλ ∗Aρ . (30) For the particular case when 1 + C > 0, the transformations (25-26) manifest that an electromagnetic gauge transformation on the vector field Aα →Aα + Λα, that leaves invariant the electromagnetic field fμν, generates a boost transformation on the normalized tetrad vector fields   V α (1) q -V β (1) V(1)β , V α (2) q V β (2) V(2)β  . In this case cosh(φ) = (1+C) √ (1+C)2-D2, see also equation (21). This was just one of the possible cases in LB1. Similar analysis for the vector 8 transformations (23-24) in the local plane two generated by (Zα, W α). See reference1 for the detailed analysis of all possible cases. Back to our main line of work we can write the electromagnetic field in terms of these tetrad vectors, fαβ = -2 q -Q/2 cos α U[α Vβ] + 2 q -Q/2 sin α Z[α Wβ] . (31) Equation (31) entails the maximum simplification in the expression of the electromagnetic field. The true degrees of freedom are the local scalars q -Q/2 and α. We can also present both local degrees of freedom as q -Q/2 cos α and q -Q/2 sin α. The object U[αVβ] remains invariant1 under a "rotation" of the tetrad vectors U α and V α by a scalar angle φ like in (21-22) on blade one. This is the way in which local gauge invariance is manifested explicitly on this local plane. Analogous for discrete transformations on blade one. Similar analysis on blade two. A spatial "rotation" generated by a gauge transformation of the tetrad vectors Zα and W α through an "angle" φ as in (23-24), such that the object Z[α Wβ] remains invariant1. All this formalism clearly provides a technique to maximally simplify the expression for the electromagnetic field strength as in equation (31). It is block diagonalized automatically by the tetrad (16-19). This is not the case for the non-Abelian SU(2) field strength. We do not have an automatic block diagonalization. To this purpose a new algorithm was developed in reference11. In the next section we study the construction of tetrads similar to the Abelian case but in the non-Abelian environment. III. OVERVIEW OF NEW TETRADS AND SYMMETRIES FOR THE NON-ABELIAN CASE For the non-Abelian case we first would like to present a set of examples on how to build extremal fields that are locally invariant under SU(2) gauge transformations. Let us remember that in the Abelian case in the tetrads (12-15) the only dependence on gauge came through the gauge vectors Xμ and Y μ. The tetrad skeletons as it was mentioned previously are local gauge invariants. The advantage of this method is that when we introduce local gauge transformations, the vectors that span the local plane one, do not leave this plane after the transformation and analogous in the local plane two. This fact implies in turn that the metric tensor, be non-flat or flat will not change and will remain invariant. It is then 9 important to show explicitly that we can construct extremal fields invariant under both the Abelian and the non-Abelian gauge transformations. One example could be for instance given by, ζμν = cos β fμν -sin β ∗fμν , (32) Following the Abelian pattern we can define the new complexion β, and to this end we will impose the new SU(2) local invariant condition, Tr[ζμν ∗ζμν] = ζk μν ∗ζkμν = 0 , (33) where the summation convention is applied on the internal index k. We are just using a generalized duality transformation, and defining through it, this new local scalar complexion β. Therefore, the complexion condition (33) is not an additional condition for the field strength. We simply introduced a possible generalization of the definition for the Abelian complexion, found through a new duality transformation as well. Then, we find the local SU(2) invariant complexion β to be, tan(2β) = -f k μν ∗f kμν/f p λρ f pλρ , (34) where once again the summation convention was applied on both k and p. We can also consider gauge covariant derivatives since they will become useful in the ensuing analysis. For example, the gauge covariant derivatives of the three extremal field internal components, ζkμν|ρ = ζkμν ; ρ + g εklp Alρ ζpμν . (35) where εklp is the completely skew-symmetric tensor in three dimensions with ε123 = 1, with g the coupling constant. As in the previous section II the symbol ";" stands for the usual covariant derivative associated with the metric tensor gμν. Next we consider the Einstein-Maxwell-Yang-Mills vacuum field equations, Rμν = T (ym) μν + T (em) μν (36) 10 f μν ;ν = 0 (37) ∗f μν ;ν = 0 (38) f kμν |ν = 0 (39) ∗f kμν |ν = 0 . (40) The field equations (37-38) provide two electromagnetic potentials9, not independent from each other, but due to the symmetry of the equations, available for our construction. Aμ and ∗Aμ are the two electromagnetic potentials, see the comments made about the Abelian potentials and the star nomenclature ∗Aμ in section II. Similar for the two NonAbelian equations (39-40). The Non-Abelian potential Akμ is available for our construction as well12-19. With all these elements put together, we can proceed to define the auxiliary antisymmetric field, ωμν = Tr(∗ζ στ ζμν + ζ στ ∗ζμν) Tr(ζσρ |ρ ∗ζτλ |λ) . (41) This particular antisymmetric auxiliary field in our construction could also be alternatively chosen to be, ωμν = Tr(ζ στ ζμν) Tr(ζσρ |ρ ∗ζτλ |λ) . (42) We can choose this antisymmetric auxiliary field ωμν in many different ways, we just show two examples. It is clear that (41) or (42) are invariant under SU(2) local gauge transformations. Expressions (41) or (42) are nothing but explicit examples among many, see for example reference3. Once our choice is made we perform a local duality rotation in order to obtain the new extremal field. We remind ourselves through the algorithm created in section II and reference1 that extremal fields are found through local duality rotations of second rank antisymmetric tensors like in equation (1) because then we can use the equations analogous to (4-5) to define an orthogonal tetrad. That is the core of this algorithm. εμν = cos θ ωμν -sin θ ∗ωμν . (43) As always we choose this complexion θ to be defined by the condition, 11 εμν ∗εμν = 0 . (44) Thus we find the new local scalar complexion analogously to section II to be, tan(2θ) = -ωμν ∗ωμν/ωλρ ωλρ . (45) We used our new algorithm to find a new kind of local SU(2) gauge invariant extremal tensor εμν, that enables the construction of the new tetrad, Sμ (1) = εμλ ερλ Xρ (46) Sμ (2) = q -Qym/2 εμλ Xλ (47) Sμ (3) = q -Qym/2 ∗εμλ Yλ (48) Sμ (4) = ∗εμλ ∗ερλ Y ρ , (49) where Qym = εμν εμν and we assume this local scalar not to be zero. With the help of identity (3), when applied to the case Aμα = εμα and Bνα = ∗ενα we obtain as in section II the equivalent condition to (44), εαν ∗εμν = 0 , (50) It is a simple excercise using (3) for Aμα = εμα and Bνα = ενα, and (50), to prove that the vectors (46-49) are orthogonal. As we did before in section II we will call for future reference εμλερλ the skeleton of the tetrad vector Sμ (1), and Xρ the gauge vector. In the case of Sμ (3), the skeleton is ∗εμλ, and Yλ the gauge vector. It is clear now that skeletons are gauge invariant under SU(2) × U(1) as we announced at the start of this section. This property guarantees that the vectors under local U(1) or SU(2) gauge transformations will not leave their original planes or blades, keeping therefore the metric tensor explicitly invariant. Our final task in this construction will be to define the gauge vectors Xσ and Y σ for the tetrad (46-49). A nontrivial although useful choice that we can make is Xσ = Y σ = Tr[Σαβ E ρ α E λ β ∗ξ σ ρ ∗ξλτ Aτ]. The nature of the object Σαβ is explained in section VI, Appendix II in reference3 and also 12 section VI. The object Σαβ is basically built with the Pauli matrices and the identity two by two. The tetrad vectors E ρ α inside the expression Tr[Σαβ E ρ α E λ β ∗ξ σ ρ ∗ξλτ Aτ] can be chosen to be the tetrad vectors that we already know from manuscript1 and section II for electromagnetic fields in curved space-times. Following the same notation as in1 and equations (16-19), we call E ρ (o) = U ρ, E ρ (1) = V ρ, E ρ (2) = Zρ, E ρ (3) = W ρ. The electromagnetic extremal tensor ξρσ, and its dual ∗ξρσ are also already known from reference1 and section II. We make use of the already defined tetrads built for Einstein-Maxwell spacetimes in order to enable the use of the object Σαβ which is key in our construction. The key lies in the translating property of this object between SU(2) local gauge transformations S and local Lorentz transformations Λα γ, see reference3 and notice from section VI that S-1 Σαβ S = Λα γ Λβ δ Σγδ. We would like to study one more property of these chosen gauge vector fields Xσ = Y σ = Tr[Σαβ E ρ α E λ β ∗ξ σ ρ ∗ξλτ Aτ]. The structure E [ρ α E λ] β ∗ξρσ ∗ξλτ is invariant under U(1) local gauge transformations. The electromagnetic extremal field property1,8, ξμσ ∗ξμτ = 0 is useful in the contractions E ρ α E λ β ∗ξ σ ρ ∗ξλτ. Because it is leaving in the contraction of E ρ α E λ β with ∗ξρσ ∗ξλτ only the antisymmetric object E [ρ 2 E λ] 3 , which is locally U(1) gauge invariant. Precisely because of property (5). Let us remember that the object Σαβ is antisymmetric and contracted with the electromagnetic tetrads as Σαβ E ρ α E λ β inside the local gauge vector, see section III. In the first paper1 we proved that the group U(1) is isomorphic to the local group of boosts plus discrete transformations on blade one that we called LB1. The same group U(1) is isomorphic to SO(2), that we also called LB2 since it is related to local tetrad rotations on blade two. This is a fundamental result in group theory alone, let alone in physics. We proved in references3,4 that the local group of SU(2) gauge transformations is isomorphic to the tensor product of three LB1 groups. Second, the local group of SU(2) gauge transformations is isomorphic to the tensor product of three LB2 or SO(2) groups. All the local gauge groups of the Standard Model have been proven to be isomorphic to local groups of tetrad transformations in four-dimensional Lorentzian curved or flat spacetimes. The no-go theorems of the sixties20-22 have been proven to be incorrect. Not because of their internal logic but for the assumptions made at the outset of these theorems. We read in reference22 "S (the scattering matrix) is said to be Lorentz-invariant if it possesses a symmetry group locally isomorphic to the Poincar`e group P.. . . A symmetry transformation is said to be an internal symmetry transformation if it commutes with P. This implies that it 13 acts only on particle-type indices, and has no matrix elements between particles of different four-momentum or different spin. A group composed of such transformations is called an internal symmetry group". The local electromagnetic gauge group of transformations U(1) has been proven to be isomorphic to local groups of tetrad transformations LB1 and LB2 on both the orthogonal planes one and two. These local groups of transformations LB1 and LB2= SO(2) are composed of Lorentz transformations except in LB1 for an improper discrete reflection, see reference1. Therefore the local Lorentz group of spacetime transformations cannot commute with LB1 or LB2 since Lorentz transformations on a local plane do not necessarily commute with Lorentz transformations on another local plane at the same point in spacetime. The local internal groups of transformations do not necessarily commute with the local Lorentz transformations, because they are isomorphic to local groups of tetrad transformations. Analogous results were proven for the non-Abelian cases SU(2)×U(1) and SU(3) × SU(2) × U(1) Yang-Mills, see references3-5. IV. SPACETIME FEYNMAN CALCULUS It is of fundamental importance to understand the geometry of spacetime when particle interactions are taking place. Using the accumulated analysis for different kinds of gauge theories carried out in1-5,11,23, we will show explicitly how to assign to different Feynman diagrams in weakly interacting processes, different sets of tetrad vectors. The massive weak interactions boson mediators have an associated gravitational field as well as electrons, muons and neutrinos and even though these gravitational fields might be weak, they possess the necessary geometrical structure that enables the local symmetries of the standard model to be realized in an explicit fashion as it was analyzed in previous manuscripts1-5. We judge relevant to understand that the transformations of colliding particles into the same or other emerging particles can occur through the local transformation properties of gravitational fields which will differ for different settings even though they exhibit analogous manifest local invariance under the internal symmetries of the standard model. We remind ourselves that in manuscripts1-5,11,23 all the local internal gauge symmetries of the standard model have been proved isomorphic to local groups of tetrad transformations in four-dimensional curved Lorentz spacetimes. However, in order not to get foundational contradictions with quantum field theories at this stage of analysis we will assume that the kind of tetrads introduced in 14 sections II and III are defined in Minkowski spacetime. This choice of spacetime involves no contradiction since the basic operation of local duality field transformation can be performed in flat Minkowski spacetimes as well. We can define the tetrads in Minkowski spacetime and prove that these tetrads define locally two orthogonal planes of stress-energy diagonalization. These geometrical structures exist not only in curved spacetimes but also in flat spacetimes. In essence local tetrad gauge states of spacetime would represent different microparticles in their spacetime manifestation, that can transform through local gauge transformations into other microparticles or other local tetrad gauge states of spacetime. The notation is a replica of the notation in24, so we refer the reader to this reference. We also refer the reader to24-28 for abundant literature citation, specially in the field of particle physics. A. Weak interactions The existence of mediators as it was shown in1-4 is irreplaceable as far as we are concerned with the construction of these kind of tetrads in weak interactions. In this case it is the existence of local SU(2) "extremal" fields that allow us to build tetrads in weak processes. There are interactions involving the massive mediators where any virtual effect is negligible. For instance the W -as the mediator in inverse Muon decay. The Zo mediator in elastic Neutrino-Electron scattering. This is important because the existence of virtual processes would require a different approach. We will analyze these processes through the use of appropriately defined tetrads. 1. Inverse Muon decay Let us consider the process e-(1) + νμ(2) →νe(3) + μ-(4). There are two vertices. We invoke then the existence of the SU(2) tetrads introduced in3,4, specially the general tetrad structure presented in the section "Gauge geometry" and also section III. We called these general SU(2) tetrad vectors Sμ (1) · · · Sμ (4) and the structure of these latter tetrads was introduced in equations (46-49) in the section III dedicated to the overview of these objects. There was a remaining freedom in the choice of two vector fields, Xρ and Y ρ. It is exactly through an appropriate choice for these two vector fields that we can identify a tetrad set for each vertex at the same spacetime point. In addition to the previously introduced notation 15 and structures, let us call the non-null electromagnetic tetrads, following again the notation in references1-4 and sections II-III, E ρ α . There are local non-null electromagnetic tetrads in both vertices at the same spacetime point since in one vertex we have an electron and in the other vertex a muon. The indices α and β are reserved for locally inertial coordinate systems. Then, we can proceed to define for the first vertex the two gauge vector fields, Xρ = Y ρ = u(3) γα (1 -γ5) u(1) Eρ α . (51) We are basically associating to the first vertex a current24 jα -= u(3) γα (1 -γ5) u(1). This current describes the process e-→νe + W -. For the second vertex we can choose for instance, Xρ = Y ρ = u(4) γα (1 -γ5) u(2) Eρ α . (52) Again, we are assigning to the second vertex a current24 jα -= u(4) γα (1 -γ5) u(2) describing the process νμ + W -→μ-. It is evident from all the analysis in3,4 that the geometrical transition from vertex one to vertex two and vice-versa, is an SU(2) generated local gauge transformation. That is only allowed through the existence of massive mediators. Following the ideas in3 we can start by choosing for instance, Xρ = Y ρ = Tr[Σαβ E σ α E λ β ∗ξ ρ σ ∗ξλτ Aτ] (53) The Σαβ objects are analyzed in appendix II in reference3 and sections III-VI in this present paper, ξσρ are the electromagnetic "extremal" fields introduced in reference1 and section II in this present paper, etc. Through a local SU(2) gauge transformation on blade one, we can "rotate" the normalized version of vectors (46-47) on blade one, until Xρ in (53) becomes Xρ in (51). We can also "rotate" the normalized version of vectors (48-49) on blade two, until Y ρ in (53) becomes Y ρ in (51). Let us remember that the tetrad skeletons are locally gauge invariant under U(1)×SU(2). This is just a sample of local gauge transformations of the normalized version of vectors (46-49). We proved in references1,3,4 that the maps that both in the local plane one and two send the tetrad vectors that generate these planes from an initial gauge vector into another gauge vector are injective and surjective maps. The 16 map that assigns local groups of gauge transformations into local groups of tetrad transformations on either local orthogonal planes of stress-energy symmetry are isomorphisms. Again we can start with (53) and appropriately "rotate" the tetrad vectors on blade one, until they become the ones corresponding to Xρ given in (52). Similar for Y ρ in this second case (52). It is evident then that (51) and (52) are connected through local SU(2) gauge transformations on blades one and two, that in turn, leave invariant the metric tensor. That is, these local gauge transformations exist because of transitivity. The local groups of gauge transformations have been proven to be isomorphic to the local groups of tetrad transformations on the local orthogonal planes of symmetry. Given two sets of tetrads on the local plane one, then there is a local gauge transformation that sends one set into the other and vice-versa. Similar in the local othogonal plane two. These local orthogonal planes we remind ourselves are the local planes of covariant diagonalization of the stress-energy tensor, for the Abelian case and also for the non-Abelian case, see references1-5,11. We can also notice that the vector fields (51-52) are not strictly vectors but pseudovectors under local parity transformations, see reference24. But the metric tensor remains unaltered under these local parity transformations. It is as if the geometry associated to the e-(1) and νe(3) can be transformed through the existence of a massive mediator into the geometry associated to the νμ(2) and μ-(4) without altering the spacetime. The vertices are local tetrad gauge states of the same flat spacetime. 2. Elastic Neutrino-Electron scattering Now, we are considering neutral currents. In particular the interaction process νμ(1) + e-(2) →νμ(3) + e-(4). As before we can assign to the first vertex the choice, Xρ = Y ρ = u(3) γα (1 -γ5) u(1) Zρ α . (54) The current jα -= u(3) γα (1 -γ5) u(1), represents the process νμ(1) →νμ(3) + Zo. The tetrad Zρ α is built as follows. Following again the notation in24 we know we have available a local vector field Zμ that results from the Weinberg rotation through the angle θw, in addition to the standard electromagnetic local vector field Aμ. The rotation can be written, 17 Aμ = Bμ cos θw + W 3 μ sin θw (55) Zμ = -Bμ sin θw + W 3 μ cos θw . (56) The local tetrad field Zρ α is present in both vertices at the same spacetime point, since the massive neutral mediator is present in both local vertices and Zρ α is a local tetrad associated to this flat spacetime. The electro-weak mixing involves a weak isotriplet of intermediate vector bosons W coupled to three weak isospin currents, and an isosinglet intermediate vector boson Bμ coupled to the weak hypercharge current. If we follow all the steps in reference1 and the method developed in section II in the present paper, we can build out of the curl Zμ,ν -Zν,μ a new tetrad. This auxiliary local tetrad Zρ α present in the definition of gauge-vectors (54) would once more in its own construction involve the choice of two gaugevector fields, see reference1. We can choose for instance Zμ and Bμ as these two vector fields needed in turn in the definition and construction of the local auxiliary tetrad Zρ α. Then, the tetrad that couples to the neutrino current is associated to the massive Zo. The second vertex could be assigned a choice, Xρ = Y ρ = u(4) γα (cV -cA γ5) u(2) Eρ α , (57) representing e-(2) + Zo →e-(4). For this particular interaction cV = -1 2 + 2 sin θw, and cA = -1 2, where θw is again the Weinberg angle24. The massive mediator allows again for a SU(2) local gauge transformation between the tetrad vectors chosen for vertex one and the ones chosen for vertex two at the same spacetime point. The neutral current works as a geometry mediator between the scattered particles keeping the spacetime invariant at the same spacetime point. The vertices function as local tetrad gauge states of the same spacetime. V. CONCLUSIONS We have explored the possibility of assigning tetrads to Feynman diagrams. In interactions where we can assume the existence of particles with associated local fields like Abelian or non-Abelian gauge fields. At no step of analysis we have specified the tetrad themselves 18 making all these geometrical properties outstanding since they can be put forward with all generality without the need to study case by case as long as the gauge fields are non-null for example. We can think the massive weak interactions boson mediators to have associated gravitational fields as well as electrons, muons and neutrinos do and even though these gravitational fields might be weak, they possess the necessary geometrical structure that enables the local symmetries of the standard model to be realized in an explicit fashion as it was studied thoroughly in previous manuscripts1-5. However we decided to consider a background flat spacetime since gravitational fields would entail foundational contradictions with standard quantum field theories. New concepts would have to be introduced and we do not want to do this at this stage in analysis. We deem fundamental to understand that the transformations of colliding particles into the same or other emerging particles through elastic or inelastic processes can occur through the local transformation properties of spacetime which will differ for different settings through the new notion of tetrad gauge states of spacetime. Having done this explicitly, a number of questions naturally arise. We want these concluding remarks to be a summary of these open questions. • The order of the formulations24-31. We have worked out the low-order diagrams. Then, what happens with higher order diagrams ?. The tetrads admit the choice of two gauge vector fields Xρ and Y ρ, and the higher orders are additive exactly as in the quantum theories in these vector fields available as a choice. But there is more to understand. Do the higher order diagrams represent contributions coming from higher order perturbative theories of a full relativistic formulation of these interactions23 involving perturbations of the electromagnetic field, etc, for instance, or just perturbative expansions in the gauge vector fields?. As an example of this kind of situation we might want to qualitatively consider the quark decays b →s γ, see chapter XIV-7 in reference32 for instance. There are several possibilities but there are certainly higher order contributions to these kind of processes. Let us focus on the contribution that involves a W -boson mediator in rare decays. There is the b →W -+ c vertex and the subsequent c + W -→s vertex for example. Each one of these has an associated current vector, let us call them for short jα [bc] and jα [cs]. Both vertices have particles with electric charge so there is at each vertex an associated electromagnetic tetrad Eρ [bc]α and Eρ [cs]α respectively. Then we can associate to each vertex in this higher order diagram 19 gauge vectors Xρ [bc] = Y ρ [bc] = jα [bc] Eρ [bc]α and Xρ [cs] = Y ρ [cs] = jα [cs] Eρ [cs]α. The jα [bc] Eρ [bc]α contribution can then be added to the non-Abelian tetrad gauge vectors for vertex [bc] and similar for jα [cs] Eρ [cs]α at the vertex [cs]. In this contribution there is also a photon involved. This photon emission will change the stress-energy tensor and therefore will be associated to the perturbed extremal field which is in turn a local duality rotation of the perturbed electromagnetic field. These perturbation in the extremal field will in turn perturb the tetrad skeletons. Therefore, the vertices in the diagram will be associated to tetrad gauge states of the spacetime and the photon emission to tetrad skeleton perturbations. The b →s γ decays might involve contributions with top quarks and the analysis will be similar, the b →s γ decays could include a loop of the kind cc for example and we will proceed similarly as well. We will add to the gauge vectors associated to the corresponding vertex, higher and higher contributions with a corresponding expansion parameter. The currents at the corresponding vertex are the key objects necessary to produce gauge vectors associated to vertices. There could be that both, the perturbations in the skeletons and the perturbations in gauge vectors proceed simultaneously as in the b →s γ case. On one hand the local orthogonal planes of symmetry will tilt and on the other hand the tetrad vectors that span these local planes will rotate inside them. • A point outside the scope of this manuscript. The issue of "gauge gravity". Since in references1,3 it was explicitly proved that the Abelian and non-Abelian gauge theories represent special symmetries of the gravitational field, we can ask about the meaning of "gauge gravity". The electromagnetic field is associated to the LB1 and LB2 symmetries of the gravitational field, see reference1. The SU(2) group of local gauge transformations is associated to the symmetries of the tensor product of three LB1 or three LB2 groups of transformations, see references3,4. Analogous for SU(3), see reference5. Then, it is not obvious to understand what is the meaning of a statement like, "casting the theory of gravity into a Yang-Mills formulation". We have reason to believe that we can truly cast the theory of gravity into a Yang-Mills formulation but as said, it is not obvious and requires a whole new work. • Another point outside the scope of this manuscript. The issue of quantum gravity. It has been proved explicitly that metric tensors can be associated with microparticle in20 teractions. These constructions are possible by means of non-null Abelian tetrad fields, and by means of SU(2) local non-Abelian tetrad fields. Perturbative formulations of these tetrad field structures as in reference23 can take care of quantum fluctuations as well. The quantum is connected through the existence of these tetrad fields to gravity. A treatment for a curved spacetime where a gravitational field is present would entail several new notions that we would like not to introduce at this stage of analysis. Nonetheless we can advance that in curved spacetimes there will be local orthogonal planes one and two and the isomorphisms between local gauge groups Abelian and non-Abelian and local groups of tetrad transformations LB1 and LB2 would also apply in a similar fashion as to flat spacetimes1-5. The quantization will be reflected through interactions that alter the local plane-symmetry structure by tilting these local planes of symmetry. Continuously for continuous perturbations and discretely in quantum settings situations. The main idea behind these quantum formulations in curved spacetimes is that the local planes of stress-energy diagonalization are always and during quantum interactions local orthogonal planes of symmetry, because quantum problems are basically confronted through perturbations analysis and these perturbations lead to continuous or discrete evolution of the local planes of symmetry either in flat or curved spacetimes. Continuous or discrete tilt symmetry evolution, see reference23. We also established that during interactions of microparticles the tetrad vectors that span the local orthogonal planes one and two might rotate inside them. The tetrads of different nature that we were able to build in1-5 and the present work, establish a link between the standard locally inertial flat field environment of the traditional standard quantum theories in weak interactions on one hand, and the curved spacetime of gravity on the other hand. The point is the following, why are we using in quantum gravity similar conceptual foundations to theories that are not formulated in curved spacetimes but flat spacetimes ?. • Another point outside the limited scope of this paper. The issue of the Higgs mechanism. It is a device conceived in its relationship with the nature of mass, for instance of the mass mediators. In the present tetrad environment we can ask if it is necessary, or the mass comes into existence due to the presence of gravity ?. Is it possible that the local Higgs field and its quantum fluctuations are related to the perturbations of 21 the local gravitational weak field scalar approximation on a flat Minkowski background associated to the asymptotically curved spacetimes that in turn we can associate to elementary microparticles ?. • The issue of symmetry-breaking. It was proved in the general manuscripts1-4 that the gravitational field when built with tetrads along the lines of expressions (46-49) are manifestly invariant under local electromagnetic gauge transformations, and local SU(2) gauge transformations as well. But when assigning a tetrad set to a vertex in a low-energy weak process diagram in Minkowski spacetime, we make a particular choice for the two gauge vectors Xρ and Y ρ. For instance, through associated currents we choose a particular gauge, and a different one for each vertex, like in inverse Muon decay or elastic Neutrino-Electron scattering. Then, we wonder if this gauge fixing procedure could be the geometrical form of the standard symmetry-breaking process. Hereby, we can see that it is the tetrad fields that bridge the two gauges associated to the two vertices, through a local SU(2) gauge transformation, that in turn, leaves invariant the metric tensor. VI. APPENDIX I This appendix is introducing the object Σαβ that according to the matrix definitions introduced in the references is Hermitic. The use of this object in the construction of tetrads in section III enables the local SU(2) gauge transformations S, to get in turn transformed into purely geometrical transformations. That is, local rotations of the U(1) electromagnetic tetrads. The object σαβ is defined6,7 as σαβ = σα + σβ --σβ + σα -. The object σα ± arises when building the Weyl representation for left handed and right handed spinors. According to reference7, it is defined as σα ± = (1, ±σi), where σi are the Pauli matrices for i = 1 · · · 3. Under the (1 2, 0) and (0, 1 2) spinor representations of the Lorentz group this object transforms as, S-1 (1/2) σα ± S(1/2) = Λα γ σγ ± . (58) Equation (58) means that under the spinor representation of the Lorentz group, σα ± transform as vectors. In (58), the matrices S(1/2) are local objects, as well as7 Λα γ. The 22 SU(2) elements can be considered to belong to the Weyl spinor representation of the Lorentz group. Since the group SU(2) is homomorphic to SO(3), they just represent local space rotations. It is also possible to define the object σ†αβ = σα -σβ + -σβ -σα + in a similar fashion. Therefore, we can write, ı σαβ + σ†αβ =      0 if α = 0 and β = i 4 εijk σk if α = i and β = j , σαβ -σ†αβ =      -4 σi if α = 0 and β = i 0 if α = i and β = j . We then may call Σαβ ROT = ı σαβ + σ†αβ , and Σαβ BOOST = ı σαβ -σ†αβ and a possible choice for the object Σαβ could be for instance Σαβ = Σαβ ROT +Σαβ BOOST. This is a good choice when we consider proper Lorentz transformations of the tetrad vectors nested within the structure of the gauge vectors Xμ and Y μ. For spatial rotations of the U(1) electromagnetic tetrad vectors which in turn are nested within the structure of the two gauge vectors Xμ and Y μ, as is the case under study in section III, we can simply consider Σαβ = Σαβ ROT. These possible choices also make sure the Hermiticity of gauge vectors. Since when defining the gauge vectors Xμ and Y μ we are taking the trace, then Xμ and Y μ are real. REFERENCES 1A. Garat, Tetrads in geometrodynamics, J. Math. Phys. 46, 102502 (2005). A. Garat, Erratum: Tetrads in geometrodynamics, J. Math. Phys. 55, 019902 (2014). 2A. Garat, New tetrads in Riemannian geometry and new ensuing results in group theory, gauge theory and fundamental physics in particle physics, general relativity and astrophysics, Int. J. Mod. Phys. Conf. Ser., Vol. 45, (2017), 1760004. 3A. Garat, Tetrads in Yang-Mills geometrodynamics, Gravitation and Cosmology, (2014) Vol. 20 No. 1, pp. 116-126. Pleiades Publishing Ltd. arXiv:gr-qc/0602049. 4A. Garat, The new electromagnetic tetrads, infinite tetrad nesting and the non-trivial emergence of complex numbers in real theories of gravitation, Int. J. Geom. Methods Mod. Phys., Vol. 14, No. 9 (2017), 1750132. 5A. Garat, Tetrads in SU(3) × SU(2) × U(1) Yang-Mills geometrodynamics, Int. J. Geom. Methods Mod. Phys., Vol. 15 no. 3 (2018), 1850045. . 23 6M. Kaku, Quantum Field Theory: A Modern Introduction (Oxford University Press, 1993). 7L. ́Alvarez-Gaum ́e and M. A. V ́azquez-Mozo, Introductory Lectures on Quantum Field Theory (arXiv:hep-th/0510040). 8C. Misner and J. A. Wheeler, Classical physics as geometry, Annals of Physics 2, 525 (1957). 9N. Cabibbo and E. Ferrari, Nuovo Cim. 23, 1147 (1962). 10J. A. Schouten, Ricci Calculus: An Introduction to Tensor Calculus and Its Geometrical Applications (Springer, Berlin, 1954). 11A. Garat, "Gauge invariant method for maximum simplification of the field strength in non-Abelian Yang-Mills theories", Int. J. Geom. Methods Mod. Phys., Vol. 12, No. 10 (2015), 1550104. . 12R. Gilmore, Lie Groups, Physics and Geometry (Cambridge University Press, 2008). 13R. Gilmore, Lie Groups, Lie Algebras, and Some of Their Applications (John Wiley & Sons, 1974). 14J. Stillwell, Naive Lie Theory (Springer Science + Business Media, L.L.C., 2010). 15N. Carter, Visual Group Theory (The Mathematical Association of America, Inc, 2009). 16M. Carmeli, Classical Fields: General Relativity and Gauge Theory (J. Wiley & Sons, New York, 1982). 17C. N. Yang and R. L. Mills, Phys. Rev. 96, 191 (1954). 18R. Utiyama, Phys. Rev., 101, 1597 (1956). 19T. W. B. Kibble, J. Math. Phys., 2, 212 (1961). 20S. Weinberg, Comments on relativistic supermultiplet theories, Phys. Rev. 139, B597 (1965). 21L. O'Raifeartagh, Lorentz invariance and internal symmetry, Phys. Rev. 139, B1052 (1965). 22S. Coleman and J. Mandula, All possible symmetries of the S matrix, Phys. Rev. 159, N5 1251 (1967). 23A. Garat, "Dynamical symmetry breaking in geometrodynamics", TMF, 195:2, (2018), 313-328; Theoret. and Math. Phys., 195:2, (2018), 764-776. . 24D. Griffiths , Introduction to elementary particles (John Wiley & Sons, Inc. , 1987). 25T. P. Cheng and L. F. Li , Gauge Theory of Elementary Particle Physics (Oxford University Press, 1989). 24 26W. Greiner and B. Mueller, Gauge Theory of Weak Interactions (Springer Verlag Gmbh, 1996). 27W. Greiner and B. Mueller, Quantum Mechanics, Symmetries (Springer Verlag, 1989). 28G. ,t Hooft, Renormalization of Gauge Theories (Lecture notes Erice, 1998). 29M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory (Perseus Books Publishing L.L.C., 1995). 30M. Srednicki, Quantum Field Theory (Cambridge University Press, New York, 2007). 31R. Jackiw, Fifty Years of Yang-Mills Theory and my Contribution to it (arXiv:physics/0403109, 2004). 32J. F. Donoghue, E. Golowich and B. R. Holstein, Dynamics of the Standard Model (Cambridge monographs on particle physics, nuclear physics and cosmology) (Cambridge University Press, Cambridge, 2014). 25
math/0104241
arXiv:math/0104241v1 [math.CO] 25 Apr 2001 THE LAURENT PHENOMENON SERGEY FOMIN AND ANDREI ZELEVINSKY Abstract. A composition of birational maps given by Laurent polynomi- als need not be given by Laurent polynomials; however, sometimes—quite unexpectedly—it does. We suggest a unified treatment of this phenomenon, which covers a large class of applications. In particular, we settle in the affir- mative a conjecture of D. Gale and R. Robinson on integrality of generalized Somos sequences, and prove the Laurent property for several multidimensional recurrences, confirming conjectures by J. Propp, N. Elkies, and M. Kleber. Contents 1. Introduction 1 2. The Caterpillar Lemma 4 3. One-dimensional recurrences 8 4. Two- and three-dimensional recurrences 11 5. Homogeneous exchange patterns 20 References 21 1. Introduction In this paper, we suggest a unified explanation for a number of instances in which certain recursively defined rational functions prove, unexpectedly, to be Laurent polynomials. We begin by presenting several instances of this Laurent phenomenon established in the paper. Example 1.1. (The cube recurrence) Consider a 3-dimensional array (yijk : (i, j, k) ∈H) whose elements satisfy the recurrence yi,j,k = αyi−1,j,kyi,j−1,k−1 + βyi,j−1,kyi−1,j,k−1 + γyi,j,k−1yi−1,j−1,k yi−1,j−1,k−1 . (1.1) Here H can be any non-empty subset of Z3 satisfying the following conditions: if (i, j, k) ∈H, then (i′, j′, k′) ∈H whenever i ≤i′, j ≤j′, k ≤k′; (1.2) for any (i′, j′, k′) ∈H, the set {(i, j, k) ∈H : i ≤i′, j ≤j′, k ≤k′} is finite. (1.3) Date: April 25, 2001. 1991 Mathematics Subject Classification. Primary 14E05, Secondary 05E99, 11B83. Key words and phrases. Laurent phenomenon, Somos sequence, Gale-Robinson conjecture. The authors were supported in part by NSF grants #DMS-0049063, #DMS-0070685 (S.F.), and #DMS-9971362 (A.Z.). 1 2 SERGEY FOMIN AND ANDREI ZELEVINSKY Theorem 1.2. Let Hinit = {(a, b, c) ∈H : (a −1, b −1, c −1) /∈H}. For every (i, j, k) ∈H, the entry yi,j,k is a Laurent polynomial with coefficients in Z[α, β, γ] in the initial entries ya,b,c, for (a, b, c) ∈Hinit. The cube recurrence (with α = β = γ = 1) was introduced by James Propp [10], who was also the one to conjecture Laurentness in the case when H ⊂Z3 is given by the condition i + j + k ≥0; in this case Hinit consists of all (a, b, c) ∈H such that a + b + c ∈{0, 1, 2}. Another natural choice of H was suggested by Michael Kleber: H = Z3 ≥0, in which case Hinit = {(a, b, c) ∈Z3 ≥0 : abc = 0}. Example 1.3. (The Gale-Robinson sequence) Let p, q, and r be distinct positive integers, let n = p + q + r, and let the sequence y0, y1, . . . satisfy the recurrence yk+n = αyk+pyk+n−p + βyk+qyk+n−q + γyk+ryk+n−r yk . (1.4) David Gale and Raphael Robinson conjectured (see [7] and [8, E15]) that every term of such a sequence is an integer provided y0 = · · · = yn−1 = 1 and α, β, γ are positive integers. Using Theorem 1.2, we prove the following stronger statement. Theorem 1.4. As a function of the initial terms y0, . . . , yn−1, every term of the Gale-Robinson sequence is a Laurent polynomial with coefficients in Z[α, β, γ]. We note that the special case α = β = γ = 1, p = 1, q = 2, r = 3, n = 6 (resp., r = 4, n = 7) of the recurrence (1.4) is the Somos-6 (resp., Somos-7) recurrence [7]. Example 1.5. (Octahedron recurrence) Consider the 3-dimensional recurrence yi,j,k = αyi+1,j,k−1yi−1,j,k−1 + βyi,j+1,k−1yi,j−1,k−1 yi,j,k−2 (1.5) for an array (yijk)(i,j,k)∈H whose indexing set H is contained in the lattice L = {(i, j, k) ∈Z3 : i + j + k ≡0 mod 2} (1.6) and satisfies the following analogues of conditions (1.2)–(1.3): if (i, j, k) ∈H, then (i′, j′, k′) ∈H whenever |i′ −i| + |j′ −j| ≤k′ −k; (1.7) for any (i′, j′, k′) ∈H, the set {(i, j, k) ∈H : |i′ −i| + |j′ −j| ≤k′ −k} (1.8) is finite. Theorem 1.6. Let Hinit = {(a, b, c) ∈H : (a, b, c −2) /∈H}. For every (i, j, k) ∈ H, the entry yi,j,k is a Laurent polynomial with coefficients in Z[α, β] in the initial entries ya,b,c, for (a, b, c) ∈Hinit. The octahedron recurrence on the half-lattice H = {(i, j, k) ∈L : k ≥0} (1.9) was studied by W. H. Mills, D. P. Robbins, and H. Rumsey in their pioneering work [9] on the Alternating Sign Matrix Conjecture (cf. [1] and [10, Section 10] for further references); in particular, they proved the special case of Theorem 1.6 for this choice of H. THE LAURENT PHENOMENON 3 Example 1.7. (Two-term version of the Gale-Robinson sequence) Let p, q, and n be positive integers such that p < q ≤n/2, and let the sequence y0, y1, . . . satisfy the recurrence yk+n = αyk+pyk+n−p + βyk+qyk+n−q yk . (1.10) Using Theorem 1.6, one can prove that this sequence also exhibits the Laurent phenomenon. Theorem 1.8. As a function of the initial terms y0, . . . , yn−1, every term ym is a Laurent polynomial with coefficients in Z[α, β]. We note that in the special case α = β = 1, p = 1, q = 2, n = 5 (resp., n = 4), (1.10) becomes the Somos-5 (resp., Somos-4) recurrence [7]. The last example of the Laurent phenomenon presented in this section is of a somewhat different kind; it is inspired by [2]. Example 1.9. Let n ≥3 be an integer, and consider a quadratic form P(x1, . . . , xn) = x2 1 + · · · + x2 n + X i<j αijxixj . Define the rational transformations F1, . . . , Fn by Fi : (x1, . . . , xn) 7→(x1, . . . , xi−1, P xi=0 xi , xi+1, . . . , xn). (1.11) Theorem 1.10. For any sequence of indices i1, . . . , im, the composition map G = Fi1 ◦· · · ◦Fim is given by G : x = (x1, . . . , xn) 7→(G1(x), . . . , Gn(x)), where G1, . . . , Gn are Laurent polynomials with coefficients in Z[αij : i < j]. This paper is an outgrowth of [6], where we initiated the study of a new class of commutative algebras, called cluster algebras, and established the Laurent phe- nomenon in that context. Here we prove the theorems stated above, along with a number of related results, using an approach inspired by [6]. The first step is to reformulate the problem in terms of generalized exchange patterns (cf. [6, Defini- tion 2.1]), which consist of clusters and exchanges among them. The clusters are distinguished finite sets of variables, each of the same cardinality n. An exchange operation on a cluster x replaces a variable x ∈x by a new variable x′ = P x , where P is a polynomial in the n−1 variables x−{x}. Each of the above theorems can be restated as saying that any member of the cluster obtained from an initial cluster x0 by a particular sequence of exchanges is a Laurent polynomial in the variables from x0. Theorem 1.10 is explicitly stated in this way; in the rest of examples above, the rephrasing is less straightforward. Our main technical tool is “The Caterpillar Lemma” (Theorem 2.1), which es- tablishes the Laurent phenomenon for a particular class of exchange patterns (see Figure 1). This is a modification of the namesake statement [6, Theorem 3.2], and its proof closely follows the argument in [6]. (We note that none of the two statements is a formal consequence of another.) 4 SERGEY FOMIN AND ANDREI ZELEVINSKY In most applications, including Theorems 1.2 and 1.6 above, the “caterpillar” patterns to which Theorem 2.1 applies, are not manifestly present within the origi- nal setup. Thus, we first complete it by creating additional clusters and exchanges, and then apply the Caterpillar Lemma. The paper is organized as follows. The Caterpillar Lemma is proved in Sec- tion 2. Subsequent sections contain its applications. In particular, Theorems 1.2, 1.4, 1.6, and 1.8 are proved in Section 4, while Theorem 1.10 is proved in Sec- tion 5. Other instances of the Laurent phenomenon treated in this paper include generalizations of each of the following: Somos-4 sequences (Example 3.3), Elkies’s “knight recurrence” (Example 4.1), frieze patterns (Example 4.3) and number walls (Example 4.4). We conjecture that in all instances of the Laurent phenomenon established in this paper, the Laurent polynomials in question have nonnegative integer coefficients. In other contexts, similar nonnegativity conjectures were made earlier in [4, 5, 6]. Acknowledgments. We thank Jim Propp for introducing us to a number of beautiful examples of the Laurent phenomenon, and for very helpful comments on the first draft of the paper. In particular, it was he who showed us how to deduce Theorem 1.8 from Theorem 1.6. This paper was completed during our stay at the Isaac Newton Institute for Mathematical Sciences (Cambridge, UK), whose support and hospitality are grate- fully acknowledged. 2. The Caterpillar Lemma Let us fix an integer n ≥2, and let T be a tree whose edges are labeled by the elements of the set [n] = {1, 2, . . ., n}, so that the edges emanating from each vertex receive different labels. By a common abuse of notation, we will sometimes denote by T the set of the graph’s vertices. We will write t k −−−t′ if vertices t, t′ ∈T are joined by an edge labeled by k. From now on, let A be a unique factorization domain (the ring of integers Z or a suitable polynomial ring would suffice for most applications). Assume that a nonzero polynomial P ∈A[x1, . . . , xn], not depending on xk , is associated with every edge t k −−−t′ in T . We will write t −−− P t′ or t k −−− P t′ , and call P the exchange polynomial associated with the given edge. The entire collection of these polynomials is called a generalized exchange pattern on T . (In [6], we introduced a much narrower notion of an exchange pattern; hence the terminology.) We fix a root vertex t0 ∈T , and introduce the initial cluster x(t0) of n in- dependent variables x1(t0), . . . , xn(t0). To each vertex t ∈T , we then associate a cluster x(t) consisting of n elements x1(t), . . . , xn(t) of the field of rational functions A(x1(t0), . . . , xn(t0)). The elements xi(t) are uniquely determined by the following exchange relations, for every edge t k −−− P t′: xi(t) = xi(t′) for any i ̸= k; (2.1) xk(t) xk(t′) = P(x(t)). (2.2) (One can recursively compute the xi(t)’s, moving away from the root. Since the exchange polynomial P does not depend on xk, the exchange relation (2.2) does not change if we apply it in the opposite direction.) THE LAURENT PHENOMENON 5 We next introduce a special class of “caterpillar” patterns, and state conditions on their exchange polynomials that will imply Laurentness. For m ≥1, let Tn,m be the tree of the form shown in Figure 1. ✲ ✲ ✲ ✲ ✲ ✲ ✲ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ❆ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ✁ ❍ ❍ ✟✟ s s s s s s s s s s s s s s s s s s s s s s s s s s t0 t1 thead Figure 1. The “caterpillar” tree Tn,m, for n = 4, m = 8 The tree Tn,m has m vertices of degree n in its “spine” and m(n−2)+2 vertices of degree 1. We label every edge of the tree by an element of [n], so that the n edges emanating from each vertex on the spine receive different labels. We let the root t0 be a vertex in Tn,m that does not belong to the spine but is connected to one of its ends. This gives rise to the orientation of the spine, with all the arrows pointing away from t0 (see Figure 1). We assign a nonzero exchange polynomial P ∈A[x1, . . . , xn] to every edge t −−−t′ of Tn,m, thus obtaining an exchange pattern. For a rational function F = F(x, y, . . . ), we will denote by F|x←g(x,y,... ) the result of substituting g(x, y, . . . ) for x into F. To illustrate, if F(x, y) = xy, then F|x←y x = y2 x . Theorem 2.1. (The Caterpillar Lemma) Assume that a generalized exchange pat- tern on Tn,m satisfies the following conditions: For any edge • k −−− P •, the polynomial P does not depend on xk, and is not (2.3) divisible by any xi, i ∈[n]. For any two edges • i −−− P • j −−→ Q •, the polynomials P and Q0 =Q|xi=0 (2.4) are coprime elements of A[x1, . . . , xn]. For any three edges • i −−− P • j −−→ Q • i −−− R • labeled i, j, i, we have (2.5) L · Qb 0 · P = R xj←Q0 xj , where b is a nonnegative integer, Q0 =Q|xi=0 , and L is a Laurent monomial whose coefficient lies in A and is coprime with P. Then each element xi(t), for i ∈[n], t ∈Tn,m , is a Laurent polynomial in x1(t0), . . . , xn(t0), with coefficients in A. (Note the orientation of edges in (2.4)–(2.5).) Proof. Our argument is essentially the same as in [6, Theorem 3.2]. For t ∈Tn,m, let L(t) = A[x1(t)±1, . . . , xn(t)±1] 6 SERGEY FOMIN AND ANDREI ZELEVINSKY denote the Laurent polynomial ring in the cluster x(t) with coefficients in A. We view each L(t) as a subring of the ambient field of rational functions A(x(t0)). In this notation, our goal is to show that every cluster x(t) is contained in L(t0). We abbreviate L0 = L(t0). Note that L0 is a unique factorization domain, so any two elements x, y ∈L0 have a well-defined greatest common divisor gcd(x, y) which is an element of L0 defined up to a multiple from the group L× 0 of invertible elements in L0; the group L× 0 consists of Laurent monomials in x1(t0), . . . , xn(t0) whose coefficient belongs to A×, the group of invertible elements of A. To prove that all x(t) are contained in L0, we proceed by induction on m, the size of the spine. The claim is trivial for m = 1, so let us assume that m ≥2, and furthermore assume that our statement is true for all “caterpillars” with smaller spine. It is thus enough to prove that x(thead) ⊂L0 , where thead is one of the vertices most distant from t0 (see Figure 1). We assume that the path from t0 to thead starts with the following two edges: t0 i −−− P t1 j −−→ Q t2. Let t3 ∈Tn,m be the vertex such that t2 i −−− R t3. The following lemma plays a crucial role in our proof. Lemma 2.2. The clusters x(t1), x(t2), and x(t3) are contained in L0. Further- more, gcd(xi(t3), xi(t1)) = gcd(xj(t2), xi(t1)) = 1. Proof. The only element in the clusters x(t1), x(t2), and x(t3) whose inclusion in L0 is not immediate from (2.1)–(2.2) is xi(t3). To simplify the notation, let us denote x = xi(t0), y = xj(t0) = xj(t1), z = xi(t1) = xi(t2), u = xj(t2) = xj(t3), and v = xi(t3), so that these variables appear in the clusters at t0, . . . , t3, as shown below: y,x • t0 i −−−−−−− P z,y • t1 j −−−−−−→ Q u,z • t2 i −−−−−−− R v,u • t3 . Note that the variables xk, for k /∈{i, j}, do not change as we move among the four clusters under consideration. The lemma is then restated as saying that v ∈L0; (2.6) gcd(z, u) = 1 ; (2.7) gcd(z, v) = 1 . (2.8) Another notational convention will be based on the fact that each of the polynomials P, Q, R has a distinguished variable on which it depends, namely xj for P and R, and xi for Q. (In view of (2.3), P and R do not depend on xi, while Q does not depend on xj.) With this in mind, we will routinely write P, Q, and R as polynomials in one (distinguished) variable. For example, we rewrite the formula in (2.5) as R Q(0) y  = L(y)Q(0)bP(y), (2.9) where we denote L(y) = L|xj←y. In the same spirit, the notation Q′, R′, etc., will refer to the partial derivatives with respect to the distinguished variable. THE LAURENT PHENOMENON 7 We will prove the statements (2.6), (2.7), and (2.8) one by one, in this order. We have: z = P(y) x ; u = Q(z) y = Q  P (y) x  y ; v = R(u) z = R  Q(z) y  z = R  Q(z) y  −R  Q(0) y  z + R  Q(0) y  z . Since R  Q(z) y  −R  Q(0) y  z ∈L0 and R  Q(0) y  z = L(y)Q(0)bP(y) z = L(y)Q(0)bx ∈L0 , (2.6) follows. We next prove (2.7). We have u = Q(z) y ≡Q(0) y mod z . Since x and y are invertible in L0, we conclude that gcd(z, u) = gcd(P(y), Q(0)) = 1 (using (2.4)). It remains to prove (2.8). Let f(z) = R  Q(z) y  . Then v = f(z) −f(0) z + L(y)Q(0)bx . Working modz, we obtain: f(z) −f(0) z ≡f ′(0) = R′  Q(0) y  · Q′(0) y . Hence v ≡R′  Q(0) y  · Q′(0) y + L(y)Q(0)bx mod z . Note that the right-hand side is a polynomial of degree 1 in x whose coefficients are Laurent polynomials in the rest of the variables of the cluster x(t0). Thus (2.8) follows from gcd L(y)Q(0)b, P(y)  = 1, which is a consequence of (2.4)–(2.5). □ We can now complete the proof of Theorem 2.1. We need to show that any variable X = xk(thead) belongs to L0. Since both t1 and t3 are closer to thead than t0, we can use the inductive assumption to conclude that X belongs to both L(t1) and L(t3). Since X ∈L(t1), it follows from (2.1) that X can be written as X = f/xi(t1)a for some f ∈L0 and a ∈Z≥0 . On the other hand, since X ∈L(t3), it follows from (2.1) and from the inclusion xi(t3) ∈L0 provided by Lemma 2.2 that X has the form X = g/xj(t2)bxi(t3)c for some g ∈L0 and some b, c ∈Z≥0 . The inclusion X ∈L0 now follows from the fact that, by the last statement in 8 SERGEY FOMIN AND ANDREI ZELEVINSKY Lemma 2.2, the denominators in the two obtained expressions for X are coprime in L0. □ 3. One-dimensional recurrences In this section, we apply Theorem 2.1 to study the Laurent phenomenon for sequences y0, y1, . . . given by recursions of the form ym+nym = F(ym+1, . . . , ym+n−1), (3.1) where F ∈A[x1, . . . , xn−1]. For an integer m, let ⟨m⟩denote the unique element of [n] = {1, . . . , n} satisfying m ≡⟨m⟩mod n. We define the polynomials F1, . . . , Fn ∈A[x1, . . . , xn] by Fm = F(x⟨m+1⟩, x⟨m+2⟩, . . . , x⟨m−1⟩); (3.2) thus Fm does not depend on the variable xm. We introduce the infinite “cyclic exchange pattern” t0 ⟨0⟩ −−−−−−− F⟨0⟩ t1 ⟨1⟩ −−−−−−− F⟨1⟩ t2 ⟨2⟩ −−−−−−− F⟨2⟩ t3 ⟨3⟩ −−−−−−− F⟨3⟩ t4 −−−· · · , (3.3) and let the cluster at each point tm consist of the variables ym, . . . , ym+n−1, labeled within the cluster according to the rule ys = x⟨s⟩(tm). Then equations (3.1) become the exchange relations associated with this pattern. To illustrate, let n = 4. Then the clusters will look like this: y1,y2,y3,y0 • t0 4 −−−−−−− y1,y2,y3,y4 • t1 1 −−−−−−− y5,y2,y3,y4 • t2 2 −−−−−−− y5,y6,y3,y4 • t3 3 −−−−−−− y5,y6,y7,y4 • t4 4 −−−−−−−· · · . In order to include this situation into the setup of Section 2 (cf. Figure 1), we create an infinite “caterpillar tree” whose “spine” is formed by the vertices tm, m > 0. We thus attach the missing n−2 “legs” with labels in [n]−{⟨m −1⟩, ⟨m⟩}, to each vertex tm. Our next goal is to state conditions on the polynomial F which make it possible to assign exchange polynomials satisfying (2.3)–(2.5) to the newly constructed legs. The first requirement (cf. (2.3)) is: The polynomial F is not divisible by any xi, i ∈[n −1]. (3.4) For m ∈[n −1], we set Qm = Fm|xn←0 = F(xm+1, . . . , xn−1, 0, x1, . . . , xm−1). (3.5) Our second requirement is Each Qm is an irreducible element of A[x±1 1 , . . . , x±1 n−1]. (3.6) To state our most substantial requirement, we recursively define a sequence of polynomials Gn−1, . . . , G1, G0 ∈A[x1, . . . , xn−1]; more precisely, each Gm will be defined up to a multiple in A×. (Later, G1, . . . , Gn−2 will become the exchange polynomials assigned to the “legs” of the caterpillar labeled by n = ⟨0⟩; see Fig- ure 2.) We set Gn−1 = F, and obtain each Gm−1 from Gm, as follows. Let ∼ Gm−1= Gm xm←Qm xm . (3.7) THE LAURENT PHENOMENON 9 • ⟨0⟩ −−−−−−− F⟨0⟩ • ⟨1⟩ −−−−−−− F⟨1⟩ • ⟨0⟩ G1 • ⟨2⟩ −−−−−−− F⟨2⟩ • ⟨0⟩ G2 • ⟨3⟩ −−−−−−− F⟨3⟩ • ⟨0⟩ −−−−−−− F⟨0⟩ • −−−· · · . Figure 2. Constructing a caterpillar; n = 4. Let L be a Laurent monomial in x1, . . . , xn−1, with coefficient in A, such that ≈ Gm−1= ∼ Gm−1 L (3.8) is a polynomial in A[x1, . . . , xn−1] not divisible by any xi or by any non-invertible scalar in A. Such an L is unique up to a multiple in A×. Finally, we set Gm−1 = ≈ Gm−1 Qbm , (3.9) where Qb m is the maximal power of Qm that divides ≈ Gm−1. With all this notation, our final requirement is: G0 = F. (3.10) Theorem 3.1. Let F be a polynomial in the variables x1, . . . , xn−1 with coefficients in a unique factorization domain A satisfying conditions (3.4), (3.6), and (3.10). Then every term of the sequence (yi) defined by the recurrence ym+n = F(ym+1, . . . , ym+n−1) ym is a Laurent polynomial in the initial n terms, with coefficients in A. Proof. To prove the Laurentness of some yN, we will apply Theorem 2.1 to the caterpillar tree constructed as follows. We set thead = tN−n+1; this corresponds to the first cluster containing yN. As a path from t0 to thead, we take a finite segment of (3.3): t0 ⟨0⟩ −−−−−−− F⟨0⟩ t1 ⟨1⟩ −−−−−−− F⟨1⟩ t2 ⟨2⟩ −−−−−−− F⟨2⟩ · · · ⟨N−1⟩ −−−−−−− F⟨N−1⟩ tN−n ⟨N⟩ −−−−−−− F⟨N⟩ tN−n+1 . (3.11) We then define the exchange polynomial Gj,k−1 associated with the leg labeled j attached to a vertex tk on the spine (see Figure 3) by Gj,k−1 = G⟨k−j−1⟩(x⟨j+1⟩, . . . , xn, x1, . . . , x⟨j−1⟩), where in the right-hand side, we use the polynomials G1, . . . , Gn−2 constructed in (3.7)–(3.9) above. It remains to verify that this exchange pattern satisfies (2.3), (2.4), and (2.5). Condition (2.3) for the edges appearing in (3.11) is immediate from (3.4), while for the rest of the edges, it follows from the definition of ≈ Gm−1 in (3.8). 10 SERGEY FOMIN AND ANDREI ZELEVINSKY • ⟨k−1⟩ −−−−−−− F⟨k−1⟩ tk j Gj,k−1 • ⟨k⟩ −−−−−−− F⟨k⟩ • Figure 3. Turning to (2.4), we first note that we may assume i = ⟨0⟩= n (otherwise apply a cyclic shift of indices). Under this assumption, we can identify the polynomials P and Q0 in (2.4) with the polynomials Gm−1 and Qm in (3.9), for some value of m. (The special case of P attached to one of the edges in (3.11) corresponds to m = 1, and its validity requires (3.10).) Then the condition gcd(Gm−1, Qm) = 1 follows from (3.6) and the choice of the exponent b in (3.9). Finally, (2.5) is ensured by the construction (3.7)–(3.9), which was designed expressly for this purpose. As before, the special case of P attached to one of the edges in (3.11) holds due to (3.10). □ In the rest of this section, we give a few applications of Theorem 3.1. In all of them, conditions (3.4) and (3.6) are immediate, so we concentrate on the verification of (3.10). Example 3.2. Let a and b be positive integers, and let the sequence y0, y1, . . . satisfy the recurrence yk = ya k−2yb k−1 + 1 yk−3 . We claim that every term of the sequence is a Laurent polynomial over Z in y0, y1, and y2. To prove this, we set n = 3 and construct the polynomials G2, G1, and G0 using (3.7)–(3.9). Initializing G2 = F(x1, x2) = xa 1xb 2 + 1, we obtain: Q2 = F(0, x1) = 1, ∼ G1= F x2←Q2 x2 = xa 1x−b 2 + 1, G1 = ≈ G1= xa 1 + xb 2, Q1 = F(x2, 0) = 1, ∼ G0= G1 x1←Q1 x1 = x−a 1 + xb 2, G0 = ≈ G0= 1 + xa 1xb 2 = F, as desired. Example 3.3. (Generalized Somos-4 sequence) Let a, b, and c be positive integers, and let the sequence y0, y1, . . . satisfy the recurrence yk = ya k−3yc k−1 + yb k−2 yk−4 . (The Somos-4 sequence [7], introduced by Michael Somos, is the special case a = c = 1, b = 2.) Again, each yi is a Laurent polynomial in the initial terms y0, y1, y2, and y3. To prove this, we set n = 4 and compute G3, . . . , G0 using (3.7)–(3.9) and beginning with G3 = F = xa 1xc 3 + xb 2: Q3 =F(0, x1, x2)=xb 1, G3 x3←Q3 x3=xa+bc 1 x−c 3 + xb 2, G2 =xa+bc 1 + xb 2xc 3, Q2 =F(x3, 0, x1)=xc 1xa 3, G2 x2←Q2 x2=xa+bc 1 +xbc 1 x−b 2 xab+c 3 , G1 =xa 1xb 2 + xab+c 3 , Q1 =F(x2, x3, 0)=xb 3, G1 x1←Q1 x1=x−a 1 xb 2xab 3 + xab+c 3 , G0 =xb 2+xa 1xc 3 =F, THE LAURENT PHENOMENON 11 and the claim follows. Remark 3.4. The Laurent phenomena in Theorems 1.4 and 1.8 can also be proved by applying Theorem 3.1: in the former (resp., latter) case, the polynomial F is given by F = αxpxn−p + βxqxn−q + γxrxn−r (resp., F = αxpxn−p + βxqxn−q). The proofs are straightforward but rather long. Shorter proofs, based on J. Propp’s idea of viewing one-dimensional recurrences as “projections” of multi-dimensional ones, are given in Section 4 below. 4. Two- and three-dimensional recurrences In this section, we use the strategy of Section 3 to establish the Laurent phenom- enon for several recurrences involving two- and three-dimensional arrays. Our first example generalizes a construction (and the corresponding Laurentness conjecture) suggested by Noam Elkies and communicated by James Propp. Even though the Laurent phenomenon in this example can be deduced from Theorem 1.6, we choose to give a self-contained treatment, for the sake of exposition. Example 4.1. (The knight recurrence) Consider a two-dimensional array (yij)i,j≥0 whose entries satisfy the recurrence yi,jyi−2,j−1 = αyi,j−1yi−2,j + βyi−1,jyi−1,j−1 . (4.1) We will prove that every yij is a Laurent polynomial in the initial entries Yinit = {yab : a < 2 or b < 1}, with coefficients in the ring A = Z[α, β]. We will refer to Yinit as the initial cluster, even though it is an infinite set. Notice, however, that each individual yij only depends on finitely many variables {yab ∈Yinit : a ≤i, b ≤j}. Similarly to Section 3, we will use the exchange relations (4.1) to create a se- quence of clusters satisfying the Caterpillar Lemma (Theorem 2.1). This is done in the following way. Let us denote by H = Z2 ≥0 the underlying set of indices; for h = (i, j) ∈H, we will write yh = yij . The variables of the initial cluster have labels in the set Hinit = {(i, j) ∈H : i < 2 or j < 1}. In Figure 4, the elements of Hinit are marked by •’s. We introduce the product partial order on H: (i1, j1) ≤(i2, j2) def ⇔(i1 ≤i2) and (j1 ≤j2). (4.2) For an element h = (i, j) ∈H −Hinit , let us denote h−= (i −2, j −1); in this notation, the exchange relation (4.1) expresses the product yh ·yh−as a polynomial in the variables yh′ , for h−< h′ < h. We write h−∼h, and extend this to an equivalence relation ∼on H. The equivalence class of h is denoted by ⟨h⟩. These classes are shown as slanted lines in Figure 4. All our exchange polynomials will belong to the ring A[xa : a ∈H/∼]. Note that Hinit has exactly one representative from each equivalence class. We will now construct a sequence of subsets H0 = Hinit, H1, H2, . . . , each having this 12 SERGEY FOMIN AND ANDREI ZELEVINSKY ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟ ✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟ ✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟ ✟✟✟✟ ✟✟ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ ❝ s s s s s s s s s s s s s s s s s s s s s s s s s 0 H 0 Figure 4. The initial cluster and the equivalence classes ⟨h⟩ property, using the following recursive rule. Let us fix a particular linear extension of the partial order (4.2), say, (i1, j1) ⪯(i2, j2) def ⇔(i1 + j1 < i2 + j2) or (i1 + j1 = i2 + j2 and i1 ≤i2). Restricting this linear ordering to the complement H −Hinit of the initial cluster, we obtain a numbering of the elements of this complement by positive integers: h0 = (2, 1), h1 = (2, 2), h2 = (3, 1), h3 = (2, 3), h4 = (3, 2), h5 = (4, 1), h6 = (2, 4), h7 = (3, 3), h8 = (4, 2), and so on. Having constructed Hm, we let Hm+1 = Hm ∪{hm} −{h− m}. To illustrate, the set H9 is shown in Figure 5. s s s s s s s s s s s s s s s s s s s s s s s s s 0 0 Figure 5. Indexing set H9 We next create the infinite exchange pattern t0 ⟨h0⟩ −−−−−−− P⟨h0⟩ t1 ⟨h1⟩ −−−−−−− P⟨h1⟩ t2 ⟨h2⟩ −−−−−−− P⟨h2⟩ t3 ⟨h3⟩ −−−−−−− P⟨h3⟩ t4 −−−· · · . (4.3) (cf. (3.3)) The cluster at each point tm is given by x(tm) = {yh : h ∈Hm}; as before, each cluster variable yh corresponds to the variable x⟨h⟩. The exchange THE LAURENT PHENOMENON 13 polynomial P⟨h⟩for an edge • ⟨h⟩ −−−• with h = (i, j) is given by P⟨h⟩= αx⟨(i,j−1)⟩x⟨(i−2,j)⟩+ βx⟨(i−1,j)⟩x⟨(i−1,j−1)⟩. (4.4) Then equations (4.1) become the exchange relations associated with this pattern. To establish the Laurent phenomenon, we will complete the caterpillar pattern by attaching “legs” to each vertex tm and assigning exchange polynomials to these legs so that the appropriate analogues of conditions (3.4), (3.6) and (3.10) are satisfied. Since we now work over the polynomial ring A[xa : a ∈H/∼] in infinitely many indeterminates, the number of legs attached to every vertex tm will also be infinite (one for every label a different from ⟨hm−1⟩and ⟨hm⟩). This will not matter much for our argument though: to prove the Laurentness for any yhm, we will simply restrict our attention to the finite part of the infinite caterpillar tree lying between t0 and thead = tm+1, and to the legs labeled by ⟨hk⟩for 0 ≤k ≤m. The role of conditions (3.4) and (3.6) is now played by the observation that each exchange polynomial P⟨h⟩is not divisible by any variable xa , and furthermore every specialization P⟨h⟩ xa←0 is an irreducible element of the Laurent polynomial ring. To formulate the analogue of (3.10), let us fix an equivalence class a ∈H/∼ and concentrate on defining the exchange polynomials for the legs labeled by a and attached to the vertices squeezed between two consecutive occurrences of the label a on the spine: • a −−− Pa • a1 −−−• a G1 • a2 −−−• a G2 • −−−• a • −−−• a • aN−2 −−−• a GN−2 • aN−1 −−−• a −−− Pa • . (4.5) We note that the labels a1, . . . , aN−1 ∈H/∼appearing on the spine between these two occurrences of a are distinct. For m = N −2, N −3, . . . , 1, we denote by Gm the exchange polynomial to be associated with the a-labeled leg attached between the edges labeled am and am+1 (cf. (4.5)). The polynomials Gm are defined with the help of a recursive procedure analogous to (3.7)–(3.9). We initialize GN−1 = Pa, and obtain each Gm−1 from Gm, as follows. The step (3.7) is replaced by ∼ Gm−1= Gm xam←Qm xam with Qm = Pam xa←0 . (4.6) We then compute ≈ Gm−1 and Gm−1 exactly as in (3.8)–(3.9). By the argument given in the proof of Theorem 3.1, the equality G0 = Pa would imply the desired Laurentness (cf. (3.10)). 14 SERGEY FOMIN AND ANDREI ZELEVINSKY To simplify computations, we denote the equivalence classes “surrounding” a, as shown below: · · · · · · · · · · · · · · · · · · · · · · · · q p f c a · · · · · · f c a e b · · · · · · a e b g d · · · · · · · · · · · · · · · · · · · · · · · · . (4.7) In other words, if a = ⟨(i, j)⟩, then b = ⟨(i, j −1)⟩, c = ⟨(i −1, j)⟩, etc. With this notation, we can redraw the pattern (4.5) as follows: • a −−−• g −−−· · · • a Gk−1 • f −−−• a Gk • e −−−• d −−−· · · • a Gℓ−1 • c −−−• a Gℓ • b −−−• · · · a −−−•, (4.8) for appropriate values of k and ℓ. We will call a value of m essential if Gm−1 ̸= Gm . We are going to see that the essential values of m are those for which am ∈{b, c, e, f}; in the notation of (4.8), these values are ℓ+ 1, ℓ, k + 1, and k. We initialize GN−1 = Pa = αxbxf + βxcxe . The values of m in the interval ℓ< m < N are not essential since the variable xam does not enter Pa, which is furthermore not divisible by Qm (because the latter involves variables absent in Pa). The first essential value is m = ℓ+ 1, with am = b: Qℓ+1 = Pb|xa←0 = (αxaxd + βxexg)|xa←0 = βxexg , ∼ Gℓ= Pa xb← Qℓ+1 xb = α βxexg xb xf + βxcxe , Gℓ= αxgxf + xbxc . Step m = ℓ(here am = c): Qℓ= Pc|xa←0 = (αxexp + βxaxf)|xa←0 = αxexp , ∼ Gℓ−1= Gℓ xc← Qℓ xc = αxgxf + xb αxexp xc , Gℓ−1 = xcxgxf + xbxexp . Notice that Gℓ−1 does not involve xd, so the value m = k + 2 is not essential, as are the rest of the values in the interval k + 1 < m < ℓ. Step m = k + 1, with am = e: Qk+1 = Pe|xa←0 = (αxcxg + βxaxb)|xa←0 = αxcxg , ∼ Gk= xcxgxf + xbxp αxcxg xe , Gk = xfxe + αxbxp . Step m = k, with am = f: Qk = Pf|xa←0 = (αxaxq + βxcxp)|xa←0 = βxcxp , ∼ Gk−1= βxcxp xf xe + αxbxp , Gk−1 = βxcxe + αxbxf . THE LAURENT PHENOMENON 15 The values of m in the interval 0 < m < k are not essential since none of the corresponding variables xam appears in Gk−1; in particular, m = 1 is not essential, since Gk−1 does not involve xg . Hence G0 = Gk−1 = βxcxe + αxbxf = Pa , as desired. The Laurentness is proved. Remark 4.2. The Laurent phenomenon for the recurrence (4.1) actually holds in greater generality. Specifically, one can replace H by any subset of Z2 which satisfies the following analogues of conditions (1.2)–(1.3) and (1.7)–(1.8): if h ∈H, then h′ ∈H whenever h ≤h′; (4.9) for any h′ ∈H, the set {h ∈H : h ≤h′} is finite. (4.10) Then take Hinit = {h ∈H : h−/∈H}. The proof of Laurentness only needs one adjustment, concerning the choice of a linear extension ≺. Specifically, while proving that yh is given by a Laurent polynomial, take a finite set H(h) ⊂H containing h and satisfying the conditions if h′ ∈H(h), then h′′ ∈H(h) whenever h′′ ≤h′ and h′′ ∈H; (4.11) for any h′ ∈H such that h′ ≤h, there exists h′′ ∈H(h) such that (4.12) h′′ ≥h and h′′ ∼h. (The existence of H(h) follows from (4.9)–(4.10).) Then define ⪯exactly as before on the set H(h); set h′ ≺h′′ for any h′ ∈H(h) and h′′ ∈H −H(h); and define ⪯on the complement H −H(h) by an arbitrary linear extension of ≤. These conditions ensure that the sets Hm needed in the proof of Laurentness of the given yh are well defined, and that the rest of the proof proceeds smoothly. Armed with the techniques developed above in this section, we will now prove the main theorems stated in the introduction. Proof of Theorem 1.2. Our argument is parallel to that in Example 4.1, so we skip the steps which are identical in both proofs. For simplicity of exposition, we present the proof in the special case H = Z3 ≥0; the case of general H requires the same adjustments as those described in Remark 4.2. We define the product partial order ≤and a compatible linear order ⪯on H by (i1, j1, k1) ≤(i2, j2, k2) def ⇔ (i1 ≤i2) and (j1 ≤j2) and (k1 ≤k2), (i1, j1, k1) ⪯(i2, j2, k2) def ⇔ (i1 + j1 + k1 < i2 + j2 + k2) or (i1 + j1 + k1 = i2 + j2 + k2 and i1 + j1 < i2 + j2) or (i1 + j1 = i2 + j2 and k1 = k2 and i1 ≤i2). For h = (i, j, k), we set h−= (i −1, j −1, k −1); thus, the exchange relation (1.1) expresses the product yh ·yh−as a polynomial in the variables yh′ , for h−< h′ < h. All the steps in Example 4.1 leading to the creation of the infinite exchange pattern (4.3) are repeated verbatim. Instead of (4.4), the exchange polynomials P⟨h⟩along the spine are now given by P⟨(i,j,k)⟩ = αx⟨(i−1,j,k)⟩x⟨(i,j−1,k−1)⟩+βx⟨(i,j−1,k)⟩x⟨(i−1,j,k−1)⟩+γx⟨(i,j,k−1)⟩x⟨(i−1,j−1,k)⟩. 16 SERGEY FOMIN AND ANDREI ZELEVINSKY The role of (4.7) is now played by Figure 6, which shows the “vicinity” of an equivalence class a. This figure displays the orthogonal projection of H along the vector (1, 1, 1). Thus the vertices represent equivalence classes in H/∼. For example, if a = ⟨(i, j, k)⟩, then b = ⟨(i, j, k −1)⟩, c = ⟨(i, j −1, k)⟩, d = ⟨(i −1, j, k)⟩, e = ⟨(i, j −1, k −1)⟩, f = ⟨(i −1, j, k −1)⟩, g = ⟨(i −1, j −1, k)⟩. With this notation, we have: Pa = αxdxe + βxcxf + γxbxg . s s s s s s s s s s s s s s s s s s s ✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟✟✟✟✟ ✟✟✟✟✟✟✟✟ ✟✟✟✟ ✟✟✟✟ ❍❍❍❍❍❍❍❍ ❍❍❍❍❍❍❍❍❍❍❍❍ ❍❍❍❍❍❍❍❍❍❍❍❍❍❍❍❍ ❍❍❍❍❍❍❍❍❍❍❍❍ ❍❍❍❍❍❍❍❍✟✟ ✯ ❍ ❍ ❨ ❄ v u d f p g a b s c e q r 1 2 3 Figure 6. The cube recurrence With the polynomials G1, G2, . . . defined as in (4.5), the essential values of m are now those for which am ∈{b, c, d, e, f, g}. (The verification that the rest of the values are not essential is left to the reader.) We denote these values by m1, . . . , m6 , respectively. The computation of the polynomials Gm begins by initializing GN−1 = Pa = αxdxe + βxcxf + γxbxg . Step m = m1, am = b: Qm1 = Pb|xa←0 = αxfxq + βxexp ; ∼ Gm1−1= Gm1 xb← Qm1 b = αxdxe + βxcxf + γ αxfxq+βxexp xb xg ; Gm1−1 = αxbxdxe + βxbxcxf + αγxfxgxq + βγxexgxp . Step m = m2, am = c: Qm2 = Pc|xa←0 = αxgxr + γxexs; ∼ Gm2−1= αxbxdxe + βxb αxgxr+γxexs xc xf + αγxfxgxq + βγxexgxp ; Gm2−1 = αxbxcxdxe+αβxbxfxgxr+βγxbxexfxs+αγxcxfxgxq+βγxcxexgxp . THE LAURENT PHENOMENON 17 Step m = m3, am = d: Qm3 = Pd|xa←0 = βxgxv + γxfxu ; ∼ Gm3−1= αxbxc βxgxv+γxfxu xd xe +αβxbxfxgxr + βγxbxexfxs + αγxcxfxgxq + βγxcxexgxp ; Gm3−1 = αβxbxcxexgxv + αγxbxcxexfxu + βγxbxdxexfxs + βγxcxdxexgxp +αβxbxdxfxgxr + αγxcxdxfxgxq . Step m = m4, am = e: Qm4 = Pe|xa←0 = βxbxr + γxcxq ; ∼ Gm4−1= Qm4 xe (αβxbxcxgxv + αγxbxcxfxu + βγxbxdxfxs + βγxcxdxgxp) +αxdxfxgQm4 ; Gm4−1 = αγxbxcxfxu + βγxbxdxfxs + αxdxexfxg +αβxbxcxgxv + βγxcxdxgxp . Step m = m5, am = f: Qm5 = Pf|xa←0 = αxbxv + γxdxp ; ∼ Gm5−1= Qm5 xf (αγxbxcxu + βγxbxdxs + αxdxexg) + βxcxgQm5 ; Gm5−1 = αxdxexg + βxcxfxg + αγxbxcxu + βγxbxdxs . Step m = m6, am = g: Qm6 = Pg|xa←0 = αxcxu + βxdxs ; ∼ Gm6−1= Qm6 xg (αxdxe + βxcxf) + γxbQm6 ; Gm6−1 = αxdxe + βxcxf + γxbxg = Pa , completing the proof. □ We will now deduce the Gale-Robinson conjecture from Theorem 1.2. Proof of Theorem 1.4. To prove the Laurentness of a given element yN of the Gale-Robinson sequence (ym), we define the array (zijk)(i,j,k)∈H by setting zijk = yN+pi+qj+rk , with the indexing set H = H(N) = {(i, j, k) ∈Z3 : N + pi + qj + rk ≥0} . Then (1.4) implies that the zijk satisfy the cube recurrence (1.1). Note that H satisfies the conditions (1.2)–(1.3). Thus Theorem 1.2 applies to (zijk), with Hinit = {(a, b, c) ∈Z3 : 0 ≤N + pa + qb + rc < n}. It remains to note that yN = z000 , while for any (a, b, c) ∈Hinit , we have zabc = ym with 0 ≤m < n. □ Proof of Theorem 1.6. This theorem is proved by the same argument as The- orem 1.2. We treat the Mills-Robbins-Rumsey special case (1.9) (cf. also (1.6)); similarly to Theorem 1.2, the case of general H requires the standard adjustments described in Remark 4.2. We use the partial order on the lattice L defined by (i, j, k) ≤(i′, j′, k′) : |i′ −i| + |j′ −j| ≤k′ −k . 18 SERGEY FOMIN AND ANDREI ZELEVINSKY For h = (i, j, k) ∈L, we set h−= (i, j, k −2), and define the equivalence relation ∼ accordingly. Figure 7 shows equivalence classes “surrounding” a given class a (cf. Figure 6). s ❝ s ❝ s ❝ s ❝ s ✻ ✲ d a c e p b q s r j i Figure 7. The initialization polynomial GN−1 = Pa is given by Pa = αxcxd + βxbxe . The table below displays am, Qm, ∼ Gm−1, and Gm−1 for all essential values of m. am Qm ∼ Gm−1 Gm−1 b αxpxq αxcxd + αβ xpxq xb xe xbxcxd + βxexpxq c βxqxr β xqxr xc xbxd + βxexpxq xbxdxr + xcxexp d βxpxs β xpxs xd xbxr + xcxexp βxbxrxs + xcxdxe e αxrxs βxbxrxs + α xrxs xe xcxd βxbxe + αxcxd We see that G0 = Ge−1 = Pa , completing the proof. □ Proof of Theorem 1.8. The proof mimics the above proof of Theorem 1.4. To prove the Laurentness of an element yN of the sequence (ym) satisfying (1.10), we define the array (zijk)(i,j,k)∈H by setting zijk = yN+ℓ(i,j,k) , where ℓ(i, j, k) = n i+j+k 2 −pi −qj. The indexing set H is now given by H = H(N) = {(i, j, k) ∈Z3 : N + ℓ(i, j, k) ≥0} . Then (1.10) implies that the zijk satisfy the octahedron recurrence (1.5). It is easy to check that H satisfies the conditions (1.7)–(1.8). Thus Theorem 1.6 applies to (zijk), with Hinit = {(a, b, c) ∈L : 0 ≤N +ℓ(a, b, c) < n}, and the theorem follows. □ We conclude this section by a couple of examples in which the Laurent phenom- enon is established by the same technique as above. In each case, we provide: • a picture of the equivalence classes “surrounding” a given class a, which plays the role of (4.7) in Example 4.1; • the initialization polynomial GN−1 = Pa; • a table showing am, Qm, ∼ Gm−1, and Gm−1 for all essential values of m. Example 4.3. (Frieze patterns) The generalized frieze pattern recurrence (cf., e.g., [3, 11]) is yijyi−1,j−1 = ε yi,j−1yi−1,j + β , (4.13) THE LAURENT PHENOMENON 19 where ε ∈{1, −1}. To prove Laurentness (over Z[β]), refer to Figure 8. Then Pa = ε xb xc + β, and the essential steps are: am Qm ∼ Gm−1 Gm−1 b β ε β xc xb + β ε xc + xb c β ε β xc + xb β + ε−1xb xc s s s s ❅ ❅ ❅ ❅ ❅ ❅ ■ ✒ c b a a i j Figure 8. Example 4.4. (Number walls) Consider the 2-dimensional recurrence yijyi,j−2 = yp i−1,j−1yr i+1,j−1 + yq i,j−1 , (4.14) where p, q, and r are nonnegative integers. To prove Laurentness, refer to Figure 9. Then Pa = xp dxr b + xq c, and the essential steps are: am Qm ∼ Gm−1 Gm−1 b xq f xp d xq f xb r + xq c xp dxqr f + xq cxr b c xp gxr f xp dxqr f + xp gxr f xc qxr b xp dxq c + xpq g xr b d xq g xq g xd pxq c + xpq g xr b xq c + xr bxp d s s s s s s s ✻ ✲ d c b a a g f j i Figure 9. Remark 4.5. As pointed out by J. Propp, the Laurent phenomenon for certain special cases of Examples 4.3 and 4.4 can be obtained by specialization of Exam- ple 1.5. 20 SERGEY FOMIN AND ANDREI ZELEVINSKY 5. Homogeneous exchange patterns In this section, we deduce Theorem 1.10 and a number of similar results from the following corollary of Theorem 2.1. Corollary 5.1. Let A be a unique factorization domain. Assume that a collection of nonzero polynomials P1, . . . , Pn ∈A[x1, . . . , xn] satisfies the following conditions: Each Pk does not depend on xk, and is not divisible by any xi, i ∈[n]. (5.1) For any i ̸= j, the polynomials Pji def = (Pj)|xi=0 and Pi are coprime. (5.2) For any i ̸= j, we have (5.3) L · P b ji · Pi =Pi xj← Pji xj , where b is a nonnegative integer, and L is a Laurent monomial whose coefficient lies in A and is coprime with Pi. Let us define the rational transformations Fi, i ∈[n], by Fi : (x1, . . . , xn) 7→(x1, . . . , xi−1, Pi xi , xi+1, . . . , xn). Then any composition of the form Fi1 ◦· · · ◦Fim is given by Laurent polynomials with coefficients in A. Proof. Let Tn denote a regular tree of degree n whose edges are labeled by elements of [n] so that all edges incident to a given vertex have different labels. Assigning Pi as an exchange polynomial for every edge of Tn labeled by i, we obtain a “homogeneous” exchange pattern on Tn satisfying conditions (2.3)–(2.5) in Theorem 2.1. This implies the desired Laurentness. □ Example 5.2. Let n ≥3 be an integer, and let P be a quadratic form given by P(x1, . . . , xn) = x2 1 + · · · + x2 n + X i<j αijxixj . Theorem 1.10 is a special case of Corollary 5.1 for Pi = P xi=0 and A = Z[αij : i < j]. Conditions (5.1)–(5.2) are clear. To verify (5.3), note that Pi = Pji + x2 j + xj X k αkjxk + X ℓ αjℓxl  , where k (resp. ℓ) runs over all indices such that k ̸= i and k < j (resp. ℓ̸= i and ℓ> j). It follows that Pi xj← Pji xj = Pji + P 2 ji x2 j + Pji xj X k αkjxk + X ℓ αjℓxl  = Pji x2 j Pi , verifying (5.3). In the remainder of this section, we list a few more applications of Corollary 5.1. In each case, the verification of its conditions is straightforward. Example 5.3. Let P and Q be monic palindromic polynomials in one variable: P(x) = (1 + xd) + α1(x + xd−1) + α2(x2 + xd−2) + . . . ; Q(x) = (1 + xe) + β1(x + xe−1) + β2(x2 + xe−2) + . . . . THE LAURENT PHENOMENON 21 Then every member of the sequence y0, y1, . . . defined by the recurrence yk =        µ2P(yk−1/λ) yk−2 if k is odd; λ2Q(yk−1/µ) yk−2 if k is even is a Laurent polynomial in y0 and y1 with coefficients in A = Z[λ±1, µ±1, αi, βi]. This follows from Corollary 5.1 with n = 2, P1 = µ2P(x2/λ), and P2 = λ2Q(x1/µ). Example 5.4. Consider the sequence y0, y1, . . . defined by the recurrence yk = y2 k−1 + cyk−1 + d yk−2 . (5.4) Every term of this sequence is a Laurent polynomial in y0 and y1 with coefficients in Z[c, d]. Example 5.5. Define the rational transformations F1, F2, F3 by F1 : (x1, x2, x3) 7→( x2 + x2 3 + x2 2x3 x1 , x2, x3 ), F2 : (x1, x2, x3) 7→( x1, x1 + x3 x2 , x3 ), F3 : (x1, x2, x3) 7→( x1, x2, x2 + x2 1 + x2 2x1 x3 ). (5.5) Then any composition Fi1 ◦Fi2 ◦· · · is given by (x1, x2, x3) 7→(G1, G2, G3), where G1, G2, G3 are Laurent polynomials in x1, x2, x3 over Z. References [1] D. Bressoud and J. Propp, How the alternating sign matrix conjecture was solved, Notices Amer. Math. Soc. 46 (1999), no. 6, 637–646. [2] J. H. Conway and R. K. Guy, The book of numbers, Copernicus, New York, 1996. [3] J. H. Conway and H. S. M. Coxeter, Triangulated polygons and frieze patterns, Math. Gaz. 57 (1973), no. 400, 87–94; and ibid., no. 401, 175–183. [4] S. Fomin and A. Zelevinsky, Double Bruhat cells and total positivity, J. Amer. Math. Soc. 12 (1999), 335–380. [5] S. Fomin and A. Zelevinsky, Total positivity: tests and parametrizations, Math. Intelligencer 22 (2000), no. 1, 23–33. [6] S. Fomin and A. Zelevinsky, Cluster algebras I: Foundations, preprint math.RT/0104151. [7] D. Gale, The strange and surprising saga of the Somos sequences, Math. Intelligencer 13 (1991), no. 1, 40–43. [8] R. K. Guy, Unsolved problems in number theory, 2nd edition, Springer-Verlag, New York, 1994. [9] W. H. Mills, D. P. Robbins, and H. Rumsey, Alternating sign matrices and descending plane partitions. J. Combin. Theory Ser. A 34 (1983), 340–359. [10] J. Propp, The Many Faces of Alternating-Sign Matrices, Discrete Math. Theor. Comput. Sci., to appear. [11] R. P. Stanley, Enumerative combinatorics, vol. 2, Cambridge University Press, 1999. Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA E-mail address: fomin@umich.edu Department of Mathematics, Northeastern University, Boston, MA 02115 E-mail address: andrei@neu.edu
arXiv:math/0104241v1 [math.CO] 25 Apr 2001 THE LAURENT PHENOMENON SERGEY FOMIN AND ANDREI ZELEVINSKY Abstract. A composition of birational maps given by Laurent polynomials need not be given by Laurent polynomials; however, sometimes-quite unexpectedly-it does. We suggest a unified treatment of this phenomenon, which covers a large class of applications. In particular, we settle in the affirmative a conjecture of D. Gale and R. Robinson on integrality of generalized Somos sequences, and prove the Laurent property for several multidimensional recurrences, confirming conjectures by J. Propp, N. Elkies, and M. Kleber. Contents 1. Introduction 1 2. The Caterpillar Lemma 4 3. One-dimensional recurrences 8 4. Two- and three-dimensional recurrences 11 5. Homogeneous exchange patterns 20 References 21 1. Introduction In this paper, we suggest a unified explanation for a number of instances in which certain recursively defined rational functions prove, unexpectedly, to be Laurent polynomials. We begin by presenting several instances of this Laurent phenomenon established in the paper. Example 1.1. (The cube recurrence) Consider a 3-dimensional array (yijk : (i, j, k) ∈H) whose elements satisfy the recurrence yi,j,k = αyi-1,j,kyi,j-1,k-1 + βyi,j-1,kyi-1,j,k-1 + γyi,j,k-1yi-1,j-1,k yi-1,j-1,k-1 . (1.1) Here H can be any non-empty subset of Z3 satisfying the following conditions: if (i, j, k) ∈H, then (i′, j′, k′) ∈H whenever i ≤i′, j ≤j′, k ≤k′; (1.2) for any (i′, j′, k′) ∈H, the set {(i, j, k) ∈H : i ≤i′, j ≤j′, k ≤k′} is finite. (1.3) Date: April 25, 2001. 1991 Mathematics Subject Classification. Primary 14E05, Secondary 05E99, 11B83. Key words and phrases. Laurent phenomenon, Somos sequence, Gale-Robinson conjecture. The authors were supported in part by NSF grants #DMS-0049063, #DMS-0070685 (S.F.), and #DMS-9971362 (A.Z.). 1 2 SERGEY FOMIN AND ANDREI ZELEVINSKY Theorem 1.2. Let Hinit = {(a, b, c) ∈H : (a -1, b -1, c -1) /∈H}. For every (i, j, k) ∈H, the entry yi,j,k is a Laurent polynomial with coefficients in Z[α, β, γ] in the initial entries ya,b,c, for (a, b, c) ∈Hinit. The cube recurrence (with α = β = γ = 1) was introduced by James Propp [10], who was also the one to conjecture Laurentness in the case when H ⊂Z3 is given by the condition i + j + k ≥0; in this case Hinit consists of all (a, b, c) ∈H such that a + b + c ∈{0, 1, 2}. Another natural choice of H was suggested by Michael Kleber: H = Z3 ≥0, in which case Hinit = {(a, b, c) ∈Z3 ≥0 : abc = 0}. Example 1.3. (The Gale-Robinson sequence) Let p, q, and r be distinct positive integers, let n = p + q + r, and let the sequence y0, y1, . . . satisfy the recurrence yk+n = αyk+pyk+n-p + βyk+qyk+n-q + γyk+ryk+n-r yk . (1.4) David Gale and Raphael Robinson conjectured (see [7] and [8, E15]) that every term of such a sequence is an integer provided y0 = · · · = yn-1 = 1 and α, β, γ are positive integers. Using Theorem 1.2, we prove the following stronger statement. Theorem 1.4. As a function of the initial terms y0, . . . , yn-1, every term of the Gale-Robinson sequence is a Laurent polynomial with coefficients in Z[α, β, γ]. We note that the special case α = β = γ = 1, p = 1, q = 2, r = 3, n = 6 (resp., r = 4, n = 7) of the recurrence (1.4) is the Somos-6 (resp., Somos-7) recurrence [7]. Example 1.5. (Octahedron recurrence) Consider the 3-dimensional recurrence yi,j,k = αyi+1,j,k-1yi-1,j,k-1 + βyi,j+1,k-1yi,j-1,k-1 yi,j,k-2 (1.5) for an array (yijk)(i,j,k)∈H whose indexing set H is contained in the lattice L = {(i, j, k) ∈Z3 : i + j + k ≡0 mod 2} (1.6) and satisfies the following analogues of conditions (1.2)-(1.3): if (i, j, k) ∈H, then (i′, j′, k′) ∈H whenever |i′ -i| + |j′ -j| ≤k′ -k; (1.7) for any (i′, j′, k′) ∈H, the set {(i, j, k) ∈H : |i′ -i| + |j′ -j| ≤k′ -k} (1.8) is finite. Theorem 1.6. Let Hinit = {(a, b, c) ∈H : (a, b, c -2) /∈H}. For every (i, j, k) ∈ H, the entry yi,j,k is a Laurent polynomial with coefficients in Z[α, β] in the initial entries ya,b,c, for (a, b, c) ∈Hinit. The octahedron recurrence on the half-lattice H = {(i, j, k) ∈L : k ≥0} (1.9) was studied by W. H. Mills, D. P. Robbins, and H. Rumsey in their pioneering work [9] on the Alternating Sign Matrix Conjecture (cf. [1] and [10, Section 10] for further references); in particular, they proved the special case of Theorem 1.6 for this choice of H. THE LAURENT PHENOMENON 3 Example 1.7. (Two-term version of the Gale-Robinson sequence) Let p, q, and n be positive integers such that p 0. We thus attach the missing n-2 "legs" with labels in [n]-{⟨m -1⟩, ⟨m⟩}, to each vertex tm. Our next goal is to state conditions on the polynomial F which make it possible to assign exchange polynomials satisfying (2.3)-(2.5) to the newly constructed legs. The first requirement (cf. (2.3)) is: The polynomial F is not divisible by any xi, i ∈[n -1]. (3.4) For m ∈[n -1], we set Qm = Fm|xn←0 = F(xm+1, . . . , xn-1, 0, x1, . . . , xm-1). (3.5) Our second requirement is Each Qm is an irreducible element of A[x±1 1 , . . . , x±1 n-1]. (3.6) To state our most substantial requirement, we recursively define a sequence of polynomials Gn-1, . . . , G1, G0 ∈A[x1, . . . , xn-1]; more precisely, each Gm will be defined up to a multiple in A×. (Later, G1, . . . , Gn-2 will become the exchange polynomials assigned to the "legs" of the caterpillar labeled by n = ⟨0⟩; see Figure 2.) We set Gn-1 = F, and obtain each Gm-1 from Gm, as follows. Let ∼ Gm-1= Gm xm←Qm xm . (3.7) THE LAURENT PHENOMENON 9 • ⟨0⟩ ------- F⟨0⟩ • ⟨1⟩ ------- F⟨1⟩ • ⟨0⟩ G1 • ⟨2⟩ ------- F⟨2⟩ • ⟨0⟩ G2 • ⟨3⟩ ------- F⟨3⟩ • ⟨0⟩ ------- F⟨0⟩ • ---· · · . Figure 2. Constructing a caterpillar; n = 4. Let L be a Laurent monomial in x1, . . . , xn-1, with coefficient in A, such that ≈ Gm-1= ∼ Gm-1 L (3.8) is a polynomial in A[x1, . . . , xn-1] not divisible by any xi or by any non-invertible scalar in A. Such an L is unique up to a multiple in A×. Finally, we set Gm-1 = ≈ Gm-1 Qbm , (3.9) where Qb m is the maximal power of Qm that divides ≈ Gm-1. With all this notation, our final requirement is: G0 = F. (3.10) Theorem 3.1. Let F be a polynomial in the variables x1, . . . , xn-1 with coefficients in a unique factorization domain A satisfying conditions (3.4), (3.6), and (3.10). Then every term of the sequence (yi) defined by the recurrence ym+n = F(ym+1, . . . , ym+n-1) ym is a Laurent polynomial in the initial n terms, with coefficients in A. Proof. To prove the Laurentness of some yN, we will apply Theorem 2.1 to the caterpillar tree constructed as follows. We set thead = tN-n+1; this corresponds to the first cluster containing yN. As a path from t0 to thead, we take a finite segment of (3.3): t0 ⟨0⟩ ------- F⟨0⟩ t1 ⟨1⟩ ------- F⟨1⟩ t2 ⟨2⟩ ------- F⟨2⟩ · · · ⟨N-1⟩ ------- F⟨N-1⟩ tN-n ⟨N⟩ ------- F⟨N⟩ tN-n+1 . (3.11) We then define the exchange polynomial Gj,k-1 associated with the leg labeled j attached to a vertex tk on the spine (see Figure 3) by Gj,k-1 = G⟨k-j-1⟩(x⟨j+1⟩, . . . , xn, x1, . . . , x⟨j-1⟩), where in the right-hand side, we use the polynomials G1, . . . , Gn-2 constructed in (3.7)-(3.9) above. It remains to verify that this exchange pattern satisfies (2.3), (2.4), and (2.5). Condition (2.3) for the edges appearing in (3.11) is immediate from (3.4), while for the rest of the edges, it follows from the definition of ≈ Gm-1 in (3.8). 10 SERGEY FOMIN AND ANDREI ZELEVINSKY • ⟨k-1⟩ ------- F⟨k-1⟩ tk j Gj,k-1 • ⟨k⟩ ------- F⟨k⟩ • Figure 3. Turning to (2.4), we first note that we may assume i = ⟨0⟩= n (otherwise apply a cyclic shift of indices). Under this assumption, we can identify the polynomials P and Q0 in (2.4) with the polynomials Gm-1 and Qm in (3.9), for some value of m. (The special case of P attached to one of the edges in (3.11) corresponds to m = 1, and its validity requires (3.10).) Then the condition gcd(Gm-1, Qm) = 1 follows from (3.6) and the choice of the exponent b in (3.9). Finally, (2.5) is ensured by the construction (3.7)-(3.9), which was designed expressly for this purpose. As before, the special case of P attached to one of the edges in (3.11) holds due to (3.10). □ In the rest of this section, we give a few applications of Theorem 3.1. In all of them, conditions (3.4) and (3.6) are immediate, so we concentrate on the verification of (3.10). Example 3.2. Let a and b be positive integers, and let the sequence y0, y1, . . . satisfy the recurrence yk = ya k-2yb k-1 + 1 yk-3 . We claim that every term of the sequence is a Laurent polynomial over Z in y0, y1, and y2. To prove this, we set n = 3 and construct the polynomials G2, G1, and G0 using (3.7)-(3.9). Initializing G2 = F(x1, x2) = xa 1xb 2 + 1, we obtain: Q2 = F(0, x1) = 1, ∼ G1= F x2←Q2 x2 = xa 1x-b 2 + 1, G1 = ≈ G1= xa 1 + xb 2, Q1 = F(x2, 0) = 1, ∼ G0= G1 x1←Q1 x1 = x-a 1 + xb 2, G0 = ≈ G0= 1 + xa 1xb 2 = F, as desired. Example 3.3. (Generalized Somos-4 sequence) Let a, b, and c be positive integers, and let the sequence y0, y1, . . . satisfy the recurrence yk = ya k-3yc k-1 + yb k-2 yk-4 . (The Somos-4 sequence [7], introduced by Michael Somos, is the special case a = c = 1, b = 2.) Again, each yi is a Laurent polynomial in the initial terms y0, y1, y2, and y3. To prove this, we set n = 4 and compute G3, . . . , G0 using (3.7)-(3.9) and beginning with G3 = F = xa 1xc 3 + xb 2: Q3 =F(0, x1, x2)=xb 1, G3 x3←Q3 x3=xa+bc 1 x-c 3 + xb 2, G2 =xa+bc 1 + xb 2xc 3, Q2 =F(x3, 0, x1)=xc 1xa 3, G2 x2←Q2 x2=xa+bc 1 +xbc 1 x-b 2 xab+c 3 , G1 =xa 1xb 2 + xab+c 3 , Q1 =F(x2, x3, 0)=xb 3, G1 x1←Q1 x1=x-a 1 xb 2xab 3 + xab+c 3 , G0 =xb 2+xa 1xc 3 =F, THE LAURENT PHENOMENON 11 and the claim follows. Remark 3.4. The Laurent phenomena in Theorems 1.4 and 1.8 can also be proved by applying Theorem 3.1: in the former (resp., latter) case, the polynomial F is given by F = αxpxn-p + βxqxn-q + γxrxn-r (resp., F = αxpxn-p + βxqxn-q). The proofs are straightforward but rather long. Shorter proofs, based on J. Propp's idea of viewing one-dimensional recurrences as "projections" of multi-dimensional ones, are given in Section 4 below. 4. Two- and three-dimensional recurrences In this section, we use the strategy of Section 3 to establish the Laurent phenomenon for several recurrences involving two- and three-dimensional arrays. Our first example generalizes a construction (and the corresponding Laurentness conjecture) suggested by Noam Elkies and communicated by James Propp. Even though the Laurent phenomenon in this example can be deduced from Theorem 1.6, we choose to give a self-contained treatment, for the sake of exposition. Example 4.1. (The knight recurrence) Consider a two-dimensional array (yij)i,j≥0 whose entries satisfy the recurrence yi,jyi-2,j-1 = αyi,j-1yi-2,j + βyi-1,jyi-1,j-1 . (4.1) We will prove that every yij is a Laurent polynomial in the initial entries Yinit = {yab : a j). It follows that Pi xj← Pji xj = Pji + P 2 ji x2 j + Pji xj X k αkjxk + X l αjlxl = Pji x2 j Pi , verifying (5.3). In the remainder of this section, we list a few more applications of Corollary 5.1. In each case, the verification of its conditions is straightforward. Example 5.3. Let P and Q be monic palindromic polynomials in one variable: P(x) = (1 + xd) + α1(x + xd-1) + α2(x2 + xd-2) + . . . ; Q(x) = (1 + xe) + β1(x + xe-1) + β2(x2 + xe-2) + . . . . THE LAURENT PHENOMENON 21 Then every member of the sequence y0, y1, . . . defined by the recurrence yk =        μ2P(yk-1/λ) yk-2 if k is odd; λ2Q(yk-1/μ) yk-2 if k is even is a Laurent polynomial in y0 and y1 with coefficients in A = Z[λ±1, μ±1, αi, βi]. This follows from Corollary 5.1 with n = 2, P1 = μ2P(x2/λ), and P2 = λ2Q(x1/μ). Example 5.4. Consider the sequence y0, y1, . . . defined by the recurrence yk = y2 k-1 + cyk-1 + d yk-2 . (5.4) Every term of this sequence is a Laurent polynomial in y0 and y1 with coefficients in Z[c, d]. Example 5.5. Define the rational transformations F1, F2, F3 by F1 : (x1, x2, x3) 7→( x2 + x2 3 + x2 2x3 x1 , x2, x3 ), F2 : (x1, x2, x3) 7→( x1, x1 + x3 x2 , x3 ), F3 : (x1, x2, x3) 7→( x1, x2, x2 + x2 1 + x2 2x1 x3 ). (5.5) Then any composition Fi1 ◦Fi2 ◦· · · is given by (x1, x2, x3) 7→(G1, G2, G3), where G1, G2, G3 are Laurent polynomials in x1, x2, x3 over Z. References [1] D. Bressoud and J. Propp, How the alternating sign matrix conjecture was solved, Notices Amer. Math. Soc. 46 (1999), no. 6, 637-646. [2] J. H. Conway and R. K. Guy, The book of numbers, Copernicus, New York, 1996. [3] J. H. Conway and H. S. M. Coxeter, Triangulated polygons and frieze patterns, Math. Gaz. 57 (1973), no. 400, 87-94; and ibid., no. 401, 175-183. [4] S. Fomin and A. Zelevinsky, Double Bruhat cells and total positivity, J. Amer. Math. Soc. 12 (1999), 335-380. [5] S. Fomin and A. Zelevinsky, Total positivity: tests and parametrizations, Math. Intelligencer 22 (2000), no. 1, 23-33. [6] S. Fomin and A. Zelevinsky, Cluster algebras I: Foundations, preprint math.RT/0104151. [7] D. Gale, The strange and surprising saga of the Somos sequences, Math. Intelligencer 13 (1991), no. 1, 40-43. [8] R. K. Guy, Unsolved problems in number theory, 2nd edition, Springer-Verlag, New York, 1994. [9] W. H. Mills, D. P. Robbins, and H. Rumsey, Alternating sign matrices and descending plane partitions. J. Combin. Theory Ser. A 34 (1983), 340-359. [10] J. Propp, The Many Faces of Alternating-Sign Matrices, Discrete Math. Theor. Comput. Sci., to appear. [11] R. P. Stanley, Enumerative combinatorics, vol. 2, Cambridge University Press, 1999. 48109, USA E-mail address: 02115 E-mail address:
2510.14979
FROM PIXELS TO WORDS – TOWARDS NATIVE VISION- LANGUAGE PRIMITIVES AT SCALE Haiwen Diao1 Mingxuan Li2 Silei Wu3 Linjun Dai3 Xiaohua Wang2 Hanming Deng3 Lewei Lu3 Dahua Lin3 Ziwei Liu1 1S-Lab, Nanyang Technological University 2Xi’an Jiaotong University 3SenseTime Research Website: https://github.com/EvolvingLMMs-Lab/NEO ABSTRACT The edifice of native Vision-Language Models (VLMs) has emerged as a rising contender to typical modular VLMs, shaped by evolving model architectures and training paradigms. Yet, two lingering clouds cast shadows over its widespread exploration and promotion: (-) What fundamental constraints set native VLMs apart from modular ones, and to what extent can these barriers be overcome? (-) How to make research in native VLMs more accessible and democratized, thereby accelerating progress in the field. In this paper, we clarify these challenges and outline guiding principles for constructing native VLMs. Specifically, one native VLM primitive should: (i) effectively align pixel and word representations within a shared semantic space; (ii) seamlessly integrate the strengths of formerly separate vision and language modules; (iii) inherently embody various cross- modal properties that support unified vision-language encoding, aligning, and reasoning. Hence, we launch NEO, a novel family of native VLMs built from first principles, capable of rivaling top-tier modular counterparts across diverse real- world scenarios. With only 390M image-text examples, NEO efficiently develops visual perception from scratch while mitigating vision-language conflicts inside a dense and monolithic model crafted from our elaborate primitives. We position NEO as a cornerstone for scalable and powerful native VLMs, paired with a rich set of reusable components that foster a cost-effective and extensible ecosystem. 1 INTRODUCTION Recently, Vision-Language Models (VLMs) Bai et al. (2025); Zhu et al. (2025); Wang et al. (2025b); xAI (2025); Anthropic (2025); DeepMind (2025); Hurst et al. (2024); OpenAI (2025) have emerged as a major breakthrough, extending the strong linguistic capabilities of Large Language Models (LLMs) to multimodal understanding. They typically follow a modular design that integrates a pre-trained Visual Encoder (VE) Radford et al. (2021); Chen et al. (2024f); Fang et al. (2023); Tschannen et al. (2025), a Projector Alayrac et al. (2022); Liu et al. (2024a); Dai et al. (2024), and an LLM Touvron et al. (2023); Yang et al. (2025); DeepSeek-AI et al. (2025). Through multi-stage post-training at scale, they incrementally overcome limitations in image resolution, aspect ratio, and visual encoding flexibility. Yet, modular designs still contend with strong inductive biases in pre-trained visual semantics, complex infrastructure, and scaling laws needed to harmonize their components. Against this backdrop, native VLMs have arisen as a new avenue of exploration, with Fuyu Bavishi et al. (2023) and EVE Diao et al. (2024) pioneering a promising route towards monolithic VLMs. Subsequent efforts seek to learn vision perception from scratch and mitigate vision-language conflicts via visual encoder distillation Diao et al. (2024); Li et al. (2025b); Wang et al. (2025a); Li et al. (2025a), mixed training data Lei et al. (2025); Li et al. (2025a), and modality-specific decomposition Diao et al. (2025); Luo et al. (2024; 2025); Li et al. (2025a). Nonetheless, constructing visual representations via mapping functions inside pre-trained LLMs often hinders efficiency Chen et al. (2024d); Luo et al. (2024), destabilizes optimization Team (2024); Wang et al. (2024b), and disrupts original linguistic knowledge Diao et al. (2024); Chen et al. (2024d), even under decoupled designs or large budgets Beyer et al. (2024). Besides, HoVLE Tao et al. (2025) and HaploVL Yan et al. (2025) address 1 arXiv:2510.14979v1 [cs.CV] 16 Oct 2025 Tile vs. Native Resolution Native Encode Pre-set Ratios Thumbnail Concat 1:1 1:2 1:3 1:4 1:5 1:6 2:3 3:2 … Large Image Small Image or or Native VLM Primitives Image Text ×L FFN PEL WEL Late vs. Early Fusion LLM VE VLM Image Text Image Text Modular VLM Monolithic VLM (opt) X-Attn MHNA + + + + SIGLIP CLIP EVA Vision Encoder Language Model GPT Claude Qwen Deepseek 1. Dense Model Architecture 2. Parameter : 0.3B, 0.6B, …, 22B 3. Patch Size : 14, 16, 30, 32 4. Absolute Position Embedding (Learnable PE or Sinusoidal PE) 5. Bi-Directional Attention 6. 2D-RoPE or Not 7. RoPE θ : 10000 8. Head Dim : 64, 128, 256 9. Layer Num : 6, 12, 24, …, 27 1. Dense or MoE Model Architecture 2. Parameter : 0.6B, 1.7B, 8B, …, 1043B 3. QK-Norm or Not 4. Sliding Window or Not 5. Hidden Activation : SiLU 6. Causal-only Attention 7. 1D-RoPE or Not 8. RoPE θ : 1000000 9. Head Dim : 128, 192, 256 10. Layer Num : 28, 32, …, 64, 89 GPT Gemini QwenVL Grok Claude InternVL Property 5 RoPE Variants for Multi-Dimensional Relations Property 3 Efficient Pixel-Word Alignment VLM ( VE ) Pre-Align & Encode [ reusable after PT ] [ cost-effective for SFT ] (LLM) Fully-Align & Reason Property 4 Multi-Modal Mixed Attention LM LM VE VE EMA Property 2 Visual Learning at Scale Self-Supervision - Self- Distilling - Masked Prediction Property 1 Seamlessly Load LLMs VLM LLM layer-wise initialization Img1 Txt1 Img2 Txt2 Inside VE Inside LLM (2). Channels Frame-wise Bi-Directional Token-wise Casual Attention (1). Indexes … … T T T TT T frequency decay … … W H W H T T … … … … T … … … H … W T H W … … T, T, H,W, H,W, H,W … … M-RoPE Video-RoPE IL-RoPE MM-RoPE … … THW TTT [0,0,0] [1,0,0] [1,0,1] [1,1,0] [1,1,1] [2,0,0] [3,0,0] [4,0,0] [4,0,1] [4,1,0] [4,1,1] [5,0,0] [0,0,0] [1,1,1] [1,1,2] [1,2,1] [1,2,2] [2,0,0] [3,0,0] [4,4,4] [4,4,5] [4,5,4] [4,5,5] [5,0,0] [0,0,0] [1,0,0] [1,0,1] [1,1,0] [1,1,1] [2,2,2] [3,3,3] [4,0,0] [4,0,1] [4,1,0] [4,1,1] [5,5,5] [0,0,0] [1,1,1] [1,1,2] [1,2,1] [1,2,2] [2,2,2] [3,3,3] [4,4,4] [4,4,5] [4,5,4] [4,5,5] [5,5,5] red pills and blue - Next-Token Prediction - Contrastive Learning Cross-Supervision Model Tags 1. Dense Model 2. 2.2B / 9B Params 3. 32 x 32 Patch Size 4. Sinusoidal PE 5. Mixed Attention 6. Native 3D-RoPE 7. RoPE θ: 10K + 1M 8. Layer Num: 40 / 42 9. Reusable Pre-Buffer 0 … Discrete vs. Continuous Patch Embed Token Quantize 1 N … Patch Flatten Modular VLM Modules Figure 1: Overview of our native vision-language frameworks, which project arbitrary-resolution images into a continuous latent space, integrating the virtues of modular VLM architectures and enabling efficient vision-language encoding, alignment, and interaction in an early-fusion manner. this by first mapping vision-language inputs into a shared space. Yet, their modality-sharing modules, whether derived from LLM or VE layers, neglect intrinsic discrepancy in encoding and interaction across modalities, ultimately compromising VLM’s capacity to unify visual-linguistic properties. Figure 1 outlines a central question: What properties must native VLMs possess to compete with modular ones? Modular VLMs decouple vision encoders from language models, allowing each to exploit modality-specific characteristics, e.g., bi-directional versus causal attention, distinct positional embeddings, and varied network configurations. This separation accelerates the development of visual and linguistic competencies and permits flexible combinations of individual components. However, it fragments the training procedure, increases alignment costs, and leaves the intermodal balance unresolved. Motivated by these analyses, we formulate the following strategies accordingly: (1) Native VLM Primitive. Native VLMs should embody a unified vision–language primitive that simultaneously integrates encoding, alignment, and reasoning across modalities in one single module. Its design should encompass three principles: (i) a Flexible Position Encoding scheme that generalizes effectively to dynamic spatial structures; (ii) a Multi-Head Native Attention (MHNA) that jointly processes visual–textual connectivity; (iii) Native Rotary Position Embeddings (Native-RoPE) with modality-specific frequencies, preserving compatibility with pretrained LLM’s weights while absorb- ing original VE’s interaction patterns. Guided by these tenets, we supercharge the fundamental LLM layers with a hybrid attention, expanded heads, and targeted RoPE across modalities, synchronously capturing multi-dimensional relationships for fine-grained and comprehensive correspondence. (2) Pre-Buffer and Post-LLM. The next crucial issue is to efficiently scale visual training while securing consistent pixel-word alignment. Here, we partition the monolithic backbone into pre-Buffer and post-LLM layers during pre-training, each rooted in identical native primitive architectures. This transient stage enables pretrained LLMs to steer visual learning and establish coherent relevance with later stages. As mid-training and supervised fine-tuning advance, the partition dissolves, yielding a unified architecture that autonomously allocates the VLM’s capacities to their respective functions. This end-to-end training reduces semantic biases of separate pretraining and large overheads of post-stage alignment, effectively bridging native and modular VLMs. Crucially, pre-Buffer persists as a reusable pretrained asset, facilitating sustainable resources for native VLM development. We launch NEO, an innovative native VLM that reimagines multi-modal integration from first principles. Unlike typical modular designs, NEO rests on unified primitives that natively encode, align, and reason across modalities, forming coherent pixel–word correspondences from the outset. Through streamlined end-to-end training on 390M image–text samples, NEO acquires strong visual perception and rivals leading modular VLMs of comparable scale across diverse benchmarks. Beyond competitive results, NEO offers reusable components that simplify subsequent development and reduce barriers to promoting native exploration. This reveals that next-generation multimodal systems could also originate from architectures that are native, unified, and intrinsically multimodal. 2 Take Native Vision - Language Model the red Word Embedding Layer Patch Embedding Layer Conv1 Conv2 GELU Flatten … , ! pill NEO the red , ! pill NEO \n Pos-Emb + ~ FFN + + MHNA FFN + + + + + + MHNA <img> </img> Pixel-Word Aligning via 𝑳𝟏* Primitives as pre-Buffer Image-Text Reasoning via 𝑳𝟐* Primitives as post-LLM Figure 2: Overview of our proposed NEO architecture. We begin with lightweight patch and word embedding layers that encode images and text into token sequences, which are subsequently processed by a monolithic decoder-only architecture. The pre-Buffer and post-LLM components, each stacked with multiple native primitives, facilitate efficient and precise pixel–word alignment and reasoning. 2 RELATED WORKS 2.1 MODULAR VISION-LANGUAGE MODELS Current Vision–Language Models (VLMs) have converged on a modular paradigm, where a pretrained Vision Encoder (VE) is paired with a Large Language Model (LLM) via lightweight adapters, e.g., projection layers Li et al. (2024a;b) or cross-attention mechanisms Alayrac et al. (2022); Dai et al. (2024). This architecture underlies both leading proprietary systems, including Claude Anthropic (2024; 2025), GPT Hurst et al. (2024); Yang et al. (2023); OpenAI (2025), and Gemini Comanici et al. (2025); DeepMind (2025), as well as prominent open-source frameworks such as InternVL Zhu et al. (2025); Wang et al. (2025b), Qwen-VL Wang et al. (2024a); Bai et al. (2025), and Grok xAI (2024; 2025). By harnessing the complementary strengths of vision and language components, modular architectures, adhering to the “ViT-MLP-LLM” pipeline, achieve unprecedented performance across diverse multimodal benchmarks and have emerged as the dominant design principle in the field. Despite empirical successes, modular VLMs remain constrained by multi-stage training and heteroge- neous structures. Extensive post-training interventions are often required to mitigate rigid inductive biases in pretrained VEs Wang et al. (2024a), which limit resolution flexibility, erode fine-grained de- tails, and blunt sensitivity to features across scales. Besides, pretraining semantic biases and capacity trade-offs between VEs and LLMs collectively impede design simplicity, deployment efficiency, and seamless integration of vision and language, underscoring the urgent need for a monolithic backbone. 2.2 NATIVE VISION-LANGUAGE MODELS Native VLMs embrace early-fusion integration rather than grafting VEs onto LLMs. Early Fuyu Bav- ishi et al. (2023), EVE Diao et al. (2024), and SOLO Chen et al. (2024d), embed image patches via linear projections, whereas Chameleon Team (2024), MoMA Lin et al. (2024), and MoT Liang et al. (2024) transform images into symbolic sequences via discrete tokenizers. Later studies Luo et al. (2024); Diao et al. (2025); Li et al. (2025b); Luo et al. (2025); Li et al. (2025a) leverage Mixture-of- Experts (MoE) or Divide-and-Conquer (DaC) strategies to suppress vision-language interference, while others Diao et al. (2024); Li et al. (2025b); Wang et al. (2025a); Li et al. (2025a) upgrade visual encoder supervision to accelerate the acquisition of visual concepts. Empirical evidence Beyer et al. (2024); Luo et al. (2024); Lei et al. (2025) reveals that, with sufficient data and progressive training, native VLMs rapidly approach modular counterparts, corroborating recent scaling-law insights Shukor et al. (2025b;a). Besides, recent methods Tao et al. (2025); Yan et al. (2025); Xiao et al. (2025) indicate that multi-modality encoding modules with the LLM or VE style slightly resolve vision-language misalignment, yet fail to fully integrate the distinct properties of each modality. Notably, NEO redefines native VLMs as a unibody system built from first-principle primitives. Every component—from image patch encoding, attention mechanism to rotary position embed- dings—ensures full compatibility with the intrinsic modeling patterns of VEs and LLMs. Meanwhile, NEO evolves modular VLM strengths via the modality-agnostic pre-Buffer and end-to-end training, dramatically enhancing pixel-word alignment and pushing the frontier of native VLM research. 3 + + + + Multi-Head Native Attention Feed-Forward Network RMSNorm RMSNorm Native VLM Primitive RoPE θ Q K V T Linear Linear Linear Q-norm K-norm H W 1m 1m 10k 10k 10k 10k (1). Introduce new FC/Norm Into original Q, K for H, W Keep Original Mapping Mode T H W Rebuild New Mapping Mode (3). Introduce Frame-wise Native Multi-Modal Attention Img1 Txt1 Img2 Txt2 Img1 Txt1 Img2 Txt2 Bidirectional Attention Word / Frame-wise Causal Attention [3,0,0] [0,0,0] [1,0,0] [2,0,0] [4,0,0] [5,0,0] [6,0,0] [7,0,0] [8,0,0] [9,0,0] [10,0,0] [11,0,0] [12,0,0] [8,0,1] [3,0,1] [3,1,0] [3,1,1] [8,1,0] [8,1,1] <SS> red <img> </img> and blue pills <img> </img> . <ES> (2). Native Rotary Position Embedding (Native-RoPE) [T, H, W] Decoupling Index <SS> : Start Symbol <ES> : End Symbol Q / K Head a) Keep LLM Head Channels b) Add New Head Channels Split T H W LLM θ 1,000,000 VE θ 10,000 Decay 0 – D 0 – D/2 0 – D/2 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 60 70 T (0 - 63) H, W (0 - 31) Frequency Value Channel Index (i) c) Decompose 3D-RoPE Channels - H / W have higher frequencies than T - Index range:T:0 ~ 𝒏∗𝟏𝟎𝟒~𝟔 H / W:0 ~ 𝒏∗𝟏𝟎𝟐 i) Simultaneously Compatible with Interaction Patterns of LLM / VE Q - ( T ) Q - ( H,W ) K - ( H,W ) K - ( T ) × × LLM’s high- / low- frequency for local / long-range relations VE’s relatively high frequencies for local semantic dependency ii) Seamlessly Absorb Pre-trained LLMs iii) Efficiently Construct Pixel-Word Connection MHNA FFN + + MHNA + + FFN Build Post-LLM via × 𝑳𝟐Primitives Load - Q,K,V (T) Load - LN (T) Load - FFN (T) Initial - Q (H, W) via Q (T) Initial - K (H, W) via Zero-W Build Pre-Buffer via × 𝑳𝟏Primitives - Rebuild visual-textual encoding from scratch - Large visual training with fixed post-LLM - Reusable module for native vlm community Shared [H, W] for Same Positions across Images Figure 3: Overview of our native primitive, which integrates native attention with bi-directional dependencies within images and word / frame-wise causal interactions, together with native rotary po- sition embeddings parameterized by modality-specific frequency, channel, and index allocation. It is inherently unified and intrinsically multimodal, substantially enhancing pixel–word correspondence. 3 METHODOLOGY 3.1 MODEL ARCHITECTURE Figure 2 illustrates the proposed NEO framework, which comprises lightweight patch and word embedding layers, a pre-Buffer, and a post-LLM, built upon stacked native VLM primitives. Patch and Word Embeddings. Given an image I, we convert it into token sequences via a lightweight Patch Embedding Layer (PEL) with two Convolutional layers (Conv1–2) Krizhevsky et al. (2012) and a Gaussian Error Linear Unit (GELU) Hendrycks & Gimpel (2016). For input text T, we encode it into word tokens using the original LLM Tokenizer as Word Embedding Layer (WEL): xv = Conv2(GELU(Conv1(I)) + PE), xt = Tokenizer(T), (1) where xv ∈R(h×w)×d / xt ∈Rn×d denote visual / textual tokens, and PE is 2D Sinusoidal Positional Encoding Dosovitskiy et al. (2021). The stride of Conv1 / Conv2 is 16 / 2, i.e., each visual token corresponds to a 32 × 32 image patch. Notably, Conv2 performs token folding like pixel unshuffle Chen et al. (2024e), with the special <img> and </img> tokens inserted at the boundaries of visual tokens, while mapping position and patch embeddings into a unified space. Afterward, visual and textual tokens are merged and propagated through the unified backbone. Native VLM Primitive. It adopts RMSNorm Zhang & Sennrich (2019) and SwiGLU Dauphin et al. (2017) consistent with the original LLM layers. Unlike prior methods Wang et al. (2024a); Bai et al. (2025); Zhu et al. (2025); Wang et al. (2025b) that collapse visual tokens into 1D representations or merely reallocate pre-trained LLM head dimensions across temporal (T), height (H), and width (W), we expand Query (Q) and Key (K) head dimensions while fully decoupling H, W, and T relations in Figure 3(1), causing an extra 10% parameter counts over the original Transformer block. The original T dimension is preserved, and additional H and W dimensions are added with their respective QK normalization Yang et al. (2025). Crucially, the linear weights of K for H and W channels are zero-initialized, and attention scale matches that of LLMs, maintaining the LLM pre-training paradigm and progressively activating multimodal capabilities for visual spatial relationships. This philosophy aligns with our Native Rotary Position Embedding (Native-RoPE) in Figure 3(2), which eliminates correlations between H / W and T indexes, while decoupling channel allocations of H, W, and T under the original LLM frequency. (1) Index Allocation: For text, T index is retained while H / W indexes are zeroed. Notably, Native-RoPE is equivalent to 1D-RoPE before training. For images, each visual token has a constant T index, with unique H / W indexes encoding spatial location. Videos, treated as sequences of frames, increment T index per frame, while H / W indexes follow the same spatial scheme as images. In multimodal inputs, each modality’s T index starts from the maximum ID of the preceding modality, ensuring continuous and unambiguous positional encoding across modalities. This serves two purposes: (a) For image pairs, H / W indexes start independently 4 from (0,0), and tokens at corresponding positions share identical dependency, strongly reinforcing correlations and interactions across matching regions Liao et al. (2025); Wu et al. (2025); (b) For image-text pairs, H / W indexes are decoupled from T index and bounded within (0,0) and (H,W), preventing large T index growth from disproportionately affecting H / W indexes Wang et al. (2024a); Bai et al. (2025) and thereby keeping spatial dependencies between long-range text and images. Another key aspect is (2) Channel and Frequency Allocation. Unlike recent 3D-RoPE methods Bai et al. (2025); Wei et al. (2025); Yuan et al. (2025); Liao et al. (2025), we fully decouple the channel and frequency allocation of H, W, and T, equipped with additional Q/K head dimensions for H and W. This resolves two issues: (a) Zeroing H / W indexes for pure text would disrupt the modeling patterns and linguistic capacity of the LLM if restricted to its original channels. Repairing this disruption requires substantial resources; (b) Even with interleaved or segmented reallocation, H and W are theoretically equivalent but are assigned different frequencies. Meanwhile, the RoPE frequency in LLMs is far lower than that of visual encoders in Figure 3(2). This mismatch limits the modeling of relative distances and local semantics. The problem is exacerbated by the disparity in scales, with temporal ranges spanning up to one million and spatial ranges only a few hundred. Specifically, Native-RoPE assigns distinct base frequencies to T, H, and W within their own dimen- sions, i.e., original LLM head dimension for T and new head dimension for H / W as follows: ΘT =  β −2k d T | k ∈[0, d 2)  , ΘH =  β −4i d H | i ∈[0, d 4)  , ΘW =  β −4j d W | j ∈[0, d 4)  (2) where β and Θ indicate the base and rotation frequency across H, W, and T. Notably, temporal T dimension captures both local and long-range relations, whereas spatial H / W dimensions emphasize local dependencies. This also opens avenues for broader applications, e.g., video understanding Wei et al. (2025), multimodal generation Deng et al. (2025b), and editing Deng et al. (2025a). Inspired by prior works Lei et al. (2025); Deng et al. (2025b); Li et al. (2025a), we treat one single image as a unified meta-unit for autoregressive modeling. To enable this, we propose Native Multi- Modal Attention with mixed masking in Figure 3(c). Text tokens adhere to standard causal attention, attending only to preceding tokens to maintain autoregressive generation. In contrast, image tokens employ full bidirectional attention, enabling exhaustive interactions among all visual tokens, akin to a visual encoder. This design captures rich spatial and contextual dependencies within images and facilitates vision-language correspondences, thereby supporting complex multimodal reasoning. We use FlexAttention Dong et al. (2024) to minimize memory overhead and increase throughput, as variable-length block-wise attention is fully optimized through CUDA kernel modifications. Pre-Buffer and Post-LLM. Drawing on modular designs Bai et al. (2025); Wang et al. (2025b), we split NEO into encoding and reasoning components at the outset. In contrast, we build a modality- shared pre-Buffer via native primitives to map vision and language into a unified representation space. We further design a post-LLM via native primitives to absorb the powerful language proficiency and reasoning capabilities of LLMs. This, in turn, promotes deep pixel-word integration within the pre- Buffer—a deliberate design choice to ensure rich multimodal alignment while minimizing disturbance to the LLM. The layer depth in the pre-Buffer and post-LLM primarily refers to the model parameters of existing VEs and LLMs, ensuring a relatively fair comparison while balancing accuracy and efficiency. Crucially, this separation exists only during pre-training to boost visual learning; during mid-training and supervised fine-tuning, the components are upgraded to a monolithic backbone, allowing the VLM to automatically allocate capacity for encoding, alignment, and reasoning. 3.2 TRAINING PROCEDURE Figure 4 illustrates the whole training pipeline, which proceeds through three progressive stages: pre-training, mid-training, and supervised fine-tuning. The entire model is optimized end-to-end. Pre-Training Stage. In this phase, NEO acquires fundamental visual concepts and contextual dependencies from scratch, guided by pre-trained patterns from LLMs. Training leverages 345M web-scale and synthetic image-caption pairs, including 100M English and 20M Chinese pairs from LAION-400M Schuhmann et al. (2021), 150M English pairs from COYO-700M Byeon et al. (2022), 20M long-caption examples from BLIP3o Chen et al. (2025), and 5M short-caption pairs from OpenImages Kuznetsova et al. (2018), recaptioned with a pre-trained InternVL2-8B model. The dataset is further enriched with 30M samples from LAION-COCO Schuhmann et al. (2022) and 20M 5 Stage 1: Pre-Training Stage 3: Supervised Fine-Tuning PEL NEO 🔥 WEL 🔥 🔥 NEO Stage 2: Mid-Training 🧊 🔥 Pre-Buffer, new QK in Post-LLM NEO 🔥 PEL 🔥 WEL PEL 🔥 WEL 🔥 🧊 256! - 1024! Any Resolution 345M Image-Text Caption Data Pairs 256! - 2048! Any Resolution 40M Caption QA OCR / Detection 256! - 2048! Any Resolution 4M High-Quality Instruction Data Figure 4: Overview of the entire training recipe. During pre-training, NEO learns visual perception from massive web-scale and synthetic image-caption pairs with frozen LLM weights to preserve linguistic knowledge. In mid-training and supervised fine-tuning, the full model is progressively optimized end-to-end using caption, conversation, OCR, detection, and high-quality instruction data. examples from Wukong Gu et al. (2022) with rich Optical Character Recognition (OCR) annotations. A 3:7 ratio of language to multi-modal data is incorporated to reconstruct text projections in the pre-Buffer. Only the patch embedding layer, the pre-Buffer, and additional QK linear weights and normalization, along with H and W, are trainable and optimized with a simple next-token prediction objective. Notably, the new QK heads not only counteract the LLM’s strong language bias that limits visual specialization but also safeguard its capabilities against the effects of low-quality data. Mid-Training Stage. The objective at this stage is to strengthen the alignment between visual and linguistic capabilities while progressively enhancing recognition of high-resolution images, complex scenes, object scales, spatial grounding, and compact OCR content. The training data is drawn from the pre-training corpus of InternVL-1.5 Chen et al. (2024f), comprising 40M samples across image captioning, conversation, detection, and OCR data, which account for approximately 66%, 11%, 8%, and 15% of the total, respectively. A 3:7 ratio of language to multi-modal data is again applied. The entire architecture is updated with the same loss functions to consolidate vision-language alignment, thereby equipping NEO with the foundational abilities required for various visual scenarios. Supervised Fine-Tuning Stage. During the SFT stage, NEO’s ability to follow complex linguistic instructions and varied dialogue patterns is further enhanced, a critical step towards real-world deployment. The full network is optimized across diverse high-quality, multi-source instruction datasets. Following Mono-InternVL Luo et al. (2024), we employ about 4M bilingual instructions for supervised learning, covering tasks such as visual question answering, multimodal dialogue, mathematics, and knowledge reasoning. Details of the instruction data are provided in the Appendix. 4 EXPERIMENTS 4.1 TRAINING SETTINGS Our NEO models are built on Qwen3-1.7B and Qwen3-8B Yang et al. (2025) as the LLMs. The pre-Buffer employs L1 = 12 primitive layers for NEO-2.2B and L1 = 6 for NEO-9B. We extend only the QK head dimension in raw transformer layers, introducing roughly 10% extra parameters over the original design. The base RoPE frequencies βT , βH, and βW are set to 1×106, 1×104, and 1×104, respectively. NEO is trained on sixteen 8-GPU (80G) nodes using the AdamW optimizer Loshchilov & Hutter (2019). The maximum learning rates for pre-training, mid-training, and SFT are 8 × 10−4, 4 × 10−5, and 5 × 10−5, with a warm-up ratio of 0.01 and a cosine decay scheduler across all stages. 4.2 MAIN RESULTS We conduct standard evaluations with VLMEvalKit Duan et al. (2024) on diverse benchmarks, covering chart, diagram, and document understanding tasks, e.g., AI2D Kembhavi et al. (2016), DocVQA Clark & Gardner (2018), ChartQA Masry et al. (2022), InfoVQA Mathew et al. (2022), TextVQA Singh et al. (2019), and OCRBench Liu et al. (2023e); visual perception and challenging reasoning tasks, e.g., MMMU Yue et al. (2024), MMBench-EN (MMB) Liu et al. (2024b), MMVet Yu et al. (2024), MMStar Chen et al. (2024c), SEEDBench-IMG (SEED-I) Li et al. (2023a); visual hallucination tasks, e.g., POPE Li et al. (2023b) and HallusionBench (HallB) Guan et al. (2024). 6 Table 1: Comparison with modular and native VLMs on general vision-language benchmarks. “# Data” denotes the dataset scale during pre-training, mid-training, and supervised fine-tuning. † indicates models that employ reinforcement learning (RL). Bold highlights the highest performance. Model LLM # Data MMMU MMB MMVet MMStar SEED-I POPE HallB ▼Modular Vision-Language Models (2B) Qwen2-VL Qwen2-1.5B – / – / – 41.1 74.9 49.5 48.0 – – 41.7 InternVL2.5 InternLM2.5-1.8B >6B / 100M / 16M 43.6 74.7 60.8 53.7 – 90.6 42.6 Qwen2.5-VL† Qwen2.5-1.5B – / – / – 51.2 79.1 61.8 55.9 – – 46.3 InternVL3† Qwen2.5-1.5B >6B / 100M / 22M 48.6 81.1 62.2 60.7 – 89.6 42.5 Encoder-Based Qwen3-1.7B >6B / 40M / 4M 47.1 75.8 37.4 52.7 73.6 87.0 44.4 ▼Native Vision-Language Models (2B) Mono-InternVL InternLM2-1.8B 1.2B / 143M / 7M 33.7 65.5 40.1 – 67.4 – 34.8 Mono-InternVL-1.5 InternLM2-1.8B 400M / 150M / 7M 39.1 64.0 54.0 – 66.9 – 32.5 HoVLE InternLM2-1.8B 550M / 50M / 7M 32.2 73.3 43.8 – 70.9 87.4 38.4 OneCAT Qwen2.5-1.5B 436M / 70M / 13M 39.0 72.4 42.4 – 70.9 – – NEO Qwen3-1.7B 345M / 40M / 4M 48.6 76.0 49.6 54.2 74.2 87.5 43.1 ▼Modular Vision-Language Models (8B) Qwen2-VL Qwen2-7B – / – / – 54.1 83 62.0 60.7 – 88.1 50.6 InternVL2.5 InternLM2.5-7B >6B / 50M / 4M 56.0 84.6 62.8 64.4 – 90.6 50.1 Qwen2.5-VL† Qwen2.5-7B – / – / – 55.0 83.5 67.1 63.9 – 86.4 52.9 InternVL3† Qwen2.5-7B >6B / 100M / 22M 62.7 83.4 81.3 68.2 – 91.1 49.9 Encoder-Based Qwen3-8B >6B / 40M / 4M 54.1 84 60.0 63.5 76.2 87.8 51.4 ▼Native Vision-Language Models (8B) Fuyu Persimmon-8B – / – / – 27.9 10.7 21.4 – 59.3 84.0 – Chameleon from scratch 1.4B / 0M / 1.8M 25.4 31.1 8.3 – 30.6 19.4 17.1 EVE Vicuna-7B 33M / 0M / 1.8M 32.6 52.3 25.7 – 64.6 85.0 26.4 SOLO Mistral-7B 44M / 0M / 2M – 67.7 30.4 – 64.4 78.6 – Emu3 from scratch – / – / – 31.6 58.5 37.2 – 68.2 85.2 – EVEv2 Qwen2.5-7B 77M / 15M / 7M 39.3 66.3 45.0 – 71.4 87.6 – BREEN Qwen2.5-7B 13M / 0M / 4M 42.7 71.4 38.9 51.2 – – 37.0 VoRA Qwen2.5-7B 30M / 0M / 0.6M 32.0 61.3 33.7 – 68.9 85.5 – SAIL Mistral-7B 512M / 86M / 6M – 70.1 46.3 53.1 72.9 85.8 54.2 NEO Qwen3-8B 345M / 40M / 4M 54.6 82.1 53.6 62.4 76.3 88.4 46.4 Following InternVL3 Zhu et al. (2025), we construct the Encoder-Based by combining Qwen3 Yang et al. (2025) and InternViT-300M Zhu et al. (2025). In the mid-training stage, we first train the projector on 10M samples, and further unfreeze the vision encoder utilizing another 30M samples. Comparison with Modular VLMs. As demonstrated in Table 1 and Table 2, NEO achieves highly competitive performance at both the 2B and 8B scales, despite using relatively limited pre-training and supervised fine-tuning data and without reinforcement learning. Remarkably, NEO approaches the performance of top-tier modular VLMs, e.g., Qwen2-VL Wang et al. (2024a), InternVL2.5 Chen et al. (2024e), Qwen2.5-VL Bai et al. (2025), and InternVL3 Zhu et al. (2025) across multiple benchmarks, rivaling architectures trained on billions of additional samples. These results highlight the effectiveness of our end-to-end training strategy and unified model design. By combining native attention mechanisms with Native-RoPE, NEO enhances interactions between visual and linguistic features, enabling it to match more complex modular systems despite its simpler architecture. Comparison with Native VLMs. From Table 1 and Table 2, NEO delivers substantial gains on visual-centric benchmarks over the best competitors, e.g., Mono-InterVL Luo et al. (2024; 2025), HoVLE Tao et al. (2025), OnCAT Li et al. (2025a), EVE Diao et al. (2024; 2025), Emu3 Wang et al. (2024b), BREEN Li et al. (2025b), VoRA Wang et al. (2025a), and SAIL Lei et al. (2025). By seamlessly integrating post-LLM components with the pre-Buffer for large-scale visual learning, NEO aligns visual inputs with textual features from scratch and supports complex visual reasoning, even without visual encoder supervision Diao et al. (2024); Tao et al. (2025); Li et al. (2025a); Wang et al. (2025a); Li et al. (2025b), highlighting the strengths of its native primitive designs and training strategies. These design choices allow NEO to surpass many native VLMs using fewer training resources, demonstrating the advantages of our primitives with efficient data-scaling capability. 7 Table 2: Comparison with modular and native VLMs on visual question answering benchmarks. Any Res., Tile-wise, Any Rat., and Fix Res. refer to any resolution, image tile splitting, any aspect ratio, and fixed resolution. MoE and DaC are Mixture-of-Experts and Divide-and-Conquer models. Model Input RoPE Backbone AI2D DocVQA ChartQA InfoVQA TextVQA OCRBench ▼Modular Vision-Language Models (2B) Qwen2-VL Any Res. M-RoPE Dense 74.7 90.1 73.5 65.5 79.7 80.9 InternVL2.5 Tile-wise 1D-RoPE Dense 74.9 88.7 79.2 60.9 74.3 80.4 Qwen2.5-VL† Any Res. M-RoPE Dense 81.6 93.9 84.0 77.1 79.3 79.7 InternVL3† Tile-wise 1D-RoPE Dense 78.7 88.3 80.2 66.1 77.0 83.5 Encoder-Based Tile-wise 1D-RoPE Dense 77.4 89.9 78.4 65.9 73.3 83.5 ▼Native Vision-Language Models (2B) Mono-InternVL Tile-wise. 1D-RoPE MoE 68.6 80.0 73.7 43.0 72.6 76.7 Mono-InternVL-1.5 Tile-wise. 1D-RoPE DaC 67.4 81.7 72.2 47.9 73.7 80.1 HoVLE Tile-wise. 1D-RoPE Dense 73.0 86.1 78.6 55.7 70.9 74.0 OneCAT Any Res. M-RoPE Dense 72.4 87.1 76.2 56.3 67.0 – NEO Any Res. Native-RoPE Dense 80.1 89.9 81.2 63.2 74.0 77.1 ▼Modular Vision-Language Models (8B) Qwen2-VL Any Res. M-RoPE Dense 83.0 94.5 83 76.5 84.3 86.6 InternVL2.5 Tile-wise 1D-RoPE Dense 84.5 93.0 84.8 77.6 79.1 82.2 Qwen2.5-VL† Any Res. M-RoPE Dense 83.9 95.7 87.3 82.6 84.9 86.4 InternVL3† Tile-wise 1D-RoPE Dense 85.2 92.7 86.6 76.8 80.2 88 Encoder-Based Tile-wise 1D-RoPE Dense 82.9 92.1 83.5 75 77.1 85.3 ▼Native Vision-Language Models (8B) Fuyu Any Res. 1D-RoPE Dense 64.5 – – – – 36.6 Chameleon Fix Res. 1D-RoPE Dense 46.0 1.5 2.9 5.0 4.8 0.7 EVE Any Rat. 1D-RoPE Dense 61.0 53.0 59.1 25.0 56.8 39.8 SOLO Any Res. 1D-RoPE Dense 61.4 – – – – 12.6 Emu3 Fix Res. 1D-RoPE Dense 70 76.3 68.6 43.8 64.7 68.7 EVEv2 Any Rat. 1D-RoPE DaC 74.8 – 73.9 – 71.1 70.2 BREEN Any Res. 1D-RoPE MoE 76.4 – – – 65.7 – VoRA Any Res. 1D-RoPE Dense 61.1 – – – 58.7 – SAIL Any Res. M-RoPE Dense 76.7 – – – 77.1 78.3 NEO Any Res. Native-RoPE Dense 83.1 88.6 82.1 60.9 75.0 77.7 Despite strong results, NEO lags on knowledge-/OCR-heavy tasks, e.g., MMMU, InfoVQA, and TextVQA. Interestingly, NEO-9B does not surpass NEO-2B on DocVQA and InfoVQA, indicating limitations in our current training corpus. Even so, NEO performs well under constraints, highlighting the native VLM as a scalable paradigm. Larger datasets and resources can unlock its full potential. 4.3 ABLATION STUDIES Unless otherwise specified, we report the average evaluation results, denoted as Avg., across ten vision-language benchmark datasets in Table 3. The pre-Buffer and new head dimensions in the post-LLM are trained on 20M pre-training samples, followed by full-backbone fine-tuning on 2M SFT instruction data. These constitute the standard training settings for our ablation studies. 32 39.7 44 46.8 48 47.2 48.8 20 30 40 50 60 0 2 4 6 8 10 12 Avg. Accuracy (%) Figure 5: Configurations of pre-Buffer. Hyperparameters of the Pre-Buffer Layer. Figure 5 illustrates the relationship between the number of pre- Buffer layers and the model’s average accuracy, using Qwen3-1.7B as the post-LLM. Performance improves consistently as the layer count increases, but gains begin to saturate beyond eight layers. To maximize accuracy while maintaining the same capacity as publicly available vision encoders Chen et al. (2024f); Radford et al. (2021); Zhai et al. (2023), we select 12 layers for NEO-2.2B. Notably, we choose 6 layers for NEO-9B, mainly due to the good trade-off between performance and efficiency. 8 Table 3: Configurations of attention and RoPE. MMS, CQA, IVQA, and OCRB denote MMStar, ChartQA, InfoVQA, and OCRBench. ⋆indicates that the base RoPE frequencies for height and width are set to 1M. To ensure fairness, we add new head dimensions of equal size across all models. Model Attention RoPE MMMU MMB MMS SEED-I AI2D CQA IVQA TVQA OCRB POPE Avg. A Causal 1D-RoPE 40.2 48.6 36.1 55.3 63.6 16.1 22.5 16.2 13.9 78.6 39.1 B Mixed 1D-RoPE 40.8 48.8 36.4 57.3 63.7 16.0 21.9 17.4 16.0 79.2 39.8 C Mixed IL-RoPE 40.0 47.3 36.3 57.6 62.0 18.8 23.4 17.9 13.2 78.8 39.5 D Mixed M-RoPE 40.3 49.6 37.2 57.8 64.2 23.7 25.2 20.4 18.8 79.3 41.7 E Mixed MM-RoPE 40.5 50.8 37.6 58.2 65.8 25.7 26.3 22.1 18.2 78.8 42.4 F Mixed Video-RoPE 40.6 51.3 37.8 58.8 64.3 27.4 26.1 23.7 21.3 81.0 43.2 G Causal Native-RoPE 40.2 49.2 36.3 57.1 63.7 19.2 23.5 19.5 16.7 77.8 40.3 H Mixed Native-RoPE 40.7 51.9 38.2 58.9 65.8 30.6 26.9 24.1 23.2 80.0 44.0 I Mixed Native-RoPE⋆ 40.4 50.4 36.9 57.0 64.1 25.6 25.2 21.7 20.1 78.7 42.0 60 65 70 75 80 PB1 PB2 PB3 NEO InternViT CLIP SigLIP Avg. Accuracy (%) Figure 6: Comparison with pre-Buffer and vision encoders. All models are initialized post-LLM using Qwen3-1.7B. 60 65 70 75 80 NEO-2.2B NEO-9B Avg. Accuracy (%) Stage 1 Stage 2 Stage 3 Figure 7: Evaluation results across three progressive training procedures. Configurations of Native Primitives. Table 3 compares various attention and RoPE designs. The pre-Buffer depth is 4, and the post-LLM is initialized with Qwen3-1.7B. All models share the same new QK head dimensions and normalization. (1) Attention mode. Comparing models A/B and G/H reveals consistent gains of mixed attention over causal one, reflecting its stronger capacity to model comprehensive dependencies and cross-modal alignment. (2) RoPE mode. Native-RoPE outperforms 1D-RoPE Zhu et al. (2025), IL-RoPE Liao et al. (2025), M-RoPE Bai et al. (2025), MM-RoPE Yuan et al. (2025), and Video-RoPE Wei et al. (2025), with at least a 0.8% gain. This validates the importance of disentangling height, width, and temporal components in RoPE to enhance spatial–temporal representations and fine-grained interactions. By contrast, setting the base RoPE frequency to 1M for height and width severely impairs the ability to perceive local semantics. Comparison between Pre-Buffer and Vision Encoders. In Figure 6, PB 1–3 denotes the Pre-Buffer after stage 1–3. For all models except NEO, the post-LLMs are initialized via Qwen3-1.7B for our pre-Buffer, InternViT-300M Chen et al. (2024e), CLIP-vit-large-patch14 Radford et al. (2021), and SigLIP-so400m-patch14-384 Zhai et al. (2023). After two-stage re-training, PB3 shows only an average gap of 2.5 / 2.4 / 1.7 / 3.7% over NEO / InternViT / CLIP / SigLIP using billion-scale training data. This substantially reduces the training costs of building native VLMs for subsequent research. Performance Gains across Stages. Figure 7 presents the result evolution across training stages. In Stages 1 and 2, the model is fine-tuned on 2M SFT examples. Performance improves consistently as training data scales increase across 2.2B and 9B model sizes. Following progressive training, NEO shows strong multimodal capabilities, enabling robust performance across diverse real-world tasks. 5 CONCLUSION We introduce NEO, a native VLM that seamlessly integrates vision and language into a single unified framework, eliminating the need for separate visual encoders or ad-hoc alignment modules. By leveraging hybrid attention and modality-aware rotary position embeddings, NEO captures rich, fine-grained interactions between pixels and words from the outset. Its pre-Buffer and post-LLM training paradigm ensures efficient convergence and robust alignment while maintaining end-to-end learning. Experiments show that this unified design not only advances multimodal understanding and reasoning but also lays the foundation for reusable, scalable components. Our native primitives highlight a promising path toward intrinsically multimodal, unified, and adaptable architectures. 9 ETHICS STATEMENT All resources are drawn from open-access datasets with explicitly defined usage policies. Our work seeks to advance multimodal learning capabilities without introducing ethical or safety concerns beyond those already associated with existing models. Nevertheless, risks such as dataset biases and potential misuse cannot be entirely ruled out. We emphasize the importance of careful data curation, responsible deployment, and transparent reporting as essential practices to mitigate these challenges. REPRODUCIBILITY STATEMENT We place strong emphasis on reproducibility, providing detailed descriptions to facilitate replication and validation. Information about dataset selection, training strategies, and evaluation settings is provided in Sec. 3.2 and Sec. 4.1. We commit to releasing the code, model weights, and detailed documentation to allow the community to reproduce our findings in future research. REFERENCES Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Kar´en Simonyan. Flamingo: a visual language model for few-shot learning. In Advances of Neural Information Processing Systems, New Orleans, LA, USA, 2022. Anthropic. Claude 3.7 sonnet: A hybrid reasoning ai model, 2025. URL https://www. anthropic.com/news/claude-3-7-sonnet. AI Anthropic. The claude 3 model family: opus, sonnet, haiku, 2024. URL https://www-cdn. anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_ Card_Claude_3.pdf. Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Ming-Hsuan Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. CoRR, abs/2502.13923, 2025. Yuelin Bai, Xinrun Du, Yiming Liang, Yonggang Jin, Junting Zhou, Ziqiang Liu, Feiteng Fang, Mingshan Chang, Tianyu Zheng, Xincheng Zhang, et al. Coig-cqia: Quality is all you need for chinese instruction fine-tuning. CoRR, abs/2403.18058, 2024. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sa˘gnak Tas¸ırlar. Introducing our multimodal models, 2023. URL https://www.adept.ai/ blog/fuyu-8b. Lucas Beyer, Andreas Steiner, Andr´e Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey A. Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bosnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier J. H´enaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, and Xiaohua Zhai. Paligemma: a versatile 3b vlm for transfer. CoRR, abs/2407.07726, 2024. Ali Furkan Biten, Rub`en Tito, Andr´es Mafla, Llu´ıs G´omez i Bigorda, Marc¸al Rusi˜nol, C. V. Jawahar, Ernest Valveny, and Dimosthenis Karatzas. Scene text visual question answering. In IEEE International Conference on Computer Vision, pp. 4290–4300, Seoul, Korea (South), 2019. Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset, 2022. URL https://github.com/kakaobrain/ coyo-dataset. 10 Jie Cao and Jing Xiao. An augmented benchmark dataset for geometric question answering through dual parallel text encoding. In International Conference on Computational Linguistics, pp. 1511– 1520, 2022. Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: harnessing gpt4v-synthesized data for a lite vision-language model. CoRR, abs/2402.11684, 2024a. Jiuhai Chen, Zhiyang Xu, Xichen Pan, Yushi Hu, Can Qin, Tom Goldstein, Lifu Huang, Tianyi Zhou, Saining Xie, Silvio Savarese, Le Xue, Caiming Xiong, and Ran Xu. Blip3-o: A family of fully open unified multimodal models-architecture, training and dataset. CoRR, abs/2505.09568, 2025. Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: improving large multi-modal models with better captions. In European Conference on Computer Vision, volume 15075, pp. 370–387, Milan, Italy, 2024b. Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. Are we on the right way for evaluating large vision- language models? In Advances of Neural Information Processing Systems, Vancouver, BC, Canada, 2024c. Yangyi Chen, Xingyao Wang, Hao Peng, and Heng Ji. A single transformer for scalable vision- language modeling. CoRR, abs/2407.06438, 2024d. Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yimin Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang, Han Lv, Yi Wang, Wenqi Shao, Pei Chu, Zhongying Tu, Tong He, Zhiyong Wu, Huipeng Deng, Jiaye Ge, Kai Chen, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. CoRR, abs/2412.05271, 2024e. Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Botian Shi, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Bo Zhang, Pinlong Cai, Licheng Wen, Xiangchao Yan, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. CoRR, abs/2404.16821, 2024f. Chee Kheng Chng, Yuliang Liu, Yipeng Sun, Chun Chet Ng, Canjie Luo, Zihan Ni, ChuanMing Fang, Shuaitao Zhang, Junyu Han, Errui Ding, et al. Icdar2019 robust reading challenge on arbitrary-shaped text-rrc-art. In International Conference on Document Analysis and Recognition, pp. 1571–1576, 2019. Christopher Clark and Matt Gardner. Simple and effective multi-paragraph reading comprehension. In Annual Meeting of the Association for Computational Linguistics, pp. 845–855, Melbourne, Australia, 2018. Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit S. Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, Luke Marris, Sam Petulla, Colin Gaffney, Asaf Aharoni, Nathan Lintz, Tiago Cardal Pais, Henrik Jacobsson, Idan Szpektor, Nan- Jiang Jiang, Krishna Haridasan, Ahmed Omran, Nikunj Saunshi, Dara Bahri, Gaurav Mishra, Eric Chu, Toby Boyd, Brad Hekman, Aaron Parisi, Chaoyi Zhang, Kornraphop Kawintiranon, Tania Bedrax-Weiss, Oliver Wang, Ya Xu, Ollie Purkiss, Uri Mendlovic, Ila¨ı Deutel, Nam Nguyen, Adam Langley, Flip Korn, Lucia Rossazza, Alexandre Ram´e, Sagar Waghmare, Helen Miller, Nathan Byrd, Ashrith Sheshan, Raia Hadsell Sangnie Bhardwaj, Pawel Janus, Tero Rissa, Dan Horgan, Sharon Silver, Ayzaan Wahid, Sergey Brin, Yves Raimond, Klemen Kloboves, Cindy Wang, Nitesh Bharadwaj Gundavarapu, Ilia Shumailov, Bo Wang, Mantas Pajarskas, Joe Heyward, Martin Nikoltchev, Maciej Kula, Hao Zhou, Zachary Garrett, Sushant Kafle, Sercan Arik, Ankita Goel, Mingyao Yang, Jiho Park, Koji Kojima, Parsa Mahmoudieh, Koray Kavukcuoglu, Grace Chen, Doug Fritz, Anton Bulyenov, Sudeshna Roy, Dimitris Paparas, Hadar Shemtov, Bo-Juen Chen, Robin Strudel, David Reitter, Aurko Roy, Andrey Vlasov, Changwan Ryu, Chas Leichner, 11 Haichuan Yang, Zelda Mariet, Denis Vnukov, Tim Sohn, Amy Stuart, Wei Liang, Minmin Chen, Praynaa Rawlani, Christy Koh, JD Co-Reyes, Guangda Lai, Praseem Banzal, Dimitrios Vytiniotis, Jieru Mei, and Mu Cai. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. CoRR, abs/2507.06261, 2025. Wenliang Dai, Nayeon Lee, Boxin Wang, Zhuoling Yang, Zihan Liu, Jon Barker, Tuomas Rintamaki, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. NVLM: open frontier-class multimodal llms. CoRR, abs/2409.11402, 2024. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e M. F. Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1080–1089, Honolulu, HI, USA, 2017. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International Conference on Machine Learning, volume 70, pp. 933–941, Sydney, NSW, Australia, 2017. Google DeepMind. Gemini 2.5 pro: Google’s most advanced reasoning model, 2025. URL https: //deepmind.google/models/gemini/pro/. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. CoRR, abs/2501.12948, 2025. Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, Shi Guang, and Haoqi Fan. Emerging properties in unified multimodal pretraining. CoRR, abs/2505.14683, 2025a. Haoge Deng, Ting Pan, Haiwen Diao, Zhengxiong Luo, Yufeng Cui, Huchuan Lu, Shiguang Shan, Yonggang Qi, and Xinlong Wang. Autoregressive video generation without vector quantization. In International Conference on Learning Representations, Singapore, 2025b. Haiwen Diao, Yufeng Cui, Xiaotong Li, Yueze Wang, Huchuan Lu, and Xinlong Wang. Unveiling encoder-free vision-language models. CoRR, abs/2406.11832, 2024. Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, and Xinlong Wang. Evev2: Improved baselines for encoder-free vision-language models. CoRR, abs/2502.06788, 2025. Juechu Dong, Boyuan Feng, Driss Guessous, Yanbo Liang, and Horace He. Flex attention: A programming model for generating optimized attention kernels. CoRR, abs/2412.05496, 2024. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: transformers for image recognition at scale. In International Conference on Learning Representations, Austria, 2021. Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, Dahua Lin, and Kai Chen. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In ACM International Conference on Multimedia, pp. 11198–11201, Melbourne, VIC, Australia, 2024. 12 Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. EVA: exploring the limits of masked visual representation learning at scale. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 19358–19369, Vancouver, BC, Canada, 2023. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: elevating the role of image understanding in visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 6325–6334, Honolulu, HI, USA, 2017. Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Niu Minzhe, Xiaodan Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, Chunjing Xu, and Hang Xu. Wukong: A 100 million large-scale chinese cross-modal pre-training benchmark. In Advances of Neural Information Processing Systems, New Orleans, LA, USA,, 2022. Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large vision- language models. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 14375– 14385, Seattle, WA, USA, 2024. Conghui He, Zhenjiang Jin, Chao Xu, Jiantao Qiu, Bin Wang, Wei Li, Hang Yan, Jiaqi Wang, and Dahua Lin. Wanjuan: A comprehensive multimodal dataset for advancing english and chinese large models. CoRR, abs/2308.10755, 2023. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). CoRR, abs/1606.08415, 2016. Drew A. Hudson and Christopher D. Manning. GQA: a new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 6700–6709, Long Beach, CA, USA, 2019. Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Os- trow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, An- toine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll L. Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, and Dane Sherburn. Gpt-4o system card. CoRR, abs/2410.21276, 2024. Jimmycarter. Textocr gpt-4v dataset, 2023. URL https://huggingface.co/datasets/ jimmycarter/textocr-gpt4v. Kushal Kafle, Brian L. Price, Scott Cohen, and Christopher Kanan. DVQA: understanding data visualizations via question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5648–5656, Salt Lake City, UT, USA, 2018. Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Min Joon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In European Conference on Computer Vision, volume 9908, pp. 235–251, Amsterdam, The Netherlands, 2016. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. Are you smarter than a sixth grader? textbook question answering for multimodal 13 machine comprehension. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4999–5007, 2017. Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In European Conference on Computer Vision, volume 13688, pp. 498–517, Tel Aviv, Israel, 2022. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolu- tional neural networks. In Advances of Neural Information Processing Systems, pp. 1106–1114, Lake Tahoe, Nevada, US, 2012. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper R. R. Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: unified image classification, object detection, and visual relationship detection at scale. CoRR, abs/1811.00982, 2018. LAION. Gpt-4v dataset, 2023. URL https://huggingface.co/datasets/laion/ gpt4v-dataset. Weixian Lei, Jiacong Wang, Haochen Wang, Xiangtai Li, Jun Hao Liew, Jiashi Feng, and Zilong Huang. The scalability of simplicity: Empirical analysis of vision-language learning with a single transformer. CoRR, abs/2504.10462, 2025. Paul Lerner, Olivier Ferret, Camille Guinaudeau, Herv´e Le Borgne, Romaric Besanc¸on, Jos´e G Moreno, and Jes´us Lov´on Melgarejo. Viquae, a dataset for knowledge-based visual question answering about named entities. In ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3108–3120, 2022. Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: stronger llms supercharge multi- modal capabilities in the wild, 2024a. URL https://llava-vl.github.io/blog/ 2024-05-10-llava-next-stronger-llms/. Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: easy visual task transfer. CoRR, abs/2408.03326, 2024b. Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: bench- marking multimodal llms with generative comprehension. CoRR, abs/2307.16125, 2023a. Han Li, Xinyu Peng, Yaoming Wang, Zelin Peng, Xin Chen, Rongxiang Weng, Jingang Wang, Xunliang Cai, Wenrui Dai, and Hongkai Xiong. Onecat: Decoder-only auto-regressive model for unified understanding and generation. CoRR, abs/2509.03498, 2025a. Tianle Li, Yongming Rao, Winston Hu, and Yu Cheng. BREEN: bridge data-efficient encoder-free multimodal learning with learnable queries. CoRR, abs/2503.12446, 2025b. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. In Conference on Empirical Methods in Natural Language Processing, pp. 292–305, Singapore, 2023b. Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan L Yuille. Super-clevr: A virtual benchmark to diagnose domain robust- ness in visual reasoning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 14963–14973, 2023c. 14 Weixin Liang, Lili Yu, Liang Luo, Srinivasan Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, and Xi Victoria Lin. Mixture-of-transformers: A sparse and scalable architecture for multi-modal foundation models. CoRR, abs/2411.04996, 2024. Chao Liao, Liyang Liu, Xun Wang, Zhengxiong Luo, Xinyu Zhang, Wenliang Zhao, Jie Wu, Liang Li, Zhi Tian, and Weilin Huang. Mogao: An omni foundation model for interleaved multi-modal generation. CoRR, abs/2505.05472, 2025. Xi Victoria Lin, Akshat Shrivastava, Liang Luo, Srinivasan Iyer, Mike Lewis, Gargi Ghosh, Luke Zettlemoyer, and Armen Aghajanyan. Moma: efficient early-fusion pre-training with mixture of modality-aware experts. CoRR, abs/2407.21770, 2024. Adam Dahlgren Lindstr¨om and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. CoRR, abs/2208.05358, 2022. Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 11:635–651, 2023a. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. CoRR, 2023b. Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, and Dong Yu. Mmc: Advancing multimodal chart understanding with large-scale instruction tuning. CoRR, abs/2311.10774, 2023c. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Advances of Neural Information Processing Systems, New Orleans, LA, USA, 2023d. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 26286–26296, Seattle, WA, USA, 2024a. Xi Liu, Rui Zhang, Yongsheng Zhou, Qianyi Jiang, Qi Song, Nan Li, Kai Zhou, Lei Wang, Dong Wang, Minghui Liao, et al. Icdar 2019 robust reading challenge on reading chinese text on signboard. CoRR, abs/1912.09641, 2019. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: is your multi-modal model an all-around player? In European Conference on Computer Vision, volume 15064, pp. 216–233, Milan, Italy, 2024b. Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, and Xiang Bai. On the hidden mystery of ocr in large multimodal models. CoRR, abs/2305.07895, 2023e. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Confer- ence on Learning Representations, New Orleans, LA, USA, 2019. Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. CoRR, abs/2105.04165, 2021. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: multimodal reasoning via thought chains for science question answering. In Advances of Neural Information Processing Systems, volume 35, pp. 2507–2521, New Orleans, LA, USA, 2022a. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. CoRR, abs/2209.14610, 2022b. Gen Luo, Xue Yang, Wenhan Dou, Zhaokai Wang, Jifeng Dai, Yu Qiao, and Xizhou Zhu. Mono- internvl: pushing the boundaries of monolithic multimodal large language models with endogenous visual pre-training. CoRR, abs/2410.08202, 2024. 15 Gen Luo, Wenhan Dou, Wenhao Li, Zhaokai Wang, Xue Yang, Changyao Tian, Hao Li, Weiyun Wang, Wenhai Wang, Xizhou Zhu, Yu Qiao, and Jifeng Dai. Mono-internvl-1.5: Towards cheaper and faster monolithic multimodal large language models. CoRR, abs/2507.12566, 2025. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 11–20, Las Vegas, NV, USA, 2016. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. OK-VQA: a visual question answering benchmark requiring external knowledge. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3195–3204, Vienna, Austria, 2019. Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq R. Joty, and Enamul Hoque. Chartqa: a benchmark for question answering about charts with visual and logical reasoning. In Annual Meeting of the Association for Computational Linguistics, pp. 2263–2279, Dublin, Ireland, 2022. Minesh Mathew, Viraj Bagal, Rub`en Tito, Dimosthenis Karatzas, Ernest Valveny, and C. V. Jawahar. Infographicvqa. In IEEE Winter Conference on Applications of Computer Vision, pp. 2582–2591, Waikoloa, HI, USA, 2022. Nitesh Methani, Pritha Ganguly, Mitesh M Khapra, and Pratyush Kumar. Plotqa: Reasoning over scientific plots. In IEEE Winter Conference on Applications of Computer Vision, pp. 1527–1536, 2020. Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. OCR-VQA: visual question answering by reading text in images. In International Conference on Document Analysis and Recognition, pp. 947–952, Sydney, Australia, 2019. OpenAI. Gpt-5: A unified multimodal model, 2025. URL https://openai.com/gpt-5. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, volume 139, pp. 8748–8763, virtual, 2021. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. LAION-400M: open dataset of clip-filtered 400 million image-text pairs. CoRR, abs/2111.02114, 2021. Christoph Schuhmann, Andreas K¨opf, Richard Vencu, Theo Coombes, and Romain Beaumont. Laion coco: 600m synthetic captions from laion2b-en, 2022. URL https://laion.ai/blog/ laion-coco/. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-OKVQA: a benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, volume 13668, pp. 146–162, Tel Aviv, Israel, 2022. Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. Kvqa: Knowledge- aware visual question answering. In AAAI Conference on Artificial Intelligence, volume 33, pp. 8876–8884, 2019. Baoguang Shi, Cong Yao, Minghui Liao, Mingkun Yang, Pei Xu, Linyan Cui, Serge Belongie, Shijian Lu, and Xiang Bai. Icdar2017 competition on reading chinese text in the wild (rctw-17). In International Conference on Document Analysis and Recognition, volume 1, pp. 1429–1434. IEEE, 2017. Mustafa Shukor, Louis B´ethune, Dan Busbridge, David Grangier, Enrico Fini, Alaaeldin El-Nouby, and Pierre Ablin. Scaling laws for optimal data mixtures. CoRR, abs/2507.09404, 2025a. Mustafa Shukor, Enrico Fini, Victor Guilherme Turrisi da Costa, Matthieu Cord, Joshua M. Susskind, and Alaaeldin El-Nouby. Scaling laws for native multimodal models. CoRR, abs/2504.07951, 2025b. 16 Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In European Conference on Computer Vision, volume 12347, pp. 742–758, Glasgow, UK, 2020. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 8317–8326, Long Beach, CA, USA, 2019. Yipeng Sun, Zihan Ni, Chee-Kheng Chng, Yuliang Liu, Canjie Luo, Chun Chet Ng, Junyu Han, Errui Ding, Jingtuo Liu, Dimosthenis Karatzas, et al. Icdar 2019 competition on large-scale street view text with partial labeling-rrc-lsvt. In International Conference on Document Analysis and Recognition, pp. 1557–1562, 2019. Chenxin Tao, Shiqian Su, Xizhou Zhu, Chenyu Zhang, Zhe Chen, Jiawen Liu, Wenhai Wang, Lewei Lu, Gao Huang, Yu Qiao, and Jifeng Dai. Hovle: Unleashing the power of monolithic vision- language models with holistic vision-language embedding. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 14559–14569, Nashville, TN, USA, 2025. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. https://crfm. stanford. edu/2023/03/13/alpaca. html, 3(6):7, 2023. Chameleon Team. Chameleon: mixed-modal early-fusion foundation models. CoRR, abs/2405.09818, 2024. Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023. URL https://huggingface.co/datasets/teknium/OpenHermes-2.5. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aur´elien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023. Michael Tschannen, Alexey A. Gritsenko, Xiao Wang, Muhammad Ferjad Naeem, Ibrahim Alab- dulmohsin, Nikhil Parthasarathy, Talfan Evans, Lucas Beyer, Ye Xia, Basil Mustafa, Olivier J. H´enaff, Jeremiah Harmsen, Andreas Steiner, and Xiaohua Zhai. Siglip 2: Multilingual vision- language encoders with improved semantic understanding, localization, and dense features. CoRR, abs/2502.14786, 2025. Andreas Veit, Tomas Matera, Lukas Neumann, Jiri Matas, and Serge Belongie. Coco-text: Dataset and benchmark for text detection and recognition in natural images. CoRR, abs/1601.07140, 2016. Han Wang, Yongjie Ye, Bingru Li, Yuxiang Nie, Jinghui Lu, Jingqun Tang, Yanjie Wang, and Can Huang. Vision as lora. CoRR, abs/2503.20680, 2025a. Junke Wang, Lingchen Meng, Zejia Weng, Bo He, Zuxuan Wu, and Yu-Gang Jiang. To see is to believe: prompting GPT-4V for better visual instruction tuning. CoRR, abs/2311.07574, 2023. Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: enhancing vision-language model’s perception of the world at any resolution. CoRR, abs/2409.12191, 2024a. 17 Weiyun Wang, Zhangwei Gao, Lixin Gu, Hengjun Pu, Long Cui, Xingguang Wei, Zhaoyang Liu, Linglin Jing, Shenglong Ye, Jie Shao, et al. Internvl3.5: Advancing open-source multimodal models in versatility, reasoning, and efficiency. CoRR, abs/2508.18265, 2025b. Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, Yingli Zhao, Yulong Ao, Xuebin Min, Tao Li, Boya Wu, Bo Zhao, Bowen Zhang, Liangdong Wang, Guang Liu, Zheqi He, Xi Yang, Jingjing Liu, Yonghua Lin, Tiejun Huang, and Zhongyuan Wang. Emu3: next-token prediction is all you need. CoRR, abs/2409.18869, 2024b. Xilin Wei, Xiaoran Liu, Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Jian Tong, Haodong Duan, Qipeng Guo, Jiaqi Wang, Xipeng Qiu, and Dahua Lin. Videorope: What makes for good video rotary position embedding? CoRR, abs/2502.05173, 2025. Chenyuan Wu, Pengfei Zheng, Ruiran Yan, Shitao Xiao, Xin Luo, Yueze Wang, Wanli Li, Xiyan Jiang, Yexin Liu, Junjie Zhou, Ze Liu, Ziyi Xia, Chaofan Li, Haoge Deng, Jiahao Wang, Kun Luo, Bo Zhang, Defu Lian, Xinlong Wang, Zhongyuan Wang, Tiejun Huang, and Zheng Liu. Omnigen2: Exploration to advanced multimodal generation. CoRR, abs/2506.18871, 2025. xAI. Grok-1.5 vision preview, 2024. URL https://x.ai/blog/grok-1.5v. xAI. Grok 3: xAI’s flagship ai model, 2025. URL https://x.ai/news/grok-3. Yicheng Xiao, Lin Song, Rui Yang, Cheng Cheng, Zunnan Xu, Zhaoyang Zhang, Yixiao Ge, Xiu Li, and Ying Shan. Haploomni: Unified single transformer for multimodal video understanding and generation. CoRR, abs/2506.02975, 2025. Rui Yan, Lin Song, Yicheng Xiao, Runhui Huang, Yixiao Ge, Ying Shan, and Hengshuang Zhao. Haplovl: A single-transformer baseline for multi-modal understanding. CoRR, abs/2503.14694, 2025. An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jian Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. Qwen3 technical report. CoRR, abs/2505.09388, 2025. Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: preliminary explorations with gpt-4v(ision). CoRR, abs/2309.17421, 2023. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. Modeling context in referring expressions. In European Conference on Computer Vision, volume 9906, pp. 69–85, Amsterdam, The Netherlands, 2016. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. CoRR, abs/2309.12284, 2023. Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: evaluating large multimodal models for integrated capabilities. In International Conference on Machine Learning, Vienna, Austria, 2024. Hangjie Yuan, Weihua Chen, Jun Cen, Hu Yu, Jingyun Liang, Shuning Chang, Zhihui Lin, Tao Feng, Pengwei Liu, Jiazheng Xing, Hao Luo, Jiasheng Tang, Fan Wang, and Yi Yang. Lumos-1: On autoregressive video generation from a unified model perspective. CoRR, abs/2507.08801, 2025. Tai-Ling Yuan, Zhe Zhu, Kun Xu, Cheng-Jun Li, Tai-Jiang Mu, and Shi-Min Hu. A large chinese text dataset in the wild. Journal of Computer Science and Technology, 34(3):509–521, 2019. 18 Xiang Yue, Yuansheng Ni, Tianyu Zheng, Kai Zhang, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MMMU: a massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 9556–9567, Seattle, WA, USA, 2024. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In IEEE International Conference on Computer Vision, pp. 11941–11952, Paris, France, 2023. Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Advances of Neural Information Processing Systems, pp. 12360–12371, Vancouver, BC, Canada, 2019. Bo Zhao, Boya Wu, and Tiejun Huang. SVIT: scaling up visual instruction tuning. CoRR, abs/2307.04087, 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances of Neural Information Processing Systems, 36:46595–46623, 2023. Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, Zhangwei Gao, Erfei Cui, Xuehui Wang, Yue Cao, Yangzhou Liu, Xingguang Wei, Hongjie Zhang, Haomin Wang, Weiye Xu, Hao Li, Jiahao Wang, Nianchen Deng, Songze Li, Yinan He, Tan Jiang, Jiapeng Luo, Yi Wang, Conghui He, Botian Shi, Xingcheng Zhang, Wenqi Shao, Junjun He, Yingtong Xiong, Wenwen Qu, Peng Sun, Penglong Jiao, Han Lv, Lijun Wu, Kaipeng Zhang, Huipeng Deng, Jiaye Ge, Kai Chen, Limin Wang, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. CoRR, abs/2504.10479, 2025. 19 A APPENDIX USAGE OF LARGE LANGUAGE MODELS During manuscript preparation, large language models were used solely as writing assistants. They helped to check grammar, refine sentence structure, and provide style alternatives. All content related to methodology, experiments, and conclusions was developed entirely by the authors. LLM outputs were reviewed critically, and only human-verified edits were incorporated into the final text. A.1 SUPERVISED FINE-TUNING DATASETS Table 4: Dataset summary in supervised fine-tuning stage. Task Dataset Captioning TextCaps (en) Sidorov et al. (2020), ShareGPT4V (en&zh) Chen et al. (2024b) VQAv2 (en) Goyal et al. (2017), GQA (en) Hudson & Manning (2019), OKVQA (en) Marino et al. (2019), General QA VSR (en) Liu et al. (2023a), VisualDialog (en) Das et al. (2017) Science AI2D (en) Kembhavi et al. (2016), ScienceQA (en) Lu et al. (2022a), TQA (en) Kembhavi et al. (2017) ChartQA (en) Masry et al. (2022), MMC-Inst (en) Liu et al. (2023c), DVQA (en) Kafle et al. (2018), Chart PlotQA (en) Methani et al. (2020), LRV-Instruction (en) Liu et al. (2023b) GeoQA+ (en) Cao & Xiao (2022), TabMWP (en) Lu et al. (2022b), MathQA (en) Yu et al. (2023), Mathematics CLEVR-Math/Super (en) Lindstr¨om & Abraham (2022); Li et al. (2023c), Geometry3K (en) Lu et al. (2021) KVQA (en) Shah et al. (2019), A-OKVQA (en) Schwenk et al. (2022), ViQuAE (en) Lerner et al. (2022), Knowledge Wikipedia (en&zh) He et al. (2023) OCRVQA (en) Mishra et al. (2019), InfoVQA (en) Mathew et al. (2022), TextVQA (en) Singh et al. (2019), ArT (en&zh) Chng et al. (2019), COCO-Text (en) Veit et al. (2016), CTW (zh) Yuan et al. (2019), LSVT (zh) Sun et al. (2019), RCTW-17 (zh) Shi et al. (2017), ReCTs (zh) Liu et al. (2019), OCR SynthDoG (en&zh) Kim et al. (2022), ST-VQA (en) Biten et al. (2019) Document DocVQA (en) Clark & Gardner (2018), Common Crawl PDF (en&zh) Grounding RefCOCO/+/g (en) Yu et al. (2016); Mao et al. (2016), Visual Genome (en) Krishna et al. (2017) LLaVA-150K (en&zh) Liu et al. (2023d), LVIS-Instruct4V (en) Wang et al. (2023), ALLaVA (en&zh) Chen et al. (2024a), Laion-GPT4V (en) LAION (2023), Conversation TextOCR-GPT4V (en) Jimmycarter (2023), SVIT (en&zh) Zhao et al. (2023) OpenHermes2.5 (en) Teknium (2023), Alpaca-GPT4 (en) Taori et al. (2023), COIG-CQIA (zh) Bai et al. (2024), Text-only ShareGPT (en&zh) Zheng et al. (2023) A.2 IMPLEMENTATION DETAILS Table 5: Implementation details in the pre-training, mid-training and supervise fine-tuning. Configuration Pre-Training Mid-Training Supervised Fine-Tuning Resolution 2562 −1, 0242 2562 −2, 0482 2562 −2, 0482 Optimizer AdamW Optimizer hyperparameters β1 = 0.9, β2 = 0.999, eps = 1e−8 Learning rate schedule cosine with min lr cosine with min lr cosine decay Peak learning rate 8e−4 4e−5 5e−5 Min learning rate ratio 0.05 0.1 – Weight decay 0.01 Training steps 190k 50k 6k Warm-up steps 2k 200 200 Max sample length 8, 192 8, 192 8, 192 Global batch size 2, 560 1, 200 650 Text-only ratio 0.3 0.3 – Numerical precision bfloat16 20 A.3 LIMITATION AND DISCUSSION In this study, we innovate network architectures and training strategies for efficiently building native vision-language models. The full promise of NEO has remained largely untapped, hindered by scarce training data and limited computational resources, especially in knowledge-intensive and OCR-focused domains. Yet, strikingly, our NEO rivals state-of-the-art VLMs despite these severe constraints. We envision subsequent directions of NEO for the native VLM community as follows: Contextual relevance to recent advancements. Recent models such as Qwen3VL highlight concepts that resonate with our design choices, including dense linking of visual-language features, relative positional encodings, and architectural details like patch embedding and bias. In particular, the DeepStack approach underscores the importance of establishing strong pixel-word associations from the earliest stages, reinforcing the significance of densely integrated visual-language representations. Maximizing the potential via large investment. It is in great demand for continuously investing substantial resources, especially during the pre-training stage, to fully unlock NEO’s performance and approach the upper bound of the native model. At the same time, selectively open-sourcing key components during intermediate development can reduce follow-up training costs for future researchers and attract more research to native visual-language models. Moreover, the fundamental models from this work provide a valuable baseline for advancing reinforcement learning research. Explorations of full-spectrum model capacities. Expanding the full model sizes remains a critical factor in advancing various real-world applications. Even with limited resources, NEO-2.2B closely matches those of modular visual-language models with equivalent capacity, suggesting that the design philosophy of models in the 0.6 to 8 billion parameter range has matured. Such architectures not only achieve high performance but also facilitate the deployment of lightweight models at the edge, which is crucial for scenarios with limited computational resources or strict real-time requirements. Upgrading architectures and applications. To date, our work has focused on dense models for image-text understanding, while a sparse divide-and-conquer architecture is simultaneously under active development. Notably, we regard NEO not merely as an autoregressive VLM but as a new paradigm for visual-language intelligence. Its principle is to leverage end-to-end training within a unified architecture, eliminating manually imposed biases and scaling-up complexities by allowing data and models to dictate the learning process. Besides, our efforts are designed not merely to improve performance but to establish a definitive baseline for visual-language generation, long video understanding, and embodied AI. Crucially, NEO’s architecture systematically integrates the demands of video generation and related tasks, including attention mechanisms and rotary positional encodings, from the ground up. Although currently focused on text and images, NEO is poised to push the boundaries of what is possible across a wide spectrum of application scenarios and input modalities. Constrained by current text corpus and computational resources, we are unable to train a fully native model entirely from scratch without initialization from an existing LLM. This limitation also hinders our ability to mitigate potential biases arising from the dominance of the language modality. Despite these challenges, our NEO extends beyond providing a reusable pre-buffer that lowers the cost of adapting advanced LLMs—with updated weights and stronger capabilities—into VLMs under limited budgets. More importantly, NEO reveals the potential performance ceiling of native VLM architectures and provides valuable insights for future research on de novo multimodal training. 21
FROM PIXELS TO WORDS - TOWARDS NATIVE VISIONLANGUAGE PRIMITIVES AT SCALE Haiwen Diao1 Mingxuan Li2 Silei Wu3 Linjun Dai3 Xiaohua Wang2 Hanming Deng3 Lewei Lu3 Dahua Lin3 Ziwei Liu1 1S-Lab, Nanyang Technological University 2Xi'an Jiaotong University 3SenseTime Research Website: https://github.com/EvolvingLMMs-Lab/NEO ABSTRACT The edifice of native Vision-Language Models (VLMs) has emerged as a rising contender to typical modular VLMs, shaped by evolving model architectures and training paradigms. Yet, two lingering clouds cast shadows over its widespread exploration and promotion: (-) What fundamental constraints set native VLMs apart from modular ones, and to what extent can these barriers be overcome? (-) How to make research in native VLMs more accessible and democratized, thereby accelerating progress in the field. In this paper, we clarify these challenges and outline guiding principles for constructing native VLMs. Specifically, one native VLM primitive should: (i) effectively align pixel and word representations within a shared semantic space; (ii) seamlessly integrate the strengths of formerly separate vision and language modules; (iii) inherently embody various crossmodal properties that support unified vision-language encoding, aligning, and reasoning. Hence, we launch NEO, a novel family of native VLMs built from first principles, capable of rivaling top-tier modular counterparts across diverse realworld scenarios. With only 390M image-text examples, NEO efficiently develops visual perception from scratch while mitigating vision-language conflicts inside a dense and monolithic model crafted from our elaborate primitives. We position NEO as a cornerstone for scalable and powerful native VLMs, paired with a rich set of reusable components that foster a cost-effective and extensible ecosystem. 1 INTRODUCTION Recently, Vision-Language Models (VLMs) Bai et al. (2025); Zhu et al. (2025); Wang et al. (2025b); xAI (2025); Anthropic (2025); DeepMind (2025); Hurst et al. (2024); OpenAI (2025) have emerged as a major breakthrough, extending the strong linguistic capabilities of Large Language Models (LLMs) to multimodal understanding. They typically follow a modular design that integrates a pre-trained Visual Encoder (VE) Radford et al. (2021); Chen et al. (2024f); Fang et al. (2023); Tschannen et al. (2025), a Projector Alayrac et al. (2022); Liu et al. (2024a); Dai et al. (2024), and an LLM Touvron et al. (2023); Yang et al. (2025); DeepSeek-AI et al. (2025). Through multi-stage post-training at scale, they incrementally overcome limitations in image resolution, aspect ratio, and visual encoding flexibility. Yet, modular designs still contend with strong inductive biases in pre-trained visual semantics, complex infrastructure, and scaling laws needed to harmonize their components. Against this backdrop, native VLMs have arisen as a new avenue of exploration, with Fuyu Bavishi et al. (2023) and EVE Diao et al. (2024) pioneering a promising route towards monolithic VLMs. Subsequent efforts seek to learn vision perception from scratch and mitigate vision-language conflicts via visual encoder distillation Diao et al. (2024); Li et al. (2025b); Wang et al. (2025a); Li et al. (2025a), mixed training data Lei et al. (2025); Li et al. (2025a), and modality-specific decomposition Diao et al. (2025); Luo et al. (2024; 2025); Li et al. (2025a). Nonetheless, constructing visual representations via mapping functions inside pre-trained LLMs often hinders efficiency Chen et al. (2024d); Luo et al. (2024), destabilizes optimization Team (2024); Wang et al. (2024b), and disrupts original linguistic knowledge Diao et al. (2024); Chen et al. (2024d), even under decoupled designs or large budgets Beyer et al. (2024). Besides, HoVLE Tao et al. (2025) and HaploVL Yan et al. (2025) address 1 16 Oct 2025 Tile vs. Native Resolution Native Encode Pre-set Ratios Thumbnail Concat 1:1 1:2 1:3 1:4 1:5 1:6 2:3 3:2 ... Large Image Small Image or or Native VLM Primitives Image Text ×L FFN PEL WEL Late vs. Early Fusion LLM VE VLM Image Text Image Text Modular VLM Monolithic VLM (opt) X-Attn MHNA + + + + SIGLIP CLIP EVA Vision Encoder Language Model GPT Claude Qwen Deepseek 1. Dense Model Architecture 2. Parameter : 0.3B, 0.6B, ..., 22B 3. Patch Size : 14, 16, 30, 32 4. Absolute Position Embedding (Learnable PE or Sinusoidal PE) 5. Bi-Directional Attention 6. 2D-RoPE or Not 7. RoPE θ : 10000 8. Head Dim : 64, 128, 256 9. Layer Num : 6, 12, 24, ..., 27 1. Dense or MoE Model Architecture 2. Parameter : 0.6B, 1.7B, 8B, ..., 1043B 3. QK-Norm or Not 4. Sliding Window or Not 5. Hidden Activation : SiLU 6. Causal-only Attention 7. 1D-RoPE or Not 8. RoPE θ : 1000000 9. Head Dim : 128, 192, 256 10. Layer Num : 28, 32, ..., 64, 89 GPT Gemini QwenVL Grok Claude InternVL Property 5 RoPE Variants for Multi-Dimensional Relations Property 3 Efficient Pixel-Word Alignment VLM ( VE ) Pre-Align & Encode [ reusable after PT ] [ cost-effective for SFT ] (LLM) Fully-Align & Reason Property 4 Multi-Modal Mixed Attention LM LM VE VE EMA Property 2 Visual Learning at Scale Self-Supervision - SelfDistilling - Masked Prediction Property 1 Seamlessly Load LLMs VLM LLM layer-wise initialization Img1 Txt1 Img2 Txt2 Inside VE Inside LLM (2). Channels Frame-wise Bi-Directional Token-wise Casual Attention (1). Indexes ... ... T T T TT T frequency decay ... ... W H W H T T ... ... ... ... T ... ... ... H ... W T H W ... ... T, T, H,W, H,W, H,W ... ... M-RoPE Video-RoPE IL-RoPE MM-RoPE ... ... THW TTT [0,0,0] [1,0,0] [1,0,1] [1,1,0] [1,1,1] [2,0,0] [3,0,0] [4,0,0] [4,0,1] [4,1,0] [4,1,1] [5,0,0] [0,0,0] [1,1,1] [1,1,2] [1,2,1] [1,2,2] [2,0,0] [3,0,0] [4,4,4] [4,4,5] [4,5,4] [4,5,5] [5,0,0] [0,0,0] [1,0,0] [1,0,1] [1,1,0] [1,1,1] [2,2,2] [3,3,3] [4,0,0] [4,0,1] [4,1,0] [4,1,1] [5,5,5] [0,0,0] [1,1,1] [1,1,2] [1,2,1] [1,2,2] [2,2,2] [3,3,3] [4,4,4] [4,4,5] [4,5,4] [4,5,5] [5,5,5] red pills and blue - Next-Token Prediction - Contrastive Learning Cross-Supervision Model Tags 1. Dense Model 2. 2.2B / 9B Params 3. 32 x 32 Patch Size 4. Sinusoidal PE 5. Mixed Attention 6. Native 3D-RoPE 7. RoPE θ: 10K + 1M 8. Layer Num: 40 / 42 9. Reusable Pre-Buffer 0 ... Discrete vs. Continuous Patch Embed Token Quantize 1 N ... Patch Flatten Modular VLM Modules Figure 1: Overview of our native vision-language frameworks, which project arbitrary-resolution images into a continuous latent space, integrating the virtues of modular VLM architectures and enabling efficient vision-language encoding, alignment, and interaction in an early-fusion manner. this by first mapping vision-language inputs into a shared space. Yet, their modality-sharing modules, whether derived from LLM or VE layers, neglect intrinsic discrepancy in encoding and interaction across modalities, ultimately compromising VLM's capacity to unify visual-linguistic properties. Figure 1 outlines a central question: What properties must native VLMs possess to compete with modular ones? Modular VLMs decouple vision encoders from language models, allowing each to exploit modality-specific characteristics, e.g., bi-directional versus causal attention, distinct positional embeddings, and varied network configurations. This separation accelerates the development of visual and linguistic competencies and permits flexible combinations of individual components. However, it fragments the training procedure, increases alignment costs, and leaves the intermodal balance unresolved. Motivated by these analyses, we formulate the following strategies accordingly: (1) Native VLM Primitive. Native VLMs should embody a unified vision-language primitive that simultaneously integrates encoding, alignment, and reasoning across modalities in one single module. Its design should encompass three principles: (i) a Flexible Position Encoding scheme that generalizes effectively to dynamic spatial structures; (ii) a Multi-Head Native Attention (MHNA) that jointly processes visual-textual connectivity; (iii) Native Rotary Position Embeddings (Native-RoPE) with modality-specific frequencies, preserving compatibility with pretrained LLM's weights while absorbing original VE's interaction patterns. Guided by these tenets, we supercharge the fundamental LLM layers with a hybrid attention, expanded heads, and targeted RoPE across modalities, synchronously capturing multi-dimensional relationships for fine-grained and comprehensive correspondence. (2) Pre-Buffer and Post-LLM. The next crucial issue is to efficiently scale visual training while securing consistent pixel-word alignment. Here, we partition the monolithic backbone into pre-Buffer and post-LLM layers during pre-training, each rooted in identical native primitive architectures. This transient stage enables pretrained LLMs to steer visual learning and establish coherent relevance with later stages. As mid-training and supervised fine-tuning advance, the partition dissolves, yielding a unified architecture that autonomously allocates the VLM's capacities to their respective functions. This end-to-end training reduces semantic biases of separate pretraining and large overheads of post-stage alignment, effectively bridging native and modular VLMs. Crucially, pre-Buffer persists as a reusable pretrained asset, facilitating sustainable resources for native VLM development. We launch NEO, an innovative native VLM that reimagines multi-modal integration from first principles. Unlike typical modular designs, NEO rests on unified primitives that natively encode, align, and reason across modalities, forming coherent pixel-word correspondences from the outset. Through streamlined end-to-end training on 390M image-text samples, NEO acquires strong visual perception and rivals leading modular VLMs of comparable scale across diverse benchmarks. Beyond competitive results, NEO offers reusable components that simplify subsequent development and reduce barriers to promoting native exploration. This reveals that next-generation multimodal systems could also originate from architectures that are native, unified, and intrinsically multimodal. 2 Take Native Vision - Language Model the red Word Embedding Layer Patch Embedding Layer Conv1 Conv2 GELU Flatten ... , ! pill NEO the red , ! pill NEO Pos-Emb + ~ FFN + + MHNA FFN + + + + + + MHNA Pixel-Word Aligning via L1* Primitives as pre-Buffer Image-Text Reasoning via L2* Primitives as post-LLM Figure 2: Overview of our proposed NEO architecture. We begin with lightweight patch and word embedding layers that encode images and text into token sequences, which are subsequently processed by a monolithic decoder-only architecture. The pre-Buffer and post-LLM components, each stacked with multiple native primitives, facilitate efficient and precise pixel-word alignment and reasoning. 2 RELATED WORKS 2.1 MODULAR VISION-LANGUAGE MODELS Current Vision-Language Models (VLMs) have converged on a modular paradigm, where a pretrained Vision Encoder (VE) is paired with a Large Language Model (LLM) via lightweight adapters, e.g., projection layers Li et al. (2024a;b) or cross-attention mechanisms Alayrac et al. (2022); Dai et al. (2024). This architecture underlies both leading proprietary systems, including Claude Anthropic (2024; 2025), GPT Hurst et al. (2024); Yang et al. (2023); OpenAI (2025), and Gemini Comanici et al. (2025); DeepMind (2025), as well as prominent open-source frameworks such as InternVL Zhu et al. (2025); Wang et al. (2025b), Qwen-VL Wang et al. (2024a); Bai et al. (2025), and Grok xAI (2024; 2025). By harnessing the complementary strengths of vision and language components, modular architectures, adhering to the "ViT-MLP-LLM" pipeline, achieve unprecedented performance across diverse multimodal benchmarks and have emerged as the dominant design principle in the field. Despite empirical successes, modular VLMs remain constrained by multi-stage training and heterogeneous structures. Extensive post-training interventions are often required to mitigate rigid inductive biases in pretrained VEs Wang et al. (2024a), which limit resolution flexibility, erode fine-grained details, and blunt sensitivity to features across scales. Besides, pretraining semantic biases and capacity trade-offs between VEs and LLMs collectively impede design simplicity, deployment efficiency, and seamless integration of vision and language, underscoring the urgent need for a monolithic backbone. 2.2 NATIVE VISION-LANGUAGE MODELS Native VLMs embrace early-fusion integration rather than grafting VEs onto LLMs. Early Fuyu Bavishi et al. (2023), EVE Diao et al. (2024), and SOLO Chen et al. (2024d), embed image patches via linear projections, whereas Chameleon Team (2024), MoMA Lin et al. (2024), and MoT Liang et al. (2024) transform images into symbolic sequences via discrete tokenizers. Later studies Luo et al. (2024); Diao et al. (2025); Li et al. (2025b); Luo et al. (2025); Li et al. (2025a) leverage Mixture-ofExperts (MoE) or Divide-and-Conquer (DaC) strategies to suppress vision-language interference, while others Diao et al. (2024); Li et al. (2025b); Wang et al. (2025a); Li et al. (2025a) upgrade visual encoder supervision to accelerate the acquisition of visual concepts. Empirical evidence Beyer et al. (2024); Luo et al. (2024); Lei et al. (2025) reveals that, with sufficient data and progressive training, native VLMs rapidly approach modular counterparts, corroborating recent scaling-law insights Shukor et al. (2025b;a). Besides, recent methods Tao et al. (2025); Yan et al. (2025); Xiao et al. (2025) indicate that multi-modality encoding modules with the LLM or VE style slightly resolve vision-language misalignment, yet fail to fully integrate the distinct properties of each modality. Notably, NEO redefines native VLMs as a unibody system built from first-principle primitives. Every component-from image patch encoding, attention mechanism to rotary position embeddings-ensures full compatibility with the intrinsic modeling patterns of VEs and LLMs. Meanwhile, NEO evolves modular VLM strengths via the modality-agnostic pre-Buffer and end-to-end training, dramatically enhancing pixel-word alignment and pushing the frontier of native VLM research. 3 + + + + Multi-Head Native Attention Feed-Forward Network RMSNorm RMSNorm Native VLM Primitive RoPE θ Q K V T Linear Linear Linear Q-norm K-norm H W 1m 1m 10k 10k 10k 10k (1). Introduce new FC/Norm Into original Q, K for H, W Keep Original Mapping Mode T H W Rebuild New Mapping Mode (3). Introduce Frame-wise Native Multi-Modal Attention Img1 Txt1 Img2 Txt2 Img1 Txt1 Img2 Txt2 Bidirectional Attention Word / Frame-wise Causal Attention [3,0,0] [0,0,0] [1,0,0] [2,0,0] [4,0,0] [5,0,0] [6,0,0] [7,0,0] [8,0,0] [9,0,0] [10,0,0] [11,0,0] [12,0,0] [8,0,1] [3,0,1] [3,1,0] [3,1,1] [8,1,0] [8,1,1] red and blue pills . (2). Native Rotary Position Embedding (Native-RoPE) [T, H, W] Decoupling Index : Start Symbol : End Symbol Q / K Head a) Keep LLM Head Channels b) Add New Head Channels Split T H W LLM θ 1,000,000 VE θ 10,000 Decay 0 - D 0 - D/2 0 - D/2 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 60 70 T (0 - 63) H, W (0 - 31) Frequency Value Channel Index (i) c) Decompose 3D-RoPE Channels - H / W have higher frequencies than T - Index range:T:0 ~ n∗104~6 H / W:0 ~ n∗102 i) Simultaneously Compatible with Interaction Patterns of LLM / VE Q - ( T ) Q - ( H,W ) K - ( H,W ) K - ( T ) × × LLM's high- / lowfrequency for local / long-range relations VE's relatively high frequencies for local semantic dependency ii) Seamlessly Absorb Pre-trained LLMs iii) Efficiently Construct Pixel-Word Connection MHNA FFN + + MHNA + + FFN Build Post-LLM via × L2Primitives Load - Q,K,V (T) Load - LN (T) Load - FFN (T) Initial - Q (H, W) via Q (T) Initial - K (H, W) via Zero-W Build Pre-Buffer via × L1Primitives - Rebuild visual-textual encoding from scratch - Large visual training with fixed post-LLM - Reusable module for native vlm community Shared [H, W] for Same Positions across Images Figure 3: Overview of our native primitive, which integrates native attention with bi-directional dependencies within images and word / frame-wise causal interactions, together with native rotary position embeddings parameterized by modality-specific frequency, channel, and index allocation. It is inherently unified and intrinsically multimodal, substantially enhancing pixel-word correspondence. 3 METHODOLOGY 3.1 MODEL ARCHITECTURE Figure 2 illustrates the proposed NEO framework, which comprises lightweight patch and word embedding layers, a pre-Buffer, and a post-LLM, built upon stacked native VLM primitives. Patch and Word Embeddings. Given an image I, we convert it into token sequences via a lightweight Patch Embedding Layer (PEL) with two Convolutional layers (Conv1-2) Krizhevsky et al. (2012) and a Gaussian Error Linear Unit (GELU) Hendrycks & Gimpel (2016). For input text T, we encode it into word tokens using the original LLM Tokenizer as Word Embedding Layer (WEL): xv = Conv2(GELU(Conv1(I)) + PE), xt = Tokenizer(T), (1) where xv ∈R(h×w)×d / xt ∈Rn×d denote visual / textual tokens, and PE is 2D Sinusoidal Positional Encoding Dosovitskiy et al. (2021). The stride of Conv1 / Conv2 is 16 / 2, i.e., each visual token corresponds to a 32 × 32 image patch. Notably, Conv2 performs token folding like pixel unshuffle Chen et al. (2024e), with the special and tokens inserted at the boundaries of visual tokens, while mapping position and patch embeddings into a unified space. Afterward, visual and textual tokens are merged and propagated through the unified backbone. Native VLM Primitive. It adopts RMSNorm Zhang & Sennrich (2019) and SwiGLU Dauphin et al. (2017) consistent with the original LLM layers. Unlike prior methods Wang et al. (2024a); Bai et al. (2025); Zhu et al. (2025); Wang et al. (2025b) that collapse visual tokens into 1D representations or merely reallocate pre-trained LLM head dimensions across temporal (T), height (H), and width (W), we expand Query (Q) and Key (K) head dimensions while fully decoupling H, W, and T relations in Figure 3(1), causing an extra 10% parameter counts over the original Transformer block. The original T dimension is preserved, and additional H and W dimensions are added with their respective QK normalization Yang et al. (2025). Crucially, the linear weights of K for H and W channels are zero-initialized, and attention scale matches that of LLMs, maintaining the LLM pre-training paradigm and progressively activating multimodal capabilities for visual spatial relationships. This philosophy aligns with our Native Rotary Position Embedding (Native-RoPE) in Figure 3(2), which eliminates correlations between H / W and T indexes, while decoupling channel allocations of H, W, and T under the original LLM frequency. (1) Index Allocation: For text, T index is retained while H / W indexes are zeroed. Notably, Native-RoPE is equivalent to 1D-RoPE before training. For images, each visual token has a constant T index, with unique H / W indexes encoding spatial location. Videos, treated as sequences of frames, increment T index per frame, while H / W indexes follow the same spatial scheme as images. In multimodal inputs, each modality's T index starts from the maximum ID of the preceding modality, ensuring continuous and unambiguous positional encoding across modalities. This serves two purposes: (a) For image pairs, H / W indexes start independently 4 from (0,0), and tokens at corresponding positions share identical dependency, strongly reinforcing correlations and interactions across matching regions Liao et al. (2025); Wu et al. (2025); (b) For image-text pairs, H / W indexes are decoupled from T index and bounded within (0,0) and (H,W), preventing large T index growth from disproportionately affecting H / W indexes Wang et al. (2024a); Bai et al. (2025) and thereby keeping spatial dependencies between long-range text and images. Another key aspect is (2) Channel and Frequency Allocation. Unlike recent 3D-RoPE methods Bai et al. (2025); Wei et al. (2025); Yuan et al. (2025); Liao et al. (2025), we fully decouple the channel and frequency allocation of H, W, and T, equipped with additional Q/K head dimensions for H and W. This resolves two issues: (a) Zeroing H / W indexes for pure text would disrupt the modeling patterns and linguistic capacity of the LLM if restricted to its original channels. Repairing this disruption requires substantial resources; (b) Even with interleaved or segmented reallocation, H and W are theoretically equivalent but are assigned different frequencies. Meanwhile, the RoPE frequency in LLMs is far lower than that of visual encoders in Figure 3(2). This mismatch limits the modeling of relative distances and local semantics. The problem is exacerbated by the disparity in scales, with temporal ranges spanning up to one million and spatial ranges only a few hundred. Specifically, Native-RoPE assigns distinct base frequencies to T, H, and W within their own dimensions, i.e., original LLM head dimension for T and new head dimension for H / W as follows: ΘT = β -2k d T | k ∈[0, d 2) , ΘH = β -4i d H | i ∈[0, d 4) , ΘW = β -4j d W | j ∈[0, d 4) (2) where β and Θ indicate the base and rotation frequency across H, W, and T. Notably, temporal T dimension captures both local and long-range relations, whereas spatial H / W dimensions emphasize local dependencies. This also opens avenues for broader applications, e.g., video understanding Wei et al. (2025), multimodal generation Deng et al. (2025b), and editing Deng et al. (2025a). Inspired by prior works Lei et al. (2025); Deng et al. (2025b); Li et al. (2025a), we treat one single image as a unified meta-unit for autoregressive modeling. To enable this, we propose Native MultiModal Attention with mixed masking in Figure 3(c). Text tokens adhere to standard causal attention, attending only to preceding tokens to maintain autoregressive generation. In contrast, image tokens employ full bidirectional attention, enabling exhaustive interactions among all visual tokens, akin to a visual encoder. This design captures rich spatial and contextual dependencies within images and facilitates vision-language correspondences, thereby supporting complex multimodal reasoning. We use FlexAttention Dong et al. (2024) to minimize memory overhead and increase throughput, as variable-length block-wise attention is fully optimized through CUDA kernel modifications. Pre-Buffer and Post-LLM. Drawing on modular designs Bai et al. (2025); Wang et al. (2025b), we split NEO into encoding and reasoning components at the outset. In contrast, we build a modalityshared pre-Buffer via native primitives to map vision and language into a unified representation space. We further design a post-LLM via native primitives to absorb the powerful language proficiency and reasoning capabilities of LLMs. This, in turn, promotes deep pixel-word integration within the preBuffer-a deliberate design choice to ensure rich multimodal alignment while minimizing disturbance to the LLM. The layer depth in the pre-Buffer and post-LLM primarily refers to the model parameters of existing VEs and LLMs, ensuring a relatively fair comparison while balancing accuracy and efficiency. Crucially, this separation exists only during pre-training to boost visual learning; during mid-training and supervised fine-tuning, the components are upgraded to a monolithic backbone, allowing the VLM to automatically allocate capacity for encoding, alignment, and reasoning. 3.2 TRAINING PROCEDURE Figure 4 illustrates the whole training pipeline, which proceeds through three progressive stages: pre-training, mid-training, and supervised fine-tuning. The entire model is optimized end-to-end. Pre-Training Stage. In this phase, NEO acquires fundamental visual concepts and contextual dependencies from scratch, guided by pre-trained patterns from LLMs. Training leverages 345M web-scale and synthetic image-caption pairs, including 100M English and 20M Chinese pairs from LAION-400M Schuhmann et al. (2021), 150M English pairs from COYO-700M Byeon et al. (2022), 20M long-caption examples from BLIP3o Chen et al. (2025), and 5M short-caption pairs from OpenImages Kuznetsova et al. (2018), recaptioned with a pre-trained InternVL2-8B model. The dataset is further enriched with 30M samples from LAION-COCO Schuhmann et al. (2022) and 20M 5 Stage 1: Pre-Training Stage 3: Supervised Fine-Tuning PEL NEO 🔥 WEL 🔥 🔥 NEO Stage 2: Mid-Training 🧊 🔥 Pre-Buffer, new QK in Post-LLM NEO 🔥 PEL 🔥 WEL PEL 🔥 WEL 🔥 🧊 256! - 1024! Any Resolution 345M Image-Text Caption Data Pairs 256! - 2048! Any Resolution 40M Caption QA OCR / Detection 256! - 2048! Any Resolution 4M High-Quality Instruction Data Figure 4: Overview of the entire training recipe. During pre-training, NEO learns visual perception from massive web-scale and synthetic image-caption pairs with frozen LLM weights to preserve linguistic knowledge. In mid-training and supervised fine-tuning, the full model is progressively optimized end-to-end using caption, conversation, OCR, detection, and high-quality instruction data. examples from Wukong Gu et al. (2022) with rich Optical Character Recognition (OCR) annotations. A 3:7 ratio of language to multi-modal data is incorporated to reconstruct text projections in the pre-Buffer. Only the patch embedding layer, the pre-Buffer, and additional QK linear weights and normalization, along with H and W, are trainable and optimized with a simple next-token prediction objective. Notably, the new QK heads not only counteract the LLM's strong language bias that limits visual specialization but also safeguard its capabilities against the effects of low-quality data. Mid-Training Stage. The objective at this stage is to strengthen the alignment between visual and linguistic capabilities while progressively enhancing recognition of high-resolution images, complex scenes, object scales, spatial grounding, and compact OCR content. The training data is drawn from the pre-training corpus of InternVL-1.5 Chen et al. (2024f), comprising 40M samples across image captioning, conversation, detection, and OCR data, which account for approximately 66%, 11%, 8%, and 15% of the total, respectively. A 3:7 ratio of language to multi-modal data is again applied. The entire architecture is updated with the same loss functions to consolidate vision-language alignment, thereby equipping NEO with the foundational abilities required for various visual scenarios. Supervised Fine-Tuning Stage. During the SFT stage, NEO's ability to follow complex linguistic instructions and varied dialogue patterns is further enhanced, a critical step towards real-world deployment. The full network is optimized across diverse high-quality, multi-source instruction datasets. Following Mono-InternVL Luo et al. (2024), we employ about 4M bilingual instructions for supervised learning, covering tasks such as visual question answering, multimodal dialogue, mathematics, and knowledge reasoning. Details of the instruction data are provided in the Appendix. 4 EXPERIMENTS 4.1 TRAINING SETTINGS Our NEO models are built on Qwen3-1.7B and Qwen3-8B Yang et al. (2025) as the LLMs. The pre-Buffer employs L1 = 12 primitive layers for NEO-2.2B and L1 = 6 for NEO-9B. We extend only the QK head dimension in raw transformer layers, introducing roughly 10% extra parameters over the original design. The base RoPE frequencies βT , βH, and βW are set to 1×106, 1×104, and 1×104, respectively. NEO is trained on sixteen 8-GPU (80G) nodes using the AdamW optimizer Loshchilov & Hutter (2019). The maximum learning rates for pre-training, mid-training, and SFT are 8 × 10-4, 4 × 10-5, and 5 × 10-5, with a warm-up ratio of 0.01 and a cosine decay scheduler across all stages. 4.2 MAIN RESULTS We conduct standard evaluations with VLMEvalKit Duan et al. (2024) on diverse benchmarks, covering chart, diagram, and document understanding tasks, e.g., AI2D Kembhavi et al. (2016), DocVQA Clark & Gardner (2018), ChartQA Masry et al. (2022), InfoVQA Mathew et al. (2022), TextVQA Singh et al. (2019), and OCRBench Liu et al. (2023e); visual perception and challenging reasoning tasks, e.g., MMMU Yue et al. (2024), MMBench-EN (MMB) Liu et al. (2024b), MMVet Yu et al. (2024), MMStar Chen et al. (2024c), SEEDBench-IMG (SEED-I) Li et al. (2023a); visual hallucination tasks, e.g., POPE Li et al. (2023b) and HallusionBench (HallB) Guan et al. (2024). 6 Table 1: Comparison with modular and native VLMs on general vision-language benchmarks. "# Data" denotes the dataset scale during pre-training, mid-training, and supervised fine-tuning. † indicates models that employ reinforcement learning (RL). Bold highlights the highest performance. Model LLM # Data MMMU MMB MMVet MMStar SEED-I POPE HallB ▼Modular Vision-Language Models (2B) Qwen2-VL Qwen2-1.5B - / - / - 41.1 74.9 49.5 48.0 - - 41.7 InternVL2.5 InternLM2.5-1.8B >6B / 100M / 16M 43.6 74.7 60.8 53.7 - 90.6 42.6 Qwen2.5-VL† Qwen2.5-1.5B - / - / - 51.2 79.1 61.8 55.9 - - 46.3 InternVL3† Qwen2.5-1.5B >6B / 100M / 22M 48.6 81.1 62.2 60.7 - 89.6 42.5 Encoder-Based Qwen3-1.7B >6B / 40M / 4M 47.1 75.8 37.4 52.7 73.6 87.0 44.4 ▼Native Vision-Language Models (2B) Mono-InternVL InternLM2-1.8B 1.2B / 143M / 7M 33.7 65.5 40.1 - 67.4 - 34.8 Mono-InternVL-1.5 InternLM2-1.8B 400M / 150M / 7M 39.1 64.0 54.0 - 66.9 - 32.5 HoVLE InternLM2-1.8B 550M / 50M / 7M 32.2 73.3 43.8 - 70.9 87.4 38.4 OneCAT Qwen2.5-1.5B 436M / 70M / 13M 39.0 72.4 42.4 - 70.9 - - NEO Qwen3-1.7B 345M / 40M / 4M 48.6 76.0 49.6 54.2 74.2 87.5 43.1 ▼Modular Vision-Language Models (8B) Qwen2-VL Qwen2-7B - / - / - 54.1 83 62.0 60.7 - 88.1 50.6 InternVL2.5 InternLM2.5-7B >6B / 50M / 4M 56.0 84.6 62.8 64.4 - 90.6 50.1 Qwen2.5-VL† Qwen2.5-7B - / - / - 55.0 83.5 67.1 63.9 - 86.4 52.9 InternVL3† Qwen2.5-7B >6B / 100M / 22M 62.7 83.4 81.3 68.2 - 91.1 49.9 Encoder-Based Qwen3-8B >6B / 40M / 4M 54.1 84 60.0 63.5 76.2 87.8 51.4 ▼Native Vision-Language Models (8B) Fuyu Persimmon-8B - / - / - 27.9 10.7 21.4 - 59.3 84.0 - Chameleon from scratch 1.4B / 0M / 1.8M 25.4 31.1 8.3 - 30.6 19.4 17.1 EVE Vicuna-7B 33M / 0M / 1.8M 32.6 52.3 25.7 - 64.6 85.0 26.4 SOLO Mistral-7B 44M / 0M / 2M - 67.7 30.4 - 64.4 78.6 - Emu3 from scratch - / - / - 31.6 58.5 37.2 - 68.2 85.2 - EVEv2 Qwen2.5-7B 77M / 15M / 7M 39.3 66.3 45.0 - 71.4 87.6 - BREEN Qwen2.5-7B 13M / 0M / 4M 42.7 71.4 38.9 51.2 - - 37.0 VoRA Qwen2.5-7B 30M / 0M / 0.6M 32.0 61.3 33.7 - 68.9 85.5 - SAIL Mistral-7B 512M / 86M / 6M - 70.1 46.3 53.1 72.9 85.8 54.2 NEO Qwen3-8B 345M / 40M / 4M 54.6 82.1 53.6 62.4 76.3 88.4 46.4 Following InternVL3 Zhu et al. (2025), we construct the Encoder-Based by combining Qwen3 Yang et al. (2025) and InternViT-300M Zhu et al. (2025). In the mid-training stage, we first train the projector on 10M samples, and further unfreeze the vision encoder utilizing another 30M samples. Comparison with Modular VLMs. As demonstrated in Table 1 and Table 2, NEO achieves highly competitive performance at both the 2B and 8B scales, despite using relatively limited pre-training and supervised fine-tuning data and without reinforcement learning. Remarkably, NEO approaches the performance of top-tier modular VLMs, e.g., Qwen2-VL Wang et al. (2024a), InternVL2.5 Chen et al. (2024e), Qwen2.5-VL Bai et al. (2025), and InternVL3 Zhu et al. (2025) across multiple benchmarks, rivaling architectures trained on billions of additional samples. These results highlight the effectiveness of our end-to-end training strategy and unified model design. By combining native attention mechanisms with Native-RoPE, NEO enhances interactions between visual and linguistic features, enabling it to match more complex modular systems despite its simpler architecture. Comparison with Native VLMs. From Table 1 and Table 2, NEO delivers substantial gains on visual-centric benchmarks over the best competitors, e.g., Mono-InterVL Luo et al. (2024; 2025), HoVLE Tao et al. (2025), OnCAT Li et al. (2025a), EVE Diao et al. (2024; 2025), Emu3 Wang et al. (2024b), BREEN Li et al. (2025b), VoRA Wang et al. (2025a), and SAIL Lei et al. (2025). By seamlessly integrating post-LLM components with the pre-Buffer for large-scale visual learning, NEO aligns visual inputs with textual features from scratch and supports complex visual reasoning, even without visual encoder supervision Diao et al. (2024); Tao et al. (2025); Li et al. (2025a); Wang et al. (2025a); Li et al. (2025b), highlighting the strengths of its native primitive designs and training strategies. These design choices allow NEO to surpass many native VLMs using fewer training resources, demonstrating the advantages of our primitives with efficient data-scaling capability. 7 Table 2: Comparison with modular and native VLMs on visual question answering benchmarks. Any Res., Tile-wise, Any Rat., and Fix Res. refer to any resolution, image tile splitting, any aspect ratio, and fixed resolution. MoE and DaC are Mixture-of-Experts and Divide-and-Conquer models. Model Input RoPE Backbone AI2D DocVQA ChartQA InfoVQA TextVQA OCRBench ▼Modular Vision-Language Models (2B) Qwen2-VL Any Res. M-RoPE Dense 74.7 90.1 73.5 65.5 79.7 80.9 InternVL2.5 Tile-wise 1D-RoPE Dense 74.9 88.7 79.2 60.9 74.3 80.4 Qwen2.5-VL† Any Res. M-RoPE Dense 81.6 93.9 84.0 77.1 79.3 79.7 InternVL3† Tile-wise 1D-RoPE Dense 78.7 88.3 80.2 66.1 77.0 83.5 Encoder-Based Tile-wise 1D-RoPE Dense 77.4 89.9 78.4 65.9 73.3 83.5 ▼Native Vision-Language Models (2B) Mono-InternVL Tile-wise. 1D-RoPE MoE 68.6 80.0 73.7 43.0 72.6 76.7 Mono-InternVL-1.5 Tile-wise. 1D-RoPE DaC 67.4 81.7 72.2 47.9 73.7 80.1 HoVLE Tile-wise. 1D-RoPE Dense 73.0 86.1 78.6 55.7 70.9 74.0 OneCAT Any Res. M-RoPE Dense 72.4 87.1 76.2 56.3 67.0 - NEO Any Res. Native-RoPE Dense 80.1 89.9 81.2 63.2 74.0 77.1 ▼Modular Vision-Language Models (8B) Qwen2-VL Any Res. M-RoPE Dense 83.0 94.5 83 76.5 84.3 86.6 InternVL2.5 Tile-wise 1D-RoPE Dense 84.5 93.0 84.8 77.6 79.1 82.2 Qwen2.5-VL† Any Res. M-RoPE Dense 83.9 95.7 87.3 82.6 84.9 86.4 InternVL3† Tile-wise 1D-RoPE Dense 85.2 92.7 86.6 76.8 80.2 88 Encoder-Based Tile-wise 1D-RoPE Dense 82.9 92.1 83.5 75 77.1 85.3 ▼Native Vision-Language Models (8B) Fuyu Any Res. 1D-RoPE Dense 64.5 - - - - 36.6 Chameleon Fix Res. 1D-RoPE Dense 46.0 1.5 2.9 5.0 4.8 0.7 EVE Any Rat. 1D-RoPE Dense 61.0 53.0 59.1 25.0 56.8 39.8 SOLO Any Res. 1D-RoPE Dense 61.4 - - - - 12.6 Emu3 Fix Res. 1D-RoPE Dense 70 76.3 68.6 43.8 64.7 68.7 EVEv2 Any Rat. 1D-RoPE DaC 74.8 - 73.9 - 71.1 70.2 BREEN Any Res. 1D-RoPE MoE 76.4 - - - 65.7 - VoRA Any Res. 1D-RoPE Dense 61.1 - - - 58.7 - SAIL Any Res. M-RoPE Dense 76.7 - - - 77.1 78.3 NEO Any Res. Native-RoPE Dense 83.1 88.6 82.1 60.9 75.0 77.7 Despite strong results, NEO lags on knowledge-/OCR-heavy tasks, e.g., MMMU, InfoVQA, and TextVQA. Interestingly, NEO-9B does not surpass NEO-2B on DocVQA and InfoVQA, indicating limitations in our current training corpus. Even so, NEO performs well under constraints, highlighting the native VLM as a scalable paradigm. Larger datasets and resources can unlock its full potential. 4.3 ABLATION STUDIES Unless otherwise specified, we report the average evaluation results, denoted as Avg., across ten vision-language benchmark datasets in Table 3. The pre-Buffer and new head dimensions in the post-LLM are trained on 20M pre-training samples, followed by full-backbone fine-tuning on 2M SFT instruction data. These constitute the standard training settings for our ablation studies. 32 39.7 44 46.8 48 47.2 48.8 20 30 40 50 60 0 2 4 6 8 10 12 Avg. Accuracy (%) Figure 5: Configurations of pre-Buffer. Hyperparameters of the Pre-Buffer Layer. Figure 5 illustrates the relationship between the number of preBuffer layers and the model's average accuracy, using Qwen3-1.7B as the post-LLM. Performance improves consistently as the layer count increases, but gains begin to saturate beyond eight layers. To maximize accuracy while maintaining the same capacity as publicly available vision encoders Chen et al. (2024f); Radford et al. (2021); Zhai et al. (2023), we select 12 layers for NEO-2.2B. Notably, we choose 6 layers for NEO-9B, mainly due to the good trade-off between performance and efficiency. 8 Table 3: Configurations of attention and RoPE. MMS, CQA, IVQA, and OCRB denote MMStar, ChartQA, InfoVQA, and OCRBench. ⋆indicates that the base RoPE frequencies for height and width are set to 1M. To ensure fairness, we add new head dimensions of equal size across all models. Model Attention RoPE MMMU MMB MMS SEED-I AI2D CQA IVQA TVQA OCRB POPE Avg. A Causal 1D-RoPE 40.2 48.6 36.1 55.3 63.6 16.1 22.5 16.2 13.9 78.6 39.1 B Mixed 1D-RoPE 40.8 48.8 36.4 57.3 63.7 16.0 21.9 17.4 16.0 79.2 39.8 C Mixed IL-RoPE 40.0 47.3 36.3 57.6 62.0 18.8 23.4 17.9 13.2 78.8 39.5 D Mixed M-RoPE 40.3 49.6 37.2 57.8 64.2 23.7 25.2 20.4 18.8 79.3 41.7 E Mixed MM-RoPE 40.5 50.8 37.6 58.2 65.8 25.7 26.3 22.1 18.2 78.8 42.4 F Mixed Video-RoPE 40.6 51.3 37.8 58.8 64.3 27.4 26.1 23.7 21.3 81.0 43.2 G Causal Native-RoPE 40.2 49.2 36.3 57.1 63.7 19.2 23.5 19.5 16.7 77.8 40.3 H Mixed Native-RoPE 40.7 51.9 38.2 58.9 65.8 30.6 26.9 24.1 23.2 80.0 44.0 I Mixed Native-RoPE⋆ 40.4 50.4 36.9 57.0 64.1 25.6 25.2 21.7 20.1 78.7 42.0 60 65 70 75 80 PB1 PB2 PB3 NEO InternViT CLIP SigLIP Avg. Accuracy (%) Figure 6: Comparison with pre-Buffer and vision encoders. All models are initialized post-LLM using Qwen3-1.7B. 60 65 70 75 80 NEO-2.2B NEO-9B Avg. Accuracy (%) Stage 1 Stage 2 Stage 3 Figure 7: Evaluation results across three progressive training procedures. Configurations of Native Primitives. Table 3 compares various attention and RoPE designs. The pre-Buffer depth is 4, and the post-LLM is initialized with Qwen3-1.7B. All models share the same new QK head dimensions and normalization. (1) Attention mode. Comparing models A/B and G/H reveals consistent gains of mixed attention over causal one, reflecting its stronger capacity to model comprehensive dependencies and cross-modal alignment. (2) RoPE mode. Native-RoPE outperforms 1D-RoPE Zhu et al. (2025), IL-RoPE Liao et al. (2025), M-RoPE Bai et al. (2025), MM-RoPE Yuan et al. (2025), and Video-RoPE Wei et al. (2025), with at least a 0.8% gain. This validates the importance of disentangling height, width, and temporal components in RoPE to enhance spatial-temporal representations and fine-grained interactions. By contrast, setting the base RoPE frequency to 1M for height and width severely impairs the ability to perceive local semantics. Comparison between Pre-Buffer and Vision Encoders. In Figure 6, PB 1-3 denotes the Pre-Buffer after stage 1-3. For all models except NEO, the post-LLMs are initialized via Qwen3-1.7B for our pre-Buffer, InternViT-300M Chen et al. (2024e), CLIP-vit-large-patch14 Radford et al. (2021), and SigLIP-so400m-patch14-384 Zhai et al. (2023). After two-stage re-training, PB3 shows only an average gap of 2.5 / 2.4 / 1.7 / 3.7% over NEO / InternViT / CLIP / SigLIP using billion-scale training data. This substantially reduces the training costs of building native VLMs for subsequent research. Performance Gains across Stages. Figure 7 presents the result evolution across training stages. In Stages 1 and 2, the model is fine-tuned on 2M SFT examples. Performance improves consistently as training data scales increase across 2.2B and 9B model sizes. Following progressive training, NEO shows strong multimodal capabilities, enabling robust performance across diverse real-world tasks. 5 CONCLUSION We introduce NEO, a native VLM that seamlessly integrates vision and language into a single unified framework, eliminating the need for separate visual encoders or ad-hoc alignment modules. By leveraging hybrid attention and modality-aware rotary position embeddings, NEO captures rich, fine-grained interactions between pixels and words from the outset. Its pre-Buffer and post-LLM training paradigm ensures efficient convergence and robust alignment while maintaining end-to-end learning. Experiments show that this unified design not only advances multimodal understanding and reasoning but also lays the foundation for reusable, scalable components. Our native primitives highlight a promising path toward intrinsically multimodal, unified, and adaptable architectures. 9 ETHICS STATEMENT All resources are drawn from open-access datasets with explicitly defined usage policies. Our work seeks to advance multimodal learning capabilities without introducing ethical or safety concerns beyond those already associated with existing models. Nevertheless, risks such as dataset biases and potential misuse cannot be entirely ruled out. We emphasize the importance of careful data curation, responsible deployment, and transparent reporting as essential practices to mitigate these challenges. REPRODUCIBILITY STATEMENT We place strong emphasis on reproducibility, providing detailed descriptions to facilitate replication and validation. Information about dataset selection, training strategies, and evaluation settings is provided in Sec. 3.2 and Sec. 4.1. We commit to releasing the code, model weights, and detailed documentation to allow the community to reproduce our findings in future research. REFERENCES Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Kar ́en Simonyan. Flamingo: a visual language model for few-shot learning. In Advances of Neural Information Processing Systems, New Orleans, LA, USA, 2022. Anthropic. Claude 3.7 sonnet: A hybrid reasoning ai model, 2025. URL https://www. anthropic.com/news/claude-3-7-sonnet. AI Anthropic. The claude 3 model family: opus, sonnet, haiku, 2024. URL https://www-cdn. anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_ Card_Claude_3.pdf. Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Ming-Hsuan Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. CoRR, abs/2502.13923, 2025. Yuelin Bai, Xinrun Du, Yiming Liang, Yonggang Jin, Junting Zhou, Ziqiang Liu, Feiteng Fang, Mingshan Chang, Tianyu Zheng, Xincheng Zhang, et al. Coig-cqia: Quality is all you need for chinese instruction fine-tuning. CoRR, abs/2403.18058, 2024. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sa ̆gnak Tas ̧ırlar. Introducing our multimodal models, 2023. URL https://www.adept.ai/ blog/fuyu-8b. Lucas Beyer, Andreas Steiner, Andr ́e Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey A. Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bosnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier J. H ́enaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, and Xiaohua Zhai. Paligemma: a versatile 3b vlm for transfer. CoRR, abs/2407.07726, 2024. Ali Furkan Biten, Rub`en Tito, Andr ́es Mafla, Llu ́ıs G ́omez i Bigorda, Marc ̧al Rusi ̃nol, C. V. Jawahar, Ernest Valveny, and Dimosthenis Karatzas. Scene text visual question answering. In IEEE International Conference on Computer Vision, pp. 4290-4300, Seoul, Korea (South), 2019. Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset, 2022. URL https://github.com/kakaobrain/ coyo-dataset. 10 Jie Cao and Jing Xiao. An augmented benchmark dataset for geometric question answering through dual parallel text encoding. In International Conference on Computational Linguistics, pp. 15111520, 2022. Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: harnessing gpt4v-synthesized data for a lite vision-language model. CoRR, abs/2402.11684, 2024a. Jiuhai Chen, Zhiyang Xu, Xichen Pan, Yushi Hu, Can Qin, Tom Goldstein, Lifu Huang, Tianyi Zhou, Saining Xie, Silvio Savarese, Le Xue, Caiming Xiong, and Ran Xu. Blip3-o: A family of fully open unified multimodal models-architecture, training and dataset. CoRR, abs/2505.09568, 2025. Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: improving large multi-modal models with better captions. In European Conference on Computer Vision, volume 15075, pp. 370-387, Milan, Italy, 2024b. Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, and Feng Zhao. Are we on the right way for evaluating large visionlanguage models? In Advances of Neural Information Processing Systems, Vancouver, BC, Canada, 2024c. Yangyi Chen, Xingyao Wang, Hao Peng, and Heng Ji. A single transformer for scalable visionlanguage modeling. CoRR, abs/2407.06438, 2024d. Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, Lixin Gu, Xuehui Wang, Qingyun Li, Yimin Ren, Zixuan Chen, Jiapeng Luo, Jiahao Wang, Tan Jiang, Bo Wang, Conghui He, Botian Shi, Xingcheng Zhang, Han Lv, Yi Wang, Wenqi Shao, Pei Chu, Zhongying Tu, Tong He, Zhiyong Wu, Huipeng Deng, Jiaye Ge, Kai Chen, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. CoRR, abs/2412.05271, 2024e. Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Botian Shi, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Bo Zhang, Pinlong Cai, Licheng Wen, Xiangchao Yan, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. CoRR, abs/2404.16821, 2024f. Chee Kheng Chng, Yuliang Liu, Yipeng Sun, Chun Chet Ng, Canjie Luo, Zihan Ni, ChuanMing Fang, Shuaitao Zhang, Junyu Han, Errui Ding, et al. Icdar2019 robust reading challenge on arbitrary-shaped text-rrc-art. In International Conference on Document Analysis and Recognition, pp. 1571-1576, 2019. Christopher Clark and Matt Gardner. Simple and effective multi-paragraph reading comprehension. In Annual Meeting of the Association for Computational Linguistics, pp. 845-855, Melbourne, Australia, 2018. Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit S. Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, Luke Marris, Sam Petulla, Colin Gaffney, Asaf Aharoni, Nathan Lintz, Tiago Cardal Pais, Henrik Jacobsson, Idan Szpektor, NanJiang Jiang, Krishna Haridasan, Ahmed Omran, Nikunj Saunshi, Dara Bahri, Gaurav Mishra, Eric Chu, Toby Boyd, Brad Hekman, Aaron Parisi, Chaoyi Zhang, Kornraphop Kawintiranon, Tania Bedrax-Weiss, Oliver Wang, Ya Xu, Ollie Purkiss, Uri Mendlovic, Ila ̈ı Deutel, Nam Nguyen, Adam Langley, Flip Korn, Lucia Rossazza, Alexandre Ram ́e, Sagar Waghmare, Helen Miller, Nathan Byrd, Ashrith Sheshan, Raia Hadsell Sangnie Bhardwaj, Pawel Janus, Tero Rissa, Dan Horgan, Sharon Silver, Ayzaan Wahid, Sergey Brin, Yves Raimond, Klemen Kloboves, Cindy Wang, Nitesh Bharadwaj Gundavarapu, Ilia Shumailov, Bo Wang, Mantas Pajarskas, Joe Heyward, Martin Nikoltchev, Maciej Kula, Hao Zhou, Zachary Garrett, Sushant Kafle, Sercan Arik, Ankita Goel, Mingyao Yang, Jiho Park, Koji Kojima, Parsa Mahmoudieh, Koray Kavukcuoglu, Grace Chen, Doug Fritz, Anton Bulyenov, Sudeshna Roy, Dimitris Paparas, Hadar Shemtov, Bo-Juen Chen, Robin Strudel, David Reitter, Aurko Roy, Andrey Vlasov, Changwan Ryu, Chas Leichner, 11 Haichuan Yang, Zelda Mariet, Denis Vnukov, Tim Sohn, Amy Stuart, Wei Liang, Minmin Chen, Praynaa Rawlani, Christy Koh, JD Co-Reyes, Guangda Lai, Praseem Banzal, Dimitrios Vytiniotis, Jieru Mei, and Mu Cai. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. CoRR, abs/2507.06261, 2025. Wenliang Dai, Nayeon Lee, Boxin Wang, Zhuoling Yang, Zihan Liu, Jon Barker, Tuomas Rintamaki, Mohammad Shoeybi, Bryan Catanzaro, and Wei Ping. NVLM: open frontier-class multimodal llms. CoRR, abs/2409.11402, 2024. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos ́e M. F. Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1080-1089, Honolulu, HI, USA, 2017. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International Conference on Machine Learning, volume 70, pp. 933-941, Sydney, NSW, Australia, 2017. Google DeepMind. Gemini 2.5 pro: Google's most advanced reasoning model, 2025. URL https: //deepmind.google/models/gemini/pro/. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. CoRR, abs/2501.12948, 2025. Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, Shi Guang, and Haoqi Fan. Emerging properties in unified multimodal pretraining. CoRR, abs/2505.14683, 2025a. Haoge Deng, Ting Pan, Haiwen Diao, Zhengxiong Luo, Yufeng Cui, Huchuan Lu, Shiguang Shan, Yonggang Qi, and Xinlong Wang. Autoregressive video generation without vector quantization. In International Conference on Learning Representations, Singapore, 2025b. Haiwen Diao, Yufeng Cui, Xiaotong Li, Yueze Wang, Huchuan Lu, and Xinlong Wang. Unveiling encoder-free vision-language models. CoRR, abs/2406.11832, 2024. Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, and Xinlong Wang. Evev2: Improved baselines for encoder-free vision-language models. CoRR, abs/2502.06788, 2025. Juechu Dong, Boyuan Feng, Driss Guessous, Yanbo Liang, and Horace He. Flex attention: A programming model for generating optimized attention kernels. CoRR, abs/2412.05496, 2024. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: transformers for image recognition at scale. In International Conference on Learning Representations, Austria, 2021. Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, Dahua Lin, and Kai Chen. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In ACM International Conference on Multimedia, pp. 11198-11201, Melbourne, VIC, Australia, 2024. 12 Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. EVA: exploring the limits of masked visual representation learning at scale. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 19358-19369, Vancouver, BC, Canada, 2023. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: elevating the role of image understanding in visual question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 6325-6334, Honolulu, HI, USA, 2017. Jiaxi Gu, Xiaojun Meng, Guansong Lu, Lu Hou, Niu Minzhe, Xiaodan Liang, Lewei Yao, Runhui Huang, Wei Zhang, Xin Jiang, Chunjing Xu, and Hang Xu. Wukong: A 100 million large-scale chinese cross-modal pre-training benchmark. In Advances of Neural Information Processing Systems, New Orleans, LA, USA,, 2022. Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. Hallusionbench: an advanced diagnostic suite for entangled language hallucination and visual illusion in large visionlanguage models. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1437514385, Seattle, WA, USA, 2024. Conghui He, Zhenjiang Jin, Chao Xu, Jiantao Qiu, Bin Wang, Wei Li, Hang Yan, Jiaqi Wang, and Dahua Lin. Wanjuan: A comprehensive multimodal dataset for advancing english and chinese large models. CoRR, abs/2308.10755, 2023. Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus). CoRR, abs/1606.08415, 2016. Drew A. Hudson and Christopher D. Manning. GQA: a new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 6700-6709, Long Beach, CA, USA, 2019. Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll L. Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, and Dane Sherburn. Gpt-4o system card. CoRR, abs/2410.21276, 2024. Jimmycarter. Textocr gpt-4v dataset, 2023. URL https://huggingface.co/datasets/ jimmycarter/textocr-gpt4v. Kushal Kafle, Brian L. Price, Scott Cohen, and Christopher Kanan. DVQA: understanding data visualizations via question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 5648-5656, Salt Lake City, UT, USA, 2018. Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Min Joon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In European Conference on Computer Vision, volume 9908, pp. 235-251, Amsterdam, The Netherlands, 2016. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. Are you smarter than a sixth grader? textbook question answering for multimodal 13 machine comprehension. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4999-5007, 2017. Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In European Conference on Computer Vision, volume 13688, pp. 498-517, Tel Aviv, Israel, 2022. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73, 2017. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances of Neural Information Processing Systems, pp. 1106-1114, Lake Tahoe, Nevada, US, 2012. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper R. R. Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: unified image classification, object detection, and visual relationship detection at scale. CoRR, abs/1811.00982, 2018. LAION. Gpt-4v dataset, 2023. URL https://huggingface.co/datasets/laion/ gpt4v-dataset. Weixian Lei, Jiacong Wang, Haochen Wang, Xiangtai Li, Jun Hao Liew, Jiashi Feng, and Zilong Huang. The scalability of simplicity: Empirical analysis of vision-language learning with a single transformer. CoRR, abs/2504.10462, 2025. Paul Lerner, Olivier Ferret, Camille Guinaudeau, Herv ́e Le Borgne, Romaric Besanc ̧on, Jos ́e G Moreno, and Jes ́us Lov ́on Melgarejo. Viquae, a dataset for knowledge-based visual question answering about named entities. In ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3108-3120, 2022. Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: stronger llms supercharge multimodal capabilities in the wild, 2024a. URL https://llava-vl.github.io/blog/ 2024-05-10-llava-next-stronger-llms/. Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. Llava-onevision: easy visual task transfer. CoRR, abs/2408.03326, 2024b. Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: benchmarking multimodal llms with generative comprehension. CoRR, abs/2307.16125, 2023a. Han Li, Xinyu Peng, Yaoming Wang, Zelin Peng, Xin Chen, Rongxiang Weng, Jingang Wang, Xunliang Cai, Wenrui Dai, and Hongkai Xiong. Onecat: Decoder-only auto-regressive model for unified understanding and generation. CoRR, abs/2509.03498, 2025a. Tianle Li, Yongming Rao, Winston Hu, and Yu Cheng. BREEN: bridge data-efficient encoder-free multimodal learning with learnable queries. CoRR, abs/2503.12446, 2025b. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. In Conference on Empirical Methods in Natural Language Processing, pp. 292-305, Singapore, 2023b. Zhuowan Li, Xingrui Wang, Elias Stengel-Eskin, Adam Kortylewski, Wufei Ma, Benjamin Van Durme, and Alan L Yuille. Super-clevr: A virtual benchmark to diagnose domain robustness in visual reasoning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 14963-14973, 2023c. 14 Weixin Liang, Lili Yu, Liang Luo, Srinivasan Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, and Xi Victoria Lin. Mixture-of-transformers: A sparse and scalable architecture for multi-modal foundation models. CoRR, abs/2411.04996, 2024. Chao Liao, Liyang Liu, Xun Wang, Zhengxiong Luo, Xinyu Zhang, Wenliang Zhao, Jie Wu, Liang Li, Zhi Tian, and Weilin Huang. Mogao: An omni foundation model for interleaved multi-modal generation. CoRR, abs/2505.05472, 2025. Xi Victoria Lin, Akshat Shrivastava, Liang Luo, Srinivasan Iyer, Mike Lewis, Gargi Ghosh, Luke Zettlemoyer, and Armen Aghajanyan. Moma: efficient early-fusion pre-training with mixture of modality-aware experts. CoRR, abs/2407.21770, 2024. Adam Dahlgren Lindstr ̈om and Savitha Sam Abraham. Clevr-math: A dataset for compositional language, visual and mathematical reasoning. CoRR, abs/2208.05358, 2022. Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning. Transactions of the Association for Computational Linguistics, 11:635-651, 2023a. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. CoRR, 2023b. Fuxiao Liu, Xiaoyang Wang, Wenlin Yao, Jianshu Chen, Kaiqiang Song, Sangwoo Cho, Yaser Yacoob, and Dong Yu. Mmc: Advancing multimodal chart understanding with large-scale instruction tuning. CoRR, abs/2311.10774, 2023c. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Advances of Neural Information Processing Systems, New Orleans, LA, USA, 2023d. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 26286-26296, Seattle, WA, USA, 2024a. Xi Liu, Rui Zhang, Yongsheng Zhou, Qianyi Jiang, Qi Song, Nan Li, Kai Zhou, Lei Wang, Dong Wang, Minghui Liao, et al. Icdar 2019 robust reading challenge on reading chinese text on signboard. CoRR, abs/1912.09641, 2019. Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, and Dahua Lin. Mmbench: is your multi-modal model an all-around player? In European Conference on Computer Vision, volume 15064, pp. 216-233, Milan, Italy, 2024b. Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, and Xiang Bai. On the hidden mystery of ocr in large multimodal models. CoRR, abs/2305.07895, 2023e. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, New Orleans, LA, USA, 2019. Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. CoRR, abs/2105.04165, 2021. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: multimodal reasoning via thought chains for science question answering. In Advances of Neural Information Processing Systems, volume 35, pp. 2507-2521, New Orleans, LA, USA, 2022a. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. CoRR, abs/2209.14610, 2022b. Gen Luo, Xue Yang, Wenhan Dou, Zhaokai Wang, Jifeng Dai, Yu Qiao, and Xizhou Zhu. Monointernvl: pushing the boundaries of monolithic multimodal large language models with endogenous visual pre-training. CoRR, abs/2410.08202, 2024. 15 Gen Luo, Wenhan Dou, Wenhao Li, Zhaokai Wang, Xue Yang, Changyao Tian, Hao Li, Weiyun Wang, Wenhai Wang, Xizhou Zhu, Yu Qiao, and Jifeng Dai. Mono-internvl-1.5: Towards cheaper and faster monolithic multimodal large language models. CoRR, abs/2507.12566, 2025. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L. Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 11-20, Las Vegas, NV, USA, 2016. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. OK-VQA: a visual question answering benchmark requiring external knowledge. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 3195-3204, Vienna, Austria, 2019. Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq R. Joty, and Enamul Hoque. Chartqa: a benchmark for question answering about charts with visual and logical reasoning. In Annual Meeting of the Association for Computational Linguistics, pp. 2263-2279, Dublin, Ireland, 2022. Minesh Mathew, Viraj Bagal, Rub`en Tito, Dimosthenis Karatzas, Ernest Valveny, and C. V. Jawahar. Infographicvqa. In IEEE Winter Conference on Applications of Computer Vision, pp. 2582-2591, Waikoloa, HI, USA, 2022. Nitesh Methani, Pritha Ganguly, Mitesh M Khapra, and Pratyush Kumar. Plotqa: Reasoning over scientific plots. In IEEE Winter Conference on Applications of Computer Vision, pp. 1527-1536, 2020. Anand Mishra, Shashank Shekhar, Ajeet Kumar Singh, and Anirban Chakraborty. OCR-VQA: visual question answering by reading text in images. In International Conference on Document Analysis and Recognition, pp. 947-952, Sydney, Australia, 2019. OpenAI. Gpt-5: A unified multimodal model, 2025. URL https://openai.com/gpt-5. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, volume 139, pp. 8748-8763, virtual, 2021. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. LAION-400M: open dataset of clip-filtered 400 million image-text pairs. CoRR, abs/2111.02114, 2021. Christoph Schuhmann, Andreas K ̈opf, Richard Vencu, Theo Coombes, and Romain Beaumont. Laion coco: 600m synthetic captions from laion2b-en, 2022. URL https://laion.ai/blog/ laion-coco/. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. A-OKVQA: a benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, volume 13668, pp. 146-162, Tel Aviv, Israel, 2022. Sanket Shah, Anand Mishra, Naganand Yadati, and Partha Pratim Talukdar. Kvqa: Knowledgeaware visual question answering. In AAAI Conference on Artificial Intelligence, volume 33, pp. 8876-8884, 2019. Baoguang Shi, Cong Yao, Minghui Liao, Mingkun Yang, Pei Xu, Linyan Cui, Serge Belongie, Shijian Lu, and Xiang Bai. Icdar2017 competition on reading chinese text in the wild (rctw-17). In International Conference on Document Analysis and Recognition, volume 1, pp. 1429-1434. IEEE, 2017. Mustafa Shukor, Louis B ́ethune, Dan Busbridge, David Grangier, Enrico Fini, Alaaeldin El-Nouby, and Pierre Ablin. Scaling laws for optimal data mixtures. CoRR, abs/2507.09404, 2025a. Mustafa Shukor, Enrico Fini, Victor Guilherme Turrisi da Costa, Matthieu Cord, Joshua M. Susskind, and Alaaeldin El-Nouby. Scaling laws for native multimodal models. CoRR, abs/2504.07951, 2025b. 16 Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In European Conference on Computer Vision, volume 12347, pp. 742-758, Glasgow, UK, 2020. Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 8317-8326, Long Beach, CA, USA, 2019. Yipeng Sun, Zihan Ni, Chee-Kheng Chng, Yuliang Liu, Canjie Luo, Chun Chet Ng, Junyu Han, Errui Ding, Jingtuo Liu, Dimosthenis Karatzas, et al. Icdar 2019 competition on large-scale street view text with partial labeling-rrc-lsvt. In International Conference on Document Analysis and Recognition, pp. 1557-1562, 2019. Chenxin Tao, Shiqian Su, Xizhou Zhu, Chenyu Zhang, Zhe Chen, Jiawen Liu, Wenhai Wang, Lewei Lu, Gao Huang, Yu Qiao, and Jifeng Dai. Hovle: Unleashing the power of monolithic visionlanguage models with holistic vision-language embedding. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 14559-14569, Nashville, TN, USA, 2025. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. https://crfm. stanford. edu/2023/03/13/alpaca. html, 3(6):7, 2023. Chameleon Team. Chameleon: mixed-modal early-fusion foundation models. CoRR, abs/2405.09818, 2024. Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023. URL https://huggingface.co/datasets/teknium/OpenHermes-2.5. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aur ́elien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: open foundation and fine-tuned chat models. CoRR, abs/2307.09288, 2023. Michael Tschannen, Alexey A. Gritsenko, Xiao Wang, Muhammad Ferjad Naeem, Ibrahim Alabdulmohsin, Nikhil Parthasarathy, Talfan Evans, Lucas Beyer, Ye Xia, Basil Mustafa, Olivier J. H ́enaff, Jeremiah Harmsen, Andreas Steiner, and Xiaohua Zhai. Siglip 2: Multilingual visionlanguage encoders with improved semantic understanding, localization, and dense features. CoRR, abs/2502.14786, 2025. Andreas Veit, Tomas Matera, Lukas Neumann, Jiri Matas, and Serge Belongie. Coco-text: Dataset and benchmark for text detection and recognition in natural images. CoRR, abs/1601.07140, 2016. Han Wang, Yongjie Ye, Bingru Li, Yuxiang Nie, Jinghui Lu, Jingqun Tang, Yanjie Wang, and Can Huang. Vision as lora. CoRR, abs/2503.20680, 2025a. Junke Wang, Lingchen Meng, Zejia Weng, Bo He, Zuxuan Wu, and Yu-Gang Jiang. To see is to believe: prompting GPT-4V for better visual instruction tuning. CoRR, abs/2311.07574, 2023. Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-vl: enhancing vision-language model's perception of the world at any resolution. CoRR, abs/2409.12191, 2024a. 17 Weiyun Wang, Zhangwei Gao, Lixin Gu, Hengjun Pu, Long Cui, Xingguang Wei, Zhaoyang Liu, Linglin Jing, Shenglong Ye, Jie Shao, et al. Internvl3.5: Advancing open-source multimodal models in versatility, reasoning, and efficiency. CoRR, abs/2508.18265, 2025b. Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, Yingli Zhao, Yulong Ao, Xuebin Min, Tao Li, Boya Wu, Bo Zhao, Bowen Zhang, Liangdong Wang, Guang Liu, Zheqi He, Xi Yang, Jingjing Liu, Yonghua Lin, Tiejun Huang, and Zhongyuan Wang. Emu3: next-token prediction is all you need. CoRR, abs/2409.18869, 2024b. Xilin Wei, Xiaoran Liu, Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Jian Tong, Haodong Duan, Qipeng Guo, Jiaqi Wang, Xipeng Qiu, and Dahua Lin. Videorope: What makes for good video rotary position embedding? CoRR, abs/2502.05173, 2025. Chenyuan Wu, Pengfei Zheng, Ruiran Yan, Shitao Xiao, Xin Luo, Yueze Wang, Wanli Li, Xiyan Jiang, Yexin Liu, Junjie Zhou, Ze Liu, Ziyi Xia, Chaofan Li, Haoge Deng, Jiahao Wang, Kun Luo, Bo Zhang, Defu Lian, Xinlong Wang, Zhongyuan Wang, Tiejun Huang, and Zheng Liu. Omnigen2: Exploration to advanced multimodal generation. CoRR, abs/2506.18871, 2025. xAI. Grok-1.5 vision preview, 2024. URL https://x.ai/blog/grok-1.5v. xAI. Grok 3: xAI's flagship ai model, 2025. URL https://x.ai/news/grok-3. Yicheng Xiao, Lin Song, Rui Yang, Cheng Cheng, Zunnan Xu, Zhaoyang Zhang, Yixiao Ge, Xiu Li, and Ying Shan. Haploomni: Unified single transformer for multimodal video understanding and generation. CoRR, abs/2506.02975, 2025. Rui Yan, Lin Song, Yicheng Xiao, Runhui Huang, Yixiao Ge, Ying Shan, and Hengshuang Zhao. Haplovl: A single-transformer baseline for multi-modal understanding. CoRR, abs/2503.14694, 2025. An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jian Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng, Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. Qwen3 technical report. CoRR, abs/2505.09388, 2025. Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: preliminary explorations with gpt-4v(ision). CoRR, abs/2309.17421, 2023. Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. Modeling context in referring expressions. In European Conference on Computer Vision, volume 9906, pp. 69-85, Amsterdam, The Netherlands, 2016. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. CoRR, abs/2309.12284, 2023. Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: evaluating large multimodal models for integrated capabilities. In International Conference on Machine Learning, Vienna, Austria, 2024. Hangjie Yuan, Weihua Chen, Jun Cen, Hu Yu, Jingyun Liang, Shuning Chang, Zhihui Lin, Tao Feng, Pengwei Liu, Jiazheng Xing, Hao Luo, Jiasheng Tang, Fan Wang, and Yi Yang. Lumos-1: On autoregressive video generation from a unified model perspective. CoRR, abs/2507.08801, 2025. Tai-Ling Yuan, Zhe Zhu, Kun Xu, Cheng-Jun Li, Tai-Jiang Mu, and Shi-Min Hu. A large chinese text dataset in the wild. Journal of Computer Science and Technology, 34(3):509-521, 2019. 18 Xiang Yue, Yuansheng Ni, Tianyu Zheng, Kai Zhang, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MMMU: a massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 9556-9567, Seattle, WA, USA, 2024. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In IEEE International Conference on Computer Vision, pp. 11941-11952, Paris, France, 2023. Biao Zhang and Rico Sennrich. Root mean square layer normalization. In Advances of Neural Information Processing Systems, pp. 12360-12371, Vancouver, BC, Canada, 2019. Bo Zhao, Boya Wu, and Tiejun Huang. SVIT: scaling up visual instruction tuning. CoRR, abs/2307.04087, 2023. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances of Neural Information Processing Systems, 36:46595-46623, 2023. Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, Zhangwei Gao, Erfei Cui, Xuehui Wang, Yue Cao, Yangzhou Liu, Xingguang Wei, Hongjie Zhang, Haomin Wang, Weiye Xu, Hao Li, Jiahao Wang, Nianchen Deng, Songze Li, Yinan He, Tan Jiang, Jiapeng Luo, Yi Wang, Conghui He, Botian Shi, Xingcheng Zhang, Wenqi Shao, Junjun He, Yingtong Xiong, Wenwen Qu, Peng Sun, Penglong Jiao, Han Lv, Lijun Wu, Kaipeng Zhang, Huipeng Deng, Jiaye Ge, Kai Chen, Limin Wang, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. CoRR, abs/2504.10479, 2025. 19 A APPENDIX USAGE OF LARGE LANGUAGE MODELS During manuscript preparation, large language models were used solely as writing assistants. They helped to check grammar, refine sentence structure, and provide style alternatives. All content related to methodology, experiments, and conclusions was developed entirely by the authors. LLM outputs were reviewed critically, and only human-verified edits were incorporated into the final text. A.1 SUPERVISED FINE-TUNING DATASETS Table 4: Dataset summary in supervised fine-tuning stage. Task Dataset Captioning TextCaps (en) Sidorov et al. (2020), ShareGPT4V (en&zh) Chen et al. (2024b) VQAv2 (en) Goyal et al. (2017), GQA (en) Hudson & Manning (2019), OKVQA (en) Marino et al. (2019), General QA VSR (en) Liu et al. (2023a), VisualDialog (en) Das et al. (2017) Science AI2D (en) Kembhavi et al. (2016), ScienceQA (en) Lu et al. (2022a), TQA (en) Kembhavi et al. (2017) ChartQA (en) Masry et al. (2022), MMC-Inst (en) Liu et al. (2023c), DVQA (en) Kafle et al. (2018), Chart PlotQA (en) Methani et al. (2020), LRV-Instruction (en) Liu et al. (2023b) GeoQA+ (en) Cao & Xiao (2022), TabMWP (en) Lu et al. (2022b), MathQA (en) Yu et al. (2023), Mathematics CLEVR-Math/Super (en) Lindstr ̈om & Abraham (2022); Li et al. (2023c), Geometry3K (en) Lu et al. (2021) KVQA (en) Shah et al. (2019), A-OKVQA (en) Schwenk et al. (2022), ViQuAE (en) Lerner et al. (2022), Knowledge Wikipedia (en&zh) He et al. (2023) OCRVQA (en) Mishra et al. (2019), InfoVQA (en) Mathew et al. (2022), TextVQA (en) Singh et al. (2019), ArT (en&zh) Chng et al. (2019), COCO-Text (en) Veit et al. (2016), CTW (zh) Yuan et al. (2019), LSVT (zh) Sun et al. (2019), RCTW-17 (zh) Shi et al. (2017), ReCTs (zh) Liu et al. (2019), OCR SynthDoG (en&zh) Kim et al. (2022), ST-VQA (en) Biten et al. (2019) Document DocVQA (en) Clark & Gardner (2018), Common Crawl PDF (en&zh) Grounding RefCOCO/+/g (en) Yu et al. (2016); Mao et al. (2016), Visual Genome (en) Krishna et al. (2017) LLaVA-150K (en&zh) Liu et al. (2023d), LVIS-Instruct4V (en) Wang et al. (2023), ALLaVA (en&zh) Chen et al. (2024a), Laion-GPT4V (en) LAION (2023), Conversation TextOCR-GPT4V (en) Jimmycarter (2023), SVIT (en&zh) Zhao et al. (2023) OpenHermes2.5 (en) Teknium (2023), Alpaca-GPT4 (en) Taori et al. (2023), COIG-CQIA (zh) Bai et al. (2024), Text-only ShareGPT (en&zh) Zheng et al. (2023) A.2 IMPLEMENTATION DETAILS Table 5: Implementation details in the pre-training, mid-training and supervise fine-tuning. Configuration Pre-Training Mid-Training Supervised Fine-Tuning Resolution 2562 -1, 0242 2562 -2, 0482 2562 -2, 0482 Optimizer AdamW Optimizer hyperparameters β1 = 0.9, β2 = 0.999, eps = 1e-8 Learning rate schedule cosine with min lr cosine with min lr cosine decay Peak learning rate 8e-4 4e-5 5e-5 Min learning rate ratio 0.05 0.1 - Weight decay 0.01 Training steps 190k 50k 6k Warm-up steps 2k 200 200 Max sample length 8, 192 8, 192 8, 192 Global batch size 2, 560 1, 200 650 Text-only ratio 0.3 0.3 - Numerical precision bfloat16 20 A.3 LIMITATION AND DISCUSSION In this study, we innovate network architectures and training strategies for efficiently building native vision-language models. The full promise of NEO has remained largely untapped, hindered by scarce training data and limited computational resources, especially in knowledge-intensive and OCR-focused domains. Yet, strikingly, our NEO rivals state-of-the-art VLMs despite these severe constraints. We envision subsequent directions of NEO for the native VLM community as follows: Contextual relevance to recent advancements. Recent models such as Qwen3VL highlight concepts that resonate with our design choices, including dense linking of visual-language features, relative positional encodings, and architectural details like patch embedding and bias. In particular, the DeepStack approach underscores the importance of establishing strong pixel-word associations from the earliest stages, reinforcing the significance of densely integrated visual-language representations. Maximizing the potential via large investment. It is in great demand for continuously investing substantial resources, especially during the pre-training stage, to fully unlock NEO's performance and approach the upper bound of the native model. At the same time, selectively open-sourcing key components during intermediate development can reduce follow-up training costs for future researchers and attract more research to native visual-language models. Moreover, the fundamental models from this work provide a valuable baseline for advancing reinforcement learning research. Explorations of full-spectrum model capacities. Expanding the full model sizes remains a critical factor in advancing various real-world applications. Even with limited resources, NEO-2.2B closely matches those of modular visual-language models with equivalent capacity, suggesting that the design philosophy of models in the 0.6 to 8 billion parameter range has matured. Such architectures not only achieve high performance but also facilitate the deployment of lightweight models at the edge, which is crucial for scenarios with limited computational resources or strict real-time requirements. Upgrading architectures and applications. To date, our work has focused on dense models for image-text understanding, while a sparse divide-and-conquer architecture is simultaneously under active development. Notably, we regard NEO not merely as an autoregressive VLM but as a new paradigm for visual-language intelligence. Its principle is to leverage end-to-end training within a unified architecture, eliminating manually imposed biases and scaling-up complexities by allowing data and models to dictate the learning process. Besides, our efforts are designed not merely to improve performance but to establish a definitive baseline for visual-language generation, long video understanding, and embodied AI. Crucially, NEO's architecture systematically integrates the demands of video generation and related tasks, including attention mechanisms and rotary positional encodings, from the ground up. Although currently focused on text and images, NEO is poised to push the boundaries of what is possible across a wide spectrum of application scenarios and input modalities. Constrained by current text corpus and computational resources, we are unable to train a fully native model entirely from scratch without initialization from an existing LLM. This limitation also hinders our ability to mitigate potential biases arising from the dominance of the language modality. Despite these challenges, our NEO extends beyond providing a reusable pre-buffer that lowers the cost of adapting advanced LLMs-with updated weights and stronger capabilities-into VLMs under limited budgets. More importantly, NEO reveals the potential performance ceiling of native VLM architectures and provides valuable insights for future research on de novo multimodal training. 21
2510.14981
Preprint - Under Review COUPLED DIFFUSION SAMPLING FOR TRAINING-FREE MULTI-VIEW IMAGE EDITING Hadi Alzayer1,2 Yunzhi Zhang1 Chen Geng1 Jia-Bin Huang2 Jiajun Wu1 1Stanford University 2University of Maryland, College Park ABSTRACT We present an inference-time diffusion sampling method to perform multi-view consistent image editing using pre-trained 2D image editing models. These mod- els can independently produce high-quality edits for each image in a set of multi- view images of a 3D scene or object, but they do not maintain consistency across views. Existing approaches typically address this by optimizing over explicit 3D representations, but they suffer from a lengthy optimization process and instability under sparse view settings. We propose an implicit 3D regularization approach by constraining the generated 2D image sequences to adhere to a pre-trained multi- view image distribution. This is achieved through coupled diffusion sampling, a simple diffusion sampling technique that concurrently samples two trajectories from both a multi-view image distribution and a 2D edited image distribution, us- ing a coupling term to enforce the multi-view consistency among the generated images. We validate the effectiveness and generality of this framework on three distinct multi-view image editing tasks, demonstrating its applicability across var- ious model architectures and highlighting its potential as a general solution for multi-view consistent editing. Project page: https://coupled-diffusion.github.io 1 INTRODUCTION Diffusion-based image editing models have demonstrated unprecedented realism across diverse tasks via end-to-end training. These include object relighting (Jin et al., 2024; Magar et al., 2025; Zhang et al., 2025a), spatial structure editing (Wu et al., 2024b; Mu et al., 2024; Alzayer et al., 2025b; Vavilala et al., 2025), and stylization (Zhang et al., 2023). However, collecting and curating 3D data is significantly more costly than working with 2D data. As a result, recent research has explored test-time optimization methods for multi-view editing that leverage pre-trained 2D image diffusion models (Poole et al., 2023; Haque et al., 2023). Relighting Lighting prompt: “Sunset lighting by the beach” Stylization Editing prompt: “Marble and jade statue” Spatial editing Inputs Outputs Figure 1: Applications of coupled diffusion sampling. Our approach enables lifting off-the-shelf 2D editing models into multi-view by combining the sampling process of 2D diffusion models with multi-view diffusion models to produce view-consistent edits. Here we showcase example view- consistent results using a 2D spatial editing model, stylization, and text-based relighting. 1 arXiv:2510.14981v1 [cs.CV] 16 Oct 2025 Preprint - Under Review Lifting 2D image editing models directly to the 3D multi-view domain is non-trivial, primarily due to the difficulty in ensuring 3D consistency across different viewpoints. To address this, most existing methods (Haque et al., 2023; Jin et al., 2024) rely on explicit 3D representations, i.e., NeRF (Milden- hall et al., 2020) or 3D Gaussian Splatting (Kerbl et al., 2023). Despite achieving promising results in certain scenarios, these methods typically require time-consuming optimization and dense input view coverage. This significantly limits their applicability to real-time, real-world scenarios. Can we directly extend the capabilities of 2D image editing models to the multi-view domain with- out relying on explicit 3D representations or incurring additional training overhead? We answer this question affirmatively by introducing a novel diffusion sampling method—coupled diffusion sam- pling. As shown in Fig. 1, our approach enables multi-view consistent image editing across diverse applications, including multi-view spatial editing, stylization, and relighting. Figure 2: Limitations of baselines. Using a pre- trained image-to-multiview model conditioned on an edited image, can only be faithful to that sin- gle image but not the rest of the input views. On the other hand, editing each image individually with the 2D model produces highly inconsistent re- sults. While prior work (Liu et al., 2022) proposes a method to compose diffusion models within the same domain, we find that their approach produces flickering results and cannot guarantee being faith- ful to the input views. As shown in Fig. 2, sampling from two dif- fusion models independently yields samples that are inconsistent across views. Condi- tioning a multi-view model using a single edited image, however, fails to preserve iden- tity and align with the editing objective across all views. While prior work (Liu et al., 2022; Du et al., 2023) explored combining diffu- sion models within a modality, we observe that such approaches do not maintain multi- view consistency and can stray from the edit- ing objective. Our approach is motivated by the observation that, any sequence of images generated by a pre-trained multi-view image diffusion model inherently exhibit multi-view consistency. To this end, we embrace an im- plicit 3D regularization paradigm by lever- aging scores estimated from multi-view dif- fusion models during the diffusion sampling process. Specifically, for any multi-view im- age editing task with a pre-trained 2D model, we couple it with a foundation multi-view diffusion model and perform sampling under dual guidance from both models. This pro- cess ensures that the resulting samples satisfy both the editing objective and multi-view 3D consis- tency, yet without any additional explicit 3D regularization or training overhead. We propose a practical sampling framework to achieve the above-mentioned goal by steering the standard diffusion sampling trajectory with an energy term coupling two sampling trajectories. This method ensures that each sample from one diffusion model remains within its own distribution while being guided by the other. In particular, samples from the multi-view diffusion model maintain multi-view consistency while being steered by the content edits from the 2D model. Conversely, the 2D model is steered so that its edits remain faithful to the inputs while being consistent across independently edited frames. Our solution is conceptually simple, broadly applicable, and adaptable to a variety of settings. We showcase its effectiveness across three distinct multi-view image editing tasks: multi-view spatial editing, stylization, and relighting. Through comprehensive experiments on each task, we demon- strate the advantages of our method over the state-of-the-art. We further validate the generalizability of our approach by applying it to diverse diffusion backbones and latent spaces, underscoring its promise as a general multi-view image editing engine. 2 RELATED WORK Test-time diffusion guidance. Test-time guidance approaches for diffusion models have been pro- posed to steer diffusion models toward external objectives. Test-time scaling methods (Ma et al., 2 Preprint - Under Review 2025; Li et al., 2024), such as rejection sampling or verifier-based search over large latent spaces, passively filter generated samples. In contrast, optimization-based guidance actively steers diffusion trajectories, offering a more efficient alternative. A widely used technique is classifier guidance, where a discriminative classifier steers the diffusion trajectory toward a target label (Dhariwal & Nichol, 2021). When the objective is differentiable, gradient-based guidance can be directly applied during sampling (Bansal et al., 2024) In other cases, prior work has explored diffusion guidance using degradation operators, which require additional assumptions in the forward process, e.g., as in linear inverse problems (Kawar et al., 2022; Wang et al., 2023a; Chung et al., 2023). However, in more general scenarios, such constraints are often intractable, making the proposed framework particularly suitable for these settings. 3D and multiview editing. With the advent of diffusion models capable of producing high-quality 2D image edits (Kulikov et al., 2025; Cao et al., 2023; Mokady et al., 2023), a natural question has been how to leverage those capabilities for 3D editing. One common approach is to optimize a 3D representation, such as Neural Radiance Fields (NeRF) (Mildenhall et al., 2020), so that its multi- view renderings satisfy the editing goal. Bridging diffusion and NeRF can be achieved either by modifying the training dataset during the optimization loop (Haque et al., 2023; Wu et al., 2024a) or through score distillation sampling (Poole et al., 2023; Wang et al., 2023b; McAllister et al., 2024; Yan et al., 2025). However, both approaches are prone to visual artifacts, which is fundamentally caused by the fact that 2D diffusion models lack 3D consistency awareness. To address this funda- mental challenge, prior work has directly trained multiview diffusion models (Litman et al., 2025; Alzayer et al., 2025a; Trevithick et al., 2025) for consistent editing. However, training a multiview diffusion model for each individual editing task is computationally expensive, and suitable training datasets are scarce. In our approach, we propose reusing existing multiview generation models (Gao et al., 2024; Zhou et al., 2025) for multiview editing by combining them with a 2D editing model, thereby incurring no additional training cost. In contrast to NeRF-based approaches, our method does not require a costly optimization process, as it relies solely on feed-forward sampling. Compositional diffusion sampling. Compositional sampling methods for diffusion models have been proposed to combine the priors of multiple models. Examples include product-of-experts sam- pling (Hinton, 2002; Zhang et al., 2025b), which samples from the product distribution of individual models. However, this approach imposes a strict requirement that valid samples lie in the intersection of the support of each model and fails when no such joint support exists. MultiDiffusion (Bar-Tal et al., 2023) and SyncTweedies (Kim et al., 2024) apply score composition for stitching panoramas or large images. However, their primary focus is on handling out-of-distribution scenarios, such as oversized images, whereas our work emphasizes remaining within each model’s prior distribution while steering generation toward satisfying cross-model constraints. Prior works Liu et al. (2022); Du et al. (2023) address inference-time composition for diffusion models, but these works focus on the same data modality. In contrast, our work bridges 2D and 3D modalities to tackle the practical challenge of 3D data sparsity. 3 METHOD 3.1 BACKGROUND Diffusion Models. Let x0 ∼pdata(x0) be a data sample and consider the forward noising process: q(xt | xt−1) = N(xt | √ 1 −σtxt−1, σtI), (1) with a variance schedule {σt}T t=1. (Ho et al., 2020) proposes to train a neural network ϵθ(xt, t), where θ denotes network parameters, such that when starting with an initial noise xT ∼N(0, I), it allows one to gradually denoise the sample to x0 ∼pdata(x0) via ˆx0 = 1 √¯αt (xt − √ 1 −¯αtϵθ(xt)) (2) xt−1 = √¯αt−1ˆx0 + p 1 −¯αt−1ϵθ(xt) + σtz, (3) where αt = 1 −σt, ¯αt := Qt s=1 αs. The next-step prediction xt−1 is obtained by computing the clean image estimate ˆx0 and re-injecting a decreasing amount of random noise z ∼N(0, I). 3 Preprint - Under Review Target Distribution A “Japanese samurai” Source Distribution (a) Standard DDPM Sampling (b) Coupled DDPM Sampling (c) Samples from Two Methods Target Distribution B “Astronaut on Mars” Target Distribution A “Japanese samurai” Source Distribution Samples from standard DDPM sampling Samples from coupled DDPM sampling (Ours) Target Distribution B “Astronaut on Mars” DDPM Denoising Prior Sampling Omitted Sample Steps Coupling Term Figure 3: Overview of the proposed coupled sampling method. Given two target statistical dis- tributions modeled with diffusion models: (a) standard DDPM sampling generates two instances independently, using scores from each distribution, which leads to samples without spatial align- ment; (b) in contrast, the proposed coupled DDPM sampling introduces coupling terms ∇U that pull the two sample paths together, producing spatially and semantically aligned outputs; and (c) as illustrated, samples from the standard DDPM sampling produce independent samples. In contrast, coupled sampling produces spatially aligned samples while each sample correctly remain within its distribution. 3.2 COUPLED DDPM SAMPLING Problem. Given two diffusion models ϵθA and ϵθB for a shared data domain Rd and with a shared DDPM schedule, our goal is to obtain two samples xA, xB ∈Rd such that they follow the data distribution prescribed by the pre-trained models pA data(x) and pB data(x), respectively, while staying close to each other. This objective can be interpreted as tilting the distribution pA data(x) to be close to a sample xB(x) ∼pB data(x), and vice versa. We introduce a coupling function U : Rd × Rd →R that measures the closeness of two samples. A natural choice is the Euclidean Distance and in this work, we use U(x, x′) = −λ 2 ∥x −x′∥2 2 with a constant coefficient λ ∈R. Formally, our objective is written as min xA,xB J A(xA, xB) + J B(xA, xB), where (4) J A(x; x′) := pA data(x) exp U(x, sg(x′)), (5) J B(x; x′) := pB data(x) exp U(sg(x), x′), (6) where sg denotes the stop gradient operation. Taking the gradients: ∇xJ i(x, x′) = ∇x log pi(x) + ∇xU(x, x′), i ∈{A, B}. (7) Here, the additional term ∇xU(x, x′) biases the sample trajectory {xi t}t from the standard diffusion trajectory following pi(x) to satisfy the goal. Tilting diffusion model sampling towards inference- time reward functions or constraints has been widely studied for preference alignment (Wu et al., 2023) and inverse problems (Chung et al., 2023; 2022), with gradient likelihood of a form similar to Eq. (7), although typically under a fixed target. In contrast, in this work, the optimization target depends on another variable. Algorithm. Let xA t , xB t ∈Rd be two data samples. xA t−1 = √¯αt−1ˆxA 0 + p 1 −¯αt−1(ϵθA(xA t ) + ∇ˆxA 0 U(ˆxA 0 , ˆxB 0 )) + σtzA, zA ∼N(0, I), (8) xB t−1 = √¯αt−1ˆxB 0 + p 1 −¯αt−1(ϵθB(xB t ) + ∇ˆxB 0 U(ˆxB 0 , ˆxA 0 )) + σtzB, zB ∼N(0, I). (9) Let f A(xA t ; t) := exp U(ˆxA 0 , ˆxB 0 ) ∝exp −1 2 ∥ˆxA 0 −ˆxB 0 ∥2 2 1/λ = N(ˆxB 0 , 1/ √ λI), providing the interpre- tation that f A(xA t ; t) assigns low energy to ˆxA 0 close to ˆxB 0 in during the sampling process, and similarly for xB t . This term effectively serves as a soft regularization that encourages two sam- ples to stay close. The gradient term ∇xU(x, x′) = −λ(x −x′) is easy to compute with minimal computation overhead. The sampling algorithm is summarized in Algorithm 1. 4 Preprint - Under Review Algorithm 1 Coupled DDPM Sampling 1: θ2D: Text2Image diffusion model 2: θMV : Text2MultiView diffusion model 3: xT,2D, xT,MV ∼N(0, I): initial latents 4: xT,2D, xT,MV shapes: N × H × W × C where N is # of views 5: for t ∈T, ..., 0 do 6: ˆx0,MV ← 1 √¯αt (xt,MV −√1 −¯αtϵθ,MV (xt,MV )) ▷x0 prediction 7: ˆx0,2D ← 1 √¯αt (xt,2D −√1 −¯αtϵθ,2D(xt,2D)) 8: ˆxt−1,MV ←√¯αt−1,MV ˆxMV,0 + √1 −¯αt−1ϵθ(xt,MV ) + σtz ▷DDPM step 9: ˆxt−1,2D ←√¯αt−1,MV ˆxMV,0 + √1 −¯αt−1ϵθ(xt,2D) + σtz 10: xt−1,2D ←ˆxt−1,2D −√1 −¯αt−1λ(ˆx2D,0 −ˆxMV,0) ▷Coupled guidance step 11: xt−1,MV ←ˆxt−1,MV −√1 −¯αt−1λ(ˆxMV,0 −ˆx2D,0) 12: end for Figure 4: Multi-view stylization. We show three examples of multi-view stylization of our method against the baselines. Prior work on combining diffusion models (Liu et al., 2022; Du et al., 2023) suffer from inconsistencies across frames. SDS based methods (Richardson et al., 2023) suffer from severe artifacts. Hunyuan 3D’s results follow the prompt loosely when doing retexturing. 4 EXPERIMENTS We refer the readers to the supplementary webpage for video results. To demonstrate the versatility of our method, we select tasks that highlight various editing aspects. 1) Spatial editing: We use Magic Fixup (Alzayer et al., 2025b) to highlight the ability of making geometric changes in a scene. 2) Stylization: We perform stylization using Control-Net (Zhang et al., 5 Preprint - Under Review Table 1: Quantitative comparison on spatial edit- ing. We evaluate against GT renders of the target edit, and use MEt3r for geometric consistency. Per-image metrics MV metric Method PSNR ↑ SSIM ↑ LPIPS ↓ MEt3r ↓ Users ↑ Per-image 16.5 0.550 0.253 0.353 - Image-to-MV 12.84 0.400 0.556 0.417 - Liu et al. (2022) 16.5 0.530 0.354 0.368 9% Du et al. (2023) 16.7 0.548 0.411 0.344 1% SDEdit 15.4 0.458 0.468 0.393 11% Ours 17.0 0.550 0.421 0.335 80% Table 2: Quantitative comparison on relight- ing. We evaluate against GT relighting results in terms of per-image metrics, and evaluate multi- view consistency with MEt3r. Per-image metrics MV metric Method PSNR ↑ SSIM ↑ LPIPS ↓ MEt3r ↓ Users ↑ Per-image 22.7 0.862 0.159 0.243 - Image-to-MV 19.3 0.815 0.193 0.229 - Liu et al. (2022) 23.2 0.871 0.152 0.220 10% Du et al. (2023) 22.1 0.863 0.158 0.217 19% NeRF + NG 22.4 0.865 0.162 0.217 25% Ours 23.2 0.868 0.157 0.217 46% 2023) with edge control, demonstrating how we can alter the general appearance of the input while preserving its overall shape. 3) Relighting: We perform relighting using two different models: 1) Neural-Gaffer (Jin et al., 2024), which takes an explicit environment map as input, and 2) IC-Light (Zhang et al., 2025a), which is text-conditioned, producing more diverse edits. For each of these tasks, we begin with a collection of input images and additional task-specific conditioning. The 2D model is capable of editing each image individually, but this often leads to inconsistencies across the set. In contrast, the multi-view model (Zhou et al., 2025) is a novel view synthesis model that takes a set of consistent images and generates novel views. Our pipeline first edits a single image using the 2D model and then uses it as a reference for the multi-view model. However, editing only a single image is insufficient to fully preserve the identity of the input, as illustrated in Figure 2. To address this, we couple the two models, enabling the multi-view model to maintain identity while ensuring consistency across multiple views. We perform the coupling in the latent space, and in all these experiments, both the image editing models and the multi-view model operate in the latent space of Stable Diffusion 2.1 (Rombach et al., 2022). For each task, we adopt Liu et al. (2022) and Du et al. (2023) as general-purpose baselines for com- bining our two diffusion models. We also include task-specific baselines tailored to each scenario. To provide a comprehensive evaluation, we conduct user studies with 25 participants for all tasks, comparing our approach to all baselines using best-of-n preference questions. 4.1 MULTI-VIEW SPATIAL EDITING Spatial editing is challenging because it requires accurately harmonizing the scene, including object interactions and changes in shadows and reflections resulting from edits. There are no large-scale datasets available for training spatial editing models. As a result, previous work on 2D spatial editing has relied on large-scale video datasets (Wu et al., 2024b; Cheng et al., 2025; Alzayer et al., 2025b) to learn natural object motion. However, such data sources do not exist for multi-view datasets, as dynamic multi-view or 4D datasets are extremely scarce and are typically created only for evaluation purposes. Our coupled sampling paradigm addresses this gap. We use Magic Fixup (Alzayer et al., 2025b) for the 2D editing model. This model takes the original image and a coarse edit that specifies the desired spatial changes. For multi-view editing, it is necessary to apply the edit consistently across all views. In our experiments, we unproject the target object in each image using a depth map. We then apply a 3D transformation to the object and reproject it into the image. As a baseline, we also use SDEdit (Meng et al., 2022), which similarly accepts a coarse edit. Figure 5 presents three different coarse edits, with two frames from each edit shown to illustrate consistency. In the first example, we find that our method correctly translated and rotated the car, while preserving the identity of the input. By contrast, the baselines struggle to maintain the back view of the scene. In the final edit, our method produces smooth shadows that match the ground truth, whereas the baseline results in highly irregular shadows. To quantitatively evaluate performance, we render the ground truth 3D transformation for each edit using Blender. We use standard reconstruction metrics, and MEt3r (Asim et al., 2025), which mea- sures the 3D consistency of multi-view outputs. Table 1 demonstrates that our method achieves higher PSNR and SSIM scores, along with superior multi-view consistency. 6 Preprint - Under Review GT SDEdit (Meng 2022) Du et al. 2023 Liu et al. 2022 Ours Coarse edit Input Figure 5: Qualitative comparison on multi-view spatial editing. The baselines struggle in pre- serving the identity of the input, and produce flickering artifacts across edited frames, while our results achieve both editing targets and multi-view consistency. 4.2 MULTI-VIEW STYLIZATION Stylization is a common application of diffusion models, where an input sequence, the spatial struc- ture of the desired output, and a text prompt specifying the style are provided. Control-Net (Zhang et al., 2023) enables this type of stylization by incorporating geometry-related conditioning, such as the Canny edges of an image. Because ControlNet is trained on a large dataset, it achieves higher text fidelity than text-to-MV models. A closely related task is 3D re-texturing, in which a 3D mesh is given and a new texture is generated using a generative model. To assess our method, we rendered ten different scenes and applied stylization to each using user-defined prompts. For a comprehen- sive comparison, we also include baselines that operate directly on the 3D mesh, such as TEXTure (Richardson et al., 2023), which synthesizes new textures using SDS (Poole et al., 2023), and Hun- yuan3D (Team, 2025), which employs a feed-forward multi-view model to generate textures. We omit InstructNeRF2NeRF as it fails on our inputs. In Fig. 4, we present results from three represen- tative examples. In the first example, score averaging methods have difficulty preserving the identity of the edited subject, resulting in color changes or the changing identity across frames. In contrast, TEXTure exhibits severe artifacts due to its SDS-based approach. Hunyuan3D produces very simple edits that often do not align with the text prompt. Although the quantitative evaluation of stylization remains challenging, we assess both temporal and subject consistency in our generated videos using VBench Zhang et al. (2024) and measure geometric consistency with MEt3r (Asim et al., 2025). Our results show that our method achieves superior temporal and subject consistency compared to previous approaches for combining diffusion models. For reference, we also report results from mesh-based methods on rendered videos, which are inherently temporally consistent due to the underlying mesh representation. 7 Preprint - Under Review Table 3: Quantitative comparison on stylization. We evaluate the temporal and subject consistency, and MEt3r score for geometric consistency. CLIP score is computed against the edit prompt. Per-img metric MV metrics Method CLIP score ↑ Temp. consis. ↑ Subject consist. ↑ MEt3r ↓ User pref. ↑ Mesh-free Per-image (Zhang et al., 2023) 30.0 0.922 0.740 0.546 - ✓ Image-to-MV (Zhou et al., 2025) 29.5 0.927 0.787 0.382 - ✓ TEXTure (Richardson et al., 2023) 28.4 0.967 0.748 0.426 14% X Hunyuan3D (Team, 2025) 29.9 0.952 0.754 0.391 8% X Liu et al. (2022) 30.1 0.934 0.759 0.461 19% ✓ Du et al. (2023) 30.2 0.926 0.762 0.461 12% ✓ Coupled Sampling (Ours) 29.68 0.946 0.807 0.392 47% ✓ GT Input Ours NeRF + Neural Gaffer (Jin et al. 2024) Liu et al. 2022 Du et al. 2023 Figure 6: Qualitative comparison on environment map based relighting. Other methods tend to produce flickering artifacts (notice the change in color in the first two rows for Liu et al. (2022); Du et al. (2023)). The usage of NeRF will make the lighting changes to be baked into the view dependent effects. Our method achieves the best overall result. 4.3 MULTI-VIEW RELIGHTING Environment map conditioned relighting. When the variance of the 2D diffusion results is low, meaning the sampling distribution is narrow, radiance fields can effectively regularize inconsis- tencies. However, this requires obtaining a consistent geometry beforehand. As an alternative, we demonstrate that a multi-view diffusion model can regularize inconsistencies in 2D relighting through coupled sampling. Figure 6 presents two relighting examples to illustrate this. We observe that prior methods for combining diffusion models (Liu et al., 2022; Du et al., 2023) can introduce flickering artifacts, as evidenced by abrupt color changes in the top two rows. In contrast, NeRF- based approaches may incorrectly attribute lighting variance to view-dependent effects, as illustrated in the bottom two rows of the backpack example. To quantitatively compare these methods, we use the 3D objects from Neural-Gaffer (Jin et al., 2024), and add both a diffuse and a glossy object, resulting in a total of seven objects with five relightings each. We compute per-image reconstruction metrics and geometric consistency using MEt3r, as shown in Table 2. Although these metrics do not capture subtle lighting flicker, our method achieves competitive results in both reconstruction and consistency. Importantly, we also report metrics for relighting each image individually, which serves as a coarse upper bound, and observe no degradation in performance. Text conditioned relighting. To show more drastic relighting outputs, we use IC-Light (Zhang et al., 2025a), which operates by relighting the object and adding a suitable background. While Stable-Virtual-Camera (Zhou et al., 2025) may have a weak prior for regularizing backgrounds due 8 Preprint - Under Review Sample inputs Sample inputs “snowy lighting” “snowy lighting” “glowing neon” “glowing neon” “studio lighting” “beach sunset” Relighting outputs Relighting outputs Figure 7: Text based relighting. We combine IC-Light (Zhang et al., 2025a), which enables text- based relighting with stable virtual camera to obtain multi-view results. Stable Diffusion 2.1 backbone oupled samples (ours) ultiview samples prompt: “steampunk scuba diver” prompt: “a jade statue of a necromancer” D samples Stable Diffusion XL backbone Figure 8: Coupling in different multi-view models. We implement coupling on T2I and T2MV models with two different backbones. We couple SD2.1 with MVDream (Shi et al., 2024), and SDXL with MVAdapter (Huang et al., 2024), which operates in SDXL latent space. In both cases, the coupled multiview samples show an increase in realism and a decrease in “objaverse” appear- ance. to its training data, we find that it still ensures the object is consistently lit across frames. In Fig. 7 we show diverse multi-view relighting results using our method. 5 ANALYSIS EXPERIMENTS In this section, we demonstrate that the benefits of coupled sampling extend to various models, and analyze how varying the guidance strength in our approach influence the results. Backbone variations. In Section 4, we presented multi-view editing results using Stable Virtual Camera (Zhou et al., 2025). Here, we further examine the impact of coupling on text-to-multi-view models, specifically MVDream (Shi et al., 2024), which extends Stable Diffusion 1.5 to produce four consistent views, and MV-Adapter (Huang et al., 2024), which leverages the more advanced SDXL backbone and operates in the SDXL latent space. For coupling, we use SD1.5 and SDXL as the respective text-to-image models. As shown in Figure 8, text-to-multi-view models often gen- erate objects with a CGI-like appearance, likely due to their training on datasets such as Objaverse 9 Preprint - Under Review No Coupling Enchanted forests with mushrooms Japanese samurai Astronaut on Mars With Coupling Golden retriever A digital painting A cat playing with a yarn ball Figure 9: Image space coupling. Using Flux, we perform coupled sampling on different prompts. We show that the coupled samples are spatially aligned while being faithful to the prompt. (Deitke et al., 2023). Introducing our coupling approach encourages the multi-view samples to better resemble real images, as modeled by the 2D diffusion models. Coupling Text-to-image flow models. Coupled diffusion sampling can be applied to both 2D and multi-view settings. To illustrate the effects of coupled sampling, we implement our method using the text-to-image model Flux (Labs, 2024). Although Flux is a flow-based model (Lipman et al., 2023; Liu et al., 2023), we show that our coupling approach remains effective. We test coupled sampling by generating two samples from the same model, each conditioned on a different prompt. As shown in Fig. 9, without coupling, the outputs are typically very distinct. With the coupled sampling, the outputs become spatially aligned while still reflecting their respective prompts. Guidance strength analysis. We quantitatively evaluate the effects of guidance strength λ on spatial editing performance. When λ is very small, the model output resembles image-to-MV sampling, resulting in low reconstruction performance. As λ increases, reconstruction performance improves. However, with further increases in λ, consistency across frames degrades as the outputs become more similar to 2D model samples and eventually collapse. 6 DISCUSSION AND CONCLUSION Figure 10: Guidance strength analysis. As we increase the guidance strength, the reconstruction improves but the consistency drops. We introduce a simple and effective approach for coupling diffusion models, enabling 2D dif- fusion models to generate consistent multi-view edits when used with multi-view diffusion mod- els. Our method is efficient, versatile, and achieves high-quality results. By guiding the diffusion sampling process, our approach pro- duces outputs that retain the strengths of the underlying models, while also inheriting their limitations. We believe this coupling strategy has poten- tial applications beyond multi-view editing. In the future, our paradigm could extend the capa- bilities of image-editing models to video edit- ing by integrating with video diffusion mod- els, without incurring additional computational overhead. Acknowledgments. We would like to thank Gordon Wetzstein, Jon Barron, Ben Poole, Michael Gharbi, and Songwei Ge for the fruitful discussions. 10 Preprint - Under Review REFERENCES Hadi Alzayer, Philipp Henzler, Jonathan T. Barron, Jia-Bin Huang, Pratul P. Srinivasan, and Dor Verbin. Generative multiview relighting for 3d reconstruction under extreme illumination varia- tion. In CVPR, June 2025a. Hadi Alzayer, Zhihao Xia, Xuaner (Cecilia) Zhang, Eli Shechtman, Jia-Bin Huang, and Michael Gharbi. Magic fixup: Streamlining photo editing by watching dynamic videos. ACM Trans. Graph., 2025b. Mohammad Asim, Christopher Wewer, Thomas Wimmer, Bernt Schiele, and Jan Eric Lenssen. Met3r: Measuring multi-view consistency in generated images. In CVPR, 2025. Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Roni Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. In ICLR, 2024. Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. Multidiffusion: fusing diffusion paths for controlled image generation. In Int. Conf. Mach. Learn., 2023. Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In ICCV, 2023. Yen-Chi Cheng, Krishna Kumar Singh, Jae Shin Yoon, Alexander Schwing, Liangyan Gui, Matheus Gadelha, Paul Guerrero, and Nanxuan Zhao. 3D-Fixup: Advancing Photo Editing with 3D Priors. In Proceedings of the SIGGRAPH Conference Papers, 2025. Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. In NeurIPS, 2022. Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In ICLR, 2023. Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of anno- tated 3d objects. In CVPR, 2023. Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In NeurIPS, 2021. Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Grathwohl. Reduce, reuse, recycle: compo- sitional generation with energy-based diffusion models and mcmc. In Int. Conf. Mach. Learn., 2023. Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul P. Srinivasan, Jonathan T. Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In NeurIPS, 2024. Ayaan Haque, Matthew Tancik, Alexei A Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. In ICCV, 2023. Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771–1800, 2002. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, volume 33, pp. 6840–6851, 2020. Zehuan Huang, Yuanchen Guo, Haoran Wang, Ran Yi, Lizhuang Ma, Yan-Pei Cao, and Lu Sheng. Mv-adapter: Multi-view consistent image generation made easy. In ICCV, 2024. Haian Jin, Yuan Li, Fujun Luan, Yuanbo Xiangli, Sai Bi, Kai Zhang, Zexiang Xu, Jin Sun, and Noah Snavely. Neural gaffer: Relighting any object via diffusion. In NeurIPS, 2024. 11 Preprint - Under Review Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In NeurIPS, 2022. Bernhard Kerbl, Georgios Kopanas, Thomas Leimk¨uhler, and George Drettakis. 3d gaussian splat- ting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139–1, 2023. Jaihoon Kim, Juil Koo, Kyeongmin Yeo, and Minhyuk Sung. Synctweedies: A general generative framework based on synchronized diffusions. In NeurIPS, 2024. Vladimir Kulikov, Matan Kleiner, Inbar Huberman-Spiegelglas, and Tomer Michaeli. Flowedit: Inversion-free text-based editing using pre-trained flow models. In ICCV, 2025. Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024. Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Bian- calani, Shuiwang Ji, Aviv Regev, Sergey Levine, et al. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv preprint arXiv:2408.08252, 2024. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In ICLR, 2023. Yaron Lipman, Marton Havasi, Peter Holderrieth, Neta Shaul, Matt Le, Brian Karrer, Ricky TQ Chen, David Lopez-Paz, Heli Ben-Hamu, and Itai Gat. Flow matching guide and code. arXiv preprint arXiv:2412.06264, 2024. Yehonathan Litman, Fernando De la Torre, and Shubham Tulsiani. Lightswitch: Multi-view relight- ing with material-guided diffusion. In ICCV, 2025. Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. Compositional visual generation with composable diffusion models. In ECCV, 2022. Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In ICLR, 2023. Nanye Ma, Shangyuan Tong, Haolin Jia, Hexiang Hu, Yu-Chuan Su, Mingda Zhang, Xuan Yang, Yandong Li, Tommi Jaakkola, Xuhui Jia, et al. Inference-time scaling for diffusion models beyond scaling denoising steps. In CVPR, 2025. Nadav Magar, Amir Hertz, Eric Tabellion, Yael Pritch, Alex Rav-Acha, Ariel Shamir, and Yedid Hoshen. Lightlab: Controlling light sources in images with diffusion models. In SIGGRAPH, 2025. David McAllister, Songwei Ge, Jia-Bin Huang, David W. Jacobs, Alexei A. Efros, Aleksander Holynski, and Angjoo Kanazawa. Rethinking score distillation as a bridge between image distri- butions. In NeurIPS, 2024. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In ICLR, 2022. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In CVPR, 2023. Jiteng Mu, Micha¨el Gharbi, Richard Zhang, Eli Shechtman, Nuno Vasconcelos, Xiaolong Wang, and Taesung Park. Editable image elements for controllable synthesis. In ECCV, 2024. Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In ICLR, 2023. Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. Texture: Text- guided texturing of 3d shapes. In ACM SIGGRAPH Conference Proceedings, 2023. 12 Preprint - Under Review Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj¨orn Ommer. High- resolution image synthesis with latent diffusion models. In CVPR, 2022. Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3d generation. In ICLR, 2024. Tencent Hunyuan3D Team. Hunyuan3d 2.0: Scaling diffusion models for high resolution textured 3d assets generation, 2025. Alex Trevithick, Roni Paiss, Philipp Henzler, Dor Verbin, Rundi Wu, Hadi Alzayer, Ruiqi Gao, Ben Poole, Jonathan T. Barron, Aleksander Holynski, Ravi Ramamoorthi, and Pratul P. Srinivasan. Simvs: Simulating world inconsistencies for robust view synthesis. In CVPR, 2025. Vaibhav Vavilala, Seemandhar Jain, Rahul Vasanth, D. A. Forsyth, and Anand Bhattad. Generative blocks world: Moving things around in pictures, 2025. URL https://arxiv.org/abs/ 2506.20703. Yinhuai Wang, Jiwen Yu, and Jian Zhang. Zero-shot image restoration using denoising diffusion null-space model. In ICLR, 2023a. Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolific- dreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. In NeurIPS, 2023b. Ethan Weber, Aleksander Holynski, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, and Angjoo Kanazawa. Nerfiller: Completing scenes via generative 3d inpainting. In CVPR, 2024. Luhuan Wu, Brian Trippe, Christian Naesseth, David Blei, and John P Cunningham. Practical and asymptotically exact conditional sampling in diffusion models. In NeurIPS, 2023. Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, and Aleksander Ho?y?ski. Reconfusion: 3d reconstruction with diffusion priors. In CVPR, 2024a. Ziyi Wu, Yulia Rubanova, Rishabh Kabra, Drew A. Hudson, Igor Gilitschenski, Yusuf Aytar, Sjoerd van Steenkiste, Kelsey R Allen, and Thomas Kipf. Neural assets: 3d-aware multi-object scene synthesis with image diffusion models. In NeurIPS, 2024b. Runjie Yan, Yinbo Chen, and Xiaolong Wang. Consistent flow distillation for text-to-3d generation. In ICLR, 2025. Fan Zhang, Shulin Tian, Ziqi Huang, Yu Qiao, and Ziwei Liu. Evaluation agent: Efficient and promptable evaluation framework for visual generative models. arXiv preprint arXiv:2412.09645, 2024. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Scaling in-the-wild training for diffusion-based illumination harmonization and editing by imposing consistent light transport. In ICLR, 2025a. Yunzhi Zhang, Carson Murtuza-Lanier, Zizhang Li, Yilun Du, and Jiajun Wu. Product of experts for visual generation. arXiv preprint arXiv:2506.08894, 2025b. Jensen (Jinghao) Zhou, Hang Gao, Vikram Voleti, Aaryaman Vasishta, Chun-Han Yao, Mark Boss, Philip Torr, Christian Rupprecht, and Varun Jampani. Stable virtual camera: Generative view synthesis with diffusion models. In ICCV, 2025. 13 Preprint - Under Review A ADDITIONAL DISCUSSION OF LIMITATIONS While the proposed method offers a simple and efficient framework for multi-view consistent image editing, several notable limitations remain. First, running both the 2D editing model and the multi- view diffusion model in parallel increases memory and computational requirements. This limitation could be addressed in future work by exploring adaptive guidance strength or applying guidance during only a subset of sampling steps. Second, the edited outputs are not perfectly 3D consistent compared to test-time optimization-based methods. This residual inconsistency can be further re- duced by robustly fitting a NeRF or 3D Gaussian Splatting model to the generated views, as shown in prior work Haque et al. (2023); Weber et al. (2024), while still maintaining much faster inference than optimization-based approaches. B IMPLEMENTATION DETAILS As our primary multi-view generation base model, we use Stable-Virtual-Camera (SVC) which is trained to process 21 frames at once, and use one or several consistent images for novel view syn- thesis. As we would like to edit a collection of views, we do not have access to more than one consistent edited photos, since we can only edit one image at a time with the 2D editing model. In our experiments, we edit a reference image, and then use it as the conditioning view. Note that this conditioning view has great influence on the outcome, as it dictates the distribution of acceptable 3D scenes that SVC would synthesize. We transform stable-virtual-camera into DDPM by converting the EDM based sampler into a DDPM scheduled by computing converting the noise levels into the appropriate alphas. Afterwards, since SVC was trained with a shifted noise schedule compared to SD2.1 image models, we re-align SVC’s schedule with the 2D model’s schedule for the coupled sampling to be effective. We conduct our experiments using NVIDIA A6000 GPUs. As our approach only requires a feed for- ward pass, the memory requirement is equivalent to the combined memory of the two models used. A better memory utilization can be further achieved by loading and off-loading the models from the GPU, as we can run them sequentially and then compute the coupling term. We use 50 denoising steps for spatial editing and stylization, and 100 denoising steps with Neural Gaffer relighting. The runtime of the sampling process is 130 seconds using our GPU resources for generating the full 21 frames sequence. In the experiments with Neural-Gaffer, one challenge is that Neural-Gaffer is trained on 256x256 images. On the other hand, SVC was trained on 576x576 images. We found that SVC performs very poorly on images of that size, and neural-gaffer does not generalize to 512x512 images or larger. After experimenting with the models, we found that at resolution size of 384x384, both models perform reasonably well and adopt that for the neural-gaffer experiments. C COUPLED DIFFUSION SAMPLING WITH FLOW MODELS Note that Flux (Labs, 2024), the text-to-image model used in Sec. 5 is a flow model. To sample from Flux using our proposed sampling method, first we transform the velocity vθ(xt) to the score function sθ(xt), as it can be linearly transformed into score functions via sθ(xt) = −−tvθ(xt)+xt 1−t . Then transform the inference schedule to be DDPM via time reparameterization (Lipman et al., 2024) by computing the appropriate alpha values that match the noise levels associated with each time step. C.1 EFFECTS OF GUIDANCE STRENGTH As an additional illustration, in Fig. 11 we show how the samples change as we increase the coupling strength, while using the same initial random noise and randomness seed. 14 Preprint - Under Review Coupling Strength 0.0 rompt: Japanese Samurai Prompt: Astronaut on Mars Coupling Strength 0.0025 Coupling Strength 0.005 Coupling Strength 0.01 Figure 11: Effects of coupling strength. We illustrate the effects of coupling strength on the spatial alignment between samples as we vary the coupling strength while keeping the initial noise and random seed. Table 4: User study results on text-based relighting. Method User pref. ↑ Liu et al. (2022) 24% Du et al. (2023) 26.5% Coupled Sampling (Ours) 49.5% D USER STUDY ON IC-LIGHT For completion, we include the results of our user study on IC-Light in Tab. 4. We show that our outputs are preferred by users over either of prior work on combinign diffusion models, as they tend to produce high flickering artifacts in the relit outputs. E EFFECTS OF STOCHASTICITY One observation one would make is that our coupling term resembles linearly combining the inter- mediate samples of the two models, so one may wonder why we do not simply get images that are a linear average of the two outputs. Indeed, when we use a deterministic sampler, like Euler Dis- crete Sampler that’s commonly used, this is the outcome that we encounter as we show in Fig. 12. However, when using a stochastic sampler like DDPM where noise is injected at every timestep, the model needs to correct for the added noise. When we include our coupling term in the stochastic step, the model can naturally correct or reject parts of the guidance that steers it away from its train- ing distribution. This is also the reason we make our coupling term to be correlated with the noise level, by scaling it with √1 −αt, since at step t, the model has the ability to correct for noise at that level, but steering the sample by a larger magnitude risks pulling the intermediate latents outside of the training distribution. Additionally, as intuitively understood about diffusion sampling, at the later time steps the structure of the outputs is already determined, so shifting the intermediate latents in a large direction can disrupt the sampling process. 15 Preprint - Under Review DDPM sampler (Stochastic) Husky playing with a ball usky sleeping No Coupling With Coupling No Coupling With Coupling Euler Discrete Sampler (Deterministic) Figure 12: Sampler comparison. When using a stochastic sampler, the coupling can lead to natural guidance pulling the outputs towards each other. On the other hand, a deterministic sampler would simply output the average of both samples, as ODE based sampling does cannot recover from noisy guidance. F ADDITIONAL T2MV RESULTS In Fig. 13 and Fig. 14, we highlight additional results from coupling text-to-multi-view models along with text-to-image models. G APPLICATIONS WITH MV-ADAPTER One of the limitations of MV-Adapter is that it can only generate fixed set of camera views, making its utility for editing limited. Nonetheless, we show that we can still use it by editing the outputs it produces by performing coupling with single-image editing models. In Fig. 15, we show an example of using MV-Adapter for stylization, and relighting. H OUTPUTS OF INSTRUCTNERF2NERF When running InstructNeRF2NeRF on our input sequences used for stylization with the same num- ber of frames as our method and other baselines (21 frames), we find that the radiance field com- pletely collapses. This is likely due to NeRF’s inability to gradually handle inconsistency with less dense camera coverage. 16 Preprint - Under Review Figure 13: Additional MVDream T2MV coupling results. Here we show additional results on the output of Text-to-Multiview MVDream when coupled with Text-to-Image SD2.1. 17 Preprint - Under Review Figure 14: Additional MV-Adapter T2MV coupling results. Here we show additional results on the output of Text-to-Multiview MV-Adapter when coupled with Text-to-Image SDXL. 18 Preprint - Under Review Figure 15: Multiview editing with MV-Adapter. Here we show editing results with MV-Adapter to achieve stylization by combining it with Control-Net (Zhang et al., 2023) and relighting using IC-Light (Zhang et al., 2025a). Sample input frame + rompt: “make it a golden lamborghini” InstructNeRF2NeRF renders Figure 16: InstructNeRF2NeRF outputs. When running InstructNeRF2NeRF (Haque et al., 2023) on our input views, we find that the editing training loop with InstructPix2Pix completely collapses. 19
Preprint - Under Review COUPLED DIFFUSION SAMPLING FOR TRAINING-FREE MULTI-VIEW IMAGE EDITING Hadi Alzayer1,2 Yunzhi Zhang1 Chen Geng1 Jia-Bin Huang2 Jiajun Wu1 1Stanford University 2 -time diffusion sampling method to perform multi-view consistent image editing using pre-trained 2D image editing models. These models can independently produce high-quality edits for each image in a set of multiview images of a 3D scene or object, but they do not maintain consistency across views. Existing approaches typically address this by optimizing over explicit 3D representations, but they suffer from a lengthy optimization process and instability under sparse view settings. We propose an implicit 3D regularization approach by constraining the generated 2D image sequences to adhere to a pre-trained multiview image distribution. This is achieved through coupled diffusion sampling, a simple diffusion sampling technique that concurrently samples two trajectories from both a multi-view image distribution and a 2D edited image distribution, using a coupling term to enforce the multi-view consistency among the generated images. We validate the effectiveness and generality of this framework on three distinct multi-view image editing tasks, demonstrating its applicability across various model architectures and highlighting its potential as a general solution for multi-view consistent editing. Project page: https://coupled-diffusion.github.io 1 INTRODUCTION Diffusion-based image editing models have demonstrated unprecedented realism across diverse tasks via end-to-end training. These include object relighting (Jin et al., 2024; Magar et al., 2025; Zhang et al., 2025a), spatial structure editing (Wu et al., 2024b; Mu et al., 2024; Alzayer et al., 2025b; Vavilala et al., 2025), and stylization (Zhang et al., 2023). However, collecting and curating 3D data is significantly more costly than working with 2D data. As a result, recent research has explored test-time optimization methods for multi-view editing that leverage pre-trained 2D image diffusion models (Poole et al., 2023; Haque et al., 2023). Relighting Lighting prompt: "Sunset lighting by the beach" Stylization Editing prompt: "Marble and jade statue" Spatial editing Inputs Outputs Figure 1: Applications of coupled diffusion sampling. Our approach enables lifting off-the-shelf 2D editing models into multi-view by combining the sampling process of 2D diffusion models with multi-view diffusion models to produce view-consistent edits. Here we showcase example viewconsistent results using a 2D spatial editing model, stylization, and text-based relighting. 1 16 Oct 2025 Preprint - Under Review Lifting 2D image editing models directly to the 3D multi-view domain is non-trivial, primarily due to the difficulty in ensuring 3D consistency across different viewpoints. To address this, most existing methods (Haque et al., 2023; Jin et al., 2024) rely on explicit 3D representations, i.e., NeRF (Mildenhall et al., 2020) or 3D Gaussian Splatting (Kerbl et al., 2023). Despite achieving promising results in certain scenarios, these methods typically require time-consuming optimization and dense input view coverage. This significantly limits their applicability to real-time, real-world scenarios. Can we directly extend the capabilities of 2D image editing models to the multi-view domain without relying on explicit 3D representations or incurring additional training overhead? We answer this question affirmatively by introducing a novel diffusion sampling method-coupled diffusion sampling. As shown in Fig. 1, our approach enables multi-view consistent image editing across diverse applications, including multi-view spatial editing, stylization, and relighting. Figure 2: Limitations of baselines. Using a pretrained image-to-multiview model conditioned on an edited image, can only be faithful to that single image but not the rest of the input views. On the other hand, editing each image individually with the 2D model produces highly inconsistent results. While prior work (Liu et al., 2022) proposes a method to compose diffusion models within the same domain, we find that their approach produces flickering results and cannot guarantee being faithful to the input views. As shown in Fig. 2, sampling from two diffusion models independently yields samples that are inconsistent across views. Conditioning a multi-view model using a single edited image, however, fails to preserve identity and align with the editing objective across all views. While prior work (Liu et al., 2022; Du et al., 2023) explored combining diffusion models within a modality, we observe that such approaches do not maintain multiview consistency and can stray from the editing objective. Our approach is motivated by the observation that, any sequence of images generated by a pre-trained multi-view image diffusion model inherently exhibit multi-view consistency. To this end, we embrace an implicit 3D regularization paradigm by leveraging scores estimated from multi-view diffusion models during the diffusion sampling process. Specifically, for any multi-view image editing task with a pre-trained 2D model, we couple it with a foundation multi-view diffusion model and perform sampling under dual guidance from both models. This process ensures that the resulting samples satisfy both the editing objective and multi-view 3D consistency, yet without any additional explicit 3D regularization or training overhead. We propose a practical sampling framework to achieve the above-mentioned goal by steering the standard diffusion sampling trajectory with an energy term coupling two sampling trajectories. This method ensures that each sample from one diffusion model remains within its own distribution while being guided by the other. In particular, samples from the multi-view diffusion model maintain multi-view consistency while being steered by the content edits from the 2D model. Conversely, the 2D model is steered so that its edits remain faithful to the inputs while being consistent across independently edited frames. Our solution is conceptually simple, broadly applicable, and adaptable to a variety of settings. We showcase its effectiveness across three distinct multi-view image editing tasks: multi-view spatial editing, stylization, and relighting. Through comprehensive experiments on each task, we demonstrate the advantages of our method over the state-of-the-art. We further validate the generalizability of our approach by applying it to diverse diffusion backbones and latent spaces, underscoring its promise as a general multi-view image editing engine. 2 RELATED WORK Test-time diffusion guidance. Test-time guidance approaches for diffusion models have been proposed to steer diffusion models toward external objectives. Test-time scaling methods (Ma et al., 2 Preprint - Under Review 2025; Li et al., 2024), such as rejection sampling or verifier-based search over large latent spaces, passively filter generated samples. In contrast, optimization-based guidance actively steers diffusion trajectories, offering a more efficient alternative. A widely used technique is classifier guidance, where a discriminative classifier steers the diffusion trajectory toward a target label (Dhariwal & Nichol, 2021). When the objective is differentiable, gradient-based guidance can be directly applied during sampling (Bansal et al., 2024) In other cases, prior work has explored diffusion guidance using degradation operators, which require additional assumptions in the forward process, e.g., as in linear inverse problems (Kawar et al., 2022; Wang et al., 2023a; Chung et al., 2023). However, in more general scenarios, such constraints are often intractable, making the proposed framework particularly suitable for these settings. 3D and multiview editing. With the advent of diffusion models capable of producing high-quality 2D image edits (Kulikov et al., 2025; Cao et al., 2023; Mokady et al., 2023), a natural question has been how to leverage those capabilities for 3D editing. One common approach is to optimize a 3D representation, such as Neural Radiance Fields (NeRF) (Mildenhall et al., 2020), so that its multiview renderings satisfy the editing goal. Bridging diffusion and NeRF can be achieved either by modifying the training dataset during the optimization loop (Haque et al., 2023; Wu et al., 2024a) or through score distillation sampling (Poole et al., 2023; Wang et al., 2023b; McAllister et al., 2024; Yan et al., 2025). However, both approaches are prone to visual artifacts, which is fundamentally caused by the fact that 2D diffusion models lack 3D consistency awareness. To address this fundamental challenge, prior work has directly trained multiview diffusion models (Litman et al., 2025; Alzayer et al., 2025a; Trevithick et al., 2025) for consistent editing. However, training a multiview diffusion model for each individual editing task is computationally expensive, and suitable training datasets are scarce. In our approach, we propose reusing existing multiview generation models (Gao et al., 2024; Zhou et al., 2025) for multiview editing by combining them with a 2D editing model, thereby incurring no additional training cost. In contrast to NeRF-based approaches, our method does not require a costly optimization process, as it relies solely on feed-forward sampling. Compositional diffusion sampling. Compositional sampling methods for diffusion models have been proposed to combine the priors of multiple models. Examples include product-of-experts sampling (Hinton, 2002; Zhang et al., 2025b), which samples from the product distribution of individual models. However, this approach imposes a strict requirement that valid samples lie in the intersection of the support of each model and fails when no such joint support exists. MultiDiffusion (Bar-Tal et al., 2023) and SyncTweedies (Kim et al., 2024) apply score composition for stitching panoramas or large images. However, their primary focus is on handling out-of-distribution scenarios, such as oversized images, whereas our work emphasizes remaining within each model's prior distribution while steering generation toward satisfying cross-model constraints. Prior works Liu et al. (2022); Du et al. (2023) address inference-time composition for diffusion models, but these works focus on the same data modality. In contrast, our work bridges 2D and 3D modalities to tackle the practical challenge of 3D data sparsity. 3 METHOD 3.1 BACKGROUND Diffusion Models. Let x0 ∼pdata(x0) be a data sample and consider the forward noising process: q(xt | xt-1) = N(xt | √ 1 -σtxt-1, σtI), (1) with a variance schedule {σt}T t=1. (Ho et al., 2020) proposes to train a neural network εθ(xt, t), where θ denotes network parameters, such that when starting with an initial noise xT ∼N(0, I), it allows one to gradually denoise the sample to x0 ∼pdata(x0) via ˆx0 = 1 √ ̄αt (xt - √ 1 - ̄αtεθ(xt)) (2) xt-1 = √ ̄αt-1ˆx0 + p 1 - ̄αt-1εθ(xt) + σtz, (3) where αt = 1 -σt, ̄αt := Qt s=1 αs. The next-step prediction xt-1 is obtained by computing the clean image estimate ˆx0 and re-injecting a decreasing amount of random noise z ∼N(0, I). 3 Preprint - Under Review Target Distribution A "Japanese samurai" Source Distribution (a) Standard DDPM Sampling (b) Coupled DDPM Sampling (c) Samples from Two Methods Target Distribution B "Astronaut on Mars" Target Distribution A "Japanese samurai" Source Distribution Samples from standard DDPM sampling Samples from coupled DDPM sampling (Ours) Target Distribution B "Astronaut on Mars" DDPM Denoising Prior Sampling Omitted Sample Steps Coupling Term Figure 3: Overview of the proposed coupled sampling method. Given two target statistical distributions modeled with diffusion models: (a) standard DDPM sampling generates two instances independently, using scores from each distribution, which leads to samples without spatial alignment; (b) in contrast, the proposed coupled DDPM sampling introduces coupling terms ∇U that pull the two sample paths together, producing spatially and semantically aligned outputs; and (c) as illustrated, samples from the standard DDPM sampling produce independent samples. In contrast, coupled sampling produces spatially aligned samples while each sample correctly remain within its distribution. 3.2 COUPLED DDPM SAMPLING Problem. Given two diffusion models εθA and εθB for a shared data domain Rd and with a shared DDPM schedule, our goal is to obtain two samples xA, xB ∈Rd such that they follow the data distribution prescribed by the pre-trained models pA data(x) and pB data(x), respectively, while staying close to each other. This objective can be interpreted as tilting the distribution pA data(x) to be close to a sample xB(x) ∼pB data(x), and vice versa. We introduce a coupling function U : Rd × Rd →R that measures the closeness of two samples. A natural choice is the Euclidean Distance and in this work, we use U(x, x′) = -λ 2 ∥x -x′∥2 2 with a constant coefficient λ ∈R. Formally, our objective is written as min xA,xB J A(xA, xB) + J B(xA, xB), where (4) J A(x; x′) := pA data(x) exp U(x, sg(x′)), (5) J B(x; x′) := pB data(x) exp U(sg(x), x′), (6) where sg denotes the stop gradient operation. Taking the gradients: ∇xJ i(x, x′) = ∇x log pi(x) + ∇xU(x, x′), i ∈{A, B}. (7) Here, the additional term ∇xU(x, x′) biases the sample trajectory {xi t}t from the standard diffusion trajectory following pi(x) to satisfy the goal. Tilting diffusion model sampling towards inferencetime reward functions or constraints has been widely studied for preference alignment (Wu et al., 2023) and inverse problems (Chung et al., 2023; 2022), with gradient likelihood of a form similar to Eq. (7), although typically under a fixed target. In contrast, in this work, the optimization target depends on another variable. Algorithm. Let xA t , xB t ∈Rd be two data samples. xA t-1 = √ ̄αt-1ˆxA 0 + p 1 - ̄αt-1(εθA(xA t ) + ∇ˆxA 0 U(ˆxA 0 , ˆxB 0 )) + σtzA, zA ∼N(0, I), (8) xB t-1 = √ ̄αt-1ˆxB 0 + p 1 - ̄αt-1(εθB(xB t ) + ∇ˆxB 0 U(ˆxB 0 , ˆxA 0 )) + σtzB, zB ∼N(0, I). (9) Let f A(xA t ; t) := exp U(ˆxA 0 , ˆxB 0 ) ∝exp -1 2 ∥ˆxA 0 -ˆxB 0 ∥2 2 1/λ = N(ˆxB 0 , 1/ √ λI), providing the interpretation that f A(xA t ; t) assigns low energy to ˆxA 0 close to ˆxB 0 in during the sampling process, and similarly for xB t . This term effectively serves as a soft regularization that encourages two samples to stay close. The gradient term ∇xU(x, x′) = -λ(x -x′) is easy to compute with minimal computation overhead. The sampling algorithm is summarized in Algorithm 1. 4 Preprint - Under Review Algorithm 1 Coupled DDPM Sampling 1: θ2D: Text2Image diffusion model 2: θMV : Text2MultiView diffusion model 3: xT,2D, xT,MV ∼N(0, I): initial latents 4: xT,2D, xT,MV shapes: N × H × W × C where N is # of views 5: for t ∈T, ..., 0 do 6: ˆx0,MV ← 1 √ ̄αt (xt,MV -√1 - ̄αtεθ,MV (xt,MV )) ▷x0 prediction 7: ˆx0,2D ← 1 √ ̄αt (xt,2D -√1 - ̄αtεθ,2D(xt,2D)) 8: ˆxt-1,MV ←√ ̄αt-1,MV ˆxMV,0 + √1 - ̄αt-1εθ(xt,MV ) + σtz ▷DDPM step 9: ˆxt-1,2D ←√ ̄αt-1,MV ˆxMV,0 + √1 - ̄αt-1εθ(xt,2D) + σtz 10: xt-1,2D ←ˆxt-1,2D -√1 - ̄αt-1λ(ˆx2D,0 -ˆxMV,0) ▷Coupled guidance step 11: xt-1,MV ←ˆxt-1,MV -√1 - ̄αt-1λ(ˆxMV,0 -ˆx2D,0) 12: end for Figure 4: Multi-view stylization. We show three examples of multi-view stylization of our method against the baselines. Prior work on combining diffusion models (Liu et al., 2022; Du et al., 2023) suffer from inconsistencies across frames. SDS based methods (Richardson et al., 2023) suffer from severe artifacts. Hunyuan 3D's results follow the prompt loosely when doing retexturing. 4 EXPERIMENTS We refer the readers to the supplementary webpage for video results. To demonstrate the versatility of our method, we select tasks that highlight various editing aspects. 1) Spatial editing: We use Magic Fixup (Alzayer et al., 2025b) to highlight the ability of making geometric changes in a scene. 2) Stylization: We perform stylization using Control-Net (Zhang et al., 5 Preprint - Under Review Table 1: Quantitative comparison on spatial editing. We evaluate against GT renders of the target edit, and use MEt3r for geometric consistency. Per-image metrics MV metric Method PSNR ↑ SSIM ↑ LPIPS ↓ MEt3r ↓ Users ↑ Per-image 16.5 0.550 0.253 0.353 - Image-to-MV 12.84 0.400 0.556 0.417 - Liu et al. (2022) 16.5 0.530 0.354 0.368 9% Du et al. (2023) 16.7 0.548 0.411 0.344 1% SDEdit 15.4 0.458 0.468 0.393 11% Ours 17.0 0.550 0.421 0.335 80% Table 2: Quantitative comparison on relighting. We evaluate against GT relighting results in terms of per-image metrics, and evaluate multiview consistency with MEt3r. Per-image metrics MV metric Method PSNR ↑ SSIM ↑ LPIPS ↓ MEt3r ↓ Users ↑ Per-image 22.7 0.862 0.159 0.243 - Image-to-MV 19.3 0.815 0.193 0.229 - Liu et al. (2022) 23.2 0.871 0.152 0.220 10% Du et al. (2023) 22.1 0.863 0.158 0.217 19% NeRF + NG 22.4 0.865 0.162 0.217 25% Ours 23.2 0.868 0.157 0.217 46% 2023) with edge control, demonstrating how we can alter the general appearance of the input while preserving its overall shape. 3) Relighting: We perform relighting using two different models: 1) Neural-Gaffer (Jin et al., 2024), which takes an explicit environment map as input, and 2) IC-Light (Zhang et al., 2025a), which is text-conditioned, producing more diverse edits. For each of these tasks, we begin with a collection of input images and additional task-specific conditioning. The 2D model is capable of editing each image individually, but this often leads to inconsistencies across the set. In contrast, the multi-view model (Zhou et al., 2025) is a novel view synthesis model that takes a set of consistent images and generates novel views. Our pipeline first edits a single image using the 2D model and then uses it as a reference for the multi-view model. However, editing only a single image is insufficient to fully preserve the identity of the input, as illustrated in Figure 2. To address this, we couple the two models, enabling the multi-view model to maintain identity while ensuring consistency across multiple views. We perform the coupling in the latent space, and in all these experiments, both the image editing models and the multi-view model operate in the latent space of Stable Diffusion 2.1 (Rombach et al., 2022). For each task, we adopt Liu et al. (2022) and Du et al. (2023) as general-purpose baselines for combining our two diffusion models. We also include task-specific baselines tailored to each scenario. To provide a comprehensive evaluation, we conduct user studies with 25 participants for all tasks, comparing our approach to all baselines using best-of-n preference questions. 4.1 MULTI-VIEW SPATIAL EDITING Spatial editing is challenging because it requires accurately harmonizing the scene, including object interactions and changes in shadows and reflections resulting from edits. There are no large-scale datasets available for training spatial editing models. As a result, previous work on 2D spatial editing has relied on large-scale video datasets (Wu et al., 2024b; Cheng et al., 2025; Alzayer et al., 2025b) to learn natural object motion. However, such data sources do not exist for multi-view datasets, as dynamic multi-view or 4D datasets are extremely scarce and are typically created only for evaluation purposes. Our coupled sampling paradigm addresses this gap. We use Magic Fixup (Alzayer et al., 2025b) for the 2D editing model. This model takes the original image and a coarse edit that specifies the desired spatial changes. For multi-view editing, it is necessary to apply the edit consistently across all views. In our experiments, we unproject the target object in each image using a depth map. We then apply a 3D transformation to the object and reproject it into the image. As a baseline, we also use SDEdit (Meng et al., 2022), which similarly accepts a coarse edit. Figure 5 presents three different coarse edits, with two frames from each edit shown to illustrate consistency. In the first example, we find that our method correctly translated and rotated the car, while preserving the identity of the input. By contrast, the baselines struggle to maintain the back view of the scene. In the final edit, our method produces smooth shadows that match the ground truth, whereas the baseline results in highly irregular shadows. To quantitatively evaluate performance, we render the ground truth 3D transformation for each edit using Blender. We use standard reconstruction metrics, and MEt3r (Asim et al., 2025), which measures the 3D consistency of multi-view outputs. Table 1 demonstrates that our method achieves higher PSNR and SSIM scores, along with superior multi-view consistency. 6 Preprint - Under Review GT SDEdit (Meng 2022) Du et al. 2023 Liu et al. 2022 Ours Coarse edit Input Figure 5: Qualitative comparison on multi-view spatial editing. The baselines struggle in preserving the identity of the input, and produce flickering artifacts across edited frames, while our results achieve both editing targets and multi-view consistency. 4.2 MULTI-VIEW STYLIZATION Stylization is a common application of diffusion models, where an input sequence, the spatial structure of the desired output, and a text prompt specifying the style are provided. Control-Net (Zhang et al., 2023) enables this type of stylization by incorporating geometry-related conditioning, such as the Canny edges of an image. Because ControlNet is trained on a large dataset, it achieves higher text fidelity than text-to-MV models. A closely related task is 3D re-texturing, in which a 3D mesh is given and a new texture is generated using a generative model. To assess our method, we rendered ten different scenes and applied stylization to each using user-defined prompts. For a comprehensive comparison, we also include baselines that operate directly on the 3D mesh, such as TEXTure (Richardson et al., 2023), which synthesizes new textures using SDS (Poole et al., 2023), and Hunyuan3D (Team, 2025), which employs a feed-forward multi-view model to generate textures. We omit InstructNeRF2NeRF as it fails on our inputs. In Fig. 4, we present results from three representative examples. In the first example, score averaging methods have difficulty preserving the identity of the edited subject, resulting in color changes or the changing identity across frames. In contrast, TEXTure exhibits severe artifacts due to its SDS-based approach. Hunyuan3D produces very simple edits that often do not align with the text prompt. Although the quantitative evaluation of stylization remains challenging, we assess both temporal and subject consistency in our generated videos using VBench Zhang et al. (2024) and measure geometric consistency with MEt3r (Asim et al., 2025). Our results show that our method achieves superior temporal and subject consistency compared to previous approaches for combining diffusion models. For reference, we also report results from mesh-based methods on rendered videos, which are inherently temporally consistent due to the underlying mesh representation. 7 Preprint - Under Review Table 3: Quantitative comparison on stylization. We evaluate the temporal and subject consistency, and MEt3r score for geometric consistency. CLIP score is computed against the edit prompt. Per-img metric MV metrics Method CLIP score ↑ Temp. consis. ↑ Subject consist. ↑ MEt3r ↓ User pref. ↑ Mesh-free Per-image (Zhang et al., 2023) 30.0 0.922 0.740 0.546 - ✓ Image-to-MV (Zhou et al., 2025) 29.5 0.927 0.787 0.382 - ✓ TEXTure (Richardson et al., 2023) 28.4 0.967 0.748 0.426 14% X Hunyuan3D (Team, 2025) 29.9 0.952 0.754 0.391 8% X Liu et al. (2022) 30.1 0.934 0.759 0.461 19% ✓ Du et al. (2023) 30.2 0.926 0.762 0.461 12% ✓ Coupled Sampling (Ours) 29.68 0.946 0.807 0.392 47% ✓ GT Input Ours NeRF + Neural Gaffer (Jin et al. 2024) Liu et al. 2022 Du et al. 2023 Figure 6: Qualitative comparison on environment map based relighting. Other methods tend to produce flickering artifacts (notice the change in color in the first two rows for Liu et al. (2022); Du et al. (2023)). The usage of NeRF will make the lighting changes to be baked into the view dependent effects. Our method achieves the best overall result. 4.3 MULTI-VIEW RELIGHTING Environment map conditioned relighting. When the variance of the 2D diffusion results is low, meaning the sampling distribution is narrow, radiance fields can effectively regularize inconsistencies. However, this requires obtaining a consistent geometry beforehand. As an alternative, we demonstrate that a multi-view diffusion model can regularize inconsistencies in 2D relighting through coupled sampling. Figure 6 presents two relighting examples to illustrate this. We observe that prior methods for combining diffusion models (Liu et al., 2022; Du et al., 2023) can introduce flickering artifacts, as evidenced by abrupt color changes in the top two rows. In contrast, NeRFbased approaches may incorrectly attribute lighting variance to view-dependent effects, as illustrated in the bottom two rows of the backpack example. To quantitatively compare these methods, we use the 3D objects from Neural-Gaffer (Jin et al., 2024), and add both a diffuse and a glossy object, resulting in a total of seven objects with five relightings each. We compute per-image reconstruction metrics and geometric consistency using MEt3r, as shown in Table 2. Although these metrics do not capture subtle lighting flicker, our method achieves competitive results in both reconstruction and consistency. Importantly, we also report metrics for relighting each image individually, which serves as a coarse upper bound, and observe no degradation in performance. Text conditioned relighting. To show more drastic relighting outputs, we use IC-Light (Zhang et al., 2025a), which operates by relighting the object and adding a suitable background. While Stable-Virtual-Camera (Zhou et al., 2025) may have a weak prior for regularizing backgrounds due 8 Preprint - Under Review Sample inputs Sample inputs "snowy lighting" "snowy lighting" "glowing neon" "glowing neon" "studio lighting" "beach sunset" Relighting outputs Relighting outputs Figure 7: Text based relighting. We combine IC-Light (Zhang et al., 2025a), which enables textbased relighting with stable virtual camera to obtain multi-view results. Stable Diffusion 2.1 backbone oupled samples (ours) ultiview samples prompt: "steampunk scuba diver" prompt: "a jade statue of a necromancer" D samples Stable Diffusion XL backbone Figure 8: Coupling in different multi-view models. We implement coupling on T2I and T2MV models with two different backbones. We couple SD2.1 with MVDream (Shi et al., 2024), and SDXL with MVAdapter (Huang et al., 2024), which operates in SDXL latent space. In both cases, the coupled multiview samples show an increase in realism and a decrease in "objaverse" appearance. to its training data, we find that it still ensures the object is consistently lit across frames. In Fig. 7 we show diverse multi-view relighting results using our method. 5 ANALYSIS EXPERIMENTS In this section, we demonstrate that the benefits of coupled sampling extend to various models, and analyze how varying the guidance strength in our approach influence the results. Backbone variations. In Section 4, we presented multi-view editing results using Stable Virtual Camera (Zhou et al., 2025). Here, we further examine the impact of coupling on text-to-multi-view models, specifically MVDream (Shi et al., 2024), which extends Stable Diffusion 1.5 to produce four consistent views, and MV-Adapter (Huang et al., 2024), which leverages the more advanced SDXL backbone and operates in the SDXL latent space. For coupling, we use SD1.5 and SDXL as the respective text-to-image models. As shown in Figure 8, text-to-multi-view models often generate objects with a CGI-like appearance, likely due to their training on datasets such as Objaverse 9 Preprint - Under Review No Coupling Enchanted forests with mushrooms Japanese samurai Astronaut on Mars With Coupling Golden retriever A digital painting A cat playing with a yarn ball Figure 9: Image space coupling. Using Flux, we perform coupled sampling on different prompts. We show that the coupled samples are spatially aligned while being faithful to the prompt. (Deitke et al., 2023). Introducing our coupling approach encourages the multi-view samples to better resemble real images, as modeled by the 2D diffusion models. Coupling Text-to-image flow models. Coupled diffusion sampling can be applied to both 2D and multi-view settings. To illustrate the effects of coupled sampling, we implement our method using the text-to-image model Flux (Labs, 2024). Although Flux is a flow-based model (Lipman et al., 2023; Liu et al., 2023), we show that our coupling approach remains effective. We test coupled sampling by generating two samples from the same model, each conditioned on a different prompt. As shown in Fig. 9, without coupling, the outputs are typically very distinct. With the coupled sampling, the outputs become spatially aligned while still reflecting their respective prompts. Guidance strength analysis. We quantitatively evaluate the effects of guidance strength λ on spatial editing performance. When λ is very small, the model output resembles image-to-MV sampling, resulting in low reconstruction performance. As λ increases, reconstruction performance improves. However, with further increases in λ, consistency across frames degrades as the outputs become more similar to 2D model samples and eventually collapse. 6 DISCUSSION AND CONCLUSION Figure 10: Guidance strength analysis. As we increase the guidance strength, the reconstruction improves but the consistency drops. We introduce a simple and effective approach for coupling diffusion models, enabling 2D diffusion models to generate consistent multi-view edits when used with multi-view diffusion models. Our method is efficient, versatile, and achieves high-quality results. By guiding the diffusion sampling process, our approach produces outputs that retain the strengths of the underlying models, while also inheriting their limitations. We believe this coupling strategy has potential applications beyond multi-view editing. In the future, our paradigm could extend the capabilities of image-editing models to video editing by integrating with video diffusion models, without incurring additional computational overhead. Acknowledgments. We would like to thank Gordon Wetzstein, Jon Barron, Ben Poole, Michael Gharbi, and Songwei Ge for the fruitful discussions. 10 Preprint - Under Review REFERENCES Hadi Alzayer, Philipp Henzler, Jonathan T. Barron, Jia-Bin Huang, Pratul P. Srinivasan, and Dor Verbin. Generative multiview relighting for 3d reconstruction under extreme illumination variation. In CVPR, June 2025a. Hadi Alzayer, Zhihao Xia, Xuaner (Cecilia) Zhang, Eli Shechtman, Jia-Bin Huang, and Michael Gharbi. Magic fixup: Streamlining photo editing by watching dynamic videos. ACM Trans. Graph., 2025b. Mohammad Asim, Christopher Wewer, Thomas Wimmer, Bernt Schiele, and Jan Eric Lenssen. Met3r: Measuring multi-view consistency in generated images. In CVPR, 2025. Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Roni Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. In ICLR, 2024. Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. Multidiffusion: fusing diffusion paths for controlled image generation. In Int. Conf. Mach. Learn., 2023. Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In ICCV, 2023. Yen-Chi Cheng, Krishna Kumar Singh, Jae Shin Yoon, Alexander Schwing, Liangyan Gui, Matheus Gadelha, Paul Guerrero, and Nanxuan Zhao. 3D-Fixup: Advancing Photo Editing with 3D Priors. In Proceedings of the SIGGRAPH Conference Papers, 2025. Hyungjin Chung, Byeongsu Sim, Dohoon Ryu, and Jong Chul Ye. Improving diffusion models for inverse problems using manifold constraints. In NeurIPS, 2022. Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In ICLR, 2023. Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3d objects. In CVPR, 2023. Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In NeurIPS, 2021. Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Grathwohl. Reduce, reuse, recycle: compositional generation with energy-based diffusion models and mcmc. In Int. Conf. Mach. Learn., 2023. Ruiqi Gao, Aleksander Holynski, Philipp Henzler, Arthur Brussee, Ricardo Martin-Brualla, Pratul P. Srinivasan, Jonathan T. Barron, and Ben Poole. Cat3d: Create anything in 3d with multi-view diffusion models. In NeurIPS, 2024. Ayaan Haque, Matthew Tancik, Alexei A Efros, Aleksander Holynski, and Angjoo Kanazawa. Instruct-nerf2nerf: Editing 3d scenes with instructions. In ICCV, 2023. Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation, 14(8):1771-1800, 2002. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, volume 33, pp. 6840-6851, 2020. Zehuan Huang, Yuanchen Guo, Haoran Wang, Ran Yi, Lizhuang Ma, Yan-Pei Cao, and Lu Sheng. Mv-adapter: Multi-view consistent image generation made easy. In ICCV, 2024. Haian Jin, Yuan Li, Fujun Luan, Yuanbo Xiangli, Sai Bi, Kai Zhang, Zexiang Xu, Jin Sun, and Noah Snavely. Neural gaffer: Relighting any object via diffusion. In NeurIPS, 2024. 11 Preprint - Under Review Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming Song. Denoising diffusion restoration models. In NeurIPS, 2022. Bernhard Kerbl, Georgios Kopanas, Thomas Leimk ̈uhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. Jaihoon Kim, Juil Koo, Kyeongmin Yeo, and Minhyuk Sung. Synctweedies: A general generative framework based on synchronized diffusions. In NeurIPS, 2024. Vladimir Kulikov, Matan Kleiner, Inbar Huberman-Spiegelglas, and Tomer Michaeli. Flowedit: Inversion-free text-based editing using pre-trained flow models. In ICCV, 2025. Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024. Xiner Li, Yulai Zhao, Chenyu Wang, Gabriele Scalia, Gokcen Eraslan, Surag Nair, Tommaso Biancalani, Shuiwang Ji, Aviv Regev, Sergey Levine, et al. Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding. arXiv preprint , 2024. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In ICLR, 2023. Yaron Lipman, Marton Havasi, Peter Holderrieth, Neta Shaul, Matt Le, Brian Karrer, Ricky TQ Chen, David Lopez-Paz, Heli Ben-Hamu, and Itai Gat. Flow matching guide and code. arXiv preprint , 2024. Yehonathan Litman, Fernando De la Torre, and Shubham Tulsiani. Lightswitch: Multi-view relighting with material-guided diffusion. In ICCV, 2025. Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum. Compositional visual generation with composable diffusion models. In ECCV, 2022. Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In ICLR, 2023. Nanye Ma, Shangyuan Tong, Haolin Jia, Hexiang Hu, Yu-Chuan Su, Mingda Zhang, Xuan Yang, Yandong Li, Tommi Jaakkola, Xuhui Jia, et al. Inference-time scaling for diffusion models beyond scaling denoising steps. In CVPR, 2025. Nadav Magar, Amir Hertz, Eric Tabellion, Yael Pritch, Alex Rav-Acha, Ariel Shamir, and Yedid Hoshen. Lightlab: Controlling light sources in images with diffusion models. In SIGGRAPH, 2025. David McAllister, Songwei Ge, Jia-Bin Huang, David W. Jacobs, Alexei A. Efros, Aleksander Holynski, and Angjoo Kanazawa. Rethinking score distillation as a bridge between image distributions. In NeurIPS, 2024. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In ICLR, 2022. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020. Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In CVPR, 2023. Jiteng Mu, Micha ̈el Gharbi, Richard Zhang, Eli Shechtman, Nuno Vasconcelos, Xiaolong Wang, and Taesung Park. Editable image elements for controllable synthesis. In ECCV, 2024. Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. In ICLR, 2023. Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. Texture: Textguided texturing of 3d shapes. In ACM SIGGRAPH Conference Proceedings, 2023. 12 Preprint - Under Review Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj ̈orn Ommer. Highresolution image synthesis with latent diffusion models. In CVPR, 2022. Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, and Xiao Yang. MVDream: Multi-view diffusion for 3d generation. In ICLR, 2024. Tencent Hunyuan3D Team. Hunyuan3d 2.0: Scaling diffusion models for high resolution textured 3d assets generation, 2025. Alex Trevithick, Roni Paiss, Philipp Henzler, Dor Verbin, Rundi Wu, Hadi Alzayer, Ruiqi Gao, Ben Poole, Jonathan T. Barron, Aleksander Holynski, Ravi Ramamoorthi, and Pratul P. Srinivasan. Simvs: Simulating world inconsistencies for robust view synthesis. In CVPR, 2025. Vaibhav Vavilala, Seemandhar Jain, Rahul Vasanth, D. A. Forsyth, and Anand Bhattad. Generative blocks world: Moving things around in pictures, 2025. URL https://arxiv.org/abs/ 2506.20703. Yinhuai Wang, Jiwen Yu, and Jian Zhang. Zero-shot image restoration using denoising diffusion null-space model. In ICLR, 2023a. Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. In NeurIPS, 2023b. Ethan Weber, Aleksander Holynski, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, and Angjoo Kanazawa. Nerfiller: Completing scenes via generative 3d inpainting. In CVPR, 2024. Luhuan Wu, Brian Trippe, Christian Naesseth, David Blei, and John P Cunningham. Practical and asymptotically exact conditional sampling in diffusion models. In NeurIPS, 2023. Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, and Aleksander Ho?y?ski. Reconfusion: 3d reconstruction with diffusion priors. In CVPR, 2024a. Ziyi Wu, Yulia Rubanova, Rishabh Kabra, Drew A. Hudson, Igor Gilitschenski, Yusuf Aytar, Sjoerd van Steenkiste, Kelsey R Allen, and Thomas Kipf. Neural assets: 3d-aware multi-object scene synthesis with image diffusion models. In NeurIPS, 2024b. Runjie Yan, Yinbo Chen, and Xiaolong Wang. Consistent flow distillation for text-to-3d generation. In ICLR, 2025. Fan Zhang, Shulin Tian, Ziqi Huang, Yu Qiao, and Ziwei Liu. Evaluation agent: Efficient and promptable evaluation framework for visual generative models. arXiv preprint , 2024. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Scaling in-the-wild training for diffusion-based illumination harmonization and editing by imposing consistent light transport. In ICLR, 2025a. Yunzhi Zhang, Carson Murtuza-Lanier, Zizhang Li, Yilun Du, and Jiajun Wu. Product of experts for visual generation. arXiv preprint , 2025b. Jensen (Jinghao) Zhou, Hang Gao, Vikram Voleti, Aaryaman Vasishta, Chun-Han Yao, Mark Boss, Philip Torr, Christian Rupprecht, and Varun Jampani. Stable virtual camera: Generative view synthesis with diffusion models. In ICCV, 2025. 13 Preprint - Under Review A ADDITIONAL DISCUSSION OF LIMITATIONS While the proposed method offers a simple and efficient framework for multi-view consistent image editing, several notable limitations remain. First, running both the 2D editing model and the multiview diffusion model in parallel increases memory and computational requirements. This limitation could be addressed in future work by exploring adaptive guidance strength or applying guidance during only a subset of sampling steps. Second, the edited outputs are not perfectly 3D consistent compared to test-time optimization-based methods. This residual inconsistency can be further reduced by robustly fitting a NeRF or 3D Gaussian Splatting model to the generated views, as shown in prior work Haque et al. (2023); Weber et al. (2024), while still maintaining much faster inference than optimization-based approaches. B IMPLEMENTATION DETAILS As our primary multi-view generation base model, we use Stable-Virtual-Camera (SVC) which is trained to process 21 frames at once, and use one or several consistent images for novel view synthesis. As we would like to edit a collection of views, we do not have access to more than one consistent edited photos, since we can only edit one image at a time with the 2D editing model. In our experiments, we edit a reference image, and then use it as the conditioning view. Note that this conditioning view has great influence on the outcome, as it dictates the distribution of acceptable 3D scenes that SVC would synthesize. We transform stable-virtual-camera into DDPM by converting the EDM based sampler into a DDPM scheduled by computing converting the noise levels into the appropriate alphas. Afterwards, since SVC was trained with a shifted noise schedule compared to SD2.1 image models, we re-align SVC's schedule with the 2D model's schedule for the coupled sampling to be effective. We conduct our experiments using NVIDIA A6000 GPUs. As our approach only requires a feed forward pass, the memory requirement is equivalent to the combined memory of the two models used. A better memory utilization can be further achieved by loading and off-loading the models from the GPU, as we can run them sequentially and then compute the coupling term. We use 50 denoising steps for spatial editing and stylization, and 100 denoising steps with Neural Gaffer relighting. The runtime of the sampling process is 130 seconds using our GPU resources for generating the full 21 frames sequence. In the experiments with Neural-Gaffer, one challenge is that Neural-Gaffer is trained on 256x256 images. On the other hand, SVC was trained on 576x576 images. We found that SVC performs very poorly on images of that size, and neural-gaffer does not generalize to 512x512 images or larger. After experimenting with the models, we found that at resolution size of 384x384, both models perform reasonably well and adopt that for the neural-gaffer experiments. C COUPLED DIFFUSION SAMPLING WITH FLOW MODELS Note that Flux (Labs, 2024), the text-to-image model used in Sec. 5 is a flow model. To sample from Flux using our proposed sampling method, first we transform the velocity vθ(xt) to the score function sθ(xt), as it can be linearly transformed into score functions via sθ(xt) = --tvθ(xt)+xt 1-t . Then transform the inference schedule to be DDPM via time reparameterization (Lipman et al., 2024) by computing the appropriate alpha values that match the noise levels associated with each time step. C.1 EFFECTS OF GUIDANCE STRENGTH As an additional illustration, in Fig. 11 we show how the samples change as we increase the coupling strength, while using the same initial random noise and randomness seed. 14 Preprint - Under Review Coupling Strength 0.0 rompt: Japanese Samurai Prompt: Astronaut on Mars Coupling Strength 0.0025 Coupling Strength 0.005 Coupling Strength 0.01 Figure 11: Effects of coupling strength. We illustrate the effects of coupling strength on the spatial alignment between samples as we vary the coupling strength while keeping the initial noise and random seed. Table 4: User study results on text-based relighting. Method User pref. ↑ Liu et al. (2022) 24% Du et al. (2023) 26.5% Coupled Sampling (Ours) 49.5% D USER STUDY ON IC-LIGHT For completion, we include the results of our user study on IC-Light in Tab. 4. We show that our outputs are preferred by users over either of prior work on combinign diffusion models, as they tend to produce high flickering artifacts in the relit outputs. E EFFECTS OF STOCHASTICITY One observation one would make is that our coupling term resembles linearly combining the intermediate samples of the two models, so one may wonder why we do not simply get images that are a linear average of the two outputs. Indeed, when we use a deterministic sampler, like Euler Discrete Sampler that's commonly used, this is the outcome that we encounter as we show in Fig. 12. However, when using a stochastic sampler like DDPM where noise is injected at every timestep, the model needs to correct for the added noise. When we include our coupling term in the stochastic step, the model can naturally correct or reject parts of the guidance that steers it away from its training distribution. This is also the reason we make our coupling term to be correlated with the noise level, by scaling it with √1 -αt, since at step t, the model has the ability to correct for noise at that level, but steering the sample by a larger magnitude risks pulling the intermediate latents outside of the training distribution. Additionally, as intuitively understood about diffusion sampling, at the later time steps the structure of the outputs is already determined, so shifting the intermediate latents in a large direction can disrupt the sampling process. 15 Preprint - Under Review DDPM sampler (Stochastic) Husky playing with a ball usky sleeping No Coupling With Coupling No Coupling With Coupling Euler Discrete Sampler (Deterministic) Figure 12: Sampler comparison. When using a stochastic sampler, the coupling can lead to natural guidance pulling the outputs towards each other. On the other hand, a deterministic sampler would simply output the average of both samples, as ODE based sampling does cannot recover from noisy guidance. F ADDITIONAL T2MV RESULTS In Fig. 13 and Fig. 14, we highlight additional results from coupling text-to-multi-view models along with text-to-image models. G APPLICATIONS WITH MV-ADAPTER One of the limitations of MV-Adapter is that it can only generate fixed set of camera views, making its utility for editing limited. Nonetheless, we show that we can still use it by editing the outputs it produces by performing coupling with single-image editing models. In Fig. 15, we show an example of using MV-Adapter for stylization, and relighting. H OUTPUTS OF INSTRUCTNERF2NERF When running InstructNeRF2NeRF on our input sequences used for stylization with the same number of frames as our method and other baselines (21 frames), we find that the radiance field completely collapses. This is likely due to NeRF's inability to gradually handle inconsistency with less dense camera coverage. 16 Preprint - Under Review Figure 13: Additional MVDream T2MV coupling results. Here we show additional results on the output of Text-to-Multiview MVDream when coupled with Text-to-Image SD2.1. 17 Preprint - Under Review Figure 14: Additional MV-Adapter T2MV coupling results. Here we show additional results on the output of Text-to-Multiview MV-Adapter when coupled with Text-to-Image SDXL. 18 Preprint - Under Review Figure 15: Multiview editing with MV-Adapter. Here we show editing results with MV-Adapter to achieve stylization by combining it with Control-Net (Zhang et al., 2023) and relighting using IC-Light (Zhang et al., 2025a). Sample input frame + rompt: "make it a golden lamborghini" InstructNeRF2NeRF renders Figure 16: InstructNeRF2NeRF outputs. When running InstructNeRF2NeRF (Haque et al., 2023) on our input views, we find that the editing training loop with InstructPix2Pix completely collapses. 19
2510.14978
Preprint LEARNING AN IMAGE EDITING MODEL WITHOUT IMAGE EDITING PAIRS Nupur Kumari1 Sheng-Yu Wang1 Nanxuan Zhao2 Yotam Nitzan2 Yuheng Li2 Krishna Kumar Singh2 Richard Zhang2 Eli Shechtman2 Jun-Yan Zhu1 Xun Huang2 1Carnegie Mellon University 2Adobe ABSTRACT Recent image editing models have achieved impressive results while following natural language editing instructions, but they rely on supervised fine-tuning with large datasets of input-target pairs. This is a critical bottleneck, as such naturally occurring pairs are hard to curate at scale. Current workarounds use synthetic training pairs that leverage the zero-shot capabilities of existing models. However, this can propagate and magnify the artifacts of the pretrained model into the final trained model. In this work, we present a new training paradigm that eliminates the need for paired data entirely. Our approach directly optimizes a few-step diffusion model by unrolling it during training and leveraging feedback from vision-language models (VLMs). For each input and editing instruction, the VLM evaluates if an edit follows the instruction and preserves unchanged content, providing direct gradients for end-to-end optimization. To ensure visual fidelity, we incorporate distribution matching loss (DMD), which constrains generated images to remain within the image manifold learned by pretrained models. We evaluate our method on standard benchmarks and include an extensive ablation study. Without any paired data, our method performs on par with various image editing diffusion models trained on extensive supervised paired data, under the few-step setting. Given the same VLM as the reward model, we also outperform RL-based techniques like Flow-GRPO. 1 INTRODUCTION Large-scale text-to-image models have achieved remarkable success, generating images of high fidelity that closely align with textual descriptions Ramesh et al. (2022); Peebles & Xie (2023); Kang et al. (2023). Despite these advances, text-only conditioning offers limited user control and falls short for many downstream applications (Meng et al., 2022; Gal et al., 2023). In practice, users often wish to start with an existing image to perform tasks like adjusting local attributes, changing the style, or placing an object in a new context. These image editing operations require precise, image-guided control that text-only prompts cannot provide. While collecting large-scale text–image pairs is relatively straightforward (Schuhmann et al., 2021), constructing supervised datasets for editing tasks is far more challenging. As one requires a pair of images, the input and its edited counterpart, along with the text instruction, and such data is rarely available online. Early methods addressed this by synthetically generating editing pairs (Brooks et al., 2023) from a pretrained model, using zero-shot editing techniques (Hertz et al., 2023). However, synthetic datasets can quickly become outdated with new and improved base models, and they risk amplifying and propagating artifacts of the synthetic editing process. More recent approaches extract frames from videos and annotate their differences (Chen et al., 2025; Song et al., 2023b; Krojer et al., 2024). Although promising, the applicability of this strategy is constrained by the diversity of transformations present in natural video sequences, where obtaining pixel-aligned before–and–after edited pairs is nearly impossible. A final alternative is to manually create training pairs (Winter et al., 2024; Magar et al., 2025), but this can be quite laborious and does not scale as easily. 1 arXiv:2510.14978v1 [cs.CV] 16 Oct 2025 Preprint In this work, we explore the possibility of training an image editing model without any training pairs. Our key idea is to leverage supervision from Vision Language Models (VLMs) (Liu et al., 2023a), relying on their general image-understanding capabilities to check whether the generated images satisfy the editing instructions. Prior works have studied the use of specialized models or general- purpose VLMs in improving generative models along dimensions such as text-alignment and aesthetic quality, primarily using reinforcement learning (Black et al., 2024; Liu et al., 2025a). In contrast, our method is the first to explore using gradient feedback from VLMs for general instruction-following, and we distill this feedback into a lightweight generative model that can generalize to arbitrary images and edit instructions. Our final method combines the VLM-feedback with a distribution matching loss to ensure that generated outputs remain in the realistic image domain while following the edit instructions. In summary, our contributions are threefold: 1. We propose NP-Edit (No-Pair Edit), a framework for training image editing models using gradient feedback from a Vision–Language Model (VLM), requiring no paired supervision. 2. For efficient training and effective VLM feedback, our formulation combines it with dis- tribution matching loss to learn a few-step image editing model. The final model remains competitive with existing baselines trained on supervised data. 3. We conduct a comprehensive empirical study analyzing the impact of (i) different VLM backbones, (ii) dataset scale and diversity, and (iii) VLM loss formulation. Our findings show that performance improves directly with more powerful VLMs and larger datasets, demonstrating its strong potential and scalability. 2 RELATED WORKS Diffusion-based image editing. Development of large-scale text-to-image models has enabled a wide range of downstream applications, including local image editing (Hertz et al., 2023; Meng et al., 2022), stylization (Sohn et al., 2023; Hertz et al., 2024; Jones et al., 2024), personalization and customization (Gal et al., 2023; Ruiz et al., 2023). These can broadly be viewed as different forms of image-editing capabilities. Early approaches often relied on zero-shot inference-time methods (Hertz et al., 2023; Parmar et al., 2023; Cao et al., 2023; Avrahami et al., 2023; Kim et al., 2023) or flexible but slow optimization-based techniques (Gal et al., 2023; Ruiz et al., 2023; Kumari et al., 2023). To improve efficiency and robustness, subsequent works introduced training-based ap- proaches (Brooks et al., 2023; Xiao et al., 2025; Chen et al., 2023; Fu et al., 2024; Sun et al., 2024). However, obtaining large datasets of image pairs remains challenging: synthetic curation (Brooks et al., 2023; Zhang et al., 2023; Zhao et al., 2024; Hui et al., 2024; Yang et al., 2024b; Tan et al., 2025; Cai et al., 2025; Kumari et al., 2025) risks becoming outdated as generative models improve, while human annotation (Winter et al., 2024; Magar et al., 2025; Ge et al., 2024; Sushko et al., 2025) is costly and labor-intensive. Recent efforts have explored constructing paired data from videos (Chen et al., 2025; Song et al., 2023b) or simulation environments (Yu et al., 2025), although these remain limited in either annotation diversity or visual realism. We also target similar image-editing capabil- ities but remove the need for paired data (Zhu et al., 2017), by using differentiable feedback from vision–language models instead of ground-truth edits. Post-training for image generation. Post-training methods typically align image generators with human preferences using either Direct Preference Optimization (DPO) (Wallace et al., 2024; Yang et al., 2024a) or Reinforcement Learning (RL) (Black et al., 2024). While early RL-based works use feedback from a simple scalar reward model (Kirstain et al., 2023; Xu et al., 2023), the paradigm has recently been enhanced by employing sophisticated Vision-Language Models (VLMs) as “judges” to provide more generic and accurate reward signals (Ku et al., 2024). Although post-training has been successfully applied to text-to-image generation, its use for image editing models has been less explored. Concurrently to our work, EARL (Ahmadi et al., 2025) begins to address this by using a VLM-as-a-judge framework to post-train an image-editing model with RL. However, RL-based approaches often depend heavily on good initialization, typically requiring a Supervised Fine-Tuning (SFT) phase with paired editing data. In contrast, our method leverages differentiable feedback from the VLM model, thereby obviating the need for an initial SFT stage and enables the learning of image editing models without the use of synthetically generated data. 2 Preprint Related to our work, Luo et al. (2025) recently introduced a method that incorporates gradient feedback from VLMs to satisfy various criteria, including the horizon line, style, and layout, in generated images. However, their framework operates in a per-example optimization setting, requiring costly LoRA fine-tuning (Hu et al., 2022) for each criterion and prompt pair, and also does not consider image editing tasks. Few-step diffusion models. Standard diffusion (or flow-matching) models require many sampling steps to generate high-quality images. Many prior works reduce the number of denoising steps for faster sampling by predicting larger denoising steps, including consistency models (Kim et al., 2024; Geng et al., 2024; Song et al., 2023a; Yang et al., 2024c; Song & Dhariwal, 2024; Lu & Song, 2025; Heek et al., 2024), shortcut models (Frans et al., 2024), meanflow (Geng et al., 2025), and inductive moment matching (Zhou et al., 2025). Another line distills a pre-trained multi-step teacher into a few-step student by matching ODE trajectories (Song et al., 2023a; Salimans & Ho, 2022; Geng et al., 2023), using adversarial loss (Sauer et al., 2024b; Kang et al., 2024; Yin et al., 2024a; Sauer et al., 2024a; Xu et al., 2024), or applying score distillation (Luo et al., 2023; Yin et al., 2024b;a; Zhou et al., 2024). In our framework, we adopt DMD (Yin et al., 2024b) as a distribution matching objective. This ensures that our few-step editing model’s output remains in the real-image manifold defined by the pre-trained text-to-image teacher, while using VLM-feedback to ensure the model follows the editing instructions. 3 BACKGROUND 3.1 DIFFUSION MODELS Diffusion or flow-based models are a class of generating models that learn the data distribution by denoising samples corrupted by different levels of Gaussian Noise (Ho et al., 2020; Song et al., 2021; Lipman et al., 2023). Given a real sample x, a forward diffusion process creates noisy samples xt = αtx+σtϵ over time t ∈(0, 1], where ϵ ∼N(0, I), and αt, σt define a noise schedule such that xT ∼N(0, I) and x0 = x. The denoising model is parameterized to reverse the forward diffusion process by either predicting the noise ϵ (Ho et al., 2020) added to the sample or velocity v towards the clean sample (Liu et al., 2023b; Salimans & Ho, 2022). In our work, we follow the flow-based formulation, with the forward denoising process being a linear interpolation, i.e., αt = 1 −t and σt = t. The training objective for a flow-based model, with parameters θ, can be simplified to the following: \begin {a ligned} \m a thbb { E} _{\x ^t,t,\mathbf {c}, \epsilon \sim \mathcal {N} (\bm {0}, \bm {I})} w_t||\mathbf {v} - \mathbf {v}_{\theta } (\x ^t, t, \mathbf {c}) ||, \end {aligned} (1) where v = ϵ −x and wt is a time dependent weighting factor. The denoising network can be conditioned on other inputs c, such as a text prompt, a reference image, or both, as in our case. 3.2 VISION LANGUAGE MODELS (VLMS) Vision Language Models (VLMs) trained from multimodal image-text data have shown exemplary visual understanding and reasoning capabilities and can serve as a general-purpose visual model. A common strategy for training such large-scale VLMs is via visual instruction tuning (Liu et al., 2023a), which aligns a pre-trained Vision Encoder output with the input word embedding space of a pretrained Large Language Model (LLM). More specifically, the image x is encoded into a set of tokens using the vision encoder, Xv = g(x). The input question regarding the image and its ground truth answer are tokenized in the LLM input embedding space as Xq and Xa, respectively. A projector module, fϕ, projects the vision-encoded tokens into the LLM word embedding space and is trained via standard autoregressive loss to maximize the probability of predicting the correct answer: \begin {a l i g ned } p(\mathbf {X} _a | \mathbf {X}_v, \mathbf {X}_q) = \prod _{i=1}^{L} p\Big (a_i | f_{\phi }(\mathbf {X}_v), \mathbf {X}_{q}, \mathbf {X}_{a<i} \Big ), \end {aligned} (2) where Xa = [a1 · · · aL] is of token length L, and Xa<i denotes all the tokens before the current prediction token index. The final loss simplifies to a cross-entropy over the total vocabulary length. In our experiments, we use LLaVa-OneVision-7B (Li et al., 2024) as the VLM that uses SigLIP (Zhai et al., 2023) vision encoder and Qwen-2 LLM (Qwen-Team, 2024), and is among the state-of-the-art VLMs of this scale. 3 Preprint 4 METHOD Given a pretrained text-to-image diffusion model Ginit and a dataset X = {(yi, ci, cy i , cx i )}N i=1 of reference image y, corresponding edit instruction c, and captions cy and cx that describe the reference and edited image respectively, we fine-tune Ginit into a few-step image editing model Gθ without requiring ground truth edited image x according to the edit instruction. Our approach, No-Pair (NP)-Edit, introduces a VLM-based loss to evaluate edit success and combines it with a distribution matching loss to ensure outputs remain within the natural image domain. Below, we first detail the construction of the dataset, then our training objective, and finally other implementation details. 4.1 EDIT INSTRUCTION DATASET Each dataset sample consists of a real image y as reference and an associated edit instruction, c. Following prior works (Liu et al., 2025b; Ye et al., 2025), we focus on several categories of local editing operations, such as Add, Replace, Remove, Adjust shape, Action, Stylization, Text editing, Color, Material, and Background change, as well as more free-form editing tasks such as Customization or Personalization (Gal et al., 2023; Ruiz et al., 2023; Kumari et al., 2023). Candidate instructions for each type are generated using Qwen2.5-32B VLM (Qwen-Team, 2025). Given an image-instruction pair, we further query the VLM to assess its validity and to suggest the caption, cx, for the edited image. For the customization task, we restrict reference images to those showing a prominent central object, either filtered from a real image corpus or generated via the pretrained model (Tan et al., 2025; Kumari et al., 2025), and prompt the VLM to generate a caption that places the object in a novel background or context. In total, for local and free-form editing instructions, our dataset consists of ∼3M and ∼600K reference images, respectively. The input prompt to the Qwen2.5-32B VLM for each setup is shown in Appendix D. 4.2 TRAINING OBJECTIVE Training a diffusion or flow-based model (Ho et al., 2020; Liu et al., 2023b) for image editing without pairs presents a unique challenge. Standard diffusion training takes as input noised versions of a ground-truth image. In our setting, no such ground-truth edited image exists; thus, we cannot construct these intermediate noisy inputs. On the other hand, directly mapping noise to the edited image in a single step is naturally challenging and yields poor fidelity (see Appendix B). To address this, during training, we propose to unroll the backward diffusion trajectory starting from noise using a two-step sampling procedure (Song et al., 2023a). Specifically, given the reference image–instruction pair (y, c), the editing model Gθ first predicts a provisional clean image ˆx0 θ from noise ϵ. Then, a second step refines this estimate by feeding an interpolated noisy input back into the model: \ b e g i n {a ligne d} \ hat { \ x } _{ \th e t a }^ 0 & = \ e psi l o n - \hat {\ v }_{\t he ta } , & & \ t e xt {wher e } \ h a t {\ v } _ { \th eta } \equiv G_{\theta }(\epsilon , t=1, \c , \y ), \quad \epsilon \sim \mathcal {N}(\bm {0}, \bm {I}), \\ \x _{\theta }^0 &= \hat {\x }_{\theta }^t - t\v _{\theta }, && \text {where } \v _{\theta } \equiv G_{\theta }(\hat {\x }_{\theta }^t, t, \c , \y ), \quad \hat {\x }_{\theta }^t=(1-t) \hat {\x }_{\theta }^0 + t\epsilon ; \epsilon \sim \mathcal {N}(\bm {0}, \bm {I}), t \sim (0,1). \end {aligned} (3) With the second step, the model is now trained on noisy intermediate states at timesteps determined by t, while being more efficient than a full backward unroll. In our method, we focus on few-step generation—specifically four steps—and restrict t ∈[0.25, 0.5, 0.75] in the second step. Few-step generator provides a better estimate of the denoised image, x0 θ, at intermediate steps, which in turn enables effective VLM-based feedback. VLMs tend to give unreliable judgments when inputs are noisy or blurry (see Appendix B). This also enables faster inference and lowers training costs. VLM-based editing loss. To evaluate whether an edit is successfully applied in x0 θ, we define a set of template questions with corresponding ground truth answers, DQA = {(Xqj, Xaj)}j, tailored to each edit category. The VLM is instructed to answer with a binary yes and no answer, i.e., Xaj ∈{Yes, No}, and X¯aj denotes the opposite response. The loss is then a binary cross-entropy over the predicted logit difference between the tokens corresponding to the correct and opposite responses, respectively. \b e g i n {a ligned } \ma thcal {L}_{\ te x t {V LM} } &= - \sum _j \log p(a_j) , \text { where } p(a_j) = \sigma (\ell ^{(j)}_{a_j} - \ell ^{(j)}_{\bar {a}_j} ) \end {aligned}\lbleq {vlm_loss} (4) 4 Preprint Input Output “Change the background to a bright, sunny meadow” Edit Instruction Few-step Generator 𝐺 DMD Loss Edit-verification question Identity-preservation question Editing loss You are a professional digital artist and an expert image editor. …evaluate if the editing instruction has been executed... Editing instruction: {edit instruction} Answer with a Yes or No… …You are a professional digital artist and an expert image editor. You will be provided with two images. Answer with a Yes or No if the second image is exactly the same as the first image. IGNORE the changes in the second image because of the edit: {edit instruction}. … 𝑉𝐿𝑀 𝐿!"# 𝑃”yes” , 𝑃”no” 𝑃”yes” , 𝑃”no” 𝐿!"# Figure 1: Method. We fine-tune a pretrained text-to-image model into a few-step image-editing model using differentiable VLM-feedback regarding edit success. In addition, we use distribution matching loss (DMD (Yin et al., 2024a)) to ensure output images remain in the natural image manifold. where ℓ(j) aj is the logit corresponding to the token Xaj, σ is the sigmoid function, and p(aj) is the probability of correct answer, while restricting normalization to only the Yes and No tokens, which we observe to be more effective during training (Zhang et al., 2024). Computing this loss is relatively fast, as it only requires a single forward call to the VLM per question, as opposed to autoregressive token prediction. For each edit instruction, we use two complementary questions to compute the editing loss: (1) Edit-verification question to assess whether the intended edit is applied, and (2) Identity-preservation question to ensure the image is not over-edited and is consistent with the reference image. Specifically, for the local image-editing instructions, we verify edit success with the following question “The objective is to evaluate if the editing instruction has been executed in the second image. Editing instruction: {edit instruction}. Answer with a Yes or No.” except removal edit-type, where we directly evaluate if the intended object is removed by asking “Answer with a Yes or No if the image has {object name}”. For the identity-preservation question, we ask the following: “Answer with a Yes or No if the second image is exactly the same as the first image. IGNORE the changes in the second image because of the edit: {edit instruction}”. We provide the list of all questions along with their system and user prompts for all editing types, including free-form editing in Appendix E. Distribution matching with text-to-image teacher model. While VLM feedback evaluates the efficacy of instruction following, it does not enforce the generated outputs to remain in the real image domain. To ensure this and keep the output distribution of the generator aligned with the pre-trained model, we apply Distribution Matching Distillation (DMD) (Yin et al., 2024b;a) between the fine-tuned model, Gθ, and the pre-trained text-to-image (teacher) model, Ginit. DMD minimizes the Kullback–Leibler (KL) divergence between the real image distribution, as estimated by the teacher model, and the output distribution of the fine-tuned model. The gradient of this KL-divergence loss with respect to the generator parameters can be simplified to: \be g in {aligned} \nabla _ \ t heta D_{K L} & = \ m athbb { E} _{ \sub st ac k { \epsilon \sim \mathcal {N}(\bm {0}, \bm {I}), t \in (0,1), \x ^0_{\theta } } } \Big [- \big ( \v _{\text {real}}(\x ^t_{\theta }, t, \c ^{\x }) - \v _{\text {gen}}(\x ^t_{\theta }, t, \c ^{\x })\big ) \hspace {.5mm} \frac {dG}{d\theta } \Big ], \end {aligned}\lbleq {kl_dv_grad2} (5) where cx is the text caption describing the noisy edited image xt θ and vreal, vgen represents the predicted velocity from the teacher and a trainable auxiliary model, Aϕ respectively. The auxiliary model is trained along with Gθ to learn the current output distribution of Gθ using a flow-based denoising objective. This loss ensures that the edited images not only satisfy the instruction but also remain faithful to the text conditioned distribution of real images modeled by the pretrained teacher. 4.3 TRAINING DETAILS The pretrained model Gθ is originally designed to generate an image, x, conditioned only on text c. To adapt it to our editing task, we extend its conditioning to include the reference image y. Following recent works (Xiao et al., 2025; Tan et al., 2025), we concatenate the VAE encoding of the reference image to the noisy target image encoding along the token sequence dimension, similar to text embedding, thereby enabling the model to attend to both text and visual conditions. To stabilize training, in the initial few iterations, we train the model with the objective of simply reconstructing the concatenated reference image. This encourages the network to propagate content 5 Preprint from the reference input, aligning it toward producing realistic images under joint text–image conditioning. After this, we introduce our main training objective as explained in the previous section. The final loss for the generator is a weighted combination of the VLM-based editing loss and DMD loss. The auxiliary network, Aϕ, is updated Naux times for every generator, Gθ, update (Yin et al., 2024a). Our pre-trained generative model is a 2B parameter internal DiT-based (Peebles & Xie, 2023) latent space diffusion model. The overall training pipeline is illustrated in Figure 1 and is detailed more formally in Algorithm 1 below. Other training hyperparameters are detailed in Appendix E. Algorithm 1: NP-Edit: our training method Input: Pretrained VLM and text-to-image model Ginit, Dataset X = {(yi, ci, cy i , cx i )}. Output: Few-step image-editing model Gθ 1 Gθ ←copyWeights(Ginit); Aϕ ←copyWeights(Ginit) 2 // Warmup with identity loss 3 for step = 1 to Nwarmup do 4 (y, cy) ∼X, ϵ ∼N(0, I), t ∼(0, 1] 5 x ←y 6 xt ←(1 −t)x + tϵ 7 vθ ←Gθ(xt, t, y, cy) 8 Lid ←||v −vθ|| where v = ϵ −x 9 θG ←θG −ηG∇θG Lid. 10 end for 11 12 // Main training loop 13 while train do 14 {(y, c, cx)} ∼X, ϵ ∼N(0, I), t ∈[0.25, 0.5, 0.75, 1] 15 vθ ←Gθ(ϵ, t = 1, y, c) 16 x0 θ ←ϵ −vθ 17 if t < 1 then 18 vθ ←Gθ(xt θ, t, y, c); xt θ ←(1 −t)x0 θ + tϵ ; ϵ ∼N(0, I) 19 x0 θ ←xt θ −tvθ 20 end if 21 Compute LVLM // Eqn. 4 22 Compute ∇θDKL // Eqn. 5 23 θG ←θG −ηGλvlm∇θGLVLM −ηGλdmd∇θDKL 24 for local step = 1 to Naux do 25 ϵ ∼N(0, I), {(y, c, cx)} ∼X 26 x0 θ ←Gθ(ϵ, y, c) // edited image with backward unroll 27 xt θ ←(1 −t)x0 θ + tϵ ϵ ∼N(0, I) t ∈(0, 1) 28 vϕ ←Aϕ(xt θ, cx) 29 ϕA ←ϕA −ηA∇θ||v −vϕ|| where v = ϵ −x0 θ 30 end for 31 end while 5 EXPERIMENTS In this section, we show the results of our method on local image editing as well as more free-form image editing tasks like customization, and compare them with the state-of-the-art baseline methods. 5.1 LOCAL IMAGE-EDITING Benchmark. For evaluation, following prior works, we use the English subset of GEdit- Benchmark Liu et al. (2025b), which captures real-world user interactions across different edit types. We also show results on the ImgEdit (Ye et al., 2025) benchmark in Appendix A. Evaluation metric. For quantitative evaluation, we follow prior works and use GPT4o-based VIEScore (Ku et al., 2024) metric. It scores each edit on: (1) Semantic Consistency (SC) score, 6 Preprint Qwen-Image-Edit (50 step) Step1x-Edit (4 step) Ours (4 step) FLUX.1Kontext (4 step) Qwen-Image-Edit (4 step) Change the background to high mountains Replace the text CS with VALO and replace GO with RANT Remove the dog getting a haircut Light the candle to enhance the candlelight Adjust the image style to a watercolor effect Change the color of the sheep to purple Make the person in the image wave Replace the computer’s casing with bamboo fiber composite Input reference image Figure 2: Qualitative comparison on GEdit-Bench under the few-step sampling setting. For an upper-bound comparison, in the 1st column we show results of the best multi-step sampling method (as measured by the quantitative metrics in Table 1). Our method performs on par or better than baseline methods across different edit types in the few-step setting. We show more samples in the Appendix Figure 10 7 Preprint Table 1: Quantitative evaluation on GEdit-Bench. Our method performs on par or better than baselines under the few-step setting. For multi-step sampling, it still outperforms OmiGen and remains competitive with many of the larger-scale models like BAGEL and FLUX.1 Kontext. All numbers reported in ×10 Method #Param #Step SC Score↑ PQ Score ↑ Overall ↑ Omni-Gen (Xiao et al., 2025) 4B 50 5.52 6.14 4.97 BAGEL (Deng et al., 2025) 7B 50 7.02 6.26 6.14 FLUX.1-Kontext (Labs et al., 2025) 12B 28 6.29 6.65 5.65 Step1X-Edit (Liu et al., 2025b) v1.1 12B 28 7.30 7.37 6.79 Qwen-Image-Edit (Wu et al., 2025) 20B 50 7.94 7.50 7.36 FLUX.1-Kontext (Labs et al., 2025) 12B 4 5.80 5.74 5.04 Step1X-Edit (Liu et al., 2025b) v1.1 12B 4 6.61 6.43 6.01 Qwen-Image-Edit (Wu et al., 2025) 20B 4 6.82 6.21 6.06 Turbo-Edit (Deutch et al., 2024) 1B 4 3.84 6.67 3.84 NP-Edit (Ours) 2B 4 6.16 7.69 6.10 Table 2: Free-form editing task, Customization, evaluation on Dreambooth. We perform better than OminiCon- trol, DSD, and SynCD, which are trained for this task on synthetic datasets. When compared to FLUX.1-Kontext and Qwen-Image-Edit, we still perform comparably in the few-step setting. All numbers are reported in ×10 Method #Param #Step SC Score↑ PQ Score ↑ Overall ↑ DSD (Cai et al., 2025) 12B 28 6.71 7.41 6.78 SynCD (Kumari et al., 2025) 12B 30 7.66 7.83 7.54 FLUX.1-Kontext (Labs et al., 2025) 12B 28 8.19 7.45 7.61 Qwen-Image-Edit (Wu et al., 2025) 20B 50 8.53 7.79 8.02 OminiControl (Tan et al., 2025) 12B 8 6.33 7.82 6.22 DSD (Cai et al., 2025) 12B 8 6.37 6.78 6.29 SynCD (Kumari et al., 2025) 12B 8 7.71 6.84 7.07 FLUX.1-Kontext (Labs et al., 2025) 12B 8 7.99 7.18 7.39 Qwen-Image-Edit (Wu et al., 2025) 20B 8 8.08 7.44 7.62 NP-Edit (Ours) 2B 8 7.68 7.56 7.33 NP-Edit (Ours) 2B 4 7.60 7.28 7.10 evaluating whether the edit instruction was followed, and (2) Perceptual Quality (PQ) score, assessing realism and absence of artifacts. Following VIEScore, for the Overall score, we take the geometric mean between SC and PQ for each image, and average across images in the evaluation benchmark. Baselines. We compare our method with leading baselines, including FLUX.1-Kontext (Labs et al., 2025), Step1X-Edit (Liu et al., 2025b), BAGEL (Deng et al., 2025), OmniGen (Xiao et al., 2025), and Qwen-Image-Edit (Wu et al., 2025). Since no prior work explicitly targets few-step editing, we simply evaluate the above baselines with few-step sampling as well as their original multi-step setting for an upper-bound comparison. We also include Turbo-Edit (Deutch et al., 2024), a state-of-the-art zero-shot few-step method that requires no paired supervision (as zero-shot) and is thus closest to our setup. We use the open-source implementation of all baselines, with further details in the Appendix F. Results Table 1 shows the quantitative result. In the few-step setting, our method achieves the best Overall and Perceptual Quality (PQ) score compared to baseline methods. When compared to their original multi-step sampling, our few-step model still outperforms OmniGen and remains competitive with BAGEL, FLUX.1-Kontext, despite being ×6 smaller parameter-wise. While Step1X-Edit and Qwen-Image-Edit perform better, they are substantially larger models. Figure 2 provides a qualitative comparison. As we can see, our method can successfully follow different editing instructions while being consistent with the input reference image. For instance, in the 6th row (sheep color change), our approach produces a more natural edit compared to baselines. It also performs comparably to the multi-step variant for edits like lighting the candle in 4rth row or making the person wave in 7th row. 5.2 FREE-FORM EDITING: CUSTOMIZATION Benchmark. We use the widely adopted DreamBooth (Ruiz et al., 2023) dataset for evaluation. It consists of 30 objects and 25 prompts per object category. The goal is to generate the same object as shown in the reference image, but in a different context, as mentioned in the text prompt. Baselines. We compare against state-of-the-art unified image-editing baselines such as FLUX.1- Kontext (Labs et al., 2025) and Qwen-Image-Edit Wu et al. (2025) as well as OminiControl (Tan 8 Preprint Qwen-Image-Edit (50 step) FLUX.1-Kontext (8 step) Ours (8 step) OminiControl (8 step) Qwen-Image-Edit (8 step) Shoe with a tree and autumn leaves in the background A toy on top of the sidewalk in a crowded street A cat wearing pink sunglasses Input reference image Figure 3: Qualitative comparison on Customization task. Our method can generate the object in new contexts while having better fidelity under few-step sampling. We show more samples in the Appendix Figure 12. et al., 2025), DSD (Cai et al., 2025), and SynCD (Kumari et al., 2025), which are feed-forward models trained specifically for this task on synthetic datasets. Evaluation metric. Here as well, we use the VIEScore evaluation with a similar Semantic Con- sistency (SC) score to evaluate identity and text alignment and the Perceptual Quality (PQ) score to measure realism, and the geometric mean of the two for the Overall score. We also report CLIP- Score (Radford et al., 2021) and DINO (Oquab et al., 2023) similarity-based metrics in the Appendix. Results. As shown in Table 2, our method performs comparably to state-of-the-art methods. In the few-shot sampling setting, all the baseline methods fail to generate realistic samples at 4 steps; therefore, we compare with them at 8 sampling steps. Our method still results in higher fidelity samples as Figure 3 shows, while maintaining object identity with the reference image. Note that our method performs better than OminiControl, which is also a few-step (8) model for this task. 5.3 ABLATION In this section, we perform several ablations to analyze the role of different components of our method, dataset scale, and stronger VLMs. All ablations are done on the local image-editing task. Training objective. We ablate our training objective across four settings: (1) using only distribution matching loss, (2) using only the VLM-based editing loss, (3) removing the identity-preservation loss DQA, and (4) replacing the binary-cross entropy loss (Eqn. 4) with standard cross-entropy over the full vocabulary. Results are shown below in Table 3. Training without the VLM-based loss and relying solely on distribution matching significantly degrades the model’s capabilities at following editing instructions. We observe that VLM-based loss is essential for maintaining consistency between input and edited images and for certain editing tasks like Removal (Figure 4 and Appendix Figure 5). However, only training with VLM-based loss leads to unrealistic outputs (Appendix Figure 6), and the training eventually diverges, as evidenced by the low overall score in Table 3, underscoring the need for DMD loss. In addition, using binary cross-entropy loss and having a question to check consistency between input and edited images improves the overall performance. Dataset and VLM scale. To study the role of dataset scale, we vary the number of unique reference images in training. Our final dataset represents the maximum scale feasible under our computational resources. Table 4 shows the performance across different dataset sizes and VLM backbones. We observe consistent gains with larger datasets, suggesting that further scaling of data could yield 9 Preprint Table 3: Training objective ablation. We compare on the GEdit-Bench using the VIEScore metric. Ablating different components of our method leads to a drop in performance, indicating its importance. Method SC Score↑ PQ Score ↑ Overall ↑ Ours 6.16 7.69 6.10 w/ only DMD 4.93 7.51 4.93 w/ only VLM 2.03 3.48 1.93 w/o VLM identity 5.70 7.67 5.76 w/ standard CE loss 5.95 7.64 5.89 Table 4: Dataset and VLM scale and compari- son with Reinforcement Learning on the GEdit- Bench. Increasing dataset scale and using stronger VLMs leads to increased performance. Our method also performs better than post-training an SFT model with RL (Liu et al., 2025a). Method SC Score↑ PQ Score ↑ Overall ↑ 1 % Dataset 4.41 7.10 4.66 50 % Dataset 5.41 7.73 5.52 100 % Dataset 6.16 7.69 6.10 InternVL-2B 5.36 7.67 5.45 InternVL-14B 5.88 7.74 5.89 LLava-0.5B 4.57 7.50 4.59 LLava-7B 6.16 7.69 6.10 SFT 3.91 5.70 3.64 SFT + RL 4.55 5.47 4.19 SFT + Ours 6.08 7.83 6.06 Ours Only DMD SFT + Ours Replace the school bus with a truck Remove the umbrella SFT + RL Figure 4: Qualitative analysis of ablation experiments. Our method maintains better input and edited image align- ment compared to only training with DMD loss, which also fails on tasks like removal. Compared to fine-tuning an SFT model with RL, our method results in better fidelity while following the edit instruction. Please zoom in for details. additional improvements. Similarly, a larger parameter VLM-backbone leads to better performance, across different VLMs such as InternVL (Chen et al., 2024) and LLava-OneVision (Li et al., 2024), underscoring the promise that our method can improve as more powerful VLMs are developed. Our method vs. Reinforcement Learning (RL). RL is a common post-training strategy for improving pre-trained models without paired supervision and can also leverage VLMs as the reward model, a similar setup to ours. Thus, we benchmark our method against Flow-GRPO (Liu et al., 2025a), a widely used RL method for text-to-image diffusion. However, since RL relies on a reasonable initialization, we need to first train an image-editing model via Supervised Fine-Tuning (SFT) on a paired dataset (Lin et al., 2025). We then fine-tune it with Flow-GRPO using the same Llava- OneVision reward model as in our approach. As shown in Table 4, SFT alone performs poorly, likely due to the limited quality of paired data. Our method surpasses both SFT and SFT+RL, despite requiring no paired supervision. Fine-tuning the model with some paired data before applying our approach can slightly improve the pixel-level consistency between the input reference and output edited image (as shown in Figure 4), although the quantitative numbers are similar. The Appendix provides additional results and more detailed discussion of the method’s limitations. 6 DISCUSSION AND LIMITATIONS This paper introduces a new paradigm for enabling image editing capabilities given a pre-trained text- to-image diffusion model, without paired before-and-after edit supervision. Our approach combines differentiable feedback from VLMs to ensure editing success with a distribution matching objective to maintain visual realism. This method achieves competitive performance with recent state-of-the-art baselines trained on paired data while enabling efficient few-step generation. Despite these promising results, our method has limitations. Without pixel-level supervision, edits may deviate from the input image in fine-grained details or fail to fully preserve subject identity. We show in Appendix C that adding a perceptual similarity loss (e.g., LPIPS (Zhang et al., 2018)) between input and edited images alleviates this to some extent, though often at the cost of editing quality. Another constraint for our method is the need to keep the VLM in GPU memory, introducing VRAM overhead. We expect ongoing advances in stronger and more efficient VLMs can help address this issue. Overall, our framework scales effectively with large unpaired datasets and highlights a path toward more flexible post-training of generative models for diverse downstream tasks. 10 Preprint Acknowledgment. We thank Gaurav Parmar, Maxwell Jones, and Ruihan Gao for their feedback and helpful discussions. This work was partly done while Nupur Kumari was interning at Adobe Research. The project was partly supported by Adobe Inc., the Packard Fellowship, the IITP grant funded by the Korean Government (MSIT) (No. RS-2024-00457882, National AI Research Lab Project), NSF IIS-2239076, and NSF ISS-2403303. REFERENCES Kingma DP Ba J Adam et al. A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 1412(6), 2014. Saba Ahmadi, Rabiul Awal, Ankur Sikarwar, Amirhossein Kazemnejad, Ge Ya Luo, Juan A Ro- driguez, Sai Rajeswar, Siva Reddy, Christopher Pal, Benno Krojer, et al. The promise of rl for autoregressive image editing. arXiv preprint arXiv:2508.01119, 2025. Omri Avrahami, Ohad Fried, and Dani Lischinski. Blended latent diffusion. ACM transactions on graphics (TOG), 42(4):1–11, 2023. Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. In International Conference on Learning Representations (ICLR), 2024. Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Shengqu Cai, Eric Ryan Chan, Yunzhi Zhang, Leonidas Guibas, Jiajun Wu, and Gordon. Wetzstein. Diffusion self-distillation for zero-shot customized image generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In IEEE International Conference on Computer Vision (ICCV), 2023. George Cazenavette, Avneesh Sud, Thomas Leung, and Ben Usman. Fakeinversion: Learning to detect images from unseen text-to-image models by inverting stable diffusion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Rui, Xuhui Jia, Ming-Wei Chang, and William W Cohen. Subject-driven text-to-image generation via apprenticeship learning. In Conference on Neural Information Processing Systems (NeurIPS), 2023. Xi Chen, Zhifei Zhang, He Zhang, Yuqian Zhou, Soo Ye Kim, Qing Liu, Yijun Li, Jianming Zhang, Nanxuan Zhao, Yilin Wang, et al. Unireal: Universal image generation and editing via learning real-world dynamics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Riccardo Corvi, Davide Cozzolino, Giada Zingarini, Giovanni Poggi, Koki Nagano, and Luisa Verdo- liva. On the detection of synthetic images generated by diffusion models. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023. Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, et al. Emerging properties in unified multimodal pretraining. arXiv preprint arXiv:2505.14683, 2025. Gilad Deutch, Rinon Gal, Daniel Garibi, Or Patashnik, and Daniel Cohen-Or. Turboedit: Text-based image editing using few-step diffusion models. In SIGGRAPH Asia Conference Proceedings, 2024. 11 Preprint Pierre Fernandez, Guillaume Couairon, Herv´e J´egou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In IEEE International Conference on Computer Vision (ICCV), 2023. Kevin Frans, Danijar Hafner, Sergey Levine, and Pieter Abbeel. One step diffusion via shortcut models. arXiv preprint arXiv:2410.12557, 2024. Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, and Zhe Gan. Guiding instruction-based image editing via multimodal large language models. In International Conference on Learning Representations (ICLR), 2024. Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. In International Conference on Learning Representations (ICLR), 2023. Yuying Ge, Sijie Zhao, Chen Li, Yixiao Ge, and Ying Shan. Seed-data-edit technical report: A hybrid dataset for instructional image editing. arXiv preprint arXiv:2405.04007, 2024. Zhengyang Geng, Ashwini Pokle, and J Zico Kolter. One-step diffusion distillation via deep equilib- rium models. In Conference on Neural Information Processing Systems (NeurIPS), 2023. Zhengyang Geng, Ashwini Pokle, William Luo, Justin Lin, and J Zico Kolter. Consistency models made easy. arXiv preprint arXiv:2406.14548, 2024. Zhengyang Geng, Mingyang Deng, Xingjian Bai, J Zico Kolter, and Kaiming He. Mean flows for one-step generative modeling. arXiv preprint arXiv:2505.13447, 2025. Jonathan Heek, Emiel Hoogeboom, and Tim Salimans. Multistep consistency models. arXiv preprint arXiv:2403.06807, 2024. Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt- to-prompt image editing with cross attention control. In International Conference on Learning Representations (ICLR), 2023. Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Conference on Neural Information Processing Systems (NeurIPS), 2020. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations (ICLR), 2022. Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answer- ing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Mude Hui, Siwei Yang, Bingchen Zhao, Yichun Shi, Heng Wang, Peng Wang, Yuyin Zhou, and Cihang Xie. Hq-edit: A high-quality dataset for instruction-based image editing. arXiv preprint arXiv:2404.09990, 2024. Maxwell Jones, Sheng-Yu Wang, Nupur Kumari, David Bau, and Jun-Yan Zhu. Customizing text-to- image models with a single image pair. In SIGGRAPH Asia Conference Proceedings, 2024. Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up gans for text-to-image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha Kwak, Jaesik Park, Eli Shecht- man, Jun-Yan Zhu, and Taesung Park. Distilling diffusion models into conditional gans. In European Conference on Computer Vision (ECCV), 2024. 12 Preprint Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning proba- bility flow ode trajectory of diffusion. In International Conference on Learning Representations (ICLR), 2024. Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, and Jun-Yan Zhu. Dense text-to-image generation with attention modulation. In IEEE International Conference on Computer Vision (ICCV), 2023. Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Pick- a-pic: An open dataset of user preferences for text-to-image generation. Advances in neural information processing systems, 36:36652–36663, 2023. Benno Krojer, Dheeraj Vattikonda, Luis Lara, Varun Jampani, Eva Portelance, Chris Pal, and Siva Reddy. Learning action and reasoning-centric image editing from videos and simulation. Advances in Neural Information Processing Systems, 37:38035–38078, 2024. Max Ku, Dongfu Jiang, Cong Wei, Xiang Yue, and Wenhu Chen. Viescore: Towards explainable metrics for conditional image synthesis evaluation. In Association for Computational Linguistics (ACL), 2024. Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Nupur Kumari, Xi Yin, Jun-Yan Zhu, Ishan Misra, and Samaneh Azadi. Generating multi-image synthetic data for text-to-image customization. In IEEE International Conference on Computer Vision (ICCV), 2025. Black Forest Labs, Stephen Batifol, Andreas Blattmann, Frederic Boesel, Saksham Consul, Cyril Di- agne, Tim Dockhorn, Jack English, Zion English, Patrick Esser, et al. Flux. 1 kontext: Flow match- ing for in-context image generation and editing in latent space. arXiv preprint arXiv:2506.15742, 2025. Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024. Bin Lin, Zongjian Li, Xinhua Cheng, Yuwei Niu, Yang Ye, Xianyi He, Shenghai Yuan, Wangbo Yu, Shaodong Wang, Yunyang Ge, et al. Uniworld: High-resolution semantic encoders for unified visual understanding and generation. arXiv preprint arXiv:2506.03147, 2025. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In International Conference on Learning Representations (ICLR), 2023. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Conference on Neural Information Processing Systems (NeurIPS), 2023a. Jie Liu, Gongye Liu, Jiajun Liang, Yangguang Li, Jiaheng Liu, Xintao Wang, Pengfei Wan, Di Zhang, and Wanli Ouyang. Flow-grpo: Training flow matching models via online rl. In Conference on Neural Information Processing Systems (NeurIPS), 2025a. Shiyu Liu, Yucheng Han, Peng Xing, Fukun Yin, Rui Wang, Wei Cheng, Jiaqi Liao, Yingming Wang, Honghao Fu, Chunrui Han, et al. Step1x-edit: A practical framework for general image editing. arXiv preprint arXiv:2504.17761, 2025b. Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In International Conference on Learning Representations (ICLR), 2023b. Cheng Lu and Yang Song. Simplifying, stabilizing and scaling continuous-time consistency models. In International Conference on Learning Representations (ICLR), 2025. 13 Preprint Grace Luo, Jonathan Granskog, Aleksander Holynski, and Trevor Darrell. Dual-process image generation. In IEEE International Conference on Computer Vision (ICCV), 2025. Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, and Zhihua Zhang. Diff- instruct: A universal approach for transferring knowledge from pre-trained diffusion models. Advances in Neural Information Processing Systems, 36:76525–76546, 2023. Nadav Magar, Amir Hertz, Eric Tabellion, Yael Pritch, Alex Rav-Acha, Ariel Shamir, and Yedid Hoshen. Lightlab: Controlling light sources in images with diffusion models. In Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers, pp. 1–11, 2025. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations (ICLR), 2022. Maxime Oquab, Timoth´ee Darcet, Th´eo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. In Transactions on Machine Learning Research (TMLR), 2023. Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, 2023. William Peebles and Saining Xie. Scalable diffusion models with transformers. In IEEE International Conference on Computer Vision (ICCV), 2023. Qwen-Team. Qwen2 technical report. arXiv preprint arXiv:2407.10671, 2024. Qwen-Team. Qwen2.5-vl, January 2025. URL https://qwenlm.github.io/blog/qwen2. 5-vl/. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text- conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations (ICLR), 2022. Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, and Robin Rombach. Fast high-resolution image synthesis with latent adversarial diffusion distillation. In SIGGRAPH Asia Conference Proceedings, 2024a. Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion distillation. In European Conference on Computer Vision (ECCV), 2024b. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathemati- cal reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, et al. Styledrop: Text-to-image generation in any style. In Conference on Neural Information Processing Systems (NeurIPS), 2023. 14 Preprint Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In Interna- tional Conference on Learning Representations (ICLR), 2021. Yang Song and Prafulla Dhariwal. Improved techniques for training consistency models. In Interna- tional Conference on Learning Representations (ICLR), 2024. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In Proceedings of the 40th International Conference on Machine Learning, 2023a. Yizhi Song, Zhifei Zhang, Zhe Lin, Scott Cohen, Brian Price, Jianming Zhang, Soo Ye Kim, and Daniel Aliaga. Objectstitch: Object compositing with diffusion model. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023b. Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Zhengxiong Luo, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative multimodal models are in-context learners. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Peter Sushko, Ayana Bharadwaj, Zhi Yang Lim, Vasily Ilin, Ben Caffee, Dongping Chen, Moham- madreza Salehi, Cheng-Yu Hsieh, and Ranjay Krishna. Realedit: Reddit edits as a large-scale empirical dataset for image transformations. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 13403–13413, 2025. Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, and Xinchao Wang. Ominicontrol: Minimal and universal control for diffusion transformer. In IEEE International Conference on Computer Vision (ICCV), 2025. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros. Cnn-generated images are surprisingly easy to spot... for now. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Daniel Winter, Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, and Yedid Hoshen. Objectdrop: Bootstrapping counterfactuals for photorealistic object removal and insertion. In European Conference on Computer Vision (ECCV), 2024. Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-image technical report, 2025. URL https://arxiv.org/abs/2508. 02324. Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, and Zheng Liu. Omnigen: Unified image generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 13294–13304, 2025. Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36:15903–15935, 2023. Yanwu Xu, Yang Zhao, Zhisheng Xiao, and Tingbo Hou. Ufogen: You forward once large scale text-to-image generation via diffusion gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Wilson Yan, Andrew Brown, Pieter Abbeel, Rohit Girdhar, and Samaneh Azadi. Motion-conditioned image animation for video editing. arXiv preprint arXiv:2311.18827, 2023. 15 Preprint Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8941–8951, 2024a. Ling Yang, Bohan Zeng, Jiaming Liu, Hong Li, Minghao Xu, Wentao Zhang, and Shuicheng Yan. Editworld: Simulating world dynamics for instruction-following image editing. arXiv preprint arXiv:2405.14785, 2024b. Ling Yang, Zixiang Zhang, Zhilong Zhang, Xingchao Liu, Minkai Xu, Wentao Zhang, Chenlin Meng, Stefano Ermon, and Bin Cui. Consistency flow matching: Defining straight flows with velocity consistency. arXiv preprint arXiv:2407.02398, 2024c. Yang Ye, Xianyi He, Zongjian Li, Bin Lin, Shenghai Yuan, Zhiyuan Yan, Bohan Hou, and Li Yuan. Imgedit: A unified image editing dataset and benchmark. arXiv preprint arXiv:2505.20275, 2025. Tianwei Yin, Micha¨el Gharbi, Taesung Park, Richard Zhang, Eli Shechtman, Fredo Durand, and Bill Freeman. Improved distribution matching distillation for fast image synthesis. In Conference on Neural Information Processing Systems (NeurIPS), 2024a. Tianwei Yin, Micha¨el Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024b. Xin Yu, Tianyu Wang, Soo Ye Kim, Paul Guerrero, Xi Chen, Qing Liu, Zhe Lin, and Xiaojuan Qi. Objectmover: Generative object movement with video prior. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In IEEE International Conference on Computer Vision (ICCV), 2023. Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su. Magicbrush: A manually annotated dataset for instruction-guided image editing. Advances in Neural Information Processing Systems, 36:31428–31449, 2023. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240, 2024. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. Haozhe Zhao, Xiaojian Shawn Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, and Baobao Chang. Ultraedit: Instruction-based fine-grained image editing at scale. Advances in Neural Information Processing Systems, 37:3058–3093, 2024. Linqi Zhou, Stefano Ermon, and Jiaming Song. Inductive moment matching. arXiv preprint arXiv:2503.07565, 2025. Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, and Hai Huang. Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation. In Forty-first International Conference on Machine Learning, 2024. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017. 16 Preprint Appendix A Additional Comparison with Baseline Methods 17 B Ablation Study 17 C Limitation 19 D Dataset Construction Details 20 E Training Implementation Details 23 E.1 Local-image editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 E.2 Free-form editing (Customization) . . . . . . . . . . . . . . . . . . . . . . . . . . 24 F Other Baseline Details 25 G Societal Impact 26 A ADDITIONAL COMPARISON WITH BASELINE METHODS Local image-editing. Here, we compare on another commonly adopted image-editing benchmark, ImgEdit (Ye et al., 2025) (Basic), which covers nine local editing types across diverse semantic categories with a total of 734 samples. Quantitative results under their proposed GPT4o-based evaluation protocol are reported in Table 5, with VIEScore (Ku et al., 2024) results in Table 6. Consistent with the trend observed on GEdit-Bench, our method has better or on-par performance in the few-step setting and remains comparable to many of the multi-step baselines as well. Qualitative comparisons are shown in Figure 11, with additional examples on GEdit-Bench in Figure 10. Customization or free-form editing. Here, we report additional metrics commonly used for evaluation. Specifically, CLIPScore (Radford et al., 2021) and TIFA (Hu et al., 2023) to measure text alignment; and similarity in DINOv2 (Oquab et al., 2023) feature space after background masking to measure identity alignment, denoted as MDINOv2-I. We also report an overall Geometric score (Yan et al., 2023) by taking the geometric mean of TIFA and MDINOv2-I. The results are shown in Table 7. Consistent with the VIEScore evaluation reported in the main paper, our method performs comparably with other baselines in the few-step sampling regime. We show more sample comparisons in Figure 12. Table 5: ImgEdit-Bench comparison using their proposed GPT-4o based evaluation protocal and few-step sampling setting. Our method outperforms baseline methods on the Avg of all edit types. Method #Param Action↑ Bg ↑ Style ↑ Adjust ↑ Replace ↑ Add ↑ Extract ↑ Remove ↑ Compose ↑ Avg ↑ Qwen-Image-Edit 20B 3.14 2.83 3.70 3.25 3.00 3.52 1.96 2.71 3.06 3.02 Flux.1-Kontext 12B 3.51 2.97 3.89 3.04 3.15 3.31 1.82 2.37 2.46 2.95 Step1X Edit 12B 3.66 2.60 3.46 3.44 2.50 3.25 1.77 2.41 2.38 2.83 NP-Edit (Ours) 2B 4.44 4.13 4.14 3.94 3.57 4.52 2.01 2.71 3.18 3.63 B ABLATION STUDY Training objective. Here, we provide a more detailed analysis by examining performance across different editing sub-types. As a reminder, we ablated our training objective under four settings: (1) using only the distribution matching loss, (2) using only the VLM-based editing loss, (3) removing the identity-preservation question from DQA, and (4) replacing the binary-cross entropy loss as explained in Eqn. 4 with standard cross-entropy over full vocabulary length. As shown in Figure 5, training with only the DMD loss yields comparable performance on certain sub-edit types such as Color change, since DMD matches the text-conditioned score between the fine-tuned and pre-trained models, thus improving overall text alignment. However, it fails on tasks like Removal, underscoring 17 Preprint Table 6: VIEScore evaluation on ImgEdit-Bench. Our method performs on par or better than baselines under the few-step setting. For multi-step sampling, it still outperforms OmniGen and remains competitive with many of the larger-scale models like BAGEL and FLUX.1 Kontext. All numbers reported in ×10 Method #Param #Step SC Score↑ PQ Score ↑ Overall ↑ BAGEL 7B 50 7.55 6.22 6.47 FLUX.1-Kontext 12B 28 6.94 6.73 6.19 Step-1X Edit v1.1 12B 28 7.26 7.30 6.72 QwenImage Edit 20B 50 8.30 7.77 7.85 QwenImage Edit 20B 4 6.23 5.14 5.46 FLUX.1-Kontext 12B 4 6.08 5.22 5.14 Step-1X Edit v1.1 12B 4 6.00 5.37 5.14 NP-Edit (Ours) 2B 4 6.72 7.78 6.62 Table 7: Quantitative evaluation of free-form editing task, Customization, on DreamBooth dataset. All numbers reported in ×10 Method #Param #Step MDINO Score↑ CLIP Score ↑ TIFA ↑ Geometric Score ↑ DSD 12B 28 6.55 3.08 8.71 7.32 SynCD 12B 30 7.34 3.09 8.53 7.71 FLUX.1-Kontext 12B 28 7.72 3.07 8.88 8.14 Qwen-Image-Edit 20B 50 7.47 3.14 9.37 8.22 OminiControl 12B 8 6.16 3.02 8.12 6.64 DSD 12B 8 5.88 3.15 8.93 6.99 SynCD 12B 8 7.11 3.16 9.11 7.79 FLUX.1-Kontext 12B 8 7.50 3.08 8.83 7.98 Qwen-Image-Edit 20B 8 7.29 3.08 8.96 7.91 NP-Edit (Ours) 2B 8 6.82 2.97 8.73 7.54 NP-Edit (Ours) 2B 4 7.03 3.04 8.89 7.72 Replace Add Human Color Background Tone Material Motion Text Remove Style Ours Only DMD Loss w/ CE Loss w/o VLM Identity Figure 5: Performance for each sub edit-type. Train- ing with only DMD loss fails to achieve certain tasks like removal and style changes. In addition, using bi- nary cross-entropy loss and VLM identity-based ques- tions helps improve the overall performance. Replace the blue sky with a starry night sky Replace the purple flowers with orange flowers Figure 6: Training with only VLM-editing loss leads to lower fidelity samples with the model only maxi- mizing the edit success probability. Current general- purpose VLMs are often not good at subjective tasks like evaluating image fidelity, highlighting the require- ment of distribution matching loss in our framework. the importance of VLM-based editing loss and its generalizability across diverse editing instructions. In addition, VLM-based loss also helps in maintaining better consistency between input and edited images (first row of Figure 4 in the main paper). However, when training with only the VLM-based editing loss, there’s a gradual degradation in image quality, as Figure 6 shows, highlighting the complementary role of distribution matching losses such as DMD. 18 Preprint t=4 t=8 t=28 No Yes Yes Question: Are there sunglasses in the image? VLM response No Yes Yes Question: Is there a table clock in the image? VLM response Figure 7: Unreliable VLM response on intermedi- ate outputs of a multi-step diffusion model. Here we show a 28-step diffusion process, denoising predic- tions from early steps (e.g., t = 4), which correspond to high noise levels, are blurry and semantically am- biguous. This can lead to unreliable responses from the VLM, as shown here. Therefore, we adopt a few-step diffusion model that always generates sharp images. Ours (4-step) Single step Edit the background to be Shanghai’s bund Add a cat sitting in the basket Figure 8: Our (4-step) vs single-step editing model. We compare our final 4-step model with a single-step model, both trained via our approach. Editing an im- age in a single step is still challenging and leads to lower-fidelity outputs. Ours w/o VLM Identity w/ LPIPS Turn the rice into a hamburger and draw an avatar eating a burger Remove the bangs Figure 9: Limitation. Our method can struggle to maintain exact pixel consistency between the input and edited image. Having LPIPS (Zhang et al., 2018) loss between the input and output edited image can resolve it to an extent (top row) but at the cost of reduced editing success (bottom row). Sampling steps. For our method, we chose to train a few-step image-editing model instead of a multi-step diffusion model, as multi-step diffusion has a noisy or blurry estimate of the final output in the early stages of diffusion. This can make it difficult to get a reliable response from the VLM, as shown in Figure 7. On the other hand, predicting an edited image in one step is still challenging, as mentioned in the main paper, and shown here in Figure 8. Thus few-step provides a nice balance between the two extremes of single and multi-step sampling for our purposes. C LIMITATION A limitation of our framework is the lack of pixel-wise supervision to preserve regions that should remain unchanged under a given edit instruction. Consequently, edited images can deviate from the input image in details or spatial alignment, as shown in Figure 9, 1st column. While our VLM-based editing loss includes a question to check consistency between the input and edited images, it does not enforce pixel-level alignment. Empirically, we find that current VLMs struggle to detect subtle changes. To mitigate this, we experiment with an additional LPIPS (Zhang et al., 2018) loss between the input and output edited images. As shown in the last column of Figure 9, this improves consistency but also negatively impacts editing quality, particularly for edit-types like Removal. Future work could explore specialized VLMs that are more sensitive to fine-grained, pixel-level differences. 19 Preprint D DATASET CONSTRUCTION DETAILS Each tuple in our dataset X = {(yi, ci, cy i , cx i )}N i=1 consists of a real reference-image, y, correspond- ing edit instruction, c, and text prompt corresponding to the reference and edited image, cy and cx, respectively. We use a text-image dataset corpus to select reference images. Given a reference image, we prompt Qwen-2.5-32B VLM to suggest different possible editing instructions. The system and user prompt for it are as follows: Role: system, Content: You are a helpful assistant and an expert in image editing. Role: user, Content: Task: As a researcher in image editing, your task is to generate simple editing instructions based on the given image. The edit types you can use include: 1) local color change, 2) local texture, 3) adjust (shape change), 4) add, 5) remove, 6) replace, 7) bg, 8) style, 9) action, and 10) text manipulation **Important**: Ensure that you create a balanced distribution of these edit types when generating the instructions. Each example should utilize a different edit type, and the edit types should be evenly distributed across all examples. When using the “add” edit type, DO NOT USE vague placements like ‘near’, ‘under’, or ‘beside’, instead, specify the exact location where the object should be placed. For example, instead of “add a castle near the trees” use “add a castle in the clearing between the trees”. Ensure that each instruction is straightforward and points to a single, clear edit change. Avoid complex or multi-step instructions. **Avoid Redundancy**: Make sure to introduce diversity in the edit instructions. Given the input image, could you generate simple edit instructions for different possible edit types by following the “format” of examples below and based on what you have seen in the image? Here are some examples showing the use of various edit types: Good example 1: {color change example} Good example 2: {texture change example} Good example 3: {adjust shape example} Good example 4: {add example} Good example 5: {remove example} Good example 6: {replace example} Good example 7: {bg example} Good example 8: {style example } Good example 9: {action example} Good example 10: {text manipulation example} Bad Examples: the edit instructions are hard/impossible to perform well, or mention vague terms that make the editing model struggle to perform well, and you should not follow. Bad example 1: - Instruction: make this dog look like it’s ready for a formal evening out? - Type: add - Reasoning: This instruction is bad because it does not mention the exact changes that are needed to make the dog look like it’s ready for a formal evening out. Bad example 2: - Instruction: remove the balloon [given an image of only balloons on a white background] - Type: remove - Reasoning: This instruction is bad as it removes the only object in the image. 20 Preprint **Important Considerations**: 1. Avoid repetition of specific phrases: Do not reuse examples or themes from the above examples. Create entirely new and diverse themes and scenarios. 2. Logical Flow: Ensure that each instruction is logical and makes sense given the image. 3. Specificity in Insertions: When adding objects, use precise placement (e.g., “in the sky” or “on the lake”). Avoid vague terms like “next to”, “around”, or “near”. 4. Balanced use of edit types: Use a variety of edit types such as [insertion], [replace], [local texture], [shape change], [style], [remove], [local color change], and [bg]. Ensure an even distribution of these edit types across your examples. 5. Diverse scenarios: Introduce variety in the scenarios, such as futuristic, historical, magical, surreal, or natural settings. Avoid overusing common tropes. 6. DO NOT suggest instructions that change a very small/minute part of the image. Could you now generate 4 examples of **new, creative, and contextually relevant** edit instructions by following the format above? Avoid using the specific phrases, themes, or scenarios from the examples provided above. **Each example must use a different edit type** from the ones listed above. Also, make sure to use each edit type equally across all generated examples. Finally, you should make the edit instructions as simple as possible so that the downstream editing model is able to work well. In the above user prompt, for the good examples, we randomly select an edit instruction for each editing type out of a fixed set of manually defined edit instructions. Given edit instructions for each image, we again prompt the VLM to check the validity of the edit instruction and, if valid, to suggest a possible caption for the edited image. The system and user prompt for this is: Role: system, Content: You are a helpful assistant and an expert in image editing. Role: user, Content: Task: As a researcher in image editing, given the input image, edit type, and the edit instruction, your task is to check if a given edit instruction is valid and can be applied to the image. If it is valid, generate a descriptive caption for what the image would look like after applying the edit instruction. If it is not valid, return “invalid” and explain why it is not valid, and output “NA” for the edited image caption. An edit instruction is invalid if it: 1. mentions to modify/remove/replace an object that is NOT PRESENT in the image. 2. is TOO HARD to make editing model to understand and perform well, e.g., “remove any visible accessories.” 3. DOES NOT change the image in any meaningful way, e.g., given the image of a forest, “change the background to a dense forest.” For the “remove” edit type: - DO NOT mention the object that is removed during the edit in the edited image caption. For example, given an image of a cat in a living room on a sofa with the edit type ”remove” and edit instruction: “remove the cat” Bad Example: A cat is removed from the sofa in a living room. Good Example: A living room with a sofa. Given the edit instruction and the original caption: Edit type: {edit type} Edit instruction: {simple edit instruction} Output format: Validity: ... Reasoning: ... Edited image Caption: ... 21 Preprint Please provide a concise but complete caption describing the edited image. Focus on the changes that would be made according to the edit instruction. Here are some more examples: Example 1: - Edit type: bg - Edit instruction: change the background to a sunset view - Validity: valid - Reasoning: The edit instruction is valid because it adjusts the current blue sky to a sunset view, which is a meaningful change. - Edited image caption: A park with a sunset view. People are walking around in the park. Example 2: - Edit type: remove - Edit instruction: remove the wine glass - Validity: invalid - Reasoning: The edit instruction is invalid because it mentions removing a wine glass that is not present in the image. - Edited image caption: NA **Important Considerations**: 1. DO NOT use instruction words like replaced, added, removed, modified, etc. in the caption. 2. Keep the caption general to explain any possible images resulting from the edit instruction. Only output the validity, reasoning, and edited image caption. Do not include any other text or explanations. After filtering the list of generated editing instructions using the above procedure, our final dataset consists of approximately 3M unique reference images with corresponding editing instructions spanning the 10 edit sub-types. Within the constraints of our available computational resources, this represents the largest dataset we were able to construct. For the customization task, we first instruct the VLM, to identify if the image has a prominent object in the center. We provide an in-context sample image as well to the model. The exact system and user prompt for this is: Role: system, Content: You are a helpful assistant and an expert in image personaliza- tion/customization. Role: user, Content: Task: You are assisting in a research project on image personalization. Your goal is to evaluate whether the SECOND image contains a **single, uniquely identifiable object** prominently positioned near the center of the frame. - The FIRST image (image˙path1) is an example of a valid case. - The specific object category in the second image can be different — focus only on **object uniqueness** and **image composition**. Good examples include object categories that can be personalized, have unique texture, and are not general objects: - Backpack, purse, toy, cat, dog, cup, bowl, water bottle, wearables, plushies, bike, car, clocks, etc. Bad examples include object categories that are general objects, and different instances of the category can not be distinguished: - Tree, building, door, flowers, food, vegetables, fruits, natural scenes, roads, etc. **Important Considerations:** 1. The object should be clearly recognizable and **visually distinct** from the background. 22 Preprint 2. The object should be **near the center** of the image. 3. The **entire object** should be visible — it should NOT be a tight or zoomed-in crop. 4. The background can be natural but should not be overly cluttered or visually distracting. 5. The image should feature a **single primary object**, not multiple equally prominent objects. Could you now judge the SECOND image and only provide the output, reasoning, and object name, in the following format: Output: True/False Reasoning: Brief explanation Object Name: The name of the object (e.g., “backpack”, “cat”, “toy”). If the VLM response predicts a valid image, we then query it again to suggest a new background context for the object category as follows: Role: system, Content: You are a helpful assistant and an expert in image personaliza- tion/customization. Role: user, Content: Given an image of an object category, you have to suggest three DIVERSE background captions for the object. Provide a detailed description of the background scene. Only suggest plausible backgrounds. DO NOT add the object name in the caption. DO NOT use emotional words in the caption. Be concise and factual but not too short. DO NOT mention the object name in the output captions. If the object is not a thing, but a scene, then output None. Example background captions for “White plastic bottle” are: 1. near the edge of a marbled kitchen counter, surrounded by a cutting board with chopped vegetables, a salt shaker, and a stainless steel sink in the background. 2. rests on a tiled bathroom shelf, accompanied by a toothbrush holder, a mirror with foggy edges, and a shower curtain partially drawn open. Example background captions for “a blue truck” are: 1. parked beside a graffiti-covered brick wall under a cloudy sky, with city skyscrapers rising in the background. 2. resting in a grassy field surrounded by wildflowers, with distant mountains and a golden sunset in the background. Object: {object category name} Output: 1. 2. 3. E TRAINING IMPLEMENTATION DETAILS E.1 LOCAL-IMAGE EDITING Training hyperparameters. We train on a batch-size of 32 using Adam (Adam et al., 2014) optimizer with a learning rate of 2 × 10−6, β1 as 0, and β2 as 0.9. We train for a total of 10K iteratuions with auxiliary network, Aϕ, being updated 10 times for every generator, Gθ, update, following similar strategy adopted in DMD2 (Yin et al., 2024a). We train with the identity loss (Section 4.3) for 250 iterations. For faster convergence, the first 4K training iterations are trained with a single step prediction (t = 1 in Line 3 of Algorithm 1), and then we start the 2-step unrolling of the diffusion trajectory. The final loss is a weighted combination of VLM-based editing loss and distribution matching loss with λVLM = 0.01 and λDMD = 0.5. During training, we also add a “do nothing” editing task with L2 loss between the input and edited image as regularization with a 1% 23 Preprint probability. This helps the model learn to maintain consistency between input and edited images. During training, we sample the editing instruction corresponding to each subtype uniformly, except removal, which is sampled with 25% probability. This is because, empirically, we observe that removal is more difficult than other edit-types like Color change. Template questions for VLM-based editing loss. As mentioned in the main paper, we evaluate VLM-based loss on two questions per edit type. Specifically for any edit type except removal, we use the following template: Role: user, Content: You are a professional digital artist and an expert image editor. You will be provided with two images. The first being the original real image, and the second being an edited version of the first. The objective is to evaluate if the editing instruction has been executed in the second image. Editing instruction: {edit instruction} Answer with a Yes or No. Note that sometimes the two images might look identical due to the failure of image editing. Answer No in that case. Role: user, Content: You are a professional digital artist and an expert image editor. You will be provided with two images. Answer with a Yes or No if the second image is exactly the same as the first image. IGNORE the changes in the second image because of the edit: {edit instruction}. Everything else should be the same. For the removal edit-type, we change the first question to explicitly ask about the presence of the target object to be removed, with the ground truth answer in this case being No. We find it to be more effective than a generic template. Role: user, Content: You are a professional digital artist and an expert image captioner. You will be provided with an image. Answer with a Yes or No if the image has {object name}. E.2 FREE-FORM EDITING (CUSTOMIZATION) Training hyperparameters. We reduce the warmup iterations for which we train with the identity loss to 100 in this case, since customization often requires more drastic changes in the output image compared to the input reference image. Further, we increase λDMD = 2 instead of 0.5 as in the case of local image-editing. The rest of the hyperparameters remain the same. Both during training and inference, the input text prompt to the few-step generator, Gθ, is in the following template: Generate the main object shown in the first image in a different setting and pose: { background scene description}. We train the 4-step model for 10K iterations. For the 8-step model, we fine-tune for 5K additional training steps starting from the 4-step model. Template questions for VLM-based editing loss. Here, we modify the questions to instead evaluate if the background context and pose are different in the generated image, i.e., editing success, and if the object identity is similar, i.e., image alignment and consistency between the input reference and edited image. The exact questions are as follows: Role: user, Content: You are a professional digital artist and an expert in image editing. You will be provided with two images. 24 Preprint Answer with a Yes or No if the {object˙name} in the second image is in a different pose and location than in the first image. Note that sometimes the second image might not have the same object because of the failure of image editing. Answer No in that case. Role: user, Content: You are a professional digital artist and an expert in image editing. You will be provided with two images. Answer with a Yes or No if the {object˙name} in the second image is the exact same identity, with similar color, shape, and texture as in the first image. Note that sometimes the second image might not have the same object because of the failure of image editing. Answer No in that case. F OTHER BASELINE DETAILS Flow-GRPO (Liu et al., 2025a). We follow the open-source implementation of Flow-GRPO and train with the same computational budget as our method, i.e., across 4 A100 GPUs and 2.5 days of training. The final model is fine-tuned from a pre-trained image-editing model for 5K iterations. During training, we collect 16 images per-prompt with 12 denoising steps (28 during inference) for computing the mean and standard deviation in GRPO (Shao et al., 2024). Following their official implementation, we train with LoRA (Hu et al., 2022) of rank 32, α = 64, learning rate 1 × 10−4, and use the VLM to score edits on a scale of 0 to 9, which is normalized between 0-1 to get the final reward. The exact prompt used to query the VLM is derived from VIEScore (Ku et al., 2024) and is shown below. Role: system, Content: You are a helpful assistant and an expert in image editing. Role: user, Content: You are a professional digital artist. You will have to evaluate the effectiveness of AI-generated edited image(s) based on given rules. You will have to give your output in this way (Keep your reasoning VERY CONCISE and SHORT): score : ..., reasoning : ... RULES: Two images will be provided: The first being the original real image and the second being an edited version of the first. The objective is to evaluate how successfully the editing instruction has been executed in the second image. Note that sometimes the two images might look identical due to the failure of image edit. From scale 0 to 9: A score from 0 to 9 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 9 indicates that the scene in the edited image follows the editing instruction text perfectly.) Editing instruction: {edit instruction} Supervised Fine-Tuning. We train with the standard velocity prediction flow-objective for 30K iterations with a batch-size of 32 and learning rate 2 × 10−6 with a linear warmup of 2K iterations. To enable classifier-free guidance, we drop the image and text conditions 10% of the time. Sampling parameters for local image-editing baselines. We follow the open-source implemen- tation to sample images from all the baseline models for the benchmark evaluations. The turbo- 25 Preprint edit (Deutch et al., 2024) baseline requires a caption corresponding to the edited image as well, and we use Qwen-2.5-32B-VLM to generate these captions for GEdit-Bench images. Sampling parameters for customization baselines. Here as well, we follow the open-source implementation to sample images from all the baseline models for the benchmark evaluations. In the case of DSD (Cai et al., 2025), it employs Gemini-1.5 to convert the input user-prompt into a detailed prompt. However, we skipped this step for a fair evaluation with other methods, which do not use any prompt rewriting tools. In the case of SynCD (Kumari et al., 2025), though it supports multiple reference images as input, we evaluated it with a single reference image, to keep the setup similar to other baseline methods and Ours. For sampling images with OminiControl (Tan et al., 2025) and DSD (Cai et al., 2025), we follow their recommended prompt setting and replace the category name with “this item”. G SOCIETAL IMPACT Our work introduces a training framework for fine-tuning text-to-image models into a few-step image-editing model without using paired supervision. By enabling few-step sampling, our method improves inference efficiency and reduces computational cost. Nonetheless, the broader risks of generative models, such as creating deepfakes and misleading content, also apply to our approach. Possible ways to mitigate this are watermarking generative content Fernandez et al. (2023) and reliable detection of generated images Wang et al. (2020); Corvi et al. (2023); Cazenavette et al. (2024) 26 Preprint Qwen-Image-Edit (50 step) Step1x-Edit (4 step) Ours (4 step) FLUX.1Kontext (4 step) Qwen-Image-Edit (4 step) Add a pair of sunglasses in a cool style Convert this image into an anime style Change the color of the goat to yellow Make the child pout Replace the bench’s material with marble Adjust the background to a beach Replace the text WOOD with LAND Remove the red dumbbell Input reference image Figure 10: Qualitative comparison on GEdit-Bench. We show results of our and baseline image-editing methods under the few-step sampling setting. For comparison, we also show the results of the best method with multi-step sampling, as measured by the quantitative metrics (Table 1), in the 1st column. Our method performs on par or better than baseline methods across different edit types in the few-step setting. 27 Preprint Qwen-Image-Edit (50 step) Step1x-Edit (4 step) Ours (4 step) FLUX.1Kontext (4 step) Qwen-Image-Edit (4 step) Change the background from a clear blue sky with bare branches to a sunset sky over a lush forest Add a coffee cup on the table in the foreground Change the wooden surface background to a marble countertop Remove the horse in the foreground Replace the sheep in the image with a deer Transfer the image into a dramatic charcoal-drawing style Raise the person’s right hand Input reference image Figure 11: Qualitative comparison on ImgEdit-Bench. We show results of our and baseline image-editing methods under the few-step sampling setting. For comparison, we also show the results of the best method with multi-step sampling, as measured by the quantitative metrics (Table 6), in the 1st column. Our method performs on par or better than baseline methods across different edit types in the few-step setting. 28 Preprint Qwen-Image-Edit (50 step) FLUX.1Kontext (8 step) Ours (8 step) OminiControl (8 step) Qwen-Image-Edit (8 step) A backpack on top of a purple rug in a forest A stuffed animal with a mountain in the background A toy with a city in the background A dog with a blue house in the background A bowl on the beach A dog in a purple wizard outfit A backpack with Eiffel tower in the background Input reference image Figure 12: Qualitative comparison on DreamBooth. We show results of our and baseline methods under the few-step sampling setting. For comparison, we also show the results of the best method with multi-step sampling, as measured by the quantitative metrics in the first column. Our method performs comparably with baseline methods on identity alignment while having better image fidelity across different concepts in the few-step setting. 29
Preprint LEARNING AN IMAGE EDITING MODEL WITHOUT IMAGE EDITING PAIRS Nupur Kumari1 Sheng-Yu Wang1 Nanxuan Zhao2 Yotam Nitzan2 Yuheng Li2 Krishna Kumar Singh2 Richard Zhang2 Eli Shechtman2 Jun-Yan Zhu1 Xun Huang2 1Carnegie Mellon University 2Adobe ABSTRACT Recent image editing models have achieved impressive results while following natural language editing instructions, but they rely on supervised fine-tuning with large datasets of input-target pairs. This is a critical bottleneck, as such naturally occurring pairs are hard to curate at scale. Current workarounds use synthetic training pairs that leverage the zero-shot capabilities of existing models. However, this can propagate and magnify the artifacts of the pretrained model into the final trained model. In this work, we present a new training paradigm that eliminates the need for paired data entirely. Our approach directly optimizes a few-step diffusion model by unrolling it during training and leveraging feedback from vision-language models (VLMs). For each input and editing instruction, the VLM evaluates if an edit follows the instruction and preserves unchanged content, providing direct gradients for end-to-end optimization. To ensure visual fidelity, we incorporate distribution matching loss (DMD), which constrains generated images to remain within the image manifold learned by pretrained models. We evaluate our method on standard benchmarks and include an extensive ablation study. Without any paired data, our method performs on par with various image editing diffusion models trained on extensive supervised paired data, under the few-step setting. Given the same VLM as the reward model, we also outperform RL-based techniques like Flow-GRPO. 1 INTRODUCTION Large-scale text-to-image models have achieved remarkable success, generating images of high fidelity that closely align with textual descriptions Ramesh et al. (2022); Peebles & Xie (2023); Kang et al. (2023). Despite these advances, text-only conditioning offers limited user control and falls short for many downstream applications (Meng et al., 2022; Gal et al., 2023). In practice, users often wish to start with an existing image to perform tasks like adjusting local attributes, changing the style, or placing an object in a new context. These image editing operations require precise, image-guided control that text-only prompts cannot provide. While collecting large-scale text-image pairs is relatively straightforward (Schuhmann et al., 2021), constructing supervised datasets for editing tasks is far more challenging. As one requires a pair of images, the input and its edited counterpart, along with the text instruction, and such data is rarely available online. Early methods addressed this by synthetically generating editing pairs (Brooks et al., 2023) from a pretrained model, using zero-shot editing techniques (Hertz et al., 2023). However, synthetic datasets can quickly become outdated with new and improved base models, and they risk amplifying and propagating artifacts of the synthetic editing process. More recent approaches extract frames from videos and annotate their differences (Chen et al., 2025; Song et al., 2023b; Krojer et al., 2024). Although promising, the applicability of this strategy is constrained by the diversity of transformations present in natural video sequences, where obtaining pixel-aligned before-and-after edited pairs is nearly impossible. A final alternative is to manually create training pairs (Winter et al., 2024; Magar et al., 2025), but this can be quite laborious and does not scale as easily. 1 16 Oct 2025 Preprint In this work, we explore the possibility of training an image editing model without any training pairs. Our key idea is to leverage supervision from Vision Language Models (VLMs) (Liu et al., 2023a), relying on their general image-understanding capabilities to check whether the generated images satisfy the editing instructions. Prior works have studied the use of specialized models or generalpurpose VLMs in improving generative models along dimensions such as text-alignment and aesthetic quality, primarily using reinforcement learning (Black et al., 2024; Liu et al., 2025a). In contrast, our method is the first to explore using gradient feedback from VLMs for general instruction-following, and we distill this feedback into a lightweight generative model that can generalize to arbitrary images and edit instructions. Our final method combines the VLM-feedback with a distribution matching loss to ensure that generated outputs remain in the realistic image domain while following the edit instructions. In summary, our contributions are threefold: 1. We propose NP-Edit (No-Pair Edit), a framework for training image editing models using gradient feedback from a Vision-Language Model (VLM), requiring no paired supervision. 2. For efficient training and effective VLM feedback, our formulation combines it with distribution matching loss to learn a few-step image editing model. The final model remains competitive with existing baselines trained on supervised data. 3. We conduct a comprehensive empirical study analyzing the impact of (i) different VLM backbones, (ii) dataset scale and diversity, and (iii) VLM loss formulation. Our findings show that performance improves directly with more powerful VLMs and larger datasets, demonstrating its strong potential and scalability. 2 RELATED WORKS Diffusion-based image editing. Development of large-scale text-to-image models has enabled a wide range of downstream applications, including local image editing (Hertz et al., 2023; Meng et al., 2022), stylization (Sohn et al., 2023; Hertz et al., 2024; Jones et al., 2024), personalization and customization (Gal et al., 2023; Ruiz et al., 2023). These can broadly be viewed as different forms of image-editing capabilities. Early approaches often relied on zero-shot inference-time methods (Hertz et al., 2023; Parmar et al., 2023; Cao et al., 2023; Avrahami et al., 2023; Kim et al., 2023) or flexible but slow optimization-based techniques (Gal et al., 2023; Ruiz et al., 2023; Kumari et al., 2023). To improve efficiency and robustness, subsequent works introduced training-based approaches (Brooks et al., 2023; Xiao et al., 2025; Chen et al., 2023; Fu et al., 2024; Sun et al., 2024). However, obtaining large datasets of image pairs remains challenging: synthetic curation (Brooks et al., 2023; Zhang et al., 2023; Zhao et al., 2024; Hui et al., 2024; Yang et al., 2024b; Tan et al., 2025; Cai et al., 2025; Kumari et al., 2025) risks becoming outdated as generative models improve, while human annotation (Winter et al., 2024; Magar et al., 2025; Ge et al., 2024; Sushko et al., 2025) is costly and labor-intensive. Recent efforts have explored constructing paired data from videos (Chen et al., 2025; Song et al., 2023b) or simulation environments (Yu et al., 2025), although these remain limited in either annotation diversity or visual realism. We also target similar image-editing capabilities but remove the need for paired data (Zhu et al., 2017), by using differentiable feedback from vision-language models instead of ground-truth edits. Post-training for image generation. Post-training methods typically align image generators with human preferences using either Direct Preference Optimization (DPO) (Wallace et al., 2024; Yang et al., 2024a) or Reinforcement Learning (RL) (Black et al., 2024). While early RL-based works use feedback from a simple scalar reward model (Kirstain et al., 2023; Xu et al., 2023), the paradigm has recently been enhanced by employing sophisticated Vision-Language Models (VLMs) as "judges" to provide more generic and accurate reward signals (Ku et al., 2024). Although post-training has been successfully applied to text-to-image generation, its use for image editing models has been less explored. Concurrently to our work, EARL (Ahmadi et al., 2025) begins to address this by using a VLM-as-a-judge framework to post-train an image-editing model with RL. However, RL-based approaches often depend heavily on good initialization, typically requiring a Supervised Fine-Tuning (SFT) phase with paired editing data. In contrast, our method leverages differentiable feedback from the VLM model, thereby obviating the need for an initial SFT stage and enables the learning of image editing models without the use of synthetically generated data. 2 Preprint Related to our work, Luo et al. (2025) recently introduced a method that incorporates gradient feedback from VLMs to satisfy various criteria, including the horizon line, style, and layout, in generated images. However, their framework operates in a per-example optimization setting, requiring costly LoRA fine-tuning (Hu et al., 2022) for each criterion and prompt pair, and also does not consider image editing tasks. Few-step diffusion models. Standard diffusion (or flow-matching) models require many sampling steps to generate high-quality images. Many prior works reduce the number of denoising steps for faster sampling by predicting larger denoising steps, including consistency models (Kim et al., 2024; Geng et al., 2024; Song et al., 2023a; Yang et al., 2024c; Song & Dhariwal, 2024; Lu & Song, 2025; Heek et al., 2024), shortcut models (Frans et al., 2024), meanflow (Geng et al., 2025), and inductive moment matching (Zhou et al., 2025). Another line distills a pre-trained multi-step teacher into a few-step student by matching ODE trajectories (Song et al., 2023a; Salimans & Ho, 2022; Geng et al., 2023), using adversarial loss (Sauer et al., 2024b; Kang et al., 2024; Yin et al., 2024a; Sauer et al., 2024a; Xu et al., 2024), or applying score distillation (Luo et al., 2023; Yin et al., 2024b;a; Zhou et al., 2024). In our framework, we adopt DMD (Yin et al., 2024b) as a distribution matching objective. This ensures that our few-step editing model's output remains in the real-image manifold defined by the pre-trained text-to-image teacher, while using VLM-feedback to ensure the model follows the editing instructions. 3 BACKGROUND 3.1 DIFFUSION MODELS Diffusion or flow-based models are a class of generating models that learn the data distribution by denoising samples corrupted by different levels of Gaussian Noise (Ho et al., 2020; Song et al., 2021; Lipman et al., 2023). Given a real sample x, a forward diffusion process creates noisy samples xt = αtx+σtε over time t ∈(0, 1], where ε ∼N(0, I), and αt, σt define a noise schedule such that xT ∼N(0, I) and x0 = x. The denoising model is parameterized to reverse the forward diffusion process by either predicting the noise ε (Ho et al., 2020) added to the sample or velocity v towards the clean sample (Liu et al., 2023b; Salimans & Ho, 2022). In our work, we follow the flow-based formulation, with the forward denoising process being a linear interpolation, i.e., αt = 1 -t and σt = t. The training objective for a flow-based model, with parameters θ, can be simplified to the following: {a ligned} a thbb { E} _{ ^t,t, {c}, {N} ( {0}, {I})} w_t|| {v} - {v}_{ } ( ^t, t, {c}) ||, {aligned} (1) where v = ε -x and wt is a time dependent weighting factor. The denoising network can be conditioned on other inputs c, such as a text prompt, a reference image, or both, as in our case. 3.2 VISION LANGUAGE MODELS (VLMS) Vision Language Models (VLMs) trained from multimodal image-text data have shown exemplary visual understanding and reasoning capabilities and can serve as a general-purpose visual model. A common strategy for training such large-scale VLMs is via visual instruction tuning (Liu et al., 2023a), which aligns a pre-trained Vision Encoder output with the input word embedding space of a pretrained Large Language Model (LLM). More specifically, the image x is encoded into a set of tokens using the vision encoder, Xv = g(x). The input question regarding the image and its ground truth answer are tokenized in the LLM input embedding space as Xq and Xa, respectively. A projector module, fφ, projects the vision-encoded tokens into the LLM word embedding space and is trained via standard autoregressive loss to maximize the probability of predicting the correct answer: {a l i g ned } p( {X} _a | {X}_v, {X}_q) = _{i=1}^{L} p (a_i | f_{ }( {X}_v), {X}_{q}, {X}_{a<i} ), {aligned} (2) where Xa = [a1 · · · aL] is of token length L, and Xa<i denotes all the tokens before the current prediction token index. The final loss simplifies to a cross-entropy over the total vocabulary length. In our experiments, we use LLaVa-OneVision-7B (Li et al., 2024) as the VLM that uses SigLIP (Zhai et al., 2023) vision encoder and Qwen-2 LLM (Qwen-Team, 2024), and is among the state-of-the-art VLMs of this scale. 3 Preprint 4 METHOD Given a pretrained text-to-image diffusion model Ginit and a dataset X = {(yi, ci, cy i , cx i )}N i=1 of reference image y, corresponding edit instruction c, and captions cy and cx that describe the reference and edited image respectively, we fine-tune Ginit into a few-step image editing model Gθ without requiring ground truth edited image x according to the edit instruction. Our approach, No-Pair (NP)-Edit, introduces a VLM-based loss to evaluate edit success and combines it with a distribution matching loss to ensure outputs remain within the natural image domain. Below, we first detail the construction of the dataset, then our training objective, and finally other implementation details. 4.1 EDIT INSTRUCTION DATASET Each dataset sample consists of a real image y as reference and an associated edit instruction, c. Following prior works (Liu et al., 2025b; Ye et al., 2025), we focus on several categories of local editing operations, such as Add, Replace, Remove, Adjust shape, Action, Stylization, Text editing, Color, Material, and Background change, as well as more free-form editing tasks such as Customization or Personalization (Gal et al., 2023; Ruiz et al., 2023; Kumari et al., 2023). Candidate instructions for each type are generated using Qwen2.5-32B VLM (Qwen-Team, 2025). Given an image-instruction pair, we further query the VLM to assess its validity and to suggest the caption, cx, for the edited image. For the customization task, we restrict reference images to those showing a prominent central object, either filtered from a real image corpus or generated via the pretrained model (Tan et al., 2025; Kumari et al., 2025), and prompt the VLM to generate a caption that places the object in a novel background or context. In total, for local and free-form editing instructions, our dataset consists of ∼3M and ∼600K reference images, respectively. The input prompt to the Qwen2.5-32B VLM for each setup is shown in Appendix D. 4.2 TRAINING OBJECTIVE Training a diffusion or flow-based model (Ho et al., 2020; Liu et al., 2023b) for image editing without pairs presents a unique challenge. Standard diffusion training takes as input noised versions of a ground-truth image. In our setting, no such ground-truth edited image exists; thus, we cannot construct these intermediate noisy inputs. On the other hand, directly mapping noise to the edited image in a single step is naturally challenging and yields poor fidelity (see Appendix B). To address this, during training, we propose to unroll the backward diffusion trajectory starting from noise using a two-step sampling procedure (Song et al., 2023a). Specifically, given the reference image-instruction pair (y, c), the editing model Gθ first predicts a provisional clean image ˆx0 θ from noise ε. Then, a second step refines this estimate by feeding an interpolated noisy input back into the model: \ b e g i n {a ligne d} \ hat { \ x } _{ e t a }^ 0 & = \ e psi l o n - {\ v }_{ he ta } , & & \ t e xt {wher e } \ h a t {\ v } _ { eta } G_{ }( , t=1, , ), {N}( {0}, {I}), \\ _{ }^0 &= { }_{ }^t - t _{ }, && {where } _{ } G_{ }( { }_{ }^t, t, , ), { }_{ }^t=(1-t) { }_{ }^0 + t ; {N}( {0}, {I}), t (0,1). {aligned} (3) With the second step, the model is now trained on noisy intermediate states at timesteps determined by t, while being more efficient than a full backward unroll. In our method, we focus on few-step generation-specifically four steps-and restrict t ∈[0.25, 0.5, 0.75] in the second step. Few-step generator provides a better estimate of the denoised image, x0 θ, at intermediate steps, which in turn enables effective VLM-based feedback. VLMs tend to give unreliable judgments when inputs are noisy or blurry (see Appendix B). This also enables faster inference and lowers training costs. VLM-based editing loss. To evaluate whether an edit is successfully applied in x0 θ, we define a set of template questions with corresponding ground truth answers, DQA = {(Xqj, Xaj)}j, tailored to each edit category. The VLM is instructed to answer with a binary yes and no answer, i.e., Xaj ∈{Yes, No}, and X ̄aj denotes the opposite response. The loss is then a binary cross-entropy over the predicted logit difference between the tokens corresponding to the correct and opposite responses, respectively. e g i n {a ligned } thcal {L}_{\ te x t {V LM} } &= - _j p(a_j) , { where } p(a_j) = ( ^{(j)}_{a_j} - ^{(j)}_{ {a}_j} ) {aligned} {vlm_loss} (4) 4 Preprint Input Output "Change the background to a bright, sunny meadow" Edit Instruction Few-step Generator G DMD Loss Edit-verification question Identity-preservation question Editing loss You are a professional digital artist and an expert image editor. ...evaluate if the editing instruction has been executed... Editing instruction: {edit instruction} Answer with a Yes or No... ...You are a professional digital artist and an expert image editor. You will be provided with two images. Answer with a Yes or No if the second image is exactly the same as the first image. IGNORE the changes in the second image because of the edit: {edit instruction}. ... VLM L!"# P"yes" , P"no" P"yes" , P"no" L!"# Figure 1: Method. We fine-tune a pretrained text-to-image model into a few-step image-editing model using differentiable VLM-feedback regarding edit success. In addition, we use distribution matching loss (DMD (Yin et al., 2024a)) to ensure output images remain in the natural image manifold. where l(j) aj is the logit corresponding to the token Xaj, σ is the sigmoid function, and p(aj) is the probability of correct answer, while restricting normalization to only the Yes and No tokens, which we observe to be more effective during training (Zhang et al., 2024). Computing this loss is relatively fast, as it only requires a single forward call to the VLM per question, as opposed to autoregressive token prediction. For each edit instruction, we use two complementary questions to compute the editing loss: (1) Edit-verification question to assess whether the intended edit is applied, and (2) Identity-preservation question to ensure the image is not over-edited and is consistent with the reference image. Specifically, for the local image-editing instructions, we verify edit success with the following question "The objective is to evaluate if the editing instruction has been executed in the second image. Editing instruction: {edit instruction}. Answer with a Yes or No." except removal edit-type, where we directly evaluate if the intended object is removed by asking "Answer with a Yes or No if the image has {object name}". For the identity-preservation question, we ask the following: "Answer with a Yes or No if the second image is exactly the same as the first image. IGNORE the changes in the second image because of the edit: {edit instruction}". We provide the list of all questions along with their system and user prompts for all editing types, including free-form editing in Appendix E. Distribution matching with text-to-image teacher model. While VLM feedback evaluates the efficacy of instruction following, it does not enforce the generated outputs to remain in the real image domain. To ensure this and keep the output distribution of the generator aligned with the pre-trained model, we apply Distribution Matching Distillation (DMD) (Yin et al., 2024b;a) between the fine-tuned model, Gθ, and the pre-trained text-to-image (teacher) model, Ginit. DMD minimizes the Kullback-Leibler (KL) divergence between the real image distribution, as estimated by the teacher model, and the output distribution of the fine-tuned model. The gradient of this KL-divergence loss with respect to the generator parameters can be simplified to: g in {aligned} _ \ t heta D_{K L} & = \ m athbb { E} _{ st ac k { {N}( {0}, {I}), t (0,1), ^0_{ } } } [- ( _{ {real}}( ^t_{ }, t, ^{ }) - _{ {gen}}( ^t_{ }, t, ^{ }) ) {.5mm} {dG}{d } ], {aligned} {kl_dv_grad2} (5) where cx is the text caption describing the noisy edited image xt θ and vreal, vgen represents the predicted velocity from the teacher and a trainable auxiliary model, Aφ respectively. The auxiliary model is trained along with Gθ to learn the current output distribution of Gθ using a flow-based denoising objective. This loss ensures that the edited images not only satisfy the instruction but also remain faithful to the text conditioned distribution of real images modeled by the pretrained teacher. 4.3 TRAINING DETAILS The pretrained model Gθ is originally designed to generate an image, x, conditioned only on text c. To adapt it to our editing task, we extend its conditioning to include the reference image y. Following recent works (Xiao et al., 2025; Tan et al., 2025), we concatenate the VAE encoding of the reference image to the noisy target image encoding along the token sequence dimension, similar to text embedding, thereby enabling the model to attend to both text and visual conditions. To stabilize training, in the initial few iterations, we train the model with the objective of simply reconstructing the concatenated reference image. This encourages the network to propagate content 5 Preprint from the reference input, aligning it toward producing realistic images under joint text-image conditioning. After this, we introduce our main training objective as explained in the previous section. The final loss for the generator is a weighted combination of the VLM-based editing loss and DMD loss. The auxiliary network, Aφ, is updated Naux times for every generator, Gθ, update (Yin et al., 2024a). Our pre-trained generative model is a 2B parameter internal DiT-based (Peebles & Xie, 2023) latent space diffusion model. The overall training pipeline is illustrated in Figure 1 and is detailed more formally in Algorithm 1 below. Other training hyperparameters are detailed in Appendix E. Algorithm 1: NP-Edit: our training method Input: Pretrained VLM and text-to-image model Ginit, Dataset X = {(yi, ci, cy i , cx i )}. Output: Few-step image-editing model Gθ 1 Gθ ←copyWeights(Ginit); Aφ ←copyWeights(Ginit) 2 // Warmup with identity loss 3 for step = 1 to Nwarmup do 4 (y, cy) ∼X, ε ∼N(0, I), t ∼(0, 1] 5 x ←y 6 xt ←(1 -t)x + tε 7 vθ ←Gθ(xt, t, y, cy) 8 Lid ←||v -vθ|| where v = ε -x 9 θG ←θG -ηG∇θG Lid. 10 end for 11 12 // Main training loop 13 while train do 14 {(y, c, cx)} ∼X, ε ∼N(0, I), t ∈[0.25, 0.5, 0.75, 1] 15 vθ ←Gθ(ε, t = 1, y, c) 16 x0 θ ←ε -vθ 17 if t < 1 then 18 vθ ←Gθ(xt θ, t, y, c); xt θ ←(1 -t)x0 θ + tε ; ε ∼N(0, I) 19 x0 θ ←xt θ -tvθ 20 end if 21 Compute LVLM // Eqn. 4 22 Compute ∇θDKL // Eqn. 5 23 θG ←θG -ηGλvlm∇θGLVLM -ηGλdmd∇θDKL 24 for local step = 1 to Naux do 25 ε ∼N(0, I), {(y, c, cx)} ∼X 26 x0 θ ←Gθ(ε, y, c) // edited image with backward unroll 27 xt θ ←(1 -t)x0 θ + tε ε ∼N(0, I) t ∈(0, 1) 28 vφ ←Aφ(xt θ, cx) 29 φA ←φA -ηA∇θ||v -vφ|| where v = ε -x0 θ 30 end for 31 end while 5 EXPERIMENTS In this section, we show the results of our method on local image editing as well as more free-form image editing tasks like customization, and compare them with the state-of-the-art baseline methods. 5.1 LOCAL IMAGE-EDITING Benchmark. For evaluation, following prior works, we use the English subset of GEditBenchmark Liu et al. (2025b), which captures real-world user interactions across different edit types. We also show results on the ImgEdit (Ye et al., 2025) benchmark in Appendix A. Evaluation metric. For quantitative evaluation, we follow prior works and use GPT4o-based VIEScore (Ku et al., 2024) metric. It scores each edit on: (1) Semantic Consistency (SC) score, 6 Preprint Qwen-Image-Edit (50 step) Step1x-Edit (4 step) Ours (4 step) FLUX.1Kontext (4 step) Qwen-Image-Edit (4 step) Change the background to high mountains Replace the text CS with VALO and replace GO with RANT Remove the dog getting a haircut Light the candle to enhance the candlelight Adjust the image style to a watercolor effect Change the color of the sheep to purple Make the person in the image wave Replace the computer's casing with bamboo fiber composite Input reference image Figure 2: Qualitative comparison on GEdit-Bench under the few-step sampling setting. For an upper-bound comparison, in the 1st column we show results of the best multi-step sampling method (as measured by the quantitative metrics in Table 1). Our method performs on par or better than baseline methods across different edit types in the few-step setting. We show more samples in the Appendix Figure 10 7 Preprint Table 1: Quantitative evaluation on GEdit-Bench. Our method performs on par or better than baselines under the few-step setting. For multi-step sampling, it still outperforms OmiGen and remains competitive with many of the larger-scale models like BAGEL and FLUX.1 Kontext. All numbers reported in ×10 Method #Param #Step SC Score↑ PQ Score ↑ Overall ↑ Omni-Gen (Xiao et al., 2025) 4B 50 5.52 6.14 4.97 BAGEL (Deng et al., 2025) 7B 50 7.02 6.26 6.14 FLUX.1-Kontext (Labs et al., 2025) 12B 28 6.29 6.65 5.65 Step1X-Edit (Liu et al., 2025b) v1.1 12B 28 7.30 7.37 6.79 Qwen-Image-Edit (Wu et al., 2025) 20B 50 7.94 7.50 7.36 FLUX.1-Kontext (Labs et al., 2025) 12B 4 5.80 5.74 5.04 Step1X-Edit (Liu et al., 2025b) v1.1 12B 4 6.61 6.43 6.01 Qwen-Image-Edit (Wu et al., 2025) 20B 4 6.82 6.21 6.06 Turbo-Edit (Deutch et al., 2024) 1B 4 3.84 6.67 3.84 NP-Edit (Ours) 2B 4 6.16 7.69 6.10 Table 2: Free-form editing task, Customization, evaluation on Dreambooth. We perform better than OminiControl, DSD, and SynCD, which are trained for this task on synthetic datasets. When compared to FLUX.1-Kontext and Qwen-Image-Edit, we still perform comparably in the few-step setting. All numbers are reported in ×10 Method #Param #Step SC Score↑ PQ Score ↑ Overall ↑ DSD (Cai et al., 2025) 12B 28 6.71 7.41 6.78 SynCD (Kumari et al., 2025) 12B 30 7.66 7.83 7.54 FLUX.1-Kontext (Labs et al., 2025) 12B 28 8.19 7.45 7.61 Qwen-Image-Edit (Wu et al., 2025) 20B 50 8.53 7.79 8.02 OminiControl (Tan et al., 2025) 12B 8 6.33 7.82 6.22 DSD (Cai et al., 2025) 12B 8 6.37 6.78 6.29 SynCD (Kumari et al., 2025) 12B 8 7.71 6.84 7.07 FLUX.1-Kontext (Labs et al., 2025) 12B 8 7.99 7.18 7.39 Qwen-Image-Edit (Wu et al., 2025) 20B 8 8.08 7.44 7.62 NP-Edit (Ours) 2B 8 7.68 7.56 7.33 NP-Edit (Ours) 2B 4 7.60 7.28 7.10 evaluating whether the edit instruction was followed, and (2) Perceptual Quality (PQ) score, assessing realism and absence of artifacts. Following VIEScore, for the Overall score, we take the geometric mean between SC and PQ for each image, and average across images in the evaluation benchmark. Baselines. We compare our method with leading baselines, including FLUX.1-Kontext (Labs et al., 2025), Step1X-Edit (Liu et al., 2025b), BAGEL (Deng et al., 2025), OmniGen (Xiao et al., 2025), and Qwen-Image-Edit (Wu et al., 2025). Since no prior work explicitly targets few-step editing, we simply evaluate the above baselines with few-step sampling as well as their original multi-step setting for an upper-bound comparison. We also include Turbo-Edit (Deutch et al., 2024), a state-of-the-art zero-shot few-step method that requires no paired supervision (as zero-shot) and is thus closest to our setup. We use the open-source implementation of all baselines, with further details in the Appendix F. Results Table 1 shows the quantitative result. In the few-step setting, our method achieves the best Overall and Perceptual Quality (PQ) score compared to baseline methods. When compared to their original multi-step sampling, our few-step model still outperforms OmniGen and remains competitive with BAGEL, FLUX.1-Kontext, despite being ×6 smaller parameter-wise. While Step1X-Edit and Qwen-Image-Edit perform better, they are substantially larger models. Figure 2 provides a qualitative comparison. As we can see, our method can successfully follow different editing instructions while being consistent with the input reference image. For instance, in the 6th row (sheep color change), our approach produces a more natural edit compared to baselines. It also performs comparably to the multi-step variant for edits like lighting the candle in 4rth row or making the person wave in 7th row. 5.2 FREE-FORM EDITING: CUSTOMIZATION Benchmark. We use the widely adopted DreamBooth (Ruiz et al., 2023) dataset for evaluation. It consists of 30 objects and 25 prompts per object category. The goal is to generate the same object as shown in the reference image, but in a different context, as mentioned in the text prompt. Baselines. We compare against state-of-the-art unified image-editing baselines such as FLUX.1Kontext (Labs et al., 2025) and Qwen-Image-Edit Wu et al. (2025) as well as OminiControl (Tan 8 Preprint Qwen-Image-Edit (50 step) FLUX.1-Kontext (8 step) Ours (8 step) OminiControl (8 step) Qwen-Image-Edit (8 step) Shoe with a tree and autumn leaves in the background A toy on top of the sidewalk in a crowded street A cat wearing pink sunglasses Input reference image Figure 3: Qualitative comparison on Customization task. Our method can generate the object in new contexts while having better fidelity under few-step sampling. We show more samples in the Appendix Figure 12. et al., 2025), DSD (Cai et al., 2025), and SynCD (Kumari et al., 2025), which are feed-forward models trained specifically for this task on synthetic datasets. Evaluation metric. Here as well, we use the VIEScore evaluation with a similar Semantic Consistency (SC) score to evaluate identity and text alignment and the Perceptual Quality (PQ) score to measure realism, and the geometric mean of the two for the Overall score. We also report CLIPScore (Radford et al., 2021) and DINO (Oquab et al., 2023) similarity-based metrics in the Appendix. Results. As shown in Table 2, our method performs comparably to state-of-the-art methods. In the few-shot sampling setting, all the baseline methods fail to generate realistic samples at 4 steps; therefore, we compare with them at 8 sampling steps. Our method still results in higher fidelity samples as Figure 3 shows, while maintaining object identity with the reference image. Note that our method performs better than OminiControl, which is also a few-step (8) model for this task. 5.3 ABLATION In this section, we perform several ablations to analyze the role of different components of our method, dataset scale, and stronger VLMs. All ablations are done on the local image-editing task. Training objective. We ablate our training objective across four settings: (1) using only distribution matching loss, (2) using only the VLM-based editing loss, (3) removing the identity-preservation loss DQA, and (4) replacing the binary-cross entropy loss (Eqn. 4) with standard cross-entropy over the full vocabulary. Results are shown below in Table 3. Training without the VLM-based loss and relying solely on distribution matching significantly degrades the model's capabilities at following editing instructions. We observe that VLM-based loss is essential for maintaining consistency between input and edited images and for certain editing tasks like Removal (Figure 4 and Appendix Figure 5). However, only training with VLM-based loss leads to unrealistic outputs (Appendix Figure 6), and the training eventually diverges, as evidenced by the low overall score in Table 3, underscoring the need for DMD loss. In addition, using binary cross-entropy loss and having a question to check consistency between input and edited images improves the overall performance. Dataset and VLM scale. To study the role of dataset scale, we vary the number of unique reference images in training. Our final dataset represents the maximum scale feasible under our computational resources. Table 4 shows the performance across different dataset sizes and VLM backbones. We observe consistent gains with larger datasets, suggesting that further scaling of data could yield 9 Preprint Table 3: Training objective ablation. We compare on the GEdit-Bench using the VIEScore metric. Ablating different components of our method leads to a drop in performance, indicating its importance. Method SC Score↑ PQ Score ↑ Overall ↑ Ours 6.16 7.69 6.10 w/ only DMD 4.93 7.51 4.93 w/ only VLM 2.03 3.48 1.93 w/o VLM identity 5.70 7.67 5.76 w/ standard CE loss 5.95 7.64 5.89 Table 4: Dataset and VLM scale and comparison with Reinforcement Learning on the GEditBench. Increasing dataset scale and using stronger VLMs leads to increased performance. Our method also performs better than post-training an SFT model with RL (Liu et al., 2025a). Method SC Score↑ PQ Score ↑ Overall ↑ 1 % Dataset 4.41 7.10 4.66 50 % Dataset 5.41 7.73 5.52 100 % Dataset 6.16 7.69 6.10 InternVL-2B 5.36 7.67 5.45 InternVL-14B 5.88 7.74 5.89 LLava-0.5B 4.57 7.50 4.59 LLava-7B 6.16 7.69 6.10 SFT 3.91 5.70 3.64 SFT + RL 4.55 5.47 4.19 SFT + Ours 6.08 7.83 6.06 Ours Only DMD SFT + Ours Replace the school bus with a truck Remove the umbrella SFT + RL Figure 4: Qualitative analysis of ablation experiments. Our method maintains better input and edited image alignment compared to only training with DMD loss, which also fails on tasks like removal. Compared to fine-tuning an SFT model with RL, our method results in better fidelity while following the edit instruction. Please zoom in for details. additional improvements. Similarly, a larger parameter VLM-backbone leads to better performance, across different VLMs such as InternVL (Chen et al., 2024) and LLava-OneVision (Li et al., 2024), underscoring the promise that our method can improve as more powerful VLMs are developed. Our method vs. Reinforcement Learning (RL). RL is a common post-training strategy for improving pre-trained models without paired supervision and can also leverage VLMs as the reward model, a similar setup to ours. Thus, we benchmark our method against Flow-GRPO (Liu et al., 2025a), a widely used RL method for text-to-image diffusion. However, since RL relies on a reasonable initialization, we need to first train an image-editing model via Supervised Fine-Tuning (SFT) on a paired dataset (Lin et al., 2025). We then fine-tune it with Flow-GRPO using the same LlavaOneVision reward model as in our approach. As shown in Table 4, SFT alone performs poorly, likely due to the limited quality of paired data. Our method surpasses both SFT and SFT+RL, despite requiring no paired supervision. Fine-tuning the model with some paired data before applying our approach can slightly improve the pixel-level consistency between the input reference and output edited image (as shown in Figure 4), although the quantitative numbers are similar. The Appendix provides additional results and more detailed discussion of the method's limitations. 6 DISCUSSION AND LIMITATIONS This paper introduces a new paradigm for enabling image editing capabilities given a pre-trained textto-image diffusion model, without paired before-and-after edit supervision. Our approach combines differentiable feedback from VLMs to ensure editing success with a distribution matching objective to maintain visual realism. This method achieves competitive performance with recent state-of-the-art baselines trained on paired data while enabling efficient few-step generation. Despite these promising results, our method has limitations. Without pixel-level supervision, edits may deviate from the input image in fine-grained details or fail to fully preserve subject identity. We show in Appendix C that adding a perceptual similarity loss (e.g., LPIPS (Zhang et al., 2018)) between input and edited images alleviates this to some extent, though often at the cost of editing quality. Another constraint for our method is the need to keep the VLM in GPU memory, introducing VRAM overhead. We expect ongoing advances in stronger and more efficient VLMs can help address this issue. Overall, our framework scales effectively with large unpaired datasets and highlights a path toward more flexible post-training of generative models for diverse downstream tasks. 10 Preprint Acknowledgment. We thank Gaurav Parmar, Maxwell Jones, and Ruihan Gao for their feedback and helpful discussions. This work was partly done while Nupur Kumari was interning at Adobe Research. The project was partly supported by Adobe Inc., the Packard Fellowship, the IITP grant funded by the Korean Government (MSIT) (No. RS-2024-00457882, National AI Research Lab Project), NSF IIS-2239076, and NSF ISS-2403303. REFERENCES Kingma DP Ba J Adam et al. A method for stochastic optimization. arXiv preprint , 1412(6), 2014. Saba Ahmadi, Rabiul Awal, Ankur Sikarwar, Amirhossein Kazemnejad, Ge Ya Luo, Juan A Rodriguez, Sai Rajeswar, Siva Reddy, Christopher Pal, Benno Krojer, et al. The promise of rl for autoregressive image editing. arXiv preprint , 2025. Omri Avrahami, Ohad Fried, and Dani Lischinski. Blended latent diffusion. ACM transactions on graphics (TOG), 42(4):1-11, 2023. Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. In International Conference on Learning Representations (ICLR), 2024. Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Shengqu Cai, Eric Ryan Chan, Yunzhi Zhang, Leonidas Guibas, Jiajun Wu, and Gordon. Wetzstein. Diffusion self-distillation for zero-shot customized image generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In IEEE International Conference on Computer Vision (ICCV), 2023. George Cazenavette, Avneesh Sud, Thomas Leung, and Ben Usman. Fakeinversion: Learning to detect images from unseen text-to-image models by inverting stable diffusion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Rui, Xuhui Jia, Ming-Wei Chang, and William W Cohen. Subject-driven text-to-image generation via apprenticeship learning. In Conference on Neural Information Processing Systems (NeurIPS), 2023. Xi Chen, Zhifei Zhang, He Zhang, Yuqian Zhou, Soo Ye Kim, Qing Liu, Yijun Li, Jianming Zhang, Nanxuan Zhao, Yilin Wang, et al. Unireal: Universal image generation and editing via learning real-world dynamics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Riccardo Corvi, Davide Cozzolino, Giada Zingarini, Giovanni Poggi, Koki Nagano, and Luisa Verdoliva. On the detection of synthetic images generated by diffusion models. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023. Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, et al. Emerging properties in unified multimodal pretraining. arXiv preprint , 2025. Gilad Deutch, Rinon Gal, Daniel Garibi, Or Patashnik, and Daniel Cohen-Or. Turboedit: Text-based image editing using few-step diffusion models. In SIGGRAPH Asia Conference Proceedings, 2024. 11 Preprint Pierre Fernandez, Guillaume Couairon, Herv ́e J ́egou, Matthijs Douze, and Teddy Furon. The stable signature: Rooting watermarks in latent diffusion models. In IEEE International Conference on Computer Vision (ICCV), 2023. Kevin Frans, Danijar Hafner, Sergey Levine, and Pieter Abbeel. One step diffusion via shortcut models. arXiv preprint , 2024. Tsu-Jui Fu, Wenze Hu, Xianzhi Du, William Yang Wang, Yinfei Yang, and Zhe Gan. Guiding instruction-based image editing via multimodal large language models. In International Conference on Learning Representations (ICLR), 2024. Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. In International Conference on Learning Representations (ICLR), 2023. Yuying Ge, Sijie Zhao, Chen Li, Yixiao Ge, and Ying Shan. Seed-data-edit technical report: A hybrid dataset for instructional image editing. arXiv preprint , 2024. Zhengyang Geng, Ashwini Pokle, and J Zico Kolter. One-step diffusion distillation via deep equilibrium models. In Conference on Neural Information Processing Systems (NeurIPS), 2023. Zhengyang Geng, Ashwini Pokle, William Luo, Justin Lin, and J Zico Kolter. Consistency models made easy. arXiv preprint , 2024. Zhengyang Geng, Mingyang Deng, Xingjian Bai, J Zico Kolter, and Kaiming He. Mean flows for one-step generative modeling. arXiv preprint , 2025. Jonathan Heek, Emiel Hoogeboom, and Tim Salimans. Multistep consistency models. arXiv preprint , 2024. Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Promptto-prompt image editing with cross attention control. In International Conference on Learning Representations (ICLR), 2023. Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel Cohen-Or. Style aligned image generation via shared attention. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Conference on Neural Information Processing Systems (NeurIPS), 2020. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations (ICLR), 2022. Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Mude Hui, Siwei Yang, Bingchen Zhao, Yichun Shi, Heng Wang, Peng Wang, Yuyin Zhou, and Cihang Xie. Hq-edit: A high-quality dataset for instruction-based image editing. arXiv preprint , 2024. Maxwell Jones, Sheng-Yu Wang, Nupur Kumari, David Bau, and Jun-Yan Zhu. Customizing text-toimage models with a single image pair. In SIGGRAPH Asia Conference Proceedings, 2024. Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up gans for text-to-image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Minguk Kang, Richard Zhang, Connelly Barnes, Sylvain Paris, Suha Kwak, Jaesik Park, Eli Shechtman, Jun-Yan Zhu, and Taesung Park. Distilling diffusion models into conditional gans. In European Conference on Computer Vision (ECCV), 2024. 12 Preprint Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. In International Conference on Learning Representations (ICLR), 2024. Yunji Kim, Jiyoung Lee, Jin-Hwa Kim, Jung-Woo Ha, and Jun-Yan Zhu. Dense text-to-image generation with attention modulation. In IEEE International Conference on Computer Vision (ICCV), 2023. Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, and Omer Levy. Picka-pic: An open dataset of user preferences for text-to-image generation. Advances in neural information processing systems, 36:36652-36663, 2023. Benno Krojer, Dheeraj Vattikonda, Luis Lara, Varun Jampani, Eva Portelance, Chris Pal, and Siva Reddy. Learning action and reasoning-centric image editing from videos and simulation. Advances in Neural Information Processing Systems, 37:38035-38078, 2024. Max Ku, Dongfu Jiang, Cong Wei, Xiang Yue, and Wenhu Chen. Viescore: Towards explainable metrics for conditional image synthesis evaluation. In Association for Computational Linguistics (ACL), 2024. Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Nupur Kumari, Xi Yin, Jun-Yan Zhu, Ishan Misra, and Samaneh Azadi. Generating multi-image synthetic data for text-to-image customization. In IEEE International Conference on Computer Vision (ICCV), 2025. Black Forest Labs, Stephen Batifol, Andreas Blattmann, Frederic Boesel, Saksham Consul, Cyril Diagne, Tim Dockhorn, Jack English, Zion English, Patrick Esser, et al. Flux. 1 kontext: Flow matching for in-context image generation and editing in latent space. arXiv preprint , 2025. Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint , 2024. Bin Lin, Zongjian Li, Xinhua Cheng, Yuwei Niu, Yang Ye, Xianyi He, Shenghai Yuan, Wangbo Yu, Shaodong Wang, Yunyang Ge, et al. Uniworld: High-resolution semantic encoders for unified visual understanding and generation. arXiv preprint , 2025. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In International Conference on Learning Representations (ICLR), 2023. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In Conference on Neural Information Processing Systems (NeurIPS), 2023a. Jie Liu, Gongye Liu, Jiajun Liang, Yangguang Li, Jiaheng Liu, Xintao Wang, Pengfei Wan, Di Zhang, and Wanli Ouyang. Flow-grpo: Training flow matching models via online rl. In Conference on Neural Information Processing Systems (NeurIPS), 2025a. Shiyu Liu, Yucheng Han, Peng Xing, Fukun Yin, Rui Wang, Wei Cheng, Jiaqi Liao, Yingming Wang, Honghao Fu, Chunrui Han, et al. Step1x-edit: A practical framework for general image editing. arXiv preprint , 2025b. Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In International Conference on Learning Representations (ICLR), 2023b. Cheng Lu and Yang Song. Simplifying, stabilizing and scaling continuous-time consistency models. In International Conference on Learning Representations (ICLR), 2025. 13 Preprint Grace Luo, Jonathan Granskog, Aleksander Holynski, and Trevor Darrell. Dual-process image generation. In IEEE International Conference on Computer Vision (ICCV), 2025. Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, and Zhihua Zhang. Diffinstruct: A universal approach for transferring knowledge from pre-trained diffusion models. Advances in Neural Information Processing Systems, 36:76525-76546, 2023. Nadav Magar, Amir Hertz, Eric Tabellion, Yael Pritch, Alex Rav-Acha, Ariel Shamir, and Yedid Hoshen. Lightlab: Controlling light sources in images with diffusion models. In Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers, pp. 1-11, 2025. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations (ICLR), 2022. Maxime Oquab, Timoth ́ee Darcet, Th ́eo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. In Transactions on Machine Learning Research (TMLR), 2023. Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, 2023. William Peebles and Saining Xie. Scalable diffusion models with transformers. In IEEE International Conference on Computer Vision (ICCV), 2023. Qwen-Team. Qwen2 technical report. arXiv preprint , 2024. Qwen-Team. Qwen2.5-vl, January 2025. URL https://qwenlm.github.io/blog/qwen2. 5-vl/. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical textconditional image generation with clip latents. arXiv preprint , 2022. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations (ICLR), 2022. Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, and Robin Rombach. Fast high-resolution image synthesis with latent adversarial diffusion distillation. In SIGGRAPH Asia Conference Proceedings, 2024a. Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion distillation. In European Conference on Computer Vision (ECCV), 2024b. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint , 2021. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint , 2024. Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, et al. Styledrop: Text-to-image generation in any style. In Conference on Neural Information Processing Systems (NeurIPS), 2023. 14 Preprint Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations (ICLR), 2021. Yang Song and Prafulla Dhariwal. Improved techniques for training consistency models. In International Conference on Learning Representations (ICLR), 2024. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In Proceedings of the 40th International Conference on Machine Learning, 2023a. Yizhi Song, Zhifei Zhang, Zhe Lin, Scott Cohen, Brian Price, Jianming Zhang, Soo Ye Kim, and Daniel Aliaga. Objectstitch: Object compositing with diffusion model. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023b. Quan Sun, Yufeng Cui, Xiaosong Zhang, Fan Zhang, Qiying Yu, Zhengxiong Luo, Yueze Wang, Yongming Rao, Jingjing Liu, Tiejun Huang, and Xinlong Wang. Generative multimodal models are in-context learners. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Peter Sushko, Ayana Bharadwaj, Zhi Yang Lim, Vasily Ilin, Ben Caffee, Dongping Chen, Mohammadreza Salehi, Cheng-Yu Hsieh, and Ranjay Krishna. Realedit: Reddit edits as a large-scale empirical dataset for image transformations. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 13403-13413, 2025. Zhenxiong Tan, Songhua Liu, Xingyi Yang, Qiaochu Xue, and Xinchao Wang. Ominicontrol: Minimal and universal control for diffusion transformer. In IEEE International Conference on Computer Vision (ICCV), 2025. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros. Cnn-generated images are surprisingly easy to spot... for now. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Daniel Winter, Matan Cohen, Shlomi Fruchter, Yael Pritch, Alex Rav-Acha, and Yedid Hoshen. Objectdrop: Bootstrapping counterfactuals for photorealistic object removal and insertion. In European Conference on Computer Vision (ECCV), 2024. Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-image technical report, 2025. URL https://arxiv.org/abs/2508. 02324. Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, and Zheng Liu. Omnigen: Unified image generation. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 13294-13304, 2025. Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36:15903-15935, 2023. Yanwu Xu, Yang Zhao, Zhisheng Xiao, and Tingbo Hou. Ufogen: You forward once large scale text-to-image generation via diffusion gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024. Wilson Yan, Andrew Brown, Pieter Abbeel, Rohit Girdhar, and Samaneh Azadi. Motion-conditioned image animation for video editing. arXiv preprint , 2023. 15 Preprint Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8941-8951, 2024a. Ling Yang, Bohan Zeng, Jiaming Liu, Hong Li, Minghao Xu, Wentao Zhang, and Shuicheng Yan. Editworld: Simulating world dynamics for instruction-following image editing. arXiv preprint , 2024b. Ling Yang, Zixiang Zhang, Zhilong Zhang, Xingchao Liu, Minkai Xu, Wentao Zhang, Chenlin Meng, Stefano Ermon, and Bin Cui. Consistency flow matching: Defining straight flows with velocity consistency. arXiv preprint , 2024c. Yang Ye, Xianyi He, Zongjian Li, Bin Lin, Shenghai Yuan, Zhiyuan Yan, Bohan Hou, and Li Yuan. Imgedit: A unified image editing dataset and benchmark. arXiv preprint , 2025. Tianwei Yin, Micha ̈el Gharbi, Taesung Park, Richard Zhang, Eli Shechtman, Fredo Durand, and Bill Freeman. Improved distribution matching distillation for fast image synthesis. In Conference on Neural Information Processing Systems (NeurIPS), 2024a. Tianwei Yin, Micha ̈el Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024b. Xin Yu, Tianyu Wang, Soo Ye Kim, Paul Guerrero, Xi Chen, Qing Liu, Zhe Lin, and Xiaojuan Qi. Objectmover: Generative object movement with video prior. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2025. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In IEEE International Conference on Computer Vision (ICCV), 2023. Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su. Magicbrush: A manually annotated dataset for instruction-guided image editing. Advances in Neural Information Processing Systems, 36:31428-31449, 2023. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint , 2024. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. Haozhe Zhao, Xiaojian Shawn Ma, Liang Chen, Shuzheng Si, Rujie Wu, Kaikai An, Peiyu Yu, Minjia Zhang, Qing Li, and Baobao Chang. Ultraedit: Instruction-based fine-grained image editing at scale. Advances in Neural Information Processing Systems, 37:3058-3093, 2024. Linqi Zhou, Stefano Ermon, and Jiaming Song. Inductive moment matching. arXiv preprint , 2025. Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, and Hai Huang. Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation. In Forty-first International Conference on Machine Learning, 2024. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017. 16 Preprint Appendix A Additional Comparison with Baseline Methods 17 B Ablation Study 17 C Limitation 19 D Dataset Construction Details 20 E Training Implementation Details 23 E.1 Local-image editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 E.2 Free-form editing (Customization) . . . . . . . . . . . . . . . . . . . . . . . . . . 24 F Other Baseline Details 25 G Societal Impact 26 A ADDITIONAL COMPARISON WITH BASELINE METHODS Local image-editing. Here, we compare on another commonly adopted image-editing benchmark, ImgEdit (Ye et al., 2025) (Basic), which covers nine local editing types across diverse semantic categories with a total of 734 samples. Quantitative results under their proposed GPT4o-based evaluation protocol are reported in Table 5, with VIEScore (Ku et al., 2024) results in Table 6. Consistent with the trend observed on GEdit-Bench, our method has better or on-par performance in the few-step setting and remains comparable to many of the multi-step baselines as well. Qualitative comparisons are shown in Figure 11, with additional examples on GEdit-Bench in Figure 10. Customization or free-form editing. Here, we report additional metrics commonly used for evaluation. Specifically, CLIPScore (Radford et al., 2021) and TIFA (Hu et al., 2023) to measure text alignment; and similarity in DINOv2 (Oquab et al., 2023) feature space after background masking to measure identity alignment, denoted as MDINOv2-I. We also report an overall Geometric score (Yan et al., 2023) by taking the geometric mean of TIFA and MDINOv2-I. The results are shown in Table 7. Consistent with the VIEScore evaluation reported in the main paper, our method performs comparably with other baselines in the few-step sampling regime. We show more sample comparisons in Figure 12. Table 5: ImgEdit-Bench comparison using their proposed GPT-4o based evaluation protocal and few-step sampling setting. Our method outperforms baseline methods on the Avg of all edit types. Method #Param Action↑ Bg ↑ Style ↑ Adjust ↑ Replace ↑ Add ↑ Extract ↑ Remove ↑ Compose ↑ Avg ↑ Qwen-Image-Edit 20B 3.14 2.83 3.70 3.25 3.00 3.52 1.96 2.71 3.06 3.02 Flux.1-Kontext 12B 3.51 2.97 3.89 3.04 3.15 3.31 1.82 2.37 2.46 2.95 Step1X Edit 12B 3.66 2.60 3.46 3.44 2.50 3.25 1.77 2.41 2.38 2.83 NP-Edit (Ours) 2B 4.44 4.13 4.14 3.94 3.57 4.52 2.01 2.71 3.18 3.63 B ABLATION STUDY Training objective. Here, we provide a more detailed analysis by examining performance across different editing sub-types. As a reminder, we ablated our training objective under four settings: (1) using only the distribution matching loss, (2) using only the VLM-based editing loss, (3) removing the identity-preservation question from DQA, and (4) replacing the binary-cross entropy loss as explained in Eqn. 4 with standard cross-entropy over full vocabulary length. As shown in Figure 5, training with only the DMD loss yields comparable performance on certain sub-edit types such as Color change, since DMD matches the text-conditioned score between the fine-tuned and pre-trained models, thus improving overall text alignment. However, it fails on tasks like Removal, underscoring 17 Preprint Table 6: VIEScore evaluation on ImgEdit-Bench. Our method performs on par or better than baselines under the few-step setting. For multi-step sampling, it still outperforms OmniGen and remains competitive with many of the larger-scale models like BAGEL and FLUX.1 Kontext. All numbers reported in ×10 Method #Param #Step SC Score↑ PQ Score ↑ Overall ↑ BAGEL 7B 50 7.55 6.22 6.47 FLUX.1-Kontext 12B 28 6.94 6.73 6.19 Step-1X Edit v1.1 12B 28 7.26 7.30 6.72 QwenImage Edit 20B 50 8.30 7.77 7.85 QwenImage Edit 20B 4 6.23 5.14 5.46 FLUX.1-Kontext 12B 4 6.08 5.22 5.14 Step-1X Edit v1.1 12B 4 6.00 5.37 5.14 NP-Edit (Ours) 2B 4 6.72 7.78 6.62 Table 7: Quantitative evaluation of free-form editing task, Customization, on DreamBooth dataset. All numbers reported in ×10 Method #Param #Step MDINO Score↑ CLIP Score ↑ TIFA ↑ Geometric Score ↑ DSD 12B 28 6.55 3.08 8.71 7.32 SynCD 12B 30 7.34 3.09 8.53 7.71 FLUX.1-Kontext 12B 28 7.72 3.07 8.88 8.14 Qwen-Image-Edit 20B 50 7.47 3.14 9.37 8.22 OminiControl 12B 8 6.16 3.02 8.12 6.64 DSD 12B 8 5.88 3.15 8.93 6.99 SynCD 12B 8 7.11 3.16 9.11 7.79 FLUX.1-Kontext 12B 8 7.50 3.08 8.83 7.98 Qwen-Image-Edit 20B 8 7.29 3.08 8.96 7.91 NP-Edit (Ours) 2B 8 6.82 2.97 8.73 7.54 NP-Edit (Ours) 2B 4 7.03 3.04 8.89 7.72 Replace Add Human Color Background Tone Material Motion Text Remove Style Ours Only DMD Loss w/ CE Loss w/o VLM Identity Figure 5: Performance for each sub edit-type. Training with only DMD loss fails to achieve certain tasks like removal and style changes. In addition, using binary cross-entropy loss and VLM identity-based questions helps improve the overall performance. Replace the blue sky with a starry night sky Replace the purple flowers with orange flowers Figure 6: Training with only VLM-editing loss leads to lower fidelity samples with the model only maximizing the edit success probability. Current generalpurpose VLMs are often not good at subjective tasks like evaluating image fidelity, highlighting the requirement of distribution matching loss in our framework. the importance of VLM-based editing loss and its generalizability across diverse editing instructions. In addition, VLM-based loss also helps in maintaining better consistency between input and edited images (first row of Figure 4 in the main paper). However, when training with only the VLM-based editing loss, there's a gradual degradation in image quality, as Figure 6 shows, highlighting the complementary role of distribution matching losses such as DMD. 18 Preprint t=4 t=8 t=28 No Yes Yes Question: Are there sunglasses in the image? VLM response No Yes Yes Question: Is there a table clock in the image? VLM response Figure 7: Unreliable VLM response on intermediate outputs of a multi-step diffusion model. Here we show a 28-step diffusion process, denoising predictions from early steps (e.g., t = 4), which correspond to high noise levels, are blurry and semantically ambiguous. This can lead to unreliable responses from the VLM, as shown here. Therefore, we adopt a few-step diffusion model that always generates sharp images. Ours (4-step) Single step Edit the background to be Shanghai's bund Add a cat sitting in the basket Figure 8: Our (4-step) vs single-step editing model. We compare our final 4-step model with a single-step model, both trained via our approach. Editing an image in a single step is still challenging and leads to lower-fidelity outputs. Ours w/o VLM Identity w/ LPIPS Turn the rice into a hamburger and draw an avatar eating a burger Remove the bangs Figure 9: Limitation. Our method can struggle to maintain exact pixel consistency between the input and edited image. Having LPIPS (Zhang et al., 2018) loss between the input and output edited image can resolve it to an extent (top row) but at the cost of reduced editing success (bottom row). Sampling steps. For our method, we chose to train a few-step image-editing model instead of a multi-step diffusion model, as multi-step diffusion has a noisy or blurry estimate of the final output in the early stages of diffusion. This can make it difficult to get a reliable response from the VLM, as shown in Figure 7. On the other hand, predicting an edited image in one step is still challenging, as mentioned in the main paper, and shown here in Figure 8. Thus few-step provides a nice balance between the two extremes of single and multi-step sampling for our purposes. C LIMITATION A limitation of our framework is the lack of pixel-wise supervision to preserve regions that should remain unchanged under a given edit instruction. Consequently, edited images can deviate from the input image in details or spatial alignment, as shown in Figure 9, 1st column. While our VLM-based editing loss includes a question to check consistency between the input and edited images, it does not enforce pixel-level alignment. Empirically, we find that current VLMs struggle to detect subtle changes. To mitigate this, we experiment with an additional LPIPS (Zhang et al., 2018) loss between the input and output edited images. As shown in the last column of Figure 9, this improves consistency but also negatively impacts editing quality, particularly for edit-types like Removal. Future work could explore specialized VLMs that are more sensitive to fine-grained, pixel-level differences. 19 Preprint D DATASET CONSTRUCTION DETAILS Each tuple in our dataset X = {(yi, ci, cy i , cx i )}N i=1 consists of a real reference-image, y, corresponding edit instruction, c, and text prompt corresponding to the reference and edited image, cy and cx, respectively. We use a text-image dataset corpus to select reference images. Given a reference image, we prompt Qwen-2.5-32B VLM to suggest different possible editing instructions. The system and user prompt for it are as follows: Role: system, Content: You are a helpful assistant and an expert in image editing. Role: user, Content: Task: As a researcher in image editing, your task is to generate simple editing instructions based on the given image. The edit types you can use include: 1) local color change, 2) local texture, 3) adjust (shape change), 4) add, 5) remove, 6) replace, 7) bg, 8) style, 9) action, and 10) text manipulation **Important**: Ensure that you create a balanced distribution of these edit types when generating the instructions. Each example should utilize a different edit type, and the edit types should be evenly distributed across all examples. When using the "add" edit type, DO NOT USE vague placements like 'near', 'under', or 'beside', instead, specify the exact location where the object should be placed. For example, instead of "add a castle near the trees" use "add a castle in the clearing between the trees". Ensure that each instruction is straightforward and points to a single, clear edit change. Avoid complex or multi-step instructions. **Avoid Redundancy**: Make sure to introduce diversity in the edit instructions. Given the input image, could you generate simple edit instructions for different possible edit types by following the "format" of examples below and based on what you have seen in the image? Here are some examples showing the use of various edit types: Good example 1: {color change example} Good example 2: {texture change example} Good example 3: {adjust shape example} Good example 4: {add example} Good example 5: {remove example} Good example 6: {replace example} Good example 7: {bg example} Good example 8: {style example } Good example 9: {action example} Good example 10: {text manipulation example} Bad Examples: the edit instructions are hard/impossible to perform well, or mention vague terms that make the editing model struggle to perform well, and you should not follow. Bad example 1: - Instruction: make this dog look like it's ready for a formal evening out? - Type: add - Reasoning: This instruction is bad because it does not mention the exact changes that are needed to make the dog look like it's ready for a formal evening out. Bad example 2: - Instruction: remove the balloon [given an image of only balloons on a white background] - Type: remove - Reasoning: This instruction is bad as it removes the only object in the image. 20 Preprint **Important Considerations**: 1. Avoid repetition of specific phrases: Do not reuse examples or themes from the above examples. Create entirely new and diverse themes and scenarios. 2. Logical Flow: Ensure that each instruction is logical and makes sense given the image. 3. Specificity in Insertions: When adding objects, use precise placement (e.g., "in the sky" or "on the lake"). Avoid vague terms like "next to", "around", or "near". 4. Balanced use of edit types: Use a variety of edit types such as [insertion], [replace], [local texture], [shape change], [style], [remove], [local color change], and [bg]. Ensure an even distribution of these edit types across your examples. 5. Diverse scenarios: Introduce variety in the scenarios, such as futuristic, historical, magical, surreal, or natural settings. Avoid overusing common tropes. 6. DO NOT suggest instructions that change a very small/minute part of the image. Could you now generate 4 examples of **new, creative, and contextually relevant** edit instructions by following the format above? Avoid using the specific phrases, themes, or scenarios from the examples provided above. **Each example must use a different edit type** from the ones listed above. Also, make sure to use each edit type equally across all generated examples. Finally, you should make the edit instructions as simple as possible so that the downstream editing model is able to work well. In the above user prompt, for the good examples, we randomly select an edit instruction for each editing type out of a fixed set of manually defined edit instructions. Given edit instructions for each image, we again prompt the VLM to check the validity of the edit instruction and, if valid, to suggest a possible caption for the edited image. The system and user prompt for this is: Role: system, Content: You are a helpful assistant and an expert in image editing. Role: user, Content: Task: As a researcher in image editing, given the input image, edit type, and the edit instruction, your task is to check if a given edit instruction is valid and can be applied to the image. If it is valid, generate a descriptive caption for what the image would look like after applying the edit instruction. If it is not valid, return "invalid" and explain why it is not valid, and output "NA" for the edited image caption. An edit instruction is invalid if it: 1. mentions to modify/remove/replace an object that is NOT PRESENT in the image. 2. is TOO HARD to make editing model to understand and perform well, e.g., "remove any visible accessories." 3. DOES NOT change the image in any meaningful way, e.g., given the image of a forest, "change the background to a dense forest." For the "remove" edit type: - DO NOT mention the object that is removed during the edit in the edited image caption. For example, given an image of a cat in a living room on a sofa with the edit type "remove" and edit instruction: "remove the cat" Bad Example: A cat is removed from the sofa in a living room. Good Example: A living room with a sofa. Given the edit instruction and the original caption: Edit type: {edit type} Edit instruction: {simple edit instruction} Output format: Validity: ... Reasoning: ... Edited image Caption: ... 21 Preprint Please provide a concise but complete caption describing the edited image. Focus on the changes that would be made according to the edit instruction. Here are some more examples: Example 1: - Edit type: bg - Edit instruction: change the background to a sunset view - Validity: valid - Reasoning: The edit instruction is valid because it adjusts the current blue sky to a sunset view, which is a meaningful change. - Edited image caption: A park with a sunset view. People are walking around in the park. Example 2: - Edit type: remove - Edit instruction: remove the wine glass - Validity: invalid - Reasoning: The edit instruction is invalid because it mentions removing a wine glass that is not present in the image. - Edited image caption: NA **Important Considerations**: 1. DO NOT use instruction words like replaced, added, removed, modified, etc. in the caption. 2. Keep the caption general to explain any possible images resulting from the edit instruction. Only output the validity, reasoning, and edited image caption. Do not include any other text or explanations. After filtering the list of generated editing instructions using the above procedure, our final dataset consists of approximately 3M unique reference images with corresponding editing instructions spanning the 10 edit sub-types. Within the constraints of our available computational resources, this represents the largest dataset we were able to construct. For the customization task, we first instruct the VLM, to identify if the image has a prominent object in the center. We provide an in-context sample image as well to the model. The exact system and user prompt for this is: Role: system, Content: You are a helpful assistant and an expert in image personalization/customization. Role: user, Content: Task: You are assisting in a research project on image personalization. Your goal is to evaluate whether the SECOND image contains a **single, uniquely identifiable object** prominently positioned near the center of the frame. - The FIRST image (image ̇path1) is an example of a valid case. - The specific object category in the second image can be different - focus only on **object uniqueness** and **image composition**. Good examples include object categories that can be personalized, have unique texture, and are not general objects: - Backpack, purse, toy, cat, dog, cup, bowl, water bottle, wearables, plushies, bike, car, clocks, etc. Bad examples include object categories that are general objects, and different instances of the category can not be distinguished: - Tree, building, door, flowers, food, vegetables, fruits, natural scenes, roads, etc. **Important Considerations:** 1. The object should be clearly recognizable and **visually distinct** from the background. 22 Preprint 2. The object should be **near the center** of the image. 3. The **entire object** should be visible - it should NOT be a tight or zoomed-in crop. 4. The background can be natural but should not be overly cluttered or visually distracting. 5. The image should feature a **single primary object**, not multiple equally prominent objects. Could you now judge the SECOND image and only provide the output, reasoning, and object name, in the following format: Output: True/False Reasoning: Brief explanation Object Name: The name of the object (e.g., "backpack", "cat", "toy"). If the VLM response predicts a valid image, we then query it again to suggest a new background context for the object category as follows: Role: system, Content: You are a helpful assistant and an expert in image personalization/customization. Role: user, Content: Given an image of an object category, you have to suggest three DIVERSE background captions for the object. Provide a detailed description of the background scene. Only suggest plausible backgrounds. DO NOT add the object name in the caption. DO NOT use emotional words in the caption. Be concise and factual but not too short. DO NOT mention the object name in the output captions. If the object is not a thing, but a scene, then output None. Example background captions for "White plastic bottle" are: 1. near the edge of a marbled kitchen counter, surrounded by a cutting board with chopped vegetables, a salt shaker, and a stainless steel sink in the background. 2. rests on a tiled bathroom shelf, accompanied by a toothbrush holder, a mirror with foggy edges, and a shower curtain partially drawn open. Example background captions for "a blue truck" are: 1. parked beside a graffiti-covered brick wall under a cloudy sky, with city skyscrapers rising in the background. 2. resting in a grassy field surrounded by wildflowers, with distant mountains and a golden sunset in the background. Object: {object category name} Output: 1. 2. 3. E TRAINING IMPLEMENTATION DETAILS E.1 LOCAL-IMAGE EDITING Training hyperparameters. We train on a batch-size of 32 using Adam (Adam et al., 2014) optimizer with a learning rate of 2 × 10-6, β1 as 0, and β2 as 0.9. We train for a total of 10K iteratuions with auxiliary network, Aφ, being updated 10 times for every generator, Gθ, update, following similar strategy adopted in DMD2 (Yin et al., 2024a). We train with the identity loss (Section 4.3) for 250 iterations. For faster convergence, the first 4K training iterations are trained with a single step prediction (t = 1 in Line 3 of Algorithm 1), and then we start the 2-step unrolling of the diffusion trajectory. The final loss is a weighted combination of VLM-based editing loss and distribution matching loss with λVLM = 0.01 and λDMD = 0.5. During training, we also add a "do nothing" editing task with L2 loss between the input and edited image as regularization with a 1% 23 Preprint probability. This helps the model learn to maintain consistency between input and edited images. During training, we sample the editing instruction corresponding to each subtype uniformly, except removal, which is sampled with 25% probability. This is because, empirically, we observe that removal is more difficult than other edit-types like Color change. Template questions for VLM-based editing loss. As mentioned in the main paper, we evaluate VLM-based loss on two questions per edit type. Specifically for any edit type except removal, we use the following template: Role: user, Content: You are a professional digital artist and an expert image editor. You will be provided with two images. The first being the original real image, and the second being an edited version of the first. The objective is to evaluate if the editing instruction has been executed in the second image. Editing instruction: {edit instruction} Answer with a Yes or No. Note that sometimes the two images might look identical due to the failure of image editing. Answer No in that case. Role: user, Content: You are a professional digital artist and an expert image editor. You will be provided with two images. Answer with a Yes or No if the second image is exactly the same as the first image. IGNORE the changes in the second image because of the edit: {edit instruction}. Everything else should be the same. For the removal edit-type, we change the first question to explicitly ask about the presence of the target object to be removed, with the ground truth answer in this case being No. We find it to be more effective than a generic template. Role: user, Content: You are a professional digital artist and an expert image captioner. You will be provided with an image. Answer with a Yes or No if the image has {object name}. E.2 FREE-FORM EDITING (CUSTOMIZATION) Training hyperparameters. We reduce the warmup iterations for which we train with the identity loss to 100 in this case, since customization often requires more drastic changes in the output image compared to the input reference image. Further, we increase λDMD = 2 instead of 0.5 as in the case of local image-editing. The rest of the hyperparameters remain the same. Both during training and inference, the input text prompt to the few-step generator, Gθ, is in the following template: Generate the main object shown in the first image in a different setting and pose: { background scene description}. We train the 4-step model for 10K iterations. For the 8-step model, we fine-tune for 5K additional training steps starting from the 4-step model. Template questions for VLM-based editing loss. Here, we modify the questions to instead evaluate if the background context and pose are different in the generated image, i.e., editing success, and if the object identity is similar, i.e., image alignment and consistency between the input reference and edited image. The exact questions are as follows: Role: user, Content: You are a professional digital artist and an expert in image editing. You will be provided with two images. 24 Preprint Answer with a Yes or No if the {object ̇name} in the second image is in a different pose and location than in the first image. Note that sometimes the second image might not have the same object because of the failure of image editing. Answer No in that case. Role: user, Content: You are a professional digital artist and an expert in image editing. You will be provided with two images. Answer with a Yes or No if the {object ̇name} in the second image is the exact same identity, with similar color, shape, and texture as in the first image. Note that sometimes the second image might not have the same object because of the failure of image editing. Answer No in that case. F OTHER BASELINE DETAILS Flow-GRPO (Liu et al., 2025a). We follow the open-source implementation of Flow-GRPO and train with the same computational budget as our method, i.e., across 4 A100 GPUs and 2.5 days of training. The final model is fine-tuned from a pre-trained image-editing model for 5K iterations. During training, we collect 16 images per-prompt with 12 denoising steps (28 during inference) for computing the mean and standard deviation in GRPO (Shao et al., 2024). Following their official implementation, we train with LoRA (Hu et al., 2022) of rank 32, α = 64, learning rate 1 × 10-4, and use the VLM to score edits on a scale of 0 to 9, which is normalized between 0-1 to get the final reward. The exact prompt used to query the VLM is derived from VIEScore (Ku et al., 2024) and is shown below. Role: system, Content: You are a helpful assistant and an expert in image editing. Role: user, Content: You are a professional digital artist. You will have to evaluate the effectiveness of AI-generated edited image(s) based on given rules. You will have to give your output in this way (Keep your reasoning VERY CONCISE and SHORT): score : ..., reasoning : ... RULES: Two images will be provided: The first being the original real image and the second being an edited version of the first. The objective is to evaluate how successfully the editing instruction has been executed in the second image. Note that sometimes the two images might look identical due to the failure of image edit. From scale 0 to 9: A score from 0 to 9 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instruction at all. 9 indicates that the scene in the edited image follows the editing instruction text perfectly.) Editing instruction: {edit instruction} Supervised Fine-Tuning. We train with the standard velocity prediction flow-objective for 30K iterations with a batch-size of 32 and learning rate 2 × 10-6 with a linear warmup of 2K iterations. To enable classifier-free guidance, we drop the image and text conditions 10% of the time. Sampling parameters for local image-editing baselines. We follow the open-source implementation to sample images from all the baseline models for the benchmark evaluations. The turbo25 Preprint edit (Deutch et al., 2024) baseline requires a caption corresponding to the edited image as well, and we use Qwen-2.5-32B-VLM to generate these captions for GEdit-Bench images. Sampling parameters for customization baselines. Here as well, we follow the open-source implementation to sample images from all the baseline models for the benchmark evaluations. In the case of DSD (Cai et al., 2025), it employs Gemini-1.5 to convert the input user-prompt into a detailed prompt. However, we skipped this step for a fair evaluation with other methods, which do not use any prompt rewriting tools. In the case of SynCD (Kumari et al., 2025), though it supports multiple reference images as input, we evaluated it with a single reference image, to keep the setup similar to other baseline methods and Ours. For sampling images with OminiControl (Tan et al., 2025) and DSD (Cai et al., 2025), we follow their recommended prompt setting and replace the category name with "this item". G SOCIETAL IMPACT Our work introduces a training framework for fine-tuning text-to-image models into a few-step image-editing model without using paired supervision. By enabling few-step sampling, our method improves inference efficiency and reduces computational cost. Nonetheless, the broader risks of generative models, such as creating deepfakes and misleading content, also apply to our approach. Possible ways to mitigate this are watermarking generative content Fernandez et al. (2023) and reliable detection of generated images Wang et al. (2020); Corvi et al. (2023); Cazenavette et al. (2024) 26 Preprint Qwen-Image-Edit (50 step) Step1x-Edit (4 step) Ours (4 step) FLUX.1Kontext (4 step) Qwen-Image-Edit (4 step) Add a pair of sunglasses in a cool style Convert this image into an anime style Change the color of the goat to yellow Make the child pout Replace the bench's material with marble Adjust the background to a beach Replace the text WOOD with LAND Remove the red dumbbell Input reference image Figure 10: Qualitative comparison on GEdit-Bench. We show results of our and baseline image-editing methods under the few-step sampling setting. For comparison, we also show the results of the best method with multi-step sampling, as measured by the quantitative metrics (Table 1), in the 1st column. Our method performs on par or better than baseline methods across different edit types in the few-step setting. 27 Preprint Qwen-Image-Edit (50 step) Step1x-Edit (4 step) Ours (4 step) FLUX.1Kontext (4 step) Qwen-Image-Edit (4 step) Change the background from a clear blue sky with bare branches to a sunset sky over a lush forest Add a coffee cup on the table in the foreground Change the wooden surface background to a marble countertop Remove the horse in the foreground Replace the sheep in the image with a deer Transfer the image into a dramatic charcoal-drawing style Raise the person's right hand Input reference image Figure 11: Qualitative comparison on ImgEdit-Bench. We show results of our and baseline image-editing methods under the few-step sampling setting. For comparison, we also show the results of the best method with multi-step sampling, as measured by the quantitative metrics (Table 6), in the 1st column. Our method performs on par or better than baseline methods across different edit types in the few-step setting. 28 Preprint Qwen-Image-Edit (50 step) FLUX.1Kontext (8 step) Ours (8 step) OminiControl (8 step) Qwen-Image-Edit (8 step) A backpack on top of a purple rug in a forest A stuffed animal with a mountain in the background A toy with a city in the background A dog with a blue house in the background A bowl on the beach A dog in a purple wizard outfit A backpack with Eiffel tower in the background Input reference image Figure 12: Qualitative comparison on DreamBooth. We show results of our and baseline methods under the few-step sampling setting. For comparison, we also show the results of the best method with multi-step sampling, as measured by the quantitative metrics in the first column. Our method performs comparably with baseline methods on identity alignment while having better image fidelity across different concepts in the few-step setting. 29
2510.14976
Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation Shaowei Liu∗ Chuan Guo2† Bing Zhou2† Jian Wang2† 1University of Illinois Urbana-Champaign 2Snap Inc. https://stevenlsw.github.io/ponimator/ Input Generated Interaction Animation (left→right: time steps) Two-person image Estimated interactive pose Single-person image Generated interactive pose Single-person image Generated interactive pose + ”Lift the other onto his back” Figure 1. Ponimator enables versatile interaction animation applications anchored on interactive poses. For two-person images (top), Ponimator generates contextual dynamics from estimated interactive poses (green box). For single-person images (middle) with optional text prompts (bottom), Ponimator first generates partner interactive poses (magenta box) and then fulfill the interaction dynamics. Abstract Close-proximity human-human interactive poses con- vey rich contextual information about interaction dynam- ics. Given such poses, humans can intuitively infer the context and anticipate possible past and future dynamics, drawing on strong priors of human behavior. Inspired by this observation, we propose Ponimator, a simple frame- work anchored on proximal interactive poses for versatile interaction animation. Our training data consists of close- contact two-person poses and their surrounding temporal context from motion-capture interaction datasets. Leverag- ing interactive pose priors, Ponimator employs two con- ditional diffusion models: (1) a pose animator that uses *Work done at an internship at Snap Research NYC, Snap Inc. †Co-corresponding author the temporal prior to generate dynamic motion sequences from interactive poses, and (2) a pose generator that ap- plies the spatial prior to synthesize interactive poses from a single pose, text, or both when interactive poses are un- available. Collectively, Ponimator supports diverse tasks, including image-based interaction animation, reaction an- imation, and text-to-interaction synthesis, facilitating the transfer of interaction knowledge from high-quality mo- cap data to open-world scenarios. Empirical experiments across diverse datasets and applications demonstrate the universality of the pose prior and the effectiveness and ro- bustness of our framework. Codes and video visualization can be found at https://stevenlsw.github.io/ponimator/ arXiv:2510.14976v1 [cs.CV] 16 Oct 2025 1. Introduction The interplay between humans plays a crucial role in our daily lives. These interactions convey key social signals that reflect relationships and intentions. For example, a simple hug typically expresses closeness, a handshake serves as a formal greeting, while combat indicates opposing stances. A key observation is that interactive poses in close prox- imity (e.g., handshake) carry rich prior information about interaction dynamics. Specifically, a pair of such poses reveals contextual cues about spatial relationships, con- straints, and intent, often suggesting probable ranges of past and future motions. These interactive poses can act as a bridge for modeling interaction dynamics with reduced complexity while inherently preserving prior knowledge of close interactions. In this paper, we present Ponimator, a novel framework that leverages the dynamics priors embedded in interactive poses through a generative model, demonstrating its ver- satility across various interaction animation tasks. We de- velop this interaction prior using a combination of two high- quality human-human interaction datasets: Inter-X [65] and Dual-Human [7]. From these datasets, we construct a col- lection of two-person poses in close proximity, as shown in Fig. 2, along with their preceding and subsequent inter- action motions. Using this collection, we train a conditional diffusion model to generate contextual interaction dynamics given a pair of closely interactive poses. We first demonstrate the application of our learned pose- to-dynamic interactive priors for open-domain images. So- cial interactions are frequently depicted in images, yet ex- isting works [7, 9, 10, 39] typically focus only on recon- structing static interactive poses, lacking the temporal dy- namics of these interactions. Meanwhile, video diffusion models [3, 16, 18] can animate images over time but of- ten struggle to maintain motion and interaction integrity. In contrast, Ponimator seamlessly transfers learned interac- tion prior knowledge from high-quality 3D mocap datasets to these in-the-wild images through estimated interactive poses, as shown in Fig. 1 (top). For broader applications, we developed an additional conditional diffusion model that leverages the spatial prior to generate interactive poses from multiple input types, including text descriptions, sin- gle poses, or both. Thus, when only a single person appears in an image, Ponimator can first generate a partner pose with an optional text prompt, and then animate the interactive poses over time (see Fig. 1). Furthermore, by anchoring on these interactive poses, Ponimator is able to generate short- clip two-person motions with proximal contact (see Fig. 8) directly from text input. Our key contributions are summarized as follows: 1) We present Ponimator, a simple framework designed to learn the dynamics prior of interactive poses from motion cap- ture data, particularly focusing on proximal human-human Figure 2. Interactive poses refer to two-person poses in proxim- ity and close contact. The top row displays interactive (green) and non-interactive (red) poses within one sequence. Interactive poses allow observers to intuitively infer the temporal context, while non-interactive poses are more ambiguous and difficult to inter- pret. The bottom row showcases common daily interactive poses. interaction animations; 2) The learned prior is universal and generalizes effectively to poses extracted from open-world images, enabling animation of social interactions in human images; 3) Ponimator can generate interactive poses from a single-person pose, text, or both, combined with interac- tive pose animation, enabling diverse applications including reaction animation and text-to-interaction synthesis. 2. Related work Human-human Interactions in Images. Human–human interactions are prevailing in social images. Significant progress has been made in interactive pose estimation [9, 10, 39] and interaction sequence reconstruction [20, 60]. Ugrinovic et al. integrate a physical simulator into the hu- man mesh recovery pipeline to capture the physical signif- icance of interactive poses. Huang et al. [20] use Vector- Quantised representation learning and specialized losses to learn a discrete interaction prior, but suffer from limited in- terpretability and generalization. In contrast, our method di- rectly anchors on interactive poses for interaction modeling without relying on additional physical simulators or intri- cate model designs. Our simple and interpretable prior gen- eralizes well to in-the-wild settings, adhering to the princi- ple that simplicity leads to robustness. The interactive pose prior is also explored in BUDDI [39], which estimates two- person static poses from images but is limited to static pose modeling and overlooks the rich dynamics of interactions. In contrast, our work unlocks interactive motions for both animation and generation in arbitrary open-world images. Human-human Motion Synthesis. Generating human motion dynamics has been a long-standing task [1, 2, 28, 29, 32, 38]. Utilizing generative models have gained widespread popularity recently [12–14, 25, 27, 30, 43, 44, 58, 59, 63, 64, 71, 72]. With the success of applying gener- ative models in single-person motion synthesis and the re- lease of large-scale two-person interaction datasets, such as InterGen [31] and Inter-X [65], there has been a surge in Interactive Pose Animator (b) Interactive Poses as Anchor (c) Unveiling Interaction Dynamics Interactive pose input Interactive Pose Generator Text prompt: “Two persons with arms on shoulders.” , Text prompt , Text prompt (a) Sourcing Interactive Poses Interactive pose output Figure 3. Framework overview. Ponimator consists of a pose generator and animator, bridged by interactive poses. The generator takes a single pose, text, or both as input to produce interactive poses, while the animator unleashes interaction dynamics from static poses. research [5, 6, 11, 24, 34, 35, 45, 50, 51, 53, 58, 59, 66] focused on multi-person motion generation. However, most existing studies generate two-person motions following in- put text, but often overlooking close-contact dynamics. For example, Liang et al. [31] proposed a diffusion model for two-person motion generation, but it relies on detailed text input and struggles with realistic interaction. In contrast, our framework focuses on short-range interactions by lever- aging generalizable interaction priors from static interactive poses, naturally ensuring physical contact between individ- uals and seamlessly generalizes to open-world scenarios. Human-human Motion Prediction. A body of work fo- cuses on tracking multi-person motions from videos [22, 23, 52], forecasting future multi-person motions based on past movements [15, 42, 55, 56, 62, 67, 68] and generat- ing reactive motion based on an individual’s full motion se- quence [5, 8, 11, 35, 49, 53, 66]. However, existing meth- ods rely on long history context or full individual motions while treating interactive poses and human dynamics sepa- rately. In contrast, our approach bridges these two modali- ties by anchoring on interactive poses and leveraging their prior for dynamics forecasting. This integration enables our model to generate both past and future interaction dynam- ics while supporting flexible inputs with fewer constraints, such as text, single-pose, or both, unlocking diverse appli- cations in animation and generation. 3. Approach Ponimator leverages interactive pose priors as intermediates for interaction animation, as shown in Fig. 3. We first in- troduce interactive poses and motion modeling (Sec. 3.1). Then, we present the pose animator (Sec. 3.2), which trans- forms interactive poses into motion, followed by the pose generator (Sec. 3.3), which generates interactive poses from various inputs. Finally, in Sec. 3.4, we explore Ponimator’s applications to real-world images and text. 3.1. Interactive Pose and Motion Modeling. Interactive pose and motion. Our work defines interac- tive poses as the poses of two individuals in proximity and close contact. For person a, we use the SMPLX parametric body model [40] to model the pose xa = (ϕa, θa, γa) and shape βa ∈R10. Here, θa ∈R21×3 is the joint rotations, ϕa ∈R1×3 and γa ∈R1×3 represents the global orienta- tion and translation. The interactive pose of two individuals a and b is given as xI = (xa I, xb I). An interaction motion consists of a short pose sequence X of length N, centered around an interaction moment, along with shape parameters θ of both individuals, where X = {xi}N i=1, β = (βa, βb). X includes an pair of interactive poses xI at interaction mo- ment index I within the sequence, and its nearby past poses x1:I and future poses xI+1:N. An example of interactive pose and motion is shown in Fig. 2. Interaction motion modeling. The interactive pose xI en- codes rich temporal and spatial priors. As shown Fig. 2, interactive poses convey motion dynamics (top row) and spatial relationships (bottom row) between individuals. The strong prior make it easier for models to learn, whereas non- interactive poses lack clear interaction cues, making learn- ing more challenging. Therefore, we model the interaction motion (X, β) by anchoring on its interactive pose xI. \l ab e \dis pla ystyle \highlight {red}{p(\boldsymbol {\mathcal {X}}; \highlight {green}{{\mathbf {x}_I}}, \boldsymbol {\beta })} l \disp laystyle \highlight {blue}{p(\highlight {green}{\mathbf {x}_I}, \boldsymbol {\beta })} {eq:decompose} p(\boldsymbol {\mathcal {X}}, \boldsymbol {\beta }) = \tikzmarknode {x}{\highlight {red}{p(\boldsymbol {\mathcal {X}}; \highlight {green}{{\mathbf {x}_I}}, \boldsymbol {\beta })}} \cdot \tikzmarknode {s}{\highlight {blue}{p(\highlight {green}{\mathbf {x}_I}, \boldsymbol {\beta })}} (1) temporal prior spatial prior Learning prior from diffusion model. Each prior’s dis- tribution in Eq. (1) is captured by a generative diffusion model [17] G, trained on high-quality mocap data. To Two-person Image Interaction Animation Pose Estimator Interactive Pose Animator two-person image interactive pose Interaction animation Single-person Image Interaction Animation Interactive Pose Generator Pose Estimator Interactive Pose Animator single-pose image optional text: lift the other onto his back interactive pose Interaction generation Text-to-Interaction Synthesis “Two person hugging together” Interactive Pose Generator Interactive Pose Animator text Interaction generation interactive pose Figure 4. Applications. Our framework enables two-person image animation, single-person interaction generation, and text-to-interaction synthesis. For two-person images, we estimate interactive poses using an off-the-shelf model [39]. For single-person images, we first estimate the pose by [4] and generate its interactive counterpart. For text input, our unified pose generator could synthesize the pose directly. These poses are then fed into our animator to generate human dynamics. model the underlying distribution of data z0, the diffusion model introduces noise ϵ to the clean data z0 in the forward pass, following zt = √¯αtz0 + √1 −¯αtϵ, ϵ ∼N(0, I), where αt ∈(0, 1) are constants, t is the diffusion timestep t ∈[0, Tdiffusion]. The model G aims to recover clean in- put by ˆz0 = G(zt, t, c) from the noisy observations zt and condition c, optimizing the objective: \ label {eq:diffusion- l oss} \m athc al {L}_D = \mathbf {E}_{\mathbf {z}_0, \bc , \boldsymbol {\epsilon } \sim \mathcal {N}(0,\mathbf {I}),t}[\| \mathbf {z}_0 - G(\mathbf {z}_t, t, \bc ) \|_2^b ] (2) During inference, the model iteratively predicts G(zt, t, c) from t = Tdiffusion to t = 0, gradually denoising the sample until it recovers the original clean data ˆz0. Close-proximity training data. We collect large-scale training data from public mocap datasets, InterX [65] and DualHuman [7], without requiring contact annotations. In- teractive poses are detected by spatial proximity, and if within a threshold, we extract the pose with its past and future frames to form a 3-second interaction clip. 3.2. Unveiling Dynamics from Interactive Poses The interactive pose animator captures the temporal prior in p(X; xI, β) given an interactive pose xI and two person’s shape β. The objective is to generate the motion sequences ˆX = {ˆxi}N i=1 where ˆxI ≈xI, as shown in Fig. 3 (c). Interactive pose-centered representation. We anchor the entire sequence on the interactive pose xI and define the denoising target z0 as the motion residuals with respect to interactive poses z0 = {xi−xI}N i=1 This learning objective enforces model to learn the contextual dynamics strongly shaped by interactive poses. During inference, we recover the predicted pose sequence {ˆxi}N i=1 by ˆz0 + xI. We encode the interactive time index I with a one- hot vector mI ∼OneHot(I) ∈{0, 1}N, where mi I = 1 iff i = I. To better preserve the spatial structure of interactive pose at time I in pose sequences, we apply an imputation strategy to the diffusion model, where the noise input zt in Eq. (2) is substituted with ˜zt: \ l ab e l { e q: a ni m at o r } \t ild e {\mathbf {z}}_{t} = (1- \mathbf {m}_I) \odot \mathbf {z}_{t} + \mathbf {m}_I \odot \mathbf {0}, \quad \bc = (\mathbf {m}_I, \mathbf {x}_I, \boldsymbol {\beta }), (3) where ⊙denotes element-wise multiplication and c is the input condition. After imputation, noise is added to interac- tive poses (i.e., ˜zt + xI) before fed into the network. Condition encoding. The interaction time condition mI is concatenated with the initial model input along the feature dimension. We encode the remaining conditions (xI, β) by leveraging the SMPLX joint forward kinematics (FK) function FK(·, ·) to compute joint positions of interactive pose jI = (FK(xa I, βa), FK(xb I, βb)). Here, jI inherently encodes both individuals’ poses and shapes. It is further embedded through a single-layer MLP and injected into the model layers via AdaIN [21]. Architecture and training. We adopt the DiT [41] ar- chitecture as our diffusion model, built on stacked Trans- former blocks [61] that alternate spatial attention for hu- man contact and temporal attention for motion dynamics. To train the model, besides diffusion loss LD in Eq. (2), we apply the SMPL loss Lsmpl as the MSE between the de- noised pose sequence and the clean input. We also use an interaction loss Linter [31] and a velocity loss [59]. Lvel Input Interaction Animation (left→right: time steps) Figure 5. Interactive pose image animation on FlickrCI3D dataset [9]. Left shows the input image, right shows the animated interaction motions. Interactive-pose frame is labeled in green box. encourages contact between individuals in close proxim- ity, while Lvel ensures motion coherence. The total loss L = λDLD+λsmplLsmpl+λinterLinter+λvelLvel. To improve robustness and generalization to noisy real-world poses, we apply augmentation by adding random noise to interactive pose xI. Please refer to Sec. A for details. 3.3. Interactive Pose Generator The interactive pose generator models p(xI, β) in Eq. (1), leveraging the spatial prior to generate xI, β from various conditions, as shown in Fig. 3(a). Unified input conditioning. Given various input condi- tions, including text c, single person pose (xa I, βa), or both, the model generates za 0 = (xa I, βa) and zb 0 = (xb I, βb), which together form the diffusion target z0 = (za 0, zb 0) in Eq. (2). To integrate these conditions into a unified model, we introduce two masks, mc and ma, to encode the pres- ence of text and pose conditions, respectively. These masks are sampled independently from a Bernoulli distribution with probability pcondition during training. We modify the model input zt and text condition c to ˜c in Eq. (2) as: \ l abe l {e q :g e n er a to r} \ til de {\ m athbf {z}}_{t} = ((1-\mathbf {m}_a) \odot \mathbf {z}_{t}^a + \mathbf {m}_a \odot \mathbf {z}_0^a, \mathbf {z}_{t}^b), \quad \tilde {\bc } = \mathbf {m}_c \odot \bc . \vspace {-1em} (4) This design enables the model to accommodate multiple combinations of conditions. In SMPL, human shapes are coupled with genders g ∈ {male, female, neutral}. To enable a more generic shape condition, we instead use the global joint positions of rest pose j{a,b} rest , which inherently capture both shape and gen- der information, and define the diffusion target as z0 = (x{a,b} I , j{a,b} rest ). After generation, we can recover β{a,b} from j{a,b} rest using inverse kinematics (IK). Architecture and training. We use the same architecture as pose animator with modifications below. (1) The text condition c is encoded via CLIP [48], processed by two trainable Transformer layers, and injected by AdaLN [21]. (2) We retain spatial attention layers and remove tempo- ral attentions. The model is trained with standard diffu- sion loss LD in Eq. (2), SMPL loss Lsmpl, and bone length loss Lbone minimizes the MSE with ground-truth lengths in the SMPLX [40] kinematic tree. Total loss L = λDLD + λsmplLsmpl + λboneLbone. Please see Sec. A for details. 3.4. Applications Our framework supports two-person interactive pose image animation, single-person pose interaction generation, and text-to-interaction synthesis, as shown in Fig. 4. Interactive pose image animation. As shown in 1st row of Fig. 4, given a two-person image, we estimate the in- teractive pose ˆxI using an off-the-shelf model [39]. The Input Interaction Generation (left→right: time steps) ”two person pose for a photo” ”one person lift another one up” Figure 6. Single-person image interaction generation on Motion-X [33] dataset. Left shows the single person image input, right shows the generated two-person interaction dynamics. The generated interactive pose frame is labeled in magenta box. Top two rows display single-person pose inputs, while the bottom two show the same with accompanying text below the input image. estimated pose is fed into our interactive pose animator (Sec. 3.2) to generate motions guided by the temporal prior in interactive poses. Our model provides flexible interaction timing control by adjusting I in Eq. (3), where I = 0 pre- dicts future motion, I = N reconstructs the past, and gen- erally, n = N 2 enables symmetric animation. Open-world animation results are shown in Fig. 5. Single-person pose interaction generation. As shown in the 2nd row of Fig. 4, given a single-person image, we esti- mate the pose ˆxa I using off-the-shelf model such as [4] and feed it into our interactive pose generator (Sec. 3.3). We set ma = 0, mc = 0 in Eq. (4) as model input, disabling text input and allowing ˆxa I to generate its interactive counterpart xb I using the spatial prior in interactive poses. Alternatively, setting mc = 1 enables additional text conditioning. Once the interactive pose ˆxI = (ˆxa I, ˆxb I) is obtained, it is fed into the interactive pose animator (Sec. 3.2) to synthesize mo- tion dynamics. Open-world results are presented in Fig. 6. Text-to-interaction synthesis. As shown in 3rd row of Fig. 4, given a short phrase, we generate the interactive pose ˆxI by setting ma = 0, mc = 1 in Eq. (3). The gener- ated ˆxI is then passed to the pose animator to produce the corresponding motion. Examples for ”two-person hugging together” and ”push” are presented in Figs. 4 and 8. 4. Experiments Implementation details. We extract interactive poses by detecting SMPL-X vertices contacts [39] below a threshold in each mocap dataset within a 3s window. The interactive pose animator has 8 layers (latent dim 1024) and is trained using AdamW [37] (LR 1e-4). All loss weights are 1 ex- cept λinter = 0.5.To handle real-world noise, we augment training by adding Gaussian noise (scale 0.02) to interactive poses. At inference, DDIM [54] samples 50 steps, generat- ing 3s motions at 10fps in 0.24s on an A100. The interac- tive pose generator follows a similar setup with ptext = 0.8, ppose = 0.2, and a frozen CLIP-ViTL/14 [48] text encoder. The pose generation take 0.21s. Models are trained for 4000 epochs with batch sizes of 256 (pose animator) and 512 (pose generator). Please see Sec. A for details. Datasets. We train and test our model on two large-scale datasets: Inter-X [65] (11k sequences) and Dual-Human [7] (2k sequences). We follow the official split for Inter-X and use a 3:1 training-testing split for Dual-Human, excluding non-interactive motion sequences. Metrics. We follow the evaluation metrics in [47, 50, 59]: Frechet Inception Distance (FID), the feature distribution against ground truth (GT). We compute it by training a mo- tion autoencoder to encode motion into features for each task; Precision (Pre.), the likelihood that generated mo- tions fall within the real distribution; Recall (Rec.), the like- lihood that real motions fall within the generated distribu- Interactive Pose Interaction Animation (left→right: time steps) Inter-X test set [65] Dual-Human test set [7] Duolando dataset [53] Hi4D dataset [70] Interhuman dataset [31] Multi-person interaction animation Figure 7. Interactive pose animation on in-domain datasets (Inter-X[65], Dual-Human [7]), out-of-domain dataset (Duolando [53], Hi4D [70], Interhuman [31]), and random composed multi-person pose. Each row: left—interactive pose, right—animation sequence. Our learned interactive pose prior is universal, generalizing across datasets and enabling multi-person interactions (6th row) without modification or retraining. tion; Diversity, the variance of generated motions. We also evaluate the physics plausibility via Contact Frame Ra- tio (CR., %)—proportion of frames with two-person con- tact—and averaged Inter-person Penetration (Pene., cm). 4.1. Effectiveness of Anchoring on Interactive Poses Previous works model human-human interaction dynamics either by finetuning on single-person motion priors with in- teraction data (e.g., ComMDM [50], RIG [58]) or by learn- ing interaction dynamics from scratch (e.g., InterGen [31]). In this work, we model interaction dynamics by anchor- ing on proximal interactive poses. To evaluate the effec- tiveness of these approaches, we employ a simple task— unconstrained generation. We further adapt MDM [59] to accommodate two-person motions in our setting. Ponima- tor seamlessly supports unconstrained generation by set- Method FID ↓Pre. ↑Rec.↑Div. →CR.→Pene.↓ GT 0.3 1.0 1.0 10.1 70.6 3.8 MDM* [59] 62.6 0.79 0.20 9.8 66.4 5.3 ComMDM [50] 88.8 0.37 0.49 10.9 44.3 4.7 RIG [58] 65.2 0.46 0.65 10.6 44.3 4.3 InterGen [31] 56.6 0.57 0.46 10.1 50.9 4.3 Ours 22.6 0.58 0.72 10.2 68.1 5.0 Table 1. Unconstrained interaction synthesis comparison on Inter-X [65] dataset. →means the closer to ground truth the better the result. Method in ∗is adapted from ours for two-person inter- action. Our method largely outperforms others in motion quality and contact ratio, naturally ensuring physical contact and motion realism by anchoring on interactive poses. Inter-X Dual-Human Method FID↓Div.→CR. →Pene.↓FID↓Div.→CR. →Pene.↓ GT 0.3 10.1 70.6 3.8 2.1 12.0 70.4 3.4 InterGen* 18.9 10.6 44.4 4.3 88.8 11.9 44.3 4.1 w/o anchor 7.1 9.8 67.3 5.1 36.9 11.6 70.7 4.5 - time 6.3 10.3 66.9 5.2 30.3 12.6 67.3 5.1 - joints 5.6 10.0 67.6 5.1 29.9 12.3 70.2 4.4 random-pose 5.8 10.1 67.4 5.1 30.1 12.3 69.3 4.5 ours 5.0 9.9 68.5 5.1 24.2 11.8 70.4 4.5 Table 2. Interactive pose animation comparison on Inter-X [65] and Dual-Human [7] dataset. InterGen* is adapted to take interac- tive poses input but lacks explicit interaction modeling, limiting its use of pose priors. Interactive pose anchoring, condition encoding, and interactive frames are crucial for the performance. ting ma = 0 and mc = 0. Experimental results on our dataset collection from Inter-X [65] are shown in Tab. 1. We observe that previous methods [31, 50, 58] struggle to synthesize close-contact interactions, while the adapted MDM* [59] exhibits lower interaction motion quality. In contrast, by simply anchoring on interactive poses, our model achieves superior motion realism (FID of 22.6) and physical contact (contact ratio of 68.1). 4.2. Interactive Pose Animation To evaluate the interactive pose animator, we compare against baselines and key ablations on Inter-X [65] and Dual-Human [7] datasets in Tab. 2. We ablate key com- ponents of pose animator: w/o anchor removes interac- tive pose anchoring, replacing the denoising target z0 with {xi}N i=1; - time removes the interaction time encoding mI; - joints removes joints condition encoding; InterGen* re- places text conditions with interactive pose condition while keeping all other settings unchanged; random-pose uses Method FID↓Div.→MModality↑CR. →Pene.↓ GT 0.06 6.78 - 70.6 3.8 InterGen 2.87 6.76 1.42 39.8 3.9 w/o anchor 2.74 6.78 1.41 39.0 4.0 Ours 1.82 6.78 1.46 45.9 4.3 Table 3. Text-to-interaction synthesis results on Inter-X [65] dataset. Our unified pipeline outperforms end-to-end w/o inter- active pose as anchor method in short-term interaction synthesis. InterGen [31] W/o anchor Ours Figure 8. Text-to-interaction comparison for ”push”. Anchored on interactive poses, our method achieves better contact and more realistic dynamics than InterGen [31] and the end-to-end baseline. random instead of interactive frames as anchor. All base- lines are trained under the same setting. Tab. 2 highlights the importance of interactive pose anchoring and interaction conditioning. InterGen* overlooks input poses, resulting in poorer performance. In contrast, our method explicitly models interaction and contact and achieves better results. Universal interactive pose prior. We visualize the an- imated motion in Fig. 7 on in-domain datasets (Inter- X[65], Dual-Human [7]) and out-of-domain datasets (Duolando [53], Hi4D [70], Interhuman [31]). Our ap- proach generalizes to unseen subjects and interactions using the universal interactive pose prior. Our model is surpris- ingly capable of generating interactions beyond two persons without modification or retraining (see last row in Fig. 7). Open-world two-person image animation. Our model generalizes to open-world images by extracting interactive poses from FlickrCI3D [9] dataset using [39]. As shown in Fig. 5, it transforms static poses into realistic motion. 4.3. Interaction Motion Generation We evaluate interaction motion generation on the Inter-X dataset [65] using text and single-person poses. Text-to-interaction synthesis We focus on 3s interac- tion generation, evaluating FID, Diversity, and MModal- ity—the ability to generate diverse interactions from the same text [31, 59]. We compare with InterGen [31] and an end-to-end w/o interactive pose baseline, both trained and Method FID↓Pre.↑Rec.↑Div.→CR.→Pene.↓ GT 0.3 1.0 1.0 10.1 70.6 3.8 w/o anchor 40.0 0.87 0.43 9.6 67.5 5.0 Ours 27.8 0.91 0.48 9.7 73.3 5.2 Table 4. Single pose-to-interaction synthesis results on Inter- X [65] dataset. Compared to without anchor baseline, our method uses interactive poses for more effective interaction modeling. Single Pose Interaction Generation (left→right: time steps) W/o anchor Ours Figure 9. Single pose-to-interaction comparison on Inter-X dataset [65]. Compared to the model without interactive pose an- chors, our method generates more natural human interactions. Single Pose Interactive Generation (left→right: time steps) + ”two person hug” Figure 10. Diverse interactive motion generation. From a single pose, our framework generates varied interactive poses (magenta box) and motions (1st, 2nd rows) and text-driven ones (3rd row). tested on the same data. As shown in Tab. 3 and Fig. 8, they struggle with contact modeling, while ours excels in short- term interaction generation using interactive pose priors. Interaction synthesis from single pose We evaluate single pose-to-interaction synthesis on Inter-X [65] dataset, com- paring our method with an end-to-end without interactive pose baseline, which struggles in the large motion space, as shown in Tab. 4 and Fig. 9. Our method leverages interac- tive poses to generate diverse motions under varying input conditions in Fig. 10. Open-world single-person image animation. Our model generalizes to open-world single-person images by estimat- ing poses [4], generating interactive counterparts, and an- imating motion. Fig. 6 shows results on Motion-X [65] dataset. Input Image Interaction Generation (left→right: time steps) Figure 11. Interactive human video generation. Given a single input image (left), our method generates interactive human motions that serve as intermediate results for video generation. We use an off-the-shelf human reconstruction model [46] to recover textured humans from a single image. By pairing the generated motion with an arbitrary second person and applying the corresponding textures, we can produce realistic human interaction videos. 4.4. Interaction Video Generation Our method generated interactive human motion could serve as intermediate outputs for downstream video gener- ation. While existing video diffusion models [3, 16, 18, 26] can synthesize human videos, their motions often lack tem- poral consistency and realism. In contrast, our generated motions provide a stable and realistic foundation for inter- active human video synthesis, either through pose-guided video diffusion models [19, 69, 73] or by texturizing mo- tion sequences. As shown in Fig. 11, we use an off-the- shelf human reconstruction model [46] to recover textured humans from a single image. The generated interactive mo- tion is then paired with an arbitrary second person’s texture to produce realistic human interaction videos. 4.5. Limitations Our method has few limitations: (1) it focus on short inter- action segments; (2) it relies solely on human poses, ignor- ing scene context; (3) pose inaccuracies may cause contact errors and foot sliding; (4) close interactions may lead to inter-person penetration. Please refer to the Sec. B for more details. 5. Conclusion We introduce Ponimator, which integrates a pose animator and generator for interactive pose animation and generation using conditional diffusion models. The animator leverages temporal priors for dynamic motion generation, the gener- ator uses spatial priors to create interactive poses from a single pose, text, or both. Ponimator enables open-world image interaction animation, single-pose interaction gen- eration, and text-to-interaction synthesis, exhibiting strong generalization and realism across datasets and applications. References [1] Chaitanya Ahuja and Louis-Philippe Morency. Lan- guage2pose: Natural language grounded pose forecasting. In 3DV. IEEE, 2019. 2 [2] Okan Arikan and David A Forsyth. Interactive motion gen- eration from examples. TOG, 2002. 2 [3] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dock- horn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with la- tent diffusion models. In CVPR, 2023. 2, 9 [4] Zhongang Cai, Wanqi Yin, Ailing Zeng, Chen Wei, Qing- ping Sun, Wang Yanjun, Hui En Pang, Haiyi Mei, Mingyuan Zhang, Lei Zhang, Chen Change Loy, Lei Yang, and Zi- wei Liu. SMPLer-X: Scaling up expressive human pose and shape estimation. In NeurIPS, 2023. 4, 6, 8, 15 [5] Baptiste Chopin, Hao Tang, Naima Otberdout, Mohamed Daoudi, and Nicu Sebe. Interaction transformer for human reaction generation. Multimedia, 2023. 3 [6] Ke Fan, Junshu Tang, Weijian Cao, Ran Yi, Moran Li, Jingyu Gong, Jiangning Zhang, Yabiao Wang, Chengjie Wang, and Lizhuang Ma. Freemotion: A unified framework for number-free text-to-motion synthesis. arXiv preprint arXiv:2405.15763, 2024. 3 [7] Qi Fang, Yinghui Fan, Yanjun Li, Junting Dong, Dingwei Wu, Weidong Zhang, and Kang Chen. Capturing closely in- teracted two-person motions with reaction priors. In CVPR, 2024. 2, 4, 6, 7, 8, 16, 17, 18, 21 [8] Yanwen Fang, Jintai Chen, Peng-Tao Jiang, Chao Li, Yifeng Geng, Eddy KF Lam, and Guodong Li. Pg- former: Proxy-bridged game transformer for multi-person highly interactive extreme motion prediction. arXiv preprint arXiv:2306.03374, 2023. 3 [9] Mihai Fieraru, Mihai Zanfir, Elisabeta Oneata, Alin-Ionut Popa, Vlad Olaru, and Cristian Sminchisescu. Three- dimensional reconstruction of human interactions. In CVPR, 2020. 2, 5, 8, 16 [10] Mihai Fieraru, Mihai Zanfir, Teodor Szente, Eduard Baza- van, Vlad Olaru, and Cristian Sminchisescu. Remips: Phys- ically consistent 3d reconstruction of multiple interacting people under weak supervision. NeurIPS, 2021. 2 [11] Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Chris- tian Theobalt, and Philipp Slusallek. Remos: Reactive 3d motion synthesis for two-person interactions. arXiv preprint arXiv:2311.17057, 2023. 3 [12] Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and Li Cheng. Ac- tion2motion: Conditioned generation of 3d human motions. In Multimedia, 2020. 2 [13] Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In CVPR, 2022. 13 [14] Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal gener- ation of 3d human motions and texts. In ECCV. Springer, 2022. 2 [15] Wen Guo, Xiaoyu Bie, Xavier Alameda-Pineda, and Francesc Moreno-Noguer. Multi-person extreme motion pre- diction. In CVPR, 2022. 3 [16] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text- to-image diffusion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. 2, 9 [17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffu- sion probabilistic models. NeurIPS, 2020. 3 [18] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video dif- fusion models. NeurIPS, 2022. 2, 9 [19] Li Hu. Animate anyone: Consistent and controllable image- to-video synthesis for character animation. In CVPR, pages 8153–8163, 2024. 9 [20] Buzhen Huang, Chen Li, Chongyang Xu, Liang Pan, Yan- gang Wang, and Gim Hee Lee. Closely interactive human reconstruction with proxemics and physics-guided adaption. In CVPR, 2024. 2 [21] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017. 4, 5, 13 [22] Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres, and Bernt Schiele. Arttrack: Articulated multi-person tracking in the wild. In CVPR, 2017. 3 [23] Umar Iqbal, Anton Milan, and Juergen Gall. Posetrack: Joint multi-person pose estimation and tracking. In CVPR, 2017. 3 [24] Muhammad Gohar Javed, Chuan Guo, Li Cheng, and Xingyu Li. Intermask: 3d human interaction genera- tion via collaborative masked modelling. arXiv preprint arXiv:2410.10010, 2024. 3 [25] Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a foreign language. NeurIPS, 2023. 2 [26] Yang Jin, Zhicheng Sun, Ningyuan Li, Kun Xu, Hao Jiang, Nan Zhuang, Quzhe Huang, Yang Song, Yadong Mu, and Zhouchen Lin. Pyramidal flow matching for efficient video generative modeling. arXiv preprint arXiv:2410.05954, 2024. 9 [27] Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, and Siyu Tang. Guided motion diffusion for controllable human motion synthesis. In ICCV, 2023. 2 [28] Lucas Kovar and Michael Gleicher. Flexible automatic mo- tion blending with registration curves. In Symposium on Computer Animation. San Diego, CA, USA, 2003. 2 [29] Lucas Kovar, Michael Gleicher, and Fr´ed´eric Pighin. Motion graphs. In SIGGRAPH, 2008. 2 [30] Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, and Olga Sorkine-Hornung. Ganimator: Neural motion synthesis from a single sequence. TOG, 2022. 2 [31] Han Liang, Wenqian Zhang, Wenxuan Li, Jingyi Yu, and Lan Xu. Intergen: Diffusion-based multi-human motion genera- tion under complex interactions. IJCV, 2024. 2, 3, 4, 7, 8, 13, 17, 19 [32] Angela S Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qix- ing Huang, and Raymond J Mooney. Generating animated videos of human activities from natural language descrip- tions. Learning, 2018. 2 [33] Jing Lin, Ailing Zeng, Shunlin Lu, Yuanhao Cai, Ruimao Zhang, Haoqian Wang, and Lei Zhang. Motion-x: A large-scale 3d expressive whole-body human motion dataset. NeurIPS, 2024. 6, 15, 17 [34] Shaowei Liu, Yang Zhou, Jimei Yang, Saurabh Gupta, and Shenlong Wang. Contactgen: Generative contact modeling for grasp generation. In ICCV, pages 20609–20620, 2023. 3 [35] Yunze Liu, Changxi Chen, and Li Yi. Interactive humanoid: Online full-body motion reaction synthesis with social af- fordance canonicalization and forecasting. arXiv preprint arXiv:2312.08983, 2023. 3 [36] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multi- person linear model. TOG, 2015. 17, 19 [37] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. 6 [38] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In CVPR, 2017. 2 [39] Lea M¨uller, Vickie Ye, Georgios Pavlakos, Michael Black, and Angjoo Kanazawa. Generative proxemics: A prior for 3d social interaction from images. In CVPR, 2024. 2, 4, 5, 6, 8, 13, 15 [40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3d hands, face, and body from a single image. In CVPR, 2019. 3, 5, 17, 19 [41] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 4, 13 [42] Xiaogang Peng, Yaodi Shen, Haoran Wang, Binling Nie, Yi- gang Wang, and Zizhao Wu. Somoformer: Social-aware mo- tion transformer for multi-person motion prediction. arXiv preprint arXiv:2208.09224, 2022. 3 [43] Mathis Petrovich, Michael J Black, and G¨ul Varol. Action- conditioned 3d human motion synthesis with transformer vae. In ICCV, 2021. 2 [44] Mathis Petrovich, Michael J Black, and G¨ul Varol. Temos: Generating diverse human motions from textual descriptions. In ECCV. Springer, 2022. 2 [45] Pablo Ruiz Ponce, German Barquero, Cristina Palmero, Ser- gio Escalera, and Jose Garcia-Rodriguez. in2in: Leveraging individual information to generate human interactions. arXiv preprint arXiv:2404.09988, 2024. 3 [46] Lingteng Qiu, Xiaodong Gu, Peihao Li, Qi Zuo, Weichao Shen, Junfei Zhang, Kejie Qiu, Weihao Yuan, Guanying Chen, Zilong Dong, and Liefeng Bo. Lhm: Large animatable human reconstruction model from a single image in seconds. In ICCV, 2025. 9 [47] Sigal Raab, Inbal Leibovitch, Peizhuo Li, Kfir Aberman, Olga Sorkine-Hornung, and Daniel Cohen-Or. Modi: Un- conditional motion synthesis from diverse data. In CVPR, 2023. 6 [48] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn- ing transferable visual models from natural language super- vision. In ICML. PMLR, 2021. 5, 6 [49] Muhammad Rameez Ur Rahman, Luca Scofano, Edoardo De Matteis, Alessandro Flaborea, Alessio Sampieri, and Fabio Galasso. Best practices for 2-body pose forecasting. In CVPR, 2023. 3 [50] Yoni Shafir, Guy Tevet, Roy Kapon, and Amit Haim Bermano. Human motion diffusion as a generative prior. In ICLR, 2024. 3, 6, 7 [51] Mengyi Shan, Lu Dong, Yutao Han, Yuan Yao, Tao Liu, Ifeoma Nwogu, Guo-Jun Qi, and Mitch Hill. Towards open domain text-driven synthesis of multi-person motions. arXiv preprint arXiv:2405.18483, 2024. 3 [52] Bing Shuai, Alessandro Bergamo, Uta Buechler, Andrew Berneshawi, Alyssa Boden, and Joseph Tighe. Large scale real-world multi-person tracking. In ECCV. Springer, 2022. 3 [53] Li Siyao, Tianpei Gu, Zhitao Yang, Zhengyu Lin, Ziwei Liu, Henghui Ding, Lei Yang, and Chen Change Loy. Duolando: Follower gpt with off-policy reinforcement learning for dance accompaniment. arXiv preprint arXiv:2403.18811, 2024. 3, 7, 8, 16, 18 [54] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 6, 13 [55] Sebastian Starke, Yiwei Zhao, Taku Komura, and Kazi Za- man. Local motion phases for learning multi-contact charac- ter movements. TOG, 2020. 3 [56] Sebastian Starke, Yiwei Zhao, Fabio Zinno, and Taku Ko- mura. Neural animation layering for synthesizing martial arts movements. TOG, 2021. 3 [57] Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. In WACV, 2022. 15 [58] Mikihiro Tanaka and Kent Fujiwara. Role-aware interaction generation from textual description. In ICCV, 2023. 2, 3, 7 [59] Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffu- sion model. In ICLR, 2023. 2, 3, 4, 6, 7, 8, 13 [60] Nicolas Ugrinovic, Boxiao Pan, Georgios Pavlakos, De- spoina Paschalidou, Bokui Shen, Jordi Sanchez-Riera, Francesc Moreno-Noguer, and Leonidas Guibas. Multiphys: multi-person physics-aware 3d motion estimation. In CVPR, 2024. 2 [61] A Vaswani. Attention is all you need. NeurIPS, 2017. 4, 13 [62] Jiashun Wang, Huazhe Xu, Medhini Narasimhan, and Xiao- long Wang. Multi-person 3d motion prediction with multi- range transformers. NeurIPS, 2021. 3 [63] Jingbo Wang, Sijie Yan, Bo Dai, and Dahua Lin. Scene- aware generative network for human motion synthesis. In CVPR, 2021. 2 [64] Jingbo Wang, Yu Rong, Jingyuan Liu, Sijie Yan, Dahua Lin, and Bo Dai. Towards diverse and natural scene-aware 3d human motion synthesis. In CVPR, 2022. 2 [65] Liang Xu, Xintao Lv, Yichao Yan, Xin Jin, Shuwen Wu, Congsheng Xu, Yifan Liu, Yizhou Zhou, Fengyun Rao, Xingdong Sheng, et al. Inter-x: Towards versatile human- human interaction analysis. In CVPR, 2024. 2, 4, 6, 7, 8, 16, 17, 18, 19, 21 [66] Liang Xu, Yizhou Zhou, Yichao Yan, Xin Jin, Wenhan Zhu, Fengyun Rao, Xiaokang Yang, and Wenjun Zeng. Regennet: Towards human action-reaction synthesis. In CVPR, 2024. 3 [67] Qingyao Xu, Weibo Mao, Jingze Gong, Chenxin Xu, Si- heng Chen, Weidi Xie, Ya Zhang, and Yanfeng Wang. Joint- relation transformer for multi-person motion prediction. In ICCV, 2023. 3 [68] Sirui Xu, Yu-Xiong Wang, and Liangyan Gui. Stochastic multi-person 3d motion forecasting. In ICLR, 2023. 3 [69] Jingyun Xue, Hongfa Wang, Qi Tian, Yue Ma, Andong Wang, Zhiyuan Zhao, Shaobo Min, Wenzhe Zhao, Kai- hao Zhang, Heung-Yeung Shum, et al. Follow-your-pose v2: Multiple-condition guided character image animation for stable pose control. arXiv preprint arXiv:2406.03035, 2024. 9 [70] Yifei Yin, Chen Guo, Manuel Kaufmann, Juan Zarate, Jie Song, and Otmar Hilliges. Hi4d: 4d instance segmentation of close human interaction. In CVPR, 2023. 7, 8 [71] Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In CVPR, 2023. 2 [72] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondif- fuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022. 2 [73] Shenhao Zhu, Junming Leo Chen, Zuozhuo Dai, Yinghui Xu, Xun Cao, Yao Yao, Hao Zhu, and Siyu Zhu. Champ: Controllable and consistent human image animation with 3d parametric guidance. In ECCV, 2024. 9 Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation Supplementary Material https://stevenlsw.github.io/ponimator/ Abstract The supplementary material provides implementation details, limitation analysis, qualitative results and future work. In summary, we include • Sec. A. Implementation details and model architecture of the interactive pose animator and generator. • Sec. B. Limitation analysis of our current approach. • Sec. C. Additional qualitative results of long inter- active motion generation, complex interaction synthe- sis, two-person image animation, single-person im- age interaction generation, interactive pose animation, text-to-interaction motion synthesis, and single-pose-to- interaction motion synthesis. A. Implementation details Interactive pose extraction. Given a two-person pose from a motion sequence, we determine close contact by measur- ing the minimum distance between their SMPL-X meshe vertices. Following [39], we downsample the mesh based on predefined contact regions and compute pairwise dis- tances. If the smallest distance is below 1.3cm, we classify the pose as a proximity pose—indicating contact between the individuals. This interactive pose is then used to train human interaction dynamics. Model architecture. Our pose animator and pose generator follow the DiT architecture [41], which consists of stacked Transformer blocks [61], each incorporating an attention mechanism and a feed-forward network (FFN). Both the an- imator and generator comprise 8 Transformer layers, with the animator utilizing both spatial- and temporal-attention blocks, while the generator employs only spatial attention. The model has a latent dimension of 1024, with 8-head multi-head attention, and uses the GELU activation func- tion. The input motions are first encoded with positional en- coding before being processed by Transformer blocks. The input has the shape (B, P, N, D), where B is batch size, P = 2 represents the number of individuals, and N cor- responds to number of frames, and D is the dimension of diffusion target z0. Spatial attention operates along the P- dimension to model interactions between individuals, while temporal attention captures motion dynamics along the T- dimension. The model’s output layer is a linear MLP, ini- tialized with zero weights, which generates residual motion outputs. These residual motions are added to the interactive pose to produce the final output. Conditional information is incorporated into the model using Adaptive Instance Nor- malization [21]. Training. We apply training data augmentation to interac- tive poses in the interactive pose animator by adding ran- dom noise with a scale of 0.02 to account for real-world inaccuracies in pose estimation. This ensures that even if the interactive pose estimator introduces noise, the anima- tor can still produce reasonable results. This augmenta- tion is performed online during training. Following prior work [13, 31], we align one person’s pose in the interac- tive pose to face the positive Z direction and center it at the origin. The interaction loss in the pose animator fol- lows [31] and consists of a **contact loss**, which encour- ages contact between two individuals when their joints are close, and a **relative orientation loss**, which aligns their global orientations with the ground truth. The velocity loss Lvel, following MDM [59], ensures motion coherence by minimizing the velocity difference between the generated motion and the ground truth. For diffusion training, we use a cosine scheduler with 1000 diffusion steps and DDIM sampling [54] for 50 steps during inference. The model is trained with a learning rate of 1e-4 and weight decay of 0.00002 for 4000 epochs. The batch size is 256 for the interactive pose animator and 512 for the interactive pose generator. Training takes 2 days for the pose animator and 1 day for the pose generator on 4×A100 GPUs. Inference speed comparison. Our interactive pose genera- tion takes 0.21s on a single A100 on average, the interactive pose animator generates 3s motion at 10fps in 0.24s, com- parable to InterGen [31] which requires 0.76s for the same motion length. B. Limitation Analysis Our method has the limitations below. The common failure modes are illustrated in Fig. 12. Short motion modeling. Our method is mainly focus on short interactive motion segments. While our framework could support longer generation by interactive pose chain- ing as shown in Fig. 13, the benefit of interactive pose prior would diminish over time. In text-to-interaction synthesis, our framework prioritizes interactive motion-relevant infor- mation, which can result in partial rather than complete motion sequences when the input text describes extended Input Interaction Animation (left→right: time steps) Figure 12. Method limitation analysis. The first two rows show in-the-wild interactive pose animation results. In the first sample, severe interpenetration occurs as our method does not explicitly model penetration between two individuals. In the second, the generated motion is physically implausible due to the lack of scene context awareness, leading to collisions with the environment. The bottom two rows illustrate interaction motion generation from a single pose input. Due to inaccuracies in interactive pose generation, our method fails to produce realistic contact, resulting in unnatural motion. human interactions. Moreover, our pose animator—taking only interactive poses as input—cannot fully capture the se- mantic context or temporal ordering in text (e.g., distin- guishing “lifting up” from “putting down”). Incorporat- ing text conditioning into the pose-to-interaction stage is a promising avenue for improving text-to-interaction–specific tasks. However, since our main focus is on pose-to- interaction animation without enforced text input, this am- biguity can be a strength, enabling multiple valid and phys- ically plausible motion interpretations from the same inter- active pose. Inter-person penetrations. While our method enhances contact in two-person interactions, it does not explic- itly model interpenetration between individuals. Conse- quently, in close-contact scenarios—such as the first row in Fig. 12—some interpenetration may occur in the generated motion sequences. Achieving a balance between realistic contact and preventing interpenetration remains a challeng- ing problem, as enforcing strict physical constraints could compromise natural motion quality. Addressing interpen- etration modeling and ensuring physically plausible two- person interaction motion generation is an important direc- tion for future work. Lack of scene awareness. When applied to in-the- wild two-person pose animation or motion generation, our method relies solely on human pose information and ig- nores the surrounding environment. As a result, gener- ated motions may appear physically implausible in certain cases, such as the 2nd row of Fig. 12, where collisions oc- cur. Moreover, interactive poses can sometimes be ambigu- Figure 13. Longer motion generation by chaining interactive poses. We reuse the last generated pose as the next input, resetting interactive time to zero, enabling sliding-window synthesis of longer motions (key-frame in magenta box). Interactive Pose Interaction Animation (left→right: time steps) Figure 14. Complex interactive pose animation. Given an interactive pose, our pose animator can synthesize high-dynamics (1st row) and close-contact (2nd row) human-human motions, leveraging the strong interactive prior learned from high-quality mocap data. ous, causing noticeable motion errors when used as the sole input. A more robust approach would integrate additional scene information (e.g. image features) to improve motion prediction and dynamics forecasting. Inaccurate contact. The interactive pose estimator or our interactive pose generator may occasionally produce inac- curate interactive poses, resulting in poor human–human contact in the generated motions, as seen in the 3rd and 4th rows of Fig. 12. These inaccuracies result in unrealistic mo- tion due to the lack of precise interactive pose inputs. Since the pose animator primarily models temporal dynamics and depends on the interactive pose for spatial information, it often cannot correct errors arising from inaccurate interac- tive poses. Additionally, our generated interaction motions may exhibit artifacts such as foot sliding, a common issue in human motion synthesis. While such artifacts can often be mitigated through post-processing, we do not apply any post-processing in our examples. C. Qualitative results Longer interactive motion generation. Our framework is designed for short-term interaction generation but naturally extends to longer sequences. The pose animator takes an in- teractive pose together with an interactive time to synthesize both past and future motions centered on that pose. Longer sequences are produced by chaining segments in a sliding- window manner: the last generated pose of one segment is reused as the starting pose for the next, the interactive time index is reset to zero (beginning of the new segment), and generation continues. Repeating this process yields co- herent long-term interactions, as shown in Fig. 13, where key-frames are labeled in magenta box. Complex interactive pose animation. As shown in Fig. 14, beyond daily motions, our pose animator can syn- thesize complex interactive motions involving high dynam- ics (1st row) and close contact (2nd row) between two peo- ple, benefiting from the strong interaction dynamics learned from high-quality mocap data. Two person image human motion animation. We provide additional in-the-wild interactive pose animation results in Fig. 15. Given an interactive frame, we extract two-person poses using an off-the-shelf model [39], and animate the them with our interactive pose animator. To render the in- teraction, we use an off-the-shelf inpainting model [57] to remove the original individuals and overlay the generated motion. The results demonstrate that our model general- izes well to in-the-wild interactive poses, producing realis- tic human-human interactions. Single-person image human motion interaction genera- tion. We present additional single-person image interac- tion motion generation results on the Motion-X dataset [33] in Fig. 16. Given a single-person image, we first extract the pose using an off-the-shelf pose estimator [4] and then generate interactive poses with our interactive pose genera- tor. As shown, our model synthesizes plausible interactions Input Interaction Animation (left→right: time steps) Figure 15. Interactive pose image animation on FlickrCI3D dataset [9]. Left shows the input image, right shows the animated interaction motions. Interactive-pose frame is labeled in green box. Our model generalizes well to in-the-wild interactive poses, producing realistic human-human interaction dynamics. from diverse single-person inputs. Finally, we apply our in- teractive pose animator to generate two-person dynamics, demonstrating its effectiveness in challenging in-the-wild scenarios. Interactive pose animation. We provide additional vi- sualizations of interactive pose animation on the Inter- X dataset [65], Dual-Human dataset [7], and Duolando dataset [53] in Fig. 17. Our model could successfully syn- Input Interaction Animation (left→right: time steps) One person pushes the other Figure 16. Single-person pose interaction generation on Motion-X dataset [33]. Left shows the single person image input, right shows the generated two-person interaction dynamics. The generated interactive pose frame is labeled in magenta box. The bottom row show the single-pose input with accompanying text input. Given different single-person poses, our interactive pose generator produces plausible interactive poses under flexible conditions, while our interactive pose animator synthesizes realistic human-human motions. Our model demonstrates strong performance in challenging in-the-wild settings. thesize realistic dancing motions from out-of-domain inter- active poses on the unseen Duolando dataset. We further evaluate our method on the InterHuman dataset [31], a more challenging out-of-distribution bench- mark, with results shown in Fig. 18. InterHuman provides SMPLH [36] annotations for two-person interactions, pri- marily for text-to-motion generation, but with less accurate contact. To fit our framework, we convert the SMPLH [36] representation to SMPLX [40] and extract interactive poses from the test sequences. Despite annotation noise and di- verse pose distributions, our model produces realistic and coherent interactions, demonstrating strong generalization of the interactive pose prior. We also provide a qualitative comparison with two baselines—InterGen* and the random-pose variant (see Tab. 2)—in Fig. 19. InterGen [31] and the random-pose model exhibit poorer contact and more body penetration than ours, highlighting the effectiveness of interactive pose priors for realistic contact and interaction synthesis. Text-to-interaction synthesis. We present additional text- to-interaction motion synthesis results in Fig. 20. Our method effectively generates realistic two-person interac- tions from short phrases or simple words. By leveraging an intermediate interactive pose representation, our approach ensures consistent interaction and maintains accurate con- tact between the two individuals. Single pose-to-interaction motion synthesis. We present single pose-to-interaction motion synthesis results on the Inter-X [65] and Dual-Human [7] datasets in Fig. 21. As shown, our method generates appropriate interactive poses from various input poses while effectively capturing vivid underlying human dynamics. Interactive Pose Interaction Animation (left→right: time steps) Inter-X Dataset [65] Dual-Human Dataset [7] Duolando Dataset [53] Figure 17. More interactive pose animation visualization on Inter-X dataset [65], Dual-Human dataset [7], Duolando dataset [53]. Our pose animator generalizes well to out-of-domain interactive poses and synthesizes realistic dancing motions on the unseen Duolando two-person dancing motion dataset. Interactive Pose Interaction Animation (left→right: time steps) Figure 18. Interhuman dataset [31] interactive pose animation results. We convert dataset provided SMPLH [36] to SMPLX [40] representation and select interactive poses from test motion sequences. Despite contact inaccuracies due to dataset conventions and pose variations, our model synthesizes reasonable motions, demonstrating the strong generalization capability of interactive poses for guiding human interaction animation. Interactive Pose Interaction Animation (left→right: time steps) InterGen [31] W/O anchor Ours Figure 19. Interactive pose animation comparison on Inter-X dataset [65]. Compared to InterGen [31] and model trained with random poses, our method achieves better contact and human dynamics. Both baselines exhibit severe body penetration and less accurate contact, while our approach, guided by interactive poses, ensures more realistic interactions. Input Text: One person chases the other person Input Text: One person sits down first, another sits on his/her lap Input Text: One person goes to the other person’s ear and whispers to him/her Input Text: hand shake Input Text: hug Input Text: posing Figure 20. More text-to-interaction motion synthesis results. Our method synthesizes realistic two-person interactions from short phrases or single words. Single Pose Interaction Generation (left→right: time steps) Inter-X Dataset [65] Dual-Human Dataset [7] Figure 21. Single-pose guided interaction motion synthesis result on Inter-X [65] and Dual-Human [7] datasets. The input single- person pose is shown on the left. Our method generates appropriate interactive poses from various inputs, capturing vivid underlying human dynamics.
Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation Shaowei Liu∗ Chuan Guo2† Bing Zhou2† Jian Wang2† 1 -Champaign 2Snap Inc. https://stevenlsw.github.io/ponimator/ Input Generated Interaction Animation (left→right: time steps) Two-person image Estimated interactive pose Single-person image Generated interactive pose Single-person image Generated interactive pose + "Lift the other onto his back" Figure 1. Ponimator enables versatile interaction animation applications anchored on interactive poses. For two-person images (top), Ponimator generates contextual dynamics from estimated interactive poses (green box). For single-person images (middle) with optional text prompts (bottom), Ponimator first generates partner interactive poses (magenta box) and then fulfill the interaction dynamics. Abstract Close-proximity human-human interactive poses convey rich contextual information about interaction dynamics. Given such poses, humans can intuitively infer the context and anticipate possible past and future dynamics, drawing on strong priors of human behavior. Inspired by this observation, we propose Ponimator, a simple framework anchored on proximal interactive poses for versatile interaction animation. Our training data consists of closecontact two-person poses and their surrounding temporal context from motion-capture interaction datasets. Leveraging interactive pose priors, Ponimator employs two conditional diffusion models: (1) a pose animator that uses *Work done at an internship at Snap Research NYC, Snap Inc. †Co-corresponding author the temporal prior to generate dynamic motion sequences from interactive poses, and (2) a pose generator that applies the spatial prior to synthesize interactive poses from a single pose, text, or both when interactive poses are unavailable. Collectively, Ponimator supports diverse tasks, including image-based interaction animation, reaction animation, and text-to-interaction synthesis, facilitating the transfer of interaction knowledge from high-quality mocap data to open-world scenarios. Empirical experiments across diverse datasets and applications demonstrate the universality of the pose prior and the effectiveness and robustness of our framework. Codes and video visualization can be found at https://stevenlsw.github.io/ponimator/ 16 Oct 2025 1. Introduction The interplay between humans plays a crucial role in our daily lives. These interactions convey key social signals that reflect relationships and intentions. For example, a simple hug typically expresses closeness, a handshake serves as a formal greeting, while combat indicates opposing stances. A key observation is that interactive poses in close proximity (e.g., handshake) carry rich prior information about interaction dynamics. Specifically, a pair of such poses reveals contextual cues about spatial relationships, constraints, and intent, often suggesting probable ranges of past and future motions. These interactive poses can act as a bridge for modeling interaction dynamics with reduced complexity while inherently preserving prior knowledge of close interactions. In this paper, we present Ponimator, a novel framework that leverages the dynamics priors embedded in interactive poses through a generative model, demonstrating its versatility across various interaction animation tasks. We develop this interaction prior using a combination of two highquality human-human interaction datasets: Inter-X [65] and Dual-Human [7]. From these datasets, we construct a collection of two-person poses in close proximity, as shown in Fig. 2, along with their preceding and subsequent interaction motions. Using this collection, we train a conditional diffusion model to generate contextual interaction dynamics given a pair of closely interactive poses. We first demonstrate the application of our learned poseto-dynamic interactive priors for open-domain images. Social interactions are frequently depicted in images, yet existing works [7, 9, 10, 39] typically focus only on reconstructing static interactive poses, lacking the temporal dynamics of these interactions. Meanwhile, video diffusion models [3, 16, 18] can animate images over time but often struggle to maintain motion and interaction integrity. In contrast, Ponimator seamlessly transfers learned interaction prior knowledge from high-quality 3D mocap datasets to these in-the-wild images through estimated interactive poses, as shown in Fig. 1 (top). For broader applications, we developed an additional conditional diffusion model that leverages the spatial prior to generate interactive poses from multiple input types, including text descriptions, single poses, or both. Thus, when only a single person appears in an image, Ponimator can first generate a partner pose with an optional text prompt, and then animate the interactive poses over time (see Fig. 1). Furthermore, by anchoring on these interactive poses, Ponimator is able to generate shortclip two-person motions with proximal contact (see Fig. 8) directly from text input. Our key contributions are summarized as follows: 1) We present Ponimator, a simple framework designed to learn the dynamics prior of interactive poses from motion capture data, particularly focusing on proximal human-human Figure 2. Interactive poses refer to two-person poses in proximity and close contact. The top row displays interactive (green) and non-interactive (red) poses within one sequence. Interactive poses allow observers to intuitively infer the temporal context, while non-interactive poses are more ambiguous and difficult to interpret. The bottom row showcases common daily interactive poses. interaction animations; 2) The learned prior is universal and generalizes effectively to poses extracted from open-world images, enabling animation of social interactions in human images; 3) Ponimator can generate interactive poses from a single-person pose, text, or both, combined with interactive pose animation, enabling diverse applications including reaction animation and text-to-interaction synthesis. 2. Related work Human-human Interactions in Images. Human-human interactions are prevailing in social images. Significant progress has been made in interactive pose estimation [9, 10, 39] and interaction sequence reconstruction [20, 60]. Ugrinovic et al. integrate a physical simulator into the human mesh recovery pipeline to capture the physical significance of interactive poses. Huang et al. [20] use VectorQuantised representation learning and specialized losses to learn a discrete interaction prior, but suffer from limited interpretability and generalization. In contrast, our method directly anchors on interactive poses for interaction modeling without relying on additional physical simulators or intricate model designs. Our simple and interpretable prior generalizes well to in-the-wild settings, adhering to the principle that simplicity leads to robustness. The interactive pose prior is also explored in BUDDI [39], which estimates twoperson static poses from images but is limited to static pose modeling and overlooks the rich dynamics of interactions. In contrast, our work unlocks interactive motions for both animation and generation in arbitrary open-world images. Human-human Motion Synthesis. Generating human motion dynamics has been a long-standing task [1, 2, 28, 29, 32, 38]. Utilizing generative models have gained widespread popularity recently [12-14, 25, 27, 30, 43, 44, 58, 59, 63, 64, 71, 72]. With the success of applying generative models in single-person motion synthesis and the release of large-scale two-person interaction datasets, such as InterGen [31] and Inter-X [65], there has been a surge in Interactive Pose Animator (b) Interactive Poses as Anchor (c) Unveiling Interaction Dynamics Interactive pose input Interactive Pose Generator Text prompt: "Two persons with arms on shoulders." , Text prompt , Text prompt (a) Sourcing Interactive Poses Interactive pose output Figure 3. Framework overview. Ponimator consists of a pose generator and animator, bridged by interactive poses. The generator takes a single pose, text, or both as input to produce interactive poses, while the animator unleashes interaction dynamics from static poses. research [5, 6, 11, 24, 34, 35, 45, 50, 51, 53, 58, 59, 66] focused on multi-person motion generation. However, most existing studies generate two-person motions following input text, but often overlooking close-contact dynamics. For example, Liang et al. [31] proposed a diffusion model for two-person motion generation, but it relies on detailed text input and struggles with realistic interaction. In contrast, our framework focuses on short-range interactions by leveraging generalizable interaction priors from static interactive poses, naturally ensuring physical contact between individuals and seamlessly generalizes to open-world scenarios. Human-human Motion Prediction. A body of work focuses on tracking multi-person motions from videos [22, 23, 52], forecasting future multi-person motions based on past movements [15, 42, 55, 56, 62, 67, 68] and generating reactive motion based on an individual's full motion sequence [5, 8, 11, 35, 49, 53, 66]. However, existing methods rely on long history context or full individual motions while treating interactive poses and human dynamics separately. In contrast, our approach bridges these two modalities by anchoring on interactive poses and leveraging their prior for dynamics forecasting. This integration enables our model to generate both past and future interaction dynamics while supporting flexible inputs with fewer constraints, such as text, single-pose, or both, unlocking diverse applications in animation and generation. 3. Approach Ponimator leverages interactive pose priors as intermediates for interaction animation, as shown in Fig. 3. We first introduce interactive poses and motion modeling (Sec. 3.1). Then, we present the pose animator (Sec. 3.2), which transforms interactive poses into motion, followed by the pose generator (Sec. 3.3), which generates interactive poses from various inputs. Finally, in Sec. 3.4, we explore Ponimator's applications to real-world images and text. 3.1. Interactive Pose and Motion Modeling. Interactive pose and motion. Our work defines interactive poses as the poses of two individuals in proximity and close contact. For person a, we use the SMPLX parametric body model [40] to model the pose xa = (φa, θa, γa) and shape βa ∈R10. Here, θa ∈R21×3 is the joint rotations, φa ∈R1×3 and γa ∈R1×3 represents the global orientation and translation. The interactive pose of two individuals a and b is given as xI = (xa I, xb I). An interaction motion consists of a short pose sequence X of length N, centered around an interaction moment, along with shape parameters θ of both individuals, where X = {xi}N i=1, β = (βa, βb). X includes an pair of interactive poses xI at interaction moment index I within the sequence, and its nearby past poses x1:I and future poses xI+1:N. An example of interactive pose and motion is shown in Fig. 2. Interaction motion modeling. The interactive pose xI encodes rich temporal and spatial priors. As shown Fig. 2, interactive poses convey motion dynamics (top row) and spatial relationships (bottom row) between individuals. The strong prior make it easier for models to learn, whereas noninteractive poses lack clear interaction cues, making learning more challenging. Therefore, we model the interaction motion (X, β) by anchoring on its interactive pose xI. ab e pla ystyle {red}{p( { {X}}; {green}{{ {x}_I}}, { })} l laystyle {blue}{p( {green}{ {x}_I}, { })} {eq:decompose} p( { {X}}, { }) = {x}{ {red}{p( { {X}}; {green}{{ {x}_I}}, { })}} {s}{ {blue}{p( {green}{ {x}_I}, { })}} (1) temporal prior spatial prior Learning prior from diffusion model. Each prior's distribution in Eq. (1) is captured by a generative diffusion model [17] G, trained on high-quality mocap data. To Two-person Image Interaction Animation Pose Estimator Interactive Pose Animator two-person image interactive pose Interaction animation Single-person Image Interaction Animation Interactive Pose Generator Pose Estimator Interactive Pose Animator single-pose image optional text: lift the other onto his back interactive pose Interaction generation Text-to-Interaction Synthesis "Two person hugging together" Interactive Pose Generator Interactive Pose Animator text Interaction generation interactive pose Figure 4. Applications. Our framework enables two-person image animation, single-person interaction generation, and text-to-interaction synthesis. For two-person images, we estimate interactive poses using an off-the-shelf model [39]. For single-person images, we first estimate the pose by [4] and generate its interactive counterpart. For text input, our unified pose generator could synthesize the pose directly. These poses are then fed into our animator to generate human dynamics. model the underlying distribution of data z0, the diffusion model introduces noise ε to the clean data z0 in the forward pass, following zt = √ ̄αtz0 + √1 - ̄αtε, ε ∼N(0, I), where αt ∈(0, 1) are constants, t is the diffusion timestep t ∈[0, Tdiffusion]. The model G aims to recover clean input by ˆz0 = G(zt, t, c) from the noisy observations zt and condition c, optimizing the objective: \ label {eq:diffusion- l oss} athc al {L}_D = {E}_{ {z}_0, , { } {N}(0, {I}),t}[\| {z}_0 - G( {z}_t, t, ) \|_2^b ] (2) During inference, the model iteratively predicts G(zt, t, c) from t = Tdiffusion to t = 0, gradually denoising the sample until it recovers the original clean data ˆz0. Close-proximity training data. We collect large-scale training data from public mocap datasets, InterX [65] and DualHuman [7], without requiring contact annotations. Interactive poses are detected by spatial proximity, and if within a threshold, we extract the pose with its past and future frames to form a 3-second interaction clip. 3.2. Unveiling Dynamics from Interactive Poses The interactive pose animator captures the temporal prior in p(X; xI, β) given an interactive pose xI and two person's shape β. The objective is to generate the motion sequences ˆX = {ˆxi}N i=1 where ˆxI ≈xI, as shown in Fig. 3 (c). Interactive pose-centered representation. We anchor the entire sequence on the interactive pose xI and define the denoising target z0 as the motion residuals with respect to interactive poses z0 = {xi-xI}N i=1 This learning objective enforces model to learn the contextual dynamics strongly shaped by interactive poses. During inference, we recover the predicted pose sequence {ˆxi}N i=1 by ˆz0 + xI. We encode the interactive time index I with a onehot vector mI ∼OneHot(I) ∈{0, 1}N, where mi I = 1 iff i = I. To better preserve the spatial structure of interactive pose at time I in pose sequences, we apply an imputation strategy to the diffusion model, where the noise input zt in Eq. (2) is substituted with ̃zt: \ l ab e l { e q: a ni m at o r } ild e { {z}}_{t} = (1- {m}_I) {z}_{t} + {m}_I {0}, = ( {m}_I, {x}_I, { }), (3) where ⊙denotes element-wise multiplication and c is the input condition. After imputation, noise is added to interactive poses (i.e., ̃zt + xI) before fed into the network. Condition encoding. The interaction time condition mI is concatenated with the initial model input along the feature dimension. We encode the remaining conditions (xI, β) by leveraging the SMPLX joint forward kinematics (FK) function FK(·, ·) to compute joint positions of interactive pose jI = (FK(xa I, βa), FK(xb I, βb)). Here, jI inherently encodes both individuals' poses and shapes. It is further embedded through a single-layer MLP and injected into the model layers via AdaIN [21]. Architecture and training. We adopt the DiT [41] architecture as our diffusion model, built on stacked Transformer blocks [61] that alternate spatial attention for human contact and temporal attention for motion dynamics. To train the model, besides diffusion loss LD in Eq. (2), we apply the SMPL loss Lsmpl as the MSE between the denoised pose sequence and the clean input. We also use an interaction loss Linter [31] and a velocity loss [59]. Lvel Input Interaction Animation (left→right: time steps) Figure 5. Interactive pose image animation on FlickrCI3D dataset [9]. Left shows the input image, right shows the animated interaction motions. Interactive-pose frame is labeled in green box. encourages contact between individuals in close proximity, while Lvel ensures motion coherence. The total loss L = λDLD+λsmplLsmpl+λinterLinter+λvelLvel. To improve robustness and generalization to noisy real-world poses, we apply augmentation by adding random noise to interactive pose xI. Please refer to Sec. A for details. 3.3. Interactive Pose Generator The interactive pose generator models p(xI, β) in Eq. (1), leveraging the spatial prior to generate xI, β from various conditions, as shown in Fig. 3(a). Unified input conditioning. Given various input conditions, including text c, single person pose (xa I, βa), or both, the model generates za 0 = (xa I, βa) and zb 0 = (xb I, βb), which together form the diffusion target z0 = (za 0, zb 0) in Eq. (2). To integrate these conditions into a unified model, we introduce two masks, mc and ma, to encode the presence of text and pose conditions, respectively. These masks are sampled independently from a Bernoulli distribution with probability pcondition during training. We modify the model input zt and text condition c to ̃c in Eq. (2) as: \ l abe l {e q :g e n er a to r} \ til de {\ m athbf {z}}_{t} = ((1- {m}_a) {z}_{t}^a + {m}_a {z}_0^a, {z}_{t}^b), { } = {m}_c . {-1em} (4) This design enables the model to accommodate multiple combinations of conditions. In SMPL, human shapes are coupled with genders g ∈ {male, female, neutral}. To enable a more generic shape condition, we instead use the global joint positions of rest pose j{a,b} rest , which inherently capture both shape and gender information, and define the diffusion target as z0 = (x{a,b} I , j{a,b} rest ). After generation, we can recover β{a,b} from j{a,b} rest using inverse kinematics (IK). Architecture and training. We use the same architecture as pose animator with modifications below. (1) The text condition c is encoded via CLIP [48], processed by two trainable Transformer layers, and injected by AdaLN [21]. (2) We retain spatial attention layers and remove temporal attentions. The model is trained with standard diffusion loss LD in Eq. (2), SMPL loss Lsmpl, and bone length loss Lbone minimizes the MSE with ground-truth lengths in the SMPLX [40] kinematic tree. Total loss L = λDLD + λsmplLsmpl + λboneLbone. Please see Sec. A for details. 3.4. Applications Our framework supports two-person interactive pose image animation, single-person pose interaction generation, and text-to-interaction synthesis, as shown in Fig. 4. Interactive pose image animation. As shown in 1st row of Fig. 4, given a two-person image, we estimate the interactive pose ˆxI using an off-the-shelf model [39]. The Input Interaction Generation (left→right: time steps) "two person pose for a photo" "one person lift another one up" Figure 6. Single-person image interaction generation on Motion-X [33] dataset. Left shows the single person image input, right shows the generated two-person interaction dynamics. The generated interactive pose frame is labeled in magenta box. Top two rows display single-person pose inputs, while the bottom two show the same with accompanying text below the input image. estimated pose is fed into our interactive pose animator (Sec. 3.2) to generate motions guided by the temporal prior in interactive poses. Our model provides flexible interaction timing control by adjusting I in Eq. (3), where I = 0 predicts future motion, I = N reconstructs the past, and generally, n = N 2 enables symmetric animation. Open-world animation results are shown in Fig. 5. Single-person pose interaction generation. As shown in the 2nd row of Fig. 4, given a single-person image, we estimate the pose ˆxa I using off-the-shelf model such as [4] and feed it into our interactive pose generator (Sec. 3.3). We set ma = 0, mc = 0 in Eq. (4) as model input, disabling text input and allowing ˆxa I to generate its interactive counterpart xb I using the spatial prior in interactive poses. Alternatively, setting mc = 1 enables additional text conditioning. Once the interactive pose ˆxI = (ˆxa I, ˆxb I) is obtained, it is fed into the interactive pose animator (Sec. 3.2) to synthesize motion dynamics. Open-world results are presented in Fig. 6. Text-to-interaction synthesis. As shown in 3rd row of Fig. 4, given a short phrase, we generate the interactive pose ˆxI by setting ma = 0, mc = 1 in Eq. (3). The generated ˆxI is then passed to the pose animator to produce the corresponding motion. Examples for "two-person hugging together" and "push" are presented in Figs. 4 and 8. 4. Experiments Implementation details. We extract interactive poses by detecting SMPL-X vertices contacts [39] below a threshold in each mocap dataset within a 3s window. The interactive pose animator has 8 layers (latent dim 1024) and is trained using AdamW [37] (LR 1e-4). All loss weights are 1 except λinter = 0.5.To handle real-world noise, we augment training by adding Gaussian noise (scale 0.02) to interactive poses. At inference, DDIM [54] samples 50 steps, generating 3s motions at 10fps in 0.24s on an A100. The interactive pose generator follows a similar setup with ptext = 0.8, ppose = 0.2, and a frozen CLIP-ViTL/14 [48] text encoder. The pose generation take 0.21s. Models are trained for 4000 epochs with batch sizes of 256 (pose animator) and 512 (pose generator). Please see Sec. A for details. Datasets. We train and test our model on two large-scale datasets: Inter-X [65] (11k sequences) and Dual-Human [7] (2k sequences). We follow the official split for Inter-X and use a 3:1 training-testing split for Dual-Human, excluding non-interactive motion sequences. Metrics. We follow the evaluation metrics in [47, 50, 59]: Frechet Inception Distance (FID), the feature distribution against ground truth (GT). We compute it by training a motion autoencoder to encode motion into features for each task; Precision (Pre.), the likelihood that generated motions fall within the real distribution; Recall (Rec.), the likelihood that real motions fall within the generated distribuInteractive Pose Interaction Animation (left→right: time steps) Inter-X test set [65] Dual-Human test set [7] Duolando dataset [53] Hi4D dataset [70] Interhuman dataset [31] Multi-person interaction animation Figure 7. Interactive pose animation on in-domain datasets (Inter-X[65], Dual-Human [7]), out-of-domain dataset (Duolando [53], Hi4D [70], Interhuman [31]), and random composed multi-person pose. Each row: left-interactive pose, right-animation sequence. Our learned interactive pose prior is universal, generalizing across datasets and enabling multi-person interactions (6th row) without modification or retraining. tion; Diversity, the variance of generated motions. We also evaluate the physics plausibility via Contact Frame Ratio (CR., %)-proportion of frames with two-person contact-and averaged Inter-person Penetration (Pene., cm). 4.1. Effectiveness of Anchoring on Interactive Poses Previous works model human-human interaction dynamics either by finetuning on single-person motion priors with interaction data (e.g., ComMDM [50], RIG [58]) or by learning interaction dynamics from scratch (e.g., InterGen [31]). In this work, we model interaction dynamics by anchoring on proximal interactive poses. To evaluate the effectiveness of these approaches, we employ a simple taskunconstrained generation. We further adapt MDM [59] to accommodate two-person motions in our setting. Ponimator seamlessly supports unconstrained generation by setMethod FID ↓Pre. ↑Rec.↑Div. →CR.→Pene.↓ GT 0.3 1.0 1.0 10.1 70.6 3.8 MDM* [59] 62.6 0.79 0.20 9.8 66.4 5.3 ComMDM [50] 88.8 0.37 0.49 10.9 44.3 4.7 RIG [58] 65.2 0.46 0.65 10.6 44.3 4.3 InterGen [31] 56.6 0.57 0.46 10.1 50.9 4.3 Ours 22.6 0.58 0.72 10.2 68.1 5.0 Table 1. Unconstrained interaction synthesis comparison on Inter-X [65] dataset. →means the closer to ground truth the better the result. Method in ∗is adapted from ours for two-person interaction. Our method largely outperforms others in motion quality and contact ratio, naturally ensuring physical contact and motion realism by anchoring on interactive poses. Inter-X Dual-Human Method FID↓Div.→CR. →Pene.↓FID↓Div.→CR. →Pene.↓ GT 0.3 10.1 70.6 3.8 2.1 12.0 70.4 3.4 InterGen* 18.9 10.6 44.4 4.3 88.8 11.9 44.3 4.1 w/o anchor 7.1 9.8 67.3 5.1 36.9 11.6 70.7 4.5 - time 6.3 10.3 66.9 5.2 30.3 12.6 67.3 5.1 - joints 5.6 10.0 67.6 5.1 29.9 12.3 70.2 4.4 random-pose 5.8 10.1 67.4 5.1 30.1 12.3 69.3 4.5 ours 5.0 9.9 68.5 5.1 24.2 11.8 70.4 4.5 Table 2. Interactive pose animation comparison on Inter-X [65] and Dual-Human [7] dataset. InterGen* is adapted to take interactive poses input but lacks explicit interaction modeling, limiting its use of pose priors. Interactive pose anchoring, condition encoding, and interactive frames are crucial for the performance. ting ma = 0 and mc = 0. Experimental results on our dataset collection from Inter-X [65] are shown in Tab. 1. We observe that previous methods [31, 50, 58] struggle to synthesize close-contact interactions, while the adapted MDM* [59] exhibits lower interaction motion quality. In contrast, by simply anchoring on interactive poses, our model achieves superior motion realism (FID of 22.6) and physical contact (contact ratio of 68.1). 4.2. Interactive Pose Animation To evaluate the interactive pose animator, we compare against baselines and key ablations on Inter-X [65] and Dual-Human [7] datasets in Tab. 2. We ablate key components of pose animator: w/o anchor removes interactive pose anchoring, replacing the denoising target z0 with {xi}N i=1; - time removes the interaction time encoding mI; - joints removes joints condition encoding; InterGen* replaces text conditions with interactive pose condition while keeping all other settings unchanged; random-pose uses Method FID↓Div.→MModality↑CR. →Pene.↓ GT 0.06 6.78 - 70.6 3.8 InterGen 2.87 6.76 1.42 39.8 3.9 w/o anchor 2.74 6.78 1.41 39.0 4.0 Ours 1.82 6.78 1.46 45.9 4.3 Table 3. Text-to-interaction synthesis results on Inter-X [65] dataset. Our unified pipeline outperforms end-to-end w/o interactive pose as anchor method in short-term interaction synthesis. InterGen [31] W/o anchor Ours Figure 8. Text-to-interaction comparison for "push". Anchored on interactive poses, our method achieves better contact and more realistic dynamics than InterGen [31] and the end-to-end baseline. random instead of interactive frames as anchor. All baselines are trained under the same setting. Tab. 2 highlights the importance of interactive pose anchoring and interaction conditioning. InterGen* overlooks input poses, resulting in poorer performance. In contrast, our method explicitly models interaction and contact and achieves better results. Universal interactive pose prior. We visualize the animated motion in Fig. 7 on in-domain datasets (InterX[65], Dual-Human [7]) and out-of-domain datasets (Duolando [53], Hi4D [70], Interhuman [31]). Our approach generalizes to unseen subjects and interactions using the universal interactive pose prior. Our model is surprisingly capable of generating interactions beyond two persons without modification or retraining (see last row in Fig. 7). Open-world two-person image animation. Our model generalizes to open-world images by extracting interactive poses from FlickrCI3D [9] dataset using [39]. As shown in Fig. 5, it transforms static poses into realistic motion. 4.3. Interaction Motion Generation We evaluate interaction motion generation on the Inter-X dataset [65] using text and single-person poses. Text-to-interaction synthesis We focus on 3s interaction generation, evaluating FID, Diversity, and MModality-the ability to generate diverse interactions from the same text [31, 59]. We compare with InterGen [31] and an end-to-end w/o interactive pose baseline, both trained and Method FID↓Pre.↑Rec.↑Div.→CR.→Pene.↓ GT 0.3 1.0 1.0 10.1 70.6 3.8 w/o anchor 40.0 0.87 0.43 9.6 67.5 5.0 Ours 27.8 0.91 0.48 9.7 73.3 5.2 Table 4. Single pose-to-interaction synthesis results on InterX [65] dataset. Compared to without anchor baseline, our method uses interactive poses for more effective interaction modeling. Single Pose Interaction Generation (left→right: time steps) W/o anchor Ours Figure 9. Single pose-to-interaction comparison on Inter-X dataset [65]. Compared to the model without interactive pose anchors, our method generates more natural human interactions. Single Pose Interactive Generation (left→right: time steps) + "two person hug" Figure 10. Diverse interactive motion generation. From a single pose, our framework generates varied interactive poses (magenta box) and motions (1st, 2nd rows) and text-driven ones (3rd row). tested on the same data. As shown in Tab. 3 and Fig. 8, they struggle with contact modeling, while ours excels in shortterm interaction generation using interactive pose priors. Interaction synthesis from single pose We evaluate single pose-to-interaction synthesis on Inter-X [65] dataset, comparing our method with an end-to-end without interactive pose baseline, which struggles in the large motion space, as shown in Tab. 4 and Fig. 9. Our method leverages interactive poses to generate diverse motions under varying input conditions in Fig. 10. Open-world single-person image animation. Our model generalizes to open-world single-person images by estimating poses [4], generating interactive counterparts, and animating motion. Fig. 6 shows results on Motion-X [65] dataset. Input Image Interaction Generation (left→right: time steps) Figure 11. Interactive human video generation. Given a single input image (left), our method generates interactive human motions that serve as intermediate results for video generation. We use an off-the-shelf human reconstruction model [46] to recover textured humans from a single image. By pairing the generated motion with an arbitrary second person and applying the corresponding textures, we can produce realistic human interaction videos. 4.4. Interaction Video Generation Our method generated interactive human motion could serve as intermediate outputs for downstream video generation. While existing video diffusion models [3, 16, 18, 26] can synthesize human videos, their motions often lack temporal consistency and realism. In contrast, our generated motions provide a stable and realistic foundation for interactive human video synthesis, either through pose-guided video diffusion models [19, 69, 73] or by texturizing motion sequences. As shown in Fig. 11, we use an off-theshelf human reconstruction model [46] to recover textured humans from a single image. The generated interactive motion is then paired with an arbitrary second person's texture to produce realistic human interaction videos. 4.5. Limitations Our method has few limitations: (1) it focus on short interaction segments; (2) it relies solely on human poses, ignoring scene context; (3) pose inaccuracies may cause contact errors and foot sliding; (4) close interactions may lead to inter-person penetration. Please refer to the Sec. B for more details. 5. Conclusion We introduce Ponimator, which integrates a pose animator and generator for interactive pose animation and generation using conditional diffusion models. The animator leverages temporal priors for dynamic motion generation, the generator uses spatial priors to create interactive poses from a single pose, text, or both. Ponimator enables open-world image interaction animation, single-pose interaction generation, and text-to-interaction synthesis, exhibiting strong generalization and realism across datasets and applications. References [1] Chaitanya Ahuja and Louis-Philippe Morency. Language2pose: Natural language grounded pose forecasting. In 3DV. IEEE, 2019. 2 [2] Okan Arikan and David A Forsyth. Interactive motion generation from examples. TOG, 2002. 2 [3] Andreas Blattmann, Robin Rombach, Huan Ling, Tim Dockhorn, Seung Wook Kim, Sanja Fidler, and Karsten Kreis. Align your latents: High-resolution video synthesis with latent diffusion models. In CVPR, 2023. 2, 9 [4] Zhongang Cai, Wanqi Yin, Ailing Zeng, Chen Wei, Qingping Sun, Wang Yanjun, Hui En Pang, Haiyi Mei, Mingyuan Zhang, Lei Zhang, Chen Change Loy, Lei Yang, and Ziwei Liu. SMPLer-X: Scaling up expressive human pose and shape estimation. In NeurIPS, 2023. 4, 6, 8, 15 [5] Baptiste Chopin, Hao Tang, Naima Otberdout, Mohamed Daoudi, and Nicu Sebe. Interaction transformer for human reaction generation. Multimedia, 2023. 3 [6] Ke Fan, Junshu Tang, Weijian Cao, Ran Yi, Moran Li, Jingyu Gong, Jiangning Zhang, Yabiao Wang, Chengjie Wang, and Lizhuang Ma. Freemotion: A unified framework for number-free text-to-motion synthesis. arXiv preprint , 2024. 3 [7] Qi Fang, Yinghui Fan, Yanjun Li, Junting Dong, Dingwei Wu, Weidong Zhang, and Kang Chen. Capturing closely interacted two-person motions with reaction priors. In CVPR, 2024. 2, 4, 6, 7, 8, 16, 17, 18, 21 [8] Yanwen Fang, Jintai Chen, Peng-Tao Jiang, Chao Li, Yifeng Geng, Eddy KF Lam, and Guodong Li. Pgformer: Proxy-bridged game transformer for multi-person highly interactive extreme motion prediction. arXiv preprint , 2023. 3 [9] Mihai Fieraru, Mihai Zanfir, Elisabeta Oneata, Alin-Ionut Popa, Vlad Olaru, and Cristian Sminchisescu. Threedimensional reconstruction of human interactions. In CVPR, 2020. 2, 5, 8, 16 [10] Mihai Fieraru, Mihai Zanfir, Teodor Szente, Eduard Bazavan, Vlad Olaru, and Cristian Sminchisescu. Remips: Physically consistent 3d reconstruction of multiple interacting people under weak supervision. NeurIPS, 2021. 2 [11] Anindita Ghosh, Rishabh Dabral, Vladislav Golyanik, Christian Theobalt, and Philipp Slusallek. Remos: Reactive 3d motion synthesis for two-person interactions. arXiv preprint , 2023. 3 [12] Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and Li Cheng. Action2motion: Conditioned generation of 3d human motions. In Multimedia, 2020. 2 [13] Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In CVPR, 2022. 13 [14] Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In ECCV. Springer, 2022. 2 [15] Wen Guo, Xiaoyu Bie, Xavier Alameda-Pineda, and Francesc Moreno-Noguer. Multi-person extreme motion prediction. In CVPR, 2022. 3 [16] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized textto-image diffusion models without specific tuning. arXiv preprint , 2023. 2, 9 [17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020. 3 [18] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. NeurIPS, 2022. 2, 9 [19] Li Hu. Animate anyone: Consistent and controllable imageto-video synthesis for character animation. In CVPR, pages 8153-8163, 2024. 9 [20] Buzhen Huang, Chen Li, Chongyang Xu, Liang Pan, Yangang Wang, and Gim Hee Lee. Closely interactive human reconstruction with proxemics and physics-guided adaption. In CVPR, 2024. 2 [21] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017. 4, 5, 13 [22] Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres, and Bernt Schiele. Arttrack: Articulated multi-person tracking in the wild. In CVPR, 2017. 3 [23] Umar Iqbal, Anton Milan, and Juergen Gall. Posetrack: Joint multi-person pose estimation and tracking. In CVPR, 2017. 3 [24] Muhammad Gohar Javed, Chuan Guo, Li Cheng, and Xingyu Li. Intermask: 3d human interaction generation via collaborative masked modelling. arXiv preprint , 2024. 3 [25] Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a foreign language. NeurIPS, 2023. 2 [26] Yang Jin, Zhicheng Sun, Ningyuan Li, Kun Xu, Hao Jiang, Nan Zhuang, Quzhe Huang, Yang Song, Yadong Mu, and Zhouchen Lin. Pyramidal flow matching for efficient video generative modeling. arXiv preprint , 2024. 9 [27] Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, and Siyu Tang. Guided motion diffusion for controllable human motion synthesis. In ICCV, 2023. 2 [28] Lucas Kovar and Michael Gleicher. Flexible automatic motion blending with registration curves. In Symposium on Computer Animation. San Diego, CA, USA, 2003. 2 [29] Lucas Kovar, Michael Gleicher, and Fr ́ed ́eric Pighin. Motion graphs. In SIGGRAPH, 2008. 2 [30] Peizhuo Li, Kfir Aberman, Zihan Zhang, Rana Hanocka, and Olga Sorkine-Hornung. Ganimator: Neural motion synthesis from a single sequence. TOG, 2022. 2 [31] Han Liang, Wenqian Zhang, Wenxuan Li, Jingyi Yu, and Lan Xu. Intergen: Diffusion-based multi-human motion generation under complex interactions. IJCV, 2024. 2, 3, 4, 7, 8, 13, 17, 19 [32] Angela S Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qixing Huang, and Raymond J Mooney. Generating animated videos of human activities from natural language descriptions. Learning, 2018. 2 [33] Jing Lin, Ailing Zeng, Shunlin Lu, Yuanhao Cai, Ruimao Zhang, Haoqian Wang, and Lei Zhang. Motion-x: A large-scale 3d expressive whole-body human motion dataset. NeurIPS, 2024. 6, 15, 17 [34] Shaowei Liu, Yang Zhou, Jimei Yang, Saurabh Gupta, and Shenlong Wang. Contactgen: Generative contact modeling for grasp generation. In ICCV, pages 20609-20620, 2023. 3 [35] Yunze Liu, Changxi Chen, and Li Yi. Interactive humanoid: Online full-body motion reaction synthesis with social affordance canonicalization and forecasting. arXiv preprint , 2023. 3 [36] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multiperson linear model. TOG, 2015. 17, 19 [37] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint , 2017. 6 [38] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks. In CVPR, 2017. 2 [39] Lea M ̈uller, Vickie Ye, Georgios Pavlakos, Michael Black, and Angjoo Kanazawa. Generative proxemics: A prior for 3d social interaction from images. In CVPR, 2024. 2, 4, 5, 6, 8, 13, 15 [40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed A. A. Osman, Dimitrios Tzionas, and Michael J. Black. Expressive body capture: 3d hands, face, and body from a single image. In CVPR, 2019. 3, 5, 17, 19 [41] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 4, 13 [42] Xiaogang Peng, Yaodi Shen, Haoran Wang, Binling Nie, Yigang Wang, and Zizhao Wu. Somoformer: Social-aware motion transformer for multi-person motion prediction. arXiv preprint , 2022. 3 [43] Mathis Petrovich, Michael J Black, and G ̈ul Varol. Actionconditioned 3d human motion synthesis with transformer vae. In ICCV, 2021. 2 [44] Mathis Petrovich, Michael J Black, and G ̈ul Varol. Temos: Generating diverse human motions from textual descriptions. In ECCV. Springer, 2022. 2 [45] Pablo Ruiz Ponce, German Barquero, Cristina Palmero, Sergio Escalera, and Jose Garcia-Rodriguez. in2in: Leveraging individual information to generate human interactions. arXiv preprint , 2024. 3 [46] Lingteng Qiu, Xiaodong Gu, Peihao Li, Qi Zuo, Weichao Shen, Junfei Zhang, Kejie Qiu, Weihao Yuan, Guanying Chen, Zilong Dong, and Liefeng Bo. Lhm: Large animatable human reconstruction model from a single image in seconds. In ICCV, 2025. 9 [47] Sigal Raab, Inbal Leibovitch, Peizhuo Li, Kfir Aberman, Olga Sorkine-Hornung, and Daniel Cohen-Or. Modi: Unconditional motion synthesis from diverse data. In CVPR, 2023. 6 [48] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML. PMLR, 2021. 5, 6 [49] Muhammad Rameez Ur Rahman, Luca Scofano, Edoardo De Matteis, Alessandro Flaborea, Alessio Sampieri, and Fabio Galasso. Best practices for 2-body pose forecasting. In CVPR, 2023. 3 [50] Yoni Shafir, Guy Tevet, Roy Kapon, and Amit Haim Bermano. Human motion diffusion as a generative prior. In ICLR, 2024. 3, 6, 7 [51] Mengyi Shan, Lu Dong, Yutao Han, Yuan Yao, Tao Liu, Ifeoma Nwogu, Guo-Jun Qi, and Mitch Hill. Towards open domain text-driven synthesis of multi-person motions. arXiv preprint , 2024. 3 [52] Bing Shuai, Alessandro Bergamo, Uta Buechler, Andrew Berneshawi, Alyssa Boden, and Joseph Tighe. Large scale real-world multi-person tracking. In ECCV. Springer, 2022. 3 [53] Li Siyao, Tianpei Gu, Zhitao Yang, Zhengyu Lin, Ziwei Liu, Henghui Ding, Lei Yang, and Chen Change Loy. Duolando: Follower gpt with off-policy reinforcement learning for dance accompaniment. arXiv preprint , 2024. 3, 7, 8, 16, 18 [54] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint , 2020. 6, 13 [55] Sebastian Starke, Yiwei Zhao, Taku Komura, and Kazi Zaman. Local motion phases for learning multi-contact character movements. TOG, 2020. 3 [56] Sebastian Starke, Yiwei Zhao, Fabio Zinno, and Taku Komura. Neural animation layering for synthesizing martial arts movements. TOG, 2021. 3 [57] Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor Lempitsky. Resolution-robust large mask inpainting with fourier convolutions. In WACV, 2022. 15 [58] Mikihiro Tanaka and Kent Fujiwara. Role-aware interaction generation from textual description. In ICCV, 2023. 2, 3, 7 [59] Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In ICLR, 2023. 2, 3, 4, 6, 7, 8, 13 [60] Nicolas Ugrinovic, Boxiao Pan, Georgios Pavlakos, Despoina Paschalidou, Bokui Shen, Jordi Sanchez-Riera, Francesc Moreno-Noguer, and Leonidas Guibas. Multiphys: multi-person physics-aware 3d motion estimation. In CVPR, 2024. 2 [61] A Vaswani. Attention is all you need. NeurIPS, 2017. 4, 13 [62] Jiashun Wang, Huazhe Xu, Medhini Narasimhan, and Xiaolong Wang. Multi-person 3d motion prediction with multirange transformers. NeurIPS, 2021. 3 [63] Jingbo Wang, Sijie Yan, Bo Dai, and Dahua Lin. Sceneaware generative network for human motion synthesis. In CVPR, 2021. 2 [64] Jingbo Wang, Yu Rong, Jingyuan Liu, Sijie Yan, Dahua Lin, and Bo Dai. Towards diverse and natural scene-aware 3d human motion synthesis. In CVPR, 2022. 2 [65] Liang Xu, Xintao Lv, Yichao Yan, Xin Jin, Shuwen Wu, Congsheng Xu, Yifan Liu, Yizhou Zhou, Fengyun Rao, Xingdong Sheng, et al. Inter-x: Towards versatile humanhuman interaction analysis. In CVPR, 2024. 2, 4, 6, 7, 8, 16, 17, 18, 19, 21 [66] Liang Xu, Yizhou Zhou, Yichao Yan, Xin Jin, Wenhan Zhu, Fengyun Rao, Xiaokang Yang, and Wenjun Zeng. Regennet: Towards human action-reaction synthesis. In CVPR, 2024. 3 [67] Qingyao Xu, Weibo Mao, Jingze Gong, Chenxin Xu, Siheng Chen, Weidi Xie, Ya Zhang, and Yanfeng Wang. Jointrelation transformer for multi-person motion prediction. In ICCV, 2023. 3 [68] Sirui Xu, Yu-Xiong Wang, and Liangyan Gui. Stochastic multi-person 3d motion forecasting. In ICLR, 2023. 3 [69] Jingyun Xue, Hongfa Wang, Qi Tian, Yue Ma, Andong Wang, Zhiyuan Zhao, Shaobo Min, Wenzhe Zhao, Kaihao Zhang, Heung-Yeung Shum, et al. Follow-your-pose v2: Multiple-condition guided character image animation for stable pose control. arXiv preprint , 2024. 9 [70] Yifei Yin, Chen Guo, Manuel Kaufmann, Juan Zarate, Jie Song, and Otmar Hilliges. Hi4d: 4d instance segmentation of close human interaction. In CVPR, 2023. 7, 8 [71] Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In CVPR, 2023. 2 [72] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv preprint , 2022. 2 [73] Shenhao Zhu, Junming Leo Chen, Zuozhuo Dai, Yinghui Xu, Xun Cao, Yao Yao, Hao Zhu, and Siyu Zhu. Champ: Controllable and consistent human image animation with 3d parametric guidance. In ECCV, 2024. 9 Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation Supplementary Material https://stevenlsw.github.io/ponimator/ Abstract The supplementary material provides implementation details, limitation analysis, qualitative results and future work. In summary, we include • Sec. A. Implementation details and model architecture of the interactive pose animator and generator. • Sec. B. Limitation analysis of our current approach. • Sec. C. Additional qualitative results of long interactive motion generation, complex interaction synthesis, two-person image animation, single-person image interaction generation, interactive pose animation, text-to-interaction motion synthesis, and single-pose-tointeraction motion synthesis. A. Implementation details Interactive pose extraction. Given a two-person pose from a motion sequence, we determine close contact by measuring the minimum distance between their SMPL-X meshe vertices. Following [39], we downsample the mesh based on predefined contact regions and compute pairwise distances. If the smallest distance is below 1.3cm, we classify the pose as a proximity pose-indicating contact between the individuals. This interactive pose is then used to train human interaction dynamics. Model architecture. Our pose animator and pose generator follow the DiT architecture [41], which consists of stacked Transformer blocks [61], each incorporating an attention mechanism and a feed-forward network (FFN). Both the animator and generator comprise 8 Transformer layers, with the animator utilizing both spatial- and temporal-attention blocks, while the generator employs only spatial attention. The model has a latent dimension of 1024, with 8-head multi-head attention, and uses the GELU activation function. The input motions are first encoded with positional encoding before being processed by Transformer blocks. The input has the shape (B, P, N, D), where B is batch size, P = 2 represents the number of individuals, and N corresponds to number of frames, and D is the dimension of diffusion target z0. Spatial attention operates along the Pdimension to model interactions between individuals, while temporal attention captures motion dynamics along the Tdimension. The model's output layer is a linear MLP, initialized with zero weights, which generates residual motion outputs. These residual motions are added to the interactive pose to produce the final output. Conditional information is incorporated into the model using Adaptive Instance Normalization [21]. Training. We apply training data augmentation to interactive poses in the interactive pose animator by adding random noise with a scale of 0.02 to account for real-world inaccuracies in pose estimation. This ensures that even if the interactive pose estimator introduces noise, the animator can still produce reasonable results. This augmentation is performed online during training. Following prior work [13, 31], we align one person's pose in the interactive pose to face the positive Z direction and center it at the origin. The interaction loss in the pose animator follows [31] and consists of a **contact loss**, which encourages contact between two individuals when their joints are close, and a **relative orientation loss**, which aligns their global orientations with the ground truth. The velocity loss Lvel, following MDM [59], ensures motion coherence by minimizing the velocity difference between the generated motion and the ground truth. For diffusion training, we use a cosine scheduler with 1000 diffusion steps and DDIM sampling [54] for 50 steps during inference. The model is trained with a learning rate of 1e-4 and weight decay of 0.00002 for 4000 epochs. The batch size is 256 for the interactive pose animator and 512 for the interactive pose generator. Training takes 2 days for the pose animator and 1 day for the pose generator on 4×A100 GPUs. Inference speed comparison. Our interactive pose generation takes 0.21s on a single A100 on average, the interactive pose animator generates 3s motion at 10fps in 0.24s, comparable to InterGen [31] which requires 0.76s for the same motion length. B. Limitation Analysis Our method has the limitations below. The common failure modes are illustrated in Fig. 12. Short motion modeling. Our method is mainly focus on short interactive motion segments. While our framework could support longer generation by interactive pose chaining as shown in Fig. 13, the benefit of interactive pose prior would diminish over time. In text-to-interaction synthesis, our framework prioritizes interactive motion-relevant information, which can result in partial rather than complete motion sequences when the input text describes extended Input Interaction Animation (left→right: time steps) Figure 12. Method limitation analysis. The first two rows show in-the-wild interactive pose animation results. In the first sample, severe interpenetration occurs as our method does not explicitly model penetration between two individuals. In the second, the generated motion is physically implausible due to the lack of scene context awareness, leading to collisions with the environment. The bottom two rows illustrate interaction motion generation from a single pose input. Due to inaccuracies in interactive pose generation, our method fails to produce realistic contact, resulting in unnatural motion. human interactions. Moreover, our pose animator-taking only interactive poses as input-cannot fully capture the semantic context or temporal ordering in text (e.g., distinguishing "lifting up" from "putting down"). Incorporating text conditioning into the pose-to-interaction stage is a promising avenue for improving text-to-interaction-specific tasks. However, since our main focus is on pose-tointeraction animation without enforced text input, this ambiguity can be a strength, enabling multiple valid and physically plausible motion interpretations from the same interactive pose. Inter-person penetrations. While our method enhances contact in two-person interactions, it does not explicitly model interpenetration between individuals. Consequently, in close-contact scenarios-such as the first row in Fig. 12-some interpenetration may occur in the generated motion sequences. Achieving a balance between realistic contact and preventing interpenetration remains a challenging problem, as enforcing strict physical constraints could compromise natural motion quality. Addressing interpenetration modeling and ensuring physically plausible twoperson interaction motion generation is an important direction for future work. Lack of scene awareness. When applied to in-thewild two-person pose animation or motion generation, our method relies solely on human pose information and ignores the surrounding environment. As a result, generated motions may appear physically implausible in certain cases, such as the 2nd row of Fig. 12, where collisions occur. Moreover, interactive poses can sometimes be ambiguFigure 13. Longer motion generation by chaining interactive poses. We reuse the last generated pose as the next input, resetting interactive time to zero, enabling sliding-window synthesis of longer motions (key-frame in magenta box). Interactive Pose Interaction Animation (left→right: time steps) Figure 14. Complex interactive pose animation. Given an interactive pose, our pose animator can synthesize high-dynamics (1st row) and close-contact (2nd row) human-human motions, leveraging the strong interactive prior learned from high-quality mocap data. ous, causing noticeable motion errors when used as the sole input. A more robust approach would integrate additional scene information (e.g. image features) to improve motion prediction and dynamics forecasting. Inaccurate contact. The interactive pose estimator or our interactive pose generator may occasionally produce inaccurate interactive poses, resulting in poor human-human contact in the generated motions, as seen in the 3rd and 4th rows of Fig. 12. These inaccuracies result in unrealistic motion due to the lack of precise interactive pose inputs. Since the pose animator primarily models temporal dynamics and depends on the interactive pose for spatial information, it often cannot correct errors arising from inaccurate interactive poses. Additionally, our generated interaction motions may exhibit artifacts such as foot sliding, a common issue in human motion synthesis. While such artifacts can often be mitigated through post-processing, we do not apply any post-processing in our examples. C. Qualitative results Longer interactive motion generation. Our framework is designed for short-term interaction generation but naturally extends to longer sequences. The pose animator takes an interactive pose together with an interactive time to synthesize both past and future motions centered on that pose. Longer sequences are produced by chaining segments in a slidingwindow manner: the last generated pose of one segment is reused as the starting pose for the next, the interactive time index is reset to zero (beginning of the new segment), and generation continues. Repeating this process yields coherent long-term interactions, as shown in Fig. 13, where key-frames are labeled in magenta box. Complex interactive pose animation. As shown in Fig. 14, beyond daily motions, our pose animator can synthesize complex interactive motions involving high dynamics (1st row) and close contact (2nd row) between two people, benefiting from the strong interaction dynamics learned from high-quality mocap data. Two person image human motion animation. We provide additional in-the-wild interactive pose animation results in Fig. 15. Given an interactive frame, we extract two-person poses using an off-the-shelf model [39], and animate the them with our interactive pose animator. To render the interaction, we use an off-the-shelf inpainting model [57] to remove the original individuals and overlay the generated motion. The results demonstrate that our model generalizes well to in-the-wild interactive poses, producing realistic human-human interactions. Single-person image human motion interaction generation. We present additional single-person image interaction motion generation results on the Motion-X dataset [33] in Fig. 16. Given a single-person image, we first extract the pose using an off-the-shelf pose estimator [4] and then generate interactive poses with our interactive pose generator. As shown, our model synthesizes plausible interactions Input Interaction Animation (left→right: time steps) Figure 15. Interactive pose image animation on FlickrCI3D dataset [9]. Left shows the input image, right shows the animated interaction motions. Interactive-pose frame is labeled in green box. Our model generalizes well to in-the-wild interactive poses, producing realistic human-human interaction dynamics. from diverse single-person inputs. Finally, we apply our interactive pose animator to generate two-person dynamics, demonstrating its effectiveness in challenging in-the-wild scenarios. Interactive pose animation. We provide additional visualizations of interactive pose animation on the InterX dataset [65], Dual-Human dataset [7], and Duolando dataset [53] in Fig. 17. Our model could successfully synInput Interaction Animation (left→right: time steps) One person pushes the other Figure 16. Single-person pose interaction generation on Motion-X dataset [33]. Left shows the single person image input, right shows the generated two-person interaction dynamics. The generated interactive pose frame is labeled in magenta box. The bottom row show the single-pose input with accompanying text input. Given different single-person poses, our interactive pose generator produces plausible interactive poses under flexible conditions, while our interactive pose animator synthesizes realistic human-human motions. Our model demonstrates strong performance in challenging in-the-wild settings. thesize realistic dancing motions from out-of-domain interactive poses on the unseen Duolando dataset. We further evaluate our method on the InterHuman dataset [31], a more challenging out-of-distribution benchmark, with results shown in Fig. 18. InterHuman provides SMPLH [36] annotations for two-person interactions, primarily for text-to-motion generation, but with less accurate contact. To fit our framework, we convert the SMPLH [36] representation to SMPLX [40] and extract interactive poses from the test sequences. Despite annotation noise and diverse pose distributions, our model produces realistic and coherent interactions, demonstrating strong generalization of the interactive pose prior. We also provide a qualitative comparison with two baselines-InterGen* and the random-pose variant (see Tab. 2)-in Fig. 19. InterGen [31] and the random-pose model exhibit poorer contact and more body penetration than ours, highlighting the effectiveness of interactive pose priors for realistic contact and interaction synthesis. Text-to-interaction synthesis. We present additional textto-interaction motion synthesis results in Fig. 20. Our method effectively generates realistic two-person interactions from short phrases or simple words. By leveraging an intermediate interactive pose representation, our approach ensures consistent interaction and maintains accurate contact between the two individuals. Single pose-to-interaction motion synthesis. We present single pose-to-interaction motion synthesis results on the Inter-X [65] and Dual-Human [7] datasets in Fig. 21. As shown, our method generates appropriate interactive poses from various input poses while effectively capturing vivid underlying human dynamics. Interactive Pose Interaction Animation (left→right: time steps) Inter-X Dataset [65] Dual-Human Dataset [7] Duolando Dataset [53] Figure 17. More interactive pose animation visualization on Inter-X dataset [65], Dual-Human dataset [7], Duolando dataset [53]. Our pose animator generalizes well to out-of-domain interactive poses and synthesizes realistic dancing motions on the unseen Duolando two-person dancing motion dataset. Interactive Pose Interaction Animation (left→right: time steps) Figure 18. Interhuman dataset [31] interactive pose animation results. We convert dataset provided SMPLH [36] to SMPLX [40] representation and select interactive poses from test motion sequences. Despite contact inaccuracies due to dataset conventions and pose variations, our model synthesizes reasonable motions, demonstrating the strong generalization capability of interactive poses for guiding human interaction animation. Interactive Pose Interaction Animation (left→right: time steps) InterGen [31] W/O anchor Ours Figure 19. Interactive pose animation comparison on Inter-X dataset [65]. Compared to InterGen [31] and model trained with random poses, our method achieves better contact and human dynamics. Both baselines exhibit severe body penetration and less accurate contact, while our approach, guided by interactive poses, ensures more realistic interactions. Input Text: One person chases the other person Input Text: One person sits down first, another sits on his/her lap Input Text: One person goes to the other person's ear and whispers to him/her Input Text: hand shake Input Text: hug Input Text: posing Figure 20. More text-to-interaction motion synthesis results. Our method synthesizes realistic two-person interactions from short phrases or single words. Single Pose Interaction Generation (left→right: time steps) Inter-X Dataset [65] Dual-Human Dataset [7] Figure 21. Single-pose guided interaction motion synthesis result on Inter-X [65] and Dual-Human [7] datasets. The input singleperson pose is shown on the left. Our method generates appropriate interactive poses from various inputs, capturing vivid underlying human dynamics.
2510.14980
Technical Report AGENTIC DESIGN OF COMPOSITIONAL MACHINES Wenqian Zhang1 Weiyang Liu2,* Zhen Liu1,*,† 1The Chinese University of Hong Kong (Shenzhen) 2The Chinese University of Hong Kong *Equal Advising †Corresponding Author besiegefield.github.io Eureka! Throw a Boulder Machine Code <boulder parent=bucket_1 attach_to=face_1> <bucket parent=wooden_block_2 attach_to=face_3> … Code Edit <boulder parent=bucket_1 attach_to=face_1> <bucket parent=log_2 attach_to=face_2> … Figure 1: The task of compositional machine design is illustrated in our BesiegeField environment. The figure shows a high-level sketch of the agentic workflow (w/ Gemini Pro 2.5), along with the resulting machines and their simulated performance. The design objective is to create a machine that throws boulders long distances. ABSTRACT The design of complex machines stands as both a marker of human intelligence and a foundation of engineering practice. Given recent advances in large lan- guage models (LLMs), we ask whether they, too, can learn to create. We ap- proach this question through the lens of compositional machine design: a task in which machines are assembled from standardized components to meet func- tional demands like locomotion or manipulation in a simulated physical environ- ment. With this simplification, machine design is expressed as writing XML-like code that explicitly specifies pairwise part connections. To support this investi- gation, we introduce BesiegeField, a testbed built on the machine-building game Besiege, which enables part-based construction, physical simulation and reward- driven evaluation. Using BesiegeField, we benchmark state-of-the-art LLMs with agentic workflows and identify key capabilities required for success, including spatial reasoning, strategic assembly, and instruction-following. As current open- source models fall short, we explore reinforcement learning (RL) as a path to improvement: we curate a cold-start dataset, conduct RL finetuning experiments, and highlight open challenges at the intersection of language, machine design, and physical reasoning. “Man is a tool-making animal.” — Benjamin Franklin 1 INTRODUCTION The history of human progress is, at its core, the history of machines, just as the ancient Greeks built the Antikythera mechanism to predict eclipses and Leonardo da Vinci envisioned machines to fly. 1 arXiv:2510.14980v2 [cs.AI] 19 Oct 2025 Technical Report Today, as large language models (LLMs) begin to approximate—and in some domains, surpass— human cognitive abilities, a natural question arises: Can computational models, like humans, conceive and create complex machines to achieve pur- poseful goals? At the heart of this question lie two tightly coupled concepts: compositionality, how parts are put together into assemblies, and functionality, the tasks these assemblies perform as they interact with external forces or inputs. While foundation models are already capable of synthesizing 3D shapes and building mechanical parts with computer-aided design (CAD) models, it is the complex compo- sitional structures, in which very different parts and components are orchestrated to smoothly move together, that realize a vast array of demands. Just as a clock emerges from the composition of sim- ple and standardized mechanical elements such as gears and flywheels, these same elements, when combined differently, can give rise to entirely different machines, such as a sewing machine. On the other hand, the same functionality may be realized by different part compositions, just as both cars and bicycles can transport a person from place to place. Put it concisely: composition is shaped by functionality, and functionality is realized through composition. Since such compositional machines can be expressed programmatically, with types, placements and articulations of parts represented in structured code that LLMs can generate and manipulate, we formalize the above question as: Can LLMs, given standardized mechanical parts and a reward function for the desired functionality, discover diverse spatial part compositions that maximize the reward and complete the task? The question is not only about the pursuit of intelligence but also about the practice of engineering. Modern design pipelines are often long and costly, especially in large-scale projects where each iter- ation demands substantial resources. These projects accumulate vast collections of documents and blueprints, making it difficult to trace, retrieve, or reuse past design efforts. Much essential know- how is passed informally across teams and generations, and in many cases, never fully recorded and since forgotten. An automated machine design system could directly address these challenges. Rather than merely mimicking patterns from historical designs, such a system should be agentic: ca- pable of exploring the exponentially large design space, leveraging prior knowledge to create novel designs for new demands and constraints, and improving them through feedback. To investigate this concretely, we introduce BesiegeField, an interactive environment built on the machine-design game of Besiege1. The environment allows for construction of simple mechanical machines with standardized and semantic parts such as gears and wheels, and supports customized physical scenar- ios in which LLM agents can test constructed machines and evaluate their dynamics and interactions. Building on BesiegeField, we benchmark state-of-the-art LLMs with different agent designs and strategies for selecting and placing basic mechanical elements to build machines for representative functional demands, a task we term compositional machine design. Through these experiments, we empirically identify key capabilities required for this task: accurate spatial reasoning, high-level knowledge of design strategies, and instruction-following in spatial domains. Since only a few proprietary LLMs achieve satisfactory results, we further investigate how reinforcement learning (RL) can improve the performance of open-source LLMs. To this end, we curate a small machine design dataset to cold-start RL finetuning, perform exploratory RL experiments, and highlight key challenges that chart directions for future research. In summary, our contributions are listed below: • We introduce and formalize the task of compositional machine design, where machines are as- sembled from standardized parts to achieve functional goals. • We present BesiegeField, an interactive environment that enables LLM agents to construct, simu- late, and evaluate compositional machines in customized physical scenarios. • We systematically benchmark state-of-the-art LLMs and different agentic workflow designs on representative machine-design tasks. • We explore RL finetuning of LLMs on this task, for which we curate a cold-start dataset, conduct experiments, and highlight the key challenges. 1https://en.wikipedia.org/wiki/Besiege_(video_game) 2 Technical Report 2 COMPOSITIONAL MACHINE DESIGN Full machine design involves many coupled elements: geometry, statics and dynamics, demand analysis, failure modes, safety, and even legal constraints (Beitz et al., 1996; Wong et al., 2025). To isolate a tractable subproblem, we focus on the structural composition of machines: how standard- ized parts are spatially arranged and mechanically linked to produce functional behavior. We refer to this task, introduced in the previous section, as compositional machine design. It captures two essential components: (i) the static geometry of a machine as a part-based assembly, and (ii) its com- patibility with functional demands, typically assessed through physical simulation. This abstraction omits considerations such as manufacturing constraints, material properties, or domain-specific reg- ulations, but retains the core spatial and behavioral reasoning challenges relevant to design. This special task of compositional machine design mirrors challenges found in other exploration domains. For example, automatic theorem proving involves a compositional and exponentially large action space, while electronic design automation (EDA) for chip layouts requires spatial reason- ing to place components of varying shapes under spatial constraints (albeit in a more regular and grid-constrained fashion than mechanical parts in machines). A unique challenge in machine de- sign, however, is its dependence on diverse long-horizon behaviors, both autonomous and non- autonomous, within an environment. Specifically, a machine may behave differently when operated in different ways (e.g., a bicycle when pedaled versus when braking) or under different external conditions (e.g., driving a car in sunny versus rainy weather). Similarly, many sophisticated ma- chines cannot function without appropriate control policies, as exemplified by aircraft that rely on fly-by-wire systems to stabilize their inherently unstable aerodynamic configurations (which would otherwise be unflyable by a human pilot alone). A key open problem is therefore how to account for the interplay among physics, control policy, and compositional structure in machine design. It is worth noting that, unlike in math theorem proving where one valid proof often suffices (even though multiple proofs may still be valued), design domains typically require generating a diverse set of candidate solutions. This diversity is essential to (i) differentiate products, (ii) adapt to un- predictable market demands, and (iii) account for uncertainty in real-world testing and deployment. Consequently, the task places greater emphasis on diversity, and a model for compositional machine design should function more like a generative model than a simple reward maximizer. 3 BESIEGEFIELD: PLAYGROUND FOR COMPOSITIONAL MACHINE DESIGN Studying the full problem of compositional machine design is challenging, as it involves the cou- pling of many interacting factors. We therefore focus on a minimalist, component-level setting in which machines are constructed primarily from cuboid primitives with clear functional semantics, together with a small set of specialized exceptions, and operate under a shared control policy in an environment governed by rigid-body and elastic mechanics. This abstraction allows us to prop- erly benchmark the capabilities of existing LLMs and to assess the upper bounds, potential, and challenges of agentic systems and RL algorithms. To this end, we create BesiegeField, an interactive environment adapted from the machine-building game Besiege, in which players design medieval machines to complete tasks such as destroying cas- tles. Powered by the built-in physics engine, BesiegeField supports physical simulation of mechani- cal systems such as vehicles and catapults in user-customized environments with terrains, obstacles, external forces (e.g., wind and gravity), and co-existing agents. The environment provides nearly 80 types of building blocks , including passive ones like drills and logs, and powered ones like pow- ered cogs and wheels. Machines are constructed by sequentially attaching new parts to vacant and attachable faces of existing blocks, starting from a root block and thus forming a “construction tree” (indeed a directly acyclic graph (DAG), in the sense of operation orders; one block can has two parents in the DAG; the actual structures may contain loops). Powered blocks can receive control commands, allowing machines to be operated precisely. During simulation, complete state infor- mation (e.g., the position and velocity of each block in the constructed machine) can be recorded for model feedback. Finally, the environment supports custom modifications and can be extended with additional block types and richer physics (e.g., simple fluid simulation). Further details are explained in Appendix B. BesiegeField is unique in balancing real-world geometry and physics, part-level semantics, and simple compositional rules. Block-stacking environments like LEGO (Fan et al., 2022) and 3 Technical Report Throw further! Go further! Figure 2: Demonstration of the machine design tasks in our experiments. (Left: car; Right: catapult). <Machine> … <Block id=“1” …> … <Pos x=“3” y=“0” z=“-0.5”> </Block> … </Machine> Position-based (Before Edit) Before Edit After Edit <Machine> … <Block id=“1” …> … <Pos x=“3” y=“0” z=“-0.5”> </Block> <Block id=“46” …> … <Pos x=“3” y=“0” z=“-2.5”> </Block> … </Machine> [ … {"type": “1", "id": 5, "parent": 1, "face_id":4}, {"type": "46", "id": 6, "parent": 5, "face_id": 1} ] Construction Tree (Before/After Edit) Position-based (After Edit) Figure 3: Demonstration of the default position-based representation and our construction tree representation. Parent block info is in blue and child info is in red. Minecraft (Fan et al., 2022; Pun et al., 2025) allow intuitive combinatorial assembly but do not natively provide realistic physical simulation and rely on generic blocks with limited semantic mean- ing. CAD modeling (Li et al., 2025) captures fine-grained geometry and interactions, but its com- plexity makes rules cumbersome and sequences prohibitively long. By contrast, BesiegeField uses semantically meaningful parts with cuboid-like construction rules-supporting realistic physics while remaining abstract enough for tractable composition. This calibrated balance enables the study of compositional creativity and geometric reasoning at a level of difficulty that both differentiates al- gorithms and permits rapid experimentation. Moreover, unlike prior environments, BesiegeField supports machine destruction, adding durability and failure analysis to the design space. 4 BENCHMARKING LLMS FOR COMPOSITIONAL MACHINE DESIGN 4.1 BENCHMARK SETTINGS Representative target machines and tasks. To benchmark and characterize the performance of different LLMs for agentic compositional machine design, we consider two conceptually simple yet representative target machines to build: car and catapult as shown in Fig. 2. While success in both requires understanding part semantics and structural syntax, car building primarily tests static relational reasoning, such as enforcing correct part orientations, symmetry, and stability; in contrast, catapult building challenges models with dynamic relational reasoning, where parts must coordinate over time to produce causal mechanical effects. Moreover, the two tasks are simple enough to be constructed with only a few blocks so that they fit within the LLM’s context window, yet complex enough to require explicit reasoning about construction strategies and causal dependencies. We evaluate the performance of cars and catapults by their moving distance and their throwing distance (i.e., the moving distance of the stone), respectively, towards a fixed and given direction. During each simulation, the generated machine will be placed at a designated position, and the active parts will be powered after a few seconds. As there can be reward hacking issues, for catapults experiments we surround the designated machine placement position with moderate-height walls. More details about the target machines, rewards, and environments can be found in Appendix B. Machine representations. In BesiegeField, the default position-based representation records all blocks with global 3D positions and uses a built-in algorithm to recover connections. Such a repre- sentation, however, does not well encode machine structures. Instead, we propose a parsimonious representation aligned with the game’s building logic, based on pairwise relative attachment relation- ships (i.e., how one part is rotated and attached to another). Details are explained in Appendix D.1.2. 4 Technical Report Consolidated Modification Plan: 1. Remove Old Base: Remove blocks [3], [4], and [6] first, then remove their parents [1], [2], and [5]. 2. Reposition Arm And Container: Move Log [10] from Rotating Block [9]’s face_id: 3 (Up) to face_id: 2 (Right). Move Container [11] from Log [10]’s face_id: 0 (Front) to face_id: 8 (Up). And Add a Ballast [35] to Rotating Block [9] at face_id: 1 (Left). 3. Build New Base: Add three Log [63] blocks to the Starting Block [0] at face_id: 1 (Back), face_id: 2 (Left), and face_id: 3 (Right). … I need to analyze the provided machine data to identify and correct the flaws. … (a) Gemini 2.5 Pro CoT (b) Modification Steps Analysis of Defects: – Severe Structural Imbalance. The machine has a tiny support base (1, 2, 3, 4, 5). The long arm with the heavy Boulder 12 will create a large torque, causing the machine to tip over backward (towards z-). … – Incorrect Pivot Orientation. The block 9 spins the arm horizontally. …Let's check the current setup: …Rotation around the x-axis causes a swing in the y-z plane… However, the Log 10 is attached to the 'Up’ face of the pivot. This means the arm starts pointing straight up, which is ineffective for a throw. … – Improper Container Setup. After moving the arm, I must also move the Container 11 to an ‘Up’ attachable face on the Log 10 to make it face upward y+ again. The Log 10 has several ‘Up’ attachable faces; face_id 8 is a suitable choice. … Figure 4: Example CoT of inspector agents (w/ Gemini 2.5 Pro). Blue text highlights the moderate capability of LLMs in spatial reasoning and imagined physical simulation. Thought Process: – The catapult arm (order 7 → 8 → 9) is driven by a Rotating Block whose axis is currently lying horizontally (orient x+), so the arm sweeps sideways. … – The two little cubes on the ground (order 5, 6) are the only supports and break under load. Attaching one wooden-block beam to the front of each cube forms a small skid that spreads the load. … – The light base tips because the long arm and the boulder are far forward. Adding a ballast cube (ID 35) to each rear side cube (#2 and #3) shifts the center of gravity rearwards. Modification Description: 1. Re-attach Rotating Block 7 to the upper build-point of block 1 so that its axis becomes vertical and the arm can pitch upward. 2. Reinforce the base by adding a 2-unit wooden beam in front of each ground cube (5, 6). 3. Add ballast cubes on the left and right rear frame cubes (2 and 3) to pull the center of gravity back over the supports. … (a) o3 CoT (b) Modification Steps Figure 5: Example CoT of inspector agents (w/ OpenAI o3). Red text highlights reasoning errors. Performance metrics. We evaluate our agentic systems using the following quantitative metrics: 1) file validity rate, the proportion of generated JSON files that can be successfully parsed into machine construction trees; 2) spatial validity rate, the proportion of generated machines that are free from self-collisions; 3) machine validity rate, the proportion of machines that satisfy both file and spatial validity; 4) mean and maximum simulation scores, the average and highest rewards achieved by generated machines in the environment. Environment feedback. For the simple target machines car and catapult, we consider environment feedback within a time window of 5 seconds that is long enough to characterize their designated functionalities. Specifically, for car we consider maximum speed and driving distance; for catapult, we consider boulder throwing distance and maximum height. We also record the machines’ global orientation and broken parts information (if any). Details are elaborated in Appendix D.3. 4.2 AGENTIC WORKFLOW DESIGN Single-agent setting. We first benchmark if a single LLM agent alone is capable of completing the task. Specifically, one LLM agent is provided with the environment description, the available machine components, the assembly syntax, and the functional requirements (e.g., moving an object forward). The agent generates a chain-of-thought (CoT; Wei et al. (2022)) to reason about what is needed and why, and then derives an abstract plan (e.g., connecting a lever to a container with a boulder). This plan is later translated into the construction tree representation. Iterative editing. Because compositional machine design requires both low-level spatial reasoning and high-level ideation, a single agent rarely produces satisfactory machines. We therefore also design an iterative editing workflow that involves three major agents: 1) designer, which produces an initial plan from the environment description, the available machine components, the assembly syntax, and the functional requirements; 2) refiner, a self-critic agent that which evaluates a draft 5 Technical Report Figure 7: Machines produced by agentic systems with different LLMs (Top: car; Bottom: catapult). Models Single-agent Iterative Editing Hierarchical Design Mean Max Std Mean Max Std Mean Max Std “Catapult” Task Gemini 2.5 Pro 2.30 9.0 3.86 4.67 21.95 8.68 9.83 18.19 8.35 OpenAI o3 2.87 5.22 1.96 9.14 14.01 3.71 2.00 11.11 3.98 Qwen3-Coder-480B-A35B 1.75 9.24 3.17 5.10 12.02 5.54 3.90 6.52 2.54 Doubao Seed 1.6 3.18 8.2 2.99 4.82 9.10 3.41 1.73 4.76 2.39 Claude Opus 4 1.19 4.82 2.21 1.18 4.91 2.18 2.27 9.32 4.22 DeepSeek-V3 3.50 4.86 2.17 3.07 5.24 2.55 2.41 4.93 2.58 Kimi K2 2.57 9.05 3.72 2.82 11.39 5.23 5.39 12.02 5.16 Llama 4 Scout 17B 16E 3.18 5.64 1.95 1.28 5.94 2.41 3.59 11.83 4.15 “Car” Task Gemini 2.5 Pro 33.96 40.85 6.73 34.34 41.66 13.96 29.96 41.52 7.78 OpenAI o3 15.28 32.08 8.97 14.34 35.08 11.79 28.39 36.18 11.01 Qwen3-Coder-480B-A35B 8.87 11.50 4.46 15.24 28.95 13.12 12.59 34.05 10.78 Doubao Seed 1.6 3.51 9.40 4.85 8.11 10.04 3.58 18.75 26.02 4.38 Claude Opus 4 9.83 12.98 1.28 8.07 28.04 12.48 14.56 38.67 20.69 DeepSeek-V3 9.06 10.53 3.68 8.23 18.84 7.12 17.92 31.94 12.85 Kimi K2 1.75 8.09 2.80 14.36 28.34 9.47 1.94 14.99 5.48 Llama 4 Scout 17B 16E 0.02 0.03 0.01 3.04 12.76 5.23 1.55 2.00 0.32 Table 1: Quantitative results of agentic systems with different LLMs. against requirements and constraints and proposes multiple candidate revisions at each step; 3) envi- ronment querier, an agent that runs machine simulation and summarizes the environment feedback, in the way that it always provides global information such as machine orientation throughout the trajectory but selectively reports the feedback on specific blocks (e.g., position and speed) for fur- ther machine refinement. The workflow begins with a draft from the designer that is later critiqued by an inspector, which assess the designed machine in an abstract fashion, then polished once by a refiner. The design then undergoes a fixed number of iterations, each consisting of one querier and one refiner step. At refiner stages, multiple candidates are generated for running Monte Carlo tree search (MCTS; Coulom (2006)). The best design found in this search process is selected as output. A machine that moves? Meta- Designer Designer Inspector + Refiner Refiner Active Env Querier Simulation × N Figure 6: Our agentic machine design workflow. Hierarchical construction. Inspired by typical human design processes as well as recent designs of agentic systems (Xiao et al., 2025; Teng et al., 2025; Zhang et al., 2025), we introduce a meta-designer agent that first analyzes the requirements and constraints, and then constructs a high- level blueprint of the major functional blocks (e.g., the suspension system) and their interconnections. With this blueprint in place, we adopt an autoregressive strategy to build the machine block by block: 1) we begin with the first functional block and dispatch the job to eight parallel builder agents; 2) the valid designs from this stage are evenly distributed to another eight builder agents to construct the second block; and 3) the process iterates in this manner until the entire machine is assembled. Empirically, we find that the meta-designer typically decomposes a machine into three to four functional blocks. 4.3 KEY EMPIRICAL OBSERVATIONS General observations. We find compositional machine design to be a challenging task for LLMs (Fig. 7 and Table 1), though not intractable: Gemini 2.5 Pro can consistently construct visually sen- sible machines with non-trivial performance. We find no evidence that reasoning models outperform non-reasoning ones, suggesting the main bottleneck lies in LLMs’ limited 3D understanding and/or 6 Technical Report in-context learning. We also find that LLMs, especially reasoning models, still exhibit some spatial and physical reasoning as exemplified by the CoT from Gemini Pro 2.5 (Fig. 4), much like a world model in text space. Failure patterns. We identified common failure patterns in LLM-generated machines (Fig. 17): 1) incorrect part orientations; 2) incorrect part placements, where parts attach to wrong parents; 3) instruction-following failures, where elements of the high-level blueprint are not strictly observed; 4) flawed high-level reasoning, where LLMs fail to recognize correct physics or essential components. Effect of environment feedback. It is unsurprising that with the more environment feedback the agents receive, the better performance of generated machines improve in general (Table 12). Effect of edit history. We find that edit histories are generally helpful in decreasing the number of failure attempts in creating valid machines (Table 6), which underscores the importance of longer context window of base models for efficient exploration. Hierarchical design. We observe the mean performance improves with hierarchical design only when the abstract-level reasoning on blueprints is reliable, as shown by the performance of Gemini 2.5 Pro. In the meantime, consistent with the intuition that hierarchical design is more structured and principled, it generally yields lower variance in obtained scores. Effect of CoT reasoning. As shown in Fig. 17, LLMs often fail to faithfully translate high-level ma- chine design plans in their CoT into semantically and geometrically consistent machine construction trees. To better assess the impact of CoT reasoning on high-level design, we feed the CoT gener- ated by Gemini 2.5 Pro (the best-performing model) to other LLMs, prompting them to directly output construction trees. The resulting machines generally show improved performance (Fig. 30) and highlight the critical role of high-level semantic reasoning in machine design. CoT-machine correspondence. Though the CoT often provides a reasonably high-level blueprint, agents may still generate machines that deviate from the intended structure (Fig. 17). We hypothe- size that this misalignment is a key reason many LLMs struggle to build better machines. Machine representation. We experiment with a coordinate-only representation derived from the default position-based (Appendix D.1) and our construction tree representation. Results show that the coordinate-only representation performs significantly worse (Table 7), implying that explicit structural information is necessary for LLM understanding. 3D information. We observe that (Table 5) the performance generally improves when we also feed parsed 3D information into the context of LLMs, which implies that LLMs are less capable of understanding relative spatial relationship (e.g., construction trees). 5 TOWARDS MACHINE DESIGN THROUGH REINFORCEMENT LEARNING Although agentic systems show promise in compositional machine design, simply scaling system size is unlikely to be economical, as errors compound rapidly. Like humans who internalize experi- ence, LLM agents should consolidate new knowledge into weights. We thus explore reinforcement learning with verifiable rewards (RLVR) in BesiegeField to develop machine-design capabilities. 5.1 EXPERIMENTAL SETTINGS Cold-start finetuning and dataset curation. Following recent RLVR practices (Lambert et al., 2025; Yue et al., 2025; Zhu et al., 2025a), we curated a small dataset to cold-start LLMs by aligning their reasoning process with expert CoT. Specifically, we collected textual descriptions of machine functionalities from Besiege player communities and prompted Gemini 2.5 Pro to generate corre- sponding machines. After filtering out invalid generations, we obtained 9,984 valid machine-CoT pairs. We then used this dataset to perform supervised finetuning on Qwen-2.5-14B-Instruct for 12 epochs. Additional training details are provided in Appendix F.2. Reward design. We use the reward R = is_valid × performance where is_valid indi- cates whether constraints are satisfied (Appendix D.2). For car, performance is the maximum travel distance; for catapult, it is the product of the boulder’s maximum height and distance, penal- izing solutions that are extreme in only one dimension. RL finetuning settings. We finetune agents specialized in building a single type of machine (either car or catapult), making our setup closely aligned with one-shot RLVR (Wang et al., 2025) where 7 Technical Report Models Catapult Car Validity Ratio Mean Score Max Score Validity Ratio Mean Score Max Score Qwen2.5-14B-Instruct 11/50 0.06 2.41 46/50 4.97 19.10 Qwen2.5-14B-Instruct + Cold-Start 9/50 0.11 5.54 40/50 4.67 20.23 Qwen2.5-14B-Instruct + RL 12/50 0.13 5.92 41/50 3.72 24.08 Qwen2.5-14B-Instruct + Cold-Start + RL 11/50 0.14 7.14 42/50 5.05 45.72 Table 2: Results of RLVR post-training in BesiegeField. We use Qwen2.5-14B as the backbond LLM. a single prompt is used throughout the RL process. We adopt group relative policy optimization (GRPO; Shao et al. (2024)) with LoRA parametrization (Hu et al., 2022) (rank 64) and mixed- precision training to finetune the cold-started model. We evaluate both the standard GRPO advantage estimator and the pass@k variant (Tang et al., 2025). In the latter case, due to the implementation of the RLVR framework verl (Sheng et al., 2025), the number of rollouts is set equal to k. Each experiment is run for 400 iterations on 8 A100 GPUs with per-GPU batch size of 1 and gradient accumulation of 8. We apply KL regularization with strength 0.001 to encourage the model to remain close to its initialization. 5.2 MAIN RESULTS AND OBSERVATIONS General results. As shown in Fig. 24, RL finetuning can generally improve the mean performance, mostly by increasing the percentage that machines are valid (including file validity, machine validity and satisfaction of minimum performance threshold). In the meantime, we also find that the maxi- mum reward increases in our best setting. Similar to observations in many other RLVR settings, the entropy of the output distribution quickly drops even with regularization. Pass@k advantage vs. Pass@1 advantage. Since we eventually care about the best performing de- signs, especially given the low validity rate, our default setting adopts Pass@k advantage estimator. Indeed, Pass@k finetuning is more likely to discovery promising machine designs (Fig. 22). Base Cold Start RL Figure 8: Designs at RL finetuning stages. Evolution of generated machines during finetuning. In Fig. 8, we qualitatively examine how models refine their designs over the course of finetuning. We observe that models typically make detail-level adjustments, such as shifting part positions, while keeping the same high-level design strategy rather than exploring alternative strate- gies. Although these strategies are often reasonable, the models struggle to find precise configurations that enable smooth coordination among parts. This precision is especially critical for sophisticated mechanisms like catapults to function properly. Cold-start. Not surprisingly, we find that cold-start alone does not enable models to produce satis- factory designs, and that finetuning on the cold-start model is better than on the base model (Table 2). 6 DISCUSSIONS AND INTRIGUING INSIGHTS Capabilities for compositional machine design. Although tasks such as visual understanding and generation also depend on spatial, physical, and semantic reasoning, compositional machine design introduces unique requirements for LLM capabilities. Without precise spatial placement of machine parts, a design may fail to function correctly; a gear train, for example, will not transmit rotation if the gears are misaligned. Since the design process is typically hierarchical, successful LLMs must be able to accurately translate high-level blueprints into detailed geometric designs. In addition, machine design spans both concept-level reasoning and detailed specification. This dual demand often leads to large design documents and calls for a form of “visual reasoning” expressed through text, similar to what has been studied in LLMs applied to scalable vector graphics (SVG) and CAD models (Qiu et al., 2025b; Alrashedy et al., 2025). Multimodal reasoning is also important because effective machine design typically relies on integrating textual descriptions with visual or schematic representations. In this work, however, we focus only on pure LLM-based reasoning to isolate and analyze its capabilities for compositional machine design. Challenges in agentic machine design systems. The task of machine design faces similar chal- lenges found in agentic systems in domains such as legal services and other knowledge-intensive fields. A key difficulty is the highly varied requirements and domain knowledge of different cus- 8 Technical Report tomers. To address this, LLMs need to acquire task-specific knowledge through in-context learning or finetuning. In addition, the complexity of design tasks often requires multiple agents to coordi- nate, and such pipelines can suffer error accumulation when the base LLM lacks sufficient capability. Exploration in machine design space. Different from tasks such as theorem proving, the goal of compositional machine design is to discover structures that more effectively achieve desired func- tionalities. Rather than reusing existing solutions, a practical design agent should be able to propose novel strategies, structural layouts, and part specifications as machine complexity increases. Meet- ing this requirement calls for RL finetuning methods that prevent models from collapsing into a nar- row set of strategies and structures, which recent methods aim to alleviate (Zhu et al., 2025b; Chen et al., 2025c; Cui et al., 2025; Cheng et al., 2025; Liu et al., 2025b). This demand is closely related to continual RL (Schwarz et al., 2018), since finetuned LLMs must avoid catastrophic forgetting, maintain its reasoning ability, and consolidate learned strategies, which is particularly important because large-scale machine design datasets are rare and commercially infeasible to collect. 7 RELATED WORK AND CONCLUDING REMARKS 3D graphics codes for generative modeling. There is a long history in 3D asset generation and engineering design of representing the construction of a target instance as a program or sequence of operations in a domain-specific language (Ritchie et al., 2023; Sun et al., 2025; Deng et al., 2022), which we refer to here as 3D graphics codes (Qiu et al., 2025b; Chen et al., 2025a). Unlike geomet- ric representations such as point clouds or meshes, these codes describe objects at a higher semantic level, capturing part composition, design constraints, and user operations in modeling software. Similar to programming languages, 3D graphics codes are inherently discrete and are typically gen- erated with autoregressive models trained from scratch (Yuan et al., 2024) or with LLMs finetuned on curated datasets (Kulits et al., 2025; Chen et al., 2025b). Much of the existing work centers on CAD scripts for individual parts (Wu et al., 2023; Alrashedy et al., 2025; Li et al., 2025) or Blender macros for single assets (Huang et al., 2024). Whereas recent studies on LEGO assemblies (Pun et al., 2025), Minecraft structures (Fan et al., 2022; Liu et al., 2025a), and procedural scene gen- eration (Sun et al., 2025; Chen et al., 2025a; Jones et al., 2025; Yuan et al., 2024) introduce richer compositionality, they still fall short of the task of compositional machine design, which requires assemblies that both function under physical laws and exhibit the precise geometry of real objects. LLM agents. LLM agents are language models organized to operate in iterative loops of perception and action (Yao et al., 2023b; Minaee et al., 2024; Hu et al., 2024c). They interact with external tools (Schick et al., 2023; Liu et al., 2024b; Kim et al., 2024; Qin et al., 2024), respond to signals from simulated or real environments (Savva et al., 2019; Shridhar et al., 2021), incorporate self- reflection to refine their outputs (Hu et al., 2024b; Alrashedy et al., 2025; Shinn et al., 2023; Yu et al., 2025), and are commonly organized into multi-agent systems that coordinate roles and exchange in- formation (Li et al., 2023; Chen et al., 2024; Zhang et al., 2025) . These designs move beyond one-shot text generation and establish LLMs as adaptive decision makers capable of long-horizon reasoning. Approaches that introduce search over possible solutions (Yao et al., 2023a; Putta et al., 2024; Koh et al., 2024) or reflection on prior attempts (Besta et al., 2024; Deng et al., 2024; Renze & Guven, 2024; Xiao et al., 2025; Yu et al., 2025) have enabled progress on increasingly complex tasks. LLM agents have already been used in design tasks such as code synthesis (Gao et al., 2023; Novikov et al., 2025; Madaan et al., 2023), CAD design (Alrashedy et al., 2025) and game environ- ments (Wang et al., 2024; Fan et al., 2022). Partially inspired by these developments, Makatura et al. (2023) proposed a prototypical agent-based design framework that generates mechanical structures from text prompts. Their system treats structure generation as a one-shot process and delegates the search for optimal geometric and physical parameters to external optimization tools. In contrast, our work with BesiegeField explores how LLM agents can directly and iteratively bridge compositional structures to functional goals, framing design as a process of reasoning and adaptation with both accurate simulation and intuitive physics. Reinforcement learning with verifiable rewards (RLVR). Recent studies indicate that, by running RL finetuning with verifiable rewards from simulators or verifiers, reasoning abilities emerge (Shao et al., 2024; Guo et al., 2025; Bai et al., 2022), even when single prompt is used during finetun- ing (Wang et al., 2025). Yet, many methods exhibit loss of diversity as output entropy collapses during reinforcement learning and thus do not fully enable LLMs to explore novel solutions. Exam- ples of mitigation methods include explicit entropy or KL regularization (Cui et al., 2025; Ouyang 9 Technical Report et al., 2022), Pass@k training (Tang et al., 2025; Chen et al., 2025c), and distribution-matching ob- jectives like generative flow networks (Zhu et al., 2025b; Hu et al., 2024a). BesiegeField provides verifiable rewards and thus enables direct application of RLVR to compositional machine design. Concluding remarks. We introduced compositional machine design, a simplified yet challenging task that reflects core aspects of real-world machine design. To evaluate LLM performance on this task, we developed BesiegeField, an interactive environment based on the game Besiege. Our results with agentic systems and reinforcement learning demonstrate that LLMs hold promise for solving this problem. While we did not exhaustively explore all designs or integrate multi-modal informa- tion, our findings underscore the need to advance fundamental LLM algorithms and capabilities, and point toward exciting future directions in machine design. ACKNOWLEDGMENT We sincerely thank the developers of Besiege for creating such an inspiring game and for fostering an open, vibrant player community, without which our exploration of this idea would not have been possible. We also extend our gratitude to the developers of BepInEx, whose work made it possible for us to unlock the full potential of Besiege. REFERENCES Kamel Alrashedy, Pradyumna Tambwekar, Zulfiqar Haider Zaidi, Megan Langwasser, Wei Xu, and Matthew Gombolay. Generating CAD code with vision-language models for 3d designs. In ICLR, 2025. 8, 9 Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harm- lessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022. 9 W Beitz, G Pahl, and K Grote. Engineering design: a systematic approach. Mrs Bulletin, 71:30, 1996. 3 Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gian- inazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In AAAI, 2024. 9 Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In ICLR, 2024. 9 Yamei Chen, Haoquan Zhang, Yangyi Huang, Zeju Qiu, Kaipeng Zhang, Yandong Wen, and Weiyang Liu. Symbolic graphics programming with large language models. arXiv preprint arXiv:2509.05208, 2025a. 9 Yongwei Chen, Yushi Lan, Shangchen Zhou, Tengfei Wang, and Xingang Pan. Sar3d: Autoregres- sive 3d object generation and understanding via multi-scale 3d vqvae. In CVPR, 2025b. 9 Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. Pass@ k training for adaptively balancing exploration and exploitation of large reasoning models. arXiv preprint arXiv:2508.10751, 2025c. 9, 10 Daixuan Cheng, Shaohan Huang, Xuekai Zhu, Bo Dai, Wayne Xin Zhao, Zhenliang Zhang, and Furu Wei. Reasoning with exploration: An entropy perspective. arXiv preprint arXiv:2506.14758, 2025. 9 Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International conference on computers and games, pp. 72–83. Springer, 2006. 6 Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Haozhan Li, Yuchen Fan, Huayu Chen, Weize Chen, et al. The entropy mechanism of reinforcement learning for reasoning language models. arXiv preprint arXiv:2505.22617, 2025. 9 10 Technical Report Boyang Deng, Sumith Kulal, Zhengyang Dong, Congyue Deng, Yonglong Tian, and Jiajun Wu. Unsupervised learning of shape programs with repeatable implicit parts. NeurIPS, 2022. 9 Yang Deng, Xuan Zhang, Wenxuan Zhang, Yifei Yuan, See-Kiong Ng, and Tat-Seng Chua. On the multi-turn instruction following for conversational web agents. arXiv preprint arXiv:2402.15057, 2024. 9 Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer. 8-bit optimizers via block-wise quantization. In ICLR, 2022. 32 Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. NeurIPS, 2022. 3, 4, 9 Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In ICML, 2023. 9 Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Peiyi Wang, Qihao Zhu, Runxin Xu, Ruoyu Zhang, Shirong Ma, Xiao Bi, et al. Deepseek-r1 incentivizes reasoning in llms through reinforce- ment learning. Nature, 645(8081):633–638, 2025. 9 Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In ICLR, 2022. 8 Edward J Hu, Moksh Jain, Eric Elmoznino, Younesse Kaddar, Guillaume Lajoie, Yoshua Bengio, and Nikolay Malkin. Amortizing intractable inference in large language models. In ICLR, 2024a. 10 Shiying Hu, Zengrong Huang, Chengpeng Hu, and Jialin Liu. 3d building generation in minecraft via large language models. In 2024 IEEE Conference on Games (CoG), pp. 1–4. IEEE, 2024b. 9 Ziniu Hu, Ahmet Iscen, Aashi Jain, Thomas Kipf, Yisong Yue, David A Ross, Cordelia Schmid, and Alireza Fathi. Scenecraft: An llm agent for synthesizing 3d scenes as blender code. In ICML, 2024c. 9 Ian Huang, Guandao Yang, and Leonidas Guibas. Blenderalchemy: Editing 3d graphics with vision- language models. In ECCV, 2024. 9 R Kenny Jones, Paul Guerrero, Niloy J Mitra, and Daniel Ritchie. Shapelib: Designing a li- brary of programmatic 3d shape abstractions with large language models. arXiv preprint arXiv:2502.08884, 2025. 9 Myeongsoo Kim, Tyler Stennett, Dhruv Shah, Saurabh Sinha, and Alessandro Orso. Leveraging large language models to improve rest api testing. In Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results, pp. 37–41, 2024. 9 Jing Yu Koh, Stephen Marcus McAleer, Daniel Fried, and Russ Salakhutdinov. Tree search for language model agents. In ICLR, 2024. 9 Peter Kulits, Haiwen Feng, Weiyang Liu, Victoria Fernandez Abrevaya, and Michael J Black. Re- thinking inverse graphics with large language models. Transactions on Machine Learning Re- search (TMLR), 2025. 9 Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brah- man, Lester James Validad Miranda, Alisa Liu, Nouha Dziri, Xinxi Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Christo- pher Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi, and Hannaneh Hajishirzi. Tulu 3: Pushing frontiers in open language model post-training. In COLM, 2025. 7 Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Com- municative agents for" mind" exploration of large language model society. NeurIPS, 2023. 9 11 Technical Report Jiahao Li, Weijian Ma, Xueyang Li, Yunzhong Lou, Guichun Zhou, and Xiangdong Zhou. Cad- llama: leveraging large language models for computer-aided design parametric 3d model genera- tion. In CVPR, 2025. 4, 9 Shunyu Liu, Yaoru Li, Kongcheng Zhang, Zhenyu Cui, Wenkai Fang, Yuxuan Zheng, Tongya Zheng, and Mingli Song. Odyssey: Empowering minecraft agents with open-world skills. In IJCAI, 2025a. 9 Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, et al. Parameter-efficient orthogonal finetuning via butterfly factorization. In ICLR, 2024a. 32 Xukun Liu, Zhiyuan Peng, Xiaoyuan Yi, Xing Xie, Lirong Xiang, Yuchen Liu, and Dongkuan Xu. Toolnet: Connecting large language models with massive tools via tool graph. arXiv preprint arXiv:2403.00839, 2024b. 9 Zhen Liu, Tim Z Xiao, Weiyang Liu, Yoshua Bengio, and Dinghuai Zhang. Efficient diversity- preserving diffusion alignment via gradient-informed gflownets. In ICLR, 2025b. 9 Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. NeurIPS, 2023. 9 Liane Makatura, Michael Foshey, Bohan Wang, Felix HähnLein, Pingchuan Ma, Bolei Deng, Megan Tjandrasuwita, Andrew Spielberg, Crystal Elaine Owens, Peter Yichen Chen, et al. How can large language models help humans in design and manufacturing? arXiv preprint arXiv:2307.14377, 2023. 9 Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Am- atriain, and Jianfeng Gao. Large language models: A survey. arXiv preprint arXiv:2402.06196, 2024. 9 Alexander Novikov, Ngân V˜u, Marvin Eisenberger, Emilien Dupont, Po-Sen Huang, Adam Zsolt Wagner, Sergey Shirobokov, Borislav Kozlovskii, Francisco JR Ruiz, Abbas Mehrabian, et al. Alphaevolve: A coding agent for scientific and algorithmic discovery. arXiv preprint arXiv:2506.13131, 2025. 9 Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. NeurIPS, 2022. 9 Ava Pun, Kangle Deng, Ruixuan Liu, Deva Ramanan, Changliu Liu, and Jun-Yan Zhu. Generating physically stable and buildable brick structures from text. In ICCV, 2025. 4, 9 Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, and Rafael Rafailov. Agent q: Advanced reasoning and learning for autonomous ai agents. arXiv preprint arXiv:2408.07199, 2024. 9 Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, dahai li, Zhiyuan Liu, and Maosong Sun. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. In ICLR, 2024. 9 Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, and Bernhard Schölkopf. Controlling text-to-image diffusion by orthogonal finetuning. In NeurIPS, 2023. 32 Zeju Qiu, Simon Buchholz, Tim Z Xiao, Maximilian Dax, Bernhard Schölkopf, and Weiyang Liu. Reparameterized llm training via orthogonal equivalence transformation. In NeurIPS, 2025a. 32 Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M Collins, Joshua B Tenenbaum, Adrian Weller, Michael J Black, and Bernhard Schölkopf. Can large language models understand symbolic graphics programs? In ICLR, 2025b. 8, 9 12 Technical Report Zeju Qiu, Weiyang Liu, Adrian Weller, and Bernhard Schölkopf. Orthogonal finetuning made scal- able. In EMNLP, 2025c. 32 Matthew Renze and Erhan Guven. Self-reflection in llm agents: Effects on problem-solving perfor- mance. arXiv preprint arXiv:2405.06682, 2024. 9 Daniel Ritchie, Paul Guerrero, R Kenny Jones, Niloy Mitra, Adriana Schulz, Karl D Willis, and Jiajun Wu. Neurosymbolic models for computer graphics. In Eurographics, 2023. 9 Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied ai research. In CVPR, 2019. 9 Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. NeurIPS, 2023. 9 Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for contin- ual learning. In ICML, 2018. 9 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathemati- cal reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. 8, 9 Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. In Proceedings of the Twentieth European Conference on Computer Systems, pp. 1279–1297, 2025. 8 Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. NeurIPS, 2023. 9 Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In ICLR, 2021. 9 Chunyi Sun, Junlin Han, Weijian Deng, Xinlong Wang, Zishan Qin, and Stephen Gould. 3d-gpt: Procedural 3d modeling with large language models. In 3DV, 2025. 9 Yunhao Tang, Kunhao Zheng, Gabriel Synnaeve, and Remi Munos. Optimizing language models for inference time objectives using reinforcement learning. In ICML, 2025. 8, 10 Fengwei Teng, Zhaoyang Yu, Quan Shi, Jiayi Zhang, Chenglin Wu, and Yuyu Luo. Atom of thoughts for markov llm test-time scaling. arXiv preprint arXiv:2502.12018, 2025. 6 Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. Transactions on Machine Learning Research (TMLR), 2024. 9 Yiping Wang, Qing Yang, Zhiyuan Zeng, Liliang Ren, Liyuan Liu, Baolin Peng, Hao Cheng, Xuehai He, Kuan Wang, Jianfeng Gao, et al. Reinforcement learning for reasoning in large language models with one training example. arXiv preprint arXiv:2504.20571, 2025. 7, 9 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In NeurIPS, 2022. 5 Melvin Wong, Yueming Lyu, Thiago Rios, Stefan Menzel, and Yew-Soon Ong. Llm-to-phy3d: Physically conform online 3d object generation with llms. arXiv preprint arXiv:2506.11148, 2025. 3 Sifan Wu, Amir Khasahmadi, Mor Katz, Pradeep Kumar Jayaraman, Yewen Pu, Karl Willis, and Bang Liu. Cad-llm: Large language model for cad generation. In NeurIPS, 2023. 9 13 Technical Report Tim Z Xiao, Robert Bamler, Bernhard Schölkopf, and Weiyang Liu. Verbalized machine learning: Revisiting machine learning with language models. Transactions on Machine Learning Research, 2025. 6, 9 Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. NeurIPS, 2023a. 9 Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In ICLR, 2023b. 9 Zhouliang Yu, Yuhuan Yuan, Tim Z Xiao, Fuxiang Frank Xia, Jie Fu, Ge Zhang, Weiyang Liu, et al. Generating symbolic world models via test-time scaling of large language models. Transactions on Machine Learning Research, 2025. 9 Haocheng Yuan, Jing Xu, Hao Pan, Adrien Bousseau, Niloy J Mitra, and Changjian Li. Cadtalk: An algorithm and benchmark for semantic commenting of cad programs. In CVPR, 2024. 9 Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? In ICML, 2025. 7 Jiayi Zhang, Jinyu Xiang, Zhaoyang Yu, Fengwei Teng, Xiong-Hui Chen, Jiaqi Chen, Mingchen Zhuge, Xin Cheng, Sirui Hong, Jinlin Wang, Bingnan Zheng, Bang Liu, Yuyu Luo, and Chenglin Wu. AFlow: Automating agentic workflow generation. In ICLR, 2025. 6, 9 Xinyu Zhu, Mengzhou Xia, Zhepei Wei, Wei-Lin Chen, Danqi Chen, and Yu Meng. The surpris- ing effectiveness of negative reinforcement in llm reasoning. arXiv preprint arXiv:2506.01347, 2025a. 7 Xuekai Zhu, Daixuan Cheng, Dinghuai Zhang, Hengli Li, Kaiyan Zhang, Che Jiang, Youbang Sun, Ermo Hua, Yuxin Zuo, Xingtai Lv, et al. Flowrl: Matching reward distributions for llm reasoning. arXiv preprint arXiv:2509.15207, 2025b. 9, 10 14 Technical Report Appendix Table of Contents A Supplementary Tables 16 B Details on the BesiegeField Environment 18 B.1 Construction Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 B.2 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 B.3 Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 B.4 Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 C Search Strategies in Machine Modification Loops 24 D Environment Settings for Agentic Design and RL Finetuning 28 D.1 Machine Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 D.2 Reward Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 D.3 Environment Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 E Challenges in Compositional Machine Design 30 E.1 Failure Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 E.2 Need for Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 E.3 Appearance vs. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 F Settings for RL Finetuning 32 F.1 Cold-Start Dataset Curation . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 F.2 Cold-Start Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 F.3 RL Experiment Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 G Additional Ablation Studies 34 H Generated Samples 40 H.1 From RL-finetuned Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 H.2 From Agentic Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 I Relations between CoT and Machines 42 J CoT Samples from Gemini 2.5 Pro 43 J.1 Single Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 J.2 Meta Designer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 J.3 Designer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 J.4 Inspector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 J.5 Environment Querier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 J.6 Refiner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 K Single-Agent Prompt 56 L Multi-Agent Prompts 58 L.1 Shared Prompts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 L.2 Designer System And User Prompt . . . . . . . . . . . . . . . . . . . . . . . . 61 L.3 Inspector System And User Prompt . . . . . . . . . . . . . . . . . . . . . . . 62 L.4 Refiner System Prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 L.5 Environment Querier System Prompt . . . . . . . . . . . . . . . . . . . . . . 65 L.6 Block Informations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 15 Technical Report A SUPPLEMENTARY TABLES Models Meta-Designer & Designer Blind Refinement Modification w/ Env Feedback Valid Mean Max Valid Mean Max Valid Mean Max "Catapult" Task Gemini 2.5 Pro 5 8.49 9.14 5 8.18 11.07 5 15.73 18.19 Claude Opus 4 4 4.17 4.36 2 5.38 5.8 2 9.10 9.32 o3 3 0 0 3 0 0 3 5.34 11.11 Qwen3-Coder-480B-A35B 6 0.75 4.5 6 1.61 5.0 6 5.21 6.52 Doubao Seed 1.6 3 4.34 4.37 3 0.31 0.49 3 4.62 4.76 DeepSeek-V3 7 0 0 7 0.98 3.18 4 4.82 4.93 Kimi K2 6 7.18 12.02 2 5.29 7.36 0 - - GPT-5 8 3.87 4.92 1 5.35 5.35 0 - - Llama 4 Scout 17B 16E 8 3.59 11.83 0 - - 0 - - "Car" Task Gemini 2.5 Pro 7 27.85 38.06 7 17.81 39.64 7 29.96 41.52 Claude Opus 4 3 2.96 3.41 3 36.18 37.05 3 26.59 38.67 o3 8 29.27 41.43 2 20.03 40.04 2 28.39 36.18 Qwen3-Coder-480B-A35B 6 5.43 8.72 6 3.90 11.25 6 11.75 34.05 Doubao Seed 1.6 5 21.80 29.91 5 13.25 26.05 5 18.75 26.02 DeepSeek-V3 3 0.27 0.47 3 16.94 29.87 3 17.92 31.94 Kimi K2 1 6.74 6.74 1 0.39 0.39 1 14.99 14.99 GPT-5 8 5.67 20.32 8 3.75 9.65 8 8.43 13.72 Llama 4 Scout 17B 16E 4 1.55 2.00 1 0.47 0.47 1 0.47 0.47 Table 3: Comparison between the performance of machines generated by different stages. The mean score is computed by taking the average of the scores of all valid machines. We sample 8 machines at the designer stage and keep only the valid machines. The maximum number of retries in the following stages is thus equal to the number of valid machines produced at the designer stage. Models Designer Blind Refinement Modification w/ Env Feedback Valid Mean Max Valid Mean Max Valid Mean Max "Catapult" Task Gemini 2.5 Pro 3 6.13 9.0 3 8.10 12.09 3 11.08 21.95 Claude Opus 4 2 4.76 4.91 0 - - 0 - - o3 8 2.87 5.22 8 2.98 9.17 8 9.14 14.01 Qwen3-Coder-480B-A35B 4 3.5 9.24 4 6.39 10.78 4 10.2 12.02 Doubao Seed 1.6 6 4.24 8.2 6 4.61 8.75 6 6.43 9.10 DeepSeek-V3 6 4.67 4.86 5 4.33 4.78 5 4.91 5.24 Kimi K2 3 6.85 9.05 2 8.31 8.97 2 11.28 11.39 GPT-5 5 1.50 1.88 5 5.86 12.77 5 7.53 9.48 Llama 4 Scout 17B 16E 7 3.63 5.64 2 5.88 6.95 2 5.12 5.94 Table 4: Performance of machines generated after different stages of the iterative editing workflow (without meta-designer). Mean scores are computed on valid machines. 16 Technical Report Models Baseline w/o Parsed 3D Information Valid Mean Max Valid Mean Max Gemini 2.5 Pro 5 8.18 11.07 5 7.13 11.36 o3 3 0 0 3 0 0 Qwen3-Coder-480B-A35B 5 1.61 5.0 0 - - Claude Opus 4 2 5.38 5.8 2 0.18 0.26 Table 5: Ablation on the effect of parsed 3D information. We compute the blind refinement score under two machine representations. The average score is computed with respect to valid machines only; 8 tries for each experiment. Models Refiner Avg Retry ↓ Refiner validity rate ↑ Baseline w/o Modify History Baseline w/o Modify History Gemini 2.5 Pro 1.42 1.33 100% 100% o3 1.94 2.37 97.87% 88.89% Qwen3-Coder-480B-A35B 2.50 2.75 82.65% 91.94% Doubao Seed 1.6 2.74 2.93 85.18% 85.18% Claude Opus 4 3.24 3.69 94.12% 53.85% DeepSeek-V3 1.54 1.68 100% 98.31% Table 6: Ablation of edit history as refiner inputs. Models Machine Validity (pass/total) Designer Score File Valid 3D Valid Final Valid Mean Max Baseline (construction tree) Gemini 2.5 Pro pro 8/8 5/8 5/8 8.49 9.14 o3 3/8 3/3 3/8 0 0 Qwen3-Coder-480B-A35B 8/8 6/8 6/8 0.75 4.5 Doubao Seed 1.6 7/8 3/7 3/8 4.34 4.37 Claude Opus 4 8/8 4/8 4/8 4.17 4.36 DeepSeek-V3 7/8 7/7 7/8 0 0 Kimi K2 6/8 6/6 6/8 7.18 12.02 Llama 4 Scout 17B 16E 8/8 8/8 8/8 3.59 11.83 Global position-based 3D representation Gemini 2.5 Pro pro 5/8 5/8 5/8 4.96 12.85 o3 0/8 - - - - Claude Opus 4 0/8 - - - - Kimi K2 5/8 4/5 4/8 0 0 Llama 4 Scout 17B 16E 0/8 - - - - Table 7: Ablation study on machine representations. 17 Technical Report B DETAILS ON THE BESIEGEFIELD ENVIRONMENT We built BesiegeField by creating plug-in modules for the game Besiege that create interfaces to allow flexible composition of parts (once certain rules are obeyed), control policies on multiple powered parts (e.g.. powered cogs), recording of state information of any block (e.g., position, ori- entation, part integrity, etc.) and settings of termination conditions (e.g., some part passing through a line). BesiegeField supports multi-process launching and thus allows for efficient parallel RL train- ing. As the game natively supports multi-player gameplay, BesiegeField can naturally be applied to multi-agent RL settings. As the game Besiege (shown in Fig. 9) is built with the (mostly) open- sourced Unity3D game engine2, BesiegeField is highly-customizable: the environment 1) natively supports modification of physical parameters, external forces, terrains and obstacles (e.g., stone buildings) and 2) allows for extension patches (known as mods3) to introduce other mechanisms, such as new block types, fluid simulation and many other components. Figure 9: Besiege editor view. B.1 CONSTRUCTION RULE Each machine is built by attaching new blocks to the existing structure, starting from a special root block. For convenience, we describe each construction step as an “attacher” block (child) connected to an “attachee” block (parent). As an attacher, each block has exactly one face available for connection; as an attachee, each block has none to several attachable faces. Once a face is used, it is considered occupied and cannot be reused. If, after construction, the free end of a block happens to coincide with an attachable face of an existing block, the two blocks are automatically connected. A few special blocks violate the rule described above, such as spring. These blocks have two ends and thus must have two parent blocks, do not have physical volume can be attached to either vacant or occupied faces of other blocks. Finally, each block can be rescaled and rotated after construction. Since post-construction scaling and rotation introduce unnecessary complexity into our pipeline, we exclude them from our experi- ments and leave their handling to future work. B.2 SIMULATION Once constructed, the machine will be placed at the designated pose indicated by the position and orientation of the starting block (not necessary near the ground, but there is a maximum height 2https://en.wikipedia.org/wiki/Unity_(game_engine) 3https://en.wikipedia.org/wiki/Video_game_modding 18 Technical Report constraint). The machine will be subject to gravity and Newtonian physical laws (rigid and elastic ones) after placement. B.3 BLOCKS Out of the 75 construction blocks provided by Besiege, we filter out a list of 27 blocks that are most relevant to build machines with classical mechanical mechanisms such as levers and trusses. • Starting Block: the root of any mechanism; initial orientation is along z+ axis. • Small Wooden Block: a cubic basic construction block. • Wooden Block: shaped like two small wooden blocks attached together. • Wooden Rod: a slender, fragile construction block. • Log: shaped like three small wooden blocks arranged in parallel. • Steering Hinge: powered; controls rotation of sub-blocks, swinging left or right along the axis perpendicular to its placement axis. • Steering Block: powered; rotates blocks along its placement axis. • Powered Wheel: radius 1m; provides ground movement. • Unpowered Wheel: identical to the powered wheel but requires external force to rotate. • Large Powered Wheel: larger version of the powered wheel (radius 3m). • Large Unpowered Wheel: unpowered version of the large powered wheel. • Small Wheel: functions like a caster wheel (e.g., shopping cart), unpowered, 1.2m long. • Roller Wheel: similar to the small wheel, but shorter (0.8m). • Universal Joint: freely rotates around its placement axis, unpowered. • Hinge: swings up and down along the axis perpendicular to its placement axis, unpowered. • Ball Joint: swings freely in all directions, unpowered. • Axle Connector: similar to a ball joint but allows unrestricted 360◦rotation. • Suspension: shaped like a wooden block, it can buffer forces from all directions. • Rotating Block: powered; motor-like block that generates torque and rotates about its local y-axis. • Grabber: grabs objects on contact and can release them. • Boulder: a large rock, loosely attached; useful for throwing. • Grip Pad: block with the highest friction. • Elastic Pad: block with the highest elasticity. • Container: typically used to hold a boulder. • Spring: can contract; one of the special blocks that can have two parent attachments (with- out occupying attachable faces). • Brace: reinforces structural strength. • Ballast: a heavy cubic block used as a counterweight. B.4 TASKS We define a set of tasks in which the goal is to construct machines within a designated building area to accomplish specific objectives. • Movement. Referred to as the car task in the main text, the objective is to build a machine capable of driving along tracks and traversing various terrains. • Throw. Referred to as the catapult task in the main text, the goal is to construct a machine that can launch boulders over long distances. To prevent unintended strategies (e.g., carry- ing the boulder instead of throwing it, or letting it roll along the ground), the building area is enclosed by a medium-height wall. 19 Technical Report • Delivery. This task requires building a machine that can transport a large stone forward across different terrains (Fig. 13). • Pick. The objective here is to design a machine that can retrieve a stone located at the bottom of a deep well (Fig. 12). For many of these tasks, we introduce multiple difficulty levels (not used in the experiments reported in this paper) to encourage progressively more sophisticated designs: • Movement and Delivery. We consider: (1) randomized terrains with stones and wooden rods (e.g., Fig. 10), (2) curved tracks (Fig. 15), and (3) obstacles such as height-limiting bars. • Throw. We design: (1) varied objectives, such as requiring the boulder to pass through an aerial ring (Fig. 14) or land precisely within a small target zone, (2) environmental factors such as wind, and (3) obstacles, including height restrictions either within the building area or along the boulder’s trajectory. Figure 10: Illustration of the task car / movement on a rocky terrain, a more difficult setting compared to the environment used for the car task in our experiments. 20 Technical Report Figure 11: Illustration of the task catapult / throw. Figure 12: Illustration of the task pick. 21 Technical Report Figure 13: Illustration of the task delivery with a bump on the track. Figure 14: Illustration of the task catapult / Throw with the objective of throwing the boulder through the target ring. 22 Technical Report Figure 15: Illustration of the task car / movement with a curved track. 23 Technical Report C SEARCH STRATEGIES IN MACHINE MODIFICATION LOOPS Apart from the MCTS strategy used in the main experiments, we also evaluate two alternatives: (i) best-of-N, where we select the best-performing machine out of N candidates, and (ii) random search, which mimics best-of-N but instead selects a random candidate. For clarity, we refer to one con- secutive “querier-refiner” call as a search node (consistent with our MCTS setup). Unlike classical MCTS or best-of-N, here each search node is allowed up to five retries to prevent child statistics from being too sparse. We perform R search rounds, each aiming to obtain 5 valid candidate machines (though this may fail; if fewer than 5 are found, the parent node’s machine is used as a candidate). Full algorithmic details are provided in Algorithm 1, Algorithm 2, and Algorithm 3. In Fig. 9 we show the improvement of machine performance with respect to the number of search rounds used. In Fig. 16 we compare the efficiency of different search methods in our agentic compositional machine design setting. Algorithm 1 Random Search Algorithm Require: Agentic Search Node N Require: Scoring function S, machine valid check function F Require: Search Round R Ensure: The Best result with the highest score 1: Input machine ori_machine 2: max_retry ←5 3: machine_last_round ←ori_machine 4: for r = 1 to R do 5: best_score ←−∞ 6: retry ←0 7: while retry < max_retry do 8: retry ←retry + 1 9: machine_next_round ←N.generate(machine_last_round) 10: if F(machine_next_round) then 11: break 12: end if 13: end while 14: score ←S(machine_next_round) 15: machine_last_round ←machine_next_round 16: end for 17: return (machine_last_round, score) Models Random Search Best-of-N MCTS Avg.I Mean Max Avg.I Mean Max Avg.I Mean Max Gemini 2.5 Pro 5 15.02 20.5 20 14.67 16.66 8 15.73 18.19 Claude Opus 4 5 7.67 7.88 18 8.18 8.50 6 9.10 9.32 o3 5 7.71 11.94 8 10.60 15.07 7 5.34 11.11 Qwen3-Coder-480B-A35B 5 4.50 7.64 11 5.61 9.87 8.5 5.21 6.52 Table 8: Ablation study on different search strategies. We compare the agentic workflow final scores. MCTS is executed for 5 rounds, with Random Search and Best-of-N also run for the same number of rounds. Avg.I denotes the average number of node expansions per search round. 24 Technical Report Algorithm 2 Best-of-N Algorithm Require: Agentic Search Node N Require: Scoring function S, machine valid check function F Require: Search Round R, number of samples n Ensure: The Best result with the highest score 1: Input machine ori_machine 2: best_score ←−∞ 3: best_machine ←ori_machine 4: max_retry ←5 5: for r = 1 to R do 6: best_score ←−∞ 7: best_machine_this_round ←best_machine 8: for i = 1 to n do 9: retry ←0 10: while retry < max_retry do 11: retry ←retry + 1 12: machinei ←N.generate(best_machine_this_round) 13: if F(machinei) then 14: break 15: end if 16: end while 17: scorei ←S(machinei) 18: if scorei > best_score then 19: best_score ←scorei 20: best_machine ←machinei 21: end if 22: end for 23: end for 24: return (best_machine, best_score) 25 Technical Report Algorithm 3 Monte Carlo Tree Search (MCTS) Require: Agentic Search Node N Require: Root node root, maximum iterations MAX_ITER Require: Select(node): Traverse tree using UCB until leaf node Require: Expand(node): Generate 4 child candidates via LLM. Validate them in parallel: keep valid ones, and regenerate invalid ones until they pass or hit the max retry limit. If all fail, use the parent node as the child. Require: Simulate(node): Besiege simulation Require: Backpropagate(node, reward): Update visit counts and rewards along path Require: BestChild(node): Return child with highest simulation score Ensure: Best action from the search tree 1: root ←s0 2: max_retry ←5 3: for i = 1 to MAX_ITER do 4: retry ←0 5: node ←Select(root) ▷Step 1: Selection 6: if node == root or node.visited then 7: should_expand ←True 8: end if 9: if not should_expand then 10: child ←node ▷Unvisited leaf node; no children yet 11: end if 12: while should_expand and not all child nodes are valid do 13: retry ←retry + 1 14: child ←Expand(node) ▷Step 2: Expansion 15: if retry ≥max_retry then 16: break 17: end if 18: end while 19: reward ←Simulate(child) ▷Step 3: Simulation 20: Backpropagate(child, reward) ▷Step 4: Backpropagation 21: end for 22: return BestChild(root) ▷Return best child 26 Technical Report Models R2 R5 R10 Mean Max Mean Max Mean Max Gemini 2.5 Pro 15.04 17.31 15.73 18.19 16.44 18.19 Claude Opus 4 8.61 9.32 9.10 9.32 9.43 9.98 o3 5.33 11.11 5.34 11.11 8.46 14.52 Qwen3-Coder-480B-A35B 5.18 6.52 5.21 6.52 5.74 6.52 Table 9: Ablation study on the effect of search depth in MCTS. R2, R5, and R10 represent the running rounds of MCTS on the same search tree. Figure 16: The variation in machine average scores with the increasing number of LLM node expansion oper- ations under different search strategies. 27 Technical Report D ENVIRONMENT SETTINGS FOR AGENTIC DESIGN AND RL FINETUNING D.1 MACHINE REPRESENTATION To reduce complexity in compositional machine design, our machine representation assumes all blocks remain at their default scale and are not further rotated after attachment (note: the attachment operation itself may rotate blocks). D.1.1 GLOBAL POSITION-BASED REPRESENTATION By simplifying the default XML representation that BesiegeField receives, we obtain the global position-based representation. Below is a concrete example: [ {"type": 0, "Position": [0,0, 0], "Rotation": [0,0,0,1]}, {"type": 1, "Position": [0,0,0.5], "Rotation": [0,0,0,1]}, {"type": 2, "Position": [0,0,2.5], "Rotation": [0,0,0,1]}, {"type": 9, "Position": [0,0.5,2], "Rotation": [-0.707,0,0,0.707], "end-position": [0,2,0]} ] Basically, each block in the machine is independently recorded without mentioning its adjacent blocks. For most of the block types, only the block type and its pose (position + orientation) are recorded. For special blocks that have two parents, the other end has to be specified, for which the corresponding dictionary has an additional entry of “end-position”. D.1.2 CONSTRUCTION TREE REPRESENTATION With our parsimonious construction tree representation, the example machine above is represented by the following the following JSON list: [ {’type’: 0, ’id’: 0, ’parent’ : -1, ’face_id’ : -1}, {’type’: 1, ’id’: 1, ’parent’ : 0, ’face_id’ : 0}, {’type’: 2, ’id’: 2, ’parent’ : 1, ’face_id’ : 0}, {’type’: 9, ’id’: 3, ’parent_a’: 0, ’face_id_a’: 4, ’parent_b’: 1, ’face_id_b’: 6} ] Specifically, the ordered list of dictionaries of the machine construction JSON file represents the construction order of blocks. Each dictionary contains the following information of corresponding block: 1) “type”: block type; 2) “id”: the order ID of this block (the same as the order in the list), included so that LLMs do not have to parse it by itself; 3) “parent”, the ID of its parent block; 4) “face_id”, the face of the block’s parent to which the block is attached. In cases that the block has two parents (e.g., a string that connects both parts), we use “parent_a” and “parent_b” to record both parents; similar for “face_id”. Note: the first block with “id” 0 is always the unique starting block, of which the local position and rotation are always zero. D.2 REWARD SETTING Here we elaborate on the reward design for RL experiments in Sec. 5.1. Our reward is in the form of R = is_valid×performance where is_valid is the boolean representing machine validity and performance is the task-specific performance metric. Car. We set is_valid to 1 as long as the policy produces a machine that can be parsed from the generated construction tree and can be successfully placed into the environment without any self- collision; otherwise it is set to 0. performance is set to the distance between the starting position and the end position of the root block. 28 Technical Report Catapult. For is_valid to be 1 in this task, the machine has to satisfy an additional constraint compared to the car task: the maximum height of the boulder position during simulation must be greater than a threshold of 3m. As explained in the main text, performance for catapult is the product of the maximum height and maximum distance (towards some pre-defined direction) during simulation. D.3 ENVIRONMENT FEEDBACK In principle, we are able to obtain all state variables of each single part of a simulated machine. Due to the space complexity of the simulation results, not all information can be fed to LLM agents. Here we consider a minimal set of environment feedback information that the environment querier always gathers and returns to the refiner. Below are the minimal set of feedback information for each task: Car. 1) machine orientation; 2) machine maximum moving distance (towards a designated direc- tion); 3) machine max speed; 4) machine average speed per second; 5) machine position per 0.2 second (atomic time). Catapult. 1) boulder maximum distance (horizontal, towards a designated distance); 2) boulder maximum height; 3) boulder position per 0.2 second (atomic time). Beyond these basic pieces of feedback, the querier, after seeing the candidate machine and its sim- ulation results, in our default setting selectively extract a subset of environment feedback given its speculation on the issues of the simulated machine. For instance, parts during simulation may col- lide with each other and break. Such behavior carries important hints on why machines fail, and an LLM agent with sufficient capability in spatial and physics understanding can possibly identify the vulnerable blocks in the design. Below we elaborate on the additional information that the querier may gather: • Block index to query; • Time interval of interest (states outside this interval will not be considered); • Feedback type of interest (one or more from the list) – Position; – Orientation; – Velocity; – Length (for spring only) 29 Technical Report E CHALLENGES IN COMPOSITIONAL MACHINE DESIGN E.1 FAILURE PATTERNS Generated machines often fail in systematic ways. As shown in Fig. 17, we observe several recurring categories of errors, including flawed reasoning, structural attachment errors, incorrect part orienta- tions and failures in instruction following. These diverse failure types highlight both the reasoning and execution challenges inherent in compositional machine design. Modification Description: … 2. … Then, I will add a ballast on the opposite side of the rotating block to balance the structure and prevent tipping. (b) Incorrect parents (c) Incorrect part orientations Modification Description: 1. … Now the arm facing x+. Re-orient the Rotating Block to the right side of block 1… so the arm will face z+ … Modification Description: 1. … Add a Wooden Block (Type ID: 1) to id 1 in face_id 2 to support the container. … (d) Instruction following failures Modification Description: … 2. Add a Small Wooden Block (ID 15) straight under the Starting Block (0, face_id 5) so the root touches the ground and anchors the frame. (a) Flawed high level reasoning Figure 17: Examples to illustrate failure patterns. In each example, the original machine is shown on the left and the modified machine on the right. Failure patterns are sampled from Qwen3-Coder-480B-A35B-Instruct. E.2 NEED FOR PRECISION In Fig. 18 we present a simple example to illustrate how the task of compositional machine design requires high precision in the spatial design of configurations of different parts. Even though the high-level design is feasible, the machine in the top row fails to throw the boulder out due to the incorrect position of the container. E.3 APPEARANCE VS. PERFORMANCE As illustrated in Fig. 19, a machine’s appearance does not necessarily reflect its actual performance. A design that seems well-aligned with human intuition can fail dramatically, while one that looks awkward or unintuitive may achieve superior results. For LLMs to design machines that are both effective and visually intuitive to humans, reward functions must account not only for task perfor- mance but also for stability and other factors that shape human perception of functionality. 30 Technical Report Figure 18: Illustration of how machines built with feasible high-level designs may fail due to inaccurate part placement. Machine sampled from Gemini 2.5 Pro. Left: designed machines; Right: simulation results. Simulation Time Figure 19: Boulder-throwing trajectories for various machine designs generated by Gemini 2.5 Pro. From left to right, each row first shows the machine design, followed by a time-lapsed bird’s-eye view of its throw. 31 Technical Report F SETTINGS FOR RL FINETUNING F.1 COLD-START DATASET CURATION Noticing that Gemini 2.5 Pro produces the most satisfactory machines with reasonable CoT, we adopt the single-agent generation setting and collect Gemini-generated machines along with their CoT. We curated 100 design objectives: 75 captions of machines created by Besiege players from the Internet and 25 authored by us. These 25 prompts are constructed by 1) first writing down simple design objectives that are realizable by BesiegeField and can emerge from some simple rewards, and 2) then introducing environment constraints and machine-specific requirements. Using this prompt dataset, we generate 250 machines per prompt, and after filtering out inappropriate ones (those that fail to parse, cannot be built in the environment, or do not have a specific physics-driven functionality, e.g., a statue), we obtain 9,984 machines with their corresponding CoT. A sample gallery is shown in Fig. 20. We present examples in the curated prompt set: 1. Build a machine that can provide an exciting spinning amusement ride experience. 2. Build a machine that can mimic the movements of a humanoid figure for entertainment or functional demonstrations. 3. Build a machine that can glide smoothly over snow or ice. Below we present the text prompts with our simple authoring strategy, which can possibly be scaled with LLMs: -Additional Environment Constraints- 1. On an uneven, bumpy straight road, build a small car that must travel in a straight line to the finish. 2. On a straight road stands a stone wall; build a battering ram that must accelerate straight ahead, smash the wall, and finish with minimal damage to the machine. -Modified Demands for Target Machines- 1. Build a tall tower that must keep a heavy block (id 36) at 15 m height for 5 s without collapse. 2. On a straight road stands a 10 m high wall; build a siege ladder that must advance, extend its top above the wall, and remain upright throughout. F.2 COLD-START DETAILS In our experiment, we use Qwen2.5-14B-Instruct as the base model and train it on the Gemini syn- thesized dataset. To save GPU memory, we employ the parameter-efficient quantized OFT (QOFT) technique (Qiu et al., 2025c; 2023; Liu et al., 2024a; Qiu et al., 2025a) for updating the model param- eters, with OFT block size 64. We use 8-bit training with the 8-bit AdamW optimizer implmented with bitsandbytes (Dettmers et al., 2022), a learning rate of 1e-6 and a linear warmup schedule (3% of the total training steps). F.3 RL EXPERIMENT DETAILS We use verl framework to implement our RL experiments. The LLM is finetuned from Qwen2.5- 14B-Instruct (with LoRA of rank 64 on all linear layers) using the Gemini-synthesized dataset de- scribed above. We set learning rate to 5e-6 with gradient clipping threshold set to 0.5. The GRPO advantage estimator uses an advantage clipping ratio of 0.2. We add a KL penalty (weight 0.001) with respect to the pretrained LLM and do not introduce any entropy regularization. For rollouts, we use a temperature of 1.0 and top-p value of 0.95. Maximum input and output lengths are 3440 and 1168 tokens, respectively. We train each model for 400 update steps which take approximately 48 hours. 32 Technical Report Car “Catapult” Misc Machines Carousel Robot “Snake Crawling” Machines Figure 20: Examples of Gemini-synthesized machines. 33 Technical Report G ADDITIONAL ABLATION STUDIES Meta-Designer in hierarchical design. In Table 10, we show that how a meta-designer for hier- archical design may benefit compositional machine design. Leveraging the knowledge on existing machines, meta-designers can identify the key macro-level mechanical components that are easier to design compared to the whole task, as shown in the results for Gemini 2.5 Pro, Kimi K2 and Llama 4 Scout. However, introducing an additional stage can introduce compounding error and, if the LLM agent is not capable of integrating different macro-level mechanical components, they may lead to lower scores, which we hypothesize is the reason for the failure of hierarchical design in models like Qwen3. Moreover, we examine if the meta-designer should provide step-by-step building instruc- tion for the designer (Fig. 21), or simply provide high-level mechanical component descriptions. We find that a meta-designer that provides more detailed information is beneficial mostly when the base model is powerful enough (e.g., Gemini 2.5 Pro). Effect of feedback-free self-critic. In Table 11, we show that the inspector agent which does self- critic before running any environment simulation tend to improve performance for models like Gem- ini 2.5 Pro (the most powerful model for the task of compositional machine design in BesiegeField) but can fail drastically for models like o3. Effect of active feedback queries. In Table 12, we show that the active queries on the environment feedbacks help most of the models achieve better performance, compared to the setting with no environment feedback and that with only environment simulation final scores. Additional RL results. In Fig. 22 and 24 and , we show the maximum scores achieved in the envi- ronments with different RL methods plus the validity rate of machines. We visualize the maximum score since, in the case when one is allowed to use inference-time scaling techniques, the best per- forming machines are the ones people care most about. We show that our settings with Pass@64 training achieves the best maximum score with two different random seeds. In additiona, in Fig. 26, we visualize the corresponding Best@N metrics. For completeness, we also visualize the results with our default setting on the task car in Fig. 23, 25 and 27. Meta Designer: "Machine Structure": { … "Base Frame": { … "Guidance": "Wooden Blocks (ID 1) and Small Wooden Blocks (ID 15) are used to build a wide, heavy base that prevents tipping. They also form a tower to elevate the pivot point and a physical stop to arrest the arm's rotation, which is crucial for the release mechanism." }, "Throwing Mechanism ": { … "Guidance": "A Rotating Block (ID 22) provides the high-speed rotational power. This is attached to a Log (ID 63), which acts as a long, robust lever arm. A Container (ID 30) is placed at the end of the arm to hold the Boulder (ID 36) projectile.“ } } Detailed Meta Designer: "Machine Structure": { … "Base Frame": { … "Guidance ": "First, build the foundation. Attach a Log (63) to the left side of the Start Block (0) and another Log to the right side to create a wide horizontal base. Then, place a Small Wooden Block (15) on top of the Start Block. This will elevate the pivot point of the throwing mechanism for a better launch angle and provide a central connection point." }, "Throwing Mechanism": { … "Guidance": "On top of the Small Wooden Block from the Base Frame, place a Rotating Block (22). This will serve as the powered pivot. Attach a Log (63) to the rotating part of the Rotating Block, positioning it as the throwing arm. At the far end of this Log, attach a Container (30) with its opening facing upwards. Finally, place the Boulder (36) inside the Container." } } Figure 21: Construction guidance comparison of Meta Designer and Detailed Meta Designer, sampled with Gemini 2.5 Pro. 34 Technical Report Models Machine Validity (pass/total) Designer Score File Valid 3D Valid Final Valid Mean Max Baseline (Meta-Designer & Designer) Gemini 2.5 Pro 8/8 5/8 5/8 8.49 9.14 o3 3/8 3/3 3/8 0 0 Qwen3-Coder-480B-A35B 8/8 6/8 6/8 0.75 4.5 Doubao Seed 1.6 7/8 3/7 3/8 4.34 4.37 Claude Opus 4 8/8 4/8 4/8 4.17 4.36 DeepSeek-V3 7/8 7/7 7/8 0 0 Kimi K2 6/8 6/6 6/8 7.18 12.02 Llama 4 Scout 17B 16E 8/8 8/8 8/8 3.59 11.83 Single Agent Gemini 2.5 Pro 6/8 3/6 3/8 6.13 9.00 o3 8/8 8/8 8/8 2.87 5.22 Qwen3-Coder-480B-A35B 8/8 4/8 4/8 3.5 9.24 Doubao Seed 1.6 7/8 6/7 6/8 4.24 8.2 Claude Opus 4 8/8 2/6 2/8 4.76 4.91 DeepSeek-V3 7/8 6/7 6/8 4.67 4.86 Kimi K2 8/8 3/8 3/8 6.85 9.05 Llama 4 Scout 17B 16E 8/8 7/8 7/8 3.63 5.64 w/ detailed Meta-Designer & Designer Gemini 2.5 Pro 8/8 7/8 7/8 9.19 11.94 o3 7/8 6/7 6/8 0.92 1.18 Qwen3-Coder-480B-A35B 8/8 2/8 2/8 4.87 4.87 Doubao Seed 1.6 0/8 - - - - Claude Opus 4 7/8 5/7 5/8 4.13 4.79 DeepSeek-V3 8/8 8/8 8/8 6.12 9.0 Kimi K2 8/8 0/8 - - - Llama 4 Scout 17B 16E 8/8 7/8 7/8 4.01 6.93 Table 10: Ablation study on the meta-designer. Machine validity is evaluated in two aspects: file validity, 3D validity. Note that 3D validity requires the machine to first pass file validity. Final validity refers to a fully valid machine (satisfying both file and 3D validity). The mean simulation score is calculated based solely on the final valid outputs. Detailed Meta-Designer provides more concisely, step-by-step construction guidance to the Designer. Compared to Baseline, Single Agent is slightly harder to construct valid machines, but the simulation scores are better. Detailed Meta-Designer improves both metrics, but requires LLMs to have a strong 3D understanding and a large context window. The comparison between Meta-Designer and Detailed Meta-Designer is illustrated in Fig. 21. 35 Technical Report Models Blind Refinement Simulation Scores Baseline w/o Inspector Valid Mean Max Valid Mean Max Gemini 2.5 Pro 5 8.18 11.07 5 5.67 9.37 o3 3 0 0 3 3.08 9.24 Qwen3-Coder-480B-A35B 6 1.61 5.0 6 0.75 4.51 Doubao Seed 1.6 3 0.31 0.49 3 0.47 1.41 Claude Opus 4 2 5.38 5.8 4 5.20 8.25 DeepSeek-V3 7 0.98 3.18 7 0.38 2.16 Kimi K2 2 5.29 7.36 6 2.31 8.91 Llama 4 Scout 17B 16E 0 - - 0 - - Table 11: Ablation study on inspector agentic design. The mean simulation score is calculated based solely on the valid machines after blind refinement. Removing the inspector from the agentic flow lowers the blind refiner’s mean performance on LLMs with weaker 3D understanding, while barely affecting other models. Models Refiner Simulation Scores Baseline w/o Env Querier Score Only std Mean Max std Mean Max std Mean Max Gemini 2.5 Pro 2.47 15.73 18.19 4.18 14.89 19.77 2.05 9.68 13.18 o3 5.36 5.34 11.11 4.24 4.06 8.55 2.76 7.05 10.24 Qwen3-Coder-480B-A35B 0.95 5.21 6.52 2.32 4.05 6.89 2.53 2.81 5.56 Claude Opus 4 0.31 9.10 9.32 0.82 8.50 9.08 0.42 5.75 6.05 Doubao Seed 1.6 0.23 4.62 4.76 0.08 4.89 4.94 0.29 4.79 5.05 DeepSeek-V3 0.09 4.82 4.93 1.96 4.37 6.00 1.91 2.76 5.15 Table 12: Ablation study on the environment querier agent. For the refiner, the baseline includes simulation scores, basic environment feedback, and querier-required feedback. The "w/o env querier" setting provides only simulation scores and basic environment feedback. In the "pure score only" setting, only simulation scores are provided. Removing the environment querier causes a slight drop in average machine performance. With reward signals only, the performance markedly degrades across most LLMs. Figure 22: Catapult task machine scores across RL steps. KL regularization helps the model discover better structure designs. Pass@64 is greatly more efficient at uncovering powerful machine designs. Pass@8 (roll-out 8) outperforms Pass@1 (roll-out 64) in efficiency and matches its performance with fewer roll-outs. No cold start models lack the advanced knowledge needed to find better machines. 36 Technical Report Figure 23: Car machine scores across RL steps. The RL finetuning hyperparameter setting is the same as the base hyperparameter setting of Catapult. Machine performance slightly rises as training steps increase. Figure 24: Catapult task machine validity rate and reward non-zero rate across RL steps. The machine validity rate refers to the proportion of machines that can successfully run simulations. The reward non-zero rate represents the ratio of machines that can simulate with a non-zero reward. LLM constructs more legal machines as training steps increase, and rewards non-zero machines. Pass@8 and Pass@1 converge early. “No KL” fills roll-outs with failure cases, slowing performance gains. “No cold start” lacks design knowledge, encounters more failures than no KL, and improves validity rate most slowly. The base setting balances convergence and performance improvement. Figure 25: Car task machine validity rate and reward non-zero rate across RL steps. The machine validity converges early and remains stable during further training. 37 Technical Report Figure 26: Catapult task. Average Best@N metric. At each test step, the LLM generates 64 samples, selects the top N samples, and records the maximum score. This process is repeated 1,000 times, and the mean value is calculated. Base settings (both seeds) dominates Best@N performance; excluding base settings, “no KL” dominates the rest. Pass@1 and Pass@8 spawn only a handful of high-performance machines. No cold start produces machines of more average quality. 38 Technical Report Figure 27: Car task. Mean Best@N metrics. Similar to the machine validity rate, the Best@N performance increases quickly and remains stable in rest training periods. 39 Technical Report H GENERATED SAMPLES H.1 FROM RL-FINETUNED MODELS Here, we present some of the best RL samples from rollouts, as well as examples from the agentic workflow. Fig. 28 displays the RL rollout samples, while Fig. 29 illustrates the agentic workflow samples. 5.59 6.81 10.65 12.06 12.54 12.72 13.64 15.25 15.89 17.19 18.71 19.18 19.23 19.43 19.63 19.69 21.69 20.67 21.16 21.52 21.55 21.93 23.13 23.26 23.89 Figure 28: Qwen2.5-14B-Instruct cold started RL model catapult task sample from roll-out. Throwing dis- tances are labeled on the bottom-right corner of the image. 40 Technical Report H.2 FROM AGENTIC WORKFLOW 3.04 4.98 5.95 7.27 9.98 4.75 6.92 8.63 9.10 13.78 0.22 5.52 14.99 16.32 19.77 11.91 0.10 13.69 14.01 15.07 16.64 10.93 13.70 15.10 17.26 Qwen3-Coder-480B-A35B-Instruct Doubao Seed 1.6 o3 Gemini 2.5 Pro Claude 4 Opus Figure 29: The LLM inference gallery of machine-generated samples. The rows, from top to bottom, were inferred by the following models, respectively: Claude 4 Opus, Gemini 2.5 Pro, o3, Doubao Seed 1.6, and Qwen3-Coder-480B-A35B-Instruct. Throwing distances are labeled on the bottom-right corner of the image. 41 Technical Report I RELATIONS BETWEEN COT AND MACHINES To further investigate if high-level machine blueprint or low-level part placement is more important, we experiment with machine generation of LLMs by generating machine details conditioned on Gemini-generated CoT (instead of on CoT produced by themselves). We find that with Gemini CoT, almost all LLMs design machines that are more visually similar to "catapults", as shown in Fig. 30. We therefore hypothesize that the major gap between other LLMs, especially open-source ones, and Gemini 2.5 Pro is the abstract-level spatial and physics reasoning on machine designs. o3 Kimi K2 Claude Opus 4 Conditioned on its own CoT Conditioned on Gemini-generated CoT Figure 30: Comparison between generated machines conditioned on their own CoT or Gemini-generated CoT. 42 Technical Report J COT SAMPLES FROM GEMINI 2.5 PRO J.1 SINGLE AGENT system: <System Prompt> user: Design a machine to throw a boulder (type id 36) in a parabolic trajectory. You must use exactly one boulder. Optimize for throw distance , height, and speed. Apply the principle of leverage correctly, balancing the weight of the projectile. assistant: <think> **Analyzing Catapult Components** I’m now building a catapult with a rigid frame using ‘Wooden Block‘s to ensure stability. I plan to use a ‘Log‘ as the throwing arm, attached to a constructed pivot point. The ‘Container‘ will hold the boulder at the arm’s end. **Devising Launch Mechanisms** I’m reconsidering my approach. With the provided constraints, a trebuchet is out. The single boulder and lack of hinges necessitate a catapult. I must leverage the ‘Rubber Band‘. My plan is to design a tension-based catapult for the boulder, prioritizing stability and throwing arm efficiency. **Optimizing for Distance** I’m now focused on the tension-powered catapult. My plan is for a solid base, a pivot, a lever arm, a boulder-holding container, and rubber bands . I’m working on the arm’s length and the positioning of the rubber bands for maximum launch power and the best trajectory for the boulder. **Assessing the Lever Arm** I’m now working on the arm, crucial for force. I must decide on its length and how it attaches to the pivot. I’m considering different attachment methods to create a stable but rotatable lever, exploring options for maximum launch power and height. **Evaluating Construction Strategies** I’m stuck! I need to use a catapult given the constraints. But, the lack of a hinge restricts my options. I’m exploring different ways to connect the throwing arm to the frame and leverage the ‘Rubber Band‘ for propulsion. **Designing a Hybrid Catapult** I’ve sketched out the structure, pivoting on the ‘Rotating Block‘, with a base and uprights. The throwing arm will comprise a ‘Log‘, with a ‘ Container‘ at the tip. I am struggling with the rubber band attachment. I will have to ensure the right angles and tension. **Evaluating Rubber Band Placement** I’m now focusing on the final adjustments. With the core design finalized , the key is the ‘Rubber Band‘ placement. The goal is maximizing launch power by carefully considering the band’s attachment points and the angle of force application. I aim to achieve a long throw with this arrangement. **Evaluating Lever Arm Placement** I’m now focused on the throwing arm’s construction and its link to the rotating pivot. I’ve built the base and the pivot structure. The lever arm’s length and the location for the ‘Container‘ are my next priorities. The rubber band attachments also need to be evaluated. 43 Technical Report **Engineering the Launch Mechanism** I’m now focused on a tension-based catapult. It’s time to build, starting with a solid base and a stable pivot point using the ‘Rotating Block‘. My priorities are a strong throwing arm, the right ‘Container‘ for the boulder, and, critically, the strategic placement of the ‘Rubber Band‘s. **Refining the Design** I’m now focused on the core catapult design. The need for a simple build, given the constraints, favors a straightforward tension catapult. I am planning a solid base, a lever arm (most likely a ‘Log‘), a container for the boulder, and the crucial application of ‘Rubber Band‘s to generate the launch force. **Finalizing Catapult Design** I’ve spent time analyzing the limitations and refining the catapult build . I’ve now decided on the layout. The catapult will be built around a sturdy frame, a pivot using a ‘Rotating Block‘, an arm (lever), and the utilization of the ‘Rubber Band‘ for propulsion. I am ready for the final assembly. **Constructing the Base** I’m now focused on the basic frame. I’ve decided to construct it from logs for stability. The goal is a sturdy structure to anchor the catapult ’s components, primarily the ‘Rotating Block‘ and lever arm. I’m focusing on the geometry. **Analyzing Leverage Application** I’m now focused on catapult design. Given the one-boulder rule and lack of hinges, I’ll focus on a lever-based catapult. I am now determining the lever arm length. **Building a Lever Catapult** I’m planning a basic tension-based catapult using the ‘Rotating Block‘ as a pivot. I’m focusing on the throwing arm’s dimensions, the ‘Rubber Band ‘ placement to maximize launch power, and the ‘Container‘ block to hold the boulder. **Constructing Launch Support** I’m now focused on the basic design. With the user’s need to launch a single boulder, a tension-based catapult is optimal. I’ll build it around the ‘Rotating Block‘. I’m using a ‘Log‘ for the arm and planning a base, rubber band connections, and the boulder’s container. **Planning the Build** I’m streamlining the design. The key will be a stable base, a ‘Rotating Block‘ for the pivot, and a lever arm. I’ll use a ‘Container‘ to hold the boulder, and the ‘Rubber Band‘s will be the power source. Now I am evaluating the dimensions and the best placement of each part. **Constructing the Launching Mechanism** I’m designing a simplified catapult, keeping the focus on its functionality and the user’s requirement for a single boulder and the available blocks. I plan to use a ‘Rotating Block‘ as the pivot. The rest of the design must center around this ‘Rotating Block‘ component. **Constructing the Catapult** I’ve established the foundation of the catapult, focusing on a solid base and pivot mechanism. The challenge now is to construct the lever arm and connect it effectively to the ‘Rotating Block‘, while also determining how to hold the boulder. I will prioritize a design maximizing force transfer for launch. **Constructing a Lever Arm** I’m solidifying the basic build: a catapult, powered by rubber bands, given the constraints. The next step is constructing the lever arm, 44 Technical Report deciding its dimensions and how it will attach to the pivot point for optimal force transfer. **Constructing the Final Catapult** I’ve finalized the construction plan. The tension-based catapult, using the ‘Rotating Block‘ for the pivot and a lever arm, is my focus. I will use a ‘Container‘ for the boulder, and the ‘Rubber Band‘ for launching, paying close attention to force application and frame design. **Outlining Catapult Construction** I’ve sketched out the build, focusing on a tension-powered catapult. A base, a pivoting lever arm, and a boulder holder are planned. My design will depend on a stable foundation and the proper application of rubber bands for generating enough force for distance and height. **Constructing the Catapult** I’ve arrived at a straightforward plan for a tension-powered catapult, using the available blocks. I’m focusing on a design to maximize launch potential. The base, pivot point, arm, and rubber bands will be the key components for an efficient, functional build. **Devising the Final Catapult** I’ve finalized the core design. I am now sure of a tension-based catapult ; the concept is simple and effective. My focus is now on the construction idea and creating the blocks. I am convinced it will be a simple build and the machine can leverage the ‘Rotating Block‘ and ‘ Rubber Band‘ to meet the user’s launch goals. **Constructing a Launching System** I’m finalizing the blueprint now. The tension-based catapult will use a base, a pivot with the ‘Rotating Block‘, and a lever arm for the boulder. I’ll ensure maximum launch power through strategic rubber band placement and precise measurements. I need a strong yet streamlined design. **Constructing a Functional Catapult** I’ve finalized the plan. The design uses a tension-based catapult, with a "Rotating Block" for a pivot. A sturdy base with two Log arms ensures stability, while two Rubber Bands generate the launch force. The boulder holder is a "Container", and the system prioritizes lever arm efficiency for maximum height and distance. **Formulating Final Design** I’m now certain: a tension-based catapult, incorporating the ‘Rotating Block‘ for the pivot, will meet the user’s needs. The goal is maximum launch power within the given constraints. I’ll sketch out the construction, outlining the key blocks and how they fit together. I’ve a solid plan. Construction Idea: I will design a tension-powered catapult to launch the boulder. The design prioritizes a long throwing distance and height by utilizing a long lever arm. The machine has four main parts: 1. **Base Frame:** A T-shaped base, constructed from a central ’Wooden Block’ and two ’Log’s extending sideways, provides a wide, stable footprint to handle the throwing forces and prevent the machine from tipping over. 2. **Support Tower:** Two ’Log’s rise vertically from the ends of the base, creating a tall, sturdy support structure for the pivot axle. 3. **Lever Arm Assembly:** This is the core of the catapult. A ’Rotating Block’ is used as a functional hinge/axle, mounted on one of the support uprights. A ’Small Wooden Block’ acts as a hub on this axle, with a long ’Log’ attached pointing up (the throwing arm) and a ’Small Wooden Block’ attached pointing down (the short lever). This structure creates the 45 Technical Report necessary leverage. At the end of the throwing arm, a ’Container’ is mounted via an adapter block to hold the ’Boulder’. 4. **Power System:** Two ’Rubber Band’ blocks provide the propulsive force. They connect from a forward-projecting ’Log’ on the base to the short lever arm. The stretched rubber bands contract rapidly, pulling the short arm down and forward, which in turn whips the long throwing arm upwards and forwards, launching the boulder in a high parabolic trajectory. This design maximizes the lever ratio to achieve high velocity for the projectile, while the robust frame maintains stability during the launch. necessary_blocks ‘‘‘[1, 63, 22, 15, 30, 36, 9]‘‘‘ </think> JSON: ‘‘‘json [ {"type": 0, "id": 0, "parent": -1, "face_id": -1} {"type": 1, "id": 1, "parent": 0, "face_id": 1} {"type": 63, "id": 2, "parent": 1, "face_id": 1} {"type": 63, "id": 3, "parent": 1, "face_id": 3} {"type": 63, "id": 4, "parent": 2, "face_id": 6} {"type": 63, "id": 5, "parent": 3, "face_id": 6} {"type": 22, "id": 6, "parent": 4, "face_id": 4} {"type": 15, "id": 7, "parent": 6, "face_id": 3} {"type": 63, "id": 8, "parent": 7, "face_id": 3} {"type": 15, "id": 9, "parent": 7, "face_id": 4} {"type": 15, "id": 10, "parent": 8, "face_id": 0} {"type": 30, "id": 11, "parent": 10, "face_id": 3} {"type": 36, "id": 12, "parent": 11, "face_id": 0} {"type": 63, "id": 13, "parent": 0, "face_id": 0} {"type": 9, "id": 14, "parent_a": 13, "face_id_a": 1, "parent_b": 9, " face_id_b": 1} {"type": 9, "id": 15, "parent_a": 13, "face_id_a": 3, "parent_b": 9, " face_id_b": 2} ] ‘‘‘ J.2 META DESIGNER system: <System Prompt> user: Design a machine to throw a boulder (type id 36) in a parabolic trajectory. You must use exactly one boulder. Optimize for throw distance , height, and speed. Apply the principle of leverage correctly, balancing the weight of the projectile. meta-designer output: ‘‘‘json { "definition": "Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage.", "function_points": [ "Establish a stable foundation to withstand recoil and prevent tipping.", "Implement a powered lever arm for projectile propulsion.", "Carry and release a boulder projectile at the apex of the arm’s swing." ], 46 Technical Report "design_structure": [ { "function_name": "Base and Support Frame", "description": "A wide, static frame built from the Start Block. It provides stability for the machine and includes a vertical support to mount the rotating mechanism, as well as a stop block to halt the throwing arm’s motion.", "related_function_points": [ "Establish a stable foundation to withstand recoil and prevent tipping." ] }, { "function_name": "Powered Throwing Arm", "description": "A composite structure consisting of a powered pivot, a long lever arm, and a projectile holder. The Rotating Block provides rapid rotational force, the Log acts as the lever to amplify speed, and the Container holds the Boulder, releasing it via inertia.", "related_function_points": [ "Implement a powered lever arm for projectile propulsion .", "Carry and release a boulder projectile at the apex of the arm’s swing." ] } ], "build_order": [ "Base and Support Frame", "Powered Throwing Arm" ], "machine_structure": { "Base and Support Frame": { "type": [ 1, 15 ], "guidance": "Wooden Blocks (ID 1) and Small Wooden Blocks (ID 15) are used to build a wide, heavy base that prevents tipping. They also form a tower to elevate the pivot point and a physical stop to arrest the arm’s rotation, which is crucial for the release mechanism." }, "Powered Throwing Arm": { "type": [ 22, 63, 30, 36 ], "guidance": "A Rotating Block (ID 22) provides the high-speed rotational power. This is attached to a Log (ID 63), which acts as a long, robust lever arm. A Container (ID 30) is placed at the end of the arm to hold the Boulder (ID 36) projectile." } } } ‘‘‘ J.3 DESIGNER system: <System Prompt> designer-layer1 input: 47 Technical Report On the basis of the existing structure, generate [Base and Support Frame ], [A wide, static frame built from the Start Block. It provides stability for the machine and includes a vertical support to mount the rotating mechanism, as well as a stop block to halt the throwing arm’s motion.], as [’Establish a stable foundation to withstand recoil and prevent tipping.’], using block type[1, 15], [Wooden Blocks (ID 1) and Small Wooden Blocks (ID 15) are used to build a wide, heavy base that prevents tipping. They also form a tower to elevate the pivot point and a physical stop to arrest the arm’s rotation, which is crucial for the release mechanism.] designer-output-layer1: Based on the existing structure, a Base and Support Frame was generated to establish a stable foundation to withstand recoil and prevent tipping. Ultimately, the block types [1, 15] were decided upon. No blocks were removed as they were all suitable for the design. JSON: ‘‘‘json [ {"type": "0", "id": 0, "parent": -1, "face_id": -1} {"type": "1", "id": 1, "parent": 0, "face_id": 1} {"type": "1", "id": 2, "parent": 0, "face_id": 2} {"type": "1", "id": 3, "parent": 0, "face_id": 0} {"type": "1", "id": 4, "parent": 1, "face_id": 2} {"type": "1", "id": 5, "parent": 2, "face_id": 4} {"type": "1", "id": 6, "parent": 0, "face_id": 3} {"type": "1", "id": 7, "parent": 6, "face_id": 6} {"type": "1", "id": 8, "parent": 7, "face_id": 6} {"type": "15", "id": 9, "parent": 8, "face_id": 0} ] ‘‘‘ designer-layer2: On the basis of the existing structure, generate [Powered Throwing Arm], [A composite structure consisting of a powered pivot, a long lever arm, and a projectile holder. The Rotating Block provides rapid rotational force, the Log acts as the lever to amplify speed, and the Container holds the Boulder, releasing it via inertia.], as [’Implement a powered lever arm for projectile propulsion.’, "Carry and release a boulder projectile at the apex of the arm’s swing."], using block type [22, 63, 30, 36], [A Rotating Block (ID 22) provides the high-speed rotational power. This is attached to a Log (ID 63), which acts as a long, robust lever arm. A Container (ID 30) is placed at the end of the arm to hold the Boulder (ID 36) projectile.] <3D Info> designer-output-layer2: Based on the existing structure, a Powered Throwing Arm was generated as Implement a powered lever arm for projectile propulsion.,Carry and release a boulder projectile at the apex of the arm’s swing.. Ultimately, the block types [22, 63, 30, 36] were decided upon, while [] were removed due to []. JSON: ‘‘‘json [ {"type": "0", "id": 0, "parent": -1, "face_id": -1}, {"type": "1", "id": 1, "parent": 0, "face_id": 1}, {"type": "1", "id": 2, "parent": 0, "face_id": 2}, {"type": "1", "id": 3, "parent": 0, "face_id": 0}, {"type": "1", "id": 4, "parent": 1, "face_id": 2}, {"type": "1", "id": 5, "parent": 2, "face_id": 4}, {"type": "1", "id": 6, "parent": 0, "face_id": 3}, {"type": "1", "id": 7, "parent": 6, "face_id": 6}, {"type": "1", "id": 8, "parent": 7, "face_id": 6}, {"type": "15", "id": 9, "parent": 8, "face_id": 0}, 48 Technical Report {"type": "22", "id": 10, "parent": 8, "face_id": 4}, {"type": "63", "id": 11, "parent": 10, "face_id": 3}, {"type": "30", "id": 12, "parent": 11, "face_id": 0}, {"type": "36", "id": 13, "parent": 12, "face_id": 0} ] ‘‘‘ J.4 INSPECTOR system: <System Prompt> inspector input: Task Introduction Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage. JSON Information <JSON> 3D Information <3D Information> Questions <Inspector Questions> inspector-output: <Three-Dimensional Perception of the World> { "Coordinate System": "Left-Handed System", "Up": "y", "Right": "x", "Forward": "z", "Frontmost Block": {"3": [0.0, 0.0, 1.5]}, "Rearmost Block": {"9": [2.0, 2.0, -3.0]}, "Leftmost Block": {"5": [-2.0, 0.0, 1.5]}, "Rightmost Block": {"7": [2.0, 1.5, 0.0]}, "Topmost Block": {"13": [1.0, 7.4, -2.0]}, "Lowest Block": {"0": [0, 0, 0]} } </Three-Dimensional Perception of the World> <Question Answer> 1. **Dynamic Blocks Analysis:** The only dynamic block in this machine is the ‘Rotating Block‘ [10]. * **Position and Orientation**: The ‘Rotating Block‘ [10] is located at center coordinates [1.0, 2.0, -2.0] with a global orientation of ’x-’. * **Analysis**: a. **Impact on the machine**: The ‘Rotating Block‘ [10] serves as the pivot for the throwing arm. It is the core component intended to generate the throwing motion. b. **Direction of force**: The block is oriented along the ’x-’ axis, meaning its rotation axis is the global x-axis. When activated, it will generate a torque, causing its sub-blocks to rotate in the y-z plane . c. **Impact on sub-blocks and machine**: The ‘Rotating Block‘ [10] will cause its child blocks ([11], [12], [13]) to swing upwards and forwards in a powerful arc. Simultaneously, it will exert an equal and opposite reaction torque on its parent block [8] and the entire support structure, which will likely cause the machine to violently tip over or break. 49 Technical Report 2. **Special Static Blocks Analysis:** The special static blocks are the ‘Container‘ [12] and the ‘Boulder‘ [13]. * **Container [12]**: It is located at the end of the throwing arm (‘Log‘ [11]) with its center at [1.0, 6.9, -2.0]. Its orientation is ’y+’ (upwards). This placement and orientation are rational, as it correctly positions it to hold the ‘Boulder‘ [13] like a cup at the end of the throwing arm. * **Boulder [13]**: It is placed inside the ‘Container‘ [12] with its center at [1.0, 7.4, -2.0]. As the projectile, its placement is logical and necessary for the machine’s function. 3. **Balance Check (self-gravity):** a. **Center of Gravity**: The calculated center of gravity for the entire machine is approximately at (0.85, 4.22, -1.46). The block closest to this point is the ‘Log‘ [11]. b. **Sinking/Collapse**: Yes, parts of the machine will collapse. The entire throwing mechanism, including the heavy ‘Boulder‘ [13] (mass 5), rests on a tall, slender support tower made of single ‘Wooden Block‘s ([6], [7], [8]). This structure is extremely fragile and not braced. The machine is severely top-heavy, and the immense weight concentrated on this single tower will cause it to bend and collapse under gravity alone, even before the ‘Rotating Block‘ [10] is activated. 4. **Comprehensive Analysis:** a. **Force Summary**: The primary force is gravity, which makes the top-heavy machine highly unstable due to the high position of the ‘ Boulder‘ [13]. Upon activation, the ‘Rotating Block‘ [10] will generate a massive torque on the throwing arm and an equal reaction torque on the flimsy support tower. The combination of static top-heaviness and dynamic reaction forces will lead to immediate structural failure. b. **Logically Unreasonable Blocks**: * **Hierarchical Structure**: ‘Starting Block‘ [0] -> ‘Wooden Block‘ [6] -> ‘Wooden Block‘ [7] -> ‘Wooden Block‘ [8]. * **Reason for Unreasonableness**: The support tower consisting of blocks [6], [7], and [8] is fundamentally flawed. It is a single, unbraced column tasked with supporting the entire throwing arm and the heavy ‘Boulder‘ [13]. This design is far too weak to handle the static weight, let alone the powerful dynamic forces and reaction torque from the ‘Rotating Block‘ [10]. The machine is guaranteed to collapse. </Question Answer> <Summary of Design Defects> 1. Problem description, involving blocks: [6, 7, 8], belonging to structure: "Base and Support Frame" The support tower for the throwing mechanism, composed of blocks [6], [7], and [8], is critically weak. It is a single, unbraced column made of wooden blocks, which is completely inadequate to support the massive weight of the throwing arm and the ‘Boulder‘ [13]. The structure will collapse under its own weight or instantly shatter upon activation of the ‘Rotating Block‘ [10] due to the immense reaction torque. </Summary of Design Defects> J.5 ENVIRONMENT QUERIER system: <System Prompt> environment-querier input: Task Introduction Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage. 50 Technical Report Mechanical JSON Structure <JSON> 3D Information <3D Information> Environment Feedback machine damaged: machine parts DoubleWoodenBlock order_id:8 occurred at 0.4 sec block order_id 13 boulder throwing distance 5.63 boulder max height 7.3 boulder actual position in first 5 seconds [1.0, 7.01, -1.5], [0.84, 7.34, -2.44], [0.64, 5.69, -6.2], [0.61, 3.28, -9.97], [0.57, 1.68, -10.14], [0.38, 1.78, -9.45], [0.17, 1.72, -8.76], [-0.03, 1.52, -8.13], [-0.22, 1.22, -7.61], [-0.36, 0.85, -7.23], [-0.47, 0.48, -6.98], [-0.53, 0.16, -6.86], [-0.55, -0.07, -6.83], [-0.55, -0.18, -6.87], [-0.53, -0.17, -6.92], [-0.52, -0.03, -6.94], [-0.52, 0.21, -6.9], [-0.55, 0.51, -6.76], [-0.6, 0.85, -6.52], [-0.69, 1.18, -6.16], [-0.81, 1.45, -5.72], [-0.95, 1.66, -5.2], [-1.11, 1.76, -4.64], [-1.28, 1.76, -4.07], [-1.44, 1.66, -3.53] environment-querier-output: <Thought Process> The user wants me to analyze a boulder-throwing machine that is failing its task. **1. Analyze the Task and Machine Design:** * **Objective:** Build a leverage-based machine to throw a boulder for maximum distance and height. This implies a catapult or trebuchet design. * **Machine Structure:** * **Base:** A wide base is constructed using ‘Wooden Block‘s (ID 1, 2, 3, 4, 5, 6) connected to the ‘Starting Block‘ (ID 0). This seems designed for stability. * **Tower:** A vertical tower is built on the right side of the base with ‘Wooden Block‘s (ID 6, 7, 8). Block 8 is the apex of this support structure. * **Lever Arm:** A ‘Rotating Block‘ (ID 10) is mounted on top of the tower (on block 8), serving as the fulcrum. A ‘Log‘ (ID 11) acts as the throwing arm, with a ‘Container‘ (ID 12) and ‘Boulder‘ (ID 13) at the top. A ‘Ballast‘ (ID 14) is attached to the other side of the ‘Rotating Block‘ as a counterweight. * **Power System:** The machine uses a combination of a powered ‘ Rotating Block‘ (ID 10) and two ‘Rubber Band‘s (ID 15, 16). The rubber bands connect the top of the tower (block 8) to points on the base ( blocks 3 and 5). **2. Analyze the Environmental Feedback:** * **‘machine damaged‘**: This is the most critical piece of feedback. The machine is breaking during operation. * **‘machine parts: DoubleWoodenBlock order_id:8‘**: The failure point is identified as block 8. This is the ‘Wooden Block‘ that supports the entire rotating lever arm assembly. * **‘occurred at 0.4sec‘**: The failure is almost instantaneous, indicating a massive, immediate overload on the structure. * **Boulder Performance:** The boulder travels a mere 5.63 units and ends up behind its starting Z coordinate. This confirms a catastrophic failure of the launch mechanism, not just a weak throw. The arm is likely collapsing or swinging backward. **3. Synthesize and Hypothesize the Root Cause:** * The point of failure (block 8) is subjected to several immense forces : 51 Technical Report 1. **Static Load:** The weight of the entire lever arm assembly ( rotating block, log, container, boulder, ballast) rests on it. 2. **Dynamic Load:** The torque from the ‘Rotating Block‘ (ID 10) creates a powerful reaction force on its mount point (block 8). 3. **Tension Load:** The two ‘Rubber Band‘s (ID 15, 16) are stretched from block 8 down to the base. When the simulation starts, they will contract with immense force, pulling down and forward on block 8. * **Hypothesis:** The combination of these forces, particularly the sudden, immense tension from the rubber bands, is causing block 8 to shear off from the rest of the tower (block 7). The tower itself, being a single column of wooden blocks, is structurally weak and cannot withstand this combination of bending and shear stress. The rubber bands are incorrectly placed; instead of powering the lever arm, they are destroying the support structure. The machine is literally tearing itself apart. **4. Formulate a Plan for Further Investigation:** To confirm this hypothesis, I need to observe the behavior of the key components involved in the structural failure around the time of the collapse (0.4s). * **Block 8 (Failure Point):** I need to see its position and rotation to confirm it’s being ripped from its mounting. * **Block 10 (Rotating Block/Fulcrum):** Its movement will show the forces being applied by the lever arm onto block 8. * **Block 11 (Lever Arm):** Its motion will confirm the direction of the attempted throw and the overall collapse dynamics. * **Rubber Bands (15 & 16):** I need to check their ‘length‘ property. A rapid decrease in length will prove they are contracting and applying the destructive force I suspect. Based on this analysis, I will request detailed feedback on these specific components to verify the cause of the failure. </Thought Process> <Required Feedback> [ { "id": 8, "duration": [0.0, 0.8], "properties": ["position", "rotation"] }, { "id": 10, "duration": [0.0, 0.8], "properties": ["position", "rotation"] }, { "id": 11, "duration": [0.0, 0.8], "properties": ["position", "rotation"] }, { "id": 15, "duration": [0.0, 0.8], "properties": ["length"] }, { "id": 16, "duration": [0.0, 0.8], "properties": ["length"] } ] </Required Feedback> 52 Technical Report J.6 REFINER system: <System Prompt> refiner input: Task Introduction Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage. Mechanical JSON Structure <JSON> 3D Information <3D Information> Environment Feedback machine damaged: machine parts DoubleWoodenBlock order_id:8 occurred at 0.4 sec block order_id 13 boulder throwing distance 5.63 boulder max height 7.3 boulder actual position in first 5 seconds [1.0, 7.01, -1.5], [0.84, 7.34, -2.44], [0.64, 5.69, -6.2], [0.61, 3.28, -9.97], [0.57, 1.68, -10.14], [0.38, 1.78, -9.45], [0.17, 1.72, -8.76], [-0.03, 1.52, -8.13], [-0.22, 1.22, -7.61], [-0.36, 0.85, -7.23], [-0.47, 0.48, -6.98], [-0.53, 0.16, -6.86], [-0.55, -0.07, -6.83], [-0.55, -0.18, -6.87], [-0.53, -0.17, -6.92], [-0.52, -0.03, -6.94], [-0.52, 0.21, -6.9], [-0.55, 0.51, -6.76], [-0.6, 0.85, -6.52], [-0.69, 1.18, -6.16], [-0.81, 1.45, -5.72], [-0.95, 1.66, -5.2], [-1.11, 1.76, -4.64], [-1.28, 1.76, -4.07], [-1.44, 1.66, -3.53] Querier required feedback block order_id 8 type_id 1 info duration(s) [0.0, 1.0] block broken before below time(s) 0.2 block position [[2.01, 2.49, 0.01]] Due to block broken, the position is incomplete or not recorded block velocity [[0.15, 1.7, 4.1]] Due to block broken, the velocity is incomplete or not recorded block order_id 10 block type_id 22 block info duration(s) [0.0, 1.0] block position [[ 1.5 , 2.52, -1.49], [ 1.5 , 3.34, -0.56], [1.48, 5.27, 1.08], [0.73, 5.66, 2.42], [0.11, 4.45, 3.64])] block rotation [[ 0. , -0.71, 0.01, 0.71], [-0.16, -0.68, 0.18, 0.69], [-0.47, -0.47, 0.51, 0.54], [-0.73, -0.04, 0.08, 0.67], [-0.69, 0.06, -0.04, 0.72]] block order_id 0 block type_id 0 block info duration(s) [0.0, 2.0] block rotation 53 Technical Report [[0., 0., 0., 1.], [ 0.07, 0. , -0.01, 1. ], [ 0.45, 0.06, -0.04, 0.89], [ 0.39, 0.14, -0.09, 0.91], [0. , 0.15, 0.03, 0.99], [-0.09, 0.05, 0.36, 0.93], [0.2 , 0.01, 0.09, 0.98], [ 0.13, -0.31, -0.13, 0.93], [-0.06, -0.26, -0.1 , 0.96], [-0.08, -0.18, -0.1 , 0.97]] Initial State of the Machine The machine is initially placed on the ground, facing the z+ direction, with the target direction being z+. Please remember that the task failure is due to the mechanical design defects. Your ultimate goal is to modify the mechanical design defects so that it can complete the task. The historical modification information represents your previous attempts to make changes, and these steps did not comply with the game rules. Based on the information about the task failure, please make modifications to the mechanics. When designing the modification steps, avoid making the same mistakes as those in the historical modification information. refiner-output: <Thought Process> The user wants me to fix a trebuchet-like machine that is failing. **1. Analysis of the Machine and Failure:** * **Machine Structure:** The machine consists of a base, a vertical tower on the right side, and a rotating arm mounted on top of the tower. The arm has a container with a boulder on one end and was intended to have a counterweight on the other. Power is provided by a ‘Rotating Block ‘ (ID 10) at the pivot and a ‘Ballast‘ (ID 14) acting as a counterweight. Two ‘Rubber Band‘s (ID 15, 16) are connected from the top of the tower ( ID 8) to the base, seemingly for stabilization. * **Failure Mode:** The simulation feedback consistently reports that ‘ DoubleWoodenBlock id:8‘ is damaged at 0.4 seconds. Block 8 is the top- most block of the support tower, directly holding the entire rotating assembly (‘Rotating Block‘, arm, boulder, and ballast). * **Root Cause Analysis:** * **Overwhelming Force:** The combination of the ‘Rotating Block‘’s torque and the heavy ‘Ballast‘ (ID 14, mass 3) dropping creates an immense rotational force. The ‘Boulder‘ (ID 13, mass 5) at the end of a long lever arm (ID 11, ‘Log‘) amplifies this stress. The tower, constructed from simple ‘Wooden Block‘s, cannot withstand this force, leading to the failure of block 8. * **Instability:** The feedback shows the entire machine lifting and tilting (‘machine orientation in long time x-‘, ‘machine frequently changes in the following orientations‘). This indicates the base is too small and light to counteract the violent motion of the throwing arm. The ‘Rubber Band‘s (ID 15, 16) are connected asymmetrically (from the tower to the front and left parts of the base), which likely introduces a twisting force, contributing to the instability and structural failure rather than preventing it. **2. Modification Strategy:** The primary goal is to ensure the machine’s structural integrity so it can actually launch the boulder without self-destructing. * **Step 1: Reduce Destructive Forces.** The most critical issue is the excessive force breaking block 8. The simplest way to address this is to remove the primary source of this force, the ‘Ballast‘ (ID 14). The ‘ Rotating Block‘ (ID 10) by itself will still provide sufficient power to launch the boulder, but with much less violent acceleration, which the wooden structure should be able to handle. * **Step 2: Remove Ineffective Components.** The ‘Rubber Band‘s (ID 15 and 16) are not stabilizing the structure effectively and are likely contributing to the twisting failure due to their asymmetric placement. 54 Technical Report Removing them will simplify the forces acting on the tower and improve stability. This plan directly targets the identified points of failure. By removing the ballast and the rubber bands, the stress on the critical component ( block 8) is significantly reduced, and the machine’s overall stability should improve. The throwing distance might be reduced, but a successful, non-destructive launch is the first priority. **3. Final Plan:** 1. Remove ‘Ballast‘ (ID 14). 2. Remove ‘Rubber Band‘ (ID 15). 3. Remove ‘Rubber Band‘ (ID 16). This is a minimal but highly effective set of changes to fix the core problem. </Thought Process> <Modification Description> The machine’s support tower (block 8) is breaking due to excessive torque from the ballast and rotating block. The rubber bands are placed asymmetrically, causing twisting forces and instability. I will remove the ballast (ID 14) to reduce the strain on the structure and remove the two rubber bands (ID 15, 16) to eliminate the destabilizing twisting forces. </Modification Description> <Simulation Prediction After Modification> * **Power Blocks Analysis:** The launch will now be powered solely by the ‘Rotating Block‘ (10). The removal of the ‘Ballast‘ (14) significantly reduces the peak torque, resulting in a smoother, less violent rotation of the throwing arm. This will prevent the structural failure of the tower. * **Balance Check:** Without the heavy ballast dropping and the asymmetric pull from the rubber bands, the machine will be much more stable during operation. The base should remain mostly on the ground, with minimal tipping as the arm swings. The overall center of gravity will be more manageable. * **Comprehensive Analysis:** The primary defect, the destruction of block 8, will be resolved. The machine will successfully complete the launch sequence without breaking. While the throwing distance and height may be lower due to the reduced power, the machine will now be functional and can complete its primary task of throwing the boulder. * **10-second game simulation prediction:** * **0.0s - 0.5s:** The ‘Rotating Block‘ activates, and the arm begins to swing upwards smoothly. The tower (block 8) remains intact. * **0.5s - 1.0s:** The arm reaches its maximum rotation, and the boulder is released from the container. The machine’s base remains stable on the ground. * **1.0s - 5.0s:** The boulder travels in a clear parabolic arc. The machine settles without any damage. The final throwing distance will be shorter than the broken attempts but will represent a successful launch. </Simulation Prediction After Modification> <Modification Steps> Remove [14] Remove [15] Remove [16] </Modification Steps> 55 Technical Report K SINGLE-AGENT PROMPT You are a machine builder. Your task is to generate a complete machine as a JSON file based on the user’s request. Add new blocks to the initial structure; do not modify or delete it. I. Rules: 1. Coordinate System: Left-handed coordinate system, y+ upwards, z+ forward and x+ right. 2. Block Placement: New blocks must attach to ‘attachable_faces‘ of existing blocks. Blocks cannot overlap. 3. Size Limit: The final machine must not exceed dimensions of 17 ( Length, Z), 17 (Width, X), 9.5 (Height, Y). 4. Functionality: Ensure functional blocks are oriented correctly. 5. Ground Interaction: The ground automatically conforms to the machine’ s lowest block. Account for potential collisions between the machine and the ground throughout operation. 6. Gravity: Every block is subject to gravity; the greater a block’s mass, the stronger its downward force. Consider this in your design when the machine is in operation. 7. Physical rules: Classical mechanical laws such as conservation of momentum are applied. II. Block Data: Notes: You can only use blocks from this list. A block’s default orientation is Z+. 1. Attachable face: a. ‘id‘: The i-th attachable_face of this block. b. ‘pos‘: Coordinates relative to the building center(which is the attachable_face of the parent block) of this block. c. ‘orientation‘: Orientation relative to the building center of this block. 2. Tags: a. ‘Non-static‘: Block can generate force or movement. b. ‘Non-stable‘: Connection to parent is not rigid (e.g., hinges, boulders). c. ‘Linear‘: Do not collide with other blocks, but will occupy two attachable_faces. 3. Special Blocks: a. Boulder (id 36): Does not physically connect to other blocks. b. Spring (id 9): A special block that pulls its two connection points together. Detailed Infos: <Block Infos without explanations> III. JSON Output Format: 1. type: block’s type_id 2. id: this is i-th block 3. parent: parent block’s id 4. face_id: parent block’s constructible_point id 5. Standard Block: ‘{"type": <int>, "id": <int>, "parent": <int>, " face_id": <int>}‘ 6. special block (id: 9): ‘{"type": 9, "id": <int>, "parent_a": <int>, " face_id_a": <int>, "parent_b": <int>, "face_id_b": <int>}‘ IV. Final Response Format: Your response must contain only these two parts: 1. ‘Chain of thoughts:‘ a. You need to think step by step, analyse each block’s usage, and where to place them. Put your cot in <cot></cot> 56 Technical Report b. ‘Construction Idea:‘ A brief explanation of your design, remember to consider necessary block types, note them in ‘‘‘necessary_blocks [ type_1,type_2 ...]‘‘‘, no more than 300 words. 2. ‘JSON:‘ The complete JSON code inside a ‘‘‘json ... ‘‘‘ block. Here is an example: ‘‘‘json [ {"type":"0","id":0,"parent":-1,"face_id":-1}, {"type": <int>, "id": <int>, "parent": <int>, "face_id": <int>}, ... ] ‘‘‘ 57 Technical Report L MULTI-AGENT PROMPTS L.1 SHARED PROMPTS L.1.1 GAME INTRODUCTION WITH 3D KNOWLEDGE 1. Coordinate System: The game uses a left-handed coordinate system, with the Y-axis pointing upwards. In global coordinates, z+ is forward and x+ is to the right. 2. Construction: New blocks must be connected to the "attachable faces" of existing blocks. The default orientation of blocks is z+. 3. Block Types: a. Regular Blocks: Have fixed dimensions and multiple attachable faces. b. special blocks (ID 7, 9): Connect two attachable faces, do not collide with other blocks, but will occupy the connection points. 4. Size Limitations: The mechanical dimensions must not exceed Length (z) 17, Width (x) 17 Height (y) 9.5. L.1.2 MACHINE 3D JSON FORMAT ‘‘‘json [ {"type":"0","id":0,"parent":-1,"face_id":-1}, {"type":"Block Type ID","id":"Block Order ID","parent":"Parent Block ID","face_id":"Attachable Face ID in Parent Block"}, ... ] ‘‘‘ If it is a special block (Type ID is 7 or 9, other blocks are not special blocks), it will be: ‘‘‘json { "type":"Block Type ID", "id":"Block Order ID", "parent_a":"Parent A Block Order ID", "face_id_a":"Attachable Face ID in Parent A Block", "parent_b":"Parent B Block Order ID", "face_id_b":"Attachable Face ID in Parent B Block" } ‘‘‘ L.1.3 BUILD GUIDANCE Your task is to: Add new blocks based on the initial machine JSON and construction requirements provided by the user, without deleting the initial structure, and output the final complete JSON. User Input Format: 1. Building Objective: Describe the structure and function to be built, and provide a list of recommended block IDs. 2. Initial JSON: The structural data of the existing machine. Core Building Rules: 1. Block Usage: You can only select from the list of recommended block IDs provided by the user. You may remove certain recommended block IDs due to "inapplicability" or "better alternatives," but you cannot add new IDs. If any are removed, the reason must be stated. 2. Collision Prevention: You must accurately calculate the coordinates and orientation of new blocks based on the orientation and position of the parent block to ensure that the new block does not overlap with the existing structure. 58 Technical Report 3. Coordinate System and Orientation: The initial orientation of all blocks is Z+. The final orientation of new blocks must be transformed based on the parent block’s orientation and the relative direction of the building point, according to the following rules: Oriented z+: Front z+, Back z-, Left x-, Right x+, Up y+, Down y- Oriented z-: Front z-, Back z+, Left x+, Right x-, Up y+, Down y- Oriented x-: Front x-, Back x+, Left z-, Right z+, Up y+, Down y- Oriented x+: Front x+, Back x-, Left z+, Right z-, Up y+, Down y- Oriented y+: Front y+, Back y-, Left x-, Right x+, Up z-, Down z+ Oriented y-: Front y-, Back y+, Left x-, Right x+, Up z+, Down z- Your Output Format: 1. Building Plan: ‘Generated [structure summary] to achieve [function]. Finally used blocks [ID1, ID2,...]. Removed [ID3,...] because [reason for removal].‘ 2. Final JSON: ‘‘‘json [ // The complete JSON including both the initial structure and the new blocks ] ‘‘‘ L.1.4 META DESIGNER SYSTEM PROMPT You are a mechanical designer, and your task is to design a machine in the game Besiege based on the user’s requirements. Please gain a general understanding of the game based on the following information: I. Game Introduction: 1. Besiege is a physics-based construction game developed using Unity. Players need to build various machines to complete different tasks. 2. Besiege only contains basic mechanics and physical laws, such as mass, friction, and collision. 3. Blocks are used to build machines. Each block has its unique functions , advantages, and disadvantages. II. Block Introduction: 1. Blocks are mainly divided into five major categories: Basic Blocks, Mobility Blocks, Mechanical Blocks, Weapon Blocks, and Armor Blocks. - Basic Blocks are the fundamental components of many machines - structural blocks and some basic moving parts. - Mobility Blocks are primarily designed for movement functions - powered and unpowered wheels, steering blocks, and gears. - Mechanical Blocks provide various useful auxiliary functions - joints, suspension devices, winches, grabbers, etc. - Weapon Blocks offer various types of violent output at different ranges - swords and saws for close combat, and cannons and rockets for long- range. - Armor Blocks can protect the machine from damage or provide useful shapes for carrying other blocks - armor plates and wooden panels, as well as half-pipes and brackets. 2. Here is a detailed introduction to the properties and functions of each block: | Name | Category | Type ID | Function | |------|----------|---------|----------| | Starting Block | Basic | 0 | The Starting Block is the root block of the machine; it is placed at the starting position by default, cannot be moved, cannot be deleted, and only one can exist at a time. | | Small Wooden Block | Basic | 15 | A basic structural block, cube-shaped , to which other blocks can be attached from any side, making it particularly suitable for constructing the basic framework of machines. | 59 Technical Report | Wooden Block | Basic | 1 | A basic mechanical block, twice the length of a Small Wooden Block. | | Wooden Rod | Basic | 41 | A basic mechanical block, twice the length of a Small Wooden Block, with the same weight as a Wooden Block, but very fragile. | | Log | Basic | 63 | A basic mechanical block, more robust, three times the length of a Small Wooden Block. | | Brace | Basic | 7 | A non-placeable block used for reinforcement, built by "attaching" to other blocks, with no collision volume. It is often used to increase the stability of static structures and is not suitable for any dynamic structures. | | Steering Hinge | Mobility | 28 | The Steering Hinge can rotate blocks along an axis perpendicular to the placement axis. This block can rotate child blocks to a 180-degree direction to the left or right, commonly used for vehicle steering. | | Steering Block | Mobility | 13 | The Steering Block can rotate blocks along its placement axis, similar to the rotating part of a helicopter’s rotor. | | Powered Wheel | Mobility | 2 | Similar to a car wheel, it can drive itself but cannot turn independently. It is a mechanical device used for moving objects on the ground. | | Unpowered Wheel | Mobility | 40 | A wheel that does not rotate without external force, otherwise similar to a Powered Wheel. | | Powered Large Wheel | Mobility | 46 | Similar to a Powered Wheel, but with a radius and thickness twice that of a Powered Wheel. | | Unpowered Large Wheel | Mobility | | A wheel that does not rotate without external force, otherwise similar to a Powered Large Wheel. | | Small Wheel | Mobility | 50 | It works almost the same as a caster wheel (like a shopping cart wheel), unpowered. | | Universal Joint | Mechanical | 19 | A block that can freely rotate around its placement axis, similar to a Steering Block but without power. | | Hinge | Mechanical | 5 | Similar to a Steering Hinge, but without power . | | Ball Joint | Mechanical | 44 | Can swing 360 degrees along the axis perpendicular to the placement axis, but without power. | | Axle Connector | Mechanical | 76 | Similar to a Ball Joint. | | Rotating Block | Mechanical | 22 | Powered, it can rotate clockwise or counterclockwise along the axis perpendicular to the placement axis. | | Suspension | Mechanical | 16 | Shaped like a wooden block, it can buffer forces from all directions. | | Grabber | Mechanical | 27 | It will grab and hold onto any object it comes into contact with. | | Spring | Mechanical | 9 | A special block that attaches to two other blocks and can quickly pull them together. Its pulling force is almost entirely dependent on its length. | | Boulder | Weapon | 36 | A stone that does not directly connect to other blocks even when built on them. It can be used as a projectile weapon and is also commonly used as a target in transportation tests. | | Elastic Pad | Armor | 87 | Increases the elasticity of the contact surface, providing an effect of rebounding and increasing kinetic energy. | | Container | Armor | 30 | Can hold child blocks like a bowl, mainly used to carry blocks that cannot be directly connected to the machine. The container has some anti-slip capability, and only one block (the target to be carried) can be placed inside. No other blocks can be added. | | Roller Wheel | Locomotion | 86 | Similar to the small wheel, but shorter (0.8m). | | Grip Pad | Armour | 49 | Block with the highest friction. | | Ballast | Flight | 35 | A heavy cubic block used as a counterweight. | III. Mechanical Design Requirements: 1. When designing the machine, you should adopt a "layered design" approach. Break down the user’s requirements into the functions that the machine needs to achieve, and list the functional points. 60 Technical Report 2. For each functional point, design a structure that can meet the function. A structure can be understood as a "group of blocks," and several structures combined form the machine. 3. For each structure, determine the types of blocks to be used. 4. Determine the construction order of the structures to make the machine -building process layered. List which structure is the foundation and which is the upper-layer structure, and establish the construction sequence chain. IV. Output Format Requirements: ‘‘‘json { "definition": "Construct a machine that can fulfill the user’s requirements", "function_points": ["Function Point 1", "Function Point 2", "Function Point 3"], "design_structure": [ { "function_name": "Structure 1 Name", "description": "Description of Structure 1", "related_function_points": ["Function Point 1", "Function Point 2"] }, { "function_name": "Structure 2 Name", "description": "Description of Structure 2", "related_function_points": ["Function Point 3"] } ], "build_order": ["Structure 2 Name", "Structure 1 Name"], "machine_structure": { "Structure 1 Name": { "block_id": [ID1, ID2, ID3...], "guidance": "Guidance here" }, "Structure 2 Name": { "block_id": [ID4, ID5, ID6...], "guidance": "Guidance here" } } } ‘‘‘ V. Note: 1. You must design the machine based on the game introduction and block introduction, and you cannot use blocks that do not exist in the game. 2. Strictly follow the output format requirements. Do not output any content other than what is required by the output format. 3. For the design of structures, aim for simplicity and use the minimum number of structures to complete all functions. Check if there are existing structures that can be used before designing new ones. 4. When selecting blocks for a structure, limit the types of blocks to no more than three, and preferably use only one type. Focus solely on meeting the functional points with the bare minimum requirements, and do not attempt to fulfill demands beyond the functional points. I will provide the user input below. Please generate a mechanical overview in JSON format based on the user’s description. L.2 DESIGNER SYSTEM AND USER PROMPT <system> You are a mechanical builder in the game "Besiege." Your task is to add new blocks to an existing machine structure according to user requests and finally output the complete machine JSON data. 61 Technical Report I. Game Introduction: <Game Introduction With 3D Knowledge> II. Introduction to Blocks: <Block Infos> III. Introduction to JSON Format: <Machine 3D JSON Format> IV. Construction Guidance: <Build Guidance> V. Output Format Requirements: Based on the existing structure, a [structural summary] was generated as [functional implementation]. Ultimately, the block types [ID1, ID2, ...] were decided upon, while [ID3 , ...] were removed due to [reason for removal]. JSON: ‘‘‘json ... ‘‘‘ VI. Note: Building Principles 1. Correct Orientation: Ensure that functional blocks such as wheels and hinges are oriented correctly to achieve the intended function. 2. Efficiency First: a. Complete the design goal with the fewest blocks possible. b. The ground will automatically adapt to the lowest point of the mechanism. Output Requirements 1. Strict Structure: Your response must only contain two parts: Construction Plan and Final JSON. Prohibit any additional greetings, explanations, or comments. 2. Pure JSON: The complete JSON code block must be placed within ‘‘‘json ... ‘‘‘. Prohibit modifying the initial structure. Prohibit modifying the ‘scale‘ property of any block. Prohibit adding comments or non- existent properties in the JSON. Next, I will provide user input. Please generate a JSON based on the description. <user> <designer_output["design_structure"][i]["description"]>+<Output Machine Json> L.3 INSPECTOR SYSTEM AND USER PROMPT <system> I’ll provide you with a mission in the game Besiege, along with the machine designed for it in JSON format and its 3D information. Please identify and summarize the unreasonable parts of the machine design. Here’s the introduction to the game and construction knowledge. I. Game Introduction: <Game Introduction With 3D Knowledge> II. Introduction to Blocks: <Block Infos> III. Introduction to JSON and 3D Information: <Machine 3D JSON Format> IV. Introduction to Output Format: <Three-Dimensional Perception of the World> { "Coordinate System": "Left-Handed System or Right-Handed System", "Up": "x or y or z", "Right": "x or y or z", "Forward": "x or y or z", 62 Technical Report "Frontmost Block": {"id": [Center Point Coordinates]}, "Rearmost Block": {"id": [Center Point Coordinates]}, "Leftmost Block": {"id": [Center Point Coordinates]}, "Rightmost Block": {"id": [Center Point Coordinates]}, "Topmost Block": {"id": [Center Point Coordinates]}, "Lowest Block": {"id": [Center Point Coordinates]}, } </Three-Dimensional Perception of the World> <Question Answer> Write the answer to the user’s question here </Question Answer> <Summary of Design Defects> 1. Problem description, involving blocks: [id_list], belonging to structure: "Structure Name" ... </Summary of Design Defects> V. Notes: 1. Please do not output any irrelevant information and directly answer the user’s question. 2. The id of the block must be enclosed in "[]". Additionally, do not use "[]" for any other numbers; consider using "()" instead. Below, I will provide you with JSON and 3D information. Please answer the user’s question based on this information. <user> Task Introduction {designer_output["definition"]} JSON Information <Output Machine Json> 3D Information <Output Machine Json 3D Info> Mechanical Structure Information <Machine Tree With Designer Layer Arrangement> Initial State of the Machine The machine is initially placed on the ground, facing in the z+ direction , with the target direction being z+. Questions: 1. Output the position and orientation of all dynamic blocks, and analyze : a. The impact of dynamic blocks on the machine b. The direction of force provided by dynamic blocks c. The impact on sub-blocks and the direction of force on the machine 2. Output static blocks other than basic structural blocks, and analyze the rationality of their orientation and position. 3. Balance Check (self-gravity) a. The center of gravity of the machine (find the block closest to the center of gravity) b. Whether parts of the machine will sink due to gravity 4. Comprehensive Analysis a. Summarize the direction of all forces to analyze the movement of the machine b. Identify logically unreasonable blocks, output their hierarchical structure and reasons for unreasonableness L.4 REFINER SYSTEM PROMPT I will give you a task in the game Besiege, as well as the 3D information of the machine designed to complete this task. There are some 63 Technical Report unreasonable aspects in the design of this machine, and I would like you to modify these parts: I. Game Introduction: <Game Introduction With 3D Knowledge> II. Block Introduction: <Block Infos> III. Input Introduction: 1. Task & Context Task Objective: <designer_output["user_input"]> Preceding Information (if any): Defect Report: <quizzer_output> Modification History: <modify_history> Environmental Feedback: <environment_feedback> 2. Machine Data <Machine 3D JSON Format> IV. Modification Method Introduction: Please follow the steps below to analyze and modify the machine structure . Step 1: Analyze & Plan 1. Diagnose the Current Machine: Analyze its power, balance, structure, and overall movement to identify design flaws or areas for optimization. 2. Devise a Modification Plan: Based on your diagnosis, decide which blocks to move, remove, or add. 3. Evaluate Modification Impact: When planning, you must consider the impact of your changes. 4. Briefly Describe Modifications: Before generating commands, describe your modification plan in a sentence or two using natural language. Step 2: Execute Modification Operations Use the following three command formats for modifications. Output only the operation commands, with each command on a new line. 1. Add Block (Add) Format: Non-Linear: Add [block type ID] to [id] in [attachable_face_id ] Linear: Add [block type ID] to [id_a] in [ attachable_face_id_a] to [id_b] in [attachable_face_id_b] Rules: Can only be added to original blocks. You cannot add new blocks onto other newly added blocks. special blocks require specifying two connection points. 2. Remove Block (Remove) Format: Remove [id] Rules: Can only remove original blocks. Cannot remove a block that has child blocks. 3. Move Block (Move) Move [id] to [new_parent_id] in [new_attachable_face_id] Rules: Moves the target block and all its child blocks as a single unit. The new parent’s ‘id‘ must be smaller than the ‘id‘ of the block being moved. special blocks cannot be moved. The move must change the block’s original position. <Build Guidance: Coordinate System and Orientation> 64 Technical Report V. Output Format Introduction: <Thought Process> </Thought Process> <Modification Description> </Modification Description> <Simulation Prediction After Modification> </Simulation Prediction After Modification> <Required Feedback> </Required Feedback> <Modification Steps> </Modification Steps> VI. Note: 1. Output Scope: Only output content related to the modification method and the required output format. 2. Ground Definition: The ground is always located beneath the machine, in contact with the block that has the lowest y-coordinate. 3. Task Adherence: Pay close attention to the task requirements. Do not delete important blocks by mistake. 4. Parenting Constraint: A block that has been deleted cannot be used as a parent for new or moved blocks within the same set of operations. 5. Format Integrity: Ensure the output format is complete. You must retain the ‘<Thought Process></Thought Process>‘ and ‘<Modification Description></Modification Description>‘ tags. 6. Content Separation: Do not include verification results within the ‘< Modification Description>‘ block. 7. Preserve ’Success’ Steps: Retain any steps marked as "Success" and do not adjust their order. 8. Prioritize ’Error’ Steps: Focus your efforts on fixing the steps marked as "Error". Do not modify steps marked as "Unverified" prematurely . 9. Error History: I will provide the modification history for the "Error" steps. Use this information to avoid repeating invalid operations. Below, I will provide you with the JSON and 3D information. Please modify the machine accordingly. L.5 ENVIRONMENT QUERIER SYSTEM PROMPT I will give you a task in the game Besiege, as well as the information on the machine designed to complete this task. The machine has finished the task simulation and returned some environmental feedback to describe its performance. Please analyze the issues with the machine based on the feedback and request more feedback if needed. I. Game Introduction: <Game Introduction With 3D Knowledge> II. Block Introduction: <Block Infos> III. Input Introduction: <Machine 3D JSON Format> IV. Query Introduction: A. Environmental Feedback Request: After conducting the environmental simulation of your modified machinery, The system will provide essential data for the most critical components ( such as position, rotation, and velocity). Based on the performance of the essential data, you need to determine what problems the machinery may have encountered and request feedback on the key blocks that may have issues. You can also request those blocks that may significantly impact the machinery’s functionality and check whether their performance meets expectations. The format for the environmental feedback request is as follows. Please adhere strictly to this format and avoid including any extraneous information: 65 Technical Report <Required Feedback> [ { "id": int, "duration": [float, float], "properties": ["position", "rotation", "velocity", "length"], }, ... ] </Required Feedback> Both "id" and "duration" are mandatory fields. Note that the game runs for a total of 5 seconds, game state samples per 0.2s; do not exceed this duration. You can freely select the attributes in "properties," but avoid including irrelevant information. The "length" attribute is only applicable to linear components. V. Output Format Introduction: <Thought Process> </Thought Process> <Required Feedback> </Required Feedback> VI. Note: 1. Please do not output any irrelevant information. 2. The ground will always correctly appear beneath the machine, making normal contact with the block that has the lowest y-axis coordinate. 3. All blocks are affected by gravity. If a block with a large self- weight is built on certain non-powered non-static blocks, the non-static blocks may rotate due to the gravitational force of the sub-blocks. 4. Similarly, the power generated by powered blocks also needs to counteract the gravitational force of the sub-blocks. 5. Please adhere to the output format and avoid adding any irrelevant information in the JSON. Below, I will provide you with the JSON and 3D information, as well as the environmental feedback. Please request more feedback as needed. L.6 BLOCK INFORMATIONS Explanations: This is a concise list of the blocks you can use for this construction. Please read and follow the rules carefully. I. Block Information Format Each block’s information follows the dict format. Attributes that a block does not have will not appear in the keys. 1. Characteristic Tags: a. Non-static: The block can actively generate force or movement. b. Non-stable: The connection between the block and its parent block is non-rigid, allowing for movement or rotation. c. Linear: The block is used to connect two existing points rather than being attached to a single point. If there are no tags, it is a regular static and stable block. 2. Attachable Faces: The key is in the format of attachable face ID, coordinates (relative coordinates), and orientation (relative orientation). II. Key Special Rules 1. Powered Wheel Rule (applicable to all powered wheels): The direction of power provided by the wheel is not the same as its orientation. - Forward (Z+ direction): The wheel should face sideways (X+ or X-). - Left turn (power pushes towards X-): The wheel should face forward ( Z+). - Right turn (power pushes towards X+): The wheel should face backward (Z-). 66 Technical Report - When the wheel faces up or down (Y+ or Y-), it does not provide power. 2. Special Blocks (Brace, Spring): - They do not have their own connection points but connect to two other blocks. - Brace: Used to reinforce static structures with no collision volume. - Spring: Generates a contracting pull force along the line connecting its two ends when stretched. 3. Non-connecting Blocks (bombs, boulders): - These blocks are placed at the specified location but do not form a physical connection with any other block. Containers are usually needed to hold them. Detailed Infos: [ { "Name": "Starting Block", "Description": "The root block of the mechanism. It cannot be placed or deleted, and only one can exist at a time. Its initial position is fixed, and its initial orientation is z+.", "Type ID": 0, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 0.5], "Orientation": "Front "}, {"ID": 1, "Coordinates": [0, 0, -0.5], "Orientation": "Back "}, {"ID": 2, "Coordinates": [-0.5, 0, 0], "Orientation": "Left "}, {"ID": 3, "Coordinates": [0.5, 0, 0], "Orientation": "Right "}, {"ID": 4, "Coordinates": [0, 0.5, 0], "Orientation": "Up"}, {"ID": 5, "Coordinates": [0, -0.5, 0], "Orientation": "Down"} ], "Mass": 0.25 }, { "Name": "Small Wooden Block", "Description": "A basic construction block, cubic in shape.", "Type ID": 15, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Mass": 0.3 }, { "Name": "Wooden Block", "Description": "A basic construction block.", "Type ID": 1, "Size": [1, 1, 2], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 2], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, 67 Technical Report {"ID": 3, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 4, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 5, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 6, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 7, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "}, {"ID": 8, "Coordinates": [0, -0.5, 1.5], "Orientation": "Down "} ], "Mass": 0.5 }, { "Name": "Wooden Rod", "Description": "A basic construction block, slender and fragile .", "Type ID": 41, "Size": [1, 1, 2], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 2], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, {"ID": 3, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 4, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 5, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 6, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 7, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "}, {"ID": 8, "Coordinates": [0, -0.5, 1.5], "Orientation": "Down "} ], "Mass": 0.5 }, { "Name": "Log", "Description": "A basic construction block.", "Type ID": 63, "Size": [1, 1, 3], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 3], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, {"ID": 3, "Coordinates": [-0.5, 0, 2.5], "Orientation": "Left "}, {"ID": 4, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 5, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 6, "Coordinates": [0.5, 0, 2.5], "Orientation": "Right "}, {"ID": 7, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 8, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 9, "Coordinates": [0, 0.5, 2.5], "Orientation": "Up"}, {"ID": 10, "Coordinates": [0, -0.5, 0.5], "Orientation": " Down"}, {"ID": 11, "Coordinates": [0, -0.5, 1.5], "Orientation": " Down"}, 68 Technical Report {"ID": 12, "Coordinates": [0, -0.5, 2.5], "Orientation": " Down"} ], "Mass": 1 }, { "Name": "Steering Hinge", "Description": "Powered, used to control the rotation of sub- blocks. It can swing left and right along the axis perpendicular to the placement axis.", "Type ID": 28, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Special Attributes": { "Swing Direction": ["Left", "Right"], "Angle": [-90, 90], "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Steering Block", "Description": "Powered, used to control the rotation of sub- blocks. It can rotate clockwise or counterclockwise along the placement axis.", "Type ID": 13, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Powered Wheel", "Description": "Powered, a mechanical device used to move objects on the ground.", "Type ID": 2, "Size": [2, 2, 0.5], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 0.5], "Orientation": "Front"} ], "Special Attributes": { "Rotation Axis": "Front", "PoweredWheel":"True", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { 69 Technical Report "Name": "Unpowered Wheel", "Description": "A wheel that does not rotate without external force, similar to the powered wheel.", "Type ID": 40, "Size": [2, 2, 0.5], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 0.5], "Orientation": "Front"} ], "Special Attributes": { "Rotation Axis": "Front", "NonStable":"True" }, "Mass": 1 }, { "Name": "Large Powered Wheel", "Description": "Similar to the powered wheel, but larger.", "Type ID": 46, "Size": [3, 3, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-1.5, 0, 1], "Orientation": "Front "}, {"ID": 2, "Coordinates": [1.5, 0, 1], "Orientation": "Front "}, {"ID": 3, "Coordinates": [0, 1.5, 1], "Orientation": "Front "}, {"ID": 4, "Coordinates": [0, -1.5, 1], "Orientation": "Front "}, {"ID": 5, "Coordinates": [-1.5, 0, 0.5], "Orientation": "Left "}, {"ID": 6, "Coordinates": [1.5, 0, 0.5], "Orientation": "Right "}, {"ID": 7, "Coordinates": [0, 1.5, 0.5], "Orientation": "Up"}, {"ID": 8, "Coordinates": [0, -1.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "PoweredWheel":"True", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Large Unpowered Wheel", "Description": "Similar to the unpowered wheel, but larger.", "Type ID": 60, "Size": [3, 3, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-1.5, 0, 1], "Orientation": "Front "}, {"ID": 2, "Coordinates": [1.5, 0, 1], "Orientation": "Front "}, {"ID": 3, "Coordinates": [0, 1.5, 1], "Orientation": "Front "}, {"ID": 4, "Coordinates": [0, -1.5, 1], "Orientation": "Front "}, {"ID": 5, "Coordinates": [-1.5, 0, 0.5], "Orientation": "Left "}, {"ID": 6, "Coordinates": [1.5, 0, 0.5], "Orientation": "Right "}, {"ID": 7, "Coordinates": [0, 1.5, 0.5], "Orientation": "Up"}, 70 Technical Report {"ID": 8, "Coordinates": [0, -1.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStable":"True" }, "Mass": 1 }, { "Name": "Small Wheel", "Description": "It works almost the same as a caster wheel (e.g., shopping cart wheel), but it is not powered.", "Type ID": 50, "Size": [0.5, 1, 1.5], "Special Attributes": {"NonStable":"True"}, "Mass": 0.5 }, { "Name": "Roller Wheel", "Description": "Same as the small wheel.", "Type ID": 86, "Size": [1, 1, 1], "Special Attributes": { "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Universal Joint", "Description": "A block that can freely rotate around its placement axis, but it is not powered.", "Type ID": 19, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Hinge", "Description": "It can swing up and down along the axis perpendicular to the placement axis, but it is not powered.", "Type ID": 5, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} 71 Technical Report ], "Special Attributes": { "Swing Direction": ["Up", "Down"], "Angle": [-90, 90], "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Ball Joint", "Description": "It can swing freely in all directions, but it is not powered.", "Type ID": 44, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Swing Range": "All directions outward from the build surface ", "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Axle Connector", "Description": "Similar to a ball joint.", "Type ID": 76, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Special Attributes": { "Swing Range": "All directions outward from the build surface ", "NonStable":"True" }, "Mass": 0.3 }, { "Name": "Rotating Block", "Description": "When powered, this motor-like block generates torque and rotates about its local y-axis. Blocks connected at attachable_face 1 or 4 rotate with it as part of a rigid assembly. The rotation block has its own mass and obeys classical mechanics: it applies torque to connected parts when powered, and it can also be moved, rotated, or stopped by external forces or torques, depending on constraints.", "Type ID": 22, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, 72 Technical Report {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Grabber", "Description": "If the build point is unoccupied, it will grab any object that comes into contact with the build point and hold it firmly.", "Type ID": 27, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Special Attributes": { "Grip Direction": "Front", "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Boulder", "Description": "A rock that will not directly connect to other blocks even if built on them, high mass.", "Type ID": 36, "Size": [1.9, 1.9, 1.9], "Special Attributes": { "NonStable":"True" }, "Mass": 5 }, { "Name": "Grip Pad", "Description": "The block with the highest friction.", "Type ID": 49, "Size": [0.8, 0.8, 0.5], "Mass": 0.3 }, { "Name": "Elastic Pad", "Description": "The block with the highest elasticity.", "Type ID": 87, "Size": [0.8, 0.8, 0.2], "Mass": 0.3 }, { "Name": "Container", "Description": "It has a railing around the building point. If oriented towards +y, it can hold sub-blocks like a bowl. It is mainly used to hold blocks that cannot directly connect to the mechanism, such as boulders and bombs. Do not place other blocks nearby to avoid overlap .", "Type ID": 30, "Size": [2.4, 3, 2.8], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Mass": 0.5 }, 73 Technical Report { "Name": "Suspension", "Description": "It primarily serves as a buffer and shock absorber. It is similar in shape to a wooden block, with all Attachable Faces Properties located at the far end of the block.", "Type ID": 16, "Size": [1, 1, 2], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 2], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 1.5], "Orientation": "Down "} ], "Mass": 0.5 }, { "Name": "Brace", "Description": "The brace can be used for reinforcement. Its construction principle is to ’attach’ to other blocks. It has no collision volume. Since it is often used to stabilize static structures, it is not suitable for any dynamic structures.", "Type ID": 7, "Special Attributes": { "Linear": "True", "Anti Tension Direction": "Towards the center of the line segment between the two Attachable Faces Properties", "Anti-Compression Direction": "Outward from the center of the line segment between the two Attachable Faces Properties" }, "Mass": 0.5 }, { "Name": "Spring", "Description": "A special block that attaches to two other blocks and can quickly pull the two ends together. Its tension force is almost entirely dependent on its length.", "Type ID": 9, "Special Attributes": { "Linear": "True", "NonStatic":"True", "Tension Direction": "Towards the center of the line segment between the two Attachable Faces Properties" }, "Mass": 0.4 }, { "Name": "Ballast", "Description": "It serves as a counterweight, has a large mass, and is shaped like a cube.", "Type ID": 35, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], 74 Technical Report "Mass": 3 } ] 75
Technical Report AGENTIC DESIGN OF COMPOSITIONAL MACHINES Wenqian Zhang1 Weiyang Liu2,* Zhen Liu1,*,† 1The Chinese (Shenzhen) 2The Chinese *Equal Advising †Corresponding Author besiegefield.github.io Eureka! Throw a Boulder Machine Code ... Code Edit ... Figure 1: The task of compositional machine design is illustrated in our BesiegeField environment. The figure shows a high-level sketch of the agentic workflow (w/ Gemini Pro 2.5), along with the resulting machines and their simulated performance. The design objective is to create a machine that throws boulders long distances. ABSTRACT The design of complex machines stands as both a marker of human intelligence and a foundation of engineering practice. Given recent advances in large language models (LLMs), we ask whether they, too, can learn to create. We approach this question through the lens of compositional machine design: a task in which machines are assembled from standardized components to meet functional demands like locomotion or manipulation in a simulated physical environment. With this simplification, machine design is expressed as writing XML-like code that explicitly specifies pairwise part connections. To support this investigation, we introduce BesiegeField, a testbed built on the machine-building game Besiege, which enables part-based construction, physical simulation and rewarddriven evaluation. Using BesiegeField, we benchmark state-of-the-art LLMs with agentic workflows and identify key capabilities required for success, including spatial reasoning, strategic assembly, and instruction-following. As current opensource models fall short, we explore reinforcement learning (RL) as a path to improvement: we curate a cold-start dataset, conduct RL finetuning experiments, and highlight open challenges at the intersection of language, machine design, and physical reasoning. "Man is a tool-making animal." - Benjamin Franklin 1 INTRODUCTION The history of human progress is, at its core, the history of machines, just as the ancient Greeks built the Antikythera mechanism to predict eclipses and Leonardo da Vinci envisioned machines to fly. 1 19 Oct 2025 Technical Report Today, as large language models (LLMs) begin to approximate-and in some domains, surpasshuman cognitive abilities, a natural question arises: Can computational models, like humans, conceive and create complex machines to achieve purposeful goals? At the heart of this question lie two tightly coupled concepts: compositionality, how parts are put together into assemblies, and functionality, the tasks these assemblies perform as they interact with external forces or inputs. While foundation models are already capable of synthesizing 3D shapes and building mechanical parts with computer-aided design (CAD) models, it is the complex compositional structures, in which very different parts and components are orchestrated to smoothly move together, that realize a vast array of demands. Just as a clock emerges from the composition of simple and standardized mechanical elements such as gears and flywheels, these same elements, when combined differently, can give rise to entirely different machines, such as a sewing machine. On the other hand, the same functionality may be realized by different part compositions, just as both cars and bicycles can transport a person from place to place. Put it concisely: composition is shaped by functionality, and functionality is realized through composition. Since such compositional machines can be expressed programmatically, with types, placements and articulations of parts represented in structured code that LLMs can generate and manipulate, we formalize the above question as: Can LLMs, given standardized mechanical parts and a reward function for the desired functionality, discover diverse spatial part compositions that maximize the reward and complete the task? The question is not only about the pursuit of intelligence but also about the practice of engineering. Modern design pipelines are often long and costly, especially in large-scale projects where each iteration demands substantial resources. These projects accumulate vast collections of documents and blueprints, making it difficult to trace, retrieve, or reuse past design efforts. Much essential knowhow is passed informally across teams and generations, and in many cases, never fully recorded and since forgotten. An automated machine design system could directly address these challenges. Rather than merely mimicking patterns from historical designs, such a system should be agentic: capable of exploring the exponentially large design space, leveraging prior knowledge to create novel designs for new demands and constraints, and improving them through feedback. To investigate this concretely, we introduce BesiegeField, an interactive environment built on the machine-design game of Besiege1. The environment allows for construction of simple mechanical machines with standardized and semantic parts such as gears and wheels, and supports customized physical scenarios in which LLM agents can test constructed machines and evaluate their dynamics and interactions. Building on BesiegeField, we benchmark state-of-the-art LLMs with different agent designs and strategies for selecting and placing basic mechanical elements to build machines for representative functional demands, a task we term compositional machine design. Through these experiments, we empirically identify key capabilities required for this task: accurate spatial reasoning, high-level knowledge of design strategies, and instruction-following in spatial domains. Since only a few proprietary LLMs achieve satisfactory results, we further investigate how reinforcement learning (RL) can improve the performance of open-source LLMs. To this end, we curate a small machine design dataset to cold-start RL finetuning, perform exploratory RL experiments, and highlight key challenges that chart directions for future research. In summary, our contributions are listed below: • We introduce and formalize the task of compositional machine design, where machines are assembled from standardized parts to achieve functional goals. • We present BesiegeField, an interactive environment that enables LLM agents to construct, simulate, and evaluate compositional machines in customized physical scenarios. • We systematically benchmark state-of-the-art LLMs and different agentic workflow designs on representative machine-design tasks. • We explore RL finetuning of LLMs on this task, for which we curate a cold-start dataset, conduct experiments, and highlight the key challenges. 1https://en.wikipedia.org/wiki/Besiege_(video_game) 2 Technical Report 2 COMPOSITIONAL MACHINE DESIGN Full machine design involves many coupled elements: geometry, statics and dynamics, demand analysis, failure modes, safety, and even legal constraints (Beitz et al., 1996; Wong et al., 2025). To isolate a tractable subproblem, we focus on the structural composition of machines: how standardized parts are spatially arranged and mechanically linked to produce functional behavior. We refer to this task, introduced in the previous section, as compositional machine design. It captures two essential components: (i) the static geometry of a machine as a part-based assembly, and (ii) its compatibility with functional demands, typically assessed through physical simulation. This abstraction omits considerations such as manufacturing constraints, material properties, or domain-specific regulations, but retains the core spatial and behavioral reasoning challenges relevant to design. This special task of compositional machine design mirrors challenges found in other exploration domains. For example, automatic theorem proving involves a compositional and exponentially large action space, while electronic design automation (EDA) for chip layouts requires spatial reasoning to place components of varying shapes under spatial constraints (albeit in a more regular and grid-constrained fashion than mechanical parts in machines). A unique challenge in machine design, however, is its dependence on diverse long-horizon behaviors, both autonomous and nonautonomous, within an environment. Specifically, a machine may behave differently when operated in different ways (e.g., a bicycle when pedaled versus when braking) or under different external conditions (e.g., driving a car in sunny versus rainy weather). Similarly, many sophisticated machines cannot function without appropriate control policies, as exemplified by aircraft that rely on fly-by-wire systems to stabilize their inherently unstable aerodynamic configurations (which would otherwise be unflyable by a human pilot alone). A key open problem is therefore how to account for the interplay among physics, control policy, and compositional structure in machine design. It is worth noting that, unlike in math theorem proving where one valid proof often suffices (even though multiple proofs may still be valued), design domains typically require generating a diverse set of candidate solutions. This diversity is essential to (i) differentiate products, (ii) adapt to unpredictable market demands, and (iii) account for uncertainty in real-world testing and deployment. Consequently, the task places greater emphasis on diversity, and a model for compositional machine design should function more like a generative model than a simple reward maximizer. 3 BESIEGEFIELD: PLAYGROUND FOR COMPOSITIONAL MACHINE DESIGN Studying the full problem of compositional machine design is challenging, as it involves the coupling of many interacting factors. We therefore focus on a minimalist, component-level setting in which machines are constructed primarily from cuboid primitives with clear functional semantics, together with a small set of specialized exceptions, and operate under a shared control policy in an environment governed by rigid-body and elastic mechanics. This abstraction allows us to properly benchmark the capabilities of existing LLMs and to assess the upper bounds, potential, and challenges of agentic systems and RL algorithms. To this end, we create BesiegeField, an interactive environment adapted from the machine-building game Besiege, in which players design medieval machines to complete tasks such as destroying castles. Powered by the built-in physics engine, BesiegeField supports physical simulation of mechanical systems such as vehicles and catapults in user-customized environments with terrains, obstacles, external forces (e.g., wind and gravity), and co-existing agents. The environment provides nearly 80 types of building blocks , including passive ones like drills and logs, and powered ones like powered cogs and wheels. Machines are constructed by sequentially attaching new parts to vacant and attachable faces of existing blocks, starting from a root block and thus forming a "construction tree" (indeed a directly acyclic graph (DAG), in the sense of operation orders; one block can has two parents in the DAG; the actual structures may contain loops). Powered blocks can receive control commands, allowing machines to be operated precisely. During simulation, complete state information (e.g., the position and velocity of each block in the constructed machine) can be recorded for model feedback. Finally, the environment supports custom modifications and can be extended with additional block types and richer physics (e.g., simple fluid simulation). Further details are explained in Appendix B. BesiegeField is unique in balancing real-world geometry and physics, part-level semantics, and simple compositional rules. Block-stacking environments like LEGO (Fan et al., 2022) and 3 Technical Report Throw further! Go further! Figure 2: Demonstration of the machine design tasks in our experiments. (Left: car; Right: catapult). ... ... ... Position-based (Before Edit) Before Edit After Edit ... ... ... ... [ ... {"type": "1", "id": 5, "parent": 1, "face_id":4}, {"type": "46", "id": 6, "parent": 5, "face_id": 1} ] Construction Tree (Before/After Edit) Position-based (After Edit) Figure 3: Demonstration of the default position-based representation and our construction tree representation. Parent block info is in blue and child info is in red. Minecraft (Fan et al., 2022; Pun et al., 2025) allow intuitive combinatorial assembly but do not natively provide realistic physical simulation and rely on generic blocks with limited semantic meaning. CAD modeling (Li et al., 2025) captures fine-grained geometry and interactions, but its complexity makes rules cumbersome and sequences prohibitively long. By contrast, BesiegeField uses semantically meaningful parts with cuboid-like construction rules-supporting realistic physics while remaining abstract enough for tractable composition. This calibrated balance enables the study of compositional creativity and geometric reasoning at a level of difficulty that both differentiates algorithms and permits rapid experimentation. Moreover, unlike prior environments, BesiegeField supports machine destruction, adding durability and failure analysis to the design space. 4 BENCHMARKING LLMS FOR COMPOSITIONAL MACHINE DESIGN 4.1 BENCHMARK SETTINGS Representative target machines and tasks. To benchmark and characterize the performance of different LLMs for agentic compositional machine design, we consider two conceptually simple yet representative target machines to build: car and catapult as shown in Fig. 2. While success in both requires understanding part semantics and structural syntax, car building primarily tests static relational reasoning, such as enforcing correct part orientations, symmetry, and stability; in contrast, catapult building challenges models with dynamic relational reasoning, where parts must coordinate over time to produce causal mechanical effects. Moreover, the two tasks are simple enough to be constructed with only a few blocks so that they fit within the LLM's context window, yet complex enough to require explicit reasoning about construction strategies and causal dependencies. We evaluate the performance of cars and catapults by their moving distance and their throwing distance (i.e., the moving distance of the stone), respectively, towards a fixed and given direction. During each simulation, the generated machine will be placed at a designated position, and the active parts will be powered after a few seconds. As there can be reward hacking issues, for catapults experiments we surround the designated machine placement position with moderate-height walls. More details about the target machines, rewards, and environments can be found in Appendix B. Machine representations. In BesiegeField, the default position-based representation records all blocks with global 3D positions and uses a built-in algorithm to recover connections. Such a representation, however, does not well encode machine structures. Instead, we propose a parsimonious representation aligned with the game's building logic, based on pairwise relative attachment relationships (i.e., how one part is rotated and attached to another). Details are explained in Appendix D.1.2. 4 Technical Report Consolidated Modification Plan: 1. Remove Old Base: Remove blocks [3], [4], and [6] first, then remove their parents [1], [2], and [5]. 2. Reposition Arm And Container: Move Log [10] from Rotating Block [9]'s face_id: 3 (Up) to face_id: 2 (Right). Move Container [11] from Log [10]'s face_id: 0 (Front) to face_id: 8 (Up). And Add a Ballast [35] to Rotating Block [9] at face_id: 1 (Left). 3. Build New Base: Add three Log [63] blocks to the Starting Block [0] at face_id: 1 (Back), face_id: 2 (Left), and face_id: 3 (Right). ... I need to analyze the provided machine data to identify and correct the flaws. ... (a) Gemini 2.5 Pro CoT (b) Modification Steps Analysis of Defects: - Severe Structural Imbalance. The machine has a tiny support base (1, 2, 3, 4, 5). The long arm with the heavy Boulder 12 will create a large torque, causing the machine to tip over backward (towards z-). ... - Incorrect Pivot Orientation. The block 9 spins the arm horizontally. ...Let's check the current setup: ...Rotation around the x-axis causes a swing in the y-z plane... However, the Log 10 is attached to the 'Up' face of the pivot. This means the arm starts pointing straight up, which is ineffective for a throw. ... - Improper Container Setup. After moving the arm, I must also move the Container 11 to an 'Up' attachable face on the Log 10 to make it face upward y+ again. The Log 10 has several 'Up' attachable faces; face_id 8 is a suitable choice. ... Figure 4: Example CoT of inspector agents (w/ Gemini 2.5 Pro). Blue text highlights the moderate capability of LLMs in spatial reasoning and imagined physical simulation. Thought Process: - The catapult arm (order 7 → 8 → 9) is driven by a Rotating Block whose axis is currently lying horizontally (orient x+), so the arm sweeps sideways. ... - The two little cubes on the ground (order 5, 6) are the only supports and break under load. Attaching one wooden-block beam to the front of each cube forms a small skid that spreads the load. ... - The light base tips because the long arm and the boulder are far forward. Adding a ballast cube (ID 35) to each rear side cube (#2 and #3) shifts the center of gravity rearwards. Modification Description: 1. Re-attach Rotating Block 7 to the upper build-point of block 1 so that its axis becomes vertical and the arm can pitch upward. 2. Reinforce the base by adding a 2-unit wooden beam in front of each ground cube (5, 6). 3. Add ballast cubes on the left and right rear frame cubes (2 and 3) to pull the center of gravity back over the supports. ... (a) o3 CoT (b) Modification Steps Figure 5: Example CoT of inspector agents (w/ OpenAI o3). Red text highlights reasoning errors. Performance metrics. We evaluate our agentic systems using the following quantitative metrics: 1) file validity rate, the proportion of generated JSON files that can be successfully parsed into machine construction trees; 2) spatial validity rate, the proportion of generated machines that are free from self-collisions; 3) machine validity rate, the proportion of machines that satisfy both file and spatial validity; 4) mean and maximum simulation scores, the average and highest rewards achieved by generated machines in the environment. Environment feedback. For the simple target machines car and catapult, we consider environment feedback within a time window of 5 seconds that is long enough to characterize their designated functionalities. Specifically, for car we consider maximum speed and driving distance; for catapult, we consider boulder throwing distance and maximum height. We also record the machines' global orientation and broken parts information (if any). Details are elaborated in Appendix D.3. 4.2 AGENTIC WORKFLOW DESIGN Single-agent setting. We first benchmark if a single LLM agent alone is capable of completing the task. Specifically, one LLM agent is provided with the environment description, the available machine components, the assembly syntax, and the functional requirements (e.g., moving an object forward). The agent generates a chain-of-thought (CoT; Wei et al. (2022)) to reason about what is needed and why, and then derives an abstract plan (e.g., connecting a lever to a container with a boulder). This plan is later translated into the construction tree representation. Iterative editing. Because compositional machine design requires both low-level spatial reasoning and high-level ideation, a single agent rarely produces satisfactory machines. We therefore also design an iterative editing workflow that involves three major agents: 1) designer, which produces an initial plan from the environment description, the available machine components, the assembly syntax, and the functional requirements; 2) refiner, a self-critic agent that which evaluates a draft 5 Technical Report Figure 7: Machines produced by agentic systems with different LLMs (Top: car; Bottom: catapult). Models Single-agent Iterative Editing Hierarchical Design Mean Max Std Mean Max Std Mean Max Std "Catapult" Task Gemini 2.5 Pro 2.30 9.0 3.86 4.67 21.95 8.68 9.83 18.19 8.35 OpenAI o3 2.87 5.22 1.96 9.14 14.01 3.71 2.00 11.11 3.98 Qwen3-Coder-480B-A35B 1.75 9.24 3.17 5.10 12.02 5.54 3.90 6.52 2.54 Doubao Seed 1.6 3.18 8.2 2.99 4.82 9.10 3.41 1.73 4.76 2.39 Claude Opus 4 1.19 4.82 2.21 1.18 4.91 2.18 2.27 9.32 4.22 DeepSeek-V3 3.50 4.86 2.17 3.07 5.24 2.55 2.41 4.93 2.58 Kimi K2 2.57 9.05 3.72 2.82 11.39 5.23 5.39 12.02 5.16 Llama 4 Scout 17B 16E 3.18 5.64 1.95 1.28 5.94 2.41 3.59 11.83 4.15 "Car" Task Gemini 2.5 Pro 33.96 40.85 6.73 34.34 41.66 13.96 29.96 41.52 7.78 OpenAI o3 15.28 32.08 8.97 14.34 35.08 11.79 28.39 36.18 11.01 Qwen3-Coder-480B-A35B 8.87 11.50 4.46 15.24 28.95 13.12 12.59 34.05 10.78 Doubao Seed 1.6 3.51 9.40 4.85 8.11 10.04 3.58 18.75 26.02 4.38 Claude Opus 4 9.83 12.98 1.28 8.07 28.04 12.48 14.56 38.67 20.69 DeepSeek-V3 9.06 10.53 3.68 8.23 18.84 7.12 17.92 31.94 12.85 Kimi K2 1.75 8.09 2.80 14.36 28.34 9.47 1.94 14.99 5.48 Llama 4 Scout 17B 16E 0.02 0.03 0.01 3.04 12.76 5.23 1.55 2.00 0.32 Table 1: Quantitative results of agentic systems with different LLMs. against requirements and constraints and proposes multiple candidate revisions at each step; 3) environment querier, an agent that runs machine simulation and summarizes the environment feedback, in the way that it always provides global information such as machine orientation throughout the trajectory but selectively reports the feedback on specific blocks (e.g., position and speed) for further machine refinement. The workflow begins with a draft from the designer that is later critiqued by an inspector, which assess the designed machine in an abstract fashion, then polished once by a refiner. The design then undergoes a fixed number of iterations, each consisting of one querier and one refiner step. At refiner stages, multiple candidates are generated for running Monte Carlo tree search (MCTS; Coulom (2006)). The best design found in this search process is selected as output. A machine that moves? MetaDesigner Designer Inspector + Refiner Refiner Active Env Querier Simulation × N Figure 6: Our agentic machine design workflow. Hierarchical construction. Inspired by typical human design processes as well as recent designs of agentic systems (Xiao et al., 2025; Teng et al., 2025; Zhang et al., 2025), we introduce a meta-designer agent that first analyzes the requirements and constraints, and then constructs a highlevel blueprint of the major functional blocks (e.g., the suspension system) and their interconnections. With this blueprint in place, we adopt an autoregressive strategy to build the machine block by block: 1) we begin with the first functional block and dispatch the job to eight parallel builder agents; 2) the valid designs from this stage are evenly distributed to another eight builder agents to construct the second block; and 3) the process iterates in this manner until the entire machine is assembled. Empirically, we find that the meta-designer typically decomposes a machine into three to four functional blocks. 4.3 KEY EMPIRICAL OBSERVATIONS General observations. We find compositional machine design to be a challenging task for LLMs (Fig. 7 and Table 1), though not intractable: Gemini 2.5 Pro can consistently construct visually sensible machines with non-trivial performance. We find no evidence that reasoning models outperform non-reasoning ones, suggesting the main bottleneck lies in LLMs' limited 3D understanding and/or 6 Technical Report in-context learning. We also find that LLMs, especially reasoning models, still exhibit some spatial and physical reasoning as exemplified by the CoT from Gemini Pro 2.5 (Fig. 4), much like a world model in text space. Failure patterns. We identified common failure patterns in LLM-generated machines (Fig. 17): 1) incorrect part orientations; 2) incorrect part placements, where parts attach to wrong parents; 3) instruction-following failures, where elements of the high-level blueprint are not strictly observed; 4) flawed high-level reasoning, where LLMs fail to recognize correct physics or essential components. Effect of environment feedback. It is unsurprising that with the more environment feedback the agents receive, the better performance of generated machines improve in general (Table 12). Effect of edit history. We find that edit histories are generally helpful in decreasing the number of failure attempts in creating valid machines (Table 6), which underscores the importance of longer context window of base models for efficient exploration. Hierarchical design. We observe the mean performance improves with hierarchical design only when the abstract-level reasoning on blueprints is reliable, as shown by the performance of Gemini 2.5 Pro. In the meantime, consistent with the intuition that hierarchical design is more structured and principled, it generally yields lower variance in obtained scores. Effect of CoT reasoning. As shown in Fig. 17, LLMs often fail to faithfully translate high-level machine design plans in their CoT into semantically and geometrically consistent machine construction trees. To better assess the impact of CoT reasoning on high-level design, we feed the CoT generated by Gemini 2.5 Pro (the best-performing model) to other LLMs, prompting them to directly output construction trees. The resulting machines generally show improved performance (Fig. 30) and highlight the critical role of high-level semantic reasoning in machine design. CoT-machine correspondence. Though the CoT often provides a reasonably high-level blueprint, agents may still generate machines that deviate from the intended structure (Fig. 17). We hypothesize that this misalignment is a key reason many LLMs struggle to build better machines. Machine representation. We experiment with a coordinate-only representation derived from the default position-based (Appendix D.1) and our construction tree representation. Results show that the coordinate-only representation performs significantly worse (Table 7), implying that explicit structural information is necessary for LLM understanding. 3D information. We observe that (Table 5) the performance generally improves when we also feed parsed 3D information into the context of LLMs, which implies that LLMs are less capable of understanding relative spatial relationship (e.g., construction trees). 5 TOWARDS MACHINE DESIGN THROUGH REINFORCEMENT LEARNING Although agentic systems show promise in compositional machine design, simply scaling system size is unlikely to be economical, as errors compound rapidly. Like humans who internalize experience, LLM agents should consolidate new knowledge into weights. We thus explore reinforcement learning with verifiable rewards (RLVR) in BesiegeField to develop machine-design capabilities. 5.1 EXPERIMENTAL SETTINGS Cold-start finetuning and dataset curation. Following recent RLVR practices (Lambert et al., 2025; Yue et al., 2025; Zhu et al., 2025a), we curated a small dataset to cold-start LLMs by aligning their reasoning process with expert CoT. Specifically, we collected textual descriptions of machine functionalities from Besiege player communities and prompted Gemini 2.5 Pro to generate corresponding machines. After filtering out invalid generations, we obtained 9,984 valid machine-CoT pairs. We then used this dataset to perform supervised finetuning on Qwen-2.5-14B-Instruct for 12 epochs. Additional training details are provided in Appendix F.2. Reward design. We use the reward R = is_valid × performance where is_valid indicates whether constraints are satisfied (Appendix D.2). For car, performance is the maximum travel distance; for catapult, it is the product of the boulder's maximum height and distance, penalizing solutions that are extreme in only one dimension. RL finetuning settings. We finetune agents specialized in building a single type of machine (either car or catapult), making our setup closely aligned with one-shot RLVR (Wang et al., 2025) where 7 Technical Report Models Catapult Car Validity Ratio Mean Score Max Score Validity Ratio Mean Score Max Score Qwen2.5-14B-Instruct 11/50 0.06 2.41 46/50 4.97 19.10 Qwen2.5-14B-Instruct + Cold-Start 9/50 0.11 5.54 40/50 4.67 20.23 Qwen2.5-14B-Instruct + RL 12/50 0.13 5.92 41/50 3.72 24.08 Qwen2.5-14B-Instruct + Cold-Start + RL 11/50 0.14 7.14 42/50 5.05 45.72 Table 2: Results of RLVR post-training in BesiegeField. We use Qwen2.5-14B as the backbond LLM. a single prompt is used throughout the RL process. We adopt group relative policy optimization (GRPO; Shao et al. (2024)) with LoRA parametrization (Hu et al., 2022) (rank 64) and mixedprecision training to finetune the cold-started model. We evaluate both the standard GRPO advantage estimator and the pass@k variant (Tang et al., 2025). In the latter case, due to the implementation of the RLVR framework verl (Sheng et al., 2025), the number of rollouts is set equal to k. Each experiment is run for 400 iterations on 8 A100 GPUs with per-GPU batch size of 1 and gradient accumulation of 8. We apply KL regularization with strength 0.001 to encourage the model to remain close to its initialization. 5.2 MAIN RESULTS AND OBSERVATIONS General results. As shown in Fig. 24, RL finetuning can generally improve the mean performance, mostly by increasing the percentage that machines are valid (including file validity, machine validity and satisfaction of minimum performance threshold). In the meantime, we also find that the maximum reward increases in our best setting. Similar to observations in many other RLVR settings, the entropy of the output distribution quickly drops even with regularization. Pass@k advantage vs. Pass@1 advantage. Since we eventually care about the best performing designs, especially given the low validity rate, our default setting adopts Pass@k advantage estimator. Indeed, Pass@k finetuning is more likely to discovery promising machine designs (Fig. 22). Base Cold Start RL Figure 8: Designs at RL finetuning stages. Evolution of generated machines during finetuning. In Fig. 8, we qualitatively examine how models refine their designs over the course of finetuning. We observe that models typically make detail-level adjustments, such as shifting part positions, while keeping the same high-level design strategy rather than exploring alternative strategies. Although these strategies are often reasonable, the models struggle to find precise configurations that enable smooth coordination among parts. This precision is especially critical for sophisticated mechanisms like catapults to function properly. Cold-start. Not surprisingly, we find that cold-start alone does not enable models to produce satisfactory designs, and that finetuning on the cold-start model is better than on the base model (Table 2). 6 DISCUSSIONS AND INTRIGUING INSIGHTS Capabilities for compositional machine design. Although tasks such as visual understanding and generation also depend on spatial, physical, and semantic reasoning, compositional machine design introduces unique requirements for LLM capabilities. Without precise spatial placement of machine parts, a design may fail to function correctly; a gear train, for example, will not transmit rotation if the gears are misaligned. Since the design process is typically hierarchical, successful LLMs must be able to accurately translate high-level blueprints into detailed geometric designs. In addition, machine design spans both concept-level reasoning and detailed specification. This dual demand often leads to large design documents and calls for a form of "visual reasoning" expressed through text, similar to what has been studied in LLMs applied to scalable vector graphics (SVG) and CAD models (Qiu et al., 2025b; Alrashedy et al., 2025). Multimodal reasoning is also important because effective machine design typically relies on integrating textual descriptions with visual or schematic representations. In this work, however, we focus only on pure LLM-based reasoning to isolate and analyze its capabilities for compositional machine design. Challenges in agentic machine design systems. The task of machine design faces similar challenges found in agentic systems in domains such as legal services and other knowledge-intensive fields. A key difficulty is the highly varied requirements and domain knowledge of different cus8 Technical Report tomers. To address this, LLMs need to acquire task-specific knowledge through in-context learning or finetuning. In addition, the complexity of design tasks often requires multiple agents to coordinate, and such pipelines can suffer error accumulation when the base LLM lacks sufficient capability. Exploration in machine design space. Different from tasks such as theorem proving, the goal of compositional machine design is to discover structures that more effectively achieve desired functionalities. Rather than reusing existing solutions, a practical design agent should be able to propose novel strategies, structural layouts, and part specifications as machine complexity increases. Meeting this requirement calls for RL finetuning methods that prevent models from collapsing into a narrow set of strategies and structures, which recent methods aim to alleviate (Zhu et al., 2025b; Chen et al., 2025c; Cui et al., 2025; Cheng et al., 2025; Liu et al., 2025b). This demand is closely related to continual RL (Schwarz et al., 2018), since finetuned LLMs must avoid catastrophic forgetting, maintain its reasoning ability, and consolidate learned strategies, which is particularly important because large-scale machine design datasets are rare and commercially infeasible to collect. 7 RELATED WORK AND CONCLUDING REMARKS 3D graphics codes for generative modeling. There is a long history in 3D asset generation and engineering design of representing the construction of a target instance as a program or sequence of operations in a domain-specific language (Ritchie et al., 2023; Sun et al., 2025; Deng et al., 2022), which we refer to here as 3D graphics codes (Qiu et al., 2025b; Chen et al., 2025a). Unlike geometric representations such as point clouds or meshes, these codes describe objects at a higher semantic level, capturing part composition, design constraints, and user operations in modeling software. Similar to programming languages, 3D graphics codes are inherently discrete and are typically generated with autoregressive models trained from scratch (Yuan et al., 2024) or with LLMs finetuned on curated datasets (Kulits et al., 2025; Chen et al., 2025b). Much of the existing work centers on CAD scripts for individual parts (Wu et al., 2023; Alrashedy et al., 2025; Li et al., 2025) or Blender macros for single assets (Huang et al., 2024). Whereas recent studies on LEGO assemblies (Pun et al., 2025), Minecraft structures (Fan et al., 2022; Liu et al., 2025a), and procedural scene generation (Sun et al., 2025; Chen et al., 2025a; Jones et al., 2025; Yuan et al., 2024) introduce richer compositionality, they still fall short of the task of compositional machine design, which requires assemblies that both function under physical laws and exhibit the precise geometry of real objects. LLM agents. LLM agents are language models organized to operate in iterative loops of perception and action (Yao et al., 2023b; Minaee et al., 2024; Hu et al., 2024c). They interact with external tools (Schick et al., 2023; Liu et al., 2024b; Kim et al., 2024; Qin et al., 2024), respond to signals from simulated or real environments (Savva et al., 2019; Shridhar et al., 2021), incorporate selfreflection to refine their outputs (Hu et al., 2024b; Alrashedy et al., 2025; Shinn et al., 2023; Yu et al., 2025), and are commonly organized into multi-agent systems that coordinate roles and exchange information (Li et al., 2023; Chen et al., 2024; Zhang et al., 2025) . These designs move beyond one-shot text generation and establish LLMs as adaptive decision makers capable of long-horizon reasoning. Approaches that introduce search over possible solutions (Yao et al., 2023a; Putta et al., 2024; Koh et al., 2024) or reflection on prior attempts (Besta et al., 2024; Deng et al., 2024; Renze & Guven, 2024; Xiao et al., 2025; Yu et al., 2025) have enabled progress on increasingly complex tasks. LLM agents have already been used in design tasks such as code synthesis (Gao et al., 2023; Novikov et al., 2025; Madaan et al., 2023), CAD design (Alrashedy et al., 2025) and game environments (Wang et al., 2024; Fan et al., 2022). Partially inspired by these developments, Makatura et al. (2023) proposed a prototypical agent-based design framework that generates mechanical structures from text prompts. Their system treats structure generation as a one-shot process and delegates the search for optimal geometric and physical parameters to external optimization tools. In contrast, our work with BesiegeField explores how LLM agents can directly and iteratively bridge compositional structures to functional goals, framing design as a process of reasoning and adaptation with both accurate simulation and intuitive physics. Reinforcement learning with verifiable rewards (RLVR). Recent studies indicate that, by running RL finetuning with verifiable rewards from simulators or verifiers, reasoning abilities emerge (Shao et al., 2024; Guo et al., 2025; Bai et al., 2022), even when single prompt is used during finetuning (Wang et al., 2025). Yet, many methods exhibit loss of diversity as output entropy collapses during reinforcement learning and thus do not fully enable LLMs to explore novel solutions. Examples of mitigation methods include explicit entropy or KL regularization (Cui et al., 2025; Ouyang 9 Technical Report et al., 2022), Pass@k training (Tang et al., 2025; Chen et al., 2025c), and distribution-matching objectives like generative flow networks (Zhu et al., 2025b; Hu et al., 2024a). BesiegeField provides verifiable rewards and thus enables direct application of RLVR to compositional machine design. Concluding remarks. We introduced compositional machine design, a simplified yet challenging task that reflects core aspects of real-world machine design. To evaluate LLM performance on this task, we developed BesiegeField, an interactive environment based on the game Besiege. Our results with agentic systems and reinforcement learning demonstrate that LLMs hold promise for solving this problem. While we did not exhaustively explore all designs or integrate multi-modal information, our findings underscore the need to advance fundamental LLM algorithms and capabilities, and point toward exciting future directions in machine design. ACKNOWLEDGMENT We sincerely thank the developers of Besiege for creating such an inspiring game and for fostering an open, vibrant player community, without which our exploration of this idea would not have been possible. We also extend our gratitude to the developers of BepInEx, whose work made it possible for us to unlock the full potential of Besiege. REFERENCES Kamel Alrashedy, Pradyumna Tambwekar, Zulfiqar Haider Zaidi, Megan Langwasser, Wei Xu, and Matthew Gombolay. Generating CAD code with vision-language models for 3d designs. In ICLR, 2025. 8, 9 Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint , 2022. 9 W Beitz, G Pahl, and K Grote. Engineering design: a systematic approach. Mrs Bulletin, 71:30, 1996. 3 Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In AAAI, 2024. 9 Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors. In ICLR, 2024. 9 Yamei Chen, Haoquan Zhang, Yangyi Huang, Zeju Qiu, Kaipeng Zhang, Yandong Wen, and Weiyang Liu. Symbolic graphics programming with large language models. arXiv preprint , 2025a. 9 Yongwei Chen, Yushi Lan, Shangchen Zhou, Tengfei Wang, and Xingang Pan. Sar3d: Autoregressive 3d object generation and understanding via multi-scale 3d vqvae. In CVPR, 2025b. 9 Zhipeng Chen, Xiaobo Qin, Youbin Wu, Yue Ling, Qinghao Ye, Wayne Xin Zhao, and Guang Shi. Pass@ k training for adaptively balancing exploration and exploitation of large reasoning models. arXiv preprint , 2025c. 9, 10 Daixuan Cheng, Shaohan Huang, Xuekai Zhu, Bo Dai, Wayne Xin Zhao, Zhenliang Zhang, and Furu Wei. Reasoning with exploration: An entropy perspective. arXiv preprint , 2025. 9 Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International conference on computers and games, pp. 72-83. Springer, 2006. 6 Ganqu Cui, Yuchen Zhang, Jiacheng Chen, Lifan Yuan, Zhi Wang, Yuxin Zuo, Haozhan Li, Yuchen Fan, Huayu Chen, Weize Chen, et al. The entropy mechanism of reinforcement learning for reasoning language models. arXiv preprint , 2025. 9 10 Technical Report Boyang Deng, Sumith Kulal, Zhengyang Dong, Congyue Deng, Yonglong Tian, and Jiajun Wu. Unsupervised learning of shape programs with repeatable implicit parts. NeurIPS, 2022. 9 Yang Deng, Xuan Zhang, Wenxuan Zhang, Yifei Yuan, See-Kiong Ng, and Tat-Seng Chua. On the multi-turn instruction following for conversational web agents. arXiv preprint , 2024. 9 Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer. 8-bit optimizers via block-wise quantization. In ICLR, 2022. 32 Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. NeurIPS, 2022. 3, 4, 9 Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In ICML, 2023. 9 Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Peiyi Wang, Qihao Zhu, Runxin Xu, Ruoyu Zhang, Shirong Ma, Xiao Bi, et al. Deepseek-r1 incentivizes reasoning in llms through reinforcement learning. Nature, 645(8081):633-638, 2025. 9 Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In ICLR, 2022. 8 Edward J Hu, Moksh Jain, Eric Elmoznino, Younesse Kaddar, Guillaume Lajoie, Yoshua Bengio, and Nikolay Malkin. Amortizing intractable inference in large language models. In ICLR, 2024a. 10 Shiying Hu, Zengrong Huang, Chengpeng Hu, and Jialin Liu. 3d building generation in minecraft via large language models. In 2024 IEEE Conference on Games (CoG), pp. 1-4. IEEE, 2024b. 9 Ziniu Hu, Ahmet Iscen, Aashi Jain, Thomas Kipf, Yisong Yue, David A Ross, Cordelia Schmid, and Alireza Fathi. Scenecraft: An llm agent for synthesizing 3d scenes as blender code. In ICML, 2024c. 9 Ian Huang, Guandao Yang, and Leonidas Guibas. Blenderalchemy: Editing 3d graphics with visionlanguage models. In ECCV, 2024. 9 R Kenny Jones, Paul Guerrero, Niloy J Mitra, and Daniel Ritchie. Shapelib: Designing a library of programmatic 3d shape abstractions with large language models. arXiv preprint , 2025. 9 Myeongsoo Kim, Tyler Stennett, Dhruv Shah, Saurabh Sinha, and Alessandro Orso. Leveraging large language models to improve rest api testing. In Proceedings of the 2024 ACM/IEEE 44th International Conference on Software Engineering: New Ideas and Emerging Results, pp. 37-41, 2024. 9 Jing Yu Koh, Stephen Marcus McAleer, Daniel Fried, and Russ Salakhutdinov. Tree search for language model agents. In ICLR, 2024. 9 Peter Kulits, Haiwen Feng, Weiyang Liu, Victoria Fernandez Abrevaya, and Michael J Black. Rethinking inverse graphics with large language models. Transactions on Machine Learning Research (TMLR), 2025. 9 Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James Validad Miranda, Alisa Liu, Nouha Dziri, Xinxi Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Christopher Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi, and Hannaneh Hajishirzi. Tulu 3: Pushing frontiers in open language model post-training. In COLM, 2025. 7 Guohao Li, Hasan Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel: Communicative agents for" mind" exploration of large language model society. NeurIPS, 2023. 9 11 Technical Report Jiahao Li, Weijian Ma, Xueyang Li, Yunzhong Lou, Guichun Zhou, and Xiangdong Zhou. Cadllama: leveraging large language models for computer-aided design parametric 3d model generation. In CVPR, 2025. 4, 9 Shunyu Liu, Yaoru Li, Kongcheng Zhang, Zhenyu Cui, Wenkai Fang, Yuxuan Zheng, Tongya Zheng, and Mingli Song. Odyssey: Empowering minecraft agents with open-world skills. In IJCAI, 2025a. 9 Weiyang Liu, Zeju Qiu, Yao Feng, Yuliang Xiu, Yuxuan Xue, Longhui Yu, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, et al. Parameter-efficient orthogonal finetuning via butterfly factorization. In ICLR, 2024a. 32 Xukun Liu, Zhiyuan Peng, Xiaoyuan Yi, Xing Xie, Lirong Xiang, Yuchen Liu, and Dongkuan Xu. Toolnet: Connecting large language models with massive tools via tool graph. arXiv preprint , 2024b. 9 Zhen Liu, Tim Z Xiao, Weiyang Liu, Yoshua Bengio, and Dinghuai Zhang. Efficient diversitypreserving diffusion alignment via gradient-informed gflownets. In ICLR, 2025b. 9 Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. NeurIPS, 2023. 9 Liane Makatura, Michael Foshey, Bohan Wang, Felix HähnLein, Pingchuan Ma, Bolei Deng, Megan Tjandrasuwita, Andrew Spielberg, Crystal Elaine Owens, Peter Yichen Chen, et al. How can large language models help humans in design and manufacturing? arXiv preprint , 2023. 9 Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. Large language models: A survey. arXiv preprint , 2024. 9 Alexander Novikov, Ngân V ̃u, Marvin Eisenberger, Emilien Dupont, Po-Sen Huang, Adam Zsolt Wagner, Sergey Shirobokov, Borislav Kozlovskii, Francisco JR Ruiz, Abbas Mehrabian, et al. Alphaevolve: A coding agent for scientific and algorithmic discovery. arXiv preprint , 2025. 9 Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. NeurIPS, 2022. 9 Ava Pun, Kangle Deng, Ruixuan Liu, Deva Ramanan, Changliu Liu, and Jun-Yan Zhu. Generating physically stable and buildable brick structures from text. In ICCV, 2025. 4, 9 Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, and Rafael Rafailov. Agent q: Advanced reasoning and learning for autonomous ai agents. arXiv preprint , 2024. 9 Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, dahai li, Zhiyuan Liu, and Maosong Sun. ToolLLM: Facilitating large language models to master 16000+ real-world APIs. In ICLR, 2024. 9 Zeju Qiu, Weiyang Liu, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, and Bernhard Schölkopf. Controlling text-to-image diffusion by orthogonal finetuning. In NeurIPS, 2023. 32 Zeju Qiu, Simon Buchholz, Tim Z Xiao, Maximilian Dax, Bernhard Schölkopf, and Weiyang Liu. Reparameterized llm training via orthogonal equivalence transformation. In NeurIPS, 2025a. 32 Zeju Qiu, Weiyang Liu, Haiwen Feng, Zhen Liu, Tim Z. Xiao, Katherine M Collins, Joshua B Tenenbaum, Adrian Weller, Michael J Black, and Bernhard Schölkopf. Can large language models understand symbolic graphics programs? In ICLR, 2025b. 8, 9 12 Technical Report Zeju Qiu, Weiyang Liu, Adrian Weller, and Bernhard Schölkopf. Orthogonal finetuning made scalable. In EMNLP, 2025c. 32 Matthew Renze and Erhan Guven. Self-reflection in llm agents: Effects on problem-solving performance. arXiv preprint , 2024. 9 Daniel Ritchie, Paul Guerrero, R Kenny Jones, Niloy Mitra, Adriana Schulz, Karl D Willis, and Jiajun Wu. Neurosymbolic models for computer graphics. In Eurographics, 2023. 9 Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied ai research. In CVPR, 2019. 9 Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. NeurIPS, 2023. 9 Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In ICML, 2018. 9 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint , 2024. 8, 9 Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. In Proceedings of the Twentieth European Conference on Computer Systems, pp. 1279-1297, 2025. 8 Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. NeurIPS, 2023. 9 Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In ICLR, 2021. 9 Chunyi Sun, Junlin Han, Weijian Deng, Xinlong Wang, Zishan Qin, and Stephen Gould. 3d-gpt: Procedural 3d modeling with large language models. In 3DV, 2025. 9 Yunhao Tang, Kunhao Zheng, Gabriel Synnaeve, and Remi Munos. Optimizing language models for inference time objectives using reinforcement learning. In ICML, 2025. 8, 10 Fengwei Teng, Zhaoyang Yu, Quan Shi, Jiayi Zhang, Chenglin Wu, and Yuyu Luo. Atom of thoughts for markov llm test-time scaling. arXiv preprint , 2025. 6 Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. Transactions on Machine Learning Research (TMLR), 2024. 9 Yiping Wang, Qing Yang, Zhiyuan Zeng, Liliang Ren, Liyuan Liu, Baolin Peng, Hao Cheng, Xuehai He, Kuan Wang, Jianfeng Gao, et al. Reinforcement learning for reasoning in large language models with one training example. arXiv preprint , 2025. 7, 9 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In NeurIPS, 2022. 5 Melvin Wong, Yueming Lyu, Thiago Rios, Stefan Menzel, and Yew-Soon Ong. Llm-to-phy3d: Physically conform online 3d object generation with llms. arXiv preprint , 2025. 3 Sifan Wu, Amir Khasahmadi, Mor Katz, Pradeep Kumar Jayaraman, Yewen Pu, Karl Willis, and Bang Liu. Cad-llm: Large language model for cad generation. In NeurIPS, 2023. 9 13 Technical Report Tim Z Xiao, Robert Bamler, Bernhard Schölkopf, and Weiyang Liu. Verbalized machine learning: Revisiting machine learning with language models. Transactions on Machine Learning Research, 2025. 6, 9 Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. NeurIPS, 2023a. 9 Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In ICLR, 2023b. 9 Zhouliang Yu, Yuhuan Yuan, Tim Z Xiao, Fuxiang Frank Xia, Jie Fu, Ge Zhang, Weiyang Liu, et al. Generating symbolic world models via test-time scaling of large language models. Transactions on Machine Learning Research, 2025. 9 Haocheng Yuan, Jing Xu, Hao Pan, Adrien Bousseau, Niloy J Mitra, and Changjian Li. Cadtalk: An algorithm and benchmark for semantic commenting of cad programs. In CVPR, 2024. 9 Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? In ICML, 2025. 7 Jiayi Zhang, Jinyu Xiang, Zhaoyang Yu, Fengwei Teng, Xiong-Hui Chen, Jiaqi Chen, Mingchen Zhuge, Xin Cheng, Sirui Hong, Jinlin Wang, Bingnan Zheng, Bang Liu, Yuyu Luo, and Chenglin Wu. AFlow: Automating agentic workflow generation. In ICLR, 2025. 6, 9 Xinyu Zhu, Mengzhou Xia, Zhepei Wei, Wei-Lin Chen, Danqi Chen, and Yu Meng. The surprising effectiveness of negative reinforcement in llm reasoning. arXiv preprint , 2025a. 7 Xuekai Zhu, Daixuan Cheng, Dinghuai Zhang, Hengli Li, Kaiyan Zhang, Che Jiang, Youbang Sun, Ermo Hua, Yuxin Zuo, Xingtai Lv, et al. Flowrl: Matching reward distributions for llm reasoning. arXiv preprint , 2025b. 9, 10 14 Technical Report Appendix Table of Contents A Supplementary Tables 16 B Details on the BesiegeField Environment 18 B.1 Construction Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 B.2 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 B.3 Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 B.4 Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 C Search Strategies in Machine Modification Loops 24 D Environment Settings for Agentic Design and RL Finetuning 28 D.1 Machine Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 D.2 Reward Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 D.3 Environment Feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 E Challenges in Compositional Machine Design 30 E.1 Failure Patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 E.2 Need for Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 E.3 Appearance vs. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 F Settings for RL Finetuning 32 F.1 Cold-Start Dataset Curation . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 F.2 Cold-Start Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 F.3 RL Experiment Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 G Additional Ablation Studies 34 H Generated Samples 40 H.1 From RL-finetuned Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 H.2 From Agentic Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 I Relations between CoT and Machines 42 J CoT Samples from Gemini 2.5 Pro 43 J.1 Single Agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 J.2 Meta Designer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 J.3 Designer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 J.4 Inspector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 J.5 Environment Querier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 J.6 Refiner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 K Single-Agent Prompt 56 L Multi-Agent Prompts 58 L.1 Shared Prompts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 L.2 Designer System And User Prompt . . . . . . . . . . . . . . . . . . . . . . . . 61 L.3 Inspector System And User Prompt . . . . . . . . . . . . . . . . . . . . . . . 62 L.4 Refiner System Prompt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 L.5 Environment Querier System Prompt . . . . . . . . . . . . . . . . . . . . . . 65 L.6 Block Informations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 15 Technical Report A SUPPLEMENTARY TABLES Models Meta-Designer & Designer Blind Refinement Modification w/ Env Feedback Valid Mean Max Valid Mean Max Valid Mean Max "Catapult" Task Gemini 2.5 Pro 5 8.49 9.14 5 8.18 11.07 5 15.73 18.19 Claude Opus 4 4 4.17 4.36 2 5.38 5.8 2 9.10 9.32 o3 3 0 0 3 0 0 3 5.34 11.11 Qwen3-Coder-480B-A35B 6 0.75 4.5 6 1.61 5.0 6 5.21 6.52 Doubao Seed 1.6 3 4.34 4.37 3 0.31 0.49 3 4.62 4.76 DeepSeek-V3 7 0 0 7 0.98 3.18 4 4.82 4.93 Kimi K2 6 7.18 12.02 2 5.29 7.36 0 - - GPT-5 8 3.87 4.92 1 5.35 5.35 0 - - Llama 4 Scout 17B 16E 8 3.59 11.83 0 - - 0 - - "Car" Task Gemini 2.5 Pro 7 27.85 38.06 7 17.81 39.64 7 29.96 41.52 Claude Opus 4 3 2.96 3.41 3 36.18 37.05 3 26.59 38.67 o3 8 29.27 41.43 2 20.03 40.04 2 28.39 36.18 Qwen3-Coder-480B-A35B 6 5.43 8.72 6 3.90 11.25 6 11.75 34.05 Doubao Seed 1.6 5 21.80 29.91 5 13.25 26.05 5 18.75 26.02 DeepSeek-V3 3 0.27 0.47 3 16.94 29.87 3 17.92 31.94 Kimi K2 1 6.74 6.74 1 0.39 0.39 1 14.99 14.99 GPT-5 8 5.67 20.32 8 3.75 9.65 8 8.43 13.72 Llama 4 Scout 17B 16E 4 1.55 2.00 1 0.47 0.47 1 0.47 0.47 Table 3: Comparison between the performance of machines generated by different stages. The mean score is computed by taking the average of the scores of all valid machines. We sample 8 machines at the designer stage and keep only the valid machines. The maximum number of retries in the following stages is thus equal to the number of valid machines produced at the designer stage. Models Designer Blind Refinement Modification w/ Env Feedback Valid Mean Max Valid Mean Max Valid Mean Max "Catapult" Task Gemini 2.5 Pro 3 6.13 9.0 3 8.10 12.09 3 11.08 21.95 Claude Opus 4 2 4.76 4.91 0 - - 0 - - o3 8 2.87 5.22 8 2.98 9.17 8 9.14 14.01 Qwen3-Coder-480B-A35B 4 3.5 9.24 4 6.39 10.78 4 10.2 12.02 Doubao Seed 1.6 6 4.24 8.2 6 4.61 8.75 6 6.43 9.10 DeepSeek-V3 6 4.67 4.86 5 4.33 4.78 5 4.91 5.24 Kimi K2 3 6.85 9.05 2 8.31 8.97 2 11.28 11.39 GPT-5 5 1.50 1.88 5 5.86 12.77 5 7.53 9.48 Llama 4 Scout 17B 16E 7 3.63 5.64 2 5.88 6.95 2 5.12 5.94 Table 4: Performance of machines generated after different stages of the iterative editing workflow (without meta-designer). Mean scores are computed on valid machines. 16 Technical Report Models Baseline w/o Parsed 3D Information Valid Mean Max Valid Mean Max Gemini 2.5 Pro 5 8.18 11.07 5 7.13 11.36 o3 3 0 0 3 0 0 Qwen3-Coder-480B-A35B 5 1.61 5.0 0 - - Claude Opus 4 2 5.38 5.8 2 0.18 0.26 Table 5: Ablation on the effect of parsed 3D information. We compute the blind refinement score under two machine representations. The average score is computed with respect to valid machines only; 8 tries for each experiment. Models Refiner Avg Retry ↓ Refiner validity rate ↑ Baseline w/o Modify History Baseline w/o Modify History Gemini 2.5 Pro 1.42 1.33 100% 100% o3 1.94 2.37 97.87% 88.89% Qwen3-Coder-480B-A35B 2.50 2.75 82.65% 91.94% Doubao Seed 1.6 2.74 2.93 85.18% 85.18% Claude Opus 4 3.24 3.69 94.12% 53.85% DeepSeek-V3 1.54 1.68 100% 98.31% Table 6: Ablation of edit history as refiner inputs. Models Machine Validity (pass/total) Designer Score File Valid 3D Valid Final Valid Mean Max Baseline (construction tree) Gemini 2.5 Pro pro 8/8 5/8 5/8 8.49 9.14 o3 3/8 3/3 3/8 0 0 Qwen3-Coder-480B-A35B 8/8 6/8 6/8 0.75 4.5 Doubao Seed 1.6 7/8 3/7 3/8 4.34 4.37 Claude Opus 4 8/8 4/8 4/8 4.17 4.36 DeepSeek-V3 7/8 7/7 7/8 0 0 Kimi K2 6/8 6/6 6/8 7.18 12.02 Llama 4 Scout 17B 16E 8/8 8/8 8/8 3.59 11.83 Global position-based 3D representation Gemini 2.5 Pro pro 5/8 5/8 5/8 4.96 12.85 o3 0/8 - - - - Claude Opus 4 0/8 - - - - Kimi K2 5/8 4/5 4/8 0 0 Llama 4 Scout 17B 16E 0/8 - - - - Table 7: Ablation study on machine representations. 17 Technical Report B DETAILS ON THE BESIEGEFIELD ENVIRONMENT We built BesiegeField by creating plug-in modules for the game Besiege that create interfaces to allow flexible composition of parts (once certain rules are obeyed), control policies on multiple powered parts (e.g.. powered cogs), recording of state information of any block (e.g., position, orientation, part integrity, etc.) and settings of termination conditions (e.g., some part passing through a line). BesiegeField supports multi-process launching and thus allows for efficient parallel RL training. As the game natively supports multi-player gameplay, BesiegeField can naturally be applied to multi-agent RL settings. As the game Besiege (shown in Fig. 9) is built with the (mostly) opensourced Unity3D game engine2, BesiegeField is highly-customizable: the environment 1) natively supports modification of physical parameters, external forces, terrains and obstacles (e.g., stone buildings) and 2) allows for extension patches (known as mods3) to introduce other mechanisms, such as new block types, fluid simulation and many other components. Figure 9: Besiege editor view. B.1 CONSTRUCTION RULE Each machine is built by attaching new blocks to the existing structure, starting from a special root block. For convenience, we describe each construction step as an "attacher" block (child) connected to an "attachee" block (parent). As an attacher, each block has exactly one face available for connection; as an attachee, each block has none to several attachable faces. Once a face is used, it is considered occupied and cannot be reused. If, after construction, the free end of a block happens to coincide with an attachable face of an existing block, the two blocks are automatically connected. A few special blocks violate the rule described above, such as spring. These blocks have two ends and thus must have two parent blocks, do not have physical volume can be attached to either vacant or occupied faces of other blocks. Finally, each block can be rescaled and rotated after construction. Since post-construction scaling and rotation introduce unnecessary complexity into our pipeline, we exclude them from our experiments and leave their handling to future work. B.2 SIMULATION Once constructed, the machine will be placed at the designated pose indicated by the position and orientation of the starting block (not necessary near the ground, but there is a maximum height 2https://en.wikipedia.org/wiki/Unity_(game_engine) 3https://en.wikipedia.org/wiki/Video_game_modding 18 Technical Report constraint). The machine will be subject to gravity and Newtonian physical laws (rigid and elastic ones) after placement. B.3 BLOCKS Out of the 75 construction blocks provided by Besiege, we filter out a list of 27 blocks that are most relevant to build machines with classical mechanical mechanisms such as levers and trusses. • Starting Block: the root of any mechanism; initial orientation is along z+ axis. • Small Wooden Block: a cubic basic construction block. • Wooden Block: shaped like two small wooden blocks attached together. • Wooden Rod: a slender, fragile construction block. • Log: shaped like three small wooden blocks arranged in parallel. • Steering Hinge: powered; controls rotation of sub-blocks, swinging left or right along the axis perpendicular to its placement axis. • Steering Block: powered; rotates blocks along its placement axis. • Powered Wheel: radius 1m; provides ground movement. • Unpowered Wheel: identical to the powered wheel but requires external force to rotate. • Large Powered Wheel: larger version of the powered wheel (radius 3m). • Large Unpowered Wheel: unpowered version of the large powered wheel. • Small Wheel: functions like a caster wheel (e.g., shopping cart), unpowered, 1.2m long. • Roller Wheel: similar to the small wheel, but shorter (0.8m). • Universal Joint: freely rotates around its placement axis, unpowered. • Hinge: swings up and down along the axis perpendicular to its placement axis, unpowered. • Ball Joint: swings freely in all directions, unpowered. • Axle Connector: similar to a ball joint but allows unrestricted 360◦rotation. • Suspension: shaped like a wooden block, it can buffer forces from all directions. • Rotating Block: powered; motor-like block that generates torque and rotates about its local y-axis. • Grabber: grabs objects on contact and can release them. • Boulder: a large rock, loosely attached; useful for throwing. • Grip Pad: block with the highest friction. • Elastic Pad: block with the highest elasticity. • Container: typically used to hold a boulder. • Spring: can contract; one of the special blocks that can have two parent attachments (without occupying attachable faces). • Brace: reinforces structural strength. • Ballast: a heavy cubic block used as a counterweight. B.4 TASKS We define a set of tasks in which the goal is to construct machines within a designated building area to accomplish specific objectives. • Movement. Referred to as the car task in the main text, the objective is to build a machine capable of driving along tracks and traversing various terrains. • Throw. Referred to as the catapult task in the main text, the goal is to construct a machine that can launch boulders over long distances. To prevent unintended strategies (e.g., carrying the boulder instead of throwing it, or letting it roll along the ground), the building area is enclosed by a medium-height wall. 19 Technical Report • Delivery. This task requires building a machine that can transport a large stone forward across different terrains (Fig. 13). • Pick. The objective here is to design a machine that can retrieve a stone located at the bottom of a deep well (Fig. 12). For many of these tasks, we introduce multiple difficulty levels (not used in the experiments reported in this paper) to encourage progressively more sophisticated designs: • Movement and Delivery. We consider: (1) randomized terrains with stones and wooden rods (e.g., Fig. 10), (2) curved tracks (Fig. 15), and (3) obstacles such as height-limiting bars. • Throw. We design: (1) varied objectives, such as requiring the boulder to pass through an aerial ring (Fig. 14) or land precisely within a small target zone, (2) environmental factors such as wind, and (3) obstacles, including height restrictions either within the building area or along the boulder's trajectory. Figure 10: Illustration of the task car / movement on a rocky terrain, a more difficult setting compared to the environment used for the car task in our experiments. 20 Technical Report Figure 11: Illustration of the task catapult / throw. Figure 12: Illustration of the task pick. 21 Technical Report Figure 13: Illustration of the task delivery with a bump on the track. Figure 14: Illustration of the task catapult / Throw with the objective of throwing the boulder through the target ring. 22 Technical Report Figure 15: Illustration of the task car / movement with a curved track. 23 Technical Report C SEARCH STRATEGIES IN MACHINE MODIFICATION LOOPS Apart from the MCTS strategy used in the main experiments, we also evaluate two alternatives: (i) best-of-N, where we select the best-performing machine out of N candidates, and (ii) random search, which mimics best-of-N but instead selects a random candidate. For clarity, we refer to one consecutive "querier-refiner" call as a search node (consistent with our MCTS setup). Unlike classical MCTS or best-of-N, here each search node is allowed up to five retries to prevent child statistics from being too sparse. We perform R search rounds, each aiming to obtain 5 valid candidate machines (though this may fail; if fewer than 5 are found, the parent node's machine is used as a candidate). Full algorithmic details are provided in Algorithm 1, Algorithm 2, and Algorithm 3. In Fig. 9 we show the improvement of machine performance with respect to the number of search rounds used. In Fig. 16 we compare the efficiency of different search methods in our agentic compositional machine design setting. Algorithm 1 Random Search Algorithm Require: Agentic Search Node N Require: Scoring function S, machine valid check function F Require: Search Round R Ensure: The Best result with the highest score 1: Input machine ori_machine 2: max_retry ←5 3: machine_last_round ←ori_machine 4: for r = 1 to R do 5: best_score ←-∞ 6: retry ←0 7: while retry best_score then 19: best_score ←scorei 20: best_machine ←machinei 21: end if 22: end for 23: end for 24: return (best_machine, best_score) 25 Technical Report Algorithm 3 Monte Carlo Tree Search (MCTS) Require: Agentic Search Node N Require: Root node root, maximum iterations MAX_ITER Require: Select(node): Traverse tree using UCB until leaf node Require: Expand(node): Generate 4 child candidates via LLM. Validate them in parallel: keep valid ones, and regenerate invalid ones until they pass or hit the max retry limit. If all fail, use the parent node as the child. Require: Simulate(node): Besiege simulation Require: Backpropagate(node, reward): Update visit counts and rewards along path Require: BestChild(node): Return child with highest simulation score Ensure: Best action from the search tree 1: root ←s0 2: max_retry ←5 3: for i = 1 to MAX_ITER do 4: retry ←0 5: node ←Select(root) ▷Step 1: Selection 6: if node == root or node.visited then 7: should_expand ←True 8: end if 9: if not should_expand then 10: child ←node ▷Unvisited leaf node; no children yet 11: end if 12: while should_expand and not all child nodes are valid do 13: retry ←retry + 1 14: child ←Expand(node) ▷Step 2: Expansion 15: if retry ≥max_retry then 16: break 17: end if 18: end while 19: reward ←Simulate(child) ▷Step 3: Simulation 20: Backpropagate(child, reward) ▷Step 4: Backpropagation 21: end for 22: return BestChild(root) ▷Return best child 26 Technical Report Models R2 R5 R10 Mean Max Mean Max Mean Max Gemini 2.5 Pro 15.04 17.31 15.73 18.19 16.44 18.19 Claude Opus 4 8.61 9.32 9.10 9.32 9.43 9.98 o3 5.33 11.11 5.34 11.11 8.46 14.52 Qwen3-Coder-480B-A35B 5.18 6.52 5.21 6.52 5.74 6.52 Table 9: Ablation study on the effect of search depth in MCTS. R2, R5, and R10 represent the running rounds of MCTS on the same search tree. Figure 16: The variation in machine average scores with the increasing number of LLM node expansion operations under different search strategies. 27 Technical Report D ENVIRONMENT SETTINGS FOR AGENTIC DESIGN AND RL FINETUNING D.1 MACHINE REPRESENTATION To reduce complexity in compositional machine design, our machine representation assumes all blocks remain at their default scale and are not further rotated after attachment (note: the attachment operation itself may rotate blocks). D.1.1 GLOBAL POSITION-BASED REPRESENTATION By simplifying the default XML representation that BesiegeField receives, we obtain the global position-based representation. Below is a concrete example: [ {"type": 0, "Position": [0,0, 0], "Rotation": [0,0,0,1]}, {"type": 1, "Position": [0,0,0.5], "Rotation": [0,0,0,1]}, {"type": 2, "Position": [0,0,2.5], "Rotation": [0,0,0,1]}, {"type": 9, "Position": [0,0.5,2], "Rotation": [-0.707,0,0,0.707], "end-position": [0,2,0]} ] Basically, each block in the machine is independently recorded without mentioning its adjacent blocks. For most of the block types, only the block type and its pose (position + orientation) are recorded. For special blocks that have two parents, the other end has to be specified, for which the corresponding dictionary has an additional entry of "end-position". D.1.2 CONSTRUCTION TREE REPRESENTATION With our parsimonious construction tree representation, the example machine above is represented by the following the following JSON list: [ {'type': 0, 'id': 0, 'parent' : -1, 'face_id' : -1}, {'type': 1, 'id': 1, 'parent' : 0, 'face_id' : 0}, {'type': 2, 'id': 2, 'parent' : 1, 'face_id' : 0}, {'type': 9, 'id': 3, 'parent_a': 0, 'face_id_a': 4, 'parent_b': 1, 'face_id_b': 6} ] Specifically, the ordered list of dictionaries of the machine construction JSON file represents the construction order of blocks. Each dictionary contains the following information of corresponding block: 1) "type": block type; 2) "id": the order ID of this block (the same as the order in the list), included so that LLMs do not have to parse it by itself; 3) "parent", the ID of its parent block; 4) "face_id", the face of the block's parent to which the block is attached. In cases that the block has two parents (e.g., a string that connects both parts), we use "parent_a" and "parent_b" to record both parents; similar for "face_id". Note: the first block with "id" 0 is always the unique starting block, of which the local position and rotation are always zero. D.2 REWARD SETTING Here we elaborate on the reward design for RL experiments in Sec. 5.1. Our reward is in the form of R = is_valid×performance where is_valid is the boolean representing machine validity and performance is the task-specific performance metric. Car. We set is_valid to 1 as long as the policy produces a machine that can be parsed from the generated construction tree and can be successfully placed into the environment without any selfcollision; otherwise it is set to 0. performance is set to the distance between the starting position and the end position of the root block. 28 Technical Report Catapult. For is_valid to be 1 in this task, the machine has to satisfy an additional constraint compared to the car task: the maximum height of the boulder position during simulation must be greater than a threshold of 3m. As explained in the main text, performance for catapult is the product of the maximum height and maximum distance (towards some pre-defined direction) during simulation. D.3 ENVIRONMENT FEEDBACK In principle, we are able to obtain all state variables of each single part of a simulated machine. Due to the space complexity of the simulation results, not all information can be fed to LLM agents. Here we consider a minimal set of environment feedback information that the environment querier always gathers and returns to the refiner. Below are the minimal set of feedback information for each task: Car. 1) machine orientation; 2) machine maximum moving distance (towards a designated direction); 3) machine max speed; 4) machine average speed per second; 5) machine position per 0.2 second (atomic time). Catapult. 1) boulder maximum distance (horizontal, towards a designated distance); 2) boulder maximum height; 3) boulder position per 0.2 second (atomic time). Beyond these basic pieces of feedback, the querier, after seeing the candidate machine and its simulation results, in our default setting selectively extract a subset of environment feedback given its speculation on the issues of the simulated machine. For instance, parts during simulation may collide with each other and break. Such behavior carries important hints on why machines fail, and an LLM agent with sufficient capability in spatial and physics understanding can possibly identify the vulnerable blocks in the design. Below we elaborate on the additional information that the querier may gather: • Block index to query; • Time interval of interest (states outside this interval will not be considered); • Feedback type of interest (one or more from the list) - Position; - Orientation; - Velocity; - Length (for spring only) 29 Technical Report E CHALLENGES IN COMPOSITIONAL MACHINE DESIGN E.1 FAILURE PATTERNS Generated machines often fail in systematic ways. As shown in Fig. 17, we observe several recurring categories of errors, including flawed reasoning, structural attachment errors, incorrect part orientations and failures in instruction following. These diverse failure types highlight both the reasoning and execution challenges inherent in compositional machine design. Modification Description: ... 2. ... Then, I will add a ballast on the opposite side of the rotating block to balance the structure and prevent tipping. (b) Incorrect parents (c) Incorrect part orientations Modification Description: 1. ... Now the arm facing x+. Re-orient the Rotating Block to the right side of block 1... so the arm will face z+ ... Modification Description: 1. ... Add a Wooden Block (Type ID: 1) to id 1 in face_id 2 to support the container. ... (d) Instruction following failures Modification Description: ... 2. Add a Small Wooden Block (ID 15) straight under the Starting Block (0, face_id 5) so the root touches the ground and anchors the frame. (a) Flawed high level reasoning Figure 17: Examples to illustrate failure patterns. In each example, the original machine is shown on the left and the modified machine on the right. Failure patterns are sampled from Qwen3-Coder-480B-A35B-Instruct. E.2 NEED FOR PRECISION In Fig. 18 we present a simple example to illustrate how the task of compositional machine design requires high precision in the spatial design of configurations of different parts. Even though the high-level design is feasible, the machine in the top row fails to throw the boulder out due to the incorrect position of the container. E.3 APPEARANCE VS. PERFORMANCE As illustrated in Fig. 19, a machine's appearance does not necessarily reflect its actual performance. A design that seems well-aligned with human intuition can fail dramatically, while one that looks awkward or unintuitive may achieve superior results. For LLMs to design machines that are both effective and visually intuitive to humans, reward functions must account not only for task performance but also for stability and other factors that shape human perception of functionality. 30 Technical Report Figure 18: Illustration of how machines built with feasible high-level designs may fail due to inaccurate part placement. Machine sampled from Gemini 2.5 Pro. Left: designed machines; Right: simulation results. Simulation Time Figure 19: Boulder-throwing trajectories for various machine designs generated by Gemini 2.5 Pro. From left to right, each row first shows the machine design, followed by a time-lapsed bird's-eye view of its throw. 31 Technical Report F SETTINGS FOR RL FINETUNING F.1 COLD-START DATASET CURATION Noticing that Gemini 2.5 Pro produces the most satisfactory machines with reasonable CoT, we adopt the single-agent generation setting and collect Gemini-generated machines along with their CoT. We curated 100 design objectives: 75 captions of machines created by Besiege players from the Internet and 25 authored by us. These 25 prompts are constructed by 1) first writing down simple design objectives that are realizable by BesiegeField and can emerge from some simple rewards, and 2) then introducing environment constraints and machine-specific requirements. Using this prompt dataset, we generate 250 machines per prompt, and after filtering out inappropriate ones (those that fail to parse, cannot be built in the environment, or do not have a specific physics-driven functionality, e.g., a statue), we obtain 9,984 machines with their corresponding CoT. A sample gallery is shown in Fig. 20. We present examples in the curated prompt set: 1. Build a machine that can provide an exciting spinning amusement ride experience. 2. Build a machine that can mimic the movements of a humanoid figure for entertainment or functional demonstrations. 3. Build a machine that can glide smoothly over snow or ice. Below we present the text prompts with our simple authoring strategy, which can possibly be scaled with LLMs: -Additional Environment Constraints1. On an uneven, bumpy straight road, build a small car that must travel in a straight line to the finish. 2. On a straight road stands a stone wall; build a battering ram that must accelerate straight ahead, smash the wall, and finish with minimal damage to the machine. -Modified Demands for Target Machines1. Build a tall tower that must keep a heavy block (id 36) at 15 m height for 5 s without collapse. 2. On a straight road stands a 10 m high wall; build a siege ladder that must advance, extend its top above the wall, and remain upright throughout. F.2 COLD-START DETAILS In our experiment, we use Qwen2.5-14B-Instruct as the base model and train it on the Gemini synthesized dataset. To save GPU memory, we employ the parameter-efficient quantized OFT (QOFT) technique (Qiu et al., 2025c; 2023; Liu et al., 2024a; Qiu et al., 2025a) for updating the model parameters, with OFT block size 64. We use 8-bit training with the 8-bit AdamW optimizer implmented with bitsandbytes (Dettmers et al., 2022), a learning rate of 1e-6 and a linear warmup schedule (3% of the total training steps). F.3 RL EXPERIMENT DETAILS We use verl framework to implement our RL experiments. The LLM is finetuned from Qwen2.514B-Instruct (with LoRA of rank 64 on all linear layers) using the Gemini-synthesized dataset described above. We set learning rate to 5e-6 with gradient clipping threshold set to 0.5. The GRPO advantage estimator uses an advantage clipping ratio of 0.2. We add a KL penalty (weight 0.001) with respect to the pretrained LLM and do not introduce any entropy regularization. For rollouts, we use a temperature of 1.0 and top-p value of 0.95. Maximum input and output lengths are 3440 and 1168 tokens, respectively. We train each model for 400 update steps which take approximately 48 hours. 32 Technical Report Car "Catapult" Misc Machines Carousel Robot "Snake Crawling" Machines Figure 20: Examples of Gemini-synthesized machines. 33 Technical Report G ADDITIONAL ABLATION STUDIES Meta-Designer in hierarchical design. In Table 10, we show that how a meta-designer for hierarchical design may benefit compositional machine design. Leveraging the knowledge on existing machines, meta-designers can identify the key macro-level mechanical components that are easier to design compared to the whole task, as shown in the results for Gemini 2.5 Pro, Kimi K2 and Llama 4 Scout. However, introducing an additional stage can introduce compounding error and, if the LLM agent is not capable of integrating different macro-level mechanical components, they may lead to lower scores, which we hypothesize is the reason for the failure of hierarchical design in models like Qwen3. Moreover, we examine if the meta-designer should provide step-by-step building instruction for the designer (Fig. 21), or simply provide high-level mechanical component descriptions. We find that a meta-designer that provides more detailed information is beneficial mostly when the base model is powerful enough (e.g., Gemini 2.5 Pro). Effect of feedback-free self-critic. In Table 11, we show that the inspector agent which does selfcritic before running any environment simulation tend to improve performance for models like Gemini 2.5 Pro (the most powerful model for the task of compositional machine design in BesiegeField) but can fail drastically for models like o3. Effect of active feedback queries. In Table 12, we show that the active queries on the environment feedbacks help most of the models achieve better performance, compared to the setting with no environment feedback and that with only environment simulation final scores. Additional RL results. In Fig. 22 and 24 and , we show the maximum scores achieved in the environments with different RL methods plus the validity rate of machines. We visualize the maximum score since, in the case when one is allowed to use inference-time scaling techniques, the best performing machines are the ones people care most about. We show that our settings with Pass@64 training achieves the best maximum score with two different random seeds. In additiona, in Fig. 26, we visualize the corresponding Best@N metrics. For completeness, we also visualize the results with our default setting on the task car in Fig. 23, 25 and 27. Meta Designer: "Machine Structure": { ... "Base Frame": { ... "Guidance": "Wooden Blocks (ID 1) and Small Wooden Blocks (ID 15) are used to build a wide, heavy base that prevents tipping. They also form a tower to elevate the pivot point and a physical stop to arrest the arm's rotation, which is crucial for the release mechanism." }, "Throwing Mechanism ": { ... "Guidance": "A Rotating Block (ID 22) provides the high-speed rotational power. This is attached to a Log (ID 63), which acts as a long, robust lever arm. A Container (ID 30) is placed at the end of the arm to hold the Boulder (ID 36) projectile." } } Detailed Meta Designer: "Machine Structure": { ... "Base Frame": { ... "Guidance ": "First, build the foundation. Attach a Log (63) to the left side of the Start Block (0) and another Log to the right side to create a wide horizontal base. Then, place a Small Wooden Block (15) on top of the Start Block. This will elevate the pivot point of the throwing mechanism for a better launch angle and provide a central connection point." }, "Throwing Mechanism": { ... "Guidance": "On top of the Small Wooden Block from the Base Frame, place a Rotating Block (22). This will serve as the powered pivot. Attach a Log (63) to the rotating part of the Rotating Block, positioning it as the throwing arm. At the far end of this Log, attach a Container (30) with its opening facing upwards. Finally, place the Boulder (36) inside the Container." } } Figure 21: Construction guidance comparison of Meta Designer and Detailed Meta Designer, sampled with Gemini 2.5 Pro. 34 Technical Report Models Machine Validity (pass/total) Designer Score File Valid 3D Valid Final Valid Mean Max Baseline (Meta-Designer & Designer) Gemini 2.5 Pro 8/8 5/8 5/8 8.49 9.14 o3 3/8 3/3 3/8 0 0 Qwen3-Coder-480B-A35B 8/8 6/8 6/8 0.75 4.5 Doubao Seed 1.6 7/8 3/7 3/8 4.34 4.37 Claude Opus 4 8/8 4/8 4/8 4.17 4.36 DeepSeek-V3 7/8 7/7 7/8 0 0 Kimi K2 6/8 6/6 6/8 7.18 12.02 Llama 4 Scout 17B 16E 8/8 8/8 8/8 3.59 11.83 Single Agent Gemini 2.5 Pro 6/8 3/6 3/8 6.13 9.00 o3 8/8 8/8 8/8 2.87 5.22 Qwen3-Coder-480B-A35B 8/8 4/8 4/8 3.5 9.24 Doubao Seed 1.6 7/8 6/7 6/8 4.24 8.2 Claude Opus 4 8/8 2/6 2/8 4.76 4.91 DeepSeek-V3 7/8 6/7 6/8 4.67 4.86 Kimi K2 8/8 3/8 3/8 6.85 9.05 Llama 4 Scout 17B 16E 8/8 7/8 7/8 3.63 5.64 w/ detailed Meta-Designer & Designer Gemini 2.5 Pro 8/8 7/8 7/8 9.19 11.94 o3 7/8 6/7 6/8 0.92 1.18 Qwen3-Coder-480B-A35B 8/8 2/8 2/8 4.87 4.87 Doubao Seed 1.6 0/8 - - - - Claude Opus 4 7/8 5/7 5/8 4.13 4.79 DeepSeek-V3 8/8 8/8 8/8 6.12 9.0 Kimi K2 8/8 0/8 - - - Llama 4 Scout 17B 16E 8/8 7/8 7/8 4.01 6.93 Table 10: Ablation study on the meta-designer. Machine validity is evaluated in two aspects: file validity, 3D validity. Note that 3D validity requires the machine to first pass file validity. Final validity refers to a fully valid machine (satisfying both file and 3D validity). The mean simulation score is calculated based solely on the final valid outputs. Detailed Meta-Designer provides more concisely, step-by-step construction guidance to the Designer. Compared to Baseline, Single Agent is slightly harder to construct valid machines, but the simulation scores are better. Detailed Meta-Designer improves both metrics, but requires LLMs to have a strong 3D understanding and a large context window. The comparison between Meta-Designer and Detailed Meta-Designer is illustrated in Fig. 21. 35 Technical Report Models Blind Refinement Simulation Scores Baseline w/o Inspector Valid Mean Max Valid Mean Max Gemini 2.5 Pro 5 8.18 11.07 5 5.67 9.37 o3 3 0 0 3 3.08 9.24 Qwen3-Coder-480B-A35B 6 1.61 5.0 6 0.75 4.51 Doubao Seed 1.6 3 0.31 0.49 3 0.47 1.41 Claude Opus 4 2 5.38 5.8 4 5.20 8.25 DeepSeek-V3 7 0.98 3.18 7 0.38 2.16 Kimi K2 2 5.29 7.36 6 2.31 8.91 Llama 4 Scout 17B 16E 0 - - 0 - - Table 11: Ablation study on inspector agentic design. The mean simulation score is calculated based solely on the valid machines after blind refinement. Removing the inspector from the agentic flow lowers the blind refiner's mean performance on LLMs with weaker 3D understanding, while barely affecting other models. Models Refiner Simulation Scores Baseline w/o Env Querier Score Only std Mean Max std Mean Max std Mean Max Gemini 2.5 Pro 2.47 15.73 18.19 4.18 14.89 19.77 2.05 9.68 13.18 o3 5.36 5.34 11.11 4.24 4.06 8.55 2.76 7.05 10.24 Qwen3-Coder-480B-A35B 0.95 5.21 6.52 2.32 4.05 6.89 2.53 2.81 5.56 Claude Opus 4 0.31 9.10 9.32 0.82 8.50 9.08 0.42 5.75 6.05 Doubao Seed 1.6 0.23 4.62 4.76 0.08 4.89 4.94 0.29 4.79 5.05 DeepSeek-V3 0.09 4.82 4.93 1.96 4.37 6.00 1.91 2.76 5.15 Table 12: Ablation study on the environment querier agent. For the refiner, the baseline includes simulation scores, basic environment feedback, and querier-required feedback. The "w/o env querier" setting provides only simulation scores and basic environment feedback. In the "pure score only" setting, only simulation scores are provided. Removing the environment querier causes a slight drop in average machine performance. With reward signals only, the performance markedly degrades across most LLMs. Figure 22: Catapult task machine scores across RL steps. KL regularization helps the model discover better structure designs. Pass@64 is greatly more efficient at uncovering powerful machine designs. Pass@8 (roll-out 8) outperforms Pass@1 (roll-out 64) in efficiency and matches its performance with fewer roll-outs. No cold start models lack the advanced knowledge needed to find better machines. 36 Technical Report Figure 23: Car machine scores across RL steps. The RL finetuning hyperparameter setting is the same as the base hyperparameter setting of Catapult. Machine performance slightly rises as training steps increase. Figure 24: Catapult task machine validity rate and reward non-zero rate across RL steps. The machine validity rate refers to the proportion of machines that can successfully run simulations. The reward non-zero rate represents the ratio of machines that can simulate with a non-zero reward. LLM constructs more legal machines as training steps increase, and rewards non-zero machines. Pass@8 and Pass@1 converge early. "No KL" fills roll-outs with failure cases, slowing performance gains. "No cold start" lacks design knowledge, encounters more failures than no KL, and improves validity rate most slowly. The base setting balances convergence and performance improvement. Figure 25: Car task machine validity rate and reward non-zero rate across RL steps. The machine validity converges early and remains stable during further training. 37 Technical Report Figure 26: Catapult task. Average Best@N metric. At each test step, the LLM generates 64 samples, selects the top N samples, and records the maximum score. This process is repeated 1,000 times, and the mean value is calculated. Base settings (both seeds) dominates Best@N performance; excluding base settings, "no KL" dominates the rest. Pass@1 and Pass@8 spawn only a handful of high-performance machines. No cold start produces machines of more average quality. 38 Technical Report Figure 27: Car task. Mean Best@N metrics. Similar to the machine validity rate, the Best@N performance increases quickly and remains stable in rest training periods. 39 Technical Report H GENERATED SAMPLES H.1 FROM RL-FINETUNED MODELS Here, we present some of the best RL samples from rollouts, as well as examples from the agentic workflow. Fig. 28 displays the RL rollout samples, while Fig. 29 illustrates the agentic workflow samples. 5.59 6.81 10.65 12.06 12.54 12.72 13.64 15.25 15.89 17.19 18.71 19.18 19.23 19.43 19.63 19.69 21.69 20.67 21.16 21.52 21.55 21.93 23.13 23.26 23.89 Figure 28: Qwen2.5-14B-Instruct cold started RL model catapult task sample from roll-out. Throwing distances are labeled on the bottom-right corner of the image. 40 Technical Report H.2 FROM AGENTIC WORKFLOW 3.04 4.98 5.95 7.27 9.98 4.75 6.92 8.63 9.10 13.78 0.22 5.52 14.99 16.32 19.77 11.91 0.10 13.69 14.01 15.07 16.64 10.93 13.70 15.10 17.26 Qwen3-Coder-480B-A35B-Instruct Doubao Seed 1.6 o3 Gemini 2.5 Pro Claude 4 Opus Figure 29: The LLM inference gallery of machine-generated samples. The rows, from top to bottom, were inferred by the following models, respectively: Claude 4 Opus, Gemini 2.5 Pro, o3, Doubao Seed 1.6, and Qwen3-Coder-480B-A35B-Instruct. Throwing distances are labeled on the bottom-right corner of the image. 41 Technical Report I RELATIONS BETWEEN COT AND MACHINES To further investigate if high-level machine blueprint or low-level part placement is more important, we experiment with machine generation of LLMs by generating machine details conditioned on Gemini-generated CoT (instead of on CoT produced by themselves). We find that with Gemini CoT, almost all LLMs design machines that are more visually similar to "catapults", as shown in Fig. 30. We therefore hypothesize that the major gap between other LLMs, especially open-source ones, and Gemini 2.5 Pro is the abstract-level spatial and physics reasoning on machine designs. o3 Kimi K2 Claude Opus 4 Conditioned on its own CoT Conditioned on Gemini-generated CoT Figure 30: Comparison between generated machines conditioned on their own CoT or Gemini-generated CoT. 42 Technical Report J COT SAMPLES FROM GEMINI 2.5 PRO J.1 SINGLE AGENT system: user: Design a machine to throw a boulder (type id 36) in a parabolic trajectory. You must use exactly one boulder. Optimize for throw distance , height, and speed. Apply the principle of leverage correctly, balancing the weight of the projectile. assistant: **Analyzing Catapult Components** I'm now building a catapult with a rigid frame using 'Wooden Block's to ensure stability. I plan to use a 'Log' as the throwing arm, attached to a constructed pivot point. The 'Container' will hold the boulder at the arm's end. **Devising Launch Mechanisms** I'm reconsidering my approach. With the provided constraints, a trebuchet is out. The single boulder and lack of hinges necessitate a catapult. I must leverage the 'Rubber Band'. My plan is to design a tension-based catapult for the boulder, prioritizing stability and throwing arm efficiency. **Optimizing for Distance** I'm now focused on the tension-powered catapult. My plan is for a solid base, a pivot, a lever arm, a boulder-holding container, and rubber bands . I'm working on the arm's length and the positioning of the rubber bands for maximum launch power and the best trajectory for the boulder. **Assessing the Lever Arm** I'm now working on the arm, crucial for force. I must decide on its length and how it attaches to the pivot. I'm considering different attachment methods to create a stable but rotatable lever, exploring options for maximum launch power and height. **Evaluating Construction Strategies** I'm stuck! I need to use a catapult given the constraints. But, the lack of a hinge restricts my options. I'm exploring different ways to connect the throwing arm to the frame and leverage the 'Rubber Band' for propulsion. **Designing a Hybrid Catapult** I've sketched out the structure, pivoting on the 'Rotating Block', with a base and uprights. The throwing arm will comprise a 'Log', with a ' Container' at the tip. I am struggling with the rubber band attachment. I will have to ensure the right angles and tension. **Evaluating Rubber Band Placement** I'm now focusing on the final adjustments. With the core design finalized , the key is the 'Rubber Band' placement. The goal is maximizing launch power by carefully considering the band's attachment points and the angle of force application. I aim to achieve a long throw with this arrangement. **Evaluating Lever Arm Placement** I'm now focused on the throwing arm's construction and its link to the rotating pivot. I've built the base and the pivot structure. The lever arm's length and the location for the 'Container' are my next priorities. The rubber band attachments also need to be evaluated. 43 Technical Report **Engineering the Launch Mechanism** I'm now focused on a tension-based catapult. It's time to build, starting with a solid base and a stable pivot point using the 'Rotating Block'. My priorities are a strong throwing arm, the right 'Container' for the boulder, and, critically, the strategic placement of the 'Rubber Band's. **Refining the Design** I'm now focused on the core catapult design. The need for a simple build, given the constraints, favors a straightforward tension catapult. I am planning a solid base, a lever arm (most likely a 'Log'), a container for the boulder, and the crucial application of 'Rubber Band's to generate the launch force. **Finalizing Catapult Design** I've spent time analyzing the limitations and refining the catapult build . I've now decided on the layout. The catapult will be built around a sturdy frame, a pivot using a 'Rotating Block', an arm (lever), and the utilization of the 'Rubber Band' for propulsion. I am ready for the final assembly. **Constructing the Base** I'm now focused on the basic frame. I've decided to construct it from logs for stability. The goal is a sturdy structure to anchor the catapult 's components, primarily the 'Rotating Block' and lever arm. I'm focusing on the geometry. **Analyzing Leverage Application** I'm now focused on catapult design. Given the one-boulder rule and lack of hinges, I'll focus on a lever-based catapult. I am now determining the lever arm length. **Building a Lever Catapult** I'm planning a basic tension-based catapult using the 'Rotating Block' as a pivot. I'm focusing on the throwing arm's dimensions, the 'Rubber Band ' placement to maximize launch power, and the 'Container' block to hold the boulder. **Constructing Launch Support** I'm now focused on the basic design. With the user's need to launch a single boulder, a tension-based catapult is optimal. I'll build it around the 'Rotating Block'. I'm using a 'Log' for the arm and planning a base, rubber band connections, and the boulder's container. **Planning the Build** I'm streamlining the design. The key will be a stable base, a 'Rotating Block' for the pivot, and a lever arm. I'll use a 'Container' to hold the boulder, and the 'Rubber Band's will be the power source. Now I am evaluating the dimensions and the best placement of each part. **Constructing the Launching Mechanism** I'm designing a simplified catapult, keeping the focus on its functionality and the user's requirement for a single boulder and the available blocks. I plan to use a 'Rotating Block' as the pivot. The rest of the design must center around this 'Rotating Block' component. **Constructing the Catapult** I've established the foundation of the catapult, focusing on a solid base and pivot mechanism. The challenge now is to construct the lever arm and connect it effectively to the 'Rotating Block', while also determining how to hold the boulder. I will prioritize a design maximizing force transfer for launch. **Constructing a Lever Arm** I'm solidifying the basic build: a catapult, powered by rubber bands, given the constraints. The next step is constructing the lever arm, 44 Technical Report deciding its dimensions and how it will attach to the pivot point for optimal force transfer. **Constructing the Final Catapult** I've finalized the construction plan. The tension-based catapult, using the 'Rotating Block' for the pivot and a lever arm, is my focus. I will use a 'Container' for the boulder, and the 'Rubber Band' for launching, paying close attention to force application and frame design. **Outlining Catapult Construction** I've sketched out the build, focusing on a tension-powered catapult. A base, a pivoting lever arm, and a boulder holder are planned. My design will depend on a stable foundation and the proper application of rubber bands for generating enough force for distance and height. **Constructing the Catapult** I've arrived at a straightforward plan for a tension-powered catapult, using the available blocks. I'm focusing on a design to maximize launch potential. The base, pivot point, arm, and rubber bands will be the key components for an efficient, functional build. **Devising the Final Catapult** I've finalized the core design. I am now sure of a tension-based catapult ; the concept is simple and effective. My focus is now on the construction idea and creating the blocks. I am convinced it will be a simple build and the machine can leverage the 'Rotating Block' and ' Rubber Band' to meet the user's launch goals. **Constructing a Launching System** I'm finalizing the blueprint now. The tension-based catapult will use a base, a pivot with the 'Rotating Block', and a lever arm for the boulder. I'll ensure maximum launch power through strategic rubber band placement and precise measurements. I need a strong yet streamlined design. **Constructing a Functional Catapult** I've finalized the plan. The design uses a tension-based catapult, with a "Rotating Block" for a pivot. A sturdy base with two Log arms ensures stability, while two Rubber Bands generate the launch force. The boulder holder is a "Container", and the system prioritizes lever arm efficiency for maximum height and distance. **Formulating Final Design** I'm now certain: a tension-based catapult, incorporating the 'Rotating Block' for the pivot, will meet the user's needs. The goal is maximum launch power within the given constraints. I'll sketch out the construction, outlining the key blocks and how they fit together. I've a solid plan. Construction Idea: I will design a tension-powered catapult to launch the boulder. The design prioritizes a long throwing distance and height by utilizing a long lever arm. The machine has four main parts: 1. **Base Frame:** A T-shaped base, constructed from a central 'Wooden Block' and two 'Log's extending sideways, provides a wide, stable footprint to handle the throwing forces and prevent the machine from tipping over. 2. **Support Tower:** Two 'Log's rise vertically from the ends of the base, creating a tall, sturdy support structure for the pivot axle. 3. **Lever Arm Assembly:** This is the core of the catapult. A 'Rotating Block' is used as a functional hinge/axle, mounted on one of the support uprights. A 'Small Wooden Block' acts as a hub on this axle, with a long 'Log' attached pointing up (the throwing arm) and a 'Small Wooden Block' attached pointing down (the short lever). This structure creates the 45 Technical Report necessary leverage. At the end of the throwing arm, a 'Container' is mounted via an adapter block to hold the 'Boulder'. 4. **Power System:** Two 'Rubber Band' blocks provide the propulsive force. They connect from a forward-projecting 'Log' on the base to the short lever arm. The stretched rubber bands contract rapidly, pulling the short arm down and forward, which in turn whips the long throwing arm upwards and forwards, launching the boulder in a high parabolic trajectory. This design maximizes the lever ratio to achieve high velocity for the projectile, while the robust frame maintains stability during the launch. necessary_blocks '''[1, 63, 22, 15, 30, 36, 9]''' JSON: '''json [ {"type": 0, "id": 0, "parent": -1, "face_id": -1} {"type": 1, "id": 1, "parent": 0, "face_id": 1} {"type": 63, "id": 2, "parent": 1, "face_id": 1} {"type": 63, "id": 3, "parent": 1, "face_id": 3} {"type": 63, "id": 4, "parent": 2, "face_id": 6} {"type": 63, "id": 5, "parent": 3, "face_id": 6} {"type": 22, "id": 6, "parent": 4, "face_id": 4} {"type": 15, "id": 7, "parent": 6, "face_id": 3} {"type": 63, "id": 8, "parent": 7, "face_id": 3} {"type": 15, "id": 9, "parent": 7, "face_id": 4} {"type": 15, "id": 10, "parent": 8, "face_id": 0} {"type": 30, "id": 11, "parent": 10, "face_id": 3} {"type": 36, "id": 12, "parent": 11, "face_id": 0} {"type": 63, "id": 13, "parent": 0, "face_id": 0} {"type": 9, "id": 14, "parent_a": 13, "face_id_a": 1, "parent_b": 9, " face_id_b": 1} {"type": 9, "id": 15, "parent_a": 13, "face_id_a": 3, "parent_b": 9, " face_id_b": 2} ] ''' J.2 META DESIGNER system: user: Design a machine to throw a boulder (type id 36) in a parabolic trajectory. You must use exactly one boulder. Optimize for throw distance , height, and speed. Apply the principle of leverage correctly, balancing the weight of the projectile. meta-designer output: '''json { "definition": "Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage.", "function_points": [ "Establish a stable foundation to withstand recoil and prevent tipping.", "Implement a powered lever arm for projectile propulsion.", "Carry and release a boulder projectile at the apex of the arm's swing." ], 46 Technical Report "design_structure": [ { "function_name": "Base and Support Frame", "description": "A wide, static frame built from the Start Block. It provides stability for the machine and includes a vertical support to mount the rotating mechanism, as well as a stop block to halt the throwing arm's motion.", "related_function_points": [ "Establish a stable foundation to withstand recoil and prevent tipping." ] }, { "function_name": "Powered Throwing Arm", "description": "A composite structure consisting of a powered pivot, a long lever arm, and a projectile holder. The Rotating Block provides rapid rotational force, the Log acts as the lever to amplify speed, and the Container holds the Boulder, releasing it via inertia.", "related_function_points": [ "Implement a powered lever arm for projectile propulsion .", "Carry and release a boulder projectile at the apex of the arm's swing." ] } ], "build_order": [ "Base and Support Frame", "Powered Throwing Arm" ], "machine_structure": { "Base and Support Frame": { "type": [ 1, 15 ], "guidance": "Wooden Blocks (ID 1) and Small Wooden Blocks (ID 15) are used to build a wide, heavy base that prevents tipping. They also form a tower to elevate the pivot point and a physical stop to arrest the arm's rotation, which is crucial for the release mechanism." }, "Powered Throwing Arm": { "type": [ 22, 63, 30, 36 ], "guidance": "A Rotating Block (ID 22) provides the high-speed rotational power. This is attached to a Log (ID 63), which acts as a long, robust lever arm. A Container (ID 30) is placed at the end of the arm to hold the Boulder (ID 36) projectile." } } } ''' J.3 DESIGNER system: designer-layer1 input: 47 Technical Report On the basis of the existing structure, generate [Base and Support Frame ], [A wide, static frame built from the Start Block. It provides stability for the machine and includes a vertical support to mount the rotating mechanism, as well as a stop block to halt the throwing arm's motion.], as ['Establish a stable foundation to withstand recoil and prevent tipping.'], using block type[1, 15], [Wooden Blocks (ID 1) and Small Wooden Blocks (ID 15) are used to build a wide, heavy base that prevents tipping. They also form a tower to elevate the pivot point and a physical stop to arrest the arm's rotation, which is crucial for the release mechanism.] designer-output-layer1: Based on the existing structure, a Base and Support Frame was generated to establish a stable foundation to withstand recoil and prevent tipping. Ultimately, the block types [1, 15] were decided upon. No blocks were removed as they were all suitable for the design. JSON: '''json [ {"type": "0", "id": 0, "parent": -1, "face_id": -1} {"type": "1", "id": 1, "parent": 0, "face_id": 1} {"type": "1", "id": 2, "parent": 0, "face_id": 2} {"type": "1", "id": 3, "parent": 0, "face_id": 0} {"type": "1", "id": 4, "parent": 1, "face_id": 2} {"type": "1", "id": 5, "parent": 2, "face_id": 4} {"type": "1", "id": 6, "parent": 0, "face_id": 3} {"type": "1", "id": 7, "parent": 6, "face_id": 6} {"type": "1", "id": 8, "parent": 7, "face_id": 6} {"type": "15", "id": 9, "parent": 8, "face_id": 0} ] ''' designer-layer2: On the basis of the existing structure, generate [Powered Throwing Arm], [A composite structure consisting of a powered pivot, a long lever arm, and a projectile holder. The Rotating Block provides rapid rotational force, the Log acts as the lever to amplify speed, and the Container holds the Boulder, releasing it via inertia.], as ['Implement a powered lever arm for projectile propulsion.', "Carry and release a boulder projectile at the apex of the arm's swing."], using block type [22, 63, 30, 36], [A Rotating Block (ID 22) provides the high-speed rotational power. This is attached to a Log (ID 63), which acts as a long, robust lever arm. A Container (ID 30) is placed at the end of the arm to hold the Boulder (ID 36) projectile.] designer-output-layer2: Based on the existing structure, a Powered Throwing Arm was generated as Implement a powered lever arm for projectile propulsion.,Carry and release a boulder projectile at the apex of the arm's swing.. Ultimately, the block types [22, 63, 30, 36] were decided upon, while [] were removed due to []. JSON: '''json [ {"type": "0", "id": 0, "parent": -1, "face_id": -1}, {"type": "1", "id": 1, "parent": 0, "face_id": 1}, {"type": "1", "id": 2, "parent": 0, "face_id": 2}, {"type": "1", "id": 3, "parent": 0, "face_id": 0}, {"type": "1", "id": 4, "parent": 1, "face_id": 2}, {"type": "1", "id": 5, "parent": 2, "face_id": 4}, {"type": "1", "id": 6, "parent": 0, "face_id": 3}, {"type": "1", "id": 7, "parent": 6, "face_id": 6}, {"type": "1", "id": 8, "parent": 7, "face_id": 6}, {"type": "15", "id": 9, "parent": 8, "face_id": 0}, 48 Technical Report {"type": "22", "id": 10, "parent": 8, "face_id": 4}, {"type": "63", "id": 11, "parent": 10, "face_id": 3}, {"type": "30", "id": 12, "parent": 11, "face_id": 0}, {"type": "36", "id": 13, "parent": 12, "face_id": 0} ] ''' J.4 INSPECTOR system: inspector input: Task Introduction Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage. JSON Information 3D Information Questions inspector-output: { "Coordinate System": "Left-Handed System", "Up": "y", "Right": "x", "Forward": "z", "Frontmost Block": {"3": [0.0, 0.0, 1.5]}, "Rearmost Block": {"9": [2.0, 2.0, -3.0]}, "Leftmost Block": {"5": [-2.0, 0.0, 1.5]}, "Rightmost Block": {"7": [2.0, 1.5, 0.0]}, "Topmost Block": {"13": [1.0, 7.4, -2.0]}, "Lowest Block": {"0": [0, 0, 0]} } 1. **Dynamic Blocks Analysis:** The only dynamic block in this machine is the 'Rotating Block' [10]. * **Position and Orientation**: The 'Rotating Block' [10] is located at center coordinates [1.0, 2.0, -2.0] with a global orientation of 'x-'. * **Analysis**: a. **Impact on the machine**: The 'Rotating Block' [10] serves as the pivot for the throwing arm. It is the core component intended to generate the throwing motion. b. **Direction of force**: The block is oriented along the 'x-' axis, meaning its rotation axis is the global x-axis. When activated, it will generate a torque, causing its sub-blocks to rotate in the y-z plane . c. **Impact on sub-blocks and machine**: The 'Rotating Block' [10] will cause its child blocks ([11], [12], [13]) to swing upwards and forwards in a powerful arc. Simultaneously, it will exert an equal and opposite reaction torque on its parent block [8] and the entire support structure, which will likely cause the machine to violently tip over or break. 49 Technical Report 2. **Special Static Blocks Analysis:** The special static blocks are the 'Container' [12] and the 'Boulder' [13]. * **Container [12]**: It is located at the end of the throwing arm ('Log' [11]) with its center at [1.0, 6.9, -2.0]. Its orientation is 'y+' (upwards). This placement and orientation are rational, as it correctly positions it to hold the 'Boulder' [13] like a cup at the end of the throwing arm. * **Boulder [13]**: It is placed inside the 'Container' [12] with its center at [1.0, 7.4, -2.0]. As the projectile, its placement is logical and necessary for the machine's function. 3. **Balance Check (self-gravity):** a. **Center of Gravity**: The calculated center of gravity for the entire machine is approximately at (0.85, 4.22, -1.46). The block closest to this point is the 'Log' [11]. b. **Sinking/Collapse**: Yes, parts of the machine will collapse. The entire throwing mechanism, including the heavy 'Boulder' [13] (mass 5), rests on a tall, slender support tower made of single 'Wooden Block's ([6], [7], [8]). This structure is extremely fragile and not braced. The machine is severely top-heavy, and the immense weight concentrated on this single tower will cause it to bend and collapse under gravity alone, even before the 'Rotating Block' [10] is activated. 4. **Comprehensive Analysis:** a. **Force Summary**: The primary force is gravity, which makes the top-heavy machine highly unstable due to the high position of the ' Boulder' [13]. Upon activation, the 'Rotating Block' [10] will generate a massive torque on the throwing arm and an equal reaction torque on the flimsy support tower. The combination of static top-heaviness and dynamic reaction forces will lead to immediate structural failure. b. **Logically Unreasonable Blocks**: * **Hierarchical Structure**: 'Starting Block' [0] -> 'Wooden Block' [6] -> 'Wooden Block' [7] -> 'Wooden Block' [8]. * **Reason for Unreasonableness**: The support tower consisting of blocks [6], [7], and [8] is fundamentally flawed. It is a single, unbraced column tasked with supporting the entire throwing arm and the heavy 'Boulder' [13]. This design is far too weak to handle the static weight, let alone the powerful dynamic forces and reaction torque from the 'Rotating Block' [10]. The machine is guaranteed to collapse. 1. Problem description, involving blocks: [6, 7, 8], belonging to structure: "Base and Support Frame" The support tower for the throwing mechanism, composed of blocks [6], [7], and [8], is critically weak. It is a single, unbraced column made of wooden blocks, which is completely inadequate to support the massive weight of the throwing arm and the 'Boulder' [13]. The structure will collapse under its own weight or instantly shatter upon activation of the 'Rotating Block' [10] due to the immense reaction torque. J.5 ENVIRONMENT QUERIER system: environment-querier input: Task Introduction Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage. 50 Technical Report Mechanical JSON Structure 3D Information Environment Feedback machine damaged: machine parts DoubleWoodenBlock order_id:8 occurred at 0.4 sec block order_id 13 boulder throwing distance 5.63 boulder max height 7.3 boulder actual position in first 5 seconds [1.0, 7.01, -1.5], [0.84, 7.34, -2.44], [0.64, 5.69, -6.2], [0.61, 3.28, -9.97], [0.57, 1.68, -10.14], [0.38, 1.78, -9.45], [0.17, 1.72, -8.76], [-0.03, 1.52, -8.13], [-0.22, 1.22, -7.61], [-0.36, 0.85, -7.23], [-0.47, 0.48, -6.98], [-0.53, 0.16, -6.86], [-0.55, -0.07, -6.83], [-0.55, -0.18, -6.87], [-0.53, -0.17, -6.92], [-0.52, -0.03, -6.94], [-0.52, 0.21, -6.9], [-0.55, 0.51, -6.76], [-0.6, 0.85, -6.52], [-0.69, 1.18, -6.16], [-0.81, 1.45, -5.72], [-0.95, 1.66, -5.2], [-1.11, 1.76, -4.64], [-1.28, 1.76, -4.07], [-1.44, 1.66, -3.53] environment-querier-output: The user wants me to analyze a boulder-throwing machine that is failing its task. **1. Analyze the Task and Machine Design:** * **Objective:** Build a leverage-based machine to throw a boulder for maximum distance and height. This implies a catapult or trebuchet design. * **Machine Structure:** * **Base:** A wide base is constructed using 'Wooden Block's (ID 1, 2, 3, 4, 5, 6) connected to the 'Starting Block' (ID 0). This seems designed for stability. * **Tower:** A vertical tower is built on the right side of the base with 'Wooden Block's (ID 6, 7, 8). Block 8 is the apex of this support structure. * **Lever Arm:** A 'Rotating Block' (ID 10) is mounted on top of the tower (on block 8), serving as the fulcrum. A 'Log' (ID 11) acts as the throwing arm, with a 'Container' (ID 12) and 'Boulder' (ID 13) at the top. A 'Ballast' (ID 14) is attached to the other side of the 'Rotating Block' as a counterweight. * **Power System:** The machine uses a combination of a powered ' Rotating Block' (ID 10) and two 'Rubber Band's (ID 15, 16). The rubber bands connect the top of the tower (block 8) to points on the base ( blocks 3 and 5). **2. Analyze the Environmental Feedback:** * **'machine damaged'**: This is the most critical piece of feedback. The machine is breaking during operation. * **'machine parts: DoubleWoodenBlock order_id:8'**: The failure point is identified as block 8. This is the 'Wooden Block' that supports the entire rotating lever arm assembly. * **'occurred at 0.4sec'**: The failure is almost instantaneous, indicating a massive, immediate overload on the structure. * **Boulder Performance:** The boulder travels a mere 5.63 units and ends up behind its starting Z coordinate. This confirms a catastrophic failure of the launch mechanism, not just a weak throw. The arm is likely collapsing or swinging backward. **3. Synthesize and Hypothesize the Root Cause:** * The point of failure (block 8) is subjected to several immense forces : 51 Technical Report 1. **Static Load:** The weight of the entire lever arm assembly ( rotating block, log, container, boulder, ballast) rests on it. 2. **Dynamic Load:** The torque from the 'Rotating Block' (ID 10) creates a powerful reaction force on its mount point (block 8). 3. **Tension Load:** The two 'Rubber Band's (ID 15, 16) are stretched from block 8 down to the base. When the simulation starts, they will contract with immense force, pulling down and forward on block 8. * **Hypothesis:** The combination of these forces, particularly the sudden, immense tension from the rubber bands, is causing block 8 to shear off from the rest of the tower (block 7). The tower itself, being a single column of wooden blocks, is structurally weak and cannot withstand this combination of bending and shear stress. The rubber bands are incorrectly placed; instead of powering the lever arm, they are destroying the support structure. The machine is literally tearing itself apart. **4. Formulate a Plan for Further Investigation:** To confirm this hypothesis, I need to observe the behavior of the key components involved in the structural failure around the time of the collapse (0.4s). * **Block 8 (Failure Point):** I need to see its position and rotation to confirm it's being ripped from its mounting. * **Block 10 (Rotating Block/Fulcrum):** Its movement will show the forces being applied by the lever arm onto block 8. * **Block 11 (Lever Arm):** Its motion will confirm the direction of the attempted throw and the overall collapse dynamics. * **Rubber Bands (15 & 16):** I need to check their 'length' property. A rapid decrease in length will prove they are contracting and applying the destructive force I suspect. Based on this analysis, I will request detailed feedback on these specific components to verify the cause of the failure. [ { "id": 8, "duration": [0.0, 0.8], "properties": ["position", "rotation"] }, { "id": 10, "duration": [0.0, 0.8], "properties": ["position", "rotation"] }, { "id": 11, "duration": [0.0, 0.8], "properties": ["position", "rotation"] }, { "id": 15, "duration": [0.0, 0.8], "properties": ["length"] }, { "id": 16, "duration": [0.0, 0.8], "properties": ["length"] } ] 52 Technical Report J.6 REFINER system: refiner input: Task Introduction Construct a machine that can throw boulders in a parabolic trajectory, designed to maximize throw distance and height by utilizing the principle of leverage. Mechanical JSON Structure 3D Information Environment Feedback machine damaged: machine parts DoubleWoodenBlock order_id:8 occurred at 0.4 sec block order_id 13 boulder throwing distance 5.63 boulder max height 7.3 boulder actual position in first 5 seconds [1.0, 7.01, -1.5], [0.84, 7.34, -2.44], [0.64, 5.69, -6.2], [0.61, 3.28, -9.97], [0.57, 1.68, -10.14], [0.38, 1.78, -9.45], [0.17, 1.72, -8.76], [-0.03, 1.52, -8.13], [-0.22, 1.22, -7.61], [-0.36, 0.85, -7.23], [-0.47, 0.48, -6.98], [-0.53, 0.16, -6.86], [-0.55, -0.07, -6.83], [-0.55, -0.18, -6.87], [-0.53, -0.17, -6.92], [-0.52, -0.03, -6.94], [-0.52, 0.21, -6.9], [-0.55, 0.51, -6.76], [-0.6, 0.85, -6.52], [-0.69, 1.18, -6.16], [-0.81, 1.45, -5.72], [-0.95, 1.66, -5.2], [-1.11, 1.76, -4.64], [-1.28, 1.76, -4.07], [-1.44, 1.66, -3.53] Querier required feedback block order_id 8 type_id 1 info duration(s) [0.0, 1.0] block broken before below time(s) 0.2 block position [[2.01, 2.49, 0.01]] Due to block broken, the position is incomplete or not recorded block velocity [[0.15, 1.7, 4.1]] Due to block broken, the velocity is incomplete or not recorded block order_id 10 block type_id 22 block info duration(s) [0.0, 1.0] block position [[ 1.5 , 2.52, -1.49], [ 1.5 , 3.34, -0.56], [1.48, 5.27, 1.08], [0.73, 5.66, 2.42], [0.11, 4.45, 3.64])] block rotation [[ 0. , -0.71, 0.01, 0.71], [-0.16, -0.68, 0.18, 0.69], [-0.47, -0.47, 0.51, 0.54], [-0.73, -0.04, 0.08, 0.67], [-0.69, 0.06, -0.04, 0.72]] block order_id 0 block type_id 0 block info duration(s) [0.0, 2.0] block rotation 53 Technical Report [[0., 0., 0., 1.], [ 0.07, 0. , -0.01, 1. ], [ 0.45, 0.06, -0.04, 0.89], [ 0.39, 0.14, -0.09, 0.91], [0. , 0.15, 0.03, 0.99], [-0.09, 0.05, 0.36, 0.93], [0.2 , 0.01, 0.09, 0.98], [ 0.13, -0.31, -0.13, 0.93], [-0.06, -0.26, -0.1 , 0.96], [-0.08, -0.18, -0.1 , 0.97]] Initial State of the Machine The machine is initially placed on the ground, facing the z+ direction, with the target direction being z+. Please remember that the task failure is due to the mechanical design defects. Your ultimate goal is to modify the mechanical design defects so that it can complete the task. The historical modification information represents your previous attempts to make changes, and these steps did not comply with the game rules. Based on the information about the task failure, please make modifications to the mechanics. When designing the modification steps, avoid making the same mistakes as those in the historical modification information. refiner-output: The user wants me to fix a trebuchet-like machine that is failing. **1. Analysis of the Machine and Failure:** * **Machine Structure:** The machine consists of a base, a vertical tower on the right side, and a rotating arm mounted on top of the tower. The arm has a container with a boulder on one end and was intended to have a counterweight on the other. Power is provided by a 'Rotating Block ' (ID 10) at the pivot and a 'Ballast' (ID 14) acting as a counterweight. Two 'Rubber Band's (ID 15, 16) are connected from the top of the tower ( ID 8) to the base, seemingly for stabilization. * **Failure Mode:** The simulation feedback consistently reports that ' DoubleWoodenBlock id:8' is damaged at 0.4 seconds. Block 8 is the topmost block of the support tower, directly holding the entire rotating assembly ('Rotating Block', arm, boulder, and ballast). * **Root Cause Analysis:** * **Overwhelming Force:** The combination of the 'Rotating Block''s torque and the heavy 'Ballast' (ID 14, mass 3) dropping creates an immense rotational force. The 'Boulder' (ID 13, mass 5) at the end of a long lever arm (ID 11, 'Log') amplifies this stress. The tower, constructed from simple 'Wooden Block's, cannot withstand this force, leading to the failure of block 8. * **Instability:** The feedback shows the entire machine lifting and tilting ('machine orientation in long time x-', 'machine frequently changes in the following orientations'). This indicates the base is too small and light to counteract the violent motion of the throwing arm. The 'Rubber Band's (ID 15, 16) are connected asymmetrically (from the tower to the front and left parts of the base), which likely introduces a twisting force, contributing to the instability and structural failure rather than preventing it. **2. Modification Strategy:** The primary goal is to ensure the machine's structural integrity so it can actually launch the boulder without self-destructing. * **Step 1: Reduce Destructive Forces.** The most critical issue is the excessive force breaking block 8. The simplest way to address this is to remove the primary source of this force, the 'Ballast' (ID 14). The ' Rotating Block' (ID 10) by itself will still provide sufficient power to launch the boulder, but with much less violent acceleration, which the wooden structure should be able to handle. * **Step 2: Remove Ineffective Components.** The 'Rubber Band's (ID 15 and 16) are not stabilizing the structure effectively and are likely contributing to the twisting failure due to their asymmetric placement. 54 Technical Report Removing them will simplify the forces acting on the tower and improve stability. This plan directly targets the identified points of failure. By removing the ballast and the rubber bands, the stress on the critical component ( block 8) is significantly reduced, and the machine's overall stability should improve. The throwing distance might be reduced, but a successful, non-destructive launch is the first priority. **3. Final Plan:** 1. Remove 'Ballast' (ID 14). 2. Remove 'Rubber Band' (ID 15). 3. Remove 'Rubber Band' (ID 16). This is a minimal but highly effective set of changes to fix the core problem. The machine's support tower (block 8) is breaking due to excessive torque from the ballast and rotating block. The rubber bands are placed asymmetrically, causing twisting forces and instability. I will remove the ballast (ID 14) to reduce the strain on the structure and remove the two rubber bands (ID 15, 16) to eliminate the destabilizing twisting forces. * **Power Blocks Analysis:** The launch will now be powered solely by the 'Rotating Block' (10). The removal of the 'Ballast' (14) significantly reduces the peak torque, resulting in a smoother, less violent rotation of the throwing arm. This will prevent the structural failure of the tower. * **Balance Check:** Without the heavy ballast dropping and the asymmetric pull from the rubber bands, the machine will be much more stable during operation. The base should remain mostly on the ground, with minimal tipping as the arm swings. The overall center of gravity will be more manageable. * **Comprehensive Analysis:** The primary defect, the destruction of block 8, will be resolved. The machine will successfully complete the launch sequence without breaking. While the throwing distance and height may be lower due to the reduced power, the machine will now be functional and can complete its primary task of throwing the boulder. * **10-second game simulation prediction:** * **0.0s - 0.5s:** The 'Rotating Block' activates, and the arm begins to swing upwards smoothly. The tower (block 8) remains intact. * **0.5s - 1.0s:** The arm reaches its maximum rotation, and the boulder is released from the container. The machine's base remains stable on the ground. * **1.0s - 5.0s:** The boulder travels in a clear parabolic arc. The machine settles without any damage. The final throwing distance will be shorter than the broken attempts but will represent a successful launch. Remove [14] Remove [15] Remove [16] 55 Technical Report K SINGLE-AGENT PROMPT You are a machine builder. Your task is to generate a complete machine as a JSON file based on the user's request. Add new blocks to the initial structure; do not modify or delete it. I. Rules: 1. Coordinate System: Left-handed coordinate system, y+ upwards, z+ forward and x+ right. 2. Block Placement: New blocks must attach to 'attachable_faces' of existing blocks. Blocks cannot overlap. 3. Size Limit: The final machine must not exceed dimensions of 17 ( Length, Z), 17 (Width, X), 9.5 (Height, Y). 4. Functionality: Ensure functional blocks are oriented correctly. 5. Ground Interaction: The ground automatically conforms to the machine' s lowest block. Account for potential collisions between the machine and the ground throughout operation. 6. Gravity: Every block is subject to gravity; the greater a block's mass, the stronger its downward force. Consider this in your design when the machine is in operation. 7. Physical rules: Classical mechanical laws such as conservation of momentum are applied. II. Block Data: Notes: You can only use blocks from this list. A block's default orientation is Z+. 1. Attachable face: a. 'id': The i-th attachable_face of this block. b. 'pos': Coordinates relative to the building center(which is the attachable_face of the parent block) of this block. c. 'orientation': Orientation relative to the building center of this block. 2. Tags: a. 'Non-static': Block can generate force or movement. b. 'Non-stable': Connection to parent is not rigid (e.g., hinges, boulders). c. 'Linear': Do not collide with other blocks, but will occupy two attachable_faces. 3. Special Blocks: a. Boulder (id 36): Does not physically connect to other blocks. b. Spring (id 9): A special block that pulls its two connection points together. Detailed Infos: III. JSON Output Format: 1. type: block's type_id 2. id: this is i-th block 3. parent: parent block's id 4. face_id: parent block's constructible_point id 5. Standard Block: '{"type": , "id": , "parent": , " face_id": }' 6. special block (id: 9): '{"type": 9, "id": , "parent_a": , " face_id_a": , "parent_b": , "face_id_b": }' IV. Final Response Format: Your response must contain only these two parts: 1. 'Chain of thoughts:' a. You need to think step by step, analyse each block's usage, and where to place them. Put your cot in 56 Technical Report b. 'Construction Idea:' A brief explanation of your design, remember to consider necessary block types, note them in '''necessary_blocks [ type_1,type_2 ...]''', no more than 300 words. 2. 'JSON:' The complete JSON code inside a '''json ... ''' block. Here is an example: '''json [ {"type":"0","id":0,"parent":-1,"face_id":-1}, {"type": , "id": , "parent": , "face_id": }, ... ] ''' 57 Technical Report L MULTI-AGENT PROMPTS L.1 SHARED PROMPTS L.1.1 GAME INTRODUCTION WITH 3D KNOWLEDGE 1. Coordinate System: The game uses a left-handed coordinate system, with the Y-axis pointing upwards. In global coordinates, z+ is forward and x+ is to the right. 2. Construction: New blocks must be connected to the "attachable faces" of existing blocks. The default orientation of blocks is z+. 3. Block Types: a. Regular Blocks: Have fixed dimensions and multiple attachable faces. b. special blocks (ID 7, 9): Connect two attachable faces, do not collide with other blocks, but will occupy the connection points. 4. Size Limitations: The mechanical dimensions must not exceed Length (z) 17, Width (x) 17 Height (y) 9.5. L.1.2 MACHINE 3D JSON FORMAT '''json [ {"type":"0","id":0,"parent":-1,"face_id":-1}, {"type":"Block Type ID","id":"Block Order ID","parent":"Parent Block ID","face_id":"Attachable Face ID in Parent Block"}, ... ] ''' If it is a special block (Type ID is 7 or 9, other blocks are not special blocks), it will be: '''json { "type":"Block Type ID", "id":"Block Order ID", "parent_a":"Parent A Block Order ID", "face_id_a":"Attachable Face ID in Parent A Block", "parent_b":"Parent B Block Order ID", "face_id_b":"Attachable Face ID in Parent B Block" } ''' L.1.3 BUILD GUIDANCE Your task is to: Add new blocks based on the initial machine JSON and construction requirements provided by the user, without deleting the initial structure, and output the final complete JSON. User Input Format: 1. Building Objective: Describe the structure and function to be built, and provide a list of recommended block IDs. 2. Initial JSON: The structural data of the existing machine. Core Building Rules: 1. Block Usage: You can only select from the list of recommended block IDs provided by the user. You may remove certain recommended block IDs due to "inapplicability" or "better alternatives," but you cannot add new IDs. If any are removed, the reason must be stated. 2. Collision Prevention: You must accurately calculate the coordinates and orientation of new blocks based on the orientation and position of the parent block to ensure that the new block does not overlap with the existing structure. 58 Technical Report 3. Coordinate System and Orientation: The initial orientation of all blocks is Z+. The final orientation of new blocks must be transformed based on the parent block's orientation and the relative direction of the building point, according to the following rules: Oriented z+: Front z+, Back z-, Left x-, Right x+, Up y+, Down yOriented z-: Front z-, Back z+, Left x+, Right x-, Up y+, Down yOriented x-: Front x-, Back x+, Left z-, Right z+, Up y+, Down yOriented x+: Front x+, Back x-, Left z+, Right z-, Up y+, Down yOriented y+: Front y+, Back y-, Left x-, Right x+, Up z-, Down z+ Oriented y-: Front y-, Back y+, Left x-, Right x+, Up z+, Down zYour Output Format: 1. Building Plan: 'Generated [structure summary] to achieve [function]. Finally used blocks [ID1, ID2,...]. Removed [ID3,...] because [reason for removal].' 2. Final JSON: '''json [ // The complete JSON including both the initial structure and the new blocks ] ''' L.1.4 META DESIGNER SYSTEM PROMPT You are a mechanical designer, and your task is to design a machine in the game Besiege based on the user's requirements. Please gain a general understanding of the game based on the following information: I. Game Introduction: 1. Besiege is a physics-based construction game developed using Unity. Players need to build various machines to complete different tasks. 2. Besiege only contains basic mechanics and physical laws, such as mass, friction, and collision. 3. Blocks are used to build machines. Each block has its unique functions , advantages, and disadvantages. II. Block Introduction: 1. Blocks are mainly divided into five major categories: Basic Blocks, Mobility Blocks, Mechanical Blocks, Weapon Blocks, and Armor Blocks. - Basic Blocks are the fundamental components of many machines - structural blocks and some basic moving parts. - Mobility Blocks are primarily designed for movement functions - powered and unpowered wheels, steering blocks, and gears. - Mechanical Blocks provide various useful auxiliary functions - joints, suspension devices, winches, grabbers, etc. - Weapon Blocks offer various types of violent output at different ranges - swords and saws for close combat, and cannons and rockets for longrange. - Armor Blocks can protect the machine from damage or provide useful shapes for carrying other blocks - armor plates and wooden panels, as well as half-pipes and brackets. 2. Here is a detailed introduction to the properties and functions of each block: | Name | Category | Type ID | Function | |------|----------|---------|----------| | Starting Block | Basic | 0 | The Starting Block is the root block of the machine; it is placed at the starting position by default, cannot be moved, cannot be deleted, and only one can exist at a time. | | Small Wooden Block | Basic | 15 | A basic structural block, cube-shaped , to which other blocks can be attached from any side, making it particularly suitable for constructing the basic framework of machines. | 59 Technical Report | Wooden Block | Basic | 1 | A basic mechanical block, twice the length of a Small Wooden Block. | | Wooden Rod | Basic | 41 | A basic mechanical block, twice the length of a Small Wooden Block, with the same weight as a Wooden Block, but very fragile. | | Log | Basic | 63 | A basic mechanical block, more robust, three times the length of a Small Wooden Block. | | Brace | Basic | 7 | A non-placeable block used for reinforcement, built by "attaching" to other blocks, with no collision volume. It is often used to increase the stability of static structures and is not suitable for any dynamic structures. | | Steering Hinge | Mobility | 28 | The Steering Hinge can rotate blocks along an axis perpendicular to the placement axis. This block can rotate child blocks to a 180-degree direction to the left or right, commonly used for vehicle steering. | | Steering Block | Mobility | 13 | The Steering Block can rotate blocks along its placement axis, similar to the rotating part of a helicopter's rotor. | | Powered Wheel | Mobility | 2 | Similar to a car wheel, it can drive itself but cannot turn independently. It is a mechanical device used for moving objects on the ground. | | Unpowered Wheel | Mobility | 40 | A wheel that does not rotate without external force, otherwise similar to a Powered Wheel. | | Powered Large Wheel | Mobility | 46 | Similar to a Powered Wheel, but with a radius and thickness twice that of a Powered Wheel. | | Unpowered Large Wheel | Mobility | | A wheel that does not rotate without external force, otherwise similar to a Powered Large Wheel. | | Small Wheel | Mobility | 50 | It works almost the same as a caster wheel (like a shopping cart wheel), unpowered. | | Universal Joint | Mechanical | 19 | A block that can freely rotate around its placement axis, similar to a Steering Block but without power. | | Hinge | Mechanical | 5 | Similar to a Steering Hinge, but without power . | | Ball Joint | Mechanical | 44 | Can swing 360 degrees along the axis perpendicular to the placement axis, but without power. | | Axle Connector | Mechanical | 76 | Similar to a Ball Joint. | | Rotating Block | Mechanical | 22 | Powered, it can rotate clockwise or counterclockwise along the axis perpendicular to the placement axis. | | Suspension | Mechanical | 16 | Shaped like a wooden block, it can buffer forces from all directions. | | Grabber | Mechanical | 27 | It will grab and hold onto any object it comes into contact with. | | Spring | Mechanical | 9 | A special block that attaches to two other blocks and can quickly pull them together. Its pulling force is almost entirely dependent on its length. | | Boulder | Weapon | 36 | A stone that does not directly connect to other blocks even when built on them. It can be used as a projectile weapon and is also commonly used as a target in transportation tests. | | Elastic Pad | Armor | 87 | Increases the elasticity of the contact surface, providing an effect of rebounding and increasing kinetic energy. | | Container | Armor | 30 | Can hold child blocks like a bowl, mainly used to carry blocks that cannot be directly connected to the machine. The container has some anti-slip capability, and only one block (the target to be carried) can be placed inside. No other blocks can be added. | | Roller Wheel | Locomotion | 86 | Similar to the small wheel, but shorter (0.8m). | | Grip Pad | Armour | 49 | Block with the highest friction. | | Ballast | Flight | 35 | A heavy cubic block used as a counterweight. | III. Mechanical Design Requirements: 1. When designing the machine, you should adopt a "layered design" approach. Break down the user's requirements into the functions that the machine needs to achieve, and list the functional points. 60 Technical Report 2. For each functional point, design a structure that can meet the function. A structure can be understood as a "group of blocks," and several structures combined form the machine. 3. For each structure, determine the types of blocks to be used. 4. Determine the construction order of the structures to make the machine -building process layered. List which structure is the foundation and which is the upper-layer structure, and establish the construction sequence chain. IV. Output Format Requirements: '''json { "definition": "Construct a machine that can fulfill the user's requirements", "function_points": ["Function Point 1", "Function Point 2", "Function Point 3"], "design_structure": [ { "function_name": "Structure 1 Name", "description": "Description of Structure 1", "related_function_points": ["Function Point 1", "Function Point 2"] }, { "function_name": "Structure 2 Name", "description": "Description of Structure 2", "related_function_points": ["Function Point 3"] } ], "build_order": ["Structure 2 Name", "Structure 1 Name"], "machine_structure": { "Structure 1 Name": { "block_id": [ID1, ID2, ID3...], "guidance": "Guidance here" }, "Structure 2 Name": { "block_id": [ID4, ID5, ID6...], "guidance": "Guidance here" } } } ''' V. Note: 1. You must design the machine based on the game introduction and block introduction, and you cannot use blocks that do not exist in the game. 2. Strictly follow the output format requirements. Do not output any content other than what is required by the output format. 3. For the design of structures, aim for simplicity and use the minimum number of structures to complete all functions. Check if there are existing structures that can be used before designing new ones. 4. When selecting blocks for a structure, limit the types of blocks to no more than three, and preferably use only one type. Focus solely on meeting the functional points with the bare minimum requirements, and do not attempt to fulfill demands beyond the functional points. I will provide the user input below. Please generate a mechanical overview in JSON format based on the user's description. L.2 DESIGNER SYSTEM AND USER PROMPT You are a mechanical builder in the game "Besiege." Your task is to add new blocks to an existing machine structure according to user requests and finally output the complete machine JSON data. 61 Technical Report I. Game Introduction: II. Introduction to Blocks: III. Introduction to JSON Format: IV. Construction Guidance: V. Output Format Requirements: Based on the existing structure, a [structural summary] was generated as [functional implementation]. Ultimately, the block types [ID1, ID2, ...] were decided upon, while [ID3 , ...] were removed due to [reason for removal]. JSON: '''json ... ''' VI. Note: Building Principles 1. Correct Orientation: Ensure that functional blocks such as wheels and hinges are oriented correctly to achieve the intended function. 2. Efficiency First: a. Complete the design goal with the fewest blocks possible. b. The ground will automatically adapt to the lowest point of the mechanism. Output Requirements 1. Strict Structure: Your response must only contain two parts: Construction Plan and Final JSON. Prohibit any additional greetings, explanations, or comments. 2. Pure JSON: The complete JSON code block must be placed within '''json ... '''. Prohibit modifying the initial structure. Prohibit modifying the 'scale' property of any block. Prohibit adding comments or nonexistent properties in the JSON. Next, I will provide user input. Please generate a JSON based on the description. + L.3 INSPECTOR SYSTEM AND USER PROMPT I'll provide you with a mission in the game Besiege, along with the machine designed for it in JSON format and its 3D information. Please identify and summarize the unreasonable parts of the machine design. Here's the introduction to the game and construction knowledge. I. Game Introduction: II. Introduction to Blocks: III. Introduction to JSON and 3D Information: IV. Introduction to Output Format: { "Coordinate System": "Left-Handed System or Right-Handed System", "Up": "x or y or z", "Right": "x or y or z", "Forward": "x or y or z", 62 Technical Report "Frontmost Block": {"id": [Center Point Coordinates]}, "Rearmost Block": {"id": [Center Point Coordinates]}, "Leftmost Block": {"id": [Center Point Coordinates]}, "Rightmost Block": {"id": [Center Point Coordinates]}, "Topmost Block": {"id": [Center Point Coordinates]}, "Lowest Block": {"id": [Center Point Coordinates]}, } Write the answer to the user's question here 1. Problem description, involving blocks: [id_list], belonging to structure: "Structure Name" ... V. Notes: 1. Please do not output any irrelevant information and directly answer the user's question. 2. The id of the block must be enclosed in "[]". Additionally, do not use "[]" for any other numbers; consider using "()" instead. Below, I will provide you with JSON and 3D information. Please answer the user's question based on this information. Task Introduction {designer_output["definition"]} JSON Information 3D Information Mechanical Structure Information Initial State of the Machine The machine is initially placed on the ground, facing in the z+ direction , with the target direction being z+. Questions: 1. Output the position and orientation of all dynamic blocks, and analyze : a. The impact of dynamic blocks on the machine b. The direction of force provided by dynamic blocks c. The impact on sub-blocks and the direction of force on the machine 2. Output static blocks other than basic structural blocks, and analyze the rationality of their orientation and position. 3. Balance Check (self-gravity) a. The center of gravity of the machine (find the block closest to the center of gravity) b. Whether parts of the machine will sink due to gravity 4. Comprehensive Analysis a. Summarize the direction of all forces to analyze the movement of the machine b. Identify logically unreasonable blocks, output their hierarchical structure and reasons for unreasonableness L.4 REFINER SYSTEM PROMPT I will give you a task in the game Besiege, as well as the 3D information of the machine designed to complete this task. There are some 63 Technical Report unreasonable aspects in the design of this machine, and I would like you to modify these parts: I. Game Introduction: II. Block Introduction: III. Input Introduction: 1. Task & Context Task Objective: Preceding Information (if any): Defect Report: Modification History: Environmental Feedback: 2. Machine Data IV. Modification Method Introduction: Please follow the steps below to analyze and modify the machine structure . Step 1: Analyze & Plan 1. Diagnose the Current Machine: Analyze its power, balance, structure, and overall movement to identify design flaws or areas for optimization. 2. Devise a Modification Plan: Based on your diagnosis, decide which blocks to move, remove, or add. 3. Evaluate Modification Impact: When planning, you must consider the impact of your changes. 4. Briefly Describe Modifications: Before generating commands, describe your modification plan in a sentence or two using natural language. Step 2: Execute Modification Operations Use the following three command formats for modifications. Output only the operation commands, with each command on a new line. 1. Add Block (Add) Format: Non-Linear: Add [block type ID] to [id] in [attachable_face_id ] Linear: Add [block type ID] to [id_a] in [ attachable_face_id_a] to [id_b] in [attachable_face_id_b] Rules: Can only be added to original blocks. You cannot add new blocks onto other newly added blocks. special blocks require specifying two connection points. 2. Remove Block (Remove) Format: Remove [id] Rules: Can only remove original blocks. Cannot remove a block that has child blocks. 3. Move Block (Move) Move [id] to [new_parent_id] in [new_attachable_face_id] Rules: Moves the target block and all its child blocks as a single unit. The new parent's 'id' must be smaller than the 'id' of the block being moved. special blocks cannot be moved. The move must change the block's original position. 64 Technical Report V. Output Format Introduction: VI. Note: 1. Output Scope: Only output content related to the modification method and the required output format. 2. Ground Definition: The ground is always located beneath the machine, in contact with the block that has the lowest y-coordinate. 3. Task Adherence: Pay close attention to the task requirements. Do not delete important blocks by mistake. 4. Parenting Constraint: A block that has been deleted cannot be used as a parent for new or moved blocks within the same set of operations. 5. Format Integrity: Ensure the output format is complete. You must retain the ' ' and ' ' tags. 6. Content Separation: Do not include verification results within the ' ' block. 7. Preserve 'Success' Steps: Retain any steps marked as "Success" and do not adjust their order. 8. Prioritize 'Error' Steps: Focus your efforts on fixing the steps marked as "Error". Do not modify steps marked as "Unverified" prematurely . 9. Error History: I will provide the modification history for the "Error" steps. Use this information to avoid repeating invalid operations. Below, I will provide you with the JSON and 3D information. Please modify the machine accordingly. L.5 ENVIRONMENT QUERIER SYSTEM PROMPT I will give you a task in the game Besiege, as well as the information on the machine designed to complete this task. The machine has finished the task simulation and returned some environmental feedback to describe its performance. Please analyze the issues with the machine based on the feedback and request more feedback if needed. I. Game Introduction: II. Block Introduction: III. Input Introduction: IV. Query Introduction: A. Environmental Feedback Request: After conducting the environmental simulation of your modified machinery, The system will provide essential data for the most critical components ( such as position, rotation, and velocity). Based on the performance of the essential data, you need to determine what problems the machinery may have encountered and request feedback on the key blocks that may have issues. You can also request those blocks that may significantly impact the machinery's functionality and check whether their performance meets expectations. The format for the environmental feedback request is as follows. Please adhere strictly to this format and avoid including any extraneous information: 65 Technical Report [ { "id": int, "duration": [float, float], "properties": ["position", "rotation", "velocity", "length"], }, ... ] Both "id" and "duration" are mandatory fields. Note that the game runs for a total of 5 seconds, game state samples per 0.2s; do not exceed this duration. You can freely select the attributes in "properties," but avoid including irrelevant information. The "length" attribute is only applicable to linear components. V. Output Format Introduction: VI. Note: 1. Please do not output any irrelevant information. 2. The ground will always correctly appear beneath the machine, making normal contact with the block that has the lowest y-axis coordinate. 3. All blocks are affected by gravity. If a block with a large selfweight is built on certain non-powered non-static blocks, the non-static blocks may rotate due to the gravitational force of the sub-blocks. 4. Similarly, the power generated by powered blocks also needs to counteract the gravitational force of the sub-blocks. 5. Please adhere to the output format and avoid adding any irrelevant information in the JSON. Below, I will provide you with the JSON and 3D information, as well as the environmental feedback. Please request more feedback as needed. L.6 BLOCK INFORMATIONS Explanations: This is a concise list of the blocks you can use for this construction. Please read and follow the rules carefully. I. Block Information Format Each block's information follows the dict format. Attributes that a block does not have will not appear in the keys. 1. Characteristic Tags: a. Non-static: The block can actively generate force or movement. b. Non-stable: The connection between the block and its parent block is non-rigid, allowing for movement or rotation. c. Linear: The block is used to connect two existing points rather than being attached to a single point. If there are no tags, it is a regular static and stable block. 2. Attachable Faces: The key is in the format of attachable face ID, coordinates (relative coordinates), and orientation (relative orientation). II. Key Special Rules 1. Powered Wheel Rule (applicable to all powered wheels): The direction of power provided by the wheel is not the same as its orientation. - Forward (Z+ direction): The wheel should face sideways (X+ or X-). - Left turn (power pushes towards X-): The wheel should face forward ( Z+). - Right turn (power pushes towards X+): The wheel should face backward (Z-). 66 Technical Report - When the wheel faces up or down (Y+ or Y-), it does not provide power. 2. Special Blocks (Brace, Spring): - They do not have their own connection points but connect to two other blocks. - Brace: Used to reinforce static structures with no collision volume. - Spring: Generates a contracting pull force along the line connecting its two ends when stretched. 3. Non-connecting Blocks (bombs, boulders): - These blocks are placed at the specified location but do not form a physical connection with any other block. Containers are usually needed to hold them. Detailed Infos: [ { "Name": "Starting Block", "Description": "The root block of the mechanism. It cannot be placed or deleted, and only one can exist at a time. Its initial position is fixed, and its initial orientation is z+.", "Type ID": 0, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 0.5], "Orientation": "Front "}, {"ID": 1, "Coordinates": [0, 0, -0.5], "Orientation": "Back "}, {"ID": 2, "Coordinates": [-0.5, 0, 0], "Orientation": "Left "}, {"ID": 3, "Coordinates": [0.5, 0, 0], "Orientation": "Right "}, {"ID": 4, "Coordinates": [0, 0.5, 0], "Orientation": "Up"}, {"ID": 5, "Coordinates": [0, -0.5, 0], "Orientation": "Down"} ], "Mass": 0.25 }, { "Name": "Small Wooden Block", "Description": "A basic construction block, cubic in shape.", "Type ID": 15, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Mass": 0.3 }, { "Name": "Wooden Block", "Description": "A basic construction block.", "Type ID": 1, "Size": [1, 1, 2], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 2], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, 67 Technical Report {"ID": 3, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 4, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 5, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 6, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 7, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "}, {"ID": 8, "Coordinates": [0, -0.5, 1.5], "Orientation": "Down "} ], "Mass": 0.5 }, { "Name": "Wooden Rod", "Description": "A basic construction block, slender and fragile .", "Type ID": 41, "Size": [1, 1, 2], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 2], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, {"ID": 3, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 4, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 5, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 6, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 7, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "}, {"ID": 8, "Coordinates": [0, -0.5, 1.5], "Orientation": "Down "} ], "Mass": 0.5 }, { "Name": "Log", "Description": "A basic construction block.", "Type ID": 63, "Size": [1, 1, 3], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 3], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, {"ID": 3, "Coordinates": [-0.5, 0, 2.5], "Orientation": "Left "}, {"ID": 4, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 5, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 6, "Coordinates": [0.5, 0, 2.5], "Orientation": "Right "}, {"ID": 7, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 8, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 9, "Coordinates": [0, 0.5, 2.5], "Orientation": "Up"}, {"ID": 10, "Coordinates": [0, -0.5, 0.5], "Orientation": " Down"}, {"ID": 11, "Coordinates": [0, -0.5, 1.5], "Orientation": " Down"}, 68 Technical Report {"ID": 12, "Coordinates": [0, -0.5, 2.5], "Orientation": " Down"} ], "Mass": 1 }, { "Name": "Steering Hinge", "Description": "Powered, used to control the rotation of subblocks. It can swing left and right along the axis perpendicular to the placement axis.", "Type ID": 28, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Special Attributes": { "Swing Direction": ["Left", "Right"], "Angle": [-90, 90], "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Steering Block", "Description": "Powered, used to control the rotation of subblocks. It can rotate clockwise or counterclockwise along the placement axis.", "Type ID": 13, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Powered Wheel", "Description": "Powered, a mechanical device used to move objects on the ground.", "Type ID": 2, "Size": [2, 2, 0.5], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 0.5], "Orientation": "Front"} ], "Special Attributes": { "Rotation Axis": "Front", "PoweredWheel":"True", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { 69 Technical Report "Name": "Unpowered Wheel", "Description": "A wheel that does not rotate without external force, similar to the powered wheel.", "Type ID": 40, "Size": [2, 2, 0.5], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 0.5], "Orientation": "Front"} ], "Special Attributes": { "Rotation Axis": "Front", "NonStable":"True" }, "Mass": 1 }, { "Name": "Large Powered Wheel", "Description": "Similar to the powered wheel, but larger.", "Type ID": 46, "Size": [3, 3, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-1.5, 0, 1], "Orientation": "Front "}, {"ID": 2, "Coordinates": [1.5, 0, 1], "Orientation": "Front "}, {"ID": 3, "Coordinates": [0, 1.5, 1], "Orientation": "Front "}, {"ID": 4, "Coordinates": [0, -1.5, 1], "Orientation": "Front "}, {"ID": 5, "Coordinates": [-1.5, 0, 0.5], "Orientation": "Left "}, {"ID": 6, "Coordinates": [1.5, 0, 0.5], "Orientation": "Right "}, {"ID": 7, "Coordinates": [0, 1.5, 0.5], "Orientation": "Up"}, {"ID": 8, "Coordinates": [0, -1.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "PoweredWheel":"True", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Large Unpowered Wheel", "Description": "Similar to the unpowered wheel, but larger.", "Type ID": 60, "Size": [3, 3, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-1.5, 0, 1], "Orientation": "Front "}, {"ID": 2, "Coordinates": [1.5, 0, 1], "Orientation": "Front "}, {"ID": 3, "Coordinates": [0, 1.5, 1], "Orientation": "Front "}, {"ID": 4, "Coordinates": [0, -1.5, 1], "Orientation": "Front "}, {"ID": 5, "Coordinates": [-1.5, 0, 0.5], "Orientation": "Left "}, {"ID": 6, "Coordinates": [1.5, 0, 0.5], "Orientation": "Right "}, {"ID": 7, "Coordinates": [0, 1.5, 0.5], "Orientation": "Up"}, 70 Technical Report {"ID": 8, "Coordinates": [0, -1.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStable":"True" }, "Mass": 1 }, { "Name": "Small Wheel", "Description": "It works almost the same as a caster wheel (e.g., shopping cart wheel), but it is not powered.", "Type ID": 50, "Size": [0.5, 1, 1.5], "Special Attributes": {"NonStable":"True"}, "Mass": 0.5 }, { "Name": "Roller Wheel", "Description": "Same as the small wheel.", "Type ID": 86, "Size": [1, 1, 1], "Special Attributes": { "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Universal Joint", "Description": "A block that can freely rotate around its placement axis, but it is not powered.", "Type ID": 19, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Hinge", "Description": "It can swing up and down along the axis perpendicular to the placement axis, but it is not powered.", "Type ID": 5, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} 71 Technical Report ], "Special Attributes": { "Swing Direction": ["Up", "Down"], "Angle": [-90, 90], "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Ball Joint", "Description": "It can swing freely in all directions, but it is not powered.", "Type ID": 44, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Swing Range": "All directions outward from the build surface ", "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Axle Connector", "Description": "Similar to a ball joint.", "Type ID": 76, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Special Attributes": { "Swing Range": "All directions outward from the build surface ", "NonStable":"True" }, "Mass": 0.3 }, { "Name": "Rotating Block", "Description": "When powered, this motor-like block generates torque and rotates about its local y-axis. Blocks connected at attachable_face 1 or 4 rotate with it as part of a rigid assembly. The rotation block has its own mass and obeys classical mechanics: it applies torque to connected parts when powered, and it can also be moved, rotated, or stopped by external forces or torques, depending on constraints.", "Type ID": 22, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, 72 Technical Report {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], "Special Attributes": { "Rotation Axis": "Front", "NonStatic":"True", "NonStable":"True" }, "Mass": 1 }, { "Name": "Grabber", "Description": "If the build point is unoccupied, it will grab any object that comes into contact with the build point and hold it firmly.", "Type ID": 27, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Special Attributes": { "Grip Direction": "Front", "NonStable":"True" }, "Mass": 0.5 }, { "Name": "Boulder", "Description": "A rock that will not directly connect to other blocks even if built on them, high mass.", "Type ID": 36, "Size": [1.9, 1.9, 1.9], "Special Attributes": { "NonStable":"True" }, "Mass": 5 }, { "Name": "Grip Pad", "Description": "The block with the highest friction.", "Type ID": 49, "Size": [0.8, 0.8, 0.5], "Mass": 0.3 }, { "Name": "Elastic Pad", "Description": "The block with the highest elasticity.", "Type ID": 87, "Size": [0.8, 0.8, 0.2], "Mass": 0.3 }, { "Name": "Container", "Description": "It has a railing around the building point. If oriented towards +y, it can hold sub-blocks like a bowl. It is mainly used to hold blocks that cannot directly connect to the mechanism, such as boulders and bombs. Do not place other blocks nearby to avoid overlap .", "Type ID": 30, "Size": [2.4, 3, 2.8], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"} ], "Mass": 0.5 }, 73 Technical Report { "Name": "Suspension", "Description": "It primarily serves as a buffer and shock absorber. It is similar in shape to a wooden block, with all Attachable Faces Properties located at the far end of the block.", "Type ID": 16, "Size": [1, 1, 2], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 2], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 1.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 1.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 1.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 1.5], "Orientation": "Down "} ], "Mass": 0.5 }, { "Name": "Brace", "Description": "The brace can be used for reinforcement. Its construction principle is to 'attach' to other blocks. It has no collision volume. Since it is often used to stabilize static structures, it is not suitable for any dynamic structures.", "Type ID": 7, "Special Attributes": { "Linear": "True", "Anti Tension Direction": "Towards the center of the line segment between the two Attachable Faces Properties", "Anti-Compression Direction": "Outward from the center of the line segment between the two Attachable Faces Properties" }, "Mass": 0.5 }, { "Name": "Spring", "Description": "A special block that attaches to two other blocks and can quickly pull the two ends together. Its tension force is almost entirely dependent on its length.", "Type ID": 9, "Special Attributes": { "Linear": "True", "NonStatic":"True", "Tension Direction": "Towards the center of the line segment between the two Attachable Faces Properties" }, "Mass": 0.4 }, { "Name": "Ballast", "Description": "It serves as a counterweight, has a large mass, and is shaped like a cube.", "Type ID": 35, "Size": [1, 1, 1], "Attachable Faces Properties": [ {"ID": 0, "Coordinates": [0, 0, 1], "Orientation": "Front"}, {"ID": 1, "Coordinates": [-0.5, 0, 0.5], "Orientation": "Left "}, {"ID": 2, "Coordinates": [0.5, 0, 0.5], "Orientation": "Right "}, {"ID": 3, "Coordinates": [0, 0.5, 0.5], "Orientation": "Up"}, {"ID": 4, "Coordinates": [0, -0.5, 0.5], "Orientation": "Down "} ], 74 Technical Report "Mass": 3 } ] 75
2510.14971
ON THE INVARIANTS OF FINITE GROUPS ARISING IN A TOPOLOGICAL QUANTUM FIELD THEORY CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Abstract. In this paper, we consider the properties of finite groups that are witnessed by group invariants arising in the context of Dijkgraaf–Witten theory, a topological quantum field theory, as invariants of surfaces. These invariants can be considered generalizations of the commuting probability, an invariant that has been well studied in the group theory literature. 1. Introduction There is a long history of using invariants to deduce structural properties of finite groups. These invariants are often constructed by counting objects associated with the group, such as el- ements, conjugacy classes or irreducible characters. In this paper, we take a different approach. Given the remarkable compatibility between mathematics and physics, it is reasonable to expect that group invariants arising naturally in physics are useful for characterizing group structure. Here, we consider Dijkgraaf–Witten theory, a topological quantum field theory (TQFT), in one space dimension and one time dimension. In short, such a TQFT determines a commutative Frobenius algebra that gives rise to invariants of surfaces. If we assume that this algebra is the center Z(CG) of the complex group algebra of a finite group G, then the invariant of a genus-h surface takes the form Qh(G) = P χ∈Irr(G)(|G|/χ(1))2h−2, where Irr(G) is the collection of complex irreducible characters of G. Note that Q0(G) = 1/|G| and Q1(G) = k(G), where k(G) is the number of conjugacy classes of G (equivalently, the number |Irr(G)| of irreducible complex characters). If we consider the invariants Qh(G) as arising from a sequence of increas- ingly complicated experiments, we might imagine that we first measure Q0(G), then Q1(G), and so on. With this in mind, it is natural to consider the following scaled invariants, which, as we will show, are very well-behaved: qh(G) := 1 |G| X χ∈Irr(G)  1 χ(1) 2h−2 . Now note that q0(G) = 1 and q1(G) = k(G)/|G|, where the latter is the so-called commuting probability d(G), which was first considered by Gustafson [14] and later by many authors. Viewed in this way, the “quantum invariants” qh(G) are generalizations of the commuting probability, for which the following structure criteria are known: (a) If d(G) > d(D8) = 5/8, then G is abelian (Gustafson [14]). (b) If d(G) > d(S3) = 1/2, then G is nilpotent (Lescot [20]). (c) If d(G) > d(A4) = 1/3, then G is supersolvable (Barry, MacHale, N´ı Sh´e [2]). (d) If d(G) > d(A5) = 1/12, then G is solvable (Dixon [9]). In this paper, we consider the properties of finite groups that are witnessed by the invari- ants qh(G). In our main result, we prove broad generalizations of the structure criteria witnessed by d(G) = q1(G) to any value of the genus h. Theorem 1.1. Let G be a finite group, and let h be any positive integer. (a) If qh(G) > qh(D8), then G is abelian. (b) If qh(G) > qh(S3), then G is nilpotent. (c) If qh(G) > qh(A4), then G is supersolvable. (d) If qh(G) > qh(A5), then G is solvable. Date: October 17, 2025. 1 arXiv:2510.14971v1 [math.GR] 16 Oct 2025 2 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Guralnick and Robinson showed in [13, Lemma 2(x)] that if p is a prime and d(G) > 1/p, then G has a normal Sylow p-subgroup. Our next result generalizes this p-closed criterion to all values of the genus h. Theorem 1.2. Let G be a finite group, and let p be a prime. If qh(G) > β(h, p)/(p + 1), where β(h, p) = 1 + 1/p2h−1, then G has a normal Sylow p-subgroup. This criterion is the best possible. For example, if p = 2f −1 is a Mersenne prime, then the Frobenius group G = (C2)f ⋊Cp satisfies qh(G) = β(h, p)/(p + 1) and does not have a normal Sylow p-subgroup. Although Theorem 1.2 implies Theorem 1.1(b) and much of part (c), we will in fact use the latter to prove this p-closed criterion. Given a prime p, it is also natural to consider a p-local version of the invariants qh(G) by summing over irreducible p-Brauer characters instead of irreducible complex characters. We define qh,p′(G) = 1 |G|p′ X φ∈IBr(G) 1 φ(1)2h−2 . Note that q1,p′(G) = kp′(G)/|G|p′, where kp′(G) is the number of irreducible p-Brauer char- acters (equivalently, the number of p-regular conjugacy classes). This invariant is denoted by dp′(G) in the literature, and it was shown in [30, Theorem 1.1] that if p is an odd prime and dp′(G) > 1/(p −1), then G is p-solvable. This result is best possible for p > 3 since dp′(PSL2(p)) = 1/(p −1). Our next theorem extends this result to all values of the genus. Theorem 1.3. Let G be a finite group, let h be a positive integer, and let p be an odd prime. If qh,p′(G) > α(h, p)/(p−1), where α(h, p) = (22h−2+√p −1)/(22h−2√p −1), then G is p-solvable. Note that α(h, p) < 1 when both h > 1 and p > 2. Theorem 1.3 is not best possible for h = 1, in which case, again, the best bound is 1/(p −1) from [30]. We suspect, but have not proved, that the result is also not best possible for h > 1. Our proof relies on a lower bound for kp′(G) when G is not p-solvable. At the time of writing, the best known bound is kp′(G) > √p −1, which was proved by Hung and Mar´oti in [28]. An improvement in this bound would lead directly to an improvement in Theorem 1.3. Our paper is structured as follows. As our methods are purely group-theoretic, we defer an introduction to Dijkgraaf–Witten theory to Section 5. In Section 2, we make some remarks and collect some known facts that will be needed in the rest of the paper. In Section 3, we prove Theorems 1.1(a)–(d) and Theorem 1.3. In Section 4, we prove Theorem 1.2. Only the proof of Theorem 1.3 requires us to cite results that depend on the classification of finite simple groups. We do use Brauer’s k(GV )-theorem in the proof of Theorem 1.1(c), but there the group G is solvable, and the proof in that case does not rely on the classification. All groups considered in this paper are assumed to be finite. 2. Preliminaries. As noted in the introduction, the invariants qh(G) can be viewed as generalizations of the commuting probability d(G) = k(G)/|G| first considered by Gustafson [14]. Our proofs are based on the extensive literature on this invariant. Our first lemma collects some of the facts that are already known. Lemma 2.1. Let G be a finite group. (a) We have q1(G) = d(G) and qh(G) ≥qh+1(G) for all h ≥0. (b) If G is perfect and d(G) > 1/20, then G ∼= A5 or G ∼= SL2(5). (c) If p is a prime such that a Sylow p-subgroup of G is not normal, then d(G) ≤1/p. (d) If G is a nonabelian p-group for some prime p, then d(G) ≤1/p + 1/p2. (e) If G is nilpotent with a nonabelian Sylow p-subgroup for some prime p, then d(G) < 1/p or |G′| = p. INVARIANTS OF FINITE GROUPS ARISING IN TQFT 3 Proof. Part (a) follows immediately from the definition using |Irr(G)| = k(G) and χ(1) ≥1 for all χ ∈Irr(G). Part (b) is [5, Proposition 1.6]. Part (c) is [13, Lemma 2(x)]. Part (d) is [13, Lemma 2(viii)]. Part (e) is [13, Lemma 2(ix)]. □ Our next lemma shows that qh(G) is monotone with respect to subgroups H ≤G for all h ≥1. Lemma 2.2. Let G be a finite group, and let h be any positive integer. If H ≤G, then qh(H) ≥qh(G). Proof. We have qh(G) = 1 |G| X χ∈Irr(G) 1 χ(1)2h−2 ≤ 1 |G| X ψ∈Irr(H) X χ∈Irr(G|ψ) 1 χ(1)2h−2 ≤ 1 |G| X ψ∈Irr(H) [G : H] ψ(1)2h−2 = 1 |H| X ψ∈Irr(H) 1 ψ(1)2h−2 = qh(H). The first inequality follows because we have enumerated all the χ ∈Irr(G) (possibly multiple times). The characters in Irr(G|ψ) are the constituents of ψG by Frobenius reciprocity, and ψG(1) = [G : H]ψ(1). The sum P χ∈Irr(G|ψ) 1/χ(1)2h−2 is as large as possible when all the degrees are as small as possible and |Irr(G|ψ)| is as large as possible. Since ψ lies under χ, χ(1) ≥ψ(1). If equality holds for all χ ∈Irr(G|ψ), then χ(1) is as small as possible and |Irr(G|ψ)| = [G : H] is as large as possible, which yields the second inequality. □ We conclude this section with some remarks. Remark 2.3. The content of Theorems 1.1(a)–(d) is that the group properties of commuta- tivity, nilpotency, supersolvability and solvability are witnessed by the value of qh(G) for a particular “minimal” group G for all h ≥1. For example, Theorem 1.1(d) says that solvability is witnessed by A5 for all values of the genus h. Of course, our theorems would have trivial content if the invariants qh preserved the ordering of finite groups for all h; that is, if for all finite groups G and H it were true that qh(G) > qh(H) for some h > 1 implied that d(G) > d(H), then all our theorems would follow trivially from the corresponding result for d(G). But this is not the case. For example, if G = S6 and H = PGL2(9), then qh(G) > qh(H) for all h > 1, but d(G) = d(H). Our next example illustrates a general phenomenon: Notice that for a finite group G, the invariant qh(G) approaches 1/|G′| as h approaches infinity. Therefore, if d(G) > d(H) for some finite groups G and H, but 1/|G′| < 1/|H′|, then there must exist some positive integer N such that qh(G) < qh(H) for all h > N. For example, let G = A5, and let H be an extraspecial p-group of order p3 for the prime p = 59. Then d(G) = 1/12 and 1/|G′| = 1/60, while d(H) = (p2 + p −1)/p3 and 1/|H′| = 1/p. Then d(G) > d(H), while 1/|G′| < 1/|H′|, and the preceding observation applies. In fact, a calculation shows that qh(G) > qh(H) for h = 0, 1, 2, 3, while qh(G) < qh(H) for h ≥4. Remark 2.4. By examining our proofs, it is easy to see that our results hold not only when the exponent in qh(G) is the negative of the Euler characteristic 2−2h of a genus-h surface, but for any real number s ≥1. In this way, our results are related to the so-called “zeta function” ζG(s) = P χ∈Irr(G) χ(1)−s of a finite group G, which was used by Liebeck and Shalev to prove various results on Fuchsian groups, coverings of Riemann surfaces, subgroup growth, random walks and representation varieties [21, 22, 23]. Since our motivation stems from topological quantum field theories, we have stated and proved our results in this special case. Remark 2.5. It is known that d(G) ≤d(N)d(G/N) when N ⊴G [13, Lemma 2(ii)]. Our proofs would be simplified if qh(G) ≤qh(N)qh(G/N) for all h. We have not been able to prove this statement, however. If it were true, then it must of course be true for N = Z(G). But this already seems to be a hard question about projective character degrees: Letting Z = Z(G), qh(G) = 1 |G/Z||Z| X λ∈Irr(Z) X χ∈Irr(G|λ) 1 χ(1)2h−2 . 4 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET So to prove that qh(G) ≤qh(G/Z), it would suffice to show, for example, that X χ∈Irr(G|λ) 1 χ(1)2h−2 ≤ X χ∈Irr(G/Z) 1 χ(1)2h−2 ; that is, we want to show the sum over projective character degrees of G/Z is less than the sum over ordinary character degrees. We know that they are fewer in number: |Irr(G|λ)| ≤k(G/Z) as a consequence of Gallagher’s result on “θ-good” conjugacy classes (see Theorem 1.19 and Corollary 1.23 in [18]). Remark 2.6. It is natural to consider an invariant eqh(G) dual to qh(G) by summing over the sizes of the conjugacy classes C ∈Cl(G) of a finite group G. We define the invariant eqh(G) = 1/|G| P C∈Cl(G) 1/|C|h−1. Note that |G| is the the sum of the squared character degrees, but |G| is the sum of the class sizes. Therefore, we define eqh(G) with an exponent h−1 to make the two invariants compatible. We study the structural properties witnessed by this invariant in a future work. 3. Commutativity, nilpotency, supersolvability and solvability criteria. In this section, we prove Theorems 1.1(a)–(d). We begin with the commutativity criterion of Theorem 1.1(a). Proof of Theorem 1.1(a). Suppose for contradiction that G is a counterexample to the claim with minimal order. The degrees of the complex irreducible characters of D8 are 1, 1, 1, 1 and 2. By a result due to Gustafson [14], d(G) ≤5/8 since G is nonabelian. Altogether, we have 1 2 < 1 2 1 + 1 22h  = qh(D8) < qh(G) ≤d(G) ≤5 8. We first claim that G is nilpotent; that is, all Sylow subgroups of G are normal in G. By Lemma 2.1(c), if a Sylow p-subgroup of G is not normal for some prime p, then d(G) ≤1/p. But then 1/2 < 1/p, which is impossible. Next, we claim that all Sylow subgroups of odd order are abelian. Note that if G = A × B, then qh(G) = qh(A)qh(B). Since G is nilpotent by the last paragraph, this means that qh(G) = Q qh(P), where the product runs over the set of Sylow subgroups of G. In partic- ular, qh(P) > 1/2 for every Sylow subgroup P. However, by Lemma 2.1(d), if P is nonabelian then d(P) < 1/p + 1/p2. Thus, 1/p + 1/p2 > 1/2. This inequality only holds if p = 2, so all Sylow subgroups of odd order are abelian. We may now assume that G is a nonabelian 2-group such that qh(G) > qh(D8). Now, by Lemma 2.1(e), either d(G) < 1/2 or |G′| = 2. We have d(G) > 1/2 by the first paragraph, so |G′| = 2. Therefore, qh(D8) < qh(G) = [G : G′] |G| + 1 |G| X χ nonlinear 1 χ(1)2h−2 ≤1 2 + k(G) −[G : G′] |G| 22h−2 , where we used that χ(1) ≥2 for all nonlinear irreducible characters of G. Upon simplifying and using that d(G) = k(G)/|G| by definition, we have d(G) > 1 2 + 22h−2 qh(D8) −1 2  = 1 2 + 22h−2 1 22h+1  = 1 2 + 1 8 = 5 8. But this contradicts the fact that d(G) ≤5/8 from the first paragraph, so the theorem is proved. □ Next, we prove the nilpotency criterion of Theorem 1.1(b). We begin with an easy lemma due to Lescot [20], whose proof we include for completeness. Lemma 3.1. If d(G) > 1/4, then |G′| ≤3/(4d(G) −1). INVARIANTS OF FINITE GROUPS ARISING IN TQFT 5 Proof. Since |G| = P χ∈Irr(G) χ(1)2 and the number of linear characters is [G : G′], we have |G| −[G : G′] = X χ(1)>1 χ(1)2 ≥4(k(G) −[G : G′]). Dividing both sides by |G| and rearranging yields the claim. □ Proof of Theorem 1.1(b). Assume for a contradiction that G is a counterexample to the theorem of smallest possible order. The irreducible character degrees of S3 are 1, 1 and 2. Since G is not nilpotent, it contains a non-normal Sylow p-subgroup for some prime p dividing |G|. This means d(G) ≤1/p ≤1/2 by Lemma 2.1(c). Altogether, we have 1 3 < 1 3 1 + 1 22h−1  = qh(S3) < qh(G) ≤d(G) ≤1 p ≤1 2. In particular, this means that the only non-normal Sylow subgroup is the Sylow 2-subgroup. Now, by the monotonicity of Lemma 2.2, every proper subgroup of G is nilpotent, so G is a so-called minimal non-nilpotent group by induction. It follows from [15, Theorem III.5.2] that G has the following structure: G ∼= Q⋊P, where Q is a Sylow q-subgroup for some prime q, while P is a cyclic Sylow p-subgroup for some prime p and Φ(P) ≤Z(G). We know that p = 2 by the last paragraph, and q ̸= 2 since G is not nilpotent. As P is cyclic, [P : Φ(P)] = 2. Now consider the structure of G′: As G is not nilpotent, of course G′ > 1. As d(G) > 1/3, we have |G′| < 9 by Lemma 3.1. Since G/Q ∼= P abelian implies that G′ ≤Q, this means q = 3, 5 or 7 and G′ is cyclic of prime order q. Since G′ ⊴Q, we have G′ ∩Z(Q) > 1, so G′ ≤Z(Q) as G′ has prime order. So Q centralizes G′. If G′ < Q, then G′P < G is nilpotent by induction on the group order, so P centralizes G′, as well. But then G′ ≤Z(G) and G is nilpotent, a contradiction. So we have that G′ = Q is cyclic of order 3, 5 or 7. As Φ(P) ≤Z(G), the product QΦ(P) is abelian of index 2 in G. So every nonlinear irreducible character of G has degree 2 since it is a constituent of a character induced from a (necessarily linear) character of QΦ(P). As G has [G : G′] = |P| linear characters, the remaining nonlinear characters must be (|Q| −1)|P|/4 in number since the resulting sum of squared degrees is |P| + 1 4(|Q| −1)|P| · 22 = |G|. Therefore, qh(G) = 1 |G|  |P| + (|Q| −1)|P| 4 1 22h−2  = 1 q 1 + q −1 22h  > 1 3 1 + 2 22h  = qh(S3). But the inequality implies that q < 3. We have reached our final contradiction and proved the claim. □ Now we prove the supersolvability criterion of Theorem 1.1(c). Proof of Theorem 1.1(c). Let G be a counterexample to the theorem with minimal order. The irreducible character degrees of A4 are 1, 1, 1 and 3. Since G is not supersolvable, d(G) ≤1/3 by [2]. Altogether, we have 1 4 < 1 4(1 + 1 32h−1 ) = qh(A4) < qh(G) ≤d(G) ≤1 3. Then by Lemma 2.2, for every proper subgroup H of G, we have qh(H) ≥qh(G) > qh(A4) and hence by the minimality of |G|, H is supersolvable. As G is not supersolvable, G is a minimal non-supersolvable group. The structure of such groups was determined by Doerk [10], and can be read off from [3, Theorem 12] and [15, Chapter VI, Exercise 16]: G has exactly one normal Sylow p-subgroup P and a complement Q such that |Q| is divisible by at most two distinct primes. Furthermore, P/Φ(P) is a minimal normal subgroup of G/Φ(P) and is not cyclic, and Q ∩CG(P/Φ(P)) = Q ∩Φ(G) = Φ(Q) ∩Φ(G). Now we work in G = G/Φ(G), which is again minimal non-supersolvable, since if G is supersolvable then G is supersolvable by [15, Theorem VI.8.5]. Recall that Φ(P) ≤Φ(G) by [15, Lemma III.3.3]. Therefore, P is a minimal normal, elementary abelian, noncyclic subgroup of G, where G = PQ and Q acts coprimely, 6 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET faithfully and irreducibly on P. By Brauer’s k(GV )-theorem ([12, Theorem]), we know that k(G) = k(P ⋊Q) ≤|P|. Now we have 1 4 < d(G) ≤d(G) ≤ 1 |Q|, and so |Q| ≤3. Since Q has prime order and P is an irreducible Q-module, this means CG(x) ≤P for all nontrivial x ∈P, so G is a Frobenius group. Now, if Q = ⟨t⟩has order 2, then t acts as inversion on P (see [16, Lemma 7.21] and its proof). So any subgroup ⟨x⟩≤P is fixed by Q, which means P = ⟨x⟩is cyclic as it is an irreducible Q-module. This contradiction implies that |Q| = 3. Now we count the characters of the Frobenius group G: There are [G : G ′] = |Q| linear characters, and each of the nonlinear irreducible characters of G is induced from an irreducible character of P lying in one of the (|P| −1)/|Q| orbits under Q. So 1 4 < d(G) = 1 |P||Q|  |Q| + |P| −1 |Q|  . Rearranging and using that |Q| = 3, we find that p2 ≤|P| < 32/5 < 7. Therefore, |P| = 4 and so G ∼= A4. In particular, P is a 2-group. Let r be a prime divisor of |G|, and let R be a Sylow r-subgroup of G. If R is not normal in G, then d(G) ≤d(R) ≤1 r. Since d(G) ≥qh(G) > qh(A4) > 1 4 we deduce that 1 r > 1 4 which forces r < 4 and hence r ∈{2, 3}. In particular, Q is a π-group, where π ⊆{2, 3}. Since we showed that the Sylow p-subgroup P is a 2-group, Q must then be a 3-group. Now, since Φ(Q) ≥Φ(Q)∩Φ(G) = Q∩CG(P/Φ(P)) and Q ∼= Q/(Φ(Q)∩Φ(G)) is cyclic of order 3, we have Φ(Q) = Φ(Q) ∩Φ(G). So Q is itself cyclic by Burnside’s basis theorem [15, Theorem III.3.15]. We have shown that G = PQ, where P is a 2-group and Q is a cyclic 3-group, and G/Φ(G) ∼= A4. Now we turn to the more precise classification of minimal non-supersolvable groups in [3, Theorem 10]. Since Q is cyclic of 3-power order and p = 2, inspecting the list of possibilities in [3, Theorem 9] shows that only Types 2 and 3 are possible. We inspect each of these in turn and derive contradictions: Type 2: Q = ⟨z⟩is cyclic, P is an irreducible Q-module over the field of p elements with kernel ⟨zq⟩in Q. Since P is irreducible, Φ(P) = 1 and P ∼= V4 is abelian. The kernel K = ⟨z3⟩is central in G and has index 3 in Q, so |Q| = 3|K| and |G| = 12|K|. We have [G : G′] = |Q| = 3|K|, and the nonlinear irreducible characters of G are induced from characters ψ1 × ψ2 ∈Irr(P × K) with ψ1 ̸= 1, which split into |K| orbits under G. Thus, each of these |K| nonlinear characters of G has degree 3. So qh(G) = 1 12|K|  3|K| + |K| 32h−2  = 1 4  1 + 1 32h−1  = qh(A4), which is a contradiction. Note that if K = 1, then G ∼= A4. Type 3: P is a nonabelian special p-group of rank 2m, the order of p modulo q being 2m, Q = ⟨z⟩is cyclic, z induces an automorphism on P such that P/Φ(P) is a faithful and irreducible Q-module, and z centralizes Φ(P). Furthermore, |P/Φ(P)| = p2m and |P ′| ≤pm. Since |P| = 4, we have m = 1. So |P ′| ≤2, which means P is an extraspecial 2-group of order 23. Since P/Φ(P) is a faithful Q-module, Q is cyclic of order 3. If P is an extraspecial 2-group of order 23, then P has 4 linear characters and 1 nonlinear irreducible character of degree 2. The nontrivial linear characters form one Q-orbit and induce irreducibly to the same nonlinear character of G of degree 3. The unique nonlinear character of P is necessarily G- invariant, and so extends to G since [G : P] = 3 is prime (see [16, Corollary 6.19]). This gives rise to |Irr(G/P)| = 3 distinct irreducible characters of G of degree 2 by [16, Corollary 6.17]. There are [G : G′] = [G : P] = 3 linear characters. So qh(G) = 1 24  3 + 3 22h−2 + 1 32h−2  ≤1 24  6 + 1 32h−2  = 1 4  1 + 1 2 · 32h−1  < qh(A4), INVARIANTS OF FINITE GROUPS ARISING IN TQFT 7 a contradiction. This is our final contradiction and the theorem is proved. □ Now we prove the solvability criterion of Theorem 1.1(d). Proof of Theorem 1.1(d). If h = 1, then q1(G) = d(G) > d(A5) = 1/12, and G is already known to be solvable by a result of Dixon [9]. So, now assume that h ≥2, and suppose that G is a counterexample to the claim with minimal order. By Lemma 2.1(a), we have d(G) ≥qh(G) > qh(A5) > 1/|A5| = 1/60. We first note that qh(G′) ≥qh(G) by Lemma 2.2. So if G′ < G, then G′ is solvable by induction on the order of the group. But then G is also solvable, a contradiction. So G = G′ is perfect, which means qh(G) = 1 |G|  1 + X χ nonlinear 1 χ(1)2h−2  ≤ 1 |G|  1 + k(G) −1 22h−2  = 1 |G| k(G) + 22h−2 −1 22h−2  , where the inequality follows since χ(1) ≥2 for all nonlinear irreducible characters of G. Now note that we have the following inequalities when k(G) ≥5: k(G) + 22h−2 −1 22h−2 ≤2k(G) 5 for h = 2, k(G) + 22h−2 −1 22h−2 ≤k(G) 3 for h > 2. It is not difficult to see that if k(G) < 5, then |G| ≤12, and so G is solvable (see [4, Note A] or [24, Table 1]). Since G is nonsolvable, we must have k(G) ≥5. Therefore, we have d(G) = k(G) |G| ≥5 2q2(G) > 5 2q2(A5) = 5 2 4769 216000 > 1 20, d(G) = k(G) |G| ≥3qh(G) > 3qh(A5) > 3 60 = 1 20 for h > 2. Therefore, G is perfect with d(G) > 1/20 for all h ≥2. Then by Lemma 2.1(b), G ∼= A5 or G ∼= SL2(5). But qh(SL2(5)) = 1 120  1 + 2 22h−2 + 2 32h−2 + 2 42h−2 + 1 52h−2 + 1 62h−2  < 1 120  2 + 4 32h−2 + 2 42h−2 + 2 52h−2  = qh(A5). This is our final contradiction, and the theorem is proved. □ Next, we prove Theorem 1.3 in a manner very similar to the proof of Theorem 1.1(d). Although it may be true that qh,p′(G) is monotone with respect to subgroups, the proof of Lemma 2.2 will not work when the prime p divides |G| as Frobenius reciprocity does not hold in this case. However, we can again show that a minimal counterexample to the theorem must be perfect by using some basic results from the theory of Brauer characters. Before we begin, recall that we defined α(h, p) = (22h−2 + √p −1)/(22h−2√p −1). Proof of Theorem 1.3. Suppose for contradiction that G is a counterexample to the claim with minimal order. We first show that if N ≤G is a normal subgroup with index equal to any prime r, then qh,p′(G) ≤qh,p′(N). This will then imply that G is perfect, since if not, then there exists such an N ⊴G with α(h, p)/(p −1) < qh,p′(G) ≤qh,p′(N), so N is p-solvable by induction on the order of the group. But then G is also p-solvable since the quotient G/N is cyclic, which is a contradiction. 8 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Now, each character φ ∈IBr(G) lies above at least one character θ ∈IBr(N). Denoting the collection of irreducible p-Brauer characters of G lying over θ ∈IBr(N) by IBr(G|θ), we have qh,p′(G) ≤ 1 |G|p′ X θ∈IBr(N) X φ∈IBr(G|θ) 1 φ(1)2h−2 . Since N ⊴G has prime index r, there are two possibilities for each θ ∈IBr(N): (1) θ is G-invariant, in which case θ extends to an irreducible Brauer character ˆθ of G by [26, Theorem 8.12] since G/N is cyclic. In addition, the characters βˆθ for β ∈IBr(G/N) are all the irreducible constituents of θG by [26, Corollary 8.20]. Note that β(1)ˆθ(1) = θ(1) since β is linear. Finally, since N ⊴G, φ ∈IBr(G) is an irreducible constituent of θG if and only if θ is an irre- ducible constituent of φN by [26, Corollary 8.7]. Therefore, |IBr(G|θ)| = |IBr(G/N)| = [G : N]p′. (2) The stabilizer of θ in G is N, in which case the induced character φ = θG is irreducible by Clifford’s theorem [26, Theorem 8.9]. So |IBr(G|θ)| = 1, again by [26, Corollary 8.7], and φ(1) > θ(1). Therefore, for each θ ∈IBr(N) we have X φ∈IBr(G|θ) 1 φ(1)2h−2 ≤[G : N]p′ θ(1)2h−2 , and so qh,p′(G) ≤[G : N]p′ |G|p′ X θ∈IBr(N) 1 θ(1)2h−2 = qh,p′(N), which is what we wanted to show. Since we now know that the counterexample G is perfect, all nontrivial irreducible Brauer characters are nonlinear, and we have qh,p′(G) ≤ 1 |G|p′  1 + kp′(G) −1 22h−2  = 1 |G|p′ kp′(G) + 22h−2 −1 22h−2  , where the inequality follows since φ(1) ≥2 for all nonlinear φ ∈IBr(G). Now, since G is non-p-solvable, it is known that kp′(G) > √p −1 by [28, Theorem 1.2]. Furthermore, whenever kp′(G) > √p −1, we have the following inequality: kp′(G) + 22h−2 −1 22h−2 ≤α(h, p)kp′(G). This implies that dp′(G) = kp′(G) |G|p′ ≥ 1 α(h, p)qh,p′(G) > 1 p −1. But this contradicts [30, Theorem 1.1], and the theorem is proved. □ 4. Criterion for the existence of normal Sylow subgroups In this section, we prove Theorem 1.2, which is a criterion for the existence of a normal Sylow subgroup. We first obtain the following result, which slightly improves Lemma 2(x) in [13] for nonabelian simple groups. Lemma 4.1. Let G be a finite nonabelian simple group, and let p be a prime dividing |G|. Then d(G) ≤1/(p + 1). Proof. Suppose by contradiction that d(G) > 1/(p + 1). Since G is a nonabelian simple group, by [9] we know that d(G) ≤1/12. Hence 1/(p + 1) < 1/12, which implies that p ≥13 as p is a prime. Let P be a Sylow p-subgroup of G. Suppose that [G : NG(P)] ≤p. Since [G : NG(P)] is coprime to p, we deduce that [G : NG(P)] ≤p −1. Since G is nonabelian simple, the core of NG(P) in G is trivial, so G embeds into Sp−1, the symmetric group of degree p −1. But then |G| is not divisible by p, which is a contradiction. Thus [G : NG(P)] ≥p + 1. By Burnside’s INVARIANTS OF FINITE GROUPS ARISING IN TQFT 9 normal p-complement theorem ([15, Theorem IV.2.6]), we have NG(P) ̸= CG(P) and hence |NG(P)| ≥2p. Combining these two inequalities, we obtain |G| = [G : NG(P)] · |NG(P)| ≥2p(p + 1). Let d be the minimal degree of a nontrivial complex irreducible character of G. By [11, Theo- rem 1], we have d ≥(p −1)/2. By [13, Lemma 2(vi)], noting that G′ = G, we have d(G) < 1 d2 + (1 −1 d2 ) 1 |G′| ≤ 4 (p −1)2 + 1 2p(p + 1). Since d(G) > 1/(p + 1), it follows that 1 p + 1 < 4 (p −1)2 + 1 2p(p + 1). Simplifying this inequality gives 2p3 −13p2 −4p < 1, or equivalently 2p3 −13p2 −4p ≤0. This can be rewritten as (2p −13)p ≤4 which is impossible for p ≥13. Hence, the assumption d(G) > 1/(p + 1) leads to a contradiction, and the lemma follows. □ Let p be a prime, and let h be a positive integer. Recall that for a finite group G and a prime p, G is said to be p-closed if G has a normal Sylow p-subgroup. Define γ(h, p) = 1 p + 1(1 + 1 p2h−1 ). Note that γ(h, p) > 1/(p + 1). and γ(h, p) = β(h, p)/(p+1) from the statement of Theorem 1.2. We now prove Theorem 1.2. Proof of Theorem 1.2. Let G be a counterexample to the theorem with minimal order. Then qh(G) > γ(h, p) but G does not have a normal Sylow p-subgroup. By Lemma 2.1(c), we have d(G) ≤1/p. Consequently, we obtain the following inequalities: 1 p ≥d(G) ≥qh(G) > γ(h, p) > 1 p + 1. By Lemma 2.2, for each proper subgroup H of G, we have qh(H) ≥qh(G) > γ(h, p) and thus by the minimality of |G|, H is p-closed. In particular, every proper subgroup of G is p-closed. Also, if N is a normal subgroup of G, then every proper subgroup of the quotient group G/N is also p-closed. Assume that p = 2. Then γ(h, p) = qh(S3) and since qh(G) > qh(S3), G is nilpotent by Theorem 1.1(b). But then G has a normal Sylow 2-subgroup, a contradiction. Therefore, we may assume that p ≥3. Let P be a Sylow p-subgroup of G. Claim 1: ⟨P G⟩= G. Hence, G has no proper normal subgroup of p′-index. Let L = ⟨P G⟩. Then P ≤L  G. Suppose that L < G. By the minimality of |G|, L has a normal Sylow p-subgroup, which implies that P  L. Since P is characteristic in L, we deduce that P  G, which is a contradiction. Therefore, G = L as desired. Claim 2: G is not a nonabelian simple group. Since d(G) > 1/(p + 1), the result follows from Lemma 4.1. Claim 3: Let N be a maximal normal subgroup of G. Then G = ⟨x⟩N with [G : N] = p, where N has a normal Sylow p-subgroup, which is Op(G), and x ∈P is a p-element and P = ⟨x⟩Op(G). In particular, G is p-solvable with a Hall p′-subgroup R and G = PR with [P : Op(G)] = p. Since G is not a simple group, G has a maximal normal subgroup, say N. It follows that N has a normal Sylow p-subgroup, say P1. As P1  N  G, we deduce that P1  G and thus P1 ≤Op(G). Since G/N is a simple group and d(G/N) ≥d(G) > 1/(p+1) and p divides |G/N| by Claim 1, Lemma 4.1 yields that G/N is abelian. Hence G/N is cyclic of order p. It follows that G = PN and P ∩N = P1  G. As [G : N] = [P : P1] = p and P is not normal in G, we deduce that P1 = Op(G). Let x ∈P \ P1. Then P = ⟨x⟩P1 = ⟨x⟩Op(G). Since N/Op(G) is a p′-group and both Op(G) and G/N are p-groups, G is p-solvable. So G has a Hall p′-subgroup R and thus N = ROp(G) and G = PN = PR with [G : N] = [P : Op(G)] = p. 10 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Claim 4: R is a Sylow r-group of G for some prime r. Let G = G/Op(G). Then P ∼= Cp acts coprimely and nontrivially on N = R ∼= R. Suppose by contradiction that |R| = |R| is divisible by at least two distinct primes. Let q be a prime divisor of |R|. By a coprime action theorem ([17, Theorem 3.23]), P stabilizes a Sylow q-subgroup Q of R. But then PQ is a proper subgroup of G, and hence it has a normal Sylow p-subgroup. Thus, P centralizes Q. It follows that |G : CG(P)| is not divisible by q. Clearly, this index is also not divisible by p. As this is true for all prime divisors of |R|, we must have G = CG(P), which is a contradiction. Thus R is an r-group for some prime r, and so it is a Sylow r-subgroup of G. Claim 5: Let n be the smallest nontrivial character degree of G. Then n ≥(p −1)/2. Note that G is nonabelian and so such a minimal nontrivial degree n exists. Let χ ∈Irr(G) such that χ(1) = n and let K be the kernel of χ. Then K is a proper normal subgroup of G. We consider the following cases. Case 1: G/K has a normal Sylow p-subgroup. Then PK/K  G/K as PK/K is a Sylow p-subgroup of G/K. By Claim 1, we have G = PK and thus G/K ∼= P/P ∩K, which is a p-group. Since χ ∈Irr(G/K) is nonlinear, G/K is a nonabelian p-group and thus n = χ(1) ≥ p > (p −1)/2. Case 2: G/K does not have a normal Sylow p-subgroup. It follows that G/K has a faith- ful irreducible character of degree n and does not have a normal Sylow p-subgroup. By [11, Theorem 1], p ≤2n + 1 or n ≥(p −1)/2. Claim 6: G′ = RU, where U = G′ ∩Op(G)  G. Note that G is a {p, r}-group by Claim 4, so it is solvable and G′ ≤N  G. Since ⟨P G⟩= G, we deduce that G/G′ is an abelian p-group. Thus G′ contain a Sylow r-group of G. It follows that R ≤G′. By Dedekind’s modular law, G′ = R(G′ ∩Op(G)), where G′ ∩Op(G) is a normal Sylow p-subgroup of G′. Claim 7: The quotient G/R′Op(G) is a Frobenius group with a cyclic complement of order p and Frobenius kernel an abelian 2-group of order 2m for some m ≥2, and p = 2m −1. Note that N/Op(G) = ROp(G)/Op(G)  G/Op(G) and thus (N/Op(G))′ = R′Op(G)/Op(G). Hence, R′Op(G)G. Let G = G/R′Op(G). Then Op(G) = 1, RG, G ′ = R and G = ⟨x⟩R with |x| = p. Moreover, R is an abelian r-group and |G ′| = |R| = rm for some positive integer m. It follows that p is the only nontrivial character degree of G. It follows that 1 |G ′| ≤d(G) ≤1 p2 + (1 −1 p2 ) 1 |G ′| . Since d(G) > 1/(p + 1), we deduce that rm < (p + 1)(p2 −1) p2 −p −1 . Since p ≥3, we can check that (p + 1)(p2 −1) p2 −p −1 ≤2p + 1. Hence rm ≤2p. As r ̸= p, we have rm ≤2p −1. Moreover, as 1 |G ′| ≤d(G) ≤1 p, we deduce that rm ≥p. Thus (1) p ≤rm ≤2p −1. By Fitting’s theorem [17, Theorem 4.34], R = [R, P] × CR(P). If CR(P) is nontrivial, then [R, P]P is a proper subgroup of G which implies that it is p-closed and hence P centralizes [R, P], which is impossible as it would imply that P centralizes R and thus G has a normal Sylow p-subgroup. Therefore CR(P) = 1. It follows that G is a Frobenius group with kernel INVARIANTS OF FINITE GROUPS ARISING IN TQFT 11 R and a cyclic complement P. This implies that rm −1 = kp for some positive integer k. Combining with Equation (1), we get rm −1 = p. As p ≥3, p + 1 = rm must be even. Hence r = 2 and 2m −1 = p. In particular, m ≥2 is a prime and p is a Mersenne prime. For brevity, we define Γ(p, m) := Cp ⋉Cm 2 to be a Frobenius group with kernel an elementary abelian 2-group of order 2m and cyclic complement of order p with 2m −1 = p. Claim 8: The quotient G/Op(G) ∼= Γ(p, m) is a Frobenius group with a cyclic complement of order p and a Frobenius kernel an elementary abelian 2-group of order 2m with p = 2m −1 ≥7 and m ≥3 an odd prime. Assume that m = 2. Then p = 3 and γ(h, 3) = 1 4(1 + 1 32h−1 ) = qh(A4). Hence qh(G) > qh(A4) and so by Theorem 1.1 (c), G is supersolvable. By [15, Theorem VI.9.1(c)], as G is a {2, 3}-group, G has a normal Sylow 3-subgroup, which is a contradiction. Therefore, we may assume from now on that m ≥3 and p = 2m −1 ≥7. Working in the quotient group G/Op(G), we may assume Op(G) = 1. By Claim 6, G′ = R and so G′′ = R′ with [R : R′] = 2m by Claim 7. Recall that n is the smallest nontrivial character degree of G. Assume first that n ≥p. By [13, Lemma 2(vi)], we have 1 p + 1 < d(G) ≤1 n2 + (1 −1 n2 ) 1 |G′| ≤1 p2 + (1 −1 p2 ) 1 |G′|. As above, we deduce that p ≤|G′| ≤2p −1. Write |G′| = |R| = 2m+c for some integer c ≥0. Recall that 2m = 1 + p so p ≤|G′| = (1 + p)2c = 2cp + 2c ≤2p −1. This implies that 2c = 1, and hence G′ = R is abelian. Now assume that n < p. As |G| = 2m+cp and n divides |G| and is coprime to p, we deduce that n = 2a for some positive integer a ≥1. Note that (p −1)/2 ≤2a by Claim 5 and thus p −1 2 ≤2a < p = 2m −1. It follows that 1 2(2m −2) ≤2a ≤p −1 = 2m −2. Therefore, 2m−1 −1 = 1 2(2m −2) ≤2a ≤2m −2. We deduce that n = 2a = 2m−1. By [13, Lemma 2 (vi)], we have 1 p + 1 = 1 2m < d(G) ≤ 1 22(m−1) + (1 − 1 22(m−1) ) 1 |G′|. Since m ≥3, simplifying the previous inequality, we obtain |G′| < 2m + 2m −1 2m−2 −1. Now, the second summand on the right hand side of the inequality is at most 7 and thus 2m+c = |G′| < 2m + 7. However, this is impossible if c ≥1. We showed in both cases that G′ = R is abelian. If R is not elementary abelian, then the subgroup S generated by all elements of R of order 2 is a proper subgroup of R stabilized by P, so SP < G has a normal Sylow p-subgroup. Then P ⊴SP fixes every element of order 2 in R, and so P acts trivially on R by [17, Corollary 4.35]. But then P ⊴G, a contradiction. Thus, R is elementary abelian and G ∼= Γ(p, m), as claimed. Claim 9: The final contradiction. 12 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Assume Op(G) = 1. Then G ∼= Γ(p, m) and so [G : G′] = p and p is the unique nontrivial character degree of G. Hence qh(G) = γ(h, p), which is a contradiction. Assume that Op(G) > 1. Note that G′ = RU and U := G′ ∩Op(G) is a normal Sylow r-subgroup of G′. (i) Assume that p ∤|G′|. So G′ = R is a normal Sylow r-subgroup of G. Now P = ⟨x⟩Op(G) is abelian. In this case, G′⟨x⟩ G. If G′⟨x⟩< G, then x centralizes G′, which is impossible. Thus G = G′⟨x⟩. In particular, P = ⟨x⟩and y = xp centralizes G′ = R. By Claim 8, G/Op(G) ∼= Γ(p, m) is a Frobenius group with G′ = R an elementary abelian subgroup of order 2m = p + 1. Thus F(G) = G′ × ⟨y⟩= G′ × Op(G) is abelian and in fact Op(G) = ⟨y⟩is central in G. It follows that p is the only nontrivial character degree of G. Since [G : G′] = |P|, we have qh(G) = 1 |G|(|P| + k(G) −|P| p2h−2 ) and |G| = [G : G′] + p2(k(G) −|P|). Solving for k(G)−|P| from the second equation and plug it in the former, we get qh(G) = γ(h, p), a contradiction. (ii) Assume p | |G′|. So |G′| = |R|pe for some positive integer e. Let n be the minimal nontrivial degree of G. By [13, Lemma 2 (vi)], we have (2) 1 p + 1 < 1 n2 + (1 −1 n2 ) 1 |G′|. Assume first that n ≥p. Then Equation (2) implies that |G′| < (p + 1)(p2 −1) p2 −p −1 . As in the proof of Claim 8, we deduce that |G′| ≤2p −1. However, since |G′| = |R|pe with e ≥1 and |R| = 2m = 1 + p ≥8, we see that |G′| > 2p −1, which is a contradiction. Assume that n < p. By Claim 5, we have n ≥(p −1)/2. Thus (p −1)/2 ≤n ≤p −1. Let χ ∈Irr(G) with χ(1) = n. Recall that 2m = p + 1 and G/Op(G) ∼= Γ(p, m). It follows that χ(1) divides |R| where |R| = 2m since p ∤χ(1) and χ(1) | |G|. So n = 2a for some positive integer a. Since (p −1)/2 = (2m −2)/2 = 2m−1 −1 < 2a ≤p −1 = 2m −2, we deduce that n = 2m−1. Employing the same argument as in the proof of Claim 8, we obtain a contradiction as m ≥3. The proof of the theorem is now complete. □ 5. Introduction to Dijkgraaf–Witten theory. For mathematicians, a topological quantum field theory (TQFT) is a machine for generating topological invariants of manifolds. For example, Witten [31] showed that the Jones polynomial is a topological invariant of manifolds in a certain TQFT. Atiyah [1] axiomatized topological quantum field theories in the language of category theory as a functor from a particular category of cobordisms to the category of vector spaces. The invariants that we consider arise in the context of Dijkgraaf–Witten theory [7, 8] in one space dimension and one time dimension. Such a TQFT determines a commutative Frobenius algebra that gives rise to topological invariants of surfaces. The new contribution of this paper is that, upon assuming this Frobenius algebra is the center of a complex group algebra, we instead consider these invariants as invariants of groups. It should be noted, however, that Dijkgraaf–Witten theory has already been used to study finite groups; for example, row and column sums in their character tables [19, 27, 29]. In this section, we derive these topological invariants in an elementary, explicit way, without assuming any previous knowledge of quantum field theory. For a similarly gentle introduction, we recommend Dijkgraaf’s 2015 lecture at the Institute for Advanced Study [6]. The state of a quantum system is described by a vector in a complex vector space V (possibly infinite-dimensional), and the evolution of quantum states is described by a linear map. Here, we work in one space dimension and one time dimension, so our quantum systems are restricted INVARIANTS OF FINITE GROUPS ARISING IN TQFT 13 Cobordism Map V = Z(CG) V ⊗V →V eχ · eχ′ = δχ,χ′eχ ⟨⟩: V ⊗V →C ⟨eχ, eχ′⟩= δχ,χ′ χ(1) |G| 2 C →V 1 7→e e = P χ∈Irr(G) eχ V →C eχ 7→ |G| χ(1) 2 V →V ⊗V eχ 7→ |G| χ(1) 2eχ ⊗eχ Table 1. Cobordisms giving rise to maps of vector spaces. to the surfaces of 1-manifolds, which are disjoint unions of circles. We also stipulate that V is finite-dimensional. If we assume that the evolution of quantum states depends only on the topology of the manifold, then a cobordism between manifolds (which represents this quantum evolution of states) gives rise to a map on vector spaces. For example, a topological cylinder is a cobordism between two circles, and it represents the identity map id : V →V since the evolution only depends on the topology of the manifold. As we will see, other cobordisms correspond to maps on vector spaces which give our vector space the structure of a commutative Frobenius algebra. Perhaps most importantly, the “pair of pants” corresponds to a multiplication of quantum states V ⊗V →V , which gives V the structure of an algebra. Due to the topological nature of the theory, this multiplication is commutative. The cobordisms giving rise to a multiplicative identity e ∈V , a comultiplication V →V ⊗V and a linear form ⟨⟩: V →C are pictured in Table 1. By considering appropriate cobordisms, it is not difficult to show that the multiplication is associative and the linear form satisfies the identity ⟨a, b · c⟩= ⟨a · b, c⟩of a Frobenius algebra. A genus-h surface can be decomposed into a composition of these cobordisms, giving rise to a map C →C. Since the map is linear, it is defined by the image of 1, and this complex number is a topological invariant of the surface. Now we assume that V = Z(CG) is the center of the complex group algebra of a finite group G. We compute the maps in terms of a basis for Z(CG) that is indexed by the irreducible complex characters χ ∈Irr(G) of G; namely, the primitive central idempotents eχ = χ(1) |G| X g∈G χ(g)g. 14 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET (a) (b) (c) · · · (d) Figure 1. Cobordisms for calculating (a) the “cup”, (b) the “cap” and (c) the comultiplication. Decomposing the genus h-surface in (d) into cobordisms calculated in Table 1 yields an invariant P χ∈Irr(G)(|G|/χ(1))2h−2 of the surface. In this basis, the multiplicative identity decomposes as e = P χ∈Irr(G) eχ, and the multiplication takes the form eχ · eχ′ = δχ,χ′eχ. We define the linear form to be ⟨eχ, eχ′⟩= δχ,χ′ χ(1) |G| 2 , which is the coefficient of the identity in the product eχ · eχ′ times 1/|G|. This is the so-called special Frobenius algebra structure on Z(CG). To show that the “cup” in the third row of Table 1 gives rise to a multiplicative identity of V , note that the cobordism in Fig. 1(a) is homeomorphic to the cylinder, which is the identity map. To show that the “cap” in the fourth row of Table 1 sends eχ to (χ(1)/|G|)2, note that the cap is homeomorphic to adding a cup to the cobordism for the linear form, as in Fig. 1(b). Finally, to show that the comultiplication is eχ 7→(|G|/χ(1))2eχ ⊗eχ in the last row of Table 1, note that capping off the comultiplication is homeomorphic to the cylinder, which represents the identity map, as in Fig. 1(c). Now, we compute the map C →C for a genus-h surface by composing the maps in Table 1: 1 7−→ X χ∈Irr(G) eχ 7−→ X χ∈Irr(G)  |G| χ(1) 2 eχ ⊗eχ 7−→ X χ∈Irr(G)  |G| χ(1) 2 eχ 7−→· · · 7−→ X χ∈Irr(G)  |G| χ(1) 2h eχ 7−→ X χ∈Irr(G)  |G| χ(1) 2h−2 = Qh(G). The number Qh(G) is the topological invariant of a genus-h surface in this TQFT. In this paper, we considered a more well-behaved invariant of groups by scaling Qh(G) by an appropriate factor of |G|, namely qh(G) = 1/|G| P χ∈Irr(G)(1/χ(1))2h−2. References [1] M. F. Atiyah, Topological quantum field theories, Inst. Hautes ´Etudes Sci. Publ. Math. No. 68 (1988), 175–186. [2] F. Barry, D. MacHale and ´A. N´ı Sh´e, Some supersolvability conditions for finite groups, Math. Proc. R. Ir. Acad. 106A (2006), no. 2, 163–177. INVARIANTS OF FINITE GROUPS ARISING IN TQFT 15 [3] A. Ballester-Bolinches and R. Esteban-Romero, On minimal non-supersoluble groups, Rev. Mat. Iberoam. 23 (2007), no. 1, 127–142. [4] W. Burnside, Theory of groups of finite order, Dover, New York, 1955. [5] Y. Choi, Small values and forbidden values for the Fourier anti-diagonal constant of a finite group, J. Aust. Math. Soc. 118 (2025), no. 3, 297–316. [6] R. H. Dijkgraaf, “PiTP 2015 – ‘Introduction to Topological and Conformal Field Theory (1 of 2)’ - Robbert Dijkgraaf”, uploaded by Institute for Advanced Study, 12 Aug. 2015, https://www.youtube.com/watch?v= jEEQO-tcyHc. [7] R. H. Dijkgraaf, Fields, strings and duality, in Sym´etries quantiques (Les Houches, 1995), 3–147, North- Holland, Amsterdam. [8] R. H. Dijkgraaf and E. Witten, Topological gauge theories and group cohomology, Comm. Math. Phys. 129 (1990), no. 2, 393–429. [9] J. D. Dixon, Solution to Problem 176, Canadian Mathematical Bulletin, 16 (1973), 302. [10] K. Doerk, Minimal nicht ¨uberaufl¨osbare endliche Gruppen, Math. Z. 91 (1966), 198–205. [11] W. Feit and J. G. Thompson, Groups which have a faithful representation of degree less than (p −1)/2, Pacific J. Math. 11 (1961), 1257–1262. [12] D. Gluck, K. Magaard, U. Riese, P. Schmid, The solution of the k(GV )-problem, J. Algebra 279 (2004), no. 2, 694–719. [13] R. M. Guralnick and G. R. Robinson, On the commuting probability in finite groups, J. Algebra 300 (2006), no. 2, 509–528. [14] W. H. Gustafson, What is the probability that two group elements commute?, Amer. Math. Monthly 80 (1973), 1031–1034. [15] B. Huppert, Finite Groups I, translated from the 1967 German edition by C. A. Schroeder, Grundlehren der mathematischen Wissenschaften, 364, Springer, Cham, 2025. [16] I. M. Isaacs, Character theory of finite groups, Dover, New York, 1994. [17] I. M. Isaacs, Finite group theory, Graduate Studies in Mathematics, 92, Amer. Math. Soc., Providence, RI, 2008. [18] I. M. Isaacs, Characters of solvable groups, Graduate Studies in Mathematics, 189, Amer. Math. Soc., Providence, RI, 2018. [19] R. de Mello Koch, Y.-H. He, G. Kemp and S. Ramgoolam, Integrality, duality and finiteness in combinatoric topological strings, J. High Energ. Phys. 2022, 71 (2022). [20] P. Lescot, Sur certains grupes finis, Recv. Math. Sp´eciales 8 (1987), 267–277. [21] M. W. Liebeck and A. Shalev, Fuchsian groups, coverings of Riemann surfaces, subgroup growth, random quotients and random walks, J. Algebra 276 (2004), no. 2, 552–601. [22] M. W. Liebeck and A. Shalev, Character degrees and random walks in finite groups of Lie type, Proc. London Math. Soc. (3) 90 (2005), no. 1, 61–86. [23] M. W. Liebeck and A. Shalev, Fuchsian groups, finite simple groups and representation varieties, Invent. Math. 159 (2005), no. 2, 317–367. [24] A. Vera L´opez and J. Vera L´opez, Classification of finite groups according to the number of conjugacy classes, Israel J. Math. 51 (1985), no. 4, 305–338. [25] V. T. Nagrebetski˘ı, Finite minimal non-supersolvable groups, in Finite groups (Proc. Gomel Sem., 1973/1974) (Russian), 104–108, 229, Izdat. “Nauka i Tehnika”, Minsk. [26] G. Navarro Ortega, Characters and blocks of finite groups, London Mathematical Society Lecture Note Series, 250, Cambridge Univ. Press, Cambridge, 1998. [27] A. Padellaro, R. Radhakrishnan and S. Ramgoolam, Row-column duality and combinatorial topological strings, J. Phys. A 57 (2024), no. 6, Paper No. 065202, 68 pp. [28] H. N. Nguyen and A. Mar´oti, p-regular conjugacy classes and p-rational irreducible characters, J. Algebra 607 (2022), 387–425. [29] S. Ramgoolam and E. Sharpe, Combinatoric topological string theories and group theory algorithms, J. High Energy Phys. 2022, no. 10, Paper No. 147, 56 pp. [30] C. A. Schroeder, Finite groups with many p-regular conjugacy classes, J. Algebra 641 (2024), 716–734. [31] E. Witten, Quantum field theory and the Jones polynomial, in Braid group, knot theory and statistical mechanics, 239–329, Adv. Ser. Math. Phys., 9, World Sci. Publ., Teaneck, NJ, 1989. Department of Mathematics and Statistics, Binghamton University, Binghamton, NY 13902- 6000, USA E-mail address: cschroe2@binghamton.edu Department of Mathematics and Statistics, Binghamton University, Binghamton, NY 13902- 6000, USA E-mail address: htongvie@binghamton.edu
ON THE INVARIANTS OF FINITE GROUPS ARISING IN A TOPOLOGICAL QUANTUM FIELD THEORY CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Abstract. In this paper, we consider the properties of finite groups that are witnessed by group invariants arising in the context of Dijkgraaf-Witten theory, a topological quantum field theory, as invariants of surfaces. These invariants can be considered generalizations of the commuting probability, an invariant that has been well studied in the group theory literature. 1. Introduction There is a long history of using invariants to deduce structural properties of finite groups. These invariants are often constructed by counting objects associated with the group, such as elements, conjugacy classes or irreducible characters. In this paper, we take a different approach. Given the remarkable compatibility between mathematics and physics, it is reasonable to expect that group invariants arising naturally in physics are useful for characterizing group structure. Here, we consider Dijkgraaf-Witten theory, a topological quantum field theory (TQFT), in one space dimension and one time dimension. In short, such a TQFT determines a commutative Frobenius algebra that gives rise to invariants of surfaces. If we assume that this algebra is the center Z(CG) of the complex group algebra of a finite group G, then the invariant of a genus-h surface takes the form Qh(G) = P χ∈Irr(G)(|G|/χ(1))2h-2, where Irr(G) is the collection of complex irreducible characters of G. Note that Q0(G) = 1/|G| and Q1(G) = k(G), where k(G) is the number of conjugacy classes of G (equivalently, the number |Irr(G)| of irreducible complex characters). If we consider the invariants Qh(G) as arising from a sequence of increasingly complicated experiments, we might imagine that we first measure Q0(G), then Q1(G), and so on. With this in mind, it is natural to consider the following scaled invariants, which, as we will show, are very well-behaved: qh(G) := 1 |G| X χ∈Irr(G) 1 χ(1) 2h-2 . Now note that q0(G) = 1 and q1(G) = k(G)/|G|, where the latter is the so-called commuting probability d(G), which was first considered by Gustafson [14] and later by many authors. Viewed in this way, the "quantum invariants" qh(G) are generalizations of the commuting probability, for which the following structure criteria are known: (a) If d(G) > d(D8) = 5/8, then G is abelian (Gustafson [14]). (b) If d(G) > d(S3) = 1/2, then G is nilpotent (Lescot [20]). (c) If d(G) > d(A4) = 1/3, then G is supersolvable (Barry, MacHale, N ́ı Sh ́e [2]). (d) If d(G) > d(A5) = 1/12, then G is solvable (Dixon [9]). In this paper, we consider the properties of finite groups that are witnessed by the invariants qh(G). In our main result, we prove broad generalizations of the structure criteria witnessed by d(G) = q1(G) to any value of the genus h. Theorem 1.1. Let G be a finite group, and let h be any positive integer. (a) If qh(G) > qh(D8), then G is abelian. (b) If qh(G) > qh(S3), then G is nilpotent. (c) If qh(G) > qh(A4), then G is supersolvable. (d) If qh(G) > qh(A5), then G is solvable. Date: October 17, 2025. 1 16 Oct 2025 2 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Guralnick and Robinson showed in [13, Lemma 2(x)] that if p is a prime and d(G) > 1/p, then G has a normal Sylow p-subgroup. Our next result generalizes this p-closed criterion to all values of the genus h. Theorem 1.2. Let G be a finite group, and let p be a prime. If qh(G) > β(h, p)/(p + 1), where β(h, p) = 1 + 1/p2h-1, then G has a normal Sylow p-subgroup. This criterion is the best possible. For example, if p = 2f -1 is a Mersenne prime, then the Frobenius group G = (C2)f ⋊Cp satisfies qh(G) = β(h, p)/(p + 1) and does not have a normal Sylow p-subgroup. Although Theorem 1.2 implies Theorem 1.1(b) and much of part (c), we will in fact use the latter to prove this p-closed criterion. Given a prime p, it is also natural to consider a p-local version of the invariants qh(G) by summing over irreducible p-Brauer characters instead of irreducible complex characters. We define qh,p′(G) = 1 |G|p′ X φ∈IBr(G) 1 φ(1)2h-2 . Note that q1,p′(G) = kp′(G)/|G|p′, where kp′(G) is the number of irreducible p-Brauer characters (equivalently, the number of p-regular conjugacy classes). This invariant is denoted by dp′(G) in the literature, and it was shown in [30, Theorem 1.1] that if p is an odd prime and dp′(G) > 1/(p -1), then G is p-solvable. This result is best possible for p > 3 since dp′(PSL2(p)) = 1/(p -1). Our next theorem extends this result to all values of the genus. Theorem 1.3. Let G be a finite group, let h be a positive integer, and let p be an odd prime. If qh,p′(G) > α(h, p)/(p-1), where α(h, p) = (22h-2+√p -1)/(22h-2√p -1), then G is p-solvable. Note that α(h, p) 1 and p > 2. Theorem 1.3 is not best possible for h = 1, in which case, again, the best bound is 1/(p -1) from [30]. We suspect, but have not proved, that the result is also not best possible for h > 1. Our proof relies on a lower bound for kp′(G) when G is not p-solvable. At the time of writing, the best known bound is kp′(G) > √p -1, which was proved by Hung and Mar ́oti in [28]. An improvement in this bound would lead directly to an improvement in Theorem 1.3. Our paper is structured as follows. As our methods are purely group-theoretic, we defer an introduction to Dijkgraaf-Witten theory to Section 5. In Section 2, we make some remarks and collect some known facts that will be needed in the rest of the paper. In Section 3, we prove Theorems 1.1(a)-(d) and Theorem 1.3. In Section 4, we prove Theorem 1.2. Only the proof of Theorem 1.3 requires us to cite results that depend on the classification of finite simple groups. We do use Brauer's k(GV )-theorem in the proof of Theorem 1.1(c), but there the group G is solvable, and the proof in that case does not rely on the classification. All groups considered in this paper are assumed to be finite. 2. Preliminaries. As noted in the introduction, the invariants qh(G) can be viewed as generalizations of the commuting probability d(G) = k(G)/|G| first considered by Gustafson [14]. Our proofs are based on the extensive literature on this invariant. Our first lemma collects some of the facts that are already known. Lemma 2.1. Let G be a finite group. (a) We have q1(G) = d(G) and qh(G) ≥qh+1(G) for all h ≥0. (b) If G is perfect and d(G) > 1/20, then G ∼= A5 or G ∼= SL2(5). (c) If p is a prime such that a Sylow p-subgroup of G is not normal, then d(G) ≤1/p. (d) If G is a nonabelian p-group for some prime p, then d(G) ≤1/p + 1/p2. (e) If G is nilpotent with a nonabelian Sylow p-subgroup for some prime p, then d(G) qh(H) for some h > 1 implied that d(G) > d(H), then all our theorems would follow trivially from the corresponding result for d(G). But this is not the case. For example, if G = S6 and H = PGL2(9), then qh(G) > qh(H) for all h > 1, but d(G) = d(H). Our next example illustrates a general phenomenon: Notice that for a finite group G, the invariant qh(G) approaches 1/|G′| as h approaches infinity. Therefore, if d(G) > d(H) for some finite groups G and H, but 1/|G′| N. For example, let G = A5, and let H be an extraspecial p-group of order p3 for the prime p = 59. Then d(G) = 1/12 and 1/|G′| = 1/60, while d(H) = (p2 + p -1)/p3 and 1/|H′| = 1/p. Then d(G) > d(H), while 1/|G′| qh(H) for h = 0, 1, 2, 3, while qh(G) 1/2 for every Sylow subgroup P. However, by Lemma 2.1(d), if P is nonabelian then d(P) 1/2. This inequality only holds if p = 2, so all Sylow subgroups of odd order are abelian. We may now assume that G is a nonabelian 2-group such that qh(G) > qh(D8). Now, by Lemma 2.1(e), either d(G) 1/2 by the first paragraph, so |G′| = 2. Therefore, qh(D8) 1 2 + 22h-2 qh(D8) -1 2 = 1 2 + 22h-2 1 22h+1 = 1 2 + 1 8 = 5 8. But this contradicts the fact that d(G) ≤5/8 from the first paragraph, so the theorem is proved. □ Next, we prove the nilpotency criterion of Theorem 1.1(b). We begin with an easy lemma due to Lescot [20], whose proof we include for completeness. Lemma 3.1. If d(G) > 1/4, then |G′| ≤3/(4d(G) -1). INVARIANTS OF FINITE GROUPS ARISING IN TQFT 5 Proof. Since |G| = P χ∈Irr(G) χ(1)2 and the number of linear characters is [G : G′], we have |G| -[G : G′] = X χ(1)>1 χ(1)2 ≥4(k(G) -[G : G′]). Dividing both sides by |G| and rearranging yields the claim. □ Proof of Theorem 1.1(b). Assume for a contradiction that G is a counterexample to the theorem of smallest possible order. The irreducible character degrees of S3 are 1, 1 and 2. Since G is not nilpotent, it contains a non-normal Sylow p-subgroup for some prime p dividing |G|. This means d(G) ≤1/p ≤1/2 by Lemma 2.1(c). Altogether, we have 1 3 1. As d(G) > 1/3, we have |G′| 1, so G′ ≤Z(Q) as G′ has prime order. So Q centralizes G′. If G′ 1 3 1 + 2 22h = qh(S3). But the inequality implies that q qh(A4) and hence by the minimality of |G|, H is supersolvable. As G is not supersolvable, G is a minimal non-supersolvable group. The structure of such groups was determined by Doerk [10], and can be read off from [3, Theorem 12] and [15, Chapter VI, Exercise 16]: G has exactly one normal Sylow p-subgroup P and a complement Q such that |Q| is divisible by at most two distinct primes. Furthermore, P/Φ(P) is a minimal normal subgroup of G/Φ(P) and is not cyclic, and Q ∩CG(P/Φ(P)) = Q ∩Φ(G) = Φ(Q) ∩Φ(G). Now we work in G = G/Φ(G), which is again minimal non-supersolvable, since if G is supersolvable then G is supersolvable by [15, Theorem VI.8.5]. Recall that Φ(P) ≤Φ(G) by [15, Lemma III.3.3]. Therefore, P is a minimal normal, elementary abelian, noncyclic subgroup of G, where G = PQ and Q acts coprimely, 6 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET faithfully and irreducibly on P. By Brauer's k(GV )-theorem ([12, Theorem]), we know that k(G) = k(P ⋊Q) ≤|P|. Now we have 1 4 qh(A4) > 1 4 we deduce that 1 r > 1 4 which forces r d(A5) = 1/12, and G is already known to be solvable by a result of Dixon [9]. So, now assume that h ≥2, and suppose that G is a counterexample to the claim with minimal order. By Lemma 2.1(a), we have d(G) ≥qh(G) > qh(A5) > 1/|A5| = 1/60. We first note that qh(G′) ≥qh(G) by Lemma 2.2. So if G′ 2. It is not difficult to see that if k(G) 5 2q2(A5) = 5 2 4769 216000 > 1 20, d(G) = k(G) |G| ≥3qh(G) > 3qh(A5) > 3 60 = 1 20 for h > 2. Therefore, G is perfect with d(G) > 1/20 for all h ≥2. Then by Lemma 2.1(b), G ∼= A5 or G ∼= SL2(5). But qh(SL2(5)) = 1 120 1 + 2 22h-2 + 2 32h-2 + 2 42h-2 + 1 52h-2 + 1 62h-2 θ(1). Therefore, for each θ ∈IBr(N) we have X φ∈IBr(G|θ) 1 φ(1)2h-2 ≤[G : N]p′ θ(1)2h-2 , and so qh,p′(G) ≤[G : N]p′ |G|p′ X θ∈IBr(N) 1 θ(1)2h-2 = qh,p′(N), which is what we wanted to show. Since we now know that the counterexample G is perfect, all nontrivial irreducible Brauer characters are nonlinear, and we have qh,p′(G) ≤ 1 |G|p′ 1 + kp′(G) -1 22h-2 = 1 |G|p′ kp′(G) + 22h-2 -1 22h-2 , where the inequality follows since φ(1) ≥2 for all nonlinear φ ∈IBr(G). Now, since G is non-p-solvable, it is known that kp′(G) > √p -1 by [28, Theorem 1.2]. Furthermore, whenever kp′(G) > √p -1, we have the following inequality: kp′(G) + 22h-2 -1 22h-2 ≤α(h, p)kp′(G). This implies that dp′(G) = kp′(G) |G|p′ ≥ 1 α(h, p)qh,p′(G) > 1 p -1. But this contradicts [30, Theorem 1.1], and the theorem is proved. □ 4. Criterion for the existence of normal Sylow subgroups In this section, we prove Theorem 1.2, which is a criterion for the existence of a normal Sylow subgroup. We first obtain the following result, which slightly improves Lemma 2(x) in [13] for nonabelian simple groups. Lemma 4.1. Let G be a finite nonabelian simple group, and let p be a prime dividing |G|. Then d(G) ≤1/(p + 1). Proof. Suppose by contradiction that d(G) > 1/(p + 1). Since G is a nonabelian simple group, by [9] we know that d(G) ≤1/12. Hence 1/(p + 1) 1/(p + 1), it follows that 1 p + 1 1/(p + 1) leads to a contradiction, and the lemma follows. □ Let p be a prime, and let h be a positive integer. Recall that for a finite group G and a prime p, G is said to be p-closed if G has a normal Sylow p-subgroup. Define γ(h, p) = 1 p + 1(1 + 1 p2h-1 ). Note that γ(h, p) > 1/(p + 1). and γ(h, p) = β(h, p)/(p+1) from the statement of Theorem 1.2. We now prove Theorem 1.2. Proof of Theorem 1.2. Let G be a counterexample to the theorem with minimal order. Then qh(G) > γ(h, p) but G does not have a normal Sylow p-subgroup. By Lemma 2.1(c), we have d(G) ≤1/p. Consequently, we obtain the following inequalities: 1 p ≥d(G) ≥qh(G) > γ(h, p) > 1 p + 1. By Lemma 2.2, for each proper subgroup H of G, we have qh(H) ≥qh(G) > γ(h, p) and thus by the minimality of |G|, H is p-closed. In particular, every proper subgroup of G is p-closed. Also, if N is a normal subgroup of G, then every proper subgroup of the quotient group G/N is also p-closed. Assume that p = 2. Then γ(h, p) = qh(S3) and since qh(G) > qh(S3), G is nilpotent by Theorem 1.1(b). But then G has a normal Sylow 2-subgroup, a contradiction. Therefore, we may assume that p ≥3. Let P be a Sylow p-subgroup of G. Claim 1: ⟨P G⟩= G. Hence, G has no proper normal subgroup of p′-index. Let L = ⟨P G⟩. Then P ≤L G. Suppose that L 1/(p + 1), the result follows from Lemma 4.1. Claim 3: Let N be a maximal normal subgroup of G. Then G = ⟨x⟩N with [G : N] = p, where N has a normal Sylow p-subgroup, which is Op(G), and x ∈P is a p-element and P = ⟨x⟩Op(G). In particular, G is p-solvable with a Hall p′-subgroup R and G = PR with [P : Op(G)] = p. Since G is not a simple group, G has a maximal normal subgroup, say N. It follows that N has a normal Sylow p-subgroup, say P1. As P1 N G, we deduce that P1 G and thus P1 ≤Op(G). Since G/N is a simple group and d(G/N) ≥d(G) > 1/(p+1) and p divides |G/N| by Claim 1, Lemma 4.1 yields that G/N is abelian. Hence G/N is cyclic of order p. It follows that G = PN and P ∩N = P1 G. As [G : N] = [P : P1] = p and P is not normal in G, we deduce that P1 = Op(G). Let x ∈P \ P1. Then P = ⟨x⟩P1 = ⟨x⟩Op(G). Since N/Op(G) is a p′-group and both Op(G) and G/N are p-groups, G is p-solvable. So G has a Hall p′-subgroup R and thus N = ROp(G) and G = PN = PR with [G : N] = [P : Op(G)] = p. 10 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET Claim 4: R is a Sylow r-group of G for some prime r. Let G = G/Op(G). Then P ∼= Cp acts coprimely and nontrivially on N = R ∼= R. Suppose by contradiction that |R| = |R| is divisible by at least two distinct primes. Let q be a prime divisor of |R|. By a coprime action theorem ([17, Theorem 3.23]), P stabilizes a Sylow q-subgroup Q of R. But then PQ is a proper subgroup of G, and hence it has a normal Sylow p-subgroup. Thus, P centralizes Q. It follows that |G : CG(P)| is not divisible by q. Clearly, this index is also not divisible by p. As this is true for all prime divisors of |R|, we must have G = CG(P), which is a contradiction. Thus R is an r-group for some prime r, and so it is a Sylow r-subgroup of G. Claim 5: Let n be the smallest nontrivial character degree of G. Then n ≥(p -1)/2. Note that G is nonabelian and so such a minimal nontrivial degree n exists. Let χ ∈Irr(G) such that χ(1) = n and let K be the kernel of χ. Then K is a proper normal subgroup of G. We consider the following cases. Case 1: G/K has a normal Sylow p-subgroup. Then PK/K G/K as PK/K is a Sylow p-subgroup of G/K. By Claim 1, we have G = PK and thus G/K ∼= P/P ∩K, which is a p-group. Since χ ∈Irr(G/K) is nonlinear, G/K is a nonabelian p-group and thus n = χ(1) ≥ p > (p -1)/2. Case 2: G/K does not have a normal Sylow p-subgroup. It follows that G/K has a faithful irreducible character of degree n and does not have a normal Sylow p-subgroup. By [11, Theorem 1], p ≤2n + 1 or n ≥(p -1)/2. Claim 6: G′ = RU, where U = G′ ∩Op(G) G. Note that G is a {p, r}-group by Claim 4, so it is solvable and G′ ≤N G. Since ⟨P G⟩= G, we deduce that G/G′ is an abelian p-group. Thus G′ contain a Sylow r-group of G. It follows that R ≤G′. By Dedekind's modular law, G′ = R(G′ ∩Op(G)), where G′ ∩Op(G) is a normal Sylow p-subgroup of G′. Claim 7: The quotient G/R′Op(G) is a Frobenius group with a cyclic complement of order p and Frobenius kernel an abelian 2-group of order 2m for some m ≥2, and p = 2m -1. Note that N/Op(G) = ROp(G)/Op(G) G/Op(G) and thus (N/Op(G))′ = R′Op(G)/Op(G). Hence, R′Op(G) G. Let G = G/R′Op(G). Then Op(G) = 1, R G, G ′ = R and G = ⟨x⟩R with |x| = p. Moreover, R is an abelian r-group and |G ′| = |R| = rm for some positive integer m. It follows that p is the only nontrivial character degree of G. It follows that 1 |G ′| ≤d(G) ≤1 p2 + (1 -1 p2 ) 1 |G ′| . Since d(G) > 1/(p + 1), we deduce that rm qh(A4) and so by Theorem 1.1 (c), G is supersolvable. By [15, Theorem VI.9.1(c)], as G is a {2, 3}-group, G has a normal Sylow 3-subgroup, which is a contradiction. Therefore, we may assume from now on that m ≥3 and p = 2m -1 ≥7. Working in the quotient group G/Op(G), we may assume Op(G) = 1. By Claim 6, G′ = R and so G′′ = R′ with [R : R′] = 2m by Claim 7. Recall that n is the smallest nontrivial character degree of G. Assume first that n ≥p. By [13, Lemma 2(vi)], we have 1 p + 1 1. Note that G′ = RU and U := G′ ∩Op(G) is a normal Sylow r-subgroup of G′. (i) Assume that p ∤|G′|. So G′ = R is a normal Sylow r-subgroup of G. Now P = ⟨x⟩Op(G) is abelian. In this case, G′⟨x⟩ G. If G′⟨x⟩ 2p -1, which is a contradiction. Assume that n < p. By Claim 5, we have n ≥(p -1)/2. Thus (p -1)/2 ≤n ≤p -1. Let χ ∈Irr(G) with χ(1) = n. Recall that 2m = p + 1 and G/Op(G) ∼= Γ(p, m). It follows that χ(1) divides |R| where |R| = 2m since p ∤χ(1) and χ(1) | |G|. So n = 2a for some positive integer a. Since (p -1)/2 = (2m -2)/2 = 2m-1 -1 < 2a ≤p -1 = 2m -2, we deduce that n = 2m-1. Employing the same argument as in the proof of Claim 8, we obtain a contradiction as m ≥3. The proof of the theorem is now complete. □ 5. Introduction to Dijkgraaf-Witten theory. For mathematicians, a topological quantum field theory (TQFT) is a machine for generating topological invariants of manifolds. For example, Witten [31] showed that the Jones polynomial is a topological invariant of manifolds in a certain TQFT. Atiyah [1] axiomatized topological quantum field theories in the language of category theory as a functor from a particular category of cobordisms to the category of vector spaces. The invariants that we consider arise in the context of Dijkgraaf-Witten theory [7, 8] in one space dimension and one time dimension. Such a TQFT determines a commutative Frobenius algebra that gives rise to topological invariants of surfaces. The new contribution of this paper is that, upon assuming this Frobenius algebra is the center of a complex group algebra, we instead consider these invariants as invariants of groups. It should be noted, however, that Dijkgraaf-Witten theory has already been used to study finite groups; for example, row and column sums in their character tables [19, 27, 29]. In this section, we derive these topological invariants in an elementary, explicit way, without assuming any previous knowledge of quantum field theory. For a similarly gentle introduction, we recommend Dijkgraaf's 2015 lecture at the Institute for Advanced Study [6]. The state of a quantum system is described by a vector in a complex vector space V (possibly infinite-dimensional), and the evolution of quantum states is described by a linear map. Here, we work in one space dimension and one time dimension, so our quantum systems are restricted INVARIANTS OF FINITE GROUPS ARISING IN TQFT 13 Cobordism Map V = Z(CG) V ⊗V →V eχ · eχ′ = δχ,χ′eχ ⟨⟩: V ⊗V →C ⟨eχ, eχ′⟩= δχ,χ′ χ(1) |G| 2 C →V 1 7→e e = P χ∈Irr(G) eχ V →C eχ 7→ |G| χ(1) 2 V →V ⊗V eχ 7→ |G| χ(1) 2eχ ⊗eχ Table 1. Cobordisms giving rise to maps of vector spaces. to the surfaces of 1-manifolds, which are disjoint unions of circles. We also stipulate that V is finite-dimensional. If we assume that the evolution of quantum states depends only on the topology of the manifold, then a cobordism between manifolds (which represents this quantum evolution of states) gives rise to a map on vector spaces. For example, a topological cylinder is a cobordism between two circles, and it represents the identity map id : V →V since the evolution only depends on the topology of the manifold. As we will see, other cobordisms correspond to maps on vector spaces which give our vector space the structure of a commutative Frobenius algebra. Perhaps most importantly, the "pair of pants" corresponds to a multiplication of quantum states V ⊗V →V , which gives V the structure of an algebra. Due to the topological nature of the theory, this multiplication is commutative. The cobordisms giving rise to a multiplicative identity e ∈V , a comultiplication V →V ⊗V and a linear form ⟨⟩: V →C are pictured in Table 1. By considering appropriate cobordisms, it is not difficult to show that the multiplication is associative and the linear form satisfies the identity ⟨a, b · c⟩= ⟨a · b, c⟩of a Frobenius algebra. A genus-h surface can be decomposed into a composition of these cobordisms, giving rise to a map C →C. Since the map is linear, it is defined by the image of 1, and this complex number is a topological invariant of the surface. Now we assume that V = Z(CG) is the center of the complex group algebra of a finite group G. We compute the maps in terms of a basis for Z(CG) that is indexed by the irreducible complex characters χ ∈Irr(G) of G; namely, the primitive central idempotents eχ = χ(1) |G| X g∈G χ(g)g. 14 CHRISTOPHER A. SCHROEDER AND HUNG P. TONG-VIET (a) (b) (c) · · · (d) Figure 1. Cobordisms for calculating (a) the "cup", (b) the "cap" and (c) the comultiplication. Decomposing the genus h-surface in (d) into cobordisms calculated in Table 1 yields an invariant P χ∈Irr(G)(|G|/χ(1))2h-2 of the surface. In this basis, the multiplicative identity decomposes as e = P χ∈Irr(G) eχ, and the multiplication takes the form eχ · eχ′ = δχ,χ′eχ. We define the linear form to be ⟨eχ, eχ′⟩= δχ,χ′ χ(1) |G| 2 , which is the coefficient of the identity in the product eχ · eχ′ times 1/|G|. This is the so-called special Frobenius algebra structure on Z(CG). To show that the "cup" in the third row of Table 1 gives rise to a multiplicative identity of V , note that the cobordism in Fig. 1(a) is homeomorphic to the cylinder, which is the identity map. To show that the "cap" in the fourth row of Table 1 sends eχ to (χ(1)/|G|)2, note that the cap is homeomorphic to adding a cup to the cobordism for the linear form, as in Fig. 1(b). Finally, to show that the comultiplication is eχ 7→(|G|/χ(1))2eχ ⊗eχ in the last row of Table 1, note that capping off the comultiplication is homeomorphic to the cylinder, which represents the identity map, as in Fig. 1(c). Now, we compute the map C →C for a genus-h surface by composing the maps in Table 1: 1 7-→ X χ∈Irr(G) eχ 7-→ X χ∈Irr(G) |G| χ(1) 2 eχ ⊗eχ 7-→ X χ∈Irr(G) |G| χ(1) 2 eχ 7-→· · · 7-→ X χ∈Irr(G) |G| χ(1) 2h eχ 7-→ X χ∈Irr(G) |G| χ(1) 2h-2 = Qh(G). The number Qh(G) is the topological invariant of a genus-h surface in this TQFT. In this paper, we considered a more well-behaved invariant of groups by scaling Qh(G) by an appropriate factor of |G|, namely qh(G) = 1/|G| P χ∈Irr(G)(1/χ(1))2h-2. References [1] M. F. Atiyah, Topological quantum field theories, Inst. Hautes ́Etudes Sci. Publ. Math. No. 68 (1988), 175-186. [2] F. Barry, D. MacHale and ́A. N ́ı Sh ́e, Some supersolvability conditions for finite groups, Math. Proc. R. Ir. Acad. 106A (2006), no. 2, 163-177. INVARIANTS OF FINITE GROUPS ARISING IN TQFT 15 [3] A. Ballester-Bolinches and R. Esteban-Romero, On minimal non-supersoluble groups, Rev. Mat. Iberoam. 23 (2007), no. 1, 127-142. [4] W. Burnside, Theory of groups of finite order, Dover, New York, 1955. [5] Y. Choi, Small values and forbidden values for the Fourier anti-diagonal constant of a finite group, J. Aust. Math. Soc. 118 (2025), no. 3, 297-316. [6] R. H. Dijkgraaf, "PiTP 2015 - 'Introduction to Topological and Conformal Field Theory (1 of 2)' - Robbert Dijkgraaf", uploaded by Institute for Advanced Study, 12 Aug. 2015, https://www.youtube.com/watch?v= jEEQO-tcyHc. [7] R. H. Dijkgraaf, Fields, strings and duality, in Sym ́etries quantiques (Les Houches, 1995), 3-147, NorthHolland, Amsterdam. [8] R. H. Dijkgraaf and E. Witten, Topological gauge theories and group cohomology, Comm. Math. Phys. 129 (1990), no. 2, 393-429. [9] J. D. Dixon, Solution to Problem 176, Canadian Mathematical Bulletin, 16 (1973), 302. [10] K. Doerk, Minimal nicht ̈uberaufl ̈osbare endliche Gruppen, Math. Z. 91 (1966), 198-205. [11] W. Feit and J. G. Thompson, Groups which have a faithful representation of degree less than (p -1)/2, Pacific J. Math. 11 (1961), 1257-1262. [12] D. Gluck, K. Magaard, U. Riese, P. Schmid, The solution of the k(GV )-problem, J. Algebra 279 (2004), no. 2, 694-719. [13] R. M. Guralnick and G. R. Robinson, On the commuting probability in finite groups, J. Algebra 300 (2006), no. 2, 509-528. [14] W. H. Gustafson, What is the probability that two group elements commute?, Amer. Math. Monthly 80 (1973), 1031-1034. [15] B. Huppert, Finite Groups I, translated from the 1967 German edition by C. A. Schroeder, Grundlehren der mathematischen Wissenschaften, 364, Springer, Cham, 2025. [16] I. M. Isaacs, Character theory of finite groups, Dover, New York, 1994. [17] I. M. Isaacs, Finite group theory, Graduate Studies in Mathematics, 92, Amer. Math. Soc., Providence, RI, 2008. [18] I. M. Isaacs, Characters of solvable groups, Graduate Studies in Mathematics, 189, Amer. Math. Soc., Providence, RI, 2018. [19] R. de Mello Koch, Y.-H. He, G. Kemp and S. Ramgoolam, Integrality, duality and finiteness in combinatoric topological strings, J. High Energ. Phys. 2022, 71 (2022). [20] P. Lescot, Sur certains grupes finis, Recv. Math. Sp ́eciales 8 (1987), 267-277. [21] M. W. Liebeck and A. Shalev, Fuchsian groups, coverings of Riemann surfaces, subgroup growth, random quotients and random walks, J. Algebra 276 (2004), no. 2, 552-601. [22] M. W. Liebeck and A. Shalev, Character degrees and random walks in finite groups of Lie type, Proc. London Math. Soc. (3) 90 (2005), no. 1, 61-86. [23] M. W. Liebeck and A. Shalev, Fuchsian groups, finite simple groups and representation varieties, Invent. Math. 159 (2005), no. 2, 317-367. [24] A. Vera L ́opez and J. Vera L ́opez, Classification of finite groups according to the number of conjugacy classes, Israel J. Math. 51 (1985), no. 4, 305-338. [25] V. T. Nagrebetski ̆ı, Finite minimal non-supersolvable groups, in Finite groups (Proc. Gomel Sem., 1973/1974) (Russian), 104-108, 229, Izdat. "Nauka i Tehnika", Minsk. [26] G. Navarro Ortega, Characters and blocks of finite groups, London Mathematical Society Lecture Note Series, 250, Cambridge Univ. Press, Cambridge, 1998. [27] A. Padellaro, R. Radhakrishnan and S. Ramgoolam, Row-column duality and combinatorial topological strings, J. Phys. A 57 (2024), no. 6, Paper No. 065202, 68 pp. [28] H. N. Nguyen and A. Mar ́oti, p-regular conjugacy classes and p-rational irreducible characters, J. Algebra 607 (2022), 387-425. [29] S. Ramgoolam and E. Sharpe, Combinatoric topological string theories and group theory algorithms, J. High Energy Phys. 2022, no. 10, Paper No. 147, 56 pp. [30] C. A. Schroeder, Finite groups with many p-regular conjugacy classes, J. Algebra 641 (2024), 716-734. [31] E. Witten, Quantum field theory and the Jones polynomial, in Braid group, knot theory and statistical mechanics, 239-329, Adv. Ser. Math. Phys., 9, World Sci. Publ., Teaneck, NJ, 1989. 139026000, USA E-mail address: 139026000, USA E-mail address:
2510.14973
ATTENTION IS ALL YOU NEED FOR KV CACHE IN DIFFUSION LLMS Quan Nguyen-Tri∗ FPT AI Residency Hanoi, Vietnam quannt40@fpt.com Mukul Ranjan∗& Zhiqiang Shen VILA Lab, MBZUAI Abu Dhabi, UAE {mukul.ranjan,zhiqiang.shen}@mbzuai.ac.ae Project page: https://vila-lab.github.io/elastic-cache-webpage/ ABSTRACT This work studies how to adaptively recompute key–value (KV) caches for diffusion large language models (DLMs) to maximize prediction accuracy while minimizing decoding latency. Prior meth- ods’ decoders recompute QKV for all tokens at every denoising step and layer, despite KV states changing little across most steps, especially in shallow layers, leading to substantial redundancy. We make three observations: (1) distant MASK tokens primarily act as a length-bias and can be cached block-wise beyond the active prediction window; (2) KV dynamics increase with depth, suggesting that selective refresh starting from deeper layers is sufficient; and (3) the most-attended token exhibits the smallest KV drift, providing a conservative lower bound on cache change for other tokens. Building on these, we propose Elastic-Cache, a training-free, architecture-agnostic strategy that jointly decides when to refresh (via an attention-aware drift test on the most-attended token) and where to refresh (via a depth-aware schedule that recomputes from a chosen layer onward while reusing shallow-layer caches and off-window MASK caches). Unlike fixed-period schemes, Elastic-Cache performs adaptive, layer-aware cache updates for diffusion LLMs, reducing redun- dant computation and accelerating decoding with negligible loss in generation quality. Experiments on LLaDA-Instruct, LLaDA-1.5, and LLaDA-V across mathematical reasoning and code generation tasks demonstrate consistent speedups: 8.7× on GSM8K (256 tokens), 45.1× on longer sequences, and 4.8× on HumanEval, while consistently maintaining higher accuracy than the baseline. Our method achieves significantly higher throughput (6.8× on GSM8K) than existing confidence-based approaches while preserving generation quality, enabling practical deployment of diffusion LLMs. 1 INTRODUCTION Diffusion large language models (DLMs) (Li et al., 2025) have recently emerged as a compelling alternative to au- toregressive Transformers (Radford et al., 2018; Achiam et al., 2023), yet their iterative denoising procedure makes inference particularly compute-intensive. In standard implementations, each decoding step recomputes queries, keys, and values (QKV) for every token at every layer, even though the underlying key–value (KV) states change only marginally across most steps. This all-tokens, all-layers recomputation incurs substantial latency and memory traffic, ultimately limiting practical deployment. Our goal in this study is to determine how and when to adaptively recompute the KV cache during decoding so as to maximize prediction quality while minimizing wall-clock latency. A defining property of diffusion LLM decoding is the progressive unmasking of tokens under a length- and structure- aware attention pattern. This induces heterogeneous KV dynamics: shallow layers tend to stabilize quickly as they encode local lexical structure, whereas deeper layers continue to adjust global, semantic dependencies. We formal- ize this with a notion of KV drift: the step-to-step change in cached keys and values, and observe two consistent trends: (i) drift is small for most steps, and (ii) drift grows with layer depth. These trends suggest that indiscriminate recomputation is wasteful, and that targeted refreshes could preserve accuracy while slashing cost. Prior acceleration methods for diffusion (and related) decoders typically refresh the KV cache on a fixed schedule, e.g., every k iterations without regard to instance difficulty, current attention patterns, or layerwise variability. Such fixed-period policies leave performance on the table: they recompute when nothing has changed and miss updates precisely when rapid semantic revisions occur. Moreover, by treating all layers uniformly, they over-service shallow layers whose representations have already converged, while under-servicing deeper layers where changes matter most. This motivates an adaptive, attention-aware alternative. ∗Equal contribution 1 arXiv:2510.14973v1 [cs.CL] 16 Oct 2025 Our approach is built on three empirical observations. First, distant MASK tokens exert negligible influence on unmasking the current token and behave primarily as a length-bias prior; thus, their KV can be block-cached outside the active prediction window to avoid redundant work. Second, KV drift increases with depth, so refreshes should start at a learned boundary layer ℓ⋆and apply only to deeper layers, reusing shallow-layer caches. Third, the most-attended token at a step typically exhibits the smallest drift, providing a conservative lower bound on KV changes across the context. Monitoring this drift yields a reliable, low-overhead trigger for deciding whether a global refresh is warranted. Based on these ideas, we propose Elastic-Cache, a training-free, architecture-agnostic strategy that couples Attention- Aware KV Cache Update with Layer-Aware KV Cache Update. The attention-aware module computes a lightweight drift statistic on the most-attended token; if the statistic exceeds a threshold, a refresh is triggered, otherwise cached KVs are reused. The layer-aware module then refreshes only layers ℓ≥ℓ⋆, while shallow layers retain their caches, and off-window MASK tokens remain block-cached. Together, these mechanisms align recomputation with where and when the model’s beliefs actually change, minimizing unnecessary QKV work. In contrast to fixed-period baselines, our Elastic-Cache adapts to the input, step, and layer granularity together. It reduces compute by skipping recomputation during stable phases, focuses effort on deeper layers during semantic revisions, and leverages block-wise caching for distant MASK tokens. Conceptually, the method reframes KV man- agement as an attention-guided control problem: attention estimates which tokens matter; drift detects how much the state has changed; and the layer boundary ℓ⋆encodes where updates pay off. This yields a practical pathway to low-latency diffusion LLM decoding without modifying training or the base architecture. Our contributions of this work: • We diagnose redundancy in diffusion LLM decoding and introduce KV drift as a principled signal for adaptive cache management. • We propose Elastic-Cache, the first (to our best knowledge) adaptive, layer-aware KV refresh policy for diffusion LLMs that jointly decides when to recompute (attention-aware drift test) and where to recompute (depth-selective updates). • We develop block-wise MASK caching to eliminate needless updates outside the prediction window. We provide comprehensive empirical experiments and ablations showing that our Elastic-Cache preserves gen- eration quality while substantially reducing decoding latency across tasks and model scales. 2 PRELIMINARY 2.1 MASKED DIFFUSION MODELS Masked Diffusion Models (MDMs), absorbing-state discrete diffusion, build on D3PM (Austin et al., 2021a) and its continuous-time variant (Campbell et al., 2022), replacing tokens with a special MASK along a forward process (Sahoo et al., 2024; Shi et al., 2024) at timestep t: qt|0(xt|x0) = L Y i=1 qt|0(xi t|xi 0) = L Y i=1 Cat(xi t; (1 −t)δxi 0 + tδMASK) (1) where t ∈[0, 1] controls interpolation between the original data x0 (at t = 0) and a fully masked sequence (at t = 1), Cat(·) denotes the categorical distribution. A parametric model pθ learns the reverse denoising; generation starts from all MASK and iteratively unmasks by sampling pθ(xi 0|xt). Recent theory (MDLM (Shi et al., 2024; Sahoo et al., 2024), RADD (Ou et al., 2024)) simplifies training from a variational bound to a reweighted cross-entropy over masked positions: LMDM = Z 1 0 1 t Eqt|0(xt|x0)   X i:xi t=MASK −log pθ(xi 0|xt)  dt (2) This formulation scales to LLMs as diffusion language models (DLMs), with LLaDA (Nie et al., 2025b) and Dream- 7B (Ye et al., 2025) matching autoregressive performance while enabling parallel decoding and flexible infilling. 2.2 KEY-VALUE CACHE IN TRANSFORMERS Transformer-based language models achieve computational efficiency during autoregressive generation through Key- Value (KV) caching (Pope et al., 2023). In causal attention, each layer projects the current hidden state Ht into query, 2 0 5 10 15 20 25 30 Layer 0 20 40 60 80 Swallow layers Cache changes Deep layers Cache changes (b) Key-Value states Cosine similarity 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.00 0 5 10 15 20 25 30 Layer 0 20 40 60 80 (c) Attention weights Cosine similarity 0.5 0.6 0.7 0.8 0.9 1.0 0 5 10 15 20 25 30 Layer 0 20 40 60 80 100 120 (d) The most-attended tokens 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0 200 400 600 800 1000 1200 1400 Token Position 0 20 40 60 80 100 120 Decoding Step Prompt tokens Faraway MASK tokens Tokens within Sliding Window (a) Attention weights 0.02 0.04 0.06 0.08 0.10 Figure 1: Visualization of our motivation. (a) MASK tokens located near each other receive high attention, while those situated far apart have minimal influence. (b) Over time, the representations in the KV states of cached tokens evolve, with deeper layers experiencing more substantial changes. (c) The changes in attention weights of most-attended tokens exhibit similar patterns to the changes in the KV states of all cached tokens. (d) The KV states of the most- attended tokens have the least changes. key, and value representations using learned projection matrices WQ, WK, WV. At decoding step t, the attention computation for the current token follows: At [t] = softmax Qt [t](Kt [1:t])⊤ √dk ! Vt [1:t], KV cache: ( Kt [1:t] = concat(Kt−1 [1:t−1], Kt [t]), Vt [1:t] = concat(Vt−1 [1:t−1], Vt [t]) . (3) To avoid redundant computation, previous key-value pairs are cached and reused. This caching strategy is effective because in causal attention, previously computed key-value pairs remain invariant throughout decoding (Kt−1 [1:t−1] = Kt [1:t−1]), enabling efficient reuse without affecting model output. KV-Cache in Bidirectional Attention. However, diffusion models employ bidirectional attention where all positions can attend to each other, breaking the invariance property of cached representations. As noted by dKV-Cache (Ma et al., 2025), token representations in diffusion models evolve dynamically during the iterative denoising process, making direct application of traditional KV-cache ineffective. The bidirectional dependencies cause previously com- puted key-value pairs to become stale as the sequence state changes, requiring careful redesign of caching strategies for diffusion language models. 3 METHODOLOGY 3.1 OUR FRAMEWORK OVERVIEW AND MOTIVATION Diffusion LLMs differ from autoregressive decoders in that their key–value (KV) states evolve across denoising steps due to bidirectional dependencies. Our objective is to adaptively decide when and where to recompute the KV cache to preserve accuracy while minimizing latency. Baseline decoders recompute QKV for all tokens and layers at every step, despite negligible KV changes for most steps and especially in shallow layers (Fig. 1b); deeper layers exhibit larger drift. Rather than fixed-period refreshes (Wu et al., 2025; Ma et al., 2025; Liu et al., 2025), we propose Elastic-Cache, the first (to our knowledge) adaptive, layer-aware KV update policy for diffusion LLMs that jointly optimizes timing and location of recomputation. Our design is driven by three observations. (1) Distant MASK tokens mainly act as a length prior and exert minimal influence on the current unmasking, we therefore block-cache their KV beyond the active prediction window (Fig. 1a). (2) KV drift grows with depth, refresh should start at a boundary layer and apply only to deeper layers (Fig. 1b). (3) The most-attended tokens typically shows the smallest KV change (Fig. 1d), giving a conservative lower bound for others, we use its drift as a lightweight trigger for refresh (Fig. 1c). Fig. 2 summarizes the pipeline. To the end, we proposed Elastic-Cache, a flexible method for key-value caching in diffusion large language models. Fig. 2 provides a visual representation of the overall pipeline of our proposed method. 3.2 SLIDING WINDOW DECODING AND KV CACHING Formally, let I = {1, 2, . . . , N} represent all positions. At decoding step t, let Dt denote newly decoded positions and Mt denote remaining masked positions, where Mt−1 = Mt ∪Dt. Denotes D<t = S i{Di}t i=1 as the set of all 3 Prompt Response (b) Elastic-Cache (Ours) Prompt Block 0 Block 1 (a) Fast-dLLM: Dual Cache t=T t=0 Most attentive Cached token MASK  token Decoded  token get_cached_state Recompute cache from layer  :Attention weights Cosine Similarity: change too significantly Fully compute cache for all layers Decode token Partially compute cache from layer  to Block Length = 4 Window Size = 3 Figure 2: Illustration of the Key-Value cache method for diffusion LLMs. (a) The fast-dLLM (Wu et al., 2025) block-wise decoding method caches the Key-Value of all tokens outside the current block at each step. The KV cache is updated after completing a block of decoding. (b) Our proposed method, Elastic-Cache, caches the key-value of tokens outside a sliding window that flexibly moves through the sentence from left to right at each iteration. When the attention weights corresponding to the most-attended tokens (one for each layer) change significantly at a layer l, we start recomputing the KV cache from layer l + 1 to the last layer. decoded tokens up to time step t. Initially, at t = 0 we compute the attention for each layer l: A0,l [I] = softmax Q0,l [I](K0,l [I])⊤ √dk ! V0,l [I], initialize KV cache: ( ˜K0,l [I] = K0,l [I] ˜V0,l [I] = V0,l [I] . (4) For each subsequence iteration t ranging from 1 to T, The model perform prediction for newly decoded position Dt and the remaining masked position Mt. To enhance efficiency, we only perform predictions for masked positions that are closest to the left and form a sliding window of size β, denoted as Mt β = Mt [1:β]. We also have that Mt−1 β = Mt β ∪Dt. We’ve observed that masked positions within the sliding window tend to pay close attention to each other. Conversely, MASK tokens outside the sliding window receive minimal attention, allowing us to reuse their KV from the cache without significantly altering the prediction of the current masked position within the sliding window. We compute the attention only for the tokens in the sliding window Mt−1 β at step t as follows: At,l [Mt−1 β ] = softmax   Qt,l [Mt−1 β ]( ˜Kt,l [I])⊤ √dk  ˜Vt,l [I], update KV cache at Mt−1 β :    ˜Kt,l [Mt−1 β ] = Kt,l [Mt−1 β ] ˜Vt,l [Mt−1 β ] = Vt,l [Mt−1 β ] . (5) Although the proposed sliding window decoding and KV cache share some similarities with the KV cache for block- wise decoding in Fast-dLLM (Wu et al., 2025) (as shown in the Fig. 2 for pipeline overview and comparison of both methods), we would like to emphasize the significant advancement of sliding window decoding. It ensures that tokens are close to each other and are predicted together, thereby minimizing the loss of cache for faraway MASK tokens. On the other hand, block-wise decoding may overlook those MASK tokens that are near the end of the block, leading to inefficient predictions due to overly aggressive caching of their surrounding context. 3.3 ATTENTION-AWARE KV CACHE UPDATE The most important novelty of our proposed method is to automatically determine whether to update the KV cache to preserve accuracy while minimizing latency. Our method leverages the awareness of the model’s attention weights to identify when the KV cache undergoes significant changes. At time step t and layer l, we determine the token that receives the most frequent attention from other tokens based on the attention weights corresponding to the current model’s prediction for Mt β. T t,l = arg max k∈D<t X q∈Mt β St,l [q,k], where: St,l [Mt β] = softmax   Qt,l [Mt β]( ˜Kt,l [I])⊤ √dk  . (6) 4 Algorithm 1 The Elastic-Cache Algorithm 1: Require: Trained model pθ, Prompt xprompt, Sliding window size β, Update threshold γ, Generation length N. 2: Initialize: x0 ←{xprompt; [MASK], . . . , [MASK]}; P ←length(xprompt) 3: t ←1; D1 ←{1, . . . , P}; M1 ←{P + 1, . . . , P + N}; T 0 ←∅; I = M0 = D1 ∪M1 4: while Mt ̸= ∅do 5: Mt β ←Mt [:β]; Qt ←T t−1 ∪Mt−1 β ; Ht,0 [Qt] ←Embedding(xt [Qt]); l∗←∞ 6: for l = 1, . . . , L do 7: if l > l∗then // Cache Update 8: ˜Ht,l [I], ˜Kt,l [I], ˜Vt,l [I] ←cache update(I); Qt,l [I], Kt,l [I], Vt,l [I] = linear(Ht,l [I]) 9: Ht,l+1 [I] , St,l [I] ←attention layer(Qt,l [I], Kt,l [I], Vt,l [I]) 10: else // Cache Reuse 11: ˜Ht,l [Qt], ˜Kt,l [Qt], ˜Vt,l [Qt] ←cache update(Qt); Qt,l [Qt], Kt,l [Qt], Vt,l [Qt] = linear(Ht,l [Qt]) 12: Ht,l+1 [Qt] , St,l [I] ←attention layer(Qt,l [Qt], ˜Kt,l [I], ˜Vt,l [I]) 13: σt,l ←cosine similarity(St−1,l [T t−1], St,l [T t−1]) 14: if σt,k < γ then // Start update cache from layer l + 1 15: l∗←l; Ht,l+1 [I] ←get cached state(Ht,l+1 [Qt] ) 16: end if 17: end if 18: Get the most-attended token: T t,l ←arg maxk∈D<t P q∈Mt β St,l [q,k] 19: end for 20: Decode new tokens: xt+1, Dt+1 ←decode(xt, Mt β) 21: Update state: Mt+1 ←Mt \ Dt+1; T t = S l{T t,l}L l=1 t ←t + 1 // State Update 22: end while 23: return xt−1 Here, we focus solely on the most-attended token among the current decoded tokens D<t. This is because the remain- ing MASK tokens either fall within the sliding window of predictions or have negligible influence on the unmasking tokens (Fig. 1a). We obtain one most-attended token per layer and compile the set of most-attended tokens, denoted as T t = S l{T t,l}L l=1. In practice, the most-attended token for a layer often overlaps with tokens from other layers, resulting in a relatively limited number of most-attended tokens being available at any given time. T t, besides being the tokens that have the most influence on the predictions’ outcome, also signify the least changes among the cached decoded tokens (Fig. 1d). Therefore, we use T t as a lightweight trigger for our cache update mechanism. Without updating all cached tokens, we only frequently update the most-attended tokens T t to measure the degree of changes for all other cached tokens. Ideally, since T t have the least change among the decoded tokens, we expect that when T t change significantly, the rest of the decoded tokens will also change significantly. Therefore, we add T t−1 to the sliding window at step t: T t−1 ∪Mt−1 β . We then measure the changes in attention weights of T t between the current and previous steps, t and t −1, using cosine similarity. l∗=  l if: σt,l < γ ∞ othewise , Cosine Similarity(St−1,l [T t−1], St,l [T t−1]) = ∥St−1,l [T t−1] · St,l [T t−1]∥ ∥St−1,l [T t−1]∥· ∥St,l [T t−1]∥ = σt,l. (7) The changes in attention St,l directly affect the output of the current layer or the input of the next layer Ht,l+1. This implies that our cached values are diverging from the actual values, necessitating an update. When a layer l∗observes significant changes in attention weights σt,l < γ, we initiate the update of the KV cache for the subsequent layers, starting from l∗+ 1 and continuing until the last layer L. To achieve this, we initialize the hidden states of all cached tokens with the states ˜Ht,l+1 [I] , which have been saved and updated using patterns similar to ˜Kt,l+1 [I] and ˜Vt,l+1 [I] . Update state: ˜Ht,l+1 [Mt−1 β ] = Ht,l+1 [Mt−1 β ], Initialize cache update: Qt,l+1 [I] , Kt,l+1 [I] , Vt,l+1 [I] = linear( ˜Ht,l+1 [I] ) (8) We then update and overwrite the KV cache using the same process as initially at t = 0, as described in Eq. 4. If none of the layers satisfy σt,l < γ, we continue to reuse our KV cache for future predictions. We didn’t directly compare the hidden state Ht,l+1 and Ht−1,l+1 because their changes depend on various network components. The error in measurement could be amplified by the divergence between the cached value and the actual value (including Key-Value states). 5 On the other hand, the changes in attention weights are closely linked to the source of the change in Key-Value states, which is the bidirectional attention mechanism in diffusion LLMs. Intuitively, the changes in attention weights become significant when new decoded tokens receive high attention and alter the attention output computed in the past when they were still masked. Consequently, the changes in attention weights exhibit very similar patterns to the changes in Key-Value states during decoding, as illustrated in Fig. 1b and Fig. 1c. We use the hyper-parameter γ to set the trigger for automatic cache updates. As shown in Fig. 1c, the attention weights’ cosine similarity landscapes influence this. A higher γ results in more frequent and extensive cache updates across multiple layers, while a lower γ triggers updates less frequently. This flexibility allows us to effectively manage the trade-off between accuracy and latency. Our overall algorithm is described in Algorithm 1. 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP Implementation Details. All our runs use a single NVIDIA A100 80GB. We evaluate Elastic-Cache on LLaDA- Instruct (Nie et al., 2025a), LLaDA-1.5 (Zhu et al., 2025), and multimodal LLaDA-V (You et al., 2025) across MBPP (Austin et al., 2021b), HumanEval (Chen et al., 2021), MATH (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021), MathVista (Lu et al., 2023), and MathVerse (Zhang et al., 2024b). Default hyperparameters: attention threshold γ=0.9, parallel-decoding confidence ϵ=0.9, cache block size 32. For fair comparison, we re-run LLaDA (Nie et al., 2025a) and Fast-dLLM (Wu et al., 2025) under the same hardware/software. Evaluation Framework and Metrics. We use lm-eval-harness (Gao et al., 2024). Throughput is token- s/sec averaged until emitting, matching Fast-dLLM’s protocol (Wu et al., 2025). Accuracy metrics: GSM8K: 5-shot flexible extract (Cobbe et al., 2021); MATH: 4-shot math verify (minerva math) (Hendrycks et al., 2021); HumanEval—0-shot with the Fast-dLLM post-processing (Chen et al., 2021; Wu et al., 2025); MBPP—3- shot pass@1 (Austin et al., 2021b). For LLaDA-V, we adopt the official pipeline with lmms-eval (Zhang et al., 2024a; Li et al., 2024): MathVista: gpt eval score (Lu et al., 2023); MathVerse: gpt eval score on mathverse testmini vision dominant (Zhang et al., 2024b). Confidence-Aware Decoding. We employ confidence-aware decoding strategies from Fast-dLLM (Wu et al., 2025), which select only tokens with confidence scores exceeding a specified threshold (ϵ), instead of unmasking a fixed number of tokens per step, as in the baseline Diffusion LLM. This straightforward yet effective approach accelerates Diffusion LLM inference by enabling more tokens to be predicted concurrently at each iteration, contingent upon the model’s performance. Consequently, we concentrate on comparing the acceleration achieved by the KV caching method under the same decoding strategies. 4.2 PERFORMANCE AND EFFICIENCY EVALUATION Across Tables 1, 2, and 3, our proposed Elastic-Cache delivers substantial throughput gains for diffusion LLMs with minimal accuracy loss. By adaptively updating the cache only when necessary, it achieves a speedup of up to 45.1× over the standard baseline. While maintaining accuracy within 1∼2% on MATH and HumanEval, it also achieves higher accuracy on GSM8K and MBPP. Compared to Fast-dLLM (Wu et al., 2025), Elastic-Cache consistently attains greater tokens/sec at better accuracy. As presented in Table 1, on LLaDA-Instruct, Elastic-Cache reaches 90.1 t/s on GSM8K (512 tokens; 25.2× over baseline) at 77.71% accuracy, surpassing Fast-dLLM’s 44.0 t/s @ 74.83%. On LLaDA-1.5 (Table 2), our approach yields even greater gains, including 45.1× on GSM8K-512, with an accuracy of 81.35% (baseline 81.35%). This observation indicates that Elastic-Cache performs better when the model’s predictions are more accurate. The reason behind this could be the close relationship between our approach and attention scores. Intuitively, accurate predictions are associated with meaningful attention scores with fewer outliers, which makes our approach operate more smoothly. We also observed that in most settings, Elastic-Cache provides higher throughput for longer generation lengths, whereas this is the opposite for Fast-dLLM (Wu et al., 2025), as it often experiences reduced throughput as the generation length increases. The advantages of our approach stem from the fixed-size sliding window and automatic cache update, which minimizes the dependency of throughput on the generation length. In the multimodal setting (LLaDA-V; Table 3), Elastic-Cache raises MathVerse-256 throughput to 32.3 t/s from Fast- dLLM’s 30.3 t/s while maintaining 29.19% accuracy, demonstrating robustness beyond text-only tasks. The significant improvement of Elastic-Cache compared to the baselines across various settings suggests that our method is broadly applicable and has high scalability potential. 6 Table 1: Comprehensive benchmark results on the LLaDA-Instruct suite. Each cell shows accuracy (top) and decoding throughput in tokens/sec with relative speedup to the LLaDA baseline (bottom, blue: t/s / orange: speedup). High- lighted cells denote the highest throughput and speedup per configuration. The highest accuracy is bolded. Confident-Aware Decoding Benchmark Gen Length LLaDA LLaDA Fast-dLLM Elastic-Cache GSM8K (5-shot) 256 78.01 7.3 (1.0×) 78.62 22.8 (3.1×) 77.94 53.7 (7.7×) 78.24 58.0 (8.2×) 512 77.10 3.6 (1.0×) 77.33 18.6 (5.2×) 74.83 44.0 (12.3×) 77.71 90.1 (25.2×) MATH (4-shot) 256 33.58 9.5 (1.0×) 33.28 25.8 (2.7×) 32.50 49.0 (5.1×) 33.14 48.7 (5.1×) 512 37.20 7.1 (1.0×) 36.82 24.0 (3.4×) 35.70 52.8 (7.4×) 36.60 59.3 (7.9×) HumanEval (0-shot) 256 40.85 33.3 (1.0×) 42.07 102.1 (3.1×) 37.20 99.8 (3.0×) 40.24 160.5 (4.8×) 512 43.90 17.7 (1.0×) 43.29 51.6 (2.9×) 45.73 76.1 (4.3×) 46.34 100.7 (5.0×) MBPP (3-shot) 256 29.80 6.5 (1.0×) 30.00 23.4 (3.6×) 25.40 45.1 (7.0×) 32.2 46.9 (7.3×) 512 15.0 4.7 (1.0×) 15.0 20.8 (4.4×) 13.6 44.7 (9.5×) 15.6 63.0 (13.4×) 4.3 ABLATIONS We ablate key choices: 1) Cache update threshold γ, 2) sliding window size β, and 3) prefill and generation length, to expose speed/accuracy trade-offs and justify defaults. Cache Update Threshold (γ). Table 4 illustrates the sensitivity of our proposed method to the parameter γ. As γ is used to control the frequency of cache updates, a consistent decrease in γ leads to an increase in throughput. However, there is also a trend of decreasing accuracy as throughput increases. The trend is more consistent for the LLaDA-1.5 model, while for LLaDA, the accuracy at peak (γ = 0.9) is higher, but the throughput is lower. Sliding Window Size (β). Fig. 3a shows that our accuracy is stable across various β and close to No-Cache until β ≈64; beyond that LLaDA’s tendency to emit EOS early degrades results (You et al., 2025). Throughput, however, is sensitive to β: larger windows enable more parallel prediction (fewer iterations, lower latency), but overly large β reduces cacheable MASK tokens, raising per-step compute and latency. Sliding Window vs. Block-Wise. When switching Elastic-Cache to block-wise decoding (Fast-dLLM-style) (Fig. 3a), our accuracy is often similar to No-Cache, but short blocks hurt accuracy and throughput diverges. Our sliding window groups nearby MASK tokens that strongly attend to each other, whereas block-wise caching over-aggressively freezes distant MASKs, harming small-block predictions. Our Elastic-Cache’s automatic cache refresh detects divergent tokens and updates them, preserving accuracy at the cost of some throughput. Prefill and Generation Length. Table 5a and Table 5b provide insights into the impact of prefill length and generation length on the overall speedup. Notably, both Fast-dLLM and Elastic-Cache experience a decrease in throughput as the prefill length increases from 3-shot to 8-shot. However, Elastic-Cache exhibits a remarkable speedup and consistently high accuracy across different prefill lengths. Moreover, the throughput of Elastic-Cache increases with generation length, highlighting its unique scaling properties. 4.4 ANALYSIS Cache update frequency. Fig. 3b and Fig. 3c illustrate the frequency of cache updates performed by Elastic-Cache under varying hyper-parameters γ and ϵ. The proposed method maintains a very low cache update frequency across different values of γ (Fig. 3b). In extreme cases, with γ = 0.95, the cache update frequency increases to only 20% compared to the baseline without a cache. Moreover, increasing the model’s confidence and accuracy (with ϵ, Fig. 3c) enhances Elastic-Cache’s effectiveness, and reduces the cache update frequency. Tunable Speed–Accuracy Trade-off. The cache update threshold γ directly determines the balance (Table 4). An ex- cessively high γ could lead to unnecessary cache updates, resulting in a decrease in speedup without any improvement in accuracy. Conversely, a smaller γ value could guarantee speedup while sacrificing accuracy. The optimal value for 7 Table 2: Comprehensive benchmark results on the LLaDA-1.5 suite. Each cell shows accuracy (top) and decoding throughput in tokens/sec with relative speedup to the LLaDA baseline (bottom, blue: t/s / orange: speedup). High- lighted cells denote the highest throughput and speedup per configuration. Confident-Aware Decoding Benchmark Gen Length LLaDA-1.5 LLaDA-1.5 Fast-dLLM Elastic-Cache GSM8K (5-shot) 256 80.36 6.7 (1.0×) 80.44 22.5 (3.3×) 80.59 51.2 (7.6×) 81.50 58.0 (8.7×) 512 81.35 2.6 (1.0×) 81.88 17.2 (6.6×) 80.82 36.8 (14.1×) 81.35 117.2 (45.1×) MATH (4-shot) 256 33.52 8.5 (1.0×) 33.60 22.3 (2.6×) 32.74 44.4 (5.2×) 33.50 51.0 (6.5×) 512 35.63 5.0 (1.0×) 35.56 20.3 (4.0×) 33.68 44.4 (8.8×) 35.36 74.8 (14.9×) HuamnEval (0-shot) 256 43.29 7.0 (1.0×) 42.68 17.5 (2.5×) 34.75 18.7 (2.7×) 36.59 20.9 (3.0×) 512 40.85 3.2 (1.0×) 39.63 9.7 (3.1×) 36.59 15.4 (4.8×) 37.80 16.8 (5.3×) MBPP (3-shot) 256 38.00 2.4 (1.0×) 38.00 14.2 (5.8×) 34.60 28.0 (11.6×) 41.20 32.7 (13.5×) 512 38.20 1.0 (1.0×) 38.60 11.5 (11.5×) 36.20 17.8 (17.8×) 39.00 32.8 (32.8×) Table 3: Performance and Speedup Comparison of LLaDA-V on MathVista and MathVerse. Each benchmark presents results from LLaDA-V (base) using Fast-dLLM, and our method. Length MathVista MathVerse Fast-dLLM Elastic-Cache (Ours) Fast-dLLM Elastic-Cache (Ours) 256 55.9 28.7 55.9 29.7 26.78 30.3 29.19 32.3 512 54.1 23.7 55.8 24.1 25.5 28.1 29.19 30.8 γ to maximize both accuracy and throughput depends on the model’s prediction. Models with higher accuracy tend to have the best γ value, which is closer to 1.0 (Table 4). Scaling Properties. Elastic-Cache scales greatly with the generation length and the power of the base model. Increas- ing the generation length slows down the baseline performance but speeds up Elastic-Cache (Tables 5b). Moreover, Elastic-Cache effectiveness is highly dependent on the accuracy of the model’s predictions (Table 1, Table 2, Fig. 3c). This indicates that Elastic-Cache can effectively scale with the size of the model and the size of the training data, as LLMs generally improve when they scale up. 5 RELATED WORK Diffusion Language Models. Classical diffusion models excel in continuous domains: images (Ho et al., 2020; Dhari- wal & Nichol, 2021; Rombach et al., 2022), audio (Yang et al., 2023; Huang et al., 2023), and video (Xing et al., 2024; Ho et al., 2022a;b), building on the seminal formulation of Sohl-Dickstein et al. (2015). Adapting diffusion to dis- crete text has followed Markov/multinomial/continuous-time paths (Austin et al., 2021a; Hoogeboom et al., 2021b;a; Campbell et al., 2022; Sun et al., 2022), refined via score matching, ratio methods, and reparameterization (Meng et al., 2022; Lou & Ermon, 2023; Zheng et al., 2023), with recent work unifying these views (Sahoo et al., 2024; Shi et al., 2024; Ou et al., 2024; Zheng et al., 2024). Early NLP systems validated these ideas (He et al., 2022; Li et al., 2022; Gong et al., 2022) and explored semi-autoregression (Han et al., 2022). Masked diffusion approaching autore- gressive quality (Sahoo et al., 2024) enabled scalable models (LLaDA) competitive with LLaMA (Nie et al., 2025a; 2024; Touvron et al., 2023a; Dubey et al., 2024), with further gains from AR adaptation and instruction tuning (Gong et al., 2024; Zhu et al., 2025; Ye et al., 2025). The paradigm now spans multimodal/structured domains (You et al., 2025; Yang et al., 2025; Yu et al., 2025; Wang et al., 2024a;b; Kitouni et al., 2023). Acceleration Techniques for Large Language Models. KV caching underpins efficient transformer infer- ence (Vaswani et al., 2017; Pope et al., 2023), complemented by GQA, RoPE, and modern LLM optimizations (Ainslie et al., 2023; Su et al., 2024; Touvron et al., 2023a;b; Dubey et al., 2024). Diffusion LLMs complicate caching due to bidirectional attention and evolving representations; dedicated methods include Fast-dLLM (Wu et al., 2025), dKV- 8 4 8 16 32 48 64 128 Sliding Window Size / Block Size 40 50 60 70 80 GSM8K (5-shot) Accuracy No cache 7.0x Speedup 6.4x Speedup Sliding window Accuracy Block-wise Accuracy Sliding window Throughput Block-wise Throughput 0 20 40 60 80 100 120 140 Throughput (tokens/s) No cache (a) Impact of Sliding window and block size 0.5 0.6 0.7 0.8 0.9 Gamma 76 77 78 79 80 81 82 83 GSM8K (5-shot) Accuracy No cache 20 40 60 80 100 120 140 Throughput (tokens/s) No cache 0.5% 1.8% 4.2% 6.5% 10.2% 20.0% 6.8x Speedup (b) Cache update frequency. 0.5 0.6 0.7 0.8 0.9 1.0 Threshold 72 74 76 78 80 82 GSM8K (5-shot) Accuracy Baseline 0 50 100 150 200 Throughput (tokens/s) Baseline 17.2% 11.8% 13.5% 10.2% 5.6% 64.5x Speedup (c) Confident-aware decoding. Figure 3: Ablation study and analysis of our proposed method. (a) Ablation study of our sliding window mechanism compared to block-wise decoding. (b) Analysis of cache update frequency under varying γ. The blue and orange lines represent accuracy and throughput, respectively. The numbers along the lines indicate the frequency of cache updates, assuming no baseline. (c) Analysis of cache update frequency under confident-aware decoding with varying ϵ. Table 4: Impact of attention threshold on accuracy and speedup under GSM8K (5-Shot) for LLaDA and LLaDA1.5 with generation length of 512. Elastic-Cache (Ours) Model No Cache Fast-dLLM γ = 0.5 γ = 0.7 γ = 0.8 γ = 0.85 γ = 0.9 γ = 0.95 LLaDA 77.10 3.6 (1.0×) 74.83 44.0 (12.2×) 71.57 109.9 (30.5×) 73.46 108.7 (30.2×) 74.30 103.9 (28.9×) 74.68 99.1 (27.5×) 77.71 91.5 (25.4×) 76.72 75.5 (21.0×) LLaDA-1.5 81.35 2.6 (1.0×) 80.82 36.8 (14.2×) 76.04 142.7 (54.9×) 77.63 138.6 (53.3×) 79.45 131.2 (50.5×) 80.21 129.9 (50.0×) 81.35 117.2 (45.1×) 83.02 98.4 (37.8×) Table 5: Comparison between Elastic-Cache and Fast-dLLM when varying few-shots and generation length. (a) Impact of few-shots on Accuracy and Speedup Under GSM8K (generation length of 1024) for LLaDA. Model. 3-shot 5-shot 8-shot Fast-dLLM 73.77 28.5 (1.0×) 76.04 25.0 (1.0×) 75.36 20.8 (1.0×) Elastic-Cache 75.13 185.3 (6.5×) 75.21 169.8 (6.8×) 75.28 143.9 (6.9×) (b) Impact of generation length on Accuracy and Speedup Un- der GSM8K (5-Shot), γ = 0.8 for LLaDA. Model. 256 512 1024 Fast-dLLM 77.94 53.7 (1.0×) 74.83 44.0 (1.0×) 76.04 25.0 (1.0×) Elastic-Cache 78.24 58.0 (1.1×) 77.71 91.5 (2.1×) 75.21 169.8 (6.8×) Cache (Ma et al., 2025), and DeepCache (Ma et al., 2024). Orthogonal accelerations exploit parallel/non-AR gen- eration (Gu et al., 2017; Xiao et al., 2023), block-wise diffusion (Arriola et al., 2025), fast sampling (Chen et al., 2023), test-time scaling (Ramesh & Mardani, 2025), and consistency models (Kou et al., 2024). However, most rely on temporal heuristics or fixed thresholds, leaving attention patterns underused. Our Perspective. We close this gap with attention-aware and layer-aware caching for diffusion LLMs: tracking most-attended tokens and depth-varying KV dynamics to guide recomputation, complementary to interval-based (Ma et al., 2025) and confidence-based (Wu et al., 2025) policies and compatible with the broader acceleration toolkit (Ainslie et al., 2023; Su et al., 2024; Touvron et al., 2023a;b; Dubey et al., 2024; Gu et al., 2017; Xiao et al., 2023; Arriola et al., 2025; Chen et al., 2023; Ramesh & Mardani, 2025; Kou et al., 2024). 6 CONCLUSION We presented Elastic-Cache, a training-free, architecture-agnostic policy that makes KV caching in diffusion LLMs adaptive along two axes: when to refresh (via an attention-aware drift test) and where to refresh (via a depth-selective update starting at a learned boundary layer). By block-caching distant MASK tokens, reusing shallow-layer caches, and refreshing only when the most-attended token indicates meaningful state change, Elastic-Cache removes large amounts of redundant QKV work. Across decoding steps, this yields substantial latency reductions with negligible impact on generation quality, addressing a key deployment bottleneck for diffusion decoders. Looking ahead, we plan to refine drift thresholds with learned predictors, formalize guarantees linking attention patterns to KV drift, and explore interplay with speculative decoding or other hardware-aware scheduling, extending the same principles to autoregressive LLMs and multimodal diffusion frameworks. 9 ETHICS STATEMENT This work targets inference-time efficiency for diffusion LLMs and does not introduce new data collection or model training. All evaluations use publicly available datasets and third-party checkpoints under their original licenses, no personally identifiable information is processed. While faster decoding can lower the cost of generation and thus broaden access, it may also amplify misuse. We neither change safety filters nor attempt to bypass alignment con- straints of the underlying models. We will document evaluation prompts and tasks, follow the usage policies of model providers, and encourage human oversight for downstream deployments, especially in high-stakes applications. REPRODUCIBILITY STATEMENT Elastic-Cache is training-free and defined by a small set of inference hyperparameters: the attention similarity thresh- old γ, block size and generation length. We will release code, configs, and scripts to reproduce all results: (i) reference implementations of Attention-Aware and Layer-Aware KV updates with ablation; (ii) exact prompts/datasets, metrics, and other criteria; and (iii) environment specs (CUDA/driver, framework versions) and hardware details (GPU type, batch sizes). We report wall-clock latency and accuracy metrics for each setting, and provide logs to our tables/figures from raw traces. REFERENCES Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebron, and Sumit Sanghai. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 4895–4901, 2023. Marianne Arriola, Aaron Gokaslan, Justin T. Chiu, Zhihan Yang, Zhixuan Qi, Jiaqi Han, Subham Sekhar Sahoo, and Volodymyr Kuleshov. Block diffusion: Interpolating between autoregressive and diffusion language models, 2025. URL https://arxiv.org/abs/2503.09573. Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne Van Den Berg. Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information Processing Systems, 34:17981–17993, 2021a. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021b. Andrew Campbell, Joe Benton, Valentin De Bortoli, Thomas Rainforth, George Deligiannidis, and Arnaud Doucet. A continuous time framework for discrete denoising models. Advances in Neural Information Processing Systems, 35:28266–28279, 2022. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Ed- wards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. Zixiang Chen, Huizhuo Yuan, Yongqian Li, Yiwen Kou, Junkai Zhang, and Quanquan Gu. Fast sampling via de- randomization for discrete diffusion models. arXiv preprint arXiv:2312.09193, 2023. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural infor- mation processing systems, 34:8780–8794, 2021. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 10 Bolin Gao and Lacra Pavel. On the properties of the softmax function with application in game theory and reinforce- ment learning. arXiv preprint arXiv:1704.00805, 2017. Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Ja- son Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 07 2024. URL https://zenodo.org/records/12608602. Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and LingPeng Kong. Diffuseq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933, 2022. Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, et al. Scaling diffusion language models via adaptation from autoregressive models. arXiv preprint arXiv:2410.17891, 2024. Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281, 2017. Xiaochuang Han, Sachin Kumar, and Yulia Tsvetkov. Ssd-lm: Semi-autoregressive simplex-based diffusion language model for text generation and modular control. arXiv preprint arXiv:2210.17432, 2022. Zhengfu He, Tianxiang Sun, Kuanning Wang, Xuanjing Huang, and Xipeng Qiu. Diffusionbert: Improving generative masked language models with diffusion models. arXiv preprint arXiv:2211.15029, 2022. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Stein- hardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020. Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022a. Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffu- sion models. Advances in neural information processing systems, 35:8633–8646, 2022b. Emiel Hoogeboom, Alexey A Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. Autoregressive diffusion models. arXiv preprint arXiv:2110.02037, 2021a. Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forr´e, and Max Welling. Argmax flows and multinomial diffusion: Learning categorical distributions. Advances in neural information processing systems, 34:12454–12465, 2021b. Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao. Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models, 2023. URL https://arxiv.org/abs/2301.12661. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. What does bert learn about the structure of language? In ACL 2019-57th Annual Meeting of the Association for Computational Linguistics, 2019. Ouail Kitouni, Niklas Nolte, James Hensman, and Bhaskar Mitra. Disk: A diffusion model for structured knowledge. arXiv preprint arXiv:2312.05253, 2023. Siqi Kou, Lanxiang Hu, Zhezhi He, Zhijie Deng, and Hao Zhang. Cllms: Consistency large language models. arXiv preprint arXiv:2403.00835, 2024. Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. Revealing the dark secrets of bert. arXiv preprint arXiv:1908.08593, 2019. Bo Li, Peiyuan Zhang, Kaichen Zhang, Fanyi Pu, Xinrun Du, Yuhao Dong, Haotian Liu, Yuanhan Zhang, Ge Zhang, Chunyuan Li, and Ziwei Liu. Lmms-eval: Accelerating the development of large multimodal models, March 2024. URL https://github.com/EvolvingLMMs-Lab/lmms-eval. 11 Tianyi Li, Mingda Chen, Bowei Guo, and Zhiqiang Shen. A survey on diffusion language models. arXiv preprint arXiv:2508.10875, 2025. Xiang Li, John Thickstun, Ishaan Gulrajani, Percy S Liang, and Tatsunori B Hashimoto. Diffusion-lm improves controllable text generation. Advances in neural information processing systems, 35:4328–4343, 2022. Zhiyuan Liu, Yicun Yang, Yaojie Zhang, Junjie Chen, Chang Zou, Qingyuan Wei, Shaobo Wang, and Linfeng Zhang. dllm-cache: Accelerating diffusion large language models with adaptive caching. arXiv preprint arXiv:2506.06295, 2025. Aaron Lou and Stefano Ermon. Reflected diffusion models. In International Conference on Machine Learning, pp. 22675–22701. PMLR, 2023. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023. Xinyin Ma, Gongfan Fang, and Xinchao Wang. Deepcache: Accelerating diffusion models for free. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 15762–15772, 2024. Xinyin Ma, Runpeng Yu, Gongfan Fang, and Xinchao Wang. dkv-cache: The cache for diffusion language models. arXiv preprint arXiv:2505.15781, 2025. Chenlin Meng, Kristy Choi, Jiaming Song, and Stefano Ermon. Concrete score matching: Generalized score matching for discrete data. Advances in Neural Information Processing Systems, 35:34532–34545, 2022. Shen Nie, Fengqi Zhu, Chao Du, Tianyu Pang, Qian Liu, Guangtao Zeng, Min Lin, and Chongxuan Li. Scaling up masked diffusion models on text. arXiv preprint arXiv:2410.18514, 2024. Shen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. Large language diffusion models. arXiv preprint arXiv:2502.09992, 2025a. Shen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. Large language diffusion models, 2025b. URL https://arxiv.org/abs/2502.09992. Jingyang Ou, Shen Nie, Kaiwen Xue, Fengqi Zhu, Jiacheng Sun, Zhenguo Li, and Chongxuan Li. Your absorbing discrete diffusion secretly models the conditional distributions of clean data. arXiv preprint arXiv:2406.03736, 2024. Reiner Pope, Sholto Douglas, Aakanksha Chowdhery, Jacob Devlin, James Bradbury, Jonathan Heek, Kefan Xiao, Shivani Agrawal, and Jeff Dean. Efficiently scaling transformer inference. Proceedings of machine learning and systems, 5:606–624, 2023. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by gener- ative pre-training. 2018. Vignav Ramesh and Morteza Mardani. Test-time scaling of diffusion models via noise trajectory search. arXiv preprint arXiv:2506.03164, 2025. Anna Rogers, Olga Kovaleva, and Anna Rumshisky. A primer in bertology: What we know about how bert works. Transactions of the association for computational linguistics, 8:842–866, 2021. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj¨orn Ommer. High-resolution image synthesis with latent diffusion models, 2022. URL https://arxiv.org/abs/2112.10752. Subham Sekhar Sahoo, Marianne Arriola, Yair Schiff, Aaron Gokaslan, Edgar Marroquin, Justin T Chiu, Alexan- der Rush, and Volodymyr Kuleshov. Simple and effective masked diffusion language models. arXiv preprint arXiv:2406.07524, 2024. Jiaxin Shi, Kehang Han, Zhe Wang, Arnaud Doucet, and Michalis K Titsias. Simplified and generalized masked diffusion for discrete data. arXiv preprint arXiv:2406.04329, 2024. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pp. 2256–2265. PMLR, 2015. 12 Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024. Haoran Sun, Lijun Yu, Bo Dai, Dale Schuurmans, and Hanjun Dai. Score-based continuous-time discrete diffusion models. arXiv preprint arXiv:2211.16750, 2022. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, Shujian Huang, and Quanquan Gu. Diffusion language models are versatile protein learners. arXiv preprint arXiv:2402.18567, 2024a. Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, Shujian Huang, and Quanquan Gu. Dplm-2: A multimodal diffusion protein language model. arXiv preprint arXiv:2410.13782, 2024b. Chengyue Wu, Hao Zhang, Shuchen Xue, Zhijian Liu, Shizhe Diao, Ligeng Zhu, Ping Luo, Song Han, and Enze Xie. Fast-dllm: Training-free acceleration of diffusion llm by enabling kv cache and parallel decoding. arXiv preprint arXiv:2505.22618, 2025. Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, and Tie yan Liu. A survey on non- autoregressive generation for neural machine translation and beyond, 2023. URL https://arxiv.org/abs/ 2204.09269. Zhen Xing, Qijun Feng, Haoran Chen, Qi Dai, Han Hu, Hang Xu, Zuxuan Wu, and Yu-Gang Jiang. A survey on video diffusion models. ACM Computing Surveys, 57(2):1–42, 2024. Dongchao Yang, Jianwei Yu, Helin Wang, Wen Wang, Chao Weng, Yuexian Zou, and Dong Yu. Diffsound: Discrete diffusion model for text-to-sound generation, 2023. URL https://arxiv.org/abs/2207.09983. Ling Yang, Ye Tian, Bowen Li, Xinchen Zhang, Ke Shen, Yunhai Tong, and Mengdi Wang. Mmada: Multimodal large diffusion language models. arXiv preprint arXiv:2505.15809, 2025. Jiacheng Ye, Zhihui Xie, Lin Zheng, Jiahui Gao, Zirui Wu, Xin Jiang, Zhenguo Li, and Lingpeng Kong. Dream 7b, 2025. URL https://hkunlp.github.io/blog/2025/dream. Zebin You, Shen Nie, Xiaolu Zhang, Jun Hu, Jun Zhou, Zhiwu Lu, Ji-Rong Wen, and Chongxuan Li. Llada-v: Large language diffusion models with visual instruction tuning. arXiv preprint arXiv:2505.16933, 2025. Runpeng Yu, Xinyin Ma, and Xinchao Wang. Dimple: Discrete diffusion multimodal large language model with parallel decoding, 2025. URL https://arxiv.org/abs/2505.16990. Kaichen Zhang, Bo Li, Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, Kairui Hu, Shuai Liu, Yuanhan Zhang, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Lmms-eval: Reality check on the evaluation of large multimodal models, 2024a. URL https://arxiv.org/abs/2407.12772. Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision, pp. 169–186. Springer, 2024b. Kaiwen Zheng, Yongxin Chen, Hanzi Mao, Ming-Yu Liu, Jun Zhu, and Qinsheng Zhang. Masked diffusion models are secretly time-agnostic masked models and exploit inaccurate categorical sampling. arXiv preprint arXiv:2409.02908, 2024. Lin Zheng, Jianbo Yuan, Lei Yu, and Lingpeng Kong. A reparameterized discrete diffusion model for text generation. ArXiv, abs/2302.05737, 2023. Fengqi Zhu, Rongzhen Wang, Shen Nie, Xiaolu Zhang, Chunwei Wu, Jun Hu, Jun Zhou, Jianfei Chen, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. Llada 1.5: Variance-reduced preference optimization for large language diffusion models, 2025. URL https://arxiv.org/abs/2505.19223. 13 APPENDIX CONTENTS A Theoretical Validation for Elastic-Cache 15 A.1 Notation and Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.2 Background Lemmas and Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 A.3 Layer-Wise KV Drift Monotonicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 A.4 Attention Concentration and Drift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 A.5 Implications for Elastic-Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 B Detailed Experiment Setup 22 C Batch Implementation of Elastic-Cache 23 D Use of Large Language Models 23 E Sample Response 23 14 A THEORETICAL VALIDATION FOR ELASTIC-CACHE A.1 NOTATION AND SETUP • L: number of transformer layers, indexed by ℓ∈{1, . . . , L} • T: total denoising steps, indexed by t ∈{0, . . . , T} • N: sequence length • d: hidden dimension; dk, dv: key and value dimensions • Ht,ℓ i ∈Rd: hidden state of token i at step t, layer ℓ • Kt,ℓ i , Vt,ℓ i ∈Rdk, Rdv: key and value of token i • St,ℓ∈RN×N: attention weights at step t, layer ℓ • D<t: decoded token positions up to step t −1 • Mt: masked token positions at step t • Mt β: sliding window of size β over masked positions • αt,ℓ k := P q∈Mt β St,ℓ q,k : total attention token k receives • ∆Hi := Ht,ℓ i −Ht−1,ℓ i : change in hidden state • ¯∆t,ℓ:= 1 N PN i=1 ∥∆Ht,ℓ i ∥2 : average hidden state drift • ∆max := maxi ∥∆Hi∥2 : maximum hidden state change • Γt,ℓ:= αt,ℓ T t,ℓ−maxk̸=T t,ℓαt,ℓ k ≥0 : Attention Gap A.2 BACKGROUND LEMMAS AND ASSUMPTIONS Lemma A.1 (Lipschitz Continuity of Softmax). Based on the Proposition 2 in Gao & Pavel (2017), the softmax function σ : Rn →∆n−1 defined by σ(z)i = exp(zi) Pn j=1 exp(zj) (9) is 1-Lipschitz continuous with respect to the ℓ2 norm: ∥σ(z) −σ(z′)∥2 ≤∥z −z′∥2 , ∀z, z′ ∈Rn (10) Assumption A.2 (Bounded Representations). At each layer ℓand step t: Ht,ℓ i 2 ≤Rℓ Assumption A.3 (Lipschitz Network Components). The projection matrices satisfy Wℓ Q 2 , Wℓ K 2 , Wℓ V 2 ≤ Wmax. The feedforward network at layer ℓis LFFN-Lipschitz continuous. Assumption A.4 (Progressive Unmasking). At each step t, a non-empty subset Dt ⊆Mt−1 is unmasked: |D<t| increases and Mt = Mt−1 \ Dt. Assumption A.5 (Layer-Wise Representation Dynamics). There exists ℓ∗∈{1, . . . , L} and functions fℓ(t) →0 as t →T for ℓ≤ℓ∗such that: • Shallow layers (ℓ≤ℓ∗): The expected hidden state change for decoded tokens vanishes: For ℓ≤ℓ∗: E[ Ht,ℓ i −Ht−1,ℓ i 2 | i ∈D<t] ≤fℓ(t) →0 • Deep layers (ℓ> ℓ∗): The expected change remains bounded away from zero: lim inf t→T E[ Ht,ℓ i −Ht−1,ℓ i 2 | i ∈D<t] ≥cℓ> 0 This reflects that early layers encode local lexical patterns that stabilize quickly, while deep layers encode semantic relationships that continue evolving (Kovaleva et al., 2019; Jawahar et al., 2019; Rogers et al., 2021). Our experiments validate this (Figure 1b). 15 Assumption A.6 (Attention Concentration). The attention gap is a non-negligible fraction of total attention mass: Γt,ℓ≥c · |Mt β| (11) for some constant c > 0 independent of N, t, ℓ. Definition A.7 (KV Drift). The KV drift at layer ℓ, step t for token i is: ∆t,ℓ i := Kt,ℓ i −Kt−1,ℓ i 2 + Vt,ℓ i −Vt−1,ℓ i 2 (12) Average drift over decoded tokens: ∆t,ℓ:= 1 |D<t| P i∈D<t ∆t,ℓ i A.3 LAYER-WISE KV DRIFT MONOTONICITY This theorem formalizes the observation that KV drift increases with layer depth, providing theoretical justification for our layer-aware cache refresh strategy that selectively recomputes deeper layers while reusing shallow-layer caches. Figure 1a empirically validates this monotonicity property. Theorem A.8 (Layer-Wise KV Drift Monotonicity). Under Assumptions A.2–A.5, there exists a transition layer ℓ∗∈ {1, . . . , L} such that for sufficiently large t (when most tokens are decoded): Et[∆t,ℓ] ≤Et[∆t,ℓ′], ∀ℓ≤ℓ∗< ℓ′ ≤L (13) Proof. Step 1: Relating KV Drift to Hidden State Drift. The key-value projections at layer ℓare: Kt,ℓ i = W ℓ KHt,ℓ i (14) Vt,ℓ i = W ℓ V Ht,ℓ i (15) By the triangle inequality and Assumption A.3 (∥W ℓ K∥2, ∥W ℓ V ∥2 ≤Wmax): ∥Kt,ℓ i −Kt−1,ℓ i ∥2 = ∥W ℓ K(Ht,ℓ i −Ht−1,ℓ i )∥2 ≤∥W ℓ K∥2∥Ht,ℓ i −Ht−1,ℓ i ∥2 ≤Wmax∥∆Ht,ℓ i ∥2 (16) Similarly for values: ∥Vt,ℓ i −Vt−1,ℓ i ∥2 ≤Wmax∥∆Ht,ℓ i ∥2 (17) Therefore: ∆t,ℓ i = ∥Kt,ℓ i −Kt−1,ℓ i ∥2 + ∥Vt,ℓ i −Vt−1,ℓ i ∥2 ≤2Wmax∥∆Ht,ℓ i ∥2 (18) Step 2: Layer Recursion for Hidden States. At layer ℓ, the transformer block computes: Ht,ℓ+1 i = Ht,ℓ i + Attnℓ(Qt,ℓ i , Kt,ℓ, Vt,ℓ) + FFNℓ(Ht,ℓ i + Attnℓ(·)) (19) where the attention output is: Attnℓ(Qt,ℓ i , Kt,ℓ, Vt,ℓ) = N X j=1 St,ℓ i,jVt,ℓ j (20) The change in hidden state at layer ℓ+ 1 satisfies: ∥∆Ht,ℓ+1 i ∥2 = ∥Ht,ℓ+1 i −Ht−1,ℓ+1 i ∥2 ≤∥∆Ht,ℓ i ∥2 + ∥Attnℓ(t) −Attnℓ(t −1)∥2 + ∥FFNℓ(inputt) −FFNℓ(inputt−1)∥2 (21) 16 By Assumption A.3, the FFN is LFFN-Lipschitz: ∥FFNℓ(inputt) −FFNℓ(inputt−1)∥2 ≤LFFN∥inputt −inputt−1∥2 (22) The FFN input is Ht,ℓ i + Attnℓ(·), so: ∥inputt −inputt−1∥2 ≤∥∆Ht,ℓ i ∥2 + ∥Attnℓ(t) −Attnℓ(t −1)∥2 (23) Therefore: ∥∆Ht,ℓ+1 i ∥2 ≤(1 + LFFN)∥∆Ht,ℓ i ∥2 + (1 + LFFN)∥Attnℓ(t) −Attnℓ(t −1)∥2 (24) Step 3: Bounding Attention Output Change. Denote ∆t,ℓ,i attn := ∥Attnℓ(t) −Attnℓ(t −1)∥2. We decompose: N X j=1 St,ℓ i,jVt,ℓ j − N X j=1 St−1,ℓ i,j Vt−1,ℓ j = N X j=1 St,ℓ i,j(Vt,ℓ j −Vt−1,ℓ j ) + N X j=1 (St,ℓ i,j −St−1,ℓ i,j )Vt−1,ℓ j (25) Taking norms and applying triangle inequality: ∆t,ℓ,i attn ≤ N X j=1 St,ℓ i,j∥Vt,ℓ j −Vt−1,ℓ j ∥2 + N X j=1 |St,ℓ i,j −St−1,ℓ i,j |∥Vt−1,ℓ j ∥2 (26) Step 3a: First term (value changes). Since P j St,ℓ i,j = 1 (attention weights sum to 1): N X j=1 St,ℓ i,j∥Vt,ℓ j −Vt−1,ℓ j ∥2 ≤ N X j=1 St,ℓ i,jWmax∥∆Ht,ℓ j ∥2 (by Assumption A.3) = WmaxEj∼St,ℓ i,: [∥∆Ht,ℓ j ∥2] ≤Wmax ¯∆t,ℓ (27) Step 3b: Second term (attention weight changes). By Cauchy-Schwarz: P j |aj|bj ≤(P j |aj|) maxj bj By Assumption A.2: ∥Vt−1,ℓ j ∥2 ≤WmaxRℓ Therefore: N X j=1 |St,ℓ i,j −St−1,ℓ i,j |∥Vt−1,ℓ j ∥2 ≤WmaxRℓ N X j=1 |St,ℓ i,j −St−1,ℓ i,j | (28) By the inequality ∥v∥1 ≤√n∥v∥2: N X j=1 |St,ℓ i,j −St−1,ℓ i,j | ≤ √ N∥St,ℓ i,: −St−1,ℓ i,: ∥2 (29) By Lemma A.1 (softmax is 1-Lipschitz in ℓ2): ∥St,ℓ i,: −St−1,ℓ i,: ∥2 ≤∥zt,ℓ i −zt−1,ℓ i ∥2 (30) where zt,ℓ i = (zt,ℓ i,1, . . . , zt,ℓ i,N) with zt,ℓ i,j = 1 √dk Qt,ℓ i · Kt,ℓ j . 17 Step 3c: Bounding logit changes. For each component: zt,ℓ i,j −zt−1,ℓ i,j = 1 √dk [Qt,ℓ i · Kt,ℓ j −Qt−1,ℓ i · Kt−1,ℓ j ] = 1 √dk [Qt,ℓ i · (Kt,ℓ j −Kt−1,ℓ j ) + (Qt,ℓ i −Qt−1,ℓ i ) · Kt−1,ℓ j ] (31) By Cauchy-Schwarz and the bounds from Assumptions A.2–A.3: |zt,ℓ i,j −zt−1,ℓ i,j | ≤ 1 √dk [WmaxRℓ· Wmax∥∆Ht,ℓ j ∥2 + Wmax∥∆Ht,ℓ i ∥2 · WmaxRℓ] = W 2 maxRℓ √dk [∥∆Ht,ℓ i ∥2 + ∥∆Ht,ℓ j ∥2] ≤2W 2 maxRℓ √dk max k ∥∆Ht,ℓ k ∥2 (32) Taking ℓ2 norm of the logit vector: ∥zt,ℓ i −zt−1,ℓ i ∥2 2 = N X j=1 |zt,ℓ i,j −zt−1,ℓ i,j |2 ≤N 2W 2 maxRℓ √dk 2 (max k ∥∆Ht,ℓ k ∥2)2 (33) Therefore: ∥zt,ℓ i −zt−1,ℓ i ∥2 ≤2W 2 maxRℓ √ N √dk max k ∥∆Ht,ℓ k ∥2 (34) For typical sequences where maxk ∥∆Ht,ℓ k ∥2 = O( ¯∆t,ℓ): ∥zt,ℓ i −zt−1,ℓ i ∥2 ≤2W 2 maxRℓ √ N √dk ¯∆t,ℓ (35) Step 3d: Combining. Combining the bounds from Steps 3a-3c: ∆t,ℓ,i attn ≤Wmax ¯∆t,ℓ+ WmaxRℓ √ N · 2W 2 maxRℓ √ N √dk ¯∆t,ℓ = Wmax ¯∆t,ℓ  1 + 2W 2 maxR2 ℓN √dk  (36) Define: Cattn(ℓ) := 2W 2 maxR2 ℓN √dk = O W 2 maxR2 ℓN √dk  (37) Then: ∆t,ℓ,i attn ≤Wmax(1 + Cattn(ℓ)) ¯∆t,ℓ (38) Step 4: Recursive Bound on Hidden State Drift. Substituting equation 38 into equation 24: ∥∆Ht,ℓ+1 i ∥2 ≤(1 + LFFN)∥∆Ht,ℓ i ∥2 + (1 + LFFN)Wmax(1 + Cattn(ℓ)) ¯∆t,ℓ (39) Taking averages over all tokens: ¯∆t,ℓ+1 ≤[(1 + LFFN) + (1 + LFFN)Wmax(1 + Cattn(ℓ))] ¯∆t,ℓ (40) 18 Define the layer-dependent amplification factor: λℓ:= (1 + LFFN)[1 + Wmax(1 + Cattn(ℓ))] (41) Then: ¯∆t,ℓ+1 ≤λℓ¯∆t,ℓ (42) Step 5: Layer-wise Accumulation by Induction. By induction on ℓ: ¯∆t,ℓ≤¯∆t,1 ℓ−1 Y k=1 λk (43) Since λℓ> 1, drift accumulates multiplicatively across layers. Step 6: Applying Layer-Wise Specialization. By Assumption A.5: • Shallow layers (ℓ≤ℓ∗): ¯∆t,ℓ≤fℓ(t) →0 as t →T • Deep layers (ℓ> ℓ∗): lim inft→T ¯∆t,ℓ≥cℓ> 0 By equation 18: E[∆t,ℓ] = E " 1 |D<t| X i∈D<t ∆t,ℓ i # ≤2Wmax ¯∆t,ℓ (44) Therefore, for sufficiently large t and any ℓ≤ℓ∗< ℓ′: E[∆t,ℓ] ≤2Wmaxfℓ(t) →0 (45) E[∆t,ℓ′] ≥2Wmaxcℓ′ > 0 (46) This establishes: E[∆t,ℓ] < E[∆t,ℓ′], ∀ℓ≤ℓ∗< ℓ′ (47) A.4 ATTENTION CONCENTRATION AND DRIFT Theorem A.9 (Attention Concentration and Drift). Let T t,ℓ= arg maxk∈D<t P q∈Mt β St,ℓ q,k be the most-attended token at layer ℓ, step t. Under Assumptions A.2–A.3, the most-attended token has drift bounded by: ∆t,ℓ T t,ℓ≤¯∆t,ℓ+ ϵt (48) where ¯∆t,ℓ= 1 |D<t| P i∈D<t ∆t,ℓ i is the average drift and ϵt = O  √dk Rℓ √ N  . Proof. Step 1: Bounding Attention Weight Changes. We derive how attention weights St,ℓ q,k change when hidden states change. Step 1a: Logit change. The attention logits are zq,k = 1 √dk Qq · Kk where: Qq = WQHq, Kk = WKHk (49) The change in logits between steps t and t −1 is: zt,ℓ q,k −zt−1,ℓ q,k = 1 √dk [Qt,ℓ q · Kt,ℓ k −Qt−1,ℓ q · Kt−1,ℓ k ] (50) 19 Using the identity ab −a′b′ = a(b −b′) + (a −a′)b′: = 1 √dk [Qt,ℓ q · (Kt,ℓ k −Kt−1,ℓ k ) + (Qt,ℓ q −Qt−1,ℓ q ) · Kt−1,ℓ k ] (51) Step 1b: Apply Cauchy-Schwarz inequality. Taking absolute value and applying Cauchy-Schwarz: |zt,ℓ q,k −zt−1,ℓ q,k | ≤ 1 √dk [∥Qt,ℓ q ∥2∥Kt,ℓ k −Kt−1,ℓ k ∥2 + ∥Qt,ℓ q −Qt−1,ℓ q ∥2∥Kt−1,ℓ k ∥2] (52) Step 1c: Bound projection norms. By Assumption A.2: ∥Ht,ℓ i ∥2 ≤Rℓfor all i, t. By Assumption A.3: ∥WQ∥2, ∥WK∥2 ≤Wmax. Therefore: ∥Qt,ℓ q ∥2 ≤∥WQ∥2∥Ht,ℓ q ∥2 ≤WmaxRℓ (53) ∥Kt,ℓ k ∥2 ≤∥WK∥2∥Ht,ℓ k ∥2 ≤WmaxRℓ (54) ∥Kt,ℓ k −Kt−1,ℓ k ∥2 ≤∥WK∥2∥Ht,ℓ k −Ht−1,ℓ k ∥2 ≤Wmax∥∆Hk∥2 (55) ∥Qt,ℓ q −Qt−1,ℓ q ∥2 ≤Wmax∥∆Hq∥2 (56) Substituting these bounds: |zt,ℓ q,k −zt−1,ℓ q,k | ≤ 1 √dk [WmaxRℓ· Wmax∥∆Hk∥2 + Wmax∥∆Hq∥2 · WmaxRℓ] = W 2 maxRℓ √dk [∥∆Hk∥2 + ∥∆Hq∥2] (57) Step 1d: Use maximum drift. Since ∥∆Hi∥2 ≤∆max for all i: |zt,ℓ q,k −zt−1,ℓ q,k | ≤2W 2 maxRℓ √dk ∆max (58) Step 1e: Compute ℓ2 norm of logit vector. The logit vector for query q is zq = (zq,1, . . . , zq,N) ∈RN. By the previous bound applied to each component: ∥zt,ℓ q −zt−1,ℓ q ∥2 2 = N X k=1 |zt,ℓ q,k −zt−1,ℓ q,k |2 ≤ N X k=1 2W 2 maxRℓ √dk 2 ∆2 max = N · 4W 4 maxR2 ℓ dk ∆2 max (59) Taking square root: ∥zt,ℓ q −zt−1,ℓ q ∥2 ≤2W 2 maxRℓ √ N √dk ∆max (60) Step 1f: Apply softmax Lipschitz property. By Lemma A.1 (softmax is 1-Lipschitz in ℓ2 norm): ∥St,ℓ q,: −St−1,ℓ q,: ∥2 ≤∥zt,ℓ q −zt−1,ℓ q ∥2 ≤2W 2 maxRℓ √ N √dk ∆max (61) Step 1g: Convert to ℓ∞norm. Since ∥v∥∞≤∥v∥2 for any vector v: max k |St,ℓ q,k −St−1,ℓ q,k | ≤2W 2 maxRℓ √ N √dk ∆max (62) 20 Step 2: Change in Total Attention Received. For token k, the change in total attention received is: |αt,ℓ k −αt−1,ℓ k | = X q∈Mt β (St,ℓ q,k −St−1,ℓ q,k ) ≤ X q∈Mt β |St,ℓ q,k −St−1,ℓ q,k | (triangle inequality) ≤|Mt β| · max q max k |St,ℓ q,k −St−1,ℓ q,k | (bound by max) (63) Using equation 62: |αt,ℓ k −αt−1,ℓ k | ≤|Mt β| · 2W 2 maxRℓ √ N √dk ∆max (64) Step 3: Relating to KV Drift. Recall that KV drift is ∆t,ℓ i = ∥Kt,ℓ i −Kt−1,ℓ i ∥2 + ∥Vt,ℓ i −Vt−1,ℓ i ∥2. By Assumption A.3: ∆t,ℓ i ≤Wmax∥∆Hi∥2 + Wmax∥∆Hi∥2 = 2Wmax∥∆Hi∥2 (65) Therefore: ∥∆Hi∥2 ≥ ∆t,ℓ i 2Wmax . In particular: ∆max ≥maxi ∆t,ℓ i 2Wmax . Substituting into equation 64: |αt,ℓ k −αt−1,ℓ k | ≤|Mt β| · 2W 2 maxRℓ √ N √dk · maxi ∆t,ℓ i 2Wmax = |Mt β| · WmaxRℓ √ N √dk max i ∆t,ℓ i (66) Step 4: Stability Constraint and Excess Drift. Suppose T t,ℓhas drift ∆t,ℓ T t,ℓ= ¯∆t,ℓ+ ε where ε > 0 is excess drift beyond average. Then: |αt,ℓ T t,ℓ−αt−1,ℓ T t,ℓ| ≤|Mt β| · WmaxRℓ √ N √dk ( ¯∆t,ℓ+ ε) (67) While tokens with average drift have: |αt,ℓ k −αt−1,ℓ k | ≤|Mt β| · WmaxRℓ √ N √dk ¯∆t,ℓ (68) The differential attention change is: ∆differential = |Mt β| · WmaxRℓ √ N √dk ε (69) For T t,ℓto remain most-attended, the gap at step t −1 must absorb this differential: Γt−1,ℓ≥∆differential = |Mt β| · WmaxRℓ √ N √dk ε (70) Step 5: Assuming Bounded Attention Gap. Applying the assumption A.6: c · |Mt β| ≥|Mt β| · WmaxRℓ √ N √dk ε (71) 21 Canceling |Mt β| (assuming |Mt β| > 0): c ≥WmaxRℓ √ N √dk ε (72) Solving for ε: ε ≤ c√dk WmaxRℓ √ N = O  √dk Rℓ √ N  (73) Therefore: ∆t,ℓ T t,ℓ≤¯∆t,ℓ+ O  √dk Rℓ √ N  (74) A.5 IMPLICATIONS FOR ELASTIC-CACHE These results provide theoretical justification for our design: • Theorem A.8: Deeper layers have larger KV drift, justifying layer-aware refresh starting from ℓ∗ • Theorem A.9: Most-attended tokens have minimal drift, validating their use as cache staleness indicators B DETAILED EXPERIMENT SETUP Implementation Details. We conduct all the experiments on a single NVIDIA A100 80GB GPU to ensure a con- sistent hardware environment. We evaluate our proposed method, Elastic-Cache, on three large scale DLMs: LLaDA-Instruct (Nie et al., 2025a), LLaDA-1.5 (Zhu et al., 2025), and the multimodal LLaDA-V (You et al., 2025). Our evaluation spans both language and multimodal reasoning tasks including MBPP (Austin et al., 2021b), Hu- manEval (Chen et al., 2021) for coding tasks, MATH (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021) for Maths related tasks and MathVista (Lu et al., 2023) MathVerse (Zhang et al., 2024b) for multimodal mathematical reason- ing tasks. The major hyperparameters for Elastic-Cache, unless otherwise specified in ablation studies, are set to a attention threshold of γ = 0.9, a confidence threshold for parallel decoding of ϵ = 0.9, and a cache block size of 32. To establish a rigorous and fair comparison for all baseline methods, were re-evaluate all the methods including the original diffusion model LLaDA Nie et al. (2025a) and Fast-dLLM (Wu et al., 2025). This process eliminates confounding variables from hardware or software discrepancies and ensures that all observed performance differences are attributable to the methods themselves. Evaluation Framework and Metrics. Our evaluation protocol comprehensively assesses both inference efficiency and the preservation of model performance across a variety of tasks. For standardization and reproducibility, we conduct all task-specific evaluations using the lm-eval-harness library (Gao et al., 2024). We measure inference speed by throughput in tokens per second (t/s), which we calculate as the average number of tokens the model generates over the entire sequence until it produces an end-of-sequence (<eos>) token. We keep our calculation methodology consistent with that of Fast-dLLM (Wu et al., 2025) to ensure comparable speed benchmarks. We measure task-specific performance using established metrics appropriate for each benchmark: for GSM8K (Cobbe et al., 2021), we report 5-shot flexible extract exact match accuracy; for the MATH dataset (Hendrycks et al., 2021), we report the 4-shot math verify score using the minerva math variant; for HumanEval (Chen et al., 2021), we evaluate 0- shot accuracy using a post-processing script consistent with the Fast-dLLM implementation to ensure fair comparison; and for MBPP (Austin et al., 2021b), we report the 3-shot pass@1 metric. For multimodal evaluation on LLaDA- V (You et al., 2025), we utilize an evaluation suite adapted from its official implementation using the lmms-eval framework (Zhang et al., 2024a; Li et al., 2024) to test on the MathVista (Lu et al., 2023) and MathVerse (Zhang et al., 2024b) benchmarks. For MathVista, we report the gpt eval score, and for MathVerse, we report the gpt eval score on the mathverse testmini vision dominant subset. Hyper-parameters: The hyper-parameters used for Elastic-Cache are provided in Table 6. Specifically, • For LLaDA and LLaDA-1.5, γ = 0.9 everywhere; β is mostly 16, except GSM8K (β = 32 at 256, 16 at 512) and HumanEval (β = 32 at both 256/512). • For LLaDA-V (MathVista/MathVerse), γ = 0.7 and β = 16 for both 256 and 512 token lengths. • All tasks are reported at generation lengths 256 and 512. 22 C BATCH IMPLEMENTATION OF ELASTIC-CACHE In our experimental setup, we use a batch size of one for comparison purposes. However, in a real-world deployment scenario, requests can be grouped into batches to take advantage of parallelism and speed up decoding. In this section, we extend our implementation of Elastic-Cache to support batch decoding and compare it to baselines under different batch sizes. Batch implementation of Elastic-Cache is very challenging because the query length varies during caching and cache updates. Specifically, the query positions at time t are represented as Qt, while caching is in progress, and I when cache updates are being performed. Since Elastic-Cache automatically triggers cache updates, each sample within a batch will have a different length due to the varying triggers for each sample. This poses a significant challenge to parallelism and efficiency of the method. To address this problem, we propose a solution that involves rearranging the batch and concatenating all sentences within the batch into a single sentence. This approach enables us to compute the sentences in parallel, regardless of their current lengths. We then reimplement the multi-head attention function to compute attention on the concatenated sentence (Algorithm 2). Algorithm 2 Batch attention computation 1: Require: Batch samples Qi,t,l Qi,t, ˜Ki,t,l I , ˜Vi,t,l I ; Batch size B; 2: Qt,l ←Cat([Q1,t,l Q1,t, . . . , QB,t,l QB,t]) 3: for i = 1, . . . , B do 4: Kt,l ←Cat([Kt,l, {Ki,t,l I } × |Qi,t|]); 5: Vt,l ←Cat([Vt,l, {Vi,t,l I } × |Qi,t|]); 6: end for 7: At,l = softmax  Qt,l(Kt,l)⊤ √dk  Vt,l 8: return At,l D USE OF LARGE LANGUAGE MODELS An LLM was used to help solely polish the writing of the paper, while all method, idea and experiments were prepared and carried out entirely by the authors. E SAMPLE RESPONSE In the following, we present several examples of actual generation results produced by our approach under different parameter settings. 23 Table 6: The hyper-parameters of Elastic-Cache under various settings. Model Benchmark Gen Length β γ LLaDA GSM8K (5-shot) 256 32 0.9 512 16 0.9 MATH (4-shot) 256 16 0.9 512 16 0.9 Humaneval (0-shot) 256 32 0.9 512 32 0.9 MBPP (3-shot) 256 16 0.9 512 16 0.9 LLaDA-1.5 GSM8K (5-shot) 256 16 0.9 512 16 0.9 MATH (4-shot) 256 16 0.9 512 16 0.9 Humaneval (0-shot) 256 32 0.9 512 32 0.9 MBPP (3-shot) 256 16 0.9 512 16 0.9 LLaDA-V Mathvista 256 16 0.7 512 16 0.7 Mathverse 256 16 0.7 512 16 0.7 Table 7: Example – Qualitative comparison for different γ, LLaDA-1.5 Prompt: A store sells 3 types of fruit: apples at $2 each, bananas at $1 each, and oranges at $3 each. If Sarah buys 4 apples, 6 bananas, and 2 oranges, how much does she spend in total? γ = 0.5 γ = 0.7 γ = 0.95 To determine how much Sarah spends in total, we need to calculate the cost of each type of fruit separately and then sum these amounts. 1. **Cost of apples:** - Sarah buys 4 apples. - Each apple costs $2. - Total cost for apples = (4 × 2 = 8) dollars. 2. **Cost of bananas:** - Sarah buys 6 bananas. - Each banana costs $1. - Total cost for bananas = (6 × 1 = 6) dollars. 3. **Cost of oranges:** - Sarah buys 2 oranges. - Each orange costs $3. - Total cost for oranges = (2 × 3 = 6) dollars. Now, we sum the costs of all the fruits to find the total amount Sarah spends: 8 (apples) + 6 (bananas) + 6 (oranges) = 20 dollars Therefore, Sarah spends a total of 20 dollars. Time: 3.1s To determine how much Sarah spends in total, we need to calculate the cost of each type of fruit separately and then sum these amounts. First, let’s calculate the cost of the ap- ples: - Sarah buys 4 apples. - Each ap- ple costs $2. - Therefore, the cost of apples is (4 × 2 = 8) dollars. Next, let’s calculate the cost of the ba- nanas: - Sarah buys 6 bananas. - Each banana costs $1. - Therefore, the cost of bananas is (6 × 1 = 6) dollars. Finally, let’s calculate the cost of the or- anges: - Sarah buys 2 oranges. - Each orange costs $3. - Therefore, the cost of oranges is (2 × 3 = 6) dollars. Now, we sum the costs of all the fruits to find the total amount Sarah spends: - Total cost = Cost of apples + Cost of bananas + Cost of oranges - Total cost = (8 + 6 + 6 = 20) dollars So, Sarah spends a total of 20 dollars. Time: 3.4s To determine how much Sarah spends in total, we need to calculate the cost of each type of fruit separately and then sum these amounts. First, let’s calculate the cost of the ap- ples: - Sarah buys 4 apples. - Each ap- ple costs $2. - Total cost for apples = (4 × 2 = 8) dollars. Next, let’s calculate the cost of the ba- nanas: - Sarah buys 6 bananas. - Each banana costs $1. - Total cost for ba- nanas = (6 × 1 = 6) dollars. Then, let’s calculate the cost of the or- anges: - Sarah buys 2 oranges. - Each orange costs $3. - Total cost for oranges = (2 × 3 = 6) dollars. Now, we sum the costs of all the fruits to find the total amount spent: - Total cost = Cost of apples + Cost of bananas + Cost of oranges - Total cost = 8 + 6 + 6 - Total cost = 20 dollars Therefore, Sarah spends a total of 20 dollars. Time: 3.5s 24 Table 8: Example – Qualitative comparison for different γ, LLaDA-1.5 Prompt: Tom has 24 marbles. He gives 1/3 of them to his sister and 1/4 of the remaining marbles to his brother. How many marbles does Tom have left? γ = 0.5 γ = 0.7 γ = 0.95 To determine how many marbles Tom has left, we will follow these steps: 1. Calculate the number of marbles Tom gives to his sister. 2. Subtract the marbles given to his sister from the original number of marbles. 3. Calcu- late the number of marbles Tom gives to his brother. 4. Subtract the marbles given to his brother from the remaining marbles. Let’s start with the first step: **Step 1: Calculate the number of mar- bles Tom gives to his sister.** Tom gives 1 3 of his 24 marbles to his sister. 1 3 × 24 = 8 So, Tom gives 8 marbles to his sister. **Step 2: Subtract the marbles given to his sister from the original number of marbles.** 24 −8 = 16 So, after giving 8 marbles to his sister, Tom has 16 marbles left. **Step 3: Calculate the number of mar- bles Tom gives to his brother.** Tom gives 1 4 of the remaining 16 mar- bles to his brother. 1 4 ×16 = 4 So, Tom gives 4 marbles to his brother. **Step 4: Subtract the marbles given to his brother from the remaining mar- bles.** 16 −4 = 12 So, after giving 4 marbles to his brother, Tom has 12 marbles left. Therefore, the number of marbles Tom has left is 12 . Time: 3.3s To determine how many marbles Tom has left, we will follow these steps: 1. Calculate the number of marbles Tom gives to his sister. 2. Subtract the marbles given to his sister from the original number of marbles. 3. Calcu- late the number of marbles Tom gives to his brother. 4. Subtract the marbles given to his brother from the remaining marbles. Let’s start with the first step: **Step 1: Calculate the number of mar- bles Tom gives to his sister.** Tom gives 1 3 of his 24 marbles to his sis- ter. Marbles given to sister = 1 3×24 = 8 **Step 2: Subtract the marbles given to his sister from the original number of marbles.** Remaining marbles after giving to sister = 24 −8 = 16 **Step 3: Calculate the number of mar- bles Tom gives to his brother.** Tom gives 1 4 of the remaining 16 mar- bles to his brother. Marbles given to brother = 1 4 × 16 = 4 **Step 4: Subtract the marbles given to his brother from the remaining mar- bles.** Remaining marbles after giving to brother = 16 −4 = 12 Therefore, the number of marbles Tom has left is 12 . Time: 3.1s To determine how many marbles Tom has left, we will follow these steps: 1. Calculate the number of marbles Tom gives to his sister. 2. Subtract the marbles given to his sister from the to- tal number of marbles. 3. Calculate the number of marbles Tom gives to his brother. 4. Subtract the marbles given to his brother from the remaining mar- bles. **Step 1: Calculate the number of mar- bles Tom gives to his sister.** Tom gives 1 3 of his 24 marbles to his sis- ter. Marbles given to sister = 1 3×24 = 8 **Step 2: Subtract the marbles given to his sister from the total number of mar- bles.** Remaining marbles after giving to sister = 24 −8 = 16 **Step 3: Calculate the number of mar- bles Tom gives to his brother.** Tom gives 1 4 of the remaining 16 mar- bles to his brother. Marbles given to brother = 1 4 × 16 = 4 **Step 4: Subtract the marbles given to his brother from the remaining mar- bles.** Remaining marbles after giving to brother = 16 - 4 = 12 Therefore, the number of marbles Tom has left is 12 . Time: 3.2s 25 Table 9: Example – Qualitative comparison of responses under different γ. Prompt: Points A and B are on parabola y = 4x2 + 7x −1, and the origin is the midpoint of AB. Find the square of the length of AB. γ = 0.5 γ = 0.7 γ = 0.9 Let A = (x1, y1) and B = (x2, y2). Since the origin is the midpoint of AB, we have: x1 + x2 2 , y1 + y2 2  = (0, 0). This implies: x1 + x2 = 0 and y1 + y2 = 0. Since A and B lie on the parabola y = 4x2 + 7x −1, we have: y1+y2 = (4x2 1+7x1−1)+(4x2 2+7x2−1) = 0. Now, using the equations x1 + x2 = 0 and y1 + y2 = 0, we get: 4x2 1 + 7x1 −1 + 4x2 2 + 7x2 −1 = 0. This simplifies to: 4x2 1 + 7x1 + 4x2 2 + 7x2 −2 = 0. Thus, the square of the length of AB is: (x1−x2)2+(y1−y2)2 = 02+02 = 0. Therefore, the square of the length of AB is: 0 . Time: 8.1s Let A = (x1, y1) and B = (x2, y2). Since the origin is the midpoint of AB, we have: x1 + x2 2 , y1 + y2 2  = (0, 0). This implies: x1 + x2 = 0 and y1 + y2 = 0. Since A and B lie on the parabola y = 4x2 + 7x −1, we have: y1 = 4x2 1 + 7x1 −1 and y2 = 4x2 2 + 7x2 −1. From x1 + x2 = 0, we get x1 = −x2. Substituting x1 = −x2 into the equa- tions for y1 and y2: y1 = 4x2 2 + 7x2 −1 and y2 = 4x2 2 + 7x2 −1. Since y1 + y2 = 0, we have: y1 = −y2. Thus, y1 = −(4x2 2+7x2−1) = −4x2 2−7x2+1. Similarly, y2 = 4x2 2 + 7x2 −1. Therefore, y1 = −y2 = −(4x2 2 + 7x2 −1) = −4x2 2 −7x2 + 1. Thus, the square of the length of AB is given by: 2 . Time: 9.3s Let A = (x1, y1) and B = (x2, y2). Since the origin is the midpoint of AB, we have: x1 + x2 2 , y1 + y2 2  = (0, 0). This implies: x1 + x2 = 0 and y1 + y2 = 0. Since A and B lie on the parabola y = 4x2 + 7x −1, we have: y1 = 4x2 1 + 7x1 −1 and y2 = 4x2 2 + 7x2 −1. Using y1 + y2 = 0, we get: y1 + y2 = (4x2 1+7x1−1)+(4x2 2+7x2−1) = 0. Simplifying, we have: 4x2 1 + 7x1 + 4x2 2 + 7x2 −2 = 0. Since x1 + x2 = 0, we get: 4x2 1 + 7x1 + 4x2 2 + 7x2 −2 = 0. Now, we need to find the square of the length of AB. The length of AB is given by: AB = p (x2 −x1)2 + (y2 −y1)2. Using the midpoint formula and the properties of the parabola, we can sim- plify the expression. However, since the problem asks for the square of the length, we can directly use the result. Thus, the square of the length of AB is: 50 . Time: 8.7s (Correct) 26 Table 10: Coding Example – Close Elements Prompt: from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, are any two numbers closer to each other than given threshold. >>> has_close_elements([1.0, 2.0, 3.0], 0.5) False >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) True """ FAST-DLLM OURS sorted_numbers = sorted(numbers) for i in range(len(sorted_numbers)): if abs(sorted_numbers[i] - sorted_numbers[i + 1]) < threshold: return True return False sorted_numbers = sorted(numbers) for i in range(len(sorted_numbers) - 1): if abs(sorted_numbers[i] - sorted_numbers[i + 1]) < threshold: return True return False Fast-DLLM generated incorrect solution Table 11: Coding Example – Truncate Number Prompt: def truncate_number(number: float) -> float: """ Given a positive floating point number, it can be decomposed into and integer part (largest integer smaller than given number) and decimals (leftover part always smaller than 1). Return the decimal part of the number. >>> truncate_number(3.5) 0.5 """ FAST-DLLM OURS if number < 0: raise ValueError("The number must be positive.") integer_part = int(number) decimal_part = number - integer_part return decimal_part if number < 0: raise ValueError("The number must be positive") integer_part = int(number) decimal_part = number - integer_part return decimal_part Both solutions are functionally equivalent with minor differences 27
ATTENTION IS ALL YOU NEED FOR KV CACHE IN DIFFUSION LLMS Quan Nguyen-Tri∗ FPT AI Residency Hanoi, Vietnam Mukul Ranjan∗& Zhiqiang Shen VILA Lab, MBZUAI Abu Dhabi, UAE { Project page: https://vila-lab.github.io/elastic-cache-webpage/ ABSTRACT This work studies how to adaptively recompute key-value (KV) caches for diffusion large language models (DLMs) to maximize prediction accuracy while minimizing decoding latency. Prior methods' decoders recompute QKV for all tokens at every denoising step and layer, despite KV states changing little across most steps, especially in shallow layers, leading to substantial redundancy. We make three observations: (1) distant MASK tokens primarily act as a length-bias and can be cached block-wise beyond the active prediction window; (2) KV dynamics increase with depth, suggesting that selective refresh starting from deeper layers is sufficient; and (3) the most-attended token exhibits the smallest KV drift, providing a conservative lower bound on cache change for other tokens. Building on these, we propose Elastic-Cache, a training-free, architecture-agnostic strategy that jointly decides when to refresh (via an attention-aware drift test on the most-attended token) and where to refresh (via a depth-aware schedule that recomputes from a chosen layer onward while reusing shallow-layer caches and off-window MASK caches). Unlike fixed-period schemes, Elastic-Cache performs adaptive, layer-aware cache updates for diffusion LLMs, reducing redundant computation and accelerating decoding with negligible loss in generation quality. Experiments on LLaDA-Instruct, LLaDA-1.5, and LLaDA-V across mathematical reasoning and code generation tasks demonstrate consistent speedups: 8.7× on GSM8K (256 tokens), 45.1× on longer sequences, and 4.8× on HumanEval, while consistently maintaining higher accuracy than the baseline. Our method achieves significantly higher throughput (6.8× on GSM8K) than existing confidence-based approaches while preserving generation quality, enabling practical deployment of diffusion LLMs. 1 INTRODUCTION Diffusion large language models (DLMs) (Li et al., 2025) have recently emerged as a compelling alternative to autoregressive Transformers (Radford et al., 2018; Achiam et al., 2023), yet their iterative denoising procedure makes inference particularly compute-intensive. In standard implementations, each decoding step recomputes queries, keys, and values (QKV) for every token at every layer, even though the underlying key-value (KV) states change only marginally across most steps. This all-tokens, all-layers recomputation incurs substantial latency and memory traffic, ultimately limiting practical deployment. Our goal in this study is to determine how and when to adaptively recompute the KV cache during decoding so as to maximize prediction quality while minimizing wall-clock latency. A defining property of diffusion LLM decoding is the progressive unmasking of tokens under a length- and structureaware attention pattern. This induces heterogeneous KV dynamics: shallow layers tend to stabilize quickly as they encode local lexical structure, whereas deeper layers continue to adjust global, semantic dependencies. We formalize this with a notion of KV drift: the step-to-step change in cached keys and values, and observe two consistent trends: (i) drift is small for most steps, and (ii) drift grows with layer depth. These trends suggest that indiscriminate recomputation is wasteful, and that targeted refreshes could preserve accuracy while slashing cost. Prior acceleration methods for diffusion (and related) decoders typically refresh the KV cache on a fixed schedule, e.g., every k iterations without regard to instance difficulty, current attention patterns, or layerwise variability. Such fixed-period policies leave performance on the table: they recompute when nothing has changed and miss updates precisely when rapid semantic revisions occur. Moreover, by treating all layers uniformly, they over-service shallow layers whose representations have already converged, while under-servicing deeper layers where changes matter most. This motivates an adaptive, attention-aware alternative. ∗Equal contribution 1 16 Oct 2025 Our approach is built on three empirical observations. First, distant MASK tokens exert negligible influence on unmasking the current token and behave primarily as a length-bias prior; thus, their KV can be block-cached outside the active prediction window to avoid redundant work. Second, KV drift increases with depth, so refreshes should start at a learned boundary layer l⋆and apply only to deeper layers, reusing shallow-layer caches. Third, the most-attended token at a step typically exhibits the smallest drift, providing a conservative lower bound on KV changes across the context. Monitoring this drift yields a reliable, low-overhead trigger for deciding whether a global refresh is warranted. Based on these ideas, we propose Elastic-Cache, a training-free, architecture-agnostic strategy that couples AttentionAware KV Cache Update with Layer-Aware KV Cache Update. The attention-aware module computes a lightweight drift statistic on the most-attended token; if the statistic exceeds a threshold, a refresh is triggered, otherwise cached KVs are reused. The layer-aware module then refreshes only layers l≥l⋆, while shallow layers retain their caches, and off-window MASK tokens remain block-cached. Together, these mechanisms align recomputation with where and when the model's beliefs actually change, minimizing unnecessary QKV work. In contrast to fixed-period baselines, our Elastic-Cache adapts to the input, step, and layer granularity together. It reduces compute by skipping recomputation during stable phases, focuses effort on deeper layers during semantic revisions, and leverages block-wise caching for distant MASK tokens. Conceptually, the method reframes KV management as an attention-guided control problem: attention estimates which tokens matter; drift detects how much the state has changed; and the layer boundary l⋆encodes where updates pay off. This yields a practical pathway to low-latency diffusion LLM decoding without modifying training or the base architecture. Our contributions of this work: • We diagnose redundancy in diffusion LLM decoding and introduce KV drift as a principled signal for adaptive cache management. • We propose Elastic-Cache, the first (to our best knowledge) adaptive, layer-aware KV refresh policy for diffusion LLMs that jointly decides when to recompute (attention-aware drift test) and where to recompute (depth-selective updates). • We develop block-wise MASK caching to eliminate needless updates outside the prediction window. We provide comprehensive empirical experiments and ablations showing that our Elastic-Cache preserves generation quality while substantially reducing decoding latency across tasks and model scales. 2 PRELIMINARY 2.1 MASKED DIFFUSION MODELS Masked Diffusion Models (MDMs), absorbing-state discrete diffusion, build on D3PM (Austin et al., 2021a) and its continuous-time variant (Campbell et al., 2022), replacing tokens with a special MASK along a forward process (Sahoo et al., 2024; Shi et al., 2024) at timestep t: qt|0(xt|x0) = L Y i=1 qt|0(xi t|xi 0) = L Y i=1 Cat(xi t; (1 -t)δxi 0 + tδMASK) (1) where t ∈[0, 1] controls interpolation between the original data x0 (at t = 0) and a fully masked sequence (at t = 1), Cat(·) denotes the categorical distribution. A parametric model pθ learns the reverse denoising; generation starts from all MASK and iteratively unmasks by sampling pθ(xi 0|xt). Recent theory (MDLM (Shi et al., 2024; Sahoo et al., 2024), RADD (Ou et al., 2024)) simplifies training from a variational bound to a reweighted cross-entropy over masked positions: LMDM = Z 1 0 1 t Eqt|0(xt|x0)   X i:xi t=MASK -log pθ(xi 0|xt)  dt (2) This formulation scales to LLMs as diffusion language models (DLMs), with LLaDA (Nie et al., 2025b) and Dream7B (Ye et al., 2025) matching autoregressive performance while enabling parallel decoding and flexible infilling. 2.2 KEY-VALUE CACHE IN TRANSFORMERS Transformer-based language models achieve computational efficiency during autoregressive generation through KeyValue (KV) caching (Pope et al., 2023). In causal attention, each layer projects the current hidden state Ht into query, 2 0 5 10 15 20 25 30 Layer 0 20 40 60 80 Swallow layers Cache changes Deep layers Cache changes (b) Key-Value states Cosine similarity 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 1.00 0 5 10 15 20 25 30 Layer 0 20 40 60 80 (c) Attention weights Cosine similarity 0.5 0.6 0.7 0.8 0.9 1.0 0 5 10 15 20 25 30 Layer 0 20 40 60 80 100 120 (d) The most-attended tokens 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0 200 400 600 800 1000 1200 1400 Token Position 0 20 40 60 80 100 120 Decoding Step Prompt tokens Faraway MASK tokens Tokens within Sliding Window (a) Attention weights 0.02 0.04 0.06 0.08 0.10 Figure 1: Visualization of our motivation. (a) MASK tokens located near each other receive high attention, while those situated far apart have minimal influence. (b) Over time, the representations in the KV states of cached tokens evolve, with deeper layers experiencing more substantial changes. (c) The changes in attention weights of most-attended tokens exhibit similar patterns to the changes in the KV states of all cached tokens. (d) The KV states of the mostattended tokens have the least changes. key, and value representations using learned projection matrices WQ, WK, WV. At decoding step t, the attention computation for the current token follows: At [t] = softmax Qt [t](Kt [1:t])⊤ √dk ! Vt [1:t], KV cache: ( Kt [1:t] = concat(Kt-1 [1:t-1], Kt [t]), Vt [1:t] = concat(Vt-1 [1:t-1], Vt [t]) . (3) To avoid redundant computation, previous key-value pairs are cached and reused. This caching strategy is effective because in causal attention, previously computed key-value pairs remain invariant throughout decoding (Kt-1 [1:t-1] = Kt [1:t-1]), enabling efficient reuse without affecting model output. KV-Cache in Bidirectional Attention. However, diffusion models employ bidirectional attention where all positions can attend to each other, breaking the invariance property of cached representations. As noted by dKV-Cache (Ma et al., 2025), token representations in diffusion models evolve dynamically during the iterative denoising process, making direct application of traditional KV-cache ineffective. The bidirectional dependencies cause previously computed key-value pairs to become stale as the sequence state changes, requiring careful redesign of caching strategies for diffusion language models. 3 METHODOLOGY 3.1 OUR FRAMEWORK OVERVIEW AND MOTIVATION Diffusion LLMs differ from autoregressive decoders in that their key-value (KV) states evolve across denoising steps due to bidirectional dependencies. Our objective is to adaptively decide when and where to recompute the KV cache to preserve accuracy while minimizing latency. Baseline decoders recompute QKV for all tokens and layers at every step, despite negligible KV changes for most steps and especially in shallow layers (Fig. 1b); deeper layers exhibit larger drift. Rather than fixed-period refreshes (Wu et al., 2025; Ma et al., 2025; Liu et al., 2025), we propose Elastic-Cache, the first (to our knowledge) adaptive, layer-aware KV update policy for diffusion LLMs that jointly optimizes timing and location of recomputation. Our design is driven by three observations. (1) Distant MASK tokens mainly act as a length prior and exert minimal influence on the current unmasking, we therefore block-cache their KV beyond the active prediction window (Fig. 1a). (2) KV drift grows with depth, refresh should start at a boundary layer and apply only to deeper layers (Fig. 1b). (3) The most-attended tokens typically shows the smallest KV change (Fig. 1d), giving a conservative lower bound for others, we use its drift as a lightweight trigger for refresh (Fig. 1c). Fig. 2 summarizes the pipeline. To the end, we proposed Elastic-Cache, a flexible method for key-value caching in diffusion large language models. Fig. 2 provides a visual representation of the overall pipeline of our proposed method. 3.2 SLIDING WINDOW DECODING AND KV CACHING Formally, let I = {1, 2, . . . , N} represent all positions. At decoding step t, let Dt denote newly decoded positions and Mt denote remaining masked positions, where Mt-1 = Mt ∪Dt. Denotes D l∗then // Cache Update 8: ̃Ht,l [I], ̃Kt,l [I], ̃Vt,l [I] ←cache update(I); Qt,l [I], Kt,l [I], Vt,l [I] = linear(Ht,l [I]) 9: Ht,l+1 [I] , St,l [I] ←attention layer(Qt,l [I], Kt,l [I], Vt,l [I]) 10: else // Cache Reuse 11: ̃Ht,l [Qt], ̃Kt,l [Qt], ̃Vt,l [Qt] ←cache update(Qt); Qt,l [Qt], Kt,l [Qt], Vt,l [Qt] = linear(Ht,l [Qt]) 12: Ht,l+1 [Qt] , St,l [I] ←attention layer(Qt,l [Qt], ̃Kt,l [I], ̃Vt,l [I]) 13: σt,l ←cosine similarity(St-1,l [T t-1], St,l [T t-1]) 14: if σt,k l∗): The expected change remains bounded away from zero: lim inf t→T E[ Ht,l i -Ht-1,l i 2 | i ∈D 0 This reflects that early layers encode local lexical patterns that stabilize quickly, while deep layers encode semantic relationships that continue evolving (Kovaleva et al., 2019; Jawahar et al., 2019; Rogers et al., 2021). Our experiments validate this (Figure 1b). 15 Assumption A.6 (Attention Concentration). The attention gap is a non-negligible fraction of total attention mass: Γt,l≥c · |Mt β| (11) for some constant c > 0 independent of N, t, l. Definition A.7 (KV Drift). The KV drift at layer l, step t for token i is: ∆t,l i := Kt,l i -Kt-1,l i 2 + Vt,l i -Vt-1,l i 2 (12) Average drift over decoded tokens: ∆t,l:= 1 |D 1, drift accumulates multiplicatively across layers. Step 6: Applying Layer-Wise Specialization. By Assumption A.5: • Shallow layers (l≤l∗): ̄∆t,l≤fl(t) →0 as t →T • Deep layers (l> l∗): lim inft→T ̄∆t,l≥cl> 0 By equation 18: E[∆t,l] = E " 1 |D 0 (46) This establishes: E[∆t,l] 0 is excess drift beyond average. Then: |αt,l T t,l-αt-1,l T t,l| ≤|Mt β| · WmaxRl √ N √dk ( ̄∆t,l+ ε) (67) While tokens with average drift have: |αt,l k -αt-1,l k | ≤|Mt β| · WmaxRl √ N √dk ̄∆t,l (68) The differential attention change is: ∆differential = |Mt β| · WmaxRl √ N √dk ε (69) For T t,lto remain most-attended, the gap at step t -1 must absorb this differential: Γt-1,l≥∆differential = |Mt β| · WmaxRl √ N √dk ε (70) Step 5: Assuming Bounded Attention Gap. Applying the assumption A.6: c · |Mt β| ≥|Mt β| · WmaxRl √ N √dk ε (71) 21 Canceling |Mt β| (assuming |Mt β| > 0): c ≥WmaxRl √ N √dk ε (72) Solving for ε: ε ≤ c√dk WmaxRl √ N = O √dk Rl √ N (73) Therefore: ∆t,l T t,l≤ ̄∆t,l+ O √dk Rl √ N (74) A.5 IMPLICATIONS FOR ELASTIC-CACHE These results provide theoretical justification for our design: • Theorem A.8: Deeper layers have larger KV drift, justifying layer-aware refresh starting from l∗ • Theorem A.9: Most-attended tokens have minimal drift, validating their use as cache staleness indicators B DETAILED EXPERIMENT SETUP Implementation Details. We conduct all the experiments on a single NVIDIA A100 80GB GPU to ensure a consistent hardware environment. We evaluate our proposed method, Elastic-Cache, on three large scale DLMs: LLaDA-Instruct (Nie et al., 2025a), LLaDA-1.5 (Zhu et al., 2025), and the multimodal LLaDA-V (You et al., 2025). Our evaluation spans both language and multimodal reasoning tasks including MBPP (Austin et al., 2021b), HumanEval (Chen et al., 2021) for coding tasks, MATH (Hendrycks et al., 2021), GSM8K (Cobbe et al., 2021) for Maths related tasks and MathVista (Lu et al., 2023) MathVerse (Zhang et al., 2024b) for multimodal mathematical reasoning tasks. The major hyperparameters for Elastic-Cache, unless otherwise specified in ablation studies, are set to a attention threshold of γ = 0.9, a confidence threshold for parallel decoding of ε = 0.9, and a cache block size of 32. To establish a rigorous and fair comparison for all baseline methods, were re-evaluate all the methods including the original diffusion model LLaDA Nie et al. (2025a) and Fast-dLLM (Wu et al., 2025). This process eliminates confounding variables from hardware or software discrepancies and ensures that all observed performance differences are attributable to the methods themselves. Evaluation Framework and Metrics. Our evaluation protocol comprehensively assesses both inference efficiency and the preservation of model performance across a variety of tasks. For standardization and reproducibility, we conduct all task-specific evaluations using the lm-eval-harness library (Gao et al., 2024). We measure inference speed by throughput in tokens per second (t/s), which we calculate as the average number of tokens the model generates over the entire sequence until it produces an end-of-sequence ( ) token. We keep our calculation methodology consistent with that of Fast-dLLM (Wu et al., 2025) to ensure comparable speed benchmarks. We measure task-specific performance using established metrics appropriate for each benchmark: for GSM8K (Cobbe et al., 2021), we report 5-shot flexible extract exact match accuracy; for the MATH dataset (Hendrycks et al., 2021), we report the 4-shot math verify score using the minerva math variant; for HumanEval (Chen et al., 2021), we evaluate 0shot accuracy using a post-processing script consistent with the Fast-dLLM implementation to ensure fair comparison; and for MBPP (Austin et al., 2021b), we report the 3-shot pass@1 metric. For multimodal evaluation on LLaDAV (You et al., 2025), we utilize an evaluation suite adapted from its official implementation using the lmms-eval framework (Zhang et al., 2024a; Li et al., 2024) to test on the MathVista (Lu et al., 2023) and MathVerse (Zhang et al., 2024b) benchmarks. For MathVista, we report the gpt eval score, and for MathVerse, we report the gpt eval score on the mathverse testmini vision dominant subset. Hyper-parameters: The hyper-parameters used for Elastic-Cache are provided in Table 6. Specifically, • For LLaDA and LLaDA-1.5, γ = 0.9 everywhere; β is mostly 16, except GSM8K (β = 32 at 256, 16 at 512) and HumanEval (β = 32 at both 256/512). • For LLaDA-V (MathVista/MathVerse), γ = 0.7 and β = 16 for both 256 and 512 token lengths. • All tasks are reported at generation lengths 256 and 512. 22 C BATCH IMPLEMENTATION OF ELASTIC-CACHE In our experimental setup, we use a batch size of one for comparison purposes. However, in a real-world deployment scenario, requests can be grouped into batches to take advantage of parallelism and speed up decoding. In this section, we extend our implementation of Elastic-Cache to support batch decoding and compare it to baselines under different batch sizes. Batch implementation of Elastic-Cache is very challenging because the query length varies during caching and cache updates. Specifically, the query positions at time t are represented as Qt, while caching is in progress, and I when cache updates are being performed. Since Elastic-Cache automatically triggers cache updates, each sample within a batch will have a different length due to the varying triggers for each sample. This poses a significant challenge to parallelism and efficiency of the method. To address this problem, we propose a solution that involves rearranging the batch and concatenating all sentences within the batch into a single sentence. This approach enables us to compute the sentences in parallel, regardless of their current lengths. We then reimplement the multi-head attention function to compute attention on the concatenated sentence (Algorithm 2). Algorithm 2 Batch attention computation 1: Require: Batch samples Qi,t,l Qi,t, ̃Ki,t,l I , ̃Vi,t,l I ; Batch size B; 2: Qt,l ←Cat([Q1,t,l Q1,t, . . . , QB,t,l QB,t]) 3: for i = 1, . . . , B do 4: Kt,l ←Cat([Kt,l, {Ki,t,l I } × |Qi,t|]); 5: Vt,l ←Cat([Vt,l, {Vi,t,l I } × |Qi,t|]); 6: end for 7: At,l = softmax Qt,l(Kt,l)⊤ √dk Vt,l 8: return At,l D USE OF LARGE LANGUAGE MODELS An LLM was used to help solely polish the writing of the paper, while all method, idea and experiments were prepared and carried out entirely by the authors. E SAMPLE RESPONSE In the following, we present several examples of actual generation results produced by our approach under different parameter settings. 23 Table 6: The hyper-parameters of Elastic-Cache under various settings. Model Benchmark Gen Length β γ LLaDA GSM8K (5-shot) 256 32 0.9 512 16 0.9 MATH (4-shot) 256 16 0.9 512 16 0.9 Humaneval (0-shot) 256 32 0.9 512 32 0.9 MBPP (3-shot) 256 16 0.9 512 16 0.9 LLaDA-1.5 GSM8K (5-shot) 256 16 0.9 512 16 0.9 MATH (4-shot) 256 16 0.9 512 16 0.9 Humaneval (0-shot) 256 32 0.9 512 32 0.9 MBPP (3-shot) 256 16 0.9 512 16 0.9 LLaDA-V Mathvista 256 16 0.7 512 16 0.7 Mathverse 256 16 0.7 512 16 0.7 Table 7: Example - Qualitative comparison for different γ, LLaDA-1.5 Prompt: A store sells 3 types of fruit: apples at 1 each, and oranges at 2. - Total cost for apples = (4 × 2 = 8) dollars. 2. **Cost of bananas:** - Sarah buys 6 bananas. - Each banana costs 3. - Total cost for oranges = (2 × 3 = 6) dollars. Now, we sum the costs of all the fruits to find the total amount Sarah spends: 8 (apples) + 6 (bananas) + 6 (oranges) = 20 dollars Therefore, Sarah spends a total of 20 dollars. Time: 3.1s To determine how much Sarah spends in total, we need to calculate the cost of each type of fruit separately and then sum these amounts. First, let's calculate the cost of the apples: - Sarah buys 4 apples. - Each apple costs 1. - Therefore, the cost of bananas is (6 × 1 = 6) dollars. Finally, let's calculate the cost of the oranges: - Sarah buys 2 oranges. - Each orange costs 2. - Total cost for apples = (4 × 2 = 8) dollars. Next, let's calculate the cost of the bananas: - Sarah buys 6 bananas. - Each banana costs 3. - Total cost for oranges = (2 × 3 = 6) dollars. Now, we sum the costs of all the fruits to find the total amount spent: - Total cost = Cost of apples + Cost of bananas + Cost of oranges - Total cost = 8 + 6 + 6 - Total cost = 20 dollars Therefore, Sarah spends a total of 20 dollars. Time: 3.5s 24 Table 8: Example - Qualitative comparison for different γ, LLaDA-1.5 Prompt: Tom has 24 marbles. He gives 1/3 of them to his sister and 1/4 of the remaining marbles to his brother. How many marbles does Tom have left? γ = 0.5 γ = 0.7 γ = 0.95 To determine how many marbles Tom has left, we will follow these steps: 1. Calculate the number of marbles Tom gives to his sister. 2. Subtract the marbles given to his sister from the original number of marbles. 3. Calculate the number of marbles Tom gives to his brother. 4. Subtract the marbles given to his brother from the remaining marbles. Let's start with the first step: **Step 1: Calculate the number of marbles Tom gives to his sister.** Tom gives 1 3 of his 24 marbles to his sister. 1 3 × 24 = 8 So, Tom gives 8 marbles to his sister. **Step 2: Subtract the marbles given to his sister from the original number of marbles.** 24 -8 = 16 So, after giving 8 marbles to his sister, Tom has 16 marbles left. **Step 3: Calculate the number of marbles Tom gives to his brother.** Tom gives 1 4 of the remaining 16 marbles to his brother. 1 4 ×16 = 4 So, Tom gives 4 marbles to his brother. **Step 4: Subtract the marbles given to his brother from the remaining marbles.** 16 -4 = 12 So, after giving 4 marbles to his brother, Tom has 12 marbles left. Therefore, the number of marbles Tom has left is 12 . Time: 3.3s To determine how many marbles Tom has left, we will follow these steps: 1. Calculate the number of marbles Tom gives to his sister. 2. Subtract the marbles given to his sister from the original number of marbles. 3. Calculate the number of marbles Tom gives to his brother. 4. Subtract the marbles given to his brother from the remaining marbles. Let's start with the first step: **Step 1: Calculate the number of marbles Tom gives to his sister.** Tom gives 1 3 of his 24 marbles to his sister. Marbles given to sister = 1 3×24 = 8 **Step 2: Subtract the marbles given to his sister from the original number of marbles.** Remaining marbles after giving to sister = 24 -8 = 16 **Step 3: Calculate the number of marbles Tom gives to his brother.** Tom gives 1 4 of the remaining 16 marbles to his brother. Marbles given to brother = 1 4 × 16 = 4 **Step 4: Subtract the marbles given to his brother from the remaining marbles.** Remaining marbles after giving to brother = 16 -4 = 12 Therefore, the number of marbles Tom has left is 12 . Time: 3.1s To determine how many marbles Tom has left, we will follow these steps: 1. Calculate the number of marbles Tom gives to his sister. 2. Subtract the marbles given to his sister from the total number of marbles. 3. Calculate the number of marbles Tom gives to his brother. 4. Subtract the marbles given to his brother from the remaining marbles. **Step 1: Calculate the number of marbles Tom gives to his sister.** Tom gives 1 3 of his 24 marbles to his sister. Marbles given to sister = 1 3×24 = 8 **Step 2: Subtract the marbles given to his sister from the total number of marbles.** Remaining marbles after giving to sister = 24 -8 = 16 **Step 3: Calculate the number of marbles Tom gives to his brother.** Tom gives 1 4 of the remaining 16 marbles to his brother. Marbles given to brother = 1 4 × 16 = 4 **Step 4: Subtract the marbles given to his brother from the remaining marbles.** Remaining marbles after giving to brother = 16 - 4 = 12 Therefore, the number of marbles Tom has left is 12 . Time: 3.2s 25 Table 9: Example - Qualitative comparison of responses under different γ. Prompt: Points A and B are on parabola y = 4x2 + 7x -1, and the origin is the midpoint of AB. Find the square of the length of AB. γ = 0.5 γ = 0.7 γ = 0.9 Let A = (x1, y1) and B = (x2, y2). Since the origin is the midpoint of AB, we have: x1 + x2 2 , y1 + y2 2 = (0, 0). This implies: x1 + x2 = 0 and y1 + y2 = 0. Since A and B lie on the parabola y = 4x2 + 7x -1, we have: y1+y2 = (4x2 1+7x1-1)+(4x2 2+7x2-1) = 0. Now, using the equations x1 + x2 = 0 and y1 + y2 = 0, we get: 4x2 1 + 7x1 -1 + 4x2 2 + 7x2 -1 = 0. This simplifies to: 4x2 1 + 7x1 + 4x2 2 + 7x2 -2 = 0. Thus, the square of the length of AB is: (x1-x2)2+(y1-y2)2 = 02+02 = 0. Therefore, the square of the length of AB is: 0 . Time: 8.1s Let A = (x1, y1) and B = (x2, y2). Since the origin is the midpoint of AB, we have: x1 + x2 2 , y1 + y2 2 = (0, 0). This implies: x1 + x2 = 0 and y1 + y2 = 0. Since A and B lie on the parabola y = 4x2 + 7x -1, we have: y1 = 4x2 1 + 7x1 -1 and y2 = 4x2 2 + 7x2 -1. From x1 + x2 = 0, we get x1 = -x2. Substituting x1 = -x2 into the equations for y1 and y2: y1 = 4x2 2 + 7x2 -1 and y2 = 4x2 2 + 7x2 -1. Since y1 + y2 = 0, we have: y1 = -y2. Thus, y1 = -(4x2 2+7x2-1) = -4x2 2-7x2+1. Similarly, y2 = 4x2 2 + 7x2 -1. Therefore, y1 = -y2 = -(4x2 2 + 7x2 -1) = -4x2 2 -7x2 + 1. Thus, the square of the length of AB is given by: 2 . Time: 9.3s Let A = (x1, y1) and B = (x2, y2). Since the origin is the midpoint of AB, we have: x1 + x2 2 , y1 + y2 2 = (0, 0). This implies: x1 + x2 = 0 and y1 + y2 = 0. Since A and B lie on the parabola y = 4x2 + 7x -1, we have: y1 = 4x2 1 + 7x1 -1 and y2 = 4x2 2 + 7x2 -1. Using y1 + y2 = 0, we get: y1 + y2 = (4x2 1+7x1-1)+(4x2 2+7x2-1) = 0. Simplifying, we have: 4x2 1 + 7x1 + 4x2 2 + 7x2 -2 = 0. Since x1 + x2 = 0, we get: 4x2 1 + 7x1 + 4x2 2 + 7x2 -2 = 0. Now, we need to find the square of the length of AB. The length of AB is given by: AB = p (x2 -x1)2 + (y2 -y1)2. Using the midpoint formula and the properties of the parabola, we can simplify the expression. However, since the problem asks for the square of the length, we can directly use the result. Thus, the square of the length of AB is: 50 . Time: 8.7s (Correct) 26 Table 10: Coding Example - Close Elements Prompt: from typing import List def has_close_elements(numbers: List[float], threshold: float) -> bool: """ Check if in given list of numbers, are any two numbers closer to each other than given threshold. >>> has_close_elements([1.0, 2.0, 3.0], 0.5) False >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) True """ FAST-DLLM OURS sorted_numbers = sorted(numbers) for i in range(len(sorted_numbers)): if abs(sorted_numbers[i] - sorted_numbers[i + 1]) float: """ Given a positive floating point number, it can be decomposed into and integer part (largest integer smaller than given number) and decimals (leftover part always smaller than 1). Return the decimal part of the number. >>> truncate_number(3.5) 0.5 """ FAST-DLLM OURS if number < 0: raise ValueError("The number must be positive.") integer_part = int(number) decimal_part = number - integer_part return decimal_part if number < 0: raise ValueError("The number must be positive") integer_part = int(number) decimal_part = number - integer_part return decimal_part Both solutions are functionally equivalent with minor differences 27
2510.14975
WithAnyone: Towards Controllable and ID Consistent Image Generation Hengyuan Xu1,2 Wei Cheng2,† Peng Xing2 Yixiao Fang2 Shuhan Wu2 Rui Wang2 Xianfang Zeng2 Daxin Jiang2 Gang Yu2,‡ Xingjun Ma1,‡ Yu-Gang Jiang1 1 Fudan University 2 StepFun Project Page MultiID-2M MultiID-Bench Models Code Figure 1. Showcases of WithAnyone. WithAnyone is capable of generating high-quality, controllable, and ID-consistent images by leveraging ID-contrastive training on the proposed MultiID-2M dataset. Abstract Identity-consistent generation has become an important focus in text-to-image research, with recent models achieving notable success in producing images aligned with a reference identity. Yet, the scarcity of large-scale paired datasets con- taining multiple images of the same individual forces most approaches to adopt reconstruction-based training. This reliance often leads to a failure mode we term copy-paste, where the model directly replicates the reference face rather than preserving identity across natural variations in pose, expression, or lighting. Such over-similarity undermines controllability and limits the expressive power of generation. To address these limitations, we (1) construct a large-scale † Wei Cheng leads this project; ‡Corresponding authors. paired dataset MultiID-2M tailored for multi-person sce- narios, providing diverse references for each identity; (2) introduce a benchmark that quantifies both copy-paste arti- facts and the trade-off between identity fidelity and variation; and (3) propose a novel training paradigm with a contrastive identity loss that leverages paired data to balance fidelity with diversity. These contributions culminate in WithAny- one, a diffusion-based model that effectively mitigates copy- paste while preserving high identity similarity. Extensive qualitative and quantitative experiments demonstrate that WithAnyone significantly reduces copy-paste artifacts, im- proves controllability over pose and expression, and main- tains strong perceptual quality. User studies further validate that our method achieves high identity fidelity while enabling expressive controllable generation. arXiv:2510.14975v1 [cs.CV] 16 Oct 2025 1. Introduction With the rapid progress of generative artificial intelligence, controllable image generation via reference images or image prompting [16, 19, 44, 57, 59, 66] and identity-consistent (ID-consistent) generation [8, 14, 15, 21, 50, 64, 68] have achieved remarkable advances: modern models can syn- thesize portraits that closely match the provided individual. Recent efforts [4, 8] push resemblance toward near-perfect reproduction. While pursuing higher similarity seems natu- ral, beyond a certain point, excessive fidelity becomes coun- terproductive. In real photographs of the same person, identity similarity varies substantially due to natural changes in pose, expres- sion, makeup, and illumination (Fig. 2). By contrast, many generative models adhere to the reference image far more rigidly than this natural range of variation. Although such over-optimization may seem beneficial, it suppresses legiti- mate variation, reducing controllability and limiting practical usability. We term this failure mode the copy-paste artifact: rather than synthesizing an identity in a flexible, controllable manner, the model effectively copies the reference image into the output (see Fig. 2). In this work, we formalize this artifact, develop metrics to quantify it, and propose a novel training strategy to mitigate it. Mitigating copy-paste artifacts is fundamentally con- strained by the lack of suitable training data. While numer- ous large-scale face datasets exist [9, 22, 29, 47, 51, 67, 70], they remain ill-suited for controllable multi-identity gener- ation. Critically, few datasets provide paired references for each identity-multiple images of the same person across di- verse expressions, poses, hairstyles, and viewpoints. As a re- sult, most prior work resorts to single-person, reconstruction- based training [14, 50], where the reference and target co- incide. This setup inherently promotes copying and exacer- bates copy-paste artifacts. Constructing datasets with multi- ple references per identity, particularly in group photos, and developing methods to effectively exploit such data remain open challenges. In this work, we introduce a large-scale open-source Multi-ID dataset, MultiID-2M, together with a compre- hensive benchmark, MultiID-Bench, designed for intrin- sic evaluation of multi-identity image generation. MultiID- 2M contains 500k group photos featuring 1–5 recognizable celebrities. For each celebrity, hundreds of individual im- ages are provided as paired references, covering diverse expressions, hairstyles, and viewing angles. In addition, 1.5M unpaired group photos without references are included. MultiID-Bench establishes a standardized evaluation proto- col for multi-identity generation. Beyond widely adopted metrics such as ID similarity [11, 45], it quantifies copy- paste artifacts by measuring distances between generated images, references, and ground truth. Evaluation on 12 state- of-the-art customization models highlights a clear trade-off 0.46 0.30 0.46 0.77 Face Similarity 0.0 0.2 0.4 0.6 0.8 1.0 Face Similarity Real Image Pairs Ours UniPortrait PuLID InstantID UMO 0.586 0.616 0.667 0.717 0.788 0.818 Prompt: A blonde lady, natural makeup Input WithAnyone PULID InstantID GT Copy Copy Customized Figure 2. Our Observation. Natural variations, such as head pose, expression, and makeup, may cause more face similarity decrease than expected. Copying reference image limits models’ ability to respond to expression and makeup adjustment prompts. between ID similarity and copy-paste artifacts (see Fig. 5). Furthermore, we present WithAnyone, a novel identity customization model built on the FLUX [27] architecture, as a step toward mitigating copy-paste artifacts. WithAny- one maintains state-of-the-art identity similarity (with regard to target image) while substantially reducing copy-paste, thereby breaking the long-observed trade-off between fidelity and artifacts. This advance is enabled by a paired-training strategy combined with an ID contrastive loss enhanced with a large negative pool, both made possible by our paired dataset. The labeled identities and their reference images en- able the construction of an extended negative pool (images of different identities), which provides stronger discrimination signals during optimization. In summary, our main contributions are: • MultiID-2M: A large-scale dataset of 500k group pho- tos containing multiple identifiable celebrities, each with hundreds of reference images capturing diverse variations, along with 1.5M additional unpaired group photos. This resource supports pre-training and evaluation of multi- identity generation models. • MultiID-Bench: A comprehensive benchmark with stan- dardized evaluation protocols for identity customization, enabling systematic and intrinsic assessment of multi- 2. Multi-ID Collection + <name> + 2/3/4 actors - <watermark> - alone - crowd 1. Single-ID Collection & Clustering + <name> <ID A> 3. ID Image Paring Embedding Retrieval <ID A> <ID B> cos( , ) 4. Post-processing Filtering & Labelling collage❌ not aesthetic❌ OCR & cropping 4. Quality Tuning Quality & Style Tuning 3. Paired Tuning Mid-train with Pair & Recon. Ground Truth Reference(s) HQ and Style Subset 10K 1. Reconstruction Pretrain with Fixed Prompt “two people” Ground Truth Reference(s) Paired Subset 500K Mult-ID Subset 2M Single-ID Subset 1M 2. Reconstruction Pretrain with Caption Ground Truth Reference(s) “A man with … and a lady with” Data Construction Training Stages Datasets Open Dataset High Quality Stylized Figure 3. Overview of WithAnyone. It builds on a large-scale dataset, MultiID-2M, constructed through a four-step pipeline: (1) collect and cluster single-ID data based on identity similarity; (2) gather multi-ID data via targeted searches using desired identity names with negative keywords for filtering; (3) form image pairs by matching faces between single-ID and multi-ID data; and (4) apply post-processing for quality control and stylization. Training proceeds in four stages: (1) pre-train on single-ID, multi-ID, and open-domain images with fixed prompts; (2) train with image-caption supervision; (3) fine-tune with ID-paired data; and (4) perform quality tuning using a curated high-quality subset. identity image generation methods. • WithAnyone: A novel ID customization model built on FLUX that achieves state-of-the-art performance, gener- ating high-fidelity multi-identity images while mitigating copy-paste artifacts and enhancing visual quality. 2. Related Work Single-ID Preservation. The generation of Identity- preserving images is a core topic in customized synthe- sis [5, 20, 35, 48, 49, 52, 58, 60, 63]. Many meth- ods in the UNet/Stable Diffusion era inject learned em- beddings (e.g., CLIP or ArcFace) via cross-attention or adapters [17, 40–43, 64]. With the rise of DiT-style back- bones [13, 27, 38] (e.g., SD3, FLUX), progress in ID preser- vation like PuLID [14], also attracts great attention. Multi-ID Preservation. Multi-ID preservation remains relatively underexplored. Some works target spatial control of multiple identities [15, 25, 68], while others focus on iden- tity fidelity. Methods such as XVerse [4] and UMO [8] use VAE-derived face embeddings concatenated with model in- puts, which can produce pixel-level copy-paste artifacts and reduce controllability. DynamicID [18]1 achieves improved controllability but is constrained by limited task-specific data 1Excluded from our experiments due to unavailability of code and pretrained models. and evaluation standards. Other general-purpose customiza- tion and editing models [2, 30, 36, 37, 53–56, 61] can also synthesize images containing multiple identities, but their ID similarity is often compromised for generality. ID-Centric Datasets and Benchmarks. Although there are numerous single-ID datasets [23, 51] and multi-ID col- lections [9, 22], paired reference images are scarce, so re- construction remains the dominant training objective for multi-ID datasets. Representative datasets are listed in Table 4. Evaluation protocols are underdeveloped: sev- eral works (e.g., PuLID [14], UniPortrait [15], and oth- ers [60, 68]) construct test sets by sampling identities from CelebA [29], which undermines reproducibility. Recent ef- forts benchmark multiple reference generation [54, 71] while focusing on general customization. To address this, we re- lease a curated multi-ID benchmark with standardized splits and comprehensive metrics to facilitate future research. 3. MultiID-2M: Paired Multi-Person Dataset Construction MultiID-2M is a large-scale multi-person dataset constructed via a four-stage pipeline: (1) collect single-ID images from the web and construct a clean reference bank by cluster- ing ArcFace [11] embeddings, yielding ∼1M reference im- ages across ∼3k identities (averaging 400 per identity); (2) retrieve candidate group photos via multi-name and scene- Cross Attention DiT Blocks Face Embedding Branch Image Embedding Branch a Model Architecture Target Predicted Negative Pool Embedding Target Target Detection GT-Landmark Predicted 1- cos( , ) Extended negative samples ID Contrastive Loss GT-aligned ID Loss b Training Objectives Figure 4. (a) Architecture of WithAnyone: Each reference is encoded by both a face-recognition network and a general image encoder, yielding identity-discriminative signals and complementary mid-level features. Face embeddings are restricted to attend only to image tokens within their corresponding face regions. (b) Training Objectives of WithAnyone: In addition to the diffusion loss, we incorporate an ID contrastive loss and a ground-truth–aligned ID loss, which together provide consistent and accurate identity supervision. aware queries and detect faces; (3) assign identities by match- ing ArcFace embeddings to single-ID cluster centers using cosine similarity (threshold 0.4); and (4) perform automated filtering and annotation, including Recognize Anything [69], aesthetic scoring [12], OCR-based watermark/logo removal, and LLM-based caption generation [1]. The final corpus comprises ∼500k identified multi-ID images with matched references from the reference bank, as well as ∼1.5M addi- tional unidentified multi-ID images for reconstruction train- ing, covering ∼25k unique identities, with diverse nation- alities and ethnicities. Further details of the construction pipeline and dataset statistics are provided in Appendix B. 4. MultiID-Bench: Comprehensive ID Cus- tomization Evaluation MultiID-Bench is a unified benchmark for group-photo (multi-ID) generation. It samples rare, long-tail identities with no overlap to training data, yielding 435 test cases. Each case consists of one ground-truth (GT) image contain- ing 1–4 people, the corresponding 1–4 reference images as inputs, and a prompt describing the GT. Detailed statistics are provided in Appendix B. Evaluation considers both identity fidelity and generation quality. Let r, t, g denote the face embeddings of the ref- erence identity, the target (ground-truth), and the generated image, respectively. We define similarity between two em- beddings as Sim(a, b), specifically we term the generated image’s face similarity with regard to GT as SimGT, and to reference as SimRef, Sim(a, b) = a⊤b ∥a∥∥b∥, (1) Specially, we denote SimRef = Sim(r, g) and SimGT = Sim(t, g). Prior works [8, 14, 15, 68] has largely reported only SimRef, which inadvertently favors trivial copy-paste: directly replicating the reference appearance maximizes the score, even when the prompt specifies changes in pose, ex- pression, or viewpoint. In contrast, MultiID-Bench uses SimGT the similarity to the ground-truth identity described by the prompt as the primary metric. This design penalizes excessive copying when natural variations (e.g., pose, ex- pression, occlusion) are expected, while rewarding faithful realization of the prompted scene. We define the angular distance as θab = arccos(Sim(a, b)) (geodesic distance on the unit sphere). The Copy-Paste metric is given by MCP(g | t, r) = θgt −θgr max(θtr, ε) ∈[−1, 1], (2) where ε is a small constant for numerical stability. The met- ric thus captures the relative bias of g toward the reference r versus the ground truth t, normalized by angular distance of r and t. A score of 1 means g fully coincides with the refer- ence (perfect copy-paste), while −1 means full agreement with the ground truth. We additionally report identity blending, prompt fidelity (CLIP I/T), and aesthetics; formal definitions and further details are provided in Appendix C. 5. WithAnyone: Controllable and ID- Consistent Generation Building on the scale and paired-reference supervision of the MultiID-2M, we devise training strategies and tailored objectives that transcend reconstruction to enable robust, identity-conditioned synthesis. This rich, identity-labeled supervision not only substantially improves identity fidelity but also suppresses trivial copy–paste artifacts and affords finer control over multi-identity composition. Motivated by these advantages, we introduce WithAnyone - a unified architecture and training recipe designed for controllable, high-fidelity multi-ID generation. Architectural schemat- ics and implementation details are provided in Fig. 4 and Appendix E. 5.1. Training Objectives Diffusion Loss. We adopt the mini-batch empirical flow- matching loss. For each batch, we sample a data latent x1 ∼pdata, Gaussian noise x0 ∼N(0, I), and a timestep t ∼U(0, 1). We then form the interpolated latent xt = (1 −t)x0 + tx1 and regress the target velocity (x1 −x0): Ldiff = vθ(x(i) t , t(i), c(i)) −(x(i) 1 −x(i) 0 ) 2 2, (3) where c(i) denotes the conditioning signal. Ground-truth-Aligned ID Loss. Since ArcFace embed- ding requires landmark detection and alignment, directly extracting landmarks from Igen is unreliable because gener- ated images are obtained through noisy diffusion or one-step denoising. Prior methods compromise: PortraitBooth [39] applies the loss only at low noise levels (t < 0.25), dis- carding supervision at higher noise, while PuLID [14] fully denoises generated results at significant computational cost. In contrast, we align the generated image using GT land- marks, thereby avoiding noisy landmark extraction. We minimize the cosine distance between GT-aligned ArcFace embeddings of the generated and ground-truth (GT) faces: LID = 1 −cos(g, t) (4) where g and t are ArcFace embeddings of the generated and GT images. This design (1) enables applying the ID loss across all noise levels, (2) incurs negligible overhead throughout training, and (3) implicitly supervises generated landmarks. Ablation studies (Sec. 6.3) demonstrate more accurate identity measurement and substantially improved identity preservation. Denoting the face recognition model as f(·, ·) (Arc- face [11], in our case), and the coupled detection model as g(·) (RetinaFace [10]), the generated image as G, and the ground-truth image as T, a embedding extraction should be performed as follows: t = f(g(T), T), (5) where g(T) are the detected landmarks, and f(·, ·) extracts the aligned face embedding. Instead of using g(G) as land- marks for G, our GT-aligned ID loss is computed as: Lid = 1 −cos(f(g(T), G), f(g(T), T)). (6) ID Contrastive Loss With Extended Negatives. To further strengthen identity preservation, we introduce an ID contrastive loss that explicitly pulls the generated image closer to its reference images in the face embedding space while pushing it away from other identities. The loss follows the InfoNCE [31] formulation: LCL = −log exp(cos(g, r)/τ) PM j=1 exp(cos(g, nj))/τ) , (7) where r is the embedding of a reference image of the same identity as the generated image, nj are embeddings of M negatives from different identities, and τ is a temperature hy- perparameter. This formulation relies on ID-labeled datasets, which make it possible to draw thousands of negatives per sample from the reference bank, thereby greatly enriching the diversity of negative examples. The overall training objective is a weighted sum of the above losses: L = Ldiff + λIDLID + λCLLCL, (8) where λID and λCL are hyper-parameters controlling the con- tributions of the ID loss and contrastive loss, respectively. Both are set to 0.1 across all training phases described below. 5.2. Training pipeline Copy–paste artifacts largely arise from reconstruction-only training, which encourages models to replicate the reference Table 1. Quantitative comparison on the single-person subset of MultiID-Bench and OmniContext. , , and indicate the first-, second-, and third-best performance, respectively. For Copy-Paste ranking, only cases with Sim(GT) > 0.40 are considered. a MultiID-Bench Method Identity Metrics Generation Quality Sim(GT) ↑ Sim(Ref) ↑ CP ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ DreamO 0.454 0.694 0.303 0.793 0.322 4.877 OmniGen 0.398 0.602 0.248 0.780 0.317 5.069 OmniGen2 0.365 0.475 0.142 0.787 0.331 4.991 FLUX.1 Kontext 0.324 0.408 0.099 0.755 0.327 5.319 Qwen-Image-Edit 0.324 0.409 0.093 0.776 0.316 5.056 GPT-4o Native 0.425 0.579 0.178 0.794 0.311 5.344 UNO 0.304 0.428 0.141 0.765 0.314 4.923 USO 0.401 0.635 0.286 0.790 0.329 5.077 UMO 0.458 0.732 0.359 0.783 0.305 4.850 UniPortrait 0.447 0.677 0.265 0.793 0.319 5.018 ID-Patch 0.426 0.633 0.231 0.792 0.312 4.900 InfU 0.439 0.630 0.233 0.772 0.328 5.359 PuLID 0.452 0.705 0.315 0.779 0.305 4.839 InstantID 0.464 0.734 0.337 0.764 0.295 5.255 Ours 0.460 0.578 0.144 0.798 0.313 4.783 GT 1.000 0.521 -0.999 N/A N/A N/A Ref 0.521 1.000 0.999 N/A N/A N/A b OmniContext Single Character Subset Method Quality Metrics Overall PF ↑ SC ↑ Overall ↑ DreamO 8.13 7.09 7.02 OmniGen 7.50 5.52 5.47 OmniGen2 8.64 8.50 8.34 FLUX.1 Kontext 7.72 8.60 7.94 Qwen-Image-Edit 7.66 8.16 7.51 GPT-4o Native 7.98 9.06 8.12 UNO 7.22 7.72 7.04 USO 6.96 7.88 6.70 UMO 6.56 7.92 6.79 UniPortrait 6.62 6.00 5.55 ID-Patch N/A N/A N/A InfU 7.69 4.62 4.70 PuLID 6.62 6.83 5.78 InstantID 4.89 5.49 4.35 Ours 7.43 7.04 6.52 0.42 0.43 0.44 0.45 0.46 Sim(GT) 0.16 0.24 0.32 Copy-Paste UniPortrait ID-Patch InfU GPT-4o IntantID Ours Other Methods DreamO PuLID a Single-ID subset 0.30 0.35 0.40 Sim(GT) 0.1 0.2 0.3 Copy-Paste UniPortrait GPT-4o OmniGen OmniGen2 Ours Other Methods ID-Patch DreamO b Multi-ID subset Figure 5. Trade-off between Face Similarity and Copy-paste. Except for WithAnyone, the other models fall roughly on a fitted curve, illustrating a clear trade-off between face similarity and copy-paste. Upper-right corner is desired. image rather than learn robust identity-conditioned genera- tion. Leveraging our paired dataset, we employ a four-phase training pipeline that gradually transitions the objective from reconstruction toward controllable, identity-preserving syn- thesis. Phase 1: Reconstruction pre-training with fixed prompt. We begin with reconstruction pre-training to ini- tialize the backbone, as this task is simpler than full identity- conditioned generation and can exploit large-scale unlabeled data. For the first few thousand steps, the caption is fixed to a constant dummy prompt (e.g., “two people”), ensuring the model prioritizes learning the identity-conditioning pathway rather than drifting toward text-conditioned styling. The full MultiID-2M is used in this phase, which typically lasts for 20k steps, at which point the model achieves satisfactory identity similarity. To further enhance data diversity, CelebA- HQ [23], FFHQ [24], and a subset of FaceID-6M [51] are also incorporated. Phase 2: Reconstruction pre-training with full cap- tions. This phase aligns identity learning with text- conditioned generation and lasts for an additional 40k steps, during which the model reaches peak identity similarity. Phase 3: Paired tuning. To suppress trivial copy–paste behavior, we replace 50% of the training samples with paired instances drawn from the 500k labeled images in MultiID- 2M. For each paired sample, instead of using the same image as both input and target, we randomly select one reference image from the identity’s reference set and another distinct image of the same identity as the target. This perturbation breaks the shortcut of direct duplication and compels the model to rely on high-level identity embeddings rather than low-level copying. Phase 4: Quality tuning. Finally, we fine-tune on a cu- rated high-quality subset augmented with generated stylized variants to (i) enhance perceptual fidelity and (ii) improve style robustness and transferability. This phase refines tex- ture, lighting, and stylistic adaptability while preserving the strong identity consistency established in earlier phases. 6. Experiments In this section, we present a comprehensive evaluation of baselines and our WithAnyone model on the proposed MultiID-Bench. Baselines. We evaluate two categories of baseline meth- ods: general customization models and face customization methods. The general customization models include Omni- Gen [61], OmniGen2 [54], Qwen-Image-Edit [53], FLUX.1 Kontext [2], UNO [56], USO [55], UMO [8], and native GPT-4o-Image [32]. The face customization methods in- clude UniPortrait [15], ID-Patch [68], PuLID [14] (referring to its FLUX [27] implementation throughout this paper), and InstantID [50]. All models were evaluated on the single- person subset of the benchmark, while only those supporting Table 2. Quantitative comparison on the multi-person subset of MultiID-Bench. , , and indicate the first-, second-, and third-best performance, respectively. For Copy-Paste ranking, only cases with Sim(GT) > 0.35 are considered. GPT exhibits prior knowledge of identities from TV series in subsets with more than two IDs, leading to abnormally high similarity scores. a 2-people Subset Method Identity Metrics Generation Quality Sim(GT) ↑ Sim(Ref) ↑ CP ↓ Bld ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ DreamO 0.359 0.514 0.179 0.105 0.763 0.319 4.764 OmniGen 0.345 0.529 0.209 0.110 0.750 0.326 5.152 OmniGen2 0.283 0.353 0.081 0.112 0.763 0.334 4.547 GPT 0.332 0.400 0.061 0.092 0.774 0.328 5.676 UNO 0.223 0.274 0.043 0.082 0.735 0.325 4.805 UMO 0.328 0.491 0.176 0.111 0.743 0.316 4.772 UniPortrait 0.367 0.601 0.254 0.075 0.750 0.323 5.187 ID-Patch 0.350 0.517 0.183 0.085 0.767 0.326 4.671 Ours 0.405 0.551 0.161 0.079 0.770 0.321 4.883 b 3-and-4-people Subset Method Identity Metrics Generation Quality Sim(GT) ↑ Sim(Ref) ↑ CP ↓ Bld ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ DreamO 0.311 0.427 0.116 0.081 0.709 0.317 4.695 OmniGen 0.345 0.529 0.209 0.110 0.750 0.326 5.152 OmniGen2 0.288 0.374 0.099 0.071 0.734 0.329 4.664 GPT 0.445 0.484 0.048 0.044 0.815 0.320 5.647 UNO 0.228 0.276 0.046 0.065 0.717 0.319 4.880 UMO 0.318 0.465 0.180 0.070 0.717 0.309 4.946 UniPortrait 0.343 0.517 0.178 0.048 0.708 0.323 5.090 ID-Patch 0.379 0.543 0.195 0.059 0.781 0.329 4.547 Ours 0.414 0.561 0.171 0.045 0.771 0.325 4.955 Input Ref DreamO GPT UniPortrait ID-Patch WithAnyone GT Prompt: “A couple posing together. The woman wears a blue, sleeveless, V-neck dress, while the man dons a light blue, semi-buttoned shirt. Both are smiling and standing close, with the man's arm around the woman, indicating a friendly or intimate relationship. Prompt: “a man in a dark suit holding a coffee mug and a woman in a light blue sweater resting her head on her hand. They appear to be in a kitchen, looking concerned or surprised. The man is standing, while the woman is seated at a counter. Prompt: “three people, two women and one man, posing closely together. The woman on the left wears a white blazer, while the younger woman in front has a strapless top. The man has a white shirt. All are smiling warmly at the camera. Prompt: “four people dressed in white shirts posing together. The group includes three males and two females, with one male and one female in the center. They are smiling and standing closely, suggesting a family or close-knit group. The attire is casual and coordinated. UMO Kontext QwenImgEdit PuLID Input Ref InstantID GT WithAnyone Prompt: “a woman wearing a white hooded jacket with a black inner garment. Her hair is styled loosely, and she has minimal makeup. The woman is posing with her head slightly tilted, showcasing a calm and composed demeanor. Her expression is neutral. Prompt: “a woman with long, dark hair flowing dynamically. She wears a white and blue geometric patterned top with a shawl-like drape. Her posture is poised, showcasing elegant jewelry and a subtle smile. The background features a blurred circular pattern in shades of gray. Prompt: “a woman in a black leather jacket holding a red microphone. She is smiling and appears to be performing or speaking, with her head slightly tilted and her mouth open as if she is in the middle of talking. Her long brown hair is styled straight.. UniPortrait Figure 6. Qualitative Results of Different Generation Methods. The text prompt is extracted from the ground-truth image shown on the leftmost side. multi-ID generation were additionally tested on the multi- person subset. Further implementation details are provided in Appendix F.1. 6.1. Quantitative Evaluation The quantitative results are reported in Tables 1 and 2. We ob- serve a clear trade-off between face similarity and copy-paste artifacts. As shown in Fig. 5, most methods align closely with a regression curve, where higher face similarity gener- ally coincides with stronger copy-paste. This indicates that many existing models boost measured similarity by directly replicating reference facial features rather than synthesizing the identity. In contrast, WithAnyone deviates substantially from this curve, achieving the highest face similarity with regard to GT while maintaining a markedly lower copy-paste score. WithAnyone also achieves the highest score among ID- specific reference models on the OmniContext [54] bench- mark. However, VLMs [1, 32] exhibit limited ability to distinguish individual identities and instead emphasize non- identity attributes such as pose, expression, or background. Despite that general customization and editing models of- ten outperform face customization models on OmniCon- text, WithAnyone still has best performance among face customization models. 6.2. Qualitative Comparison To complement the quantitative results, Fig. 6 presents qual- itative comparisons between our method, state-of-the-art general customization/editing models, and face customiza- tion generation models. It shows that identity consistency remains a significant weakness of general customization or editing models, con- sistent with our quantitative findings. Many VAE-based approaches where references are encoded through a VAE, such as FLUX.1 Kontext and DreamO tend to produce faces that either exhibit copy-paste artifacts or deviate markedly from the target identity. A likely reason is that VAE em- beddings emphasize low-level features, leaving high-level semantic understanding to the diffusion backbone, which may not have been pre-trained for this task. ID-specific ref- erence models also struggle with copy-paste artifacts. For example, they fail to make the subject smile when the refer- ence image is neutral and often cannot adjust head pose or even eye gaze. In contrast, WithAnyone generates flexible, controllable faces while faithfully preserving identity. 6.3. Ablation and User Studies To better understand the contribution of each component in WithAnyone, we conduct ablation studies on the training strategy, the GT-aligned ID loss, the InfoNCE-based ID loss, and our dataset. Due to space constraints, we report Table 3. Ablation Study. indicate the first, second, third performance respectively. We ablate paired data training (without stage 2, w/o s2), GT-Aligned landmark ID loss (Self-aligned, S.A.), extended negative samples in InfoNCE (w/o neg). And model trained on FFHQ is also compared. Ablation Identity Metrics Generation Quality Sim(G) ↑ Sim(R) ↑ CP ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ Phases w/o Phase 3 0.406 0.625 0.239 0.755 0.307 4.955 Loss w/o GT-Align 0.385 0.549 0.175 0.763 0.317 4.754 w/o Ext. Neg. 0.368 0.455 0.074 0.740 0.304 4.984 Data FFHQ only 0.224 0.246 0.027 0.658 0.330 5.039 Ours Full Setting 0.405 0.551 0.161 0.770 0.321 4.883 0.2 0.4 0.6 0.8 Noise Level 0.71 Pred Lmk 0.54 0.34 0.36 0.21 0.10 0.06 Face Sim 0.44 GT Lmk 0.0 0.5 1.0 ID Loss Figure 7. Comparison of GT-aligned and Prediction-aligned landmarks. the key results here, with additional analyses provided in Appendix G. As shown in Table 3, the paired-data fine-tuning phase reduces copy-paste artifacts without diminishing similarity to the ground truth, while training on FFHQ performs sig- nificantly worse than on our curated dataset. Fig. 7 further demonstrates that the GT-aligned ID loss lowers denoising error at low noise levels and yields higher-variance, more informative gradients at high noise, thereby strengthening identity learning. By ablating extended negatives, leaving only 63 negative samples from the batch (originally extended to 4096), the effectiveness of ID contrastive loss is greatly reduced. More ablation results can be found in Appendix G. We conduct a user study to evaluate perceptual quality and identity preservation. Ten participants were recruited and asked to rank 230 groups of generated images according to four criteria: identity similarity, presence of copy-paste ar- tifacts, prompt adherence, and aesthetics. The results, shown in Fig. 8, indicate that our method consistently achieves the highest average ranking across all dimensions, demonstrat- ing both stronger identity preservation and superior visual quality. Moreover, the copy-paste metric exhibits a moderate positive correlation with human judgments, suggesting that it captures perceptually meaningful artifacts. Further details of the study design, ranking protocol, and statistical analysis are provided in Appendix H. Sim CP PF Aes Ours UNO ID-Patch UniPortrait OmniGen Figure 8. User study. Bigger bubbles indicate higher ranking. 7. Conclusion Copy-paste artifacts are a common limitation of identity cus- tomization methods, and face-similarity metrics often exacer- bate the issue by implicitly rewarding direct copying. In this work, we identify and formally quantify this failure mode through MultiID-Bench, and propose targeted solutions. We curate MultiID-2M and develop training strategies and loss functions that explicitly discourage trivial replication. Empir- ical evaluations demonstrate that WithAnyone significantly reduces copy-paste artifacts while maintaining and in many cases improving identity similarity, thereby breaking the long-standing trade-off between fidelity and copying. These results highlight a practical path toward more faithful, con- trollable, and robust identity customization. References [1] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wen- bin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025. 4, 8, 19 [2] Stephen Batifol, Andreas Blattmann, Frederic Boesel, Saksham Consul, Cyril Diagne, Tim Dockhorn, Jack English, Zion English, Patrick Esser, Sumith Kulal, et al. Flux. 1 kontext: Flow matching for in-context image generation and editing in latent space. arXiv e-prints, 2025. 3, 6, 13 [3] Anthony Chen, Jianjin Xu, Wenzhao Zheng, Gaole Dai, Yida Wang, Renrui Zhang, Haofan Wang, and Shang- hang Zhang. Training-free regional prompting for dif- fusion transformers. arXiv preprint arXiv:2411.02395, 2024. 15 [4] Bowen Chen, Mengyi Zhao, Haomiao Sun, Li Chen, Xu Wang, Kang Du, and Xinglong Wu. Xverse: Consistent multi-subject control of identity and se- mantic attributes via dit modulation. arXiv preprint arXiv:2506.21416, 2025. 2, 3 [5] Weifeng Chen, Jiacheng Zhang, Jie Wu, Hefeng Wu, Xuefeng Xiao, and Liang Lin. Id-aligner: Enhancing identity-preserving text-to-image genera- tion with reward feedback learning. arXiv preprint arXiv:2404.15449, 2024. 3 [6] Wei Cheng, Ruixiang Chen, Siming Fan, Wanqi Yin, Keyu Chen, Zhongang Cai, Jingbo Wang, Yang Gao, Zhengming Yu, Zhengyu Lin, et al. Dna-rendering: A diverse neural actor repository for high-fidelity human- centric rendering. In ICCV, 2023. 14 [7] Wei Cheng, Su Xu, Jingtan Piao, Chen Qian, Wayne Wu, Kwan-Yee Lin, and Hongsheng Li. General- izable neural performer: Learning robust radiance fields for human novel view synthesis. arXiv preprint arXiv:2204.11798, 2022. 14 [8] Yufeng Cheng, Wenxu Wu, Shaojin Wu, Mengqi Huang, Fei Ding, and Qian He. Umo: Scaling multi- identity consistency for image customization via match- ing reward. arXiv preprint arXiv:2509.06818, 2025. 2, 3, 4, 6 [9] Jiaming Chu, Lei Jin, Yinglei Teng, Jianshu Li, Yun- chao Wei, Zheng Wang, Junliang Xing, Shuicheng Yan, and Jian Zhao. Uniparser: Multi-human parsing with unified correlation representation learning. TIP, 2024. 2, 3, 14 [10] Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. Retinaface: Single- shot multi-level face localisation in the wild. In CVPR, 2020. 5 [11] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019. 2, 3, 5, 19 [12] discus0434. aesthetic-predictor-v2-5. https: / / github . com / discus0434 / aesthetic - predictor-v2-5, 2023. Accessed: 2025-05-12. 4, 13, 15 [13] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In ICML, 2024. 3 [14] Zinan Guo, Yanze Wu, Zhuowei Chen, Lang Chen, Peng Zhang, and Qian He. Pulid: Pure and lightning id customization via contrastive alignment. In NeurIPS, 2024. 2, 3, 4, 5, 6, 15 [15] Junjie He, Yifeng Geng, and Liefeng Bo. Uniportrait: A unified framework for identity-preserving single-and multi-human image personalization. ICCV, 2025. 2, 3, 4, 6, 14 [16] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aber- man, Yael Pritch, and Daniel Cohen-Or. Prompt- to-prompt image editing with cross attention control. ICLR, 2023. 2 [17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020. 3 [18] Xirui Hu, Jiahao Wang, Hao Chen, Weizhan Zhang, Benqi Wang, Yikun Li, and Haishun Nan. Dynamicid: Zero-shot multi-id image personalization with flexible facial editability. ICCV, 2025. 3 [19] Yuqi Hu, Longguang Wang, Xian Liu, Ling-Hao Chen, Yuwei Guo, Yukai Shi, Ce Liu, Anyi Rao, Zeyu Wang, and Hui Xiong. Simulating the real world: A uni- fied survey of multimodal generative models. arXiv preprint arXiv:2503.04641, 2025. 2 [20] Junha Hyung, Jaeyo Shin, and Jaegul Choo. Magicap- ture: High-resolution multi-concept portrait customiza- tion. In AAAI, 2024. 3 [21] Liming Jiang, Qing Yan, Yumin Jia, Zichuan Liu, Hao Kang, and Xin Lu. Infiniteyou: Flexible photo recraft- ing while preserving your identity. ICCV, 2025. 2 [22] Qing Jiang, Lin Wu, Zhaoyang Zeng, Tianhe Ren, Yuda Xiong, Yihao Chen, Qin Liu, and Lei Zhang. Referring to any person. 2025. 2, 3, 14 [23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. ICLR, 2018. 3, 6 [24] Tero Karras, Samuli Laine, and Timo Aila. A style- based generator architecture for generative adversarial networks. In CVPR, 2019. 6, 19 [25] Chanran Kim, Jeongin Lee, Shichang Joung, Bongmo Kim, and Yeul-Min Baek. Instantfamily: Masked at- tention for zero-shot multi-id image generation. arXiv preprint arXiv:2404.19427, 2024. 3 [26] Minchul Kim, Anil K Jain, and Xiaoming Liu. Adaface: Quality adaptive margin for face recognition. In CVPR, 2022. 19 [27] Black Forest Labs. Flux. https://github.com/ black-forest-labs/flux, 2024. 2, 3, 6, 13 [28] Black Forest Labs. Flux.1 krea. https : //huggingface.co/black-forest-labs/ FLUX.1-Krea-dev, 2025. 13 [29] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015. 2, 3, 14 [30] Chong Mou, Yanze Wu, Wenxu Wu, Zinan Guo, Pengze Zhang, Yufeng Cheng, Yiming Luo, Fei Ding, Shiwen Zhang, Xinghui Li, et al. Dreamo: A unified framework for image customization. SIGGRAPH Asia, 2025. 3 [31] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Rep- resentation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. 5 [32] OpenAI. Addendum to gpt-4o system card: Native image generation, 2025. 6, 8 [33] Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fer- nandez, Daniel Haziza, Francisco Massa, Alaaeldin El- Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rab- bat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick La- batut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision. arXiv:2304.07193, 2023. 14 [34] Dongwei Pan, Long Zhuo, Jingtan Piao, Huiwen Luo, Wei Cheng, Yuxin Wang, Siming Fan, Shengqi Liu, Lei Yang, Bo Dai, et al. Renderme-360: A large digi- tal asset library and benchmarks towards high-fidelity head avatars. NeurIPS, 2023. 14 [35] Foivos Paraperas Papantoniou, Alexandros Lattas, Stylianos Moschoglou, Jiankang Deng, Bernhard Kainz, and Stefanos Zafeiriou. Arc2face: A foundation model for id-consistent human faces. In ECCV, 2024. 3 [36] Gaurav Parmar, Or Patashnik, Kuan-Chieh Wang, Daniil Ostashev, Srinivasa Narasimhan, Jun-Yan Zhu, Daniel Cohen-Or, and Kfir Aberman. Object-level visual prompts for compositional image generation. arXiv preprint arXiv:2501.01424, 2025. 3 [37] Or Patashnik, Rinon Gal, Daniil Ostashev, Sergey Tulyakov, Kfir Aberman, and Daniel Cohen-Or. Nested attention: Semantic-aware attention values for concept personalization. In SIGGRAPH, 2025. 3 [38] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 3 [39] Xu Peng, Junwei Zhu, Boyuan Jiang, Ying Tai, Dong- hao Luo, Jiangning Zhang, Wei Lin, Taisong Jin, Chengjie Wang, and Rongrong Ji. Portraitbooth: A versatile portrait model for fast identity-preserved per- sonalization. In CVPR, 2024. 5 [40] Guocheng Qian, Kuan-Chieh Wang, Or Patashnik, Ne- gin Heravi, Daniil Ostashev, Sergey Tulyakov, Daniel Cohen-Or, and Kfir Aberman. Omni-id: Holistic identity representation designed for generative tasks. CVPR, 2025. 3 [41] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sas- try, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural lan- guage supervision. In ICML, 2021. 15 [42] Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, and Xiaokang Yang. Facial geometric detail recovery via implicit representation. In FG, 2023. 13 [43] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 3 [44] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dream- booth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 2023. 2 [45] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recog- nition and clustering. In CVPR, 2015. 2, 19 [46] Erich Schubert, Jörg Sander, Martin Ester, Hans Peter Kriegel, and Xiaowei Xu. Dbscan revisited, revisited: why and how you should (still) use dbscan. TODS, 2017. 13 [47] Lorenzo Stacchio, Alessia Angeli, Giuseppe Lisanti, Daniela Calanca, and Gustavo Marfia. Imago: A family photo album dataset for a socio-historical analysis of the twentieth century. arXiv preprint arXiv:2012.01955, 2020. 2, 14 [48] Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. Face0: Instantaneously conditioning a text- to-image model on a face. In SIGGRAPH Asia, 2023. 3 [49] Qinghe Wang, Xu Jia, Xiaomin Li, Taiqing Li, Liqian Ma, Yunzhi Zhuge, and Huchuan Lu. Stableidentity: Inserting anybody into anywhere at first sight. TMM, 2025. 3 [50] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, and Anthony Chen. Instantid: Zero-shot identity- preserving generation in seconds. arXiv preprint arXiv:2401.07519, 2024. 2, 6 [51] Shuhe Wang, Xiaoya Li, Jiwei Li, Guoyin Wang, Xi- aofei Sun, Bob Zhu, Han Qiu, Mo Yu, Shengjie Shen, Tianwei Zhang, et al. Faceid-6m: A large-scale, open- source faceid customization dataset. arXiv preprint arXiv:2503.07091, 2025. 2, 3, 6 [52] Yibin Wang, Weizhong Zhang, Jianwei Zheng, and Cheng Jin. High-fidelity person-centric subject-to- image synthesis. In CVPR, 2024. 3 [53] Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-image technical report. arXiv preprint arXiv:2508.02324, 2025. 3, 6 [54] Chenyuan Wu, Pengfei Zheng, Ruiran Yan, Shitao Xiao, Xin Luo, Yueze Wang, Wanli Li, Xiyan Jiang, Yexin Liu, Junjie Zhou, Ze Liu, Ziyi Xia, Chaofan Li, Haoge Deng, Jiahao Wang, Kun Luo, Bo Zhang, Defu Lian, Xinlong Wang, Zhongyuan Wang, Tiejun Huang, and Zheng Liu. Omnigen2: Exploration to advanced multimodal generation. arXiv preprint arXiv:2506.18871, 2025. 3, 6, 8, 19 [55] Shaojin Wu, Mengqi Huang, Yufeng Cheng, Wenxu Wu, Jiahe Tian, Yiming Luo, Fei Ding, and Qian He. Uso: Unified style and subject-driven generation via disentangled and reward learning. arXiv preprint arXiv:2508.18966, 2025. 6 [56] Shaojin Wu, Mengqi Huang, Wenxu Wu, Yufeng Cheng, Fei Ding, and Qian He. Less-to-more general- ization: Unlocking more controllability by in-context generation. ICCV, 2025. 3, 6, 14 [57] Tong Wu, Yinghao Xu, Ryan Po, Mengchen Zhang, Guandao Yang, Jiaqi Wang, Ziwei Liu, Dahua Lin, and Gordon Wetzstein. Fiva: Fine-grained visual attribute dataset for text-to-image diffusion models. NeurIPS, 2024. 2 [58] Yi Wu, Ziqiang Li, Heliang Zheng, Chaoyue Wang, and Bin Li. Infinite-id: Identity-preserved personalization via id-semantics decoupling paradigm. In ECCV, 2024. 3 [59] Chufeng Xiao and Hongbo Fu. Customsketching: Sketch concept extraction for sketch-based image syn- thesis and editing. In Computer Graphics Forum. Wi- ley Online Library, 2024. 2 [60] Guangxuan Xiao, Tianwei Yin, William T Freeman, Frédo Durand, and Song Han. Fastcomposer: Tuning- free multi-subject image generation with localized at- tention. IJCV, 2025. 3 [61] Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, and Zheng Liu. Omnigen: Unified image generation. arXiv preprint arXiv:2409.11340, 2024. 3, 6 [62] Hengyuan Xu, Liyao Xiang, Hangyu Ye, Dixi Yao, Pengzhi Chu, and Baochun Li. Permutation equivari- ance of transformers and its applications. In CVPR, 2024. 15 [63] Yuxuan Yan, Chi Zhang, Rui Wang, Yichao Zhou, Gege Zhang, Pei Cheng, Gang Yu, and Bin Fu. Faces- tudio: Put your face everywhere in seconds. arXiv preprint arXiv:2312.02663, 2023. 3 [64] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arxiv:2308.06721, 2023. 2, 3, 15 [65] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In ICCV, 2023. 15, 19 [66] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In ICCV, 2023. 2 [67] Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, and Lubomir Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. In CVPR, 2015. 2, 14 [68] Yimeng Zhang, Tiancheng Zhi, Jing Liu, Shen Sang, Liming Jiang, Qing Yan, Sijia Liu, and Linjie Luo. Id-patch: Robust id association for group photo person- alization. In CVPR, 2025. 2, 3, 4, 6, 14, 19 [69] Youcai Zhang, Xinyu Huang, Jinyu Ma, Zhaoyang Li, Zhaochuan Luo, Yanchun Xie, Yuzhuo Qin, Tong Luo, Yaqian Li, Shilong Liu, et al. Recognize any- thing: A strong image tagging model. arXiv preprint arXiv:2306.03514, 2023. 4, 13 [70] Yujie Zhong, Relja Arandjelovic, and Andrew Zisser- man. Compact deep aggregation for set retrieval. In ECCV, 2018. 2, 14 [71] Cailin Zhuang, Ailin Huang, Wei Cheng, Jingwei Wu, Yaoqi Hu, Jiaqi Liao, Hongyuan Wang, Xinyao Liao, Weiwei Cai, Hengyuan Xu, et al. Vistorybench: Com- prehensive benchmark suite for story visualization. arXiv preprint arXiv:2505.24862, 2025. 3 Appendix A. Family of WithAnyone FLUX.1 comprises a family of models, including FLUX.1 [27], FLUX.1 Kontext [2] and FLUX.1 Krea [28]. Krea is a text-to-image model with improved real-person face generation, whereas Kontext is an image-editing model that excels at making targeted adjustments while preserv- ing the rest of the image. However, as reported in Table 1, Kontext shows limited consistency with the reference face identity. Our method, WithAnyone, can be seamlessly integrated into Kontext for the face customization downstream tasks like face editing. As illustrated in Fig. 9, WithAnyone effec- tively injects identity information from the reference images into the target image. The overall training pipeline follows the procedure de- scribed in Sec. 5, with a single modification: the input image provided to Kontext (whose tokens are concatenated with the noisy latent at each denoising step) is set to the target image with the face region blurred. B. MultiID-2M Construction Details To fill in the void left by the lack of publicly available multi- ID datasets, a data constraction pipeline is proposed to create a large-scale dataset of multi-person images with paired iden- tity references for identities on the data record. Based on this pipeline, 500k group photo images are collected, featur- ing 3k identities, each with hundreds of single-ID reference images. Another 1M images that cannot be identified are also included in the dataset for image reconstruction training purpose for image reconstruction training purpose. B.1. Dataset Construction Pipeline The pipeline contains four steps, as shown in Fig. 3. The detailed pipeline are as follows. Single-ID images. To construct a ID reference set, single- ID images were collected from the web using celebrity names as search queries on Google Images. For each image, facial features were extracted with ArcFace [42], ensuring that only images containing exactly one face were retained. To remove outliers, DBSCAN [46] clustering was applied to the embeddings for each celebrity, resulting in a set of cluster centers and hundreds of reference images per identity. This process established a reliable reference set for each unique identity. Human review confirms the accuracy of the ID bank built in this step. Multi-ID images. To achieve best searching efficiency, group photos were obtained using more complex queries that combined multiple celebrity names, keywords indicating the number of people (e.g., “two celebrities”), scene descriptors (e.g., “award ceremony”), and negative keywords to filter out irrelevant results. ArcFace embeddings were extracted for these images, yielding a large pool of candidate multi-ID images. At this stage, the dataset comprised more than 20 million images. Retrieval. To provide ID reference for the multi-ID im- ages, it is necessary to retrieve the IDs on it. All single-ID cluster centers were aggregated into an embedding matrix. For each detected face in every multi-ID image, its ArcFace embedding was compared to all single-ID cluster centers to determine identity. The similarity between two embeddings was calculated as: sim(id1, id2) = cos(f(id1), f(id2)) (9) where id1 and id2 denote two faces, and f is the ArcFace embedding network. Each face in a multi-ID image was assigned the iden- tity of the single-ID cluster center with the highest similar- ity, provided the similarity exceeded a predefined threshold (0.5). This approach enabled accurate and automated iden- tity assignment in group images and facilitated retrieval of corresponding reference images. Filtering and labelling. To further improve dataset qual- ity, a series of annotation and filtering steps were applied. The Recognize Anything model [69], an aesthetic score pre- dictor [12], and other auxiliary tools were used for annota- tion. Images with low aesthetic scores or those identified as collages rather than genuine group photos were excluded. WithAnyone Kontext WithAnyone Kontext Figure 9. Application of WithAnyone-Kontext. Marrying editing models, WithAnyone is capable of face editing given customization references. Celebrities 0k 15k 30k Appearance Count Range: 3 to 42738 Median: 2407 a ID Appearance. 0 600 1200 Count Canada Others Japan Europe Korea China USA b Nationality Distribution. European descent Asian descent African descent Single Multiple US CN CA EU c Benchmark Distribution. Figure 10. Overview of Dataset Distributions. (a) ID appearance distribution for the subset of one nation: the x-axis represents celebrities, sorted by the number of images in which they appear. (b) Nationality distribution: celebrities in our dataset come from over 10 countries, with most data sourced from China and the USA. (c) Word cloud of the most frequent words in the captions. Optical Character Recognition (OCR) tools detected water- marks and logos, which were cropped out when possible; otherwise, the images were discarded. Finally, descriptive captions were generated for the images using a large lan- guage model, enriching the dataset with textual information. So far, a dataset with three parts is obtained: (1) 1M single-ID images as reference bank, or single-ID cross- paired training; (2) 500k paired multi-ID images with identi- fied persons; (3) 1M unpaired multi-ID images, which can be used for training scenario without the need of references, such as reconstruction. B.2. Dataset Statistics Following prior arts [6, 7, 29, 34], comprehensive statistics of the dataset are provided in Fig. 11, including the distri- bution of nationalities, the count of appearances per iden- tity, and a word cloud illustrating the most frequent terms in the generated image captions, offering insights into the diversity and richness of the dataset. A long-tail distribu- tion is observed in the count of appearances per identity in Fig. 11a, with a few identities appearing frequently while many others are less common. This provide a diverse set of identities, as well as a perfect test dataset without iden- tity interaction with the training set. Fig. 11b and Fig. 10c illustrate MultiID-2M’s nationality distribution and action di- versity respectively. The comparison between the proposed dataset and existing multi-ID datasets are listed in Table 4, highlighting MultiID-2M’s outstanding volume and paired references. C. Benchmark and Metrics Details Most existing methods are evaluated on privately curated test sets that are seldom released, and even when datasets are shared, the accompanying evaluation protocols vary widely. For example, ID-Patch [68] and UniPortrait [15] measure identity similarity using ArcFace embeddings, whereas UNO [56] relies on DINO [33] and CLIP similarity scores. This Table 4. Statistic comparison for multi-identity group photo datasets. #Img refers to total scale of the dataset; #Paired refers to paired group image number; #Img / ID indicates number of reference image for each single ID; #ID / Img means number of IDs appears on group photos. Dataset #Img #Paired #Img / ID #ID / Img IMAGO [47] 80k 0 0 - MHP [9] 5k 0 0 2 −10 PIPA [67] 40k 40k cross 1 −10 HumanRef [22] 36k 36 1+ 1 −14+ Celebrity Together [70] 194k 0 0 1 −5 MultiID-2M 1.5M 500k 100+ 1 −5 heterogeneity together with the common practice of report- ing only the cosine similarity between matched ArcFace embeddings fails to capture more nuanced insights and can even encourage degenerate behavior in which models pro- duce images that are effectively “copy-pastes” of the refer- ence photos. In this work, MultiID-Bench is introduced as a unified and extensible evaluation framework for group photo (multi- ID) generation. It standardizes assessment along two com- plementary axes: (i) identity fidelity (preserving each tar- get identity without unintended copying and blending), and (ii) generation quality (semantic faithfulness to the prompt/- ground truth and overall aesthetic quality). The data used in MultiID-Bench are drawn from the long-tail portion of MultiID-2M. We first select the least frequent identities and gather all images containing them. To prevent information leakage, the training split is filtered to ensure zero identity overlap with the benchmark set. The final benchmark contains 435 samples; each sample provides 1–4 reference identities (with their images), a correspond- ing ground-truth image, and a text prompt describing that ground-truth scene. Identity Blending. In the similarity matrix, the off- diagonal elements correspond to the similarity between dif- ferent identities. The average of the diagonal elements is Normal Special Upper Top Lower Professional Ancient costume Clothes a Clothes & Accessories Distribution. Single Interaction Dynamic Static Expression intimate communicate others Actions b Action Distribution. Figure 11. Distribution of Clothes and Action Labels of Proposed Dataset. used as the metric for identity fidelity, and the average of the off-diagonal elements serves as the metric for identity blending, as in Eq. 10. MBld(xg, xt) = 1 N 2 −N N X i=1 N X j=1,j̸=i cos(gi, tj) (10) where gi is the embedding of the i-th face in the gener- ated image xg, and tj is the embedding of the j-th face in the ground-truth image xt. A lower value indicates less unintended blending between different identities, which is desirable. Generation quality. The overall generation quality is evaluated based on CLIP-I and CLIP-T, which are the de facto standards for evaluating the prompt-following capabil- ity [41], are employed to measure the cosine similarity in the CLIP embedding space between the generated image and the ground truth image or caption. Additionally, an aesthetic score model [12] is used to assess the aesthetic quality of the generated images. D. Galleries of WithAnyone We show more results of WithAnyone in Fig. 12, Fig. 13, and Fig. 14. E. Model Framework Details We follow prior work [14, 64] and integrate a lightweight identity adapter into the diffusion backbone. Identity embed- dings are injected by cross-attention so that the base genera- tive prior is preserved while controllable identity signals are added. Face embedding. Each reference face is first encoded by ArcFace, producing a 1 × 512 identity embedding. To match the tokenized latent space of the DiT backbone, this vec- tor is projected with a multi-layer perceptron (MLP) into 8 tokens of dimension 3072 (i.e., an 8 × 3072 tensor). This to- kenization provides sufficient capacity for the cross-attention layers to integrate identity cues without overwhelming the generative context. Controllable attribute retention. Completely suppress- ing copy-like behavior is not always desirable: users some- times expect certain mid-level appearance attributes (e.g., hairstyle, accessories) to be preserved. ArcFace focuses on high-level, identity-discriminative geometry and texture cues but omits many mid-level semantic factors. To expose controllable retention of such attributes when needed, we optionally incorporate SigLIP [65] as a secondary encoder. SigLIP provides more semantically entangled representa- tions, enabling selective transfer of style-relevant traits while ArcFace anchors identity fidelity. Attention mask and location control. To further im- prove identity disentanglement and precise localization in the generated images, an attention mask and location control mechanism are incorporated [3, 62]. Specifically, ground- truth facial bounding boxes are extracted from the training data and used to generate binary attention masks. These masks are applied to the attention layers of the backbone model, ensuring that each reference token only attends to its corresponding face region in the image, providing location control at the same time. Feature injection. After each transformer block of the DiT backbone, we inject face features through a cross- attention modulation: H′ = H+λid softmax (HWQ)(EWK)⊤ √ d + M  (EWV ), (11) where H denotes the current hidden tokens, E the stacked face-embedding tokens, and WQ, WK, WV the projection matrices; d is the query/key dimension, and λid = 1.0 during training. When SigLIP is enabled, its tokens are processed by a parallel cross-attention with an independent scaling Reference Reference Reference Reference Figure 12. Galleries of Single-ID Generation. Figure 13. Galleries of 2-person Generation. Figure 14. Galleries of 3-to-4-person Generation. coefficient. F. Experimental Details F.1. Implementation Details WithAnyone is trained on 8 NVIDIA H100 GPUs, with a batch size of 4 on each GPU. The learning rate is set to 1e−4, and the AdamW optimizer is employed with a weight decay of 0.01. The pre-training phase runs for 60k steps, with a fixed prompt used during the first 20k steps. The subsequent paired-tuning phase lasts 30k steps: 50% of the samples use paired (reference, ground-truth) data, while the remaining 50% continue reconstruction training. Finally, a quality/style tuning stage of 10k steps is performed with a reduced learning rate of 1 × 10−5. For the extended ID contrastive loss, the target is used as the positve sample, while other IDs from samples in the same batch serve as negative samples. With the global batch size of 32, this yields less than a hundred negative samples. Extended negative samples are drawn from reference bank. If this ID is identified as one of the 3k ID in the reference bank, we simply omit its own ID and draw the from other IDs. If this ID is not identified, then it makes things easier – all the IDs in the reference bank can be used as negative samples. For other baseline methods, official implementations and checkpoints (or API) are used with default settings. Methods are tested on MultiID-Bench and real-human subset of Om- niContext [54]. OmniContext uses Vision-Language Models (VLMs) to evaluate the prompt-following (PF) and subject- consistency (SC) of generated images. For reproducibility, the VLM is fixed to Qwen2.5-VL [1]. ID-Patch [68] requires pose condition, and we use the ground-truth pose for it. Single face embedding model may induce biased evalu- ation on ID similarity, thus we average three de-facto face recognition models’ consine similarity to compute the over- all ID similarity metric, namely ArcFace [11], FaceNet [45], and AdaFace [26]. F.2. More Discussion on the Quantitative Results The performance of GPT on our 3-and-4-people subset of- fers a useful validation of our copy-paste metric, as shown in Table 2. This subset largely comprises group photographs from TV series that GPT may have encountered during pre- training, so GPT attains unusually high identity-similarity scores both to the ground truth (GT) and to the reference images. Actually, in one case GPT even generates an ID from the TV series that is not present in the reference images. This behaviour approximates an idealized scenario in which a model fully understands and faithfully reproduces the tar- get identity: similarity to GT and to references are both high, and the copy-paste measure the difference between distances to GT and to references approaches zero. These observa- tions are consistent with our metric design and support its ability to distinguish true identity understanding from trivial copy-and-paste replication. We report the experimental limit in Table 1. If one model completely copy the reference image, SimGT = 0.521, SimRef = 1.0, and copy-paste is 0.999, which aligns with the theoretical limit 1.0 of copy-paste. The prompt-following ability is measured by CLIP-I and CLIP-T in our benchmark, and is judged by VLM in Om- niContext. WithAnyonegains state-of-the-art performance in both metrics, and is ranked the highest in our user study. However, the credibility of CLIP scores and the aesthetic scores may be debated, as they are not always consistent with human perception. G. Ablation Study Details In this section, we systematically evaluate the impact of training strategy, GT-aligned ID-Loss, InfoNCE ID Loss, and our dataset construction. User study is also conducted to validate the consistency of the proposed metrics with human perception, as well as evaluate the human preference on different methods. SigLIP signal. SigLIP [65] signal is introduced to retain copy-paste effect when user tend to retain the features from reference images like hairstyle, accessories, etc. As shown in Fig. 16, increasing the SigLIP signal weight effectively amplifies the copy-paste effect while simultaneously boost- ing ID similarity to the reference images exactly as expected, since stronger SigLIP guidance enforces tighter semantic alignment and transfers more fine-grained appearance cues (e.g., hairstyle, accessories, local textures). Training strategy. We evaluate the effect of a paired-data fine-tuning stage. After an initial reconstruction training phase, we either continue training with paired (reference, ground-truth) data or keep training under the reconstruction objective for 10k steps. As shown in Table 3, continuing with paired data effectively reduces the copy-paste effect without compromising similarity to the ground truth. Dataset construction. To validate the effectiveness of our dataset, we trained a model on FFHQ [24] using recon- struction training for the same number of steps. As shown in Table 3, the FFHQ-trained model performs poorly across all metrics. This likely stems from FFHQ’s limited diversity and size, as it contains only 70k face-only portrait images. GT-aligned ID-Loss. We validate the GT-aligned ID- Loss with a simple experiment that visualizes predicted faces at different denoising time steps during training. As shown in Fig. 7, at low noise levels the GT-aligned ID-Loss is substantially lower than the loss computed using prediction- aligned landmarks, indicating that aligning faces to ground- truth landmarks reduces denoising error and yields a more accurate identity assessment. At high noise levels the GT- aligned ID-Loss shows greater variance, producing stronger 0 300 600 Step 0.4 0.5 0.6 0.7 ID Loss 0.0 0.1 0.1+ Figure 15. ID Loss Curves with λ× InfoNCE Loss. 0.1 is 0.1× InfoNCE Loss without extended negative samples, and 0.1+ is 0.1× InfoNCE Loss with extended negative samples. 0.55 0.60 0.62 0.64 Sim(Ref) 0.15 0.20 0.25 Copy-Paste 0.0 * Sig 0.2 * Sig 0.4 * Sig 0.6 * Sig 0.8 * Sig 1.0 * Sig Figure 16. Trade-off Curves with λ× Siglip and (1−λ)× ArcFace signal. Figure 17. User Study Interface. and more informative gradients that help the model learn identity features. InfoNCE Loss. The InfoNCE loss with extended neg- ative samples is crucial for the convergence in the early training stage. We conduct a toy experiment with 1000 train- ing samples, and record ID Loss curves with no InfoNCE loss, 0.1× InfoNCE loss without extended negatives, and 0.1× InfoNCE loss with extended negatives. As shown in Fig. 15, ID loss fits a lot faster with InfoNCE loss with extended negatives, demonstrating its effectiveness in accel- erating training convergence. It also largely increases the ID similarity score, as shown in Table 3. H. User Study Details Our user study is conducted with the same data samples and generated results in our quantitative experiments. Due to a tight financial budget, we randomly select 100 samples from single-person subset, 100 samples from 2-people subset, and all samples from 3-and-4 people subset. 10 participants are recruited for the study, all of whom are trained with a brief tutorial to understand the task and evaluation criteria. We illustrate the interface used in our user study in Fig. 17. H.1. Correlation Analysis We analyze the correlation between our proposed metrics and user study results. As shown in Table 5, our copy-paste metric shows a moderate positve correlation with user ratings on copy-paste effect. H.2. Participant Instructions We provide the instructions for training the participants in the following table. I. Prompts for Language Models Large language models (LLMs) and vision-language models (VLMs) are used in various stages of our work, including dataset captioning and OmniContext evaluation. I.1. Dataset Captioning Besides the system prompt, we design 6 different prompts to generate diverse captions for each image. 1 prompt is randomly selected for each image during captioning. Table 5. Correlation Statistics Between Machine Ranking and Human Ranking. Reported values include Pearson’s r, Spearman’s ρ, and Kendall’s τ with corresponding p-values. Dimension (N) Pearson r (p) Spearman ρ (p) Kendall τ (p) Copy-Paste 0.4417 (7.98e−48) 0.4535 (1.26e−50) 0.3405 (1.10e−46) ID Sim 0.3254 (1.54e−26) 0.3237 (2.91e−26) 0.2423 (1.11e−25) Participant Instructions and Evaluation Procedure Data source and task overview. Five different methods generated images under the following conditions: • A single prompt that describes the “ground truth image.” • Between 1 and 4 people in the scene (most examples contain 1–2 people). For each trial you will be shown the ground truth image, input images, and a generation instruction. Then you will observe five generated group-photo results (one per method) and rank them according to several evaluation dimensions. Use a 5-star scale where 5 stars = best and 1 star = worst. Please read the input image(s) and the editing instruction carefully before inspecting the generated results. Evaluation procedure (per-image ranking). Rank each generated image individually on the following criteria. Identity similarity • How well do the person(s) in the generated image resemble the person(s) in the ground truth image? • Rank images by their resemblance to the ground truth image: the more the generated person(s) look like the original reference, the higher the rating. • Important: When judging identity similarity, ignore factors such as image quality, rendering artifacts, or general aesthetics. Focus only on how much the person(s) resemble the original reference(s). Also, try to assess resemblance to the ground truth image as a whole, rather than comparing to any single separate “reference person n.” Copy-and-paste effect (excessive mimicry of the reference) • Generated images should resemble the original reference but should not be direct copies of an individual reference photo. • Evaluate whether the generated person appears to be directly copied from one of the reference images. Consider changes (or lack thereof) in expression, head pose and orientation, facial expression/demeanor, and light- ing/shading. • The lower the degree of direct copying (i.e., the less it looks like a pasted replica), the better. Rank according to the amount of change observed in the person(s): more natural variation (less copy-paste) should be ranked higher. Prompt following • Does the generated image reflect the content and constraints specified by the prompt/instruction? • Rank images by prompt fidelity: the more faithfully the image follows the prompt, the higher the ranking. Aesthetics • Judge the overall visual quality and pleasantness of the generated image (e.g., smoothness of rendering, harmonious body poses and composition). • Rank images by aesthetic quality: higher perceived visual quality receives higher ratings. Full Prompts for Dataset Captioning (6 variants) System Prompt: You are an advanced vision-language model tasked with generating accurate and comprehensive captions for images. Prompt 1: Please provide a brief description of the image based on these guidelines: 1. Describe the clothing, accessories, or jewelry worn by the people in detail. 2. Describe the genders, actions, and posture of the individual in detail, focusing on what they are doing. 3. The description should be concise, with a maximum of 77 words. 4. Start with ‘This image shows’ Prompt 2: Offer a short description of the image according to these rules: 1. Focus on details about clothing, accessories, or jewelry. 2. Focus on the gender, activity, and pose, and explain what the people is doing. 3. Keep the description within 77 words. 4. Begin the description with ‘This image shows’ Prompt 3: Please describe the image briefly, following these instructions: 1. Provide a detailed description of the clothing or jewelry the person may be wearing. 2. Provide a detailed description of the two persons’ gender, actions, and body position. 3. Limit the description to no more than 77 words. 4. Begin your description with ‘This image shows’ Prompt 4: Describe the picture briefly according to these rules: 1. Provide a detailed description of the clothing, jewelry, or accessories of the individuals. 2. Focus on the two persons’ gender, what they are doing, and their posture. 3. Keep the description concise, within a limit of 77 words. 4. Start your description with ‘This image shows’ Prompt 5: Provide a short and precise description of the image based on the following guidelines: 1. Describe what the person is wearing or any accessories. 2. Focus on the gender, activities, and body posture of the person. 3. Ensure the description is no longer than 77 words. 4. Begin with ‘This image shows’ Prompt 6: Briefly describe the image according to these instructions: 1. Provide a precise description of the clothing, jewelry, or other adornments of the people. 2. Focus on the person’s gender, what they are doing, and their posture. 3. The description should not exceed 77 words. 4. Start with the phrase ‘This image shows’ Modified Prompt for OmniContext Evaluation (Face Identity Focus) Rate from 0 to 10: Task: Evaluate how well the facial features in the final image match those of the individuals in the original reference images, as described in the instruction. Focus strictly on facial identity similarity; ignore hairstyle, clothing, body shape, background, and pose. Scoring Criteria • 0: The facial features are completely different from those in the reference images. • 1–3: The facial features have minimal similarity with only one or two matching elements. • 4–6: The facial features have moderate similarity but several important differences remain. • 7–9: The facial features are highly similar with only minor discrepancies. • 10: The facial features are perfectly matched to those in the reference images. Pay detailed attention to these facial elements: • Eyes: Shape, size, spacing, color, and distinctive characteristics of the eyes and eyebrows. • Nose: Shape, size, width, bridge height, and nostril appearance. • Mouth: Lip shape, fullness, width, and distinctive smile characteristics. • Facial structure: Cheekbone prominence, jawline definition, chin shape, and forehead structure. • Skin features: Distinctive marks like moles, freckles, wrinkles, and overall facial texture. • Proportions: Overall facial symmetry and proportional relationships between features. Example: If the instruction requests combining the face from one image onto another pose, the final image should clearly show the same facial features from the source image. Important: • For each significant facial feature difference, deduct at least one point. • Ignore hairstyle, body shape, clothing, background, pose, or other non-facial elements. • Focus only on facial similarity, not whether the overall instruction was followed. • Scoring should be strict high scores should only be given for very close facial matches. • Consider the level of detail visible in the images when making your assessment. Editing instruction: <instruction>
WithAnyone: Towards Controllable and ID Consistent Image Generation Hengyuan Xu1,2 Wei Cheng2,† Peng Xing2 Yixiao Fang2 Shuhan Wu2 Rui Wang2 Xianfang Zeng2 Daxin Jiang2 Gang Yu2,‡ Xingjun Ma1,‡ Yu-Gang Jiang1 1 Fudan University 2 StepFun Project Page MultiID-2M MultiID-Bench Models Code Figure 1. Showcases of WithAnyone. WithAnyone is capable of generating high-quality, controllable, and ID-consistent images by leveraging ID-contrastive training on the proposed MultiID-2M dataset. Abstract Identity-consistent generation has become an important focus in text-to-image research, with recent models achieving notable success in producing images aligned with a reference identity. Yet, the scarcity of large-scale paired datasets containing multiple images of the same individual forces most approaches to adopt reconstruction-based training. This reliance often leads to a failure mode we term copy-paste, where the model directly replicates the reference face rather than preserving identity across natural variations in pose, expression, or lighting. Such over-similarity undermines controllability and limits the expressive power of generation. To address these limitations, we (1) construct a large-scale † Wei Cheng leads this project; ‡Corresponding authors. paired dataset MultiID-2M tailored for multi-person scenarios, providing diverse references for each identity; (2) introduce a benchmark that quantifies both copy-paste artifacts and the trade-off between identity fidelity and variation; and (3) propose a novel training paradigm with a contrastive identity loss that leverages paired data to balance fidelity with diversity. These contributions culminate in WithAnyone, a diffusion-based model that effectively mitigates copypaste while preserving high identity similarity. Extensive qualitative and quantitative experiments demonstrate that WithAnyone significantly reduces copy-paste artifacts, improves controllability over pose and expression, and maintains strong perceptual quality. User studies further validate that our method achieves high identity fidelity while enabling expressive controllable generation. 16 Oct 2025 1. Introduction With the rapid progress of generative artificial intelligence, controllable image generation via reference images or image prompting [16, 19, 44, 57, 59, 66] and identity-consistent (ID-consistent) generation [8, 14, 15, 21, 50, 64, 68] have achieved remarkable advances: modern models can synthesize portraits that closely match the provided individual. Recent efforts [4, 8] push resemblance toward near-perfect reproduction. While pursuing higher similarity seems natural, beyond a certain point, excessive fidelity becomes counterproductive. In real photographs of the same person, identity similarity varies substantially due to natural changes in pose, expression, makeup, and illumination (Fig. 2). By contrast, many generative models adhere to the reference image far more rigidly than this natural range of variation. Although such over-optimization may seem beneficial, it suppresses legitimate variation, reducing controllability and limiting practical usability. We term this failure mode the copy-paste artifact: rather than synthesizing an identity in a flexible, controllable manner, the model effectively copies the reference image into the output (see Fig. 2). In this work, we formalize this artifact, develop metrics to quantify it, and propose a novel training strategy to mitigate it. Mitigating copy-paste artifacts is fundamentally constrained by the lack of suitable training data. While numerous large-scale face datasets exist [9, 22, 29, 47, 51, 67, 70], they remain ill-suited for controllable multi-identity generation. Critically, few datasets provide paired references for each identity-multiple images of the same person across diverse expressions, poses, hairstyles, and viewpoints. As a result, most prior work resorts to single-person, reconstructionbased training [14, 50], where the reference and target coincide. This setup inherently promotes copying and exacerbates copy-paste artifacts. Constructing datasets with multiple references per identity, particularly in group photos, and developing methods to effectively exploit such data remain open challenges. In this work, we introduce a large-scale open-source Multi-ID dataset, MultiID-2M, together with a comprehensive benchmark, MultiID-Bench, designed for intrinsic evaluation of multi-identity image generation. MultiID2M contains 500k group photos featuring 1-5 recognizable celebrities. For each celebrity, hundreds of individual images are provided as paired references, covering diverse expressions, hairstyles, and viewing angles. In addition, 1.5M unpaired group photos without references are included. MultiID-Bench establishes a standardized evaluation protocol for multi-identity generation. Beyond widely adopted metrics such as ID similarity [11, 45], it quantifies copypaste artifacts by measuring distances between generated images, references, and ground truth. Evaluation on 12 stateof-the-art customization models highlights a clear trade-off 0.46 0.30 0.46 0.77 Face Similarity 0.0 0.2 0.4 0.6 0.8 1.0 Face Similarity Real Image Pairs Ours UniPortrait PuLID InstantID UMO 0.586 0.616 0.667 0.717 0.788 0.818 Prompt: A blonde lady, natural makeup Input WithAnyone PULID InstantID GT Copy Copy Customized Figure 2. Our Observation. Natural variations, such as head pose, expression, and makeup, may cause more face similarity decrease than expected. Copying reference image limits models' ability to respond to expression and makeup adjustment prompts. between ID similarity and copy-paste artifacts (see Fig. 5). Furthermore, we present WithAnyone, a novel identity customization model built on the FLUX [27] architecture, as a step toward mitigating copy-paste artifacts. WithAnyone maintains state-of-the-art identity similarity (with regard to target image) while substantially reducing copy-paste, thereby breaking the long-observed trade-off between fidelity and artifacts. This advance is enabled by a paired-training strategy combined with an ID contrastive loss enhanced with a large negative pool, both made possible by our paired dataset. The labeled identities and their reference images enable the construction of an extended negative pool (images of different identities), which provides stronger discrimination signals during optimization. In summary, our main contributions are: • MultiID-2M: A large-scale dataset of 500k group photos containing multiple identifiable celebrities, each with hundreds of reference images capturing diverse variations, along with 1.5M additional unpaired group photos. This resource supports pre-training and evaluation of multiidentity generation models. • MultiID-Bench: A comprehensive benchmark with standardized evaluation protocols for identity customization, enabling systematic and intrinsic assessment of multi2. Multi-ID Collection + + 2/3/4 actors - - alone - crowd 1. Single-ID Collection & Clustering + 3. ID Image Paring Embedding Retrieval cos( , ) 4. Post-processing Filtering & Labelling collage❌ not aesthetic❌ OCR & cropping 4. Quality Tuning Quality & Style Tuning 3. Paired Tuning Mid-train with Pair & Recon. Ground Truth Reference(s) HQ and Style Subset 10K 1. Reconstruction Pretrain with Fixed Prompt "two people" Ground Truth Reference(s) Paired Subset 500K Mult-ID Subset 2M Single-ID Subset 1M 2. Reconstruction Pretrain with Caption Ground Truth Reference(s) "A man with ... and a lady with" Data Construction Training Stages Datasets Open Dataset High Quality Stylized Figure 3. Overview of WithAnyone. It builds on a large-scale dataset, MultiID-2M, constructed through a four-step pipeline: (1) collect and cluster single-ID data based on identity similarity; (2) gather multi-ID data via targeted searches using desired identity names with negative keywords for filtering; (3) form image pairs by matching faces between single-ID and multi-ID data; and (4) apply post-processing for quality control and stylization. Training proceeds in four stages: (1) pre-train on single-ID, multi-ID, and open-domain images with fixed prompts; (2) train with image-caption supervision; (3) fine-tune with ID-paired data; and (4) perform quality tuning using a curated high-quality subset. identity image generation methods. • WithAnyone: A novel ID customization model built on FLUX that achieves state-of-the-art performance, generating high-fidelity multi-identity images while mitigating copy-paste artifacts and enhancing visual quality. 2. Related Work Single-ID Preservation. The generation of Identitypreserving images is a core topic in customized synthesis [5, 20, 35, 48, 49, 52, 58, 60, 63]. Many methods in the UNet/Stable Diffusion era inject learned embeddings (e.g., CLIP or ArcFace) via cross-attention or adapters [17, 40-43, 64]. With the rise of DiT-style backbones [13, 27, 38] (e.g., SD3, FLUX), progress in ID preservation like PuLID [14], also attracts great attention. Multi-ID Preservation. Multi-ID preservation remains relatively underexplored. Some works target spatial control of multiple identities [15, 25, 68], while others focus on identity fidelity. Methods such as XVerse [4] and UMO [8] use VAE-derived face embeddings concatenated with model inputs, which can produce pixel-level copy-paste artifacts and reduce controllability. DynamicID [18]1 achieves improved controllability but is constrained by limited task-specific data 1Excluded from our experiments due to unavailability of code and pretrained models. and evaluation standards. Other general-purpose customization and editing models [2, 30, 36, 37, 53-56, 61] can also synthesize images containing multiple identities, but their ID similarity is often compromised for generality. ID-Centric Datasets and Benchmarks. Although there are numerous single-ID datasets [23, 51] and multi-ID collections [9, 22], paired reference images are scarce, so reconstruction remains the dominant training objective for multi-ID datasets. Representative datasets are listed in Table 4. Evaluation protocols are underdeveloped: several works (e.g., PuLID [14], UniPortrait [15], and others [60, 68]) construct test sets by sampling identities from CelebA [29], which undermines reproducibility. Recent efforts benchmark multiple reference generation [54, 71] while focusing on general customization. To address this, we release a curated multi-ID benchmark with standardized splits and comprehensive metrics to facilitate future research. 3. MultiID-2M: Paired Multi-Person Dataset Construction MultiID-2M is a large-scale multi-person dataset constructed via a four-stage pipeline: (1) collect single-ID images from the web and construct a clean reference bank by clustering ArcFace [11] embeddings, yielding ∼1M reference images across ∼3k identities (averaging 400 per identity); (2) retrieve candidate group photos via multi-name and sceneCross Attention DiT Blocks Face Embedding Branch Image Embedding Branch a Model Architecture Target Predicted Negative Pool Embedding Target Target Detection GT-Landmark Predicted 1- cos( , ) Extended negative samples ID Contrastive Loss GT-aligned ID Loss b Training Objectives Figure 4. (a) Architecture of WithAnyone: Each reference is encoded by both a face-recognition network and a general image encoder, yielding identity-discriminative signals and complementary mid-level features. Face embeddings are restricted to attend only to image tokens within their corresponding face regions. (b) Training Objectives of WithAnyone: In addition to the diffusion loss, we incorporate an ID contrastive loss and a ground-truth-aligned ID loss, which together provide consistent and accurate identity supervision. aware queries and detect faces; (3) assign identities by matching ArcFace embeddings to single-ID cluster centers using cosine similarity (threshold 0.4); and (4) perform automated filtering and annotation, including Recognize Anything [69], aesthetic scoring [12], OCR-based watermark/logo removal, and LLM-based caption generation [1]. The final corpus comprises ∼500k identified multi-ID images with matched references from the reference bank, as well as ∼1.5M additional unidentified multi-ID images for reconstruction training, covering ∼25k unique identities, with diverse nationalities and ethnicities. Further details of the construction pipeline and dataset statistics are provided in Appendix B. 4. MultiID-Bench: Comprehensive ID Customization Evaluation MultiID-Bench is a unified benchmark for group-photo (multi-ID) generation. It samples rare, long-tail identities with no overlap to training data, yielding 435 test cases. Each case consists of one ground-truth (GT) image containing 1-4 people, the corresponding 1-4 reference images as inputs, and a prompt describing the GT. Detailed statistics are provided in Appendix B. Evaluation considers both identity fidelity and generation quality. Let r, t, g denote the face embeddings of the reference identity, the target (ground-truth), and the generated image, respectively. We define similarity between two embeddings as Sim(a, b), specifically we term the generated image's face similarity with regard to GT as SimGT, and to reference as SimRef, Sim(a, b) = a⊤b ∥a∥∥b∥, (1) Specially, we denote SimRef = Sim(r, g) and SimGT = Sim(t, g). Prior works [8, 14, 15, 68] has largely reported only SimRef, which inadvertently favors trivial copy-paste: directly replicating the reference appearance maximizes the score, even when the prompt specifies changes in pose, expression, or viewpoint. In contrast, MultiID-Bench uses SimGT the similarity to the ground-truth identity described by the prompt as the primary metric. This design penalizes excessive copying when natural variations (e.g., pose, expression, occlusion) are expected, while rewarding faithful realization of the prompted scene. We define the angular distance as θab = arccos(Sim(a, b)) (geodesic distance on the unit sphere). The Copy-Paste metric is given by MCP(g | t, r) = θgt -θgr max(θtr, ε) ∈[-1, 1], (2) where ε is a small constant for numerical stability. The metric thus captures the relative bias of g toward the reference r versus the ground truth t, normalized by angular distance of r and t. A score of 1 means g fully coincides with the reference (perfect copy-paste), while -1 means full agreement with the ground truth. We additionally report identity blending, prompt fidelity (CLIP I/T), and aesthetics; formal definitions and further details are provided in Appendix C. 5. WithAnyone: Controllable and IDConsistent Generation Building on the scale and paired-reference supervision of the MultiID-2M, we devise training strategies and tailored objectives that transcend reconstruction to enable robust, identity-conditioned synthesis. This rich, identity-labeled supervision not only substantially improves identity fidelity but also suppresses trivial copy-paste artifacts and affords finer control over multi-identity composition. Motivated by these advantages, we introduce WithAnyone - a unified architecture and training recipe designed for controllable, high-fidelity multi-ID generation. Architectural schematics and implementation details are provided in Fig. 4 and Appendix E. 5.1. Training Objectives Diffusion Loss. We adopt the mini-batch empirical flowmatching loss. For each batch, we sample a data latent x1 ∼pdata, Gaussian noise x0 ∼N(0, I), and a timestep t ∼U(0, 1). We then form the interpolated latent xt = (1 -t)x0 + tx1 and regress the target velocity (x1 -x0): Ldiff = vθ(x(i) t , t(i), c(i)) -(x(i) 1 -x(i) 0 ) 2 2, (3) where c(i) denotes the conditioning signal. Ground-truth-Aligned ID Loss. Since ArcFace embedding requires landmark detection and alignment, directly extracting landmarks from Igen is unreliable because generated images are obtained through noisy diffusion or one-step denoising. Prior methods compromise: PortraitBooth [39] applies the loss only at low noise levels (t 0.40 are considered. a MultiID-Bench Method Identity Metrics Generation Quality Sim(GT) ↑ Sim(Ref) ↑ CP ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ DreamO 0.454 0.694 0.303 0.793 0.322 4.877 OmniGen 0.398 0.602 0.248 0.780 0.317 5.069 OmniGen2 0.365 0.475 0.142 0.787 0.331 4.991 FLUX.1 Kontext 0.324 0.408 0.099 0.755 0.327 5.319 Qwen-Image-Edit 0.324 0.409 0.093 0.776 0.316 5.056 GPT-4o Native 0.425 0.579 0.178 0.794 0.311 5.344 UNO 0.304 0.428 0.141 0.765 0.314 4.923 USO 0.401 0.635 0.286 0.790 0.329 5.077 UMO 0.458 0.732 0.359 0.783 0.305 4.850 UniPortrait 0.447 0.677 0.265 0.793 0.319 5.018 ID-Patch 0.426 0.633 0.231 0.792 0.312 4.900 InfU 0.439 0.630 0.233 0.772 0.328 5.359 PuLID 0.452 0.705 0.315 0.779 0.305 4.839 InstantID 0.464 0.734 0.337 0.764 0.295 5.255 Ours 0.460 0.578 0.144 0.798 0.313 4.783 GT 1.000 0.521 -0.999 N/A N/A N/A Ref 0.521 1.000 0.999 N/A N/A N/A b OmniContext Single Character Subset Method Quality Metrics Overall PF ↑ SC ↑ Overall ↑ DreamO 8.13 7.09 7.02 OmniGen 7.50 5.52 5.47 OmniGen2 8.64 8.50 8.34 FLUX.1 Kontext 7.72 8.60 7.94 Qwen-Image-Edit 7.66 8.16 7.51 GPT-4o Native 7.98 9.06 8.12 UNO 7.22 7.72 7.04 USO 6.96 7.88 6.70 UMO 6.56 7.92 6.79 UniPortrait 6.62 6.00 5.55 ID-Patch N/A N/A N/A InfU 7.69 4.62 4.70 PuLID 6.62 6.83 5.78 InstantID 4.89 5.49 4.35 Ours 7.43 7.04 6.52 0.42 0.43 0.44 0.45 0.46 Sim(GT) 0.16 0.24 0.32 Copy-Paste UniPortrait ID-Patch InfU GPT-4o IntantID Ours Other Methods DreamO PuLID a Single-ID subset 0.30 0.35 0.40 Sim(GT) 0.1 0.2 0.3 Copy-Paste UniPortrait GPT-4o OmniGen OmniGen2 Ours Other Methods ID-Patch DreamO b Multi-ID subset Figure 5. Trade-off between Face Similarity and Copy-paste. Except for WithAnyone, the other models fall roughly on a fitted curve, illustrating a clear trade-off between face similarity and copy-paste. Upper-right corner is desired. image rather than learn robust identity-conditioned generation. Leveraging our paired dataset, we employ a four-phase training pipeline that gradually transitions the objective from reconstruction toward controllable, identity-preserving synthesis. Phase 1: Reconstruction pre-training with fixed prompt. We begin with reconstruction pre-training to initialize the backbone, as this task is simpler than full identityconditioned generation and can exploit large-scale unlabeled data. For the first few thousand steps, the caption is fixed to a constant dummy prompt (e.g., "two people"), ensuring the model prioritizes learning the identity-conditioning pathway rather than drifting toward text-conditioned styling. The full MultiID-2M is used in this phase, which typically lasts for 20k steps, at which point the model achieves satisfactory identity similarity. To further enhance data diversity, CelebAHQ [23], FFHQ [24], and a subset of FaceID-6M [51] are also incorporated. Phase 2: Reconstruction pre-training with full captions. This phase aligns identity learning with textconditioned generation and lasts for an additional 40k steps, during which the model reaches peak identity similarity. Phase 3: Paired tuning. To suppress trivial copy-paste behavior, we replace 50% of the training samples with paired instances drawn from the 500k labeled images in MultiID2M. For each paired sample, instead of using the same image as both input and target, we randomly select one reference image from the identity's reference set and another distinct image of the same identity as the target. This perturbation breaks the shortcut of direct duplication and compels the model to rely on high-level identity embeddings rather than low-level copying. Phase 4: Quality tuning. Finally, we fine-tune on a curated high-quality subset augmented with generated stylized variants to (i) enhance perceptual fidelity and (ii) improve style robustness and transferability. This phase refines texture, lighting, and stylistic adaptability while preserving the strong identity consistency established in earlier phases. 6. Experiments In this section, we present a comprehensive evaluation of baselines and our WithAnyone model on the proposed MultiID-Bench. Baselines. We evaluate two categories of baseline methods: general customization models and face customization methods. The general customization models include OmniGen [61], OmniGen2 [54], Qwen-Image-Edit [53], FLUX.1 Kontext [2], UNO [56], USO [55], UMO [8], and native GPT-4o-Image [32]. The face customization methods include UniPortrait [15], ID-Patch [68], PuLID [14] (referring to its FLUX [27] implementation throughout this paper), and InstantID [50]. All models were evaluated on the singleperson subset of the benchmark, while only those supporting Table 2. Quantitative comparison on the multi-person subset of MultiID-Bench. , , and indicate the first-, second-, and third-best performance, respectively. For Copy-Paste ranking, only cases with Sim(GT) > 0.35 are considered. GPT exhibits prior knowledge of identities from TV series in subsets with more than two IDs, leading to abnormally high similarity scores. a 2-people Subset Method Identity Metrics Generation Quality Sim(GT) ↑ Sim(Ref) ↑ CP ↓ Bld ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ DreamO 0.359 0.514 0.179 0.105 0.763 0.319 4.764 OmniGen 0.345 0.529 0.209 0.110 0.750 0.326 5.152 OmniGen2 0.283 0.353 0.081 0.112 0.763 0.334 4.547 GPT 0.332 0.400 0.061 0.092 0.774 0.328 5.676 UNO 0.223 0.274 0.043 0.082 0.735 0.325 4.805 UMO 0.328 0.491 0.176 0.111 0.743 0.316 4.772 UniPortrait 0.367 0.601 0.254 0.075 0.750 0.323 5.187 ID-Patch 0.350 0.517 0.183 0.085 0.767 0.326 4.671 Ours 0.405 0.551 0.161 0.079 0.770 0.321 4.883 b 3-and-4-people Subset Method Identity Metrics Generation Quality Sim(GT) ↑ Sim(Ref) ↑ CP ↓ Bld ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ DreamO 0.311 0.427 0.116 0.081 0.709 0.317 4.695 OmniGen 0.345 0.529 0.209 0.110 0.750 0.326 5.152 OmniGen2 0.288 0.374 0.099 0.071 0.734 0.329 4.664 GPT 0.445 0.484 0.048 0.044 0.815 0.320 5.647 UNO 0.228 0.276 0.046 0.065 0.717 0.319 4.880 UMO 0.318 0.465 0.180 0.070 0.717 0.309 4.946 UniPortrait 0.343 0.517 0.178 0.048 0.708 0.323 5.090 ID-Patch 0.379 0.543 0.195 0.059 0.781 0.329 4.547 Ours 0.414 0.561 0.171 0.045 0.771 0.325 4.955 Input Ref DreamO GPT UniPortrait ID-Patch WithAnyone GT Prompt: "A couple posing together. The woman wears a blue, sleeveless, V-neck dress, while the man dons a light blue, semi-buttoned shirt. Both are smiling and standing close, with the man's arm around the woman, indicating a friendly or intimate relationship. Prompt: "a man in a dark suit holding a coffee mug and a woman in a light blue sweater resting her head on her hand. They appear to be in a kitchen, looking concerned or surprised. The man is standing, while the woman is seated at a counter. Prompt: "three people, two women and one man, posing closely together. The woman on the left wears a white blazer, while the younger woman in front has a strapless top. The man has a white shirt. All are smiling warmly at the camera. Prompt: "four people dressed in white shirts posing together. The group includes three males and two females, with one male and one female in the center. They are smiling and standing closely, suggesting a family or close-knit group. The attire is casual and coordinated. UMO Kontext QwenImgEdit PuLID Input Ref InstantID GT WithAnyone Prompt: "a woman wearing a white hooded jacket with a black inner garment. Her hair is styled loosely, and she has minimal makeup. The woman is posing with her head slightly tilted, showcasing a calm and composed demeanor. Her expression is neutral. Prompt: "a woman with long, dark hair flowing dynamically. She wears a white and blue geometric patterned top with a shawl-like drape. Her posture is poised, showcasing elegant jewelry and a subtle smile. The background features a blurred circular pattern in shades of gray. Prompt: "a woman in a black leather jacket holding a red microphone. She is smiling and appears to be performing or speaking, with her head slightly tilted and her mouth open as if she is in the middle of talking. Her long brown hair is styled straight.. UniPortrait Figure 6. Qualitative Results of Different Generation Methods. The text prompt is extracted from the ground-truth image shown on the leftmost side. multi-ID generation were additionally tested on the multiperson subset. Further implementation details are provided in Appendix F.1. 6.1. Quantitative Evaluation The quantitative results are reported in Tables 1 and 2. We observe a clear trade-off between face similarity and copy-paste artifacts. As shown in Fig. 5, most methods align closely with a regression curve, where higher face similarity generally coincides with stronger copy-paste. This indicates that many existing models boost measured similarity by directly replicating reference facial features rather than synthesizing the identity. In contrast, WithAnyone deviates substantially from this curve, achieving the highest face similarity with regard to GT while maintaining a markedly lower copy-paste score. WithAnyone also achieves the highest score among IDspecific reference models on the OmniContext [54] benchmark. However, VLMs [1, 32] exhibit limited ability to distinguish individual identities and instead emphasize nonidentity attributes such as pose, expression, or background. Despite that general customization and editing models often outperform face customization models on OmniContext, WithAnyone still has best performance among face customization models. 6.2. Qualitative Comparison To complement the quantitative results, Fig. 6 presents qualitative comparisons between our method, state-of-the-art general customization/editing models, and face customization generation models. It shows that identity consistency remains a significant weakness of general customization or editing models, consistent with our quantitative findings. Many VAE-based approaches where references are encoded through a VAE, such as FLUX.1 Kontext and DreamO tend to produce faces that either exhibit copy-paste artifacts or deviate markedly from the target identity. A likely reason is that VAE embeddings emphasize low-level features, leaving high-level semantic understanding to the diffusion backbone, which may not have been pre-trained for this task. ID-specific reference models also struggle with copy-paste artifacts. For example, they fail to make the subject smile when the reference image is neutral and often cannot adjust head pose or even eye gaze. In contrast, WithAnyone generates flexible, controllable faces while faithfully preserving identity. 6.3. Ablation and User Studies To better understand the contribution of each component in WithAnyone, we conduct ablation studies on the training strategy, the GT-aligned ID loss, the InfoNCE-based ID loss, and our dataset. Due to space constraints, we report Table 3. Ablation Study. indicate the first, second, third performance respectively. We ablate paired data training (without stage 2, w/o s2), GT-Aligned landmark ID loss (Self-aligned, S.A.), extended negative samples in InfoNCE (w/o neg). And model trained on FFHQ is also compared. Ablation Identity Metrics Generation Quality Sim(G) ↑ Sim(R) ↑ CP ↓ CLIP-I ↑ CLIP-T ↑ Aes ↑ Phases w/o Phase 3 0.406 0.625 0.239 0.755 0.307 4.955 Loss w/o GT-Align 0.385 0.549 0.175 0.763 0.317 4.754 w/o Ext. Neg. 0.368 0.455 0.074 0.740 0.304 4.984 Data FFHQ only 0.224 0.246 0.027 0.658 0.330 5.039 Ours Full Setting 0.405 0.551 0.161 0.770 0.321 4.883 0.2 0.4 0.6 0.8 Noise Level 0.71 Pred Lmk 0.54 0.34 0.36 0.21 0.10 0.06 Face Sim 0.44 GT Lmk 0.0 0.5 1.0 ID Loss Figure 7. Comparison of GT-aligned and Prediction-aligned landmarks. the key results here, with additional analyses provided in Appendix G. As shown in Table 3, the paired-data fine-tuning phase reduces copy-paste artifacts without diminishing similarity to the ground truth, while training on FFHQ performs significantly worse than on our curated dataset. Fig. 7 further demonstrates that the GT-aligned ID loss lowers denoising error at low noise levels and yields higher-variance, more informative gradients at high noise, thereby strengthening identity learning. By ablating extended negatives, leaving only 63 negative samples from the batch (originally extended to 4096), the effectiveness of ID contrastive loss is greatly reduced. More ablation results can be found in Appendix G. We conduct a user study to evaluate perceptual quality and identity preservation. Ten participants were recruited and asked to rank 230 groups of generated images according to four criteria: identity similarity, presence of copy-paste artifacts, prompt adherence, and aesthetics. The results, shown in Fig. 8, indicate that our method consistently achieves the highest average ranking across all dimensions, demonstrating both stronger identity preservation and superior visual quality. Moreover, the copy-paste metric exhibits a moderate positive correlation with human judgments, suggesting that it captures perceptually meaningful artifacts. Further details of the study design, ranking protocol, and statistical analysis are provided in Appendix H. Sim CP PF Aes Ours UNO ID-Patch UniPortrait OmniGen Figure 8. User study. Bigger bubbles indicate higher ranking. 7. Conclusion Copy-paste artifacts are a common limitation of identity customization methods, and face-similarity metrics often exacerbate the issue by implicitly rewarding direct copying. In this work, we identify and formally quantify this failure mode through MultiID-Bench, and propose targeted solutions. We curate MultiID-2M and develop training strategies and loss functions that explicitly discourage trivial replication. Empirical evaluations demonstrate that WithAnyone significantly reduces copy-paste artifacts while maintaining and in many cases improving identity similarity, thereby breaking the long-standing trade-off between fidelity and copying. These results highlight a practical path toward more faithful, controllable, and robust identity customization. References [1] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. arXiv preprint , 2025. 4, 8, 19 [2] Stephen Batifol, Andreas Blattmann, Frederic Boesel, Saksham Consul, Cyril Diagne, Tim Dockhorn, Jack English, Zion English, Patrick Esser, Sumith Kulal, et al. Flux. 1 kontext: Flow matching for in-context image generation and editing in latent space. arXiv e-prints, 2025. 3, 6, 13 [3] Anthony Chen, Jianjin Xu, Wenzhao Zheng, Gaole Dai, Yida Wang, Renrui Zhang, Haofan Wang, and Shanghang Zhang. Training-free regional prompting for diffusion transformers. arXiv preprint , 2024. 15 [4] Bowen Chen, Mengyi Zhao, Haomiao Sun, Li Chen, Xu Wang, Kang Du, and Xinglong Wu. Xverse: Consistent multi-subject control of identity and semantic attributes via dit modulation. arXiv preprint , 2025. 2, 3 [5] Weifeng Chen, Jiacheng Zhang, Jie Wu, Hefeng Wu, Xuefeng Xiao, and Liang Lin. Id-aligner: Enhancing identity-preserving text-to-image generation with reward feedback learning. arXiv preprint , 2024. 3 [6] Wei Cheng, Ruixiang Chen, Siming Fan, Wanqi Yin, Keyu Chen, Zhongang Cai, Jingbo Wang, Yang Gao, Zhengming Yu, Zhengyu Lin, et al. Dna-rendering: A diverse neural actor repository for high-fidelity humancentric rendering. In ICCV, 2023. 14 [7] Wei Cheng, Su Xu, Jingtan Piao, Chen Qian, Wayne Wu, Kwan-Yee Lin, and Hongsheng Li. Generalizable neural performer: Learning robust radiance fields for human novel view synthesis. arXiv preprint , 2022. 14 [8] Yufeng Cheng, Wenxu Wu, Shaojin Wu, Mengqi Huang, Fei Ding, and Qian He. Umo: Scaling multiidentity consistency for image customization via matching reward. arXiv preprint , 2025. 2, 3, 4, 6 [9] Jiaming Chu, Lei Jin, Yinglei Teng, Jianshu Li, Yunchao Wei, Zheng Wang, Junliang Xing, Shuicheng Yan, and Jian Zhao. Uniparser: Multi-human parsing with unified correlation representation learning. TIP, 2024. 2, 3, 14 [10] Jiankang Deng, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. Retinaface: Singleshot multi-level face localisation in the wild. In CVPR, 2020. 5 [11] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019. 2, 3, 5, 19 [12] discus0434. aesthetic-predictor-v2-5. https: / / github . com / discus0434 / aesthetic - predictor-v2-5, 2023. Accessed: 2025-05-12. 4, 13, 15 [13] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. In ICML, 2024. 3 [14] Zinan Guo, Yanze Wu, Zhuowei Chen, Lang Chen, Peng Zhang, and Qian He. Pulid: Pure and lightning id customization via contrastive alignment. In NeurIPS, 2024. 2, 3, 4, 5, 6, 15 [15] Junjie He, Yifeng Geng, and Liefeng Bo. Uniportrait: A unified framework for identity-preserving single-and multi-human image personalization. ICCV, 2025. 2, 3, 4, 6, 14 [16] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Promptto-prompt image editing with cross attention control. ICLR, 2023. 2 [17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NeurIPS, 2020. 3 [18] Xirui Hu, Jiahao Wang, Hao Chen, Weizhan Zhang, Benqi Wang, Yikun Li, and Haishun Nan. Dynamicid: Zero-shot multi-id image personalization with flexible facial editability. ICCV, 2025. 3 [19] Yuqi Hu, Longguang Wang, Xian Liu, Ling-Hao Chen, Yuwei Guo, Yukai Shi, Ce Liu, Anyi Rao, Zeyu Wang, and Hui Xiong. Simulating the real world: A unified survey of multimodal generative models. arXiv preprint , 2025. 2 [20] Junha Hyung, Jaeyo Shin, and Jaegul Choo. Magicapture: High-resolution multi-concept portrait customization. In AAAI, 2024. 3 [21] Liming Jiang, Qing Yan, Yumin Jia, Zichuan Liu, Hao Kang, and Xin Lu. Infiniteyou: Flexible photo recrafting while preserving your identity. ICCV, 2025. 2 [22] Qing Jiang, Lin Wu, Zhaoyang Zeng, Tianhe Ren, Yuda Xiong, Yihao Chen, Qin Liu, and Lei Zhang. Referring to any person. 2025. 2, 3, 14 [23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. ICLR, 2018. 3, 6 [24] Tero Karras, Samuli Laine, and Timo Aila. A stylebased generator architecture for generative adversarial networks. In CVPR, 2019. 6, 19 [25] Chanran Kim, Jeongin Lee, Shichang Joung, Bongmo Kim, and Yeul-Min Baek. Instantfamily: Masked attention for zero-shot multi-id image generation. arXiv preprint , 2024. 3 [26] Minchul Kim, Anil K Jain, and Xiaoming Liu. Adaface: Quality adaptive margin for face recognition. In CVPR, 2022. 19 [27] Black Forest Labs. Flux. https://github.com/ black-forest-labs/flux, 2024. 2, 3, 6, 13 [28] Black Forest Labs. Flux.1 krea. https : //huggingface.co/black-forest-labs/ FLUX.1-Krea-dev, 2025. 13 [29] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015. 2, 3, 14 [30] Chong Mou, Yanze Wu, Wenxu Wu, Zinan Guo, Pengze Zhang, Yufeng Cheng, Yiming Luo, Fei Ding, Shiwen Zhang, Xinghui Li, et al. Dreamo: A unified framework for image customization. SIGGRAPH Asia, 2025. 3 [31] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint , 2018. 5 [32] OpenAI. Addendum to gpt-4o system card: Native image generation, 2025. 6, 8 [33] Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin ElNouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision. , 2023. 14 [34] Dongwei Pan, Long Zhuo, Jingtan Piao, Huiwen Luo, Wei Cheng, Yuxin Wang, Siming Fan, Shengqi Liu, Lei Yang, Bo Dai, et al. Renderme-360: A large digital asset library and benchmarks towards high-fidelity head avatars. NeurIPS, 2023. 14 [35] Foivos Paraperas Papantoniou, Alexandros Lattas, Stylianos Moschoglou, Jiankang Deng, Bernhard Kainz, and Stefanos Zafeiriou. Arc2face: A foundation model for id-consistent human faces. In ECCV, 2024. 3 [36] Gaurav Parmar, Or Patashnik, Kuan-Chieh Wang, Daniil Ostashev, Srinivasa Narasimhan, Jun-Yan Zhu, Daniel Cohen-Or, and Kfir Aberman. Object-level visual prompts for compositional image generation. arXiv preprint , 2025. 3 [37] Or Patashnik, Rinon Gal, Daniil Ostashev, Sergey Tulyakov, Kfir Aberman, and Daniel Cohen-Or. Nested attention: Semantic-aware attention values for concept personalization. In SIGGRAPH, 2025. 3 [38] William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 3 [39] Xu Peng, Junwei Zhu, Boyuan Jiang, Ying Tai, Donghao Luo, Jiangning Zhang, Wei Lin, Taisong Jin, Chengjie Wang, and Rongrong Ji. Portraitbooth: A versatile portrait model for fast identity-preserved personalization. In CVPR, 2024. 5 [40] Guocheng Qian, Kuan-Chieh Wang, Or Patashnik, Negin Heravi, Daniil Ostashev, Sergey Tulyakov, Daniel Cohen-Or, and Kfir Aberman. Omni-id: Holistic identity representation designed for generative tasks. CVPR, 2025. 3 [41] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. 15 [42] Xingyu Ren, Alexandros Lattas, Baris Gecer, Jiankang Deng, Chao Ma, and Xiaokang Yang. Facial geometric detail recovery via implicit representation. In FG, 2023. 13 [43] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 3 [44] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In CVPR, 2023. 2 [45] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015. 2, 19 [46] Erich Schubert, Jörg Sander, Martin Ester, Hans Peter Kriegel, and Xiaowei Xu. Dbscan revisited, revisited: why and how you should (still) use dbscan. TODS, 2017. 13 [47] Lorenzo Stacchio, Alessia Angeli, Giuseppe Lisanti, Daniela Calanca, and Gustavo Marfia. Imago: A family photo album dataset for a socio-historical analysis of the twentieth century. arXiv preprint , 2020. 2, 14 [48] Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. Face0: Instantaneously conditioning a textto-image model on a face. In SIGGRAPH Asia, 2023. 3 [49] Qinghe Wang, Xu Jia, Xiaomin Li, Taiqing Li, Liqian Ma, Yunzhi Zhuge, and Huchuan Lu. Stableidentity: Inserting anybody into anywhere at first sight. TMM, 2025. 3 [50] Qixun Wang, Xu Bai, Haofan Wang, Zekui Qin, and Anthony Chen. Instantid: Zero-shot identitypreserving generation in seconds. arXiv preprint , 2024. 2, 6 [51] Shuhe Wang, Xiaoya Li, Jiwei Li, Guoyin Wang, Xiaofei Sun, Bob Zhu, Han Qiu, Mo Yu, Shengjie Shen, Tianwei Zhang, et al. Faceid-6m: A large-scale, opensource faceid customization dataset. arXiv preprint , 2025. 2, 3, 6 [52] Yibin Wang, Weizhong Zhang, Jianwei Zheng, and Cheng Jin. High-fidelity person-centric subject-toimage synthesis. In CVPR, 2024. 3 [53] Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-image technical report. arXiv preprint , 2025. 3, 6 [54] Chenyuan Wu, Pengfei Zheng, Ruiran Yan, Shitao Xiao, Xin Luo, Yueze Wang, Wanli Li, Xiyan Jiang, Yexin Liu, Junjie Zhou, Ze Liu, Ziyi Xia, Chaofan Li, Haoge Deng, Jiahao Wang, Kun Luo, Bo Zhang, Defu Lian, Xinlong Wang, Zhongyuan Wang, Tiejun Huang, and Zheng Liu. Omnigen2: Exploration to advanced multimodal generation. arXiv preprint , 2025. 3, 6, 8, 19 [55] Shaojin Wu, Mengqi Huang, Yufeng Cheng, Wenxu Wu, Jiahe Tian, Yiming Luo, Fei Ding, and Qian He. Uso: Unified style and subject-driven generation via disentangled and reward learning. arXiv preprint , 2025. 6 [56] Shaojin Wu, Mengqi Huang, Wenxu Wu, Yufeng Cheng, Fei Ding, and Qian He. Less-to-more generalization: Unlocking more controllability by in-context generation. ICCV, 2025. 3, 6, 14 [57] Tong Wu, Yinghao Xu, Ryan Po, Mengchen Zhang, Guandao Yang, Jiaqi Wang, Ziwei Liu, Dahua Lin, and Gordon Wetzstein. Fiva: Fine-grained visual attribute dataset for text-to-image diffusion models. NeurIPS, 2024. 2 [58] Yi Wu, Ziqiang Li, Heliang Zheng, Chaoyue Wang, and Bin Li. Infinite-id: Identity-preserved personalization via id-semantics decoupling paradigm. In ECCV, 2024. 3 [59] Chufeng Xiao and Hongbo Fu. Customsketching: Sketch concept extraction for sketch-based image synthesis and editing. In Computer Graphics Forum. Wiley Online Library, 2024. 2 [60] Guangxuan Xiao, Tianwei Yin, William T Freeman, Frédo Durand, and Song Han. Fastcomposer: Tuningfree multi-subject image generation with localized attention. IJCV, 2025. 3 [61] Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, and Zheng Liu. Omnigen: Unified image generation. arXiv preprint , 2024. 3, 6 [62] Hengyuan Xu, Liyao Xiang, Hangyu Ye, Dixi Yao, Pengzhi Chu, and Baochun Li. Permutation equivariance of transformers and its applications. In CVPR, 2024. 15 [63] Yuxuan Yan, Chi Zhang, Rui Wang, Yichao Zhou, Gege Zhang, Pei Cheng, Gang Yu, and Bin Fu. Facestudio: Put your face everywhere in seconds. arXiv preprint , 2023. 3 [64] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arxiv:2308.06721, 2023. 2, 3, 15 [65] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In ICCV, 2023. 15, 19 [66] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In ICCV, 2023. 2 [67] Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, and Lubomir Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. In CVPR, 2015. 2, 14 [68] Yimeng Zhang, Tiancheng Zhi, Jing Liu, Shen Sang, Liming Jiang, Qing Yan, Sijia Liu, and Linjie Luo. Id-patch: Robust id association for group photo personalization. In CVPR, 2025. 2, 3, 4, 6, 14, 19 [69] Youcai Zhang, Xinyu Huang, Jinyu Ma, Zhaoyang Li, Zhaochuan Luo, Yanchun Xie, Yuzhuo Qin, Tong Luo, Yaqian Li, Shilong Liu, et al. Recognize anything: A strong image tagging model. arXiv preprint , 2023. 4, 13 [70] Yujie Zhong, Relja Arandjelovic, and Andrew Zisserman. Compact deep aggregation for set retrieval. In ECCV, 2018. 2, 14 [71] Cailin Zhuang, Ailin Huang, Wei Cheng, Jingwei Wu, Yaoqi Hu, Jiaqi Liao, Hongyuan Wang, Xinyao Liao, Weiwei Cai, Hengyuan Xu, et al. Vistorybench: Comprehensive benchmark suite for story visualization. arXiv preprint , 2025. 3 Appendix A. Family of WithAnyone FLUX.1 comprises a family of models, including FLUX.1 [27], FLUX.1 Kontext [2] and FLUX.1 Krea [28]. Krea is a text-to-image model with improved real-person face generation, whereas Kontext is an image-editing model that excels at making targeted adjustments while preserving the rest of the image. However, as reported in Table 1, Kontext shows limited consistency with the reference face identity. Our method, WithAnyone, can be seamlessly integrated into Kontext for the face customization downstream tasks like face editing. As illustrated in Fig. 9, WithAnyone effectively injects identity information from the reference images into the target image. The overall training pipeline follows the procedure described in Sec. 5, with a single modification: the input image provided to Kontext (whose tokens are concatenated with the noisy latent at each denoising step) is set to the target image with the face region blurred. B. MultiID-2M Construction Details To fill in the void left by the lack of publicly available multiID datasets, a data constraction pipeline is proposed to create a large-scale dataset of multi-person images with paired identity references for identities on the data record. Based on this pipeline, 500k group photo images are collected, featuring 3k identities, each with hundreds of single-ID reference images. Another 1M images that cannot be identified are also included in the dataset for image reconstruction training purpose for image reconstruction training purpose. B.1. Dataset Construction Pipeline The pipeline contains four steps, as shown in Fig. 3. The detailed pipeline are as follows. Single-ID images. To construct a ID reference set, singleID images were collected from the web using celebrity names as search queries on Google Images. For each image, facial features were extracted with ArcFace [42], ensuring that only images containing exactly one face were retained. To remove outliers, DBSCAN [46] clustering was applied to the embeddings for each celebrity, resulting in a set of cluster centers and hundreds of reference images per identity. This process established a reliable reference set for each unique identity. Human review confirms the accuracy of the ID bank built in this step. Multi-ID images. To achieve best searching efficiency, group photos were obtained using more complex queries that combined multiple celebrity names, keywords indicating the number of people (e.g., "two celebrities"), scene descriptors (e.g., "award ceremony"), and negative keywords to filter out irrelevant results. ArcFace embeddings were extracted for these images, yielding a large pool of candidate multi-ID images. At this stage, the dataset comprised more than 20 million images. Retrieval. To provide ID reference for the multi-ID images, it is necessary to retrieve the IDs on it. All single-ID cluster centers were aggregated into an embedding matrix. For each detected face in every multi-ID image, its ArcFace embedding was compared to all single-ID cluster centers to determine identity. The similarity between two embeddings was calculated as: sim(id1, id2) = cos(f(id1), f(id2)) (9) where id1 and id2 denote two faces, and f is the ArcFace embedding network. Each face in a multi-ID image was assigned the identity of the single-ID cluster center with the highest similarity, provided the similarity exceeded a predefined threshold (0.5). This approach enabled accurate and automated identity assignment in group images and facilitated retrieval of corresponding reference images. Filtering and labelling. To further improve dataset quality, a series of annotation and filtering steps were applied. The Recognize Anything model [69], an aesthetic score predictor [12], and other auxiliary tools were used for annotation. Images with low aesthetic scores or those identified as collages rather than genuine group photos were excluded. WithAnyone Kontext WithAnyone Kontext Figure 9. Application of WithAnyone-Kontext. Marrying editing models, WithAnyone is capable of face editing given customization references. Celebrities 0k 15k 30k Appearance Count Range: 3 to 42738 Median: 2407 a ID Appearance. 0 600 1200 Count Canada Others Japan Europe Korea China USA b Nationality Distribution. European descent Asian descent African descent Single Multiple US CN CA EU c Benchmark Distribution. Figure 10. Overview of Dataset Distributions. (a) ID appearance distribution for the subset of one nation: the x-axis represents celebrities, sorted by the number of images in which they appear. (b) Nationality distribution: celebrities in our dataset come from over 10 countries, with most data sourced from China and the USA. (c) Word cloud of the most frequent words in the captions. Optical Character Recognition (OCR) tools detected watermarks and logos, which were cropped out when possible; otherwise, the images were discarded. Finally, descriptive captions were generated for the images using a large language model, enriching the dataset with textual information. So far, a dataset with three parts is obtained: (1) 1M single-ID images as reference bank, or single-ID crosspaired training; (2) 500k paired multi-ID images with identified persons; (3) 1M unpaired multi-ID images, which can be used for training scenario without the need of references, such as reconstruction. B.2. Dataset Statistics Following prior arts [6, 7, 29, 34], comprehensive statistics of the dataset are provided in Fig. 11, including the distribution of nationalities, the count of appearances per identity, and a word cloud illustrating the most frequent terms in the generated image captions, offering insights into the diversity and richness of the dataset. A long-tail distribution is observed in the count of appearances per identity in Fig. 11a, with a few identities appearing frequently while many others are less common. This provide a diverse set of identities, as well as a perfect test dataset without identity interaction with the training set. Fig. 11b and Fig. 10c illustrate MultiID-2M's nationality distribution and action diversity respectively. The comparison between the proposed dataset and existing multi-ID datasets are listed in Table 4, highlighting MultiID-2M's outstanding volume and paired references. C. Benchmark and Metrics Details Most existing methods are evaluated on privately curated test sets that are seldom released, and even when datasets are shared, the accompanying evaluation protocols vary widely. For example, ID-Patch [68] and UniPortrait [15] measure identity similarity using ArcFace embeddings, whereas UNO [56] relies on DINO [33] and CLIP similarity scores. This Table 4. Statistic comparison for multi-identity group photo datasets. #Img refers to total scale of the dataset; #Paired refers to paired group image number; #Img / ID indicates number of reference image for each single ID; #ID / Img means number of IDs appears on group photos. Dataset #Img #Paired #Img / ID #ID / Img IMAGO [47] 80k 0 0 - MHP [9] 5k 0 0 2 -10 PIPA [67] 40k 40k cross 1 -10 HumanRef [22] 36k 36 1+ 1 -14+ Celebrity Together [70] 194k 0 0 1 -5 MultiID-2M 1.5M 500k 100+ 1 -5 heterogeneity together with the common practice of reporting only the cosine similarity between matched ArcFace embeddings fails to capture more nuanced insights and can even encourage degenerate behavior in which models produce images that are effectively "copy-pastes" of the reference photos. In this work, MultiID-Bench is introduced as a unified and extensible evaluation framework for group photo (multiID) generation. It standardizes assessment along two complementary axes: (i) identity fidelity (preserving each target identity without unintended copying and blending), and (ii) generation quality (semantic faithfulness to the prompt/- ground truth and overall aesthetic quality). The data used in MultiID-Bench are drawn from the long-tail portion of MultiID-2M. We first select the least frequent identities and gather all images containing them. To prevent information leakage, the training split is filtered to ensure zero identity overlap with the benchmark set. The final benchmark contains 435 samples; each sample provides 1-4 reference identities (with their images), a corresponding ground-truth image, and a text prompt describing that ground-truth scene. Identity Blending. In the similarity matrix, the offdiagonal elements correspond to the similarity between different identities. The average of the diagonal elements is Normal Special Upper Top Lower Professional Ancient costume Clothes a Clothes & Accessories Distribution. Single Interaction Dynamic Static Expression intimate communicate others Actions b Action Distribution. Figure 11. Distribution of Clothes and Action Labels of Proposed Dataset. used as the metric for identity fidelity, and the average of the off-diagonal elements serves as the metric for identity blending, as in Eq. 10. MBld(xg, xt) = 1 N 2 -N N X i=1 N X j=1,j̸=i cos(gi, tj) (10) where gi is the embedding of the i-th face in the generated image xg, and tj is the embedding of the j-th face in the ground-truth image xt. A lower value indicates less unintended blending between different identities, which is desirable. Generation quality. The overall generation quality is evaluated based on CLIP-I and CLIP-T, which are the de facto standards for evaluating the prompt-following capability [41], are employed to measure the cosine similarity in the CLIP embedding space between the generated image and the ground truth image or caption. Additionally, an aesthetic score model [12] is used to assess the aesthetic quality of the generated images. D. Galleries of WithAnyone We show more results of WithAnyone in Fig. 12, Fig. 13, and Fig. 14. E. Model Framework Details We follow prior work [14, 64] and integrate a lightweight identity adapter into the diffusion backbone. Identity embeddings are injected by cross-attention so that the base generative prior is preserved while controllable identity signals are added. Face embedding. Each reference face is first encoded by ArcFace, producing a 1 × 512 identity embedding. To match the tokenized latent space of the DiT backbone, this vector is projected with a multi-layer perceptron (MLP) into 8 tokens of dimension 3072 (i.e., an 8 × 3072 tensor). This tokenization provides sufficient capacity for the cross-attention layers to integrate identity cues without overwhelming the generative context. Controllable attribute retention. Completely suppressing copy-like behavior is not always desirable: users sometimes expect certain mid-level appearance attributes (e.g., hairstyle, accessories) to be preserved. ArcFace focuses on high-level, identity-discriminative geometry and texture cues but omits many mid-level semantic factors. To expose controllable retention of such attributes when needed, we optionally incorporate SigLIP [65] as a secondary encoder. SigLIP provides more semantically entangled representations, enabling selective transfer of style-relevant traits while ArcFace anchors identity fidelity. Attention mask and location control. To further improve identity disentanglement and precise localization in the generated images, an attention mask and location control mechanism are incorporated [3, 62]. Specifically, groundtruth facial bounding boxes are extracted from the training data and used to generate binary attention masks. These masks are applied to the attention layers of the backbone model, ensuring that each reference token only attends to its corresponding face region in the image, providing location control at the same time. Feature injection. After each transformer block of the DiT backbone, we inject face features through a crossattention modulation: H′ = H+λid softmax (HWQ)(EWK)⊤ √ d + M (EWV ), (11) where H denotes the current hidden tokens, E the stacked face-embedding tokens, and WQ, WK, WV the projection matrices; d is the query/key dimension, and λid = 1.0 during training. When SigLIP is enabled, its tokens are processed by a parallel cross-attention with an independent scaling Reference Reference Reference Reference Figure 12. Galleries of Single-ID Generation. Figure 13. Galleries of 2-person Generation. Figure 14. Galleries of 3-to-4-person Generation. coefficient. F. Experimental Details F.1. Implementation Details WithAnyone is trained on 8 NVIDIA H100 GPUs, with a batch size of 4 on each GPU. The learning rate is set to 1e-4, and the AdamW optimizer is employed with a weight decay of 0.01. The pre-training phase runs for 60k steps, with a fixed prompt used during the first 20k steps. The subsequent paired-tuning phase lasts 30k steps: 50% of the samples use paired (reference, ground-truth) data, while the remaining 50% continue reconstruction training. Finally, a quality/style tuning stage of 10k steps is performed with a reduced learning rate of 1 × 10-5. For the extended ID contrastive loss, the target is used as the positve sample, while other IDs from samples in the same batch serve as negative samples. With the global batch size of 32, this yields less than a hundred negative samples. Extended negative samples are drawn from reference bank. If this ID is identified as one of the 3k ID in the reference bank, we simply omit its own ID and draw the from other IDs. If this ID is not identified, then it makes things easier - all the IDs in the reference bank can be used as negative samples. For other baseline methods, official implementations and checkpoints (or API) are used with default settings. Methods are tested on MultiID-Bench and real-human subset of OmniContext [54]. OmniContext uses Vision-Language Models (VLMs) to evaluate the prompt-following (PF) and subjectconsistency (SC) of generated images. For reproducibility, the VLM is fixed to Qwen2.5-VL [1]. ID-Patch [68] requires pose condition, and we use the ground-truth pose for it. Single face embedding model may induce biased evaluation on ID similarity, thus we average three de-facto face recognition models' consine similarity to compute the overall ID similarity metric, namely ArcFace [11], FaceNet [45], and AdaFace [26]. F.2. More Discussion on the Quantitative Results The performance of GPT on our 3-and-4-people subset offers a useful validation of our copy-paste metric, as shown in Table 2. This subset largely comprises group photographs from TV series that GPT may have encountered during pretraining, so GPT attains unusually high identity-similarity scores both to the ground truth (GT) and to the reference images. Actually, in one case GPT even generates an ID from the TV series that is not present in the reference images. This behaviour approximates an idealized scenario in which a model fully understands and faithfully reproduces the target identity: similarity to GT and to references are both high, and the copy-paste measure the difference between distances to GT and to references approaches zero. These observations are consistent with our metric design and support its ability to distinguish true identity understanding from trivial copy-and-paste replication. We report the experimental limit in Table 1. If one model completely copy the reference image, SimGT = 0.521, SimRef = 1.0, and copy-paste is 0.999, which aligns with the theoretical limit 1.0 of copy-paste. The prompt-following ability is measured by CLIP-I and CLIP-T in our benchmark, and is judged by VLM in OmniContext. WithAnyonegains state-of-the-art performance in both metrics, and is ranked the highest in our user study. However, the credibility of CLIP scores and the aesthetic scores may be debated, as they are not always consistent with human perception. G. Ablation Study Details In this section, we systematically evaluate the impact of training strategy, GT-aligned ID-Loss, InfoNCE ID Loss, and our dataset construction. User study is also conducted to validate the consistency of the proposed metrics with human perception, as well as evaluate the human preference on different methods. SigLIP signal. SigLIP [65] signal is introduced to retain copy-paste effect when user tend to retain the features from reference images like hairstyle, accessories, etc. As shown in Fig. 16, increasing the SigLIP signal weight effectively amplifies the copy-paste effect while simultaneously boosting ID similarity to the reference images exactly as expected, since stronger SigLIP guidance enforces tighter semantic alignment and transfers more fine-grained appearance cues (e.g., hairstyle, accessories, local textures). Training strategy. We evaluate the effect of a paired-data fine-tuning stage. After an initial reconstruction training phase, we either continue training with paired (reference, ground-truth) data or keep training under the reconstruction objective for 10k steps. As shown in Table 3, continuing with paired data effectively reduces the copy-paste effect without compromising similarity to the ground truth. Dataset construction. To validate the effectiveness of our dataset, we trained a model on FFHQ [24] using reconstruction training for the same number of steps. As shown in Table 3, the FFHQ-trained model performs poorly across all metrics. This likely stems from FFHQ's limited diversity and size, as it contains only 70k face-only portrait images. GT-aligned ID-Loss. We validate the GT-aligned IDLoss with a simple experiment that visualizes predicted faces at different denoising time steps during training. As shown in Fig. 7, at low noise levels the GT-aligned ID-Loss is substantially lower than the loss computed using predictionaligned landmarks, indicating that aligning faces to groundtruth landmarks reduces denoising error and yields a more accurate identity assessment. At high noise levels the GTaligned ID-Loss shows greater variance, producing stronger 0 300 600 Step 0.4 0.5 0.6 0.7 ID Loss 0.0 0.1 0.1+ Figure 15. ID Loss Curves with λ× InfoNCE Loss. 0.1 is 0.1× InfoNCE Loss without extended negative samples, and 0.1+ is 0.1× InfoNCE Loss with extended negative samples. 0.55 0.60 0.62 0.64 Sim(Ref) 0.15 0.20 0.25 Copy-Paste 0.0 * Sig 0.2 * Sig 0.4 * Sig 0.6 * Sig 0.8 * Sig 1.0 * Sig Figure 16. Trade-off Curves with λ× Siglip and (1-λ)× ArcFace signal. Figure 17. User Study Interface. and more informative gradients that help the model learn identity features. InfoNCE Loss. The InfoNCE loss with extended negative samples is crucial for the convergence in the early training stage. We conduct a toy experiment with 1000 training samples, and record ID Loss curves with no InfoNCE loss, 0.1× InfoNCE loss without extended negatives, and 0.1× InfoNCE loss with extended negatives. As shown in Fig. 15, ID loss fits a lot faster with InfoNCE loss with extended negatives, demonstrating its effectiveness in accelerating training convergence. It also largely increases the ID similarity score, as shown in Table 3. H. User Study Details Our user study is conducted with the same data samples and generated results in our quantitative experiments. Due to a tight financial budget, we randomly select 100 samples from single-person subset, 100 samples from 2-people subset, and all samples from 3-and-4 people subset. 10 participants are recruited for the study, all of whom are trained with a brief tutorial to understand the task and evaluation criteria. We illustrate the interface used in our user study in Fig. 17. H.1. Correlation Analysis We analyze the correlation between our proposed metrics and user study results. As shown in Table 5, our copy-paste metric shows a moderate positve correlation with user ratings on copy-paste effect. H.2. Participant Instructions We provide the instructions for training the participants in the following table. I. Prompts for Language Models Large language models (LLMs) and vision-language models (VLMs) are used in various stages of our work, including dataset captioning and OmniContext evaluation. I.1. Dataset Captioning Besides the system prompt, we design 6 different prompts to generate diverse captions for each image. 1 prompt is randomly selected for each image during captioning. Table 5. Correlation Statistics Between Machine Ranking and Human Ranking. Reported values include Pearson's r, Spearman's ρ, and Kendall's τ with corresponding p-values. Dimension (N) Pearson r (p) Spearman ρ (p) Kendall τ (p) Copy-Paste 0.4417 (7.98e-48) 0.4535 (1.26e-50) 0.3405 (1.10e-46) ID Sim 0.3254 (1.54e-26) 0.3237 (2.91e-26) 0.2423 (1.11e-25) Participant Instructions and Evaluation Procedure Data source and task overview. Five different methods generated images under the following conditions: • A single prompt that describes the "ground truth image." • Between 1 and 4 people in the scene (most examples contain 1-2 people). For each trial you will be shown the ground truth image, input images, and a generation instruction. Then you will observe five generated group-photo results (one per method) and rank them according to several evaluation dimensions. Use a 5-star scale where 5 stars = best and 1 star = worst. Please read the input image(s) and the editing instruction carefully before inspecting the generated results. Evaluation procedure (per-image ranking). Rank each generated image individually on the following criteria. Identity similarity • How well do the person(s) in the generated image resemble the person(s) in the ground truth image? • Rank images by their resemblance to the ground truth image: the more the generated person(s) look like the original reference, the higher the rating. • Important: When judging identity similarity, ignore factors such as image quality, rendering artifacts, or general aesthetics. Focus only on how much the person(s) resemble the original reference(s). Also, try to assess resemblance to the ground truth image as a whole, rather than comparing to any single separate "reference person n." Copy-and-paste effect (excessive mimicry of the reference) • Generated images should resemble the original reference but should not be direct copies of an individual reference photo. • Evaluate whether the generated person appears to be directly copied from one of the reference images. Consider changes (or lack thereof) in expression, head pose and orientation, facial expression/demeanor, and lighting/shading. • The lower the degree of direct copying (i.e., the less it looks like a pasted replica), the better. Rank according to the amount of change observed in the person(s): more natural variation (less copy-paste) should be ranked higher. Prompt following • Does the generated image reflect the content and constraints specified by the prompt/instruction? • Rank images by prompt fidelity: the more faithfully the image follows the prompt, the higher the ranking. Aesthetics • Judge the overall visual quality and pleasantness of the generated image (e.g., smoothness of rendering, harmonious body poses and composition). • Rank images by aesthetic quality: higher perceived visual quality receives higher ratings. Full Prompts for Dataset Captioning (6 variants) System Prompt: You are an advanced vision-language model tasked with generating accurate and comprehensive captions for images. Prompt 1: Please provide a brief description of the image based on these guidelines: 1. Describe the clothing, accessories, or jewelry worn by the people in detail. 2. Describe the genders, actions, and posture of the individual in detail, focusing on what they are doing. 3. The description should be concise, with a maximum of 77 words. 4. Start with 'This image shows' Prompt 2: Offer a short description of the image according to these rules: 1. Focus on details about clothing, accessories, or jewelry. 2. Focus on the gender, activity, and pose, and explain what the people is doing. 3. Keep the description within 77 words. 4. Begin the description with 'This image shows' Prompt 3: Please describe the image briefly, following these instructions: 1. Provide a detailed description of the clothing or jewelry the person may be wearing. 2. Provide a detailed description of the two persons' gender, actions, and body position. 3. Limit the description to no more than 77 words. 4. Begin your description with 'This image shows' Prompt 4: Describe the picture briefly according to these rules: 1. Provide a detailed description of the clothing, jewelry, or accessories of the individuals. 2. Focus on the two persons' gender, what they are doing, and their posture. 3. Keep the description concise, within a limit of 77 words. 4. Start your description with 'This image shows' Prompt 5: Provide a short and precise description of the image based on the following guidelines: 1. Describe what the person is wearing or any accessories. 2. Focus on the gender, activities, and body posture of the person. 3. Ensure the description is no longer than 77 words. 4. Begin with 'This image shows' Prompt 6: Briefly describe the image according to these instructions: 1. Provide a precise description of the clothing, jewelry, or other adornments of the people. 2. Focus on the person's gender, what they are doing, and their posture. 3. The description should not exceed 77 words. 4. Start with the phrase 'This image shows' Modified Prompt for OmniContext Evaluation (Face Identity Focus) Rate from 0 to 10: Task: Evaluate how well the facial features in the final image match those of the individuals in the original reference images, as described in the instruction. Focus strictly on facial identity similarity; ignore hairstyle, clothing, body shape, background, and pose. Scoring Criteria • 0: The facial features are completely different from those in the reference images. • 1-3: The facial features have minimal similarity with only one or two matching elements. • 4-6: The facial features have moderate similarity but several important differences remain. • 7-9: The facial features are highly similar with only minor discrepancies. • 10: The facial features are perfectly matched to those in the reference images. Pay detailed attention to these facial elements: • Eyes: Shape, size, spacing, color, and distinctive characteristics of the eyes and eyebrows. • Nose: Shape, size, width, bridge height, and nostril appearance. • Mouth: Lip shape, fullness, width, and distinctive smile characteristics. • Facial structure: Cheekbone prominence, jawline definition, chin shape, and forehead structure. • Skin features: Distinctive marks like moles, freckles, wrinkles, and overall facial texture. • Proportions: Overall facial symmetry and proportional relationships between features. Example: If the instruction requests combining the face from one image onto another pose, the final image should clearly show the same facial features from the source image. Important: • For each significant facial feature difference, deduct at least one point. • Ignore hairstyle, body shape, clothing, background, pose, or other non-facial elements. • Focus only on facial similarity, not whether the overall instruction was followed. • Scoring should be strict high scores should only be given for very close facial matches. • Consider the level of detail visible in the images when making your assessment. Editing instruction:
2510.14969
LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Yiming Wang2,⋆,♠, Da Yin1,⋆,♠,♡, Yuedong Cui1,⋆, Ruichen Zheng1,⋆ Zhiqian Li1, Zongyu Lin1, Di Wu1, Xueqing Wu1, Chenchen Ye1, Yu Zhou1, Kai-Wei Chang1,♡ 1UCLA 2Harvard University ⋆Co-First Authors ♠Co-LeadAlphabetical Order ♡Equal Advise Digital agents require diverse, large-scale UI trajectories to generalize across real-world tasks, yet collecting such data is prohibitively expensive in both human annotation, infra and engineering perspectives. To this end, we introduce UI-Simulator, a scalable paradigm that generates structured UI states and transitions to synthesize training trajectories at scale. Our paradigm integrates an LLM-based digital world simulator for diverse UI states, a guided rollout process for coherent exploration, and a trajectory wrapper that produces high-quality and diverse trajectories for agent training. We further propose UI-Simulator-Grow, a targeted scaling strategy that enables more rapid and data-efficient scaling by prioritizing high-impact tasks and synthesizes informative trajectory variants. Experiments on WebArena and AndroidWorld show that UI-Simulator rivals or surpasses open-source agents trained on real UIs with significantly better robustness, despite using weaker teacher models. Moreover, UI-Simulator-Grow matches the performance of Llama-3-70B-Instruct using only Llama-3-8B-Instruct as the base model, highlighting the potential of targeted synthesis scaling paradigm to continuously and efficiently enhance the digital agents. Date: Oct 16, 2025 Code Repository: https://github.com/WadeYin9712/UI-Simulator Website: https://ui-simulator.notion.site/llms-as-scalable-digital-world-simulator Model Weights & Datasets: https://huggingface.co/UI-Simulator Contact: da.yin9712@gmail.com, w10y20ming@gmail.com 1. Introduction Large Language Models (LLMs) have emerged as the backbone of digital agents that follow user instructions and interact with diverse User Interface (UI) environments to accomplish complex tasks, such as daily web and mobile navigation (Deng et al., 2023, Koh et al., 2024, Zhou et al., 2024) and computer use tasks (Xie et al., 2024). A persistent bottleneck in training LLMs to become strong digital agents is the scarcity of large-scale, high-quality UI environment training trajectories. Collecting such data demands extensive human effort: for instance, Xie et al. (2024) report that designing 360+ realistic computer-use tasks, which usually involve long, complex sequences of UI actions, requires more than 1,800 human hours. This cost severely limits the scalability of agent development and has sparked interest (Ou et al., 2024, Murty et al., 2024, Sun et al., 2024, Pahuja et al., 2025) in automatic synthesis of training trajectories. When applying the automatic trajectory synthesis, what factors could significantly impact performance of the trained agent policies across different UIs? Motivated by Cobbe et al. (2020) and Kimi-K2 (Kimi et al., 2025), we argue that environment diversity would be a chief component, as exposing an agent to a wide variety of UI environments would increase its robustness and generalizability to unfamiliar tasks at test time. However, from infra and engineering aspects, deploying parallel real UI environments faces severe bottlenecks due to arXiv:2510.14969v1 [cs.CL] 16 Oct 2025 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training 🌍 UI-Simulator: LLMs as Scalable Simulators For Digital Agent Training 💽 LLMs internally learns to simulate UI digital world! Structural Code UI Frontend Code Procedural Knowledge LLM Pre-Training Corpus LLM World Simulator Guided Rollout Process Trajectory Wrapper 🌍 UI-Simulator Performance Highlight 7/8B-size open-source superior performance Better performance when using simulated envs Synatra NNetNav 🌍 UI-Simulator 🌍 UI-Simulator Stronger when scaling Traj. From Real-World Env. vs. Traj. # World Model Simulate Action: Click [1412] Control Control Add Controls to Sample Diverse, Valid Traj. Action Action … … Overall User Task Instruction Conditioned Step Thoughts Wrap 📈 UI-Simulator-Grow Performance Highlight Competitive performance with 70B-size models Better performance while with higher data efficiency Improve more rapidly than standard scaling Given UI-Simulator’s flexibility in generating diverse UI states, can we strategically synthesize data to accelerate LLM agent improvement? Llama3-70B 📈 UI-Simulator-Grow Traj. # 📈 UI-Simulator-Grow 🌍 UI-Simulator 📈 UI-Simulator-Grow 🌍 UI-Simulator (More Traj.) vs. Figure 1: Overview and performance highlights of UI-Simulator and UI-Simulator-Grow. high resource demands, network instability, and the lack of native distributed support (Lai et al., 2025). We notice that world models which model the environment states and their transitions (Munro, 1987, Ha and Schmidhuber, 2018) may offer a promising solution. If the world models enable the generation of diverse synthetic UI states, it will allow digital agents to immerse themselves in more diverse UI scenarios, enable richer rollouts and achieve stronger generalization to unseen apps and layouts. How can such digital world models be constructed? We argue that digital world models can be built on LLMs, as pre-training on front-end code and procedural knowledge makes them well-suited to synthesize realistic UI states and transitions triggered by user actions. In this paper, we introduce UI-Simulator, a scalable UI trajectory synthesis paradigm for training digital agents powered by an LLM-based digital world simulator. Given summaries of prior UI states and a next action, our digital world simulator generates future UI states in a hierarchical format without any additional fine-tuning. Each UI state encodes textual content, spatial coordinates, and dynamic attributes (e.g., focus status), organized into an accessibility tree structure that captures hierarchical relationships among regions and elements. To collect high-quality trajectories with UI-Simulator, we run a step-wise guided rollout process where a teacher agent explores UIs generated by the world simulator under step-wise task control that prevents incoherent actions and promotes diverse, context-grounded behavior conditioned on prior actions and current state. Finally, a trajectory wrapper turns the rollouts into available training trajectories with user instructions, ground-truth UI actions, and step-wise reasoning. Beyond simply blindly scaling up trajectory sizes with UI-Simulator scaling, we explore how to strategically and efficiently synthesize data to accelerate LLM agent improvement. We introduce UI-Simulator-Grow, a targeted scaling paradigm that achieves faster gains using fewer but more contributive trajectories. At each iteration, UI-Simulator-Grow selects target tasks that offer the greater learning potential based on teacher-forcing loss signals from dynamically constructed validation sets, and synthesizes diverse trajectory 2 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training variants to guide the next training iteration. We evaluate UI-Simulator on two widely used benchmarks, WebArena (Zhou et al., 2024) and An- droidWorld (Rawles et al., 2024), which cover web and mobile UI domains. UI-Simulator achieves very competitive performance among open-source agents of comparable model size. Notably, UI-Simulator is solely used to synthesize training resources with a weaker teacher model, GPT-4o-mini, whereas prior methods rely on the more powerful GPT-4o teacher model. We also find that UI-Simulator yields greater robustness than other baselines when evaluated on perturbed UI environments, and higher adaptability given a limited amount of experience on test environment. Moreover, UI-Simulator even outperforms the variants trained directly on real downstream environments using the same trajectory synthesis pipeline. Further, our targeted scaling paradigm UI-Simulator-Grow using only Llama-3-8B-Instruct base model matches Llama-3-70B-Instruct, and drives steeper performance gains on WebArena using only 66% of the original training trajectories, demonstrating significantly improved data efficiency. 2. Related Works World Models. Extensive prior work has explored learning dynamics models and using them to train decision-making policies (Werbos, 1987, Munro, 1987, Ha and Schmidhuber, 2018). Recently, the structural consistency of videos across tasks, environments, and embodiments has fueled progress in large-scale video pretraining for world models (Hafner et al., 2020, OpenAI, 2024, Parker-Holder et al., 2024). LLMs have also emerged as potential world models due to their rich encoding of physical and textual knowledge from massive corpora. Hao et al. (2023), Gu et al. (2024), Chae et al. (2025) explore the use of LLMs as both reasoning agents and inference-time world models that proactively identify optimal actions. In our paper, we emphasize more scalable, high-quality UI trajectory synthesis and investigates the broader potential of digital world simulation for agent training. While prior work such as Fang et al. (2025), Gao et al. (2025) also utilizes LLMs as world models for agent training, our approach emphasizes greater efficiency of building digital simulators. Instead of training a dedicated world model, which can be costly due to the need for large-scale data, we directly leverage the LLM’s prior knowledge, requiring little to no experience from downstream task environments. More importantly, we focus on scaling perspective and study the targeted scaling paradigm that efficiently synthesizes the most contributive trajectories at each iterations, enabling faster agent improvement with significantly fewer resources. Synthetic Data for Digital Agent Training. To overcome the bottleneck of limited high-quality human- annotated UI trajectories, recent efforts focus on scalable synthesis of training data for digital agents. Synatra (Ou et al., 2024) and AgentTrek (Xu et al., 2025) address this challenge by converting indirect human-readable knowledge (e.g., web tutorials and manuals) into direct task demonstrations, enabling large-scale supervision without manual annotation. NNetNav (Murty et al., 2024), OS-Genesis (Sun et al., 2024), InSTA (Trabucco et al., 2025), and Explorer (Pahuja et al., 2025) adopt unsupervised approaches that autonomously explore real-world websites and retroactively annotate action sequences with suitable instructions. Different from these methods, which rely solely on prior experience in real digital environments, UI-Simulator leverages an LLM-based digital world model to simulate diverse, plausible, and previously unseen UI states which enable broader generalization and more robust agent training. 3. Digital World Models in UI-Simulator In this section, we introduce how to build a digital world simulator fueled by LLMs in UI-Simulator trajectory synthesis paradigm. The simulator construction process can be applied to a variety of digital and even non-digital agent tasks. 3 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training LLM World Simulator Predict Next State Without Any References Action: Click [1412] 🌍 Retrieval-Free Simulator 🌍 Retrieval-Augmented Simulator Predict Next State With Limited Amount of Prior Experience in Test Env. Retrieve The Most Relevant UI for Next State Synthesis Few Prior Experience Generate Creative Next State Generate Creative Next State Action: Click [1412] Figure 2: Overall process of how the retrieval-free/-augmented simulators predict the next UI state. 3.1. Formulation We consider the task of simulating UI environment dynamics with LLMs to support agent training. LLMs can serve as the foundation of these simulators. Most digital UI environments, including web, mobile, and computer, can be represented as structured textual accessibility trees. Pre-training on front-end code and procedural knowledge makes LLMs suitable as a backbone model to synthesize reasonable UI states and state transitions triggered by user actions. Let st denote the full UI environment state at timestep t, ot be the corresponding observation visible to the agent, and at be the corresponding action taken by the agent. Each element e ⊂st is associated with a bounding box bbox(e) = (xe min, xe max, ye min, ye max), representing its position on the page. The environment dynamics are governed by a transition function st+1 = T (st, at), where T is either the LLM used for digital world simulator MLLM or rule-based transition function. The agent then receives a new observation ot+1, computed by extracting the visible UI elements from st+1 based on the positions. Specifically, in terms of receiving the observation ot from the state st at timestep t, let the viewport at this timestep be, Vt = [x0, x1] × [y0, y1]. The observation at step t is then calculated by, ot = {e ∈st | bbox(e) ∩Vt ̸= ∅} , i.e., capturing the set of elements whose bounding boxes intersect with the viewport region. 3.2. (Retrieval-Free) Simulation To bridge the gap between our digital world simulation and real-world UI transition, we design a hybrid approach that considers rule-based and model-based transitions. Concretely, for most UI actions, our framework empowers the LLM MLLM to generate realistic and diverse next-state transitions, serving as the core engine behind the simulator’s ability to produce valid and imaginative UI states. The transition follows a multi-step pipeline that guides the world simulator to anticipate outcomes, infer coherent and diverse next states, and render them into a structured format. At each step, we apply few-shot Chain-of-Thought (CoT) prompting to the LLMs: 4 in-context examples are used for predicting the overview, and 1 example is used for each of the other two steps. 4 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Predict an Overview of the Next State. The first step in modeling the effect of an action is to generate a high-level overview of the next state, conditioned on the current state and the selected action. For example, if the current state is a shopping website and the current action is typing sneakers into the search box and presses enter, the predicted overview would be, “a search results page for the keyword sneakers”. Generate Rich Draft in Natural Language. Based on the predicted overview, the LLM generates diverse and semantically rich content in natural language to populate the simulated webpage. The output of this step is intentionally unstructured and unconstrained by a fixed format, which encourages expressiveness and richness. The generated draft includes detailed descriptions of each element’s attributes, such as the element’s tag and content description, but without position information. For instance, a draft for a Reddit thread page might contain structural sections such as a heading section and a navigation section, as well as informative sections including the thread title, interaction area (e.g., upvote and comment buttons), and the main body of the post. This organization helps produce realistic and contextually rich simulated UI states in the next step. Convert Unstructured Draft into Structured Format. We treat the LLM as a style transfer model that converts the unstructured natural language draft into a well-defined structured format, which can be directly used as training states in agent trajectories. During this process, coordinates are also automatically assigned to each UI element to complete the specification of st+1. Note that some actions do not result in a completely new page but rather alter the view of the current state, e.g., a scroll action reveals content initially off-screen. To simulate deterministic actions with relatively fixed outcomes, we adopt rule-based transitions. See Appendix B for details. 3.3. Retrieval-Augmented Simulation A common and realistic way to evaluate agent capability is by assessing how quickly it can adapt to a new test environment after limited experiences. Beyond the setting where no prior knowledge about the test environment is available, we also consider scenarios where a small amount of the test environment experience is known. For this scenario, we also introduce retrieval-augmented simulation, which conditions UI generation on limited experience from the test target environment. Compared to relying solely on the LLM world simulator’s internal knowledge, this allows the world simulator to generate UI environments that not only resemble the target domains but also support a diverse range of tasks grounded in those environments. Formally, we first construct an offline retrieval corpus of N state transitions from the test environment, denoted as, D = {︁(︀˜o(i) t , H(i) t , ˜o(i) t+1, ˜s(i) t , ˜s(i) t+1 )︀}︁N i=1 , where ˜o(i) t and ˜o(i) t+1 are the observations before and after an action in the downstream test environment; ˜s(i) t and ˜s(i) t+1 are their corresponding UI states; and H(i) t denotes the action history up to timestep t. During UI-Simulator paradigm, when simulating the next UI state after a given action, we query this offline retrieval corpus with the current observation–action history pair (ot, Ht). A retriever then returns the most relevant observation ˜oret and corresponding state ˜sret from the corpus D. The transition can be modelled as st+1 = MLLM(st, at, sret ), where MLLM is prompted with both the current interaction context and the retrieved state sret, grounding the simulation in prior experience while still allowing the creation of novel, coherent UI states. The key distinction from retrieval-free simulation is the incorporation of the retrieved state sret into the simulation process. 5 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training In practice, we employ a hybrid retrieval pipeline over D to retrieve the transition that is most semantically similar to the current trajectory simulation. The retrieval process proceeds in three stages: First, a coarse filtering step is performed using BM25 ranking, where the action history serves as the query, to retrieve the transitions with very similar action histories. Next, we use current action histories as the query again to further narrow down the more relevant transitions from the transitions stored in D by utilizing GPT-4o as the semantic retriever, which captures deeper semantic similarities with the current action history queries. Finally, we construct a composite retrieval key that incorporates both the current state and action history, and apply BM25 again to select the most relevant transition. Despite the small size of D, this hybrid strategy still improves the consistency and realism of the generated UI states. 4. Scalable Synthetic Training Trajectory Collection in Simulated World In this section, we detail how LLMs are used to autonomously and continuously explore the digital world simulated by the LLM-based world model, generating high-quality training trajectories through guided rollouts and a final trajectory wrapper. 4.1. Overview and Formulation We formulate our data collection process in two stages. The first stage is an instruction-free rollout process, where a teacher agent interacts with the LLM-based digital world simulator to generate synthetic trajectories without conditions on any predefined user instruction G. This goal-free setup allows more flexible and executable trajectory synthesis unconstrained by specific task types. At each timestep t, given the current environment state st, observation ot, and prior action history Ht = [a0, a1, . . . , at−1], the teacher agent MTeacher samples the next action as at ∼MTeacher(ot, Ht). The environment then transitions to the next state st+1 via the world simulator LLM MLLM prediction and deterministic static rules. The teacher continues the rollout until it determines that a semantically coherent task has been completed. In the end, we retrospectively derive a user task instruction G that summarizes the intent underlying the completed trajectory. Each collected trajectory is then represented as τ = [o0, a0, o1, a1, . . . , oT, aT], where T is the trajectory length. Scaling the collection methods across multiple environments and teacher rollouts yields a large, diverse training dataset for downstream UI agent policy learning. However, this procedure still leaves two critical questions: 1) How can we ensure both diversity and validity of the trajectories generated without explicit user instructions? 2) How can we generate plausible and goal-consistent user instructions G that accurately reflect the completed trajectories? To this end, we propose a step-wise guided rollout process and a final trajectory wrapper to address these challenges. 4.2. Step-Wise Guided Rollout Process We notice that without proper guidance for each rollout step, LLMs often exhibit biased behavior, leading to homogeneous tasks and trajectories. To mitigate the bias and increase the diversity and quality of our training set, we introduce a step-wise guided rollout process which proposes task controls to encourage exploration towards diverse yet reasonable directions. The pipeline involves the following steps (See more details in Appendix G): Step-Wise Task Control Proposal. At first we prompt the teacher agents to envision common daily tasks users might perform based on the initial state, regarded as first-step task control. Specifically, given an initial state s0, we prompt the MTeacher to propose a high-level task description, referred to as the mentioned task control, c0 = MTeacher(s0). For example, if s0 is the home page of a shopping website, some examples of c0 are “Search for a certain product” or “Navigate to my account page”. When the actions related to first-step 6 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training control were finished, the second-step control is updated based on the current observation, and this process continues iteratively. In general, suppose the trajectory just reaches its t-th step, under the i-th control. We define a boolean function Done(ci) ∈{True, False} that indicates whether the current control ci has been completed, which is judged by MTeacher. The control update rule is given by: ci = MTeacher(st, ci−1 ), if Done(ci−1) = True. For example, after the Done function verified that the first-step control “Navigate to my account page” on the shopping website is completed, the proposed next-step control on the new My Account page in the shopping domain, could be generated as “Check out my order history”, “Modify my address information”, etc. This iterative goal proposal enables complex tasks to be compiled based on semantically meaningful sub-goals. Thought & Action Generation. Under the current step’s task control ci and prior rollout history Ht, the teacher agent MTeacher is also prompted to produce its internal reasoning thought rt, along with an action at and the corresponding step summary ht in a CoT manner. Each thought provides a justification or plan for why the action is appropriate, and is recorded in the rollout history along with the step summary and action, rt, at, ht ∼MTeacher(ot, ci, Ht), Ht+1 = Ht ∪[rt; at; ht], where ; indicates concatenation. To avoid endless rollouts, we allow MTeacher to autonomously decide when to terminate the trajectory. That is, the agent will generate a STOP action if it considers the task as completed, based on the current state and task control. 4.3. Trajectory Wrapper The trajectory wrapper is designed to transform raw rollout trajectories into high-quality instances by inferring valid user instructions and reconstructing a coherent, step-by-step reasoning process. Since the rollout process is initially not guided by an explicit user instruction, in our trajectory wrapping process, we first use a task summarizer to condense the agent’s actions into a concise description of what was accomplished and then further convert it as the final user instruction for the entire trajectory, denoting as G. To align the trajectory’s reasoning with this synthesized instruction, we then ask the teacher agent MTeacher to rewrite and refine their thoughts, ensuring they are well-conditioned on the newly generated instruction and reflect the agent’s internal decision-making. Besides, reasoning ability is often a critical component in agent tasks, e.g., tasks like “Tell me the most recent canceled order in the order history.” To support this capability, we allow MTeacher to insert intermediate reasoning thoughts when it deems such reasoning necessary or beneficial for conducting information query or analysis in the current UI state. In the end, we filter out low-quality trajectories based on the criteria: 1) actions must target valid elements and lead to meaningful state changes; and 2) the action mentioned in each step’s reasoning should match the action actually taken. 5. UI-Simulator-Grow: UI-Simulator-Powered Targeted Scaling In §4, we introduced UI-Simulator for synthesizing diverse training trajectories. Rather than blindly increasing trajectory volume, we also explore how to strategically and efficiently scale to accelerate agent im- provement. We propose UI-Simulator-Grow, a UI-Simulator–empowered targeted scaling paradigm that achieves faster gains with fewer synthesized trajectories. UI-Simulator-Grow iteratively identifies which tasks, if synthesized at the current stage, would most effectively enhance agent performance, and generates diverse trajectories for those tasks and their variants in preparation for the next training phase. In the first iteration, UI-Simulator-Grow collects an initial batch of trajectories following the procedure in §4 as UI-Simulator. In subsequent iterations, it automatically 7 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training selects target tasks based on dynamically updated validation signals, synthesizes relevant trajectories, and applies continual learning to ensure steady performance gains. We introduce this in detail as follows. Target Task Selection. Target tasks for the next training iteration must satisfy the criteria: The tasks must be neither trivial for the current agent to solve, nor beyond the agent’s present capabilities. Tasks the agent is already good at will offer limited learning signal, while tasks that are too hard may not lead to meaningful progress. We identify such target tasks by measuring the teacher-forcing loss of a teacher agent MTeacher on the current validation set. Specifically, for each step, we treat the teacher’s prediction as ground truth, compute the cross-entropy loss against the student agent’s prediction, and average the loss over all steps for the tasks. Tasks are then ranked by loss, and those within the 25%–75% percentile range are selected as targets to filter excessively easy or hard tasks. See more cases in Appendix 7.3. As the agent improves after each iteration, the validation set is also updated to reflect the agent’s evolving capabilities. In the first iteration, it is an independent batch of tasks synthesized in the same way as the initial training set. For later iterations, the validation set is composed entirely of a split from the newly synthesized data for the upcoming iteration. It aims to promote continual improvement and prevent future iteration evaluation from overfitting to earlier iterations. Synthesizing Diverse Target Task Trajectory Variants. After identifying target tasks that can effectively challenge the agent, we synthesize additional trajectories focused on those tasks. One strategy we adopted is lightweight task rewriting, where the task instruction is slightly modified without changing its core structure or logic. The corresponding environment states, thoughts, and actions are adjusted accordingly, while preserving the overall reasoning flow. For example, a selected task like “search running shoes” might be rewritten as “search slippers”. Since the task logic remains consistent, the UI states and actions (e.g., entering a query, clicking a result) are similarly structured. We prompt the LLM to maintain the task’s action types and flow, modifying only content-specific elements in the UI states such as product names. This ensures meaningful variation while preserving alignment with the agent’s learning objectives. Continual Learning. As UI-Simulator-Grow continuously incorporates new synthesized training trajectories through iterations, a key challenge is adapting the agent policies without forgetting. We address this with continual learning (Biesialska et al., 2020), focusing on replay methods, a widely used technique that revisits selected tasks from prior training stages. Following Dynosaur (Yin et al., 2023), we adopt a replay strategy that selects the most representative tasks from the previous iteration. Given N tasks from the prior iteration, we use Sentence Transformer (Reimers and Gurevych, 2019) based on RoBERTa-large (Liu et al., 2019) to encode task instructions into a matrix Ip ∈ RN×d, where d is the embedding dimension. We then compute cosine similarities: Spp = cos _sim(Ip, Ip) ∈ RN×N. Tasks with the top row sums in Spp are representative and selected for replay. 6. Experiments 6.1. Experimental Setup Evaluation Benchmarks. We evaluate the LLM agents trained with UI-Simulator on two benchmarks across web and mobile domains: WebArena, which contains 812 complex yet realistic web navigation tasks; and AndroidWorld, which consists of 116 challenging daily mobile usage tasks. We report the success rate (SR) across all tasks. The temperature for model inference is set to 0.6; preliminary experiments suggest that varying the temperature does not significantly affect performance. Note that for a fair comparison, 8 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Table 1: Overall success rate (SR) on the WebArena and AndroidWorld test sets. << indicates methods with substantially less exposure to the real downstream test environments. Models Teacher Agents Train Under Real Env.? WebArena SR (%) AndroidWorld SR (%) Base Open-Source LLMs and Proprietary LLMs Llama-3-8B-Instruct - ✗ 2.34 - CodeLlama-34B-Instruct - ✗ 4.06 - Lemur-chat-70B - ✗ 5.30 - Llama-3-70B-Instruct - ✗ 7.02 - Gemini Pro - ✗ 7.12 - Qwen-1.5-72B-Instruct - ✗ 7.14 - Qwen-2.5-7B-Instruct - ✗ 3.94 0.0 Qwen-2-VL-7B - ✗ - 5.0 Qwen-2-VL-72B - ✗ - 5.0 Gemma-2-27B - ✗ - 9.5 GPT-4o - ✗ 13.10 11.7 Digital Agent Training Data Synthesis Baselines AgentFlan N/A ✓ 4.68 - NNetNav Llama-3.1-70B ✓ 4.80 - Synatra GPT-4-turbo ✓ 6.28 - OS-Genesis GPT-4o ✓ 6.16 9.1 GUIMid (Post-Train) N/A ✓ 6.20 9.0 UI-Simulator-Series Variants UI-Simulator-F GPT-4o-mini ✗ 6.28 8.6 UI-Simulator-R GPT-4o-mini ✓(<<) 6.40 12.9 UI-Simulator-Grow-R GPT-4o-mini ✓(<<) 7.14 13.4 we reproduce and evaluate those methods under the original WebArena evaluation settings1, instead of BrowserGym or any lite versions. Digital World Simulation, Trajectory Collection and Agent Training. We use GPT-4o-mini for both state simulation and guided rollout. For WebArena, we train our digital agents and baseline agents using Llama-3-8B-Instruct as the base model. For AndroidWorld, we choose to use Qwen-2.5-7B-Instruct due to its extended context length support beyond 8192 tokens, a common requirement in AndroidWorld which exceeds the maximum context length of Llama-3-8B-Instruct. More details are discussed in Appendix C and D. 6.2. Overall Performance Results are presented in Table 1. We denote the UI-Simulator variants without and with retrieval- augmented simulation as UI-Simulator-F and UI-Simulator-R, respectively. UI-Simulator-F. We observe that even without exposure to real-world test environments, training solely on LLM-simulated environments can significantly enhance its base model’s performance. This is particularly evident on AndroidWorld, where the success rate increases from 0% to 9%. UI-Simulator-F even outperforms OS-Genesis on WebArena, which is trained using trajectories synthesized directly from 1The same as https://github.com/web-arena-x/webarena. 9 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training the WebArena test environments. These show that LLMs possess sufficient knowledge to generate reliable and coherent digital environment simulations, offering a promising alternative when real test environments involve high latency or are difficult to access. UI-Simulator-R vs. Larger & Proprietary Models. We observe that UI-Simulator-R performs on par with Gemini-Pro on WebArena, as well as GPT-4o on AndroidWorld, despite being built on a much smaller 8B-scale LLM. This highlights the strong generalization capability of UI-Simulator, even with limited exposure to the target environment. UI-Simulator vs. Open-Source Agent Traj. Synthesis Baselines. UI-Simulator-R surpasses OS- Genesis, which relies on the stronger GPT-4o teacher to generate training trajectories within test envi- ronments, while even UI-Simulator-F achieves superior performance on AndroidWorld despite being trained only with trajectories from the weaker GPT-4o-mini. These results highlight the potential of UI-Simulator when paired with stronger teacher agents. Moreover, unlike NNetNav and OS-Genesis, which generate synthetic training data through extensive unsupervised interaction with the test environ- ments, UI-Simulator-R restricts the environment exposure to a much smaller scope. Despite this, it still outperforms NNetNav and OS-Genesis by 2.2% and 0.9% on WebArena, and surpasses OS-Genesis by 3.8% on AndroidWorld, demonstrating the effectiveness of our simulation-driven approach in enabling rapid adaptation to test environments. 7. Analysis In this section, focusing on WebArena, we conduct a comprehensive analysis to evaluate the advantages and potential of the UI-Simulator framework. We present a series of training experiments alongside qualitative studies to examine each core component of UI-Simulator and illustrate how UI-Simulator-Grow helps effective scaling. More human evaluation on synthesized trajectory quality is discussed in Appendix E. 7.1. Ablation Study Agent Robustness Brought From UI-Simulator. Thanks to the flexibility of UI-Simulator in gener- ating diverse UI layouts, agents trained on its synthesized trajectories gain robustness to varied UI states. To test this, we perturb WebArena and AndroidWorld UI states by randomly shuffling layout structures while preserving UI content, ensuring validity and identical solutions. For a meaningful comparison with other strong baselines, we focus on comparison with the baseline OS-Genesis, whose performance is closest to ours on both datasets. We notice that UI-Simulator-F, which synthesizes UI states without referencing downstream environments, suffers the smallest performance change. Simulated Digital Environment vs. Real Test Environment. What happens if we collect similar number of training trajectories directly on the real test environments? Surprisingly, UI-Simulator can even outperform this strong baseline. We identify one key reason for this performance gap – the real test environments may not be able to consistently provide useful state transitions or cover diverse interaction scenarios. Many of the information query tasks involve a search step. For example, comparing the prices of two products does not explicitly mention the search operation, but it is a necessary step before the final comparison. If the search query keywords do not match any entries in the websites (e.g., no such product / place / repository / thread / ...), the environment may return Search not found, and the corresponding information query task trajectory quality would thus be bad; For tasks involving account pages or application settings, if the user is not logged in beforehand, those trajectories cannot be collected due to access restrictions; If we have only a few user accounts, the settings UI states we can access will be fairly homogeneous, which can hurt the agents’ ability 10 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training to generalize to diverse user configuration UI states. In contrast, both tasks can be easily synthesized scalably in the digital world model without any such constraints. This highlights the potential of UI-Simulator to go beyond the limitations of real environments by generating trajectories that are infeasible to obtain. UI-Simulator-R vs. OS-Genesis Sharing the Same Amount of Prior Experience on Test Environment. Both UI-Simulator-R and OS-Genesis have access to the test environment; however, OS-Genesis benefits from significantly more experience on the test environments. To assess them under equal conditions, we control for the amount of test environment experience and compare their performance. On both WebArena and AndroidWorld, we even find that UI-Simulator-R achieves around 4 and 2.5 times the performance of OS-Genesis, respectively, highlighting its ability to generate highly adaptive digital agents even with limited exposure to the real environment. Models WA (%) AW (%) UI-Simulator-F 6.28 8.6 Perturbed Env. 5.54 8.7 Synthesize in Real Env. 4.31 4.7 UI-Simulator-R 6.40 12.9 Synthesize in Real Env. 4.31 9.1 w/o Step-Wise Task Control 1.72 5.2 w/o Multi-Step Simulation 4.06 9.1 OS-Genesis 6.16 9.1 Perturbed Env. 4.43 8.7 Same # of Experience 1.48 5.2 Table 2: Ablation study on robustness, synthe- size trajectories from real test environment and utilization of the same amount of experience. WA and AW are WebArena and AndroidWorld in short. Rollout and Simulation Process Design. We ablate our rollout and simulation design, both key to synthe- sizing high-quality trajectories. To this end, we discuss more ablation study on the step-wise guided rollout pro- cess design to justify the effectiveness of our design. We first remove all step-wise task controls to collect a new set of training trajectories, aiming to assess whether the guided rollout process contributes to generating higher- quality data. From Table 2, we observe a performance drop of around 4.7% and 7.7% on WebArena and An- droidWorld following the removal of task controls. Upon closer inspection of the newly collected trajectories, we find that trajectory diversity suffers significantly. For a lightweight quantification of task diversity, we embed training tasks with RoBERTa-large, run PCA on the em- beddings, and take the PCA effective dimension at ≥90% explained variance. The higher the dimension numbers, the more diverse the tasks are. The effective dimension of the training data collected without step-wise task controls is 118, while adding the task control would increase it to 153, suggesting that the diversity is significantly enhanced. With a qualitative manual obser- vation, without the step-wise task controls as conditioning signals, the teacher agent tends to repeatedly sample the same one or two elements due to inherent model biases. This further highlights the importance and effectiveness of fine-grained, step-wise control in our trajectory collection process. We then replace multi-step simulation with single-step simulation to examine whether the simplified approach can still yield satisfactory simulations and benefit downstream tasks. As shown in Table 2, this modification results in a performance drop of approximately 2.4% and 3.8% on WebArena and AndroidWorld. Single- step simulation, though cost-saving, would always generate common, biased content, thereby harming the diversity and content richness of the collected training set. In contrast, we encourage the world model to output rich, diverse content by splitting the simulation into multiple steps, resulting in trajectories with higher quality. 7.2. UI-Simulator-Grow vs. Standard UI-Simulator Scaling 11 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training 0.8x (Iter 1) 1.4x (Iter 2) 2x (Iter 3) 0 5 10 15 20 25 Search / Info Query 0.8x (Iter 1) 1.4x (Iter 2) 2x (Iter 3) Account 0.8x (Iter 1) 1.4x (Iter 2) 2x (Iter 3) Post / Comment / Reply / Email 0.8x (Iter 1) 1.4x (Iter 2) 2x (Iter 3) Repo 0.8x (Iter 1) 1.4x (Iter 2) 2x (Iter 3) Subscribe Successful Task # Through UI-Simulator-Grow Scaling Figure 4: Successful task numbers across the 5 main task categories through the three iterations of the UI-Simulator-Grow scaling. 0.8x1x 1.4x 2x 3x Scaling Rate 4 5 6 7 8 WebArena SR (%) WebArena Performance Standard UI-Simulator Scaling UI-Simulator-Grow 0.8x1x 1.4x 2x 3x Scaling Rate 10 11 12 13 14 15 16 AndroidWorld SR (%) AndroidWorld Performance Standard UI-Simulator Scaling UI-Simulator-Grow Figure 3: The effect of standard scaling and UI- Simulator-Grow targeted scaling. We compare the UI-Simulator-Grow with the standard UI-Simulator scaling to examine which paradigm more effectively accelerates agent per- formance. For standard scaling, we use the full UI-Simulator-R training set split it into three equal parts, and emulate a 3-iteration scaling pro- cess by progressively adding one more split at each iteration. For UI-Simulator-Grow, the first it- eration uses the same initial split as in standard scaling. Subsequent iterations of UI-Simulator- Grow rely primarily on synthesized variants of target tasks selected from the last iteration, supplemented by portions of the remaining splits for constructing dynamic validation sets, instead of blindly adding more new generic trajectories as standard scaling does. Note that the process guarantees that UI-Simulator-Grow draws only from UI-Simulator-R and its generated variants, without introducing any external data beyond its scope for fair comparison. As shown in Figure 3, UI-Simulator-Grow yields a steeper performance improvement than standard scaling. Notably, by the third iteration, it matches the performance of Qwen-1.5-72B-Instruct and surpasses Llama-3-70B-Instruct. Moreover, UI-Simulator-Grow uses only 66% of the original UI-Simulator-R trajectories, demonstrating more efficient data utilization. Beyond overall success rates, we closely examine how performance improves under UI-Simulator-Grow. As shown in Figure 4, a consistent upward trend appears across most major WebArena task categories. Notably, for code repository operations, the final iterations of UI-Simulator-Grow demonstrate the ability to solve tasks that neither standard UI-Simulator scaling nor earlier iterations could handle. This highlights the paradigm’s potential to enable agents to tackle increasingly diverse and complex tasks. 7.3. Analysis on Targeted Task Selection in UI-Simulator-Grow Paradigm We describe how target tasks are selected for synthesizing new training trajectories in the next iteration of the UI-Simulator-Grow paradigm. Figure 5 illustrates the task selection process between the first and second UI-Simulator-Grow iterations for WebArena. We begin by ranking all tasks in the current validation set by their teacher-forcing loss, from smallest to largest. Tasks below the 25th percentile are 12 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Teacher-forcing loss on dynamic val set Search for Dolly de Leon's breakthrough performance. 0.17 Search for a public library on OpenStreetMap. Zoom in on the map. Add the product to the shopping cart. Show me the title of the submission made by ErenCloud. Find directions between The Getty Center and Griffith Park Tell me the subtotal of all items in the shopping cart View details of the Travel Planner project. Add 'SureThik 15g Hair Thickening Fiber (Black) and Holding Spray (3oz) Bundle' and 'Tweezers For Succulents Duo' to the wishlist Create a new forum titled "General Discussion" with the description "A place for general discussions on various topics," category "General," and tag "Discussion." Create and save a new marketing campaign titled "Summer Sale" with the description "Exciting discounts on summer products! Tell me how many issues were created by "Byte Blaze" that have been updated in the last year. Download project files What options do users have concerning updates about policy changes Update the product quantity to 2 0.19 0.30 0.26 0.27 0.31 0.37 0.33 0.51 0.53 0.43 0.41 0.59 0.52 0.71 … … … … … … … … … … 25th percentaile 75th percentaile … Selected target tasks for traj. synthesis in the next iteration … (a) Target task selection for web tasks. Teacher-forcing loss on dynamic val set Show me the version of the Retro Music application. 0.56 Record audio using the ‘Audio Recorder’ app. Read the ‘About’ section in the ‘Brocolli’ app Create a new folder named "My New Folder" with the "Markdown" folder type in the Markor app. Access saved locations in the OsmAnd map app. Tell me the status of the Airplane mode on the device. Add Rome as a new city in the Clock app. Clear the drawing canvas and undo the last action in 'Simple Draw Pro' app. Create a new event titled "Community Clean-Up Day" on April 22 at River Park with the description "Join us for a community clean-up at River Park." Set the playback speed to 1.5x in VLC Compare the content descriptions of the buttons related to deleting and switching input methods. Take a photo and adjust grid line settings in the Camera app Describe the common theme or type of cuisine represented by the recipes Add a new expense of $50 for the Water Bill under the Housing category with a note "City Water Services". What is the description of the recipe titled "Caprese Salad Skewers"? 0.57 0.63 0.61 0.62 0.64 0.71 0.67 0.79 0.82 0.74 0.72 0.83 0.81 0.85 … … … … … … 25th percentaile 75th percentaile … Selected target tasks for traj. synthesis in the next iteration … … (b) Target task selection for mobile tasks. Figure 5: Illustration of overall target task selection process. considered too easy or already well learned (e.g., zooming on a map or searching for a location) and are excluded from further synthesis. Conversely, tasks above the 75th percentile are often overly challenging or ambiguous, and are likewise excluded. The remaining middle range of tasks is chosen as the target set for the next training iteration. 8. Conclusions We introduced UI-Simulator, a scalable trajectory synthesis paradigm that uses LLM-based digital world simulators to synthesize diverse UI trajectories at scale through multi-step simulation, guided rollouts, and final trajectory wrapping. We further propose UI-Simulator-Grow, a targeted scaling paradigm that prioritizes high-impact tasks for more data-efficient continuous improvement. Experiments on WebArena and AndroidWorld show that UI-Simulator rivals or surpasses real-environment training despite using weaker teacher agents, while UI-Simulator-Grow achieves more rapid improvement trend than standard UI- Simulator scaling with only 66% training data and even matches 70B-scale models. Ablation study further 13 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training highlight the promise of simulation-driven trajectory synthesis as a more adaptive and robust approach for advancing digital agents. Beyond extending to other UI domains such as desktop, our future work envisions applying the world simulator and targeted scaling method to any environment representable in text. Motivated by NeuralOS (Rivard et al., 2025), we also envision to further extend the world simulator and scaling paradigm to the pixel level for narrowing the sim-to-real gap. References Magdalena Biesialska, Katarzyna Biesialska, and Marta R. Costa-jussà. Continual lifelong learning in natural language processing: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6523–6541, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics. doi: 10.18653/v1/2020.coling-main.574. URL https://aclanthology. org/2020.coling-main.574. Hyungjoo Chae, Namyoung Kim, Kai Tzu-iunn Ong, Minju Gwak, Gwanwoo Song, Jihoon Kim, Sunghwan Kim, Dongha Lee, and Jinyoung Yeo. Web agents with world models: Learning and leveraging environment dynamics in web navigation. In ICLR, 2025. Karl Cobbe, Chris Hesse, Jacob Hilton, and John Schulman. Leveraging procedural generation to benchmark reinforcement learning. In International conference on machine learning, pages 2048–2056. PMLR, 2020. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Sam Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. Advances in Neural Information Processing Systems, 36:28091–28114, 2023. Tianqing Fang, Hongming Zhang, Zhisong Zhang, Kaixin Ma, Wenhao Yu, Haitao Mi, and Dong Yu. Webevolver: Enhancing web agent self-improvement with coevolving world model. arXiv preprint arXiv:2504.21024, 2025. Yifei Gao, Junhong Ye, Jiaqi Wang, and Jitao Sang. Websynthesis: World-model-guided mcts for efficient webui-trajectory synthesis. arXiv preprint arXiv:2507.04370, 2025. Yu Gu, Kai Zhang, Yuting Ning, Boyuan Zheng, Boyu Gou, Tianci Xue, Cheng Chang, Sanjari Srivastava, Yanan Xie, Peng Qi, et al. Is your llm secretly a world model of the internet? model-based planning for web agents. arXiv preprint arXiv:2411.06559, 2024. David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018. Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. In ICLR, 2020. Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu. Reasoning with language model is planning with world model. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8154–8173, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.507. URL https://aclanthology.org/2023.emnlp-main.507/. Pin-Lun Hsu, Yun Dai, Vignesh Kothapalli, Qingquan Song, Shao Tang, Siyu Zhu, Steven Shimizu, Shivam Sahni, Haowen Ning, and Yanning Chen. Liger kernel: Efficient triton kernels for llm training. arXiv preprint arXiv:2410.10989, 2025. 14 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Kimi, Yifan Bai, Yiping Bao, Guanduo Chen, Jiahao Chen, Ningxin Chen, Ruijue Chen, Yanru Chen, Yuankun Chen, Yutian Chen, Zhuofu Chen, Jialei Cui, Hao Ding, Mengnan Dong, Angang Du, Chenzhuang Du, Dikang Du, Yulun Du, Yu Fan, Yichen Feng, Kelin Fu, Bofei Gao, Hongcheng Gao, Peizhong Gao, Tong Gao, Xinran Gu, Longyu Guan, Haiqing Guo, Jianhang Guo, Hao Hu, Xiaoru Hao, Tianhong He, Weiran He, Wenyang He, Chao Hong, Yangyang Hu, Zhenxing Hu, Weixiao Huang, Zhiqi Huang, Zihao Huang, Tao Jiang, Zhejun Jiang, Xinyi Jin, Yongsheng Kang, Guokun Lai, Cheng Li, Fang Li, Haoyang Li, Ming Li, Wentao Li, Yanhao Li, Yiwei Li, Zhaowei Li, Zheming Li, Hongzhan Lin, Xiaohan Lin, Zongyu Lin, Chengyin Liu, Chenyu Liu, Hongzhang Liu, Jingyuan Liu, Junqi Liu, Liang Liu, Shaowei Liu, T. Y. Liu, Tianwei Liu, Weizhou Liu, Yangyang Liu, Yibo Liu, Yiping Liu, Yue Liu, Zhengying Liu, Enzhe Lu, Lijun Lu, Shengling Ma, Xinyu Ma, Yingwei Ma, Shaoguang Mao, Jie Mei, Xin Men, Yibo Miao, Siyuan Pan, Yebo Peng, Ruoyu Qin, Bowen Qu, Zeyu Shang, Lidong Shi, Shengyuan Shi, Feifan Song, Jianlin Su, Zhengyuan Su, Xinjie Sun, Flood Sung, Heyi Tang, Jiawen Tao, Qifeng Teng, Chensi Wang, Dinglu Wang, Feng Wang, Haiming Wang, Jianzhou Wang, Jiaxing Wang, Jinhong Wang, Shengjie Wang, Shuyi Wang, Yao Wang, Yejie Wang, Yiqin Wang, Yuxin Wang, Yuzhi Wang, Zhaoji Wang, Zhengtao Wang, Zhexu Wang, Chu Wei, Qianqian Wei, Wenhao Wu, Xingzhe Wu, Yuxin Wu, Chenjun Xiao, Xiaotong Xie, Weimin Xiong, Boyu Xu, Jing Xu, Jinjing Xu, L. H. Xu, Lin Xu, Suting Xu, Weixin Xu, Xinran Xu, Yangchuan Xu, Ziyao Xu, Junjie Yan, Yuzi Yan, Xiaofei Yang, Ying Yang, Zhen Yang, Zhilin Yang, Zonghan Yang, Haotian Yao, Xingcheng Yao, Wenjie Ye, Zhuorui Ye, Bohong Yin, Longhui Yu, Enming Yuan, Hongbang Yuan, Mengjie Yuan, Haobing Zhan, Dehao Zhang, Hao Zhang, Wanlu Zhang, Xiaobin Zhang, Yangkun Zhang, Yizhi Zhang, Yongting Zhang, Yu Zhang, Yutao Zhang, Yutong Zhang, Zheng Zhang, Haotian Zhao, Yikai Zhao, Huabin Zheng, Shaojie Zheng, Jianren Zhou, Xinyu Zhou, Zaida Zhou, Zhen Zhu, Weiyu Zhuang, and Xinxing Zu. Kimi k2: Open agentic intelligence, 2025. URL https://arxiv.org/abs/2507.20534. Jing Yu Koh, Robert Lo, Lawrence Jang, Vikram Duvvur, Ming Lim, Po-Yu Huang, Graham Neubig, Shuyan Zhou, Russ Salakhutdinov, and Daniel Fried. VisualWebArena: Evaluating multimodal agents on realistic visual web tasks. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 881–905, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.50. URL https://aclanthology.org/2024.acl-long.50/. Hanyu Lai, Xiao Liu, Yanxiao Zhao, Han Xu, Hanchen Zhang, Bohao Jing, Yanyu Ren, Shuntian Yao, Yuxiao Dong, and Jie Tang. Computerrl: Scaling end-to-end online reinforcement learning for computer use agents. arXiv preprint arXiv:2508.14040, 2025. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. Paul Munro. A dual back-propagation scheme for scalar reward learning. In Ninth Annual Conference of the Cognitive Science Society, pages 165–176. Hillsdale, NJ. Cognitive Science Society Lawrence Erlbaum, 1987. Shikhar Murty, Hao Zhu, Dzmitry Bahdanau, and Christopher D Manning. Nnetnav: Unsupervised learning of browser agents through environment interaction in the wild. arXiv preprint arXiv:2410.02907, 2024. OpenAI. Sora system card. 2024. URL https://openai.com/index/sora-system-card/. Tianyue Ou, Frank F Xu, Aman Madaan, Jiarui Liu, Robert Lo, Abishek Sridhar, Sudipta Sengupta, Dan Roth, Graham Neubig, and Shuyan Zhou. Synatra: Turning indirect knowledge into direct demonstrations for 15 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training digital agents at scale. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Vardaan Pahuja, Yadong Lu, Corby Rosset, Boyu Gou, Arindam Mitra, Spencer Whitehead, Yu Su, and Ahmed Awadallah. Explorer: Scaling exploration-driven web trajectory synthesis for multimodal web agents. arXiv preprint arXiv:2502.11357, 2025. Jack Parker-Holder, Philip Ball, Jake Bruce, Vibhavari Dasagi, Kristian Holsheimer, Christos Kaplanis, Alexandre Moufarek, Guy Scully, Jeremy Shar, Jimmy Shi, Stephen Spencer, Jessica Yung, Michael Dennis, Sultan Kenjeyev, Shangbang Long, Vlad Mnih, Harris Chan, Maxime Gazeau, Bonnie Li, Fabio Pardo, Luyu Wang, Lei Zhang, Frederic Besse, Tim Harley, Anna Mitenkova, Jane Wang, Jeff Clune, Demis Hassabis, Raia Hadsell, Adrian Bolton, Satinder Singh, and Tim Rocktäschel. Genie 2: A large-scale foundation world model. 2024. URL https://deepmind.google/discover/blog/ genie-2-a-large-scale-foundation-world-model/. Christopher Rawles, Sarah Clinckemaillie, Yifan Chang, Jonathan Waltz, Gabrielle Lau, Marybeth Fair, Alice Li, William Bishop, Wei Li, Folawiyo Campbell-Ajala, et al. Androidworld: A dynamic benchmarking environment for autonomous agents. arXiv preprint arXiv:2405.14573, 2024. Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019. URL https://arxiv.org/abs/1908.10084. Luke Rivard, Sun Sun, Hongyu Guo, Wenhu Chen, and Yuntian Deng. Neuralos: Towards simulating operating systems via neural generative models. arXiv preprint arXiv:2507.08800, 2025. Qiushi Sun, Kanzhi Cheng, Zichen Ding, Chuanyang Jin, Yian Wang, Fangzhi Xu, Zhenyu Wu, Chengyou Jia, Liheng Chen, Zhoumianze Liu, et al. Os-genesis: Automating gui agent trajectory construction via reverse task synthesis. arXiv preprint arXiv:2412.19723, 2024. Brandon Trabucco, Gunnar Sigurdsson, Robinson Piramuthu, and Ruslan Salakhutdinov. Towards internet- scale training for agents. arXiv preprint arXiv:2502.06776, 2025. Paul J Werbos. Learning how the world works: Specifications for predictive networks in robots and brains. In Proceedings of IEEE International Conference on Systems, Man and Cybernetics, NY, 1987. Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, et al. Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments. arXiv preprint arXiv:2404.07972, 2024. Yiheng Xu, Dunjie Lu, Zhennan Shen, Junli Wang, Zekun Wang, Yuchen Mao, Caiming Xiong, and Tao Yu. Agenttrek: Agent trajectory synthesis via guiding replay with web tutorials. In ICLR, 2025. Da Yin, Xiao Liu, Fan Yin, Ming Zhong, Hritik Bansal, Jiawei Han, and Kai-Wei Chang. Dynosaur: A dynamic growth paradigm for instruction-tuning data curation. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4031–4047, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/ 2023.emnlp-main.245. URL https://aclanthology.org/2023.emnlp-main.245/. Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, et al. Webarena: A realistic web environment for building autonomous agents. In International Conference on Learning Representations (ICLR), 2024. 16 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Appendix A. Action Spaces in UI Simulator As shown in Table 3 and 4, we summarize the supported action spaces for WebArena and AndroidWorld, which span the common UI interactions (e.g., click, type, scroll). Table 3: Action space of WebArena environment. Action Description click [id] Click the element with the given ID. type [id] [content] [0|1] Type content into the text box with the specified ID; press Enter if the last argument is 1. hover [id] Hover over the element with the given ID (often to trigger popups or dropdowns). press [key_comb] Press a keyboard combination (e.g., ctrl+c). scroll [direction] Scroll the page in the specified direction: up, down, left, or right. go_forward Go forward to the next page (only after a prior go_back). go_back Go back to the previous page. new_tab Open a new empty browser tab. tab_focus [index] Switch focus to the tab at the given index. close_tab Close the current tab. goto [url] Navigate to the specified URL. Table 4: Action space of AndroidWorld environment. Action Description click [id] Tap the element with the given ID on the current screen. open_app [app_name] Launch the app with the specified name. input_text [id] [content] Enter content into the text box identified by id using the keyboard. keyborad_enter Press the Enter key. scroll [direction] Scroll the screen in the specified direction: up, down, left, or right. navigate_back Go back to the previous screen. navigate_home Return to the app’s home screen. wait Remain idle for a short duration. B. Details of Rule-Based Transition As discussed in §3.2, certain actions in the action space (e.g., Type, Scroll) involve deterministic state transitions. To better simulate this, we incorporate rule-based transitions that enhance realism in the simulation. The details of these rule-based transitions are introduced below. Type Action. This is the most straightforward case. The simulator updates the target element by appending the typed content into its content_description attribute. Scroll Action. The simulator will keep on the same state after taking a Scroll action, i.e., st+1 = st. The simulator maintains a scroll offset (xoffset, yoffset) ∈R2, which determines the visible region of the UI along with the window dimensions (wwin, hwin). Here we take (wwin, hwin) = (2400, 1080). 17 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Initially, the scroll offset is set to (xoffset, yoffset) = (0, 0). When a Scroll action is performed, the scroll offset is updated as follows: (xoffset, yoffset) ←(xoffset + ∆x, yoffset + ∆y), where (∆x, ∆y) are the fixed scroll displacements in horizontal and vertical directions, respectively. For instance, scroll [down] corresponds to (∆x, ∆y) = (0, 1080) The observation at timestep t, denoted ot, consists of all UI elements whose bounding boxes intersect with the current visible viewport: Vt = [xoffset, xoffset + wwin] × [yoffset, yoffset + hwin], ot = {e ∈st | bbox(e) ∩Vt ̸= ∅} . After the scroll offset is updated, the observation ot+1 is recomputed based on the new viewport Vt+1, and the state itself remains unchanged. New_tab, Navigate_back, Navigate_forward Actions. We model web browsing as a traverse process on the tree structure. To support this, the simulator maintains an explicit browsing stack to track session history. These actions deterministically alter the current tab state based on prior states stored in the stack. C. Key Statistics and Hyperparameters of Step-Wise Rollout Process Table 5: Step numbers of the collected trajectories and step-wise task control numbers across domains. Shopping Gitlab Map Reddit Shopping Admin Android Step # 800 1500 1500 1300 1500 6500 Step-wise task control # per proposal 5 8 3 6 8 5 As we mentioned in §4, we introduce a step-wise guided rollout process to encourage exploration towards diverse yet reasonable directions for UI trajectory synthesis. Table 5 summarizes key statistics and hyperpa- rameters of the rollout process. The first row reports the number of final synthesized trajectory steps for each domain, while the second row shows the number of task controls for rollout guidance when the teacher agent is going to propose such task controls. Variation in task control numbers reflects the complexity of website content: for instance, map websites are relatively simple, supporting only a few core functions such as search or navigation, whereas domains like Gitlab, Reddit, and shopping admin pages contain many more elements and support various functionalities, requiring more extensive task control. In the web environment, we collect 2K trajectories with an average length of 3.3 steps. In the Android mobile environment, we collect 1.3K trajectories averaging 5 steps each. The collected states are adjusted to fit the domain and format as defined in WebArena and AndroidWorld benchmarks. For retrieval-augmented simulation, the collection process leverages a limited amount of experiences from the two environments. For the size of the offline retrieval corpus D, we have 1647 transition experience for WebArena, and 683 for AndroidWorld (approximately only 25% and 10% of OS-Genesis experiences on test environments). 18 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Table 6: Human evaluation dimensions with definitions and illustrative examples. Dimensions Definitions Examples Realism of Task Whether the task resembles something a real user would encounter in everyday app usage. “Search for a product and add it to the cart” is realistic; “click random buttons” is not. State Reasonability Whether the UI states and their tran- sitions are reasonable given the app’s typical structure and context. A “checkout” button inside a map application is unreasonable. Action Validity Whether each action logically corre- sponds to the goal, the current state, and the intended next state. Clicking “submit” should occur only after all required entries are filled. Logical Consistency (Thoughts) Whether explanatory comments or in- ferred logic are coherent and free of contradictions. “User clicks search to find item” followed by “user wants to delete profile” is inconsistent. Task Completion Whether the trajectory ends with the task’s goal fully achieved. If the goal is “send a message,” the message should be sent in the final step. Trajectory Consistency Whether actions and transitions form a coherent flow, with no contradictions or unexpected diversions. The trajectory should not jump between unre- lated tasks or contexts. # Irrelevant Steps Number of steps unrelated to the goal; a high count indicates inefficiency or redundancy. Clicking “About Us” is irrelevant to “creating an account.” Topic Abstraction Whether the task is generalized and meaningful, not just low-level UI ma- nipulation. “Complete login” is abstracted; “click input, type name, click button” is not. The estimated cost per web trajectory is $0.02 for retrieval-free simulation and $0.05 for retrieval-augmented simulation, and the estimated cost doubles for each AndroidWorld training trajectory. D. Training and Evaluation Details For digital UI state simulation, the LLM-based world simulator are run with a decoding temperature of 0.5. During the trajectory synthesis process, teacher agents also generate next action with the decoding temperature of 0.5. We train Llama-3-8B-Instruct and Qwen-2.5-7B-Instruct for WebArena and AndroidWorld, respec- tively, using a batch size of 48, learning rate 1 × 10−5, and 2 epochs. Training is performed on 4 A6000 GPUs (48GB each) with Liger-Kernel (Hsu et al., 2025) to improve throughput and reduce memory usage. During inference on the downstream benchmarks, we set the generation temperature to 0.6 and the maximum output length to 1024 tokens. E. Human Evaluation of Training Trajectories Synthesized by UI-Simulator Beyond quantitative metrics which already demonstrate the effectiveness of training trajectories synthesized by UI-Simulator, we further conduct a qualitative human evaluation of the training trajectories across 8 dimensions (Table 6). For each dimension, scores are computed as the proportion of trajectories that satisfy the corresponding evaluation criterion. 19 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Figure 6: The front-end web interface for trajectory human evaluation. We recruited three annotators, each holding a master’s degree or higher in computer science. Each annotator evaluated 40 trajectories from both UI-Simulator-F and UI-Simulator-R. We built a front-end website for annotating, as shown in Figure 6. To assess reliability, we measured agreement on 30 overlapping trajectories, yielding pairwise scores of 0.876, 0.890, and 0.976, which indicate strong consistency. Table 7 and 8 present the average human evaluation scores across all dimensions. We observe that satisfaction rates for each dimension consistently reach, and in many cases exceed 90%. It suggests that even without additional fine-tuning, LLMs are already capable of serving as effective digital world simulators for scaling high-quality training trajectory synthesis. F. UI Simulation Issue Analysis While the digital world simulator significantly enhances agent training, it may still exhibit minor discrepancies in capturing certain real-world UI state transitions. Figure 7 and 8 demonstrate two cases where UI- Simulator-F and UI-Simulator-R may not simulate well. Figure 7 shows a transition where UI-Simulator-F mistakenly fuses irrelevant context into the next step simulation. The new webpage should be a list of all available forums in realistic Reddit after clicking the Forums link. However, UI-Simulator-F makes an error by taking the context of current forum, 20 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Table 7: Average human evaluation scores across dimensions (Web). Dimensions UI-Simulator-R UI-Simulator-F Realism of Task 0.914 0.942 State Reasonability 0.952 0.875 Action Validity 0.867 0.767 Logical Consistency (Thoughts) 0.867 0.733 Task Completion 0.938 0.908 Trajectory Consistency 0.971 0.917 # Irrelevant Steps 0.214 0.533 Topic Abstraction 0.990 1.000 Table 8: Average human evaluation scores across dimensions (Android). Dimensions UI-Simulator-R UI-Simulator-F Realism of Task 0.884 0.888 State Reasonability 0.873 0.888 Action Validity 0.856 0.806 Logical Consistency (Thoughts) 0.884 0.847 Task Completion 0.862 0.929 Trajectory Consistency 0.939 0.959 # Irrelevant Steps 1.09 0.794 Topic Abstraction 1.0 0.994 \f\deeplearning, into account. As a result, the new webpage shows a bunch of information related to deep learning. Transition in Figure 8 shows that UI-Simulator-R sometimes ignores the current context and refers too much to the retrieved reference state. The search results of Byte Blaze should be relevant to the keyword. However, in this case UI-Simulator-R just simulates the search results from the reference state and ignores what the user is currently searching. G. System Prompts In this section, we present the key system prompts used in our work. Tables 9–15 provide the prompts for the guided rollout process, Tables 16–19 detail those used during simulation, and Table 20 illustrates how we generate targeted tasks for UI-Simulator-Grow. The prompts used for simulating Android and web UI environment states are largely aligned. Table 21 presents the system prompt for thought and action generation in Android trajectory collection, whose instructions closely mirror those used for the web setting. All states are formatted according to the AndroidWorld specification, represented as lists of UI elements with attributes such as content_description and class_name. 21 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training [574] RootWebArea 'Should I continue with this?' focused: True. [637] image [605] link 'Forums' [644] link 'Submit' [655] button 'MarvelsGrantMan136' hasPopup: menu expanded: False [736] main [740] link '/f/deeplearning' [748] article [753] StaticText 'Should I continue with this?' [754] link 'Should I continue with this?' [503] StaticText 'Submitted by ' [759] link 'Eric-Cardozo' expanded: False [507] StaticText 't3_127uysh' [762] StaticText 'March 31, 2023 at 2:44:26 PM EDT' [511] StaticText ' in ' [763] link 'deeplearning' [513] StaticText 'm a physics student, and I started learning … [514] StaticText 'Here is the code: ' [771] link 'https://github.com/ericcardoz o/FastorML' [7636] RootWebArea 'Deep Learning Forums' [5279] link 'Home' [3133] link 'About Us' [18623] link 'Contact' [8494] link 'Sign Up' [17722] link 'Login' [12931] searchbox 'Search forums...' [18394] button 'Search' [17570] generic 'Browse Discussion Threads' [15867] link 'What are the best frameworks for deep learning?' [13991] StaticText '18 replies' [900] StaticText 'Last active: 1 hour ago' [4080] link 'Understanding Convolutional Neural Networks' [10867] StaticText '27 replies' [3872] StaticText 'Last active: 4 hours ago' [7673] link 'Latest trends in AI research' [19196] StaticText '9 replies' [3678] StaticText 'Last active: 3 days ago' Click [605] Figure 7: A case of failed simulation where UI-Simulator-F generates the new page based on irrelevant context. You are a Web task creation AI. Assume that you have an a11y tree of this website, generate some common tasks user perform on this web. To be successful, it is very important to follow the following rules: 1. Suppose that you’ve already logged in the website, and you don’t want to sign out. 2. Note that we don’t want the task control to focusing too much on the elements that the state contains. You should consider the functionalities behind the elements, and think of functionalities that people use. 3. The number of the tasks should be no more than {Number of Task Controls} , so just keep the most common ones. Besides, ideally the tasks are atomic independent, i.e. we don’t want a certain task to be a subtask of another task, and the task shouldn’t contain two subtasks like “do A and then do B”. Example: Initial state: {Initial State} Tasks: 1. Search for a place. 2. Find directions between two places. Table 9: System prompt for First-Step Task Control Proposal. {Initial State} is the Initial State for in- context example. {Number of Task Controls} limits the number of candidate task controls 22 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training [320] link 'Dashboard' [394] button hasPopup: menu expanded: False [1507] textbox 'Search GitLab' required: False [1468] generic 'Use the shortcut key <kbd>/</kbd> to start a search' [401] link 'Create new...' [403] link 'Issues' [1100] generic '13 assigned issues' [404] link 'Merge requests' [1102] generic '8 merge requests' [406] link 'To-Do List' [1118] generic 'Todos count' [1489] StaticText '5' [407] link 'Help' [409] link 'Byte Blaze' [1152] img 'Byte Blaze' [7] main [302] StaticText 'Projects' [305] link 'New project' [411] link 'Yours 14' [412] link 'Starred 3' [413] link 'Explore' [414] link 'Topics' [312] searchbox 'Filter by name' [374] button 'Name' [458] link 'All' [6320] RootWebArea 'Search results for "Byte Blaze" · GitLab' focused: True [9128] link 'Projects 7' [5946] link 'Milestones 4' [15074] link 'Users 15' [2986] link 'O' [4708] heading 'Open Source Diversity / opensourcediversity.org' [12492] link 'Open Source Diversity / opensourcediversity.org' [18470] generic 'Public - The project can be accessed without any authentication.' [2955] image [12092] generic 'blossom' [5288] StaticText '🌼' [9996] StaticText ' Code of ' [15143] link 'https://opensourcediversity.org' [5065] link '28' [845] image [5529] link '0' [4353] heading 'Checkstyle / checkstyle' [18893] link 'Checkstyle / checkstyle' [901] generic 'Public [17524] image Type [1507][Byte Blaze] Figure 8: A case of failed simulation where UI-Simulator-R overly depends on the reference state to generate the new page. 23 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are asked to modify a web task. You will be given a description of website and original task that correspond to the website. To be successful, it is very important to follow the following rules: ##Find the name of website and figure out what kind of website is given. ##Generate unique, diverse entity names and add to original task to make the task more specific. For example, a specific entity name for shopping website is a product name, a specific entity name for map website is a specific place name. ##Generate 15 examples. You should try to think of what kinds of terms are commonly searched. ##The search term should NOT be too complex, JUST the name. For example, we don’t want “the Eiffel Tower in Paris”, but just “Effiel Tower”. Examples: Website Description: {Website Description} Original task: Search for a certain product. ##Thought: I’ll try to search for a certain product. The candidiates should be diverse. People often search foods, clothes, fruits, electric devices, toys, detergents, trip utilities on the shopping website. ##Tasks: • Foods 1. Search for OREO milk cookies. 2. Search for Crisco Oil 48 Oz. 3. Search for L’Oreal Paris Revitalift Anti-Aging Cream. 4. ... • Fruits 1. Search for Driscoll’s Strawberries. 2. ... • ... {Example #2} {Example #3} Table 10: System prompt for Diverse Entity Specification. {Website Description} is a short description of the current website for in-context example. 24 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a Web task creation AI. Assume you are browsing on a website with some or no guidance. Based on the task control, the current webpage, and the browsing history, your task is to analyze the current webpage and browsing history, and continue browsing according to the task control and previous steps by giving the action on the current webpage. Here are some requirements for output: • You need to incorporate the following details in the Thought: – the task control (if no task control is given, just skip it), – what you learned from previous steps, – what the current webpage is like (it is best to use some elements within the webpage as evidence), – and which action to take next (either guided or not). • The Action should strictly follow the action format provided in the action space. • You also need to generate a single sentence abstract to summarize what this action does. Note that Thought, Action and Task should not exceed one line. The summary should NOT mention the content of task control (e.g. ‘Create a new project or issue’ shouldn’t appear in the Task in the following example), just focus on what the action does. Available actions: {Input Action List} Example Input: Task Control: {Input Task Control} Current state: {Input Current State} Previous steps: {Input Previous Steps} Example Output: Thought: Let’s think step by step. The task control is ‘Create a new project or issue.’. From the previous steps, I clicked the ‘New Project’ button to step into the project creation page. The current webpage contains elements like “Enter your project name here” textbox with id 10 and “visibility_level” selection with id 12, which means currently I’m at the project creation page, and it has many information entries to fill in. To continue creating the project, I shall fill in the required information. I can first type a name for the new project, and the corresponding input box id is 10. ’Web platform’ is a good name. I can set the third parameter to 1 to press enter to submit. Action: type [10] [Web platform] [1] Task: Type “Web platform” as the name for new project, and press enter to submit it. Table 11: System prompt for Thought & Action Generation. {Input Action List} is available action space. {Input Task Control}, {Input Current State}, and {Input Previous Steps} are Current Step Task Control, State, and Step History for in-context example. 25 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are asked to generate web task controls for the next step that are unrelated to visual adjustments. You will be provided: • Elements that are commonly used in current website. • Original task control • Previous steps you have taken. You need to follow the following rules: 1. AVOID task controls related to visual manipulation. 2. You have completed the Original task control in previous steps. You should make use of the given elements to propose reasonable new task controls to extend current trajectory. 3. Pay attention to “Newly appeared elements”, elements that appear in the new state compared to the last step. You should propose as many task controls based on the interactions with newly appearing elements as possible. 4. The new task controls SHOULD be consistent with previous steps and can form a single complete task rather than multiple independent tasks. E.g., we don’t want to search for one place and then explore related or nearby places. 5. The task controls should be diverse. We don’t want task controls doing the same thing but on different entities. For example, we don’t want to have both view the detail of A and then view the detail of B in the response. 6. Only give no more than task controls, just make sure you are proposing the most common ones. 7. You should strictly follow the output format in the example. 8. Note that you cannot interact with a StaticText or generic. Interacting with it wouldn’t cause any effect. So it is best not propose task controls related to them. Also we want the tasks can be done on the computers. Example: Original task control: Search for ‘PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro’ Previous steps: [“Search ‘PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro’ ”] ##Thoughts: Let’s think step by step. The original task control is to search ‘PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro’, and I have completed the original task control. The current webpage shows the search results related to the 6S Wireless Noise Canceling Hi-Fi Headphones. I need to take the next step that is consistent with the first step searching. The newly appeared elements include the detail links of the product I want, together with functionalities like “Add to Cart”, “Add to Wish List”, “Add Your Review”, “Add to Compare”, etc. I can check the most relevant search result for its details. I can also choose to interact with the product via functionalities like adding it to the cart, wishlist, etc. ##Task Controls: 1. View more details about ‘PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro’ 2. Add ‘PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro’ to cart 3. ... Table 12: System prompt for Step-Wise Task Control Proposal in later steps. 26 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a Web task creation AI. Assume you have step history, and try to summarize a high-level intent. The step history is in the format of “action: step summary”. You should pay special attention to the content within action (e.g. the content that is typed in), and the summarized task should reflect such content. Here are some requirements for summarization: • The high-level intent should faithfully follow the action sequence. • The high-level intent should be succinct and consistent with each single step. • Ignore unnecessary contexts and intermediate steps. The Task should only include the high-level goal of the trajectory. Example: Input: Previous steps: click[7809]: Access the help section for GitLab. click[482]: View the ‘Contact Support’ page to get help with GitLab issues. type[361][Bug: cannot edit account detail][1]: Enter a subject for your support request. type[5882][Account Issue][1]: Enter “Account Issue” in the subject field for the support request. click[3605]: Select the type of support needed as “Technical Support”. Thought: From steps history, the first two steps open ‘Contact Support’ for GitLab. The following two steps enter “account issue” for further detail. Last step select “Technical Support” as support type. Output: Task: Submit a Technical support request to GitLab regarding account issue. {Example #2} Table 13: System prompt for Task Summarization in trajectory wrapping process. 27 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a web behavior analyst. Assume you have collected a series of thoughts on actions taken during navigation on the web. The thoughts are not really purposeful, or not focusing on the given goal. Your task is to rewrite each thought to make them fit the given task goal. You should follow the rules when rephrasing thoughts: 1. The rewritten thought should consider what the current page is about and mention the goal, analyze what should be done now to complete the goal, and explain why this action is appropriate in the current step. 2. You should strictly keep the entity, the analysis on the webpage and previous steps, and the action in the original thought unchanged — just adapt the wording to make it reasonable for the goal. Ignore task control-related sentences in the original thoughts, and focus only on how to complete the given goal. 3. Please guarantee that each action appears in its corresponding rewritten thought. Example: Original thoughts: • Thought 1: According to the task control, I need to view account information. The current webpage displays my account details, including “Contact Information”, “Default Billing Address”, and “Default Shipping Address”. The “Account Information” text is prominently featured, and there is an “Edit” button linked to my account information. To proceed, I will click on the “Account Information” link to view the complete details of my account. In summary, the next action I will perform is click [3706]. • Thought 2: Moving forward as per the task control “Access the ‘Payment Methods” section to add a new credit card for future purchases.” From the previous step, I accessed “Account Information”. Currently, there are several buttons in the account management section, including one labeled “Payment Methods” (id 1523), which I need to click to reach the section for adding a new credit card. In summary, the next action I will perform is click [1523]. • Thought 3: ... Goal: Add a new credit card with the number “4111111111111111” and expiration date “12/25” to the account information. Actions: click [3706], click [1523], ... Rewritten thoughts: • Thought 1: Let’s think step-by-step. The current webpage displays my account details, including “Contact Information”, “Default Billing Address”, and “Default Shipping Address”. The “Account Information” text is prominently featured, and there is an “Edit” button linked to my account information. This is the first step. In order to add a new credit card with the number “4111111111111111” and expiration date “12/25” to the account information, I will click on the “Account Information” link to view the complete details of my account. In summary, the next action I will perform is click [3706]. • Thought 2: Let’s think step-by-step. From the previous step, I accessed “Account Information”. Currently, there are several buttons in the account management section, including one labeled “Payment Methods” (id 1523), which I need to click to reach the section for adding a new credit card. In summary, the next action I will perform is click [1523]. • Thought 3: ... Table 14: System prompt for Thought Rewriting in in trajectory wrapping process. 28 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a teacher who is writing quiz to test your students on the reading and understanding of webpage. The webpage should be either: • demonstrating detailed information for one object, or • containing information for multiple objects, and some of them are different and comparable. First, analyze the webpage. If you think the webpage is demonstrating detailed information for one object, put “Yes” in “Answer”. Otherwise put “No”. Second, you need to find relevant information that is useful for making quiz from the current webpage, such as specific numbers, ranking, entity names. If you think the webpage is wrapping information for multiple objects in a table, the questions you ask should be concentrating on the comparison of the information, or features that are in common. Note: You need to specify the full name of entity in each question. Don’t use terms like “this page”, “this item”. Don’t ask questions about layout, e.g., buttons, textbox, or details that most people don’t care, e.g. the contributor, the url of the site, etc. Don’t always use interrogative sentences like “what” or “which.” Instead, try using declarative sentences like “tell me ...” or “show me ....” Examples: Webpage: [1] RootWebArea ‘Carnegie Mellon University | OpenStreetMap’ focused: True [14] heading ‘OpenStreetMap logo OpenStreetMap’ ...(omit for brevity) [627] StaticText ‘University ’ [630] link ‘Carnegie Mellon University, Pittsburgh, Allegheny County, 15213, United States’ [624] link ‘More results’ ...(omit for brevity) Thought: Let’s think step by step. The current webpage is a search result of ‘Carnegie Mellon University’ on OpenStreetMap website. The searched result only contain detailed information about CMU, which means it is not applicable to compare with other thing. Thus the Answer should be “Yes”, and I shall ask questions that are about details of CMU. There are details like the address of CMU, the zip code. I’d like to ask questions about the details. Answer: Yes Questions: • Show me the address of Carnegie Mellon University. • What is the zip code of CMU? {Example #2} Table 15: System prompt for Reasoning Task Generation in in trajectory wrapping process. 29 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Given the current UI representation, the current action, the web browsing history, and the potentially relevant element: First, think about what current window is like, and try to interpret the action. Second, describe what the new window will be like after executing the action in a sentence. It is best to specify the roles of terms used in the description. Third, extract key information that should be kept in mind, like price of bought product, user profile, etc. Feel free to put “None” here if you think the action won’t cause any long-range influence. Lastly, answer whether such action would lead to a totally new webpage. Note: You’ve always logged in to the website. You don’t need to consider the logging process when generating the new window. The intent should be detailed enough to describe what a whole webpage looks like. Example: Thought: Let’s think step by step. Currently I’m at search results page of Google. The Google search results are general. The ‘click’ action is targeted on a hyperlink to ‘Alan’s podcast channel‘. Since we have set up the age verification and pass the age limit, this action redirects the user to its homepage. New window: The new window is Alan’s live podcast home page. Key Info: None Answer: Yes {Example #2} {Example #3} {Example #4} Table 16: System prompt for Next State Overview Prediction. 30 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Imagine you are a website designer. Given some previous information, the interpretation of action on last step, and description of a new website, first extract what the new website (and the domain) is, then answer the question: What sections should the webpage have, and list all of them and their functionalities, and compose elements that appear in each section in detail. You should ONLY generate informative elements that are relevant with the description. Informative elements refer only to the main elements that contain specific, instantiated information of the webpage. For example, a paragraph with texts introducing the term “California”, the link to Youtube, etc. Pay attention to the input, which contain information that must be included in current page. Also remember you are creating content within a certain domain. Elements like “Copyright: Alhambra Palace” will never appear in a Google map webpage. For a bunch of similar, important elements to generate (e.g. search results on a search results page), the number of such element should be at least 6. You should generate content based on the domain, and information that reasonably appear in the domain. E.g., we should have prices information in a shopping website. Sometimes you can have section like “Other related terms”, but that should never be the main content. Note that if you think an element can be interacted with, specify that it should be a link element. E.g., you don’t have to add another element “View details” or “Website” for a search item, because when clicking on the search term, we might jump to the detail of the item. Just merge their functionalities and tag the element as a link. Example: Previous Info: {Previous Key Info} Description: The new window displays the discussion thread page on Reddit. The title of the post is “What do you think of the new European Cup champion”. User could read and interact with the thread. Thought: Thread interaction section A “Comment” button for users to engage directly with the post A “Share” button for users to share the discussion thread An “Upvote” button and a “Downvote” button for users to express their opinions on the post Title section Title of the post: What do you think of the new European Cup champion Spain? Body section Post content: “Spain has triumphed in the ...” Link of Post number: #10003902 Upvote: 569 ... A comment section where users can share their thoughts, including: Comment: “I think Spain played incredibly well! Their teamwork was on another level!” Link of commenter: Alice; upvote: 5; downvote: 0; 12h ago; upvote/downvote button Comment: ... Table 17: System prompt for Generating Rich Draft in Natural Language. {Previous Key Info} contains some key information recorded in previous steps to boost the coherence of the content. 31 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Given the following reference action sequences and the current action sequence, your task is to find the ones from the reference sequences that are doing almost the same thing as current action sequence. If there’s no sequence in reference that does the same thing as current action sequence, then you should pay more attention to the ending steps, and choose ones that are doing the same thing in the latest steps. If the functionality of the current action history is not quite clear, just finding ones that exactly match the latest steps. Note: You need to consider the meaning behind actions like “click”, “type”. E.g., click button ‘Search’ means doing the search on the typed term. We DONT want the output sequences to have the future steps of the current action history. You should find sequences that just stopping at the same point as the current actions. You should strictly follow the output format. Example: 1. type textbox ‘Search’ 2. click button ‘Search’ 3. click link ‘Dell G7 Laptop’ 4. click ‘Add to cart’ 1. type textbox ‘Search’ 2. ... Current action sequence: 1. type textbox ‘Search’ 2. click button ‘Search’ 3. click link ‘OREO milk cookies’ 4. click ‘Add to cart’ ##Thoughts: Let’s think step by step. The current action sequence does searching at first, then clicks ‘OREO milk cookies’ to view its details, and adds it to the cart. I should output action sequences that are doing the same thing at the ending steps. ##Output: 1. type textbox ‘Search’ 2. click button ‘Search’ 3. click link ‘Dell G7 Laptop’ 4. click ‘Add to cart’ 1. type textbox ‘Search’ press enter 2. ... {Example #2} Table 18: System prompt for model-based Semantic Retriever based on action history. 32 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Given the following description of a GUI and the refernce GUI, create the content of a new state by taking elements and rewriting some important contents within the reference state. Note that the description of content might be short, but you should provide corresponding essential information in the reference GUI. So you should pay attention to each element in the reference GUI. If you think the reference GUI doesn’t match the descrition, then you should generate a lot of new contents that match the description, but within the same structure. If you think the reference GUI matches the descrition, you could copy most of the content from the reference state, and only modify a few important contents if needed. Do not try to add additional details or information in this case. • You are only allowed to modify information related to some named entities. You are not allowed to add/remove the functionality elements in the reference state if you think it matches the description. If no named entities are in the reference state, you can make no change. Example input: Reference state: {Reference State} Description of new state: The new website is a product page on Onestopshop ... Example output: {New Content} Table 19: System prompt for Retrieval-Augmented Draft Generation . {Reference State} refers to the state retrieved for generation. {New Content} contains a list of realistic elements inherited from the refer- ence state and newly composed content in the current context 33 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a Web task creation AI. Given the a11y tree of a website, generate a common task user perform on this web, and new browsing history adapted from the given browsing history. The reference task and browsing history is based on another webpage that has the similar functionalities but with different object. So you should propose task based on content from currently given entities and content. To be successful, it is very important to follow the following rules: 1. Suppose that you’ve already logged in the website, and you don’t want to sign out. 2. The new task should strictly have the same task type as the reference. Don’t change the task type 3. Don’t change the procedure in the browsing history. Just change the entity names of objects (products, items), not functionalities. If the reference browsing history is “None”, then put “None” in the new browsing history. Don’t imagine steps that are doing different things from reference browsing history. Example: Current webpage: {Input State} Reference task: Add ‘Dell G7 Gaming Laptop - 256GB’ to the cart. Reference browsing history: 1. Search for “Dell G7 Gaming Laptop” 2. Click ‘Dell G7 Gaming Laptop - 256GB’ to view its details. Task: Add ‘Milkman Bonus Bundle - 10 Packets Low-Fat Milk + 2 Packets Chocolate Milk with 18g Protein’ to the cart. New browsing history: 1. Search for “Milkman Bonus Bundle” 2. Click ‘Milkman Bonus Bundle - 10 Packets Low-Fat Milk + 2 Packets Chocolate Milk with 18g Protein’ to view its details. Table 20: System prompt for Targeted Task Variant Synthesis. {Input State} contains details of a prod- uct (‘Milkman Bonus Bundle’ in this case) 34 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Assume you are a Android mobile phone. Based on the task control, the current UI page, and the action history, your task is to analyze the current page and history, and continue browsing according to the task control and previous steps by giving the action on the current page. Here are some requirements for output: • You need to incorporate the following details in the Thought: – the task control, – what you learned from previous steps, – what the current page is like (it is best to use some elements within the page as evidence), – and which action to take next. • The Action should strictly follow the action format provided in the action space. • You also need to generate a single sentence abstract to summarize what this action does. Note that Thought, Action and Task should not exceed one line. The summary should NOT mention the content of task control (e.g. ‘Create a new project or issue’ shouldn’t appear in the Task in the following example), just focus on what the action does. Available actions: {Input Action List} Example Input: Task Control: {Input Task Control} Current state: Element 0: UIElement(text=None, content_description=Create contact, class_name=android.view.View, bbox=None, bbox_pixels=BoundingBox(x_min=0, x_max=1080, y_min=0, y_max=2400), hint_text=None, is_checked=False, is_checkable=False, is_clickable=False, is_editable=False, is_enabled=True, is_focused=False, is_focusable=False, is_long_clickable=False, is_scrollable=False, is_selected=False, is_visible=True, pack- age_name=com.google.android.contacts, resource_name=com.google.android.contacts:id/background_container, tooltip=None, resource_id=None, metadata=None)) Element 1: UIElement(text=None, content_description=None, class_name=...) Element 2: UIElement(text=First Name, content_description=James, class_name=...) Element 3: UIElement(text=Last Name, content_description=Brown, class_name=...) Element 4: UIElement(text=Phone Number, content_description=None, class_name=...) Element 5: UIElement(text=Email Address, content_description=None, class_name=...) Element 6: UIElement(text=Save, content_description=None, class_name=...) Element 7: UIElement(text=Cancel, content_description=None, class_name=...) ... Previous steps: {Input Previous Steps} Example Output: Thought: Let’s think step by step. The guide is ’Create a new contact for "James Brown". From previous steps, I opened the ’Contacts’ app, started the creation process and typed the first and last name. The current page shows that I’ve successfully typed the First and last name, and I also need to fill in details like phone number, email address. Since the guide doesn’t provide the phone number, I should give a realistic phone number here, like "718-099-5256". To continue creating the contact, I shall type "718-099-5256" to the Phone number. Action: input_text [4][718-099-5256] Task: Type "718-099-5256" as the phone number. Table 21: System prompt for Thought & Action Generation for AndroidWorld trajectory collection. {Input Action List} is the action space. {Input Task Control}, {Input Current State}, and {Input Previous Steps} are Current Step Task Control, State, and Step History for in-context example. 35
LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Yiming Wang2,⋆,♠, Da Yin1,⋆,♠,♡, Yuedong Cui1,⋆, Ruichen Zheng1,⋆ Zhiqian Li1, Zongyu Lin1, Di Wu1, Xueqing Wu1, Chenchen Ye1, Yu Zhou1, Kai-Wei Chang1,♡ 1UCLA 2Harvard University ⋆Co-First Authors ♠Co-LeadAlphabetical Order ♡Equal Advise Digital agents require diverse, large-scale UI trajectories to generalize across real-world tasks, yet collecting such data is prohibitively expensive in both human annotation, infra and engineering perspectives. To this end, we introduce UI-Simulator, a scalable paradigm that generates structured UI states and transitions to synthesize training trajectories at scale. Our paradigm integrates an LLM-based digital world simulator for diverse UI states, a guided rollout process for coherent exploration, and a trajectory wrapper that produces high-quality and diverse trajectories for agent training. We further propose UI-Simulator-Grow, a targeted scaling strategy that enables more rapid and data-efficient scaling by prioritizing high-impact tasks and synthesizes informative trajectory variants. Experiments on WebArena and AndroidWorld show that UI-Simulator rivals or surpasses open-source agents trained on real UIs with significantly better robustness, despite using weaker teacher models. Moreover, UI-Simulator-Grow matches the performance of Llama-3-70B-Instruct using only Llama-3-8B-Instruct as the base model, highlighting the potential of targeted synthesis scaling paradigm to continuously and efficiently enhance the digital agents. Date: Oct 16, 2025 Code Repository: https://github.com/WadeYin9712/UI-Simulator Website: https://ui-simulator.notion.site/llms-as-scalable-digital-world-simulator Model Weights & Datasets: https://huggingface.co/UI-Simulator Contact: , 1. Introduction Large Language Models (LLMs) have emerged as the backbone of digital agents that follow user instructions and interact with diverse User Interface (UI) environments to accomplish complex tasks, such as daily web and mobile navigation (Deng et al., 2023, Koh et al., 2024, Zhou et al., 2024) and computer use tasks (Xie et al., 2024). A persistent bottleneck in training LLMs to become strong digital agents is the scarcity of large-scale, high-quality UI environment training trajectories. Collecting such data demands extensive human effort: for instance, Xie et al. (2024) report that designing 360+ realistic computer-use tasks, which usually involve long, complex sequences of UI actions, requires more than 1,800 human hours. This cost severely limits the scalability of agent development and has sparked interest (Ou et al., 2024, Murty et al., 2024, Sun et al., 2024, Pahuja et al., 2025) in automatic synthesis of training trajectories. When applying the automatic trajectory synthesis, what factors could significantly impact performance of the trained agent policies across different UIs? Motivated by Cobbe et al. (2020) and Kimi-K2 (Kimi et al., 2025), we argue that environment diversity would be a chief component, as exposing an agent to a wide variety of UI environments would increase its robustness and generalizability to unfamiliar tasks at test time. However, from infra and engineering aspects, deploying parallel real UI environments faces severe bottlenecks due to 16 Oct 2025 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training 🌍 UI-Simulator: LLMs as Scalable Simulators For Digital Agent Training 💽 LLMs internally learns to simulate UI digital world! Structural Code UI Frontend Code Procedural Knowledge LLM Pre-Training Corpus LLM World Simulator Guided Rollout Process Trajectory Wrapper 🌍 UI-Simulator Performance Highlight 7/8B-size open-source superior performance Better performance when using simulated envs Synatra NNetNav 🌍 UI-Simulator 🌍 UI-Simulator Stronger when scaling Traj. From Real-World Env. vs. Traj. # World Model Simulate Action: Click [1412] Control Control Add Controls to Sample Diverse, Valid Traj. Action Action ... ... Overall User Task Instruction Conditioned Step Thoughts Wrap 📈 UI-Simulator-Grow Performance Highlight Competitive performance with 70B-size models Better performance while with higher data efficiency Improve more rapidly than standard scaling Given UI-Simulator's flexibility in generating diverse UI states, can we strategically synthesize data to accelerate LLM agent improvement? Llama3-70B 📈 UI-Simulator-Grow Traj. # 📈 UI-Simulator-Grow 🌍 UI-Simulator 📈 UI-Simulator-Grow 🌍 UI-Simulator (More Traj.) vs. Figure 1: Overview and performance highlights of UI-Simulator and UI-Simulator-Grow. high resource demands, network instability, and the lack of native distributed support (Lai et al., 2025). We notice that world models which model the environment states and their transitions (Munro, 1987, Ha and Schmidhuber, 2018) may offer a promising solution. If the world models enable the generation of diverse synthetic UI states, it will allow digital agents to immerse themselves in more diverse UI scenarios, enable richer rollouts and achieve stronger generalization to unseen apps and layouts. How can such digital world models be constructed? We argue that digital world models can be built on LLMs, as pre-training on front-end code and procedural knowledge makes them well-suited to synthesize realistic UI states and transitions triggered by user actions. In this paper, we introduce UI-Simulator, a scalable UI trajectory synthesis paradigm for training digital agents powered by an LLM-based digital world simulator. Given summaries of prior UI states and a next action, our digital world simulator generates future UI states in a hierarchical format without any additional fine-tuning. Each UI state encodes textual content, spatial coordinates, and dynamic attributes (e.g., focus status), organized into an accessibility tree structure that captures hierarchical relationships among regions and elements. To collect high-quality trajectories with UI-Simulator, we run a step-wise guided rollout process where a teacher agent explores UIs generated by the world simulator under step-wise task control that prevents incoherent actions and promotes diverse, context-grounded behavior conditioned on prior actions and current state. Finally, a trajectory wrapper turns the rollouts into available training trajectories with user instructions, ground-truth UI actions, and step-wise reasoning. Beyond simply blindly scaling up trajectory sizes with UI-Simulator scaling, we explore how to strategically and efficiently synthesize data to accelerate LLM agent improvement. We introduce UI-Simulator-Grow, a targeted scaling paradigm that achieves faster gains using fewer but more contributive trajectories. At each iteration, UI-Simulator-Grow selects target tasks that offer the greater learning potential based on teacher-forcing loss signals from dynamically constructed validation sets, and synthesizes diverse trajectory 2 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training variants to guide the next training iteration. We evaluate UI-Simulator on two widely used benchmarks, WebArena (Zhou et al., 2024) and AndroidWorld (Rawles et al., 2024), which cover web and mobile UI domains. UI-Simulator achieves very competitive performance among open-source agents of comparable model size. Notably, UI-Simulator is solely used to synthesize training resources with a weaker teacher model, GPT-4o-mini, whereas prior methods rely on the more powerful GPT-4o teacher model. We also find that UI-Simulator yields greater robustness than other baselines when evaluated on perturbed UI environments, and higher adaptability given a limited amount of experience on test environment. Moreover, UI-Simulator even outperforms the variants trained directly on real downstream environments using the same trajectory synthesis pipeline. Further, our targeted scaling paradigm UI-Simulator-Grow using only Llama-3-8B-Instruct base model matches Llama-3-70B-Instruct, and drives steeper performance gains on WebArena using only 66% of the original training trajectories, demonstrating significantly improved data efficiency. 2. Related Works World Models. Extensive prior work has explored learning dynamics models and using them to train decision-making policies (Werbos, 1987, Munro, 1987, Ha and Schmidhuber, 2018). Recently, the structural consistency of videos across tasks, environments, and embodiments has fueled progress in large-scale video pretraining for world models (Hafner et al., 2020, OpenAI, 2024, Parker-Holder et al., 2024). LLMs have also emerged as potential world models due to their rich encoding of physical and textual knowledge from massive corpora. Hao et al. (2023), Gu et al. (2024), Chae et al. (2025) explore the use of LLMs as both reasoning agents and inference-time world models that proactively identify optimal actions. In our paper, we emphasize more scalable, high-quality UI trajectory synthesis and investigates the broader potential of digital world simulation for agent training. While prior work such as Fang et al. (2025), Gao et al. (2025) also utilizes LLMs as world models for agent training, our approach emphasizes greater efficiency of building digital simulators. Instead of training a dedicated world model, which can be costly due to the need for large-scale data, we directly leverage the LLM's prior knowledge, requiring little to no experience from downstream task environments. More importantly, we focus on scaling perspective and study the targeted scaling paradigm that efficiently synthesizes the most contributive trajectories at each iterations, enabling faster agent improvement with significantly fewer resources. Synthetic Data for Digital Agent Training. To overcome the bottleneck of limited high-quality humanannotated UI trajectories, recent efforts focus on scalable synthesis of training data for digital agents. Synatra (Ou et al., 2024) and AgentTrek (Xu et al., 2025) address this challenge by converting indirect human-readable knowledge (e.g., web tutorials and manuals) into direct task demonstrations, enabling large-scale supervision without manual annotation. NNetNav (Murty et al., 2024), OS-Genesis (Sun et al., 2024), InSTA (Trabucco et al., 2025), and Explorer (Pahuja et al., 2025) adopt unsupervised approaches that autonomously explore real-world websites and retroactively annotate action sequences with suitable instructions. Different from these methods, which rely solely on prior experience in real digital environments, UI-Simulator leverages an LLM-based digital world model to simulate diverse, plausible, and previously unseen UI states which enable broader generalization and more robust agent training. 3. Digital World Models in UI-Simulator In this section, we introduce how to build a digital world simulator fueled by LLMs in UI-Simulator trajectory synthesis paradigm. The simulator construction process can be applied to a variety of digital and even non-digital agent tasks. 3 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training LLM World Simulator Predict Next State Without Any References Action: Click [1412] 🌍 Retrieval-Free Simulator 🌍 Retrieval-Augmented Simulator Predict Next State With Limited Amount of Prior Experience in Test Env. Retrieve The Most Relevant UI for Next State Synthesis Few Prior Experience Generate Creative Next State Generate Creative Next State Action: Click [1412] Figure 2: Overall process of how the retrieval-free/-augmented simulators predict the next UI state. 3.1. Formulation We consider the task of simulating UI environment dynamics with LLMs to support agent training. LLMs can serve as the foundation of these simulators. Most digital UI environments, including web, mobile, and computer, can be represented as structured textual accessibility trees. Pre-training on front-end code and procedural knowledge makes LLMs suitable as a backbone model to synthesize reasonable UI states and state transitions triggered by user actions. Let st denote the full UI environment state at timestep t, ot be the corresponding observation visible to the agent, and at be the corresponding action taken by the agent. Each element e ⊂st is associated with a bounding box bbox(e) = (xe min, xe max, ye min, ye max), representing its position on the page. The environment dynamics are governed by a transition function st+1 = T (st, at), where T is either the LLM used for digital world simulator MLLM or rule-based transition function. The agent then receives a new observation ot+1, computed by extracting the visible UI elements from st+1 based on the positions. Specifically, in terms of receiving the observation ot from the state st at timestep t, let the viewport at this timestep be, Vt = [x0, x1] × [y0, y1]. The observation at step t is then calculated by, ot = {e ∈st | bbox(e) ∩Vt ̸= ∅} , i.e., capturing the set of elements whose bounding boxes intersect with the viewport region. 3.2. (Retrieval-Free) Simulation To bridge the gap between our digital world simulation and real-world UI transition, we design a hybrid approach that considers rule-based and model-based transitions. Concretely, for most UI actions, our framework empowers the LLM MLLM to generate realistic and diverse next-state transitions, serving as the core engine behind the simulator's ability to produce valid and imaginative UI states. The transition follows a multi-step pipeline that guides the world simulator to anticipate outcomes, infer coherent and diverse next states, and render them into a structured format. At each step, we apply few-shot Chain-of-Thought (CoT) prompting to the LLMs: 4 in-context examples are used for predicting the overview, and 1 example is used for each of the other two steps. 4 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Predict an Overview of the Next State. The first step in modeling the effect of an action is to generate a high-level overview of the next state, conditioned on the current state and the selected action. For example, if the current state is a shopping website and the current action is typing sneakers into the search box and presses enter, the predicted overview would be, "a search results page for the keyword sneakers". Generate Rich Draft in Natural Language. Based on the predicted overview, the LLM generates diverse and semantically rich content in natural language to populate the simulated webpage. The output of this step is intentionally unstructured and unconstrained by a fixed format, which encourages expressiveness and richness. The generated draft includes detailed descriptions of each element's attributes, such as the element's tag and content description, but without position information. For instance, a draft for a Reddit thread page might contain structural sections such as a heading section and a navigation section, as well as informative sections including the thread title, interaction area (e.g., upvote and comment buttons), and the main body of the post. This organization helps produce realistic and contextually rich simulated UI states in the next step. Convert Unstructured Draft into Structured Format. We treat the LLM as a style transfer model that converts the unstructured natural language draft into a well-defined structured format, which can be directly used as training states in agent trajectories. During this process, coordinates are also automatically assigned to each UI element to complete the specification of st+1. Note that some actions do not result in a completely new page but rather alter the view of the current state, e.g., a scroll action reveals content initially off-screen. To simulate deterministic actions with relatively fixed outcomes, we adopt rule-based transitions. See Appendix B for details. 3.3. Retrieval-Augmented Simulation A common and realistic way to evaluate agent capability is by assessing how quickly it can adapt to a new test environment after limited experiences. Beyond the setting where no prior knowledge about the test environment is available, we also consider scenarios where a small amount of the test environment experience is known. For this scenario, we also introduce retrieval-augmented simulation, which conditions UI generation on limited experience from the test target environment. Compared to relying solely on the LLM world simulator's internal knowledge, this allows the world simulator to generate UI environments that not only resemble the target domains but also support a diverse range of tasks grounded in those environments. Formally, we first construct an offline retrieval corpus of N state transitions from the test environment, denoted as, D = {︁(︀ ̃o(i) t , H(i) t , ̃o(i) t+1, ̃s(i) t , ̃s(i) t+1 )︀}︁N i=1 , where ̃o(i) t and ̃o(i) t+1 are the observations before and after an action in the downstream test environment; ̃s(i) t and ̃s(i) t+1 are their corresponding UI states; and H(i) t denotes the action history up to timestep t. During UI-Simulator paradigm, when simulating the next UI state after a given action, we query this offline retrieval corpus with the current observation-action history pair (ot, Ht). A retriever then returns the most relevant observation ̃oret and corresponding state ̃sret from the corpus D. The transition can be modelled as st+1 = MLLM(st, at, sret ), where MLLM is prompted with both the current interaction context and the retrieved state sret, grounding the simulation in prior experience while still allowing the creation of novel, coherent UI states. The key distinction from retrieval-free simulation is the incorporation of the retrieved state sret into the simulation process. 5 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training In practice, we employ a hybrid retrieval pipeline over D to retrieve the transition that is most semantically similar to the current trajectory simulation. The retrieval process proceeds in three stages: First, a coarse filtering step is performed using BM25 ranking, where the action history serves as the query, to retrieve the transitions with very similar action histories. Next, we use current action histories as the query again to further narrow down the more relevant transitions from the transitions stored in D by utilizing GPT-4o as the semantic retriever, which captures deeper semantic similarities with the current action history queries. Finally, we construct a composite retrieval key that incorporates both the current state and action history, and apply BM25 again to select the most relevant transition. Despite the small size of D, this hybrid strategy still improves the consistency and realism of the generated UI states. 4. Scalable Synthetic Training Trajectory Collection in Simulated World In this section, we detail how LLMs are used to autonomously and continuously explore the digital world simulated by the LLM-based world model, generating high-quality training trajectories through guided rollouts and a final trajectory wrapper. 4.1. Overview and Formulation We formulate our data collection process in two stages. The first stage is an instruction-free rollout process, where a teacher agent interacts with the LLM-based digital world simulator to generate synthetic trajectories without conditions on any predefined user instruction G. This goal-free setup allows more flexible and executable trajectory synthesis unconstrained by specific task types. At each timestep t, given the current environment state st, observation ot, and prior action history Ht = [a0, a1, . . . , at-1], the teacher agent MTeacher samples the next action as at ∼MTeacher(ot, Ht). The environment then transitions to the next state st+1 via the world simulator LLM MLLM prediction and deterministic static rules. The teacher continues the rollout until it determines that a semantically coherent task has been completed. In the end, we retrospectively derive a user task instruction G that summarizes the intent underlying the completed trajectory. Each collected trajectory is then represented as τ = [o0, a0, o1, a1, . . . , oT, aT], where T is the trajectory length. Scaling the collection methods across multiple environments and teacher rollouts yields a large, diverse training dataset for downstream UI agent policy learning. However, this procedure still leaves two critical questions: 1) How can we ensure both diversity and validity of the trajectories generated without explicit user instructions? 2) How can we generate plausible and goal-consistent user instructions G that accurately reflect the completed trajectories? To this end, we propose a step-wise guided rollout process and a final trajectory wrapper to address these challenges. 4.2. Step-Wise Guided Rollout Process We notice that without proper guidance for each rollout step, LLMs often exhibit biased behavior, leading to homogeneous tasks and trajectories. To mitigate the bias and increase the diversity and quality of our training set, we introduce a step-wise guided rollout process which proposes task controls to encourage exploration towards diverse yet reasonable directions. The pipeline involves the following steps (See more details in Appendix G): Step-Wise Task Control Proposal. At first we prompt the teacher agents to envision common daily tasks users might perform based on the initial state, regarded as first-step task control. Specifically, given an initial state s0, we prompt the MTeacher to propose a high-level task description, referred to as the mentioned task control, c0 = MTeacher(s0). For example, if s0 is the home page of a shopping website, some examples of c0 are "Search for a certain product" or "Navigate to my account page". When the actions related to first-step 6 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training control were finished, the second-step control is updated based on the current observation, and this process continues iteratively. In general, suppose the trajectory just reaches its t-th step, under the i-th control. We define a boolean function Done(ci) ∈{True, False} that indicates whether the current control ci has been completed, which is judged by MTeacher. The control update rule is given by: ci = MTeacher(st, ci-1 ), if Done(ci-1) = True. For example, after the Done function verified that the first-step control "Navigate to my account page" on the shopping website is completed, the proposed next-step control on the new My Account page in the shopping domain, could be generated as "Check out my order history", "Modify my address information", etc. This iterative goal proposal enables complex tasks to be compiled based on semantically meaningful sub-goals. Thought & Action Generation. Under the current step's task control ci and prior rollout history Ht, the teacher agent MTeacher is also prompted to produce its internal reasoning thought rt, along with an action at and the corresponding step summary ht in a CoT manner. Each thought provides a justification or plan for why the action is appropriate, and is recorded in the rollout history along with the step summary and action, rt, at, ht ∼MTeacher(ot, ci, Ht), Ht+1 = Ht ∪[rt; at; ht], where ; indicates concatenation. To avoid endless rollouts, we allow MTeacher to autonomously decide when to terminate the trajectory. That is, the agent will generate a STOP action if it considers the task as completed, based on the current state and task control. 4.3. Trajectory Wrapper The trajectory wrapper is designed to transform raw rollout trajectories into high-quality instances by inferring valid user instructions and reconstructing a coherent, step-by-step reasoning process. Since the rollout process is initially not guided by an explicit user instruction, in our trajectory wrapping process, we first use a task summarizer to condense the agent's actions into a concise description of what was accomplished and then further convert it as the final user instruction for the entire trajectory, denoting as G. To align the trajectory's reasoning with this synthesized instruction, we then ask the teacher agent MTeacher to rewrite and refine their thoughts, ensuring they are well-conditioned on the newly generated instruction and reflect the agent's internal decision-making. Besides, reasoning ability is often a critical component in agent tasks, e.g., tasks like "Tell me the most recent canceled order in the order history." To support this capability, we allow MTeacher to insert intermediate reasoning thoughts when it deems such reasoning necessary or beneficial for conducting information query or analysis in the current UI state. In the end, we filter out low-quality trajectories based on the criteria: 1) actions must target valid elements and lead to meaningful state changes; and 2) the action mentioned in each step's reasoning should match the action actually taken. 5. UI-Simulator-Grow: UI-Simulator-Powered Targeted Scaling In §4, we introduced UI-Simulator for synthesizing diverse training trajectories. Rather than blindly increasing trajectory volume, we also explore how to strategically and efficiently scale to accelerate agent improvement. We propose UI-Simulator-Grow, a UI-Simulator-empowered targeted scaling paradigm that achieves faster gains with fewer synthesized trajectories. UI-Simulator-Grow iteratively identifies which tasks, if synthesized at the current stage, would most effectively enhance agent performance, and generates diverse trajectories for those tasks and their variants in preparation for the next training phase. In the first iteration, UI-Simulator-Grow collects an initial batch of trajectories following the procedure in §4 as UI-Simulator. In subsequent iterations, it automatically 7 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training selects target tasks based on dynamically updated validation signals, synthesizes relevant trajectories, and applies continual learning to ensure steady performance gains. We introduce this in detail as follows. Target Task Selection. Target tasks for the next training iteration must satisfy the criteria: The tasks must be neither trivial for the current agent to solve, nor beyond the agent's present capabilities. Tasks the agent is already good at will offer limited learning signal, while tasks that are too hard may not lead to meaningful progress. We identify such target tasks by measuring the teacher-forcing loss of a teacher agent MTeacher on the current validation set. Specifically, for each step, we treat the teacher's prediction as ground truth, compute the cross-entropy loss against the student agent's prediction, and average the loss over all steps for the tasks. Tasks are then ranked by loss, and those within the 25%-75% percentile range are selected as targets to filter excessively easy or hard tasks. See more cases in Appendix 7.3. As the agent improves after each iteration, the validation set is also updated to reflect the agent's evolving capabilities. In the first iteration, it is an independent batch of tasks synthesized in the same way as the initial training set. For later iterations, the validation set is composed entirely of a split from the newly synthesized data for the upcoming iteration. It aims to promote continual improvement and prevent future iteration evaluation from overfitting to earlier iterations. Synthesizing Diverse Target Task Trajectory Variants. After identifying target tasks that can effectively challenge the agent, we synthesize additional trajectories focused on those tasks. One strategy we adopted is lightweight task rewriting, where the task instruction is slightly modified without changing its core structure or logic. The corresponding environment states, thoughts, and actions are adjusted accordingly, while preserving the overall reasoning flow. For example, a selected task like "search running shoes" might be rewritten as "search slippers". Since the task logic remains consistent, the UI states and actions (e.g., entering a query, clicking a result) are similarly structured. We prompt the LLM to maintain the task's action types and flow, modifying only content-specific elements in the UI states such as product names. This ensures meaningful variation while preserving alignment with the agent's learning objectives. Continual Learning. As UI-Simulator-Grow continuously incorporates new synthesized training trajectories through iterations, a key challenge is adapting the agent policies without forgetting. We address this with continual learning (Biesialska et al., 2020), focusing on replay methods, a widely used technique that revisits selected tasks from prior training stages. Following Dynosaur (Yin et al., 2023), we adopt a replay strategy that selects the most representative tasks from the previous iteration. Given N tasks from the prior iteration, we use Sentence Transformer (Reimers and Gurevych, 2019) based on RoBERTa-large (Liu et al., 2019) to encode task instructions into a matrix Ip ∈ RN×d, where d is the embedding dimension. We then compute cosine similarities: Spp = cos _sim(Ip, Ip) ∈ RN×N. Tasks with the top row sums in Spp are representative and selected for replay. 6. Experiments 6.1. Experimental Setup Evaluation Benchmarks. We evaluate the LLM agents trained with UI-Simulator on two benchmarks across web and mobile domains: WebArena, which contains 812 complex yet realistic web navigation tasks; and AndroidWorld, which consists of 116 challenging daily mobile usage tasks. We report the success rate (SR) across all tasks. The temperature for model inference is set to 0.6; preliminary experiments suggest that varying the temperature does not significantly affect performance. Note that for a fair comparison, 8 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Table 1: Overall success rate (SR) on the WebArena and AndroidWorld test sets. / to start a search' [401] link 'Create new...' [403] link 'Issues' [1100] generic '13 assigned issues' [404] link 'Merge requests' [1102] generic '8 merge requests' [406] link 'To-Do List' [1118] generic 'Todos count' [1489] StaticText '5' [407] link 'Help' [409] link 'Byte Blaze' [1152] img 'Byte Blaze' [7] main [302] StaticText 'Projects' [305] link 'New project' [411] link 'Yours 14' [412] link 'Starred 3' [413] link 'Explore' [414] link 'Topics' [312] searchbox 'Filter by name' [374] button 'Name' [458] link 'All' [6320] RootWebArea 'Search results for "Byte Blaze" · GitLab' focused: True [9128] link 'Projects 7' [5946] link 'Milestones 4' [15074] link 'Users 15' [2986] link 'O' [4708] heading 'Open Source Diversity / opensourcediversity.org' [12492] link 'Open Source Diversity / opensourcediversity.org' [18470] generic 'Public - The project can be accessed without any authentication.' [2955] image [12092] generic 'blossom' [5288] StaticText '🌼' [9996] StaticText ' Code of ' [15143] link 'https://opensourcediversity.org' [5065] link '28' [845] image [5529] link '0' [4353] heading 'Checkstyle / checkstyle' [18893] link 'Checkstyle / checkstyle' [901] generic 'Public [17524] image Type [1507][Byte Blaze] Figure 8: A case of failed simulation where UI-Simulator-R overly depends on the reference state to generate the new page. 23 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are asked to modify a web task. You will be given a description of website and original task that correspond to the website. To be successful, it is very important to follow the following rules: ##Find the name of website and figure out what kind of website is given. ##Generate unique, diverse entity names and add to original task to make the task more specific. For example, a specific entity name for shopping website is a product name, a specific entity name for map website is a specific place name. ##Generate 15 examples. You should try to think of what kinds of terms are commonly searched. ##The search term should NOT be too complex, JUST the name. For example, we don't want "the Eiffel Tower in Paris", but just "Effiel Tower". Examples: Website Description: {Website Description} Original task: Search for a certain product. ##Thought: I'll try to search for a certain product. The candidiates should be diverse. People often search foods, clothes, fruits, electric devices, toys, detergents, trip utilities on the shopping website. ##Tasks: • Foods 1. Search for OREO milk cookies. 2. Search for Crisco Oil 48 Oz. 3. Search for L'Oreal Paris Revitalift Anti-Aging Cream. 4. ... • Fruits 1. Search for Driscoll's Strawberries. 2. ... • ... {Example #2} {Example #3} Table 10: System prompt for Diverse Entity Specification. {Website Description} is a short description of the current website for in-context example. 24 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a Web task creation AI. Assume you are browsing on a website with some or no guidance. Based on the task control, the current webpage, and the browsing history, your task is to analyze the current webpage and browsing history, and continue browsing according to the task control and previous steps by giving the action on the current webpage. Here are some requirements for output: • You need to incorporate the following details in the Thought: - the task control (if no task control is given, just skip it), - what you learned from previous steps, - what the current webpage is like (it is best to use some elements within the webpage as evidence), - and which action to take next (either guided or not). • The Action should strictly follow the action format provided in the action space. • You also need to generate a single sentence abstract to summarize what this action does. Note that Thought, Action and Task should not exceed one line. The summary should NOT mention the content of task control (e.g. 'Create a new project or issue' shouldn't appear in the Task in the following example), just focus on what the action does. Available actions: {Input Action List} Example Input: Task Control: {Input Task Control} Current state: {Input Current State} Previous steps: {Input Previous Steps} Example Output: Thought: Let's think step by step. The task control is 'Create a new project or issue.'. From the previous steps, I clicked the 'New Project' button to step into the project creation page. The current webpage contains elements like "Enter your project name here" textbox with id 10 and "visibility_level" selection with id 12, which means currently I'm at the project creation page, and it has many information entries to fill in. To continue creating the project, I shall fill in the required information. I can first type a name for the new project, and the corresponding input box id is 10. 'Web platform' is a good name. I can set the third parameter to 1 to press enter to submit. Action: type [10] [Web platform] [1] Task: Type "Web platform" as the name for new project, and press enter to submit it. Table 11: System prompt for Thought & Action Generation. {Input Action List} is available action space. {Input Task Control}, {Input Current State}, and {Input Previous Steps} are Current Step Task Control, State, and Step History for in-context example. 25 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are asked to generate web task controls for the next step that are unrelated to visual adjustments. You will be provided: • Elements that are commonly used in current website. • Original task control • Previous steps you have taken. You need to follow the following rules: 1. AVOID task controls related to visual manipulation. 2. You have completed the Original task control in previous steps. You should make use of the given elements to propose reasonable new task controls to extend current trajectory. 3. Pay attention to "Newly appeared elements", elements that appear in the new state compared to the last step. You should propose as many task controls based on the interactions with newly appearing elements as possible. 4. The new task controls SHOULD be consistent with previous steps and can form a single complete task rather than multiple independent tasks. E.g., we don't want to search for one place and then explore related or nearby places. 5. The task controls should be diverse. We don't want task controls doing the same thing but on different entities. For example, we don't want to have both view the detail of A and then view the detail of B in the response. 6. Only give no more than task controls, just make sure you are proposing the most common ones. 7. You should strictly follow the output format in the example. 8. Note that you cannot interact with a StaticText or generic. Interacting with it wouldn't cause any effect. So it is best not propose task controls related to them. Also we want the tasks can be done on the computers. Example: Original task control: Search for 'PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro' Previous steps: ["Search 'PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro' "] ##Thoughts: Let's think step by step. The original task control is to search 'PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro', and I have completed the original task control. The current webpage shows the search results related to the 6S Wireless Noise Canceling Hi-Fi Headphones. I need to take the next step that is consistent with the first step searching. The newly appeared elements include the detail links of the product I want, together with functionalities like "Add to Cart", "Add to Wish List", "Add Your Review", "Add to Compare", etc. I can check the most relevant search result for its details. I can also choose to interact with the product via functionalities like adding it to the cart, wishlist, etc. ##Task Controls: 1. View more details about 'PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro' 2. Add 'PHILIPS H6509 Wireless Headphones, Over-Ear Bluetooth Headphones with Noise Canceling Pro' to cart 3. ... Table 12: System prompt for Step-Wise Task Control Proposal in later steps. 26 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a Web task creation AI. Assume you have step history, and try to summarize a high-level intent. The step history is in the format of "action: step summary". You should pay special attention to the content within action (e.g. the content that is typed in), and the summarized task should reflect such content. Here are some requirements for summarization: • The high-level intent should faithfully follow the action sequence. • The high-level intent should be succinct and consistent with each single step. • Ignore unnecessary contexts and intermediate steps. The Task should only include the high-level goal of the trajectory. Example: Input: Previous steps: click[7809]: Access the help section for GitLab. click[482]: View the 'Contact Support' page to get help with GitLab issues. type[361][Bug: cannot edit account detail][1]: Enter a subject for your support request. type[5882][Account Issue][1]: Enter "Account Issue" in the subject field for the support request. click[3605]: Select the type of support needed as "Technical Support". Thought: From steps history, the first two steps open 'Contact Support' for GitLab. The following two steps enter "account issue" for further detail. Last step select "Technical Support" as support type. Output: Task: Submit a Technical support request to GitLab regarding account issue. {Example #2} Table 13: System prompt for Task Summarization in trajectory wrapping process. 27 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a web behavior analyst. Assume you have collected a series of thoughts on actions taken during navigation on the web. The thoughts are not really purposeful, or not focusing on the given goal. Your task is to rewrite each thought to make them fit the given task goal. You should follow the rules when rephrasing thoughts: 1. The rewritten thought should consider what the current page is about and mention the goal, analyze what should be done now to complete the goal, and explain why this action is appropriate in the current step. 2. You should strictly keep the entity, the analysis on the webpage and previous steps, and the action in the original thought unchanged - just adapt the wording to make it reasonable for the goal. Ignore task control-related sentences in the original thoughts, and focus only on how to complete the given goal. 3. Please guarantee that each action appears in its corresponding rewritten thought. Example: Original thoughts: • Thought 1: According to the task control, I need to view account information. The current webpage displays my account details, including "Contact Information", "Default Billing Address", and "Default Shipping Address". The "Account Information" text is prominently featured, and there is an "Edit" button linked to my account information. To proceed, I will click on the "Account Information" link to view the complete details of my account. In summary, the next action I will perform is click [3706]. • Thought 2: Moving forward as per the task control "Access the 'Payment Methods" section to add a new credit card for future purchases." From the previous step, I accessed "Account Information". Currently, there are several buttons in the account management section, including one labeled "Payment Methods" (id 1523), which I need to click to reach the section for adding a new credit card. In summary, the next action I will perform is click [1523]. • Thought 3: ... Goal: Add a new credit card with the number "4111111111111111" and expiration date "12/25" to the account information. Actions: click [3706], click [1523], ... Rewritten thoughts: • Thought 1: Let's think step-by-step. The current webpage displays my account details, including "Contact Information", "Default Billing Address", and "Default Shipping Address". The "Account Information" text is prominently featured, and there is an "Edit" button linked to my account information. This is the first step. In order to add a new credit card with the number "4111111111111111" and expiration date "12/25" to the account information, I will click on the "Account Information" link to view the complete details of my account. In summary, the next action I will perform is click [3706]. • Thought 2: Let's think step-by-step. From the previous step, I accessed "Account Information". Currently, there are several buttons in the account management section, including one labeled "Payment Methods" (id 1523), which I need to click to reach the section for adding a new credit card. In summary, the next action I will perform is click [1523]. • Thought 3: ... Table 14: System prompt for Thought Rewriting in in trajectory wrapping process. 28 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a teacher who is writing quiz to test your students on the reading and understanding of webpage. The webpage should be either: • demonstrating detailed information for one object, or • containing information for multiple objects, and some of them are different and comparable. First, analyze the webpage. If you think the webpage is demonstrating detailed information for one object, put "Yes" in "Answer". Otherwise put "No". Second, you need to find relevant information that is useful for making quiz from the current webpage, such as specific numbers, ranking, entity names. If you think the webpage is wrapping information for multiple objects in a table, the questions you ask should be concentrating on the comparison of the information, or features that are in common. Note: You need to specify the full name of entity in each question. Don't use terms like "this page", "this item". Don't ask questions about layout, e.g., buttons, textbox, or details that most people don't care, e.g. the contributor, the url of the site, etc. Don't always use interrogative sentences like "what" or "which." Instead, try using declarative sentences like "tell me ..." or "show me ...." Examples: Webpage: [1] RootWebArea 'Carnegie Mellon University | OpenStreetMap' focused: True [14] heading 'OpenStreetMap logo OpenStreetMap' ...(omit for brevity) [627] StaticText 'University ' [630] link 'Carnegie Mellon University, Pittsburgh, Allegheny County, 15213, United States' [624] link 'More results' ...(omit for brevity) Thought: Let's think step by step. The current webpage is a search result of 'Carnegie Mellon University' on OpenStreetMap website. The searched result only contain detailed information about CMU, which means it is not applicable to compare with other thing. Thus the Answer should be "Yes", and I shall ask questions that are about details of CMU. There are details like the address of CMU, the zip code. I'd like to ask questions about the details. Answer: Yes Questions: • Show me the address of Carnegie Mellon University. • What is the zip code of CMU? {Example #2} Table 15: System prompt for Reasoning Task Generation in in trajectory wrapping process. 29 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Given the current UI representation, the current action, the web browsing history, and the potentially relevant element: First, think about what current window is like, and try to interpret the action. Second, describe what the new window will be like after executing the action in a sentence. It is best to specify the roles of terms used in the description. Third, extract key information that should be kept in mind, like price of bought product, user profile, etc. Feel free to put "None" here if you think the action won't cause any long-range influence. Lastly, answer whether such action would lead to a totally new webpage. Note: You've always logged in to the website. You don't need to consider the logging process when generating the new window. The intent should be detailed enough to describe what a whole webpage looks like. Example: Thought: Let's think step by step. Currently I'm at search results page of Google. The Google search results are general. The 'click' action is targeted on a hyperlink to 'Alan's podcast channel'. Since we have set up the age verification and pass the age limit, this action redirects the user to its homepage. New window: The new window is Alan's live podcast home page. Key Info: None Answer: Yes {Example #2} {Example #3} {Example #4} Table 16: System prompt for Next State Overview Prediction. 30 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Imagine you are a website designer. Given some previous information, the interpretation of action on last step, and description of a new website, first extract what the new website (and the domain) is, then answer the question: What sections should the webpage have, and list all of them and their functionalities, and compose elements that appear in each section in detail. You should ONLY generate informative elements that are relevant with the description. Informative elements refer only to the main elements that contain specific, instantiated information of the webpage. For example, a paragraph with texts introducing the term "California", the link to Youtube, etc. Pay attention to the input, which contain information that must be included in current page. Also remember you are creating content within a certain domain. Elements like "Copyright: Alhambra Palace" will never appear in a Google map webpage. For a bunch of similar, important elements to generate (e.g. search results on a search results page), the number of such element should be at least 6. You should generate content based on the domain, and information that reasonably appear in the domain. E.g., we should have prices information in a shopping website. Sometimes you can have section like "Other related terms", but that should never be the main content. Note that if you think an element can be interacted with, specify that it should be a link element. E.g., you don't have to add another element "View details" or "Website" for a search item, because when clicking on the search term, we might jump to the detail of the item. Just merge their functionalities and tag the element as a link. Example: Previous Info: {Previous Key Info} Description: The new window displays the discussion thread page on Reddit. The title of the post is "What do you think of the new European Cup champion". User could read and interact with the thread. Thought: Thread interaction section A "Comment" button for users to engage directly with the post A "Share" button for users to share the discussion thread An "Upvote" button and a "Downvote" button for users to express their opinions on the post Title section Title of the post: What do you think of the new European Cup champion Spain? Body section Post content: "Spain has triumphed in the ..." Link of Post number: #10003902 Upvote: 569 ... A comment section where users can share their thoughts, including: Comment: "I think Spain played incredibly well! Their teamwork was on another level!" Link of commenter: Alice; upvote: 5; downvote: 0; 12h ago; upvote/downvote button Comment: ... Table 17: System prompt for Generating Rich Draft in Natural Language. {Previous Key Info} contains some key information recorded in previous steps to boost the coherence of the content. 31 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Given the following reference action sequences and the current action sequence, your task is to find the ones from the reference sequences that are doing almost the same thing as current action sequence. If there's no sequence in reference that does the same thing as current action sequence, then you should pay more attention to the ending steps, and choose ones that are doing the same thing in the latest steps. If the functionality of the current action history is not quite clear, just finding ones that exactly match the latest steps. Note: You need to consider the meaning behind actions like "click", "type". E.g., click button 'Search' means doing the search on the typed term. We DONT want the output sequences to have the future steps of the current action history. You should find sequences that just stopping at the same point as the current actions. You should strictly follow the output format. Example: 1. type textbox 'Search' 2. click button 'Search' 3. click link 'Dell G7 Laptop' 4. click 'Add to cart' 1. type textbox 'Search' 2. ... Current action sequence: 1. type textbox 'Search' 2. click button 'Search' 3. click link 'OREO milk cookies' 4. click 'Add to cart' ##Thoughts: Let's think step by step. The current action sequence does searching at first, then clicks 'OREO milk cookies' to view its details, and adds it to the cart. I should output action sequences that are doing the same thing at the ending steps. ##Output: 1. type textbox 'Search' 2. click button 'Search' 3. click link 'Dell G7 Laptop' 4. click 'Add to cart' 1. type textbox 'Search' press enter 2. ... {Example #2} Table 18: System prompt for model-based Semantic Retriever based on action history. 32 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Given the following description of a GUI and the refernce GUI, create the content of a new state by taking elements and rewriting some important contents within the reference state. Note that the description of content might be short, but you should provide corresponding essential information in the reference GUI. So you should pay attention to each element in the reference GUI. If you think the reference GUI doesn't match the descrition, then you should generate a lot of new contents that match the description, but within the same structure. If you think the reference GUI matches the descrition, you could copy most of the content from the reference state, and only modify a few important contents if needed. Do not try to add additional details or information in this case. • You are only allowed to modify information related to some named entities. You are not allowed to add/remove the functionality elements in the reference state if you think it matches the description. If no named entities are in the reference state, you can make no change. Example input: Reference state: {Reference State} Description of new state: The new website is a product page on Onestopshop ... Example output: {New Content} Table 19: System prompt for Retrieval-Augmented Draft Generation . {Reference State} refers to the state retrieved for generation. {New Content} contains a list of realistic elements inherited from the reference state and newly composed content in the current context 33 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training You are a Web task creation AI. Given the a11y tree of a website, generate a common task user perform on this web, and new browsing history adapted from the given browsing history. The reference task and browsing history is based on another webpage that has the similar functionalities but with different object. So you should propose task based on content from currently given entities and content. To be successful, it is very important to follow the following rules: 1. Suppose that you've already logged in the website, and you don't want to sign out. 2. The new task should strictly have the same task type as the reference. Don't change the task type 3. Don't change the procedure in the browsing history. Just change the entity names of objects (products, items), not functionalities. If the reference browsing history is "None", then put "None" in the new browsing history. Don't imagine steps that are doing different things from reference browsing history. Example: Current webpage: {Input State} Reference task: Add 'Dell G7 Gaming Laptop - 256GB' to the cart. Reference browsing history: 1. Search for "Dell G7 Gaming Laptop" 2. Click 'Dell G7 Gaming Laptop - 256GB' to view its details. Task: Add 'Milkman Bonus Bundle - 10 Packets Low-Fat Milk + 2 Packets Chocolate Milk with 18g Protein' to the cart. New browsing history: 1. Search for "Milkman Bonus Bundle" 2. Click 'Milkman Bonus Bundle - 10 Packets Low-Fat Milk + 2 Packets Chocolate Milk with 18g Protein' to view its details. Table 20: System prompt for Targeted Task Variant Synthesis. {Input State} contains details of a product ('Milkman Bonus Bundle' in this case) 34 LLMs as Scalable, General-Purpose Simulators For Evolving Digital Agent Training Assume you are a Android mobile phone. Based on the task control, the current UI page, and the action history, your task is to analyze the current page and history, and continue browsing according to the task control and previous steps by giving the action on the current page. Here are some requirements for output: • You need to incorporate the following details in the Thought: - the task control, - what you learned from previous steps, - what the current page is like (it is best to use some elements within the page as evidence), - and which action to take next. • The Action should strictly follow the action format provided in the action space. • You also need to generate a single sentence abstract to summarize what this action does. Note that Thought, Action and Task should not exceed one line. The summary should NOT mention the content of task control (e.g. 'Create a new project or issue' shouldn't appear in the Task in the following example), just focus on what the action does. Available actions: {Input Action List} Example Input: Task Control: {Input Task Control} Current state: Element 0: UIElement(text=None, content_description=Create contact, class_name=android.view.View, bbox=None, bbox_pixels=BoundingBox(x_min=0, x_max=1080, y_min=0, y_max=2400), hint_text=None, is_checked=False, is_checkable=False, is_clickable=False, is_editable=False, is_enabled=True, is_focused=False, is_focusable=False, is_long_clickable=False, is_scrollable=False, is_selected=False, is_visible=True, package_name=com.google.android.contacts, resource_name=com.google.android.contacts:id/background_container, tooltip=None, resource_id=None, metadata=None)) Element 1: UIElement(text=None, content_description=None, class_name=...) Element 2: UIElement(text=First Name, content_description=James, class_name=...) Element 3: UIElement(text=Last Name, content_description=Brown, class_name=...) Element 4: UIElement(text=Phone Number, content_description=None, class_name=...) Element 5: UIElement(text=Email Address, content_description=None, class_name=...) Element 6: UIElement(text=Save, content_description=None, class_name=...) Element 7: UIElement(text=Cancel, content_description=None, class_name=...) ... Previous steps: {Input Previous Steps} Example Output: Thought: Let's think step by step. The guide is 'Create a new contact for "James Brown". From previous steps, I opened the 'Contacts' app, started the creation process and typed the first and last name. The current page shows that I've successfully typed the First and last name, and I also need to fill in details like phone number, email address. Since the guide doesn't provide the phone number, I should give a realistic phone number here, like "718-099-5256". To continue creating the contact, I shall type "718-099-5256" to the Phone number. Action: input_text [4][718-099-5256] Task: Type "718-099-5256" as the phone number. Table 21: System prompt for Thought & Action Generation for AndroidWorld trajectory collection. {Input Action List} is the action space. {Input Task Control}, {Input Current State}, and {Input Previous Steps} are Current Step Task Control, State, and Step History for in-context example. 35
2510.14977
Preprint TERRA: EXPLORABLE NATIVE 3D WORLD MODEL WITH POINT LATENTS Yuanhui Huang1 Weiliang Chen1 Wenzhao Zheng1 Xin Tao2 Pengfei Wan2 Jie Zhou1 Jiwen Lu1 1Tsinghua University 2Kuaishou Technology 1 2 4 3 5 6 5 RGB Point Cloud Gaussian Primitives P2G Encoder Generative World Modeling P2G Decoder Point Latents 4 3 2 1 6 Figure 1: Method overview. Unlike conventional world models with pixel-aligned representations, we propose Terra as a native 3D world model that describes and generates 3D environments with point latents. Starting with a glimpse of the environment, Terra progressively explores the unknown regions to produce a coherent and complete world simulation. ABSTRACT World models have garnered increasing attention for comprehensive modeling of the real world. However, most existing methods still rely on pixel-aligned rep- resentations as the basis for world evolution, neglecting the inherent 3D nature of the physical world. This could undermine the 3D consistency and diminish the modeling efficiency of world models. In this paper, we present Terra, a na- tive 3D world model that represents and generates explorable environments in an intrinsic 3D latent space. Specifically, we propose a novel point-to-Gaussian variational autoencoder (P2G-VAE) that encodes 3D inputs into a latent point rep- resentation, which is subsequently decoded as 3D Gaussian primitives to jointly model geometry and appearance. We then introduce a sparse point flow matching network (SPFlow) for generating the latent point representation, which simulta- neously denoises the positions and features of the point latents. Our Terra enables exact multi-view consistency with native 3D representation and architecture, and supports flexible rendering from any viewpoint with only a single generation pro- cess. Furthermore, Terra achieves explorable world modeling through progressive generation in the point latent space. We conduct extensive experiments on the challenging indoor scenes from ScanNet v2. Terra achieves state-of-the-art per- formance in both reconstruction and generation with high 3D consistency. 1 INTRODUCTION World models have emerged as a promising research direction, with the aim of understanding and simulating the underlying mechanics of the physical world (Ha & Schmidhuber, 2018). Unlike 1 arXiv:2510.14977v1 [cs.CV] 16 Oct 2025 Preprint Large Language Models (LLMs), which are confined to textual processing (Vaswani et al., 2017; Brown et al., 2020), world models integrate multimodal visual data to construct a comprehensive and internal representation of the environment (Ha & Schmidhuber, 2018). From learning the evolution of the real world, world models enable various downstream applications, including perception (Min et al., 2024; Lai et al., 2025), prediction (Zheng et al., 2024a; Xiang et al., 2024; Team et al., 2025; Agarwal et al., 2025), reasoning (Bruce et al., 2024; Assran et al., 2023; Huang et al., 2024), and planning (Ren et al., 2025; Assran et al., 2025; Zheng et al., 2024b). Scene representation is fundamental to world models (Wang et al., 2024c; Team et al., 2025; Zheng et al., 2024a), forming the basis for world evolution. Conventional methods typically rely on 2D im- age or video representations, simulating world dynamics through video prediction (Agarwal et al., 2025; Bruce et al., 2024; Xiang et al., 2024; Assran et al., 2025). However, the generated videos often lack consistency across frames (Wang et al., 2024c; Huang et al., 2024; Zheng et al., 2024b), as the models do not consider explicit 3D priors and instead learn only implicit 3D cues from the training videos. To address this limitation, a line of work simultaneously predicts RGB images and depth maps to construct a pixel-aligned 2.5D representation (Team et al., 2025; Yang et al., 2025a; Lu et al., 2025). While they integrate geometric constraints into the generation process, learning the multi-view pixel correspondence remains challenging due to the ambiguity of relative camera poses. The physical world is inherently three-dimensional, including objects and their interactions. However, the rendering process only produces a partial 2D observation of the underlying 3D envi- ronment, inevitably losing crucial depth and pose information (Mildenhall et al., 2021; Kerbl et al., 2023). This poses critical challenges on the multi-view consistency of world models based on pixel- aligned representations (Lu et al., 2025; Team et al., 2025; Yang et al., 2025a; Wang et al., 2024c). To address this, we present Terra, a native 3D world model that describes and generates explorable environments with an intrinsic 3D representation, as shown in Figure 1. At its core, we learn a native point latent space that employs spatially sparse but semantically compact point latents as the basis for reconstruction and generation. Accordingly, Terra completely discards pixel-aligned designs and directly learns the distribution of 3D scenes in its most natural form, achieving 3D con- sistency without bells and whistles. To elaborate, we propose a novel point-to-Gaussian variational autoencoder (P2G-VAE) that converts 3D input into the latent point representation. The asymmetric decoder subsequently maps these point latents to rendering-compatible 3D Gaussian primitives to jointly model geometry and appearance. The P2G-VAE effectively reduces the redundancy in the input 3D data and derives a compact latent space suitable for generative modeling. Furthermore, we propose a sparse point flow matching model (SPFlow) to learn the transport trajectory from the noise distribution to the target point distribution. The SPFlow simultaneously denoises the positions and features of the point latents to leverage the complementary nature of geometric and textural attributes to foster their mutual enhancement. Based on P2G-VAE and SPFlow, we formulate the explorable world model as an outpainting task in the point latent space, which we approach through progressive training with three stages: reconstruction, unconditional generative pretrain, and masked conditional generation. We conduct extensive experiments on the challenging indoor scenes from ScanNet v2 (Dai et al., 2017). Our Terra achieves state-of-the-art performance in both reconstruction and generation with high 3D consistency and efficiency. 2 RELATED WORK 2D world models. Early attempts in world models focus on image or video representations, thanks to the exceptional performance of 2D diffusion models (Ho et al., 2020; Song et al., 2020; Rombach et al., 2022; Blattmann et al., 2023; Peebles & Xie, 2023). DriveDreamer (Wang et al., 2024c) and Sora (OpenAI, 2024) represent pioneering image-based and video-based world models, respectively, both leveraging diffusion models to achieve view-consistent and temporally coherent world model- ing. Subsequent research efforts focus primarily on enhancing the temporal consistency (Henschel et al., 2025; Huang et al., 2024; Yin et al., 2023), spatial coherence (Yu et al., 2024b; Wu et al., 2025; Chen et al., 2025a), physical plausibility (Assran et al., 2023; Agarwal et al., 2025; Assran et al., 2025), and interactivity (Xiang et al., 2024; He et al., 2025; Wang et al., 2025b) of generated videos. Several studies also explore integrating the language modality with conventional methods to train multimodal world models (Zheng et al., 2024b; Kondratyuk et al., 2023). Recently, Genie-3 (Bruce et al., 2024) has emerged as one of the most successful video world models, which enables excellent photorealism, flexible interaction, and real-time generation. Despite the promising advancements, 2 Preprint 2D world models learn the evolution of the real world solely from image or video data, overlooking the inherent 3D nature of the physical environments. This lack of sufficient 3D priors often results in failures to maintain 3D consistency in the generated outputs. Moreover, 2D world models require multiple generation passes to produce results with different viewing trajectories. 2.5D world models. To incorporate explicit 3D clues into world models, and also leverage the generative prior from 2D diffusion networks, a line of work (Hu et al., 2025; Gu et al., 2025; Chen et al., 2025b; Yang et al., 2025b; Huang et al., 2025; Yu et al., 2025) proposes to jointly predict depth and RGB images as a pixel-aligned 2.5D representation. ViewCrafter (Yu et al., 2024b) employs an off-the-shelf visual geometry estimator (Wang et al., 2024a) to perform depth and pose prediction, which is then used in novel view reprojection. Prometheus (Yang et al., 2025a) trains a dual-modal diffusion network for joint generation of depth and RGB images conditioned on camera poses. Furthermore, several work (Team et al., 2025; Lu et al., 2025) converts camera poses to 2D Pl¨ucker coordinates, in order to consider camera poses, depth and RGB images in a unified framework. In general, these methods try to learn the joint distribution of depth, poses and texture to improve 3D consistency. However, these factors are deeply coupled with each other by the delicate perspective transformation, which is often challenging for neural networks to learn in an implicit data-driven manner. We propose a native 3D world model that represents and generates explorable environments with a native 3D latent space, and guarantees multi-view consistency with 3D-to-2D rasterization. Native 3D generative models. Most relevant to our work are native 3D generative models that also employ 3D representations. Pioneering work in this field focuses on point cloud generation. Luo & Hu (2021) proposes the first diffusion probabilistic model for 3D point cloud generation. Vahdat et al. (2022) later extend this paradigm to support latent point diffusion, followed by advancements in architecture (Ren et al., 2024b), frequency analysis (Zhou et al., 2024) and flow matching (Vogel et al., 2024; Hui et al., 2025). However, these methods are confined to object or shape level gen- eration and are unable to synthesize textured results, which greatly restricts their application. To integrate texture, Lan et al. (2025) and Xiang et al. (2025) adopt Gaussian splatting as the 3D repre- sentation, but they are still limited to object generation. Zheng et al. (2024a) and Ren et al. (2024a) extend to scene-level 3D occupancy generation, but the occupancy is coarse in granularity and does not support rendering applications. In summary, existing methods are restricted to either object-level fine-grained or scene-level coarse geometry generation. In contrast, we construct the first native 3D world model with both large-scale and rendering-compatible 3D Gaussian generation. 3 PROPOSED APPROACH 3.1 LATENT POINT REPRESENTATION We present Terra as a native 3D world model that represents and generates explorable environments with an intrinsic 3D representation. Figure 2 outlines the overall pipeline. Formally, we formulate explorable world models as first generating an initial scene S0 and progressively expanding the known regions to produce a coherent and infinite world simulation S: S0 = g(∅, C0; θ), Si = g(Si−1, Ci; θ), Si = {S0, S1, ..., Si}, (1) where subscripts denote exploration steps, and g(Si−1, Ci; θ) represents the model with learnable parameters θ that generates the next-step exploration result Si based on the set of previously known regions Si−1 and the current conditional signal Ci. Conventional world models with pixel-aligned representations instantiate Si with colors R, depths D and poses T from different viewpoints: Si = [(R(n) i , D(n) i , T (n) i )|N n=1], (2) where N denotes the number of views in a single generation step and the superscript (n) is the view index. On the other hand, multi-view consistency for Lambert’s model can be formulated as: R(n)|x(n) = R(m)|x(m), d(n)x(n) = T (n)x, n, m = 1, 2, ..., N, (3) where x, x(n), d(n), R(n)|x(n) denote the 3D coordinates of a visible point, the image coordinates of x in the n-th view, the depth of x in the n-th view, and sampling R(n) at x(n), respectively. Eq. (3) requires that different pixels on separate views should share the same color if they are the projections of the same visible 3D point. Therefore, the ideal representation for conventional world models 3 Preprint 2D-to-3D Inverse Projection Point Latents 3D Gaussian Primitives VAE 𝓔 VAE 𝓓 Rendering Crop Scene 𝓔 Distance-Aware Trajectory Smoother SPFlow 3D Sparse Conv. Position-Feature Joint Denoising Point Attention Adaptive Refine & Upsample Condition Module Point Latents Supervision & Regularization Robust Position Perturb Terra 1 2 3 4 Figure 2: Overall pipeline. Terra consists of a point-to-Gaussian VAE and a sparse point flow matching model. The P2G-VAE effectively learns the transformation from input RGB point cloud to point latents, and then to 3D Gaussian primitives. The SPFlow learns the joint distribution of geometry and appearance. Both P2G-VAE and SPFlow adopt native sparse 3D architectures. should be the combination of Eq. (2) and (3), i.e. the multi-view colors, depths and poses satisfying the reprojection constraint. Unfortunately, it is often challenging for neural networks to learn this constraint in an implicit data-driven manner, leading to multi-view inconsistency. ViewCrafter (Yu et al., 2024b) bypasses this problem by taking smaller steps (N = 1) in every generation and explicitly projects previous contexts onto the novel view to enforce reprojection consistency. While effective, this approach significantly compromises the efficiency of exploration. Different from the pixel-aligned counterparts, we propose latent point representation P as a native 3D descriptor of the environment: Si = Pi ∈RMi×(3+D), where Mi and 3 + D denote the number of point latents for the i-th exploration step and the sum of the dimensions for 3D coordinates and features, respectively. The latent point representation is similar to the actual point cloud, located sparsely on the surface of objects, but limited in number and with semantically meaningful latent features. It also supports adapting Mi according to the complexity of different regions and inte- grating historical contexts by simply concatenating previous Pis. This design completely discards the view-dependent elements (Eq. (2)) and the reprojection constraint (Eq. (3)) from the exploration process and instead models the environment with 3D points x directly. These point latents can be transformed into 3D Gaussian primitives for rasterization, naturally satisfying 3D consistency and enabling flexible rendering from any viewpoint without rerunning the generation pipeline. 3.2 POINT-TO-GAUSSIAN VARIATIONAL AUTOENCODER We design the P2G-VAE to effectively generate the latent point representation from the input scene and decode it into 3D Gaussian primitives. We suppose the input scene is described by a colored point cloud Q ∈RB×6 to provide necessary 3D information, where B and 6 represent the number of points and the sum of dimensions for 3D coordinates and color, respectively. We build our P2G-VAE based on the point transformer architecture (Zhao et al., 2021) for efficiency. Apart from removing the residual connections in the original PTv3 (Wu et al., 2024a), we include the following novel designs for a robust latent space and effective Gaussian decoding, as shown in Figure 3. Robust position perturbation. In conventional VAEs (Kingma & Welling, 2014), it is common to regularize the latent features with a Kullback-Leibler Divergence loss LKL to align the feature distribution with a standard normal distribution. However, it is nontrivial to generalize this practice to unstructured point latents where 3D coordinates themselves contain crucial geometry information. Directly regularizing the coordinates to approximate Gaussian noise would have an adverse effect on the locality of point latents and the associated local structures. To this end, we propose a robust 4 Preprint Point Transformer Encoder Attn. Module Offset Pred. Pos. Res. Feat. Res. Adaptive Upsample Module Offset Pred. Pos. Res. Adaptive Refine Chamfer Distance Explicit Color Loss NN. Matching Learnable Queries Robust Position Perturbation Distance-aware Trajectory Smoothing LAP Matching Solver Crop Uniform Crop& Uniform Condition Noisy Latents Figure 3: Method details. LAP, Pos. Res., Feat. Res. and NN. denote linear assignment problem, position residual, feature residual and nearest neighbor, respectively. position perturbation technique which perturbs the coordinates of point latents with a predefined Gaussian noise n ∼N(0, σ2I3) where σ is a hyperparameter for noise intensity: P = [(p(m) ∈R3, f (m) ∈RD)|M m=1], p = ˆp + n, f ∼N(mean( ˆf), diag(var( ˆf))), (4) where we split point latents P into M position-feature pairs (p, f) and omit the exploration step for simplicity. The ˆp and ˆf denote the positions and features of points as input to the VAE bottleneck, and mean(·), var(·) are the functions to calculate the mean and variance of the latent features f. The robust position perturbation enhances the robustness of the VAE decoder against slight pertur- bations over the positions of point latents. Further, it greatly improves the generation quality since generated samples inevitably contain a certain level of noise, similar to our perturbation process. Adaptive upsampling and refinement. Given point latents after downsampling and perturbation, the VAE decoder should upsample them to an appropriate number and restore the dense structure. To achieve this, we introduce the adaptive upsampling and refinement modules. The adaptive upsam- pling module splits each point (p, f) into K child points (p(k), f (k))|K k=1 with K learnable queries q(k)|K k=1. These queries first interact with each point for contexts, and then each query predicts a relative displacement disp(·) and a residual feature resf(·) for the corresponding child point: ˆq(k)|K k=1 = ups(f, q(k)|K k=1), p(k) = p + disp(ˆq(k)), f (k) = f + resf(ˆq(k)), (5) where ups(·) denotes the point-query interaction module. This design enables controllable upsam- pling and avoids the complex mask-guided trimming operation in conventional methods (Ren et al., 2024a). Similar to the upsampling module, the adaptive refinement module further adjusts the point positions with offsets predicted from the point features: p′ = p + refine(f). These two modules progressively densify and refine the point positions, restoring a dense and meaningful structure. Comprehensive regularizations. To supervise the output Gaussian primitives, we employ the con- ventional rendering supervisions including L2, SSIM and LPIPS (Zhang et al., 2018) losses. In addition, we also incorporate other losses to improve the reconstructed geometry and regularize the properties of Gaussians for better visual quality. 1) We optimize the chamfer distances Lcham between the input point cloud and intermediate point clouds as the outputs of upsampling and re- finement modules, which provides explicit guidance for the prediction of position offsets. 2) We use the normal Lnorm and effective rank Lrank (Hyung et al., 2024) regularizations to regularize the rotation and scale properties of Gaussians. 3) We propose a novel explicit color supervision Lcolor, which directly aligns the color of each Gaussian with the color of the nearest point in the input point cloud. This loss bypasses the rasterization process and thus is more friendly for optimization. The overall loss function for our P2G-VAE can be formulated as: Lvae = Ll2 + λ1Lssim + λ2Llpips + λ3Lcham + λ4Lnorm + λ5Lrank + λ6Lcolor + λ7Lkl. (6) 3.3 NATIVE 3D GENERATIVE MODELING We use flow matching (Lipman et al., 2022) for generative modeling of the latent point representa- tion. Formally, we gradually add noise N ∼N(0, I) to both the positions and features of point 5 Preprint Table 1: Reconstruction performance. RGB PC. and Rep. Range represent colored point cloud and representation range of the output Gaussian, respectively. We select 20 random scenes from the validation set to reconstruct offline Gaussians as input to Can3Tok∗. Method Input Type Rep. Range PSNR↑ SSIM↑ LPIPS↓ Abs. Rel.↓ RMSE↓ δ1↑ PixelSplat RGB Partial 18.165 0.686 0.493 0.094 0.287 0.832 MVSplat RGB Partial 17.126 0.621 0.552 0.139 0.326 0.824 Prometheus RGBD Partial 17.279 0.644 0.448 0.087 0.251 0.901 Can3Tok∗ Gaussian Complete 19.578 0.733 0.514 0.031 0.151 0.973 Terra RGB PC. Complete 19.742 0.753 0.530 0.026 0.137 0.978 latents P ∈RM×(3+D) with a schedule t ∈[0, 1] in the diffusion process, and predict the velocity vector V ∈RM×(3+D) given noisy latents Pt in the reverse process: Pt = tP + (1 −t)N, V = F(Pt, t; ϕ), (7) where F(·, ·; ϕ) denotes a UNet (Peng et al., 2024) with learnable parameters ϕ based on 3D sparse convolution. The training objective can now be formulated as: Lflow = Et∼U[0,1],P ∼P,N∼N (0,I)||F(Pt, t; ϕ) −(P −N)||2, (8) where P denotes the ground truth distribution of point latents P . During inference, we start from sampled Gaussian noise and progressively approach clean point latents along the trajectory deter- mined by the predicted velocity vector. Note that we simultaneously diffuse the positions and fea- tures to learn the joint distribution of geometry and texture and facilitate their mutual enhancement. Distance-aware trajectory smoothing. Conventional flow matching applied to grid-based latents naturally matches noises and latents according to their grid indices. However, it would complicate the velocity field and the denoising trajectory if we simply match the point positions with noise samples based on their indices in the sequence (Hui et al., 2025). Intuitively, it is unreasonable to denoise a leftmost noise sample to a rightmost point. To address this, we propose a distance-aware trajectory smoothing technique that effectively straightens the transport trajectory and facilitates convergence for unstructured point flow matching. Since it is more reasonable to choose a closer noise sample as the diffusion target than a farther one, we optimize the matching M between point positions and noise samples to minimize the sum of distances between point-noise pairs: M∗= argminM M X m=1 ||p(m) −NMm,:3||2, M = reorder([1, 2, ..., M]), (9) where NMm,:3 denotes the position of the noise sample assigned to the m-th point latent. We apply the Jonker-Volgenant algorithm (Jonker & Volgenant, 1987) to efficienty solve Eq. (9). Simple conditioning mechanism. For an explorable model, we employ multi-stage training that consists of reconstruction, unconditional generative pretraining, and masked conditional generation. For masked conditions, we introduce three types of conditions to support different exploration styles: cropping, uniform sampling, and their combinations. We randomly crop a connected 3D region from the point latents as the cropping condition to unlock the ability to imagine and populate unknown regions. We uniformly sample some of the point latents across the scene as the uniform sampling condition to enable the model to refine known regions. We also use their combinations and first crop a connected 3D region and then apply uniform sampling inside it to simulate RGBD conditions. We concatenate the conditional point latents with the noisy ones and fix the condition across the diffusion process to inject conditional guidance even at the early denoising stage. 4 EXPERIMENTS 4.1 DATASETS AND METRICS We conduct extensive experiments on the challenging indoor scenes from the ScanNet v2 (Dai et al., 2017) dataset, which is widely adopted in embodied perception (Yu et al., 2024a; Wu et al., 2024b) and visual reconstruction (Wang et al., 2024a; 2025a). The dataset consists of 1513 scenes in total, covering diverse room types and layouts. Each scene is recorded by an RGBD video with semantic and pose annotations for each frame. We unproject the color and depth maps into 3D space using 6 Preprint Rendered RGB RGB GT Rendered Depth Depth GT Rendered Gaussian Input RGB PC. Figure 4: Visualization for reconstruction. Terra achieves photorealistic rendering quality for RGB and depth, and learns to complete the partial objects caused by the sensor failure in dark regions. Table 2: Generation Performance. CD and EMD denote Chamfer and earth mover’s distances, respectively. Terra achieves exceptional geometry generation quality compared with other methods. Method Repr. Unconditional Image Conditioned P-FID↓ P-KID(%)↓ FID↓ KID(%)↓ CD↓ EMD↓ FID↓ KID(%)↓ Prometheus RGBD 32.35 12.481 263.3 10.726 0.374 0.531 208.3 12.387 Trellis 3D Grid 19.62 7.658 361.4 23.748 0.405 0.589 314.9 24.713 Terra Point 8.79 1.745 307.2 18.919 0.217 0.474 262.4 20.283 the poses to produce colored point clouds as input to our P2G-VAE. In the generative training, we preprocess the point latents from the VAE encoder by randomly cropping a smaller rectangular region in the x-y plane and filtering out overly sparse and noisy samples. We follow Wang et al. (2024b) and split the dataset into 958 and 243 scenes for training and validation, respectively. We evaluate Terra on the reconstruction, unconditional, and image-conditioned generation tasks. For reconstruction, we compare Terra with three lines of methods: PixelSplat (Charatan et al., 2024) and MVSplat (Chen et al., 2024) with RGB input, Prometheus (Yang et al., 2025a) with RGBD input, and Can3Tok (Gao et al., 2025) with offline reconstructed Gaussians as input. We use PSNR, SSIM, and LPIPS metrics for visual quality, and Abs. Rel., RMSE, and δ1 metrics for depth accuracy. For generative tasks, we compare Terra with Prometheus using RGBD (Yang et al., 2025a) represen- tation, and Trellis (Xiang et al., 2025) using 3D grid representation. We retrain these baselines on ScanNet v2 using their official code for a fair comparison. For unconditional generation, we adopt point cloud FID (P-FID) and point cloud KID (P-KID) for geometry quality, and FID and KID for visual quality. Regarding image-conditioned generation, we adopt the Chamfer distance and earth mover’s distance for geometry quality, and FID and KID for visual quality. 4.2 IMPLEMENTATION DETAILS We construct the P2G-VAE based on PTv3 (Wu et al., 2024a), removing all residual connections and integrating the designs proposed in Section 3.2. We perform downsampling with stride 2 for 3 times in the encoder, reducing the number of points from 1 million to around 5000. In the decoder, we upsample the points also for 3 times with K = 7, 3, 3, respectively. We train the P2G-VAE for 36K iterations with an AdamW (Loshchilov & Hutter, 2017) optimizer. Regarding the SPFlow, we employ the OA-CNNs (Peng et al., 2024) as the UNet backbone. We crop a random region with a size of 2.4×2.4 m2 from a complete scene as the input sample. We train the SPFlow for 100K and 40K iterations for unconditional pretrain and conditional generation, respectively. 4.3 MAIN RESULTS Reconstruction. We report the results in Table 1. PixelSplat (Charatan et al., 2024) and MVS- plat (Chen et al., 2024) do not include 3D geometry information as input, and thus they might perform worse compared with others using depth or Gaussian input. Prometheus (Yang et al., 7 Preprint Ours Prometheus Trellis Figure 5: Visualization for unconditional generation. Only Terra is able to generate diverse and reasonable scenes while Prometheus and Trellis lack consistent geometry and texture, respectively. Ours Prometheus Trellis Condition Figure 6: Visualization for image conditioned generation. Both Terra and Prometheus are able to produce plausible images while the geometry consistency is far better than Prometheus. 2025a) achieves the best LPIPS because it is pretrained with a 2D diffusion model, which excels at image quality. Our Terra achieves the best results for all metrics except LPIPS, even better than Can3Tok (Gao et al., 2025) using offline reconstructed Gaussians, demonstrating the effectiveness of our P2G-VAE. Furthermore, Terra is able to reconstruct the whole scene in a single forward pass and also complete partial objects with incorrect depth measurement as shown by Figure 4. Unconditional generation. We report the results in Table 2. Our method achieves better P-FID and P-KID than Prometheus with 2.5D representation and Trellis (Xiang et al., 2025) with 3D grid representation, validating the superiority of point latents in modeling the geometry distribution. However, Terra performs worse compared to Prometheus in image quality metrics FID and KID, because Prometheus, with 2D diffusion pretrain, is able to synthesize plausible images even though the underlying 3D structures could be corrupted. We provide visualization results in Figure 5, where only Terra generates both reasonable and diverse 3D scenes while the results of other methods either lack accurate 3D structure or vivid textures. Image conditioned generation. We report the results in Table 2 and Figure 6. Given a conditional image, we first unproject it into 3D space with depth and intrinsics to produce a colored point cloud. Then we can formulate the image conditioned generation task as outpainting in the point 8 Preprint Step 1 Step 2 Step 3 Step 4 Step 5 Overview Figure 7: Visualization for explorable world model. Terra is able to generate both coherent and diverse room layouts with plausible textures from step-by-step exploration. Table 3: Ablation Study to validate the effectiveness of our design choices. Method Reconstruction Unconditional Generation PSNR↑SSIM↑Abs. Rel.↓RMSE↓P-FID↓P-KID(%)↓ FID↓ KID(%)↓ w.o. Robust Position Perturbation 20.487 0.783 0.023 0.132 15.28 5.218 349.3 21.884 w.o. Adaptive Upsampling and Refine 18.749 0.711 0.042 0.157 12.48 4.764 341.8 21.760 w.o. Explicit Color Supervision 19.582 0.739 0.030 0.144 10.61 3.142 327.9 19.418 w.o. Dist.-aware Trajectory Smoothing 19.742 0.753 0.026 0.137 24.84 11.387 401.8 27.482 Terra 19.742 0.753 0.026 0.137 8.79 1.745 307.2 18.919 latent space. Our Terra achieves better performance in chamfer distance and earth mover’s distance, demonstrating better geometry quality. Prometheus still achieves better FID and KID even though the visualizations show evident multi-view inconsistency. Explorable world model. We visualize the results for explorable world model in Figure 7. We start from a single step generation, and progressively extend the boundary to explore the unknown re- gions. In each step, we choose a random direction for exploration, take a step forward, and generate the next-step result with part of the known regions as condition. Our Terra is able to synthesize both coherent and diverse room layouts with plausible textures, validating the effectiveness of Terra. 4.4 ABLATION STUDY We conduct comprehensive ablation study to validate the effectiveness of our designs in Table 3. Although position perturbation for point latents degrades reconstruction performance, it is crucial for the generative training because it significantly improves the robustness of the VAE decoder against positional noise. Both adaptive upsampling and refinement and explicit color supervision enhance the reconstruction performance and also the generation quality. Distance-aware trajectory smoothing takes effect in the generative training and is critical for the convergence of the model. 5 CONCLUSION In this paper, we propose Terra as a native 3D world model that describes and generates explorable 3D environments with point latents. The point latents naturally satisfy the 3D consistency constraint crucial to world models as a native 3D representation, and support flexible rendering from any given viewpoint with a single generation process. To learn the intrinsic distribution of 3D data with point latents, we design the P2G-VAE and SPFlow networks for dimensionality reduction and generative modeling, respectively. We conduct experiments on ScanNet v2 with reconstruction, unconditional generation and image conditioned generation tasks, and Terra achieves the best overall performance 9 Preprint both quantitatively and qualitatively. Furthermore, Terra is able to explore the unknown regions in a progressive manner and produce a large-scale and coherent world simulation. REFERENCES Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chat- topadhyay, Yongxin Chen, Yin Cui, Yifan Ding, et al. Cosmos world foundation model platform for physical ai. arXiv preprint arXiv:2501.03575, 2025. Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In CVPR, pp. 15619–15629, 2023. Mido Assran, Adrien Bardes, David Fan, Quentin Garrido, Russell Howes, Matthew Muckley, Am- mar Rizvi, Claire Roberts, Koustuv Sinha, Artem Zholus, et al. V-jepa 2: Self-supervised video models enable understanding, prediction and planning. arXiv preprint arXiv:2506.09985, 2025. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NIPS, 33:1877–1901, 2020. Jake Bruce, Michael D Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, et al. Genie: Generative inter- active environments. In ICML, 2024. David Charatan, Sizhe Lester Li, Andrea Tagliasacchi, and Vincent Sitzmann. pixelsplat: 3d gaus- sian splats from image pairs for scalable generalizable 3d reconstruction. In CVPR, pp. 19457– 19467, 2024. Junyi Chen, Haoyi Zhu, Xianglong He, Yifan Wang, Jianjun Zhou, Wenzheng Chang, Yang Zhou, Zizun Li, Zhoujie Fu, Jiangmiao Pang, et al. Deepverse: 4d autoregressive video generation as a world model. arXiv preprint arXiv:2506.01103, 2025a. Weiliang Chen, Jiayi Bi, Yuanhui Huang, Wenzhao Zheng, and Yueqi Duan. Scenecompleter: Dense 3d scene completion for generative novel view synthesis. arXiv preprint arXiv:2506.10981, 2025b. Yuedong Chen, Haofei Xu, Chuanxia Zheng, Bohan Zhuang, Marc Pollefeys, Andreas Geiger, Tat- Jen Cham, and Jianfei Cai. Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images. In ECCV, pp. 370–386. Springer, 2024. Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, pp. 5828– 5839, 2017. Quankai Gao, Iliyan Georgiev, Tuanfeng Y Wang, Krishna Kumar Singh, Ulrich Neumann, and Jae Shin Yoon. Can3tok: Canonical 3d tokenization and latent modeling of scene-level 3d gaus- sians. In ICCV, 2025. Zekai Gu, Rui Yan, Jiahao Lu, Peng Li, Zhiyang Dou, Chenyang Si, Zhen Dong, Qifeng Liu, Cheng Lin, Ziwei Liu, et al. Diffusion as shader: 3d-aware video diffusion for versatile video generation control. In SIGGRAPH, pp. 1–12, 2025. David Ha and J¨urgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2(3), 2018. Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for video diffusion models. In ICLR, 2025. 10 Preprint Roberto Henschel, Levon Khachatryan, Hayk Poghosyan, Daniil Hayrapetyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Streamingt2v: Consistent, dynamic, and extendable long video generation from text. In CVPR, pp. 2568–2577, 2025. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NIPS, 33: 6840–6851, 2020. Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, and Ying Shan. Depthcrafter: Generating consistent long depth sequences for open-world videos. In CVPR, pp. 2005–2015, 2025. Tianyu Huang, Wangguandong Zheng, Tengfei Wang, Yuhao Liu, Zhenwei Wang, Junta Wu, Jie Jiang, Hui Li, Rynson WH Lau, Wangmeng Zuo, et al. Voyager: Long-range and world-consistent video diffusion for explorable 3d scene generation. arXiv preprint arXiv:2506.04225, 2025. Yuanhui Huang, Wenzhao Zheng, Yuan Gao, Xin Tao, Pengfei Wan, Di Zhang, Jie Zhou, and Jiwen Lu. Owl-1: Omni world model for consistent long video generation. arXiv preprint arXiv:2412.09600, 2024. Ka-Hei Hui, Chao Liu, Xiaohui Zeng, Chi-Wing Fu, and Arash Vahdat. Not-so-optimal transport flows for 3d point cloud generation. arXiv preprint arXiv:2502.12456, 2025. Junha Hyung, Susung Hong, Sungwon Hwang, Jaeseong Lee, Jaegul Choo, and Jin-Hwa Kim. Effective rank analysis and regularization for enhanced 3d gaussian splatting. NIPS, 37:110412– 110435, 2024. Roy Jonker and Anton Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4):325–340, 1987. Bernhard Kerbl, Georgios Kopanas, Thomas Leimk¨uhler, and George Drettakis. 3d gaussian splat- ting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139–1, 2023. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014. Dan Kondratyuk, Lijun Yu, Xiuye Gu, Jos´e Lezama, Jonathan Huang, Grant Schindler, Rachel Hornung, Vighnesh Birodkar, Jimmy Yan, Ming-Chang Chiu, et al. Videopoet: A large language model for zero-shot video generation. arXiv preprint arXiv:2312.14125, 2023. Hang Lai, Jiahang Cao, Jiafeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, and Weinan Zhang. World model-based perception for visual legged locomotion. In ICRA, pp. 11531–11537. IEEE, 2025. Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. Gaussiananything: Interactive point cloud latent diffusion for 3d genera- tion. In ICLR, 2025. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint arXiv:2210.02747, 2022. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. Yuanxun Lu, Jingyang Zhang, Tian Fang, Jean-Daniel Nahmias, Yanghai Tsin, Long Quan, Xun Cao, Yao Yao, and Shiwei Li. Matrix3d: Large photogrammetry model all-in-one. In CVPR, pp. 11250–11263, 2025. Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In CVPR, pp. 2837–2845, 2021. Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021. 11 Preprint Chen Min, Dawei Zhao, Liang Xiao, Jian Zhao, Xinli Xu, Zheng Zhu, Lei Jin, Jianshu Li, Yulan Guo, Junliang Xing, et al. Driveworld: 4d pre-trained scene understanding via world models for autonomous driving. In CVPR, pp. 15522–15533, 2024. OpenAI. Video generation models as world simulators, 2024. URL https://openai.com/ research/video-generation-models-as-world-simulators. William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, pp. 4195– 4205, 2023. Bohao Peng, Xiaoyang Wu, Li Jiang, Yukang Chen, Hengshuang Zhao, Zhuotao Tian, and Jiaya Jia. Oa-cnns: Omni-adaptive sparse cnns for 3d semantic segmentation. In CVPR, pp. 21305–21315, 2024. Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, and Francis Williams. Xcube: Large-scale 3d generative modeling using sparse voxel hierarchies. In CVPR, pp. 4209–4219, 2024a. Zhiyuan Ren, Minchul Kim, Feng Liu, and Xiaoming Liu. Tiger: Time-varying denoising model for 3d point cloud generation with diffusion process. In CVPR, pp. 9462–9471, 2024b. Zhongwei Ren, Yunchao Wei, Xun Guo, Yao Zhao, Bingyi Kang, Jiashi Feng, and Xiaojie Jin. Videoworld: Exploring knowledge learning from unlabeled videos. In CVPR, pp. 29029–29039, 2025. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj¨orn Ommer. High- resolution image synthesis with latent diffusion models. In CVPR, pp. 10684–10695, 2022. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. Aether Team, Haoyi Zhu, Yifan Wang, Jianjun Zhou, Wenzheng Chang, Yang Zhou, Zizun Li, Junyi Chen, Chunhua Shen, Jiangmiao Pang, et al. Aether: Geometric-aware unified world modeling. arXiv preprint arXiv:2503.18945, 2025. Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, et al. Lion: Latent point diffusion models for 3d shape generation. NIPS, 35:10021–10039, 2022. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NIPS, 30, 2017. Mathias Vogel, Keisuke Tateno, Marc Pollefeys, Federico Tombari, Marie-Julie Rakotosaona, and Francis Engelmann. P2p-bridge: Diffusion bridges for 3d point cloud denoising. In ECCV, pp. 184–201. Springer, 2024. Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, and David Novotny. Vggt: Visual geometry grounded transformer. In CVPR, pp. 5294–5306, 2025a. Qiuheng Wang, Yukai Shi, Jiarong Ou, Rui Chen, Ke Lin, Jiahao Wang, Boyuan Jiang, Haotian Yang, Mingwu Zheng, Xin Tao, et al. Koala-36m: A large-scale video dataset improving consis- tency between fine-grained conditions and video content. In CVPR, pp. 8428–8437, 2025b. Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Ge- ometric 3d vision made easy. In CVPR, pp. 20697–20709, 2024a. Tai Wang, Xiaohan Mao, Chenming Zhu, Runsen Xu, Ruiyuan Lyu, Peisen Li, Xiao Chen, Wenwei Zhang, Kai Chen, Tianfan Xue, et al. Embodiedscan: A holistic multi-modal 3d perception suite towards embodied ai. In CVPR, pp. 19757–19767, 2024b. Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, Jiagang Zhu, and Jiwen Lu. Drivedreamer: Towards real-world-drive world models for autonomous driving. In ECCV, pp. 55–72. Springer, 2024c. Tong Wu, Shuai Yang, Ryan Po, Yinghao Xu, Ziwei Liu, Dahua Lin, and Gordon Wetzstein. Video world models with long-term spatial memory. arXiv preprint arXiv:2506.05284, 2025. 12 Preprint Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, and Hengshuang Zhao. Point transformer v3: Simpler faster stronger. In CVPR, pp. 4840– 4851, 2024a. Yuqi Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, and Jiwen Lu. Embodiedocc: Embodied 3d occupancy prediction for vision-based online scene understanding. arXiv preprint arXiv:2412.04380, 2024b. Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. In CVPR, pp. 21469–21480, 2025. Jiannan Xiang, Guangyi Liu, Yi Gu, Qiyue Gao, Yuting Ning, Yuheng Zha, Zeyu Feng, Tianhua Tao, Shibo Hao, Yemin Shi, et al. Pandora: Towards general world model with natural language actions and video states. arXiv preprint arXiv:2406.09455, 2024. Yuanbo Yang, Jiahao Shao, Xinyang Li, Yujun Shen, Andreas Geiger, and Yiyi Liao. Prometheus: 3d-aware latent diffusion models for feed-forward text-to-3d scene generation. In CVPR, pp. 2857–2869, 2025a. Zhongqi Yang, Wenhang Ge, Yuqi Li, Jiaqi Chen, Haoyuan Li, Mengyin An, Fei Kang, Hua Xue, Baixin Xu, Yuyang Yin, et al. Matrix-3d: Omnidirectional explorable 3d world generation. arXiv preprint arXiv:2508.08086, 2025b. Shengming Yin, Chenfei Wu, Huan Yang, Jianfeng Wang, Xiaodong Wang, Minheng Ni, Zhengyuan Yang, Linjie Li, Shuguang Liu, Fan Yang, et al. Nuwa-xl: Diffusion over diffusion for extremely long video generation. arXiv preprint arXiv:2303.12346, 2023. Hong-Xing Yu, Haoyi Duan, Charles Herrmann, William T Freeman, and Jiajun Wu. Wonderworld: Interactive 3d scene generation from a single image. In CVPR, pp. 5916–5926, 2025. Hongxiao Yu, Yuqi Wang, Yuntao Chen, and Zhaoxiang Zhang. Monocular occupancy prediction for scalable indoor scenes. In ECCV, pp. 38–54. Springer, 2024a. Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, Tien- Tsin Wong, Ying Shan, and Yonghong Tian. Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint arXiv:2409.02048, 2024b. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pp. 586–595, 2018. Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In CVPR, pp. 16259–16268, 2021. Wenzhao Zheng, Weiliang Chen, Yuanhui Huang, Borui Zhang, Yueqi Duan, and Jiwen Lu. Oc- cworld: Learning a 3d occupancy world model for autonomous driving. In ECCV, pp. 55–72. Springer, 2024a. Wenzhao Zheng, Zetian Xia, Yuanhui Huang, Sicheng Zuo, Jie Zhou, and Jiwen Lu. Doe-1: Closed- loop autonomous driving with large world model. arXiv preprint arXiv:2412.09627, 2024b. Chenliang Zhou, Fangcheng Zhong, Param Hanji, Zhilin Guo, Kyle Fogarty, Alejandro Sztrajman, Hongyun Gao, and Cengiz Oztireli. Frepolad: Frequency-rectified point latent diffusion for point cloud generation. In ECCV, pp. 434–453. Springer, 2024. 13
Preprint TERRA: EXPLORABLE NATIVE 3D WORLD MODEL WITH POINT LATENTS Yuanhui Huang1 Weiliang Chen1 Wenzhao Zheng1 Xin Tao2 Pengfei Wan2 Jie Zhou1 Jiwen Lu1 1Tsinghua University 2Kuaishou Technology 1 2 4 3 5 6 5 RGB Point Cloud Gaussian Primitives P2G Encoder Generative World Modeling P2G Decoder Point Latents 4 3 2 1 6 Figure 1: Method overview. Unlike conventional world models with pixel-aligned representations, we propose Terra as a native 3D world model that describes and generates 3D environments with point latents. Starting with a glimpse of the environment, Terra progressively explores the unknown regions to produce a coherent and complete world simulation. ABSTRACT World models have garnered increasing attention for comprehensive modeling of the real world. However, most existing methods still rely on pixel-aligned representations as the basis for world evolution, neglecting the inherent 3D nature of the physical world. This could undermine the 3D consistency and diminish the modeling efficiency of world models. In this paper, we present Terra, a native 3D world model that represents and generates explorable environments in an intrinsic 3D latent space. Specifically, we propose a novel point-to-Gaussian variational autoencoder (P2G-VAE) that encodes 3D inputs into a latent point representation, which is subsequently decoded as 3D Gaussian primitives to jointly model geometry and appearance. We then introduce a sparse point flow matching network (SPFlow) for generating the latent point representation, which simultaneously denoises the positions and features of the point latents. Our Terra enables exact multi-view consistency with native 3D representation and architecture, and supports flexible rendering from any viewpoint with only a single generation process. Furthermore, Terra achieves explorable world modeling through progressive generation in the point latent space. We conduct extensive experiments on the challenging indoor scenes from ScanNet v2. Terra achieves state-of-the-art performance in both reconstruction and generation with high 3D consistency. 1 INTRODUCTION World models have emerged as a promising research direction, with the aim of understanding and simulating the underlying mechanics of the physical world (Ha & Schmidhuber, 2018). Unlike 1 16 Oct 2025 Preprint Large Language Models (LLMs), which are confined to textual processing (Vaswani et al., 2017; Brown et al., 2020), world models integrate multimodal visual data to construct a comprehensive and internal representation of the environment (Ha & Schmidhuber, 2018). From learning the evolution of the real world, world models enable various downstream applications, including perception (Min et al., 2024; Lai et al., 2025), prediction (Zheng et al., 2024a; Xiang et al., 2024; Team et al., 2025; Agarwal et al., 2025), reasoning (Bruce et al., 2024; Assran et al., 2023; Huang et al., 2024), and planning (Ren et al., 2025; Assran et al., 2025; Zheng et al., 2024b). Scene representation is fundamental to world models (Wang et al., 2024c; Team et al., 2025; Zheng et al., 2024a), forming the basis for world evolution. Conventional methods typically rely on 2D image or video representations, simulating world dynamics through video prediction (Agarwal et al., 2025; Bruce et al., 2024; Xiang et al., 2024; Assran et al., 2025). However, the generated videos often lack consistency across frames (Wang et al., 2024c; Huang et al., 2024; Zheng et al., 2024b), as the models do not consider explicit 3D priors and instead learn only implicit 3D cues from the training videos. To address this limitation, a line of work simultaneously predicts RGB images and depth maps to construct a pixel-aligned 2.5D representation (Team et al., 2025; Yang et al., 2025a; Lu et al., 2025). While they integrate geometric constraints into the generation process, learning the multi-view pixel correspondence remains challenging due to the ambiguity of relative camera poses. The physical world is inherently three-dimensional, including objects and their interactions. However, the rendering process only produces a partial 2D observation of the underlying 3D environment, inevitably losing crucial depth and pose information (Mildenhall et al., 2021; Kerbl et al., 2023). This poses critical challenges on the multi-view consistency of world models based on pixelaligned representations (Lu et al., 2025; Team et al., 2025; Yang et al., 2025a; Wang et al., 2024c). To address this, we present Terra, a native 3D world model that describes and generates explorable environments with an intrinsic 3D representation, as shown in Figure 1. At its core, we learn a native point latent space that employs spatially sparse but semantically compact point latents as the basis for reconstruction and generation. Accordingly, Terra completely discards pixel-aligned designs and directly learns the distribution of 3D scenes in its most natural form, achieving 3D consistency without bells and whistles. To elaborate, we propose a novel point-to-Gaussian variational autoencoder (P2G-VAE) that converts 3D input into the latent point representation. The asymmetric decoder subsequently maps these point latents to rendering-compatible 3D Gaussian primitives to jointly model geometry and appearance. The P2G-VAE effectively reduces the redundancy in the input 3D data and derives a compact latent space suitable for generative modeling. Furthermore, we propose a sparse point flow matching model (SPFlow) to learn the transport trajectory from the noise distribution to the target point distribution. The SPFlow simultaneously denoises the positions and features of the point latents to leverage the complementary nature of geometric and textural attributes to foster their mutual enhancement. Based on P2G-VAE and SPFlow, we formulate the explorable world model as an outpainting task in the point latent space, which we approach through progressive training with three stages: reconstruction, unconditional generative pretrain, and masked conditional generation. We conduct extensive experiments on the challenging indoor scenes from ScanNet v2 (Dai et al., 2017). Our Terra achieves state-of-the-art performance in both reconstruction and generation with high 3D consistency and efficiency. 2 RELATED WORK 2D world models. Early attempts in world models focus on image or video representations, thanks to the exceptional performance of 2D diffusion models (Ho et al., 2020; Song et al., 2020; Rombach et al., 2022; Blattmann et al., 2023; Peebles & Xie, 2023). DriveDreamer (Wang et al., 2024c) and Sora (OpenAI, 2024) represent pioneering image-based and video-based world models, respectively, both leveraging diffusion models to achieve view-consistent and temporally coherent world modeling. Subsequent research efforts focus primarily on enhancing the temporal consistency (Henschel et al., 2025; Huang et al., 2024; Yin et al., 2023), spatial coherence (Yu et al., 2024b; Wu et al., 2025; Chen et al., 2025a), physical plausibility (Assran et al., 2023; Agarwal et al., 2025; Assran et al., 2025), and interactivity (Xiang et al., 2024; He et al., 2025; Wang et al., 2025b) of generated videos. Several studies also explore integrating the language modality with conventional methods to train multimodal world models (Zheng et al., 2024b; Kondratyuk et al., 2023). Recently, Genie-3 (Bruce et al., 2024) has emerged as one of the most successful video world models, which enables excellent photorealism, flexible interaction, and real-time generation. Despite the promising advancements, 2 Preprint 2D world models learn the evolution of the real world solely from image or video data, overlooking the inherent 3D nature of the physical environments. This lack of sufficient 3D priors often results in failures to maintain 3D consistency in the generated outputs. Moreover, 2D world models require multiple generation passes to produce results with different viewing trajectories. 2.5D world models. To incorporate explicit 3D clues into world models, and also leverage the generative prior from 2D diffusion networks, a line of work (Hu et al., 2025; Gu et al., 2025; Chen et al., 2025b; Yang et al., 2025b; Huang et al., 2025; Yu et al., 2025) proposes to jointly predict depth and RGB images as a pixel-aligned 2.5D representation. ViewCrafter (Yu et al., 2024b) employs an off-the-shelf visual geometry estimator (Wang et al., 2024a) to perform depth and pose prediction, which is then used in novel view reprojection. Prometheus (Yang et al., 2025a) trains a dual-modal diffusion network for joint generation of depth and RGB images conditioned on camera poses. Furthermore, several work (Team et al., 2025; Lu et al., 2025) converts camera poses to 2D Pl ̈ucker coordinates, in order to consider camera poses, depth and RGB images in a unified framework. In general, these methods try to learn the joint distribution of depth, poses and texture to improve 3D consistency. However, these factors are deeply coupled with each other by the delicate perspective transformation, which is often challenging for neural networks to learn in an implicit data-driven manner. We propose a native 3D world model that represents and generates explorable environments with a native 3D latent space, and guarantees multi-view consistency with 3D-to-2D rasterization. Native 3D generative models. Most relevant to our work are native 3D generative models that also employ 3D representations. Pioneering work in this field focuses on point cloud generation. Luo & Hu (2021) proposes the first diffusion probabilistic model for 3D point cloud generation. Vahdat et al. (2022) later extend this paradigm to support latent point diffusion, followed by advancements in architecture (Ren et al., 2024b), frequency analysis (Zhou et al., 2024) and flow matching (Vogel et al., 2024; Hui et al., 2025). However, these methods are confined to object or shape level generation and are unable to synthesize textured results, which greatly restricts their application. To integrate texture, Lan et al. (2025) and Xiang et al. (2025) adopt Gaussian splatting as the 3D representation, but they are still limited to object generation. Zheng et al. (2024a) and Ren et al. (2024a) extend to scene-level 3D occupancy generation, but the occupancy is coarse in granularity and does not support rendering applications. In summary, existing methods are restricted to either object-level fine-grained or scene-level coarse geometry generation. In contrast, we construct the first native 3D world model with both large-scale and rendering-compatible 3D Gaussian generation. 3 PROPOSED APPROACH 3.1 LATENT POINT REPRESENTATION We present Terra as a native 3D world model that represents and generates explorable environments with an intrinsic 3D representation. Figure 2 outlines the overall pipeline. Formally, we formulate explorable world models as first generating an initial scene S0 and progressively expanding the known regions to produce a coherent and infinite world simulation S: S0 = g(∅, C0; θ), Si = g(Si-1, Ci; θ), Si = {S0, S1, ..., Si}, (1) where subscripts denote exploration steps, and g(Si-1, Ci; θ) represents the model with learnable parameters θ that generates the next-step exploration result Si based on the set of previously known regions Si-1 and the current conditional signal Ci. Conventional world models with pixel-aligned representations instantiate Si with colors R, depths D and poses T from different viewpoints: Si = [(R(n) i , D(n) i , T (n) i )|N n=1], (2) where N denotes the number of views in a single generation step and the superscript (n) is the view index. On the other hand, multi-view consistency for Lambert's model can be formulated as: R(n)|x(n) = R(m)|x(m), d(n)x(n) = T (n)x, n, m = 1, 2, ..., N, (3) where x, x(n), d(n), R(n)|x(n) denote the 3D coordinates of a visible point, the image coordinates of x in the n-th view, the depth of x in the n-th view, and sampling R(n) at x(n), respectively. Eq. (3) requires that different pixels on separate views should share the same color if they are the projections of the same visible 3D point. Therefore, the ideal representation for conventional world models 3 Preprint 2D-to-3D Inverse Projection Point Latents 3D Gaussian Primitives VAE E VAE D Rendering Crop Scene E Distance-Aware Trajectory Smoother SPFlow 3D Sparse Conv. Position-Feature Joint Denoising Point Attention Adaptive Refine & Upsample Condition Module Point Latents Supervision & Regularization Robust Position Perturb Terra 1 2 3 4 Figure 2: Overall pipeline. Terra consists of a point-to-Gaussian VAE and a sparse point flow matching model. The P2G-VAE effectively learns the transformation from input RGB point cloud to point latents, and then to 3D Gaussian primitives. The SPFlow learns the joint distribution of geometry and appearance. Both P2G-VAE and SPFlow adopt native sparse 3D architectures. should be the combination of Eq. (2) and (3), i.e. the multi-view colors, depths and poses satisfying the reprojection constraint. Unfortunately, it is often challenging for neural networks to learn this constraint in an implicit data-driven manner, leading to multi-view inconsistency. ViewCrafter (Yu et al., 2024b) bypasses this problem by taking smaller steps (N = 1) in every generation and explicitly projects previous contexts onto the novel view to enforce reprojection consistency. While effective, this approach significantly compromises the efficiency of exploration. Different from the pixel-aligned counterparts, we propose latent point representation P as a native 3D descriptor of the environment: Si = Pi ∈RMi×(3+D), where Mi and 3 + D denote the number of point latents for the i-th exploration step and the sum of the dimensions for 3D coordinates and features, respectively. The latent point representation is similar to the actual point cloud, located sparsely on the surface of objects, but limited in number and with semantically meaningful latent features. It also supports adapting Mi according to the complexity of different regions and integrating historical contexts by simply concatenating previous Pis. This design completely discards the view-dependent elements (Eq. (2)) and the reprojection constraint (Eq. (3)) from the exploration process and instead models the environment with 3D points x directly. These point latents can be transformed into 3D Gaussian primitives for rasterization, naturally satisfying 3D consistency and enabling flexible rendering from any viewpoint without rerunning the generation pipeline. 3.2 POINT-TO-GAUSSIAN VARIATIONAL AUTOENCODER We design the P2G-VAE to effectively generate the latent point representation from the input scene and decode it into 3D Gaussian primitives. We suppose the input scene is described by a colored point cloud Q ∈RB×6 to provide necessary 3D information, where B and 6 represent the number of points and the sum of dimensions for 3D coordinates and color, respectively. We build our P2G-VAE based on the point transformer architecture (Zhao et al., 2021) for efficiency. Apart from removing the residual connections in the original PTv3 (Wu et al., 2024a), we include the following novel designs for a robust latent space and effective Gaussian decoding, as shown in Figure 3. Robust position perturbation. In conventional VAEs (Kingma & Welling, 2014), it is common to regularize the latent features with a Kullback-Leibler Divergence loss LKL to align the feature distribution with a standard normal distribution. However, it is nontrivial to generalize this practice to unstructured point latents where 3D coordinates themselves contain crucial geometry information. Directly regularizing the coordinates to approximate Gaussian noise would have an adverse effect on the locality of point latents and the associated local structures. To this end, we propose a robust 4 Preprint Point Transformer Encoder Attn. Module Offset Pred. Pos. Res. Feat. Res. Adaptive Upsample Module Offset Pred. Pos. Res. Adaptive Refine Chamfer Distance Explicit Color Loss NN. Matching Learnable Queries Robust Position Perturbation Distance-aware Trajectory Smoothing LAP Matching Solver Crop Uniform Crop& Uniform Condition Noisy Latents Figure 3: Method details. LAP, Pos. Res., Feat. Res. and NN. denote linear assignment problem, position residual, feature residual and nearest neighbor, respectively. position perturbation technique which perturbs the coordinates of point latents with a predefined Gaussian noise n ∼N(0, σ2I3) where σ is a hyperparameter for noise intensity: P = [(p(m) ∈R3, f (m) ∈RD)|M m=1], p = ˆp + n, f ∼N(mean( ˆf), diag(var( ˆf))), (4) where we split point latents P into M position-feature pairs (p, f) and omit the exploration step for simplicity. The ˆp and ˆf denote the positions and features of points as input to the VAE bottleneck, and mean(·), var(·) are the functions to calculate the mean and variance of the latent features f. The robust position perturbation enhances the robustness of the VAE decoder against slight perturbations over the positions of point latents. Further, it greatly improves the generation quality since generated samples inevitably contain a certain level of noise, similar to our perturbation process. Adaptive upsampling and refinement. Given point latents after downsampling and perturbation, the VAE decoder should upsample them to an appropriate number and restore the dense structure. To achieve this, we introduce the adaptive upsampling and refinement modules. The adaptive upsampling module splits each point (p, f) into K child points (p(k), f (k))|K k=1 with K learnable queries q(k)|K k=1. These queries first interact with each point for contexts, and then each query predicts a relative displacement disp(·) and a residual feature resf(·) for the corresponding child point: ˆq(k)|K k=1 = ups(f, q(k)|K k=1), p(k) = p + disp(ˆq(k)), f (k) = f + resf(ˆq(k)), (5) where ups(·) denotes the point-query interaction module. This design enables controllable upsampling and avoids the complex mask-guided trimming operation in conventional methods (Ren et al., 2024a). Similar to the upsampling module, the adaptive refinement module further adjusts the point positions with offsets predicted from the point features: p′ = p + refine(f). These two modules progressively densify and refine the point positions, restoring a dense and meaningful structure. Comprehensive regularizations. To supervise the output Gaussian primitives, we employ the conventional rendering supervisions including L2, SSIM and LPIPS (Zhang et al., 2018) losses. In addition, we also incorporate other losses to improve the reconstructed geometry and regularize the properties of Gaussians for better visual quality. 1) We optimize the chamfer distances Lcham between the input point cloud and intermediate point clouds as the outputs of upsampling and refinement modules, which provides explicit guidance for the prediction of position offsets. 2) We use the normal Lnorm and effective rank Lrank (Hyung et al., 2024) regularizations to regularize the rotation and scale properties of Gaussians. 3) We propose a novel explicit color supervision Lcolor, which directly aligns the color of each Gaussian with the color of the nearest point in the input point cloud. This loss bypasses the rasterization process and thus is more friendly for optimization. The overall loss function for our P2G-VAE can be formulated as: Lvae = Ll2 + λ1Lssim + λ2Llpips + λ3Lcham + λ4Lnorm + λ5Lrank + λ6Lcolor + λ7Lkl. (6) 3.3 NATIVE 3D GENERATIVE MODELING We use flow matching (Lipman et al., 2022) for generative modeling of the latent point representation. Formally, we gradually add noise N ∼N(0, I) to both the positions and features of point 5 Preprint Table 1: Reconstruction performance. RGB PC. and Rep. Range represent colored point cloud and representation range of the output Gaussian, respectively. We select 20 random scenes from the validation set to reconstruct offline Gaussians as input to Can3Tok∗. Method Input Type Rep. Range PSNR↑ SSIM↑ LPIPS↓ Abs. Rel.↓ RMSE↓ δ1↑ PixelSplat RGB Partial 18.165 0.686 0.493 0.094 0.287 0.832 MVSplat RGB Partial 17.126 0.621 0.552 0.139 0.326 0.824 Prometheus RGBD Partial 17.279 0.644 0.448 0.087 0.251 0.901 Can3Tok∗ Gaussian Complete 19.578 0.733 0.514 0.031 0.151 0.973 Terra RGB PC. Complete 19.742 0.753 0.530 0.026 0.137 0.978 latents P ∈RM×(3+D) with a schedule t ∈[0, 1] in the diffusion process, and predict the velocity vector V ∈RM×(3+D) given noisy latents Pt in the reverse process: Pt = tP + (1 -t)N, V = F(Pt, t; φ), (7) where F(·, ·; φ) denotes a UNet (Peng et al., 2024) with learnable parameters φ based on 3D sparse convolution. The training objective can now be formulated as: Lflow = Et∼U[0,1],P ∼P,N∼N (0,I)||F(Pt, t; φ) -(P -N)||2, (8) where P denotes the ground truth distribution of point latents P . During inference, we start from sampled Gaussian noise and progressively approach clean point latents along the trajectory determined by the predicted velocity vector. Note that we simultaneously diffuse the positions and features to learn the joint distribution of geometry and texture and facilitate their mutual enhancement. Distance-aware trajectory smoothing. Conventional flow matching applied to grid-based latents naturally matches noises and latents according to their grid indices. However, it would complicate the velocity field and the denoising trajectory if we simply match the point positions with noise samples based on their indices in the sequence (Hui et al., 2025). Intuitively, it is unreasonable to denoise a leftmost noise sample to a rightmost point. To address this, we propose a distance-aware trajectory smoothing technique that effectively straightens the transport trajectory and facilitates convergence for unstructured point flow matching. Since it is more reasonable to choose a closer noise sample as the diffusion target than a farther one, we optimize the matching M between point positions and noise samples to minimize the sum of distances between point-noise pairs: M∗= argminM M X m=1 ||p(m) -NMm,:3||2, M = reorder([1, 2, ..., M]), (9) where NMm,:3 denotes the position of the noise sample assigned to the m-th point latent. We apply the Jonker-Volgenant algorithm (Jonker & Volgenant, 1987) to efficienty solve Eq. (9). Simple conditioning mechanism. For an explorable model, we employ multi-stage training that consists of reconstruction, unconditional generative pretraining, and masked conditional generation. For masked conditions, we introduce three types of conditions to support different exploration styles: cropping, uniform sampling, and their combinations. We randomly crop a connected 3D region from the point latents as the cropping condition to unlock the ability to imagine and populate unknown regions. We uniformly sample some of the point latents across the scene as the uniform sampling condition to enable the model to refine known regions. We also use their combinations and first crop a connected 3D region and then apply uniform sampling inside it to simulate RGBD conditions. We concatenate the conditional point latents with the noisy ones and fix the condition across the diffusion process to inject conditional guidance even at the early denoising stage. 4 EXPERIMENTS 4.1 DATASETS AND METRICS We conduct extensive experiments on the challenging indoor scenes from the ScanNet v2 (Dai et al., 2017) dataset, which is widely adopted in embodied perception (Yu et al., 2024a; Wu et al., 2024b) and visual reconstruction (Wang et al., 2024a; 2025a). The dataset consists of 1513 scenes in total, covering diverse room types and layouts. Each scene is recorded by an RGBD video with semantic and pose annotations for each frame. We unproject the color and depth maps into 3D space using 6 Preprint Rendered RGB RGB GT Rendered Depth Depth GT Rendered Gaussian Input RGB PC. Figure 4: Visualization for reconstruction. Terra achieves photorealistic rendering quality for RGB and depth, and learns to complete the partial objects caused by the sensor failure in dark regions. Table 2: Generation Performance. CD and EMD denote Chamfer and earth mover's distances, respectively. Terra achieves exceptional geometry generation quality compared with other methods. Method Repr. Unconditional Image Conditioned P-FID↓ P-KID(%)↓ FID↓ KID(%)↓ CD↓ EMD↓ FID↓ KID(%)↓ Prometheus RGBD 32.35 12.481 263.3 10.726 0.374 0.531 208.3 12.387 Trellis 3D Grid 19.62 7.658 361.4 23.748 0.405 0.589 314.9 24.713 Terra Point 8.79 1.745 307.2 18.919 0.217 0.474 262.4 20.283 the poses to produce colored point clouds as input to our P2G-VAE. In the generative training, we preprocess the point latents from the VAE encoder by randomly cropping a smaller rectangular region in the x-y plane and filtering out overly sparse and noisy samples. We follow Wang et al. (2024b) and split the dataset into 958 and 243 scenes for training and validation, respectively. We evaluate Terra on the reconstruction, unconditional, and image-conditioned generation tasks. For reconstruction, we compare Terra with three lines of methods: PixelSplat (Charatan et al., 2024) and MVSplat (Chen et al., 2024) with RGB input, Prometheus (Yang et al., 2025a) with RGBD input, and Can3Tok (Gao et al., 2025) with offline reconstructed Gaussians as input. We use PSNR, SSIM, and LPIPS metrics for visual quality, and Abs. Rel., RMSE, and δ1 metrics for depth accuracy. For generative tasks, we compare Terra with Prometheus using RGBD (Yang et al., 2025a) representation, and Trellis (Xiang et al., 2025) using 3D grid representation. We retrain these baselines on ScanNet v2 using their official code for a fair comparison. For unconditional generation, we adopt point cloud FID (P-FID) and point cloud KID (P-KID) for geometry quality, and FID and KID for visual quality. Regarding image-conditioned generation, we adopt the Chamfer distance and earth mover's distance for geometry quality, and FID and KID for visual quality. 4.2 IMPLEMENTATION DETAILS We construct the P2G-VAE based on PTv3 (Wu et al., 2024a), removing all residual connections and integrating the designs proposed in Section 3.2. We perform downsampling with stride 2 for 3 times in the encoder, reducing the number of points from 1 million to around 5000. In the decoder, we upsample the points also for 3 times with K = 7, 3, 3, respectively. We train the P2G-VAE for 36K iterations with an AdamW (Loshchilov & Hutter, 2017) optimizer. Regarding the SPFlow, we employ the OA-CNNs (Peng et al., 2024) as the UNet backbone. We crop a random region with a size of 2.4×2.4 m2 from a complete scene as the input sample. We train the SPFlow for 100K and 40K iterations for unconditional pretrain and conditional generation, respectively. 4.3 MAIN RESULTS Reconstruction. We report the results in Table 1. PixelSplat (Charatan et al., 2024) and MVSplat (Chen et al., 2024) do not include 3D geometry information as input, and thus they might perform worse compared with others using depth or Gaussian input. Prometheus (Yang et al., 7 Preprint Ours Prometheus Trellis Figure 5: Visualization for unconditional generation. Only Terra is able to generate diverse and reasonable scenes while Prometheus and Trellis lack consistent geometry and texture, respectively. Ours Prometheus Trellis Condition Figure 6: Visualization for image conditioned generation. Both Terra and Prometheus are able to produce plausible images while the geometry consistency is far better than Prometheus. 2025a) achieves the best LPIPS because it is pretrained with a 2D diffusion model, which excels at image quality. Our Terra achieves the best results for all metrics except LPIPS, even better than Can3Tok (Gao et al., 2025) using offline reconstructed Gaussians, demonstrating the effectiveness of our P2G-VAE. Furthermore, Terra is able to reconstruct the whole scene in a single forward pass and also complete partial objects with incorrect depth measurement as shown by Figure 4. Unconditional generation. We report the results in Table 2. Our method achieves better P-FID and P-KID than Prometheus with 2.5D representation and Trellis (Xiang et al., 2025) with 3D grid representation, validating the superiority of point latents in modeling the geometry distribution. However, Terra performs worse compared to Prometheus in image quality metrics FID and KID, because Prometheus, with 2D diffusion pretrain, is able to synthesize plausible images even though the underlying 3D structures could be corrupted. We provide visualization results in Figure 5, where only Terra generates both reasonable and diverse 3D scenes while the results of other methods either lack accurate 3D structure or vivid textures. Image conditioned generation. We report the results in Table 2 and Figure 6. Given a conditional image, we first unproject it into 3D space with depth and intrinsics to produce a colored point cloud. Then we can formulate the image conditioned generation task as outpainting in the point 8 Preprint Step 1 Step 2 Step 3 Step 4 Step 5 Overview Figure 7: Visualization for explorable world model. Terra is able to generate both coherent and diverse room layouts with plausible textures from step-by-step exploration. Table 3: Ablation Study to validate the effectiveness of our design choices. Method Reconstruction Unconditional Generation PSNR↑SSIM↑Abs. Rel.↓RMSE↓P-FID↓P-KID(%)↓ FID↓ KID(%)↓ w.o. Robust Position Perturbation 20.487 0.783 0.023 0.132 15.28 5.218 349.3 21.884 w.o. Adaptive Upsampling and Refine 18.749 0.711 0.042 0.157 12.48 4.764 341.8 21.760 w.o. Explicit Color Supervision 19.582 0.739 0.030 0.144 10.61 3.142 327.9 19.418 w.o. Dist.-aware Trajectory Smoothing 19.742 0.753 0.026 0.137 24.84 11.387 401.8 27.482 Terra 19.742 0.753 0.026 0.137 8.79 1.745 307.2 18.919 latent space. Our Terra achieves better performance in chamfer distance and earth mover's distance, demonstrating better geometry quality. Prometheus still achieves better FID and KID even though the visualizations show evident multi-view inconsistency. Explorable world model. We visualize the results for explorable world model in Figure 7. We start from a single step generation, and progressively extend the boundary to explore the unknown regions. In each step, we choose a random direction for exploration, take a step forward, and generate the next-step result with part of the known regions as condition. Our Terra is able to synthesize both coherent and diverse room layouts with plausible textures, validating the effectiveness of Terra. 4.4 ABLATION STUDY We conduct comprehensive ablation study to validate the effectiveness of our designs in Table 3. Although position perturbation for point latents degrades reconstruction performance, it is crucial for the generative training because it significantly improves the robustness of the VAE decoder against positional noise. Both adaptive upsampling and refinement and explicit color supervision enhance the reconstruction performance and also the generation quality. Distance-aware trajectory smoothing takes effect in the generative training and is critical for the convergence of the model. 5 CONCLUSION In this paper, we propose Terra as a native 3D world model that describes and generates explorable 3D environments with point latents. The point latents naturally satisfy the 3D consistency constraint crucial to world models as a native 3D representation, and support flexible rendering from any given viewpoint with a single generation process. To learn the intrinsic distribution of 3D data with point latents, we design the P2G-VAE and SPFlow networks for dimensionality reduction and generative modeling, respectively. We conduct experiments on ScanNet v2 with reconstruction, unconditional generation and image conditioned generation tasks, and Terra achieves the best overall performance 9 Preprint both quantitatively and qualitatively. Furthermore, Terra is able to explore the unknown regions in a progressive manner and produce a large-scale and coherent world simulation. REFERENCES Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chattopadhyay, Yongxin Chen, Yin Cui, Yifan Ding, et al. Cosmos world foundation model platform for physical ai. arXiv preprint , 2025. Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture. In CVPR, pp. 15619-15629, 2023. Mido Assran, Adrien Bardes, David Fan, Quentin Garrido, Russell Howes, Matthew Muckley, Ammar Rizvi, Claire Roberts, Koustuv Sinha, Artem Zholus, et al. V-jepa 2: Self-supervised video models enable understanding, prediction and planning. arXiv preprint , 2025. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint , 2023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. NIPS, 33:1877-1901, 2020. Jake Bruce, Michael D Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, et al. Genie: Generative interactive environments. In ICML, 2024. David Charatan, Sizhe Lester Li, Andrea Tagliasacchi, and Vincent Sitzmann. pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction. In CVPR, pp. 1945719467, 2024. Junyi Chen, Haoyi Zhu, Xianglong He, Yifan Wang, Jianjun Zhou, Wenzheng Chang, Yang Zhou, Zizun Li, Zhoujie Fu, Jiangmiao Pang, et al. Deepverse: 4d autoregressive video generation as a world model. arXiv preprint , 2025a. Weiliang Chen, Jiayi Bi, Yuanhui Huang, Wenzhao Zheng, and Yueqi Duan. Scenecompleter: Dense 3d scene completion for generative novel view synthesis. arXiv preprint , 2025b. Yuedong Chen, Haofei Xu, Chuanxia Zheng, Bohan Zhuang, Marc Pollefeys, Andreas Geiger, TatJen Cham, and Jianfei Cai. Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images. In ECCV, pp. 370-386. Springer, 2024. Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, pp. 58285839, 2017. Quankai Gao, Iliyan Georgiev, Tuanfeng Y Wang, Krishna Kumar Singh, Ulrich Neumann, and Jae Shin Yoon. Can3tok: Canonical 3d tokenization and latent modeling of scene-level 3d gaussians. In ICCV, 2025. Zekai Gu, Rui Yan, Jiahao Lu, Peng Li, Zhiyang Dou, Chenyang Si, Zhen Dong, Qifeng Liu, Cheng Lin, Ziwei Liu, et al. Diffusion as shader: 3d-aware video diffusion for versatile video generation control. In SIGGRAPH, pp. 1-12, 2025. David Ha and J ̈urgen Schmidhuber. World models. arXiv preprint , 2(3), 2018. Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for video diffusion models. In ICLR, 2025. 10 Preprint Roberto Henschel, Levon Khachatryan, Hayk Poghosyan, Daniil Hayrapetyan, Vahram Tadevosyan, Zhangyang Wang, Shant Navasardyan, and Humphrey Shi. Streamingt2v: Consistent, dynamic, and extendable long video generation from text. In CVPR, pp. 2568-2577, 2025. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. NIPS, 33: 6840-6851, 2020. Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, and Ying Shan. Depthcrafter: Generating consistent long depth sequences for open-world videos. In CVPR, pp. 2005-2015, 2025. Tianyu Huang, Wangguandong Zheng, Tengfei Wang, Yuhao Liu, Zhenwei Wang, Junta Wu, Jie Jiang, Hui Li, Rynson WH Lau, Wangmeng Zuo, et al. Voyager: Long-range and world-consistent video diffusion for explorable 3d scene generation. arXiv preprint , 2025. Yuanhui Huang, Wenzhao Zheng, Yuan Gao, Xin Tao, Pengfei Wan, Di Zhang, Jie Zhou, and Jiwen Lu. Owl-1: Omni world model for consistent long video generation. arXiv preprint , 2024. Ka-Hei Hui, Chao Liu, Xiaohui Zeng, Chi-Wing Fu, and Arash Vahdat. Not-so-optimal transport flows for 3d point cloud generation. arXiv preprint , 2025. Junha Hyung, Susung Hong, Sungwon Hwang, Jaeseong Lee, Jaegul Choo, and Jin-Hwa Kim. Effective rank analysis and regularization for enhanced 3d gaussian splatting. NIPS, 37:110412110435, 2024. Roy Jonker and Anton Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4):325-340, 1987. Bernhard Kerbl, Georgios Kopanas, Thomas Leimk ̈uhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014. Dan Kondratyuk, Lijun Yu, Xiuye Gu, Jos ́e Lezama, Jonathan Huang, Grant Schindler, Rachel Hornung, Vighnesh Birodkar, Jimmy Yan, Ming-Chang Chiu, et al. Videopoet: A large language model for zero-shot video generation. arXiv preprint , 2023. Hang Lai, Jiahang Cao, Jiafeng Xu, Hongtao Wu, Yunfeng Lin, Tao Kong, Yong Yu, and Weinan Zhang. World model-based perception for visual legged locomotion. In ICRA, pp. 11531-11537. IEEE, 2025. Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, and Chen Change Loy. Gaussiananything: Interactive point cloud latent diffusion for 3d generation. In ICLR, 2025. Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. arXiv preprint , 2022. Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint , 2017. Yuanxun Lu, Jingyang Zhang, Tian Fang, Jean-Daniel Nahmias, Yanghai Tsin, Long Quan, Xun Cao, Yao Yao, and Shiwei Li. Matrix3d: Large photogrammetry model all-in-one. In CVPR, pp. 11250-11263, 2025. Shitong Luo and Wei Hu. Diffusion probabilistic models for 3d point cloud generation. In CVPR, pp. 2837-2845, 2021. Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99-106, 2021. 11 Preprint Chen Min, Dawei Zhao, Liang Xiao, Jian Zhao, Xinli Xu, Zheng Zhu, Lei Jin, Jianshu Li, Yulan Guo, Junliang Xing, et al. Driveworld: 4d pre-trained scene understanding via world models for autonomous driving. In CVPR, pp. 15522-15533, 2024. OpenAI. Video generation models as world simulators, 2024. URL https://openai.com/ research/video-generation-models-as-world-simulators. William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, pp. 41954205, 2023. Bohao Peng, Xiaoyang Wu, Li Jiang, Yukang Chen, Hengshuang Zhao, Zhuotao Tian, and Jiaya Jia. Oa-cnns: Omni-adaptive sparse cnns for 3d semantic segmentation. In CVPR, pp. 21305-21315, 2024. Xuanchi Ren, Jiahui Huang, Xiaohui Zeng, Ken Museth, Sanja Fidler, and Francis Williams. Xcube: Large-scale 3d generative modeling using sparse voxel hierarchies. In CVPR, pp. 4209-4219, 2024a. Zhiyuan Ren, Minchul Kim, Feng Liu, and Xiaoming Liu. Tiger: Time-varying denoising model for 3d point cloud generation with diffusion process. In CVPR, pp. 9462-9471, 2024b. Zhongwei Ren, Yunchao Wei, Xun Guo, Yao Zhao, Bingyi Kang, Jiashi Feng, and Xiaojie Jin. Videoworld: Exploring knowledge learning from unlabeled videos. In CVPR, pp. 29029-29039, 2025. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj ̈orn Ommer. Highresolution image synthesis with latent diffusion models. In CVPR, pp. 10684-10695, 2022. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint , 2020. Aether Team, Haoyi Zhu, Yifan Wang, Jianjun Zhou, Wenzheng Chang, Yang Zhou, Zizun Li, Junyi Chen, Chunhua Shen, Jiangmiao Pang, et al. Aether: Geometric-aware unified world modeling. arXiv preprint , 2025. Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis, et al. Lion: Latent point diffusion models for 3d shape generation. NIPS, 35:10021-10039, 2022. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NIPS, 30, 2017. Mathias Vogel, Keisuke Tateno, Marc Pollefeys, Federico Tombari, Marie-Julie Rakotosaona, and Francis Engelmann. P2p-bridge: Diffusion bridges for 3d point cloud denoising. In ECCV, pp. 184-201. Springer, 2024. Jianyuan Wang, Minghao Chen, Nikita Karaev, Andrea Vedaldi, Christian Rupprecht, and David Novotny. Vggt: Visual geometry grounded transformer. In CVPR, pp. 5294-5306, 2025a. Qiuheng Wang, Yukai Shi, Jiarong Ou, Rui Chen, Ke Lin, Jiahao Wang, Boyuan Jiang, Haotian Yang, Mingwu Zheng, Xin Tao, et al. Koala-36m: A large-scale video dataset improving consistency between fine-grained conditions and video content. In CVPR, pp. 8428-8437, 2025b. Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In CVPR, pp. 20697-20709, 2024a. Tai Wang, Xiaohan Mao, Chenming Zhu, Runsen Xu, Ruiyuan Lyu, Peisen Li, Xiao Chen, Wenwei Zhang, Kai Chen, Tianfan Xue, et al. Embodiedscan: A holistic multi-modal 3d perception suite towards embodied ai. In CVPR, pp. 19757-19767, 2024b. Xiaofeng Wang, Zheng Zhu, Guan Huang, Xinze Chen, Jiagang Zhu, and Jiwen Lu. Drivedreamer: Towards real-world-drive world models for autonomous driving. In ECCV, pp. 55-72. Springer, 2024c. Tong Wu, Shuai Yang, Ryan Po, Yinghao Xu, Ziwei Liu, Dahua Lin, and Gordon Wetzstein. Video world models with long-term spatial memory. arXiv preprint , 2025. 12 Preprint Xiaoyang Wu, Li Jiang, Peng-Shuai Wang, Zhijian Liu, Xihui Liu, Yu Qiao, Wanli Ouyang, Tong He, and Hengshuang Zhao. Point transformer v3: Simpler faster stronger. In CVPR, pp. 48404851, 2024a. Yuqi Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, and Jiwen Lu. Embodiedocc: Embodied 3d occupancy prediction for vision-based online scene understanding. arXiv preprint , 2024b. Jianfeng Xiang, Zelong Lv, Sicheng Xu, Yu Deng, Ruicheng Wang, Bowen Zhang, Dong Chen, Xin Tong, and Jiaolong Yang. Structured 3d latents for scalable and versatile 3d generation. In CVPR, pp. 21469-21480, 2025. Jiannan Xiang, Guangyi Liu, Yi Gu, Qiyue Gao, Yuting Ning, Yuheng Zha, Zeyu Feng, Tianhua Tao, Shibo Hao, Yemin Shi, et al. Pandora: Towards general world model with natural language actions and video states. arXiv preprint , 2024. Yuanbo Yang, Jiahao Shao, Xinyang Li, Yujun Shen, Andreas Geiger, and Yiyi Liao. Prometheus: 3d-aware latent diffusion models for feed-forward text-to-3d scene generation. In CVPR, pp. 2857-2869, 2025a. Zhongqi Yang, Wenhang Ge, Yuqi Li, Jiaqi Chen, Haoyuan Li, Mengyin An, Fei Kang, Hua Xue, Baixin Xu, Yuyang Yin, et al. Matrix-3d: Omnidirectional explorable 3d world generation. arXiv preprint , 2025b. Shengming Yin, Chenfei Wu, Huan Yang, Jianfeng Wang, Xiaodong Wang, Minheng Ni, Zhengyuan Yang, Linjie Li, Shuguang Liu, Fan Yang, et al. Nuwa-xl: Diffusion over diffusion for extremely long video generation. arXiv preprint , 2023. Hong-Xing Yu, Haoyi Duan, Charles Herrmann, William T Freeman, and Jiajun Wu. Wonderworld: Interactive 3d scene generation from a single image. In CVPR, pp. 5916-5926, 2025. Hongxiao Yu, Yuqi Wang, Yuntao Chen, and Zhaoxiang Zhang. Monocular occupancy prediction for scalable indoor scenes. In ECCV, pp. 38-54. Springer, 2024a. Wangbo Yu, Jinbo Xing, Li Yuan, Wenbo Hu, Xiaoyu Li, Zhipeng Huang, Xiangjun Gao, TienTsin Wong, Ying Shan, and Yonghong Tian. Viewcrafter: Taming video diffusion models for high-fidelity novel view synthesis. arXiv preprint , 2024b. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, pp. 586-595, 2018. Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In CVPR, pp. 16259-16268, 2021. Wenzhao Zheng, Weiliang Chen, Yuanhui Huang, Borui Zhang, Yueqi Duan, and Jiwen Lu. Occworld: Learning a 3d occupancy world model for autonomous driving. In ECCV, pp. 55-72. Springer, 2024a. Wenzhao Zheng, Zetian Xia, Yuanhui Huang, Sicheng Zuo, Jie Zhou, and Jiwen Lu. Doe-1: Closedloop autonomous driving with large world model. arXiv preprint , 2024b. Chenliang Zhou, Fangcheng Zhong, Param Hanji, Zhilin Guo, Kyle Fogarty, Alejandro Sztrajman, Hongyun Gao, and Cengiz Oztireli. Frepolad: Frequency-rectified point latent diffusion for point cloud generation. In ECCV, pp. 434-453. Springer, 2024. 13
2510.14966
Identity-Link IRT for Label-Free LLM Evaluation: Preserving Additivity in TVD-MI Scores Zachary Robertson Computer Science Stanford University zroberts@stanford.edu Abstract Pairwise comparisons of large language models using total variation distance mutual information (TVD-MI) produces binary critic decisions per pair. We show that averaging TVD-MI’s binary trials yields centered-probability scores with additive structure suitable for item-response theory (IRT) without nonlinear link functions. Maximum likelihood approaches to IRT use logistic links, but we find empirically that these transformations introduce curvature that breaks additivity: across three domains, the identity link yields median curl on raw data of 0.080–0.150 (P95=0.474–0.580), whereas probit/logit introduce substantially higher violations (median [0.245,0.588], P95[0.825,2.252]). We derive this clipped- linear model from Gini entropy maximization, yielding a box-constrained least- squares formulation that handles boundary saturation. At 33% coverage, we achieve holdout RMSE 0.117 ± 0.008 while preserving agent rankings (Spearman ρ=0.972 ± 0.015), 3× fewer evaluations than full dense. Judge robustness analysis (GPT-4o-mini vs Llama3-70b) shows strong agreement in agent rankings (ρ=0.872) and consistent identity link advantage. TVD-MI’s geometry is best preserved by identity mapping for efficient LLM evaluation, applicable to other bounded- response domains. 1 Introduction Evaluating large language models (LLMs) at scale is expensive: dense evaluation matrices grow quadratically in models and items. Standard practice—pairwise preferences or mutual-information- based comparisons—effectively requires scoring every agent–item pair, which quickly becomes prohibitive [Chiang et al., 2024]. In peer-prediction settings with no ground truth, one must rely on inter-model agreement patterns [Prelec, 2004, Dasgupta et al., 2013, Xu et al., 2025]. Robertson and Koyejo [2025] propose total-variation mutual information (TVD-MI) for label-free LLM evaluation. TV is the unique f-divergence that is also an IPM; its supremum is attained by a binary critic, so TVD-MI reduces to optimal binary tests distinguishing paired from independent responses [Sriperumbudur et al., 2009, Tsybakov, 2008]. Averaging these binary trials yields agent–item scores S ∈[−1, 1]k×n with one entry sij per (agent i, item j). Items discriminate among agents similarly to classical item response theory (IRT) [Rasch, 1993]. Empirically we find that the additivity appears on the raw TV scale, without nonlinear links: sij ≈θi −bj, (1) with θi a latent ability and bj an item difficulty. In contrast, logistic/probit links curve the geometry and break discrete integrability; the remainder of the paper formalizes and exploits this identity-link regime for sparse, sample-efficient evaluation. Preprint. arXiv:2510.14966v1 [cs.LG] 16 Oct 2025 1.1 Our Contributions TVD-MI evaluation matrices exhibit near-additive structure that standard IRT link functions distort. Identity link dominates for TVD-based scores. Across PubMed/OPUS/ICLR, the identity map yields median curl 0.080–0.150 (P95 = 0.474–0.580), whereas probit/logit induce 2–7× larger vio- lations (median [0.245, 0.588], P95 [0.825, 2.252]). Bootstrap gaps are significant (CIs exclude zero). Baselines corroborate the geometry: our clipped-linear additive model attains the best reconstruction on PubMed (RMSE 0.1215), while unconstrained UV factorization is far worse (RMSE 0.196), demonstrating the necessity of additivity (Section 6.4). Clipped-linear model from Gini entropy. We derive the identity link by projecting TVD-MI scores onto the additive manifold under Gini (Rényi-2) entropy, yielding a box-constrained least- squares estimator that handles saturation. Discrete integrability tests confirm approximate additivity (Section 4, 4.3). Sample-efficient sparse recovery. With 33% coverage and d-core ≥3 connectivity, we achieve holdout RMSE 0.117 ± 0.008 vs. 0.111 ± 0.006 for dense, i.e., ∼3× fewer evaluations with near-perfect ranking preservation (Spearman ρ = 0.972 ± 0.015, Ranking AUC 0.967 ± 0.012 vs. 0.983 ± 0.008). Results transfer across domains and judges (Section 6). 2 Background and Related Work 2.1 Item Response Theory and Matrix Completion Item response theory (IRT) models binary/graded responses through latent traits [Rasch, 1993]. The Rasch (1PL) model assumes logit(P(uij = 1)) = θi −bj, where θi is person ability and bj is item difficulty. This rank-2 structure has invariance properties and enables recovery from incomplete observations via matrix completion [Andersen, 1977, Candes and Recht, 2012]. Recent work applies IRT to LLM evaluation with ground truth labels. Castleman et al. [2025] use IRT to analyze math benchmarks, revealing that many items provide redundant signal. Zhou et al. [2025] show that IRT can identify mislabeled or ambiguous test items in NLP benchmarks. Other works propose standardized selection based on IRT analysis [Ding et al., 2024, Liu et al., 2025]. However, all prior work assumes access to ground truth—a requirement we eliminate through peer prediction while discovering that TVD-MI naturally produces additive structure without logistic links. 2.2 Peer Prediction and TVD-MI Peer-prediction mechanisms elicit information without ground truth [Prelec, 2004, Dasgupta et al., 2013, Xu et al., 2025]. Robertson and Koyejo [2025] introduced TVD mutual information for LLM evaluation without ground truth. For response distributions (X, Y ): ITVD(X; Y ) = TV(PXY , PX ⊗PY ), (2) where PXY is the joint (paired responses) and PX ⊗PY is the product of marginals (independent sources). TVD can be estimated through binary classification [Tsybakov, 2008]. For any decision rule r: TPRr := Pr S∼PXY [r(S) = 1], FPRr := Pr S∼PX⊗PY [r(S) = 1] (3) The difference TPRr −FPRr lower-bounds TVD-MI, with equality at the optimal critic. Lemma 2.1 (Binary optimal critic for total variation). Total variation admits the IPM representation TV(P, Q) = sup ∥h∥∞≤1 EP [h] −EQ[h], and the supremum is achieved by a binary critic h∗(x) ∈{−1, 1} with h∗(x) = sign(p(x) −q(x)) almost everywhere. Moreover, among f-divergences, total variation is (up to a positive multiplicative constant) the only one that is also an IPM over a symmetric convex function class (the L∞unit ball); hence it is the only f-divergence admitting such a bounded, binary optimal critic. This produces agent-item score matrices S ∈[−1, 1]K×J where sij = TPRij −FPRij for K agents and J items. The original formulation requires O(K2J) evaluations. 2 2.3 Why Traditional IRT Links Fail for TVD-MI Classical IRT models assume binary response data generated from a latent logistic process, motivating the logit link ℓ(p) = log(p/(1 −p)) [McCullagh, 2019]. However, TVD-MI scores arise from a different process. Pairwise discrimination: For each item j, we compute TVD-MI between all agent pairs, yielding a K ×K matrix of discrimination scores in [−1, 1]. Linear averaging: Agent i’s score on item j is the average discrimination against all other agents: sij = 1 K−1 P k̸=i TVD-MI(i, k|j). Additivity preservation: If pairwise scores exhibit additive structure (θi −θk + bj), the average inherits this structure in the raw space. This identity-link estimator aligns with Gini/Brier criterion used in proper scoring [Yuan et al., 2021, Glenn et al., 1950]. We validate this empirically in Section 4.3, showing that the identity link yields smaller violations than curved links. 3 Experimental Setup We evaluate 30 agent configurations across three domains: summarization (PubMed, 200 items), translation (OPUS, 186 items), and peer review (ICLR, 100 items). Agents span faithful baselines, stylistic variants, strategic distortions, and low-effort responses (see Appendix A for details). TVD-MI scores are computed through binary classification: an LLM judge (GPT-4o-mini) distin- guishes whether response pairs come from the same agent versus different agents. Agent i’s score on item k averages the discriminative signal (TPR - FPR) across all pairwise comparisons with other agents, producing matrices S ∈[−1, 1]K×J. We reserve 20% of agent-item pairs as a holdout set before computing any scores, ensuring train-test independence (see Appendix B). The resulting matrices exhibit boundary saturation (2-3% of entries at ±1) with mean 0.18, SD 0.31, and natural sparsity from computational constraints (see Appendix C). We use the identity link (no transformation) for our additive model sij = θi −bj, as Section 4.3 shows it outperforms logistic links by 2–7× in preserving discrete integrability. 4 Model and Diagnostics 4.1 Clipped-Linear Model from Gini Entropy We first validate that TVD-MI scores exhibit approximate additive structure through discrete inte- grability tests (Section 4.3). Given this empirical finding, we model the scores through an additive decomposition on the raw scale: sij = θi −bj + ϵij, |sij| ≤1, (4) where θi represents agent i’s latent discriminative ability, bj represents item j’s difficulty, and the box constraint naturally captures saturation at ±1. In this model, a sufficient condition for approximate additivity is that the noise ϵij is small or weakly dependent. Proposition 4.1 (Quadratic (Gini) projection onto the additive manifold). Assume we project scores onto the additive manifold by enforcing the discrete integrability (rectangle) constraints. Consider min sij X i,j s2 ij (5) s.t. |sij| ≤1, sij = tij ∀(i, j) ∈Ω, sij −si′j = sij′ −si′j′ ∀(i, i′, j, j′). (6) Then the optimizer has the additive form sij = θi −bj with |sij| ≤1. The quadratic objective corresponds to maximizing Gini impurity Gini(p) = 1 −P x p(x)2 under the change of variables s = 2p −1, and TV(X; X) = Gini(X) links this criterion to total variation. Proof Sketch. The objective minimizes P s2 ij, which corresponds to maximizing Gini impurity Gini(p) = 2p(1 −p) = 1 2(1 −s2) where s = 2p −1. Crucially, Gini impurity is the natural entropy measure for TVD: Lemma E.1 (Appendix E) shows that TVD-MI(X; X) = Gini(X), establishing that we are maximizing the self-information under total variation distance. The rectangle constraints enforce discrete integrability, which by Lemma F.1 (Appendix F) is equivalent to the additive form sij = θi −bj. The dual problem yields a box-constrained least-squares objective with identity link. 3 This proposition assumes additivity (we study deviations in Section 4.3) and shows that projecting onto the additive manifold under the natural entropy for TVD yields the identity link. This contrasts with Shannon entropy, which yields logistic links that distort the geometry when data is already approximately additive in bounded space. Given observed entries Ω⊆[K] × [J], we estimate parameters via regularized least squares: min θ,b X (i,j)∈Ω sij −(θi −bj) 2 + λ ∥θ∥2 2 + ∥b∥2 2  , (7) with gauge constraint P j bj = 0 for identifiability. The ridge penalty λ(∥θ∥2 2 + ∥b∥2 2) fixes the scale of the latent variables and provides regularization against overfitting in sparse observation regimes. We use λ = 10−6 throughout. This convex program admits closed-form updates via alternating minimization, converging to global optimum in O(|Ω|) time per iteration. 4.2 Discrete Integrability Test The additive model assumes that scores exhibit discrete integrability—the mixed second differences vanish. We formalize this through a rectangle test: Definition 4.2 (Rectangle Deviation). For any rectangle of agents (i, i′) and items (j, j′), the discrete integrability violation is: ∆(i, i′, j, j′) := sij −si′j −sij′ + si′j′. (8) Under perfect additivity, ∆= 0 for all rectangles. The curl measures deviation from the additive manifold. If sij = θi −bj, then: ∆= (θi −bj) −(θi′ −bj) −(θi −bj′) + (θi′ −bj′) (9) = θi −θi′ −θi + θi′ = 0. (10) We empirically evaluate curl magnitude by sampling random rectangles from the observed data. 4.3 Link and Model Ablation We computed discrete integrability violations to test the identity link choice on the raw score matrices across three domains using paired rectangles (20,000 per condition). Figure 1 shows empirical cumulative distribution functions of curl magnitude |∆| = |sij −si′j −sij′ + si′j′| for identity, probit, and logit link-transformed data. Statistical methodology. We ensured fair comparison by using identical rectangle samples across all links within each domain. Statistical significance was assessed via bootstrap resampling (500 iterations): for each bootstrap sample, we resampled agents and items with replacement, then computed median curl on the resampled matrix with newly drawn rectangles. This approach avoids dependence issues that arise from sampling individual rectangles, which share entries and violate independence assumptions. Table 1 shows that identity transformation (i.e., no transformation) achieves the lowest median curl across all domains: 0.129 (PubMed), 0.150 (OPUS), and 0.080 (ICLR). Probit transformation increases median curl by 90–255% (0.245–0.286), while logit produces 2–7× higher violations (0.438–0.588). The 95th percentiles reveal heavier tails: identity P95 [0.474, 0.580] versus logit P95 [1.454, 2.252]. Bootstrap confidence intervals. Table 2 shows that all pairwise differences are statistically signifi- cant. Identity vs probit differences range from ∆∈[−0.112, −0.213] (all CIs exclude zero), while identity vs logit differences are even larger (∆= [−0.298, −0.528]). The consistent pattern across domains supports that TVD-MI’s inherent geometry favors identity mapping. The identity link preserves TVD-MI’s natural additivity, while monotone transformations (probit, logit) distort this geometry even when applied to bounded data. 4 Table 1: Data-space curl on raw score matrices: median and P95 |∆| (lower is better). Identity preserves natural additivity; probit/logit introduce substantial violations. All differences significant via bootstrap (CIs exclude zero). Identity Probit Logit Domain Median P95 Median P95 Median P95 PubMed 0.129 0.474 0.245 0.825 0.438 1.454 OPUS 0.150 0.492 0.268 0.862 0.479 1.547 ICLR 0.080 0.580 0.286 1.188 0.588 2.252 Mean 0.120 0.515 0.266 0.958 0.502 1.751 Table 2: Bootstrap 95% confidence intervals for median curl differences (500 iterations). All comparisons show statistically significant advantages for identity link. Domain Identity vs Probit Identity vs Logit Probit vs Logit PubMed -0.112 [-0.122, -0.097] -0.298 [-0.326, -0.256] -0.185 [-0.204, -0.159] OPUS -0.117 [-0.128, -0.101] -0.322 [-0.346, -0.278] -0.204 [-0.219, -0.176] ICLR -0.213 [-0.228, -0.172] -0.528 [-0.560, -0.426] -0.315 [-0.335, -0.250] 4.4 Connectivity Requirements The observation pattern Ωmust satisfy connectivity constraints for reliable recovery. Minimum degree constraint: Every agent observes ≥d items and every item is observed by ≥d agents. Single component: The bipartite graph formed by Ωis connected. Empirically, we find d = 3 sufficient for stable recovery, providing redundancy against individual noisy measurements while maintaining sparsity. The connectivity ensures that all parameters are anchored to the same gauge, preventing drift between disconnected components. We refer to this as the "d-core ≥3" constraint to avoid confusion with the number of agents K. 5 Sampling and Protocol We evaluate sparse recovery through a train-test protocol with 20% holdout and four sampling strategies: row sampling (agents observe α-fraction of items), column sampling (items evaluated by β-fraction of agents), hybrid sampling (independent pair selection), and efficient (n log n) sampling (|Ω| = C(K + J) log(K + J) pairs). All regimes enforce d-core ≥3 connectivity. We measure reconstruction accuracy (holdout RMSE) and ranking fidelity (Spearman ρ, Kendall τ, Ranking AUC). Uncertainty is quantified through bootstrap resampling (500 iterations). Ranking AUC is constructed from the (model) scores averaged over items i.e. one score per agent used for classification. See Appendix D for full details. 6 Results 6.1 Sample Efficiency Figure 2 shows hold-out RMSE as a function of training coverage across different sampling regimes. The dense baseline (80% training coverage) achieves RMSE of 0.111. The (n log n) regime with C = 1.6 achieves RMSE of 0.117 at 33% coverage, a 3× reduction in required evaluations with 5.4% relative error increase. The row and column sampling strategies perform similarly, both requiring approximately 26% cover- age to achieve RMSE below 0.12. The hybrid regime underperforms at comparable coverage levels, suggesting that structured missingness patterns are more favorable than random sparsity—likely be- cause they guarantee minimum observations per entity. All regimes enforce d-core ≥3 connectivity. 5 Figure 1: Identity link preserves natural additivity of TVD-MI scores. Empirical cumulative distri- bution functions of curl magnitude |∆| on raw score matrices across three domains (20,000 paired rectangles per condition). Identity is leftmost with median curl 0.080–0.150, while curved links shift right with 2–7× higher violations (logit median: 0.438–0.588). TVD-MI averaging produces inherently additive data in [−1, 1] space; nonlinear links warp this geometry. (a) All sampling regimes (b) (n log n) regime detail Figure 2: Hold-out RMSE vs training coverage on PubMed (30×200). (a) The (n log n) regime (orange) achieves near-dense accuracy at 33% coverage across all sampling strategies. (b) Zooming into (n log n): the sweet spot occurs at C ∈[2, 3], balancing coverage and accuracy. Below C = 1, performance degrades rapidly as connectivity becomes harder to maintain. All sparse regimes enforce d-core ≥3. 6.2 Ranking Fidelity Table 3 shows that sparse recovery at 33% coverage achieves minimal reconstruction error while perfectly preserving agent rankings: Agent rankings are nearly perfectly preserved (Spearman ρ = 0.972 [0.942, 0.986], Kendall τ = 0.890 [0.833, 0.933]), ensuring that relative model comparisons remain valid. The high Ranking AUC (0.967 [0.942, 0.983] for sparse vs 0.983 [0.967, 0.992] for dense) confirms that we can still distinguish high-quality from problematic agents despite 3× reduction in observations. The 5.4% increase in hold-out RMSE (0.111 [0.107, 0.116] to 0.117 [0.113, 0.122]) represents minor reconstruction noise that does not affect downstream ranking tasks. The narrow confidence intervals show that these results are stable across different training samples. 6 Table 3: Fidelity metrics at 33% coverage ((n log n) with C = 2) compared to dense baseline on PubMed domain (30 agents × 200 items). Hold-out RMSE measures reconstruction accuracy on 20% reserved test data. Ranking AUC tests whether per-agent average scores can distinguish faithful from problematic agents. We use all 30 agents for model fitting; the ranking evaluation focuses on a subset of 4 faithful and 15 problematic agents identified in prior work [Robertson and Koyejo, 2025]. 95% confidence intervals from 500 bootstrap samples. Metric Dense Sparse (33%) Reconstruction Accuracy Hold-out RMSE 0.111 [0.107, 0.116] 0.117 [0.113, 0.122] Relative increase – +5.4% Ranking Fidelity Spearman ρ – 0.972 [0.942, 0.986] Kendall τ – 0.890 [0.833, 0.933] Ranking AUC 0.983 [0.967, 0.992] 0.967 [0.942, 0.983] 6.3 Cross-Domain Validation Table 4 confirms that our findings generalize beyond summarization. All domains show consistent patterns: 3× coverage reduction with small absolute RMSE increases (∆ranging from +0.003 to +0.006), and strong rank preservation (Spearman ρ > 0.95). Ranking AUC varies by domain (0.967 for PUBMED, 0.938 for OPUS, 0.646 for ICLR), reflecting inherent differences in agent discriminability rather than failures of sparse recovery—these values represent the ranking quality achievable even with dense evaluation in each domain. Table 4: Cross-domain validation at 33% coverage. All domains show minimal RMSE increase and strong rank preservation. Ranking AUC depends on the inherent discriminability of each domain’s agent pool. Domain Agents Items Dense Sparse RMSE ∆ Spearman Ranking RMSE RMSE ρ AUC PUBMED 30 200 0.111 0.117 +0.006 (+5%) 0.972 0.967 OPUS 31 186 0.123 0.126 +0.003 (+2%) 0.983 0.938 ICLR 29 100 0.129 0.135 +0.006 (+4%) 0.950 0.646 6.4 Baseline Comparison To validate that the additive constraint and identity link are essential design choices, we compare our clipped-linear model against four baselines at 33% coverage on PubMed. Table 5 shows results. Table 5: Baseline comparison at 33% coverage on PubMed (30 agents × 200 items). Clipped-linear (identity link) achieves best RMSE while non-additive UV factorization shows 61% higher error. SVD baseline’s poor reconstruction (0.172) demonstrates that simple mean imputation fails without the additive constraint. Method RMSE Spearman Kendall AUC ρ τ Clipped-Linear (Ours) 0.121 0.971 0.894 0.967 Identity + Isotonic (calibrated monotone mapping) 0.122 0.971 0.894 0.967 Rasch (Probit) 0.122 0.965 0.876 0.983 Rasch (Logit) 0.123 0.964 0.871 0.983 Nuclear Norm 0.126 0.974 0.894 0.950 SVD Baseline 0.172 0.923 0.807 0.967 UV Factorization 0.196 0.983 0.913 0.967 7 Identity link achieves best reconstruction. Our clipped-linear model with identity link achieves the lowest holdout RMSE (0.121), making it competitive with Rasch baselines. The isotonic regression variant (0.122) offers no measurable gain, confirming that a learned monotone calibration does not capture additional curvature. The identity link already linearizes the response space (Section 4.3). Additive constraint is essential. The non-additive UV factorization baseline, which fits S ≈UV T without constraining to the form θi −bj, achieves RMSE of 0.196 compared to our clipped-linear model (0.121). The SVD baseline with mean imputation also performs poorly (0.172), showing that simple low-rank structure without the additive constraint fails to capture TVD-MI geometry. While both maintain reasonable correlation and AUC their poor reconstruction confirms that additivity captures core properties of TVD-MI data. Computational efficiency. The Rasch methods are significantly faster than other baselines making them practical for large-scale evaluation. 6.5 Judge Robustness To validate that our findings are not artifacts of a specific judge model, we repeated the analysis using Llama3-70b (70 billion parameters) as an alternative judge on a subset of PubMed data (100 items, 30 agents). Table 6 shows strong cross-judge agreement. Table 6: Judge robustness: GPT-4o-mini vs Llama3-70b on PubMed. Strong agreement in agent rankings (Spearman ρ=0.872) shows that TVD-MI captures genuine quality differences. Both judges show identity achieving zero curl on predictions while curved links introduce consistent violations. Metric GPT-4o-mini Llama3-70b Cross-Judge Agreement Agent ranking correlation ρ = 0.872 Item difficulty correlation ρ = 0.687 Curl Statistics (Identity Link) Median |∆| (predictions) 0.000 0.000 P95 |∆| (predictions) 0.000 0.000 Median |∆| (raw data) 0.129 0.129 P95 |∆| (raw data) 0.466 0.448 Curl Statistics (Logit Link) Median |∆| (predictions) 0.009 0.009 P95 |∆| (predictions) 0.072 0.070 Median |∆| (raw data) 0.438 0.431 P95 |∆| (raw data) 1.433 1.346 The high correlation in agent rankings (Spearman ρ = 0.872) indicates genuine quality differences rather than judge-specific biases. Both judges show identity achieving zero median curl on predictions while curved links introduce consistent violations (|∆| ≈0.01), confirming that TVD-MI’s additive geometry is best preserved without transformation across judge models. The moderate correlation in item difficulties (ρ = 0.687) shows that different judges may find different items challenging, which is expected given model-specific failure modes. However, the consistency in agent rankings and link ablation results supports the robustness of our approach. 7 Discussion 7.1 Why Does Additive Structure Emerge? The identity link works because TVD-MI scores exhibit additive structure in their raw space. This arises from the averaging operation: if pairwise TVD-MI scores have form θi + θk −bj, then agent i’s average score is θi + ¯ θ−i −bj ≈θi + ¯θ −bj, preserving additivity. Traditional IRT’s logistic links are designed for binary outcomes generated by latent Gaussian processes and maximize Shannon entropy. However, TVD-MI has its own natural entropy measure: 8 Lemma E.1 shows that TVD-MI(X; X) = Gini(X), establishing Gini (Rényi-2) as the principled entropy for total variation distance. Applying logistic links—which maximize Shannon entropy—to TVD-MI introduces unnecessary distortion: identity yields zero median curl on predictions while curved links produce consistent violations, Table 1). The clipped-linear model preserves TVD-MI’s natural geometry by maximizing its native entropy measure. 7.2 Limitations Sparse recovery requires: (1) meaningful quality variation among agents, (2) d-core ≥3 connectivity (difficult to maintain below 15% coverage), and (3) approximate additivity. Domains with strong agent-item interactions will show larger curl and degraded recovery; our cross-domain validation suggests this is rare for current general-purpose LLMs. 7.3 Practical Recommendations Based on our empirical findings, we offer the following guidance for practitioners. Use (n log n) sampling with C ∈[1.5, 3]: This regime consistently achieves the best coverage-accuracy tradeoff, requiring only 30-40% of full evaluation cost while maintaining fidelity. Validate with discrete integrability test: Before committing to sparse evaluation, test additivity on a small dense subset using the identity link. Median |∆| < 0.2 indicates suitable structure for sparse recovery. Use identity link for TVD-MI: Our ablation study (Section 4.3) shows that the identity link achieves 2–7× lower curl than logistic alternatives, preserving TVD-MI’s additive structure. 8 Reproducibility All code, data preprocessing scripts, experimental masks, and analysis are available at https: //github.com/zrobertson466920/sparse-tvd-irt. Response data and LLM judge data are from Robertson and Koyejo [2025]. 9 Conclusion We show that TVD-MI averaging naturally produces additive structure in the raw [−1, 1] space, which traditional logistic links destroy. Identity link preserves this geometry, while probit/logit introduce 2–7× higher violations across three domains. We show this enables up to 3× fewer evaluations than full dense (33% coverage) with marginal increase in the downstream peer-prediction task. Our clipped-linear model, derived from Gini entropy maximization, preserves TVD-MI’s natural geometry and generalizes beyond LLM evaluation to any bounded-response domain where scores arise from averaging operations. The discrete integrability test is a simple diagnostic for validating link choices empirically, an alternative to the default use of logistic links in psychometrics for non-traditional response formats. 9 References Erling B Andersen. Sufficient statistics and latent trait models. Psychometrika, 42(1):69–81, 1977. Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. Commu- nications of the ACM, 55(6):111–119, 2012. Jane Castleman, Nimra Nadeem, Tanvi Namjoshi, and Lydia T Liu. Rethinking math benchmarks for llms using irt. Proceedings of Machine Learning Research, 273:66–82, 2025. Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Banghua Zhu, Hao Zhang, Michael Jordan, Joseph E Gonzalez, et al. Chatbot arena: An open platform for evaluating llms by human preference. In Forty-first International Conference on Machine Learning, 2024. Anirban Dasgupta, Arpita Ghosh, and Kamesh Munagala. Crowdsourced judgement elicitation with endogenous proficiency. Proceedings of the 22nd International Conference on World Wide Web, pages 319–330, 2013. Mucong Ding, Chenghao Deng, Jocelyn Choo, Zichu Wu, Aakriti Agrawal, Avi Schwarzschild, Tianyi Zhou, Tom Goldstein, John Langford, Animashree Anandkumar, et al. Easy2hard-bench: Standardized difficulty labels for profiling llm performance and generalization. Advances in Neural Information Processing Systems, 37:44323–44365, 2024. W Brier Glenn et al. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1–3, 1950. Yunting Liu, Shreya Bhandari, and Zachary A Pardos. Leveraging llm respondents for item evaluation: A psychometric analysis. British Journal of Educational Technology, 56(3):1028–1052, 2025. Peter McCullagh. Generalized linear models. Routledge, 2019. Dražen Prelec. A bayesian truth serum for subjective data. Science, 306(5695):462–466, 2004. Georg Rasch. Probabilistic Models for Some Intelligence and Attainment Tests. ERIC, 1993. Zachary Robertson and Sanmi Koyejo. Let’s measure information step-by-step: Llm-based evaluation beyond vibes. arXiv preprint arXiv:2508.05469, 2025. Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On integral probability metrics,\phi-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009. Alexandre B. Tsybakov. Nonparametric estimators. In Introduction to Nonparametric Estimation, pages 1–76. Springer, 2008. Shengwei Xu, Yuxuan Lu, Grant Schoenebeck, and Yuqing Kong. Benchmarking llms’ judgments with no gold standard. In The Thirteenth International Conference on Learning Representations, 2025. Ye Yuan, Liji Wu, and Xiangmin Zhang. Gini-impurity index analysis. IEEE Transactions on Information Forensics and Security, 16:3154–3169, 2021. Hongli Zhou, Hui Huang, Ziqing Zhao, Lvyuan Han, Huicheng Wang, Kehai Chen, Muyun Yang, Wei Bao, Jian Dong, Bing Xu, et al. Lost in benchmarks? rethinking large language model benchmarking with item response theory. arXiv preprint arXiv:2505.15055, 2025. 10 A Agent Configurations We evaluate 30 agent configurations across three domains: Faithful baselines (4 agents): GPT-4, Claude-3-Opus, Gemini-1.5-Pro, and Llama-3-70b with standard prompts. Stylistic variants (6 agents): Temperature variations (0.3, 0.7, 1.0), length constraints (concise, verbose), and formatting modifications. Strategic distortions (15 agents): Paraphrasing, synonym substitution, sentence reordering, adding filler content, selective omission, and combinations thereof. Low-effort responses (5 agents): Truncated outputs, random text, template responses, and minimal- effort completions. This agent pool provides sufficient quality variation to test discriminative evaluation while including realistic failure modes. B TVD-MI Score Construction For each agent pair (i, j) and item k, we compute TVD-MI scores through binary classification. An LLM judge (GPT-4o-mini) distinguishes whether response pairs come from the same agent (positive) or different agents (negative), yielding true and false positive rates (TPR, FPR). The TVD-MI score for agent i on item k is: sik = 1 K −1 X j̸=i (TPRijk −FPRijk), (11) where K = 30 is the number of agents. This averages the discriminative signal across all other agents, producing matrices S ∈[−1, 1]K×J. Train-test independence protocol. We construct the holdout set by randomly selecting 20% of all possible agent-item pairs (i, k) before computing any TVD-MI scores. For training, we compute sik using only the pairwise comparisons (i, j, k) where (i, k) is not in the holdout set. Critically, if agent i on item k is held out, we exclude all pairwise terms (TPRijk −FPRijk) for any j from the averaging operation. This ensures the training score sik for held-out pairs is never computed, preventing test data leakage. C Data Characteristics The resulting matrices exhibit three properties that inform our modeling: Boundary saturation. Approximately 2-3% of entries saturate at ±1 across domains, occurring when agents achieve perfect discrimination (expert agents on easy items) or complete failure (degenerate responses). We clip scores to [−0.99, 0.99] before applying the link function, preventing undefined values while preserving rank ordering. Score distributions. Scores center around 0.18 (SD = 0.31) with heavier-than-Gaussian tails, capturing both typical moderate discrimination and exceptional performance/failure modes. Natural sparsity. Computational constraints in the original evaluation left 6-13% of entries unevalu- ated across domains, providing natural held-out test data. This natural sparsity pattern differs from our structured sampling regimes (Section 5) but shows that the evaluation framework can handle incomplete observations. D Sampling and Protocol Details D.1 Evaluation Framework We evaluate sparse recovery through a train-test protocol designed to mirror practical deployment scenarios. Holdout construction: We reserve 20% of all possible agent-item pairs as a disjoint test set, ensuring no overlap with training observations. This holdout remains fixed across all experiments. 11 Sparse sampling regimes: From the remaining 80% of pairs, we construct training masks under different sparsity patterns (described below), each enforcing d-core ≥3 connectivity. Model fitting: We fit the clipped-linear model (Eq. 7) on the sparse training data with regularization λ = 10−6. Evaluation: We measure hold-out RMSE and downstream fidelity metrics on the reserved test pairs. D.2 Structured Sparsity Regimes We investigate four sampling strategies that reflect different practical constraints: Row sampling (α-coverage). Each agent observes a random α-fraction of items. This models scenarios where agents have limited evaluation budgets. We test α ∈{0.15, 0.30, 0.45}. Column sampling (β-coverage). Each item is evaluated by a random β-fraction of agents. This models scenarios where items have evaluation quotas. We test β ∈{0.15, 0.30, 0.45}. Hybrid sampling. Each pair (i, j) is observed independently with probability α · β. This provides a baseline for unstructured sparsity. Efficient (n log n) sampling. We observe |Ω| = C(K + J) log(K + J) randomly selected pairs, motivated by matrix completion theory. We sweep C ∈{0.5, 1.0, 2.0, 3.0, 5.0} to identify the coverage-accuracy frontier. For all regimes, we enforce d-core ≥3 connectivity by iteratively adding random observations to under-connected nodes until the constraint is satisfied. D.3 Fidelity Metrics We evaluate sparse recovery through two complementary metrics: Reconstruction accuracy (RMSE). We measure hold-out root mean squared error between pre- dicted and observed scores on the 20% reserved test set. This quantifies how well the sparse model recovers the underlying score matrix. Ranking fidelity. Following the evaluation protocol in Robertson and Koyejo [2025], we partition agents into "faithful" (high-quality) and "problematic" (distorted/low-effort) groups. We then evaluate ranking preservation through: • Rank correlation: Spearman’s ρ and Kendall’s τ between agent abilities θ estimated from dense versus sparse data • Ranking AUC: For each agent, we average their scores across all evaluated items. We then compute AUC for the binary classification task: can we distinguish faithful from problematic agents using these per-agent average scores? These metrics directly test whether sparse recovery preserves the core utility of TVD-MI evaluation: the ability to rank and compare models. D.4 Statistical Inference We quantify uncertainty through bootstrap resampling with 500 iterations. For each bootstrap sample, we: 1. Resample training pairs with replacement (separately for dense and sparse masks) 2. Refit the clipped-linear model on the bootstrap training data 3. Evaluate on the original (non-resampled) holdout set 4. Compute all metrics (RMSE, rank correlations, AUC) This procedure captures model uncertainty arising from finite training samples. By resampling the training data rather than the test set, we obtain realistic confidence intervals that reflect variability in 12 parameter estimates. We report 95% confidence intervals as the 2.5th and 97.5th percentiles of the bootstrap distribution. E TVD-MI and Gini Entropy Connection We first establish that Gini entropy is the natural entropy measure for total variation distance. Lemma E.1 (TVD Self-Information Equals Gini Entropy). For a discrete random variable X with distribution PX, the TVD mutual information between X and itself satisfies: TVD-MI(X; X) = Gini(X) = 1 − X x PX(x)2 (12) Proof. When Y = X, the joint distribution PXX concentrates on the diagonal: PXX(x, x) = PX(x), PXX(x, y) = 0 for x ̸= y. By definition: TVD-MI(X; X) = 1 2 X x,y |PXX(x, y) −PX(x)PX(y)| (13) = 1 2  X x |PX(x) −PX(x)2| + X x̸=y PX(x)PX(y)   (14) The diagonal terms give P x PX(x)(1 −PX(x)) and the off-diagonal terms give 1 −P x PX(x)2. Combining: TVD-MI(X; X) = 1 2 "X x PX(x) − X x PX(x)2 + 1 − X x PX(x)2 # (15) = 1 − X x PX(x)2 = Gini(X) (16) Maximizing Gini entropy is equivalent to maximizing TVD self-information, the natural entropy measure for TVD-MI scores.. F Maximum Gini Entropy Derivation We now provide the complete derivation showing that Gini entropy maximization yields the clipped- linear model with identity link. F.1 Primal Formulation We validate empirically through discrete integrability tests (Section 4.3 of main text) that TVD-MI scores exhibit approximate additive structure. We think of this as (curl) deviation from an idealized assumption. Assumption: We want to find the best additive approximation sij ∈[−1, 1] for all agent-item pairs, subject to: 1. Observational consistency: sij = tij for all (i, j) ∈Ω(observed pairs) 2. Discrete integrability: sij −si′j = sij′ −si′j′ for all rectangles Using Gini (Rényi-2) entropy as the projection criterion, we minimize: min s X (i,j) s2 ij (17) subject to |sij| ≤1 and sij = tij for (i, j) ∈Ω. By the discrete integrability lemma (Lemma F.1 below), the rectangle constraints are satisfied if and only if sij = θi −bj for some parameters θi, bj. This a derivation from first principles of TVD-MI based entropy is Gini entropy (Appendix E) which reduces to projection onto the additive manifold under a quadratic objective. 13 F.2 Discrete Integrability Lemma Lemma F.1 (Discrete Integrability). A matrix sij satisfies the rectangle constraints sij −si′j = sij′ −si′j′ ∀(i, i′, j, j′) (18) if and only if there exist parameters θi, bj ∈R such that sij = θi −bj. Proof. Sufficiency: If sij = θi −bj, then: sij −si′j −sij′ + si′j′ = (θi −bj) −(θi′ −bj) −(θi −bj′) + (θi′ −bj′) (19) = θi −θi′ −θi + θi′ = 0 (20) Necessity: Fix a reference pair (i0, j0). Define: θi := sij0 −si0j0, bj := −si0j (21) Then for any (i, j), by the rectangle constraint with (i, i0, j0, j): sij −si0j = sij0 −si0j0 (22) sij = (sij0 −si0j0) + si0j = θi −bj (23) F.3 Dual Formulation via Convex Conjugate To derive the dual, we compute the convex conjugate of 1 2s2 subject to |s| ≤1: ϕ∗(u) := sup |s|≤1  us −1 2s2  (24) For |u| ≤1, the supremum is attained at s = u (interior), giving ϕ∗(u) = u2 −1 2u2 = 1 2u2. For |u| > 1, the supremum is attained at s = sign(u) (boundary), giving ϕ∗(u) = |u| −1 2. Thus: ϕ∗(u) =  1 2u2, |u| ≤1 |u| −1 2, |u| > 1 (25) This is the Huber loss with threshold 1. F.4 Lagrangian, Integrability, and the Identity Link Let ϕ(s) = 1 2s2 + ι[−1,1](s), where ιC is the indicator of a constraint set C. Let D be the rectangle (curl) operator so that Ds = 0 iff sij = θi −bj for some (θ, b) (Lemma F.1). We solve the projection min s X i,j ϕ(sij) s.t. PΩs = t, Ds = 0. Introducing Lagrange multipliers λ for PΩs = t and µ for Ds = 0, and defining u := P ⊤ Ωλ −D⊤µ, the Lagrangian is L(s, λ, µ) = X i,j ϕ(sij) −⟨u, s⟩+ ⟨λ, t⟩. Minimizing over s is separable entrywise and yields the conjugate ϕ∗: inf s L = ⟨λ, t⟩− X i,j ϕ∗(uij), ϕ∗(u) = ( 1 2u2, |u| ≤1, |u| −1 2, |u| > 1. Hence the dual is max λ,µ ⟨λ, t⟩− X i,j ϕ∗(P ⊤ Ωλ −D⊤µ)ij  . (D) 14 Projection interpretation. For fixed λ, the quantity inside the sum depends on u = P ⊤ Ωλ −D⊤µ. Because im(D⊤) = (ker D)⊥, varying µ moves u within the affine space P ⊤ Ωλ + im(−D⊤). Since ϕ∗is convex and radially nondecreasing in |u|, minimizing P i,j ϕ∗(uij) over this affine space selects the element of smallest Euclidean norm i.e., the orthogonal projection of P ⊤ Ωλ onto ker D. Thus the optimizer satisfies u⋆= P ⊤ Ωλ −D⊤µ⋆∈ker D, so there exist parameters θ, b such that u⋆ ij = θi −bj. Recovering the primal in additive form. The KKT stationarity condition for s gives uij ∈∂ϕ(sij), implying uij = sij when |sij| < 1 and uij has the same sign with |uij| ≥1 when |sij| = 1. Together with primal feasibility PΩs = t and Ds = 0, we obtain sij = θi −bj up to clipping at the box boundaries. Relaxing the hard constraint PΩs = t to a quadratic penalty for noisy data yields min θ,b X (i,j)∈Ω 1 2 tij −(θi −bj) 2 + X i,j ι[−1,1](θi −bj) + λ(∥θ∥2 2 + ∥b∥2 2), which is the box-constrained least-squares objective (Eq. 7) with the identity link. F.5 Comparison to Shannon Entropy For comparison, Shannon entropy H(p) = −p log p −(1 −p) log(1 −p) yields the logistic link. The first-order condition for maximizing Shannon entropy subject to linear constraints gives: ∂H ∂p = −log p + log(1 −p) = λ ⇒ p = 1 1 + e−λ = logit−1(λ) (26) This motivates the logit link logit(p) = log(p/(1 −p)) in traditional IRT. However, for TVD-MI scores that are already bounded and arise from averaging (not thresholding), the Gini entropy’s quadratic structure is more natural and preserves additivity without transformation. 15
Identity-Link IRT for Label-Free LLM Evaluation: Preserving Additivity in TVD-MI Scores Zachary Robertson Computer Science Stanford University Abstract Pairwise comparisons of large language models using total variation distance mutual information (TVD-MI) produces binary critic decisions per pair. We show that averaging TVD-MI's binary trials yields centered-probability scores with additive structure suitable for item-response theory (IRT) without nonlinear link functions. Maximum likelihood approaches to IRT use logistic links, but we find empirically that these transformations introduce curvature that breaks additivity: across three domains, the identity link yields median curl on raw data of 0.080-0.150 (P95=0.474-0.580), whereas probit/logit introduce substantially higher violations (median [0.245,0.588], P95[0.825,2.252]). We derive this clippedlinear model from Gini entropy maximization, yielding a box-constrained leastsquares formulation that handles boundary saturation. At 33% coverage, we achieve holdout RMSE 0.117 ± 0.008 while preserving agent rankings (Spearman ρ=0.972 ± 0.015), 3× fewer evaluations than full dense. Judge robustness analysis (GPT-4o-mini vs Llama3-70b) shows strong agreement in agent rankings (ρ=0.872) and consistent identity link advantage. TVD-MI's geometry is best preserved by identity mapping for efficient LLM evaluation, applicable to other boundedresponse domains. 1 Introduction Evaluating large language models (LLMs) at scale is expensive: dense evaluation matrices grow quadratically in models and items. Standard practice-pairwise preferences or mutual-informationbased comparisons-effectively requires scoring every agent-item pair, which quickly becomes prohibitive [Chiang et al., 2024]. In peer-prediction settings with no ground truth, one must rely on inter-model agreement patterns [Prelec, 2004, Dasgupta et al., 2013, Xu et al., 2025]. Robertson and Koyejo [2025] propose total-variation mutual information (TVD-MI) for label-free LLM evaluation. TV is the unique f-divergence that is also an IPM; its supremum is attained by a binary critic, so TVD-MI reduces to optimal binary tests distinguishing paired from independent responses [Sriperumbudur et al., 2009, Tsybakov, 2008]. Averaging these binary trials yields agent-item scores S ∈[-1, 1]k×n with one entry sij per (agent i, item j). Items discriminate among agents similarly to classical item response theory (IRT) [Rasch, 1993]. Empirically we find that the additivity appears on the raw TV scale, without nonlinear links: sij ≈θi -bj, (1) with θi a latent ability and bj an item difficulty. In contrast, logistic/probit links curve the geometry and break discrete integrability; the remainder of the paper formalizes and exploits this identity-link regime for sparse, sample-efficient evaluation. Preprint. 16 Oct 2025 1.1 Our Contributions TVD-MI evaluation matrices exhibit near-additive structure that standard IRT link functions distort. Identity link dominates for TVD-based scores. Across PubMed/OPUS/ICLR, the identity map yields median curl 0.080-0.150 (P95 = 0.474-0.580), whereas probit/logit induce 2-7× larger violations (median [0.245, 0.588], P95 [0.825, 2.252]). Bootstrap gaps are significant (CIs exclude zero). Baselines corroborate the geometry: our clipped-linear additive model attains the best reconstruction on PubMed (RMSE 0.1215), while unconstrained UV factorization is far worse (RMSE 0.196), demonstrating the necessity of additivity (Section 6.4). Clipped-linear model from Gini entropy. We derive the identity link by projecting TVD-MI scores onto the additive manifold under Gini (Rényi-2) entropy, yielding a box-constrained leastsquares estimator that handles saturation. Discrete integrability tests confirm approximate additivity (Section 4, 4.3). Sample-efficient sparse recovery. With 33% coverage and d-core ≥3 connectivity, we achieve holdout RMSE 0.117 ± 0.008 vs. 0.111 ± 0.006 for dense, i.e., ∼3× fewer evaluations with near-perfect ranking preservation (Spearman ρ = 0.972 ± 0.015, Ranking AUC 0.967 ± 0.012 vs. 0.983 ± 0.008). Results transfer across domains and judges (Section 6). 2 Background and Related Work 2.1 Item Response Theory and Matrix Completion Item response theory (IRT) models binary/graded responses through latent traits [Rasch, 1993]. The Rasch (1PL) model assumes logit(P(uij = 1)) = θi -bj, where θi is person ability and bj is item difficulty. This rank-2 structure has invariance properties and enables recovery from incomplete observations via matrix completion [Andersen, 1977, Candes and Recht, 2012]. Recent work applies IRT to LLM evaluation with ground truth labels. Castleman et al. [2025] use IRT to analyze math benchmarks, revealing that many items provide redundant signal. Zhou et al. [2025] show that IRT can identify mislabeled or ambiguous test items in NLP benchmarks. Other works propose standardized selection based on IRT analysis [Ding et al., 2024, Liu et al., 2025]. However, all prior work assumes access to ground truth-a requirement we eliminate through peer prediction while discovering that TVD-MI naturally produces additive structure without logistic links. 2.2 Peer Prediction and TVD-MI Peer-prediction mechanisms elicit information without ground truth [Prelec, 2004, Dasgupta et al., 2013, Xu et al., 2025]. Robertson and Koyejo [2025] introduced TVD mutual information for LLM evaluation without ground truth. For response distributions (X, Y ): ITVD(X; Y ) = TV(PXY , PX ⊗PY ), (2) where PXY is the joint (paired responses) and PX ⊗PY is the product of marginals (independent sources). TVD can be estimated through binary classification [Tsybakov, 2008]. For any decision rule r: TPRr := Pr S∼PXY [r(S) = 1], FPRr := Pr S∼PX⊗PY [r(S) = 1] (3) The difference TPRr -FPRr lower-bounds TVD-MI, with equality at the optimal critic. Lemma 2.1 (Binary optimal critic for total variation). Total variation admits the IPM representation TV(P, Q) = sup ∥h∥∞≤1 EP [h] -EQ[h], and the supremum is achieved by a binary critic h∗(x) ∈{-1, 1} with h∗(x) = sign(p(x) -q(x)) almost everywhere. Moreover, among f-divergences, total variation is (up to a positive multiplicative constant) the only one that is also an IPM over a symmetric convex function class (the L∞unit ball); hence it is the only f-divergence admitting such a bounded, binary optimal critic. This produces agent-item score matrices S ∈[-1, 1]K×J where sij = TPRij -FPRij for K agents and J items. The original formulation requires O(K2J) evaluations. 2 2.3 Why Traditional IRT Links Fail for TVD-MI Classical IRT models assume binary response data generated from a latent logistic process, motivating the logit link l(p) = log(p/(1 -p)) [McCullagh, 2019]. However, TVD-MI scores arise from a different process. Pairwise discrimination: For each item j, we compute TVD-MI between all agent pairs, yielding a K ×K matrix of discrimination scores in [-1, 1]. Linear averaging: Agent i's score on item j is the average discrimination against all other agents: sij = 1 K-1 P k̸=i TVD-MI(i, k|j). Additivity preservation: If pairwise scores exhibit additive structure (θi -θk + bj), the average inherits this structure in the raw space. This identity-link estimator aligns with Gini/Brier criterion used in proper scoring [Yuan et al., 2021, Glenn et al., 1950]. We validate this empirically in Section 4.3, showing that the identity link yields smaller violations than curved links. 3 Experimental Setup We evaluate 30 agent configurations across three domains: summarization (PubMed, 200 items), translation (OPUS, 186 items), and peer review (ICLR, 100 items). Agents span faithful baselines, stylistic variants, strategic distortions, and low-effort responses (see Appendix A for details). TVD-MI scores are computed through binary classification: an LLM judge (GPT-4o-mini) distinguishes whether response pairs come from the same agent versus different agents. Agent i's score on item k averages the discriminative signal (TPR - FPR) across all pairwise comparisons with other agents, producing matrices S ∈[-1, 1]K×J. We reserve 20% of agent-item pairs as a holdout set before computing any scores, ensuring train-test independence (see Appendix B). The resulting matrices exhibit boundary saturation (2-3% of entries at ±1) with mean 0.18, SD 0.31, and natural sparsity from computational constraints (see Appendix C). We use the identity link (no transformation) for our additive model sij = θi -bj, as Section 4.3 shows it outperforms logistic links by 2-7× in preserving discrete integrability. 4 Model and Diagnostics 4.1 Clipped-Linear Model from Gini Entropy We first validate that TVD-MI scores exhibit approximate additive structure through discrete integrability tests (Section 4.3). Given this empirical finding, we model the scores through an additive decomposition on the raw scale: sij = θi -bj + εij, |sij| ≤1, (4) where θi represents agent i's latent discriminative ability, bj represents item j's difficulty, and the box constraint naturally captures saturation at ±1. In this model, a sufficient condition for approximate additivity is that the noise εij is small or weakly dependent. Proposition 4.1 (Quadratic (Gini) projection onto the additive manifold). Assume we project scores onto the additive manifold by enforcing the discrete integrability (rectangle) constraints. Consider min sij X i,j s2 ij (5) s.t. |sij| ≤1, sij = tij ∀(i, j) ∈Ω, sij -si′j = sij′ -si′j′ ∀(i, i′, j, j′). (6) Then the optimizer has the additive form sij = θi -bj with |sij| ≤1. The quadratic objective corresponds to maximizing Gini impurity Gini(p) = 1 -P x p(x)2 under the change of variables s = 2p -1, and TV(X; X) = Gini(X) links this criterion to total variation. Proof Sketch. The objective minimizes P s2 ij, which corresponds to maximizing Gini impurity Gini(p) = 2p(1 -p) = 1 2(1 -s2) where s = 2p -1. Crucially, Gini impurity is the natural entropy measure for TVD: Lemma E.1 (Appendix E) shows that TVD-MI(X; X) = Gini(X), establishing that we are maximizing the self-information under total variation distance. The rectangle constraints enforce discrete integrability, which by Lemma F.1 (Appendix F) is equivalent to the additive form sij = θi -bj. The dual problem yields a box-constrained least-squares objective with identity link. 3 This proposition assumes additivity (we study deviations in Section 4.3) and shows that projecting onto the additive manifold under the natural entropy for TVD yields the identity link. This contrasts with Shannon entropy, which yields logistic links that distort the geometry when data is already approximately additive in bounded space. Given observed entries Ω⊆[K] × [J], we estimate parameters via regularized least squares: min θ,b X (i,j)∈Ω sij -(θi -bj) 2 + λ ∥θ∥2 2 + ∥b∥2 2 , (7) with gauge constraint P j bj = 0 for identifiability. The ridge penalty λ(∥θ∥2 2 + ∥b∥2 2) fixes the scale of the latent variables and provides regularization against overfitting in sparse observation regimes. We use λ = 10-6 throughout. This convex program admits closed-form updates via alternating minimization, converging to global optimum in O(|Ω|) time per iteration. 4.2 Discrete Integrability Test The additive model assumes that scores exhibit discrete integrability-the mixed second differences vanish. We formalize this through a rectangle test: Definition 4.2 (Rectangle Deviation). For any rectangle of agents (i, i′) and items (j, j′), the discrete integrability violation is: ∆(i, i′, j, j′) := sij -si′j -sij′ + si′j′. (8) Under perfect additivity, ∆= 0 for all rectangles. The curl measures deviation from the additive manifold. If sij = θi -bj, then: ∆= (θi -bj) -(θi′ -bj) -(θi -bj′) + (θi′ -bj′) (9) = θi -θi′ -θi + θi′ = 0. (10) We empirically evaluate curl magnitude by sampling random rectangles from the observed data. 4.3 Link and Model Ablation We computed discrete integrability violations to test the identity link choice on the raw score matrices across three domains using paired rectangles (20,000 per condition). Figure 1 shows empirical cumulative distribution functions of curl magnitude |∆| = |sij -si′j -sij′ + si′j′| for identity, probit, and logit link-transformed data. Statistical methodology. We ensured fair comparison by using identical rectangle samples across all links within each domain. Statistical significance was assessed via bootstrap resampling (500 iterations): for each bootstrap sample, we resampled agents and items with replacement, then computed median curl on the resampled matrix with newly drawn rectangles. This approach avoids dependence issues that arise from sampling individual rectangles, which share entries and violate independence assumptions. Table 1 shows that identity transformation (i.e., no transformation) achieves the lowest median curl across all domains: 0.129 (PubMed), 0.150 (OPUS), and 0.080 (ICLR). Probit transformation increases median curl by 90-255% (0.245-0.286), while logit produces 2-7× higher violations (0.438-0.588). The 95th percentiles reveal heavier tails: identity P95 [0.474, 0.580] versus logit P95 [1.454, 2.252]. Bootstrap confidence intervals. Table 2 shows that all pairwise differences are statistically significant. Identity vs probit differences range from ∆∈[-0.112, -0.213] (all CIs exclude zero), while identity vs logit differences are even larger (∆= [-0.298, -0.528]). The consistent pattern across domains supports that TVD-MI's inherent geometry favors identity mapping. The identity link preserves TVD-MI's natural additivity, while monotone transformations (probit, logit) distort this geometry even when applied to bounded data. 4 Table 1: Data-space curl on raw score matrices: median and P95 |∆| (lower is better). Identity preserves natural additivity; probit/logit introduce substantial violations. All differences significant via bootstrap (CIs exclude zero). Identity Probit Logit Domain Median P95 Median P95 Median P95 PubMed 0.129 0.474 0.245 0.825 0.438 1.454 OPUS 0.150 0.492 0.268 0.862 0.479 1.547 ICLR 0.080 0.580 0.286 1.188 0.588 2.252 Mean 0.120 0.515 0.266 0.958 0.502 1.751 Table 2: Bootstrap 95% confidence intervals for median curl differences (500 iterations). All comparisons show statistically significant advantages for identity link. Domain Identity vs Probit Identity vs Logit Probit vs Logit PubMed -0.112 [-0.122, -0.097] -0.298 [-0.326, -0.256] -0.185 [-0.204, -0.159] OPUS -0.117 [-0.128, -0.101] -0.322 [-0.346, -0.278] -0.204 [-0.219, -0.176] ICLR -0.213 [-0.228, -0.172] -0.528 [-0.560, -0.426] -0.315 [-0.335, -0.250] 4.4 Connectivity Requirements The observation pattern Ωmust satisfy connectivity constraints for reliable recovery. Minimum degree constraint: Every agent observes ≥d items and every item is observed by ≥d agents. Single component: The bipartite graph formed by Ωis connected. Empirically, we find d = 3 sufficient for stable recovery, providing redundancy against individual noisy measurements while maintaining sparsity. The connectivity ensures that all parameters are anchored to the same gauge, preventing drift between disconnected components. We refer to this as the "d-core ≥3" constraint to avoid confusion with the number of agents K. 5 Sampling and Protocol We evaluate sparse recovery through a train-test protocol with 20% holdout and four sampling strategies: row sampling (agents observe α-fraction of items), column sampling (items evaluated by β-fraction of agents), hybrid sampling (independent pair selection), and efficient (n log n) sampling (|Ω| = C(K + J) log(K + J) pairs). All regimes enforce d-core ≥3 connectivity. We measure reconstruction accuracy (holdout RMSE) and ranking fidelity (Spearman ρ, Kendall τ, Ranking AUC). Uncertainty is quantified through bootstrap resampling (500 iterations). Ranking AUC is constructed from the (model) scores averaged over items i.e. one score per agent used for classification. See Appendix D for full details. 6 Results 6.1 Sample Efficiency Figure 2 shows hold-out RMSE as a function of training coverage across different sampling regimes. The dense baseline (80% training coverage) achieves RMSE of 0.111. The (n log n) regime with C = 1.6 achieves RMSE of 0.117 at 33% coverage, a 3× reduction in required evaluations with 5.4% relative error increase. The row and column sampling strategies perform similarly, both requiring approximately 26% coverage to achieve RMSE below 0.12. The hybrid regime underperforms at comparable coverage levels, suggesting that structured missingness patterns are more favorable than random sparsity-likely because they guarantee minimum observations per entity. All regimes enforce d-core ≥3 connectivity. 5 Figure 1: Identity link preserves natural additivity of TVD-MI scores. Empirical cumulative distribution functions of curl magnitude |∆| on raw score matrices across three domains (20,000 paired rectangles per condition). Identity is leftmost with median curl 0.080-0.150, while curved links shift right with 2-7× higher violations (logit median: 0.438-0.588). TVD-MI averaging produces inherently additive data in [-1, 1] space; nonlinear links warp this geometry. (a) All sampling regimes (b) (n log n) regime detail Figure 2: Hold-out RMSE vs training coverage on PubMed (30×200). (a) The (n log n) regime (orange) achieves near-dense accuracy at 33% coverage across all sampling strategies. (b) Zooming into (n log n): the sweet spot occurs at C ∈[2, 3], balancing coverage and accuracy. Below C = 1, performance degrades rapidly as connectivity becomes harder to maintain. All sparse regimes enforce d-core ≥3. 6.2 Ranking Fidelity Table 3 shows that sparse recovery at 33% coverage achieves minimal reconstruction error while perfectly preserving agent rankings: Agent rankings are nearly perfectly preserved (Spearman ρ = 0.972 [0.942, 0.986], Kendall τ = 0.890 [0.833, 0.933]), ensuring that relative model comparisons remain valid. The high Ranking AUC (0.967 [0.942, 0.983] for sparse vs 0.983 [0.967, 0.992] for dense) confirms that we can still distinguish high-quality from problematic agents despite 3× reduction in observations. The 5.4% increase in hold-out RMSE (0.111 [0.107, 0.116] to 0.117 [0.113, 0.122]) represents minor reconstruction noise that does not affect downstream ranking tasks. The narrow confidence intervals show that these results are stable across different training samples. 6 Table 3: Fidelity metrics at 33% coverage ((n log n) with C = 2) compared to dense baseline on PubMed domain (30 agents × 200 items). Hold-out RMSE measures reconstruction accuracy on 20% reserved test data. Ranking AUC tests whether per-agent average scores can distinguish faithful from problematic agents. We use all 30 agents for model fitting; the ranking evaluation focuses on a subset of 4 faithful and 15 problematic agents identified in prior work [Robertson and Koyejo, 2025]. 95% confidence intervals from 500 bootstrap samples. Metric Dense Sparse (33%) Reconstruction Accuracy Hold-out RMSE 0.111 [0.107, 0.116] 0.117 [0.113, 0.122] Relative increase - +5.4% Ranking Fidelity Spearman ρ - 0.972 [0.942, 0.986] Kendall τ - 0.890 [0.833, 0.933] Ranking AUC 0.983 [0.967, 0.992] 0.967 [0.942, 0.983] 6.3 Cross-Domain Validation Table 4 confirms that our findings generalize beyond summarization. All domains show consistent patterns: 3× coverage reduction with small absolute RMSE increases (∆ranging from +0.003 to +0.006), and strong rank preservation (Spearman ρ > 0.95). Ranking AUC varies by domain (0.967 for PUBMED, 0.938 for OPUS, 0.646 for ICLR), reflecting inherent differences in agent discriminability rather than failures of sparse recovery-these values represent the ranking quality achievable even with dense evaluation in each domain. Table 4: Cross-domain validation at 33% coverage. All domains show minimal RMSE increase and strong rank preservation. Ranking AUC depends on the inherent discriminability of each domain's agent pool. Domain Agents Items Dense Sparse RMSE ∆ Spearman Ranking RMSE RMSE ρ AUC PUBMED 30 200 0.111 0.117 +0.006 (+5%) 0.972 0.967 OPUS 31 186 0.123 0.126 +0.003 (+2%) 0.983 0.938 ICLR 29 100 0.129 0.135 +0.006 (+4%) 0.950 0.646 6.4 Baseline Comparison To validate that the additive constraint and identity link are essential design choices, we compare our clipped-linear model against four baselines at 33% coverage on PubMed. Table 5 shows results. Table 5: Baseline comparison at 33% coverage on PubMed (30 agents × 200 items). Clipped-linear (identity link) achieves best RMSE while non-additive UV factorization shows 61% higher error. SVD baseline's poor reconstruction (0.172) demonstrates that simple mean imputation fails without the additive constraint. Method RMSE Spearman Kendall AUC ρ τ Clipped-Linear (Ours) 0.121 0.971 0.894 0.967 Identity + Isotonic (calibrated monotone mapping) 0.122 0.971 0.894 0.967 Rasch (Probit) 0.122 0.965 0.876 0.983 Rasch (Logit) 0.123 0.964 0.871 0.983 Nuclear Norm 0.126 0.974 0.894 0.950 SVD Baseline 0.172 0.923 0.807 0.967 UV Factorization 0.196 0.983 0.913 0.967 7 Identity link achieves best reconstruction. Our clipped-linear model with identity link achieves the lowest holdout RMSE (0.121), making it competitive with Rasch baselines. The isotonic regression variant (0.122) offers no measurable gain, confirming that a learned monotone calibration does not capture additional curvature. The identity link already linearizes the response space (Section 4.3). Additive constraint is essential. The non-additive UV factorization baseline, which fits S ≈UV T without constraining to the form θi -bj, achieves RMSE of 0.196 compared to our clipped-linear model (0.121). The SVD baseline with mean imputation also performs poorly (0.172), showing that simple low-rank structure without the additive constraint fails to capture TVD-MI geometry. While both maintain reasonable correlation and AUC their poor reconstruction confirms that additivity captures core properties of TVD-MI data. Computational efficiency. The Rasch methods are significantly faster than other baselines making them practical for large-scale evaluation. 6.5 Judge Robustness To validate that our findings are not artifacts of a specific judge model, we repeated the analysis using Llama3-70b (70 billion parameters) as an alternative judge on a subset of PubMed data (100 items, 30 agents). Table 6 shows strong cross-judge agreement. Table 6: Judge robustness: GPT-4o-mini vs Llama3-70b on PubMed. Strong agreement in agent rankings (Spearman ρ=0.872) shows that TVD-MI captures genuine quality differences. Both judges show identity achieving zero curl on predictions while curved links introduce consistent violations. Metric GPT-4o-mini Llama3-70b Cross-Judge Agreement Agent ranking correlation ρ = 0.872 Item difficulty correlation ρ = 0.687 Curl Statistics (Identity Link) Median |∆| (predictions) 0.000 0.000 P95 |∆| (predictions) 0.000 0.000 Median |∆| (raw data) 0.129 0.129 P95 |∆| (raw data) 0.466 0.448 Curl Statistics (Logit Link) Median |∆| (predictions) 0.009 0.009 P95 |∆| (predictions) 0.072 0.070 Median |∆| (raw data) 0.438 0.431 P95 |∆| (raw data) 1.433 1.346 The high correlation in agent rankings (Spearman ρ = 0.872) indicates genuine quality differences rather than judge-specific biases. Both judges show identity achieving zero median curl on predictions while curved links introduce consistent violations (|∆| ≈0.01), confirming that TVD-MI's additive geometry is best preserved without transformation across judge models. The moderate correlation in item difficulties (ρ = 0.687) shows that different judges may find different items challenging, which is expected given model-specific failure modes. However, the consistency in agent rankings and link ablation results supports the robustness of our approach. 7 Discussion 7.1 Why Does Additive Structure Emerge? The identity link works because TVD-MI scores exhibit additive structure in their raw space. This arises from the averaging operation: if pairwise TVD-MI scores have form θi + θk -bj, then agent i's average score is θi + ̄ θ-i -bj ≈θi + ̄θ -bj, preserving additivity. Traditional IRT's logistic links are designed for binary outcomes generated by latent Gaussian processes and maximize Shannon entropy. However, TVD-MI has its own natural entropy measure: 8 Lemma E.1 shows that TVD-MI(X; X) = Gini(X), establishing Gini (Rényi-2) as the principled entropy for total variation distance. Applying logistic links-which maximize Shannon entropy-to TVD-MI introduces unnecessary distortion: identity yields zero median curl on predictions while curved links produce consistent violations, Table 1). The clipped-linear model preserves TVD-MI's natural geometry by maximizing its native entropy measure. 7.2 Limitations Sparse recovery requires: (1) meaningful quality variation among agents, (2) d-core ≥3 connectivity (difficult to maintain below 15% coverage), and (3) approximate additivity. Domains with strong agent-item interactions will show larger curl and degraded recovery; our cross-domain validation suggests this is rare for current general-purpose LLMs. 7.3 Practical Recommendations Based on our empirical findings, we offer the following guidance for practitioners. Use (n log n) sampling with C ∈[1.5, 3]: This regime consistently achieves the best coverage-accuracy tradeoff, requiring only 30-40% of full evaluation cost while maintaining fidelity. Validate with discrete integrability test: Before committing to sparse evaluation, test additivity on a small dense subset using the identity link. Median |∆| 1, the supremum is attained at s = sign(u) (boundary), giving φ∗(u) = |u| -1 2. Thus: φ∗(u) = 1 2u2, |u| ≤1 |u| -1 2, |u| > 1 (25) This is the Huber loss with threshold 1. F.4 Lagrangian, Integrability, and the Identity Link Let φ(s) = 1 2s2 + ι[-1,1](s), where ιC is the indicator of a constraint set C. Let D be the rectangle (curl) operator so that Ds = 0 iff sij = θi -bj for some (θ, b) (Lemma F.1). We solve the projection min s X i,j φ(sij) s.t. PΩs = t, Ds = 0. Introducing Lagrange multipliers λ for PΩs = t and μ for Ds = 0, and defining u := P ⊤ Ωλ -D⊤μ, the Lagrangian is L(s, λ, μ) = X i,j φ(sij) -⟨u, s⟩+ ⟨λ, t⟩. Minimizing over s is separable entrywise and yields the conjugate φ∗: inf s L = ⟨λ, t⟩- X i,j φ∗(uij), φ∗(u) = ( 1 2u2, |u| ≤1, |u| -1 2, |u| > 1. Hence the dual is max λ,μ ⟨λ, t⟩- X i,j φ∗ (P ⊤ Ωλ -D⊤μ)ij . (D) 14 Projection interpretation. For fixed λ, the quantity inside the sum depends on u = P ⊤ Ωλ -D⊤μ. Because im(D⊤) = (ker D)⊥, varying μ moves u within the affine space P ⊤ Ωλ + im(-D⊤). Since φ∗is convex and radially nondecreasing in |u|, minimizing P i,j φ∗(uij) over this affine space selects the element of smallest Euclidean norm i.e., the orthogonal projection of P ⊤ Ωλ onto ker D. Thus the optimizer satisfies u⋆= P ⊤ Ωλ -D⊤μ⋆∈ker D, so there exist parameters θ, b such that u⋆ ij = θi -bj. Recovering the primal in additive form. The KKT stationarity condition for s gives uij ∈∂φ(sij), implying uij = sij when |sij| < 1 and uij has the same sign with |uij| ≥1 when |sij| = 1. Together with primal feasibility PΩs = t and Ds = 0, we obtain sij = θi -bj up to clipping at the box boundaries. Relaxing the hard constraint PΩs = t to a quadratic penalty for noisy data yields min θ,b X (i,j)∈Ω 1 2 tij -(θi -bj) 2 + X i,j ι[-1,1](θi -bj) + λ(∥θ∥2 2 + ∥b∥2 2), which is the box-constrained least-squares objective (Eq. 7) with the identity link. F.5 Comparison to Shannon Entropy For comparison, Shannon entropy H(p) = -p log p -(1 -p) log(1 -p) yields the logistic link. The first-order condition for maximizing Shannon entropy subject to linear constraints gives: ∂H ∂p = -log p + log(1 -p) = λ ⇒ p = 1 1 + e-λ = logit-1(λ) (26) This motivates the logit link logit(p) = log(p/(1 -p)) in traditional IRT. However, for TVD-MI scores that are already bounded and arise from averaging (not thresholding), the Gini entropy's quadratic structure is more natural and preserves additivity without transformation. 15
2510.14972
TOKDRIFT: When LLM Speaks in Subwords but Code Speaks in Grammar Yinxi Li, Yuntian Deng, Pengyu Nie University of Waterloo {yinxi.li, yuntian, pynie}@uwaterloo.ca Abstract Large language models (LLMs) for code rely on subword tokenizers, such as byte-pair en- coding (BPE), learned from mixed natural lan- guage text and programming language code but driven by statistics rather than grammar. As a result, semantically identical code snippets can be tokenized differently depending on super- ficial factors such as whitespace or identifier naming. To measure the impact of this mis- alignment, we introduce TOKDRIFT, a frame- work that applies semantic-preserving rewrite rules to create code variants differing only in tokenization. Across nine code LLMs, includ- ing large ones with over 30B parameters, even minor formatting changes can cause substantial shifts in model behavior. Layer-wise analysis shows that the issue originates in early embed- dings, where subword segmentation fails to cap- ture grammar token boundaries. Our findings identify misaligned tokenization as a hidden ob- stacle to reliable code understanding and gener- ation, highlighting the need for grammar-aware tokenization for future code LLMs. 1 Introduction Large language models (LLMs) have become pow- erful tools for programming tasks (Chen et al., 2021; Nye et al., 2021; Yang et al., 2024; Guo et al., 2024; Meta FAIR CodeGen Team, 2025). Before any modeling occurs, code is first tokenized into discrete units using a pretrained subword tokenizer such as byte-pair encoding (BPE) (Sennrich et al., 2016). However, the tokens that LLMs see, which are based on subword frequencies, are often very different from the tokens defined by programming language (PL) grammar. Whereas PLs have clear syntactic boundaries (e.g., keywords, identifiers, operators), subword tokenizers merge character se- quences statistically, sometimes splitting identifiers at arbitrary points or combining unrelated symbols into a single token. This misalignment between (a) Workflow of TOKDRIFT, our framework for quantifying LLM sensitivity to semantic-preserving code rewrite rules. (b) Example of tokenization misalignment. Adding a space between dot (“.”) and “factorial” causes a significant change in token sequences, from [“.factor”, “ial”] to [“.”, “␣factorial”]. Consequently, the LLM’s code translation prediction shifts from incorrect (naming the factorial function as “comb” and later referring to it as “combin”) to correct. Figure 1: TOKDRIFT workflow and example. subwords and syntax means that LLMs do not al- ways process code in the units that programmers or compilers would expect. As an example, the presence of a space before an identifier can lead to completely different token sequences, and thus different predictions, despite identical program semantics (Figure 1). While such differences may appear superficial, they raise a deeper concern about how robustly code LLMs represent grammar and meaning. If tokenization determines how code is segmented and embedded, even small discrepancies could propagate through the model and alter its predictions. This motivates the central question of our study: Does the misalignment between subword tok- enization and PL grammar limit LLMs’ abil- ity to understand and generate code? 1 arXiv:2510.14972v1 [cs.CL] 16 Oct 2025 To study this question, we introduce TOKDRIFT, a framework that applies semantic-preserving rewrite rules, such as changing whitespace or iden- tifier casing style, to create pairs of programs that are semantically equivalent but tokenized dif- ferently. We evaluate nine code LLMs across three representative programming tasks—bug fix- ing, code summarization, and code translation— and measure whether model outputs remain func- tionally equivalent when tokenization changes. Our experiments show that even minor tokeniza- tion variations can substantially impact model be- havior. For example, the most performant LLM in our experiment, Qwen2.5-Coder-32B-Instruct, changes its prediction 6.09% of the times when the input tokenization changes (and up to 60% under a single rewrite rule). Layer-wise analysis further indicates that the effect originates in early layers, where subword segmentation fails to align with grammatical token boundaries. Together, these findings suggest that tokenizer design remains a critical yet under-explored factor in developing ro- bust and grammar-aware code LLMs. The main contributions of this work include: • We identify and formalize the misaligned tok- enization problem in code LLMs. • We introduce TOKDRIFT, a framework for quan- tifying model sensitivity to semantic-preserving code rewrites that alter tokenization. • We conduct a large-scale empirical study show- ing that misaligned tokenization affects all evalu- ated models and persists with scaling. • We open-source our framework and data to fa- cilitate future research on grammar-aware and domain-adaptive tokenization. Our code and data are available at: https://github.com/uw-swag/tokdrift 2 Background 2.1 LLM Tokenization Tokenization is the first step in processing input for LLMs, converting raw text into a sequence of discrete tokens. Each token corresponds to a model time step and has a dedicated embedding. Mod- ern LLMs use learned tokenization strategies that eliminate the out-of-vocabulary problem by start- ing from minimal units, such as characters or bytes, and learning how to merge them into longer frag- ments based on frequency in a large corpus. Popu- lar approaches like BPE (Sennrich et al., 2016) and Figure 2: Heatmap of (code) LLMs’ vocabulary dis- tances (Amba Hombaiah et al., 2021). WordPiece (Schuster and Nakajima, 2012; Devlin et al., 2019) follow this general principle, differ- ing mainly in their merge heuristics. Often, pre- tokenization steps like splitting at whitespace are applied before learning to prevent tokens from span- ning across word boundaries. The tokenizers used by different LLMs can vary significantly due to differences in pre-tokenization rules, token learning algorithms, and pretraining corpora. As shown in Figure 2, even models from the same family often share less than half of their vocabulary, such as Llama 3 vs. Llama 4. The main exception occurs when model developers intention- ally reuse the same tokenizer across variants, such as Qwen2.5 and Qwen2.5-Coder, which share an identical vocabulary and tokenizer configuration. 2.2 PL Tokenization Tokenization in PLs, often called lexing, is the first step of code parsing: it transforms a stream of characters into a sequence of tokens according to a PL’s grammar. These tokens are then passed to a parser, which constructs an abstract syntax tree (AST) to represent the program’s structure. While exact rules vary by language, most PLs share a common set of token types, including: iden- tifiers (e.g., variable or function names), operators (e.g., +, *), keywords (e.g., if, return), literals (e.g., numeric or string constants), and whitespace, which is typically used to separate tokens but is otherwise ignored. Unlike LLM tokenization, PL tokenization in compilers and interpreters is deterministic. For example, the snippet x+1 is always tokenized into three tokens: an identifier (x), an operator (+), and a literal (1). Formatting changes, such as adding spaces, do not affect the token sequence as long as the code remains syntactically valid. 2 Table 1: Benchmarks in our experiments. We manually examine the benchmarks to follow the naming conventions, and to fix/exclude invalid tests and samples, see details in Section C.1. Benchmark Source Task Input PL Output PL # Samples HumanEval-Fix-py HumanEvalPack (Muennighoff et al., 2023) bug fixing Python Python 164 HumanEval-Fix-java Java Java 164 HumanEval-Explain-py code summarization Python Python 164 HumanEval-Explain-java Java Java 164 Avatar-py2java Avatar (Ahmad et al., 2023; Pan et al., 2024) code translation Python Java 244 Avatar-java2py Java Python 246 CodeNet-py2java CodeNet (Puri et al., 2021; Pan et al., 2024) Python Java 200 CodeNet-java2py Java Python 200 This behavior misaligns with LLM tokenizers: while PL tokenizers produce stable, grammar- aware units, LLM tokenizers frequently break code structure, resulting in inconsistent or fragmented representations of semantically identical programs. In this work, we refer to grammar-aware tokens as PL tokens, and contrast them with the LLM tokens produced by learned subword tokenizers. 3 TOKDRIFT Framework Figure 1a illustrates the overall workflow of TOK- DRIFT, our framework for quantifying model sen- sitivity to semantic-preserving code rewrites that alter tokenization. In a nutshell, TOKDRIFT sys- tematically compares the LLM outputs given the baseline input tokens and variant input tokens (af- ter applying rewrite rules) through a large set of experiments. Each experiment is performed on a specific benchmark, and tests the sensitivity of a given LLM against a specific rewrite rule. 3.1 Benchmarks We searched for recent popular coding LLM bench- marks where: (1) the input includes a code snippet, since rewrite rules cannot be applied on natural language; (2) the output is evaluated with an auto- mated functional correctness metric.We focused on two popular PLs, Java and Python. Based on these criteria, we selected eight bench- marks covering three tasks, listed in Table 1. Bug fixing (Tufano et al., 2019) transforms a buggy code snippet into a correct one. Code summariza- tion (Hu et al., 2018; Panthaplackel et al., 2020) aims at summarizing a code snippet into natural language description; following HumanEvalPack’s setup (Muennighoff et al., 2023), the description is fed back to LLM to generate code for measuring correctness. Code translation (Ahmad et al., 2023; Puri et al., 2021) is the task of translating a code snippet from one PL to another. All benchmarks use tests to evaluate the correctness of outputs. Table 2: Models used in our experiments. Series S M L Llama-3 3B 8B 70B Qwen2.5-Coder 1.5B 7B 32B DeepSeek-Coder 1.3B 6.7B 33B 3.2 Models Table 2 lists the models used in TOKDRIFT. We selected three series of popular open-source LLMs (using the coding-specific variants if available), namely Llama-3, Qwen2.5-Coder, and DeepSeek- Coder. To cover the model size spectrum, we used small (∼1B parameters), medium (∼7B), and large (>30B) variants in each series. All models are instruction-tuned. We perform greedy decoding to generate deterministic outputs (see experimental environment details in Section C.4). 3.3 Rewrite Rules Table 3 lists the rewrite rules used in TOKDRIFT. Each rewrite rule converts all occurrences of the left-hand side substring to the right-hand side sub- string. According to the grammars of the two PLs we experiment on (and generally for most modern PLs), these rewrite rules are semantically- preserving by design. We apply one rewrite rule at a time to investigate their impact in isolation. The six rewrite rules starting with “N” are in- spired by naming conventions. Identifiers usually follow one of the four casing styles: camelCase (for variables/functions in Java), PascalCase (for classes in Java/Python), snake_case (for vari- ables/functions in Python), and SCREAMING_CASE (for constants in Java/Python). Since variables/- functions are most common among identifiers, we design rewrite rules to alter their casing style. Specifically, N1, N2, N3 convert camelCase iden- tifiers in Java to the other three casing styles, while N4, N5, N6 convert snake_case identifiers in Python. These rewrite rules challenge LLMs’ ro- bustness to different naming styles. 3 Table 3: Rewrite rules supported by TOKDRIFT, inspired by naming conventions (starting with N) and spacing conventions (starting with S). Each rewrite rule may apply to Java (marked by J), Python (marked by P), or both. No. PL Rewrite Rule Description Example N1 J camelCase →snake_case Convert identifiers from the most common casing style in the input PL to alternative ones ␣sorted L st →␣sorted _lst N2 J camelCase →PascalCase ␣cloestPair →␣Close st Pair N3 J camelCase →SCREAMING_CASE ␣possible S olutions →␣POSS IBLE _S OLUTION S N4 P snake_case →camelCase ␣input _clip board →␣input Clipboard N5 P snake_case →PascalCase ␣string _xor →␣String X or N6 P snake_case →SCREAMING_CASE ␣triangle _area →␣TRI ANGLE _AREA S1 P OP -→OP ␣- Add space between operator and minus sign [::- 1 ] →[ :: ␣- 1 ] S2 P OP [→OP ␣[ Add space between operator and left square bracket )) )[ 2 :]\n →))) ␣[ 2 :]\n S3 J ) .→) ␣. Add space between right parentheses and period ␣'. '). replace →␣'.') ␣. replace S4 P ] )→] ␣) Add space between right square bracket and right parentheses : ]):\n →:] ␣):\n S5 P OP ]→OP ␣] Add space between operator and right square bracket = ␣[[] →= ␣[[ ␣] S6 J OP (→OP ␣( Add space between operator and left parentheses (( ! is True →( ␣(! is True S7 P [ ID→[ ␣ID Add space between left square bracket and identifier ([ v ow els →([ ␣vowels S8 J ++ )→++ ␣) Add space between increment operator and right parentheses ␣i ++) →␣i ++ ␣) S9 J . *→. ␣* Add space between period and asterisk .*;\n →. ␣* ;\n S10 P ) :→) ␣: Add space between right parentheses and colon ␣main ():\n →␣main () ␣:\n S11 J ) ;→) ␣; Add space between right parentheses and semicolon <>();\n →< >() ␣;\n S12 J OP ;→OP ␣; Add space between operator and semicolon Ac ++; →Ac ++ ␣; S13 J P ) )→) ␣) Add space between two right parentheses .toCharArray ()) →.toCharArray () ␣) S14 J P ( )→( ␣) Add space between two left parentheses alpha () →alpha ( ␣) S15 J P . ID→. ␣ID Add space between period and identifier .factor ial →. ␣factorial S16 J P ( ID→( ␣ID Add space between left parentheses and identifier (String →( ␣String S17 J P OP ID→OP ␣ID Add space between operator and identifier :i +len (sub string →: ␣i + ␣len ( ␣substring S18 J P OP ALL→OP ␣ALL Add space between operator and identifier/operator (l : ␣list ):\n →( ␣l : ␣list ) ␣:\n The eighteen rewrite rules starting with “S” are inspired by spacing conventions. Whitespace around most operators usually carries no seman- tic meaning and is optional. Thus, the spacing- related rewrite rules identifies two consecutive to- kens (one of them is an operator) and inserts a space in between. Specifically, we look for combinations where one of them is a specific operator or any kind of operator (represented by OP), and the other one is another specific operator or an identifier (repre- sented by ID). Exploring all combinations would be infeasible, thus we select the top-10 frequently appearing combinations in the benchmarks for each PL. In addition, we add S17 and S18 as “wildcard” rules to cover all cases where an OP is followed by an ID or ID/OP for both PLs. These rewrite rules challenge LLM and its tokenizer’s robustness to different formatting styles. Notably, in most LLMs with a pre-tokenization step of splitting be- fore whitespace, these rewrite rules will lead to more LLM tokens. 3.4 Metrics Recall that each experiment on a given {benchmark, model, rewrite rule} triplet compares the baseline outputs (given the original inputs) and the variant outputs (given the inputs after applying rewrite rule). The benchmark provides a set of tests to evaluate whether each output is correct or incorrect. We define accuracy as the percentage of correct outputs, and ∆accuracy as the variant’s accuracy minus the baseline’s accuracy. The ∆accuracy metric, although intuitive, has two limitations: (1) accuracy improvements and degradations on individual samples cancel out; (2) some samples may not be affected by a rewrite rule if the left-hand side substring does not appear in the input; the outputs of those samples will never change. To address these, we introduce an unbiased metric called sensitivity, defined as the percentage of the samples whose output correctness flips (from correct to incorrect or vice versa) out of the sam- ples whose input is changed by the rewrite rule. A lower sensitivity indicates that the model is more robust against the token changes introduced by a rewrite rule; when averaged across all rewrite rules, it reflects how sensitive the model is to the LLM-PL tokenization misalignment. 4 Evaluation 4.1 Results Table 4 shows the accuracy and ∆accuracy of each model on each rewrite rule. We can observe that most rewrite rules cause measurable changes in model accuracy, ranging from -2.90 to +0.32 abso- 4 Table 4: Accuracy and ∆accuracy (in parenthesis) of each model on each rewrite rule. Variant Llama-3B Llama-8B Llama-70B Qwen-1.5B Qwen-7B Qwen-32B DS-1.3B DS-6.7B DS-33B Average Input PL = Java baseline 32.04 43.15 57.24 33.59 57.36 70.41 38.50 58.01 57.36 49.74 N1 32.69 (+0.65) 43.54 (+0.39) 57.49 (+0.25) 35.27 (+1.68) 57.62 (+0.26) 70.28 (-0.13) 37.98 (-0.52) 57.36 (-0.65) 57.11 (-0.25) 49.93 (+0.19) N2 32.17 (+0.13) 43.54 (+0.39) 56.85 (-0.39) 35.27 (+1.68) 57.75 (+0.39) 70.41 (+0.00) 39.02 (+0.52) 58.14 (+0.13) 57.36 (+0.00) 50.06 (+0.32) N3 32.56 (+0.52) 44.19 (+1.04) 56.20 (-1.04) 35.53 (+1.94) 58.01 (+0.65) 69.12 (-1.29) 38.37 (-0.13) 56.33 (-1.68) 56.46 (-0.90) 49.64 (-0.10) S3 31.65 (-0.39) 43.02 (-0.13) 56.20 (-1.04) 34.37 (+0.78) 56.72 (-0.64) 70.41 (+0.00) 37.34 (-1.16) 58.66 (+0.65) 57.88 (+0.52) 49.58 (-0.16) S6 31.52 (-0.52) 43.02 (-0.13) 57.62 (+0.38) 33.20 (-0.39) 57.49 (+0.13) 70.28 (-0.13) 37.98 (-0.52) 58.53 (+0.52) 57.49 (+0.13) 49.68 (-0.06) S8 31.91 (-0.13) 43.28 (+0.13) 57.24 (+0.00) 34.11 (+0.52) 56.72 (-0.64) 71.45 (+1.04) 38.63 (+0.13) 57.49 (-0.52) 58.27 (+0.91) 49.90 (+0.16) S9 32.30 (+0.26) 40.96 (-2.19) 58.66 (+1.42) 33.46 (-0.13) 58.14 (+0.78) 69.51 (-0.90) 36.95 (-1.55) 56.59 (-1.42) 57.75 (+0.39) 49.37 (-0.37) S11 32.69 (+0.65) 44.57 (+1.42) 55.17 (-2.07) 35.14 (+1.55) 56.33 (-1.03) 71.58 (+1.17) 37.34 (-1.16) 57.11 (-0.90) 57.11 (-0.25) 49.67 (-0.07) S12 30.49 (-1.55) 43.02 (-0.13) 56.07 (-1.17) 34.75 (+1.16) 55.81 (-1.55) 67.05 (-3.36) 38.63 (+0.13) 55.94 (-2.07) 58.53 (+1.17) 48.92 (-0.82) S13 32.43 (+0.39) 42.64 (-0.51) 56.59 (-0.65) 33.46 (-0.13) 57.36 (+0.00) 69.77 (-0.64) 37.47 (-1.03) 58.27 (+0.26) 56.98 (-0.38) 49.44 (-0.30) S14 29.84 (-2.20) 41.09 (-2.06) 54.13 (-3.11) 32.17 (-1.42) 56.85 (-0.51) 71.19 (+0.78) 37.86 (-0.64) 57.11 (-0.90) 57.62 (+0.26) 48.65 (-1.09) S15 30.62 (-1.42) 36.82 (-6.33) 57.24 (+0.00) 33.46 (-0.13) 56.72 (-0.64) 70.28 (-0.13) 37.34 (-1.16) 55.43 (-2.58) 59.43 (+2.07) 48.59 (-1.15) S16 30.88 (-1.16) 40.83 (-2.32) 55.94 (-1.30) 34.88 (+1.29) 57.36 (+0.00) 71.96 (+1.55) 36.43 (-2.07) 57.49 (-0.52) 58.66 (+1.30) 49.38 (-0.36) S17 28.68 (-3.36) 37.34 (-5.81) 56.07 (-1.17) 35.66 (+2.07) 55.43 (-1.93) 70.03 (-0.38) 35.40 (-3.10) 55.04 (-2.97) 58.91 (+1.55) 48.06 (-1.68) S18 25.97 (-6.07) 34.88 (-8.27) 56.85 (-0.39) 34.11 (+0.52) 56.07 (-1.29) 70.28 (-0.13) 33.98 (-4.52) 53.10 (-4.91) 56.33 (-1.03) 46.84 (-2.90) Input PL = Python baseline 39.12 49.87 69.04 40.67 64.51 76.17 44.82 61.92 68.13 57.14 N4 40.03 (+0.91) 51.04 (+1.17) 68.91 (-0.13) 39.77 (-0.90) 65.03 (+0.52) 77.85 (+1.68) 44.30 (-0.52) 61.53 (-0.39) 68.39 (+0.26) 57.43 (+0.29) N5 37.56 (-1.56) 50.91 (+1.04) 68.65 (-0.39) 39.25 (-1.42) 64.77 (+0.26) 77.72 (+1.55) 42.88 (-1.94) 61.53 (-0.39) 68.39 (+0.26) 56.85 (-0.29) N6 38.08 (-1.04) 50.65 (+0.78) 66.19 (-2.85) 39.38 (-1.29) 64.51 (+0.00) 76.81 (+0.64) 42.23 (-2.59) 61.14 (-0.78) 67.62 (-0.51) 56.29 (-0.85) S1 39.38 (+0.26) 50.39 (+0.52) 68.65 (-0.39) 40.54 (-0.13) 64.51 (+0.00) 76.68 (+0.51) 44.69 (-0.13) 62.56 (+0.64) 67.62 (-0.51) 57.22 (+0.08) S2 39.64 (+0.52) 50.65 (+0.78) 68.78 (-0.26) 40.41 (-0.26) 64.77 (+0.26) 75.91 (-0.26) 43.65 (-1.17) 62.44 (+0.52) 67.75 (-0.38) 57.11 (-0.03) S4 39.77 (+0.65) 50.65 (+0.78) 69.30 (+0.26) 40.54 (-0.13) 64.51 (+0.00) 73.19 (-2.98) 44.82 (+0.00) 61.92 (+0.00) 67.36 (-0.77) 56.90 (-0.24) S5 38.60 (-0.52) 50.78 (+0.91) 68.91 (-0.13) 40.80 (+0.13) 64.12 (-0.39) 76.94 (+0.77) 44.43 (-0.39) 62.69 (+0.77) 66.71 (-1.42) 57.11 (-0.03) S7 40.03 (+0.91) 49.35 (-0.52) 68.26 (-0.78) 40.67 (+0.00) 63.34 (-1.17) 76.42 (+0.25) 44.30 (-0.52) 62.69 (+0.77) 67.23 (-0.90) 56.92 (-0.22) S10 38.47 (-0.65) 50.65 (+0.78) 69.17 (+0.13) 40.67 (+0.00) 63.99 (-0.52) 77.46 (+1.29) 44.56 (-0.26) 62.05 (+0.13) 67.10 (-1.03) 57.12 (-0.02) S13 37.95 (-1.17) 50.13 (+0.26) 69.30 (+0.26) 40.54 (-0.13) 64.90 (+0.39) 76.55 (+0.38) 44.30 (-0.52) 62.05 (+0.13) 67.10 (-1.03) 56.98 (-0.16) S14 38.73 (-0.39) 49.22 (-0.65) 68.39 (-0.65) 39.38 (-1.29) 63.73 (-0.78) 74.09 (-2.08) 45.08 (+0.26) 61.66 (-0.26) 67.49 (-0.64) 56.42 (-0.72) S15 39.12 (+0.00) 50.26 (+0.39) 67.49 (-1.55) 39.77 (-0.90) 62.69 (-1.82) 76.30 (+0.13) 44.17 (-0.65) 61.66 (-0.26) 67.23 (-0.90) 56.52 (-0.62) S16 40.16 (+1.04) 49.87 (+0.00) 69.04 (+0.00) 39.64 (-1.03) 63.08 (-1.43) 76.68 (+0.51) 43.65 (-1.17) 61.27 (-0.65) 67.23 (-0.90) 56.74 (-0.40) S17 40.41 (+1.29) 50.39 (+0.52) 67.62 (-1.42) 39.38 (-1.29) 61.92 (-2.59) 76.55 (+0.38) 42.62 (-2.20) 60.49 (-1.43) 66.32 (-1.81) 56.19 (-0.95) S18 37.44 (-1.68) 49.87 (+0.00) 67.62 (-1.42) 38.34 (-2.33) 63.08 (-1.43) 75.13 (-1.04) 42.49 (-2.33) 62.05 (+0.13) 67.36 (-0.77) 55.93 (-1.21) Background color: baseline in grey, variants better than baseline in green, and variants worse than baseline in red. The best variant is highlighted in bold and the worst variant is underlined. lute percentage points if averaging across all mod- els. The largest ∆accuracy of -8.27% happens on Llama-8B for Java benchmarks, whose accu- racy drops from 43.15% to 34.88% when applying rewrite rule S18 (adding space after each opera- tor). Considering advances in LLM performance are sometimes claimed with around 1 percentage point margin, these accuracy deltas caused by sim- ple rewrite rules are non-negligible. The impact of misaligned tokenization is more apparent in the sensitivity metric, as shown in the distribution plots in Figure 3. The average sensi- tivity is 9.26% for naming rewrites and 8.29% for spacing rewrites. Among the naming rewrites (Fig- ure 3a), LLMs are relatively less sensitive to trans- ductions between camelCase and snake_case (N1 and N4), likely because camelCase and SCREAMING_CASE are less frequent. This finding implies that the casing styles of identifiers, while technically convey no semantic meaning in PLs, are an important factor in LLMs’ understanding of code. In Figure 3b, we can see that LLMs’ aver- age sensitivity is over 10% for the two “wildcard” spacing rewrite rules (S17 and S18). Other spac- ing rewrite rules result in varying levels of sensi- tivity, among which the most impactful ones are S15 (adding space between period and identifier), S14 (adding space between a pair of parentheses), and S12 (adding space between operator and semi- colon). In terms of the average sensitivity of mod- els (Figure 3c), we observe that Llama-3 models are more sensitive than the other two series, but all models persist a non-negligible sensitivity of at least 5.71% (Qwen-32B on spacing rewrite rules). 4.2 Impact of Model Size We investigate whether larger models are less sen- sitive to tokenization changes, with the general assumption of larger models being more robust. Ta- ble 5 shows the the average sensitivity of models at different sizes, where the small, medium, and large models in each series are compared on a row. While the small and medium models are at around the same level of sensitivity, the large models are 5 (a) grouped by naming rewrite rule (b) grouped by spacing rewrite rule (c) grouped by model Figure 3: Violin plots of sensitivity distributions. usually less sensitive (i.e., more robust) than their smaller counterparts, with only one exception of Qwen-32B on naming rewrite rules. We also perform statistically significant tests via Wilcoxon signed-rank test (Conover, 1999). The results show that the differences are not significant for naming rules, but significant for spacing rules (except between the small and medium models for Qwen2.5-Coder and DeepSeek-Coder series). 4.3 Impact of Identifier Fragment Changes We noticed that identifiers are frequently tokenized into different subwords before and after applying rewrite rules. For example, Llama-3 tokenizes ‘␣sortedLst’ into three tokens [‘␣sorted’, ‘L’, ‘st’], and applying N1 changes it into two tokens [‘␣sorted’, ‘_lst’]. We define this case as iden- tifier fragment change: the list of fragments (tokens but ignoring spaces and underscores) changes be- Table 5: Impact of model size on sensitivity. Rewrite Rule Model Series S M L Naming Llama-3 11.48 10.68 9.43 Qwen2.5-Coder 7.73 7.95 8.27 DeepSeek-Coder 9.88 8.95 8.95 Spacing Llama-3 10.22 10.99 8.51 Qwen2.5-Coder 7.07 8.87 5.71 DeepSeek-Coder 8.36 8.71 6.26 Table 6: Impact of identifier fragment changes on sensi- tivity. “Unchanged” samples do not have any identifier fragment change, and “Changed” samples have at least one identifier fragment change. Rewrite Rule Model Unchanged Changed Naming Llama-70B 8.13 11.21 Qwen-32B 6.58 10.57 DS-33B 6.61 10.82 Spacing Llama-70B 7.24 11.89 Qwen-32B 5.09 7.37 DS-33B 5.80 7.12 fore and after applying rewrite rules. Using this concept, we can categorize the samples into two groups, one without any identifier fragment change (i.e., “Unchanged”), and the other with at least one identifier fragment change (i.e., “Changed”). Table 6 shows the average sensitivity of models on the two groups of samples; note that we focus on the large model in each series in this analysis. The identifier fragment changed group shows consis- tently higher sensitivity than the unchanged group, with the largest difference on for naming rewrite rules (10.82% vs. 6.61%). This finding suggests that how identifiers are tokenized into subwords play an important role in LLMs’ understanding of code. Arguably, identifiers are frequently not tokenized into semantically meaningful subwords (such as the ‘␣sortedLst’ example), which may fundamentally limit the model’s code comprehen- sion and generation capabilities. 5 Root Cause Analyses In addition to quantifying its impact, we also study why LLMs are sensitive to tokenization changes, along two aspects: (1) word frequency in the pre- training corpus (Section 5.1); (2) LLM’s hidden states before and after the rewrite rule (Section 5.2). 5.1 Word Frequency Analysis Our hypothesis is that there is a correlation between sensitivity and the word frequencies of the rewrite rule’s left-hand side and right-hand side. If the ratio of right-hand side to left-hand side word fre- 6 Table 7: Word frequency of rewrite rules’ left-hand side (LHS) and right-hand side (RHS) on GitHub. Ratio is the percentage of RHS to LHS word frequency. Rewrite Rule LHS RHS Ratio [%] Java S3: ) .→) ␣. 78.9M 45.7K 0.06 S8: ++ )→++ ␣) 22.9M 664K 2.90 S9: . *→. ␣* 34.2M 7.3M 21.35 S11: ) ;→) ␣; 161M 924K 0.57 S13: ) )→) ␣) 102M 3.4M 3.33 S14: ( )→( ␣) 144M 195K 0.14 S15: . ID→. ␣ID 175M 45.9M 16.22 S16: ( ID→( ␣ID 172M 6.6M 3.84 Python S4: ] )→] ␣) 44.6M 1.7M 3.81 S7: [ ID→[ ␣ID 61.1M 1.1M 1.83 S10: ) :→) ␣: 76M 1.4M 1.84 S13: ) )→) ␣) 59M 2.4M 4.07 S14: ( )→( ␣) 78.1M 71.7K 0.09 S15: . ID→. ␣ID 107M 40.6M 37.94 S16: ( ID→( ␣ID 105M 2.9M 2.76 quency is small (meaning right-hand side is rare in the corpus), LLMs will likely perform worse after applying the rewrite rule. We measure the word frequencies on GitHub, a primary source of code data in LLMs’ pretraining corpora.1 Table 7 shows the word frequencies of the rewrite rules, and the ratio (in percentages) of the right-hand side to the left-hand side word frequency. The ratio is always less than 100%, which explains why LLMs exhibit non-negligible sensitivity to all rewrite rules. Some rewrite rules with low ratio, e.g., S14, also exhibit high sensitivity in Figure 3b. 5.2 Hidden State Analysis LLMs’ hidden states represent their internal com- prehension and reasoning processes, which may help explain their sensitive to tokenization changes. We compare the hidden states before and after ap- plying the rewrite rules. For each tokens sequence changed, we extract the hidden states of the last token in the sequence, which summarizes the in- formation of the entire sequence. We focus this analysis on the best-performing LLM, Qwen-32B. We first measure the cosine similarity between the hidden states before and after applying the rewrite rules. Figure 4 shows correlation between the layer from which the hidden states are extracted and the similarity. For both naming and spacing rewrite rules, the similarity starts from almost 0 in the first (input) layer, increases (and stabilizes in 1We use GitHub’s search feature to measure word frequen- cies; due to the limitation in regular expressions and characters that can be used in the search string, we can only conduct this analysis on a subset of the spacing rewrite rules. (a) naming rewrite rules (b) spacing rewrite rules Figure 4: The similarity of each layer’s hidden states before and after applying rewrite rules. most cases) in middle layers, and drops again at the last (output) layer. This observation is consis- tent with the information bottleneck theory (Saxe et al., 2019), which states that the middle layers capture the compressed semantic information. In- terestingly, in Figure 4b, we observe that for some spacing rewrite rules (S14 and S3), the similarity in middle layers is also low, implying that the model sees the before and after versions as semantically different. These rewrite rules match the ones that LLMs are most sensitive to in Figure 3b. Then, we compute the hidden state diffs as the hidden states after applying rewrite rules minus those before applying, on the medium layer of the model which should best capture semantic infor- mation. Figure 5 shows the visualizations of the hidden state diffs using t-SNE (Maaten and Hinton, 2008). We observe that the diffs of naming and spacing rewrite rules are clearly distinguishable (Figure 5a), so are the diffs of naming (Figure 5b) and spacing rewrite rules (Figure 5c, note that S17 and S18 are excluded since they are supersets of other rewrite rules). This confirms that the hidden states, especially from the middle layers, are good representations of semantic information and may be utilized to mitigate the tokenization changes. 7 (a) naming vs. spacing (b) naming rewrite rules (c) spacing rewrite rules Figure 5: Visualizations of the hidden state diffs using t-SNE (Maaten and Hinton, 2008). 6 Related Work Tokenization Most modern LLMs use subword tokenizers such as BPE (Sennrich et al., 2016), which create vocabularies based on how often char- acter sequences occur together. The resulting to- ken types do not always correspond to meaningful words or code elements, and can vary depending on how the tokenizer was trained. For example, Liu et al. (2025) shows that allowing token merges across whitespace boundaries produces more mean- ingful units, compared to tokenizers that always split at spaces. Chirkova and Troshin (2023) intro- duces a tokenizer designed to better align with PL syntax, achieving lower token counts while preserv- ing model performance. These studies show that tokenization can influence how well a model un- derstands and generates code, and our work builds on this line of inquiry by quantifying the effects of semantic-preserving tokenization changes. Robustness to Representation Variations An- other important question is how robust LLMs are to variations in tokenization and representation at inference time. Zheng et al. (2025) show that instruction-tuned models can often retain high per- formance even when inputs are tokenized in un- conventional or character-level formats, suggest- ing that such models may learn generalizable in- ternal representations. However, their study also shows a measurable performance drop compared to standard tokenizations, and other work highlights further limitations. Wang et al. (2025) find that adversarial changes to token boundaries can sig- nificantly degrade model predictions, especially in models that have not undergone instruction tun- ing. In structured domains like chemistry, Yan et al. (2025) demonstrate that LLMs produce in- consistent outputs across semantically equivalent molecular representations. These findings suggest that LLMs remain sensitive to surface-level varia- tions. Our work contributes to this line by focusing specifically on PLs. Syntax-Aware Code Modeling To address the mismatch between subword tokenization and PL grammar, several approaches incorporate gram- mar constraints into the LLM decoding pro- cess. Synchromesh (Poesia et al., 2022) and PI- CARD (Scholak et al., 2021) enforce syntactic va- lidity at generation time by using runtime pars- ing to filter out invalid token continuations. Syn- Code (Ugare et al., 2024) improves the efficiency of such methods by constructing a DFA-based mask that precomputes token legality while explicitly handling partial tokens. Boundless BPE (Schmidt et al., 2025) removes fixed pretokenizers and en- ables dynamic boundary selection, allowing the model to learn tokens that correspond to syntactic or semantic units. Together, these efforts aim to align LLM outputs more closely with formal code structure, a disconnect that our work quantifies by measuring how semantics-preserving tokenization variations affect model behavior. 7 Conclusions This work studies the tokenization misalignment between subword-based LLMs and PL grammar. While subword tokenizers like BPE are widely used in code LLMs, they segment inputs based on fre- quency statistics, not grammar, leading to token boundaries that may not align with syntactic units in code. Through a suite of semantic-preserving rewrite rules, our framework TOKDRIFT shows that even minor formatting changes, such as whites- pace edits or identifier renamings, can cause sub- stantial shifts in model outputs. These effects hold across nine coding LLMs and three tasks (fixing, summarization, and translation). These findings motivate future research for grammar-aware or domain-adaptive tokenizers that more faithfully re- flect PL structure. 8 Limitations While our study shows limitations of current tok- enizer designs in code LLMs, our analysis focuses on a targeted set of semantic-preserving rewrites based on common formatting and naming conven- tions; these do not encompass all potential sources of tokenization drift. Second, although we evaluate nine widely used code LLMs, our findings may not generalize to models with fundamentally different architectures (e.g., state space models (Gu et al., 2022)) or tokenization strategies (e.g., character- level or grammar-driven tokenizers (Kim et al., 2016)). Third, our work centers on measurement and diagnosis, and we do not explore mitigation strategies. Future work could investigate tokenizer retraining, ensemble decoding over multiple to- kenizations, or architectural modifications to im- prove the alignment between token boundaries and programming language syntax. Acknowledgments We thank Yu Liu for valuable comments and feed- back. This work was supported in part by Compute Ontario (computeontario.ca) and the Digital Re- search Alliance of Canada (alliancecan.ca). It was also partially supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (RGPIN-2024-04909) and a start- up grant from the University of Waterloo. Yuntian Deng is additionally supported by an NSERC Dis- covery Grant (RGPIN-2024-05178) and a start-up grant from the University of Waterloo. References Wasi Ahmad, Md Golam Rahman Tushar, Saikat Chakraborty, and Kai-Wei Chang. 2023. AVATAR: A parallel corpus for Java-Python program translation. In Findings of the Association for Computational Linguistics: ACL, pages 2268–2281. Spurthi Amba Hombaiah, Tao Chen, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Dy- namic language models for continuously evolving content. In International Conference on Knowledge Discovery and Data Mining, pages 2514–2524. Loubna Ben Allal, Niklas Muennighoff, Lo- gesh Kumar Umapathi, Ben Lipkin, and Leandro von Werra. 2022. A framework for the evaluation of code generation mod- els. https://github.com/bigcode-project/ bigcode-evaluation-harness. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, and 1 others. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Nadezhda Chirkova and Sergey Troshin. 2023. Codebpe: Investigating subtokenization options for large language model pretraining on source code. Preprint, arXiv:2308.00683. William Jay Conover. 1999. Practical nonparametric statistics. john wiley & sons. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Albert Gu, Karan Goel, and Christopher Ré. 2022. Effi- ciently modeling long sequences with structured state spaces. In The International Conference on Learning Representations (ICLR). Batu Guan, Xiao Wu, Yuanyuan Yuan, and Shaohua Li. 2025. Is your benchmark (still) useful? dynamic benchmarking for code language models. arXiv preprint arXiv:2503.06643. Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wen- feng Liang. 2024. Deepseek-coder: When the large language model meets programming – the rise of code intelligence. Preprint, arXiv:2401.14196. Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In International Conference on Program Comprehension, pages 200– 210. Yoon Kim, Yacine Jernite, David Sontag, and Alexander Rush. 2016. Character-aware neural language mod- els. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). Alisa Liu, Jonathan Hayase, Valentin Hofmann, Se- woong Oh, Noah A. Smith, and Yejin Choi. 2025. Su- perbpe: Space travel for language models. Preprint, arXiv:2503.13423. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605. Meta FAIR CodeGen Team. 2025. Cwm: An open- weights llm for research on code generation with world models. Technical report, Meta. 32B- parameter open-weights model; inference code and weights released. 9 Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. 2023. OctoPack: Instruction tun- ing code large language models. arXiv preprint arXiv:2308.07124. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for interme- diate computation with language models. Preprint, arXiv:2112.00114. Rangeet Pan, Ali Reza Ibrahimzada, Rahul Krishna, Divya Sankar, Lambert Pouguem Wassi, Michele Merler, Boris Sobolev, Raju Pavuluri, Saurabh Sinha, and Reyhaneh Jabbarvand. 2024. Lost in transla- tion: A study of bugs introduced by large language models while translating code. In Proceedings of the IEEE/ACM 46th International Conference on Soft- ware Engineering, pages 1–13. Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond Mooney. 2020. Learn- ing to update natural language comments based on code changes. In Annual Meeting of the Association for Computational Linguistics, pages 1853–1868. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830. Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Ti- wari, Gustavo Soares, Christopher Meek, and Sumit Gulwani. 2022. Synchromesh: Reliable code gen- eration from pre-trained language models. Preprint, arXiv:2201.11227. Ruchir Puri, David S Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. CodeNet: A large-scale AI for code dataset for learning a diversity of coding tasks. In Conference on Neural Information Processing Sys- tems Datasets and Benchmarks Track. Andrew M Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D Tracey, and David D Cox. 2019. On the information bottleneck theory of deep learning. Journal of Statistical Me- chanics: Theory and Experiment, 2019(12):124020. Craig W. Schmidt, Varshini Reddy, Chris Tanner, and Yuval Pinter. 2025. Boundless byte pair encod- ing: Breaking the pre-tokenization barrier. Preprint, arXiv:2504.00178. Torsten Scholak, Nathan Schucher, and Dzmitry Bah- danau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895–9901, Online and Punta Cana, Domini- can Republic. Association for Computational Lin- guistics. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pages 5149–5152. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Annual Meeting of the Association for Computational Linguistics, pages 1715–1725. Michele Tufano, Jevgenija Pantiuchina, Cody Watson, Gabriele Bavota, and Denys Poshyvanyk. 2019. On learning meaningful code changes via neural machine translation. In International Conference on Software Engineering, pages 25–36. Shubham Ugare, Tarun Suresh, Hangoo Kang, Sasa Mi- sailovic, and Gagandeep Singh. 2024. Syncode: Llm generation with grammar augmentation. Preprint, arXiv:2403.01632. Dixuan Wang, Yanda Li, Junyuan Jiang, Zepeng Ding, Ziqin Luo, Guochao Jiang, Jiaqing Liang, and De- qing Yang. 2025. Tokenization matters! degrading large language models through challenging their tok- enization. Preprint, arXiv:2405.17067. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, and 3 others. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Bing Yan, Angelica Chen, and Kyunghyun Cho. 2025. Inconsistency of llms in molecular representations. Digital Discovery. John Yang, Carlos E Jimenez, Alexander Wettig, Kil- ian Lieret, Shunyu Yao, Karthik R Narasimhan, and Ofir Press. 2024. SWE-agent: Agent-computer inter- faces enable automated software engineering. In The Thirty-eighth Annual Conference on Neural Informa- tion Processing Systems. Brian Siyuan Zheng, Alisa Liu, Orevaoghene Ahia, Jonathan Hayase, Yejin Choi, and Noah A. Smith. 2025. Broken tokens? your language model can se- cretly handle non-canonical tokenizations. Preprint, arXiv:2506.19004. 10 A Use of LLMs We used an LLM-based writing assistant to polish grammar. All ideas, analyses, experiments, and scientific claims are our own, and we take full re- sponsibility for the content of this work. B Additional Background: Tokenizer Differences Between LLMs Figure 6 shows the heatmap of vocabulary dis- tances between tokenizers, which includes 19 popu- lar open-source (coding) LLMs from 8 model fami- lies. Notably, most LLMs adopt a pre-tokenization strategy that splits text into linguistically and layout-meaningful chunks before a byte-level BPE. While details vary by family, common choices in- clude isolating short digit runs (often 1–3; Qwen and some DeepSeek variants prefer per-digit), treating contiguous letters with combining marks as words, splitting punctuation and symbol runs (sometimes with an optional leading space), and separating newline blocks and longer space runs. Non-Latin scripts such as Han/Hiragana/Katakana (and in some cases Hangul) are taken as contiguous spans. Family differences that matter for our study include LLaMA-3 explicitly detaching English cl- itics, CodeQwen-1.5 disabling pre-tokenization (leaving underscores and long ASCII spans intact), DeepSeek-Coder using code-oriented splits (letters, punctuation, newlines, CJK, digits), and DeepSeek- V3/LLaMA-4/GPT-OSS converging on a similar unified scheme. In practice, more aggressive pre- segmentation tends to make models tolerant to su- perficial spacing around symbols but sensitive to numeric chunk boundaries, whereas byte-only or lightly pre-segmented designs make underscore and identifier edits more likely to introduce new token boundaries. C Additional Experimental Methodology C.1 Benchmarks Normalization To ensure that our semantic-preserving naming/s- pacing rewrite rules (Section 3.3) do not spuriously break compilation or tests, we perform a light- weight normalization pass before evaluation. For the bug fixing and code summarization tasks from HumanEvalPack (Muennighoff et al., 2023), we first canonicalize Java identifier style to camelCase from snake_case2, then propagate 2HumanEvalPack (Muennighoff et al., 2023) translates the HumanEval (Chen et al., 2021) benchmark from Python any renamings consistently to tests, entry points, and declarations to preserve their functionalities. For the code translation tasks, we start from the Avatar and CodeNet benchmarks prepared by Pan et al. (2024), following their task defi- nitions and tests. We fixed some samples with harness-compatibility issues that would otherwise cause false negatives and prune a small number of unsalvageable or pathological samples (e.g., ex- tremely long inputs or cases that time out), without changing the underlying problem semantics. And finally, we dropped 6 python2java tasks and 4 java2python tasks in Avatar that we could not fix. The most common adjustments fall into a few categories: (i) IO/formatting normalization. For example, we replace non-portable characters such as U+FFFD or segmentation markers like U+2581 with ASCII equivalents; ensure consistent tokeniza- tion by splitting on spaces instead of empty strings; remove trailing spaces/newlines; standardize nu- meric output with Java DecimalFormat or Python f-strings to fixed precision; (ii) test correctness fixes where expected outputs were inconsistent with the reference implementation or ordering; and (iii) min- imal code-context edits that preserve semantics but align with tests (e.g., renaming helper methods where tokenizer-specific splits would otherwise oc- cur, adding @Override annotations, or make Scan- ner/FastScanner usage consistent). All edits are specified once, applied uniformly to baseline and variant inputs, and never conditioned on model out- puts. C.2 Rewrite Algorithms To mutatively rewrite a code context on naming, we first parse it to obtain a code token index and two identifier sets: (i) immutable identifiers derived from configured immutable types (e.g., Java: importDeclaration, methodCall; Python: import_as_name, trailer); (ii) declaration iden- tifiers that are safe to rename (excluding Java meth- ods annotated with @Override). We restrict can- didates by casing using regexes, specifically, snake case matches [a-z0-9]+(?:_[A-Za-z0-9]+)+ and camel case identifier matches the regex [a-z]+(?:[A-Z]+[A-Za-z0-9]+[A-Za-z0-9]*)+. For each eligible identifier, we segment its lexeme by a well designed regex, convert from the source to the target case, and record the absolute character positions in the original string where underscores to other PLs (including Java), but all the identifiers were re- mained in snake_case regardless of the target PL. 11 Figure 6: Heatmap of vocabulary distances between tokenizers (Full ver.). would be inserted or removed (edit events). For Hu- manEval tasks, we additionally propagate the same renamings to tests, entry points, and declarations to keep the harness consistent, and these are treated as optional ancillary patches and do not alter the core algorithm. The immutable/declaration settings aim to maximize safe coverage while preserving compilation and test pass behavior. Spacing rewrite follows the same structure but, instead of changing identifier lexemes, we insert exactly one space between adjacent tokens when their kinds match a configured token-type bigram (former, latter) from Table 3. For each match, we insert whitespace at the boundary between the two tokens, record an insertion event at that position, and update offsets. Conceptually, although rewrites are defined over PL tokens, the notion of a fragment uses LLM tokens. For each rewrite site, we consider the min- imal contiguous list of LLM tokens that covers the affected PL tokens (identifiers for naming and the two code tokens of each combination for spac- ing) as the fragment. Our fragment-change clas- sification is based on an analysis of all fragments’ transformation in a code context. Specifically, a merge occurs when at least one old LLM token boundary inside those spans disappears, and a split occurs when at least one new boundary appears af- ter rewriting. To detect and analyze all LLM token boundary transformations, we compute LLM token start positions before and after rewriting with the same LLM tokenizer. We ignore boundaries cre- ated exactly at an edit site between two code tokens or those created right next to the edit site within one code token. Meaning that for insertions, we disregard any boundary introduced by the inserted whitespace between two code tokens that were fully or partially combined into one LLM token, as well as those caused by standalone underscores immedi- ately to its right (a behavior commonly observed in 12 the DeepSeek-Coder or CodeQwen-1.5 tokenizer, where underscores are usually treated as a single token), as encoded by the various edit masks clas- sified by edit types in Algorithm 3. The loop in Algorithm 3 shifts the original boundary set by the cumulative δ pre edit to align coordinate systems, builds the masks for shift edits and edit-adjacent positions for insert operation, and then compares adjusted old versus filtered new starts. Specifically, let Sold and Snew be the sets of LLM token starts before and after rewriting, and after masking spe- cific edit sites, we compute A=Sold \ Snew (old boundaries lost) and B=Snew \ Sold (new bound- aries gained). The label is unchanged if A=∅and B=∅, merged if A̸=∅and B=∅, split if A=∅ and B̸=∅, and mixed otherwise. C.3 Metrics Computation Algorithm We evaluate on two input programming language subsets for accuracy and ∆accuracy, Xp (Python inputs) and Xj (Java inputs), where their union X = Xj ∪Xp with |X| = 1546. For a fixed rewrite rule wi and model m, let Ti be the deterministic transformation that applies wi to an input x ∈X, and we define T0 as no rule would be applied on the input. And we let W={i|i = 0, 1, · · · , 24} denote the assignment set for all rules where i=0 is the baseline, i>0 means the variant of applying rule wi. Running the model yields code fm(Ti(x)), which the harness evaluates on the test set T (x). We define the test-level pass fraction rm,i(x) ≜ 1 |T (x)| X t∈T (x) [[fm(Ti(x))]]t, where [[fm(Ti(x))]]t ∈{0, 1} denotes the execu- tion result of program fm(Ti(x)) from test t (Guan et al., 2025). Follow that we define the task-level correctness indicator Ym,i(x) ≜I{rm,i(x) = 1} ∈ {0, 1}. So the accuracy of a rule assignment i ∈W on a set S ∈{Xp, Xj} is Accuracyi(m; S) = 1 |S| X x∈S Ym,i(x). We report ∆accuracy as ∆accuracyi(m; S) ≜ Accuracyi(m; S) −Accuracy0(m; S), where i ∈ W and i ̸= 0. Not all inputs would be mod- ified by a given rule, we therefore define the actually-affected subset X′ i ≜{ x ∈X : Ti(x) ̸= x }, whose summed sizes for all rules classified by model series are shown in Table 8. Then our pro- posed sensitivity measures how often correctness flips among affected inputs only by Sensitivityi(m) ≜ 1 |X′ i| X x∈X′ i Ym,i(x)−Ym,0(x) . Intuitively, ∆accuracy captures net gains/losses which may cancel when aggregating, whereas sen- sitivity isolates the flip rate on inputs whose tokens were actually changed by wi. C.4 Experimental Environment We conduct all experiments on an NVIDIA H100 GPU cluster, consuming approximately 1840 GPU- hours in total across runs. All model check- points are obtained from the Hugging Face Hub and loaded with the Hugging Face Transform- ers library (v4.53.2) (Wolf et al., 2020). Unless otherwise stated, models are executed in fp32, the only exceptions are Llama-3.3-70B-Instruct, Qwen2.5-Coder-32B-Instruct, and deepseek-coder- 33b-instruct, which we run in fp16. All evalua- tions use the bigcode-evaluation-harness frame- work (Ben Allal et al., 2022) with its standard protocols. We use deterministic decoding without sampling and a batch size of 1 throughout. All tests are executed with Java 21.0.1 and Python 3.8. The maximum generation length is set to 1,024 tokens for HumanEvalPack and Avatar tasks, and 2,048 tokens for CodeNet tasks. For t-SNE visualizations, we use scikit-learn v1.7.1 (sklearn.manifold.TSNE) with perplexity to 70 and use the Barnes–Hut method with 1000 itera- tions, PCA initialization, learning_rate=’auto’, and n_jobs=16 (Pedregosa et al., 2011). D Additional Results and Analysis In Figures 7 and 8, the line plots summarize sen- sitivity for each rewrite rule. In Figure 9, com- paring samples with and without identifier frag- ment change shows the overall trend of sensitiv- ity on different model size. The per-series break- downs in Figures 10 to 12 echo this pattern across Llama-3, Qwen2.5-Coder, and DeepSeek-Coder, while Llama tends to be more sensitive overall, all families exhibit a variation in sensitivity between "changed" and "unchanged" groups. Figure 13 shows the distribution of ∆accuracy per rewrite rule. Compared to sensitivity, ∆accuracy 13 Algorithm 1 Naming Rewrite Input: C: code context, P: code parser, Itypes: immutable identifier types, ρsrc: source case regex, tgt: target case, (optional) ExtraPatches: extra patches. Output: C′, E, (optional) ExtraPatches′. // TokIdx: list of (x, τ, [i, j)) where x=code token, τ=token kind, [i, j)=char span 1: (TokIdx, Sim, Sdec) ←INDEX(C, P, Itypes) 2: E ←[ ], R ←∅, C′ ←C, O ←0 // E: list of edit underscore events (pos, δ); O: total offset 3: 4: for (x, τ, [i, j)) ∈TokIdx in ascending i do 5: if τ = id ∧(x /∈Sim ∨x ∈Sdec) ∧REGEXCHECK(x, ρsrc) then 6: y ←CASECONV(x, tgt) // rewrite the identifier to the target case 7: ∆list ←DIFFUNDERLINEPOS(x, y, i) // return a list of add/del underscore events (pos, δ) 8: E ←APPEND E, ∆list) 9: R[x] ←y 10: C′ ←CONCAT(C′[0:(i+O)], y, C′[(i+O+|x|):|C′|]) // string concatenation 11: O ←O + (|y| −|x|) 12: 13: ExtraPatches′ ←APPLYREWRITES(ExtraPatches, R) 14: return (C′, E, ExtraPatches′) may not be ideal for quantifying robustness be- cause gains and losses cancel and many samples are unaffected. Table 8 reports, for each rewrite rule, the number of benchmark samples actually modified, stratified by model series. Table 9 provides the full breakdown of sensitivity by fragment-change category. To- gether, these tables clarify both the scope of input perturbations and the source of robustness differ- ences observed in the main results. 14 Algorithm 2 Spacing Rewrite Input: C: code context, P: code parser, (Kf, Kℓ): token-type bigram. Output: C′, E. // TokIdx: list of (x, τ, [i, j)) where x=code token, τ=token kind, [i, j)=char span 1: TokIdx ←INDEX(C, P) 2: E ←[ ], C′ ←C, O ←0 // E: list of insert events (pos, +1); O: total offset 3: 4: for k ←0 to |TokIdx| −2 do 5: (xf, τf, [if, jf)) ←TokIdx[k]; (xℓ, τℓ, [iℓ, jℓ)) ←TokIdx[k+1] 6: if MATCH(τf, Kf) ∧MATCH(τℓ, Kℓ) then 7: E ←APPEND(E, (iℓ, +1)) 8: C′ ←CONCAT C′[0:(jf+O)], " ", C′[(iℓ+O):|C′|]  // insert one space 9: O ←O + 1 10: 11: return (C′, E) Table 8: Samples that been modified by rewrite rule, broken down by model series. Rewrite Rule Model Series Total Unchanged Changed All Merged Split Mixed Naming Llama-3 2238 1292 946 105 767 74 Qwen2.5-Coder 2238 1292 946 105 767 74 deepseek-coder 2247 999 1248 123 996 129 CodeQwen1.5 2247 954 1293 136 1043 114 Spacing Llama-3 12804 9315 3489 660 2394 435 Qwen2.5-Coder 12804 9315 3489 660 2391 438 deepseek-coder 12804 8381 4423 608 3091 724 CodeQwen1.5 12804 8720 4084 725 2504 855 Figure 7: Percentage difference for naming rewrite transformations. 15 Algorithm 3 Fragment-Change Classification (CLASSIFY) Input: C: original code context, C′: new code context, E: list of edit events (pos, δ) with δ ∈{±1}, EditType: edit type, T: LLM tokenizer. Output: type ∈{unchanged, merged, split, mixed} 1: Lold ←POSLLMTOKENS(C, T) // cumulative first character positions of LLM tokens in C 2: Lnew ←POSLLMTOKENS(C′, T) 3: Sold ←SET(Lold), Snew ←SET(Lnew), Sed ←{ pos | (pos, δ) ∈E }, S+ ed ←∅ 4: O ←0 // cumulative offset from prior edits 5: 6: for each (pos, δ) in E do 7: a ←pos + O // adjusted position of this edit 8: Sold ←{ p+δ if p > a else p | p ∈Sold } 9: Sed ←{ e+δ if e > a else e | e ∈Sed } 10: S+ ed ←S+ ed ∪{a + max(δ, 0)} 11: O ←O + δ 12: if EditType = underscore then 13: Snew ←Snew \ (S+ ed \ Sed) // ignore starts next-to inserted standalone underscore edit boundaries 14: else if EditType = whitespace then 15: Snew ←Snew \ (Sed \ Sold) // ignore new starts created at whitespace edit boundaries 16: 17: A ←Sold \ Snew // A: lost tokens after rewrite (some tokens merged) 18: B ←Snew \ Sold // B: gained tokens after rewrite (some tokens split) 19: 20: if A ̸= ∅and B = ∅then 21: return merged 22: else if A = ∅and B ̸= ∅then 23: return split 24: else if A ̸= ∅and B ̸= ∅then 25: return mixed 26: else 27: return unchanged Figure 8: Percentage difference for spacing rewrite transformations. 16 Table 9: Impact of different types of fragment change on sensitivity (Full ver.). Rewrite Rule Model Total Unchanged Changed (all) Changed (subcategories) Merged Split Mixed Naming Llama-S 11.48 10.68 12.58 10.48 13.17 9.46 Llama-M 10.68 9.44 12.37 8.57 13.30 8.11 Llama-L 9.43 8.13 11.21 9.52 11.73 8.11 Qwen-S 7.73 6.97 8.77 8.57 9.00 6.76 Qwen-M 7.95 7.35 8.77 5.71 9.00 10.81 Qwen-L 8.27 6.58 10.57 11.43 10.82 6.76 DS-S 9.88 8.31 11.14 4.88 11.95 10.85 DS-M 8.95 7.91 9.78 7.32 10.54 6.20 DS-L 8.95 6.61 10.82 10.57 10.64 12.40 Spacing Llama-S 10.22 9.32 12.61 11.06 13.37 10.80 Llama-M 10.99 9.69 14.45 13.03 14.83 14.48 Llama-L 8.51 7.24 11.89 10.00 11.53 16.78 Qwen-S 7.07 6.04 9.80 8.33 9.62 13.01 Qwen-M 8.87 7.53 12.47 12.42 10.71 22.15 Qwen-L 5.71 5.09 7.37 7.42 6.48 12.10 DS-S 8.36 7.25 10.47 8.72 10.45 12.02 DS-M 8.71 7.58 10.85 10.36 10.19 14.09 DS-L 6.26 5.80 7.12 6.41 6.44 10.64 17 Figure 9: Naming rewrite rules percentage difference (with or without fragment change). Figure 10: (Llama series) Spacing rewrite rules percent- age difference (with or without fragment change). Figure 11: (Qwen series) Spacing rewrite rules percent- age difference (with or without fragment change). Figure 12: (Deepseek series) Spacing rewrite rules per- centage difference (with or without fragment change). Figure 13: Distribution of ∆accuracy per rewrite rule across models and benchmarks. 18
TOKDRIFT: When LLM Speaks in Subwords but Code Speaks in Grammar Yinxi Li, Yuntian Deng, Pengyu Nie {yinxi.li, yuntian, Abstract Large language models (LLMs) for code rely on subword tokenizers, such as byte-pair encoding (BPE), learned from mixed natural language text and programming language code but driven by statistics rather than grammar. As a result, semantically identical code snippets can be tokenized differently depending on superficial factors such as whitespace or identifier naming. To measure the impact of this misalignment, we introduce TOKDRIFT, a framework that applies semantic-preserving rewrite rules to create code variants differing only in tokenization. Across nine code LLMs, including large ones with over 30B parameters, even minor formatting changes can cause substantial shifts in model behavior. Layer-wise analysis shows that the issue originates in early embeddings, where subword segmentation fails to capture grammar token boundaries. Our findings identify misaligned tokenization as a hidden obstacle to reliable code understanding and generation, highlighting the need for grammar-aware tokenization for future code LLMs. 1 Introduction Large language models (LLMs) have become powerful tools for programming tasks (Chen et al., 2021; Nye et al., 2021; Yang et al., 2024; Guo et al., 2024; Meta FAIR CodeGen Team, 2025). Before any modeling occurs, code is first tokenized into discrete units using a pretrained subword tokenizer such as byte-pair encoding (BPE) (Sennrich et al., 2016). However, the tokens that LLMs see, which are based on subword frequencies, are often very different from the tokens defined by programming language (PL) grammar. Whereas PLs have clear syntactic boundaries (e.g., keywords, identifiers, operators), subword tokenizers merge character sequences statistically, sometimes splitting identifiers at arbitrary points or combining unrelated symbols into a single token. This misalignment between (a) Workflow of TOKDRIFT, our framework for quantifying LLM sensitivity to semantic-preserving code rewrite rules. (b) Example of tokenization misalignment. Adding a space between dot (".") and "factorial" causes a significant change in token sequences, from [".factor", "ial"] to [".", "␣factorial"]. Consequently, the LLM's code translation prediction shifts from incorrect (naming the factorial function as "comb" and later referring to it as "combin") to correct. Figure 1: TOKDRIFT workflow and example. subwords and syntax means that LLMs do not always process code in the units that programmers or compilers would expect. As an example, the presence of a space before an identifier can lead to completely different token sequences, and thus different predictions, despite identical program semantics (Figure 1). While such differences may appear superficial, they raise a deeper concern about how robustly code LLMs represent grammar and meaning. If tokenization determines how code is segmented and embedded, even small discrepancies could propagate through the model and alter its predictions. This motivates the central question of our study: Does the misalignment between subword tokenization and PL grammar limit LLMs' ability to understand and generate code? 1 16 Oct 2025 To study this question, we introduce TOKDRIFT, a framework that applies semantic-preserving rewrite rules, such as changing whitespace or identifier casing style, to create pairs of programs that are semantically equivalent but tokenized differently. We evaluate nine code LLMs across three representative programming tasks-bug fixing, code summarization, and code translationand measure whether model outputs remain functionally equivalent when tokenization changes. Our experiments show that even minor tokenization variations can substantially impact model behavior. For example, the most performant LLM in our experiment, Qwen2.5-Coder-32B-Instruct, changes its prediction 6.09% of the times when the input tokenization changes (and up to 60% under a single rewrite rule). Layer-wise analysis further indicates that the effect originates in early layers, where subword segmentation fails to align with grammatical token boundaries. Together, these findings suggest that tokenizer design remains a critical yet under-explored factor in developing robust and grammar-aware code LLMs. The main contributions of this work include: • We identify and formalize the misaligned tokenization problem in code LLMs. • We introduce TOKDRIFT, a framework for quantifying model sensitivity to semantic-preserving code rewrites that alter tokenization. • We conduct a large-scale empirical study showing that misaligned tokenization affects all evaluated models and persists with scaling. • We open-source our framework and data to facilitate future research on grammar-aware and domain-adaptive tokenization. Our code and data are available at: https://github.com/uw-swag/tokdrift 2 Background 2.1 LLM Tokenization Tokenization is the first step in processing input for LLMs, converting raw text into a sequence of discrete tokens. Each token corresponds to a model time step and has a dedicated embedding. Modern LLMs use learned tokenization strategies that eliminate the out-of-vocabulary problem by starting from minimal units, such as characters or bytes, and learning how to merge them into longer fragments based on frequency in a large corpus. Popular approaches like BPE (Sennrich et al., 2016) and Figure 2: Heatmap of (code) LLMs' vocabulary distances (Amba Hombaiah et al., 2021). WordPiece (Schuster and Nakajima, 2012; Devlin et al., 2019) follow this general principle, differing mainly in their merge heuristics. Often, pretokenization steps like splitting at whitespace are applied before learning to prevent tokens from spanning across word boundaries. The tokenizers used by different LLMs can vary significantly due to differences in pre-tokenization rules, token learning algorithms, and pretraining corpora. As shown in Figure 2, even models from the same family often share less than half of their vocabulary, such as Llama 3 vs. Llama 4. The main exception occurs when model developers intentionally reuse the same tokenizer across variants, such as Qwen2.5 and Qwen2.5-Coder, which share an identical vocabulary and tokenizer configuration. 2.2 PL Tokenization Tokenization in PLs, often called lexing, is the first step of code parsing: it transforms a stream of characters into a sequence of tokens according to a PL's grammar. These tokens are then passed to a parser, which constructs an abstract syntax tree (AST) to represent the program's structure. While exact rules vary by language, most PLs share a common set of token types, including: identifiers (e.g., variable or function names), operators (e.g., +, *), keywords (e.g., if, return), literals (e.g., numeric or string constants), and whitespace, which is typically used to separate tokens but is otherwise ignored. Unlike LLM tokenization, PL tokenization in compilers and interpreters is deterministic. For example, the snippet x+1 is always tokenized into three tokens: an identifier (x), an operator (+), and a literal (1). Formatting changes, such as adding spaces, do not affect the token sequence as long as the code remains syntactically valid. 2 Table 1: Benchmarks in our experiments. We manually examine the benchmarks to follow the naming conventions, and to fix/exclude invalid tests and samples, see details in Section C.1. Benchmark Source Task Input PL Output PL # Samples HumanEval-Fix-py HumanEvalPack (Muennighoff et al., 2023) bug fixing Python Python 164 HumanEval-Fix-java Java Java 164 HumanEval-Explain-py code summarization Python Python 164 HumanEval-Explain-java Java Java 164 Avatar-py2java Avatar (Ahmad et al., 2023; Pan et al., 2024) code translation Python Java 244 Avatar-java2py Java Python 246 CodeNet-py2java CodeNet (Puri et al., 2021; Pan et al., 2024) Python Java 200 CodeNet-java2py Java Python 200 This behavior misaligns with LLM tokenizers: while PL tokenizers produce stable, grammaraware units, LLM tokenizers frequently break code structure, resulting in inconsistent or fragmented representations of semantically identical programs. In this work, we refer to grammar-aware tokens as PL tokens, and contrast them with the LLM tokens produced by learned subword tokenizers. 3 TOKDRIFT Framework Figure 1a illustrates the overall workflow of TOKDRIFT, our framework for quantifying model sensitivity to semantic-preserving code rewrites that alter tokenization. In a nutshell, TOKDRIFT systematically compares the LLM outputs given the baseline input tokens and variant input tokens (after applying rewrite rules) through a large set of experiments. Each experiment is performed on a specific benchmark, and tests the sensitivity of a given LLM against a specific rewrite rule. 3.1 Benchmarks We searched for recent popular coding LLM benchmarks where: (1) the input includes a code snippet, since rewrite rules cannot be applied on natural language; (2) the output is evaluated with an automated functional correctness metric.We focused on two popular PLs, Java and Python. Based on these criteria, we selected eight benchmarks covering three tasks, listed in Table 1. Bug fixing (Tufano et al., 2019) transforms a buggy code snippet into a correct one. Code summarization (Hu et al., 2018; Panthaplackel et al., 2020) aims at summarizing a code snippet into natural language description; following HumanEvalPack's setup (Muennighoff et al., 2023), the description is fed back to LLM to generate code for measuring correctness. Code translation (Ahmad et al., 2023; Puri et al., 2021) is the task of translating a code snippet from one PL to another. All benchmarks use tests to evaluate the correctness of outputs. Table 2: Models used in our experiments. Series S M L Llama-3 3B 8B 70B Qwen2.5-Coder 1.5B 7B 32B DeepSeek-Coder 1.3B 6.7B 33B 3.2 Models Table 2 lists the models used in TOKDRIFT. We selected three series of popular open-source LLMs (using the coding-specific variants if available), namely Llama-3, Qwen2.5-Coder, and DeepSeekCoder. To cover the model size spectrum, we used small (∼1B parameters), medium (∼7B), and large (>30B) variants in each series. All models are instruction-tuned. We perform greedy decoding to generate deterministic outputs (see experimental environment details in Section C.4). 3.3 Rewrite Rules Table 3 lists the rewrite rules used in TOKDRIFT. Each rewrite rule converts all occurrences of the left-hand side substring to the right-hand side substring. According to the grammars of the two PLs we experiment on (and generally for most modern PLs), these rewrite rules are semanticallypreserving by design. We apply one rewrite rule at a time to investigate their impact in isolation. The six rewrite rules starting with "N" are inspired by naming conventions. Identifiers usually follow one of the four casing styles: camelCase (for variables/functions in Java), PascalCase (for classes in Java/Python), snake_case (for variables/functions in Python), and SCREAMING_CASE (for constants in Java/Python). Since variables/- functions are most common among identifiers, we design rewrite rules to alter their casing style. Specifically, N1, N2, N3 convert camelCase identifiers in Java to the other three casing styles, while N4, N5, N6 convert snake_case identifiers in Python. These rewrite rules challenge LLMs' robustness to different naming styles. 3 Table 3: Rewrite rules supported by TOKDRIFT, inspired by naming conventions (starting with N) and spacing conventions (starting with S). Each rewrite rule may apply to Java (marked by J), Python (marked by P), or both. No. PL Rewrite Rule Description Example N1 J camelCase →snake_case Convert identifiers from the most common casing style in the input PL to alternative ones ␣sorted L st →␣sorted _lst N2 J camelCase →PascalCase ␣cloestPair →␣Close st Pair N3 J camelCase →SCREAMING_CASE ␣possible S olutions →␣POSS IBLE _S OLUTION S N4 P snake_case →camelCase ␣input _clip board →␣input Clipboard N5 P snake_case →PascalCase ␣string _xor →␣String X or N6 P snake_case →SCREAMING_CASE ␣triangle _area →␣TRI ANGLE _AREA S1 P OP -→OP ␣- Add space between operator and minus sign [::- 1 ] →[ :: ␣- 1 ] S2 P OP [→OP ␣[ Add space between operator and left square bracket )) )[ 2 :] →))) ␣[ 2 :] S3 J ) .→) ␣. Add space between right parentheses and period ␣'. '). replace →␣'.') ␣. replace S4 P ] )→] ␣) Add space between right square bracket and right parentheses : ]): →:] ␣): S5 P OP ]→OP ␣] Add space between operator and right square bracket = ␣[[] →= ␣[[ ␣] S6 J OP (→OP ␣( Add space between operator and left parentheses (( ! is True →( ␣(! is True S7 P [ ID→[ ␣ID Add space between left square bracket and identifier ([ v ow els →([ ␣vowels S8 J ++ )→++ ␣) Add space between increment operator and right parentheses ␣i ++) →␣i ++ ␣) S9 J . *→. ␣* Add space between period and asterisk .*; →. ␣* ; S10 P ) :→) ␣: Add space between right parentheses and colon ␣main (): →␣main () ␣: S11 J ) ;→) ␣; Add space between right parentheses and semicolon <>(); → () ␣; S12 J OP ;→OP ␣; Add space between operator and semicolon Ac ++; →Ac ++ ␣; S13 J P ) )→) ␣) Add space between two right parentheses .toCharArray ()) →.toCharArray () ␣) S14 J P ( )→( ␣) Add space between two left parentheses alpha () →alpha ( ␣) S15 J P . ID→. ␣ID Add space between period and identifier .factor ial →. ␣factorial S16 J P ( ID→( ␣ID Add space between left parentheses and identifier (String →( ␣String S17 J P OP ID→OP ␣ID Add space between operator and identifier :i +len (sub string →: ␣i + ␣len ( ␣substring S18 J P OP ALL→OP ␣ALL Add space between operator and identifier/operator (l : ␣list ): →( ␣l : ␣list ) ␣: The eighteen rewrite rules starting with "S" are inspired by spacing conventions. Whitespace around most operators usually carries no semantic meaning and is optional. Thus, the spacingrelated rewrite rules identifies two consecutive tokens (one of them is an operator) and inserts a space in between. Specifically, we look for combinations where one of them is a specific operator or any kind of operator (represented by OP), and the other one is another specific operator or an identifier (represented by ID). Exploring all combinations would be infeasible, thus we select the top-10 frequently appearing combinations in the benchmarks for each PL. In addition, we add S17 and S18 as "wildcard" rules to cover all cases where an OP is followed by an ID or ID/OP for both PLs. These rewrite rules challenge LLM and its tokenizer's robustness to different formatting styles. Notably, in most LLMs with a pre-tokenization step of splitting before whitespace, these rewrite rules will lead to more LLM tokens. 3.4 Metrics Recall that each experiment on a given {benchmark, model, rewrite rule} triplet compares the baseline outputs (given the original inputs) and the variant outputs (given the inputs after applying rewrite rule). The benchmark provides a set of tests to evaluate whether each output is correct or incorrect. We define accuracy as the percentage of correct outputs, and ∆accuracy as the variant's accuracy minus the baseline's accuracy. The ∆accuracy metric, although intuitive, has two limitations: (1) accuracy improvements and degradations on individual samples cancel out; (2) some samples may not be affected by a rewrite rule if the left-hand side substring does not appear in the input; the outputs of those samples will never change. To address these, we introduce an unbiased metric called sensitivity, defined as the percentage of the samples whose output correctness flips (from correct to incorrect or vice versa) out of the samples whose input is changed by the rewrite rule. A lower sensitivity indicates that the model is more robust against the token changes introduced by a rewrite rule; when averaged across all rewrite rules, it reflects how sensitive the model is to the LLM-PL tokenization misalignment. 4 Evaluation 4.1 Results Table 4 shows the accuracy and ∆accuracy of each model on each rewrite rule. We can observe that most rewrite rules cause measurable changes in model accuracy, ranging from -2.90 to +0.32 abso4 Table 4: Accuracy and ∆accuracy (in parenthesis) of each model on each rewrite rule. Variant Llama-3B Llama-8B Llama-70B Qwen-1.5B Qwen-7B Qwen-32B DS-1.3B DS-6.7B DS-33B Average Input PL = Java baseline 32.04 43.15 57.24 33.59 57.36 70.41 38.50 58.01 57.36 49.74 N1 32.69 (+0.65) 43.54 (+0.39) 57.49 (+0.25) 35.27 (+1.68) 57.62 (+0.26) 70.28 (-0.13) 37.98 (-0.52) 57.36 (-0.65) 57.11 (-0.25) 49.93 (+0.19) N2 32.17 (+0.13) 43.54 (+0.39) 56.85 (-0.39) 35.27 (+1.68) 57.75 (+0.39) 70.41 (+0.00) 39.02 (+0.52) 58.14 (+0.13) 57.36 (+0.00) 50.06 (+0.32) N3 32.56 (+0.52) 44.19 (+1.04) 56.20 (-1.04) 35.53 (+1.94) 58.01 (+0.65) 69.12 (-1.29) 38.37 (-0.13) 56.33 (-1.68) 56.46 (-0.90) 49.64 (-0.10) S3 31.65 (-0.39) 43.02 (-0.13) 56.20 (-1.04) 34.37 (+0.78) 56.72 (-0.64) 70.41 (+0.00) 37.34 (-1.16) 58.66 (+0.65) 57.88 (+0.52) 49.58 (-0.16) S6 31.52 (-0.52) 43.02 (-0.13) 57.62 (+0.38) 33.20 (-0.39) 57.49 (+0.13) 70.28 (-0.13) 37.98 (-0.52) 58.53 (+0.52) 57.49 (+0.13) 49.68 (-0.06) S8 31.91 (-0.13) 43.28 (+0.13) 57.24 (+0.00) 34.11 (+0.52) 56.72 (-0.64) 71.45 (+1.04) 38.63 (+0.13) 57.49 (-0.52) 58.27 (+0.91) 49.90 (+0.16) S9 32.30 (+0.26) 40.96 (-2.19) 58.66 (+1.42) 33.46 (-0.13) 58.14 (+0.78) 69.51 (-0.90) 36.95 (-1.55) 56.59 (-1.42) 57.75 (+0.39) 49.37 (-0.37) S11 32.69 (+0.65) 44.57 (+1.42) 55.17 (-2.07) 35.14 (+1.55) 56.33 (-1.03) 71.58 (+1.17) 37.34 (-1.16) 57.11 (-0.90) 57.11 (-0.25) 49.67 (-0.07) S12 30.49 (-1.55) 43.02 (-0.13) 56.07 (-1.17) 34.75 (+1.16) 55.81 (-1.55) 67.05 (-3.36) 38.63 (+0.13) 55.94 (-2.07) 58.53 (+1.17) 48.92 (-0.82) S13 32.43 (+0.39) 42.64 (-0.51) 56.59 (-0.65) 33.46 (-0.13) 57.36 (+0.00) 69.77 (-0.64) 37.47 (-1.03) 58.27 (+0.26) 56.98 (-0.38) 49.44 (-0.30) S14 29.84 (-2.20) 41.09 (-2.06) 54.13 (-3.11) 32.17 (-1.42) 56.85 (-0.51) 71.19 (+0.78) 37.86 (-0.64) 57.11 (-0.90) 57.62 (+0.26) 48.65 (-1.09) S15 30.62 (-1.42) 36.82 (-6.33) 57.24 (+0.00) 33.46 (-0.13) 56.72 (-0.64) 70.28 (-0.13) 37.34 (-1.16) 55.43 (-2.58) 59.43 (+2.07) 48.59 (-1.15) S16 30.88 (-1.16) 40.83 (-2.32) 55.94 (-1.30) 34.88 (+1.29) 57.36 (+0.00) 71.96 (+1.55) 36.43 (-2.07) 57.49 (-0.52) 58.66 (+1.30) 49.38 (-0.36) S17 28.68 (-3.36) 37.34 (-5.81) 56.07 (-1.17) 35.66 (+2.07) 55.43 (-1.93) 70.03 (-0.38) 35.40 (-3.10) 55.04 (-2.97) 58.91 (+1.55) 48.06 (-1.68) S18 25.97 (-6.07) 34.88 (-8.27) 56.85 (-0.39) 34.11 (+0.52) 56.07 (-1.29) 70.28 (-0.13) 33.98 (-4.52) 53.10 (-4.91) 56.33 (-1.03) 46.84 (-2.90) Input PL = Python baseline 39.12 49.87 69.04 40.67 64.51 76.17 44.82 61.92 68.13 57.14 N4 40.03 (+0.91) 51.04 (+1.17) 68.91 (-0.13) 39.77 (-0.90) 65.03 (+0.52) 77.85 (+1.68) 44.30 (-0.52) 61.53 (-0.39) 68.39 (+0.26) 57.43 (+0.29) N5 37.56 (-1.56) 50.91 (+1.04) 68.65 (-0.39) 39.25 (-1.42) 64.77 (+0.26) 77.72 (+1.55) 42.88 (-1.94) 61.53 (-0.39) 68.39 (+0.26) 56.85 (-0.29) N6 38.08 (-1.04) 50.65 (+0.78) 66.19 (-2.85) 39.38 (-1.29) 64.51 (+0.00) 76.81 (+0.64) 42.23 (-2.59) 61.14 (-0.78) 67.62 (-0.51) 56.29 (-0.85) S1 39.38 (+0.26) 50.39 (+0.52) 68.65 (-0.39) 40.54 (-0.13) 64.51 (+0.00) 76.68 (+0.51) 44.69 (-0.13) 62.56 (+0.64) 67.62 (-0.51) 57.22 (+0.08) S2 39.64 (+0.52) 50.65 (+0.78) 68.78 (-0.26) 40.41 (-0.26) 64.77 (+0.26) 75.91 (-0.26) 43.65 (-1.17) 62.44 (+0.52) 67.75 (-0.38) 57.11 (-0.03) S4 39.77 (+0.65) 50.65 (+0.78) 69.30 (+0.26) 40.54 (-0.13) 64.51 (+0.00) 73.19 (-2.98) 44.82 (+0.00) 61.92 (+0.00) 67.36 (-0.77) 56.90 (-0.24) S5 38.60 (-0.52) 50.78 (+0.91) 68.91 (-0.13) 40.80 (+0.13) 64.12 (-0.39) 76.94 (+0.77) 44.43 (-0.39) 62.69 (+0.77) 66.71 (-1.42) 57.11 (-0.03) S7 40.03 (+0.91) 49.35 (-0.52) 68.26 (-0.78) 40.67 (+0.00) 63.34 (-1.17) 76.42 (+0.25) 44.30 (-0.52) 62.69 (+0.77) 67.23 (-0.90) 56.92 (-0.22) S10 38.47 (-0.65) 50.65 (+0.78) 69.17 (+0.13) 40.67 (+0.00) 63.99 (-0.52) 77.46 (+1.29) 44.56 (-0.26) 62.05 (+0.13) 67.10 (-1.03) 57.12 (-0.02) S13 37.95 (-1.17) 50.13 (+0.26) 69.30 (+0.26) 40.54 (-0.13) 64.90 (+0.39) 76.55 (+0.38) 44.30 (-0.52) 62.05 (+0.13) 67.10 (-1.03) 56.98 (-0.16) S14 38.73 (-0.39) 49.22 (-0.65) 68.39 (-0.65) 39.38 (-1.29) 63.73 (-0.78) 74.09 (-2.08) 45.08 (+0.26) 61.66 (-0.26) 67.49 (-0.64) 56.42 (-0.72) S15 39.12 (+0.00) 50.26 (+0.39) 67.49 (-1.55) 39.77 (-0.90) 62.69 (-1.82) 76.30 (+0.13) 44.17 (-0.65) 61.66 (-0.26) 67.23 (-0.90) 56.52 (-0.62) S16 40.16 (+1.04) 49.87 (+0.00) 69.04 (+0.00) 39.64 (-1.03) 63.08 (-1.43) 76.68 (+0.51) 43.65 (-1.17) 61.27 (-0.65) 67.23 (-0.90) 56.74 (-0.40) S17 40.41 (+1.29) 50.39 (+0.52) 67.62 (-1.42) 39.38 (-1.29) 61.92 (-2.59) 76.55 (+0.38) 42.62 (-2.20) 60.49 (-1.43) 66.32 (-1.81) 56.19 (-0.95) S18 37.44 (-1.68) 49.87 (+0.00) 67.62 (-1.42) 38.34 (-2.33) 63.08 (-1.43) 75.13 (-1.04) 42.49 (-2.33) 62.05 (+0.13) 67.36 (-0.77) 55.93 (-1.21) Background color: baseline in grey, variants better than baseline in green, and variants worse than baseline in red. The best variant is highlighted in bold and the worst variant is underlined. lute percentage points if averaging across all models. The largest ∆accuracy of -8.27% happens on Llama-8B for Java benchmarks, whose accuracy drops from 43.15% to 34.88% when applying rewrite rule S18 (adding space after each operator). Considering advances in LLM performance are sometimes claimed with around 1 percentage point margin, these accuracy deltas caused by simple rewrite rules are non-negligible. The impact of misaligned tokenization is more apparent in the sensitivity metric, as shown in the distribution plots in Figure 3. The average sensitivity is 9.26% for naming rewrites and 8.29% for spacing rewrites. Among the naming rewrites (Figure 3a), LLMs are relatively less sensitive to transductions between camelCase and snake_case (N1 and N4), likely because camelCase and SCREAMING_CASE are less frequent. This finding implies that the casing styles of identifiers, while technically convey no semantic meaning in PLs, are an important factor in LLMs' understanding of code. In Figure 3b, we can see that LLMs' average sensitivity is over 10% for the two "wildcard" spacing rewrite rules (S17 and S18). Other spacing rewrite rules result in varying levels of sensitivity, among which the most impactful ones are S15 (adding space between period and identifier), S14 (adding space between a pair of parentheses), and S12 (adding space between operator and semicolon). In terms of the average sensitivity of models (Figure 3c), we observe that Llama-3 models are more sensitive than the other two series, but all models persist a non-negligible sensitivity of at least 5.71% (Qwen-32B on spacing rewrite rules). 4.2 Impact of Model Size We investigate whether larger models are less sensitive to tokenization changes, with the general assumption of larger models being more robust. Table 5 shows the the average sensitivity of models at different sizes, where the small, medium, and large models in each series are compared on a row. While the small and medium models are at around the same level of sensitivity, the large models are 5 (a) grouped by naming rewrite rule (b) grouped by spacing rewrite rule (c) grouped by model Figure 3: Violin plots of sensitivity distributions. usually less sensitive (i.e., more robust) than their smaller counterparts, with only one exception of Qwen-32B on naming rewrite rules. We also perform statistically significant tests via Wilcoxon signed-rank test (Conover, 1999). The results show that the differences are not significant for naming rules, but significant for spacing rules (except between the small and medium models for Qwen2.5-Coder and DeepSeek-Coder series). 4.3 Impact of Identifier Fragment Changes We noticed that identifiers are frequently tokenized into different subwords before and after applying rewrite rules. For example, Llama-3 tokenizes '␣sortedLst' into three tokens ['␣sorted', 'L', 'st'], and applying N1 changes it into two tokens ['␣sorted', '_lst']. We define this case as identifier fragment change: the list of fragments (tokens but ignoring spaces and underscores) changes beTable 5: Impact of model size on sensitivity. Rewrite Rule Model Series S M L Naming Llama-3 11.48 10.68 9.43 Qwen2.5-Coder 7.73 7.95 8.27 DeepSeek-Coder 9.88 8.95 8.95 Spacing Llama-3 10.22 10.99 8.51 Qwen2.5-Coder 7.07 8.87 5.71 DeepSeek-Coder 8.36 8.71 6.26 Table 6: Impact of identifier fragment changes on sensitivity. "Unchanged" samples do not have any identifier fragment change, and "Changed" samples have at least one identifier fragment change. Rewrite Rule Model Unchanged Changed Naming Llama-70B 8.13 11.21 Qwen-32B 6.58 10.57 DS-33B 6.61 10.82 Spacing Llama-70B 7.24 11.89 Qwen-32B 5.09 7.37 DS-33B 5.80 7.12 fore and after applying rewrite rules. Using this concept, we can categorize the samples into two groups, one without any identifier fragment change (i.e., "Unchanged"), and the other with at least one identifier fragment change (i.e., "Changed"). Table 6 shows the average sensitivity of models on the two groups of samples; note that we focus on the large model in each series in this analysis. The identifier fragment changed group shows consistently higher sensitivity than the unchanged group, with the largest difference on for naming rewrite rules (10.82% vs. 6.61%). This finding suggests that how identifiers are tokenized into subwords play an important role in LLMs' understanding of code. Arguably, identifiers are frequently not tokenized into semantically meaningful subwords (such as the '␣sortedLst' example), which may fundamentally limit the model's code comprehension and generation capabilities. 5 Root Cause Analyses In addition to quantifying its impact, we also study why LLMs are sensitive to tokenization changes, along two aspects: (1) word frequency in the pretraining corpus (Section 5.1); (2) LLM's hidden states before and after the rewrite rule (Section 5.2). 5.1 Word Frequency Analysis Our hypothesis is that there is a correlation between sensitivity and the word frequencies of the rewrite rule's left-hand side and right-hand side. If the ratio of right-hand side to left-hand side word fre6 Table 7: Word frequency of rewrite rules' left-hand side (LHS) and right-hand side (RHS) on GitHub. Ratio is the percentage of RHS to LHS word frequency. Rewrite Rule LHS RHS Ratio [%] Java S3: ) .→) ␣. 78.9M 45.7K 0.06 S8: ++ )→++ ␣) 22.9M 664K 2.90 S9: . *→. ␣* 34.2M 7.3M 21.35 S11: ) ;→) ␣; 161M 924K 0.57 S13: ) )→) ␣) 102M 3.4M 3.33 S14: ( )→( ␣) 144M 195K 0.14 S15: . ID→. ␣ID 175M 45.9M 16.22 S16: ( ID→( ␣ID 172M 6.6M 3.84 Python S4: ] )→] ␣) 44.6M 1.7M 3.81 S7: [ ID→[ ␣ID 61.1M 1.1M 1.83 S10: ) :→) ␣: 76M 1.4M 1.84 S13: ) )→) ␣) 59M 2.4M 4.07 S14: ( )→( ␣) 78.1M 71.7K 0.09 S15: . ID→. ␣ID 107M 40.6M 37.94 S16: ( ID→( ␣ID 105M 2.9M 2.76 quency is small (meaning right-hand side is rare in the corpus), LLMs will likely perform worse after applying the rewrite rule. We measure the word frequencies on GitHub, a primary source of code data in LLMs' pretraining corpora.1 Table 7 shows the word frequencies of the rewrite rules, and the ratio (in percentages) of the right-hand side to the left-hand side word frequency. The ratio is always less than 100%, which explains why LLMs exhibit non-negligible sensitivity to all rewrite rules. Some rewrite rules with low ratio, e.g., S14, also exhibit high sensitivity in Figure 3b. 5.2 Hidden State Analysis LLMs' hidden states represent their internal comprehension and reasoning processes, which may help explain their sensitive to tokenization changes. We compare the hidden states before and after applying the rewrite rules. For each tokens sequence changed, we extract the hidden states of the last token in the sequence, which summarizes the information of the entire sequence. We focus this analysis on the best-performing LLM, Qwen-32B. We first measure the cosine similarity between the hidden states before and after applying the rewrite rules. Figure 4 shows correlation between the layer from which the hidden states are extracted and the similarity. For both naming and spacing rewrite rules, the similarity starts from almost 0 in the first (input) layer, increases (and stabilizes in 1We use GitHub's search feature to measure word frequencies; due to the limitation in regular expressions and characters that can be used in the search string, we can only conduct this analysis on a subset of the spacing rewrite rules. (a) naming rewrite rules (b) spacing rewrite rules Figure 4: The similarity of each layer's hidden states before and after applying rewrite rules. most cases) in middle layers, and drops again at the last (output) layer. This observation is consistent with the information bottleneck theory (Saxe et al., 2019), which states that the middle layers capture the compressed semantic information. Interestingly, in Figure 4b, we observe that for some spacing rewrite rules (S14 and S3), the similarity in middle layers is also low, implying that the model sees the before and after versions as semantically different. These rewrite rules match the ones that LLMs are most sensitive to in Figure 3b. Then, we compute the hidden state diffs as the hidden states after applying rewrite rules minus those before applying, on the medium layer of the model which should best capture semantic information. Figure 5 shows the visualizations of the hidden state diffs using t-SNE (Maaten and Hinton, 2008). We observe that the diffs of naming and spacing rewrite rules are clearly distinguishable (Figure 5a), so are the diffs of naming (Figure 5b) and spacing rewrite rules (Figure 5c, note that S17 and S18 are excluded since they are supersets of other rewrite rules). This confirms that the hidden states, especially from the middle layers, are good representations of semantic information and may be utilized to mitigate the tokenization changes. 7 (a) naming vs. spacing (b) naming rewrite rules (c) spacing rewrite rules Figure 5: Visualizations of the hidden state diffs using t-SNE (Maaten and Hinton, 2008). 6 Related Work Tokenization Most modern LLMs use subword tokenizers such as BPE (Sennrich et al., 2016), which create vocabularies based on how often character sequences occur together. The resulting token types do not always correspond to meaningful words or code elements, and can vary depending on how the tokenizer was trained. For example, Liu et al. (2025) shows that allowing token merges across whitespace boundaries produces more meaningful units, compared to tokenizers that always split at spaces. Chirkova and Troshin (2023) introduces a tokenizer designed to better align with PL syntax, achieving lower token counts while preserving model performance. These studies show that tokenization can influence how well a model understands and generates code, and our work builds on this line of inquiry by quantifying the effects of semantic-preserving tokenization changes. Robustness to Representation Variations Another important question is how robust LLMs are to variations in tokenization and representation at inference time. Zheng et al. (2025) show that instruction-tuned models can often retain high performance even when inputs are tokenized in unconventional or character-level formats, suggesting that such models may learn generalizable internal representations. However, their study also shows a measurable performance drop compared to standard tokenizations, and other work highlights further limitations. Wang et al. (2025) find that adversarial changes to token boundaries can significantly degrade model predictions, especially in models that have not undergone instruction tuning. In structured domains like chemistry, Yan et al. (2025) demonstrate that LLMs produce inconsistent outputs across semantically equivalent molecular representations. These findings suggest that LLMs remain sensitive to surface-level variations. Our work contributes to this line by focusing specifically on PLs. Syntax-Aware Code Modeling To address the mismatch between subword tokenization and PL grammar, several approaches incorporate grammar constraints into the LLM decoding process. Synchromesh (Poesia et al., 2022) and PICARD (Scholak et al., 2021) enforce syntactic validity at generation time by using runtime parsing to filter out invalid token continuations. SynCode (Ugare et al., 2024) improves the efficiency of such methods by constructing a DFA-based mask that precomputes token legality while explicitly handling partial tokens. Boundless BPE (Schmidt et al., 2025) removes fixed pretokenizers and enables dynamic boundary selection, allowing the model to learn tokens that correspond to syntactic or semantic units. Together, these efforts aim to align LLM outputs more closely with formal code structure, a disconnect that our work quantifies by measuring how semantics-preserving tokenization variations affect model behavior. 7 Conclusions This work studies the tokenization misalignment between subword-based LLMs and PL grammar. While subword tokenizers like BPE are widely used in code LLMs, they segment inputs based on frequency statistics, not grammar, leading to token boundaries that may not align with syntactic units in code. Through a suite of semantic-preserving rewrite rules, our framework TOKDRIFT shows that even minor formatting changes, such as whitespace edits or identifier renamings, can cause substantial shifts in model outputs. These effects hold across nine coding LLMs and three tasks (fixing, summarization, and translation). These findings motivate future research for grammar-aware or domain-adaptive tokenizers that more faithfully reflect PL structure. 8 Limitations While our study shows limitations of current tokenizer designs in code LLMs, our analysis focuses on a targeted set of semantic-preserving rewrites based on common formatting and naming conventions; these do not encompass all potential sources of tokenization drift. Second, although we evaluate nine widely used code LLMs, our findings may not generalize to models with fundamentally different architectures (e.g., state space models (Gu et al., 2022)) or tokenization strategies (e.g., characterlevel or grammar-driven tokenizers (Kim et al., 2016)). Third, our work centers on measurement and diagnosis, and we do not explore mitigation strategies. Future work could investigate tokenizer retraining, ensemble decoding over multiple tokenizations, or architectural modifications to improve the alignment between token boundaries and programming language syntax. Acknowledgments We thank Yu Liu for valuable comments and feedback. This work was supported in part by Compute Ontario (computeontario.ca) and the Digital Research Alliance of Canada (alliancecan.ca). It was also partially supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (RGPIN-2024-04909) and a startup grant from the . Yuntian Deng is additionally supported by an NSERC Discovery Grant (RGPIN-2024-05178) and a start-up grant from the . References Wasi Ahmad, Md Golam Rahman Tushar, Saikat Chakraborty, and Kai-Wei Chang. 2023. AVATAR: A parallel corpus for Java-Python program translation. In Findings of the Association for Computational Linguistics: ACL, pages 2268-2281. Spurthi Amba Hombaiah, Tao Chen, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Dynamic language models for continuously evolving content. In International Conference on Knowledge Discovery and Data Mining, pages 2514-2524. Loubna Ben Allal, Niklas Muennighoff, Logesh Kumar Umapathi, Ben Lipkin, and Leandro von Werra. 2022. A framework for the evaluation of code generation models. https://github.com/bigcode-project/ bigcode-evaluation-harness. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, and 1 others. 2021. Evaluating large language models trained on code. arXiv preprint . Nadezhda Chirkova and Sergey Troshin. 2023. Codebpe: Investigating subtokenization options for large language model pretraining on source code. Preprint, . William Jay Conover. 1999. Practical nonparametric statistics. john wiley & sons. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. Albert Gu, Karan Goel, and Christopher Ré. 2022. Efficiently modeling long sequences with structured state spaces. In The International Conference on Learning Representations (ICLR). Batu Guan, Xiao Wu, Yuanyuan Yuan, and Shaohua Li. 2025. Is your benchmark (still) useful? dynamic benchmarking for code language models. arXiv preprint . Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. 2024. Deepseek-coder: When the large language model meets programming - the rise of code intelligence. Preprint, . Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In International Conference on Program Comprehension, pages 200210. Yoon Kim, Yacine Jernite, David Sontag, and Alexander Rush. 2016. Character-aware neural language models. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). Alisa Liu, Jonathan Hayase, Valentin Hofmann, Sewoong Oh, Noah A. Smith, and Yejin Choi. 2025. Superbpe: Space travel for language models. Preprint, . Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605. Meta FAIR CodeGen Team. 2025. Cwm: An openweights llm for research on code generation with world models. Technical report, Meta. 32Bparameter open-weights model; inference code and weights released. 9 Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. 2023. OctoPack: Instruction tuning code large language models. arXiv preprint . Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. Preprint, . Rangeet Pan, Ali Reza Ibrahimzada, Rahul Krishna, Divya Sankar, Lambert Pouguem Wassi, Michele Merler, Boris Sobolev, Raju Pavuluri, Saurabh Sinha, and Reyhaneh Jabbarvand. 2024. Lost in translation: A study of bugs introduced by large language models while translating code. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pages 1-13. Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond Mooney. 2020. Learning to update natural language comments based on code changes. In Annual Meeting of the Association for Computational Linguistics, pages 1853-1868. F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830. Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, and Sumit Gulwani. 2022. Synchromesh: Reliable code generation from pre-trained language models. Preprint, . Ruchir Puri, David S Kung, Geert Janssen, Wei Zhang, Giacomo Domeniconi, Vladimir Zolotov, Julian Dolby, Jie Chen, Mihir Choudhury, Lindsey Decker, Veronika Thost, Luca Buratti, Saurabh Pujar, Shyam Ramji, Ulrich Finkler, Susan Malaika, and Frederick Reiss. 2021. CodeNet: A large-scale AI for code dataset for learning a diversity of coding tasks. In Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Andrew M Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D Tracey, and David D Cox. 2019. On the information bottleneck theory of deep learning. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124020. Craig W. Schmidt, Varshini Reddy, Chris Tanner, and Yuval Pinter. 2025. Boundless byte pair encoding: Breaking the pre-tokenization barrier. Preprint, . Torsten Scholak, Nathan Schucher, and Dzmitry Bahdanau. 2021. PICARD: Parsing incrementally for constrained auto-regressive decoding from language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9895-9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Mike Schuster and Kaisuke Nakajima. 2012. Japanese and korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pages 5149-5152. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Annual Meeting of the Association for Computational Linguistics, pages 1715-1725. Michele Tufano, Jevgenija Pantiuchina, Cody Watson, Gabriele Bavota, and Denys Poshyvanyk. 2019. On learning meaningful code changes via neural machine translation. In International Conference on Software Engineering, pages 25-36. Shubham Ugare, Tarun Suresh, Hangoo Kang, Sasa Misailovic, and Gagandeep Singh. 2024. Syncode: Llm generation with grammar augmentation. Preprint, . Dixuan Wang, Yanda Li, Junyuan Jiang, Zepeng Ding, Ziqin Luo, Guochao Jiang, Jiaqing Liang, and Deqing Yang. 2025. Tokenization matters! degrading large language models through challenging their tokenization. Preprint, . Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, and 3 others. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics. Bing Yan, Angelica Chen, and Kyunghyun Cho. 2025. Inconsistency of llms in molecular representations. Digital Discovery. John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik R Narasimhan, and Ofir Press. 2024. SWE-agent: Agent-computer interfaces enable automated software engineering. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. Brian Siyuan Zheng, Alisa Liu, Orevaoghene Ahia, Jonathan Hayase, Yejin Choi, and Noah A. Smith. 2025. Broken tokens? your language model can secretly handle non-canonical tokenizations. Preprint, . 10 A Use of LLMs We used an LLM-based writing assistant to polish grammar. All ideas, analyses, experiments, and scientific claims are our own, and we take full responsibility for the content of this work. B Additional Background: Tokenizer Differences Between LLMs Figure 6 shows the heatmap of vocabulary distances between tokenizers, which includes 19 popular open-source (coding) LLMs from 8 model families. Notably, most LLMs adopt a pre-tokenization strategy that splits text into linguistically and layout-meaningful chunks before a byte-level BPE. While details vary by family, common choices include isolating short digit runs (often 1-3; Qwen and some DeepSeek variants prefer per-digit), treating contiguous letters with combining marks as words, splitting punctuation and symbol runs (sometimes with an optional leading space), and separating newline blocks and longer space runs. Non-Latin scripts such as Han/Hiragana/Katakana (and in some cases Hangul) are taken as contiguous spans. Family differences that matter for our study include LLaMA-3 explicitly detaching English clitics, CodeQwen-1.5 disabling pre-tokenization (leaving underscores and long ASCII spans intact), DeepSeek-Coder using code-oriented splits (letters, punctuation, newlines, CJK, digits), and DeepSeekV3/LLaMA-4/GPT-OSS converging on a similar unified scheme. In practice, more aggressive presegmentation tends to make models tolerant to superficial spacing around symbols but sensitive to numeric chunk boundaries, whereas byte-only or lightly pre-segmented designs make underscore and identifier edits more likely to introduce new token boundaries. C Additional Experimental Methodology C.1 Benchmarks Normalization To ensure that our semantic-preserving naming/spacing rewrite rules (Section 3.3) do not spuriously break compilation or tests, we perform a lightweight normalization pass before evaluation. For the bug fixing and code summarization tasks from HumanEvalPack (Muennighoff et al., 2023), we first canonicalize Java identifier style to camelCase from snake_case2, then propagate 2HumanEvalPack (Muennighoff et al., 2023) translates the HumanEval (Chen et al., 2021) benchmark from Python any renamings consistently to tests, entry points, and declarations to preserve their functionalities. For the code translation tasks, we start from the Avatar and CodeNet benchmarks prepared by Pan et al. (2024), following their task definitions and tests. We fixed some samples with harness-compatibility issues that would otherwise cause false negatives and prune a small number of unsalvageable or pathological samples (e.g., extremely long inputs or cases that time out), without changing the underlying problem semantics. And finally, we dropped 6 python2java tasks and 4 java2python tasks in Avatar that we could not fix. The most common adjustments fall into a few categories: (i) IO/formatting normalization. For example, we replace non-portable characters such as U+FFFD or segmentation markers like U+2581 with ASCII equivalents; ensure consistent tokenization by splitting on spaces instead of empty strings; remove trailing spaces/newlines; standardize numeric output with Java DecimalFormat or Python f-strings to fixed precision; (ii) test correctness fixes where expected outputs were inconsistent with the reference implementation or ordering; and (iii) minimal code-context edits that preserve semantics but align with tests (e.g., renaming helper methods where tokenizer-specific splits would otherwise occur, adding @Override annotations, or make Scanner/FastScanner usage consistent). All edits are specified once, applied uniformly to baseline and variant inputs, and never conditioned on model outputs. C.2 Rewrite Algorithms To mutatively rewrite a code context on naming, we first parse it to obtain a code token index and two identifier sets: (i) immutable identifiers derived from configured immutable types (e.g., Java: importDeclaration, methodCall; Python: import_as_name, trailer); (ii) declaration identifiers that are safe to rename (excluding Java methods annotated with @Override). We restrict candidates by casing using regexes, specifically, snake case matches [a-z0-9]+(?:_[A-Za-z0-9]+)+ and camel case identifier matches the regex [a-z]+(?:[A-Z]+[A-Za-z0-9]+[A-Za-z0-9]*)+. For each eligible identifier, we segment its lexeme by a well designed regex, convert from the source to the target case, and record the absolute character positions in the original string where underscores to other PLs (including Java), but all the identifiers were remained in snake_case regardless of the target PL. 11 Figure 6: Heatmap of vocabulary distances between tokenizers (Full ver.). would be inserted or removed (edit events). For HumanEval tasks, we additionally propagate the same renamings to tests, entry points, and declarations to keep the harness consistent, and these are treated as optional ancillary patches and do not alter the core algorithm. The immutable/declaration settings aim to maximize safe coverage while preserving compilation and test pass behavior. Spacing rewrite follows the same structure but, instead of changing identifier lexemes, we insert exactly one space between adjacent tokens when their kinds match a configured token-type bigram (former, latter) from Table 3. For each match, we insert whitespace at the boundary between the two tokens, record an insertion event at that position, and update offsets. Conceptually, although rewrites are defined over PL tokens, the notion of a fragment uses LLM tokens. For each rewrite site, we consider the minimal contiguous list of LLM tokens that covers the affected PL tokens (identifiers for naming and the two code tokens of each combination for spacing) as the fragment. Our fragment-change classification is based on an analysis of all fragments' transformation in a code context. Specifically, a merge occurs when at least one old LLM token boundary inside those spans disappears, and a split occurs when at least one new boundary appears after rewriting. To detect and analyze all LLM token boundary transformations, we compute LLM token start positions before and after rewriting with the same LLM tokenizer. We ignore boundaries created exactly at an edit site between two code tokens or those created right next to the edit site within one code token. Meaning that for insertions, we disregard any boundary introduced by the inserted whitespace between two code tokens that were fully or partially combined into one LLM token, as well as those caused by standalone underscores immediately to its right (a behavior commonly observed in 12 the DeepSeek-Coder or CodeQwen-1.5 tokenizer, where underscores are usually treated as a single token), as encoded by the various edit masks classified by edit types in Algorithm 3. The loop in Algorithm 3 shifts the original boundary set by the cumulative δ pre edit to align coordinate systems, builds the masks for shift edits and edit-adjacent positions for insert operation, and then compares adjusted old versus filtered new starts. Specifically, let Sold and Snew be the sets of LLM token starts before and after rewriting, and after masking specific edit sites, we compute A=Sold \ Snew (old boundaries lost) and B=Snew \ Sold (new boundaries gained). The label is unchanged if A=∅and B=∅, merged if A̸=∅and B=∅, split if A=∅ and B̸=∅, and mixed otherwise. C.3 Metrics Computation Algorithm We evaluate on two input programming language subsets for accuracy and ∆accuracy, Xp (Python inputs) and Xj (Java inputs), where their union X = Xj ∪Xp with |X| = 1546. For a fixed rewrite rule wi and model m, let Ti be the deterministic transformation that applies wi to an input x ∈X, and we define T0 as no rule would be applied on the input. And we let W={i|i = 0, 1, · · · , 24} denote the assignment set for all rules where i=0 is the baseline, i>0 means the variant of applying rule wi. Running the model yields code fm(Ti(x)), which the harness evaluates on the test set T (x). We define the test-level pass fraction rm,i(x) ≜ 1 |T (x)| X t∈T (x) [[fm(Ti(x))]]t, where [[fm(Ti(x))]]t ∈{0, 1} denotes the execution result of program fm(Ti(x)) from test t (Guan et al., 2025). Follow that we define the task-level correctness indicator Ym,i(x) ≜I{rm,i(x) = 1} ∈ {0, 1}. So the accuracy of a rule assignment i ∈W on a set S ∈{Xp, Xj} is Accuracyi(m; S) = 1 |S| X x∈S Ym,i(x). We report ∆accuracy as ∆accuracyi(m; S) ≜ Accuracyi(m; S) -Accuracy0(m; S), where i ∈ W and i ̸= 0. Not all inputs would be modified by a given rule, we therefore define the actually-affected subset X′ i ≜{ x ∈X : Ti(x) ̸= x }, whose summed sizes for all rules classified by model series are shown in Table 8. Then our proposed sensitivity measures how often correctness flips among affected inputs only by Sensitivityi(m) ≜ 1 |X′ i| X x∈X′ i Ym,i(x)-Ym,0(x) . Intuitively, ∆accuracy captures net gains/losses which may cancel when aggregating, whereas sensitivity isolates the flip rate on inputs whose tokens were actually changed by wi. C.4 Experimental Environment We conduct all experiments on an NVIDIA H100 GPU cluster, consuming approximately 1840 GPUhours in total across runs. All model checkpoints are obtained from the Hugging Face Hub and loaded with the Hugging Face Transformers library (v4.53.2) (Wolf et al., 2020). Unless otherwise stated, models are executed in fp32, the only exceptions are Llama-3.3-70B-Instruct, Qwen2.5-Coder-32B-Instruct, and deepseek-coder33b-instruct, which we run in fp16. All evaluations use the bigcode-evaluation-harness framework (Ben Allal et al., 2022) with its standard protocols. We use deterministic decoding without sampling and a batch size of 1 throughout. All tests are executed with Java 21.0.1 and Python 3.8. The maximum generation length is set to 1,024 tokens for HumanEvalPack and Avatar tasks, and 2,048 tokens for CodeNet tasks. For t-SNE visualizations, we use scikit-learn v1.7.1 (sklearn.manifold.TSNE) with perplexity to 70 and use the Barnes-Hut method with 1000 iterations, PCA initialization, learning_rate='auto', and n_jobs=16 (Pedregosa et al., 2011). D Additional Results and Analysis In Figures 7 and 8, the line plots summarize sensitivity for each rewrite rule. In Figure 9, comparing samples with and without identifier fragment change shows the overall trend of sensitivity on different model size. The per-series breakdowns in Figures 10 to 12 echo this pattern across Llama-3, Qwen2.5-Coder, and DeepSeek-Coder, while Llama tends to be more sensitive overall, all families exhibit a variation in sensitivity between "changed" and "unchanged" groups. Figure 13 shows the distribution of ∆accuracy per rewrite rule. Compared to sensitivity, ∆accuracy 13 Algorithm 1 Naming Rewrite Input: C: code context, P: code parser, Itypes: immutable identifier types, ρsrc: source case regex, tgt: target case, (optional) ExtraPatches: extra patches. Output: C′, E, (optional) ExtraPatches′. // TokIdx: list of (x, τ, [i, j)) where x=code token, τ=token kind, [i, j)=char span 1: (TokIdx, Sim, Sdec) ←INDEX(C, P, Itypes) 2: E ←[ ], R ←∅, C′ ←C, O ←0 // E: list of edit underscore events (pos, δ); O: total offset 3: 4: for (x, τ, [i, j)) ∈TokIdx in ascending i do 5: if τ = id ∧(x /∈Sim ∨x ∈Sdec) ∧REGEXCHECK(x, ρsrc) then 6: y ←CASECONV(x, tgt) // rewrite the identifier to the target case 7: ∆list ←DIFFUNDERLINEPOS(x, y, i) // return a list of add/del underscore events (pos, δ) 8: E ←APPEND E, ∆list) 9: R[x] ←y 10: C′ ←CONCAT(C′[0:(i+O)], y, C′[(i+O+|x|):|C′|]) // string concatenation 11: O ←O + (|y| -|x|) 12: 13: ExtraPatches′ ←APPLYREWRITES(ExtraPatches, R) 14: return (C′, E, ExtraPatches′) may not be ideal for quantifying robustness because gains and losses cancel and many samples are unaffected. Table 8 reports, for each rewrite rule, the number of benchmark samples actually modified, stratified by model series. Table 9 provides the full breakdown of sensitivity by fragment-change category. Together, these tables clarify both the scope of input perturbations and the source of robustness differences observed in the main results. 14 Algorithm 2 Spacing Rewrite Input: C: code context, P: code parser, (Kf, Kl): token-type bigram. Output: C′, E. // TokIdx: list of (x, τ, [i, j)) where x=code token, τ=token kind, [i, j)=char span 1: TokIdx ←INDEX(C, P) 2: E ←[ ], C′ ←C, O ←0 // E: list of insert events (pos, +1); O: total offset 3: 4: for k ←0 to |TokIdx| -2 do 5: (xf, τf, [if, jf)) ←TokIdx[k]; (xl, τl, [il, jl)) ←TokIdx[k+1] 6: if MATCH(τf, Kf) ∧MATCH(τl, Kl) then 7: E ←APPEND(E, (il, +1)) 8: C′ ←CONCAT C′[0:(jf+O)], " ", C′[(il+O):|C′|] // insert one space 9: O ←O + 1 10: 11: return (C′, E) Table 8: Samples that been modified by rewrite rule, broken down by model series. Rewrite Rule Model Series Total Unchanged Changed All Merged Split Mixed Naming Llama-3 2238 1292 946 105 767 74 Qwen2.5-Coder 2238 1292 946 105 767 74 deepseek-coder 2247 999 1248 123 996 129 CodeQwen1.5 2247 954 1293 136 1043 114 Spacing Llama-3 12804 9315 3489 660 2394 435 Qwen2.5-Coder 12804 9315 3489 660 2391 438 deepseek-coder 12804 8381 4423 608 3091 724 CodeQwen1.5 12804 8720 4084 725 2504 855 Figure 7: Percentage difference for naming rewrite transformations. 15 Algorithm 3 Fragment-Change Classification (CLASSIFY) Input: C: original code context, C′: new code context, E: list of edit events (pos, δ) with δ ∈{±1}, EditType: edit type, T: LLM tokenizer. Output: type ∈{unchanged, merged, split, mixed} 1: Lold ←POSLLMTOKENS(C, T) // cumulative first character positions of LLM tokens in C 2: Lnew ←POSLLMTOKENS(C′, T) 3: Sold ←SET(Lold), Snew ←SET(Lnew), Sed ←{ pos | (pos, δ) ∈E }, S+ ed ←∅ 4: O ←0 // cumulative offset from prior edits 5: 6: for each (pos, δ) in E do 7: a ←pos + O // adjusted position of this edit 8: Sold ←{ p+δ if p > a else p | p ∈Sold } 9: Sed ←{ e+δ if e > a else e | e ∈Sed } 10: S+ ed ←S+ ed ∪{a + max(δ, 0)} 11: O ←O + δ 12: if EditType = underscore then 13: Snew ←Snew \ (S+ ed \ Sed) // ignore starts next-to inserted standalone underscore edit boundaries 14: else if EditType = whitespace then 15: Snew ←Snew \ (Sed \ Sold) // ignore new starts created at whitespace edit boundaries 16: 17: A ←Sold \ Snew // A: lost tokens after rewrite (some tokens merged) 18: B ←Snew \ Sold // B: gained tokens after rewrite (some tokens split) 19: 20: if A ̸= ∅and B = ∅then 21: return merged 22: else if A = ∅and B ̸= ∅then 23: return split 24: else if A ̸= ∅and B ̸= ∅then 25: return mixed 26: else 27: return unchanged Figure 8: Percentage difference for spacing rewrite transformations. 16 Table 9: Impact of different types of fragment change on sensitivity (Full ver.). Rewrite Rule Model Total Unchanged Changed (all) Changed (subcategories) Merged Split Mixed Naming Llama-S 11.48 10.68 12.58 10.48 13.17 9.46 Llama-M 10.68 9.44 12.37 8.57 13.30 8.11 Llama-L 9.43 8.13 11.21 9.52 11.73 8.11 Qwen-S 7.73 6.97 8.77 8.57 9.00 6.76 Qwen-M 7.95 7.35 8.77 5.71 9.00 10.81 Qwen-L 8.27 6.58 10.57 11.43 10.82 6.76 DS-S 9.88 8.31 11.14 4.88 11.95 10.85 DS-M 8.95 7.91 9.78 7.32 10.54 6.20 DS-L 8.95 6.61 10.82 10.57 10.64 12.40 Spacing Llama-S 10.22 9.32 12.61 11.06 13.37 10.80 Llama-M 10.99 9.69 14.45 13.03 14.83 14.48 Llama-L 8.51 7.24 11.89 10.00 11.53 16.78 Qwen-S 7.07 6.04 9.80 8.33 9.62 13.01 Qwen-M 8.87 7.53 12.47 12.42 10.71 22.15 Qwen-L 5.71 5.09 7.37 7.42 6.48 12.10 DS-S 8.36 7.25 10.47 8.72 10.45 12.02 DS-M 8.71 7.58 10.85 10.36 10.19 14.09 DS-L 6.26 5.80 7.12 6.41 6.44 10.64 17 Figure 9: Naming rewrite rules percentage difference (with or without fragment change). Figure 10: (Llama series) Spacing rewrite rules percentage difference (with or without fragment change). Figure 11: (Qwen series) Spacing rewrite rules percentage difference (with or without fragment change). Figure 12: (Deepseek series) Spacing rewrite rules percentage difference (with or without fragment change). Figure 13: Distribution of ∆accuracy per rewrite rule across models and benchmarks. 18
2510.14967
INFORMATION GAIN-BASED POLICY OPTIMIZATION: A SIMPLE AND EFFECTIVE APPROACH FOR MULTI- TURN LLM AGENTS Guoqing Wang1*, Sunhao Dai2*, Guangze Ye3*, Zeyu Gan2, Wei Yao2, Yong Deng1, Xiaofeng Wu1, and Zhenzhe Ying1 1Ant Group 2Renmin University of China 3Individual Author GitHub: https://github.com/GuoqingWang1/IGPO ABSTRACT Large language model (LLM)-based agents are increasingly trained with rein- forcement learning (RL) to enhance their ability to interact with external environ- ments through tool use, particularly in search-based settings that require multi-turn reasoning and knowledge acquisition. However, existing approaches typically rely on outcome-based rewards that are only provided at the final answer. This reward sparsity becomes particularly problematic in multi-turn settings, where long tra- jectories exacerbate two critical issues: (i) advantage collapse, where all rollouts receive identical rewards and provide no useful learning signals, and (ii) lack of fine-grained credit assignment, where dependencies between turns are obscured, especially in long-horizon tasks. In this paper, we propose Information Gain- based Policy Optimization (IGPO), a simple yet effective RL framework that pro- vides dense and intrinsic supervision for multi-turn agent training. IGPO models each interaction turn as an incremental process of acquiring information about the ground truth, and defines turn-level rewards as the marginal increase in the policy’s probability of producing the correct answer. Unlike prior process-level reward approaches that depend on external reward models or costly Monte Carlo estimation, IGPO derives intrinsic rewards directly from the model’s own belief updates. These intrinsic turn-level rewards are combined with outcome-level su- pervision to form dense reward trajectories. Extensive experiments on both in- domain and out-of-domain benchmarks demonstrate that IGPO consistently out- performs strong baselines in multi-turn scenarios, achieving higher accuracy and improved sample efficiency. 1 INTRODUCTION Large language model (LLM)–based agents are increasingly endowed with the ability to interact with external environments through tool use (Zhang et al., 2025a; Huang et al., 2025; Li et al., 2025c), a capability often regarded as a critical step toward building general-purpose autonomous intelligent systems (Gutierrez et al., 2023; Qu et al., 2025). For example, web search (Zhang et al., 2025b; Qi et al., 2024), one of the most fundamental tools, enables agents to access up-to-date large- scale knowledge that substantially improves their capacity to solve complex, knowledge-intensive tasks (Ning et al., 2025). Through iterative interaction with the external environment, agents can gradually acquire missing information and refine their reasoning toward solving the target query. To equip general-purpose LLMs with such agentic capabilities, early efforts primarily relied on prompt-based workflows (Li et al., 2025b; Wang et al., 2024a; Zheng et al., 2024), which al- lowed tool use without additional training but often suffered from poor generalization. More re- cent studies have explored supervised fine-tuning (SFT) (Wang et al., 2024b) and reinforcement learning (RL) (Jin et al., 2025; Song et al., 2025a; Zheng et al., 2025b) to explicitly incentivize tool use, achieving markedly better performance. In particular, Group Relative Policy Optimiza- tion (GRPO) (Shao et al., 2024)–style methods have emerged as the dominant approach for training agentic LLMs. In this paradigm, a group of rollouts is generated for each query under the current *Equal contribution. 1 arXiv:2510.14967v1 [cs.CL] 16 Oct 2025 policy, and outcome-based rewards, typically defined by the correctness of the final answer against the ground truth, are used to construct group-relative advantages that drive policy optimization. Despite their simplicity and effectiveness on relatively easy tasks, outcome rewards suffer from an inherent limitation: they are sparse (Zhang et al., 2025c), since supervision is provided only at the final answer. This sparsity becomes particularly detrimental in multi-turn agentic settings, where long trajectories exacerbate the problem in two ways. First, sparse rewards frequently lead to advantage collapse: when sampled rollouts yield the same answer (e.g., all wrong or all right), Figure 1: Proportion of zero-advantage groups during training—IGPO vs. GRPO on Qwen2.5-7B/3B-Instruct. all rollouts in the group receive identical outcome re- wards, yielding zero group-relative advantages. As shown in Figure 1, a substantial portion of training iter- ations suffer from this issue, especially for smaller mod- els, which struggle more with complex queries. Second, outcome-only supervision fails to provide fine-grained credit assignment. In multi-turn scenarios, later turns are tightly dependent on earlier ones: a reasoning or tool call of the current turn may be correct but rendered useless by prior mistakes, or conversely, early successes may be negated by subsequent errors. Such dependencies are eas- ily obscured under outcome-only rewards, particularly in multi-hop tasks that require long-horizon reasoning. Several recent approaches have attempted to mitigate these issues by introducing process-level re- wards. One line of work leverages external oracle knowledge or reward models to judge intermediate steps (Wang et al., 2025; Feng et al., 2025), but this strategy is costly to obtain and risks introducing additional bias. Another line relies on Monte Carlo simulations to estimate step values (Wang et al., 2023; Zuo et al., 2025; Zhang et al., 2025c), yet these methods suffer from high variance unless a large number of samples are collected. Overall, both directions face challenges in scalability and fail to provide simple and stable supervision, underscoring the need for an intrinsic and reliable process-level reward design. To address these challenges, we propose Information-Gain-based Policy Optimization (IGPO), a simple but effective reinforcement learning framework that provides stable and intrinsic supervision for multi-turn agent training. The key intuition is to model each agent–environment interaction turn as an incremental process of acquiring information about the ground truth. Specifically, at every turn, IGPO computes the policy’s probability of producing the correct answer and defines the turn-level reward as the marginal increase in this probability compared to the previous state. This information gain reward offers ground-truth-aware feedback at every turn, in contrast to outcome rewards that only supervise the final answer. While turn-level rewards ensure dense and stable supervision, the outcome reward remains essential to anchor training to the final task objective. To combine these strengths, IGPO also integrates the outcome reward with the sequence of turn-level rewards, forming a dense reward trajectory for each rollout. To further stabilize training, we normalize rewards within groups and propagate them with discounted accumulation, enabling turn-level advantage estimation that captures long-horizon dependencies. Finally, IGPO optimizes the policy with a GRPO-style surrogate objective, replacing rollout-level advantages with our turn-level ones. To evaluate the effectiveness of IGPO, we conduct extensive experiments on both in-domain and out-of-domain benchmarks with search-based agents. Results show that IGPO consistently outper- forms strong baselines, delivering substantial gains in both answer accuracy and sample efficiency. Our main contributions can be summarized as follows: (1) We analyze the phenomenon of advan- tage collapse in outcome-reward–based optimization, and reveal the inefficiency of existing process- level rewards due to reliance on external knowledge or high-variance estimation. (2) We propose IGPO, a simple yet effective policy optimization framework that leverages turn-level information gain to provide dense, ground-truth-aware supervision while preserving outcome-level alignment. (3) Comprehensive experiments demonstrate that IGPO outperforms strong baselines across multi- ple benchmarks and significantly improves sample efficiency, especially for smaller models. 2 2 PRELIMINARIES In this section, we present the standard multi-turn agentic RL pipeline, illustrated with a search agent as a representative example. 2.1 TASK FORMULATION Let D = {(q, a)} denote a dataset of question–answer pairs, and let E represent an external tool (e.g., a web search engine). The goal of the agent is to solve question q by generating a rollout o = (τ1, τ2, . . . , τT ) through iterative interaction with the environment via tool E, where T is the total number of interaction turns. The last turn τT is the answer turn that outputs a rationale-then- final answer sequence, while all previous turns involve reasoning and tool interaction. Specifi- cally, for t < T, each turn τt is defined as a triple consisting of [think], [tool call], and [tool response]. The [think] step compels the agent to reason explicitly before acting, and each reasoning process is wrapped in a <think></think> tag following the DeepSeek-R1 setting (Guo et al., 2025). The [tool call] step invokes the external tool E by producing a struc- tured request, typically JSON-formatted and wrapped in a dedicated tag (e.g., <search>search query</search> for web search). The [tool response] step then returns structured out- puts from E, such as webpage snippets with titles, URLs, and text when using a web search engine tool, enclosed in <tool response>retrieved documents</tool response> tags. In the final turn, after a [think] step, the agent generates its answer within the <answer></answer> tag, and this content is extracted as the trajectory’s final prediction ˆa, which is expected to correctly address the input query q. This agent-environment interaction is illustrated at the bottom of Figure 2. 2.2 AGENTIC REINFORCEMENT LEARNING PIPELINE Policy Optimization. Agentic RL typically adopts policy-gradient methods to optimize the agent policy πθ. A common approach is Group Relative Policy Optimization (GRPO) (Shao et al., 2024), which removes the need for an explicit critic by normalizing returns within each sampled group of rollouts. Formally, given an actor model πθ, a group of G rollouts {oi}G i=1 is sampled from old policy πθold(· | q) for each input (q, a) ∼D. The policy is then optimized by maximizing the clipped surrogate objective with KL regularization: JGRPO(θ) = E(q,a)∼D, {oi}∼πθold(·|q) " 1 G G X i=1 1 |oi| |oi| X t=1 min πθ(oi,t | q, oi,<t) πθold(oi,t | q, oi,<t) bAi, clip  πθ(oi,t | q, oi,<t) πθold(oi,t | q, oi,<t), 1 −ϵ, 1 + ϵ  bAi ! −β DKL(πθ ∥πref) # , (1) where bAi = ri−mean(r1,r2,··· ,rG) std(r1,r2,··· ,rG) is the normalized group-relative advantage for the i-th rollout and ri is the outcome reward of the i-th rollout. ϵ is the clipping ratio, and β controls the KL penalty that regularizes the updated policy toward the reference model πref. During optimization, gradients are applied only to decision tokens (reasoning, tool calls, answers), while tool responses from the external environment are masked out. Reward. During training, the agent receives a scalar reward r for each rollout o, which provides the optimization signal. Prior work usually adopts an outcome-based answer reward combined with a format penalty: r = ( F1(ˆa, a) = 2 |ˆa∩a | |ˆa|+|a| ∈[0, 1], if the output is in valid format, λfmt, otherwise, (2) where ˆa is the predicted final answer for each rollout, a is the ground-truth answer, and F1(ˆa, a) ∈ [0, 1] denotes the word-level F1 score between the two. If the output violates the required schema (e.g., missing tags or malformed JSON), a negative constant λfmt < 0 is assigned as a penalty. Thus, the outcome reward provides a correctness signal aligned with evaluation metrics, while the format penalty enforces the structural validity of outputs. 3 Policy LLM Environment Query Info … … Q Assign To Turn 0 Info Gain ( | ) 𝑮𝑻 𝑶𝒊 𝟏 Info Gain ( | ) 𝑮𝑻 Q Turn 1 Q Turn 2 Assign To Info Gain ( | ) 𝑮𝑻 Q Turn T-1 Assign To … … Information Gain–Based Immediate Reward 𝑹𝑰𝑮 (Answer is omitted) Ground Truth 𝒍𝒐𝒈𝒑𝒊 𝟎 Ground Truth 𝒍𝒐𝒈𝒑𝒊 𝟏 Ground Truth 𝒍𝒐𝒈𝒑𝒊 𝟐 … … Ground Truth 𝒍𝒐𝒈𝒑𝒊 𝑻&𝟏 [think] [answer] [think] [tool call] [tool response] … 𝑹𝟏 𝑰𝑮 ⊕ Reward 𝑹𝟏 𝑭𝟏 𝑹𝟐 𝑰𝑮 ⊕ 𝑹𝟐 𝑭𝟏 𝑹𝒊 𝑰𝑮 ⊕ 𝑹𝒊 𝑭𝟏 𝑹𝑮 𝑰𝑮 ⊕ 𝑹𝑮 𝑭𝟏 Rollout … … 𝑨𝟏 Advantage γ 𝑨𝟏 , 𝑨𝟐 γ 𝑨𝟐 , 𝑨𝒊 γ 𝑨𝒊 , 𝑨𝑮 γ 𝑨𝑮 , Interaction 𝑶𝟏 𝑶𝟐 𝑶𝒊 𝑶𝑮 [think] [tool call] [tool response] w/ loss w/o loss Repeat at most T-1 Times Rollout : 𝑶𝒊 𝟐 𝑶𝒊 𝑻&𝟏 Output 𝑶𝒊 𝟏 Output 𝑶𝒊 𝟏Output 𝑶𝒊 𝟐 Output 𝑶𝒊 𝑻&𝟐Output 𝑶𝒊 𝑻&𝟏 Figure 2: The training pipeline of IGPO. (Upper) Turn-level information gain rewards are computed by measuring changes in ground-truth probability and combined with the outcome reward to derive discounted advantages. (Lower) Each rollout contains at most T −1 interaction turns, where each turn includes a reasoning step, a tool call, and the returned tool response, followed by a final answer turn. During optimization, the loss on tool response is masked out. 3 INFORMATION GAIN-BASED POLICY OPTIMIZATION In this section, we first illustrate our motivation and then provide a detailed introduction to our pro- posed information gain-based policy optimization, whose overall framework is shown in Figure 2. 3.1 MOTIVATION While outcome-based reinforcement learning has been effective in single-turn tasks, directly extend- ing it to multi-turn agentic settings such as search agents faces critical limitations. In the standard GRPO framework (Eq. 1), each rollout oi receives a scalar reward ri computed from the final answer ˆai. For complex queries, however, it is often the case that all G rollouts fail to produce the correct answer, resulting in uniformly zero rewards; conversely, for simple queries, all rollouts may produce the same correct answer, leading to the same issue. In these cases, the normalized group-relative ad- vantages { bAi} collapse to near zero, and the entire sample provides almost no learning signal. We refer to this phenomenon as advantage collapse. Moreover, such outcome-only supervision lacks fine-grained credit assignment across turns. In multi-turn scenarios, later decisions critically de- pend on earlier ones: a tool call may be effective yet rendered useless by prior retrieval errors, or early reasoning may be correct but overshadowed by subsequent mistakes. Such dependencies are obscured under single outcome rewards, making it difficult for the policy to distinguish productive reasoning from uninformative or misleading turns. To mitigate this issue, we introduce Information-Gain-based Policy Optimization (IGPO). The key idea is to exploit the multi-turn structure of agentic rollouts and treat each turn as an opportunity to acquire additional evidence toward the ground truth. At every turn, IGPO measures the increase in the policy’s confidence of generating the correct answer, which we defined as the information gain of this turn and uses this as the turn-level reward. By rewarding turn-level information gain, IGPO supplies denser and more fine-grained supervision, especially at early training stages. We fur- ther present a theoretical analysis in Appendix A, which intuitively explains why IGPO effectively addresses the limitations of sparse outcome rewards in multi-turn scenarios. Since the informa- tion gain is defined with respect to the ground-truth answer and computed under teacher forcing, it always produces a valid signal, ensuring that every sample contributes to learning even when no rollout produces a fully correct final answer. 4 3.2 INFORMATION GAIN-BASED TURN-LEVEL REWARD Turn-level Reward. We view multi-turn agent–environment interaction as a process of incremen- tally acquiring information about the ground truth. To capture this intuition, we propose an intrinsic information gain-based reward. At each turn, we evaluate the policy’s probability of generating the ground-truth answer and define the reward as the difference between consecutive states. We call this the information gain reward, as it measures the marginal increase in posterior probability mass assigned to the ground truth induced by the current turn. Formally, let a = (a1, . . . , aL) denote the ground-truth answer tokens. For the t-th turn in the i-th rollout, the probability of a under the current policy πθ is computed as πθ(a | q, oi,≤t) = exp  1 L L X j=1 log πθ(aj | q, oi,≤t, a<j)  , (3) where oi,≤t denotes the prefix of rollout oi up to turn t. Then the immediate reward * for turn t is ri,t = IG(a | q, oi,t) = πθ(a | q, oi,≤t) −πθ(a | q, oi,≤t−1), 1 ≤t < T. (4) In practice, the ground-truth answer a is wrapped in the same schema as a predicted answer to ensure consistency with rollout formatting, e.g., <think>Now there’s enough information to answer</think><answer>Ground Truth a</answer>. This turn-level reward has two desirable properties: (1) Ground-truth awareness: the reward in- creases when the action raises the policy’s confidence in the correct answer, and decreases other- wise; (2) Dense supervision: the reward is defined for every sample, even when no rollout yields a correct answer, thereby alleviating reward sparsity and avoiding advantage collapse. Integrating Outcome and Turn-level Rewards. For each rollout oi = (τi,1, . . . , τi,T ) where the last turn τi,T is the answer turn producing ˆai, we can construct a length-T reward vector ri = (ri,1, ri,2, . . . , ri,T ). For t < T, the turn reward is the information gain ri,t = IG(a | q, oi,t) defined in Section 3.2. For the answer turn t = T, the reward ri,T follows the outcome-based formu- lation in Eq. 2. This yields a dense per-turn supervision signal that combines intrinsic information gains for intermediate turns with a final extrinsic correctness signal at the answer turn. 3.3 POLICY OPTIMIZATION WITH TURN-LEVEL ADVANTAGE Turn-level Advantage Estimation. Given a rollout oi = (τi,1, . . . , τi,T ), each turn τi,t is associ- ated with a reward ri,t as defined in Section 3.2. To make rewards comparable across turns and trajectories, we first aggregate all rewards in the group: R = { ri,t : i = 1, . . . , G, t = 1, . . . , T }, (5) and apply group-wise z-normalization: Ai,t = ri,t −mean(R) std(R) . (6) While Ai,t captures the relative quality of each turn, it only reflects immediate effects and ignores the impact of current decisions on future turns. To incorporate such long-horizon dependencies, we compute a discounted cumulative advantage to propagate outcome signals backward to earlier turns: eAi,t = T X k=t γ k−tAi,k, (7) where γ ∈(0, 1] is the discount factor. During optimization, eAi,t is assigned to all decision tokens produced in turn t, while raw tool responses from the external environment are masked out. This yields a dense and future-aware supervision signal for policy learning. *Due to its log-prob origin, we apply stop-gradient to the information gain–based reward. 5 Policy Optimization. With the discounted turn-level advantages { eAi,j} defined above, we optimize the agent policy using a clipped surrogate objective with KL regularization, following the same structure as GRPO but with a finer-grained credit assignment. Formally, the IGPO objective is JIGPO(θ) = E(q,a)∼D, {oi}∼πθold(·|q) " 1 G G X i=1 1 |oi| |oi| X t=1 min πθ(oi,t | q, oi,<t) πθold(oi,t | q, oi,<t) eAi, t, clip  πθ(oi,t | q, oi,<t) πθold(oi,t | q, oi,<t), 1 −ϵ, 1 + ϵ  eAi, t ! −β DKL(πθ ∥πref) # , (8) where ϵ is the clipping threshold, β controls the KL penalty strength, and t maps token oi,t to its originating turn. During optimization, only decision tokens (reasoning, tool calls, and answers) receive gradient updates, while raw tool responses are masked out. To further substantiate the simplicity and implementability of the proposed IGPO, we provide an algorithmic flow comparison between IGPO and GRPO in Appendix E. 4 EXPERIMENTS 4.1 EXPERIMENTAL SETUP Datasets & Metrics. To evaluate the effectiveness of our proposed IGPO, we conduct experiments on both in-domain (ID) and out-of-domain (OOD) QA benchmarks in an agentic search setting. Following previous work (Zheng et al., 2025b; Deng et al., 2025), the ID setting includes four widely used datasets: NQ (Kwiatkowski et al., 2019), TQ (Joshi et al., 2017), HotpotQA (Yang et al., 2018), and 2Wiki (Ho et al., 2020), while the OOD setting includes three datasets: MusiQue (Trivedi et al., 2022), Bamboogle (Press et al., 2022), and PopQA (Mallen et al., 2022). We report word-level F1 as the evaluation metric, which is computed as the harmonic mean of precision and recall between the predicted and reference answers. Baselines. To directly verify IGPO’s superiority on agentic search tasks, we compare it against a set of competitive baselines: (1) Prompt-based methods: CoT (Wei et al., 2022), CoT+RAG (Gao et al., 2023), and Search-o1 (Li et al., 2025b), which represent the baseline performance of LLMs without further training on search tasks. (2) Outcome-reward RL-based methods: Search-r1-base/Instruct (Jin et al., 2025), R1-searcher (Song et al., 2025a), and DeepResearcher (Zheng et al., 2025b), the representative search agents with outcome-based reward RL, yielding marked performance gains. (3) Step-reward RL-based methods: StepSearch-base/instruct (Wang et al., 2025), ReasoningRAG (Zhang et al., 2025c), and GiGPO (Feng et al., 2025), which are the latest approaches exploring step-reward RL in search-agent settings. To further validate IGPO’s effectiveness, we also compare it against the following commonly used RL algorithms under the same configuration: PPO (Schulman et al., 2017), a widely used actor-critic algorithm that requires an additional value model, and critic-free methods Reinforce++ (Hu, 2025), RLOO (Kool et al., 2019; Ahmadian et al., 2024), GRPO (Shao et al., 2024), and GSPO(Zheng et al., 2025a) which perform advantage estimation over trajectory groups or batchs. Implementation Details. We use Qwen2.5-7B-Instruct (Qwen et al., 2025) as our backbone model. The training is conducted using the verl (Sheng et al., 2025) framework. The discounted factor γ is set to 1 with no future tuning. At each training step, we sample 32 prompts, and sample 16 rollouts for each prompt. The maximum dialogue turns are set to 10. For the environment, we use the google search API as our tool. The settings of our experiments are consistent with DeepResearcher (Zheng et al., 2025b). For the other baselines in Table 1, we directly copy their reported results. All RL training methods (including ours and the baselines) use exactly the same hyperparameter config- urations. The training and inference prompt templates are shown in Appendix F. Please refer to Appendix C for more details. 4.2 OVERALL PERFORMANCE The overall performance comparison between IGPO and the baseline methods is presented in Table 1 and Table 2. Based on these results, we can draw the following key observations: 6 Table 1: Main results of IGPO compared with different agentic RL baselines across seven datasets. In-domain Out-of-domain Method NQ TQ HotpotQA 2Wiki Musique Bamboogle PopQA Avg. Prompt-based CoT 19.8 45.6 24.4 26.4 8.5 22.1 17.0 23.4 CoT+RAG 42.0 68.9 37.1 24.4 10.0 25.4 46.9 36.4 Search-o1 32.4 58.9 33.0 30.9 14.7 46.6 38.3 36.4 Outcome-reward RL-based Search-r1-base 45.4 71.9 55.9 44.6 26.7 56.5 43.2 49.2 Search-r1-instruct 33.1 44.7 45.7 43.4 26.5 45.0 43.0 40.2 R1-searcher 35.4 73.1 44.8 59.4 22.8 64.8 42.7 49.0 DeepResearcher 39.6 78.4 52.8 59.7 27.1 71.0 48.5 53.9 Step-reward RL-based StepSearch-base - - 49.3 45.0 32.4 57.3 - 46.0 StepSearch-instruct - - 50.2 43.1 31.2 53.4 - 44.5 ReasoningRAG - - 48.9 50.4 20.6 45.5 46.2 42.3 GiGPO 46.4 64.7 41.6 43.6 18.9 68.9 46.1 47.2 IGPO 46.7 80.1 57.2 68.2 31.4 74.9 52.5 58.7 Table 2: Main results of IGPO compared with different RL baselines across seven datasets. In-domain Out-of-domain Method NQ TQ HotpotQA 2Wiki Musique Bamboogle PopQA Avg. RLOO 40.7 72.5 49.6 55.0 24.8 62.2 43.1 49.7 PPO 38.7 75.4 48.6 59.7 26.2 63.4 48.7 51.5 GRPO 40.3 77.0 48.9 57.7 25.0 65.1 49.6 51.9 Reinforce++ 34.3 67.5 45.9 54.5 23.7 61.2 44.3 47.3 GSPO 41.5 77.7 46.3 60.1 25.4 67.6 45.4 52.0 IGPO 46.7 80.1 57.2 68.2 31.4 74.9 52.5 58.7 Training-based methods consistently outperform prompt-based baselines. As shown in Ta- ble 1, all reinforcement learning–based methods, whether outcome- or step-reward driven, achieve substantially higher performance than all prompt-based approaches. This confirms that explicit pol- icy optimization is essential for developing effective LLM-based agents, as opposed to relying on zero-shot prompting alone. Existing step-reward methods yield competitive but unstable improvements compared to outcome-reward RL methods. While step-reward baselines occasionally surpass outcome-reward ones on specific datasets (e.g., StepSearch on Musique), their overall performance still lags behind the strongest outcome-reward methods such as DeepResearcher. This suggests that existing step- reward designs, although able to provide intermediate guidance, often suffer from noisy or weak supervision signals that limit their generalizability. IGPO achieves the best overall performance across both in-domain and out-of-domain datasets. Our IGPO outperforms all baselines, with an average score of 58.7, a clear margin over the best method (+4.8 over DeepResearcher). This improvement is attributed to IGPO’s informa- tion gain-based reward design, which assigns intrinsic, ground-truth-aware credit at every turn while preserving the outcome reward at the answer step. By avoiding advantage collapse and improving sample efficiency, IGPO delivers robust gains across both in-domain and out-of-domain datasets. IGPO consistently outperforms other RL algorithms. Beyond task-specific baselines, Table 2 shows that IGPO also achieves the highest overall score among standard RL methods, surpassing RLOO, PPO, Reinforce++, and GSPO. Unlike these methods, which rely solely on sparse outcome 7 Table 3: Ablation results of IGPO on Qwen2.5-3B/7B-Instruct with different reward designs. IGPO (w/ F1) corresponds to using only outcome rewards, reducing to standard GRPO. In-domain Out-of-domain Method NQ TQ HotpotQA 2Wiki Musique Bamboogle PopQA Avg. Qwen2.5-3B-Instruct IGPO (w/ F1) 31.0 55.6 27.5 29.4 12.1 35.7 34.9 32.3 IGPO (w/ IG) 29.1 53.6 27.9 36.5 17.5 44.7 31.3 34.4 IGPO (w/ F1+IG) 40.5 69.4 46.8 48.2 23.1 57.9 47.4 47.6 Qwen2.5-7B-Instruct IGPO (w/ F1) 40.3 77.0 48.9 57.7 25.0 65.1 49.6 51.9 IGPO (w/ IG) 37.5 75.0 51.0 61.0 28.6 69.6 47.1 52.8 IGPO (w/ F1+IG) 46.7 80.1 57.2 68.2 31.4 74.9 52.5 58.7 (a) NQ (b) TQ (c) HotpotQA (d) 2Wiki (e) Musique (f) Bamboogle (g) PopQA Figure 3: Training curves on Qwen2.5-7B-Instruct with different reward designs. rewards, IGPO incorporates turn-level advantages to provide denser and more stable supervision, leading to stronger generalization and more efficient training. 4.3 ABLATION STUDY We further conduct ablation experiments to assess the contribution of different reward components. As shown in Table 3, we observe: First, using only information gain (IG) turn-based reward or only outcome reward (F1) yields clearly inferior results compared to the full combination. This highlights the complementary roles of turn-level and outcome-level supervision: the outcome reward enforces alignment with the final task objective but suffers from severe sparsity, whereas the information gain reward offers dense and stable guidance for intermediate steps. Second, IGPO with only IG achieves performance comparable to or even exceeding that of standard GRPO (i.e., IGPO w/ F1). This demonstrates that IGPO’s information gain reward is not subject to reward hacking. Usually, without outcome supervision, unstable reward designs would quickly collapse. In contrast, our IGPO remains robust because its turn-level signals are intrinsically defined and grounded in the ground truth. Third, the improvements are particularly pronounced on the smaller 3B model. Compared to standard GRPO, IGPO improves the 3B model by +15.3 points (32.3 →47.6) and the 7B model by +6.8 points (51.9 →58.7). This larger benefit on the 3B model arises because advantage collapse is more severe for weaker models that struggle to directly produce correct answers (Figure 1), making them especially reliant on dense reward signals. In such cases, the information gain reward helps prune noisy reasoning paths and reinforce rollouts that progressively approach the ground truth. 8 Figure 4: Mean reduction in ground-truth an- swer entropy from the initial query (Turn 0) to the last non-answer turn (T −1) during training. Figure 5: Token Efficiency: average perfor- mance with respect to the number of tokens used for gradient updates. Finally, IGPO demonstrates consistently faster and more stable learning dynamics. As shown in Figure 3, IGPO steadily outperforms its two ablated variants throughout training across all seven datasets. The curves highlight two advantages: (i) IGPO converges to higher F1 scores, confirming the benefit of combining intrinsic turn-level reward and outcome rewards, and (ii) IGPO maintains stable improvements over steps, indicating robustness against reward sparsity and noisy supervi- sion. These results further validate that IGPO provides dense and reliable training signals, thereby improving both training efficiency and final performance. 4.4 IN-DEPTH ANALYSIS Ground-truth Entropy Reduction. To better understand how IGPO improves training dynamics, we measure the change in ground-truth answer entropy from the initial query (Turn 0) to the last non-answer turn (T −1). As shown in Figure 4, IGPO consistently achieves a larger entropy re- duction than GRPO throughout training. This indicates that the information gain reward effectively encourages intermediate steps to move the policy closer to the ground-truth answer distribution. In contrast, outcome-based supervision in GRPO provides no guidance for intermediate turns, resulting in weaker entropy reduction. These results highlight that IGPO’s turn-level supervision translates into more confident and grounded reasoning trajectories. Token Efficiency. We further compare IGPO and GRPO in terms of token efficiency, i.e., the performance improvement per token used for gradient updates. As shown in Figure 5, performance increases more rapidly under IGPO, and the gap over GRPO widens as training progresses. In other words, IGPO achieves stronger performance with fewer tokens, indicating that its turn-level rewards deliver denser and more informative gradients than outcome-only supervision. This finding is consistent with the training dynamics observed in Figure 3, where IGPO not only converges faster but also maintains a stable advantage throughout optimization. Such improvements in token efficiency are particularly valuable in agentic RL, where training data is scarce and expensive to obtain, making efficient use of every gradient update a critical factor for scaling. The case study and additional analyses are provided in Appendix D. Beyond empirical effectiveness, our theoretical analysis in Appendix A shows that maximizing turn- level information gain constrains error accumulation in multi-turn scenarios. Thus, IGPO not only alleviates advantage collapse but also reduces error accumulation in long-horizon agentic tasks. 5 RELATED WORK The recent success of reinforcement learning (RL) methods in large reasoning models (Chen et al., 2025a) such as OpenAI o1 (Jaech et al., 2024) and DeepSeek R1 (Guo et al., 2025) has established RL as a central tool for enhancing large language models (LLMs)-based agents to solve more com- plex tasks. A growing body of work has explored different RL algorithms such as PPO (Schulman et al., 2017), Reinforce++ (Hu, 2025), GRPO (Shao et al., 2024), RLOO (Kool et al., 2019; Ahma- dian et al., 2024), DAPO (Yu et al., 2025), and GSPO (Zheng et al., 2025a). These methods have been particularly effective in improving the capabilities of LLM-based agents (Li et al., 2025a). 9 Building on these advances, an important line of research has focused on applying RL to search- based agents (Deng et al., 2025; Dai et al., 2025b;a). Early efforts such as DeepRetrieval (Jiang et al., 2025) demonstrated the feasibility of end-to-end optimization by applying PPO with retrieval- oriented metrics (e.g., recall) as rewards. Subsequent works, including Search-R1 (Jin et al., 2025), DeepResearcher (Zheng et al., 2025b), and ReSearch (Chen et al., 2025b), extended this paradigm to multi-turn reasoning and search. R1-Searcher (Song et al., 2025a) and R1-Searcher++ (Song et al., 2025b) further introduced two-stage RL strategies, separately strengthening the ability to interact with retrieval systems and to utilize retrieved information effectively. However, in multi-turn scenarios, outcome-only rewards remain sparse and often fail to provide suf- ficient guidance, leading to unstable optimization and inefficient sample utilization. Recent studies have explored step-wise or process-level rewards that assign credit to intermediate actions. Rea- sonRAG (Zhang et al., 2025c) adopted Monte Carlo Tree Search (MCTS) to approximate the value of each step. StepSearch (Wang et al., 2025) leveraged a memory vector of retrieved documents, supervising intermediate steps based on their maximum similarity to ground-truth evidence. GiGPO (Feng et al., 2025) introduced anchor-based grouping to estimate relative advantages for actions orig- inating from the same anchor state. While these methods provide denser supervision than outcome- only rewards, they either rely on external oracle knowledge or suffer from limited stability and scalability, leaving room for more intrinsic and generalizable process-level reward designs. 6 CONCLUSION, LIMITATIONS AND FUTURE WORK In this work, we propose IGPO, a simple and effective reinforcement learning framework for training multi-turn LLM-based agents. By providing intrinsic, ground-truth-aware supervision at every turn while preserving alignment with the final objective, IGPO delivers dense and stable training signals. Extensive experiments across in-domain and out-of-domain benchmarks demonstrate that IGPO consistently outperforms strong baselines, achieving higher accuracy and better sample efficiency, particularly for smaller models where sparse rewards are most problematic. However, our approach still relies on the availability of ground-truth answers, which limits its appli- cability in open-ended settings. In future work, we plan to extend IGPO to broader agentic scenarios beyond search, including tasks without explicit supervision. REFERENCES Arash Ahmadian, Chris Cremer, Matthias Gall´e, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet ¨Ust¨un, and Sara Hooker. Back to basics: Revisiting reinforce style optimization for learn- ing from human feedback in llms. arXiv preprint arXiv:2402.14740, 2024. Kedi Chen, Dezhao Ruan, Yuhao Dan, Yaoting Wang, Siyu Yan, Xuecheng Wu, Yinqi Zhang, Qin Chen, Jie Zhou, Liang He, Biqing Qi, Linyang Li, Qipeng Guo, Xiaoming Shi, and Wei Zhang. A survey of inductive reasoning for large language models, 2025a. URL https://arxiv.org/ abs/2510.10182. Mingyang Chen, Tianpeng Li, Haoze Sun, Yijie Zhou, Chenzheng Zhu, Haofen Wang, Jeff Z Pan, Wen Zhang, Huajun Chen, Fan Yang, et al. Learning to reason with search for llms via reinforce- ment learning. arXiv preprint arXiv:2503.19470, 2025b. Yuqin Dai, Guoqing Wang, Yuan Wang, Kairan Dou, Kaichen Zhou, Zhanwei Zhang, Shuo Yang, Fei Tang, Jun Yin, Pengyu Zeng, et al. Evinote-rag: Enhancing rag models via answer-supportive evidence notes. arXiv preprint arXiv:2509.00877, 2025a. Yuqin Dai, Shuo Yang, Guoqing Wang, Yong Deng, Zhanwei Zhang, Jun Yin, Pengyu Zeng, Zhen- zhe Ying, Changhua Meng, Can Yi, et al. Careful queries, credible results: Teaching rag models advanced web search tools with reinforcement learning. arXiv preprint arXiv:2508.07956, 2025b. Yong Deng, Guoqing Wang, Zhenzhe Ying, Xiaofeng Wu, Jinzhen Lin, Wenwen Xiong, Yuqin Dai, Shuo Yang, Zhanwei Zhang, Qiwen Wang, Yang Qin, Yuan Wang, Quanxing Zha, Sunhao Dai, and Changhua Meng. Atom-searcher: Enhancing agentic deep research via fine-grained atomic thought reward. arXiv preprint arXiv:2508.12800, 2025. 10 Lang Feng, Zhenghai Xue, Tingcong Liu, and Bo An. Group-in-group policy optimization for llm agent training. arXiv preprint arXiv:2505.10978, 2025. Zeyu Gan, Yun Liao, and Yong Liu. Rethinking external slow-thinking: From snowball errors to probability of correct reasoning. In Forty-second International Conference on Machine Learning, 2025. Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yixin Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2(1), 2023. Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Peiyi Wang, Qihao Zhu, Runxin Xu, Ruoyu Zhang, Shirong Ma, Xiao Bi, et al. Deepseek-r1 incentivizes reasoning in llms through reinforce- ment learning. Nature, 645(8081):633–638, 2025. Carlos I Gutierrez, Anthony Aguirre, Risto Uuk, Claire C Boine, and Matija Franklin. A proposal for a definition of general purpose artificial intelligence systems. Digital Society, 2(3):36, 2023. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th Interna- tional Conference on Computational Linguistics, pp. 6609–6625, 2020. Jian Hu. Reinforce++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025. Yuxuan Huang, Yihang Chen, Haozheng Zhang, Kang Li, Meng Fang, Linyi Yang, Xiaoguang Li, Lifeng Shang, Songcen Xu, Jianye Hao, et al. Deep research agents: A systematic examination and roadmap. arXiv preprint arXiv:2506.18096, 2025. Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. Pengcheng Jiang, Jiacheng Lin, Lang Cao, Runchu Tian, SeongKu Kang, Zifeng Wang, Jimeng Sun, and Jiawei Han. Deepretrieval: Hacking real search engines and retrievers with large language models via reinforcement learning. arXiv preprint arXiv:2503.00223, 2025. Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516, 2025. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017. Wouter Kool, Herke van Hoof, and Max Welling. Buy 4 reinforce samples, get a baseline for free! 2019. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019. Long Li, Jiaran Hao, Jason Klein Liu, Zhijian Zhou, Xiaoyu Tan, Wei Chu, Zhe Wang, Shirui Pan, Chao Qu, and Yuan Qi. The choice of divergence: A neglected key to mitigating diversity collapse in reinforcement learning with verifiable reward. arXiv preprint arXiv:2509.07430, 2025a. Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, and Zhicheng Dou. Search-o1: Agentic search-enhanced large reasoning models. arXiv preprint arXiv:2501.05366, 2025b. Xuefeng Li, Haoyang Zou, and Pengfei Liu. Torl: Scaling tool-integrated rl. arXiv preprint arXiv:2503.23383, 2025c. 11 Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Hannaneh Hajishirzi, and Daniel Khashabi. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 7, 2022. Liangbo Ning, Ziran Liang, Zhuohang Jiang, Haohao Qu, Yujuan Ding, Wenqi Fan, Xiao-yong Wei, Shanru Lin, Hui Liu, Philip S Yu, et al. A survey of webagents: Towards next-generation ai agents for web automation with large foundation models. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pp. 6140–6150, 2025. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350, 2022. Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Wenyi Zhao, Yu Yang, Xinyue Yang, Jiadai Sun, Shuntian Yao, et al. Webrl: Training llm web agents via self-evolving online curricu- lum reinforcement learning. arXiv preprint arXiv:2411.02337, 2024. Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji- Rong Wen. Tool learning with large language models: A survey. Frontiers of Computer Science, 19(8):198343, 2025. Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathemati- cal reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. In Proceedings of the Twentieth European Conference on Computer Systems, EuroSys ’25, pp. 1279–1297. ACM, March 2025. doi: 10.1145/3689031.3696075. Huatong Song, Jinhao Jiang, Yingqian Min, Jie Chen, Zhipeng Chen, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. R1-searcher: Incentivizing the search capability in llms via reinforcement learning. arXiv preprint arXiv:2503.05592, 2025a. Huatong Song, Jinhao Jiang, Wenqing Tian, Zhipeng Chen, Yuhuan Wu, Jiahao Zhao, Yingqian Min, Wayne Xin Zhao, Lei Fang, and Ji-Rong Wen. R1-searcher++: Incentivizing the dynamic knowl- edge acquisition of llms via reinforcement learning. arXiv preprint arXiv:2505.17005, 2025b. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Musique: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554, 2022. Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023. Xiaohua Wang, Zhenghua Wang, Xuan Gao, Feiran Zhang, Yixin Wu, Zhibo Xu, Tianyuan Shi, Zhengyuan Wang, Shizheng Li, Qi Qian, et al. Searching for best practices in retrieval-augmented generation. arXiv preprint arXiv:2407.01219, 2024a. Ziliang Wang, Xuhui Zheng, Kang An, Cijun Ouyang, Jialu Cai, Yuhang Wang, and Yichao Wu. Stepsearch: Igniting llms search ability via step-wise proximal policy optimization. arXiv preprint arXiv:2505.15107, 2025. 12 Ziting Wang, Haitao Yuan, Wei Dong, Gao Cong, and Feifei Li. Corag: A cost-constrained re- trieval optimization system for retrieval-augmented generation. arXiv preprint arXiv:2411.00744, 2024b. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600, 2018. Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Weinan Dai, Tiantian Fan, Gaohong Liu, Lingjun Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. Guibin Zhang, Hejia Geng, Xiaohang Yu, Zhenfei Yin, Zaibin Zhang, Zelin Tan, Heng Zhou, Zhongzhi Li, Xiangyuan Xue, Yijiang Li, et al. The landscape of agentic reinforcement learning for llms: A survey. arXiv preprint arXiv:2509.02547, 2025a. Weizhi Zhang, Yangning Li, Yuanchen Bei, Junyu Luo, Guancheng Wan, Liangwei Yang, Chenxuan Xie, Yuyao Yang, Wei-Chieh Huang, Chunyu Miao, et al. From web search towards agentic deep research: Incentivizing search with reasoning agents. arXiv preprint arXiv:2506.18959, 2025b. Wenlin Zhang, Xiangyang Li, Kuicai Dong, Yichao Wang, Pengyue Jia, Xiaopeng Li, Yingyi Zhang, Derong Xu, Zhaocheng Du, Huifeng Guo, et al. Process vs. outcome reward: Which is better for agentic rag reinforcement learning. arXiv preprint arXiv:2505.14069, 2025c. Chujie Zheng, Shixuan Liu, Mingze Li, Xiong-Hui Chen, Bowen Yu, Chang Gao, Kai Dang, Yuqiong Liu, Rui Men, An Yang, et al. Group sequence policy optimization. arXiv preprint arXiv:2507.18071, 2025a. Yuxiang Zheng, Shichao Sun, Lin Qiu, Dongyu Ru, Cheng Jiayang, Xuefeng Li, Jifan Lin, Bin- jie Wang, Yun Luo, Renjie Pan, et al. Openresearcher: Unleashing ai for accelerated scientific research. arXiv preprint arXiv:2408.06941, 2024. Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. Deepresearcher: Scaling deep research via reinforcement learning in real-world environ- ments. arXiv preprint arXiv:2504.03160, 2025b. Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xinwei Long, Ermo Hua, et al. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084, 2025. 13 A THEORETICAL ANALYSIS The theoretical analysis here provides an intuitive support for the efficacy of our proposed method by addressing the limitations of sparse outcome rewards in multi-turn agents. Specifically, the the- ory establishes a crucial link: maximizing the process reward (IGPO’s objective) is equivalent to minimizing the upper bound on the undesirable accumulation of snowball errors during the reason- ing process. This minimization, in turn, systematically lowers the theoretical minimum for the final answer error rate, thus providing a fundamental guarantee that IGPO’s dense, turn-level signals lead to more confident and successful reasoning trajectories. Notations. Let Efinal be the event that the agent’s generated final answer does not match the ground truth answer. Its probability is denoted by P(Efinal), i.e., the error rate. For each turn t , denote the observed response [think], [tool call] as Rt. We also posit that there is an unobservable, abstract thinking step It that underlies the generation of Rt. Let R(t) process be the process reward, which is a dense reward signal received at each turn of the interaction. It is defined as the information gain about the ground truth answer, which is calculated as the increase in the log-probability of the correct answer from the previous state to the current state. Then, the total process reward Rtotal = PT −1 t=1 E[R(t) process] is the cumulative sum of all process rewards over a complete trajectory or episode. The expectation is taken over the thinking step and observed response. The training objective of the policy is to maximize this total reward. Definition A.1 (Snowball Error in Multi-turn Agentic RL). Consistent with Gan et al. (2025), we define the information loss at turn T as the conditional entropy Ent(It|Rt). Consider the non- trivial case where |Ent(It|Rt)| is bounded. The cumulative snowball error up to turn T is the sum of these losses: Ent<T (I|R) ≜ T −1 X t=1 Ent(It|Rt) (9) This quantity measures the aggregate uncertainty and ambiguity accumulated throughout the rea- soning trajectory before the final answer is produced. Next, we connect the cumulative snowball error to the agent’s final performance. It indicates the fundamental limitation of multi-turn agentic RL pipeline caused by snowball error. Lemma A.2 (Lower bound of error rate). The probability of a final answer error, P(Efinal), is lower- bounded by the cumulative snowball error accumulated during the reasoning process: P(Efinal) = Ω Ent<T (I|R) T −1  −Cconst. (10) where Cconst is a small positive constant. Proof Sketch. This result is strongly motivated by Theorem 3.3 from Gan et al. (2025). We treat the generation of the final answer at turn T as the final step of a multi-step reasoning process. The quality of this final step is conditioned on the information accumulated over the previous T −1 turns. The theorem from (Gan et al., 2025) states that the error probability of any step is lower-bounded by the average snowball error accumulated up to that point. Applying this principle to the final step (t = T) yields the stated result. Assumption A.3 (Monotonic Reward-Information Loss Link). The expected process reward at any turn H, E[R(t) process], is monotonically non-increasing with respect to the information loss at that turn, Ent(It|Rt). We assume there exists a bounded and monotonically non-increasing convex function f : R+ →R such that: E[R(t) process|It, Rt] ≤f (Ent(It|Rt)) . (11) Remark. As the information loss Ent(It|Rt) at turn t increases, the expected total information loss tends to decreases and asymptotically approaches a relatively small value, which is characterized by the convex nature of function f. 14 This assumption leads to the following result, demonstrating that optimizing for process rewards implicitly constrains the accumulation of snowball errors. We first formalize the intuition that a clearer reasoning step (lower information loss) is a prerequisite for a high-quality query, which in turn yields a higher expected process reward. Theorem A.4 (Process Reward as a Bound on Snowball Error). Under Assumption A.3, the expected cumulative snowball error is upper bounded by E[Ent<T (I|R)] = O(1) −Ω(Rtotal) . (12) Theorem A.4 establishes that maximizing the process reward is mathematically coupled with min- imizing an upper bound on the cumulative snowball error. The combination of Theorem A.4 and Lemma A.2 provides a complete, end-to-end theoretical justification for the efficacy of our proposed process reward mechanism. The logical chain is as follows: • Maximizing the process reward (our algorithm’s objective) forces the agent to minimize an upper bound on the cumulative snowball error (Theorem A.4). • Minimizing the cumulative snowball error, in turn, lowers the theoretical minimum for the final error rate, thereby systematically increasing the probability of task success (Lemma A.2). In conclusion, the turn-level process reward is not merely an engineering heuristic; it is a theo- retically grounded mechanism that fundamentally addresses the problem of error accumulation in multi-step reasoning. By providing a dense, immediate signal for reasoning clarity, it transforms the intractable problem of sparse-reward, long-horizon exploration into a series of manageable, short- horizon sub-problems, each aimed at maximizing immediate information gain. This explains the significant gains in training efficiency and final performance observed in our experiments. B PROOF FOR THEORETICAL ANALYSIS B.1 PROOF OF LEMMA A.2 Proof. We achieve this by applying Theorem 3.3 from Gan et al. (2025) to the final decision-making step of the agent. In particular, P(Efinal) ≥ Ent<T (I|R) T −1 −C1 log(|Afinal| −1) , (13) where |Afinal| is the cardinality of the final answer space and C1 is a small positive constant analo- gous to Entb(et) in Gan et al. (2025). Since log(|Afinal|−1) and C1 are constant, Ent<T (I|R) T −1 −C1 log(|Afinal|−1) sim- plifies to a form that is asymptotically dominated by the variable term. Therefore, the right-hand side of the inequality can be expressed in terms of the lower bound symbol Ωas Ω  Ent<T (I|R) T −1  −Cconst, which completes the proof. B.2 PROOF OF THEOREM A.4 Proof. According to the nature of f and the fact that there exist constants Cmax and β such that for all non-negative bounded x, there holds f(x) ≤Cmax −βx. Therefore, by taking the expectation over Assumption A.3 and summing across all turns from t = 1 to T −1, we have Rtotal = T −1 X t=1 E[R(t) process] ≤ T −1 X t=1 E[f (Ent(It|Rt))] ≤ T −1 X t=1 E[Cmax −β · Ent(It|Rt)] = (T −1)Cmax −β T −1 X t=1 E[Ent(It|Rt)] = (T −1)Cmax −β E[Ent<T (I|R)]. Rearranging terms yields the final result. 15 C MORE IMPLEMENTATION DETAILS All our training experiments are conducted on 8 × NVIDIA A100-80G GPUs. The detailed hyper- parameter settings are provided in Table 4. Unless otherwise specified, all experiments are based on this configuration. Table 4: Training hyperparameters. Training hyperparameters Value Training Batch Size 32 Mini-Batch Size 32 Infer Tensor Model Parallel Size 1 Sequence Parallel Size 4 Max Prompt Length 30767 Max Response Length 2000 Actor Learning Rate 1e-6 Rollout Temperature 1.0 Rollout Group Size 16 Max Turn Call Turns 10 KL-Divergence loss coefficient 0.001 D MORE DISCUSSION AND EXPERIMENTAL ANALYSIS D.1 COMPARISON WITH OTHER PROCESS-REWARD METHODS In addition to its obvious performance advantages, we also conduct a deeper analysis of IGPO’s superiority in terms of algorithmic characteristics compared to other process-reward-based agentic RL algorithms. We first introduce other existing process-reward-based agentic RL algorithms: • ReasoningRAG. The main contribution of this work is the proposal of a step-level data labeling strategy based on MCTS. Subsequently, the DPO algorithm is used to optimize the agent’s policy on the labeled step-level dataset. The main limitations of this method are: (1) the data labeling process relies on MCTS, which is inefficient, and when the number of samples is insufficient, it is difficult to accurately estimate the value of each step; (2) the off-policy optimization based on DPO is less effective than on-policy algorithms. • StepSearch. StepSearch constructs turn-level supervision signals by pre-defining golden search keywords and golden tool responses, and adopts an on-policy optimization approach. Although it shifts from off-policy to on-policy, the annotation process is resource-intensive and prone to annotator bias (whether from humans or LLMs). • GiGPO. GiGPO introduces a step-level grouping strategy based on anchor states and performs fine-grained advantage estimation within each step-level group. Although this provides a novel solution, it essentially still relies on the Monte Carlo assumption. When the number of anchor states is insufficient, it is often difficult to accurately estimate their value, which in turn leads to biased advantage estimation. The proposed IGPO effectively addresses the aforementioned limitations. Starting from the on- policy GRPO setting (where rollout data are used for a single parameter update), it employs an information-gain–based incremental reward construction strategy that requires no annotation and does not rely on Monte Carlo. Moreover, the incorporation of ground-truth awareness substantially reduces bias. Table 5 provides a detailed comparison highlighting the advantages of IGPO over other algorithms. D.2 CASE STUDY 16 Table 5: Comparison between various process reward methods. Algorithm On-Policy Explicit Labeling-Free Monte Carlo–Free Introduces No Bias ReasoningRAG No Yes No Sample-size Dependent StepSearch Yes No Yes No GiGPO Yes Yes No Sample-size Dependent IGPO Yes Yes Yes Yes [think] To answer this question, I need to find out the release dates of both films… [tool call] {'query': ['release date College Lovers film'…]} Which film whose director is younger, College Lovers or The Dixie Flyer? Ground Truth: College Lovers [tool response] The release date of College Lovers is October 5, 1930 … [think] Next, I need to find the birth years of John G. Adolfi and Charles J. Hunt to determine which director is younger… [tool call] {'query': ['John G. Adolfi birth year'…]} [tool response] John G. Adolfi was born on February 19, 1888… [think] John G. Adolfi was born on ... Charles J. Hunt was born on ... Since Charles J. Hunt was born earlier, he is younger than John G. Adolfi at the time of their respective film's release. Therefore, the director of The Dixie Flyer is younger. Info Gain: -0.80 Info Gain: 0.42 F1 Score: 0.0 Figure 6: Case study showing a scenario where the final answer is incorrect but contains a single correct retrieval turn. IGPO provides a process reward for this turn, improving token utilization. 17 [think] To answer this question, I need to find out the birthday of Augusta Marie of Holstein-Gottorp… [tool call] {'query': ['Augusta Marie of Holstein-Gottorp'…]} When is Augusta Marie Of Holstein-Gottorp's mother's birthday? Ground Truth: 22 November 1610 [tool response] Augusta Marie of Holstein-Gottorp (6 February 1649 – 25 April 1728) was a German noblewoman who became a… [think] I can see that Augusta Marie … was born on ... To find her mother's birthday, I need to look for information about her family, specifically her parents. [tool call] {'query': ['Augusta Marie of Holstein-Gottorp parents'…]} [tool response] … she was a daughter of Frederick III, Duke of Holstein-Gottorp and Duchess Marie Elisabeth of Saxony … [think] The search results consistently provide information that Duchess Marie Elisabeth of Saxony was born on 22 November 1610. Info Gain: -0.39 Info Gain: 0.45 F1 Score: 1.0 [think] Augusta Marie … mother was ... To find her birthday, I need to look for more details about Duchess Marie Elisabeth of Saxony. [tool call] {'query': ['Duchess Marie Elisabeth of Saxony birth date'…]} [tool response] Duchess Marie Elisabeth of Saxony (22 November 1610 – 24 October 1684) was a duchess consort of … Info Gain: 0.78 Figure 7: Case study illustrating a situation where the first round of retrieval failed, but the second and third rounds successfully located the correct evidence and produced the right answer. In this case, IGPO imposes a penalty on the erroneous retrieval in the first round. 18 E COMPARISON BETWEEN GRPO AND IGPO Algorithm 1 illustrates the algorithmic flow of IGPO (right) and GRPO (left). The key steps cor- responding to each algorithm are highlighted with the same color font to visually highlight the differences: yellow for reward calculation, green for advantage estimation, blue for advantage accu- mulation and assignment, and purple for policy optimization. In terms of reward calculation, IGPO constructs dense turn-level rewards through incremental information gain. For advantage estimation, both IGPO and GRPO use Group Normalization. Regarding advantage accumulation and allocation, GRPO directly assigns the outcome-based advantage to all tokens of the current output, while IGPO further computes the cumulative discounted advantage, capturing long-horizon information and per- forming turn-level reward assignment. In policy optimization, IGPO achieves more efficient and effective optimization by maximizing the turn-level cumulative discounted advantages. Algorithm 1 GRPO vs. IGPO GRPO Require: initial policy πθinit; task prompts D; hyperparameters ϵ, β, µ 1: πθ ←πθinit 2: for iteration = 1, . . . , I do 3: πref ←πθ 4: for step = 1, . . . , M do 5: Sample a batch Db from D 6: πθold ←πθ 7: For each q ∈Db, sample G outputs {yi}G i=1 ∼πθold(· | q) 8: Compute outcome reward {ri}G i=1 from the final answer in each yi 9: Compute each yi’s advantage {Ai}G i=1 via group normalization of {ri}G i=1 (Eq.in Sec 2.2) 10: Assign Ai to all tokens of yi 11: for GRPO iter = 1, . . . , µ do 12: Update πθ by maximizing the GRPO objective (Eq. 1) 13: end for 14: end for 15: end for IGPO Require: initial policy πθinit; task prompts D; max turns H; hyperparameters ϵ, β, γ, µ 1: πθ ←πθinit 2: for iteration = 1, . . . , I do 3: πref ←πθ 4: for step = 1, . . . , M do 5: Sample a batch Db from D 6: πθold ←πθ 7: For each q ∈Db, sample G outputs {yi}G i=1 ∼πθold(· | q) 8: for iteration = 1, . . . , T do 9: if t < T then 10: Compute the infomation gain- based turn-level rewards {ri,t}G i=1 for each yi (Eq. 4) 11: else 12: Compute the final-turn rewards {ri,T }G i=1 based on the answer in each yi (Eq. 2) 13: end if 14: end for 15: Compute the per turn advantages {Ai,1≤t≤T }G i=1 in yi via group normal- ization of {ri,1≤t≤T }G i=1 (Eq. 6) 16: Compute the per turn cumulative dis- counted advantages { ˆAi,1≤t≤T }G i=1 in each yi (Eq. 7), then assign them to the tokens in each turn 17: for IGPO iteration = 1, . . . , µ do 18: Update πθ by maximizing the IGPO objective (Eq. 8) 19: end for 20: end for 21: end for 19 F PROMPT TEMPLATE USED IN OUR EXPERIMENTS. Our prompt follows the style of DeepResearcher Zheng et al. (2025b), and the same template is used for training, validation, and testing. The prompt template is shown in the Figure 8, where {today} represents the current date to ensure the relevance of the model’s response. {{ tool.name }}: {{ tool.description }} indicates the available tools, while the #Rollout section con- trols the model’s output format. The #Tools section provides the model with the tool invocation method. * Today is {today} * You are an AI Assistant* The question I give you is a complex question that requires a *deep research* to answer. I will provide you with tools to help you answer the question: {%- for tool in tools.values() %} - {{ tool.name }}: {{ tool.description }} {%- endfor %} You don't have to answer the question now, but you should first think about the research plan or what to search next. Your output format should be one of the following two formats: # Rollout <think> YOUR THINKING PROCESS </think> <answer> YOUR ANSWER AFTER GETTING ENOUGH INFORMATION </answer> or <think> YOUR THINKING PROCESS </think> <tool_call> YOUR TOOL CALL WITH CORRECT FORMAT </tool_call> You should always follow the above two formats strictly. Only output the final answer (in words, numbers or phrase) inside the <answer></answer> tag, without any explanations or extra information. If this is a yes-or-no question, you should only answer yes or no. # Tools You may call one or more functions to assist with the user query. You are provided with function signatures within <tools></tools> XML tags: <tools> {\%- for tool in tools.values() \%} {{ '{' }}"type": "function", "function": {{ '{' }}"name": "{{ tool.name | replace("'", '"') }}", "description": "{{ tool.description }}", "parameters": {{ '{' }}"type": "object", "properties": {{tool.inputs | replace("'", '"')}}, "example": {{tool.example | replace("'", '"')}}, "uniqueItems": true{{ '}}}' }} {\%- endfor \%} </tools> For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags: <tool_call> {{ '{' }}"name": <function-name>, "arguments": <args-json-object>{{ '}' }} </tool_call> Figure 8: Prompt template used in our experiments. 20
INFORMATION GAIN-BASED POLICY OPTIMIZATION: A SIMPLE AND EFFECTIVE APPROACH FOR MULTITURN LLM AGENTS Guoqing Wang1*, Sunhao Dai2*, Guangze Ye3*, Zeyu Gan2, Wei Yao2, Yong Deng1, Xiaofeng Wu1, and Zhenzhe Ying1 1Ant Group 2Renmin 3Individual Author GitHub: https://github.com/GuoqingWang1/IGPO ABSTRACT Large language model (LLM)-based agents are increasingly trained with reinforcement learning (RL) to enhance their ability to interact with external environments through tool use, particularly in search-based settings that require multi-turn reasoning and knowledge acquisition. However, existing approaches typically rely on outcome-based rewards that are only provided at the final answer. This reward sparsity becomes particularly problematic in multi-turn settings, where long trajectories exacerbate two critical issues: (i) advantage collapse, where all rollouts receive identical rewards and provide no useful learning signals, and (ii) lack of fine-grained credit assignment, where dependencies between turns are obscured, especially in long-horizon tasks. In this paper, we propose Information Gainbased Policy Optimization (IGPO), a simple yet effective RL framework that provides dense and intrinsic supervision for multi-turn agent training. IGPO models each interaction turn as an incremental process of acquiring information about the ground truth, and defines turn-level rewards as the marginal increase in the policy's probability of producing the correct answer. Unlike prior process-level reward approaches that depend on external reward models or costly Monte Carlo estimation, IGPO derives intrinsic rewards directly from the model's own belief updates. These intrinsic turn-level rewards are combined with outcome-level supervision to form dense reward trajectories. Extensive experiments on both indomain and out-of-domain benchmarks demonstrate that IGPO consistently outperforms strong baselines in multi-turn scenarios, achieving higher accuracy and improved sample efficiency. 1 INTRODUCTION Large language model (LLM)-based agents are increasingly endowed with the ability to interact with external environments through tool use (Zhang et al., 2025a; Huang et al., 2025; Li et al., 2025c), a capability often regarded as a critical step toward building general-purpose autonomous intelligent systems (Gutierrez et al., 2023; Qu et al., 2025). For example, web search (Zhang et al., 2025b; Qi et al., 2024), one of the most fundamental tools, enables agents to access up-to-date largescale knowledge that substantially improves their capacity to solve complex, knowledge-intensive tasks (Ning et al., 2025). Through iterative interaction with the external environment, agents can gradually acquire missing information and refine their reasoning toward solving the target query. To equip general-purpose LLMs with such agentic capabilities, early efforts primarily relied on prompt-based workflows (Li et al., 2025b; Wang et al., 2024a; Zheng et al., 2024), which allowed tool use without additional training but often suffered from poor generalization. More recent studies have explored supervised fine-tuning (SFT) (Wang et al., 2024b) and reinforcement learning (RL) (Jin et al., 2025; Song et al., 2025a; Zheng et al., 2025b) to explicitly incentivize tool use, achieving markedly better performance. In particular, Group Relative Policy Optimization (GRPO) (Shao et al., 2024)-style methods have emerged as the dominant approach for training agentic LLMs. In this paradigm, a group of rollouts is generated for each query under the current *Equal contribution. 1 16 Oct 2025 policy, and outcome-based rewards, typically defined by the correctness of the final answer against the ground truth, are used to construct group-relative advantages that drive policy optimization. Despite their simplicity and effectiveness on relatively easy tasks, outcome rewards suffer from an inherent limitation: they are sparse (Zhang et al., 2025c), since supervision is provided only at the final answer. This sparsity becomes particularly detrimental in multi-turn agentic settings, where long trajectories exacerbate the problem in two ways. First, sparse rewards frequently lead to advantage collapse: when sampled rollouts yield the same answer (e.g., all wrong or all right), Figure 1: Proportion of zero-advantage groups during training-IGPO vs. GRPO on Qwen2.5-7B/3B-Instruct. all rollouts in the group receive identical outcome rewards, yielding zero group-relative advantages. As shown in Figure 1, a substantial portion of training iterations suffer from this issue, especially for smaller models, which struggle more with complex queries. Second, outcome-only supervision fails to provide fine-grained credit assignment. In multi-turn scenarios, later turns are tightly dependent on earlier ones: a reasoning or tool call of the current turn may be correct but rendered useless by prior mistakes, or conversely, early successes may be negated by subsequent errors. Such dependencies are easily obscured under outcome-only rewards, particularly in multi-hop tasks that require long-horizon reasoning. Several recent approaches have attempted to mitigate these issues by introducing process-level rewards. One line of work leverages external oracle knowledge or reward models to judge intermediate steps (Wang et al., 2025; Feng et al., 2025), but this strategy is costly to obtain and risks introducing additional bias. Another line relies on Monte Carlo simulations to estimate step values (Wang et al., 2023; Zuo et al., 2025; Zhang et al., 2025c), yet these methods suffer from high variance unless a large number of samples are collected. Overall, both directions face challenges in scalability and fail to provide simple and stable supervision, underscoring the need for an intrinsic and reliable process-level reward design. To address these challenges, we propose Information-Gain-based Policy Optimization (IGPO), a simple but effective reinforcement learning framework that provides stable and intrinsic supervision for multi-turn agent training. The key intuition is to model each agent-environment interaction turn as an incremental process of acquiring information about the ground truth. Specifically, at every turn, IGPO computes the policy's probability of producing the correct answer and defines the turn-level reward as the marginal increase in this probability compared to the previous state. This information gain reward offers ground-truth-aware feedback at every turn, in contrast to outcome rewards that only supervise the final answer. While turn-level rewards ensure dense and stable supervision, the outcome reward remains essential to anchor training to the final task objective. To combine these strengths, IGPO also integrates the outcome reward with the sequence of turn-level rewards, forming a dense reward trajectory for each rollout. To further stabilize training, we normalize rewards within groups and propagate them with discounted accumulation, enabling turn-level advantage estimation that captures long-horizon dependencies. Finally, IGPO optimizes the policy with a GRPO-style surrogate objective, replacing rollout-level advantages with our turn-level ones. To evaluate the effectiveness of IGPO, we conduct extensive experiments on both in-domain and out-of-domain benchmarks with search-based agents. Results show that IGPO consistently outperforms strong baselines, delivering substantial gains in both answer accuracy and sample efficiency. Our main contributions can be summarized as follows: (1) We analyze the phenomenon of advantage collapse in outcome-reward-based optimization, and reveal the inefficiency of existing processlevel rewards due to reliance on external knowledge or high-variance estimation. (2) We propose IGPO, a simple yet effective policy optimization framework that leverages turn-level information gain to provide dense, ground-truth-aware supervision while preserving outcome-level alignment. (3) Comprehensive experiments demonstrate that IGPO outperforms strong baselines across multiple benchmarks and significantly improves sample efficiency, especially for smaller models. 2 2 PRELIMINARIES In this section, we present the standard multi-turn agentic RL pipeline, illustrated with a search agent as a representative example. 2.1 TASK FORMULATION Let D = {(q, a)} denote a dataset of question-answer pairs, and let E represent an external tool (e.g., a web search engine). The goal of the agent is to solve question q by generating a rollout o = (τ1, τ2, . . . , τT ) through iterative interaction with the environment via tool E, where T is the total number of interaction turns. The last turn τT is the answer turn that outputs a rationale-thenfinal answer sequence, while all previous turns involve reasoning and tool interaction. Specifically, for t tag following the DeepSeek-R1 setting (Guo et al., 2025). The [tool call] step invokes the external tool E by producing a structured request, typically JSON-formatted and wrapped in a dedicated tag (e.g., search query for web search). The [tool response] step then returns structured outputs from E, such as webpage snippets with titles, URLs, and text when using a web search engine tool, enclosed in retrieved documents tags. In the final turn, after a [think] step, the agent generates its answer within the tag, and this content is extracted as the trajectory's final prediction ˆa, which is expected to correctly address the input query q. This agent-environment interaction is illustrated at the bottom of Figure 2. 2.2 AGENTIC REINFORCEMENT LEARNING PIPELINE Policy Optimization. Agentic RL typically adopts policy-gradient methods to optimize the agent policy πθ. A common approach is Group Relative Policy Optimization (GRPO) (Shao et al., 2024), which removes the need for an explicit critic by normalizing returns within each sampled group of rollouts. Formally, given an actor model πθ, a group of G rollouts {oi}G i=1 is sampled from old policy πθold(· | q) for each input (q, a) ∼D. The policy is then optimized by maximizing the clipped surrogate objective with KL regularization: JGRPO(θ) = E(q,a)∼D, {oi}∼πθold(·|q) " 1 G G X i=1 1 |oi| |oi| X t=1 min πθ(oi,t | q, oi, Now there's enough information to answer Ground Truth a . This turn-level reward has two desirable properties: (1) Ground-truth awareness: the reward increases when the action raises the policy's confidence in the correct answer, and decreases otherwise; (2) Dense supervision: the reward is defined for every sample, even when no rollout yields a correct answer, thereby alleviating reward sparsity and avoiding advantage collapse. Integrating Outcome and Turn-level Rewards. For each rollout oi = (τi,1, . . . , τi,T ) where the last turn τi,T is the answer turn producing ˆai, we can construct a length-T reward vector ri = (ri,1, ri,2, . . . , ri,T ). For t YOUR THINKING PROCESS YOUR ANSWER AFTER GETTING ENOUGH INFORMATION or YOUR THINKING PROCESS YOUR TOOL CALL WITH CORRECT FORMAT You should always follow the above two formats strictly. Only output the final answer (in words, numbers or phrase) inside the tag, without any explanations or extra information. If this is a yes-or-no question, you should only answer yes or no. # Tools You may call one or more functions to assist with the user query. You are provided with function signatures within XML tags: {\%- for tool in tools.values() \%} {{ '{' }}"type": "function", "function": {{ '{' }}"name": "{{ tool.name | replace("'", '"') }}", "description": "{{ tool.description }}", "parameters": {{ '{' }}"type": "object", "properties": {{tool.inputs | replace("'", '"')}}, "example": {{tool.example | replace("'", '"')}}, "uniqueItems": true{{ '}}}' }} {\%- endfor \%} For each function call, return a json object with function name and arguments within XML tags: {{ '{' }}"name": , "arguments": {{ '}' }} Figure 8: Prompt template used in our experiments. 20
2510.14970
Biology-informed neural networks learn nonlinear representations from omics data to improve genomic prediction and interpretability Katiana Kontolati∗ Bayer Crop Science katiana.kontolati@bayer.com Rini Jasmine Gladstone Bayer Crop Science rinijasmine.gladstone@bayer.com Ian W. Davis Bayer Crop Science ian.davis@bayer.com Ethan Pickering∗ Bayer Crop Science ethan.pickering@bayer.com Abstract We extend biologically-informed neural networks (BINNs) for genomic prediction (GP) and selection (GS) in crops by integrating thousands of single-nucleotide poly- morphisms (SNPs) with multi-omics measurements and prior biological knowledge. Traditional genotype-to-phenotype (G2P) models depend heavily on direct map- pings that achieve only modest accuracy, forcing breeders to conduct large, costly field trials to maintain or marginally improve genetic gain. Models that incorporate intermediate molecular phenotypes such as gene expression can achieve higher predictive fit, but they remain impractical for GS since such data are unavailable at deployment or design time. BINNs overcome this limitation by encoding pathway- level inductive biases and leveraging multi-omics data only during training, while using genotype data alone during inference. By embedding omics-derived priors directly into the network architecture, BINN outperforms conventional models in low-data (n < p) regimes and enables sensitivity analyses that expose biologi- cally meaningful traits. Applied to maize gene-expression and multi-environment field-trial data, BINN improves rank-correlation accuracy by up to 56% within and across subpopulations under sparse-data conditions and nonlinearly identifies genes that GWAS/TWAS fail to uncover. With complete domain knowledge for a synthetic metabolomics benchmark, BINN reduces prediction error by 75% relative to conventional neural nets and correctly identifies the most important nonlinear pathway. Importantly, both cases show highly sensitive BINN latent variables cor- relate with the experimental quantities they represent, despite not being trained on them. This suggests BINNs learns biologically-relevant representations, nonlinear or linear, from genotype to phenotype. Together, BINNs establish a framework that leverages intermediate domain information to improve genomic prediction accu- racy and reveal nonlinear biological relationships that can guide genomic selection, candidate gene selection, pathway enrichment, and gene-editing prioritization. 1 Introduction Feeding 10 billion people by 2050 will require faster, more reliable crop improvement. In both conventional breeding—crossing and selecting offspring—and genome editing—making targeted edits and selecting edited lines—progress hinges on accurate genotype-to-phenotype (G2P) prediction ∗Corresponding authors Preprint. Under review. arXiv:2510.14970v1 [cs.LG] 16 Oct 2025 G2P G2B2P A C A T G P A C A T G P G2P G2B2P R B: biological-domain knowledge Can utilizing domain knowledge improve predictive performance? RNA-seq Methylomics Metabolomics KEGG-pathway Proteomics ? Bio-Informed NN G2B Operation B2P Operation G B P Linear L Linear Linear Nonlinear A AB83icbVDLSsNAFL2pr1pfVZduBovgqiRS1GXRjSupYB/QhDKZTtqhk0mYh1BCf8ONC0Xc+jPu/BsnbRbaemDgcM693DMnTDlT2nW/nd La+sbmVnm7srO7t39QPTzqMRIQtsk4YnshVhRzgRta6Y57aWS4jktBtObnO/+0SlYol41NOUBjEeCRYxgrWVfD/Gekwz+5nlUG15t bdOdAq8QpSgwKtQfXLHybExFRowrFSfc9NdZBhqRnhdFbxjaIpJhM8on1LBY6pCrJ5hk6s8oQRYm0T2g0V39vZDhWahqHdjLPqJa9XPz P6xsdXQcZE6nRVJDFochwpBOUF4CGTFKi+dQSTCSzWREZY4mJtjXlJXjLX14lnYu6d1lvPDRqzZuijKcwCmcgwdX0IQ7aEbCKTwDK/w 5hjnxXl3PhajJafYOY/cD5/AL+ZkYE=</latexit>N G B P P B G GWAS TWAS BINN If BINN learns both G2B and B2P operations, B latent variables will correlate 1:1 with B observed variables. P B G Nonlinear Nonlinear Linear Nonlinear I: LL IV: NN II: NL III: LN GWAS TWAS BINN GWAS TWAS BINN GWAS TWAS BINN (a) (b) N N N L L L Figure 1: Biology-informed neural network framework embed domain knowledge for enhanced genomic prediction and learning nonlinear biological relationships. a) Conventional G2P models use genotype alone, leaving rich functional knowledge underutilized. BINNs embed curated biology (from RNA-seq (expression), methylomics (DNA methylation), metabolomics, KEGG pathway annotations, and proteomics) directly into the architecture in the form of pathway structure, regulatory priors, and sparsity constraints, to boost predictive accuracy while preserving practical utility. b) Four representative cases where GWAS, TWAS, and BINN are valid for analyzing genomic, transcriptomic, and phenomic datasets. Only BINN permits association under general nonlinearity (with the assumption the trained BINN model is accurate). to enable genomic selection (GS) [27, 13]. GS improves crop improvement efficiency by permitting early selection at the seed or edit stage and reducing dependence on slow, costly, multi-location field trials [20, 42]. With a well-calibrated model trained on prior genotype–phenotype data, a single low- cost genotyping assay can substantially reduce phenotyping requirements in both space and time [13]. A wide range of models has been applied to GS—including linear mixed models; machine-learning methods such as random forests and kernel approaches; and deep neural networks—with performance varying by trait architecture, data volume, and environment [10, 50]. Current approaches fall short of capturing the true biological processes from genotype to phenotype, and the field has a long way to go toward accurate, mechanistically grounded prediction. Despite the surge of countless AI models and architectures, linear mixed modeling approaches remain state-of-the-art in G2P studies, with no other class of model consistently outperforming them across crops, populations, traits, years, and locations [1, 4, 47]. Although there are several adaptations to linear models, most are quite similar. For instance, ridge regression [26] and its adaptation using best linear unbiased predictor, rrBLUP [15], trained using genotypic (marker) data are identical, except rrBLUP chooses the regularization parameter, α, as a ratio of variances instead of by cross-validation [33]. Likewise a genomic BLUP or GBLUP [12] is mathematically equivalent to rrBLUP, but with compute efficiency gains when the number of lines is less than the number of markers [21], while the "Bayesian alphabet" models (BayesA, BayesB, BayesC, etc) [18] differ in 2 allowing different markers to have different priors [28]. And although several promising nonlinear artificial neural network (ANN) architectures have been proposed in the literature [17, 24, 45], deep learning methods have yet to show consistent improvements over traditional models and have not been adopted at scale in breeding pipelines [29]. This is partly due to over-parameterization and the lack of tailored deep learning architectures. Further gains are unlikely without injecting biological structure such as pathways, interactions, and regulatory programs into the model, a role for which flexible AI architectures are well suited. A natural route to inject such biological structure is to integrate intermediate multi-omics sig- nals—transcripts, proteins, metabolites—as scaffolds between genotype and phenotype, enriching G2P models with mechanistic constraints [2, 5, 11, 20, 23, 27, 36]. Approaches such as multi-staged analysis [37] and transcriptome-wide association studies (TWAS) [16, 43] are proposed to integrate omics data, such as gene expression levels, between genotype and phenotype for association studies. Traditional linear mixed models, while capable of including additional covariates, are fundamentally limited in how they integrate multi-omics information. In practice, they treat each data modality as a flat feature set and cannot impose the hierarchical, pathway-level constraints that capture the flow from genotype through intermediate traits to phenotype. This lack of architectural flexibility prevents them from leveraging richer, layered representations, such as gene-to-metabolite cascades or expression- driven subnetworks, that can substantially boost predictive power. In contrast, ANNs provide the flexibility to integrate multiple high-dimensional data streams such as genomics, transcriptomics, proteomics, lipidomics, etc., for tasks ranging from regression and classification to unsupervised representation learning, a capability that traditional linear models lack [25, 32, 44, 46, 49]. In practice, downstream omics are not available for novel genotypes at design or deployment, so they can only inform training—via priors, pretraining, or architectural constraints—rather than serve as inference-time features. Biologically-informed neural networks (BINNs) have begun to provide model flexibility in embedding omics data and domain-specific knowledge such as gene, protein, or pathway relationships, directly into their architecture and training objectives to improve predictive accuracy, interpretability, and extrapolative potential. Analogous to physics-informed neural networks (PINNs) that incorporate governing equations into their loss functions [35], BINNs impose biologically plausible sparsity patterns on connections, reducing parameter count and data requirements while enabling efficient integration of heterogeneous multi-omics inputs. This inductive bias not only curbs overfitting but also aligns latent representations with known biology. BINNs have been applied across population genomics and biomedicine, including cancer subtype prediction, drug response modeling, and survival analysis in frameworks like GenNet [40], proteomics biomarker discovery [19], hierarchical pathway networks for prostate cancer [14], and multi-omics inferral of smoking status, age, and LDL from the BIOS cohort [41], demonstrating consistent gains in performance and stability. Despite their promise, existing BINNs enforce a rigid, fully interpretable mapping of neurons to biological entities, limiting architectural flexibility and nonlinear modeling, and in most cases still depend on omics measurements at deployment, undermining their practicality in plants and crop design. Furthermore, although existing BINN approaches employ ANN-based models, their method for ranking biological entities such as genes typically relies on assessing marginal associations through learned single-edge weights, an approach conceptually similar to traditional genome-wide association studies (GWAS) or TWAS. Consequently, these models tend to highlight already known significant genes while overlooking those involved in nonlinear or epistatic interactions. Here we design a BINN architecture to align with genomic selection and design of crops, explicitly integrating omics data as intermediate variables, removing the need for omics data at test/design while still allowing for biological interpretability and targeted pathway prioritization (Figure 1). Our key contributions are summarized below: 1. A BINN architecture balancing tunable sparsity, layered nonlinearity and practicality. 2. Superior sparse-data performance with up to 56% test-set rank correlation increase and 75% reduction in predictive error over G2P baselines, across and within populations. 3. Demonstrated ability to transfer across related populations, maintaining strong predictive accuracy when genetic background is partially shared. 4. A novel BINN-derived sensitivity analysis framework that associates biologically meaningful intermediate traits to phenotype that are beyond the reach of conventional GWAS and TWAS approaches. 3 5. A scalable, practical framework for plant and crop genomic selection and design that marries multi-omics-driven training with genotype-only deployment. 2 Results We develop and evaluate BINN models informed by two distinct types of intermediate omics—transcriptomics and metabolomics—to predict phenotypes directly from genotype. Our first case study leverages transcriptomic profiles measured in a large-scale maize field experiment[39], where lines from multiple heterotic groups were grown under real agronomic conditions. This setting provides a realistic testbed for BINNs: gene expression captures environmentally modulated regula- tory activity, offering a richer signal than raw marker data. In this context, we observe promising gains over conventional G2P models, but also important caveats due to the noise, heterogeneity, and incomplete pathway knowledge that naturally accompany field-derived omics measurements. In contrast, our second case study is a synthetic benchmark based on metabolomic traits simulated through an ordinary differential equation (ODE) model of shoot branching[9]. Unlike the maize field data, this framework provides “perfect” domain knowledge of the true causal pathways, enabling us to rigorously stress-test BINNs and evaluate their ability to recover mechanistic structure when the ground truth is fully known. Together, these complementary case studies - one grounded in realistic breeding data, the other in controlled synthetic biology - allow us to assess both the practical utility and the methodological limits of BINNs. Table 1 summarizes implementation details; full configurations and preprocessing steps can be found in the Supplementary Section. Table 1: Implementation components for the maize TWAS dataset and synthetic shoot-branching case studies. Attribute Maize TWAS Dataset Synthetic Shoot-Branching Input SNPs/Genes ∼20,000 SNPs per trait 1,600 genes Intermediate Omics Data Transcriptomics Metabolomics # Intermediate Traits ∼1,000 genes per trait 4 (auxin, sucrose, cytokinin, strigolactone) # Output Phenotypes 4 (anthesis NE, MI; silking NE, MI) 1 (time to bud outgrowth) Trait Selection Method ElasticNet regression From known ODE model SNP Mask Construction eQTL mapping of selected genes ODE-derived gene-metabolite mappings Loss Function MSE Soft-constrained MSE Reference Torres-Rodriguez et al. [39] Bertheloot et al. [9] & Powell et al. [34] 2.1 Transcriptomics-derived BINN Gene-expression-informed BINNs improve spearman rank correlations over G2P models across and within populations in maize field trials. BINNs embed expression-derived sparsity into a genotype-to-biological-knowledge-to-phenotype “G2B2P”, where biological-knowledge is gene expression data, network which delivers superior predictive performance compared to genotype-only GBLUP models, under sparse-data conditions (Figure 2b). We use the maize TWAS dataset from Torres-Rodríguez et al. [39], which provides matched genotypes, RNA-seq expression profiles, and flowering-time phenotypes: days-to-anthesis and days-to-silking in Nebraska and Michigan (see Figure 2b). In five independent 20%/80% train/test splits with five-fold cross-validation on each training set, BINN achieves higher Spearman correlations than the GBLUP baseline, both when pooling all 693 lines and within each of the seven heterotic subpopulations. BINNs outperformed GBLUP in the majority of pairwise comparisons, and the advantage was statistically significant on a paired t-test (p = 2.23 × 10−6). Here, we report representative results for silking NE as shown in Figure 2c; complete results for all phenotypes are provided in the Supplementary Figure 5. Notably, the most pronounced and consistent improvements occurred within individual subpopulations, the most critical and challenging scenario for GS, demonstrating BINN’s ability to capture subtle, group-specific genetic variation. BINNs leverage domain knowledge to pinpoint key genes and derive SNP–gene connections that both shrink network size while preserving nonlinear modeling power. Models that embed domain knowledge have frequently demonstrated similar or superior predictive power compared to purely genotype-based approaches [5]. In the maize TWAS application, we observe that expression-based domain-to-phenotype (B2P) models outperform conventional G2P baselines across most evaluation settings and BINNs can translate this predictive strength into the G2P setting. Transcriptomic profiles 4 Genotype data (G) Expression data (E) G2P (GBLUP) G2B2P (BINN) B2P (Ridge) BINN feature selection P C T G A C T T C A T G A C P Accurate? Genomic prediction? FCN FCN FCN FCN P (a) (c) (b) (d) (e) (f) (g) Figure 2: BINNs improve prediction accuracy and interpretability through utiliziation of gene expression in genotype to phenotype modeling. (a) Schematic of the transcriptomics-derived BINN architecture. The SNP marker and gene expression data are utilized for feature selection, which serves to sparsify the connections between the input and intermediate layers of the BINN architecture. The marker data for each gene is passed through pathway subnetworks at the intermediate layer and the outputs from this layer is passed through a non-linear integrator network to predict the phenotype values. The G2P and B2P models are both linear models. (b) Predicted vs. observed phenotype days to anthesis in MI across four subpopulations (SS, NSS, IDT, and Others). Points show G2P (GBLUP on genotype; pink triangles) and G2B2P (BINN with expression-informed sparsity; blue circles). An ordinary least-squares (OLS) fit is overlaid and the Spearman correlation (ρ) is reported in the legend. (c) Test Spearman correlation distributions for Silking NE, comparing three predictive models (G2P, G2B2P and B2P: Ridge regression on expression, green) across five independent 20/80% train/test splits, each with five-fold cross-validation to capture both split-level and CV-level variability. Each subplot shows results for all lines pooled (“all”) and for each of the seven distinct subpopulations (SS, NSS, IDT, Popcorn, Sweet corn, Tropical and Others). The pink dashed line marks the median of the G2P baseline; the dashed line for G2B2P is colored green when its median exceeds that baseline or red when it falls below. Legends report the median percent change of G2B2P relative to G2P and titles show the number of test samples. (d) Test Spearman correlation distributions for leave-one-population-out experiments for Silking NE, comparing the same three predictive models. Each subplot shows results for the predictions on six distinct subpopulations using the models that are trained by leaving out the corresponding subpopulation from the training data. (e) Predicted latent variables vs. observed gene expression for four representative high-correlation genes per phenotype. (f) Aggregated absolute phenotypic change for 30 representative genes - 15 with high absolute Pearson correlation and 15 close to zero. (g) Aggregated absolute phenotypic change per perturbed gene, with a threshold capturing the BINN-derived 100 most significant genes which includes zap1, zmm15, zcn14 and zcn8. 5 Table 2: Test set Spearman rank correlation of BINNs across all populations and cross-validation splits for various Elastic Net L1 ratios used to select genes for the G2B2P mask for all four phenotypes. The best setting for each trait is bolded. L1 ratio # genes Anthesis NE Anthesis MI Silking NE Silking MI 0.05 1541–2251 0.650 ± 0.034 0.652 ± 0.046 0.630 ± 0.041 0.649 ± 0.036 0.10 848–1271 0.655 ± 0.037 0.663 ± 0.034 0.632 ± 0.040 0.660 ± 0.044 0.20 467–696 0.605 ± 0.151 0.656 ± 0.038 0.623 ± 0.043 0.662 ± 0.038 0.50 132-301 0.595 ± 0.143 0.633 ± 0.053 0.593 ± 0.127 0.640 ± 0.040 1.00 50-131 0.597 ± 0.047 0.596 ± 0.047 0.547 ± 0.114 0.593 ± 0.059 inform the degree of architectural sparsity without ever feeding raw expression values into the prediction network at inference. Specifically, we first fit an Elastic Net model to each flowering-time trait—tuning its L1/L2 penalty via cross-validation—and selected the genes with nonzero coefficients, noting that the feature selection outcome was particularly sensitive to the specific cross-validation split. We then carried out eQTL mapping on these candidates to nominate their top SNP markers, which defined the sparse SNP→gene mask that shapes our G2B2P BINN architecture (Figure 2a). As shown in Table 2, a sweep of the Elastic Net’s L1 ratio revealed that retaining approximately 1,000 genes per trait and split offers the best trade-off between model compactness and accuracy, recovering the major regulators identified in the original study. An important limitation to this approach is that if B2P underperforms G2P, BINNs are unlikely to provide an advantage, since intermediate feature selection would not add predictive power. BINNs exploit shared expression–trait mechanisms beyond marker structure to improve gen- eralization but rigid priors can limit flexibility in genetically distant populations. It is well established that marker data are strongly tied to population structure whereas expression data are often tissue and environment-specific and less tightly tied to ancestry [38]. Removing systematically distinct populations from the BINN training set revealed a clear pattern in model transferability. Expression data overall offers improved cross-population generalization compared to marker-based models as expression reflects functional output of many regulatory layers that can “normalize” some of the divergence in raw markers. However, BINN’s biologically constrained architecture can only capitalize on this advantage when the causal paths through biology are shared between train and test lines. Evidently, when one of the mainstream heterotic groups such as SS, NSS, or IDT was held out, BINN maintained robust generalization performance (see Figure 2d). This is particularly important for large-scale breeding programs, which rely heavily on these temperate pools for developing elite germplasm and driving genetic gain. In contrast, when genetically more distant populations were excluded from training, such as sweet corn or tropical, which represent cases where regulatory programs diverge substantially from temperate pools [48], BINN appeared to overfit to pathway patterns dominated by the larger heterotic groups thereby limiting its ability to adapt. Thus, while BINNs excel when shared biological mechanisms exist, overly rigid priors limit its flexibility in zero-shot learning scenarios. However, this issue diminishes as domain knowledge improves: the more precise and mechanistically grounded it is, the larger the performance gains across tasks (see section 2.2). Sensitivity analysis reveals nonlinear genetic contributors that TWAS/GWAS may fail to cap- ture. A key advantage of BINNs lies in their interpretability, i.e., the ability of trained models to elucidate how specific biological entities influence intermediate traits and ultimately shape the phe- notype. Traditional BINNs with single-edge weights offer direct interpretability, as each parameter corresponds to a distinct marker–gene or gene–trait connection. In our implementation, we enhance model expressivity by replacing these single-edge weights with fully connected layers, sacrificing one-to-one parameter interpretability but enabling richer representations of nonlinear dependencies. To assess whether BINNs can identify biologically meaningful genes, we developed and conducted a sensitivity analysis (presented in Algorithm 1) across all pathway subnetworks. Specifically, we perturbed each pathway’s latent variable in both directions and quantified the resulting change in the predicted phenotype, ranking genes by their aggregated effect. Figure 2e summarizes this anal- ysis, showing that BINN recovers several TWAS-significant genes reported in Torres-Rodríguez et al. [39], such as the previously characterized zcn8 as well as zap1, zmm15 and zcn14, but also highlights additional candidates that may participate in epistatic interactions shaping phenotypic response. These genes may be overlooked by traditional approaches like TWAS, which rely on 6 marginal gene-trait associations, whereas BINN’s end-to-end training captures predictive, nonlinear, and combinatorial effects beyond the reach of standard linear models. Interestingly, we found that the genes most sensitive to phenotype perturbations also exhibit strong latent-expression correlations (Figure 2e,f). This suggests that BINNs preferentially learn biologically grounded representations, linear or nonlinear, aligned with known mechanisms. 2.2 Metabolomics-derived BINN We now demonstrate that BINNs offer greater potential in both accuracy and interpretability for genomic design when established domain knowledge is incorporated. This contrasts with the previous example, which relied solely on intermediate experimental data. To illustrate, we use a synthetic metabolomics problem in which hormone–sugar interactions are formalized through an ODE model, creating a clean G2P nonlinear test bed for evaluating BINNs. A motivating example comes from axillary bud outgrowth—the switch-like process underlying shoot branching—arising from interactions among auxin (A), cytokinin (CK), strigolactone (SL), and sucrose (S) in a minimal network [9], later extended with genotype-to-metabolite variation at multiple causal loci [34]. The domain knowledge we embed is simple but definitive: gene–metabolite, metabolite–metabolite, and metabolite–phenotype interactions. Importantly, the BINN is not provided with the full ODE details (the magnitudes of effects, equation structures, or nonlinearities) but only the associations between layers. To avoid an artificially “easy” setting and to test robustness, we perturb the simulated data by adding 5% noise to intermediate traits and 10% noise to phenotypes. This controlled setting enables us to rigorously test the BINN methodology: evaluating performance under both plentiful and sparse data conditions, examining its ability to recover known causal dynamics, and exploring extensions such as custom loss functions applied to partially observed intermediates. BINNs improve predictions under sparse-data regimes. In the genotype-to-metabolites-to- phenotype setup, BINN embeds ODE-derived metabolite pathways and constrains gene inputs through known gene–metabolite links. (Figure 3a). Evaluated across nine geometrically-spaced training sizes from 500 to 20,000 lines, with five random splits, the standard MSE-trained BINN con- sistently outperformed Ridge regression and an unconstrained fully-connected network (FCN) in the sparse-data regime, i.e., when the number of training lines was smaller than the input dimensionality (Figure 3b,c). As expected, when sample size increases, BINN performance approaches that of the FCN, indicating that the pathway-guided sparsity yields a better bias–variance trade-off when data are limited, while also preserving nonlinear expressivity as data grow. This demonstrates that BINNs can faithfully capture the intrinsic nonlinearity encoded in the underlying ODE dynamics across the entire data range, an aspect fundamentally beyond the reach of linear models like RR and GBLUP in the data-plentiful limit (n ≫p) and common neural networks in the data-sparse limit (n ≪p). Biology-informed loss functions, applied to intermediate variables, improves prediction ac- curacy and recovery of pathway dynamics with sparse intermediate labels. In many systems, intermediate traits can be experimentally measured, and this information can be used to better anchor latent variables to pathway dynamics. To this end, we introduce a soft-constraint loss function (i.e. Pearson loss) that explicitly encourages latent outputs to correlate with ground-truth intermediate trait values. In realistic scenarios, budget or experimental constraints often limit the amount of intermedi- ate data that can be collected. To reflect this, we evaluate the model under three conditions: 100%, 50%, and 10% of lines with known labels. This design preserves genotype-only inference while allowing for complete or partial pathway supervision during training. The soft-constraint approach (BINN soft), achieves the lowest test MSE in the small-data regime, and performs comparably to standard BINNs with larger datasets (Figure 3b). Notably, the performance of is largely insensitive to the proportion of intermediate data available, indicating that even sparse biological supervision enables the network to recover the underlying hormone–sugar dynamics. In other words, even a small amount of alignment is sufficient to boost downstream phenotypic predictions. BINN architectures provide a latent variable mechanism for interpretability, uncovering the critical importance of sucrose’s impact on outgrowth. When we analyze the biologically-informed latent variables via scatter-plot diagnostics on 20,000 held-out test lines, distinct pathway-level patterns emerge. We ran this experiment under both the standard MSE objective (BINN MSE) and a custom soft-constraint MSE (BINN soft). As expected, the soft-constraint setup (see Figure 3d (second row)) yields stronger correlations between latent variables and ground-truth metabolites, since the model was partially supervised to do so. However, an interesting observation comes from Figure 7 S FCN FCN FCN FCN CK A P FCN (a) genes Metabolomics (c) (b) SL (d) g1 g2 g3 g4 g5 g6 g7 g8 𝑛< 𝑝 𝑛> 𝑝 (e) Figure 3: BINNs significantly outperform baselines in the sparse data limit n < p. (a) Schematic of the BINN architecture for the shoot-branching network. Gene inputs are routed through four biologically-annotated pathway subnetworks corresponding to auxin (A), sucrose (S), cytokinins (CK), and strigolactones (SL), then combined in a final integrator to predict bud-outgrowth time. (b) Test-set performance (MSE) for six models: ridge regression (RR), a generic fully-connected network (FCN), BINN trained with standard MSE (BINN- MSE), and BINN trained with the biologically-informed soft-constraint loss (BINN-soft) with a varying fraction of known intermediate trait measurements (100%, 50% and 10%), evaluated across nine training-set sizes logarithmically spaced between 500 and 20,000 samples. Boxplots display the distribution across five random initializations. The vertical black dashed line at n = 1, 600 denotes the transition from sparse (n < p) to plentiful (n > p) data regimes. (c) Predicted vs observed phenotype across four training sizes (n = 500, 1,994, 7,953, 20,000) for three models: RR (purple circles), FCN (black triangles), and BINN (red squares). Each panel shows the identity line (y = x), and in-panel legends report Pearson correlation (r) for each model. (d) Scatter plots of predicted versus true latent values for each intermediate trait (A, S, CK, SL) under the BINN MSE (blue) and the BINN soft model (red). The fitted regression line and Pearson correlation coefficient, r, are overlaid. (e) Aggregated absolute phenotypic change per perturbed intermediate trait. 3d (first row): with the standard BINN MSE, most hormone pathways show little correlation with the ground truth, yet sucrose displays a remarkably strong alignment. As shown in Figure 3e, a post-hoc sensitivity analysis on the four intermediate traits demonstrated that the phenotype is more sensitivity to perturbations in sucrose, highlighting its importance. Indeed Bertheloot et al. [9], designed their experiments to show the critical importance of sucrose in this biological example. These results illustrate how BINNs can learn nonlinear relationships and surface biologically meaningful variables when appropriate inductive bias is added, achieving interpretability through both explicit alignment losses and emergent signals in the unconstrained setting. 3 Discussion Biologically informed neural networks provide the flexibility to incorporate ever-growing omics datasets for genomic selection, embedding intricate biological mechanisms and delivering improved accuracy and generalization. By using intermediate molecular information to shape network sparsity and connectivity, BINNs convert domain knowledge into inductive bias that both stabilizes learning 8 in sparse-data settings and yields pathway-level latent variables for interpretation. In this work we benchmark BINNs on two concrete tasks: an ODE-grounded, synthetic shoot-branching problem and a real maize TWAS dataset, to test whether they deliver accurate yet deployable genomic predictions. Across both applications, BINN outperforms baseline Ridge/FCN/GBLUP models in the sparse-data regime, remains stable under noise, and requires only genotypes at inference. Importantly, the trained model can be leveraged through post hoc sensitivity analysis to identify biologically relevant entities driving the phenotype, offering a nonlinear alternative to traditional GWAS and TWAS approaches. Our BINN extension is built first and foremost for genomic-selection practicality. The primary focus of existing BINN implementations in the literature has not considered GS or plant breeding utility and while although multiple studies have demonstrated the strong predictive power of omics data, it is not always clear how to translate these insights into models that rely exclusively on genomic inputs for GS. Our goal is to reframe the use of BINNs for GS that would make it relevant for many practical use cases in plant breeding. Full interpretability of existing models facilitates causal insight by directly linking predicted phenotypes to specific biological entities, but this rigid structure can constrain expressivity and thus limit predictive accuracy. A related challenge in plant breeding is scaling such models: breeders need decision-support tools that deliver accurate predictions without incurring the time and cost of routine omics measurements. In this work, we present a new biology-aware design and demonstrate its ability to improve predictive performance and practicality for GS, offering a hopeful path toward more accurate and scalable crop improvement. This is achieved by the following key features: BINNs do not need omics data at test time making them practical for the design phase where they are most needed and impactful. As discussed, omics data carry high predictive power and, when leveraged appropriately, can enhance models that rely solely on genotype data. However, a critical constraint in this setting is that any omics data downstream of the genotype will not be available at deployment time, therefore such information must be leveraged only during training to shape model structure and guide learning, without being directly used at inference. Our BINN implementation folds omics data into inductive biases during training but never at inference allowing us to maintain full compatibility with standard GS workflows. Consistent with our out-of-distribution experiments, when the target population shares sufficient genetic background with the training cohort, these learned inductive biases transfer effectively, enabling reliable inference on new genotypes whereas gains naturally decrease for more divergent populations. Tunable sparsity of BINNs reduces overfitting risks, sensitivity to noise, and improved predictive performance. We derive each omics-layer mask by applying feature-selection and association analyses to nominate candidate causal features across the dataset and match model architecture with trait complexity. Those candidates determine the sparse connectivity of the BINN model, allowing it to flexibly span oligogenic traits, where a few high-impact modules drive most of the signal, and highly polygenic traits, where predictive power emerges from many small-effect contributors, simply by tuning the model sparsity. The tuned BINN models developed for two diverse applications delivered superior performance under sparse data, achieving up to a 56% gain in test-set rank correlation and a 75% reduction in predictive error relative to G2P baselines, within and across populations. Layered nonlinearity increases expressivity and allows BINNs to capture epistasis and nonlinear interactions. BINNs demonstrated superior performance when learning the nonlinear metabolics problem, especially under sparse data. A key distinction of our approach lies in the balance between expressivity and interpretability. Unlike existing BINNs that assign each hidden neuron to a specific biological entity such as a gene, pathway, or protein, enabling direct read-off of effect weights, we employ flexible fully connected modules at each omics layer, enabling the network to learn rich, non-linear interactions among features while still honoring sparsity constraints derived from annotation even if individual hidden units no longer carry explicit biological labels. Each module’s small FCN can capture intra-pathway epistasis among genes (or other genetic regions, e.g. SNPs) and other local non-linear effects that single-weight connections miss. This is valuable when potential locus-locus interactions contribute to the phenotype. The final integrator network fuses these module outputs, capturing higher-order, between-gene interactions. BINN latent variables offer opportunities to find hidden layer causal relationships given their stand in for biological domain knowledge. Even though we emphasize expressivity over built-in neuron-level interpretability, our BINN framework still offers opportunities for biological insight by uncovering causal relationships and enabling targeted sensitivity analyses that can inform bio- 9 logical understanding and decision-making. BINNs can prioritize intermediate traits by perturbing their corresponding latent variables and observing the resulting effect on the phenotype. In our metabolomics example, the phenotype showed the strongest sensitivity to perturbations of the sucrose latent variable, suggesting that the model captures pathway-relevant biological signals even under imperfect calibration. In the transcriptomics case, BINN not only recovers genes consistent with prior TWAS findings but also uncovers a range of additional candidates potentially influencing the phenotype through nonlinear, epistatic interactions. While these findings warrant further experimental validation, BINNs provide a promising framework for prioritizing gene candidates in downstream applications such as functional genomics and gene editing. BINNs are not without several limitations that must be considered when designing them. Despite the promising performance of BINNs in both applications in this study there are several important limitations to consider. Introducing sparsity constraints as inductive biases can boost accuracy in data-scarce settings, but the degree of sparsity must be carefully calibrated. Too little or too much undermines model behavior and thus, a systematic analysis to determine the optimal sparsity level is essential. Moreover, omics datasets are often noisy and heterogeneous, making it nontrivial to translate one or multiple –omics layers into an effective BINN architecture without risking mis-specification. Like all ANN-based approaches, BINNs demand hyperparameter tuning and computational resources for proper calibration. One of the strengths of the BINN framework lies in its flexibility to incorporate different types of loss functions depending on the modeling objective, however, we did not find this improved results for the transcriptomics dataset. For example, one can augment the baseline predictive loss with biologically motivated constraints, such as pathway coherence penalties or sparsity-inducing terms, to encourage solutions that are both accurate and mechanistically plausible. We found that such auxiliary constraints can sometimes boost predictive accuracy, although the magnitude of improvement varies by application and requires careful balancing to avoid over-penalization. In our experiments, we also test a more rigid formulation by embedding a “hard” constrained MSE penalty designed to strongly enforce reconstruction consistency across latent pathways. However, this constraint did not yield improvements, suggesting that overly strict losses can reduce flexibility and hinder optimization. This highlights an important trade-off: while additional biological constraints can enhance performance and interpretability, they must be tuned with care. To build on these findings and design BINNs in real-world breeding programs, more studies most be taken to understand how BINNs perform across diverse applications and populations, with different domain knowledge, on varied phenotypes, and more. The “sparse data” regime varies by dataset size and population diversity, and the benefits of BINNs will depend on both trait complexity and crop species. While our work addressed relatively simple phenotypes (flowering time and time to bud outgrowth), systematic benchmarking across a broader spectrum of traits (e.g., from highly heritable, single-gene characteristics to complex, polygenic attributes) will be essential to fully realize the potential of such models in crop improvement. Table 3: Biological layers, their functions, and example data types. Biological Level Function Example Data Type Genotype Genetic information underlying traits SNP arrays, whole-genome sequencing Epigenetic Modifications Influence gene regulation without changing DNA sequence DNA methylation Gene Expression Controls gene activity affecting traits RNA-seq Protein Interaction Networks Provide structural and signaling connections influencing outcomes Proteomics Metabolites Biochemical traits linking genes to observable traits Metabolomics Regulatory Pathways Control and coordinate gene expression Pathway databases (e.g., KEGG) Phenotype Observable traits resulting from genetic and environmental interactions Yield, height, flowering time, etc. Importantly, BINN gains depend on the quality of domain knowledge embedded in the model. Purely data-driven setups, such as our transcriptomics-derived case, are constrained by the limits of 10 experimental measurement: gene expression, while richer than genotype alone, is highly context- dependent (tissue, environment, sampling window) and thus noisy and heterogeneous, making performance sensitive to study design. By contrast, when mechanistic knowledge is sharper and more stable, illustrated by our metabolomics example, BINNs deliver markedly better accuracy in the sparse- data regime. This underscores a broader point: advancing fundamental biological understanding is not optional but enabling for practical GS models. Progress can come both from targeted experimental studies that elucidate pathways and from computational advances, e.g., genomic language models (gLMs) [7, 6, 31, 30, 8] and functional genomics predictors [3, 22], that infer function for genes lacking annotation, ultimately converting provisional signals into strong inductive biases we can embed in BINNs. Future efforts should explore integrating environmental signals, e.g. weather time series, and further tailored strategies, such as integrating additional omics layers to more closely align BINN’s causal pathways with mechanistic biology while maintaining a sound bias-variance trade-off. A practical challenge is that available omics are often heterogeneous, collected from different tissues, devel- opmental stages, platforms, and cohort designs (single vs. multiple genotypes, narrow vs. diverse populations), making it nontrivial to decide how, or whether, to use them as priors. As summarized in Table 3, priors can be drawn from transcriptomics, proteomics, metabolite pathways, and curated interaction networks, enabling models that better reflect how genetic variation propagates through biological systems to affect traits. This can be facilitated by experimenting with alternative topologies (e.g., stacked-layer, or parallel-layer BINNs), and incorporating advanced neural modules to model pathway subnetworks more effectively. The goal should not be just higher accuracy, but robust, genotype-only predictors that surface pathway importance and translate into selection decisions. With these targeted extensions, BINNs are well-positioned to improve GS at scale. 4 Methods Here we outline the BINN design used throughout before introducing the formalism. Our networks preserve nonlinear flexibility while enforcing biology-derived sparsity: genotype inputs are routed through pathway modules defined by masks built from prior knowledge (e.g., feature selection, association/eQTL hits, curated pathways, or ODE-based links). Each module is a small fully connected subnetwork that can model local nonadditivity and epistasis, and the module latents are fused by a final integrator to predict the phenotype. Intermediate omics measurements (e.g., expression, metabolites) inform the masks and, when available, can weakly supervise the latents via a soft constraint during training, but are never required at inference, preserving genotype-only deployment. The same template flexibly instantiates single-layer, stacked, staggered, or parallel multi-omics variants (see Supplementary Material) without changing the mathematical core. 4.1 Model Development with Biological Inductive Biases We consider n samples with p SNP features collected in the matrix X ∈Rn×p and a scalar phenotype vector y ∈Rn. Our BINN embeds L sequential omics layers between X and y, each layer l producing a latent representation U (l) ∈Rn×kl from its input U (l−1). We set U (0) = X and denote the number of subnetworks, corresponding to biological entities such as genes, metabolites, or proteins, at layer l by kl. 1. Pathway subnetworks. At omics layer l, prior biological knowledge (e.g. eQTL links, pathway membership) is encoded by a binary mask M (l) ∈{0, 1}dl−1×kl, where dl−1 = dim(U (l−1)). Specifically, M (l) ij = 1, if feature i of U (l−1) maps to entity j at layer l, 0, otherwise. (1) For each of the kl entities we define a subnetwork f (l) j that processes only the selected inputs U (l−1)M (l) :,j . These subnetworks may have multiple hidden layers with sigmoid activations σ(u) = 1/(1 + e−u) to capture Hill-type response typical in biological systems. We then concatenate their outputs into T (l) =  f (l) 1 (U (l−1)M (l) :,1 ), . . . , f (l) kl (U (l−1)M (l) :,kl)  ∈Rn×kl, (2) and set U (l) = T (l). 11 2. Residual network. To capture genetic effects not explained by any pathway subnetworks, we apply an unconstrained residual network gr (with parameters θr) to the subset of SNPs that are not connected in any mask M (l). Denoting this unannotated SNP matrix by Xres ⊂X, the residual network directly predicts a scalar residual phenotype: r = gr Xres; θr  ∈Rn, (3) ensuring that r captures variation attributable to SNPs outside known biological pathways. 3. Final integrator network. After the last omics layer produces U (L) ∈Rn×kL, we concatenate it with the residual output R ∈Rn×hr: Z = [ U (L), R ] ∈Rn×(kL+hr). (4) A final feed-forward network h with parameters θf then maps Z to the predicted phenotype ˆy = h(Z; θf) ∈Rn. (5) which allows for the recovery of potential cross-pathway interactions. All network parameters {W (l), b(l)}L l=1, θr, and θf are learned jointly by minimizing a suitable loss function (see Section 4.2). The masks M (l) enforce biological sparsity, reduce overfitting, and ensure each subnetwork aligns with known genotype–intermediate trait relationships. Our BINN framework is highly modular: depending on the number, type, and interdependencies of available omics layers, the network can assume different connectivity patterns, ranging from single-layer designs to staggered, stacked, or parallel architectures. A brief description of these variants is provided in the Supplementary Material. 4.2 Custom Loss Functions for Guided Training BINNs are flexible and can be trained with any suitable loss function, ranging from conventional objectives to biologically informed criteria. Standard mean squared error (MSE). The simplest choice is the mean squared error between the true phenotype y and the prediction ˆy: LMSE(y, ˆy) = 1 n n X i=1 yi −ˆyi 2. (6) Biologically informed soft-constrained loss. To encourage the model’s intermediate represen- tations to align with measured omics traits, we add a soft constraint based on correlation. Let T (l) = [T (l) 1 , . . . , T (l) n ] be the predicted latent values at layer l and Z(l) = [Z(l) 1 , . . . , Z(l) n ] the corresponding ground-truth intermediate measurements. Define the sample Pearson correlation ρ(l) = Pn i=1 T (l) i −¯T (l) Z(l) i −¯Z(l) qPn i=1(T (l) i −¯T (l))2 qPn i=1(Z(l) i −¯Z(l))2 , (7) where ¯T (l) and ¯Z(l) are the layer-wise means. The biologically informed loss is then Lbio = LMSE(y, ˆy) + λ L X l=1  1 −ρ(l) , (8) where λ > 0 is a hyperparameter balancing phenotype accuracy against intermediate-trait alignment. One could instead enforce an exact match via an auxiliary hard-constrained MSE ∥T (l) −Z(l)∥2 2, but the correlation-based soft constraint is less restrictive and allows the model greater expressive power. This biologically informed loss is optional: users may omit the correlation term (λ = 0) to recover the standard MSE objective, or tune λ to any desired level of guidance. 12 4.3 Sensitivity Analysis for Latent Perturbations To quantify the contribution of intermediate biological entities to the phenotype, we perform a post-hoc sensitivity analysis across trained BINN ensembles. This approach assesses how con- trolled perturbations of individual latent variables influence the predicted phenotype, providing an interpretable measure of importance that generalizes across omics layers, phenotypes, and model instances. Overview. Given a trained BINN ensemble consisting of S outer splits and F inner folds, let {Ms,f}s=1..S, f=1..F denote the set of trained models for a phenotype of interest. Each model Ms,f contains intermediate latent representations U (l) at layer l, corresponding to biological entities such as genes, metabolites, or proteins. Sensitivity analysis probes each latent by replacing it with controlled constants and observing the resulting change in the model’s mean predicted phenotype ˆy. Step 1: Baseline computation. For each trained model, we first perform a forward pass on a held-out test dataset to compute the baseline predicted phenotype ˆy0 and the corresponding latent activations U (l). The mean µj and standard deviation σj of each latent dimension u(l) j are computed across all samples in the dataset. Step 2: Latent clamping and phenotype response. To evaluate the sensitivity of entity j, we perform two controlled perturbations by clamping its latent activation to constant values across all samples: u(l) j (δ) = µj + δ σj, δ ∈{+a, −a}, (9) while all other latents remain unaltered. The perturbed latent is then propagated through the trained model to produce new mean phenotype predictions ˆy(+) and ˆy(−). The per-entity sensitivity under model Ms,f is defined as the symmetric mean absolute deviation: ∆y(s,f) j = 1 2  |ˆy(+) −ˆy0| + |ˆy(−) −ˆy0|  , (10) capturing the magnitude of the phenotype’s response to both upward and downward perturbations. Step 3: Aggregation across models. Sensitivities are aggregated across all ensemble members that include entity j to obtain a robust importance estimate: ¯ ∆yj = 1 |M(j)| X (s,f)∈M(j) ∆y(s,f) j , (11) where M(j) is the subset of trained models that contain latent u(l) j . The resulting ¯ ∆yj represents the average change in predicted phenotype magnitude when the latent corresponding to entity j is clamped to values spanning its natural variability. Step 4: Entity ranking and downstream analysis. Entities are ranked by their aggregated sensitiv- ity ¯ ∆yj, yielding a model-based importance profile for the phenotype under study. This ranking can identify key intermediate traits, benchmark model interpretability against known associations, and guide downstream analyses such as candidate gene selection, pathway enrichment, or gene-editing prioritization. 13 Algorithm 1 Sensitivity Analysis Across BINN Ensembles Require: Trained models {Ms,f} for a phenotype; perturbation scale a 1: for each outer split s = 1..S do 2: for each inner fold f = 1..F do 3: Load trained model Ms,f 4: Compute baseline mean phenotype ˆy0 and latent statistics (µj, σj) 5: for each latent entity j do 6: Clamp latent uj ←µj ± a · σj (constant across all samples) 7: Forward model to compute mean predictions ˆy(+) and ˆy(−) 8: Compute ∆y(s,f) j = 1 2 |ˆy(+) −ˆy0| + |ˆy(−) −ˆy0|  9: end for 10: end for 11: end for 12: Aggregate ¯ ∆yj = mean(s,f)(∆y(s,f) j ) over all models containing j 13: Rank entities by ¯ ∆yj 5 Data Availability All data generated for the shoot-branching problem were produced in-house by numerically integrating the ODE frameworks described in Powell et al. [34] and Betherloot et al. [9]. For the maize TWAS analyses, we used the publicly released genotype–expression–phenotype dataset from Torres- Rodríguez et al. [39], which is accessible via the original publication’s data archive. 6 Code Availability All simulation code, analysis scripts, ElasticNet-derived gene lists, trained models, and code to generate the figures in this paper are available upon request and are provided for non-commercial research use only. Acknowledgements We would like to thank Megan Gillespie for supporting and championing this work. Nathan Springer for his countless hours in helping us link and communicate our ideas towards biological and genomic applications. Alexis Charalampopoulos for early ideation on the concept of BINNs. As well as several team members who have contributed substantial feedback throughout this development of this work: Jaclyn Noshay, Sarah Turner-Hissong, Fabiana Freitas Moreira, Zhangyue Shi, Rocio Dominguez Vidana, and Koushik Nagasubramanian. References [1] Admas Alemu, Johanna Åstrand, Osval A Montesinos-Lopez, Julio Isidro y Sanchez, Javier Fernandez-Gonzalez, Wuletaw Tadesse, Ramesh R Vetukuri, Anders S Carlsson, Alf Ceplitis, Jose Crossa, et al. Genomic selection in plant breeding: Key factors shaping two decades of progress. Molecular Plant, 17(4):552–578, 2024. [2] Susanna Atwell, Yu S Huang, Bjarni J Vilhjálmsson, Glenda Willems, Matthew Horton, Yan Li, Dazhe Meng, Alexander Platt, Aaron M Tarone, Tina T Hu, et al. Genome-wide association study of 107 phenotypes in arabidopsis thaliana inbred lines. Nature, 465(7298):627–631, 2010. [3] Žiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R Ledsam, Agnieszka Grabska- Barwinska, Kyle R Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, and David R Kelley. Effective gene expression prediction from sequence by integrating long-range interactions. Nature methods, 18(10):1196–1203, 2021. 14 [4] Christina B Azodi, Emily Bolger, Andrew McCarren, Mark Roantree, Gustavo de Los Campos, and Shin-Han Shiu. Benchmarking parametric and machine learning models for genomic prediction of complex traits. G3: Genes, Genomes, Genetics, 9(11):3691–3702, 2019. [5] Christina B Azodi, Jeremy Pardo, Robert VanBuren, Gustavo de Los Campos, and Shin-Han Shiu. Transcriptome-based prediction of complex traits in maize. The Plant Cell, 32(1):139–151, 2020. [6] Gonzalo Benegas, Carlos Albors, Alan J Aw, Chengzhong Ye, and Yun S Song. A dna language model based on multispecies alignment predicts the effects of genome-wide variants. Nature Biotechnology, pages 1–6, 2025. [7] Gonzalo Benegas, Sanjit Singh Batra, and Yun S Song. Dna language models are powerful predictors of genome-wide variant effects. Proceedings of the National Academy of Sciences, 120(44):e2311219120, 2023. [8] Gonzalo Benegas, Chengzhong Ye, Carlos Albors, Jianan Canal Li, and Yun S Song. Genomic language models: opportunities and challenges. Trends in Genetics, 2025. [9] Jessica Bertheloot, François Barbier, Frédéric Boudon, Maria Dolores Perez-Garcia, Thomas Péron, Sylvie Citerne, Elizabeth Dun, Christine Beveridge, Christophe Godin, and Soulaiman Sakr. Sugar availability suppresses the auxin-induced strigolactone pathway to promote bud outgrowth. New Phytologist, 225(2):866–879, 2020. [10] Leo Breiman. Random forests. Machine learning, 45:5–32, 2001. [11] Ole F Christensen, Vinzent Börner, Luis Varona, and Andres Legarra. Genetic evaluation including intermediate omics features. Genetics, 219(2):iyab130, 2021. [12] Samuel A Clark and Julius van der Werf. Genomic best linear unbiased prediction (gblup) for the estimation of genomic breeding values. Genome-wide association studies and genomic prediction, pages 321–330, 2013. [13] Jose Crossa, Paulino Pérez-Rodríguez, Javier Cuevas, Osval A Montesinos-López, Diego Jarquín, Gustavo de Los Campos, Juan Burgueño, José Manuel González-Camacho, Salvador Pérez-Elizalde, Yoseph Beyene, and Susanne Dreisigacker. Genomic selection in plant breeding: methods, models, and perspectives. Trends in Plant Science, 22(11):961–975, 2017. [14] Haitham A Elmarakeby, Justin Hwang, Rand Arafeh, Jett Crowdis, Sydney Gang, David Liu, Saud H AlDubayan, Keyan Salari, Steven Kregel, Camden Richter, et al. Biologically informed deep neural network for prostate cancer discovery. Nature, 598(7880):348–352, 2021. [15] Jeffrey B Endelman. Ridge regression and other kernels for genomic selection with r package rrblup. The plant genome, 4(3), 2011. [16] Eric R Gamazon, Heather E Wheeler, Kaanan P Shah, Sahar V Mozaffari, Keston Aquino- Michaels, Robert J Carroll, Anne E Eyler, Joshua C Denny, GTEx Consortium, Dan L Nicolae, et al. A gene-based association method for mapping traits using reference transcriptome data. Nature genetics, 47(9):1091–1098, 2015. [17] Pengfei Gao, Haonan Zhao, Zheng Luo, Yifan Lin, Wanjie Feng, Yaling Li, Fanjiang Kong, Xia Li, Chao Fang, and Xutong Wang. Soydngp: a web-accessible deep learning framework for genomic prediction in soybean breeding. Briefings in bioinformatics, 24(6):bbad349, 2023. [18] Daniel Gianola, Gustavo de Los Campos, William G Hill, Eduardo Manfredi, and Rohan Fernando. Additive genetic variability and the bayesian alphabet. Genetics, 183(1):347–363, 2009. [19] Erik Hartman, Aaron M Scott, Christofer Karlsson, Tirthankar Mohanty, Suvi T Vaara, Adam Linder, Lars Malmström, and Johan Malmström. Interpreting biologically informed neural networks for enhanced proteomic biomarker discovery and pathway analysis. Nature Communi- cations, 14(1):5359, 2023. 15 [20] Elliot L Heffner, Mark E Sorrells, and Jean-Luc Jannink. Genomic selection for crop improve- ment. Crop Science, 49(1):1–12, 2009. [21] Laval Jacquin, Tuong-Vi Cao, and Nourollah Ahmadi. A unified and comprehensible view of parametric and kernel methods for genomic prediction with application to rice. Frontiers in genetics, 7:145, 2016. [22] Kishore Jaganathan, Sofia Kyriazopoulou Panagiotopoulou, Jeremy F McRae, Siavash Fazel Darbandi, David Knowles, Yang I Li, Jack A Kosmicki, Juan Arbelaez, Wenwu Cui, Grace B Schwartz, et al. Predicting splicing from primary sequence with deep learning. Cell, 176(3):535– 548, 2019. [23] Arthur Korte and Ashley Farlow. The advantages and limitations of trait analysis with gwas: a review. Plant methods, 9:1–9, 2013. [24] Wenlong Ma, Zhixu Qiu, Jie Song, Qian Cheng, and Chuang Ma. Deepgs: Predicting phenotypes from genotypes using deep learning. BioRxiv, page 241414, 2017. [25] Xiaojie Ma et al. Omicsgcn: a multi-view graph convolutional network for multi-omics data integration. Bioinformatics, 2022. [26] Gary C McDonald. Ridge regression. Wiley Interdisciplinary Reviews: Computational Statistics, 1(1):93–100, 2009. [27] Theo HE Meuwissen, Ben J Hayes, and ME1461589 Goddard. Prediction of total genetic value using genome-wide dense marker maps. genetics, 157(4):1819–1829, 2001. [28] Osval Antonio Montesinos López, Abelardo Montesinos López, and Jose Crossa. Bayesian genomic linear regression. In Multivariate Statistical Machine Learning Methods for Genomic Prediction, pages 171–208. Springer, 2022. [29] Osval Antonio Montesinos-López, Abelardo Montesinos-López, Paulino Pérez-Rodríguez, José Alberto Barrón-López, Johannes WR Martini, Silvia Berenice Fajardo-Flores, Laura S Gaytan-Lugo, Pedro C Santana-Mancilla, and José Crossa. A review of deep learning applica- tions for genomic selection. BMC genomics, 22(1):19, 2021. [30] Eric Nguyen, Michael Poli, Matthew G Durrant, Brian Kang, Dhruva Katrekar, David B Li, Liam J Bartie, Armin W Thomas, Samuel H King, Garyk Brixi, et al. Sequence modeling and design from molecular to genome scale with evo. Science, 386(6723):eado9336, 2024. [31] Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Michael Wornow, Callum Birch- Sykes, Stefano Massaroli, Aman Patel, Clayton Rabideau, Yoshua Bengio, et al. Hyenadna: Long-range genomic sequence modeling at single nucleotide resolution. Advances in neural information processing systems, 36, 2024. [32] Tran Nguyen et al. Deepprog: an ensemble of deep-learning and machine-learning models for prognosis prediction using multi-omics data. Nature Communications, 2020. [33] Joseph O Ogutu, Torben Schulz-Streeck, and Hans-Peter Piepho. Genomic selection using regularized linear regression models: ridge regression, lasso, elastic net and their extensions. In BMC proceedings, volume 6, pages 1–6. Springer, 2012. [34] Owen M Powell, Francois Barbier, Kai P Voss-Fels, Christine Beveridge, and Mark Cooper. Investigations into the emergent properties of gene-to-phenotype networks across cycles of selection: a case study of shoot branching in plants. in silico Plants, 4(1):diac006, 2022. [35] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378:686–707, 2019. [36] Christian Riedelsheimer, Angelika Czedik-Eysenberg, Christoph Grieder, Jan Lisec, Frank Technow, Ronan Sulpice, Thomas Altmann, Mark Stitt, Lothar Willmitzer, and Albrecht E Melchinger. Genomic and metabolic prediction of complex heterotic traits in hybrid maize. Nature genetics, 44(2):217–220, 2012. 16 [37] Marylyn D Ritchie, Emily R Holzinger, Ruowang Li, Sarah A Pendergrass, and Dokyoon Kim. Methods of integrating data to uncover genotype–phenotype interactions. Nature Reviews Genetics, 16(2):85–97, 2015. [38] J Vladimir Torres-Rodríguez, Delin Li, and James C Schnable. Evolving best practices for transcriptome-wide association studies accelerate discovery of gene-phenotype links. Current Opinion in Plant Biology, 83:102670, 2025. [39] J Vladimir Torres-Rodríguez, Delin Li, Jonathan Turkus, Linsey Newton, Jensina Davis, Lina Lopez-Corona, Waqar Ali, Guangchao Sun, Ravi V Mural, Marcin W Grzybowski, et al. Population-level gene expression can repeatedly link genes to functions in maize. The Plant Journal, 119(2):844–860, 2024. [40] Arno van Hilten, Steven A Kushner, Manfred Kayser, M Arfan Ikram, Hieab HH Adams, Caroline CW Klaver, Wiro J Niessen, and Gennady V Roshchupkin. Gennet framework: interpretable deep learning for predicting phenotypes from genetic data. Communications biology, 4(1):1094, 2021. [41] Arno van Hilten, Jeroen van Rooij, M Arfan Ikram, Wiro J Niessen, Joyce BJ van Meurs, and Gennady V Roshchupkin. Phenotype prediction using biologically interpretable neural networks on multi-cohort multi-omics data. NPJ systems biology and applications, 10(1):81, 2024. [42] Rajeev K Varshney, Manish Roorkiwal, and Mark E Sorrells. Genomic-enabled prediction models for improving crop productivity. Trends in Plant Science, 26(6):575–587, 2021. [43] Michael Wainberg, Nasa Sinnott-Armstrong, Nicholas Mancuso, Alvaro N Barbeira, David A Knowles, David Golan, Raili Ermel, Arno Ruusalepp, Thomas Quertermous, Ke Hao, et al. Opportunities and challenges for transcriptome-wide association studies. Nature genetics, 51(4):592–599, 2019. [44] Dong et al. Wang. Mogonet integrates multi-omics data through graph convolutional networks. Nature Machine Intelligence, 2021. [45] Hao Wang, Shen Yan, Wenxi Wang, Yongming Chen, Jingpeng Hong, Qiang He, Xianmin Diao, Yunan Lin, Yanqing Chen, Yongsheng Cao, et al. Cropformer: An interpretable deep learning framework for crop genomic prediction. Plant Communications, 6(3), 2025. [46] Kelin Wang, Muhammad Ali Abid, Awais Rasheed, Jose Crossa, Sarah Hearne, and Huihui Li. Dnngp, a deep neural network-based method for genomic prediction using multi-omics data in plants. Molecular Plant, 16(1):279–293, 2023. [47] Jacob D Washburn, José Ignacio Varela, Alencar Xavier, Qiuyue Chen, David Ertl, Joseph L Gage, James B Holland, Dayane Cristina Lima, Maria Cinta Romay, Marco Lopez-Cruz, et al. Global genotype by environment prediction competition reveals that diverse modeling strategies can deliver satisfactory maize yield estimates. Genetics, 229(2):iyae195, 2025. [48] Xun Wu, Yongxiang Li, Xin Li, Chunhui Li, Yunsu Shi, Yanchun Song, Zuping Zheng, Yu Li, and Tianyu Wang. Analysis of genetic differentiation and genomic variation to reveal potential regions of importance during maize improvement. BMC plant biology, 15(1):256, 2015. [49] Xinyue Yang et al. mmvae: a multi-modal variational autoencoder framework for integrative analysis of multi-omics data. Bioinformatics, 37(15):2151–2158, 2021. [50] Bayya Yegnanarayana. Artificial neural networks. PHI Learning Pvt. Ltd., 2009. 17 Supplementary Materials for: Biology-informed neural networks learn nonlinear representations from omics data to improve genomic prediction and interpretability Katiana Kontolati, Rini Jasmine Gladstone, Ian Davis, Ethan Pickering Overview This document provides supplementary information supporting the findings and methodol- ogy of the main manuscript. It includes additional figures, tables, implementation details, and reproducibility notes for both the transcriptomics and metabolomics applications. 1 Metabolomics-derived BINN Axillary bud outgrowth is the key developmental process driving shoot branching in plants. Its regulation arises from an intricate network of hormonal and sugar signals. In the frame- work of Bertheloot et al. [1], auxin (A) concentration produced at the shoot apex is trans- ported basipetally and inhibits cytokinin (CK) synthesis while promoting strigolactone (SL) production. Cytokinin levels and strigolactone levels then act antagonistically to activate or repress bud growth, respectively, and sucrose availability (S) acts as an enabling cue. Bertheloot et al. showed that this minimal network captures the switch-like behavior of bud outgrowth across species and can be approximated as a system of ordinary differential equations (ODEs) [1]). Powell et al. [8] then extended this ODE framework by parameteriz- ing each interaction term and intermediate-trait (hormones and sucrose) level with additive genetic values calculated based on the additive genetic effects at multiple causal loci. The introduction of genetic variation into the shoot-branching network yields a fully spec- ified G2P model, enabling in silico selection experiments that reveal how interactions among intermediate traits drive selection response, precisely the process breeders seek to optimize. Because this framework integrates genotype, metabolite levels, and phenotype, it functions as a biologically-informed model, embedding known genetic loci-metabolite relationships as inductive biases. Moreover, having access to the exact ODEs used to generate synthetic data gives us a ground-truth baseline against which to develop and rigorously benchmark our BINN architectures. Although the mapping from genotype to metabolite in this system is exactly specified, in real breeding populations we typically know only statistical associa- tions and never the true functional forms. We therefore hypothesize that a BINN can recover 1 these hidden relationships, learning the underlying network dynamics directly from data and providing both accurate predictions and mechanistic insight. 1.1 Adapted system of equations We generate a large synthetic dataset following the ODE-based shoot-branching network of Bertheloot et al. [1] extended by Powell et al. [8]. For reproducibility we outline the exact set of equations below. Given N lines and L causal loci per pathway, the procedure is: 1. Simulate additive genetic effects. Draw locus effects {βj}4L j=1 from a standard nor- mal distribution. Denote the resulting effect vector by β = (β1, . . . , β4L)⊤. 2. Generate genotypes. Form the genotype matrix G ∈{0, 1}N×4L, Gi,j ∼Bernoulli(0.5), where row i represents line i and columns 1 . . . L, L + 1 . . . 2L, 2L + 1 . . . 3L, 3L + 1 . . . 4L correspond to auxin (A), sugar (S), cytokinin (CK), and strigolactone (SL) pathways. 3. Compute additive pathway values. For each pathway p ∈{A, S, CK, SL}, let β(p) and G(p) denote the corresponding length-L subvector and submatrix. The raw genetic value for pathway p in line i is g(p) i = L X j=1 G(p) i,j β(p) j . (1) 4. Rescale to steady-state trait levels. To match the ranges used by Powell et al. [8], each g(p) is linearly rescaled to an intermediate trait x(p): x(p) i = g(p) i 2 maxi |g(p) i | + 1 2 ! sp + bp, (2) with pathway-specific scale sp and offset bp: (sA, bA) = (2.5, 0), (sS, bS) = (2.4, 0.1), (sCK, bCK) = (0.6, 0.4), (sSL, bSL) = (0.9, 0.2). 5. Compute interaction terms and intermediate traits. After computing raw genetic values g(p) via Eq. (1), we first linearly map auxin and sugar to obtain x(A) i = g(A) i , x(S) i = g(S) i . (3) 2 Cytokinin and strigolactone receive additional interaction adjustments as: x(CK) i = g(CK) i γ(i) CK←A + γ(i) CK←S 0.99 , (4) x(SL) i = g(SL) i + γ(i) SL←A 0.86 . (5) where γ(i) CK←A, γ(i) CK←S, and γ(i) SL←A are computed as γ(i) CK←A = 1 1 + 0.96 x(A) i , γ(i) CK←S = 0.25 (x(S) i )2 0.19 + (x(S) i )2, (6) γ(i) SL←A = 24.89 (x(A) i )2 294.58 + (x(A) i )2. (7) 6. Integrator and phenotype. The intermediate traits are passed throug the signal integrator term as γ(i) I←CK = 1 1 + 1000 x(CK) i , (8) γ(i) I←SL,S = 5.64 (x(SL) i )2 1 + 0.00418 + 7.10 (x(S) i )2 (x(SL) i )2. (9) The composite integrator signal for line i is Ii = 0.33 + γ(i) I←CK + γ(i) I←SL,S. (10) Time to bud outgrowth is then obtained by a linear transformation and clipping to remove negative days: yBO,i = min{−3.5 min j Ij + 3.5 Ii, 8.3}, (11) and lines with yBO,i ≤8.3 are scored as having initiated bud outgrowth. 7. Final dataset. We generate the genotype matrix X = G, the phenotype vector yBO, and the true intermediate traits {x(A), x(S), x(CK), x(SL)} for downstream model training and evaluation. 1.2 Data generation We generated a synthetic dataset using the equations outlined in Section 1.1 using our own Python implementation. We consider L = 400 causal loci per pathway which results in 1, 600 loci in total. A large dataset of N = 100, 000 lines was generated from which 10% was used for validation and 20% for testing fixed across models. The remaining 70% was used to randomly sample training subsets used in the experiments described below. Below, we show the distribution of the phenotype yBO and the four intermediate traits (A,S,CK,SL). 3 Figure 1: Distribution of the phenotype time to bud outgrowth yBO, and intermediate traits auxin (A), sucrose (S), cytokinin (CK) and strigolactone (SL). 1.3 Model architecture Each pathway subnetwork is a shallow fully-connected network (FCN) with hidden dimension H = 32, inter-layer sigmoid activations that aim to emulate the Hill-like dynamics of the shoot-branching network, and a one-dimensional output. The four pathway outputs are concatenated and passed through a final FCN with hidden dimension F = 16 to predict yBO. Any SNPs not assigned to a pathway are fed into a residual FCN of the same size. 1.4 Experimental setup All BINN experiments used the same training pipeline, implemented in PyTorch [6]. Below we summarize the key hyperparameters and procedures: Data splits and noise injection. For each run, we randomly sample from the full set Ntrain lines for training, Nval = 0.1 × Ntotal for validation, and Ntest = 0.2 × Ntotal for testing. To test the robustness of model to noisy data we then add Gaussian noise to the targets: yi ←yi 1 + ϵ(y) i  , ϵ(y) i ∼N(0, σ2 y), and to each intermediate trait p ∈{A, S, CK, SL}: x(p) i ←x(p) i 1 + ϵ(p) i  , ϵ(p) i ∼N(0, σ2 inter), where σy = 10% and σinter = 5%. Optimization. We trained each BINN with: • Optimizer: Adam, initial learning rate α = 10−3, no weight decay. 4 • Batch size: 64. • Scheduler: ReduceLROnPlateau on the validation loss, factor 0.5, patience 20 epochs. • Early stopping: terminate if validation loss does not improve for 20 consecutive epochs, up to a maximum of 500 epochs. Repetitions, dataset sizes, and random seeds. We evaluated each BINN configura- tion over nine logarithmically spaced training set sizes (500 to 20, 000 samples), combined with varying noise levels and intermediate-trait availability. For each of these dataset sizes, we performed n = 5 independent runs using different random seeds. Genotypes were drawn once per size, and each seed controlled the train/validation/test split (80/10/10), the net- work weight initialization, and the Gaussian noise added to both the final phenotype and intermediate traits. 1.5 Baseline Models To benchmark the performance of our BINN architectures, we compared against two standard models trained on the same synthetic data using our own implementation: Ridge regression. We fit a linear model ˆy = Xw + b, with L2 regularization on the weights. Concretely, the model implements L(w, b) = 1 N N X i=1 (yi −(x⊤ i w + b))2 + λ∥w∥2 2, where xi ∈Rp is the genotype vector, yi the target, and λ = 0.01 the regularization coef- ficient. Training proceeds by minimizing this loss via AdamW with the same learning rate and early-stopping schedule as the BINNs. Fully-connected network (FCN). As a representative “black-box” nonlinear model, we trained a small deep network with two hidden layers of size 64 and ReLU activations: h(1) = ReLU(W (1)x + b(1)), h(2) = ReLU(W (2)h(1) + b(2)), ˆy = W (3)h(2) + b(3). The model parameters {W (k), b(k)} are learned by minimizing mean-squared error: L = 1 N N X i=1 yi −ˆyi 2, using the same optimizer, learning rate, batch size, and early-stopping criteria as for the BINNs. 5 1.6 Evaluation Metric To quantify predictive performance, we compute the mean squared error (MSE) on the held- out test set for each model, dataset size, and random seed. Specifically, for each dataset size Ntrain, noise configuration, and intermediate-trait fraction, we perform n = 5 independent runs yielding test errors MSEr = 1 Ntest Ntest X i=1 yi −ˆy(r) i 2, r = 1, . . . , n. We then summarize these 5 values via their mean and standard deviation in the main text, and visualize their full distribution using boxplots. 2 Transcriptomics-derived BINN 2.1 Introduction Linking genetic variation to phenotypic traits remains a central challenge in plant genomics. Torres-Rodriguez et al. [10] conducted a transcriptome-wide association study (TWAS) on the Wisconsin Diversity Panel, consisting 693 temperate adapted maize inbred lines from seven heterotic pools: stiff stalk (SS), non-stiff stalk (NSS), iodent (IDT), tropical, popcorn, sweet corn, and other/unknown, to identify genes controlling flowering time. The authors collected full-length RNA-seq within a tight two-hour window and recorded days to anthesis and silking at two Corn Belt locations, Nebraska and Michigan. From these data, they identified 21 genes exceeding stringent false-discovery thresholds and mapped both cis- and trans-eQTL, including loci in well-characterized flowering regulators. The study’s power derives from its large sample size, precise sampling protocol, and deep sequencing depth, making it one of the most comprehensive TWAS datasets in maize. Because this cohort provides the complete “genotype–expression–phenotype” triangle, it is ideal for evaluating our methodology. We hypothesize that embedding expression-derived sparsity into the network architecture improves predictive accuracy over genotype-only mod- els, while still requiring only genotype data at inference. We constructed the BINN’s spar- sity mask through a two-step procedure. First, we trained an expression(biological domain knowledge)-to-phenotype (B2P) model using Elastic Net regression on the RNA-seq data for each flowering trait, tuning the regularization parameters via cross-validation. We then selected genes with non-zero coefficients. Second, we performed eQTL mapping on those can- didate genes to identify their top SNP markers. The resulting SNP-gene pairs defined the bi- nary mask connectivity matrix used to define our sparse genotype-to-expression-to-phenotype BINN architecture. 2.2 Dataset and Preprocessing We use the real-world maize data provided in Torres-Rodriguez et al. 2024 [10], which in- cludes flowering times (days to silking and anthesis) measured at two field locations (Ne- braska and Michigan), RNA-seq gene expression data from leaf samples, and genotypes from 6 15.8 million single nucleotide polymorphism (SNP) markers. The dataset consists of 690 lines from the Wisconsin Diversity Panel, covering 7 heterotic groups (stiff stalk, non-stiff stalk, iodent, tropical, popcorn, sweet corn, and other/unknown) and expression values of 39,756 genes. We followed the data pre-processing steps described in Torres-Rodriguez et al. [10] and used the spatially-corrected flowering time values that were provided. Fig. 2 shows the distribution of the 4 phenotypes across all the 690 lines. We also dropped the unexpressed genes, with median TPM less than 0.01, from further analysis. This reduced the total number of genes to 24,874. Furthermore, markers with a minor allele frequency (MAF) below 5% were dropped, and PLINK [9] was used to prune highly correlated markers (--indep-pairwise 1000 500 0.5), leaving 3.2 million markers for further analysis. The pre-processed expression and marker data is, then, used for feature selection for the BINN model. 2.2.1 Feature Selection The BINN architecture is sparsified by selecting a subset of genes and markers through a biologically-informed process. The genes are selected using a linear elastic net model and markers are selected through eQTL process. This reduces the input features (markers) by an order of 2 and the length of intermediate layer (genes) by an order of 1. 1. Gene selection : To identify a reduced set of significant genes influencing flowering time, we developed ElasticNet models utilizing expression data from 24,874 genes as predictors, with days to silking and anthesis for NE and MI (four distinct models) as the response variables. The expression data, measured in Transcripts Per Million (TPM), was normalized to a range of 0 to 2, following the methodology outlined by Torres-Rodriguez et al. [10]. The ElasticNet models were implemented using Scikit- Learn’s ElasticNetCV[4], and we selected the number of genes based by varying the values of the L1 ratio by subsetting the features with non-zero coefficients. We used 5- fold cross-validation for model building. To assess the quality of the selected genes, we compared the overlap between genes identified as influencing flowering time in Torres- Rodriguez et al. [10] and those selected by the ElasticNet models. Table 1 presents the number of genes identified for various values of the L1 ratio, (l), and the corresponding percentage of overlap with the flowering time genes reported in Torres-Rodriguez et al. [10], based on models trained using expression values from all the 690 lines. This analysis demonstrates that as the L1 ratio decreases, the number of selected genes increases, and a lower L1 ratio facilitates the identification of a greater number of significant genes. 2. Marker selection : For each gene selected through the ElasticNet model, significant markers were identified using eQTL analysis [2] as outlined in Torres-Rodriguez et al. [10], using mixed linear model [12] in rMVP [11] and including three principal components as covariates. For all the identified genes, the top 20 statistically significant markers are selected for training the BINN models. 7 Table 1: Assessment of the quality of the set of genes obtained using the ElasticNet models. Phenotype # Significant flowering time genes from [10] # genes from ElasticNet models % overlap of significant genes l = 1.0 l = 0.5 l = 0.2 l = 1.0 l = 0.5 l = 0.2 Days to Silking, MI 17 317 568 970 70% 82% 100% Days to Anthesis, MI 16 320 412 816 69% 81% 94% Days to Silking, NE 16 234 413 756 69% 81% 100% Days to Anthesis, NE 14 259 383 760 64% 100% 100% Figure 2: Distribution of the four phenotypes; days to anthesis and silking in NE and MI, for the real-world maize TWAS dataset 2.3 Model architecture The BINN architecture comprises of three layers: an input layer consisting of the reduced set of markers, an intermediate layer in which each node represents a gene, and a final output layer that generates the predicted phenotype values. Every node (gene) in the intermediate layer is connected to 20 nodes (significant markers) from the input layer via a pathway sub- network, which is structured as a fully connected network (FCN) with a single hidden layer of dimension, H = 128, and a sigmoid activation function. The outputs from all the pathway sub-networks are concatenated and subsequently fed into another FCN, with a single hidden layer of dimension, F = 128, to predict the flowering time, yFT. 2.4 Experimental setup The experiments were designed to evaluate the performance of the BINN under conditions of sparse data. Consequently, all models were trained on 20% of the dataset while being tested on the remaining 80%. The entire dataset was divided into five non-overlapping 8 subsets, while ensuring that the lines from all heterotic groups were represented in same proportion as the full dataset. The exact number of train, validation and test lines used in each independent fold is shown in Table 2. To prevent any data leakage during feature selection, gene and marker selection were performed independently for each subset, ensuring that the corresponding features were utilized exclusively when training the model on that specific subset of data. For gene selection, we used an L1 ratio of l = 0.1 for ElasticNet models, with a 5-fold cross-validation. Table 3 presents the number of genes selected per phenotype for all the splits of data. For every selected gene, we selected 20 markers based on the eQTL analysis. It is important to note that expression data was used solely for feature selection to introduce sparsity in the network architecture and was not utilized for model training. Therefore, only genotype data is required for training and validation of the model. Table 2: Number of train, validation and test lines for each subpopulation used in each fold (sparse-data scenario). Population Train Validation Test All 65 65 560 Others 29 29 241 SS 15 15 129 NSS 9 9 76 IDT 5 5 47 Tropical 3 3 26 Popcorn 2 2 18 Sweet corn 2 2 23 We trained all the BINN models with Adam optimizer with an initial learning rate, α = 10−3, and batch size of 16. Training was terminated if the validation loss did not improve for 20 consecutive epochs, with a maximum limit of 500 epochs. Additionally, we used 5-fold cross-validation (CV) for training on the five subsets of data. As BINN is a neural network, it necessitates a validation dataset during training. However, the baseline models utilize the entire 20% of the data for training without requiring a validation set. To ensure a fair comparison between the models, we initially trained the BINN models using 16% of the data for training and 4% for early stopping/validation for each of the five CV folds. We recorded the epoch that yielded the best model performance for each of these runs and subsequently retrained the BINN model using 20% of the data for the same number of epochs without a validation set. 2.5 Baseline Models We compare the performance of the BINN models against benchmark genotype-to-phenotype (G2P) and domain expression-to-phenotype (B2P) models, all of which are linear in nature. For G2P, we trained GBLUP and ridge regression models, whereas, for B2P, a ridge regression model was trained. All the ridge regression models were implemented using Scikit-Learn’s [4] RidgeCV function with 3-fold cross-validation. We used the GBLUP implementation from BGLR [7] with 5000 iterations after 1000 burn-in iterations. 9 Figure 3: Test Spearman correlation distributions for prediction of days to Anthesis in NE and MI — comparing all the models (G2P on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with network sparsity defined by lasso selected genes and eQTL-selected markers, blue) across five independent 20%/80% train/test splits, each with five-fold cross-validation. Results of three G2P models are shown - GBLUP (G2P-GBLUP), Ridge Regression on every 100th marker (G2P-RR) and Ridge Regression on bio-informed (eQTL-selected) markers (G2P-RR-BI). Additionally, results of two B2P models are also shown - Ridge Regression on all the genes (B2P-RR) and Ridge Regression on bio-informed (ElasticNet-selected) genes (B2P-RR-BI). Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 10 Figure 4: Test Spearman correlation distributions for prediction of days to Silking in NE and MI — comparing all the models (G2P on genotype, pink; B2P: Elastic Net on ex- pression, green; G2B2P: BINN with network sparsity defined by lasso selected genes and eQTL-selected markers, blue) across five independent 20%/80% train/test splits, each with five-fold cross-validation. Results of three G2P models are shown - GBLUP (G2P-GBLUP), Ridge Regression on every 100th marker (G2P-RR) and Ridge Regression on bio-informed (eQTL-selected) markers (G2P-RR-BI). Additionally, results of two B2P models are also shown - Ridge Regression on all the genes (B2P-RR) and Ridge Regression on bio-informed (ElasticNet-selected) genes (B2P-RR-BI). Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 11 Table 3: Number of genes selected per phenotype for different subset of data using l = 0.1. Data subset Days to silking, MI Days to anthesis, MI Days to silking, NE Days to anthesis, NE Split 1 1,271 1,041 1,010 933 Split 2 1,230 995 1,071 912 Split 3 1,079 919 969 941 Split 4 1,188 985 1,044 848 Split 5 1,235 1,018 1,042 933 The G2P models were trained using every 100th marker, as we found no significant difference in performance compared to using all 3.2 million markers. On the other hand, B2P model was trained on the expression (in TPM) of all 24,874 genes. Prior to model training, the TPM values were transformed using the Box-Cox normalization with a parameter λ = 0.0282 to approximate a normal distribution. 2.6 Evaluation Strategy The performance of the BINN models are compared with that of the baseline models using Spearman’s rank correlation coefficients [5]. Specifically, we calculated the correlation be- tween the predicted and observed phenotype values, evaluated on the remaining 80% of the held-out data. Correlation values are computed for all five folds in each subset of data, as well as for each heterotic group represented in the dataset. It is important to note that since the correlation metric is computed for each fold using the fixed 80% held-out lines within a subset/split, this may underestimate the true model variance, potentially resulting in less variability in our test metrics. Additionally, we treated each heterotic group as an indepen- dent trial, aggregating Spearman correlations across the five folds, five data splits, and all four phenotypes, and then applied a paired t-test [3] to determine whether the BINN model’s performance is significantly better than that of the G2P baseline. In the results reported in the main text we obtained p = 2.23 × 10−6, well below the 0.05 significance threshold, allowing us to reject the null hypothesis and conclude that BINN significantly outperforms the baseline model. 2.7 Additional Evaluation Results In this section, we discuss additional experiments conducted to further evaluate the impact of biologically informed feature selection and network sparsity in BINNs. First, we compare the performance of the BINN model against various baseline models. Specifically, we consider three G2P baseline models - GBLUP, Ridge Regression using every 100th marker and Ridge Regression using eQTL-selected markers and two B2P models - Ridge Regression using all the genes and Ridge Regression using ElasticNet-selected genes. Figures 3 and 4 illustrate the performance of these models in predicting four flowering time traits — days to anthesis and silking in both NE and MI. Furthermore, we conducted an ablation study to evaluate the impact of biologically in- formed network sparsity on the performance of the BINN model. This involved incorporating 12 (a) (b) (c) Figure 5: Test Spearman correlation distributions for (a) Silking NE, (b) Silking MI and (c) Anthesis NE - comparing three predictive models (G2P: GBLUP on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with expression-informed sparsity, blue) across five independent 20%/80% train/test splits, each with five-fold cross-validation. Each subplot shows results for all lines pooled (“all”) and for each of the seven distinct subpopulations (SS, NSS, IDT, Popcorn, Sweet corn, Tropical and Others). The pink dashed line marks the median of the G2P baseline; the dashed line for G2B2P is colored green when its median exceeds that baseline or red when it falls below. Legends report the median percent change of G2B2P relative to G2P. randomly selected network sparsity into the BINN architecture and comparing its predictive performance with that of the baseline models. Figure. 7 presents the results for the BINN model, where network sparsity is defined by randomly selected genes and eQTL-selected markers. To ensure a fair evaluation, we maintained an equal number of randomly selected genes as in the biologically informed case. Figure. 8 displays similar results for network sparsity defined by randomly selected genes and markers. Our observations from these analyses indicate that as the level of random connections in the network architecture increases—from randomly selected genes to randomly selected genes and markers—the performance of the BINN model significantly declines. Moreover, biologically informed markers and genes enhance the performance of the baseline models, as evidenced in Figures 3 and 4. These results underscore our finding that biologically informed sparsity imparts essential inductive bias to the model, thereby, improving its predictive accuracy. 3 Alternative BINN Architectures In the main text, we described our canonical BINN design and its instantiation for (1) the synthetic shoot-branching problem and (2) the maize TWAS dataset. The following examples 13 (a) (b) (c) Figure 6: Test Spearman correlation distributions for leave-one-population-out experiments for (a) Silking NE, (b) Silking MI and (c) Anthesis NE - comparing three predictive models (G2P: Ridge Regression on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with expression-informed sparsity, blue) across five independent 20% train splits, each with five-fold cross-validation.Each subplot shows results for the predictions on the six distinct subpopulations (SS, NSS, IDT, Popcorn, Sweet corn, and Tropical) using the models that are trained by leaving out the corresponding subpopulation from the training data. The pink dashed line marks the median of the G2P baseline; the dashed line for G2B2P is colored green when its median exceeds that baseline or red when it falls below. Legends report the median percent change of G2B2P relative to G2P. are not exhaustive alternatives but illustrate the flexibility of our architecture in handling diverse multi-omics contexts: 3.1 Single-layer BINN This model - used for the TWAS problem - has one intermediate omics layer (gene expression) feeding directly into the final integrator. It is appropriate when a single data type dominates predictive power, as in transcriptome-wide association studies without the availability of additional -omics data such metabolomics or proteomics. 14 (a) (b) (c) (d) Figure 7: Test Spearman correlation distributions for four flowering-time traits—(a) Anthesis NE, (b) Anthesis MI, (c) Silking NE, and (d) Silking MI—comparing three predictive models. Here, the network sparsity of G2B2P (BINN) model is defined by randomly selected genes and eQTL-selected markers, blue). Models were trained across five independent 20%/80% train/test splits, each with five-fold cross-validation. Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 3.2 Staggered-layer BINN In cases with a single omics modality but with rich intra-layer dependencies, we allow pathway subnetworks to exchange information via sparse cross-connections. For exam- ple, in the shoot-branching network, auxin (A) and sucrose (S) subnetworks both feed into the cytokinin (CK) and strigolactone (SL) modules. These staggered links capture non- hierarchical, within-layer interactions, such as gene–gene or metabolite–metabolite cross- talk, before passing all signals to the final integrator. 3.3 Stacked-layer BINN When multiple omics layers form a biological cascade, such as genotype →transcriptome → metabolome, we embed each layer as a separate set of subnetworks connected in series. The latent outputs of the first layer become the inputs to the second layer’s pathway modules, and so on, before final integration. This design mirrors natural hierarchies (e.g. expression driving metabolite synthesis) and is appropriate whenever upstream molecular changes are known to mediate downstream effects. 15 (a) (b) (c) (d) Figure 8: Test Spearman correlation distributions for four flowering-time traits—(a) Anthesis NE, (b) Anthesis MI, (c) Silking NE, and (d) Silking MI—comparing three predictive models. Here, the network sparsity of G2B2P (BINN) model is defined by randomly selected genes and randomly selected markers, blue). Models were trained across five independent 20%/80% train/test splits, each with five-fold cross-validation. Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 3.4 Parallel-layer BINN Each omics type (e.g. transcriptomics, proteomics, methylation) is processed by an inde- pendent subnetwork, and their latent outputs are concatenated only at the final integrator. This “late fusion” design suits orthogonal assays with minimal direct cross-talk, such as combining methylation and ATAC-seq to predict complex agronomic traits. All variants share the same core elements, biological sparsity masks, shallow FCNs per module, and a unified integrator, yet differ in how modules interconnect, allowing users to tailor the model to their study’s data structure and biological assumptions. References [1] Jessica Bertheloot, Fran¸cois Barbier, Fr´ed´eric Boudon, Maria Dolores Perez-Garcia, Thomas P´eron, Sylvie Citerne, Elizabeth Dun, Christine Beveridge, Christophe Godin, and Soulaiman Sakr. Sugar availability suppresses the auxin-induced strigolactone path- way to promote bud outgrowth. New Phytologist, 225(2):866–879, 2020. [2] Beth Holloway, Stanley Luck, Mary Beatty, J-Antoni Rafalski, and Bailin Li. Genome- wide expression quantitative trait loci (eqtl) analysis in maize. BMC genomics, 12:1–14, 2011. 16 1 A T G C A P Single-Layer BINN (transcriptomics example) A T G C A P Stacked-Layer BINN … A T G C A P Staggered BINN (metabolomics example) … A T G C A P Parallel-Layer BINN (a) (b) (c) (d) Figure 9: Flexible BINN architectures for diverse multi-omics scenarios. (a) Single-layer BINN: A single intermediate omics layer (e.g. gene expression) feeds directly into the fi- nal integrator; this configuration was used for the TWAS problem. (b) Staggered-layer BINN: Within a single omics modality, pathway subnetworks exchange information via sparse cross-connections. For example, in the shoot-branching network, both auxin (A) and sucrose (S) modules feed into the cytokinin (CK) and strigolactone (SL) subnetworks before final integration.(c) Stacked-layer BINN: Multiple omics layers are embedded in series - outputs from the first layer’s subnetworks become inputs to the second layer’s modules - mirroring cascades such as transcriptome →metabolome. (d) Parallel-layer BINN: Independent sub- networks process each omics type (e.g. transcriptomics, proteomics, methylation) in parallel, with their latent outputs concatenated only at the final integrator (“late fusion”). [3] Henry Hsu and Peter A Lachenbruch. Paired t test. Wiley StatsRef: statistics reference online, 2014. [4] Oliver Kramer and Oliver Kramer. Scikit-learn. Machine learning for evolution strate- gies, pages 45–53, 2016. [5] Leann Myers and Maria J Sirois. Spearman correlation coefficients, differences between. Encyclopedia of statistical sciences, 12, 2004. [6] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmai- son, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Py- torch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019. [7] Paulino P´erez and Gustavo de Los Campos. Genome-wide regression and prediction with the bglr statistical package. Genetics, 198(2):483–495, 2014. [8] Owen M Powell, Francois Barbier, Kai P Voss-Fels, Christine Beveridge, and Mark Cooper. Investigations into the emergent properties of gene-to-phenotype networks across cycles of selection: a case study of shoot branching in plants. in silico Plants, 4(1):diac006, 2022. 17 [9] Shaun Purcell, Benjamin Neale, Kathe Todd-Brown, Lori Thomas, Manuel AR Ferreira, David Bender, Julian Maller, Pamela Sklar, Paul IW De Bakker, Mark J Daly, et al. Plink: a tool set for whole-genome association and population-based linkage analyses. The American journal of human genetics, 81(3):559–575, 2007. [10] J Vladimir Torres-Rodr´ıguez, Delin Li, Jonathan Turkus, Linsey Newton, Jensina Davis, Lina Lopez-Corona, Waqar Ali, Guangchao Sun, Ravi V Mural, Marcin W Grzybowski, et al. Population-level gene expression can repeatedly link genes to functions in maize. The Plant Journal, 119(2):844–860, 2024. [11] Lilin Yin, Haohao Zhang, Zhenshuang Tang, Jingya Xu, Dong Yin, Zhiwu Zhang, Xi- aohui Yuan, Mengjin Zhu, Shuhong Zhao, Xinyun Li, et al. rmvp: a memory-efficient, visualization-enhanced, and parallel-accelerated tool for genome-wide association study. Genomics, proteomics & bioinformatics, 19(4):619–628, 2021. [12] Zhiwu Zhang, Elhan Ersoz, Chao-Qiang Lai, Rory J Todhunter, Hemant K Tiwari, Michael A Gore, Peter J Bradbury, Jianming Yu, Donna K Arnett, Jose M Ordovas, et al. Mixed linear model approach adapted for genome-wide association studies. Nature genetics, 42(4):355–360, 2010. 18
Biology-informed neural networks learn nonlinear representations from omics data to improve genomic prediction and interpretability Katiana Kontolati∗ Bayer Crop Science Rini Jasmine Gladstone Bayer Crop Science Ian W. Davis Bayer Crop Science Ethan Pickering∗ Bayer Crop Science Abstract We extend biologically-informed neural networks (BINNs) for genomic prediction (GP) and selection (GS) in crops by integrating thousands of single-nucleotide polymorphisms (SNPs) with multi-omics measurements and prior biological knowledge. Traditional genotype-to-phenotype (G2P) models depend heavily on direct mappings that achieve only modest accuracy, forcing breeders to conduct large, costly field trials to maintain or marginally improve genetic gain. Models that incorporate intermediate molecular phenotypes such as gene expression can achieve higher predictive fit, but they remain impractical for GS since such data are unavailable at deployment or design time. BINNs overcome this limitation by encoding pathwaylevel inductive biases and leveraging multi-omics data only during training, while using genotype data alone during inference. By embedding omics-derived priors directly into the network architecture, BINN outperforms conventional models in low-data (n N G B P P B G GWAS TWAS BINN If BINN learns both G2B and B2P operations, B latent variables will correlate 1:1 with B observed variables. P B G Nonlinear Nonlinear Linear Nonlinear I: LL IV: NN II: NL III: LN GWAS TWAS BINN GWAS TWAS BINN GWAS TWAS BINN (a) (b) N N N L L L Figure 1: Biology-informed neural network framework embed domain knowledge for enhanced genomic prediction and learning nonlinear biological relationships. a) Conventional G2P models use genotype alone, leaving rich functional knowledge underutilized. BINNs embed curated biology (from RNA-seq (expression), methylomics (DNA methylation), metabolomics, KEGG pathway annotations, and proteomics) directly into the architecture in the form of pathway structure, regulatory priors, and sparsity constraints, to boost predictive accuracy while preserving practical utility. b) Four representative cases where GWAS, TWAS, and BINN are valid for analyzing genomic, transcriptomic, and phenomic datasets. Only BINN permits association under general nonlinearity (with the assumption the trained BINN model is accurate). to enable genomic selection (GS) [27, 13]. GS improves crop improvement efficiency by permitting early selection at the seed or edit stage and reducing dependence on slow, costly, multi-location field trials [20, 42]. With a well-calibrated model trained on prior genotype-phenotype data, a single lowcost genotyping assay can substantially reduce phenotyping requirements in both space and time [13]. A wide range of models has been applied to GS-including linear mixed models; machine-learning methods such as random forests and kernel approaches; and deep neural networks-with performance varying by trait architecture, data volume, and environment [10, 50]. Current approaches fall short of capturing the true biological processes from genotype to phenotype, and the field has a long way to go toward accurate, mechanistically grounded prediction. Despite the surge of countless AI models and architectures, linear mixed modeling approaches remain state-of-the-art in G2P studies, with no other class of model consistently outperforming them across crops, populations, traits, years, and locations [1, 4, 47]. Although there are several adaptations to linear models, most are quite similar. For instance, ridge regression [26] and its adaptation using best linear unbiased predictor, rrBLUP [15], trained using genotypic (marker) data are identical, except rrBLUP chooses the regularization parameter, α, as a ratio of variances instead of by cross-validation [33]. Likewise a genomic BLUP or GBLUP [12] is mathematically equivalent to rrBLUP, but with compute efficiency gains when the number of lines is less than the number of markers [21], while the "Bayesian alphabet" models (BayesA, BayesB, BayesC, etc) [18] differ in 2 allowing different markers to have different priors [28]. And although several promising nonlinear artificial neural network (ANN) architectures have been proposed in the literature [17, 24, 45], deep learning methods have yet to show consistent improvements over traditional models and have not been adopted at scale in breeding pipelines [29]. This is partly due to over-parameterization and the lack of tailored deep learning architectures. Further gains are unlikely without injecting biological structure such as pathways, interactions, and regulatory programs into the model, a role for which flexible AI architectures are well suited. A natural route to inject such biological structure is to integrate intermediate multi-omics signals-transcripts, proteins, metabolites-as scaffolds between genotype and phenotype, enriching G2P models with mechanistic constraints [2, 5, 11, 20, 23, 27, 36]. Approaches such as multi-staged analysis [37] and transcriptome-wide association studies (TWAS) [16, 43] are proposed to integrate omics data, such as gene expression levels, between genotype and phenotype for association studies. Traditional linear mixed models, while capable of including additional covariates, are fundamentally limited in how they integrate multi-omics information. In practice, they treat each data modality as a flat feature set and cannot impose the hierarchical, pathway-level constraints that capture the flow from genotype through intermediate traits to phenotype. This lack of architectural flexibility prevents them from leveraging richer, layered representations, such as gene-to-metabolite cascades or expressiondriven subnetworks, that can substantially boost predictive power. In contrast, ANNs provide the flexibility to integrate multiple high-dimensional data streams such as genomics, transcriptomics, proteomics, lipidomics, etc., for tasks ranging from regression and classification to unsupervised representation learning, a capability that traditional linear models lack [25, 32, 44, 46, 49]. In practice, downstream omics are not available for novel genotypes at design or deployment, so they can only inform training-via priors, pretraining, or architectural constraints-rather than serve as inference-time features. Biologically-informed neural networks (BINNs) have begun to provide model flexibility in embedding omics data and domain-specific knowledge such as gene, protein, or pathway relationships, directly into their architecture and training objectives to improve predictive accuracy, interpretability, and extrapolative potential. Analogous to physics-informed neural networks (PINNs) that incorporate governing equations into their loss functions [35], BINNs impose biologically plausible sparsity patterns on connections, reducing parameter count and data requirements while enabling efficient integration of heterogeneous multi-omics inputs. This inductive bias not only curbs overfitting but also aligns latent representations with known biology. BINNs have been applied across population genomics and biomedicine, including cancer subtype prediction, drug response modeling, and survival analysis in frameworks like GenNet [40], proteomics biomarker discovery [19], hierarchical pathway networks for prostate cancer [14], and multi-omics inferral of smoking status, age, and LDL from the BIOS cohort [41], demonstrating consistent gains in performance and stability. Despite their promise, existing BINNs enforce a rigid, fully interpretable mapping of neurons to biological entities, limiting architectural flexibility and nonlinear modeling, and in most cases still depend on omics measurements at deployment, undermining their practicality in plants and crop design. Furthermore, although existing BINN approaches employ ANN-based models, their method for ranking biological entities such as genes typically relies on assessing marginal associations through learned single-edge weights, an approach conceptually similar to traditional genome-wide association studies (GWAS) or TWAS. Consequently, these models tend to highlight already known significant genes while overlooking those involved in nonlinear or epistatic interactions. Here we design a BINN architecture to align with genomic selection and design of crops, explicitly integrating omics data as intermediate variables, removing the need for omics data at test/design while still allowing for biological interpretability and targeted pathway prioritization (Figure 1). Our key contributions are summarized below: 1. A BINN architecture balancing tunable sparsity, layered nonlinearity and practicality. 2. Superior sparse-data performance with up to 56% test-set rank correlation increase and 75% reduction in predictive error over G2P baselines, across and within populations. 3. Demonstrated ability to transfer across related populations, maintaining strong predictive accuracy when genetic background is partially shared. 4. A novel BINN-derived sensitivity analysis framework that associates biologically meaningful intermediate traits to phenotype that are beyond the reach of conventional GWAS and TWAS approaches. 3 5. A scalable, practical framework for plant and crop genomic selection and design that marries multi-omics-driven training with genotype-only deployment. 2 Results We develop and evaluate BINN models informed by two distinct types of intermediate omics-transcriptomics and metabolomics-to predict phenotypes directly from genotype. Our first case study leverages transcriptomic profiles measured in a large-scale maize field experiment[39], where lines from multiple heterotic groups were grown under real agronomic conditions. This setting provides a realistic testbed for BINNs: gene expression captures environmentally modulated regulatory activity, offering a richer signal than raw marker data. In this context, we observe promising gains over conventional G2P models, but also important caveats due to the noise, heterogeneity, and incomplete pathway knowledge that naturally accompany field-derived omics measurements. In contrast, our second case study is a synthetic benchmark based on metabolomic traits simulated through an ordinary differential equation (ODE) model of shoot branching[9]. Unlike the maize field data, this framework provides "perfect" domain knowledge of the true causal pathways, enabling us to rigorously stress-test BINNs and evaluate their ability to recover mechanistic structure when the ground truth is fully known. Together, these complementary case studies - one grounded in realistic breeding data, the other in controlled synthetic biology - allow us to assess both the practical utility and the methodological limits of BINNs. Table 1 summarizes implementation details; full configurations and preprocessing steps can be found in the Supplementary Section. Table 1: Implementation components for the maize TWAS dataset and synthetic shoot-branching case studies. Attribute Maize TWAS Dataset Synthetic Shoot-Branching Input SNPs/Genes ∼20,000 SNPs per trait 1,600 genes Intermediate Omics Data Transcriptomics Metabolomics # Intermediate Traits ∼1,000 genes per trait 4 (auxin, sucrose, cytokinin, strigolactone) # Output Phenotypes 4 (anthesis NE, MI; silking NE, MI) 1 (time to bud outgrowth) Trait Selection Method ElasticNet regression From known ODE model SNP Mask Construction eQTL mapping of selected genes ODE-derived gene-metabolite mappings Loss Function MSE Soft-constrained MSE Reference Torres-Rodriguez et al. [39] Bertheloot et al. [9] & Powell et al. [34] 2.1 Transcriptomics-derived BINN Gene-expression-informed BINNs improve spearman rank correlations over G2P models across and within populations in maize field trials. BINNs embed expression-derived sparsity into a genotype-to-biological-knowledge-to-phenotype "G2B2P", where biological-knowledge is gene expression data, network which delivers superior predictive performance compared to genotype-only GBLUP models, under sparse-data conditions (Figure 2b). We use the maize TWAS dataset from Torres-Rodríguez et al. [39], which provides matched genotypes, RNA-seq expression profiles, and flowering-time phenotypes: days-to-anthesis and days-to-silking in Nebraska and Michigan (see Figure 2b). In five independent 20%/80% train/test splits with five-fold cross-validation on each training set, BINN achieves higher Spearman correlations than the GBLUP baseline, both when pooling all 693 lines and within each of the seven heterotic subpopulations. BINNs outperformed GBLUP in the majority of pairwise comparisons, and the advantage was statistically significant on a paired t-test (p = 2.23 × 10-6). Here, we report representative results for silking NE as shown in Figure 2c; complete results for all phenotypes are provided in the Supplementary Figure 5. Notably, the most pronounced and consistent improvements occurred within individual subpopulations, the most critical and challenging scenario for GS, demonstrating BINN's ability to capture subtle, group-specific genetic variation. BINNs leverage domain knowledge to pinpoint key genes and derive SNP-gene connections that both shrink network size while preserving nonlinear modeling power. Models that embed domain knowledge have frequently demonstrated similar or superior predictive power compared to purely genotype-based approaches [5]. In the maize TWAS application, we observe that expression-based domain-to-phenotype (B2P) models outperform conventional G2P baselines across most evaluation settings and BINNs can translate this predictive strength into the G2P setting. Transcriptomic profiles 4 Genotype data (G) Expression data (E) G2P (GBLUP) G2B2P (BINN) B2P (Ridge) BINN feature selection P C T G A C T T C A T G A C P Accurate? Genomic prediction? FCN FCN FCN FCN P (a) (c) (b) (d) (e) (f) (g) Figure 2: BINNs improve prediction accuracy and interpretability through utiliziation of gene expression in genotype to phenotype modeling. (a) Schematic of the transcriptomics-derived BINN architecture. The SNP marker and gene expression data are utilized for feature selection, which serves to sparsify the connections between the input and intermediate layers of the BINN architecture. The marker data for each gene is passed through pathway subnetworks at the intermediate layer and the outputs from this layer is passed through a non-linear integrator network to predict the phenotype values. The G2P and B2P models are both linear models. (b) Predicted vs. observed phenotype days to anthesis in MI across four subpopulations (SS, NSS, IDT, and Others). Points show G2P (GBLUP on genotype; pink triangles) and G2B2P (BINN with expression-informed sparsity; blue circles). An ordinary least-squares (OLS) fit is overlaid and the Spearman correlation (ρ) is reported in the legend. (c) Test Spearman correlation distributions for Silking NE, comparing three predictive models (G2P, G2B2P and B2P: Ridge regression on expression, green) across five independent 20/80% train/test splits, each with five-fold cross-validation to capture both split-level and CV-level variability. Each subplot shows results for all lines pooled ("all") and for each of the seven distinct subpopulations (SS, NSS, IDT, Popcorn, Sweet corn, Tropical and Others). The pink dashed line marks the median of the G2P baseline; the dashed line for G2B2P is colored green when its median exceeds that baseline or red when it falls below. Legends report the median percent change of G2B2P relative to G2P and titles show the number of test samples. (d) Test Spearman correlation distributions for leave-one-population-out experiments for Silking NE, comparing the same three predictive models. Each subplot shows results for the predictions on six distinct subpopulations using the models that are trained by leaving out the corresponding subpopulation from the training data. (e) Predicted latent variables vs. observed gene expression for four representative high-correlation genes per phenotype. (f) Aggregated absolute phenotypic change for 30 representative genes - 15 with high absolute Pearson correlation and 15 close to zero. (g) Aggregated absolute phenotypic change per perturbed gene, with a threshold capturing the BINN-derived 100 most significant genes which includes zap1, zmm15, zcn14 and zcn8. 5 Table 2: Test set Spearman rank correlation of BINNs across all populations and cross-validation splits for various Elastic Net L1 ratios used to select genes for the G2B2P mask for all four phenotypes. The best setting for each trait is bolded. L1 ratio # genes Anthesis NE Anthesis MI Silking NE Silking MI 0.05 1541-2251 0.650 ± 0.034 0.652 ± 0.046 0.630 ± 0.041 0.649 ± 0.036 0.10 848-1271 0.655 ± 0.037 0.663 ± 0.034 0.632 ± 0.040 0.660 ± 0.044 0.20 467-696 0.605 ± 0.151 0.656 ± 0.038 0.623 ± 0.043 0.662 ± 0.038 0.50 132-301 0.595 ± 0.143 0.633 ± 0.053 0.593 ± 0.127 0.640 ± 0.040 1.00 50-131 0.597 ± 0.047 0.596 ± 0.047 0.547 ± 0.114 0.593 ± 0.059 inform the degree of architectural sparsity without ever feeding raw expression values into the prediction network at inference. Specifically, we first fit an Elastic Net model to each flowering-time trait-tuning its L1/L2 penalty via cross-validation-and selected the genes with nonzero coefficients, noting that the feature selection outcome was particularly sensitive to the specific cross-validation split. We then carried out eQTL mapping on these candidates to nominate their top SNP markers, which defined the sparse SNP→gene mask that shapes our G2B2P BINN architecture (Figure 2a). As shown in Table 2, a sweep of the Elastic Net's L1 ratio revealed that retaining approximately 1,000 genes per trait and split offers the best trade-off between model compactness and accuracy, recovering the major regulators identified in the original study. An important limitation to this approach is that if B2P underperforms G2P, BINNs are unlikely to provide an advantage, since intermediate feature selection would not add predictive power. BINNs exploit shared expression-trait mechanisms beyond marker structure to improve generalization but rigid priors can limit flexibility in genetically distant populations. It is well established that marker data are strongly tied to population structure whereas expression data are often tissue and environment-specific and less tightly tied to ancestry [38]. Removing systematically distinct populations from the BINN training set revealed a clear pattern in model transferability. Expression data overall offers improved cross-population generalization compared to marker-based models as expression reflects functional output of many regulatory layers that can "normalize" some of the divergence in raw markers. However, BINN's biologically constrained architecture can only capitalize on this advantage when the causal paths through biology are shared between train and test lines. Evidently, when one of the mainstream heterotic groups such as SS, NSS, or IDT was held out, BINN maintained robust generalization performance (see Figure 2d). This is particularly important for large-scale breeding programs, which rely heavily on these temperate pools for developing elite germplasm and driving genetic gain. In contrast, when genetically more distant populations were excluded from training, such as sweet corn or tropical, which represent cases where regulatory programs diverge substantially from temperate pools [48], BINN appeared to overfit to pathway patterns dominated by the larger heterotic groups thereby limiting its ability to adapt. Thus, while BINNs excel when shared biological mechanisms exist, overly rigid priors limit its flexibility in zero-shot learning scenarios. However, this issue diminishes as domain knowledge improves: the more precise and mechanistically grounded it is, the larger the performance gains across tasks (see section 2.2). Sensitivity analysis reveals nonlinear genetic contributors that TWAS/GWAS may fail to capture. A key advantage of BINNs lies in their interpretability, i.e., the ability of trained models to elucidate how specific biological entities influence intermediate traits and ultimately shape the phenotype. Traditional BINNs with single-edge weights offer direct interpretability, as each parameter corresponds to a distinct marker-gene or gene-trait connection. In our implementation, we enhance model expressivity by replacing these single-edge weights with fully connected layers, sacrificing one-to-one parameter interpretability but enabling richer representations of nonlinear dependencies. To assess whether BINNs can identify biologically meaningful genes, we developed and conducted a sensitivity analysis (presented in Algorithm 1) across all pathway subnetworks. Specifically, we perturbed each pathway's latent variable in both directions and quantified the resulting change in the predicted phenotype, ranking genes by their aggregated effect. Figure 2e summarizes this analysis, showing that BINN recovers several TWAS-significant genes reported in Torres-Rodríguez et al. [39], such as the previously characterized zcn8 as well as zap1, zmm15 and zcn14, but also highlights additional candidates that may participate in epistatic interactions shaping phenotypic response. These genes may be overlooked by traditional approaches like TWAS, which rely on 6 marginal gene-trait associations, whereas BINN's end-to-end training captures predictive, nonlinear, and combinatorial effects beyond the reach of standard linear models. Interestingly, we found that the genes most sensitive to phenotype perturbations also exhibit strong latent-expression correlations (Figure 2e,f). This suggests that BINNs preferentially learn biologically grounded representations, linear or nonlinear, aligned with known mechanisms. 2.2 Metabolomics-derived BINN We now demonstrate that BINNs offer greater potential in both accuracy and interpretability for genomic design when established domain knowledge is incorporated. This contrasts with the previous example, which relied solely on intermediate experimental data. To illustrate, we use a synthetic metabolomics problem in which hormone-sugar interactions are formalized through an ODE model, creating a clean G2P nonlinear test bed for evaluating BINNs. A motivating example comes from axillary bud outgrowth-the switch-like process underlying shoot branching-arising from interactions among auxin (A), cytokinin (CK), strigolactone (SL), and sucrose (S) in a minimal network [9], later extended with genotype-to-metabolite variation at multiple causal loci [34]. The domain knowledge we embed is simple but definitive: gene-metabolite, metabolite-metabolite, and metabolite-phenotype interactions. Importantly, the BINN is not provided with the full ODE details (the magnitudes of effects, equation structures, or nonlinearities) but only the associations between layers. To avoid an artificially "easy" setting and to test robustness, we perturb the simulated data by adding 5% noise to intermediate traits and 10% noise to phenotypes. This controlled setting enables us to rigorously test the BINN methodology: evaluating performance under both plentiful and sparse data conditions, examining its ability to recover known causal dynamics, and exploring extensions such as custom loss functions applied to partially observed intermediates. BINNs improve predictions under sparse-data regimes. In the genotype-to-metabolites-tophenotype setup, BINN embeds ODE-derived metabolite pathways and constrains gene inputs through known gene-metabolite links. (Figure 3a). Evaluated across nine geometrically-spaced training sizes from 500 to 20,000 lines, with five random splits, the standard MSE-trained BINN consistently outperformed Ridge regression and an unconstrained fully-connected network (FCN) in the sparse-data regime, i.e., when the number of training lines was smaller than the input dimensionality (Figure 3b,c). As expected, when sample size increases, BINN performance approaches that of the FCN, indicating that the pathway-guided sparsity yields a better bias-variance trade-off when data are limited, while also preserving nonlinear expressivity as data grow. This demonstrates that BINNs can faithfully capture the intrinsic nonlinearity encoded in the underlying ODE dynamics across the entire data range, an aspect fundamentally beyond the reach of linear models like RR and GBLUP in the data-plentiful limit (n ≫p) and common neural networks in the data-sparse limit (n ≪p). Biology-informed loss functions, applied to intermediate variables, improves prediction accuracy and recovery of pathway dynamics with sparse intermediate labels. In many systems, intermediate traits can be experimentally measured, and this information can be used to better anchor latent variables to pathway dynamics. To this end, we introduce a soft-constraint loss function (i.e. Pearson loss) that explicitly encourages latent outputs to correlate with ground-truth intermediate trait values. In realistic scenarios, budget or experimental constraints often limit the amount of intermediate data that can be collected. To reflect this, we evaluate the model under three conditions: 100%, 50%, and 10% of lines with known labels. This design preserves genotype-only inference while allowing for complete or partial pathway supervision during training. The soft-constraint approach (BINN soft), achieves the lowest test MSE in the small-data regime, and performs comparably to standard BINNs with larger datasets (Figure 3b). Notably, the performance of is largely insensitive to the proportion of intermediate data available, indicating that even sparse biological supervision enables the network to recover the underlying hormone-sugar dynamics. In other words, even a small amount of alignment is sufficient to boost downstream phenotypic predictions. BINN architectures provide a latent variable mechanism for interpretability, uncovering the critical importance of sucrose's impact on outgrowth. When we analyze the biologically-informed latent variables via scatter-plot diagnostics on 20,000 held-out test lines, distinct pathway-level patterns emerge. We ran this experiment under both the standard MSE objective (BINN MSE) and a custom soft-constraint MSE (BINN soft). As expected, the soft-constraint setup (see Figure 3d (second row)) yields stronger correlations between latent variables and ground-truth metabolites, since the model was partially supervised to do so. However, an interesting observation comes from Figure 7 S FCN FCN FCN FCN CK A P FCN (a) genes Metabolomics (c) (b) SL (d) g1 g2 g3 g4 g5 g6 g7 g8 n p (e) Figure 3: BINNs significantly outperform baselines in the sparse data limit n p) data regimes. (c) Predicted vs observed phenotype across four training sizes (n = 500, 1,994, 7,953, 20,000) for three models: RR (purple circles), FCN (black triangles), and BINN (red squares). Each panel shows the identity line (y = x), and in-panel legends report Pearson correlation (r) for each model. (d) Scatter plots of predicted versus true latent values for each intermediate trait (A, S, CK, SL) under the BINN MSE (blue) and the BINN soft model (red). The fitted regression line and Pearson correlation coefficient, r, are overlaid. (e) Aggregated absolute phenotypic change per perturbed intermediate trait. 3d (first row): with the standard BINN MSE, most hormone pathways show little correlation with the ground truth, yet sucrose displays a remarkably strong alignment. As shown in Figure 3e, a post-hoc sensitivity analysis on the four intermediate traits demonstrated that the phenotype is more sensitivity to perturbations in sucrose, highlighting its importance. Indeed Bertheloot et al. [9], designed their experiments to show the critical importance of sucrose in this biological example. These results illustrate how BINNs can learn nonlinear relationships and surface biologically meaningful variables when appropriate inductive bias is added, achieving interpretability through both explicit alignment losses and emergent signals in the unconstrained setting. 3 Discussion Biologically informed neural networks provide the flexibility to incorporate ever-growing omics datasets for genomic selection, embedding intricate biological mechanisms and delivering improved accuracy and generalization. By using intermediate molecular information to shape network sparsity and connectivity, BINNs convert domain knowledge into inductive bias that both stabilizes learning 8 in sparse-data settings and yields pathway-level latent variables for interpretation. In this work we benchmark BINNs on two concrete tasks: an ODE-grounded, synthetic shoot-branching problem and a real maize TWAS dataset, to test whether they deliver accurate yet deployable genomic predictions. Across both applications, BINN outperforms baseline Ridge/FCN/GBLUP models in the sparse-data regime, remains stable under noise, and requires only genotypes at inference. Importantly, the trained model can be leveraged through post hoc sensitivity analysis to identify biologically relevant entities driving the phenotype, offering a nonlinear alternative to traditional GWAS and TWAS approaches. Our BINN extension is built first and foremost for genomic-selection practicality. The primary focus of existing BINN implementations in the literature has not considered GS or plant breeding utility and while although multiple studies have demonstrated the strong predictive power of omics data, it is not always clear how to translate these insights into models that rely exclusively on genomic inputs for GS. Our goal is to reframe the use of BINNs for GS that would make it relevant for many practical use cases in plant breeding. Full interpretability of existing models facilitates causal insight by directly linking predicted phenotypes to specific biological entities, but this rigid structure can constrain expressivity and thus limit predictive accuracy. A related challenge in plant breeding is scaling such models: breeders need decision-support tools that deliver accurate predictions without incurring the time and cost of routine omics measurements. In this work, we present a new biology-aware design and demonstrate its ability to improve predictive performance and practicality for GS, offering a hopeful path toward more accurate and scalable crop improvement. This is achieved by the following key features: BINNs do not need omics data at test time making them practical for the design phase where they are most needed and impactful. As discussed, omics data carry high predictive power and, when leveraged appropriately, can enhance models that rely solely on genotype data. However, a critical constraint in this setting is that any omics data downstream of the genotype will not be available at deployment time, therefore such information must be leveraged only during training to shape model structure and guide learning, without being directly used at inference. Our BINN implementation folds omics data into inductive biases during training but never at inference allowing us to maintain full compatibility with standard GS workflows. Consistent with our out-of-distribution experiments, when the target population shares sufficient genetic background with the training cohort, these learned inductive biases transfer effectively, enabling reliable inference on new genotypes whereas gains naturally decrease for more divergent populations. Tunable sparsity of BINNs reduces overfitting risks, sensitivity to noise, and improved predictive performance. We derive each omics-layer mask by applying feature-selection and association analyses to nominate candidate causal features across the dataset and match model architecture with trait complexity. Those candidates determine the sparse connectivity of the BINN model, allowing it to flexibly span oligogenic traits, where a few high-impact modules drive most of the signal, and highly polygenic traits, where predictive power emerges from many small-effect contributors, simply by tuning the model sparsity. The tuned BINN models developed for two diverse applications delivered superior performance under sparse data, achieving up to a 56% gain in test-set rank correlation and a 75% reduction in predictive error relative to G2P baselines, within and across populations. Layered nonlinearity increases expressivity and allows BINNs to capture epistasis and nonlinear interactions. BINNs demonstrated superior performance when learning the nonlinear metabolics problem, especially under sparse data. A key distinction of our approach lies in the balance between expressivity and interpretability. Unlike existing BINNs that assign each hidden neuron to a specific biological entity such as a gene, pathway, or protein, enabling direct read-off of effect weights, we employ flexible fully connected modules at each omics layer, enabling the network to learn rich, non-linear interactions among features while still honoring sparsity constraints derived from annotation even if individual hidden units no longer carry explicit biological labels. Each module's small FCN can capture intra-pathway epistasis among genes (or other genetic regions, e.g. SNPs) and other local non-linear effects that single-weight connections miss. This is valuable when potential locus-locus interactions contribute to the phenotype. The final integrator network fuses these module outputs, capturing higher-order, between-gene interactions. BINN latent variables offer opportunities to find hidden layer causal relationships given their stand in for biological domain knowledge. Even though we emphasize expressivity over built-in neuron-level interpretability, our BINN framework still offers opportunities for biological insight by uncovering causal relationships and enabling targeted sensitivity analyses that can inform bio9 logical understanding and decision-making. BINNs can prioritize intermediate traits by perturbing their corresponding latent variables and observing the resulting effect on the phenotype. In our metabolomics example, the phenotype showed the strongest sensitivity to perturbations of the sucrose latent variable, suggesting that the model captures pathway-relevant biological signals even under imperfect calibration. In the transcriptomics case, BINN not only recovers genes consistent with prior TWAS findings but also uncovers a range of additional candidates potentially influencing the phenotype through nonlinear, epistatic interactions. While these findings warrant further experimental validation, BINNs provide a promising framework for prioritizing gene candidates in downstream applications such as functional genomics and gene editing. BINNs are not without several limitations that must be considered when designing them. Despite the promising performance of BINNs in both applications in this study there are several important limitations to consider. Introducing sparsity constraints as inductive biases can boost accuracy in data-scarce settings, but the degree of sparsity must be carefully calibrated. Too little or too much undermines model behavior and thus, a systematic analysis to determine the optimal sparsity level is essential. Moreover, omics datasets are often noisy and heterogeneous, making it nontrivial to translate one or multiple -omics layers into an effective BINN architecture without risking mis-specification. Like all ANN-based approaches, BINNs demand hyperparameter tuning and computational resources for proper calibration. One of the strengths of the BINN framework lies in its flexibility to incorporate different types of loss functions depending on the modeling objective, however, we did not find this improved results for the transcriptomics dataset. For example, one can augment the baseline predictive loss with biologically motivated constraints, such as pathway coherence penalties or sparsity-inducing terms, to encourage solutions that are both accurate and mechanistically plausible. We found that such auxiliary constraints can sometimes boost predictive accuracy, although the magnitude of improvement varies by application and requires careful balancing to avoid over-penalization. In our experiments, we also test a more rigid formulation by embedding a "hard" constrained MSE penalty designed to strongly enforce reconstruction consistency across latent pathways. However, this constraint did not yield improvements, suggesting that overly strict losses can reduce flexibility and hinder optimization. This highlights an important trade-off: while additional biological constraints can enhance performance and interpretability, they must be tuned with care. To build on these findings and design BINNs in real-world breeding programs, more studies most be taken to understand how BINNs perform across diverse applications and populations, with different domain knowledge, on varied phenotypes, and more. The "sparse data" regime varies by dataset size and population diversity, and the benefits of BINNs will depend on both trait complexity and crop species. While our work addressed relatively simple phenotypes (flowering time and time to bud outgrowth), systematic benchmarking across a broader spectrum of traits (e.g., from highly heritable, single-gene characteristics to complex, polygenic attributes) will be essential to fully realize the potential of such models in crop improvement. Table 3: Biological layers, their functions, and example data types. Biological Level Function Example Data Type Genotype Genetic information underlying traits SNP arrays, whole-genome sequencing Epigenetic Modifications Influence gene regulation without changing DNA sequence DNA methylation Gene Expression Controls gene activity affecting traits RNA-seq Protein Interaction Networks Provide structural and signaling connections influencing outcomes Proteomics Metabolites Biochemical traits linking genes to observable traits Metabolomics Regulatory Pathways Control and coordinate gene expression Pathway databases (e.g., KEGG) Phenotype Observable traits resulting from genetic and environmental interactions Yield, height, flowering time, etc. Importantly, BINN gains depend on the quality of domain knowledge embedded in the model. Purely data-driven setups, such as our transcriptomics-derived case, are constrained by the limits of 10 experimental measurement: gene expression, while richer than genotype alone, is highly contextdependent (tissue, environment, sampling window) and thus noisy and heterogeneous, making performance sensitive to study design. By contrast, when mechanistic knowledge is sharper and more stable, illustrated by our metabolomics example, BINNs deliver markedly better accuracy in the sparsedata regime. This underscores a broader point: advancing fundamental biological understanding is not optional but enabling for practical GS models. Progress can come both from targeted experimental studies that elucidate pathways and from computational advances, e.g., genomic language models (gLMs) [7, 6, 31, 30, 8] and functional genomics predictors [3, 22], that infer function for genes lacking annotation, ultimately converting provisional signals into strong inductive biases we can embed in BINNs. Future efforts should explore integrating environmental signals, e.g. weather time series, and further tailored strategies, such as integrating additional omics layers to more closely align BINN's causal pathways with mechanistic biology while maintaining a sound bias-variance trade-off. A practical challenge is that available omics are often heterogeneous, collected from different tissues, developmental stages, platforms, and cohort designs (single vs. multiple genotypes, narrow vs. diverse populations), making it nontrivial to decide how, or whether, to use them as priors. As summarized in Table 3, priors can be drawn from transcriptomics, proteomics, metabolite pathways, and curated interaction networks, enabling models that better reflect how genetic variation propagates through biological systems to affect traits. This can be facilitated by experimenting with alternative topologies (e.g., stacked-layer, or parallel-layer BINNs), and incorporating advanced neural modules to model pathway subnetworks more effectively. The goal should not be just higher accuracy, but robust, genotype-only predictors that surface pathway importance and translate into selection decisions. With these targeted extensions, BINNs are well-positioned to improve GS at scale. 4 Methods Here we outline the BINN design used throughout before introducing the formalism. Our networks preserve nonlinear flexibility while enforcing biology-derived sparsity: genotype inputs are routed through pathway modules defined by masks built from prior knowledge (e.g., feature selection, association/eQTL hits, curated pathways, or ODE-based links). Each module is a small fully connected subnetwork that can model local nonadditivity and epistasis, and the module latents are fused by a final integrator to predict the phenotype. Intermediate omics measurements (e.g., expression, metabolites) inform the masks and, when available, can weakly supervise the latents via a soft constraint during training, but are never required at inference, preserving genotype-only deployment. The same template flexibly instantiates single-layer, stacked, staggered, or parallel multi-omics variants (see Supplementary Material) without changing the mathematical core. 4.1 Model Development with Biological Inductive Biases We consider n samples with p SNP features collected in the matrix X ∈Rn×p and a scalar phenotype vector y ∈Rn. Our BINN embeds L sequential omics layers between X and y, each layer l producing a latent representation U (l) ∈Rn×kl from its input U (l-1). We set U (0) = X and denote the number of subnetworks, corresponding to biological entities such as genes, metabolites, or proteins, at layer l by kl. 1. Pathway subnetworks. At omics layer l, prior biological knowledge (e.g. eQTL links, pathway membership) is encoded by a binary mask M (l) ∈{0, 1}dl-1×kl, where dl-1 = dim(U (l-1)). Specifically, M (l) ij = 1, if feature i of U (l-1) maps to entity j at layer l, 0, otherwise. (1) For each of the kl entities we define a subnetwork f (l) j that processes only the selected inputs U (l-1)M (l) :,j . These subnetworks may have multiple hidden layers with sigmoid activations σ(u) = 1/(1 + e-u) to capture Hill-type response typical in biological systems. We then concatenate their outputs into T (l) = f (l) 1 (U (l-1)M (l) :,1 ), . . . , f (l) kl (U (l-1)M (l) :,kl) ∈Rn×kl, (2) and set U (l) = T (l). 11 2. Residual network. To capture genetic effects not explained by any pathway subnetworks, we apply an unconstrained residual network gr (with parameters θr) to the subset of SNPs that are not connected in any mask M (l). Denoting this unannotated SNP matrix by Xres ⊂X, the residual network directly predicts a scalar residual phenotype: r = gr Xres; θr ∈Rn, (3) ensuring that r captures variation attributable to SNPs outside known biological pathways. 3. Final integrator network. After the last omics layer produces U (L) ∈Rn×kL, we concatenate it with the residual output R ∈Rn×hr: Z = [ U (L), R ] ∈Rn×(kL+hr). (4) A final feed-forward network h with parameters θf then maps Z to the predicted phenotype ˆy = h(Z; θf) ∈Rn. (5) which allows for the recovery of potential cross-pathway interactions. All network parameters {W (l), b(l)}L l=1, θr, and θf are learned jointly by minimizing a suitable loss function (see Section 4.2). The masks M (l) enforce biological sparsity, reduce overfitting, and ensure each subnetwork aligns with known genotype-intermediate trait relationships. Our BINN framework is highly modular: depending on the number, type, and interdependencies of available omics layers, the network can assume different connectivity patterns, ranging from single-layer designs to staggered, stacked, or parallel architectures. A brief description of these variants is provided in the Supplementary Material. 4.2 Custom Loss Functions for Guided Training BINNs are flexible and can be trained with any suitable loss function, ranging from conventional objectives to biologically informed criteria. Standard mean squared error (MSE). The simplest choice is the mean squared error between the true phenotype y and the prediction ˆy: LMSE(y, ˆy) = 1 n n X i=1 yi -ˆyi 2. (6) Biologically informed soft-constrained loss. To encourage the model's intermediate representations to align with measured omics traits, we add a soft constraint based on correlation. Let T (l) = [T (l) 1 , . . . , T (l) n ] be the predicted latent values at layer l and Z(l) = [Z(l) 1 , . . . , Z(l) n ] the corresponding ground-truth intermediate measurements. Define the sample Pearson correlation ρ(l) = Pn i=1 T (l) i - ̄T (l) Z(l) i - ̄Z(l) qPn i=1(T (l) i - ̄T (l))2 qPn i=1(Z(l) i - ̄Z(l))2 , (7) where ̄T (l) and ̄Z(l) are the layer-wise means. The biologically informed loss is then Lbio = LMSE(y, ˆy) + λ L X l=1 1 -ρ(l) , (8) where λ > 0 is a hyperparameter balancing phenotype accuracy against intermediate-trait alignment. One could instead enforce an exact match via an auxiliary hard-constrained MSE ∥T (l) -Z(l)∥2 2, but the correlation-based soft constraint is less restrictive and allows the model greater expressive power. This biologically informed loss is optional: users may omit the correlation term (λ = 0) to recover the standard MSE objective, or tune λ to any desired level of guidance. 12 4.3 Sensitivity Analysis for Latent Perturbations To quantify the contribution of intermediate biological entities to the phenotype, we perform a post-hoc sensitivity analysis across trained BINN ensembles. This approach assesses how controlled perturbations of individual latent variables influence the predicted phenotype, providing an interpretable measure of importance that generalizes across omics layers, phenotypes, and model instances. Overview. Given a trained BINN ensemble consisting of S outer splits and F inner folds, let {Ms,f}s=1..S, f=1..F denote the set of trained models for a phenotype of interest. Each model Ms,f contains intermediate latent representations U (l) at layer l, corresponding to biological entities such as genes, metabolites, or proteins. Sensitivity analysis probes each latent by replacing it with controlled constants and observing the resulting change in the model's mean predicted phenotype ˆy. Step 1: Baseline computation. For each trained model, we first perform a forward pass on a held-out test dataset to compute the baseline predicted phenotype ˆy0 and the corresponding latent activations U (l). The mean μj and standard deviation σj of each latent dimension u(l) j are computed across all samples in the dataset. Step 2: Latent clamping and phenotype response. To evaluate the sensitivity of entity j, we perform two controlled perturbations by clamping its latent activation to constant values across all samples: u(l) j (δ) = μj + δ σj, δ ∈{+a, -a}, (9) while all other latents remain unaltered. The perturbed latent is then propagated through the trained model to produce new mean phenotype predictions ˆy(+) and ˆy(-). The per-entity sensitivity under model Ms,f is defined as the symmetric mean absolute deviation: ∆y(s,f) j = 1 2 |ˆy(+) -ˆy0| + |ˆy(-) -ˆy0| , (10) capturing the magnitude of the phenotype's response to both upward and downward perturbations. Step 3: Aggregation across models. Sensitivities are aggregated across all ensemble members that include entity j to obtain a robust importance estimate: ̄ ∆yj = 1 |M(j)| X (s,f)∈M(j) ∆y(s,f) j , (11) where M(j) is the subset of trained models that contain latent u(l) j . The resulting ̄ ∆yj represents the average change in predicted phenotype magnitude when the latent corresponding to entity j is clamped to values spanning its natural variability. Step 4: Entity ranking and downstream analysis. Entities are ranked by their aggregated sensitivity ̄ ∆yj, yielding a model-based importance profile for the phenotype under study. This ranking can identify key intermediate traits, benchmark model interpretability against known associations, and guide downstream analyses such as candidate gene selection, pathway enrichment, or gene-editing prioritization. 13 Algorithm 1 Sensitivity Analysis Across BINN Ensembles Require: Trained models {Ms,f} for a phenotype; perturbation scale a 1: for each outer split s = 1..S do 2: for each inner fold f = 1..F do 3: Load trained model Ms,f 4: Compute baseline mean phenotype ˆy0 and latent statistics (μj, σj) 5: for each latent entity j do 6: Clamp latent uj ←μj ± a · σj (constant across all samples) 7: Forward model to compute mean predictions ˆy(+) and ˆy(-) 8: Compute ∆y(s,f) j = 1 2 |ˆy(+) -ˆy0| + |ˆy(-) -ˆy0| 9: end for 10: end for 11: end for 12: Aggregate ̄ ∆yj = mean(s,f)(∆y(s,f) j ) over all models containing j 13: Rank entities by ̄ ∆yj 5 Data Availability All data generated for the shoot-branching problem were produced in-house by numerically integrating the ODE frameworks described in Powell et al. [34] and Betherloot et al. [9]. For the maize TWAS analyses, we used the publicly released genotype-expression-phenotype dataset from TorresRodríguez et al. [39], which is accessible via the original publication's data archive. 6 Code Availability All simulation code, analysis scripts, ElasticNet-derived gene lists, trained models, and code to generate the figures in this paper are available upon request and are provided for non-commercial research use only. Acknowledgements We would like to thank Megan Gillespie for supporting and championing this work. Nathan Springer for his countless hours in helping us link and communicate our ideas towards biological and genomic applications. Alexis Charalampopoulos for early ideation on the concept of BINNs. As well as several team members who have contributed substantial feedback throughout this development of this work: Jaclyn Noshay, Sarah Turner-Hissong, Fabiana Freitas Moreira, Zhangyue Shi, Rocio Dominguez Vidana, and Koushik Nagasubramanian. References [1] Admas Alemu, Johanna Åstrand, Osval A Montesinos-Lopez, Julio Isidro y Sanchez, Javier Fernandez-Gonzalez, Wuletaw Tadesse, Ramesh R Vetukuri, Anders S Carlsson, Alf Ceplitis, Jose Crossa, et al. Genomic selection in plant breeding: Key factors shaping two decades of progress. Molecular Plant, 17(4):552-578, 2024. [2] Susanna Atwell, Yu S Huang, Bjarni J Vilhjálmsson, Glenda Willems, Matthew Horton, Yan Li, Dazhe Meng, Alexander Platt, Aaron M Tarone, Tina T Hu, et al. Genome-wide association study of 107 phenotypes in arabidopsis thaliana inbred lines. Nature, 465(7298):627-631, 2010. [3] Žiga Avsec, Vikram Agarwal, Daniel Visentin, Joseph R Ledsam, Agnieszka GrabskaBarwinska, Kyle R Taylor, Yannis Assael, John Jumper, Pushmeet Kohli, and David R Kelley. Effective gene expression prediction from sequence by integrating long-range interactions. Nature methods, 18(10):1196-1203, 2021. 14 [4] Christina B Azodi, Emily Bolger, Andrew McCarren, Mark Roantree, Gustavo de Los Campos, and Shin-Han Shiu. Benchmarking parametric and machine learning models for genomic prediction of complex traits. G3: Genes, Genomes, Genetics, 9(11):3691-3702, 2019. [5] Christina B Azodi, Jeremy Pardo, Robert VanBuren, Gustavo de Los Campos, and Shin-Han Shiu. Transcriptome-based prediction of complex traits in maize. The Plant Cell, 32(1):139-151, 2020. [6] Gonzalo Benegas, Carlos Albors, Alan J Aw, Chengzhong Ye, and Yun S Song. A dna language model based on multispecies alignment predicts the effects of genome-wide variants. Nature Biotechnology, pages 1-6, 2025. [7] Gonzalo Benegas, Sanjit Singh Batra, and Yun S Song. Dna language models are powerful predictors of genome-wide variant effects. Proceedings of the National Academy of Sciences, 120(44):e2311219120, 2023. [8] Gonzalo Benegas, Chengzhong Ye, Carlos Albors, Jianan Canal Li, and Yun S Song. Genomic language models: opportunities and challenges. Trends in Genetics, 2025. [9] Jessica Bertheloot, François Barbier, Frédéric Boudon, Maria Dolores Perez-Garcia, Thomas Péron, Sylvie Citerne, Elizabeth Dun, Christine Beveridge, Christophe Godin, and Soulaiman Sakr. Sugar availability suppresses the auxin-induced strigolactone pathway to promote bud outgrowth. New Phytologist, 225(2):866-879, 2020. [10] Leo Breiman. Random forests. Machine learning, 45:5-32, 2001. [11] Ole F Christensen, Vinzent Börner, Luis Varona, and Andres Legarra. Genetic evaluation including intermediate omics features. Genetics, 219(2):iyab130, 2021. [12] Samuel A Clark and Julius van der Werf. Genomic best linear unbiased prediction (gblup) for the estimation of genomic breeding values. Genome-wide association studies and genomic prediction, pages 321-330, 2013. [13] Jose Crossa, Paulino Pérez-Rodríguez, Javier Cuevas, Osval A Montesinos-López, Diego Jarquín, Gustavo de Los Campos, Juan Burgueño, José Manuel González-Camacho, Salvador Pérez-Elizalde, Yoseph Beyene, and Susanne Dreisigacker. Genomic selection in plant breeding: methods, models, and perspectives. Trends in Plant Science, 22(11):961-975, 2017. [14] Haitham A Elmarakeby, Justin Hwang, Rand Arafeh, Jett Crowdis, Sydney Gang, David Liu, Saud H AlDubayan, Keyan Salari, Steven Kregel, Camden Richter, et al. Biologically informed deep neural network for prostate cancer discovery. Nature, 598(7880):348-352, 2021. [15] Jeffrey B Endelman. Ridge regression and other kernels for genomic selection with r package rrblup. The plant genome, 4(3), 2011. [16] Eric R Gamazon, Heather E Wheeler, Kaanan P Shah, Sahar V Mozaffari, Keston AquinoMichaels, Robert J Carroll, Anne E Eyler, Joshua C Denny, GTEx Consortium, Dan L Nicolae, et al. A gene-based association method for mapping traits using reference transcriptome data. Nature genetics, 47(9):1091-1098, 2015. [17] Pengfei Gao, Haonan Zhao, Zheng Luo, Yifan Lin, Wanjie Feng, Yaling Li, Fanjiang Kong, Xia Li, Chao Fang, and Xutong Wang. Soydngp: a web-accessible deep learning framework for genomic prediction in soybean breeding. Briefings in bioinformatics, 24(6):bbad349, 2023. [18] Daniel Gianola, Gustavo de Los Campos, William G Hill, Eduardo Manfredi, and Rohan Fernando. Additive genetic variability and the bayesian alphabet. Genetics, 183(1):347-363, 2009. [19] Erik Hartman, Aaron M Scott, Christofer Karlsson, Tirthankar Mohanty, Suvi T Vaara, Adam Linder, Lars Malmström, and Johan Malmström. Interpreting biologically informed neural networks for enhanced proteomic biomarker discovery and pathway analysis. Nature Communications, 14(1):5359, 2023. 15 [20] Elliot L Heffner, Mark E Sorrells, and Jean-Luc Jannink. Genomic selection for crop improvement. Crop Science, 49(1):1-12, 2009. [21] Laval Jacquin, Tuong-Vi Cao, and Nourollah Ahmadi. A unified and comprehensible view of parametric and kernel methods for genomic prediction with application to rice. Frontiers in genetics, 7:145, 2016. [22] Kishore Jaganathan, Sofia Kyriazopoulou Panagiotopoulou, Jeremy F McRae, Siavash Fazel Darbandi, David Knowles, Yang I Li, Jack A Kosmicki, Juan Arbelaez, Wenwu Cui, Grace B Schwartz, et al. Predicting splicing from primary sequence with deep learning. Cell, 176(3):535548, 2019. [23] Arthur Korte and Ashley Farlow. The advantages and limitations of trait analysis with gwas: a review. Plant methods, 9:1-9, 2013. [24] Wenlong Ma, Zhixu Qiu, Jie Song, Qian Cheng, and Chuang Ma. Deepgs: Predicting phenotypes from genotypes using deep learning. BioRxiv, page 241414, 2017. [25] Xiaojie Ma et al. Omicsgcn: a multi-view graph convolutional network for multi-omics data integration. Bioinformatics, 2022. [26] Gary C McDonald. Ridge regression. Wiley Interdisciplinary Reviews: Computational Statistics, 1(1):93-100, 2009. [27] Theo HE Meuwissen, Ben J Hayes, and ME1461589 Goddard. Prediction of total genetic value using genome-wide dense marker maps. genetics, 157(4):1819-1829, 2001. [28] Osval Antonio Montesinos López, Abelardo Montesinos López, and Jose Crossa. Bayesian genomic linear regression. In Multivariate Statistical Machine Learning Methods for Genomic Prediction, pages 171-208. Springer, 2022. [29] Osval Antonio Montesinos-López, Abelardo Montesinos-López, Paulino Pérez-Rodríguez, José Alberto Barrón-López, Johannes WR Martini, Silvia Berenice Fajardo-Flores, Laura S Gaytan-Lugo, Pedro C Santana-Mancilla, and José Crossa. A review of deep learning applications for genomic selection. BMC genomics, 22(1):19, 2021. [30] Eric Nguyen, Michael Poli, Matthew G Durrant, Brian Kang, Dhruva Katrekar, David B Li, Liam J Bartie, Armin W Thomas, Samuel H King, Garyk Brixi, et al. Sequence modeling and design from molecular to genome scale with evo. Science, 386(6723):eado9336, 2024. [31] Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Michael Wornow, Callum BirchSykes, Stefano Massaroli, Aman Patel, Clayton Rabideau, Yoshua Bengio, et al. Hyenadna: Long-range genomic sequence modeling at single nucleotide resolution. Advances in neural information processing systems, 36, 2024. [32] Tran Nguyen et al. Deepprog: an ensemble of deep-learning and machine-learning models for prognosis prediction using multi-omics data. Nature Communications, 2020. [33] Joseph O Ogutu, Torben Schulz-Streeck, and Hans-Peter Piepho. Genomic selection using regularized linear regression models: ridge regression, lasso, elastic net and their extensions. In BMC proceedings, volume 6, pages 1-6. Springer, 2012. [34] Owen M Powell, Francois Barbier, Kai P Voss-Fels, Christine Beveridge, and Mark Cooper. Investigations into the emergent properties of gene-to-phenotype networks across cycles of selection: a case study of shoot branching in plants. in silico Plants, 4(1):diac006, 2022. [35] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378:686-707, 2019. [36] Christian Riedelsheimer, Angelika Czedik-Eysenberg, Christoph Grieder, Jan Lisec, Frank Technow, Ronan Sulpice, Thomas Altmann, Mark Stitt, Lothar Willmitzer, and Albrecht E Melchinger. Genomic and metabolic prediction of complex heterotic traits in hybrid maize. Nature genetics, 44(2):217-220, 2012. 16 [37] Marylyn D Ritchie, Emily R Holzinger, Ruowang Li, Sarah A Pendergrass, and Dokyoon Kim. Methods of integrating data to uncover genotype-phenotype interactions. Nature Reviews Genetics, 16(2):85-97, 2015. [38] J Vladimir Torres-Rodríguez, Delin Li, and James C Schnable. Evolving best practices for transcriptome-wide association studies accelerate discovery of gene-phenotype links. Current Opinion in Plant Biology, 83:102670, 2025. [39] J Vladimir Torres-Rodríguez, Delin Li, Jonathan Turkus, Linsey Newton, Jensina Davis, Lina Lopez-Corona, Waqar Ali, Guangchao Sun, Ravi V Mural, Marcin W Grzybowski, et al. Population-level gene expression can repeatedly link genes to functions in maize. The Plant Journal, 119(2):844-860, 2024. [40] Arno van Hilten, Steven A Kushner, Manfred Kayser, M Arfan Ikram, Hieab HH Adams, Caroline CW Klaver, Wiro J Niessen, and Gennady V Roshchupkin. Gennet framework: interpretable deep learning for predicting phenotypes from genetic data. Communications biology, 4(1):1094, 2021. [41] Arno van Hilten, Jeroen van Rooij, M Arfan Ikram, Wiro J Niessen, Joyce BJ van Meurs, and Gennady V Roshchupkin. Phenotype prediction using biologically interpretable neural networks on multi-cohort multi-omics data. NPJ systems biology and applications, 10(1):81, 2024. [42] Rajeev K Varshney, Manish Roorkiwal, and Mark E Sorrells. Genomic-enabled prediction models for improving crop productivity. Trends in Plant Science, 26(6):575-587, 2021. [43] Michael Wainberg, Nasa Sinnott-Armstrong, Nicholas Mancuso, Alvaro N Barbeira, David A Knowles, David Golan, Raili Ermel, Arno Ruusalepp, Thomas Quertermous, Ke Hao, et al. Opportunities and challenges for transcriptome-wide association studies. Nature genetics, 51(4):592-599, 2019. [44] Dong et al. Wang. Mogonet integrates multi-omics data through graph convolutional networks. Nature Machine Intelligence, 2021. [45] Hao Wang, Shen Yan, Wenxi Wang, Yongming Chen, Jingpeng Hong, Qiang He, Xianmin Diao, Yunan Lin, Yanqing Chen, Yongsheng Cao, et al. Cropformer: An interpretable deep learning framework for crop genomic prediction. Plant Communications, 6(3), 2025. [46] Kelin Wang, Muhammad Ali Abid, Awais Rasheed, Jose Crossa, Sarah Hearne, and Huihui Li. Dnngp, a deep neural network-based method for genomic prediction using multi-omics data in plants. Molecular Plant, 16(1):279-293, 2023. [47] Jacob D Washburn, José Ignacio Varela, Alencar Xavier, Qiuyue Chen, David Ertl, Joseph L Gage, James B Holland, Dayane Cristina Lima, Maria Cinta Romay, Marco Lopez-Cruz, et al. Global genotype by environment prediction competition reveals that diverse modeling strategies can deliver satisfactory maize yield estimates. Genetics, 229(2):iyae195, 2025. [48] Xun Wu, Yongxiang Li, Xin Li, Chunhui Li, Yunsu Shi, Yanchun Song, Zuping Zheng, Yu Li, and Tianyu Wang. Analysis of genetic differentiation and genomic variation to reveal potential regions of importance during maize improvement. BMC plant biology, 15(1):256, 2015. [49] Xinyue Yang et al. mmvae: a multi-modal variational autoencoder framework for integrative analysis of multi-omics data. Bioinformatics, 37(15):2151-2158, 2021. [50] Bayya Yegnanarayana. Artificial neural networks. PHI Learning Pvt. Ltd., 2009. 17 Supplementary Materials for: Biology-informed neural networks learn nonlinear representations from omics data to improve genomic prediction and interpretability Katiana Kontolati, Rini Jasmine Gladstone, Ian Davis, Ethan Pickering Overview This document provides supplementary information supporting the findings and methodology of the main manuscript. It includes additional figures, tables, implementation details, and reproducibility notes for both the transcriptomics and metabolomics applications. 1 Metabolomics-derived BINN Axillary bud outgrowth is the key developmental process driving shoot branching in plants. Its regulation arises from an intricate network of hormonal and sugar signals. In the framework of Bertheloot et al. [1], auxin (A) concentration produced at the shoot apex is transported basipetally and inhibits cytokinin (CK) synthesis while promoting strigolactone (SL) production. Cytokinin levels and strigolactone levels then act antagonistically to activate or repress bud growth, respectively, and sucrose availability (S) acts as an enabling cue. Bertheloot et al. showed that this minimal network captures the switch-like behavior of bud outgrowth across species and can be approximated as a system of ordinary differential equations (ODEs) [1]). Powell et al. [8] then extended this ODE framework by parameterizing each interaction term and intermediate-trait (hormones and sucrose) level with additive genetic values calculated based on the additive genetic effects at multiple causal loci. The introduction of genetic variation into the shoot-branching network yields a fully specified G2P model, enabling in silico selection experiments that reveal how interactions among intermediate traits drive selection response, precisely the process breeders seek to optimize. Because this framework integrates genotype, metabolite levels, and phenotype, it functions as a biologically-informed model, embedding known genetic loci-metabolite relationships as inductive biases. Moreover, having access to the exact ODEs used to generate synthetic data gives us a ground-truth baseline against which to develop and rigorously benchmark our BINN architectures. Although the mapping from genotype to metabolite in this system is exactly specified, in real breeding populations we typically know only statistical associations and never the true functional forms. We therefore hypothesize that a BINN can recover 1 these hidden relationships, learning the underlying network dynamics directly from data and providing both accurate predictions and mechanistic insight. 1.1 Adapted system of equations We generate a large synthetic dataset following the ODE-based shoot-branching network of Bertheloot et al. [1] extended by Powell et al. [8]. For reproducibility we outline the exact set of equations below. Given N lines and L causal loci per pathway, the procedure is: 1. Simulate additive genetic effects. Draw locus effects {βj}4L j=1 from a standard normal distribution. Denote the resulting effect vector by β = (β1, . . . , β4L)⊤. 2. Generate genotypes. Form the genotype matrix G ∈{0, 1}N×4L, Gi,j ∼Bernoulli(0.5), where row i represents line i and columns 1 . . . L, L + 1 . . . 2L, 2L + 1 . . . 3L, 3L + 1 . . . 4L correspond to auxin (A), sugar (S), cytokinin (CK), and strigolactone (SL) pathways. 3. Compute additive pathway values. For each pathway p ∈{A, S, CK, SL}, let β(p) and G(p) denote the corresponding length-L subvector and submatrix. The raw genetic value for pathway p in line i is g(p) i = L X j=1 G(p) i,j β(p) j . (1) 4. Rescale to steady-state trait levels. To match the ranges used by Powell et al. [8], each g(p) is linearly rescaled to an intermediate trait x(p): x(p) i = g(p) i 2 maxi |g(p) i | + 1 2 ! sp + bp, (2) with pathway-specific scale sp and offset bp: (sA, bA) = (2.5, 0), (sS, bS) = (2.4, 0.1), (sCK, bCK) = (0.6, 0.4), (sSL, bSL) = (0.9, 0.2). 5. Compute interaction terms and intermediate traits. After computing raw genetic values g(p) via Eq. (1), we first linearly map auxin and sugar to obtain x(A) i = g(A) i , x(S) i = g(S) i . (3) 2 Cytokinin and strigolactone receive additional interaction adjustments as: x(CK) i = g(CK) i γ(i) CK←A + γ(i) CK←S 0.99 , (4) x(SL) i = g(SL) i + γ(i) SL←A 0.86 . (5) where γ(i) CK←A, γ(i) CK←S, and γ(i) SL←A are computed as γ(i) CK←A = 1 1 + 0.96 x(A) i , γ(i) CK←S = 0.25 (x(S) i )2 0.19 + (x(S) i )2, (6) γ(i) SL←A = 24.89 (x(A) i )2 294.58 + (x(A) i )2. (7) 6. Integrator and phenotype. The intermediate traits are passed throug the signal integrator term as γ(i) I←CK = 1 1 + 1000 x(CK) i , (8) γ(i) I←SL,S = 5.64 (x(SL) i )2 1 + 0.00418 + 7.10 (x(S) i )2 (x(SL) i )2. (9) The composite integrator signal for line i is Ii = 0.33 + γ(i) I←CK + γ(i) I←SL,S. (10) Time to bud outgrowth is then obtained by a linear transformation and clipping to remove negative days: yBO,i = min{-3.5 min j Ij + 3.5 Ii, 8.3}, (11) and lines with yBO,i ≤8.3 are scored as having initiated bud outgrowth. 7. Final dataset. We generate the genotype matrix X = G, the phenotype vector yBO, and the true intermediate traits {x(A), x(S), x(CK), x(SL)} for downstream model training and evaluation. 1.2 Data generation We generated a synthetic dataset using the equations outlined in Section 1.1 using our own Python implementation. We consider L = 400 causal loci per pathway which results in 1, 600 loci in total. A large dataset of N = 100, 000 lines was generated from which 10% was used for validation and 20% for testing fixed across models. The remaining 70% was used to randomly sample training subsets used in the experiments described below. Below, we show the distribution of the phenotype yBO and the four intermediate traits (A,S,CK,SL). 3 Figure 1: Distribution of the phenotype time to bud outgrowth yBO, and intermediate traits auxin (A), sucrose (S), cytokinin (CK) and strigolactone (SL). 1.3 Model architecture Each pathway subnetwork is a shallow fully-connected network (FCN) with hidden dimension H = 32, inter-layer sigmoid activations that aim to emulate the Hill-like dynamics of the shoot-branching network, and a one-dimensional output. The four pathway outputs are concatenated and passed through a final FCN with hidden dimension F = 16 to predict yBO. Any SNPs not assigned to a pathway are fed into a residual FCN of the same size. 1.4 Experimental setup All BINN experiments used the same training pipeline, implemented in PyTorch [6]. Below we summarize the key hyperparameters and procedures: Data splits and noise injection. For each run, we randomly sample from the full set Ntrain lines for training, Nval = 0.1 × Ntotal for validation, and Ntest = 0.2 × Ntotal for testing. To test the robustness of model to noisy data we then add Gaussian noise to the targets: yi ←yi 1 + ε(y) i , ε(y) i ∼N(0, σ2 y), and to each intermediate trait p ∈{A, S, CK, SL}: x(p) i ←x(p) i 1 + ε(p) i , ε(p) i ∼N(0, σ2 inter), where σy = 10% and σinter = 5%. Optimization. We trained each BINN with: • Optimizer: Adam, initial learning rate α = 10-3, no weight decay. 4 • Batch size: 64. • Scheduler: ReduceLROnPlateau on the validation loss, factor 0.5, patience 20 epochs. • Early stopping: terminate if validation loss does not improve for 20 consecutive epochs, up to a maximum of 500 epochs. Repetitions, dataset sizes, and random seeds. We evaluated each BINN configuration over nine logarithmically spaced training set sizes (500 to 20, 000 samples), combined with varying noise levels and intermediate-trait availability. For each of these dataset sizes, we performed n = 5 independent runs using different random seeds. Genotypes were drawn once per size, and each seed controlled the train/validation/test split (80/10/10), the network weight initialization, and the Gaussian noise added to both the final phenotype and intermediate traits. 1.5 Baseline Models To benchmark the performance of our BINN architectures, we compared against two standard models trained on the same synthetic data using our own implementation: Ridge regression. We fit a linear model ˆy = Xw + b, with L2 regularization on the weights. Concretely, the model implements L(w, b) = 1 N N X i=1 (yi -(x⊤ i w + b))2 + λ∥w∥2 2, where xi ∈Rp is the genotype vector, yi the target, and λ = 0.01 the regularization coefficient. Training proceeds by minimizing this loss via AdamW with the same learning rate and early-stopping schedule as the BINNs. Fully-connected network (FCN). As a representative "black-box" nonlinear model, we trained a small deep network with two hidden layers of size 64 and ReLU activations: h(1) = ReLU(W (1)x + b(1)), h(2) = ReLU(W (2)h(1) + b(2)), ˆy = W (3)h(2) + b(3). The model parameters {W (k), b(k)} are learned by minimizing mean-squared error: L = 1 N N X i=1 yi -ˆyi 2, using the same optimizer, learning rate, batch size, and early-stopping criteria as for the BINNs. 5 1.6 Evaluation Metric To quantify predictive performance, we compute the mean squared error (MSE) on the heldout test set for each model, dataset size, and random seed. Specifically, for each dataset size Ntrain, noise configuration, and intermediate-trait fraction, we perform n = 5 independent runs yielding test errors MSEr = 1 Ntest Ntest X i=1 yi -ˆy(r) i 2, r = 1, . . . , n. We then summarize these 5 values via their mean and standard deviation in the main text, and visualize their full distribution using boxplots. 2 Transcriptomics-derived BINN 2.1 Introduction Linking genetic variation to phenotypic traits remains a central challenge in plant genomics. Torres-Rodriguez et al. [10] conducted a transcriptome-wide association study (TWAS) on the Wisconsin Diversity Panel, consisting 693 temperate adapted maize inbred lines from seven heterotic pools: stiff stalk (SS), non-stiff stalk (NSS), iodent (IDT), tropical, popcorn, sweet corn, and other/unknown, to identify genes controlling flowering time. The authors collected full-length RNA-seq within a tight two-hour window and recorded days to anthesis and silking at two Corn Belt locations, Nebraska and Michigan. From these data, they identified 21 genes exceeding stringent false-discovery thresholds and mapped both cis- and trans-eQTL, including loci in well-characterized flowering regulators. The study's power derives from its large sample size, precise sampling protocol, and deep sequencing depth, making it one of the most comprehensive TWAS datasets in maize. Because this cohort provides the complete "genotype-expression-phenotype" triangle, it is ideal for evaluating our methodology. We hypothesize that embedding expression-derived sparsity into the network architecture improves predictive accuracy over genotype-only models, while still requiring only genotype data at inference. We constructed the BINN's sparsity mask through a two-step procedure. First, we trained an expression(biological domain knowledge)-to-phenotype (B2P) model using Elastic Net regression on the RNA-seq data for each flowering trait, tuning the regularization parameters via cross-validation. We then selected genes with non-zero coefficients. Second, we performed eQTL mapping on those candidate genes to identify their top SNP markers. The resulting SNP-gene pairs defined the binary mask connectivity matrix used to define our sparse genotype-to-expression-to-phenotype BINN architecture. 2.2 Dataset and Preprocessing We use the real-world maize data provided in Torres-Rodriguez et al. 2024 [10], which includes flowering times (days to silking and anthesis) measured at two field locations (Nebraska and Michigan), RNA-seq gene expression data from leaf samples, and genotypes from 6 15.8 million single nucleotide polymorphism (SNP) markers. The dataset consists of 690 lines from the Wisconsin Diversity Panel, covering 7 heterotic groups (stiff stalk, non-stiff stalk, iodent, tropical, popcorn, sweet corn, and other/unknown) and expression values of 39,756 genes. We followed the data pre-processing steps described in Torres-Rodriguez et al. [10] and used the spatially-corrected flowering time values that were provided. Fig. 2 shows the distribution of the 4 phenotypes across all the 690 lines. We also dropped the unexpressed genes, with median TPM less than 0.01, from further analysis. This reduced the total number of genes to 24,874. Furthermore, markers with a minor allele frequency (MAF) below 5% were dropped, and PLINK [9] was used to prune highly correlated markers (--indep-pairwise 1000 500 0.5), leaving 3.2 million markers for further analysis. The pre-processed expression and marker data is, then, used for feature selection for the BINN model. 2.2.1 Feature Selection The BINN architecture is sparsified by selecting a subset of genes and markers through a biologically-informed process. The genes are selected using a linear elastic net model and markers are selected through eQTL process. This reduces the input features (markers) by an order of 2 and the length of intermediate layer (genes) by an order of 1. 1. Gene selection : To identify a reduced set of significant genes influencing flowering time, we developed ElasticNet models utilizing expression data from 24,874 genes as predictors, with days to silking and anthesis for NE and MI (four distinct models) as the response variables. The expression data, measured in Transcripts Per Million (TPM), was normalized to a range of 0 to 2, following the methodology outlined by Torres-Rodriguez et al. [10]. The ElasticNet models were implemented using ScikitLearn's ElasticNetCV[4], and we selected the number of genes based by varying the values of the L1 ratio by subsetting the features with non-zero coefficients. We used 5fold cross-validation for model building. To assess the quality of the selected genes, we compared the overlap between genes identified as influencing flowering time in TorresRodriguez et al. [10] and those selected by the ElasticNet models. Table 1 presents the number of genes identified for various values of the L1 ratio, (l), and the corresponding percentage of overlap with the flowering time genes reported in Torres-Rodriguez et al. [10], based on models trained using expression values from all the 690 lines. This analysis demonstrates that as the L1 ratio decreases, the number of selected genes increases, and a lower L1 ratio facilitates the identification of a greater number of significant genes. 2. Marker selection : For each gene selected through the ElasticNet model, significant markers were identified using eQTL analysis [2] as outlined in Torres-Rodriguez et al. [10], using mixed linear model [12] in rMVP [11] and including three principal components as covariates. For all the identified genes, the top 20 statistically significant markers are selected for training the BINN models. 7 Table 1: Assessment of the quality of the set of genes obtained using the ElasticNet models. Phenotype # Significant flowering time genes from [10] # genes from ElasticNet models % overlap of significant genes l = 1.0 l = 0.5 l = 0.2 l = 1.0 l = 0.5 l = 0.2 Days to Silking, MI 17 317 568 970 70% 82% 100% Days to Anthesis, MI 16 320 412 816 69% 81% 94% Days to Silking, NE 16 234 413 756 69% 81% 100% Days to Anthesis, NE 14 259 383 760 64% 100% 100% Figure 2: Distribution of the four phenotypes; days to anthesis and silking in NE and MI, for the real-world maize TWAS dataset 2.3 Model architecture The BINN architecture comprises of three layers: an input layer consisting of the reduced set of markers, an intermediate layer in which each node represents a gene, and a final output layer that generates the predicted phenotype values. Every node (gene) in the intermediate layer is connected to 20 nodes (significant markers) from the input layer via a pathway subnetwork, which is structured as a fully connected network (FCN) with a single hidden layer of dimension, H = 128, and a sigmoid activation function. The outputs from all the pathway sub-networks are concatenated and subsequently fed into another FCN, with a single hidden layer of dimension, F = 128, to predict the flowering time, yFT. 2.4 Experimental setup The experiments were designed to evaluate the performance of the BINN under conditions of sparse data. Consequently, all models were trained on 20% of the dataset while being tested on the remaining 80%. The entire dataset was divided into five non-overlapping 8 subsets, while ensuring that the lines from all heterotic groups were represented in same proportion as the full dataset. The exact number of train, validation and test lines used in each independent fold is shown in Table 2. To prevent any data leakage during feature selection, gene and marker selection were performed independently for each subset, ensuring that the corresponding features were utilized exclusively when training the model on that specific subset of data. For gene selection, we used an L1 ratio of l = 0.1 for ElasticNet models, with a 5-fold cross-validation. Table 3 presents the number of genes selected per phenotype for all the splits of data. For every selected gene, we selected 20 markers based on the eQTL analysis. It is important to note that expression data was used solely for feature selection to introduce sparsity in the network architecture and was not utilized for model training. Therefore, only genotype data is required for training and validation of the model. Table 2: Number of train, validation and test lines for each subpopulation used in each fold (sparse-data scenario). Population Train Validation Test All 65 65 560 Others 29 29 241 SS 15 15 129 NSS 9 9 76 IDT 5 5 47 Tropical 3 3 26 Popcorn 2 2 18 Sweet corn 2 2 23 We trained all the BINN models with Adam optimizer with an initial learning rate, α = 10-3, and batch size of 16. Training was terminated if the validation loss did not improve for 20 consecutive epochs, with a maximum limit of 500 epochs. Additionally, we used 5-fold cross-validation (CV) for training on the five subsets of data. As BINN is a neural network, it necessitates a validation dataset during training. However, the baseline models utilize the entire 20% of the data for training without requiring a validation set. To ensure a fair comparison between the models, we initially trained the BINN models using 16% of the data for training and 4% for early stopping/validation for each of the five CV folds. We recorded the epoch that yielded the best model performance for each of these runs and subsequently retrained the BINN model using 20% of the data for the same number of epochs without a validation set. 2.5 Baseline Models We compare the performance of the BINN models against benchmark genotype-to-phenotype (G2P) and domain expression-to-phenotype (B2P) models, all of which are linear in nature. For G2P, we trained GBLUP and ridge regression models, whereas, for B2P, a ridge regression model was trained. All the ridge regression models were implemented using Scikit-Learn's [4] RidgeCV function with 3-fold cross-validation. We used the GBLUP implementation from BGLR [7] with 5000 iterations after 1000 burn-in iterations. 9 Figure 3: Test Spearman correlation distributions for prediction of days to Anthesis in NE and MI - comparing all the models (G2P on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with network sparsity defined by lasso selected genes and eQTL-selected markers, blue) across five independent 20%/80% train/test splits, each with five-fold cross-validation. Results of three G2P models are shown - GBLUP (G2P-GBLUP), Ridge Regression on every 100th marker (G2P-RR) and Ridge Regression on bio-informed (eQTL-selected) markers (G2P-RR-BI). Additionally, results of two B2P models are also shown - Ridge Regression on all the genes (B2P-RR) and Ridge Regression on bio-informed (ElasticNet-selected) genes (B2P-RR-BI). Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 10 Figure 4: Test Spearman correlation distributions for prediction of days to Silking in NE and MI - comparing all the models (G2P on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with network sparsity defined by lasso selected genes and eQTL-selected markers, blue) across five independent 20%/80% train/test splits, each with five-fold cross-validation. Results of three G2P models are shown - GBLUP (G2P-GBLUP), Ridge Regression on every 100th marker (G2P-RR) and Ridge Regression on bio-informed (eQTL-selected) markers (G2P-RR-BI). Additionally, results of two B2P models are also shown - Ridge Regression on all the genes (B2P-RR) and Ridge Regression on bio-informed (ElasticNet-selected) genes (B2P-RR-BI). Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 11 Table 3: Number of genes selected per phenotype for different subset of data using l = 0.1. Data subset Days to silking, MI Days to anthesis, MI Days to silking, NE Days to anthesis, NE Split 1 1,271 1,041 1,010 933 Split 2 1,230 995 1,071 912 Split 3 1,079 919 969 941 Split 4 1,188 985 1,044 848 Split 5 1,235 1,018 1,042 933 The G2P models were trained using every 100th marker, as we found no significant difference in performance compared to using all 3.2 million markers. On the other hand, B2P model was trained on the expression (in TPM) of all 24,874 genes. Prior to model training, the TPM values were transformed using the Box-Cox normalization with a parameter λ = 0.0282 to approximate a normal distribution. 2.6 Evaluation Strategy The performance of the BINN models are compared with that of the baseline models using Spearman's rank correlation coefficients [5]. Specifically, we calculated the correlation between the predicted and observed phenotype values, evaluated on the remaining 80% of the held-out data. Correlation values are computed for all five folds in each subset of data, as well as for each heterotic group represented in the dataset. It is important to note that since the correlation metric is computed for each fold using the fixed 80% held-out lines within a subset/split, this may underestimate the true model variance, potentially resulting in less variability in our test metrics. Additionally, we treated each heterotic group as an independent trial, aggregating Spearman correlations across the five folds, five data splits, and all four phenotypes, and then applied a paired t-test [3] to determine whether the BINN model's performance is significantly better than that of the G2P baseline. In the results reported in the main text we obtained p = 2.23 × 10-6, well below the 0.05 significance threshold, allowing us to reject the null hypothesis and conclude that BINN significantly outperforms the baseline model. 2.7 Additional Evaluation Results In this section, we discuss additional experiments conducted to further evaluate the impact of biologically informed feature selection and network sparsity in BINNs. First, we compare the performance of the BINN model against various baseline models. Specifically, we consider three G2P baseline models - GBLUP, Ridge Regression using every 100th marker and Ridge Regression using eQTL-selected markers and two B2P models - Ridge Regression using all the genes and Ridge Regression using ElasticNet-selected genes. Figures 3 and 4 illustrate the performance of these models in predicting four flowering time traits - days to anthesis and silking in both NE and MI. Furthermore, we conducted an ablation study to evaluate the impact of biologically informed network sparsity on the performance of the BINN model. This involved incorporating 12 (a) (b) (c) Figure 5: Test Spearman correlation distributions for (a) Silking NE, (b) Silking MI and (c) Anthesis NE - comparing three predictive models (G2P: GBLUP on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with expression-informed sparsity, blue) across five independent 20%/80% train/test splits, each with five-fold cross-validation. Each subplot shows results for all lines pooled ("all") and for each of the seven distinct subpopulations (SS, NSS, IDT, Popcorn, Sweet corn, Tropical and Others). The pink dashed line marks the median of the G2P baseline; the dashed line for G2B2P is colored green when its median exceeds that baseline or red when it falls below. Legends report the median percent change of G2B2P relative to G2P. randomly selected network sparsity into the BINN architecture and comparing its predictive performance with that of the baseline models. Figure. 7 presents the results for the BINN model, where network sparsity is defined by randomly selected genes and eQTL-selected markers. To ensure a fair evaluation, we maintained an equal number of randomly selected genes as in the biologically informed case. Figure. 8 displays similar results for network sparsity defined by randomly selected genes and markers. Our observations from these analyses indicate that as the level of random connections in the network architecture increases-from randomly selected genes to randomly selected genes and markers-the performance of the BINN model significantly declines. Moreover, biologically informed markers and genes enhance the performance of the baseline models, as evidenced in Figures 3 and 4. These results underscore our finding that biologically informed sparsity imparts essential inductive bias to the model, thereby, improving its predictive accuracy. 3 Alternative BINN Architectures In the main text, we described our canonical BINN design and its instantiation for (1) the synthetic shoot-branching problem and (2) the maize TWAS dataset. The following examples 13 (a) (b) (c) Figure 6: Test Spearman correlation distributions for leave-one-population-out experiments for (a) Silking NE, (b) Silking MI and (c) Anthesis NE - comparing three predictive models (G2P: Ridge Regression on genotype, pink; B2P: Elastic Net on expression, green; G2B2P: BINN with expression-informed sparsity, blue) across five independent 20% train splits, each with five-fold cross-validation.Each subplot shows results for the predictions on the six distinct subpopulations (SS, NSS, IDT, Popcorn, Sweet corn, and Tropical) using the models that are trained by leaving out the corresponding subpopulation from the training data. The pink dashed line marks the median of the G2P baseline; the dashed line for G2B2P is colored green when its median exceeds that baseline or red when it falls below. Legends report the median percent change of G2B2P relative to G2P. are not exhaustive alternatives but illustrate the flexibility of our architecture in handling diverse multi-omics contexts: 3.1 Single-layer BINN This model - used for the TWAS problem - has one intermediate omics layer (gene expression) feeding directly into the final integrator. It is appropriate when a single data type dominates predictive power, as in transcriptome-wide association studies without the availability of additional -omics data such metabolomics or proteomics. 14 (a) (b) (c) (d) Figure 7: Test Spearman correlation distributions for four flowering-time traits-(a) Anthesis NE, (b) Anthesis MI, (c) Silking NE, and (d) Silking MI-comparing three predictive models. Here, the network sparsity of G2B2P (BINN) model is defined by randomly selected genes and eQTL-selected markers, blue). Models were trained across five independent 20%/80% train/test splits, each with five-fold cross-validation. Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 3.2 Staggered-layer BINN In cases with a single omics modality but with rich intra-layer dependencies, we allow pathway subnetworks to exchange information via sparse cross-connections. For example, in the shoot-branching network, auxin (A) and sucrose (S) subnetworks both feed into the cytokinin (CK) and strigolactone (SL) modules. These staggered links capture nonhierarchical, within-layer interactions, such as gene-gene or metabolite-metabolite crosstalk, before passing all signals to the final integrator. 3.3 Stacked-layer BINN When multiple omics layers form a biological cascade, such as genotype →transcriptome → metabolome, we embed each layer as a separate set of subnetworks connected in series. The latent outputs of the first layer become the inputs to the second layer's pathway modules, and so on, before final integration. This design mirrors natural hierarchies (e.g. expression driving metabolite synthesis) and is appropriate whenever upstream molecular changes are known to mediate downstream effects. 15 (a) (b) (c) (d) Figure 8: Test Spearman correlation distributions for four flowering-time traits-(a) Anthesis NE, (b) Anthesis MI, (c) Silking NE, and (d) Silking MI-comparing three predictive models. Here, the network sparsity of G2B2P (BINN) model is defined by randomly selected genes and randomly selected markers, blue). Models were trained across five independent 20%/80% train/test splits, each with five-fold cross-validation. Each subplot shows results for all lines pooled and for each of the seven distinct subpopulations. 3.4 Parallel-layer BINN Each omics type (e.g. transcriptomics, proteomics, methylation) is processed by an independent subnetwork, and their latent outputs are concatenated only at the final integrator. This "late fusion" design suits orthogonal assays with minimal direct cross-talk, such as combining methylation and ATAC-seq to predict complex agronomic traits. All variants share the same core elements, biological sparsity masks, shallow FCNs per module, and a unified integrator, yet differ in how modules interconnect, allowing users to tailor the model to their study's data structure and biological assumptions. References [1] Jessica Bertheloot, Fran ̧cois Barbier, Fr ́ed ́eric Boudon, Maria Dolores Perez-Garcia, Thomas P ́eron, Sylvie Citerne, Elizabeth Dun, Christine Beveridge, Christophe Godin, and Soulaiman Sakr. Sugar availability suppresses the auxin-induced strigolactone pathway to promote bud outgrowth. New Phytologist, 225(2):866-879, 2020. [2] Beth Holloway, Stanley Luck, Mary Beatty, J-Antoni Rafalski, and Bailin Li. Genomewide expression quantitative trait loci (eqtl) analysis in maize. BMC genomics, 12:1-14, 2011. 16 1 A T G C A P Single-Layer BINN (transcriptomics example) A T G C A P Stacked-Layer BINN ... A T G C A P Staggered BINN (metabolomics example) ... A T G C A P Parallel-Layer BINN (a) (b) (c) (d) Figure 9: Flexible BINN architectures for diverse multi-omics scenarios. (a) Single-layer BINN: A single intermediate omics layer (e.g. gene expression) feeds directly into the final integrator; this configuration was used for the TWAS problem. (b) Staggered-layer BINN: Within a single omics modality, pathway subnetworks exchange information via sparse cross-connections. For example, in the shoot-branching network, both auxin (A) and sucrose (S) modules feed into the cytokinin (CK) and strigolactone (SL) subnetworks before final integration.(c) Stacked-layer BINN: Multiple omics layers are embedded in series - outputs from the first layer's subnetworks become inputs to the second layer's modules - mirroring cascades such as transcriptome →metabolome. (d) Parallel-layer BINN: Independent subnetworks process each omics type (e.g. transcriptomics, proteomics, methylation) in parallel, with their latent outputs concatenated only at the final integrator ("late fusion"). [3] Henry Hsu and Peter A Lachenbruch. Paired t test. Wiley StatsRef: statistics reference online, 2014. [4] Oliver Kramer and Oliver Kramer. Scikit-learn. Machine learning for evolution strategies, pages 45-53, 2016. [5] Leann Myers and Maria J Sirois. Spearman correlation coefficients, differences between. Encyclopedia of statistical sciences, 12, 2004. [6] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019. [7] Paulino P ́erez and Gustavo de Los Campos. Genome-wide regression and prediction with the bglr statistical package. Genetics, 198(2):483-495, 2014. [8] Owen M Powell, Francois Barbier, Kai P Voss-Fels, Christine Beveridge, and Mark Cooper. Investigations into the emergent properties of gene-to-phenotype networks across cycles of selection: a case study of shoot branching in plants. in silico Plants, 4(1):diac006, 2022. 17 [9] Shaun Purcell, Benjamin Neale, Kathe Todd-Brown, Lori Thomas, Manuel AR Ferreira, David Bender, Julian Maller, Pamela Sklar, Paul IW De Bakker, Mark J Daly, et al. Plink: a tool set for whole-genome association and population-based linkage analyses. The American journal of human genetics, 81(3):559-575, 2007. [10] J Vladimir Torres-Rodr ́ıguez, Delin Li, Jonathan Turkus, Linsey Newton, Jensina Davis, Lina Lopez-Corona, Waqar Ali, Guangchao Sun, Ravi V Mural, Marcin W Grzybowski, et al. Population-level gene expression can repeatedly link genes to functions in maize. The Plant Journal, 119(2):844-860, 2024. [11] Lilin Yin, Haohao Zhang, Zhenshuang Tang, Jingya Xu, Dong Yin, Zhiwu Zhang, Xiaohui Yuan, Mengjin Zhu, Shuhong Zhao, Xinyun Li, et al. rmvp: a memory-efficient, visualization-enhanced, and parallel-accelerated tool for genome-wide association study. Genomics, proteomics & bioinformatics, 19(4):619-628, 2021. [12] Zhiwu Zhang, Elhan Ersoz, Chao-Qiang Lai, Rory J Todhunter, Hemant K Tiwari, Michael A Gore, Peter J Bradbury, Jianming Yu, Donna K Arnett, Jose M Ordovas, et al. Mixed linear model approach adapted for genome-wide association studies. Nature genetics, 42(4):355-360, 2010. 18
2510.14968
RDD: Retrieval-Based Demonstration Decomposer for Planner Alignment in Long-Horizon Tasks Mingxuan Yan1 Yuping Wang1,2 Zechun Liu3 Jiachen Li1∗ 1University of California, Riverside 2University of Michigan 3Meta AI {myan035, yuping.wang, jiachen.li}@ucr.edu zechunliu@meta.com Abstract To tackle long-horizon tasks, recent hierarchical vision-language-action (VLAs) frameworks employ vision-language model (VLM)-based planners to decompose complex manipulation tasks into simpler sub-tasks that low-level visuomotor poli- cies can easily handle. Typically, the VLM planner is finetuned to learn to decom- pose a target task. This finetuning requires target task demonstrations segmented into sub-tasks by either human annotation or heuristic rules. However, the heuris- tic subtasks can deviate significantly from the training data of the visuomotor policy, which degrades task performance. To address these issues, we propose a Retrieval-based Demonstration Decomposer (RDD) that automatically decomposes demonstrations into sub-tasks by aligning the visual features of the decomposed sub-task intervals with those from the training data of the low-level visuomotor policies. Our method outperforms the state-of-the-art sub-task decomposer on both simulation and real-world tasks, demonstrating robustness across diverse settings. Code and more results are available at rdd-neurips.github.io. 1 Introduction Developing generalist robots that are capable of executing complex, long-horizon tasks in unstructured environments has become one of the central goals of current robotics research. Traditional robotic programming and learning methods often struggle with the variability and complexity inherent in real- world scenarios. Building upon the success of Vision-Language Models (VLMs) and Large Language Models (LLMs), a new class of multi-modal foundation models known as Vision-Language-Action models (VLAs) [1, 2, 3, 4, 5] has emerged specifically for embodied AI applications. As recent studies [6, 7, 8, 9, 10, 11, 12, 13] have shown, integrating high-level planners above the low-level visuomotor policies vastly improves the performance for long-horizon robotic tasks. This has led to the hierarchical VLA paradigm [14, 15, 13, 16, 17, 18, 19, 20]. The planner, often a powerful VLM, performs task planning and reasoning to break down complex tasks into simpler sub-tasks with step-by-step language instructions. A learning-based visuomotor policy, trained on datasets with short-horizon sub-tasks and conditioned on the generated sub-task instructions, performs precise manipulation to complete the sub-tasks one by one, thereby completing long-horizon tasks. Despite its versatility, a vanilla VLM planner typically needs to be finetuned with human demon- strations when deploying to a given task [18, 14, 16]. To build the dataset for planner finetuning, demonstrations are temporally decomposed to sub-tasks by human annotation [14, 16, 18, 19, 15] or heuristics [13, 15, 21, 22, 23, 24, 25]. However, these methods are neither scalable nor efficient, and, most importantly, they could generate sub-tasks that deviate significantly from the training data of the low-level visuomotor policy. Figure 1 illustrates this dilemma. The state-of-the-art sub-task de- composer UVD [25], which uses a heuristic decomposition rule based on visual feature change-point ∗Corresponding author 39th Conference on Neural Information Processing Systems (NeurIPS 2025). arXiv:2510.14968v1 [cs.RO] 16 Oct 2025 Sub-task 1 Sub-task 2 Sub-task...? (b) Sub-tasks appears in the training set of visuomotor policy Sub-task 1 Sub-task 2 (a) (c) LoRA “Move to a position above the teal spoke, keeping the gripper closed.” Sub-task Instructions Visuomotor Policy VLM-Based Task Planner Retrieve similar ones from policy training set RDD Figure 1: The core idea of RDD. (a) Two sub-tasks appear in the visuomotor policy’s training set, on which the policy has been optimized. (b) Existing sub-task decomposers, such as UVD [25], use heuristic decomposition rules and may generate “unfamiliar” sub-tasks that are difficult to handle for the low-level visuomotor policy. (c) In contrast, RDD decomposes the demonstration into sub-tasks that are visually similar to the ones in the training set of the visuomotor policy. The sub-tasks are then used to finetune the high-level planner, which gives sub-task instructions to the low-level visuomotor policy and guides it to finish the task step-by-step. detection, generates sub-tasks that significantly deviate from the training data of the visuomotor policy. Finetuning the planner with these sub-tasks could make the planner generate sub-task instructions that the visuomotor policy is not optimized for, leading to compromised performance. This gap motivates us to develop an automatic, training-free, and computationally efficient approach that a) automatically decomposes demonstration videos for high-level planner finetuning without human annotations or task-specific knowledge and b) aligns the decomposed sub-tasks with the training data of the low-level visuomotor policies. To achieve this, we propose a Retrieval-based Demonstration Decomposer (RDD) that decomposes the demonstration into sub-tasks visually similar to the ones in the training set of the visuomotor policy, as illustrated in Figure 1 (c). Inspired by previous work [25], we employ existing visual encoders [26, 27, 28, 29, 30] that encode images into a compact latent space where distance metrics (e.g., angular distance) are effective in describing the semantic relationship between images. To align the sub-tasks to the training data of the low-level visuomotor policy, we build a sub-task visual feature vector database with the visuomotor training set and design an effective sub-task similarity measure to ensure similar sub-task samples can be efficiently retrieved. We formulate demonstration decomposition as an optimal partitioning problem and employ a dynamic programming-based solver to optimize the decomposition strategy efficiently. The experiments show that RDD consistently outperforms state-of-the-art methods on both simulation and real-world benchmarks. The main contributions of this paper are as follows: • This work is the first to coordinate the high-level planner and low-level visuomotor policy in the hierarchical VLA framework by generating the planner’s finetuning dataset that is well aligned with the visuomotor policy to improve the long-horizon task performance. • We propose RDD, a novel training-free retrieval-based demonstration decomposition framework that aligns the sub-task decomposition with the training data of the visuomotor policy. Specifically, we model demonstration decomposition as an optimal partitioning problem, which can be solved efficiently with a dynamic programming solver. We also provide a detailed theoretical analysis of the complexity and properties of the proposed solver. • We evaluate RDD on both simulation and real-world benchmarks. Experimental results show that RDD outperforms the state-of-the-art heuristic decomposer and is robust across various settings. 2 Related Work Hierarchical VLAs. While single-stage VLAs [1, 2, 3, 4, 5] achieve promising performance in short-horizon manipulation tasks, long-horizon tasks need an in-depth understanding of the task and general planning ability, which is hard to handle by a single-stage model. To this end, hierarchical structures have emerged as a compelling solution for long-horizon manipulation tasks [14, 15, 13, 16, 2 17, 18, 19, 20, 10]. As representative examples, Hi Robot [14] and π0.5 [18] enhance their previous work on visuomotor policy [4, 3] with a VLM-based planner. According to image observation and the overall task goal, the planner provides sub-task instructions at each time step. The low-level policy, conditioned on the instruction, outputs the final actions. Hierarchical structures also enable error correction and human intervention [13, 16, 14]. However, these methods rely on either human annotation or heuristic rules to decompose the demonstrations when finetuning the planner, which is less efficient and could generate sub-tasks that are hard to handle by the visuomotor policy. Demonstration Decomposition. Finetuning the high-level planner in hierarchical VLAs requires demonstrations broken down into sub-tasks with associated labels. Manually performing this segmen- tation [14, 16, 18, 19, 15] is slow and expensive. Human subjectivity also leads to inconsistencies. Heuristic methods [13, 15, 21, 22, 23, 24], such as segmenting based on contact changes or end- effector velocity profiles, require task-specific knowledge for carefully designed rules. In contrast, UVD [25] leverages general visual representation and identifies sub-tasks by detecting frame-by- frame temporal change points of visual embedding distances. However, when applying to hierarchical VLAs, UVD can still sub-optimally decompose sub-tasks, which may deviate significantly from the training data of the visuomotor policy. In contrast, RDD decomposes the demonstrations by explicitly aligning the sub-tasks with the training set of the visuomotor policy, enabling seamless coordination between the planner and visuomotor policy. Visual Representations. Considerable efforts have been made to develop visual encoders that embed RGB frames into compact latent vector spaces [26, 27, 28, 29, 30]. Some of these efforts are specially designed for robotics and manipulation scenarios. For instance, R3M [27] uses time-contrastive learning on large datasets of human videos; LIV [26] learns a value function conditioned on both language instructions and images. These visual representations are designed to capture meaningful information about the scene, objects, and potentially their relationships or temporal dynamics. 3 Retrieval-Based Demonstration Decomposer (RDD) 3.1 Problem Statement Hierarchical VLAs typically follow an imitation learning framework that trains a low-level visuomotor policy πθ(at|st, ot, lt, L) and a high-level planner pϕ(lt|st, ot, lt−1, L). The latter is usually a VLM. at denotes the waypoint action at timestep t, including 6-DoF pose and binary gripper state. Both policy πθ and planner pϕ are conditioned on the RGB image observation ot, proprioceptive states st, and the overall task objective description L in natural language, such as “put the cube in the drawer”. The policy πθ is additionally conditioned on a sub-task instruction lt like “first, pick up the cube”, which is determined by the planner pϕ at time t. During the policy training phase, the raw training dataset Dtrain = {(Si, Li)}Ntrain i=1 is composed of Ntrain demonstrations where Si = {(ai t, si t, oi t)}Ti t=1 and Li represents the corresponding task objective description. To break the complex long-horizon tasks down to simple instructions required by the low-level policy πθ, a demonstration Si is decomposed into a set of partitions Pi = {Ii j}Bi j=1 based on task-specific rules or human annotations. The j-th interval Ii j = {Si[bi j], . . . , Si[ei j]} (bi j < ei j) corresponds to a single coherent sub-task, where bi j, ei j are indexes of the starting and ending frames. All time steps t within the same interval share the same sub-task instruction li t = flang(promptj) labeled manually or generated by a powerful language model. As such, the demonstration is augmented with language descriptions li t to Si aug = {(ai t, si t, oi t, li t)}Ti t=1 and the augmented training set is denoted as Dtrain aug = {(Si aug, Li)}Ntrain i=1 . The policy πθ(at|st, ot, lt, L) is then optimized on Dtrain aug . During the high-level planner finetuning phase, given M demonstrations (M ≪Ntrain) for each task, we construct a planner finetuning dataset Ddemo = {(Si, Li)}M i=1 and predict the partitioning strategy P ∈Π(Si) for Si, where Π(S) denotes all possible partitioning strategies over a sequence S: Π(S) = ( P = {I1, I2, . . . , IK} K [ i=1 Ii = S, Ii ∩Ij = ∅for i ̸= j ) . This can be formulated as an optimal partitioning problem, as illustrated in Figure 2: Pi∗= arg max P∈Π(Si) J(P), (3.1) 3 Eq.3.6 Visuomotor Policy Training Set Eq.3.3 Eq.3.2 Retrieve with ANNS Eq.3.5 Eq.3.1 Solve with Dynamic Programing Figure 2: RDD formulates demonstration decomposition as an optimal partitioning problem. Intervals colored in green are proposed segments of the demonstration Si, and ones colored in blue are retrieved from the visuomotor policy’s training set Dtrain aug . where J(P) is the partitioning strategy scoring function defined on P that evaluates how close the strategy is to the low-level visuomotor policy’s training dataset Dtrain aug . Given the partitioning found, Ddemo is augmented by flang and arranged to Ddemo aug following the same procedure as Dtrain aug . A pre-trained planner pϕ(lt|st, ot, lt−1, L) is then finetuned on Ddemo aug with supervised learning to learn to decompose the new task. 3.2 Demonstration Decomposition as Optimal Partitioning Problem. Dynamic Programming Solver. Brute-force search of Pi∗requires O(2N−1) times of evaluation of J for a N frame’s demonstration, which is computationally intractable. Fortunately, [31] show that when J is interval-wise additive (as illustrated in Figure 2), i.e: J(P) = X I∈P ˜J(I), (3.2) which implies J(P) = J(P1)+J(P2), (P1, P2) ∈{(P1, P2) | P = P1 ∪P2, P1 ∩P2 = ∅} , where ˜J is the scoring function of a single interval. The following optimality holds: Theorem 3.1 (Principle of Optimality [31]). Given an additive scoring function J, any subset P′ of an optimal partition P∗is the optimal partitioning strategy of the intervals it covers. This implies that if we find the partial optimal partitioning strategy for Si[0 : j], it must be a subset of the global optimal Pi∗. This optimality structure allows a dynamic programming algorithm [31] to find the optimal partition with O(N 2) evaluations of the interval scoring function ˜J. In real-world robot learning scenarios, the duration of a sub-task is limited (typically tens of sec- onds) [32, 33, 34, 14], thus the complexity of the algorithm can be further improved by ignoring intervals excessively long. We show that: if the length of the interval is bounded, the complexity can be further reduced to O(N). We provide the algorithm implementation in Appendix A.1, Algorithm 1, and draw the following conclusion: Corollary 3.1.1. If the length of every interval is in the range [Lmin, Lmax], 0 < Lmin < Lmax ≤N, Algorithm 1 finds the optima with O ((Lmax −Lmin) · max(Lmax −Lmin, N −Lmax)) evaluations of the interval scoring function ˜J. We defer the proof to Appendix A.2. When the maximum sub-task interval length Lmax is bounded, which is common in robotics learning scenarios, a linear complexity O(N) is achieved. Considering 4 general cases, in this work, we make no assumption on Lmax and only mildly assume Lmin = 2 for sanity (a valid interval must have both the starting and ending frame). We additionally remark that Algorithm 1 supports parallel evaluation of the scoring function, as the intervals to be evaluated are determined at the beginning. Interval Scoring Function. Recall that ˜J should reflect how well the proposed interval aligns with the intervals in the training set Dtrain aug , we define the interval scoring function ˜J as: Definition 3.1. The scoring function ˜J for an interval I is defined as: ˜J(Ii j) = |Ii j|sim Ii j, ANNS(V(Ii j), Dtrain aug )  = |Ii j|sim(Ii j, ˜Ii j), (3.3) where V maps interval I into a d-dimensional vector representation, |I| is the duration of the I, and ANNS(I, Dtrain aug ) represents the approximate nearest neighbor of the interval proposal I in the training set Dtrain aug under some distance metric δ in Rd. sim is an interval similarity measure. For simplicity, we denote the result of approximate nearest neighbor search for Ii j as ˜Ii j. Eq. 3.3 essentially evaluates how close the proposed interval is to the training set of the visuomotor policy in the training set Dtrain. Moreover, Def. 3.1 ensures the following notable property: Proposition 3.1. Suppose an interval Ii j can be split into K consecutive parts {Ii j1, Ii j2, . . . , Ii jK}, all of which have the same training set similarity score, i.e., sim(Ii j, ˜Ii j) = sim(Ii j1, ˜Ii j1) = · · · = sim(Ii jK, ˜Ii jK). Given the interval scoring function ˜J of Eq. 3.3, and an additive J, the following equality holds: J({Ii j}) = J({Ii j1, Ii j2, . . . , Ii jK}). (3.4) The proof is in Appendix B. This equality implies that J is ignorant of the number of intervals when evaluating nested partitionings with the same similarity score. An alternative way to interpret is that, in Eq. 3.3, sim assigns scores to the sub-task assignment of each timestamp in an interval instead of assigning to the interval as a whole, thus the score summation is irrelevant to the number of intervals in the partitioning strategy. 3.3 Interval Similarity and Overall Objective Interval Similarity Measures. As introduced in Section 2, one can embed the RGB image observa- tion oi t into a compact latent vector space for similarity measures. We define V as: V(I) = concat E(ob), E(oe)  . (3.5) As illustrated in Figure 2, ob, oe are image observations at the beginning and end of I, and E is the embedding function. This formulation is inspired by former studies [26, 35, 25] that the ending frame (i.e., the goal frame) contains rich information about the sub-task goal and thus can be a distinguishable representation. Eq. 3.5 also includes the starting frame, which is essentially the goal state of the previous sub-task, to aggregate context-related information into the vector representation. Let the approximate nearest neighbor of Ii j be ˜Ii j = ANNS(Ii j, Dtrain aug ) we define the similarity measure sim between Ii j, ˜Ii j as: sim(Ii j, ˜Ii j) = − " δ(V(Ii j), V(˜Ii j)) + α 1 −|Ii j| |˜Ii j| # , (3.6) where the first term is the distance between the vector representations of Ii j and ˜Ii j; the second evaluates the relative difference between the temporal durations of two intervals. α is a hyperparameter that controls the weights between temporal and visual similarity. Considering OOD Sub-tasks. While the primary objective of RDD is to align the planner with the visuomotor policy’s existing capabilities, in real-world applications, out-of-distribution (OOD) sub- tasks not learned by the low-level visuomotor may exist. In such scenarios, the objective changes to: aligning sub-task intervals to both existing visuomotor sub-tasks and general sub-tasks, and the newly decomposed sub-tasks will be used to finetune both the visuomotor and the planner. Firstly, to detect the existence of new sub-tasks in demonstrations, one can quantify the novelty of a demonstration by 5 ∆= 1 |P| P I∈P ˜J(I), the average similarity score of the optimal partition P found by the standard RDD algorithm. A low value of ∆indicates a low averaged similarity, which signals novel sub-tasks. An alternate interval similarity measure sim for the OOD setting is defined: sim(Ii j, ˜Ii j) = −δ(Ve(Ii j), Ve(˜Ii j)) | {z } retrieval + βG(Ii j) | {z } general , (3.7) where Ve(I) = E(oe) and only the ending frame is used to calculate the semantic distance due to unpredictable OOD sub-task durations; G evaluates how well a proposed interval aligns with “general” sub-tasks. The hyperparameter β balances the trade-off between aligning with visuomotor sub-tasks and discovering novel, generalizable sub-tasks. G can be implemented using heuristic general sub-task identification functions like UVD [25] to measure how well an interval conforms to generic change-point detection heuristics: G(I) = −1 |I|abs(b −UVD(e, I)), (3.8) where b, e represent the index of the beginning and ending frame of interval I. UVD(e, I) gives the index of the UVD predicted beginning frame, given the goal frame on e. Approximate Nearest Neighbor Search. Considering the vast number of intervals in Dtrain aug and the high-dimensional vector space, we adopt approximate nearest neighbor search (ANNS) to implement the nearest neighbor searcher ANNS for efficient query. In this work, we choose the popular random- projection-trees-based method Annoy [36] as the ANNS implementation, which is computationally efficient and shows good robustness on various data [37]. RDD can also work with GPU-accelerated ANNS libraries like FAISS [38] for further acceleration. Overall Optimization Objective. By substituting Eq. 3.2 and Eq. 3.3 into Eq. 3.1, we have the complete definition of the optimization problem as: Pi∗= arg max P∈Π(Si) X Ii j∈P |Ii j|sim(Ii j, ˜Ii j), (3.9) where sim and V are defined by Eq. 3.6 and Eq. 3.5, respectively. The optimal partitioning strategy Pi∗of demonstration Si can be solved by Algorithm 1. 4 Experiments Implementation and Parameter Settings. We adopt RACER [13] as the base hierarchical VLA framework, which uses RVT [39] as the low-level visuomotor policy πθ and the recent LLaVa-based VLM llama3-llava-next-8B [40] as the pre-trained base model for planner pϕ. We use the pre-trained RVT policy πθ provided by RACER [13] trained Dtrain aug and the validation set of RLBench (labeled with the same decomposition rule as in Dtrain aug ). During the deployment phase, the planner is finetuned for two epochs on Ddemo aug using LoRA [41], with the rank of 128 and a scaling factor of 256 following RACER. The finetuning process takes about 5 minutes with 4 NVIDIA 6000 Ada GPUs. For base parameter settings, we set the weighting factor α = 1 and interval similarity measure sim in Eq. 3.6 for non-OOD scenarios, and use LIV [26] as the visual encoder E that is specifically designed for manipulation tasks. We use Gemini-1.5-flash [42] to generate sub-task language instructions for proposed intervals in Ddemo aug . Visuomotor Policy Training Dataset and Vector Database. We evaluate RDD on the RLBench [32] robot manipulation benchmark. The visuomotor policy training set Dtrain aug is adapted from [13]. Dtrain originally consists of 1908 teleoperated demonstrations from the RLBench’s training set. When generating Dtrain aug , RACER additionally augmented it with heuristic failure-recovery samples, resulting in a training dataset with 10,159 demonstrations. In this work, we only use the original 1908 demonstrations to exclude interference. Demonstrations are decomposed into 12700 sub-task intervals using a task-specific heuristic decomposer based on motion and gripper states. Generally, the decomposer will mark a goal state of a sub-task whenever: 1) the gripper state closes or opens, 2) the arm stops for a pre-defined duration, and 3) the end of the demonstration. More details about this heuristic can be found in Section III.B of [13]; RACER uses GPT-4-turbo as the language 6 Table 1: Multi-task success rates (%) on RLBench. Method Avg. Succ. (↑) Avg. Rank (↓) Close Jar Install Bulb Meat off Grill Open Drawer Place Wine Push Buttons w/o Finetune 52.6 ± 8.2 4.5 ± 1.2 27.6 ± 26.4 34.8 ± 14.2 46.4 ± 26.8 95.6 ± 6.1 83.2 ± 13.0 54.8 ± 9.1 Uniform 71.3 ± 5.4 3.1 ± 1.2 46.4 ± 29.9 51.2 ± 19.2 76.4 ± 22.4 100.0 ± 0.0 80.8 ± 14.5 82.0 ± 7.8 UVD 71.4 ± 5.1 3.0 ± 1.3 44.0 ± 28.7 54.8 ± 20.0 85.2 ± 20.6 100.0 ± 0.0 80.8 ± 15.3 67.2 ± 13.6 RDD (Ours) 74.9 ± 6.9 2.2 ± 0.9 46.0 ± 28.2 52.8 ± 16.4 84.4 ± 21.1 99.2 ± 2.4 86.4 ± 15.4 84.0 ± 7.8 Expert 75.1 ± 4.7 2.2 ± 1.0 50.4 ± 33.1 50.4 ± 13.3 94.4 ± 9.7 99.2 ± 2.4 81.6 ± 15.0 85.6 ± 6.0 Method Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Sweep to Dustpan Turn Tap w/o Finetune 41.2 ± 20.1 36.4 ± 28.8 58.8 ± 23.3 36.0 ± 21.8 57.2 ± 14.9 22.8 ± 32.5 89.2 ± 13.4 Uniform 36.8 ± 15.4 98.0 ± 2.7 92.4 ± 10.8 64.8 ± 16.7 64.4 ± 9.9 34.8 ± 37.7 98.8 ± 3.6 UVD 35.2 ± 12.1 90.4 ± 8.6 96.8 ± 6.6 74.4 ± 29.2 66.8 ± 21.2 43.6 ± 24.6 89.6 ± 11.1 RDD (Ours) 41.2 ± 17.1 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 57.2 ± 29.7 94.0 ± 5.1 Expert 39.6 ± 15.6 91.2 ± 7.3 97.6 ± 5.1 75.2 ± 24.6 66.4 ± 22.0 48.8 ± 35.5 96.0 ± 5.7 labeling function flang to annotate the sub-task intervals, given the language descriptions of the robot movement and initial environment setup. Given Dtrain aug , we build a vector database following Eq. 3.5 and employ Annoy [36] as the ANNS algorithm to retrieve the approximate nearest neighbor. For each frame, to exclude the inference of occlusion, we concatenate the representation vectors of the front-view and gripper-view images into one. We apply the same configuration to UVD for fair comparison. For Annoy, we set the number of random-projection trees to 10 and let the searcher search through all trees at runtime. We empirically find that the choices of the ANNS algorithm or search parameters have a minor impact on the performance. We use angular distance as the distance measure δ, which is written as p 2(1 −cos(u, v)) for normalized vectors u, v. The finetuning dataset Ddemo aug is built on RLBench’s validation set following the same procedure, except that the decomposition strategy is replaced by RDD. Ddemo aug contains three demonstrations for each task. Evaluation Metrics and Baselines. We evaluate the performance of RDD and baselines in terms of multi-task success rates and corresponding rankings across 13 RLBench tasks2. We compare our approach with a variety of baselines that adopt different demonstration decomposition strategies: • Expert [13]: The task-specific expert heuristic decomposer used in Dtrain aug serving as a performance upper bound. • UVD [25]: A task-agnostic decomposer that detects change points of learning-based visual features. • Uniform: A decomposer that divides each demonstration into 10 partitions with equal duration. • w/o Finetune: The planner pϕ is the pre-trained VLM model without finetuning on Ddemo. 4.1 Quantitative Results and Analysis Multi-Task Performance on RLBench. Table 1 shows the overall performance of RDD and baseline methods on multiple manipulation tasks using the base setting in Section 4. Results are averaged over 10 random seeds. RDD achieves a near-oracle performance and only compromises the success rate of merely 0.2% compared with the expert decomposer, our performance upper bound. On the other hand, we observe that UVD performs similarly to the naive uniform sampling strategy. It implies that the change points of learning-based visual features are not always aligned with the samples in Dtrain aug . By aligning the high-level planner to the knowledge of low-level policy, RDD outperforms the baseline methods that blindly decompose the demonstrations without this knowledge. It also suggests that finetuning is necessary for VLM-based planners. All finetuning-based methods achieve over 35% improvement over the vanilla Llama model. 2Tasks on which the low-level visuomotor policy has a decent performance (success rate > 35% with expert planner). It excludes the interference of poorly optimized visuomotor when evaluating planners. Performance on all 18 tasks can be found in Appendix C. 7 Choice of Visual Representation. As an important building block of RDD, the choice of visual representation is of great importance. Table 2 shows the performance of RDD when adopting different visual encoders E, including robotics specialized encoders: LIV [26], R3M [27], VIP [35], VC-1 [28]; and encoders for general vision tasks: CLIP [43], DINOv2 [29] and ResNet [44] pre-trained for ImageNet-1k classification. Results are averaged over three random seeds. Table 2: Results when using dif- ferent visual encoders E. Full re- sults on all tasks can be found in Table 9 in the appendix. Visu. Repr. Avg. Succ. (↑) Avg. Rank (↓) LIV 81.1 ± 0.9 3.7 ± 1.6 R3M 80.0 ± 3.5 3.9 ± 1.7 VIP 75.3 ± 3.4 4.1 ± 2.0 VC-1 75.5 ± 3.1 3.8 ± 2.2 CLIP 78.2 ± 2.1 4.7 ± 2.0 DINOv2 78.4 ± 2.4 4.5 ± 1.8 ResNet 81.1 ± 2.5 3.4 ± 1.5 It can be seen that RDD shows good robustness with various visual encoders and consistently outperforms baselines with the majority of encoders except for VC-1 and VIP, which demonstrates the strong robustness of RDD. VC-1 and VIP, on the other hand, are the only models that do not involve any form of language integration during training and perform the worst among all encoders. This implies the importance of language integration for visual encoders in VLA perception for semantic information retrieval. For instance, subtle pixel differences, such as the change of gripper state, may have a significant difference in language description. Surprisingly, ResNet, whose training does not explicitly involve language supervision, demonstrates a strong performance. The reason may be that its training dataset, ImageNet-1k, implicitly correlates its latent space with the language image labels. Table 3: Results when tun- ing the weighting parameter α. Full results on all tasks can be found in Table 10. α Avg. Succ. Avg. Rank 0 75.0 ± 2.5 3.0 ± 1.0 0.5 75.7 ± 2.4 2.5 ± 0.7 1 81.1 ± 0.9 2.3 ± 1.4 2 76.2 ± 3.0 2.2 ± 0.8 Weighting Parameters. Table 3 shows the impact of α on the perfor- mance of RDD. Results are averaged over three random seeds. When α = 0, i.e., there is no temporal alignment, and the algorithm is confused about sub-tasks whose beginning and ending frames are similar (e.g., reciprocating motion). On the other hand, overly relying on the temporal similarity ignores the semantic relationship between intervals and leads to performance degradation. We also evaluate the impact of the β in Table 5 for OOD scenarios, and the result shows that RDD is less sensitive to β. The choice of β depends on specific applications and user needs. Number of Demonstrations in Ddemo. To explore the data efficiency of RDD, Table 4 shows its averaged success rates under different numbers of demonstrations in Ddemo aug . Results are averaged over three random seeds. Specifically, we break the three-demonstration base setting dataset into three non-overlapping datasets with one demonstration per task to avoid bias induced by varying demonstration qualities. This result shows a high data efficiency of RDD. We credit this efficiency to the less-noisy keyframes provided by RDD, which are more informative for VLM to learn the underlying decomposition rules. Table 4: Results with different numbers of demonstrations per task in Ddemo aug . Full results on all tasks are in Table 11. Demo. Num. Avg. Succ. (↑) Avg. Rank (↓) 1 (RDD) 77.9 ± 4.5 2.0 ± 0.9 3 (RDD) 81.1 ± 0.9 1.6 ± 0.6 3 (UVD) 75.6 ± 1.8 2.4 ± 0.6 Performance on Real-world and OOD sub-tasks. Here we demon- strate RDD’s performance on both real-world and settings where the OOD sub-task appears. We first evaluate RDD on the real-world manipulation benchmark AgiBotWorld-Alpha [33]. We test RDD and UVD on the “supermarket” task, using 152 demos to build the RDD database and 37 demos for testing. For OOD sub-tasks, we test RDD on the human-operated demonstration dataset from Robo- Cerebra [34], which features highly diverse demonstrations in terms of objects, task goals, and arrangements. We use 560 demos to build the RDD database and test on the remaining 140 demos. We use the similarity measure sim in Eq. 3.7 for the OOD setting. We evaluated the quality of the decomposition against ground-truth segmentations using the mean intersection over union (mIoU). As shown in Table 5, RDD outperforms UVD on real-world data. Under OOD settings, RDD consistently outperforms UVD by leveraging potential similarity between sub-tasks. Speed and Scalability. We test the running time of Algorithm 1 with different numbers of frames on AMD EPYC 9254 using one CPU core. Figure 3 plots the running time with/without the prior knowledge of the maximum length of interval Lmax. The results show that the complexity with Lmax grows linearly with the number of frames, which aligns with our conclusion in Corollary 3.1.1, which indicates that when Lmax is determined, the complexity of Algorithm 1 will be O(N). 8 Table 5: Performance on real-world and OOD sub-tasks (IoU). Method AgiBot. (Real World) LIBERO (OOD) UVD 0.506 0.598 RDD 0.706 / RDD (β = 0.25) / 0.624 RDD (β = 0.10) / 0.630 RDD (β = 0.05) / 0.614 Note that Algorithm 1 supports parallel evaluation of the scor- ing function ˜J, and the latency can be significantly reduced with multi-processing. Also, we demonstrate the scalability of RDD when working with GPU-accelerated ANNS algorithms like FAISS [38] in Appendix D. Necessity of Finetuning on Target Tasks: One may ask if the planner can transfer to an unseen new task in zero-shot. We thus build a new planner finetuned before deployment on the training set of the following tasks: “Close Jar”, “Insert Peg”, and “Install Bulb” as the baseline, which learns the visual features but not the task decompositions. Then, we test its performance on the remaining tasks. The results are shown in Table 6. The results are averaged across 10 random seeds, and we also exclude tasks where the visuomotor fails. The results prove the necessity of fine-tuning on target tasks. 100 200 300 400 Number of Frames to Decompose 0 100 200 300 400 500 Runtime (s) w/ Lmax w/o Lmax Figure 3: Linear scaling of running time of Algorithm 1 with Lmax. Decompose with VLMs: VLMs pretrained on internet-scale data are promising to process a variety of video understanding tasks. In Table 7, we compared RDD with a Gemini-2.5-pro [42]- based decomposer with the following prompt: There is a robot doing a task, which can be segmented into mul- tiple steps. A keyframe is where the robot finishes the previous step and begins the next. Can you help me find ALL indexes of keyframes? Please return a list of indices, for example: [15, 65, 105, ...]. Note that the frame index starts from 0 instead of 1. As shown, RDD outperforms Gemini-2.5-pro despite its powerful general video understanding abilities. This result highlights the necessity of the planner aligning and the effectiveness of RDD. Table 6: Vanilla Planner without fine- tuning on the target task. Full results on all tasks are in Table 12. Method Avg. Succ. (↑) Avg. Rank (↓) w/o finetuning on target task 77.9 ± 4.3 1.6 ± 0.5 RDD (Ours) 79.6 ± 7.2 1.4 ± 0.5 Extended Evaluations and Discussions. We provide extended evaluations results in C and further discussions in Appendix D. We also provide a conceptual speed evaluation of RDD when working with the GPU-accelerated ANNS method FAISS [38] in Appendix D.1. 4.2 Qualitative Results and Analysis Figure 4 visualizes the decomposition results of RDD and UVD on both real-world and simulation benchmarks. We can observe that RDD is robust to task-irrelevant interference and can robustly identify the sub-tasks that are close to the expert sub-task division. Also, RDD demonstrates strong robustness for nuance arm movement, where the keyframe localization is challenging precisely. Conversely, UVD fails to locate keyframes precisely, and the generated sub-tasks largely deviate from expert sub-tasks. 5 Discussions and Future Works Table 7: Comparing RDD with Gemini-2.5-pro. Full results on all tasks are in Table 13. Method Avg. Succ. (↑) Avg. Rank (↓) Gemini-2.5-pro 72.6 ± 4.7 1.7 ± 0.4 RDD (Ours) 74.9 ± 6.9 1.3 ± 0.4 Visuomotor Training Data Generation based on Source Dataset: While this work applies RDD to planner-visuomotor alignment, it can also be used to generate additional sub-task training data for visuomotor aligned with a labeled source dataset. By aligning the sub-task interval visual features with the existing source dataset, RDD may make the newly labeled data easier to learn, allowing the visuomotor reuse learned knowledge from the source dataset. Specific Sub-task Interval Features: RDD measures sub-task in- terval similarity in the single-frame image feature space. Some applications, such as hierarchical vision-language navigation [19], which require the planner to use historical landmark images, may necessitate specialized designs of the similarity score function. 9 Expert RDD (Ours) UVD Figure 4: Qualitative results of RDD and UVD functioning on both real-world (AgiBotWorld) and simulation (RLBench and LIBERO) benchmarks. Blocks outlined in black are sub-tasks decomposed by the same task- specific heuristic used in the visuomotor policy’s training set; blocks outlined in green are sub-tasks found by RDD; and blocks outlined in red are sub-tasks found by UVD. Data Quality of the Source Dataset and Data Curation: As a retrieval-based sub-task decomposi- tion method, RDD’s primary objective is to let the high-level planner effectively utilize the skills that the low-level visuomotor policy already possesses. Therefore, RDD is agnostic to the “optimality” of the skills themselves. This ensures the planner generates commands that the policy can reliably execute, rather than potentially “better” ones it cannot handle. On the other hand, in scenarios where the visuomotor policy’s training data contains significant noisy samples that the policy fails to learn, RDD can be easily integrated with dataset curation techniques [45, 46]. These methods can serve as a pre-processing step to filter the visuomotor training set. For instance, CUPID [45] computes an “action influence” score for state-action pairs that can be used to evaluate each segment’s contribution to the policy’s final behavior. By applying a simple threshold, low-influence or flawed segments can be pruned from the dataset before RDD uses it as a reference. This would prevent catastrophic failures by ensuring RDD aligns demonstrations only with high-quality, influential sub-tasks. 6 Conclusion In this work, we present the Retrieval-based Demonstration Decomposer (RDD), a training-free decomposition method that aligns the high-level task planner and low-level visuomotor policy in hierarchical VLAs. By retrieving and aligning sub-task segments with the low-level policy’s training data, RDD enables an effective planner that fully exploits the capability of the visuomotor policy. We formally formulate the demonstration decomposition task into an optimal partitioning problem, which can be efficiently solved by dynamic programming with our novel sub-task interval scoring function. Experiment results demonstrate that RDD outperforms state-of-the-art demonstration decomposers. RDD offers a scalable and promising solution for demonstration decomposition, opening new avenues for planner-policy coordination in hierarchical robot learning systems. References [1] Brianna Zitkovich, Tianhe Yu, Sichun Xu, Peng Xu, Ted Xiao, Fei Xia, Jialin Wu, Paul Wohlhart, Stefan Welker, Ayzaan Wahid, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. In Conference on Robot Learning, pages 2165–2183. PMLR, 2023. [2] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024. 10 [3] Karl Pertsch, Kyle Stachowicz, Brian Ichter, Danny Driess, Suraj Nair, Quan Vuong, Oier Mees, Chelsea Finn, and Sergey Levine. Fast: Efficient action tokenization for vision-language-action models. arXiv preprint arXiv:2501.09747, 2025. [4] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. π0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024. [5] Songming Liu, Lingxuan Wu, Bangguo Li, Hengkai Tan, Huayu Chen, Zhengyi Wang, Ke Xu, Hang Su, and Jun Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint arXiv:2410.07864, 2024. [6] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. [7] Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. Look before you leap: Unveiling the power of gpt-4v in robotic vision-language planning. arXiv preprint arXiv:2311.17842, 2023. [8] Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, et al. Open-world object manipulation using pre-trained vision-language models. arXiv preprint arXiv:2303.00905, 2023. [9] Hongyi Chen, Yunchao Yao, Ruixuan Liu, Changliu Liu, and Jeffrey Ichnowski. Automating robot failure recovery using vision-language models with optimized prompts. arXiv preprint arXiv:2409.03966, 2024. [10] Suneel Belkhale, Tianli Ding, Ted Xiao, Pierre Sermanet, Quon Vuong, Jonathan Tompson, Yevgen Chebotar, Debidatta Dwibedi, and Dorsa Sadigh. Rt-h: Action hierarchies using language. arXiv preprint arXiv:2403.01823, 2024. [11] Peiqi Liu, Yaswanth Orru, Jay Vakil, Chris Paxton, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. Ok-robot: What really matters in integrating open-knowledge models for robotics. arXiv preprint arXiv:2401.12202, 2024. [12] Fangchen Liu, Kuan Fang, Pieter Abbeel, and Sergey Levine. Moka: Open-vocabulary robotic manipulation through mark-based visual prompting. In First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA 2024, 2024. [13] Yinpei Dai, Jayjun Lee, Nima Fazeli, and Joyce Chai. Racer: Rich language-guided failure recovery policies for imitation learning. International Conference on Robotics and Automation (ICRA), 2025. [14] Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, et al. Hi robot: Open-ended instruction following with hierarchical vision-language-action models. arXiv preprint arXiv:2502.19417, 2025. [15] Yi Li, Yuquan Deng, Jesse Zhang, Joel Jang, Marius Memmel, Raymond Yu, Caelan Reed Garrett, Fabio Ramos, Dieter Fox, Anqi Li, et al. Hamster: Hierarchical action models for open-world robot manipulation. arXiv preprint arXiv:2502.05485, 2025. [16] Lucy Xiaoyang Shi, Zheyuan Hu, Tony Z Zhao, Archit Sharma, Karl Pertsch, Jianlan Luo, Sergey Levine, and Chelsea Finn. Yell at your robot: Improving on-the-fly from language corrections. arXiv preprint arXiv:2403.12910, 2024. [17] Weiyu Liu, Neil Nie, Ruohan Zhang, Jiayuan Mao, and Jiajun Wu. Learning compositional behaviors from demonstration and language. In 8th Annual Conference on Robot Learning, 2025. [18] Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, et al. π0.5: a vision- language-action model with open-world generalization. arXiv preprint arXiv:2504.16054, 2025. 11 [19] An-Chieh Cheng, Yandong Ji, Zhaojing Yang, Zaitian Gongye, Xueyan Zou, Jan Kautz, Erdem Bıyık, Hongxu Yin, Sifei Liu, and Xiaolong Wang. Navila: Legged robot vision-language-action model for navigation. arXiv preprint arXiv:2412.04453, 2024. [20] Jiafei Duan, Wentao Yuan, Wilbert Pumacay, Yi Ru Wang, Kiana Ehsani, Dieter Fox, and Ranjay Krishna. Manipulate-anything: Automating real-world robots using vision-language models. arXiv preprint arXiv:2406.18915, 2024. [21] Changyeon Kim, Minho Heo, Doohyun Lee, Jinwoo Shin, Honglak Lee, Joseph J Lim, and Kimin Lee. Subtask-aware visual reward learning from segmented demonstrations. arXiv preprint arXiv:2502.20630, 2025. [22] Wensheng Wang and Ning Tan. Hybridgen: Vlm-guided hybrid planning for scalable data generation of imitation learning. arXiv preprint arXiv:2503.13171, 2025. [23] Ajay Mandlekar, Soroush Nasiriany, Bowen Wen, Iretiayo Akinola, Yashraj Narang, Linxi Fan, Yuke Zhu, and Dieter Fox. Mimicgen: A data generation system for scalable robot learning using human demonstrations. arXiv preprint arXiv:2310.17596, 2023. [24] Tongzhou Mu, Minghua Liu, and Hao Su. Drs: Learning reusable dense rewards for multi-stage tasks. arXiv preprint arXiv:2404.16779, 2024. [25] Zichen Zhang, Yunshuang Li, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Yecheng Ja- son Ma, and Luca Weihs. Universal visual decomposer: Long-horizon manipulation made easy. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6973–6980. IEEE, 2024. [26] Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, and Dinesh Jayaraman. Liv: Language-image representations and rewards for robotic control. arXiv preprint arXiv:2306.00958, 2023. [27] Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint arXiv:2203.12601, 2022. [28] Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Tingfan Wu, Jay Vakil, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? Advances in Neural Information Processing Systems, 36:655–677, 2023. [29] Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Fernandez, et al. Dinov2: Learning robust visual features without supervision, 2023. [30] Timothée Darcet, Maxime Oquab, Julien Mairal, and Piotr Bojanowski. Vision transformers need registers, 2023. [31] Brad Jackson, Jeffrey D Scargle, David Barnes, Sundararajan Arabhi, Alina Alt, Peter Giou- mousis, Elyus Gwin, Paungkaew Sangtrakulcharoen, Linda Tan, and Tun Tao Tsai. An algorithm for optimal partitioning of data on an interval. IEEE Signal Processing Letters, 12(2):105–108, 2005. [32] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019–3026, 2020. [33] Qingwen Bu, Jisong Cai, Li Chen, Xiuqi Cui, Yan Ding, Siyuan Feng, Xindong He, Xu Huang, et al. Agibot world colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems. In 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2025. [34] Songhao Han, Boxiang Qiu, Yue Liao, Siyuan Huang, Chen Gao, Shuicheng Yan, and Si Liu. Robocerebra: A large-scale benchmark for long-horizon robotic manipulation evaluation. arXiv preprint arXiv:2506.06677, 2025. 12 [35] Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint arXiv:2210.00030, 2022. [36] Erik Bernhardsson. ANNOY library. https://github.com/spotify/annoy. Accessed: 2025-05-05. [37] Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull. Ann-benchmarks: A bench- marking tool for approximate nearest neighbor algorithms. Information Systems, 87:101374, 2020. [38] Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre- Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. The faiss library. 2024. [39] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. Rvt: Robotic view transformer for 3d object manipulation. In Conference on Robot Learning, pages 694–710. PMLR, 2023. [40] Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: Stronger llms supercharge multimodal capabilities in the wild. URL https://llava-vl. github. io/blog/2024-05-10-llava-next-stronger-llms, 2024. [41] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022. [42] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. [43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PmLR, 2021. [44] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. [45] Christopher Agia, Rohan Sinha, Jingyun Yang, Rika Antonova, Marco Pavone, Haruki Nishimura, Masha Itkina, and Jeannette Bohg. Cupid: Curating data your robot loves with influence functions. arXiv preprint arXiv:2506.19121, 2025. [46] Joey Hejna, Suvir Mirchandani, Ashwin Balakrishna, Annie Xie, Ayzaan Wahid, Jonathan Tompson, Pannag Sanketi, Dhruv Shah, Coline Devin, and Dorsa Sadigh. Robot data curation with mutual information estimators. arXiv preprint arXiv:2502.08623, 2025. [47] Abby O’Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, Ajinkya Jain, et al. Open x- embodiment: Robotic learning datasets and rt-x models: Open x-embodiment collaboration 0. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6892–6903. IEEE, 2024. [48] Mustafa Shukor, Dana Aubakirova, Francesco Capuano, Pepijn Kooijmans, Steven Palma, Adil Zouitine, Michel Aractingi, Caroline Pascal, Martino Russi, Andres Marafioti, et al. Smolvla: A vision-language-action model for affordable and efficient robotics. arXiv preprint arXiv:2506.01844, 2025. [49] Junming Zhang, Weijia Chen, Yuping Wang, Ram Vasudevan, and Matthew Johnson-Roberson. Point set voting for partial point cloud analysis. IEEE Robotics and Automation Letters, 6(2):596–603, 2021. [50] Mingxuan Yan, Ruijie Zhang, Xuedou Xiao, and Wei Wang. Detvpcc: Roi-based point cloud sequence compression for 3d object detection. arXiv preprint arXiv:2502.04804, 2025. 13 [51] Zehao Wang, Yuping Wang, Zhuoyuan Wu, Hengbo Ma, Zhaowei Li, Hang Qiu, and Jiachen Li. Cmp: Cooperative motion prediction with multi-agent communication. IEEE Robotics and Automation Letters, 2025. [52] Yuping Wang and Jier Chen. Eqdrive: Efficient equivariant motion forecasting with multi- modality for autonomous driving. In 2023 8th International Conference on Robotics and Automation Engineering (ICRAE), pages 224–229. IEEE, 2023. [53] Yuping Wang and Jier Chen. Equivariant map and agent geometry for autonomous driving motion prediction. In 2023 International Conference on Electrical, Computer and Energy Technologies (ICECET), pages 1–6. IEEE, 2023. [54] Shuo Xing, Chengyuan Qian, Yuping Wang, Hongyuan Hua, Kexin Tian, Yang Zhou, and Zhengzhong Tu. Openemma: Open-source multimodal model for end-to-end autonomous driving. In Proceedings of the Winter Conference on Applications of Computer Vision, pages 1001–1009, 2025. [55] Yuping Wang, Xiangyu Huang, Xiaokang Sun, Mingxuan Yan, Shuo Xing, Zhengzhong Tu, and Jiachen Li. Uniocc: A unified benchmark for occupancy forecasting and prediction in autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2025. [56] Yuping Wang, Shuo Xing, Cui Can, Renjie Li, Hongyuan Hua, Kexin Tian, Zhaobin Mo, Xiangbo Gao, Keshu Wu, Sulong Zhou, et al. Generative ai for autonomous driving: Frontiers and opportunities. arXiv preprint arXiv:2505.08854, 2025. [57] Xu Liu, Tong Zhou, Chong Wang, Yuping Wang, Yuanxin Wang, Qinjingwen Cao, Weizhi Du, Yonghuan Yang, Junjun He, Yu Qiao, et al. Toward the unification of generative and discriminative visual foundation model: A survey. The Visual Computer, pages 1–42, 2024. [58] Shuo Xing, Yuping Wang, Peiran Li, Ruizheng Bai, Yueqi Wang, Chengxuan Qian, Huaxiu Yao, and Zhengzhong Tu. Re-align: Aligning vision language models via retrieval-augmented direct preference optimization. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025. [59] Shuo Xing, Zezhou Sun, Shuangyu Xie, Kaiyuan Chen, Yanjia Huang, Yuping Wang, Jiachen Li, Dezhen Song, and Zhengzhong Tu. Can large vision language models read maps like a human? arXiv preprint arXiv:2503.14607, 2025. [60] Congrui Hetang and Yuping Wang. Novel view synthesis from a single rgbd image for indoor scenes. In 2023 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML), pages 447–450. IEEE, 2023. [61] Xiangbo Gao, Yuheng Wu, Xuewen Luo, Keshu Wu, Xinghao Chen, Yuping Wang, Chenxi Liu, Yang Zhou, and Zhengzhong Tu. Airv2x: Unified air-ground vehicle-to-everything collaboration. arXiv preprint arXiv:2506.19283, 2025. [62] Peiran Li, Xinkai Zou, Zhuohang Wu, Ruifeng Li, Shuo Xing, Hanwen Zheng, Zhikai Hu, Yuping Wang, Haoxi Li, Qin Yuan, et al. Safeflow: A principled protocol for trustworthy and transactional autonomous agent systems. arXiv preprint arXiv:2506.07564, 2025. 14 Appendix A Algorithm Details A.1 Dynamic Programming Solver to Problem 3.1 Algorithm 1 shows the dynamic programming solver. Lmax and Lmin are user-specified parameters that determine the minimum and maximum length of proposed sub-task intervals. ˜J is the interval scoring function. Algorithm 1 MaxSumPartition Require: Sequence u = [u1, u2, . . . , un], scoring function ˜J, integer Lmin, integer Lmax Ensure: Maximum score sum and partition of u 1: Initialize dp[0 . . . n] ←−∞, parts[0 . . . n] ←∅ 2: dp[0] ←0 3: for i = Lmin + 1 to n do 4: bestScore ←−∞ 5: bestPartition ←∅ 6: for j = 0 to i do 7: if Lmin ≤i −j ≤Lmax then 8: segment ←u[j : i] 9: s ←˜J(segment) ▷can be evaluated in parallel before loops 10: if dp[j] + s > bestScore then 11: bestScore ←dp[j] + s 12: bestPartition ←parts[j] ∪{segment} 13: end if 14: end if 15: end for 16: if bestPartition ̸= ∅then 17: dp[i] ←bestScore 18: parts[i] ←bestPartition 19: else 20: dp[i] ←dp[i −1] 21: parts[i] ←parts[i −1] 22: end if 23: end for 24: return (dp[n], parts[n]) A.2 Proof of Correctness and Complexity of Algorithm 1 Proof. The correctness when Lmin = 1, Lmax = |Si| (we denote the algorithm under this case as Algorithm 0) has been proven in Proof 2 of Jackson et al. [31]. It is sufficient to prove the cases when 1 < Lmin < Lmax < |Si|. Notice that Algorithm 1 is equivalent to a special case of Algorithm 0 by constructing an adapted scoring function defined as Algorithm 2, where the score of invalid intervals Algorithm 2 AdaptedScoreFunc Require: Sub-sequence u′ = [u1, u2, . . . , um], scoring function ˜J, integer Lmin, integer Lmax Ensure: Adapted score of u′ 1: if Lmin ≤|u′| ≤Lmax then 2: return ˜J(u′) 3: else 4: return −∞ 5: end if is −∞. Note that ADAPTEDSCOREFUNC preserves the additiveness of J, because if any interval in a strategy P violates the length assumption, P is also invalid, i.e., J(P) = −∞. Given the facts: 1) 15 the correctness of Algorithm 0 has been proven by Proof 2 [31]; 2) Algorithm 1 is equivalent to a special case of Algorithm 0, by implication, the correctness of Algorithm 1 is proved. As for complexity, let N be the length of the demonstration and M be the number of evaluations to ˜J, we have: M = Lmax−Lmin+1 X j=2 j + (N −Lmax)(Lmax −Lmin + 1) = (Lmax −Lmin + 3)(Lmax −Lmin) 2 + (N −Lmax)(Lmax −Lmin + 1) = O ((Lmax −Lmin) · max(Lmax −Lmin, N −Lmax)) B Proof of Proposition 3.1 Proof. Let the identical similarity scores equal s, and let ei j, ei j be the starting and ending indexes of interval Ii j, respectively. By inducing Eq. 3.2 and Eq. 3.3 we rewrite the left side of Eq. 3.1 to: J({Ii j}) = ˜J(Ii j) = (ei j −bi j)s And the right side: J({Ii j1, Ii j2, . . . , Ii jK}) = K X k=1 ˜J(Ii jk) = (ei j1 −bi j1 + ei j2 −bi j2 + · · · + ei jk −bi jk)s = (ei j1 −bi j + ei j2 −ei j1 + · · · + ei j −ei j(k−1)) | {z } Since intervals are consecutive. s = (ei j −bi j)s = J({Ii j}) C Additional Quantitative Results Tables 8-13 provide the complete multi-task performances of the results in Section 4.1, including ones where the visuomotor policy fails. D Discussions D.1 Work with GPU-accelerated ANNS The nearest neighbor (NN) search in RDD can be significantly accelerated using GPU-accelerated libraries like FAISS [38]. We conduct experiments on a typical database of 10 million entries (mainstream policy training dataset scale, as shown in Section D.2) of 2048 dimensions (same dimension as our main experiment in Table 1) As shown in Table 14, FAISS can achieve > 300 NN queries per second on one NVIDIA 4090 GPU. Under this setting, RDD only needs < 2 minutes to decompose a 500-frame video (5 fps), with a max interval length of 100 frames. (44549 NN queries in total). In other words, as part of the offline dataset building process, RDD can decompose demonstrations at a high speed of 4.3 fps, which shows the high scalability of RDD. D.2 Scale of Mainstream Robotics Datasets To support the aforementioned experiment settings, here we provide the scale of some of the most popular open-sourced robotics datasets. In summary, assuming each demonstration can be 16 Table 8: Main results with all RLBench Tasks. Method Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine w/o Finetune 39.7 ± 6.5 4.3 ± 1.3 27.6 ± 26.4 5.6 ± 6.7 34.8 ± 14.2 46.4 ± 26.8 95.6 ± 6.1 3.2 ± 4.3 16.0 ± 11.7 83.2 ± 13.0 Uniform 54.5 ± 4.1 3.1 ± 1.2 46.4 ± 29.9 8.8 ± 11.8 51.2 ± 19.2 76.4 ± 22.4 100.0 ± 0.0 0.8 ± 1.6 25.6 ± 9.2 80.8 ± 14.5 UVD [25] 54.3 ± 3.9 3.2 ± 1.2 44.0 ± 28.7 10.4 ± 14.1 54.8 ± 20.0 85.2 ± 20.6 100.0 ± 0.0 1.2 ± 1.8 25.2 ± 11.0 80.8 ± 15.3 Expert [13] 57.6 ± 3.3 2.0 ± 0.9 50.4 ± 33.1 12.0 ± 17.9 50.4 ± 13.3 94.4 ± 9.7 99.2 ± 2.4 3.2 ± 3.9 26.0 ± 10.6 81.6 ± 15.0 RDD (Ours) 57.3 ± 5.3 2.4 ± 1.1 46.0 ± 28.2 16.8 ± 18.6 52.8 ± 16.4 84.4 ± 21.1 99.2 ± 2.4 2.0 ± 2.0 32.4 ± 10.2 86.4 ± 15.4 Method Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap w/o Finetune 54.8 ± 9.1 41.2 ± 20.1 36.4 ± 28.8 58.8 ± 23.3 36.0 ± 21.8 57.2 ± 14.9 2.8 ± 2.6 2.8 ± 3.6 22.8 ± 32.5 89.2 ± 13.4 Uniform 82.0 ± 7.8 36.8 ± 15.4 98.0 ± 2.7 92.4 ± 10.8 64.8 ± 16.7 64.4 ± 9.9 13.6 ± 7.8 5.2 ± 4.7 34.8 ± 37.7 98.8 ± 3.6 UVD [25] 67.2 ± 13.6 35.2 ± 12.1 90.4 ± 8.6 96.8 ± 6.6 74.4 ± 29.2 66.8 ± 21.2 9.6 ± 6.2 1.6 ± 3.7 43.6 ± 24.6 89.6 ± 11.1 Expert [13] 85.6 ± 6.0 39.6 ± 15.6 91.2 ± 7.3 97.6 ± 5.1 75.2 ± 24.6 66.4 ± 22.0 14.8 ± 11.2 5.2 ± 4.4 48.8 ± 35.5 96.0 ± 5.7 RDD (Ours) 84.0 ± 7.8 41.2 ± 17.1 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 5.2 ± 3.6 1.6 ± 2.7 57.2 ± 29.7 94.0 ± 5.1 Table 9: Multi-task performances with different visual representations. Visu. Repr. Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine LIV [26] 61.0 ± 0.4 3.6 ± 1.7 68.0 ± 17.3 4.0 ± 3.3 41.3 ± 21.7 96.0 ± 5.7 100.0 ± 0.0 1.3 ± 1.9 32.0 ± 5.7 96.0 ± 3.3 R3M [27] 59.2 ± 2.5 4.2 ± 1.8 65.3 ± 21.0 4.0 ± 5.7 44.0 ± 13.1 97.3 ± 3.8 98.7 ± 1.9 0.0 ± 0.0 12.0 ± 3.3 86.7 ± 6.8 VIP [35] 56.5 ± 2.0 4.0 ± 2.0 72.0 ± 14.2 2.7 ± 1.9 38.7 ± 15.4 93.3 ± 9.4 100.0 ± 0.0 5.3 ± 5.0 22.7 ± 10.0 89.3 ± 8.2 VC-1 [28] 56.9 ± 1.6 3.7 ± 2.3 73.3 ± 9.4 1.3 ± 1.9 30.7 ± 18.6 93.3 ± 9.4 100.0 ± 0.0 8.0 ± 3.3 20.0 ± 8.6 86.7 ± 10.0 CLIP [43] 58.4 ± 1.6 4.3 ± 2.0 62.7 ± 21.7 4.0 ± 3.3 46.7 ± 15.4 96.0 ± 5.7 100.0 ± 0.0 0.0 ± 0.0 16.0 ± 3.3 82.7 ± 13.6 DINOv2 [29] 58.3 ± 1.4 4.4 ± 1.6 65.3 ± 18.0 2.7 ± 3.8 41.3 ± 21.2 98.7 ± 1.9 100.0 ± 0.0 1.3 ± 1.9 13.3 ± 1.9 80.0 ± 8.6 ResNet [44] 60.5 ± 2.0 3.8 ± 1.7 68.0 ± 20.4 2.7 ± 3.8 46.7 ± 10.5 96.0 ± 5.7 100.0 ± 0.0 0.0 ± 0.0 13.3 ± 5.0 84.0 ± 6.5 Visu. Repr. Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap LIV [26] 78.7 ± 8.2 57.3 ± 3.8 97.3 ± 1.9 97.3 ± 3.8 88.0 ± 8.6 73.3 ± 3.8 4.0 ± 3.3 1.3 ± 1.9 66.7 ± 5.0 94.7 ± 5.0 R3M [27] 89.3 ± 5.0 50.7 ± 10.0 85.3 ± 5.0 94.7 ± 5.0 94.7 ± 5.0 82.7 ± 12.4 8.0 ± 3.3 1.3 ± 1.9 53.3 ± 8.2 97.3 ± 3.8 VIP [35] 92.0 ± 3.3 64.0 ± 8.6 93.3 ± 3.8 89.3 ± 10.0 10.7 ± 7.5 46.7 ± 36.7 2.7 ± 1.9 5.3 ± 3.8 92.0 ± 8.6 97.3 ± 1.9 VC-1 [28] 93.3 ± 5.0 65.3 ± 8.2 93.3 ± 6.8 92.0 ± 8.6 9.3 ± 6.8 52.0 ± 31.5 4.0 ± 3.3 9.3 ± 7.5 92.0 ± 8.6 100.0 ± 0.0 CLIP [43] 89.3 ± 5.0 46.7 ± 13.2 81.3 ± 3.8 94.7 ± 5.0 94.7 ± 5.0 81.3 ± 10.5 10.7 ± 3.8 5.3 ± 5.0 52.0 ± 8.6 88.0 ± 14.2 DINOv2 [29] 88.0 ± 3.3 50.7 ± 15.4 85.3 ± 5.0 94.7 ± 5.0 94.7 ± 5.0 78.7 ± 6.8 9.3 ± 1.9 4.0 ± 3.3 46.7 ± 5.0 94.7 ± 7.5 ResNet [44] 93.3 ± 1.9 61.3 ± 9.4 98.7 ± 1.9 90.7 ± 8.2 86.7 ± 10.5 73.3 ± 6.8 17.3 ± 11.5 1.3 ± 1.9 56.0 ± 5.7 100.0 ± 0.0 Table 10: Multi-task performance with different weighting parameter α. α Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine 0 57.3 ± 2.1 2.8 ± 1.0 74.7 ± 10.0 0.0 ± 0.0 32.0 ± 18.2 52.0 ± 14.2 98.7 ± 1.9 6.7 ± 5.0 29.3 ± 5.0 81.3 ± 5.0 0.5 57.6 ± 2.2 2.7 ± 0.8 73.3 ± 10.5 0.0 ± 0.0 33.3 ± 11.5 49.3 ± 21.0 100.0 ± 0.0 5.3 ± 3.8 29.3 ± 6.8 92.0 ± 5.7 1 61.0 ± 0.4 2.3 ± 1.4 68.0 ± 17.3 4.0 ± 3.3 41.3 ± 21.7 96.0 ± 5.7 100.0 ± 0.0 1.3 ± 1.9 32.0 ± 5.7 96.0 ± 3.3 2 58.0 ± 2.3 2.2 ± 0.8 76.0 ± 9.8 0.0 ± 0.0 33.3 ± 10.5 48.0 ± 11.3 100.0 ± 0.0 8.0 ± 3.3 29.3 ± 3.8 88.0 ± 6.5 α Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap 0 90.7 ± 5.0 62.7 ± 12.4 96.0 ± 5.7 77.3 ± 3.8 76.0 ± 11.8 58.7 ± 9.4 18.7 ± 12.4 1.3 ± 1.9 78.7 ± 16.4 96.0 ± 3.3 0.5 85.3 ± 6.8 62.7 ± 13.2 96.0 ± 3.3 80.0 ± 0.0 76.0 ± 14.2 58.7 ± 6.8 17.3 ± 13.6 0.0 ± 0.0 80.0 ± 15.0 97.3 ± 1.9 1 78.7 ± 8.2 57.3 ± 3.8 97.3 ± 1.9 97.3 ± 3.8 88.0 ± 8.6 73.3 ± 3.8 4.0 ± 3.3 1.3 ± 1.9 66.7 ± 5.0 94.7 ± 5.0 2 88.0 ± 8.6 60.0 ± 9.8 100.0 ± 0.0 81.3 ± 6.8 77.3 ± 12.4 58.7 ± 6.8 14.7 ± 10.0 1.3 ± 1.9 84.0 ± 17.3 96.0 ± 5.7 decomposed into 10 sub-tasks, the mainstream policy training datasets typically have 10 million sub-tasks. (≈10 million entries in the database). The Open X-Embodiment (OXE) Dataset [47]: A landmark collaboration among 21 institutions, OXE provides over 1 million robot trajectories from 22 different robot embodiments. Its explicit goal is to foster the development of generalist models, demonstrating that the community is actively removing the proprietary data barriers of the past. The explicit purpose of OXE is to provide a standardized, large-scale resource to train generalist models that have demonstrated significant performance gains by training on this diverse data. Hugging Face 17 Table 11: Multi-task performance with different numbers of demonstrations. Demo. Num. Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine 1 (RDD) 59.1 ± 3.4 1.8 ± 0.8 75.6 ± 11.1 6.2 ± 5.7 35.6 ± 9.5 65.8 ± 20.9 100.0 ± 0.0 5.8 ± 4.7 25.8 ± 3.8 91.1 ± 7.7 3 (RDD) 61.0 ± 0.4 1.8 ± 0.7 68.0 ± 17.3 4.0 ± 3.3 41.3 ± 21.7 96.0 ± 5.7 100.0 ± 0.0 1.3 ± 1.9 32.0 ± 5.7 96.0 ± 3.3 3 (UVD [25]) 57.1 ± 0.3 2.3 ± 0.6 66.7 ± 13.2 4.0 ± 5.7 37.3 ± 19.1 93.3 ± 9.4 100.0 ± 0.0 2.7 ± 1.9 21.3 ± 10.5 77.3 ± 11.5 Demo. Num. Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap 1 (RDD) 86.7 ± 5.7 60.4 ± 11.8 97.8 ± 2.0 78.2 ± 13.3 61.8 ± 28.7 79.6 ± 16.2 11.6 ± 8.1 2.2 ± 2.7 87.1 ± 16.2 92.9 ± 14.7 3 (RDD) 78.7 ± 8.2 57.3 ± 3.8 97.3 ± 1.9 97.3 ± 3.8 88.0 ± 8.6 73.3 ± 3.8 4.0 ± 3.3 1.3 ± 1.9 66.7 ± 5.0 94.7 ± 5.0 3 (UVD [25]) 62.7 ± 12.4 44.0 ± 6.5 84.0 ± 6.5 96.0 ± 5.7 85.3 ± 13.2 82.7 ± 12.4 16.0 ± 3.3 1.3 ± 1.9 60.0 ± 16.3 93.3 ± 5.0 Table 12: Multi-task performance of Vanilla Planner without finetuning on the target task. Method Avg. Succ. (↑) Avg. Rank (↓) Meat off Grill Open Drawer Place Wine Push Buttons Put in Cupboard w/o finetuning on target task 77.9 ± 4.3 1.6 ± 0.5 99.2 ± 2.4 99.6 ± 1.2 86.4 ± 8.8 70.4 ± 8.0 61.2 ± 16.8 RDD (Ours) 79.6 ± 7.2 1.4 ± 0.5 84.4 ± 21.1 99.2 ± 2.4 86.4 ± 15.4 84.0 ± 7.8 41.2 ± 17.1 Method Put in Drawer Put in Safe Drag Stick Slide Block Sweep to Dustpan Turn Tap w/o finetuning on target task 86.0 ± 14.3 94.8 ± 9.0 74.0 ± 23.3 62.4 ± 16.8 30.0 ± 15.3 92.4 ± 14.4 RDD (Ours) 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 57.2 ± 29.7 94.0 ± 5.1 Table 13: Comparing RDD with Gemini-2.5-pro. Method Avg. Succ. (↑) Avg. Rank (↓) Close Jar Install Bulb Meat off Grill Open Drawer Place Wine Push Buttons Gemini-2.5-pro 72.6 ± 4.7 1.7 ± 0.4 41.2 ± 30.1 40.8 ± 16.5 83.2 ± 15.2 99.6 ± 1.2 86.4 ± 11.1 82.4 ± 8.6 RDD (Ours) 74.9 ± 6.9 1.3 ± 0.4 46.0 ± 28.2 52.8 ± 16.4 84.4 ± 21.1 99.2 ± 2.4 86.4 ± 15.4 84.0 ± 7.8 Method Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Sweep to Dustpan Turn Tap Gemini-2.5-pro 38.4 ± 10.6 94.0 ± 6.8 93.6 ± 9.2 73.6 ± 22.3 63.6 ± 14.4 48.4 ± 14.9 99.2 ± 2.4 RDD (Ours) 41.2 ± 17.1 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 57.2 ± 29.7 94.0 ± 5.1 Table 14: Performance of FAISS nearest neighbor search and RDD time on NVIDIA 4090. Hardware Dim Vec Num QPS Lmax LI RDD Time (s) NVIDIA 4090 2048 10M 386 100 500 115 (4.3 fps) SmolVLA Dataset [48]: The emergence of models like SmolVLA, a capable vision-language-action model trained entirely on 23k episodes from 487 open-sourced community datasets through the LeRobot framework, outperforms the closed-source-dataset policy π0 [4]. AgiBot World [33]: AgiBot World provides not just datasets but complete open-source toolchains and standardized data collection pipelines, further enriching the public ecosystem. It has collected over 1 million trajectories on over 100 homogeneous robots. Their proposed model GO-1, entirely trained on this open-sourced dataset, outperforms the closed-source dataset policy π0 [4]. E Broader Impacts The potential negative societal impacts of our work are consistent with those commonly observed in robotics research, including risks related to privacy, labor displacement, and unintended misuse in sensitive contexts. While our method is primarily designed to enhance the scalability and efficiency of robotic systems, such advancements may accelerate deployment in real-world settings, amplifying both positive and negative consequences. In parallel, advances in point cloud analysis [49, 50], 18 cooperative motion prediction [51], autonomous driving frameworks [52, 53, 54, 55], and generative AI for driving [56] highlight both the promise and the risks of deploying increasingly capable vision- action models. Broader surveys of visual foundation models [57] and new work on multimodal alignment [58, 59] further strengthen the importance of trustworthy design and governance, especially for safety-critical applications such as transportation and human-robot interaction [60, 61]. To mitigate these risks, we emphasize alignment with ethical guidelines, including fairness, accountability, transparency, and safety, and encourage interdisciplinary collaboration to monitor societal impacts as these technologies evolve [62]. 19
RDD: Retrieval-Based Demonstration Decomposer for Planner Alignment in Long-Horizon Tasks Mingxuan Yan1 Yuping Wang1,2 Zechun Liu3 Jiachen Li1∗ 1 2 3Meta AI {myan035, yuping.wang, Abstract To tackle long-horizon tasks, recent hierarchical vision-language-action (VLAs) frameworks employ vision-language model (VLM)-based planners to decompose complex manipulation tasks into simpler sub-tasks that low-level visuomotor policies can easily handle. Typically, the VLM planner is finetuned to learn to decompose a target task. This finetuning requires target task demonstrations segmented into sub-tasks by either human annotation or heuristic rules. However, the heuristic subtasks can deviate significantly from the training data of the visuomotor policy, which degrades task performance. To address these issues, we propose a Retrieval-based Demonstration Decomposer (RDD) that automatically decomposes demonstrations into sub-tasks by aligning the visual features of the decomposed sub-task intervals with those from the training data of the low-level visuomotor policies. Our method outperforms the state-of-the-art sub-task decomposer on both simulation and real-world tasks, demonstrating robustness across diverse settings. Code and more results are available at rdd-neurips.github.io. 1 Introduction Developing generalist robots that are capable of executing complex, long-horizon tasks in unstructured environments has become one of the central goals of current robotics research. Traditional robotic programming and learning methods often struggle with the variability and complexity inherent in realworld scenarios. Building upon the success of Vision-Language Models (VLMs) and Large Language Models (LLMs), a new class of multi-modal foundation models known as Vision-Language-Action models (VLAs) [1, 2, 3, 4, 5] has emerged specifically for embodied AI applications. As recent studies [6, 7, 8, 9, 10, 11, 12, 13] have shown, integrating high-level planners above the low-level visuomotor policies vastly improves the performance for long-horizon robotic tasks. This has led to the hierarchical VLA paradigm [14, 15, 13, 16, 17, 18, 19, 20]. The planner, often a powerful VLM, performs task planning and reasoning to break down complex tasks into simpler sub-tasks with step-by-step language instructions. A learning-based visuomotor policy, trained on datasets with short-horizon sub-tasks and conditioned on the generated sub-task instructions, performs precise manipulation to complete the sub-tasks one by one, thereby completing long-horizon tasks. Despite its versatility, a vanilla VLM planner typically needs to be finetuned with human demonstrations when deploying to a given task [18, 14, 16]. To build the dataset for planner finetuning, demonstrations are temporally decomposed to sub-tasks by human annotation [14, 16, 18, 19, 15] or heuristics [13, 15, 21, 22, 23, 24, 25]. However, these methods are neither scalable nor efficient, and, most importantly, they could generate sub-tasks that deviate significantly from the training data of the low-level visuomotor policy. Figure 1 illustrates this dilemma. The state-of-the-art sub-task decomposer UVD [25], which uses a heuristic decomposition rule based on visual feature change-point ∗Corresponding author 39th Conference on Neural Information Processing Systems (NeurIPS 2025). 16 Oct 2025 Sub-task 1 Sub-task 2 Sub-task...? (b) Sub-tasks appears in the training set of visuomotor policy Sub-task 1 Sub-task 2 (a) (c) LoRA "Move to a position above the teal spoke, keeping the gripper closed." Sub-task Instructions Visuomotor Policy VLM-Based Task Planner Retrieve similar ones from policy training set RDD Figure 1: The core idea of RDD. (a) Two sub-tasks appear in the visuomotor policy's training set, on which the policy has been optimized. (b) Existing sub-task decomposers, such as UVD [25], use heuristic decomposition rules and may generate "unfamiliar" sub-tasks that are difficult to handle for the low-level visuomotor policy. (c) In contrast, RDD decomposes the demonstration into sub-tasks that are visually similar to the ones in the training set of the visuomotor policy. The sub-tasks are then used to finetune the high-level planner, which gives sub-task instructions to the low-level visuomotor policy and guides it to finish the task step-by-step. detection, generates sub-tasks that significantly deviate from the training data of the visuomotor policy. Finetuning the planner with these sub-tasks could make the planner generate sub-task instructions that the visuomotor policy is not optimized for, leading to compromised performance. This gap motivates us to develop an automatic, training-free, and computationally efficient approach that a) automatically decomposes demonstration videos for high-level planner finetuning without human annotations or task-specific knowledge and b) aligns the decomposed sub-tasks with the training data of the low-level visuomotor policies. To achieve this, we propose a Retrieval-based Demonstration Decomposer (RDD) that decomposes the demonstration into sub-tasks visually similar to the ones in the training set of the visuomotor policy, as illustrated in Figure 1 (c). Inspired by previous work [25], we employ existing visual encoders [26, 27, 28, 29, 30] that encode images into a compact latent space where distance metrics (e.g., angular distance) are effective in describing the semantic relationship between images. To align the sub-tasks to the training data of the low-level visuomotor policy, we build a sub-task visual feature vector database with the visuomotor training set and design an effective sub-task similarity measure to ensure similar sub-task samples can be efficiently retrieved. We formulate demonstration decomposition as an optimal partitioning problem and employ a dynamic programming-based solver to optimize the decomposition strategy efficiently. The experiments show that RDD consistently outperforms state-of-the-art methods on both simulation and real-world benchmarks. The main contributions of this paper are as follows: • This work is the first to coordinate the high-level planner and low-level visuomotor policy in the hierarchical VLA framework by generating the planner's finetuning dataset that is well aligned with the visuomotor policy to improve the long-horizon task performance. • We propose RDD, a novel training-free retrieval-based demonstration decomposition framework that aligns the sub-task decomposition with the training data of the visuomotor policy. Specifically, we model demonstration decomposition as an optimal partitioning problem, which can be solved efficiently with a dynamic programming solver. We also provide a detailed theoretical analysis of the complexity and properties of the proposed solver. • We evaluate RDD on both simulation and real-world benchmarks. Experimental results show that RDD outperforms the state-of-the-art heuristic decomposer and is robust across various settings. 2 Related Work Hierarchical VLAs. While single-stage VLAs [1, 2, 3, 4, 5] achieve promising performance in short-horizon manipulation tasks, long-horizon tasks need an in-depth understanding of the task and general planning ability, which is hard to handle by a single-stage model. To this end, hierarchical structures have emerged as a compelling solution for long-horizon manipulation tasks [14, 15, 13, 16, 2 17, 18, 19, 20, 10]. As representative examples, Hi Robot [14] and π0.5 [18] enhance their previous work on visuomotor policy [4, 3] with a VLM-based planner. According to image observation and the overall task goal, the planner provides sub-task instructions at each time step. The low-level policy, conditioned on the instruction, outputs the final actions. Hierarchical structures also enable error correction and human intervention [13, 16, 14]. However, these methods rely on either human annotation or heuristic rules to decompose the demonstrations when finetuning the planner, which is less efficient and could generate sub-tasks that are hard to handle by the visuomotor policy. Demonstration Decomposition. Finetuning the high-level planner in hierarchical VLAs requires demonstrations broken down into sub-tasks with associated labels. Manually performing this segmentation [14, 16, 18, 19, 15] is slow and expensive. Human subjectivity also leads to inconsistencies. Heuristic methods [13, 15, 21, 22, 23, 24], such as segmenting based on contact changes or endeffector velocity profiles, require task-specific knowledge for carefully designed rules. In contrast, UVD [25] leverages general visual representation and identifies sub-tasks by detecting frame-byframe temporal change points of visual embedding distances. However, when applying to hierarchical VLAs, UVD can still sub-optimally decompose sub-tasks, which may deviate significantly from the training data of the visuomotor policy. In contrast, RDD decomposes the demonstrations by explicitly aligning the sub-tasks with the training set of the visuomotor policy, enabling seamless coordination between the planner and visuomotor policy. Visual Representations. Considerable efforts have been made to develop visual encoders that embed RGB frames into compact latent vector spaces [26, 27, 28, 29, 30]. Some of these efforts are specially designed for robotics and manipulation scenarios. For instance, R3M [27] uses time-contrastive learning on large datasets of human videos; LIV [26] learns a value function conditioned on both language instructions and images. These visual representations are designed to capture meaningful information about the scene, objects, and potentially their relationships or temporal dynamics. 3 Retrieval-Based Demonstration Decomposer (RDD) 3.1 Problem Statement Hierarchical VLAs typically follow an imitation learning framework that trains a low-level visuomotor policy πθ(at|st, ot, lt, L) and a high-level planner pφ(lt|st, ot, lt-1, L). The latter is usually a VLM. at denotes the waypoint action at timestep t, including 6-DoF pose and binary gripper state. Both policy πθ and planner pφ are conditioned on the RGB image observation ot, proprioceptive states st, and the overall task objective description L in natural language, such as "put the cube in the drawer". The policy πθ is additionally conditioned on a sub-task instruction lt like "first, pick up the cube", which is determined by the planner pφ at time t. During the policy training phase, the raw training dataset Dtrain = {(Si, Li)}Ntrain i=1 is composed of Ntrain demonstrations where Si = {(ai t, si t, oi t)}Ti t=1 and Li represents the corresponding task objective description. To break the complex long-horizon tasks down to simple instructions required by the low-level policy πθ, a demonstration Si is decomposed into a set of partitions Pi = {Ii j}Bi j=1 based on task-specific rules or human annotations. The j-th interval Ii j = {Si[bi j], . . . , Si[ei j]} (bi j 35% with expert planner). It excludes the interference of poorly optimized visuomotor when evaluating planners. Performance on all 18 tasks can be found in Appendix C. 7 Choice of Visual Representation. As an important building block of RDD, the choice of visual representation is of great importance. Table 2 shows the performance of RDD when adopting different visual encoders E, including robotics specialized encoders: LIV [26], R3M [27], VIP [35], VC-1 [28]; and encoders for general vision tasks: CLIP [43], DINOv2 [29] and ResNet [44] pre-trained for ImageNet-1k classification. Results are averaged over three random seeds. Table 2: Results when using different visual encoders E. Full results on all tasks can be found in Table 9 in the appendix. Visu. Repr. Avg. Succ. (↑) Avg. Rank (↓) LIV 81.1 ± 0.9 3.7 ± 1.6 R3M 80.0 ± 3.5 3.9 ± 1.7 VIP 75.3 ± 3.4 4.1 ± 2.0 VC-1 75.5 ± 3.1 3.8 ± 2.2 CLIP 78.2 ± 2.1 4.7 ± 2.0 DINOv2 78.4 ± 2.4 4.5 ± 1.8 ResNet 81.1 ± 2.5 3.4 ± 1.5 It can be seen that RDD shows good robustness with various visual encoders and consistently outperforms baselines with the majority of encoders except for VC-1 and VIP, which demonstrates the strong robustness of RDD. VC-1 and VIP, on the other hand, are the only models that do not involve any form of language integration during training and perform the worst among all encoders. This implies the importance of language integration for visual encoders in VLA perception for semantic information retrieval. For instance, subtle pixel differences, such as the change of gripper state, may have a significant difference in language description. Surprisingly, ResNet, whose training does not explicitly involve language supervision, demonstrates a strong performance. The reason may be that its training dataset, ImageNet-1k, implicitly correlates its latent space with the language image labels. Table 3: Results when tuning the weighting parameter α. Full results on all tasks can be found in Table 10. α Avg. Succ. Avg. Rank 0 75.0 ± 2.5 3.0 ± 1.0 0.5 75.7 ± 2.4 2.5 ± 0.7 1 81.1 ± 0.9 2.3 ± 1.4 2 76.2 ± 3.0 2.2 ± 0.8 Weighting Parameters. Table 3 shows the impact of α on the performance of RDD. Results are averaged over three random seeds. When α = 0, i.e., there is no temporal alignment, and the algorithm is confused about sub-tasks whose beginning and ending frames are similar (e.g., reciprocating motion). On the other hand, overly relying on the temporal similarity ignores the semantic relationship between intervals and leads to performance degradation. We also evaluate the impact of the β in Table 5 for OOD scenarios, and the result shows that RDD is less sensitive to β. The choice of β depends on specific applications and user needs. Number of Demonstrations in Ddemo. To explore the data efficiency of RDD, Table 4 shows its averaged success rates under different numbers of demonstrations in Ddemo aug . Results are averaged over three random seeds. Specifically, we break the three-demonstration base setting dataset into three non-overlapping datasets with one demonstration per task to avoid bias induced by varying demonstration qualities. This result shows a high data efficiency of RDD. We credit this efficiency to the less-noisy keyframes provided by RDD, which are more informative for VLM to learn the underlying decomposition rules. Table 4: Results with different numbers of demonstrations per task in Ddemo aug . Full results on all tasks are in Table 11. Demo. Num. Avg. Succ. (↑) Avg. Rank (↓) 1 (RDD) 77.9 ± 4.5 2.0 ± 0.9 3 (RDD) 81.1 ± 0.9 1.6 ± 0.6 3 (UVD) 75.6 ± 1.8 2.4 ± 0.6 Performance on Real-world and OOD sub-tasks. Here we demonstrate RDD's performance on both real-world and settings where the OOD sub-task appears. We first evaluate RDD on the real-world manipulation benchmark AgiBotWorld-Alpha [33]. We test RDD and UVD on the "supermarket" task, using 152 demos to build the RDD database and 37 demos for testing. For OOD sub-tasks, we test RDD on the human-operated demonstration dataset from RoboCerebra [34], which features highly diverse demonstrations in terms of objects, task goals, and arrangements. We use 560 demos to build the RDD database and test on the remaining 140 demos. We use the similarity measure sim in Eq. 3.7 for the OOD setting. We evaluated the quality of the decomposition against ground-truth segmentations using the mean intersection over union (mIoU). As shown in Table 5, RDD outperforms UVD on real-world data. Under OOD settings, RDD consistently outperforms UVD by leveraging potential similarity between sub-tasks. Speed and Scalability. We test the running time of Algorithm 1 with different numbers of frames on AMD EPYC 9254 using one CPU core. Figure 3 plots the running time with/without the prior knowledge of the maximum length of interval Lmax. The results show that the complexity with Lmax grows linearly with the number of frames, which aligns with our conclusion in Corollary 3.1.1, which indicates that when Lmax is determined, the complexity of Algorithm 1 will be O(N). 8 Table 5: Performance on real-world and OOD sub-tasks (IoU). Method AgiBot. (Real World) LIBERO (OOD) UVD 0.506 0.598 RDD 0.706 / RDD (β = 0.25) / 0.624 RDD (β = 0.10) / 0.630 RDD (β = 0.05) / 0.614 Note that Algorithm 1 supports parallel evaluation of the scoring function ̃J, and the latency can be significantly reduced with multi-processing. Also, we demonstrate the scalability of RDD when working with GPU-accelerated ANNS algorithms like FAISS [38] in Appendix D. Necessity of Finetuning on Target Tasks: One may ask if the planner can transfer to an unseen new task in zero-shot. We thus build a new planner finetuned before deployment on the training set of the following tasks: "Close Jar", "Insert Peg", and "Install Bulb" as the baseline, which learns the visual features but not the task decompositions. Then, we test its performance on the remaining tasks. The results are shown in Table 6. The results are averaged across 10 random seeds, and we also exclude tasks where the visuomotor fails. The results prove the necessity of fine-tuning on target tasks. 100 200 300 400 Number of Frames to Decompose 0 100 200 300 400 500 Runtime (s) w/ Lmax w/o Lmax Figure 3: Linear scaling of running time of Algorithm 1 with Lmax. Decompose with VLMs: VLMs pretrained on internet-scale data are promising to process a variety of video understanding tasks. In Table 7, we compared RDD with a Gemini-2.5-pro [42]- based decomposer with the following prompt: There is a robot doing a task, which can be segmented into multiple steps. A keyframe is where the robot finishes the previous step and begins the next. Can you help me find ALL indexes of keyframes? Please return a list of indices, for example: [15, 65, 105, ...]. Note that the frame index starts from 0 instead of 1. As shown, RDD outperforms Gemini-2.5-pro despite its powerful general video understanding abilities. This result highlights the necessity of the planner aligning and the effectiveness of RDD. Table 6: Vanilla Planner without finetuning on the target task. Full results on all tasks are in Table 12. Method Avg. Succ. (↑) Avg. Rank (↓) w/o finetuning on target task 77.9 ± 4.3 1.6 ± 0.5 RDD (Ours) 79.6 ± 7.2 1.4 ± 0.5 Extended Evaluations and Discussions. We provide extended evaluations results in C and further discussions in Appendix D. We also provide a conceptual speed evaluation of RDD when working with the GPU-accelerated ANNS method FAISS [38] in Appendix D.1. 4.2 Qualitative Results and Analysis Figure 4 visualizes the decomposition results of RDD and UVD on both real-world and simulation benchmarks. We can observe that RDD is robust to task-irrelevant interference and can robustly identify the sub-tasks that are close to the expert sub-task division. Also, RDD demonstrates strong robustness for nuance arm movement, where the keyframe localization is challenging precisely. Conversely, UVD fails to locate keyframes precisely, and the generated sub-tasks largely deviate from expert sub-tasks. 5 Discussions and Future Works Table 7: Comparing RDD with Gemini-2.5-pro. Full results on all tasks are in Table 13. Method Avg. Succ. (↑) Avg. Rank (↓) Gemini-2.5-pro 72.6 ± 4.7 1.7 ± 0.4 RDD (Ours) 74.9 ± 6.9 1.3 ± 0.4 Visuomotor Training Data Generation based on Source Dataset: While this work applies RDD to planner-visuomotor alignment, it can also be used to generate additional sub-task training data for visuomotor aligned with a labeled source dataset. By aligning the sub-task interval visual features with the existing source dataset, RDD may make the newly labeled data easier to learn, allowing the visuomotor reuse learned knowledge from the source dataset. Specific Sub-task Interval Features: RDD measures sub-task interval similarity in the single-frame image feature space. Some applications, such as hierarchical vision-language navigation [19], which require the planner to use historical landmark images, may necessitate specialized designs of the similarity score function. 9 Expert RDD (Ours) UVD Figure 4: Qualitative results of RDD and UVD functioning on both real-world (AgiBotWorld) and simulation (RLBench and LIBERO) benchmarks. Blocks outlined in black are sub-tasks decomposed by the same taskspecific heuristic used in the visuomotor policy's training set; blocks outlined in green are sub-tasks found by RDD; and blocks outlined in red are sub-tasks found by UVD. Data Quality of the Source Dataset and Data Curation: As a retrieval-based sub-task decomposition method, RDD's primary objective is to let the high-level planner effectively utilize the skills that the low-level visuomotor policy already possesses. Therefore, RDD is agnostic to the "optimality" of the skills themselves. This ensures the planner generates commands that the policy can reliably execute, rather than potentially "better" ones it cannot handle. On the other hand, in scenarios where the visuomotor policy's training data contains significant noisy samples that the policy fails to learn, RDD can be easily integrated with dataset curation techniques [45, 46]. These methods can serve as a pre-processing step to filter the visuomotor training set. For instance, CUPID [45] computes an "action influence" score for state-action pairs that can be used to evaluate each segment's contribution to the policy's final behavior. By applying a simple threshold, low-influence or flawed segments can be pruned from the dataset before RDD uses it as a reference. This would prevent catastrophic failures by ensuring RDD aligns demonstrations only with high-quality, influential sub-tasks. 6 Conclusion In this work, we present the Retrieval-based Demonstration Decomposer (RDD), a training-free decomposition method that aligns the high-level task planner and low-level visuomotor policy in hierarchical VLAs. By retrieving and aligning sub-task segments with the low-level policy's training data, RDD enables an effective planner that fully exploits the capability of the visuomotor policy. We formally formulate the demonstration decomposition task into an optimal partitioning problem, which can be efficiently solved by dynamic programming with our novel sub-task interval scoring function. Experiment results demonstrate that RDD outperforms state-of-the-art demonstration decomposers. RDD offers a scalable and promising solution for demonstration decomposition, opening new avenues for planner-policy coordination in hierarchical robot learning systems. References [1] Brianna Zitkovich, Tianhe Yu, Sichun Xu, Peng Xu, Ted Xiao, Fei Xia, Jialin Wu, Paul Wohlhart, Stefan Welker, Ayzaan Wahid, et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. In Conference on Robot Learning, pages 2165-2183. PMLR, 2023. [2] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint , 2024. 10 [3] Karl Pertsch, Kyle Stachowicz, Brian Ichter, Danny Driess, Suraj Nair, Quan Vuong, Oier Mees, Chelsea Finn, and Sergey Levine. Fast: Efficient action tokenization for vision-language-action models. arXiv preprint , 2025. [4] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. π0: A vision-language-action flow model for general robot control. arXiv preprint , 2024. [5] Songming Liu, Lingxuan Wu, Bangguo Li, Hengkai Tan, Huayu Chen, Zhengyi Wang, Ke Xu, Hang Su, and Jun Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint , 2024. [6] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint , 2022. [7] Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. Look before you leap: Unveiling the power of gpt-4v in robotic vision-language planning. arXiv preprint , 2023. [8] Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Vuong, Paul Wohlhart, Sean Kirmani, Brianna Zitkovich, Fei Xia, et al. Open-world object manipulation using pre-trained vision-language models. arXiv preprint , 2023. [9] Hongyi Chen, Yunchao Yao, Ruixuan Liu, Changliu Liu, and Jeffrey Ichnowski. Automating robot failure recovery using vision-language models with optimized prompts. arXiv preprint , 2024. [10] Suneel Belkhale, Tianli Ding, Ted Xiao, Pierre Sermanet, Quon Vuong, Jonathan Tompson, Yevgen Chebotar, Debidatta Dwibedi, and Dorsa Sadigh. Rt-h: Action hierarchies using language. arXiv preprint , 2024. [11] Peiqi Liu, Yaswanth Orru, Jay Vakil, Chris Paxton, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. Ok-robot: What really matters in integrating open-knowledge models for robotics. arXiv preprint , 2024. [12] Fangchen Liu, Kuan Fang, Pieter Abbeel, and Sergey Levine. Moka: Open-vocabulary robotic manipulation through mark-based visual prompting. In First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA 2024, 2024. [13] Yinpei Dai, Jayjun Lee, Nima Fazeli, and Joyce Chai. Racer: Rich language-guided failure recovery policies for imitation learning. International Conference on Robotics and Automation (ICRA), 2025. [14] Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, et al. Hi robot: Open-ended instruction following with hierarchical vision-language-action models. arXiv preprint , 2025. [15] Yi Li, Yuquan Deng, Jesse Zhang, Joel Jang, Marius Memmel, Raymond Yu, Caelan Reed Garrett, Fabio Ramos, Dieter Fox, Anqi Li, et al. Hamster: Hierarchical action models for open-world robot manipulation. arXiv preprint , 2025. [16] Lucy Xiaoyang Shi, Zheyuan Hu, Tony Z Zhao, Archit Sharma, Karl Pertsch, Jianlan Luo, Sergey Levine, and Chelsea Finn. Yell at your robot: Improving on-the-fly from language corrections. arXiv preprint , 2024. [17] Weiyu Liu, Neil Nie, Ruohan Zhang, Jiayuan Mao, and Jiajun Wu. Learning compositional behaviors from demonstration and language. In 8th Annual Conference on Robot Learning, 2025. [18] Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, et al. π0.5: a visionlanguage-action model with open-world generalization. arXiv preprint , 2025. 11 [19] An-Chieh Cheng, Yandong Ji, Zhaojing Yang, Zaitian Gongye, Xueyan Zou, Jan Kautz, Erdem Bıyık, Hongxu Yin, Sifei Liu, and Xiaolong Wang. Navila: Legged robot vision-language-action model for navigation. arXiv preprint , 2024. [20] Jiafei Duan, Wentao Yuan, Wilbert Pumacay, Yi Ru Wang, Kiana Ehsani, Dieter Fox, and Ranjay Krishna. Manipulate-anything: Automating real-world robots using vision-language models. arXiv preprint , 2024. [21] Changyeon Kim, Minho Heo, Doohyun Lee, Jinwoo Shin, Honglak Lee, Joseph J Lim, and Kimin Lee. Subtask-aware visual reward learning from segmented demonstrations. arXiv preprint , 2025. [22] Wensheng Wang and Ning Tan. Hybridgen: Vlm-guided hybrid planning for scalable data generation of imitation learning. arXiv preprint , 2025. [23] Ajay Mandlekar, Soroush Nasiriany, Bowen Wen, Iretiayo Akinola, Yashraj Narang, Linxi Fan, Yuke Zhu, and Dieter Fox. Mimicgen: A data generation system for scalable robot learning using human demonstrations. arXiv preprint , 2023. [24] Tongzhou Mu, Minghua Liu, and Hao Su. Drs: Learning reusable dense rewards for multi-stage tasks. arXiv preprint , 2024. [25] Zichen Zhang, Yunshuang Li, Osbert Bastani, Abhishek Gupta, Dinesh Jayaraman, Yecheng Jason Ma, and Luca Weihs. Universal visual decomposer: Long-horizon manipulation made easy. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6973-6980. IEEE, 2024. [26] Yecheng Jason Ma, William Liang, Vaidehi Som, Vikash Kumar, Amy Zhang, Osbert Bastani, and Dinesh Jayaraman. Liv: Language-image representations and rewards for robotic control. arXiv preprint , 2023. [27] Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3m: A universal visual representation for robot manipulation. arXiv preprint , 2022. [28] Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Tingfan Wu, Jay Vakil, et al. Where are we in the search for an artificial visual cortex for embodied intelligence? Advances in Neural Information Processing Systems, 36:655-677, 2023. [29] Maxime Oquab, Timothée Darcet, Theo Moutakanni, Huy V. Vo, Marc Szafraniec, Vasil Khalidov, Fernandez, et al. Dinov2: Learning robust visual features without supervision, 2023. [30] Timothée Darcet, Maxime Oquab, Julien Mairal, and Piotr Bojanowski. Vision transformers need registers, 2023. [31] Brad Jackson, Jeffrey D Scargle, David Barnes, Sundararajan Arabhi, Alina Alt, Peter Gioumousis, Elyus Gwin, Paungkaew Sangtrakulcharoen, Linda Tan, and Tun Tao Tsai. An algorithm for optimal partitioning of data on an interval. IEEE Signal Processing Letters, 12(2):105-108, 2005. [32] Stephen James, Zicong Ma, David Rovick Arrojo, and Andrew J Davison. Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5(2):3019-3026, 2020. [33] Qingwen Bu, Jisong Cai, Li Chen, Xiuqi Cui, Yan Ding, Siyuan Feng, Xindong He, Xu Huang, et al. Agibot world colosseo: A large-scale manipulation platform for scalable and intelligent embodied systems. In 2025 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2025. [34] Songhao Han, Boxiang Qiu, Yue Liao, Siyuan Huang, Chen Gao, Shuicheng Yan, and Si Liu. Robocerebra: A large-scale benchmark for long-horizon robotic manipulation evaluation. arXiv preprint , 2025. 12 [35] Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. arXiv preprint , 2022. [36] Erik Bernhardsson. ANNOY library. https://github.com/spotify/annoy. Accessed: 2025-05-05. [37] Martin Aumüller, Erik Bernhardsson, and Alexander Faithfull. Ann-benchmarks: A benchmarking tool for approximate nearest neighbor algorithms. Information Systems, 87:101374, 2020. [38] Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, PierreEmmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. The faiss library. 2024. [39] Ankit Goyal, Jie Xu, Yijie Guo, Valts Blukis, Yu-Wei Chao, and Dieter Fox. Rvt: Robotic view transformer for 3d object manipulation. In Conference on Robot Learning, pages 694-710. PMLR, 2023. [40] Bo Li, Kaichen Zhang, Hao Zhang, Dong Guo, Renrui Zhang, Feng Li, Yuanhan Zhang, Ziwei Liu, and Chunyuan Li. Llava-next: Stronger llms supercharge multimodal capabilities in the wild. URL https://llava-vl. github. io/blog/2024-05-10-llava-next-stronger-llms, 2024. [41] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022. [42] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint , 2023. [43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PmLR, 2021. [44] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. [45] Christopher Agia, Rohan Sinha, Jingyun Yang, Rika Antonova, Marco Pavone, Haruki Nishimura, Masha Itkina, and Jeannette Bohg. Cupid: Curating data your robot loves with influence functions. arXiv preprint , 2025. [46] Joey Hejna, Suvir Mirchandani, Ashwin Balakrishna, Annie Xie, Ayzaan Wahid, Jonathan Tompson, Pannag Sanketi, Dhruv Shah, Coline Devin, and Dorsa Sadigh. Robot data curation with mutual information estimators. arXiv preprint , 2025. [47] Abby O'Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, Abraham Lee, Acorn Pooley, Agrim Gupta, Ajay Mandlekar, Ajinkya Jain, et al. Open xembodiment: Robotic learning datasets and rt-x models: Open x-embodiment collaboration 0. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 6892-6903. IEEE, 2024. [48] Mustafa Shukor, Dana Aubakirova, Francesco Capuano, Pepijn Kooijmans, Steven Palma, Adil Zouitine, Michel Aractingi, Caroline Pascal, Martino Russi, Andres Marafioti, et al. Smolvla: A vision-language-action model for affordable and efficient robotics. arXiv preprint , 2025. [49] Junming Zhang, Weijia Chen, Yuping Wang, Ram Vasudevan, and Matthew Johnson-Roberson. Point set voting for partial point cloud analysis. IEEE Robotics and Automation Letters, 6(2):596-603, 2021. [50] Mingxuan Yan, Ruijie Zhang, Xuedou Xiao, and Wei Wang. Detvpcc: Roi-based point cloud sequence compression for 3d object detection. arXiv preprint , 2025. 13 [51] Zehao Wang, Yuping Wang, Zhuoyuan Wu, Hengbo Ma, Zhaowei Li, Hang Qiu, and Jiachen Li. Cmp: Cooperative motion prediction with multi-agent communication. IEEE Robotics and Automation Letters, 2025. [52] Yuping Wang and Jier Chen. Eqdrive: Efficient equivariant motion forecasting with multimodality for autonomous driving. In 2023 8th International Conference on Robotics and Automation Engineering (ICRAE), pages 224-229. IEEE, 2023. [53] Yuping Wang and Jier Chen. Equivariant map and agent geometry for autonomous driving motion prediction. In 2023 International Conference on Electrical, Computer and Energy Technologies (ICECET), pages 1-6. IEEE, 2023. [54] Shuo Xing, Chengyuan Qian, Yuping Wang, Hongyuan Hua, Kexin Tian, Yang Zhou, and Zhengzhong Tu. Openemma: Open-source multimodal model for end-to-end autonomous driving. In Proceedings of the Winter Conference on Applications of Computer Vision, pages 1001-1009, 2025. [55] Yuping Wang, Xiangyu Huang, Xiaokang Sun, Mingxuan Yan, Shuo Xing, Zhengzhong Tu, and Jiachen Li. Uniocc: A unified benchmark for occupancy forecasting and prediction in autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2025. [56] Yuping Wang, Shuo Xing, Cui Can, Renjie Li, Hongyuan Hua, Kexin Tian, Zhaobin Mo, Xiangbo Gao, Keshu Wu, Sulong Zhou, et al. Generative ai for autonomous driving: Frontiers and opportunities. arXiv preprint , 2025. [57] Xu Liu, Tong Zhou, Chong Wang, Yuping Wang, Yuanxin Wang, Qinjingwen Cao, Weizhi Du, Yonghuan Yang, Junjun He, Yu Qiao, et al. Toward the unification of generative and discriminative visual foundation model: A survey. The Visual Computer, pages 1-42, 2024. [58] Shuo Xing, Yuping Wang, Peiran Li, Ruizheng Bai, Yueqi Wang, Chengxuan Qian, Huaxiu Yao, and Zhengzhong Tu. Re-align: Aligning vision language models via retrieval-augmented direct preference optimization. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025. [59] Shuo Xing, Zezhou Sun, Shuangyu Xie, Kaiyuan Chen, Yanjia Huang, Yuping Wang, Jiachen Li, Dezhen Song, and Zhengzhong Tu. Can large vision language models read maps like a human? arXiv preprint , 2025. [60] Congrui Hetang and Yuping Wang. Novel view synthesis from a single rgbd image for indoor scenes. In 2023 International Conference on Image Processing, Computer Vision and Machine Learning (ICICML), pages 447-450. IEEE, 2023. [61] Xiangbo Gao, Yuheng Wu, Xuewen Luo, Keshu Wu, Xinghao Chen, Yuping Wang, Chenxi Liu, Yang Zhou, and Zhengzhong Tu. Airv2x: Unified air-ground vehicle-to-everything collaboration. arXiv preprint , 2025. [62] Peiran Li, Xinkai Zou, Zhuohang Wu, Ruifeng Li, Shuo Xing, Hanwen Zheng, Zhikai Hu, Yuping Wang, Haoxi Li, Qin Yuan, et al. Safeflow: A principled protocol for trustworthy and transactional autonomous agent systems. arXiv preprint , 2025. 14 Appendix A Algorithm Details A.1 Dynamic Programming Solver to Problem 3.1 Algorithm 1 shows the dynamic programming solver. Lmax and Lmin are user-specified parameters that determine the minimum and maximum length of proposed sub-task intervals. ̃J is the interval scoring function. Algorithm 1 MaxSumPartition Require: Sequence u = [u1, u2, . . . , un], scoring function ̃J, integer Lmin, integer Lmax Ensure: Maximum score sum and partition of u 1: Initialize dp[0 . . . n] ←-∞, parts[0 . . . n] ←∅ 2: dp[0] ←0 3: for i = Lmin + 1 to n do 4: bestScore ←-∞ 5: bestPartition ←∅ 6: for j = 0 to i do 7: if Lmin ≤i -j ≤Lmax then 8: segment ←u[j : i] 9: s ← ̃J(segment) ▷can be evaluated in parallel before loops 10: if dp[j] + s > bestScore then 11: bestScore ←dp[j] + s 12: bestPartition ←parts[j] ∪{segment} 13: end if 14: end if 15: end for 16: if bestPartition ̸= ∅then 17: dp[i] ←bestScore 18: parts[i] ←bestPartition 19: else 20: dp[i] ←dp[i -1] 21: parts[i] ←parts[i -1] 22: end if 23: end for 24: return (dp[n], parts[n]) A.2 Proof of Correctness and Complexity of Algorithm 1 Proof. The correctness when Lmin = 1, Lmax = |Si| (we denote the algorithm under this case as Algorithm 0) has been proven in Proof 2 of Jackson et al. [31]. It is sufficient to prove the cases when 1 300 NN queries per second on one NVIDIA 4090 GPU. Under this setting, RDD only needs < 2 minutes to decompose a 500-frame video (5 fps), with a max interval length of 100 frames. (44549 NN queries in total). In other words, as part of the offline dataset building process, RDD can decompose demonstrations at a high speed of 4.3 fps, which shows the high scalability of RDD. D.2 Scale of Mainstream Robotics Datasets To support the aforementioned experiment settings, here we provide the scale of some of the most popular open-sourced robotics datasets. In summary, assuming each demonstration can be 16 Table 8: Main results with all RLBench Tasks. Method Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine w/o Finetune 39.7 ± 6.5 4.3 ± 1.3 27.6 ± 26.4 5.6 ± 6.7 34.8 ± 14.2 46.4 ± 26.8 95.6 ± 6.1 3.2 ± 4.3 16.0 ± 11.7 83.2 ± 13.0 Uniform 54.5 ± 4.1 3.1 ± 1.2 46.4 ± 29.9 8.8 ± 11.8 51.2 ± 19.2 76.4 ± 22.4 100.0 ± 0.0 0.8 ± 1.6 25.6 ± 9.2 80.8 ± 14.5 UVD [25] 54.3 ± 3.9 3.2 ± 1.2 44.0 ± 28.7 10.4 ± 14.1 54.8 ± 20.0 85.2 ± 20.6 100.0 ± 0.0 1.2 ± 1.8 25.2 ± 11.0 80.8 ± 15.3 Expert [13] 57.6 ± 3.3 2.0 ± 0.9 50.4 ± 33.1 12.0 ± 17.9 50.4 ± 13.3 94.4 ± 9.7 99.2 ± 2.4 3.2 ± 3.9 26.0 ± 10.6 81.6 ± 15.0 RDD (Ours) 57.3 ± 5.3 2.4 ± 1.1 46.0 ± 28.2 16.8 ± 18.6 52.8 ± 16.4 84.4 ± 21.1 99.2 ± 2.4 2.0 ± 2.0 32.4 ± 10.2 86.4 ± 15.4 Method Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap w/o Finetune 54.8 ± 9.1 41.2 ± 20.1 36.4 ± 28.8 58.8 ± 23.3 36.0 ± 21.8 57.2 ± 14.9 2.8 ± 2.6 2.8 ± 3.6 22.8 ± 32.5 89.2 ± 13.4 Uniform 82.0 ± 7.8 36.8 ± 15.4 98.0 ± 2.7 92.4 ± 10.8 64.8 ± 16.7 64.4 ± 9.9 13.6 ± 7.8 5.2 ± 4.7 34.8 ± 37.7 98.8 ± 3.6 UVD [25] 67.2 ± 13.6 35.2 ± 12.1 90.4 ± 8.6 96.8 ± 6.6 74.4 ± 29.2 66.8 ± 21.2 9.6 ± 6.2 1.6 ± 3.7 43.6 ± 24.6 89.6 ± 11.1 Expert [13] 85.6 ± 6.0 39.6 ± 15.6 91.2 ± 7.3 97.6 ± 5.1 75.2 ± 24.6 66.4 ± 22.0 14.8 ± 11.2 5.2 ± 4.4 48.8 ± 35.5 96.0 ± 5.7 RDD (Ours) 84.0 ± 7.8 41.2 ± 17.1 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 5.2 ± 3.6 1.6 ± 2.7 57.2 ± 29.7 94.0 ± 5.1 Table 9: Multi-task performances with different visual representations. Visu. Repr. Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine LIV [26] 61.0 ± 0.4 3.6 ± 1.7 68.0 ± 17.3 4.0 ± 3.3 41.3 ± 21.7 96.0 ± 5.7 100.0 ± 0.0 1.3 ± 1.9 32.0 ± 5.7 96.0 ± 3.3 R3M [27] 59.2 ± 2.5 4.2 ± 1.8 65.3 ± 21.0 4.0 ± 5.7 44.0 ± 13.1 97.3 ± 3.8 98.7 ± 1.9 0.0 ± 0.0 12.0 ± 3.3 86.7 ± 6.8 VIP [35] 56.5 ± 2.0 4.0 ± 2.0 72.0 ± 14.2 2.7 ± 1.9 38.7 ± 15.4 93.3 ± 9.4 100.0 ± 0.0 5.3 ± 5.0 22.7 ± 10.0 89.3 ± 8.2 VC-1 [28] 56.9 ± 1.6 3.7 ± 2.3 73.3 ± 9.4 1.3 ± 1.9 30.7 ± 18.6 93.3 ± 9.4 100.0 ± 0.0 8.0 ± 3.3 20.0 ± 8.6 86.7 ± 10.0 CLIP [43] 58.4 ± 1.6 4.3 ± 2.0 62.7 ± 21.7 4.0 ± 3.3 46.7 ± 15.4 96.0 ± 5.7 100.0 ± 0.0 0.0 ± 0.0 16.0 ± 3.3 82.7 ± 13.6 DINOv2 [29] 58.3 ± 1.4 4.4 ± 1.6 65.3 ± 18.0 2.7 ± 3.8 41.3 ± 21.2 98.7 ± 1.9 100.0 ± 0.0 1.3 ± 1.9 13.3 ± 1.9 80.0 ± 8.6 ResNet [44] 60.5 ± 2.0 3.8 ± 1.7 68.0 ± 20.4 2.7 ± 3.8 46.7 ± 10.5 96.0 ± 5.7 100.0 ± 0.0 0.0 ± 0.0 13.3 ± 5.0 84.0 ± 6.5 Visu. Repr. Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap LIV [26] 78.7 ± 8.2 57.3 ± 3.8 97.3 ± 1.9 97.3 ± 3.8 88.0 ± 8.6 73.3 ± 3.8 4.0 ± 3.3 1.3 ± 1.9 66.7 ± 5.0 94.7 ± 5.0 R3M [27] 89.3 ± 5.0 50.7 ± 10.0 85.3 ± 5.0 94.7 ± 5.0 94.7 ± 5.0 82.7 ± 12.4 8.0 ± 3.3 1.3 ± 1.9 53.3 ± 8.2 97.3 ± 3.8 VIP [35] 92.0 ± 3.3 64.0 ± 8.6 93.3 ± 3.8 89.3 ± 10.0 10.7 ± 7.5 46.7 ± 36.7 2.7 ± 1.9 5.3 ± 3.8 92.0 ± 8.6 97.3 ± 1.9 VC-1 [28] 93.3 ± 5.0 65.3 ± 8.2 93.3 ± 6.8 92.0 ± 8.6 9.3 ± 6.8 52.0 ± 31.5 4.0 ± 3.3 9.3 ± 7.5 92.0 ± 8.6 100.0 ± 0.0 CLIP [43] 89.3 ± 5.0 46.7 ± 13.2 81.3 ± 3.8 94.7 ± 5.0 94.7 ± 5.0 81.3 ± 10.5 10.7 ± 3.8 5.3 ± 5.0 52.0 ± 8.6 88.0 ± 14.2 DINOv2 [29] 88.0 ± 3.3 50.7 ± 15.4 85.3 ± 5.0 94.7 ± 5.0 94.7 ± 5.0 78.7 ± 6.8 9.3 ± 1.9 4.0 ± 3.3 46.7 ± 5.0 94.7 ± 7.5 ResNet [44] 93.3 ± 1.9 61.3 ± 9.4 98.7 ± 1.9 90.7 ± 8.2 86.7 ± 10.5 73.3 ± 6.8 17.3 ± 11.5 1.3 ± 1.9 56.0 ± 5.7 100.0 ± 0.0 Table 10: Multi-task performance with different weighting parameter α. α Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine 0 57.3 ± 2.1 2.8 ± 1.0 74.7 ± 10.0 0.0 ± 0.0 32.0 ± 18.2 52.0 ± 14.2 98.7 ± 1.9 6.7 ± 5.0 29.3 ± 5.0 81.3 ± 5.0 0.5 57.6 ± 2.2 2.7 ± 0.8 73.3 ± 10.5 0.0 ± 0.0 33.3 ± 11.5 49.3 ± 21.0 100.0 ± 0.0 5.3 ± 3.8 29.3 ± 6.8 92.0 ± 5.7 1 61.0 ± 0.4 2.3 ± 1.4 68.0 ± 17.3 4.0 ± 3.3 41.3 ± 21.7 96.0 ± 5.7 100.0 ± 0.0 1.3 ± 1.9 32.0 ± 5.7 96.0 ± 3.3 2 58.0 ± 2.3 2.2 ± 0.8 76.0 ± 9.8 0.0 ± 0.0 33.3 ± 10.5 48.0 ± 11.3 100.0 ± 0.0 8.0 ± 3.3 29.3 ± 3.8 88.0 ± 6.5 α Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap 0 90.7 ± 5.0 62.7 ± 12.4 96.0 ± 5.7 77.3 ± 3.8 76.0 ± 11.8 58.7 ± 9.4 18.7 ± 12.4 1.3 ± 1.9 78.7 ± 16.4 96.0 ± 3.3 0.5 85.3 ± 6.8 62.7 ± 13.2 96.0 ± 3.3 80.0 ± 0.0 76.0 ± 14.2 58.7 ± 6.8 17.3 ± 13.6 0.0 ± 0.0 80.0 ± 15.0 97.3 ± 1.9 1 78.7 ± 8.2 57.3 ± 3.8 97.3 ± 1.9 97.3 ± 3.8 88.0 ± 8.6 73.3 ± 3.8 4.0 ± 3.3 1.3 ± 1.9 66.7 ± 5.0 94.7 ± 5.0 2 88.0 ± 8.6 60.0 ± 9.8 100.0 ± 0.0 81.3 ± 6.8 77.3 ± 12.4 58.7 ± 6.8 14.7 ± 10.0 1.3 ± 1.9 84.0 ± 17.3 96.0 ± 5.7 decomposed into 10 sub-tasks, the mainstream policy training datasets typically have 10 million sub-tasks. (≈10 million entries in the database). The Open X-Embodiment (OXE) Dataset [47]: A landmark collaboration among 21 institutions, OXE provides over 1 million robot trajectories from 22 different robot embodiments. Its explicit goal is to foster the development of generalist models, demonstrating that the community is actively removing the proprietary data barriers of the past. The explicit purpose of OXE is to provide a standardized, large-scale resource to train generalist models that have demonstrated significant performance gains by training on this diverse data. Hugging Face 17 Table 11: Multi-task performance with different numbers of demonstrations. Demo. Num. Avg. Succ. (↑) Avg. Rank (↓) Close Jar Insert Peg Install Bulb Meat off Grill Open Drawer Place Cups Sort Shape Place Wine 1 (RDD) 59.1 ± 3.4 1.8 ± 0.8 75.6 ± 11.1 6.2 ± 5.7 35.6 ± 9.5 65.8 ± 20.9 100.0 ± 0.0 5.8 ± 4.7 25.8 ± 3.8 91.1 ± 7.7 3 (RDD) 61.0 ± 0.4 1.8 ± 0.7 68.0 ± 17.3 4.0 ± 3.3 41.3 ± 21.7 96.0 ± 5.7 100.0 ± 0.0 1.3 ± 1.9 32.0 ± 5.7 96.0 ± 3.3 3 (UVD [25]) 57.1 ± 0.3 2.3 ± 0.6 66.7 ± 13.2 4.0 ± 5.7 37.3 ± 19.1 93.3 ± 9.4 100.0 ± 0.0 2.7 ± 1.9 21.3 ± 10.5 77.3 ± 11.5 Demo. Num. Push Buttons Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Stack Blocks Stack Cups Sweep to Dustpan Turn Tap 1 (RDD) 86.7 ± 5.7 60.4 ± 11.8 97.8 ± 2.0 78.2 ± 13.3 61.8 ± 28.7 79.6 ± 16.2 11.6 ± 8.1 2.2 ± 2.7 87.1 ± 16.2 92.9 ± 14.7 3 (RDD) 78.7 ± 8.2 57.3 ± 3.8 97.3 ± 1.9 97.3 ± 3.8 88.0 ± 8.6 73.3 ± 3.8 4.0 ± 3.3 1.3 ± 1.9 66.7 ± 5.0 94.7 ± 5.0 3 (UVD [25]) 62.7 ± 12.4 44.0 ± 6.5 84.0 ± 6.5 96.0 ± 5.7 85.3 ± 13.2 82.7 ± 12.4 16.0 ± 3.3 1.3 ± 1.9 60.0 ± 16.3 93.3 ± 5.0 Table 12: Multi-task performance of Vanilla Planner without finetuning on the target task. Method Avg. Succ. (↑) Avg. Rank (↓) Meat off Grill Open Drawer Place Wine Push Buttons Put in Cupboard w/o finetuning on target task 77.9 ± 4.3 1.6 ± 0.5 99.2 ± 2.4 99.6 ± 1.2 86.4 ± 8.8 70.4 ± 8.0 61.2 ± 16.8 RDD (Ours) 79.6 ± 7.2 1.4 ± 0.5 84.4 ± 21.1 99.2 ± 2.4 86.4 ± 15.4 84.0 ± 7.8 41.2 ± 17.1 Method Put in Drawer Put in Safe Drag Stick Slide Block Sweep to Dustpan Turn Tap w/o finetuning on target task 86.0 ± 14.3 94.8 ± 9.0 74.0 ± 23.3 62.4 ± 16.8 30.0 ± 15.3 92.4 ± 14.4 RDD (Ours) 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 57.2 ± 29.7 94.0 ± 5.1 Table 13: Comparing RDD with Gemini-2.5-pro. Method Avg. Succ. (↑) Avg. Rank (↓) Close Jar Install Bulb Meat off Grill Open Drawer Place Wine Push Buttons Gemini-2.5-pro 72.6 ± 4.7 1.7 ± 0.4 41.2 ± 30.1 40.8 ± 16.5 83.2 ± 15.2 99.6 ± 1.2 86.4 ± 11.1 82.4 ± 8.6 RDD (Ours) 74.9 ± 6.9 1.3 ± 0.4 46.0 ± 28.2 52.8 ± 16.4 84.4 ± 21.1 99.2 ± 2.4 86.4 ± 15.4 84.0 ± 7.8 Method Put in Cupboard Put in Drawer Put in Safe Drag Stick Slide Block Sweep to Dustpan Turn Tap Gemini-2.5-pro 38.4 ± 10.6 94.0 ± 6.8 93.6 ± 9.2 73.6 ± 22.3 63.6 ± 14.4 48.4 ± 14.9 99.2 ± 2.4 RDD (Ours) 41.2 ± 17.1 97.2 ± 3.1 98.4 ± 3.2 68.0 ± 25.0 65.2 ± 14.3 57.2 ± 29.7 94.0 ± 5.1 Table 14: Performance of FAISS nearest neighbor search and RDD time on NVIDIA 4090. Hardware Dim Vec Num QPS Lmax LI RDD Time (s) NVIDIA 4090 2048 10M 386 100 500 115 (4.3 fps) SmolVLA Dataset [48]: The emergence of models like SmolVLA, a capable vision-language-action model trained entirely on 23k episodes from 487 open-sourced community datasets through the LeRobot framework, outperforms the closed-source-dataset policy π0 [4]. AgiBot World [33]: AgiBot World provides not just datasets but complete open-source toolchains and standardized data collection pipelines, further enriching the public ecosystem. It has collected over 1 million trajectories on over 100 homogeneous robots. Their proposed model GO-1, entirely trained on this open-sourced dataset, outperforms the closed-source dataset policy π0 [4]. E Broader Impacts The potential negative societal impacts of our work are consistent with those commonly observed in robotics research, including risks related to privacy, labor displacement, and unintended misuse in sensitive contexts. While our method is primarily designed to enhance the scalability and efficiency of robotic systems, such advancements may accelerate deployment in real-world settings, amplifying both positive and negative consequences. In parallel, advances in point cloud analysis [49, 50], 18 cooperative motion prediction [51], autonomous driving frameworks [52, 53, 54, 55], and generative AI for driving [56] highlight both the promise and the risks of deploying increasingly capable visionaction models. Broader surveys of visual foundation models [57] and new work on multimodal alignment [58, 59] further strengthen the importance of trustworthy design and governance, especially for safety-critical applications such as transportation and human-robot interaction [60, 61]. To mitigate these risks, we emphasize alignment with ethical guidelines, including fairness, accountability, transparency, and safety, and encourage interdisciplinary collaboration to monitor societal impacts as these technologies evolve [62]. 19
2510.14963
Orders matter: tight bounds on the precision of sequential quantum estimation for multiparameter models Gabriele Fazio,1, ∗Jiayu He,2, † and Matteo G. A. Paris1, ‡ 1Dipartimento di Fisica Aldo Pontremoli, Universit`a degli Studi di Milano, I-20133 Milano, Italy 2QTF Centre of Excellence, Department of Physics, University of Helsinki, FI-00014 Helsinki, Finland (Dated: October 17, 2025) In multiparameter quantum metrology, the ultimate precision of joint estimation is dic- tated by the Holevo Cram´er-Rao bound. In this paper, we discuss and analyze in detail an alternative approach: the stepwise estimation strategy. In this approach, parameters are estimated sequentially, using an optimized fraction of the total available resources al- located to each step. We derive a tight and achievable precision bound for this protocol, the stepwise separable bound, and provide its closed-form analytical expression, revealing a crucial dependence on the chosen measurement ordering. We provide a rigorous comparison with the joint measurement strategy, deriving analytical conditions that determine when the stepwise approach offers superior precision. Through the analysis of several paradigmatic SU(2) unitary encoding models, we demonstrate that the stepwise strategy can indeed out- perform joint measurements, particularly in scenarios characterized by non-optimal probes or models with a high degree of sloppiness. Our findings establish stepwise estimation as a powerful alternative to joint and collective measurements, proving that sequential protocols can provide a genuine metrological advantage, especially in resource-constrained or imperfect experimental settings. I. INTRODUCTION Quantum metrology seeks to leverage uniquely quantum features such as entanglement, coher- ence, and squeezing to surpass the precision limits imposed by classical strategies [1–3]. While significant theoretical and experimental progress has been made in single-parameter quantum esti- mation, the simultaneous estimation of multiple parameters has emerged as a challenging frontier. Multiparameter quantum estimation underpins a broad spectrum of applications [3–7], from high- resolution imaging [8, 9] to tests of fundamental physics [10, 11]. However, it also introduces ∗gabriele.fazio@studenti.unimi.it (G. Fazio) † jiayu.he@helsinki.fi (J. He) ‡ matteo.paris@fisica.unimi.it (M. G. A. Paris) arXiv:2510.14963v1 [quant-ph] 16 Oct 2025 2 conceptual and technical challenges: the optimal measurements for different parameters may not commute, rendering it impossible to simultaneously saturate the individual quantum Cram´er–Rao bounds (QCRBs)[12–14]. In such cases, the ultimate limit on precision is dictated by the Holevo bound, which remains tight but notoriously difficult to evaluate and saturate[15]. It characterizes the ultimate achievable precision in the asymptotic regime, but it typically requires collective mea- surements over infinitely many copies of the quantum probe. Nagaoka introduced an alternative bound based on separable measurements [16]. Although generally weaker than the Holevo bound, it may be achieved using sepaarate (non collective) measurement schemes, and it has been shown to coincide with the Holevo bound in specific cases, such as two-parameter estimation with pure states in a two-dimensional Hilbert space [17]. However, both the Holevo and Nagaoka bounds lack closed-form analytic expressions in the general case, which limits their utility as benchmarks and in comparing strategies. To bridge the gap between theoretical bounds and practical feasibility, a hierarchy of precision limits has been developed. Among these, the R [18, 19] and T [20] bounds stand out for their analytic tractability and close connection to the geometry of quantum states, i.e., the quantum Fisher information matrix and the mean Uhlmann curvature. These bounds provide a useful bracketing of the Holevo bound, clarifying how quantum incompatibility and the sloppiness of model [21–27], i.e., the possible inefficiency of the parameter encoding, limit the achievable precision. Motivated by these considerations, we discuss in details a novel approach, and its corresponding precision bound, that is easier to obtain in analytic form and has operational relevance. In this context, we consider a stepwise estimation protocol [28], in which different parameters are estimated sequentially by allocating resources to different subsets of the measurement data. Such a strategy offers a compromise between the simplicity of fully separable single-parameter estimation and globally joint multiparameter estimation, while providing a tractable analytical framework. This is particularly relevant in scenarios where joint strategies are impractical [29] or when experimental constraints limit the accessibility to optimal probes [30]. In this work, we develop a general framework for stepwise estimation in quantum multiparameter metrology. We introduce a class of precision bounds tailored to such sequential strategies and investigate their dependence on parameter ordering, as well as their relation to existing bounds such as the Holevo, symmetric logarithmic derivative (SLD) QCRB and T/R-type bounds. Our results demonstrate that stepwise estimation can not only provide operationally meaningful bounds but may, under certain conditions, outperform joint strategies, particularly in the presence of sloppiness, i.e., when only certain combinations of parameters affect the quantum state significantly [31], or 3 under resource constraints that prevent the preparation of ideal probe states. The paper is organized as follows. In Section II, we review the general framework of quantum multiparameter estimation, including the definitions of the classical and quantum Cram`er-Rao bounds and the Holevo bound, and discuss their interrelations and attainability conditions. Sec- tion III introduces the concept of stepwise estimation strategies and formulates a family of precision bounds associated with sequential resource allocation. We derive closed-form analytic expressions for these bounds, establish their optimality with respect to different parameter orderings, and provide inequalities that characterize their performance relative to known quantum bounds. Sec- tion IV illustrates the application of our framework to explicit physical models, including two- and three-parameter estimation scenarios with qubit and qutrit probes [32, 33]. These examples highlight conditions under which stepwise estimation outperforms joint estimation, especially when the quantum Fisher information is highly anisotropic or the parameters are incompatible. Finally, Section V summarizes our main findings and discusses possible future directions. II. FRAMEWORK OF MULTIPARAMETER QUANTUM ESTIMATION In this Section, we outline the theoretical background for multiparameter quantum estimation. Let ρλ be a quantum state on a finite-dimensional Hilbert space, parameterized by a vector of n real parameters λ = (λ1, . . . , λn)T , and {Πk} with Πk ≥0 and P k Πk = I a positive operator-valued measurement (POVM). The probability of obtaining outcome k is given by pλ(k) = Tr [ρλΠk]. A corresponding estimator ˆλ(k) is assigned to each outcome, and the performance is evaluated via the covariance matrix V (ˆλ), whose components read: Vµν = X k pλ(k)[λµ(k) −Ek(ˆλµ)][λν(k) −Ek(ˆλν)]. (1) where Ek(ˆλµ) denotes the expectation value of the estimator ˆλµ under the distribution pλ(k). Assuming locally unbiased estimators, i.e. E[ˆλµ] = λµ and ∂νE(ˆλµ) = δµν, the classical Cram´er- Rao bound (CRB) provides a fundamental lower limit on the achievable covariance[34]: V (ˆλ) ≥1 M F −1 , (2) with F the Fisher information matrix (FIM), and M the number of repeated measurements. The elements of FIM are defined as: Fµν ≡ X k ∂µpλ(k) ∂νpλ(k) pλ(k) . (3) 4 The CRB can be saturated in the asymptotic limit of an infinite number of repeated experiments using Bayesian or maximum likelihood estimators [35]. In the quantum setting, due to non-commutativity of observables, multiple versions of quantum Fisher information matrices (QFIMs) arise. One of the most prominent among them is defined through the symmetric logarithmic derivatives (SLDs) LS µ [36], defined as operators which satisfy: ∂µρλ = LS µρλ + ρλLS µ 2 (4) The SLD-QFIM is then defined as Qµν ≡1 2 Tr  ρλ{LS µ, LS ν }  , (5) where {A, B} = AB + BA denotes the anti-commutator of the operators A and B. In the case of pure statistical models, ρλ = |ψλ⟩⟨ψλ|, the QFIM reduces to: Qµν = 4 Re ⟨∂µψλ|∂νψλ⟩−⟨∂µψλ|ψλ⟩⟨ψλ|∂νψλ⟩  , where ∂k ≡∂λk. A. General Quantum Cram´er-Rao Bounds In this subsection, we present a general framework for quantum Cram´er-Rao bounds (QCRBs), covering all commonly uesd quantum bounds, including the SLD bound CS, the Holevo bound CH, as well as the R and T bounds, and their hierarchical relationships. Utilizing the QFIM one can formulate scalar bounds for estimation error under a given cost matrix W (a real, positive definite d × d matrix). The quantum Cram´er-Rao Bound (QCRB), or SLD QCRB states that Tr[V (W, ˆλ)] ≥CS[W, ˆλ], with CS[W, ˆλ] ≡1 M Tr  W Q−1 , (6) where M is the number of independent repetitions of measurement. Increasing it can reduce statistical errors. However, due to non-commuting SLDs, the bound isn’t always tight. A more fundamental limit is provided by the Holevo bound[15, 37]: CH[W, ˆλ] ≡min X∈X n Tr [W Re (Z[X])] + Tr [||W Im (Z[X])||1] o , (7) where Zµν ≡Tr[ρλXµXν], and the set X contains Hermitian operators Xµ satisfying the local unbiased condition: Tr[∂νρλXµ] = δµν. The Holevo bound becomes asymptotically achievable in the limit of collective measurements over many copies of the state[38–40]. 5 A key challenge in multiparameter estimation arises from the incompatibility of optimal mea- surements for different parameters. The condition under which the SLD bound is saturable is given by the so-called weak commutativity criterion [13]: Tr  ρλ[LS µ, LS ν ]  = 0. (8) where [A, B] = AB −BA denotes the commutator of the operators A and B. This motivates the definition of the antisymmetric matrix U, often called the mean Uhlmann curvature (MUC). This quantity captures the average non-commutativity between the parameter generators: Uµν ≡1 2i Tr  ρλ[LS µ, LS ν ]  , (9) and is useful to quantify the incompatibility between parameters. For pure statistical models, ρλ = |ψλ⟩⟨ψλ|, we have Uµν = 4 Im ⟨∂µψλ|∂νψλ⟩−⟨∂µψλ|ψλ⟩⟨ψλ|∂νψλ⟩  , Due to the hardness of finding analytical expressions for the Holevo Bound, two bounds have been introduced. These are based on the quantumness measures R [18, 19, 41], and T [6, 18, 20], defined as R ≡||iQ−1U||∞, (10) T(W) ≡∥ √ WQ−1UQ−1√ W∥1 Tr [WQ−1] , (11) where || · ||∞denotes the maximum eigenvalue of the matrix, and ||A||1 = Tr[ √ A†A] is the nuclear norm. The corresponding bounds respect the following chain of inequalities: CS[W, ˆλ] ≤CH[W, ˆλ] ≤[1 + T(W)] CS[W, ˆλ] ≤[1 + R] CS[W, ˆλ] ≤2 CS[W, ˆλ] . (12) B. Two- and Three-Parameter Pure-State Models For the special case of two-parameter pure models, we have that R simplifies to[19] R2 = s det [U] det [Q], (13) and using a diagonal weight matrix W = diag(1, ω), the T bound reduces to T2 = 2 p ω det[U] Q22 + ω Q11 . (14) 6 For two-parameter qubit models we have that [17] CH[W, ˆλ] = CS[W, ˆλ] + p det[W] det[Q] Tr[ρλ[LS 1 , LS 2 ]] . (15) For pure two-parameter qubit models we have det[Q] = det[U] = U2 12 =  1 2iTr  ρ[LS 1 , LS 2 ] 2 , (16) and thus, using Eq. (14), the explicit formula CH = Tr[WQ−1] + 2 p det[WQ−1] = CS(1 + T2). (17) These bounds respect the following bracketing relation CS ≤CH = (1 + T2)CS ≤(1 + R2)CS = 2CS . (18) For three-parameter models R is given by R3 = s uT Q u det [Q], (19) with the vector u defined as u = (U23, −U13, U12). (20) The quantity T, for a diagonal weight matrix W = diag(1, ω1, ω2) is given by T3 ≡T3(W3) = 2 q uT Q f W3 Q u det [Q] , (21) where f W3 = diag(ω1ω2, ω2, ω1). III. A BOUND FOR STEPWISE MEASUREMENT STRATEGIES The SLD QCRB offers a computationally straightforward lower bound on the estimator vari- ance, but its attainability is guaranteed only if the SLDs satisfy the weak commutativity criterion. When this compatibility condition is not met, the Holevo Cram´er-Rao Bound provides a tighter, asymptotically achievable limit. However, reaching this bound necessitates collective measure- ments—experimentally demanding protocols that act jointly on many copies of the quantum state. Nagaoka proposed a simpler measurement strategy using separable measurements performed inde- pendently on each probe. While this simplifies the implementation, it generally results in a less tight 7 bound. In this context, we introduce a new class of bounds tailored for sequential measurement strategies. This stepwise approach involves allocating resources sequentially to estimate subsets of parameters, providing a framework that bridges the gap between single-parameter estimation and fully collective multiparameter strategies. Consider the task of estimating a set of n parameters, (λ1, ..., λn), using a total of M identically prepared probes. To implement a sequential measurement strategy, we partition the total set of probes into n groups (each dedicated to a different parameter) according to allocation fractions {γj : j = 1, .., n}, where each γj satisfies 0 ≤γj ≤1 and Pn i=j γj = 1. Thus, the j-th group contains γjM probes. The process unfolds as follows: 1. An experiment with γ1M probes is performed, using a measurement strategy optimized to estimate λ1 and assuming the other parameters (λ2, . . . , λn) unknown; 2. A second experiment, employing with γ2M probes is performed, aiming at optimally esti- mating λ2 assuming λ1 known from the previous step and the rest of parameters (λ3, . . . , λn) unknown; 3. The procedure continues sequentially: at step j, γjM probes are dedicated to optimally estimating the parameter λj assuming λ1, . . . λj−1 known from the previous steps and the rest of parameters (λj+1, . . . , λn) unknown; 4. Finally, the last group of γnM probes is used to implement a single-parameter quantum estimation of λn. The total variance for this protocol is P j V ( ˆλj), and is bounded by the sum of the optimal variances achievable at each step. By optimizing over all possible resource allocations {γj}, we define the stepwise separable bound for a given estimation order (1 →n) as: C1→n sep ≡ min γ1,...,γn [Q−1 1,...,n]11 γ1 + [Q−1 2,...,n]11 γ2 + ... + Q−1 n γn . (22) Here, Qj,...,n is the Quantum Fisher Information Matrix (QFIM) for the parameter subset (λj, . . . , λn), constructed by removing the first j −1 rows and columns from the full QFIM. Its inverse, Q−1 j,...,n, is an (n −j + 1) × (n −j + 1) positive-definite matrix. The notation [Q−1 j,...,n]11 refers to the (1, 1) entry of this inverse matrix, corresponding to the SLD Cram´er–Rao bound for λj assuming λ1, . . . λj−1 known from the previous steps and the rest of parameters (λj+1, . . . , λn) unknown. We use only the first element of the inverse QFIM because this is the variance we get 8 by considering a weight matrix of the form W = diag(1, 0, . . . ). We do this since we are interested in estimating just the j-th parameter at each step. The explicit dependence of the Csep bound on the estimation sequence order is denoted by the superscript. We highlight how this bound is also achievable. We in fact have that for each measurement step we are using a weight matrix W = diag(0, . . . , 1, . . . , 0), and with this choice the Holevo bound coincides with the respective QCRB. We also note that this bound is achievable in the asymptotic limit. To see this, consider the sequential protocol itself: at step j, we allocate γjM probes and perform the measurement that is optimal for the reduced parameter set (λj, . . . , λn). In the asymptotic limit, using the locally unbiased estimator associated with this measurement, the variance of ˆλj saturates [Q−1 j,...,n]11/γ1. Summing over all steps yields the stepwise bound, which is achievable in the asymptotic regime. A. Minimization of the Stepwise Bound Given an n-parameter model, we want to obtain the stepwise bound introduced in the previous Section, i.e., to perform the minimization in Eq.(22). The following two theorems provide closed analytical expressions for this minimization. The first theorem expresses the bound as a telescopic- like series, effectively removing the minimization over the {γi} in the original definition and yielding a fully general analytical expression for the stepwise bound. While this formula is general, it involves a sum over n terms, which becomes more and more complex for increasing n. The second theorem addresses this issue by exploiting the Cholesky decomposition of the QFIM, providing a computationally efficient alternative for evaluating the bound. Theorem III.1. The optimized stepwise measurement bound C1→n sep of Eq.(22) is given by C1→n sep = s det[Q2,...,n] det[Q1,...,n] + s det[Q3,...,n] det[Q2,...,n] + · · · + s 1 det[Qn] !2 , (23) which is achieved with γj = q [Q−1 j,...,n]11 nP l=1 q [Q−1 l,...,n]11 . (24) Proof: Let us denote [Q−1 j,...,n]11 ≡Aj. We can rewrite the problem as C1→n sep = min {γj} n X j=1 Aj γj = ∥x∥2, 9 where x ≡ q A1 γ1 , . . . , q An γn  , and ∥· ∥denotes the norm of vector. We define y ≡(√γ1, . . . , √γn) . Using Cauchy-Schwarz inequality |x · y|2 ≤∥x∥2∥y∥2 we get   n X j=1 p Aj   2 ≤   n X j=1 Aj γj     n X j=1 γi  = n X j=1 Aj γj . The minimum is thus C1→n sep = min {γj} n X j=1 Aj γj =   n X j=1 p Aj   2 , and is obtained when y is parallel to x, i.e., for γj ∝ p Aj. Normalizing to P j γj = 1, we get the optimal {γj} as γ∗ j = p Aj nP l=1 √Al = q [Q−1 j,...,n]11 nP l=1 q [Q−1 l,...,n]11 . To express C1→n sep in terms of determinants, we use the relation Aj = [Q−1 j,...,n]11 = det[Qj+1,...,n] det[Qj,...,n] . With this we prove that C1→n sep =   n X j=1 p Aj   2 = s det[Q2,...,n] det[Q1,...,n] + s det[Q3,...,n] det[Q2,...,n] + · · · + s 1 det[Qn] !2 . Corollary III.2. The optimal stepwise measurement bound, weighted by a diagonal weight matrix W = diag(ω1, . . . , ωn) is given by C1→n sep [W] = min {γj} ω1[Q−1 1,...,n]11 γ1 + · · · + ωnQ−1 n γn = min {γj} n X j=1 ˜Aj γj =   n X j=1 q ˜Aj   2 (25) = s ω1det[Q2,...,n] det[Q1,...,n] + s ω2det[Q3,...,n] det[Q2,...,n] + · · · + r ωn det[Qn] !2 . (26) Theorem III.3. Given L, the lower triangular matrix obtained from the Cholesky decomposition of the QFIM Q = LLT , then the stepwise bound Cn→1 sep can be expressed as Cn→1 sep = Tr[L−1] 2 , (27) where we highlight that the measurement order is reversed, firstly the n-th parameter, up to the first. 10 Proof: Knowing that L is lower triangular, we have that the right side is Tr[L−1] 2 =   n X j=1 1 [L]jj   2 , where [L]jj is the diagonal elements of L. For the leading principal minors, we know Q1,...,j = L1,...,jLT 1,...,j, from which we get det[Q1,...,j] = det[L1,...,j]det[LT 1,...,j] = det[L1,...,j]2 = jY l=1 [L]2 ll, where L1,...,j refers to leading principal minors of order j from L. Since this holds for all j, we have [L]jj = s det[Q1,...,j] det[Q1,...,j−1], then Tr[L−1] = X j 1 [L]jj = X i s det[Q1,...,j−1] det[Q1,...,j] . Therefore, we get Tr[L−1] 2 = s 1 det[Q1] + s det[Q1] det[Q1,2] + · · · + s det[Q1,...,n−1] det[Q1,...,n] !2 . Here we adopt the reversed estimation order, starting from the n-th parameter up to the first, in order to simplify the algebraic derivation. The same approach can be applied to obtain a closed-form expression for the original 1 →n order. B. Bounding of the Stepwise Strategy The value of the stepwise bound Csep changes with the order of estimation of the parameters, and a question arises on how these values are related. In Appendix A we suggest an algorithm to efficiently calculate numerically the more convenient ordering for given a QFIM. In this Section, we instead tackle an order independent analysis. In particular we provide an order independent lower and upper bound. Theorem III.4. For n-parameters estimation, the stepwise bound is bracketed by n3 Tr[Q] ≤n2 n s 1 det[Q] ≤Csep ≤nTr[Q−1]. (28) 11 Proof: We begin by considering the second inequality and, without loss of generality, focus on the stepwise bound C1→n sep . Using the arithmetic/geometric means (AM-GM) inequality we get C1→n sep = s det[Q2,...,n] det[Q1,...,n] + s det[Q3,...,n] det[Q2,...,n] + · · · + s 1 det[Qn] !2 ≥   n n v u u t s det[Q2,...,n] det[Q1,...,n] × s det[Q3,...,n] det[Q2,...,n] × · · · × s 1 det[Qn]    2 = n2 n s 1 det[Q]. This can then be reproduced for any choice of ordering. In turn, changing the order of parameter estimation corresponds to a rearrangement of the rows and columns of the QFIM. Having that det[Q] is invariant under those changes, this provides a lower limit for all orderings. The AM-GM also tells that in general this bound is not tight. It is achieved iff all the elements of the series are equal, meaning iff all diagonal elements of the Cholesky decomposition of the QFIM L are equal. The leftmost inequality is then proved by a property of positive matrices n Tr A−1 ≤ np det[A], which can be seen using the harmonic/geometric mean inequality (HM-GM) on the eigenvalues. The last inequality to prove is the rightmost, and can be seen using that [Q−1 j,...,n]11 = det[Qj+1,...,n] det[Qj,...,n] , and [Q−1 j,...,n]11 ≤[Q−1]jj. With these, using the arithmetic/quadratic (AM-QM) inequality, we have Csep =   n X j=1 q [Q−1 j,...,n]11   2 ≤n n X j=1 [Q−1 j,...,n]11 ≤nTr[Q−1] A necessary and sufficient condition for both the upper and lower bounds to be saturated is Q = cIn, where c > 0. If Q = cIn, we easily obtain lower and upper bounds coincide with Csep. Conversely, suppose Csep simultaneously reaches the lower and upper bounds. From the lower bound (AM–GM), all terms in the telescopic product must be equal: [Q−1 j,...,n]11 = k2 for all j. From the upper bound (AM–QM), we must have [Q−1 j,...,n]11 = [Q−1]jj and all diagonal elements of Q−1 equal. Combining these conditions with the fact that Q is a positive definite symmetric matrix, implies Q = 1 k2 In. Thus, Q = c In is both necessary and sufficient. 12 C. Csep and sloppiness in 2-parameter qubit models In the case of a two-parameter qubit model, we now establish conditions for the separable Csep to outperform the three bounds: SLD QCRB CS, T-dependent bound CT, and R-dependent bound CR. Since Csep is order dependent, we focus on C1→2 sep (estimating λ1 before λ2), with formulas for the opposite order derivable by switching Q11 and Q22. To isolate QFIM effects, we set W = diag(1, 1) in CS and CT. To understand when C1→2 sep ≤CS holds, we convert the inequality into a condition on the amount of sloppiness, which is quantified by the parameter s [25] s := 1 det[Q], which captures how strongly the system depends on a combination of the parameters rather than on them individually. In particular, when estimating λ1 first, followed by the estimation of λ2, the separable bound is given by C1→2 sep = 1 Q22 + Q22 det[Q] + 2 s 1 det[Q] , and the inequality C1→2 sep ≤CS may be written as s ≥ 1 Q2 11  1 + p 1 + Q11/Q22 2 ≡4Q2 22 Q4 12 . (29) This indicates that C1→2 sep becomes tighter if sloppiness is sufficiently large. The threshold is determined by the ratio Q22/Q2 12, not by their absolute values. When |Q12| ≫√Q22 (strong corre- lation), Csep outperforms CS even with moderate sloppiness. However, from the above expression, it is clear that when the QFIM is diagonal (i.e., Q12 = 0), C1→2 sep is strictly larger than CS. This is consistent with the intuition that when parameters are statistically independent (i.e., the QFIM is diagonal) stepwise estimation offers no advantage. Conversely, when statistical correlations be- tween parameters are unavoidable due to model structure, a stepwise estimation strategy may become advantageous. Since CS is not always attainable, we now discuss the relationships of Csep with the two bounds CT and CR, which are analytically expressible and always saturable (at least asymptotically) and provide upper bounds for CH at fixed weight matrix and for all of them, respectively. The bound CT is given by CT = CS(1 + T2) = Q11 + Q22 + 2 |U12| det[Q] , 13 and the condition C1→2 sep ≤CT corresponds to s ≥ 4Q2 22 Q2 12 + 2Q22 |U12| 2 . (30) This implies that Csep surpasses CT under weaker sloppiness requirements, enhancing its practical attainability. Furthermore, when the QFIM is diagonal, the condition for C1→2 sep ≤CT becomes s ≥ 1 det[U], which is always satisfied in two-parameter pure-state qubit models according to [25]. Furthermore, this implies that in such models: C1→2 sep = CT = CH, and the advantage of C1→2 sep becomes evident when sloppiness is large enough. The R-dependent bound reads as follows CR = CS(1 + R2) = Q11 + Q22 det[Q] 1 + |U12| s 1 det[Q] ! , and the inequality C1→2 sep ≤CR corresponds to |U12| (Q2 12 + Q2 22)s + Q2 12 √s + |U12| −2Q22 ≥0. (31) Solving (31) yields the explicit condition: s ≥ √ ∆−Q2 12 2 4 |U12|2 Q2 12 + Q2 22 2 , ∆= Q4 12 + 4 |U12| Q2 12 + Q2 22  (2Q22 −|U12|) ≥0. Remarkably, when 2Q22 = |U12|, the condition simplifies to s ≥0, indicating that C1→2 sep universally outperforms CR regardless of the sloppiness. This scenario corresponds to a regime where quantum incompatibility makes sequential estimation mostly convenient. When the QFIM is diagonal, the condition simplifies significantly, and the threshold becomes s ≥2Q22 −|U12| |U12|Q2 22 . When |U12| > 2Q22, the inequality holds for all s > 0 since the right-hand side is negative, and so C1→2 sep always dominates CR. When |U12| < 2Q22, the proper relation between Q22 and |U12| allows C1→2 sep ≤CR to hold even with low sloppiness, e.g., when Q22 and |U12| are both large. In the special case of vanishing incompatibility (i.e., U12 = 0), the attainability condition for CT and CR reduces exactly to that of C1→2 sep as shown in Eq. (29). However, even in these cases, the inequality only holds when sloppiness is sufficiently large. Finally, while the expression for C1→2 sep ≤CR is more involved, it still ultimately requires a large sloppiness for the superiority of Csep to manifest. In summary, Csep offers a practically attainable bound in models with sufficient sloppiness, espe- cially when the interplay between quantum incompatibility and statistical correlation is optimized. 14 IV. PERFORMANCE OF JOINT AND STEPWISE ESTIMATION IN SU(2) MODELS In this Section, we compare the performance of joint estimation (JE) and stepwise estimation (SE) strategies in estimating the parameters encoded unitarily by SU(2) Hamiltonians of the form H = B · Jn, (32) where Jn is an SU(2) generator along direction n. Three cases are considered: a two-parameter model with qubits, a two-parameter model with qutrits, and a three-parameter model with qutrits. Here, we summarize the main results and discuss their implications for multiparameter quantum estimation. The detailed derivations of the QFIM and the Uhlmann matrix for each case are provided in Appendix B. A. SU(2) 2-parameter estimation in qubit models Let us consider the Hamiltonian HB,θ = B(cos θJx + sin θJz) , (33) where Jj denotes the j-th generator of SU(2). As discussed in [32], in the case of qubit probes the QFIM is QBB = t2[1 −(nθ · r0)2] (34) Qθθ = 4 sin2 Bt 2 [1 −(n1 · r0)2] (35) QBθ = 2t sin Bt 2 (n1 · r0)(nθ · r0), (36) and the Uhlmann matrix is DθB = 2t sin Bt 2 n2 · r0 (37) with vectors nθ = (cos θ, 0, sin θ) (38) n1 = (cos Bt 2 sin θ, −sin Bt 2 , −cos Bt 2 cos θ) (39) n2 = nθ × n1 = (sin Bt 2 sin θ, cos Bt 2 , −sin Bt 2 cos θ) (40) r0 = (Tr[σxρ0], Tr[σyρ0], Tr[σzρ0]), (41) 15 where ρ0 = |ψ0⟩⟨ψ0| denotes the input probe state. Our goal is to compare the performance of stepwise and joint estimation strategies by analyzing the SE bound alongside other bounds, including the Holevo bound. Since the SLD Cram´er–Rao bound is not attainable in this model, our analysis focuses on the Holevo bound, together with the T-bound and the R-bound. For these kind of qubit models, we always have R = 1 [19, 32], independently on the value of the parameters (B, θ) and the choice of the probe state |ψ0⟩. We thus have CR = 2CS. From Theorem III.4 we also have Csep ≤2 Tr[Q−1] ≡2CS for two-parameter models. Combining these results immediately gives Csep ≤CR (42) Next, we consider the T-bound. Using Eq. 14 with the identity as weight matrix, we get T = 4t q (1 −α2 −β2) sin2 Bt 2 2 (1 −β2) (1 −cos(Bt)) + (1 −α2) t2 , (43) where α ≡nθ ·r0 and β ≡n1 ·r0, such that α, β ∈[−1, 1], α2 +β2 ≤1 and n2 ·r0 = p 1 −α2 −β2. From this, it can be proven analytically that C1→2 sep , C2→1 sep ≤Tr[Q−1](1 + T) (44) holds for every choice of α, β, B, t whenever Q is invertible. As stated in Eq. (17) for this model CH = CS(1 + T). Therefore, we conclude that for two-parameter SU(2) qubit models a stepwise estimation strategy always outperform the joint strategy: C1→2 sep , C2→1 sep ≤CT = CH. (45) B. SU(2) 2-parameter estimation in qutrit models We now consider the a two-parameter SU(2) qutrit model, where the Hamiltonian remains the same as in the qubit case, but the probe state is generalized to a qutrit. The detailed calculations are provided in Appendix B. The QFIM is QBB = 4t2[n2 θ · r −(nθ · r)2] (46) Qθθ = 16 sin2 Bt 2 [n2 1 · r −(n1 · r)2] (47) QBθ = −4t sin Bt 2 [n{1,θ} · r −2(n1 · r)(nθ · r)], (48) 16 with [n2 θ]i ≡Tr[J2 nθbi], [n2 1]i ≡Tr[J2 n1bi], [n{1,θ}]i ≡Tr[{Jn1, Jnθ}bi], and {bi} =  Γ1 √ 2, . . . , Γ8 √ 2, I √ 3  , r = {ri} = Tr[ρ0b1], . . . , Tr[ρ0b9]  , with Γi the Gell-Mann matrices, and bi their normalized version as described in Appendix B. The difficulty in optimizing this expression lies in the fact that pure qutrit states live on a 4-dimensional manifold embedded in a 7-sphere, making the domain highly nontrivial. To address this, we resort to numerical optimization, sampling over all possible pure qutrit states. Our results reveal that the optimal states for various values of B, θ, and t share a common property, they satisfy r ⊥{nθ, n1, n{1,θ}, n2}, and n2 θ · r, n2 1 · r = 1. This implies that the QFIM takes the form Q =  4t2 0 0 16 sin2 Bt 2  , (49) yielding a QCRB of CS = 1 16  4 t2 + csc2 Bt 2  . The Uhlmann curvature becomes U =   0 −4t sin Bt 2 (n2 · r) 4t sin Bt 2 (n2 · r) 0  = 0, (50) since r ⊥n2. The above analysis shows that using qutrits we have sufficient freedom (i.e., room for im- provement in the Hilbert space) to minimize the quantum incompatibility. The Uhlmann matrix vanishes, U = 0, and thus we have T = 0 and CH = CSLD. Furthermore, the optimal QFIM is diagonal. We conclude that by suitably choose the probe, joint estimation (JE) outperforms stepwise estimation (SE). C. SU(2) 3-parameter estimation in qutrit models In the previous two examples, we analyzed two-parameter models and observed that increasing the probe dimension reduces the effectiveness of the SE method when the encoding is good. We 17 now explore how this behavior changes when the number of parameters increases to three. If we restrict to qubits, the small dimension of the Hilbert space makes the QFIM singular. Therefore, the smallest probe dimension that allows a meaningful analysis is a qutrit. We extend the previous model by allowing the vector B to span all directions. The Hamiltonian is then given by HB,θ,ϕ = B(cos θ cos ϕJx + cos θ sin ϕJy + sin θJz) , (51) with n(3) θ = (cos θ cos ϕ, cos θ sin ϕ, sin θ) . (52) For more details on the derivation see [14]. Since this model is an extension of the previous one, the terms QBB, Qθθ, and QBθ remain formally the same as in Eqs. (46)–(48), but now evaluated using the vectors in Eq. (52) and n(3) 1 =  sin Bt 2 sin ϕ + cos Bt 2 sin θ cos ϕ, −sin Bt 2 cos ϕ + cos Bt 2 sin θ sin ϕ, −cos Bt 2 cos θ  . (53) In addition, we now have the third parameter ϕ, leading to the extra QFIM components Qϕϕ = 16 sin2 Bt 2 cos2 θ[⟨J2 n(3) 2 ⟩ 0 −⟨Jn(3) 2 ⟩2 0] (54) QBϕ = −4t sin Bt 2 cos θ[⟨{Jn(3) 2 , Jn(3) θ }⟩ 0 −2 ⟨Jn(3) 2 ⟩ 0 ⟨Jn(3) θ ⟩ 0] (55) Qθϕ = 8 sin2 Bt 2 cos θ[⟨{Jn(3) 1 , Jn(3) 2 }⟩ 0 −2 ⟨Jn(3) 1 ⟩ 0 ⟨Jn(3) 2 ⟩ 0], (56) with n(3) 2 =  cos Bt 2 sin ϕ −sin Bt 2 sin θ cos ϕ, −cos Bt 2 cos ϕ −sin Bt 2 sin θ sin ϕ, sin Bt 2 cos θ  . (57) Given the complexity of the model, a numerical approach has been considered. Our goal is to investigate when Csep is smaller than the Holevo bound and how this behavior depends on the characteristics of the probe state. To this end, we sample a large number of random states given a fixed set of parameters (B, θ, ϕ). For each probe state, we compute the minimum Csep over all possible orderings and the Holevo bound (via the SDP approach in [37, 42]). The results are shown in Fig. 1. In the left panel, we show the results for a sample of 105 states and a fixed set of parameters (B, θ, ϕ). In the right one, we consider 10 different sets of values for (B, θ, ϕ) and sample 104 states for each set. Notice that the choice of different parameter sets (B, θ, ϕ) does not 18 alter the shape of the distribution . The red line represents the points where CH = Csep, which separates the two regions where one of the two is larger than the other. Although it is evident that for small values of CH the Holevo bound is generally tighter (CH < Csep), a significant number of states exhibit the opposite behavior, where Csep < CH. This observation indicates that, within the three-parameter qutrit estimation model, the Holevo bound is not always the ultimate lower bound. In certain specific cases, Csep can serve as a tighter or alternative bound for parameter estimation. FIG. 1. Comparison of stepwise and joint estimation for 3-parameter SU(2) qutrit models. In the left panel, we show the results for a sample of 105 states and a fixed set of parameters (B, θ, ϕ). In the right one, we consider 10 different sets of values for (B, θ, ϕ) and sample 104 states for each set. In both panels, the red line represents the points where CH = Csep. We conclude that also in this case, when we have enough quantum resources (i.e., the optimal, or nearly optimal probe) JE is convenient. If the probe state is sub-optimal SE progressively outperforms JE. V. CONCLUSIONS In this work, we have analyzed stepwise estimation strategies for multiparameter quantum metrology, deriving a tight and analytically tractable precision bound, the stepwise separable bound Csep, that depends explicitly on the order of parameter estimation. We provided closed-form 19 expressions for this bound and showed how it can be efficiently computed using the Cholesky decomposition of the quantum Fisher information matrix (QFIM). We also established rigorous bounds on Csep, that hold independently of the estimation order. Through the analysis of SU(2) unitary models with qubit and qutrit probes, we demonstrated that stepwise estimation can outperform joint estimation strategies, particularly in scenarios char- acterized by large sloppiness, non-optimal probe states, or strong parameter incompatibility. In two-parameter qubit models, Csep was shown to be tighter than the Holevo bound. For qutrit sys- tems, however, the increased dimensionality allows for better suppression of incompatibility, making joint estimation more favorable under optimal conditions. In three-parameter qutrit models, we identified regimes where Csep remains competitive with or even superior to the Holevo bound, especially for suboptimal encodings. These results establish stepwise estimation as a viable and experimentally friendly alternative to collective measurements, particularly in resource-constrained or imperfect settings where ideal probe states or joint measurements are not feasible. Our results pave the way for extension to more general quantum systems, including continuous- variable and open quantum systems, and explore adaptive stepwise protocols that dynamically opti- mize the estimation order and resource allocation. Additionally, the connection between sloppiness, incompatibility, and estimation efficiency warrants further experimental investigation, potentially leading to new design principles for quantum sensors [43]. Appendix A: Dynamical Programming Algorithm for Best Ordering In Theorem. III.1 we provided a way to calculate the Csep bound for a given ordering. As mentioned the result depends on the ordering chosen. In this Appendix we want to discuss the computational complexity of finding the optimal ordering that minimizes the bound, and to pro- vide a more efficient way to calculate it through a dynamical programming (DP) approach. Let’s start by analyzing the brute force approach. We have to compute all possible orderings of n numbers, therefore O(n!). For each ordering, we have to compute the trailing determi- nants. The complexity of computing the determinant of a generic k × k matrix using methods like LU decomposition is O(k3). Considering all the trailing submatrices we therefore have the cost Pn k=1 O(k3) = O(n4). Therefore, the brute force time complexity is O(n!n4). We can do better than the brute force approach. Noticing a similarity with the traveling salesman problem, we can write a variation on the Bellman-Held-Karp algorithm [44, 45]. The 20 core principle is that given a set of indices S = {i1, . . . , ik}, the optimal sequence must contain an optimal subsequence of k −1 elements in the sequence. This property is known as optimal substructure. This allows us to create solutions iteratively, starting from an empty set. Wanting to find the minimum Csep, we refer to III.3 and choose the cost function as C([1, . . . , n]) ≡Tr[L−1] without the squared part in order to simplify the algebra. The algorithm for an n-paramter model works as follows: 1. Base case: the cost for an empty set of parameters is 0, C(∅) = 0. 2. Recurrence relation: exploiting the optimal substructure, for any non-empty sequence S, the optimal cost C(S) is found by considering each element j ∈S as the potential last element in the optimal sequence for S. If j is the last element, the preceding elements must form an optimal sequence for the subset S \ {j}. The total cost is thus the optimal cost of the preceding subset plus the cost contribution of adding j. We therefore have C(S) = min j∈S C(S \ {j}) + cost(j, S \ {j}). (A1) The cost function cost(j, S \ {j}), is that of adding the contribution of the parameter j, at the end of the sequence I ≡S \ {j}. Remembering from Th. III.3 that using L gives a revers ordering, we underline how this means we’re measuring the j-th parameter before the reversed I sequence. To evaluate the cost addition we consider the new QFIM QS =  QI,I qI,j qj,I qj,j  , (A2) with its new Cholesky decomposition matrix LS =  LI,I 0 lj,I lj,j  . (A3) From this we see QS = LSLT S =  LI,ILT I,I LI,IlT j,I lj,ILT I,I l2 j,j + lj,IlT j,I  . (A4) Equating this to Eq. A2 we get lj,I = qj,I  LT I,I −1 ⇒l2 j,j = qj,j −qj,IQ−1 I,IqT j,I, (A5) 21 which is the Schur complement of the block QI,I of the new QFIM QS. Using Th. III.3 on the new sequence we know that the cost function increases by 1/lj,j, therefore we conclude cost(j, S \ {j}) = 1 q qj,j −qj,IQ−1 I,IqT j,I . (A6) The algorithm is implemented by iteratively computing C(S) for all subsets S of size from 0 to n. We use memoization, storing all optimal cost values of subsequences I, which will be used for next steps, avoiding redundant calculations. The optimal sequence can then be reconstructed from the intermediate choices (and then reversing them), and the Csep bound is obtained squaring the optimal value since the algorithm minimizes the quantity C = Pn i=1 1/Lii. Let’s analyze the complexity of this algorithm. We still have to go through all possible subsets, therefore there’s a complexity of O(2n). Despite being exponential, this is the big complexity speedup. We in fact no longer need to look at all the permutations, but we can just consider the subsets. Then, for each iteration we have to invert k matrices (k −1) × (k −1), which has complexity O(k3). Just as before, doing this for each steps results in a complexity of O(n4). In conclusion we can manage to go from the brute force complexity O(n!n4) to O(2nn4). Regarding space complexity, for the memoization we need two tables, one for the optimal costs, and one for the paths. Each table stores an entry for all possible subsets with keys growing as O(n), which results in a space complexity of O(n2n). We briefly conclude this section, reminding that for large number of parameters this algorithm having an exponential trend becomes highly costly. Therefore one could consider using either heuristics or other techniques such as simulate annealing or genetic algorithms. Appendix B: Two-parameter SU(2) model with pure states We consider a family of Hamiltonians of the form HB,θ = B cos θ Jx + sin θ Jz  , (B1) where Ji (i = x, y, z) are the generators of SU(2) in the spin-j representation, and nθ = (cos θ, 0, sin θ). A general qubit state ρ0 can be written in Bloch vector form as ρ0 = 1 2 I + r0 · J, (B2) where r0 = (Tr[σxρ0], Tr[σyρ0], Tr[σzρ0]) 22 with |r0| = 1 and J = (Jx, Jy, Jz) = 1 2(σx, σy, σz). σi are the Pauli matrices. A general qutrit state can be written through its decomposition on an operator base bi ρ0 = 9 X i=1 ribi (B3) where r = {ri} = (Tr[ρ0b1], . . . , Tr[ρ0b9]), which should not be confused with the Bloch vector associated to the qutrit. We note that r9 = 1/ √ 3, and pure states are such that Tr[ρ2] = |r|2 = 1. The elements of the base bi are just the normalized Gell-Mann matrices b = {bi} = {Γ1/ √ 2, ..., Γ8/ √ 2, I/ √ 3} are the Gell-Mann matrices with Tr[bibj] = δij. For a probe state ρ0 evolving under the above Hamiltonian for a time t, the elements of the QFIM are given by QBB = 4t2 ⟨J2 nθ⟩0 −⟨Jnθ⟩2 0  , (B4) Qθθ = 16 sin2 Bt 2  ⟨J2 n1⟩0 −⟨Jn1⟩2 0  , (B5) QBθ = −4t sinBt 2  ⟨{Jn1, Jnθ}⟩0 −2 ⟨Jn1⟩0 ⟨Jnθ⟩0  , (B6) where n1 =  cos Bt 2 sin θ, −sin Bt 2 , −cos Bt 2 cos θ  is the derivative direction with respect to θ. For the Uhlmann matrix element relevant to compat- ibility conditions, we have DθB = 4t sinBt 2 ⟨Jn2⟩0 , (B7) where n2 =  sin Bt 2 sin θ, cos Bt 2 , −sin Bt 2 cos θ  . These formulas apply to both qubit (j = 1/2) and qutrit (j = 1) systems; the difference lies in the explicit form of the spin operators Ji. For the qubit case, they are expressed in terms of Pauli matrices as Jx = 1 2σx, Jy = 1 2σy, Jz = 1 2σz, 23 where σi are Pauli matrices. For the qutrit case, the generators take the form Jx = 1 √ 2      0 1 0 1 0 1 0 1 0     , Jy = 1 √ 2      0 −i 0 i 0 −i 0 i 0     , Jz =      1 0 0 0 0 0 0 0 −1     . [1] V. Giovannetti, S. Lloyd, L. Maccone, Quantum-enhanced measurements: beating the standard quan- tum limit, Science 306 (5700) (2004) 1330–1336. [2] M. G. A. Paris, Quantum estimation for quantum technology, International Journal of Quantum In- formation 7 (2009) 125–137. [3] V. Montenegro, C. Mukhopadhyay, R. Yousefjani, S. Sarkar, U. Mishra, M. G. A. Paris, A. Bayat, Review: Quantum metrology and sensing with many-body systems, Physics Reports 1134 (2025) 1–62. [4] M. Szczykulska, T. Baumgratz, A. Datta, Multi-parameter quantum metrology, Advances in Physics: X 1 (4) (2016) 621–639. [5] J. Liu, H. Yuan, X.-M. Lu, X. Wang, Quantum fisher information matrix and multiparameter estima- tion, Journal of Physics A: Mathematical and Theoretical 53 (2) (2020) 023001. [6] F. Albarelli, M. Barbieri, M. G. Genoni, I. Gianani, A perspective on multiparameter quantum metrol- ogy: From theoretical tools to applications in quantum imaging, Physics Letters A 384 (12) (2020) 126311. [7] G. Di Fresco, B. Spagnolo, D. Valenti, A. Carollo, Multiparameter quantum critical metrology, SciPost Physics 13 (4) (2022) 077. [8] A. Chrostowski, R. Demkowicz-Dobrza´nski, M. Jarzyna, K. Banaszek, On super-resolution imaging as a multiparameter estimation problem, International Journal of Quantum Information 15 (08) (2017) 1740005. [9] K. K. Lee, C. N. Gagatsos, S. Guha, A. Ashok, Quantum-inspired multi-parameter adaptive bayesian estimation for sensing and imaging, IEEE Journal of Selected Topics in Signal Processing 17 (2) (2022) 491–501. [10] A. Gupta, S. Datta, S. Kastha, S. Borhanian, K. Arun, B. Sathyaprakash, Multiparameter tests of general relativity using multiband gravitational-wave observations, Physical review letters 125 (20) (2020) 201101. [11] S. Ghosh, L.-C. Kwek, D. R. Terno, S. Vinjanampathy, Weak-value magnetometry for precision tests of fundamental physics, arXiv preprint arXiv:1912.10693 (2019). [12] T. Heinosaari, T. Miyadera, M. Ziman, An invitation to quantum incompatibility, Journal of Physics A: Mathematical and Theoretical 49 (12) (2016) 123001. [13] S. Ragy, M. Jarzyna, R. Demkowicz-Dobrza´nski, Compatibility in multiparameter quantum metrology, Phys. Rev. A 94 (2016) 052108. doi:10.1103/PhysRevA.94.052108. 24 [14] A. Candeloro, Z. Pazhotan, M. G. A. Paris, Dimension matters: precision and incompatibility in multi-parameter quantum estimation models, Quantum Science and Technology 9 (4) (2024) 045045. doi:10.1088/2058-9565/ad7498. [15] A. S. Holevo, Probabilistic and Statistical Aspects of Quantum Theory, Publications of the Scuola Normale Superiore. Monographs, Springer, Dordrecht, 2011. [16] H. Nagaoka, A new approach to cram´er-rao bounds for quantum state estimation, in: Asymptotic Theory of Quantum Statistical Inference: Selected Papers, WORLD SCIENTIFIC, 2005, pp. 100–112. [17] J. Suzuki, Explicit formula for the holevo bound for two-parameter qubit estimation problem, Journal of Mathematical Physics 57 (05 2015). doi:10.1063/1.4945086. [18] A. Carollo, B. Spagnolo, A. A. Dubkov, D. Valenti, On quantumness in multi-parameter quantum estimation, Journal of Statistical Mechanics: Theory and Experiment 9 (9) (2019) 094010. doi: 10.1088/1742-5468/ab3ccb. [19] S. Razavian, M. G. A. Paris, M. G. Genoni, On the quantumness of multiparameter estimation problems for qubit systems, Entropy 22 (11) (2020). doi:10.3390/e22111197. [20] J. He, G. Fazio, M. G. A. Paris, Weight-dependent and weight-independent measures of quantum incompatibility in multiparameter estimation, in preparation (2025). [21] L. J. Fiderer, T. Tufarelli, S. Piano, G. Adesso, General expressions for the quantum fisher information matrix with applications to discrete quantum imaging, PRX Quantum 2 (2021) 020308. doi:10.1103/ PRXQuantum.2.020308. URL https://link.aps.org/doi/10.1103/PRXQuantum.2.020308 [22] B. B. Machta, R. Chachra, M. K. Transtrum, J. P. Sethna, Parameter space compression underlies emergent theories and predictive models, Science 342 (6158) (2013) 604–607. [23] Y. Yang, F. Belliardo, V. Giovannetti, F. Li, Untwining multiple parameters at the exclusive zero- coincidence points with quantum control, New Journal of Physics 24 (12) (2023) 123041. doi:10. 1088/1367-2630/acae00. [24] J. J. Waterfall, F. P. Casey, R. N. Gutenkunst, K. S. Brown, C. R. Myers, P. W. Brouwer, V. Elser, J. P. Sethna, Sloppy-model universality class and the vandermonde matrix, Phys. Rev. Lett. 97 (2006) 150601. doi:10.1103/PhysRevLett.97.150601. URL https://link.aps.org/doi/10.1103/PhysRevLett.97.150601 [25] J. He, M. G. A. Paris, Scrambling for precision: optimizing multiparameter qubit estimation in the face of sloppiness and incompatibility, Journal of Physics A: Mathematical and Theoretical 58 (2025). [26] G. Bizzarri, M. Parisi, M. Manrique, I. Gianani, A. Chiuri, M. Rosati, V. Giovannetti, M. G. A. Paris, M. Barbieri, Controlling sloppiness in two-phase estimation with a tunable weak measurement, Optica Quantum 3 (5) (2025) 432–438. doi:10.1364/OPTICAQ.563646. [27] M. Frigerio, M. G. A. Paris, Overcoming sloppiness for enhanced metrology in continuous-variable quantum statistical models, Int. J. Quantum Inf. in press (2024). [28] C. Mukhopadhyay, A. Bayat, V. Montenegro, M. G. A. Paris, Beating joint quantum estimation limits 25 with stepwise multiparameter metrology, arXiv preprint arXiv:2506.06075 (2025). [29] Y. Yang, V. Montenegro, A. Bayat, Overcoming quantum metrology singularity through sequential measurements, Phys. Rev. Lett. 135 (2025) 010401. doi:10.1103/gsyv-jllq. [30] P. Sharma, S. Olivares, D. K. Mishra, M. G. A. Paris, Mitigating sloppiness in joint estimation of successive squeezing parameters, arXiv preprint arXiv:2506.15638 (2025). [31] B. B. Machta, R. Chachra, M. K. Transtrum, J. P. Sethna, Parameter space compression underlies emergent theories and predictive models, Science 342 (6158) (2013) 604–607. doi:10.1126/science. 1238723. [32] A. Candeloro, Z. Pazhotan, M. G. A. Paris, Dimension matters: precision and incompatibility in multi-parameter quantum estimation models, Quantum Science and Technology 9 (4) (2024) 045045. doi:10.1088/2058-9565/ad7498. [33] R. Pal, P. Ghosh, A. Ghoshal, U. Sen, Role of phase of optimal probe in noncommutativity vs coherence in quantum multiparameter estimation, arXiv preprint arXiv:2507.04824 (2025). [34] H. Cramer, Mathematical methods of statistics, Princeton mathematical series, Princeton Univ. Press, Princeton, NJ, 1954. [35] S. M. Kay, Statistical signal processing: estimation theory, Prentice Hall 1 (1993) Chapter–3. [36] C. W. Helstrom, Minimum mean-squared error of estimates in quantum statistics, Physics Letters A 25 (2) (1967) 101–102. [37] F. Albarelli, J. F. Friel, A. Datta, Evaluating the holevo cram´er-rao bound for multiparameter quantum metrology, Physical review letters 123 (20) (2019) 200503. [38] M. Hayashi, K. Matsumoto, Asymptotic performance of optimal state estimation in qubit system, Journal of Mathematical Physics 49 (2008) 102101–102101. doi:10.1063/1.2988130. [39] J. Kahn, M. Guta, Local asymptotic normality for finite dimensional quantum systems, Communica- tions in Mathematical Physics 289 (2009) 597–652. doi:10.1007/s00220-009-0787-3. [40] K. Yamagata, A. Fujiwara, R. D. Gill, Quantum local asymptotic normality based on a new quantum likelihood ratio, The Annals of Statistics 41 (4) (2013) 2197 – 2217. doi:10.1214/13-AOS1147. [41] A. Carollo, D. Valenti, B. Spagnolo, Geometry of quantum phase transitions, Physics Reports 838 (2020) 1–72. [42] S. Chang, M. G. Genoni, F. Albarelli, Multiparameter quantum estimation with gaussian states: effi- ciently evaluating holevo, rld and sld cram\’er-rao bounds, arXiv preprint arXiv:2504.17873 (2025). [43] G. Bizzarri, M. Parisi, M. Manrique, I. Gianani, A. Chiuri, M. Rosati, V. Giovannetti, M. G. A. Paris, M. Barbieri, Controlling sloppiness in two-phase estimation with a tunable weak measurement, Optica Quantum 3 (5) (2025) 432–438. [44] R. Bellman, Dynamic programming treatment of the travelling salesman problem, J. ACM 9 (1962) 61–63. doi:10.1145/321105.321111. [45] M. Held, R. M. Karp, A dynamic programming approach to sequencing problems, in: ACM ’61, Association for Computing Machinery, New York, NY, USA, 1961, p. 71.201–71.204. doi:10.1145/ 26 800029.808532.
Orders matter: tight bounds on the precision of sequential quantum estimation for multiparameter models Gabriele Fazio,1, ∗Jiayu He,2, † and Matteo G. A. Paris1, ‡ 1Dipartimento di Fisica Aldo Pontremoli, Universit`a degli Studi di Milano, I-20133 Milano, Italy 2QTF Centre of Excellence, -00014 Helsinki, Finland (Dated: October 17, 2025) In multiparameter quantum metrology, the ultimate precision of joint estimation is dictated by the Holevo Cram ́er-Rao bound. In this paper, we discuss and analyze in detail an alternative approach: the stepwise estimation strategy. In this approach, parameters are estimated sequentially, using an optimized fraction of the total available resources allocated to each step. We derive a tight and achievable precision bound for this protocol, the stepwise separable bound, and provide its closed-form analytical expression, revealing a crucial dependence on the chosen measurement ordering. We provide a rigorous comparison with the joint measurement strategy, deriving analytical conditions that determine when the stepwise approach offers superior precision. Through the analysis of several paradigmatic SU(2) unitary encoding models, we demonstrate that the stepwise strategy can indeed outperform joint measurements, particularly in scenarios characterized by non-optimal probes or models with a high degree of sloppiness. Our findings establish stepwise estimation as a powerful alternative to joint and collective measurements, proving that sequential protocols can provide a genuine metrological advantage, especially in resource-constrained or imperfect experimental settings. I. INTRODUCTION Quantum metrology seeks to leverage uniquely quantum features such as entanglement, coherence, and squeezing to surpass the precision limits imposed by classical strategies [1-3]. While significant theoretical and experimental progress has been made in single-parameter quantum estimation, the simultaneous estimation of multiple parameters has emerged as a challenging frontier. Multiparameter quantum estimation underpins a broad spectrum of applications [3-7], from highresolution imaging [8, 9] to tests of fundamental physics [10, 11]. However, it also introduces ∗ (G. Fazio) † (J. He) ‡ (M. G. A. Paris) 16 Oct 2025 2 conceptual and technical challenges: the optimal measurements for different parameters may not commute, rendering it impossible to simultaneously saturate the individual quantum Cram ́er-Rao bounds (QCRBs)[12-14]. In such cases, the ultimate limit on precision is dictated by the Holevo bound, which remains tight but notoriously difficult to evaluate and saturate[15]. It characterizes the ultimate achievable precision in the asymptotic regime, but it typically requires collective measurements over infinitely many copies of the quantum probe. Nagaoka introduced an alternative bound based on separable measurements [16]. Although generally weaker than the Holevo bound, it may be achieved using sepaarate (non collective) measurement schemes, and it has been shown to coincide with the Holevo bound in specific cases, such as two-parameter estimation with pure states in a two-dimensional Hilbert space [17]. However, both the Holevo and Nagaoka bounds lack closed-form analytic expressions in the general case, which limits their utility as benchmarks and in comparing strategies. To bridge the gap between theoretical bounds and practical feasibility, a hierarchy of precision limits has been developed. Among these, the R [18, 19] and T [20] bounds stand out for their analytic tractability and close connection to the geometry of quantum states, i.e., the quantum Fisher information matrix and the mean Uhlmann curvature. These bounds provide a useful bracketing of the Holevo bound, clarifying how quantum incompatibility and the sloppiness of model [21-27], i.e., the possible inefficiency of the parameter encoding, limit the achievable precision. Motivated by these considerations, we discuss in details a novel approach, and its corresponding precision bound, that is easier to obtain in analytic form and has operational relevance. In this context, we consider a stepwise estimation protocol [28], in which different parameters are estimated sequentially by allocating resources to different subsets of the measurement data. Such a strategy offers a compromise between the simplicity of fully separable single-parameter estimation and globally joint multiparameter estimation, while providing a tractable analytical framework. This is particularly relevant in scenarios where joint strategies are impractical [29] or when experimental constraints limit the accessibility to optimal probes [30]. In this work, we develop a general framework for stepwise estimation in quantum multiparameter metrology. We introduce a class of precision bounds tailored to such sequential strategies and investigate their dependence on parameter ordering, as well as their relation to existing bounds such as the Holevo, symmetric logarithmic derivative (SLD) QCRB and T/R-type bounds. Our results demonstrate that stepwise estimation can not only provide operationally meaningful bounds but may, under certain conditions, outperform joint strategies, particularly in the presence of sloppiness, i.e., when only certain combinations of parameters affect the quantum state significantly [31], or 3 under resource constraints that prevent the preparation of ideal probe states. The paper is organized as follows. In Section II, we review the general framework of quantum multiparameter estimation, including the definitions of the classical and quantum Cram`er-Rao bounds and the Holevo bound, and discuss their interrelations and attainability conditions. Section III introduces the concept of stepwise estimation strategies and formulates a family of precision bounds associated with sequential resource allocation. We derive closed-form analytic expressions for these bounds, establish their optimality with respect to different parameter orderings, and provide inequalities that characterize their performance relative to known quantum bounds. Section IV illustrates the application of our framework to explicit physical models, including twoand three-parameter estimation scenarios with qubit and qutrit probes [32, 33]. These examples highlight conditions under which stepwise estimation outperforms joint estimation, especially when the quantum Fisher information is highly anisotropic or the parameters are incompatible. Finally, Section V summarizes our main findings and discusses possible future directions. II. FRAMEWORK OF MULTIPARAMETER QUANTUM ESTIMATION In this Section, we outline the theoretical background for multiparameter quantum estimation. Let ρλ be a quantum state on a finite-dimensional Hilbert space, parameterized by a vector of n real parameters λ = (λ1, . . . , λn)T , and {Πk} with Πk ≥0 and P k Πk = I a positive operator-valued measurement (POVM). The probability of obtaining outcome k is given by pλ(k) = Tr [ρλΠk]. A corresponding estimator ˆλ(k) is assigned to each outcome, and the performance is evaluated via the covariance matrix V (ˆλ), whose components read: Vμν = X k pλ(k)[λμ(k) -Ek(ˆλμ)][λν(k) -Ek(ˆλν)]. (1) where Ek(ˆλμ) denotes the expectation value of the estimator ˆλμ under the distribution pλ(k). Assuming locally unbiased estimators, i.e. E[ˆλμ] = λμ and ∂νE(ˆλμ) = δμν, the classical Cram ́erRao bound (CRB) provides a fundamental lower limit on the achievable covariance[34]: V (ˆλ) ≥1 M F -1 , (2) with F the Fisher information matrix (FIM), and M the number of repeated measurements. The elements of FIM are defined as: Fμν ≡ X k ∂μpλ(k) ∂νpλ(k) pλ(k) . (3) 4 The CRB can be saturated in the asymptotic limit of an infinite number of repeated experiments using Bayesian or maximum likelihood estimators [35]. In the quantum setting, due to non-commutativity of observables, multiple versions of quantum Fisher information matrices (QFIMs) arise. One of the most prominent among them is defined through the symmetric logarithmic derivatives (SLDs) LS μ [36], defined as operators which satisfy: ∂μρλ = LS μρλ + ρλLS μ 2 (4) The SLD-QFIM is then defined as Qμν ≡1 2 Tr ρλ{LS μ, LS ν } , (5) where {A, B} = AB + BA denotes the anti-commutator of the operators A and B. In the case of pure statistical models, ρλ = |ψλ⟩⟨ψλ|, the QFIM reduces to: Qμν = 4 Re ⟨∂μψλ|∂νψλ⟩-⟨∂μψλ|ψλ⟩⟨ψλ|∂νψλ⟩ , where ∂k ≡∂λk. A. General Quantum Cram ́er-Rao Bounds In this subsection, we present a general framework for quantum Cram ́er-Rao bounds (QCRBs), covering all commonly uesd quantum bounds, including the SLD bound CS, the Holevo bound CH, as well as the R and T bounds, and their hierarchical relationships. Utilizing the QFIM one can formulate scalar bounds for estimation error under a given cost matrix W (a real, positive definite d × d matrix). The quantum Cram ́er-Rao Bound (QCRB), or SLD QCRB states that Tr[V (W, ˆλ)] ≥CS[W, ˆλ], with CS[W, ˆλ] ≡1 M Tr W Q-1 , (6) where M is the number of independent repetitions of measurement. Increasing it can reduce statistical errors. However, due to non-commuting SLDs, the bound isn't always tight. A more fundamental limit is provided by the Holevo bound[15, 37]: CH[W, ˆλ] ≡min X∈X n Tr [W Re (Z[X])] + Tr [||W Im (Z[X])||1] o , (7) where Zμν ≡Tr[ρλXμXν], and the set X contains Hermitian operators Xμ satisfying the local unbiased condition: Tr[∂νρλXμ] = δμν. The Holevo bound becomes asymptotically achievable in the limit of collective measurements over many copies of the state[38-40]. 5 A key challenge in multiparameter estimation arises from the incompatibility of optimal measurements for different parameters. The condition under which the SLD bound is saturable is given by the so-called weak commutativity criterion [13]: Tr ρλ[LS μ, LS ν ] = 0. (8) where [A, B] = AB -BA denotes the commutator of the operators A and B. This motivates the definition of the antisymmetric matrix U, often called the mean Uhlmann curvature (MUC). This quantity captures the average non-commutativity between the parameter generators: Uμν ≡1 2i Tr ρλ[LS μ, LS ν ] , (9) and is useful to quantify the incompatibility between parameters. For pure statistical models, ρλ = |ψλ⟩⟨ψλ|, we have Uμν = 4 Im ⟨∂μψλ|∂νψλ⟩-⟨∂μψλ|ψλ⟩⟨ψλ|∂νψλ⟩ , Due to the hardness of finding analytical expressions for the Holevo Bound, two bounds have been introduced. These are based on the quantumness measures R [18, 19, 41], and T [6, 18, 20], defined as R ≡||iQ-1U||∞, (10) T(W) ≡∥ √ WQ-1UQ-1√ W∥1 Tr [WQ-1] , (11) where || · ||∞denotes the maximum eigenvalue of the matrix, and ||A||1 = Tr[ √ A†A] is the nuclear norm. The corresponding bounds respect the following chain of inequalities: CS[W, ˆλ] ≤CH[W, ˆλ] ≤[1 + T(W)] CS[W, ˆλ] ≤[1 + R] CS[W, ˆλ] ≤2 CS[W, ˆλ] . (12) B. Two- and Three-Parameter Pure-State Models For the special case of two-parameter pure models, we have that R simplifies to[19] R2 = s det [U] det [Q], (13) and using a diagonal weight matrix W = diag(1, ω), the T bound reduces to T2 = 2 p ω det[U] Q22 + ω Q11 . (14) 6 For two-parameter qubit models we have that [17] CH[W, ˆλ] = CS[W, ˆλ] + p det[W] det[Q] Tr[ρλ[LS 1 , LS 2 ]] . (15) For pure two-parameter qubit models we have det[Q] = det[U] = U2 12 = 1 2iTr ρ[LS 1 , LS 2 ] 2 , (16) and thus, using Eq. (14), the explicit formula CH = Tr[WQ-1] + 2 p det[WQ-1] = CS(1 + T2). (17) These bounds respect the following bracketing relation CS ≤CH = (1 + T2)CS ≤(1 + R2)CS = 2CS . (18) For three-parameter models R is given by R3 = s uT Q u det [Q], (19) with the vector u defined as u = (U23, -U13, U12). (20) The quantity T, for a diagonal weight matrix W = diag(1, ω1, ω2) is given by T3 ≡T3(W3) = 2 q uT Q f W3 Q u det [Q] , (21) where f W3 = diag(ω1ω2, ω2, ω1). III. A BOUND FOR STEPWISE MEASUREMENT STRATEGIES The SLD QCRB offers a computationally straightforward lower bound on the estimator variance, but its attainability is guaranteed only if the SLDs satisfy the weak commutativity criterion. When this compatibility condition is not met, the Holevo Cram ́er-Rao Bound provides a tighter, asymptotically achievable limit. However, reaching this bound necessitates collective measurements-experimentally demanding protocols that act jointly on many copies of the quantum state. Nagaoka proposed a simpler measurement strategy using separable measurements performed independently on each probe. While this simplifies the implementation, it generally results in a less tight 7 bound. In this context, we introduce a new class of bounds tailored for sequential measurement strategies. This stepwise approach involves allocating resources sequentially to estimate subsets of parameters, providing a framework that bridges the gap between single-parameter estimation and fully collective multiparameter strategies. Consider the task of estimating a set of n parameters, (λ1, ..., λn), using a total of M identically prepared probes. To implement a sequential measurement strategy, we partition the total set of probes into n groups (each dedicated to a different parameter) according to allocation fractions {γj : j = 1, .., n}, where each γj satisfies 0 ≤γj ≤1 and Pn i=j γj = 1. Thus, the j-th group contains γjM probes. The process unfolds as follows: 1. An experiment with γ1M probes is performed, using a measurement strategy optimized to estimate λ1 and assuming the other parameters (λ2, . . . , λn) unknown; 2. A second experiment, employing with γ2M probes is performed, aiming at optimally estimating λ2 assuming λ1 known from the previous step and the rest of parameters (λ3, . . . , λn) unknown; 3. The procedure continues sequentially: at step j, γjM probes are dedicated to optimally estimating the parameter λj assuming λ1, . . . λj-1 known from the previous steps and the rest of parameters (λj+1, . . . , λn) unknown; 4. Finally, the last group of γnM probes is used to implement a single-parameter quantum estimation of λn. The total variance for this protocol is P j V ( ˆλj), and is bounded by the sum of the optimal variances achievable at each step. By optimizing over all possible resource allocations {γj}, we define the stepwise separable bound for a given estimation order (1 →n) as: C1→n sep ≡ min γ1,...,γn [Q-1 1,...,n]11 γ1 + [Q-1 2,...,n]11 γ2 + ... + Q-1 n γn . (22) Here, Qj,...,n is the Quantum Fisher Information Matrix (QFIM) for the parameter subset (λj, . . . , λn), constructed by removing the first j -1 rows and columns from the full QFIM. Its inverse, Q-1 j,...,n, is an (n -j + 1) × (n -j + 1) positive-definite matrix. The notation [Q-1 j,...,n]11 refers to the (1, 1) entry of this inverse matrix, corresponding to the SLD Cram ́er-Rao bound for λj assuming λ1, . . . λj-1 known from the previous steps and the rest of parameters (λj+1, . . . , λn) unknown. We use only the first element of the inverse QFIM because this is the variance we get 8 by considering a weight matrix of the form W = diag(1, 0, . . . ). We do this since we are interested in estimating just the j-th parameter at each step. The explicit dependence of the Csep bound on the estimation sequence order is denoted by the superscript. We highlight how this bound is also achievable. We in fact have that for each measurement step we are using a weight matrix W = diag(0, . . . , 1, . . . , 0), and with this choice the Holevo bound coincides with the respective QCRB. We also note that this bound is achievable in the asymptotic limit. To see this, consider the sequential protocol itself: at step j, we allocate γjM probes and perform the measurement that is optimal for the reduced parameter set (λj, . . . , λn). In the asymptotic limit, using the locally unbiased estimator associated with this measurement, the variance of ˆλj saturates [Q-1 j,...,n]11/γ1. Summing over all steps yields the stepwise bound, which is achievable in the asymptotic regime. A. Minimization of the Stepwise Bound Given an n-parameter model, we want to obtain the stepwise bound introduced in the previous Section, i.e., to perform the minimization in Eq.(22). The following two theorems provide closed analytical expressions for this minimization. The first theorem expresses the bound as a telescopiclike series, effectively removing the minimization over the {γi} in the original definition and yielding a fully general analytical expression for the stepwise bound. While this formula is general, it involves a sum over n terms, which becomes more and more complex for increasing n. The second theorem addresses this issue by exploiting the Cholesky decomposition of the QFIM, providing a computationally efficient alternative for evaluating the bound. Theorem III.1. The optimized stepwise measurement bound C1→n sep of Eq.(22) is given by C1→n sep = s det[Q2,...,n] det[Q1,...,n] + s det[Q3,...,n] det[Q2,...,n] + · · · + s 1 det[Qn] !2 , (23) which is achieved with γj = q [Q-1 j,...,n]11 nP l=1 q [Q-1 l,...,n]11 . (24) Proof: Let us denote [Q-1 j,...,n]11 ≡Aj. We can rewrite the problem as C1→n sep = min {γj} n X j=1 Aj γj = ∥x∥2, 9 where x ≡ q A1 γ1 , . . . , q An γn , and ∥· ∥denotes the norm of vector. We define y ≡(√γ1, . . . , √γn) . Using Cauchy-Schwarz inequality |x · y|2 ≤∥x∥2∥y∥2 we get   n X j=1 p Aj   2 ≤   n X j=1 Aj γj     n X j=1 γi  = n X j=1 Aj γj . The minimum is thus C1→n sep = min {γj} n X j=1 Aj γj =   n X j=1 p Aj   2 , and is obtained when y is parallel to x, i.e., for γj ∝ p Aj. Normalizing to P j γj = 1, we get the optimal {γj} as γ∗ j = p Aj nP l=1 √Al = q [Q-1 j,...,n]11 nP l=1 q [Q-1 l,...,n]11 . To express C1→n sep in terms of determinants, we use the relation Aj = [Q-1 j,...,n]11 = det[Qj+1,...,n] det[Qj,...,n] . With this we prove that C1→n sep =   n X j=1 p Aj   2 = s det[Q2,...,n] det[Q1,...,n] + s det[Q3,...,n] det[Q2,...,n] + · · · + s 1 det[Qn] !2 . Corollary III.2. The optimal stepwise measurement bound, weighted by a diagonal weight matrix W = diag(ω1, . . . , ωn) is given by C1→n sep [W] = min {γj} ω1[Q-1 1,...,n]11 γ1 + · · · + ωnQ-1 n γn = min {γj} n X j=1 ̃Aj γj =   n X j=1 q ̃Aj   2 (25) = s ω1det[Q2,...,n] det[Q1,...,n] + s ω2det[Q3,...,n] det[Q2,...,n] + · · · + r ωn det[Qn] !2 . (26) Theorem III.3. Given L, the lower triangular matrix obtained from the Cholesky decomposition of the QFIM Q = LLT , then the stepwise bound Cn→1 sep can be expressed as Cn→1 sep = Tr[L-1] 2 , (27) where we highlight that the measurement order is reversed, firstly the n-th parameter, up to the first. 10 Proof: Knowing that L is lower triangular, we have that the right side is Tr[L-1] 2 =   n X j=1 1 [L]jj   2 , where [L]jj is the diagonal elements of L. For the leading principal minors, we know Q1,...,j = L1,...,jLT 1,...,j, from which we get det[Q1,...,j] = det[L1,...,j]det[LT 1,...,j] = det[L1,...,j]2 = jY l=1 [L]2 ll, where L1,...,j refers to leading principal minors of order j from L. Since this holds for all j, we have [L]jj = s det[Q1,...,j] det[Q1,...,j-1], then Tr[L-1] = X j 1 [L]jj = X i s det[Q1,...,j-1] det[Q1,...,j] . Therefore, we get Tr[L-1] 2 = s 1 det[Q1] + s det[Q1] det[Q1,2] + · · · + s det[Q1,...,n-1] det[Q1,...,n] !2 . Here we adopt the reversed estimation order, starting from the n-th parameter up to the first, in order to simplify the algebraic derivation. The same approach can be applied to obtain a closed-form expression for the original 1 →n order. B. Bounding of the Stepwise Strategy The value of the stepwise bound Csep changes with the order of estimation of the parameters, and a question arises on how these values are related. In Appendix A we suggest an algorithm to efficiently calculate numerically the more convenient ordering for given a QFIM. In this Section, we instead tackle an order independent analysis. In particular we provide an order independent lower and upper bound. Theorem III.4. For n-parameters estimation, the stepwise bound is bracketed by n3 Tr[Q] ≤n2 n s 1 det[Q] ≤Csep ≤nTr[Q-1]. (28) 11 Proof: We begin by considering the second inequality and, without loss of generality, focus on the stepwise bound C1→n sep . Using the arithmetic/geometric means (AM-GM) inequality we get C1→n sep = s det[Q2,...,n] det[Q1,...,n] + s det[Q3,...,n] det[Q2,...,n] + · · · + s 1 det[Qn] !2 ≥   n n v u u t s det[Q2,...,n] det[Q1,...,n] × s det[Q3,...,n] det[Q2,...,n] × · · · × s 1 det[Qn]    2 = n2 n s 1 det[Q]. This can then be reproduced for any choice of ordering. In turn, changing the order of parameter estimation corresponds to a rearrangement of the rows and columns of the QFIM. Having that det[Q] is invariant under those changes, this provides a lower limit for all orderings. The AM-GM also tells that in general this bound is not tight. It is achieved iff all the elements of the series are equal, meaning iff all diagonal elements of the Cholesky decomposition of the QFIM L are equal. The leftmost inequality is then proved by a property of positive matrices n Tr A-1 ≤ np det[A], which can be seen using the harmonic/geometric mean inequality (HM-GM) on the eigenvalues. The last inequality to prove is the rightmost, and can be seen using that [Q-1 j,...,n]11 = det[Qj+1,...,n] det[Qj,...,n] , and [Q-1 j,...,n]11 ≤[Q-1]jj. With these, using the arithmetic/quadratic (AM-QM) inequality, we have Csep =   n X j=1 q [Q-1 j,...,n]11   2 ≤n n X j=1 [Q-1 j,...,n]11 ≤nTr[Q-1] A necessary and sufficient condition for both the upper and lower bounds to be saturated is Q = cIn, where c > 0. If Q = cIn, we easily obtain lower and upper bounds coincide with Csep. Conversely, suppose Csep simultaneously reaches the lower and upper bounds. From the lower bound (AM-GM), all terms in the telescopic product must be equal: [Q-1 j,...,n]11 = k2 for all j. From the upper bound (AM-QM), we must have [Q-1 j,...,n]11 = [Q-1]jj and all diagonal elements of Q-1 equal. Combining these conditions with the fact that Q is a positive definite symmetric matrix, implies Q = 1 k2 In. Thus, Q = c In is both necessary and sufficient. 12 C. Csep and sloppiness in 2-parameter qubit models In the case of a two-parameter qubit model, we now establish conditions for the separable Csep to outperform the three bounds: SLD QCRB CS, T-dependent bound CT, and R-dependent bound CR. Since Csep is order dependent, we focus on C1→2 sep (estimating λ1 before λ2), with formulas for the opposite order derivable by switching Q11 and Q22. To isolate QFIM effects, we set W = diag(1, 1) in CS and CT. To understand when C1→2 sep ≤CS holds, we convert the inequality into a condition on the amount of sloppiness, which is quantified by the parameter s [25] s := 1 det[Q], which captures how strongly the system depends on a combination of the parameters rather than on them individually. In particular, when estimating λ1 first, followed by the estimation of λ2, the separable bound is given by C1→2 sep = 1 Q22 + Q22 det[Q] + 2 s 1 det[Q] , and the inequality C1→2 sep ≤CS may be written as s ≥ 1 Q2 11 1 + p 1 + Q11/Q22 2 ≡4Q2 22 Q4 12 . (29) This indicates that C1→2 sep becomes tighter if sloppiness is sufficiently large. The threshold is determined by the ratio Q22/Q2 12, not by their absolute values. When |Q12| ≫√Q22 (strong correlation), Csep outperforms CS even with moderate sloppiness. However, from the above expression, it is clear that when the QFIM is diagonal (i.e., Q12 = 0), C1→2 sep is strictly larger than CS. This is consistent with the intuition that when parameters are statistically independent (i.e., the QFIM is diagonal) stepwise estimation offers no advantage. Conversely, when statistical correlations between parameters are unavoidable due to model structure, a stepwise estimation strategy may become advantageous. Since CS is not always attainable, we now discuss the relationships of Csep with the two bounds CT and CR, which are analytically expressible and always saturable (at least asymptotically) and provide upper bounds for CH at fixed weight matrix and for all of them, respectively. The bound CT is given by CT = CS(1 + T2) = Q11 + Q22 + 2 |U12| det[Q] , 13 and the condition C1→2 sep ≤CT corresponds to s ≥ 4Q2 22 Q2 12 + 2Q22 |U12| 2 . (30) This implies that Csep surpasses CT under weaker sloppiness requirements, enhancing its practical attainability. Furthermore, when the QFIM is diagonal, the condition for C1→2 sep ≤CT becomes s ≥ 1 det[U], which is always satisfied in two-parameter pure-state qubit models according to [25]. Furthermore, this implies that in such models: C1→2 sep = CT = CH, and the advantage of C1→2 sep becomes evident when sloppiness is large enough. The R-dependent bound reads as follows CR = CS(1 + R2) = Q11 + Q22 det[Q] 1 + |U12| s 1 det[Q] ! , and the inequality C1→2 sep ≤CR corresponds to |U12| (Q2 12 + Q2 22)s + Q2 12 √s + |U12| -2Q22 ≥0. (31) Solving (31) yields the explicit condition: s ≥ √ ∆-Q2 12 2 4 |U12|2 Q2 12 + Q2 22 2 , ∆= Q4 12 + 4 |U12| Q2 12 + Q2 22 (2Q22 -|U12|) ≥0. Remarkably, when 2Q22 = |U12|, the condition simplifies to s ≥0, indicating that C1→2 sep universally outperforms CR regardless of the sloppiness. This scenario corresponds to a regime where quantum incompatibility makes sequential estimation mostly convenient. When the QFIM is diagonal, the condition simplifies significantly, and the threshold becomes s ≥2Q22 -|U12| |U12|Q2 22 . When |U12| > 2Q22, the inequality holds for all s > 0 since the right-hand side is negative, and so C1→2 sep always dominates CR. When |U12| < 2Q22, the proper relation between Q22 and |U12| allows C1→2 sep ≤CR to hold even with low sloppiness, e.g., when Q22 and |U12| are both large. In the special case of vanishing incompatibility (i.e., U12 = 0), the attainability condition for CT and CR reduces exactly to that of C1→2 sep as shown in Eq. (29). However, even in these cases, the inequality only holds when sloppiness is sufficiently large. Finally, while the expression for C1→2 sep ≤CR is more involved, it still ultimately requires a large sloppiness for the superiority of Csep to manifest. In summary, Csep offers a practically attainable bound in models with sufficient sloppiness, especially when the interplay between quantum incompatibility and statistical correlation is optimized. 14 IV. PERFORMANCE OF JOINT AND STEPWISE ESTIMATION IN SU(2) MODELS In this Section, we compare the performance of joint estimation (JE) and stepwise estimation (SE) strategies in estimating the parameters encoded unitarily by SU(2) Hamiltonians of the form H = B · Jn, (32) where Jn is an SU(2) generator along direction n. Three cases are considered: a two-parameter model with qubits, a two-parameter model with qutrits, and a three-parameter model with qutrits. Here, we summarize the main results and discuss their implications for multiparameter quantum estimation. The detailed derivations of the QFIM and the Uhlmann matrix for each case are provided in Appendix B. A. SU(2) 2-parameter estimation in qubit models Let us consider the Hamiltonian HB,θ = B(cos θJx + sin θJz) , (33) where Jj denotes the j-th generator of SU(2). As discussed in [32], in the case of qubit probes the QFIM is QBB = t2[1 -(nθ · r0)2] (34) Qθθ = 4 sin2 Bt 2 [1 -(n1 · r0)2] (35) QBθ = 2t sin Bt 2 (n1 · r0)(nθ · r0), (36) and the Uhlmann matrix is DθB = 2t sin Bt 2 n2 · r0 (37) with vectors nθ = (cos θ, 0, sin θ) (38) n1 = (cos Bt 2 sin θ, -sin Bt 2 , -cos Bt 2 cos θ) (39) n2 = nθ × n1 = (sin Bt 2 sin θ, cos Bt 2 , -sin Bt 2 cos θ) (40) r0 = (Tr[σxρ0], Tr[σyρ0], Tr[σzρ0]), (41) 15 where ρ0 = |ψ0⟩⟨ψ0| denotes the input probe state. Our goal is to compare the performance of stepwise and joint estimation strategies by analyzing the SE bound alongside other bounds, including the Holevo bound. Since the SLD Cram ́er-Rao bound is not attainable in this model, our analysis focuses on the Holevo bound, together with the T-bound and the R-bound. For these kind of qubit models, we always have R = 1 [19, 32], independently on the value of the parameters (B, θ) and the choice of the probe state |ψ0⟩. We thus have CR = 2CS. From Theorem III.4 we also have Csep ≤2 Tr[Q-1] ≡2CS for two-parameter models. Combining these results immediately gives Csep ≤CR (42) Next, we consider the T-bound. Using Eq. 14 with the identity as weight matrix, we get T = 4t q (1 -α2 -β2) sin2 Bt 2 2 (1 -β2) (1 -cos(Bt)) + (1 -α2) t2 , (43) where α ≡nθ ·r0 and β ≡n1 ·r0, such that α, β ∈[-1, 1], α2 +β2 ≤1 and n2 ·r0 = p 1 -α2 -β2. From this, it can be proven analytically that C1→2 sep , C2→1 sep ≤Tr[Q-1](1 + T) (44) holds for every choice of α, β, B, t whenever Q is invertible. As stated in Eq. (17) for this model CH = CS(1 + T). Therefore, we conclude that for two-parameter SU(2) qubit models a stepwise estimation strategy always outperform the joint strategy: C1→2 sep , C2→1 sep ≤CT = CH. (45) B. SU(2) 2-parameter estimation in qutrit models We now consider the a two-parameter SU(2) qutrit model, where the Hamiltonian remains the same as in the qubit case, but the probe state is generalized to a qutrit. The detailed calculations are provided in Appendix B. The QFIM is QBB = 4t2[n2 θ · r -(nθ · r)2] (46) Qθθ = 16 sin2 Bt 2 [n2 1 · r -(n1 · r)2] (47) QBθ = -4t sin Bt 2 [n{1,θ} · r -2(n1 · r)(nθ · r)], (48) 16 with [n2 θ]i ≡Tr[J2 nθbi], [n2 1]i ≡Tr[J2 n1bi], [n{1,θ}]i ≡Tr[{Jn1, Jnθ}bi], and {bi} = Γ1 √ 2, . . . , Γ8 √ 2, I √ 3 , r = {ri} = Tr[ρ0b1], . . . , Tr[ρ0b9] , with Γi the Gell-Mann matrices, and bi their normalized version as described in Appendix B. The difficulty in optimizing this expression lies in the fact that pure qutrit states live on a 4-dimensional manifold embedded in a 7-sphere, making the domain highly nontrivial. To address this, we resort to numerical optimization, sampling over all possible pure qutrit states. Our results reveal that the optimal states for various values of B, θ, and t share a common property, they satisfy r ⊥{nθ, n1, n{1,θ}, n2}, and n2 θ · r, n2 1 · r = 1. This implies that the QFIM takes the form Q =  4t2 0 0 16 sin2 Bt 2  , (49) yielding a QCRB of CS = 1 16 4 t2 + csc2 Bt 2 . The Uhlmann curvature becomes U =   0 -4t sin Bt 2 (n2 · r) 4t sin Bt 2 (n2 · r) 0  = 0, (50) since r ⊥n2. The above analysis shows that using qutrits we have sufficient freedom (i.e., room for improvement in the Hilbert space) to minimize the quantum incompatibility. The Uhlmann matrix vanishes, U = 0, and thus we have T = 0 and CH = CSLD. Furthermore, the optimal QFIM is diagonal. We conclude that by suitably choose the probe, joint estimation (JE) outperforms stepwise estimation (SE). C. SU(2) 3-parameter estimation in qutrit models In the previous two examples, we analyzed two-parameter models and observed that increasing the probe dimension reduces the effectiveness of the SE method when the encoding is good. We 17 now explore how this behavior changes when the number of parameters increases to three. If we restrict to qubits, the small dimension of the Hilbert space makes the QFIM singular. Therefore, the smallest probe dimension that allows a meaningful analysis is a qutrit. We extend the previous model by allowing the vector B to span all directions. The Hamiltonian is then given by HB,θ,φ = B(cos θ cos φJx + cos θ sin φJy + sin θJz) , (51) with n(3) θ = (cos θ cos φ, cos θ sin φ, sin θ) . (52) For more details on the derivation see [14]. Since this model is an extension of the previous one, the terms QBB, Qθθ, and QBθ remain formally the same as in Eqs. (46)-(48), but now evaluated using the vectors in Eq. (52) and n(3) 1 = sin Bt 2 sin φ + cos Bt 2 sin θ cos φ, -sin Bt 2 cos φ + cos Bt 2 sin θ sin φ, -cos Bt 2 cos θ . (53) In addition, we now have the third parameter φ, leading to the extra QFIM components Qφφ = 16 sin2 Bt 2 cos2 θ[⟨J2 n(3) 2 ⟩ 0 -⟨Jn(3) 2 ⟩2 0] (54) QBφ = -4t sin Bt 2 cos θ[⟨{Jn(3) 2 , Jn(3) θ }⟩ 0 -2 ⟨Jn(3) 2 ⟩ 0 ⟨Jn(3) θ ⟩ 0] (55) Qθφ = 8 sin2 Bt 2 cos θ[⟨{Jn(3) 1 , Jn(3) 2 }⟩ 0 -2 ⟨Jn(3) 1 ⟩ 0 ⟨Jn(3) 2 ⟩ 0], (56) with n(3) 2 = cos Bt 2 sin φ -sin Bt 2 sin θ cos φ, -cos Bt 2 cos φ -sin Bt 2 sin θ sin φ, sin Bt 2 cos θ . (57) Given the complexity of the model, a numerical approach has been considered. Our goal is to investigate when Csep is smaller than the Holevo bound and how this behavior depends on the characteristics of the probe state. To this end, we sample a large number of random states given a fixed set of parameters (B, θ, φ). For each probe state, we compute the minimum Csep over all possible orderings and the Holevo bound (via the SDP approach in [37, 42]). The results are shown in Fig. 1. In the left panel, we show the results for a sample of 105 states and a fixed set of parameters (B, θ, φ). In the right one, we consider 10 different sets of values for (B, θ, φ) and sample 104 states for each set. Notice that the choice of different parameter sets (B, θ, φ) does not 18 alter the shape of the distribution . The red line represents the points where CH = Csep, which separates the two regions where one of the two is larger than the other. Although it is evident that for small values of CH the Holevo bound is generally tighter (CH < Csep), a significant number of states exhibit the opposite behavior, where Csep < CH. This observation indicates that, within the three-parameter qutrit estimation model, the Holevo bound is not always the ultimate lower bound. In certain specific cases, Csep can serve as a tighter or alternative bound for parameter estimation. FIG. 1. Comparison of stepwise and joint estimation for 3-parameter SU(2) qutrit models. In the left panel, we show the results for a sample of 105 states and a fixed set of parameters (B, θ, φ). In the right one, we consider 10 different sets of values for (B, θ, φ) and sample 104 states for each set. In both panels, the red line represents the points where CH = Csep. We conclude that also in this case, when we have enough quantum resources (i.e., the optimal, or nearly optimal probe) JE is convenient. If the probe state is sub-optimal SE progressively outperforms JE. V. CONCLUSIONS In this work, we have analyzed stepwise estimation strategies for multiparameter quantum metrology, deriving a tight and analytically tractable precision bound, the stepwise separable bound Csep, that depends explicitly on the order of parameter estimation. We provided closed-form 19 expressions for this bound and showed how it can be efficiently computed using the Cholesky decomposition of the quantum Fisher information matrix (QFIM). We also established rigorous bounds on Csep, that hold independently of the estimation order. Through the analysis of SU(2) unitary models with qubit and qutrit probes, we demonstrated that stepwise estimation can outperform joint estimation strategies, particularly in scenarios characterized by large sloppiness, non-optimal probe states, or strong parameter incompatibility. In two-parameter qubit models, Csep was shown to be tighter than the Holevo bound. For qutrit systems, however, the increased dimensionality allows for better suppression of incompatibility, making joint estimation more favorable under optimal conditions. In three-parameter qutrit models, we identified regimes where Csep remains competitive with or even superior to the Holevo bound, especially for suboptimal encodings. These results establish stepwise estimation as a viable and experimentally friendly alternative to collective measurements, particularly in resource-constrained or imperfect settings where ideal probe states or joint measurements are not feasible. Our results pave the way for extension to more general quantum systems, including continuousvariable and open quantum systems, and explore adaptive stepwise protocols that dynamically optimize the estimation order and resource allocation. Additionally, the connection between sloppiness, incompatibility, and estimation efficiency warrants further experimental investigation, potentially leading to new design principles for quantum sensors [43]. Appendix A: Dynamical Programming Algorithm for Best Ordering In Theorem. III.1 we provided a way to calculate the Csep bound for a given ordering. As mentioned the result depends on the ordering chosen. In this Appendix we want to discuss the computational complexity of finding the optimal ordering that minimizes the bound, and to provide a more efficient way to calculate it through a dynamical programming (DP) approach. Let's start by analyzing the brute force approach. We have to compute all possible orderings of n numbers, therefore O(n!). For each ordering, we have to compute the trailing determinants. The complexity of computing the determinant of a generic k × k matrix using methods like LU decomposition is O(k3). Considering all the trailing submatrices we therefore have the cost Pn k=1 O(k3) = O(n4). Therefore, the brute force time complexity is O(n!n4). We can do better than the brute force approach. Noticing a similarity with the traveling salesman problem, we can write a variation on the Bellman-Held-Karp algorithm [44, 45]. The 20 core principle is that given a set of indices S = {i1, . . . , ik}, the optimal sequence must contain an optimal subsequence of k -1 elements in the sequence. This property is known as optimal substructure. This allows us to create solutions iteratively, starting from an empty set. Wanting to find the minimum Csep, we refer to III.3 and choose the cost function as C([1, . . . , n]) ≡Tr[L-1] without the squared part in order to simplify the algebra. The algorithm for an n-paramter model works as follows: 1. Base case: the cost for an empty set of parameters is 0, C(∅) = 0. 2. Recurrence relation: exploiting the optimal substructure, for any non-empty sequence S, the optimal cost C(S) is found by considering each element j ∈S as the potential last element in the optimal sequence for S. If j is the last element, the preceding elements must form an optimal sequence for the subset S \ {j}. The total cost is thus the optimal cost of the preceding subset plus the cost contribution of adding j. We therefore have C(S) = min j∈S C(S \ {j}) + cost(j, S \ {j}). (A1) The cost function cost(j, S \ {j}), is that of adding the contribution of the parameter j, at the end of the sequence I ≡S \ {j}. Remembering from Th. III.3 that using L gives a revers ordering, we underline how this means we're measuring the j-th parameter before the reversed I sequence. To evaluate the cost addition we consider the new QFIM QS =  QI,I qI,j qj,I qj,j  , (A2) with its new Cholesky decomposition matrix LS =  LI,I 0 lj,I lj,j  . (A3) From this we see QS = LSLT S =  LI,ILT I,I LI,IlT j,I lj,ILT I,I l2 j,j + lj,IlT j,I  . (A4) Equating this to Eq. A2 we get lj,I = qj,I LT I,I -1 ⇒l2 j,j = qj,j -qj,IQ-1 I,IqT j,I, (A5) 21 which is the Schur complement of the block QI,I of the new QFIM QS. Using Th. III.3 on the new sequence we know that the cost function increases by 1/lj,j, therefore we conclude cost(j, S \ {j}) = 1 q qj,j -qj,IQ-1 I,IqT j,I . (A6) The algorithm is implemented by iteratively computing C(S) for all subsets S of size from 0 to n. We use memoization, storing all optimal cost values of subsequences I, which will be used for next steps, avoiding redundant calculations. The optimal sequence can then be reconstructed from the intermediate choices (and then reversing them), and the Csep bound is obtained squaring the optimal value since the algorithm minimizes the quantity C = Pn i=1 1/Lii. Let's analyze the complexity of this algorithm. We still have to go through all possible subsets, therefore there's a complexity of O(2n). Despite being exponential, this is the big complexity speedup. We in fact no longer need to look at all the permutations, but we can just consider the subsets. Then, for each iteration we have to invert k matrices (k -1) × (k -1), which has complexity O(k3). Just as before, doing this for each steps results in a complexity of O(n4). In conclusion we can manage to go from the brute force complexity O(n!n4) to O(2nn4). Regarding space complexity, for the memoization we need two tables, one for the optimal costs, and one for the paths. Each table stores an entry for all possible subsets with keys growing as O(n), which results in a space complexity of O(n2n). We briefly conclude this section, reminding that for large number of parameters this algorithm having an exponential trend becomes highly costly. Therefore one could consider using either heuristics or other techniques such as simulate annealing or genetic algorithms. Appendix B: Two-parameter SU(2) model with pure states We consider a family of Hamiltonians of the form HB,θ = B cos θ Jx + sin θ Jz , (B1) where Ji (i = x, y, z) are the generators of SU(2) in the spin-j representation, and nθ = (cos θ, 0, sin θ). A general qubit state ρ0 can be written in Bloch vector form as ρ0 = 1 2 I + r0 · J, (B2) where r0 = (Tr[σxρ0], Tr[σyρ0], Tr[σzρ0]) 22 with |r0| = 1 and J = (Jx, Jy, Jz) = 1 2(σx, σy, σz). σi are the Pauli matrices. A general qutrit state can be written through its decomposition on an operator base bi ρ0 = 9 X i=1 ribi (B3) where r = {ri} = (Tr[ρ0b1], . . . , Tr[ρ0b9]), which should not be confused with the Bloch vector associated to the qutrit. We note that r9 = 1/ √ 3, and pure states are such that Tr[ρ2] = |r|2 = 1. The elements of the base bi are just the normalized Gell-Mann matrices b = {bi} = {Γ1/ √ 2, ..., Γ8/ √ 2, I/ √ 3} are the Gell-Mann matrices with Tr[bibj] = δij. For a probe state ρ0 evolving under the above Hamiltonian for a time t, the elements of the QFIM are given by QBB = 4t2 ⟨J2 nθ⟩0 -⟨Jnθ⟩2 0 , (B4) Qθθ = 16 sin2 Bt 2 ⟨J2 n1⟩0 -⟨Jn1⟩2 0 , (B5) QBθ = -4t sinBt 2 ⟨{Jn1, Jnθ}⟩0 -2 ⟨Jn1⟩0 ⟨Jnθ⟩0 , (B6) where n1 = cos Bt 2 sin θ, -sin Bt 2 , -cos Bt 2 cos θ is the derivative direction with respect to θ. For the Uhlmann matrix element relevant to compatibility conditions, we have DθB = 4t sinBt 2 ⟨Jn2⟩0 , (B7) where n2 = sin Bt 2 sin θ, cos Bt 2 , -sin Bt 2 cos θ . These formulas apply to both qubit (j = 1/2) and qutrit (j = 1) systems; the difference lies in the explicit form of the spin operators Ji. For the qubit case, they are expressed in terms of Pauli matrices as Jx = 1 2σx, Jy = 1 2σy, Jz = 1 2σz, 23 where σi are Pauli matrices. For the qutrit case, the generators take the form Jx = 1 √ 2      0 1 0 1 0 1 0 1 0     , Jy = 1 √ 2      0 -i 0 i 0 -i 0 i 0     , Jz =      1 0 0 0 0 0 0 0 -1     . [1] V. Giovannetti, S. Lloyd, L. Maccone, Quantum-enhanced measurements: beating the standard quantum limit, Science 306 (5700) (2004) 1330-1336. [2] M. G. A. Paris, Quantum estimation for quantum technology, International Journal of Quantum Information 7 (2009) 125-137. [3] V. Montenegro, C. Mukhopadhyay, R. Yousefjani, S. Sarkar, U. Mishra, M. G. A. Paris, A. Bayat, Review: Quantum metrology and sensing with many-body systems, Physics Reports 1134 (2025) 1-62. [4] M. Szczykulska, T. Baumgratz, A. Datta, Multi-parameter quantum metrology, Advances in Physics: X 1 (4) (2016) 621-639. [5] J. Liu, H. Yuan, X.-M. Lu, X. Wang, Quantum fisher information matrix and multiparameter estimation, Journal of Physics A: Mathematical and Theoretical 53 (2) (2020) 023001. [6] F. Albarelli, M. Barbieri, M. G. Genoni, I. Gianani, A perspective on multiparameter quantum metrology: From theoretical tools to applications in quantum imaging, Physics Letters A 384 (12) (2020) 126311. [7] G. Di Fresco, B. Spagnolo, D. Valenti, A. Carollo, Multiparameter quantum critical metrology, SciPost Physics 13 (4) (2022) 077. [8] A. Chrostowski, R. Demkowicz-Dobrza ́nski, M. Jarzyna, K. Banaszek, On super-resolution imaging as a multiparameter estimation problem, International Journal of Quantum Information 15 (08) (2017) 1740005. [9] K. K. Lee, C. N. Gagatsos, S. Guha, A. Ashok, Quantum-inspired multi-parameter adaptive bayesian estimation for sensing and imaging, IEEE Journal of Selected Topics in Signal Processing 17 (2) (2022) 491-501. [10] A. Gupta, S. Datta, S. Kastha, S. Borhanian, K. Arun, B. Sathyaprakash, Multiparameter tests of general relativity using multiband gravitational-wave observations, Physical review letters 125 (20) (2020) 201101. [11] S. Ghosh, L.-C. Kwek, D. R. Terno, S. Vinjanampathy, Weak-value magnetometry for precision tests of fundamental physics, arXiv preprint (2019). [12] T. Heinosaari, T. Miyadera, M. Ziman, An invitation to quantum incompatibility, Journal of Physics A: Mathematical and Theoretical 49 (12) (2016) 123001. [13] S. Ragy, M. Jarzyna, R. Demkowicz-Dobrza ́nski, Compatibility in multiparameter quantum metrology, Phys. Rev. A 94 (2016) 052108. 24 [14] A. Candeloro, Z. Pazhotan, M. G. A. Paris, Dimension matters: precision and incompatibility in multi-parameter quantum estimation models, Quantum Science and Technology 9 (4) (2024) 045045. [15] A. S. Holevo, Probabilistic and Statistical Aspects of Quantum Theory, Publications of the Scuola Normale Superiore. Monographs, Springer, Dordrecht, 2011. [16] H. Nagaoka, A new approach to cram ́er-rao bounds for quantum state estimation, in: Asymptotic Theory of Quantum Statistical Inference: Selected Papers, WORLD SCIENTIFIC, 2005, pp. 100-112. [17] J. Suzuki, Explicit formula for the holevo bound for two-parameter qubit estimation problem, Journal of Mathematical Physics 57 (05 2015). [18] A. Carollo, B. Spagnolo, A. A. Dubkov, D. Valenti, On quantumness in multi-parameter quantum estimation, Journal of Statistical Mechanics: Theory and Experiment 9 (9) (2019) 094010. [19] S. Razavian, M. G. A. Paris, M. G. Genoni, On the quantumness of multiparameter estimation problems for qubit systems, Entropy 22 (11) (2020). [20] J. He, G. Fazio, M. G. A. Paris, Weight-dependent and weight-independent measures of quantum incompatibility in multiparameter estimation, in preparation (2025). [21] L. J. Fiderer, T. Tufarelli, S. Piano, G. Adesso, General expressions for the quantum fisher information matrix with applications to discrete quantum imaging, PRX Quantum 2 (2021) 020308. PRXQuantum.2.020308. URL https://link.aps.org/doi/10.1103/PRXQuantum.2.020308 [22] B. B. Machta, R. Chachra, M. K. Transtrum, J. P. Sethna, Parameter space compression underlies emergent theories and predictive models, Science 342 (6158) (2013) 604-607. [23] Y. Yang, F. Belliardo, V. Giovannetti, F. Li, Untwining multiple parameters at the exclusive zerocoincidence points with quantum control, New Journal of Physics 24 (12) (2023) 123041. 1088/1367-2630/acae00. [24] J. J. Waterfall, F. P. Casey, R. N. Gutenkunst, K. S. Brown, C. R. Myers, P. W. Brouwer, V. Elser, J. P. Sethna, Sloppy-model universality class and the vandermonde matrix, Phys. Rev. Lett. 97 (2006) 150601. URL https://link.aps.org/doi/10.1103/PhysRevLett.97.150601 [25] J. He, M. G. A. Paris, Scrambling for precision: optimizing multiparameter qubit estimation in the face of sloppiness and incompatibility, Journal of Physics A: Mathematical and Theoretical 58 (2025). [26] G. Bizzarri, M. Parisi, M. Manrique, I. Gianani, A. Chiuri, M. Rosati, V. Giovannetti, M. G. A. Paris, M. Barbieri, Controlling sloppiness in two-phase estimation with a tunable weak measurement, Optica Quantum 3 (5) (2025) 432-438. [27] M. Frigerio, M. G. A. Paris, Overcoming sloppiness for enhanced metrology in continuous-variable quantum statistical models, Int. J. Quantum Inf. in press (2024). [28] C. Mukhopadhyay, A. Bayat, V. Montenegro, M. G. A. Paris, Beating joint quantum estimation limits 25 with stepwise multiparameter metrology, arXiv preprint (2025). [29] Y. Yang, V. Montenegro, A. Bayat, Overcoming quantum metrology singularity through sequential measurements, Phys. Rev. Lett. 135 (2025) 010401. [30] P. Sharma, S. Olivares, D. K. Mishra, M. G. A. Paris, Mitigating sloppiness in joint estimation of successive squeezing parameters, arXiv preprint (2025). [31] B. B. Machta, R. Chachra, M. K. Transtrum, J. P. Sethna, Parameter space compression underlies emergent theories and predictive models, Science 342 (6158) (2013) 604-607. 1238723. [32] A. Candeloro, Z. Pazhotan, M. G. A. Paris, Dimension matters: precision and incompatibility in multi-parameter quantum estimation models, Quantum Science and Technology 9 (4) (2024) 045045. [33] R. Pal, P. Ghosh, A. Ghoshal, U. Sen, Role of phase of optimal probe in noncommutativity vs coherence in quantum multiparameter estimation, arXiv preprint (2025). [34] H. Cramer, Mathematical methods of statistics, Princeton mathematical series, Princeton Univ. Press, Princeton, NJ, 1954. [35] S. M. Kay, Statistical signal processing: estimation theory, Prentice Hall 1 (1993) Chapter-3. [36] C. W. Helstrom, Minimum mean-squared error of estimates in quantum statistics, Physics Letters A 25 (2) (1967) 101-102. [37] F. Albarelli, J. F. Friel, A. Datta, Evaluating the holevo cram ́er-rao bound for multiparameter quantum metrology, Physical review letters 123 (20) (2019) 200503. [38] M. Hayashi, K. Matsumoto, Asymptotic performance of optimal state estimation in qubit system, Journal of Mathematical Physics 49 (2008) 102101-102101. [39] J. Kahn, M. Guta, Local asymptotic normality for finite dimensional quantum systems, Communications in Mathematical Physics 289 (2009) 597-652. [40] K. Yamagata, A. Fujiwara, R. D. Gill, Quantum local asymptotic normality based on a new quantum likelihood ratio, The Annals of Statistics 41 (4) (2013) 2197 - 2217. [41] A. Carollo, D. Valenti, B. Spagnolo, Geometry of quantum phase transitions, Physics Reports 838 (2020) 1-72. [42] S. Chang, M. G. Genoni, F. Albarelli, Multiparameter quantum estimation with gaussian states: efficiently evaluating holevo, rld and sld cram\'er-rao bounds, arXiv preprint (2025). [43] G. Bizzarri, M. Parisi, M. Manrique, I. Gianani, A. Chiuri, M. Rosati, V. Giovannetti, M. G. A. Paris, M. Barbieri, Controlling sloppiness in two-phase estimation with a tunable weak measurement, Optica Quantum 3 (5) (2025) 432-438. [44] R. Bellman, Dynamic programming treatment of the travelling salesman problem, J. ACM 9 (1962) 61-63. [45] M. Held, R. M. Karp, A dynamic programming approach to sequencing problems, in: ACM '61, Association for Computing Machinery, New York, NY, USA, 1961, p. 71.201-71.204. 26 800029.808532.
2510.14974
Preprint PI-FLOW: POLICY-BASED FEW-STEP GENERATION VIA IMITATION DISTILLATION Hansheng Chen1 Kai Zhang2 Hao Tan2 Leonidas Guibas1 Gordon Wetzstein1 Sai Bi2 1Stanford University 2Adobe Research https://github.com/Lakonik/piFlow Figure 1: High quality 4-NFE text-to-image generations by π-Flow, distilled from FLUX.1-12B (top-right three images) and Qwen-Image-20B (all remaining images). π-Flow preserves the teacher’s coherent structures, fine details (e.g., skin and hair), and accurate text rendering, while avoiding diversity collapse (see Fig. 4 for sample diversity). ABSTRACT Few-step diffusion or flow-based generative models typically distill a velocity- predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality–diversity trade-off. To address this, we propose policy-based flow models (π-Flow). π-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and ac- curate ODE integration on these substeps without extra network evaluations. To match the policy’s ODE trajectory to the teacher’s, we introduce a novel imitation distillation approach, which matches the policy’s velocity to the teacher’s along the policy’s trajectory using a standard ℓ2 flow matching loss. By simply mimick- ing the teacher’s behavior, π-Flow enables stable and scalable training and avoids the quality–diversity trade-off. On ImageNet 2562, it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, π-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality. 1 arXiv:2510.14974v1 [cs.LG] 16 Oct 2025 Preprint 1 INTRODUCTION Diffusion and flow matching models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song & Ermon, 2019; Lipman et al., 2023; Albergo & Vanden-Eijnden, 2023) have become the dominant method for visual generation, delivering compelling image quality and diversity. However, these models rely on a costly denoising process for inference, which integrates a probability flow ODE (Song et al., 2021) over multiple timesteps, each step requiring a neural network evaluation. Commonly, the inference cost of diffusion models is quantified by the number of function (network) evaluations (NFEs). To reduce the inference cost, diffusion distillation methods compress a pre-trained multi-step model (the teacher) into a student that requires only one or a few network evaluation steps. Existing distilla- tion approaches avoid ODE integration by taking one or a few shortcut steps that map noise to data, where each shortcut path is predicted by the student network, referred to as a shortcut-predicting model. Learning these shortcuts is a significant challenge because they cannot be directly inferred from the teacher model. This necessitates the use of complex training methods, such as progres- sive distillation (Salimans & Ho, 2022; Liu et al., 2023; 2024), consistency distillation (Song et al., 2023), and distribution matching (Sauer et al., 2024a; Yin et al., 2024b;a; Salimans et al., 2024). In turn, the sophisticated training often lead to degraded image quality from error accumulation or compromised diversity due to mode collapse. To sidestep the difficulties in shortcut-predicting distillation, we propose a novel policy-based flow model (π-Flow or pi-Flow) paradigm: given noisy data at one timestep, the student network predicts a network-free policy, which maps new noisy states to their corresponding flow velocities with negligible overhead, allowing fast and accurate ODE integration using multiple substeps of policy velocities instead of network evaluations. To train the student network, we introduce policy-based imitation distillation (π-ID), a DAgger- style (Ross et al., 2011) on-policy imitation learning (IL) method. π-ID trains the policy on its own trajectory: at visited states, we query the teacher velocity and match the policy’s output to it, using the teacher’s corrective signal to teach the policy to recover from its own mistakes and reduce error accumulation. Specifically, the matching employs a standard ℓ2 loss aligned with the teacher’s flow matching objective, thus naturally preserving its quality and diversity. We validate our paradigm with two types of policies: a simple dynamic-ˆx(t) 0 (DX) policy and an advanced GMFlow policy based on Chen et al. (2025). Experiments show that GMFlow policy outperforms DX policy and delivers the best ImageNet 2562 FID for 1-NFE generation using the standard DiT architecture (Peebles & Xie, 2023). To demonstrate its scalability, we distill FLUX.1- 12B (Black Forest Labs, 2024b) and Qwen-Image-20B (Wu et al., 2025) text-to-image models into 4-NFE π-Flow students, which achieve state-of-the-art diversity, while maintaining teacher-level quality. We summarize the contributions of this work as follows: • We propose π-Flow, a new paradigm that decouples ODE integration substeps from network evaluation steps, enabling both fast generation and straightforward distillation. • We introduce π-ID, a novel on-policy IL method for few-step π-Flow distillation, which reduces the training objective to a simple ℓ2 flow matching loss. • We demonstrate strong performance and scalability of π-Flow, particularly, its superior diversity and teacher alignment compared to other state-of-the-art 4-NFE text-to-image models. 2 PRELIMINARIES In this section, we briefly introduce flow matching models (Lipman et al., 2023; Liu et al., 2023) and the notations used in this paper. Let p(x0) denote the (latent) data probability density, where x0 ∈RD is a data point. A stan- dard flow model defines an interpolation between a data sample and a random Gaussian noise ϵ ∼N(0, I), yielding the diffused noisy data xt = αtx0 + σtϵ, where t ∈(0, 1] denotes the diffusion time, and αt = 1 −t, σt = t are the linear flow noise schedule. The optimal transport map across all marginal densities p(xt) = R RD N(xt; αtx0, σ2 t I)p(x0) dx0 can be described by 2 Preprint 𝒙𝒙𝑡𝑡dst 𝒙𝒙𝑡𝑡src 𝒙𝒙𝑡𝑡dst 𝒙𝒙𝑡𝑡src (c) Policy-based model 𝒙𝒙𝑡𝑡dst Policy 𝜋𝜋𝒙𝒙𝑡𝑡, 𝑡𝑡 (a) Standard flow model (b) Shortcut-predicting model 𝜋𝜋 𝜋𝜋 𝜋𝜋 𝜋𝜋 𝒙𝒙𝑡𝑡src = • Few network steps • Few integration steps = • Many network steps • Many integration steps ≪ • Few network steps • Many integration substeps Figure 2: Comparison between (a) standard flow model (teacher), (b) shortcut-predicting model, and (c) our policy-based model. The shortcut-predicting model skips all intermediate states, whereas the our-based model retains all intermediate substeps with minimal overhead. the following probability flow ODE (Song et al., 2021; Liu, 2022): dxt dt = ˙xt = xt −Ex0∼p(x0|xt)[x0] t = xt − R RD x0p(x0|xt) dx0 t , (1) with the denoising posterior p(x0|xt) := N(xt;αtx0,σ2 t I)p(x0) p(xt) . At test time, the model can generate samples by first initializing the noise x1 ←ϵ and then solving the ODE to obtain limt→0 xt. In practice, flow matching models approximate the ODE velocity dxt dt using a neural network Gθ(xt, t) with learnable parameters θ, trained using the ℓ2 flow matching loss: Lθ = Et,x0,xt 1 2∥u −Gθ(xt, t)∥2  , with sample velocity u := xt −x0 t . (2) Since each velocity query requires evaluating the network (Fig. 2 (a)), flow matching models couple sampling efficiency with solver precision. Despite the progress in advanced solvers (Karras et al., 2022; Zhang & Chen, 2023; Lu et al., 2022; 2023; Zhao et al., 2023), high-quality sampling typically requires over 10 steps due to inherent ODE truncation error, making it computationally expensive. 3 π-FLOW: POLICY-BASED FEW-STEP GENERATION In π-Flow, we define the policy as a network-free function π: RD × R →RD that maps a state (xt, t) to a flow velocity. A policy can be network-free if it only needs to describe a single ODE trajectory, which is fully determined by its initial state (xtsrc, tsrc) with tsrc ≥t. In this case, the policy for each trajectory must be dynamically predicted by a neural network conditioned on that initial state (xtsrc, tsrc). We therefore adapt a flow model to output not a single velocity, but an entire dynamic policy that governs the full trajectory. Formally, define the policy function space F :=  π: RD × R →RD . Then, our goal is to distill a policy generator network Gϕ : RD × R →F with learnable parameters ϕ, such that π(xt, t) = Gϕ(xtsrc, tsrc)(xt, t). As shown in Fig. 2 (c), π-Flow performs ODE-based denoising from tsrc to tdst via two stages: • A policy generation step, which feeds the initial state (xtsrc, tsrc) to the student network Gϕ to produce the policy π, i.e., π ←Gϕ(xtsrc, tsrc). • Multiple policy integration substeps, which integrates the ODE by querying the policy velocity over multiple substeps, obtaining a less noisy state by xtdst ←xtsrc + R tdst tsrc π(xt, t) dt. Unlike previous few-step distillation methods, π-Flow decouples network evaluation steps from ODE integration substeps. This allows it to combine the key advantages of two paradigms: it per- forms only a few network evaluations for efficient generation, similar to a shortcut-predicting model, while also executing dense integration substeps, just like a standard flow matching teacher. Thanks to its teacher-like ODE integration process, a π-Flow student offers unprecedented advantage in training, as we can now follow well-established imitation learning (IL) approaches to directly match the policy velocity π(xt, t) to the teacher velocity Gθ(xt, t), as discussed later in § 4. To identify the appropriate student policy families for fast image generation, we need to satisfy the following requirements: • Efficiency. The policy should provide closed-form velocities with minimal overhead, so that rolling out dense (e.g., 100+) substeps incurs negligible cost compared to a network evaluation. 3 Preprint • Compatibility. The policy should have a compact set of parameters that can be easily predicted by the student Gϕ with standard backbones (e.g., DiT (Peebles & Xie, 2023)). • Expressiveness. The policy should be able to approximate a complicated ODE trajectory starting from a certain initial state xtsrc. • Robustness. The policy should be able to handle trajectory variations that arise from perturba- tions to the initial state xtsrc. For instance, a suboptimal student network will produce an erroneous mapping from xtsrc to π. This introduces the randomness that the policy needs to accommodate throughout the rollout. Consequently, the policy function should adapt its velocity output to vari- ations in its state input xt, which is a challenging requirement for network-free functions. 3.1 DYNAMIC-ˆx(t) 0 POLICY We introduce a simple baseline policy called dynamic-ˆx(t) 0 policy (DX policy). DX policy defines π(xt, t) := xt−ˆx(t) 0 t , where ˆx(t) 0 approximates the posterior moment Ex0∼p(x0|xt)[x0] in Eq. (1). Along a fixed trajectory starting from an initial state (xtsrc, tsrc), the posterior moment is only depen- dent on t. Therefore, we first predict a grid of ˆx(ti) 0 at N evenly spaced times t1, ..., tN ∈[tdst, tsrc] by a single evaluation of the student network Gϕ(xtsrc, tsrc). This is achieved by expanding the out- put channels of the student network and performing u-to-x0 reparameterization. Then, for arbitrary t ∈[tdst, tsrc], we obtain the approximated moment ˆx(t) 0 by a linear interpolation over the grid. Apparently, DX policy is fast, compatible, and expressive enough so that any N-step teacher tra- jectory can be matched with N grid points. However, its robustness is limited because ˆx(t) 0 is not adaptive to perturbations in xt. 3.2 GMFLOW POLICY For stronger robustness, we incorporate an advanced GMFlow policy based on the closed- form GM velocity field in Chen et al. (2025). GMFlow policy expands the network out- put channels to predict a factorized Gaussian mixture (GM) velocity distribution q(u|xtsrc) = QL i=1 PK k=1 AikN ui; µik, s2I  , where Aik ∈R+, µik ∈RC, s ∈R+ are GM parameters pre- dicted by the network, L × C factorizes the data dimension D into sequence length L and channel size C, and K is a hyperparameter specifying the number of mixture components. This represen- tation enables a closed-form velocity expression at any 0 < t < tsrc (see § F for details). Its speed and compatibility has already been discussed in Chen et al. (2025), thus we focus on analyzing its expresiveness and robustness. Expressiveness. With the L × C factorization, each individual C-dimensional GM needs to be expressive enough to approximate a C-dimensional chunk of the teacher trajectory. In § E, we rigorously prove the following theorem, demonstrating GMFlow’s expressiveness. Theorem 1 (A GMFlow policy with K = N ·C can accurately approximate any N-step trajectory). Given pairwise distinct times t1, . . . , tN ∈(0, 1] and vectors xtn, ˙xtn ∈RC for n = 1, . . . , N, there exists a GM parameterization of p(x0) with N · C components, such that ˙xtn can be approxi- mated arbitrarily well using Eq (1) at t = tn for every n = 1, . . . , N. In practice, we can use K ≪N · C (e.g., K = 8) since the teacher trajectory is mostly smooth. More analysis of GMFlow hyperparameters are presented in § C.1. Robustness. GMFlow is highly robust against trajectory perturbation due to its probabilistic origin. Unlike DX policy, GMFlow models a fully dynamic denoising posterior (Eq. (23)) dependent on both xt and t. Leveraging its robustness, the policy can be flexibly altered via GM dropout in training (§ 4) and GM temperature in inference (§ B.1), both improving generalization performance. 4 π-ID: POLICY-BASED IMITATION DISTILLATION With the policy rollout sharing the same format as the teacher’s ODE integration, it is straightfor- ward to adopt imitation learning to learn the policy by directly matching the policy’s velocity to the 4 Preprint Algorithm 1: On-policy π-ID. Input: NFE, teacher Gθ, student Gϕ, condition c 1 Sample tsrc from  1 NFE , 2 NFE , · · · , 1 2 Initialize xtsrc (data-free or data-dependent) 3 π ←Gϕ(xtsrc, tsrc, c) 4 πD ←stopgrad(π) 5 Lϕ ←0 6 for finite samples t ∼U tsrc − 1 NFE , tsrc  do 7 xt ←xtsrc + R t tsrc πD(xt, t) dt 8 Lϕ ←Lϕ + 1 2∥Gθ(xt, t, c) −π(xt, t)∥2 9 ϕ ←Adam(ϕ, ∇ϕLϕ) // optimizer step Policy 𝜋𝜋 𝒙𝒙𝑡𝑡src Detached policy 𝜋𝜋D stopgrad (+ dropout) 𝒙𝒙𝑡𝑡dst Teacher velocity 𝐺𝐺𝜽𝜽(𝒙𝒙𝑡𝑡, 𝑡𝑡) Policy velocity 𝜋𝜋(𝒙𝒙𝑡𝑡, 𝑡𝑡) Detached policy 𝜋𝜋D rollout 𝐺𝐺𝝓𝝓 Figure 3: On-policy flow imitation dis- tillation. Intermediate states are sampled along the detached policy rollout, where the loss matches the policy to the teacher. teacher’s velocity. In this section, we introduce a simple policy-based imitation distillation (π-ID) algorithm based on DAgger-style (Ross et al., 2011) on-policy imitation. On-policy imitation learning is robust to error accumulation since it trains the policy on its own trajectory, allowing the teacher’s corrective signal to steer a deviating trajectory back on track. As shown in Fig. 3 and Algorithm 1, for a time interval from tsrc to tdst (i.e., a 1-NFE segment), we first feed the initial state (xtsrc, tsrc) to the student network Gϕ to obtain the policy π. We then sample an intermediate time t ∈(tdst, tsrc] and roll out a detached policy πD from tsrc to t using high- accuracy ODE integration (with a small step size of 1/128), yielding an intermediate state xt on the policy trajectory. This state is fed to both the learner policy π and the frozen teacher Gθ, which produce their respective velocities. Finally, we compute a standard ℓ2 flow matching loss between the two velocities, and backpropagate its gradients through the policy π to the student network Gϕ. Because the student forward/backward pass dominates compute while policy and teacher queries are relatively cheap, we may repeat the rollout-and-matching step multiple times for additional teacher supervisions. In practice, we sample two intermediate states per student forward pass. Data-dependent and data-free π-ID. The initial state xtsrc can be obtained via forward diffusion from real data x0 (data-dependent Algorithm 2), or via π-Flow’s reverse denoising from random noise x1 (data-free Algorithm 3). Both methods have roughly the same computational cost, and comparable performance, as demonstrated in the experiments (§ 5). 5 EXPERIMENTS To demonstrate the versatility of π-Flow, we evaluate it with three distinct image generation mod- els of different scales and architectures: DiT(SiT)-XL/2 (675M) (Peebles & Xie, 2023; Ma et al., 2024; Vaswani et al., 2017) for ImageNet 2562 (Deng et al., 2009) class-conditioned generation, FLUX.1-12B (Black Forest Labs, 2024b) and Qwen-Image-20B (Wu et al., 2025) for text-to-image generation. 5.1 IMPLEMENTATION DETAILS In this subsection, we discuss key implementation details essential to model performance. More training details and hyperparameter choices are presented in § C. GM dropout. Dropout is a widely adopted technique in supervised/imitation learning and rein- forcement learning to improve generalization (Srivastava et al., 2014; Cobbe et al., 2019). For the GMFlow policy, we introduce GM dropout in training to stochastically perturb and diversify π-ID rollouts to make the policy more robust to potential trajectory variations. Given the GM mixture weights Aik of the detached policy πD, we sample a binary mask for each component k = 1, · · · , K and multiply it into Aik synchronously across all i = 1, · · · , L. The masked weights are then renormalized and used for the detached rollout. By exploring alternative GM modes, this simple technique improves the policy’s robustness, yielding better FID on ImageNet 2562 (§ 5.2). Handling guidance-distilled teachers. On-policy imitation learning assumes the teacher is robust to out-of-distribution (OOD) intermediate states and can steer trajectories back on track. This gen- erally holds for standard flow models with classifier-free guidance (CFG) (Ho & Salimans, 2021), 5 Preprint Table 1: 1-NFE generation results of π-Flow with DX and GMFlow policies on ImageNet. Tested after 40k training iter- ations. FM stands for standard flow matching. Policy Teacher FID↓ IS↑ Precision↑Recall↑ DX (N = 10) REPA 4.73 327.6 0.781 0.514 DX (N = 20) REPA 4.44 329.8 0.786 0.531 DX (N = 40) REPA 4.90 321.8 0.778 0.537 GM (K = 8) REPA 3.07 336.9 0.789 0.572 GM (K = 32) REPA 3.08 341.7 0.791 0.562 GM (K = 32) FM 3.65 282.0 0.797 0.533 GM (K = 32) w/o dropout FM 4.14 279.6 0.799 0.525 Table 2: Comparison with previ- ous few-step DiTs on ImageNet. Model NFE FID↓ iCT 2 20.30 iMM 1×2 7.77 MeanFlow 2 2.20 FACM (REPA) 2 1.52 π-Flow (GM-REPA) 2 1.97 iCT 1 34.24 Shortcut 1 10.60 MeanFlow 1 3.43 π-Flow (GM-FM) 1 3.34 π-Flow (GM-REPA) 1 2.85 which exhibit error-correction behavior (Chidambaram et al., 2024). However, FLUX.1 dev (Black Forest Labs, 2024b) is a guidance-distilled model without true CFG and is less robust to OOD in- puts. To mitigate OOD exposure, we adopt a scheduled trajectory mixing strategy, which rolls out the trajectory using a mixture of teacher and student with a linearly decaying teacher ratio (see § B.2 for details). 5.2 IMAGENET DIT Our study utilizes two pretrained teachers with the same DiT architecture: a standard flow match- ing (FM) DiT (the baseline in Chen et al. (2025)), and the REPA DiT (Yu et al., 2025). Interval CFG (Kynk¨a¨anniemi et al., 2024) is applied to both teachers to maximize their performance. Each π-Flow student is initialized with the teacher weights and then fully finetuned using the π-ID loss. Evaluation metrics. We adopt the standard evaluation protocol in ADM (Dhariwal & Nichol, 2021) with the following metrics: Fr´echet Inception Distance (FID) (Heusel et al., 2017), Inception Score (IS), and Precision–Recall (Kynk¨a¨anniemi et al., 2019). Comparison of DX and GMFlow policies. As shown in Table 1, both policies yield strong 1- NFE FIDs after 40k training iterations, with the GMFlow policy consistently outperforming the DX policy by a clear margin. Notably, the DX policy exhibits sensitivity to the hyperparameter N (number of grid points), whereas the GMFlow policy produces consistent results across different values of K (number of Gaussians). Comparison with prior few-step DiTs. In Table 2, we compare π-Flow (GM policy with K = 32) to prior few-step DiTs on ImageNet 2562: iCT (Song & Dhariwal, 2024), Shortcut models (Frans et al., 2025), iMM (Zhou et al., 2025), MeanFlow (Geng et al., 2025), and FACM (Peng et al., 2025). The concurrent work FACM improves MeanFlow with an auxiliary loss to achieve a leading 2-NFE FID, though its 1-NFE performance on the standard DiT architecture is not reported. Our π-Flow adopts a simpler imitation learning objective without the expensive JVP operation in MeanFlow, yet still outperforms the original MeanFlow DiT across both 1-NFE and 2-NFE generation. Ablation study on GM dropout. From the two bottom rows in Table 1 we conclude that our standard implementation with a 0.05 GM dropout rate yields better FID and Recall compared to the setting without dropout, confirming the effectiveness of our GM dropout technique. 5.3 FLUX.1-12B AND QWEN-IMAGE-20B For text-to-image generation, we distill the 12B FLUX.1 dev (Black Forest Labs, 2024b) and 20B Qwen-Image (Wu et al., 2025) models into π-Flow students. During student training, we freeze the base parameters inherited from the teacher and finetune only the expanded output layer along with 256-rank LoRA adapters (Hu et al., 2022) on the feed-forward layers. For data-dependent distillation, we prepare 2.3M one-megapixel (1MP) images captioned with Qwen2.5-VL (Bai et al., 2025). In the data-free setting, we use only the generated captions as conditioning inputs while keeping the same 1MP resolution when initializing the noise. Evaluation protocol. We conduct a comprehensive evaluation on 10242 high-resolution image generation from three distinct prompt sets: (a) 10K captions from the COCO 2014 validation set (Lin 6 Preprint Table 3: Quantitative comparisons on COCO-10k dataset and HPSv2 prompt set. Model Distill method NFE COCO-10k prompts HPSv2 prompts Data align. Prompt align. Pref. align. Teacher align. Prompt align. Pref. align. FID↓pFID↓CLIP↑VQA↑ HPSv2.1↑ FID↓pFID↓CLIP↑VQA↑ HPSv2.1↑ FLUX.1 dev - 50 27.8 34.9 0.268 0.900 0.309 - - 0.284 0.805 0.314 FLUX Turbo GAN 8 26.7 32.0 0.267 0.900 0.308 13.8 18.5 0.286 0.814 0.313 Hyper-FLUX CD+Re 8 29.8 33.3 0.268 0.894 0.309 15.6 22.2 0.285 0.807 0.315 π-Flow (GM-FLUX) π-ID 8 29.0 35.4 0.268 0.901 0.311 12.6 15.9 0.285 0.810 0.316 SenseFlow (FLUX) VSD+CD+GAN 4 34.1 44.2 0.266 0.879 0.308 23.3 28.2 0.283 0.806 0.318 π-Flow (GM-FLUX) π-ID 4 29.8 36.1 0.269 0.903 0.308 14.3 19.2 0.288 0.816 0.313 π-Flow (GM-FLUX) π-ID (data-free) 4 29.7 36.2 0.269 0.905 0.310 14.4 19.7 0.287 0.813 0.314 Qwen-Image - 50×2 34.1 45.6 0.282 0.936 0.312 - - 0.302 0.872 0.309 Qwen-Image Lightning VSD 4 37.5 51.6 0.280 0.935 0.322 15.6 19.7 0.299 0.867 0.328 π-Flow (GM-Qwen) π-ID 4 36.0 46.1 0.281 0.934 0.314 12.8 16.6 0.300 0.860 0.310 π-Flow (GM-Qwen) π-ID (data-free) 4 36.0 45.7 0.282 0.936 0.315 12.9 16.8 0.301 0.862 0.312 Table 4: Quantitative comparisons on OneIG-Bench (Chang et al., 2025). Model Distill Method NFE Alignment↑ Text↑ Diversity↑ Style↑ Reasoning↑ FLUX.1 dev - 50 0.790 0.556 0.238 0.370 0.257 FLUX Turbo GAN 8 0.791 0.334 0.234 0.370 0.239 Hyper-FLUX CD+Re 8 0.790 0.530 0.198 0.369 0.254 π-Flow (GM-FLUX) π-ID 8 0.792 0.517 0.234 0.369 0.256 SenseFlow (FLUX) VSD+CD+GAN 4 0.776 0.384 0.151 0.343 0.238 π-Flow (GM-FLUX) π-ID 4 0.799 0.437 0.229 0.360 0.251 π-Flow (GM-FLUX) π-ID (data-free) 4 0.799 0.460 0.224 0.363 0.249 Qwen-Image - 50×2 0.880 0.888 0.194 0.427 0.306 Qwen-Image Lightning VSD 4 0.885 0.923 0.116 0.417 0.311 π-Flow (GM-Qwen) π-ID 4 0.875 0.892 0.180 0.434 0.298 π-Flow (GM-Qwen) π-ID (data-free) 4 0.881 0.890 0.176 0.433 0.300 et al., 2014), (b) 3200 prompts from the HPSv2 benchmark (Wu et al., 2023), and (c) 1120 prompts from OneIG-Bench (Chang et al., 2025). For the COCO and HPSv2 sets, we report common metrics including FID (Heusel et al., 2017), patch FID (pFID) (Lin et al., 2024a), CLIP similarity (Radford et al., 2021), VQAScore (Lin et al., 2024b), and HPSv2.1 (Wu et al., 2023). On COCO prompts, FIDs are computed against real images, reflecting data alignment. On HPSv2, FIDs are computed against the 50-step teacher generations, reflecting teacher alignment. CLIP and VQAScore measure prompt alignment, while HPSv2 captures human preference alignment. For OneIG-Bench, we adopt its official evaluation protocol and metrics. All quantitative results are presented in Table 3 and 4. Competitor models. We compare π-Flow against other few-step student models distilled from the same teacher. For FLUX, we compare against: 4-NFE SenseFlow (Ge et al., 2025), primarily lever- aging variational score distillation (VSD) (Wang et al., 2023), also known as distribution matching distillation (DMD) (Yin et al., 2024b); 8-NFE Hyper-FLUX (Ren et al., 2024), trained with consis- tency distillation (CD) (Song et al., 2023) and reward models (Re) (Xu et al., 2023); 8-NFE FLUX Turbo, based on GAN-like adversarial distillation (Goodfellow et al., 2014; Sauer et al., 2024b). For Qwen-Image, we compare with the 4-NFE Qwen-Image Lighting based on VSD (ModelTC, 2025). Note that the 4-NFE FLUX.1 schnell is distilled from the closed-source FLUX.1 pro instead of the publicly available FLUX.1 dev (Black Forest Labs, 2024a), so we do not compare with it directly, but include further discussion in § D. Strong all-around performance. As shown in Table 3 and Table 4, π-Flow demonstrates strong all- around performance, outperforming other few-step students on roughly 70% of all metrics, without exhibiting obvious weaknesses in any specific area. Superior diversity and teacher alignment. π-Flow consistently achieves the highest diversity scores and the best teacher-referenced FIDs by clear margins, especially in the 4-NFE setting. These results strongly suggest that π-Flow effectively avoids both diversity collapse and style drift. As a result, most of its scores closely match those of the teacher, with some even slightly surpass- ing the teacher scores (e.g., prompt alignment and several Qwen-Image OneIG scores). Its strong 7 Preprint With a clear box hovering above showing "System Update: Now supports gardening mode!", generate pixel art of a cheerful robot watering pixelated flowers in a futuristic garden. Young woman in dynamic neon streetwear poses on bustling Tokyo street at night, surrounded by illuminated signs. Captured in a photograph that features lifelike individuals. FLUX.1 dev 50-NFE 𝝅𝝅-Flow (GM-FLUX) 4-NFE 𝝅𝝅-Flow (GM-Qwen) 4-NFE Qwen-Image 50x2-NFE SenseFlow (FLUX) 4-NFE Qwen-Image Lightning 4-NFE FLUX.1 dev 50-NFE 𝝅𝝅-Flow (GM-FLUX) 4-NFE 𝝅𝝅-Flow (GM-Qwen) 4-NFE Qwen-Image 50x2-NFE SenseFlow (FLUX) 4-NFE Qwen-Image Lightning 4-NFE FLUX.1 dev 50-NFE 𝝅𝝅-Flow (GM-FLUX) 4-NFE SenseFlow (FLUX) 4-NFE The image should capture a lifelike essence, showcasing a young woman gracefully positioned in a golf swing on a bright summer day. Her long hair effortlessly streams behind her, complementing her ideal physique, which is accentuated by form-fitting red leggings adorned with striking geometric patterns. The scene is bathed in vibrant sunlight, creating dynamic shadows as she elegantly executes her swing with finesse. 𝝅𝝅-Flow (GM-Qwen) 4-NFE Qwen-Image 50x2-NFE Qwen-Image Lightning 4-NFE Figure 4: Images generated from the same batch of initial noise by π-Flows, teachers, and VSD stu- dents (SenseFlow, Qwen-Image Lightning). π-Flow models produce diverse structures that closely mirror the teacher’s. In contrast, VSD students tend to repeat the same structure. Notably, Sense- Flow mostly generates symmetric images. teacher alignment is also evident in Fig. 4, where π-Flow generates structurally similar images to the teacher’s from the same initial noise. Comparison with VSD (DMD) students. VSD models are notable for high visual quality, some- times surpassing teachers in quality and preference metrics. However, they are widely known to suffer from mode collapse, as reflected in our experiments: both SenseFlow and Qwen-Image Light- ning show significant drops in diversity and FIDs. Visual examples in Fig. 4 further highlight the collapse, where different initial noises produce visually similar images with only minor variations. In contrast, π-Flow maintains high quality and diversity without sacrificing either aspect. Comparison with other students. FLUX Turbo achieves better data alignment FIDs than the teacher due to GAN training, yet its text rendering performance is significantly weaker, as shown in Fig. 5. Meanwhile, Hyper-FLUX often produces undesirable texture artifacts and fuzzy details, whereas π-Flow achieves superior detail rendering, as shown in Fig. 6. 8 Preprint 𝝅𝝅-Flow (GM-FLUX) 8-NFE FLUX Turbo 8-NFE Figure 5: Images generated from the same initial noise by π-Flow and FLUX Turbo. π-Flow renders coherent texts, whereas FLUX Turbo underperforms in text rendering. 𝝅𝝅-Flow (GM-FLUX) 8-NFE Hyper-FLUX 8-NFE Aoshima's masterpiece depicts a forest illuminated by morning light. A painting depicting a scenic view of Guangzhou, China as a tourist destination by David Inshaw. Figure 6: Images generated from the same initial noise by π-Flow and Hyper-FLUX. π-Flow pro- duces notably finer details, as highlighted in the zoomed-in patches. Data-dependent vs. data-free. As shown in Table 3 and Table 4, data-dependent and data-free π-Flow models achieve nearly identical results. This demonstrates the practicality of π-Flow in scenarios where high-quality data is unavailable. GMFlow vs. DX policy. Consistent with prior ImageNet findings, the DX policy slightly underper- forms comapred to the GMFlow policy (Table 5), highlighting the latter’s superior robustness. Convergence. Figure 7 illustrates the convergence of π-Flow (GM-Qwen) over training iterations. Both FID and Patch FID scores initially improve rapidly, outperforming Qwen-Image Lightning within the first 400 iterations, and continue to improve steadily thereafter. This contrasts with previ- ous GAN or VSD-based methods that often require frequent checkpointing and cherry-picking (Ge et al., 2025), demonstrating the scalability and robustness of our approach. 6 RELATED WORK Prior work on diffusion model distillation primarily focuses on predicting shortcuts towards less noisy states, with training objectives ranging from direct regression to distribution matching. Early work (Luhman & Luhman, 2021) directly regresses the teacher’s ODE integral in a single step, but suffers from degraded quality, since regressing x0 with an ℓ2 loss tends to produce blurry results. Progressive distillation methods (Salimans & Ho, 2022; Liu et al., 2023; 2024; Frans et al., 2025) make further improvements via a multi-stage process that progressively increases the student’s step size and reduces its NFE by regressing the previous stage’s multi-step outputs with fewer steps, yet this introduces error accumulation. Consistency-based models (Song et al., 2023; Kim et al., 2024; Song & Dhariwal, 2024; Geng et al., 2025) implicitly impose a velocity-based regression loss, which improves quality compared to x-based regression. However, the velocity of a shortcut-predicting student must be constructed implicitly using either inaccurate finite differences or expensive Jacobian–vector products (JVPs). Moreover, their quality is still limited due to accumulation of velocity errors into the integrated 9 Preprint Table 5: Comparisons between DX and GMFlow policies on text-to-image generation. Policy Teacher HPSv2 prompts OneIG-Bench FID↓pFID↓HPSv2.1↑Text↑Diversity↑ DX (N = 10) FLUX 14.9 20.9 0.313 0.397 0.225 GM (K = 8) FLUX 14.3 19.2 0.313 0.437 0.229 DX (N = 10) Qwen-Image 12.7 17.0 0.306 0.869 0.185 GM (K = 8) Qwen-Image 12.8 16.6 0.310 0.892 0.180 0 2K 4K 6K 8K Iteration 14 16 18 20 Score Qwen Lightning Qwen Lightning FID Patch FID Figure 7: Teacher-referenced FID and Patch FID of GM-Qwen evaluated on HPSv2 prompts. state. Therefore, in practice, consistency distillation is often augmented with additional objectives to improve quality (Ren et al., 2024; Zheng et al., 2025), further complicating training. Conversely, distribution matching approaches (Yin et al., 2024b;a; Sauer et al., 2024b; Zhou et al., 2024; Luo et al., 2024; Salimans et al., 2024; Zhou et al., 2025) adopt score matching and adversarial training to align the student’s output distribution with the teacher’s, achieving superior quality but risking diversity loss due to mode collapse. Their common reliance on auxiliary networks also introduces additional tuning complexity and can lead to stability issues at scale (Ge et al., 2025). 7 CONCLUSION We introduced policy-based flow models (π-Flow), a novel framework for few-step generation in which the network outputs a fast policy that enables accurate ODE integration via dense substeps reach the denoised state. To distill π-Flow models, we proposed a simple on-policy imitation learn- ing approach that reduces the training objective to a single ℓ2 loss, mitigating error accumulation and quality–diversity trade-offs. Extensive experiments distilling ImageNet DiT, FLUX.1-12B, and Qwen-Image-20B models show that few-step π-Flows consistently attain teacher-level image qual- ity while significantly outperforming competitors in diversity and teacher alignment. π-Flow offers a scalable, principled paradigm for efficient, high-quality generation and opens new directions for future research, such as exploring more robust policy families, improved distillation objectives, and extensions to other applications (e.g., video generation). Reproducibility statement. To facilitate reproduction, we describe the detailed training procedures in Algorithms 2 and 3, and list all important hyperparameters in §C. Acknowledgements. This project was partially done while Hansheng Chen was supported by the Qualcomm Innovation Fellowship and partially done while Hansheng Chen was an intern at Adobe Research. We would like to thank Jianming Zhang and Hailin Jin for their great support throughout the internship, and Xingtong Ge for the help in evaluating SenseFlow. REFERENCES Michael Samuel Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants. In ICLR, 2023. Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025. URL https://arxiv.org/abs/2502.13923. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In NeurIPS, NIPS’15, pp. 1171–1179, Cambridge, MA, USA, 2015. MIT Press. Black Forest Labs. Flux.1 [schnell]. https://huggingface.co/spaces/ black-forest-labs/FLUX.1-schnell, 2024a. 10 Preprint Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024b. Jingjing Chang, Yixiao Fang, Peng Xing, Shuhan Wu, Wei Cheng, Rui Wang, Xianfang Zeng, Gang Yu, and Hai-Bao Chen. Oneig-bench: Omni-dimensional nuanced evaluation for image generation. In NeurIPS, 2025. Hansheng Chen, Kai Zhang, Hao Tan, Zexiang Xu, Fujun Luan, Leonidas Guibas, Gordon Wet- zstein, and Sai Bi. Gaussian mixture flow matching models. In ICML, 2025. Muthu Chidambaram, Khashayar Gatmiry, Sitan Chen, Holden Lee, and Jianfeng Lu. What does guidance do? a fine-grained analysis in a simple setting. In NeurIPS, 2024. URL https: //openreview.net/forum?id=AdS3H8SaPi. Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generaliza- tion in reinforcement learning. In ICML, 2019. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pp. 248–255, 2009. doi: 10.1109/CVPR.2009.5206848. Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke Zettlemoyer. 8-bit optimizers via block-wise quantization. In ICLR, 2022. Prafulla Dhariwal and Alexander Quinn Nichol. Diffusion models beat GANs on image synthesis. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), NeurIPS, 2021. URL https://openreview.net/forum?id=AAWuCvzaVt. Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M¨uller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, Dustin Podell, Tim Dockhorn, Zion En- glish, Kyle Lacey, Alex Goodwin, Yannik Marek, and Robin Rombach. Scaling rectified flow transformers for high-resolution image synthesis. In ICML, 2024. Kevin Frans, Danijar Hafner, Sergey Levine, and Pieter Abbeel. One step diffusion via shortcut models. In ICLR, 2025. URL https://openreview.net/forum?id=OlzB6LnXcS. Xingtong Ge, Xin Zhang, Tongda Xu, Yi Zhang, Xinjie Zhang, Yan Wang, and Jun Zhang. Senseflow: Scaling distribution matching for flow-based text-to-image distillation, 2025. URL https://arxiv.org/abs/2506.00523. Zhengyang Geng, Mingyang Deng, Xingjian Bai, J. Zico Kolter, and Kaiming He. Mean flows for one-step generative modeling. In NeurIPS, 2025. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger (eds.), NeurIPS, volume 27. Cur- ran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper_files/ paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NeurIPS, 2017. Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. In NeurIPS Workshop, 2021. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In NeurIPS, 2020. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In ICLR, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9. Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion- based generative models. In NeurIPS, 2022. Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine. Analyz- ing and improving the training dynamics of diffusion models. In CVPR, 2024. 11 Preprint Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning proba- bility flow ODE trajectory of diffusion. In ICLR, 2024. URL https://openreview.net/ forum?id=ymjI8feDTD. Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014. Tuomas Kynk¨a¨anniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In NeurIPS, 2019. Tuomas Kynk¨a¨anniemi, Miika Aittala, Tero Karras, Samuli Laine, Timo Aila, and Jaakko Lehtinen. Applying guidance in a limited interval improves sample and distribution quality in diffusion models. In NeurIPS, 2024. Shanchuan Lin, Anran Wang, and Xiao Yang. Sdxl-lightning: Progressive adversarial diffusion distillation, 2024a. URL https://arxiv.org/abs/2402.13929. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars (eds.), ECCV, pp. 740–755, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10602-1. Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. In ECCV, 2024b. Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le. Flow matching for generative modeling. In ICLR, 2023. URL https://openreview.net/ forum?id=PqvMRDCJT9t. Qiang Liu. Rectified flow: A marginal preserving approach to optimal transport, 2022. URL https://arxiv.org/abs/2209.14577. Xingchao Liu, Chengyue Gong, and qiang liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In The Eleventh International Conference on Learning Repre- sentations, 2023. URL https://openreview.net/forum?id=XVjTT1nw5z. Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, and qiang liu. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. In ICLR, 2024. URL https:// openreview.net/forum?id=1k4yZbbDqX. Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), NeurIPS, 2022. URL https: //openreview.net/forum?id=2uAaGwlP_V. Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models, 2023. URL https://arxiv. org/abs/2211.01095. Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed, 2021. URL https://arxiv.org/abs/2101.02388. Weijian Luo, Zemin Huang, Zhengyang Geng, J. Zico Kolter, and Guo jun Qi. One-step diffusion distillation through score implicit matching. In NeurIPS, 2024. Nanye Ma, Mark Goldstein, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden, and Saining Xie. Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. In ECCV, 2024. ModelTC. Qwen-image-lightning. https://github.com/ModelTC/ Qwen-Image-Lightning, 2025. William Peebles and Saining Xie. Scalable diffusion models with transformers. In ICCV, 2023. 12 Preprint Yansong Peng, Kai Zhu, Yu Liu, Pingyu Wu, Hebei Li, Xiaoyan Sun, and Feng Wu. Flow-anchored consistency models, 2025. URL https://arxiv.org/abs/2507.03738. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pp. 8748–8763, 2021. Yuxi Ren, Xin Xia, Yanzuo Lu, Jiacheng Zhang, Jie Wu, Pan Xie, XING WANG, and Xuefeng Xiao. Hyper-SD: Trajectory segmented consistency model for efficient image synthesis. In NeurIPS, 2024. URL https://openreview.net/forum?id=O5XbOoi0x3. Hans Richter. Parameterfreie absch¨atzung und realisierung von erwartungswerten. Bl¨atter der DGVFM, 3(2):147–162, Apr 1957. ISSN 1864-0303. doi: 10.1007/BF02808864. URL https://doi.org/10.1007/BF02808864. Stephane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and struc- tured prediction to no-regret online learning. In Geoffrey Gordon, David Dunson, and Miroslav Dud´ık (eds.), AISTATS, volume 15 of Proceedings of Machine Learning Research, pp. 627–635, Fort Lauderdale, FL, USA, 11–13 Apr 2011. PMLR. URL https://proceedings.mlr. press/v15/ross11a.html. Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In ICLR, 2022. Tim Salimans, Thomas Mensink, Jonathan Heek, and Emiel Hoogeboom. Multistep distillation of diffusion models via moment matching. NeurIPS, 37:36046–36070, 2024. Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas Blattmann, Patrick Esser, and Robin Rom- bach. Fast high-resolution image synthesis with latent adversarial diffusion distillation. In SIGGRAPH Asia, SA ’24, New York, NY, USA, 2024a. Association for Computing Machin- ery. ISBN 9798400711312. doi: 10.1145/3680528.3687625. URL https://doi.org/10. 1145/3680528.3687625. Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial diffusion dis- tillation. In ECCV, pp. 87–103, 2024b. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, pp. 2256–2265, 2015. Yang Song and Prafulla Dhariwal. Improved techniques for training consistency models. In ICLR, 2024. URL https://openreview.net/forum?id=WNzy9bRDvG. Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In NeurIPS, 2019. Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021. Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In ICML, 2023. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, January 2014. ISSN 1532-4435. Vladimir Tchakaloff. Formules de cubatures m´ecaniques `a coefficients non n´egatifs. Bulletin des Sciences Math´ematiques, 81(2):123–134, 1957. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), NeurIPS, volume 30. Curran Associates, Inc., 2017. Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, and Jun Zhu. Prolific- dreamer: High-fidelity and diverse text-to-3d generation with variational score distillation. In NeurIPS, 2023. 13 Preprint Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, Yuxiang Chen, Zecheng Tang, Zekai Zhang, Zhengyi Wang, An Yang, Bowen Yu, Chen Cheng, Dayiheng Liu, Deqing Li, Hang Zhang, Hao Meng, Hu Wei, Jingyuan Ni, Kai Chen, Kuan Cao, Liang Peng, Lin Qu, Minggang Wu, Peng Wang, Shuting Yu, Tingkun Wen, Wensen Feng, Xiaoxiao Xu, Yi Wang, Yichang Zhang, Yongqiang Zhu, Yujia Wu, Yuxuan Cai, and Zenan Liu. Qwen-image technical report, 2025. URL https://arxiv.org/abs/ 2508.02324. Xiaoshi Wu, Yiming Hao, Keqiang Sun, Yixiong Chen, Feng Zhu, Rui Zhao, and Hongsheng Li. Human preference score v2: A solid benchmark for evaluating human preferences of text-to- image synthesis, 2023. URL https://arxiv.org/abs/2306.09341. Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: learning and evaluating human preferences for text-to-image generation. In NeurIPS, pp. 15903–15935, 2023. Tianwei Yin, Micha¨el Gharbi, Taesung Park, Richard Zhang, Eli Shechtman, Fredo Durand, and William T. Freeman. Improved distribution matching distillation for fast image synthesis. In NeurIPS, 2024a. URL https://openreview.net/forum?id=tQukGCDaNT. Tianwei Yin, Micha¨el Gharbi, Richard Zhang, Eli Shechtman, Fr´edo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In CVPR, 2024b. Sihyun Yu, Sangkyung Kwak, Huiwon Jang, Jongheon Jeong, Jonathan Huang, Jinwoo Shin, and Saining Xie. Representation alignment for generation: Training diffusion transformers is easier than you think. In ICLR, 2025. Qinsheng Zhang and Yongxin Chen. Fast sampling of diffusion models with exponential integrator. In ICLR, 2023. URL https://openreview.net/forum?id=Loek7hfb46P. Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, and Jiwen Lu. Unipc: A unified predictor- corrector framework for fast sampling of diffusion models. In NeurIPS, 2023. Kaiwen Zheng, Yuji Wang, Qianli Ma, Huayu Chen, Jintao Zhang, Yogesh Balaji, Jianfei Chen, Ming-Yu Liu, Jun Zhu, and Qinsheng Zhang. Large scale diffusion distillation via score- regularized continuous-time consistency, 2025. URL https://arxiv.org/abs/2510. 08431. Linqi Zhou, Stefano Ermon, and Jiaming Song. Inductive moment matching. In ICML, 2025. Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, and Hai Huang. Score identity distillation: exponentially fast distillation of pretrained diffusion models for one-step generation. In ICML, 2024. 14 Preprint Algorithm 2: Data-dependent on-policy π-ID training loop with time shifting. Input: NFE, teacher Gθ, data-condition distribution p(dx0, dc), shift m Output: Student Gϕ 1 Initialize student params ϕ 2 S ←  1 NFE , 2 NFE , · · · , 1 // can be adjusted to reduce final step size 3 for finite samples x0, c ∼p(dx0, dc), ϵ ∼N(0, I), τ ′ ∼U(0, 1) do 4 τsrc ←min{ τsrc | τsrc ∈S and τsrc ≥τ ′ } 5 τdst ←max{ τdst | τdst ∈S ∪{ 0 } and τdst < τsrc } 6 tsrc ← mτsrc 1+(m−1)τsrc // time shifting (Esser et al., 2024) 7 xtsrc ←αtsrcx0 + σtsrcϵ 8 π ←Gϕ(xtsrc, tsrc, c) 9 πD ←stopgrad(π) or πD ←dropout(stopgrad(π)) 10 Lϕ ←0 11 for finite samples τ ∼U(τdst, τsrc) do 12 t ← mτ 1+(m−1)τ 13 xt ←xtsrc + R t tsrc πD(xt, t) dt 14 Lϕ ←Lϕ + 1 2∥Gθ(xt, t, c) −π(xt, t)∥2 // can be replaced with Eq. (6) 15 ϕ ←Adam(ϕ, ∇ϕLϕ) // optimizer step Algorithm 3: Data-free on-policy π-ID training loop with time shifting. Input: NFE, teacher Gθ, condition distribution p(dc), shift m Output: Student Gϕ 1 Initialize student params ϕ 2 for finite samples c ∼p(dc), x1 ∼N(0, I) do 3 τsrc ←1, tsrc ←1 4 Lϕ ←0 5 while τsrc > 0 do 6 τdst ←τsrc − 1 NFE // can be adjusted to reduce final step size 7 tdst ← mτdst 1+(m−1)τdst // time shifting (Esser et al., 2024) 8 π ←Gϕ(xtsrc, tsrc, c) 9 πD ←stopgrad(π) or πD ←dropout(stopgrad(π)) 10 for finite samples τ ∼U(τdst, τsrc) do 11 t ← mτ 1+(m−1)τ 12 xt ←xtsrc + R t tsrc πD(xt, t) dt 13 Lϕ ←Lϕ + τsrc−τdst 2 ∥Gθ(xt, t, c) −π(xt, t)∥2 // can be replaced with Eq. (6) 14 xtdst ←xtsrc + R tdst tsrc πD(xt, t) dt 15 τsrc ←τdst, tsrc ←tdst 16 ϕ ←Adam(ϕ, ∇ϕLϕ) // optimizer step A USE OF LARGE LANGUAGE MODELS In preparing this manuscript, we used large language models (LLMs) as general-purpose writing assistants for grammar corrections, rephrasing, and clarity/concision edits. All LLM-suggested edits were reviewed and verified by the authors, who take full responsibility for the final manuscript. B ADDITIONAL TECHNICAL DETAILS B.1 GM TEMPERATURE Inspired by the temperature parameter in language models, we introduce a similar temperature pa- rameter for the GMFlow policy during inference. Let T > 0 be the temperature parameter. Given a C-dimensional GM velocity distribution q(u|xtsrc) = PK k=1 AkN u; µk, s2I  , the new GM prob- 15 Preprint Teacher 𝐺𝐺𝜽𝜽 step Policy 𝜋𝜋 rollout Detached policy 𝜋𝜋D rollout Policy 𝜋𝜋 𝒙𝒙𝑡𝑡src 𝒙𝒙𝑡𝑡dst 𝐺𝐺𝝓𝝓 Policy 𝜋𝜋 𝒙𝒙𝑡𝑡src Detached policy 𝜋𝜋D stopgrad (+ dropout) 𝒙𝒙𝑡𝑡dst 𝐺𝐺𝝓𝝓 (see Fig. 3) decay (a) Off-policy: teacher ratio=100% (b) Mixture of teacher and detached policy (c) On-policy: teacher ratio=0% decay Figure 8: Three stages of scheduled trajectory mixing. (a) Off-policy behavior cloning with a teacher ratio of 1. (b) Mixed teacher and detached-policy segments with a decaying teacher ratio. (c) On- policy imitation learning with a teacher ratio of 0 (Fig. 3). ability with temperature T is defined as: qT (u|xtsrc) := q 1 T (u|xtsrc) R RC q 1 T (u|xtsrc) du . (3) Although qT (u|xtsrc) does not have a general closed-form expression, it can be approximated by the following expression, which works very well as a practical implementation: qT (u|xtsrc) ≈ K X k=1 A 1 T k PK z=1 A 1 Tz N u; µk, s2TI  . (4) For the distilled FLUX and Qwen-Image models, we set T = 0.3 for 4-NFE generation and T = 0.7 for 8-NFE generation. An exception is that we do not apply temperature scaling to the final step, as we found this can impair texture details. As shown in Table 6, ablating GM temperature from the 4-NFE GM-FLUX leads to degraded teacher alignment. B.2 SCHEDULED TRAJECTORY MIXING FOR GUIDANCE-DISTILLED TEACHERS To reduce out-of-distribution exposure in imitation learning, scheduled sampling (Bengio et al., 2015) stochastically alternates between expert (teacher) and learner policy during trajectory integra- tion, decaying the expert probability from 1 to 0. However, naively applying it to π-ID is impractical because the teacher flow model Gθ is much slower than the network-free policy πD. To maintain constant compute throughout training, we introduce a scheduled trajectory mixing strat- egy. Since the teacher is slow, we fix the total number of teacher queries, allow each query to cover a coarse, longer step initially, and gradually shrink the teacher step size while filling the gaps with the fast policy πD. As shown in Fig. 8 (a), training initially adopts a fully off-policy teacher trajectory (behavior cloning). At the beginning time ta of each teacher step, we roll in the learner policy π, integrate it over the same interval from ta to tb, and match its average velocity to the teacher velocity with the ℓ2 loss: Lϕ = E " 1 2 Gθ(xta, ta) − 1 tb −ta Z tb ta π(xt, t) dt 2# . (5) As training progresses (Fig. 8 (b)), we then mix teacher and detached-policy segments while using the same loss, and linearly decay the teacher ratio—the sum of teacher step lengths divided by the total interval length tsrc −tdst. Finally, when the teacher ratio reaches 0, training reduces to on- policy π-ID. All teacher step boundaries (starts and ends) are randomly sampled within the interval [tdst, tsrc] under the teacher ratio constraint, so that step sizes and locations vary while the total teacher-covered length follows the current ratio schedule. We apply scheduled trajectory mixing exclusively when distilling the FLUX.1 dev model, as it lacks real CFG. Since omitting CFG doubles the teacher’s speed, we increase the number of intermediate samples (teacher steps) to 4 accordingly. B.3 MICRO-WINDOW VELOCITY MATCHING For on-policy π-ID, in practice we found that replacing the instantaneous velocity matching loss in Algorithm 1 with a modified average velocity loss over a micro time window generally benefits 16 Preprint Table 6: Ablation study on 4-NFE π-Flow (GM-FLUX), evaluated on the HPSv2 prompt set using teacher-referenced FID metrics (reflecting teacher alignment). Method FID↓pFID↓ π-Flow (GM-FLUX) 14.3 19.2 w/o GM temperature 14.9 20.1 w/o micro window 14.6 20.3 A futuristic, sleek sports car with a low, aerodynamic design is shown in motion against a backdrop of a city skyline at sunset. The car features sharp angles, large wheels with orange accents, and a prominent front grille. The cityscape includes tall buildings with illuminated windows, and the sky is painted with hues of orange and blue as the sun sets. The lighting is warm and golden, with the sun setting behind the city, casting a glow over the scene. The car is positioned in the foreground, with the city skyline in the background, creating a sense of depth and movement. FLUX.1 dev 128-NFE FLUX.1 dev 43-NFE Figure 9: The 128-NFE FLUX.1 dev often generates blurry images, whereas the 43-NFE FLUX.1 dev reduces the blur and produces sharper edges. training. The modified loss is defined as: Lϕ = E  1 2 Gθ(xt, t) − 1 −∆t Z t−∆t t π(xt, t) dt 2 , (6) where ∆t is the window size. We set ∆t = 3/128 (three policy integration steps) for all FLUX.1 and Qwen-Image experiments. The benefits of micro-window velocity matching are threefold: • It generally smooths the training signal, reducing sensitivity to sharp local variations in the teacher trajectory. • It stabilizes the less robust DX policy. In the ImageNet experiments, we observe that training with the DX policy diverges without this modification. • With ∆t = 3/128, the policy effectively mimics teacher sampling with 128 3 ≈43 steps instead of 128 steps. For the guidance-distilled FLUX.1 dev model, we observe that the teacher often generates blurry images using 128-step sampling, while 43-step sampling yields sharper results (see Fig. 9). This behavior is inherited by the student, so micro-window velocity matching helps reduce blur. As shown in Table 6, ablating the micro window trick from the 4-NFE GM-FLUX leads to degraded teacher alignment. 17 Preprint Table 7: Hyperparameters used in the ImageNet experiments. 1-NFE 2-NFE GM-FM (K = 32) GM-REPA (K = 8) GM-REPA (K = 32) DX-REPA (N = 10) DX-REPA (N = 20) DX-REPA (N = 40) GM-REPA (K = 32) GM dropout 0.05 0.05 0.05 - - - 0.05 # of intermediate states 2 2 2 2 2 2 2 Window size (raw) ∆τ - - - 10/128 5/128 3/128 - Shift m 1.0 1.0 1.0 1.0 1.0 1.0 1.0 Teacher CFG 2.7 3.2 3.2 3.2 3.2 3.2 2.8 Teacher CFG interval t ∈[0, 0.6] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] Learning rate 5e-5 5e-5 5e-5 5e-5 5e-5 5e-5 5e-5 Batch size 4096 4096 4096 4096 4096 4096 4096 # of training iterations in Table 2 140K - 140K - - - 24K EMA param γ in Karras et al. (2024) 7.0 7.0 7.0 7.0 7.0 7.0 7.0 Table 8: Hyperparameters used in FLUX and Qwen-Image experiments. 4-NFE 8-NFE GM-FLUX (K = 8) GM-Qwen (K = 8) DX-FLUX (N = 10) DX-Qwen (N = 10) GM-FLUX (K = 8) GM dropout 0.1 0.1 - - 0.1 GM temperature T 0.3 0.3 - - 0.7 # of intermediate states 4 2 4 2 4 Window size (raw) ∆τ 3/128 3/128 3/128 3/128 3/128 Shift m 3.2 3.2 3.2 3.2 3.2 Final step size scale 0.5 0.5 0.5 0.5 0.5 Teacher CFG 3.5 4.0 3.5 4.0 3.5 Learning rate 1e-4 1e-4 1e-4 1e-4 1e-4 Batch size 256 256 256 256 256 # of training iterations 3K 9K 3K 9K 3K # of decay iterations (§ B.2) 2K - 2K - 2K EMA param γ in Karras et al. (2024) 7.0 7.0 7.0 7.0 7.0 B.4 TIME SAMPLING For high resolution image generation, Esser et al. (2024) proposed a time shifting mechanism to rescale the noise strength. Let τ be the pre-shift raw time and m be the shift hyperparameter, the shifted time is defined as t := mτ 1+(m−1)τ . Following this idea, π-ID samples times uniformly in raw-time space and then applies the shift to remap those samples. Detailed time sampling routines are given in Algorithms 2 and 3. For FLUX.1 and Qwen-Image, we use a fixed shift m = 3.2, which is a rounded approximation of FLUX.1’s official dynamic shift at 1MP resolution. In addition, several diffusion/flow models reduce the noise strength at the final step to improve detail (Karras et al., 2022; Wu et al., 2025). Accordingly, for FLUX.1 and Qwen-Image we halve the final step size (relative to previous steps) in raw-time space. C ADDITIONAL IMPLEMENTATION DETAILS AND HYPERPARAMETERS All models are trained with BF16 mixed precision, using the 8-bit Adam optimizer (Kingma & Ba, 2014; Dettmers et al., 2022) without weight decay. For inference, we use EMA weights with 18 Preprint Table 9: FLUX.1 schnell evaluation results on COCO-10k dataset and HPSv2 prompt set. Model Distill method NFE COCO-10k prompts HPSv2 prompts Data align. Prompt align. Pref. align. Prompt align. Pref. align. FID↓pFID↓CLIP↑VQA↑ HPSv2.1↑ CLIP↑VQA↑ HPSv2.1↑ FLUX.1 schnell GAN 4 21.8 29.1 0.274 0.913 0.297 0.297 0.843 0.301 𝝅𝝅-Flow (GM-FLUX) 4-NFE FLUX.1 schnell 4-NFE The president being abducted by aliens. A puppy is driving a car in a film still. A portrait of a melting skull with intricate abstract details, created by Tooth Wu, Wlop Beeple, and Dan Mumford, using Octane Render. Spiderman character in the game Sea of Thieves. The image depicts a beautiful goddess of spring wearing a wreath and flowy green skirt, created by artist wlop. FLUX.1 dev 50-NFE Figure 10: Typical failure cases of FLUX.1 schnell. For reference, we also show the corresponding FLUX.1 dev and π-Flow results from the same initial noise. a dynamic moment schedule (Karras et al., 2024). Detailed hyperparameter choices are listed in Table 7 and 8. C.1 DISCUSSION ON GMFLOW POLICY HYPERPARAMETERS For the GMFlow policy, we observed that the hyperparameters suggested by Chen et al. (2025) (K = 8, C = VAE latent channel size) generally work well. These parameters play important roles 19 Preprint in balancing compatibility, expresiveness, and robustness. A larger K improves expresiveness but impairs compatibility as it may complicate network training. A larger C improves robustness (since GMFlow models correlations within each C-dimensional chunk) but impairs expresiveness (raises the theoretical K = N ·C bound). In addition, improving expresiveness may generally compromise robustness, due to the increased chance of encountering outlier trajectories during inference. D DISCUSSION ON FLUX.1 SCHNELL The official 4-NFE FLUX.1 schnell model (Black Forest Labs, 2024a) (based on adversarial distil- lation (Sauer et al., 2024a)) is distilled from the closed-source FLUX.1 pro instead of the publicly available FLUX.1 dev. This makes a direct comparison to the student models in Table 3 inequitable. For reference, nevertheless, we include the COCO-10k and HPSv2 metrics for FLUX.1 schnell in Table 9. These metrics reveal a trade-off: while FLUX.1 schnell achieves significantly better data and prompt alignment than FLUX.1 dev, its preference alignment is substantially weaker than FLUX.1 dev and all of its students. To validate this observation, we conducted a human preference study. Our 4-NFE π-Flow (GM- FLUX) was compared against FLUX.1 schnell on 200 images generated from HPSv2 prompts. π-Flow was preferred by users 59.5% of the time, aligning with the HPSv2.1 preference metric. Furthermore, qualitative comparisons in Fig. 10 reveals that FLUX.1 schnell is prone to frequent structural errors (e.g., missing/extra/distorted limbs), whereas π-Flow maintains coherent structures. E PROOF OF THEOREM 1 We will prove that a GM with N · C components suffices for approximating any N-step trajectory in RC by first establishing Theorem 2, and then applying the Richter–Tchakaloff theorem to show that a mixture of N · C Dirac deltas satisfy all ODE moment equations, which finally leads to N · C Gaussian components. Theorem 2. Given pairwise distinct times t1, . . . , tN ∈(0, 1] and vectors xtn, ˙xtn ∈RC for n = 1, . . . , N, there exists a probability measure p(dx0) on RC, such that Eq (1) holds at t = tn for every n = 1, . . . , N. E.1 MOMENT EQUATION For every t ∈(0, 1], the ODE moment equation has the following equivalent forms: ˙xt = Z RC xt −x0 t p(dx0|xt) ⇔ ˙xt Z RC p(dx0|xt) = Z RC xt −x0 t p(dx0|xt) ⇔ Z RC x0 −xt + t ˙xt t p(dx0|xt) = 0 ⇔ Z RC (x0 −xt + t ˙xt)N xt; αtx0, σ2 t I  p(dx0) tp(xt) = 0 ⇔ Z RC(x0 −xt + t ˙xt)N xt; αtx0, σ2 t I  p(dx0) = 0. (7) Let g(t, x0) := (x0 −xt + t ˙xt)N xt; αtx0, σ2 t I  be a kernel function. The above equation can be written as a multivariate homogeneous Fredholm integral equation of the first kind: Z RC g(t, x0)p(dx0) = 0. (8) To prove Theorem 2, we need to show that there exists a probability measure p(dx0) on RC that solves the Fredholm equation at t = tn for every n = 1, · · · , N. 20 Preprint E.2 UNIVARIATE MOMENT EQUATION To prove the existence of a solution to the multivariate Fredholm equation, we can simplify the proof into a univariate case by showing that an element-wise probability factorization p(dx0) = QC i=1 p(dxi0) exists that solves the Fredholm equation. In this case, Eq. (7) can be written as: ∀i = 1, 2, · · · , C, Z R (xi0 −xit + t ˙xit)N xit; αtxi0, σ2 t  p(dxi0) Y j̸=i Z R N xjt; αtxj0, σ2 t  p(dxj0) = 0 ⇔ ∀i = 1, 2, · · · , C, Z R (xi0 −xit + t ˙xit)N xit; αtxi0, σ2 t  p(dxi0) = 0. (9) To see this, we need to prove that there exists a probability measure p(x0) on R that solves the following univariate Fredholm equation at t = tn for every n = 1, · · · , N: Z R g(t, x0)p(dx0) = 0, (10) where g(t, x0) := (x0 −xt + t ˙xt)N xt; αtx0, σ2 t  is the univariate kernel function. E.3 CONVEX COMBINATION Lemma 1. Define the vector function: γ : R →RN, γ(x0) = (g(t1, x0), g(t2, x0), · · · , g(tN, x0)). (11) Then, the zero vector lies in the convex hull in RN, i.e.: 0 ∈conv{ γ(x0) | x0 ∈R } ∈RN. (12) Proof. Define S := conv{ γ(x0) | x0 ∈R }. Assume for the sake of contradiction that 0 /∈S. By the supporting and separating hyperplane theorem, there exists w ̸= 0 ∈RN, such that: ∀χ ∈S, ⟨w, χ⟩≤0. (13) In particular, this implies that: ∀x0 ∈R, ⟨w, γ(x0)⟩≤0. (14) Define h(x0) := ⟨w, γ(x0)⟩= PN n=1 wng(tn, x0). Recall the definition of g(t, x0) that: g(t, x0) = (x0 −xt + t ˙xt)N xt; αtx0, σ2 t  = x0 −xt + t ˙xt √ 2πt2 exp  −(xt −αtx0)2 2σ2 t  . (15) Let n∗be an index with wn∗̸= 0 for which the exponential term above decays the slowest, i.e.: α2 tn∗ 2σ2 tn∗ = min  α2 tn 2σ2 tn wn∗̸= 0  . (16) Note that since α2 t 2σ2 t is monotonic, for every n ̸= n∗with wn ̸= 0, we have α2 tn 2σ2 tn > α2 tn∗ 2σ2 tn∗. Therefore, as |x0| →∞, h(x0) is dominated by the n∗-th component, i.e.: h(x0) = wn∗x0 −xtn∗+ tn∗˙xtn∗ p 2πt2 n∗ exp  −(xtn∗−αtn∗x0)2 2σ2 tn∗  (1 + O(1)). (17) Because the term x0−xtn∗+tn∗˙xtn∗changes sign between −∞and +∞, h(x0) takes both positive and negative values. This contradicts the hyperplane implication that h(x0) ≤0. Therefore, we conclude that 0 ∈S. By Lemma 1 and Carath´eodory’s theorem, the zero vector can be expressed as a convex combination of at most N +1 points on γ(x0). Therefore, there exists a finite-support probability measure p(dx0) consisting of N + 1 Dirac delta components that solves the univariate Fredholm equation at t = tn for every n = 1, · · · , N, completing the proof of Theorem 2. 21 Preprint E.4 N · C COMPONENTS SUFFICE Richter’s extension to Tchakaloff’s theorem states as follows. Theorem 3 (Richter (1957); Tchakaloff (1957)). Let V be a finite-dimensional space of measurable functions on RC. For some probability measure p(dx0) on RC, define the moment functional: Λ: V →R, Λ[g] := Z RC g(x0)p(dx0). (18) Then there exists a K-atomic measure p∗(dx0) = PK k=1 Akδµk(dx0) with Ak > 0 and K ≤dim V such that: ∀g ∈V, Λ[g] = Z RC g(x0)p∗(dx0) = K X k=1 Akg(µk). (19) By Theorem 2, we know that for V = span{ gi(tn, x0) | i = 1, · · · , C, n = 1, · · · , N } with the scalar function gi(tn, x0) := (xi0−xit+t ˙xit)N xt; αtx0, σ2 t I  , there exists a probability measure p(dx0) such that R RD gi(tn, x0)p(dx0) = 0 for every i, n. Then, by the Richter–Tchakaloff theo- rem, there also exists a K-atomic measure with K ≤dim V ≤N · C that satisfies all the moment equations. By taking the upper bound, this implies the existence of an N · C-atomic probability measure p∗(dx0) = PN·C k=1 Akδµk(dx0) with Ak > 0, PN·C k=1 Ak = 1 that solves the Fredholm equation (Eq. (8)) at t = tn for every n = 1, · · · , N. Finally, since N xt; αtx0, σ2 t I  is continuous, the N · C Dirac deltas in p∗(dx0) can be replaced by a mixture of N · C narrow Gaussians, such that ˙xn is approximated arbitrarily well for every n, i.e.: ∀n = 1, · · · , N, lim s→0 xtn − R RD x0p(dx0|xtn) tn p(dx0)=PN·C k=1 AkN (dx0;µk,s2I) = xtn − R RD x0p(dx0|xtn) tn p(dx0)=PN·C k=1 Akδµk (dx0) = ˙xtn (20) This completes the proof of Theorem 1. F DERIVATION OF CLOSED-FORM GMFLOW VELOCITY In this section, we provide details regarding the derivation of closed-form GMFlow velocity, which was originally presented by Chen et al. (2025) but not covered in detail. Given the u-based GM prediction q(u|xtsrc) = PK k=1 AkN u; µk, s2I  with Ak ∈R+, µk ∈RC, s ∈R+, we first convert it into the x0-based parameterization by substituting u = xtsrc−x0 σtsrc into the density function, which yields: q(x0|xtsrc) = K X k=1 AkN x0; µxk, s2 xI  , (21) with the new parameters µxk = xtsrc −σtsrcµk and sx = σtsrcs. Then, for any t < tsrc and any xt ∈RC, the denoising posterior at (xt, t) is given by: q(x0|xt) = p(xt|x0) Z · p(xtsrc|x0)q(x0|xtsrc), (22) 22 Preprint where Z is a normalization factor dependent on xtsrc, xt, tsrc, t. Using the definition of forward diffusion p(xtsrc|x0) = N xtsrc; αtsrcx0, σ2 tsrcI  , we have: q(x0|xt) = N xt; αtx0, σ2 t I  Z · N xtsrc; αtsrcx0, σ2 tsrcI q(x0|xtsrc) = N  x0; 1 αt xt, σ2 t α2 t I  Z′ · N  x0; 1 αtsrc xtsrc, σ2 tsrc α2 tsrc I q(x0|xtsrc) = 1 Z′′ N  x0; ν ζ , σ2 tsrcσ2 t ζ I  q(x0|xtsrc), where ν = σ2 tsrcαtxt −σ2 t αtsrcxtsrc, ζ = σ2 tsrcα2 t −σ2 t α2 tsrc. The result can be further simplified into a new GM: q(x0|xt) = K X k=1 A′ kN  x0; µ′ k, s′2I  , (23) with the following parameters: s′2 = s2 xσ2 tsrcσ2 t s2xζ + σ2 tsrcσ2 t (24) µ′ k = s2 xν + σ2 tsrcσ2 t µxk s2xζ + σ2 tsrcσ2 t (25) A′ k = exp a′ k PK k=1 exp a′ k , (26) where the new logit a′ k is given by: a′ k = log Ak −1 2 ∥ν −ζµxk∥2 ζσ2 tsrcσ2 t + ζ2s2x . (27) Finally, the closed-form GMFlow velocity at (xt, t) is given by function π: π: RC × R →RC, π(xt, t) = xt −Ex0∼q(x0|xt)[x0] t = xt −PK k=1 A′ kµ′ k t . (28) Extension to discrete support. The closed-form GMFlow velocity can also be generalized to dis- crete support by taking limsx→0 π(xt, t), which yields the simplified parameters: µ′ k = µxk (29) a′ k = log Ak −1 2 ∥ν −ζµxk∥2 ζσ2 tsrcσ2 t . (30) G ADDITIONAL QUALITATIVE RESULTS. We show additional uncurated results of FLUX-based models in Fig. 11 and 12. 23 Preprint FLUX.1 dev 50-NFE 𝝅𝝅-Flow (GM-FLUX) 8-NFE FLUX Turbo 8-NFE Hyper-FLUX 8-NFE 𝝅𝝅-Flow (GM-FLUX) 4-NFE SenseFlow (FLUX) 4-NFE This picture follows the abstract expressionism visual style. A character with short pink hair, blue eyes, wearing a black hoodie, blue neck gaiter, black shorts, and sneakers, stands in a busy street. Vibrant shop signs, lanterns, traffic cones, a fire hydrant, and sunlight create a dynamic atmosphere. A battle-hardened male warrior stands tall, his blood-drenched face and determined eyes reflecting fierce resolve. He grips an intricately decorated sword, its hilt gleaming in the dim light. The Baroque style fills the scene, with dramatic lighting casting deep shadows on his worn features and intricate armor textures. Please recreate the extravagant apartment's living room, overlooking the Mediterranean from the desert. The striking surroundings feature succulents and cacti. The furniture is crocheted, including the sofa, walls, curtains, carpet, tables, and wall art, with a desert wildlife view visible through the window, also crocheted. The bed features a modern, minimalist design with a clean look and low profile. Its headboard is subtly curved and remains within the bed's width without wrapping around the sides, maintaining moderate height for functionality and comfort. Upholstered entirely in textured neutral fabric like light grey or beige, the frame includes headboard, side rails, and footboard without tufting or embellishments. The mattress fits snugly with no visible gap between it and the frame. Short, dark-colored legs subtly contrast with the light upholstery. Bedding consists of neatly arranged white fitted sheets, flat sheets, and pillows. A simple, light-colored wall serves as the neutral background, ensuring the bed remains the image's focal point. Stunning woman in misty sunlit field, delicate lace rose gold dress flowing in breeze, gazing over shoulder, hair adorned with sunflowers, sunlight highlighting her melancholy face, sorrowful expression, shimmering eyes, leaves swirling around her, overcast sky with lively atmosphere, intricate dress details, realistic style. Please craft an image of a genuine individual. A young woman, dressed in an authentic cowboy outfit, stands confidently next to a weathered barn. The scene is evocatively captured in sepia tones. In the distance, a vast field stretches out, dotted with tall grasses and dried hay bales, while a vibrant blue sky peeks in from the corner. In a captivating style reminiscent of animated storytelling, behind prison bars stands a powerful woman, eyes glowing red and intense, with a prominent scar. Figure 11: An uncurated random batch from the OneIG-Bench prompt set, part A. 24 Preprint FLUX.1 dev 50-NFE 𝝅𝝅-Flow (GM-FLUX) 8-NFE FLUX Turbo 8-NFE Hyper-FLUX 8-NFE 𝝅𝝅-Flow (GM-FLUX) 4-NFE SenseFlow (FLUX) 4-NFE A row of blue buses idles beneath a metal canopy marked "GATE 12 - INTERCITY EXPRESS" at a busy bus terminal. An LED screen scrolls "NEXT DEPARTURE: 15:30." Travelers sip coffee from paper cups, while a nearby newsstand labeled "CITY NEWS" displays headlines and postcards of local landmarks. Could you explore the differences between nuclear fission and fusion by designing a comparison chart?Tips: Energy release, reaction process, atomic nuclei Could you show us the educational diagram of nanomaterial structures? Tips: nanomaterials, structure Green tea ice cubes float in milk, each with a distinct ink pattern. The surface appears smooth and glass-like, enhancing the realistic feel and inviting a tactile experience. The entire scene is depicted in 3d rendering. Create an image with lifelike depiction of people, featuring a mid-thirties salaryman with thinning jet black hair. He stands confidently with hands on his hips, showcasing a crisp blue suit against the dynamic backdrop of a vibrant Tokyo cityscape. Beneath a starry sky stretching over a serene desert landscape, the poster title "Celestial Wonders: Stargazing Nights" glows softly in silver. Midway down, text invites "Discover the universe through guided telescope tours and expert talks." Footer information provides "Every Saturday evening at Mirage Dunes Observatory. Tickets available online." Young woman in wedding attire, ivory skirt, white block heel sandals, delicate train, snug-fitting garment creating stunning silhouette, white mesh top with beadwork and intricate maple leaf pattern, realistic portrayal. Figure 12: An uncurated random batch from the OneIG-Bench prompt set, part B. 25
Preprint PI-FLOW: POLICY-BASED FEW-STEP GENERATION VIA IMITATION DISTILLATION Hansheng Chen1 Kai Zhang2 Hao Tan2 Leonidas Guibas1 Gordon Wetzstein1 Sai Bi2 1Stanford University 2Adobe Research https://github.com/Lakonik/piFlow Figure 1: High quality 4-NFE text-to-image generations by π-Flow, distilled from FLUX.1-12B (top-right three images) and Qwen-Image-20B (all remaining images). π-Flow preserves the teacher's coherent structures, fine details (e.g., skin and hair), and accurate text rendering, while avoiding diversity collapse (see Fig. 4 for sample diversity). ABSTRACT Few-step diffusion or flow-based generative models typically distill a velocitypredicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models (π-Flow). π-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard l2 flow matching loss. By simply mimicking the teacher's behavior, π-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 2562, it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, π-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality. 1 16 Oct 2025 Preprint 1 INTRODUCTION Diffusion and flow matching models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song & Ermon, 2019; Lipman et al., 2023; Albergo & Vanden-Eijnden, 2023) have become the dominant method for visual generation, delivering compelling image quality and diversity. However, these models rely on a costly denoising process for inference, which integrates a probability flow ODE (Song et al., 2021) over multiple timesteps, each step requiring a neural network evaluation. Commonly, the inference cost of diffusion models is quantified by the number of function (network) evaluations (NFEs). To reduce the inference cost, diffusion distillation methods compress a pre-trained multi-step model (the teacher) into a student that requires only one or a few network evaluation steps. Existing distillation approaches avoid ODE integration by taking one or a few shortcut steps that map noise to data, where each shortcut path is predicted by the student network, referred to as a shortcut-predicting model. Learning these shortcuts is a significant challenge because they cannot be directly inferred from the teacher model. This necessitates the use of complex training methods, such as progressive distillation (Salimans & Ho, 2022; Liu et al., 2023; 2024), consistency distillation (Song et al., 2023), and distribution matching (Sauer et al., 2024a; Yin et al., 2024b;a; Salimans et al., 2024). In turn, the sophisticated training often lead to degraded image quality from error accumulation or compromised diversity due to mode collapse. To sidestep the difficulties in shortcut-predicting distillation, we propose a novel policy-based flow model (π-Flow or pi-Flow) paradigm: given noisy data at one timestep, the student network predicts a network-free policy, which maps new noisy states to their corresponding flow velocities with negligible overhead, allowing fast and accurate ODE integration using multiple substeps of policy velocities instead of network evaluations. To train the student network, we introduce policy-based imitation distillation (π-ID), a DAggerstyle (Ross et al., 2011) on-policy imitation learning (IL) method. π-ID trains the policy on its own trajectory: at visited states, we query the teacher velocity and match the policy's output to it, using the teacher's corrective signal to teach the policy to recover from its own mistakes and reduce error accumulation. Specifically, the matching employs a standard l2 loss aligned with the teacher's flow matching objective, thus naturally preserving its quality and diversity. We validate our paradigm with two types of policies: a simple dynamic-ˆx(t) 0 (DX) policy and an advanced GMFlow policy based on Chen et al. (2025). Experiments show that GMFlow policy outperforms DX policy and delivers the best ImageNet 2562 FID for 1-NFE generation using the standard DiT architecture (Peebles & Xie, 2023). To demonstrate its scalability, we distill FLUX.112B (Black Forest Labs, 2024b) and Qwen-Image-20B (Wu et al., 2025) text-to-image models into 4-NFE π-Flow students, which achieve state-of-the-art diversity, while maintaining teacher-level quality. We summarize the contributions of this work as follows: • We propose π-Flow, a new paradigm that decouples ODE integration substeps from network evaluation steps, enabling both fast generation and straightforward distillation. • We introduce π-ID, a novel on-policy IL method for few-step π-Flow distillation, which reduces the training objective to a simple l2 flow matching loss. • We demonstrate strong performance and scalability of π-Flow, particularly, its superior diversity and teacher alignment compared to other state-of-the-art 4-NFE text-to-image models. 2 PRELIMINARIES In this section, we briefly introduce flow matching models (Lipman et al., 2023; Liu et al., 2023) and the notations used in this paper. Let p(x0) denote the (latent) data probability density, where x0 ∈RD is a data point. A standard flow model defines an interpolation between a data sample and a random Gaussian noise ε ∼N(0, I), yielding the diffused noisy data xt = αtx0 + σtε, where t ∈(0, 1] denotes the diffusion time, and αt = 1 -t, σt = t are the linear flow noise schedule. The optimal transport map across all marginal densities p(xt) = R RD N(xt; αtx0, σ2 t I)p(x0) dx0 can be described by 2 Preprint xxttdst xxttsrc xxttdst xxttsrc (c) Policy-based model xxttdst Policy ππxxtt, tt (a) Standard flow model (b) Shortcut-predicting model ππ ππ ππ ππ xxttsrc = • Few network steps • Few integration steps = • Many network steps • Many integration steps ≪ • Few network steps • Many integration substeps Figure 2: Comparison between (a) standard flow model (teacher), (b) shortcut-predicting model, and (c) our policy-based model. The shortcut-predicting model skips all intermediate states, whereas the our-based model retains all intermediate substeps with minimal overhead. the following probability flow ODE (Song et al., 2021; Liu, 2022): dxt dt = ̇xt = xt -Ex0∼p(x0|xt)[x0] t = xt - R RD x0p(x0|xt) dx0 t , (1) with the denoising posterior p(x0|xt) := N(xt;αtx0,σ2 t I)p(x0) p(xt) . At test time, the model can generate samples by first initializing the noise x1 ←ε and then solving the ODE to obtain limt→0 xt. In practice, flow matching models approximate the ODE velocity dxt dt using a neural network Gθ(xt, t) with learnable parameters θ, trained using the l2 flow matching loss: Lθ = Et,x0,xt 1 2∥u -Gθ(xt, t)∥2 , with sample velocity u := xt -x0 t . (2) Since each velocity query requires evaluating the network (Fig. 2 (a)), flow matching models couple sampling efficiency with solver precision. Despite the progress in advanced solvers (Karras et al., 2022; Zhang & Chen, 2023; Lu et al., 2022; 2023; Zhao et al., 2023), high-quality sampling typically requires over 10 steps due to inherent ODE truncation error, making it computationally expensive. 3 π-FLOW: POLICY-BASED FEW-STEP GENERATION In π-Flow, we define the policy as a network-free function π: RD × R →RD that maps a state (xt, t) to a flow velocity. A policy can be network-free if it only needs to describe a single ODE trajectory, which is fully determined by its initial state (xtsrc, tsrc) with tsrc ≥t. In this case, the policy for each trajectory must be dynamically predicted by a neural network conditioned on that initial state (xtsrc, tsrc). We therefore adapt a flow model to output not a single velocity, but an entire dynamic policy that governs the full trajectory. Formally, define the policy function space F := π: RD × R →RD . Then, our goal is to distill a policy generator network Gφ : RD × R →F with learnable parameters φ, such that π(xt, t) = Gφ(xtsrc, tsrc)(xt, t). As shown in Fig. 2 (c), π-Flow performs ODE-based denoising from tsrc to tdst via two stages: • A policy generation step, which feeds the initial state (xtsrc, tsrc) to the student network Gφ to produce the policy π, i.e., π ←Gφ(xtsrc, tsrc). • Multiple policy integration substeps, which integrates the ODE by querying the policy velocity over multiple substeps, obtaining a less noisy state by xtdst ←xtsrc + R tdst tsrc π(xt, t) dt. Unlike previous few-step distillation methods, π-Flow decouples network evaluation steps from ODE integration substeps. This allows it to combine the key advantages of two paradigms: it performs only a few network evaluations for efficient generation, similar to a shortcut-predicting model, while also executing dense integration substeps, just like a standard flow matching teacher. Thanks to its teacher-like ODE integration process, a π-Flow student offers unprecedented advantage in training, as we can now follow well-established imitation learning (IL) approaches to directly match the policy velocity π(xt, t) to the teacher velocity Gθ(xt, t), as discussed later in § 4. To identify the appropriate student policy families for fast image generation, we need to satisfy the following requirements: • Efficiency. The policy should provide closed-form velocities with minimal overhead, so that rolling out dense (e.g., 100+) substeps incurs negligible cost compared to a network evaluation. 3 Preprint • Compatibility. The policy should have a compact set of parameters that can be easily predicted by the student Gφ with standard backbones (e.g., DiT (Peebles & Xie, 2023)). • Expressiveness. The policy should be able to approximate a complicated ODE trajectory starting from a certain initial state xtsrc. • Robustness. The policy should be able to handle trajectory variations that arise from perturbations to the initial state xtsrc. For instance, a suboptimal student network will produce an erroneous mapping from xtsrc to π. This introduces the randomness that the policy needs to accommodate throughout the rollout. Consequently, the policy function should adapt its velocity output to variations in its state input xt, which is a challenging requirement for network-free functions. 3.1 DYNAMIC-ˆx(t) 0 POLICY We introduce a simple baseline policy called dynamic-ˆx(t) 0 policy (DX policy). DX policy defines π(xt, t) := xt-ˆx(t) 0 t , where ˆx(t) 0 approximates the posterior moment Ex0∼p(x0|xt)[x0] in Eq. (1). Along a fixed trajectory starting from an initial state (xtsrc, tsrc), the posterior moment is only dependent on t. Therefore, we first predict a grid of ˆx(ti) 0 at N evenly spaced times t1, ..., tN ∈[tdst, tsrc] by a single evaluation of the student network Gφ(xtsrc, tsrc). This is achieved by expanding the output channels of the student network and performing u-to-x0 reparameterization. Then, for arbitrary t ∈[tdst, tsrc], we obtain the approximated moment ˆx(t) 0 by a linear interpolation over the grid. Apparently, DX policy is fast, compatible, and expressive enough so that any N-step teacher trajectory can be matched with N grid points. However, its robustness is limited because ˆx(t) 0 is not adaptive to perturbations in xt. 3.2 GMFLOW POLICY For stronger robustness, we incorporate an advanced GMFlow policy based on the closedform GM velocity field in Chen et al. (2025). GMFlow policy expands the network output channels to predict a factorized Gaussian mixture (GM) velocity distribution q(u|xtsrc) = QL i=1 PK k=1 AikN ui; μik, s2I , where Aik ∈R+, μik ∈RC, s ∈R+ are GM parameters predicted by the network, L × C factorizes the data dimension D into sequence length L and channel size C, and K is a hyperparameter specifying the number of mixture components. This representation enables a closed-form velocity expression at any 0 0 do 6 τdst ←τsrc - 1 NFE // can be adjusted to reduce final step size 7 tdst ← mτdst 1+(m-1)τdst // time shifting (Esser et al., 2024) 8 π ←Gφ(xtsrc, tsrc, c) 9 πD ←stopgrad(π) or πD ←dropout(stopgrad(π)) 10 for finite samples τ ∼U(τdst, τsrc) do 11 t ← mτ 1+(m-1)τ 12 xt ←xtsrc + R t tsrc πD(xt, t) dt 13 Lφ ←Lφ + τsrc-τdst 2 ∥Gθ(xt, t, c) -π(xt, t)∥2 // can be replaced with Eq. (6) 14 xtdst ←xtsrc + R tdst tsrc πD(xt, t) dt 15 τsrc ←τdst, tsrc ←tdst 16 φ ←Adam(φ, ∇φLφ) // optimizer step A USE OF LARGE LANGUAGE MODELS In preparing this manuscript, we used large language models (LLMs) as general-purpose writing assistants for grammar corrections, rephrasing, and clarity/concision edits. All LLM-suggested edits were reviewed and verified by the authors, who take full responsibility for the final manuscript. B ADDITIONAL TECHNICAL DETAILS B.1 GM TEMPERATURE Inspired by the temperature parameter in language models, we introduce a similar temperature parameter for the GMFlow policy during inference. Let T > 0 be the temperature parameter. Given a C-dimensional GM velocity distribution q(u|xtsrc) = PK k=1 AkN u; μk, s2I , the new GM prob15 Preprint Teacher GGθθ step Policy ππ rollout Detached policy ππD rollout Policy ππ xxttsrc xxttdst GGφφ Policy ππ xxttsrc Detached policy ππD stopgrad (+ dropout) xxttdst GGφφ (see Fig. 3) decay (a) Off-policy: teacher ratio=100% (b) Mixture of teacher and detached policy (c) On-policy: teacher ratio=0% decay Figure 8: Three stages of scheduled trajectory mixing. (a) Off-policy behavior cloning with a teacher ratio of 1. (b) Mixed teacher and detached-policy segments with a decaying teacher ratio. (c) Onpolicy imitation learning with a teacher ratio of 0 (Fig. 3). ability with temperature T is defined as: qT (u|xtsrc) := q 1 T (u|xtsrc) R RC q 1 T (u|xtsrc) du . (3) Although qT (u|xtsrc) does not have a general closed-form expression, it can be approximated by the following expression, which works very well as a practical implementation: qT (u|xtsrc) ≈ K X k=1 A 1 T k PK z=1 A 1 Tz N u; μk, s2TI . (4) For the distilled FLUX and Qwen-Image models, we set T = 0.3 for 4-NFE generation and T = 0.7 for 8-NFE generation. An exception is that we do not apply temperature scaling to the final step, as we found this can impair texture details. As shown in Table 6, ablating GM temperature from the 4-NFE GM-FLUX leads to degraded teacher alignment. B.2 SCHEDULED TRAJECTORY MIXING FOR GUIDANCE-DISTILLED TEACHERS To reduce out-of-distribution exposure in imitation learning, scheduled sampling (Bengio et al., 2015) stochastically alternates between expert (teacher) and learner policy during trajectory integration, decaying the expert probability from 1 to 0. However, naively applying it to π-ID is impractical because the teacher flow model Gθ is much slower than the network-free policy πD. To maintain constant compute throughout training, we introduce a scheduled trajectory mixing strategy. Since the teacher is slow, we fix the total number of teacher queries, allow each query to cover a coarse, longer step initially, and gradually shrink the teacher step size while filling the gaps with the fast policy πD. As shown in Fig. 8 (a), training initially adopts a fully off-policy teacher trajectory (behavior cloning). At the beginning time ta of each teacher step, we roll in the learner policy π, integrate it over the same interval from ta to tb, and match its average velocity to the teacher velocity with the l2 loss: Lφ = E " 1 2 Gθ(xta, ta) - 1 tb -ta Z tb ta π(xt, t) dt 2# . (5) As training progresses (Fig. 8 (b)), we then mix teacher and detached-policy segments while using the same loss, and linearly decay the teacher ratio-the sum of teacher step lengths divided by the total interval length tsrc -tdst. Finally, when the teacher ratio reaches 0, training reduces to onpolicy π-ID. All teacher step boundaries (starts and ends) are randomly sampled within the interval [tdst, tsrc] under the teacher ratio constraint, so that step sizes and locations vary while the total teacher-covered length follows the current ratio schedule. We apply scheduled trajectory mixing exclusively when distilling the FLUX.1 dev model, as it lacks real CFG. Since omitting CFG doubles the teacher's speed, we increase the number of intermediate samples (teacher steps) to 4 accordingly. B.3 MICRO-WINDOW VELOCITY MATCHING For on-policy π-ID, in practice we found that replacing the instantaneous velocity matching loss in Algorithm 1 with a modified average velocity loss over a micro time window generally benefits 16 Preprint Table 6: Ablation study on 4-NFE π-Flow (GM-FLUX), evaluated on the HPSv2 prompt set using teacher-referenced FID metrics (reflecting teacher alignment). Method FID↓pFID↓ π-Flow (GM-FLUX) 14.3 19.2 w/o GM temperature 14.9 20.1 w/o micro window 14.6 20.3 A futuristic, sleek sports car with a low, aerodynamic design is shown in motion against a backdrop of a city skyline at sunset. The car features sharp angles, large wheels with orange accents, and a prominent front grille. The cityscape includes tall buildings with illuminated windows, and the sky is painted with hues of orange and blue as the sun sets. The lighting is warm and golden, with the sun setting behind the city, casting a glow over the scene. The car is positioned in the foreground, with the city skyline in the background, creating a sense of depth and movement. FLUX.1 dev 128-NFE FLUX.1 dev 43-NFE Figure 9: The 128-NFE FLUX.1 dev often generates blurry images, whereas the 43-NFE FLUX.1 dev reduces the blur and produces sharper edges. training. The modified loss is defined as: Lφ = E  1 2 Gθ(xt, t) - 1 -∆t Z t-∆t t π(xt, t) dt 2 , (6) where ∆t is the window size. We set ∆t = 3/128 (three policy integration steps) for all FLUX.1 and Qwen-Image experiments. The benefits of micro-window velocity matching are threefold: • It generally smooths the training signal, reducing sensitivity to sharp local variations in the teacher trajectory. • It stabilizes the less robust DX policy. In the ImageNet experiments, we observe that training with the DX policy diverges without this modification. • With ∆t = 3/128, the policy effectively mimics teacher sampling with 128 3 ≈43 steps instead of 128 steps. For the guidance-distilled FLUX.1 dev model, we observe that the teacher often generates blurry images using 128-step sampling, while 43-step sampling yields sharper results (see Fig. 9). This behavior is inherited by the student, so micro-window velocity matching helps reduce blur. As shown in Table 6, ablating the micro window trick from the 4-NFE GM-FLUX leads to degraded teacher alignment. 17 Preprint Table 7: Hyperparameters used in the ImageNet experiments. 1-NFE 2-NFE GM-FM (K = 32) GM-REPA (K = 8) GM-REPA (K = 32) DX-REPA (N = 10) DX-REPA (N = 20) DX-REPA (N = 40) GM-REPA (K = 32) GM dropout 0.05 0.05 0.05 - - - 0.05 # of intermediate states 2 2 2 2 2 2 2 Window size (raw) ∆τ - - - 10/128 5/128 3/128 - Shift m 1.0 1.0 1.0 1.0 1.0 1.0 1.0 Teacher CFG 2.7 3.2 3.2 3.2 3.2 3.2 2.8 Teacher CFG interval t ∈[0, 0.6] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] t ∈[0, 0.7] Learning rate 5e-5 5e-5 5e-5 5e-5 5e-5 5e-5 5e-5 Batch size 4096 4096 4096 4096 4096 4096 4096 # of training iterations in Table 2 140K - 140K - - - 24K EMA param γ in Karras et al. (2024) 7.0 7.0 7.0 7.0 7.0 7.0 7.0 Table 8: Hyperparameters used in FLUX and Qwen-Image experiments. 4-NFE 8-NFE GM-FLUX (K = 8) GM-Qwen (K = 8) DX-FLUX (N = 10) DX-Qwen (N = 10) GM-FLUX (K = 8) GM dropout 0.1 0.1 - - 0.1 GM temperature T 0.3 0.3 - - 0.7 # of intermediate states 4 2 4 2 4 Window size (raw) ∆τ 3/128 3/128 3/128 3/128 3/128 Shift m 3.2 3.2 3.2 3.2 3.2 Final step size scale 0.5 0.5 0.5 0.5 0.5 Teacher CFG 3.5 4.0 3.5 4.0 3.5 Learning rate 1e-4 1e-4 1e-4 1e-4 1e-4 Batch size 256 256 256 256 256 # of training iterations 3K 9K 3K 9K 3K # of decay iterations (§ B.2) 2K - 2K - 2K EMA param γ in Karras et al. (2024) 7.0 7.0 7.0 7.0 7.0 B.4 TIME SAMPLING For high resolution image generation, Esser et al. (2024) proposed a time shifting mechanism to rescale the noise strength. Let τ be the pre-shift raw time and m be the shift hyperparameter, the shifted time is defined as t := mτ 1+(m-1)τ . Following this idea, π-ID samples times uniformly in raw-time space and then applies the shift to remap those samples. Detailed time sampling routines are given in Algorithms 2 and 3. For FLUX.1 and Qwen-Image, we use a fixed shift m = 3.2, which is a rounded approximation of FLUX.1's official dynamic shift at 1MP resolution. In addition, several diffusion/flow models reduce the noise strength at the final step to improve detail (Karras et al., 2022; Wu et al., 2025). Accordingly, for FLUX.1 and Qwen-Image we halve the final step size (relative to previous steps) in raw-time space. C ADDITIONAL IMPLEMENTATION DETAILS AND HYPERPARAMETERS All models are trained with BF16 mixed precision, using the 8-bit Adam optimizer (Kingma & Ba, 2014; Dettmers et al., 2022) without weight decay. For inference, we use EMA weights with 18 Preprint Table 9: FLUX.1 schnell evaluation results on COCO-10k dataset and HPSv2 prompt set. Model Distill method NFE COCO-10k prompts HPSv2 prompts Data align. Prompt align. Pref. align. Prompt align. Pref. align. FID↓pFID↓CLIP↑VQA↑ HPSv2.1↑ CLIP↑VQA↑ HPSv2.1↑ FLUX.1 schnell GAN 4 21.8 29.1 0.274 0.913 0.297 0.297 0.843 0.301 ππ-Flow (GM-FLUX) 4-NFE FLUX.1 schnell 4-NFE The president being abducted by aliens. A puppy is driving a car in a film still. A portrait of a melting skull with intricate abstract details, created by Tooth Wu, Wlop Beeple, and Dan Mumford, using Octane Render. Spiderman character in the game Sea of Thieves. The image depicts a beautiful goddess of spring wearing a wreath and flowy green skirt, created by artist wlop. FLUX.1 dev 50-NFE Figure 10: Typical failure cases of FLUX.1 schnell. For reference, we also show the corresponding FLUX.1 dev and π-Flow results from the same initial noise. a dynamic moment schedule (Karras et al., 2024). Detailed hyperparameter choices are listed in Table 7 and 8. C.1 DISCUSSION ON GMFLOW POLICY HYPERPARAMETERS For the GMFlow policy, we observed that the hyperparameters suggested by Chen et al. (2025) (K = 8, C = VAE latent channel size) generally work well. These parameters play important roles 19 Preprint in balancing compatibility, expresiveness, and robustness. A larger K improves expresiveness but impairs compatibility as it may complicate network training. A larger C improves robustness (since GMFlow models correlations within each C-dimensional chunk) but impairs expresiveness (raises the theoretical K = N ·C bound). In addition, improving expresiveness may generally compromise robustness, due to the increased chance of encountering outlier trajectories during inference. D DISCUSSION ON FLUX.1 SCHNELL The official 4-NFE FLUX.1 schnell model (Black Forest Labs, 2024a) (based on adversarial distillation (Sauer et al., 2024a)) is distilled from the closed-source FLUX.1 pro instead of the publicly available FLUX.1 dev. This makes a direct comparison to the student models in Table 3 inequitable. For reference, nevertheless, we include the COCO-10k and HPSv2 metrics for FLUX.1 schnell in Table 9. These metrics reveal a trade-off: while FLUX.1 schnell achieves significantly better data and prompt alignment than FLUX.1 dev, its preference alignment is substantially weaker than FLUX.1 dev and all of its students. To validate this observation, we conducted a human preference study. Our 4-NFE π-Flow (GMFLUX) was compared against FLUX.1 schnell on 200 images generated from HPSv2 prompts. π-Flow was preferred by users 59.5% of the time, aligning with the HPSv2.1 preference metric. Furthermore, qualitative comparisons in Fig. 10 reveals that FLUX.1 schnell is prone to frequent structural errors (e.g., missing/extra/distorted limbs), whereas π-Flow maintains coherent structures. E PROOF OF THEOREM 1 We will prove that a GM with N · C components suffices for approximating any N-step trajectory in RC by first establishing Theorem 2, and then applying the Richter-Tchakaloff theorem to show that a mixture of N · C Dirac deltas satisfy all ODE moment equations, which finally leads to N · C Gaussian components. Theorem 2. Given pairwise distinct times t1, . . . , tN ∈(0, 1] and vectors xtn, ̇xtn ∈RC for n = 1, . . . , N, there exists a probability measure p(dx0) on RC, such that Eq (1) holds at t = tn for every n = 1, . . . , N. E.1 MOMENT EQUATION For every t ∈(0, 1], the ODE moment equation has the following equivalent forms: ̇xt = Z RC xt -x0 t p(dx0|xt) ⇔ ̇xt Z RC p(dx0|xt) = Z RC xt -x0 t p(dx0|xt) ⇔ Z RC x0 -xt + t ̇xt t p(dx0|xt) = 0 ⇔ Z RC (x0 -xt + t ̇xt)N xt; αtx0, σ2 t I p(dx0) tp(xt) = 0 ⇔ Z RC(x0 -xt + t ̇xt)N xt; αtx0, σ2 t I p(dx0) = 0. (7) Let g(t, x0) := (x0 -xt + t ̇xt)N xt; αtx0, σ2 t I be a kernel function. The above equation can be written as a multivariate homogeneous Fredholm integral equation of the first kind: Z RC g(t, x0)p(dx0) = 0. (8) To prove Theorem 2, we need to show that there exists a probability measure p(dx0) on RC that solves the Fredholm equation at t = tn for every n = 1, · · · , N. 20 Preprint E.2 UNIVARIATE MOMENT EQUATION To prove the existence of a solution to the multivariate Fredholm equation, we can simplify the proof into a univariate case by showing that an element-wise probability factorization p(dx0) = QC i=1 p(dxi0) exists that solves the Fredholm equation. In this case, Eq. (7) can be written as: ∀i = 1, 2, · · · , C, Z R (xi0 -xit + t ̇xit)N xit; αtxi0, σ2 t p(dxi0) Y j̸=i Z R N xjt; αtxj0, σ2 t p(dxj0) = 0 ⇔ ∀i = 1, 2, · · · , C, Z R (xi0 -xit + t ̇xit)N xit; αtxi0, σ2 t p(dxi0) = 0. (9) To see this, we need to prove that there exists a probability measure p(x0) on R that solves the following univariate Fredholm equation at t = tn for every n = 1, · · · , N: Z R g(t, x0)p(dx0) = 0, (10) where g(t, x0) := (x0 -xt + t ̇xt)N xt; αtx0, σ2 t is the univariate kernel function. E.3 CONVEX COMBINATION Lemma 1. Define the vector function: γ : R →RN, γ(x0) = (g(t1, x0), g(t2, x0), · · · , g(tN, x0)). (11) Then, the zero vector lies in the convex hull in RN, i.e.: 0 ∈conv{ γ(x0) | x0 ∈R } ∈RN. (12) Proof. Define S := conv{ γ(x0) | x0 ∈R }. Assume for the sake of contradiction that 0 /∈S. By the supporting and separating hyperplane theorem, there exists w ̸= 0 ∈RN, such that: ∀χ ∈S, ⟨w, χ⟩≤0. (13) In particular, this implies that: ∀x0 ∈R, ⟨w, γ(x0)⟩≤0. (14) Define h(x0) := ⟨w, γ(x0)⟩= PN n=1 wng(tn, x0). Recall the definition of g(t, x0) that: g(t, x0) = (x0 -xt + t ̇xt)N xt; αtx0, σ2 t = x0 -xt + t ̇xt √ 2πt2 exp -(xt -αtx0)2 2σ2 t . (15) Let n∗be an index with wn∗̸= 0 for which the exponential term above decays the slowest, i.e.: α2 tn∗ 2σ2 tn∗ = min α2 tn 2σ2 tn wn∗̸= 0 . (16) Note that since α2 t 2σ2 t is monotonic, for every n ̸= n∗with wn ̸= 0, we have α2 tn 2σ2 tn > α2 tn∗ 2σ2 tn∗. Therefore, as |x0| →∞, h(x0) is dominated by the n∗-th component, i.e.: h(x0) = wn∗x0 -xtn∗+ tn∗ ̇xtn∗ p 2πt2 n∗ exp -(xtn∗-αtn∗x0)2 2σ2 tn∗ (1 + O(1)). (17) Because the term x0-xtn∗+tn∗ ̇xtn∗changes sign between -∞and +∞, h(x0) takes both positive and negative values. This contradicts the hyperplane implication that h(x0) ≤0. Therefore, we conclude that 0 ∈S. By Lemma 1 and Carath ́eodory's theorem, the zero vector can be expressed as a convex combination of at most N +1 points on γ(x0). Therefore, there exists a finite-support probability measure p(dx0) consisting of N + 1 Dirac delta components that solves the univariate Fredholm equation at t = tn for every n = 1, · · · , N, completing the proof of Theorem 2. 21 Preprint E.4 N · C COMPONENTS SUFFICE Richter's extension to Tchakaloff's theorem states as follows. Theorem 3 (Richter (1957); Tchakaloff (1957)). Let V be a finite-dimensional space of measurable functions on RC. For some probability measure p(dx0) on RC, define the moment functional: Λ: V →R, Λ[g] := Z RC g(x0)p(dx0). (18) Then there exists a K-atomic measure p∗(dx0) = PK k=1 Akδμk(dx0) with Ak > 0 and K ≤dim V such that: ∀g ∈V, Λ[g] = Z RC g(x0)p∗(dx0) = K X k=1 Akg(μk). (19) By Theorem 2, we know that for V = span{ gi(tn, x0) | i = 1, · · · , C, n = 1, · · · , N } with the scalar function gi(tn, x0) := (xi0-xit+t ̇xit)N xt; αtx0, σ2 t I , there exists a probability measure p(dx0) such that R RD gi(tn, x0)p(dx0) = 0 for every i, n. Then, by the Richter-Tchakaloff theorem, there also exists a K-atomic measure with K ≤dim V ≤N · C that satisfies all the moment equations. By taking the upper bound, this implies the existence of an N · C-atomic probability measure p∗(dx0) = PN·C k=1 Akδμk(dx0) with Ak > 0, PN·C k=1 Ak = 1 that solves the Fredholm equation (Eq. (8)) at t = tn for every n = 1, · · · , N. Finally, since N xt; αtx0, σ2 t I is continuous, the N · C Dirac deltas in p∗(dx0) can be replaced by a mixture of N · C narrow Gaussians, such that ̇xn is approximated arbitrarily well for every n, i.e.: ∀n = 1, · · · , N, lim s→0 xtn - R RD x0p(dx0|xtn) tn p(dx0)=PN·C k=1 AkN (dx0;μk,s2I) = xtn - R RD x0p(dx0|xtn) tn p(dx0)=PN·C k=1 Akδμk (dx0) = ̇xtn (20) This completes the proof of Theorem 1. F DERIVATION OF CLOSED-FORM GMFLOW VELOCITY In this section, we provide details regarding the derivation of closed-form GMFlow velocity, which was originally presented by Chen et al. (2025) but not covered in detail. Given the u-based GM prediction q(u|xtsrc) = PK k=1 AkN u; μk, s2I with Ak ∈R+, μk ∈RC, s ∈R+, we first convert it into the x0-based parameterization by substituting u = xtsrc-x0 σtsrc into the density function, which yields: q(x0|xtsrc) = K X k=1 AkN x0; μxk, s2 xI , (21) with the new parameters μxk = xtsrc -σtsrcμk and sx = σtsrcs. Then, for any t < tsrc and any xt ∈RC, the denoising posterior at (xt, t) is given by: q(x0|xt) = p(xt|x0) Z · p(xtsrc|x0)q(x0|xtsrc), (22) 22 Preprint where Z is a normalization factor dependent on xtsrc, xt, tsrc, t. Using the definition of forward diffusion p(xtsrc|x0) = N xtsrc; αtsrcx0, σ2 tsrcI , we have: q(x0|xt) = N xt; αtx0, σ2 t I Z · N xtsrc; αtsrcx0, σ2 tsrcI q(x0|xtsrc) = N x0; 1 αt xt, σ2 t α2 t I Z′ · N x0; 1 αtsrc xtsrc, σ2 tsrc α2 tsrc I q(x0|xtsrc) = 1 Z′′ N x0; ν ζ , σ2 tsrcσ2 t ζ I q(x0|xtsrc), where ν = σ2 tsrcαtxt -σ2 t αtsrcxtsrc, ζ = σ2 tsrcα2 t -σ2 t α2 tsrc. The result can be further simplified into a new GM: q(x0|xt) = K X k=1 A′ kN x0; μ′ k, s′2I , (23) with the following parameters: s′2 = s2 xσ2 tsrcσ2 t s2xζ + σ2 tsrcσ2 t (24) μ′ k = s2 xν + σ2 tsrcσ2 t μxk s2xζ + σ2 tsrcσ2 t (25) A′ k = exp a′ k PK k=1 exp a′ k , (26) where the new logit a′ k is given by: a′ k = log Ak -1 2 ∥ν -ζμxk∥2 ζσ2 tsrcσ2 t + ζ2s2x . (27) Finally, the closed-form GMFlow velocity at (xt, t) is given by function π: π: RC × R →RC, π(xt, t) = xt -Ex0∼q(x0|xt)[x0] t = xt -PK k=1 A′ kμ′ k t . (28) Extension to discrete support. The closed-form GMFlow velocity can also be generalized to discrete support by taking limsx→0 π(xt, t), which yields the simplified parameters: μ′ k = μxk (29) a′ k = log Ak -1 2 ∥ν -ζμxk∥2 ζσ2 tsrcσ2 t . (30) G ADDITIONAL QUALITATIVE RESULTS. We show additional uncurated results of FLUX-based models in Fig. 11 and 12. 23 Preprint FLUX.1 dev 50-NFE ππ-Flow (GM-FLUX) 8-NFE FLUX Turbo 8-NFE Hyper-FLUX 8-NFE ππ-Flow (GM-FLUX) 4-NFE SenseFlow (FLUX) 4-NFE This picture follows the abstract expressionism visual style. A character with short pink hair, blue eyes, wearing a black hoodie, blue neck gaiter, black shorts, and sneakers, stands in a busy street. Vibrant shop signs, lanterns, traffic cones, a fire hydrant, and sunlight create a dynamic atmosphere. A battle-hardened male warrior stands tall, his blood-drenched face and determined eyes reflecting fierce resolve. He grips an intricately decorated sword, its hilt gleaming in the dim light. The Baroque style fills the scene, with dramatic lighting casting deep shadows on his worn features and intricate armor textures. Please recreate the extravagant apartment's living room, overlooking the Mediterranean from the desert. The striking surroundings feature succulents and cacti. The furniture is crocheted, including the sofa, walls, curtains, carpet, tables, and wall art, with a desert wildlife view visible through the window, also crocheted. The bed features a modern, minimalist design with a clean look and low profile. Its headboard is subtly curved and remains within the bed's width without wrapping around the sides, maintaining moderate height for functionality and comfort. Upholstered entirely in textured neutral fabric like light grey or beige, the frame includes headboard, side rails, and footboard without tufting or embellishments. The mattress fits snugly with no visible gap between it and the frame. Short, dark-colored legs subtly contrast with the light upholstery. Bedding consists of neatly arranged white fitted sheets, flat sheets, and pillows. A simple, light-colored wall serves as the neutral background, ensuring the bed remains the image's focal point. Stunning woman in misty sunlit field, delicate lace rose gold dress flowing in breeze, gazing over shoulder, hair adorned with sunflowers, sunlight highlighting her melancholy face, sorrowful expression, shimmering eyes, leaves swirling around her, overcast sky with lively atmosphere, intricate dress details, realistic style. Please craft an image of a genuine individual. A young woman, dressed in an authentic cowboy outfit, stands confidently next to a weathered barn. The scene is evocatively captured in sepia tones. In the distance, a vast field stretches out, dotted with tall grasses and dried hay bales, while a vibrant blue sky peeks in from the corner. In a captivating style reminiscent of animated storytelling, behind prison bars stands a powerful woman, eyes glowing red and intense, with a prominent scar. Figure 11: An uncurated random batch from the OneIG-Bench prompt set, part A. 24 Preprint FLUX.1 dev 50-NFE ππ-Flow (GM-FLUX) 8-NFE FLUX Turbo 8-NFE Hyper-FLUX 8-NFE ππ-Flow (GM-FLUX) 4-NFE SenseFlow (FLUX) 4-NFE A row of blue buses idles beneath a metal canopy marked "GATE 12 - INTERCITY EXPRESS" at a busy bus terminal. An LED screen scrolls "NEXT DEPARTURE: 15:30." Travelers sip coffee from paper cups, while a nearby newsstand labeled "CITY NEWS" displays headlines and postcards of local landmarks. Could you explore the differences between nuclear fission and fusion by designing a comparison chart?Tips: Energy release, reaction process, atomic nuclei Could you show us the educational diagram of nanomaterial structures? Tips: nanomaterials, structure Green tea ice cubes float in milk, each with a distinct ink pattern. The surface appears smooth and glass-like, enhancing the realistic feel and inviting a tactile experience. The entire scene is depicted in 3d rendering. Create an image with lifelike depiction of people, featuring a mid-thirties salaryman with thinning jet black hair. He stands confidently with hands on his hips, showcasing a crisp blue suit against the dynamic backdrop of a vibrant Tokyo cityscape. Beneath a starry sky stretching over a serene desert landscape, the poster title "Celestial Wonders: Stargazing Nights" glows softly in silver. Midway down, text invites "Discover the universe through guided telescope tours and expert talks." Footer information provides "Every Saturday evening at Mirage Dunes Observatory. Tickets available online." Young woman in wedding attire, ivory skirt, white block heel sandals, delicate train, snug-fitting garment creating stunning silhouette, white mesh top with beadwork and intricate maple leaf pattern, realistic portrayal. Figure 12: An uncurated random batch from the OneIG-Bench prompt set, part B. 25
2510.14965
CHANGINGGROUNDING: 3D VISUAL GROUNDING IN CHANGING SCENES Miao Hu1, Zhiwei Huang2, Tai Wang4, Jiangmiao Pang4, Dahua Lin3,4, Nanning Zheng1B, Runsen Xu3,4B 1Xi’an Jiaotong University, 2Zhejiang University, 3The Chinese University of Hong Kong, 4Shanghai AI Laboratory ABSTRACT Real-world robots localize objects from natural-language instructions while scenes around them keep changing. Yet most of the existing 3D visual ground- ing (3DVG) method still assumes a reconstructed and up-to-date point cloud, an assumption that forces costly re-scans and hinders deployment. We argue that 3DVG should be formulated as an active, memory-driven problem, and we in- troduce ChangingGrounding, the first benchmark that explicitly measures how well an agent can exploit past observations, explore only where needed, and still deliver precise 3D boxes in changing scenes. To set a strong reference point, we also propose Mem-ChangingGrounder, a zero-shot method for this task that marries cross-modal retrieval with lightweight multi-view fusion: it identi- fies the object type implied by the query, retrieves relevant memories to guide actions, then explores the target efficiently in the scene, falls back when pre- vious operations are invalid, performs multi-view scanning of the target, and projects the fused evidence from multi-view scans to get accurate object bound- ing boxes. We evaluate different baselines on ChangingGrounding, and our Mem- ChangingGrounder achieves the highest localization accuracy while greatly reduc- ing exploration cost. We hope this benchmark and method catalyze a shift toward practical, memory-centric 3DVG research for real-world applications. Project page: https://hm123450.github.io/CGB/ . 1 INTRODUCTION 3D Visual Grounding (3DVG) is a critical technology that enables precise localization of target ob- jects in 3D scenes through natural language instructions, with broad applications in service robotics (Gonzalez-Aguirre et al., 2021), computer-aided room design (Sipe & Casasent, 2003; Ganin et al., 2021), and human-machine interaction (Aggarwal, 2004; Li et al., 2020). Current methodologies and benchmarks (Achlioptas et al., 2020; Chen et al., 2020) predominantly operate under static scene assumptions, where pre-reconstructed full scene point clouds (Qi et al., 2017) and textual queries (Radford et al., 2021) are fed into end-to-end models to predict 3D bounding boxes (Jain et al., 2022; Wu et al., 2023; Luo et al., 2022; Shi et al., 2024; Guo et al., 2025) as shown in Figure 1. However, these approaches face significant limitations when deployed in real-world robotic sys- tems: practical environments are inherently dynamic (e.g., furniture rearrangement, object occlu- sion/replacement), so robots have to rescan the entire scenes to reconstruct complete point clouds every time, which is very costly(Sch¨onberger & Frahm, 2016); otherwise, the robots don’t even know whether and where the scenes have changed. In contrast, humans searching in changing envi- ronments quickly draw on memories of past scenes to pinpoint likely target areas and can complete object localization through only a few new observations. Inspired by this insight, we contend that a new memory-based paradigm for real-world 3D visual grounding is needed. To the best of our knowledge, no existing work has explored 3D visual grounding in changing scenes by using memory from past observations. In this paper, we formally define this task and introduce a novel benchmark, the ChangingGrounding benchmark, as follows (shown in Figure 1): given the memory of the previous scene, the unexplored current scene, and a query describing the target object in the current scene, the robot needs to predict the target’s 3D bounding box in the current scene. 1 arXiv:2510.14965v1 [cs.CV] 16 Oct 2025 Previous Setting Full Search Pointcloud Reconstruction End-to-End Model Target Robot Evaluation ChangingGrounding Accuracy Cost Not found. The lamp has moved. Then turn round? Accuracy Action Cost Motion Cost Evaluation ... Memory Database Previous Scene Query: Lamp on the table. Memory Retrieval I remember the lamp was here before. Current Scene Exploration ...... Go look at the place of memory first. Not found. Turn right next? Found lamp matching the query! Task Finished! Stop running. Query: The cushion closest to the office chair. Figure 1: Comparison between the previous setting of 3DVG and the ChangingGrounding task. The key motivation of the task and the benchmark is to measure how a 3D visual grounding system accurately and efficiently finds the target object by leveraging the memory of past observations and exploring the current scene. So we evaluate task performance using two key metrics: the accuracy of the predicted 3D bounding box and the cost for scene exploration. A better system achieves higher accuracy while keeping the lower cost. To support the task, we construct our novel dataset and benchmark, based on the 3RScan dataset (Wald et al., 2019), supported by a novel exploration and rendering pipeline to simulate how real-world robots perform 3D visual grounding. In addition to our benchmark and dataset, we propose a novel framework called Mem- ChangingGrounder to address this new task. As current end-to-end approaches are not designed for memory access and scene agent exploration, our method is based on a zero-shot agent-based approach (Xu et al., 2024a). Specifically, Mem-ChangingGrounder first classifies user queries, then retrieves relevant memories to guide its action policy, and then explores for the target images in the scene based on this policy, next ensures fallback localization if no valid target images are found, and finally performs scanning of the target and predicts 3D localization through multi-view projection. We introduce three additional baseline methods and compare them with our proposed Mem- ChangingGrounder on the ChangingGrounding benchmark. The three baselines simulate different grounding policies: (i) Wandering Grounding: aimless exploration, (ii) Central Rotation Grounding: simple rotation, and (iii) Memory-Only Grounding: memory-only with no exploration. Experimen- tal results show that Mem-ChangingGrounder achieves the highest grounding accuracy among other baseline methods while maintaining a relatively low exploration cost, demonstrating a superior bal- ance between accuracy and efficiency, and the effectiveness of our proposed policy. 2 RELATED WORK 3D Visual Grounding Benchmarks and Methods. 3D visual grounding aims to locate target objects from natural language queries. Early work focused on matching objects with shape descrip- tions (Achlioptas et al., 2019; Prabhudesai et al., 2020). ScanRefer (Chen et al., 2020) and ReferIt3D (Achlioptas et al., 2020) extended this to scene-level benchmarks using ScanNet (Dai et al., 2017). ScanRefer predicts full 3D bounding boxes, while ReferIt3D identifies the correct object from given candidates. Later datasets expanded the setting: Multi3DRefer (Zhang et al., 2023) supports ground- ing multiple objects, and ScanReason (Zhu et al., 2024a) uses complex human instructions. These benchmarks are closer to real needs but ignore temporal changes in scenes. Methods for 3D visual grounding include supervised and zero-shot approaches. Supervised models (Guo et al., 2025; Qian et al., 2024; Wu et al., 2023; Jain et al., 2022; Luo et al., 2022; Shi et al., 2024) rely on annotated datasets, combining a detection branch for 3D objects and a language branch for text encoding. They achieve strong results but are limited by scarce annotations. Zero-shot methods, on the other hand, use LLMs (Touvron et al., 2023; Devlin et al., 2018; Brown et al., 2020; OpenAI, 2023a) 2 Dataset Generation Pipeline Changing Previous Scene Current Scene (1)Horizontal Proximity: Choose the table that is far from the sofa. (2)Vertical Proximity: Choose the picture frame that is above the sofa. (3)Between: The sofa chair that is between the sofa and the side table. (4)Support: Select the table that is on the top of the rug. : Anchor Object : Target Object Memory Images Dataset = Memory Information and Spatial Relation Descriptions Figure 2: ChangingGrounding Dataset generation pipeline. and VLMs (OpenAI, 2023b; Chen et al., 2024; Liu et al., 2023; Xu et al., 2024b) to overcome this issue. Some reformulate grounding as a text problem or use LLM-generated scripts (Yang et al., 2024; Yuan et al., 2024; Fang et al., 2024). VLM-Grounder (Xu et al., 2024a) grounds objects through images instead of point clouds, and SeeGround (Li et al., 2025) selects viewpoints to render scenes for VLM input. These advances improve scalability, but none address grounding in dynamic scenes. Since VLM-Grounder does not require full point cloud input, it provides a practical basis for changing-scene grounding, and we extend our method from this framework. 3D Perception in Changing Scenes. Early work (Fehr et al., 2017) built a small dynamic-scene dataset for 3D reconstruction, but it lacked annotations. InteriorNet (Li et al., 2018) later provided a large synthetic dataset with object and lighting changes. 3RScan (Wald et al., 2019) pioneered the creation of a large-scale real-world indoor RGB-D dataset, encompassing scans of the same indoor environment at different time points, and introduced the task of 3D object instance relocalization, which involves relocating object instances within changing indoor scenes. Many studies followed, such as camera relocalization in changing indoor environments (Wald et al., 2020), changing detec- tion (Adam et al., 2022), changing environment reconstruction (Zhu et al., 2024b), and changing prediction (Looper et al., 2022). Besides, Hypo3D (Mao et al., 2025) conducts a 3D VQA bench- mark to evaluate models’ ability in changing scenes based on 3RScan. Notably, our work represents the first exploration of 3D visual grounding tasks in changing environments. The 3RScan dataset provides scene scans at different time steps, as well as the coordinate system transformations be- tween scenes and the correspondences of objects. We construct our novel 3D visual grounding dataset based on these annotations. 3 CHANGINGGROUNDING In this section, we first formulate the ChangingGrounding task, then establish the evaluation metrics, and finally detail the dataset collection pipeline along with a statistical analysis. 3.1 TASK FORMULATION Consider a robot that observed a room yesterday and acquired its scene information. When revisiting the room today, objects may have been rearranged. The robot must locate a target object described by a user query. A naive solution is to explore the whole room and then apply standard 3DVG methods, but this is inefficient. Inspired by human memory, we propose enabling robots to use past observations for more efficient and accurate grounding. The task is defined as ⟨Sp, Sc, Mp, Dc⟩→B, where B is the predicted 3D bounding box of the target. Sp is the previous scene. Sc is the current scene with unknown changes. Mp is the memory of Sp, including RGB-D images and poses. Dc is a text description of the target object. The robot must ground the target object in Sc using Mp and Dc. Because the task requires both efficient and precise grounding, we will evaluate this task by two key metrics: accuracy and exploration cost. 3 Table 1: Comparison of datasets. VG: Visual Grounding. CVG: Changing Visual Grounding Dataset Task Prompts Size Build on Dynamic Support Nr3D (Achlioptas et al., 2020) VG 42K Static dataset No Sr3D (Achlioptas et al., 2020) VG 84K Static dataset No ScanRefer (Chen et al., 2020) VG 52K Static dataset No ViGiL3D (Wang et al., 2025) VG 0.35K Static dataset No Multi3DRefer (Zhang et al., 2023) VG 62K Static dataset No ScanReason (Zhu et al., 2024a) VG 10K Static dataset No ChangingGrounding (ours) CVG 267K Changing dataset Yes For research simplicity, we also set several task assumptions as follows. Zero-cost Memory Access. The memory information Mp for the previous scene Sp is stored in the robot’s database and can be accessed at any time without incurring additional cost. Standardized Scene Coordinate System. Each 3D scene has a standardized coordinate system Ts. For different temporal scene states of the same physical space, their standardized coordinate systems are aligned to one global coordinate system. Robot’s Initial Pose. We adopt the OpenCV right-handed camera coordinate convention and apply it to all poses. For convenience, we assume that in each scene, the robot is initially positioned at the origin of Ts and its initial orientation is obtained by transforming Ts so that the axes satisfy the OpenCV convention Exploration. For the new scene Sc, the robot needs to explore to obtain relevant information about the scene. Therefore, the acquisition of information about Sc will involve certain costs. The cost includes action cost Ca and motion cost Cm (details in Section 3.2). New Observations. We assume the robot is equipped with an RGB-D camera, and it can move to achieve new positions and orientations (new poses). At the new pose, the robot can obtain a new observation. To fulfill this assumption, we developed a rendering module. The rendering module takes the mesh file of a scene and the desired new pose as inputs and outputs the RGB- D image observed from the new pose within the scene (an RGB-D image formulated as (I, D) = Rendering(Mesh, Pose)). 3.2 EVALUATION METRICS The evaluation uses two metrics: localization accuracy and exploration cost. Localization accuracy follows standard 3DVG evaluation and is measured by the ratio of samples whose predicted 3D bounding box overlaps the ground-truth box above a threshold (e.g., Acc@0.25). The exploration cost includes action cost Ca and motion cost Cm. Ca counts the number of actions taken until the target is localized. Each action means the robot moves to a new pose and captures a new observation. Action cost alone may be insufficient, since a single action can involve a large movement. We therefore also measure motion cost. Motion cost considers both translation and rotation. To compare them on the same scale, we con- vert to time using nominal speeds: translation v = 0.5 m/s, rotation ω = 1 rad/s. Given poses {(t1, R1), . . . , (tn, Rn)}, with n = Ca, the costs are: Ctrans = 1 v Pn−1 i=1 ∥ti+1 −ti∥, Crot = 1 ω Pn−1 i=1 arccos Tr(R⊤ i Ri+1)−1 2  , Cm = Ctrans + Crot. It’s noted that when calculating Ctrans, we only consider cost on the horizontal plane. The rotation term uses the well-known trace formula θ = arccos (Tr(R⊤) −1)/2  , which gives the rotation angle θ of a rotation matrix. By summing these angles and dividing by the nominal rotational speed ω, we obtain the rotation time. 3.3 DATASET AND BENCHMARK CONSTRUCTION We constructed the ChangingGrounding dataset to support the proposed task. It contains: (1) spatial relation descriptions of target objects as user queries; (2) original RGB-D images of each scene with 4 camera poses as memory information; (3) a mesh file for generating new observations. We base our dataset on 3RScan, which has 1,482 snapshots from 478 indoor environments, providing transfor- mations between scans for alignment, dense instance-level annotations, and object correspondences across scans. These properties allow us to align scenes, re-render them, and construct cases where objects are moved. The dataset is built in two steps. As shown in Figure 2, first, we generate spatial relation descriptions following ReferIt3D (Achlioptas et al., 2020). Second, we process 3RScan data to obtain re-rendered images and store them as memory information. Spatial Relation Descriptions. We use the template 〈Target Category〉〈Spatial Relation〉〈Anchor Category〉, such as “the chair farthest from the cabinet.” The anchor category differs from the target. We select 209 fine-grained categories from 3RScan, including those appearing in at least four scenes and those marked as rigid-move. A target is valid if it belongs to these categories and has at most six distractors of the same class. Anchor categories include these 209 classes plus 24 others. ReferIt3D defines five spatial relations (Horizontal Proximity, Vertical Proximity, Between, Allocentric, and Support), but we exclude Allocentric since 3RScan lacks front-orientation annotations. The detailed rationale for category filtering and the construction feasibility are provided in Appendix G. The set of spatial relation descriptions is provided in the supplementary material. 3RScan Processing. We align scans of the same scene to a global coordinate system. The initial scan is taken as the reference, and we calculate its coordinate system first. Then, transformations between the reference and other scans are applied to align all other scans to the coordinate system. For re-rendering, we adopt the ScanNet (Dai et al., 2017) camera model (1296 × 968 resolution with intrinsics (1169.6, 1167.1, 646.3, 489.9)) and use our rendering module to standardize RGB-D images as memory frames. Statistics. We compared the ChangingGrounding dataset with existing datasets in Table 1. Our ChangingGrounding is the largest and the only one built on changing environments. It introduces the new task of changing visual grounding, along with its formulation, baselines, and evaluation protocol. More details and some visual examples are presented in Appendix G and O.6. 4 MEM-CHANGINGGROUNDER (MCG) In this section, we introduce Mem-ChangingGrounder (MCG), a zero-shot framework for 3D visual grounding in changing scenes. MCG takes a query Dc in the current scene Sc and predicts the 3D bounding box of the target object Ot, using memory Mp of the previous scene represented as RGB-D images {Ip} and camera poses {pp}. As shown in Figure 3, MCG has two action policies within four core modules: Query Classification, Memory Retrieval and Grounding, Fallback, and Multi-view Projection. The workflow is to first classify the query and select the path for retrieval and grounding. MCG then explores the current scene with action policies to locate the target. If this fails, the fallback module estimates the target. Finally, multi-view information is fused for accurate grounding. Because MCG builds on the VLM-Grounder (Xu et al., 2024a) framework, we will first introduce this framework (Section 4.1) and then present MCG’s four key modules. 4.1 PRELIMINARY OF VLM-GROUNDER VLM-Grounder is a zero-shot 3D visual grounding method that localizes target objects using 2D im- ages and natural language. The pipeline is: from the current scene image sequence {Ic}, all images containing the target category are detected to form {Ic}det; then a VLM (OpenAI, 2025a) analyzes the query and stitched {Ic}det to find the target image; next, an open-vocabulary detector (Liu et al., 2024) proposes objects in the image, and the VLM selects the correct one; finally, a multi-view projection module fuses multiple viewpoints to estimate the 3D bounding box. 4.2 ACTION POLICY Before presenting the core modules of MCG, we briefly describe two action policies frequently employed in MCG to explore the new scene and find the target object, which are the Omnidirectional Scene Scanner (OSS) and the Spatial Relation Aware Scanner (SRAS). We give the basic explanation here, while the complete derivation is in Appendix H. 5 Omnidirectional Scene Scanner (OSS) Target Image Omnidirectional Imaging Query: Microwave on the table. Camera Pose Grounding Render Spatial Relation Aware Scanner (SRAS) Target Image Spatial Relation Analysis Up Camera Pose Query: Microwave on the table. Grounding Multi-view projection 3D BBox Target: Microwave + SAM Reference Mask Project & Calculater 3D Center 3D Center Imaging around the Center Render ... Select Views Select 2D Box&SAM Render Classify Query Memory Dataset Pre-selection& Stitching Query Stitched Images Classification Results Unverifiable Query: Choose the microwave that is nearest the chair. Find Anchor Target Image Anchor Anchor SRAS OSS Grounding :Not Found :Found Verifiable Query: Choose the microwave that is on the table. Find Target & Anchor Target Image Anchor SRAS OSS :Not Found :Found Target Anchor Target Anchor SRAS Multi-view projection Target 3D Bounding Box Fallback OSS Target: Microwave Valid? Target Image Grounding Detect Figure 3: Workflow of Mem-ChangingGrounder (MCG). The upper part shows the overall pipeline: MCG classifies queries, retrieves memory, uses OSS and SRAS to search, applies fallback when needed, and predicts the 3D bounding box through multi-view projection. The lower part shows details of OSS, SRAS, and Multi-view Projection. Omnidirectional Scene Scanner. The OSS module is a set of robot agent actions when it needs to locate an anchor object or a target object. As shown in the bottom leftmost figure of Figure 3, from a given pose, the agent performs a 360° scan by rotating around the gravity axis, ensuring a full exploration of the surroundings. The collected Images from all poses are then collected, indexed, dynamically stitched, and then sent to the VLM, which selects the one that best matches the query. Spatial Relation Aware Scanner. The SRAS module defines agent actions when the agent needs to find the target after locating the anchor. It generates observations from the anchor pose using the spatial relation between anchor and target, and applies VLM (OpenAI, 2025a) to identify the target image. As shown in Figure 3, given anchor pose pa and query Dc, the agent first uses VLM to infer the relative position of target Ot to anchor Oa. Based on this relation, the agent adjusts pa, collects images, stitches them, and inputs them with Dc into VLM to predict the target. New poses are generated for different spatial relation categories. 4.3 DETAILS OF MCG Query Classification. From a memory perspective, user queries are either verifiable or unverifiable. For unverifiable queries, even if the target and anchor stay still, a target found in memory may not match in the current scene. For example, “the chair farthest from the table” may point to a different chair when a new one appears farther away in the current scene. Thus, the memory target no longer fits the query. In contrast, if the target and anchor stay static, and the target found in the memory will always match the query in the current scene, this kind of query is verifiable. For example, “the vase on the table” is verifiable. MCG uses the VLM to judge whether a query is verifiable. Memory Retrieval and Grounding. As shown in Figure 3, this module is designed to obtain an initial estimate of the target grounding result by combining memory retrieval and exploration. This module locates the target image in the current scene Sc by integrating memory Mp, user queries Dc, and exploration of Sc. In short, this module will first try to use the memory Mp to locate an anchor object, and then explore the target object based on the spatial relationship between the anchor and the target. This process is carried out with the assistance of the two action policies, SRAS and OSS modules, which give action according to current observations and the spatial relations. Depending on the type of query, the specific grounding procedures differ. This module is carefully designed with a not simple, yet effective logic. We provide detailed explanations in Appendix J. Fallback. The Fallback module will be activated to find an alternative target image if the Memory Retrieval and Grounding module fails to ground an initial estimate of the target object and return the corresponding target image. Specifically, the agent will first retrieve from memory the clearest image that contains an object of the target class. It will then start from the pose of the image and use 6 OSS to perform a 360° search for images containing the target object as an alternative result for the Memory Retrieval and Grounding module. Multi-view Projection. After memory-guided retrieval or fallback identifies the target image, the agent uses VLM (OpenAI, 2025a) to predict its 2D bounding box. The image and box are then fed to SAM (Kirillov et al., 2023) to obtain a mask, which is projected into 3D using depth and intrinsics to form a reference point cloud. Since this single-view cloud is incomplete, the module refines grounding through multi-view target-centered scanning. As shown in Figure 3, the agent circles the center of the reference cloud, collects multi-view observations, selects a 2d bounding box, projects them into 3D, clusters the clouds, and outputs a refined 3D bounding box. The complete calculation procedure is provided in Appendix K. 5 EXPERIMENTAL RESULTS 5.1 EXPERIMENTAL SETTINGS Dataset. Following VLM-Grounder (Xu et al., 2024a) and LLM-Grounder (Yang et al., 2024), we randomly sample 250 validation instances for evaluation. Each instance contains a user query, which is either “Unique” (only one instance of the target class in the scene) or “Multiple” (with distractors of the same class in the scene). The details of sample selection and composition are provided in Appendix O.1. Baselines and Implementation. We evaluate three additional baselines, covering two scenarios: (i) using only exploration without memory. (ii) using only memory without exploration. The three baselines are organized as follows: 1). Wandering Grounding: the original VLM-Grounder(Xu et al., 2024a) approach utilizing all images and poses of scene Sc from 3RScan; 2). Central Rotation Grounding: the VLM-Grounder utilizing images captured through a similar methodology of OSS at the initial pose of Sc; 3). Memory-Only Grounding: the VLM-Grounder utilizing images only from the memory Mp in scene Sp. For experiments, we use GPT-4.1-2025-04-14 (OpenAI, 2025a) as the VLM, with tests in both high-resolution and low-resolution image modes. We set Temperature = 0.1, Top-P = 0.3, max stitched images L = 6, and ensemble images N = 7. The retry limit is set to M = 3 for baselines, but removed in MCG since it employs a different fallback. The 2D detectors include SAM-Huge (Kirillov et al., 2023) and GroundingDINO (Liu et al., 2024). Evaluation Metrics. Accuracy is measured by Acc@0.25 and Acc@0.5, the percentage of samples where the IoU between prediction and ground truth exceeds 0.25 or 0.50. Cost is measured by Ca and Cm (defined in Section 3.2). Details how Ca and Cm are computed for each baseline methods and MCG are in Appendix M 5.2 MAIN RESULTS As shown in Table 2, our Mem-ChangingGrounder (MCG) achieves the best accuracy in both low- and high-resolution settings (29.2% and 36.8%), outperforming all baselines. That clear margin un- derlines the superiority and robustness of our solution for grounding performance across a spectrum of visual qualities. At the same time, our method maintains modest action cost Ca and motion cost Cm, which demonstrates a carefully engineered compromise between effectiveness and efficiency. This is because MCG consults memory before moving and then performs short, targeted actions, avoiding long exploratory loops. Wandering Grounding (WG): In comparison, the WG method achieves the second-highest accu- racy at both resolutions, but its Ca is about five times larger and its Cm is also much higher. The reason is its wide roaming: the robot repeatedly sweeps across the environment. This traversal lets the agent collect more scene information and improve accuracy, but also forces long travel and many actions, which cause a heavy cost. Central Rotation Grounding (CRG): The CRG method keeps the robot at the scene center and performs one full rotation, which removes translation and reduces actions, making the cost very low. However, this single and constrained viewpoint misses occluded objects, height-shifted objects, or complex spatial layouts, so important visual information is lost. As a result, its grounding accuracy is the lowest among all methods. 7 Table 2: Accuracy and exploration cost of three baselines and Mem-ChangingGrounder (ours) on the ChangingGrounding benchmark under high- and low-resolution settings. The two resolution settings are separated by a middle line. Higher accuracy and lower cost indicate better performance. Best accuracy and lowest cost are bolded. Cost is measured in 1K seconds one unit. Overall Unique Multiple Cost ↓ Method Model Res @0.25 @0.50 @0.25 @0.50 @0.25 @0.50 Ca Ctrans Crot Cm Wandering Grounding GPT-4.1 low 24.80 10.80 30.67 10.67 16.00 11.00 44.23 8.73 8.78 17.51 Central Rotation Grounding GPT-4.1 low 16.80 6.00 19.33 9.33 13.00 1.00 18.00 0.00 1.70 1.70 Memory-Only Grounding GPT-4.1 low 20.80 10.00 22.67 10.67 18.00 9.00 0.00 0.00 0.00 0.00 Mem-ChangingGrounder (ours) GPT-4.1 low 29.20 14.80 30.00 15.33 28.00 14.00 8.53 5.73 3.98 9.70 Wandering Grounding GPT-4.1 high 32.40 12.80 38.67 16.00 23.00 8.00 44.23 8.73 8.78 17.51 Central Rotation Grounding GPT-4.1 high 17.20 6.80 18.00 8.00 16.00 5.00 18.00 0.00 1.70 1.70 Memory-Only Grounding GPT-4.1 high 26.00 12.40 26.67 11.33 25.00 14.00 0.00 0.00 0.00 0.00 Mem-ChangingGrounder (ours) GPT-4.1 high 36.80 18.00 42.67 19.33 28.00 16.00 8.47 5.84 3.92 9.76 Table 3: Memory strategy. Memory Acc. Ca Cm w/o. 35.2 31.94 18.60 w. 36.8 8.47 9.76 Table 4: Fallback. Fallback Acc. Ca Cm w/o. 36.4 8.21 9.53 w. 36.8 8.47 9.76 Table 5: Multi-view projection. Strategy Acc. Ca Cm Baseline 22.4 4.81 2.95 +Multi-scan 28.0 8.52 9.72 +filter 36.8 8.47 9.76 Memory-Only Grounding (MOG): The MOG method also has a low cost since it relies only on stored panoramic memories, with one final adjustment after estimating the target. If the memories are complete and the scene unchanged, accuracy can be high. But if the environment changes or the memory has gaps, the lack of verification and correction quickly reduces accuracy, placing this method far behind ours. Overall, our method reaches the highest accuracy while keeping Ca and Cm low, proving that memory-augmented strategies balance efficiency and precision in changing scenes. Memory-Only Grounding and Central Rotation Grounding cut costs but lose accuracy by avoiding exploration or using oversimplified strategies. Wandering Grounding explores more but ignores memory, so it needs many actions and long travel, leading to higher costs and lower accuracy than ours. 5.3 ABLATION STUDIES We validated several design choices in MCG. For the memory strategy, we compared with a memory-free setting where the system follows Wandering Grounding’s pose sequence without mem- ory. As shown in Table 3, both achieve similar accuracy, but the memory-free approach consumes far more costs, confirming the efficiency of memory use. For fallback, we tested the method without this strategy. As shown in Table 4, though accuracy and cost are similar with or without fallback, adding fallback ensures completeness and coverage of edge cases. For multi-view projection, we performed a two-step ablation: first, adding center rotation for multi-view acquisition, then adding outlier removal. As shown in Table 5, each step improved accuracy; although center rotation in- creases cost, it benefits the localization accuracy. For application scenarios where an accurate 3D bounding box is not necessary, the cost of MCG could be further reduced. Finally, for different VLMs, we compared GPT-4o (OpenAI, 2024) and GPT-4.1 (OpenAI, 2025a). As shown in Table 6, costs are similar, but GPT-4.1 yields higher accuracy, indicating that better VLM directly results in better performance. Also, we compare rendered versus real memory images and find that render- ing doesn’t have a significant negative impact on grounding accuracy, as shown in Table 7. More detailed analysis of the rendered-versus-real experiment is in Appendix O.2. 8 Table 6: Different VLMs. Method Acc. Ca Cm GPT-4o 31.6 8.34 8.47 GPT-4.1. 36.8 9.52 9.76 Table 7: Render vs real. Method Acc. Ca Cm w. 28 1.74 2.05 w/o. 24 1.62 1.94 Table 8: 3DVG comparasion. Method Acc. Ca Cm 3D-Vista 33.2 18.00 2.39 MCG (ours) 36.8 8.47 9.76 Table 9: Module success rates (%). Sub-module accuracies of MRGS and MS are separated by bars. Total QAS QCS MRGS MS MRGS VP VCI MS OD SRI MP 36 100 100 56 64 56 70 80 64 89 96 75 Table 10: Inference time (s) of different modules. Total View Preselection VLM Predict Images Detection Predictor Project Points 113.2 11.3 11.7 0.2 0.7 5.4 DISCUSSION ABOUT CRITICAL LIMITATIONS OF 3D VISUAL GROUNDING METHODS Although existing 3D visual grounding methods may rescan the entire scene each time to perform our changing grounding task, the approach is still impractical. In dynamic environments, scenes are constantly changing, and it is unrealistic to perform a full rescan every time an object is moved. Worse yet, the system often does not know whether, when, or where the scene has changed, making it difficult to decide whether a new scan is necessary before grounding. This reliance on complete and repeated reconstruction is inefficient and infeasible in practical applications. Nevertheless, for a more complete comparison, we adapted the 3D-Vista (Zhu et al., 2023) model to the memory- based setting, which is pre-trained on the 3RScan dataset. It should be noted that 3D-Vista requires ground-truth bounding boxes, which makes its performance higher than realistic. We also use a simplified way to calculate cost, which makes the cost lower than it is. As shown in Table 8, our method still outperforms it regarding accuracy and cost. More details are in Appendix O.3. 5.5 SUCCESS RATE AND INFERENCE TIME We randomly sampled 50 examples from the test samples and checked the success rate of every stage of our pipeline. The following defines the criteria for whether a step is successful. (1) Query Analysis Stage (QAS): correctly extracts both the target category and any additional constraints. (2) Query Classification Stage (QCS): assigns the query to the proper categories. (3) Memory Retrieval and Grounding Stage (MRGS): picks a view that contains the target object. (4) Multi-view Stage (MS): The 3D IoU between the predicted box and the ground-truth box is ≥0.25. Specifically, the success rate in the MRGS depends on 2 other modules: (a) View Pre-selection (VP) and (b) VLM Choose Image (VCI). The MS depends on 3 other modules: (a) OV Detection (OD); (b) Select Reference Instance (SRI); (c) Multi-view Projection (MP). These modules’ detailed explanations are in Appendix O.4. As shown in Table 9, the largest sources of error stem from the MRGS and the MS. Detailed failure cases and descriptions are provided in Appendix O.5 and Appendix N. Also, we report the inference time of different modules in Table 10, with more analysis in Appendix O.4. 6 CONCLUSION In this work, we reformulate 3D visual grounding as an active, memory-driven task and intro- duce ChangingGrounding, the first benchmark for changing scenes with cost accounting. We also propose a novel and strong baseline named Mem-ChangingGrounder for this new task. Mem- ChangingGrounder demonstrates that leveraging memory and efficient exploration can raise local- ization accuracy while cutting down grounding costs. We believe our dataset, task, and baselines will motivate and serve as a starting point for future research on 3D visual grounding in changing scenes. More details (e.g., use of LLMs, open problems, etc.) are in the appendix. 9 REFERENCES Panos Achlioptas, Judy Fan, Robert X. D. Hawkins, Noah D. Goodman, and Leonidas J. Guibas. Shapeglot: Learning language for shape differentiation. CoRR, abs/1905.02925, 2019. Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In ECCV, 2020. Aikaterini Adam, Torsten Sattler, Konstantinos Karantzalos, and Tomas Pajdla. Objects can move: 3d change detection by geometric transformation consistency. In European Conference on Com- puter Vision, pp. 108–124. Springer, 2022. Charu C Aggarwal. A human-computer interactive method for projected clustering. IEEE transac- tions on knowledge and data engineering, 16(4):448–460, 2004. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020. Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. In ECCV, 2020. Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, and Jifeng Dai. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In CVPR, 2024. Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Jiading Fang, Xiangshan Tan, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Hongyuan Mei, Rares Ambrus, Gregory Shakhnarovich, and Matthew R Walter. Transcrib3d: 3d referring expression resolution through large language models, 2024. Marius Fehr, Fadri Furrer, Ivan Dryanovski, J¨urgen Sturm, Igor Gilitschenski, Roland Siegwart, and Cesar Cadena. Tsdf-based change detection for consistent long-term dense reconstruction and dynamic object discovery. In 2017 IEEE International Conference on Robotics and automation (ICRA), pp. 5237–5244. IEEE, 2017. Yaroslav Ganin, Sergey Bartunov, Yujia Li, Ethan Keller, and Stefano Saliceti. Computer-aided design as language. Advances in Neural Information Processing Systems, 34:5885–5897, 2021. Juan Angel Gonzalez-Aguirre, Ricardo Osorio-Oliveros, Karen L Rodr´ıguez-Hern´andez, Javier Liz´arraga-Iturralde, Ruben Morales Menendez, Ricardo A Ramirez-Mendoza, Mauricio Adolfo Ramirez-Moreno, and Jorge de Jesus Lozoya-Santos. Service robots: Trends and technology. Applied Sciences, 11(22):10702, 2021. Wenxuan Guo, Xiuwei Xu, Ziwei Wang, Jianjiang Feng, Jie Zhou, and Jiwen Lu. Text-guided sparse voxel pruning for efficient 3d visual grounding. In CVPR, 2025. Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, and Katerina Fragkiadaki. Bottom up top down detection transformers for language grounding in images and point clouds. In ECCV, 2022. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In ICCV, 2023. Rong Li, Shijie Li, Lingdong Kong, Xulei Yang, and Junwei Liang. Seeground: See and ground for zero-shot open-vocabulary 3d visual grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 10 Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, and Stefan Leutenegger. Interiornet: Mega-scale multi-sensor photo-realistic indoor scenes dataset. In British Machine Vision Conference (BMVC), 2018. Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, and Cewu Lu. Detailed 2d-3d joint representation for human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10166–10175, 2020. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, 2023. Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In ECCV, 2024. Samuel Looper, Javier Rodriguez-Puigvert, Roland Siegwart, Cesar Cadena, and Lukas Schmid. 3d vsg: Long-term semantic scene change prediction through 3d variable scene graphs. arXiv preprint arXiv:2209.07896, 2022. Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, and Si Liu. 3d-sps: Single-stage 3d visual grounding via referred point progressive selection. In CVPR, 2022. Ye Mao, Weixun Luo, Junpeng Jing, Anlan Qiu, and Krystian Mikolajczyk. Hypo3d: Exploring hypothetical reasoning in 3d. ICML, 2025. Junjie Ni, Yijin Li, Zhaoyang Huang, Hongsheng Li, Hujun Bao, Zhaopeng Cui, and Guofeng Zhang. Pats: Patch area transportation with subdivision for local feature matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17776–17786, 2023. OpenAI. Gpt-4 technical report. arXiv:2303.08774, 2023a. OpenAI. Gpt-4v. https://openai.com/index/gpt-4v-system-card/, 2023b. OpenAI. Gpt-4o. https://openai.com/index/hello-gpt-4o/, 2024. OpenAI. Gpt-4.1. https://openai.com/index/gpt-4-1/, 2025a. OpenAI. Openai-o1. https://openai.com/o1/, 2025b. Mihir Prabhudesai, Hsiao-Yu Tung, Syed Ashar Javed, Maximilian Sieb, Adam W Harley, and Ka- terina Fragkiadaki. Embodied language grounding with implicit 3d visual feature representations. CVPR, 2020. Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical fea- ture learning on point sets in a metric space. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), NeurIPS, volume 30. Curran Associates, Inc., 2017. Zhipeng Qian, Yiwei Ma, Zhekai Lin, Jiayi Ji, Xiawu Zheng, Xiaoshuai Sun, and Rongrong Ji. Multi-branch collaborative learning network for 3d visual grounding, 2024. URL https:// arxiv.org/abs/2407.05363. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. Johannes Lutz Sch¨onberger and Jan-Michael Frahm. Structure-from-motion revisited. In CVPR, 2016. Xiangxi Shi, Zhonghua Wu, and Stefan Lee. Viewpoint-aware visual grounding in 3d scenes. In CVPR, pp. 14056–14065, 2024. doi: 10.1109/CVPR52733.2024.01333. Michael A. Sipe and David Casasent. Feature space trajectory methods for active computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(12):1634–1643, 2003. 11 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023. Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, Cem Koc, Nate True, Albert Antony, Gokula Santhanam, James Gabriel, Peter Grasch, Oncel Tuzel, et al. Fastvlm: Efficient vision encoding for vision language models. In Proceedings of the Computer Vision and Pattern Recog- nition Conference, pp. 19769–19780, 2025. Johanna Wald, Armen Avetisyan, Nassir Navab, Federico Tombari, and Matthias Nießner. Rio: 3d object instance re-localization in changing indoor environments. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7658–7667, 2019. Johanna Wald, Torsten Sattler, Stuart Golodetz, Tommaso Cavallari, and Federico Tombari. Be- yond controlled environments: 3d camera re-localization in changing indoor scenes. In European Conference on Computer Vision, pp. 467–487. Springer, 2020. Austin T Wang, ZeMing Gong, and Angel X Chang. Vigil3d: A linguistically diverse dataset for 3d visual grounding. arXiv preprint arXiv:2501.01366, 2025. Yanmin Wu, Xinhua Cheng, Renrui Zhang, Zesen Cheng, and Jian Zhang. Eda: Explicit text- decoupling and dense alignment for 3d visual grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Runsen Xu, Zhiwei Huang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Vlm-grounder: A vlm agent for zero-shot 3d visual grounding. In CoRL, 2024a. Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. In ECCV, 2024b. Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F Fouhey, and Joyce Chai. Llm-grounder: Open-vocabulary 3d visual grounding with large language model as an agent. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 7694–7701. IEEE, 2024. Zhihao Yuan, Jinke Ren, Chun-Mei Feng, Hengshuang Zhao, Shuguang Cui, and Zhen Li. Visual programming for zero-shot open-vocabulary 3d visual grounding. In CVPR, 2024. Yiming Zhang, ZeMing Gong, and Angel X Chang. Multi3drefer: Grounding text description to multiple 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 15225–15236, October 2023. Chenming Zhu, Tai Wang, Wenwei Zhang, Kai Chen, and Xihui Liu. Scanreason: Empowering 3d visual grounding with reasoning capabilities. In European Conference on Computer Vision, pp. 151–168. Springer, 2024a. Liyuan Zhu, Shengyu Huang, and Iro Armeni Konrad Schindler. Living scenes: Multi-object relo- calization and reconstruction in changing 3d environments. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024b. Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li. 3d-vista: Pre- trained transformer for 3d vision and text alignment. In Proceedings of the IEEE/CVF Interna- tional Conference on Computer Vision, pp. 2911–2921, 2023. 12 A APPENDIX OVERVIEW AND ORGANIZATION This appendix provides supplementary details to support and extend the main paper. The organiza- tion of the appendix is as follows: 1. Use of LLMs ( Section B ): This section states the use of Large Language Models (LLMs). 2. Ethics Statement ( Section C ): This section provides ethics statements. 3. Reproducibility Statement ( Section D ): This section provides reproducibility state- ments. 4. Broader Impact ( Section E ) The broader societal implications of our work are discussed. 5. Benchmark Statement ( Section F ): This section describes the release status and usage policy of the ChangingGrounding Benchmark (CGB). 6. More Details for Data Construction ( Section G ): This section describes more details for the data construction process of CGB. 7. Action Policy ( Section H ): This section shows a more detailed description of action poli- cies, the Omnidirectional Scene Scanner (OSS), and the Spatial Relation Aware Scanner (SRAS) 8. Query Classification ( Section I ): This section presents additional details of the Query Classification module. 9. Memory Retrieval and Grounding( Section J): This section presents a more detailed explanation of two different algorithmic paths based on the query classification results. 10. Multi-view Projection ( Section K ): This section introduces more details for the Multi- view Projection module. This module obtains multi-view point clouds and then filters them to get more accurate 3D bounding boxes. 11. VLM Prompts ( Section L ): We provide the full list of vision-language prompts used in MCG, covering all modules including memory retrieval, spatial relation parsing, multi- view comparison and selection, and fallback strategies. These prompts form a modular, interpretable interface for multi-stage reasoning. 12. Cost Calculation for Methods ( Section M ): This section details how action costs and motion costs are computed for each method. The evaluation aligns with the cost metrics defined in the main text, and a note explains that all costs are reported in units of 1,000 seconds (e.g., 9k = 9000s). 13. Open Problems ( Section N ): We outline the current limitations of the CGB benchmark and the MCG method, including the lack of allocentric relations, the impact of rendering noise, and the dependency on external 2D models. Future improvements are discussed. 14. More Results ( Section O ): Additional results are presented to assess the robustness of MCG, including a comparison between using rendered vs. real images in memory, and a set of failure cases analyzing the limitations of VLM, SRAS, SAM, and the projection pipeline. A complete example is shown to illustrate how MCG grounds a target object in a changing scene. B USE OF LLMS We used large language models (OpenAI’s ChatGPT (OpenAI, 2024)) solely to aid in the polish- ing of English writing. The models were employed to improve clarity, grammar, and style in the manuscript text. No part of the research design, data analysis, model implementation, or results interpretation was generated or influenced by LLMs. All scientific contributions, ideas, and experi- mental results are entirely the work of the authors. C ETHICS STATEMENT This work focuses on the design and evaluation of a benchmark. It does not involve human subjects, sensitive personal data, or private information. All datasets used are publicly available. We adhere to academic integrity standards. 13 D REPRODUCIBILITY STATEMENT To lower the research threshold and enable independent fairness audits, we will fully open-source our benchmark generation process, data preprocessing methods, and evaluation scripts. All data are drawn from public or simulated scenes and contain no personally identifiable information. E BROADER IMPACT This study introduces a new task for changing scene 3D visual grounding, releasing the open bench- mark CGB and a strong reference method, MCG. This technology can significantly enhance the efficiency of logistics and service robots in dynamic environments, advancing smart manufacturing and supply-chain management. However, rapid automation may temporarily displace low-skill jobs, requiring joint reskilling programs to equip workers with digital skills. F BENCHMARK STATEMENT We will publicly release the proposed CGB benchmark and its accompanying dataset on the Hug- gingface platform, making it freely accessible to the research community. The dataset will be regu- larly updated and maintained to ensure its accuracy and relevance. It is important to note that, at this stage, all available data in the CGB benchmark is used exclusively for testing purposes. We hope this benchmark will encourage further research into 3D visual localization in dynamically chang- ing environments. All files within the CGB benchmark are strictly intended for non-commercial research purposes and must not be used in any context that could potentially cause harm to society. Also, to support reproducibility, we provide benchmark testing examples for the proposed MCG method on the GitHub platform, along with detailed environment specifications and a complete execution pipeline to facilitate efficient replication and verification of experimental results. G MORE DETAILS FOR DATA CONSTRUCTION Detail Explanation for Building Spatial Relation Descriptions. When selecting target and anchor categories, we followed several principles from the ReferIt3D benchmark to ensure both robustness and task relevance. For target categories, we excluded classes appearing in fewer than four scenes to reduce long- tail bias from infrequent categories. In addition, we included objects that undergo changes across scenes, so the final 209 target categories are the union of objects appearing in at least four scenes and those objects exhibiting changes. This dual criterion for target category selection can reduce long-tail effects and ensure the task remains relevant to dynamic scenes. To further maintain annotation reliability, for each individual scene, we applied the constraint that a target object must have no more than six distractors of the same category in the scene. This was motivated by ReferIt3D annotator feedback showing that error rates rise significantly when distractors exceed six. Importantly, this constraint applies per scene: a category may be excluded as a target in one scene but remain valid in another if the distractor number is within the limit. For anchor categories, we followed a similar strategy, using the 209 target categories plus 24 additional large or frequent objects (e.g., fireplaces, televisions). This design improves diversity while also improving reliability, since such larger anchors are easier to describe in spatial relations. We also enforce at most one anchor object in complex scenes, because our descriptions use spatial templates (Target–Relation–Anchor) rather than detailed attributes; with multiple anchors, it would be unclear which instance is referenced. Overall, this filtering strategy balances statistical robustness with task specificity, yielding a diverse set of over 200,000 prompts while ensuring clear and reliable grounding cases. Statistic. The dataset contains 266,916 referential descriptions that uniquely locate targets through spatial relations. As shown in Figure 4, the word cloud highlights frequent terms such as common furniture, along with many less frequent items. We also merge the original 528 object categories in 3RScan into 225 broader ones for tractability, with the help of ChatGPT-o1 (OpenAI, 2025b). 14 Figure 4: A word cloud generated from spatial-relation descriptions, visually highlighting the fre- quency of occurring terms. H ACTION POLICY Omnidirectional Scene Scanner. The OSS module is a set of robot agent actions when it needs to locate an anchor object or a target object. From a given pose, the agent will perform a full 360° scan and use the VLM to identify the observation that best matches the given query. As shown in the leftmost figure at the bottom of Figure 3, the agent starts from an initial pose p and a user query, then generates twenty poses by rotating p around the gravity axis (ψ) in steps of 18◦× i for i = 0, 1, . . . , 19. These actions ensure a comprehensive exploration of the surroundings. Subsequently, the agent tilts each pose downward by 20◦around the horizontal (ϕ) axis to avoid missing objects that lie slightly below the original level. Next, the agent obtains images at each pose, annotates sequential IDs, and dynamically stitches them. Finally, the agent will input the stitched result to VLM to predict the correct image containing the anchor object or target object based on user queries. pi = p · Rψ(18◦× i) · Rϕ(−20◦), i = 0, 1, . . . , 19 (1) Spatial Relation Aware Scanner. The SRAS module is a set of robot agent actions when the anchor object has already been located, and its next step is to search for the target object. It is designed to obtain a series of observations starting from the anchor object pose based on the spatial relationship between the anchor and the target object, and then use VLM (OpenAI, 2025a) to predict which of these observations contain the desired target object. As shown in the second image at the bottom of Figure 3, given the anchor image pose pa and the user query Dc, the agent will first use VLM to analyze the positional relationship between the target object Ot and the anchor object Oa based on the query. Leveraging this positional relationship, the agent then adjusts pa to generate a series of new poses. Next, the agent obtains images at these new poses, assigns them unique IDs, and dynamically stitches them together. Finally, the agent inputs stitched images and Dc into the VLM to predict the target image. New poses are generated based on different categories of spatial relationships listed below: • Horizontal and Between - The agent applies the same omnidirectional scanning strategy as the OSS module to process pa to acquire a set of new poses. Then the agent images at these poses and uses VLM to evaluate which image indeed contains the Ot matching the Dc. • Support and Vertical - If VLM analysis shows that Ot is below Oa, the agent will generate a series of new poses by rotating pose pa downward around its local horizontal (ϕ) axis in 20° increments. Besides tilting directly downward, in order to cover a wider exploration area, the agent will also first rotate pa a little left and right around its gravity axis (ψ), and then rotate downward to generate more poses. Next, the agent obtains observation images at these poses and uses VLM to evaluate which image indeed contains the Ot matching the Dc. If Ot is above Oa, the process is similar to that of the ”below” relationship, except that the rotation is upwards. 15 Now on, we’ll use the X, Y, and Z axes to explain more details of the pose generation methods based on different spatial relationships. This will help readers follow the accompanying code more easily. Additionally, to simplify later rotation and translation steps, we first adjust the camera pose of the anchor-object image consistently: the Y-axis points downward, and the X-axis points to the right. Up. The camera is first shifted backward along its Z-axis to broaden the view. Next, we rotate the pose around its local Y-axis by −90◦, −45◦, 0◦, 45◦, and 90◦. For each of these turns, we add an extra tilt around the local X-axis by 0◦, 18◦, 36◦, and 54◦. This nested sweep yields 5×4 = 20 new poses, providing a more comprehensive upward field of view. Down. The “down” case follows a process highly similar to the “up” case, with the key difference being the direction of rotation around the local X-axis (which controls the up-down viewing direc- tion). The camera is first shifted backward along its Z-axis to broaden the view. Next, we rotate the pose around its local Y-axis by −90◦, −45◦, 0◦, 45◦, and 90◦. For each of these turns, we add an extra tilt around the local X-axis by 0◦, −18◦, −36◦, and −54◦. This nested sweep yields 5×4 = 20 new poses, providing a more comprehensive downward field of view. Horizontal and Between. For simplicity, we use the same procedure to generate new poses for both “horizontal” and “between” relations (Note that for the “between” relation, the initial anchor- object image only needs to include one of the two anchor objects involved in that relation). First, the camera moves backward along its local Z-axis to widen the view; next, it moves closer to the room’s center to cover more of the scene; then, it rotates around its local Y-axis in 18° increments from 0° to 342°, creating 20 evenly spaced horizontal angles; after each Y-rotation, the camera tilts 25° downward around its local X-axis to avoid missing lower parts of the scene. This sweep produces 20 viewpoints that give a broad, slightly downward-looking perspective of the environment. I QUERY CLASSIFICATION As stated in the main text Section 4.3, queries with “between” relation should be categorized as verifiable queries. The “between” relation is complex because it involves two anchor objects. If we followed the verifiable workflow, we would need to confirm both anchors and the target object’s final position, and then we may need to build another suitable memory-retrieval and grounding algorithm based on the confirmation results. This is too complex for our current scope. For simplicity, we just mark queries with the “between” relation as unverifiable and only use the first anchor object. The remaining steps use the same procedure as the queries with a ”horizontal” relation. J MEMORY RETRIEVAL AND GROUNDING Here are in-depth explanations of 2 algorithmic paths for different kinds of queries. • - Unverifiable Queries – As mentioned in the Query Classification module, for unverifiable queries, the agent cannot ensure that the target object, which is directly grounded in mem- ory, still matches the query in the current scene. Therefore, the agent prioritizes finding the anchor object from memory. The agent first follows the VLM-Grounder (Xu et al., 2024a) approach to preprocess images from memory: a 2D open-vocabulary detector (Liu et al., 2024) filters all images in Mp to generate a preprocessed image sequence {Ip}det contain- ing anchor class objects, which are then dynamically stitched with ID annotations. After that, the agent uses VLM to predict an image Ia p which shows the anchor object clearly from {Ip}det. The agent obtains the pose pa where the image Ia p was taken, then it will go to the same pose in the current scene Sc to get a new observation Ia c . If the anchor object stays still in Ia c , the agent will use the spatial relationship of the query Dc to find the target. Specifically, the agent inputs Dc and the pose pa into the SRAS module for final target localization in Sc. If the anchor object doesn’t stay still, the agent will go to the center of Sc and directly search around to find the target. Specifically, the agent initiates OSS at the center of Sc to directly locate the target. • - Verifiable Queries – Different from unverifiable queries, for verifiable queries, the agent prioritizes directly grounding the target object matching the Dc from memory. After a similar pre-process pipeline as verifiable queries, the agent obtains stitched images {Ip}det 16 that contain the anchor class objects or target class objects. Then it uses VLM (OpenAI, 2025a) to select from {Ip}det a target image It p containing target object satisfying the Dc and an anchor image Ia p containing anchor object. Next, by moving to the same camera poses pt and pa of It p and Ia p in the current scene Sc, the agent obtains the corresponding new observations It c and Ia c . Following that, the agent verifies the status of images It c and Ia c . If the target object in It c and the anchor object in Ia c both stay still, the agent directly outputs It c as a result. If the target object doesn’t stay still but the anchor object stays still, the agent will use the spatial relationship of Dc to find the target starting from the anchor pose pa. Specifically, the agent will invoke SRAS and input Dc and anchor image pose pa for localization. If the anchor object moves, the agent will first try to locate it in Sc, and then use the relationship to find the target. This is because, for this type of query, once the anchor is found, the target can usually be located through a series of clear actions. Specifically, the agent will move to the center of Sc and use OSS for the anchor position. It should be noted that the rotational search via OSS can terminate early: as soon as the VLM spots the anchor object, the scan stops. Once the anchor is located, the agent finally invokes SRAS to track the target. K MULTI-VIEW PROJECTION Inspired by VLM-Grounder, our multi-scan projection also merges point clouds from several views to build the final cloud. But unlike VLM-Grounder, which uses PATs (Ni et al., 2023) to get ap- propriate views, we gather views by scanning around the target object. The entire pipeline can be divided into three stages: (1) obtaining a reference point cloud for the target object, (2) performing surround-view scanning around the reference point cloud’s center to collect multi-view point clouds, and (3) removing outliers from the aggregated point cloud set. In the main text, we have already clearly described the overall pipeline of the Multi-view Projection module. Here, we first outline the more complete process and then provide the notable details for stages 1 and 2. After memory-guided retrieval or fallback identifies the target image, the agent will use VLM (Ope- nAI, 2025a) to predict the 2D bounding box of the target object detected in the image. It then feeds the image with this box to SAM (Kirillov et al., 2023) to obtain a segmentation mask, projects the mask into 3D space using depth and intrinsics, and derives a reference point cloud. However, this reference point cloud is not complete. It is derived from a projection of the target object from a single viewpoint, which may not capture all parts of the object and thus results in an incomplete point cloud. To compensate for incomplete single-view point clouds, we introduce this module to refine the grounding result with a multi-view, target-centered scanning strategy. In this module, the agent circles the center of the reference 3D point cloud to get multi-view observations and projects these observations into 3D point clouds. Finally, the agent clusters and filters these point clouds and outputs a more accurate 3D bounding box. Specifically, from the reference point cloud, the agent extracts the 3D bounding box and computes the box’s center c and the diagonal length lbox. The agent uses these values to define an observation sphere. The center of this observation sphere is c, and the radius of this sphere is calculated as r = max(lbox/2, 1.5 m). The agent then generates sixteen poses and obtains their corresponding observations on a 30°-tilted ring around the sphere. Subsequently, the agent uses VLM to select the four observations that most clearly and completely capture the target object. For each frame, the agent will select a single valid mask for the target object: it runs an open-vocabulary detector [3] to locate the object’s candidate 2D bounding boxes; segments those boxes with SAM to produce candidate masks; projects the masks into the 3D point cloud; finally keeps the one mask whose corresponding point cloud centroid is closest to reference point cloud center c. All valid masks are then projected to 3D point clouds. Finally, to filter the outliers, the agent sorts these clouds by the volume of their bounding boxes and discards any cloud whose volume is substantially larger than that of the next smaller one. The remaining clouds are then fused with the reference cloud to produce the refined point cloud. Getting the Reference Point Cloud. To get the reference point cloud, we need to obtain the tar- get object’s 2D bounding box and use the SAM model to get its mask in the image. Next, we can project this mask into 3D space to obtain the object’s reference point cloud using the camera pa- rameters and depth data. Therefore, first, the agent feeds the image containing the target object into 17 GroundingDINO and removes any 2D boxes that are too large, since some boxes cover the whole image. (as GroundingDINO may occasionally return boxes covering the entire image). After that, it marks the centers of the remaining boxes on the image. Then it passes the image, the user query, and additional contextual cues (e.g., ”the object is located in the bottom-right corner”) into the VLM to identify the most semantically relevant 2D bounding box corresponding to the target object. The agent uses this box and its center as a positive point for SAM to create a segmentation mask. Finally, the mask is projected into a 3D point cloud using the camera parameters and the depth image, with the same denoising process strategy as VLM-Grounder during projection. Surround-view Scanning. The agent scans around the reference point cloud’s center to capture many new views. For each view, it runs GroundingDINO to find 2D boxes. It projects each box into 3D and measures the Euclidean distance between that box’s point-cloud center and the reference center. The box with the shortest distance is kept as the target object in that view. The agent repeats this for all views and gathers the resulting point clouds. It should be noted that, in addition to the outlier removal strategy based on bounding box size sorting described in the main text, we also apply an extra denoising step at the beginning of the candidate box selection process. The details of this initial denoising step are explained below. Among the bboxes, we select the one whose center is closest to that of the reference point cloud. However, due to the limitations of the 2D object detector and SAM, this nearest candidate may not always correspond to the true target object. To address this, we first input the reference image into a vision-language model (VLM) to assess whether the target object is particularly large or partially outside the camera view. If so, no additional filtering is applied. Otherwise, we enforce a spatial constraint requiring that the center of the selected candidate point cloud lies within 0.25 meters of the reference center; this helps prevent the inclusion of significant noise points unrelated to the target object. L VLM PROMPTS For the baseline methods, we use the same prompts as those employed in VLM-Grounder. For the MCG method, we introduce several additional prompts, including those designed for the memory retrieval image module, prompts used to compare whether the target object has moved between images, prompts used in SRAS, and prompts applied in the multi-scan projection process. We will explain each of them in the following sections. The memory retrieval prompt for unverifiable queries selects the top 3 images that clearly cap- ture the anchor object from a video sequence when no reliable grounding information is available. In contrast, the memory retrieval prompt for verifiable queries performs a two-stage reasoning process: it first searches for images that satisfy the query constraints and falls back to identify- ing the target object if constraints are unmet. The oss prompt for unverifiable queries focuses on selecting the single image that most clearly and completely depicts the target object from a 360-degree scan, while the oss prompt for verifiable queries incorporates a three-step reasoning strategy, identifying the earliest image containing the anchor and then limiting the search space for target localization accordingly. The relation parsing prompt is used to infer the spatial re- lation (e.g., up, down, near, far, between) between the target and anchor objects from the query. The sras choose target prompt performs target selection under a 360-degree rotation by evalu- ating multiple views and returning the most confident match. The compare prompt determines whether two images captured from the same pose show the target object at the same position, sup- porting consistency checks. The fallback prompt implements a robust two-step procedure: locating a query-matching image if available, or falling back to the clearest image showing the object class. The get good view prompt is used to retrieve up to four images that provide the best views of a ref- erence object based on a reference image with a bounding box. Finally, the bboxchoose prompt re- fines object selection by identifying the most probable target object among multiple candidate boxes, integrating query content and spatial descriptions. Together, these prompts provide a structured, in- terpretable, and modular interface for vision-language agents to perform complex multi-view spatial reasoning and object grounding tasks. The limit prompt guides the VLM to assess whether the target object is overly large or partially occluded, serving as a prior for filtering unreliable candidate point clouds. Table 11 shows their detailed contents. 18 Table 11: VLM prompts memory retrieval prompt for unverifiable queries You are an intelligent assistant proficient in analyzing images. Given a series of indoor room images from a video, you need to analyze these images and select the best 3 images. Each image has an ID in the upper left corner indicating its sequence in the video. Multiple images may be combined and displayed together to save the place. The anchor object is {anchor class}. If there are some images that are very similar, only select the clearest one to participate in the further selection pro- cess. Select the best 3 images from the remaining images according to the following rule: Rule 1: Select those images from the remaining ones that can clearly display the anchor object until the total number of selected images reaches 3. Please reply in json format, including ”reasoning” and ”selected image ids”: { ”reasoning”: ”Your reasoning process”, // Your thinking process regarding the selection task ”selected image ids”: [”00045”, ”00002”, ”...”], // A list of the IDs of the best 3 images selected ac- cording to the rules. Note that the returned IDs should be in the form of ”00045”, not ”00045.color”, and do not add any suffix after the numbers. ”unique question”: 6 // This is an independent question. Regardless of any other factors, only look for which image among all those provided captures the object {targetclass} most clearly. If none is found, return -1. } Now start the task: There are {num view selections} images for you to select from. memory retrieval prompt for verifiable queries Imagine that you are in a room and tasked with finding a specific object. You already know the query content: {query}, the anchor object class: {anchorclass}, and the target object class: {targetclass}. The provided images are obtained by extracting frames from a video. Your task is to analyze these images to locate the target object described in the query. You will receive multiple images, each with an ID marked in the upper left corner to indicate its order in the video. Adjacent images have adjacent IDs. Note that, to save space, multiple images may be combined and displayed together. You will also be given the query statement and a parsed version specifying the target object class and conditions. Your task is divided into two main steps: Step 1: Based on the query and associated conditions, determine whether any of the provided images contain the target object that satisfies the requirements. If found, return the corresponding image ID; if not, return -1. Step 2: If no matching image is found in Step 1, ignore the query content and examine all images to see if any clearly capture an object of class {targetclass}. If such an image exists, return its ID; otherwise, return -1. Please note that the query statement and conditions may not be fully satisfied in a single image, and they may also contain inaccuracies. Your goal is to find the object that most likely satisfies the query. If multiple candidates exist, choose the one you are most confident about. Your response should be a JSON object containing the following fields: { ”reasoning”: ”Your reasoning process”, // Explain how you judged and located the target object. If cross-image reasoning is used, specify which images were involved and how. ”find or not”: true, // Return true if a suitable image matching the query is found, otherwise return false. ”target image id”: 4, // Return the image ID that best satisfies the query and conditions. If none found, return -1. ”anchor image id”: 6, // Return the ID of the image where the anchor object is most clearly visible. ”extended description”: ”The target object is a red box located in the lower left corner of the image.”, // Describe the target object in the selected image, focusing on color and position. ”unique question”: 6 // This is an independent question. Regardless of other factors, select the image that most clearly captures an object of class {targetclass}. If none, return -1. } 19 Now start the task: There are {num view selections} images for your reference. The following are the conditions for the target object: {condition} oss prompt for unverifiable queries Imagine that you are in a room and tasked with finding a specific object. You already know the query content: {query}, the anchor object class: {anchorclass}, and the target object class: {targetclass}. The provided images are frames extracted from a video in which the camera performs a full 360- degree rotation around a specific point. Your task is to analyze these images to locate the target object described in the query. You will receive multiple images, each with an ID marked in the upper left corner indicating its sequence in the video. Adjacent images have adjacent IDs. To save space, multiple images may be combined and displayed together. Additionally, you will be provided with the query statement and its parsed version, which specify the target class and grounding conditions. Your goal is to find the image that most clearly and completely captures the target object described by the query. The conditions may not be fully accurate or verifiable from a single image, so the correct object may not satisfy all of them. Try your best to identify the object that most likely meets the conditions. If multiple candidates appear correct, choose the one you are most confident about. While checking each image, consider different views throughout the 360-degree rotation. If you find the target object in an image, also examine whether other images capture the same object more clearly or completely, and return the best one. Your answer should be based on the image where the target object is most clearly and completely visible. Please reply in JSON format, structured as follows: { ”reasoning”: ”Your reasoning process”, // Explain the process of how you identified and located the target object. If reasoning across multiple images is used, explain which images were referenced and how. ”target image id”: 1, // Replace with the actual image ID (only one) that most clearly captures the target object. ”reference image ids”: [1, 2, ...], // A list of image IDs that also contain the target object and helped in reasoning. ”extended description”: ”The target object is a red box. It has a black stripe in the middle.”, // Describe the target object’s appearance based on the selected image. Color and features only; do not include position. ”extended description withposition”: ”The target object is a red box located in the lower left corner of the image.” // Describe the target object with both appearance and spatial position in the image. } Now start the task: There are {num view selections} images for your reference. Here is the condition for the target object: {condition} oss prompt for verifiable queries Imagine that you are in a room with the task of finding specific objects. You already know the query content: {query}, the anchor object category: {anchorclass}, and the target object category: {targetclass}. The provided images are extracted frames from a video that rotates around a certain point. Each image is marked with an ID in the top-left corner to indicate its sequence in the video. Adjacent images have adjacent IDs. For space efficiency, multiple images may be combined and displayed together. You will also receive a parsed version of the query, which clearly defines the target object category, the anchor object category, and grounding conditions. Your task consists of the following three steps: Step 1: Based on the anchor object category, determine whether any of the provided images clearly capture the anchor object. If no such image is found, return -1 directly. Step 2: If Step 1 is successful, return the smallest image ID (denoted as min ID) among the images that clearly capture the anchor object. Step 3: Among the images with IDs from 0 to min ID, try to find an image that clearly captures the 20 target object and satisfies the query content and conditions. If such an image is found, return its ID; otherwise, return -1. Note: The query statement and conditions may not be perfectly accurate or fully visible in a single image. Try your best to locate the object that is most likely to match these conditions. If multiple objects are plausible, select the one you are most confident about. Here is an example: In Step 1, images 12, 13, 14, and 15 all clearly capture the anchor object, so Step 2 yields min ID = 12. In Step 3, no image from ID 0 to 12 meets the query requirements, so target image id = -1. Please reply in JSON format as follows: { ”reasoning”: ”Your reasoning process”, // Explain the reasoning process across all three steps. If cross-image reasoning is involved, specify which images were used and how. ”anchor image id”: 12, // Return the smallest image ID that clearly captures the anchor object. If none is found, return -1. ”target image id”: 4, // If anchor image id = -1, then return -1 directly. Otherwise, return the image ID (≤anchor image id) that best satisfies the query. If none found, return -1. ”extended description”: ”The target object is a red box located in the lower-left corner of the image.”, // Describe the target object in the image with ID = target image id. If target image id = -1, return None. ”unique question”: 6 // This is an independent question. Regardless of other factors, return the ID of the image that most clearly captures an object of class {targetclass}. If none found, return -1. } Now start the task: There are {num view selections} images for your reference. Here are the conditions for the target object: {condition} relation parsing prompt You are an agent who is highly skilled at analyzing spatial relationships. You are given a query: {query}, a target object: {classtarget1}, and an anchor object: {anchorclass}. Your task is to de- termine the spatial relationship of the target object relative to the anchor object based on the query content. The possible spatial relationships are defined as follows: - up: the target object is above the anchor object // the target object is lying on the anchor object // the target object is on top of the anchor object. - down: the target object is below the anchor object // the target object is supporting the anchor object // the anchor object is on top of the target object. - near: the target object is close to the anchor object. - far: the target object is far from the anchor object. - between: the target object is between multiple anchor objects. Please reply in JSON format with one key, ”reasoning”, indicating the spatial relationship you determine: { ”reasoning”: ”up” // Return the spatial relationship type (up, down, near, far, or between) that best describes the position of the target object relative to the anchor object. } Now start the task. sras choose target prompt Imagine you’re in a room tasked with finding a specific object. You already know the anchor object class: {anchorclass}, the target object class: {targetclass}, and the query the target object should match: {query}. The provided images are captured during a 360-degree rotation around the anchor object. You are given a sequence of indoor-scanning video frames and a query describing a target object in 21 the scene. Your task is to analyze the images and locate the target object according to the query content. Each image is annotated with an ID in the top-left corner indicating its sequential position in the video. Adjacent images have adjacent IDs. For space efficiency, multiple images may be combined and displayed together. You are also provided with a parsed version of the query, which lists the conditions that the target object should satisfy. After filtering and comparison, your goal is to identify the image ID that contains the target object most clearly based on the query and conditions. Note that these conditions may not be fully ob- servable in a single image and might be imprecise. The correct object may not meet all conditions. Try to find the object that most likely satisfies them. If multiple candidates seem plausible, choose the one you are most confident about. If no object meets the query criteria, make your best guess. Usually, the target object appears in several images—return the one where it is captured most clearly and completely. Please reply in JSON format with the following structure: { ”reasoning”: ”Your reasoning process”, // Explain how you identified and located the target object. If you used multiple images, describe which ones and how they contributed to your decision. ”target image id”: 1, // Replace with the actual image ID that most clearly shows the target object. Only one ID should be provided. ”reference image ids”: [1, 2, ...], // A list of other image IDs that also helped confirm the target object’s identity. ”extended description”: ”The target object is a red-colored box. It has a black stripe across the middle.”, // Describe the target object’s color and notable features. No need to mention its position. ”extended description withposition”: ”The target object is a red-colored box located in the lower left corner of the image.” // Describe both appearance and position of the object in the selected image. } Now start the task: There are {num view selections} images for your reference. Here is the condition for the target object: {condition} compare prompt You are an intelligent assistant who is extremely proficient in examining images. You already know the target object category: {target class}. Now I will provide you with two images. You need to determine whether the target objects captured in these two images are in the exact same position. Since these two images are taken from the same pose, you only need to check whether the target objects are in the same position within the images. For example, if the target object is a table and you can clearly see that the table is located in the middle of both images, then the target objects captured in these two images are considered to be in the same position. Please reply in JSON format with two keys: ”reasoning” and ”images same or not”: { ”reasoning”: ”Your reasons”, // Explain the basis for your judgment on whether the target objects captured in these two images are in the same position. ”images same or not”: true // It should be true if you think the target objects captured in the two images are in the same position. If you find that the positions of the target objects captured in the two images are different, or if the target object is captured in the first image but not in the second, then it should be false. } fallback prompt Imagine you are in a room tasked with finding a specific object. You already know the query content: {query}, and the target object category: {targetclass}. The images provided to you are frames extracted from a video that rotates around a particular point. Each image is marked with an ID in the top-left corner to indicate its sequence in the video, and adjacent images have consecutive IDs. For space efficiency, multiple images may be combined and displayed together. 22 Your task consists of two steps: Step 1: Locate an image that contains the target object that satisfies the query statement and its associated conditions. The image must clearly and completely capture the target object. If such an image is found, return its ID and skip Step 2. Step 2: If no image meets the query-based requirements, ignore the query and check all provided images. Identify an image that clearly captures the object of category {targetclass}. If such an image is found, return its ID. If none are found, return -1. Please reply in JSON format with the following structure: { ”reasoning”: ”Your reasoning process”, // Explain the reasoning behind both steps of your decision-making process. ”match query id”: 12, // Return the image ID that satisfies Step 1. If no image matches the query, return -1. ”object image id”: 4, // If Step 1 is successful, return -1 here. Otherwise, return the ID of the image that clearly captures the object in Step 2. If not found, return -1. ”extended description”: ”The target object is a red box located in the lower-left corner of the image.” // Provide a brief description of the target object as seen in the selected image. Focus on visual features such as color and location within the image. } Now start the task: There are {num view selections} images for your reference. get good view prompt You are an excellent image analysis expert. I will now provide you with several images, each marked with an ID in the upper left corner. These images are captured by rotating around a target object {target} that is framed with a green bounding box in the reference image. The reference image is also provided, and it contains the target object {target} enclosed by a green box, with the word ”refer” shown in red in the upper left corner. Your task is to determine which three (at most four) of the provided images capture the target ob- ject from the reference image most clearly and completely. Please note that, for layout efficiency, multiple images may be displayed together in a single composite image. Your response should be in JSON format, containing the following fields: { ”reasoning process”: ”Your reasoning process”, // Explain how you select the images that best capture the target object framed in the reference image. ”image ids”: [2, 4, 5, 7] // Replace with the actual image IDs. Return up to four IDs corresponding to the images that, in your opinion, capture the target object most clearly and completely. } Now start the task: There are {num images} candidate images and one reference image for you to choose from. bboxchoose prompt Great! Here is the detailed version of the picture you’ve selected. There are {num candidate bboxes} candidate objects shown in the picture. I have annotated an object ID at the center of each object with white text on a black background. You already know the query content: {query}, the anchor object: {anchorclass}, and the target object: {classtarget}. In addition, you will be provided with an extended description: {description}, which includes the position of the target object in the picture. Your task consists of two main steps: Step 1: The candidate objects shown in the picture are not necessarily all of the target class {classtarget}. You must first determine which of them belongs to the class {classtarget}. Step 2: Among the identified candidate objects of class {classtarget}, select the one that best matches both the query content and the extended description (including position). Please reply in JSON format with two fields: 23 { ”reasoning”: ”Your reasoning processing”, // Describe your full reasoning process in three parts: (1) how you identified candidate objects of the target class; (2) how you verified them against the extended description; and (3) how you selected the final object ID. ”object id”: 0 // The object ID you select. Always provide one object ID from the picture that you are most confident about, even if you think the correct object might not be present. } Now start the task: There are {num candidate bboxes} candidate objects in the image. limit prompt Great! Now you will perform an expert judgment on the visibility of a target object in the provided image. You already know the target object category: {targetclass}. You will be shown one image containing this object class. Your task consists of two main steps: Step 1: Some object categories, such as beds, sofas, closets, cabinets, shelves, etc., are consid- ered inherently large. If the target object belongs to this group of large categories, directly return "limit": true without proceeding to the next step. Step 2: If the target class is not considered large, examine the image and determine whether the target object appears to be fully captured. If you believe the object is incomplete or partially outside the frame, return "limit": true; otherwise, return "limit": false. Please reply in JSON format with two fields: { ”reasoning”: ”Your reasoning process”, // Describe your reasoning clearly: (1) whether the category is considered large, and (2) if not, how you judged the completeness of the object in the image. ”limit”: false // Return true only if the object is large, or if it is not large but appears incomplete in the image. } Now start the task: You are given one image and the target object category: {targetclass}. M COST CALCULATION FOR METHODS Before we officially begin, let us once again emphasize that all costs are reported in units of 1,000 seconds (e.g., 9k = 9000s). The results shown in tables ( Table 2, Table 3, Table 4, Table 5, Ta- ble 6, Table 7, Table 8, Table 12) have all been processed with unit normalization. For both the baseline methods and our proposed MCG approach, the robot’s initial camera pose is assumed to be at the center of the room (see the main text for the formal definition of this key assumption). For MCG, the full camera trajectory starts from the initial pose and follows a sequence of new poses generated by the MCG pipeline. The cost of the entire trajectory is computed according to the evaluation metrics defined in the main paper. For the WG and CRG baselines, all images are pre-captured and sequentially indexed. We first identify the image whose pose is closest to the initial camera pose and denote its index as n. The camera trajectory then starts from the initial pose and proceeds through the poses of images with indices n, n + 1, n + 2, . . ., wrapping around from the last index back to 1 as needed, and ending at index n −1. The cost is computed based on the same evaluation procedure. For the MOG baseline, which only utilizes memory images, the camera trajectory consists of only two poses: the initial pose and the pose of the target image. Its cost is similarly computed using the defined metrics. N OPEN PROBLEMS We present the ChangingGrounding benchmark (CGB) as the first benchmark for evaluating 3D visual grounding in changing scenes and introduce the Mem-ChangingGrounder (MCG) as a strong baseline method. Nevertheless, both still exhibit the following limitations. 24 N.1 LIMITATIONS OF THE CGB BENCHMARK At present, our CGB dataset models only the relative positional changes between the target and its surroundings, without accounting for critical factors such as lighting variations, object appear- ance attributes (e.g., color, material, deformation), or dynamic scene interactions. Moreover, its repertoire of spatial relations lacks allocentric descriptions like “Object A is in front of Object B.” These omissions narrow the benchmark’s breadth and depth when assessing an agent’s cross-scene generalization and robustness. Future work can address these gaps by enriching multimodal annota- tions, introducing additional dimensions of variation, and incorporating allocentric relations, thereby expanding the dataset’s scale and diversity and enhancing CGB’s applicability and challenge in real- world dynamic environments. N.2 LIMITATIONS OF THE MCG METHOD Limitations of VLM capability. MCG relies heavily on the underlying Vision–Language Model (VLM) to locate target objects in image sequences according to the analysis requirements. As demonstrated by the ablation studies above, the strength of the VLM has a decisive impact on MCG’s final grounding accuracy. If the VLM is insufficiently capable—or if the visual informa- tion in real-world scenes is unusually complex—MCG’s performance can deteriorate. Nevertheless, because VLM technology is advancing rapidly, we can replace the current module with more pow- erful models in the future to further enhance performance. Noise from rendered images. During the experiments, MCG consistently feeds rendered RGB-D images into the vision-language model (VLM) for inference or uses them for SAM-based segmen- tation, projection, and related processes. However, the rendering process based on mesh files intro- duces various types of noise, including artifacts in the RGB images and inaccuracies in the depth maps. Moreover, there may be inherent differences in how VLMs process real versus rendered images. These factors can negatively affect the grounding accuracy. Noise introduced by 2D models. MCG depends on 2-D object detectors and segmentation networks to filter candidate images and perform the final projection. Although state-of-the-art models such as GroundingDINO and SAM are highly capable, they still exhibit missed detections, false positives, imprecise bounding boxes, and segmentation errors. These imperfections propagate through the pipeline and ultimately undermine the accuracy of the grounding results. Future work. Despite these limitations, we believe that our work on MCG and the CGB benchmark provides a strong foundation for future research in the field of grounding tasks in changing scenes. We hope that our contributions will inspire researchers to explore new methods and techniques to address the challenges posed by dynamic scenes. Specifically, we encourage the community to focus on the following open problems: (1) Improving VLM Robustness: Developing more robust Vision–Language Models that can handle complex real-world visual information and reduce the impact of noise; (2) Enhancing Multimodal Integration: Exploring ways to better integrate multi- modal data (e.g., combining visual, linguistic, and spatial information) to improve grounding accu- racy; (3) Expanding Benchmark Diversity: Contributing to the expansion of the CGB benchmark by adding more diverse scenarios, including variations in lighting, object appearance, and dynamic interactions; (4) Reducing Noise in Rendered Data: Investigating methods to minimize the noise introduced during the rendering process and to bridge the gap between real and rendered images; (5) Advancing 2D-to-3D Projection Techniques: Improving the accuracy and reliability of 2D object detection and segmentation models to enhance the overall grounding performance. We hope that our work will serve as a catalyst for further research in this exciting and challenging domain. By addressing these open problems, we can collectively push the boundaries of 3D visual grounding in changing environments and develop more effective and robust solutions. O MORE RESULTS O.1 TEST SAMPLES To reduce costs, we randomly selected 250 validation samples from the ChangingGrounding dataset for testing. For each sample, first, we select any reference scan as Sc and randomly select one rescan of Sc as Sp (since the reference scan is typically the most complete). We then pick a target O with 25 ID ido. Subsequently, we extract descriptions Do for target object from the ChangingGrounding dataset according to scan id = Sc and target id = ido (2) Finally, we combine the Do with the previous scene memory data Mp of Sp to form a sample. It is important to note that, in order to ensure that the test samples cover diverse types of descriptions, we selected a fixed number of instances from each relation type. Within the 250 samples, both the anchor object and the target object may either remain static or undergo changes. Finally, to satisfy the requirement that the target should be located in the vicinity of the anchor, we constrained the distance between the anchor and the target object to within 1.5 meters. O.2 RENDERED VS. REAL IMAGES IN MEMORY In previous experiments, both the memory and exploration images used by our system were re- rendered images. We still don’t know how well VLMs work with these synthetic images. To check this, we conduct a comparative experiment with two settings. In the w.rendering setting, both the memory and exploration inputs are re-rendered images, consistent with the main experiment. In the w/o.rendering setting, the exploration images remain rendered, while the memory images are replaced with real photographs. Note that we don’t have real images captured with the unified camera module described in the main text Section 3.3. To align our rendered images with the real photos supplied by 3RScan, we render every image using the original camera model from 3RScan. We randomly sample 50 instances from a pool of 250 and observe the final grounding results. As shown in Table 12, experimental findings indicate that using rendered images in memory does not significantly affect the overall grounding accuracy. Results show that the w.rendering setting ap- pears to perform slightly worse than the w/o.rendering setting. That does not prove that rendering is superior because there exists normal experimental variance. Moreover, the MCG pipeline still requires many exploration images that must be rendered for VLM inference. Overall, these results suggest that using rendered images in our experiments is a feasible approach. Table 12: Comparison between using rendered and real images in memory. Version Acc@0.25 A c M c w. rendering 28 1.74 2.05 w/o. rendering 24 1.62 1.94 O.3 3D VISUAL GROUNDING METHODS IMPLEMENTATION Prior 3D visual grounding methods are not readily adaptable to scenarios involving dynamic visual grounding because they are not designed to leverage memory. These methods typically require the latest point clouds of the current scene for each grounding instance, which is impractical for real-world applications since rescanning the entire scene to obtain updated point clouds is highly inefficient. Nevertheless, we attempt to adapt these methods to incorporate memory for our task setting. We designed a pipeline as follows: the model initially uses point clouds from memory (past scenes) to locate the anchor object based on the query. Once the anchor object’s position is identified, the model determines a neighboring region around the anchor object to obtain updated point clouds. This approach eliminates the need for scanning the entire scene. The neighboring region is defined as within a 2-meter radius of the anchor object’s position. For cost calculations, we make an approximation based on the assumption that the agent starts from the center of the room, moves to the previously predicted anchor object location, and performs a full 360-degree rotation around the vertical axis to scan the region. It is important to note that this assumption is also not always feasible in real-world scenarios. Specifically, a single 360-degree rota- tion at one position often cannot capture all details, resulting in estimated costs that are significantly lower than the actual costs. 26 We conducted experiments using the 3D-Vista (Zhu et al., 2023) model, as it is pre-trained on the 3RScan dataset. It should be noted that this model requires pre-detection of all 3D bounding boxes prior to grounding. For our experiments, we utilized GT bounding boxes, which significantly en- hance performance beyond realistic scenarios. The final experimental results are presented in the Table 8. We create the pipeline as follows: the agent first uses 3D-Vista to locate the anchor object based on the query in the point cloud of a previous scene. After obtaining the position of the anchor object, the agent uses this position as the center to crop a region within a 2-meter radius from the current scene’s point cloud. This region is then provided as input to 3D-Vista for inference, under the same assumption that 3D-Vista has access to the ground-truth bounding box contained within this region. Please note that these experiments are intended solely for reference. We do not consider them to have practical significance due to simplifying assumptions. Specifically, at Acc@0.25, our method demonstrates superior accuracy and lower action costs. Additionally, since 3D-Vista performs a full 360-degree rotation during scanning (an impractical scenario), it exhibits nearly zero translation cost and reduced rotation cost. O.4 INFERENCE TIME Success rate. Specifically, A step for modules in the MRGS is counted as successful under 2 following conditions: (a) View Pre-selection (VP): The pre-selected views from the SRAS or the OSS contain the target object. (b) VLM Choose Image (VCI): The VLM predicts an image that contains the target object. A step for modules in the MS is counted as successful under 3 conditions: (a) OV Detection (OD): At least one detection box contains the target object. (b) Select Reference Instance (SRI): The VLM selects the detection box containing the target object. (c) Multi-view Projection (MP): The 3D IoU between the predicted box and the ground-truth box is ≥0.25. Their success rates are reported in Table 9 in the main text. Inference time. The table below shows the time consumption of several modules that we consider important in our framework. Table 13: Inference time of different modules (unit: seconds) Query Analysis View Preselection Detection Predictor 0.8 11.3 0.2 SAM Predictor Dynamic Stitching Project Points 0.5 9.7 0.7 VLM Select Box VLM Analysis (Spatial relations) VLM Analysis (Static) 11.4 1.0 5.4 VLM Predict Images VLM Choose Good Views Depth Denoising 11.7 11.5 0.7 VLM Decide Distance Limitation Total 4.9 113.2 We acknowledge that the inference speed has significant room for optimization, given that this is a research project. For example, as shown in the Table 13, the majority of the time in our pipeline is spent on VLM inference. However, with the rapid advancement of VLM technology, we expect that high-speed large models will soon be deployable on edge devices such as the FastVLM project intro- duced by Apple (Vasu et al., 2025). FastVLM reaches 85× faster TTFT when compared specifically to LLaVa-OneVision operating at 1152×1152 resolution. This opens up promising opportunities for significantly reducing the overall runtime of our method. O.5 ERROR CASE ILLUSTRATION In this section, we present concrete failure cases in the MCG framework. 27 Wrong images for VLMs final predicting First, owing to the VLM’s capacity, it may fail to identify the anchor object in memory images at the beginning, causing errors in the whole grounding process. Once the anchor is wrong, new views from SRAS or OSS often miss the target. Second, views from SRAS and OSS may produce limited viewpoints that lack the correct object (anchor or target), especially when the correct object is located low. Third, there exist a lot of unusable rendering images, which often have large blank or missing regions. In all cases, they lead to a single situation where the VLM can’t get any images containing the target object at all, which will cause failure for the final VLM grounding steps. There are some examples shown in Figure 5 and Figure 6. VLMs failure in grounding target images Relational queries involving horizontal spatial reason- ing (e.g., “select the chair closest to the table”) impose higher demands on the inference capability of vision-language models (VLMs). Such relationships require the model to make fine-grained com- parisons based on relative spatial distances rather than absolute object properties. In cluttered scenes, distractors with small distance gaps increase errors, making VLMs prone to wrong selections. An example is shown in Figure 7. Failure in SAM and projection During our experiments, we observed that SAM often produces noisy masks by including pixels unrelated to the target, which we believe may stem from SAM’s poor generalization to rendered images and the low quality of these images. This over-segmentation reduces the accuracy of 3D projection and bounding box localization. In addition, since our ex- periments are conducted on rendered images, missing or incomplete regions often affect precision. Although we applied denoising on depth maps by removing abrupt pixels, residual noise remains a challenge to accurate 3D localization. Examples are shown in Figure 8. Wrong VLM choise Ground truth Figure 5: VLMs failure in memory retrieval, the anchor object is a box. Wrong VLM choise Correct Figure 6: Failure in SRAS, the user query is to find the cushion that is farthest from the PC. Wrong VLM choice Correct Anchor Figure 7: VLMs fail to ground the target image: query “cabinet near the box.” 28 Wrong SAM result Rendering noise Figure 8: Failure in SAM and projection. O.6 FULL DEMO In this section, as shown in Figure 9, Figure 10 and Figure 11, we present a representative and struc- turally concise example to intuitively illustrate how MCG effectively leverages memory information for efficient and accurate target localization in dynamic environments. The example highlights the central role played by the vision-language model (VLM) throughout the entire execution process. We provide a detailed depiction of the VLM’s reasoning at each step, demonstrating how it progres- sively converges on the target object through multi-round perception and decision-making, thereby showcasing its capabilities in semantic understanding and spatial reasoning. Memory images preselection & Dynamic stitching "query": "choose the toilet that is under the flush" "target_class": "toilet" "anchor_class": "flush" memory_retrieval_prompt_for_verifiable_queries {'reasoning': "The query asks for the toilet that is under the flush, with 'flush' as the anchor object and 'toilet' as the target object. I first scanned the images for the presence of a flush (typically a button or lever above a toilet) and a toilet. The clearest images of a toilet are from image IDs 000071, 000072, 000073, 000074, and 000075. Among these, the flush mechanism (the button or lever typically found above the toilet tank) is visible in images 000072, 000073, 000074, and 000075. The toilet is directly under the flush in these images. Image 000073 provides the clearest and most centered view of both the flush and the toilet, making it the best candidate for the target object under the specified condition. For the anchor object (flush), image 000073 also provides a clear view. For the unique question, image 000073 also captures the toilet most clearly.", 'find_or_not': True, 'target_image_id': 73, 'anchor_image_id': 73, 'extended_description': 'The target object is a white toilet located in the center of the image, directly beneath the flush button on the toilet tank.', 'unique_question': 73} Memory retrieve "target_class": "flush" { "reasoning": "In both images, the flush (toilet) is located in the same position within the frame, slightly to the right of the center and oriented in the same direction. The surrounding objects and tiles are also consistent in both images, indicating the camera angle and position have not changed. The only difference is that in the first image, the toilet seat is open, while in the second image, the seat is closed with a decorative cover. However, the position of the flush itself remains unchanged.", "images_same_or_not": true } Stay static or not compare_prompt Figure 9: Case of the MCG grounding part-1. 29 "target_class": "toilet" { "reasoning": "In both images, the toilet is positioned in the same location within the frame, slightly to the right of center and oriented in the same direction. The surrounding objects, such as the bottles and the floor tiles, are also in the same positions in both images. The only difference is that in the first image, the toilet seat is open, while in the second image, the toilet seat is closed and has a decorative cover. This does not affect the position of the toilet itself within the images.", "images_same_or_not": true } Stay static or not compare_prompt limit_prompt Big stuff or occluded stuff "target_class": "toilet" { "reasoning": "The target object is a toilet, which is not considered a relatively large object like a bed, sofa, closet, cabinet, or shelf. Therefore, I proceed to the second step. In the provided image, the toilet is not photographed completely; only a part of it is visible.", "limit": true } Figure 10: Case of the MCG grounding part-2. Full-circle Refer point cloud center Full circle rescan New poses get_good_view_prompt "target_class": "toilet" { "reasoning_process": "To determine which images capture the target object toilet most clearly and completely, I compared each image with the reference image. The reference image shows a toilet enclosed in a green bounding box. I looked for images where the toilet is visible, clear, and framed similarly to the reference image. Images 000001, 000002, 000004, and 000005 show the toilet clearly and from angles that provide a good view of the object, similar to the reference image. These images capture the toilet most completely and clearly among all the provided images.", "image_ids": [1, 2, 4, 5] } Get good multi-views Compute the minimum Euclidean distance Reference Optimal candidates Filter Final result Figure 11: Case of the MCG grounding part-3. 30
CHANGINGGROUNDING: 3D VISUAL GROUNDING IN CHANGING SCENES Miao Hu1, Zhiwei Huang2, Tai Wang4, Jiangmiao Pang4, Dahua Lin3,4, Nanning Zheng1B, Runsen Xu3,4B 1Xi'an Jiaotong University, 2Zhejiang University, 3The Chinese 4Shanghai AI Laboratory ABSTRACT Real-world robots localize objects from natural-language instructions while scenes around them keep changing. Yet most of the existing 3D visual grounding (3DVG) method still assumes a reconstructed and up-to-date point cloud, an assumption that forces costly re-scans and hinders deployment. We argue that 3DVG should be formulated as an active, memory-driven problem, and we introduce ChangingGrounding, the first benchmark that explicitly measures how well an agent can exploit past observations, explore only where needed, and still deliver precise 3D boxes in changing scenes. To set a strong reference point, we also propose Mem-ChangingGrounder, a zero-shot method for this task that marries cross-modal retrieval with lightweight multi-view fusion: it identifies the object type implied by the query, retrieves relevant memories to guide actions, then explores the target efficiently in the scene, falls back when previous operations are invalid, performs multi-view scanning of the target, and projects the fused evidence from multi-view scans to get accurate object bounding boxes. We evaluate different baselines on ChangingGrounding, and our MemChangingGrounder achieves the highest localization accuracy while greatly reducing exploration cost. We hope this benchmark and method catalyze a shift toward practical, memory-centric 3DVG research for real-world applications. Project page: https://hm123450.github.io/CGB/ . 1 INTRODUCTION 3D Visual Grounding (3DVG) is a critical technology that enables precise localization of target objects in 3D scenes through natural language instructions, with broad applications in service robotics (Gonzalez-Aguirre et al., 2021), computer-aided room design (Sipe & Casasent, 2003; Ganin et al., 2021), and human-machine interaction (Aggarwal, 2004; Li et al., 2020). Current methodologies and benchmarks (Achlioptas et al., 2020; Chen et al., 2020) predominantly operate under static scene assumptions, where pre-reconstructed full scene point clouds (Qi et al., 2017) and textual queries (Radford et al., 2021) are fed into end-to-end models to predict 3D bounding boxes (Jain et al., 2022; Wu et al., 2023; Luo et al., 2022; Shi et al., 2024; Guo et al., 2025) as shown in Figure 1. However, these approaches face significant limitations when deployed in real-world robotic systems: practical environments are inherently dynamic (e.g., furniture rearrangement, object occlusion/replacement), so robots have to rescan the entire scenes to reconstruct complete point clouds every time, which is very costly(Sch ̈onberger & Frahm, 2016); otherwise, the robots don't even know whether and where the scenes have changed. In contrast, humans searching in changing environments quickly draw on memories of past scenes to pinpoint likely target areas and can complete object localization through only a few new observations. Inspired by this insight, we contend that a new memory-based paradigm for real-world 3D visual grounding is needed. To the best of our knowledge, no existing work has explored 3D visual grounding in changing scenes by using memory from past observations. In this paper, we formally define this task and introduce a novel benchmark, the ChangingGrounding benchmark, as follows (shown in Figure 1): given the memory of the previous scene, the unexplored current scene, and a query describing the target object in the current scene, the robot needs to predict the target's 3D bounding box in the current scene. 1 16 Oct 2025 Previous Setting Full Search Pointcloud Reconstruction End-to-End Model Target Robot Evaluation ChangingGrounding Accuracy Cost Not found. The lamp has moved. Then turn round? Accuracy Action Cost Motion Cost Evaluation ... Memory Database Previous Scene Query: Lamp on the table. Memory Retrieval I remember the lamp was here before. Current Scene Exploration ...... Go look at the place of memory first. Not found. Turn right next? Found lamp matching the query! Task Finished! Stop running. Query: The cushion closest to the office chair. Figure 1: Comparison between the previous setting of 3DVG and the ChangingGrounding task. The key motivation of the task and the benchmark is to measure how a 3D visual grounding system accurately and efficiently finds the target object by leveraging the memory of past observations and exploring the current scene. So we evaluate task performance using two key metrics: the accuracy of the predicted 3D bounding box and the cost for scene exploration. A better system achieves higher accuracy while keeping the lower cost. To support the task, we construct our novel dataset and benchmark, based on the 3RScan dataset (Wald et al., 2019), supported by a novel exploration and rendering pipeline to simulate how real-world robots perform 3D visual grounding. In addition to our benchmark and dataset, we propose a novel framework called MemChangingGrounder to address this new task. As current end-to-end approaches are not designed for memory access and scene agent exploration, our method is based on a zero-shot agent-based approach (Xu et al., 2024a). Specifically, Mem-ChangingGrounder first classifies user queries, then retrieves relevant memories to guide its action policy, and then explores for the target images in the scene based on this policy, next ensures fallback localization if no valid target images are found, and finally performs scanning of the target and predicts 3D localization through multi-view projection. We introduce three additional baseline methods and compare them with our proposed MemChangingGrounder on the ChangingGrounding benchmark. The three baselines simulate different grounding policies: (i) Wandering Grounding: aimless exploration, (ii) Central Rotation Grounding: simple rotation, and (iii) Memory-Only Grounding: memory-only with no exploration. Experimental results show that Mem-ChangingGrounder achieves the highest grounding accuracy among other baseline methods while maintaining a relatively low exploration cost, demonstrating a superior balance between accuracy and efficiency, and the effectiveness of our proposed policy. 2 RELATED WORK 3D Visual Grounding Benchmarks and Methods. 3D visual grounding aims to locate target objects from natural language queries. Early work focused on matching objects with shape descriptions (Achlioptas et al., 2019; Prabhudesai et al., 2020). ScanRefer (Chen et al., 2020) and ReferIt3D (Achlioptas et al., 2020) extended this to scene-level benchmarks using ScanNet (Dai et al., 2017). ScanRefer predicts full 3D bounding boxes, while ReferIt3D identifies the correct object from given candidates. Later datasets expanded the setting: Multi3DRefer (Zhang et al., 2023) supports grounding multiple objects, and ScanReason (Zhu et al., 2024a) uses complex human instructions. These benchmarks are closer to real needs but ignore temporal changes in scenes. Methods for 3D visual grounding include supervised and zero-shot approaches. Supervised models (Guo et al., 2025; Qian et al., 2024; Wu et al., 2023; Jain et al., 2022; Luo et al., 2022; Shi et al., 2024) rely on annotated datasets, combining a detection branch for 3D objects and a language branch for text encoding. They achieve strong results but are limited by scarce annotations. Zero-shot methods, on the other hand, use LLMs (Touvron et al., 2023; Devlin et al., 2018; Brown et al., 2020; OpenAI, 2023a) 2 Dataset Generation Pipeline Changing Previous Scene Current Scene (1)Horizontal Proximity: Choose the table that is far from the sofa. (2)Vertical Proximity: Choose the picture frame that is above the sofa. (3)Between: The sofa chair that is between the sofa and the side table. (4)Support: Select the table that is on the top of the rug. : Anchor Object : Target Object Memory Images Dataset = Memory Information and Spatial Relation Descriptions Figure 2: ChangingGrounding Dataset generation pipeline. and VLMs (OpenAI, 2023b; Chen et al., 2024; Liu et al., 2023; Xu et al., 2024b) to overcome this issue. Some reformulate grounding as a text problem or use LLM-generated scripts (Yang et al., 2024; Yuan et al., 2024; Fang et al., 2024). VLM-Grounder (Xu et al., 2024a) grounds objects through images instead of point clouds, and SeeGround (Li et al., 2025) selects viewpoints to render scenes for VLM input. These advances improve scalability, but none address grounding in dynamic scenes. Since VLM-Grounder does not require full point cloud input, it provides a practical basis for changing-scene grounding, and we extend our method from this framework. 3D Perception in Changing Scenes. Early work (Fehr et al., 2017) built a small dynamic-scene dataset for 3D reconstruction, but it lacked annotations. InteriorNet (Li et al., 2018) later provided a large synthetic dataset with object and lighting changes. 3RScan (Wald et al., 2019) pioneered the creation of a large-scale real-world indoor RGB-D dataset, encompassing scans of the same indoor environment at different time points, and introduced the task of 3D object instance relocalization, which involves relocating object instances within changing indoor scenes. Many studies followed, such as camera relocalization in changing indoor environments (Wald et al., 2020), changing detection (Adam et al., 2022), changing environment reconstruction (Zhu et al., 2024b), and changing prediction (Looper et al., 2022). Besides, Hypo3D (Mao et al., 2025) conducts a 3D VQA benchmark to evaluate models' ability in changing scenes based on 3RScan. Notably, our work represents the first exploration of 3D visual grounding tasks in changing environments. The 3RScan dataset provides scene scans at different time steps, as well as the coordinate system transformations between scenes and the correspondences of objects. We construct our novel 3D visual grounding dataset based on these annotations. 3 CHANGINGGROUNDING In this section, we first formulate the ChangingGrounding task, then establish the evaluation metrics, and finally detail the dataset collection pipeline along with a statistical analysis. 3.1 TASK FORMULATION Consider a robot that observed a room yesterday and acquired its scene information. When revisiting the room today, objects may have been rearranged. The robot must locate a target object described by a user query. A naive solution is to explore the whole room and then apply standard 3DVG methods, but this is inefficient. Inspired by human memory, we propose enabling robots to use past observations for more efficient and accurate grounding. The task is defined as ⟨Sp, Sc, Mp, Dc⟩→B, where B is the predicted 3D bounding box of the target. Sp is the previous scene. Sc is the current scene with unknown changes. Mp is the memory of Sp, including RGB-D images and poses. Dc is a text description of the target object. The robot must ground the target object in Sc using Mp and Dc. Because the task requires both efficient and precise grounding, we will evaluate this task by two key metrics: accuracy and exploration cost. 3 Table 1: Comparison of datasets. VG: Visual Grounding. CVG: Changing Visual Grounding Dataset Task Prompts Size Build on Dynamic Support Nr3D (Achlioptas et al., 2020) VG 42K Static dataset No Sr3D (Achlioptas et al., 2020) VG 84K Static dataset No ScanRefer (Chen et al., 2020) VG 52K Static dataset No ViGiL3D (Wang et al., 2025) VG 0.35K Static dataset No Multi3DRefer (Zhang et al., 2023) VG 62K Static dataset No ScanReason (Zhu et al., 2024a) VG 10K Static dataset No ChangingGrounding (ours) CVG 267K Changing dataset Yes For research simplicity, we also set several task assumptions as follows. Zero-cost Memory Access. The memory information Mp for the previous scene Sp is stored in the robot's database and can be accessed at any time without incurring additional cost. Standardized Scene Coordinate System. Each 3D scene has a standardized coordinate system Ts. For different temporal scene states of the same physical space, their standardized coordinate systems are aligned to one global coordinate system. Robot's Initial Pose. We adopt the OpenCV right-handed camera coordinate convention and apply it to all poses. For convenience, we assume that in each scene, the robot is initially positioned at the origin of Ts and its initial orientation is obtained by transforming Ts so that the axes satisfy the OpenCV convention Exploration. For the new scene Sc, the robot needs to explore to obtain relevant information about the scene. Therefore, the acquisition of information about Sc will involve certain costs. The cost includes action cost Ca and motion cost Cm (details in Section 3.2). New Observations. We assume the robot is equipped with an RGB-D camera, and it can move to achieve new positions and orientations (new poses). At the new pose, the robot can obtain a new observation. To fulfill this assumption, we developed a rendering module. The rendering module takes the mesh file of a scene and the desired new pose as inputs and outputs the RGBD image observed from the new pose within the scene (an RGB-D image formulated as (I, D) = Rendering(Mesh, Pose)). 3.2 EVALUATION METRICS The evaluation uses two metrics: localization accuracy and exploration cost. Localization accuracy follows standard 3DVG evaluation and is measured by the ratio of samples whose predicted 3D bounding box overlaps the ground-truth box above a threshold (e.g., ). The exploration cost includes action cost Ca and motion cost Cm. Ca counts the number of actions taken until the target is localized. Each action means the robot moves to a new pose and captures a new observation. Action cost alone may be insufficient, since a single action can involve a large movement. We therefore also measure motion cost. Motion cost considers both translation and rotation. To compare them on the same scale, we convert to time using nominal speeds: translation v = 0.5 m/s, rotation ω = 1 rad/s. Given poses {(t1, R1), . . . , (tn, Rn)}, with n = Ca, the costs are: Ctrans = 1 v Pn-1 i=1 ∥ti+1 -ti∥, Crot = 1 ω Pn-1 i=1 arccos Tr(R⊤ i Ri+1)-1 2 , Cm = Ctrans + Crot. It's noted that when calculating Ctrans, we only consider cost on the horizontal plane. The rotation term uses the well-known trace formula θ = arccos (Tr(R⊤) -1)/2 , which gives the rotation angle θ of a rotation matrix. By summing these angles and dividing by the nominal rotational speed ω, we obtain the rotation time. 3.3 DATASET AND BENCHMARK CONSTRUCTION We constructed the ChangingGrounding dataset to support the proposed task. It contains: (1) spatial relation descriptions of target objects as user queries; (2) original RGB-D images of each scene with 4 camera poses as memory information; (3) a mesh file for generating new observations. We base our dataset on 3RScan, which has 1,482 snapshots from 478 indoor environments, providing transformations between scans for alignment, dense instance-level annotations, and object correspondences across scans. These properties allow us to align scenes, re-render them, and construct cases where objects are moved. The dataset is built in two steps. As shown in Figure 2, first, we generate spatial relation descriptions following ReferIt3D (Achlioptas et al., 2020). Second, we process 3RScan data to obtain re-rendered images and store them as memory information. Spatial Relation Descriptions. We use the template 〈Target Category〉〈Spatial Relation〉〈Anchor Category〉, such as "the chair farthest from the cabinet." The anchor category differs from the target. We select 209 fine-grained categories from 3RScan, including those appearing in at least four scenes and those marked as rigid-move. A target is valid if it belongs to these categories and has at most six distractors of the same class. Anchor categories include these 209 classes plus 24 others. ReferIt3D defines five spatial relations (Horizontal Proximity, Vertical Proximity, Between, Allocentric, and Support), but we exclude Allocentric since 3RScan lacks front-orientation annotations. The detailed rationale for category filtering and the construction feasibility are provided in Appendix G. The set of spatial relation descriptions is provided in the supplementary material. 3RScan Processing. We align scans of the same scene to a global coordinate system. The initial scan is taken as the reference, and we calculate its coordinate system first. Then, transformations between the reference and other scans are applied to align all other scans to the coordinate system. For re-rendering, we adopt the ScanNet (Dai et al., 2017) camera model (1296 × 968 resolution with intrinsics (1169.6, 1167.1, 646.3, 489.9)) and use our rendering module to standardize RGB-D images as memory frames. Statistics. We compared the ChangingGrounding dataset with existing datasets in Table 1. Our ChangingGrounding is the largest and the only one built on changing environments. It introduces the new task of changing visual grounding, along with its formulation, baselines, and evaluation protocol. More details and some visual examples are presented in Appendix G and O.6. 4 MEM-CHANGINGGROUNDER (MCG) In this section, we introduce Mem-ChangingGrounder (MCG), a zero-shot framework for 3D visual grounding in changing scenes. MCG takes a query Dc in the current scene Sc and predicts the 3D bounding box of the target object Ot, using memory Mp of the previous scene represented as RGB-D images {Ip} and camera poses {pp}. As shown in Figure 3, MCG has two action policies within four core modules: Query Classification, Memory Retrieval and Grounding, Fallback, and Multi-view Projection. The workflow is to first classify the query and select the path for retrieval and grounding. MCG then explores the current scene with action policies to locate the target. If this fails, the fallback module estimates the target. Finally, multi-view information is fused for accurate grounding. Because MCG builds on the VLM-Grounder (Xu et al., 2024a) framework, we will first introduce this framework (Section 4.1) and then present MCG's four key modules. 4.1 PRELIMINARY OF VLM-GROUNDER VLM-Grounder is a zero-shot 3D visual grounding method that localizes target objects using 2D images and natural language. The pipeline is: from the current scene image sequence {Ic}, all images containing the target category are detected to form {Ic}det; then a VLM (OpenAI, 2025a) analyzes the query and stitched {Ic}det to find the target image; next, an open-vocabulary detector (Liu et al., 2024) proposes objects in the image, and the VLM selects the correct one; finally, a multi-view projection module fuses multiple viewpoints to estimate the 3D bounding box. 4.2 ACTION POLICY Before presenting the core modules of MCG, we briefly describe two action policies frequently employed in MCG to explore the new scene and find the target object, which are the Omnidirectional Scene Scanner (OSS) and the Spatial Relation Aware Scanner (SRAS). We give the basic explanation here, while the complete derivation is in Appendix H. 5 Omnidirectional Scene Scanner (OSS) Target Image Omnidirectional Imaging Query: Microwave on the table. Camera Pose Grounding Render Spatial Relation Aware Scanner (SRAS) Target Image Spatial Relation Analysis Up Camera Pose Query: Microwave on the table. Grounding Multi-view projection 3D BBox Target: Microwave + SAM Reference Mask Project & Calculater 3D Center 3D Center Imaging around the Center Render ... Select Views Select 2D Box&SAM Render Classify Query Memory Dataset Pre-selection& Stitching Query Stitched Images Classification Results Unverifiable Query: Choose the microwave that is nearest the chair. Find Anchor Target Image Anchor Anchor SRAS OSS Grounding :Not Found :Found Verifiable Query: Choose the microwave that is on the table. Find Target & Anchor Target Image Anchor SRAS OSS :Not Found :Found Target Anchor Target Anchor SRAS Multi-view projection Target 3D Bounding Box Fallback OSS Target: Microwave Valid? Target Image Grounding Detect Figure 3: Workflow of Mem-ChangingGrounder (MCG). The upper part shows the overall pipeline: MCG classifies queries, retrieves memory, uses OSS and SRAS to search, applies fallback when needed, and predicts the 3D bounding box through multi-view projection. The lower part shows details of OSS, SRAS, and Multi-view Projection. Omnidirectional Scene Scanner. The OSS module is a set of robot agent actions when it needs to locate an anchor object or a target object. As shown in the bottom leftmost figure of Figure 3, from a given pose, the agent performs a 360° scan by rotating around the gravity axis, ensuring a full exploration of the surroundings. The collected Images from all poses are then collected, indexed, dynamically stitched, and then sent to the VLM, which selects the one that best matches the query. Spatial Relation Aware Scanner. The SRAS module defines agent actions when the agent needs to find the target after locating the anchor. It generates observations from the anchor pose using the spatial relation between anchor and target, and applies VLM (OpenAI, 2025a) to identify the target image. As shown in Figure 3, given anchor pose pa and query Dc, the agent first uses VLM to infer the relative position of target Ot to anchor Oa. Based on this relation, the agent adjusts pa, collects images, stitches them, and inputs them with Dc into VLM to predict the target. New poses are generated for different spatial relation categories. 4.3 DETAILS OF MCG Query Classification. From a memory perspective, user queries are either verifiable or unverifiable. For unverifiable queries, even if the target and anchor stay still, a target found in memory may not match in the current scene. For example, "the chair farthest from the table" may point to a different chair when a new one appears farther away in the current scene. Thus, the memory target no longer fits the query. In contrast, if the target and anchor stay static, and the target found in the memory will always match the query in the current scene, this kind of query is verifiable. For example, "the vase on the table" is verifiable. MCG uses the VLM to judge whether a query is verifiable. Memory Retrieval and Grounding. As shown in Figure 3, this module is designed to obtain an initial estimate of the target grounding result by combining memory retrieval and exploration. This module locates the target image in the current scene Sc by integrating memory Mp, user queries Dc, and exploration of Sc. In short, this module will first try to use the memory Mp to locate an anchor object, and then explore the target object based on the spatial relationship between the anchor and the target. This process is carried out with the assistance of the two action policies, SRAS and OSS modules, which give action according to current observations and the spatial relations. Depending on the type of query, the specific grounding procedures differ. This module is carefully designed with a not simple, yet effective logic. We provide detailed explanations in Appendix J. Fallback. The Fallback module will be activated to find an alternative target image if the Memory Retrieval and Grounding module fails to ground an initial estimate of the target object and return the corresponding target image. Specifically, the agent will first retrieve from memory the clearest image that contains an object of the target class. It will then start from the pose of the image and use 6 OSS to perform a 360° search for images containing the target object as an alternative result for the Memory Retrieval and Grounding module. Multi-view Projection. After memory-guided retrieval or fallback identifies the target image, the agent uses VLM (OpenAI, 2025a) to predict its 2D bounding box. The image and box are then fed to SAM (Kirillov et al., 2023) to obtain a mask, which is projected into 3D using depth and intrinsics to form a reference point cloud. Since this single-view cloud is incomplete, the module refines grounding through multi-view target-centered scanning. As shown in Figure 3, the agent circles the center of the reference cloud, collects multi-view observations, selects a 2d bounding box, projects them into 3D, clusters the clouds, and outputs a refined 3D bounding box. The complete calculation procedure is provided in Appendix K. 5 EXPERIMENTAL RESULTS 5.1 EXPERIMENTAL SETTINGS Dataset. Following VLM-Grounder (Xu et al., 2024a) and LLM-Grounder (Yang et al., 2024), we randomly sample 250 validation instances for evaluation. Each instance contains a user query, which is either "Unique" (only one instance of the target class in the scene) or "Multiple" (with distractors of the same class in the scene). The details of sample selection and composition are provided in Appendix O.1. Baselines and Implementation. We evaluate three additional baselines, covering two scenarios: (i) using only exploration without memory. (ii) using only memory without exploration. The three baselines are organized as follows: 1). Wandering Grounding: the original VLM-Grounder(Xu et al., 2024a) approach utilizing all images and poses of scene Sc from 3RScan; 2). Central Rotation Grounding: the VLM-Grounder utilizing images captured through a similar methodology of OSS at the initial pose of Sc; 3). Memory-Only Grounding: the VLM-Grounder utilizing images only from the memory Mp in scene Sp. For experiments, we use GPT-4.1-2025-04-14 (OpenAI, 2025a) as the VLM, with tests in both high-resolution and low-resolution image modes. We set Temperature = 0.1, Top-P = 0.3, max stitched images L = 6, and ensemble images N = 7. The retry limit is set to M = 3 for baselines, but removed in MCG since it employs a different fallback. The 2D detectors include SAM-Huge (Kirillov et al., 2023) and GroundingDINO (Liu et al., 2024). Evaluation Metrics. Accuracy is measured by and , the percentage of samples where the IoU between prediction and ground truth exceeds 0.25 or 0.50. Cost is measured by Ca and Cm (defined in Section 3.2). Details how Ca and Cm are computed for each baseline methods and MCG are in Appendix M 5.2 MAIN RESULTS As shown in Table 2, our Mem-ChangingGrounder (MCG) achieves the best accuracy in both lowand high-resolution settings (29.2% and 36.8%), outperforming all baselines. That clear margin underlines the superiority and robustness of our solution for grounding performance across a spectrum of visual qualities. At the same time, our method maintains modest action cost Ca and motion cost Cm, which demonstrates a carefully engineered compromise between effectiveness and efficiency. This is because MCG consults memory before moving and then performs short, targeted actions, avoiding long exploratory loops. Wandering Grounding (WG): In comparison, the WG method achieves the second-highest accuracy at both resolutions, but its Ca is about five times larger and its Cm is also much higher. The reason is its wide roaming: the robot repeatedly sweeps across the environment. This traversal lets the agent collect more scene information and improve accuracy, but also forces long travel and many actions, which cause a heavy cost. Central Rotation Grounding (CRG): The CRG method keeps the robot at the scene center and performs one full rotation, which removes translation and reduces actions, making the cost very low. However, this single and constrained viewpoint misses occluded objects, height-shifted objects, or complex spatial layouts, so important visual information is lost. As a result, its grounding accuracy is the lowest among all methods. 7 Table 2: Accuracy and exploration cost of three baselines and Mem-ChangingGrounder (ours) on the ChangingGrounding benchmark under high- and low-resolution settings. The two resolution settings are separated by a middle line. Higher accuracy and lower cost indicate better performance. Best accuracy and lowest cost are bolded. Cost is measured in 1K seconds one unit. Overall Unique Multiple Cost ↓ Method Model Res @0.25 @0.50 @0.25 @0.50 @0.25 @0.50 Ca Ctrans Crot Cm Wandering Grounding GPT-4.1 low 24.80 10.80 30.67 10.67 16.00 11.00 44.23 8.73 8.78 17.51 Central Rotation Grounding GPT-4.1 low 16.80 6.00 19.33 9.33 13.00 1.00 18.00 0.00 1.70 1.70 Memory-Only Grounding GPT-4.1 low 20.80 10.00 22.67 10.67 18.00 9.00 0.00 0.00 0.00 0.00 Mem-ChangingGrounder (ours) GPT-4.1 low 29.20 14.80 30.00 15.33 28.00 14.00 8.53 5.73 3.98 9.70 Wandering Grounding GPT-4.1 high 32.40 12.80 38.67 16.00 23.00 8.00 44.23 8.73 8.78 17.51 Central Rotation Grounding GPT-4.1 high 17.20 6.80 18.00 8.00 16.00 5.00 18.00 0.00 1.70 1.70 Memory-Only Grounding GPT-4.1 high 26.00 12.40 26.67 11.33 25.00 14.00 0.00 0.00 0.00 0.00 Mem-ChangingGrounder (ours) GPT-4.1 high 36.80 18.00 42.67 19.33 28.00 16.00 8.47 5.84 3.92 9.76 Table 3: Memory strategy. Memory Acc. Ca Cm w/o. 35.2 31.94 18.60 w. 36.8 8.47 9.76 Table 4: Fallback. Fallback Acc. Ca Cm w/o. 36.4 8.21 9.53 w. 36.8 8.47 9.76 Table 5: Multi-view projection. Strategy Acc. Ca Cm Baseline 22.4 4.81 2.95 +Multi-scan 28.0 8.52 9.72 +filter 36.8 8.47 9.76 Memory-Only Grounding (MOG): The MOG method also has a low cost since it relies only on stored panoramic memories, with one final adjustment after estimating the target. If the memories are complete and the scene unchanged, accuracy can be high. But if the environment changes or the memory has gaps, the lack of verification and correction quickly reduces accuracy, placing this method far behind ours. Overall, our method reaches the highest accuracy while keeping Ca and Cm low, proving that memory-augmented strategies balance efficiency and precision in changing scenes. Memory-Only Grounding and Central Rotation Grounding cut costs but lose accuracy by avoiding exploration or using oversimplified strategies. Wandering Grounding explores more but ignores memory, so it needs many actions and long travel, leading to higher costs and lower accuracy than ours. 5.3 ABLATION STUDIES We validated several design choices in MCG. For the memory strategy, we compared with a memory-free setting where the system follows Wandering Grounding's pose sequence without memory. As shown in Table 3, both achieve similar accuracy, but the memory-free approach consumes far more costs, confirming the efficiency of memory use. For fallback, we tested the method without this strategy. As shown in Table 4, though accuracy and cost are similar with or without fallback, adding fallback ensures completeness and coverage of edge cases. For multi-view projection, we performed a two-step ablation: first, adding center rotation for multi-view acquisition, then adding outlier removal. As shown in Table 5, each step improved accuracy; although center rotation increases cost, it benefits the localization accuracy. For application scenarios where an accurate 3D bounding box is not necessary, the cost of MCG could be further reduced. Finally, for different VLMs, we compared GPT-4o (OpenAI, 2024) and GPT-4.1 (OpenAI, 2025a). As shown in Table 6, costs are similar, but GPT-4.1 yields higher accuracy, indicating that better VLM directly results in better performance. Also, we compare rendered versus real memory images and find that rendering doesn't have a significant negative impact on grounding accuracy, as shown in Table 7. More detailed analysis of the rendered-versus-real experiment is in Appendix O.2. 8 Table 6: Different VLMs. Method Acc. Ca Cm GPT-4o 31.6 8.34 8.47 GPT-4.1. 36.8 9.52 9.76 Table 7: Render vs real. Method Acc. Ca Cm w. 28 1.74 2.05 w/o. 24 1.62 1.94 Table 8: 3DVG comparasion. Method Acc. Ca Cm 3D-Vista 33.2 18.00 2.39 MCG (ours) 36.8 8.47 9.76 Table 9: Module success rates (%). Sub-module accuracies of MRGS and MS are separated by bars. Total QAS QCS MRGS MS MRGS VP VCI MS OD SRI MP 36 100 100 56 64 56 70 80 64 89 96 75 Table 10: Inference time (s) of different modules. Total View Preselection VLM Predict Images Detection Predictor Project Points 113.2 11.3 11.7 0.2 0.7 5.4 DISCUSSION ABOUT CRITICAL LIMITATIONS OF 3D VISUAL GROUNDING METHODS Although existing 3D visual grounding methods may rescan the entire scene each time to perform our changing grounding task, the approach is still impractical. In dynamic environments, scenes are constantly changing, and it is unrealistic to perform a full rescan every time an object is moved. Worse yet, the system often does not know whether, when, or where the scene has changed, making it difficult to decide whether a new scan is necessary before grounding. This reliance on complete and repeated reconstruction is inefficient and infeasible in practical applications. Nevertheless, for a more complete comparison, we adapted the 3D-Vista (Zhu et al., 2023) model to the memorybased setting, which is pre-trained on the 3RScan dataset. It should be noted that 3D-Vista requires ground-truth bounding boxes, which makes its performance higher than realistic. We also use a simplified way to calculate cost, which makes the cost lower than it is. As shown in Table 8, our method still outperforms it regarding accuracy and cost. More details are in Appendix O.3. 5.5 SUCCESS RATE AND INFERENCE TIME We randomly sampled 50 examples from the test samples and checked the success rate of every stage of our pipeline. The following defines the criteria for whether a step is successful. (1) Query Analysis Stage (QAS): correctly extracts both the target category and any additional constraints. (2) Query Classification Stage (QCS): assigns the query to the proper categories. (3) Memory Retrieval and Grounding Stage (MRGS): picks a view that contains the target object. (4) Multi-view Stage (MS): The 3D IoU between the predicted box and the ground-truth box is ≥0.25. Specifically, the success rate in the MRGS depends on 2 other modules: (a) View Pre-selection (VP) and (b) VLM Choose Image (VCI). The MS depends on 3 other modules: (a) OV Detection (OD); (b) Select Reference Instance (SRI); (c) Multi-view Projection (MP). These modules' detailed explanations are in Appendix O.4. As shown in Table 9, the largest sources of error stem from the MRGS and the MS. Detailed failure cases and descriptions are provided in Appendix O.5 and Appendix N. Also, we report the inference time of different modules in Table 10, with more analysis in Appendix O.4. 6 CONCLUSION In this work, we reformulate 3D visual grounding as an active, memory-driven task and introduce ChangingGrounding, the first benchmark for changing scenes with cost accounting. We also propose a novel and strong baseline named Mem-ChangingGrounder for this new task. MemChangingGrounder demonstrates that leveraging memory and efficient exploration can raise localization accuracy while cutting down grounding costs. We believe our dataset, task, and baselines will motivate and serve as a starting point for future research on 3D visual grounding in changing scenes. More details (e.g., use of LLMs, open problems, etc.) are in the appendix. 9 REFERENCES Panos Achlioptas, Judy Fan, Robert X. D. Hawkins, Noah D. Goodman, and Leonidas J. Guibas. Shapeglot: Learning language for shape differentiation. CoRR, abs/1905.02925, 2019. Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In ECCV, 2020. Aikaterini Adam, Torsten Sattler, Konstantinos Karantzalos, and Tomas Pajdla. Objects can move: 3d change detection by geometric transformation consistency. In European Conference on Computer Vision, pp. 108-124. Springer, 2022. Charu C Aggarwal. A human-computer interactive method for projected clustering. IEEE transactions on knowledge and data engineering, 16(4):448-460, 2004. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020. Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. Scanrefer: 3d object localization in rgb-d scans using natural language. In ECCV, 2020. Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, and Jifeng Dai. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In CVPR, 2024. Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR, 2017. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint , 2018. Jiading Fang, Xiangshan Tan, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Hongyuan Mei, Rares Ambrus, Gregory Shakhnarovich, and Matthew R Walter. Transcrib3d: 3d referring expression resolution through large language models, 2024. Marius Fehr, Fadri Furrer, Ivan Dryanovski, J ̈urgen Sturm, Igor Gilitschenski, Roland Siegwart, and Cesar Cadena. Tsdf-based change detection for consistent long-term dense reconstruction and dynamic object discovery. In 2017 IEEE International Conference on Robotics and automation (ICRA), pp. 5237-5244. IEEE, 2017. Yaroslav Ganin, Sergey Bartunov, Yujia Li, Ethan Keller, and Stefano Saliceti. Computer-aided design as language. Advances in Neural Information Processing Systems, 34:5885-5897, 2021. Juan Angel Gonzalez-Aguirre, Ricardo Osorio-Oliveros, Karen L Rodr ́ıguez-Hern ́andez, Javier Liz ́arraga-Iturralde, Ruben Morales Menendez, Ricardo A Ramirez-Mendoza, Mauricio Adolfo Ramirez-Moreno, and Jorge de Jesus Lozoya-Santos. Service robots: Trends and technology. Applied Sciences, 11(22):10702, 2021. Wenxuan Guo, Xiuwei Xu, Ziwei Wang, Jianjiang Feng, Jie Zhou, and Jiwen Lu. Text-guided sparse voxel pruning for efficient 3d visual grounding. In CVPR, 2025. Ayush Jain, Nikolaos Gkanatsios, Ishita Mediratta, and Katerina Fragkiadaki. Bottom up top down detection transformers for language grounding in images and point clouds. In ECCV, 2022. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. In ICCV, 2023. Rong Li, Shijie Li, Lingdong Kong, Xulei Yang, and Junwei Liang. Seeground: See and ground for zero-shot open-vocabulary 3d visual grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025. 10 Wenbin Li, Sajad Saeedi, John McCormac, Ronald Clark, Dimos Tzoumanikas, Qing Ye, Yuzhong Huang, Rui Tang, and Stefan Leutenegger. Interiornet: Mega-scale multi-sensor photo-realistic indoor scenes dataset. In British Machine Vision Conference (BMVC), 2018. Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, and Cewu Lu. Detailed 2d-3d joint representation for human-object interaction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10166-10175, 2020. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, 2023. Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In ECCV, 2024. Samuel Looper, Javier Rodriguez-Puigvert, Roland Siegwart, Cesar Cadena, and Lukas Schmid. 3d vsg: Long-term semantic scene change prediction through 3d variable scene graphs. arXiv preprint , 2022. Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, and Si Liu. 3d-sps: Single-stage 3d visual grounding via referred point progressive selection. In CVPR, 2022. Ye Mao, Weixun Luo, Junpeng Jing, Anlan Qiu, and Krystian Mikolajczyk. Hypo3d: Exploring hypothetical reasoning in 3d. ICML, 2025. Junjie Ni, Yijin Li, Zhaoyang Huang, Hongsheng Li, Hujun Bao, Zhaopeng Cui, and Guofeng Zhang. Pats: Patch area transportation with subdivision for local feature matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 17776-17786, 2023. OpenAI. Gpt-4 technical report. , 2023a. OpenAI. Gpt-4v. https://openai.com/index/gpt-4v-system-card/, 2023b. OpenAI. Gpt-4o. https://openai.com/index/hello-gpt-4o/, 2024. OpenAI. Gpt-4.1. https://openai.com/index/gpt-4-1/, 2025a. OpenAI. Openai-o1. https://openai.com/o1/, 2025b. Mihir Prabhudesai, Hsiao-Yu Tung, Syed Ashar Javed, Maximilian Sieb, Adam W Harley, and Katerina Fragkiadaki. Embodied language grounding with implicit 3d visual feature representations. CVPR, 2020. Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), NeurIPS, volume 30. Curran Associates, Inc., 2017. Zhipeng Qian, Yiwei Ma, Zhekai Lin, Jiayi Ji, Xiawu Zheng, Xiaoshuai Sun, and Rongrong Ji. Multi-branch collaborative learning network for 3d visual grounding, 2024. URL https:// arxiv.org/abs/2407.05363. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021. Johannes Lutz Sch ̈onberger and Jan-Michael Frahm. Structure-from-motion revisited. In CVPR, 2016. Xiangxi Shi, Zhonghua Wu, and Stefan Lee. Viewpoint-aware visual grounding in 3d scenes. In CVPR, pp. 14056-14065, 2024. Michael A. Sipe and David Casasent. Feature space trajectory methods for active computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(12):1634-1643, 2003. 11 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth ́ee Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint , 2023. Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, Cem Koc, Nate True, Albert Antony, Gokula Santhanam, James Gabriel, Peter Grasch, Oncel Tuzel, et al. Fastvlm: Efficient vision encoding for vision language models. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 19769-19780, 2025. Johanna Wald, Armen Avetisyan, Nassir Navab, Federico Tombari, and Matthias Nießner. Rio: 3d object instance re-localization in changing indoor environments. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7658-7667, 2019. Johanna Wald, Torsten Sattler, Stuart Golodetz, Tommaso Cavallari, and Federico Tombari. Beyond controlled environments: 3d camera re-localization in changing indoor scenes. In European Conference on Computer Vision, pp. 467-487. Springer, 2020. Austin T Wang, ZeMing Gong, and Angel X Chang. Vigil3d: A linguistically diverse dataset for 3d visual grounding. arXiv preprint , 2025. Yanmin Wu, Xinhua Cheng, Renrui Zhang, Zesen Cheng, and Jian Zhang. Eda: Explicit textdecoupling and dense alignment for 3d visual grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Runsen Xu, Zhiwei Huang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Vlm-grounder: A vlm agent for zero-shot 3d visual grounding. In CoRL, 2024a. Runsen Xu, Xiaolong Wang, Tai Wang, Yilun Chen, Jiangmiao Pang, and Dahua Lin. Pointllm: Empowering large language models to understand point clouds. In ECCV, 2024b. Jianing Yang, Xuweiyi Chen, Shengyi Qian, Nikhil Madaan, Madhavan Iyengar, David F Fouhey, and Joyce Chai. Llm-grounder: Open-vocabulary 3d visual grounding with large language model as an agent. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pp. 7694-7701. IEEE, 2024. Zhihao Yuan, Jinke Ren, Chun-Mei Feng, Hengshuang Zhao, Shuguang Cui, and Zhen Li. Visual programming for zero-shot open-vocabulary 3d visual grounding. In CVPR, 2024. Yiming Zhang, ZeMing Gong, and Angel X Chang. Multi3drefer: Grounding text description to multiple 3d objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 15225-15236, October 2023. Chenming Zhu, Tai Wang, Wenwei Zhang, Kai Chen, and Xihui Liu. Scanreason: Empowering 3d visual grounding with reasoning capabilities. In European Conference on Computer Vision, pp. 151-168. Springer, 2024a. Liyuan Zhu, Shengyu Huang, and Iro Armeni Konrad Schindler. Living scenes: Multi-object relocalization and reconstruction in changing 3d environments. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2024b. Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng, Siyuan Huang, and Qing Li. 3d-vista: Pretrained transformer for 3d vision and text alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2911-2921, 2023. 12 A APPENDIX OVERVIEW AND ORGANIZATION This appendix provides supplementary details to support and extend the main paper. The organization of the appendix is as follows: 1. Use of LLMs ( Section B ): This section states the use of Large Language Models (LLMs). 2. Ethics Statement ( Section C ): This section provides ethics statements. 3. Reproducibility Statement ( Section D ): This section provides reproducibility statements. 4. Broader Impact ( Section E ) The broader societal implications of our work are discussed. 5. Benchmark Statement ( Section F ): This section describes the release status and usage policy of the ChangingGrounding Benchmark (CGB). 6. More Details for Data Construction ( Section G ): This section describes more details for the data construction process of CGB. 7. Action Policy ( Section H ): This section shows a more detailed description of action policies, the Omnidirectional Scene Scanner (OSS), and the Spatial Relation Aware Scanner (SRAS) 8. Query Classification ( Section I ): This section presents additional details of the Query Classification module. 9. Memory Retrieval and Grounding( Section J): This section presents a more detailed explanation of two different algorithmic paths based on the query classification results. 10. Multi-view Projection ( Section K ): This section introduces more details for the Multiview Projection module. This module obtains multi-view point clouds and then filters them to get more accurate 3D bounding boxes. 11. VLM Prompts ( Section L ): We provide the full list of vision-language prompts used in MCG, covering all modules including memory retrieval, spatial relation parsing, multiview comparison and selection, and fallback strategies. These prompts form a modular, interpretable interface for multi-stage reasoning. 12. Cost Calculation for Methods ( Section M ): This section details how action costs and motion costs are computed for each method. The evaluation aligns with the cost metrics defined in the main text, and a note explains that all costs are reported in units of 1,000 seconds (e.g., 9k = 9000s). 13. Open Problems ( Section N ): We outline the current limitations of the CGB benchmark and the MCG method, including the lack of allocentric relations, the impact of rendering noise, and the dependency on external 2D models. Future improvements are discussed. 14. More Results ( Section O ): Additional results are presented to assess the robustness of MCG, including a comparison between using rendered vs. real images in memory, and a set of failure cases analyzing the limitations of VLM, SRAS, SAM, and the projection pipeline. A complete example is shown to illustrate how MCG grounds a target object in a changing scene. B USE OF LLMS We used large language models (OpenAI's ChatGPT (OpenAI, 2024)) solely to aid in the polishing of English writing. The models were employed to improve clarity, grammar, and style in the manuscript text. No part of the research design, data analysis, model implementation, or results interpretation was generated or influenced by LLMs. All scientific contributions, ideas, and experimental results are entirely the work of the authors. C ETHICS STATEMENT This work focuses on the design and evaluation of a benchmark. It does not involve human subjects, sensitive personal data, or private information. All datasets used are publicly available. We adhere to academic integrity standards. 13 D REPRODUCIBILITY STATEMENT To lower the research threshold and enable independent fairness audits, we will fully open-source our benchmark generation process, data preprocessing methods, and evaluation scripts. All data are drawn from public or simulated scenes and contain no personally identifiable information. E BROADER IMPACT This study introduces a new task for changing scene 3D visual grounding, releasing the open benchmark CGB and a strong reference method, MCG. This technology can significantly enhance the efficiency of logistics and service robots in dynamic environments, advancing smart manufacturing and supply-chain management. However, rapid automation may temporarily displace low-skill jobs, requiring joint reskilling programs to equip workers with digital skills. F BENCHMARK STATEMENT We will publicly release the proposed CGB benchmark and its accompanying dataset on the Huggingface platform, making it freely accessible to the research community. The dataset will be regularly updated and maintained to ensure its accuracy and relevance. It is important to note that, at this stage, all available data in the CGB benchmark is used exclusively for testing purposes. We hope this benchmark will encourage further research into 3D visual localization in dynamically changing environments. All files within the CGB benchmark are strictly intended for non-commercial research purposes and must not be used in any context that could potentially cause harm to society. Also, to support reproducibility, we provide benchmark testing examples for the proposed MCG method on the GitHub platform, along with detailed environment specifications and a complete execution pipeline to facilitate efficient replication and verification of experimental results. G MORE DETAILS FOR DATA CONSTRUCTION Detail Explanation for Building Spatial Relation Descriptions. When selecting target and anchor categories, we followed several principles from the ReferIt3D benchmark to ensure both robustness and task relevance. For target categories, we excluded classes appearing in fewer than four scenes to reduce longtail bias from infrequent categories. In addition, we included objects that undergo changes across scenes, so the final 209 target categories are the union of objects appearing in at least four scenes and those objects exhibiting changes. This dual criterion for target category selection can reduce long-tail effects and ensure the task remains relevant to dynamic scenes. To further maintain annotation reliability, for each individual scene, we applied the constraint that a target object must have no more than six distractors of the same category in the scene. This was motivated by ReferIt3D annotator feedback showing that error rates rise significantly when distractors exceed six. Importantly, this constraint applies per scene: a category may be excluded as a target in one scene but remain valid in another if the distractor number is within the limit. For anchor categories, we followed a similar strategy, using the 209 target categories plus 24 additional large or frequent objects (e.g., fireplaces, televisions). This design improves diversity while also improving reliability, since such larger anchors are easier to describe in spatial relations. We also enforce at most one anchor object in complex scenes, because our descriptions use spatial templates (Target-Relation-Anchor) rather than detailed attributes; with multiple anchors, it would be unclear which instance is referenced. Overall, this filtering strategy balances statistical robustness with task specificity, yielding a diverse set of over 200,000 prompts while ensuring clear and reliable grounding cases. Statistic. The dataset contains 266,916 referential descriptions that uniquely locate targets through spatial relations. As shown in Figure 4, the word cloud highlights frequent terms such as common furniture, along with many less frequent items. We also merge the original 528 object categories in 3RScan into 225 broader ones for tractability, with the help of ChatGPT-o1 (OpenAI, 2025b). 14 Figure 4: A word cloud generated from spatial-relation descriptions, visually highlighting the frequency of occurring terms. H ACTION POLICY Omnidirectional Scene Scanner. The OSS module is a set of robot agent actions when it needs to locate an anchor object or a target object. From a given pose, the agent will perform a full 360° scan and use the VLM to identify the observation that best matches the given query. As shown in the leftmost figure at the bottom of Figure 3, the agent starts from an initial pose p and a user query, then generates twenty poses by rotating p around the gravity axis (ψ) in steps of 18◦× i for i = 0, 1, . . . , 19. These actions ensure a comprehensive exploration of the surroundings. Subsequently, the agent tilts each pose downward by 20◦around the horizontal (φ) axis to avoid missing objects that lie slightly below the original level. Next, the agent obtains images at each pose, annotates sequential IDs, and dynamically stitches them. Finally, the agent will input the stitched result to VLM to predict the correct image containing the anchor object or target object based on user queries. pi = p · Rψ(18◦× i) · Rφ(-20◦), i = 0, 1, . . . , 19 (1) Spatial Relation Aware Scanner. The SRAS module is a set of robot agent actions when the anchor object has already been located, and its next step is to search for the target object. It is designed to obtain a series of observations starting from the anchor object pose based on the spatial relationship between the anchor and the target object, and then use VLM (OpenAI, 2025a) to predict which of these observations contain the desired target object. As shown in the second image at the bottom of Figure 3, given the anchor image pose pa and the user query Dc, the agent will first use VLM to analyze the positional relationship between the target object Ot and the anchor object Oa based on the query. Leveraging this positional relationship, the agent then adjusts pa to generate a series of new poses. Next, the agent obtains images at these new poses, assigns them unique IDs, and dynamically stitches them together. Finally, the agent inputs stitched images and Dc into the VLM to predict the target image. New poses are generated based on different categories of spatial relationships listed below: • Horizontal and Between - The agent applies the same omnidirectional scanning strategy as the OSS module to process pa to acquire a set of new poses. Then the agent images at these poses and uses VLM to evaluate which image indeed contains the Ot matching the Dc. • Support and Vertical - If VLM analysis shows that Ot is below Oa, the agent will generate a series of new poses by rotating pose pa downward around its local horizontal (φ) axis in 20° increments. Besides tilting directly downward, in order to cover a wider exploration area, the agent will also first rotate pa a little left and right around its gravity axis (ψ), and then rotate downward to generate more poses. Next, the agent obtains observation images at these poses and uses VLM to evaluate which image indeed contains the Ot matching the Dc. If Ot is above Oa, the process is similar to that of the "below" relationship, except that the rotation is upwards. 15 Now on, we'll use the X, Y, and Z axes to explain more details of the pose generation methods based on different spatial relationships. This will help readers follow the accompanying code more easily. Additionally, to simplify later rotation and translation steps, we first adjust the camera pose of the anchor-object image consistently: the Y-axis points downward, and the X-axis points to the right. Up. The camera is first shifted backward along its Z-axis to broaden the view. Next, we rotate the pose around its local Y-axis by -90◦, -45◦, 0◦, 45◦, and 90◦. For each of these turns, we add an extra tilt around the local X-axis by 0◦, 18◦, 36◦, and 54◦. This nested sweep yields 5×4 = 20 new poses, providing a more comprehensive upward field of view. Down. The "down" case follows a process highly similar to the "up" case, with the key difference being the direction of rotation around the local X-axis (which controls the up-down viewing direction). The camera is first shifted backward along its Z-axis to broaden the view. Next, we rotate the pose around its local Y-axis by -90◦, -45◦, 0◦, 45◦, and 90◦. For each of these turns, we add an extra tilt around the local X-axis by 0◦, -18◦, -36◦, and -54◦. This nested sweep yields 5×4 = 20 new poses, providing a more comprehensive downward field of view. Horizontal and Between. For simplicity, we use the same procedure to generate new poses for both "horizontal" and "between" relations (Note that for the "between" relation, the initial anchorobject image only needs to include one of the two anchor objects involved in that relation). First, the camera moves backward along its local Z-axis to widen the view; next, it moves closer to the room's center to cover more of the scene; then, it rotates around its local Y-axis in 18° increments from 0° to 342°, creating 20 evenly spaced horizontal angles; after each Y-rotation, the camera tilts 25° downward around its local X-axis to avoid missing lower parts of the scene. This sweep produces 20 viewpoints that give a broad, slightly downward-looking perspective of the environment. I QUERY CLASSIFICATION As stated in the main text Section 4.3, queries with "between" relation should be categorized as verifiable queries. The "between" relation is complex because it involves two anchor objects. If we followed the verifiable workflow, we would need to confirm both anchors and the target object's final position, and then we may need to build another suitable memory-retrieval and grounding algorithm based on the confirmation results. This is too complex for our current scope. For simplicity, we just mark queries with the "between" relation as unverifiable and only use the first anchor object. The remaining steps use the same procedure as the queries with a "horizontal" relation. J MEMORY RETRIEVAL AND GROUNDING Here are in-depth explanations of 2 algorithmic paths for different kinds of queries. • - Unverifiable Queries - As mentioned in the Query Classification module, for unverifiable queries, the agent cannot ensure that the target object, which is directly grounded in memory, still matches the query in the current scene. Therefore, the agent prioritizes finding the anchor object from memory. The agent first follows the VLM-Grounder (Xu et al., 2024a) approach to preprocess images from memory: a 2D open-vocabulary detector (Liu et al., 2024) filters all images in Mp to generate a preprocessed image sequence {Ip}det containing anchor class objects, which are then dynamically stitched with ID annotations. After that, the agent uses VLM to predict an image Ia p which shows the anchor object clearly from {Ip}det. The agent obtains the pose pa where the image Ia p was taken, then it will go to the same pose in the current scene Sc to get a new observation Ia c . If the anchor object stays still in Ia c , the agent will use the spatial relationship of the query Dc to find the target. Specifically, the agent inputs Dc and the pose pa into the SRAS module for final target localization in Sc. If the anchor object doesn't stay still, the agent will go to the center of Sc and directly search around to find the target. Specifically, the agent initiates OSS at the center of Sc to directly locate the target. • - Verifiable Queries - Different from unverifiable queries, for verifiable queries, the agent prioritizes directly grounding the target object matching the Dc from memory. After a similar pre-process pipeline as verifiable queries, the agent obtains stitched images {Ip}det 16 that contain the anchor class objects or target class objects. Then it uses VLM (OpenAI, 2025a) to select from {Ip}det a target image It p containing target object satisfying the Dc and an anchor image Ia p containing anchor object. Next, by moving to the same camera poses pt and pa of It p and Ia p in the current scene Sc, the agent obtains the corresponding new observations It c and Ia c . Following that, the agent verifies the status of images It c and Ia c . If the target object in It c and the anchor object in Ia c both stay still, the agent directly outputs It c as a result. If the target object doesn't stay still but the anchor object stays still, the agent will use the spatial relationship of Dc to find the target starting from the anchor pose pa. Specifically, the agent will invoke SRAS and input Dc and anchor image pose pa for localization. If the anchor object moves, the agent will first try to locate it in Sc, and then use the relationship to find the target. This is because, for this type of query, once the anchor is found, the target can usually be located through a series of clear actions. Specifically, the agent will move to the center of Sc and use OSS for the anchor position. It should be noted that the rotational search via OSS can terminate early: as soon as the VLM spots the anchor object, the scan stops. Once the anchor is located, the agent finally invokes SRAS to track the target. K MULTI-VIEW PROJECTION Inspired by VLM-Grounder, our multi-scan projection also merges point clouds from several views to build the final cloud. But unlike VLM-Grounder, which uses PATs (Ni et al., 2023) to get appropriate views, we gather views by scanning around the target object. The entire pipeline can be divided into three stages: (1) obtaining a reference point cloud for the target object, (2) performing surround-view scanning around the reference point cloud's center to collect multi-view point clouds, and (3) removing outliers from the aggregated point cloud set. In the main text, we have already clearly described the overall pipeline of the Multi-view Projection module. Here, we first outline the more complete process and then provide the notable details for stages 1 and 2. After memory-guided retrieval or fallback identifies the target image, the agent will use VLM (OpenAI, 2025a) to predict the 2D bounding box of the target object detected in the image. It then feeds the image with this box to SAM (Kirillov et al., 2023) to obtain a segmentation mask, projects the mask into 3D space using depth and intrinsics, and derives a reference point cloud. However, this reference point cloud is not complete. It is derived from a projection of the target object from a single viewpoint, which may not capture all parts of the object and thus results in an incomplete point cloud. To compensate for incomplete single-view point clouds, we introduce this module to refine the grounding result with a multi-view, target-centered scanning strategy. In this module, the agent circles the center of the reference 3D point cloud to get multi-view observations and projects these observations into 3D point clouds. Finally, the agent clusters and filters these point clouds and outputs a more accurate 3D bounding box. Specifically, from the reference point cloud, the agent extracts the 3D bounding box and computes the box's center c and the diagonal length lbox. The agent uses these values to define an observation sphere. The center of this observation sphere is c, and the radius of this sphere is calculated as r = max(lbox/2, 1.5 m). The agent then generates sixteen poses and obtains their corresponding observations on a 30°-tilted ring around the sphere. Subsequently, the agent uses VLM to select the four observations that most clearly and completely capture the target object. For each frame, the agent will select a single valid mask for the target object: it runs an open-vocabulary detector [3] to locate the object's candidate 2D bounding boxes; segments those boxes with SAM to produce candidate masks; projects the masks into the 3D point cloud; finally keeps the one mask whose corresponding point cloud centroid is closest to reference point cloud center c. All valid masks are then projected to 3D point clouds. Finally, to filter the outliers, the agent sorts these clouds by the volume of their bounding boxes and discards any cloud whose volume is substantially larger than that of the next smaller one. The remaining clouds are then fused with the reference cloud to produce the refined point cloud. Getting the Reference Point Cloud. To get the reference point cloud, we need to obtain the target object's 2D bounding box and use the SAM model to get its mask in the image. Next, we can project this mask into 3D space to obtain the object's reference point cloud using the camera parameters and depth data. Therefore, first, the agent feeds the image containing the target object into 17 GroundingDINO and removes any 2D boxes that are too large, since some boxes cover the whole image. (as GroundingDINO may occasionally return boxes covering the entire image). After that, it marks the centers of the remaining boxes on the image. Then it passes the image, the user query, and additional contextual cues (e.g., "the object is located in the bottom-right corner") into the VLM to identify the most semantically relevant 2D bounding box corresponding to the target object. The agent uses this box and its center as a positive point for SAM to create a segmentation mask. Finally, the mask is projected into a 3D point cloud using the camera parameters and the depth image, with the same denoising process strategy as VLM-Grounder during projection. Surround-view Scanning. The agent scans around the reference point cloud's center to capture many new views. For each view, it runs GroundingDINO to find 2D boxes. It projects each box into 3D and measures the Euclidean distance between that box's point-cloud center and the reference center. The box with the shortest distance is kept as the target object in that view. The agent repeats this for all views and gathers the resulting point clouds. It should be noted that, in addition to the outlier removal strategy based on bounding box size sorting described in the main text, we also apply an extra denoising step at the beginning of the candidate box selection process. The details of this initial denoising step are explained below. Among the bboxes, we select the one whose center is closest to that of the reference point cloud. However, due to the limitations of the 2D object detector and SAM, this nearest candidate may not always correspond to the true target object. To address this, we first input the reference image into a vision-language model (VLM) to assess whether the target object is particularly large or partially outside the camera view. If so, no additional filtering is applied. Otherwise, we enforce a spatial constraint requiring that the center of the selected candidate point cloud lies within 0.25 meters of the reference center; this helps prevent the inclusion of significant noise points unrelated to the target object. L VLM PROMPTS For the baseline methods, we use the same prompts as those employed in VLM-Grounder. For the MCG method, we introduce several additional prompts, including those designed for the memory retrieval image module, prompts used to compare whether the target object has moved between images, prompts used in SRAS, and prompts applied in the multi-scan projection process. We will explain each of them in the following sections. The memory retrieval prompt for unverifiable queries selects the top 3 images that clearly capture the anchor object from a video sequence when no reliable grounding information is available. In contrast, the memory retrieval prompt for verifiable queries performs a two-stage reasoning process: it first searches for images that satisfy the query constraints and falls back to identifying the target object if constraints are unmet. The oss prompt for unverifiable queries focuses on selecting the single image that most clearly and completely depicts the target object from a 360-degree scan, while the oss prompt for verifiable queries incorporates a three-step reasoning strategy, identifying the earliest image containing the anchor and then limiting the search space for target localization accordingly. The relation parsing prompt is used to infer the spatial relation (e.g., up, down, near, far, between) between the target and anchor objects from the query. The sras choose target prompt performs target selection under a 360-degree rotation by evaluating multiple views and returning the most confident match. The compare prompt determines whether two images captured from the same pose show the target object at the same position, supporting consistency checks. The fallback prompt implements a robust two-step procedure: locating a query-matching image if available, or falling back to the clearest image showing the object class. The get good view prompt is used to retrieve up to four images that provide the best views of a reference object based on a reference image with a bounding box. Finally, the bboxchoose prompt refines object selection by identifying the most probable target object among multiple candidate boxes, integrating query content and spatial descriptions. Together, these prompts provide a structured, interpretable, and modular interface for vision-language agents to perform complex multi-view spatial reasoning and object grounding tasks. The limit prompt guides the VLM to assess whether the target object is overly large or partially occluded, serving as a prior for filtering unreliable candidate point clouds. Table 11 shows their detailed contents. 18 Table 11: VLM prompts memory retrieval prompt for unverifiable queries You are an intelligent assistant proficient in analyzing images. Given a series of indoor room images from a video, you need to analyze these images and select the best 3 images. Each image has an ID in the upper left corner indicating its sequence in the video. Multiple images may be combined and displayed together to save the place. The anchor object is {anchor class}. If there are some images that are very similar, only select the clearest one to participate in the further selection process. Select the best 3 images from the remaining images according to the following rule: Rule 1: Select those images from the remaining ones that can clearly display the anchor object until the total number of selected images reaches 3. Please reply in json format, including "reasoning" and "selected image ids": { "reasoning": "Your reasoning process", // Your thinking process regarding the selection task "selected image ids": ["00045", "00002", "..."], // A list of the IDs of the best 3 images selected according to the rules. Note that the returned IDs should be in the form of "00045", not "00045.color", and do not add any suffix after the numbers. "unique question": 6 // This is an independent question. Regardless of any other factors, only look for which image among all those provided captures the object {targetclass} most clearly. If none is found, return -1. } Now start the task: There are {num view selections} images for you to select from. memory retrieval prompt for verifiable queries Imagine that you are in a room and tasked with finding a specific object. You already know the query content: {query}, the anchor object class: {anchorclass}, and the target object class: {targetclass}. The provided images are obtained by extracting frames from a video. Your task is to analyze these images to locate the target object described in the query. You will receive multiple images, each with an ID marked in the upper left corner to indicate its order in the video. Adjacent images have adjacent IDs. Note that, to save space, multiple images may be combined and displayed together. You will also be given the query statement and a parsed version specifying the target object class and conditions. Your task is divided into two main steps: Step 1: Based on the query and associated conditions, determine whether any of the provided images contain the target object that satisfies the requirements. If found, return the corresponding image ID; if not, return -1. Step 2: If no matching image is found in Step 1, ignore the query content and examine all images to see if any clearly capture an object of class {targetclass}. If such an image exists, return its ID; otherwise, return -1. Please note that the query statement and conditions may not be fully satisfied in a single image, and they may also contain inaccuracies. Your goal is to find the object that most likely satisfies the query. If multiple candidates exist, choose the one you are most confident about. Your response should be a JSON object containing the following fields: { "reasoning": "Your reasoning process", // Explain how you judged and located the target object. If cross-image reasoning is used, specify which images were involved and how. "find or not": true, // Return true if a suitable image matching the query is found, otherwise return false. "target image id": 4, // Return the image ID that best satisfies the query and conditions. If none found, return -1. "anchor image id": 6, // Return the ID of the image where the anchor object is most clearly visible. "extended description": "The target object is a red box located in the lower left corner of the image.", // Describe the target object in the selected image, focusing on color and position. "unique question": 6 // This is an independent question. Regardless of other factors, select the image that most clearly captures an object of class {targetclass}. If none, return -1. } 19 Now start the task: There are {num view selections} images for your reference. The following are the conditions for the target object: {condition} oss prompt for unverifiable queries Imagine that you are in a room and tasked with finding a specific object. You already know the query content: {query}, the anchor object class: {anchorclass}, and the target object class: {targetclass}. The provided images are frames extracted from a video in which the camera performs a full 360degree rotation around a specific point. Your task is to analyze these images to locate the target object described in the query. You will receive multiple images, each with an ID marked in the upper left corner indicating its sequence in the video. Adjacent images have adjacent IDs. To save space, multiple images may be combined and displayed together. Additionally, you will be provided with the query statement and its parsed version, which specify the target class and grounding conditions. Your goal is to find the image that most clearly and completely captures the target object described by the query. The conditions may not be fully accurate or verifiable from a single image, so the correct object may not satisfy all of them. Try your best to identify the object that most likely meets the conditions. If multiple candidates appear correct, choose the one you are most confident about. While checking each image, consider different views throughout the 360-degree rotation. If you find the target object in an image, also examine whether other images capture the same object more clearly or completely, and return the best one. Your answer should be based on the image where the target object is most clearly and completely visible. Please reply in JSON format, structured as follows: { "reasoning": "Your reasoning process", // Explain the process of how you identified and located the target object. If reasoning across multiple images is used, explain which images were referenced and how. "target image id": 1, // Replace with the actual image ID (only one) that most clearly captures the target object. "reference image ids": [1, 2, ...], // A list of image IDs that also contain the target object and helped in reasoning. "extended description": "The target object is a red box. It has a black stripe in the middle.", // Describe the target object's appearance based on the selected image. Color and features only; do not include position. "extended description withposition": "The target object is a red box located in the lower left corner of the image." // Describe the target object with both appearance and spatial position in the image. } Now start the task: There are {num view selections} images for your reference. Here is the condition for the target object: {condition} oss prompt for verifiable queries Imagine that you are in a room with the task of finding specific objects. You already know the query content: {query}, the anchor object category: {anchorclass}, and the target object category: {targetclass}. The provided images are extracted frames from a video that rotates around a certain point. Each image is marked with an ID in the top-left corner to indicate its sequence in the video. Adjacent images have adjacent IDs. For space efficiency, multiple images may be combined and displayed together. You will also receive a parsed version of the query, which clearly defines the target object category, the anchor object category, and grounding conditions. Your task consists of the following three steps: Step 1: Based on the anchor object category, determine whether any of the provided images clearly capture the anchor object. If no such image is found, return -1 directly. Step 2: If Step 1 is successful, return the smallest image ID (denoted as min ID) among the images that clearly capture the anchor object. Step 3: Among the images with IDs from 0 to min ID, try to find an image that clearly captures the 20 target object and satisfies the query content and conditions. If such an image is found, return its ID; otherwise, return -1. Note: The query statement and conditions may not be perfectly accurate or fully visible in a single image. Try your best to locate the object that is most likely to match these conditions. If multiple objects are plausible, select the one you are most confident about. Here is an example: In Step 1, images 12, 13, 14, and 15 all clearly capture the anchor object, so Step 2 yields min ID = 12. In Step 3, no image from ID 0 to 12 meets the query requirements, so target image id = -1. Please reply in JSON format as follows: { "reasoning": "Your reasoning process", // Explain the reasoning process across all three steps. If cross-image reasoning is involved, specify which images were used and how. "anchor image id": 12, // Return the smallest image ID that clearly captures the anchor object. If none is found, return -1. "target image id": 4, // If anchor image id = -1, then return -1 directly. Otherwise, return the image ID (≤anchor image id) that best satisfies the query. If none found, return -1. "extended description": "The target object is a red box located in the lower-left corner of the image.", // Describe the target object in the image with ID = target image id. If target image id = -1, return None. "unique question": 6 // This is an independent question. Regardless of other factors, return the ID of the image that most clearly captures an object of class {targetclass}. If none found, return -1. } Now start the task: There are {num view selections} images for your reference. Here are the conditions for the target object: {condition} relation parsing prompt You are an agent who is highly skilled at analyzing spatial relationships. You are given a query: {query}, a target object: {classtarget1}, and an anchor object: {anchorclass}. Your task is to determine the spatial relationship of the target object relative to the anchor object based on the query content. The possible spatial relationships are defined as follows: - up: the target object is above the anchor object // the target object is lying on the anchor object // the target object is on top of the anchor object. - down: the target object is below the anchor object // the target object is supporting the anchor object // the anchor object is on top of the target object. - near: the target object is close to the anchor object. - far: the target object is far from the anchor object. - between: the target object is between multiple anchor objects. Please reply in JSON format with one key, "reasoning", indicating the spatial relationship you determine: { "reasoning": "up" // Return the spatial relationship type (up, down, near, far, or between) that best describes the position of the target object relative to the anchor object. } Now start the task. sras choose target prompt Imagine you're in a room tasked with finding a specific object. You already know the anchor object class: {anchorclass}, the target object class: {targetclass}, and the query the target object should match: {query}. The provided images are captured during a 360-degree rotation around the anchor object. You are given a sequence of indoor-scanning video frames and a query describing a target object in 21 the scene. Your task is to analyze the images and locate the target object according to the query content. Each image is annotated with an ID in the top-left corner indicating its sequential position in the video. Adjacent images have adjacent IDs. For space efficiency, multiple images may be combined and displayed together. You are also provided with a parsed version of the query, which lists the conditions that the target object should satisfy. After filtering and comparison, your goal is to identify the image ID that contains the target object most clearly based on the query and conditions. Note that these conditions may not be fully observable in a single image and might be imprecise. The correct object may not meet all conditions. Try to find the object that most likely satisfies them. If multiple candidates seem plausible, choose the one you are most confident about. If no object meets the query criteria, make your best guess. Usually, the target object appears in several images-return the one where it is captured most clearly and completely. Please reply in JSON format with the following structure: { "reasoning": "Your reasoning process", // Explain how you identified and located the target object. If you used multiple images, describe which ones and how they contributed to your decision. "target image id": 1, // Replace with the actual image ID that most clearly shows the target object. Only one ID should be provided. "reference image ids": [1, 2, ...], // A list of other image IDs that also helped confirm the target object's identity. "extended description": "The target object is a red-colored box. It has a black stripe across the middle.", // Describe the target object's color and notable features. No need to mention its position. "extended description withposition": "The target object is a red-colored box located in the lower left corner of the image." // Describe both appearance and position of the object in the selected image. } Now start the task: There are {num view selections} images for your reference. Here is the condition for the target object: {condition} compare prompt You are an intelligent assistant who is extremely proficient in examining images. You already know the target object category: {target class}. Now I will provide you with two images. You need to determine whether the target objects captured in these two images are in the exact same position. Since these two images are taken from the same pose, you only need to check whether the target objects are in the same position within the images. For example, if the target object is a table and you can clearly see that the table is located in the middle of both images, then the target objects captured in these two images are considered to be in the same position. Please reply in JSON format with two keys: "reasoning" and "images same or not": { "reasoning": "Your reasons", // Explain the basis for your judgment on whether the target objects captured in these two images are in the same position. "images same or not": true // It should be true if you think the target objects captured in the two images are in the same position. If you find that the positions of the target objects captured in the two images are different, or if the target object is captured in the first image but not in the second, then it should be false. } fallback prompt Imagine you are in a room tasked with finding a specific object. You already know the query content: {query}, and the target object category: {targetclass}. The images provided to you are frames extracted from a video that rotates around a particular point. Each image is marked with an ID in the top-left corner to indicate its sequence in the video, and adjacent images have consecutive IDs. For space efficiency, multiple images may be combined and displayed together. 22 Your task consists of two steps: Step 1: Locate an image that contains the target object that satisfies the query statement and its associated conditions. The image must clearly and completely capture the target object. If such an image is found, return its ID and skip Step 2. Step 2: If no image meets the query-based requirements, ignore the query and check all provided images. Identify an image that clearly captures the object of category {targetclass}. If such an image is found, return its ID. If none are found, return -1. Please reply in JSON format with the following structure: { "reasoning": "Your reasoning process", // Explain the reasoning behind both steps of your decision-making process. "match query id": 12, // Return the image ID that satisfies Step 1. If no image matches the query, return -1. "object image id": 4, // If Step 1 is successful, return -1 here. Otherwise, return the ID of the image that clearly captures the object in Step 2. If not found, return -1. "extended description": "The target object is a red box located in the lower-left corner of the image." // Provide a brief description of the target object as seen in the selected image. Focus on visual features such as color and location within the image. } Now start the task: There are {num view selections} images for your reference. get good view prompt You are an excellent image analysis expert. I will now provide you with several images, each marked with an ID in the upper left corner. These images are captured by rotating around a target object {target} that is framed with a green bounding box in the reference image. The reference image is also provided, and it contains the target object {target} enclosed by a green box, with the word "refer" shown in red in the upper left corner. Your task is to determine which three (at most four) of the provided images capture the target object from the reference image most clearly and completely. Please note that, for layout efficiency, multiple images may be displayed together in a single composite image. Your response should be in JSON format, containing the following fields: { "reasoning process": "Your reasoning process", // Explain how you select the images that best capture the target object framed in the reference image. "image ids": [2, 4, 5, 7] // Replace with the actual image IDs. Return up to four IDs corresponding to the images that, in your opinion, capture the target object most clearly and completely. } Now start the task: There are {num images} candidate images and one reference image for you to choose from. bboxchoose prompt Great! Here is the detailed version of the picture you've selected. There are {num candidate bboxes} candidate objects shown in the picture. I have annotated an object ID at the center of each object with white text on a black background. You already know the query content: {query}, the anchor object: {anchorclass}, and the target object: {classtarget}. In addition, you will be provided with an extended description: {description}, which includes the position of the target object in the picture. Your task consists of two main steps: Step 1: The candidate objects shown in the picture are not necessarily all of the target class {classtarget}. You must first determine which of them belongs to the class {classtarget}. Step 2: Among the identified candidate objects of class {classtarget}, select the one that best matches both the query content and the extended description (including position). Please reply in JSON format with two fields: 23 { "reasoning": "Your reasoning processing", // Describe your full reasoning process in three parts: (1) how you identified candidate objects of the target class; (2) how you verified them against the extended description; and (3) how you selected the final object ID. "object id": 0 // The object ID you select. Always provide one object ID from the picture that you are most confident about, even if you think the correct object might not be present. } Now start the task: There are {num candidate bboxes} candidate objects in the image. limit prompt Great! Now you will perform an expert judgment on the visibility of a target object in the provided image. You already know the target object category: {targetclass}. You will be shown one image containing this object class. Your task consists of two main steps: Step 1: Some object categories, such as beds, sofas, closets, cabinets, shelves, etc., are considered inherently large. If the target object belongs to this group of large categories, directly return "limit": true without proceeding to the next step. Step 2: If the target class is not considered large, examine the image and determine whether the target object appears to be fully captured. If you believe the object is incomplete or partially outside the frame, return "limit": true; otherwise, return "limit": false. Please reply in JSON format with two fields: { "reasoning": "Your reasoning process", // Describe your reasoning clearly: (1) whether the category is considered large, and (2) if not, how you judged the completeness of the object in the image. "limit": false // Return true only if the object is large, or if it is not large but appears incomplete in the image. } Now start the task: You are given one image and the target object category: {targetclass}. M COST CALCULATION FOR METHODS Before we officially begin, let us once again emphasize that all costs are reported in units of 1,000 seconds (e.g., 9k = 9000s). The results shown in tables ( Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 12) have all been processed with unit normalization. For both the baseline methods and our proposed MCG approach, the robot's initial camera pose is assumed to be at the center of the room (see the main text for the formal definition of this key assumption). For MCG, the full camera trajectory starts from the initial pose and follows a sequence of new poses generated by the MCG pipeline. The cost of the entire trajectory is computed according to the evaluation metrics defined in the main paper. For the WG and CRG baselines, all images are pre-captured and sequentially indexed. We first identify the image whose pose is closest to the initial camera pose and denote its index as n. The camera trajectory then starts from the initial pose and proceeds through the poses of images with indices n, n + 1, n + 2, . . ., wrapping around from the last index back to 1 as needed, and ending at index n -1. The cost is computed based on the same evaluation procedure. For the MOG baseline, which only utilizes memory images, the camera trajectory consists of only two poses: the initial pose and the pose of the target image. Its cost is similarly computed using the defined metrics. N OPEN PROBLEMS We present the ChangingGrounding benchmark (CGB) as the first benchmark for evaluating 3D visual grounding in changing scenes and introduce the Mem-ChangingGrounder (MCG) as a strong baseline method. Nevertheless, both still exhibit the following limitations. 24 N.1 LIMITATIONS OF THE CGB BENCHMARK At present, our CGB dataset models only the relative positional changes between the target and its surroundings, without accounting for critical factors such as lighting variations, object appearance attributes (e.g., color, material, deformation), or dynamic scene interactions. Moreover, its repertoire of spatial relations lacks allocentric descriptions like "Object A is in front of Object B." These omissions narrow the benchmark's breadth and depth when assessing an agent's cross-scene generalization and robustness. Future work can address these gaps by enriching multimodal annotations, introducing additional dimensions of variation, and incorporating allocentric relations, thereby expanding the dataset's scale and diversity and enhancing CGB's applicability and challenge in realworld dynamic environments. N.2 LIMITATIONS OF THE MCG METHOD Limitations of VLM capability. MCG relies heavily on the underlying Vision-Language Model (VLM) to locate target objects in image sequences according to the analysis requirements. As demonstrated by the ablation studies above, the strength of the VLM has a decisive impact on MCG's final grounding accuracy. If the VLM is insufficiently capable-or if the visual information in real-world scenes is unusually complex-MCG's performance can deteriorate. Nevertheless, because VLM technology is advancing rapidly, we can replace the current module with more powerful models in the future to further enhance performance. Noise from rendered images. During the experiments, MCG consistently feeds rendered RGB-D images into the vision-language model (VLM) for inference or uses them for SAM-based segmentation, projection, and related processes. However, the rendering process based on mesh files introduces various types of noise, including artifacts in the RGB images and inaccuracies in the depth maps. Moreover, there may be inherent differences in how VLMs process real versus rendered images. These factors can negatively affect the grounding accuracy. Noise introduced by 2D models. MCG depends on 2-D object detectors and segmentation networks to filter candidate images and perform the final projection. Although state-of-the-art models such as GroundingDINO and SAM are highly capable, they still exhibit missed detections, false positives, imprecise bounding boxes, and segmentation errors. These imperfections propagate through the pipeline and ultimately undermine the accuracy of the grounding results. Future work. Despite these limitations, we believe that our work on MCG and the CGB benchmark provides a strong foundation for future research in the field of grounding tasks in changing scenes. We hope that our contributions will inspire researchers to explore new methods and techniques to address the challenges posed by dynamic scenes. Specifically, we encourage the community to focus on the following open problems: (1) Improving VLM Robustness: Developing more robust Vision-Language Models that can handle complex real-world visual information and reduce the impact of noise; (2) Enhancing Multimodal Integration: Exploring ways to better integrate multimodal data (e.g., combining visual, linguistic, and spatial information) to improve grounding accuracy; (3) Expanding Benchmark Diversity: Contributing to the expansion of the CGB benchmark by adding more diverse scenarios, including variations in lighting, object appearance, and dynamic interactions; (4) Reducing Noise in Rendered Data: Investigating methods to minimize the noise introduced during the rendering process and to bridge the gap between real and rendered images; (5) Advancing 2D-to-3D Projection Techniques: Improving the accuracy and reliability of 2D object detection and segmentation models to enhance the overall grounding performance. We hope that our work will serve as a catalyst for further research in this exciting and challenging domain. By addressing these open problems, we can collectively push the boundaries of 3D visual grounding in changing environments and develop more effective and robust solutions. O MORE RESULTS O.1 TEST SAMPLES To reduce costs, we randomly selected 250 validation samples from the ChangingGrounding dataset for testing. For each sample, first, we select any reference scan as Sc and randomly select one rescan of Sc as Sp (since the reference scan is typically the most complete). We then pick a target O with 25 ID ido. Subsequently, we extract descriptions Do for target object from the ChangingGrounding dataset according to scan id = Sc and target id = ido (2) Finally, we combine the Do with the previous scene memory data Mp of Sp to form a sample. It is important to note that, in order to ensure that the test samples cover diverse types of descriptions, we selected a fixed number of instances from each relation type. Within the 250 samples, both the anchor object and the target object may either remain static or undergo changes. Finally, to satisfy the requirement that the target should be located in the vicinity of the anchor, we constrained the distance between the anchor and the target object to within 1.5 meters. O.2 RENDERED VS. REAL IMAGES IN MEMORY In previous experiments, both the memory and exploration images used by our system were rerendered images. We still don't know how well VLMs work with these synthetic images. To check this, we conduct a comparative experiment with two settings. In the w.rendering setting, both the memory and exploration inputs are re-rendered images, consistent with the main experiment. In the w/o.rendering setting, the exploration images remain rendered, while the memory images are replaced with real photographs. Note that we don't have real images captured with the unified camera module described in the main text Section 3.3. To align our rendered images with the real photos supplied by 3RScan, we render every image using the original camera model from 3RScan. We randomly sample 50 instances from a pool of 250 and observe the final grounding results. As shown in Table 12, experimental findings indicate that using rendered images in memory does not significantly affect the overall grounding accuracy. Results show that the w.rendering setting appears to perform slightly worse than the w/o.rendering setting. That does not prove that rendering is superior because there exists normal experimental variance. Moreover, the MCG pipeline still requires many exploration images that must be rendered for VLM inference. Overall, these results suggest that using rendered images in our experiments is a feasible approach. Table 12: Comparison between using rendered and real images in memory. Version A c M c w. rendering 28 1.74 2.05 w/o. rendering 24 1.62 1.94 O.3 3D VISUAL GROUNDING METHODS IMPLEMENTATION Prior 3D visual grounding methods are not readily adaptable to scenarios involving dynamic visual grounding because they are not designed to leverage memory. These methods typically require the latest point clouds of the current scene for each grounding instance, which is impractical for real-world applications since rescanning the entire scene to obtain updated point clouds is highly inefficient. Nevertheless, we attempt to adapt these methods to incorporate memory for our task setting. We designed a pipeline as follows: the model initially uses point clouds from memory (past scenes) to locate the anchor object based on the query. Once the anchor object's position is identified, the model determines a neighboring region around the anchor object to obtain updated point clouds. This approach eliminates the need for scanning the entire scene. The neighboring region is defined as within a 2-meter radius of the anchor object's position. For cost calculations, we make an approximation based on the assumption that the agent starts from the center of the room, moves to the previously predicted anchor object location, and performs a full 360-degree rotation around the vertical axis to scan the region. It is important to note that this assumption is also not always feasible in real-world scenarios. Specifically, a single 360-degree rotation at one position often cannot capture all details, resulting in estimated costs that are significantly lower than the actual costs. 26 We conducted experiments using the 3D-Vista (Zhu et al., 2023) model, as it is pre-trained on the 3RScan dataset. It should be noted that this model requires pre-detection of all 3D bounding boxes prior to grounding. For our experiments, we utilized GT bounding boxes, which significantly enhance performance beyond realistic scenarios. The final experimental results are presented in the Table 8. We create the pipeline as follows: the agent first uses 3D-Vista to locate the anchor object based on the query in the point cloud of a previous scene. After obtaining the position of the anchor object, the agent uses this position as the center to crop a region within a 2-meter radius from the current scene's point cloud. This region is then provided as input to 3D-Vista for inference, under the same assumption that 3D-Vista has access to the ground-truth bounding box contained within this region. Please note that these experiments are intended solely for reference. We do not consider them to have practical significance due to simplifying assumptions. Specifically, at , our method demonstrates superior accuracy and lower action costs. Additionally, since 3D-Vista performs a full 360-degree rotation during scanning (an impractical scenario), it exhibits nearly zero translation cost and reduced rotation cost. O.4 INFERENCE TIME Success rate. Specifically, A step for modules in the MRGS is counted as successful under 2 following conditions: (a) View Pre-selection (VP): The pre-selected views from the SRAS or the OSS contain the target object. (b) VLM Choose Image (VCI): The VLM predicts an image that contains the target object. A step for modules in the MS is counted as successful under 3 conditions: (a) OV Detection (OD): At least one detection box contains the target object. (b) Select Reference Instance (SRI): The VLM selects the detection box containing the target object. (c) Multi-view Projection (MP): The 3D IoU between the predicted box and the ground-truth box is ≥0.25. Their success rates are reported in Table 9 in the main text. Inference time. The table below shows the time consumption of several modules that we consider important in our framework. Table 13: Inference time of different modules (unit: seconds) Query Analysis View Preselection Detection Predictor 0.8 11.3 0.2 SAM Predictor Dynamic Stitching Project Points 0.5 9.7 0.7 VLM Select Box VLM Analysis (Spatial relations) VLM Analysis (Static) 11.4 1.0 5.4 VLM Predict Images VLM Choose Good Views Depth Denoising 11.7 11.5 0.7 VLM Decide Distance Limitation Total 4.9 113.2 We acknowledge that the inference speed has significant room for optimization, given that this is a research project. For example, as shown in the Table 13, the majority of the time in our pipeline is spent on VLM inference. However, with the rapid advancement of VLM technology, we expect that high-speed large models will soon be deployable on edge devices such as the FastVLM project introduced by Apple (Vasu et al., 2025). FastVLM reaches 85× faster TTFT when compared specifically to LLaVa-OneVision operating at 1152×1152 resolution. This opens up promising opportunities for significantly reducing the overall runtime of our method. O.5 ERROR CASE ILLUSTRATION In this section, we present concrete failure cases in the MCG framework. 27 Wrong images for VLMs final predicting First, owing to the VLM's capacity, it may fail to identify the anchor object in memory images at the beginning, causing errors in the whole grounding process. Once the anchor is wrong, new views from SRAS or OSS often miss the target. Second, views from SRAS and OSS may produce limited viewpoints that lack the correct object (anchor or target), especially when the correct object is located low. Third, there exist a lot of unusable rendering images, which often have large blank or missing regions. In all cases, they lead to a single situation where the VLM can't get any images containing the target object at all, which will cause failure for the final VLM grounding steps. There are some examples shown in Figure 5 and Figure 6. VLMs failure in grounding target images Relational queries involving horizontal spatial reasoning (e.g., "select the chair closest to the table") impose higher demands on the inference capability of vision-language models (VLMs). Such relationships require the model to make fine-grained comparisons based on relative spatial distances rather than absolute object properties. In cluttered scenes, distractors with small distance gaps increase errors, making VLMs prone to wrong selections. An example is shown in Figure 7. Failure in SAM and projection During our experiments, we observed that SAM often produces noisy masks by including pixels unrelated to the target, which we believe may stem from SAM's poor generalization to rendered images and the low quality of these images. This over-segmentation reduces the accuracy of 3D projection and bounding box localization. In addition, since our experiments are conducted on rendered images, missing or incomplete regions often affect precision. Although we applied denoising on depth maps by removing abrupt pixels, residual noise remains a challenge to accurate 3D localization. Examples are shown in Figure 8. Wrong VLM choise Ground truth Figure 5: VLMs failure in memory retrieval, the anchor object is a box. Wrong VLM choise Correct Figure 6: Failure in SRAS, the user query is to find the cushion that is farthest from the PC. Wrong VLM choice Correct Anchor Figure 7: VLMs fail to ground the target image: query "cabinet near the box." 28 Wrong SAM result Rendering noise Figure 8: Failure in SAM and projection. O.6 FULL DEMO In this section, as shown in Figure 9, Figure 10 and Figure 11, we present a representative and structurally concise example to intuitively illustrate how MCG effectively leverages memory information for efficient and accurate target localization in dynamic environments. The example highlights the central role played by the vision-language model (VLM) throughout the entire execution process. We provide a detailed depiction of the VLM's reasoning at each step, demonstrating how it progressively converges on the target object through multi-round perception and decision-making, thereby showcasing its capabilities in semantic understanding and spatial reasoning. Memory images preselection & Dynamic stitching "query": "choose the toilet that is under the flush" "target_class": "toilet" "anchor_class": "flush" memory_retrieval_prompt_for_verifiable_queries {'reasoning': "The query asks for the toilet that is under the flush, with 'flush' as the anchor object and 'toilet' as the target object. I first scanned the images for the presence of a flush (typically a button or lever above a toilet) and a toilet. The clearest images of a toilet are from image IDs 000071, 000072, 000073, 000074, and 000075. Among these, the flush mechanism (the button or lever typically found above the toilet tank) is visible in images 000072, 000073, 000074, and 000075. The toilet is directly under the flush in these images. Image 000073 provides the clearest and most centered view of both the flush and the toilet, making it the best candidate for the target object under the specified condition. For the anchor object (flush), image 000073 also provides a clear view. For the unique question, image 000073 also captures the toilet most clearly.", 'find_or_not': True, 'target_image_id': 73, 'anchor_image_id': 73, 'extended_description': 'The target object is a white toilet located in the center of the image, directly beneath the flush button on the toilet tank.', 'unique_question': 73} Memory retrieve "target_class": "flush" { "reasoning": "In both images, the flush (toilet) is located in the same position within the frame, slightly to the right of the center and oriented in the same direction. The surrounding objects and tiles are also consistent in both images, indicating the camera angle and position have not changed. The only difference is that in the first image, the toilet seat is open, while in the second image, the seat is closed with a decorative cover. However, the position of the flush itself remains unchanged.", "images_same_or_not": true } Stay static or not compare_prompt Figure 9: Case of the MCG grounding part-1. 29 "target_class": "toilet" { "reasoning": "In both images, the toilet is positioned in the same location within the frame, slightly to the right of center and oriented in the same direction. The surrounding objects, such as the bottles and the floor tiles, are also in the same positions in both images. The only difference is that in the first image, the toilet seat is open, while in the second image, the toilet seat is closed and has a decorative cover. This does not affect the position of the toilet itself within the images.", "images_same_or_not": true } Stay static or not compare_prompt limit_prompt Big stuff or occluded stuff "target_class": "toilet" { "reasoning": "The target object is a toilet, which is not considered a relatively large object like a bed, sofa, closet, cabinet, or shelf. Therefore, I proceed to the second step. In the provided image, the toilet is not photographed completely; only a part of it is visible.", "limit": true } Figure 10: Case of the MCG grounding part-2. Full-circle Refer point cloud center Full circle rescan New poses get_good_view_prompt "target_class": "toilet" { "reasoning_process": "To determine which images capture the target object toilet most clearly and completely, I compared each image with the reference image. The reference image shows a toilet enclosed in a green bounding box. I looked for images where the toilet is visible, clear, and framed similarly to the reference image. Images 000001, 000002, 000004, and 000005 show the toilet clearly and from angles that provide a good view of the object, similar to the reference image. These images capture the toilet most completely and clearly among all the provided images.", "image_ids": [1, 2, 4, 5] } Get good multi-views Compute the minimum Euclidean distance Reference Optimal candidates Filter Final result Figure 11: Case of the MCG grounding part-3. 30
2510.14964
Efficient and Flexible Multirate Temporal Adaptivity Daniel R. Reynoldsa,1,∗, Sylvia Amiherea,1, Dashon Mitchella,1, Vu Thai Luanb aDepartment of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland, USA bDepartment of Mathematics and Statistics, Texas Tech University, Lubbock, Texas, USA Abstract In this work we present two new families of multirate time step adaptivity controllers, that are designed to work with embedded multirate infinitesi- mal (MRI) time integration methods for adapting time steps when solving problems with multiple time scales. We compare these controllers against competing approaches on two benchmark problems and see that they offer dramatically improved performance and flexibility, with each proposed fam- ily excelling on different types of multirate applications. The combination of embedded MRI methods and the proposed controllers enable adaptive simulations of problems with a potentially arbitrary number of time scales, achieving high accuracy while maintaining low computational cost. Addition- ally, we introduce a new set of embeddings for the family of explicit multirate exponential Runge–Kutta (MERK) methods of orders 2 through 5, resulting in the first-ever fifth-order embedded MRI method. Finally, we compare the performance of a wide range of embedded MRI methods on our benchmark problems to provide guidance on how to select an appropriate MRI method and multirate controller. Keywords: multirate time integration, time step adaptivity, initial-value ∗Corresponding author Email addresses: dreynolds@umbc.edu (Daniel R. Reynolds), samihere@umbc.edu (Sylvia Amihere), dashonm1@umbc.edu (Dashon Mitchell), Vu.Luan@ttu.edu (Vu Thai Luan) 1This work was funded in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) Program through the FASTMath Institute, under DOE award DE- SC0021354. arXiv:2510.14964v1 [math.NA] 16 Oct 2025 problems 2010 MSC: 65L05, 65L06, 65L20 1. Introduction In this work, we focus on time-step adaptivity within multirate infinites- imal (MRI) methods, that are used when solving systems of ordinary differ- ential equation initial value problems of the form y′(t) = f s(t, y) + f f(t, y), t > t0 y(t0) = y0, (1) Here, the operator f f contains physical processes that evolve on a fast time scale with typical time step size h, and f s contains processes that evolve on a slower time scale with typical step size H ≫h. Multirate methods are frequently used for problems of this form when f s is considerably more costly to evaluate than f f, and thus algorithms that evaluate fs infrequently have the potential for significant runtime improvements over single rate approaches that evolve all processes with time step h. Generally speaking, an explicit infinitesimal method for evolving a single time step yn ≈y(tn) to yn+1 ≈y(tn + Hn), with its embedded solution ˜yn+1 ≈y(tn + Hn), proceeds through the following sequence [1, 2, 3, 4, 5, 6]. 1. Let: Y1 = yn 2. For i = 2, ..., s: (a) Solve: v′ i(θ) = f f(θ, vi)+ri(θ), for θ ∈[θ0,i, θF,i] with vi(θ0,i) = v0,i. (b) Let: Yi = vi(θF,i). 3. Solve: ˜v′ s(θ) = f f(θ, ˜vs) + ˜rs(θ), for θ ∈[θ0,s, θF,s] with ˜vs(θ0,s) = v0,s. 4. Let: yn+1 = Ys and ˜yn+1 = ˜vs(θF,s). MRI methods are determined by their fast stage time intervals [θ0,i, θF,i], initial conditions v0,i, forcing functions ri(θ), and embedding forcing func- tion ˜rs(θ). Both ri(θ) and ˜rs(θ) are constructed using linear combinations of {f s(θF,j, Yj)}, and serve to propagate information from the slow to the fast time scales. Implicit and implicit-explicit extensions of the above MRI algorithm exist, that involve replacing step 2a with an implicit solve for some internal stages Yi. The embedded solution ˜yn+1 is similar to embedded solu- tions in standard Runge-Kutta methods, in that it approximates the solution 2 to an alternate order of accuracy and may be computed with minimal extra effort beyond computing yn+1; in this case it is computed by re-solving the last stage with an alternate forcing function, ˜rs(θ). As seen in steps 2a and 3 above, computation of each stage in an explicit MRI method requires solving a secondary, inner, IVP. Typically, these inner IVPs are not solved exactly, and instead are approximated using another numerical method with inner time steps hn,m such that maxm |hn,m| ≪|Hn|. In the sections that follow, we first summarize how temporal adaptivity is performed for problems with a single time scale in Section 2. In Section 3 we introduce previous work, and propose two new approaches, for dynami- cally adapting both the slow and fast time steps, Hn and hn,m, within MRI methods. Two of these multirate controller families require estimates of the accumulated fast temporal error, so we describe our algorithms for comput- ing those estimates in Section 4. The majority of this manuscript focuses on numerical tests of these multirate controllers in Section 5. In that section, we first describe our two benchmark problems that we use throughout our numerical tests. We then outline the set of embedded MRI methods that we will test, including new embeddings for the explicit multirate exponen- tial Runge–Kutta (MERK) methods that result in the first ever fifth-order embedded MRI method. The remainder of Section 5 performs a variety of numerical experiments to examine the proposed multirate adaptivity strate- gies and the embedded MRI methods. We discuss our conclusions from these tests and identify future directions to extend this work in Section 6. 2. Single Rate Control Before introducing multirate temporal adaptivity, we briefly review the typical approaches for single-rate adaptivity, that attempt to control the local error induced within a given time step. Assuming that the initial condition at the time step tn is exact, i.e., yn −y(tn) = 0, the local error is defined to be εn = y(tn + hn) −yn+1. (2) Single-rate controllers then adapt the step size hn to achieve two primary objectives: (a) ensure the local error is “small enough”, i.e., ∥εn∥< 1, (3) where ∥·∥is a norm that incorporates the requested temporal accuracy (e.g., the WRMS-norm from [7, 8]), and (b) perform as few time steps as possible 3 to meet objective (a). Additional considerations include a smooth step size progression [9, 10, 11, 12], and improving the efficiency and robustness of iterative algebraic solvers that may be used at each step [13, 14]. Single rate adaptivity may be easily incorporated into standard “time marching” schemes. Instead of utilizing a fixed step size hn = h, one begins at t0 with an initial step size h0, and initializes the counter n = 0. Then in a loop over step attempts: (i) compute a candidate approximation y∗ n+1 ≈y(tn + hn) and a corre- sponding approximation of the local error ε∗ n ≈y(tn + hn) −y∗ n+1. (ii) If ∥ε∗ n∥< 1 set tn+1 = tn + hn, yn+1 = y∗ n+1, and n = n + 1. Else: reject the approximation y∗ n+1 (and the corresponding step size hn). (iii) Use ∥ε∗ n∥to estimate a step size ˜h for the subsequent step attempt. Since the true local error εn is inaccessible, then for simplicity in the remain- der of this paper, we use εn to denote the approximation ε∗ n. Whether εn passes or fails the error test (3), step (iii) must select ˜h to use on the next step attempt. This adaptivity controller is a function that typically depends on a small set of (hn−k, εn−k) pairs, i.e., ˜h = C(hn, εn, hn−1, εn−1, . . . , p). There are a myriad of choices for controllers C in the literature, but the simplest approach is the so-called I controller. Assuming that the numerical method for computing yn+1 has order p, then from [15] εn ≈y(tn + hn) −y∗ n+1 = c(tn) hp+1 n + O hp+2 n  = O hp+1 n  for some c(t) independent of hn. Taking the norm of this gives ∥εn∥≈∥c(tn)∥hp+1 n . (4) Since we have just computed the estimate ∥εn∥based off a step with size hn, a piecewise constant approximation c(t) ≈cn allows us to “solve” for ∥cn∥= ∥εn∥/hp+1 n . Assuming that c(t) will remain relatively constant from one step attempt to the next, the candidate step size ˜h is computed to exactly attain an error goal of 1 (or any tol): tol = ∥cn∥˜hp+1 ⇒ ˜h = hn  tol ∥εn∥ 1/(p+1) . 4 This formula automatically grows or shrinks the step size based on whether the error test ∥εn∥1/(p+1) ≤tol passed or failed, respectively. Due to the multiple assumptions above, a safety factor σ < 1 is typically applied, ˜h = σhn  tol ∥εn∥ 1/(p+1) . More advanced approaches for C are typically based on control theory, and use additional (hn−k, εn−k) values to build higher-degree piecewise polynomial approximations of the principal error function [9, 13, 14, 10, 11, 12]. To extend these ideas to the multi-rate context, we require two comple- mentary components: (a) strategies to estimate local temporal error, and (b) algorithms for selecting step sizes following each attempted step. Since (b) dictates our needs for (a), we begin with multirate temporal control. 3. Multirate Temporal Controllers Multirate temporal adaptivity has not been widely studied in the litera- ture. The two articles we are aware of are [16] and [17]. The former is specific to “MrGARK” methods [18], that require a more rigid algorithmic structure than the MRI algorithm introduced in Section 1. The latter works with MRI methods, and will serve as a baseline comparison for our new methods. In the following subsections, we consider three such approaches in this work. 3.1. Coupled Stepsize (“H-h”) Control The adaptive multirate controllers from [17] simultaneously predict both a slow step size Hn, and a multirate ratio Mn, such that the inner solver takes fixed small substeps hn = Hn/Mn throughout each subinterval [θ0,i, θF,i]. Due to their use of fixed inner steps that are adapted only when the outer step Hn is adapted, we will refer to these as coupled stepsize (“H-h”) controllers. In our subsequent comparisons, we consider four controllers introduced in [17]: MRI-CC, MRI-LL, MRI-PI, and MRI-PID. 3.2. “Decoupled” Multirate Control Our simplest proposed MRI controller uses two decoupled single-rate con- trollers to separately adapt the inner and outer time steps, i.e., ˜H = Cs(Hn, εs n, Hn−1, εs n−1, . . . , P), ˜h = Cf(hn,m, εf n,m, hn,m−1, εf n,m−1, . . . , p), (5) 5 where P and p are the orders of the MRI method and inner solver, respec- tively. Here, (Hk, εs k) are the stepsize and local error estimates for time step k at the slow time scale, and (hk,ℓ, εf k,ℓ) are the stepsize and local error es- timates for the fast substep ℓwithin the slow step k. We note that in this approach, both Cs and Cf are decoupled, and thus selection of ˜H and ˜h occur independently through application of any pair of single-rate controllers. We summarize their use in the following pseudocode. Given the current state yn, candidate step Hn, and controllers Cs and Cf, perform a single MRI step attempt as follows. 1. Let: Y1 = yn. 2. For each MRI stage i = 2, . . . , s: (a) Use an adaptive solver with Cf to ensure ∥εf n,m∥≤1 for the IVP v′ i(θ) = f f(θ, vi) + ri(θ), θ ∈[θ0,i, θF,i], vi(θ0,i) = v0,i. (b) Let Yi = vi(θF,i). 3. Use an adaptive solver with Cf to ensure ∥εf n,m∥≤1 for the IVP ˜v′ s(θ) = f f(θ, ˜vs) + ˜rs(θ), θ ∈[θ0,s, θF,s], ˜vs(θ0,s) = v0,s. 4. Let: yn+1 = Ys, ˜yn+1 = ˜vs(θF,s), and εs n = yn+1 −˜yn+1. 5. Use Cs to select a new step size ˜H for the ensuing step attempt. Since this approach ignores any coupling between time scales, we expect these controllers to work well for multirate problems wherein the time scales are somewhat decoupled, where errors introduced at one scale do not “pol- lute” the other, for example reaction-diffusion systems [19], acoustic-elastic wave systems [20], or transport-chemistry models [21]. Furthermore, due to its decoupled nature, this technique can be trivially extended to an arbitrary number of time scales, allowing temporal adaptivity for so-called “telescopic” multirate methods. 3.3. Stepsize-tolerance (“H-Tol”) Control Our second proposed family are the so-called “H-Tol” multirate con- trollers. As outlined in Section 2, standard controllers attempt to adapt step sizes to control local error within each step to achieve a requested tol- erance. However, MRI methods must ask another “inner” adaptive fast-scale solver to produce the stage solutions vi(θF,i), that result from sub-stepping over each interval [θ0,i, θF,i]. Local errors within the fast integrator may ac- cumulate, resulting in an overall fast-solver error εf i that could exceed the requested tolerance. If that inner solver can produce both vi(θF,i) and an 6 estimation of the accumulated error, εf i,approx, then the tolerances provided to that next-fastest solver can be adjusted accordingly to ensure MRI stage solutions Yi = vi(θF,i) that are within the overall tolerances requested of the outer MRI method. To this end, first assume that at step n, the MRI method requests so- lutions from the inner solver that are tolfacn more accurate than the MRI method, i.e., if the MRI method strives to achieve local errors ∥εs n∥≤1, then the inner solver strives to achieve local substep errors ∥εf n,m∥≤tolfacn. To account for accumulation of temporal error in the inner solver, we assume that its accumulated error over the slow step [tn, tn + Hn] has the form ∥εf n∥= χ(tn)Hntolfacn, (6) where χ(t) is independent of tolfacn, but may vary in time. We construct this ansatz as follows: since the fast integrator takes multiple substeps to integrate over the full step Hn then the accumulated error should be proportional to this interval width; the problem- and method-dependent factor χ(tn) models this constant of proportionality. By grouping the terms from (6) as ∥εf n∥= (χ(tn)Hn) (tolfacn)0+1, we see that this fits the standard asymptotic error assumption used in single- rate controllers (4), through identifying χ(tn)Hn with ∥c(tn)∥, the control parameter tolfacn with hn, and by defining the “order” p = 0. Thus, any single-rate controller that relies on the asymptotic error assumption (6) may be used to adjust tolfacn between slow step attempts. Thus we construct an H-Tol controller from three single-rate controllers: • Cs,H – adapts Hn to achieve user-requested solution tolerances using data (Hn, εs n, Hn−1, εs n−1, . . . , P). • Cs,Tol – adapts tolfacn using the strategy described above with data (tolfacn, εf n, tolfacn−1, εf n−1, . . . , 0). • Cf – adapts inner time steps hn,m to achieve the current requested tolerance, tolfacn, using data (hn,m, εf n,m, hn,m−1, εf n,m−1, . . . , p). We summarize their use in the following pseudocode. Given the current state yn, candidate step Hn, and controllers Cs,H, Cs,Tol and Cf, perform a single MRI step attempt as follows. 1. Let: Y1 = yn. 7 2. For each MRI stage i = 2, . . . , s: (a) Use an adaptive solver with Cf to ensure ∥εf n,m∥≤tolfacn for the IVP v′ i(θ) = f f(θ, vi) + ri(θ), θ ∈[θ0,i, θF,i], vi(θ0,i) = v0,i. (b) Let Yi = vi(θF,i). 3. Use an adaptive solver with Cf to ensure ∥εf n,m∥≤tolfacn for the IVP ˜v′ s(θ) = f f(θ, ˜vs) + ˜rs(θ), θ ∈[θ0,s, θF,s], ˜vs(θ0,s) = v0,s. 4. Let: yn+1 = Ys, ˜yn+1 = ˜vs(θF,s), εs n = yn+1 −˜yn+1, and retrieve the accumulated fast error εf n from the inner solver. 5. Use Cs,H to select a new step size ˜H for the ensuing step attempt. 6. Use Cs,Tol to select a tolerance factor ^ tolfac for the ensuing step attempt. This class of controllers may also be applied to telescopic MRI methods, since it only focuses on the relationship between two successive time scales. 4. Fast Temporal Error Estimation The H-h and H-Tol MRI adaptivity families require estimates of the temporal errors at both the slow and fast time scales, εs n and εf n, respec- tively. While the slow error may be estimated using the MRI method’s time step solution and embedding, εs n = ∥yn −˜yn∥, non-intrusive approaches for estimating εf n are more challenging. We employ the following two strategies. Since the H-Tol multirate controller leverages an adaptive fast solver, we leverage the fact that at each substep of the fast integrator tn,m, it computes an estimate of the local temporal error, εf n,m (e.g., using its own time step solution and embedding). Since the fast solver may be run with different tol- erances than the slow solver, we denote its relative and absolute tolerances as reltolf n and abstolf, respectively. We then accumulate the local error esti- mates from each substep by assuming that none of these local errors cancel, estimating the total fast error as the sum of the substep errors, εf,add n = reltolf n reltols n X m∈M ∥εf n,m∥, (7) where M is the set of all steps since the accumulator was last reset (generally at the start of the step, tn). To understand the tolerance ratio, we note that both the fast and slow solvers use a weighted root-mean squared norm, such that an error norm of 1 corresponds with “sufficiently accurate,” i.e., ∥εf n,m∥=  1 N N X i=1 εf n,m,i abstolf + reltolf n|yn,m,i| !2  1/2 , 8 where yn,m is the time step solution at the start of the substep, and the vectors yn,m and εf n,m have N components. The prefactor reltolf n serves to convert the accumulated fast errors to a raw estimate of the relative error, before the factor 1/reltols n scales this back to normalized relative error. Since the H-h multirate controller uses fixed fast substeps we cannot assume that the fast solver is adaptive. Thus, we use a more costly approach that runs the fast integrator using fixed time steps of size h and kh (e.g., with k = 2) to achieve fast solutions yf n,h and yf n,kh, which we then subtract to estimate the fast temporal error, εf,dbl n = 1 kp −1 ∥yf n,m,h −yf n,m,kh∥, (8) where p is the global order of accuracy for the method. 5. Numerical Tests In this section, we perform numerical tests to verify the proposed Decoupled and H-Tol multirate adaptivity strategies, and to compare their performance against the previous H-h controllers. All codes for performing these tests and reproducing the figures in this paper are available in [22], and use the ARKODE time integration solver [23], which is part of the SUNDIALS li- brary [7, 8]. We use two benchmark problems. The first is a multirate, nonlinear version of the Kværno-Prothero-Robinson (KPR) ODE test problem, that has been slightly modified from [6, Section 6.2]: u′(t) v′(t)  = G es ef −1  (u2 −p −2) /(2u) (v2 −q −2) /(2v)  + p′(t)/(2u) q′(t)/(2v)  (9) over 0 < t < 5, where p(t) = cos(t) and q(t) = cos(ωt(1 + e−(t−2)2)). This problem has analytical solution u(t) = p 2 + p(t) and v(t) = p 2 + q(t), and its behavior is dictated by the parameters: es determines the strength of coupling from the fast to the slow scale, ef determines the coupling strength from the slow to the fast scale, G < 0 determines the stiffness at the slow time scale, and w determines the time-scale separation factor. We split the right-hand side above into up to three functions for the slow-implicit (fi), 9 slow-explicit (f e) and fast (f f) operators, respectively,  G u2−p−2 2u + es v2−q−2 2v 0  | {z } fi +  p′(t) 2u 0  | {z } fe +  0 ef u2−p−2 2u −v2−q−2−q′(t) 2v  | {z } ff ; (10) for non-ImEx MRI methods the combined slow operator is fs = f i + f e. The second benchmark is a stiff version of the Brusselator problem, orig- inally proposed in [24],   u′(t) v′(t) w′(t)  =   a + vu2 −(w + 1)u wu −vu2 b−w ϵ −wu   (11) for 0 < t < 10. We use the initial conditions u(0) = 1.2, v(0) = 3.1, and w(0) = 3, and parameters a = 1, b = 3.5 and ϵ = 5 × 10−6, unless stated otherwise. We note that ϵ determines the stiffness and/or time scale separation factor for the problem, such that typically H/h = 1/(100ϵ). We again define a splitting for this right-hand side,   −(w + 1)u wu −wu   | {z } fi +   a + vu2 −vu2 0   | {z } fe +   0 0 b−w ϵ   | {z } ff (12) 5.1. Embedded MRI Methods To explore multirate temporal adaptivity, we rely on MRI methods that include embeddings, to provide approximations of both the time step solution yn and embedding ˜yn. To this end, we run tests using the following 15 methods (with orders of accuracy in parentheses): • MRI-GARK methods from [2]: – Explicit ERK22a (2), ERK22b (2), ERK33a (3), ERK45a (4); – Implicit IRK21a (2), ESDIRK34a (3), and ESDIRK46a (4); • the explicit RALSTON2 (2) MRI-GARK method from [25]; • IMEX-MRI-SR methods from [6]: IMEXSR21 (2), IMEXSR32 (3), and IMEXSR43 (4); 10 • explicit MERK methods MERK21 (2), MERK32 (3), MERK43 (4), and MERK54 (5). The embeddings for each of the above methods have an order of accuracy one lower than the method itself. We note that the original MERK2, MERK3, MERK4, and MERK5 methods from [4] do not include embeddings. We provide embeddings for each in Appendix Appendix A. 5.2. Multirate Temporal Controllers With our fast temporal estimation strategies in place, we now examine the performance of the MRI adaptivity algorithms from Section 3. For both the Decoupled and H-Tol controller approaches, we construct MRI controllers using each of the H211, H0211, H0321, H312 and I single-rate adaptivity controllers [11], from the ARKODE library [23]; we additionally test the previously-introduced H-h controllers MRI-CC, MRI-LL, MRI-PI, and MRI- PID [17]. For all tests, we pair each of the 15 MRI methods from Section 5.1 with a fast explicit Runge–Kutta method of the same order. We apply these to two test problems: • the KPR problem (9) (with parameters G = −100, es = 5, ef = 0.5, ω = {50, 500}), to assess controller performance when both fast and slow scales evolve dynamically, and where the scale separation factor is varied; • the stiff Brusselator problem (11) (with parameter ϵ = {10−4, 10−5}), to assess controller performance when the fast scale is stability-limited at varied stiffness levels but generally evolves at a constant rate, but the slow scale varies in time. We run each problem over their full temporal duration using a fixed abso- lute tolerance abstol = 10−11, and a variety of relative tolerances, reltol = 10−k, k = 3, . . . , 7. This corresponds with a total of 4200 test combinations. For each test combination and time step (tn−1, yn−1) →(tn, yn), we compute a reference solution using a fifth-order single-rate explicit solver with ini- tial condition (tn−1, yn−1) and tolerances reltol = 10−10 and abstol = 10−12, respectively, to evolve to (tn, yref(tn)). We then determine each method’s ability to achieve the target local accuracy as accuracy = max yn,l −yref,l(tn) abstol + reltol |yref,l(tn)| , (13) 11 where the maximum is taken over all solution components (l) and over all time steps (n). We note that an optimal method would achieve “accuracy” values exactly equal to 1; however in practice most combinations of adaptivity controllers and time integration methods would be considered successful if they achieve an accuracy value within a factor of 100 of the requested relative tolerance. In addition to computing these accuracy metrics, we record the overall integrator effort, as estimated by the number of time steps required at each of the fast and slow time scales. 5.2.1. Controller robustness We begin with a simple question: how effective is each MRI adaptivity family (Decoupled, H-Tol, and H-h) at achieving a range of target tolerances according to the metric (13)? To answer this high-level question, Figure 1 shows aggregated results across the full suite of tests. Here, the top six plots show the KPR problem having multirate ratios ω = {50, 500} for second order MRI methods (left), third order MRI methods (middle), and higher- order MRI methods (right), and the bottom six plots show similar results for the Brusselator problem with stiffness factors ϵ = {10−4, 10−5}. Within each plot, we present shaded regions for each family, showing the range of “ac- curacy” values attained by MRI methods and controllers within that family for each tolerance. We can assess the robustness of each family by examin- ing how tightly these plots cluster around the target accuracy of one. Due to their disparate results, we separate our comments on the Decoupled and H-Tol controller families from the H-h family. For both problems and multirate regimes, the Decoupled and H-Tol con- troller families performed excellently, typically achieving solutions within a factor of 10 from the requested tolerances. There were a few outliers among these for the KPR problem at ω = 500 and the loosest relative tolerances 10−4 and 10−3, where for some decoupled controllers the ESDIRK46a and ERK45a methods had errors over 100x larger than requested. Due to its stiff- ness, the Brusselator problem had slightly more outliers, with ESDIRK34a giving excessive error for some tighter tolerances, and ESDIRK46a struggling to attain solutions within 100x of the target across a wide range of tolerances and single-rate controllers. We additionally note that from the plots in Fig- ure 1 it is difficult to discern significant performance differences between the Decoupled and H-Tol families, indicating that their accuracy depends more strongly on the underlying adaptive MRI method or single-rate controller than the time step selection mechanism. Based on these results, we conclude 12 10−7 10−6 10−5 10−4 10−3 reltol 10−1 100 101 102 103 accuracy ω = 50 10−7 10−6 10−5 10−4 10−3 reltol 10−1 100 101 102 103 104 accuracy ω = 500 10−7 10−6 10−5 10−4 10−3 reltol 100 101 102 103 104 10−7 10−6 10−5 10−4 10−3 reltol 100 101 102 103 104 10−7 10−6 10−5 10−4 10−3 reltol 100 101 102 103 10−7 10−6 10−5 10−4 10−3 reltol 100 101 102 103 104 Family H-h Htol Decoupled 10−7 10−6 10−5 10−4 10−3 reltol 101 103 105 107 109 accuracy ϵ = 0.0001 10−7 10−6 10−5 10−4 10−3 reltol 101 104 107 1010 accuracy ϵ = 1e-05 10−7 10−6 10−5 10−4 10−3 reltol 101 103 105 107 109 y 10−7 10−6 10−5 10−4 10−3 reltol 101 104 107 1010 10−7 10−6 10−5 10−4 10−3 reltol 100 102 104 106 y 10−7 10−6 10−5 10−4 10−3 reltol 100 102 104 106 108 1010 Figure 1: Accuracy measurements for each multirate controller family, when tested across a wide range of tolerances and individual MRI controllers. Columns denote MRI method accuracies: left are O(H2), middle are O(H3), and right are O(H4) and O(H5). The KPR test problem is in the top two rows, and the Brusselator test problem is in the bottom two rows. Note that where two regions overlap, their shading mixes together; the H-Tol and Decoupled families perfectly overlap, resulting in a teal-gray color. that both of these families are robust across a wide range of MRI methods and tolerances, and should be applicable to most multirate applications. The H-h controller family did not fare as well. Although some methods and controllers were able to meet the requested tolerances, most had errors that were orders of magnitude larger than requested. Generally, the H-h controllers deteriorated as the problems became more multirate; in separate tests on weakly multirate problems (e.g., KPR with ω = 5 and the stiff Brus- 13 selator with ϵ = 0.01) this family was generally able to meet the requested error tolerances, which agrees with the results from [17]. The only outliers from this generally poor performance were the MERK21 and MERK32 methods, that achieved the target accuracy for all tolerances and multirate parame- ters. No other combination of adaptive MRI method and H-h controller was able to achieve even within 10x of the target accuracy for the challenging problems with ω = 500 and ϵ = 10−5. We cannot yet explain why the em- bedded MERK methods with H-h controllers outperform other embedded MRI methods; however, we generally conclude that controllers in the H-h family are considerably less robust than either the Decoupled or H-Tol fami- lies for applications with nontrivial multirate nature, so we recommend that practitioners apply caution when using the H-h family. 5.2.2. Adaptive MRI efficiency We now turn our attention to the computational efficiency of the indi- vidual adaptive MRI methods. To test computational efficiency for a given embedded MRI method and multirate controller, we ran each problem us- ing relative solution tolerances 10−k, k = 3, 4, . . . , 7. As in [3, 17, 4, 5], we then estimate the computational “work” required for a calculation by sepa- rately considering the number of slow steps and the total number of fast and slow steps. The former is relevant for multirate applications where the slow operators require significantly more computational effort than the fast oper- ators, and thus MRI methods are applied to reduce the number of slow time steps. The latter is relevant for applications where the slow and fast opera- tors require similar computational effort. For either metric, we then plot the computational efficiency by graphing the solution error as a function of work, overlaying these curves for a variety of methods. For any desired accuracy, the left-most curve thus corresponds with the most efficient method. In lieu of presenting plots that overlay hundreds of combinations of MRI methods and adaptive controllers, we first downselected only the most per- formant combinations. Within each test problem (KPR with ω = {50, 500}, and stiff Brusselator with ϵ = {10−4, 10−5}), MRI method order (2, 3, and higher), and work metric (slow steps, fast steps), we: • define a set of 20 logarithmically-spaced test tolerances in [10−6, 10−2]; • at each test tolerance, we estimate the work required to attain the target tolerance by interpolating the (work, error) pairs from our data; 14 • rank each method+controller combination for each test tolerance, ac- cumulating their average rank over all test tolerances. To more rigorously compare the performance of MRI methods and con- troller combinations, we also conducted a variety of statistical analyses of the full set of efficiency data. We used a one-way Analysis of variance (ANOVA) to analyze controllers and a repeated measure ANOVA for each MRI method, examining fast and slow time scales separately. Both analyses yielded p- values of zero, indicating a statistically significant difference between con- trollers and between methods of a given order. To identify the best-performing method of a given order for either the fast or slow work metric on each test problem, we first grouped the data by controller. For each controller and test problem, we calculated the mean and standard deviation of all rank values across methods of that particular order. Next, we condensed each method’s performance into a single value by averaging its rank values. Finally, we calculated the z-score for each method to determine its deviation from the mean, and then averaged these z-scores across all controllers. An MRI method with negative z-score (i.e., below the mean) indicates better performance, while a positive z-score (above the mean) signifies poorer performance. A similar z-score analysis is conducted to determine the best-performing controllers across all methods and test prob- lems, for the fast and slow time scales. We include these statistical results in our discussion below. Family performance. As observed in our robustness tests in Section 5.2.1, the largest determining factor for adaptive efficiency was the controller family; this was followed by the embedded MRI method itself, and then the specific choice of controller. Specifically, when comparing the average rankings of each family across all test problems, MRI methods and work metrics, the H-Tol and Decoupled families had z-scores of -0.349 and -0.340, respectively, while the H-h family had a z-score of 0.862. This indicates that the proposed families performed similarly to each other, but were far more efficient than the H-h family. MRI method performance. To compare individual embedded MRI methods, we downselected to the twelve fastest MRI method and controller pairs for each combination of problem, method order, and work metric. This process yielded eight sets of top-performing pairs. We then computed the intersection of the two sets corresponding to the different multirate values, retaining only those pairs that consistently rank among the top 12 for both. 15 In Figures 2-4, we overlay the efficiency plots for these sets of “fastest” meth- ods to determine the most performant MRI method and adaptivity controller pairs. In each figure, the KPR test problem is on the top row, and the stiff Brusselator test problem is on the bottom rows. Within each row, we show the plots for slow computational efficiency on the left. 102 103 104 slow work 10−8 10−7 10−6 10−5 10−4 Error ω=50 102 103 104 slow work 10−8 10−7 10−6 10−5 10−4 10−3 10−2 ω=500 Method + Controller IRK21a+D-I IRK21a+HT-H0211 IRK21a+HT-H0321 IRK21a+HT-H211 IMEXSR21+HT-H211 IRK21a+HT-I IMEXSR21+HT-I IRK21a+HT-H312 IMEXSR21+HT-H312 (a) 103 104 105 fast work 10−7 10−6 10−5 10−4 10−3 Error ω=50 104 105 106 fast work 10−7 10−6 10−5 10−4 10−3 10−2 ω=500 Method + Controller ERK22b+D-H0321 IRK21a+D-H0321 RALSTON2+D-H211 ERK22b+D-H211 IRK21a+D-H211 RALSTON2+D-I ERK22a+D-I ERK22b+D-I IRK21a+D-I ERK22b+D-H312 IRK21a+D-H312 (b) 102 104 slow work 10−6 10−5 10−4 10−3 Error ϵ=0.0001 102 103 104 slow work 10−6 10−5 10−4 10−3 ϵ=1e-05 Method + Controller RALSTON2+HT-H0211 ERK22a+HT-H0211 RALSTON2+HT-H0321 ERK22a+HT-H0321 MERK21+HT-H0321 RALSTON2+HT-I ERK22a+HT-I RALSTON2+HT-H312 MERK21+HT-H312 (c) 104 105 106 fast work 10−6 10−5 10−4 10−3 10−2 Error ϵ=0.0001 105 106 fast work 10−6 10−5 10−4 10−3 10−2 ϵ=1e-05 Method + Controller ERK22b+D-H0211 ERK22b+D-H0321 IRK21a+D-H0321 IRK21a+D-H211 ERK22b+D-I IRK21a+D-I ERK22b+HT-H0211 ERK22b+HT-H0321 IRK21a+HT-H0321 ERK22b+HT-I IRK21a+HT-I (d) Figure 2: Efficiency comparisons for the top second-order adaptive MRI methods. The top row contains the slow and fast time scales for the KPR test problem with multirate ratios ω = {50, 500}. The stiff Brusselator test problem with both stiffness parameters ϵ = {10−4, 10−5} is on the bottom. Figure 2 shows the efficiency results for the best second-order adaptive MRI method combinations. The top MRI methods showed the most vari- ability in their performance at the slow time scale (left plots). Here, different MRI methods excelled for each problem, with IRK21a the best for the KPR problem across a range of H-Tol controllers (plus one Decoupled controller), while RALSTON2 and ERK22a dominated the stiff Brusselator problem (again 16 Table 1: Average rank z-scores for embedded second-order MRI methods. MRI Slow Fast Method KPR Bruss Avg KPR Bruss Avg IRK21a -1.50 -0.17 -0.84 -0.95 -1.09 -1.02 RALSTON2 0.48 -0.76 -0.14 -0.35 -0.17 -0.26 ERK22a 0.32 -0.34 -0.01 0.10 0.16 0.13 MERK21 0.43 -0.09 0.17 0.82 0.76 0.79 ERK22b 0.86 -0.47 0.19 -1.11 -1.15 -1.13 IMEXSR21 -0.58 1.84 0.63 1.49 1.49 1.49 Table 2: Average rank z-scores for embedded third-order MRI methods. MRI Slow Fast Method KPR Bruss Avg KPR Bruss Avg ERK33a 0.10 -1.01 -0.46 -0.30 -0.86 -0.58 ESDIRK34a 0.05 0.21 0.13 -1.25 -0.83 -1.04 IMEXSR32 -1.10 1.38 0.14 1.17 1.36 1.27 MERK32 0.95 -0.58 0.19 0.37 0.33 0.35 using various H-Tol controllers). The corresponding z-scores for each MRI method’s average rank are given in Table 1. At the fast time scale (right plots), the MRI methods showed more consistent performance across both test problems, with ERK22b and IRK21a giving the best performance, and IMEXSR21 the worst performance. Interestingly, when measuring fast cost the Decoupled controllers rank near the top more frequently than H-Tol. We additionally note a feature that will be present in higher-order methods as well – the fast time scale stiff Brusselator work-precision curves are nearly vertical, a characteristic of explicit methods on stiffness-dominated problems. The z-scores for these methods are also given in Table 1. From these results, we conclude that the second-order adaptive MRI methods are generally ef- fective at both fast and slow time scales, with IRK21a and RALSTON2 being the most efficient across both problems and time scales, while IMEXSR21 was generally the least efficient second-order method. Figure 3 shows the efficiency of the best third-order adaptive MRI method combinations. Again, for problems where the cost is dominated by operators at the slow time scale, the KPR and stiff Brusselator problems have different optimal MRI methods. For KPR, by far the most efficient MRI method was 17 102 103 104 slow work 10−7 10−6 10−5 10−4 10−3 Error ω=50 101 103 slow work 10−7 10−6 10−5 10−4 10−3 ω=500 Method + Controller IMEXSR32+HT-H0211 ESDIRK34a+HT-H0321 IMEXSR32+HT-H0321 ESDIRK34a+HT-H211 IMEXSR32+HT-H211 ESDIRK34a+HT-I IMEXSR32+HT-I ESDIRK34a+HT-H312 IMEXSR32+HT-H312 (a) 103 104 105 fast work 10−7 10−6 10−5 10−4 10−3 10−2 Error ω=50 104 106 fast work 10−6 10−5 10−4 10−3 10−2 ω=500 Method + Controller ESDIRK34a+D-H0321 ERK33a+D-H211 ESDIRK34a+D-H211 ERK33a+D-I ESDIRK34a+D-I ERK33a+D-H312 ESDIRK34a+D-H312 (b) 102 103 104 slow work 10−6 10−5 10−4 10−3 Error ϵ=0.0001 102 103 104 slow work 10−7 10−6 10−5 10−4 10−3 ϵ=1e-05 Method + Controller ERK33a+D-H211 ERK33a+D-I ERK33a+D-H312 ERK33a+HT-H0211 MERK32+HT-H0211 ERK33a+HT-H0321 MERK32+HT-H0321 ERK33a+HT-H211 MERK32+HT-H211 ERK33a+HT-I MERK32+HT-I ERK33a+HT-H312 MERK32+HT-H312 (c) 104 105 fast work 10−6 10−5 10−4 10−3 10−2 Error ϵ=0.0001 105 106 fast work 10−7 10−6 10−5 10−4 10−3 10−2 ϵ=1e-05 Method + Controller ESDIRK34a+D-H0321 ERK33a+D-H211 ESDIRK34a+D-H211 ERK33a+D-I ESDIRK34a+D-I ERK33a+D-H312 ESDIRK34a+D-H312 ERK33a+HT-H0321 ESDIRK34a+HT-H0321 ERK33a+HT-H211 ESDIRK34a+HT-H211 ERK33a+HT-I ESDIRK34a+HT-I ERK33a+HT-H312 ESDIRK34a+HT-H312 (d) Figure 3: Efficiency comparisons for the top third-order adaptive MRI methods. The top row shows the slow and fast time scales for the KPR test problem with multirate ratios ω = {50, 500}. The stiff Brusselator test problem with both stiffness parameters ϵ = {10−4, 10−5} is on the bottom row. 18 Table 3: Average rank z-scores for embedded fourth- and fifth-order MRI methods. MRI Slow Fast Method KPR Bruss Avg KPR Bruss Avg ERK45a 0.16 -0.84 -0.34 -0.59 -1.28 -0.93 ESDIRK46a -1.34 1.11 -0.11 -1.20 0.23 -0.48 MERK43 0.95 -0.89 0.03 0.49 -0.15 0.17 MERK54 0.44 -0.20 0.12 -0.13 -0.26 -0.20 IMEXSR43 -0.21 0.82 0.30 1.43 1.46 1.44 IMEXSR32 for an array of H-Tol controllers, while ERK33a and MERK32 are the most efficient for the stiff Brusselator (again predominantly using H-Tol controllers). Also similarly to the second-order methods, for multirate ap- plications where the slow and fast operators have commensurate cost, both the KPR and stiff Brusselator tests agree that the most efficient third-order methods are ERK33a and ESDIRK34a, with the Decoupled controllers appear- ing more frequently than H-Tol. Interestingly, we note that for the stiff Brusselator problem at the fast time scale, although ESDIRK34a is more effi- cient at loose tolerances, it is incapable of achieving accuracies better than 10−5. Table 2 presents the z-scores for each problem and metric, where it is clear that the overall top-performing third-order MRI method was ERK33a, since its average z-scores for both slow and fast work metrics were negative. We present efficiency results for fourth- and fifth-order adaptive MRI methods in Figure 4 and Table 3. Here, we see that the best-performing methods again differ between the KPR and stiff Brusselator problems. For the KPR problem at both ω values, the implicit ESDIRK46a provides the best efficiency using either work metric, while ERK45a is competitive when considering fast time scale effort. For the stiff Brusselator problem, we see that only MERK43 and MERK54 are able to achieve solutions with errors below 10−6, but that for looser tolerances ERK45a is competitive at the slow time scale and wins at the fast time scale. We additionally note that as with the lower order methods, when computational cost is dominated by the slow time scale operators, nearly all of the top-performing methods use H-Tol- based controllers, while for problems where fast scale computational effort is significant the Decoupled controllers slightly outnumber H-Tol. Multirate controller performance. We conclude this section by turn- ing our attention to the performance of individual multirate controllers. Our previous statistical analyses compared MRI methods when using the same 19 101 102 103 slow work 10−7 10−6 10−5 10−4 10−3 10−2 Error ω=50 102 103 slow work 10−7 10−6 10−5 10−4 10−3 10−2 10−1 ω=500 Method + Controller ESDIRK46a+D-H312 ESDIRK46a+HT-H0211 IMEXSR43+HT-H0211 ESDIRK46a+HT-H0321 ESDIRK46a+HT-H211 ESDIRK46a+HT-I ESDIRK46a+HT-H312 (a) 101 103 fast work 10−7 10−6 10−5 10−4 10−3 10−2 Error ω=50 103 104 105 fast work 10−7 10−6 10−5 10−4 10−3 10−2 10−1 ω=500 Method + Controller ESDIRK46a+D-H0211 ESDIRK46a+D-H0321 ERK45a+D-H211 ESDIRK46a+D-H211 ERK45a+D-I ESDIRK46a+D-I ERK45a+D-H312 ESDIRK46a+D-H312 ESDIRK46a+HT-H211 ESDIRK46a+HT-I ESDIRK46a+HT-H312 (b) 101 102 103 slow work 10−7 10−6 10−5 10−4 10−3 Error ϵ=0.0001 102 103 slow work 10−8 10−7 10−6 10−5 10−4 10−3 ϵ=1e-05 Method + Controller ERK45a+HT-H0321 MERK43+HT-H0321 MERK54+HT-H0321 MERK43+HT-H211 MERK43+HT-I MERK54+HT-I MERK43+HT-H312 (c) 104 105 106 fast work 10−7 10−6 10−5 10−4 10−3 Error ϵ=0.0001 104 106 fast work 10−7 10−6 10−5 10−4 10−3 ϵ=1e-05 Method + Controller ERK45a+D-H211 ERK45a+D-I MERK43+D-I ERK45a+D-H312 ERK45a+HT-H211 ERK45a+HT-I ERK45a+HT-H312 (d) Figure 4: Efficiency comparisons for the top fourth- and fifth-order adaptive MRI methods. The top row shows the slow and fast time scales for the KPR test problem with multirate ratios ω = {50, 500}. The stiff Brusselator test problem with both stiffness parameters ϵ = {10−4, 10−5} is on the bottom row. multirate controllers, so here we compare the performance of each H-Tol and Decoupled controller when using the same MRI methods. The z-scores from this analysis are shown in Table 4. Here, we perform the analysis sep- arately for the average ranks at the slow time scale and the fast time scale in the left and right columns, respectively. These underscore the intuitive results seen above, that the H-Tol controllers are clearly more efficient than the Decoupled controllers when slow time scale costs are dominant, but that Decoupled are more efficient when the fast time scale costs become signifi- cant. These further indicate that within each family, the multirate controllers constructed from the single-rate “I” controller are significantly more efficient for these test problems than the others. 20 Table 4: Average rank z-scores for multirate controllers (compares only the H-Tol and Decoupled families). Ranked scores for slow time scale work are on the left, and for fast time scale work are on the right. Slow Scale Fast Scale Multirate controller z-score Multirate controller z-score HT-I -0.690 D-I -0.557 HT-H0321 -0.349 D-H312 -0.419 HT-H312 -0.280 D-H211 -0.405 HT-H0211 -0.264 D-H0321 -0.181 HT-H211 -0.255 D-H0211 0.037 D-I 0.065 HT-I 0.150 D-H312 0.307 HT-H211 0.189 D-H211 0.420 HT-H312 0.245 D-H0321 0.445 HT-H0321 0.443 D-H0211 0.601 HT-H0211 0.499 5.2.3. Temporal Adaptivity Comparisons To better understand the differing behavior between each controller fam- ily, we present time histories of H and h as a function of the internal simula- tion time, t, on the KPR and stiff Brusselator tests. To focus the presenta- tion, all simulations use absolute and relative tolerances 10−11 and 10−4, the ERK33a MRI method, and ARKODE’s default third-order explicit Runge– Kutta method at the fast time scale. The KPR problem is run with es = 5, ef = 0.5, and ω = 500, and the stiff Brusselator problem uses ϵ = 10−4. With this setup, we selected the D-H211, HT-H211, and MRICC multirate controllers. To help unclutter the figures, we plot a subset of these internal step sizes in Figure 5: up to 200 values of H(t) and up to 1000 values of h(t). In the legend, we list the numbers of slow and fast time steps, and the attained “accuracy” as defined in equation (13). As expected from the analytical solution to the KPR problem, we see that at t ≈2.5 and t ≈3.5 the oscillations in the fast variable slow down, allowing the fast integrator to increase its steps dramatically. Similarly, due to rapid changes in the slow variables for the stiff Brusselator solution at t ≈6.5 the “slow” solution com- ponents speed up rapidly, causing the slow integrator to shrink steps while the fast integrator remains steady. From these plots, we additionally discern the underlying differences be- tween each controller family. Notably, while all methods exhibit similar slow 21 step histories on the KPR problem, at the fast time scale the MRI-CC H-h controller consistently took overly large fast time steps, resulting in solutions with orders of magnitude more error than requested. Additionally, the KPR plots show the tradeoff between the Decoupled and H-Tol controllers, namely that the H-Tol controller will shift additional effort onto the fast time scale so that it can increase the slow step size. The stiff Brusselator problem tells a slightly different story: although all controllers exhibited similar behavior when resolving the fast/stiff time scale, the MRI-CC controller took slow time steps that were far smaller than necessary, resulting in significantly worse efficiency (and interestingly, also much worse accuracy). Meanwhile, the H-Tol controller required marginally fewer slow steps than its Decoupled counterpart. 5.3. Nested Multirate Tests We end with a demonstration of the multirate H-Tol controller on a nested multirate application. For this, we consider a modification of the previous KPR problem (9) to have 3 time scales which change throughout the course of the simulation:   u′(t) v′(t) w′(t)  =   G e e e α β e −β α     (u2 −p −2) /(2u) (v2 −q −2) /(2v) (w2 −r −2) /(2w)  +   p′(t)/(2u) q′(t)/(2v) r′(t)/(2w)   (14) over 0 < t < 5, where p(t) = 1 2 cos(t), q(t) = cos(ωt(1 + e−(t−2)2)), and r(t) = cos(ω2t(1 + e−(t−3)2)). This problem has analytical solution u(t) = p 2 + p(t), v(t) = p 2 + q(t), and w(t) = p 2 + r(t), stable (but oscillatory) eigenvalues, and its behavior is dictated by the parameters: e that determines the strength of coupling between the time scales, G < 0 that determines the stiffness at slow time scale, α and β that govern oscillations between v and w, and ω that determines the time-scale separation factors between u and v, and between v and w. Denoting the full right-hand side vector for (14) as fu(t, u, v, w) fv(t, u, v, w) fw(t, u, v, w) T, we split the right-hand side into three functions for the slow (f s), medium (f m), and fast f f scales via   fu(t, u, v, w) 0 0   | {z } fs +   0 fv(t, u, v, w) 0   | {z } fm +   0 0 fw(t, u, v, w)   | {z } ff 22 0 1 2 3 4 5 10−3 10−2 10−1 H KPR step size history Controller D-H211 steps: 499, 29662 accuracy: 7.8 HT-H211 steps: 277, 202295 accuracy: 3.7 MRI-CC steps: 341, 3609 accuracy: 719196.0 0 1 2 3 4 5 t 10−4 10−3 10−2 h 0 2 4 6 8 10 10−4 10−3 10−2 10−1 H StiffBrusselator step size history Controller D-H211 steps: 211, 76262 accuracy: 2.4 HT-H211 steps: 172, 79982 accuracy: 1.3 MRI-CC steps: 13946, 99900 accuracy: 29661.3 0 2 4 6 8 10 t 10−7 10−6 10−5 10−4 10−3 h Figure 5: Slow and fast time step histories for each multirate controller family on the KPR and stiff Brusselator test problems. The legends list the numbers of slow and fast time steps, as well as the attained solution accuracy factor (13). Summary statistics from running these nested multirate simulations with the HT-I adaptivity controller at an absolute tolerance abstol = 10−11 and a range of relative tolerances reltol = {10−2, 10−4, 10−6, 10−8} are provided in Table 5. All tests used the time scale separation factor ω = 50, stiffness factor G = −10, and coupling factors e = 5, α = −1, β = 1, the ERK22b MRI method at the slow and intermediate time scales, and ARKODE’s default second-order explicit Runge–Kutta method at the fast time scale. From these, we see that HT-I indeed works well at tracking the dynamics of each time scale, achieving solutions with accuracy metrics within a factor of ∼30 23 from the requested tolerances, and with work metrics that increase on average by a factor of 33 from one scale to the next. Table 5: Summary statistics for multirate simulations of the nested KPR problem (14) using the HT-I controller at both the slow and intermediate time scales, for various relative tolerances. The number of slow, intermediate, and fast time steps are shown, as well as the attained accuracy factor as defined in equation (13). Reltol Slow Steps Int Steps Fast Steps Accuracy Factor 10−2 84 294 4,028 29.79 10−4 1,081 12,896 455,531 10.19 10−6 1,686 91,208 2,880,373 14.16 10−8 9,793 861,182 26,943,054 6.47 6. Conclusions In this work we present two new families of multirate time step adaptivity controllers, Decoupled and H-Tol, that are designed to work with embedded MRI methods for adapting time steps when solving problems with multiple time scales. Comparing against the previously-introduced H-h controllers from [17], the proposed controllers offer dramatically improved performance and flexibility. While the proposed controller families are able to track mul- tiple time scales for methods of a variety of orders of accuracy, the H-h controllers seem to struggle to select steps with sufficient scale separation between H and h, resulting in worse accuracy and higher cost than the proposed families. The combination of embedded MRI methods with the Decoupled and H-Tol controllers theoretically support adaptive simulations of problems with an arbitrary number of time scales (here shown for a 3-scale benchmark problem), and achieve high accuracy while maintaining low com- putational cost. Of the proposed families, the H-Tol controllers show much stronger performance for problems where cost is dominated by evaluation of the slow operators, at the expense of requiring estimates of the accumulated fast temporal error. On the other hand, the Decoupled controllers require no such accumulated fast error estimate, and show the best performance for problems where the fast and slow operators have comparable cost. Thus, we recommend that users consider this trade-off when selecting a multirate controller for their application. 24 We also compared the performance of many adaptive MRI methods on our benchmark problems. Although it is clear that some MRI methods outper- form others in these tests, the optimal choice is clearly problem-specific. We thus encourage practitioners to employ software libraries such as ARKODE [23] that allow them to easily switch between methods to explore what works best for their application. A key challenge when using partitioned solvers of any type (e.g., ImEx, MRI) is to construct a good splitting of the ODE right-hand side into com- ponents, f(t, y) = P i f {i}(t, y). While we have split the two benchmark problems here to work with explicit, implicit, and ImEx MRI methods, we do not claim that these splittings are optimal for either problem. Interest- ingly, although this KPR problem would not generally be considered as stiff, the results in Section 5.2.3 indicated a slight preference for implicit and ImEx MRI methods. Similarly, since the stiff Brusselator benchmark was split such that the stiff terms were subcycled at the fast time scale, it is notable that explicit MRI methods generally achieved the best performance. This ques- tion of how to optimally match explicit, implicit, or ImEx MRI methods to a given application is important but poorly understood. It is therefore critical for practitioners to additionally experiment with different splittings for their applications. We note that although this manuscript examined performance on small- scale benchmark problems, the benchmarks were selected to represent some key traits of larger-scale multirate PDE applications. We are currently testing these proposed MRI adaptivity algorithms on large scale problems arising in simulations of real-time Boltzmann transport equations and tokamak fusion plasmas; however, we leave those results to future publications given the breadth of experiments already performed here. We further note that not all applications are easily amenable to additive splittings of the form f(t, y) = P i f {i}(t, y). To our knowledge, although some recent work has introduced multirate methods for nonlinearly parti- tioned systems [26], those do not yet support either the “infinitesimal” struc- ture explored here, or embeddings for temporal error estimation. However, we anticipate that once those classes of methods have been further devel- oped to provide these features, techniques similar to those in this work can be applied for adapting the step sizes at each time scale. Finally, we note that the individual single-rate controllers that comprise both the Decoupled and H-Tol controllers may be chosen independently, and thus different types of single-rate controllers could be used for each compo- 25 nent. For simplicity in this work we consistently selected single-rate compo- nent controllers of matching type. However, future studies may investigate the performance of mixed single-rate controllers, e.g., since it is likely that slow time steps may benefit less from a longer temporal history, a potential combination of an I controller for the slow time scale and a higher-order controller for the fast time scale may be beneficial. Acknowledgements Daniel R. Reynolds and Vu Thai Luan would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for their hospitality during the summer research stay, where part of this work was carried out. References [1] M. Schlegel, O. Knoth, M. Arnold, R. Wolke, Multirate Runge–Kutta schemes for advection equations, Journal of Computational and Applied Mathematics 226 (2) (2009) 345–357. doi:10.1016/j.cam.2008.08. 009. [2] A. Sandu, A Class of Multirate Infinitesimal GARK Methods, SIAM J. Numer. Anal. 57 (5) (2019) 2300–2327. doi:10.1137/18M1205492. [3] R. Chinomona, D. R. Reynolds, Implicit-Explicit Multirate Infinitesimal GARK Methods, SIAM J. Sci. Comput. 43 (5) (2021) A3082–A3113. doi:10.1137/20M1354349. [4] V. T. Luan, R. Chinomona, D. R. Reynolds, A New Class of High-Order Methods for Multirate Differential Equations, SIAM J. Sci. Comput. 42 (2) (2020) A1245–A1268. doi:10.1137/19M125621X. [5] V. T. Luan, R. Chinomona, D. R. Reynolds, Multirate Exponential Rosenbrock Methods, SIAM J. Sci. Comput. 44 (5) (2022) A3265– A3289. doi:10.1137/21M1439481. [6] A. C. Fish, D. R. Reynolds, S. B. Roberts, Implicit–explicit multirate infinitesimal stage-restart methods, Journal of Computational and Ap- plied Mathematics 438 (2024) 115534. doi:10.1016/j.cam.2023.115 534. 26 [7] A. C. Hindmarsh, P. N. Brown, K. E. Grant, S. L. Lee, R. Serban, D. E. Shumaker, C. S. Woodward, SUNDIALS: Suite of nonlinear and differential/algebraic equation solvers, ACM Trans. Math. Softw. 31 (3) (2005) 363–396. doi:10.1145/1089014.1089020. [8] D. J. Gardner, D. R. Reynolds, C. S. Woodward, C. J. Balos, En- abling New Flexibility in the SUNDIALS Suite of Nonlinear and Dif- ferential/Algebraic Equation Solvers, ACM Trans. Math. Softw. 48 (3) (2022) 31:1–31:24. doi:10.1145/3539801. [9] K. Gustafsson, Control theoretic techniques for stepsize selection in ex- plicit Runge-Kutta methods, ACM Trans. Math. Softw. 17 (4) (1991) 533–554. doi:10.1145/210232.210242. [10] G. Söderlind, Automatic Control and Adaptive Time-Stepping, Numeri- cal Algorithms 31 (1) (2002) 281–310. doi:10.1023/A:1021160023092. [11] G. Söderlind, Digital filters in adaptive time-stepping, ACM Trans. Math. Softw. 29 (1) (2003) 1–26. doi:10.1145/641876.641877. [12] G. Söderlind, Time-step selection algorithms: Adaptivity, control, and signal processing, Applied Numerical Mathematics 56 (3-4) (2006) 488– 502. doi:10.1016/j.apnum.2005.04.026. [13] K. Gustafsson, Control-theoretic techniques for stepsize selection in im- plicit Runge-Kutta methods, ACM Trans. Math. Softw. 20 (4) (1994) 496–517. doi:10.1145/198429.198437. [14] K. Gustafsson, G. Söderlind, Control Strategies for the Iterative Solution of Nonlinear Equations in ODE Solvers, SIAM J. Sci. Comput. 18 (1) (1997) 23–40. doi:10.1137/S1064827595287109. [15] E. Hairer, S. P. Nørsett, G. Wanner, Solving Ordinary Differential Equa- tions I: Nonstiff Problems, 2nd Edition, Springer Series in Computa- tional Mathematics, Springer Ser.Comp.Mathem. Hairer,E.:Solving Or- dinary Diff., Springer-Verlag, Berlin Heidelberg, 1993. doi:10.1007/97 8-3-540-78862-1. [16] A. Sarshar, S. Roberts, A. Sandu, Design of High-Order Decoupled Mul- tirate GARK Schemes, SIAM J. Sci. Comput. 41 (2) (2019) A816–A847. doi:10.1137/18M1182875. 27 [17] A. C. Fish, D. R. Reynolds, Adaptive Time Step Control for Multirate Infinitesimal Methods, SIAM J. Sci. Comput. 45 (2) (2023) A958–A984. doi:10.1137/22M1479798. [18] M. Günther, A. Sandu, Multirate generalized additive Runge Kutta methods, Numer. Math. 133 (3) (2016) 497–524. doi:10.1007/s0 0211-015-0756-z. [19] D. L. Ropp, J. N. Shadid, C. C. Ober, Studies of the accuracy of time integration methods for reaction–diffusion equations, Journal of compu- tational physics 194 (2) (2004) 544–574. [20] N. A. Petersson, B. Sjögreen, High order accurate finite difference mod- eling of seismo-acoustic wave propagation in a moving atmosphere and a heterogeneous earth model coupled across a realistic topography, Jour- nal of Scientific Computing 74 (1) (2018) 290–323. [21] G. J. McRae, W. R. Goodin, J. H. Seinfeld, Numerical solution of the atmospheric diffusion equation for chemically reacting flows, Journal of Computational Physics 45 (1) (1982) 1–42. [22] D. Reynolds, D. Mitchell, S. Amihere, MRI-Adaptivity-Paper, https: //github.com/sundials-codes/mri-adaptivity-paper, accessed: 2025-08-01 (2025). [23] D. R. Reynolds, D. J. Gardner, C. S. Woodward, R. Chinomona, ARKODE: A Flexible IVP Solver Infrastructure for One-step Methods, ACM Trans. Math. Softw. 49 (2) (2023) 19:1–19:26. doi:10.1145/35 94632. [24] I. Prigogine, R. Lefever, Symmetry Breaking Instabilities in Dissipative Systems. II, The Journal of Chemical Physics 48 (4) (1968) 1695–1700. doi:10.1063/1.1668896. [25] S. Roberts, A. A. Popov, A. Sarshar, A. Sandu, A Fast Time-Stepping Strategy for Dynamical Systems Equipped with a Surrogate Model, SIAM J. Sci. Comput. 44 (3) (2022) A1405–A1427, publisher: Society for Industrial and Applied Mathematics. doi:10.1137/20M1386281. [26] T. Buvoli, B. K. Tran, B. S. Southworth, Multirate runge-kutta for nonlinearly partitioned systems (2025). arXiv:2504.03257. 28 Appendix A. Embedded MERK Methods In [4] the authors introduce the methods MERK2, MERK3, MERK4, and MERK5. Each of these has a beneficial structure that allows an embedding to be in- cluded in the next-to-last internal stage, giving rise to the following embed- ded methods. As in [4], we provide only the forcing functions for the internal stages (rn,i(τ)), the updated solution (rn(τ)), and the embedding (˜rn(τ)). For brevity, we denote the slow function evaluations at each step and stage as f s n = f s(tn, yn) and f s n,i = f s(tn + ciH, zi), respectively, and we denote the difference functions as Dn,i = f s n,i −f s n. • MERK21, that has abscissae c =  0 c2 1  and uses the constant c2 = 1 2: rn,2(τ) = f s n, rn(τ) = f s n + τ c2H Dn,2, ˜rn(τ) = f s n. • MERK32, that has abscissae c =  0 c2 2 3 1  and also uses the constant c2 = 1 2: rn,2(τ) = f s n, rn,3(τ) = f s n + τ c2H Dn,2, rn(τ) = f s n + 3τ 2H Dn,3, ˜rn(τ) = rn,3(τ). • MERK43, that has abscissae c =  0 c2 c3 c4 c5 c6 1  and uses the constants c2 = c3 = 1 2, c4 = c6 = 1 3 and c5 = 5 6: rn,2(τ) = f s n, rn,3(τ) = rn,4(τ) = f s n + τ c2H Dn,2, rn,5(τ) = rn,6(τ) = f s n + τ H  −c4 c3(c3−c4)Dn,3 + c3 c4(c3−c4)Dn,4  + τ 2 H2  1 c3(c3−c4)Dn,3 − 1 c4(c3−c4)Dn,4  , rn(τ) = f s n + τ H  −c6 c5(c5−c6)Dn,5 + c5 c6(c5−c6)Dn,6  + τ 2 H2  1 c5(c5−c6)Dn,5 − 1 c6(c5−c6)Dn,6  , ˜rn(τ) = rn,5(τ). 29 • MERK54, that has abscissae c = [ 0 c2 c3 c4 c5 c6 c7 c8 c9 c10 1 ] and uses the constants c2 = c3 = c5 = c9 = 1 2, c4 = c6 = 1 3, c7 = 1 4, c8 = 7 10, and c10 = 2 3: rn,2(τ) = f s n, rn,3(τ) = rn,4(τ) = f s n + τ c2H Dn,2, rn,5(τ) = rn,6(τ) = rn,7(τ) = f s n + τ H (α3Dn,3 + α4Dn,4) + τ 2 H2 (β3Dn,3 −β4Dn,4) , rn,8(τ) = rn,9(τ) = rn,10(τ) = f s n + τ H (α5Dn,5 + α6Dn,6 + α7Dn,7) −τ 2 H2 (β5Dn,5 + β6Dn,6 + β7Dn,7) + τ 3 H3 (γ5Dn,5 + γ6Dn,6 + γ7Dn,7) , rn(τ) = f s n + τ H (α8Dn,8 + α9Dn,9 + α10Dn,10) −τ 2 H2 (β8Dn,8 + β9Dn,9 + β10Dn,10) + τ 3 H3 (γ8Dn,8 + γ9Dn,9 + γ10Dn,10) , ˜rn(τ) = rn,8(τ). where α3 = c4 c3(c4−c3), α4 = c3 c4(c3−c4), α5 = c6c7 c5(c5−c6)(c5−c7), α6 = c5c7 c6(c6−c5)(c6−c7), α7 = c5c6 c7(c7−c5)(c7−c6), α8 = c9c10 c8(c8−c9)(c8−c10), α9 = c8c10 c9(c9−c8)(c9−c10), α10 = c8c9 c10(c10−c8)(c10−c9) β3 = 1 c3(c3−c4), β4 = 1 c4(c3−c4), β5 = c6+c7 c5(c5−c6)(c5−c7), β6 = c5+c7 c6(c6−c5)(c6−c7), β7 = c5+c6 c7(c7−c5)(c7−c6), β8 = c9+c10 c8(c8−c9)(c8−c10), β9 = c8+c10 c9(c9−c8)(c9−c10), β10 = c8+c9 c10(c10−c8)(c10−c9) γ5 = 1 c5(c5−c6)(c5−c7), γ6 = 1 c6(c6−c5)(c6−c7), γ7 = 1 c7(c7−c5)(c7−c6), γ8 = 1 c8(c8−c9)(c8−c10), γ9 = 1 c9(c9−c8)(c9−c10), γ10 = 1 c10(c10−c8)(c10−c9). 30
Efficient and Flexible Multirate Temporal Adaptivity Daniel R. Reynoldsa,1,∗, Sylvia Amiherea,1, Dashon Mitchella,1, Vu Thai Luanb a - mal (MRI) time integration methods for adapting time steps when solving problems with multiple time scales. We compare these controllers against competing approaches on two benchmark problems and see that they offer dramatically improved performance and flexibility, with each proposed family excelling on different types of multirate applications. The combination of embedded MRI methods and the proposed controllers enable adaptive simulations of problems with a potentially arbitrary number of time scales, achieving high accuracy while maintaining low computational cost. Additionally, we introduce a new set of embeddings for the family of explicit multirate exponential Runge-Kutta (MERK) methods of orders 2 through 5, resulting in the first-ever fifth-order embedded MRI method. Finally, we compare the performance of a wide range of embedded MRI methods on our benchmark problems to provide guidance on how to select an appropriate MRI method and multirate controller. Keywords: multirate time integration, time step adaptivity, initial-value ∗Corresponding author Email addresses: (Daniel R. Reynolds), (Sylvia Amihere), (Dashon Mitchell), (Vu Thai Luan) 1This work was funded in part by the U.S. (SciDAC) Program through the FASTMath Institute, under DOE award DESC0021354. 16 Oct 2025 problems 2010 MSC: 65L05, 65L06, 65L20 1. Introduction In this work, we focus on time-step adaptivity within multirate infinitesimal (MRI) methods, that are used when solving systems of ordinary differential equation initial value problems of the form y′(t) = f s(t, y) + f f(t, y), t > t0 y(t0) = y0, (1) Here, the operator f f contains physical processes that evolve on a fast time scale with typical time step size h, and f s contains processes that evolve on a slower time scale with typical step size H ≫h. Multirate methods are frequently used for problems of this form when f s is considerably more costly to evaluate than f f, and thus algorithms that evaluate fs infrequently have the potential for significant runtime improvements over single rate approaches that evolve all processes with time step h. Generally speaking, an explicit infinitesimal method for evolving a single time step yn ≈y(tn) to yn+1 ≈y(tn + Hn), with its embedded solution ̃yn+1 ≈y(tn + Hn), proceeds through the following sequence [1, 2, 3, 4, 5, 6]. 1. Let: Y1 = yn 2. For i = 2, ..., s: (a) Solve: v′ i(θ) = f f(θ, vi)+ri(θ), for θ ∈[θ0,i, θF,i] with vi(θ0,i) = v0,i. (b) Let: Yi = vi(θF,i). 3. Solve: ̃v′ s(θ) = f f(θ, ̃vs) + ̃rs(θ), for θ ∈[θ0,s, θF,s] with ̃vs(θ0,s) = v0,s. 4. Let: yn+1 = Ys and ̃yn+1 = ̃vs(θF,s). MRI methods are determined by their fast stage time intervals [θ0,i, θF,i], initial conditions v0,i, forcing functions ri(θ), and embedding forcing function ̃rs(θ). Both ri(θ) and ̃rs(θ) are constructed using linear combinations of {f s(θF,j, Yj)}, and serve to propagate information from the slow to the fast time scales. Implicit and implicit-explicit extensions of the above MRI algorithm exist, that involve replacing step 2a with an implicit solve for some internal stages Yi. The embedded solution ̃yn+1 is similar to embedded solutions in standard Runge-Kutta methods, in that it approximates the solution 2 to an alternate order of accuracy and may be computed with minimal extra effort beyond computing yn+1; in this case it is computed by re-solving the last stage with an alternate forcing function, ̃rs(θ). As seen in steps 2a and 3 above, computation of each stage in an explicit MRI method requires solving a secondary, inner, IVP. Typically, these inner IVPs are not solved exactly, and instead are approximated using another numerical method with inner time steps hn,m such that maxm |hn,m| ≪|Hn|. In the sections that follow, we first summarize how temporal adaptivity is performed for problems with a single time scale in Section 2. In Section 3 we introduce previous work, and propose two new approaches, for dynamically adapting both the slow and fast time steps, Hn and hn,m, within MRI methods. Two of these multirate controller families require estimates of the accumulated fast temporal error, so we describe our algorithms for computing those estimates in Section 4. The majority of this manuscript focuses on numerical tests of these multirate controllers in Section 5. In that section, we first describe our two benchmark problems that we use throughout our numerical tests. We then outline the set of embedded MRI methods that we will test, including new embeddings for the explicit multirate exponential Runge-Kutta (MERK) methods that result in the first ever fifth-order embedded MRI method. The remainder of Section 5 performs a variety of numerical experiments to examine the proposed multirate adaptivity strategies and the embedded MRI methods. We discuss our conclusions from these tests and identify future directions to extend this work in Section 6. 2. Single Rate Control Before introducing multirate temporal adaptivity, we briefly review the typical approaches for single-rate adaptivity, that attempt to control the local error induced within a given time step. Assuming that the initial condition at the time step tn is exact, i.e., yn -y(tn) = 0, the local error is defined to be εn = y(tn + hn) -yn+1. (2) Single-rate controllers then adapt the step size hn to achieve two primary objectives: (a) ensure the local error is "small enough", i.e., ∥εn∥< 1, (3) where ∥·∥is a norm that incorporates the requested temporal accuracy (e.g., the WRMS-norm from [7, 8]), and (b) perform as few time steps as possible 3 to meet objective (a). Additional considerations include a smooth step size progression [9, 10, 11, 12], and improving the efficiency and robustness of iterative algebraic solvers that may be used at each step [13, 14]. Single rate adaptivity may be easily incorporated into standard "time marching" schemes. Instead of utilizing a fixed step size hn = h, one begins at t0 with an initial step size h0, and initializes the counter n = 0. Then in a loop over step attempts: (i) compute a candidate approximation y∗ n+1 ≈y(tn + hn) and a corresponding approximation of the local error ε∗ n ≈y(tn + hn) -y∗ n+1. (ii) If ∥ε∗ n∥< 1 set tn+1 = tn + hn, yn+1 = y∗ n+1, and n = n + 1. Else: reject the approximation y∗ n+1 (and the corresponding step size hn). (iii) Use ∥ε∗ n∥to estimate a step size ̃h for the subsequent step attempt. Since the true local error εn is inaccessible, then for simplicity in the remainder of this paper, we use εn to denote the approximation ε∗ n. Whether εn passes or fails the error test (3), step (iii) must select ̃h to use on the next step attempt. This adaptivity controller is a function that typically depends on a small set of (hn-k, εn-k) pairs, i.e., ̃h = C(hn, εn, hn-1, εn-1, . . . , p). There are a myriad of choices for controllers C in the literature, but the simplest approach is the so-called I controller. Assuming that the numerical method for computing yn+1 has order p, then from [15] εn ≈y(tn + hn) -y∗ n+1 = c(tn) hp+1 n + O hp+2 n = O hp+1 n for some c(t) independent of hn. Taking the norm of this gives ∥εn∥≈∥c(tn)∥hp+1 n . (4) Since we have just computed the estimate ∥εn∥based off a step with size hn, a piecewise constant approximation c(t) ≈cn allows us to "solve" for ∥cn∥= ∥εn∥/hp+1 n . Assuming that c(t) will remain relatively constant from one step attempt to the next, the candidate step size ̃h is computed to exactly attain an error goal of 1 (or any tol): tol = ∥cn∥ ̃hp+1 ⇒ ̃h = hn tol ∥εn∥ 1/(p+1) . 4 This formula automatically grows or shrinks the step size based on whether the error test ∥εn∥1/(p+1) ≤tol passed or failed, respectively. Due to the multiple assumptions above, a safety factor σ < 1 is typically applied, ̃h = σhn tol ∥εn∥ 1/(p+1) . More advanced approaches for C are typically based on control theory, and use additional (hn-k, εn-k) values to build higher-degree piecewise polynomial approximations of the principal error function [9, 13, 14, 10, 11, 12]. To extend these ideas to the multi-rate context, we require two complementary components: (a) strategies to estimate local temporal error, and (b) algorithms for selecting step sizes following each attempted step. Since (b) dictates our needs for (a), we begin with multirate temporal control. 3. Multirate Temporal Controllers Multirate temporal adaptivity has not been widely studied in the literature. The two articles we are aware of are [16] and [17]. The former is specific to "MrGARK" methods [18], that require a more rigid algorithmic structure than the MRI algorithm introduced in Section 1. The latter works with MRI methods, and will serve as a baseline comparison for our new methods. In the following subsections, we consider three such approaches in this work. 3.1. Coupled Stepsize ("H-h") Control The adaptive multirate controllers from [17] simultaneously predict both a slow step size Hn, and a multirate ratio Mn, such that the inner solver takes fixed small substeps hn = Hn/Mn throughout each subinterval [θ0,i, θF,i]. Due to their use of fixed inner steps that are adapted only when the outer step Hn is adapted, we will refer to these as coupled stepsize ("H-h") controllers. In our subsequent comparisons, we consider four controllers introduced in [17]: MRI-CC, MRI-LL, MRI-PI, and MRI-PID. 3.2. "Decoupled" Multirate Control Our simplest proposed MRI controller uses two decoupled single-rate controllers to separately adapt the inner and outer time steps, i.e., ̃H = Cs(Hn, εs n, Hn-1, εs n-1, . . . , P), ̃h = Cf(hn,m, εf n,m, hn,m-1, εf n,m-1, . . . , p), (5) 5 where P and p are the orders of the MRI method and inner solver, respectively. Here, (Hk, εs k) are the stepsize and local error estimates for time step k at the slow time scale, and (hk,l, εf k,l) are the stepsize and local error estimates for the fast substep lwithin the slow step k. We note that in this approach, both Cs and Cf are decoupled, and thus selection of ̃H and ̃h occur independently through application of any pair of single-rate controllers. We summarize their use in the following pseudocode. Given the current state yn, candidate step Hn, and controllers Cs and Cf, perform a single MRI step attempt as follows. 1. Let: Y1 = yn. 2. For each MRI stage i = 2, . . . , s: (a) Use an adaptive solver with Cf to ensure ∥εf n,m∥≤1 for the IVP v′ i(θ) = f f(θ, vi) + ri(θ), θ ∈[θ0,i, θF,i], vi(θ0,i) = v0,i. (b) Let Yi = vi(θF,i). 3. Use an adaptive solver with Cf to ensure ∥εf n,m∥≤1 for the IVP ̃v′ s(θ) = f f(θ, ̃vs) + ̃rs(θ), θ ∈[θ0,s, θF,s], ̃vs(θ0,s) = v0,s. 4. Let: yn+1 = Ys, ̃yn+1 = ̃vs(θF,s), and εs n = yn+1 - ̃yn+1. 5. Use Cs to select a new step size ̃H for the ensuing step attempt. Since this approach ignores any coupling between time scales, we expect these controllers to work well for multirate problems wherein the time scales are somewhat decoupled, where errors introduced at one scale do not "pollute" the other, for example reaction-diffusion systems [19], acoustic-elastic wave systems [20], or transport-chemistry models [21]. Furthermore, due to its decoupled nature, this technique can be trivially extended to an arbitrary number of time scales, allowing temporal adaptivity for so-called "telescopic" multirate methods. 3.3. Stepsize-tolerance ("H-Tol") Control Our second proposed family are the so-called "H-Tol" multirate controllers. As outlined in Section 2, standard controllers attempt to adapt step sizes to control local error within each step to achieve a requested tolerance. However, MRI methods must ask another "inner" adaptive fast-scale solver to produce the stage solutions vi(θF,i), that result from sub-stepping over each interval [θ0,i, θF,i]. Local errors within the fast integrator may accumulate, resulting in an overall fast-solver error εf i that could exceed the requested tolerance. If that inner solver can produce both vi(θF,i) and an 6 estimation of the accumulated error, εf i,approx, then the tolerances provided to that next-fastest solver can be adjusted accordingly to ensure MRI stage solutions Yi = vi(θF,i) that are within the overall tolerances requested of the outer MRI method. To this end, first assume that at step n, the MRI method requests solutions from the inner solver that are tolfacn more accurate than the MRI method, i.e., if the MRI method strives to achieve local errors ∥εs n∥≤1, then the inner solver strives to achieve local substep errors ∥εf n,m∥≤tolfacn. To account for accumulation of temporal error in the inner solver, we assume that its accumulated error over the slow step [tn, tn + Hn] has the form ∥εf n∥= χ(tn)Hntolfacn, (6) where χ(t) is independent of tolfacn, but may vary in time. We construct this ansatz as follows: since the fast integrator takes multiple substeps to integrate over the full step Hn then the accumulated error should be proportional to this interval width; the problem- and method-dependent factor χ(tn) models this constant of proportionality. By grouping the terms from (6) as ∥εf n∥= (χ(tn)Hn) (tolfacn)0+1, we see that this fits the standard asymptotic error assumption used in singlerate controllers (4), through identifying χ(tn)Hn with ∥c(tn)∥, the control parameter tolfacn with hn, and by defining the "order" p = 0. Thus, any single-rate controller that relies on the asymptotic error assumption (6) may be used to adjust tolfacn between slow step attempts. Thus we construct an H-Tol controller from three single-rate controllers: • Cs,H - adapts Hn to achieve user-requested solution tolerances using data (Hn, εs n, Hn-1, εs n-1, . . . , P). • Cs,Tol - adapts tolfacn using the strategy described above with data (tolfacn, εf n, tolfacn-1, εf n-1, . . . , 0). • Cf - adapts inner time steps hn,m to achieve the current requested tolerance, tolfacn, using data (hn,m, εf n,m, hn,m-1, εf n,m-1, . . . , p). We summarize their use in the following pseudocode. Given the current state yn, candidate step Hn, and controllers Cs,H, Cs,Tol and Cf, perform a single MRI step attempt as follows. 1. Let: Y1 = yn. 7 2. For each MRI stage i = 2, . . . , s: (a) Use an adaptive solver with Cf to ensure ∥εf n,m∥≤tolfacn for the IVP v′ i(θ) = f f(θ, vi) + ri(θ), θ ∈[θ0,i, θF,i], vi(θ0,i) = v0,i. (b) Let Yi = vi(θF,i). 3. Use an adaptive solver with Cf to ensure ∥εf n,m∥≤tolfacn for the IVP ̃v′ s(θ) = f f(θ, ̃vs) + ̃rs(θ), θ ∈[θ0,s, θF,s], ̃vs(θ0,s) = v0,s. 4. Let: yn+1 = Ys, ̃yn+1 = ̃vs(θF,s), εs n = yn+1 - ̃yn+1, and retrieve the accumulated fast error εf n from the inner solver. 5. Use Cs,H to select a new step size ̃H for the ensuing step attempt. 6. Use Cs,Tol to select a tolerance factor ^ tolfac for the ensuing step attempt. This class of controllers may also be applied to telescopic MRI methods, since it only focuses on the relationship between two successive time scales. 4. Fast Temporal Error Estimation The H-h and H-Tol MRI adaptivity families require estimates of the temporal errors at both the slow and fast time scales, εs n and εf n, respectively. While the slow error may be estimated using the MRI method's time step solution and embedding, εs n = ∥yn - ̃yn∥, non-intrusive approaches for estimating εf n are more challenging. We employ the following two strategies. Since the H-Tol multirate controller leverages an adaptive fast solver, we leverage the fact that at each substep of the fast integrator tn,m, it computes an estimate of the local temporal error, εf n,m (e.g., using its own time step solution and embedding). Since the fast solver may be run with different tolerances than the slow solver, we denote its relative and absolute tolerances as reltolf n and abstolf, respectively. We then accumulate the local error estimates from each substep by assuming that none of these local errors cancel, estimating the total fast error as the sum of the substep errors, εf,add n = reltolf n reltols n X m∈M ∥εf n,m∥, (7) where M is the set of all steps since the accumulator was last reset (generally at the start of the step, tn). To understand the tolerance ratio, we note that both the fast and slow solvers use a weighted root-mean squared norm, such that an error norm of 1 corresponds with "sufficiently accurate," i.e., ∥εf n,m∥=  1 N N X i=1 εf n,m,i abstolf + reltolf n|yn,m,i| !2  1/2 , 8 where yn,m is the time step solution at the start of the substep, and the vectors yn,m and εf n,m have N components. The prefactor reltolf n serves to convert the accumulated fast errors to a raw estimate of the relative error, before the factor 1/reltols n scales this back to normalized relative error. Since the H-h multirate controller uses fixed fast substeps we cannot assume that the fast solver is adaptive. Thus, we use a more costly approach that runs the fast integrator using fixed time steps of size h and kh (e.g., with k = 2) to achieve fast solutions yf n,h and yf n,kh, which we then subtract to estimate the fast temporal error, εf,dbl n = 1 kp -1 ∥yf n,m,h -yf n,m,kh∥, (8) where p is the global order of accuracy for the method. 5. Numerical Tests In this section, we perform numerical tests to verify the proposed Decoupled and H-Tol multirate adaptivity strategies, and to compare their performance against the previous H-h controllers. All codes for performing these tests and reproducing the figures in this paper are available in [22], and use the ARKODE time integration solver [23], which is part of the SUNDIALS library [7, 8]. We use two benchmark problems. The first is a multirate, nonlinear version of the Kværno-Prothero-Robinson (KPR) ODE test problem, that has been slightly modified from [6, Section 6.2]: u′(t) v′(t) = G es ef -1 (u2 -p -2) /(2u) (v2 -q -2) /(2v) + p′(t)/(2u) q′(t)/(2v) (9) over 0 < t < 5, where p(t) = cos(t) and q(t) = cos(ωt(1 + e-(t-2)2)). This problem has analytical solution u(t) = p 2 + p(t) and v(t) = p 2 + q(t), and its behavior is dictated by the parameters: es determines the strength of coupling from the fast to the slow scale, ef determines the coupling strength from the slow to the fast scale, G < 0 determines the stiffness at the slow time scale, and w determines the time-scale separation factor. We split the right-hand side above into up to three functions for the slow-implicit (fi), 9 slow-explicit (f e) and fast (f f) operators, respectively, G u2-p-2 2u + es v2-q-2 2v 0 | {z } fi + p′(t) 2u 0 | {z } fe + 0 ef u2-p-2 2u -v2-q-2-q′(t) 2v | {z } ff ; (10) for non-ImEx MRI methods the combined slow operator is fs = f i + f e. The second benchmark is a stiff version of the Brusselator problem, originally proposed in [24],   u′(t) v′(t) w′(t)  =   a + vu2 -(w + 1)u wu -vu2 b-w ε -wu   (11) for 0 < t < 10. We use the initial conditions u(0) = 1.2, v(0) = 3.1, and w(0) = 3, and parameters a = 1, b = 3.5 and ε = 5 × 10-6, unless stated otherwise. We note that ε determines the stiffness and/or time scale separation factor for the problem, such that typically H/h = 1/(100ε). We again define a splitting for this right-hand side,   -(w + 1)u wu -wu   | {z } fi +   a + vu2 -vu2 0   | {z } fe +   0 0 b-w ε   | {z } ff (12) 5.1. Embedded MRI Methods To explore multirate temporal adaptivity, we rely on MRI methods that include embeddings, to provide approximations of both the time step solution yn and embedding ̃yn. To this end, we run tests using the following 15 methods (with orders of accuracy in parentheses): • MRI-GARK methods from [2]: - Explicit ERK22a (2), ERK22b (2), ERK33a (3), ERK45a (4); - Implicit IRK21a (2), ESDIRK34a (3), and ESDIRK46a (4); • the explicit RALSTON2 (2) MRI-GARK method from [25]; • IMEX-MRI-SR methods from [6]: IMEXSR21 (2), IMEXSR32 (3), and IMEXSR43 (4); 10 • explicit MERK methods MERK21 (2), MERK32 (3), MERK43 (4), and MERK54 (5). The embeddings for each of the above methods have an order of accuracy one lower than the method itself. We note that the original MERK2, MERK3, MERK4, and MERK5 methods from [4] do not include embeddings. We provide embeddings for each in Appendix Appendix A. 5.2. Multirate Temporal Controllers With our fast temporal estimation strategies in place, we now examine the performance of the MRI adaptivity algorithms from Section 3. For both the Decoupled and H-Tol controller approaches, we construct MRI controllers using each of the H211, H0211, H0321, H312 and I single-rate adaptivity controllers [11], from the ARKODE library [23]; we additionally test the previously-introduced H-h controllers MRI-CC, MRI-LL, MRI-PI, and MRIPID [17]. For all tests, we pair each of the 15 MRI methods from Section 5.1 with a fast explicit Runge-Kutta method of the same order. We apply these to two test problems: • the KPR problem (9) (with parameters G = -100, es = 5, ef = 0.5, ω = {50, 500}), to assess controller performance when both fast and slow scales evolve dynamically, and where the scale separation factor is varied; • the stiff Brusselator problem (11) (with parameter ε = {10-4, 10-5}), to assess controller performance when the fast scale is stability-limited at varied stiffness levels but generally evolves at a constant rate, but the slow scale varies in time. We run each problem over their full temporal duration using a fixed absolute tolerance abstol = 10-11, and a variety of relative tolerances, reltol = 10-k, k = 3, . . . , 7. This corresponds with a total of 4200 test combinations. For each test combination and time step (tn-1, yn-1) →(tn, yn), we compute a reference solution using a fifth-order single-rate explicit solver with initial condition (tn-1, yn-1) and tolerances reltol = 10-10 and abstol = 10-12, respectively, to evolve to (tn, yref(tn)). We then determine each method's ability to achieve the target local accuracy as accuracy = max yn,l -yref,l(tn) abstol + reltol |yref,l(tn)| , (13) 11 where the maximum is taken over all solution components (l) and over all time steps (n). We note that an optimal method would achieve "accuracy" values exactly equal to 1; however in practice most combinations of adaptivity controllers and time integration methods would be considered successful if they achieve an accuracy value within a factor of 100 of the requested relative tolerance. In addition to computing these accuracy metrics, we record the overall integrator effort, as estimated by the number of time steps required at each of the fast and slow time scales. 5.2.1. Controller robustness We begin with a simple question: how effective is each MRI adaptivity family (Decoupled, H-Tol, and H-h) at achieving a range of target tolerances according to the metric (13)? To answer this high-level question, Figure 1 shows aggregated results across the full suite of tests. Here, the top six plots show the KPR problem having multirate ratios ω = {50, 500} for second order MRI methods (left), third order MRI methods (middle), and higherorder MRI methods (right), and the bottom six plots show similar results for the Brusselator problem with stiffness factors ε = {10-4, 10-5}. Within each plot, we present shaded regions for each family, showing the range of "accuracy" values attained by MRI methods and controllers within that family for each tolerance. We can assess the robustness of each family by examining how tightly these plots cluster around the target accuracy of one. Due to their disparate results, we separate our comments on the Decoupled and H-Tol controller families from the H-h family. For both problems and multirate regimes, the Decoupled and H-Tol controller families performed excellently, typically achieving solutions within a factor of 10 from the requested tolerances. There were a few outliers among these for the KPR problem at ω = 500 and the loosest relative tolerances 10-4 and 10-3, where for some decoupled controllers the ESDIRK46a and ERK45a methods had errors over 100x larger than requested. Due to its stiffness, the Brusselator problem had slightly more outliers, with ESDIRK34a giving excessive error for some tighter tolerances, and ESDIRK46a struggling to attain solutions within 100x of the target across a wide range of tolerances and single-rate controllers. We additionally note that from the plots in Figure 1 it is difficult to discern significant performance differences between the Decoupled and H-Tol families, indicating that their accuracy depends more strongly on the underlying adaptive MRI method or single-rate controller than the time step selection mechanism. Based on these results, we conclude 12 10-7 10-6 10-5 10-4 10-3 reltol 10-1 100 101 102 103 accuracy ω = 50 10-7 10-6 10-5 10-4 10-3 reltol 10-1 100 101 102 103 104 accuracy ω = 500 10-7 10-6 10-5 10-4 10-3 reltol 100 101 102 103 104 10-7 10-6 10-5 10-4 10-3 reltol 100 101 102 103 104 10-7 10-6 10-5 10-4 10-3 reltol 100 101 102 103 10-7 10-6 10-5 10-4 10-3 reltol 100 101 102 103 104 Family H-h Htol Decoupled 10-7 10-6 10-5 10-4 10-3 reltol 101 103 105 107 109 accuracy ε = 0.0001 10-7 10-6 10-5 10-4 10-3 reltol 101 104 107 1010 accuracy ε = 1e-05 10-7 10-6 10-5 10-4 10-3 reltol 101 103 105 107 109 y 10-7 10-6 10-5 10-4 10-3 reltol 101 104 107 1010 10-7 10-6 10-5 10-4 10-3 reltol 100 102 104 106 y 10-7 10-6 10-5 10-4 10-3 reltol 100 102 104 106 108 1010 Figure 1: Accuracy measurements for each multirate controller family, when tested across a wide range of tolerances and individual MRI controllers. Columns denote MRI method accuracies: left are O(H2), middle are O(H3), and right are O(H4) and O(H5). The KPR test problem is in the top two rows, and the Brusselator test problem is in the bottom two rows. Note that where two regions overlap, their shading mixes together; the H-Tol and Decoupled families perfectly overlap, resulting in a teal-gray color. that both of these families are robust across a wide range of MRI methods and tolerances, and should be applicable to most multirate applications. The H-h controller family did not fare as well. Although some methods and controllers were able to meet the requested tolerances, most had errors that were orders of magnitude larger than requested. Generally, the H-h controllers deteriorated as the problems became more multirate; in separate tests on weakly multirate problems (e.g., KPR with ω = 5 and the stiff Brus13 selator with ε = 0.01) this family was generally able to meet the requested error tolerances, which agrees with the results from [17]. The only outliers from this generally poor performance were the MERK21 and MERK32 methods, that achieved the target accuracy for all tolerances and multirate parameters. No other combination of adaptive MRI method and H-h controller was able to achieve even within 10x of the target accuracy for the challenging problems with ω = 500 and ε = 10-5. We cannot yet explain why the embedded MERK methods with H-h controllers outperform other embedded MRI methods; however, we generally conclude that controllers in the H-h family are considerably less robust than either the Decoupled or H-Tol families for applications with nontrivial multirate nature, so we recommend that practitioners apply caution when using the H-h family. 5.2.2. Adaptive MRI efficiency We now turn our attention to the computational efficiency of the individual adaptive MRI methods. To test computational efficiency for a given embedded MRI method and multirate controller, we ran each problem using relative solution tolerances 10-k, k = 3, 4, . . . , 7. As in [3, 17, 4, 5], we then estimate the computational "work" required for a calculation by separately considering the number of slow steps and the total number of fast and slow steps. The former is relevant for multirate applications where the slow operators require significantly more computational effort than the fast operators, and thus MRI methods are applied to reduce the number of slow time steps. The latter is relevant for applications where the slow and fast operators require similar computational effort. For either metric, we then plot the computational efficiency by graphing the solution error as a function of work, overlaying these curves for a variety of methods. For any desired accuracy, the left-most curve thus corresponds with the most efficient method. In lieu of presenting plots that overlay hundreds of combinations of MRI methods and adaptive controllers, we first downselected only the most performant combinations. Within each test problem (KPR with ω = {50, 500}, and stiff Brusselator with ε = {10-4, 10-5}), MRI method order (2, 3, and higher), and work metric (slow steps, fast steps), we: • define a set of 20 logarithmically-spaced test tolerances in [10-6, 10-2]; • at each test tolerance, we estimate the work required to attain the target tolerance by interpolating the (work, error) pairs from our data; 14 • rank each method+controller combination for each test tolerance, accumulating their average rank over all test tolerances. To more rigorously compare the performance of MRI methods and controller combinations, we also conducted a variety of statistical analyses of the full set of efficiency data. We used a one-way Analysis of variance (ANOVA) to analyze controllers and a repeated measure ANOVA for each MRI method, examining fast and slow time scales separately. Both analyses yielded pvalues of zero, indicating a statistically significant difference between controllers and between methods of a given order. To identify the best-performing method of a given order for either the fast or slow work metric on each test problem, we first grouped the data by controller. For each controller and test problem, we calculated the mean and standard deviation of all rank values across methods of that particular order. Next, we condensed each method's performance into a single value by averaging its rank values. Finally, we calculated the z-score for each method to determine its deviation from the mean, and then averaged these z-scores across all controllers. An MRI method with negative z-score (i.e., below the mean) indicates better performance, while a positive z-score (above the mean) signifies poorer performance. A similar z-score analysis is conducted to determine the best-performing controllers across all methods and test problems, for the fast and slow time scales. We include these statistical results in our discussion below. Family performance. As observed in our robustness tests in Section 5.2.1, the largest determining factor for adaptive efficiency was the controller family; this was followed by the embedded MRI method itself, and then the specific choice of controller. Specifically, when comparing the average rankings of each family across all test problems, MRI methods and work metrics, the H-Tol and Decoupled families had z-scores of -0.349 and -0.340, respectively, while the H-h family had a z-score of 0.862. This indicates that the proposed families performed similarly to each other, but were far more efficient than the H-h family. MRI method performance. To compare individual embedded MRI methods, we downselected to the twelve fastest MRI method and controller pairs for each combination of problem, method order, and work metric. This process yielded eight sets of top-performing pairs. We then computed the intersection of the two sets corresponding to the different multirate values, retaining only those pairs that consistently rank among the top 12 for both. 15 In Figures 2-4, we overlay the efficiency plots for these sets of "fastest" methods to determine the most performant MRI method and adaptivity controller pairs. In each figure, the KPR test problem is on the top row, and the stiff Brusselator test problem is on the bottom rows. Within each row, we show the plots for slow computational efficiency on the left. 102 103 104 slow work 10-8 10-7 10-6 10-5 10-4 Error ω=50 102 103 104 slow work 10-8 10-7 10-6 10-5 10-4 10-3 10-2 ω=500 Method + Controller IRK21a+D-I IRK21a+HT-H0211 IRK21a+HT-H0321 IRK21a+HT-H211 IMEXSR21+HT-H211 IRK21a+HT-I IMEXSR21+HT-I IRK21a+HT-H312 IMEXSR21+HT-H312 (a) 103 104 105 fast work 10-7 10-6 10-5 10-4 10-3 Error ω=50 104 105 106 fast work 10-7 10-6 10-5 10-4 10-3 10-2 ω=500 Method + Controller ERK22b+D-H0321 IRK21a+D-H0321 RALSTON2+D-H211 ERK22b+D-H211 IRK21a+D-H211 RALSTON2+D-I ERK22a+D-I ERK22b+D-I IRK21a+D-I ERK22b+D-H312 IRK21a+D-H312 (b) 102 104 slow work 10-6 10-5 10-4 10-3 Error ε=0.0001 102 103 104 slow work 10-6 10-5 10-4 10-3 ε=1e-05 Method + Controller RALSTON2+HT-H0211 ERK22a+HT-H0211 RALSTON2+HT-H0321 ERK22a+HT-H0321 MERK21+HT-H0321 RALSTON2+HT-I ERK22a+HT-I RALSTON2+HT-H312 MERK21+HT-H312 (c) 104 105 106 fast work 10-6 10-5 10-4 10-3 10-2 Error ε=0.0001 105 106 fast work 10-6 10-5 10-4 10-3 10-2 ε=1e-05 Method + Controller ERK22b+D-H0211 ERK22b+D-H0321 IRK21a+D-H0321 IRK21a+D-H211 ERK22b+D-I IRK21a+D-I ERK22b+HT-H0211 ERK22b+HT-H0321 IRK21a+HT-H0321 ERK22b+HT-I IRK21a+HT-I (d) Figure 2: Efficiency comparisons for the top second-order adaptive MRI methods. The top row contains the slow and fast time scales for the KPR test problem with multirate ratios ω = {50, 500}. The stiff Brusselator test problem with both stiffness parameters ε = {10-4, 10-5} is on the bottom. Figure 2 shows the efficiency results for the best second-order adaptive MRI method combinations. The top MRI methods showed the most variability in their performance at the slow time scale (left plots). Here, different MRI methods excelled for each problem, with IRK21a the best for the KPR problem across a range of H-Tol controllers (plus one Decoupled controller), while RALSTON2 and ERK22a dominated the stiff Brusselator problem (again 16 Table 1: Average rank z-scores for embedded second-order MRI methods. MRI Slow Fast Method KPR Bruss Avg KPR Bruss Avg IRK21a -1.50 -0.17 -0.84 -0.95 -1.09 -1.02 RALSTON2 0.48 -0.76 -0.14 -0.35 -0.17 -0.26 ERK22a 0.32 -0.34 -0.01 0.10 0.16 0.13 MERK21 0.43 -0.09 0.17 0.82 0.76 0.79 ERK22b 0.86 -0.47 0.19 -1.11 -1.15 -1.13 IMEXSR21 -0.58 1.84 0.63 1.49 1.49 1.49 Table 2: Average rank z-scores for embedded third-order MRI methods. MRI Slow Fast Method KPR Bruss Avg KPR Bruss Avg ERK33a 0.10 -1.01 -0.46 -0.30 -0.86 -0.58 ESDIRK34a 0.05 0.21 0.13 -1.25 -0.83 -1.04 IMEXSR32 -1.10 1.38 0.14 1.17 1.36 1.27 MERK32 0.95 -0.58 0.19 0.37 0.33 0.35 using various H-Tol controllers). The corresponding z-scores for each MRI method's average rank are given in Table 1. At the fast time scale (right plots), the MRI methods showed more consistent performance across both test problems, with ERK22b and IRK21a giving the best performance, and IMEXSR21 the worst performance. Interestingly, when measuring fast cost the Decoupled controllers rank near the top more frequently than H-Tol. We additionally note a feature that will be present in higher-order methods as well - the fast time scale stiff Brusselator work-precision curves are nearly vertical, a characteristic of explicit methods on stiffness-dominated problems. The z-scores for these methods are also given in Table 1. From these results, we conclude that the second-order adaptive MRI methods are generally effective at both fast and slow time scales, with IRK21a and RALSTON2 being the most efficient across both problems and time scales, while IMEXSR21 was generally the least efficient second-order method. Figure 3 shows the efficiency of the best third-order adaptive MRI method combinations. Again, for problems where the cost is dominated by operators at the slow time scale, the KPR and stiff Brusselator problems have different optimal MRI methods. For KPR, by far the most efficient MRI method was 17 102 103 104 slow work 10-7 10-6 10-5 10-4 10-3 Error ω=50 101 103 slow work 10-7 10-6 10-5 10-4 10-3 ω=500 Method + Controller IMEXSR32+HT-H0211 ESDIRK34a+HT-H0321 IMEXSR32+HT-H0321 ESDIRK34a+HT-H211 IMEXSR32+HT-H211 ESDIRK34a+HT-I IMEXSR32+HT-I ESDIRK34a+HT-H312 IMEXSR32+HT-H312 (a) 103 104 105 fast work 10-7 10-6 10-5 10-4 10-3 10-2 Error ω=50 104 106 fast work 10-6 10-5 10-4 10-3 10-2 ω=500 Method + Controller ESDIRK34a+D-H0321 ERK33a+D-H211 ESDIRK34a+D-H211 ERK33a+D-I ESDIRK34a+D-I ERK33a+D-H312 ESDIRK34a+D-H312 (b) 102 103 104 slow work 10-6 10-5 10-4 10-3 Error ε=0.0001 102 103 104 slow work 10-7 10-6 10-5 10-4 10-3 ε=1e-05 Method + Controller ERK33a+D-H211 ERK33a+D-I ERK33a+D-H312 ERK33a+HT-H0211 MERK32+HT-H0211 ERK33a+HT-H0321 MERK32+HT-H0321 ERK33a+HT-H211 MERK32+HT-H211 ERK33a+HT-I MERK32+HT-I ERK33a+HT-H312 MERK32+HT-H312 (c) 104 105 fast work 10-6 10-5 10-4 10-3 10-2 Error ε=0.0001 105 106 fast work 10-7 10-6 10-5 10-4 10-3 10-2 ε=1e-05 Method + Controller ESDIRK34a+D-H0321 ERK33a+D-H211 ESDIRK34a+D-H211 ERK33a+D-I ESDIRK34a+D-I ERK33a+D-H312 ESDIRK34a+D-H312 ERK33a+HT-H0321 ESDIRK34a+HT-H0321 ERK33a+HT-H211 ESDIRK34a+HT-H211 ERK33a+HT-I ESDIRK34a+HT-I ERK33a+HT-H312 ESDIRK34a+HT-H312 (d) Figure 3: Efficiency comparisons for the top third-order adaptive MRI methods. The top row shows the slow and fast time scales for the KPR test problem with multirate ratios ω = {50, 500}. The stiff Brusselator test problem with both stiffness parameters ε = {10-4, 10-5} is on the bottom row. 18 Table 3: Average rank z-scores for embedded fourth- and fifth-order MRI methods. MRI Slow Fast Method KPR Bruss Avg KPR Bruss Avg ERK45a 0.16 -0.84 -0.34 -0.59 -1.28 -0.93 ESDIRK46a -1.34 1.11 -0.11 -1.20 0.23 -0.48 MERK43 0.95 -0.89 0.03 0.49 -0.15 0.17 MERK54 0.44 -0.20 0.12 -0.13 -0.26 -0.20 IMEXSR43 -0.21 0.82 0.30 1.43 1.46 1.44 IMEXSR32 for an array of H-Tol controllers, while ERK33a and MERK32 are the most efficient for the stiff Brusselator (again predominantly using H-Tol controllers). Also similarly to the second-order methods, for multirate applications where the slow and fast operators have commensurate cost, both the KPR and stiff Brusselator tests agree that the most efficient third-order methods are ERK33a and ESDIRK34a, with the Decoupled controllers appearing more frequently than H-Tol. Interestingly, we note that for the stiff Brusselator problem at the fast time scale, although ESDIRK34a is more efficient at loose tolerances, it is incapable of achieving accuracies better than 10-5. Table 2 presents the z-scores for each problem and metric, where it is clear that the overall top-performing third-order MRI method was ERK33a, since its average z-scores for both slow and fast work metrics were negative. We present efficiency results for fourth- and fifth-order adaptive MRI methods in Figure 4 and Table 3. Here, we see that the best-performing methods again differ between the KPR and stiff Brusselator problems. For the KPR problem at both ω values, the implicit ESDIRK46a provides the best efficiency using either work metric, while ERK45a is competitive when considering fast time scale effort. For the stiff Brusselator problem, we see that only MERK43 and MERK54 are able to achieve solutions with errors below 10-6, but that for looser tolerances ERK45a is competitive at the slow time scale and wins at the fast time scale. We additionally note that as with the lower order methods, when computational cost is dominated by the slow time scale operators, nearly all of the top-performing methods use H-Tolbased controllers, while for problems where fast scale computational effort is significant the Decoupled controllers slightly outnumber H-Tol. Multirate controller performance. We conclude this section by turning our attention to the performance of individual multirate controllers. Our previous statistical analyses compared MRI methods when using the same 19 101 102 103 slow work 10-7 10-6 10-5 10-4 10-3 10-2 Error ω=50 102 103 slow work 10-7 10-6 10-5 10-4 10-3 10-2 10-1 ω=500 Method + Controller ESDIRK46a+D-H312 ESDIRK46a+HT-H0211 IMEXSR43+HT-H0211 ESDIRK46a+HT-H0321 ESDIRK46a+HT-H211 ESDIRK46a+HT-I ESDIRK46a+HT-H312 (a) 101 103 fast work 10-7 10-6 10-5 10-4 10-3 10-2 Error ω=50 103 104 105 fast work 10-7 10-6 10-5 10-4 10-3 10-2 10-1 ω=500 Method + Controller ESDIRK46a+D-H0211 ESDIRK46a+D-H0321 ERK45a+D-H211 ESDIRK46a+D-H211 ERK45a+D-I ESDIRK46a+D-I ERK45a+D-H312 ESDIRK46a+D-H312 ESDIRK46a+HT-H211 ESDIRK46a+HT-I ESDIRK46a+HT-H312 (b) 101 102 103 slow work 10-7 10-6 10-5 10-4 10-3 Error ε=0.0001 102 103 slow work 10-8 10-7 10-6 10-5 10-4 10-3 ε=1e-05 Method + Controller ERK45a+HT-H0321 MERK43+HT-H0321 MERK54+HT-H0321 MERK43+HT-H211 MERK43+HT-I MERK54+HT-I MERK43+HT-H312 (c) 104 105 106 fast work 10-7 10-6 10-5 10-4 10-3 Error ε=0.0001 104 106 fast work 10-7 10-6 10-5 10-4 10-3 ε=1e-05 Method + Controller ERK45a+D-H211 ERK45a+D-I MERK43+D-I ERK45a+D-H312 ERK45a+HT-H211 ERK45a+HT-I ERK45a+HT-H312 (d) Figure 4: Efficiency comparisons for the top fourth- and fifth-order adaptive MRI methods. The top row shows the slow and fast time scales for the KPR test problem with multirate ratios ω = {50, 500}. The stiff Brusselator test problem with both stiffness parameters ε = {10-4, 10-5} is on the bottom row. multirate controllers, so here we compare the performance of each H-Tol and Decoupled controller when using the same MRI methods. The z-scores from this analysis are shown in Table 4. Here, we perform the analysis separately for the average ranks at the slow time scale and the fast time scale in the left and right columns, respectively. These underscore the intuitive results seen above, that the H-Tol controllers are clearly more efficient than the Decoupled controllers when slow time scale costs are dominant, but that Decoupled are more efficient when the fast time scale costs become significant. These further indicate that within each family, the multirate controllers constructed from the single-rate "I" controller are significantly more efficient for these test problems than the others. 20 Table 4: Average rank z-scores for multirate controllers (compares only the H-Tol and Decoupled families). Ranked scores for slow time scale work are on the left, and for fast time scale work are on the right. Slow Scale Fast Scale Multirate controller z-score Multirate controller z-score HT-I -0.690 D-I -0.557 HT-H0321 -0.349 D-H312 -0.419 HT-H312 -0.280 D-H211 -0.405 HT-H0211 -0.264 D-H0321 -0.181 HT-H211 -0.255 D-H0211 0.037 D-I 0.065 HT-I 0.150 D-H312 0.307 HT-H211 0.189 D-H211 0.420 HT-H312 0.245 D-H0321 0.445 HT-H0321 0.443 D-H0211 0.601 HT-H0211 0.499 5.2.3. Temporal Adaptivity Comparisons To better understand the differing behavior between each controller family, we present time histories of H and h as a function of the internal simulation time, t, on the KPR and stiff Brusselator tests. To focus the presentation, all simulations use absolute and relative tolerances 10-11 and 10-4, the ERK33a MRI method, and ARKODE's default third-order explicit RungeKutta method at the fast time scale. The KPR problem is run with es = 5, ef = 0.5, and ω = 500, and the stiff Brusselator problem uses ε = 10-4. With this setup, we selected the D-H211, HT-H211, and MRICC multirate controllers. To help unclutter the figures, we plot a subset of these internal step sizes in Figure 5: up to 200 values of H(t) and up to 1000 values of h(t). In the legend, we list the numbers of slow and fast time steps, and the attained "accuracy" as defined in equation (13). As expected from the analytical solution to the KPR problem, we see that at t ≈2.5 and t ≈3.5 the oscillations in the fast variable slow down, allowing the fast integrator to increase its steps dramatically. Similarly, due to rapid changes in the slow variables for the stiff Brusselator solution at t ≈6.5 the "slow" solution components speed up rapidly, causing the slow integrator to shrink steps while the fast integrator remains steady. From these plots, we additionally discern the underlying differences between each controller family. Notably, while all methods exhibit similar slow 21 step histories on the KPR problem, at the fast time scale the MRI-CC H-h controller consistently took overly large fast time steps, resulting in solutions with orders of magnitude more error than requested. Additionally, the KPR plots show the tradeoff between the Decoupled and H-Tol controllers, namely that the H-Tol controller will shift additional effort onto the fast time scale so that it can increase the slow step size. The stiff Brusselator problem tells a slightly different story: although all controllers exhibited similar behavior when resolving the fast/stiff time scale, the MRI-CC controller took slow time steps that were far smaller than necessary, resulting in significantly worse efficiency (and interestingly, also much worse accuracy). Meanwhile, the H-Tol controller required marginally fewer slow steps than its Decoupled counterpart. 5.3. Nested Multirate Tests We end with a demonstration of the multirate H-Tol controller on a nested multirate application. For this, we consider a modification of the previous KPR problem (9) to have 3 time scales which change throughout the course of the simulation:   u′(t) v′(t) w′(t)  =   G e e e α β e -β α     (u2 -p -2) /(2u) (v2 -q -2) /(2v) (w2 -r -2) /(2w)  +   p′(t)/(2u) q′(t)/(2v) r′(t)/(2w)   (14) over 0 < t < 5, where p(t) = 1 2 cos(t), q(t) = cos(ωt(1 + e-(t-2)2)), and r(t) = cos(ω2t(1 + e-(t-3)2)). This problem has analytical solution u(t) = p 2 + p(t), v(t) = p 2 + q(t), and w(t) = p 2 + r(t), stable (but oscillatory) eigenvalues, and its behavior is dictated by the parameters: e that determines the strength of coupling between the time scales, G < 0 that determines the stiffness at slow time scale, α and β that govern oscillations between v and w, and ω that determines the time-scale separation factors between u and v, and between v and w. Denoting the full right-hand side vector for (14) as fu(t, u, v, w) fv(t, u, v, w) fw(t, u, v, w) T, we split the right-hand side into three functions for the slow (f s), medium (f m), and fast f f scales via   fu(t, u, v, w) 0 0   | {z } fs +   0 fv(t, u, v, w) 0   | {z } fm +   0 0 fw(t, u, v, w)   | {z } ff 22 0 1 2 3 4 5 10-3 10-2 10-1 H KPR step size history Controller D-H211 steps: 499, 29662 accuracy: 7.8 HT-H211 steps: 277, 202295 accuracy: 3.7 MRI-CC steps: 341, 3609 accuracy: 719196.0 0 1 2 3 4 5 t 10-4 10-3 10-2 h 0 2 4 6 8 10 10-4 10-3 10-2 10-1 H StiffBrusselator step size history Controller D-H211 steps: 211, 76262 accuracy: 2.4 HT-H211 steps: 172, 79982 accuracy: 1.3 MRI-CC steps: 13946, 99900 accuracy: 29661.3 0 2 4 6 8 10 t 10-7 10-6 10-5 10-4 10-3 h Figure 5: Slow and fast time step histories for each multirate controller family on the KPR and stiff Brusselator test problems. The legends list the numbers of slow and fast time steps, as well as the attained solution accuracy factor (13). Summary statistics from running these nested multirate simulations with the HT-I adaptivity controller at an absolute tolerance abstol = 10-11 and a range of relative tolerances reltol = {10-2, 10-4, 10-6, 10-8} are provided in Table 5. All tests used the time scale separation factor ω = 50, stiffness factor G = -10, and coupling factors e = 5, α = -1, β = 1, the ERK22b MRI method at the slow and intermediate time scales, and ARKODE's default second-order explicit Runge-Kutta method at the fast time scale. From these, we see that HT-I indeed works well at tracking the dynamics of each time scale, achieving solutions with accuracy metrics within a factor of ∼30 23 from the requested tolerances, and with work metrics that increase on average by a factor of 33 from one scale to the next. Table 5: Summary statistics for multirate simulations of the nested KPR problem (14) using the HT-I controller at both the slow and intermediate time scales, for various relative tolerances. The number of slow, intermediate, and fast time steps are shown, as well as the attained accuracy factor as defined in equation (13). Reltol Slow Steps Int Steps Fast Steps Accuracy Factor 10-2 84 294 4,028 29.79 10-4 1,081 12,896 455,531 10.19 10-6 1,686 91,208 2,880,373 14.16 10-8 9,793 861,182 26,943,054 6.47 6. Conclusions In this work we present two new families of multirate time step adaptivity controllers, Decoupled and H-Tol, that are designed to work with embedded MRI methods for adapting time steps when solving problems with multiple time scales. Comparing against the previously-introduced H-h controllers from [17], the proposed controllers offer dramatically improved performance and flexibility. While the proposed controller families are able to track multiple time scales for methods of a variety of orders of accuracy, the H-h controllers seem to struggle to select steps with sufficient scale separation between H and h, resulting in worse accuracy and higher cost than the proposed families. The combination of embedded MRI methods with the Decoupled and H-Tol controllers theoretically support adaptive simulations of problems with an arbitrary number of time scales (here shown for a 3-scale benchmark problem), and achieve high accuracy while maintaining low computational cost. Of the proposed families, the H-Tol controllers show much stronger performance for problems where cost is dominated by evaluation of the slow operators, at the expense of requiring estimates of the accumulated fast temporal error. On the other hand, the Decoupled controllers require no such accumulated fast error estimate, and show the best performance for problems where the fast and slow operators have comparable cost. Thus, we recommend that users consider this trade-off when selecting a multirate controller for their application. 24 We also compared the performance of many adaptive MRI methods on our benchmark problems. Although it is clear that some MRI methods outperform others in these tests, the optimal choice is clearly problem-specific. We thus encourage practitioners to employ software libraries such as ARKODE [23] that allow them to easily switch between methods to explore what works best for their application. A key challenge when using partitioned solvers of any type (e.g., ImEx, MRI) is to construct a good splitting of the ODE right-hand side into components, f(t, y) = P i f {i}(t, y). While we have split the two benchmark problems here to work with explicit, implicit, and ImEx MRI methods, we do not claim that these splittings are optimal for either problem. Interestingly, although this KPR problem would not generally be considered as stiff, the results in Section 5.2.3 indicated a slight preference for implicit and ImEx MRI methods. Similarly, since the stiff Brusselator benchmark was split such that the stiff terms were subcycled at the fast time scale, it is notable that explicit MRI methods generally achieved the best performance. This question of how to optimally match explicit, implicit, or ImEx MRI methods to a given application is important but poorly understood. It is therefore critical for practitioners to additionally experiment with different splittings for their applications. We note that although this manuscript examined performance on smallscale benchmark problems, the benchmarks were selected to represent some key traits of larger-scale multirate PDE applications. We are currently testing these proposed MRI adaptivity algorithms on large scale problems arising in simulations of real-time Boltzmann transport equations and tokamak fusion plasmas; however, we leave those results to future publications given the breadth of experiments already performed here. We further note that not all applications are easily amenable to additive splittings of the form f(t, y) = P i f {i}(t, y). To our knowledge, although some recent work has introduced multirate methods for nonlinearly partitioned systems [26], those do not yet support either the "infinitesimal" structure explored here, or embeddings for temporal error estimation. However, we anticipate that once those classes of methods have been further developed to provide these features, techniques similar to those in this work can be applied for adapting the step sizes at each time scale. Finally, we note that the individual single-rate controllers that comprise both the Decoupled and H-Tol controllers may be chosen independently, and thus different types of single-rate controllers could be used for each compo25 nent. For simplicity in this work we consistently selected single-rate component controllers of matching type. However, future studies may investigate the performance of mixed single-rate controllers, e.g., since it is likely that slow time steps may benefit less from a longer temporal history, a potential combination of an I controller for the slow time scale and a higher-order controller for the fast time scale may be beneficial. Acknowledgements Daniel R. Reynolds and Vu Thai Luan would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for their hospitality during the summer research stay, where part of this work was carried out. References [1] M. Schlegel, O. Knoth, M. Arnold, R. Wolke, Multirate Runge-Kutta schemes for advection equations, Journal of Computational and Applied Mathematics 226 (2) (2009) 345-357. 009. [2] A. Sandu, A Class of Multirate Infinitesimal GARK Methods, SIAM J. Numer. Anal. 57 (5) (2019) 2300-2327. [3] R. Chinomona, D. R. Reynolds, Implicit-Explicit Multirate Infinitesimal GARK Methods, SIAM J. Sci. Comput. 43 (5) (2021) A3082-A3113. [4] V. T. Luan, R. Chinomona, D. R. Reynolds, A New Class of High-Order Methods for Multirate Differential Equations, SIAM J. Sci. Comput. 42 (2) (2020) A1245-A1268. [5] V. T. Luan, R. Chinomona, D. R. Reynolds, Multirate Exponential Rosenbrock Methods, SIAM J. Sci. Comput. 44 (5) (2022) A3265A3289. [6] A. C. Fish, D. R. Reynolds, S. B. Roberts, Implicit-explicit multirate infinitesimal stage-restart methods, Journal of Computational and Applied Mathematics 438 (2024) 115534. 534. 26 [7] A. C. Hindmarsh, P. N. Brown, K. E. Grant, S. L. Lee, R. Serban, D. E. Shumaker, C. S. Woodward, SUNDIALS: Suite of nonlinear and differential/algebraic equation solvers, ACM Trans. Math. Softw. 31 (3) (2005) 363-396. [8] D. J. Gardner, D. R. Reynolds, C. S. Woodward, C. J. Balos, Enabling New Flexibility in the SUNDIALS Suite of Nonlinear and Differential/Algebraic Equation Solvers, ACM Trans. Math. Softw. 48 (3) (2022) 31:1-31:24. [9] K. Gustafsson, Control theoretic techniques for stepsize selection in explicit Runge-Kutta methods, ACM Trans. Math. Softw. 17 (4) (1991) 533-554. [10] G. Söderlind, Automatic Control and Adaptive Time-Stepping, Numerical Algorithms 31 (1) (2002) 281-310. [11] G. Söderlind, Digital filters in adaptive time-stepping, ACM Trans. Math. Softw. 29 (1) (2003) 1-26. [12] G. Söderlind, Time-step selection algorithms: Adaptivity, control, and signal processing, Applied Numerical Mathematics 56 (3-4) (2006) 488502. [13] K. Gustafsson, Control-theoretic techniques for stepsize selection in implicit Runge-Kutta methods, ACM Trans. Math. Softw. 20 (4) (1994) 496-517. [14] K. Gustafsson, G. Söderlind, Control Strategies for the Iterative Solution of Nonlinear Equations in ODE Solvers, SIAM J. Sci. Comput. 18 (1) (1997) 23-40. [15] E. Hairer, S. P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd Edition, Springer Series in Computational Mathematics, Springer Ser.Comp.Mathem. Hairer,E.:Solving Ordinary Diff., Springer-Verlag, Berlin Heidelberg, 1993. 8-3-540-78862-1. [16] A. Sarshar, S. Roberts, A. Sandu, Design of High-Order Decoupled Multirate GARK Schemes, SIAM J. Sci. Comput. 41 (2) (2019) A816-A847. 27 [17] A. C. Fish, D. R. Reynolds, Adaptive Time Step Control for Multirate Infinitesimal Methods, SIAM J. Sci. Comput. 45 (2) (2023) A958-A984. [18] M. Günther, A. Sandu, Multirate generalized additive Runge Kutta methods, Numer. Math. 133 (3) (2016) 497-524. 0211-015-0756-z. [19] D. L. Ropp, J. N. Shadid, C. C. Ober, Studies of the accuracy of time integration methods for reaction-diffusion equations, Journal of computational physics 194 (2) (2004) 544-574. [20] N. A. Petersson, B. Sjögreen, High order accurate finite difference modeling of seismo-acoustic wave propagation in a moving atmosphere and a heterogeneous earth model coupled across a realistic topography, Journal of Scientific Computing 74 (1) (2018) 290-323. [21] G. J. McRae, W. R. Goodin, J. H. Seinfeld, Numerical solution of the atmospheric diffusion equation for chemically reacting flows, Journal of Computational Physics 45 (1) (1982) 1-42. [22] D. Reynolds, D. Mitchell, S. Amihere, MRI-Adaptivity-Paper, https: //github.com/sundials-codes/mri-adaptivity-paper, accessed: 2025-08-01 (2025). [23] D. R. Reynolds, D. J. Gardner, C. S. Woodward, R. Chinomona, ARKODE: A Flexible IVP Solver Infrastructure for One-step Methods, ACM Trans. Math. Softw. 49 (2) (2023) 19:1-19:26. 94632. [24] I. Prigogine, R. Lefever, Symmetry Breaking Instabilities in Dissipative Systems. II, The Journal of Chemical Physics 48 (4) (1968) 1695-1700. [25] S. Roberts, A. A. Popov, A. Sarshar, A. Sandu, A Fast Time-Stepping Strategy for Dynamical Systems Equipped with a Surrogate Model, SIAM J. Sci. Comput. 44 (3) (2022) A1405-A1427, publisher: Society for Industrial and Applied Mathematics. [26] T. Buvoli, B. K. Tran, B. S. Southworth, Multirate runge-kutta for nonlinearly partitioned systems (2025). . 28 Appendix A. Embedded MERK Methods In [4] the authors introduce the methods MERK2, MERK3, MERK4, and MERK5. Each of these has a beneficial structure that allows an embedding to be included in the next-to-last internal stage, giving rise to the following embedded methods. As in [4], we provide only the forcing functions for the internal stages (rn,i(τ)), the updated solution (rn(τ)), and the embedding ( ̃rn(τ)). For brevity, we denote the slow function evaluations at each step and stage as f s n = f s(tn, yn) and f s n,i = f s(tn + ciH, zi), respectively, and we denote the difference functions as Dn,i = f s n,i -f s n. • MERK21, that has abscissae c = 0 c2 1 and uses the constant c2 = 1 2: rn,2(τ) = f s n, rn(τ) = f s n + τ c2H Dn,2, ̃rn(τ) = f s n. • MERK32, that has abscissae c = 0 c2 2 3 1 and also uses the constant c2 = 1 2: rn,2(τ) = f s n, rn,3(τ) = f s n + τ c2H Dn,2, rn(τ) = f s n + 3τ 2H Dn,3, ̃rn(τ) = rn,3(τ). • MERK43, that has abscissae c = 0 c2 c3 c4 c5 c6 1 and uses the constants c2 = c3 = 1 2, c4 = c6 = 1 3 and c5 = 5 6: rn,2(τ) = f s n, rn,3(τ) = rn,4(τ) = f s n + τ c2H Dn,2, rn,5(τ) = rn,6(τ) = f s n + τ H -c4 c3(c3-c4)Dn,3 + c3 c4(c3-c4)Dn,4 + τ 2 H2 1 c3(c3-c4)Dn,3 - 1 c4(c3-c4)Dn,4 , rn(τ) = f s n + τ H -c6 c5(c5-c6)Dn,5 + c5 c6(c5-c6)Dn,6 + τ 2 H2 1 c5(c5-c6)Dn,5 - 1 c6(c5-c6)Dn,6 , ̃rn(τ) = rn,5(τ). 29 • MERK54, that has abscissae c = [ 0 c2 c3 c4 c5 c6 c7 c8 c9 c10 1 ] and uses the constants c2 = c3 = c5 = c9 = 1 2, c4 = c6 = 1 3, c7 = 1 4, c8 = 7 10, and c10 = 2 3: rn,2(τ) = f s n, rn,3(τ) = rn,4(τ) = f s n + τ c2H Dn,2, rn,5(τ) = rn,6(τ) = rn,7(τ) = f s n + τ H (α3Dn,3 + α4Dn,4) + τ 2 H2 (β3Dn,3 -β4Dn,4) , rn,8(τ) = rn,9(τ) = rn,10(τ) = f s n + τ H (α5Dn,5 + α6Dn,6 + α7Dn,7) -τ 2 H2 (β5Dn,5 + β6Dn,6 + β7Dn,7) + τ 3 H3 (γ5Dn,5 + γ6Dn,6 + γ7Dn,7) , rn(τ) = f s n + τ H (α8Dn,8 + α9Dn,9 + α10Dn,10) -τ 2 H2 (β8Dn,8 + β9Dn,9 + β10Dn,10) + τ 3 H3 (γ8Dn,8 + γ9Dn,9 + γ10Dn,10) , ̃rn(τ) = rn,8(τ). where α3 = c4 c3(c4-c3), α4 = c3 c4(c3-c4), α5 = c6c7 c5(c5-c6)(c5-c7), α6 = c5c7 c6(c6-c5)(c6-c7), α7 = c5c6 c7(c7-c5)(c7-c6), α8 = c9c10 c8(c8-c9)(c8-c10), α9 = c8c10 c9(c9-c8)(c9-c10), α10 = c8c9 c10(c10-c8)(c10-c9) β3 = 1 c3(c3-c4), β4 = 1 c4(c3-c4), β5 = c6+c7 c5(c5-c6)(c5-c7), β6 = c5+c7 c6(c6-c5)(c6-c7), β7 = c5+c6 c7(c7-c5)(c7-c6), β8 = c9+c10 c8(c8-c9)(c8-c10), β9 = c8+c10 c9(c9-c8)(c9-c10), β10 = c8+c9 c10(c10-c8)(c10-c9) γ5 = 1 c5(c5-c6)(c5-c7), γ6 = 1 c6(c6-c5)(c6-c7), γ7 = 1 c7(c7-c5)(c7-c6), γ8 = 1 c8(c8-c9)(c8-c10), γ9 = 1 c9(c9-c8)(c9-c10), γ10 = 1 c10(c10-c8)(c10-c9). 30
2510.14961
EFFICIENT PARALLEL SAMPLERS FOR RECURRENT- DEPTH MODELS AND THEIR CONNECTION TO DIFFU- SION LANGUAGE MODELS Jonas Geiping ELLIS Institute Tübingen & Max-Planck Institute for Intelligent Systems, Tübingen AI Center jonas@tue.ellis.eu Xinyu Yang Electrical and Computer Engineering Carnegie Mellon University Guinan Su ELLIS Institute Tübingen & Max-Planck Institute for Intelligent Systems, Tübingen AI Center ABSTRACT Language models with recurrent depth, also referred to as universal or looped when considering transformers, are defined by the capacity to increase their computation through the repetition of layers. Recent efforts in pretraining have demonstrated that these architectures can scale to modern language modeling tasks while exhibiting advantages in reasoning tasks. In this work, we examine the relationship between recurrent-depth models and diffusion language models. Building on their similar- ities, we develop a new diffusion forcing sampler for these models to accelerate generation. The sampler advances by decoding new tokens at every forward pass of the model, while the latent states of these tokens can be further refined in parallel through recurrence. Theoretically, generation with our sampler is strictly more expressive than the baseline autoregressive generation using the same time budget on modern hardware. Moreover, this sampler, based on principles from diffusion literature, can be directly applied to existing 3.5B recurrent-depth transformers without any tuning, leading to up to a 5x speedup. Consequently, our findings not only provide an efficient mechanism for parallelizing the extra computation in recurrent-depth models at inference, but also suggest that such models can be naturally viewed as strong continuous, though causal, diffusion language models. 1 INTRODUCTION Conventional large language models (LLMs) are constructed as fixed-depth neural networks with a predetermined number of layers (often merely a two-digit count), a property that not only allows these models to be trained efficiently, but in practice appears sufficient for many tasks (Radford et al., 2019). However, more challenging tasks in mathematics and programming often require conceptual leaps over multiple steps in a logical chain that are hard for these models to learn robustly. More formally, fixed-depth transformers fall within the complexity class TC0 (Merrill and Sabharwal, 2023). To resolve this, recent efforts have focused on training models to “verbalize” their internal reasoning into chains-of-thought composed of small sub-steps, each of which the model is capable of learning. An alternative to fixed-depth are models with recurrent depth (Dehghani et al., 2019; Schwarzschild et al., 2021), which can repeat layers. Consequently, these models are also referred to as looped transformers (Giannou et al., 2023), or, as universal transformers, (Dehghani et al., 2019) when highlighting the motivation for these systems to represent universal Turing machines (Graves et al., 2014; Graves, 2017). Merrill and Sabharwal (2025) showcase that, in contrast to fixed-depth models, models with arbitrary recurrence are indeed capable of representing a larger complexity class. 1 arXiv:2510.14961v1 [cs.LG] 16 Oct 2025 (a) Sequential Generation Recurrence Steps Token Position 1 2 3 4 5 6 7 8 9 (b) Diffusion Forcing Sampler Token Position 1 2 3 4 5 2 3 4 3 4 5 4 5 5 Legend Prior Compute Steps Current Compute Step Recurrence Steps 5 Figure 1: Different generation schemes for autoregressive, recurrent-depth models. Left: Standard sequential generation, which proceeds one token and step of the recurrence at a time (time steps denoted by integers). Right: A diffusion forcing sampler used for the same model can parallelize generation “diagonally”, by computing one step of the recurrence per token position, iteratively refining its estimate of the generated sequence. However, generation with autoregressive recurrent-depth models is typically slow, given that every repetition of the model layers must be executed sequentially before the next token can be produced. In this work, we discuss how generation from recurrent-depth models can be efficiently parallelized by connecting this architecture to diffusion model architectures. Both architectures “recur” in a related sense, and even though both are trained with different objectives, we show that samplers adapted from diffusion literature, namely, diffusion forcing (Chen et al., 2024a), can be directly applied to parallelize the generation of already existing recurrent-depth models from Geiping et al. (2025). We discuss how to adapt diffusion forcing sampling to recurrent-depth models, identifying the essential architectural components and strategies required to ensure both stability of the iterates and bounded memory usage. As illustrated in Figure 1, rather than waiting for the recurrence at sequence position n to fully converge before generating the next token, our sampler immediately produces token drafts from intermediate iterates. It then advances to position n + 1, where the subsequent forward pass simultaneously refines the drafts for steps n and n + 1, while also decoding an initial draft for n + 2. In this way, the sampler achieves parallelism along the sequence dimension, akin to speculative decoding. Importantly, because the underlying model is trained as a causal language model, information still propagates strictly from left to right, and the output sequence is iteratively refined across recurrences. While this approach does not reduce FLOPs, it effectively exploits modern GPU architectures by unlocking additional opportunities for parallelization. Overall, in this work, we • Clarify the connection between recurrent-depth models and diffusion models via diffusion forcing and block or wave-based inference strategies for sequence-based diffusion models. • Describe how to apply principles from diffusion forcing to efficiently parallelize the inference of models with recurrent depth. • Verify that recurrent-depth models equipped with diffusion-forcing samplers achieve the strongest balance between practical efficiency and theoretical expressiveness in both prefilling and decoding. • Show that diffusion forcing sampling outperforms even well-tuned speculative decoding baselines for the same model with speed gains that can be smoothly traded off against accuracy. 2 RELATED WORK We briefly introduce both recurrent models and diffusion models, focusing on language applications. Recurrent Models. Models with recurrent computations have long been central to machine learning (Amari, 1972; Hopfield, 1982; Braitenberg, 1986; Gers and Schmidhuber, 2000; Sutskever et al., 2008), not only due to significant inspiration from recurrent firing patterns found in neuroscience (Hopfield, 1982; Lamme and Roelfsema, 2000; Douglas and Martin, 2004), and early successes in language modeling centered on recurrent neural networks (Mikolov et al., 2010; Sutskever et al., 2011). With the advent of transformer models, these architectures were considered less scalable, yet recurrence, now as recurrence in depth, was swiftly re-introduced as universal transformers, Dehghani et al. (2019), motivating that these models could be capable of modeling universal Turing machines (Graves et al., 2014). Other work showed that recurrent models were capable of learning algorithms (Schwarzschild et al., 2021; Bansal et al., 2022; Bear et al., 2024). That recurrence was 2 t=3 To determine the number I t=4 To determine how number of would t=5 To determine how many of eggs- t=6 To determine how many dozens dozens Claireto t=7 To determine how many dozens of of willons t=8 To determine how many dozens of eggs eggs be of t=9 To determine how many dozens of eggs Claire Claire Claire a t=10 To determine how many dozens of eggs Claire will will will day t=11 To determine how many dozens of eggs Claire will eat eat eat in t=12 To determine how many dozens of eggs Claire will eat in in in 4 t=13 To determine how many dozens of eggs Claire will eat in 4 4 4 weeks Frozen tokens, committed to KV cache Current Candidate Tokens Newly Sampled Token Figure 2: An example of a text sequence being generated with the proposed diffusion forcing sampler from a depth-recurrent model. While the original recurrent-depth model requires 32 recurrence steps to produce a single token (the default for this model), the diffusion sampler has already produced and committed 8 new tokens (green). As described, the sampler advances by at least one token per step of the recurrence. Decoded candidate tokens are initial spell out incoherent text, but map into the right concepts, and quickly improve with more steps. Note that the “freeze” decision is dynamic, based on distance to the previous state in latent space (not pictured). capable of representing universal computation was explicitly constructed for transformer models in Giannou et al. (2023), and following work on looped transformers has shown that these models are capable learners (Giannou et al., 2023; Gatmiry et al., 2024; Yang et al., 2024; McLeish et al., 2024; Fan et al., 2025). These findings have led to a wave of work training larger, general-purpose recurrent-depth models of language (Tan et al., 2023; Abnar et al., 2023; Mathur et al., 2024; Csordás et al., 2024; Geiping et al., 2025), as well as work retro-fitting recurrence into trained models (Li et al., 2020; Bae et al., 2024; Hay and Wolf, 2023; Liu et al., 2024b). Several of these works also highlight the possibility of implementing latent reasoning via recurrence, that is to complement or replace verbalized chains-of-thought, with recurrence. Examples for this line of thinking are Coconut (Hao et al., 2024), as well as Liu et al. (2024a); Cheng and Durme (2024). In this work, we propose a generic sampling algorithm for depth-recurrent models, which we test with the models developed in Geiping et al. (2025), which are trained for general language understanding and reasoning on 800B tokens, with 3.5B parameters, and openly accessible. Diffusion Language Models. Diffusion models are general-purpose generative models, with early applications focusing on continuous domains, such as images (Song and Ermon, 2019; Rombach et al., 2022; Peebles and Xie, 2023), which lead to substantial interest in extending diffusion also to discrete domains, such as text (Austin et al., 2021; Hoogeboom et al., 2021). Approaches to language diffusion are split on whether to incorporate diffusion processes on a continuous variable (that is then projected into discrete space) (Chen et al., 2022; Dieleman et al., 2022; Han et al., 2023; Karimi Mahabadi et al., 2024; Jo and Hwang, 2025; Graves et al., 2025), or diffusion processes that directly act on discrete variables.(Lou et al., 2024; Richemond et al., 2023). The latter though, especially using masking as the discrete forward diffusion step, is currently the most scalable approach, employed in large-scale efforts to train language diffusion models, competitive with autoregressive models (Gong et al., 2025a;b; DeepMind, 2025; Nie et al., 2025; Wang et al., 2025b; Xie et al., 2025; Ye et al., 2025). Inference Strategies for Diffusion Language Models. To make diffusion tractable for arbitrarily long sequences requires techniques such as block diffusion (Arriola et al., 2025), where chunks of text are being modified by the diffusion model, and then frozen and their KV entries cached, with the sampler moving to the next chunk. A more free-form approach to handle sequence-based diffusion is to use diffusion forcing (Chen et al., 2024a), a hybrid model, where noise is added to future tokens in a sequence relative to the position of the current token, allowing the sampler to move both on both the sequence dimension and the diffusion time dimension. Inference Acceleration for Fixed-Depth Transformers. Inference in transformers, in particular in small-batch settings is memory-bound, meaning that the transfer of data (or, in the default case, model parameters) to and from the L1 cache of the accelerator, is the dominating cost during inference, allowing algorithms such as speculative decoding (Leviathan et al., 2023) and follow-ups (Cai et al., 2024; Miao et al., 2024; Chen et al., 2024c) to improve inference speed through speculative 3 parallelization. Using smaller draft models, these algorithms draft text several tokens in the future, which can then be verified using the original model, as verification of the entire text sequence is compute-bound and hence, fast. 3 APPLYING DIFFUSION FORCING TO RECURRENT-DEPTH MODELS In this section, we present our diffusion forcing sampler for recurrent-depth models, which accelerates text generation by advancing at least one token in each recurrence step, as illustrated in Figure 2. 3.1 BACKGROUND ON RECURRENT-DEPTH MODELS Before detailing the diffusion forcing sampler, we briefly describe the particular recurrent-depth archi- tecture proposed by Geiping et al. (2025), emphasizing features of the model that are pertinent to the sampler’s functionality. We will use the checkpoint name Huginn-0125 when referring to the trained model. The architecture of this model contains three main blocks, each composed of multiple trans- former layers: (i) a prelude block P, projecting the embedded input tokens into a latent space; (ii) a recurrent block R, iterating r times in this latent space by refining a state vector s, and (iii) a coda block C that processes the latent state and produces the model’s probabilities for the next token, formally e = P(x) s0 ∼N(0, σ2I) si = R(e, si−1) for i ∈{1, . . . , r} p = C(sr). Notably, while this architecture is derived from looping the middle layers of fixed-depth transformer models (Skean et al., 2024; Sun et al., 2024; Kaplan et al., 2024), with features such as input injection and random state initialization from the literature of recurrent-depth models (Bansal et al., 2022; Anil et al., 2022), it can also be interpreted as a latent-space diffusion model following the formulation of Rombach et al. (2022): Starting from an initial random state s0, the model iteratively refines this state conditioned on the embedded input sequence e, until we assume the state to be completely denoised at the end of the process, at which point it will be decoded into the next token using C. In Geiping et al. (2025), this model is trained using randomized unrolling with truncated backpropa- gation, i.e. a random number of iterates r is sampled (from a Poisson-lognormal distribution), and then the entire current batch of training sequences is iterated up to r, which is not directly related to diffusion language modeling, which most effectively trains by randomized masking and adaptation from autoregressive models (Nie et al., 2025; Xie et al., 2025; Ye et al., 2025; Gong et al., 2025a). 3.2 THE INGREDIENTS FOR DIFFUSION FORCING SAMPLING While we will describe experiments using this particular recurrent-depth model, the sampler can be applied to all recurrent-depth models that fulfill the following requirements. Input Injection. The first necessary component, aside from the recurrence over layers itself, is the input injection, i.e., the conditioning of the recurrence on e. This will allow the sampler to “course-correct” if conditioning changes without having to jettison a partially computed state s. The other component that may improve the connection to diffusion modeling is the initialization of random states, but while we speculate that this is beneficial, it is not architecturally necessary. As such, recurrent-depth models trained in Csordás et al. (2024); Schöne et al. (2025); Mohtashami et al. (2024) or Wang et al. (2025a) could also benefit from this sampler. However, looped architectures such as Coconut (Hao et al., 2024), which train to feed the outputs of a transformer back in as inputs, are not immediately supported and require retraining to incorporate input injection, separating their recurrent state from their input data. Robust Recurrence. The second necessary property is that the intermediate state at every step of the recurrence must be decodable to approximately correct solutions. While this property is generally satisfied, it may fail in models trained exclusively with a fixed number of recurrences r, where decoding from earlier steps can yield nonsensical outputs rather than approximate versions of the intended result. 4 0 5 10 15 20 25 30 Recurrence Steps (r) 0% 10% 20% 30% 40% 50% Accuracy KV-Cache Sharing (KV size fix) Baseline (KV size prop. to r) Figure 3: The Huginn-0125 recurrent-depth model can match the baseline performance on the GSM8k dataset when enabling KV cache sharing (with a minimal cache size of 1), using r-times less memory for KV states. KV Cache Sharing. The third property, while not strictly required but highly beneficial for diffusion forcing samplers, is the ability of different recurrent depths to share their KV cache across iterations during generation. Without fungible KV states, all KV states from previous recurrences and tokens must be retained in memory, causing the cache to grow with both sequence length and recurrence depth. As shown in Figure 3, the trained Huginn-0125 model inherently supports KV cache sharing, allowing us to store only the KV state of the most recent recurrence for each token position1. 3.3 A SIMPLIFIED VERSION OF THE SAMPLING ALGORITHM Next, we present the algorithm for our sampler. Given a prompt x, Algorithm 1 describes a sim- plified version that directly adapts diffusion forc- ing principles to parallelize generation across the sequence dimension. This approach yields improve- ments in tokens/second while maintaining equivalent total FLOP requirements. An example of the sampler’s behavior is illustrated in Figure 2. We emphasize several important aspects. First, the number of inner recurrences r′ may be chosen to exceed one. These additional iterations are relatively inexpensive, since the broader logic of the sampler is not yet invoked. More importantly, they serve to stabilize the recurrence. Because the conditioning on the input embedding e may vary across successive steps of the sampler, the model risks becoming trapped in oscillatory behavior unless sufficient steps are allowed to adapt the current state to the evolving conditioning. This mechanism closely parallels practices in the diffusion literature, such as the use of supplementary diffusion steps in Bansal et al. (2023) to incorporate complex guidance signals into image diffusion models. Second, we naturally employ this sampler only during the generation phase, as the prefill phase is already parallelizable in the sequence dimension, as the recurrence can be computed on all token positions of the prompt simultaneously. Further, in terms of efficiency, we note that we do not actually want to keep the state for all tokens changing indefinitely, as doing so would slow down generation again, as well as increase memory usage dramatically. As such, similar to block-diffusion samplers (Arriola et al., 2025), we look for rules that decide when each position is “finished”. In the simplified version of the sampler, we freeze the last token once we reach a predetermined number of recurrence steps at this position – which naturally happens r positions behind the current maximal extent of the sequence. Frozen tokens are removed from the state vector and their KV states are added to the cache, so that, as in block diffusion models (Arriola et al., 2025), at each point in time, only a small subset of tokens in being modified and the full generation runs like a wave over the generating sequence. Finally, note that with this simplified exit rule, r′ = r exactly recovers the original autoregressive sampler. 3.4 STABILIZING COMPONENTS BASED ON DIFFUSION PRINCIPLES Further, we also experiment with adding momentum to the input conditioning e, setting e = η eprev + (1 −η)P(ycurrent), (1) which we find can stabilize the recurrence in challenging sequences, providing a small, but robust gain on average. Secondly, surprisingly, we find that even though these models are never trained with noise injected into intermediate states, that artificially adding noise to the state in each step of the sampler, in 1With this form of KV sharing, the cache requires no more memory than that of a parameter-matched fixed-depth transformer. 5 Algorithm 1 Diffusion-forcing-style generation, simplified version (Full Version in Algorithm 2) Require: current text context x, max new tokens N, inner recurrence r′, total recurrences per token r, diffusion steps T, init scale α 1: yfrozen ←x 2: ycurrent ←x 3: z ←InitState(1, α) 4: for step t = 1, . . . , T do 5: e ←P(ycurrent) 6: znoise ←InitState(1, α) 7: z ←(1 −βt)z + βt znoise 8: for j = 1, . . . , r′ do 9: z ←R(z, e) ▷Inner recurrence 10: end for 11: p ←C(z) ▷project latent states to logits 12: ˆy ←Sample(p) 13: ycurrent ←[yfrozen, ˆy] 14: yfrozen ←Assign ycurrent up to the last ⌈r r′ ⌉entries ▷Freeze completed tokens 15: if |yfrozen| −|x| ≥N then break 16: end if 17: z ←[z, InitState(1, α)] ▷Append a new latent state for the next position 18: end for 19: return yfrozen analogy to sampling from continuous diffusion models, i.e. z′ = (1 −βt)z + βt znoise where znoise = InitState(1, α), (2) can stabilize the iterative process, leading to gains in both accuracy and throughput if r′ is small. In practice, we schedule βt linearly as a function of steps t at each position, so that the latter steps are naturally less noisy (Chen et al., 2024b), which we find to outperform either scheduling βt scaled by the square root of the number of recurrences at each position or keeping it constant. However, the optimal value of βt depends on r′. 3.5 ADAPTIVE EXITS However, the fixed exit scheme of the simplified sampler can run into issues. The recurrent-depth model is causal and how quickly states converge depends on the complexity of the query. This can lead to situations where either, compute is wasted because the states at certain positions have already converged quicker than r, or, more problematically, states where, due to a late change in the condi- tioning of prior tokens, the states have not converged in time. Freezing these unfinished states would worsen generation, in the worst case leading to a spiral where each token that is frozen incorrectly slows down convergence further, leading to a response that becomes more incorrect with each token. However, we can remedy both cases through adaptive compute. We pick the simplest adaptive exit criterion, the normalized distance in latent space, and compute this quantity for each position and freeze up to all positions where this distance δi is smaller than a threshold ε. δi = ∥zi −zprev,i∥2 ∥zi∥2 , k∗= max{k : δj < ε for all j ≤k} (3) We combine this with a limiter on the maximum length of the wavefront of the algorithm to guarantee that both 1) the number of states currently being modified, so the maximum memory footprint, is bounded and 2) only positions with converged states are frozen. The full algorithm is described in Appendix Algorithm 2. With these rules in place, we note that setting the wavefront to 1 token, we exactly recover the token-per-token adaptive compute sampler from (Geiping et al., 2025). We show the practical outcome of this sampler for a challenging input sequence from GSM8k in a series of heatmaps in the appendix, see Figure 12. The heatmap shows the development of the sequence as a function of generation steps and tokens. We see that the wave first advances quickly, but then halts for a short amount of steps, before resuming the advance. Remark 3.1 (Convergence of the Adaptive Diffusion Sampler). With this algorithm, we can, in principle guarantee convergence to the same solution as when sampling autoregressively, if we assume 6 0 20 40 60 80 Token Position 0 20 40 60 80 100 120 140 Sampler Step 0 20 40 60 80 Token Position 0 20 40 60 80 100 120 140 Sampler Step 0 20 40 60 80 Token Position 0 20 40 60 80 100 120 140 Sampler Step Figure 4: Examples of adaptive sampler behavior. Each color represents a token id in the vocabulary of the model, showing the development of the generated sequence (running left to right) as a function of sampler steps (running top to bottom) for different hyperparameter choices. The leftmost example is r′ = 4, and tokens are frozen quickly, whereas middle and right show sequences with r < 4 require more adaptive computation, and in both cases the sampler stalls after hitting the maximal length of the wavefront (here 32 to visualize), before resolving the sequence and advancing again. that the recurrent block R is a contraction. Then, convergence of iterates, i.e. Equation (3), implies convergence to the fixed point of the operator. Second, because the model is causal, convergence of the first token position does not depend others and will converge at some step t. At this step, the conditioning of the subsequent token is frozen, so it will also converge, proving convergence of the full sequence to the autoregressive solution by induction. However, in practice, large-scale recurrent-depth models are not easily proven to be contractive, even if models are approximately path-independent (Anil et al., 2022). Finally, we remark on practical back-of-the-envelope estimates of runtime cost. Remark 3.2 (Computational Cost). In comparison to the baseline autoregressive sampling algorithm where the recurrence is computed one token at a time, there are two additional sources of computa- tional cost, the cost to encode and decode latent states using P and C, and the potential cost incurred if convergence is slower than in baseline due to cascading effects of tokens changing late, as seen in Figure 4 if the adaptive version is used. The first cost depends on the size of the recurrent block R, relative to prelude and coda. For the model we study in this work this is disadvantageous as the FLOP costs for prelude and coda equal one pass through the recurrent block. We define the FLOP costs of one pass through R as f, ignoring attention, so that the FLOP costs of one iteration of the sampler is roughly (r′ + 1)f. Then, the total FLOP costs of running the baseline algorithm for w tokens are (r + 1)fw, compared to (r + r r′ )fw for the non-adaptive diffusion sampler. However, as we will see, this FLOP inefficiency is counteracted in practice by the parallelization gains obtained from the sampler. 4 THEORETICAL ANALYSIS This section develops a theoretical framework to justify the optimality of our design in balancing efficiency and expressiveness with two research questions (RQs): (i) Why should models prioritize recurrence, i.e. depth scaling, during prefilling? and (ii) Why should models prioritize parallelizing decoding from a larger wavefront of tokens using the sampler described in the previous section, i.e. width scaling during decoding? 4.1 PROBLEM FORMULATION Before answering these RQs, we formalize the notions of depth and width within our framework, which limits our analysis to Transformer-based autoregressive LLMs. In particular, we focus exclu- sively on the comparison between depth and width, without considering length (i.e., CoT) scaling. 7 Definition 4.1 (Depth and Width in Recurrent-Depth Models, informal). For recurrent-depth models, we define depth dt and width wt at each time step t ∈N, with initial conditions d0 = 0 and w0 = L0 (where L0 denotes the input sequence length). The corresponding update rules are given as follows: 1. Depth Update: At each step t, dt+1 = dt + 1 with d0 = 0, therefore dt = t for all t ∈N. 2. Width Update: At each step t, width changes only through token exits and token entries: δ(t) = −1, if a hidden state decodes from the model (exit event), +1, if a latest token encodes into the model (entry event). 4.2 LLMS SHOULD PRIORITIZE DEPTH SCALING DURING PREFILLING. To establish this, we first define a width scaling architecture without increasing model parameters following Wu et al. (2025). Concretely, we repeat each token along the sequence dimension. Note that during prefilling, increasing the number of such repeated tokens is equivalent to width scaling under our definition, since this expands the input sequence length. Here, we introduce two variants: • Width Scaling without KV Sharing (Width-NoShare): For the j-th copy of token i, attention is allowed to all copies of tokens 0, . . . , i −1, as well as the first j −1 copies of token i. • Width Scaling with KV Sharing (Width-KVShare): For the j-th copy of token i, attention is limited to (i) the last copy of tokens 0, . . . , i −1, and (ii) the first j −1 copies of token i. Based on the above definition, we state the importance of depth scaling during prefilling stage. Theorem 4.2 (Depth vs. Width Scaling in Prefilling, informal). Given the width-scaling architecture above and our recurrent-depth model with the same scaling factor s. Then the following hold: 1. Expressiveness. Under equal scaling factors, depth scaling is more expressive than width scaling. 2. Complexity. For asymptotic prefill cost (including both attention and linear layers), we have EDepth≤EWidth-KVShare<EWidth-NoShare. 3. Parallelism. There exists a threshold L⋆such that for L < L⋆, width scaling provides s2 times the parallelism of depth scaling, while for L ≥L⋆both saturate with similar parallelism. Remark 4.3. Let L be a random variable for prompt length with distribution D. Then the probability that depth scaling is more efficient than width scaling equals PrL∼D[L ≥L⋆]. Since L⋆on modern GPUs typically lies between a few hundred and a few thousand tokens while empirical input length distributions place substantial mass above this range, the probability is indeed close to 1 in practice. 4.3 LLMS SHOULD PRIORITIZE WIDTH SCALING DURING DECODING. Next, we prove that recurrent-depth models should use diffusion forcing samplers during decoding. Theorem 4.4 (Depth vs. Width Scaling in Decoding, informal). For recurrent-depth models with r > 1 inner recurrences, if diffusion forcing sampling and KV-cache sharing are employed with wavefront size W ≤L⋆, then diffusion forcing decoding achieves equal depth and strictly greater width compared to standard autoregressive decoding under the same runtime constraints. Mathematically, this relationship can be expressed as: dDF(T) = dAR(T) and wDF(T) > wAR(T), where T is the runtime budget, and DF and AR denote diffusion forcing and autoregressive decoding. Remark 4.5. Since model parameters and KV states are shared, the I/O cost of processing multiple tokens is asymptotically equivalent to processing a single token, enabling increased token generation within identical runtime constraints. At each decoding step, an expanded wavefront enables greater width scaling, providing superior expressiveness compared to autoregressive decoding. Empirically, since maximum recurrence depth rarely exceeds r ≈100, the condition W ≤L⋆typically holds. 5 EXPERIMENTAL EVALUATION To assess whether our method really accelerates generation, we compare our sampler against an equally optimized implementation of standard autoregressive sampling, both evaluated with a batch 8 Sampler GSM8K MATH500 HumanEval MBPP Acc t/s Acc t/s Acc t/s Acc t/s Static AR (r = 32) 41.77% 36.1 17.60% 6.4 22.56% 13.5 31.60% 15.3 Static AR (r = 4) 1.59% 312.9 3.20% 18.6 0.61% 244.1 1.40% 49.6 Static AR (r = 8) 31.61% 137.5 14.80% 23.1 21.34% 61.7 27.40% 57.2 Static AR (r = 64) 42.15% 18.2 18.60% 3.4 22.56% 7.3 30.20% 7.6 Adaptive Compute AR 42.23% 66.9 18.20% 12.2 21.95% 26.1 30.20% 29.5 Speculative Decoding AR 42.76% 69.5 17.80% 13.4 20.12% 27.5 30.60% 31.6 Diff. Sampler (r′ = 2, βt = 0.5) 40.71% 182.2 17.60% 35.9 20.12% 67.4 27.80% 92.3 Diff. Sampler (r′ = 4, βt = 0) 42.08% 157.3 18.00% 30.3 20.12% 64.9 31.00% 70.2 Relative Diff to AR (r = 32) +0.31 4.36x +0.40 4.73x -2.44 4.81x -0.60 4.59x Table 1: Performance comparison of autoregressive (AR) and diffusion samplers for the Huginn-0125 model using a comparable backend (batch size 1, transformers with dynamic KV caching, no further inference optimizations). For both samplers, we record the total evaluation time divided by the number of samples. “Acc” denotes task accuracy, and “t/s” denotes the median of tokens/second measurements for all samples in the task. size of 1. Extensions to larger batch sizes are conceivable but fall outside the scope of this study, see additional discussion in Section A.2 We evaluate the 4 generative benchmarks (GSM8K, MATH500, HumanEval and MBPP) also eval- uated in (Geiping et al., 2025), which we rerun using our sampler and compare against a number of baselines. Aside from the static, autoregressive baseline (static AR), at different recurrence steps, we also compare against the adaptive compute sampler of the original work, which still samples token-by-token, but exits the recurrence at every token, once the difference in latent space is small enough. We tune this sampler, finding that its hyperparameter, the threshold ε is similar to the diffusion sampler. Finally, we also compare against a heavily tuned self-speculative decoding baseline. It was observed in Geiping et al. (2025) that recurrent-depth models can be natively used as their own draft models, using fewer steps to draft. We find that drafting 4 tokens into the future, each with 4 draft steps is optimal for the Huginn-0125 checkpoint on GSM8k. We implement all samplers in comparable Hugging Face transformers implementations with dynamic KV caching and we measure mean accuracy and median tokens per second, computed over queries from each benchmark. All timings are obtained from CUDA event measurements on sandboxed A100-40GB GPUs. If not otherwise mentioned, we default to conservative settings for the sampler, always setting an exit threshold of ε = 0.03, βt = 0, η = 0.1 and r′ = 4, for a maximum wavefront size of 128. 0 5 10 15 20 25 30 Inner Recurrence (r′), t = 0 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 25 50 75 100 125 150 175 200 Tokens/Second 0.03 0.05 0.08 0.25 0.50 0.80 Exit Threshold ( ), t = 0 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 5: Trade-off between accuracy and speed on GSM8k under different hyperparameter choices. Left: Effect of increasing inner recurrence r′. Inner recurrence stabilizes the sampling, increasing accuracy at the cost of throughput. Right: Effect of varying the exit threshold ε. Modulating the exit threshold most directly trades off throughput and accuracy. 9 Sampler GSM8K Minerva Math HumanEval MBPP Acc Time Acc Time Acc Time Acc Time Huginn-0125 Static AR (r = 32) 41.77% 36.1 12.98% 21.0 22.56% 13.5 31.60% 15.3 Diff. Sampler (r′ = 4, βt = 0) 42.08% 157.3 13.06% 96.0 20.12% 64.9 31.00% 70.2 SWA Model Variant Static AR (r = 32) 47.99% 36.2 14.86% 22.1 23.78% 14.9 31.20% 11.8 Diff. Sampler (r′ = 4, βt = 0) 47.08% 143.1 14.52% 101.4 23.78% 71.2 29.20% 59.7 Math-Finetuned Model Static AR (r = 32) 58.91% 29.8 22.20% 7.9 17.07% 11.5 28.80% 11.2 Diff. Sampler (r′ = 4, βt = 0) 58.45% 144.1 21.40% 39.8 15.24% 47.9 27.60% 57.1 Table 2: Hyperparameters remain stable across different model variants. For example, both the weight-averaged checkpoint from the original work and the model finetuned on MetaMath for this study exhibit consistent speed gains in the range of 4–5× and accuracy deviations within 0.5–1%, even when baseline values change. 0.0 0.1 0.2 0.3 0.4 0.5 Embedding EMA Value ( ), t = 0 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Noise Coefficient ( t) with linear schedule 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 6: Left: Scaling the amount of momentum η in the conditioning., showing that small, but non-zero η values are optimal. Right: Scaling the amount of noise added during inference for r′ = 4, scheduled linearly in the number of recurrence steps, also measured on GSM8k. At r′ = 4, adding noise is not optimal. We plot the full spectrum of r′ to βt in Figure 7. 5.1 BENCHMARK RESULTS. We summarize our findings in Table 1. We find that on all benchmarks, executing the parallelized sampler leads to significant speedups of around 5x, with only minor trade-offs in generation quality of around 1%, depending on the task, owing to the trade-off set by our default hyperparameters. In Table 2 we repeat all benchmarks for two additional model checkpoints, the SWA model also released in Geiping et al. (2025), and a math variant, that we finetuned on the MetaMath dataset (Yu et al., 2023). Even though these model variants differ noticeably in their benchmark scores, they show similar gains and trade-offfs when using the diffusion sampler. 5.2 VARIANTS AND HYPERPARAMETERS Hyperparameter Choices. We show the trade-off curves arising when varying the inner recurrence r′ and the exit threshold ε in Figure 5 for two settings of noise βt, finding that we can effectively trade-off additional generation speed against minor losses in accuracy. We further vary the embedding EMA η and the noise schedule in Figure 6, showing that the sampler is robust to a broad range of settings for both options, although upsides are also limited. In Figure 7, we sweep a range of values for r′ and βt, showing that, on average, more noise is helpful if the model takes fewer inner recurrence steps. In Figure 8 (left), we confirm that larger maximum wavefront sizes (i.e. the number of tokens that is modified at once in the adaptive sampler) allow for better parallelization. For the tested A100 GPU, the optimal maximal wavefront size is between 64 and 128, although this is likely accelerator-specific. Moving Forward Multiple Steps. In principle, there is no limitation of only advancing one token at a time, and so we can consider headways greater than 1, however, for these, we have no prior position to decode from, so we can only fill these positions with random tokens, or a particular padding token. And, given that the model is still causal, it will take several steps for sequential 10 40 60 80 100 120 140 160 180 Throughput (Tokens/Second) 0.38 0.39 0.40 0.41 0.42 Accuracy r=1 r=2 r=3 r=4 r=5 r=6 r=7 r=8 r=16 r=32 t=0.0 t=0.1 t=0.2 t=0.3 t=0.5 t=0.8 t=1.0 t=2.0 Pareto Frontier (32, 0) (32, 0.5) (6, 0.2) (4, 0.2) (4, 0.3) (2, 0.2) (2, 0.3) (2, 0.5) (2, 0) Figure 7: The Pareto Curve of Accuracy and Throughput on GSM8k spanned by varying inner recurrence and noise hyperparameter pairs (r′, βt). Adding moderate amounts of noise, e.g. βt = 0.2 is dominating runs with no noise added. Note also the scale of y-axis, as even at the rightmost part of the frontier, we are observing accuracy losses of only 2%. 4 8 32 64 256 1024 Maximum Wavefront Size 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second 1 3 5 7 16 128 Headway (h) 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 8: Impact of Additional Hyperparameter Choices on GSM8k. Left: Size of the wavefront. Increasing wavefront size up to a value around 64-128 appears optimal. We note that the optimal wavefront size is also likely to be accelerator-specific. Right: Amount of headway. Larger amounts of headway than 1, i.e. advancing the sampler more than 1 token per step, do not seem to materialize practical speedups for the studied model. dependencies to be resolved, even if we sample a large headway in every step. We experiment with headways greater than one, but while interestingly stable, this accelerates the speed of the sampler only marginally at a cost to accuracy, see Figure 8, right. 6 CONCLUSIONS: ARE RECURRENT-DEPTH TRANSFORMERS SECRETLY CONTINUOUS LANGUAGE DIFFUSION MODELS? We have shown that, surprisingly, diffusion forcing samplers can be directly applied to parallelize the inference of existing recurrent-depth language models, which we justify theoretically, and implement in practice, leading to five times faster single-sequence inference, even on reasoning and coding benchmark questions. Interestingly, we could also interpret this relationship in the opposite direction, namely that the recurrent-depth models of Geiping et al. (2025) are effectively continuous latent language diffusion models, just trained with an unusual objective, namely truncated unrolling. This would imply that unrolling objectives could be competitive objectives for future language diffusion models. However, while this comparison is possible, the recurrent models like Huginn-0125 are still causal, at least without further training, and so this advantage of diffusion modeling remains elusive. 11 ACKNOWLEDGMENTS JG acknowledges the support of the Hector foundation and the Max Planck Computing and Data Facility (MPCDF), especially the compute cluster Raven. We are especially thankful that the MPCDF team was able to address the overheating issues that coincided with the large-scale deployment of the evaluation of this sampling algorithm to the Raven compute cluster. GS acknowledges the support of the International Max Planck Research School for Intelligent Systems (IMPRS-IS). REPRODUCIBILITY STATEMENT We provide the complete sampling algorithm we describe, including all options at https://gith ub.com/seal-rg/recurrent-pretraining. We provide experimental details in Section 5 and provide further ablations and variants in the appendix. If not otherwise mentioned, all measured values are based on at least 5 repeated experiments. All timing are measured using CUDA events on GPUs of equal power, and are comparable to timings in the same table or figure. REFERENCES Samira Abnar, Omid Saremi, Laurent Dinh, Shantel Wilson, Miguel Angel Bautista, Chen Huang, Vimal Thilak, Etai Littwin, Jiatao Gu, Josh Susskind, and Samy Bengio. 2023. Adaptivity and Modularity for Efficient Generalization Over Task Complexity. arxiv:2310.08866[cs]. S.-I. Amari. 1972. Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. IEEE Transactions on Computers, C-21(11):1197–1206. Cem Anil, Ashwini Pokle, Kaiqu Liang, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, J. Zico Kolter, and Roger Baker Grosse. 2022. Path Independent Equilibrium Models Can Better Exploit Test-Time Computation. In Advances in Neural Information Processing Systems. Marianne Arriola, Aaron Gokaslan, Justin T. Chiu, Zhihan Yang, Zhixuan Qi, Jiaqi Han, Subham Sekhar Sahoo, and Volodymyr Kuleshov. 2025. Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models. arxiv:2503.09573[cs]. Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. 2021. Structured Denoising Diffusion Models in Discrete State-Spaces. In Advances in Neural Information Processing Systems, volume 34, pages 17981–17993. Curran Associates, Inc. Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, and Tal Schuster. 2024. Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA. Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Roni Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2023. Universal Guidance for Diffusion Models. In The Twelfth International Conference on Learning Representations. Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, and Tom Gold- stein. 2022. End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking. In Advances in Neural Information Processing Systems. Jay Bear, Adam Prügel-Bennett, and Jonathon Hare. 2024. Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints. arxiv:2410.23451[cs]. Valentino Braitenberg. 1986. Vehicles: Experiments in Synthetic Psychology. MIT press. Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. arxiv:2401.10774[cs]. Boyuan Chen, Diego Marti Monso, Yilun Du, Max Simchowitz, Russ Tedrake, and Vincent Sitzmann. 2024a. Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion. arxiv:2407.01392[cs]. Boyuan Chen, Diego Marti Monso, Yilun Du, Max Simchowitz, Russ Tedrake, and Vincent Sitzmann. 2024b. Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion. arxiv:2407.01392[cs]. Ting Chen, Ruixiang Zhang, and Geoffrey Hinton. 2022. Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning. In The Eleventh International Conference on Learning Representations. Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, and Beidi Chen. 2024c. Sequoia: Scalable, robust, and hardware-aware speculative decoding. arXiv preprint arXiv:2402.12374. 12 Jeffrey Cheng and Benjamin Van Durme. 2024. Compressed Chain of Thought: Efficient Reasoning Through Dense Representations. arxiv:2412.13171[cs]. Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber, Christopher Potts, and Christopher D. Manning. 2024. MoEUT: Mixture-of-Experts Universal Transformers. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. Google DeepMind. 2025. Gemini Diffusion. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arxiv:2501.12948[cs]. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2019. Universal Transformers. arxiv:1807.03819[cs, stat]. Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, and Jonas Adler. 2022. Continuous diffusion for categorical data. arxiv:2211.15089[cs]. Rodney J. Douglas and Kevan A. C. Martin. 2004. Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27:419–451. Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee. 2025. Looped Transformers for Length Generalization. In The Thirteenth International Conference on Learning Representations. Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi, Stefanie Jegelka, and Sanjiv Kumar. 2024. Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning? Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, and Tom Goldstein. 2025. Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach. arxiv:2502.05171[cs]. F.A. Gers and J. Schmidhuber. 2000. Recurrent nets that time and count. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, volume 3, pages 189–194 vol.3. Angeliki Giannou, Shashank Rajput, Jy-Yong Sohn, Kangwook Lee, Jason D. Lee, and Dimitris Papailiopoulos. 2023. Looped Transformers as Programmable Computers. In Proceedings of the 40th International Conference on Machine Learning, pages 11398–11442. PMLR. Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, Hao Peng, and Lingpeng Kong. 2025a. Scaling Diffusion Language Models via Adaptation from Autoregressive Models. arxiv:2410.17891[cs]. Shansan Gong, Ruixiang Zhang, Huangjie Zheng, Jiatao Gu, Navdeep Jaitly, Lingpeng Kong, and Yizhe Zhang. 2025b. DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation. arxiv:2506.20639[cs]. Alex Graves. 2017. Adaptive Computation Time for Recurrent Neural Networks. arxiv:1603.08983[cs]. Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, and Faustino Gomez. 2025. Bayesian Flow Networks. arxiv:2308.07037[cs]. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing Machines. arxiv:1410.5401[cs]. Xiaochuang Han, Sachin Kumar, and Yulia Tsvetkov. 2023. SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11575–11596, Toronto, Canada. Association for Computational Linguistics. Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. 2024. Training Large Language Models to Reason in a Continuous Latent Space. arxiv:2412.06769[cs]. Tamir David Hay and Lior Wolf. 2023. Dynamic Layer Tying for Parameter-Efficient Transformers. In The Twelfth International Conference on Learning Representations. 13 Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. 2021. Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions. In Advances in Neural Information Processing Systems, volume 34, pages 12454–12465. Curran Associates, Inc. J J Hopfield. 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America, 79(8):2554–2558. Jaehyeong Jo and Sung Ju Hwang. 2025. Continuous Diffusion Model for Language Modeling. arxiv:2502.11564[cs]. Guy Kaplan, Matanel Oren, Yuval Reif, and Roy Schwartz. 2024. From Tokens to Words: On the Inner Lexicon of LLMs. arxiv:2410.05864[cs]. Rabeeh Karimi Mahabadi, Hamish Ivison, Jaesung Tae, James Henderson, Iz Beltagy, Matthew Peters, and Arman Cohan. 2024. TESS: Text-to-Text Self-Conditioned Simplex Diffusion. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2347–2361, St. Julian’s, Malta. Association for Computational Linguistics. V. A. Lamme and P. R. Roelfsema. 2000. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571–579. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast Inference from Transformers via Speculative Decoding. In Proceedings of the 40th International Conference on Machine Learning, pages 19274–19286. PMLR. Xian Li, Asa Cooper Stickland, Yuqing Tang, and Xiang Kong. 2020. Deep Transformers with Latent Depth. arxiv:2009.13102[cs]. Luyang Liu, Jonas Pfeiffer, Jiaxing Wu, Jun Xie, and Arthur Szlam. 2024a. Deliberation in Latent Space via Differentiable Cache Augmentation. arxiv:2412.17747[cs]. Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, and Vikas Chandra. 2024b. MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. arxiv:2402.14905[cs]. Aaron Lou, Chenlin Meng, and Stefano Ermon. 2024. Discrete diffusion modeling by estimating the ratios of the data distribution. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of ICML’24, pages 32819–32848, Vienna, Austria. JMLR.org. Mrinal Mathur, Barak A. Pearlmutter, and Sergey M. Plis. 2024. MIND over Body: Adaptive Thinking using Dynamic Computation. In The Thirteenth International Conference on Learning Representations. Sean Michael McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, and Tom Goldstein. 2024. Transformers Can Do Arithmetic with the Right Embeddings. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. William Merrill and Ashish Sabharwal. 2023. The Parallelism Tradeoff: Limitations of Log-Precision Trans- formers. Transactions of the Association for Computational Linguistics, 11:531–545. William Merrill and Ashish Sabharwal. 2025. A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers. arxiv:2503.03961[cs]. Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, and 1 others. 2024. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pages 932–949. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. Interspeech 2010, pages 1045–1048. Amirkeivan Mohtashami, Matteo Pagliardini, and Martin Jaggi. 2024. CoTFormer: A Chain of Thought Driven Architecture with Budget-Adaptive Computation Cost at Inference. In The Thirteenth International Conference on Learning Representations. Shen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. 2025. Large Language Diffusion Models. arxiv:2502.09992[cs]. 14 William Peebles and Saining Xie. 2023. Scalable Diffusion Models with Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI, page 24. Pierre Harvey Richemond, Sander Dieleman, and Arnaud Doucet. 2023. Categorical SDEs with Simplex Diffusion. In ICML 2023 Workshop: Sampling and Optimization in Discrete Space. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis With Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695. Mark Schöne, Babak Rahmani, Heiner Kremer, Fabian Falck, Hitesh Ballani, and Jannes Gladrow. 2025. Implicit Language Models are RNNs: Balancing Parallelization and Expressivity. arxiv:2502.07827[cs]. Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein. 2021. Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks. In Advances in Neural Information Processing Systems, volume 34, pages 6695–6706. Curran Associates, Inc. Oscar Skean, Md Rifat Arefin, Yann LeCun, and Ravid Shwartz-Ziv. 2024. Does Representation Matter? Exploring Intermediate Layers in Large Language Models. arxiv:2412.09563[cs]. Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. arXiv:1907.05600 [cs, stat]. Qi Sun, Marc Pickett, Aakash Kumar Nain, and Llion Jones. 2024. Transformer Layers as Painters. arxiv:2407.09298[cs]. Ilya Sutskever, Geoffrey E Hinton, and Graham W Taylor. 2008. The Recurrent Temporal Restricted Boltzmann Machine. In Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc. Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML’11, pages 1017–1024, Madison, WI, USA. Omnipress. Shawn Tan, Yikang Shen, Zhenfang Chen, Aaron Courville, and Chuang Gan. 2023. Sparse Universal Trans- former. arxiv:2310.07096[cs]. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. arxiv:2307.09288[cs]. Guan Wang, Jin Li, Yuhao Sun, Xing Chen, Changling Liu, Yue Wu, Meng Lu, Sen Song, and Yasin Abbasi Yadkori. 2025a. Hierarchical Reasoning Model. arxiv:2506.21734[cs]. Xu Wang, Chenkai Xu, Yijie Jin, Jiachun Jin, Hao Zhang, and Zhijie Deng. 2025b. Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing. arxiv:2508.09192[cs]. Bohong Wu, Shen Yan, Sijun Zhang, Jianqiao Lu, Yutao Zeng, Ya Wang, and Xun Zhou. 2025. Efficient pretraining length scaling. arXiv preprint arXiv:2504.14992. Zhihui Xie, Jiacheng Ye, Lin Zheng, Jiahui Gao, Jingwei Dong, Zirui Wu, Xueliang Zhao, Shansan Gong, Xin Jiang, Zhenguo Li, and Lingpeng Kong. 2025. Dream-Coder 7B: An Open Diffusion Language Model for Code. arxiv:2509.01142[cs]. Liu Yang, Kangwook Lee, Robert D. Nowak, and Dimitris Papailiopoulos. 2024. Looped Transformers are Better at Learning Learning Algorithms. In The Twelfth International Conference on Learning Representations. Jiacheng Ye, Zhihui Xie, Lin Zheng, Jiahui Gao, Zirui Wu, Xin Jiang, Zhenguo Li, and Lingpeng Kong. 2025. Dream 7B: Diffusion Large Language Models. arxiv:2508.15487[cs]. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models. In The Twelfth International Conference on Learning Representations. 15 A APPENDIX A.1 ADDITIONAL ALGORITHM DETAILS We provide the full algorithm, including adaptive exiting in Algorithm 2. Algorithm 2 Diffusion-style generation with latent-diference-based freezing Require: prompt x, max new tokens N, inner recurrence r, diffusion steps T, init scale α, exit threshold ε 1: yfrozen ←x, ycurrent ←x 2: z ←InitState(|x|, α) 3: zprev ←z 4: for step t = 1, . . . , T do 5: e ←P(ycurrent) 6: znoise ∼N(0, σ2I) 7: z ←(1 −βr)z + βrznoise 8: for j = 1, . . . , r do 9: z ←R(z, e) 10: end for 11: p ←C(z) 12: ˆy ←Sample(p) 13: ycurrent ←[yfrozen, ˆy] 14: δi ←||zi −zprev,i||2/||zi||2 ▷Compute relative changes in latents at each position. 15: if exists position i with δi < ε then 16: let k∗←index of the last such freezable position where δi < ε ▷freeze up to k∗ 17: yfrozen ←ycurrent[1:k∗] 18: keep only unfrozen tail of latents: z ←z[k∗−ℓ:] 19: else 20: no tokens frozen this step 21: end if 22: if |yfrozen| −|x| ≥N then break 23: end if 24: z ←[z, InitState(1, α)] ▷Append a new latent state for the next position 25: zprev ←z 26: end for 27: return yfrozen A.2 ADDITIONAL VARIANTS Larger Batch Sizes. The sampler discussed in this work could, in principle, also be deployed in batched or continuously-batched inference settings. In that scenario, similar to a paged KV cache, the sampler would reserve a number of slots for hidden states up to an occupancy multiplier of the maximum wavefront size, and would be capable of scheduling recurrent updates in tandem with sequence updates. For larger models, this would, if implemented efficiently, actually simplify deployment, as recurrent states are fungible, and e.g. states could be evicted from one device, and then bundled into the next forward call of the model on a different device, as the slots of the model’s hidden states do not have to correspond to contiguous sequences in either the sequence or the recurrence dimension. However, due to to the imminent complexity of such an inference engine, we refrained from engaging with this direction in this work, and focus only on properly bringing the general idea of diffusion sampling to recurrent-depth models, and leave a batched inference engine as a limitation, potentially motivating future work. A.3 ADDITIONAL INFORMATION. Finetuned Math Model: To verify that our findings are not limited to the particular model check- point we evaluate, and its capabilities, we finetune the original checkpoint for one epoch with a trapezoidal learning rate schedule with a peak learning rate of 5 × 10−7 using the MetaMath dataset 16 0 2 4 6 8 10 State Initialization Scale ( ) 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second 0.0 0.2 0.4 0.6 0.8 1.0 Continuous Compute 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 9: Impact of Additional Hyperparameter Choices, also on GSM8k. Left Initialization Scale of new states, which has only a minor effect of the result. Right: Continuous Compute, i.e. choosing to initialize new states with previously computed states (We initialize new states with the latest state from the position one step to the left). This is less effective for our sampler, given that the position one step to the left is only the result of r′ recurrences. (Yu et al., 2023). As suggested in the original work, we train the model with randomized unrolling, we set a mean of r = 32 and sample r from an Exponential distribution. As a sidenote, we remark that while we do train the full model, most of the gains can also be achieved by just finetuning the adapter component of the model that maps inputs and states into the recurrent block. Dataset Details. When evaluating GSM8k, we always refer to the CoT version of the dataset, which we provide to the model with the 8 few-shot examples associated with this variant as in Touvron et al. (2023). We always score GSM8k using the flexible-extract metric, i.e. by matching the last number in the model response against the reference answer. For MATH500, we follow the format of DeepSeek-AI et al. (2025), while for Minerva Math, we follow the updated format established in the lm-eval harness. For both, we grade answers using math-verify. For MBPP and HumanEval, we grade these benchmarks as normal. During inference we sample with a temperature of 0.2 and top-p factor of 0.95 as in Geiping et al. (2025). A.4 QUALITATIVE EVALUATION To visualize the progress (or temporary lack thereof) of the sampler on a challenging sequence from the GSM8k validation set, we provide a few additional visualizations in Figure 12. B THEORETICAL ANALYSIS B.1 PROBLEM FORMULATIONS Definition B.1 (Depth and Width in Recurrent-Depth Models). Consider a recurrent-depth model Md that processes an input sequence x ∈RL0×h, where L0 ∈N is the sequence length and h ∈N is the hidden dimension. At each generation step t ∈N, we define a hidden state as the h-dimensional output vector produced by a Transformer block for an input token. Let Ht ∈Rwt×h denote the 2D-matrix containing all hidden states at step t. We define the following two associated quantities: • the depth dt ∈N, defined as the number of serial Transformer block forward passes used to obtain Ht from the initial L0 input tokens (i.e., the generation step), while ignoring any discretization; • the width wt ∈N, defined as the cardinality of the active hidden-state set Ht ( i.e., the number of h-dimensional hidden states that are processed in parallel at generation step t). These quantities evolve according to the following rules: 1. Initialization. At time t = 0, we set d0 = 0, w0 = L0. 2. Depth update. At each step t ≥0, one additional Transformer block is applied, hence dt+1 = dt + 1, so that dt = t for all t ∈N. 17 1 2 3 4 5 6 7 8 16 32 Inner Recurrence (r′) 0.0 0.1 0.2 0.3 0.5 0.8 1.0 2.0 State Noise Mixing ( t) 0.385 (165 t/s) 0.405 (184 t/s) 0.407 (174 t/s) 0.405 (156 t/s) 0.407 (136 t/s) 0.418 (123 t/s) 0.418 (112 t/s) 0.416 (102 t/s) 0.418 (60 t/s) 0.427 (32 t/s) 0.392 (161 t/s) 0.402 (183 t/s) 0.406 (172 t/s) 0.414 (150 t/s) 0.410 (135 t/s) 0.413 (123 t/s) 0.394 (167 t/s) 0.412 (177 t/s) 0.412 (171 t/s) 0.418 (152 t/s) 0.416 (136 t/s) 0.424 (123 t/s) 0.392 (168 t/s) 0.409 (180 t/s) 0.404 (171 t/s) 0.415 (155 t/s) 0.412 (138 t/s) 0.417 (120 t/s) 0.383 (167 t/s) 0.407 (183 t/s) 0.402 (172 t/s) 0.410 (152 t/s) 0.410 (140 t/s) 0.421 (122 t/s) 0.418 (110 t/s) 0.424 (104 t/s) 0.419 (58 t/s) 0.425 (32 t/s) 0.387 (163 t/s) 0.396 (179 t/s) 0.410 (172 t/s) 0.415 (149 t/s) 0.418 (140 t/s) 0.414 (118 t/s) 0.392 (166 t/s) 0.406 (175 t/s) 0.411 (172 t/s) 0.406 (151 t/s) 0.411 (137 t/s) 0.421 (122 t/s) 0.388 (148 t/s) 0.408 (171 t/s) 0.404 (169 t/s) 0.409 (144 t/s) 0.409 (136 t/s) 0.415 (123 t/s) 0.385 0.390 0.395 0.400 0.405 0.410 0.415 0.420 0.425 Accuracy Figure 10: A heatmap of accuracy and throughput measurements spanned by varying noise and inner recurrence. 40 60 80 100 120 140 160 180 Throughput (Tokens/Second) 0.38 0.39 0.40 0.41 0.42 Accuracy 0 0.1 0.2 0.3 0.5 0.8 1 2 0 0.1 0.2 0.3 0.5 0.8 1 2 0 0.1 0.2 0.3 0.5 0.8 1 2 0 0.1 0.2 0.3 0.5 0.8 1 2 0 0.1 0.2 0.3 0.5 0.8 1 2 0 0.1 0.2 0.3 0.5 0.8 1 2 0 0.5 0 0.5 0 0.5 0 0.5 inner_recurrence=1 inner_recurrence=2 inner_recurrence=3 inner_recurrence=4 inner_recurrence=5 inner_recurrence=6 inner_recurrence=7 inner_recurrence=8 inner_recurrence=16 inner_recurrence=32 Figure 11: Additional visualizations of the trade-off of noise and inner recurrence in Figure 7. 18 Figure 12: A full example of a sampler hyperparameter failure. As in Figure 4, this figure shows the token ids on the left, as they change during successive steps of the sampler (running from top to bottom) over the sequence dimension (running left to right). We see that the model tries various configurations for the current tokens, before they are gradually frozen as their latent states converge. Due to a few hard decisions (from the perspective of the model), as seen on the stability charts on the right, early in the sequence, progress stalls until these tokens are decided, but then picks up speed again. However, large points of the wavefront all decode into the whitespace token (dark blue color), so that no useful states information is computed until the earlier tokens are resolved. 0.0 0.1 0.2 0.3 0.4 0.5 Embedding EMA Value ( ), t = 0 0.50 0.52 0.54 0.56 0.58 0.60 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second 10 1 Exit Threshold ( ) 0.50 0.52 0.54 0.56 0.58 0.60 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 Tokens/Second 0 5 10 15 20 25 30 Inner Recurrence (r′) 0.50 0.52 0.54 0.56 0.58 0.60 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 Tokens/Second 0 5 10 15 20 25 30 Inner Recurrence (r′), t = 0 0.50 0.52 0.54 0.56 0.58 0.60 Accuracy (%) Accuracy Throughput 25 50 75 100 125 150 175 200 Tokens/Second 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Noise Coefficient ( t) with linear schedule 0.50 0.52 0.54 0.56 0.58 0.60 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 13: Hyperparameter Robustness for the finetuned math model on GSM8k. These figure repeat the ablation study from the main body concerning hyperparameter robustness also for the finetuned math model, showing that behaviors are largely similar, even though the model’s capability has noticeably changed. 3. Width update. At each step t ≥0, the width changes only due to two types of events: 19 • Token entry: let e(t) ∈N0 denote the number of new tokens encoded into the model at step t, each contributing a new hidden state; • Hidden-state exit: let x(t) ∈N0 denote the number of hidden states removed from the model at step t due to decoding. Then the width evolves as wt+1 = wt + e(t) −x(t). Equivalently, the net change can be written as δ(t) = e(t) −x(t), so that δ(t) > 0 corresponds to entries (more tokens encoded), and δ(t) < 0 corresponds to exits (more hidden states decoded). Remark B.2. At any generation step t, all hidden states in Ht share the same depth dt, since each step corresponds to one additional serial forward pass through the Transformer block. B.2 LLMS SHOULD PRIORITIZE DEPTH SCALING DURING PREFILLING. Definition B.3 (Width Scaling Variants). Fix a width scaling factor s ∈N. Given an input sequence of length L, for each token i ∈{1, . . . , L} we create s copies indexed by j ∈{1, . . . , s}. The replicated sequence therefore has length L · s, with elements denoted by (i, j), the j-th copy of token i. The width-scaling model is obtained by applying a Transformer block (with parameters unchanged) to this replicated sequence under a customized attention mask, followed by a reduction step that maps the L · s outputs back to length L (e.g., by selecting the last copy or averaging over copies). We define two variants according to how each copy may attend: • Width-NoShare. The j-th copy of token i may attend to all copies of tokens 0, . . . , i −1, as well as the first j −1 copies of token i. • Width-KVShare. The j-th copy of token i may attend only to the last copy of tokens 0, . . . , i −1, together with the first j −1 copies of token i. Proposition B.4. During prefilling, both Width-NoShare and Width-KVShare are valid width- scaling architectures with factor s. Proof. Depth. At any generation step, each variant performs exactly one Transformer block forward pass on the replicated sequence. Therefore the number of serial block forward passes needed to produce the hidden states is unchanged, so the depth satisfies ˜dt = dt. Width. By definition, the width wt is the number of hidden states produced in parallel at step t. In the original model, prefilling a sequence of length L produces L hidden states per step. In both variants, we replicate each token s times, so the block computes hidden states for all pairs (i, j) with i ∈{1, . . . , L} and j ∈{1, . . . , s}. Hence the total number of hidden states produced in that step is ˜wt = Ls = s · wt. The difference between NoShare and KVShare lies only in the attention pattern (which copies each query may attend to). This affects information flow but not the number of hidden states computed. The optional reduction back to length L occurs after the parallel computation and thus does not change the measured width. Conclusion. Both variants keep serial depth fixed and enlarge width by a factor of s, which is precisely our notion of width scaling. 20
EFFICIENT PARALLEL SAMPLERS FOR RECURRENTDEPTH MODELS AND THEIR CONNECTION TO DIFFUSION LANGUAGE MODELS Jonas Geiping ELLIS Institute Tübingen & Max-Planck Institute for Intelligent Systems, Tübingen AI Center Xinyu Yang Electrical and Computer Engineering Carnegie Mellon University Guinan Su ELLIS Institute Tübingen & Max-Planck Institute for Intelligent Systems, Tübingen AI Center ABSTRACT Language models with recurrent depth, also referred to as universal or looped when considering transformers, are defined by the capacity to increase their computation through the repetition of layers. Recent efforts in pretraining have demonstrated that these architectures can scale to modern language modeling tasks while exhibiting advantages in reasoning tasks. In this work, we examine the relationship between recurrent-depth models and diffusion language models. Building on their similarities, we develop a new diffusion forcing sampler for these models to accelerate generation. The sampler advances by decoding new tokens at every forward pass of the model, while the latent states of these tokens can be further refined in parallel through recurrence. Theoretically, generation with our sampler is strictly more expressive than the baseline autoregressive generation using the same time budget on modern hardware. Moreover, this sampler, based on principles from diffusion literature, can be directly applied to existing 3.5B recurrent-depth transformers without any tuning, leading to up to a 5x speedup. Consequently, our findings not only provide an efficient mechanism for parallelizing the extra computation in recurrent-depth models at inference, but also suggest that such models can be naturally viewed as strong continuous, though causal, diffusion language models. 1 INTRODUCTION Conventional large language models (LLMs) are constructed as fixed-depth neural networks with a predetermined number of layers (often merely a two-digit count), a property that not only allows these models to be trained efficiently, but in practice appears sufficient for many tasks (Radford et al., 2019). However, more challenging tasks in mathematics and programming often require conceptual leaps over multiple steps in a logical chain that are hard for these models to learn robustly. More formally, fixed-depth transformers fall within the complexity class TC0 (Merrill and Sabharwal, 2023). To resolve this, recent efforts have focused on training models to "verbalize" their internal reasoning into chains-of-thought composed of small sub-steps, each of which the model is capable of learning. An alternative to fixed-depth are models with recurrent depth (Dehghani et al., 2019; Schwarzschild et al., 2021), which can repeat layers. Consequently, these models are also referred to as looped transformers (Giannou et al., 2023), or, as universal transformers, (Dehghani et al., 2019) when highlighting the motivation for these systems to represent universal Turing machines (Graves et al., 2014; Graves, 2017). Merrill and Sabharwal (2025) showcase that, in contrast to fixed-depth models, models with arbitrary recurrence are indeed capable of representing a larger complexity class. 1 16 Oct 2025 (a) Sequential Generation Recurrence Steps Token Position 1 2 3 4 5 6 7 8 9 (b) Diffusion Forcing Sampler Token Position 1 2 3 4 5 2 3 4 3 4 5 4 5 5 Legend Prior Compute Steps Current Compute Step Recurrence Steps 5 Figure 1: Different generation schemes for autoregressive, recurrent-depth models. Left: Standard sequential generation, which proceeds one token and step of the recurrence at a time (time steps denoted by integers). Right: A diffusion forcing sampler used for the same model can parallelize generation "diagonally", by computing one step of the recurrence per token position, iteratively refining its estimate of the generated sequence. However, generation with autoregressive recurrent-depth models is typically slow, given that every repetition of the model layers must be executed sequentially before the next token can be produced. In this work, we discuss how generation from recurrent-depth models can be efficiently parallelized by connecting this architecture to diffusion model architectures. Both architectures "recur" in a related sense, and even though both are trained with different objectives, we show that samplers adapted from diffusion literature, namely, diffusion forcing (Chen et al., 2024a), can be directly applied to parallelize the generation of already existing recurrent-depth models from Geiping et al. (2025). We discuss how to adapt diffusion forcing sampling to recurrent-depth models, identifying the essential architectural components and strategies required to ensure both stability of the iterates and bounded memory usage. As illustrated in Figure 1, rather than waiting for the recurrence at sequence position n to fully converge before generating the next token, our sampler immediately produces token drafts from intermediate iterates. It then advances to position n + 1, where the subsequent forward pass simultaneously refines the drafts for steps n and n + 1, while also decoding an initial draft for n + 2. In this way, the sampler achieves parallelism along the sequence dimension, akin to speculative decoding. Importantly, because the underlying model is trained as a causal language model, information still propagates strictly from left to right, and the output sequence is iteratively refined across recurrences. While this approach does not reduce FLOPs, it effectively exploits modern GPU architectures by unlocking additional opportunities for parallelization. Overall, in this work, we • Clarify the connection between recurrent-depth models and diffusion models via diffusion forcing and block or wave-based inference strategies for sequence-based diffusion models. • Describe how to apply principles from diffusion forcing to efficiently parallelize the inference of models with recurrent depth. • Verify that recurrent-depth models equipped with diffusion-forcing samplers achieve the strongest balance between practical efficiency and theoretical expressiveness in both prefilling and decoding. • Show that diffusion forcing sampling outperforms even well-tuned speculative decoding baselines for the same model with speed gains that can be smoothly traded off against accuracy. 2 RELATED WORK We briefly introduce both recurrent models and diffusion models, focusing on language applications. Recurrent Models. Models with recurrent computations have long been central to machine learning (Amari, 1972; Hopfield, 1982; Braitenberg, 1986; Gers and Schmidhuber, 2000; Sutskever et al., 2008), not only due to significant inspiration from recurrent firing patterns found in neuroscience (Hopfield, 1982; Lamme and Roelfsema, 2000; Douglas and Martin, 2004), and early successes in language modeling centered on recurrent neural networks (Mikolov et al., 2010; Sutskever et al., 2011). With the advent of transformer models, these architectures were considered less scalable, yet recurrence, now as recurrence in depth, was swiftly re-introduced as universal transformers, Dehghani et al. (2019), motivating that these models could be capable of modeling universal Turing machines (Graves et al., 2014). Other work showed that recurrent models were capable of learning algorithms (Schwarzschild et al., 2021; Bansal et al., 2022; Bear et al., 2024). That recurrence was 2 t=3 To determine the number I t=4 To determine how number of would t=5 To determine how many of eggst=6 To determine how many dozens dozens Claireto t=7 To determine how many dozens of of willons t=8 To determine how many dozens of eggs eggs be of t=9 To determine how many dozens of eggs Claire Claire Claire a t=10 To determine how many dozens of eggs Claire will will will day t=11 To determine how many dozens of eggs Claire will eat eat eat in t=12 To determine how many dozens of eggs Claire will eat in in in 4 t=13 To determine how many dozens of eggs Claire will eat in 4 4 4 weeks Frozen tokens, committed to KV cache Current Candidate Tokens Newly Sampled Token Figure 2: An example of a text sequence being generated with the proposed diffusion forcing sampler from a depth-recurrent model. While the original recurrent-depth model requires 32 recurrence steps to produce a single token (the default for this model), the diffusion sampler has already produced and committed 8 new tokens (green). As described, the sampler advances by at least one token per step of the recurrence. Decoded candidate tokens are initial spell out incoherent text, but map into the right concepts, and quickly improve with more steps. Note that the "freeze" decision is dynamic, based on distance to the previous state in latent space (not pictured). capable of representing universal computation was explicitly constructed for transformer models in Giannou et al. (2023), and following work on looped transformers has shown that these models are capable learners (Giannou et al., 2023; Gatmiry et al., 2024; Yang et al., 2024; McLeish et al., 2024; Fan et al., 2025). These findings have led to a wave of work training larger, general-purpose recurrent-depth models of language (Tan et al., 2023; Abnar et al., 2023; Mathur et al., 2024; Csordás et al., 2024; Geiping et al., 2025), as well as work retro-fitting recurrence into trained models (Li et al., 2020; Bae et al., 2024; Hay and Wolf, 2023; Liu et al., 2024b). Several of these works also highlight the possibility of implementing latent reasoning via recurrence, that is to complement or replace verbalized chains-of-thought, with recurrence. Examples for this line of thinking are Coconut (Hao et al., 2024), as well as Liu et al. (2024a); Cheng and Durme (2024). In this work, we propose a generic sampling algorithm for depth-recurrent models, which we test with the models developed in Geiping et al. (2025), which are trained for general language understanding and reasoning on 800B tokens, with 3.5B parameters, and openly accessible. Diffusion Language Models. Diffusion models are general-purpose generative models, with early applications focusing on continuous domains, such as images (Song and Ermon, 2019; Rombach et al., 2022; Peebles and Xie, 2023), which lead to substantial interest in extending diffusion also to discrete domains, such as text (Austin et al., 2021; Hoogeboom et al., 2021). Approaches to language diffusion are split on whether to incorporate diffusion processes on a continuous variable (that is then projected into discrete space) (Chen et al., 2022; Dieleman et al., 2022; Han et al., 2023; Karimi Mahabadi et al., 2024; Jo and Hwang, 2025; Graves et al., 2025), or diffusion processes that directly act on discrete variables.(Lou et al., 2024; Richemond et al., 2023). The latter though, especially using masking as the discrete forward diffusion step, is currently the most scalable approach, employed in large-scale efforts to train language diffusion models, competitive with autoregressive models (Gong et al., 2025a;b; DeepMind, 2025; Nie et al., 2025; Wang et al., 2025b; Xie et al., 2025; Ye et al., 2025). Inference Strategies for Diffusion Language Models. To make diffusion tractable for arbitrarily long sequences requires techniques such as block diffusion (Arriola et al., 2025), where chunks of text are being modified by the diffusion model, and then frozen and their KV entries cached, with the sampler moving to the next chunk. A more free-form approach to handle sequence-based diffusion is to use diffusion forcing (Chen et al., 2024a), a hybrid model, where noise is added to future tokens in a sequence relative to the position of the current token, allowing the sampler to move both on both the sequence dimension and the diffusion time dimension. Inference Acceleration for Fixed-Depth Transformers. Inference in transformers, in particular in small-batch settings is memory-bound, meaning that the transfer of data (or, in the default case, model parameters) to and from the L1 cache of the accelerator, is the dominating cost during inference, allowing algorithms such as speculative decoding (Leviathan et al., 2023) and follow-ups (Cai et al., 2024; Miao et al., 2024; Chen et al., 2024c) to improve inference speed through speculative 3 parallelization. Using smaller draft models, these algorithms draft text several tokens in the future, which can then be verified using the original model, as verification of the entire text sequence is compute-bound and hence, fast. 3 APPLYING DIFFUSION FORCING TO RECURRENT-DEPTH MODELS In this section, we present our diffusion forcing sampler for recurrent-depth models, which accelerates text generation by advancing at least one token in each recurrence step, as illustrated in Figure 2. 3.1 BACKGROUND ON RECURRENT-DEPTH MODELS Before detailing the diffusion forcing sampler, we briefly describe the particular recurrent-depth architecture proposed by Geiping et al. (2025), emphasizing features of the model that are pertinent to the sampler's functionality. We will use the checkpoint name Huginn-0125 when referring to the trained model. The architecture of this model contains three main blocks, each composed of multiple transformer layers: (i) a prelude block P, projecting the embedded input tokens into a latent space; (ii) a recurrent block R, iterating r times in this latent space by refining a state vector s, and (iii) a coda block C that processes the latent state and produces the model's probabilities for the next token, formally e = P(x) s0 ∼N(0, σ2I) si = R(e, si-1) for i ∈{1, . . . , r} p = C(sr). Notably, while this architecture is derived from looping the middle layers of fixed-depth transformer models (Skean et al., 2024; Sun et al., 2024; Kaplan et al., 2024), with features such as input injection and random state initialization from the literature of recurrent-depth models (Bansal et al., 2022; Anil et al., 2022), it can also be interpreted as a latent-space diffusion model following the formulation of Rombach et al. (2022): Starting from an initial random state s0, the model iteratively refines this state conditioned on the embedded input sequence e, until we assume the state to be completely denoised at the end of the process, at which point it will be decoded into the next token using C. In Geiping et al. (2025), this model is trained using randomized unrolling with truncated backpropagation, i.e. a random number of iterates r is sampled (from a Poisson-lognormal distribution), and then the entire current batch of training sequences is iterated up to r, which is not directly related to diffusion language modeling, which most effectively trains by randomized masking and adaptation from autoregressive models (Nie et al., 2025; Xie et al., 2025; Ye et al., 2025; Gong et al., 2025a). 3.2 THE INGREDIENTS FOR DIFFUSION FORCING SAMPLING While we will describe experiments using this particular recurrent-depth model, the sampler can be applied to all recurrent-depth models that fulfill the following requirements. Input Injection. The first necessary component, aside from the recurrence over layers itself, is the input injection, i.e., the conditioning of the recurrence on e. This will allow the sampler to "course-correct" if conditioning changes without having to jettison a partially computed state s. The other component that may improve the connection to diffusion modeling is the initialization of random states, but while we speculate that this is beneficial, it is not architecturally necessary. As such, recurrent-depth models trained in Csordás et al. (2024); Schöne et al. (2025); Mohtashami et al. (2024) or Wang et al. (2025a) could also benefit from this sampler. However, looped architectures such as Coconut (Hao et al., 2024), which train to feed the outputs of a transformer back in as inputs, are not immediately supported and require retraining to incorporate input injection, separating their recurrent state from their input data. Robust Recurrence. The second necessary property is that the intermediate state at every step of the recurrence must be decodable to approximately correct solutions. While this property is generally satisfied, it may fail in models trained exclusively with a fixed number of recurrences r, where decoding from earlier steps can yield nonsensical outputs rather than approximate versions of the intended result. 4 0 5 10 15 20 25 30 Recurrence Steps (r) 0% 10% 20% 30% 40% 50% Accuracy KV-Cache Sharing (KV size fix) Baseline (KV size prop. to r) Figure 3: The Huginn-0125 recurrent-depth model can match the baseline performance on the GSM8k dataset when enabling KV cache sharing (with a minimal cache size of 1), using r-times less memory for KV states. KV Cache Sharing. The third property, while not strictly required but highly beneficial for diffusion forcing samplers, is the ability of different recurrent depths to share their KV cache across iterations during generation. Without fungible KV states, all KV states from previous recurrences and tokens must be retained in memory, causing the cache to grow with both sequence length and recurrence depth. As shown in Figure 3, the trained Huginn-0125 model inherently supports KV cache sharing, allowing us to store only the KV state of the most recent recurrence for each token position1. 3.3 A SIMPLIFIED VERSION OF THE SAMPLING ALGORITHM Next, we present the algorithm for our sampler. Given a prompt x, Algorithm 1 describes a simplified version that directly adapts diffusion forcing principles to parallelize generation across the sequence dimension. This approach yields improvements in tokens/second while maintaining equivalent total FLOP requirements. An example of the sampler's behavior is illustrated in Figure 2. We emphasize several important aspects. First, the number of inner recurrences r′ may be chosen to exceed one. These additional iterations are relatively inexpensive, since the broader logic of the sampler is not yet invoked. More importantly, they serve to stabilize the recurrence. Because the conditioning on the input embedding e may vary across successive steps of the sampler, the model risks becoming trapped in oscillatory behavior unless sufficient steps are allowed to adapt the current state to the evolving conditioning. This mechanism closely parallels practices in the diffusion literature, such as the use of supplementary diffusion steps in Bansal et al. (2023) to incorporate complex guidance signals into image diffusion models. Second, we naturally employ this sampler only during the generation phase, as the prefill phase is already parallelizable in the sequence dimension, as the recurrence can be computed on all token positions of the prompt simultaneously. Further, in terms of efficiency, we note that we do not actually want to keep the state for all tokens changing indefinitely, as doing so would slow down generation again, as well as increase memory usage dramatically. As such, similar to block-diffusion samplers (Arriola et al., 2025), we look for rules that decide when each position is "finished". In the simplified version of the sampler, we freeze the last token once we reach a predetermined number of recurrence steps at this position - which naturally happens r positions behind the current maximal extent of the sequence. Frozen tokens are removed from the state vector and their KV states are added to the cache, so that, as in block diffusion models (Arriola et al., 2025), at each point in time, only a small subset of tokens in being modified and the full generation runs like a wave over the generating sequence. Finally, note that with this simplified exit rule, r′ = r exactly recovers the original autoregressive sampler. 3.4 STABILIZING COMPONENTS BASED ON DIFFUSION PRINCIPLES Further, we also experiment with adding momentum to the input conditioning e, setting e = η eprev + (1 -η)P(ycurrent), (1) which we find can stabilize the recurrence in challenging sequences, providing a small, but robust gain on average. Secondly, surprisingly, we find that even though these models are never trained with noise injected into intermediate states, that artificially adding noise to the state in each step of the sampler, in 1With this form of KV sharing, the cache requires no more memory than that of a parameter-matched fixed-depth transformer. 5 Algorithm 1 Diffusion-forcing-style generation, simplified version (Full Version in Algorithm 2) Require: current text context x, max new tokens N, inner recurrence r′, total recurrences per token r, diffusion steps T, init scale α 1: yfrozen ←x 2: ycurrent ←x 3: z ←InitState(1, α) 4: for step t = 1, . . . , T do 5: e ←P(ycurrent) 6: znoise ←InitState(1, α) 7: z ←(1 -βt)z + βt znoise 8: for j = 1, . . . , r′ do 9: z ←R(z, e) ▷Inner recurrence 10: end for 11: p ←C(z) ▷project latent states to logits 12: ˆy ←Sample(p) 13: ycurrent ←[yfrozen, ˆy] 14: yfrozen ←Assign ycurrent up to the last ⌈r r′ ⌉entries ▷Freeze completed tokens 15: if |yfrozen| -|x| ≥N then break 16: end if 17: z ←[z, InitState(1, α)] ▷Append a new latent state for the next position 18: end for 19: return yfrozen analogy to sampling from continuous diffusion models, i.e. z′ = (1 -βt)z + βt znoise where znoise = InitState(1, α), (2) can stabilize the iterative process, leading to gains in both accuracy and throughput if r′ is small. In practice, we schedule βt linearly as a function of steps t at each position, so that the latter steps are naturally less noisy (Chen et al., 2024b), which we find to outperform either scheduling βt scaled by the square root of the number of recurrences at each position or keeping it constant. However, the optimal value of βt depends on r′. 3.5 ADAPTIVE EXITS However, the fixed exit scheme of the simplified sampler can run into issues. The recurrent-depth model is causal and how quickly states converge depends on the complexity of the query. This can lead to situations where either, compute is wasted because the states at certain positions have already converged quicker than r, or, more problematically, states where, due to a late change in the conditioning of prior tokens, the states have not converged in time. Freezing these unfinished states would worsen generation, in the worst case leading to a spiral where each token that is frozen incorrectly slows down convergence further, leading to a response that becomes more incorrect with each token. However, we can remedy both cases through adaptive compute. We pick the simplest adaptive exit criterion, the normalized distance in latent space, and compute this quantity for each position and freeze up to all positions where this distance δi is smaller than a threshold ε. δi = ∥zi -zprev,i∥2 ∥zi∥2 , k∗= max{k : δj 1 inner recurrences, if diffusion forcing sampling and KV-cache sharing are employed with wavefront size W ≤L⋆, then diffusion forcing decoding achieves equal depth and strictly greater width compared to standard autoregressive decoding under the same runtime constraints. Mathematically, this relationship can be expressed as: dDF(T) = dAR(T) and wDF(T) > wAR(T), where T is the runtime budget, and DF and AR denote diffusion forcing and autoregressive decoding. Remark 4.5. Since model parameters and KV states are shared, the I/O cost of processing multiple tokens is asymptotically equivalent to processing a single token, enabling increased token generation within identical runtime constraints. At each decoding step, an expanded wavefront enables greater width scaling, providing superior expressiveness compared to autoregressive decoding. Empirically, since maximum recurrence depth rarely exceeds r ≈100, the condition W ≤L⋆typically holds. 5 EXPERIMENTAL EVALUATION To assess whether our method really accelerates generation, we compare our sampler against an equally optimized implementation of standard autoregressive sampling, both evaluated with a batch 8 Sampler GSM8K MATH500 HumanEval MBPP Acc t/s Acc t/s Acc t/s Acc t/s Static AR (r = 32) 41.77% 36.1 17.60% 6.4 22.56% 13.5 31.60% 15.3 Static AR (r = 4) 1.59% 312.9 3.20% 18.6 0.61% 244.1 1.40% 49.6 Static AR (r = 8) 31.61% 137.5 14.80% 23.1 21.34% 61.7 27.40% 57.2 Static AR (r = 64) 42.15% 18.2 18.60% 3.4 22.56% 7.3 30.20% 7.6 Adaptive Compute AR 42.23% 66.9 18.20% 12.2 21.95% 26.1 30.20% 29.5 Speculative Decoding AR 42.76% 69.5 17.80% 13.4 20.12% 27.5 30.60% 31.6 Diff. Sampler (r′ = 2, βt = 0.5) 40.71% 182.2 17.60% 35.9 20.12% 67.4 27.80% 92.3 Diff. Sampler (r′ = 4, βt = 0) 42.08% 157.3 18.00% 30.3 20.12% 64.9 31.00% 70.2 Relative Diff to AR (r = 32) +0.31 4.36x +0.40 4.73x -2.44 4.81x -0.60 4.59x Table 1: Performance comparison of autoregressive (AR) and diffusion samplers for the Huginn-0125 model using a comparable backend (batch size 1, transformers with dynamic KV caching, no further inference optimizations). For both samplers, we record the total evaluation time divided by the number of samples. "Acc" denotes task accuracy, and "t/s" denotes the median of tokens/second measurements for all samples in the task. size of 1. Extensions to larger batch sizes are conceivable but fall outside the scope of this study, see additional discussion in Section A.2 We evaluate the 4 generative benchmarks (GSM8K, MATH500, HumanEval and MBPP) also evaluated in (Geiping et al., 2025), which we rerun using our sampler and compare against a number of baselines. Aside from the static, autoregressive baseline (static AR), at different recurrence steps, we also compare against the adaptive compute sampler of the original work, which still samples token-by-token, but exits the recurrence at every token, once the difference in latent space is small enough. We tune this sampler, finding that its hyperparameter, the threshold ε is similar to the diffusion sampler. Finally, we also compare against a heavily tuned self-speculative decoding baseline. It was observed in Geiping et al. (2025) that recurrent-depth models can be natively used as their own draft models, using fewer steps to draft. We find that drafting 4 tokens into the future, each with 4 draft steps is optimal for the Huginn-0125 checkpoint on GSM8k. We implement all samplers in comparable Hugging Face transformers implementations with dynamic KV caching and we measure mean accuracy and median tokens per second, computed over queries from each benchmark. All timings are obtained from CUDA event measurements on sandboxed A100-40GB GPUs. If not otherwise mentioned, we default to conservative settings for the sampler, always setting an exit threshold of ε = 0.03, βt = 0, η = 0.1 and r′ = 4, for a maximum wavefront size of 128. 0 5 10 15 20 25 30 Inner Recurrence (r′), t = 0 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 25 50 75 100 125 150 175 200 Tokens/Second 0.03 0.05 0.08 0.25 0.50 0.80 Exit Threshold ( ), t = 0 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 5: Trade-off between accuracy and speed on GSM8k under different hyperparameter choices. Left: Effect of increasing inner recurrence r′. Inner recurrence stabilizes the sampling, increasing accuracy at the cost of throughput. Right: Effect of varying the exit threshold ε. Modulating the exit threshold most directly trades off throughput and accuracy. 9 Sampler GSM8K Minerva Math HumanEval MBPP Acc Time Acc Time Acc Time Acc Time Huginn-0125 Static AR (r = 32) 41.77% 36.1 12.98% 21.0 22.56% 13.5 31.60% 15.3 Diff. Sampler (r′ = 4, βt = 0) 42.08% 157.3 13.06% 96.0 20.12% 64.9 31.00% 70.2 SWA Model Variant Static AR (r = 32) 47.99% 36.2 14.86% 22.1 23.78% 14.9 31.20% 11.8 Diff. Sampler (r′ = 4, βt = 0) 47.08% 143.1 14.52% 101.4 23.78% 71.2 29.20% 59.7 Math-Finetuned Model Static AR (r = 32) 58.91% 29.8 22.20% 7.9 17.07% 11.5 28.80% 11.2 Diff. Sampler (r′ = 4, βt = 0) 58.45% 144.1 21.40% 39.8 15.24% 47.9 27.60% 57.1 Table 2: Hyperparameters remain stable across different model variants. For example, both the weight-averaged checkpoint from the original work and the model finetuned on MetaMath for this study exhibit consistent speed gains in the range of 4-5× and accuracy deviations within 0.5-1%, even when baseline values change. 0.0 0.1 0.2 0.3 0.4 0.5 Embedding EMA Value ( ), t = 0 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Noise Coefficient ( t) with linear schedule 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 6: Left: Scaling the amount of momentum η in the conditioning., showing that small, but non-zero η values are optimal. Right: Scaling the amount of noise added during inference for r′ = 4, scheduled linearly in the number of recurrence steps, also measured on GSM8k. At r′ = 4, adding noise is not optimal. We plot the full spectrum of r′ to βt in Figure 7. 5.1 BENCHMARK RESULTS. We summarize our findings in Table 1. We find that on all benchmarks, executing the parallelized sampler leads to significant speedups of around 5x, with only minor trade-offs in generation quality of around 1%, depending on the task, owing to the trade-off set by our default hyperparameters. In Table 2 we repeat all benchmarks for two additional model checkpoints, the SWA model also released in Geiping et al. (2025), and a math variant, that we finetuned on the MetaMath dataset (Yu et al., 2023). Even though these model variants differ noticeably in their benchmark scores, they show similar gains and trade-offfs when using the diffusion sampler. 5.2 VARIANTS AND HYPERPARAMETERS Hyperparameter Choices. We show the trade-off curves arising when varying the inner recurrence r′ and the exit threshold ε in Figure 5 for two settings of noise βt, finding that we can effectively trade-off additional generation speed against minor losses in accuracy. We further vary the embedding EMA η and the noise schedule in Figure 6, showing that the sampler is robust to a broad range of settings for both options, although upsides are also limited. In Figure 7, we sweep a range of values for r′ and βt, showing that, on average, more noise is helpful if the model takes fewer inner recurrence steps. In Figure 8 (left), we confirm that larger maximum wavefront sizes (i.e. the number of tokens that is modified at once in the adaptive sampler) allow for better parallelization. For the tested A100 GPU, the optimal maximal wavefront size is between 64 and 128, although this is likely accelerator-specific. Moving Forward Multiple Steps. In principle, there is no limitation of only advancing one token at a time, and so we can consider headways greater than 1, however, for these, we have no prior position to decode from, so we can only fill these positions with random tokens, or a particular padding token. And, given that the model is still causal, it will take several steps for sequential 10 40 60 80 100 120 140 160 180 Throughput (Tokens/Second) 0.38 0.39 0.40 0.41 0.42 Accuracy r=1 r=2 r=3 r=4 r=5 r=6 r=7 r=8 r=16 r=32 t=0.0 t=0.1 t=0.2 t=0.3 t=0.5 t=0.8 t=1.0 t=2.0 Pareto Frontier (32, 0) (32, 0.5) (6, 0.2) (4, 0.2) (4, 0.3) (2, 0.2) (2, 0.3) (2, 0.5) (2, 0) Figure 7: The Pareto Curve of Accuracy and Throughput on GSM8k spanned by varying inner recurrence and noise hyperparameter pairs (r′, βt). Adding moderate amounts of noise, e.g. βt = 0.2 is dominating runs with no noise added. Note also the scale of y-axis, as even at the rightmost part of the frontier, we are observing accuracy losses of only 2%. 4 8 32 64 256 1024 Maximum Wavefront Size 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second 1 3 5 7 16 128 Headway (h) 0.30 0.32 0.34 0.36 0.38 0.40 0.42 0.44 Accuracy (%) Accuracy Throughput 20 40 60 80 100 120 140 160 180 200 Tokens/Second Figure 8: Impact of Additional Hyperparameter Choices on GSM8k. Left: Size of the wavefront. Increasing wavefront size up to a value around 64-128 appears optimal. We note that the optimal wavefront size is also likely to be accelerator-specific. Right: Amount of headway. Larger amounts of headway than 1, i.e. advancing the sampler more than 1 token per step, do not seem to materialize practical speedups for the studied model. dependencies to be resolved, even if we sample a large headway in every step. We experiment with headways greater than one, but while interestingly stable, this accelerates the speed of the sampler only marginally at a cost to accuracy, see Figure 8, right. 6 CONCLUSIONS: ARE RECURRENT-DEPTH TRANSFORMERS SECRETLY CONTINUOUS LANGUAGE DIFFUSION MODELS? We have shown that, surprisingly, diffusion forcing samplers can be directly applied to parallelize the inference of existing recurrent-depth language models, which we justify theoretically, and implement in practice, leading to five times faster single-sequence inference, even on reasoning and coding benchmark questions. Interestingly, we could also interpret this relationship in the opposite direction, namely that the recurrent-depth models of Geiping et al. (2025) are effectively continuous latent language diffusion models, just trained with an unusual objective, namely truncated unrolling. This would imply that unrolling objectives could be competitive objectives for future language diffusion models. However, while this comparison is possible, the recurrent models like Huginn-0125 are still causal, at least without further training, and so this advantage of diffusion modeling remains elusive. 11 ACKNOWLEDGMENTS JG acknowledges the support of the Hector foundation and the Max Planck Computing and Data Facility (MPCDF), especially the compute cluster Raven. We are especially thankful that the MPCDF team was able to address the overheating issues that coincided with the large-scale deployment of the evaluation of this sampling algorithm to the Raven compute cluster. GS acknowledges the support of the International Max Planck Research School for Intelligent Systems (IMPRS-IS). REPRODUCIBILITY STATEMENT We provide the complete sampling algorithm we describe, including all options at https://gith ub.com/seal-rg/recurrent-pretraining. We provide experimental details in Section 5 and provide further ablations and variants in the appendix. If not otherwise mentioned, all measured values are based on at least 5 repeated experiments. All timing are measured using CUDA events on GPUs of equal power, and are comparable to timings in the same table or figure. REFERENCES Samira Abnar, Omid Saremi, Laurent Dinh, Shantel Wilson, Miguel Angel Bautista, Chen Huang, Vimal Thilak, Etai Littwin, Jiatao Gu, Josh Susskind, and Samy Bengio. 2023. Adaptivity and Modularity for Efficient Generalization Over Task Complexity. arxiv:2310.08866[cs]. S.-I. Amari. 1972. Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. IEEE Transactions on Computers, C-21(11):1197-1206. Cem Anil, Ashwini Pokle, Kaiqu Liang, Johannes Treutlein, Yuhuai Wu, Shaojie Bai, J. Zico Kolter, and Roger Baker Grosse. 2022. Path Independent Equilibrium Models Can Better Exploit Test-Time Computation. In Advances in Neural Information Processing Systems. Marianne Arriola, Aaron Gokaslan, Justin T. Chiu, Zhihan Yang, Zhixuan Qi, Jiaqi Han, Subham Sekhar Sahoo, and Volodymyr Kuleshov. 2025. Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models. arxiv:2503.09573[cs]. Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. 2021. Structured Denoising Diffusion Models in Discrete State-Spaces. In Advances in Neural Information Processing Systems, volume 34, pages 17981-17993. Curran Associates, Inc. Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, and Tal Schuster. 2024. Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA. Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Roni Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2023. Universal Guidance for Diffusion Models. In The Twelfth International Conference on Learning Representations. Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, and Tom Goldstein. 2022. End-to-end Algorithm Synthesis with Recurrent Networks: Extrapolation without Overthinking. In Advances in Neural Information Processing Systems. Jay Bear, Adam Prügel-Bennett, and Jonathon Hare. 2024. Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints. arxiv:2410.23451[cs]. Valentino Braitenberg. 1986. Vehicles: Experiments in Synthetic Psychology. MIT press. Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. arxiv:2401.10774[cs]. Boyuan Chen, Diego Marti Monso, Yilun Du, Max Simchowitz, Russ Tedrake, and Vincent Sitzmann. 2024a. Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion. arxiv:2407.01392[cs]. Boyuan Chen, Diego Marti Monso, Yilun Du, Max Simchowitz, Russ Tedrake, and Vincent Sitzmann. 2024b. Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion. arxiv:2407.01392[cs]. Ting Chen, Ruixiang Zhang, and Geoffrey Hinton. 2022. Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning. In The Eleventh International Conference on Learning Representations. Zhuoming Chen, Avner May, Ruslan Svirschevski, Yuhsun Huang, Max Ryabinin, Zhihao Jia, and Beidi Chen. 2024c. Sequoia: Scalable, robust, and hardware-aware speculative decoding. arXiv preprint . 12 Jeffrey Cheng and Benjamin Van Durme. 2024. Compressed Chain of Thought: Efficient Reasoning Through Dense Representations. arxiv:2412.13171[cs]. Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber, Christopher Potts, and Christopher D. Manning. 2024. MoEUT: Mixture-of-Experts Universal Transformers. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. Google DeepMind. 2025. Gemini Diffusion. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arxiv:2501.12948[cs]. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2019. Universal Transformers. arxiv:1807.03819[cs, stat]. Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, and Jonas Adler. 2022. Continuous diffusion for categorical data. arxiv:2211.15089[cs]. Rodney J. Douglas and Kevan A. C. Martin. 2004. Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27:419-451. Ying Fan, Yilun Du, Kannan Ramchandran, and Kangwook Lee. 2025. Looped Transformers for Length Generalization. In The Thirteenth International Conference on Learning Representations. Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi, Stefanie Jegelka, and Sanjiv Kumar. 2024. Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning? Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, and Tom Goldstein. 2025. Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach. arxiv:2502.05171[cs]. F.A. Gers and J. Schmidhuber. 2000. Recurrent nets that time and count. In Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, volume 3, pages 189-194 vol.3. Angeliki Giannou, Shashank Rajput, Jy-Yong Sohn, Kangwook Lee, Jason D. Lee, and Dimitris Papailiopoulos. 2023. Looped Transformers as Programmable Computers. In Proceedings of the 40th International Conference on Machine Learning, pages 11398-11442. PMLR. Shansan Gong, Shivam Agarwal, Yizhe Zhang, Jiacheng Ye, Lin Zheng, Mukai Li, Chenxin An, Peilin Zhao, Wei Bi, Jiawei Han, Hao Peng, and Lingpeng Kong. 2025a. Scaling Diffusion Language Models via Adaptation from Autoregressive Models. arxiv:2410.17891[cs]. Shansan Gong, Ruixiang Zhang, Huangjie Zheng, Jiatao Gu, Navdeep Jaitly, Lingpeng Kong, and Yizhe Zhang. 2025b. DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation. arxiv:2506.20639[cs]. Alex Graves. 2017. Adaptive Computation Time for Recurrent Neural Networks. arxiv:1603.08983[cs]. Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, and Faustino Gomez. 2025. Bayesian Flow Networks. arxiv:2308.07037[cs]. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing Machines. arxiv:1410.5401[cs]. Xiaochuang Han, Sachin Kumar, and Yulia Tsvetkov. 2023. SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11575-11596, Toronto, Canada. Association for Computational Linguistics. Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. 2024. Training Large Language Models to Reason in a Continuous Latent Space. arxiv:2412.06769[cs]. Tamir David Hay and Lior Wolf. 2023. Dynamic Layer Tying for Parameter-Efficient Transformers. In The Twelfth International Conference on Learning Representations. 13 Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. 2021. Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions. In Advances in Neural Information Processing Systems, volume 34, pages 12454-12465. Curran Associates, Inc. J J Hopfield. 1982. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences of the United States of America, 79(8):2554-2558. Jaehyeong Jo and Sung Ju Hwang. 2025. Continuous Diffusion Model for Language Modeling. arxiv:2502.11564[cs]. Guy Kaplan, Matanel Oren, Yuval Reif, and Roy Schwartz. 2024. From Tokens to Words: On the Inner Lexicon of LLMs. arxiv:2410.05864[cs]. Rabeeh Karimi Mahabadi, Hamish Ivison, Jaesung Tae, James Henderson, Iz Beltagy, Matthew Peters, and Arman Cohan. 2024. TESS: Text-to-Text Self-Conditioned Simplex Diffusion. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2347-2361, St. Julian's, Malta. Association for Computational Linguistics. V. A. Lamme and P. R. Roelfsema. 2000. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571-579. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast Inference from Transformers via Speculative Decoding. In Proceedings of the 40th International Conference on Machine Learning, pages 19274-19286. PMLR. Xian Li, Asa Cooper Stickland, Yuqing Tang, and Xiang Kong. 2020. Deep Transformers with Latent Depth. arxiv:2009.13102[cs]. Luyang Liu, Jonas Pfeiffer, Jiaxing Wu, Jun Xie, and Arthur Szlam. 2024a. Deliberation in Latent Space via Differentiable Cache Augmentation. arxiv:2412.17747[cs]. Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, and Vikas Chandra. 2024b. MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. arxiv:2402.14905[cs]. Aaron Lou, Chenlin Meng, and Stefano Ermon. 2024. Discrete diffusion modeling by estimating the ratios of the data distribution. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of ICML'24, pages 32819-32848, Vienna, Austria. JMLR.org. Mrinal Mathur, Barak A. Pearlmutter, and Sergey M. Plis. 2024. MIND over Body: Adaptive Thinking using Dynamic Computation. In The Thirteenth International Conference on Learning Representations. Sean Michael McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, and Tom Goldstein. 2024. Transformers Can Do Arithmetic with the Right Embeddings. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. William Merrill and Ashish Sabharwal. 2023. The Parallelism Tradeoff: Limitations of Log-Precision Transformers. Transactions of the Association for Computational Linguistics, 11:531-545. William Merrill and Ashish Sabharwal. 2025. A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers. arxiv:2503.03961[cs]. Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, and 1 others. 2024. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, pages 932-949. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proc. Interspeech 2010, pages 1045-1048. Amirkeivan Mohtashami, Matteo Pagliardini, and Martin Jaggi. 2024. CoTFormer: A Chain of Thought Driven Architecture with Budget-Adaptive Computation Cost at Inference. In The Thirteenth International Conference on Learning Representations. Shen Nie, Fengqi Zhu, Zebin You, Xiaolu Zhang, Jingyang Ou, Jun Hu, Jun Zhou, Yankai Lin, Ji-Rong Wen, and Chongxuan Li. 2025. Large Language Diffusion Models. arxiv:2502.09992[cs]. 14 William Peebles and Saining Xie. 2023. Scalable Diffusion Models with Transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195-4205. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI, page 24. Pierre Harvey Richemond, Sander Dieleman, and Arnaud Doucet. 2023. Categorical SDEs with Simplex Diffusion. In ICML 2023 Workshop: Sampling and Optimization in Discrete Space. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-Resolution Image Synthesis With Latent Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695. Mark Schöne, Babak Rahmani, Heiner Kremer, Fabian Falck, Hitesh Ballani, and Jannes Gladrow. 2025. Implicit Language Models are RNNs: Balancing Parallelization and Expressivity. arxiv:2502.07827[cs]. Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein. 2021. Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks. In Advances in Neural Information Processing Systems, volume 34, pages 6695-6706. Curran Associates, Inc. Oscar Skean, Md Rifat Arefin, Yann LeCun, and Ravid Shwartz-Ziv. 2024. Does Representation Matter? Exploring Intermediate Layers in Large Language Models. arxiv:2412.09563[cs]. Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. . Qi Sun, Marc Pickett, Aakash Kumar Nain, and Llion Jones. 2024. Transformer Layers as Painters. arxiv:2407.09298[cs]. Ilya Sutskever, Geoffrey E Hinton, and Graham W Taylor. 2008. The Recurrent Temporal Restricted Boltzmann Machine. In Advances in Neural Information Processing Systems, volume 21. Curran Associates, Inc. Ilya Sutskever, James Martens, and Geoffrey Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, pages 1017-1024, Madison, WI, USA. Omnipress. Shawn Tan, Yikang Shen, Zhenfang Chen, Aaron Courville, and Chuang Gan. 2023. Sparse Universal Transformer. arxiv:2310.07096[cs]. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, and 49 others. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. arxiv:2307.09288[cs]. Guan Wang, Jin Li, Yuhao Sun, Xing Chen, Changling Liu, Yue Wu, Meng Lu, Sen Song, and Yasin Abbasi Yadkori. 2025a. Hierarchical Reasoning Model. arxiv:2506.21734[cs]. Xu Wang, Chenkai Xu, Yijie Jin, Jiachun Jin, Hao Zhang, and Zhijie Deng. 2025b. Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing. arxiv:2508.09192[cs]. Bohong Wu, Shen Yan, Sijun Zhang, Jianqiao Lu, Yutao Zeng, Ya Wang, and Xun Zhou. 2025. Efficient pretraining length scaling. arXiv preprint . Zhihui Xie, Jiacheng Ye, Lin Zheng, Jiahui Gao, Jingwei Dong, Zirui Wu, Xueliang Zhao, Shansan Gong, Xin Jiang, Zhenguo Li, and Lingpeng Kong. 2025. Dream-Coder 7B: An Open Diffusion Language Model for Code. arxiv:2509.01142[cs]. Liu Yang, Kangwook Lee, Robert D. Nowak, and Dimitris Papailiopoulos. 2024. Looped Transformers are Better at Learning Learning Algorithms. In The Twelfth International Conference on Learning Representations. Jiacheng Ye, Zhihui Xie, Lin Zheng, Jiahui Gao, Zirui Wu, Xin Jiang, Zhenguo Li, and Lingpeng Kong. 2025. Dream 7B: Diffusion Large Language Models. arxiv:2508.15487[cs]. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models. In The Twelfth International Conference on Learning Representations. 15 A APPENDIX A.1 ADDITIONAL ALGORITHM DETAILS We provide the full algorithm, including adaptive exiting in Algorithm 2. Algorithm 2 Diffusion-style generation with latent-diference-based freezing Require: prompt x, max new tokens N, inner recurrence r, diffusion steps T, init scale α, exit threshold ε 1: yfrozen ←x, ycurrent ←x 2: z ←InitState(|x|, α) 3: zprev ←z 4: for step t = 1, . . . , T do 5: e ←P(ycurrent) 6: znoise ∼N(0, σ2I) 7: z ←(1 -βr)z + βrznoise 8: for j = 1, . . . , r do 9: z ←R(z, e) 10: end for 11: p ←C(z) 12: ˆy ←Sample(p) 13: ycurrent ←[yfrozen, ˆy] 14: δi ←||zi -zprev,i||2/||zi||2 ▷Compute relative changes in latents at each position. 15: if exists position i with δi 0 corresponds to entries (more tokens encoded), and δ(t) < 0 corresponds to exits (more hidden states decoded). Remark B.2. At any generation step t, all hidden states in Ht share the same depth dt, since each step corresponds to one additional serial forward pass through the Transformer block. B.2 LLMS SHOULD PRIORITIZE DEPTH SCALING DURING PREFILLING. Definition B.3 (Width Scaling Variants). Fix a width scaling factor s ∈N. Given an input sequence of length L, for each token i ∈{1, . . . , L} we create s copies indexed by j ∈{1, . . . , s}. The replicated sequence therefore has length L · s, with elements denoted by (i, j), the j-th copy of token i. The width-scaling model is obtained by applying a Transformer block (with parameters unchanged) to this replicated sequence under a customized attention mask, followed by a reduction step that maps the L · s outputs back to length L (e.g., by selecting the last copy or averaging over copies). We define two variants according to how each copy may attend: • Width-NoShare. The j-th copy of token i may attend to all copies of tokens 0, . . . , i -1, as well as the first j -1 copies of token i. • Width-KVShare. The j-th copy of token i may attend only to the last copy of tokens 0, . . . , i -1, together with the first j -1 copies of token i. Proposition B.4. During prefilling, both Width-NoShare and Width-KVShare are valid widthscaling architectures with factor s. Proof. Depth. At any generation step, each variant performs exactly one Transformer block forward pass on the replicated sequence. Therefore the number of serial block forward passes needed to produce the hidden states is unchanged, so the depth satisfies ̃dt = dt. Width. By definition, the width wt is the number of hidden states produced in parallel at step t. In the original model, prefilling a sequence of length L produces L hidden states per step. In both variants, we replicate each token s times, so the block computes hidden states for all pairs (i, j) with i ∈{1, . . . , L} and j ∈{1, . . . , s}. Hence the total number of hidden states produced in that step is ̃wt = Ls = s · wt. The difference between NoShare and KVShare lies only in the attention pattern (which copies each query may attend to). This affects information flow but not the number of hidden states computed. The optional reduction back to length L occurs after the parallel computation and thus does not change the measured width. Conclusion. Both variants keep serial depth fixed and enlarge width by a factor of s, which is precisely our notion of width scaling. 20
2510.14962
Preprint RAINDIFF: END-TO-END PRECIPITATION NOWCAST- ING VIA TOKEN-WISE ATTENTION DIFFUSION Thao Nguyen, Jiaqi Ma, Fahad Khan, Souhaib Ben Taieb, Salman Khan Mohamed Bin Zayed University of AI {thao.nguyen,salman.khan}@mbzuai.ac.ae ABSTRACT Precipitation nowcasting, predicting future radar echo sequences from current ob- servations, is a critical yet challenging task due to the inherently chaotic and tightly coupled spatio-temporal dynamics of the atmosphere. While recent ad- vances in diffusion-based models attempt to capture both large-scale motion and fine-grained stochastic variability, they often suffer from scalability issues: latent- space approaches require a separately trained autoencoder, adding complexity and limiting generalization, while pixel-space approaches are computationally inten- sive and often omit attention mechanisms, reducing their ability to model long- range spatio-temporal dependencies. To address these limitations, we propose a Token-wise Attention integrated into not only the U-Net diffusion model but also the spatio-temporal encoder that dynamically captures multi-scale spatial inter- actions and temporal evolution. Unlike prior approaches, our method natively integrates attention into the architecture without incurring the high resource cost typical of pixel-space diffusion, thereby eliminating the need for separate latent modules. Our extensive experiments and visual evaluations across diverse datasets demonstrate that the proposed method significantly outperforms state-of-the-art approaches, yielding superior local fidelity, generalization, and robustness in com- plex precipitation forecasting scenarios. Our code is released here. 1 INTRODUCTION Predicting when and where rain will fall over the next few minutes to hours, known as precipitation nowcasting, remains one of the most pressing challenges in weather forecasting (Ravuri et al., 2021; Veillette et al., 2020). The goal is to predict a sequence of future radar echoes conditioned on recent observations. Traditional approaches rely on numerical weather prediction (NWP), which explicitly models atmospheric dynamics through partial differential equations (PDEs) (Bauer et al., 2015). While physically grounded, NWP methods are computationally expensive and slow to update, lim- iting their use for the rapid, iterative forecasts required in nowcasting (Bi et al., 2023). Recent advances in deep learning have enabled data-driven alternatives that bypass explicit PDE solvers. Deterministic nowcasting models have demonstrated a strong ability to capture the large- scale advection of precipitation fields, but they typically suffer from oversmoothing effects, espe- cially at longer lead times (extending to several hours), resulting in underestimation of atmospheric intensity and loss of fine-scale spatial detail (Shi et al., 2015; Ravuri et al., 2021). To mitigate this, probabilistic generative approaches leveraging generative adversarial networks (GANs) or diffusion models have been introduced to generate more realistic and accurate radar fields (Zhang et al., 2023; Gao et al., 2023). However, these models often inflate the effective degrees of freedom by treating the entire spatio-temporal field as stochastic, which introduces excessive randomness and reduces the positional accuracy of rainfall structures (Yu et al., 2024). To reconcile these trade-offs, hybrid architectures have emerged that benefit from both determin- istic and probabilistic paradigms. These methods decompose weather evolution into (i) a globally coherent deterministic component to capture large-scale dynamics, and (ii) a localized stochastic refinement to model fine-grained variability (Yu et al., 2024; Gong et al., 2024). This factorization has shown promise in simultaneously improving positional fidelity and generative sharpness. 1 arXiv:2510.14962v1 [cs.CV] 16 Oct 2025 Preprint Input RainDiff (Ours) -15 Min -10 Min 0 Min Ground Truth DiffCast -20 Min +25 Min +40 Min +70 Min +10 Min +55 Min +85 Min +100 Min RainDiff DiffCast +100 Min -5 Min Ground Truth Figure 1: A visualization from the SEVIR dataset shows that, at the longest forecast horizon, Rain- Diff avoids oversmoothed outputs and better preserves weather fronts compared to the state-of-the- art DiffCast (Yu et al., 2024), resulting in closer alignment with the ground truth. Despite these advances, key limitations persist for hybrid architectures that restrict their scalability and generalization. For example, CasCast (Gong et al., 2024) relies on a latent-space formulation, requiring an auxiliary autoencoder pre-trained on large datasets. This dependency hampers general- ization to new domains where suitable autoencoders may be unavailable. In contrast, DiffCast (Yu et al., 2024) operates directly in pixel space, thereby avoiding latent bottlenecks. However, to remain computationally tractable, it omits self-attention mechanisms (Dosovitskiy et al., 2021) in the high- resolution layers. This design choice limits the model’s capacity to capture complex long-range and fine-scale spatio-temporal dependencies, as illustrated in Figure 1. To overcome these limitations, we propose Token-wise Attention, integrated across all spatial reso- lutions in our network. This design enables accurate modeling of fine-scale structures while main- taining computational efficiency. Unlike conventional self-attention (Yu et al., 2024; Gong et al., 2024), our token-wise formulation avoids the quadratic complexity induced by the high dimension- ality of radar data. Moreover, all operations are performed directly in pixel space, removing the need for an external latent autoencoder (Gong et al., 2024). Finally, drawing on empirical insights, we introduce Post-attention, which emphasizes the informative conditional context crucial for the denoising process. To summarize, our key contributions are: • We introduce Token-wise Attention, a novel mechanism that enables full-resolution self- attention directly in pixel space while retaining tractable computational cost. • We provide a theoretical analysis exposing the limitations of integrating existing attention mechanisms into recurrent conditioners, and introduce Post-attention, a lightweight drop-in module that extracts critical contextual information to guide denoising while maintaining computational efficiency. • We perform extensive experiments on four benchmark datasets, demonstrating that our approach consistently outperforms state-of-the-art methods in both deterministic and gen- erative settings, achieving superior performance across multiple evaluation metrics. 2 RELATED WORK Recently, deep learning has emerged as a powerful alternative to traditional Numerical Weather Prediction (NWP) (Skamarock et al., 2008), with approaches typically classified as determinis- tic or probabilistic. Early efforts were predominantly deterministic, emphasizing spatio-temporal modeling to produce point forecasts of future atmospheric conditions. For example, ConvLSTM (Shi et al., 2015) combined convolutional and recurrent layers to capture spatio-temporal dynam- ics. Later methods sought to enhance accuracy by integrating physical constraints, as in PhyDNet 2 Preprint Figure 2: Overall architecture of our precipitation nowcasting framework RainDiff. Given an input sequence X0, a deterministic predictor Fθ1 outputs a coarse prediction µ. The concatenation of X0 and µ is encoded by a cascaded spatio-temporal encoder Fθ3 to yield conditioning features h, refined by Post-attention. A diffusion-based stochastic module Fθ2 equipped with Token-wise Attention at all resolutions in pixel space predicts residual segments ˆr autoregressively, where the denoising process is conditioned on h and the predicted segments. This design captures rich contextual rela- tionships and inter-frame dependencies in the radar field while keeping computation efficient. (Guen & Thome, 2020), or by incorporating a broader set of meteorological variables for more comprehensive forecasting, as in Pangu (Bi et al., 2023) and Fengwu (Chen et al., 2023). Although these models effectively capture large-scale motion patterns, their predictions tend to become overly smooth and blurry at longer lead times. This degradation arises from compounding errors, reliance on Mean Squared Error (MSE) loss, and the absence of local stochastic modeling—all of which suppress fine-scale detail. Generative models have been proposed to mitigate the blurriness of deterministic forecasts by intro- ducing latent variables that capture the inherent stochasticity of weather patterns. Examples include GAN-based approaches such as DGMR (Ravuri et al., 2021) and more recently, diffusion-based models like PreDiff (Gao et al., 2023). While some generative methods treat the entire system stochastically, a growing line of research explores hybrid strategies. Notably, DiffCast (Yu et al., 2024) and CasCast (Gong et al., 2024) combine deterministic modeling of large-scale motion with probabilistic modeling of fine-scale variability, thereby leveraging the complementary strengths of both paradigms. Although diffusion-based methods have achieved promising results, they often face limitations such as the training overhead of external latent autoencoders or the omission of attention layers due to the high computational cost of operating in pixel space. These trade-offs between representational capacity and computational feasibility are not unique to weather forecasting, but are symptomatic of diffusion models more broadly, including in domains such as medical imaging, where pretrained autoencoders are frequently unavailable (Chen et al., 2024; Konz et al., 2024). To overcome these limitations, we propose a method that simplifies the training pipeline and improves generality, en- abling wide applicability without reliance on domain-specific latent autoencoders. 3 METHODOLOGY We formulate precipitation nowcasting as a spatio-temporal forecasting problem on a hybrid frame- work, which consists of three components: a deterministic module Fθ1(·), a diffusion-based stochas- tic module Fθ2(·), and a spatio-temporal module Fθ3(·). The output from Fθ1(·) is passed to Fθ3(·) to extract conditioning features, which guide the denoising process of Fθ2(·). We also introduce Token-wise Attention, a mechanism that enables self-attention at all spatial res- olutions while avoiding the prohibitive pixel-space cost of Vision Transformer (ViT) self-attention (Dosovitskiy et al., 2021) under equivalent resource constraints. In contrast, prior approaches either 3 Preprint use external latent autoencoders that compress inputs before training a U-Net from Fθ2(·) and de- code outputs during inference, or they restrict ViT self-attention to bottleneck resolutions because of its high computational cost, especially the softmax operation on the attention map. In addition, we propose Post-attention in the spatio-temporal encoder Fθ3(·), which emphasizes informative contextual signals to guide the denoising process. 3.1 OVERALL FRAMEWORK Let X0 ∈RH×W ×C×Tin be a 4-dimensional tensor of shape (H, W, C, Tin), representing a sequence of Tin input frames, where H and W denote the spatial resolution and C the number of channels. Similarly, let y ∈RH×W ×C×Tout denote the sequence of Tout future frames. Our objective is to learn a generative model for the conditional distribution p(y | X0). Our approach proceeds in two steps. First, we train a deterministic predictor Fθ1 : RH×W ×C×Tin → RH×W ×C×Tout, to estimate the conditional expectation µ(X0) = E[y | X0]. This estimate provides only a coarse estimate of the conditional distribution, capturing the global motion trend and overall structure but failing to represent uncertainty and often leading to blurry predictions with a loss of fine-scale details. Second, we introduce a spatio-temporal encoder Fθ3(·) that processes both X0 and µ to extract a representation h, which encodes global motion priors, sequence consistency, and inter-frame dependencies. We then model the residual r = y −µ, using a stochastic prediction module Fθ2(·) based on a diffusion model (Section 3.2). Token-wise Attention (Section 3.3) refines the temporal evolution of the residual distribution conditioned on h, while Post-attention mechanism (Section 3.4) sharpens h during denoising, amplifying salient context and suppressing irrelevant detail. At inference, to generate a sample from p(y | X0), we first sample a residual ˆr from the diffusion- based prediction module and then add it to the predicted mean ˆµ, yielding one realization ˆy = ˆµ+ ˆr. Repeating this procedure produces diverse realizations of plausible future sequences. An overview of the proposed framework is shown in Figure 2. 3.2 STOCHASTIC MODELING NETWORK Given a tensor of input frames X0, the deterministic predictor Fθ1(·) estimates the conditional mean µ(X0) by minimizing the MSE loss: L(θ1) = E h ∥Fθ1(X0) −y∥2i . (1) While Fθ1 provides a deterministic estimate of the mean, such forecasts often blur intense echoes and lose fine-scale structure at long lead times. To address this, we incorporate a diffusion-based stochastic module Fθ2(·), which learns a generative model for the conditional distribution p(y | X0) by iteratively denoising toward the data manifold (Ho et al., 2020; Song et al., 2020). We denote the resulting distribution as pθ2(y | X0). In the radar-echo domain, CorrDiff (Mardani et al., 2025) highlights a strong input–target distribu- tion mismatch caused by large forward noise. This mismatch becomes more pronounced at longer horizons when diffusing directly on y, ultimately reducing sample fidelity. To mitigate this, we in- stead model the residual r, which lowers variance and enables learning pθ2(r | X0) more effectively than pθ2(y | X0) (Mardani et al., 2025). Furthermore, we introduce a spatio-temporal encoder Fθ3(·), which takes (X0, µ) as input and pro- duces a global feature representation: h = Fθ3 (cat(X0, µ)) , (2) which provides a compact summary of the temporal dynamics and captures the overarching motion trends and overall structure. In particular, we model the residual sequence in an autoregressive factorization conditioned on the global representation h. The joint distribution over the residuals is expressed as: pθ2 (r1:Tout | h) = Tout Y j=1 pθ2 (rj | rj−1, h) . (3) 4 Preprint Recent work (Ning et al., 2023) has shown that sequence-to-sequence multi-horizon forecasting provides a more effective paradigm than one-step autoregressive prediction for recurrent spatio- temporal modeling. This strategy mitigates error accumulation, improves temporal coherence, and enhances computational efficiency. Motivated by these benefits, we partition the residual sequence r into contiguous segments, defining sm = r[(m−1)Tin:mTin], m = 1, . . . , M where M = l Tout Tin m and each sm ∈RH×W ×C×M. The full sequence is then obtained by concatenation: s = cat(s1, s2, . . . , sM) ∈RH×W ×C×M×Tout. We model the evolution of the segment sequence s in an autoregressive manner using a backward diffusion process: each segment sm is predicted conditioned on its predecessor sm−1 and the global context h. The joint distribution is expressed as: pθ2 (s1:M | h) = M Y m=1 pθ2 (sm | sm−1, h) . (4) During inference, since the ground-truth sm−1 is not available, it is replaced by the estimated ˆsm−1 generated at step (m −1). Diffusion models (Song et al., 2020; Ho et al., 2020) consist of a fixed forward noising process and a learned reverse denoising process. In the forward chain, starting from a clean segment s0 m ∼p(s) (with s0 m ≡sm), Gaussian noise is added according to a variance schedule {(αt, βt)}T t=1, where βt = 1 −αt: q(st m | st−1 m ) = N(st m; √αt st−1 m , βtI). This admits a closed-form expression for sampling at any step t ∈{1, 2, . . . , T }: q(st m | s0 m) = N(st m; √¯αt s0 m, (1 −¯αt)I), with ¯αt = Qt k=1 αk. After T steps, sT m approaches standard Gaussian noise. The reverse process is learned as pθ2(st−1 m | st m, ˆsm−1, h), which iteratively denoises st m toward the data manifold conditioned on the previously predicted segment ˆsm−1 and global context h. For a T -step denoising diffusion, the target distribution is modeled as: pθ2(s0:T m | ˆsm−1, h) = p(sT m) TY t=1 pθ2(st−1 m | st m, ˆsm−1, h). (5) where sT m ∼N(0, I), t indexes the denoising step, and st m denotes the t-th denoising state of the m-th residual segment. In the denoising process, learning to recover the residual state st−1 m from st m is equivalent to esti- mating the noise ϵ injected at the t-th corruption step. Accordingly, the diffusion module Fθ2(·) is trained with the segment-level loss: L(θ2, θ3; sm) = E h Fθ2(st m; ˆsm−1, h, t, m) −ϵ 2i . (6) where θ3 are the parameters for the global representation h. The overall diffusion loss is then obtained by aggregating over all residual segments: L(θ2, θ3) = M X m=1 L(θ2, θ3; sm). (7) Finally, to capture the interaction between the deterministic predictive backbone and the stochastic residual diffusion, we train the entire framework end-to-end with the combined objective: L(θ1, θ2, θ3) = γL(θ2, θ3) + (1 −γ)L(θ1). (8) where γ ∈[0, 1] balances the contributions of the stochastic and deterministic components. Once trained, the diffusion module generates each residual segment sm by iteratively denoising from Gaussian noise, conditioned on the previously predicted segment ˆsm−1. Repeating this procedure for M steps yields a residual sequence ˆs ∈RH×W ×C×M×Tout, with the initial segment ˆs0 initialized to zeros. A single realization of the future sequence is then obtained by adding the sampled residual ˆs to the deterministic mean ˆµ: ˆy = ˆµ + ˆs. Repeating the sampling procedure produces multiple realizations drawn from the learned approximate conditional distribution p(y | h). 5 Preprint 3.3 TOKEN-WISE ATTENTION Recent studies (Shaker et al., 2023) have simplified attention mechanisms by discarding key–value interactions and retaining only query–key interactions to model token dependencies. However, our empirical analysis shows that relying solely on query–key interactions fails to capture the detailed characteristics of radar echoes, as it ignores the contribution of value (V) information. To overcome this limitation, we introduce Token-wise Attention (TWA). Given an input embedding matrix z ∈Rn×d, where n denotes the number of tokens and d the embedding dimension, the self-attention mechanism in ViT (Dosovitskiy et al., 2021) has a compu- tational complexity of O(n2d). In contrast, our Token-wise Attention achieves a reduced complexity of O(nd). For a feature map of spatial size h × w (i.e., n = h × w), this reduction translates from O(h2w2d) to O(hwd). First, the input z is projected into query, key, and value representations via linear transformations: Q = zWQ, K = zWK, V = zWV , where WQ, WK, WV ∈Rd×d. Each matrix can then be expressed as Q = [q1, q2, . . . , qn], K = [k1, k2, . . . , kn], V = [v1, v2, . . . , vn], with qi, ki, vi ∈R1×d. To highlight the token-wise importance within the sequence Q, we introduce a learnable weight vector wα ∈R1×d. This vector interacts with the query matrix Q ∈Rn×d through a scaled dot product, yielding a query score map α ∈R1×n. The entries of α represent attention weights that quantify the relative significance of each query token qi ∈R1×d with respect to the global context defined by wα. These weights are then used to construct a global query representation q ∈R1×d by aggregating information across all tokens. Specifically, we compute a normalized weighted sum of the query tokens via a Softmax function: q = Softmax n X i=1 αiqi ! , α = Q · wα √ d . (9) Unlike ViT self-attention, which applies a Softmax over an n × n similarity matrix, our approach normalizes only along a 1 × n dimension. The resulting global query q aggregates information from all token-level queries, emphasizing components with greater attention relevance as determined by the learned distribution α. Subsequently, the global query q is compared against each key token ki ∈R1×d from the key matrix K ∈Rn×d. This comparison is computed via dot products, yielding the query–key alignment matrix p ∈Rn×d: p = [p1, p2, . . . , pn] = [q · k1, q · k2, . . . , q · kn]. (10) Similar to equation 9, we summarize the global key k ∈R1×d as: k = Softmax n X i=1 βipi ! , β = K · wβ √ d . (11) Finally, the interaction between the global key vector k ∈R1×d and the value matrix V ∈Rn×d is modeled through element-wise multiplication, producing a global context representation. To refine the token representations, we apply two multilayer perceptrons (MLPs): one operating on the nor- malized queries with a residual connection, and the other on the key–value interaction. The resulting output ˆz is expressed as: ˆz = MLPQ(Norm(Q)) + MLPV (V ∗k) (12) where ∗denotes element-wise multiplication. 3.4 SPATIO-TEMPORAL ENCODER Spatio-temporal encoder: In our RainDiff framework, the spatio-temporal encoder Fθ3(·) is built as a cascaded architecture to extract conditioning features at multiple resolutions. Specifically, Fθ3 comprises several resolution-aligned blocks; each block contains a ResNet module Rl for spatial feature extraction and a ConvGRU module Gl for temporal modeling across Tin + Tout frames: hl j = GlRl(xl j), hl j−1  , j = 1, 2, . . . , Tin+Tout. (13) 6 Preprint Here, xl j and hl j−1 denote the j-th input and (j−1)-th hidden state at level l, respectively. When l=0, x0 j is the raw input frame, and we hence can write xj ≡x0 j. Post-attention (PA): Due to the absence of a latent autoencoder, the conditioning produced by the spatio-temporal encoder can have redundant context. A self-attention mechanism is thus needed to suppress irrelevant content and emphasize salient context for conditioning the diffusion model. To do this, prior work often inserts attention inside recurrent modules (Lange et al., 2021; Lin et al., 2020). In our setting, however, the training objective does not directly supervise temporal recur- rence; it is defined by denoising (diffusion) and deterministic reconstruction. In addition, as condi- tioning sequences are encoded one-by-one and gradients propagate to the spatio-temporal encoder through two pathways (via h and via µ), the resulting gradient with respect to each input frame xj decomposes as: ∂L123 ∂xj = γ ∂L23 ∂hT ∂hT ∂xj + " γ ∂L23 ∂hT ∂hT ∂µ − X m,t √¯αt ∂L23 ∂s tm ! + (1 −γ) ∂L1 ∂µ # ∂µ ∂xj (14) ∂hT ∂xj =   T −1 Y i=j ∂hi+1 ∂hi  ∂hj ∂xj , ∂hT ∂µ = T −1 Y i=1 ∂hi+1 ∂hi ! ∂h1 ∂µ , T = Tin + Tout, j ∈{1, . . . , Tin}. (15) where L123, L23, L1 denote L(θ1, θ2, θ3), L(θ2, θ3), L(θ1) respectively. In spatio-temporal en- coders, gradients can suffer from severe attenuation due to repeated multiplication of Jacobians, i.e., through the product QT −1 i=j ∂hi+1/∂hi. Inserting attention within each recurrent step adds an extra per-step contraction and ties the attention update to intermediate gradients that are not directly anchored to the dedicated loss, which worsens attenuation. We therefore apply our Token-wise Attention after the encoder outputs (PA), at multiple resolutions. Post-attention brings two practi- cal advantages: (i) fewer attention calls—attention is applied once per encoded representation (per scale), rather than at every recurrent step, substantially reducing compute versus in-block attention (Lange et al., 2021; Lin et al., 2020); and (ii) stability and efficiency—by avoiding attention inside recurrence, PA reduces gradient attenuation and amplification through long products and improves training stability and throughput. Further experiments in the ablation studies (Section 4.4) support these viewpoints. 4 EXPERIMENTS 4.1 IMPLEMENTATION DETAILS Dataset: We evaluate our framework on four widely used precipitation nowcasting datasets Shang- hai Radar (Chen et al., 2020), SEVIR (Veillette et al., 2020), MeteoNet (Larvor et al., 2020) and CIKM1. We adopt a challenging forecasting setup of predicting 20 future frames from 5 initial frames (5 →20), except for the CIKM dataset, where only the next 10 frames are predicted (5 →10) due to its shorter sequence length constraints. Further dataset details are provided in Appendix B. Training protocol: Our RainDiff model is trained for 300K iterations on a batch size of 4 via an Adam optimizer with a learning rate of 1 × 10−4. Following (Ho et al., 2020), we set the diffusion process to 1000 steps and employ 250 denoising steps during inference using DDIM (Song et al., 2020). We implement SimVP (Gao et al., 2022a) as our deterministic module. In line with (Yu et al., 2024), the combined loss in Equation 8 is balanced with γ = 0.5 between deterministic and denoising components. All experiments are executed on a single NVIDIA A6000 GPU. 4.2 EVALUATION METRICS Forecast accuracy is evaluated using average Critical Success Index (CSI) and Heidke Skill Score (HSS) across multiple reflectivity thresholds (Luo et al., 2022; Gao et al., 2023; Veillette et al., 2020). To assess spatial robustness, we also report multi-scale CSI scores (CSI-4, CSI-16) using pooling kernels of size 4 and 16 (Gao et al., 2022b; 2023). Perceptual quality is quantified by Learned Perceptual Image Patch Similarity (LPIPS) and Structural Similarity Index Measure (SSIM). 1https://tianchi.aliyun.com/dataset/1085 7 Preprint Table 1: Quantitative comparison across four radar nowcasting datasets (Shanghai Radar, MeteoNet, SEVIR, CIKM). We evaluate deterministic baselines (PhyDNet, SimVP, EarthFarseer, AlphaPre) and probabilistic methods (DiffCast) against our RainDiff using CSI, pooled CSI at 4×4 and 16×16 (CSI-4 / CSI-16), HSS, LPIPS, and SSIM. Bold marks our results. Overall, RainDiff attains the best or tied-best performance on most metrics and datasets, indicating both stronger localization and perceptual/structural quality. This design allows capturing rich context and dependency between frames in the radar field while maintaining efficient computation. Method Shanghai Radar MeteoNet ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM PhyDNet 0.3692 0.4066 0.5041 0.5009 0.2505 0.7784 0.1259 0.1450 0.1741 0.1950 0.2837 0.8188 SimVP 0.3965 0.4360 0.5261 0.5290 0.2365 0.7727 0.1300 0.1662 0.2190 0.1927 0.2448 0.8098 EarthFarseer 0.3998 0.4455 0.5405 0.5330 0.2126 0.7214 0.1651 0.2230 0.3567 0.2527 0.2128 0.7548 DiffCast 0.4000 0.4887 0.6063 0.5358 0.1561 0.7898 0.1454 0.2209 0.3382 0.2196 0.1298 0.7923 AlphaPre 0.3934 0.3939 0.4237 0.5203 0.2925 0.7863 0.1532 0.1729 0.1965 0.2284 0.2697 0.7891 RainDiff 0.4448 0.5152 0.6260 0.5822 0.1454 0.7997 0.1618 0.2484 0.3907 0.2430 0.1231 0.8210 Method SEVIR CIKM ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM PhyDNet 0.3648 0.3878 0.4618 0.4400 0.4057 0.5606 0.4487 0.4790 0.5488 0.4906 0.5079 0.4906 SimVP 0.3572 0.3766 0.4229 0.4268 0.4604 0.4898 0.4879 0.5079 0.5817 0.5328 0.5574 0.5272 EarthFarseer 0.3677 0.4120 0.5310 0.4459 0.3124 0.5264 0.4647 0.4819 0.5651 0.5094 0.4960 0.5572 DiffCast 0.3711 0.4417 0.6168 0.4539 0.2137 0.5362 0.4834 0.5175 0.6481 0.5182 0.2900 0.4993 AlphaPre 0.3436 0.3578 0.4010 0.4038 0.4005 0.5452 0.4858 0.5101 0.6064 0.5231 0.4660 0.4852 RainDiff 0.3835 0.4534 0.6193 0.4701 0.2070 0.5500 0.4916 0.5235 0.6536 0.5236 0.2926 0.5110 4.3 EXPERIMENTAL RESULTS For a comprehensive evaluation, we compare our method against both deterministic and probabilis- tic baselines. The deterministic models include the recurrent-free SimVP (Gao et al., 2022a) and AlphaPre (Lin et al., 2025), as well as the autoregressive PhyDNet (Guen & Thome, 2020) and EarthFarseer (Wu et al., 2024). As the state-of-the-art probabilistic approach, we include the Diff- Cast (Yu et al., 2024) model. Quantitative results: Table 1 presents the results of our RainDiff compared to other baselines across four radar datasets. On the Shanghai Radar dataset, RainDiff achieves the highest CSI (0.4448), HSS (0.5822), and SSIM (0.7997), along with the lowest LPIPS (0.1454), significantly outperforming the next-best method, DiffCast, across all metrics. Similarly, on SEVIR dataset, RainDiff achieves the best CSI (0.3835), LPIPS (0.2070), and competitive SSIM (0.5500), offering a better perceptual trade-off than PhyDNet, which has higher SSIM (0.5606) but much worse LPIPS (0.4057). For CIKM dataset, RainDiff leads with the best CSI (0.4916), CSI-4 (0.5235), and CSI-16 (0.6536), demonstrating strong robustness under high-variability conditions. On MeteoNet, RainDiff delivers the best perceptual scores (SSIM: 0.8201, LPIPS: 0.1231) and ranks second in CSI (0.1618), con- firming its strong generalization. In addition, Figure 4 reports frame-wise CSI and HSS. As lead time increases, scores drop across all methods due to accumulating forecast uncertainty, yet our ap- proach consistently outperforms the baselines at most timesteps—often by a larger margin at longer leads—demonstrating superior robustness to temporal expanding. Qualitative results: A comparison in Figure 3 reveals the limitations of existing methods. Deter- ministic models yield blurry outputs, while the stochastic model DiffCast, though sharper, introduces excessive and uncontrolled randomness at air masses’ boundaries—an issue we attribute to its lack of attention mechanisms. This results in an unstable representation of temporal-spatial dependen- cies. Our framework solves this by integrating Token-wise Attention. This component not only enables the generation of realistic, high-fidelity details but also regulates the model’s stochastic be- havior, leading to forecasts with improved structural accuracy and consistency, thereby mitigating the chaotic predictions seen in DiffCast. Further visualizations are given in Appendix E. 4.4 ABLATION STUDY Effect of Individual Components: To evaluate the contribution of each component, we perform ablation experiments with three settings: (i) RainDiff without both Token-wise Attention in the U- 8 Preprint Table 2: Ablation studies of RainDiff on the Shanghai Radar dataset: (a) individual components and (b) attention mechanisms in the spatio-temporal encoder. (a) Ablation: individual components (i–iv). Method ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM (i) 0.4000 0.4887 0.6063 0.5358 0.1561 0.7898 (ii) 0.4370 0.5026 0.6030 0.5737 0.1461 0.7890 (iii) 0.4396 0.5066 0.6142 0.5767 0.1466 0.8125 (iv) 0.4448 0.5152 0.6260 0.5822 0.1454 0.7997 (b) Ablation: attention mechanisms (i–iii) on the spatio-temporal encoder. Method ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM (i) 0.4284 0.4808 0.5600 0.5623 0.1562 0.8217 (ii) 0.4310 0.5060 0.6049 0.5680 0.1502 0.7751 (iii) 0.4448 0.5152 0.6260 0.5822 0.1454 0.7997 -24 Min -18 Min -12 Min -6 Min 0 Min +12 Min +24 Min +36 Min +48 Min  +60 Min +72 Min +84 Min +96 Min +108 Min +120 Min RainDiff (Ours) Diffcast EarthFarseer AlphaPre Ground Truth +120 Min Figure 3: Qualitative comparison with existing works on the Shanghai Radar dataset, where the reflectivity range is on the top right. Figure 4: Frame-wise CSI and HSS for various methods on the Shanghai Radar dataset. Net and Post-attention in the Spatio-temporal Encoder, which corresponds to DiffCast (Yu et al., 2024), (ii) Integrate DiffCast with Adaptive Attention from (Shaker et al., 2023), (iii) RainDiff with Token-wise Attention and without Post-attention, and (iv) our full RainDiff model. As shown in Table 2a, the absence of any component results in a clear degradation of performance, underscoring the critical role of each design choice in strengthening predictive capability. Effect of attention mechanism on Spatio-temporal encoder: As shown in Table 2b, we evaluate the effectiveness of our attention design on spatio-temporal encoder by comparing it with several alternatives proposed in (Lange et al., 2021; Lin et al., 2020), where attention layers are integrated within the recurrent block. We perform ablation experiments with three settings on ConvGRU block l-th: (i) The attention layers are integrated to the input xl j at each frame j-th, (ii) The attention layers are integrated to the output hl j at each frame j-th, and (iii) our RainDiff, where Post-attention is only applied on the final condition hl T of frame T-th, Section 3.4. The results in Table 2b support our contribution discussed in Section 3.4: RainDiff with Post-attention consistently achieves higher efficiency while maintaining performance comparable to other integration methods. 5 CONCLUSION RainDiff is an end-to-end diffusion framework for precipitation nowcasting that applies Token-wise Attention at all spatial scales within a diffusion U-Net, eliminating the need for a latent autoencoder and improving scalability and performance. In addition, we propose Post-attention module miti- gating gradient attenuation when attention meets recurrent conditioning. Across four benchmarks, RainDiff surpasses deterministic and probabilistic baselines in localization, perceptual quality, and long-horizon robustness. Future work will involve physical constraints by using multi-modal inputs and reduce latency by replacing autoregression. 9 Preprint REFERENCES Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 525(7567):47–55, 2015. Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Accurate medium- range global weather forecasting with 3d neural networks. Nature, 619(7970):533–538, 2023. Kang Chen, Tao Han, Junchao Gong, Lei Bai, Fenghua Ling, Jing-Jia Luo, Xi Chen, Leiming Ma, Tianning Zhang, Rui Su, et al. Fengwu: Pushing the skillful global medium-range weather forecast beyond 10 days lead. arXiv preprint arXiv:2304.02948, 2023. Lei Chen, Yuan Cao, Leiming Ma, and Junping Zhang. A deep learning-based methodology for precipitation nowcasting with radar. Earth and Space Science, 7(2):e2019EA000812, 2020. Qi Chen, Xiaoxi Chen, Haorui Song, Zhiwei Xiong, Alan Yuille, Chen Wei, and Zongwei Zhou. Towards generalizable tumor synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11147–11158, 2024. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszko- reit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. Zhangyang Gao, Cheng Tan, Lirong Wu, and Stan Z Li. Simvp: Simpler yet better video prediction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3170–3180, 2022a. Zhihan Gao, Xingjian Shi, Hao Wang, Yi Zhu, Yuyang Bernie Wang, Mu Li, and Dit-Yan Yeung. Earthformer: Exploring space-time transformers for earth system forecasting. Advances in Neural Information Processing Systems, 35:25390–25403, 2022b. Zhihan Gao, Xingjian Shi, Boran Han, Hao Wang, Xiaoyong Jin, Danielle Maddix, Yi Zhu, Mu Li, and Yuyang Wang. Prediff: Precipitation nowcasting with latent diffusion models. arXiv preprint arXiv:2307.10422, 2023. Junchao Gong, Lei Bai, Peng Ye, Wanghan Xu, Na Liu, Jianhua Dai, Xiaokang Yang, and Wanli Ouyang. Cascast: Skillful high-resolution precipitation nowcasting via cascaded modelling. In- ternational Conference on Machine Learning, 2024. Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11474–11484, 2020. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020. Nicholas Konz, Yuwen Chen, Haoyu Dong, and Maciej A Mazurowski. Anatomically-controllable medical image generation with segmentation-guided diffusion models. In International Confer- ence on Medical Image Computing and Computer-Assisted Intervention, pp. 88–98. Springer, 2024. Bernard Lange, Masha Itkina, and Mykel J Kochenderfer. Attention augmented convlstm for en- vironment prediction. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1346–1353. IEEE, 2021. Gwenna¨elle Larvor, L´ea Berthomier, Vincent Chabot, Brice Le Pape, Bruno Pradel, and Lior Perez. Meteonet, an open reference weather dataset, 2020. Kenghong Lin, Baoquan Zhang, Demin Yu, Wenzhi Feng, Shidong Chen, Feifan Gao, Xutao Li, and Yunming Ye. Alphapre: Amplitude-phase disentanglement model for precipitation nowcasting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 17841–17850, 2025. 10 Preprint Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, and Chun Yuan. Self-attention convlstm for spatiotemporal prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 11531–11538, 2020. Chuyao Luo, Guangning Xu, Xutao Li, and Yunming Ye. The reconstitution predictive network for precipitation nowcasting. Neurocomputing, 507:1–15, 2022. Morteza Mardani, Noah Brenowitz, Yair Cohen, Jaideep Pathak, Chieh-Yu Chen, Cheng-Chin Liu, Arash Vahdat, Mohammad Amin Nabian, Tao Ge, Akshay Subramaniam, et al. Residual cor- rective diffusion modeling for km-scale atmospheric downscaling. Communications Earth & Environment, 6(1):124, 2025. Shuliang Ning, Mengcheng Lan, Yanran Li, Chaofeng Chen, Qian Chen, Xunlai Chen, Xiaoguang Han, and Shuguang Cui. Mimo is all you need: A strong multi-in-multi-out baseline for video prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pp. 1975– 1983, 2023. Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skilful precipitation nowcasting using deep generative models of radar. Nature, 597(7878):672–677, 2021. Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Swiftformer: Efficient additive attention for transformer-based real-time mobile vision applications. In Proceedings of the IEEE/CVF international conference on com- puter vision, pp. 17425–17436, 2023. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. Ad- vances in neural information processing systems, 28, 2015. William C Skamarock, Joseph B Klemp, Jimy Dudhia, David O Gill, Dale M Barker, Michael G Duda, Xiang-Yu Huang, Wei Wang, and Jordan G Powers. A description of the advanced research wrf. National Center for Atmospheric Research, Boulder, CO), Version, 3:690, 2008. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. Mark Veillette, Siddharth Samsi, and Chris Mattioli. Sevir: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. Advances in Neural Information Processing Systems, 33:22009–22019, 2020. Hao Wu, Yuxuan Liang, Wei Xiong, Zhengyang Zhou, Wei Huang, Shilong Wang, and Kun Wang. Earthfarsser: Versatile spatio-temporal dynamical systems modeling in one model. In Proceed- ings of the AAAI conference on artificial intelligence, volume 38, pp. 15906–15914, 2024. Demin Yu, Xutao Li, Yunming Ye, Baoquan Zhang, Chuyao Luo, Kuai Dai, Rui Wang, and Xunlai Chen. Diffcast: A unified framework via residual diffusion for precipitation nowcasting. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 27758– 27767, 2024. Yuchen Zhang, Mingsheng Long, Kaiyuan Chen, Lanxiang Xing, Ronghua Jin, Michael I Jordan, and Jianmin Wang. Skilful nowcasting of extreme precipitation with nowcastnet. Nature, 619 (7970):526–532, 2023. 11 Preprint A USAGE OF LLMS All authors declare that the LLMs are used as a general-purpose assist tool for polishing the manuscript, LLMs do not contribute to research ideation. B DATASET Shanghai Radar: The Shanghai Radar dataset (Chen et al., 2020) contains radar scans images from the WSR-88D radar in Pudong, Shanghai, China, collected between October 2015 and July 2018. Each scan covers a 501 × 501 km area at 6-minute intervals. The data are segmented into 25-frame sequences with a stride of 20, and are split into training, validation, and test subsets following (Chen et al., 2020). The first 5 frames (30 minutes) are used to predict the next 20 frames (120 minutes). Radar values are rescaled to [0, 70], and CSI/HSS are computed at thresholds [20, 30, 35, 40]. SEVIR: The SEVIR dataset (Veillette et al., 2020) consists of storm events from 2017 to 2020, collected every 5 minutes over 4-hour windows using GOES-16 and NEXRAD sources. Each event spans a 384×384 km region. We use the vertically integrated liquid (VIL) radar mosaics and extract 25-frame sequences with a stride of 12 following (Yu et al., 2024). The first 5 frames (25 minutes) are used to predict the next 20 frames (100 minutes). The dataset is split into training, validation, and test subsets using cutoff dates: before 2019-01-01 for training, 2019-01-01 to 2019-06-01 for validation, and 2019-06-01 to 2020-12-31 for testing. Radar values are rescaled to [0, 255], and CSI/HSS are evaluated at thresholds [16, 74, 133, 160, 181, 219]. MeteoNet: MeteoNet (Larvor et al., 2020) provides radar and auxiliary meteorological data over two regions in France for the years 2016–2018. We use rain radar scans over north-western France, available at 6-minute intervals. Sequences of 25 frames are extracted using a stride of 12, where the first 5 frames (30 minutes) are used to predict the next 20 frames (120 minutes). The data are partitioned into training, validation, and test sets with cutoff dates of 2016-01-01 to 2017-12-31, 2018-01-01 to 2018-06-01, and 2018-06-01 to 2018-12-31, respectively. Radar values are rescaled to [0, 70], and CSI/HSS are computed at thresholds [12, 18, 24, 32]. CIKM: The CIKM dataset2 from CIKM AnalytiCup 2017 provides 15-frame radar echo sequences sampled every 6 minutes over a 1.5-hour period, covering a 101 × 101 km region in Guangdong, China. Each sample includes reflectivity maps at four altitudes from 0.5 km to 3.5 km; we use data at the 2.5 km level. Different from previous datasets, in CIKM, first 5 frames (30 minutes) are used to predict the next 10 frames (60 minutes). The dataset is split into training, validation, and test subsets using the official partition. Pixel values are rescaled to [0, 70], and CSI/HSS metrics are computed at thresholds [20, 30, 35, 40]. C TOKEN-WISE ATTENTION: TIME–SPACE COMPLEXITY C.1 SELF-ATTENTION (VIT (DOSOVITSKIY ET AL., 2021)) Given input embeddings z ∈Rn×d (with n tokens and width d), we form Q = zWQ, K = zWK, V = zWV , where WQ, WK, WV ∈Rd×d. The scaled dot-product attention is: ˆz = Attention(Q, K, V ) = Softmax QK⊤ √ d  V (16) Projecting to Q, K, V costs 3 O(nd2). Computing logits QK⊤is O(n2d), the softmax is O(n2), and multiplying by V is O(n2d). Thus, the overall time and space complexities are: T(n, d) = O(nd2 + n2d) (17) S(n, d) = O(nd) + O(n2) = O(n2 + nd) (18) For a feature map of size h × w, n = hw and d ≪n, equations. 17–18 simplify to: T(n, d) = O(n2), S(n, d) = O(n2). (19) 2https://tianchi.aliyun.com/dataset/1085 12 Preprint C.2 TOKEN-WISE ATTENTION We analyze the TWA operations through each step (excluding the outer linear projections/MLPs): Equations 9, 11: T(n, d) = O(nd) + O(n) + O(nd) = O(nd) (20) S(n, d) = O(n) + O(d) (21) Equation 10: T(n, d) = O(nd), S(n, d) = O(nd). (22) Equation 12: T(n, d) = O(nd), S(n, d) = O(nd). (23) The TWA scales linearly in the number of tokens: T(n, d) = O(nd) (24) S(n, d) = O(nd + n + d) = O(nd) (25) For a feature map of size h × w, n = hw and d ≪n, equations 24–25 simplify to: T(n, d) = O(n), S(n, d) = O(n). (26) D SPATIO-TEMPORAL ENCODER The gradient flow of L from equation 8 with respect to each input frame xj decomposes into: ∂L123 ∂xj = γ " ∂L23 ∂hT ∂hT ∂xj + ∂hT ∂µ ∂µ ∂xj  + X m,t ∂L23 ∂stm ∂st m ∂xj # + (1 −γ) ∂L1 ∂µ ∂µ ∂xj (27) with ∂st m ∂xj = ∂st m ∂s0m ∂s0 m ∂µ ∂µ ∂xj = −√¯αt ∂µ ∂xj , (28) ⇒ ∂L123 ∂xj = γ ∂Lθ2 ∂hT ∂hT ∂xj + " γ ∂L23 ∂hT ∂hT ∂µ − X m,t √¯αt ∂L23 ∂stm ! + (1 −γ) ∂L1 ∂µ # ∂µ ∂xj , (29) where ∂hT ∂xj =   T −1 Y i=j ∂hi+1 ∂hi   | {z } Jj→T ∂hj ∂xj , ∂hT ∂µ = T −1 Y i=1 ∂hi+1 ∂hi ! | {z } J1→T ∂h1 ∂µ , T = Tin + Tout, j ∈[1, Tin]. (30) where L123, L23, L1 denote L(θ1, θ2, θ3), L(θ2, θ3), L(θ1) respectively. E VISUALIZATION Here are additional qualitative examples across datasets. As shown in Figures 5, 6, 7, 8, all deter- ministic backbones become noticeably blurry by the 60-minute horizon, with high-reflectivity cores and fine-scale details fading. In contrast, RainDiff preserves sharper echoes and yields more accurate precipitation intensity and localization, especially at longer lead timesteps. Compared to DiffCast, the closest baseline, RainDiff produces less stochastic, more coherent precipitation contours while better matching the observed air masses’ shape and position. 13 Preprint DiffCast Ground Truth RainDiff (Ours) PhyDNet EarthFarseer AlphaPre +20 Min +30 Min +50 Min +10 Min +40 Min +60 Min +100 Min -15 Min +70 Min +80 Min +90 Min SimVP -20 Min -10 Min -5 Min 0 Min Figure 5: Prediction examples on the SEVIR dataset, where the color bar on the top right represents the reflectivity range of radar echoes. Ground Truth DiffCast EarthFarseer SimVP PhyDNet AlphaPre -18 Min -12 Min -6 Min 0 Min -24 Min +6 Min +12 Min +18 Min +24 Min +30 Min +36 Min +42 Min +48 Min +54 Min +60 Min RainDiff (Ours) Figure 6: Prediction examples on the CIKM dataset, where the color bar on the top right represents the reflectivity range of radar echoes. 14 Preprint Ground Truth RainDiff (Ours) DiffCast EarthFarseer SimVP PhyDNet AlphaPre +12 Min +24 Min +36 Min +48 Min  +60 Min +72 Min +84 Min +96 Min +108 Min +120 Min -24 Min -18 Min -12 Min -6 Min 0 Min Figure 7: Prediction examples on the Shanghai Radar dataset, where the color bar on the top right represents the reflectivity range of radar echoes. DiffCast Ground Truth Our RainDiff EarthFarseer AlphaPre SimVP PhyDNet -18 Min -12 Min -6 Min 0 Min -24 Min +12 Min +24 Min +36 Min +48 Min  +60 Min +72 Min +84 Min +96 Min +108 Min +120 Min Figure 8: Prediction examples on the MeteoNet dataset, where the color bar on the top right repre- sents the reflectivity range of radar echoes. 15
Preprint RAINDIFF: END-TO-END PRECIPITATION NOWCASTING VIA TOKEN-WISE ATTENTION DIFFUSION Thao Nguyen, Jiaqi Ma, Fahad Khan, Souhaib Ben Taieb, Salman Khan Mohamed Bin Zayed { ABSTRACT Precipitation nowcasting, predicting future radar echo sequences from current observations, is a critical yet challenging task due to the inherently chaotic and tightly coupled spatio-temporal dynamics of the atmosphere. While recent advances in diffusion-based models attempt to capture both large-scale motion and fine-grained stochastic variability, they often suffer from scalability issues: latentspace approaches require a separately trained autoencoder, adding complexity and limiting generalization, while pixel-space approaches are computationally intensive and often omit attention mechanisms, reducing their ability to model longrange spatio-temporal dependencies. To address these limitations, we propose a Token-wise Attention integrated into not only the U-Net diffusion model but also the spatio-temporal encoder that dynamically captures multi-scale spatial interactions and temporal evolution. Unlike prior approaches, our method natively integrates attention into the architecture without incurring the high resource cost typical of pixel-space diffusion, thereby eliminating the need for separate latent modules. Our extensive experiments and visual evaluations across diverse datasets demonstrate that the proposed method significantly outperforms state-of-the-art approaches, yielding superior local fidelity, generalization, and robustness in complex precipitation forecasting scenarios. Our code is released here. 1 INTRODUCTION Predicting when and where rain will fall over the next few minutes to hours, known as precipitation nowcasting, remains one of the most pressing challenges in weather forecasting (Ravuri et al., 2021; Veillette et al., 2020). The goal is to predict a sequence of future radar echoes conditioned on recent observations. Traditional approaches rely on numerical weather prediction (NWP), which explicitly models atmospheric dynamics through partial differential equations (PDEs) (Bauer et al., 2015). While physically grounded, NWP methods are computationally expensive and slow to update, limiting their use for the rapid, iterative forecasts required in nowcasting (Bi et al., 2023). Recent advances in deep learning have enabled data-driven alternatives that bypass explicit PDE solvers. Deterministic nowcasting models have demonstrated a strong ability to capture the largescale advection of precipitation fields, but they typically suffer from oversmoothing effects, especially at longer lead times (extending to several hours), resulting in underestimation of atmospheric intensity and loss of fine-scale spatial detail (Shi et al., 2015; Ravuri et al., 2021). To mitigate this, probabilistic generative approaches leveraging generative adversarial networks (GANs) or diffusion models have been introduced to generate more realistic and accurate radar fields (Zhang et al., 2023; Gao et al., 2023). However, these models often inflate the effective degrees of freedom by treating the entire spatio-temporal field as stochastic, which introduces excessive randomness and reduces the positional accuracy of rainfall structures (Yu et al., 2024). To reconcile these trade-offs, hybrid architectures have emerged that benefit from both deterministic and probabilistic paradigms. These methods decompose weather evolution into (i) a globally coherent deterministic component to capture large-scale dynamics, and (ii) a localized stochastic refinement to model fine-grained variability (Yu et al., 2024; Gong et al., 2024). This factorization has shown promise in simultaneously improving positional fidelity and generative sharpness. 1 16 Oct 2025 Preprint Input RainDiff (Ours) -15 Min -10 Min 0 Min Ground Truth DiffCast -20 Min +25 Min +40 Min +70 Min +10 Min +55 Min +85 Min +100 Min RainDiff DiffCast +100 Min -5 Min Ground Truth Figure 1: A visualization from the SEVIR dataset shows that, at the longest forecast horizon, RainDiff avoids oversmoothed outputs and better preserves weather fronts compared to the state-of-theart DiffCast (Yu et al., 2024), resulting in closer alignment with the ground truth. Despite these advances, key limitations persist for hybrid architectures that restrict their scalability and generalization. For example, CasCast (Gong et al., 2024) relies on a latent-space formulation, requiring an auxiliary autoencoder pre-trained on large datasets. This dependency hampers generalization to new domains where suitable autoencoders may be unavailable. In contrast, DiffCast (Yu et al., 2024) operates directly in pixel space, thereby avoiding latent bottlenecks. However, to remain computationally tractable, it omits self-attention mechanisms (Dosovitskiy et al., 2021) in the highresolution layers. This design choice limits the model's capacity to capture complex long-range and fine-scale spatio-temporal dependencies, as illustrated in Figure 1. To overcome these limitations, we propose Token-wise Attention, integrated across all spatial resolutions in our network. This design enables accurate modeling of fine-scale structures while maintaining computational efficiency. Unlike conventional self-attention (Yu et al., 2024; Gong et al., 2024), our token-wise formulation avoids the quadratic complexity induced by the high dimensionality of radar data. Moreover, all operations are performed directly in pixel space, removing the need for an external latent autoencoder (Gong et al., 2024). Finally, drawing on empirical insights, we introduce Post-attention, which emphasizes the informative conditional context crucial for the denoising process. To summarize, our key contributions are: • We introduce Token-wise Attention, a novel mechanism that enables full-resolution selfattention directly in pixel space while retaining tractable computational cost. • We provide a theoretical analysis exposing the limitations of integrating existing attention mechanisms into recurrent conditioners, and introduce Post-attention, a lightweight drop-in module that extracts critical contextual information to guide denoising while maintaining computational efficiency. • We perform extensive experiments on four benchmark datasets, demonstrating that our approach consistently outperforms state-of-the-art methods in both deterministic and generative settings, achieving superior performance across multiple evaluation metrics. 2 RELATED WORK Recently, deep learning has emerged as a powerful alternative to traditional Numerical Weather Prediction (NWP) (Skamarock et al., 2008), with approaches typically classified as deterministic or probabilistic. Early efforts were predominantly deterministic, emphasizing spatio-temporal modeling to produce point forecasts of future atmospheric conditions. For example, ConvLSTM (Shi et al., 2015) combined convolutional and recurrent layers to capture spatio-temporal dynamics. Later methods sought to enhance accuracy by integrating physical constraints, as in PhyDNet 2 Preprint Figure 2: Overall architecture of our precipitation nowcasting framework RainDiff. Given an input sequence X0, a deterministic predictor Fθ1 outputs a coarse prediction μ. The concatenation of X0 and μ is encoded by a cascaded spatio-temporal encoder Fθ3 to yield conditioning features h, refined by Post-attention. A diffusion-based stochastic module Fθ2 equipped with Token-wise Attention at all resolutions in pixel space predicts residual segments ˆr autoregressively, where the denoising process is conditioned on h and the predicted segments. This design captures rich contextual relationships and inter-frame dependencies in the radar field while keeping computation efficient. (Guen & Thome, 2020), or by incorporating a broader set of meteorological variables for more comprehensive forecasting, as in Pangu (Bi et al., 2023) and Fengwu (Chen et al., 2023). Although these models effectively capture large-scale motion patterns, their predictions tend to become overly smooth and blurry at longer lead times. This degradation arises from compounding errors, reliance on Mean Squared Error (MSE) loss, and the absence of local stochastic modeling-all of which suppress fine-scale detail. Generative models have been proposed to mitigate the blurriness of deterministic forecasts by introducing latent variables that capture the inherent stochasticity of weather patterns. Examples include GAN-based approaches such as DGMR (Ravuri et al., 2021) and more recently, diffusion-based models like PreDiff (Gao et al., 2023). While some generative methods treat the entire system stochastically, a growing line of research explores hybrid strategies. Notably, DiffCast (Yu et al., 2024) and CasCast (Gong et al., 2024) combine deterministic modeling of large-scale motion with probabilistic modeling of fine-scale variability, thereby leveraging the complementary strengths of both paradigms. Although diffusion-based methods have achieved promising results, they often face limitations such as the training overhead of external latent autoencoders or the omission of attention layers due to the high computational cost of operating in pixel space. These trade-offs between representational capacity and computational feasibility are not unique to weather forecasting, but are symptomatic of diffusion models more broadly, including in domains such as medical imaging, where pretrained autoencoders are frequently unavailable (Chen et al., 2024; Konz et al., 2024). To overcome these limitations, we propose a method that simplifies the training pipeline and improves generality, enabling wide applicability without reliance on domain-specific latent autoencoders. 3 METHODOLOGY We formulate precipitation nowcasting as a spatio-temporal forecasting problem on a hybrid framework, which consists of three components: a deterministic module Fθ1(·), a diffusion-based stochastic module Fθ2(·), and a spatio-temporal module Fθ3(·). The output from Fθ1(·) is passed to Fθ3(·) to extract conditioning features, which guide the denoising process of Fθ2(·). We also introduce Token-wise Attention, a mechanism that enables self-attention at all spatial resolutions while avoiding the prohibitive pixel-space cost of Vision Transformer (ViT) self-attention (Dosovitskiy et al., 2021) under equivalent resource constraints. In contrast, prior approaches either 3 Preprint use external latent autoencoders that compress inputs before training a U-Net from Fθ2(·) and decode outputs during inference, or they restrict ViT self-attention to bottleneck resolutions because of its high computational cost, especially the softmax operation on the attention map. In addition, we propose Post-attention in the spatio-temporal encoder Fθ3(·), which emphasizes informative contextual signals to guide the denoising process. 3.1 OVERALL FRAMEWORK Let X0 ∈RH×W ×C×Tin be a 4-dimensional tensor of shape (H, W, C, Tin), representing a sequence of Tin input frames, where H and W denote the spatial resolution and C the number of channels. Similarly, let y ∈RH×W ×C×Tout denote the sequence of Tout future frames. Our objective is to learn a generative model for the conditional distribution p(y | X0). Our approach proceeds in two steps. First, we train a deterministic predictor Fθ1 : RH×W ×C×Tin → RH×W ×C×Tout, to estimate the conditional expectation μ(X0) = E[y | X0]. This estimate provides only a coarse estimate of the conditional distribution, capturing the global motion trend and overall structure but failing to represent uncertainty and often leading to blurry predictions with a loss of fine-scale details. Second, we introduce a spatio-temporal encoder Fθ3(·) that processes both X0 and μ to extract a representation h, which encodes global motion priors, sequence consistency, and inter-frame dependencies. We then model the residual r = y -μ, using a stochastic prediction module Fθ2(·) based on a diffusion model (Section 3.2). Token-wise Attention (Section 3.3) refines the temporal evolution of the residual distribution conditioned on h, while Post-attention mechanism (Section 3.4) sharpens h during denoising, amplifying salient context and suppressing irrelevant detail. At inference, to generate a sample from p(y | X0), we first sample a residual ˆr from the diffusionbased prediction module and then add it to the predicted mean ˆμ, yielding one realization ˆy = ˆμ+ ˆr. Repeating this procedure produces diverse realizations of plausible future sequences. An overview of the proposed framework is shown in Figure 2. 3.2 STOCHASTIC MODELING NETWORK Given a tensor of input frames X0, the deterministic predictor Fθ1(·) estimates the conditional mean μ(X0) by minimizing the MSE loss: L(θ1) = E h ∥Fθ1(X0) -y∥2i . (1) While Fθ1 provides a deterministic estimate of the mean, such forecasts often blur intense echoes and lose fine-scale structure at long lead times. To address this, we incorporate a diffusion-based stochastic module Fθ2(·), which learns a generative model for the conditional distribution p(y | X0) by iteratively denoising toward the data manifold (Ho et al., 2020; Song et al., 2020). We denote the resulting distribution as pθ2(y | X0). In the radar-echo domain, CorrDiff (Mardani et al., 2025) highlights a strong input-target distribution mismatch caused by large forward noise. This mismatch becomes more pronounced at longer horizons when diffusing directly on y, ultimately reducing sample fidelity. To mitigate this, we instead model the residual r, which lowers variance and enables learning pθ2(r | X0) more effectively than pθ2(y | X0) (Mardani et al., 2025). Furthermore, we introduce a spatio-temporal encoder Fθ3(·), which takes (X0, μ) as input and produces a global feature representation: h = Fθ3 (cat(X0, μ)) , (2) which provides a compact summary of the temporal dynamics and captures the overarching motion trends and overall structure. In particular, we model the residual sequence in an autoregressive factorization conditioned on the global representation h. The joint distribution over the residuals is expressed as: pθ2 (r1:Tout | h) = Tout Y j=1 pθ2 (rj | rj-1, h) . (3) 4 Preprint Recent work (Ning et al., 2023) has shown that sequence-to-sequence multi-horizon forecasting provides a more effective paradigm than one-step autoregressive prediction for recurrent spatiotemporal modeling. This strategy mitigates error accumulation, improves temporal coherence, and enhances computational efficiency. Motivated by these benefits, we partition the residual sequence r into contiguous segments, defining sm = r[(m-1)Tin:mTin], m = 1, . . . , M where M = l Tout Tin m and each sm ∈RH×W ×C×M. The full sequence is then obtained by concatenation: s = cat(s1, s2, . . . , sM) ∈RH×W ×C×M×Tout. We model the evolution of the segment sequence s in an autoregressive manner using a backward diffusion process: each segment sm is predicted conditioned on its predecessor sm-1 and the global context h. The joint distribution is expressed as: pθ2 (s1:M | h) = M Y m=1 pθ2 (sm | sm-1, h) . (4) During inference, since the ground-truth sm-1 is not available, it is replaced by the estimated ˆsm-1 generated at step (m -1). Diffusion models (Song et al., 2020; Ho et al., 2020) consist of a fixed forward noising process and a learned reverse denoising process. In the forward chain, starting from a clean segment s0 m ∼p(s) (with s0 m ≡sm), Gaussian noise is added according to a variance schedule {(αt, βt)}T t=1, where βt = 1 -αt: q(st m | st-1 m ) = N(st m; √αt st-1 m , βtI). This admits a closed-form expression for sampling at any step t ∈{1, 2, . . . , T }: q(st m | s0 m) = N(st m; √ ̄αt s0 m, (1 - ̄αt)I), with ̄αt = Qt k=1 αk. After T steps, sT m approaches standard Gaussian noise. The reverse process is learned as pθ2(st-1 m | st m, ˆsm-1, h), which iteratively denoises st m toward the data manifold conditioned on the previously predicted segment ˆsm-1 and global context h. For a T -step denoising diffusion, the target distribution is modeled as: pθ2(s0:T m | ˆsm-1, h) = p(sT m) TY t=1 pθ2(st-1 m | st m, ˆsm-1, h). (5) where sT m ∼N(0, I), t indexes the denoising step, and st m denotes the t-th denoising state of the m-th residual segment. In the denoising process, learning to recover the residual state st-1 m from st m is equivalent to estimating the noise ε injected at the t-th corruption step. Accordingly, the diffusion module Fθ2(·) is trained with the segment-level loss: L(θ2, θ3; sm) = E h Fθ2(st m; ˆsm-1, h, t, m) -ε 2i . (6) where θ3 are the parameters for the global representation h. The overall diffusion loss is then obtained by aggregating over all residual segments: L(θ2, θ3) = M X m=1 L(θ2, θ3; sm). (7) Finally, to capture the interaction between the deterministic predictive backbone and the stochastic residual diffusion, we train the entire framework end-to-end with the combined objective: L(θ1, θ2, θ3) = γL(θ2, θ3) + (1 -γ)L(θ1). (8) where γ ∈[0, 1] balances the contributions of the stochastic and deterministic components. Once trained, the diffusion module generates each residual segment sm by iteratively denoising from Gaussian noise, conditioned on the previously predicted segment ˆsm-1. Repeating this procedure for M steps yields a residual sequence ˆs ∈RH×W ×C×M×Tout, with the initial segment ˆs0 initialized to zeros. A single realization of the future sequence is then obtained by adding the sampled residual ˆs to the deterministic mean ˆμ: ˆy = ˆμ + ˆs. Repeating the sampling procedure produces multiple realizations drawn from the learned approximate conditional distribution p(y | h). 5 Preprint 3.3 TOKEN-WISE ATTENTION Recent studies (Shaker et al., 2023) have simplified attention mechanisms by discarding key-value interactions and retaining only query-key interactions to model token dependencies. However, our empirical analysis shows that relying solely on query-key interactions fails to capture the detailed characteristics of radar echoes, as it ignores the contribution of value (V) information. To overcome this limitation, we introduce Token-wise Attention (TWA). Given an input embedding matrix z ∈Rn×d, where n denotes the number of tokens and d the embedding dimension, the self-attention mechanism in ViT (Dosovitskiy et al., 2021) has a computational complexity of O(n2d). In contrast, our Token-wise Attention achieves a reduced complexity of O(nd). For a feature map of spatial size h × w (i.e., n = h × w), this reduction translates from O(h2w2d) to O(hwd). First, the input z is projected into query, key, and value representations via linear transformations: Q = zWQ, K = zWK, V = zWV , where WQ, WK, WV ∈Rd×d. Each matrix can then be expressed as Q = [q1, q2, . . . , qn], K = [k1, k2, . . . , kn], V = [v1, v2, . . . , vn], with qi, ki, vi ∈R1×d. To highlight the token-wise importance within the sequence Q, we introduce a learnable weight vector wα ∈R1×d. This vector interacts with the query matrix Q ∈Rn×d through a scaled dot product, yielding a query score map α ∈R1×n. The entries of α represent attention weights that quantify the relative significance of each query token qi ∈R1×d with respect to the global context defined by wα. These weights are then used to construct a global query representation q ∈R1×d by aggregating information across all tokens. Specifically, we compute a normalized weighted sum of the query tokens via a Softmax function: q = Softmax n X i=1 αiqi ! , α = Q · wα √ d . (9) Unlike ViT self-attention, which applies a Softmax over an n × n similarity matrix, our approach normalizes only along a 1 × n dimension. The resulting global query q aggregates information from all token-level queries, emphasizing components with greater attention relevance as determined by the learned distribution α. Subsequently, the global query q is compared against each key token ki ∈R1×d from the key matrix K ∈Rn×d. This comparison is computed via dot products, yielding the query-key alignment matrix p ∈Rn×d: p = [p1, p2, . . . , pn] = [q · k1, q · k2, . . . , q · kn]. (10) Similar to equation 9, we summarize the global key k ∈R1×d as: k = Softmax n X i=1 βipi ! , β = K · wβ √ d . (11) Finally, the interaction between the global key vector k ∈R1×d and the value matrix V ∈Rn×d is modeled through element-wise multiplication, producing a global context representation. To refine the token representations, we apply two multilayer perceptrons (MLPs): one operating on the normalized queries with a residual connection, and the other on the key-value interaction. The resulting output ˆz is expressed as: ˆz = MLPQ(Norm(Q)) + MLPV (V ∗k) (12) where ∗denotes element-wise multiplication. 3.4 SPATIO-TEMPORAL ENCODER Spatio-temporal encoder: In our RainDiff framework, the spatio-temporal encoder Fθ3(·) is built as a cascaded architecture to extract conditioning features at multiple resolutions. Specifically, Fθ3 comprises several resolution-aligned blocks; each block contains a ResNet module Rl for spatial feature extraction and a ConvGRU module Gl for temporal modeling across Tin + Tout frames: hl j = Gl Rl(xl j), hl j-1 , j = 1, 2, . . . , Tin+Tout. (13) 6 Preprint Here, xl j and hl j-1 denote the j-th input and (j-1)-th hidden state at level l, respectively. When l=0, x0 j is the raw input frame, and we hence can write xj ≡x0 j. Post-attention (PA): Due to the absence of a latent autoencoder, the conditioning produced by the spatio-temporal encoder can have redundant context. A self-attention mechanism is thus needed to suppress irrelevant content and emphasize salient context for conditioning the diffusion model. To do this, prior work often inserts attention inside recurrent modules (Lange et al., 2021; Lin et al., 2020). In our setting, however, the training objective does not directly supervise temporal recurrence; it is defined by denoising (diffusion) and deterministic reconstruction. In addition, as conditioning sequences are encoded one-by-one and gradients propagate to the spatio-temporal encoder through two pathways (via h and via μ), the resulting gradient with respect to each input frame xj decomposes as: ∂L123 ∂xj = γ ∂L23 ∂hT ∂hT ∂xj + " γ ∂L23 ∂hT ∂hT ∂μ - X m,t √ ̄αt ∂L23 ∂s tm ! + (1 -γ) ∂L1 ∂μ # ∂μ ∂xj (14) ∂hT ∂xj =   T -1 Y i=j ∂hi+1 ∂hi  ∂hj ∂xj , ∂hT ∂μ = T -1 Y i=1 ∂hi+1 ∂hi ! ∂h1 ∂μ , T = Tin + Tout, j ∈{1, . . . , Tin}. (15) where L123, L23, L1 denote L(θ1, θ2, θ3), L(θ2, θ3), L(θ1) respectively. In spatio-temporal encoders, gradients can suffer from severe attenuation due to repeated multiplication of Jacobians, i.e., through the product QT -1 i=j ∂hi+1/∂hi. Inserting attention within each recurrent step adds an extra per-step contraction and ties the attention update to intermediate gradients that are not directly anchored to the dedicated loss, which worsens attenuation. We therefore apply our Token-wise Attention after the encoder outputs (PA), at multiple resolutions. Post-attention brings two practical advantages: (i) fewer attention calls-attention is applied once per encoded representation (per scale), rather than at every recurrent step, substantially reducing compute versus in-block attention (Lange et al., 2021; Lin et al., 2020); and (ii) stability and efficiency-by avoiding attention inside recurrence, PA reduces gradient attenuation and amplification through long products and improves training stability and throughput. Further experiments in the ablation studies (Section 4.4) support these viewpoints. 4 EXPERIMENTS 4.1 IMPLEMENTATION DETAILS Dataset: We evaluate our framework on four widely used precipitation nowcasting datasets Shanghai Radar (Chen et al., 2020), SEVIR (Veillette et al., 2020), MeteoNet (Larvor et al., 2020) and CIKM1. We adopt a challenging forecasting setup of predicting 20 future frames from 5 initial frames (5 →20), except for the CIKM dataset, where only the next 10 frames are predicted (5 →10) due to its shorter sequence length constraints. Further dataset details are provided in Appendix B. Training protocol: Our RainDiff model is trained for 300K iterations on a batch size of 4 via an Adam optimizer with a learning rate of 1 × 10-4. Following (Ho et al., 2020), we set the diffusion process to 1000 steps and employ 250 denoising steps during inference using DDIM (Song et al., 2020). We implement SimVP (Gao et al., 2022a) as our deterministic module. In line with (Yu et al., 2024), the combined loss in Equation 8 is balanced with γ = 0.5 between deterministic and denoising components. All experiments are executed on a single NVIDIA A6000 GPU. 4.2 EVALUATION METRICS Forecast accuracy is evaluated using average Critical Success Index (CSI) and Heidke Skill Score (HSS) across multiple reflectivity thresholds (Luo et al., 2022; Gao et al., 2023; Veillette et al., 2020). To assess spatial robustness, we also report multi-scale CSI scores (CSI-4, CSI-16) using pooling kernels of size 4 and 16 (Gao et al., 2022b; 2023). Perceptual quality is quantified by Learned Perceptual Image Patch Similarity (LPIPS) and Structural Similarity Index Measure (SSIM). 1https://tianchi.aliyun.com/dataset/1085 7 Preprint Table 1: Quantitative comparison across four radar nowcasting datasets (Shanghai Radar, MeteoNet, SEVIR, CIKM). We evaluate deterministic baselines (PhyDNet, SimVP, EarthFarseer, AlphaPre) and probabilistic methods (DiffCast) against our RainDiff using CSI, pooled CSI at 4×4 and 16×16 (CSI-4 / CSI-16), HSS, LPIPS, and SSIM. Bold marks our results. Overall, RainDiff attains the best or tied-best performance on most metrics and datasets, indicating both stronger localization and perceptual/structural quality. This design allows capturing rich context and dependency between frames in the radar field while maintaining efficient computation. Method Shanghai Radar MeteoNet ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM PhyDNet 0.3692 0.4066 0.5041 0.5009 0.2505 0.7784 0.1259 0.1450 0.1741 0.1950 0.2837 0.8188 SimVP 0.3965 0.4360 0.5261 0.5290 0.2365 0.7727 0.1300 0.1662 0.2190 0.1927 0.2448 0.8098 EarthFarseer 0.3998 0.4455 0.5405 0.5330 0.2126 0.7214 0.1651 0.2230 0.3567 0.2527 0.2128 0.7548 DiffCast 0.4000 0.4887 0.6063 0.5358 0.1561 0.7898 0.1454 0.2209 0.3382 0.2196 0.1298 0.7923 AlphaPre 0.3934 0.3939 0.4237 0.5203 0.2925 0.7863 0.1532 0.1729 0.1965 0.2284 0.2697 0.7891 RainDiff 0.4448 0.5152 0.6260 0.5822 0.1454 0.7997 0.1618 0.2484 0.3907 0.2430 0.1231 0.8210 Method SEVIR CIKM ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM PhyDNet 0.3648 0.3878 0.4618 0.4400 0.4057 0.5606 0.4487 0.4790 0.5488 0.4906 0.5079 0.4906 SimVP 0.3572 0.3766 0.4229 0.4268 0.4604 0.4898 0.4879 0.5079 0.5817 0.5328 0.5574 0.5272 EarthFarseer 0.3677 0.4120 0.5310 0.4459 0.3124 0.5264 0.4647 0.4819 0.5651 0.5094 0.4960 0.5572 DiffCast 0.3711 0.4417 0.6168 0.4539 0.2137 0.5362 0.4834 0.5175 0.6481 0.5182 0.2900 0.4993 AlphaPre 0.3436 0.3578 0.4010 0.4038 0.4005 0.5452 0.4858 0.5101 0.6064 0.5231 0.4660 0.4852 RainDiff 0.3835 0.4534 0.6193 0.4701 0.2070 0.5500 0.4916 0.5235 0.6536 0.5236 0.2926 0.5110 4.3 EXPERIMENTAL RESULTS For a comprehensive evaluation, we compare our method against both deterministic and probabilistic baselines. The deterministic models include the recurrent-free SimVP (Gao et al., 2022a) and AlphaPre (Lin et al., 2025), as well as the autoregressive PhyDNet (Guen & Thome, 2020) and EarthFarseer (Wu et al., 2024). As the state-of-the-art probabilistic approach, we include the DiffCast (Yu et al., 2024) model. Quantitative results: Table 1 presents the results of our RainDiff compared to other baselines across four radar datasets. On the Shanghai Radar dataset, RainDiff achieves the highest CSI (0.4448), HSS (0.5822), and SSIM (0.7997), along with the lowest LPIPS (0.1454), significantly outperforming the next-best method, DiffCast, across all metrics. Similarly, on SEVIR dataset, RainDiff achieves the best CSI (0.3835), LPIPS (0.2070), and competitive SSIM (0.5500), offering a better perceptual trade-off than PhyDNet, which has higher SSIM (0.5606) but much worse LPIPS (0.4057). For CIKM dataset, RainDiff leads with the best CSI (0.4916), CSI-4 (0.5235), and CSI-16 (0.6536), demonstrating strong robustness under high-variability conditions. On MeteoNet, RainDiff delivers the best perceptual scores (SSIM: 0.8201, LPIPS: 0.1231) and ranks second in CSI (0.1618), confirming its strong generalization. In addition, Figure 4 reports frame-wise CSI and HSS. As lead time increases, scores drop across all methods due to accumulating forecast uncertainty, yet our approach consistently outperforms the baselines at most timesteps-often by a larger margin at longer leads-demonstrating superior robustness to temporal expanding. Qualitative results: A comparison in Figure 3 reveals the limitations of existing methods. Deterministic models yield blurry outputs, while the stochastic model DiffCast, though sharper, introduces excessive and uncontrolled randomness at air masses' boundaries-an issue we attribute to its lack of attention mechanisms. This results in an unstable representation of temporal-spatial dependencies. Our framework solves this by integrating Token-wise Attention. This component not only enables the generation of realistic, high-fidelity details but also regulates the model's stochastic behavior, leading to forecasts with improved structural accuracy and consistency, thereby mitigating the chaotic predictions seen in DiffCast. Further visualizations are given in Appendix E. 4.4 ABLATION STUDY Effect of Individual Components: To evaluate the contribution of each component, we perform ablation experiments with three settings: (i) RainDiff without both Token-wise Attention in the U8 Preprint Table 2: Ablation studies of RainDiff on the Shanghai Radar dataset: (a) individual components and (b) attention mechanisms in the spatio-temporal encoder. (a) Ablation: individual components (i-iv). Method ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM (i) 0.4000 0.4887 0.6063 0.5358 0.1561 0.7898 (ii) 0.4370 0.5026 0.6030 0.5737 0.1461 0.7890 (iii) 0.4396 0.5066 0.6142 0.5767 0.1466 0.8125 (iv) 0.4448 0.5152 0.6260 0.5822 0.1454 0.7997 (b) Ablation: attention mechanisms (i-iii) on the spatio-temporal encoder. Method ↑CSI ↑CSI-4 ↑CSI-16 ↑HSS ↓LPIPS ↑SSIM (i) 0.4284 0.4808 0.5600 0.5623 0.1562 0.8217 (ii) 0.4310 0.5060 0.6049 0.5680 0.1502 0.7751 (iii) 0.4448 0.5152 0.6260 0.5822 0.1454 0.7997 -24 Min -18 Min -12 Min -6 Min 0 Min +12 Min +24 Min +36 Min +48 Min +60 Min +72 Min +84 Min +96 Min +108 Min +120 Min RainDiff (Ours) Diffcast EarthFarseer AlphaPre Ground Truth +120 Min Figure 3: Qualitative comparison with existing works on the Shanghai Radar dataset, where the reflectivity range is on the top right. Figure 4: Frame-wise CSI and HSS for various methods on the Shanghai Radar dataset. Net and Post-attention in the Spatio-temporal Encoder, which corresponds to DiffCast (Yu et al., 2024), (ii) Integrate DiffCast with Adaptive Attention from (Shaker et al., 2023), (iii) RainDiff with Token-wise Attention and without Post-attention, and (iv) our full RainDiff model. As shown in Table 2a, the absence of any component results in a clear degradation of performance, underscoring the critical role of each design choice in strengthening predictive capability. Effect of attention mechanism on Spatio-temporal encoder: As shown in Table 2b, we evaluate the effectiveness of our attention design on spatio-temporal encoder by comparing it with several alternatives proposed in (Lange et al., 2021; Lin et al., 2020), where attention layers are integrated within the recurrent block. We perform ablation experiments with three settings on ConvGRU block l-th: (i) The attention layers are integrated to the input xl j at each frame j-th, (ii) The attention layers are integrated to the output hl j at each frame j-th, and (iii) our RainDiff, where Post-attention is only applied on the final condition hl T of frame T-th, Section 3.4. The results in Table 2b support our contribution discussed in Section 3.4: RainDiff with Post-attention consistently achieves higher efficiency while maintaining performance comparable to other integration methods. 5 CONCLUSION RainDiff is an end-to-end diffusion framework for precipitation nowcasting that applies Token-wise Attention at all spatial scales within a diffusion U-Net, eliminating the need for a latent autoencoder and improving scalability and performance. In addition, we propose Post-attention module mitigating gradient attenuation when attention meets recurrent conditioning. Across four benchmarks, RainDiff surpasses deterministic and probabilistic baselines in localization, perceptual quality, and long-horizon robustness. Future work will involve physical constraints by using multi-modal inputs and reduce latency by replacing autoregression. 9 Preprint REFERENCES Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 525(7567):47-55, 2015. Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Accurate mediumrange global weather forecasting with 3d neural networks. Nature, 619(7970):533-538, 2023. Kang Chen, Tao Han, Junchao Gong, Lei Bai, Fenghua Ling, Jing-Jia Luo, Xi Chen, Leiming Ma, Tianning Zhang, Rui Su, et al. Fengwu: Pushing the skillful global medium-range weather forecast beyond 10 days lead. arXiv preprint , 2023. Lei Chen, Yuan Cao, Leiming Ma, and Junping Zhang. A deep learning-based methodology for precipitation nowcasting with radar. Earth and Space Science, 7(2):e2019EA000812, 2020. Qi Chen, Xiaoxi Chen, Haorui Song, Zhiwei Xiong, Alan Yuille, Chen Wei, and Zongwei Zhou. Towards generalizable tumor synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11147-11158, 2024. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. Zhangyang Gao, Cheng Tan, Lirong Wu, and Stan Z Li. Simvp: Simpler yet better video prediction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3170-3180, 2022a. Zhihan Gao, Xingjian Shi, Hao Wang, Yi Zhu, Yuyang Bernie Wang, Mu Li, and Dit-Yan Yeung. Earthformer: Exploring space-time transformers for earth system forecasting. Advances in Neural Information Processing Systems, 35:25390-25403, 2022b. Zhihan Gao, Xingjian Shi, Boran Han, Hao Wang, Xiaoyong Jin, Danielle Maddix, Yi Zhu, Mu Li, and Yuyang Wang. Prediff: Precipitation nowcasting with latent diffusion models. arXiv preprint , 2023. Junchao Gong, Lei Bai, Peng Ye, Wanghan Xu, Na Liu, Jianhua Dai, Xiaokang Yang, and Wanli Ouyang. Cascast: Skillful high-resolution precipitation nowcasting via cascaded modelling. International Conference on Machine Learning, 2024. Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11474-11484, 2020. Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840-6851, 2020. Nicholas Konz, Yuwen Chen, Haoyu Dong, and Maciej A Mazurowski. Anatomically-controllable medical image generation with segmentation-guided diffusion models. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 88-98. Springer, 2024. Bernard Lange, Masha Itkina, and Mykel J Kochenderfer. Attention augmented convlstm for environment prediction. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1346-1353. IEEE, 2021. Gwenna ̈elle Larvor, L ́ea Berthomier, Vincent Chabot, Brice Le Pape, Bruno Pradel, and Lior Perez. Meteonet, an open reference weather dataset, 2020. Kenghong Lin, Baoquan Zhang, Demin Yu, Wenzhi Feng, Shidong Chen, Feifan Gao, Xutao Li, and Yunming Ye. Alphapre: Amplitude-phase disentanglement model for precipitation nowcasting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 17841-17850, 2025. 10 Preprint Zhihui Lin, Maomao Li, Zhuobin Zheng, Yangyang Cheng, and Chun Yuan. Self-attention convlstm for spatiotemporal prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 11531-11538, 2020. Chuyao Luo, Guangning Xu, Xutao Li, and Yunming Ye. The reconstitution predictive network for precipitation nowcasting. Neurocomputing, 507:1-15, 2022. Morteza Mardani, Noah Brenowitz, Yair Cohen, Jaideep Pathak, Chieh-Yu Chen, Cheng-Chin Liu, Arash Vahdat, Mohammad Amin Nabian, Tao Ge, Akshay Subramaniam, et al. Residual corrective diffusion modeling for km-scale atmospheric downscaling. Communications Earth & Environment, 6(1):124, 2025. Shuliang Ning, Mengcheng Lan, Yanran Li, Chaofeng Chen, Qian Chen, Xunlai Chen, Xiaoguang Han, and Shuguang Cui. Mimo is all you need: A strong multi-in-multi-out baseline for video prediction. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pp. 19751983, 2023. Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skilful precipitation nowcasting using deep generative models of radar. Nature, 597(7878):672-677, 2021. Abdelrahman Shaker, Muhammad Maaz, Hanoona Rasheed, Salman Khan, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Swiftformer: Efficient additive attention for transformer-based real-time mobile vision applications. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 17425-17436, 2023. Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. Advances in neural information processing systems, 28, 2015. William C Skamarock, Joseph B Klemp, Jimy Dudhia, David O Gill, Dale M Barker, Michael G Duda, Xiang-Yu Huang, Wei Wang, and Jordan G Powers. A description of the advanced research wrf. National Center for Atmospheric Research, Boulder, CO), Version, 3:690, 2008. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint , 2020. Mark Veillette, Siddharth Samsi, and Chris Mattioli. Sevir: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. Advances in Neural Information Processing Systems, 33:22009-22019, 2020. Hao Wu, Yuxuan Liang, Wei Xiong, Zhengyang Zhou, Wei Huang, Shilong Wang, and Kun Wang. Earthfarsser: Versatile spatio-temporal dynamical systems modeling in one model. In Proceedings of the AAAI conference on artificial intelligence, volume 38, pp. 15906-15914, 2024. Demin Yu, Xutao Li, Yunming Ye, Baoquan Zhang, Chuyao Luo, Kuai Dai, Rui Wang, and Xunlai Chen. Diffcast: A unified framework via residual diffusion for precipitation nowcasting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2775827767, 2024. Yuchen Zhang, Mingsheng Long, Kaiyuan Chen, Lanxiang Xing, Ronghua Jin, Michael I Jordan, and Jianmin Wang. Skilful nowcasting of extreme precipitation with nowcastnet. Nature, 619 (7970):526-532, 2023. 11 Preprint A USAGE OF LLMS All authors declare that the LLMs are used as a general-purpose assist tool for polishing the manuscript, LLMs do not contribute to research ideation. B DATASET Shanghai Radar: The Shanghai Radar dataset (Chen et al., 2020) contains radar scans images from the WSR-88D radar in Pudong, Shanghai, China, collected between October 2015 and July 2018. Each scan covers a 501 × 501 km area at 6-minute intervals. The data are segmented into 25-frame sequences with a stride of 20, and are split into training, validation, and test subsets following (Chen et al., 2020). The first 5 frames (30 minutes) are used to predict the next 20 frames (120 minutes). Radar values are rescaled to [0, 70], and CSI/HSS are computed at thresholds [20, 30, 35, 40]. SEVIR: The SEVIR dataset (Veillette et al., 2020) consists of storm events from 2017 to 2020, collected every 5 minutes over 4-hour windows using GOES-16 and NEXRAD sources. Each event spans a 384×384 km region. We use the vertically integrated liquid (VIL) radar mosaics and extract 25-frame sequences with a stride of 12 following (Yu et al., 2024). The first 5 frames (25 minutes) are used to predict the next 20 frames (100 minutes). The dataset is split into training, validation, and test subsets using cutoff dates: before 2019-01-01 for training, 2019-01-01 to 2019-06-01 for validation, and 2019-06-01 to 2020-12-31 for testing. Radar values are rescaled to [0, 255], and CSI/HSS are evaluated at thresholds [16, 74, 133, 160, 181, 219]. MeteoNet: MeteoNet (Larvor et al., 2020) provides radar and auxiliary meteorological data over two regions in France for the years 2016-2018. We use rain radar scans over north-western France, available at 6-minute intervals. Sequences of 25 frames are extracted using a stride of 12, where the first 5 frames (30 minutes) are used to predict the next 20 frames (120 minutes). The data are partitioned into training, validation, and test sets with cutoff dates of 2016-01-01 to 2017-12-31, 2018-01-01 to 2018-06-01, and 2018-06-01 to 2018-12-31, respectively. Radar values are rescaled to [0, 70], and CSI/HSS are computed at thresholds [12, 18, 24, 32]. CIKM: The CIKM dataset2 from CIKM AnalytiCup 2017 provides 15-frame radar echo sequences sampled every 6 minutes over a 1.5-hour period, covering a 101 × 101 km region in Guangdong, China. Each sample includes reflectivity maps at four altitudes from 0.5 km to 3.5 km; we use data at the 2.5 km level. Different from previous datasets, in CIKM, first 5 frames (30 minutes) are used to predict the next 10 frames (60 minutes). The dataset is split into training, validation, and test subsets using the official partition. Pixel values are rescaled to [0, 70], and CSI/HSS metrics are computed at thresholds [20, 30, 35, 40]. C TOKEN-WISE ATTENTION: TIME-SPACE COMPLEXITY C.1 SELF-ATTENTION (VIT (DOSOVITSKIY ET AL., 2021)) Given input embeddings z ∈Rn×d (with n tokens and width d), we form Q = zWQ, K = zWK, V = zWV , where WQ, WK, WV ∈Rd×d. The scaled dot-product attention is: ˆz = Attention(Q, K, V ) = Softmax QK⊤ √ d V (16) Projecting to Q, K, V costs 3 O(nd2). Computing logits QK⊤is O(n2d), the softmax is O(n2), and multiplying by V is O(n2d). Thus, the overall time and space complexities are: T(n, d) = O(nd2 + n2d) (17) S(n, d) = O(nd) + O(n2) = O(n2 + nd) (18) For a feature map of size h × w, n = hw and d ≪n, equations. 17-18 simplify to: T(n, d) = O(n2), S(n, d) = O(n2). (19) 2https://tianchi.aliyun.com/dataset/1085 12 Preprint C.2 TOKEN-WISE ATTENTION We analyze the TWA operations through each step (excluding the outer linear projections/MLPs): Equations 9, 11: T(n, d) = O(nd) + O(n) + O(nd) = O(nd) (20) S(n, d) = O(n) + O(d) (21) Equation 10: T(n, d) = O(nd), S(n, d) = O(nd). (22) Equation 12: T(n, d) = O(nd), S(n, d) = O(nd). (23) The TWA scales linearly in the number of tokens: T(n, d) = O(nd) (24) S(n, d) = O(nd + n + d) = O(nd) (25) For a feature map of size h × w, n = hw and d ≪n, equations 24-25 simplify to: T(n, d) = O(n), S(n, d) = O(n). (26) D SPATIO-TEMPORAL ENCODER The gradient flow of L from equation 8 with respect to each input frame xj decomposes into: ∂L123 ∂xj = γ " ∂L23 ∂hT ∂hT ∂xj + ∂hT ∂μ ∂μ ∂xj + X m,t ∂L23 ∂stm ∂st m ∂xj # + (1 -γ) ∂L1 ∂μ ∂μ ∂xj (27) with ∂st m ∂xj = ∂st m ∂s0m ∂s0 m ∂μ ∂μ ∂xj = -√ ̄αt ∂μ ∂xj , (28) ⇒ ∂L123 ∂xj = γ ∂Lθ2 ∂hT ∂hT ∂xj + " γ ∂L23 ∂hT ∂hT ∂μ - X m,t √ ̄αt ∂L23 ∂stm ! + (1 -γ) ∂L1 ∂μ # ∂μ ∂xj , (29) where ∂hT ∂xj =   T -1 Y i=j ∂hi+1 ∂hi   | {z } Jj→T ∂hj ∂xj , ∂hT ∂μ = T -1 Y i=1 ∂hi+1 ∂hi ! | {z } J1→T ∂h1 ∂μ , T = Tin + Tout, j ∈[1, Tin]. (30) where L123, L23, L1 denote L(θ1, θ2, θ3), L(θ2, θ3), L(θ1) respectively. E VISUALIZATION Here are additional qualitative examples across datasets. As shown in Figures 5, 6, 7, 8, all deterministic backbones become noticeably blurry by the 60-minute horizon, with high-reflectivity cores and fine-scale details fading. In contrast, RainDiff preserves sharper echoes and yields more accurate precipitation intensity and localization, especially at longer lead timesteps. Compared to DiffCast, the closest baseline, RainDiff produces less stochastic, more coherent precipitation contours while better matching the observed air masses' shape and position. 13 Preprint DiffCast Ground Truth RainDiff (Ours) PhyDNet EarthFarseer AlphaPre +20 Min +30 Min +50 Min +10 Min +40 Min +60 Min +100 Min -15 Min +70 Min +80 Min +90 Min SimVP -20 Min -10 Min -5 Min 0 Min Figure 5: Prediction examples on the SEVIR dataset, where the color bar on the top right represents the reflectivity range of radar echoes. Ground Truth DiffCast EarthFarseer SimVP PhyDNet AlphaPre -18 Min -12 Min -6 Min 0 Min -24 Min +6 Min +12 Min +18 Min +24 Min +30 Min +36 Min +42 Min +48 Min +54 Min +60 Min RainDiff (Ours) Figure 6: Prediction examples on the CIKM dataset, where the color bar on the top right represents the reflectivity range of radar echoes. 14 Preprint Ground Truth RainDiff (Ours) DiffCast EarthFarseer SimVP PhyDNet AlphaPre +12 Min +24 Min +36 Min +48 Min +60 Min +72 Min +84 Min +96 Min +108 Min +120 Min -24 Min -18 Min -12 Min -6 Min 0 Min Figure 7: Prediction examples on the Shanghai Radar dataset, where the color bar on the top right represents the reflectivity range of radar echoes. DiffCast Ground Truth Our RainDiff EarthFarseer AlphaPre SimVP PhyDNet -18 Min -12 Min -6 Min 0 Min -24 Min +12 Min +24 Min +36 Min +48 Min +60 Min +72 Min +84 Min +96 Min +108 Min +120 Min Figure 8: Prediction examples on the MeteoNet dataset, where the color bar on the top right represents the reflectivity range of radar echoes. 15
2510.14958
MathCanvas: Intrinsic Visual Chain-of-Thought for Multimodal Mathematical Reasoning Weikang Shi1* Aldrich Yu1* Rongyao Fang1*† Houxing Ren1 Ke Wang1 Aojun Zhou1 Changyao Tian1 Xinyu Fu2 Yuxuan Hu1 Zimu Lu1 Linjiang Huang3 Si Liu3 Rui Liu2‡ Hongsheng Li1‡ 1Multimedia Laboratory (MMLab), The Chinese University of Hong Kong, 2Huawei Research, 3BUAA wkshi@link.cuhk.edu.hk hsli@ee.cuhk.edu.hk Question: In the semicircle with diameter AB, let O be the midpoint of AB. Find the area of the shaded region given AB = 2, arc AD = arc DE = arc EB. BAGEL MathCanvas Solution: To find the area of the shaded region, we need to calculate the area of the sector formed by the arc DE and subtract the area of △DOE. Since D and E divide the semicircular arc into three equal arcs, each arc is 60°. The radius of O is 1. … … Therefore, the area of the shaded region is π/6 - √3/4. BAGEL-Zebra-CoT Nano-Banana Ours Solution: First, let's analyze the properties of the semicircle and the points dividing the arc. △AOE is a 30-60-90 triangle. The angle at O is 60°. … … The final Answer is (4π - 6√3 + 9) / 2. Solution: This image shows how the semicircular arc is divided into three equal 60° arcs. It clearly marks the 60° angles corresponding to O. … … The area of the shaded region is calculated as √3/4. Solution: Let the radius be r = AB/2 = 1. Connect OD and OE. ∠DOE = 60°, △DOE is equilateral, so ∠ODE = 60°. Thus DE//AB, S△DOE=S△DAE. Thus, the shaded region equals to sector ODE. Then the shaded area is calculated as 60°/360° × πr2 = π/6. Figure 1: MathCanvas demonstrates the first successful application of intrinsic Visual Chain-of-Thought (VCoT) for complex mathematical reasoning. Prior attempts fail by generating incorrect (BAGEL-Zebra-CoT) or strategically poor (Nano-Banana) visuals, leading to wrong solutions. In contrast, MathCanvas correctly generates an intermediate visual step that unlocks a simpler, elegant solution path. Abstract While Large Language Models (LLMs) have excelled in textual reasoning, they struggle with mathematical domains like geometry that intrin- sically rely on visual aids. Existing approaches to Visual Chain-of-Thought (VCoT) are often limited by rigid external tools or fail to gen- erate the high-fidelity, strategically-timed dia- grams necessary for complex problem-solving. To bridge this gap, we introduce MathCan- vas, a comprehensive framework designed to endow unified Large Multimodal Models (LMMs) with intrinsic VCoT capabilities for mathematics. Our approach consists of two phases. First, a Visual Manipulation stage pre- trains the model on a novel 15.2M-pair cor- pus, comprising 10M caption-to-diagram pairs (MathCanvas-Imagen) and 5.2M step-by-step editing trajectories (MathCanvas-Edit), to mas- ter diagram generation and editing. Second, a Strategic Visual-Aided Reasoning stage fine- tunes the model on MathCanvas-Instruct, a new 219K-example dataset of interleaved visual- textual reasoning paths, teaching it when and how to leverage visual aids. To facilitate rig- orous evaluation, we introduce MathCanvas- *Equal Contribution †Project lead ‡Corresponding author Bench, a challenging benchmark with 3K prob- lems that require models to produce inter- leaved visual-textual solutions. Our model, BAGEL-Canvas, trained under this framework, achieves an 86% relative improvement over strong LMM baselines on MathCanvas-Bench, demonstrating excellent generalization to other public math benchmarks. Our work provides a complete toolkit—framework, datasets, and benchmark—to unlock complex, human-like visual-aided reasoning in LMMs. Project Page: https://mathcanvas.github.io/ 1 Introduction Mathematical reasoning represents a pinnacle of human intelligence, demanding a sophisticated in- terplay of logical deduction, symbolic manipula- tion, and abstract thinking. The advent of Large Language Models (LLMs) (DeepSeek-AI et al., 2025; Yang et al., 2024; OpenAI et al., 2024b) has marked a significant milestone in artificial intel- ligence, demonstrating remarkable capabilities in tackling complex mathematical reasoning tasks. A key driver of recent progress in LLM-based rea- soning has been the Chain-of-Thought (CoT) (Wei et al., 2023) technique, which enables models to externalize intermediate steps and significantly im- proves performance on mathematical tasks. 1 arXiv:2510.14958v1 [cs.CV] 16 Oct 2025 However, the purely textual nature of CoT presents a fundamental limitation in domains like geometry and function analysis, where human problem-solving intrinsically involves constructing and manipulating visual aids, and even state-of-the- art models struggle in its absence (see Figure 12 in Appendix D). This gap has motivated the develop- ment of Visual Chain-of-Thought (VCoT), which aims to integrate visual information into the rea- soning process. Early approaches to VCoT have predominantly relied on external specialized tools, such as dedicated vision models (Shao et al., 2024a; Hu et al., 2024; Gao et al., 2025b) or code inter- preters (Hu et al., 2024; Wang et al., 2025c,d). While effective in specific contexts, these tool- based methods are often rigid, constrained to a predefined set of operations, and dependent on specific input formats (e.g., source code), which hinders their flexibility and broader applicability. Recent work has explored intrinsic VCoT, where unified large multimodal models (LMMs) natively generate visual thoughts as an integral part of their reasoning process (Cheng et al., 2025; Li et al., 2025b,a; Chern et al., 2025). Though promising, these previous attempts have been confined to simple domains and have yet to succeed in mathematics due to two key challenges. First, current unified LMMs lack the capability to generate and iteratively edit the high-fidelity math- ematical diagrams required for precise reasoning. The generated visuals are often geometrically in- correct, rendering them useless for logical deduc- tion, as shown with BAGEL-Zebra-CoT (Li et al., 2025a) in Figure 1. Second, and more fundamen- tally, models lack the procedural knowledge to em- ploy visual aids as a strategic component of their reasoning process—the complex decision of deter- mining when to draw, what to draw, and how to leverage the visualization for subsequent logical deduction. This strategic failure is evident even in advanced models like Nano-Banana (Comanici et al., 2025), shown in Figure 1, whose generated visual acts more as a flawed decoration than an in- tegral reasoning step, ultimately failing to uncover the key insight needed for the solution. To this end, we argue that addressing these chal- lenges requires models capable of interleaving tex- tual deduction with the creation and modification of visual aids. Accordingly, we introduce Math- Canvas, a comprehensive framework designed to endow unified LMMs with intrinsic VCoT capa- bilities for complex mathematical problem-solving. Our approach is structured around two comple- mentary phases: Visual Manipulation and Strategic Visual-Aided Reasoning. The first phase, Visual Manipulation, focuses on equipping the model with foundational visual syn- thesis and editing skills. To achieve this, we con- struct a new million-scale pretraining corpus specif- ically for mathematical diagrams. This resource comprises two parts: MathCanvas-Edit, contain- ing 5.2M step-by-step diagram editing instruction pairs generated via a hybrid pipeline that combines LLM-driven mining with programmatic synthesis, and MathCanvas-Imagen, with 10M caption-to- diagram pairs. Pretraining on them imparts the robust diagram generation and manipulation abili- ties that form the bedrock of our approach. The second phase, Strategic Visual-Aided Rea- soning, aims to teach the model how to interleave diagrammatic actions with its textual reasoning steps. For this purpose, we curate MathCanvas- Instruct, the first large-scale dataset for interleaved visual–textual mathematical reasoning. It con- tains 219K training examples, where each solution is represented as an interleaved sequence of tex- tual reasoning and corresponding visual steps. As demonstrated in Figure 1, training on MathCanvas- Instruct enables the model to learn how to coordi- nate diagrammatic actions with reasoning trajecto- ries to successfully solve complex problems. Furthermore, to rigorously evaluate models’ ca- pabilities in visual–textual mathematical reason- ing, we introduce a dedicated benchmark test set MathCanvas-Bench comprising 3K carefully cu- rated problems. Each test instance requires the solver to produce coherent interleaved reasoning and visual outputs. We benchmarked 20 leading LMMs on this dataset, revealing substantial per- formance gaps and establishing it as a challenging and comprehensive testbed for future research on Visual Chain-of-Thought reasoning. In summary, our contributions are as follows: • We propose MathCanvas, a comprehensive framework that enables LMMs to perform in- trinsic VCoT reasoning for complex mathemat- ical problem solving. • We construct two large-scale corpora tailored for our two-phase approach: a 15.2M-pair pre- training dataset for Visual Manipulation, and a 219K-example fine-tuning dataset for Strategic Visual-Aided Reasoning. • We further introduce a dedicated MathCanvas- Bench test set with 3K problems and benchmark 20 leading LMMs on it, revealing substantial deficiencies and establishing a challenging eval- uation bed for future research. 2 Construct point D as the circumcenter of A, B, C. Construct E such that DA = DE. Construct … LLM … Beam Search Quality Filter Self- Bootstrapping Symbolic Code … Geometric Primitives … Geometric Relations 1. angle bisector 2. reflect 3. circumcenter 4. incenter 5. inscribed circle … … Sample … Sample (a) Competition-Level Mining (b) Foundational Structure Generation Construct quadrilateral … Connect A and E, inter- … Construct point E as the … Construct point H … Construct point I … Construct parallel … … MathCanvas-Edit MathCanvas-Imagen Latex Code: documentclass[tikz,borde r=3.14mm]{standalone} draw[thick,->] (-2,0) … Geometric Problems ImgCode- 8.6M Quality Filter Caption: The diagram shows an ellipse centered at the origin O, with its … Deduplicate 4M diagram- caption pairs 5.4M diagram- caption pairs 612K diagram- caption pairs from public datasets Figure 2: The curation pipeline for the MathCanvas-Edit and MathCanvas-Imagen dataset. • Experiments show that our model trained under the MathCanvas framework achieves a 86% rel- ative improvement over strong LMM baselines on MathCanvas-Bench, demonstrating the effec- tiveness of our approach in unlocking intrinsic VCoT capabilities. 2 Related Work Mathematical Reasoning with Large Multi- modal Models. The remarkable success of text- only LLMs in mathematical reasoning, often driven by sophisticated chain-of-thought prompting (Wei et al., 2023; Yang et al., 2024; Yue et al., 2023; Wang et al., 2023; Shao et al., 2024b), has natu- rally spurred interest in extending these capabilities to the multimodal domain. Initial efforts in this area have largely involved adapting LMMs by en- hancing vision-text alignment on domain-specific data and then fine-tuning on mathematical question- answer pairs (Gao et al., 2025a; Wang et al., 2025a; Zhuang et al., 2024; Zhang et al., 2024b; Guo et al., 2025). While subsequent work has advanced the state of the art with techniques like reinforcement learning (Yang et al., 2025; Wang et al., 2025b; Duan et al., 2025; Wei et al., 2025), these models remain fundamentally text-centric. While they ef- fectively interpret visual information in the input, they largely neglect vision as an active, generative component of the reasoning process itself. Visual Chain-of-Thought. Unlike various tex- tual chain-of-thought (Wei et al., 2023; Fang et al., 2025a,b), visual chain-of-thought aims to bridge this gap by integrating the generation of visual aids directly into the reasoning process. Existing ap- proaches follow two main lines. The first leverages external tools, such as vision models to extract im- age details (Shao et al., 2024a; Chen et al., 2025; Hu et al., 2024; OpenAI, 2025b; Gao et al., 2025b) or code interpreters to add auxiliary structures (Hu et al., 2024; Wang et al., 2025c,d). This approach, however, is constrained, as these tools are either non-generative or lack general applicability due to rigidity. The second line explores intrinsic VCoT, where models natively generate visual thoughts as an integral part of their reasoning (Cheng et al., 2025; Li et al., 2025b,c; Chern et al., 2025; Li et al., 2025a). Despite its promise, this approach has so far been demonstrated primarily in simpler do- mains like spatial games and struggles to produce the precise, logically consistent diagrams required for complex mathematical reasoning. Datasets and Benchmarks for Multimodal Math- ematical Reasoning. The progress in visual- mathematical reasoning is largely driven by the evolution of its benchmarks. While foundational datasets like Geometry3K (Lu et al., 2021) and Sci- enceQA (Lu et al., 2022) established the task, re- cent challenging benchmarks such as MMMU (Yue et al., 2024), MathVista (Lu et al., 2024), Mathvi- sion (Wang et al., 2024), and MathVerse (Zhang et al., 2024a), among others (Qiao et al., 2024; Wang et al., 2025e; Sun et al., 2024), have pushed the limits of LMMs’ visual reasoning. However, a fundamental limitation persists: these benchmarks consist of static problem-solution pairs and lack the step-by-step visual demonstrations required to train models for dynamic, process-oriented reasoning. This is precisely the gap our work addresses with the introduction of MathCanvas-Instruct and the MathCanvas-Bench benchmark. 3 3 Method In this section, we detail the methodology behind MathCanvas. We first describe the construction of our large-scale training corpora for visual manipu- lation and strategic reasoning (3.1). We then intro- duce MathCanvas-Bench, a dedicated benchmark for rigorous evaluation (3.2). Finally, we present our two-stage training recipe that leverages these resources to instill intrinsic VCoT capabilities in a unified LMM (3.3). 3.1 Training Corpora Construction 3.1.1 Million-scale Pretraining Corpus To endow unified LMMs with the foundational vi- sual synthesis and editing capabilities required for mathematical reasoning, we construct a compre- hensive million-scale pretraining corpus compris- ing two complementary components: MathCanvas- Edit for diagram editing and MathCanvas-Imagen for diagram generation. The overall construction pipeline is shown in Figure 2. MathCanvas-Edit is designed to teach models how to iteratively modify mathematical diagrams through step-by-step transformations. We construct this dataset through a hybrid pipeline that com- bines complex competition-level geometry prob- lems with systematically generated simple geomet- ric figures, yielding a total of 5.2M edit trajectories. Competition-Level Mining. We start with 128 geometry problems from mathematical competi- tions to serve as realistic seed configurations. Us- ing these seeds, we employ the AlphaGeometry LLM (Trinh et al., 2024) with beam search to gen- erate numerous auxiliary line drawing methods for each problem. We then filter for geometrically in- valid constructions and render the corresponding diagram sequences, where each step is an edit op- eration (e.g., adding an auxiliary line, marking an angle). This iterative process yields 4.2M edit tra- jectories capturing the complexity of competition- level reasoning. To ensure visual diversity from this limited set of seeds, the rendering of each trajec- tory is controlled by a unique random seed, varying visual attributes like orientation and line styles. Foundational Structure Generation. While com- petition problems provide realism, they tend toward complexity that may not adequately cover funda- mental editing operations. To address this, we con- struct a complementary set of simple geometric figures using AlphaGeometry’s formal language. We first define a basic geometric primitive set (e.g., points, lines, circles) and a geometric relation set (e.g., circumcenter, incenter, parallel), the full de- tails of which are provided in Appendix B.1. Then we develop an automated algorithm that randomly and incrementally adds geometric primitives and relations to these basic structures, creating progres- sively more complex diagrams. Invalid or degen- erate configurations are filtered out through geo- metric constraint checking. By leveraging differ- ent random seeds during rendering, we obtain 1M additional edit trajectories that provide systematic coverage of fundamental geometric operations after three iterations of this synthetic generation process. MathCanvas-Imagen focuses on teaching mod- els to generate mathematical diagrams from textual descriptions. We construct it by aggregating and processing data from three complementary sources, resulting in 10M caption-to-diagram pairs. Re-purposing from MathCanvas-Edit. We first leverage the edit trajectories in MathCanvas-Edit, extracting caption-to-diagram pairs from each edit- ing step. After deduplication based on visual and textual similarity, we obtain 5.4M diverse caption- to-diagram pairs that inherently align with the types of diagrams needed for mathematical reasoning. Augmenting with Code-derived Captions. To further scale our dataset, we utilize the ImgCode- 8.6M (Wang et al., 2025a) dataset, which con- tains programmatically generated mathematical di- agrams paired with source code. We first apply quality filtering to remove corrupted or low-quality images. We then employ GPT-4.1-mini to generate natural language captions by taking image-code pairs as input, producing descriptions that capture both the visual content and mathematical seman- tics of each diagram. This process yields 4M high- quality caption-to-diagram pairs with rich, descrip- tive captions with diverse mathematical diagrams. Incorporating Public Datasets. Finally, we incor- porate 612K caption-to-diagram pairs from existing public resources, including MAVIS (Zhang et al., 2024b) and TR-CoT (Deng et al., 2025b), which provide additional diversity in caption styles and diagram types, complementing our dataset. Through this comprehensive construction pro- cess, the pretraining corpus provides a robust foun- dation for pretraining models on both diagram gen- eration and editing, establishing the essential visual capabilities needed for intrinsic VCoT in mathe- matical reasoning. 3.1.2 MathCanvas-Instruct To equip models with the ability to strategically interleave visual synthesis and editing actions 4 Nums. Image number 0 1 > 1 1167 1887 2679 25 400 Question Solution Analytic Geometry Algebra Solid Geometry Transformational Geometry Plane Geometry Calculus & Vector Statistics Trigonometry Knowledge distribution Question Solution Frequency Text length Figure 3: Statistical analysis of the MathCanvas-Bench test set. Left: Knowledge types distribution. Middle: Distribution of questions and solutions containing varying numbers of images. Right: Text length distribution of questions and solutions (measured in text tokens). with their textual reasoning process, we introduce MathCanvas-Instruct, the first large-scale dataset specifically designed for interleaved visual-textual mathematical reasoning. Dataset Construction We begin by gathering 632K multimodal mathematics problems and so- lutions from a wide array of middle school and high school textbooks, exams, and websites. From this initial pool, we implement a rigorous multi- stage filtering pipeline to ensure data quality and relevance. First, we employ GPT-5 to analyze the problems, filtering out examples where the pro- vided images served no role in the reasoning pro- cess. This step also standardized all mathematical formulas into LaTeX format, resulting in a refined set of 367K problems. A second round of filter- ing, also powered by GPT-5, removes problems that contained errors, lacked explicit answers, fea- tured low-quality or unclear images, or consisted solely of drawing tasks. This left us with 303K high-quality problems. To ensure the novelty and diversity of the dataset, we then perform both text and image deduplica- tion, which yielded 222K unique problem-solution pairs. The images in the remaining dataset under- went a quality enhancement step using a super- resolution model, SwinIR (Liang et al., 2021), to improve clarity and detail before being resized to a uniform 512x512 resolution. Finally, GPT-4.1 is used to classify all problems into a hierarchical taxonomy of 8 major categories and fine-grained subcategories. This collection is then partitioned to form our evaluation benchmark, MathCanvas- Bench, with the remaining 219K examples consti- tuting the MathCanvas-Instruct training set. Further statistics and examples for MathCanvas-Instruct are presented in Appendix B.2. 3.2 The MathCanvas-Bench Evaluation Benchmark Benchmark Construction We construct MathCanvas-Bench by sampling 3K problems from the 222K-pair collection described in Sec- tion 3.1.2. The construction process involves three key steps. First, we exclude all multiple-choice questions to ensure that evaluation relies on generative reasoning rather than random guessing. Second, to create a balanced test set, we perform weighted sampling across problem categories, setting the sampling weight for each category to the 0.7 exponential power of its proportion. This strategy increases the representation of less common problem types. Finally, to prevent data leakage, we remove any question from the remaining 219K training set that has a 5-gram Jaccard similarity score higher than 0.4 with any problem in MathCanvas-Bench. This process helps ensure a fair evaluation of model generalization. Further statistics on the final benchmark are shown in Figure 3. Evaluation Protocol Our evaluation protocol re- lies on GPT-4.1 to ensure consistent and scalable assessment. For each problem, GPT-4.1 is tasked with extracting the final answers for every sub- question from the model’s output and comparing them against the ground-truth answers. The spe- cific prompt templates used for this process are detailed in Appendix C. We employ two distinct metrics to score performance: Complete Accuracy: A binary score is awarded. A model receives 1 point only if the answers to all sub-questions are correct, and 0 otherwise. This metric evaluates the model’s ability to solve a prob- lem completely. Weighted Scoring: To provide a more granu- lar evaluation of partial progress, this metric as- 5 Unified LLM Gen. Expert Und. Expert Gen. Encoder Editing:Construct G on line CE and such that GB is perpendicular to AB. T2I:A quadrilateral pyramid with a parallelogram ABCD as its base … … Editing T2I Rectified-Flow Loss Final solution: ∵ ∠BOC = 90°, ∴ OD = ½BC = 1. ∵ △ABC is equilateral, ∴ AD = √3. In △OAD: OA < OD + AD with equality when O,D,A are collinear. ∴ OA_max = OD + AD = 1 + √3 Question: Given equilateral △ABC with side length 2, B on y-axis, C on x-axis. Find max(OA). Auxiliary line: Let D = midpoint of BC, connect O and D, A and D. Unified LLM Gen. Expert Und. Expert Gen. Encoder Und. Encoder Editing Input Image Cross-Entropy Loss Rectified-Flow Loss Cross-Entropy Loss Stage I Visual Manipulation Stage II Strategic Visual- Aided Reasoning Und. Encoder For T2I and Editing For Editing Figure 4: The two-stage training recipe for MathCanvas. (Left) Stage I: Visual Manipulation. The model’s Generation Expert is pretrained on our MathCanvas-Edit and MathCanvas-Imagen corpora to instill foundational diagram generation and editing skills. (Right) Stage II: Strategic Visual-Aided Reasoning. The entire model is then fine-tuned on MathCanvas-Instruct to learn the strategic interleaving of visual actions with textual reasoning. signs exponentially increasing weights to each sub- question. The precise formula for this weighting scheme is detailed in Appendix C. The final score is the sum of the weights of the correctly answered sub-questions, a method that allows us to assess the model’s accuracy on intermediate steps within the reasoning chain. Thus, MathCanvas-Bench provides a rigorous and challenging testbed for evaluating interleaved image-textual reasoning capabilities. 3.3 Two-Stage Training Recipe We implement our framework on BAGEL (Deng et al., 2025a), a state-of-the-art unified LMM. Its architecture features two distinct transformer ex- perts—one for understanding and one for gener- ation—integrated within a single, unified model structure. This design provides a strong foundation for our approach. Our MathCanvas recipe enhances this architecture through a two-stage process, as illustrated in Figure 4: a foundational Stage I: Vi- sual Manipulation, followed by Stage II: Strategic Visual-Aided Reasoning. Stage I: Visual Manipulation The goal of this foundational stage is to instill robust visual syn- thesis and editing skills for mathematical dia- grams. We pretrain the model on a mixture of our 5.2M-trajectory MathCanvas-Edit and 10M- pair MathCanvas-Imagen datasets. To foster iter- ative editing capabilities, each editing trajectory is structured as a continuous sequence of 2-4 di- agram transformations. To preserve the model’s inherent reasoning abilities, we freeze the entire understanding pathway and exclusively train the Generation Expert via a Rectified-Flow Loss (Liu et al., 2022) on the diagram generation task (Fig- ure 4, Stage I). This approach builds a strong visual foundation without catastrophic forgetting of its core understanding capabilities. Stage II: Strategic Visual-Aided Reasoning With the visual foundation established, Stage II fine-tunes the model to intelligently interleave its drawing and reasoning faculties using our inter- leaved image-text dataset, MathCanvas-Instruct. To enable the model to strategically decide when to draw, it is trained on a token prediction task. Fol- lowing each text segment (marked by the <im_end> token), the model must predict whether to generate the <|vision_start|> token to initiate a draw- ing, or the <|endoftext|> token to conclude the response. To inform how the model draws and understands, we process input and output images differently. All images provided in the question are encoded into clean VAE and ViT tokens, serving as visual context. For images within the solution, which the model must generate, we additionally include noised VAE tokens to compute the Rectified-Flow Loss. Unlike Stage I, all model components are unfrozen and trained jointly (Figure 4, Stage II). To enhance generation quality, we also leverage the ar- chitecture’s inherent dual Classifier-Free Guidance mechanism during inference. This orchestration stage culminates in a model that can autonomously 6 Model Size Think Overall Algebra Analytic Geom. Calc & Vector Plane Geom. Solid Geom. Stats. Transf. Geom. Trig. Complete Weighted Closed-source (unified) LMMs Gemini-2.5-Pro - ✓ 47.9 58.2 68.0 59.2 60.2 54.8 48.7 64.5 58.5 69.9 Gemini-2.5-Flash - ✓ 39.3 49.5 63.2 56.5 54.6 40.7 40.7 61.1 46.8 64.6 Gemini-2.0-Flash - ✗ 21.2 32.6 39.1 32.6 38.9 31.1 25.6 51.4 28.1 38.0 GPT-4.1 - ✗ 19.0 30.0 40.4 30.7 37.1 24.1 25.1 54.0 21.5 42.5 GPT-4.1-mini - ✗ 14.6 26.3 35.7 30.5 36.5 22.0 22.4 24.8 19.7 30.3 GPT-4o - ✗ 9.9 19.4 21.6 17.7 21.8 19.5 18.6 17.4 13.2 23.0 GPT-5 - ✓ 43.5 51.4 68.7 55.5 64.2 45.6 36.1 64.5 42.7 66.5 Claude-Sonnet-4 - ✓ 25.0 37.8 44.8 38.9 49.3 33.8 33.0 46.9 30.3 47.6 Seed-1.6-Thinking - ✓ 44.1 55.2 67.7 57.5 55.9 52.2 45.0 65.1 56.8 60.7 Qwen3-VL-Plus - ✓ 40.9 51.5 67.0 54.6 56.9 45.9 42.0 66.7 49.3 58.9 Nano-Banana - ✗ 33.2 43.7 55.4 50.2 51.8 34.5 36.6 56.7 39.4 60.4 Open-source (unified) LMMs Qwen-2.5-VL-7B 7B ✗ 8.9 18.7 19.5 19.0 19.2 20.6 18.7 10.7 13.9 15.0 Qwen-2.5-VL-32B 32B ✗ 15.4 27.6 29.8 27.4 27.8 27.4 27.2 27.9 20.1 30.5 Qwen-2.5-VL-72B 72B ✗ 21.1 32.8 30.6 19.5 36.4 34.5 33.5 23.9 33.6 48.9 Gemma-3-27b-it 27B ✗ 15.8 26.6 31.3 28.4 34.4 25.8 21.0 40.0 21.0 26.9 InternVL3.5-8B 8B ✗ 16.7 26.4 32.3 33.8 33.8 24.2 26.9 43.7 16.2 14.9 InternVL3.5-30B-A3B 30B ✗ 11.7 22.2 22.2 19.9 15.1 24.9 24.3 22.1 17.4 18.4 Keye-VL-1.5-8B 8B ✓ 17.1 27.0 33.1 28.0 26.2 27.0 23.6 29.5 20.9 26.3 BAGEL 7B ✗ 8.3 18.5 18.1 13.1 17.1 20.8 23.0 10.9 19.4 13.3 BAGEL-Zebra-CoT 7B ✗ 8.0 16.6 18.0 15.1 15.6 18.0 16.8 20.8 11.1 14.1 BAGEL-Canvas 7B ✗ 21.9 34.4 29.9 27.2 17.9 40.0 35.3 23.2 29.3 40.4 ∆Over Base Model +13.6 +15.9 +11.8 +14.1 +0.8 +19.2 +12.3 +12.3 +9.9 +27.1 Table 1: Comparison of model performances across all mathematical subjects. The best closed-source and open-source highest accuracy of LMMs are marked in red and blue, respectively. generate diagrams as intermediate steps to solve complex problems. Detailed training hyperparame- ters are provided in Appendix A. 4 Experiments We compare BAGEL-Canvas against 20 prominent LMMs, including top-performing proprietary mod- els such as the Gemini series (2.5-Pro, 2.5-Flash, Nano-Banana, 2.0-Flash) (Comanici et al., 2025), the GPT series (GPT-5, GPT-4.1, GPT-4.1-mini, GPT-4o) (OpenAI, 2025a; OpenAI et al., 2024c,a), Claude-Sonnet-4 (Anthropic, 2025), other strong multimodal models like Seed-1.6-Thinking (Seed et al., 2025) and Qwen3-VL-Plus (Bai et al., 2025), and powerful open-source models, including the Qwen-2.5-VL series (7B, 32B, 72B) (Bai et al., 2025), Gemma-3-27b-it(Team et al., 2025) , In- ternVL3.5 (8B, 30B) (Wang et al., 2025b), and Keye-VL-1.5-8B (Yang et al., 2025). We also in- clude our base model, BAGEL (Deng et al., 2025a), and a variant, BAGEL-Zebra-CoT (Li et al., 2025a), to precisely measure the gains from our frame- work. All LMM evaluations are conducted using VLMEvalKit (Duan et al., 2024) to ensure a fair comparison. The comprehensive results are shown in Table 1. 4.1 Benchmark Results As presented in Table 1, BAGEL-Canvas achieves a weighted score of 34.4% on our benchmark, es- tablishing it as the top-performing open-source model. It surpasses all open-source competitors, in- cluding significantly larger models like Qwen-2.5- VL-72B (32.8) and InternVL3.5-30B-A3B (22.2). This result represents a substantial +15.9 point im- provement over its base model, BAGEL, demon- strating the profound effectiveness of our training paradigm in unlocking advanced reasoning capa- bilities. Furthermore, BAGEL-Canvas proves to be highly competitive with proprietary systems, outperforming several prominent models such as Gemini-2.0-Flash (32.6) and GPT-4.1 (30.0). An analysis of performance across mathematical domains reveals that BAGEL-Canvas exhibits the most significant gains in geometry-heavy subjects: Trigonometry (+27.1), Plane Geometry (+19.2), and Solid Geometry (+12.3). This result strongly supports our hypothesis that visual reasoning is par- ticularly beneficial for geometric problem-solving. The model also shows substantial improvements in Analytic Geometry (+14.1) and Algebra (+11.8), suggesting that the ability to visualize functions and coordinate systems enhances reasoning in broader mathematical contexts. The modest gain in Cal- culus & Vector (+0.8) indicates that this domain may require specialized reasoning capabilities be- 7 Model MathVista MathVerse MathVision (GPS) (Text Dominant) (Text Lite) (test) AnaG Angle Area Len SolG Alg Others BAGEL 68.8 49.2 42.0 24.1 26.2 31.8 25.0 28.7 22.1 17.1 23.1 BAGEL-Canvas 79.3 65.4 59.9 32.9 48.8 49.1 35.2 37.9 31.2 30.1 27.9 ∆ +10.5 +16.2 +17.9 +8.8 +22.6 +17.3 +10.2 +9.2 +9.1 +13.0 +4.8 Table 2: Generalization performance of BAGEL-Canvas compared to its base model (BAGEL) on three multimodal math benchmarks. ∆indicates the absolute improvement. MathVision subject abbreviations: AnaG (Analytic Geometry), SolG (Solid Geometry), Alg (Algebra), Angle (Metric Geometry - Angle), Area (Metric Geometry - Area), and Len (Metric Geometry - Length). Model Overall Complete Weighted BAGEL-Canvas 21.9 34.4 w/o MathCanvas-Edit 19.8 32.0 w/o MathCanvas-Imagen 18.2 30.8 Table 3: Ablation study on the pre-training corpora. We report the performance drop after removing the editing data (w/o MathCanvas-Edit) and the entire pre-training data (w/o MathCanvas-Imagen). Model Overall Complete Weighted BAGEL-Canvas 21.9 34.4 – (Skip Image) 19.7 31.9 BAGEL-Canvas-Text 18.7 30.9 Table 4: Ablation study on the visual modality. BAGEL- Canvas-Text is a variant fine-tuned without any visual data. (– Skip Image) denotes the full model being con- strained to text-only reasoning during inference. yond the scope of our current visual augmentation techniques. 4.2 Performance on Other Math Benchmarks To assess the generalization capabilities of BAGEL- Canvas, we evaluate it on three established pub- lic benchmarks: the GPS category from Math- Vista’s testmini set (Lu et al., 2024), the full test set of MathVision (Wang et al., 2024), and the Text Dominant/Lite subsets from MathVerse’s testmini (Zhang et al., 2024a). As detailed in Table 2, BAGEL-Canvas demonstrates substantial and consistent improvements over its base model, BAGEL, across all benchmarks, with particularly strong gains on MathVerse (+17.9) and MathVista (+10.5). The detailed breakdown on MathVision further reveals significant improvements in subjects that benefit from visual intuition, such as Analytic Geometry (+22.6), Algebra (+13.0), and various plane geometry tasks (Angle: +17.3). Crucially, since these benchmarks require text-only solutions, this strong performance validates that our training paradigm fundamentally enhances the model’s in- trinsic reasoning abilities, allowing it to generalize effectively to traditional problem-solving formats. 4.3 Ablation Studies We conduct a series of ablation studies to dissect the contributions of the key components within our framework: the pretraining corpus and the role of the visual modality in the final reasoning stage. Effectiveness of the Pre-training Corpus. We investigate the impact of our two-stage pre-training strategy by ablating the MathCanvas-Edit and MathCanvas-Imagen corpora. As shown in Ta- ble 3, removing the MathCanvas-Edit data (w/o MathCanvas-Edit) results in a 2.4-point drop in the weighted score. This highlights the importance of learning step-by-step diagram editing, a criti- cal skill for solving complex problems that require constructing auxiliary elements. A further abla- tion, removing the entire pre-training stage (w/o MathCanvas-Imagen), leads to an additional 1.2- point performance decrease. This confirms that even foundational diagram generation capabilities provide a vital scaffold for the fine-tuning phase. Together, these results validate our two-stage pre- training approach, demonstrating that both gener- ation and editing skills are essential for achieving optimal performance. Importance of Visual Modality in Reasoning. We analyze the importance of the visual modality through two ablations. First, we fine-tune a vari- ant, BAGEL-Canvas-Text, using only the textual reasoning paths from MathCanvas-Instruct. Sec- ond, we constrain the full BAGEL-Canvas model to bypass visual generation during inference (– Skip Image). As shown in Table 4, both scenar- ios result in a significant performance drop. The BAGEL-Canvas-Text variant’s weighted score falls by 3.5 points, confirming that training on inter- leaved visual-textual data is essential for learning complex reasoning. Interestingly, the model that simply skips image generation at inference (– Skip 8 Image) performs 1.0 point better than BAGEL- Canvas-Text, despite both producing text-only so- lutions. This suggests that our interleaved training paradigm not only teaches the model how to lever- age visual aids but also fundamentally enhances its underlying textual reasoning capabilities. 5 Conclusion We introduced MathCanvas, a comprehensive framework to endow Large Multimodal Models with intrinsic Visual Chain-of-Thought capabili- ties for mathematical reasoning. By leveraging our newly created large-scale datasets (MathCanvas- Edit, MathCanvas-Imagen, and MathCanvas- Instruct) in a two-stage training recipe, we taught our model, BAGEL-Canvas, to master diagram ma- nipulation and strategically interleave it with tex- tual deduction. This approach yielded an 86% rel- ative improvement over strong baselines on our MathCanvas-Bench benchmark. Crucially, this training paradigm not only teaches the model when and how to draw, but also fundamentally enhances its core textual reasoning. Our work provides a robust foundation for future research into broader and more complex multimodal reasoning. References Anthropic. 2025. System card: Claude opus 4 & claude sonnet 4. Technical report, Anthropic. Accessed 2025-10-07. Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wen- bin Ge, Sibo Song, Kai Dang, Peng Wang, Shi- jie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, and 8 others. 2025. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923. Xinyan Chen, Renrui Zhang, Dongzhi Jiang, Aojun Zhou, Shilin Yan, Weifeng Lin, and Hongsheng Li. 2025. Mint-cot: Enabling interleaved visual tokens in mathematical chain-of-thought reasoning. arXiv preprint arXiv:2506.05331. Zihui Cheng, Qiguang Chen, Xiao Xu, Jiaqi Wang, Weiyun Wang, Hao Fei, Yidong Wang, Alex Jin- peng Wang, Zhi Chen, Wanxiang Che, and Libo Qin. 2025. Visual thoughts: A unified perspective of un- derstanding multimodal chain-of-thought. Preprint, arXiv:2505.15510. Ethan Chern, Zhulin Hu, Steffi Chern, Siqi Kou, Jiadi Su, Yan Ma, Zhijie Deng, and Pengfei Liu. 2025. Thinking with generated images. arXiv preprint arXiv:2505.22525. Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit Dhillon, Mar- cel Blistein, Ori Ram, Dan Zhang, Evan Rosen, Luke Marris, Sam Petulla, Colin Gaffney, Asaf Aharoni, Nathan Lintz, Tiago Cardal Pais, Henrik Jacobs- son, Idan Szpektor, Nan-Jiang Jiang, and 3290 oth- ers. 2025. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. Preprint, arXiv:2507.06261. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhi- hong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. Deepseek-r1: Incentivizing reasoning capa- bility in llms via reinforcement learning. Preprint, arXiv:2501.12948. Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xi- aonan Nie, Ziang Song, Guang Shi, and Haoqi Fan. 2025a. Emerging properties in unified multimodal pretraining. Preprint, arXiv:2505.14683. Linger Deng, Linghao Zhu, Yuliang Liu, Yu Wang, Qunyi Xie, Jingjing Wu, Gang Zhang, Yingying Zhu, and Xiang Bai. 2025b. Theorem-validated reverse chain-of-thought problem generation for geometric reasoning. Preprint, arXiv:2410.17885. Chengqi Duan, Rongyao Fang, Yuqing Wang, Kun Wang, Linjiang Huang, Xingyu Zeng, Hongsheng Li, and Xihui Liu. 2025. Got-r1: Unleashing reasoning capability of mllm for visual genera- tion with reinforcement learning. arXiv preprint arXiv:2505.17022. Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, and 1 others. 2024. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM international conference on multimedia, pages 11198–11201. Rongyao Fang, Chengqi Duan, Kun Wang, Linjiang Huang, Hao Li, Shilin Yan, Hao Tian, Xingyu Zeng, Rui Zhao, Jifeng Dai, and 1 others. 2025a. Got: Unleashing reasoning capability of multimodal large language model for visual generation and editing. arXiv preprint arXiv:2503.10639. Rongyao Fang, Aldrich Yu, Chengqi Duan, Linjiang Huang, Shuai Bai, Yuxuan Cai, Kun Wang, Si Liu, Xi- hui Liu, and Hongsheng Li. 2025b. Flux-reason-6m & prism-bench: A million-scale text-to-image rea- soning dataset and comprehensive benchmark. arXiv preprint arXiv:2509.09680. Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jian- hua Han, Hang Xu, Zhenguo Li, and Lingpeng Kong. 2025a. G-llava: Solving geometric problem 9 with multi-modal large language model. Preprint, arXiv:2312.11370. Jun Gao, Yongqi Li, Ziqiang Cao, and Wenjie Li. 2025b. Interleaved-modal chain-of-thought. Preprint, arXiv:2411.19488. Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. 2025. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. Preprint, arXiv:2412.05237. Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Os- tendorf, Luke Zettlemoyer, Noah A Smith, and Ran- jay Krishna. 2024. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. Preprint, arXiv:2406.09403. Ang Li, Charles Wang, Kaiyu Yue, Zikui Cai, Ollie Liu, Deqing Fu, Peng Guo, Wang Bill Zhu, Vatsal Sharan, Robin Jia, and 1 others. 2025a. Zebra-cot: A dataset for interleaved vision language reasoning. arXiv preprint arXiv:2507.16746. Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vuli´c, and Furu Wei. 2025b. Imagine while reasoning in space: Multimodal visualization-of-thought. arXiv preprint arXiv:2501.07542. Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vuli´c, and Furu Wei. 2025c. Imagine while reasoning in space: Multimodal visualization-of-thought. Preprint, arXiv:2501.07542. Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. 2021. Swinir: Image restoration using swin transformer. Preprint, arXiv:2108.10257. Xingchao Liu, Chengyue Gong, and Qiang Liu. 2022. Flow straight and fast: Learning to gener- ate and transfer data with rectified flow. Preprint, arXiv:2209.03003. Ilya Loshchilov and Frank Hutter. 2019. De- coupled weight decay regularization. Preprint, arXiv:1711.05101. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chun- yuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai- Wei Chang, Michel Galley, and Jianfeng Gao. 2024. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In Inter- national Conference on Learning Representations (ICLR). Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. 2021. Inter-gps: Interpretable geometry problem solv- ing with formal language and symbolic reasoning. Preprint, arXiv:2105.04165. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai- Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neu- ral Information Processing Systems (NeurIPS). OpenAI, :, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander M ˛adry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024a. Gpt- 4o system card. Preprint, arXiv:2410.21276. OpenAI, :, Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, and 244 others. 2024b. Openai o1 system card. Preprint, arXiv:2412.16720. OpenAI. 2025a. GPT-5 System Card. Technical report, OpenAI. Accessed on [YYYY-MM-DD]. OpenAI. 2025b. OpenAI o3 and o4-mini System Card. Technical report, OpenAI. Accessed on [YYYY- MM-DD]. OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Ale- man, Diogo Almeida, Janko Altenschmidt, Sam Alt- man, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haim- ing Bao, Mohammad Bavarian, Jeff Belgum, and 262 others. 2024c. Gpt-4 technical report. Preprint, arXiv:2303.08774. Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao, Zhimin Bao, Chen Li, and Honggang Zhang. 2024. We-math: Does your large multimodal model achieve human-like mathematical reasoning? Preprint, arXiv:2407.01284. ByteDance Seed, :, Jiaze Chen, Tiantian Fan, Xin Liu, Lingjun Liu, Zhiqi Lin, Mingxuan Wang, Chengyi Wang, Xiangpeng Wei, Wenyuan Xu, Yufeng Yuan, Yu Yue, Lin Yan, Qiying Yu, Xiaochen Zuo, Chi Zhang, Ruofei Zhu, Zhecheng An, and 255 others. 2025. Seed1.5-thinking: Advancing superb reason- ing models with reinforcement learning. Preprint, arXiv:2504.13914. Hao Shao, Shengju Qian, Han Xiao, Guanglu Song, Zhuofan Zong, Letian Wang, Yu Liu, and Hong- sheng Li. 2024a. Visual cot: Advancing multi- modal language models with a comprehensive dataset and benchmark for chain-of-thought reasoning. Ad- vances in Neural Information Processing Systems, 37:8612–8642. 10 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024b. Deepseekmath: Pushing the limits of mathemati- cal reasoning in open language models. Preprint, arXiv:2402.03300. Kai Sun, Yushi Bai, Ji Qi, Lei Hou, and Juanzi Li. 2024. Mm-math: Advancing multimodal math evaluation with process evaluation and fine-grained classifica- tion. Preprint, arXiv:2404.05091. Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report. Preprint, arXiv:2503.19786. Trieu Trinh, Yuhuai Wu, Quoc Le, He He, and Thang Luong. 2024. Solving olympiad geometry without human demonstrations. Nature. Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. 2024. Measuring multimodal mathematical reason- ing with math-vision dataset. In The Thirty-eight Conference on Neural Information Processing Sys- tems Datasets and Benchmarks Track. Ke Wang, Junting Pan, Linda Wei, Aojun Zhou, Weikang Shi, Zimu Lu, Han Xiao, Yunqiao Yang, Houxing Ren, Mingjie Zhan, and Hongsheng Li. 2025a. Mathcoder-vl: Bridging vision and code for enhanced multimodal mathematical reasoning. Preprint, arXiv:2505.10557. Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song, Mingjie Zhan, and Hongsheng Li. 2023. Mathcoder: Seamless code integration in llms for enhanced math- ematical reasoning. Preprint, arXiv:2310.03731. Weiyun Wang, Zhangwei Gao, Lixin Gu, Hengjun Pu, Long Cui, Xingguang Wei, Zhaoyang Liu, Linglin Jing, Shenglong Ye, Jie Shao, Zhaokai Wang, Zhe Chen, Hongjie Zhang, Ganlin Yang, Haomin Wang, Qi Wei, Jinhui Yin, Wenhao Li, Erfei Cui, and 56 others. 2025b. Internvl3.5: Advancing open-source multimodal models in versatility, reasoning, and effi- ciency. Preprint, arXiv:2508.18265. Yikun Wang, Siyin Wang, Qinyuan Cheng, Zhaoye Fei, Liang Ding, Qipeng Guo, Dacheng Tao, and Xipeng Qiu. 2025c. Visuothink: Empowering lvlm reasoning with multimodal tree search. Preprint, arXiv:2504.09130. Yikun Wang, Yibin Wang, Dianyi Wang, Zimian Peng, Qipeng Guo, Dacheng Tao, and Jiaqi Wang. 2025d. Geometryzero: Improving geometry solving for llm with group contrastive policy optimization. Preprint, arXiv:2506.07160. Zhikai Wang, Jiashuo Sun, Wenqi Zhang, Zhiqiang Hu, Xin Li, Fan Wang, and Deli Zhao. 2025e. Benchmarking multimodal mathematical reason- ing with explicit visual dependency. Preprint, arXiv:2504.18589. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elic- its reasoning in large language models. Preprint, arXiv:2201.11903. Lai Wei, Yuting Li, Kaipeng Zheng, Chen Wang, Yue Wang, Linghe Kong, Lichao Sun, and Weiran Huang. 2025. Advancing multimodal reasoning via reinforcement learning with cold start. Preprint, arXiv:2505.22334. An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jian- hong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. 2024. Qwen2.5-math tech- nical report: Toward mathematical expert model via self-improvement. Preprint, arXiv:2409.12122. Biao Yang, Bin Wen, Boyang Ding, Changyi Liu, Chen- glong Chu, Chengru Song, Chongling Rao, Chuan Yi, Da Li, Dunju Zang, Fan Yang, Guorui Zhou, Guowang Zhang, Han Shen, Hao Peng, Haojie Ding, Hao Wang, Haonan Fan, Hengrui Ju, and 42 others. 2025. Kwai keye-vl 1.5 technical report. Preprint, arXiv:2509.01563. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, and 3 others. 2024. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of CVPR. Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wen- hao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023. Mammoth: Building math generalist mod- els through hybrid instruction tuning. Preprint, arXiv:2309.05653. Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, and 1 others. 2024a. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624. Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, Shanghang Zhang, Peng Gao, Chunyuan Li, and Hongsheng Li. 2024b. Mavis: Mathematical visual instruction tuning with an auto- matic data engine. Preprint, arXiv:2407.08739. Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. 2024. Math-puma: Progressive upward multi- modal alignment to enhance mathematical reasoning. Preprint, arXiv:2408.08640. 11 A Training Details We implement our framework on top of the publicly available BAGEL-7B-MoT (Deng et al., 2025a) model. All training experiments were conducted on a cluster of 16 NVIDIA H800 GPUs. We use the AdamW (Loshchilov and Hutter, 2019) optimizer for both training stages. The detailed hyperparame- ters for our two-stage training recipe, correspond- ing to Stage I (Visual Manipulation) and Stage II (Strategic Visual-Aided Reasoning), are provided in Table 5. Stage I In this stage, the primary objective is to train the model’s visual generation capabilities. As described in Section 3.3, we freeze the entire un- derstanding expert and only train the generation expert. The loss is solely based on the Rectified- Flow objective (Liu et al., 2022) for diagram gen- eration, hence the absence of a Cross-Entropy loss component. We employed a slightly higher ViT condition dropout rate (0.3) to regularize the model and prevent overfitting to the visual features of the pretrainig data. Stage II In the second stage, all model compo- nents are unfrozen to enable joint optimization. The model is trained on a combined loss function: a Cross-Entropy loss for predicting the next to- ken (either text or the special <|vision_start|> and <|endoftext|> token), weighted by 0.25, and the Rectified-Flow loss for generating diagrams, weighted by 1.0. The learning rate is halved, and the number of training steps is reduced, which is typical for fine-tuning tasks. The ViT condition dropout is lowered to 0.1 to better leverage visual context during strategic reasoning. B Dataset Details B.1 MathCanvas-Edit and MathCanvas-Imagen Details on Foundational Structure Generation. As described in Section 3.1, the Foundational Struc- ture Generation pipeline for the MathCanvas-Edit dataset relies on an automated algorithm that ran- domly and incrementally adds geometric primitives and relations. This section specifies the exact sets used in this process. Geometric Primitive Set. The generation process initiates by selecting one of 18 basic geometric objects from the following set: • segment, angle • Triangles: triangle, iso_triangle (isosce- les), r_triangle (right), triangle_ab, ieq_ triangle (equilateral), risos (right isosceles) • Quadrangles: rectangle, isquare, trapezoid, r_trapezoid (right), eq_trapezoid (isosce- les), quadrangle, eq_quadrangle (equilateral), eqdia_quadrangle (equal-diagonal) • Polygons: pentagon, eq_pentagon (equilateral) Geometric Relation Set. Subsequently, the algo- rithm iteratively applies relations from a predefined set of 41 constructions. These are categorized by the number of new points they introduce (typically one or two). • 1-Point Relations (37): angle_bisector, angle_mirror, circle, circumcenter, eq_triangle, eqangle2, eqdistance, foot, incenter, excenter, intersection_cc, on_bline, intersection_lc, on_aline, intersection_ll, on_line, intersection_ lp, intersection_lt, intersection_pp, intersection_tt, lc_tangent, midpoint, mirror, nsquare, on_bline, on_circle, on_pline, on_tline, on_dia, orthocenter, parallelogram, psquare, reflect, s_angle, shift, on_opline, eqangle3, on_circum • 2-Point Relations (4): square, trisegment, trisect, tangent The automated algorithm randomly samples from these sets to build progressively more complex dia- grams, ensuring systematic coverage of fundamen- tal geometric operations. Examples. An example from the MathCanvas- Edit dataset is presented in Figure 6. Examples from the MathCanvas-Imagen dataset are shown in Figures 7 and 8. B.2 MathCanvas-Instruct Dataset Statistics. We present the knowledge point distribution of the MathCanvas-Instruct train- ing set in Figure 5. Table 6 demonstrates the sta- tistical characteristics of the MathCanvas-Instruct dataset, comprising 219K problems, of which 65% are multimodal and 35% are text-only. We have also analyzed the distribution of problem sources, the length of questions and solutions, and the num- ber of images they contain. Examples. We showcase examples from the MathCanvas-Instruct dataset in Figures 9, 10, 11. C Benchmark Evaluation Details C.1 Weighted Scoring Weights The weights for our Weighted Scoring metric are calculated using an exponential growth factor of 12 Hyperparameter Stage I Stage II Optimizer & Scheduler Learning Rate (LR) 2 × 10−5 1 × 10−5 LR Scheduler Cosine Decay Cosine Decay Min Learning Rate 1 × 10−7 1 × 10−7 Warmup Steps 2,000 500 Total Training Steps 80,000 16,000 Model & Loss EMA Decay Rate 0.999 0.995 Rectified-Flow Timestep Shift 2.0 2.0 Cross-Entropy (CE) Loss Weight N/A 0.25 Rectified-Flow (MSE) Loss Weight 1.0 (Implicit) 1.0 Frozen Components Understanding Expert None Batching & Tokenization Max Tokens per Batch 46,080 51,200 Max Tokens per Sample 8,192 25,600 Regularization (Dropout) Text Condition Dropout 0.1 0.1 ViT Condition Dropout 0.3 0.1 VAE Condition Dropout 0.1 0.1 Table 5: Key hyperparameters for the two-stage training process. "N/A" indicates that the parameter was not applicable to that stage. Analytic Geometry Algebra Solid Geometry Plane Geometry Trigonometry Transformational Geometry Statistics Calculus & Vector Figure 5: Distribution of knowledge type of MathCanvas-Instruct dataset. 1.3, valuing later sub-questions more heavily. The specific formula for the weight wi of the i-th sub- question in a problem with N sub-questions is: wi = 1.3i−1 PN j=1 1.3j−1 Since our benchmark contains problems with a maximum of four sub-questions, we use the follow- ing pre-calculated, normalized weights for evalua- tion. The final score for a problem is the sum of the weights of the correctly answered sub-questions. • For 2 sub-questions: [0.4348, 0.5652] • For 3 sub-questions: [0.2506, 0.3258, 0.4236] • For 4 sub-questions: [0.1616, 0.2101, 0.2732, 0.3551] C.2 Evaluation Template Tables 7 and 8 display the prompt templates used for MathCanvas-Bench evaluation. D Additional Qualitative Results To further illustrate the limitations of even the most advanced LMMs when they lack intrinsic VCoT ca- pabilities, we present qualitative examples of their performance on problems that benefit from visual manipulation. Figure 12 shows the solutions from Gemini-2.5-Pro and GPT-5 for the problem fea- tured in Figure 1 of the main paper, demonstrating their reliance on complex and sometimes flawed algebraic approaches. We provide more qualitative results of BAGEL-Zebra-CoT, Nano-Banana, and our method in Figure 13. 13 Base caption:Let ABC be a right triangle with the right angle at vertex A. Image: Step 1: Construct points D and E$that divide segment AB into three equal parts. Image: Step 2: Construct points F and G that divide segment CE into three equal parts. Image: Step 3: Connect CD. Image: Figure 6: An example from MathCanvas-Edit dataset. Caption: In triangle ABC, ∠BAC is a right angle and ∠ABC and ∠ACB are equal. The shape D is the mirror image of B across the line CA. Image: Caption: Triangle CAB has G as its centroid, with D, E, and F being the midpoints of sides CA, BA, and CB respectively. The length of segment AB is half of segment AC. Image: Caption: Let ABC be a right isosceles triangle with the right angle at A. Construct points D and E that divide segment BC into three equal parts. Construct points F and G that divide segment BD into three equal parts. Construct H as the incenter of triangle ACD. Image: Figure 7: Examples from MathCanvas-Imagen dataset. Caption: The image depicts a Cartesian coordinate system with the x-axis and y-axis labeled as x and y... The axes intersect at the origin, labeled as O. A smooth curve is plotted, representing the function y = 1 / (1 + x^2). This curve starts at y = 1 when x = 0 and asymptotically approaches the x-axis as x increases... Two vertical dashed lines are drawn from the x-axis to the curve at x = 1 and x = 2. These lines intersect the curve at points where the y-values are 1/2 and 1/5, respectively. These intersections highlight the values of the function... illustrating its decreasing nature. Image: Caption: The image depicts a circle centered at point O, with three points A, B, and C located on the circumference... The points are positioned with A in th upper left, B in the upper right, and C in the lower left. The line segments OA, OB, and OC are radii of the circle... Additionally, the line segment AC is drawn, forming the triangle OAC with the radii OA and OC. The spatial arrangement suggests that triangle OAC is an isosceles triangle with OA = OC... The overall configuration highlights the geometric properties of the circle and the isosceles triangle within it. Image: Figure 8: Examples from MathCanvas-Imagen dataset. 14 Question: Input image: In the circle with center O, AB is a diameter. Points C, D, E lie on the circle. DE is perpendicular to AB. Given ∠ABE=50°, find the measure of ∠ACD. Options: A. 90°, B. 50°, C. 45°, D. 30°. Solution: Text: Draw segment BD. Image: Text: Since AB is a diameter and DE is perpendicular to AB at its midpoint O, by the perpendicular diameter theorem, E and D are endpoints of a chord perpendicular to the diameter, so BE and BD are equal radii in isosceles triangle BDE, implying BE=BD and thus ∠ABD=∠ABE. From ∠ABE=50°, we get ∠ABD=50°. ∠ACD and ∠ABD subtend the same arc AD of the circle, so by the inscribed angle theorem, ∠ACD=∠ABD=50°. Therefore, the correct option is B. Figure 9: An example from MathCanvas-Instruct dataset. Question: Input image: In right trapezoid ABCD, AB // CD, AD ⊥ CD. Given AB=BC, and AE ⊥ BC at E. Determine the relationship between CD and CE. Solution: Text: Draw segment AC. Image: Text: Since AB=BC, triangle ABC is isosceles with vertex at B, hence ∠BAC=∠ACB. Because AB // CD, corresponding angles give ∠ACD=∠ACB=∠BAC. Therefore ∠ACD=∠ACE. Given AD⊥CD and AE⊥BC, we have ∠ADC=∠AEC=90°. In triangles ACD and ACE: ∠ADC=∠AEC, ∠ACD=∠ACE$, and AC is common. By AAS, triangle ACD≅ACE. Corresponding sides in congruent triangles are equal, so CD=CE. Thus, CD=CE. Figure 10: An example from MathCanvas-Instruct dataset. Question: Real numbers x,y satisfy the constraints: y >= 1, y <= 3x-1, x+y <= m. If the minimum value of the objective function z=5x-4y is -3, then m= ( ) Options: A. 3 B. 2 C. 4 D. 11/3 Solution: Text: The feasible region is defined by the inequalities y >= 1, y <= 3x - 1, and x + y <= m. Draw the feasible region. Image: Text: The minimum value of the objective function z = 5x - 4y will occur at one of the vertices of this region. Based on the slopes of the boundary lines and the objective function, the minimum value is achieved at the intersection of the lines y = 3x - 1 and x + y = m. First, find the coordinates of this vertex by solving the system y = 3x - 1 and x + y = m. This gives x = (m+1)/4 and y = (3m-1)/4. Next, substitute these coordinates into the objective function: z = 5x - 4y = 5((m+1)/4) - 4((3m-1)/4) = (-7m + 9)/4. We are given that the minimum value of z is -3. So, we set (-7m + 9)/4 = -3. Solving for m gives -7m + 9 = -12, which leads to -7m = -21, so m = 3. The correct option is A. Figure 11: An example from MathCanvas-Instruct dataset. 15 Question: Input image: In the semicircle with diameter AB, let O be the midpoint of AB. Find the area of the shaded region given AB = 2, arc AD = arc DE = arc EB. Gemini-2.5-Pro: 1. Determine Angles and Radius: Since the diameter AB = 2, the radius R = 1. The 180° arc of the semicircle is divided into three equal parts, so `arc AD = arc DE = arc EB = 60°`. The corresponding central angle ∠DOE is also 60°. 2. Use Properties of Parallel Lines: Connect OD and OE. Since OD = OE = 1 (radii) and ∠DOE = 60°, ΔDOE is an equilateral triangle. This implies that the line segment DE is parallel to the diameter AB. 3. Equate and Calculate Area: The shaded triangle, ΔADE, and the triangle ΔODE share the same base DE. Because their vertices A and O lie on a line (AB) that is parallel to the base DE, the triangles have equal heights. Therefore, Area(ΔADE) = Area(ΔODE). The area of the equilateral triangle ΔODE with side length 1 is: Area = (side² × √3) / 4 = (1² × √3) / 4 = √3 / 4. GPT-5: 1. Establish Coordinates: Let the semicircle's center O be the origin (0, 0). With diameter AB = 2, the radius is r = 1. Place the diameter on the x-axis, so A = (−1, 0) and B = (1, 0). 2. Determine Vertex Coordinates: The 180° arc is divided into three equal 60° arcs. The angles for the vertices from the positive x-axis are E at 60°, D at 120°, and A at 180°. Their coordinates (r*cosθ, r*sinθ) are: * A = (-1, 0) * D = (cos 120°, sin 120°) = (-1/2, √3/2) * E = (cos 60°, sin 60°) = (1/2, √3/2) 3. Calculate Area: The area of ΔADE can be found using the shoelace formula (vector cross product). Define two vectors originating from vertex A: AD = D - A = (1/2, √3/2) and AE = E - A = (3/2, √3/2). The area calculation is as follows: Area = 1/2 | (AD_x)(AE_y) - (AD_y)(AE_x) | = 1/2 | (1/2)(√3/2) - (√3/2)(3/2) | = 1/2 | √3/4 - 3√3/4 | = 1/2 | -2√3/4 | = √3/4. Figure 12: Leading LMMs (Gemini-2.5-Pro and GPT-5) solving the problem from Figure 1 via text-only reasoning. Their complex or incorrect algebraic approaches, in contrast to the elegant geometric solutions unlocked by intrinsic VCoT, demonstrate the necessity of our MathCanvas framework. 16 Question: Input image: Given CE // DF, ∠ABF=100°, and ∠CAB=20°, find the measure of ∠ACE. BAGEL-Zebra-CoT: The problem asks for the measure of ∠ACE. We are given that CE // DF. This suggests that we can use properties of parallel lines and transversals to find the required angle. We are also given ∠ABF=100° and ∠CAB=20°. Let's visualize the problem to better understand the relationships between the angles. The diagram shows the lines CE and DF are parallel. The line segment AB intersects DF at point B. We are given ∠ABF=100° . Since AB is a transversal cutting the parallel lines CE and DF, we can use the property that the sum of the interior angles on the same side of a transversal is 180°. Therefore, ∠ABF + ∠CBA = 180°. We can calculate ∠CBA as follows: ∠CBA = 180° - ∠ABF = 180°- 100° = 80°. Next, we need to find ∠ACE… … The final answer is ∠ACE= 80°. Nano-Banana: To find the measure of ∠ACE, we can draw an auxiliary line through point C parallel to DF (and therefore also parallel to CE). Let this line be CG, with G to the right of C. Since CG // DF, we have: ∠GCB = ∠CBD (alternate interior angles). Since DF is a straight line, ∠ABC + ∠ABF = 180°. So, ∠ABC = 180° - ∠ABF = 180° - 100° = 80°… … The final answer is 60°. Ours: Extend EC to meet line AB at point G. Since CE // DF, ∠AGE = ∠ABF = 100°. In triangle ACG, the exterior angle at C equals the sum of the two remote interior angles: ∠ACE= ∠CAB + ∠AGE = 20° + 100° = 120°. Therefore, answer is 120°. Figure 13: Comparison of BAGEL-Zebra-CoT, Nano-Banana, and our method. 17 Statistics Number Total Samples 218,604 - Text questions 35% - Multimodal questions 65% - Middle school questions 63% - Grade 7 6% - Grade 8 17% - Grade 9 77% - High school questions 37% - Grade 10 12% - Grade 11 16% - Grade 12 72% - One question 68% - Two sub-questions 18% - Three sub-questions 12% - Four or more sub-questions 2% Question length (text tokens) - Maximum 466 - Average 107.92 Solution length (text tokens) - Maximum 2001 - Average 539.66 Multimodal Question Image - Maximum number 5 - Average number 1.03 Solution Image - Maximum number 5 - Average number 1.18 Table 6: More statistics of MathCanvas-Instruct dataset. 18 You are an expert mathematics teacher and a precise data evaluator. Your task is to analyze a given math problem, compare a predicted solution against a ground truth answer, and determine if the prediction is correct. INPUT FORMAT: You will be provided with a JSON string containing the following fields: • question_text: The full text of the mathematical problem. • ground_truth_answer: The correct, final answer text. This is the gold standard and is already extracted. • prediction_solution: The full solution text from the model, from which you must extract the final answer(s). TASK & OUTPUT REQUIREMENTS: Your output must be a single, valid JSON object. The process involves two main steps: Answer Parsing and Extraction and Correctness Judgment. Step 1: Answer Parsing and Extraction Your first task is to create two lists of answers: gt_answers and pred_answers. The structure of the gt_answers list defines the required structure for the pred_answers list. 1.1 Parsing ground_truth_answer: • The ground_truth_answer is a clean, final answer. • Your task is to parse it into a list (gt_answers). • CRITICAL PARSING RULE: The only condition for creating a list with multiple elements is the presence of explicit multi-part answer tags (e.g., <1>...</1>, <2>...</2>). • If tags are present: Extract the content of each tag into a separate list element. Example: "<1>5 cm</1><2>10 cm</2>" becomes ["5 cm", "10 cm"]. • If no such tags are present: The entire, unmodified string must be treated as the single element of the list. Do not split the string by characters, words, commas, or any other pattern. Example 1: "ABC" must become ["ABC"], NOT ["A", "B", "C"]. Example 2: "x=5, y=10" must become ["x=5, y=10"], NOT ["x=5", "y=10"]. • The gt_answers list will never contain null elements and its length defines the number of sub-questions. 1.2 Extracting from prediction_solution: • Your primary task is to extract the final answer(s) from the prediction_solution text to create the pred_answers list. • IMPORTANT: The answers to different sub-questions may appear in different places within the prediction_solution, not necessarily grouped together at the end. You must treat this as a matching task. Table 7: The prompt template (part 1) used by GPT-4.1 for mathematical reasoning evaluation. 19 • For each part of the gt_answers list, you must scan the entire prediction_solution to find the corresponding predicted answer. Look for explicit labels (e.g., "(1)", "Part A"), final conclusions, boxed answers (e.g., \boxed{...}), or statements that directly answer a part of the original question. • The final pred_answers list must have the exact same length as the gt_answers list. • For each sub-question, if you cannot find a corresponding answer in the prediction_solution, you must use null as a placeholder in that position. This rule is critical and applies in all cases where an answer is missing, including when the prediction_solution appears incomplete or is truncated before all sub-questions are addressed. CRITICAL RULE: The final gt_answers and pred_answers lists must be of equal length. The number of parts in the ground_truth_answer dictates the required length for both lists. Step 2: Correctness Judgment Your second task is to compare the pred_answers list against the gt_answers list, element by element. Judgment Rules: • Numerical Equivalence: Treat numbers as correct if they are mathematically equivalent (e.g., 5, 5.0, 10/2). Allow for minor floating-point rounding differences. • Textual Equivalence: For text answers, judge based on semantic meaning, not exact matching. Ignore case, whitespace, and phrasing differences (e.g., "CB is parallel to PD" is equivalent to "Line CB || Line PD"). • Generate Correctness List: Create a boolean list named correctness. The i-th element is true if the i-th predicted answer is correct, false otherwise. This list must have the same length as the answer lists. Final JSON Output Structure: Your entire response must be a single, valid JSON object matching the schema below. Do not include any text outside of this JSON object. { "analysis": "A brief step-by-step explanation...", "gt_answers": [ "string", ... ], "pred_answers": [ "string or null", ... ], "correctness": [ true/false, ... ] } INPUT DATA: {input_data} Table 8: The prompt template (part 2) used by GPT-4.1 for mathematical reasoning evaluation. The text highlighted in cyan is replaced with the specific input data for each problem being evaluated. 20
MathCanvas: Intrinsic Visual Chain-of-Thought for Multimodal Mathematical Reasoning Weikang Shi1* Aldrich Yu1* Rongyao Fang1*† Houxing Ren1 Ke Wang1 Aojun Zhou1 Changyao Tian1 Xinyu Fu2 Yuxuan Hu1 Zimu Lu1 Linjiang Huang3 Si Liu3 Rui Liu2‡ Hongsheng Li1‡ 1Multimedia Laboratory (MMLab), The Chinese 2Huawei Research, 3BUAA Question: In the semicircle with diameter AB, let O be the midpoint of AB. Find the area of the shaded region given AB = 2, arc AD = arc DE = arc EB. BAGEL MathCanvas Solution: To find the area of the shaded region, we need to calculate the area of the sector formed by the arc DE and subtract the area of △DOE. Since D and E divide the semicircular arc into three equal arcs, each arc is 60°. The radius of O is 1. ... ... Therefore, the area of the shaded region is π/6 - √3/4. BAGEL-Zebra-CoT Nano-Banana Ours Solution: First, let's analyze the properties of the semicircle and the points dividing the arc. △AOE is a 30-60-90 triangle. The angle at O is 60°. ... ... The final Answer is (4π - 6√3 + 9) / 2. Solution: This image shows how the semicircular arc is divided into three equal 60° arcs. It clearly marks the 60° angles corresponding to O. ... ... The area of the shaded region is calculated as √3/4. Solution: Let the radius be r = AB/2 = 1. Connect OD and OE. ∠DOE = 60°, △DOE is equilateral, so ∠ODE = 60°. Thus DE//AB, S△DOE=S△DAE. Thus, the shaded region equals to sector ODE. Then the shaded area is calculated as 60°/360° × πr2 = π/6. Figure 1: MathCanvas demonstrates the first successful application of intrinsic Visual Chain-of-Thought (VCoT) for complex mathematical reasoning. Prior attempts fail by generating incorrect (BAGEL-Zebra-CoT) or strategically poor (Nano-Banana) visuals, leading to wrong solutions. In contrast, MathCanvas correctly generates an intermediate visual step that unlocks a simpler, elegant solution path. Abstract While Large Language Models (LLMs) have excelled in textual reasoning, they struggle with mathematical domains like geometry that intrinsically rely on visual aids. Existing approaches to Visual Chain-of-Thought (VCoT) are often limited by rigid external tools or fail to generate the high-fidelity, strategically-timed diagrams necessary for complex problem-solving. To bridge this gap, we introduce MathCanvas, a comprehensive framework designed to endow unified Large Multimodal Models (LMMs) with intrinsic VCoT capabilities for mathematics. Our approach consists of two phases. First, a Visual Manipulation stage pretrains the model on a novel 15.2M-pair corpus, comprising 10M caption-to-diagram pairs (MathCanvas-Imagen) and 5.2M step-by-step editing trajectories (MathCanvas-Edit), to master diagram generation and editing. Second, a Strategic Visual-Aided Reasoning stage finetunes the model on MathCanvas-Instruct, a new 219K-example dataset of interleaved visualtextual reasoning paths, teaching it when and how to leverage visual aids. To facilitate rigorous evaluation, we introduce MathCanvas- *Equal Contribution †Project lead ‡Corresponding author Bench, a challenging benchmark with 3K problems that require models to produce interleaved visual-textual solutions. Our model, BAGEL-Canvas, trained under this framework, achieves an 86% relative improvement over strong LMM baselines on MathCanvas-Bench, demonstrating excellent generalization to other public math benchmarks. Our work provides a complete toolkit-framework, datasets, and benchmark-to unlock complex, human-like visual-aided reasoning in LMMs. Project Page: https://mathcanvas.github.io/ 1 Introduction Mathematical reasoning represents a pinnacle of human intelligence, demanding a sophisticated interplay of logical deduction, symbolic manipulation, and abstract thinking. The advent of Large Language Models (LLMs) (DeepSeek-AI et al., 2025; Yang et al., 2024; OpenAI et al., 2024b) has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in tackling complex mathematical reasoning tasks. A key driver of recent progress in LLM-based reasoning has been the Chain-of-Thought (CoT) (Wei et al., 2023) technique, which enables models to externalize intermediate steps and significantly improves performance on mathematical tasks. 1 16 Oct 2025 However, the purely textual nature of CoT presents a fundamental limitation in domains like geometry and function analysis, where human problem-solving intrinsically involves constructing and manipulating visual aids, and even state-of-theart models struggle in its absence (see Figure 12 in Appendix D). This gap has motivated the development of Visual Chain-of-Thought (VCoT), which aims to integrate visual information into the reasoning process. Early approaches to VCoT have predominantly relied on external specialized tools, such as dedicated vision models (Shao et al., 2024a; Hu et al., 2024; Gao et al., 2025b) or code interpreters (Hu et al., 2024; Wang et al., 2025c,d). While effective in specific contexts, these toolbased methods are often rigid, constrained to a predefined set of operations, and dependent on specific input formats (e.g., source code), which hinders their flexibility and broader applicability. Recent work has explored intrinsic VCoT, where unified large multimodal models (LMMs) natively generate visual thoughts as an integral part of their reasoning process (Cheng et al., 2025; Li et al., 2025b,a; Chern et al., 2025). Though promising, these previous attempts have been confined to simple domains and have yet to succeed in mathematics due to two key challenges. First, current unified LMMs lack the capability to generate and iteratively edit the high-fidelity mathematical diagrams required for precise reasoning. The generated visuals are often geometrically incorrect, rendering them useless for logical deduction, as shown with BAGEL-Zebra-CoT (Li et al., 2025a) in Figure 1. Second, and more fundamentally, models lack the procedural knowledge to employ visual aids as a strategic component of their reasoning process-the complex decision of determining when to draw, what to draw, and how to leverage the visualization for subsequent logical deduction. This strategic failure is evident even in advanced models like Nano-Banana (Comanici et al., 2025), shown in Figure 1, whose generated visual acts more as a flawed decoration than an integral reasoning step, ultimately failing to uncover the key insight needed for the solution. To this end, we argue that addressing these challenges requires models capable of interleaving textual deduction with the creation and modification of visual aids. Accordingly, we introduce MathCanvas, a comprehensive framework designed to endow unified LMMs with intrinsic VCoT capabilities for complex mathematical problem-solving. Our approach is structured around two complementary phases: Visual Manipulation and Strategic Visual-Aided Reasoning. The first phase, Visual Manipulation, focuses on equipping the model with foundational visual synthesis and editing skills. To achieve this, we construct a new million-scale pretraining corpus specifically for mathematical diagrams. This resource comprises two parts: MathCanvas-Edit, containing 5.2M step-by-step diagram editing instruction pairs generated via a hybrid pipeline that combines LLM-driven mining with programmatic synthesis, and MathCanvas-Imagen, with 10M caption-todiagram pairs. Pretraining on them imparts the robust diagram generation and manipulation abilities that form the bedrock of our approach. The second phase, Strategic Visual-Aided Reasoning, aims to teach the model how to interleave diagrammatic actions with its textual reasoning steps. For this purpose, we curate MathCanvasInstruct, the first large-scale dataset for interleaved visual-textual mathematical reasoning. It contains 219K training examples, where each solution is represented as an interleaved sequence of textual reasoning and corresponding visual steps. As demonstrated in Figure 1, training on MathCanvasInstruct enables the model to learn how to coordinate diagrammatic actions with reasoning trajectories to successfully solve complex problems. Furthermore, to rigorously evaluate models' capabilities in visual-textual mathematical reasoning, we introduce a dedicated benchmark test set MathCanvas-Bench comprising 3K carefully curated problems. Each test instance requires the solver to produce coherent interleaved reasoning and visual outputs. We benchmarked 20 leading LMMs on this dataset, revealing substantial performance gaps and establishing it as a challenging and comprehensive testbed for future research on Visual Chain-of-Thought reasoning. In summary, our contributions are as follows: • We propose MathCanvas, a comprehensive framework that enables LMMs to perform intrinsic VCoT reasoning for complex mathematical problem solving. • We construct two large-scale corpora tailored for our two-phase approach: a 15.2M-pair pretraining dataset for Visual Manipulation, and a 219K-example fine-tuning dataset for Strategic Visual-Aided Reasoning. • We further introduce a dedicated MathCanvasBench test set with 3K problems and benchmark 20 leading LMMs on it, revealing substantial deficiencies and establishing a challenging evaluation bed for future research. 2 Construct point D as the circumcenter of A, B, C. Construct E such that DA = DE. Construct ... LLM ... Beam Search Quality Filter SelfBootstrapping Symbolic Code ... Geometric Primitives ... Geometric Relations 1. angle bisector 2. reflect 3. circumcenter 4. incenter 5. inscribed circle ... ... Sample ... Sample (a) Competition-Level Mining (b) Foundational Structure Generation Construct quadrilateral ... Connect A and E, inter- ... Construct point E as the ... Construct point H ... Construct point I ... Construct parallel ... ... MathCanvas-Edit MathCanvas-Imagen Latex Code: documentclass[tikz,borde r=3.14mm]{standalone} draw[thick,->] (-2,0) ... Geometric Problems ImgCode8.6M Quality Filter Caption: The diagram shows an ellipse centered at the origin O, with its ... Deduplicate 4M diagramcaption pairs 5.4M diagramcaption pairs 612K diagramcaption pairs from public datasets Figure 2: The curation pipeline for the MathCanvas-Edit and MathCanvas-Imagen dataset. • Experiments show that our model trained under the MathCanvas framework achieves a 86% relative improvement over strong LMM baselines on MathCanvas-Bench, demonstrating the effectiveness of our approach in unlocking intrinsic VCoT capabilities. 2 Related Work Mathematical Reasoning with Large Multimodal Models. The remarkable success of textonly LLMs in mathematical reasoning, often driven by sophisticated chain-of-thought prompting (Wei et al., 2023; Yang et al., 2024; Yue et al., 2023; Wang et al., 2023; Shao et al., 2024b), has naturally spurred interest in extending these capabilities to the multimodal domain. Initial efforts in this area have largely involved adapting LMMs by enhancing vision-text alignment on domain-specific data and then fine-tuning on mathematical questionanswer pairs (Gao et al., 2025a; Wang et al., 2025a; Zhuang et al., 2024; Zhang et al., 2024b; Guo et al., 2025). While subsequent work has advanced the state of the art with techniques like reinforcement learning (Yang et al., 2025; Wang et al., 2025b; Duan et al., 2025; Wei et al., 2025), these models remain fundamentally text-centric. While they effectively interpret visual information in the input, they largely neglect vision as an active, generative component of the reasoning process itself. Visual Chain-of-Thought. Unlike various textual chain-of-thought (Wei et al., 2023; Fang et al., 2025a,b), visual chain-of-thought aims to bridge this gap by integrating the generation of visual aids directly into the reasoning process. Existing approaches follow two main lines. The first leverages external tools, such as vision models to extract image details (Shao et al., 2024a; Chen et al., 2025; Hu et al., 2024; OpenAI, 2025b; Gao et al., 2025b) or code interpreters to add auxiliary structures (Hu et al., 2024; Wang et al., 2025c,d). This approach, however, is constrained, as these tools are either non-generative or lack general applicability due to rigidity. The second line explores intrinsic VCoT, where models natively generate visual thoughts as an integral part of their reasoning (Cheng et al., 2025; Li et al., 2025b,c; Chern et al., 2025; Li et al., 2025a). Despite its promise, this approach has so far been demonstrated primarily in simpler domains like spatial games and struggles to produce the precise, logically consistent diagrams required for complex mathematical reasoning. Datasets and Benchmarks for Multimodal Mathematical Reasoning. The progress in visualmathematical reasoning is largely driven by the evolution of its benchmarks. While foundational datasets like Geometry3K (Lu et al., 2021) and ScienceQA (Lu et al., 2022) established the task, recent challenging benchmarks such as MMMU (Yue et al., 2024), MathVista (Lu et al., 2024), Mathvision (Wang et al., 2024), and MathVerse (Zhang et al., 2024a), among others (Qiao et al., 2024; Wang et al., 2025e; Sun et al., 2024), have pushed the limits of LMMs' visual reasoning. However, a fundamental limitation persists: these benchmarks consist of static problem-solution pairs and lack the step-by-step visual demonstrations required to train models for dynamic, process-oriented reasoning. This is precisely the gap our work addresses with the introduction of MathCanvas-Instruct and the MathCanvas-Bench benchmark. 3 3 Method In this section, we detail the methodology behind MathCanvas. We first describe the construction of our large-scale training corpora for visual manipulation and strategic reasoning (3.1). We then introduce MathCanvas-Bench, a dedicated benchmark for rigorous evaluation (3.2). Finally, we present our two-stage training recipe that leverages these resources to instill intrinsic VCoT capabilities in a unified LMM (3.3). 3.1 Training Corpora Construction 3.1.1 Million-scale Pretraining Corpus To endow unified LMMs with the foundational visual synthesis and editing capabilities required for mathematical reasoning, we construct a comprehensive million-scale pretraining corpus comprising two complementary components: MathCanvasEdit for diagram editing and MathCanvas-Imagen for diagram generation. The overall construction pipeline is shown in Figure 2. MathCanvas-Edit is designed to teach models how to iteratively modify mathematical diagrams through step-by-step transformations. We construct this dataset through a hybrid pipeline that combines complex competition-level geometry problems with systematically generated simple geometric figures, yielding a total of 5.2M edit trajectories. Competition-Level Mining. We start with 128 geometry problems from mathematical competitions to serve as realistic seed configurations. Using these seeds, we employ the AlphaGeometry LLM (Trinh et al., 2024) with beam search to generate numerous auxiliary line drawing methods for each problem. We then filter for geometrically invalid constructions and render the corresponding diagram sequences, where each step is an edit operation (e.g., adding an auxiliary line, marking an angle). This iterative process yields 4.2M edit trajectories capturing the complexity of competitionlevel reasoning. To ensure visual diversity from this limited set of seeds, the rendering of each trajectory is controlled by a unique random seed, varying visual attributes like orientation and line styles. Foundational Structure Generation. While competition problems provide realism, they tend toward complexity that may not adequately cover fundamental editing operations. To address this, we construct a complementary set of simple geometric figures using AlphaGeometry's formal language. We first define a basic geometric primitive set (e.g., points, lines, circles) and a geometric relation set (e.g., circumcenter, incenter, parallel), the full details of which are provided in Appendix B.1. Then we develop an automated algorithm that randomly and incrementally adds geometric primitives and relations to these basic structures, creating progressively more complex diagrams. Invalid or degenerate configurations are filtered out through geometric constraint checking. By leveraging different random seeds during rendering, we obtain 1M additional edit trajectories that provide systematic coverage of fundamental geometric operations after three iterations of this synthetic generation process. MathCanvas-Imagen focuses on teaching models to generate mathematical diagrams from textual descriptions. We construct it by aggregating and processing data from three complementary sources, resulting in 10M caption-to-diagram pairs. Re-purposing from MathCanvas-Edit. We first leverage the edit trajectories in MathCanvas-Edit, extracting caption-to-diagram pairs from each editing step. After deduplication based on visual and textual similarity, we obtain 5.4M diverse captionto-diagram pairs that inherently align with the types of diagrams needed for mathematical reasoning. Augmenting with Code-derived Captions. To further scale our dataset, we utilize the ImgCode8.6M (Wang et al., 2025a) dataset, which contains programmatically generated mathematical diagrams paired with source code. We first apply quality filtering to remove corrupted or low-quality images. We then employ GPT-4.1-mini to generate natural language captions by taking image-code pairs as input, producing descriptions that capture both the visual content and mathematical semantics of each diagram. This process yields 4M highquality caption-to-diagram pairs with rich, descriptive captions with diverse mathematical diagrams. Incorporating Public Datasets. Finally, we incorporate 612K caption-to-diagram pairs from existing public resources, including MAVIS (Zhang et al., 2024b) and TR-CoT (Deng et al., 2025b), which provide additional diversity in caption styles and diagram types, complementing our dataset. Through this comprehensive construction process, the pretraining corpus provides a robust foundation for pretraining models on both diagram generation and editing, establishing the essential visual capabilities needed for intrinsic VCoT in mathematical reasoning. 3.1.2 MathCanvas-Instruct To equip models with the ability to strategically interleave visual synthesis and editing actions 4 Nums. Image number 0 1 > 1 1167 1887 2679 25 400 Question Solution Analytic Geometry Algebra Solid Geometry Transformational Geometry Plane Geometry Calculus & Vector Statistics Trigonometry Knowledge distribution Question Solution Frequency Text length Figure 3: Statistical analysis of the MathCanvas-Bench test set. Left: Knowledge types distribution. Middle: Distribution of questions and solutions containing varying numbers of images. Right: Text length distribution of questions and solutions (measured in text tokens). with their textual reasoning process, we introduce MathCanvas-Instruct, the first large-scale dataset specifically designed for interleaved visual-textual mathematical reasoning. Dataset Construction We begin by gathering 632K multimodal mathematics problems and solutions from a wide array of middle school and high school textbooks, exams, and websites. From this initial pool, we implement a rigorous multistage filtering pipeline to ensure data quality and relevance. First, we employ GPT-5 to analyze the problems, filtering out examples where the provided images served no role in the reasoning process. This step also standardized all mathematical formulas into LaTeX format, resulting in a refined set of 367K problems. A second round of filtering, also powered by GPT-5, removes problems that contained errors, lacked explicit answers, featured low-quality or unclear images, or consisted solely of drawing tasks. This left us with 303K high-quality problems. To ensure the novelty and diversity of the dataset, we then perform both text and image deduplication, which yielded 222K unique problem-solution pairs. The images in the remaining dataset underwent a quality enhancement step using a superresolution model, SwinIR (Liang et al., 2021), to improve clarity and detail before being resized to a uniform 512x512 resolution. Finally, GPT-4.1 is used to classify all problems into a hierarchical taxonomy of 8 major categories and fine-grained subcategories. This collection is then partitioned to form our evaluation benchmark, MathCanvasBench, with the remaining 219K examples constituting the MathCanvas-Instruct training set. Further statistics and examples for MathCanvas-Instruct are presented in Appendix B.2. 3.2 The MathCanvas-Bench Evaluation Benchmark Benchmark Construction We construct MathCanvas-Bench by sampling 3K problems from the 222K-pair collection described in Section 3.1.2. The construction process involves three key steps. First, we exclude all multiple-choice questions to ensure that evaluation relies on generative reasoning rather than random guessing. Second, to create a balanced test set, we perform weighted sampling across problem categories, setting the sampling weight for each category to the 0.7 exponential power of its proportion. This strategy increases the representation of less common problem types. Finally, to prevent data leakage, we remove any question from the remaining 219K training set that has a 5-gram Jaccard similarity score higher than 0.4 with any problem in MathCanvas-Bench. This process helps ensure a fair evaluation of model generalization. Further statistics on the final benchmark are shown in Figure 3. Evaluation Protocol Our evaluation protocol relies on GPT-4.1 to ensure consistent and scalable assessment. For each problem, GPT-4.1 is tasked with extracting the final answers for every subquestion from the model's output and comparing them against the ground-truth answers. The specific prompt templates used for this process are detailed in Appendix C. We employ two distinct metrics to score performance: Complete Accuracy: A binary score is awarded. A model receives 1 point only if the answers to all sub-questions are correct, and 0 otherwise. This metric evaluates the model's ability to solve a problem completely. Weighted Scoring: To provide a more granular evaluation of partial progress, this metric as5 Unified LLM Gen. Expert Und. Expert Gen. Encoder Editing:Construct G on line CE and such that GB is perpendicular to AB. T2I:A quadrilateral pyramid with a parallelogram ABCD as its base ... ... Editing T2I Rectified-Flow Loss Final solution: ∵ ∠BOC = 90°, ∴ OD = 1⁄2BC = 1. ∵ △ABC is equilateral, ∴ AD = √3. In △OAD: OA token), the model must predict whether to generate the token to initiate a drawing, or the token to conclude the response. To inform how the model draws and understands, we process input and output images differently. All images provided in the question are encoded into clean VAE and ViT tokens, serving as visual context. For images within the solution, which the model must generate, we additionally include noised VAE tokens to compute the Rectified-Flow Loss. Unlike Stage I, all model components are unfrozen and trained jointly (Figure 4, Stage II). To enhance generation quality, we also leverage the architecture's inherent dual Classifier-Free Guidance mechanism during inference. This orchestration stage culminates in a model that can autonomously 6 Model Size Think Overall Algebra Analytic Geom. Calc & Vector Plane Geom. Solid Geom. Stats. Transf. Geom. Trig. Complete Weighted Closed-source (unified) LMMs Gemini-2.5-Pro - ✓ 47.9 58.2 68.0 59.2 60.2 54.8 48.7 64.5 58.5 69.9 Gemini-2.5-Flash - ✓ 39.3 49.5 63.2 56.5 54.6 40.7 40.7 61.1 46.8 64.6 Gemini-2.0-Flash - ✗ 21.2 32.6 39.1 32.6 38.9 31.1 25.6 51.4 28.1 38.0 GPT-4.1 - ✗ 19.0 30.0 40.4 30.7 37.1 24.1 25.1 54.0 21.5 42.5 GPT-4.1-mini - ✗ 14.6 26.3 35.7 30.5 36.5 22.0 22.4 24.8 19.7 30.3 GPT-4o - ✗ 9.9 19.4 21.6 17.7 21.8 19.5 18.6 17.4 13.2 23.0 GPT-5 - ✓ 43.5 51.4 68.7 55.5 64.2 45.6 36.1 64.5 42.7 66.5 Claude-Sonnet-4 - ✓ 25.0 37.8 44.8 38.9 49.3 33.8 33.0 46.9 30.3 47.6 Seed-1.6-Thinking - ✓ 44.1 55.2 67.7 57.5 55.9 52.2 45.0 65.1 56.8 60.7 Qwen3-VL-Plus - ✓ 40.9 51.5 67.0 54.6 56.9 45.9 42.0 66.7 49.3 58.9 Nano-Banana - ✗ 33.2 43.7 55.4 50.2 51.8 34.5 36.6 56.7 39.4 60.4 Open-source (unified) LMMs Qwen-2.5-VL-7B 7B ✗ 8.9 18.7 19.5 19.0 19.2 20.6 18.7 10.7 13.9 15.0 Qwen-2.5-VL-32B 32B ✗ 15.4 27.6 29.8 27.4 27.8 27.4 27.2 27.9 20.1 30.5 Qwen-2.5-VL-72B 72B ✗ 21.1 32.8 30.6 19.5 36.4 34.5 33.5 23.9 33.6 48.9 Gemma-3-27b-it 27B ✗ 15.8 26.6 31.3 28.4 34.4 25.8 21.0 40.0 21.0 26.9 InternVL3.5-8B 8B ✗ 16.7 26.4 32.3 33.8 33.8 24.2 26.9 43.7 16.2 14.9 InternVL3.5-30B-A3B 30B ✗ 11.7 22.2 22.2 19.9 15.1 24.9 24.3 22.1 17.4 18.4 Keye-VL-1.5-8B 8B ✓ 17.1 27.0 33.1 28.0 26.2 27.0 23.6 29.5 20.9 26.3 BAGEL 7B ✗ 8.3 18.5 18.1 13.1 17.1 20.8 23.0 10.9 19.4 13.3 BAGEL-Zebra-CoT 7B ✗ 8.0 16.6 18.0 15.1 15.6 18.0 16.8 20.8 11.1 14.1 BAGEL-Canvas 7B ✗ 21.9 34.4 29.9 27.2 17.9 40.0 35.3 23.2 29.3 40.4 ∆Over Base Model +13.6 +15.9 +11.8 +14.1 +0.8 +19.2 +12.3 +12.3 +9.9 +27.1 Table 1: Comparison of model performances across all mathematical subjects. The best closed-source and open-source highest accuracy of LMMs are marked in red and blue, respectively. generate diagrams as intermediate steps to solve complex problems. Detailed training hyperparameters are provided in Appendix A. 4 Experiments We compare BAGEL-Canvas against 20 prominent LMMs, including top-performing proprietary models such as the Gemini series (2.5-Pro, 2.5-Flash, Nano-Banana, 2.0-Flash) (Comanici et al., 2025), the GPT series (GPT-5, GPT-4.1, GPT-4.1-mini, GPT-4o) (OpenAI, 2025a; OpenAI et al., 2024c,a), Claude-Sonnet-4 (Anthropic, 2025), other strong multimodal models like Seed-1.6-Thinking (Seed et al., 2025) and Qwen3-VL-Plus (Bai et al., 2025), and powerful open-source models, including the Qwen-2.5-VL series (7B, 32B, 72B) (Bai et al., 2025), Gemma-3-27b-it(Team et al., 2025) , InternVL3.5 (8B, 30B) (Wang et al., 2025b), and Keye-VL-1.5-8B (Yang et al., 2025). We also include our base model, BAGEL (Deng et al., 2025a), and a variant, BAGEL-Zebra-CoT (Li et al., 2025a), to precisely measure the gains from our framework. All LMM evaluations are conducted using VLMEvalKit (Duan et al., 2024) to ensure a fair comparison. The comprehensive results are shown in Table 1. 4.1 Benchmark Results As presented in Table 1, BAGEL-Canvas achieves a weighted score of 34.4% on our benchmark, establishing it as the top-performing open-source model. It surpasses all open-source competitors, including significantly larger models like Qwen-2.5VL-72B (32.8) and InternVL3.5-30B-A3B (22.2). This result represents a substantial +15.9 point improvement over its base model, BAGEL, demonstrating the profound effectiveness of our training paradigm in unlocking advanced reasoning capabilities. Furthermore, BAGEL-Canvas proves to be highly competitive with proprietary systems, outperforming several prominent models such as Gemini-2.0-Flash (32.6) and GPT-4.1 (30.0). An analysis of performance across mathematical domains reveals that BAGEL-Canvas exhibits the most significant gains in geometry-heavy subjects: Trigonometry (+27.1), Plane Geometry (+19.2), and Solid Geometry (+12.3). This result strongly supports our hypothesis that visual reasoning is particularly beneficial for geometric problem-solving. The model also shows substantial improvements in Analytic Geometry (+14.1) and Algebra (+11.8), suggesting that the ability to visualize functions and coordinate systems enhances reasoning in broader mathematical contexts. The modest gain in Calculus & Vector (+0.8) indicates that this domain may require specialized reasoning capabilities be7 Model MathVista MathVerse MathVision (GPS) (Text Dominant) (Text Lite) (test) AnaG Angle Area Len SolG Alg Others BAGEL 68.8 49.2 42.0 24.1 26.2 31.8 25.0 28.7 22.1 17.1 23.1 BAGEL-Canvas 79.3 65.4 59.9 32.9 48.8 49.1 35.2 37.9 31.2 30.1 27.9 ∆ +10.5 +16.2 +17.9 +8.8 +22.6 +17.3 +10.2 +9.2 +9.1 +13.0 +4.8 Table 2: Generalization performance of BAGEL-Canvas compared to its base model (BAGEL) on three multimodal math benchmarks. ∆indicates the absolute improvement. MathVision subject abbreviations: AnaG (Analytic Geometry), SolG (Solid Geometry), Alg (Algebra), Angle (Metric Geometry - Angle), Area (Metric Geometry - Area), and Len (Metric Geometry - Length). Model Overall Complete Weighted BAGEL-Canvas 21.9 34.4 w/o MathCanvas-Edit 19.8 32.0 w/o MathCanvas-Imagen 18.2 30.8 Table 3: Ablation study on the pre-training corpora. We report the performance drop after removing the editing data (w/o MathCanvas-Edit) and the entire pre-training data (w/o MathCanvas-Imagen). Model Overall Complete Weighted BAGEL-Canvas 21.9 34.4 - (Skip Image) 19.7 31.9 BAGEL-Canvas-Text 18.7 30.9 Table 4: Ablation study on the visual modality. BAGELCanvas-Text is a variant fine-tuned without any visual data. (- Skip Image) denotes the full model being constrained to text-only reasoning during inference. yond the scope of our current visual augmentation techniques. 4.2 Performance on Other Math Benchmarks To assess the generalization capabilities of BAGELCanvas, we evaluate it on three established public benchmarks: the GPS category from MathVista's testmini set (Lu et al., 2024), the full test set of MathVision (Wang et al., 2024), and the Text Dominant/Lite subsets from MathVerse's testmini (Zhang et al., 2024a). As detailed in Table 2, BAGEL-Canvas demonstrates substantial and consistent improvements over its base model, BAGEL, across all benchmarks, with particularly strong gains on MathVerse (+17.9) and MathVista (+10.5). The detailed breakdown on MathVision further reveals significant improvements in subjects that benefit from visual intuition, such as Analytic Geometry (+22.6), Algebra (+13.0), and various plane geometry tasks (Angle: +17.3). Crucially, since these benchmarks require text-only solutions, this strong performance validates that our training paradigm fundamentally enhances the model's intrinsic reasoning abilities, allowing it to generalize effectively to traditional problem-solving formats. 4.3 Ablation Studies We conduct a series of ablation studies to dissect the contributions of the key components within our framework: the pretraining corpus and the role of the visual modality in the final reasoning stage. Effectiveness of the Pre-training Corpus. We investigate the impact of our two-stage pre-training strategy by ablating the MathCanvas-Edit and MathCanvas-Imagen corpora. As shown in Table 3, removing the MathCanvas-Edit data (w/o MathCanvas-Edit) results in a 2.4-point drop in the weighted score. This highlights the importance of learning step-by-step diagram editing, a critical skill for solving complex problems that require constructing auxiliary elements. A further ablation, removing the entire pre-training stage (w/o MathCanvas-Imagen), leads to an additional 1.2point performance decrease. This confirms that even foundational diagram generation capabilities provide a vital scaffold for the fine-tuning phase. Together, these results validate our two-stage pretraining approach, demonstrating that both generation and editing skills are essential for achieving optimal performance. Importance of Visual Modality in Reasoning. We analyze the importance of the visual modality through two ablations. First, we fine-tune a variant, BAGEL-Canvas-Text, using only the textual reasoning paths from MathCanvas-Instruct. Second, we constrain the full BAGEL-Canvas model to bypass visual generation during inference (- Skip Image). As shown in Table 4, both scenarios result in a significant performance drop. The BAGEL-Canvas-Text variant's weighted score falls by 3.5 points, confirming that training on interleaved visual-textual data is essential for learning complex reasoning. Interestingly, the model that simply skips image generation at inference (- Skip 8 Image) performs 1.0 point better than BAGELCanvas-Text, despite both producing text-only solutions. This suggests that our interleaved training paradigm not only teaches the model how to leverage visual aids but also fundamentally enhances its underlying textual reasoning capabilities. 5 Conclusion We introduced MathCanvas, a comprehensive framework to endow Large Multimodal Models with intrinsic Visual Chain-of-Thought capabilities for mathematical reasoning. By leveraging our newly created large-scale datasets (MathCanvasEdit, MathCanvas-Imagen, and MathCanvasInstruct) in a two-stage training recipe, we taught our model, BAGEL-Canvas, to master diagram manipulation and strategically interleave it with textual deduction. This approach yielded an 86% relative improvement over strong baselines on our MathCanvas-Bench benchmark. Crucially, this training paradigm not only teaches the model when and how to draw, but also fundamentally enhances its core textual reasoning. Our work provides a robust foundation for future research into broader and more complex multimodal reasoning. References Anthropic. 2025. System card: Claude opus 4 & claude sonnet 4. Technical report, Anthropic. Accessed 2025-10-07. Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, and 8 others. 2025. Qwen2.5-vl technical report. arXiv preprint . Xinyan Chen, Renrui Zhang, Dongzhi Jiang, Aojun Zhou, Shilin Yan, Weifeng Lin, and Hongsheng Li. 2025. Mint-cot: Enabling interleaved visual tokens in mathematical chain-of-thought reasoning. arXiv preprint . Zihui Cheng, Qiguang Chen, Xiao Xu, Jiaqi Wang, Weiyun Wang, Hao Fei, Yidong Wang, Alex Jinpeng Wang, Zhi Chen, Wanxiang Che, and Libo Qin. 2025. Visual thoughts: A unified perspective of understanding multimodal chain-of-thought. Preprint, . Ethan Chern, Zhulin Hu, Steffi Chern, Siqi Kou, Jiadi Su, Yan Ma, Zhijie Deng, and Pengfei Liu. 2025. Thinking with generated images. arXiv preprint . Gheorghe Comanici, Eric Bieber, Mike Schaekermann, Ice Pasupat, Noveen Sachdeva, Inderjit Dhillon, Marcel Blistein, Ori Ram, Dan Zhang, Evan Rosen, Luke Marris, Sam Petulla, Colin Gaffney, Asaf Aharoni, Nathan Lintz, Tiago Cardal Pais, Henrik Jacobsson, Idan Szpektor, Nan-Jiang Jiang, and 3290 others. 2025. Gemini 2.5: Pushing the frontier with advanced reasoning, multimodality, long context, and next generation agentic capabilities. Preprint, . DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. Preprint, . Chaorui Deng, Deyao Zhu, Kunchang Li, Chenhui Gou, Feng Li, Zeyu Wang, Shu Zhong, Weihao Yu, Xiaonan Nie, Ziang Song, Guang Shi, and Haoqi Fan. 2025a. Emerging properties in unified multimodal pretraining. Preprint, . Linger Deng, Linghao Zhu, Yuliang Liu, Yu Wang, Qunyi Xie, Jingjing Wu, Gang Zhang, Yingying Zhu, and Xiang Bai. 2025b. Theorem-validated reverse chain-of-thought problem generation for geometric reasoning. Preprint, . Chengqi Duan, Rongyao Fang, Yuqing Wang, Kun Wang, Linjiang Huang, Xingyu Zeng, Hongsheng Li, and Xihui Liu. 2025. Got-r1: Unleashing reasoning capability of mllm for visual generation with reinforcement learning. arXiv preprint . Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, and 1 others. 2024. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM international conference on multimedia, pages 11198-11201. Rongyao Fang, Chengqi Duan, Kun Wang, Linjiang Huang, Hao Li, Shilin Yan, Hao Tian, Xingyu Zeng, Rui Zhao, Jifeng Dai, and 1 others. 2025a. Got: Unleashing reasoning capability of multimodal large language model for visual generation and editing. arXiv preprint . Rongyao Fang, Aldrich Yu, Chengqi Duan, Linjiang Huang, Shuai Bai, Yuxuan Cai, Kun Wang, Si Liu, Xihui Liu, and Hongsheng Li. 2025b. Flux-reason-6m & prism-bench: A million-scale text-to-image reasoning dataset and comprehensive benchmark. arXiv preprint . Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han, Hang Xu, Zhenguo Li, and Lingpeng Kong. 2025a. G-llava: Solving geometric problem 9 with multi-modal large language model. Preprint, . Jun Gao, Yongqi Li, Ziqiang Cao, and Wenjie Li. 2025b. Interleaved-modal chain-of-thought. Preprint, . Jarvis Guo, Tuney Zheng, Yuelin Bai, Bo Li, Yubo Wang, King Zhu, Yizhi Li, Graham Neubig, Wenhu Chen, and Xiang Yue. 2025. Mammoth-vl: Eliciting multimodal reasoning with instruction tuning at scale. Preprint, . Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. 2024. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. Preprint, . Ang Li, Charles Wang, Kaiyu Yue, Zikui Cai, Ollie Liu, Deqing Fu, Peng Guo, Wang Bill Zhu, Vatsal Sharan, Robin Jia, and 1 others. 2025a. Zebra-cot: A dataset for interleaved vision language reasoning. arXiv preprint . Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vuli ́c, and Furu Wei. 2025b. Imagine while reasoning in space: Multimodal visualization-of-thought. arXiv preprint . Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vuli ́c, and Furu Wei. 2025c. Imagine while reasoning in space: Multimodal visualization-of-thought. Preprint, . Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. 2021. Swinir: Image restoration using swin transformer. Preprint, . Xingchao Liu, Chengyue Gong, and Qiang Liu. 2022. Flow straight and fast: Learning to generate and transfer data with rectified flow. Preprint, . Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. Preprint, . Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, KaiWei Chang, Michel Galley, and Jianfeng Gao. 2024. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. In International Conference on Learning Representations (ICLR). Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. 2021. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. Preprint, . Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS). OpenAI, :, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander M ̨adry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024a. Gpt4o system card. Preprint, . OpenAI, :, Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, Alex Iftimie, Alex Karpenko, Alex Tachard Passos, Alexander Neitz, Alexander Prokofiev, Alexander Wei, Allison Tam, and 244 others. 2024b. Openai o1 system card. Preprint, . OpenAI. 2025a. GPT-5 System Card. Technical report, OpenAI. Accessed on [YYYY-MM-DD]. OpenAI. 2025b. OpenAI o3 and o4-mini System Card. Technical report, OpenAI. Accessed on [YYYYMM-DD]. OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, and 262 others. 2024c. Gpt-4 technical report. Preprint, . Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue, Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao, Zhimin Bao, Chen Li, and Honggang Zhang. 2024. We-math: Does your large multimodal model achieve human-like mathematical reasoning? Preprint, . ByteDance Seed, :, Jiaze Chen, Tiantian Fan, Xin Liu, Lingjun Liu, Zhiqi Lin, Mingxuan Wang, Chengyi Wang, Xiangpeng Wei, Wenyuan Xu, Yufeng Yuan, Yu Yue, Lin Yan, Qiying Yu, Xiaochen Zuo, Chi Zhang, Ruofei Zhu, Zhecheng An, and 255 others. 2025. Seed1.5-thinking: Advancing superb reasoning models with reinforcement learning. Preprint, . Hao Shao, Shengju Qian, Han Xiao, Guanglu Song, Zhuofan Zong, Letian Wang, Yu Liu, and Hongsheng Li. 2024a. Visual cot: Advancing multimodal language models with a comprehensive dataset and benchmark for chain-of-thought reasoning. Advances in Neural Information Processing Systems, 37:8612-8642. 10 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024b. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. Preprint, . Kai Sun, Yushi Bai, Ji Qi, Lei Hou, and Juanzi Li. 2024. Mm-math: Advancing multimodal math evaluation with process evaluation and fine-grained classification. Preprint, . Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report. Preprint, . Trieu Trinh, Yuhuai Wu, Quoc Le, He He, and Thang Luong. 2024. Solving olympiad geometry without human demonstrations. Nature. Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. 2024. Measuring multimodal mathematical reasoning with math-vision dataset. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Ke Wang, Junting Pan, Linda Wei, Aojun Zhou, Weikang Shi, Zimu Lu, Han Xiao, Yunqiao Yang, Houxing Ren, Mingjie Zhan, and Hongsheng Li. 2025a. Mathcoder-vl: Bridging vision and code for enhanced multimodal mathematical reasoning. Preprint, . Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun Luo, Weikang Shi, Renrui Zhang, Linqi Song, Mingjie Zhan, and Hongsheng Li. 2023. Mathcoder: Seamless code integration in llms for enhanced mathematical reasoning. Preprint, . Weiyun Wang, Zhangwei Gao, Lixin Gu, Hengjun Pu, Long Cui, Xingguang Wei, Zhaoyang Liu, Linglin Jing, Shenglong Ye, Jie Shao, Zhaokai Wang, Zhe Chen, Hongjie Zhang, Ganlin Yang, Haomin Wang, Qi Wei, Jinhui Yin, Wenhao Li, Erfei Cui, and 56 others. 2025b. Internvl3.5: Advancing open-source multimodal models in versatility, reasoning, and efficiency. Preprint, . Yikun Wang, Siyin Wang, Qinyuan Cheng, Zhaoye Fei, Liang Ding, Qipeng Guo, Dacheng Tao, and Xipeng Qiu. 2025c. Visuothink: Empowering lvlm reasoning with multimodal tree search. Preprint, . Yikun Wang, Yibin Wang, Dianyi Wang, Zimian Peng, Qipeng Guo, Dacheng Tao, and Jiaqi Wang. 2025d. Geometryzero: Improving geometry solving for llm with group contrastive policy optimization. Preprint, . Zhikai Wang, Jiashuo Sun, Wenqi Zhang, Zhiqiang Hu, Xin Li, Fan Wang, and Deli Zhao. 2025e. Benchmarking multimodal mathematical reasoning with explicit visual dependency. Preprint, . Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Preprint, . Lai Wei, Yuting Li, Kaipeng Zheng, Chen Wang, Yue Wang, Linghe Kong, Lichao Sun, and Weiran Huang. 2025. Advancing multimodal reasoning via reinforcement learning with cold start. Preprint, . An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. 2024. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. Preprint, . Biao Yang, Bin Wen, Boyang Ding, Changyi Liu, Chenglong Chu, Chengru Song, Chongling Rao, Chuan Yi, Da Li, Dunju Zang, Fan Yang, Guorui Zhou, Guowang Zhang, Han Shen, Hao Peng, Haojie Ding, Hao Wang, Haonan Fan, Hengrui Ju, and 42 others. 2025. Kwai keye-vl 1.5 technical report. Preprint, . Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, and 3 others. 2024. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of CVPR. Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023. Mammoth: Building math generalist models through hybrid instruction tuning. Preprint, . Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, and 1 others. 2024a. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? arXiv preprint . Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Ziyu Guo, Shicheng Li, Yichi Zhang, Chengzhuo Tong, Jiaming Liu, Aojun Zhou, Bin Wei, Shanghang Zhang, Peng Gao, Chunyuan Li, and Hongsheng Li. 2024b. Mavis: Mathematical visual instruction tuning with an automatic data engine. Preprint, . Wenwen Zhuang, Xin Huang, Xiantao Zhang, and Jin Zeng. 2024. Math-puma: Progressive upward multimodal alignment to enhance mathematical reasoning. Preprint, . 11 A Training Details We implement our framework on top of the publicly available BAGEL-7B-MoT (Deng et al., 2025a) model. All training experiments were conducted on a cluster of 16 NVIDIA H800 GPUs. We use the AdamW (Loshchilov and Hutter, 2019) optimizer for both training stages. The detailed hyperparameters for our two-stage training recipe, corresponding to Stage I (Visual Manipulation) and Stage II (Strategic Visual-Aided Reasoning), are provided in Table 5. Stage I In this stage, the primary objective is to train the model's visual generation capabilities. As described in Section 3.3, we freeze the entire understanding expert and only train the generation expert. The loss is solely based on the RectifiedFlow objective (Liu et al., 2022) for diagram generation, hence the absence of a Cross-Entropy loss component. We employed a slightly higher ViT condition dropout rate (0.3) to regularize the model and prevent overfitting to the visual features of the pretrainig data. Stage II In the second stage, all model components are unfrozen to enable joint optimization. The model is trained on a combined loss function: a Cross-Entropy loss for predicting the next token (either text or the special and token), weighted by 0.25, and the Rectified-Flow loss for generating diagrams, weighted by 1.0. The learning rate is halved, and the number of training steps is reduced, which is typical for fine-tuning tasks. The ViT condition dropout is lowered to 0.1 to better leverage visual context during strategic reasoning. B Dataset Details B.1 MathCanvas-Edit and MathCanvas-Imagen Details on Foundational Structure Generation. As described in Section 3.1, the Foundational Structure Generation pipeline for the MathCanvas-Edit dataset relies on an automated algorithm that randomly and incrementally adds geometric primitives and relations. This section specifies the exact sets used in this process. Geometric Primitive Set. The generation process initiates by selecting one of 18 basic geometric objects from the following set: • segment, angle • Triangles: triangle, iso_triangle (isosceles), r_triangle (right), triangle_ab, ieq_ triangle (equilateral), risos (right isosceles) • Quadrangles: rectangle, isquare, trapezoid, r_trapezoid (right), eq_trapezoid (isosceles), quadrangle, eq_quadrangle (equilateral), eqdia_quadrangle (equal-diagonal) • Polygons: pentagon, eq_pentagon (equilateral) Geometric Relation Set. Subsequently, the algorithm iteratively applies relations from a predefined set of 41 constructions. These are categorized by the number of new points they introduce (typically one or two). • 1-Point Relations (37): angle_bisector, angle_mirror, circle, circumcenter, eq_triangle, eqangle2, eqdistance, foot, incenter, excenter, intersection_cc, on_bline, intersection_lc, on_aline, intersection_ll, on_line, intersection_ lp, intersection_lt, intersection_pp, intersection_tt, lc_tangent, midpoint, mirror, nsquare, on_bline, on_circle, on_pline, on_tline, on_dia, orthocenter, parallelogram, psquare, reflect, s_angle, shift, on_opline, eqangle3, on_circum • 2-Point Relations (4): square, trisegment, trisect, tangent The automated algorithm randomly samples from these sets to build progressively more complex diagrams, ensuring systematic coverage of fundamental geometric operations. Examples. An example from the MathCanvasEdit dataset is presented in Figure 6. Examples from the MathCanvas-Imagen dataset are shown in Figures 7 and 8. B.2 MathCanvas-Instruct Dataset Statistics. We present the knowledge point distribution of the MathCanvas-Instruct training set in Figure 5. Table 6 demonstrates the statistical characteristics of the MathCanvas-Instruct dataset, comprising 219K problems, of which 65% are multimodal and 35% are text-only. We have also analyzed the distribution of problem sources, the length of questions and solutions, and the number of images they contain. Examples. We showcase examples from the MathCanvas-Instruct dataset in Figures 9, 10, 11. C Benchmark Evaluation Details C.1 Weighted Scoring Weights The weights for our Weighted Scoring metric are calculated using an exponential growth factor of 12 Hyperparameter Stage I Stage II Optimizer & Scheduler Learning Rate (LR) 2 × 10-5 1 × 10-5 LR Scheduler Cosine Decay Cosine Decay Min Learning Rate 1 × 10-7 1 × 10-7 Warmup Steps 2,000 500 Total Training Steps 80,000 16,000 Model & Loss EMA Decay Rate 0.999 0.995 Rectified-Flow Timestep Shift 2.0 2.0 Cross-Entropy (CE) Loss Weight N/A 0.25 Rectified-Flow (MSE) Loss Weight 1.0 (Implicit) 1.0 Frozen Components Understanding Expert None Batching & Tokenization Max Tokens per Batch 46,080 51,200 Max Tokens per Sample 8,192 25,600 Regularization (Dropout) Text Condition Dropout 0.1 0.1 ViT Condition Dropout 0.3 0.1 VAE Condition Dropout 0.1 0.1 Table 5: Key hyperparameters for the two-stage training process. "N/A" indicates that the parameter was not applicable to that stage. Analytic Geometry Algebra Solid Geometry Plane Geometry Trigonometry Transformational Geometry Statistics Calculus & Vector Figure 5: Distribution of knowledge type of MathCanvas-Instruct dataset. 1.3, valuing later sub-questions more heavily. The specific formula for the weight wi of the i-th subquestion in a problem with N sub-questions is: wi = 1.3i-1 PN j=1 1.3j-1 Since our benchmark contains problems with a maximum of four sub-questions, we use the following pre-calculated, normalized weights for evaluation. The final score for a problem is the sum of the weights of the correctly answered sub-questions. • For 2 sub-questions: [0.4348, 0.5652] • For 3 sub-questions: [0.2506, 0.3258, 0.4236] • For 4 sub-questions: [0.1616, 0.2101, 0.2732, 0.3551] C.2 Evaluation Template Tables 7 and 8 display the prompt templates used for MathCanvas-Bench evaluation. D Additional Qualitative Results To further illustrate the limitations of even the most advanced LMMs when they lack intrinsic VCoT capabilities, we present qualitative examples of their performance on problems that benefit from visual manipulation. Figure 12 shows the solutions from Gemini-2.5-Pro and GPT-5 for the problem featured in Figure 1 of the main paper, demonstrating their reliance on complex and sometimes flawed algebraic approaches. We provide more qualitative results of BAGEL-Zebra-CoT, Nano-Banana, and our method in Figure 13. 13 Base caption:Let ABC be a right triangle with the right angle at vertex A. Image: Step 1: Construct points D and E , and AC is common. By AAS, triangle ACD≅ACE. Corresponding sides in congruent triangles are equal, so CD=CE. Thus, CD=CE. Figure 10: An example from MathCanvas-Instruct dataset. Question: Real numbers x,y satisfy the constraints: y >= 1, y = 1, y ... , ... ). • If tags are present: Extract the content of each tag into a separate list element. Example: " 5 cm 10 cm " becomes ["5 cm", "10 cm"]. • If no such tags are present: The entire, unmodified string must be treated as the single element of the list. Do not split the string by characters, words, commas, or any other pattern. Example 1: "ABC" must become ["ABC"], NOT ["A", "B", "C"]. Example 2: "x=5, y=10" must become ["x=5, y=10"], NOT ["x=5", "y=10"]. • The gt_answers list will never contain null elements and its length defines the number of sub-questions. 1.2 Extracting from prediction_solution: • Your primary task is to extract the final answer(s) from the prediction_solution text to create the pred_answers list. • IMPORTANT: The answers to different sub-questions may appear in different places within the prediction_solution, not necessarily grouped together at the end. You must treat this as a matching task. Table 7: The prompt template (part 1) used by GPT-4.1 for mathematical reasoning evaluation. 19 • For each part of the gt_answers list, you must scan the entire prediction_solution to find the corresponding predicted answer. Look for explicit labels (e.g., "(1)", "Part A"), final conclusions, boxed answers (e.g., ), or statements that directly answer a part of the original question. • The final pred_answers list must have the exact same length as the gt_answers list. • For each sub-question, if you cannot find a corresponding answer in the prediction_solution, you must use null as a placeholder in that position. This rule is critical and applies in all cases where an answer is missing, including when the prediction_solution appears incomplete or is truncated before all sub-questions are addressed. CRITICAL RULE: The final gt_answers and pred_answers lists must be of equal length. The number of parts in the ground_truth_answer dictates the required length for both lists. Step 2: Correctness Judgment Your second task is to compare the pred_answers list against the gt_answers list, element by element. Judgment Rules: • Numerical Equivalence: Treat numbers as correct if they are mathematically equivalent (e.g., 5, 5.0, 10/2). Allow for minor floating-point rounding differences. • Textual Equivalence: For text answers, judge based on semantic meaning, not exact matching. Ignore case, whitespace, and phrasing differences (e.g., "CB is parallel to PD" is equivalent to "Line CB || Line PD"). • Generate Correctness List: Create a boolean list named correctness. The i-th element is true if the i-th predicted answer is correct, false otherwise. This list must have the same length as the answer lists. Final JSON Output Structure: Your entire response must be a single, valid JSON object matching the schema below. Do not include any text outside of this JSON object. { "analysis": "A brief step-by-step explanation...", "gt_answers": [ "string", ... ], "pred_answers": [ "string or null", ... ], "correctness": [ true/false, ... ] } INPUT DATA: {input_data} Table 8: The prompt template (part 2) used by GPT-4.1 for mathematical reasoning evaluation. The text highlighted in cyan is replaced with the specific input data for each problem being evaluated. 20
2510.14956
ON WEIGHTED AND BOUNDED MULTIDIMENSIONAL CATALAN NUMBERS RYOTA INAGAKI AND DIMANA PRAMATAROVA Abstract. We define a weighted analog for the multidimensional Catalan numbers, obtain matrix-based recurrences for some of them, and give conditions under which they are peri- odic. Building on this framework, we introduce two new sequences of triangular arrays: the first one enumerates the k-dimensional Balanced ballot paths of exact height s; the second one is a new multidimensional generalization of the Narayana numbers, which count the number of Balanced ballot paths with exactly p peaks. 1. Introduction The sequence of the Catalan numbers Cn = 1 n + 1 2n n  is one of the most studied ones in the field of enumerative combinatorics, which is the branch of mathematics dedicated to counting discrete structures by deriving exact formulas, generating functions, or recursive relations. The Catalan numbers (sequence A000108 in the OEIS [7]) enumerate various objects such as the triangulations of a convex polygon with n + 2 sides, rooted binary trees with 2n nodes, along with hundreds of others [12]. More notably, they count the number of Dyck paths of length 2n, which are sequences of points in Z2 starting at (0, 0) and ending at (2n, 0), composed of n up-steps of (1, 1) and n down-steps of (1, −1), and the paths do not go below the x axis (see Figure 1 for an example). x y Figure 1. A Dyck path of 8 steps Now, we introduce weighted Catalan numbers, which first appeared in works such as those of Goulden and Jackson [5]. For fixed sequence of integers ⃗b = (b0, b1, b2, . . .) ∈ZN, which we call weight vector, and a Dyck path P of length 2n, the weight wt⃗b(P) of the Dyck path P is the product bh1bh2 · · · bhn, where hi is the height of the starting point of the i-th up-step Date: October 17, 2025. 2020 Mathematics Subject Classification. 05A15, 05A19, 11B50. Key words and phrases. Weighted Catalan Numbers, Multidimensional Catalan Numbers, Narayana Numbers, Periodicity. 1 arXiv:2510.14956v1 [math.CO] 16 Oct 2025 2 of P. The corresponding n-th weighted Catalan number for ⃗b is defined as C ⃗b n = X P wt⃗b(P), where the sum is over all Dyck paths of length 2n. Examples of weighted Dyck paths are displayed in Figures 2 and 3. x y b0 b1 b1 b0 Figure 2. A weighted Dyck path with wt⃗b(P) = b2 0b2 1 x y b3 0 x y b2 0b1 x y b2 0b1 x y b0b2 1 x y b0b1b2 Figure 3. For weight vector ⃗b = (b0, b1, b2), all 5 weighted Dyck paths of 6 steps with corresponding weights for weight vector ⃗b = (b0, b1, b2, . . . ). The third weighted Catalan number for this weight vector is C ⃗b 3 = b3 0 + 2b2 0b1 + b0b2 1 + b0b1b2. For particular weight vectors, the weighted Catalan numbers have many combinatorial interpretations. For example, when the weight vector is ⃗b = (1, q, q2, . . . ), the corresponding weighted Catalan number C ⃗b n is the q-Catalan numbers [3], which encode the distribution of areas under Dyck paths. Postnikov [8] proved that when the weight vector is set to be ⃗b = (12, 32, 52, . . . ), the weighted Catalan number C ⃗b n counts combinatorial types of Morse links of order n. Postnikov conjectured that C ⃗b n has a period of 2 · 3r−3 modulo 3r, meaning that 2 · 3r−3 is the smallest positive integer such that C ⃗b n+2·3r−3 −C ⃗b n is a multiple of 3r for large n. This was later proven by Gao and Gu [4] in 2021. Arithmetic properties of the weighted Catalan numbers have also been extensively studied. In 2006, Postnikov and Sagan [9] derived a condition under which the 2-adic valuation of the weighted Catalan numbers is equal to that of the corresponding unweighted ones. Later in the year, Konvalinka [6] proved an analogous result for the q-Catalan numbers. In 2010, An [1] proved a conjecture by Konvalinka and studied other divisibility properties using matrices. Later, in 2012, Shader [11] considered the periodicity modulo pr for prime p of certain weighted Catalan numbers. In 2021, Gao and Gu proved a condition for the periodicity of the weighted Catalan numbers modulo an integer [4, Theorem 4.2]. We build on the above results by extending the definition of weighted two-dimensional Catalan numbers to weighted multidimensional ones by considering height functions of sim- ilar behavior. The paper is organized as follows. In Section 2, we define Balanced ballot 3 paths, the generalization of Dyck paths to higher dimensions, and use them to present our generalization of weighted Dyck paths to higher dimensions. We discuss prior and ba- sic results on the 2-dimensional Catalan numbers and prove Gao and Gu’s Theorem [4] using a matrix-based approach; this inspires our techniques in deriving periodicity prop- erties in the k-dimensionalCatalann numbers. In Section 3, we discuss properties guaran- teeing periodicities of weighted k-dimensional Catalan numbers modulo m, where m is an integer. In Section 4.1, we use recurrence sequences of integers to find closed-forms for some cases of the multidimensional bounded and weighted Catalan numbers. Using the bounded multidimensional Catalan numbers, in Section 4.2 we construct new integer se- quences, the multidimensional triangles of Balanced ballot paths of height exactly , s and establish properties. In Section 4.3, we use our definition of height to define peaks and also consider the number of peaks in ballot paths to construct analogs of the Narayana num- bers. Code used to calculate the 3 a4-dimensionalnal Balanced-Ballot-Path-Height triangles and the 3 a4-dimensionalnal Narayana triangles can be found on the GitHub repository https://github.com/Ryota-IMath/Inagaki Pramatarova multidim height Catalan. 2. Definitions and Notations 2.1. Problem Setup. We begin by discussing variants of the Dyck path and their extensions to higher dimensions. Consider the east-north version [12], which is also known as the 2- dimensional ballot path with n east steps and n north steps. By scaling, rotating, and flipping the path, one sees that the definitions of Dyck paths and ballot paths ending at (n, n) are equivalent. Definition 2.1. A (2-dimensional) Balanced ballot path of 2n steps is a sequence of points in Z2, starting at (0, 0) and ending at (n, n), formed from n east-steps (1, 0) and n north-steps (0, 1), so that the path never goes above the diagonal y = x. We now extend this to higher dimensions. To the best of our knowledge, the generalization of the multidimensional weighted Catalan numbers has not been previously defined in the literature. A point in the k-dimensional lattice Zk is a k-tuple (x1, x2, . . . , xk), and steps are taken in the positive coordinate directions, typically along the standard basis vectors ⃗ei = (0, 0, . . . , 0, 1, 0, . . . , 0), in which the i-th coordinate is 1 and the others are 0. Definition 2.2. The k−dimensional Balanced ballot path of kn steps, denoted as Pk,n = v1, v2, v3, . . . , vkn, is a sequence of kn steps in Zk starting at (0, 0, . . . , 0) and ending at (n, n, . . . , n) satisfying the following conditions: • Each step vi −vi−1 is in the set of standard unit vectors {⃗e1,⃗e2, . . . ,⃗ek}. • Each point x = (x1, . . . , xk) in the path satisfies x1 ≥x2 ≥. . . ≥xk. We call an up-step any step in the direction of ⃗e1 = (1, 0, . . . , 0). Note that ballot paths do not require an equal number of steps in each direction; however, we consider the balanced case, which is essentially the same as multidimensional Dyck paths, but we use this term to distinguish them from the definition of Dyck paths presented in the introduction. Using these, we can define the k-dimensional Catalan number: Definition 2.3 ((A060854 in OEIS [7])). For n and k, the nth k-dimensional Catalan number is the number of k-dimensional Balanced ballot paths of kn steps. The n-th k-dimensional 4 Catalan number equals Ck,n = 0!1! · · · (n −1)! · (kn)! k!(k + 1)! · · · (k + n −1)!. Remark 2.4. One can observe from the above formula that Ck,n = Cn,k. Remark 2.5. The nth k-dimensional Catalan number is the number of Standard Young Tableaux k × n, as derived by the hook length formula [13]. We now extend the notion of bounded and weighted Catalan numbers to k-dimensional Catalan numbers. In order to define them, we define a height function as follows. Definition 2.6. For point x ∈Zk, we define height of x as hk(x) := x1 −x2 + x1 −x3 + . . . + x1 −xk = (k −1)x1 − k X i=2 xi. Given a k−dimensional Balanced ballot path P, define the height of the path P to be max{hk(x) : x ∈P}. This is a natural extension of the 2-dimensional Catalan numbers, as the height is the difference between the number of up-steps and down-steps, i.e., x1 −x2. The height function is the Manhattan distance [10] from point (x1, x2, . . . , xk) to (x1, x1, . . . , x1). Example 2.7. An example of a 3-dimensional Balanced ballot path is shown in Figure 4, where the black arrows correspond to ⃗e1, the gray arrows to ⃗e2 and the light gray ones to ⃗e3, with the values of the height at the points where it increases - at each ⃗e1 step. Figure 4. A 3-dimensional Balanced ballot path from (0, 0, 0) to (3, 3, 3) with the heights of each point along the path indicated. We use the formula h3(x) = x1 −x2 + x1 −x3 to calculate the heights. 5 Definition 2.8. For integers k ≥2 and s ≥0, we define the n-th k-dimensional s- bounded Catalan number, denoted by Ck,s,n, as the number of ballot paths P of kn steps starting at the origin (0, 0, . . . , 0) and ending at (n, n, . . . , n), satisfying the following condition: for any x = (x1, . . . , xk) in P, the height function hk(x) as in Definition 2.10 is less than or equal to s. A visualization of Balanced ballot paths for the bounded Catalan number is as follows. These are ballot paths from ⃗0 to (n, . . . , n) such that each node is between the hyperplanes x1 = x2 + · · · + xk k −1 and x1 = x2 + · · · + xk + s k −1 . We consider the weight function only on the positive contribution to the height function, which means that we focus only on the change of the x1-coordinate. Definition 2.9. Given a sequence of integers ⃗b = (b0, b1, b2, . . .) and a k-dimensional Bal- anced ballot path Pk of kn steps, the weight of path Pk with respects to ⃗b, denoted by wt⃗b(Pk), is the product bh1bh2 · · · bhn, where hi is the height (as in Definition 2.8) of the starting point of the i-th up-step of Pk. The corresponding n-th k-dimensional weighted Catalan number is C ⃗b k,n = X P wt⃗b(Pk), where the sum is over all k-dimensional ballot Paths Pk of kn steps. Definition 2.10. Let k be an integer at least 2 and s be a nonnegative integer. For fixed sequence of integers ⃗b = (b0, b1, . . .), the k-dimensional s-bounded weighted Catalan numbers C ⃗b k,s,n are defined analogously. Remark 2.11. The unweighted version Ck,s,n from Definition 2.8 corresponds to C ⃗b k,s,n for the weight vector ⃗b = (1, 1, . . . , 1, 0, 0, . . .), where the first s+k −1 entries are equal to 1 and the rest are zero. 2.2. Prior and Basic Results on Weighted 2-Catalan Numbers. To provide a founda- tion for examining periodicity, we first discuss basic results on the weighted (2-dimensional) Catalan numbers, which inspire our approach to studying multidimensional weighted Cata- lan numbers. Analogously to An [1] and Shader [11], we derive a tridiagonal matrix-based recurrence for the 2-dimensional weighted Catalan numbers. For the next preliminary result we denote by C ⃗b n,i the number of weighted Dyck paths from (0, i) to (2n, 0). (In particular, C ⃗b n,0 = C ⃗b n) Lemma 2.12. The 2-dimensional weighted Catalan numbers satisfy the following recurrence:   C ⃗b n,0 C ⃗b n,2 C ⃗b n,4 ...   =   b0 b0b1 0 . . . . . . . . . 1 b1 + b2 b2b3 0 . . . . . . 0 1 b3 + b4 b4b5 0 . . . ... ... ... ... ... . . .     C ⃗b n−1,0 C ⃗b n−1,2 C ⃗b n−1,4 ...   . Proof. We have C ⃗b n,0 = b0C ⃗b n−1,0 + b0b1C ⃗b n−1,2 and more generally C ⃗b n,2i = C ⃗b n−1,2i−2 + (b2i + b2i−1)C ⃗b n−1,2i + b2ib2i+1C ⃗b n,2i+2, 6 due to the possible two steps we can take from the previous states and their corresponding weights. □ Remark 2.13. Denote the transition matrix as A and note that (C0,0, C0,2, C0,4, . . .) = (1, 0, 0, . . .). We obtain (Cn,0, Cn,2, Cn,4, . . .) = An · (1, 0, 0, . . .) = ([An]1,1, [An]2,1, . . . ). In particular, we hope to be able to efficiently compute the first entry of An when needed. Using this argument, we can rederive the following result from Gao and Gu [4]: Theorem 2.14. For any positive integer m, the sequence C ⃗b 1, C ⃗b 2, . . . is periodic modulo m if m divides b0b1 . . . bk for some non-negative integer k. Proof. Recall that C ⃗b n,i is the number of weighted Dyck paths from (0, i) to (2n, 0). Observe that the weight of each Dyck path for i >  k 2  is divisible by b0b1 . . . bk. Together with Lemma 2.12, this implies that the transition matrix modulo m is of finite size (ℓ+1)×(ℓ+1), which depends on the parity of k. Specifically, ℓ= ⌊k 2⌋. If k is even, then the last element of the matrix is a2ℓ= bk−1, and if k is odd, then it is a2ℓ= bk−2 + bk−1.   C ⃗b n,0 C ⃗b n,2 C ⃗b n,4 ... C ⃗b n,2ℓ   =   b0 b0b1 0 . . . . . . . . . 1 b1 + b2 b2b3 0 . . . . . . 0 1 b3 + b4 b4b5 0 . . . ... ... ... ... ... . . . 0 0 . . . 0 1 a2ℓ     C ⃗b n−1,0 C ⃗b n−1,2 C ⃗b n−1,4 ... C ⃗b n−1,2ℓ   . There exist positive integers s and t such that (C ⃗b s,0, . . . , C ⃗b s,2ℓ) ≡(C ⃗b s+t,0, . . . , C ⃗b s+t,2ℓ), be- cause by the Pigeonhole principle, there are at most mℓ+1 possible combinations of (C ⃗b n,0, . . . , C ⃗b n,2ℓ) (mod m). The matrix is of finite size, and thus the sequence will eventually be periodic, with C ⃗b n+jt,0 ≡C ⃗b n,0 (mod m) for any positive integer j. □ 3. Periodicity of multidimensional Catalan numbers Here we derive general results on the periodicity of the multidimensional weighted Catalan numbers from Definition 2.9, starting with the bounded ones. Proposition 3.1. For k ≥2, every nonzero-length k-dimensional Balanced ballot path must reach height k −1. Proposition 3.2. For k ≥2, the (k −1)-bounded, k-dimensional Catalan number is always 1. Proof. For every n ≥1, there is one and only one ballot path from ⃗0 to (n, n, n, . . . , n) that does not exceed height k −1: it is the path described by the sequence of steps ⃗e1,⃗e2, . . . ,⃗ek of length k repeated n times. □ Theorem 3.3. The sequence of the k-dimensional s-bounded and weighted Catalan numbers is periodic modulo any positive integer m. Proof. We proceed as in Theorem 2.14. The weight vector is ⃗b = (b0, b1, . . . , bs, 0, . . .). Be- cause of the height restriction, we have finitely many states (An, A′ n, . . . A(ℓ) n ). Then the tran- sition matrix for Ck,⃗b n is of finite size ℓ× ℓ. There are at most mℓ+1 possible combinations 7 of (An, A′ n, . . . A(ℓ) n ) (mod m), hence by the Pigeonhole principle, there exist positive inte- gers s and t such that (As, A′ s, . . . A(ℓ) s ) ≡(As+t, A′ s+t, . . . A(ℓ) s+t) (mod m). Thus, As+pt ≡As (mod m) for any positive integer p and the sequence is eventually periodic. □ Corollary 3.4. For any fixed positive integers k ≥2 and h, the sequence Dk n,h, denoting the k-dimensional Balanced ballot paths of height h, is periodic modulo any positive integer m. Proof. In the next Section, we show that Dk,h,n = Ck,h,n −Ck,h−1,n. From Theorem 3.3 it follows that both Ck,h,n and Ck,h−1,n are periodic modulo m. It remains to note that if two sequences are periodic modulo m, then their difference is also periodic and additionally pm(Dk,h,n) = lcm(pm(Ck,h,n), pm(Ck,h−1,n)), where pm denotes period modulo m. □ We obtain an analogous result to Theorem 2.14 on the periodicity of the k-dimensional weighted Catalan numbers. Theorem 3.5. For any positive integer m and a weight vector ⃗b, the sequence C ⃗b k,1, C ⃗b k,2, . . . is eventually periodic modulo m if there exists a positive integer s, such that each of the weights bs, bs+1, . . . bs+k−2 is divisible by m. Proof. The weight of the path changes only at each up-step (see Definition 2.9). We observe what happens after one up-step. For the path to reach a height greater than s + k −2, all steps should start at a point with height between hs, hs+1, . . . hs+k−2, as an up-step changes the height by k −1. All weights at these heights are divisible by m, and thus for each k-dimensional Balanced ballot path Pk with hk(x) > s + k −2, the weight wt⃗b(Pk) ≡0 (mod m). Then it is enough to consider only the paths with hk(x) ≤s + k −2, i.e., the (s + k −2)-bounded ballot paths. By Proposition 3.3, their corresponding Catalan numbers Ck,s,⃗b n are periodic modulo m. □ Similar statements can be proven when m divides the product of several weights. How- ever, the greater the number of weights, the more of their permutations are divisible by m we obtain as a requirement. Here are more specific conditions for the scenario in three dimensions. Theorem 3.6. For any positive integer m and sequence of integers⃗b, the sequence C ⃗b k,1, C ⃗b k,2, . . . is eventually periodic modulo m, if there exists a positive integer s such that m | bs−jbs+k−j′ for all j ∈{0, 1, 2 . . . , k −1}, j′ ∈{j, j + 1, . . . k −1}. Proof. We contend that the last two steps in the x1 direction before the path is above height s + k are always of the following form: a step in the x1 direction from height s −j for some j ∈{0, 1, 2, ..., k −1} to height s −j + k and then a step in the x1 direction from height ℓ for some ℓ∈{s + 1, s + 2, . . . , s + k −j′} to ℓ+ k. Therefore we find that any summand in C ⃗b k,n (mod m) = P P wt⃗b(P) from any path that exceed height s + k is 0 (mod m). Therefore we find that C ⃗b k,n (mod m) = C ⃗b′ k,n (mod m) where b′ i = bi for i ∈{0, 1, . . . , s} and b′ i = 0 everywhere else. We know from Theorem 3.3 that C ⃗b′ k,n (mod m) is eventually periodic. This completes the proof. □ 4. Examples of Weighted k-dimensional Catalan Numbers 4.1. Recursive formula for certain higher-dimensional weighted and bounded Catalan numbers. Here we obtain formulas for specific sequences of the k-dimensional 8 s-bounded and weighted numbers C ⃗b k,s,n. We begin with a general k, but later focus mostly on k = 3. Theorem 4.1. The k-dimensional k-bounded and weighted Catalan numbers satisfy the re- currence C ⃗b k,k,n = (b0 + (k −1)b1)C ⃗b k,k,n−1. Proof. For clarity, denote An = C ⃗b k,k,n and let Bn−1 be the number of paths from (2, 1, 1, . . . , 1, 1, 0) to (n, n, . . . , n), such that for each node (v1, . . . , vk) we have v1 ≥v2 ≥· · · ≥vk and h(x) ≤k. By the definition of a Balanced ballot path, there is only one sequence of k steps from (a, . . . , a) to (a + 1, . . . , a + 1) with a weight contribution of b0. Due to height restrictions, there is only one sequence with k steps from (a, . . . , a) to (a + 2, a + 1, . . . , a + 1, a) with a weight contribution of b0b1. There are k −1 ways to go from (a, a −1, . . . , a −1, a −2) to (a+1, a, . . . , a, a−1) each with contribution of b1, because the ⃗e1 step should be the last one and there are k −1 possibilities for when the ⃗ek step will occur. Similarly, there are k −1 ways to go from (a, a −1, . . . , a −1, a −2) to (a, . . . , a) with weight contribution of 1. Using these relations, we obtain the recurrence:  An Bn  =  b0 b0b1 k −1 (k −1)b1   An−1 Bn−1  . From An = b0An−1 + b0b1Bn−1 it follows Bn−1 = An −b0An−1 b0b1 . Substituting into the second row we get Bn = (k −1)An−1 + (k −1)An −b0An−1 b0 = k −1 b0 An. Hence An = (b0 + (k −1)b1)An−1. □ From the recurrence relation for An and weights b0 = b1 = 1 it directly follows that: Corollary. The k-dimensional k-bounded Catalan numbers satisfy Ck,k,n = kn−1. For the next results, given a value of s, we denote by An the number of 3-dimensional bounded Balanced ballot paths from (0, 0, 0) to (n, n, n), by Bn−1 the number of paths from (2, 1, 0) to (n, n, n), by Cn−2 the number of paths from (3, 3, 0) to (n, n, n), by Dn−1 the number of paths from (3, 0, 0) to (n, n, n), by En−2 the number of paths from (4, 2, 0) to (n, n, n), and by Fn−3 the number of paths from (5, 4, 0) to (n, n, n). Each of them, satisfies that for each node x = (x1, x2, x3) we have x1 ≥x2 ≥x3 and h(x) ≤s. Because we start from the origin, (A0, B0, C0, . . .) = (1, 0, 0, . . .). Note that, by definition, An = C ⃗b 3,s,n. Because of the height restriction, the weight vector for the 3-dimensional 4-bounded Cata- lan numbers is ⃗b = (b0, b1, b2, 0, . . .). Proposition 4.2. The 3-dimensional 4-bounded and weighted Catalan numbers satisfy the recurrence   An Bn Cn  =   b0 b0b1 + b0b2 0 2 2b1 + 2b2 b2 1 b1 + b2 b2     An−1 Bn−1 Cn−1  . Proof. As in Theorem 4.1, we observe the possibilities after every 3 steps. By the definition of a Balanced ballot path, there is one way to go from a point of form (a, a, a) to (a + 1, a + 1, a+1), namely ⃗e1, ⃗e2, ⃗e3 with a weight contribution of b0. Due to the height restriction there are two sequences of steps that one can take from (a, a, a) to (a+2, a+1, a), namely ⃗e1, ⃗e2, ⃗e1 and ⃗e1, ⃗e1, ⃗e2 with weight contributions of b0b1 and b0b2. Likewise, the weight contributions 9 for the steps starting from (a, a−1, a−2) to (a+1, a, a−1) are 2b1+2b2, to (a+1, a+1, a−2) is b2, and to (a, a, a) is 1. Finally, the weight contribution for the way from (a, a, a −3) to (a, a, a) is 1, to (a + 1, a, a −1) is b2 + b1, and to (a + 1, a + 1, a −2) is b2. The possibilities are displayed in Figure 5. □ Figure 5. The possible states for C ⃗b,3 n with ⃗b = (b0, b1, b2, 0, 0, . . .) In the unweighted case, ⃗b = (1, 1, 1, 0, . . .) computations from the matrix give An = 6An−1 −3An−2. The sequence with this recurrence is A158869 in OEIS, [7] and also counts the number of ways of filling a 2 × 3 × 2n parallelepiped with 1 × 2 × 2 bricks. Corollary 4.3. The 3-dimensional 4-bounded unweighted Catalan numbers satisfy the re- currence C3,4,n = 6C3,4,n−1 −3C3,4,n−2 Similarly, for s = 5: Proposition 4.4. The 3-dimensional 5-bounded Catalan numbers C ⃗b 3,5,n satisfy the recur- rence   An Bn Cn  =   b0 b0b2 + b0b1 0 2 2(b1 + b2 + b3) b3 + b2 1 b3 + b2 + b1 2(b3 + b2 + b1)     An−1 Bn−1 Cn−1  . Proof. As in Proposition 4.2 we have three states. Due to the height restriction there is one way to go from (a, a, a) to (a −1, a −1, a −1), two ways from (a, a, a) to (a + 2, a + 1, a), six from (a, a−1, a−2) to (a+1, a, a−1), two to (a, a, a) and two to (a+1, a+1, a−2). Finally, the number of ways to go from (a, a, a−3) to (a+1, a, a−1) are 3, to (a+1, a+1, a−2) are 3 and to (a, a, a) is one. Considering the weights at each up-step we obtain the recurrence. □ For s = 6 we have many more states and we give a derivation only in the unweighted case. 10 Proposition 4.5. The 3-dimensional 6-bounded Catalan numbers C3,6,n satisfy the following recurrence:   An Bn Cn Dn En Fn   =   1 2 0 1 0 0 2 6 2 1 1 0 1 3 3 0 2 2 0 2 1 3 3 0 0 3 3 0 2 0 0 1 3 0 1 2     An−1 Bn−1 Cn−1 Dn−1 En−1 Fn−1   . Proof. The result is obtained in the same manner as Proposition 4.2, but with 6 states not 3. □ 4.2. Multidimensional Balanced-Ballot-Path-Height triangles. For the next results, we used a Python program to determine each Ck,s,n. Denote by Dk,s,n the number of k-dimensional balanced ballot paths of kn steps such that height is exactly s, i.e. for at least one intermediate point h(x) = s, but for no points h(x) > s. In the table below, the numbers in each row correspond to the number of Balanced ballot paths of kn steps and height from k −1 to (k −1)n. This is similar to the 2-dimensional Balanced-ballot-path-height triangle (sequence A080936 in OEIS [7]), but for k = 3. Recall Definition 2.8 for the k-dimensional s-bounded Catalan numbers Ck,s,n. It is evident that Dk,s,n = Ck,s,n −Ck,s−1,n. n\h 2 3 4 5 6 7 8 9 10 11 12 1 1 2 1 2 2 3 1 8 18 10 5 4 1 26 120 142 117 42 14 5 1 80 720 1481 1789 1130 596 168 42 6 1 242 4122 13680 23205 20940 14817 6936 2781 660 132 Table 1. The 3-dimensional Ballanced-Ballot-Path-Height Triangle Table 1 gives the first 36 numbers of the triangular array sequence D3,s,n. Note that the sum of the numbers of every n-th row is equal to the n-th 3-dimensional Catalan number (sequence A005789 in the OEIS [7]). Moreover, for any n we have D3,2n,n = Cn, where Cn is the classical n-th Catalan number. Indeed, the first n steps should be in the direction of ⃗e1, to reach height 2n, and the number of the remaining steps, i.e., ballot paths of length 2n formed by n steps of ⃗e2 and n steps of ⃗e3, is the n-th 2-dimensional Catalan number. n\h 3 4 5 6 7 8 9 10 11 12 1 1 2 1 3 5 5 3 1 15 68 147 105 84 42 4 1 63 722 3098 4720 5940 5112 2520 1386 462 Table 2. The 4-dimensional Balanced-Ballot-Path-Height triangle 11 Table 2 gives the first 22 elements of the sequence D4,s,n. The sum of the numbers of every n-th row is equal to the n-th 4-dimensional Catalan number (sequence A005790 in the OEIS, [7]). Moreover, for all n we have D4,3n,n = C3,n, again by arguments, analogous to the 3-dimensional case. This suggests that generalizations are possible. For each row of the k-dimensional Balanced-ballot-path-height triangle we have: n(k−1) X s=k−1 Dk,s,n = Ck,n and Dk,s,n = Ck,s,n −Ck,s−1,n. Proposition 4.6. For a k-dimensional Balanced-ballot-path-height triangle we have Dk,(k−1)n,n = Ck−1,n. Proof. The sequence Dk,(k−1)n,n counts the k-dimensional Balanced ballot paths of length kn, with height (k −1)n. The only way to reach this height is when the first (k −1) steps are all up-steps of ⃗e1 – otherwise, if there were t steps of ⃗ei in between them, then for the height function we obtain h(x) = (k −1) −t < k −1, contradiction. Therefore, the first k −1 steps are all ⃗e1 and the number of k-dimensional ballot paths starting from (k −1, 0, 0, . . . , 0) and ending at (k −1, k −1, . . . , k −1) equals the number of (k −1)-dimensional ballot paths from (0, . . . , 0) to (k −1, . . . , k −1) which is Ck−1,n. □ Proposition 4.7. For a k-dimensional Balanced-ballot-path-height triangle, we have Dk,(k−1)n−1,n = (n −1)Dk,(k−1)n,n. Proof. A k-dimensional Balanced ballot path has height (k−1)n−1 only if the first (k−1)n+1 elements consist of (k −1)n steps in the ⃗e1 direction and one step in the direction of ⃗e2. The number of ways for this is (n −1). Let S1 be the collection of pairs whose first entry consists of a k-dimensional ballot Path of kn steps and maximum height (k−1)n, and whose second entry consists of an integer in {2, 3, . . . , n}. Let S2 be the collection of k-dimensional Balanced ballot Paths of kn steps and height (k −1)n −1. It suffices to show |S1| = |S2|. We illustrate a bijection between S1 and S2 – given a pair (P, i), we construct a ballot Path P ′ as follows. The i-th step of P ′ is in the direction ⃗e2, while all steps from the first to the (n + 1)-st one, except for the i-th one, are in the direction ⃗e1, and the directions of all of the other steps in P ′ are the same as those of P. □ 4.3. A multidimensional generalization of the Narayana triangle. Here we consider another statistic of the k-dimensional ballot paths, namely the number of peaks. A peak is a node, to which we have arrived with an up-step and left from with a down-step. For example, the path on Figure 1 has three peaks. In the 2-dimensional case, the sequence representing the number of paths of length n with a fixed number of peaks is called the Narayana numbers (A001263 in OEIS [7]). There is a higher-dimensional analog [14], where a peak is a node, to which we have arrived with an ⃗ei step and left from with an ⃗ej step for some i < j; by our definition of peak, we provide an alternative. In the context of k-dimensional Balanced ballot paths, increases in the first coordinate x1 represent a positive change in height at points. Therefore, we consider it the primary coordinate. This is why, for any k ≥2 and k-dimensional ballot path P, we consider ⃗e1 to be an up-step. 12 n\p 1 2 3 4 5 6 1 1 2 2 3 3 5 23 14 4 14 131 233 84 5 42 664 2339 2367 594 6 132 3166 18520 36265 24714 4719 Table 3. Table of values of N3,p,n, the 3-dimensional Narayana triangle. n\p 1 2 3 4 5 6 1 1 2 5 9 3 42 236 184 4 462 5354 12268 5940 5 6006 118914 543119 737129 257636 6 87516 2653224 20245479 53243052 50245691 13754842 Table 4. Table of values of N4,p,n, the 4-dimensional Narayana triangle. Definition 4.8. Denote by Nk,m,n the number of k-dimensional ballot Paths with p peaks, where a peak is a node, to which we have arrived with an ⃗e1 step and left with an ⃗ej step for some j > 1. We wrote Python code to generate the first few elements of N3,p,n and N4,p,n. Tables 3 and 4 show respectively the 3-dimensional and 4-dimensional Narayana triangles for our height h. The 3-dimensional one has a corresponding sequence A338403 in OEIS [7], with another combinatorial interpretation – counting the number (n, k)-Duck words [2]. On the other hand, the 4-dimensional one does not seem to be currently available at OEIS. Note that for each row of the k-dimensional Narayana triangle we have: n(k−1) X h=k−1 Nk,p,n = Ck,n = n(k−1) X h=k−1 Dk,s,n Another property of the k-dimensional Narayana triangle is that the numbers in the first column form the sequence of (k −1)-dimensional Catalan numbers and hence also connects with the Balanced-ballot-path-height triangle. Together with Proposition 4.6, this yields a relation between the three types of sequences. Proposition 4.9. For a k-dimensional Narayana triangle we have: Nk,1,n = Ck−1,n = Dk,(k−1)n,n. Proof. In order to have only one peak, there should be only one pair of steps ⃗e1, ⃗ei. The only way for the condition to be satisfied is if the first n steps are up-steps of ⃗e1. These paths are the same as in Proposition 4.6. The number of k-dimensional ballot Paths starting from (k −1, 0, 0, . . . , 0) and ending at (k −1, k −1, . . . , k −1) equals the number of (k −1)- dimensional ballot paths from (0, . . . , 0) to (k −1, . . . , k −1) which is Ck−1,n. □ 13 5. Acknowledgments This project was started during the Research Science Institute (RSI) 2025, hosted at the Massachusetts Institute of Technology (MIT). We greatly appreciate Dr. Tanya Khovanova and Professor Alexander Postnikov for insightful discussions. We also thank supervisors Prof. Roman Bezrukavnikov and Dr. Jonathan Bloom for their general advice and recom- mendations. We also acknowledge AnaMaria Perez, Dr. Jenny Sendova, Miroslav Marinov, Prof. Stanislav Harizanov, Nick Arosemena, Mircea Dan Hernest, and Austin Luo for their feedback. We thank the Center for Excellence in Education and MIT for allowing us to work on this project during the Research Science Institute 2025. The first author is employed by the MIT Department of Mathematics. The second author is supported by the St. Cyril and St. Methodius International Foundation, the EVRIKA Foundation and the High School Student Institute of Mathematics and Informatics in Bulgaria. Figures 1- 4 were generated using TikZ. Figure 5 was generated using draw.io. References [1] Junkyu An. Combinatorial Enumeration of Weighted Catalan Numbers. ProQuest LLC, Ann Arbor, MI, 2010. Thesis (Ph.D.)–Massachusetts Institute of Technology. [2] Ilani Axelrod-Freed. 312-avoiding reduced valid hook configurations and duck words. Enumer. Comb. Appl., 1(2):Paper No. S2R14, 13, 2021. [3] J. Cigler. q-Catalan- und q-Motzkinzahlen. ¨Osterreich. Akad. Wiss. Math.-Natur. Kl. Sitzungsber. II, 208:3–20, 1999. [4] Yibo Gao and Andrew Gu. Arithmetic of weighted Catalan numbers. J. Number Theory, 226:213–242, 2021. [5] Ian P. Goulden and David M. Jackson. Combinatorial enumeration. Dover Publications, Inc., Mineola, NY, 2004. With a foreword by Gian-Carlo Rota, Reprint of the 1983 original. [6] Matjaˇz Konvalinka. Divisibility of generalized Catalan numbers. J. Comb. Theory, Ser. A, 114(6):1089– 1100, 2007. [7] OEIS Foundation Inc. The on-line encyclopedia of integer sequences. Published electronically at https: //oeis.org, 2025. [8] Alexander Postnikov. Counting morse curves and links. Preprint Preliminary Version, MIT Mathemat- ics, December 2000. 8 pages, available at MIT Mathematics. [9] Alexander Postnikov and Bruce E. Sagan. What power of two divides a weighted Catalan number? J. Comb. Theory, Ser. A, 114(5):970–977, 2007. [10] ScienceDirect Topics. Manhattan distance. ScienceDirect Topics (Engineering / Mathematics / Com- puter Science), n.d. Online overview article, “The Manhattan distance between two vectors (city blocks) is equal to the one-norm of the distance between the vectors.”. [11] Sarah Shader. Weighted catalan numbers and their divisibility properties. Undergraduate research paper, Massachusetts Institute of Technology, Research Science Institute, 2013. [12] Richard P. Stanley. Catalan numbers. Cambridge: Cambridge University Press, 2015. [13] Richard P. Stanley. Enumerative combinatorics. Vol. 2, volume 208 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition, [2024] ©2024. With an appendix by Sergey Fomin. [14] Robert A. Sulanke. Generalizing narayana and schr¨oder numbers to higher dimensions. Electron. J. Combin., 11(1):Research Paper R54, 2004. Submitted Dec 29, 2003; accepted May 15, 2004; published Aug 23, 2004. Department of Mathematics, Massachusetts Institute of Technology Email address: inaga270@mit.edu ”Akademik Kiril Popov” High school of Mathematics Email address: dimanapramatarova@gmail.com
ON WEIGHTED AND BOUNDED MULTIDIMENSIONAL CATALAN NUMBERS RYOTA INAGAKI AND DIMANA PRAMATAROVA Abstract. We define a weighted analog for the multidimensional Catalan numbers, obtain matrix-based recurrences for some of them, and give conditions under which they are periodic. Building on this framework, we introduce two new sequences of triangular arrays: the first one enumerates the k-dimensional Balanced ballot paths of exact height s; the second one is a new multidimensional generalization of the Narayana numbers, which count the number of Balanced ballot paths with exactly p peaks. 1. Introduction The sequence of the Catalan numbers Cn = 1 n + 1 2n n is one of the most studied ones in the field of enumerative combinatorics, which is the branch of mathematics dedicated to counting discrete structures by deriving exact formulas, generating functions, or recursive relations. The Catalan numbers (sequence A000108 in the OEIS [7]) enumerate various objects such as the triangulations of a convex polygon with n + 2 sides, rooted binary trees with 2n nodes, along with hundreds of others [12]. More notably, they count the number of Dyck paths of length 2n, which are sequences of points in Z2 starting at (0, 0) and ending at (2n, 0), composed of n up-steps of (1, 1) and n down-steps of (1, -1), and the paths do not go below the x axis (see Figure 1 for an example). x y Figure 1. A Dyck path of 8 steps Now, we introduce weighted Catalan numbers, which first appeared in works such as those of Goulden and Jackson [5]. For fixed sequence of integers ⃗b = (b0, b1, b2, . . .) ∈ZN, which we call weight vector, and a Dyck path P of length 2n, the weight wt⃗b(P) of the Dyck path P is the product bh1bh2 · · · bhn, where hi is the height of the starting point of the i-th up-step Date: October 17, 2025. 2020 Mathematics Subject Classification. 05A15, 05A19, 11B50. Key words and phrases. Weighted Catalan Numbers, Multidimensional Catalan Numbers, Narayana Numbers, Periodicity. 1 16 Oct 2025 2 of P. The corresponding n-th weighted Catalan number for ⃗b is defined as C ⃗b n = X P wt⃗b(P), where the sum is over all Dyck paths of length 2n. Examples of weighted Dyck paths are displayed in Figures 2 and 3. x y b0 b1 b1 b0 Figure 2. A weighted Dyck path with wt⃗b(P) = b2 0b2 1 x y b3 0 x y b2 0b1 x y b2 0b1 x y b0b2 1 x y b0b1b2 Figure 3. For weight vector ⃗b = (b0, b1, b2), all 5 weighted Dyck paths of 6 steps with corresponding weights for weight vector ⃗b = (b0, b1, b2, . . . ). The third weighted Catalan number for this weight vector is C ⃗b 3 = b3 0 + 2b2 0b1 + b0b2 1 + b0b1b2. For particular weight vectors, the weighted Catalan numbers have many combinatorial interpretations. For example, when the weight vector is ⃗b = (1, q, q2, . . . ), the corresponding weighted Catalan number C ⃗b n is the q-Catalan numbers [3], which encode the distribution of areas under Dyck paths. Postnikov [8] proved that when the weight vector is set to be ⃗b = (12, 32, 52, . . . ), the weighted Catalan number C ⃗b n counts combinatorial types of Morse links of order n. Postnikov conjectured that C ⃗b n has a period of 2 · 3r-3 modulo 3r, meaning that 2 · 3r-3 is the smallest positive integer such that C ⃗b n+2·3r-3 -C ⃗b n is a multiple of 3r for large n. This was later proven by Gao and Gu [4] in 2021. Arithmetic properties of the weighted Catalan numbers have also been extensively studied. In 2006, Postnikov and Sagan [9] derived a condition under which the 2-adic valuation of the weighted Catalan numbers is equal to that of the corresponding unweighted ones. Later in the year, Konvalinka [6] proved an analogous result for the q-Catalan numbers. In 2010, An [1] proved a conjecture by Konvalinka and studied other divisibility properties using matrices. Later, in 2012, Shader [11] considered the periodicity modulo pr for prime p of certain weighted Catalan numbers. In 2021, Gao and Gu proved a condition for the periodicity of the weighted Catalan numbers modulo an integer [4, Theorem 4.2]. We build on the above results by extending the definition of weighted two-dimensional Catalan numbers to weighted multidimensional ones by considering height functions of similar behavior. The paper is organized as follows. In Section 2, we define Balanced ballot 3 paths, the generalization of Dyck paths to higher dimensions, and use them to present our generalization of weighted Dyck paths to higher dimensions. We discuss prior and basic results on the 2-dimensional Catalan numbers and prove Gao and Gu's Theorem [4] using a matrix-based approach; this inspires our techniques in deriving periodicity properties in the k-dimensionalCatalann numbers. In Section 3, we discuss properties guaranteeing periodicities of weighted k-dimensional Catalan numbers modulo m, where m is an integer. In Section 4.1, we use recurrence sequences of integers to find closed-forms for some cases of the multidimensional bounded and weighted Catalan numbers. Using the bounded multidimensional Catalan numbers, in Section 4.2 we construct new integer sequences, the multidimensional triangles of Balanced ballot paths of height exactly , s and establish properties. In Section 4.3, we use our definition of height to define peaks and also consider the number of peaks in ballot paths to construct analogs of the Narayana numbers. Code used to calculate the 3 a4-dimensionalnal Balanced-Ballot-Path-Height triangles and the 3 a4-dimensionalnal Narayana triangles can be found on the GitHub repository https://github.com/Ryota-IMath/Inagaki Pramatarova multidim height Catalan. 2. Definitions and Notations 2.1. Problem Setup. We begin by discussing variants of the Dyck path and their extensions to higher dimensions. Consider the east-north version [12], which is also known as the 2dimensional ballot path with n east steps and n north steps. By scaling, rotating, and flipping the path, one sees that the definitions of Dyck paths and ballot paths ending at (n, n) are equivalent. Definition 2.1. A (2-dimensional) Balanced ballot path of 2n steps is a sequence of points in Z2, starting at (0, 0) and ending at (n, n), formed from n east-steps (1, 0) and n north-steps (0, 1), so that the path never goes above the diagonal y = x. We now extend this to higher dimensions. To the best of our knowledge, the generalization of the multidimensional weighted Catalan numbers has not been previously defined in the literature. A point in the k-dimensional lattice Zk is a k-tuple (x1, x2, . . . , xk), and steps are taken in the positive coordinate directions, typically along the standard basis vectors ⃗ei = (0, 0, . . . , 0, 1, 0, . . . , 0), in which the i-th coordinate is 1 and the others are 0. Definition 2.2. The k-dimensional Balanced ballot path of kn steps, denoted as Pk,n = v1, v2, v3, . . . , vkn, is a sequence of kn steps in Zk starting at (0, 0, . . . , 0) and ending at (n, n, . . . , n) satisfying the following conditions: • Each step vi -vi-1 is in the set of standard unit vectors {⃗e1,⃗e2, . . . ,⃗ek}. • Each point x = (x1, . . . , xk) in the path satisfies x1 ≥x2 ≥. . . ≥xk. We call an up-step any step in the direction of ⃗e1 = (1, 0, . . . , 0). Note that ballot paths do not require an equal number of steps in each direction; however, we consider the balanced case, which is essentially the same as multidimensional Dyck paths, but we use this term to distinguish them from the definition of Dyck paths presented in the introduction. Using these, we can define the k-dimensional Catalan number: Definition 2.3 ((A060854 in OEIS [7])). For n and k, the nth k-dimensional Catalan number is the number of k-dimensional Balanced ballot paths of kn steps. The n-th k-dimensional 4 Catalan number equals Ck,n = 0!1! · · · (n -1)! · (kn)! k!(k + 1)! · · · (k + n -1)!. Remark 2.4. One can observe from the above formula that Ck,n = Cn,k. Remark 2.5. The nth k-dimensional Catalan number is the number of Standard Young Tableaux k × n, as derived by the hook length formula [13]. We now extend the notion of bounded and weighted Catalan numbers to k-dimensional Catalan numbers. In order to define them, we define a height function as follows. Definition 2.6. For point x ∈Zk, we define height of x as hk(x) := x1 -x2 + x1 -x3 + . . . + x1 -xk = (k -1)x1 - k X i=2 xi. Given a k-dimensional Balanced ballot path P, define the height of the path P to be max{hk(x) : x ∈P}. This is a natural extension of the 2-dimensional Catalan numbers, as the height is the difference between the number of up-steps and down-steps, i.e., x1 -x2. The height function is the Manhattan distance [10] from point (x1, x2, . . . , xk) to (x1, x1, . . . , x1). Example 2.7. An example of a 3-dimensional Balanced ballot path is shown in Figure 4, where the black arrows correspond to ⃗e1, the gray arrows to ⃗e2 and the light gray ones to ⃗e3, with the values of the height at the points where it increases - at each ⃗e1 step. Figure 4. A 3-dimensional Balanced ballot path from (0, 0, 0) to (3, 3, 3) with the heights of each point along the path indicated. We use the formula h3(x) = x1 -x2 + x1 -x3 to calculate the heights. 5 Definition 2.8. For integers k ≥2 and s ≥0, we define the n-th k-dimensional sbounded Catalan number, denoted by Ck,s,n, as the number of ballot paths P of kn steps starting at the origin (0, 0, . . . , 0) and ending at (n, n, . . . , n), satisfying the following condition: for any x = (x1, . . . , xk) in P, the height function hk(x) as in Definition 2.10 is less than or equal to s. A visualization of Balanced ballot paths for the bounded Catalan number is as follows. These are ballot paths from ⃗0 to (n, . . . , n) such that each node is between the hyperplanes x1 = x2 + · · · + xk k -1 and x1 = x2 + · · · + xk + s k -1 . We consider the weight function only on the positive contribution to the height function, which means that we focus only on the change of the x1-coordinate. Definition 2.9. Given a sequence of integers ⃗b = (b0, b1, b2, . . .) and a k-dimensional Balanced ballot path Pk of kn steps, the weight of path Pk with respects to ⃗b, denoted by wt⃗b(Pk), is the product bh1bh2 · · · bhn, where hi is the height (as in Definition 2.8) of the starting point of the i-th up-step of Pk. The corresponding n-th k-dimensional weighted Catalan number is C ⃗b k,n = X P wt⃗b(Pk), where the sum is over all k-dimensional ballot Paths Pk of kn steps. Definition 2.10. Let k be an integer at least 2 and s be a nonnegative integer. For fixed sequence of integers ⃗b = (b0, b1, . . .), the k-dimensional s-bounded weighted Catalan numbers C ⃗b k,s,n are defined analogously. Remark 2.11. The unweighted version Ck,s,n from Definition 2.8 corresponds to C ⃗b k,s,n for the weight vector ⃗b = (1, 1, . . . , 1, 0, 0, . . .), where the first s+k -1 entries are equal to 1 and the rest are zero. 2.2. Prior and Basic Results on Weighted 2-Catalan Numbers. To provide a foundation for examining periodicity, we first discuss basic results on the weighted (2-dimensional) Catalan numbers, which inspire our approach to studying multidimensional weighted Catalan numbers. Analogously to An [1] and Shader [11], we derive a tridiagonal matrix-based recurrence for the 2-dimensional weighted Catalan numbers. For the next preliminary result we denote by C ⃗b n,i the number of weighted Dyck paths from (0, i) to (2n, 0). (In particular, C ⃗b n,0 = C ⃗b n) Lemma 2.12. The 2-dimensional weighted Catalan numbers satisfy the following recurrence:   C ⃗b n,0 C ⃗b n,2 C ⃗b n,4 ...   =   b0 b0b1 0 . . . . . . . . . 1 b1 + b2 b2b3 0 . . . . . . 0 1 b3 + b4 b4b5 0 . . . ... ... ... ... ... . . .     C ⃗b n-1,0 C ⃗b n-1,2 C ⃗b n-1,4 ...   . Proof. We have C ⃗b n,0 = b0C ⃗b n-1,0 + b0b1C ⃗b n-1,2 and more generally C ⃗b n,2i = C ⃗b n-1,2i-2 + (b2i + b2i-1)C ⃗b n-1,2i + b2ib2i+1C ⃗b n,2i+2, 6 due to the possible two steps we can take from the previous states and their corresponding weights. □ Remark 2.13. Denote the transition matrix as A and note that (C0,0, C0,2, C0,4, . . .) = (1, 0, 0, . . .). We obtain (Cn,0, Cn,2, Cn,4, . . .) = An · (1, 0, 0, . . .) = ([An]1,1, [An]2,1, . . . ). In particular, we hope to be able to efficiently compute the first entry of An when needed. Using this argument, we can rederive the following result from Gao and Gu [4]: Theorem 2.14. For any positive integer m, the sequence C ⃗b 1, C ⃗b 2, . . . is periodic modulo m if m divides b0b1 . . . bk for some non-negative integer k. Proof. Recall that C ⃗b n,i is the number of weighted Dyck paths from (0, i) to (2n, 0). Observe that the weight of each Dyck path for i > k 2 is divisible by b0b1 . . . bk. Together with Lemma 2.12, this implies that the transition matrix modulo m is of finite size (l+1)×(l+1), which depends on the parity of k. Specifically, l= ⌊k 2⌋. If k is even, then the last element of the matrix is a2l= bk-1, and if k is odd, then it is a2l= bk-2 + bk-1.   C ⃗b n,0 C ⃗b n,2 C ⃗b n,4 ... C ⃗b n,2l   =   b0 b0b1 0 . . . . . . . . . 1 b1 + b2 b2b3 0 . . . . . . 0 1 b3 + b4 b4b5 0 . . . ... ... ... ... ... . . . 0 0 . . . 0 1 a2l     C ⃗b n-1,0 C ⃗b n-1,2 C ⃗b n-1,4 ... C ⃗b n-1,2l   . There exist positive integers s and t such that (C ⃗b s,0, . . . , C ⃗b s,2l) ≡(C ⃗b s+t,0, . . . , C ⃗b s+t,2l), because by the Pigeonhole principle, there are at most ml+1 possible combinations of (C ⃗b n,0, . . . , C ⃗b n,2l) (mod m). The matrix is of finite size, and thus the sequence will eventually be periodic, with C ⃗b n+jt,0 ≡C ⃗b n,0 (mod m) for any positive integer j. □ 3. Periodicity of multidimensional Catalan numbers Here we derive general results on the periodicity of the multidimensional weighted Catalan numbers from Definition 2.9, starting with the bounded ones. Proposition 3.1. For k ≥2, every nonzero-length k-dimensional Balanced ballot path must reach height k -1. Proposition 3.2. For k ≥2, the (k -1)-bounded, k-dimensional Catalan number is always 1. Proof. For every n ≥1, there is one and only one ballot path from ⃗0 to (n, n, n, . . . , n) that does not exceed height k -1: it is the path described by the sequence of steps ⃗e1,⃗e2, . . . ,⃗ek of length k repeated n times. □ Theorem 3.3. The sequence of the k-dimensional s-bounded and weighted Catalan numbers is periodic modulo any positive integer m. Proof. We proceed as in Theorem 2.14. The weight vector is ⃗b = (b0, b1, . . . , bs, 0, . . .). Because of the height restriction, we have finitely many states (An, A′ n, . . . A(l) n ). Then the transition matrix for Ck,⃗b n is of finite size l× l. There are at most ml+1 possible combinations 7 of (An, A′ n, . . . A(l) n ) (mod m), hence by the Pigeonhole principle, there exist positive integers s and t such that (As, A′ s, . . . A(l) s ) ≡(As+t, A′ s+t, . . . A(l) s+t) (mod m). Thus, As+pt ≡As (mod m) for any positive integer p and the sequence is eventually periodic. □ Corollary 3.4. For any fixed positive integers k ≥2 and h, the sequence Dk n,h, denoting the k-dimensional Balanced ballot paths of height h, is periodic modulo any positive integer m. Proof. In the next Section, we show that Dk,h,n = Ck,h,n -Ck,h-1,n. From Theorem 3.3 it follows that both Ck,h,n and Ck,h-1,n are periodic modulo m. It remains to note that if two sequences are periodic modulo m, then their difference is also periodic and additionally pm(Dk,h,n) = lcm(pm(Ck,h,n), pm(Ck,h-1,n)), where pm denotes period modulo m. □ We obtain an analogous result to Theorem 2.14 on the periodicity of the k-dimensional weighted Catalan numbers. Theorem 3.5. For any positive integer m and a weight vector ⃗b, the sequence C ⃗b k,1, C ⃗b k,2, . . . is eventually periodic modulo m if there exists a positive integer s, such that each of the weights bs, bs+1, . . . bs+k-2 is divisible by m. Proof. The weight of the path changes only at each up-step (see Definition 2.9). We observe what happens after one up-step. For the path to reach a height greater than s + k -2, all steps should start at a point with height between hs, hs+1, . . . hs+k-2, as an up-step changes the height by k -1. All weights at these heights are divisible by m, and thus for each k-dimensional Balanced ballot path Pk with hk(x) > s + k -2, the weight wt⃗b(Pk) ≡0 (mod m). Then it is enough to consider only the paths with hk(x) ≤s + k -2, i.e., the (s + k -2)-bounded ballot paths. By Proposition 3.3, their corresponding Catalan numbers Ck,s,⃗b n are periodic modulo m. □ Similar statements can be proven when m divides the product of several weights. However, the greater the number of weights, the more of their permutations are divisible by m we obtain as a requirement. Here are more specific conditions for the scenario in three dimensions. Theorem 3.6. For any positive integer m and sequence of integers⃗b, the sequence C ⃗b k,1, C ⃗b k,2, . . . is eventually periodic modulo m, if there exists a positive integer s such that m | bs-jbs+k-j′ for all j ∈{0, 1, 2 . . . , k -1}, j′ ∈{j, j + 1, . . . k -1}. Proof. We contend that the last two steps in the x1 direction before the path is above height s + k are always of the following form: a step in the x1 direction from height s -j for some j ∈{0, 1, 2, ..., k -1} to height s -j + k and then a step in the x1 direction from height l for some l∈{s + 1, s + 2, . . . , s + k -j′} to l+ k. Therefore we find that any summand in C ⃗b k,n (mod m) = P P wt⃗b(P) from any path that exceed height s + k is 0 (mod m). Therefore we find that C ⃗b k,n (mod m) = C ⃗b′ k,n (mod m) where b′ i = bi for i ∈{0, 1, . . . , s} and b′ i = 0 everywhere else. We know from Theorem 3.3 that C ⃗b′ k,n (mod m) is eventually periodic. This completes the proof. □ 4. Examples of Weighted k-dimensional Catalan Numbers 4.1. Recursive formula for certain higher-dimensional weighted and bounded Catalan numbers. Here we obtain formulas for specific sequences of the k-dimensional 8 s-bounded and weighted numbers C ⃗b k,s,n. We begin with a general k, but later focus mostly on k = 3. Theorem 4.1. The k-dimensional k-bounded and weighted Catalan numbers satisfy the recurrence C ⃗b k,k,n = (b0 + (k -1)b1)C ⃗b k,k,n-1. Proof. For clarity, denote An = C ⃗b k,k,n and let Bn-1 be the number of paths from (2, 1, 1, . . . , 1, 1, 0) to (n, n, . . . , n), such that for each node (v1, . . . , vk) we have v1 ≥v2 ≥· · · ≥vk and h(x) ≤k. By the definition of a Balanced ballot path, there is only one sequence of k steps from (a, . . . , a) to (a + 1, . . . , a + 1) with a weight contribution of b0. Due to height restrictions, there is only one sequence with k steps from (a, . . . , a) to (a + 2, a + 1, . . . , a + 1, a) with a weight contribution of b0b1. There are k -1 ways to go from (a, a -1, . . . , a -1, a -2) to (a+1, a, . . . , a, a-1) each with contribution of b1, because the ⃗e1 step should be the last one and there are k -1 possibilities for when the ⃗ek step will occur. Similarly, there are k -1 ways to go from (a, a -1, . . . , a -1, a -2) to (a, . . . , a) with weight contribution of 1. Using these relations, we obtain the recurrence: An Bn = b0 b0b1 k -1 (k -1)b1 An-1 Bn-1 . From An = b0An-1 + b0b1Bn-1 it follows Bn-1 = An -b0An-1 b0b1 . Substituting into the second row we get Bn = (k -1)An-1 + (k -1)An -b0An-1 b0 = k -1 b0 An. Hence An = (b0 + (k -1)b1)An-1. □ From the recurrence relation for An and weights b0 = b1 = 1 it directly follows that: Corollary. The k-dimensional k-bounded Catalan numbers satisfy Ck,k,n = kn-1. For the next results, given a value of s, we denote by An the number of 3-dimensional bounded Balanced ballot paths from (0, 0, 0) to (n, n, n), by Bn-1 the number of paths from (2, 1, 0) to (n, n, n), by Cn-2 the number of paths from (3, 3, 0) to (n, n, n), by Dn-1 the number of paths from (3, 0, 0) to (n, n, n), by En-2 the number of paths from (4, 2, 0) to (n, n, n), and by Fn-3 the number of paths from (5, 4, 0) to (n, n, n). Each of them, satisfies that for each node x = (x1, x2, x3) we have x1 ≥x2 ≥x3 and h(x) ≤s. Because we start from the origin, (A0, B0, C0, . . .) = (1, 0, 0, . . .). Note that, by definition, An = C ⃗b 3,s,n. Because of the height restriction, the weight vector for the 3-dimensional 4-bounded Catalan numbers is ⃗b = (b0, b1, b2, 0, . . .). Proposition 4.2. The 3-dimensional 4-bounded and weighted Catalan numbers satisfy the recurrence   An Bn Cn  =   b0 b0b1 + b0b2 0 2 2b1 + 2b2 b2 1 b1 + b2 b2     An-1 Bn-1 Cn-1  . Proof. As in Theorem 4.1, we observe the possibilities after every 3 steps. By the definition of a Balanced ballot path, there is one way to go from a point of form (a, a, a) to (a + 1, a + 1, a+1), namely ⃗e1, ⃗e2, ⃗e3 with a weight contribution of b0. Due to the height restriction there are two sequences of steps that one can take from (a, a, a) to (a+2, a+1, a), namely ⃗e1, ⃗e2, ⃗e1 and ⃗e1, ⃗e1, ⃗e2 with weight contributions of b0b1 and b0b2. Likewise, the weight contributions 9 for the steps starting from (a, a-1, a-2) to (a+1, a, a-1) are 2b1+2b2, to (a+1, a+1, a-2) is b2, and to (a, a, a) is 1. Finally, the weight contribution for the way from (a, a, a -3) to (a, a, a) is 1, to (a + 1, a, a -1) is b2 + b1, and to (a + 1, a + 1, a -2) is b2. The possibilities are displayed in Figure 5. □ Figure 5. The possible states for C ⃗b,3 n with ⃗b = (b0, b1, b2, 0, 0, . . .) In the unweighted case, ⃗b = (1, 1, 1, 0, . . .) computations from the matrix give An = 6An-1 -3An-2. The sequence with this recurrence is A158869 in OEIS, [7] and also counts the number of ways of filling a 2 × 3 × 2n parallelepiped with 1 × 2 × 2 bricks. Corollary 4.3. The 3-dimensional 4-bounded unweighted Catalan numbers satisfy the recurrence C3,4,n = 6C3,4,n-1 -3C3,4,n-2 Similarly, for s = 5: Proposition 4.4. The 3-dimensional 5-bounded Catalan numbers C ⃗b 3,5,n satisfy the recurrence   An Bn Cn  =   b0 b0b2 + b0b1 0 2 2(b1 + b2 + b3) b3 + b2 1 b3 + b2 + b1 2(b3 + b2 + b1)     An-1 Bn-1 Cn-1  . Proof. As in Proposition 4.2 we have three states. Due to the height restriction there is one way to go from (a, a, a) to (a -1, a -1, a -1), two ways from (a, a, a) to (a + 2, a + 1, a), six from (a, a-1, a-2) to (a+1, a, a-1), two to (a, a, a) and two to (a+1, a+1, a-2). Finally, the number of ways to go from (a, a, a-3) to (a+1, a, a-1) are 3, to (a+1, a+1, a-2) are 3 and to (a, a, a) is one. Considering the weights at each up-step we obtain the recurrence. □ For s = 6 we have many more states and we give a derivation only in the unweighted case. 10 Proposition 4.5. The 3-dimensional 6-bounded Catalan numbers C3,6,n satisfy the following recurrence:   An Bn Cn Dn En Fn   =   1 2 0 1 0 0 2 6 2 1 1 0 1 3 3 0 2 2 0 2 1 3 3 0 0 3 3 0 2 0 0 1 3 0 1 2     An-1 Bn-1 Cn-1 Dn-1 En-1 Fn-1   . Proof. The result is obtained in the same manner as Proposition 4.2, but with 6 states not 3. □ 4.2. Multidimensional Balanced-Ballot-Path-Height triangles. For the next results, we used a Python program to determine each Ck,s,n. Denote by Dk,s,n the number of k-dimensional balanced ballot paths of kn steps such that height is exactly s, i.e. for at least one intermediate point h(x) = s, but for no points h(x) > s. In the table below, the numbers in each row correspond to the number of Balanced ballot paths of kn steps and height from k -1 to (k -1)n. This is similar to the 2-dimensional Balanced-ballot-path-height triangle (sequence A080936 in OEIS [7]), but for k = 3. Recall Definition 2.8 for the k-dimensional s-bounded Catalan numbers Ck,s,n. It is evident that Dk,s,n = Ck,s,n -Ck,s-1,n. n 2 3 4 5 6 7 8 9 10 11 12 1 1 2 1 2 2 3 1 8 18 10 5 4 1 26 120 142 117 42 14 5 1 80 720 1481 1789 1130 596 168 42 6 1 242 4122 13680 23205 20940 14817 6936 2781 660 132 Table 1. The 3-dimensional Ballanced-Ballot-Path-Height Triangle Table 1 gives the first 36 numbers of the triangular array sequence D3,s,n. Note that the sum of the numbers of every n-th row is equal to the n-th 3-dimensional Catalan number (sequence A005789 in the OEIS [7]). Moreover, for any n we have D3,2n,n = Cn, where Cn is the classical n-th Catalan number. Indeed, the first n steps should be in the direction of ⃗e1, to reach height 2n, and the number of the remaining steps, i.e., ballot paths of length 2n formed by n steps of ⃗e2 and n steps of ⃗e3, is the n-th 2-dimensional Catalan number. n 3 4 5 6 7 8 9 10 11 12 1 1 2 1 3 5 5 3 1 15 68 147 105 84 42 4 1 63 722 3098 4720 5940 5112 2520 1386 462 Table 2. The 4-dimensional Balanced-Ballot-Path-Height triangle 11 Table 2 gives the first 22 elements of the sequence D4,s,n. The sum of the numbers of every n-th row is equal to the n-th 4-dimensional Catalan number (sequence A005790 in the OEIS, [7]). Moreover, for all n we have D4,3n,n = C3,n, again by arguments, analogous to the 3-dimensional case. This suggests that generalizations are possible. For each row of the k-dimensional Balanced-ballot-path-height triangle we have: n(k-1) X s=k-1 Dk,s,n = Ck,n and Dk,s,n = Ck,s,n -Ck,s-1,n. Proposition 4.6. For a k-dimensional Balanced-ballot-path-height triangle we have Dk,(k-1)n,n = Ck-1,n. Proof. The sequence Dk,(k-1)n,n counts the k-dimensional Balanced ballot paths of length kn, with height (k -1)n. The only way to reach this height is when the first (k -1) steps are all up-steps of ⃗e1 - otherwise, if there were t steps of ⃗ei in between them, then for the height function we obtain h(x) = (k -1) -t 1. We wrote Python code to generate the first few elements of N3,p,n and N4,p,n. Tables 3 and 4 show respectively the 3-dimensional and 4-dimensional Narayana triangles for our height h. The 3-dimensional one has a corresponding sequence A338403 in OEIS [7], with another combinatorial interpretation - counting the number (n, k)-Duck words [2]. On the other hand, the 4-dimensional one does not seem to be currently available at OEIS. Note that for each row of the k-dimensional Narayana triangle we have: n(k-1) X h=k-1 Nk,p,n = Ck,n = n(k-1) X h=k-1 Dk,s,n Another property of the k-dimensional Narayana triangle is that the numbers in the first column form the sequence of (k -1)-dimensional Catalan numbers and hence also connects with the Balanced-ballot-path-height triangle. Together with Proposition 4.6, this yields a relation between the three types of sequences. Proposition 4.9. For a k-dimensional Narayana triangle we have: Nk,1,n = Ck-1,n = Dk,(k-1)n,n. Proof. In order to have only one peak, there should be only one pair of steps ⃗e1, ⃗ei. The only way for the condition to be satisfied is if the first n steps are up-steps of ⃗e1. These paths are the same as in Proposition 4.6. The number of k-dimensional ballot Paths starting from (k -1, 0, 0, . . . , 0) and ending at (k -1, k -1, . . . , k -1) equals the number of (k -1)- dimensional ballot paths from (0, . . . , 0) to (k -1, . . . , k -1) which is Ck-1,n. □ 13 5. Acknowledgments This project was started during the Research Science Institute (RSI) 2025, hosted at the Massachusetts (MIT). We greatly appreciate Dr. Tanya Khovanova and Professor Alexander Postnikov for insightful discussions. We also thank supervisors Prof. Roman Bezrukavnikov and Dr. Jonathan Bloom for their general advice and recommendations. We also acknowledge AnaMaria Perez, Dr. Jenny Sendova, Miroslav Marinov, Prof. Stanislav Harizanov, Nick Arosemena, Mircea Dan Hernest, and Austin Luo for their feedback. We thank the Center for Excellence in Education and MIT for allowing us to work on this project during the Research Science Institute 2025. The first author is employed by the MIT . The second author is supported by the St. Cyril and St. Methodius International Foundation, the EVRIKA Foundation and the High School Student . Figures 1- 4 were generated using TikZ. Figure 5 was generated using draw.io. References [1] Junkyu An. Combinatorial Enumeration of Weighted Catalan Numbers. ProQuest LLC, Ann Arbor, MI, 2010. Thesis (Ph.D.)-Massachusetts . [2] Ilani Axelrod-Freed. 312-avoiding reduced valid hook configurations and duck words. Enumer. Comb. Appl., 1(2):Paper No. S2R14, 13, 2021. [3] J. Cigler. q-Catalan- und q-Motzkinzahlen. ̈Osterreich. Akad. Wiss. Math.-Natur. Kl. Sitzungsber. II, 208:3-20, 1999. [4] Yibo Gao and Andrew Gu. Arithmetic of weighted Catalan numbers. J. Number Theory, 226:213-242, 2021. [5] Ian P. Goulden and David M. Jackson. Combinatorial enumeration. Dover Publications, Inc., Mineola, NY, 2004. With a foreword by Gian-Carlo Rota, Reprint of the 1983 original. [6] Matjaˇz Konvalinka. Divisibility of generalized Catalan numbers. J. Comb. Theory, Ser. A, 114(6):10891100, 2007. [7] OEIS Foundation Inc. The on-line encyclopedia of integer sequences. Published electronically at https: //oeis.org, 2025. [8] Alexander Postnikov. Counting morse curves and links. Preprint Preliminary Version, MIT Mathematics, December 2000. 8 pages, available at MIT Mathematics. [9] Alexander Postnikov and Bruce E. Sagan. What power of two divides a weighted Catalan number? J. Comb. Theory, Ser. A, 114(5):970-977, 2007. [10] ScienceDirect Topics. Manhattan distance. ScienceDirect Topics (Engineering / Mathematics / Computer Science), n.d. Online overview article, "The Manhattan distance between two vectors (city blocks) is equal to the one-norm of the distance between the vectors.". [11] Sarah Shader. Weighted catalan numbers and their divisibility properties. Undergraduate research paper, Massachusetts 2013. [12] Richard P. Stanley. Catalan numbers. Cambridge: Cambridge University Press, 2015. [13] Richard P. Stanley. Enumerative combinatorics. Vol. 2, volume 208 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition, [2024] . With an appendix by Sergey Fomin. [14] Robert A. Sulanke. Generalizing narayana and schr ̈oder numbers to higher dimensions. Electron. J. Combin., 11(1):Research Paper R54, 2004. Submitted Dec 29, 2003; accepted May 15, 2004; published Aug 23, 2004. : "Akademik Kiril Popov" High school of Mathematics Email address:
2510.14960
C4D: 4D Made from 3D through Dual Correspondences Shizun Wang1 Zhenxiang Jiang1 Xingyi Yang2 Xinchao Wang1* 1National University of Singapore 2The Hong Kong Polytechnic University {shizun.wang, zhenxiang.jiang}@nus.u.edu, xingyi.yang@polyu.edu.hk, xinchao@nus.edu.sg Monocular Video Per-frame dense Point Cloud & Camera Poses Camera Intrinsics C4D Motion Masks 2D Point Tracking 3D Point Tracking Figure 1. Given a monocular video that contains both camera movement and object movement, C4D can recover the dynamic scene in 4D, including per-frame dense point cloud, camera poses and intrinsic parameters. Video depth, motion masks, and point tracking in both 2D and 3D space are also available in the outputs. Abstract Recovering 4D from monocular video, which jointly es- timates dynamic geometry and camera poses, is an in- evitably challenging problem. While recent pointmap-based 3D reconstruction methods (e.g., DUSt3R) have made great progress in reconstructing static scenes, directly applying them to dynamic scenes leads to inaccurate results. This discrepancy arises because moving objects violate multi- view geometric constraints, disrupting the reconstruction. To address this, we introduce C4D, a framework that lever- ages temporal Correspondences to extend existing 3D re- construction formulation to 4D. Specifically, apart from predicting pointmaps, C4D captures two types of corre- spondences: short-term optical flow and long-term point tracking. We train a dynamic-aware point tracker that pro- vides additional mobility information, facilitating the esti- mation of motion masks to separate moving elements from the static background, thus offering more reliable guidance for dynamic scenes. Furthermore, we introduce a set of dy- namic scene optimization objectives to recover per-frame 3D geometry and camera parameters. Simultaneously, the correspondences lift 2D trajectories into smooth 3D trajec- *Corresponding author. tories, enabling fully integrated 4D reconstruction. Exper- iments show that our framework achieves complete 4D re- covery and demonstrates strong performance across mul- tiple downstream tasks, including depth estimation, cam- era pose estimation, and point tracking. Project Page: https://littlepure2333.github.io/C4D 1. Introduction Recovering complete 4D representations from monocular videos, which involves estimating dynamic scene geometry, camera poses, and 3D point tracking, is a highly challenging task. While extending 3D reconstruction methods over the time dimension might seem straightforward, achieving ac- curate and smooth time-varying geometries and consistent camera pose trajectories is far from simple. Recent paradigm shifts in 3D reconstruction, such as DUSt3R [61], have shown significant success in recon- structing static scenes from unordered images. It directly predicts dense 3D pointmaps from images and makes many 3D downstream tasks, like recovering camera parameters and global 3D reconstruction, become easy by just apply- ing global alignment optimization on 3D pointmaps. However, when applied to dynamic scenes, these for- 1 arXiv:2510.14960v1 [cs.CV] 16 Oct 2025 mulations often produce substantial inaccuracies. This is because their reliance on multi-view geometric constraints breaks down as moving objects violate the assumptions of global alignment. As a result, they struggle to achieve ac- curate 4D reconstructions in dynamic scenes. Our key insight is that the interplay between temporal correspondences and 3D reconstruction naturally leads to 4D. By capturing 2D correspondences over time, we can effectively separate moving regions from static ones. By calibrating the camera in the static region only, we improve the quality of the 3D reconstruction. In turn, the improved 3D model helps connect these correspondences, creating a consistent 4D representation that integrates temporal details into the 3D structure. This motivation drives our framework, C4D, a frame- work designed to upgrade the current 3D reconstruction for- mulation by using temporal Correspondences to achieve 4D reconstruction. Apart from 3D pointmap prediction, C4D captures short-term optical flow and long-term point track- ing. These temporal correspondences are essential: they generate motion masks that guide the 3D reconstruction process, while also contributing to optimizing the smooth- ness of the 4D representation. To achieve this, we introduce the Dynamic-aware Point Tracker (DynPT), which not only tracks points but also pre- dicts whether they are moving in the world coordinates. Using this information, we create a correspondence-guided strategy that combines static points and optical flow to gen- erate motion masks. These motion masks guide the 3D re- construction by focusing on static regions, enabling more accurate estimation of camera parameters from the point maps and further enhancing geometric consistency. To further improve the 4D reconstruction, we introduce a set of correspondence-aided optimization techniques. These include ensuring the camera movements are con- sistent, keeping the camera path smooth, and maintaining smooth trajectories for the 3D points. Together, these im- provements result in a refined and stable 4D reconstruction that is both accurate and smooth over time. Extensive exper- iments show that C4D delivers strong performance in dy- namic scene reconstruction. When applied to various down- stream tasks, such as depth estimation, camera pose estima- tion, and point tracking, C4D performs competitively, even compared to specialized methods. In summary, our key contributions are as follows: • We introduce C4D, a framework that upgrades the cur- rent 3D reconstruction formulation to 4D reconstruction by incorporating two temporal correspondences. • We propose a dynamic-aware point tracker (DynPT) that not only tracks points but also predicts whether a point is dynamic in world coordinates. • We present a motion mask prediction mechanism guided by optical flow and our DynPT. • We introduce correspondence-aided optimization tech- niques to improve the consistency and smoothness of 4D reconstruction. • We conduct experiments on depth estimation, camera pose estimation, and point tracking, demonstrating that C4D achieves strong performance, even compared to spe- cialized methods. 2. Related Work 2.1. Temporal Correspondences Optical flow represents dense pixel-level motion displace- ment between consecutive frames, capturing short-term dense correspondences. Modern deep learning methods have transformed optical flow estimation, leveraging large datasets [3, 36], CNNs [12, 53], ViTs [68], and iterative re- finement [55, 64], resulting in significant improvements in accuracy and robustness. We leverage the motion informa- tion contained in optical flow to generate motion masks in this work. Point tracking aims to track a set of query points and predict their position and occlusion in a video [10], pro- viding long-term sparse pixel correspondences. Tracking Any Point (TAP) methods [6, 11, 18, 23] extract correlation maps between frames and use a neural network to predict tracking positions and occlusions, achieving strong perfor- mance on causal videos. While these methods are effective, they all lack the ability to predict the mobility of points in world coordinates, which we achieve in this work. 2.2. 3D Reconstruction Recovering 3D structures and camera poses from image col- lections has been studied for decades [19]. Classic meth- ods such as Structure-from-motion (SfM) [46] and visual SLAM [9, 39] operate in sequential pipelines, often involv- ing keypoint detection [2, 34, 35, 44], matching [45, 66], triangulation, and bundle adjustment [1, 58]. However, the sequential pipeline is complex and vulnerable to errors in each sub-task. To address these, DUSt3R [61] introduces a significant paradigm shift by directly predicting pointmaps from image pairs, and dense 3D reconstruction can be ob- tained by a global alignment optimization. 2.3. 4D Reconstruction Since the world is dynamic, 4D reconstruction naturally ex- tends 3D reconstruction. Recent works [5, 7, 17, 27–29, 31, 33, 49, 52, 60, 62] explore 4D reconstruction from monoc- ular video. Building on either 3DGS [26] or pointmap [61] representation, most of these methods are optimization- based and rely on off-the-shelf priors for supervision, such as depth, optical flow, and tracking trajectories. Concurrent work MonST3R [71] explores pointmap-based 4D recon- struction by fine-tuning DUSt3R on dynamic scene data, whereas we directly use pretrained pointmap-based model 2 Video frames Dynamic-aware Point Tracker (DynPT) Relative dynamic point Relative static point 2D tracking trajectories 𝑇 Optical Flow Estimator 3D Recon Model For every Pair 𝑒 = (𝑖, 𝑗) Pair-wise optical flow 𝐹!→#, 𝐹#→! Pair-wise pointmaps 𝑋!;%, 𝑋#;% *𝐾 Camera intrinsic ,𝑃 Camera pose ,𝑋 Per-frame pointmaps Correspondence -aided Optimization ,𝑇 3D Tracking trajectories Input Construction Sequence Model Inference Intermediates Collection 4D Results Constraints Variables Dynamic masks Figure 2. Overview of C4D. C4D takes monocular video as input and jointly predicts dense 3D pointmaps (Sec. 3.1) and temporal correspondences (Sec. 3.2), including short-term optical flow and long-term point tracking (Sec. 3.2.1). These correspondences are utilized to predict motion masks (Sec. 3.2.2) and participate in the optimization process (Sec. 3.3) with 3D pointmaps to obtain 4D outputs. weights and complement them with correspondence-guided optimization for 4D reconstruction. 3. Method The core idea of our method is to jointly predict dense 3D pointmaps and temporal correspondences from an input video, leveraging these correspondences to improve 4D re- construction in dynamic scenes. These correspondences are obtained from both short-term optical flow and long-term point tracking. The whole pipeline is shown in Figure 2. We begin by reviewing the 3D reconstruction formula- tion in Sec. 3.1, which provides dense 3D pointmaps. Next, we introduce our dynamic-aware point tracker (DynPT) in Sec. 3.2.1, designed to track points while also identify- ing whether they are dynamic in world coordinates. In Sec. 3.2.2, we describe how DynPT is combined with opti- cal flow to estimate reliable motion masks. Finally, Sec. 3.3 details our correspondence-aided optimization, which uti- lizes pointmaps, optical flow, point tracks, and motion masks to refine the 4D reconstruction. 3.1. 3D Reconstruction Formulation Our method complements the recent feed-forward 3D re- construction paradigm, DUSt3R [61], and can be applied to any DUSt3R-based model weights [32, 71]. Given a video with T frames {I1, I2, ..., IT }, a scene graph G is constructed, where an edge represents a pair of images e = (In, Im) ∈G. Then DUSt3R operates in two steps: (1) A ViT-based network Φ that takes a pair of two im- ages In, Im ∈RW ×H×3 as input and directly outputs two dense pointmaps Xn, Xm ∈RW ×H×3 with associated confidence maps Cn, Cm ∈RW ×H. Xn, Cn, Xm, Cm = Φ(In, Im) (1) (2) Since these pointmaps are represented in the local coordinate of each pair, DUSt3R employs global optimiza- tion to all pairs of pointmaps, to recover global aligned pointmaps {X t ∈RW ×H×3} for all frames t = 1, ..., T. LGA(X, P, σ) = X e∈G X t∈e Ct;e||X t −σePeXt;e|| (2) Where Pe ∈R3×4 and σe > 0 are pairwise pose and scaling. To reduce computational cost, we use a sparse scene graph based on a strided sliding window, as in [13, 61, 71], where only pairs within a local temporal window are used for optimization. While this 3D formulation performs well on static scenes, its performance drops in dynamic scenes, as dis- cussed in Sec. 4.2. This is primarily due to moving ob- jects violating multi-view geometric constraints, which mo- tivates us to extend the current 3D formulation to a 4D one. 3.2. Capturing Dual Correspondences We capture two correspondences to help 4D recovery: long- term point tracking and short-term optical flow. 3.2.1. Dynamic-aware Point Tracker Current 2D point tracking methods like Tracking Any Point (TAP) [10, 11, 23, 24] can robustly track query points in videos. However, they cannot distinguish whether the movement of the tracking point is caused by camera move- ment or object movement. To segment moving objects in the world coordinate system, we enhance these trackers by enabling them to predict the mobility of tracking points. We introduce the Dynamic-aware Point Tracker (DynPT), which differentiates between motion caused by the camera and true object dynamics. This helps us identify and seg- ment moving objects even when both the camera and the objects are in motion. Tracker Architecture We adopt a similar design of Co- Tracker [23, 24] to design our DynPT, which is illustrated 3 3D-aware Encoder CNN Extractor Input video Feature maps Transformer Iterative Update M times Query Sampler Position Confidence Visibility Mobility P(0) C(0) V(0) M(0) ∆P ∆C ∆V ∆M P(M) C(M) V(M) M(M) Predictions Figure 3. Architecture of Dynamic-aware Point Tracker (DynPT). For given video input and sampled initial query points, DynPT uses Transformer to iteratively update the tracks with fea- tures obtained from both 3D-aware ViT encoder and CNN. in Figure 3. Original CoTracker only uses one CNN [20] to extract features. While to better capture the spatial dynamic relationships, we additionally employ a 3D-aware ViT en- coder, which comes from DUSt3R’s encoder, to enhance the 3D spatial information [65]. And different from all other TAP methods, DynPT directly predicts one more attribute, mobility, along with other attributes of tracks. Specifically, for an input video of length T, DynPT first extracts each frame’s multi-scale features from a 3D-aware encoder and CNN, which are used to construct 4D correla- tion features Corr to provide richer information for track- ing [6]. Given a query point P0 ∈R2 at the first frame, we initialize the track positions Pt with the same position of P0 for all remaining times t = 1, ..., T, and initial- ize the confidence Ct, visibility Vt and mobility Mt with zeros for all times. Then we iteratively update these at- tributes with a transformer for M times. At each iteration, the transformer takes a grid of input tokens spanning time T: Gi t = (ηi t−1→t, ηi t→t+1, Ci t, V i t , M i t, Corri t) for every query point i = 1, ..., N, where ηi t→t+1 = η(Pt+1 −Pt) represents Fourier Encoded embedding of per-frame dis- placements. Inside the Transformer, the attention operation is applied across time and track dimensions. Training and Inference We train DynPT on Kubric [16], a synthetic dataset from which ground- truth mobility labels can be obtained. We use Huber loss to supervise position. And we employ cross-entropy loss to supervise confidence, visibility and mobility. When performing inference on a video, DynPT predicts tracks in a sliding window manner. More details about the DynPT can be found in the supplementary materials. 3.2.2. Correspondence-Guided Motion Mask Estimation The most important part of 4D reconstruction in dynamic scenes is to separate dynamic areas from static areas in world coordinates. To achieve this, we utilize two temporal correspondences: short-term optical flow Fest estimated by off-the-shelf models [55, 64, 68], and long-term point track- ing trajectory T predicted by DynPT. Figure 4 shows this strategy of correspondence-guided motion mask prediction. Since DynPT provides mobility predictions of tracks, at Sample static position Frame at 𝑡+ 5 Frame at 𝑡 Optical flow 𝐹!→!#$ Tracking points 𝑇! Epipolar error map Motion mask Calculate Fundamental Matrix Threshold Motion masks w/ adjacent frames Union Final motion mask ℳ! Figure 4. Correspondence-guided motion mask prediction. The solid circle indicates predicted dynamic points, the hollow circle indicates predicted static points. Adjacent frames are from constructed image pairs containing the current frame. time t, we can retrieve the positions of static points {P j t } where M j t = 0. And given an optical flow F t→t′ from time t to adjacent time t′, we can sample the pixel corre- spondences of these static points {(P j t , P j t′)}. With these correspondences, we then estimate the fundamental matrix F between the two frames via the Least Median of Squares (LMedS) method [43], which does not require known cam- era parameters and is robust to outliers. Since the funda- mental matrix is estimated solely based on static points, it reflects only the underlying camera motion, unaffected by dynamic objects in the scene. So using this F to calculate the epipolar error map on all correspondences in F t→t′, the area with large error indicates there violates the epipolar constraints, that is, dynamic area. In practice, we compute the error map using the Sampson error [19], which provides a more robust approximation of the epipolar error by ac- counting for scale and orientation. Then a threshold is ap- plied to obtain the motion mask. While considering a longer temporal range, calculating the motion mask based on only two frames is not sufficient. For example, a person’s standing foot may remain still for several frames before lifting off to step, as shown in Fig- ure 4. To address this, we calculate the motion mask of the current frame using adjacent frames from the constructed image pairs that include the current frame t, then take the union of these masks to produce the final motion mask Mt. 3.3. Correspondence-aided Optimization for 4D Based on the Global Alignment (GA) objective described in Sec 3.1, we introduce additional optimization objec- tives to improve the accuracy and smoothness in dynamic 4 scenes: camera movement alignment, camera trajectory smoothness, and point trajectory smoothness. The op- timizable variables are per-frame depthmap Dt, camera intrinsic Kt and camera pose P t = [Rt|T t]. Then we re-parameterize the global pointmaps X t as X t i,j := P t−1h(Kt−1[iDt i,j; jDt i,j; Dt i,j]), where (i, j) is the pixel coordinate and h(·) is the homogeneous mapping. So that, optimizing X t is equivalent to optimizing P t, Kt, Dt. Since global alignment tends to align moving objects to the same position, it can negatively impact camera pose estimation. To address this, and leveraging the fact that optical flow provides a prior on camera motion, we intro- duce the Camera Movement Alignment (CMA) objec- tive [22, 54, 62, 71, 72]. CMA encourages the estimated ego motion to be consistent with optical flow in static re- gions. Specifically, for two frames It and It′, we compute the ego-motion field F t→t′ ego as the 2D displacement of X t by moving camera from t to t′. Then we can encourage this field to be close to the optical flow field Ft→t′, in the static regions St =∼Mt, which is the opposite region of the motion mask Mt computed in Sec 3.2.2: LCMA(X) = X e∈G X (t,t′)∈e S ·  Ft→t′ ego −Ft→t′ 1 (3) The Camera Trajectory Smoothness (CTS) objective is commonly used in visual odometry [37, 50, 71] to en- force smooth camera motion by penalizing abrupt changes in camera rotation and translation between consecutive frames: LCTS(X) = N X t=0 Rt⊤Rt+1 −I F + (Tt+1 −Tt) 2 (4) where ∥· ∥F denotes the Frobenius norm, and I is the identity matrix. Lastly, we propose the Point Trajectory Smoothness (PTS) objective to smooth world coordinate pointmaps over time. Within a local temporal window, we first select 2D tracking trajectories T that remain visible throughout the window and lift them to 3D trajectories. We then smooth these 3D trajectories using a 1D convolution with adap- tive weights, where weights are reduced for outlier points based on their temporal deviations. For each frame within the window, we treat the smoothed points as control points and apply a linear blend of control point displacements to transform all other points, weighting each control point’s influence based on proximity, resulting in dense smoothed pointmaps e X t. (More details in supplementary.) We then minimize per-frame distance between global pointmaps and smoothed pointmaps using L1 loss: LPTS(X) = N X t=0  ||X t −e X t||1  (5) The complete optimization objective for recovering the 4D scene is: ˆ X = arg min X,P,σ  wGALGA(X, σ, P) + wCMALCMA(X) + wCTSLCTS(X) + wPTSLPTS(X)  (6) where wGA, wCMA, wCTS, wPTS are the loss weights. The completed outputs of C4D contain world-coordinate pointmaps ˆ X, depthmaps ˆD, camera poses ˆP, camera in- trinsics ˆK, motion masks M, 2D tracking trajectories T, and lifted 3D tracking trajectories ˆT. 4. Experiments We evaluate C4D on multiple downstream tasks, comparing it with specialized methods (Sec. 4.3), and 3D formulations (Sec. 4.2). The ablation study in Sec. 4.4 justifies our design choices, and implementation details are provided in supple- mentary materials. 4.1. Datasets and Metrics We evaluate camera pose estimation on Sintel [3], TUM- dynamics [51] and ScanNet [8] following [4, 73, 74]. Sintel is a synthetic dataset featuring challenging motion blur and large camera movements. TUM-Dynamics and ScanNet are real-world datasets for dynamic scenes and static scenes, respectively. We report the metrics of Absolute Translation Error (ATE), Relative Translation Error (RPE trans), and Relative Rotation Error (RPE rot). For depth estimation, we evaluate on Sintel, Bonn [40], and KITTI [14], following [21, 71]. Bonn [40] and KITTI [14] are real-world indoor dynamic scene and out- door datasets. The evaluation metrics for depth estimation are Absolute Relative Error (Abs Rel), Root Mean Squared Error (RMSE), and the percentage of inlier points δ < 1.25, as used in prior works [21, 69]. For point tracking, we evaluate our method on the TAP- Vid benchmark [10] and Kubric [16]. TAP-Vid contains videos with annotations of tracking point positions and oc- clusion. We use the metrics of occlusion accuracy (OA), position accuracy (δx avg), and average Jaccard (AJ) to evalu- ate this benchmark, following [11, 18, 23, 59]. Kubric is a generator that synthesizes semi-realistic multi-object falling videos with rich annotations, including the moving status of tracking points in world coordinates. To fully evaluate the diverse dynamic patterns in the real world, we use three datasets from Kubric to assess dynamic accuracy (D-ACC): 1) MOVi-E, which introduces simple (linear) camera move- ment while always “looking at” the center point in world coordinates; 2) Panning MOVi-E, which modifies MOVi- E with panning camera movement; 3) MOVi-F, similar to MOVi-E but adds some random motion blur. 5 3D Model Weights Optimization Formulation Sintel TUM-dynamics ScanNet (static) ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ DUSt3R Global Alignment 0.416 0.216 18.038 0.127 0.058 2.033 0.060 0.024 0.751 C4D 0.334 0.154 0.948 0.093 0.018 0.906 0.064 0.018 0.570 MASt3R Global Alignment 0.437 0.329 12.760 0.084 0.052 1.245 0.073 0.027 0.706 C4D 0.448 0.199 1.730 0.048 0.012 0.671 0.067 0.018 0.467 MonSt3R Global Alignment 0.158 0.099 1.924 0.099 0.041 1.912 0.075 0.026 0.707 C4D (Ours) 0.103 0.040 0.705 0.071 0.019 0.897 0.061 0.017 0.538 Table 1. Camera pose estimation results across 3D/4D formulations. Evaluation on the Sintel, TUM-dynamic, and ScanNet datasets. The best results are highlighted in bold. Our 4D formulation, C4D, consistently improves the performance based on 3D models. 3D Model Weights Optimization Formulation Sintel Bonn KITTI Abs Rel ↓RMSE ↓δ<1.25 ↑Abs Rel ↓RMSE ↓δ<1.25 ↑Abs Rel ↓RMSE ↓δ<1.25 ↑ DUSt3R Global Alignment 0.502 5.141 54.9 0.149 0.422 84.4 0.129 5.162 84.2 C4D (Ours) 0.478 5.052 57.9 0.143 0.411 84.7 0.126 5.140 85.0 MASt3R Global Alignment 0.370 4.669 57.8 0.174 0.503 78.4 0.092 4.000 89.8 C4D (Ours) 0.379 4.756 58.3 0.168 0.485 78.6 0.092 4.000 89.7 MonSt3R Global Alignment 0.335 4.467 57.5 0.065 0.254 96.2 0.090 4.128 90.6 C4D (Ours) 0.327 4.465 60.7 0.061 0.249 96.5 0.089 4.128 90.6 Table 2. Video depth estimation results across 3D/4D formulations. We evaluate scale-and-shift-invariant depth on Sintel, Bonn, and KITTI. The best results are highlighted in bold. Our 4D fomulation, C4D, consistently improve the performance based on 3D models. 4.2. Comparison across 3D/4D Formulations 3D Baselines We choose the currently available DUSt3R- based models as our 3D baseline models: 1) DUSt3R [61], trained on millions of image pairs in static scenes, demon- strating impressive performance and generalization across various real-world static scenarios with different camera pa- rameters. 2) MASt3R [32], the follow-up work to DUSt3R, which initializes its weights from DUSt3R and is fine-tuned on the matching task, also using large-scale data from static scenes. 3) MonSt3R [71], which fine-tunes the decoder and head of DUSt3R on selected dynamic scene datasets. The global alignment is the default optimization strategy in the 3D formulation, as described in Sec. 3.3. Results We evaluate the results of camera pose estima- tion and video depth estimation, as shown in Table 1 and Ta- ble 2. Our C4D achieves consistent performance improve- ments compared to 3D formulation across different 3D model weights. For camera pose estimation, C4D signif- icantly improves performance (e.g., reducing RPEr from 18.038 to 0.948) even on the challenging Sintel dataset, demonstrating the effectiveness of our method. The re- sults on the ScanNet dataset, which consists of static scenes, also show that our method further enhances performance in static environments. C4D also outperforms 3D formu- lations in terms of video depth accuracy. Moreover, these results highlight a comparison among 3D model weights: DUSt3R and MASt3R perform comparably overall, while MonST3R achieves better results as it is fine-tuned on dy- namic scene datasets. 4.3. Comparison with Other Methods Since C4D produces multiple outputs, we compare our method with others specifically designed for individual tasks, including camera pose estimation, video depth esti- mation, and point tracking. Evaluation on camera pose estimation We compare with methods that can predict camera pose and video depth jointly: Robust-CVD [30], CasualSAM [73], and the con- current work MonST3R [71]. We re-evaluated MonST3R using their publicly available codes and checkpoints for a fair comparison. For a broader evaluation, we also compare with learning-based visual odometry methods: DROID- SLAM [56], DPVO [57], ParticleSfM [74], and LEAP- VO [4]. Note that DROID-SLAM, DPVO, and LEAP-VO require ground-truth camera intrinsics as input, while our C4D can estimate camera intrinsics and camera poses using only a monocular video as input. The results are presented in Table 3, showing that C4D achieves highly competitive performance even compared to specialized visual odometry methods and generalizes well on static scenes, such as the 6 Category Method Sintel TUM-dynamics ScanNet (static) ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ Pose only DROID-SLAM† 0.175 0.084 1.912 - - - - - - DPVO† 0.115 0.072 1.975 - - - - - - ParticleSfM 0.129 0.031 0.535 - - - 0.136 0.023 0.836 LEAP-VO† 0.089 0.066 1.250 0.068 0.008 1.686 0.070 0.018 0.535 Joint depth & pose Robust-CVD 0.360 0.154 3.443 0.153 0.026 3.528 0.227 0.064 7.374 CasualSAM 0.141 0.035 0.615 0.071 0.010 1.712 0.158 0.034 1.618 MonST3R 0.109 0.043 0.737 0.104 0.223 1.037 0.068 0.017 0.545 C4D-M (Ours) 0.103 0.040 0.705 0.071 0.019 0.897 0.061 0.017 0.538 Table 3. Camera pose evaluation on Sintel, TUM-dynamic, and ScanNet. The best and second best results are highlighted in bold and underlined, respectively. † means the method requires ground truth camera intrinsics as input. “C4D-M” denotes C4D with MonST3R’s model weights. Alignment Category Method Sintel Bonn KITTI Abs Rel ↓δ<1.25 ↑Abs Rel ↓δ<1.25 ↑Abs Rel ↓δ <1.25 ↑ Per-sequence scale & shift Single-frame depth Marigold 0.532 51.5 0.091 93.1 0.149 79.6 DepthAnything-V2 0.367 55.4 0.106 92.1 0.140 80.4 Video depth NVDS 0.408 48.3 0.167 76.6 0.253 58.8 ChronoDepth 0.687 48.6 0.100 91.1 0.167 75.9 DepthCrafter 0.292 69.7 0.075 97.1 0.110 88.1 Joint video depth & pose Robust-CVD 0.703 47.8 - - - - CasualSAM 0.387 54.7 0.169 73.7 0.246 62.2 MonST3R 0.335 58.5 0.063 96.2 0.157 73.8 C4D-M (Ours) 0.327 60.7 0.061 96.5 0.089 90.6 Per-sequence scale Video depth DepthCrafter 0.692 53.5 0.217 57.6 0.141 81.8 Joint video depth & pose MonST3R 0.345 55.8 0.065 96.2 0.159 73.5 Joint video depth & pose C4D-M (Ours) 0.338 58.1 0.063 96.4 0.091 90.6 Table 4. Video depth evaluation on Sintel, Bonn, and KITTI. Two types of depth range alignment are evaluated: scale & shift and scale- only. “C4D-M” denotes C4D with MonST3R’s model weights. ScanNet dataset. Evaluation on video depth estimation Table 4 shows the evaluation results on video depth estimation. We compare with various kinds of depth estimation methods: single-frame depth methods such as Marigold [25] and DepthAnything-V2 [69], and video depth methods such as NVDS [63], ChronoDepth [47], and DepthCrafter [21]. Note that these methods predict relative depth, which leads to inconsistencies across multiple views when projecting to world coordinates [15]. We also compare with methods that can predict video depth and camera pose jointly: Robust- CVD [30], CasualSAM [73], and MonST3R [71]. The eval- uation is conducted using two kinds of depth range align- ment: scale & shift, and scale-only. C4D achieves highly competitive results in scale & shift alignment. However, as demonstrated in [70], a shift in depth will affect the x, y, and z coordinates non-uniformly when recovering the 3D geom- etry of a scene, resulting in shape distortions. Therefore, a more important evaluation is under scale-only alignment, where C4D achieves the best performance. Evaluation on point tracking As part of the C4D out- puts, we evaluate point tracking results in Table 5 and com- pare them with other TAP methods: RAFT [55], TAP- Net [10], PIPs [18], MFT [38], TAPIR [11], and Co- tracker [23]. Note that all previous TAP methods can only predict the position and occlusion of tracking points, whereas our method can additionally predict mobility, con- tributing to a robust motion mask prediction as described in Sec. 3.2.2. Despite this more challenging learning ob- jective, our method still achieves comparable performance with SOTA methods and demonstrates high accuracy in pre- dicting mobility. 7 Method MOVi-E Pan. MOVi-E MOVi-F TAP-Vid DAVIS TAP-Vid Kinetics D-ACC ↑ D-ACC ↑ D-ACC ↑ AJ ↑ < δx avg ↑ OA ↑ AJ ↑ < δx avg ↑ OA ↑ Predict Position & Occlusion RAFT - - - 30.0 46.3 79.6 34.5 52.5 79.7 TAP-Net - - - 38.4 53.1 82.3 46.6 60.9 85.0 PIPs - - - 39.9 56.0 81.3 39.1 55.3 82.9 MFT - - - 47.3 66.8 77.8 39.6 60.4 72.7 TAPIR - - - 56.2 70.0 86.5 49.6 64.2 85.0 CoTracker - - - 61.8 76.1 88.3 49.6 64.3 83.3 Predict Position & Occlusion & Mobility DynPT (Ours) 87.9 94.1 91.5 61.6 75.4 87.4 47.8 62.6 82.3 Table 5. Point tracking evaluation results on the TAP-Vid and Kubric (MOVi-E, Panning MOVi-E, and MOVi-F) Datasets. Apart from achieving competitive results with SOTA TAP methods, DynPT offers a unique capability: predicting the mobility of tracking points, which is crucial for determining whether a point is dynamic in world coordinates. Method Camera pose Video depth ATE ↓RPE t ↓RPE r ↓Abs Rel ↓RMSE ↓δ<1.25 ↑ w/o CMA 0.140 0.051 0.905 0.335 4.501 0.582 w/o CTS 0.131 0.058 1.348 0.322 4.442 0.608 w/o PTS 0.103 0.040 0.705 0.327 4.465 0.607 C4D (Ours) 0.103 0.040 0.705 0.327 4.459 0.609 Table 6. Ablation study on the Sintel dataset. 1) 3D trajectories w/o PTS 2) 3D trajectories w/ PTS Frame at time t Frame at time t+15 Video C4D (Ours) MonST3R MASt3R DUSt3R Figure 5. Ablation illustration of Point Trajectory Smoothness (PTS) objective. The temporal depth and 3D trajectories become more smooth after applying PTS objective. 4.4. Ablation Study Ablation results in Table 6 indicate that all loss functions are crucial. The proposed loss suite achieves the best pose esti- mation with minimal impact on video depth accuracy. Since the temporal smoothness of depth cannot be reflected by the quantitative metrics in Table 6, we show the temporal depth slice changes in Figure 5, following [21, 69], which demon- strates that our PTS objective is effective in producing more temporally smooth depth and 3D point trajectories. Note that while MonST3R also employs the CMA objective, the motion mask used in this objective is crucial, and our mo- MonST3R GT C4D Video MonST3R GT C4D Video Figure 6. Qualitative comparison of motion mask on Sintel. Our motion mask is more accurate than MonST3R’s. tion mask is more accurate than MonST3R’s, as shown in Figure 6. Due to page limitations, the ablation of DynPT is provided in the supplement. 5. Conclusion In this paper, we introduce C4D, a framework for recover- ing 4D representations from monocular videos through joint prediction of dense pointmaps and temporal correspon- dences. Within this framework, a Dynamic-aware Point Tracker (DynPT), a correspondence-guided motion mask prediction, and correspondence-aided optimization are pro- posed to achieve accurate and smooth 4D reconstruction and camera pose estimation. Experiments demonstrate that C4D effectively reconstructs dynamic scenes, delivering competitive performance in depth estimation, camera pose estimation, and point tracking. 8 Acknowledgement This project is supported by the National Research Foun- dation, Singapore, under its Medium Sized Center for Ad- vanced Robotics Technology Innovation. References [1] Sameer Agarwal, Noah Snavely, Steven M Seitz, and Richard Szeliski. Bundle adjustment in the large. In Com- puter Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part II 11, pages 29–42. Springer, 2010. 2 [2] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (surf). Computer vi- sion and image understanding, 110(3):346–359, 2008. 2 [3] Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. A naturalistic open source movie for opti- cal flow evaluation. In ECCV, pages 611–625, 2012. 2, 5, 1, 3, 4 [4] Weirong Chen, Le Chen, Rui Wang, and Marc Pollefeys. LEAP-VO: Long-term effective any point tracking for visual odometry. In CVPR, pages 19844–19853, 2024. 5, 6 [5] Xingyu Chen, Yue Chen, Yuliang Xiu, Andreas Geiger, and Anpei Chen. Easi3r: Estimating disentangled motion from dust3r without training. arXiv preprint arXiv:2503.24391, 2025. 2 [6] Seokju Cho, Jiahui Huang, Jisu Nam, Honggyu An, Seun- gryong Kim, and Joon-Young Lee. Local all-pair correspon- dence for point tracking. In European Conference on Com- puter Vision, pages 306–325. Springer, 2025. 2, 4 [7] Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. Dream- scene4d: Dynamic multi-object scene generation from monocular videos. arXiv preprint arXiv:2405.02280, 2024. 2 [8] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Hal- ber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, pages 5828–5839, 2017. 5, 6 [9] Andrew J Davison, Ian D Reid, Nicholas D Molton, and Olivier Stasse. Monoslam: Real-time single camera slam. IEEE transactions on pattern analysis and machine intelli- gence, 29(6):1052–1067, 2007. 2 [10] Carl Doersch, Ankush Gupta, Larisa Markeeva, Adria Re- casens, Lucas Smaira, Yusuf Aytar, Joao Carreira, Andrew Zisserman, and Yi Yang. Tap-vid: A benchmark for track- ing any point in a video. Advances in Neural Information Processing Systems, 35:13610–13626, 2022. 2, 3, 5, 7, 6 [11] Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf Aytar, Joao Carreira, and Andrew Zisserman. Tapir: Tracking any point with per-frame initialization and temporal refinement. In Proceedings of the IEEE/CVF In- ternational Conference on Computer Vision, pages 10061– 10072, 2023. 2, 3, 5, 7 [12] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In Pro- ceedings of the IEEE international conference on computer vision, pages 2758–2766, 2015. 2 [13] Bardienus Duisterhof, Lojze Zust, Philippe Weinzaepfel, Vincent Leroy, Yohann Cabon, and Jerome Revaud. Mast3r- sfm: a fully-integrated solution for unconstrained structure- from-motion. arXiv preprint arXiv:2409.19152, 2024. 3 [14] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The KITTI dataset. 32(11): 1231–1237, 2013. 5 [15] Cl´ement Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J Brostow. Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF inter- national conference on computer vision, pages 3828–3838, 2019. 7 [16] Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J Fleet, Dan Gnanapra- gasam, Florian Golemo, Charles Herrmann, et al. Kubric: A scalable dataset generator. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3749–3761, 2022. 4, 5, 1 [17] Jisang Han, Honggyu An, Jaewoo Jung, Takuya Narihira, Junyoung Seo, Kazumi Fukuda, Chaehyun Kim, Sunghwan Hong, Yuki Mitsufuji, and Seungryong Kim. Dˆ 2ust3r: En- hancing 3d reconstruction with 4d pointmaps for dynamic scenes. arXiv preprint arXiv:2504.06264, 2025. 2 [18] Adam W Harley, Zhaoyuan Fang, and Katerina Fragkiadaki. Particle video revisited: Tracking through occlusions using point trajectories. In European Conference on Computer Vi- sion, pages 59–75. Springer, 2022. 2, 5, 7 [19] Richard Hartley and Andrew Zisserman. Multiple view ge- ometry in computer vision. Cambridge university press, 2003. 2, 4 [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. 4 [21] Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xi- aodong Cun, Yong Zhang, Long Quan, and Ying Shan. DepthCrafter: Generating consistent long depth sequences for open-world videos. arXiv preprint arXiv:2409.02095, 2024. 5, 7, 8, 6 [22] Moritz Kappel, Florian Hahlbohm, Timon Scholz, Susana Castillo, Christian Theobalt, Martin Eisemann, Vladislav Golyanik, and Marcus Magnor. D-npc: Dynamic neural point clouds for non-rigid view synthesis from monocular video. arXiv preprint arXiv:2406.10078, 2024. 5 [23] Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Co- tracker: It is better to track together. arXiv preprint arXiv:2307.07635, 2023. 2, 3, 5, 7 [24] Nikita Karaev, Iurii Makarov, Jianyuan Wang, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Co- tracker3: Simpler and better point tracking by pseudo- labelling real videos. arXiv preprint arXiv:2410.11831, 2024. 3 9 [25] Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Met- zger, Rodrigo Caye Daudt, and Konrad Schindler. Repurpos- ing diffusion-based image generators for monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9492– 9502, 2024. 7 [26] Bernhard Kerbl, Georgios Kopanas, Thomas Leimk¨uhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139–1, 2023. 2 [27] Hanyang Kong, Xingyi Yang, and Xinchao Wang. Efficient gaussian splatting for monocular dynamic scene rendering via sparse time-variant attribute modeling. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4374– 4382, 2025. 2 [28] Hanyang Kong, Xingyi Yang, and Xinchao Wang. Gener- ative sparse-view gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 26745–26755, 2025. [29] Hanyang Kong, Xingyi Yang, and Xinchao Wang. Rogsplat: Robust gaussian splatting via generative priors. In Proceed- ings of the IEEE International Conference on Computer Vi- sion, 2025. 2 [30] Johannes Kopf, Xuejian Rong, and Jia-Bin Huang. Ro- bust consistent video depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1611–1621, 2021. 6, 7 [31] Jiahui Lei, Yijia Weng, Adam Harley, Leonidas Guibas, and Kostas Daniilidis. Mosca: Dynamic gaussian fusion from casual videos via 4d motion scaffolds. arXiv preprint arXiv:2405.17421, 2024. 2 [32] Vincent Leroy, Yohann Cabon, and J´erˆome Revaud. Ground- ing image matching in 3d with mast3r. arXiv preprint arXiv:2406.09756, 2024. 3, 6, 1 [33] Qingming Liu, Yuan Liu, Jiepeng Wang, Xianqiang Lv, Peng Wang, Wenping Wang, and Junhui Hou. Modgs: Dy- namic gaussian splatting from causually-captured monocular videos. arXiv preprint arXiv:2406.00434, 2024. 2 [34] David G Lowe. Object recognition from local scale-invariant features. In Proceedings of the seventh IEEE interna- tional conference on computer vision, pages 1150–1157. Ieee, 1999. 2 [35] David G Lowe. Distinctive image features from scale- invariant keypoints. International journal of computer vi- sion, 60:91–110, 2004. 2 [36] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recog- nition, pages 4040–4048, 2016. 2 [37] Raul Mur-Artal, Jose Maria Martinez Montiel, and Juan D Tardos. Orb-slam: A versatile and accurate monocular slam system. IEEE transactions on robotics, 31(5):1147–1163, 2015. 5 [38] Michal Neoral, Jon´aˇs ˇSer`ych, and Jiˇr´ı Matas. Mft: Long- term tracking of every pixel. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 6837–6847, 2024. 7 [39] Richard A Newcombe, Steven J Lovegrove, and Andrew J Davison. Dtam: Dense tracking and mapping in real-time. In 2011 international conference on computer vision, pages 2320–2327. IEEE, 2011. 2 [40] Emanuele Palazzolo, Jens Behley, Philipp Lottes, Philippe Giguere, and Cyrill Stachniss. Refusion: 3d reconstruc- tion in dynamic environments for RGB-D cameras exploit- ing residuals. pages 7855–7862, 2019. 5 [41] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 724–732, 2016. 1, 2 [42] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman R¨adle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024. 1 [43] Peter J Rousseeuw. Least median of squares regression. Journal of the American statistical association, 79(388): 871–880, 1984. 4 [44] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. Orb: An efficient alternative to sift or surf. In 2011 International conference on computer vision, pages 2564– 2571. Ieee, 2011. 2 [45] Agarwal Sameer. Building rome in a day. Proc. ICCV, 2009, 2009. 2 [46] Johannes L Schonberger and Jan-Michael Frahm. Structure- from-motion revisited. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 4104–4113, 2016. 2 [47] Jiahao Shao, Yuanbo Yang, Hongyu Zhou, Youmin Zhang, Yujun Shen, Matteo Poggi, and Yiyi Liao. Learning tem- porally consistent video depth from video diffusion priors. arXiv preprint arXiv:2406.01493, 2024. 7 [48] Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learn- ing rates. In Artificial intelligence and machine learning for multi-domain operations applications, pages 369–386. SPIE, 2019. 5 [49] Colton Stearns, Adam Harley, Mikaela Uy, Florian Dubost, Federico Tombari, Gordon Wetzstein, and Leonidas Guibas. Dynamic gaussian marbles for novel view synthesis of casual monocular videos. arXiv preprint arXiv:2406.18717, 2024. 2 [50] Frank Steinbr¨ucker, J¨urgen Sturm, and Daniel Cremers. Real-time visual odometry from dense rgb-d images. In 2011 IEEE international conference on computer vision work- shops (ICCV Workshops), pages 719–722. IEEE, 2011. 5 [51] J¨urgen Sturm, Nikolas Engelhard, Felix Endres, Wolfram Burgard, and Daniel Cremers. A benchmark for the eval- uation of RGB-D SLAM systems. pages 573–580, 2012. 5, 6 10 [52] Edgar Sucar, Zihang Lai, Eldar Insafutdinov, and An- drea Vedaldi. Dynamic point maps: A versatile repre- sentation for dynamic 3d reconstruction. arXiv preprint arXiv:2503.16318, 2025. 2 [53] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8934–8943, 2018. 2 [54] Yang-Tian Sun, Yihua Huang, Lin Ma, Xiaoyang Lyu, Yan- Pei Cao, and Xiaojuan Qi. Splatter a video: Video gaussian representation for versatile processing. Advances in Neural Information Processing Systems, 37:50401–50425, 2024. 5 [55] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23– 28, 2020, Proceedings, Part II 16, pages 402–419. Springer, 2020. 2, 4, 7 [56] Zachary Teed and Jia Deng. Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. Advances in neu- ral information processing systems, 34:16558–16569, 2021. 6 [57] Zachary Teed, Lahav Lipson, and Jia Deng. Deep patch vi- sual odometry. Advances in Neural Information Processing Systems, 36, 2024. 6 [58] Bill Triggs, Philip F McLauchlan, Richard I Hartley, and Andrew W Fitzgibbon. Bundle adjustment—a mod- ern synthesis. In Vision Algorithms: Theory and Prac- tice: International Workshop on Vision Algorithms Corfu, Greece, September 21–22, 1999 Proceedings, pages 298– 372. Springer, 2000. 2 [59] Qianqian Wang, Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Aleksander Holynski, and Noah Snavely. Tracking everything everywhere all at once. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, pages 19795–19806, 2023. 5 [60] Qianqian Wang, Vickie Ye, Hang Gao, Jake Austin, Zhengqi Li, and Angjoo Kanazawa. Shape of motion: 4d reconstruc- tion from a single video. arXiv preprint arXiv:2407.13764, 2024. 2 [61] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vi- sion made easy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20697– 20709, 2024. 1, 2, 3, 6 [62] Shizun Wang, Xingyi Yang, Qiuhong Shen, Zhenxiang Jiang, and Xinchao Wang. Gflow: Recovering 4d world from monocular video. arXiv preprint arXiv:2405.18426, 2024. 2, 5 [63] Yiran Wang, Min Shi, Jiaqi Li, Zihao Huang, Zhiguo Cao, Jianming Zhang, Ke Xian, and Guosheng Lin. Neural video depth stabilizer. In Proceedings of the IEEE/CVF Interna- tional Conference on Computer Vision, pages 9466–9476, 2023. 7 [64] Yihan Wang, Lahav Lipson, and Jia Deng. Sea-raft: Simple, efficient, accurate raft for optical flow. In European Confer- ence on Computer Vision, pages 36–54. Springer, 2025. 2, 4 [65] Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Br´egier, Gabriela Csurka, Leonid Antsfeld, Boris Chidlovskii, and J´erˆome Revaud. Croco v2: Improved cross-view completion pre- training for stereo matching and optical flow. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, pages 17969–17980, 2023. 4 [66] Changchang Wu. Towards linear-time incremental struc- ture from motion. In 2013 International Conference on 3D Vision-3DV 2013, pages 127–134. IEEE, 2013. 2 [67] Junyu Xie, Charig Yang, Weidi Xie, and Andrew Zisserman. Moving object segmentation: All you need is sam (and flow). In Proceedings of the Asian Conference on Computer Vision, pages 162–178, 2024. 1 [68] Haofei Xu, Jing Zhang, Jianfei Cai, Hamid Rezatofighi, and Dacheng Tao. Gmflow: Learning optical flow via global matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8121–8130, 2022. 2, 4 [69] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiao- gang Xu, Jiashi Feng, and Hengshuang Zhao. Depth any- thing V2. arXiv preprint arXiv:2406.09414, 2024. 5, 7, 8 [70] Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, and Chunhua Shen. Learning to recover 3d scene shape from a single image. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 204–213, 2021. 7 [71] Junyi Zhang, Charles Herrmann, Junhwa Hur, Varun Jam- pani, Trevor Darrell, Forrester Cole, Deqing Sun, and Ming- Hsuan Yang. Monst3r: A simple approach for estimat- ing geometry in the presence of motion. arXiv preprint arXiv:2410.03825, 2024. 2, 3, 5, 6, 7, 1 [72] Zhoutong Zhang, Forrester Cole, Richard Tucker, William T Freeman, and Tali Dekel. Consistent depth of moving objects in video. ACM Transactions on Graphics (ToG), 40(4):1–12, 2021. 5 [73] Zhoutong Zhang, Forrester Cole, Zhengqi Li, Michael Ru- binstein, Noah Snavely, and William T Freeman. Structure and motion from casual videos. In European Conference on Computer Vision, pages 20–37. Springer, 2022. 5, 6, 7 [74] Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu. Particlesfm: Exploiting dense point trajecto- ries for localizing moving cameras in the wild. In European Conference on Computer Vision, pages 523–542. Springer, 2022. 5, 6 11 C4D: 4D Made from 3D through Dual Correspondences Supplementary Material 6. More Visual Results 6.1. 4D Reconstruction Given only a monocular video as input, C4D can reconstruct dynamic scenes along with camera parameters. Visual re- sults are shown in Figure 7. To provide a comprehensive view of the 4D reconstruction, the static regions across all frames are retained, while the dynamic regions from uni- formly sampled frames are also displayed. 6.2. Temporally Smooth Video Depth In Figure 8, we compare the video depth estimation re- sults of C4D with other DUSt3R-based methods, including DUSt3R [61], MASt3R [32], and MonST3R [71]. In addi- tion to producing more accurate depth, C4D demonstrates superior temporal smoothness compared to other methods. To illustrate this, we highlight the y-t depth slices along the vertical red line in the red boxes, showing the depth variation over time. As observed, C4D achieves tempo- rally smoother depth results, thanks to the Point Trajectory Smoothness (PTS) objective. In contrast, other methods ex- hibit zigzag artifacts in the y-t depth slices, indicating flick- ering artifacts in the video depth. 6.3. Motion Mask One of the most critical aspects of reconstructing dynamic scenes is identifying dynamic regions, that is, predicting motion masks. In Figure 9, we provide a qualitative com- parison of motion masks generated by our C4D and the concurrent work MonST3R on the Sintel dataset [3]. This dataset poses significant challenges due to its fast camera motion, large object motion, and motion blur. In the first video, C4D demonstrates its ability to gener- ate reliable motion masks on the Sintel dataset, even when a large portion of the frame content is dynamic. This success is attributed to our proposed correspondence-guided motion mask prediction strategy. In contrast, MonST3R struggles to recognize such dynamic regions under these challenging conditions. In the second video, C4D predicts more com- plete motion masks, whereas MonST3R only generates par- tial results. This improvement is due to C4D’s consideration of multi-frame motion cues in our motion mask prediction strategy, which is crucial for practical scenarios. 7. More Experimental Results 7.1. Motion Segmentation Results Unlike prompt-based video segmentation like SAM2 [42], motion segmentation aims to automatically segment the Method Ours MonST3R [71] FlowP-SAM [67] IoU ↑ 47.89 31.57 42.23 Table 7. Motion segmentation results on DAVIS2016. Note that the evaluation is conducted without Hungarian matching between predicted and ground-truth motion masks. moving regions in the video. We evaluate our method on DAVIS 2016 [41] and compare it with some automatic mo- tion segmentation methods in Tab. 7, where our approach outperforms both MonST3R [71] and the state-of-the-art supervised method, FlowP-SAM [67]. Note that the eval- uation is conducted without Hungarian matching between predicted and ground-truth motion masks. 7.2. Ablation on Different Tracker Variants The tracker needs to predict additional mobility to infer the dynamic mask, which is a more difficult learning prob- lem as it requires understanding spatial relationships. Ta- ble. 8 shows that a sole CNN or 3D-aware encoder strug- gles with multi-tasking, whereas combining both improves performance. 8. More Technical Details 8.1. Dynamic-aware Point Tracker (DynPT) The ground truth used to supervise confidence is defined by an indicator that determines whether the predicted position lies within 12 pixels of the ground truth position. And since there are no currently available labels for mobility, we use the rich annotations provided by the Kubric [16] generator to generate ground-truth mobility labels. Specifically, the “positions” label, which describes the position of an object for each frame in world coordinates, is utilized. As the movements of objects in Kubric are entirely rigid, we determine an object’s mobility as follows: if the tempo- ral difference in the ”position” of an object exceeds a pre- defined threshold (e.g., 0.01), all the tracking points associ- ated with that object are labeled as dynamic (i.e., mobility is labeled as True). It is important to note that although an “is dynamic” la- bel is provided in Kubric, it only indicates whether the ob- ject is stationary on the floor (False) or being tossed (True) at the initial frame. However, some objects may collide and move in subsequent frames. In such cases, the “is dynamic” label does not accurately represent the object’s mobility, ne- cessitating the use of our threshold-based approach. We train DynPT on the training sets of the panning 1 Figure 7. Visual results of 4D reconstruction on DAVIS dataset [41]. C4D can reconstruct the dynamic scene and recover camera parameters from monocular video input. 2 Video C4D (Ours) MonST3R MASt3R DUSt3R Figure 8. Qualitative comparison of video depth on Sintel [3]. We compare C4D with MonST3R, MASt3R, and DUSt3R. To better visualize the temporal depth quality, we highlight y-t depth slices along the vertical red line within red boxes. For optimal viewing, please zoom in. Method MOVi-E Pan. MOVi-E MOVi-F TAP-Vid DAVIS TAP-Vid Kinetics D-ACC ↑ D-ACC ↑ D-ACC ↑ AJ ↑ < δx avg ↑ OA ↑ AJ ↑ < δx avg ↑ OA ↑ DynPT 87.9 94.1 91.5 61.6 75.4 87.4 47.8 62.6 82.3 CE only 82.6 90.4 86.8 60.6 74.3 86.8 46.2 62.1 81.7 3E only 85.4 92.2 90.4 42.4 56.6 73.4 38.9 54.6 70.4 Table 8. Ablation study of different design choices for DynPT. “CE” denotes the use of a CNN encoder, while “3E” refers to the 3D-aware encoder. MOVi-E and MOVi-F datasets. These datasets are chosen for their non-trivial camera movements and motion blur, which closely resemble real-world scenarios. For evalua- tion, in addition to the the panning MOVi-E and MOVi-F datasets, we also evaluate on the MOVi-E dataset to assess the generalization ability of DynPT. During inference, DynPT processes videos by querying a sparse set of points in a sliding window manner to maintain computational efficiency as in [23, 24]. The query points are sampled based on grids: the image is divided into grids of 20 × 20 pixels, and one point with the maximum im- age gradient is sampled from each grid to capture the most distinguishable descriptor. Additionally, one random point is sampled from each grid to ensure diversity and prevent bias towards only high-gradient areas. This combination of gradient-based sampling and random sampling ensures a balanced selection of points, enabling robust and diverse feature extraction across the image. 8.2. Point Trajectory Smoothness (PTS) Objective The primary goal of this objective is to ensure tempo- ral smoothness in the per-frame pointmaps. Directly per- forming dense tracking for every pixel at every frame is computationally expensive. To address this, we pro- pose an efficient strategy for generating dense, smoothed pointmaps. First, we track a sparse set of points and smooth their 3D trajectories using adaptive weighting (Sec 8.2.1). Next, we propagate the displacements resulting from the 3 MonST3R GT C4D Video MonST3R GT C4D Video Figure 9. Qualitative comparison of motion mask on Sintel dataset [3]. We present the motion masks generated by C4D and MonST3R. Video frames and ground-truth motion masks are also included for reference. The white regions indicate dynamic areas. smoothing process to their local neighbors through linear blending (Sec 8.2.2), ultimately producing dense, smoothed pointmaps. This smoothing process is applied in a non-overlapping sliding window manner. For each local window, smooth- ing is performed on an extended window that includes ad- ditional frames on both ends. However, only the smoothed results within the original window are retained. This ap- proach ensures both computational efficiency and temporal consistency. 8.2.1. Trajectory Smoothing with Adaptive Weighting To enhance the smoothness of 3D trajectories while mitigat- ing the influence of outliers, we employ a 1D convolution- based smoothing process with adaptive weights. This method ensures that trajectories are refined effectively with- out over-smoothing salient features. The core steps of the process are described below. Trajectory Representation. The input trajectories are represented as a tensor T ∈RT ×N×C, where T is the num- ber of time frames, N is the number of tracked points, and C is the dimensionality of the coordinates (e.g., C = 3 for 3D trajectories). Smoothing Kernel. A uniform smoothing kernel of size k is defined as: K = 1 k 1k, (7) where 1k is a vector of ones with length k, and we set it to 5. The kernel is normalized to ensure consistent averaging across the kernel window size. 4 Outlier-Aware Weighting. To reduce the influence of outliers, we compute a weight matrix W ∈RT ×N based on the differences between consecutive trajectory points: ∆Tt = ∥Tt −Tt−1∥2, (8) where ∆Tt is the norm of the difference between consecu- tive points. The weights are then defined as: Wt,n = exp (−λ · ∆Tt,n) , (9) where λ is a smoothing factor controlling the decay of weights for larger deviations and we set it to 1. To ensure temporal alignment, the weights are padded appropriately: Wt = ( W1, t = 1, Wt−1, otherwise. (10) Weighted Convolution. To smooth the trajectories, we apply a weighted 1D convolution to each trajectory point: ˜T = Conv1D(T ⊙W, K) Conv1D(W, K) , (11) where ⊙denotes element-wise multiplication. The convo- lution is applied independently for each trajectory and co- ordinate dimension. The output ˜T is a smoothed trajectory tensor with the same shape as the input, ensuring that both global and local trajectory consistency are preserved. 8.2.2. Linear Blend Displacement (LBD) for Point Trans- formation After obtaining the smoothed 3D trajectories of sparse tracking points, we leverage the observation that the dis- placements caused by the smoothing process are approxi- mately consistent within local regions. Inspired by linear blend skinning, we treat the smoothed tracking points as control points. To transform all other 3D points based on the displacements of these control points, we employ a Linear Blend Displacement (LBD) approach. This method calcu- lates proximity-weighted displacements for each point by considering its k nearest control points, ensuring smooth and locally influenced transformations. The detailed steps are described below. Problem Formulation. Given a set of query points X ∈RP1×3, control points C ∈RP2×3, and control dis- placements D ∈RP2×3, the goal is to compute the trans- formed points ˜X ∈RP1×3 using a weighted combination of the control displacements. Here, P1 is the number of query points, and P2 is the number of control points. Nearest Neighbor Search. For each query point, we identify its k nearest control points using the L2 distance. This yields: dj,k = ∥Xj −CIj,k∥2, (12) Ij,k = Indices of the k nearest control points, (13) where dj,k is the squared distance between the j-th query point and the k-th nearest control point. We set k to 4 in our experiments. Weight Computation. We compute proximity-based weights using inverse distance weighting: wj,k = 1 Dj,k , (14) The weights are normalized across the k nearest neighbors: ˆwj,k = wj,k P k′ wj,k′ . (15) Displacement Aggregation. Using the computed weights, the displacement for each query point is aggre- gated as linear blend of control displacements: ∆xj = X k ˆwj,kDIj,k, (16) where dj,k is the displacement of the k-th nearest control point. Point Transformation. Finally, the transformed query points are computed by adding the aggregated displace- ments: ˜Xj = Xj + ∆xj. (17) 8.3. Implementation Details Training Details. We train DynPT for 50,000 steps with a total batch size of 32, starting from scratch except for the 3D-aware encoder, which is initialized from DUSt3R’s pre- trained encoder and kept frozen during training. The learn- ing rate is set to 5e-4, and we use the AdamW optimizer with a OneCycle learning rate scheduler [48]. Inference Details. To ensure fast computation during mo- tion mask calculation, we sample static points only from the latest sliding window of DynPT, as this window already includes the majority of points in the frame. The default tracking window size for DynPT is set to 16, with a stride of 4 frames. For the Point Trajectory Smoothness (PTS) objective, the default window size is 20 frames, extended by adding 5 additional frames on each end to ensure conti- nuity and smoothness. And for longer videos, the window sizes for DynPT and PTS can be further extended to reduce computational costs. Optimization Details. The correspondence-aided opti- mization is performed in two stages. In the first stage, we optimize using the global alignment (GA), camera move- ment alignment (CMA), and camera trajectory smooth- ness (CTS) objectives, with respective weights wGA = 1, wCMA = 0.01, and wCTS = 0.01. During this stage, the op- timization targets the depth maps ˆD, camera poses ˆP and 5 camera intrinsics ˆK. After completing the first stage, we fix the camera pose ˆP and intrinsics ˆK and proceed to op- timize depth maps ˆD only in the second stage. We apply only the point trajectory smoothness (PTS) objective with weight wPTS = 1 to further refine the depth maps ˆD in the second stage. Both stages are optimized for 300 iterations using the Adam optimizer with a learning rate of 0.01. Datasets and Evaluation. Following [21, 71], we sam- ple the first 90 frames with a temporal stride of 3 from the TUM-Dynamics [51] and ScanNet [8] datasets for computa- tional efficiency. For dynamic accuracy evaluation, we use the validation sets of the MOVi-E, Panning MOVi-E, and MOVi-F datasets, comprising 250, 248, and 147 sequences, respectively. Each sequence contains 256 randomly sam- pled tracks spanning 24 frames. The resolution is fixed at 256 × 256, consistent with the TAPVid benchmark [10]. The evaluation metric used is accuracy, which assesses both dynamic (positive) and static (negative) states, defined as: D-ACC = TP + TN TP + TN + FP + FN, (18) where TP denotes true positives, TN denotes true nega- tives, FP denotes false positives, and FN denotes false neg- atives. 6
C4D: 4D Made from 3D through Dual Correspondences Shizun Wang1 Zhenxiang Jiang1 Xingyi Yang2 Xinchao Wang1* 1National 2The Hong Kong Polytechnic University {shizun.wang, , , Monocular Video Per-frame dense Point Cloud & Camera Poses Camera Intrinsics C4D Motion Masks 2D Point Tracking 3D Point Tracking Figure 1. Given a monocular video that contains both camera movement and object movement, C4D can recover the dynamic scene in 4D, including per-frame dense point cloud, camera poses and intrinsic parameters. Video depth, motion masks, and point tracking in both 2D and 3D space are also available in the outputs. Abstract Recovering 4D from monocular video, which jointly estimates dynamic geometry and camera poses, is an inevitably challenging problem. While recent pointmap-based 3D reconstruction methods (e.g., DUSt3R) have made great progress in reconstructing static scenes, directly applying them to dynamic scenes leads to inaccurate results. This discrepancy arises because moving objects violate multiview geometric constraints, disrupting the reconstruction. To address this, we introduce C4D, a framework that leverages temporal Correspondences to extend existing 3D reconstruction formulation to 4D. Specifically, apart from predicting pointmaps, C4D captures two types of correspondences: short-term optical flow and long-term point tracking. We train a dynamic-aware point tracker that provides additional mobility information, facilitating the estimation of motion masks to separate moving elements from the static background, thus offering more reliable guidance for dynamic scenes. Furthermore, we introduce a set of dynamic scene optimization objectives to recover per-frame 3D geometry and camera parameters. Simultaneously, the correspondences lift 2D trajectories into smooth 3D trajec- *Corresponding author. tories, enabling fully integrated 4D reconstruction. Experiments show that our framework achieves complete 4D recovery and demonstrates strong performance across multiple downstream tasks, including depth estimation, camera pose estimation, and point tracking. Project Page: https://littlepure2333.github.io/C4D 1. Introduction Recovering complete 4D representations from monocular videos, which involves estimating dynamic scene geometry, camera poses, and 3D point tracking, is a highly challenging task. While extending 3D reconstruction methods over the time dimension might seem straightforward, achieving accurate and smooth time-varying geometries and consistent camera pose trajectories is far from simple. Recent paradigm shifts in 3D reconstruction, such as DUSt3R [61], have shown significant success in reconstructing static scenes from unordered images. It directly predicts dense 3D pointmaps from images and makes many 3D downstream tasks, like recovering camera parameters and global 3D reconstruction, become easy by just applying global alignment optimization on 3D pointmaps. However, when applied to dynamic scenes, these for1 16 Oct 2025 mulations often produce substantial inaccuracies. This is because their reliance on multi-view geometric constraints breaks down as moving objects violate the assumptions of global alignment. As a result, they struggle to achieve accurate 4D reconstructions in dynamic scenes. Our key insight is that the interplay between temporal correspondences and 3D reconstruction naturally leads to 4D. By capturing 2D correspondences over time, we can effectively separate moving regions from static ones. By calibrating the camera in the static region only, we improve the quality of the 3D reconstruction. In turn, the improved 3D model helps connect these correspondences, creating a consistent 4D representation that integrates temporal details into the 3D structure. This motivation drives our framework, C4D, a framework designed to upgrade the current 3D reconstruction formulation by using temporal Correspondences to achieve 4D reconstruction. Apart from 3D pointmap prediction, C4D captures short-term optical flow and long-term point tracking. These temporal correspondences are essential: they generate motion masks that guide the 3D reconstruction process, while also contributing to optimizing the smoothness of the 4D representation. To achieve this, we introduce the Dynamic-aware Point Tracker (DynPT), which not only tracks points but also predicts whether they are moving in the world coordinates. Using this information, we create a correspondence-guided strategy that combines static points and optical flow to generate motion masks. These motion masks guide the 3D reconstruction by focusing on static regions, enabling more accurate estimation of camera parameters from the point maps and further enhancing geometric consistency. To further improve the 4D reconstruction, we introduce a set of correspondence-aided optimization techniques. These include ensuring the camera movements are consistent, keeping the camera path smooth, and maintaining smooth trajectories for the 3D points. Together, these improvements result in a refined and stable 4D reconstruction that is both accurate and smooth over time. Extensive experiments show that C4D delivers strong performance in dynamic scene reconstruction. When applied to various downstream tasks, such as depth estimation, camera pose estimation, and point tracking, C4D performs competitively, even compared to specialized methods. In summary, our key contributions are as follows: • We introduce C4D, a framework that upgrades the current 3D reconstruction formulation to 4D reconstruction by incorporating two temporal correspondences. • We propose a dynamic-aware point tracker (DynPT) that not only tracks points but also predicts whether a point is dynamic in world coordinates. • We present a motion mask prediction mechanism guided by optical flow and our DynPT. • We introduce correspondence-aided optimization techniques to improve the consistency and smoothness of 4D reconstruction. • We conduct experiments on depth estimation, camera pose estimation, and point tracking, demonstrating that C4D achieves strong performance, even compared to specialized methods. 2. Related Work 2.1. Temporal Correspondences Optical flow represents dense pixel-level motion displacement between consecutive frames, capturing short-term dense correspondences. Modern deep learning methods have transformed optical flow estimation, leveraging large datasets [3, 36], CNNs [12, 53], ViTs [68], and iterative refinement [55, 64], resulting in significant improvements in accuracy and robustness. We leverage the motion information contained in optical flow to generate motion masks in this work. Point tracking aims to track a set of query points and predict their position and occlusion in a video [10], providing long-term sparse pixel correspondences. Tracking Any Point (TAP) methods [6, 11, 18, 23] extract correlation maps between frames and use a neural network to predict tracking positions and occlusions, achieving strong performance on causal videos. While these methods are effective, they all lack the ability to predict the mobility of points in world coordinates, which we achieve in this work. 2.2. 3D Reconstruction Recovering 3D structures and camera poses from image collections has been studied for decades [19]. Classic methods such as Structure-from-motion (SfM) [46] and visual SLAM [9, 39] operate in sequential pipelines, often involving keypoint detection [2, 34, 35, 44], matching [45, 66], triangulation, and bundle adjustment [1, 58]. However, the sequential pipeline is complex and vulnerable to errors in each sub-task. To address these, DUSt3R [61] introduces a significant paradigm shift by directly predicting pointmaps from image pairs, and dense 3D reconstruction can be obtained by a global alignment optimization. 2.3. 4D Reconstruction Since the world is dynamic, 4D reconstruction naturally extends 3D reconstruction. Recent works [5, 7, 17, 27-29, 31, 33, 49, 52, 60, 62] explore 4D reconstruction from monocular video. Building on either 3DGS [26] or pointmap [61] representation, most of these methods are optimizationbased and rely on off-the-shelf priors for supervision, such as depth, optical flow, and tracking trajectories. Concurrent work MonST3R [71] explores pointmap-based 4D reconstruction by fine-tuning DUSt3R on dynamic scene data, whereas we directly use pretrained pointmap-based model 2 Video frames Dynamic-aware Point Tracker (DynPT) Relative dynamic point Relative static point 2D tracking trajectories T Optical Flow Estimator 3D Recon Model For every Pair e = (i, j) Pair-wise optical flow F!→#, F#→! Pair-wise pointmaps X!;%, X#;% *K Camera intrinsic ,P Camera pose ,X Per-frame pointmaps Correspondence -aided Optimization ,T 3D Tracking trajectories Input Construction Sequence Model Inference Intermediates Collection 4D Results Constraints Variables Dynamic masks Figure 2. Overview of C4D. C4D takes monocular video as input and jointly predicts dense 3D pointmaps (Sec. 3.1) and temporal correspondences (Sec. 3.2), including short-term optical flow and long-term point tracking (Sec. 3.2.1). These correspondences are utilized to predict motion masks (Sec. 3.2.2) and participate in the optimization process (Sec. 3.3) with 3D pointmaps to obtain 4D outputs. weights and complement them with correspondence-guided optimization for 4D reconstruction. 3. Method The core idea of our method is to jointly predict dense 3D pointmaps and temporal correspondences from an input video, leveraging these correspondences to improve 4D reconstruction in dynamic scenes. These correspondences are obtained from both short-term optical flow and long-term point tracking. The whole pipeline is shown in Figure 2. We begin by reviewing the 3D reconstruction formulation in Sec. 3.1, which provides dense 3D pointmaps. Next, we introduce our dynamic-aware point tracker (DynPT) in Sec. 3.2.1, designed to track points while also identifying whether they are dynamic in world coordinates. In Sec. 3.2.2, we describe how DynPT is combined with optical flow to estimate reliable motion masks. Finally, Sec. 3.3 details our correspondence-aided optimization, which utilizes pointmaps, optical flow, point tracks, and motion masks to refine the 4D reconstruction. 3.1. 3D Reconstruction Formulation Our method complements the recent feed-forward 3D reconstruction paradigm, DUSt3R [61], and can be applied to any DUSt3R-based model weights [32, 71]. Given a video with T frames {I1, I2, ..., IT }, a scene graph G is constructed, where an edge represents a pair of images e = (In, Im) ∈G. Then DUSt3R operates in two steps: (1) A ViT-based network Φ that takes a pair of two images In, Im ∈RW ×H×3 as input and directly outputs two dense pointmaps Xn, Xm ∈RW ×H×3 with associated confidence maps Cn, Cm ∈RW ×H. Xn, Cn, Xm, Cm = Φ(In, Im) (1) (2) Since these pointmaps are represented in the local coordinate of each pair, DUSt3R employs global optimization to all pairs of pointmaps, to recover global aligned pointmaps {X t ∈RW ×H×3} for all frames t = 1, ..., T. LGA(X, P, σ) = X e∈G X t∈e Ct;e||X t -σePeXt;e|| (2) Where Pe ∈R3×4 and σe > 0 are pairwise pose and scaling. To reduce computational cost, we use a sparse scene graph based on a strided sliding window, as in [13, 61, 71], where only pairs within a local temporal window are used for optimization. While this 3D formulation performs well on static scenes, its performance drops in dynamic scenes, as discussed in Sec. 4.2. This is primarily due to moving objects violating multi-view geometric constraints, which motivates us to extend the current 3D formulation to a 4D one. 3.2. Capturing Dual Correspondences We capture two correspondences to help 4D recovery: longterm point tracking and short-term optical flow. 3.2.1. Dynamic-aware Point Tracker Current 2D point tracking methods like Tracking Any Point (TAP) [10, 11, 23, 24] can robustly track query points in videos. However, they cannot distinguish whether the movement of the tracking point is caused by camera movement or object movement. To segment moving objects in the world coordinate system, we enhance these trackers by enabling them to predict the mobility of tracking points. We introduce the Dynamic-aware Point Tracker (DynPT), which differentiates between motion caused by the camera and true object dynamics. This helps us identify and segment moving objects even when both the camera and the objects are in motion. Tracker Architecture We adopt a similar design of CoTracker [23, 24] to design our DynPT, which is illustrated 3 3D-aware Encoder CNN Extractor Input video Feature maps Transformer Iterative Update M times Query Sampler Position Confidence Visibility Mobility P(0) C(0) V(0) M(0) ∆P ∆C ∆V ∆M P(M) C(M) V(M) M(M) Predictions Figure 3. Architecture of Dynamic-aware Point Tracker (DynPT). For given video input and sampled initial query points, DynPT uses Transformer to iteratively update the tracks with features obtained from both 3D-aware ViT encoder and CNN. in Figure 3. Original CoTracker only uses one CNN [20] to extract features. While to better capture the spatial dynamic relationships, we additionally employ a 3D-aware ViT encoder, which comes from DUSt3R's encoder, to enhance the 3D spatial information [65]. And different from all other TAP methods, DynPT directly predicts one more attribute, mobility, along with other attributes of tracks. Specifically, for an input video of length T, DynPT first extracts each frame's multi-scale features from a 3D-aware encoder and CNN, which are used to construct 4D correlation features Corr to provide richer information for tracking [6]. Given a query point P0 ∈R2 at the first frame, we initialize the track positions Pt with the same position of P0 for all remaining times t = 1, ..., T, and initialize the confidence Ct, visibility Vt and mobility Mt with zeros for all times. Then we iteratively update these attributes with a transformer for M times. At each iteration, the transformer takes a grid of input tokens spanning time T: Gi t = (ηi t-1→t, ηi t→t+1, Ci t, V i t , M i t, Corri t) for every query point i = 1, ..., N, where ηi t→t+1 = η(Pt+1 -Pt) represents Fourier Encoded embedding of per-frame displacements. Inside the Transformer, the attention operation is applied across time and track dimensions. Training and Inference We train DynPT on Kubric [16], a synthetic dataset from which groundtruth mobility labels can be obtained. We use Huber loss to supervise position. And we employ cross-entropy loss to supervise confidence, visibility and mobility. When performing inference on a video, DynPT predicts tracks in a sliding window manner. More details about the DynPT can be found in the supplementary materials. 3.2.2. Correspondence-Guided Motion Mask Estimation The most important part of 4D reconstruction in dynamic scenes is to separate dynamic areas from static areas in world coordinates. To achieve this, we utilize two temporal correspondences: short-term optical flow Fest estimated by off-the-shelf models [55, 64, 68], and long-term point tracking trajectory T predicted by DynPT. Figure 4 shows this strategy of correspondence-guided motion mask prediction. Since DynPT provides mobility predictions of tracks, at Sample static position Frame at t+ 5 Frame at t Optical flow F!→!#$ Tracking points T! Epipolar error map Motion mask Calculate Fundamental Matrix Threshold Motion masks w/ adjacent frames Union Final motion mask M! Figure 4. Correspondence-guided motion mask prediction. The solid circle indicates predicted dynamic points, the hollow circle indicates predicted static points. Adjacent frames are from constructed image pairs containing the current frame. time t, we can retrieve the positions of static points {P j t } where M j t = 0. And given an optical flow F t→t′ from time t to adjacent time t′, we can sample the pixel correspondences of these static points {(P j t , P j t′)}. With these correspondences, we then estimate the fundamental matrix F between the two frames via the Least Median of Squares (LMedS) method [43], which does not require known camera parameters and is robust to outliers. Since the fundamental matrix is estimated solely based on static points, it reflects only the underlying camera motion, unaffected by dynamic objects in the scene. So using this F to calculate the epipolar error map on all correspondences in F t→t′, the area with large error indicates there violates the epipolar constraints, that is, dynamic area. In practice, we compute the error map using the Sampson error [19], which provides a more robust approximation of the epipolar error by accounting for scale and orientation. Then a threshold is applied to obtain the motion mask. While considering a longer temporal range, calculating the motion mask based on only two frames is not sufficient. For example, a person's standing foot may remain still for several frames before lifting off to step, as shown in Figure 4. To address this, we calculate the motion mask of the current frame using adjacent frames from the constructed image pairs that include the current frame t, then take the union of these masks to produce the final motion mask Mt. 3.3. Correspondence-aided Optimization for 4D Based on the Global Alignment (GA) objective described in Sec 3.1, we introduce additional optimization objectives to improve the accuracy and smoothness in dynamic 4 scenes: camera movement alignment, camera trajectory smoothness, and point trajectory smoothness. The optimizable variables are per-frame depthmap Dt, camera intrinsic Kt and camera pose P t = [Rt|T t]. Then we re-parameterize the global pointmaps X t as X t i,j := P t-1h(Kt-1[iDt i,j; jDt i,j; Dt i,j]), where (i, j) is the pixel coordinate and h(·) is the homogeneous mapping. So that, optimizing X t is equivalent to optimizing P t, Kt, Dt. Since global alignment tends to align moving objects to the same position, it can negatively impact camera pose estimation. To address this, and leveraging the fact that optical flow provides a prior on camera motion, we introduce the Camera Movement Alignment (CMA) objective [22, 54, 62, 71, 72]. CMA encourages the estimated ego motion to be consistent with optical flow in static regions. Specifically, for two frames It and It′, we compute the ego-motion field F t→t′ ego as the 2D displacement of X t by moving camera from t to t′. Then we can encourage this field to be close to the optical flow field Ft→t′, in the static regions St =∼Mt, which is the opposite region of the motion mask Mt computed in Sec 3.2.2: LCMA(X) = X e∈G X (t,t′)∈e S · Ft→t′ ego -Ft→t′ 1 (3) The Camera Trajectory Smoothness (CTS) objective is commonly used in visual odometry [37, 50, 71] to enforce smooth camera motion by penalizing abrupt changes in camera rotation and translation between consecutive frames: LCTS(X) = N X t=0 Rt⊤Rt+1 -I F + (Tt+1 -Tt) 2 (4) where ∥· ∥F denotes the Frobenius norm, and I is the identity matrix. Lastly, we propose the Point Trajectory Smoothness (PTS) objective to smooth world coordinate pointmaps over time. Within a local temporal window, we first select 2D tracking trajectories T that remain visible throughout the window and lift them to 3D trajectories. We then smooth these 3D trajectories using a 1D convolution with adaptive weights, where weights are reduced for outlier points based on their temporal deviations. For each frame within the window, we treat the smoothed points as control points and apply a linear blend of control point displacements to transform all other points, weighting each control point's influence based on proximity, resulting in dense smoothed pointmaps e X t. (More details in supplementary.) We then minimize per-frame distance between global pointmaps and smoothed pointmaps using L1 loss: LPTS(X) = N X t=0 ||X t -e X t||1 (5) The complete optimization objective for recovering the 4D scene is: ˆ X = arg min X,P,σ wGALGA(X, σ, P) + wCMALCMA(X) + wCTSLCTS(X) + wPTSLPTS(X) (6) where wGA, wCMA, wCTS, wPTS are the loss weights. The completed outputs of C4D contain world-coordinate pointmaps ˆ X, depthmaps ˆD, camera poses ˆP, camera intrinsics ˆK, motion masks M, 2D tracking trajectories T, and lifted 3D tracking trajectories ˆT. 4. Experiments We evaluate C4D on multiple downstream tasks, comparing it with specialized methods (Sec. 4.3), and 3D formulations (Sec. 4.2). The ablation study in Sec. 4.4 justifies our design choices, and implementation details are provided in supplementary materials. 4.1. Datasets and Metrics We evaluate camera pose estimation on Sintel [3], TUMdynamics [51] and ScanNet [8] following [4, 73, 74]. Sintel is a synthetic dataset featuring challenging motion blur and large camera movements. TUM-Dynamics and ScanNet are real-world datasets for dynamic scenes and static scenes, respectively. We report the metrics of Absolute Translation Error (ATE), Relative Translation Error (RPE trans), and Relative Rotation Error (RPE rot). For depth estimation, we evaluate on Sintel, Bonn [40], and KITTI [14], following [21, 71]. Bonn [40] and KITTI [14] are real-world indoor dynamic scene and outdoor datasets. The evaluation metrics for depth estimation are Absolute Relative Error (Abs Rel), Root Mean Squared Error (RMSE), and the percentage of inlier points δ < 1.25, as used in prior works [21, 69]. For point tracking, we evaluate our method on the TAPVid benchmark [10] and Kubric [16]. TAP-Vid contains videos with annotations of tracking point positions and occlusion. We use the metrics of occlusion accuracy (OA), position accuracy (δx avg), and average Jaccard (AJ) to evaluate this benchmark, following [11, 18, 23, 59]. Kubric is a generator that synthesizes semi-realistic multi-object falling videos with rich annotations, including the moving status of tracking points in world coordinates. To fully evaluate the diverse dynamic patterns in the real world, we use three datasets from Kubric to assess dynamic accuracy (D-ACC): 1) MOVi-E, which introduces simple (linear) camera movement while always "looking at" the center point in world coordinates; 2) Panning MOVi-E, which modifies MOViE with panning camera movement; 3) MOVi-F, similar to MOVi-E but adds some random motion blur. 5 3D Model Weights Optimization Formulation Sintel TUM-dynamics ScanNet (static) ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ DUSt3R Global Alignment 0.416 0.216 18.038 0.127 0.058 2.033 0.060 0.024 0.751 C4D 0.334 0.154 0.948 0.093 0.018 0.906 0.064 0.018 0.570 MASt3R Global Alignment 0.437 0.329 12.760 0.084 0.052 1.245 0.073 0.027 0.706 C4D 0.448 0.199 1.730 0.048 0.012 0.671 0.067 0.018 0.467 MonSt3R Global Alignment 0.158 0.099 1.924 0.099 0.041 1.912 0.075 0.026 0.707 C4D (Ours) 0.103 0.040 0.705 0.071 0.019 0.897 0.061 0.017 0.538 Table 1. Camera pose estimation results across 3D/4D formulations. Evaluation on the Sintel, TUM-dynamic, and ScanNet datasets. The best results are highlighted in bold. Our 4D formulation, C4D, consistently improves the performance based on 3D models. 3D Model Weights Optimization Formulation Sintel Bonn KITTI Abs Rel ↓RMSE ↓δ<1.25 ↑Abs Rel ↓RMSE ↓δ<1.25 ↑Abs Rel ↓RMSE ↓δ<1.25 ↑ DUSt3R Global Alignment 0.502 5.141 54.9 0.149 0.422 84.4 0.129 5.162 84.2 C4D (Ours) 0.478 5.052 57.9 0.143 0.411 84.7 0.126 5.140 85.0 MASt3R Global Alignment 0.370 4.669 57.8 0.174 0.503 78.4 0.092 4.000 89.8 C4D (Ours) 0.379 4.756 58.3 0.168 0.485 78.6 0.092 4.000 89.7 MonSt3R Global Alignment 0.335 4.467 57.5 0.065 0.254 96.2 0.090 4.128 90.6 C4D (Ours) 0.327 4.465 60.7 0.061 0.249 96.5 0.089 4.128 90.6 Table 2. Video depth estimation results across 3D/4D formulations. We evaluate scale-and-shift-invariant depth on Sintel, Bonn, and KITTI. The best results are highlighted in bold. Our 4D fomulation, C4D, consistently improve the performance based on 3D models. 4.2. Comparison across 3D/4D Formulations 3D Baselines We choose the currently available DUSt3Rbased models as our 3D baseline models: 1) DUSt3R [61], trained on millions of image pairs in static scenes, demonstrating impressive performance and generalization across various real-world static scenarios with different camera parameters. 2) MASt3R [32], the follow-up work to DUSt3R, which initializes its weights from DUSt3R and is fine-tuned on the matching task, also using large-scale data from static scenes. 3) MonSt3R [71], which fine-tunes the decoder and head of DUSt3R on selected dynamic scene datasets. The global alignment is the default optimization strategy in the 3D formulation, as described in Sec. 3.3. Results We evaluate the results of camera pose estimation and video depth estimation, as shown in Table 1 and Table 2. Our C4D achieves consistent performance improvements compared to 3D formulation across different 3D model weights. For camera pose estimation, C4D significantly improves performance (e.g., reducing RPEr from 18.038 to 0.948) even on the challenging Sintel dataset, demonstrating the effectiveness of our method. The results on the ScanNet dataset, which consists of static scenes, also show that our method further enhances performance in static environments. C4D also outperforms 3D formulations in terms of video depth accuracy. Moreover, these results highlight a comparison among 3D model weights: DUSt3R and MASt3R perform comparably overall, while MonST3R achieves better results as it is fine-tuned on dynamic scene datasets. 4.3. Comparison with Other Methods Since C4D produces multiple outputs, we compare our method with others specifically designed for individual tasks, including camera pose estimation, video depth estimation, and point tracking. Evaluation on camera pose estimation We compare with methods that can predict camera pose and video depth jointly: Robust-CVD [30], CasualSAM [73], and the concurrent work MonST3R [71]. We re-evaluated MonST3R using their publicly available codes and checkpoints for a fair comparison. For a broader evaluation, we also compare with learning-based visual odometry methods: DROIDSLAM [56], DPVO [57], ParticleSfM [74], and LEAPVO [4]. Note that DROID-SLAM, DPVO, and LEAP-VO require ground-truth camera intrinsics as input, while our C4D can estimate camera intrinsics and camera poses using only a monocular video as input. The results are presented in Table 3, showing that C4D achieves highly competitive performance even compared to specialized visual odometry methods and generalizes well on static scenes, such as the 6 Category Method Sintel TUM-dynamics ScanNet (static) ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ATE ↓RPE trans ↓RPE rot ↓ Pose only DROID-SLAM† 0.175 0.084 1.912 - - - - - - DPVO† 0.115 0.072 1.975 - - - - - - ParticleSfM 0.129 0.031 0.535 - - - 0.136 0.023 0.836 LEAP-VO† 0.089 0.066 1.250 0.068 0.008 1.686 0.070 0.018 0.535 Joint depth & pose Robust-CVD 0.360 0.154 3.443 0.153 0.026 3.528 0.227 0.064 7.374 CasualSAM 0.141 0.035 0.615 0.071 0.010 1.712 0.158 0.034 1.618 MonST3R 0.109 0.043 0.737 0.104 0.223 1.037 0.068 0.017 0.545 C4D-M (Ours) 0.103 0.040 0.705 0.071 0.019 0.897 0.061 0.017 0.538 Table 3. Camera pose evaluation on Sintel, TUM-dynamic, and ScanNet. The best and second best results are highlighted in bold and underlined, respectively. † means the method requires ground truth camera intrinsics as input. "C4D-M" denotes C4D with MonST3R's model weights. Alignment Category Method Sintel Bonn KITTI Abs Rel ↓δ<1.25 ↑Abs Rel ↓δ<1.25 ↑Abs Rel ↓δ <1.25 ↑ Per-sequence scale & shift Single-frame depth Marigold 0.532 51.5 0.091 93.1 0.149 79.6 DepthAnything-V2 0.367 55.4 0.106 92.1 0.140 80.4 Video depth NVDS 0.408 48.3 0.167 76.6 0.253 58.8 ChronoDepth 0.687 48.6 0.100 91.1 0.167 75.9 DepthCrafter 0.292 69.7 0.075 97.1 0.110 88.1 Joint video depth & pose Robust-CVD 0.703 47.8 - - - - CasualSAM 0.387 54.7 0.169 73.7 0.246 62.2 MonST3R 0.335 58.5 0.063 96.2 0.157 73.8 C4D-M (Ours) 0.327 60.7 0.061 96.5 0.089 90.6 Per-sequence scale Video depth DepthCrafter 0.692 53.5 0.217 57.6 0.141 81.8 Joint video depth & pose MonST3R 0.345 55.8 0.065 96.2 0.159 73.5 Joint video depth & pose C4D-M (Ours) 0.338 58.1 0.063 96.4 0.091 90.6 Table 4. Video depth evaluation on Sintel, Bonn, and KITTI. Two types of depth range alignment are evaluated: scale & shift and scaleonly. "C4D-M" denotes C4D with MonST3R's model weights. ScanNet dataset. Evaluation on video depth estimation Table 4 shows the evaluation results on video depth estimation. We compare with various kinds of depth estimation methods: single-frame depth methods such as Marigold [25] and DepthAnything-V2 [69], and video depth methods such as NVDS [63], ChronoDepth [47], and DepthCrafter [21]. Note that these methods predict relative depth, which leads to inconsistencies across multiple views when projecting to world coordinates [15]. We also compare with methods that can predict video depth and camera pose jointly: RobustCVD [30], CasualSAM [73], and MonST3R [71]. The evaluation is conducted using two kinds of depth range alignment: scale & shift, and scale-only. C4D achieves highly competitive results in scale & shift alignment. However, as demonstrated in [70], a shift in depth will affect the x, y, and z coordinates non-uniformly when recovering the 3D geometry of a scene, resulting in shape distortions. Therefore, a more important evaluation is under scale-only alignment, where C4D achieves the best performance. Evaluation on point tracking As part of the C4D outputs, we evaluate point tracking results in Table 5 and compare them with other TAP methods: RAFT [55], TAPNet [10], PIPs [18], MFT [38], TAPIR [11], and Cotracker [23]. Note that all previous TAP methods can only predict the position and occlusion of tracking points, whereas our method can additionally predict mobility, contributing to a robust motion mask prediction as described in Sec. 3.2.2. Despite this more challenging learning objective, our method still achieves comparable performance with SOTA methods and demonstrates high accuracy in predicting mobility. 7 Method MOVi-E Pan. MOVi-E MOVi-F TAP-Vid DAVIS TAP-Vid Kinetics D-ACC ↑ D-ACC ↑ D-ACC ↑ AJ ↑ < δx avg ↑ OA ↑ AJ ↑ < δx avg ↑ OA ↑ Predict Position & Occlusion RAFT - - - 30.0 46.3 79.6 34.5 52.5 79.7 TAP-Net - - - 38.4 53.1 82.3 46.6 60.9 85.0 PIPs - - - 39.9 56.0 81.3 39.1 55.3 82.9 MFT - - - 47.3 66.8 77.8 39.6 60.4 72.7 TAPIR - - - 56.2 70.0 86.5 49.6 64.2 85.0 CoTracker - - - 61.8 76.1 88.3 49.6 64.3 83.3 Predict Position & Occlusion & Mobility DynPT (Ours) 87.9 94.1 91.5 61.6 75.4 87.4 47.8 62.6 82.3 Table 5. Point tracking evaluation results on the TAP-Vid and Kubric (MOVi-E, Panning MOVi-E, and MOVi-F) Datasets. Apart from achieving competitive results with SOTA TAP methods, DynPT offers a unique capability: predicting the mobility of tracking points, which is crucial for determining whether a point is dynamic in world coordinates. Method Camera pose Video depth ATE ↓RPE t ↓RPE r ↓Abs Rel ↓RMSE ↓δ<1.25 ↑ w/o CMA 0.140 0.051 0.905 0.335 4.501 0.582 w/o CTS 0.131 0.058 1.348 0.322 4.442 0.608 w/o PTS 0.103 0.040 0.705 0.327 4.465 0.607 C4D (Ours) 0.103 0.040 0.705 0.327 4.459 0.609 Table 6. Ablation study on the Sintel dataset. 1) 3D trajectories w/o PTS 2) 3D trajectories w/ PTS Frame at time t Frame at time t+15 Video C4D (Ours) MonST3R MASt3R DUSt3R Figure 5. Ablation illustration of Point Trajectory Smoothness (PTS) objective. The temporal depth and 3D trajectories become more smooth after applying PTS objective. 4.4. Ablation Study Ablation results in Table 6 indicate that all loss functions are crucial. The proposed loss suite achieves the best pose estimation with minimal impact on video depth accuracy. Since the temporal smoothness of depth cannot be reflected by the quantitative metrics in Table 6, we show the temporal depth slice changes in Figure 5, following [21, 69], which demonstrates that our PTS objective is effective in producing more temporally smooth depth and 3D point trajectories. Note that while MonST3R also employs the CMA objective, the motion mask used in this objective is crucial, and our moMonST3R GT C4D Video MonST3R GT C4D Video Figure 6. Qualitative comparison of motion mask on Sintel. Our motion mask is more accurate than MonST3R's. tion mask is more accurate than MonST3R's, as shown in Figure 6. Due to page limitations, the ablation of DynPT is provided in the supplement. 5. Conclusion In this paper, we introduce C4D, a framework for recovering 4D representations from monocular videos through joint prediction of dense pointmaps and temporal correspondences. Within this framework, a Dynamic-aware Point Tracker (DynPT), a correspondence-guided motion mask prediction, and correspondence-aided optimization are proposed to achieve accurate and smooth 4D reconstruction and camera pose estimation. Experiments demonstrate that C4D effectively reconstructs dynamic scenes, delivering competitive performance in depth estimation, camera pose estimation, and point tracking. 8 Acknowledgement This project is supported by the National Research Foundation, Singapore, under its Medium Sized Center for Advanced Robotics Technology Innovation. References [1] Sameer Agarwal, Noah Snavely, Steven M Seitz, and Richard Szeliski. Bundle adjustment in the large. In Computer Vision-ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, September 5-11, 2010, Proceedings, Part II 11, pages 29-42. Springer, 2010. 2 [2] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (surf). Computer vision and image understanding, 110(3):346-359, 2008. 2 [3] Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. A naturalistic open source movie for optical flow evaluation. In ECCV, pages 611-625, 2012. 2, 5, 1, 3, 4 [4] Weirong Chen, Le Chen, Rui Wang, and Marc Pollefeys. LEAP-VO: Long-term effective any point tracking for visual odometry. In CVPR, pages 19844-19853, 2024. 5, 6 [5] Xingyu Chen, Yue Chen, Yuliang Xiu, Andreas Geiger, and Anpei Chen. Easi3r: Estimating disentangled motion from dust3r without training. arXiv preprint , 2025. 2 [6] Seokju Cho, Jiahui Huang, Jisu Nam, Honggyu An, Seungryong Kim, and Joon-Young Lee. Local all-pair correspondence for point tracking. In European Conference on Computer Vision, pages 306-325. Springer, 2025. 2, 4 [7] Wen-Hsuan Chu, Lei Ke, and Katerina Fragkiadaki. Dreamscene4d: Dynamic multi-object scene generation from monocular videos. arXiv preprint , 2024. 2 [8] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, pages 5828-5839, 2017. 5, 6 [9] Andrew J Davison, Ian D Reid, Nicholas D Molton, and Olivier Stasse. Monoslam: Real-time single camera slam. IEEE transactions on pattern analysis and machine intelligence, 29(6):1052-1067, 2007. 2 [10] Carl Doersch, Ankush Gupta, Larisa Markeeva, Adria Recasens, Lucas Smaira, Yusuf Aytar, Joao Carreira, Andrew Zisserman, and Yi Yang. Tap-vid: A benchmark for tracking any point in a video. Advances in Neural Information Processing Systems, 35:13610-13626, 2022. 2, 3, 5, 7, 6 [11] Carl Doersch, Yi Yang, Mel Vecerik, Dilara Gokay, Ankush Gupta, Yusuf Aytar, Joao Carreira, and Andrew Zisserman. Tapir: Tracking any point with per-frame initialization and temporal refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1006110072, 2023. 2, 3, 5, 7 [12] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 2758-2766, 2015. 2 [13] Bardienus Duisterhof, Lojze Zust, Philippe Weinzaepfel, Vincent Leroy, Yohann Cabon, and Jerome Revaud. Mast3rsfm: a fully-integrated solution for unconstrained structurefrom-motion. arXiv preprint , 2024. 3 [14] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The KITTI dataset. 32(11): 1231-1237, 2013. 5 [15] Cl ́ement Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J Brostow. Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3828-3838, 2019. 7 [16] Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, et al. Kubric: A scalable dataset generator. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3749-3761, 2022. 4, 5, 1 [17] Jisang Han, Honggyu An, Jaewoo Jung, Takuya Narihira, Junyoung Seo, Kazumi Fukuda, Chaehyun Kim, Sunghwan Hong, Yuki Mitsufuji, and Seungryong Kim. Dˆ 2ust3r: Enhancing 3d reconstruction with 4d pointmaps for dynamic scenes. arXiv preprint , 2025. 2 [18] Adam W Harley, Zhaoyuan Fang, and Katerina Fragkiadaki. Particle video revisited: Tracking through occlusions using point trajectories. In European Conference on Computer Vision, pages 59-75. Springer, 2022. 2, 5, 7 [19] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. 2, 4 [20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 4 [21] Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, and Ying Shan. DepthCrafter: Generating consistent long depth sequences for open-world videos. arXiv preprint , 2024. 5, 7, 8, 6 [22] Moritz Kappel, Florian Hahlbohm, Timon Scholz, Susana Castillo, Christian Theobalt, Martin Eisemann, Vladislav Golyanik, and Marcus Magnor. D-npc: Dynamic neural point clouds for non-rigid view synthesis from monocular video. arXiv preprint , 2024. 5 [23] Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Cotracker: It is better to track together. arXiv preprint , 2023. 2, 3, 5, 7 [24] Nikita Karaev, Iurii Makarov, Jianyuan Wang, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Cotracker3: Simpler and better point tracking by pseudolabelling real videos. arXiv preprint , 2024. 3 9 [25] Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler. Repurposing diffusion-based image generators for monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 94929502, 2024. 7 [26] Bernhard Kerbl, Georgios Kopanas, Thomas Leimk ̈uhler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Trans. Graph., 42(4):139-1, 2023. 2 [27] Hanyang Kong, Xingyi Yang, and Xinchao Wang. Efficient gaussian splatting for monocular dynamic scene rendering via sparse time-variant attribute modeling. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 43744382, 2025. 2 [28] Hanyang Kong, Xingyi Yang, and Xinchao Wang. Generative sparse-view gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 26745-26755, 2025. [29] Hanyang Kong, Xingyi Yang, and Xinchao Wang. Rogsplat: Robust gaussian splatting via generative priors. In Proceedings of the IEEE International Conference on Computer Vision, 2025. 2 [30] Johannes Kopf, Xuejian Rong, and Jia-Bin Huang. Robust consistent video depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1611-1621, 2021. 6, 7 [31] Jiahui Lei, Yijia Weng, Adam Harley, Leonidas Guibas, and Kostas Daniilidis. Mosca: Dynamic gaussian fusion from casual videos via 4d motion scaffolds. arXiv preprint , 2024. 2 [32] Vincent Leroy, Yohann Cabon, and J ́erˆome Revaud. Grounding image matching in 3d with mast3r. arXiv preprint , 2024. 3, 6, 1 [33] Qingming Liu, Yuan Liu, Jiepeng Wang, Xianqiang Lv, Peng Wang, Wenping Wang, and Junhui Hou. Modgs: Dynamic gaussian splatting from causually-captured monocular videos. arXiv preprint , 2024. 2 [34] David G Lowe. Object recognition from local scale-invariant features. In Proceedings of the seventh IEEE international conference on computer vision, pages 1150-1157. Ieee, 1999. 2 [35] David G Lowe. Distinctive image features from scaleinvariant keypoints. International journal of computer vision, 60:91-110, 2004. 2 [36] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4040-4048, 2016. 2 [37] Raul Mur-Artal, Jose Maria Martinez Montiel, and Juan D Tardos. Orb-slam: A versatile and accurate monocular slam system. IEEE transactions on robotics, 31(5):1147-1163, 2015. 5 [38] Michal Neoral, Jon ́aˇs ˇSer`ych, and Jiˇr ́ı Matas. Mft: Longterm tracking of every pixel. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 6837-6847, 2024. 7 [39] Richard A Newcombe, Steven J Lovegrove, and Andrew J Davison. Dtam: Dense tracking and mapping in real-time. In 2011 international conference on computer vision, pages 2320-2327. IEEE, 2011. 2 [40] Emanuele Palazzolo, Jens Behley, Philipp Lottes, Philippe Giguere, and Cyrill Stachniss. Refusion: 3d reconstruction in dynamic environments for RGB-D cameras exploiting residuals. pages 7855-7862, 2019. 5 [41] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 724-732, 2016. 1, 2 [42] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman R ̈adle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint , 2024. 1 [43] Peter J Rousseeuw. Least median of squares regression. Journal of the American statistical association, 79(388): 871-880, 1984. 4 [44] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. Orb: An efficient alternative to sift or surf. In 2011 International conference on computer vision, pages 25642571. Ieee, 2011. 2 [45] Agarwal Sameer. Building rome in a day. Proc. ICCV, 2009, 2009. 2 [46] Johannes L Schonberger and Jan-Michael Frahm. Structurefrom-motion revisited. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4104-4113, 2016. 2 [47] Jiahao Shao, Yuanbo Yang, Hongyu Zhou, Youmin Zhang, Yujun Shen, Matteo Poggi, and Yiyi Liao. Learning temporally consistent video depth from video diffusion priors. arXiv preprint , 2024. 7 [48] Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial intelligence and machine learning for multi-domain operations applications, pages 369-386. SPIE, 2019. 5 [49] Colton Stearns, Adam Harley, Mikaela Uy, Florian Dubost, Federico Tombari, Gordon Wetzstein, and Leonidas Guibas. Dynamic gaussian marbles for novel view synthesis of casual monocular videos. arXiv preprint , 2024. 2 [50] Frank Steinbr ̈ucker, J ̈urgen Sturm, and Daniel Cremers. Real-time visual odometry from dense rgb-d images. In 2011 IEEE international conference on computer vision workshops (ICCV Workshops), pages 719-722. IEEE, 2011. 5 [51] J ̈urgen Sturm, Nikolas Engelhard, Felix Endres, Wolfram Burgard, and Daniel Cremers. A benchmark for the evaluation of RGB-D SLAM systems. pages 573-580, 2012. 5, 6 10 [52] Edgar Sucar, Zihang Lai, Eldar Insafutdinov, and Andrea Vedaldi. Dynamic point maps: A versatile representation for dynamic 3d reconstruction. arXiv preprint , 2025. 2 [53] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8934-8943, 2018. 2 [54] Yang-Tian Sun, Yihua Huang, Lin Ma, Xiaoyang Lyu, YanPei Cao, and Xiaojuan Qi. Splatter a video: Video gaussian representation for versatile processing. Advances in Neural Information Processing Systems, 37:50401-50425, 2024. 5 [55] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 2328, 2020, Proceedings, Part II 16, pages 402-419. Springer, 2020. 2, 4, 7 [56] Zachary Teed and Jia Deng. Droid-slam: Deep visual slam for monocular, stereo, and rgb-d cameras. Advances in neural information processing systems, 34:16558-16569, 2021. 6 [57] Zachary Teed, Lahav Lipson, and Jia Deng. Deep patch visual odometry. Advances in Neural Information Processing Systems, 36, 2024. 6 [58] Bill Triggs, Philip F McLauchlan, Richard I Hartley, and Andrew W Fitzgibbon. Bundle adjustment-a modern synthesis. In Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms Corfu, Greece, September 21-22, 1999 Proceedings, pages 298372. Springer, 2000. 2 [59] Qianqian Wang, Yen-Yu Chang, Ruojin Cai, Zhengqi Li, Bharath Hariharan, Aleksander Holynski, and Noah Snavely. Tracking everything everywhere all at once. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 19795-19806, 2023. 5 [60] Qianqian Wang, Vickie Ye, Hang Gao, Jake Austin, Zhengqi Li, and Angjoo Kanazawa. Shape of motion: 4d reconstruction from a single video. arXiv preprint , 2024. 2 [61] Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, and Jerome Revaud. Dust3r: Geometric 3d vision made easy. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2069720709, 2024. 1, 2, 3, 6 [62] Shizun Wang, Xingyi Yang, Qiuhong Shen, Zhenxiang Jiang, and Xinchao Wang. Gflow: Recovering 4d world from monocular video. arXiv preprint , 2024. 2, 5 [63] Yiran Wang, Min Shi, Jiaqi Li, Zihao Huang, Zhiguo Cao, Jianming Zhang, Ke Xian, and Guosheng Lin. Neural video depth stabilizer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9466-9476, 2023. 7 [64] Yihan Wang, Lahav Lipson, and Jia Deng. Sea-raft: Simple, efficient, accurate raft for optical flow. In European Conference on Computer Vision, pages 36-54. Springer, 2025. 2, 4 [65] Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Br ́egier, Gabriela Csurka, Leonid Antsfeld, Boris Chidlovskii, and J ́erˆome Revaud. Croco v2: Improved cross-view completion pretraining for stereo matching and optical flow. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17969-17980, 2023. 4 [66] Changchang Wu. Towards linear-time incremental structure from motion. In 2013 International Conference on 3D Vision-3DV 2013, pages 127-134. IEEE, 2013. 2 [67] Junyu Xie, Charig Yang, Weidi Xie, and Andrew Zisserman. Moving object segmentation: All you need is sam (and flow). In Proceedings of the Asian Conference on Computer Vision, pages 162-178, 2024. 1 [68] Haofei Xu, Jing Zhang, Jianfei Cai, Hamid Rezatofighi, and Dacheng Tao. Gmflow: Learning optical flow via global matching. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8121-8130, 2022. 2, 4 [69] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything V2. arXiv preprint , 2024. 5, 7, 8 [70] Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, and Chunhua Shen. Learning to recover 3d scene shape from a single image. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 204-213, 2021. 7 [71] Junyi Zhang, Charles Herrmann, Junhwa Hur, Varun Jampani, Trevor Darrell, Forrester Cole, Deqing Sun, and MingHsuan Yang. Monst3r: A simple approach for estimating geometry in the presence of motion. arXiv preprint , 2024. 2, 3, 5, 6, 7, 1 [72] Zhoutong Zhang, Forrester Cole, Richard Tucker, William T Freeman, and Tali Dekel. Consistent depth of moving objects in video. ACM Transactions on Graphics (ToG), 40(4):1-12, 2021. 5 [73] Zhoutong Zhang, Forrester Cole, Zhengqi Li, Michael Rubinstein, Noah Snavely, and William T Freeman. Structure and motion from casual videos. In European Conference on Computer Vision, pages 20-37. Springer, 2022. 5, 6, 7 [74] Wang Zhao, Shaohui Liu, Hengkai Guo, Wenping Wang, and Yong-Jin Liu. Particlesfm: Exploiting dense point trajectories for localizing moving cameras in the wild. In European Conference on Computer Vision, pages 523-542. Springer, 2022. 5, 6 11 C4D: 4D Made from 3D through Dual Correspondences Supplementary Material 6. More Visual Results 6.1. 4D Reconstruction Given only a monocular video as input, C4D can reconstruct dynamic scenes along with camera parameters. Visual results are shown in Figure 7. To provide a comprehensive view of the 4D reconstruction, the static regions across all frames are retained, while the dynamic regions from uniformly sampled frames are also displayed. 6.2. Temporally Smooth Video Depth In Figure 8, we compare the video depth estimation results of C4D with other DUSt3R-based methods, including DUSt3R [61], MASt3R [32], and MonST3R [71]. In addition to producing more accurate depth, C4D demonstrates superior temporal smoothness compared to other methods. To illustrate this, we highlight the y-t depth slices along the vertical red line in the red boxes, showing the depth variation over time. As observed, C4D achieves temporally smoother depth results, thanks to the Point Trajectory Smoothness (PTS) objective. In contrast, other methods exhibit zigzag artifacts in the y-t depth slices, indicating flickering artifacts in the video depth. 6.3. Motion Mask One of the most critical aspects of reconstructing dynamic scenes is identifying dynamic regions, that is, predicting motion masks. In Figure 9, we provide a qualitative comparison of motion masks generated by our C4D and the concurrent work MonST3R on the Sintel dataset [3]. This dataset poses significant challenges due to its fast camera motion, large object motion, and motion blur. In the first video, C4D demonstrates its ability to generate reliable motion masks on the Sintel dataset, even when a large portion of the frame content is dynamic. This success is attributed to our proposed correspondence-guided motion mask prediction strategy. In contrast, MonST3R struggles to recognize such dynamic regions under these challenging conditions. In the second video, C4D predicts more complete motion masks, whereas MonST3R only generates partial results. This improvement is due to C4D's consideration of multi-frame motion cues in our motion mask prediction strategy, which is crucial for practical scenarios. 7. More Experimental Results 7.1. Motion Segmentation Results Unlike prompt-based video segmentation like SAM2 [42], motion segmentation aims to automatically segment the Method Ours MonST3R [71] FlowP-SAM [67] IoU ↑ 47.89 31.57 42.23 Table 7. Motion segmentation results on DAVIS2016. Note that the evaluation is conducted without Hungarian matching between predicted and ground-truth motion masks. moving regions in the video. We evaluate our method on DAVIS 2016 [41] and compare it with some automatic motion segmentation methods in Tab. 7, where our approach outperforms both MonST3R [71] and the state-of-the-art supervised method, FlowP-SAM [67]. Note that the evaluation is conducted without Hungarian matching between predicted and ground-truth motion masks. 7.2. Ablation on Different Tracker Variants The tracker needs to predict additional mobility to infer the dynamic mask, which is a more difficult learning problem as it requires understanding spatial relationships. Table. 8 shows that a sole CNN or 3D-aware encoder struggles with multi-tasking, whereas combining both improves performance. 8. More Technical Details 8.1. Dynamic-aware Point Tracker (DynPT) The ground truth used to supervise confidence is defined by an indicator that determines whether the predicted position lies within 12 pixels of the ground truth position. And since there are no currently available labels for mobility, we use the rich annotations provided by the Kubric [16] generator to generate ground-truth mobility labels. Specifically, the "positions" label, which describes the position of an object for each frame in world coordinates, is utilized. As the movements of objects in Kubric are entirely rigid, we determine an object's mobility as follows: if the temporal difference in the "position" of an object exceeds a predefined threshold (e.g., 0.01), all the tracking points associated with that object are labeled as dynamic (i.e., mobility is labeled as True). It is important to note that although an "is dynamic" label is provided in Kubric, it only indicates whether the object is stationary on the floor (False) or being tossed (True) at the initial frame. However, some objects may collide and move in subsequent frames. In such cases, the "is dynamic" label does not accurately represent the object's mobility, necessitating the use of our threshold-based approach. We train DynPT on the training sets of the panning 1 Figure 7. Visual results of 4D reconstruction on DAVIS dataset [41]. C4D can reconstruct the dynamic scene and recover camera parameters from monocular video input. 2 Video C4D (Ours) MonST3R MASt3R DUSt3R Figure 8. Qualitative comparison of video depth on Sintel [3]. We compare C4D with MonST3R, MASt3R, and DUSt3R. To better visualize the temporal depth quality, we highlight y-t depth slices along the vertical red line within red boxes. For optimal viewing, please zoom in. Method MOVi-E Pan. MOVi-E MOVi-F TAP-Vid DAVIS TAP-Vid Kinetics D-ACC ↑ D-ACC ↑ D-ACC ↑ AJ ↑ < δx avg ↑ OA ↑ AJ ↑ < δx avg ↑ OA ↑ DynPT 87.9 94.1 91.5 61.6 75.4 87.4 47.8 62.6 82.3 CE only 82.6 90.4 86.8 60.6 74.3 86.8 46.2 62.1 81.7 3E only 85.4 92.2 90.4 42.4 56.6 73.4 38.9 54.6 70.4 Table 8. Ablation study of different design choices for DynPT. "CE" denotes the use of a CNN encoder, while "3E" refers to the 3D-aware encoder. MOVi-E and MOVi-F datasets. These datasets are chosen for their non-trivial camera movements and motion blur, which closely resemble real-world scenarios. For evaluation, in addition to the the panning MOVi-E and MOVi-F datasets, we also evaluate on the MOVi-E dataset to assess the generalization ability of DynPT. During inference, DynPT processes videos by querying a sparse set of points in a sliding window manner to maintain computational efficiency as in [23, 24]. The query points are sampled based on grids: the image is divided into grids of 20 × 20 pixels, and one point with the maximum image gradient is sampled from each grid to capture the most distinguishable descriptor. Additionally, one random point is sampled from each grid to ensure diversity and prevent bias towards only high-gradient areas. This combination of gradient-based sampling and random sampling ensures a balanced selection of points, enabling robust and diverse feature extraction across the image. 8.2. Point Trajectory Smoothness (PTS) Objective The primary goal of this objective is to ensure temporal smoothness in the per-frame pointmaps. Directly performing dense tracking for every pixel at every frame is computationally expensive. To address this, we propose an efficient strategy for generating dense, smoothed pointmaps. First, we track a sparse set of points and smooth their 3D trajectories using adaptive weighting (Sec 8.2.1). Next, we propagate the displacements resulting from the 3 MonST3R GT C4D Video MonST3R GT C4D Video Figure 9. Qualitative comparison of motion mask on Sintel dataset [3]. We present the motion masks generated by C4D and MonST3R. Video frames and ground-truth motion masks are also included for reference. The white regions indicate dynamic areas. smoothing process to their local neighbors through linear blending (Sec 8.2.2), ultimately producing dense, smoothed pointmaps. This smoothing process is applied in a non-overlapping sliding window manner. For each local window, smoothing is performed on an extended window that includes additional frames on both ends. However, only the smoothed results within the original window are retained. This approach ensures both computational efficiency and temporal consistency. 8.2.1. Trajectory Smoothing with Adaptive Weighting To enhance the smoothness of 3D trajectories while mitigating the influence of outliers, we employ a 1D convolutionbased smoothing process with adaptive weights. This method ensures that trajectories are refined effectively without over-smoothing salient features. The core steps of the process are described below. Trajectory Representation. The input trajectories are represented as a tensor T ∈RT ×N×C, where T is the number of time frames, N is the number of tracked points, and C is the dimensionality of the coordinates (e.g., C = 3 for 3D trajectories). Smoothing Kernel. A uniform smoothing kernel of size k is defined as: K = 1 k 1k, (7) where 1k is a vector of ones with length k, and we set it to 5. The kernel is normalized to ensure consistent averaging across the kernel window size. 4 Outlier-Aware Weighting. To reduce the influence of outliers, we compute a weight matrix W ∈RT ×N based on the differences between consecutive trajectory points: ∆Tt = ∥Tt -Tt-1∥2, (8) where ∆Tt is the norm of the difference between consecutive points. The weights are then defined as: Wt,n = exp (-λ · ∆Tt,n) , (9) where λ is a smoothing factor controlling the decay of weights for larger deviations and we set it to 1. To ensure temporal alignment, the weights are padded appropriately: Wt = ( W1, t = 1, Wt-1, otherwise. (10) Weighted Convolution. To smooth the trajectories, we apply a weighted 1D convolution to each trajectory point: ̃T = Conv1D(T ⊙W, K) Conv1D(W, K) , (11) where ⊙denotes element-wise multiplication. The convolution is applied independently for each trajectory and coordinate dimension. The output ̃T is a smoothed trajectory tensor with the same shape as the input, ensuring that both global and local trajectory consistency are preserved. 8.2.2. Linear Blend Displacement (LBD) for Point Transformation After obtaining the smoothed 3D trajectories of sparse tracking points, we leverage the observation that the displacements caused by the smoothing process are approximately consistent within local regions. Inspired by linear blend skinning, we treat the smoothed tracking points as control points. To transform all other 3D points based on the displacements of these control points, we employ a Linear Blend Displacement (LBD) approach. This method calculates proximity-weighted displacements for each point by considering its k nearest control points, ensuring smooth and locally influenced transformations. The detailed steps are described below. Problem Formulation. Given a set of query points X ∈RP1×3, control points C ∈RP2×3, and control displacements D ∈RP2×3, the goal is to compute the transformed points ̃X ∈RP1×3 using a weighted combination of the control displacements. Here, P1 is the number of query points, and P2 is the number of control points. Nearest Neighbor Search. For each query point, we identify its k nearest control points using the L2 distance. This yields: dj,k = ∥Xj -CIj,k∥2, (12) Ij,k = Indices of the k nearest control points, (13) where dj,k is the squared distance between the j-th query point and the k-th nearest control point. We set k to 4 in our experiments. Weight Computation. We compute proximity-based weights using inverse distance weighting: wj,k = 1 Dj,k , (14) The weights are normalized across the k nearest neighbors: ˆwj,k = wj,k P k′ wj,k′ . (15) Displacement Aggregation. Using the computed weights, the displacement for each query point is aggregated as linear blend of control displacements: ∆xj = X k ˆwj,kDIj,k, (16) where dj,k is the displacement of the k-th nearest control point. Point Transformation. Finally, the transformed query points are computed by adding the aggregated displacements: ̃Xj = Xj + ∆xj. (17) 8.3. Implementation Details Training Details. We train DynPT for 50,000 steps with a total batch size of 32, starting from scratch except for the 3D-aware encoder, which is initialized from DUSt3R's pretrained encoder and kept frozen during training. The learning rate is set to 5e-4, and we use the AdamW optimizer with a OneCycle learning rate scheduler [48]. Inference Details. To ensure fast computation during motion mask calculation, we sample static points only from the latest sliding window of DynPT, as this window already includes the majority of points in the frame. The default tracking window size for DynPT is set to 16, with a stride of 4 frames. For the Point Trajectory Smoothness (PTS) objective, the default window size is 20 frames, extended by adding 5 additional frames on each end to ensure continuity and smoothness. And for longer videos, the window sizes for DynPT and PTS can be further extended to reduce computational costs. Optimization Details. The correspondence-aided optimization is performed in two stages. In the first stage, we optimize using the global alignment (GA), camera movement alignment (CMA), and camera trajectory smoothness (CTS) objectives, with respective weights wGA = 1, wCMA = 0.01, and wCTS = 0.01. During this stage, the optimization targets the depth maps ˆD, camera poses ˆP and 5 camera intrinsics ˆK. After completing the first stage, we fix the camera pose ˆP and intrinsics ˆK and proceed to optimize depth maps ˆD only in the second stage. We apply only the point trajectory smoothness (PTS) objective with weight wPTS = 1 to further refine the depth maps ˆD in the second stage. Both stages are optimized for 300 iterations using the Adam optimizer with a learning rate of 0.01. Datasets and Evaluation. Following [21, 71], we sample the first 90 frames with a temporal stride of 3 from the TUM-Dynamics [51] and ScanNet [8] datasets for computational efficiency. For dynamic accuracy evaluation, we use the validation sets of the MOVi-E, Panning MOVi-E, and MOVi-F datasets, comprising 250, 248, and 147 sequences, respectively. Each sequence contains 256 randomly sampled tracks spanning 24 frames. The resolution is fixed at 256 × 256, consistent with the TAPVid benchmark [10]. The evaluation metric used is accuracy, which assesses both dynamic (positive) and static (negative) states, defined as: D-ACC = TP + TN TP + TN + FP + FN, (18) where TP denotes true positives, TN denotes true negatives, FP denotes false positives, and FN denotes false negatives. 6
2510.14957
Phantom Mirage from Axion Dark Energy Rayne Liu,1 Yijie Zhu,2 Wayne Hu,1 and Vivian Miranda3 1Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637, USA 2Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA 3C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794, USA (Dated: October 17, 2025) Supernova (SN) and baryon acoustic oscillation (BAO) distance measures have recently provided hints that the dark energy is not only dynamical but apparently evolves from normal to phantom dark energy between redshifts 0 < z < 1. A normal axion dark energy component in the mass range just below the Hubble scale can mimic a phantom component by appearing as dark energy at z = 1 and dark matter at z = 0, raising the possibility of a phantom mirage. We show that there is a wide range of axion dark energy contributions that can resolve the SN-BAO tension as well as thawing quintessence does, leaving BAO tension with the cosmic microwave background (CMB) for the distance measures from z ∼1 to recombination to be resolved at high redshifts. With axions, raising the optical depth to reionization to τ ≈0.1 works essentially as well as w0 −wa phantom dark energy for all but the lowE CMB data, with a remaining ∆χ2 ∼−16 compared with ΛCDM, whereas a small spatial curvature of ΩK ∼0.003 can largely relax the full SN-BAO-CMB tension with a total ∆χ2 ∼−12. I. INTRODUCTION Recent distance-redshift measurements of Supernova (SN) Type Ia [1] and baryon acoustic oscillations (BAO) [2, 3] in conjunction with the cosmic microwave back- ground (CMB) [4] data have raised the possibility that the dark energy, which accelerates the expansion, is not only dynamical, but may also be best fit as a compo- nent that evolves from normal matter to matter that at least apparently violates the null energy condition, dubbed phantom dark energy [5–27]. Such an exotic component suggests a dark energy equa- tion of state parameter wde that evolves from wde < −1 to wde > −1 and places strong conditions on physical candidates in order not to exhibit ghost and gradient instability, which typically requires fundamental modi- fications to gravitational or dark sector forces [28–33], see also [34–46] for recent assessments). On the other hand, the well-established agreement with high redshift predictions in ΛCDM from the CMB indicates that these drastic modifications have the curious property that the strong deviations in the normal and phantom phases must nearly cancel each other so as to have similar end- points to the expansion history as ΛCDM [29, 47], im- plying that one or the other may be a mirage. It is therefore important to examine other ways in which this implied expansion history can occur without ever requiring a phantom dark energy component: that phantom dark energy is a mirage. One well-studied pos- sibility is a coupling between the dark matter and dark energy, or the so-called coupled quintessence, where the dark energy transfers some of its energy density to the dark matter so that if we analyze its expansion history assuming decoupled cold dark matter, we would infer a phantom equation of state for the effective dark energy density [48–52]. Similar to the phantom models, these models still typically introduce new forces between dark matter and dark energy in order to mediate the coupling. In this work, we study the ability of axions to provide a similar phantom mirage without introducing any ad- ditional couplings in the dark sector, requiring only an additional cosmological constant or normal quintessence. It is well known that a light scalar field acts as a contri- bution to the cosmological constant when it is frozen on its potential by Hubble friction, and once released acts instead as dark matter. The QCD axion, for example, is a leading candidate to compose of all of the dark mat- ter. Moreover, axions as all of the dark energy provide a technically natural solution through their shift sym- metry as to why their mass can stay of order the very small ma ∼H0 ∼10−33 eV required [53, 54]. If instead the mass is only somewhat larger than this Hubble scale value, then the transition between dark energy and dark matter behavior can occur between redshifts 0 < z < 1 and provide a phantom mirage for SN and BAO dis- tance measures, and allow non-dark-energy explanations for the BAO-CMB tension in ΛCDM to accommodate all three types of data. In Sec. II, we give the SN, BAO and CMB datasets, axion dark energy parameters and statistical methods employed. In Sec. III, we discuss why axion dark en- ergy can produce the mirage of phantom dark energy and even phantom crossing. We present the compari- son with the data in Sec. IV and discuss these results in V. In Appendix A, we provide details on our fast and efficient emulation approach for predicting axion model observables. Throughout we employ units where ℏ= c = 1 and define lg(ma) = log10(ma/1eV) but explicitly write log10 elsewhere. II. DATASETS AND METHODS We study the ability of axions to relax the tension be- tween the following baseline data sets (see Appendix A arXiv:2510.14957v1 [astro-ph.CO] 16 Oct 2025 2 for calculational details and definitions): • CMB: Planck 2018 [4] PR3 lowT, lowE, and plik TTTEEE lite likelihoods, Planck PR4 lensing [55] and ACT DR6 lensing baseline data [56]. When comparing datasets to models we take the ΛCDM Planck 2018 TTTEEE+lowT+lowE+lensing mean parameters [4] (their Tab. 2) as the fiducial model, denoted “fid” for shorthand. • BAO: DESI DR2 [2] distance measures, using their Tab. 4 for DV /rd for the lowest redshift bin and DM/rd, DH/rd, and their cross-correlation rM,H for all other bins. When plotting any observable in units of the predictions in the fiducial model we denote, e.g. DM(z) [fid] = DM(z)/DM,fid(z). • SN: DESY5 supernovae distance modulus µ [1]. The likelihood analysis uses the unbinned data but for plotting purposes we bin in redshift using in- verse covariance weights and compare to the fidu- cial model after optimizing the unkwown absolute magnitude. We choose the most recent BAO and SN data sets as they exhibit the strongest evidence for phantom dark energy and hence the largest challenge to resolve with axions. We call the best fit ΛCDM model to these datasets the “baseline ΛCDM” model, which is distinct from the Planck 2018 fiducial model due to the compromises in parameter values to fit these datasets. In addition to the usual 6 ΛCDM parameters (initial curvature spectrum amplitude As, spectral tilt ns, an- gular size of the sound horizon θs, baryon density Ωbh2, cold dark matter density Ωch2 and optical depth to reion- ization τ) with their usual uninformative priors [4], we add the axion mass ma with a flat, range bound prior of −35 < lg(ma) < −31.5 and their energy density Ωah2 with flat uninformative priors. In an extension to the baseline analysis to help resolve CMB+BAO tension, we also consider the spatial curvature ΩK with flat uninfor- mative priors. For the parameter estimation, we employ a fast and efficient emulator of the axion Einstein-Boltzmann code AxiECAMB1 [57] as detailed in Appendix A. In addition to Markov Chain Monte Carlo parameter posteriors using Cobaya2, we show the profile likelihood as ∆χ2 at maximum posterior using the Cocoa3 imple- mentation of the Procoli algorithm [58]. Specifically for each lg(ma), we maximize the posterior ln P over the other parameters, remove the prior ln L = ln P −ln Π and display the change in −2 ln L over that of the base- line ΛCDM model. 1 https://github.com/Ra-yne/AxiECAMB 2 https://github.com/CobayaSampler/cobaya 3 https://github.com/CosmoLike/cocoa z 0.05 0.10 0.15 0.20 0.25 8πGρ/3H2 0 ρeff(z) ρde(z) ρa(z) ρa(0)(1 + z)3 0.0 0.2 0.4 0.6 0.8 z −1.5 −1.0 −0.5 0.0 0.5 1 + w 1 + weff 1 + wax 0 FIG. 1. Phantom mirage mechanism. Top panel: true energy densities of the axion ρa and dark energy ρde compared with the effective dark energy ρeff that would be assumed by ana- lyzing this case as CDM and dark energy. Bottom panel: the inferred effective equation of state weff < −1 at high redshift despite wa > −1 approaching a CDM-like wa ∼0 at z = 0. Here lg(ma) = −32.5, fde = 0.1. III. PHANTOM MIRAGE Distance measures from SN, BAO and the CMB de- pend only on the expansion history and not the theoreti- cal division of the dark sector into dark matter and dark energy components. An incorrect assumption about the dark matter can lead to an incorrect inference of phantom dark energy, which we call a phantom mirage. A well known example of the phantom mirage is pro- vided by coupled quintessence [48–50]. In this case, cou- pling between the dark energy and dark matter transfers energy from the former to the latter. When analyzed un- der the assumption of non-interacting cold dark matter, the remaining effective dark energy gives the mirage of a phantom component with w < −1. Axions with ma ≳H0 provide another example, but one that does not require an additional coupling. At early times when H ≫H0, Hubble friction prevents the axion field from rolling in its potential causing its energy density to behave as an addition to the cosmological con- stant whereas at late times the field oscillates around the quadratic minimum and behaves as dark matter. A key difference with coupled quintessence is that for the ax- ion phantom mirage, the transition from dark energy to dark matter does not involve coupling between the two 3 but rather a separate component that mimics their re- spective behaviors at different redshifts. To see that this produces the same expansion history as phantom dark energy and cold dark matter, define the effective dark energy density as that which would be inferred by considering axions as a contribution to CDM at the present. Given a true dark energy density ρde, we would instead infer an effective dark energy of ρeff(z) ≡ρde(z) + ρa(z) −ρa(0)(1 + z)3 (1) so that the expansion history remains the same. The equation of state of the effective dark energy is defined by 1 + weff = 1 3 d ln ρeff d ln(1 + z) (2) = (1 + wde)ρde + (1 + wa)ρa −ρa(0)(1 + z)3 ρeff , where wde and wa are the true dark energy and axion equations of state respectively. At z = 0, weff ≈wde, but for higher z when wa < 0 and approaches −1, while ρeff > 0, the negative contribution from the incorrect matterlike assumption for axions drives weff < −1 which implies the mirage of a phantom. We can interpret the negative contribution to ρeff as reducing the high z dark energy, which makes the effective dark energy grow as the universe expands. If wde > −1, then this also implies a phantom crossing where ρeff switches from growing to decaying with the expansion. However, we start with the simplest assumption that the true dark energy is a cosmological constant where wde = −1, and hence the case where the effective dark energy is always a phantom, at least when time averaged over axion oscillations. We illustrate this behavior in Fig. 1 (top panel) with an example where lg(ma) = −32.5, close to the mass where the equation of state at the present first reaches wa(0) = 0. More precisely, with the specific choice of h = 0.674, Ωde = 0.311 and fa = 0.1 employed here, wa(0) = 0 at lg(ma) = −32.45. Notice that by z = 1, the axions have already reached the frozen regime and an extrapolation as matter would overestimate the mat- ter density. The effective energy density then decreases at high redshift implying a phantom equation of state weff < −1 even though wa > −1 and wde = −1, see Fig. 1 (bottom panel). Phrased alternatively, the expan- sion history mimics a low Ωm model at the high redshifts z ≳0.5 relevant for BAO and a high Ωm for the low redshifts z ≲0.5 relevant for SN. The axion phantom mirage can therefore reduce the tension between BAO and SN without having any phantom component. Fur- thermore, by having only a small fraction fde of dark energy in axions but with a larger change in their equa- tion of state, the axion phantom mirage can produce a more rapid change in weff than a model with just a single component of thawing quintessence. 0.00 0.25 0.50 0.75 1.00 fde baseline ΛCDM baseline axion 0.6850 0.6875 0.6900 0.6925 0.6950 Ωde = ΩΛ + Ωa −34.0 −33.5 −33.0 −32.5 −32.0 −31.5 lg(ma) −6 −4 −2 0 ∆χ2 FIG. 2. Likelihood profile as a function of axion mass or ∆χ2 relative to the baseline ΛCDM model for the baseline SN+BAO+CMB analysis (lower panel). Similar improve- ments over ΛCDM occur in the axion phantom mirage range −33.5 ≲lg(ma) ≲−32.3 but with very different axion con- tributions to the dark energy fde = Ωa/Ωde (upper panel). For much smaller masses all fde cases behave as ΛCDM with the same total Ωde = ΩΛ + Ωa (middle panel) and for much higher masses the profile value for fde is small and also pre- dicts ΛCDM observables up to the accuracy of the emulator discussed in Appendix A. IV. AXION LANDSCAPE A. Baseline analysis We begin our analysis with the simplest scenario where axions are the only addition to the standard ΛCDM paradigm and all data sets are taken at face value, which we call the baseline analysis. In Fig. 2 (bottom panel), we show the ∆χ2 profile in the axion mass ma relative to the baseline ΛCDM model, as well as the values of the axion fraction of the dark energy fde (top panel) and total dark energy Ωde = ΩΛ + Ωa (middle panel) that correspond to these 4 name lg(ma) fde extension ∆χ2 ∆χ2 nolowE baseline ΛCDM – 0 – 0 0 w0 −wa – 0 w0 = −0.755, wa = −0.831 −20.5 −18.8 baseline −32.9 0.996 – −7.1 −8.8 −32.5 0.0797 – −5.5 −6.7 no lowE −32.9 0.997 τ = 0.0999 – −16.3 −32.5 0.117 τ = 0.101 – −15.1 curvature −32.9 0.84 ΩK = 2.90 × 10−3 −12.4 −11.9 −32.5 0.0846 ΩK = 2.95 × 10−3 −11.2 −10.5 TABLE I. Model parameters in the baseline and extended analyses with high optical depth τ or spatial curvature ΩK. All ∆χ2 values are relative to the baseline ΛCDM model. For axion cases the best fit lg(ma) = −32.9 and the high mass end of the axion phantom mirage lg(ma) = −32.5 are shown and generally perform comparably despite the very different dark energy fraction fde. High τ models drop the lowE Planck 2018 data and fit the rest of the SN+BAO+CMB data almost as well as w0 −wa. 0.2 0.4 0.6 0.8 1.0 z −0.05 0.00 0.05 0.10 0.15 µ −µfid SN data baseline ΛCDM baseline lg(ma) = −32.9 baseline lg(ma) = −32.5 FIG. 3. SN data compared with models in the baseline anal- ysis. The baseline ΛCDM model fails to fit the relative mag- nitudes of the lowest redshift SN vs higher redshifts whereas both axion cases lg(ma) = −32.9 (fde ≈1) and lg(ma) = 32.5 (fde ≈0.08) capture the upturn. profile models. Note that for lg(ma) ≲−33.5, despite the apparent changes in the value of fde for different ma, all fde for the given Ωde fit essentially equally well, with the changes in ∆χ2 being much less than unity as the profile shows. This is because in this ma ≪H0 limit, the axion field is still frozen on its potential at the present. In the range −33.5 ≲lg(ma) ≲−32.3, cases with finite 1 > fde > 0 fit the data better than ΛCDM. The low mass end of this regime represents the standard thaw- ing quintessence case fde ∼1 where all of the dark en- ergy is in a single scalar field component. Interestingly, the higher masses in this range provide nearly as good a fit but with a steeply decreasing fde. This extension to higher masses and lower fractions displays the axion phantom mirage discussed in Sec. III where the axion behaves increasingly like dark matter at z = 0. Given the nearly equally good fits across this ax- ion phantom mass range, we select two representative cases with very different fde and examine the origin of the preferences in the data. The relevant parameters of these models are given in Tab. I. The first is the global minimum at lg(ma) = −32.9 and fde ≈1, with ∆χ2 = −7.1 compared to ΛCDM and the second is lg(ma) = −32.5 and fde ≈0.08, with ∆χ2 = −5.5. Both are comparable to the improvements in the cal- ibrated thawing quintessence models where wde(z) = w0 −1.58(1 + w0)z/(1 + z) studied in Ref. [5]. In particular both models provide comparable fits to the SN data, shown in Fig. 3, especially the upturn at the lowest redshift bin, despite their different fde. This is a consequence of the larger mass producing a much sharper change in distances for a given fde due to its equation of state quickly approaching a matter like value of wa ∼0. Baseline ΛCDM has the opposite trends with redshift given that it is optimized to SN+BAO+CMB and does not fit the SN data alone well. In Fig 4, we show the comparison to BAO distance measures. Again the two cases fit BAO comparably well but notice that the fits are also comparable to baseline ΛCDM, especially for z > 0.5 where the data are the most constraining. In particular around z ∼0.8 all three mod- els overpredict DM(0.8)/rd by around a percent. This is a consequence of the high redshift tension between BAO and CMB given the calibration of rd provided by the latter and its inference of DM(z∗), the distance to re- combination at z∗, through the precise measurement of θ∗, the angular size of the sound horizon. Axions and more generally thawing dark energy models cannot effi- ciently change DM(z∗) −DM(0.8) = R z∗ 0.8 dz/H(z) in a flat universe. The CMB calibration of rd is increasingly driven by CMB lensing measurements as ground based data im- prove. In Fig. 5, we show the comparison of the models to the data for the CMB lensing power spectrum Cϕϕ ℓ relative to the fiducial model. Even in the fiducial model the lensing amplitude is slightly low compared with the data for the reconstruction and smoothing of the acoustic peaks (see e.g. [59] for a recent AL rescaling type assess- ment). Notice that with SN+BAO+CMB, all three best 5 1 2 3 z 0.950 0.975 1.000 1.025 1.050 DV /rd [fid] baseline ΛCDM baseline lg(ma) = −32.9 baseline lg(ma) = −32.5 BAO data 1 2 3 z DM/rd [fid] 1 2 3 z DH/rd [fid] FIG. 4. BAO data compared with models in the baseline analysis. While the baseline ΛCDM lacks the low z upturn of the axion cases, all three fit the high-z BAO data comparably well. Here and in other figures “[fid]” stands for in units of the same quantity in the fiducial model. Starred datapoints represent those which are used in the likelihood whereas DV for all but the lowest redshift bin is redundant with DM and DH. 102 103 ℓ 0.7 0.8 0.9 1.0 1.1 1.2 1.3 Cφφ ℓ [fid] baseline ΛCDM baseline lg(ma) = −32.9 baseline lg(ma) = −32.5 Planck PR4 ACT DR6 FIG. 5. CMB lensing data compared with models in the base- line analysis. The baseline ΛCDM and axion cases all reduce the lensing amplitude so as to fit the BAO data reflecting tension in the BAO+CMB data. fit curves further underpredict lensing relative to the fidu- cial model and are a slightly worse fit to the data. This is again the consequence of the compromise between the BAO and CMB data. On the CMB side, to fit the amplitude of the lens- ing power spectrum and the amplitude and smoothing of the acoustic peaks simultaneously, the cold dark mat- ter Ωch2 must remain relatively high and rd relatively low. This parameter also controls the BAO vs. CMB distances DM(z∗) −DM(0.8), with BAO measurements favoring the lower Ωch2. The lack of freedom to adjust these relationships causes the tension between the BAO and CMB data. We can see this directly in Fig. 6. Here we show the posterior constraints for fde, DM(0.8)/rd, rd for ΛCDM vs. axions with lg(ma) = −32.5. In ΛCDM, the tension between the CMB and BAO distance measures appears 0.04 0.10 0.16 fde 0.995 1.000 1.005 rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 1.000 1.005 rd [fid] CMB, ΛCDM SN+CMB, ΛCDM SN+BAO+CMB, ΛCDM SN+BAO+CMB, lg(ma) = −32.5 FIG. 6. BAO+CMB tension parameters vs. axion dark en- ergy fraction in the baseline analysis (68%, 95% CL contours). BAO data prefer a low value for DM(0.8)/rd than in the fiducial CMB only analysis and this tension is exacerbated by the inclusion of SN so that the baseline ΛCDM analy- sis of SN+BAO+CMB pushes constraints into the tail of the SN+CMB posterior. The axion case with lg(ma) = −32.5 shows a 95% exlcusion of fde = 0 due to the better fit to the lowest z SN but does not appreciably change the BAO+CMB tension between z = 0.8 and recombination. as a preference for high DM(0.8)/rd and low rd in the former vs. low DM(0.8)/rd in the latter. Adding in SN to the CMB, while still consistent in ΛCDM, makes the tension worse. This is the well known SN+BAO discrep- ancy in Ωm and BAO+CMB shift in H0rd but phrased 6 0.1 0.2 fde 1.000 1.005 rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 1.000 1.005 rd [fid] lg(ma) = −32.5 SN+BAO+CMB, no lowE SN+CMB, no lowE SN+BAO+CMB, ΩK > 0 FIG. 7. BAO+CMB tension parameters vs. axion dark en- ergy fraction in the high-z extension analyses. Tension in the DM(0.8)/rd, rd plane can be addressed by increasing τ, here through dropping the lowE Planck 2018 data, or by adding spatial curvature. The former allows the CMB cal- ibration of rd to shift whereas the latter changes the relation between DM(0.8) and the distance to recombination. Both provide better fits to SN+BAO+CMB than the baseline case and retain a preference for fde > 0 from SN at this mass, lg(ma) = −32.5. more directly in terms of quantities relevant to BAO and hence in a more model independent fashion that is useful for dynamical dark energy as well. When BAO are added to SN+CMB, DM(0.8)/rd and rd are pushed to the extremes of what is allowed by the latter, reflecting the tension in the data sets. On the other hand the improvement from better fitting the SN is enough to place fde = 0 outside the 95% CL regions. Thus while the axion phantom mirage provides a good fit to SN for a wide range of fractional contributions to the dark energy from 5% to 100%, it cannot re- solve all of the tension between SN+BAO+CMB but in- stead performs about as well as the calibrated thawing quintessence class. B. High-z extensions We have seen that a wide range of axion contributions to the dark energy fit the SN+BAO+CMB data as well as calibrated thawing quintessence but fail to address the BAO+CMB tension. This is because by construc- tion such axions mainly affect distances to low redshift z < 0.5, leaving the high redshift universe to resemble ΛCDM, especially in the distance between the best BAO 0.2 0.4 0.6 0.8 1.0 z −0.05 0.00 0.05 0.10 0.15 µ −µfid SN data baseline ΛCDM no lowE lg(ma) = −32.5 curvature lg(ma) = −32.5 w0 −wa FIG. 8. SN data compared with models in the extended anal- yses, no lowE and curvature at lg(ma) = −32.5, vs w0 −wa. The baseline ΛCDM case is included for reference. All three models fit the SN data nearly equally as well. constraints around z ∼0.8 and recombination. On the other hand, it is well known that the BAO+CMB tension alone can be resolved without dy- namical dark energy by either changing the CMB cali- bration of rd or the distance between z ∼0.8 and recom- bination, but these solutions fail to fit SN data at much lower redshifts. When such solutions are combined with the axion phantom mirage, they jointly address the full SN+BAO+CMB tension. As has been emphasized in Ref. [60] (see also [61–66]), in ΛCDM the CMB+BAO tension rests on the inabil- ity to change the cold dark matter density Ωch2 given the measurements of the CMB lensing once the optical depth to reionization τ is constrained by the Planck lowE polarization data. In combination with Planck measure- ment of the amplitude of the acoustic peaks, which is controlled by Ase−2τ, the optical depth constraint pre- vents the amplitude As of the curvature power spectrum, from adjusting the lensing amplitude to fit the data and instead fixes Ωch2. Even without CMB lensing recon- struction, lensing constraints from the smoothing of the acoustic peaks comparably limit this ability, which sug- gests that the tension is not due to unknown systematics in the lensing reconstruction. With Ωch2 fixed, ΛCDM no longer has the freedom to adjust DM(z∗) −DM(0.8). These constraints on τ can be somewhat relaxed in ex- tended reionization models [67] or alternately by reducing the large scale primodial power spectrum from inflation [60, 68]. There may also be unknown systematics in this challenging measurement [69, 70]. Here we remain ag- nostic about the physical origin of this solution and sim- ply test it by removing the lowE likelihood contribution in the data following [60, 61]. Doing so, the remaining CMB data in ΛCDM give a lower Ωch2 = 0.1186±0.0013 vs. 0.12 in the fiducial model and a better fit to TTTEEE in particular. 7 1 2 3 z 0.950 0.975 1.000 1.025 1.050 DV /rd [fid] baseline ΛCDM no lowE lg(ma) = −32.5 curvature lg(ma) = −32.5 w0 −wa BAO data 1 2 3 z DM/rd [fid] 1 2 3 z DH/rd [fid] FIG. 9. BAO data compared with models in the extended analyses, no lowE and curvature at lg(ma) = −32.5, vs w0 −wa. The no lowE case fits nearly as well as w0 −wa whereas the curvature case fits slightly worse at high-z but still better than the baseline ΛCDM model. 102 103 ℓ 0.7 0.8 0.9 1.0 1.1 1.2 1.3 Cφφ ℓ [fid] baseline ΛCDM no lowE lg(ma) = −32.5 curvature lg(ma) = −32.5 w0 −wa Planck PR4 ACT DR6 FIG. 10. CMB lensing data compared with models in the extended analyses, no lowE and curvature at lg(ma) = −32.5, vs w0 −wa. The no lowE case fits as well as w0 −wa whereas the curvature case fits as well as the baseline ΛCDM model. In Fig. 7, we show the posterior constraints in fde, DM(0.8)/rd, rd when the lowE data is dropped. Note that the SN+CMB constraints now widen to include more of the BAO favored regime reflecting the reduc- tion in BAO+CMB tension. This occurs mainly due to the increase in rd from the ability to lower Ωch2. In this case the best fit at lg(ma) = −32.9 and fde ≈1 has an improvement of ∆χ2 nolowE = −16.3, very close to the best fit wde(z) = w0 + waz/(1 + z) phantom model where ∆χ2 nolowE = −18.8 using CAMB4 for pre- dictions (see Tab. I and Appendix A). Thus by allowing the optical depth to reionization to be raised to τ ≈0.1, 4 http://camb.info. CAMB uses the Parameterized Post-Fried- mann phenomenological approach [31] to avoid ghost and gradi- ent instabilities which is not a microphysical model of phantom dark energy unlike our axion phantom mirage scenario. the axion phantom mirage fits the rest of the CMB, SN and BAO almost as well. This fit only degrades slightly across the phantom mirage mass range. For example at lg(ma) = −32.5 and fde ≈0.12, ∆χ2 nolowE = −15.1. No- tice that eliminating the source of BAO+CMB tension allows the best fde at this mass to increase, which al- lows a better fit to the SN and BAO data due the axion phantom mechanism. These good fits to the SN, BAO and CMB lensing data sets are shown in Fig. 8-10 for the lg(ma) = −32.5 case. In fact the lg(ma) = −32.5 case has a sharper upturn in SN magnitudes at the lowest redshifts than the global best fit lg(ma) = −32.9 case. It also allows a larger CMB lensing amplitude than even the fiducial model while still fitting the BAO with Ωch2 = 0.1162 vs. the fiducial value 0.12 by raising As. A second type of solution to the BAO-CMB tension is to leave the sound horizon calibration fixed rd ≈rd,fid but introduce spatial curvature ΩK to change DM(z∗) − DM(0.8) [2, 71]. With just open ΛCDM, this has the effect of lowering Ωm and raising H0rd through H0 into better compatibility with BAO. More generally curvature changes the relationship between DM(0.8) and DM(z∗) without altering Ωch2 or rd. With the addition of the axion phantom mirage, the same effect allows DM(0.8)/rd to become larger and bring it into compatibility with the BAO measurements as shown in Fig. 7. In Tab. I, we again give the best fit model at lg(ma) = −32.9 and fde = 0.84 as well as at lg(ma) = −32.5 and fde = 0.085 where ΩK ≈0.003. Because the addition of curvature does not change the rd inference from the CMB, it does not resolve the BAO+CMB tension quite as well as dropping lowE. The improvement in ∆χ2 reaches −12.4 and −11.2 respec- tively. We can see this in the comparison to BAO data in Fig. 9 where the high redshift end fits less well than the no lowE solution and in Fig. 10 where it fails to raise the CMB lensing amplitude. A third type of solution is to change the sound hori- zon, or correspondingly rd, through a component of Early 8 0.00 0.25 0.50 0.75 1.00 fde baseline ΛCDM no lowE curvature 0.685 0.690 0.695 0.700 Ωde = ΩΛ + Ωa −34.0 −33.5 −33.0 −32.5 −32.0 lg(ma) −20 −15 −10 −5 0 ∆χ2 i w0 −wa w0 −wa (no lowE) FIG. 11. Likelihood profile as a function of axion mass or ∆χ2 relative to the baseline ΛCDM model for the extended no lowE and curvature analyses (lower panel). ∆χ2 values are relative to the baseline ΛCDM model but are given for ∆χ2 nolowE in the former and the total for the latter. The two solutions prefer slightly different values of Ωde (middle panel) and the no lowE case allows larger axion dark energy fractions fde than the curvature case (upper panel). The no lowE case fits the rest of the SN+BAO+CMB data given ∆χ2 nolowE ∼ −16 almost as well as the w0 −wa case (dot-dashed lines) whereas for the curvature case the total ∆χ2 ∼−12 is notably worse (dotted lines) but still significant. Dark Energy [72, 73]. We leave this possibility to future work. V. DISCUSSION We have shown that the ability of axions with mass ma ≳H0 to mimic a cosmological constant at high red- shift and dark matter at low redshift causes a mirage of phantom dark energy and can explain the relative dis- tances to SN and BAO. In fact this effective dark en- ergy can cross the phantom divide and provide an even sharper change with redshift if the true dark energy has an equation of state wde > −1 like quintessence, without causing any instabilities in the dark sector. This resolution of the SN-BAO tension at z ≲1 in ΛCDM fits essentially as well as calibrated thawing quintessence models. Moreover it allows for a wide range of axion fractions from ∼5% −100% of the dark energy across this mass range. With axions alone, the BAO-CMB tension remains and is driven by the CMB determination of the cold dark matter density Ωch2 which largely determines the BAO distance to z ∼0.8 and the CMB distance to recombina- tion by calibrating the sound horizon or equivalently the BAO scale rd. In the truly phantom w0 −wa dark en- ergy solutions, this is resolved by a stronger dark energy evolution at high z. On the other hand, this tension, be- ing largely at high-z, does not necessarily require a dark energy resolution. In particular, the CMB calibration of rd has become increasingly reliant on CMB lensing measurements but that requires knowledge of τ, the optical depth to re- combination, to fix Ωch2. We show that the remaining tension between SN-BAO-CMB data can be resolved if τ ≈0.1. While this is not allowed by lowE polarization in the standard reionization history and power law infla- tionary power spectrum which imply τ = 0.0544±0.00755 [4], more general models or systematic errors in the mea- surement could accommodate a higher τ. Indeed, JWST measurements of high-redshift galaxies [74–77] may al- ready be hinting at earlier reionization (e.g. [78, 79]). Ignoring the lowE polarization contribution to χ2, ax- ions improve the rest of the fit to SN+BAO+CMB over ΛCDM by ∆χ2 ∼−16 and is comparable to the w0 −wa solution where ∆χ2 ∼−19. Alternately, high redshift extensions to ΛCDM can also change the CMB inferences for rd and BAO distances. We show that a small curvature contribution ΩK ∼0.003 can also relieve the BAO-CMB tension by adjusting the distance between z ∼0.8 and recombination, bringing the overall best fit to ∆χ2 ∼−12. Modifying rd to solve the Hubble constant tension with the SHOES Cepheid- SN distance scale [80] by introducing an “early dark en- ergy” scalar field that contributes near recombination can also relieve the the BAO-CMB tension. This opens the intriguing possibility that a more general scalar sector could solve all of these tensions with ΛCDM simultane- ously. We leave these considerations to a future work. ACKNOWLEDGMENTS We thank Fei Ge for help with the CMB lensing data and Tom Crawford, Tanisha Jhaveri, Austin Joyce, and Tanvi Karwal for useful comments. R.L & W.H. are supported by U.S. Dept. of Energy contract DE- SC0009924 and the Simons Foundation. VM and YZ are supported by the Roman Project Infrastructure Team “Maximizing Cosmological Science with the Ro- 9 man High Latitude Imaging Survey” (NASA contracts 80NM0018D0004-80NM0024F0012). V.M. is also par- tially supported by the Roman Project Infrastructure Team “A Roman Project Infrastructure Team to Sup- port Cosmological Measurements with Type Ia Super- novae” (NASA contract 80NSSC24M0023). Computing resources were provided by the University of Chicago Research Computing Center through the Kavli Institute for Cosmological Physics and by Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University through the high-performance SeaWulf com- puting system. Appendix A: Emulator Training Procedure Parameter Value accuracy boost 1.5 l max scalar 7500 l accuracy boost 1.0 l sample boost 2.0 k eta max scalar 18000 do late rad truncation False transfer kmax 10 transfer k per logint 130 transfer high precision True accurateBB True TABLE II. Settings for AxiECAMB For the analyses in the main text we have both modi- fied and emulated AxiECAMB. The modification makes AxiECAMB solve the axion Klein-Gordon equation 2ϕ = −m2 aϕ to the present for all axion masses in the quadratic potential approximation for axions. The accuracy settings for AxiECAMB are given in Tab. II. These are sufficient to make the accuracy of the χ2 for CMB likelihoods limited by the use of REC- FAST for recombination and takahashi HALOFIT [81] for the nonlinear power spectrum in AxiECAMB as op- posed to COSMOREC and mead2020 HMCODE [82] for CAMB v1.5.9 with accuracy settings specified in Ref. [83]. For reference, the impact of these changes on the power spectra in the ΛCDM fiducial modal are shown in Fig. 12 and mostly occur at very high ℓcompared with those of the measurements so that the errors there fall be- low the rough cosmic variance limits of ∆Cℓ/Cℓ∼1/ℓfor the temperature anisotropy and below the measurement noise for lensing (cf. Fig. 5). There is a correspondingly small error in the compu- tation of χ2 between AxiECAMB and CAMB that is well below ∆χ2 = 1 as given in Tab. III. By reverting CAMB to the older version of halofit and recombination used by AxiECAMB we see that the choice of recombi- nation code mainly impacts the χ2 for Planck lite TT- TEEE but only by ∆χ2 ∼0.1 −0.2 level. The choice of nonlinear power spectrum model affects mainly ACT lensing but by ∆χ2 < 0.1. One notable addition is an ex- tra ∆χ2 ≈0.1 in Planck lowT likelihood. Fig. 12 shows that the CT T ℓ predictions themselves are highly accurate so it should not affect parameter estimation. In addition to the usual linear scaling with ∆CT T ℓ /CT T ℓ for ∆χ2 ≪1 rather than quadratically for ∆χ2 ≫1 due to statistical fluctuations, this likely also reflects the enhanced changes in χ2 for a given accuracy of computation due to ΛCDM being a somewhat poor fit to the lowT data. While these together set a floor in the accuracy of our computations it is sufficient for the datasets we consider. It also sets the level of diminishing returns that motivate the preci- sion settings in Tab. II and motivate the accuracy goals of the emulator as we shall see next. Nonlinear Recombination TTTEEE LowT LowE ACTϕϕ mead2020 COSMOREC 0.31 -0.11 0.00 -0.01 mead2020 RECFAST 0.15 -0.11 0.01 -0.01 takahashi COSMOREC 0.31 -0.11 0.00 0.05 takahashi RECFAST 0.17 -0.11 0.01 0.05 TABLE III. ∆χ2 changes for CAMB v1.59 relative to Ax- iECAMB on the fiducial ΛCDM model (top row, ACT ac- curacy settings). For Planck lite TTTEEE, the difference between COSMOREC and RECFAST contributes more than half of ∆χ2 as shown by switching the two in CAMB. For LowE, the differences are negligible. For LowT, ∆χ2 ≈−0.1 between CAMB and AxiECAMB across all the test cases. Re- verting mead2020 to takahashi in CAMB changes the ACT baseline lensing by less than 0.1 and others entirely negligibly. We then emulate the AxiECAMB calculations follow- ing the architectures and training strategies developed in Ref. [84–86]. To perform a thorough analysis of the ax- ion cases with SN, BAO and CMB datasets, the required data vectors to be emulated include the CMB primary power spectra CXY ℓ (X, Y ∈T, E), the lensing power spectrum, Cϕϕ ℓ, H(z), and the BAO drag scale rd. We also emulate the mapping of θ∗to H0 in order to sample the posterior in θ∗, which is directly constrained by the CMB. We adopt the Residual MLP (ResMLP) with PCA ar- chitecture for H(z) and Cϕϕ ℓ. For the mapping from θ∗ to H0, we train a ResMLP with smaller dimensions and size compared to the one above. For CMB primary power spectra, we apply a combination of ResMLP and Convo- lutional Neural Network (CNN). A Gaussian Processing (GP) neural network is found to be sufficient for rd, as we can simply train this using ΛCDM parameters given that axions with the mass range in our work have negligible energy density at recombination. Transverse comoving distances are computed by performing the integration in 10 101 102 103 ℓ 10−6 10−5 10−4 10−3 10−2 10−1 ∆CTT ℓ/CTT ℓ fiducial ΛCDM, CAMB vs. AxiECAMB (fde →0) 1/ℓ 101 102 103 ℓ 10−6 10−5 10−4 10−3 10−2 10−1 ∆Cφφ ℓ/Cφφ ℓ FIG. 12. Comparison between AxiECAMB and CAMB for CT T ℓ (top) and Cϕϕ ℓ (bottom) under the settings described in the main text. The relative difference for TT is largely due to differing recombination codes, and for ϕϕ to nonlinear power spectrum codes. Deviations appear at high ℓand the 1/ℓline approximates a cosmic variance limited measurement out to ℓ. The accuracy suffices for the datasets we use. H(z); in general DM(z) = 1 ΩKH2 0 sinh  ΩKH2 0 Z z 0 dz H(z′)  , (A1) with the flat limit of ΩK →0 of DM = R dz/H(z). We adopt a composite-Simpson integration algorithm to ob- tain results from the Hubble parameter computed from the emulator. The BAO radial distance DH(z) = 1/H(z) and vol- ume weighted DV (z) = [zD2 M(z)DH(z)]1/3 follow di- rectly. Supernova magnitudes m, corrected for peculiar velocities, are compared to models through the distance modulus µ = m −M = 5 log10 (1 + z)DM(z) 1Mpc  + 25, (A2) with the absolute magnitude M marginalized in the like- lihood or maximized in plots. A complete list of emulator configurations used are summarized in Table. IV. The CNN architecture is shown in Fig. 13, with the channel number of the CNN fixed at 16. Observable Architecture CXY ℓ , X, Y ∈T, E ResMLP+CNN Cϕϕ ℓ ResMLP+PCA H(z) ResMLP+PCA rd Gaussian Processing θ∗→H0 ResMLP Distance Observables Derived from H(z) TABLE IV. Architectures for each observable. Input Layer (8 x 512) Dense Layer (512 x 512) Dense Layer (512 x 512) Convolutional Layer Output Layer (3200 x 2998) Embedding Layer (512 x 3200) 3x Cosmological Parameters CMB Power Spectrum (TT, TE or EE) 16 Channels FIG. 13. Architecture of the 1D convolutional neural network. To cover a sufficiently large parameter space for a thor- ough analysis, we adopt uniform sampling among the pri- ors in Table. V. The training prior is slightly larger than the testing prior to avoid edge effects. 11 Uniform Uniform Parameter Train Test Ωbh2 [0.002, 0.04] [0.006, 0.038] Ωch2 [0.03, 0.24] [0.04, 0.23] H0 [55, 85] [60, 80] τ [0.005, 0.105] [0.01, 0.15] ln(1010As) [1.61, 3.6] [1.7, 3.5] ns [0.7, 1.3] [0.8, 1.2] lg(ma) [−35, −31] [−34, −31.5] Ωah2[lg(ma) ≤−32] [0, 0.4] [10−8, 0.38] Ωah2[lg(ma) > −32] [0, 0.21] [10−8, 0.19] TABLE V. Parameter ranges for the training and testing pa- rameter sets for the uniform and Gaussian sampling methods. For the models with curvature, we further extend the space to analyze non-zero ΩK ∈[0, 0.004], with trimmed priors on τ ∈[0.04, 0.12], ln(1010As) ∈[3, 3.3], ns ∈ [0.92, 0.99] and lg(ma) ∈[−34, −32], which includes the region of interest for fitting the datasets. To well emulate the region where fde is small, when generating training datasets, we oversample in the low Ωah2 region by applying an additional uniform sampling in logarithmic measure of log10 (Ωah2) ∈[−5, −1]. We also oversample the region where lg(ma) > −32.5 since the relationship between cosmological parameters and data vectors is more non-linear in this region due to more oscillations in the background evolution of the axion field. Furthermore, we sample directly at Ωah2 = 10−8 ≈0 to force the emulator to learn the ΛCDM limit. For rd and θ∗→H0 mapping emulators, we train with the simple Mean Square Error (MSE) loss function of the output directly. For H(z) and Cϕϕ ℓ emulators, we train the models under MSE of the PCA transformed outputs. Since those data vectors are either short in size or structurally simple, a crude MSE loss function and simple training strategies are sufficient. The more challenging training is for CMB primaries CXY ℓ as they have more complicated structures. To train them to be as precise as possible compared to the direct output from AxiECAMB, we set the training loss func- tion, separately for each CXY ℓ , to be loss = D X ℓ ∆˜Cℓ˜C−1∆˜Cℓ 1/2E . (A3) ∆˜Cℓis the difference in the rescaled quantity as ˜Cℓ= Cℓe2τ/As, where the fiducial model picked at near the Planck 2018 best fit. The covariance matrix ˜C is set to be the corresponding diagonal part of a cosmic-variance covariance matrix for rescaled quantities with ℓ≤3000 for the sake of training. This rescaling technique is tested in Ref. [86]. The increasingly tighter constraints as ℓin- creases under the cosmic-variance assumption forces the training to learn the detailed structure at high ℓ. For testing, we compute the quantity ∆ˆχ2 = P ∆CXY ℓ C−1 XY,X′Y ′∆CX′Y ′ ℓ , with X, Y, X′, Y ′ ∈T, E, using the Planck lite binning and covariance matrix files. 0.001 0.01 0.1 1 10 2 CMB 2 250 500 750 1000 1250 0.0 0.2 0.4 0.6 | C |/C % 99% 95% 68% FIG. 14. Top: Distribution of ∆ˆχ2 of the CMB primaries em- ulators upon testing on Planck TTTEEE lite likelihood. Only 1% points are outliers with ∆ˆχ2 > 0.2. Bottom: Distribution of percent difference between emulator and AxiECAMB out- puts of Cϕϕ ℓ. Within the range ℓ≤1250, which is the ACT lensing extended multipole range, whereas we use the baseline ACT range ℓ≤763 in our analysis, we can see that the per- cent error of the emulation is suppressed to below 1% across the range of ACT lensing likelihood. We then evaluate the median and distribution of this quantity across a testing set. Note that ∆ˆχ2 does not involve the Planck data itself and therefore does not get enhanced if the model is a bad fit to the data or from the noise fluctuations of a good fit. Upon testing, our emulators have almost consistently below 0.5% error in DM(z) and H(z) across the red- shift range z ∈[0, 3]. The rd GP emulator has around 0.02% error, and the emulator for mapping from θ∗to H0 around 0.04% error. Both of these derived parameter emulators have errors one order of magnitude lower than the 1σ deviation from the Planck measurements. For the CMB primary power spectra, our emulators are trained to have the fraction of outliers with ∆ˆχ2 > 0.2 to be around 1% under Planck lite TTTEEE likelihood within our testing range, shown in Fig. 14. The CMB lensing power spectrum emulator is trained to have < 1% er- ror across the ℓrange of ACT lensing. The training for the baseline spatially flat cases uses 2.5 million train- ing points. We then develop another emulator for the curvature analysis with an addition of 0.8 million points sampled in the range described above, and the models reach a similar level of precision. 12 [1] T. M. C. Abbott et al. (DES), Astrophys. J. Lett. 973, L14 (2024), arXiv:2401.02929 [astro-ph.CO]. [2] M. Abdul Karim et al. (DESI), (2025), arXiv:2503.14738 [astro-ph.CO]. [3] W. Elbers et al. (DESI), (2025), arXiv:2503.14744 [astro- ph.CO]. [4] N. Aghanim et al. (Planck), Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO]. [5] K. Lodha et al. (DESI), (2025), arXiv:2503.14743 [astro- ph.CO]. [6] G. Ye, M. Martinelli, B. Hu, and A. Silvestri, Phys. Rev. Lett. 134, 181002 (2025), arXiv:2407.15832 [astro- ph.CO]. [7] X. D. Jia, J. P. Hu, D. H. Gao, S. X. Yi, and F. Y. Wang, (2025), arXiv:2509.17454 [astro-ph.CO]. [8] S. Goldstein, M. Celoria, and F. Schmidt, (2025), arXiv:2507.16970 [astro-ph.CO]. [9] H. Chaudhary, V. K. Sharma, S. Capozziello, and G. Mustafa, (2025), arXiv:2510.08339 [astro-ph.CO]. [10] S. Roy Choudhury, T. Okumura, and K. Umetsu, (2025), arXiv:2509.26144 [astro-ph.CO]. [11] R. Kou and A. Lewis, (2025), arXiv:2509.16155 [astro- ph.CO]. [12] L. W. K. Goh and A. N. Taylor, (2025), arXiv:2509.12335 [astro-ph.CO]. [13] A. G´omez-Valent and A. Gonz´alez-Fuentes, (2025), arXiv:2508.00621 [astro-ph.CO]. [14] P. Brax, (2025), arXiv:2507.16723 [astro-ph.CO]. [15] M. Braglia, X. Chen, and A. Loeb, (2025), arXiv:2507.13925 [astro-ph.CO]. [16] E. Silva and R. C. Nunes, (2025), arXiv:2507.13989 [astro-ph.CO]. [17] S. S. Mishra, W. L. Matthewson, V. Sahni, A. Shafieloo, and Y. Shtanov, (2025), arXiv:2507.07193 [astro-ph.CO]. [18] S. Lee, (2025), arXiv:2507.01380 [astro-ph.CO]. [19] I. D. Gialamas, G. H¨utsi, M. Raidal, J. Urrutia, M. Vasar, and H. Veerm¨ae, Phys. Rev. D 112, 063551 (2025), arXiv:2506.21542 [astro-ph.CO]. [20] E. ¨Oz¨ulker, E. Di Valentino, and W. Giar`e, (2025), arXiv:2506.19053 [astro-ph.CO]. [21] R. E. Keeley, A. Shafieloo, and W. L. Matthewson, (2025), arXiv:2506.15091 [astro-ph.CO]. [22] J. M. Cline and V. Muralidharan, Phys. Rev. D 112, 063539 (2025), arXiv:2506.13047 [astro-ph.CO]. [23] R. de Souza, G. Rodrigues, and J. Alcaniz, (2025), arXiv:2504.16337 [astro-ph.CO]. [24] B. R. Dinda and R. Maartens, Mon. Not. Roy. Astron. Soc. 542, L31 (2025), arXiv:2504.15190 [astro-ph.CO]. [25] G. Gu et al. (DESI), (2025), 10.1038/s41550-025-02669- 6, arXiv:2504.06118 [astro-ph.CO]. [26] K. V. Berghaus, J. A. Kable, and V. Miranda, Phys. Rev. D 110, 103524 (2024), arXiv:2404.14341 [astro-ph.CO]. [27] J. Rebou¸cas, D. H. F. de Souza, K. Zhong, V. Mi- randa, and R. Rosenfeld, JCAP 02, 024 (2025), arXiv:2408.14628 [astro-ph.CO]. [28] A. Vikman, Phys. Rev. D 71, 023515 (2005), arXiv:astro- ph/0407107. [29] W. Hu, Phys. Rev. D 71, 047301 (2005), arXiv:astro- ph/0410680. [30] P. Creminelli, G. D’Amico, J. Norena, and F. Vernizzi, JCAP 02, 018 (2009), arXiv:0811.0827 [astro-ph]. [31] W. Fang, W. Hu, and A. Lewis, Phys. Rev. D 78, 087303 (2008), arXiv:0808.3125 [astro-ph]. [32] G. Gubitosi, F. Piazza, and F. Vernizzi, JCAP 02, 032 (2013), arXiv:1210.0201 [hep-th]. [33] J. K. Bloomfield, ´E. ´E. Flanagan, M. Park, and S. Watson, JCAP 08, 010 (2013), arXiv:1211.7054 [astro- ph.CO]. [34] M. S. Oliveira, F. A. Brito, and J. A. V. Campos, (2025), arXiv:2510.08459 [astro-ph.CO]. [35] N. J. Pullisseri and S. Unnikrishnan, (2025), arXiv:2510.07103 [gr-qc]. [36] W. J. Wolf, P. G. Ferreira, and C. Garc´ıa-Garc´ıa, (2025), arXiv:2509.17586 [astro-ph.CO]. [37] A. Pourtsidou, (2025), arXiv:2509.15091 [astro-ph.CO]. [38] S. Tsujikawa, (2025), arXiv:2508.17231 [astro-ph.CO]. [39] Z. Yao, G. Ye, and A. Silvestri, (2025), arXiv:2508.01378 [gr-qc]. [40] S. L. Guedezounme, B. R. Dinda, and R. Maartens, (2025), arXiv:2507.18274 [astro-ph.CO]. [41] M. H¨og˚as and E. M¨ortsell, (2025), arXiv:2507.03743 [astro-ph.CO]. [42] A. Paliathanasis, Phys. Dark Univ. 49, 101993 (2025), arXiv:2504.11132 [gr-qc]. [43] D. Andriot, Phys. Dark Univ. 49, 102000 (2025), arXiv:2505.10410 [hep-th]. [44] C. You, D. Wang, and T. Yang, Phys. Rev. D 112, 043503 (2025), arXiv:2504.00985 [astro-ph.CO]. [45] S. Pan, S. Paul, E. N. Saridakis, and W. Yang, (2025), arXiv:2504.00994 [astro-ph.CO]. [46] E. Silva, M. A. Sabogal, M. Scherer, R. C. Nunes, E. Di Valentino, and S. Kumar, Phys. Rev. D 111, 123511 (2025), arXiv:2503.23225 [astro-ph.CO]. [47] E. V. Linder, (2007), arXiv:0708.0024 [astro-ph]. [48] L. Amendola, Phys. Rev. D 62, 043511 (2000), arXiv:astro-ph/9908023. [49] S. Das, P. S. Corasaniti, and J. Khoury, Phys. Rev. D 73, 083509 (2006), arXiv:astro-ph/0510628. [50] E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006), arXiv:hep-th/0603057. [51] J. Khoury, M.-X. Lin, and M. Trodden, (2025), arXiv:2503.16415 [astro-ph.CO]. [52] A. Bedroya, G. Obied, C. Vafa, and D. H. Wu, (2025), arXiv:2507.03090 [astro-ph.CO]. [53] S. M. Carroll, Phys. Rev. Lett. 81, 3067 (1998), arXiv:astro-ph/9806099. [54] A. J. Shajib and J. A. Frieman, Phys. Rev. D 112, 063508 (2025), arXiv:2502.06929 [astro-ph.CO]. [55] J. Carron, M. Mirmelstein, and A. Lewis, JCAP 09, 039 (2022), arXiv:2206.07773 [astro-ph.CO]. [56] F. J. Qu et al. (ACT), Astrophys. J. 962, 112 (2024), arXiv:2304.05202 [astro-ph.CO]. [57] R. Liu, W. Hu, and D. Grin, Phys. Rev. D 112, 023513 (2025), arXiv:2412.15192 [astro-ph.CO]. [58] T. Karwal, Y. Patel, A. Bartlett, V. Poulin, T. L. Smith, and D. N. Pfeffer, (2024), arXiv:2401.14225 [astro-ph.CO]. [59] F. Ge et al. (SPT-3G), (2024), arXiv:2411.06000 [astro- ph.CO]. [60] T. Jhaveri, T. Karwal, and W. Hu, Phys. Rev. D 112, 043541 (2025), arXiv:2504.21813 [astro-ph.CO]. 13 [61] N. Sailer, G. S. Farren, S. Ferraro, and M. White, (2025), arXiv:2504.16932 [astro-ph.CO]. [62] W. Giar`e, E. Di Valentino, and A. Melchiorri, Phys. Rev. D 109, 103519 (2024), arXiv:2312.06482 [astro-ph.CO]. [63] I. J. Allali, P. Singh, J. Fan, and L. Li, (2025), arXiv:2503.05691 [astro-ph.CO]. [64] I. J. Allali, L. Li, P. Singh, and J. Fan, (2025), arXiv:2509.09678 [astro-ph.CO]. [65] Z. Huang, (2025), arXiv:2509.09086 [astro-ph.CO]. [66] W. Elbers, (2025), arXiv:2508.21069 [astro-ph.CO]. [67] C. Heinrich and W. Hu, Phys. Rev. D 104, 063505 (2021), arXiv:2104.13998 [astro-ph.CO]. [68] G. Obied, C. Dvorkin, C. Heinrich, W. Hu, and V. Mi- randa, Phys. Rev. D 98, 043518 (2018), arXiv:1803.01858 [astro-ph.CO]. [69] J. M. Delouis, L. Pagano, S. Mottet, J. L. Puget, and L. Vibert, Astron. Astrophys. 629, A38 (2019), arXiv:1901.11386 [astro-ph.CO]. [70] Y. Li et al. (CLASS), (2025), arXiv:2501.11904 [astro- ph.CO]. [71] S.-F. Chen and M. Zaldarriaga, JCAP 08, 014 (2025), arXiv:2505.00659 [astro-ph.CO]. [72] V. Poulin, T. L. Smith, R. Calder´on, and T. Simon, (2025), arXiv:2505.08051 [astro-ph.CO]. [73] A. R. Khalife et al. (SPT-3G), (2025), arXiv:2507.23355 [astro-ph.CO]. [74] S. L. Finkelstein et al., Astrophys. J. Lett. 969, L2 (2024), arXiv:2311.04279 [astro-ph.GA]. [75] M. Castellano et al., Astrophys. J. Lett. 938, L15 (2022), arXiv:2207.09436 [astro-ph.GA]. [76] D. J. Eisenstein et al., arXiv e-prints , arXiv:2306.02465 (2023), arXiv:2306.02465 [astro-ph.GA]. [77] Y. Harikane et al., Astrophys. J. Supp. 265, 5 (2023), arXiv:2208.01612 [astro-ph.GA]. [78] J. B. Mu˜noz, J. Mirocha, J. Chisholm, S. R. Furlanetto, and C. Mason, Mon. Not. Roy. Astron. Soc. 535, L37 (2024), arXiv:2404.07250 [astro-ph.CO]. [79] J. Witstok et al., Nature 639, 897 (2025), arXiv:2408.16608 [astro-ph.GA]. [80] A. G. Riess et al., (2025), arXiv:2509.01667 [astro- ph.CO]. [81] R. Takahashi, M. Sato, T. Nishimichi, A. Taruya, and M. Oguri, Astrophys. J. 761, 152 (2012), arXiv:1208.2701 [astro-ph.CO]. [82] A. Mead, S. Brieden, T. Tr¨oster, and C. Hey- mans, Mon. Not. Roy. Astron. Soc. 502, 1401 (2021), arXiv:2009.01858 [astro-ph.CO]. [83] E. Calabrese et al. (ACT), (2025), arXiv:2503.14454 [astro-ph.CO]. [84] K. Zhong, E. Saraivanov, J. Caputi, V. Miranda, S. S. Boruah, T. Eifler, and E. Krause, Phys. Rev. D 111, 123519 (2025), arXiv:2402.17716 [astro-ph.CO]. [85] E. Saraivanov, K. Zhong, V. Miranda, S. S. Boruah, T. Eifler, and E. Krause, Phys. Rev. D 111, 123520 (2025), arXiv:2403.12337 [astro-ph.CO]. [86] Y. Zhu, E. Saraivanov, J. A. Kable, A. S. Gian- nakopoulou, A. Nijjar, V. Miranda, M. Bonici, T. Eifler, and E. Krause, (2025), arXiv:2505.22574 [astro-ph.CO].
Phantom Mirage from Axion Dark Energy Rayne Liu,1 Yijie Zhu,2 Wayne Hu,1 and Vivian Miranda3 1Kavli Institute for Cosmological Physics, 60637, USA 2 11794, USA 3C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794, USA (Dated: October 17, 2025) Supernova (SN) and baryon acoustic oscillation (BAO) distance measures have recently provided hints that the dark energy is not only dynamical but apparently evolves from normal to phantom dark energy between redshifts 0 -1 and places strong conditions on physical candidates in order not to exhibit ghost and gradient instability, which typically requires fundamental modifications to gravitational or dark sector forces [28-33], see also [34-46] for recent assessments). On the other hand, the well-established agreement with high redshift predictions in ΛCDM from the CMB indicates that these drastic modifications have the curious property that the strong deviations in the normal and phantom phases must nearly cancel each other so as to have similar endpoints to the expansion history as ΛCDM [29, 47], implying that one or the other may be a mirage. It is therefore important to examine other ways in which this implied expansion history can occur without ever requiring a phantom dark energy component: that phantom dark energy is a mirage. One well-studied possibility is a coupling between the dark matter and dark energy, or the so-called coupled quintessence, where the dark energy transfers some of its energy density to the dark matter so that if we analyze its expansion history assuming decoupled cold dark matter, we would infer a phantom equation of state for the effective dark energy density [48-52]. Similar to the phantom models, these models still typically introduce new forces between dark matter and dark energy in order to mediate the coupling. In this work, we study the ability of axions to provide a similar phantom mirage without introducing any additional couplings in the dark sector, requiring only an additional cosmological constant or normal quintessence. It is well known that a light scalar field acts as a contribution to the cosmological constant when it is frozen on its potential by Hubble friction, and once released acts instead as dark matter. The QCD axion, for example, is a leading candidate to compose of all of the dark matter. Moreover, axions as all of the dark energy provide a technically natural solution through their shift symmetry as to why their mass can stay of order the very small ma ∼H0 ∼10-33 eV required [53, 54]. If instead the mass is only somewhat larger than this Hubble scale value, then the transition between dark energy and dark matter behavior can occur between redshifts 0 -1 approaching a CDM-like wa ∼0 at z = 0. Here lg(ma) = -32.5, fde = 0.1. III. PHANTOM MIRAGE Distance measures from SN, BAO and the CMB depend only on the expansion history and not the theoretical division of the dark sector into dark matter and dark energy components. An incorrect assumption about the dark matter can lead to an incorrect inference of phantom dark energy, which we call a phantom mirage. A well known example of the phantom mirage is provided by coupled quintessence [48-50]. In this case, coupling between the dark energy and dark matter transfers energy from the former to the latter. When analyzed under the assumption of non-interacting cold dark matter, the remaining effective dark energy gives the mirage of a phantom component with w 0, the negative contribution from the incorrect matterlike assumption for axions drives weff -1, then this also implies a phantom crossing where ρeff switches from growing to decaying with the expansion. However, we start with the simplest assumption that the true dark energy is a cosmological constant where wde = -1, and hence the case where the effective dark energy is always a phantom, at least when time averaged over axion oscillations. We illustrate this behavior in Fig. 1 (top panel) with an example where lg(ma) = -32.5, close to the mass where the equation of state at the present first reaches wa(0) = 0. More precisely, with the specific choice of h = 0.674, Ωde = 0.311 and fa = 0.1 employed here, wa(0) = 0 at lg(ma) = -32.45. Notice that by z = 1, the axions have already reached the frozen regime and an extrapolation as matter would overestimate the matter density. The effective energy density then decreases at high redshift implying a phantom equation of state weff -1 and wde = -1, see Fig. 1 (bottom panel). Phrased alternatively, the expansion history mimics a low Ωm model at the high redshifts z ≳0.5 relevant for BAO and a high Ωm for the low redshifts z ≲0.5 relevant for SN. The axion phantom mirage can therefore reduce the tension between BAO and SN without having any phantom component. Furthermore, by having only a small fraction fde of dark energy in axions but with a larger change in their equation of state, the axion phantom mirage can produce a more rapid change in weff than a model with just a single component of thawing quintessence. 0.00 0.25 0.50 0.75 1.00 fde baseline ΛCDM baseline axion 0.6850 0.6875 0.6900 0.6925 0.6950 Ωde = ΩΛ + Ωa -34.0 -33.5 -33.0 -32.5 -32.0 -31.5 lg(ma) -6 -4 -2 0 ∆χ2 FIG. 2. Likelihood profile as a function of axion mass or ∆χ2 relative to the baseline ΛCDM model for the baseline SN+BAO+CMB analysis (lower panel). Similar improvements over ΛCDM occur in the axion phantom mirage range -33.5 ≲lg(ma) ≲-32.3 but with very different axion contributions to the dark energy fde = Ωa/Ωde (upper panel). For much smaller masses all fde cases behave as ΛCDM with the same total Ωde = ΩΛ + Ωa (middle panel) and for much higher masses the profile value for fde is small and also predicts ΛCDM observables up to the accuracy of the emulator discussed in Appendix A. IV. AXION LANDSCAPE A. Baseline analysis We begin our analysis with the simplest scenario where axions are the only addition to the standard ΛCDM paradigm and all data sets are taken at face value, which we call the baseline analysis. In Fig. 2 (bottom panel), we show the ∆χ2 profile in the axion mass ma relative to the baseline ΛCDM model, as well as the values of the axion fraction of the dark energy fde (top panel) and total dark energy Ωde = ΩΛ + Ωa (middle panel) that correspond to these 4 name lg(ma) fde extension ∆χ2 ∆χ2 nolowE baseline ΛCDM - 0 - 0 0 w0 -wa - 0 w0 = -0.755, wa = -0.831 -20.5 -18.8 baseline -32.9 0.996 - -7.1 -8.8 -32.5 0.0797 - -5.5 -6.7 no lowE -32.9 0.997 τ = 0.0999 - -16.3 -32.5 0.117 τ = 0.101 - -15.1 curvature -32.9 0.84 ΩK = 2.90 × 10-3 -12.4 -11.9 -32.5 0.0846 ΩK = 2.95 × 10-3 -11.2 -10.5 TABLE I. Model parameters in the baseline and extended analyses with high optical depth τ or spatial curvature ΩK. All ∆χ2 values are relative to the baseline ΛCDM model. For axion cases the best fit lg(ma) = -32.9 and the high mass end of the axion phantom mirage lg(ma) = -32.5 are shown and generally perform comparably despite the very different dark energy fraction fde. High τ models drop the lowE Planck 2018 data and fit the rest of the SN+BAO+CMB data almost as well as w0 -wa. 0.2 0.4 0.6 0.8 1.0 z -0.05 0.00 0.05 0.10 0.15 μ -μfid SN data baseline ΛCDM baseline lg(ma) = -32.9 baseline lg(ma) = -32.5 FIG. 3. SN data compared with models in the baseline analysis. The baseline ΛCDM model fails to fit the relative magnitudes of the lowest redshift SN vs higher redshifts whereas both axion cases lg(ma) = -32.9 (fde ≈1) and lg(ma) = 32.5 (fde ≈0.08) capture the upturn. profile models. Note that for lg(ma) ≲-33.5, despite the apparent changes in the value of fde for different ma, all fde for the given Ωde fit essentially equally well, with the changes in ∆χ2 being much less than unity as the profile shows. This is because in this ma ≪H0 limit, the axion field is still frozen on its potential at the present. In the range -33.5 ≲lg(ma) ≲-32.3, cases with finite 1 > fde > 0 fit the data better than ΛCDM. The low mass end of this regime represents the standard thawing quintessence case fde ∼1 where all of the dark energy is in a single scalar field component. Interestingly, the higher masses in this range provide nearly as good a fit but with a steeply decreasing fde. This extension to higher masses and lower fractions displays the axion phantom mirage discussed in Sec. III where the axion behaves increasingly like dark matter at z = 0. Given the nearly equally good fits across this axion phantom mass range, we select two representative cases with very different fde and examine the origin of the preferences in the data. The relevant parameters of these models are given in Tab. I. The first is the global minimum at lg(ma) = -32.9 and fde ≈1, with ∆χ2 = -7.1 compared to ΛCDM and the second is lg(ma) = -32.5 and fde ≈0.08, with ∆χ2 = -5.5. Both are comparable to the improvements in the calibrated thawing quintessence models where wde(z) = w0 -1.58(1 + w0)z/(1 + z) studied in Ref. [5]. In particular both models provide comparable fits to the SN data, shown in Fig. 3, especially the upturn at the lowest redshift bin, despite their different fde. This is a consequence of the larger mass producing a much sharper change in distances for a given fde due to its equation of state quickly approaching a matter like value of wa ∼0. Baseline ΛCDM has the opposite trends with redshift given that it is optimized to SN+BAO+CMB and does not fit the SN data alone well. In Fig 4, we show the comparison to BAO distance measures. Again the two cases fit BAO comparably well but notice that the fits are also comparable to baseline ΛCDM, especially for z > 0.5 where the data are the most constraining. In particular around z ∼0.8 all three models overpredict DM(0.8)/rd by around a percent. This is a consequence of the high redshift tension between BAO and CMB given the calibration of rd provided by the latter and its inference of DM(z∗), the distance to recombination at z∗, through the precise measurement of θ∗, the angular size of the sound horizon. Axions and more generally thawing dark energy models cannot efficiently change DM(z∗) -DM(0.8) = R z∗ 0.8 dz/H(z) in a flat universe. The CMB calibration of rd is increasingly driven by CMB lensing measurements as ground based data improve. In Fig. 5, we show the comparison of the models to the data for the CMB lensing power spectrum Cφφ l relative to the fiducial model. Even in the fiducial model the lensing amplitude is slightly low compared with the data for the reconstruction and smoothing of the acoustic peaks (see e.g. [59] for a recent AL rescaling type assessment). Notice that with SN+BAO+CMB, all three best 5 1 2 3 z 0.950 0.975 1.000 1.025 1.050 DV /rd [fid] baseline ΛCDM baseline lg(ma) = -32.9 baseline lg(ma) = -32.5 BAO data 1 2 3 z DM/rd [fid] 1 2 3 z DH/rd [fid] FIG. 4. BAO data compared with models in the baseline analysis. While the baseline ΛCDM lacks the low z upturn of the axion cases, all three fit the high-z BAO data comparably well. Here and in other figures "[fid]" stands for in units of the same quantity in the fiducial model. Starred datapoints represent those which are used in the likelihood whereas DV for all but the lowest redshift bin is redundant with DM and DH. 102 103 l 0.7 0.8 0.9 1.0 1.1 1.2 1.3 Cφφ l [fid] baseline ΛCDM baseline lg(ma) = -32.9 baseline lg(ma) = -32.5 Planck PR4 ACT DR6 FIG. 5. CMB lensing data compared with models in the baseline analysis. The baseline ΛCDM and axion cases all reduce the lensing amplitude so as to fit the BAO data reflecting tension in the BAO+CMB data. fit curves further underpredict lensing relative to the fiducial model and are a slightly worse fit to the data. This is again the consequence of the compromise between the BAO and CMB data. On the CMB side, to fit the amplitude of the lensing power spectrum and the amplitude and smoothing of the acoustic peaks simultaneously, the cold dark matter Ωch2 must remain relatively high and rd relatively low. This parameter also controls the BAO vs. CMB distances DM(z∗) -DM(0.8), with BAO measurements favoring the lower Ωch2. The lack of freedom to adjust these relationships causes the tension between the BAO and CMB data. We can see this directly in Fig. 6. Here we show the posterior constraints for fde, DM(0.8)/rd, rd for ΛCDM vs. axions with lg(ma) = -32.5. In ΛCDM, the tension between the CMB and BAO distance measures appears 0.04 0.10 0.16 fde 0.995 1.000 1.005 rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 1.000 1.005 rd [fid] CMB, ΛCDM SN+CMB, ΛCDM SN+BAO+CMB, ΛCDM SN+BAO+CMB, lg(ma) = -32.5 FIG. 6. BAO+CMB tension parameters vs. axion dark energy fraction in the baseline analysis (68%, 95% CL contours). BAO data prefer a low value for DM(0.8)/rd than in the fiducial CMB only analysis and this tension is exacerbated by the inclusion of SN so that the baseline ΛCDM analysis of SN+BAO+CMB pushes constraints into the tail of the SN+CMB posterior. The axion case with lg(ma) = -32.5 shows a 95% exlcusion of fde = 0 due to the better fit to the lowest z SN but does not appreciably change the BAO+CMB tension between z = 0.8 and recombination. as a preference for high DM(0.8)/rd and low rd in the former vs. low DM(0.8)/rd in the latter. Adding in SN to the CMB, while still consistent in ΛCDM, makes the tension worse. This is the well known SN+BAO discrepancy in Ωm and BAO+CMB shift in H0rd but phrased 6 0.1 0.2 fde 1.000 1.005 rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 0.99 1.00 1.01 DM(0.8)/rd [fid] 1.000 1.005 rd [fid] lg(ma) = -32.5 SN+BAO+CMB, no lowE SN+CMB, no lowE SN+BAO+CMB, ΩK > 0 FIG. 7. BAO+CMB tension parameters vs. axion dark energy fraction in the high-z extension analyses. Tension in the DM(0.8)/rd, rd plane can be addressed by increasing τ, here through dropping the lowE Planck 2018 data, or by adding spatial curvature. The former allows the CMB calibration of rd to shift whereas the latter changes the relation between DM(0.8) and the distance to recombination. Both provide better fits to SN+BAO+CMB than the baseline case and retain a preference for fde > 0 from SN at this mass, lg(ma) = -32.5. more directly in terms of quantities relevant to BAO and hence in a more model independent fashion that is useful for dynamical dark energy as well. When BAO are added to SN+CMB, DM(0.8)/rd and rd are pushed to the extremes of what is allowed by the latter, reflecting the tension in the data sets. On the other hand the improvement from better fitting the SN is enough to place fde = 0 outside the 95% CL regions. Thus while the axion phantom mirage provides a good fit to SN for a wide range of fractional contributions to the dark energy from 5% to 100%, it cannot resolve all of the tension between SN+BAO+CMB but instead performs about as well as the calibrated thawing quintessence class. B. High-z extensions We have seen that a wide range of axion contributions to the dark energy fit the SN+BAO+CMB data as well as calibrated thawing quintessence but fail to address the BAO+CMB tension. This is because by construction such axions mainly affect distances to low redshift z -1 like quintessence, without causing any instabilities in the dark sector. This resolution of the SN-BAO tension at z ≲1 in ΛCDM fits essentially as well as calibrated thawing quintessence models. Moreover it allows for a wide range of axion fractions from ∼5% -100% of the dark energy across this mass range. With axions alone, the BAO-CMB tension remains and is driven by the CMB determination of the cold dark matter density Ωch2 which largely determines the BAO distance to z ∼0.8 and the CMB distance to recombination by calibrating the sound horizon or equivalently the BAO scale rd. In the truly phantom w0 -wa dark energy solutions, this is resolved by a stronger dark energy evolution at high z. On the other hand, this tension, being largely at high-z, does not necessarily require a dark energy resolution. In particular, the CMB calibration of rd has become increasingly reliant on CMB lensing measurements but that requires knowledge of τ, the optical depth to recombination, to fix Ωch2. We show that the remaining tension between SN-BAO-CMB data can be resolved if τ ≈0.1. While this is not allowed by lowE polarization in the standard reionization history and power law inflationary power spectrum which imply τ = 0.0544±0.00755 [4], more general models or systematic errors in the measurement could accommodate a higher τ. Indeed, JWST measurements of high-redshift galaxies [74-77] may already be hinting at earlier reionization (e.g. [78, 79]). Ignoring the lowE polarization contribution to χ2, axions improve the rest of the fit to SN+BAO+CMB over ΛCDM by ∆χ2 ∼-16 and is comparable to the w0 -wa solution where ∆χ2 ∼-19. Alternately, high redshift extensions to ΛCDM can also change the CMB inferences for rd and BAO distances. We show that a small curvature contribution ΩK ∼0.003 can also relieve the BAO-CMB tension by adjusting the distance between z ∼0.8 and recombination, bringing the overall best fit to ∆χ2 ∼-12. Modifying rd to solve the Hubble constant tension with the SHOES CepheidSN distance scale [80] by introducing an "early dark energy" scalar field that contributes near recombination can also relieve the the BAO-CMB tension. This opens the intriguing possibility that a more general scalar sector could solve all of these tensions with ΛCDM simultaneously. We leave these considerations to a future work. ACKNOWLEDGMENTS We thank Fei Ge for help with the CMB lensing data and Tom Crawford, Tanisha Jhaveri, Austin Joyce, and Tanvi Karwal for useful comments. R.L & W.H. are supported by U.S. Dept. of Energy contract DESC0009924 and the Simons Foundation. VM and YZ are supported by the Roman Project Infrastructure Team "Maximizing Cosmological Science with the Ro9 man High Latitude Imaging Survey" (NASA contracts 80NM0018D0004-80NM0024F0012). V.M. is also partially supported by the Roman Project Infrastructure Team "A Roman Project Infrastructure Team to Support Cosmological Measurements with Type Ia Supernovae" (NASA contract 80NSSC24M0023). Computing resources were provided by the -performance SeaWulf computing system. Appendix A: Emulator Training Procedure Parameter Value accuracy boost 1.5 l max scalar 7500 l accuracy boost 1.0 l sample boost 2.0 k eta max scalar 18000 do late rad truncation False transfer kmax 10 transfer k per logint 130 transfer high precision True accurateBB True TABLE II. Settings for AxiECAMB For the analyses in the main text we have both modified and emulated AxiECAMB. The modification makes AxiECAMB solve the axion Klein-Gordon equation 2φ = -m2 aφ to the present for all axion masses in the quadratic potential approximation for axions. The accuracy settings for AxiECAMB are given in Tab. II. These are sufficient to make the accuracy of the χ2 for CMB likelihoods limited by the use of RECFAST for recombination and takahashi HALOFIT [81] for the nonlinear power spectrum in AxiECAMB as opposed to COSMOREC and mead2020 HMCODE [82] for CAMB v1.5.9 with accuracy settings specified in Ref. [83]. For reference, the impact of these changes on the power spectra in the ΛCDM fiducial modal are shown in Fig. 12 and mostly occur at very high lcompared with those of the measurements so that the errors there fall below the rough cosmic variance limits of ∆Cl/Cl∼1/lfor the temperature anisotropy and below the measurement noise for lensing (cf. Fig. 5). There is a correspondingly small error in the computation of χ2 between AxiECAMB and CAMB that is well below ∆χ2 = 1 as given in Tab. III. By reverting CAMB to the older version of halofit and recombination used by AxiECAMB we see that the choice of recombination code mainly impacts the χ2 for Planck lite TTTEEE but only by ∆χ2 ∼0.1 -0.2 level. The choice of nonlinear power spectrum model affects mainly ACT lensing but by ∆χ2 -32] [0, 0.21] [10-8, 0.19] TABLE V. Parameter ranges for the training and testing parameter sets for the uniform and Gaussian sampling methods. For the models with curvature, we further extend the space to analyze non-zero ΩK ∈[0, 0.004], with trimmed priors on τ ∈[0.04, 0.12], ln(1010As) ∈[3, 3.3], ns ∈ [0.92, 0.99] and lg(ma) ∈[-34, -32], which includes the region of interest for fitting the datasets. To well emulate the region where fde is small, when generating training datasets, we oversample in the low Ωah2 region by applying an additional uniform sampling in logarithmic measure of log10 (Ωah2) ∈[-5, -1]. We also oversample the region where lg(ma) > -32.5 since the relationship between cosmological parameters and data vectors is more non-linear in this region due to more oscillations in the background evolution of the axion field. Furthermore, we sample directly at Ωah2 = 10-8 ≈0 to force the emulator to learn the ΛCDM limit. For rd and θ∗→H0 mapping emulators, we train with the simple Mean Square Error (MSE) loss function of the output directly. For H(z) and Cφφ l emulators, we train the models under MSE of the PCA transformed outputs. Since those data vectors are either short in size or structurally simple, a crude MSE loss function and simple training strategies are sufficient. The more challenging training is for CMB primaries CXY l as they have more complicated structures. To train them to be as precise as possible compared to the direct output from AxiECAMB, we set the training loss function, separately for each CXY l , to be loss = D X l ∆ ̃Cl ̃C-1∆ ̃Cl 1/2E . (A3) ∆ ̃Clis the difference in the rescaled quantity as ̃Cl= Cle2τ/As, where the fiducial model picked at near the Planck 2018 best fit. The covariance matrix ̃C is set to be the corresponding diagonal part of a cosmic-variance covariance matrix for rescaled quantities with l≤3000 for the sake of training. This rescaling technique is tested in Ref. [86]. The increasingly tighter constraints as lincreases under the cosmic-variance assumption forces the training to learn the detailed structure at high l. For testing, we compute the quantity ∆ˆχ2 = P ∆CXY l C-1 XY,X′Y ′∆CX′Y ′ l , with X, Y, X′, Y ′ ∈T, E, using the Planck lite binning and covariance matrix files. 0.001 0.01 0.1 1 10 2 CMB 2 250 500 750 1000 1250 0.0 0.2 0.4 0.6 | C |/C % 99% 95% 68% FIG. 14. Top: Distribution of ∆ˆχ2 of the CMB primaries emulators upon testing on Planck TTTEEE lite likelihood. Only 1% points are outliers with ∆ˆχ2 > 0.2. Bottom: Distribution of percent difference between emulator and AxiECAMB outputs of Cφφ l. Within the range l≤1250, which is the ACT lensing extended multipole range, whereas we use the baseline ACT range l≤763 in our analysis, we can see that the percent error of the emulation is suppressed to below 1% across the range of ACT lensing likelihood. We then evaluate the median and distribution of this quantity across a testing set. Note that ∆ˆχ2 does not involve the Planck data itself and therefore does not get enhanced if the model is a bad fit to the data or from the noise fluctuations of a good fit. Upon testing, our emulators have almost consistently below 0.5% error in DM(z) and H(z) across the redshift range z ∈[0, 3]. The rd GP emulator has around 0.02% error, and the emulator for mapping from θ∗to H0 around 0.04% error. Both of these derived parameter emulators have errors one order of magnitude lower than the 1σ deviation from the Planck measurements. For the CMB primary power spectra, our emulators are trained to have the fraction of outliers with ∆ˆχ2 > 0.2 to be around 1% under Planck lite TTTEEE likelihood within our testing range, shown in Fig. 14. The CMB lensing power spectrum emulator is trained to have < 1% error across the lrange of ACT lensing. The training for the baseline spatially flat cases uses 2.5 million training points. We then develop another emulator for the curvature analysis with an addition of 0.8 million points sampled in the range described above, and the models reach a similar level of precision. 12 [1] T. M. C. Abbott et al. (DES), Astrophys. J. Lett. 973, L14 (2024), . [2] M. Abdul Karim et al. (DESI), (2025), . [3] W. Elbers et al. (DESI), (2025), . [4] N. Aghanim et al. (Planck), Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], . [5] K. Lodha et al. (DESI), (2025), . [6] G. Ye, M. Martinelli, B. Hu, and A. Silvestri, Phys. Rev. Lett. 134, 181002 (2025), . [7] X. D. Jia, J. P. Hu, D. H. Gao, S. X. Yi, and F. Y. Wang, (2025), . [8] S. Goldstein, M. Celoria, and F. Schmidt, (2025), . [9] H. Chaudhary, V. K. Sharma, S. Capozziello, and G. Mustafa, (2025), . [10] S. Roy Choudhury, T. Okumura, and K. Umetsu, (2025), . [11] R. Kou and A. Lewis, (2025), . [12] L. W. K. Goh and A. N. Taylor, (2025), . [13] A. G ́omez-Valent and A. Gonz ́alez-Fuentes, (2025), . [14] P. Brax, (2025), . [15] M. Braglia, X. Chen, and A. Loeb, (2025), . [16] E. Silva and R. C. Nunes, (2025), . [17] S. S. Mishra, W. L. Matthewson, V. Sahni, A. Shafieloo, and Y. Shtanov, (2025), . [18] S. Lee, (2025), . [19] I. D. Gialamas, G. H ̈utsi, M. Raidal, J. Urrutia, M. Vasar, and H. Veerm ̈ae, Phys. Rev. D 112, 063551 (2025), . [20] E. ̈Oz ̈ulker, E. Di Valentino, and W. Giar`e, (2025), . [21] R. E. Keeley, A. Shafieloo, and W. L. Matthewson, (2025), . [22] J. M. Cline and V. Muralidharan, Phys. Rev. D 112, 063539 (2025), . [23] R. de Souza, G. Rodrigues, and J. Alcaniz, (2025), . [24] B. R. Dinda and R. Maartens, Mon. Not. Roy. Astron. Soc. 542, L31 (2025), . [25] G. Gu et al. (DESI), (2025), 10.1038/s41550-025-026696, . [26] K. V. Berghaus, J. A. Kable, and V. Miranda, Phys. Rev. D 110, 103524 (2024), . [27] J. Rebou ̧cas, D. H. F. de Souza, K. Zhong, V. Miranda, and R. Rosenfeld, JCAP 02, 024 (2025), . [28] A. Vikman, Phys. Rev. D 71, 023515 (2005), arXiv:astroph/0407107. [29] W. Hu, Phys. Rev. D 71, 047301 (2005), arXiv:astroph/0410680. [30] P. Creminelli, G. D'Amico, J. Norena, and F. Vernizzi, JCAP 02, 018 (2009), . [31] W. Fang, W. Hu, and A. Lewis, Phys. Rev. D 78, 087303 (2008), . [32] G. Gubitosi, F. Piazza, and F. Vernizzi, JCAP 02, 032 (2013), . [33] J. K. Bloomfield, ́E. ́E. Flanagan, M. Park, and S. Watson, JCAP 08, 010 (2013), . [34] M. S. Oliveira, F. A. Brito, and J. A. V. Campos, (2025), . [35] N. J. Pullisseri and S. Unnikrishnan, (2025), . [36] W. J. Wolf, P. G. Ferreira, and C. Garc ́ıa-Garc ́ıa, (2025), . [37] A. Pourtsidou, (2025), . [38] S. Tsujikawa, (2025), . [39] Z. Yao, G. Ye, and A. Silvestri, (2025), . [40] S. L. Guedezounme, B. R. Dinda, and R. Maartens, (2025), . [41] M. H ̈og ̊as and E. M ̈ortsell, (2025), . [42] A. Paliathanasis, Phys. Dark Univ. 49, 101993 (2025), . [43] D. Andriot, Phys. Dark Univ. 49, 102000 (2025), . [44] C. You, D. Wang, and T. Yang, Phys. Rev. D 112, 043503 (2025), . [45] S. Pan, S. Paul, E. N. Saridakis, and W. Yang, (2025), . [46] E. Silva, M. A. Sabogal, M. Scherer, R. C. Nunes, E. Di Valentino, and S. Kumar, Phys. Rev. D 111, 123511 (2025), . [47] E. V. Linder, (2007), . [48] L. Amendola, Phys. Rev. D 62, 043511 (2000), arXiv:astro-ph/9908023. [49] S. Das, P. S. Corasaniti, and J. Khoury, Phys. Rev. D 73, 083509 (2006), arXiv:astro-ph/0510628. [50] E. J. Copeland, M. Sami, and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006), arXiv:hep-th/0603057. [51] J. Khoury, M.-X. Lin, and M. Trodden, (2025), . [52] A. Bedroya, G. Obied, C. Vafa, and D. H. Wu, (2025), . [53] S. M. Carroll, Phys. Rev. Lett. 81, 3067 (1998), arXiv:astro-ph/9806099. [54] A. J. Shajib and J. A. Frieman, Phys. Rev. D 112, 063508 (2025), . [55] J. Carron, M. Mirmelstein, and A. Lewis, JCAP 09, 039 (2022), . [56] F. J. Qu et al. (ACT), Astrophys. J. 962, 112 (2024), . [57] R. Liu, W. Hu, and D. Grin, Phys. Rev. D 112, 023513 (2025), . [58] T. Karwal, Y. Patel, A. Bartlett, V. Poulin, T. L. Smith, and D. N. Pfeffer, (2024), . [59] F. Ge et al. (SPT-3G), (2024), . [60] T. Jhaveri, T. Karwal, and W. Hu, Phys. Rev. D 112, 043541 (2025), . 13 [61] N. Sailer, G. S. Farren, S. Ferraro, and M. White, (2025), . [62] W. Giar`e, E. Di Valentino, and A. Melchiorri, Phys. Rev. D 109, 103519 (2024), . [63] I. J. Allali, P. Singh, J. Fan, and L. Li, (2025), . [64] I. J. Allali, L. Li, P. Singh, and J. Fan, (2025), . [65] Z. Huang, (2025), . [66] W. Elbers, (2025), . [67] C. Heinrich and W. Hu, Phys. Rev. D 104, 063505 (2021), . [68] G. Obied, C. Dvorkin, C. Heinrich, W. Hu, and V. Miranda, Phys. Rev. D 98, 043518 (2018), . [69] J. M. Delouis, L. Pagano, S. Mottet, J. L. Puget, and L. Vibert, Astron. Astrophys. 629, A38 (2019), . [70] Y. Li et al. (CLASS), (2025), . [71] S.-F. Chen and M. Zaldarriaga, JCAP 08, 014 (2025), . [72] V. Poulin, T. L. Smith, R. Calder ́on, and T. Simon, (2025), . [73] A. R. Khalife et al. (SPT-3G), (2025), . [74] S. L. Finkelstein et al., Astrophys. J. Lett. 969, L2 (2024), . [75] M. Castellano et al., Astrophys. J. Lett. 938, L15 (2022), . [76] D. J. Eisenstein et al., arXiv e-prints , (2023), . [77] Y. Harikane et al., Astrophys. J. Supp. 265, 5 (2023), . [78] J. B. Mu ̃noz, J. Mirocha, J. Chisholm, S. R. Furlanetto, and C. Mason, Mon. Not. Roy. Astron. Soc. 535, L37 (2024), . [79] J. Witstok et al., Nature 639, 897 (2025), . [80] A. G. Riess et al., (2025), . [81] R. Takahashi, M. Sato, T. Nishimichi, A. Taruya, and M. Oguri, Astrophys. J. 761, 152 (2012), . [82] A. Mead, S. Brieden, T. Tr ̈oster, and C. Heymans, Mon. Not. Roy. Astron. Soc. 502, 1401 (2021), . [83] E. Calabrese et al. (ACT), (2025), . [84] K. Zhong, E. Saraivanov, J. Caputi, V. Miranda, S. S. Boruah, T. Eifler, and E. Krause, Phys. Rev. D 111, 123519 (2025), . [85] E. Saraivanov, K. Zhong, V. Miranda, S. S. Boruah, T. Eifler, and E. Krause, Phys. Rev. D 111, 123520 (2025), . [86] Y. Zhu, E. Saraivanov, J. A. Kable, A. S. Giannakopoulou, A. Nijjar, V. Miranda, M. Bonici, T. Eifler, and E. Krause, (2025), .
2510.14959
CBF-RL: Safety Filtering Reinforcement Learning in Training with Control Barrier Functions *Lizhi Yang, *Blake Werner, Massimiliano de Sa, Aaron D. Ames Abstract— Reinforcement learning (RL), while powerful and expressive, can often prioritize performance at the expense of safety. Yet safety violations can lead to catastrophic out- comes in real-world deployments. Control Barrier Functions (CBFs) offer a principled method to enforce dynamic safety— traditionally deployed online via safety filters. While the result is safe behavior, the fact that the RL policy does not have knowledge of the CBF can lead to conservative behaviors. This paper proposes CBF-RL, a framework for generating safe behaviors with RL by enforcing CBFs in training. CBF-RL has two key attributes: (1) minimally modifying a nominal RL policy to encode safety constraints via a CBF term, (2) and safety filtering of the policy rollouts in training. Theoretically, we prove that continuous-time safety filters can be deployed via closed-form expressions on discrete-time roll-outs. Practically, we demonstrate that CBF-RL internalizes the safety constraints in the learned policy—both enforcing safer actions and biasing towards safer rewards—enabling safe deployment without the need for an online safety filter. We validate our framework through ablation studies on navigation tasks and on the Unitree G1 humanoid robot, where CBF-RL enables safer exploration, faster convergence, and robust performance under uncertainty, enabling the humanoid robot to avoid obstacles and climb stairs safely in real-world settings without a runtime safety filter. I. INTRODUCTION Humanoid robots are capable of interacting with en- vironments designed for humans. However, the complex environment, high-dimensional robot dynamics, and noise of the sensors also make them highly vulnerable to unsafe control inputs. One unsafe action could lead to damage to both the robot and its surroundings, and thus ensuring safety is essential. Meanwhile, reinforcement learning (RL) has emerged as a powerful tool for humanoid robots to achieve diverse skills, but focuses mostly on performance [1]–[3] and expressiveness [4]–[7]. In this paper, we propose integrating formal safety mechanisms with the powerful exploration and exploitation abilities of RL so that learned policies can reduce or prevent catastrophic behaviors. To achieve this, we turn to Control Barrier Functions (CBFs) [8] for a principled way to encode state-based safety constraints as forward- invariant sets. The CBF conditions are often enforced using safety filters [9]: quadratic programs satisfying the safety constraint by minimally modifying a proposed control input. There are two key approaches to instantiating safety filters in RL. The first approach is safety filtering the RL-proposed action and projects it into the safe set before execution [10]– [13] or performing constrained updates of the gradient [14], * denotes equal contribution. All authors affiliated with Caltech MCE. This research is supported in part by the Technology Innovation Institute (TII), BP p.l.c., and by Dow via project #227027AW. Deployment CBF-RL Training Safety Filter Action/Reward Fig. 1. A humanoid robot trained to climb stairs with the CBF-RL framework. Safety is injected into training by both filtering the policy- proposed actions and also provide safety rewards in addition to task and regularization rewards. During deployment the CBF policy retains safe behavior without a runtime filter. [15]. This guarantees safety at runtime, but the filter must remain in the loop at deployment, and the learned policy may never internalize the constraint. This prevents a high- dimensional agent, like a humanoid robot, from discovering novel or efficient behaviors since the exploration space is pruned too aggressively. Both also require solving an optimization program at every control step, which may be computationally expensive. The second approach is reward shaping where a residual augments the reward term to penalize states that approach or violate constraint boundaries [16]–[21] and encourages the agent towards safer behaviors without active filtering. This alone does not directly enforce safe actions during training and is often sensitive to the choice of penalty weights, possibly being insufficient in safety-critical applications. This paper proposes a fusion of these two approaches to integrating CBFs into RL. A. Contributions In this paper, we show that safety filtering and reward shaping are complementary, proposing CBF-RL: a dual ap- proach that applies both a closed-form CBF-based safety arXiv:2510.14959v2 [cs.RO] 19 Oct 2025 filter and a barrier-inspired reward term during training. This safety filters a nominal RL policy, enabling it to learn safe behaviors. Catastrophic unsafe actions are prevented by the active filter, and the reward term biases the policy toward avoiding safety interventions. Therefore, the policy has direct corrective supervision–it observes what it would have done, how the filter corrects it, and how the reward changes. It also learns to propose actions that directly satisfy the barrier condition. This enables the policy to show safe behaviors at deployment time without an active filter. We evaluate CBF-RL, and its dual approach to safe RL, with ablations using a 2D navigation task with dynamics randomness, analyzing task completion rates and robustness tests. We also train humanoid locomotion policies in Isaa- cLab [22] with full-order dynamics and domain random- ization for obstacle avoidance and stair climbing tasks. The approach is validated on hardware using a Unitree G1 hu- manoid with zero-shot sim-to-real policies to exhibit the ef- fectiveness of the proposed framework in high-dimensional, complex systems. With the dual-trained policy, the robot can successfully navigate around obstacles and climb stairs under command sequences that would lead to failure with nominal policies that don’t leverage the CBF-RL framework. Our contributions are as follows: • Conceptually: We propose a dual CBF-RL training framework that uses both CBF-based active filtering and barrier-inspired rewards during training, and can be deployed without a filter. • Theoretically: We provide a continuous-to-discrete- relationship analysis of CBF-RL and a closed-form solution for light-weight integration. • Practically: We empirically demonstrate across simu- lated, and hardware experiments that policies trained using the dual approach can internalize safety and reduce unsafe actions at deployment. B. Related Work Due to the ease of modifying rewards for training, a number of works incorporate safety value functions into the reward structure to guide policies toward safety. [16], [17] use a fixed penalty, [18], [19], [21] utilize correction- proportional penalties, and [20] proposes an explicit barrier- inspired reward shaping mechanism to reduce unsafe explo- ration. These methods encourage safe behavior but do not explicitly direct the policy to exhibit safe behaviors during learning, instead solely relying on the policy to discover safer actions on its own, leading to slower training. To address this, runtime CBF-based safety filters during training minimally modify the actions of the policy such that the system stays in the user-defined forward-invariant safe set typically by solving an optimization program, be it discrete-time due to the nature of RL training [10]–[13], [15], [23], or continuous-time [24]–[26] which empirically shows improved safety. For humanoid locomotion, these methods are not ideal as humanoid robots have tight real- time and computation power constraints and inaccurate state estimation from sensor noise. We instead filter only in simulation during training, and show that the resulting policy retains safety even without a runtime filter. We further provide theoretical verification that under certain conditions, continuous-time CBF can be used as conditions for forward invariance for RL simulations that are discrete in nature, and thus we can use the closed-form solution to the CBF-QP to accelerate each step in training. Many papers utilize model-based approaches [11], [15], [24], [27]. which require access to accurate dynamics mod- els. While theoretically appealing, they are less practical for high-dimensional humanoids where dynamics are complex and uncertain. Some works also do constrained updates of the gradient to ensure that the policy remains safe [14], [15]; however, they also require solving optimization programs at each gradient update step and limit the exploration of the policy. Our framework is model-free, requiring only deriva- tives of the reduced-order model (e.g. for the kinematics of a humanoid robot, its Jacobian J), and emphasizes lightweight integration with standard policy-gradient RL, in our case proximal policy optimization(PPO) [28]. This also leads to the benefit of the policy being able to venture closer to the constraint boundaries, ensuring rich exploration. In response to the issue of stochastic systems, robust ex- tensions address uncertainty in dynamics or sensing through disturbance observers, Gaussian Process models, or robusti- fied CBFs [24]–[26], [29]. These methods improve reliability under uncertainty but add significant complexity and com- putational burden. Here we show that by relying on domain randomization during training, dual-trained policies remain safer under uncertainty of the system without explicit models. Another line of work learns barrier-like certificates or relates value functions to barrier properties [30], [31]. These methods aim to automate the design of barrier functions or embed them in differentiable layers. Our approach assumes analytic barrier functions and focuses on pragmatic integra- tion with RL training rather than barrier discovery. The above works have been applied to domains such as spacecraft inspection [12], drone control [13], autonomous driving [23], and driver assistance [24]. Much of the prior work captures part of the safety challenge, but none directly combines filtering and shaping in a way that allows a policy to internalize safety during training and then act autonomously without a filter at deployment, especially not on a high-dimensional system such as a humanoid robot. This distinction defines the novelty of our contribution. II. BACKGROUND Reinforcement Learning We consider the standard RL formulation of a Markov decision process (MDP) (X, U, P, r, γ). At each timestep k, an agent selects an action uk ∈U, according to a policy πθ(uk|xk) based on an observed state xk ∈X, and receives a reward r(xk, uk). The environment follows the transition dynamics xk+1 ∼ P(·|xk, uk). The goal is then to maximize the expected discounted return E[P∞ k=0 γtr(xk, uk)]. During deployment, actions are selected as the conditional expectation uk = E[πθ(uk|xk)]. In this work, we specifically use model- free policy-gradient actor-critic methods such as PPO [28], though our approach is agnostic to the specific RL algorithm. RL also often suffers from reward sparsity when unsafe events like obstacle collisions are rare. Thus the policy seldom experiences consequences of unsafe actions, leading to vanishing gradients and unstable, slow training. As such, one part of our method also does reward densification to mitigate this by designing r(xk, uk) to give informative nonterminal signals related to safety. Reduced-Order Models Consider a continuous-time system with dynamics ˙x = ϕ(x, u), where x ∈Rn is the state and u ∈Rm is the control input. In our case the state x may be very high-dimensional, e.g. joint positions and velocities of a robot. Thus we consider a reduced-order state q ∈Rnq, nq < n, representing key lower-dimensional features such as the robot’s center of mass position. We define a projection p : Rn →Rnq that projects the full-order state onto the reduced- order state. Given a locally Lipschitz continuous feedback control law v = k(q), the reduced state q is governed by ˙q = ∂p ∂x ϕ x, ψ(x, v)  ≈f(q) + g(q)v = f(q) + g(q)k(q), (1) where ψ(x, v) is a control interface lifting the reduced- order input v to a full-order input u. See [30], [32] for the connections between reduced and full order models. Control Barrier Function Safety Filters In the control barrier function framework, a set of “safe states” for the system, S ⊆Rn, is encoded as the zero superlevel set of a continuously differentiable function h : Rn →R, S :=  q ∈Rn h(q) ≥0 , (2) ∂S :=  q ∈Rn h(q) = 0 , (3) int (S) :=  q ∈Rn h(q) > 0 . (4) The aim of safety-critical control is to design a feedback control law k(q) that renders S forward invariant. Definition 1. A set S ⊆Rn is forward invariant for (1) if, for every initial state q(t0) ∈S, the resulting state trajectory q : I ⊆R →Rn remains in S for all t ∈I ∩R≥t0. For control-affine systems as in (1), the forward invariance of S can be enforced using CBFs [8], [9]. Definition 2. Let S ⊂Rn be a set as in (2), with h satisfying ∇h|∂S ̸= 0. Then, h is a control barrier function (CBF) on Rn if there exists1 a γ ∈Ke ∞such that for all q ∈Rn, sup v∈Rm n ˙h(q, v) = Lfh(q) + Lgh(q)v o > −γ(h(q)). (5) Given a CBF h and function γ ∈Ke ∞, the set of control inputs satisfying (5) at q is given by UCBF(q) = n v ∈Rm ˙h(q, v) ≥−γ(h(q)) o . (6) A function α : R →R belongs to Ke ∞if it is increasing, continuous, and satisfies α(0) = 0, limr→±∞α(r) = ±∞. Any locally Lipschitz controller k for (1) for which k(q) ∈ UCBF(q) ∀q ∈S enforces forward invariance of S [8]. For real-world robotic systems with zero-order hold, sampled-data implementations, it is generally impossible to choose continuous control actions that create the closed-loop system (1). As such, for the remainder of this work, we focus on the discrete-time analogues of these systems, qk+1 = F(qk) + G(qk)vk, ∀k ∈Z≥0, (7) where F : Rn × Rm →Rm is the discretization of(1) over a time interval ∆t > 0 for a constant input v. Here, we focus on enforcing forward invariance at sample times2 k∆t. This can be achieved using a discrete-time CBF [34], [35]. Definition 3. Let S ⊆Rn be as in (2) and ρ ∈[0, 1]. The function h is a discrete-time control barrier function (DTCBF) for (7) if ∀q ∈S there exists a v ∈Rm for which h(F(q) + G(q)v) ≥ρh(q). (8) Similar to CBFs and continuous-time systems, DTCBFs keep the discrete-time system (7) safe for each k ∈Z≥0. Here the value of h is lower bounded by a geometrically decaying curve, h(qk) ≥ρkh(q0) [34]. Thus, they can be used to generate safe control actions through an optimization program wherein a desired but potentially unsafe input vdes k ∈ Rm is minimally modified to produce a safe input: vsafe k = arg min vk∈Rm ∥vk −vdes k (qk)∥2 (9) s.t. h(F(qk) + G(qk)vk) ≥ρh(qk). (10) III. DUAL APPROACH TO CBF-RL In order to apply CBFs to RL pipelines, we characterize the relationship between continuous-time CBFs and discrete updates of the RL environment. With this relationship, we can utilize the closed-form solution of the continuous-time CBF-QP to understand its effect on the RL environment. Lemma 1. Suppose h : Rn →R is a C1 CBF for the continuous-time single integrator ˙q = v, where q, v ∈Rn. Let ks : Rn →Rn be any safe, locally Lipschitz controller for the continuous-time integrator, satisfying ∇h(q)⊤ks(q) ≥−αh(q), ∀q ∈Rn, (11) for some α > 0. Let f∆t : Rn×Rn →Rn, (q, v) 7→q+∆t v be the Euler discretization of the single integrator with time step ∆t > 0. There exists a continuous function R : Rn × Rn →R such that ∀q ∈Rn, lim∥w∥→0 R(q,w) ∥w∥ = 0 and h(f∆t(q, ks(q))) ≥(1 −∆t α)h(q) −|R(q, ∆t ks(q))|. (12) Proof. By [36, Chapter 5.2 (2)], there exists a continuous function R : Rn ×Rn →R satisfying lim∥w∥→0 R(q,w) ∥w∥ = 0 and h(q + w) = h(q) + ∇h(q)⊤w + R(q, w) ∀q, w ∈Rn. Taking w = ∆t ks(q), we calculate h(q + ∆t ks(q)) as = h(q) + ∆t · [∇h(q)⊤ks(q)] + R(q, ∆t ks(q)) (13) ≥h(q) −∆t · αh(q) −|R(q, ∆t ks(q))|. (14) See [33] for a discussion of zero-order-hold, intersample safety. RL Policy CBF Safety Filter CBF-Guided Policy Update Hardware Simulation Fig. 2. CBF-RL framework: For one given task, the user defines the safety barrier function h(q) and the accompanying ∇h(q). During training, the RL policy proposes action vpolicy; the CBF safety filter then calculates the closed-form solution of the CBF-QP vfiltered and the safety reward rcbf based on proposed action vpolicy and agent configuration q. The RL agent executes vfiltered in the massively-parallel discretized environment, and the policy is updated with the combination of task, regularization, and safety rewards r = rnominal + rcbf. During deployment, the policy is able to output safe actions vpolicy cbf without needing an explicit runtime filter. Combining h terms, the result follows. Using Lemma 1, we bound the evolution of the barrier function along trajectories of the Euler-discretized integrator. Theorem 1 (Continuous to Discrete Safety). Consider the setting of Lemma 1. Suppose there exists a compact, forward- invariant set K ⊆Rn for the discrete-time integrator dy- namics qk+1 = f∆t(qk, ks(qk)). Then, ∀∆t > 0, µ(∆t) = supq∈K |R(q, ∆t ks(q))| < +∞and lim∆t→0 µ(∆t)/∆t = 0. Further, provided (1 −∆t α) ∈[0, 1), for q0 ∈K, h(qk) ≥(1 −∆t α)kh(q0) −µ(∆t) ∆t α , ∀k ≥0. (15) Remark 1. As a consequence of the limiting behavior of µ, as we take the discretization step ∆t to zero, the standard DTCBF bound, h(qk) ≥(1 −∆t α)kh(q0), is recovered. Proof. Fix ∆t > 0. Since R and ks are continuous, compactness of K implies µ(∆t) is finite ∀∆t > 0. To establish the limit lim∆t→0 µ(∆t)/∆t = 0, we note that lim∆t→0 |R(q,∆t ks(q))| ∆t = 0 ∀q ∈K implies sup q∈K lim ∆t→0 |R(q,∆t ks(q))| ∆t = 0. (16) Compactness of K and joint continuity of |R(q, ∆t ks(q))| in q and ∆t implies uniform continuity of |R(q, ∆t ks(q))|; this lets us interchange limit and supremum. We conclude lim∆t→0 supq∈K |R(q,∆t ks(q))| ∆t = 0 ⇒lim∆t→0 µ(∆t) ∆t = 0. Now, we establish the bound using a comparison system. If yk+1 = ρyk−|dk| for all k ≥0 and |dk| ≤µ, a geometric series argument establishes yk ≥ρky0 −( 1 1−ρ)µ ∀k ≥0. Fix q0 ∈K. Since K is forward invariant for the closed-loop system, |R(qk, ∆t ks(qk))| ≤µ(∆t) ∀k ≥0. Using Lemma 1, we take ρ = 1−∆tα, µ(∆t) = supq∈K |R(q, ∆t ks(q))|, and conclude by comparison with yk that the bound h(qk) ≥(1 −∆t α)kh(q0) −µ(∆t) ∆t α , (17) does indeed hold for all q0 ∈K. The established relationship means that with small enough ∆t, as is the norm with physical simulators oriented towards RL, we can apply continuous-time CBF tools directly to discrete-time RL environments; thus we provide the two core parts of CBF-RL shown in Fig. 2: Safety-filtering during training At training time, the policy would generate desired actions that are not necessarily safe vpolicy k at step k. A safety filter is then applied to enforce safe behaviors and guide the system to ’learn’ the safety filter as part of the natural closed-loop dynamics. As we have proven the relationship between the continuous-time and discrete-time CBF conditions, typically, we can achieve safety filtering through a CBF-QP as follows [9]: vsafe k = arg min vk∈Rn 1 2||vk −vpolicy k ||2 (18) s.t. ∇h(qk)⊤vk ≥−αh(qk). (19) However, solving QPs for every step of RL training is not preferable in the massively-parallel environments such as IsaacLab [22], instead, we employ the explicit solution: vsafe k =        vpolicy k , if a⊤ k vpolicy k ≥bk vpolicy k + bk −a⊤ k vpolicy k  ak ∥ak∥2 , o.w. (20) ak := ∇h(qk), bk := −α h(qk). (21) Penalizing unsafe behavior In addition to filtering the policy actions at training time, we also quantify how safe the environment is to inform training. To this end, we modify the rewards to include rCBF defined as rcbf(qk, vk) = max  a⊤ k vpolicy k −bk  , 0  (22) + exp  −∥vpolicy−vsafe∥2 σ2  −1  (23) and the whole reward is thus r = rnominal + rcbf. Algorithm 1 RL Training with Discrete-Time CBF Safety 1: Initialize policy parameters θ, initial configuration q0, safety function h 2: for step = 1 to Nsteps do 3: Initialize q0, observation o0 4: for k = 0 to T −1 do 5: qpolicy k+1 ←πθ(ok) 6: vpolicy k ← (qpolicy k+1 −qk) ∆t 7: ak = ∇h(qk), bk = −αh(qk) 8: Compute CBF condition: c = a⊤ k vpolicy k −bk 9: if c ≥0 then 10: vsafe k ←vpolicy k , 11: else 12: vsafe k ←vpolicy k + (bk −a⊤ k vpolicy k ) a⊤ k ∥ak∥2 13: end if 14: qsafe k+1 ←qk + ∆t vsafe k 15: qenv k+1, ok+1 ←ENVIRONMENT UPDATE(qsafe k+1) 16: rcbf ← w · [max  0,  a⊤ k vpolicy k −bk  + exp  −∥vpolicy−vsafe∥2 σ2  −1] 17: r ←R(qenv k+1) + rcbf 18: Store transition: (qk, ok, qpolicy k+1 , qsafe k+1, qenv k+1, r) 19: end for 20: Update policy parameters θ ←η∇θL(θ, r) 21: end for Intuitively, this reward penalizes actions whenever the safety filter is activated, and also incentivizes the model to take actions as close to the safe actions as possible to reduce the intervention of the filter. The whole algorithm is as shown in Alg. 1. Single Integrator Example To demonstrate our proposed approach and analyze the effect of each component of CBF- RL, we perform extensive ablation studies on a single inte- grator navigation task. The agent is placed into a 2D world with obstacles and tasked with going to a specified goal. The starting, obstacle, and goal locations are all randomized, and extra care is taken such that the agent always initializes in a safe state following [13]. The single integrator has dynamics qk+1 = qk + vk∆t where q = (x, y) is the robot position, vk is the velocity of the agent, and ∆t is the timestep size. The safety barrier function h is defined as h(q) = min n min j ∥q −pj∥−(ragent + rj)  , x −ragent, (L −x) −ragent, y −ragent, (L −y) −ragent o (24) ∇h(q) =        q −pj⋆ ∥q −pj⋆∥, j⋆∈arg minj hj(q), ±ex, if left/right wall active, ±ey, if bottom/top wall active, (25) where pj is the position of the jth obstacle with radius rj, the agent also has radius ragent and the size of the world is L. Thus we can formulate the training-time safety filter with (20) and define our reward modification associated with the safety with (23). Full reward terms are shown in Table II. Ablation To validate our method, we train 4 variants (Dual, Reward-only, Filter-only, Nominal) and test 12 vari- ants following Table III for 1500 steps with 4096 parallel environments. We also evaluate policies trained with filtered action then deployed without a runtime filter (rt. filt.). We observe that the Dual approach achieves higher rewards while remaining safe throughout training. Over 1000 random test environments, the Dual policy performs well with or without a runtime filter, whereas Filter Only performs well only with an active safety filter and degrades markedly without it. Robustness To further investigate the robustness of the policy induced by domain randomization (DR), we train the dual approach with noise on the dynamics model, i.e. qk+1 = qk + (vk + d)∆t where d follows the standard normal distribution scaled by 20% to the maximum velocity. It is observed that the policy trained with the dual method overall suffers least from the dynamics disturbance as can be seen in Table I. TABLE I. Success rates over 1000 random test environments for different methods, with and without DR. The dual policy consistently performs well and suffers less degradation due to dynamics uncertainty. Dual Dual (w/o rt. filt.) Reward Only No DR 99.0% 92.7% 91.9% DR 99.0%(−0%) 91.7%(−1%) 87.6%(−4.3%) Filter Only Filter Only (w/o rt. filt.) Nominal No DR 98.8% 38.7% 51.4% DR 96.7%(−2.1%) 36.8%(−1.9%) 55.0%(+3.6%) IV. CBF-RL FOR SAFE HUMANOID LOCOMOTION In the following section, we present two different use cases of CBF-RL in humanoid locomotion to show the generality and performance of our method. Each uses a different set of user-specified reduced-order coordinates and a valid CBF for the resulting reduced-order dynamics. The nominal rewards and observations (history length 5) follow [37]. Note that for blind stair climbing, we employ the asymmetric actor- critic method [38] and provide a height scan of size 1m × 1.5m with a resolution of 0.1m to the critic observation with a history length of 1. We use a MLP policy with 3 hidden layers of [512, 256, 128] neurons running at 50Hz, outputting the joint position setpoints of the 12-DoF lower body for the Unitree G1. We train in IsaacLab with 4096 environments of ∆t = 0.005s up to 20,000 steps and perform zero-shot hardware transfer experiments to verify our approach. A. Task Definition Planar Obstacle Avoidance First, we consider the task where a humanoid robot needs to avoid obstacles during 0 500 1000 1500 Step 0 500 1000 1500 2000 Reward Mean episode reward vs. steps 0 9 17 26 34 Collisions Dual Dual Collisions Filter Only Collisions Reward Only Reward Only Collisions Nominal Collisions Filter Only Nominal Fig. 3. Training progress with the Dual, Reward Only, Filter Only and Nominal methods. Dual and Filter Only achieve faster convergence and avoid training-time safety violations. 2 4 6 8 10 0 2 4 6 8 Trajectory Comparison Start Nominal Dual Dual (w/o rt. filt.) Reward Only Filter Only Filter Only (w/o rt. filt.) Nominal DR Dual DR Dual (w/o rt. filt.) DR Reward Only DR Filter Only DR Filter Only (w/o rt. filt.) DR Goal Fig. 4. Trajectory comparisons of one example simulation. The black dot is the start and the yellow circle is the goal. Success = reaching the goal; failure = collision with obstacles or wall. TABLE II. Reward terms for single integrator navigation. Agent is rewarded for staying alive and closing the distance to the goal and penalized for hitting the obstacles / walls, exceeding the set time to complete the task, and violating the CBF conditions or not proposing safer actions Reward Formula rgoal 1.0 · 1(goal reached) robstacle −1.0 · 1(obstacle collision) rwall −1.0 · 1(wall collision) rprogress 20.0 · ∥pt−1 −g∥−∥pt −g∥ vmax∆t · 1(active) ralive 0.01 · 1(active) rcbf 100 ·  max(∇h(q)⊤v + αh(q), 0) + exp − ∥vpolicy−vsafe∥2 0.52  −1  · 1(active) rtimeout −10.0 · 1(time exceeded) TABLE III. List of method configurations for single integrator navigation. We ablate all permutations of the two main components of CBF-RL. Method Training Deployment DR Nominal Nominal No Runtime Filter No Dual Reward+Filter Runtime Filter No Dual (w/o rt. filt.) Reward+Filter No Runtime Filter No Reward Only Reward No Runtime Filter No Filter Only Filter Runtime Filter No Filter Only (w/o rt. filt.) Filter No Runtime Filter No Nominal DR Nominal No Runtime Filter Yes Dual DR Reward+Filter Runtime Filter Yes Dual (w/o rt. filt.) DR Reward+Filter No Runtime Filter Yes Reward Only DR Reward No Runtime Filter Yes Filter Only DR Filter Runtime Filter Yes Filter Only (w/o rt. filt.) DR Filter No Runtime Filter Yes locomotion without intervention, even when the velocity command is to collide with the obstacle. Thus we can simplify the safety problem as a single integrator problem where the policy modulates the robot’s planar velocities vbase planar = [vx, vy] to maintain safe distances from the closest obstacle, a cylinder centered at pr o in the robot frame. We define the safety function h(p) = ||pr o||−Rr −Ro where Rr and Ro are the radii of the robot and the obstacle respectively. We then train our robot with the CBF reward: robstacle cbf = max( pr o ||pro|| ⊤vplanar + αh(p), 0) + exp  − ∥vbase planar−vsafe planar∥2 σ2  −1. (26) Stair Climbing Second, we consider the task of humanoid locomotion on stairs. For this task, we consider the kinematic model of the foot as our reduced-order model qsw k+1 = qsw k + ∆t Jsw(qsw k )vsw k , where qsw = [px, py]T is the swing foot’s position in the body frame, Jsw comprises the rows of the robot’s body Jacobian associated with the foot’s position, and vsw is robot’s joint velocities. In climbing stairs, one problem is the robot hitting its toe against the next stair riser. We design the barrier as the distance to a hyperplane tangent to the stair after the one it is currently stepping on: h(q) = pstair x −px, where pstair x is the x position of the hyperplane in the body reference frame. Thus the CBF reward is: rnext stair cbf = max −Jsw x (q)  v + α h(q), 0  + exp(−∥q −qsafe||2 σ2 ) −1. (27) added to the nominal rewards including modifications to the feet clearance reward where the reference feet height now is dependent on the stair at the front of the robot and also a penalty on the swing foot force. B. Hardware Experiments Obstacle Course The obstacle course comprises 2 parts: the first part is the obstacle avoidance task where the robot has to prevent itself from colliding into the obstacles even if the velocity command intends so, and the second part comprises stairs constructed out of wooden pallets of riser height 0.14m and tread depth 0.3m. During execution, the robot locates the obstacles approximated as cylinders using the ZED 2 RGB-D camera through point cloud clustering. As seen from the h values as in Fig. 5, the robot modulates its own velocity to avoid the obstacle even when the velocity command pushes it to do so. However, it has no terrain perception and only uses proprioception for stair climbing. Despite this, we observe that the robot uses proprioception to determine when to climb and how high to lift its feet, successfully climbing the wooden stairs. 0 1 2 3 4 5 6 X (m) Robot Trajectory Trajectory Stairs Obstacles Margin 0 20 40 60 80 100 0.0 0.5 m/s 0 20 40 60 80 100 0.00 0.05 m/s 0 50 100 t (s) 0 2 h h(q) h h=0 t (s) Fig. 5. Robot trajectory, h, and command vs. actual velocity visualization. The robot avoids obstacles approximated as cylinders without a runtime safety filter and climb up stairs. The velocity plots show the robot modulating its own velocities despite the command and the h plot quantifies safety. Fig. 6. Snapshots of high stairs of riser height 0.3m. The nominal policy clips its feet against the riser and stumbles, as shown with the red CBF violations while the CBF-RL dual trained policy successfully climbs up and down. The yellow star marks the point the robot’s feet collides with the stair riser. Fig. 7. Snapshots of outdoor experiments. The robot is able to climb up stairs of varying roughness, tread depths and riser heights. Stair Climbing Robustness In order to further test the robustness of our method, we experiment both indoors and outdoors. During indoor experiments, we perform continuous runs up and down stairs, and also test the performance of the policy on high stairs, with the dual-trained policy able to climb up stairs 0.3m high and the policy trained without the CBF modifications unable to do so, as shown in Fig. 6. Here we note that the robot is able to gauge the depth and height of the stairs through proprioception and adjust its footsteps accordingly. For outdoor experiments, we test the dual-trained policy on concrete-poured stairs of different roughness and sizes, with rougher stairs of riser height 0.14m / tread depth 0.33m and smoother stairs of riser height 0.15m / tread depth 0.4m respectively, as shown in Fig. 7, where the robot could adjust its center of mass by modulating the torso pitch angle to account for deeper and higher stairs. V. CONCLUSION This paper introduced CBF-RL, a lightweight dual ap- proach to inject safety into learning by combining training- time CBF safety filtering with reward design, leading to policies that internalize safety and operate without a runtime filter, demonstrating its effectiveness through simulated and real-world experiments. We also provided a detailed theoret- ical analysis rationalizing the use of continuous-time CBF conditions in discrete-time RL simulation training. Looking forward, we plan to incorporate automated barrier discovery, perception-based barriers, and extend the application of CBF- RL beyond locomotion to whole-body loco-manipulation that would address broader humanoid capabilities. REFERENCES [1] D. Crowley, J. Dao, H. Duan, K. Green, J. Hurst, and A. Fern, “Optimizing bipedal locomotion for the 100m dash with comparison to human running,” arXiv preprint arXiv:2508.03070, 2025. [2] Z. Li, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath, “Reinforcement learning for versatile, dynamic, and robust bipedal locomotion control,” The International Journal of Robotics Research, vol. 44, no. 5, pp. 840–888, 2025. [3] T. Peng, L. Bao, and C. Zhou, “Gait-conditioned reinforcement learning with multi-phase curriculum for humanoid locomotion,” arXiv preprint arXiv:2505.20619, 2025. [4] T. He, J. Gao, W. Xiao, Y. Zhang, Z. Wang, J. Wang, Z. Luo, G. He, N. Sobanbabu, C. Pan, Z. Yi, G. Qu, K. Kitani, J. K. Hodgins, L. Fan, Y. Zhu, C. Liu, and G. Shi, “ASAP: Aligning Simulation and Real- World Physics for Learning Agile Humanoid Whole-Body Skills,” in Proceedings of Robotics: Science and Systems, LosAngeles, CA, USA, June 2025. [5] A. Allshire, H. Choi, J. Zhang, D. McAllister, A. Zhang, C. M. Kim, T. Darrell, P. Abbeel, J. Malik, and A. Kanazawa, “Visual imitation en- ables contextual humanoid control,” arXiv preprint arXiv:2505.03729, 2025. [6] T. E. Truong, Q. Liao, X. Huang, G. Tevet, C. K. Liu, and K. Sreenath, “Beyondmimic: From motion tracking to versatile humanoid control via guided diffusion,” arXiv preprint arXiv:2508.08241, 2025. [7] Z. Su, B. Zhang, N. Rahmanian, Y. Gao, Q. Liao, C. Regan, K. Sreenath, and S. S. Sastry, “Hitter: A humanoid table ten- nis robot via hierarchical planning and learning,” arXiv preprint arXiv:2508.21043, 2025. [8] A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, “Control barrier function based quadratic programs for safety critical systems,” IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 3861–3876, 2016. [9] A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada, “Control barrier functions: Theory and applications,” in 2019 18th European control conference (ECC). Ieee, 2019, pp. 3420–3431. [10] R. Cheng, G. Orosz, R. M. Murray, and J. W. Burdick, “End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks,” in Proceedings of the AAAI conference on artificial intelligence, vol. 33, no. 01, 2019, pp. 3387–3395. [11] H. Ma, J. Chen, S. Eben, Z. Lin, Y. Guan, Y. Ren, and S. Zheng, “Model-based constrained reinforcement learning using generalized control barrier function,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 4552– 4559. [12] D. Van Wijk, K. Dunlap, M. Majji, and K. Hobbs, “Safe Spacecraft Inspection via Deep Reinforcement Learning and Discrete Control Barrier Functions,” Journal of Aerospace Information Systems, vol. 21, no. 12, pp. 996–1013, Dec. 2024. [13] F. P. Bejarano, L. Brunke, and A. P. Schoellig, “Safety Filtering While Training: Improving the Performance and Sample Efficiency of Reinforcement Learning Agents,” IEEE Robotics and Automation Letters, vol. 10, no. 1, pp. 788–795, Jan. 2025, arXiv:2410.11671 [cs]. [14] M. Alshiekh, R. Bloem, R. Ehlers, B. K¨onighofer, S. Niekum, and U. Topcu, “Safe reinforcement learning via shielding,” in 32nd AAAI Conference on Artificial Intelligence: AAAI-18, 2018, pp. 2669–2678. [15] H. Zhang, Z. Li, and A. Clark, “Model-based Reinforcement Learning with Provable Safety Guarantees via Control Barrier Functions,” in 2021 IEEE International Conference on Robotics and Automation (ICRA). Xi’an, China: IEEE, May 2021, pp. 792–798. [16] H. Krasowski, J. Thumm, M. M¨uller, L. Sch¨afer, X. Wang, and M. Althoff, “Provably safe reinforcement learning: Conceptual anal- ysis, survey, and benchmarking,” Transactions on Machine Learning Research, 2022. [17] K. Dunlap, M. Mote, K. Delsing, and K. L. Hobbs, “Run time assured reinforcement learning for safe satellite docking,” Journal of Aerospace Information Systems, vol. 20, no. 1, pp. 25–36, 2023. [18] K. P. Wabersich and M. N. Zeilinger, “A predictive safety filter for learning-based control of constrained nonlinear dynamical systems,” Automatica, vol. 129, p. 109597, 2021. [19] X. Wang, “Ensuring safety of learning-based motion planners using control barrier functions,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4773–4780, 2022. [20] N. Nilaksh, A. Ranjan, S. Agrawal, A. Jain, P. Jagtap, and S. Ko- lathaya, “Barrier Functions Inspired Reward Shaping for Rein- forcement Learning,” in 2024 IEEE International Conference on Robotics and Automation (ICRA), May 2024, pp. 10 807–10 813, arXiv:2403.01410 [cs]. [21] Z. Wang, T. Ma, Y. Jia, X. Yang, J. Zhou, W. Ouyang, Q. Zhang, and J. Liang, “Omni-perception: Omnidirectional collision avoidance for legged locomotion in dynamic environments,” arXiv preprint arXiv:2505.19214, 2025. [22] M. Mittal, C. Yu, Q. Yu, J. Liu, N. Rudin, D. Hoeller, J. L. Yuan, R. Singh, Y. Guo, H. Mazhar, A. Mandlekar, B. Babich, G. State, M. Hutter, and A. Garg, “Orbit: A unified simulation framework for interactive robot learning environments,” IEEE Robotics and Automa- tion Letters, vol. 8, no. 6, pp. 3740–3747, 2023. [23] C. Zhang, L. Dai, H. Zhang, and Z. Wang, “Control Barrier Function- Guided Deep Reinforcement Learning for Decision-Making of Au- tonomous Vehicle at On-Ramp Merging,” IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 6, pp. 8919–8932, June 2025. [24] H. Hailemichael, B. Ayalew, L. Kerbel, A. Ivanco, and K. Loiselle, “Safe reinforcement learning for an energy-efficient driver assistance system,” IFAC-PapersOnLine, vol. 55, no. 37, pp. 615–620, 2022. [25] Y. Emam, G. Notomista, P. Glotfelter, Z. Kira, and M. Egerstedt, “Safe reinforcement learning using robust control barrier functions,” IEEE Robotics and Automation Letters, 2022. [26] Y. Cheng, P. Zhao, and N. Hovakimyan, “Safe and efficient reinforce- ment learning using disturbance-observer-based control barrier func- tions,” in Learning for Dynamics and Control Conference. PMLR, 2023, pp. 104–115. [27] D. Du, S. Han, N. Qi, H. B. Ammar, J. Wang, and W. Pan, “Reinforce- ment learning for safe robot control using control lyapunov barrier functions,” in 2023 IEEE International Conference on Robotics and Automation, ICRA 2023. IEEE, 2023, pp. 9442–9448. [28] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. [29] S. Li and O. Bastani, “Robust model predictive shielding for safe reinforcement learning with stochastic dynamics,” in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 7166–7172. [30] M. H. Cohen and C. Belta, “Safe exploration in model-based rein- forcement learning using control barrier functions,” Automatica, vol. 147, p. 110684, 2023. [31] D. C. H. Tan, F. Acero, R. McCarthy, D. Kanoulas, and Z. Li, “Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory,” Dec. 2023, arXiv:2306.04026 [cs]. [32] M. H. Cohen, N. Csomay-Shanklin, W. D. Compton, T. G. Molnar, and A. D. Ames, “Safety-critical controller synthesis with reduced- order models,” in 2025 American Control Conference (ACC). IEEE, 2025, pp. 5216–5221. [33] J. Breeden, K. Garg, and D. Panagou, “Control barrier functions in sampled-data systems,” IEEE Control Systems Letters, vol. 6, pp. 367– 372, 2021. [34] A. Agrawal and K. Sreenath, “Discrete control barrier functions for safety-critical control of discrete systems with application to bipedal robot navigation.” in Robotics: Science and Systems, vol. 13. Cambridge, MA, USA, 2017, pp. 1–10. [35] M. Ahmadi, A. Singletary, J. W. Burdick, and A. D. Ames, “Safe policy synthesis in multi-agent pomdps via discrete-time barrier func- tions,” in 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019, pp. 4797–4803. [36] C. C. Pugh, Real mathematical analysis. Springer, 2002. [37] K. Zakka, B. Tabanpour, Q. Liao, M. Haiderbhai, S. Holt, J. Y. Luo, A. Allshire, E. Frey, K. Sreenath, L. A. Kahrs, C. Sferrazza, Y. Tassa, and P. Abbeel, “Demonstrating MuJoCo Playground,” in Proceedings of Robotics: Science and Systems, LosAngeles, CA, USA, June 2025. [38] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel, “Asymmetric actor critic for image-based robot learning,” in 14th Robotics: Science and Systems, RSS 2018. MIT Press Journals, 2018.
CBF-RL: Safety Filtering Reinforcement Learning in Training with Control Barrier Functions *Lizhi Yang, *Blake Werner, Massimiliano de Sa, Aaron D. Ames Abstract- Reinforcement learning (RL), while powerful and expressive, can often prioritize performance at the expense of safety. Yet safety violations can lead to catastrophic outcomes in real-world deployments. Control Barrier Functions (CBFs) offer a principled method to enforce dynamic safetytraditionally deployed online via safety filters. While the result is safe behavior, the fact that the RL policy does not have knowledge of the CBF can lead to conservative behaviors. This paper proposes CBF-RL, a framework for generating safe behaviors with RL by enforcing CBFs in training. CBF-RL has two key attributes: (1) minimally modifying a nominal RL policy to encode safety constraints via a CBF term, (2) and safety filtering of the policy rollouts in training. Theoretically, we prove that continuous-time safety filters can be deployed via closed-form expressions on discrete-time roll-outs. Practically, we demonstrate that CBF-RL internalizes the safety constraints in the learned policy-both enforcing safer actions and biasing towards safer rewards-enabling safe deployment without the need for an online safety filter. We validate our framework through ablation studies on navigation tasks and on the Unitree G1 humanoid robot, where CBF-RL enables safer exploration, faster convergence, and robust performance under uncertainty, enabling the humanoid robot to avoid obstacles and climb stairs safely in real-world settings without a runtime safety filter. I. INTRODUCTION Humanoid robots are capable of interacting with environments designed for humans. However, the complex environment, high-dimensional robot dynamics, and noise of the sensors also make them highly vulnerable to unsafe control inputs. One unsafe action could lead to damage to both the robot and its surroundings, and thus ensuring safety is essential. Meanwhile, reinforcement learning (RL) has emerged as a powerful tool for humanoid robots to achieve diverse skills, but focuses mostly on performance [1]-[3] and expressiveness [4]-[7]. In this paper, we propose integrating formal safety mechanisms with the powerful exploration and exploitation abilities of RL so that learned policies can reduce or prevent catastrophic behaviors. To achieve this, we turn to Control Barrier Functions (CBFs) [8] for a principled way to encode state-based safety constraints as forwardinvariant sets. The CBF conditions are often enforced using safety filters [9]: quadratic programs satisfying the safety constraint by minimally modifying a proposed control input. There are two key approaches to instantiating safety filters in RL. The first approach is safety filtering the RL-proposed action and projects it into the safe set before execution [10]- [13] or performing constrained updates of the gradient [14], * denotes equal contribution. All authors affiliated with Caltech MCE. This research is supported in part by the Technology Innovation Institute (TII), BP p.l.c., and by Dow via project #227027AW. Deployment CBF-RL Training Safety Filter Action/Reward Fig. 1. A humanoid robot trained to climb stairs with the CBF-RL framework. Safety is injected into training by both filtering the policyproposed actions and also provide safety rewards in addition to task and regularization rewards. During deployment the CBF policy retains safe behavior without a runtime filter. [15]. This guarantees safety at runtime, but the filter must remain in the loop at deployment, and the learned policy may never internalize the constraint. This prevents a highdimensional agent, like a humanoid robot, from discovering novel or efficient behaviors since the exploration space is pruned too aggressively. Both also require solving an optimization program at every control step, which may be computationally expensive. The second approach is reward shaping where a residual augments the reward term to penalize states that approach or violate constraint boundaries [16]-[21] and encourages the agent towards safer behaviors without active filtering. This alone does not directly enforce safe actions during training and is often sensitive to the choice of penalty weights, possibly being insufficient in safety-critical applications. This paper proposes a fusion of these two approaches to integrating CBFs into RL. A. Contributions In this paper, we show that safety filtering and reward shaping are complementary, proposing CBF-RL: a dual approach that applies both a closed-form CBF-based safety 19 Oct 2025 filter and a barrier-inspired reward term during training. This safety filters a nominal RL policy, enabling it to learn safe behaviors. Catastrophic unsafe actions are prevented by the active filter, and the reward term biases the policy toward avoiding safety interventions. Therefore, the policy has direct corrective supervision-it observes what it would have done, how the filter corrects it, and how the reward changes. It also learns to propose actions that directly satisfy the barrier condition. This enables the policy to show safe behaviors at deployment time without an active filter. We evaluate CBF-RL, and its dual approach to safe RL, with ablations using a 2D navigation task with dynamics randomness, analyzing task completion rates and robustness tests. We also train humanoid locomotion policies in IsaacLab [22] with full-order dynamics and domain randomization for obstacle avoidance and stair climbing tasks. The approach is validated on hardware using a Unitree G1 humanoid with zero-shot sim-to-real policies to exhibit the effectiveness of the proposed framework in high-dimensional, complex systems. With the dual-trained policy, the robot can successfully navigate around obstacles and climb stairs under command sequences that would lead to failure with nominal policies that don't leverage the CBF-RL framework. Our contributions are as follows: • Conceptually: We propose a dual CBF-RL training framework that uses both CBF-based active filtering and barrier-inspired rewards during training, and can be deployed without a filter. • Theoretically: We provide a continuous-to-discreterelationship analysis of CBF-RL and a closed-form solution for light-weight integration. • Practically: We empirically demonstrate across simulated, and hardware experiments that policies trained using the dual approach can internalize safety and reduce unsafe actions at deployment. B. Related Work Due to the ease of modifying rewards for training, a number of works incorporate safety value functions into the reward structure to guide policies toward safety. [16], [17] use a fixed penalty, [18], [19], [21] utilize correctionproportional penalties, and [20] proposes an explicit barrierinspired reward shaping mechanism to reduce unsafe exploration. These methods encourage safe behavior but do not explicitly direct the policy to exhibit safe behaviors during learning, instead solely relying on the policy to discover safer actions on its own, leading to slower training. To address this, runtime CBF-based safety filters during training minimally modify the actions of the policy such that the system stays in the user-defined forward-invariant safe set typically by solving an optimization program, be it discrete-time due to the nature of RL training [10]-[13], [15], [23], or continuous-time [24]-[26] which empirically shows improved safety. For humanoid locomotion, these methods are not ideal as humanoid robots have tight realtime and computation power constraints and inaccurate state estimation from sensor noise. We instead filter only in simulation during training, and show that the resulting policy retains safety even without a runtime filter. We further provide theoretical verification that under certain conditions, continuous-time CBF can be used as conditions for forward invariance for RL simulations that are discrete in nature, and thus we can use the closed-form solution to the CBF-QP to accelerate each step in training. Many papers utilize model-based approaches [11], [15], [24], [27]. which require access to accurate dynamics models. While theoretically appealing, they are less practical for high-dimensional humanoids where dynamics are complex and uncertain. Some works also do constrained updates of the gradient to ensure that the policy remains safe [14], [15]; however, they also require solving optimization programs at each gradient update step and limit the exploration of the policy. Our framework is model-free, requiring only derivatives of the reduced-order model (e.g. for the kinematics of a humanoid robot, its Jacobian J), and emphasizes lightweight integration with standard policy-gradient RL, in our case proximal policy optimization(PPO) [28]. This also leads to the benefit of the policy being able to venture closer to the constraint boundaries, ensuring rich exploration. In response to the issue of stochastic systems, robust extensions address uncertainty in dynamics or sensing through disturbance observers, Gaussian Process models, or robustified CBFs [24]-[26], [29]. These methods improve reliability under uncertainty but add significant complexity and computational burden. Here we show that by relying on domain randomization during training, dual-trained policies remain safer under uncertainty of the system without explicit models. Another line of work learns barrier-like certificates or relates value functions to barrier properties [30], [31]. These methods aim to automate the design of barrier functions or embed them in differentiable layers. Our approach assumes analytic barrier functions and focuses on pragmatic integration with RL training rather than barrier discovery. The above works have been applied to domains such as spacecraft inspection [12], drone control [13], autonomous driving [23], and driver assistance [24]. Much of the prior work captures part of the safety challenge, but none directly combines filtering and shaping in a way that allows a policy to internalize safety during training and then act autonomously without a filter at deployment, especially not on a high-dimensional system such as a humanoid robot. This distinction defines the novelty of our contribution. II. BACKGROUND Reinforcement Learning We consider the standard RL formulation of a Markov decision process (MDP) (X, U, P, r, γ). At each timestep k, an agent selects an action uk ∈U, according to a policy πθ(uk|xk) based on an observed state xk ∈X, and receives a reward r(xk, uk). The environment follows the transition dynamics xk+1 ∼ P(·|xk, uk). The goal is then to maximize the expected discounted return E[P∞ k=0 γtr(xk, uk)]. During deployment, actions are selected as the conditional expectation uk = E[πθ(uk|xk)]. In this work, we specifically use modelfree policy-gradient actor-critic methods such as PPO [28], though our approach is agnostic to the specific RL algorithm. RL also often suffers from reward sparsity when unsafe events like obstacle collisions are rare. Thus the policy seldom experiences consequences of unsafe actions, leading to vanishing gradients and unstable, slow training. As such, one part of our method also does reward densification to mitigate this by designing r(xk, uk) to give informative nonterminal signals related to safety. Reduced-Order Models Consider a continuous-time system with dynamics ̇x = φ(x, u), where x ∈Rn is the state and u ∈Rm is the control input. In our case the state x may be very high-dimensional, e.g. joint positions and velocities of a robot. Thus we consider a reduced-order state q ∈Rnq, nq 0 . (4) The aim of safety-critical control is to design a feedback control law k(q) that renders S forward invariant. Definition 1. A set S ⊆Rn is forward invariant for (1) if, for every initial state q(t0) ∈S, the resulting state trajectory q : I ⊆R →Rn remains in S for all t ∈I ∩R≥t0. For control-affine systems as in (1), the forward invariance of S can be enforced using CBFs [8], [9]. Definition 2. Let S ⊂Rn be a set as in (2), with h satisfying ∇h|∂S ̸= 0. Then, h is a control barrier function (CBF) on Rn if there exists1 a γ ∈Ke ∞such that for all q ∈Rn, sup v∈Rm n ̇h(q, v) = Lfh(q) + Lgh(q)v o > -γ(h(q)). (5) Given a CBF h and function γ ∈Ke ∞, the set of control inputs satisfying (5) at q is given by UCBF(q) = n v ∈Rm ̇h(q, v) ≥-γ(h(q)) o . (6) A function α : R →R belongs to Ke ∞if it is increasing, continuous, and satisfies α(0) = 0, limr→±∞α(r) = ±∞. Any locally Lipschitz controller k for (1) for which k(q) ∈ UCBF(q) ∀q ∈S enforces forward invariance of S [8]. For real-world robotic systems with zero-order hold, sampled-data implementations, it is generally impossible to choose continuous control actions that create the closed-loop system (1). As such, for the remainder of this work, we focus on the discrete-time analogues of these systems, qk+1 = F(qk) + G(qk)vk, ∀k ∈Z≥0, (7) where F : Rn × Rm →Rm is the discretization of(1) over a time interval ∆t > 0 for a constant input v. Here, we focus on enforcing forward invariance at sample times2 k∆t. This can be achieved using a discrete-time CBF [34], [35]. Definition 3. Let S ⊆Rn be as in (2) and ρ ∈[0, 1]. The function h is a discrete-time control barrier function (DTCBF) for (7) if ∀q ∈S there exists a v ∈Rm for which h(F(q) + G(q)v) ≥ρh(q). (8) Similar to CBFs and continuous-time systems, DTCBFs keep the discrete-time system (7) safe for each k ∈Z≥0. Here the value of h is lower bounded by a geometrically decaying curve, h(qk) ≥ρkh(q0) [34]. Thus, they can be used to generate safe control actions through an optimization program wherein a desired but potentially unsafe input vdes k ∈ Rm is minimally modified to produce a safe input: vsafe k = arg min vk∈Rm ∥vk -vdes k (qk)∥2 (9) s.t. h(F(qk) + G(qk)vk) ≥ρh(qk). (10) III. DUAL APPROACH TO CBF-RL In order to apply CBFs to RL pipelines, we characterize the relationship between continuous-time CBFs and discrete updates of the RL environment. With this relationship, we can utilize the closed-form solution of the continuous-time CBF-QP to understand its effect on the RL environment. Lemma 1. Suppose h : Rn →R is a C1 CBF for the continuous-time single integrator ̇q = v, where q, v ∈Rn. Let ks : Rn →Rn be any safe, locally Lipschitz controller for the continuous-time integrator, satisfying ∇h(q)⊤ks(q) ≥-αh(q), ∀q ∈Rn, (11) for some α > 0. Let f∆t : Rn×Rn →Rn, (q, v) 7→q+∆t v be the Euler discretization of the single integrator with time step ∆t > 0. There exists a continuous function R : Rn × Rn →R such that ∀q ∈Rn, lim∥w∥→0 R(q,w) ∥w∥ = 0 and h(f∆t(q, ks(q))) ≥(1 -∆t α)h(q) -|R(q, ∆t ks(q))|. (12) Proof. By [36, Chapter 5.2 (2)], there exists a continuous function R : Rn ×Rn →R satisfying lim∥w∥→0 R(q,w) ∥w∥ = 0 and h(q + w) = h(q) + ∇h(q)⊤w + R(q, w) ∀q, w ∈Rn. Taking w = ∆t ks(q), we calculate h(q + ∆t ks(q)) as = h(q) + ∆t · [∇h(q)⊤ks(q)] + R(q, ∆t ks(q)) (13) ≥h(q) -∆t · αh(q) -|R(q, ∆t ks(q))|. (14) See [33] for a discussion of zero-order-hold, intersample safety. RL Policy CBF Safety Filter CBF-Guided Policy Update Hardware Simulation Fig. 2. CBF-RL framework: For one given task, the user defines the safety barrier function h(q) and the accompanying ∇h(q). During training, the RL policy proposes action vpolicy; the CBF safety filter then calculates the closed-form solution of the CBF-QP vfiltered and the safety reward rcbf based on proposed action vpolicy and agent configuration q. The RL agent executes vfiltered in the massively-parallel discretized environment, and the policy is updated with the combination of task, regularization, and safety rewards r = rnominal + rcbf. During deployment, the policy is able to output safe actions vpolicy cbf without needing an explicit runtime filter. Combining h terms, the result follows. Using Lemma 1, we bound the evolution of the barrier function along trajectories of the Euler-discretized integrator. Theorem 1 (Continuous to Discrete Safety). Consider the setting of Lemma 1. Suppose there exists a compact, forwardinvariant set K ⊆Rn for the discrete-time integrator dynamics qk+1 = f∆t(qk, ks(qk)). Then, ∀∆t > 0, μ(∆t) = supq∈K |R(q, ∆t ks(q))| 0. Since R and ks are continuous, compactness of K implies μ(∆t) is finite ∀∆t > 0. To establish the limit lim∆t→0 μ(∆t)/∆t = 0, we note that lim∆t→0 |R(q,∆t ks(q))| ∆t = 0 ∀q ∈K implies sup q∈K lim ∆t→0 |R(q,∆t ks(q))| ∆t = 0. (16) Compactness of K and joint continuity of |R(q, ∆t ks(q))| in q and ∆t implies uniform continuity of |R(q, ∆t ks(q))|; this lets us interchange limit and supremum. We conclude lim∆t→0 supq∈K |R(q,∆t ks(q))| ∆t = 0 ⇒lim∆t→0 μ(∆t) ∆t = 0. Now, we establish the bound using a comparison system. If yk+1 = ρyk-|dk| for all k ≥0 and |dk| ≤μ, a geometric series argument establishes yk ≥ρky0 -( 1 1-ρ)μ ∀k ≥0. Fix q0 ∈K. Since K is forward invariant for the closed-loop system, |R(qk, ∆t ks(qk))| ≤μ(∆t) ∀k ≥0. Using Lemma 1, we take ρ = 1-∆tα, μ(∆t) = supq∈K |R(q, ∆t ks(q))|, and conclude by comparison with yk that the bound h(qk) ≥(1 -∆t α)kh(q0) -μ(∆t) ∆t α , (17) does indeed hold for all q0 ∈K. The established relationship means that with small enough ∆t, as is the norm with physical simulators oriented towards RL, we can apply continuous-time CBF tools directly to discrete-time RL environments; thus we provide the two core parts of CBF-RL shown in Fig. 2: Safety-filtering during training At training time, the policy would generate desired actions that are not necessarily safe vpolicy k at step k. A safety filter is then applied to enforce safe behaviors and guide the system to 'learn' the safety filter as part of the natural closed-loop dynamics. As we have proven the relationship between the continuous-time and discrete-time CBF conditions, typically, we can achieve safety filtering through a CBF-QP as follows [9]: vsafe k = arg min vk∈Rn 1 2||vk -vpolicy k ||2 (18) s.t. ∇h(qk)⊤vk ≥-αh(qk). (19) However, solving QPs for every step of RL training is not preferable in the massively-parallel environments such as IsaacLab [22], instead, we employ the explicit solution: vsafe k =        vpolicy k , if a⊤ k vpolicy k ≥bk vpolicy k + bk -a⊤ k vpolicy k ak ∥ak∥2 , o.w. (20) ak := ∇h(qk), bk := -α h(qk). (21) Penalizing unsafe behavior In addition to filtering the policy actions at training time, we also quantify how safe the environment is to inform training. To this end, we modify the rewards to include rCBF defined as rcbf(qk, vk) = max a⊤ k vpolicy k -bk , 0 (22) + exp -∥vpolicy-vsafe∥2 σ2 -1 (23) and the whole reward is thus r = rnominal + rcbf. Algorithm 1 RL Training with Discrete-Time CBF Safety 1: Initialize policy parameters θ, initial configuration q0, safety function h 2: for step = 1 to Nsteps do 3: Initialize q0, observation o0 4: for k = 0 to T -1 do 5: qpolicy k+1 ←πθ(ok) 6: vpolicy k ← (qpolicy k+1 -qk) ∆t 7: ak = ∇h(qk), bk = -αh(qk) 8: Compute CBF condition: c = a⊤ k vpolicy k -bk 9: if c ≥0 then 10: vsafe k ←vpolicy k , 11: else 12: vsafe k ←vpolicy k + (bk -a⊤ k vpolicy k ) a⊤ k ∥ak∥2 13: end if 14: qsafe k+1 ←qk + ∆t vsafe k 15: qenv k+1, ok+1 ←ENVIRONMENT UPDATE(qsafe k+1) 16: rcbf ← w · [max 0, a⊤ k vpolicy k -bk + exp -∥vpolicy-vsafe∥2 σ2 -1] 17: r ←R(qenv k+1) + rcbf 18: Store transition: (qk, ok, qpolicy k+1 , qsafe k+1, qenv k+1, r) 19: end for 20: Update policy parameters θ ←η∇θL(θ, r) 21: end for Intuitively, this reward penalizes actions whenever the safety filter is activated, and also incentivizes the model to take actions as close to the safe actions as possible to reduce the intervention of the filter. The whole algorithm is as shown in Alg. 1. Single Integrator Example To demonstrate our proposed approach and analyze the effect of each component of CBFRL, we perform extensive ablation studies on a single integrator navigation task. The agent is placed into a 2D world with obstacles and tasked with going to a specified goal. The starting, obstacle, and goal locations are all randomized, and extra care is taken such that the agent always initializes in a safe state following [13]. The single integrator has dynamics qk+1 = qk + vk∆t where q = (x, y) is the robot position, vk is the velocity of the agent, and ∆t is the timestep size. The safety barrier function h is defined as h(q) = min n min j ∥q -pj∥-(ragent + rj) , x -ragent, (L -x) -ragent, y -ragent, (L -y) -ragent o (24) ∇h(q) =        q -pj⋆ ∥q -pj⋆∥, j⋆∈arg minj hj(q), ±ex, if left/right wall active, ±ey, if bottom/top wall active, (25) where pj is the position of the jth obstacle with radius rj, the agent also has radius ragent and the size of the world is L. Thus we can formulate the training-time safety filter with (20) and define our reward modification associated with the safety with (23). Full reward terms are shown in Table II. Ablation To validate our method, we train 4 variants (Dual, Reward-only, Filter-only, Nominal) and test 12 variants following Table III for 1500 steps with 4096 parallel environments. We also evaluate policies trained with filtered action then deployed without a runtime filter (rt. filt.). We observe that the Dual approach achieves higher rewards while remaining safe throughout training. Over 1000 random test environments, the Dual policy performs well with or without a runtime filter, whereas Filter Only performs well only with an active safety filter and degrades markedly without it. Robustness To further investigate the robustness of the policy induced by domain randomization (DR), we train the dual approach with noise on the dynamics model, i.e. qk+1 = qk + (vk + d)∆t where d follows the standard normal distribution scaled by 20% to the maximum velocity. It is observed that the policy trained with the dual method overall suffers least from the dynamics disturbance as can be seen in Table I. TABLE I. Success rates over 1000 random test environments for different methods, with and without DR. The dual policy consistently performs well and suffers less degradation due to dynamics uncertainty. Dual Dual (w/o rt. filt.) Reward Only No DR 99.0% 92.7% 91.9% DR 99.0%(-0%) 91.7%(-1%) 87.6%(-4.3%) Filter Only Filter Only (w/o rt. filt.) Nominal No DR 98.8% 38.7% 51.4% DR 96.7%(-2.1%) 36.8%(-1.9%) 55.0%(+3.6%) IV. CBF-RL FOR SAFE HUMANOID LOCOMOTION In the following section, we present two different use cases of CBF-RL in humanoid locomotion to show the generality and performance of our method. Each uses a different set of user-specified reduced-order coordinates and a valid CBF for the resulting reduced-order dynamics. The nominal rewards and observations (history length 5) follow [37]. Note that for blind stair climbing, we employ the asymmetric actorcritic method [38] and provide a height scan of size 1m × 1.5m with a resolution of 0.1m to the critic observation with a history length of 1. We use a MLP policy with 3 hidden layers of [512, 256, 128] neurons running at 50Hz, outputting the joint position setpoints of the 12-DoF lower body for the Unitree G1. We train in IsaacLab with 4096 environments of ∆t = 0.005s up to 20,000 steps and perform zero-shot hardware transfer experiments to verify our approach. A. Task Definition Planar Obstacle Avoidance First, we consider the task where a humanoid robot needs to avoid obstacles during 0 500 1000 1500 Step 0 500 1000 1500 2000 Reward Mean episode reward vs. steps 0 9 17 26 34 Collisions Dual Dual Collisions Filter Only Collisions Reward Only Reward Only Collisions Nominal Collisions Filter Only Nominal Fig. 3. Training progress with the Dual, Reward Only, Filter Only and Nominal methods. Dual and Filter Only achieve faster convergence and avoid training-time safety violations. 2 4 6 8 10 0 2 4 6 8 Trajectory Comparison Start Nominal Dual Dual (w/o rt. filt.) Reward Only Filter Only Filter Only (w/o rt. filt.) Nominal DR Dual DR Dual (w/o rt. filt.) DR Reward Only DR Filter Only DR Filter Only (w/o rt. filt.) DR Goal Fig. 4. Trajectory comparisons of one example simulation. The black dot is the start and the yellow circle is the goal. Success = reaching the goal; failure = collision with obstacles or wall. TABLE II. Reward terms for single integrator navigation. Agent is rewarded for staying alive and closing the distance to the goal and penalized for hitting the obstacles / walls, exceeding the set time to complete the task, and violating the CBF conditions or not proposing safer actions Reward Formula rgoal 1.0 · 1(goal reached) robstacle -1.0 · 1(obstacle collision) rwall -1.0 · 1(wall collision) rprogress 20.0 · ∥pt-1 -g∥-∥pt -g∥ vmax∆t · 1(active) ralive 0.01 · 1(active) rcbf 100 · max(∇h(q)⊤v + αh(q), 0) + exp - ∥vpolicy-vsafe∥2 0.52 -1 · 1(active) rtimeout -10.0 · 1(time exceeded) TABLE III. List of method configurations for single integrator navigation. We ablate all permutations of the two main components of CBF-RL. Method Training Deployment DR Nominal Nominal No Runtime Filter No Dual Reward+Filter Runtime Filter No Dual (w/o rt. filt.) Reward+Filter No Runtime Filter No Reward Only Reward No Runtime Filter No Filter Only Filter Runtime Filter No Filter Only (w/o rt. filt.) Filter No Runtime Filter No Nominal DR Nominal No Runtime Filter Yes Dual DR Reward+Filter Runtime Filter Yes Dual (w/o rt. filt.) DR Reward+Filter No Runtime Filter Yes Reward Only DR Reward No Runtime Filter Yes Filter Only DR Filter Runtime Filter Yes Filter Only (w/o rt. filt.) DR Filter No Runtime Filter Yes locomotion without intervention, even when the velocity command is to collide with the obstacle. Thus we can simplify the safety problem as a single integrator problem where the policy modulates the robot's planar velocities vbase planar = [vx, vy] to maintain safe distances from the closest obstacle, a cylinder centered at pr o in the robot frame. We define the safety function h(p) = ||pr o||-Rr -Ro where Rr and Ro are the radii of the robot and the obstacle respectively. We then train our robot with the CBF reward: robstacle cbf = max( pr o ||pro|| ⊤vplanar + αh(p), 0) + exp - ∥vbase planar-vsafe planar∥2 σ2 -1. (26) Stair Climbing Second, we consider the task of humanoid locomotion on stairs. For this task, we consider the kinematic model of the foot as our reduced-order model qsw k+1 = qsw k + ∆t Jsw(qsw k )vsw k , where qsw = [px, py]T is the swing foot's position in the body frame, Jsw comprises the rows of the robot's body Jacobian associated with the foot's position, and vsw is robot's joint velocities. In climbing stairs, one problem is the robot hitting its toe against the next stair riser. We design the barrier as the distance to a hyperplane tangent to the stair after the one it is currently stepping on: h(q) = pstair x -px, where pstair x is the x position of the hyperplane in the body reference frame. Thus the CBF reward is: rnext stair cbf = max -Jsw x (q) v + α h(q), 0 + exp(-∥q -qsafe||2 σ2 ) -1. (27) added to the nominal rewards including modifications to the feet clearance reward where the reference feet height now is dependent on the stair at the front of the robot and also a penalty on the swing foot force. B. Hardware Experiments Obstacle Course The obstacle course comprises 2 parts: the first part is the obstacle avoidance task where the robot has to prevent itself from colliding into the obstacles even if the velocity command intends so, and the second part comprises stairs constructed out of wooden pallets of riser height 0.14m and tread depth 0.3m. During execution, the robot locates the obstacles approximated as cylinders using the ZED 2 RGB-D camera through point cloud clustering. As seen from the h values as in Fig. 5, the robot modulates its own velocity to avoid the obstacle even when the velocity command pushes it to do so. However, it has no terrain perception and only uses proprioception for stair climbing. Despite this, we observe that the robot uses proprioception to determine when to climb and how high to lift its feet, successfully climbing the wooden stairs. 0 1 2 3 4 5 6 X (m) Robot Trajectory Trajectory Stairs Obstacles Margin 0 20 40 60 80 100 0.0 0.5 m/s 0 20 40 60 80 100 0.00 0.05 m/s 0 50 100 t (s) 0 2 h h(q) h h=0 t (s) Fig. 5. Robot trajectory, h, and command vs. actual velocity visualization. The robot avoids obstacles approximated as cylinders without a runtime safety filter and climb up stairs. The velocity plots show the robot modulating its own velocities despite the command and the h plot quantifies safety. Fig. 6. Snapshots of high stairs of riser height 0.3m. The nominal policy clips its feet against the riser and stumbles, as shown with the red CBF violations while the CBF-RL dual trained policy successfully climbs up and down. The yellow star marks the point the robot's feet collides with the stair riser. Fig. 7. Snapshots of outdoor experiments. The robot is able to climb up stairs of varying roughness, tread depths and riser heights. Stair Climbing Robustness In order to further test the robustness of our method, we experiment both indoors and outdoors. During indoor experiments, we perform continuous runs up and down stairs, and also test the performance of the policy on high stairs, with the dual-trained policy able to climb up stairs 0.3m high and the policy trained without the CBF modifications unable to do so, as shown in Fig. 6. Here we note that the robot is able to gauge the depth and height of the stairs through proprioception and adjust its footsteps accordingly. For outdoor experiments, we test the dual-trained policy on concrete-poured stairs of different roughness and sizes, with rougher stairs of riser height 0.14m / tread depth 0.33m and smoother stairs of riser height 0.15m / tread depth 0.4m respectively, as shown in Fig. 7, where the robot could adjust its center of mass by modulating the torso pitch angle to account for deeper and higher stairs. V. CONCLUSION This paper introduced CBF-RL, a lightweight dual approach to inject safety into learning by combining trainingtime CBF safety filtering with reward design, leading to policies that internalize safety and operate without a runtime filter, demonstrating its effectiveness through simulated and real-world experiments. We also provided a detailed theoretical analysis rationalizing the use of continuous-time CBF conditions in discrete-time RL simulation training. Looking forward, we plan to incorporate automated barrier discovery, perception-based barriers, and extend the application of CBFRL beyond locomotion to whole-body loco-manipulation that would address broader humanoid capabilities. REFERENCES [1] D. Crowley, J. Dao, H. Duan, K. Green, J. Hurst, and A. Fern, "Optimizing bipedal locomotion for the 100m dash with comparison to human running," arXiv preprint , 2025. [2] Z. Li, X. B. Peng, P. Abbeel, S. Levine, G. Berseth, and K. Sreenath, "Reinforcement learning for versatile, dynamic, and robust bipedal locomotion control," The International Journal of Robotics Research, vol. 44, no. 5, pp. 840-888, 2025. [3] T. Peng, L. Bao, and C. Zhou, "Gait-conditioned reinforcement learning with multi-phase curriculum for humanoid locomotion," arXiv preprint , 2025. [4] T. He, J. Gao, W. Xiao, Y. Zhang, Z. Wang, J. Wang, Z. Luo, G. He, N. Sobanbabu, C. Pan, Z. Yi, G. Qu, K. Kitani, J. K. Hodgins, L. Fan, Y. Zhu, C. Liu, and G. Shi, "ASAP: Aligning Simulation and RealWorld Physics for Learning Agile Humanoid Whole-Body Skills," in Proceedings of Robotics: Science and Systems, LosAngeles, CA, USA, June 2025. [5] A. Allshire, H. Choi, J. Zhang, D. McAllister, A. Zhang, C. M. Kim, T. Darrell, P. Abbeel, J. Malik, and A. Kanazawa, "Visual imitation enables contextual humanoid control," arXiv preprint , 2025. [6] T. E. Truong, Q. Liao, X. Huang, G. Tevet, C. K. Liu, and K. Sreenath, "Beyondmimic: From motion tracking to versatile humanoid control via guided diffusion," arXiv preprint , 2025. [7] Z. Su, B. Zhang, N. Rahmanian, Y. Gao, Q. Liao, C. Regan, K. Sreenath, and S. S. Sastry, "Hitter: A humanoid table tennis robot via hierarchical planning and learning," arXiv preprint , 2025. [8] A. D. Ames, X. Xu, J. W. Grizzle, and P. Tabuada, "Control barrier function based quadratic programs for safety critical systems," IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 3861-3876, 2016. [9] A. D. Ames, S. Coogan, M. Egerstedt, G. Notomista, K. Sreenath, and P. Tabuada, "Control barrier functions: Theory and applications," in 2019 18th European control conference (ECC). Ieee, 2019, pp. 3420-3431. [10] R. Cheng, G. Orosz, R. M. Murray, and J. W. Burdick, "End-to-end safe reinforcement learning through barrier functions for safety-critical continuous control tasks," in Proceedings of the AAAI conference on artificial intelligence, vol. 33, no. 01, 2019, pp. 3387-3395. [11] H. Ma, J. Chen, S. Eben, Z. Lin, Y. Guan, Y. Ren, and S. Zheng, "Model-based constrained reinforcement learning using generalized control barrier function," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021, pp. 45524559. [12] D. Van Wijk, K. Dunlap, M. Majji, and K. Hobbs, "Safe Spacecraft Inspection via Deep Reinforcement Learning and Discrete Control Barrier Functions," Journal of Aerospace Information Systems, vol. 21, no. 12, pp. 996-1013, Dec. 2024. [13] F. P. Bejarano, L. Brunke, and A. P. Schoellig, "Safety Filtering While Training: Improving the Performance and Sample Efficiency of Reinforcement Learning Agents," IEEE Robotics and Automation Letters, vol. 10, no. 1, pp. 788-795, Jan. 2025, . [14] M. Alshiekh, R. Bloem, R. Ehlers, B. K ̈onighofer, S. Niekum, and U. Topcu, "Safe reinforcement learning via shielding," in 32nd AAAI Conference on Artificial Intelligence: AAAI-18, 2018, pp. 2669-2678. [15] H. Zhang, Z. Li, and A. Clark, "Model-based Reinforcement Learning with Provable Safety Guarantees via Control Barrier Functions," in 2021 IEEE International Conference on Robotics and Automation (ICRA). Xi'an, China: IEEE, May 2021, pp. 792-798. [16] H. Krasowski, J. Thumm, M. M ̈uller, L. Sch ̈afer, X. Wang, and M. Althoff, "Provably safe reinforcement learning: Conceptual analysis, survey, and benchmarking," Transactions on Machine Learning Research, 2022. [17] K. Dunlap, M. Mote, K. Delsing, and K. L. Hobbs, "Run time assured reinforcement learning for safe satellite docking," Journal of Aerospace Information Systems, vol. 20, no. 1, pp. 25-36, 2023. [18] K. P. Wabersich and M. N. Zeilinger, "A predictive safety filter for learning-based control of constrained nonlinear dynamical systems," Automatica, vol. 129, p. 109597, 2021. [19] X. Wang, "Ensuring safety of learning-based motion planners using control barrier functions," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 4773-4780, 2022. [20] N. Nilaksh, A. Ranjan, S. Agrawal, A. Jain, P. Jagtap, and S. Kolathaya, "Barrier Functions Inspired Reward Shaping for Reinforcement Learning," in 2024 IEEE International Conference on Robotics and Automation (ICRA), May 2024, pp. 10 807-10 813, . [21] Z. Wang, T. Ma, Y. Jia, X. Yang, J. Zhou, W. Ouyang, Q. Zhang, and J. Liang, "Omni-perception: Omnidirectional collision avoidance for legged locomotion in dynamic environments," arXiv preprint , 2025. [22] M. Mittal, C. Yu, Q. Yu, J. Liu, N. Rudin, D. Hoeller, J. L. Yuan, R. Singh, Y. Guo, H. Mazhar, A. Mandlekar, B. Babich, G. State, M. Hutter, and A. Garg, "Orbit: A unified simulation framework for interactive robot learning environments," IEEE Robotics and Automation Letters, vol. 8, no. 6, pp. 3740-3747, 2023. [23] C. Zhang, L. Dai, H. Zhang, and Z. Wang, "Control Barrier FunctionGuided Deep Reinforcement Learning for Decision-Making of Autonomous Vehicle at On-Ramp Merging," IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 6, pp. 8919-8932, June 2025. [24] H. Hailemichael, B. Ayalew, L. Kerbel, A. Ivanco, and K. Loiselle, "Safe reinforcement learning for an energy-efficient driver assistance system," IFAC-PapersOnLine, vol. 55, no. 37, pp. 615-620, 2022. [25] Y. Emam, G. Notomista, P. Glotfelter, Z. Kira, and M. Egerstedt, "Safe reinforcement learning using robust control barrier functions," IEEE Robotics and Automation Letters, 2022. [26] Y. Cheng, P. Zhao, and N. Hovakimyan, "Safe and efficient reinforcement learning using disturbance-observer-based control barrier functions," in Learning for Dynamics and Control Conference. PMLR, 2023, pp. 104-115. [27] D. Du, S. Han, N. Qi, H. B. Ammar, J. Wang, and W. Pan, "Reinforcement learning for safe robot control using control lyapunov barrier functions," in 2023 IEEE International Conference on Robotics and Automation, ICRA 2023. IEEE, 2023, pp. 9442-9448. [28] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint , 2017. [29] S. Li and O. Bastani, "Robust model predictive shielding for safe reinforcement learning with stochastic dynamics," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 7166-7172. [30] M. H. Cohen and C. Belta, "Safe exploration in model-based reinforcement learning using control barrier functions," Automatica, vol. 147, p. 110684, 2023. [31] D. C. H. Tan, F. Acero, R. McCarthy, D. Kanoulas, and Z. Li, "Value Functions are Control Barrier Functions: Verification of Safe Policies using Control Theory," Dec. 2023, . [32] M. H. Cohen, N. Csomay-Shanklin, W. D. Compton, T. G. Molnar, and A. D. Ames, "Safety-critical controller synthesis with reducedorder models," in 2025 American Control Conference (ACC). IEEE, 2025, pp. 5216-5221. [33] J. Breeden, K. Garg, and D. Panagou, "Control barrier functions in sampled-data systems," IEEE Control Systems Letters, vol. 6, pp. 367372, 2021. [34] A. Agrawal and K. Sreenath, "Discrete control barrier functions for safety-critical control of discrete systems with application to bipedal robot navigation." in Robotics: Science and Systems, vol. 13. Cambridge, MA, USA, 2017, pp. 1-10. [35] M. Ahmadi, A. Singletary, J. W. Burdick, and A. D. Ames, "Safe policy synthesis in multi-agent pomdps via discrete-time barrier functions," in 2019 IEEE 58th Conference on Decision and Control (CDC). IEEE, 2019, pp. 4797-4803. [36] C. C. Pugh, Real mathematical analysis. Springer, 2002. [37] K. Zakka, B. Tabanpour, Q. Liao, M. Haiderbhai, S. Holt, J. Y. Luo, A. Allshire, E. Frey, K. Sreenath, L. A. Kahrs, C. Sferrazza, Y. Tassa, and P. Abbeel, "Demonstrating MuJoCo Playground," in Proceedings of Robotics: Science and Systems, LosAngeles, CA, USA, June 2025. [38] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel, "Asymmetric actor critic for image-based robot learning," in 14th Robotics: Science and Systems, RSS 2018. MIT Press Journals, 2018.
2510.14954
OMNIMOTION: MULTIMODAL MOTION GENERATION WITH CONTINUOUS MASKED AUTOREGRESSION Zhe Li1,2∗, Weihao Yuan2†, Weichao Shen2, Siyu Zhu3, Zilong Dong2, Chang Xu1† 1 University of Sydney 2 Alibaba Group 3 Fudan University ABSTRACT Whole-body multi-modal human motion generation poses two primary challenges: creating an effective motion generation mechanism and integrating various modali- ties, such as text, speech, and music, into a cohesive framework. Unlike previous methods that usually employ discrete masked modeling or autoregressive model- ing, we develop a continuous masked autoregressive motion transformer, where a causal attention is performed considering the sequential nature within the hu- man motion. Within this transformer, we introduce a gated linear attention and an RMSNorm module, which drive the transformer to pay attention to the key actions and suppress the instability caused by either the abnormal movements or the heterogeneous distributions within multi-modalities. To further enhance both the motion generation and the multimodal generalization, we employ the DiT structure to diffuse the conditions from the transformer towards the targets. To fuse different modalities, AdaLN and cross-attention are leveraged to inject the text, speech, and music signals. Experimental results demonstrate that our framework outperforms previous methods across all modalities, including text-to-motion, speech-to-gesture, and music-to-dance. The code of our method will be made public. 1 INTRODUCTION Whole-body human motion generation represents an expanding frontier in computer vision, offering significant value across a variety of applications, including film production, gaming, virtual reality, robotics, and so on. Broadly speaking, motion generation could be conditioned on various signals, such as text, speech, music, and more. Historically, approaches to whole-body motion generation usually focus on isolated tasks. Typically, they either address text-to-motion generation, or concentrate on speech-to-gesture translation, or engage in music-to-dance synthesis. Despite their successes in single task, their frameworks are exclusively designed for individual tasks and cannot be easily adapted to different tasks. In addition, they tend to overlook the underlying commonalities that exist across different tasks. In contrast, in this work we seek to address these motion generation challenges from various signals within an omni-framework. This brings two advantages: 1) It allows each modality to benefit from the patterns present in other modalities, preventing single-mode solutions from becoming trapped in a local minimum; 2) It enhances each task with data from other tasks, which is particularly relevant given the limited scale of data available for individual motion tasks. Previous studies in motion generation generally proceed in two paths. The first employs the vector quantization (VQ) technique to convert continuous motion to discrete tokens, and then performs autoregressive or masked modeling to predict the tokens (Zhang et al., 2023d;a; Kong et al., 2023; Zhong et al., 2023; Guo et al., 2024). While this path effectively utilizes the strengths of autoregressive and masked modeling, the quantization step inevitably introduces approximation errors, which impose undesirable limits on the quality of the generated motions. The second directly regresses the continuous motions using techniques such as generative adversarial networks (GANs) (Tulyakov et al., 2018), variational autoencoders (VAEs) (Xu et al., 2020; Ahuja & Morency, 2019; Petrovich et al., 2022; Guo et al., 2022a), or recent diffusion models (Chen et al., 2023; Tevet et al., 2023; Zhang et al., 2024b; 2023b; Ribeiro-Gomes et al., 2024). Despite avoiding the approximation errors, they ∗Work done during internship at Alibaba † Corresponding Author 1 arXiv:2510.14954v1 [cs.CV] 16 Oct 2025 Dance in the Jazz style Speak: one thing that ... OmniMotion Squat while lifting hands Figure 1: We construct an omni motion framework with a continuous masked autoregressive motion transformer for multimodal whole-body motion modeling, including text-based, music-based, and speech-based motion generation. miss the autoregressive or masked modeling technologies, which have been shown to deliver superior performance in motion generation tasks. Consequently, the performance of the motion generated by this path is overall lower than that achieved by the first path. To both leverage the advantages of autoregressive and masked modeling, and the benefits of the continuous motion representation, in this work we combine them together to propose a continuous masked autoregressive motion generation framework. We apply a random masking on the sequential motion tokens, and employ a transformer to autoregressively predict the masked tokens. Unlike the visual MAR (Li et al., 2024b), we sequentially predict the masked tokens with causal attention rather than performing random reordering, considering the sequential nature within human motion. To enhance the MAR modeling in motion space, we introduce a gated linear mechanism and an RMSNorm module. The gated linear mechanism serves as an adaptive feature selector, driving the transformer to not only pay more attention to the key actions, like gesture switching and large movement, but also disregard less relevant frames and suppress redundant actions, like stationary motions. The RMSNorm is particularly advantageous in scenarios with features exhibiting a large dynamic range, e.g., our unified framework for multi-modalities, where the input distributions are highly heterogeneous. In addition, RMSNorm helps relieve the gradient instability caused by the abnormally large motions, such as sudden jumping or turning back. After the masked autoregressive transformer, the calculated attention of the masked tokens is fed into a series of DiT blocks to diffuse towards the target tokens, which are decoded to the generated motions. In addition to the text-based motion generation, we further extend our framework to multimodal conditions. Building upon a similar structure, the multimodal signals are fused by AdaLN (Guo et al., 2022c) and cross-attention modules. Extensive experiments across different datasets demonstrate our framework can work well with different modalities, including text, speech, and music, and outperform previous methods in whole-body text-to-motion, speech-to-gesture, and music-to-dance tasks. The main contributions of this work are then summarized as follows: • We design an omni motion framework for whole-body human motion generation, where one framework encompasses multiple modalities. • We propose a continuous autoregressive motion transformer with causal attention, where a gated linear mechanism and an RMSNorm module are developed to assist the motion modeling, and the DiT blocks are employed to improve the quality of the generated motions. • We integrate the multimodal signals via AdaLN and cross-attention, obtaining superior performance than previous methods in text-based, speech-based, and music-based motion generation. 2 2 RELATED WORK Text-based Motion Generation. Previous research on text-based motion generation can be gener- ally categorized into two mainstream paths: continuous regression and discrete classification. In the continuous regression domain, numerous strategies have leveraged the variational autoencoder (VAE) framework, integrating latent embeddings of encoded text with those of encoded poses, which are then decoded into motion predictions (Xu et al., 2020; Ahuja & Morency, 2019; Athanasiou et al., 2022; Petrovich et al., 2022; Guo et al., 2022a). Other methods have investigated the potential of recurrent networks (Lin et al., 2018; Plappert et al., 2018; Zhang et al., 2020), generative adversarial networks (Cai et al., 2018; Wang et al., 2020; Tulyakov et al., 2018), or transformer networks (Tevet et al., 2022; Lin et al., 2023b; Bhattacharya et al., 2021; Petrovich et al., 2021) to enhance the motion regression quality. Building on the success of diffusion models, recent approaches have begun to integrate the diffusion process into motion diffusion (Kim et al., 2023; Chen et al., 2023; Tevet et al., 2023; Zhang et al., 2024b; Dabral et al., 2023; Lou et al., 2023; Zhang et al., 2023b; Ribeiro-Gomes et al., 2024; Yuan et al., 2023; Wang et al., 2023; Zhang et al., 2023c; 2024d; Bian et al., 2025), yielding impressive results. In the discrete classification domain, the input motion undergoes initial encoding via a VQ-VAE (Van Den Oord et al., 2017), producing motion tokens for subsequent prediction (Zhong et al., 2023; Li et al., 2024e). Drawing inspiration from advancements in natural language processing, some methods utilize autoregressive modeling to predict tokens sequentially (Guo et al., 2022b; Lou et al., 2023; Zou et al., 2024). Others employ generative masked modeling strategies, with tokens randomly masked during training for the model to predict (Guo et al., 2024; Pinyoanuntapong et al., 2024; Yuan et al., 2024; Li et al., 2023b). More recently, large language models (LLMs) have been harnessed to help the prediction process, considering their large-scale pretraining (Zhang et al., 2023d; Zhou et al., 2024; Li et al., 2024d; 2023c). In this work, we seek to integrate the most effective elements from these two paths: the continuous diffusion and the masked autoregressive modeling. A previous attempt in this direction (Meng et al., 2024) directly transfers the MAR in image generation (Li et al., 2024b) into motion generation without considering the difference between image and motion spaces, especially the temporal correlation. Also, its framework is only designed for body-only motion generation. Differently, we propose a new MAR mechanism that is especially designed for whole-body motion generation. Multimodal Motion Generation. In addition to text, there are many other signals that various human motions are conditioned on, such as speech and music. In the realm of speech-to-gesture generation, both continuous regression and discrete classification paths have been explored. In the continuous domain, methods employ deep generative models like GANs (Ahuja et al., 2022), normalizing flows (Tan et al., 2024), and diffusion models (Alexanderson et al., 2023; Chen et al., 2024a; He et al., 2024; Yang et al., 2023; Zhu et al., 2023; Chen et al., 2024b) to learn complex motion distributions in the speech data. In the discrete domain, methods leverage either the autoregressive modeling (Yi et al., 2023) or the masked modeling (Liu et al., 2024a;b) to predict the discrete tokens quantified by the VQ-VAE. The primary distinction among these methods lies in their specific handling of different parts of human motion, including body movements, hand gestures, and facial expressions. Similarly, in the realm of music-to-dance generation, there are also methods in both the continuous domain (Zhuang et al., 2022; Tseng et al., 2023; Li et al., 2024a; Huang et al., 2024; Zhang et al., 2024a; Li et al., 2023a) and the discrete domain (Siyao et al., 2022). The discrete autoregressive model is leveraged after the motion quantization with VQ-VAE (Siyao et al., 2022). More methods harness the diffusion model to directly regress the target dancing motion in the continuous space (Tseng et al., 2023; Li et al., 2024a; Huang et al., 2024; Li et al., 2023a). Recent methods also start to merge autoregressive and diffusion models, producing coherent and music-aligned dance sequences (Zhang et al., 2024a). Recent works have begun to seek multimodal solutions, i.e., designing one framework for motion generation from different input signals (Luo et al., 2024; Zhang et al., 2024c; Zhou & Wang, 2023; Zhou et al., 2023; Ling et al., 2023; Bian et al., 2025; Li et al., 2024c). Some methods in the discrete domain attempt to incorporate quantized condition tokens into the vocabulary of the generation model (Luo et al., 2024; Zhou & Wang, 2023; Zhou et al., 2023; Zhang et al., 2025), while some methods in the continuous domain try to integrate the multimodal signals by designing the motion ControlNet (Ling et al., 2023; Bian et al., 2025), where the multimodal conditions guide the sampling of a pretrained text-to-motion diffusion model. However, most previous methods are restricted by the varying motion data of different modalities, limiting multi-modal frameworks primarily to body-only 3 Motion Encoder AdaLN “Squat while lifting hands above head” DiT ×4 RMSNorm Multi-head Attn Gate 𝑥! 𝑥" ×4 Text Encoder Motion Encoder AdaLN “Speak / Dance” Multi-head Attn Gate ×4 Text Encoder Multimodal Encoder + Cross Attn Conv1D ReLU Conv1D Down-ResBlock Conv1D ×2 Conv1D Conv1D Up-ResBlock ×2 ReLU Conv1D FFN RMSNorm FFN DiT ×4 𝑥! 𝑥" (a) (b) (c) Figure 2: Framework overview. Our framework consists of three parts: (a) The input motion is encoded by an autoencoder to extract a latent code, producing the continuous motion tokens. (b) The motion tokens are masked and predicted in an autoregressive transformer with causal attention, producing conditions for DiTs to diffuse towards the target tokens. (c) Multimodal signals are encoded and then injected via AdaLN and cross-attention. motion generation. To overcome this, MotionCraft (Bian et al., 2025) standardizes datasets across modalities into a unified whole-body format that includes body, hands, and face. In this work, we follow this unified representation to build a whole-body multi-modal motion framework, taking advantage of continuous masked auto-regression. 3 METHOD 3.1 OVERVIEW The overview of our framework is illustrated in Figure 2, which is divided into three main stages: In the first stage, we encode the input motion with an autoencoder, which generates continuous motion tokens. In the second stage, we focus on the text-based motion generation, utilizing our motion masked autoregressive framework to model the motion generation process. In this stage, an autoregressive transformer is employed to predict the masked tokens, within which a gated linear mechanism is designed, and an RMSNorm module is employed. The text information is integrated into the transformer via AdaLN after encoding. After the transformer, the generated embedding is fed into the DiT modules as the condition to diffuse towards the target token. In the third stage, we extend the model learned in the previous stage to the multi-modal structure. This involves merging the text embedding with multimodal signal embeddings—specifically speech or music—prior to their AdaLN input. Furthermore, we inject the multimodal embedding through a cross-attention module into the masked transformer. In the multimodal learning stage, the DiT module is kept in the same structure and frozen. 3.2 CONTINUOUS AUTOENCODER To feed the human motion into the transformer, we start by encoding the original motion into motion tokens. Given a motion sequence {Mt}T t=1, the objective of an autoencoder is to extract a latent code zAE that optimally captures the essence of the original motion. Different from most previous motion generation methods with autoregressive transformers, we use a continuous autoencoder rather than the VQ-VAE to do the encoding, which avoids the precision loss associated with the quantization approximation. In the encoder, we stack the 1D convolution networks with ReLU activation to do the feature processing. Following this, two down-sampling residual blocks are applied to reduce the motion feature size to one-fourth of its original dimensions. In the decoder, the same structure in the reversed order is utilized to up-sample the motion feature back to the original size, producing 4 { ˆMt}T t=1. Therefore, the loss for training the autoencoder is defined as LAE = X t ∥ˆMt −Mt ∥1. (1) The latent code zAE between the encoder and the decoder serves as the motion tokens x0, x1, ..., xN, with a sequence length that is one-fourth of the initial sequence length. 3.3 CONTINUOUS MASKED AUTOREGRESSION With the continuous motion tokens, we design a masked autoregressive transformer to model the motion generation, effectively capturing the temporal relationship between different tokens and producing the rich contextual condition zi for the subsequent diffusion process. We first randomly mask the motion tokens following language models (Devlin et al., 2018), obtaining some masked tokens {˜xi}. The temporal masking strategies adopt the same mask ratio schedule following (Chang et al., 2022), and are computed as γ(τ) = cos(πτ 2 ), (2) where τ ∈[0, 1]. In training, τ ∼U(0, 1) is uniformly sampled, leading to a mask ratio γ(τ). Then according to this ratio, γ(τ) × N tokens are randomly selected to be masked. After the masking, unlike previous MAR methods (Li et al., 2024b; Meng et al., 2024), our approach does not involve random rearranging of tokens or batch-tokens prediction. Also, we do not perform bidirectional attention. In contrast, we maintain the temporal order of the original motion, and sequentially undertake autoregressive prediction, thus forming a causal attention manner, as illustrated in Figure 2. For the input text prompt T, we first employ the LaMP (Li et al., 2024e) text transformer to extract textual features, leveraging its advanced capabilities to encode the linguistic nuances and semantic structure of the input prompt. This creates a high-dimensional feature representation that is crucial for guiding motion generation process. Following the feature extraction, we utilize AdaLN to seamlessly integrate the text-derived control sig- nals into the masked autoregressive transformer. AdaLN offers a dynamic approach to normalization, allowing the modulation of its parameters in response to the specific text input, thereby facilitating subsequent multimodal condition injection. By employing this method, we enhance the model’s ability to incorporate the guiding signals from the text and other signals into the motion generation process, ensuring that the transformer’s output features are better aligned with the intended motion generation goals. The features outputted by the transformer embody a strong directive capacity for motion generation. This enables the model not only to faithfully interpret the semantic content of the input text but also to produce motion sequences that are coherent with and reflective of the textual intent. The enriched output features contribute to achieving smoother transitions and logically consistent motion sequences in complex generation scenarios. Gated Linear Mechanism. We employ a gated linear attention mechanism within the transformer to regulate the attention weights at each time step. Specifically, we compute a gating signal by applying a linear transformation go to the input x followed by a sigmoid activation function. This gating signal acts as a dynamic filter, adjusting the output of the attention module based on the relevance of the input features. Consequently, during the attention computation, the final output o is modulated by this gating signal, enabling the model to selectively focus on the most pertinent action frames. o = g × Softmax( Q·KT dk )V, g = sigmoid(go(x)). (3) This mechanism effectively serves as an adaptive feature selector, allowing the model to disregard less relevant frames and suppress redundant action frames (such as stationary or repetitive motions), thereby enhancing its attention to key actions (e.g., gesture transitions and changes in motion direction). Furthermore, by dynamically adjusting the attention distribution through gating, the model is capable of selectively retaining historical frames or predicting future frames based on the current action state. 5 A person is doing a speech: “One thing that scared me once was …” A person is doing a speech: “One night I was out with my friends …” A person is doing a speech: “Well yes I have experienced a paranormal …” A person is doing a speech: “One time I was searching for my friends …” Speech-driven Music-driven A dancer is performing a Folk dance in the Dai style to the rhythm of the Xuanyue A dancer is performing a Classic dance in the ShenYun style to the rhythm of the Tanglanting song A dancer is performing a Street dance in the Popping style to the rhythm of the WeAreOne2017 song A dancer is performing a Street dance in the Urban style to the rhythm of the Redeye song unfold in time axis Figure 3: The qualitative results of motions generated from our model driven by speech and music. RMSNorm. Our objective is to construct a unified model that can simultaneously perform text-to- motion, speech-to-gesture, and music-to-dance tasks. Therefore, we aim to ensure that during the fine-tuning process across different datasets, the model does not suffer from instability that could lead to catastrophic failures. RMSNorm (Zhang & Sennrich, 2019) is particularly advantageous in scenarios with features exhibiting a large dynamic range, especially in tasks where the input distributions are highly heterogeneous. This characteristic enables the model to maintain stability when faced with diverse types of inputs or when observing uneven feature distributions. Additionally, RMSNorm has the potential to mitigate the gradient instability that may arise from significant motion variations, such as sudden jumps or rapid turns. 3.4 DIFFUSION TRANSFORMER In contrast to previous works (Li et al., 2024b; Meng et al., 2024), we adopt Diffusion Transformer (DiT) as our diffusion model. While the use of DiT may incur additional time costs during training and inference (not much due to the compact structure of motion data), it significantly enhances the quality of generated outputs. Compared to MLPs, DiT provides greater convenience in injecting conditional control signals. During the training process of multimodal generation, we freeze the diffusion model and only fine-tune the masked transformer. The structural characteristics of DiT facilitate this approach, enabling it to better handle various types of conditional signals. Moreover, MLPs exhibit notable limitations when processing heterogeneous data. This incapacity results in suboptimal performance when confronted with diverse signal types, such as speech and music. Due to the relatively small number of parameters in MLPs, they are prone to overfitting on specific datasets (e.g., text-to-motion). This situation can be analogized to a dancer who is adept only in a single dance style; when asked to incorporate other styles, they appear clumsy and ineffective. Consequently, when we attempt to fine-tune MLPs on a different dataset, they are ill-equipped to adapt to the challenges posed by new signals, leading to failures in multimodal generation tasks. In contrast, DiT demonstrates superior performance in complex multimodal generation contexts. Its enhanced generalization capabilities allow it to flexibly handle a variety of input signal types, rather than being confined to a single data format. This ensures that the model exhibits increased adaptability and reliability when exposed to diverse data, ultimately resulting in higher-quality outcomes. 3.5 TEXT-TO-MOTION PRETRAINING AND MULTIMODAL CONTROL ADAPTATION We first pre-train the model on text-motion paired data in a text-to-motion generation setting. Owing to its strong semantic expressiveness and cross-modal alignment properties, we adopt text as a shared conditional signal across diverse unimodal datasets, enabling the model to learn sequence- level generation capabilities between text and motion, as well as a coarse-grained textual guidance mechanism for generative control. 6 The person is doing karate moves Ours MotionCraft Karate move Karate move ❌ ✅ A person swings his arms loosely in front of him, pats his head and rubs his stomach like he was mimicking a monkey A person crosses his arms and then drops them to his sides A man sits down and then stays still Pats his head Rubs his stomach Pats his head Rubs his stomach Crosses his arms Drops them to his sides Crosses his arms Drops them to his sides Sits down Stays still Sits down Stays still ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ❌ ❌ ✅ ❌ unfold in time axis Figure 4: The qualitative results of text-driven motion generation. Method R Precision FID ↓ MM Dist↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ GT 0.663±0.006 0.807±0.002 0.864±0.002 0.000±0.000 15.567±0.036 T2M-GPT(Zhang et al., 2023a) 0.529±0.004 0.652±0.003 0.732±0.003 10.457±0.108 17.029±0.039 MDM(Tevet et al., 2023) 0.383±0.010 0.527±0.012 0.604±0.009 18.671±0.370 18.785±0.054 MotionDiffuse(Zhang et al., 2024b) 0.525±0.004 0.675±0.009 0.743±0.009 9.982±0.379 17.314±0.066 FineMoGen(Zhang et al., 2023c) 0.565±0.001 0.710±0.004 0.775±0.004 7.323±0.143 16.679±0.029 MCM(Ling et al., 2023) 0.407±0.002 0.559±0.003 0.636±0.001 15.540±0.443 18.673±0.029 MotionCraft (Bian et al., 2025) 0.590±0.003 0.743±0.004 0.804±0.004 8.477±0.102 16.252±0.035 Ours 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 15.871±0.030 Table 1: The quantitative results of text-to-motion on the HumanML3D subset of Motion-X dataset (Lin et al., 2023a), following the unified SMPL-X representation (Bian et al., 2025). We hypothesize that the contextual features output by the masked transformer provide a more expressive control signal compared to raw text embeddings. Accordingly, within the DiT architecture, we inject the transformer’s output features by summing them with the time embeddings, thereby guiding the motion generation process as: ˜xi t−1 ∼p(˜xi t−1|˜xi t, t + zi). (4) Then the training objective for noise prediction is defined as: L = Eϵ,t∥ϵ −ϵθ(˜xi t|t + zi)∥. (5) This procedure yields the trained model Mt2m. When incorporating additional control signals, we initialize the entire model with the parameters of model Mt2m, freeze the DiT, and fine-tune only the masked transformer. Crucially, in contrast to the original design, we introduce cross-attention layers within the transformer to explicitly model interactions between the control signals and the motion sequence. This modification aims to produce more precise, fine-grained control representations, thereby enhancing both the quality and controllability of the generated motions. 3.6 INFERENCE The inference process starts with an all-masked sequence. We introduce mask tokens at the corre- sponding positions in the sequence. This enables the autoregressive model to iteratively predict the masked latent signals conditioned on the observed context. When performing speech-to-gesture and music-to-dance tasks, the speech and music modalities are processed through cross-attention mechanisms within the transformer, interacting with the textual features. This allows the model to generate more fine-grained conditional signals that are temporally aligned with the text. The predicted latents are then fed into the DiT to refine and generate the final 7 Method FIDH ↓ FIDB ↓ Face L2 Loss ↓ Beat Align Score ↑ Diversity ↑ Talkshow (Yi et al., 2023) 26.713 74.824 7.791 6.947 13.472 EMAGE (Liu et al., 2024a) 39.094 90.762 7.680 7.727 13.065 MCM (Ling et al., 2023) 23.946 71.241 16.983 7.993 13.167 MotionCraft (Bian et al., 2025) 18.486 27.023 10.097 8.098 10.334 Ours 17.651 25.923 9.883 8.377 14.703 Table 2: Results of speech-based motion generation on the BEAT2 dataset (Liu et al., 2024a), following the unified SMPL-X representation (Bian et al., 2025). motion sequence. The sampling process can be denoted as: xi t−1 = 1 √αt  xi t − √1 −αt √1 −¯αt ϵθ(xi t | t + zi)  + σtϵt, (6) where ϵt ∼N(0, I), and zi denotes the condition output from the transformer. We adopt classifier- free guidance (CFG) (Chang et al., 2023) to condition the transformer on signal embeddings. At inference time, CFG is applied at the final linear projection layer preceding the softmax operation. At this point, the final logits lf are computed by adjusting the conditional logits lc relative to the unconditional logits luc, using a guidance scale α: lf = (1 + α) · lc −α · luc (7) 4 EXPERIMENTS 4.1 EXPERIMENT SETTINGS Implementation Details. Our model is implemented on one NVIDIA V100 GPU using PyTorch. For our method, the autoencoder employs a ResNet-based (He et al., 2016) three-layer encoder- decoder architecture with a hidden dimension of 512 and an overall downsampling rate of 4. For the generation process, we utilize a four-layer AdaLN-zero transformer encoder as the masked autoregressive transformer, featuring a hidden dimension of 1024 and 16 attention heads. The diffusion model consists of 4 layers of DiT, where each transformer block has a hidden dimension of 1792 and 8 attention heads. We adopt the AdamW optimizer (β1 = 0.9, β2 = 0.99). For training the autoencoder on the HumanML subset of Motion-X, we use a batch size of 256 and a maximum sequence length of 64 frames. For the text-to-motion task, the batch size is set to 50 with a maximum sequence length of 196 frames. The learning rate is initialized at 0.0002 with a linear warmup over 2000 steps. The autoencoder is trained for 50 epochs, while the text-to-motion task is trained for 1500 epochs. During multimodal generation, we first initialize the model with pretrained weights from the text-to-motion autoencoder and fine-tune it on task-specific datasets. Subsequently, we freeze the parameters of the text-to-motion DiT and only fine-tune the masked transformer along with newly incorporated cross-attention layers. The learning rate and training schedule remain consistent with the text-to-motion task. For all three tasks, we employ exponential moving average (EMA) to update model parameters, ensuring training stability. During inference, the classifier-free guidance (CFG) scale is set to 4.5 for text-to-motion, while other tasks use a CFG scale of 6.5. Method FIDH ↓FIDB ↓ Div ↑ Edge (Tseng et al., 2023) 93.430 108.507 13.471 Finedance (Li et al., 2023a) 10.747 72.229 13.813 MCM (Ling et al., 2023) 4.717 78.577 14.890 MotionCraft (Bian et al., 2025) 3.858 76.248 16.667 Ours 3.632 71.930 15.871 Table 3: Results of music-based motion generation on the FineDance (Li et al., 2023a), following the unified SMPL-X representation (Bian et al., 2025). Datasets and Metrics. For the eval- uation, we utilize three datasets: Hu- manML3D (Guo et al., 2022a) for text- to-motion, BEAT2 (Liu et al., 2024a) for speech-to-gesture, and FineDance (Li et al., 2023a) for music-to-dance, all following the unified SMPL-X representation (Bian et al., 2025). Regarding the metrics, we use FID, R-Precision, and MM-Dist for text-based motion generation, use FIDH, FIDB, Face L2 loss, Beat Alignment Score, Diversity for speech-based motion gener- ation, and use FIDH, FIDB, Diversity for music-based motion generation, respectively. For the detailed explanation of the datasets and metrics, please refer to the appendix. 8 4.2 EVALUATION Text-based motion generation. We conduct an evaluation of our model against prior text-to-motion approaches, including both discrete-domain and continuous-domain methods. As summarized in Table 1, our method clearly surpasses prior techniques on the HumanML subset of Motion-X dataset. Remarkably, our model achieves improvements of 19.3%, 13.5%, and 11.7% in R-Precision for Top-1, 2, 3, respectively. Additionally, we enhance the FID score by 75.2% on this dataset, underscoring the exceptional fidelity of our generated motions. The qualitative results in Figure 4 further support these findings, showing that our approach yields whole-body motions that align closely with the input text. Speech-based motion generation. To assess the speech-driven motion generation, we compare to previous speech-to-gesture methods. Our results, summarized in Table 2, reveal that our method achieves good quality and diversity in both hand and body motion generation and excels in aligning with the rhythm of first-person speech. This demonstrates the effectiveness of our framework in motion generation when encompassing different modal signals. However, our method performs worse than single-modal methods. As discussed in (Bian et al., 2025), this is attributed to the random or average expressions in the Motion-X dataset, which confuses the speech-to-gesture training. Music-based motion generation. We further evaluate our framework on the music-to-dance task. As shown in Table 3, our method achieves slightly improved performance over previous approaches, particularly in generating hand motions and body movements. 4.3 ABLATION STUDY Causal Attention. To verify the efficacy of the proposed framework, we initially establish a baseline model following the visual MAR setup (Li et al., 2024b), i.e, using the random pseudo reordering for the batched masked prediction, through the bidirectional attention computing. From the results presented in Table 4, we see that the performance of this baseline in the motion area is limited. We attribute this to the difference between human motions and visual images, e.g., the human motion is in a strong temporal sequential structure, in which case a causal attention makes more sense. Therefore, changing the baseline to sequential masked prediction with causal attention improves the performance. DiT. In order to evaluate how the DiTs contribute to the motion quality, we further replace the MLPs in the baseline model with our DiTs. As shown in Table 4, the model generates superior motions with DiTs compared to MLPs, especially in the context of multimodal motion generation. This reveals the superior potential of DiTs in generating motion with complex multimodal contexts. Gated Linear Mechanism. To assess the function of the gated linear mechanism, we ablate this and report the results in Table 4, which indicates that the model outputs motions of higher quality with the inclusion of this mechanism. In the experiments, we observed that the output motions sometimes contain more detailed actions with this mechanism in place. RMSNorm. We also conduct an ablation study to evaluate the function of the RMSNorm and report the results in Table 4. From the results, we see that the model produces better motions when utilizing RMSNorm. In experiments, we found that this module makes the output more stable. Cross Attention. In the baseline model, the multimodal signals are injected with only the AdaLN structure. We then add the cross attention module and observe a significant improvement in multi- modal motion generation, as depicted in Table 4. 5 CONCLUSION This paper proposes a new omni motion framework for multimodal whole-body human motion generation. Within this one framework, text, speech, and music signals are all encompassed through AdaLN and cross-attention. The motion generation process is modeled by a continuous masked autoregressive transformer with causal attention, as well as a DiT structure. Extensive experiments have been conducted to verify the efficacy of the proposed framework in different-modality tasks. Limitations. Due to the restricted dataset, the naturalness and generalizability of the motion generation model are still limited, especially in speech and music-driven motion generation. 9 Text-based Speech-based Setting R Precision FID ↓ FIDH ↓ FIDB ↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ Baseline 0.578±0.007 0.737±0.006 0.787±0.005 9.324±0.120 37.732 40.419 + Causal Attention 0.589±0.006 0.740±0.004 0.798±0.006 9.031±0.095 36.815 38.674 + DiT 0.688±0.005 0.828±0.007 0.851±0.004 5.562±0.085 19.743 28.228 + Gated Linear 0.692±0.004 0.834±0.005 0.877±0.006 4.844±0.078 19.427 28.156 + RMSNorm 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 18.329 27.741 + Cross Attention 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 17.651 25.823 Table 4: The ablation study of different model components. REFERENCES Chaitanya Ahuja and Louis-Philippe Morency. Language2pose: Natural language grounded pose forecasting. In 2019 International conference on 3D vision (3DV), pp. 719–728. IEEE, 2019. Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20566–20576, 2022. Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! audio- driven motion synthesis with diffusion models. ACM Transactions on Graphics, 42(4):1–20, 2023. Nikos Athanasiou, Mathis Petrovich, Michael J Black, and Gül Varol. Teach: Temporal action composition for 3d humans. In 2022 International Conference on 3D Vision (3DV), pp. 414–423. IEEE, 2022. Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, and Dinesh Manocha. Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents. In 2021 IEEE virtual reality and 3D user interfaces (VR), pp. 1–10. IEEE, 2021. Yuxuan Bian, Ailing Zeng, Xuan Ju, Xian Liu, Zhaoyang Zhang, Wei Liu, and Qiang Xu. Motioncraft: Crafting whole-body motion with plug-and-play multimodal controls. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 1880–1888, 2025. Haoye Cai, Chunyan Bai, Yu-Wing Tai, and Chi-Keung Tang. Deep video generation, prediction and completion of human action sequences. In Proceedings of the European conference on computer vision (ECCV), pp. 366–382, 2018. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11315–11325, 2022. Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, and Kun Zhou. Enabling synergistic full-body control in prompt-based co-speech motion generation. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 6774–6783, 2024a. Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. Diffsheg: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7352–7361, 2024b. Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. Mofusion: A framework for denoising-diffusion-based motion synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Bidirectional encoder representations from transformers. arXiv preprint arXiv:1810.04805, 15, 2018. 10 Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5152–5161, 2022a. Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In European Conference on Computer Vision, pp. 580–597. Springer, 2022b. Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2024. Yunhui Guo, Chaofeng Wang, Stella X Yu, Frank McKenna, and Kincho H Law. Adaln: a vision transformer for multidomain learning and predisaster building information extraction from images. Journal of Computing in Civil Engineering, 36(5):04022024, 2022c. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Xu He, Qiaochu Huang, Zhensong Zhang, Zhiwei Lin, Zhiyong Wu, Sicheng Yang, Minglei Li, Zhiyi Chen, Songcen Xu, and Xiaofei Wu. Co-speech gesture video generation via motion-decoupled diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2263–2273, 2024. Zikai Huang, Xuemiao Xu, Cheng Xu, Huaidong Zhang, Chenxi Zheng, Jing Qin, and Shengfeng He. Beat-it: Beat-synchronized multi-condition 3d dance generation. In European conference on computer vision, pp. 273–290. Springer, 2024. Jihoon Kim, Jiseob Kim, and Sungjoon Choi. Flame: Free-form language-based motion synthesis & editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 8255–8263, 2023. Hanyang Kong, Kehong Gong, Dongze Lian, Michael Bi Mi, and Xinchao Wang. Priority-centric human motion generation in discrete latent space. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14806–14816, 2023. Ronghui Li, Junfan Zhao, Yachao Zhang, Mingyang Su, Zeping Ren, Han Zhang, Yansong Tang, and Xiu Li. Finedance: A fine-grained choreography dataset for 3d full body dance generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10234–10243, 2023a. Ronghui Li, Hongwen Zhang, Yachao Zhang, Yuxiang Zhang, Youliang Zhang, Jie Guo, Yan Zhang, Xiu Li, and Yebin Liu. Lodge++: High-quality and long dance generation with vivid choreography patterns. arXiv preprint arXiv:2410.20389, 2024a. Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 13401–13412, 2021. Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37:56424–56445, 2024b. Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z Li, and Laurence T Yang. General point model with autoencoding and autoregressive. arXiv preprint arXiv:2310.16861, 2023b. Zhe Li, Laurence T Yang, Xin Nie, BoCheng Ren, and Xianjun Deng. Enhancing sentence representation with visually-supervised multimodal pre-training. In Proceedings of the 31st ACM International Conference on Multimedia, pp. 5686–5695, 2023c. Zhe Li, Yisheng He, Lei Zhong, Weichao Shen, Qi Zuo, Lingteng Qiu, Zilong Dong, Laurence Tianruo Yang, and Weihao Yuan. Mulsmo: Multimodal stylized motion generation by bidirectional control flow. arXiv preprint arXiv:2412.09901, 2024c. Zhe Li, Laurence T Yang, Bocheng Ren, Xin Nie, Zhangyang Gao, Cheng Tan, and Stan Z Li. Mlip: Enhancing medical visual representation with divergence encoder and knowledge-guided contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11704–11714, 2024d. Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong Gu, Weichao Shen, Yuan Dong, Zilong Dong, and Laurence T Yang. Lamp: Language-motion pretraining for motion generation, retrieval, and captioning. arXiv preprint arXiv:2410.07093, 2024e. 11 Angela S Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qixing Huang, and Raymond J Mooney. Generating animated videos of human activities from natural language descriptions. Learning, 1(2018):1, 2018. Jing Lin, Ailing Zeng, Shunlin Lu, Yuanhao Cai, Ruimao Zhang, Haoqian Wang, and Lei Zhang. Motion-x: A large-scale 3d expressive whole-body human motion dataset. Advances in Neural Information Processing Systems, 36:25268–25280, 2023a. Junfan Lin, Jianlong Chang, Lingbo Liu, Guanbin Li, Liang Lin, Qi Tian, and Chang-wen Chen. Being comes from not-being: Open-vocabulary text-to-motion generation with wordless training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 23222–23231, 2023b. Zeyu Ling, Bo Han, Yongkang Wong, Mohan Kangkanhalli, and Weidong Geng. Mcm: Multi-condition motion synthesis framework for multi-scenario. arXiv preprint arXiv:2309.03031, 2023. Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. Emage: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1144–1154, 2024a. Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1566–1576, 2024b. Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. Smpl: A skinned multi-person linear model. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, pp. 851–866. 2023. Yunhong Lou, Linchao Zhu, Yaxiong Wang, Xiaohan Wang, and Yi Yang. Diversemotion: Towards diverse human motion generation via discrete diffusion. arXiv preprint arXiv:2309.01372, 2023. Mingshuang Luo, Ruibing Hou, Zhuo Li, Hong Chang, Zimo Liu, Yaowei Wang, and Shiguang Shan. M3gpt: An advanced multimodal, multitask framework for motion comprehension and generation. Advances in Neural Information Processing Systems, 37:28051–28077, 2024. Zichong Meng, Yiming Xie, Xiaogang Peng, Zeyu Han, and Huaizu Jiang. Rethinking diffusion for text-driven human motion generation. arXiv preprint arXiv:2411.16575, 2024. Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10975–10985, 2019. Mathis Petrovich, Michael J Black, and Gül Varol. Action-conditioned 3d human motion synthesis with transformer vae. In Proceedings of the International Conference on Computer Vision, 2021. Mathis Petrovich, Michael J Black, and Gül Varol. Temos: Generating diverse human motions from textual descriptions. In Proceedings of the European Conference on Computer Vision, 2022. Ekkasit Pinyoanuntapong, Pu Wang, Minwoo Lee, and Chen Chen. Mmm: Generative masked motion model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1546–1555, 2024. Matthias Plappert, Christian Mandery, and Tamim Asfour. Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks. Robotics and Autonomous Systems, 109:13–26, 2018. Jose Ribeiro-Gomes, Tianhui Cai, Zoltán A Milacski, Chen Wu, Aayush Prakash, Shingo Takagi, Amaury Aubel, Daeil Kim, Alexandre Bernardino, and Fernando De La Torre. Motiongpt: Human motion synthesis with improved diversity and realism via gpt-3 prompting. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 5070–5080, 2024. Li Siyao, Weijiang Yu, Tianpei Gu, Chunze Lin, Quan Wang, Chen Qian, Chen Change Loy, and Ziwei Liu. Bailando: 3d dance generation by actor-critic gpt with choreographic memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11050–11059, 2022. Shuai Tan, Bin Ji, and Ye Pan. Flowvqtalker: High-quality emotional talking face generation through normalizing flow and quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26317–26327, 2024. Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or. Motionclip: Exposing human motion generation to clip space. In Proceedings of the European Conference on Computer Vision, 2022. 12 Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In Proceedings of the International Conference on Learning Representations, 2023. Jonathan Tseng, Rodrigo Castellon, and Karen Liu. Edge: Editable dance generation from music. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 448–458, 2023. Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. Mocogan: Decomposing motion and content for video generation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1526–1535, 2018. Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. Yin Wang, Zhiying Leng, Frederick WB Li, Shun-Cheng Wu, and Xiaohui Liang. Fg-t2m: Fine-grained text-driven human motion generation via diffusion model. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 22035–22044, 2023. Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, and Changyou Chen. Learning diverse stochastic human-action generators by learning smooth latent transitions. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 12281–12288, 2020. Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, and Trevor Darrell. Hierarchical style-based networks for motion synthesis. In Proceedings of the European Conference on Computer Vision, 2020. Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven co-speech gesture generation with diffusion models. arXiv preprint arXiv:2305.04919, 2023. Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 469–480, 2023. Weihao Yuan, Yisheng He, Weichao Shen, Yuan Dong, Xiaodong Gu, Zilong Dong, Liefeng Bo, and Qixing Huang. Mogents: Motion generation based on spatial-temporal joint modeling. Advances in Neural Information Processing Systems, 37:130739–130763, 2024. Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. Physdiff: Physics-guided human motion diffusion model. In Proceedings of the International Conference on Computer Vision, 2023. Biao Zhang and Rico Sennrich. Root mean square layer normalization. Advances in neural information processing systems, 32, 2019. Canyu Zhang, Youbao Tang, Ning Zhang, Ruei-Sung Lin, Mei Han, Jing Xiao, and Song Wang. Bidirectional autoregessive diffusion model for dance generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 687–696, 2024a. Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14730–14740, 2023a. Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu. Remodiffuse: Retrieval-augmented motion diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 364–373, 2023b. Mingyuan Zhang, Huirong Li, Zhongang Cai, Jiawei Ren, Lei Yang, and Ziwei Liu. Finemogen: Fine-grained spatio-temporal motion generation and editing. Advances in Neural Information Processing Systems, 36: 13981–13992, 2023c. Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motion- diffuse: Text-driven human motion generation with diffusion model. PAMI, 2024b. Mingyuan Zhang, Daisheng Jin, Chenyang Gu, Fangzhou Hong, Zhongang Cai, Jingfang Huang, Chongzhi Zhang, Xinying Guo, Lei Yang, Ying He, et al. Large motion model for unified multi-modal motion generation. In European Conference on Computer Vision, pp. 397–421. Springer, 2024c. Yan Zhang, Michael J Black, and Siyu Tang. Perpetual motion: Generating unbounded human motion. arXiv preprint arXiv:2007.13886, 2020. 13 Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei Bai, Qi Chu, Nenghai Yu, and Wanli Ouyang. Motiongpt: Finetuned llms are general-purpose motion generators. arXiv preprint arXiv:2306.10900, 2023d. Zeyu Zhang, Akide Liu, Ian Reid, Richard Hartley, Bohan Zhuang, and Hao Tang. Motion mamba: Efficient and long sequence motion generation. In European Conference on Computer Vision, pp. 265–282. Springer, 2024d. Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Rui Zhao, Biao Wu, Zirui Song, Bohan Zhuang, Ian Reid, and Richard Hartley. Motion anything: Any to motion generation. arXiv preprint arXiv:2503.06955, 2025. Chongyang Zhong, Lei Hu, Zihao Zhang, and Shihong Xia. Attt2m: Text-driven human motion generation with multi-perspective attention mechanism. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 509–519, 2023. Zixiang Zhou and Baoyuan Wang. Ude: A unified driving engine for human motion generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5632–5641, 2023. Zixiang Zhou, Yu Wan, and Baoyuan Wang. A unified framework for multimodal, multi-part human motion synthesis. arXiv preprint arXiv:2311.16471, 2023. Zixiang Zhou, Yu Wan, and Baoyuan Wang. Avatargpt: All-in-one framework for motion understanding planning generation and beyond. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1357–1366, 2024. Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audio-driven co-speech gesture generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10544–10553, 2023. Wenlin Zhuang, Congyi Wang, Jinxiang Chai, Yangang Wang, Ming Shao, and Siyu Xia. Music2dance: Dancenet for music-driven dance generation. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 18(2):1–21, 2022. Qiran Zou, Shangyuan Yuan, Shian Du, Yu Wang, Chang Liu, Yi Xu, Jie Chen, and Xiangyang Ji. Parco: Part-coordinating text-to-motion synthesis. In European Conference on Computer Vision, pp. 126–143. Springer, 2024. 14 A APPENDIX A.1 TRAINING DETAILS Encoding of Speech and Music Our speech and music encoders are designed to extract temporally aligned, high-level features from raw audio signals for effective speech-to-gesture and music-to- dance generation. The architecture builds upon a multi-layer 1D convolutional network with strided convolutions and leaky ReLU activations. Each convolutional block consists of a series of a block unit that progressively downsample the input waveform while increasing feature dimensionality. The input audio sequence is first processed through multiple stages of temporal aggregation and non-linear transformation, resulting in a sequence of compact and expressive latent representations, whereas the input music typically retains sufficient temporal structure and spectral richness in its raw form for effective motion synthesis. These latent codes capture prosodic, rhythmic, and semantic-like patterns in speech and music, which are then projected into the condition latent space of dimensionality. The final encoder output is transposed to align with the temporal structure expected by the diffusion model, enabling fine-grained cross-modal interaction between speech and motion sequences during generation. Training of AE We first pretrain a baseline autoencoder on the text-to-motion task. When fine- tuning it on the speech-to-gesture and music-to-dance tasks, the decoder fails to reconstruct valid motion sequences due to discrepancies in data distribution. However, fine-tuning the autoencoder using the reconstruction objective during the multi-modal training incurs high computational costs. Therefore, we independently fine-tune the baseline AE on each dataset using the reconstruction task before multi-modal generation, and employ the resulting models for downstream tasks. Motion Representation Following previous work Bian et al. (2025), we utilize SMPL-X formatted motion data with an input dimension of (frame length × 322). The parameter structure is organized as follows: Root orientation (0:3): controls global body rotation; body pose (3:66): Governs major body joint rotations; hand articulation (66:156): controls finger movements; jaw pose (156:159): manages mouth opening/closing; facial expression (159:209): drives emotional expressions; facial shape (209: 309): determines static facial structure; translation (309:312): controls global body position; betas (312: 322): represents static body shape parameters. And the maximum motion length is 196. The model’s output maintains identical dimensionality (frame length × 322) to ensure full reconstruction capability. This comprehensive parameterization enables simultaneous control of body motion, facial animation, and global positioning within a unified framework. A.2 DATASETS. For text-based motion generation, we evaluate our method on the HumanML3D (Guo et al., 2022a) dataset, which consists of 14,616 high-quality human motions paired with 44,970 text descriptions. The original body-only SMPL (Loper et al., 2023) format of this dataset is extended to whole-body SMPL-X (Pavlakos et al., 2019) format in MotionCraft (Bian et al., 2025), which we follow in the experiments for evaluation. For speech-based motion generation, we evaluate on the BEAT2 dataset (Liu et al., 2024a), which collects 76 hours of data from 30 speakers, standardized into a mesh representation with paired audio and text lines. The motion of the unified SMPL-X format is also extracted (Bian et al., 2025) for multimodal evaluation. For music-based motion generation, the largest dataset FineDance (Li et al., 2023a) is utilized for evaluation. This dataset collects dances of 14.6 hours across 22 genres and provides detailed human motions using the SMPL-H format, which is then converted to the unified SMPL-X format and appended by text descriptions. To enable full-body, multimodal control over motion generation, we convert all datasets to the SMPL-X format. This involves filling in missing facial expressions in HumanML3D and FineDance using average expression coefficients from the training set, as well as transforming the SMPL-H Rot-6D representation in FineDance into axis-angle format via Gram-Schmidt orthogonalization. This conversion achieves better alignment with SMPL-X parameters and introduces minimal errors compared to the official body-retargeting method, while also offering improved computational efficiency. 15 Method R Precision FID ↓ MM Dist↓ Div → Top-1 ↑ Top-2 ↑ Top-3 ↑ GT 0.511±0.003 0.703±0.003 0.797±0.002 0.002±0.000 9.503±0.065 2.974±0.008 MDM(Tevet et al., 2023) 0.418±0.005 0.604±0.005 0.707±0.004 0.489±0.025 9.450±0.066 3.630±0.023 MotionDiffuse(Zhang et al., 2024b) 0.491±0.001 0.681±0.001 0.782±0.001 0.630±0.001 9.410±0.049 3.113±0.001 FineMoGen(Zhang et al., 2023c) 0.504±0.002 0.690±0.002 0.784±0.002 0.151±0.008 9.263±0.094 2.998±0.008 Motion-Verse(Zhang et al., 2024c) 0.496±0.002 0.685±0.002 0.785±0.002 0.415±0.002 9.176±0.074 3.087±0.012 MCM(Ling et al., 2023) 0.494±0.003 0.682±0.005 0.777±0.003 0.075±0.003 9.484±0.074 3.086±0.011 MotionCraft (Bian et al., 2025) 0.501±0.003 0.697±0.003 0.796±0.002 0.173±0.002 9.543±0.098 3.025±0.008 MARDM (Meng et al., 2024) 0.502±0.003 0.691±0.003 0.787±0.002 0.286±0.003 9.470±0.081 3.346±0.007 Ours 0.548±0.003 0.743±0.003 0.837±0.002 0.141±0.003 9.537±0.087 2.856±0.008 Table 5: Results of text-to-motion on the original HumanML3D benchmark. Method R Precision FID ↓ MM Dist↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ Ours 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 15.871±0.030 Ours-Finetuned 0.701±0.002 0.846±0.005 0.898±0.005 4.843±0.102 15.868±0.027 Table 6: Results of text-to-motion after fine-tuning. (On the HumanML3D subset of Motion-X dataset, following the unified SMPL-X representation.) To ensure consistency with MotionCraft (Bian et al., 2025), we utilize the pretrained motion en- coder and text encode, enabling a unified evaluation of the SMPL-X motion representation across different modalities. For datasets that lack corresponding textual annotations—namely FineDance and BEAT2—we generate pseudo-captions such as “A dancer is performing a street dance in the Jazz style to the rhythm of the wildfire” and “A person is giving a speech, and the content is ...”, respectively, to support cross-modal learning. A.3 METRICS Text-based Motion Generation To assess the quality of the motions generated based on texts compared to the true data, we utilize the Frechet Inception Distance (FID) to evaluate the distribution differences between the generated motions and the ground truth. Additionally, R-Precision is employed to determine how frequently the most relevant motions, identified as top-k closest matches, align with their respective captions within a batch of 32 samples. Lastly, Multi-Modal Distance (MM Dist) is employed to gauge the average Euclidean distance between motion representations and their corresponding textual features. Speech-based Motion Generation For evaluating the quality and diversity of the motions generated based on speech, we employ FIDH, FIDB, and Diversity metrics. FIDH measures the difference between hand motion distribution and the true gesture distribution, whereas FIDB assesses the divergence between the whole-body motion distributions. The Beat Alignment Score (Li et al., 2021) is used to measure the synchronization between motions and speech beats. To quantify the difference between generated expressions and actual expressions, we use the L2 Loss. Music-based Motion Generation Mirroring the approach used for speech-driven gesture gen- eration, we apply FIDH, FIDB, and Diversity metrics to evaluate the quality and diversity of Method R Precision FID ↓ MM Dist↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ GT 0.663±0.006 0.807±0.002 0.864±0.002 0.000±0.000 15.567±0.036 MotionCraft-Basic (Bian et al., 2025) 0.590±0.003 0.743±0.004 0.804±0.004 8.477±0.102 16.252±0.035 MotionCraft-Mix (Bian et al., 2025) 0.600±0.003 0.747±0.004 0.812±0.006 6.707±0.081 16.334±0.059 Ours-Basic 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 15.871±0.030 Ours-Mix 0.712±0.003 0.849±0.005 0.904±0.004 4.759±0.102 15.765±0.026 Table 7: Results of text-driven motion generation on the HumanML3D dataset following the mix training setup (Bian et al., 2025). 16 Method FIDH ↓ FIDB ↓ Face L2 Loss ↓ Beat Align Score ↑ Diversity ↑ MotionCraft-Basic (Bian et al., 2025) 18.486 27.023 10.097 8.098 10.334 MotionCraft-Mix (Bian et al., 2025) 12.882 25.187 8.906 8.226 12.595 Ours-Basic 17.651 25.923 9.883 8.377 14.703 Ours-Mix 12.201 25.644 8.947 8.430 15.003 Table 8: Results of speech-driven motion generation on the BEAT2 dataset (Liu et al., 2024a) following the mix training setup (Bian et al., 2025). A person is doing a speech: “One thing that scared me once was …” A person is doing a speech: “One night I was out with my friends …” A person is doing a speech: “Well yes I have experienced a paranormal …” A person is doing a speech: “One time I was searching for my friends …” Ours MotionCraft unfold in time axis Figure 5: The qualitative results of speech-driven motion generation. music-induced hand and whole-body movements. This approach ensures that the generated motions exhibit both high fidelity and variation. A.4 ADDITIONAL EXPERIMENT RESULTS More Results on Text-to-motion To provide a more comprehensive evaluation, we conduct additional comparisons on the original HumanML3D benchmark using the body-only H3D format, which contains redundant motion information. Here we mainly compare with the methods without VQ-VAE. As shown in Tab. 5, OmniMotion consistently outperforms these baselines in terms of text- motion alignment, motion quality, and diversity, demonstrating its superior generalization capability across different motion representations. Text-to-motion Evaluation after fine-tuning We conduct a comprehensive evaluation on the final model (after fine-tuning on both speech-to-gesture and music-to-dance datasets) and present the results below. Since textual conditioning participates throughout the entire training pipeline, our model does not suffer from catastrophic forgetting after fine-tuning. This confirms the robustness of our architecture’s knowledge retention capabilities under different training paradigms. Evaluation of OmniMotion Variants Following the same strategy as MotionCraft (Bian et al., 2025), we train two variants of our model: OmniMotion-Base and OmniMotion-Mix. OmniMotion- Base is a text-to-motion model pretrained solely on HumanML3D, while OmniMotion-Mix is trained on a combined dataset comprising HumanML3D, BEAT2, and FineDance to enable multimodal motion generation. Quantitative results on the text-to-motion task are summarized in Table 7. We further evaluate OmniMotion-Mix across all three modalities. For the speech-to-gesture and music-to-dance tasks, we fine-tune the model on the respective target datasets. The corresponding results are reported in Table 8 and Table 9, respectively. A.5 MORE VISUALIZATION We display more visual results of speech-driven and music-driven motion generation in Figure 5 and Figure 6, respectively. 17 A dancer is performing a Folk dance in the Dai style to the rhythm of the Xuanyue A dancer is performing a Classic dance in the ShenYun style to the rhythm of the Tanglanting song A dancer is performing a Street dance in the Popping style to the rhythm of the WeAreOne2017 song A dancer is performing a Street dance in the Urban style to the rhythm of the Redeye song unfold in time axis Ours MotionCraft Figure 6: The qualitative results of music-driven motion generation. Method FIDH ↓ FIDB ↓ Div ↑ MotionCraft-Basic (Bian et al., 2025) 3.858 76.248 16.667 MotionCraft-Mix (Bian et al., 2025) 2.849 67.159 18.483 Ours-Basic 3.632 71.930 15.871 Ours-Mix 2.781 64.380 17.605 Table 9: Results of music-driven motion generation on the FineDance dataset (Li et al., 2023a) following the mix training setup (Bian et al., 2025). 18
OMNIMOTION: MULTIMODAL MOTION GENERATION WITH CONTINUOUS MASKED AUTOREGRESSION Zhe Li1,2∗, Weihao Yuan2†, Weichao Shen2, Siyu Zhu3, Zilong Dong2, Chang Xu1† 1 2 Alibaba Group 3 Fudan University ABSTRACT Whole-body multi-modal human motion generation poses two primary challenges: creating an effective motion generation mechanism and integrating various modalities, such as text, speech, and music, into a cohesive framework. Unlike previous methods that usually employ discrete masked modeling or autoregressive modeling, we develop a continuous masked autoregressive motion transformer, where a causal attention is performed considering the sequential nature within the human motion. Within this transformer, we introduce a gated linear attention and an RMSNorm module, which drive the transformer to pay attention to the key actions and suppress the instability caused by either the abnormal movements or the heterogeneous distributions within multi-modalities. To further enhance both the motion generation and the multimodal generalization, we employ the DiT structure to diffuse the conditions from the transformer towards the targets. To fuse different modalities, AdaLN and cross-attention are leveraged to inject the text, speech, and music signals. Experimental results demonstrate that our framework outperforms previous methods across all modalities, including text-to-motion, speech-to-gesture, and music-to-dance. The code of our method will be made public. 1 INTRODUCTION Whole-body human motion generation represents an expanding frontier in computer vision, offering significant value across a variety of applications, including film production, gaming, virtual reality, robotics, and so on. Broadly speaking, motion generation could be conditioned on various signals, such as text, speech, music, and more. Historically, approaches to whole-body motion generation usually focus on isolated tasks. Typically, they either address text-to-motion generation, or concentrate on speech-to-gesture translation, or engage in music-to-dance synthesis. Despite their successes in single task, their frameworks are exclusively designed for individual tasks and cannot be easily adapted to different tasks. In addition, they tend to overlook the underlying commonalities that exist across different tasks. In contrast, in this work we seek to address these motion generation challenges from various signals within an omni-framework. This brings two advantages: 1) It allows each modality to benefit from the patterns present in other modalities, preventing single-mode solutions from becoming trapped in a local minimum; 2) It enhances each task with data from other tasks, which is particularly relevant given the limited scale of data available for individual motion tasks. Previous studies in motion generation generally proceed in two paths. The first employs the vector quantization (VQ) technique to convert continuous motion to discrete tokens, and then performs autoregressive or masked modeling to predict the tokens (Zhang et al., 2023d;a; Kong et al., 2023; Zhong et al., 2023; Guo et al., 2024). While this path effectively utilizes the strengths of autoregressive and masked modeling, the quantization step inevitably introduces approximation errors, which impose undesirable limits on the quality of the generated motions. The second directly regresses the continuous motions using techniques such as generative adversarial networks (GANs) (Tulyakov et al., 2018), variational autoencoders (VAEs) (Xu et al., 2020; Ahuja & Morency, 2019; Petrovich et al., 2022; Guo et al., 2022a), or recent diffusion models (Chen et al., 2023; Tevet et al., 2023; Zhang et al., 2024b; 2023b; Ribeiro-Gomes et al., 2024). Despite avoiding the approximation errors, they ∗Work done during internship at Alibaba † Corresponding Author 1 16 Oct 2025 Dance in the Jazz style Speak: one thing that ... OmniMotion Squat while lifting hands Figure 1: We construct an omni motion framework with a continuous masked autoregressive motion transformer for multimodal whole-body motion modeling, including text-based, music-based, and speech-based motion generation. miss the autoregressive or masked modeling technologies, which have been shown to deliver superior performance in motion generation tasks. Consequently, the performance of the motion generated by this path is overall lower than that achieved by the first path. To both leverage the advantages of autoregressive and masked modeling, and the benefits of the continuous motion representation, in this work we combine them together to propose a continuous masked autoregressive motion generation framework. We apply a random masking on the sequential motion tokens, and employ a transformer to autoregressively predict the masked tokens. Unlike the visual MAR (Li et al., 2024b), we sequentially predict the masked tokens with causal attention rather than performing random reordering, considering the sequential nature within human motion. To enhance the MAR modeling in motion space, we introduce a gated linear mechanism and an RMSNorm module. The gated linear mechanism serves as an adaptive feature selector, driving the transformer to not only pay more attention to the key actions, like gesture switching and large movement, but also disregard less relevant frames and suppress redundant actions, like stationary motions. The RMSNorm is particularly advantageous in scenarios with features exhibiting a large dynamic range, e.g., our unified framework for multi-modalities, where the input distributions are highly heterogeneous. In addition, RMSNorm helps relieve the gradient instability caused by the abnormally large motions, such as sudden jumping or turning back. After the masked autoregressive transformer, the calculated attention of the masked tokens is fed into a series of DiT blocks to diffuse towards the target tokens, which are decoded to the generated motions. In addition to the text-based motion generation, we further extend our framework to multimodal conditions. Building upon a similar structure, the multimodal signals are fused by AdaLN (Guo et al., 2022c) and cross-attention modules. Extensive experiments across different datasets demonstrate our framework can work well with different modalities, including text, speech, and music, and outperform previous methods in whole-body text-to-motion, speech-to-gesture, and music-to-dance tasks. The main contributions of this work are then summarized as follows: • We design an omni motion framework for whole-body human motion generation, where one framework encompasses multiple modalities. • We propose a continuous autoregressive motion transformer with causal attention, where a gated linear mechanism and an RMSNorm module are developed to assist the motion modeling, and the DiT blocks are employed to improve the quality of the generated motions. • We integrate the multimodal signals via AdaLN and cross-attention, obtaining superior performance than previous methods in text-based, speech-based, and music-based motion generation. 2 2 RELATED WORK Text-based Motion Generation. Previous research on text-based motion generation can be generally categorized into two mainstream paths: continuous regression and discrete classification. In the continuous regression domain, numerous strategies have leveraged the variational autoencoder (VAE) framework, integrating latent embeddings of encoded text with those of encoded poses, which are then decoded into motion predictions (Xu et al., 2020; Ahuja & Morency, 2019; Athanasiou et al., 2022; Petrovich et al., 2022; Guo et al., 2022a). Other methods have investigated the potential of recurrent networks (Lin et al., 2018; Plappert et al., 2018; Zhang et al., 2020), generative adversarial networks (Cai et al., 2018; Wang et al., 2020; Tulyakov et al., 2018), or transformer networks (Tevet et al., 2022; Lin et al., 2023b; Bhattacharya et al., 2021; Petrovich et al., 2021) to enhance the motion regression quality. Building on the success of diffusion models, recent approaches have begun to integrate the diffusion process into motion diffusion (Kim et al., 2023; Chen et al., 2023; Tevet et al., 2023; Zhang et al., 2024b; Dabral et al., 2023; Lou et al., 2023; Zhang et al., 2023b; Ribeiro-Gomes et al., 2024; Yuan et al., 2023; Wang et al., 2023; Zhang et al., 2023c; 2024d; Bian et al., 2025), yielding impressive results. In the discrete classification domain, the input motion undergoes initial encoding via a VQ-VAE (Van Den Oord et al., 2017), producing motion tokens for subsequent prediction (Zhong et al., 2023; Li et al., 2024e). Drawing inspiration from advancements in natural language processing, some methods utilize autoregressive modeling to predict tokens sequentially (Guo et al., 2022b; Lou et al., 2023; Zou et al., 2024). Others employ generative masked modeling strategies, with tokens randomly masked during training for the model to predict (Guo et al., 2024; Pinyoanuntapong et al., 2024; Yuan et al., 2024; Li et al., 2023b). More recently, large language models (LLMs) have been harnessed to help the prediction process, considering their large-scale pretraining (Zhang et al., 2023d; Zhou et al., 2024; Li et al., 2024d; 2023c). In this work, we seek to integrate the most effective elements from these two paths: the continuous diffusion and the masked autoregressive modeling. A previous attempt in this direction (Meng et al., 2024) directly transfers the MAR in image generation (Li et al., 2024b) into motion generation without considering the difference between image and motion spaces, especially the temporal correlation. Also, its framework is only designed for body-only motion generation. Differently, we propose a new MAR mechanism that is especially designed for whole-body motion generation. Multimodal Motion Generation. In addition to text, there are many other signals that various human motions are conditioned on, such as speech and music. In the realm of speech-to-gesture generation, both continuous regression and discrete classification paths have been explored. In the continuous domain, methods employ deep generative models like GANs (Ahuja et al., 2022), normalizing flows (Tan et al., 2024), and diffusion models (Alexanderson et al., 2023; Chen et al., 2024a; He et al., 2024; Yang et al., 2023; Zhu et al., 2023; Chen et al., 2024b) to learn complex motion distributions in the speech data. In the discrete domain, methods leverage either the autoregressive modeling (Yi et al., 2023) or the masked modeling (Liu et al., 2024a;b) to predict the discrete tokens quantified by the VQ-VAE. The primary distinction among these methods lies in their specific handling of different parts of human motion, including body movements, hand gestures, and facial expressions. Similarly, in the realm of music-to-dance generation, there are also methods in both the continuous domain (Zhuang et al., 2022; Tseng et al., 2023; Li et al., 2024a; Huang et al., 2024; Zhang et al., 2024a; Li et al., 2023a) and the discrete domain (Siyao et al., 2022). The discrete autoregressive model is leveraged after the motion quantization with VQ-VAE (Siyao et al., 2022). More methods harness the diffusion model to directly regress the target dancing motion in the continuous space (Tseng et al., 2023; Li et al., 2024a; Huang et al., 2024; Li et al., 2023a). Recent methods also start to merge autoregressive and diffusion models, producing coherent and music-aligned dance sequences (Zhang et al., 2024a). Recent works have begun to seek multimodal solutions, i.e., designing one framework for motion generation from different input signals (Luo et al., 2024; Zhang et al., 2024c; Zhou & Wang, 2023; Zhou et al., 2023; Ling et al., 2023; Bian et al., 2025; Li et al., 2024c). Some methods in the discrete domain attempt to incorporate quantized condition tokens into the vocabulary of the generation model (Luo et al., 2024; Zhou & Wang, 2023; Zhou et al., 2023; Zhang et al., 2025), while some methods in the continuous domain try to integrate the multimodal signals by designing the motion ControlNet (Ling et al., 2023; Bian et al., 2025), where the multimodal conditions guide the sampling of a pretrained text-to-motion diffusion model. However, most previous methods are restricted by the varying motion data of different modalities, limiting multi-modal frameworks primarily to body-only 3 Motion Encoder AdaLN "Squat while lifting hands above head" DiT ×4 RMSNorm Multi-head Attn Gate x! x" ×4 Text Encoder Motion Encoder AdaLN "Speak / Dance" Multi-head Attn Gate ×4 Text Encoder Multimodal Encoder + Cross Attn Conv1D ReLU Conv1D Down-ResBlock Conv1D ×2 Conv1D Conv1D Up-ResBlock ×2 ReLU Conv1D FFN RMSNorm FFN DiT ×4 x! x" (a) (b) (c) Figure 2: Framework overview. Our framework consists of three parts: (a) The input motion is encoded by an autoencoder to extract a latent code, producing the continuous motion tokens. (b) The motion tokens are masked and predicted in an autoregressive transformer with causal attention, producing conditions for DiTs to diffuse towards the target tokens. (c) Multimodal signals are encoded and then injected via AdaLN and cross-attention. motion generation. To overcome this, MotionCraft (Bian et al., 2025) standardizes datasets across modalities into a unified whole-body format that includes body, hands, and face. In this work, we follow this unified representation to build a whole-body multi-modal motion framework, taking advantage of continuous masked auto-regression. 3 METHOD 3.1 OVERVIEW The overview of our framework is illustrated in Figure 2, which is divided into three main stages: In the first stage, we encode the input motion with an autoencoder, which generates continuous motion tokens. In the second stage, we focus on the text-based motion generation, utilizing our motion masked autoregressive framework to model the motion generation process. In this stage, an autoregressive transformer is employed to predict the masked tokens, within which a gated linear mechanism is designed, and an RMSNorm module is employed. The text information is integrated into the transformer via AdaLN after encoding. After the transformer, the generated embedding is fed into the DiT modules as the condition to diffuse towards the target token. In the third stage, we extend the model learned in the previous stage to the multi-modal structure. This involves merging the text embedding with multimodal signal embeddings-specifically speech or music-prior to their AdaLN input. Furthermore, we inject the multimodal embedding through a cross-attention module into the masked transformer. In the multimodal learning stage, the DiT module is kept in the same structure and frozen. 3.2 CONTINUOUS AUTOENCODER To feed the human motion into the transformer, we start by encoding the original motion into motion tokens. Given a motion sequence {Mt}T t=1, the objective of an autoencoder is to extract a latent code zAE that optimally captures the essence of the original motion. Different from most previous motion generation methods with autoregressive transformers, we use a continuous autoencoder rather than the VQ-VAE to do the encoding, which avoids the precision loss associated with the quantization approximation. In the encoder, we stack the 1D convolution networks with ReLU activation to do the feature processing. Following this, two down-sampling residual blocks are applied to reduce the motion feature size to one-fourth of its original dimensions. In the decoder, the same structure in the reversed order is utilized to up-sample the motion feature back to the original size, producing 4 { ˆMt}T t=1. Therefore, the loss for training the autoencoder is defined as LAE = X t ∥ˆMt -Mt ∥1. (1) The latent code zAE between the encoder and the decoder serves as the motion tokens x0, x1, ..., xN, with a sequence length that is one-fourth of the initial sequence length. 3.3 CONTINUOUS MASKED AUTOREGRESSION With the continuous motion tokens, we design a masked autoregressive transformer to model the motion generation, effectively capturing the temporal relationship between different tokens and producing the rich contextual condition zi for the subsequent diffusion process. We first randomly mask the motion tokens following language models (Devlin et al., 2018), obtaining some masked tokens { ̃xi}. The temporal masking strategies adopt the same mask ratio schedule following (Chang et al., 2022), and are computed as γ(τ) = cos(πτ 2 ), (2) where τ ∈[0, 1]. In training, τ ∼U(0, 1) is uniformly sampled, leading to a mask ratio γ(τ). Then according to this ratio, γ(τ) × N tokens are randomly selected to be masked. After the masking, unlike previous MAR methods (Li et al., 2024b; Meng et al., 2024), our approach does not involve random rearranging of tokens or batch-tokens prediction. Also, we do not perform bidirectional attention. In contrast, we maintain the temporal order of the original motion, and sequentially undertake autoregressive prediction, thus forming a causal attention manner, as illustrated in Figure 2. For the input text prompt T, we first employ the LaMP (Li et al., 2024e) text transformer to extract textual features, leveraging its advanced capabilities to encode the linguistic nuances and semantic structure of the input prompt. This creates a high-dimensional feature representation that is crucial for guiding motion generation process. Following the feature extraction, we utilize AdaLN to seamlessly integrate the text-derived control signals into the masked autoregressive transformer. AdaLN offers a dynamic approach to normalization, allowing the modulation of its parameters in response to the specific text input, thereby facilitating subsequent multimodal condition injection. By employing this method, we enhance the model's ability to incorporate the guiding signals from the text and other signals into the motion generation process, ensuring that the transformer's output features are better aligned with the intended motion generation goals. The features outputted by the transformer embody a strong directive capacity for motion generation. This enables the model not only to faithfully interpret the semantic content of the input text but also to produce motion sequences that are coherent with and reflective of the textual intent. The enriched output features contribute to achieving smoother transitions and logically consistent motion sequences in complex generation scenarios. Gated Linear Mechanism. We employ a gated linear attention mechanism within the transformer to regulate the attention weights at each time step. Specifically, we compute a gating signal by applying a linear transformation go to the input x followed by a sigmoid activation function. This gating signal acts as a dynamic filter, adjusting the output of the attention module based on the relevance of the input features. Consequently, during the attention computation, the final output o is modulated by this gating signal, enabling the model to selectively focus on the most pertinent action frames. o = g × Softmax( Q·KT dk )V, g = sigmoid(go(x)). (3) This mechanism effectively serves as an adaptive feature selector, allowing the model to disregard less relevant frames and suppress redundant action frames (such as stationary or repetitive motions), thereby enhancing its attention to key actions (e.g., gesture transitions and changes in motion direction). Furthermore, by dynamically adjusting the attention distribution through gating, the model is capable of selectively retaining historical frames or predicting future frames based on the current action state. 5 A person is doing a speech: "One thing that scared me once was ..." A person is doing a speech: "One night I was out with my friends ..." A person is doing a speech: "Well yes I have experienced a paranormal ..." A person is doing a speech: "One time I was searching for my friends ..." Speech-driven Music-driven A dancer is performing a Folk dance in the Dai style to the rhythm of the Xuanyue A dancer is performing a Classic dance in the ShenYun style to the rhythm of the Tanglanting song A dancer is performing a Street dance in the Popping style to the rhythm of the WeAreOne2017 song A dancer is performing a Street dance in the Urban style to the rhythm of the Redeye song unfold in time axis Figure 3: The qualitative results of motions generated from our model driven by speech and music. RMSNorm. Our objective is to construct a unified model that can simultaneously perform text-tomotion, speech-to-gesture, and music-to-dance tasks. Therefore, we aim to ensure that during the fine-tuning process across different datasets, the model does not suffer from instability that could lead to catastrophic failures. RMSNorm (Zhang & Sennrich, 2019) is particularly advantageous in scenarios with features exhibiting a large dynamic range, especially in tasks where the input distributions are highly heterogeneous. This characteristic enables the model to maintain stability when faced with diverse types of inputs or when observing uneven feature distributions. Additionally, RMSNorm has the potential to mitigate the gradient instability that may arise from significant motion variations, such as sudden jumps or rapid turns. 3.4 DIFFUSION TRANSFORMER In contrast to previous works (Li et al., 2024b; Meng et al., 2024), we adopt Diffusion Transformer (DiT) as our diffusion model. While the use of DiT may incur additional time costs during training and inference (not much due to the compact structure of motion data), it significantly enhances the quality of generated outputs. Compared to MLPs, DiT provides greater convenience in injecting conditional control signals. During the training process of multimodal generation, we freeze the diffusion model and only fine-tune the masked transformer. The structural characteristics of DiT facilitate this approach, enabling it to better handle various types of conditional signals. Moreover, MLPs exhibit notable limitations when processing heterogeneous data. This incapacity results in suboptimal performance when confronted with diverse signal types, such as speech and music. Due to the relatively small number of parameters in MLPs, they are prone to overfitting on specific datasets (e.g., text-to-motion). This situation can be analogized to a dancer who is adept only in a single dance style; when asked to incorporate other styles, they appear clumsy and ineffective. Consequently, when we attempt to fine-tune MLPs on a different dataset, they are ill-equipped to adapt to the challenges posed by new signals, leading to failures in multimodal generation tasks. In contrast, DiT demonstrates superior performance in complex multimodal generation contexts. Its enhanced generalization capabilities allow it to flexibly handle a variety of input signal types, rather than being confined to a single data format. This ensures that the model exhibits increased adaptability and reliability when exposed to diverse data, ultimately resulting in higher-quality outcomes. 3.5 TEXT-TO-MOTION PRETRAINING AND MULTIMODAL CONTROL ADAPTATION We first pre-train the model on text-motion paired data in a text-to-motion generation setting. Owing to its strong semantic expressiveness and cross-modal alignment properties, we adopt text as a shared conditional signal across diverse unimodal datasets, enabling the model to learn sequencelevel generation capabilities between text and motion, as well as a coarse-grained textual guidance mechanism for generative control. 6 The person is doing karate moves Ours MotionCraft Karate move Karate move ❌ ✅ A person swings his arms loosely in front of him, pats his head and rubs his stomach like he was mimicking a monkey A person crosses his arms and then drops them to his sides A man sits down and then stays still Pats his head Rubs his stomach Pats his head Rubs his stomach Crosses his arms Drops them to his sides Crosses his arms Drops them to his sides Sits down Stays still Sits down Stays still ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ❌ ❌ ✅ ❌ unfold in time axis Figure 4: The qualitative results of text-driven motion generation. Method R Precision FID ↓ MM Dist↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ GT 0.663±0.006 0.807±0.002 0.864±0.002 0.000±0.000 15.567±0.036 T2M-GPT(Zhang et al., 2023a) 0.529±0.004 0.652±0.003 0.732±0.003 10.457±0.108 17.029±0.039 MDM(Tevet et al., 2023) 0.383±0.010 0.527±0.012 0.604±0.009 18.671±0.370 18.785±0.054 MotionDiffuse(Zhang et al., 2024b) 0.525±0.004 0.675±0.009 0.743±0.009 9.982±0.379 17.314±0.066 FineMoGen(Zhang et al., 2023c) 0.565±0.001 0.710±0.004 0.775±0.004 7.323±0.143 16.679±0.029 MCM(Ling et al., 2023) 0.407±0.002 0.559±0.003 0.636±0.001 15.540±0.443 18.673±0.029 MotionCraft (Bian et al., 2025) 0.590±0.003 0.743±0.004 0.804±0.004 8.477±0.102 16.252±0.035 Ours 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 15.871±0.030 Table 1: The quantitative results of text-to-motion on the HumanML3D subset of Motion-X dataset (Lin et al., 2023a), following the unified SMPL-X representation (Bian et al., 2025). We hypothesize that the contextual features output by the masked transformer provide a more expressive control signal compared to raw text embeddings. Accordingly, within the DiT architecture, we inject the transformer's output features by summing them with the time embeddings, thereby guiding the motion generation process as: ̃xi t-1 ∼p( ̃xi t-1| ̃xi t, t + zi). (4) Then the training objective for noise prediction is defined as: L = Eε,t∥ε -εθ( ̃xi t|t + zi)∥. (5) This procedure yields the trained model Mt2m. When incorporating additional control signals, we initialize the entire model with the parameters of model Mt2m, freeze the DiT, and fine-tune only the masked transformer. Crucially, in contrast to the original design, we introduce cross-attention layers within the transformer to explicitly model interactions between the control signals and the motion sequence. This modification aims to produce more precise, fine-grained control representations, thereby enhancing both the quality and controllability of the generated motions. 3.6 INFERENCE The inference process starts with an all-masked sequence. We introduce mask tokens at the corresponding positions in the sequence. This enables the autoregressive model to iteratively predict the masked latent signals conditioned on the observed context. When performing speech-to-gesture and music-to-dance tasks, the speech and music modalities are processed through cross-attention mechanisms within the transformer, interacting with the textual features. This allows the model to generate more fine-grained conditional signals that are temporally aligned with the text. The predicted latents are then fed into the DiT to refine and generate the final 7 Method FIDH ↓ FIDB ↓ Face L2 Loss ↓ Beat Align Score ↑ Diversity ↑ Talkshow (Yi et al., 2023) 26.713 74.824 7.791 6.947 13.472 EMAGE (Liu et al., 2024a) 39.094 90.762 7.680 7.727 13.065 MCM (Ling et al., 2023) 23.946 71.241 16.983 7.993 13.167 MotionCraft (Bian et al., 2025) 18.486 27.023 10.097 8.098 10.334 Ours 17.651 25.923 9.883 8.377 14.703 Table 2: Results of speech-based motion generation on the BEAT2 dataset (Liu et al., 2024a), following the unified SMPL-X representation (Bian et al., 2025). motion sequence. The sampling process can be denoted as: xi t-1 = 1 √αt xi t - √1 -αt √1 - ̄αt εθ(xi t | t + zi) + σtεt, (6) where εt ∼N(0, I), and zi denotes the condition output from the transformer. We adopt classifierfree guidance (CFG) (Chang et al., 2023) to condition the transformer on signal embeddings. At inference time, CFG is applied at the final linear projection layer preceding the softmax operation. At this point, the final logits lf are computed by adjusting the conditional logits lc relative to the unconditional logits luc, using a guidance scale α: lf = (1 + α) · lc -α · luc (7) 4 EXPERIMENTS 4.1 EXPERIMENT SETTINGS Implementation Details. Our model is implemented on one NVIDIA V100 GPU using PyTorch. For our method, the autoencoder employs a ResNet-based (He et al., 2016) three-layer encoderdecoder architecture with a hidden dimension of 512 and an overall downsampling rate of 4. For the generation process, we utilize a four-layer AdaLN-zero transformer encoder as the masked autoregressive transformer, featuring a hidden dimension of 1024 and 16 attention heads. The diffusion model consists of 4 layers of DiT, where each transformer block has a hidden dimension of 1792 and 8 attention heads. We adopt the AdamW optimizer (β1 = 0.9, β2 = 0.99). For training the autoencoder on the HumanML subset of Motion-X, we use a batch size of 256 and a maximum sequence length of 64 frames. For the text-to-motion task, the batch size is set to 50 with a maximum sequence length of 196 frames. The learning rate is initialized at 0.0002 with a linear warmup over 2000 steps. The autoencoder is trained for 50 epochs, while the text-to-motion task is trained for 1500 epochs. During multimodal generation, we first initialize the model with pretrained weights from the text-to-motion autoencoder and fine-tune it on task-specific datasets. Subsequently, we freeze the parameters of the text-to-motion DiT and only fine-tune the masked transformer along with newly incorporated cross-attention layers. The learning rate and training schedule remain consistent with the text-to-motion task. For all three tasks, we employ exponential moving average (EMA) to update model parameters, ensuring training stability. During inference, the classifier-free guidance (CFG) scale is set to 4.5 for text-to-motion, while other tasks use a CFG scale of 6.5. Method FIDH ↓FIDB ↓ Div ↑ Edge (Tseng et al., 2023) 93.430 108.507 13.471 Finedance (Li et al., 2023a) 10.747 72.229 13.813 MCM (Ling et al., 2023) 4.717 78.577 14.890 MotionCraft (Bian et al., 2025) 3.858 76.248 16.667 Ours 3.632 71.930 15.871 Table 3: Results of music-based motion generation on the FineDance (Li et al., 2023a), following the unified SMPL-X representation (Bian et al., 2025). Datasets and Metrics. For the evaluation, we utilize three datasets: HumanML3D (Guo et al., 2022a) for textto-motion, BEAT2 (Liu et al., 2024a) for speech-to-gesture, and FineDance (Li et al., 2023a) for music-to-dance, all following the unified SMPL-X representation (Bian et al., 2025). Regarding the metrics, we use FID, R-Precision, and MM-Dist for text-based motion generation, use FIDH, FIDB, Face L2 loss, Beat Alignment Score, Diversity for speech-based motion generation, and use FIDH, FIDB, Diversity for music-based motion generation, respectively. For the detailed explanation of the datasets and metrics, please refer to the appendix. 8 4.2 EVALUATION Text-based motion generation. We conduct an evaluation of our model against prior text-to-motion approaches, including both discrete-domain and continuous-domain methods. As summarized in Table 1, our method clearly surpasses prior techniques on the HumanML subset of Motion-X dataset. Remarkably, our model achieves improvements of 19.3%, 13.5%, and 11.7% in R-Precision for Top-1, 2, 3, respectively. Additionally, we enhance the FID score by 75.2% on this dataset, underscoring the exceptional fidelity of our generated motions. The qualitative results in Figure 4 further support these findings, showing that our approach yields whole-body motions that align closely with the input text. Speech-based motion generation. To assess the speech-driven motion generation, we compare to previous speech-to-gesture methods. Our results, summarized in Table 2, reveal that our method achieves good quality and diversity in both hand and body motion generation and excels in aligning with the rhythm of first-person speech. This demonstrates the effectiveness of our framework in motion generation when encompassing different modal signals. However, our method performs worse than single-modal methods. As discussed in (Bian et al., 2025), this is attributed to the random or average expressions in the Motion-X dataset, which confuses the speech-to-gesture training. Music-based motion generation. We further evaluate our framework on the music-to-dance task. As shown in Table 3, our method achieves slightly improved performance over previous approaches, particularly in generating hand motions and body movements. 4.3 ABLATION STUDY Causal Attention. To verify the efficacy of the proposed framework, we initially establish a baseline model following the visual MAR setup (Li et al., 2024b), i.e, using the random pseudo reordering for the batched masked prediction, through the bidirectional attention computing. From the results presented in Table 4, we see that the performance of this baseline in the motion area is limited. We attribute this to the difference between human motions and visual images, e.g., the human motion is in a strong temporal sequential structure, in which case a causal attention makes more sense. Therefore, changing the baseline to sequential masked prediction with causal attention improves the performance. DiT. In order to evaluate how the DiTs contribute to the motion quality, we further replace the MLPs in the baseline model with our DiTs. As shown in Table 4, the model generates superior motions with DiTs compared to MLPs, especially in the context of multimodal motion generation. This reveals the superior potential of DiTs in generating motion with complex multimodal contexts. Gated Linear Mechanism. To assess the function of the gated linear mechanism, we ablate this and report the results in Table 4, which indicates that the model outputs motions of higher quality with the inclusion of this mechanism. In the experiments, we observed that the output motions sometimes contain more detailed actions with this mechanism in place. RMSNorm. We also conduct an ablation study to evaluate the function of the RMSNorm and report the results in Table 4. From the results, we see that the model produces better motions when utilizing RMSNorm. In experiments, we found that this module makes the output more stable. Cross Attention. In the baseline model, the multimodal signals are injected with only the AdaLN structure. We then add the cross attention module and observe a significant improvement in multimodal motion generation, as depicted in Table 4. 5 CONCLUSION This paper proposes a new omni motion framework for multimodal whole-body human motion generation. Within this one framework, text, speech, and music signals are all encompassed through AdaLN and cross-attention. The motion generation process is modeled by a continuous masked autoregressive transformer with causal attention, as well as a DiT structure. Extensive experiments have been conducted to verify the efficacy of the proposed framework in different-modality tasks. Limitations. Due to the restricted dataset, the naturalness and generalizability of the motion generation model are still limited, especially in speech and music-driven motion generation. 9 Text-based Speech-based Setting R Precision FID ↓ FIDH ↓ FIDB ↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ Baseline 0.578±0.007 0.737±0.006 0.787±0.005 9.324±0.120 37.732 40.419 + Causal Attention 0.589±0.006 0.740±0.004 0.798±0.006 9.031±0.095 36.815 38.674 + DiT 0.688±0.005 0.828±0.007 0.851±0.004 5.562±0.085 19.743 28.228 + Gated Linear 0.692±0.004 0.834±0.005 0.877±0.006 4.844±0.078 19.427 28.156 + RMSNorm 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 18.329 27.741 + Cross Attention 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 17.651 25.823 Table 4: The ablation study of different model components. REFERENCES Chaitanya Ahuja and Louis-Philippe Morency. Language2pose: Natural language grounded pose forecasting. In 2019 International conference on 3D vision (3DV), pp. 719-728. IEEE, 2019. Chaitanya Ahuja, Dong Won Lee, and Louis-Philippe Morency. Low-resource adaptation for personalized co-speech gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20566-20576, 2022. Simon Alexanderson, Rajmund Nagy, Jonas Beskow, and Gustav Eje Henter. Listen, denoise, action! audiodriven motion synthesis with diffusion models. ACM Transactions on Graphics, 42(4):1-20, 2023. Nikos Athanasiou, Mathis Petrovich, Michael J Black, and Gül Varol. Teach: Temporal action composition for 3d humans. In 2022 International Conference on 3D Vision (3DV), pp. 414-423. IEEE, 2022. Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, and Dinesh Manocha. Text2gestures: A transformer-based network for generating emotive body gestures for virtual agents. In 2021 IEEE virtual reality and 3D user interfaces (VR), pp. 1-10. IEEE, 2021. Yuxuan Bian, Ailing Zeng, Xuan Ju, Xian Liu, Zhaoyang Zhang, Wei Liu, and Qiang Xu. Motioncraft: Crafting whole-body motion with plug-and-play multimodal controls. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pp. 1880-1888, 2025. Haoye Cai, Chunyan Bai, Yu-Wing Tai, and Chi-Keung Tang. Deep video generation, prediction and completion of human action sequences. In Proceedings of the European conference on computer vision (ECCV), pp. 366-382, 2018. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11315-11325, 2022. Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint , 2023. Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, and Kun Zhou. Enabling synergistic full-body control in prompt-based co-speech motion generation. In Proceedings of the 32nd ACM International Conference on Multimedia, pp. 6774-6783, 2024a. Junming Chen, Yunfei Liu, Jianan Wang, Ailing Zeng, Yu Li, and Qifeng Chen. Diffsheg: A diffusion-based approach for real-time speech-driven holistic 3d expression and gesture generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7352-7361, 2024b. Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. Mofusion: A framework for denoising-diffusion-based motion synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Bidirectional encoder representations from transformers. arXiv preprint , 15, 2018. 10 Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5152-5161, 2022a. Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In European Conference on Computer Vision, pp. 580-597. Springer, 2022b. Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2024. Yunhui Guo, Chaofeng Wang, Stella X Yu, Frank McKenna, and Kincho H Law. Adaln: a vision transformer for multidomain learning and predisaster building information extraction from images. Journal of Computing in Civil Engineering, 36(5):04022024, 2022c. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016. Xu He, Qiaochu Huang, Zhensong Zhang, Zhiwei Lin, Zhiyong Wu, Sicheng Yang, Minglei Li, Zhiyi Chen, Songcen Xu, and Xiaofei Wu. Co-speech gesture video generation via motion-decoupled diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2263-2273, 2024. Zikai Huang, Xuemiao Xu, Cheng Xu, Huaidong Zhang, Chenxi Zheng, Jing Qin, and Shengfeng He. Beat-it: Beat-synchronized multi-condition 3d dance generation. In European conference on computer vision, pp. 273-290. Springer, 2024. Jihoon Kim, Jiseob Kim, and Sungjoon Choi. Flame: Free-form language-based motion synthesis & editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 8255-8263, 2023. Hanyang Kong, Kehong Gong, Dongze Lian, Michael Bi Mi, and Xinchao Wang. Priority-centric human motion generation in discrete latent space. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 14806-14816, 2023. Ronghui Li, Junfan Zhao, Yachao Zhang, Mingyang Su, Zeping Ren, Han Zhang, Yansong Tang, and Xiu Li. Finedance: A fine-grained choreography dataset for 3d full body dance generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10234-10243, 2023a. Ronghui Li, Hongwen Zhang, Yachao Zhang, Yuxiang Zhang, Youliang Zhang, Jie Guo, Yan Zhang, Xiu Li, and Yebin Liu. Lodge++: High-quality and long dance generation with vivid choreography patterns. arXiv preprint , 2024a. Ruilong Li, Shan Yang, David A Ross, and Angjoo Kanazawa. Ai choreographer: Music conditioned 3d dance generation with aist++. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 13401-13412, 2021. Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37:56424-56445, 2024b. Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z Li, and Laurence T Yang. General point model with autoencoding and autoregressive. arXiv preprint , 2023b. Zhe Li, Laurence T Yang, Xin Nie, BoCheng Ren, and Xianjun Deng. Enhancing sentence representation with visually-supervised multimodal pre-training. In Proceedings of the 31st ACM International Conference on Multimedia, pp. 5686-5695, 2023c. Zhe Li, Yisheng He, Lei Zhong, Weichao Shen, Qi Zuo, Lingteng Qiu, Zilong Dong, Laurence Tianruo Yang, and Weihao Yuan. Mulsmo: Multimodal stylized motion generation by bidirectional control flow. arXiv preprint , 2024c. Zhe Li, Laurence T Yang, Bocheng Ren, Xin Nie, Zhangyang Gao, Cheng Tan, and Stan Z Li. Mlip: Enhancing medical visual representation with divergence encoder and knowledge-guided contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11704-11714, 2024d. Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong Gu, Weichao Shen, Yuan Dong, Zilong Dong, and Laurence T Yang. Lamp: Language-motion pretraining for motion generation, retrieval, and captioning. arXiv preprint , 2024e. 11 Angela S Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qixing Huang, and Raymond J Mooney. Generating animated videos of human activities from natural language descriptions. Learning, 1(2018):1, 2018. Jing Lin, Ailing Zeng, Shunlin Lu, Yuanhao Cai, Ruimao Zhang, Haoqian Wang, and Lei Zhang. Motion-x: A large-scale 3d expressive whole-body human motion dataset. Advances in Neural Information Processing Systems, 36:25268-25280, 2023a. Junfan Lin, Jianlong Chang, Lingbo Liu, Guanbin Li, Liang Lin, Qi Tian, and Chang-wen Chen. Being comes from not-being: Open-vocabulary text-to-motion generation with wordless training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 23222-23231, 2023b. Zeyu Ling, Bo Han, Yongkang Wong, Mohan Kangkanhalli, and Weidong Geng. Mcm: Multi-condition motion synthesis framework for multi-scenario. arXiv preprint , 2023. Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J Black. Emage: Towards unified holistic co-speech gesture generation via expressive masked audio gesture modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1144-1154, 2024a. Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, and Changxing Ding. Towards variable and coordinated holistic co-speech motion generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1566-1576, 2024b. Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. Smpl: A skinned multi-person linear model. In Seminal Graphics Papers: Pushing the Boundaries, Volume 2, pp. 851-866. 2023. Yunhong Lou, Linchao Zhu, Yaxiong Wang, Xiaohan Wang, and Yi Yang. Diversemotion: Towards diverse human motion generation via discrete diffusion. arXiv preprint , 2023. Mingshuang Luo, Ruibing Hou, Zhuo Li, Hong Chang, Zimo Liu, Yaowei Wang, and Shiguang Shan. M3gpt: An advanced multimodal, multitask framework for motion comprehension and generation. Advances in Neural Information Processing Systems, 37:28051-28077, 2024. Zichong Meng, Yiming Xie, Xiaogang Peng, Zeyu Han, and Huaizu Jiang. Rethinking diffusion for text-driven human motion generation. arXiv preprint , 2024. Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10975-10985, 2019. Mathis Petrovich, Michael J Black, and Gül Varol. Action-conditioned 3d human motion synthesis with transformer vae. In Proceedings of the International Conference on Computer Vision, 2021. Mathis Petrovich, Michael J Black, and Gül Varol. Temos: Generating diverse human motions from textual descriptions. In Proceedings of the European Conference on Computer Vision, 2022. Ekkasit Pinyoanuntapong, Pu Wang, Minwoo Lee, and Chen Chen. Mmm: Generative masked motion model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1546-1555, 2024. Matthias Plappert, Christian Mandery, and Tamim Asfour. Learning a bidirectional mapping between human whole-body motion and natural language using deep recurrent neural networks. Robotics and Autonomous Systems, 109:13-26, 2018. Jose Ribeiro-Gomes, Tianhui Cai, Zoltán A Milacski, Chen Wu, Aayush Prakash, Shingo Takagi, Amaury Aubel, Daeil Kim, Alexandre Bernardino, and Fernando De La Torre. Motiongpt: Human motion synthesis with improved diversity and realism via gpt-3 prompting. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 5070-5080, 2024. Li Siyao, Weijiang Yu, Tianpei Gu, Chunze Lin, Quan Wang, Chen Qian, Chen Change Loy, and Ziwei Liu. Bailando: 3d dance generation by actor-critic gpt with choreographic memory. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11050-11059, 2022. Shuai Tan, Bin Ji, and Ye Pan. Flowvqtalker: High-quality emotional talking face generation through normalizing flow and quantization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 26317-26327, 2024. Guy Tevet, Brian Gordon, Amir Hertz, Amit H Bermano, and Daniel Cohen-Or. Motionclip: Exposing human motion generation to clip space. In Proceedings of the European Conference on Computer Vision, 2022. 12 Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In Proceedings of the International Conference on Learning Representations, 2023. Jonathan Tseng, Rodrigo Castellon, and Karen Liu. Edge: Editable dance generation from music. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 448-458, 2023. Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. Mocogan: Decomposing motion and content for video generation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1526-1535, 2018. Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in neural information processing systems, 30, 2017. Yin Wang, Zhiying Leng, Frederick WB Li, Shun-Cheng Wu, and Xiaohui Liang. Fg-t2m: Fine-grained text-driven human motion generation via diffusion model. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 22035-22044, 2023. Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, and Changyou Chen. Learning diverse stochastic human-action generators by learning smooth latent transitions. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 12281-12288, 2020. Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, and Trevor Darrell. Hierarchical style-based networks for motion synthesis. In Proceedings of the European Conference on Computer Vision, 2020. Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, and Long Xiao. Diffusestylegesture: Stylized audio-driven co-speech gesture generation with diffusion models. arXiv preprint , 2023. Hongwei Yi, Hualin Liang, Yifei Liu, Qiong Cao, Yandong Wen, Timo Bolkart, Dacheng Tao, and Michael J Black. Generating holistic 3d human motion from speech. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 469-480, 2023. Weihao Yuan, Yisheng He, Weichao Shen, Yuan Dong, Xiaodong Gu, Zilong Dong, Liefeng Bo, and Qixing Huang. Mogents: Motion generation based on spatial-temporal joint modeling. Advances in Neural Information Processing Systems, 37:130739-130763, 2024. Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. Physdiff: Physics-guided human motion diffusion model. In Proceedings of the International Conference on Computer Vision, 2023. Biao Zhang and Rico Sennrich. Root mean square layer normalization. Advances in neural information processing systems, 32, 2019. Canyu Zhang, Youbao Tang, Ning Zhang, Ruei-Sung Lin, Mei Han, Jing Xiao, and Song Wang. Bidirectional autoregessive diffusion model for dance generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 687-696, 2024a. Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14730-14740, 2023a. Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu. Remodiffuse: Retrieval-augmented motion diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 364-373, 2023b. Mingyuan Zhang, Huirong Li, Zhongang Cai, Jiawei Ren, Lei Yang, and Ziwei Liu. Finemogen: Fine-grained spatio-temporal motion generation and editing. Advances in Neural Information Processing Systems, 36: 13981-13992, 2023c. Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. PAMI, 2024b. Mingyuan Zhang, Daisheng Jin, Chenyang Gu, Fangzhou Hong, Zhongang Cai, Jingfang Huang, Chongzhi Zhang, Xinying Guo, Lei Yang, Ying He, et al. Large motion model for unified multi-modal motion generation. In European Conference on Computer Vision, pp. 397-421. Springer, 2024c. Yan Zhang, Michael J Black, and Siyu Tang. Perpetual motion: Generating unbounded human motion. arXiv preprint , 2020. 13 Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei Bai, Qi Chu, Nenghai Yu, and Wanli Ouyang. Motiongpt: Finetuned llms are general-purpose motion generators. arXiv preprint , 2023d. Zeyu Zhang, Akide Liu, Ian Reid, Richard Hartley, Bohan Zhuang, and Hao Tang. Motion mamba: Efficient and long sequence motion generation. In European Conference on Computer Vision, pp. 265-282. Springer, 2024d. Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Rui Zhao, Biao Wu, Zirui Song, Bohan Zhuang, Ian Reid, and Richard Hartley. Motion anything: Any to motion generation. arXiv preprint , 2025. Chongyang Zhong, Lei Hu, Zihao Zhang, and Shihong Xia. Attt2m: Text-driven human motion generation with multi-perspective attention mechanism. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 509-519, 2023. Zixiang Zhou and Baoyuan Wang. Ude: A unified driving engine for human motion generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5632-5641, 2023. Zixiang Zhou, Yu Wan, and Baoyuan Wang. A unified framework for multimodal, multi-part human motion synthesis. arXiv preprint , 2023. Zixiang Zhou, Yu Wan, and Baoyuan Wang. Avatargpt: All-in-one framework for motion understanding planning generation and beyond. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1357-1366, 2024. Lingting Zhu, Xian Liu, Xuanyu Liu, Rui Qian, Ziwei Liu, and Lequan Yu. Taming diffusion models for audio-driven co-speech gesture generation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10544-10553, 2023. Wenlin Zhuang, Congyi Wang, Jinxiang Chai, Yangang Wang, Ming Shao, and Siyu Xia. Music2dance: Dancenet for music-driven dance generation. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 18(2):1-21, 2022. Qiran Zou, Shangyuan Yuan, Shian Du, Yu Wang, Chang Liu, Yi Xu, Jie Chen, and Xiangyang Ji. Parco: Part-coordinating text-to-motion synthesis. In European Conference on Computer Vision, pp. 126-143. Springer, 2024. 14 A APPENDIX A.1 TRAINING DETAILS Encoding of Speech and Music Our speech and music encoders are designed to extract temporally aligned, high-level features from raw audio signals for effective speech-to-gesture and music-todance generation. The architecture builds upon a multi-layer 1D convolutional network with strided convolutions and leaky ReLU activations. Each convolutional block consists of a series of a block unit that progressively downsample the input waveform while increasing feature dimensionality. The input audio sequence is first processed through multiple stages of temporal aggregation and non-linear transformation, resulting in a sequence of compact and expressive latent representations, whereas the input music typically retains sufficient temporal structure and spectral richness in its raw form for effective motion synthesis. These latent codes capture prosodic, rhythmic, and semantic-like patterns in speech and music, which are then projected into the condition latent space of dimensionality. The final encoder output is transposed to align with the temporal structure expected by the diffusion model, enabling fine-grained cross-modal interaction between speech and motion sequences during generation. Training of AE We first pretrain a baseline autoencoder on the text-to-motion task. When finetuning it on the speech-to-gesture and music-to-dance tasks, the decoder fails to reconstruct valid motion sequences due to discrepancies in data distribution. However, fine-tuning the autoencoder using the reconstruction objective during the multi-modal training incurs high computational costs. Therefore, we independently fine-tune the baseline AE on each dataset using the reconstruction task before multi-modal generation, and employ the resulting models for downstream tasks. Motion Representation Following previous work Bian et al. (2025), we utilize SMPL-X formatted motion data with an input dimension of (frame length × 322). The parameter structure is organized as follows: Root orientation (0:3): controls global body rotation; body pose (3:66): Governs major body joint rotations; hand articulation (66:156): controls finger movements; jaw pose (156:159): manages mouth opening/closing; facial expression (159:209): drives emotional expressions; facial shape (209: 309): determines static facial structure; translation (309:312): controls global body position; betas (312: 322): represents static body shape parameters. And the maximum motion length is 196. The model's output maintains identical dimensionality (frame length × 322) to ensure full reconstruction capability. This comprehensive parameterization enables simultaneous control of body motion, facial animation, and global positioning within a unified framework. A.2 DATASETS. For text-based motion generation, we evaluate our method on the HumanML3D (Guo et al., 2022a) dataset, which consists of 14,616 high-quality human motions paired with 44,970 text descriptions. The original body-only SMPL (Loper et al., 2023) format of this dataset is extended to whole-body SMPL-X (Pavlakos et al., 2019) format in MotionCraft (Bian et al., 2025), which we follow in the experiments for evaluation. For speech-based motion generation, we evaluate on the BEAT2 dataset (Liu et al., 2024a), which collects 76 hours of data from 30 speakers, standardized into a mesh representation with paired audio and text lines. The motion of the unified SMPL-X format is also extracted (Bian et al., 2025) for multimodal evaluation. For music-based motion generation, the largest dataset FineDance (Li et al., 2023a) is utilized for evaluation. This dataset collects dances of 14.6 hours across 22 genres and provides detailed human motions using the SMPL-H format, which is then converted to the unified SMPL-X format and appended by text descriptions. To enable full-body, multimodal control over motion generation, we convert all datasets to the SMPL-X format. This involves filling in missing facial expressions in HumanML3D and FineDance using average expression coefficients from the training set, as well as transforming the SMPL-H Rot-6D representation in FineDance into axis-angle format via Gram-Schmidt orthogonalization. This conversion achieves better alignment with SMPL-X parameters and introduces minimal errors compared to the official body-retargeting method, while also offering improved computational efficiency. 15 Method R Precision FID ↓ MM Dist↓ Div → Top-1 ↑ Top-2 ↑ Top-3 ↑ GT 0.511±0.003 0.703±0.003 0.797±0.002 0.002±0.000 9.503±0.065 2.974±0.008 MDM(Tevet et al., 2023) 0.418±0.005 0.604±0.005 0.707±0.004 0.489±0.025 9.450±0.066 3.630±0.023 MotionDiffuse(Zhang et al., 2024b) 0.491±0.001 0.681±0.001 0.782±0.001 0.630±0.001 9.410±0.049 3.113±0.001 FineMoGen(Zhang et al., 2023c) 0.504±0.002 0.690±0.002 0.784±0.002 0.151±0.008 9.263±0.094 2.998±0.008 Motion-Verse(Zhang et al., 2024c) 0.496±0.002 0.685±0.002 0.785±0.002 0.415±0.002 9.176±0.074 3.087±0.012 MCM(Ling et al., 2023) 0.494±0.003 0.682±0.005 0.777±0.003 0.075±0.003 9.484±0.074 3.086±0.011 MotionCraft (Bian et al., 2025) 0.501±0.003 0.697±0.003 0.796±0.002 0.173±0.002 9.543±0.098 3.025±0.008 MARDM (Meng et al., 2024) 0.502±0.003 0.691±0.003 0.787±0.002 0.286±0.003 9.470±0.081 3.346±0.007 Ours 0.548±0.003 0.743±0.003 0.837±0.002 0.141±0.003 9.537±0.087 2.856±0.008 Table 5: Results of text-to-motion on the original HumanML3D benchmark. Method R Precision FID ↓ MM Dist↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ Ours 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 15.871±0.030 Ours-Finetuned 0.701±0.002 0.846±0.005 0.898±0.005 4.843±0.102 15.868±0.027 Table 6: Results of text-to-motion after fine-tuning. (On the HumanML3D subset of Motion-X dataset, following the unified SMPL-X representation.) To ensure consistency with MotionCraft (Bian et al., 2025), we utilize the pretrained motion encoder and text encode, enabling a unified evaluation of the SMPL-X motion representation across different modalities. For datasets that lack corresponding textual annotations-namely FineDance and BEAT2-we generate pseudo-captions such as "A dancer is performing a street dance in the Jazz style to the rhythm of the wildfire" and "A person is giving a speech, and the content is ...", respectively, to support cross-modal learning. A.3 METRICS Text-based Motion Generation To assess the quality of the motions generated based on texts compared to the true data, we utilize the Frechet Inception Distance (FID) to evaluate the distribution differences between the generated motions and the ground truth. Additionally, R-Precision is employed to determine how frequently the most relevant motions, identified as top-k closest matches, align with their respective captions within a batch of 32 samples. Lastly, Multi-Modal Distance (MM Dist) is employed to gauge the average Euclidean distance between motion representations and their corresponding textual features. Speech-based Motion Generation For evaluating the quality and diversity of the motions generated based on speech, we employ FIDH, FIDB, and Diversity metrics. FIDH measures the difference between hand motion distribution and the true gesture distribution, whereas FIDB assesses the divergence between the whole-body motion distributions. The Beat Alignment Score (Li et al., 2021) is used to measure the synchronization between motions and speech beats. To quantify the difference between generated expressions and actual expressions, we use the L2 Loss. Music-based Motion Generation Mirroring the approach used for speech-driven gesture generation, we apply FIDH, FIDB, and Diversity metrics to evaluate the quality and diversity of Method R Precision FID ↓ MM Dist↓ Top-1 ↑ Top-2 ↑ Top-3 ↑ GT 0.663±0.006 0.807±0.002 0.864±0.002 0.000±0.000 15.567±0.036 MotionCraft-Basic (Bian et al., 2025) 0.590±0.003 0.743±0.004 0.804±0.004 8.477±0.102 16.252±0.035 MotionCraft-Mix (Bian et al., 2025) 0.600±0.003 0.747±0.004 0.812±0.006 6.707±0.081 16.334±0.059 Ours-Basic 0.704±0.003 0.843±0.005 0.898±0.005 4.838±0.100 15.871±0.030 Ours-Mix 0.712±0.003 0.849±0.005 0.904±0.004 4.759±0.102 15.765±0.026 Table 7: Results of text-driven motion generation on the HumanML3D dataset following the mix training setup (Bian et al., 2025). 16 Method FIDH ↓ FIDB ↓ Face L2 Loss ↓ Beat Align Score ↑ Diversity ↑ MotionCraft-Basic (Bian et al., 2025) 18.486 27.023 10.097 8.098 10.334 MotionCraft-Mix (Bian et al., 2025) 12.882 25.187 8.906 8.226 12.595 Ours-Basic 17.651 25.923 9.883 8.377 14.703 Ours-Mix 12.201 25.644 8.947 8.430 15.003 Table 8: Results of speech-driven motion generation on the BEAT2 dataset (Liu et al., 2024a) following the mix training setup (Bian et al., 2025). A person is doing a speech: "One thing that scared me once was ..." A person is doing a speech: "One night I was out with my friends ..." A person is doing a speech: "Well yes I have experienced a paranormal ..." A person is doing a speech: "One time I was searching for my friends ..." Ours MotionCraft unfold in time axis Figure 5: The qualitative results of speech-driven motion generation. music-induced hand and whole-body movements. This approach ensures that the generated motions exhibit both high fidelity and variation. A.4 ADDITIONAL EXPERIMENT RESULTS More Results on Text-to-motion To provide a more comprehensive evaluation, we conduct additional comparisons on the original HumanML3D benchmark using the body-only H3D format, which contains redundant motion information. Here we mainly compare with the methods without VQ-VAE. As shown in Tab. 5, OmniMotion consistently outperforms these baselines in terms of textmotion alignment, motion quality, and diversity, demonstrating its superior generalization capability across different motion representations. Text-to-motion Evaluation after fine-tuning We conduct a comprehensive evaluation on the final model (after fine-tuning on both speech-to-gesture and music-to-dance datasets) and present the results below. Since textual conditioning participates throughout the entire training pipeline, our model does not suffer from catastrophic forgetting after fine-tuning. This confirms the robustness of our architecture's knowledge retention capabilities under different training paradigms. Evaluation of OmniMotion Variants Following the same strategy as MotionCraft (Bian et al., 2025), we train two variants of our model: OmniMotion-Base and OmniMotion-Mix. OmniMotionBase is a text-to-motion model pretrained solely on HumanML3D, while OmniMotion-Mix is trained on a combined dataset comprising HumanML3D, BEAT2, and FineDance to enable multimodal motion generation. Quantitative results on the text-to-motion task are summarized in Table 7. We further evaluate OmniMotion-Mix across all three modalities. For the speech-to-gesture and music-to-dance tasks, we fine-tune the model on the respective target datasets. The corresponding results are reported in Table 8 and Table 9, respectively. A.5 MORE VISUALIZATION We display more visual results of speech-driven and music-driven motion generation in Figure 5 and Figure 6, respectively. 17 A dancer is performing a Folk dance in the Dai style to the rhythm of the Xuanyue A dancer is performing a Classic dance in the ShenYun style to the rhythm of the Tanglanting song A dancer is performing a Street dance in the Popping style to the rhythm of the WeAreOne2017 song A dancer is performing a Street dance in the Urban style to the rhythm of the Redeye song unfold in time axis Ours MotionCraft Figure 6: The qualitative results of music-driven motion generation. Method FIDH ↓ FIDB ↓ Div ↑ MotionCraft-Basic (Bian et al., 2025) 3.858 76.248 16.667 MotionCraft-Mix (Bian et al., 2025) 2.849 67.159 18.483 Ours-Basic 3.632 71.930 15.871 Ours-Mix 2.781 64.380 17.605 Table 9: Results of music-driven motion generation on the FineDance dataset (Li et al., 2023a) following the mix training setup (Bian et al., 2025). 18
2510.14951
A universal description of Mott insulators: Characterizing quantum phases beyond broken symmetries Matheus de Sousa,1, ∗Zhiyu Fan,1, ∗and Wei Ku (顧威)1, † 1School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China (Dated: October 17, 2025) Using Mott insulators as a prototypical example, we demonstrate a dynamics-based characterization of quantum phases of matter through a general N-body renormalization group framework. The essential “Mott-ness” turns out to be characterized by a change of size-scaling of the effective intra- momentum repulsions between long-lived emergent “eigen-particles” that encodes the dynamics of two-body bound states in the high-energy sector. This directly offers a universal characterization at long space-time scale for the corresponding class of Mott insulators through a uniform single occupation of all momenta, and otherwise Mott metals. This universal description naturally paves the way to topological Mott insulators and is straightforward to extend to bosonic Mott systems. More generally, this demonstration exemplifies a generic paradigm of characterizing quantum phases of matter through their distinct dynamics beyond broken symmetries. Introduction — Mott insulators represent a paradig- matic example of strongly correlated electron systems, with insulating behavior (unexpected from standard band filling considerations) emerges from strong electron- electron interactions [1–3]. Such interaction-induced insu- lators have been observed and intensively studied among a wide variety of material families, from transition metal ox- ides [4–6], rare-earth nickelates [7–9], layered Ruthenates like Ca2RuO4 [10], to organic charge-transfer salts such as (BEDT −TTF)2X [11–13], all displaying interesting magnetic [14, 15], orbital [16, 17], and lattice [18, 19] prop- erties. Particularly in vanadium oxide V2O3, a paradig- matic metal-insulator transition can be realized through modulation of temperature, and pressure, making it a prototypical system for studying Mott physics [20–23]. Furthermore, upon introduction of additional carriers through chemical doping, most of the doped Mott insula- tor such as Cuprates La2CuO4 [24–27], Nickelates [28, 29], and Iridates [30, 31] host a even rich variety of novel phys- ical behaviors beyond standard lore, including strange metal behavior [32], and superconductivity [25, 26, 33] in association with a pseudogap phase [34–37] that does not even host a complete Fermi surface. It is therefore of utmost importance to understand not only the static structure but also the slow dynamics of Mott insulators and their (self-)doped systems. Even with intensive investigations on simple models [38– 46] and real materials [47–49], currently the standard understanding of the Mott insulators [50] is limited to the ground state description resulting from perturbing a collection of isolated atoms via kinetic coupling between them. Such a position space description of the ground state, while physically intuitive, lacks direct access to the dynamics of long space-time scale relevant to dynamical transport and other response properties, and thus leaves large space for on-going debate on the nature of the ∗These authors contribute equally † email: weiku@sjtu.edu.cn quantum [51] and thermal [50, 52] phase transition to metals. On the other hand, the state-of-the art numerical studies based on dynamical mean-field approximation [53] lead to pictures intimately tied to coherent quasi-particles of the Fermi liquid and thus limited in its applicability. The difficulty in obtaining a satisfactory understand- ing for Mott insulators and their (self-)doped metals is profoundly associated with the limitation of the standard framework [54–56] for thermodynamical and quantum phases. This framework, developed by Landau [54] and extended by Kadanoff [55] and Wilson [56], targets the most relevant information, namely the order parameter (and their correlation) of systems with spontaneously bro- ken symmetries. It has thus been extremely successful as the “universal” paradigm for phases with broken charge, magnetic, or orbital symmetries. Yet, its usefulness and applicability rapidly diminish for phases without a spon- taneous broken symmetry, such as the Mott insulating phase to be demonstrated here. Particularly, to date no knowledge is available on a universal “fixed point” of Mott insulators or around it the “relevant” dynamics at long space-time scales. A more general framework for describ- ing quantum phases of matter beyond broken symmetries is therefore of utmost importance. Here, we address this long-standing problem through a general N-body renormalization group framework that offers a generic dynamics-based characterization of quan- tum phases of matter beyond broken symmetries. The essential “Mott-ness” turns out to be characterized by a change of size-scaling of the effective intra-momentum repulsions between long-lived emergent “eigen-particles” that encodes the dynamics of two-body bound states in the high-energy sector. This directly offers a universal characterization of long space-time scale for the corre- sponding class of Mott insulators through a uniform single occupation of all momenta, and otherwise Mott metals. This universal description naturally enables the possibility of topological Mott insulators and is straightforward to extend to bosonic Mott systems. More importantly, the demonstrated general framework offers a generic paradigm arXiv:2510.14951v1 [cond-mat.str-el] 16 Oct 2025 2 (c) Mott Semimetal 𝐤 (d) Mott Insulator 𝐤 (b) Band Metal 𝐤 (a) Band Insulator 0 1 2 𝑛𝐤 𝐤 ǁ𝜖𝑘 (e) Mott Metal 𝐤 FIG. 1. Schematics of eigen-particle occupation in several quantum phases. (a) Double occupation (in red) of all momenta in band insulators. (b) Double occupation within Fermi wavevector (k < kF ) in band metals. (c) Compensating number of zero and double occupations in Mott-semimetals. (d) Single occupation (in blue) of all momenta in Mott insulators. (e) Double occupation in few momenta in Mott-metals. For easier visualization, the dispersions of eigen-particle energy ⟨˜ϵk⟩in (c) and (e) are for a reference Mott insulating (excited) state of the system, instead of the metallic ground state. to characterize quantum phases of matter through their distinct dynamics beyond broken symmetries. Continuous Particle-Dressing as a General N-Body Renormalization Group Flow — We start by acknowledg- ing that quantum phases of matter must have qualitatively distinct dynamics of long space-time scales. For any given Hamiltonian H[{c† knσ}] of a lattice translational symmet- ric system, represented in creation operator c† knσ of mo- mentum ℏkk, band index n, and spin σ, such emergent dy- namics of long space-time scales can be accessed through systematically absorbing N-body dynamical processes of shorter space-time scales into the internal structure of the dressed particles ˜c† knσ via a unitary transformation [57– 59], ˜c† knσ ≡U†c† knσU, (1) such that the corresponding off-diagonal processes are smoothly reduced in the emergent Hamiltonian, ˜H[{˜c† knσ}] ≡H[{c† knσ}] = UH[{˜c† knσ}]U†, (2) containing only the more relevant slower dynamics. Cor- respondingly, the product states, |Ψ⟩= N Y j ˜c† kjnjσj|0⟩= ˜c† k1n1σ1˜c† k2n2σ2 . . . ˜c† kNnNσN |0⟩, (3) of the dressed particles, with |0⟩denoting the true vacuum with no particle, systematically incorporate the quantum state thermalization [60, 61] up to the finite time scale of the remaining dynamics in ˜H. Upon reaching a diagonal ˜H, the fully dressed particles ˜c† knσ become the infinitely long-lived “eigen-particles” [57, 62–64] that give direct access to the slowest dynamics of the system. Conveniently, since physical dynamics of all space-time scales are fully absorbed, eigen-particles’ (1-to-N)-body occupations are constants of motion [65]. Consistently, having fully incorporated the quantum state thermalization, their product states, |Ψ⟩, form a complete set of N-body eigen-states [66]. This offers a simplest description of many-body eigenstates through integer oc- cupation of eigen-particles. If the above dressing is conducted in a continuous man- ner with U only slightly deviating from 1, for example via the Wegner-flow [58], the smooth evolution of the dressed particles ˜c† knσ and their dynamics toward a fully diagonal ˜H[{˜c† knσ}] can then be perceived as a generalized renormalization group (RG) flow [67] of the full N-body dynamics toward the longest space-time scale of the sys- tem: First, for systems with a fixed particle number N, all possible terms in ˜H together naturally form a closed group during the flow. Second, the final diagonal ˜H corre- sponds to a self-similar “fixed point”, at which further flow for diagonalization returns back to itself. Third, since dynamics of all space-time scales are absorbed in the dressing of the resulting eigen-particles, ˜c† knσ, ˜H contains only the most “relevant” dynamics of longest space-time scales. Therefore, together with the particle number and the physical Hilbert space, the final structure of “relevant” interactions in ˜H (those surviving system size scaling) offers a complete N-body characterization of all possible quantum phases of matter. Band Insulator and Band Metal — As a simple example, Fig. 1(a) illustrates a band insulating state, |ΨBI⟩= Y kσ ˜c† kσ|0⟩, (4) in the eigen-particle representation, in which all momen- tum k of the top valence band are doubly occupied by eigen-particles. (Without loss of generality, we consider a single valence band for maximal simplicity and clarity.) In turn, their current fluctuation [62], ˜Jq = 1 V X kσ ˜c† k+q,σ˜vk+ q 2 ˜ckσ , (5) of wavenumber q is completely disabled by Pauli exclusion principle in the low-energy sector. Here, the diagonal velocity operator of the eigen-particles, ˜vk = 1 ℏ∇k˜ϵk, is defined through their diagonal eigen-particle energy operator ˜ϵkσ through ˜c† kσ˜ϵkσ ≡[ ˜H, ˜c† kσ] [62]. 3 In contrast, in a band metal state [c.f. Fig. 1(b)], |ΨBM⟩= Y k<kF ,σ ˜c† kσ|0⟩, (6) eigen-particles only doubly occupied the momenta within the Fermi surface bound by kF. It thus allows long- wavelength current fluctuation ˜Jq→0 with nearly zero energy. As an interesting side note, the abrupt cutoff of the eigen-particle occupation from 2 to 0 at kF naturally explains the same for bare particles (highly advocated as the “quasi-particle weight” z-factor) as a signature of Fermi liquid. Indeed, under quantitative dressing of Pauli- principle-restricted many-body fluctuation in Eq. 1, eigen- particles’ preservation of bare-particles’ momentum would dictates inheritance of their qualitative discontinuity in occupation by bare particles. Mott-ness from Strong Local Repulsion — To reveal the relevant interaction for Mott-ness at long space-time scales, let’s consider the prototypical Hubbard model, H = t X ⟨ii′⟩σ c† iσci′σ + U X i ni↑ni↓, (7) in which the Mott-ness originates from the strong local repulsion of strength U, when it dominates over the near- est neighboring kinetic processes of strength t ≪U. For the purpose of revealing the key relevant interaction for Mott-ness, we now proceed to find the canonical transfor- mation that diagonalizes just the one-body and two-body terms in such strong interacting limit. First, decoupling the low-energy sector from the U- scaled high-energy one, up to the first order of t/U, via U ≡eA with, (c.f. Appendix D) A = t U X ⟨ii′⟩σ ni¯σc† iσci′σ −h.c., (8) the Hamiltonian represented in the dressed particles, ˆH = ˆH(H) + ˆH(L) + ˆH(n>2), explicitly separates the potential and kinetic motion of the “doublon”, ˆd† i ≡ˆc† i↑ˆc† i↓, ˆH(H) = (U + J) X i ˆd† i ˆdi + J 2 X ⟨ii′⟩ ˆd† i ˆdi′, (9) (J ∼4t2/U) in the high-energy sector, from the dynamics of low-energy dressed electrons in a constrained double- occupation-free space, ˆH(L) [68–72]. Here ˆH(n>2) de- scribes the beyond-two-body dynamics that emerges from the transformation. (Beyond the perturbation regime, such decoupling can still be achieved numerically.) Next, let’s bring the one- and two-body terms of ˆH into a full diagonal form via another transformation, ˆU = exp ( ˆA), that turns ˆc† into the fully dressed eigen-particles, ˜c† i ≡ˆU†ˆc† i ˆU at the two-body level. Explicitly, ˜H(H) = ˆU ˆH(H) ˆU† = X i Ei ˜d† i ˜di, (10) where Ei = U + J(1 −P3 α=1 cos (qi · eα)) and ˜d† i ≡ˆU† ˆd† i ˆU = 1 √ V X i′ ˆd† i′eiqi·ri′ (11) = ˆU†ˆc† i↑ˆc† i↓ˆU = ˆU†ˆc† i↑ˆU ˆU†ˆc† i↓ˆU = ˜c† i↑˜c† i↓, (12) with V denoting the system size (the number of sites) and eα the unit vectors of the 3-dimensional space. As expected from the translational symmetry of the system, Eq. 11 shows that at long space-time scale the eigen-doublons, ˜d† i, have well-defined momentum ℏqi. The same applies to the constrained particles in the low-energy sector upon diagonalizing ˆH(L), such that the low-energy contribution (and the one-body component) of ˜c† iσ also has well-defined momentum ℏki (c.f. Appendix E). Therefore, we will substitute i below by the more familiar notation q and k for indexing the fully dressed two- and one-body eigen-particles, ˜d† i and ˜c† iσ, to better remind the readers that for eigen-particles each q or k indexes a momentum qq=i or kk=i that is one-on-one mapped to the a position ri of ˆd† i, ˆc† iσ, and c† iσ according to the gauge choice of ˆU in Eq. 11. Naturally, a consistent periodic boundary condition for ˜d† i naturally gives qi = 2ki. Equation 10 and 12 reveals that at long space-time scale, such local repulsion-driven Mott-ness is characterized by the emergence of a “relevant” intra-momentum repulsion, ˜H(H) = X q Eq ˜d† q ˜dq = X k Ek˜c† k↑˜c† k↓˜ck↓˜ck↑, (13) that scales as V 0 against the system size V . Recall that by default (such as in the t/U ≫1 band metal limit) two-body interactions between particles of well-defined momenta should scale as V −1, to reflect the probability for two particles to be in proximity to experience their interaction. Therefore, a change of scaling of an inter- action like this must reflect a qualitative change in the underlying correlation. (Such a change of scaling in the relevant couplings is likely common to all quantum phases as a direct reflection to their characteristic correlations.) For the specific case here, this change of scaling results from the formation of doublons in the high-energy sector. Indeed, the dressed particles in the doublon , ˆc† i↑ˆc† i↓, are spatially bound to each other, such that the probability for them to experience their mutual interaction is no longer sensitive to the system size, thus the V 0-scaling. Experts in Mott physics might find surprising such emergent intra-momentum repulsion in the long space- time scale, given the extreme spatial locality of the original strong repulsion. Note, however, that microscopically Eq. 13 only describes the potential and kinetic energy of doublons, rather than two-body interaction between fermions, since it does not acts on the low-energy particles at different positions. The intra-momentum form simply follows eigen-particles’ encoding eigen-doublons, ˜d† q, as equal-momentum pairs, ˜c† k↑˜c† k↓|k=q,(qq=2kk), in which two high-energy electrons, ˆc† i↑ˆc† i↓, are spatially bound during 4 propagation (convolution of two momenta of the “hatted” particles). In contrast, pairs of unbound propagating electrons are encoded as ˜c† kσ˜c† k′̸=k,σ. Naturally, the above consideration on size-scaling would apply to not just the two-body interactions, but all (n>1)- body interactions involving doublons. Upon fully diag- onalizing the remaining (n>2)-body interactions in ˜H (which preserves the already diagonal one- and two-body structures [Appendix C]), the relevant intra-momentum interactions for Mott-ness takes the general form, ˜HIM ≡ X k ˜c† k↑˜c† k↓˜Uk˜ck↓˜ck↑, (14) where ˜Uk, defined through ˜Uk˜c† k↑˜c† k↓≡[ ˜H, ˜c† k↑˜c† k↓], de- notes a diagonal operator [up to (N−2)-body] for the eigen-doublon energy. While the above illustration employs the prototypical Hubbard model, the revealed characteristics of the “stable fixed point” for Mott-ness (emergence of high-energy two- body bound states and the associated V 0-scaled ˜HIM) represent a universal class of Mott systems. Analogous to universality in phase transitions, the same relevant ˜HIM can emerge in long space time scale among a wide variety of Mott systems with distinct rapid dynamics in H. Mott Insulator — Having identified the above fixed point (relevant interactions) for Mott-ness in long space- time scale, it is straightforward to find a universal de- scription for this class of Mott insulators. Specifically, as illustrated in Fig. 1(d), states with single occupation of all momenta by eigen-particles of arbitrary spin σk, |ΨMI; {σk}⟩= Y k ˜c† k,σk |0⟩, (15) must be insulating under a V 0-scaled ˜HIM, since long- wavelength current fluctuations ˜Jq→0 of them would un- avoidably results in double occupation of momenta and thus require a finite energy of scale ⟨˜Uk⟩. (Again, consider a one-band system for simplicity and clarity.) Intuitively, the remaining spin degree of freedom allows 2V number of insulating states, just as expected from the standard positional space description. Interesting, this universal “infrared” description unifies this representative class of Mott insulators with band insu- lators at the long space-time scale. Under the constraints ˜c† k↑˜c† k↓= 0 of the low-energy Hilbert space, eigen-particles are stuck uniformly in momentum space as in the latter (by ˜c† kσ˜c† kσ = 0). Similarly, in both types of insulators, the uniform occupation in momentum trivially eliminates the ϵk-scaled kinetic energy, P k ϵk = 0, from the system energy, as intuitively expected for insulators. Mott Metal — Given the insensitivity of the general RG flow of ˜H to the system’s particle number, the identified fixed point for Mott-ness above also directly defines a gen- eral class of Mott metals. Analogous to band metals from doping band insulators (both sharing the same relevant ˜H), upon introducing extra electrons or holes to Mott TABLE I. Comparison between band metal, Mott semimetal, Mott insulator, and Mott metal, on system-size scaling of ˜Uk, relative strength of dressed/bare one-body bandwidth ˜ W/W, itinerant carrier density ρit, Fermi surface (FS), and low-temperature entropy. (See text.) Band Metal Mott Semimetal Mott Insulator Mott Metal ⟨˜Uk⟩∼V −1 ⟨˜Uk⟩∼V 0 ⟨˜Uk⟩∼V 0 ⟨˜Uk⟩∼V 0 ˜ W ∼W ˜ W ≳˜Uk ˜ W < ˜Uk ˜ W < ˜Uk ρit ∼1 ρit ≪1 ρit = 0 ρit ∼x ≪1 large FS small FS ? no FS small FS ? low entropy high entropy high entropy high entropy insulating states, as illustrated in Fig 1(e), the doubly occupied or unoccupied momenta open up small but fi- nite ˜Uk-free channels for current fluctuation ˜Jq of long space-time scale. Consistently, the ϵkσ-scaled one-body energies no longer vanishes in the system energy. Nonetheless, owing to the Mott-ness dictated by the same relevant ˜HIM, Mott metals retain most physical properties of Mott insulators, such as strong charge cor- relation and large spin entropy at low temperature (c.f. Tab. I). Particularly, the low-energy carrier density for ˜Jq→0 correspond to the low doping level x, rather than the total density, 1 −x, since most eigen-particles still cannot respond without costing extra ˜HIM. These generic properties have been observed in numerous studies [38–42] displaying unusual transport [32] and spectroscopic [34– 36] features distinct from the band metals. Mott Semimetal — Mott metals can also arrive from repopulating eigen-particles in Mott insulators. As illus- trated in Fig. 1(c), when ˜Uk smoothly reduces to slightly below the dressed bandwidth ˜W ≡maxkσk′σ′{˜ϵkσ−˜ϵk′σ′}, relocating eigen-particles from momenta with the highest ˜ϵkσ to those with the lowest ˜ϵk′σ′ can lower the system energy more than the interaction energy cost. The ground state thus switches to a Mott semimetal state contain- ing compensating number of eigen-electrons and holes, in perfect analogy to standard band semimetals that are characterized as negative-gap semiconductors. On the Hatsugai-Kohmoto model — The above deriva- tion explains why recent studies [73–76] of Hatsugai- Kohmoto (HK) model recovered many of the characteris- tics of Mott insulating phase of the Hubbard model [77– 79],such as the interaction-driven metal-insulator tran- sition precisely at half-filling and the violation of Lut- tinger’s theorem [73, 75]. Indeed, Eq. 14 shows that ˜HIM emerges as the infrared limit of strong local repulsion. The two-body intra-momentum repulsion in the HK model should therefore be understood as a two-body reduced ˜HIM with ˜Uk replaced by ⟨˜Uk⟩. By including this most relevant V 0-scaled interaction, the HK model effective imposes by hand the key spatial charge correlation for Mott-ness. Note, however, that ˜HIM is only active on the doublons, ˆc† i↑ˆc† i↓, as a whole. Thus, expending ˜HIM as infinite-range interaction between unbound particles completely obscures the physics. 5 Discussion — The eigen-particle description presented here has several important implications on theoretical understanding of Mott insulators and metals. First, given that in Mott insulators the occupied momenta form a com- pact manifold, the heavily discussed symmetry-protected topological structures in non-interacting systems [80–82] may be directly inherited here. That is, in the eigen- particle description, most known categories of topological insulators can have direct counterparts as “topological Mott insulators”. Second, contrary to band metals, Mott metals gen- erally do not have a well-defined momentum cutoff of eigen-particles corresponding to the total density, nor do their spin configuration support a Pauli-principle in- duced systematic suppression of quantum fluctuations. Therefore, consistent with previous demonstrations [83– 86], generally there is no reason to expect the standard Luttinger sum rule. Third, since the essential Mott-ness is associated with the emergence of V 0-scaled intra-momentum repulsion, ˜HIM, in the long space-time scale, its extension to spin-less bosonic Mott systems is straightforward. The Mott insu- lating states for spin-less bosonic are those with uniform integer eigen-particle occupation of all the momenta. Finally, while this study focuses only on the Mott in- sulting phase, the general N-body RG framework can be applied to produce a universal description of any macroscopic quantum phase, as the resulting fixed point provides all relevant N-body dynamics that reflects the essential correlations. In essence, this dynamics-based characterization of quantum phases is more general than the broken symmetry-based Landau paradigm [54] and more informative than the state-of-the-art RG flow [56]. Conclusion — Using Mott insulators as a prototypical example, we demonstrate a dynamics-based characteri- zation of quantum phases of matter through a general N-body renormalization group framework. The essential “Mott-ness” turns out to be characterized by a change of size-scaling of the effective intra-momentum repulsions be- tween long-lived emergent “eigen-particles” that encodes the dynamics of two-body bound states in the high-energy sector. This directly offers a universal characterization of long space-time scale for the corresponding class of Mott insulators, through a uniform single occupation of all momenta, and otherwise Mott metals. This univer- sal description naturally paves ways to topological Mott insulators and is straightforward to extend to bosonic Mott systems. Finally, this demonstration exemplifies a generic paradigm of characterizing quantum phases of matter through their distinct dynamics beyond broken symmetries. ACKNOWLEDGMENTS Acknowledgment — We thank P. W. Phillips for useful discussion on the HK model and the symmetry prop- erty of the Fermi liquid. This work is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 12274287 and No. 12042507 and the Innovation Program for Quantum Science and Technology No. 2021ZD0301900. [1] J. H. de Boer and E. J. W. Verwey, Proceedings of the Physical Society 49, 59 (1937). [2] N. F. Mott and R. Peierls, Proceedings of the Physical Society 49, 72 (1937). [3] N. F. Mott, Proceedings of the Physical Society. Section A 62, 416 (1949). [4] M. Cyrot, Journal de Physique 33, 125 (1972). [5] K. Terakura, A. R. Williams, T. Oguchi, and J. Kübler, Physical Review Letters 52, 1830 (1984). [6] V. Anisimov, M. Korotin, and E. Kurmaev, Journal of Physics: Condensed Matter 2, 3973 (1990). [7] J. B. Torrance, P. Lacorre, A. I. Nazzal, E. J. Ansaldo, and C. Niedermayer, Phys. Rev. B 45, 8209 (1992). [8] M. L. Medarde, Journal of Physics: Condensed Matter 9, 1679 (1997). [9] S. Catalano, M. Gibert, J. Fowlie, J. Íñiguez, J.-M. Triscone, and J. Kreisel, Reports on Progress in Physics 81, 046501 (2018). [10] S. Nakatsuji, S.-i. Ikeda, and Y. Maeno, Journal of the Physical Society of Japan 66, 1868–1871 (1997). [11] P. Limelette, P. Wzietek, S. Florens, A. Georges, T. A. Costi, C. Pasquier, D. Jérome, C. Mézière, and P. Batail, Phys. Rev. Lett. 91, 016401 (2003). [12] F. Kagawa, T. Itou, K. Miyagawa, and K. Kanoda, Phys. Rev. B 69, 064511 (2004). [13] B. J. Powell and R. H. McKenzie, Journal of Physics: Condensed Matter 18, R827–R866 (2006). [14] P. W. Anderson, Physical Review 79, 350 (1950). [15] K. I. Kugel and Khomski˘ı, Soviet Physics Uspekhi 25, 231 (1982). [16] Y. Tokura and N. Nagaosa, science 288, 462 (2000). [17] G. Khaliullin, Progress of Theoretical Physics Supplement 160, 155 (2005). [18] J. Zhang, D. Yan, S. Yesudhas, H. Deng, H. Xiao, B. Chen, R. Sereika, X. Yin, C. Yi, Y. Shi, Z. Liu, E. M. Pärschke, C.-C. Chen, J. Chang, Y. Ding, and H.-k. Mao, npj Quantum Materials 4 (2019), 10.1038/s41535-019-0162-3. [19] W. M. H. Natori, R. Moessner, and J. Knolle, Physical Review B 100 (2019), 10.1103/physrevb.100.144403. [20] N. Mott, Advances in Physics 21, 785 (1972). [21] N. F. Mott and L. Friedman, Philosophical Magazine 30, 389 (1974). [22] A. Zylbersztejn and N. F. Mott, Physical Review B 11, 4383 (1975). [23] P. Hansmann, A. Toschi, G. Sangiovanni, T. Saha- Dasgupta, S. Lupi, M. Marsi, and K. Held, physica status solidi (b) 250, 1251 (2013), arXiv:1303.2050 [cond-mat]. [24] J. G. Bednorz and K. A. Müller, Zeitschrift für Physik B 6 Condensed Matter 64, 189 (1986). [25] P. A. Lee, N. Nagaosa, and X.-G. Wen, Reviews of Modern Physics 78, 17 (2006). [26] B. Keimer, S. A. Kivelson, M. R. Norman, S. Uchida, and J. Zaanen, Nature 518, 179 (2015). [27] I. Battisti, K. M. Bastiaans, V. Fedoseev, A. de la Torre, N. Iliopoulos, A. Tamai, E. C. Hunter, R. S. Perry, J. Za- anen, F. Baumberger, and M. P. Allan, Nature Physics 13, 21 (2017). [28] D. Li, K. Lee, B. Y. Wang, M. Osada, S. Crossley, H. R. Lee, Y. Cui, Y. Hikita, and H. Y. Hwang, Nature 572, 624 (2019). [29] X. Wu, D. Di Sante, T. Schwemmer, W. Hanke, H. Y. Hwang, S. Raghu, and R. Thomale, Phys. Rev. B 101, 060504 (2020). [30] B. Kim, H. Ohsumi, T. Komesu, S. Sakai, T. Morita, H. Takagi, and T. Arima, Science 323, 1329 (2009). [31] G. Cao and P. Schlottmann, Reports on Progress in Physics 81, 042502 (2018). [32] G. Seibold, R. Arpaia, Y. Y. Peng, R. Fumagalli, L. Braicovich, C. Di Castro, M. Grilli, G. C. Ghiringhelli, and S. Caprara, Communications Physics 4, 7 (2021). [33] T. Kondo, R. Khasanov, T. Takeuchi, J. Schmalian, and A. Kaminski, Nature 457, 296 (2009), arXiv:0902.1342 [cond-mat]. [34] T. D. Stanescu and P. Phillips, Physical Review Letters 91, 017002 (2003). [35] M. R. Norman, D. Pines, and C. Kallin, Advances in Physics 54, 715 (2005). [36] Y. Kohsaka, T. Hanaguri, M. Azuma, M. Takano, J. C. Davis, and H. Takagi, Nature Physics 8, 534 (2012), arXiv:1205.5104 [cond-mat]. [37] W.-L. Tu and T.-K. Lee, Scientific Reports 9, 1719 (2019). [38] H. Yokoyama and H. Shiba, Journal of the Physical Soci- ety of Japan 56, 1490 (1987). [39] K. Yamaji, T. Yanagisawa, T. Nakanishi, and S. Koike, Physica C: Superconductivity 304, 225 (1998). [40] T. Giamarchi and C. Lhuillier, Physical Review B 43, 12943 (1991). [41] S. R. White and D. J. Scalapino, Physical Review B 61, 6320 (2000). [42] D. J. Scalapino and S. R. White, Foundations of Physics 31, 27 (2001). [43] D. Eichenberger and D. Baeriswyl, Physical Review B 76, 180504 (2007). [44] M. H. Hettler, A. N. Tahvildar-Zadeh, M. Jarrell, T. Pr- uschke, and H. R. Krishnamurthy, Physical Review B 58, R7475 (1998). [45] M. H. Hettler, M. Mukherjee, M. Jarrell, and H. R. Krishnamurthy, Physical Review B 61, 12739 (2000). [46] A. Wietek, R. Rossi, F. Šimkovic, M. Klett, P. Hans- mann, M. Ferrero, E. M. Stoudenmire, T. Schäfer, and A. Georges, Physical Review X 11 (2021), 10.1103/phys- revx.11.041013. [47] V. I. Anisimov, J. Zaanen, and O. K. Andersen, Physical Review B 44, 943 (1991). [48] S. Choi, A. Kutepov, K. Haule, M. van Schilfgaarde, and G. Kotliar, npj Quantum Materials 1, 1 (2016). [49] S. Grytsiuk, M. I. Katsnelson, E. G. C. P. van Loon, and M. Rösner, npj Quantum Materials 9, 1 (2024). [50] M. Imada, A. Fujimori, and Y. Tokura, Reviews of Mod- ern Physics 70, 1039 (1998). [51] S. Sachdev, in Quantum Magnetism, edited by U. Scholl- wöck, J. Richter, D. J. J. Farnell, and R. F. Bishop (Springer, Berlin, Heidelberg, 2004) pp. 381–432. [52] N. F. Mott, Reviews of Modern Physics 40, 677 (1968). [53] A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Rev. Mod. Phys. 68, 13 (1996). [54] L. D. Landau et al., Zh. eksp. teor. Fiz 7, 926 (1937). [55] L. P. Kadanoff, Physics Physique Fizika 2, 263 (1966). [56] K. G. Wilson and J. Kogut, Physics reports 12, 75 (1974). [57] S. Kanno, Progress of Theoretical Physics 41, 966 (1969). [58] F. Wegner, Annalen der Physik 506, 77 (1994). [59] S. R. White, The Journal of Chemical Physics 117, 7472 (2002), arXiv:cond-mat/0201346 [cond-mat]. [60] J. M. Deutsch, Phys. Rev. A 43, 2046 (1991). [61] M. Rigol and M. Srednicki, Phys. Rev. Lett. 108, 110601 (2012). [62] A. Hegg, R. Jiang, J. Wang, J. Hou, T. Zeng, Y. Yildirim, and W. Ku, “Universal low-temperature fluctuation of unconventional superconductors revealed: ’smoking gun’ leaves proper bosonic superfluidity the last theory stand- ing,” (2024), arXiv:2402.08730 [cond-mat.supr-con]. [63] S. Kanno, Progress of Theoretical Physics 41, 1145 (1969). [64] S. Kanno, Progress of Theoretical Physics 41, 949 (1969). [65] X. Zhang, M. S. M. de Sousa, X. Li, A. Hegg, and W. Ku, “Manipulable compact many-body localization and absence of superfluidity in geometrically frustrated systems,” (2024), arXiv:2408.03939 [cond-mat.str-el]. [66] S. Kanno, Progress of Theoretical Physics 41, 966 (1969), https://academic.oup.com/ptp/article- pdf/41/4/966/5338277/41-4-966.pdf. [67] S. Kehrein, The flow equation approach to many-particle systems, Vol. 217 (Springer, 2007). [68] P. W. Anderson, science 235, 1196 (1987). [69] Z.-J. Lang, R. Jiang, and W. Ku, Phys. Rev. B 103, L180502 (2021). [70] Y. Yildirim and W. Ku, Physical Review X 1, 011011 (2011). [71] K. A. Chao, J. Spalek, and A. M. Oles, Journal of Physics C: Solid State Physics 10, L271 (1977). [72] C. Gros, R. Joynt, and T. M. Rice, Physical Review B 36, 381 (1987). [73] Y. Hatsugai and M. Kohmoto, Journal of the Physical Society of Japan 61, 2056 (1992). [74] P. W. Phillips, L. Yeo, and E. W. Huang, Nature Physics 16, 1175 (2020). [75] M. Zhao, W.-W. Yang, and Y. Zhong, “Topic Re- view: Hatsugai-Kohmoto models: Exactly solvable play- ground for Mottness and Non-Fermi Liquid,” (2024), arXiv:2501.00388 [cond-mat]. [76] P. Mai, J. Zhao, G. Tenkila, N. A. Hackner, D. Kush, D. Pan, and P. W. Phillips, “New Approach to Strong Correlation: Twisting Hubbard into the Orbital Hatsugai- Kohmoto Model,” (2024), arXiv:2401.08746 [cond-mat]. [77] J. Hubbard, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 276, 238 (1963). [78] J. Hubbard, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 277, 237 (1964). [79] J. Hubbard, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 281, 401 (1964). [80] S. Raghu, X.-L. Qi, C. Honerkamp, and S.-C. Zhang, Phys. Rev. Lett. 100, 156401 (2008). [81] S. Rachel and K. Le Hur, Physical Review B 82 (2010), 10.1103/physrevb.82.075106. 7 [82] T. Yoshida, R. Peters, S. Fujimoto, and N. Kawakami, Physical review letters 112, 196404 (2014). [83] P. Phillips, “Mottness,” (2007), arXiv:cond-mat/0702348. [84] T. D. Stanescu, P. Phillips, and T.-P. Choy, Physical Review B—Condensed Matter and Materials Physics 75, 104503 (2007). [85] A. Rosch, The European Physical Journal B 59, 495 (2007). [86] W.-W. Yang, Q. Chen, H.-G. Luo, and Y. Zhong, Physical Review B 106, 195117 (2022). [87] T. Furukawa, K. Miyagawa, H. Taniguchi, R. Kato, and K. Kanoda, Nature Physics 11, 221 (2015). [88] T. Li, S. Jiang, L. Li, Y. Zhang, K. Kang, J. Zhu, K. Watanabe, T. Taniguchi, D. Chowdhury, L. Fu, et al., Nature 597, 350 (2021). [89] T. G. Kiely and D. Chowdhury, Physical Review B 110, L241112 (2024). [90] X. Zhang, M. J. Rozenberg, and G. Kotliar, Physical review letters 70, 1666 (1993). [91] D. Guerci, M. Capone, and M. Fabrizio, Phys. Rev. Mater. 3, 054605 (2019). [92] H. F. Baker, Proceedings of the London Mathematical Society 1, 347 (1901). [93] F. Hausdorff, Ber. Verh. Kgl. Sachs. Ges. Wiss. Leipzig., Math.-phys. Kl. 58, 19 (1906). [94] J. E. Campbell, Proceedings of the London Mathematical Society 1, 381 (1896). [95] M. Aristidou and J. Hanson, International Journal of Mathematics and Mathematical Sciences 2007, Article ID 20682, 3 p. (2007). Appendix A: Mott Transition to Mott semimetal According to Fig. 1, under a smoothly reducing ˜Uk/ ˜W, the transition from Mott insulators to Mott semimetals, commonly named “Mott transition”, should display a con- tinuously growing carrier density according to the amount of repopulated eigen-particles from the reference Mott insulating states, analogous to standard phase transition from band semiconductors to band semimetals. Indeed, many numerical studies [87–90] found continuous quan- tum phase transitions for this class of local-interaction driven Mott insulators. Note that compared to the remaining slow dynam- ics in the low-energy sector, the most relevant ˜Uk is of rather high energy scale, so should the energy scale for the emergence of Mott-ness. This leaves a large space for additional relevant interactions to emerge below the scale of ˜Uk, which manifest themselves through further bifurcations in the general N-body RG flows toward dif- ferent fixed points (all with Mott-ness). Therefore, the corresponding quantum phase transitions between these Mott systems can sometimes become first-order to reflect the more dramatic change of low-energy correlation, as observed in previous studies [91]. Nonetheless, we con- sider such first-order nature of transitions unrelated to the continuous nature of quantum insulator-to-metal tran- sition of the generic Mott systems having ˜HIM as the only relevant interaction. From this perspective, Mott’s original proposal [21] of first-order phase transition is perhaps beyond this class of generic Mott transition (defined above via strong local repulsion). This is because Mott’s proposal is based on the competition between spatial size of bound electron-hole pairs in insulating states and the length scale of screened interaction in metallic states. If the single occupation of the above ˆc† iσ in position are viewed as local dressed electron-hole bound pairs, the extreme locality of their binding would be completely insensitive to the long-range interaction. On the other hand, for Mott’s proposal to apply to the low-density electron and hole carriers in Mott- semimetal states, emergence of additional V 0-scaled inter- momentum repulsion beyond ˜HIM would be necessary to encode the excitonic binding at long space-time scale. Note that the above conclusion of default continuous Mott transition is based on the smoothness of unitary transformation of the full many-body structure. Presence of non-linearity from extrinsic sources, such as strong coupling to lattice degrees of freedom, can in principle allow a (weak) first-order transition as well. Appendix B: Unitary Transformation for the Fourier Transform For systems with translational symmetry, eigen- particles and their bound particles should have well- defined momentum. Thus, a Fourier transform of the objects, for example from ˆc† i at position ri to their mo- mentum counterparts ˜c† i of momentum ki, ˜c† i ≡ˆU†ˆc† i ˆU = X i′ ˆc† i′Mi′i, (B1) would often automatically diagonalize the corresponding terms in the Hamiltonian, where Mi′i = 1 √ V eiki·ri′ is the matrix element for Fourier transform. This section gives the general procedure to construct the corresponding unitary transformation ˆU. We first define the Hermitian generator g, g = −i ln(U) = X ii′ c† igii′ci′, (B2) with coefficient gii′, such that U = eig produces the de- sired unitary transformation Eq. B1. Given that the first and second quantization share the identical algebra when applying a unitary transformation Eq. B1 via the Baker–Campbell–Hausdorff formula [92–94], the matrix g with coefficients gii′ must be g = −i ln(M†), (B3) as well, with matrix M containing elements Mi′i. Here we use the principal value definition of the logarithm, where the imaginary part of the lies in the interval (−π, π]. In numerical implementation, it is critical to control eigen- values to ensure consistency of this interval. 8 Then, given the special property M†4 = I, matrix g can be obtained, g = π 2 G1 + πG2 −π 2 G3, through the orthogonal projection matrices, G1 = 1 4(I −iM† −M†2 + iM†3), (B4) G2 = 1 4(I −M† + M†2 −M†3), (B5) G3 = 1 4(I + iM† −M†2 −iM†3), (B6) according to Ref. [95]. Appendix C: Order-by-order diagonalization In this section we show that diagonalization via canoni- cal transformation can be performed order-by-order in sec- ond quantized form. Specifically, starting with H already in diagonal form up to M-body with the rest (n>M)- body terms not in diagonal form, the coefficients of the existing diagonal (n≤M)-body terms would be preserved upon further canonical transformation to diagonalize the remaining terms. To see this, consider the anti-Hermitian generator A = X α1...(M+1);1′...(M+1)′c† 1 . . . c† (M+1)c(M+1)′ . . . c1′, of U = eA that diagonalizes the (M + 1)-body terms, through [92–94], eAHe−A = H + [A, H] + 1 2![A, [A, H]] + . . . . (C1) It becomes clear that if [A, H] does not generate nor- mal ordered terms of M- or fewer body, the previously already diagonal (n≤M)-body terms would then be pre- served. This is, in fact, the case according to the following theorem. Theorem (Bound in particle number from commuta- tion). The resulting normal ordered terms of a commuta- tor, [A1, A2], for normal ordered operators A1 and A2, of n1- and n2-body, respectively, can only be of m-body with m ∈[max{n1, n2}, n1 + n2 −1]. One can verify this theorem by carrying out a few simple cases and observe the basic structure in agreement with the theorem. The preservation of fewer-body terms during diago- nalization of more body terms naturally suggests that one can first diagonalize the most relevant one-body and two-body dynamics, before systematically moving on to three-body and more body dynamics. Since the canonical transformation at n-body level is purely algebraic, unre- lated of the actual number of particles, N, in the system, the diagonalization can be conducted within the much smaller n-body Hilbert space than the N-body one of the actual system of interest. Appendix D: Unitary transformation to absorb intersite charge fluctuation: c† iσ →ˆc† iσ This section provides more detailed description on the first canonical transformation, from c† iσ to ˆc† iσ, that ab- sorbs the inter-site charge fluctuation, after which dou- blons emerge as well-defined objects for dynamics of longer time scale. First, for on-site repulsion U ≫t much stronger than the inter-site hopping t in the Hubbard model, we identify the double occupation of site as the high-energy object of the two-body Hilbert space, and corresponding, two particles occupying different sites are within the low- energy sector. The desired canonical transformation can then be expressed as eA, with anti-Hermitian operator A of the form, A = C X ⟨ii′⟩σ (ni¯σc† iσci′σ −h.c.), where coefficient C can be numerically found to fully decouple the above mention high- and low-energy sector within the two-body Hilbert space. Naturally, C ∼t U in the simple perturbative U ≫t limit. Notice that this choice of A is only of two-body. It is therefore much simpler than the typical one used to fully decouple the high-energy sector of the N-body Hilbert space (containing at least one doublon) from the low- energy one (containing no doublon.) Nonetheless, for such decoupling in the two-body Hilbert space, or at the two-body level of the second quantized Hamiltonian, this is sufficient. (This nicely illustrates the simplicity of the above mentioned order-by-order diagonalization and the convenience of our overall framework.) Represented by the resulting dressed particles, ˆc† iσ ≡ U†c† iσU, using Eq. C1, the Hamiltonian now reads, ˆH = ˆH(L) + ˆH(H) =   t P ⟨ii′⟩σ(ˆc† iσˆci′σ(1 −ˆni′¯σ) + h.c.) +J P ⟨ii′⟩(ˆSi · ˆSi′ −1 4 ˆniˆni′) −J 4 P ⟨ii′i′′⟩(ˆc† iσˆni′¯σˆci′′σ −ˆc† iσc† i′¯σˆci′σˆci′′¯σ + h.c.)   +  (U + J) X i ˆd† i ˆdi + J 2 X ⟨ii′⟩ ( ˆd† i ˆdi′ + h.c.)  , (D1) where J = 4t2/U in the perturbable limit, ˆniσ ≡ˆc† iσˆciσ, ˆSi ≡1 2 P αβ ˆc† iασαβˆciβ, ˆd† i ≡ˆc† i↑ˆc† i↓, and ⟨ii′i′′⟩denoting three neighboring sites. The high-energy sector contains only the dynamics of doublon, ˆd† i, in the two-body Hilbert space. In contrast, the low-energy sector contains only two particles occu- pying different sites (i ̸= i′) that experience emergent interactions in the second and third line of Eq. D1. The sectors are now cleanly separated in the two-body level. In the 3- and (n>3)-body Hilbert space, additional cou- plings are present, which can be further diagonalized order-by-order, if desired. 9 Appendix E: Unitary transformation toward fix points of long space-time scale: ˆc† iσ →˜c† iσ This section provides more detailed description on the second canonical transformation, from ˆc† iσ to ˜c† iσ, that absorbs the kinetic propagation of the doublon (in the high-energy sector) and the individual single-occupation constrained particles (in the low-energy sector) at very long space-time scale, toward an eigen-particle represen- tation. Given the translational symmetry of the system, one expects such kinetic propagation would be naturally ab- sorbed via some sort of Fourier transform, such that the index i would switch from labeling the site location ri to labeling the momentum quantum number ki. Indeed, applying the general gii′ from Appendix B in defining the anti-Hermitian ˆA = ˆA(H) + ˆA(L) with ˆA(H) = X ⟨ii′⟩ ˆc† i↑ˆc† i↓gii′ˆci′↓ˆci′↑−h.c., (E1) and ˆA(L) = X ⟨ii′⟩σ ˆc† iσgii′ˆci′σ(1 −ˆni′¯σ) −h.c., (E2) one transforms the Hamiltonian to a nearly diagonal form ˜H = ˜H(H) + ˜H(L), with ˜H(H) = X i Ei˜c† i↑˜c† i↓˜ci↓˜ci↑, (E3) and ˜H(L) = X i h ϵi(˜c† i↑˜ci↑+ ˜c† i↓˜ci↓) −2ϵi˜c† i↑˜c† i↓˜ci↓˜ci↑ i + . . . , (E4) where Ei = U +J(1−P3 α=1 cos (qi · eα)) gives the energy dispersion of the doublon with momentum qi, and ϵi gives the one-body energy dispersion of the individual electrons of momentum ki. Notice that ϵi remains the same as the bare kinetic energy obtained from diagonalizing the one-body terms in Eq. 7. (Recall its preservation during diagonalization for the (n>1)-body terms in Appendix C.) Also notice that a V 0-scaled intra-momentum terms emerges as well in ˜H(L), to exactly cancel the one-body kinetic contributions ϵi, as the electrons as part of the dou- blon cannot propagate independently, thus their one-body kinetics is thus unrelated to the dispersion of doublons. Strictly, this second canonical transformation still leave some rather small two-body terms in Eq. E4, of order J ≪ϵi. We have verified numerically that a complete diagonalization of these small terms does not qualitatively alter the main structures of Eq. E4.
A universal description of Mott insulators: Characterizing quantum phases beyond broken symmetries Matheus de Sousa,1, ∗Zhiyu Fan,1, ∗and Wei Ku (顧威)1, † 1 200240, China (Dated: October 17, 2025) Using Mott insulators as a prototypical example, we demonstrate a dynamics-based characterization of quantum phases of matter through a general N-body renormalization group framework. The essential "Mott-ness" turns out to be characterized by a change of size-scaling of the effective intramomentum repulsions between long-lived emergent "eigen-particles" that encodes the dynamics of two-body bound states in the high-energy sector. This directly offers a universal characterization at long space-time scale for the corresponding class of Mott insulators through a uniform single occupation of all momenta, and otherwise Mott metals. This universal description naturally paves the way to topological Mott insulators and is straightforward to extend to bosonic Mott systems. More generally, this demonstration exemplifies a generic paradigm of characterizing quantum phases of matter through their distinct dynamics beyond broken symmetries. Introduction - Mott insulators represent a paradigmatic example of strongly correlated electron systems, with insulating behavior (unexpected from standard band filling considerations) emerges from strong electronelectron interactions [1-3]. Such interaction-induced insulators have been observed and intensively studied among a wide variety of material families, from transition metal oxides [4-6], rare-earth nickelates [7-9], layered Ruthenates like Ca2RuO4 [10], to organic charge-transfer salts such as (BEDT -TTF)2X [11-13], all displaying interesting magnetic [14, 15], orbital [16, 17], and lattice [18, 19] properties. Particularly in vanadium oxide V2O3, a paradigmatic metal-insulator transition can be realized through modulation of temperature, and pressure, making it a prototypical system for studying Mott physics [20-23]. Furthermore, upon introduction of additional carriers through chemical doping, most of the doped Mott insulator such as Cuprates La2CuO4 [24-27], Nickelates [28, 29], and Iridates [30, 31] host a even rich variety of novel physical behaviors beyond standard lore, including strange metal behavior [32], and superconductivity [25, 26, 33] in association with a pseudogap phase [34-37] that does not even host a complete Fermi surface. It is therefore of utmost importance to understand not only the static structure but also the slow dynamics of Mott insulators and their (self-)doped systems. Even with intensive investigations on simple models [3846] and real materials [47-49], currently the standard understanding of the Mott insulators [50] is limited to the ground state description resulting from perturbing a collection of isolated atoms via kinetic coupling between them. Such a position space description of the ground state, while physically intuitive, lacks direct access to the dynamics of long space-time scale relevant to dynamical transport and other response properties, and thus leaves large space for on-going debate on the nature of the ∗These authors contribute equally † email: quantum [51] and thermal [50, 52] phase transition to metals. On the other hand, the state-of-the art numerical studies based on dynamical mean-field approximation [53] lead to pictures intimately tied to coherent quasi-particles of the Fermi liquid and thus limited in its applicability. The difficulty in obtaining a satisfactory understanding for Mott insulators and their (self-)doped metals is profoundly associated with the limitation of the standard framework [54-56] for thermodynamical and quantum phases. This framework, developed by Landau [54] and extended by Kadanoff [55] and Wilson [56], targets the most relevant information, namely the order parameter (and their correlation) of systems with spontaneously broken symmetries. It has thus been extremely successful as the "universal" paradigm for phases with broken charge, magnetic, or orbital symmetries. Yet, its usefulness and applicability rapidly diminish for phases without a spontaneous broken symmetry, such as the Mott insulating phase to be demonstrated here. Particularly, to date no knowledge is available on a universal "fixed point" of Mott insulators or around it the "relevant" dynamics at long space-time scales. A more general framework for describing quantum phases of matter beyond broken symmetries is therefore of utmost importance. Here, we address this long-standing problem through a general N-body renormalization group framework that offers a generic dynamics-based characterization of quantum phases of matter beyond broken symmetries. The essential "Mott-ness" turns out to be characterized by a change of size-scaling of the effective intra-momentum repulsions between long-lived emergent "eigen-particles" that encodes the dynamics of two-body bound states in the high-energy sector. This directly offers a universal characterization of long space-time scale for the corresponding class of Mott insulators through a uniform single occupation of all momenta, and otherwise Mott metals. This universal description naturally enables the possibility of topological Mott insulators and is straightforward to extend to bosonic Mott systems. More importantly, the demonstrated general framework offers a generic paradigm 16 Oct 2025 2 (c) Mott Semimetal k (d) Mott Insulator k (b) Band Metal k (a) Band Insulator 0 1 2 nk k ǁεk (e) Mott Metal k FIG. 1. Schematics of eigen-particle occupation in several quantum phases. (a) Double occupation (in red) of all momenta in band insulators. (b) Double occupation within Fermi wavevector (k 2), explicitly separates the potential and kinetic motion of the "doublon", ˆd† i ≡ˆc† i↑ˆc† i↓, ˆH(H) = (U + J) X i ˆd† i ˆdi + J 2 X ⟨ii′⟩ ˆd† i ˆdi′, (9) (J ∼4t2/U) in the high-energy sector, from the dynamics of low-energy dressed electrons in a constrained doubleoccupation-free space, ˆH(L) [68-72]. Here ˆH(n>2) describes the beyond-two-body dynamics that emerges from the transformation. (Beyond the perturbation regime, such decoupling can still be achieved numerically.) Next, let's bring the one- and two-body terms of ˆH into a full diagonal form via another transformation, ˆU = exp ( ˆA), that turns ˆc† into the fully dressed eigen-particles, ̃c† i ≡ˆU†ˆc† i ˆU at the two-body level. Explicitly, ̃H(H) = ˆU ˆH(H) ˆU† = X i Ei ̃d† i ̃di, (10) where Ei = U + J(1 -P3 α=1 cos (qi · eα)) and ̃d† i ≡ˆU† ˆd† i ˆU = 1 √ V X i′ ˆd† i′eiqi·ri′ (11) = ˆU†ˆc† i↑ˆc† i↓ˆU = ˆU†ˆc† i↑ˆU ˆU†ˆc† i↓ˆU = ̃c† i↑ ̃c† i↓, (12) with V denoting the system size (the number of sites) and eα the unit vectors of the 3-dimensional space. As expected from the translational symmetry of the system, Eq. 11 shows that at long space-time scale the eigen-doublons, ̃d† i, have well-defined momentum ħqi. The same applies to the constrained particles in the low-energy sector upon diagonalizing ˆH(L), such that the low-energy contribution (and the one-body component) of ̃c† iσ also has well-defined momentum ħki (c.f. Appendix E). Therefore, we will substitute i below by the more familiar notation q and k for indexing the fully dressed two- and one-body eigen-particles, ̃d† i and ̃c† iσ, to better remind the readers that for eigen-particles each q or k indexes a momentum qq=i or kk=i that is one-on-one mapped to the a position ri of ˆd† i, ˆc† iσ, and c† iσ according to the gauge choice of ˆU in Eq. 11. Naturally, a consistent periodic boundary condition for ̃d† i naturally gives qi = 2ki. Equation 10 and 12 reveals that at long space-time scale, such local repulsion-driven Mott-ness is characterized by the emergence of a "relevant" intra-momentum repulsion, ̃H(H) = X q Eq ̃d† q ̃dq = X k Ek ̃c† k↑ ̃c† k↓ ̃ck↓ ̃ck↑, (13) that scales as V 0 against the system size V . Recall that by default (such as in the t/U ≫1 band metal limit) two-body interactions between particles of well-defined momenta should scale as V -1, to reflect the probability for two particles to be in proximity to experience their interaction. Therefore, a change of scaling of an interaction like this must reflect a qualitative change in the underlying correlation. (Such a change of scaling in the relevant couplings is likely common to all quantum phases as a direct reflection to their characteristic correlations.) For the specific case here, this change of scaling results from the formation of doublons in the high-energy sector. Indeed, the dressed particles in the doublon , ˆc† i↑ˆc† i↓, are spatially bound to each other, such that the probability for them to experience their mutual interaction is no longer sensitive to the system size, thus the V 0-scaling. Experts in Mott physics might find surprising such emergent intra-momentum repulsion in the long spacetime scale, given the extreme spatial locality of the original strong repulsion. Note, however, that microscopically Eq. 13 only describes the potential and kinetic energy of doublons, rather than two-body interaction between fermions, since it does not acts on the low-energy particles at different positions. The intra-momentum form simply follows eigen-particles' encoding eigen-doublons, ̃d† q, as equal-momentum pairs, ̃c† k↑ ̃c† k↓|k=q,(qq=2kk), in which two high-energy electrons, ˆc† i↑ˆc† i↓, are spatially bound during 4 propagation (convolution of two momenta of the "hatted" particles). In contrast, pairs of unbound propagating electrons are encoded as ̃c† kσ ̃c† k′̸=k,σ. Naturally, the above consideration on size-scaling would apply to not just the two-body interactions, but all (n>1)- body interactions involving doublons. Upon fully diagonalizing the remaining (n>2)-body interactions in ̃H (which preserves the already diagonal one- and two-body structures [Appendix C]), the relevant intra-momentum interactions for Mott-ness takes the general form, ̃HIM ≡ X k ̃c† k↑ ̃c† k↓ ̃Uk ̃ck↓ ̃ck↑, (14) where ̃Uk, defined through ̃Uk ̃c† k↑ ̃c† k↓≡[ ̃H, ̃c† k↑ ̃c† k↓], denotes a diagonal operator [up to (N-2)-body] for the eigen-doublon energy. While the above illustration employs the prototypical Hubbard model, the revealed characteristics of the "stable fixed point" for Mott-ness (emergence of high-energy twobody bound states and the associated V 0-scaled ̃HIM) represent a universal class of Mott systems. Analogous to universality in phase transitions, the same relevant ̃HIM can emerge in long space time scale among a wide variety of Mott systems with distinct rapid dynamics in H. Mott Insulator - Having identified the above fixed point (relevant interactions) for Mott-ness in long spacetime scale, it is straightforward to find a universal description for this class of Mott insulators. Specifically, as illustrated in Fig. 1(d), states with single occupation of all momenta by eigen-particles of arbitrary spin σk, |ΨMI; {σk}⟩= Y k ̃c† k,σk |0⟩, (15) must be insulating under a V 0-scaled ̃HIM, since longwavelength current fluctuations ̃Jq→0 of them would unavoidably results in double occupation of momenta and thus require a finite energy of scale ⟨ ̃Uk⟩. (Again, consider a one-band system for simplicity and clarity.) Intuitively, the remaining spin degree of freedom allows 2V number of insulating states, just as expected from the standard positional space description. Interesting, this universal "infrared" description unifies this representative class of Mott insulators with band insulators at the long space-time scale. Under the constraints ̃c† k↑ ̃c† k↓= 0 of the low-energy Hilbert space, eigen-particles are stuck uniformly in momentum space as in the latter (by ̃c† kσ ̃c† kσ = 0). Similarly, in both types of insulators, the uniform occupation in momentum trivially eliminates the εk-scaled kinetic energy, P k εk = 0, from the system energy, as intuitively expected for insulators. Mott Metal - Given the insensitivity of the general RG flow of ̃H to the system's particle number, the identified fixed point for Mott-ness above also directly defines a general class of Mott metals. Analogous to band metals from doping band insulators (both sharing the same relevant ̃H), upon introducing extra electrons or holes to Mott TABLE I. Comparison between band metal, Mott semimetal, Mott insulator, and Mott metal, on system-size scaling of ̃Uk, relative strength of dressed/bare one-body bandwidth ̃ W/W, itinerant carrier density ρit, Fermi surface (FS), and low-temperature entropy. (See text.) Band Metal Mott Semimetal Mott Insulator Mott Metal ⟨ ̃Uk⟩∼V -1 ⟨ ̃Uk⟩∼V 0 ⟨ ̃Uk⟩∼V 0 ⟨ ̃Uk⟩∼V 0 ̃ W ∼W ̃ W ≳ ̃Uk ̃ W M)- body terms not in diagonal form, the coefficients of the existing diagonal (n≤M)-body terms would be preserved upon further canonical transformation to diagonalize the remaining terms. To see this, consider the anti-Hermitian generator A = X α1...(M+1);1′...(M+1)′c† 1 . . . c† (M+1)c(M+1)′ . . . c1′, of U = eA that diagonalizes the (M + 1)-body terms, through [92-94], eAHe-A = H + [A, H] + 1 2![A, [A, H]] + . . . . (C1) It becomes clear that if [A, H] does not generate normal ordered terms of M- or fewer body, the previously already diagonal (n≤M)-body terms would then be preserved. This is, in fact, the case according to the following theorem. Theorem (Bound in particle number from commutation). The resulting normal ordered terms of a commutator, [A1, A2], for normal ordered operators A1 and A2, of n1- and n2-body, respectively, can only be of m-body with m ∈[max{n1, n2}, n1 + n2 -1]. One can verify this theorem by carrying out a few simple cases and observe the basic structure in agreement with the theorem. The preservation of fewer-body terms during diagonalization of more body terms naturally suggests that one can first diagonalize the most relevant one-body and two-body dynamics, before systematically moving on to three-body and more body dynamics. Since the canonical transformation at n-body level is purely algebraic, unrelated of the actual number of particles, N, in the system, the diagonalization can be conducted within the much smaller n-body Hilbert space than the N-body one of the actual system of interest. Appendix D: Unitary transformation to absorb intersite charge fluctuation: c† iσ →ˆc† iσ This section provides more detailed description on the first canonical transformation, from c† iσ to ˆc† iσ, that absorbs the inter-site charge fluctuation, after which doublons emerge as well-defined objects for dynamics of longer time scale. First, for on-site repulsion U ≫t much stronger than the inter-site hopping t in the Hubbard model, we identify the double occupation of site as the high-energy object of the two-body Hilbert space, and corresponding, two particles occupying different sites are within the lowenergy sector. The desired canonical transformation can then be expressed as eA, with anti-Hermitian operator A of the form, A = C X ⟨ii′⟩σ (ni ̄σc† iσci′σ -h.c.), where coefficient C can be numerically found to fully decouple the above mention high- and low-energy sector within the two-body Hilbert space. Naturally, C ∼t U in the simple perturbative U ≫t limit. Notice that this choice of A is only of two-body. It is therefore much simpler than the typical one used to fully decouple the high-energy sector of the N-body Hilbert space (containing at least one doublon) from the lowenergy one (containing no doublon.) Nonetheless, for such decoupling in the two-body Hilbert space, or at the two-body level of the second quantized Hamiltonian, this is sufficient. (This nicely illustrates the simplicity of the above mentioned order-by-order diagonalization and the convenience of our overall framework.) Represented by the resulting dressed particles, ˆc† iσ ≡ U†c† iσU, using Eq. C1, the Hamiltonian now reads, ˆH = ˆH(L) + ˆH(H) =   t P ⟨ii′⟩σ(ˆc† iσˆci′σ(1 -ˆni′ ̄σ) + h.c.) +J P ⟨ii′⟩(ˆSi · ˆSi′ -1 4 ˆniˆni′) -J 4 P ⟨ii′i′′⟩(ˆc† iσˆni′ ̄σˆci′′σ -ˆc† iσc† i′ ̄σˆci′σˆci′′ ̄σ + h.c.)   +  (U + J) X i ˆd† i ˆdi + J 2 X ⟨ii′⟩ ( ˆd† i ˆdi′ + h.c.)  , (D1) where J = 4t2/U in the perturbable limit, ˆniσ ≡ˆc† iσˆciσ, ˆSi ≡1 2 P αβ ˆc† iασαβˆciβ, ˆd† i ≡ˆc† i↑ˆc† i↓, and ⟨ii′i′′⟩denoting three neighboring sites. The high-energy sector contains only the dynamics of doublon, ˆd† i, in the two-body Hilbert space. In contrast, the low-energy sector contains only two particles occupying different sites (i ̸= i′) that experience emergent interactions in the second and third line of Eq. D1. The sectors are now cleanly separated in the two-body level. In the 3- and (n>3)-body Hilbert space, additional couplings are present, which can be further diagonalized order-by-order, if desired. 9 Appendix E: Unitary transformation toward fix points of long space-time scale: ˆc† iσ → ̃c† iσ This section provides more detailed description on the second canonical transformation, from ˆc† iσ to ̃c† iσ, that absorbs the kinetic propagation of the doublon (in the high-energy sector) and the individual single-occupation constrained particles (in the low-energy sector) at very long space-time scale, toward an eigen-particle representation. Given the translational symmetry of the system, one expects such kinetic propagation would be naturally absorbed via some sort of Fourier transform, such that the index i would switch from labeling the site location ri to labeling the momentum quantum number ki. Indeed, applying the general gii′ from Appendix B in defining the anti-Hermitian ˆA = ˆA(H) + ˆA(L) with ˆA(H) = X ⟨ii′⟩ ˆc† i↑ˆc† i↓gii′ˆci′↓ˆci′↑-h.c., (E1) and ˆA(L) = X ⟨ii′⟩σ ˆc† iσgii′ˆci′σ(1 -ˆni′ ̄σ) -h.c., (E2) one transforms the Hamiltonian to a nearly diagonal form ̃H = ̃H(H) + ̃H(L), with ̃H(H) = X i Ei ̃c† i↑ ̃c† i↓ ̃ci↓ ̃ci↑, (E3) and ̃H(L) = X i h εi( ̃c† i↑ ̃ci↑+ ̃c† i↓ ̃ci↓) -2εi ̃c† i↑ ̃c† i↓ ̃ci↓ ̃ci↑ i + . . . , (E4) where Ei = U +J(1-P3 α=1 cos (qi · eα)) gives the energy dispersion of the doublon with momentum qi, and εi gives the one-body energy dispersion of the individual electrons of momentum ki. Notice that εi remains the same as the bare kinetic energy obtained from diagonalizing the one-body terms in Eq. 7. (Recall its preservation during diagonalization for the (n>1)-body terms in Appendix C.) Also notice that a V 0-scaled intra-momentum terms emerges as well in ̃H(L), to exactly cancel the one-body kinetic contributions εi, as the electrons as part of the doublon cannot propagate independently, thus their one-body kinetics is thus unrelated to the dispersion of doublons. Strictly, this second canonical transformation still leave some rather small two-body terms in Eq. E4, of order J ≪εi. We have verified numerically that a complete diagonalization of these small terms does not qualitatively alter the main structures of Eq. E4.
2510.14950
PREPRINT VERSION 1 A formative measurement validation methodology for survey questionnaires Mark Dominique Dalipe Muñoz Mathematics Department (Iloilo Science and Technology University – Main Campus) Iloilo City, Philippines markdominique.munoz@students.isatu.edu.ph ___________________________________________________________________________ Abstract: Model misspecification of formative indicators remains a widely documented issue across academic literature, yet scholars lack a clear consensus on pragmatic, prescriptive approaches to manage this gap. This ambiguity forces researchers to rely on psychometric frameworks primarily intended for reflective models, and thus risks misleading findings. This article introduces a Multi-Step Validation Methodology Framework specifically designed for formative constructs in survey-based research. The proposed framework is grounded in an exhaustive literature review and integrates essential pilot diagnostics through descriptive statistics and multicollinearity checks. The methodology provides researchers with the necessary theoretical and structural clarity to finally justify and adhere to appropriate validation techniques that accurately account for the causal nature of the constructs while ensuring high psychometric and statistical integrity. Keywords: Index construction, formative construct, survey questionnaire, measurement model, framework, pilot testing, indicator, SEM __________________________________________________________________________ 1. INTRODUCTION Pilot testing procedures are a necessary foundational process to ensure the quality and integrity of survey questionnaires [15,23,25], which serve as the primary tools for gathering observational data across fields such as Marketing, Information Systems, and Management [1,3]. However, from my prior experience on the current academic practices, I have frequently encountered persistent methodological misconceptions and incorrect practices that require immediate academic intervention to achieve greater rigor and confidence in the corresponding findings. This methodological gap is not new and has been well-documented by previous researchers throughout history when the psychometric methods were gaining scientific adoption in the 19th century across various social disciplines [1,5]. For instance, researchers in management and social science fields often “force” their constructs to fit within the reflective model framework regardless of their actual theoretical nature [1,4], which functionally misleads their true psychometric nature. Evidence suggests that their historical roots between the reflective and formative measurement models are key reasons for the disparity. Formative constructs have been overshadowed by the continuous development of psychometric tools intended for reflective constructs [1,5,6] as one measure from this said framework is the widespread use of Cronbach’s alpha for assessing instrument reliability due to its computational simplicity and adoption across statistical packages [17]. PREPRINT VERSION 2 However, relying solely on Cronbach’s alpha as a measure for assessing instrument quality overlooks a fundamental distinction in measurement theory. While current literature already acknowledges the limitations of using Cronbach’s alpha alone [14,16,17,18,19] and have thus recommended to explore other types of reliability such as test-retest, split-half, and interrater and types of validity such as face, construct, and content [5], the subtle gap is on the very nature of the constructs we seek to measure and our pre-specified items or questions. We are missing a critical point that Cronbach’s alpha, as well as these reliability and validity types, are rooted deeply in Classical Test Theory (CTT) [3,5]. They are necessary diagnostic tools only when the construct is assumed to be reflective, where each individual item is viewed to conceptually represent the construct in a consistent manner [3]. This is analogous to assuming the items are multiple ways to represent a construct and then finding a common ground to cancel out the item-based variants, and making the composite measure more robust. These items are expected to be highly correlated because they are assumed to share a common cause, satisfying stringent assumptions such as tau (τ)-equivalence and unidimensionality [16,18]. The psychometric properties between reflective and formative constructs are seen as opposites of one another [2]. The indicators on formative models are viewed as “forming together” into a unified construct of interest, and any changes to the combinations of the indicators fundamentally alter the meaning of the construct [1,3,7]. Their relationship is best expressed as a linear composite, often modeled using regression techniques like Partial Least Squares Structural Equation Modeling (PLS-SEM) [2]-[3]. Because formative indicators do not share a common cause, internal consistency reliability is not required and can, in fact, be problematic as it can lead to multicollinearity or redundancy of items and unstable indicator weights [1]. Misspecified constructs pose a risk of obtaining misleading and biased statistical results [1]-[2]. Researchers would sometimes grapple with the meaning of low Cronbach’s alpha values even when the subject matter experts (SMEs) have validated the relevance of each item to the construct's domain. It does signal a psychometric problem of inconsistency when the constructs are reflective, but it can threaten the content validity of a formative construct since ensuring that the indicators adequately represent the scope of the construct is its key characteristic [3]. While the distinction between reflective and formative constructs and their corresponding validity and reliability protocols is well-documented in the context of Structural Equation Modeling (SEM) [1,3,13,22] when assessing the outer (measurement) model, there is still a need to emphasize the key distinctions of these two types of constructs outside of SEM, especially for the purposes of questionnaire validation. This article thus aims to address a long- standing gap in scale development by proposing a formal validation protocol among researchers during pilot testing phases. In this article, the operational definition of three main terms: construct, indicator, item, and model are addressed to provide clearer comprehension. A model will refer to a framework or blueprint and thus, a formative model meant the overall system that the construct, indicator, and other parameters interrelate with one another while assuming the formative context. Now indicator and item are used interchangeably but for indicator is meant as just a more niche or formal term similar to what a questionnaire item is. Finally, a construct is something that is represented by the items either as a representation or a contributor. When referring to a reflective construct, it always implies the indicator or the item is reflective and vice versa. PREPRINT VERSION 3 2. FORMATIVE MEASUREMENT MODEL The formative model is a psychometric model that explains the nature of the relationship between the construct and indicators by assuming that the indicators collectively form the meaning of the underlying construct [2,4,29], yet it is often less understood and acknowledged than the reflective model. Being associated with index construction, its origins are traced back to the principles of operationalism, which posits that a concept is directly represented or measured from the measure itself [4]. Its entire meaning depends on the nature of the indicators themselves. Hence, 𝜂 represents the latent variable and x is an indicator, then η ≡ x (1) This concept can later be generalized into multiple indicators. Suppose we have indicators 𝑥𝑖 where 𝑖= 1,2, … , 𝑛, then these 𝑛 indicators collectively form the inherent meaning of the latent variable 𝜂. Its meaning thus changes depending on the nature of 𝑥𝑖 involved. A formative specification can be implied as [4,7]: η ≡ γ1x1 + γ2x2 + … + γnxn (2) where 𝛾𝑖 is a parameter reflecting the unique contribution of 𝑥𝑖 in explaining 𝜂. These terms are analogous to what we commonly known as the weights of each indicator. Bollen and Lennox [4] also provided another specification as follows: η ≡ γ1x1 + γ2x2 + … + γnxn + ζ (3) Fig. 1. Formative measurement model illustration (Adapted from [4]) Equation 1 captures the widespread use of single-item measures during the 1960s-70s, yet it limits the possibility of multiple measures to represent the same construct. For equations 2 and 3 [4], we can see that the only difference with Bollen and Lennox’s specification [4] is the introduction of a disturbance term ζ. This term defines the remaining aspect that has been left unrepresented by any of the indicators, which would ideally make the collective representation of the indicators theoretically “complete”. Equation 3 emphasizes that the construct is psychometrically represented as a linear composite of the weighted contributions of each indicator. It is often an imperfect representation because of the abstract nature of the construct that lacks a defined scope. From a psychometric perspective, Diamantopoulos and PREPRINT VERSION 4 Wiklhofer [4] presented four key characteristics that distinguish its nature from reflective constructs. Property 1. 𝜼= 𝒇(𝒙𝒊) The inherent meaning of the formative construct is heavily dependent on the nature of the indicators as a whole and across each individual indicator. Property 2. 𝑪𝒐𝒗(𝒙𝒋, 𝒙𝒌) = 𝝈𝒋𝒌 where 𝒋, 𝒌∈{𝟏, 𝟐, … , 𝒏} The indicators are assumed to cause and therefore precede a latent construct, correlations (or covariances) among indicators guide the selection of indicators that are to be retained or removed. Since each correlation value is unique, which directly informs the relative redundancy between any two arbitrary indicators alone, we don’t expect any particular trend to surface in these correlation values since the implications of the model do not depend on them. Hence, we can identify 𝜎𝑗𝑘 here as a free parameter that can take on any values (positive, negative, or zero). Property 3. 𝑪𝒐𝒗(𝒙𝒊, ζ) = 𝟎 Indicators do not have “error” terms associated with the indicators because each indicator is assumed to be a precise measurement of a construct (a direct consequence of Equation 1). Instead, error variance is represented only in the disturbance term ζ associated with the latent construct. Property 4. 𝒅𝒇= 𝑵𝒖𝒎𝒃𝒆𝒓 𝒐𝒇 𝒖𝒏𝒊𝒒𝒖𝒆 𝒅𝒂𝒕𝒂 𝒑𝒐𝒊𝒏𝒕𝒔−𝑵𝒖𝒎𝒃𝒆𝒓 𝒐𝒇 𝒇𝒓𝒆𝒆 𝒑𝒂𝒓𝒂𝒎𝒆𝒕𝒆𝒓𝒔 This condition is characteristic when a formative model is part of the structural equation modeling (SEM) framework. A single formative construct on its own is statistically underidentified. In the context of SEM, model underidentification means that the number of unique data points is less than the number of free parameters, resulting in insufficient information to identify a solution to a set of structural equations [2]-[3]. Simply put, multiple possible solutions are found but unstable for a given environment, and thus, the model might prefer to have multiple constructs to compensate for the insufficient unique data points, yet this is not always the case. The nature of formative specification poses a problem because the error variances are associated with the constructs that are fewer in quantity than the indicators [3]. Our current statistical approaches, however, circumvent or avoid this modeling problem because we are not estimating any “solution” on the formative model in the form of empirical weights. Rather, we will use theoretical weights for each indicator which is tied to literature review and expert feedback. From these conditions, we can see that internal consistency reliability and construct validity in terms of convergent and discriminant validity are not suitable even in its fundamental model-based characteristics because the constructs are viewed not as scales but as indices formed from a composition of its indicators. PREPRINT VERSION 5 3. GUIDANCE ON REFLECTIVE AND FORMATIVE MODELS Fig. 2. Comparative illustration between reflective and formative models We can now refer to a general guide of the distinction between reflective and formative constructs to aid the researchers in pre-specifying each construct and therefore, have better guidance on the right psychometric approaches to use. Reflective and formative models are also known as principal factor models and latent variable models, respectively. Characteristic Reflective Model Formative Model Causality of construct Construct → Indicator (Construct causes the indicator) Indicator → Construct (Indicator causes the construct) Nature of Error Measurement Error Term (Measures the discrepancy on the ability of an item to represent the construct) Disturbance Term (Measures the unrepresented aspect of the construct relative to its the overall theoretical scope) Conceptual relationship among items All items are conceptually related with one another. No strict requirement of conceptual relatedness across any two items. Domain of items Items are simply a sample of representations of a construct. Items encompass the entire scope of representation of a construct. Content validity A useful validity check to ensure relevance of the items. A mandatory validity check to ensure the integrity of the construct. Covariance among items Collinearity among items is expected. No expectation of collinearity. High collinearity poses problems. Internal consistency Required. Not required and problematic because of redundancy. Construct validity Required especially on convergent and discriminant validity. Only external construct validity (Its relationship towards other constructs) if appropriate. PREPRINT VERSION 6 Mismatch on Theoretical and Empirical Findings Rare and safe case because items are simply manifestations of an already established construct. Problematic and likely when the existing literature does not match the expectations from the statistical findings. This is evident when estimating empirical weights after data analysis. Table 1. Comparison between reflective and formative models (Adapted and synthesized from [1] – [2]) Jarvis et al. [1] also proposed the general guidelines on specifying whether a construct is formative or reflective of which the selection should rely on the discretion of the researchers and/or Subject Matter Experts (SMEs). Since the original table regarding this concept appears to be conceptually overlapping from our previous table in this study, a prescriptive version of its content is provided as follows: Ideally, there are three aspects you have to assess on formative scale development [2]: (1) Construct: Direction of causality, (2) Indicators: Interchangeability and covariation, (3) Extraneous factors on indicators: Nomological net of the construct indicators. a. Direction of causality Identifying the causal direction of each construct should rely on how it is defined on existing literature about the construct itself and its related phenomena [4,30]. Prioritize key articles that are directly utilized to the theories you establish at the beginning of your study as research frameworks, rather than simply an arbitrary pool of articles [29,30]. This avoids the perception bias from “niche” studies because researchers form their own understanding about a particular topic or theory that can influence how they define the term in the context of their study. It is also important to acknowledge that the definition of terms is often intended for the particular context of a study given the quantitative nature of scale development, which can limit its direct application across other studies. Pay attention on how a construct is defined from your literature sources. Does the construct imply an underlying abstract concept that you can measure by itself? If so, it is reflective. Otherwise, it is formative if the definition of the construct directly depends on a combination of multiple factors or aspects which are explicitly stated. However, the researchers might come across a construct that can be framed as both reflective or formative depending on the corresponding literatures that support it. In this case, they are given the freedom to choose the model that is more relevant to the objectives of the study. b. Interchangeability and covariation among indicators/items After pre-specifying the anticipated psychometric model of each construct, the researcher will create or generate the appropriate survey items. If the researchers happened to have access to a standardized questionnaire from a previous study, it is recommended to immediately assign all constructs as reflective. For formative constructs, the gold standard is for the researchers to deliberately generate an item pool [10] based on the existing literature, expert feedback from Subject Matter Experts (SMEs), and assumed psychometric model for the underlying construct. This is more commonly known as “self-made questionnaire” because the construct’s meaning heavily depends on the local context of the researchers and the respondents. PREPRINT VERSION 7 The key hint to look first is for whether the absence of each item would alter the substance of the construct. This is the interchangeability characteristic of items. If it does alter, the items are formative since each item has a unique contribution towards the construct, making them non-interchangeable. If it doesn’t, it’s reflective. However, if the researchers thought the distinction is insufficient or ambiguous from current examination of their constructs, we assess the second aspect: covariation. This refers to the tendency of an item to covary with the trend of another item, and is a key characteristic on reflective constructs, but can also be found on formative items. Hence, the distinction is on the necessity to covary. If responses across items should be expected to follow a similar trend with one another, then it is not formative. This characteristic is analogous to internal consistency reliability where you expect the responses to consistently align with the rest of item responses on their corresponding construct. Because if you observed that a respondent can theoretically disagree on one item but agree on another, it’s a classic suspicion of a formative indicator. What if we found that the items we’ve generated are actually a mix of formative and reflective constructs? We must align to what your construct actually assumes at the first place (i.e. If the construct is formatively defined, you must look for formative indicators). This ensures that we adhere to the very core psychometric characteristics that distinguish these two models since the nature of construct and indicator, by definition, are interconnected. c. Nomological net of construct indicators This refers to the external factors that either influence (cause) or be influenced (effect) by the indicators and is a direct extension from the covariation aspect [30]. If a factor causes or is affected one indicator, then it should also cause or be affected by another indicator under the same construct. This trait is characteristic of reflective constructs, however, since it aims consistency and stability of representing the construct. The assessment of the Nomological Net is related to the structural model that occurs after the measurement model has been purified in the context of Structural Equation Modelling [1,3]. In other words, it means that assessing Nomological Net only happens after completing the questionnaire validation process and the researchers are set to analyze the data gathered from the actual respondents. Hence, this aspect will be excluded from our framework in this study. 4. HIGHER-ORDER MODELS PREPRINT VERSION 8 Fig. 3. Variants of higher-order models [1]. So far, we have only encountered the intricacies between the two models on a single order? But what about for higher level models? There are different combinations of these models involving a mix of both formative and reflective constructs [1]. This challenges the traditional notion of a construct that should be conceptually and empirically unidimensional to be meaningful. Each of these models are still guided by existing literature and expert guidance where the previous literature review guidelines still apply. For the sake of discussion, let’s assume for now a second-order model where the first-order constructs become the indicators of the second-order constructs. While several modeling techniques are used to estimate the values of latent variables, they require a large sample size and for a pilot testing involving a small number of respondents, these tools may struggle to provide accurate estimation [1,3]. Hence, for pragmatic reasons, extract a composite measure of each construct by computing the weighted mean or median for each respondent. The weighted means now serve as the proxy data of the first-order constructs and will be used for further validation on the second order constructs. We will still use the similar diagnostic measures which will be presented in the later sections. This diagnostic approach can be generalized further beyond second-order models. 5. NATURE OF PILOT TESTING Diamantopoulos and his colleagues [4] have identified four key issues to assess in the interest of an effective scale development, which will serve as the primary guide on what to assess during the pilot testing phase. a. Content Specification This refers to the scope of the latent variable as seen in the previous sections. Since it is important for formative constructs to represent its scope with as much breadth as possible from its causal indicators, having a clear and well-defined construct is crucial for a clearer and more informed decision on deciding the appropriate psychometric model. Researchers are recommended to refer back to (a) Direction of causality that presents key highlights on this step as well as its significance in the pilot testing process. PREPRINT VERSION 9 b. Indicator Specification This refers to how the researchers should assign and create the corresponding items of each construct. It requires careful consideration of the underlying aspects that the construct may directly or implicitly imply. Again, I recommend to refer back to the (b) Interchangeability and covariation among indicators/items for a full description on this aspect. c. Indicator Collinearity This refers to the strength of relationship among any pair of indicators. It touches the diagnostic aspect of validating whether the pilot data will also adhere with the assumed theoretical context. Excessive collinearity leads to poor discrimination on the unique effects of each indicator and therefore assess its significance on measuring the latent variable [3]. We conduct its corresponding tests after the pilot data is obtained and will be performed iteratively until we obtain results that satisfy our empirical conditions. d. External Validity This refers to the extent of how our current formative constructs relate to other constructs. Ideally, this can be determined by examining the covariance of each indicator to another external variable. Since the external variable acts as a global construct that provides a valid baseline for our current indicators, indicators that correlate with this variable are retained because it strengthens our assumption that our indicators mirror an ideal representation of our construct. It can be related to nomological validity as it concerns relationships with external factors. Hence, like nomological validity, this issue is beyond the intended internal scope and will thus be excluded from our framework. 6. FRAMEWORK PROPER Literatures surrounding the pilot testing process among survey questionnaires are well- documented. While its paradigm sufficiently lays a practical step by step guide for student researchers in survey questionnaires, they often revolve by assuming the constructs to be reflective. Churchill’s validation methodology [6] is an eight-step process incorporating the use reliability and validity measures as default steps to ensure instrument quality, and also advocated the use of multi-item measures as single items lack the essential scope a construct needs for a more stable finding. We now seek to contextualize and refine these protocols on formative constructs to advocate its use whenever deemed necessary. I replaced the original reliability and validity measures with descriptive statistics and multicollinearity measures while acknowledging the iterative aspect of validating questionnaires. The purification step on his protocol which promoted the use of Cronbach’s alpha and factor analysis to inspect the psychometric characteristics of the constructs is inappropriate for formative models [1,17]. Hence, the researchers are recommended to perform the following sections in particular order. PREPRINT VERSION 10 Fig. 4. Validation flow between Reflective Construct (adapted from [6]) and Formative Construct. 6.1 Domain of construct The scale development process begins with identifying what and how the identified constructs will be measured based from previous literature and expert review [21,24,30]. This is where the researchers will decide and set each construct to assume either the formative or reflective psychometric model, but not both. Corresponding literature should then be cited to support the pre-defined psychometric models for each construct. 6.2 Content validity Once the constructs are established, the researchers must create corresponding items under each construct from previous literature [24,30]. While there are common conventions of either choosing to deliberately formulate the items on their own (self-made) or utilize an existing standardized questionnaire from an existing study, I recommend the self-made option if one of your constructs is formative because of the theory-driven aspect of the items as it allows you to account the contextual nuance of your population and its relation to your indications. This is known as generating an item pool [8,10]. Adapting an existing questionnaire will risk not accounting for the aspects of your construct that are only unique in the context of your study [20]. The nature and number of indicators of each construct should be guided by literature and the related field of study [12,30]. If no clear guideline on the number of items exists, a heuristic rule of thumb of at least 5 items is recommended as a last resort, provided that the items will sufficiently represent the underlying aspects of the constructs found on existing literature. The researchers will then justify selection of items by citing the corresponding articles that either act as: (a) definitional source: define the construct and present the aspects, (b) mirror source: present the precise items used that match the needed aspects to measure. For (a), the researchers will highlight the aspects from the presented definition and reason a theoretical relevance of one or more aspects to an item’s context. For (b), the researchers can PREPRINT VERSION 11 simply use the item, provided that they have permission from the author of the study and have ensured the suitability of the item by assessing the population of the original study. For the theoretical weighing, theoretical weights will be pre-defined as preliminary values for the pilot testing process in setting the relative importance of the items in measuring the construct. The weights seek to answer the question: “How important is each item in understanding the construct when compared with the rest of the items?” While the decision for setting the weights relies on either the researchers or subject matter experts (SMEs), I recommend SMEs where you can calculate Content Validity Ratio (CVR) values of each item 𝑖 based from their collective responses [8,9,24]. Not only would the CVR values could act as theoretical weights for the indicators, but the SMEs can also provide qualitative feedbacks on the questionnaire to correct or improve its scope. Having the researchers to pre-define the weights should be a last resort due to logistical constraints such as lack of availability among SMEs and limited time as it sacrifices methodological rigor. If it does, a score of an item can be calculated using the arithmetic average of the researchers’ rating from 1- not essential to 5- essential which will rely on the corresponding literature that support its significance on measuring the construct. 𝑪𝑽𝑹𝒊= 𝒏𝒆−𝑵 𝟐 𝑵 𝟐 (4) Where: 𝑛𝑒: number of raters indicating “essential” N: total number of raters 6.3. Update questionnaire and collect pilot data This section includes steps 3 and 4 from the proposed framework. The refined questionnaire draft from section 4.2 can now be answered by the pilot respondents. Pilot testing will be conducted with two distinct objectives in mind: (a) Pilot respondents will assess the face validity of the instrument by answering a supplementary questionnaire containing questions about the quality of instrument such as clarity and preciseness of language and/or subjective feedbacks from a select few [15]. There are no strict guidelines on the formatting of the instruments for this objective as long as it meets the fundamental goal to improve and refine the instrument to become effective in gathering data on the study’s population [11,12,23]. (b) Conduct diagnostic statistical checks on the pilot data gathered from the respondents answering the actual survey. These checks will diagnose how the respondents actually respond to each item [24] which will be further discussed in the next section. Pilot data provides a useful, initial baseline on how the survey might perform on an actual data gathering [25]. Hence, the selected pilot respondents should possess the characteristics that are as close as possible with the actual population and are anticipated to have diverse responses on your constructs. Having a pilot respondent similar to the actual population allows an effective proxy and anticipate the potential issues that may arise when dealing with the population [15,24], while the diversity of their responses ensure that an item is capable of measuring the actual item as part of the item without suffering from response PREPRINT VERSION 12 biases such as social desirability bias. While the actual population can be theoretically a gold standard for selecting pilot respondents, I recommend avoiding this approach to ensure sufficient pool of respondents for the actual data gathering. Pilot testing can be iterative, and requires a different set of pilot respondents every iteration to mitigate practice bias. 6.4. Statistical checks: Descriptive statistics and multicollinearity Assuming the pilot data has been collected, I recommend to first check and identify respondents whose responses were outlier values. This informs the pilot respondents that show unusual behavior that is deviant from the rest of the pilot respondents. This provides researchers the opportunity to probe questions on these respondents about their respondents, as well as further examine their outputs when assessing face validity. Researchers can then proceed on conducting and interpreting descriptive statistics and multicollinearity tests on the individual items, which are the primary diagnostic measures to assess how well the items are measuring the formative construct. Descriptive statistics summarize the characteristics of a sample [27,29] and can be used in diagnosing whether the items are performing well based on the characteristics of pilot sample as the proxy estimation. Measures of central tendency such as mean and median can be used to determine the general location on their responses, informing the ability of an item to be as unbiased as possible, while measures of variability such as standard deviation and interquartile range determine the general extent of diversity or uniformity of their responses that informs the item’s conceptual scope to gather various responses [26,29]. Both measures should be used to collectively inform the quality of each item which is guided from empirically established guidelines. Given the diagnostic purpose of the pilot testing, interpreting descriptive statistics on the construct as a whole is pragmatically redundant since interpretations on central tendency and variability will still depend on the nature of the individual items because their values constantly change depending on the involved items, lacking stability of results. Multicollinearity in the context of formative constructs occurs when two or more indicators are found to be statistically too similar that an indicator can be a numerical copy of another indicator. To measure multicollinearity, the Variance Inflation Factor (VIF) is used as the primary standard check which relies on the coefficient of determination or r-squared (𝑅2) that can be obtained by performing an Ordinary Least Squares (OLS) regression [1,3]. 𝑉𝐼𝐹= 1 1−𝑅2 (5) However, because VIF is fundamentally computed from a regression model, it could produce unstable estimates due to low sample size that is evident in our pilot respondents. If feasible, I recommend to refer to the empirical guidelines of Hair et al. [2] on the number of observations needed to run a regression model. Even when the alternative way of assessing bivariate correlations across all possible pairs of indicators using Pearson’s product-moment correlation [2], it assumes a continuous nature of the indicator data which is not always the case for Likert scales, resulting to a more serious consequence. Misspecification holds a greater risk on misleading results rather than the instability due to sample size. The researchers can then proceed to actual data gathering only if the constructs sufficiently satisfied the conditions on the said statistical checks, as well as the preceding auxiliary procedures. PREPRINT VERSION 13 7. CONCLUSION Advancing rigorous methodological practices for formative measurement models is crucial for promoting robust and valid findings since the current validation processes remains compromised and misconceived due to historical origins. Hence, this paper justifies the theoretical motivation on the perceived gaps on questionnaire validation practices and guide the pragmatic roles of the researchers in ensuring appropriate pilot testing procedures related to survey questionnaires. This addresses the long-standing misconceptions of model misspecification on formative constructs as we believe researchers should prioritize theoretical justification and contextual accuracy whenever feasible. REFERENCES [1] C. B. Jarvis, S. B. MacKenzie, and P. M. Podsakoff, “A critical review of construct indicators and measurement model misspecification in marketing and consumer research,” J. Consum. Res., vol. 30, no. 2, pp. 199–218, Sep. 2003. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1086/376806 [2] J. Hair, W. Black, B. Babin, and R. Anderson, Multivariate Data Analysis, 8th ed. Cengage Learn., 2019. [3] J. Henseler, C. M. Ringle, and R. R. Sinkovics, “The use of partial least squares path modeling in international marketing,” in Advances in International Marketing. Emerald Group Publ. Ltd., 2009, pp. 277–319. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1108/s1474-7979(2009)0000020014 [4] A. Diamantopoulos and H. M. Winklhofer, “Index construction with formative indicators: An alternative to scale development,” J. Marketing Res., vol. 38, no. 2, pp. 269–277, May 2001. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1509/jmkr.38.2.269.18845 [5] J. C. Nunnally, Psychometric Theory. New York: McGraw-Hill, 1967. [6] G. A. Churchill, “A paradigm for developing better measures of marketing constructs,” J. Marketing Res., vol. 16, no. 1, p. 64, Feb. 1979. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.2307/3150876 [7] G. R. Franke, K. J. Preacher, and E. E. Rigdon, “Proportional structural effects of formative indicators,” J. Bus. Res., vol. 61, no. 12, pp. 1229–1237, Dec. 2008. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1016/j.jbusres.2008.01.011 [8] V. Zamanzadeh, A. Ghahramanian, M. Rassouli, A. Abbaszadeh, H. Alavi-Majd, and A.-R. Nikanfar, “Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication,” J. Caring Sci., vol. 4, no. 2, pp. 165–178, Jun. 2015. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.15171/jcs.2015.017 [9] M. Romero Jeldres, E. Díaz Costa, and T. Faouzi Nadim, “A review of Lawshe’s method for calculating content validity in the social sciences,” Frontiers Educ., vol. 8, Nov. 2023. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.3389/feduc.2023.1271335 [10] J. V. Gyuricza, A. F. P. L. d’Oliveira, L. B. M. Machado, and J. Brodersen, “Development of an item pool for a questionnaire on the psychosocial consequences of hypertension labelling,” J. Patient-Reported Outcomes, vol. 4, no. 1, Dec. 2019. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1186/s41687-019- 0168-4 PREPRINT VERSION 14 [11] I. Kusmaryono and D. Wijayanti, “Number of Response Options, Reliability, Validity, and Potential Bias in the Use of the Likert Scale Education and Social Science Research: A Literature Review,” Int. J. Educational Methodol., vol. 8, no. 4, pp. 625–637, Nov. 2022. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.12973/ijem.8.4.625 [12] “What Is the Best Response Scale for Survey and Questionnaire Design; Review of Different Lengths of Rating Scale / Attitude Scale / Likert Scale,” Int. J. Academic Res. Manage., vol. 8, no. 1, pp. 1–10, 2019. Accessed: Oct. 16, 2025. [Online]. Available: https://hal.science/hal-02557308v1/document [13] J. Amora, “On the validity assessment of formative measurement models in PLS- SEM,” Data Anal. Perspectives J., vol. 4, no. 2, pp. 1–7, 2023. Accessed: Oct. 16, 2025. [Online]. Available: https://scriptwarp.com/dapj/2023_DAPJ_4_2/Amora_2023_DAPJ_4_2_For mativeAssessment.pdf [14] M. A. Bujang, E. D. Omar, and N. A. Baharum, “A Review on Sample Size Determination for Cronbach’s Alpha Test: A Simple Guide for Researchers,” Malaysian J. Med. Sci., vol. 25, no. 6, pp. 85–99, 2018. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.21315/mjms2018.25.6.9 [15] M. A. Bujang, E. D. Omar, D. H. P. Foo, and Y. K. Hon, “Sample size determination for conducting a pilot study to assess reliability of a questionnaire,” Restor. Dent. & Endod., vol. 49, 2024. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5395/rde.2024.49.e3 [16] O. F. Morera and S. M. Stokes, “Coefficient α as a Measure of Test Score Reliability: Review of 3 Popular Misconceptions,” Amer. J. Public Health, vol. 106, no. 3, pp. 458– 461, Mar. 2016. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.2105/ajph.2015.302993 [17] K. S. Taber, “The Use of Cronbach’s Alpha When Developing and Reporting Research Instruments in Science Education,” Res. Sci. Educ., vol. 48, no. 6, pp. 1273–1296, Jun. 2017. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1007/s11165- 016-9602-2 [18] M. Tavakol and R. Dennick, “Making sense of Cronbach's alpha,” Int. J. Med. Educ., vol. 2, pp. 53–55, Jun. 2011. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5116/ijme.4dfb.8dfd [19] D. G. Bonett and T. A. Wright, “Cronbach's alpha reliability: Interval estimation, hypothesis testing, and sample size planning,” J. Organizational Behav., vol. 36, no. 1, pp. 3–15, Oct. 2014. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1002/job.1960 [20] M. A. Bujang, H. Y. Khee, and L. K. Kee, A Step by step Guide to Questionnaire Validation Research. Inst. Clin. Res. (ICR), 2022. [21] D. De, K. Kishore, V. Jaswal, and V. Kulkarni, “Practical guidelines to develop and evaluate a questionnaire,” Indian Dermatol. Online J., vol. 12, no. 2, p. 266, 2021. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.4103/idoj.idoj_674_20 [22] G. W. Cheung, H. D. Cooper-Thomas, R. S. Lau, and L. C. Wang, “Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations,” Asia Pacific J. Manage., Jan. 2023. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1007/s10490-023- 09871-y [23] H. Taherdoost, “How to Design and Create an Effective Survey/Questionnaire; A Step by Step Guide,” Int. J. Academic Res. Manage., vol. 5, no. 4, pp. 37–41, 2016. PREPRINT VERSION 15 [24] H. Taherdoost, “Validity and Reliability of the Research Instrument; How to Test the Validation of a Questionnaire/Survey in a Research,” SSRN Electron. J., 2016. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.2139/ssrn.3205040 [25] R. Tate, F. Beauregard, C. Peter, and L. Marotta, “Pilot Testing as a Strategy to Develop Interview and Questionnaire Skills for Scholar Practitioners,” Impacting Educ.: J. Transforming Professional Pract., vol. 8, no. 4, pp. 20–25, Sep. 2023. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5195/ie.2023.333 [26] A. G. Bedeian, ““More than Meets the Eye”: A Guide to Interpreting the Descriptive Statistics and Correlation Matrices Reported in Management Research,” Rev. Ibero- Americana Estrateg., vol. 14, no. 02, pp. 08–22, Jun. 2015. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5585/ijsm.v14i2.2244 [27] G. Marshall and L. Jonker, “An introduction to descriptive statistics: A review and practical guide,” Radiography, vol. 16, no. 4, Nov. 2010, Art. no. e1-e7. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1016/j.radi.2010.01.001 [28] M. Stadler, M. Sailer, and F. Fischer, “Knowledge as a formative construct: A good alpha is not always better,” New Ideas Psychol., vol. 60, p. 100832, Jan. 2021. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1016/j.newideapsych.2020.100832 [29] M. Kebalepile and P. M. Chakane, “Commonly used statistical tests and their application,” Afr J Anaesth Anal, vol. 28, pp. 80–84, 2022. [30] L. S. Lambert and D. A. Newman, “Construct Development and Validation in Three Practical Steps: Recommendations for Reviewers, Editors, and Authors*,” Organizational Res. Methods, p. 109442812211153, Aug. 2022. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1177/10944281221115374
PREPRINT VERSION 1 A formative measurement validation methodology for survey questionnaires Mark Dominique Dalipe Muñoz Mathematics Department (Iloilo Science and Technology University - Main Campus) Iloilo City, Philippines ___________________________________________________________________________ Abstract: Model misspecification of formative indicators remains a widely documented issue across academic literature, yet scholars lack a clear consensus on pragmatic, prescriptive approaches to manage this gap. This ambiguity forces researchers to rely on psychometric frameworks primarily intended for reflective models, and thus risks misleading findings. This article introduces a Multi-Step Validation Methodology Framework specifically designed for formative constructs in survey-based research. The proposed framework is grounded in an exhaustive literature review and integrates essential pilot diagnostics through descriptive statistics and multicollinearity checks. The methodology provides researchers with the necessary theoretical and structural clarity to finally justify and adhere to appropriate validation techniques that accurately account for the causal nature of the constructs while ensuring high psychometric and statistical integrity. Keywords: Index construction, formative construct, survey questionnaire, measurement model, framework, pilot testing, indicator, SEM __________________________________________________________________________ 1. INTRODUCTION Pilot testing procedures are a necessary foundational process to ensure the quality and integrity of survey questionnaires [15,23,25], which serve as the primary tools for gathering observational data across fields such as Marketing, Information Systems, and Management [1,3]. However, from my prior experience on the current academic practices, I have frequently encountered persistent methodological misconceptions and incorrect practices that require immediate academic intervention to achieve greater rigor and confidence in the corresponding findings. This methodological gap is not new and has been well-documented by previous researchers throughout history when the psychometric methods were gaining scientific adoption in the 19th century across various social disciplines [1,5]. For instance, researchers in management and social science fields often "force" their constructs to fit within the reflective model framework regardless of their actual theoretical nature [1,4], which functionally misleads their true psychometric nature. Evidence suggests that their historical roots between the reflective and formative measurement models are key reasons for the disparity. Formative constructs have been overshadowed by the continuous development of psychometric tools intended for reflective constructs [1,5,6] as one measure from this said framework is the widespread use of Cronbach's alpha for assessing instrument reliability due to its computational simplicity and adoption across statistical packages [17]. PREPRINT VERSION 2 However, relying solely on Cronbach's alpha as a measure for assessing instrument quality overlooks a fundamental distinction in measurement theory. While current literature already acknowledges the limitations of using Cronbach's alpha alone [14,16,17,18,19] and have thus recommended to explore other types of reliability such as test-retest, split-half, and interrater and types of validity such as face, construct, and content [5], the subtle gap is on the very nature of the constructs we seek to measure and our pre-specified items or questions. We are missing a critical point that Cronbach's alpha, as well as these reliability and validity types, are rooted deeply in Classical Test Theory (CTT) [3,5]. They are necessary diagnostic tools only when the construct is assumed to be reflective, where each individual item is viewed to conceptually represent the construct in a consistent manner [3]. This is analogous to assuming the items are multiple ways to represent a construct and then finding a common ground to cancel out the item-based variants, and making the composite measure more robust. These items are expected to be highly correlated because they are assumed to share a common cause, satisfying stringent assumptions such as tau (τ)-equivalence and unidimensionality [16,18]. The psychometric properties between reflective and formative constructs are seen as opposites of one another [2]. The indicators on formative models are viewed as "forming together" into a unified construct of interest, and any changes to the combinations of the indicators fundamentally alter the meaning of the construct [1,3,7]. Their relationship is best expressed as a linear composite, often modeled using regression techniques like Partial Least Squares Structural Equation Modeling (PLS-SEM) [2]-[3]. Because formative indicators do not share a common cause, internal consistency reliability is not required and can, in fact, be problematic as it can lead to multicollinearity or redundancy of items and unstable indicator weights [1]. Misspecified constructs pose a risk of obtaining misleading and biased statistical results [1]-[2]. Researchers would sometimes grapple with the meaning of low Cronbach's alpha values even when the subject matter experts (SMEs) have validated the relevance of each item to the construct's domain. It does signal a psychometric problem of inconsistency when the constructs are reflective, but it can threaten the content validity of a formative construct since ensuring that the indicators adequately represent the scope of the construct is its key characteristic [3]. While the distinction between reflective and formative constructs and their corresponding validity and reliability protocols is well-documented in the context of Structural Equation Modeling (SEM) [1,3,13,22] when assessing the outer (measurement) model, there is still a need to emphasize the key distinctions of these two types of constructs outside of SEM, especially for the purposes of questionnaire validation. This article thus aims to address a longstanding gap in scale development by proposing a formal validation protocol among researchers during pilot testing phases. In this article, the operational definition of three main terms: construct, indicator, item, and model are addressed to provide clearer comprehension. A model will refer to a framework or blueprint and thus, a formative model meant the overall system that the construct, indicator, and other parameters interrelate with one another while assuming the formative context. Now indicator and item are used interchangeably but for indicator is meant as just a more niche or formal term similar to what a questionnaire item is. Finally, a construct is something that is represented by the items either as a representation or a contributor. When referring to a reflective construct, it always implies the indicator or the item is reflective and vice versa. PREPRINT VERSION 3 2. FORMATIVE MEASUREMENT MODEL The formative model is a psychometric model that explains the nature of the relationship between the construct and indicators by assuming that the indicators collectively form the meaning of the underlying construct [2,4,29], yet it is often less understood and acknowledged than the reflective model. Being associated with index construction, its origins are traced back to the principles of operationalism, which posits that a concept is directly represented or measured from the measure itself [4]. Its entire meaning depends on the nature of the indicators themselves. Hence, η represents the latent variable and x is an indicator, then η ≡ x (1) This concept can later be generalized into multiple indicators. Suppose we have indicators xi where i= 1,2, ... , n, then these n indicators collectively form the inherent meaning of the latent variable η. Its meaning thus changes depending on the nature of xi involved. A formative specification can be implied as [4,7]: η ≡ γ1x1 + γ2x2 + ... + γnxn (2) where γi is a parameter reflecting the unique contribution of xi in explaining η. These terms are analogous to what we commonly known as the weights of each indicator. Bollen and Lennox [4] also provided another specification as follows: η ≡ γ1x1 + γ2x2 + ... + γnxn + ζ (3) Fig. 1. Formative measurement model illustration (Adapted from [4]) Equation 1 captures the widespread use of single-item measures during the 1960s-70s, yet it limits the possibility of multiple measures to represent the same construct. For equations 2 and 3 [4], we can see that the only difference with Bollen and Lennox's specification [4] is the introduction of a disturbance term ζ. This term defines the remaining aspect that has been left unrepresented by any of the indicators, which would ideally make the collective representation of the indicators theoretically "complete". Equation 3 emphasizes that the construct is psychometrically represented as a linear composite of the weighted contributions of each indicator. It is often an imperfect representation because of the abstract nature of the construct that lacks a defined scope. From a psychometric perspective, Diamantopoulos and PREPRINT VERSION 4 Wiklhofer [4] presented four key characteristics that distinguish its nature from reflective constructs. Property 1. η= f(xi) The inherent meaning of the formative construct is heavily dependent on the nature of the indicators as a whole and across each individual indicator. Property 2. Cov(xj, xk) = σjk where j, k∈{1, 2, ... , n} The indicators are assumed to cause and therefore precede a latent construct, correlations (or covariances) among indicators guide the selection of indicators that are to be retained or removed. Since each correlation value is unique, which directly informs the relative redundancy between any two arbitrary indicators alone, we don't expect any particular trend to surface in these correlation values since the implications of the model do not depend on them. Hence, we can identify σjk here as a free parameter that can take on any values (positive, negative, or zero). Property 3. Cov(xi, ζ) = 0 Indicators do not have "error" terms associated with the indicators because each indicator is assumed to be a precise measurement of a construct (a direct consequence of Equation 1). Instead, error variance is represented only in the disturbance term ζ associated with the latent construct. Property 4. df= Number of unique data points-Number of free parameters This condition is characteristic when a formative model is part of the structural equation modeling (SEM) framework. A single formative construct on its own is statistically underidentified. In the context of SEM, model underidentification means that the number of unique data points is less than the number of free parameters, resulting in insufficient information to identify a solution to a set of structural equations [2]-[3]. Simply put, multiple possible solutions are found but unstable for a given environment, and thus, the model might prefer to have multiple constructs to compensate for the insufficient unique data points, yet this is not always the case. The nature of formative specification poses a problem because the error variances are associated with the constructs that are fewer in quantity than the indicators [3]. Our current statistical approaches, however, circumvent or avoid this modeling problem because we are not estimating any "solution" on the formative model in the form of empirical weights. Rather, we will use theoretical weights for each indicator which is tied to literature review and expert feedback. From these conditions, we can see that internal consistency reliability and construct validity in terms of convergent and discriminant validity are not suitable even in its fundamental model-based characteristics because the constructs are viewed not as scales but as indices formed from a composition of its indicators. PREPRINT VERSION 5 3. GUIDANCE ON REFLECTIVE AND FORMATIVE MODELS Fig. 2. Comparative illustration between reflective and formative models We can now refer to a general guide of the distinction between reflective and formative constructs to aid the researchers in pre-specifying each construct and therefore, have better guidance on the right psychometric approaches to use. Reflective and formative models are also known as principal factor models and latent variable models, respectively. Characteristic Reflective Model Formative Model Causality of construct Construct → Indicator (Construct causes the indicator) Indicator → Construct (Indicator causes the construct) Nature of Error Measurement Error Term (Measures the discrepancy on the ability of an item to represent the construct) Disturbance Term (Measures the unrepresented aspect of the construct relative to its the overall theoretical scope) Conceptual relationship among items All items are conceptually related with one another. No strict requirement of conceptual relatedness across any two items. Domain of items Items are simply a sample of representations of a construct. Items encompass the entire scope of representation of a construct. Content validity A useful validity check to ensure relevance of the items. A mandatory validity check to ensure the integrity of the construct. Covariance among items Collinearity among items is expected. No expectation of collinearity. High collinearity poses problems. Internal consistency Required. Not required and problematic because of redundancy. Construct validity Required especially on convergent and discriminant validity. Only external construct validity (Its relationship towards other constructs) if appropriate. PREPRINT VERSION 6 Mismatch on Theoretical and Empirical Findings Rare and safe case because items are simply manifestations of an already established construct. Problematic and likely when the existing literature does not match the expectations from the statistical findings. This is evident when estimating empirical weights after data analysis. Table 1. Comparison between reflective and formative models (Adapted and synthesized from [1] - [2]) Jarvis et al. [1] also proposed the general guidelines on specifying whether a construct is formative or reflective of which the selection should rely on the discretion of the researchers and/or Subject Matter Experts (SMEs). Since the original table regarding this concept appears to be conceptually overlapping from our previous table in this study, a prescriptive version of its content is provided as follows: Ideally, there are three aspects you have to assess on formative scale development [2]: (1) Construct: Direction of causality, (2) Indicators: Interchangeability and covariation, (3) Extraneous factors on indicators: Nomological net of the construct indicators. a. Direction of causality Identifying the causal direction of each construct should rely on how it is defined on existing literature about the construct itself and its related phenomena [4,30]. Prioritize key articles that are directly utilized to the theories you establish at the beginning of your study as research frameworks, rather than simply an arbitrary pool of articles [29,30]. This avoids the perception bias from "niche" studies because researchers form their own understanding about a particular topic or theory that can influence how they define the term in the context of their study. It is also important to acknowledge that the definition of terms is often intended for the particular context of a study given the quantitative nature of scale development, which can limit its direct application across other studies. Pay attention on how a construct is defined from your literature sources. Does the construct imply an underlying abstract concept that you can measure by itself? If so, it is reflective. Otherwise, it is formative if the definition of the construct directly depends on a combination of multiple factors or aspects which are explicitly stated. However, the researchers might come across a construct that can be framed as both reflective or formative depending on the corresponding literatures that support it. In this case, they are given the freedom to choose the model that is more relevant to the objectives of the study. b. Interchangeability and covariation among indicators/items After pre-specifying the anticipated psychometric model of each construct, the researcher will create or generate the appropriate survey items. If the researchers happened to have access to a standardized questionnaire from a previous study, it is recommended to immediately assign all constructs as reflective. For formative constructs, the gold standard is for the researchers to deliberately generate an item pool [10] based on the existing literature, expert feedback from Subject Matter Experts (SMEs), and assumed psychometric model for the underlying construct. This is more commonly known as "self-made questionnaire" because the construct's meaning heavily depends on the local context of the researchers and the respondents. PREPRINT VERSION 7 The key hint to look first is for whether the absence of each item would alter the substance of the construct. This is the interchangeability characteristic of items. If it does alter, the items are formative since each item has a unique contribution towards the construct, making them non-interchangeable. If it doesn't, it's reflective. However, if the researchers thought the distinction is insufficient or ambiguous from current examination of their constructs, we assess the second aspect: covariation. This refers to the tendency of an item to covary with the trend of another item, and is a key characteristic on reflective constructs, but can also be found on formative items. Hence, the distinction is on the necessity to covary. If responses across items should be expected to follow a similar trend with one another, then it is not formative. This characteristic is analogous to internal consistency reliability where you expect the responses to consistently align with the rest of item responses on their corresponding construct. Because if you observed that a respondent can theoretically disagree on one item but agree on another, it's a classic suspicion of a formative indicator. What if we found that the items we've generated are actually a mix of formative and reflective constructs? We must align to what your construct actually assumes at the first place (i.e. If the construct is formatively defined, you must look for formative indicators). This ensures that we adhere to the very core psychometric characteristics that distinguish these two models since the nature of construct and indicator, by definition, are interconnected. c. Nomological net of construct indicators This refers to the external factors that either influence (cause) or be influenced (effect) by the indicators and is a direct extension from the covariation aspect [30]. If a factor causes or is affected one indicator, then it should also cause or be affected by another indicator under the same construct. This trait is characteristic of reflective constructs, however, since it aims consistency and stability of representing the construct. The assessment of the Nomological Net is related to the structural model that occurs after the measurement model has been purified in the context of Structural Equation Modelling [1,3]. In other words, it means that assessing Nomological Net only happens after completing the questionnaire validation process and the researchers are set to analyze the data gathered from the actual respondents. Hence, this aspect will be excluded from our framework in this study. 4. HIGHER-ORDER MODELS PREPRINT VERSION 8 Fig. 3. Variants of higher-order models [1]. So far, we have only encountered the intricacies between the two models on a single order? But what about for higher level models? There are different combinations of these models involving a mix of both formative and reflective constructs [1]. This challenges the traditional notion of a construct that should be conceptually and empirically unidimensional to be meaningful. Each of these models are still guided by existing literature and expert guidance where the previous literature review guidelines still apply. For the sake of discussion, let's assume for now a second-order model where the first-order constructs become the indicators of the second-order constructs. While several modeling techniques are used to estimate the values of latent variables, they require a large sample size and for a pilot testing involving a small number of respondents, these tools may struggle to provide accurate estimation [1,3]. Hence, for pragmatic reasons, extract a composite measure of each construct by computing the weighted mean or median for each respondent. The weighted means now serve as the proxy data of the first-order constructs and will be used for further validation on the second order constructs. We will still use the similar diagnostic measures which will be presented in the later sections. This diagnostic approach can be generalized further beyond second-order models. 5. NATURE OF PILOT TESTING Diamantopoulos and his colleagues [4] have identified four key issues to assess in the interest of an effective scale development, which will serve as the primary guide on what to assess during the pilot testing phase. a. Content Specification This refers to the scope of the latent variable as seen in the previous sections. Since it is important for formative constructs to represent its scope with as much breadth as possible from its causal indicators, having a clear and well-defined construct is crucial for a clearer and more informed decision on deciding the appropriate psychometric model. Researchers are recommended to refer back to (a) Direction of causality that presents key highlights on this step as well as its significance in the pilot testing process. PREPRINT VERSION 9 b. Indicator Specification This refers to how the researchers should assign and create the corresponding items of each construct. It requires careful consideration of the underlying aspects that the construct may directly or implicitly imply. Again, I recommend to refer back to the (b) Interchangeability and covariation among indicators/items for a full description on this aspect. c. Indicator Collinearity This refers to the strength of relationship among any pair of indicators. It touches the diagnostic aspect of validating whether the pilot data will also adhere with the assumed theoretical context. Excessive collinearity leads to poor discrimination on the unique effects of each indicator and therefore assess its significance on measuring the latent variable [3]. We conduct its corresponding tests after the pilot data is obtained and will be performed iteratively until we obtain results that satisfy our empirical conditions. d. External Validity This refers to the extent of how our current formative constructs relate to other constructs. Ideally, this can be determined by examining the covariance of each indicator to another external variable. Since the external variable acts as a global construct that provides a valid baseline for our current indicators, indicators that correlate with this variable are retained because it strengthens our assumption that our indicators mirror an ideal representation of our construct. It can be related to nomological validity as it concerns relationships with external factors. Hence, like nomological validity, this issue is beyond the intended internal scope and will thus be excluded from our framework. 6. FRAMEWORK PROPER Literatures surrounding the pilot testing process among survey questionnaires are welldocumented. While its paradigm sufficiently lays a practical step by step guide for student researchers in survey questionnaires, they often revolve by assuming the constructs to be reflective. Churchill's validation methodology [6] is an eight-step process incorporating the use reliability and validity measures as default steps to ensure instrument quality, and also advocated the use of multi-item measures as single items lack the essential scope a construct needs for a more stable finding. We now seek to contextualize and refine these protocols on formative constructs to advocate its use whenever deemed necessary. I replaced the original reliability and validity measures with descriptive statistics and multicollinearity measures while acknowledging the iterative aspect of validating questionnaires. The purification step on his protocol which promoted the use of Cronbach's alpha and factor analysis to inspect the psychometric characteristics of the constructs is inappropriate for formative models [1,17]. Hence, the researchers are recommended to perform the following sections in particular order. PREPRINT VERSION 10 Fig. 4. Validation flow between Reflective Construct (adapted from [6]) and Formative Construct. 6.1 Domain of construct The scale development process begins with identifying what and how the identified constructs will be measured based from previous literature and expert review [21,24,30]. This is where the researchers will decide and set each construct to assume either the formative or reflective psychometric model, but not both. Corresponding literature should then be cited to support the pre-defined psychometric models for each construct. 6.2 Content validity Once the constructs are established, the researchers must create corresponding items under each construct from previous literature [24,30]. While there are common conventions of either choosing to deliberately formulate the items on their own (self-made) or utilize an existing standardized questionnaire from an existing study, I recommend the self-made option if one of your constructs is formative because of the theory-driven aspect of the items as it allows you to account the contextual nuance of your population and its relation to your indications. This is known as generating an item pool [8,10]. Adapting an existing questionnaire will risk not accounting for the aspects of your construct that are only unique in the context of your study [20]. The nature and number of indicators of each construct should be guided by literature and the related field of study [12,30]. If no clear guideline on the number of items exists, a heuristic rule of thumb of at least 5 items is recommended as a last resort, provided that the items will sufficiently represent the underlying aspects of the constructs found on existing literature. The researchers will then justify selection of items by citing the corresponding articles that either act as: (a) definitional source: define the construct and present the aspects, (b) mirror source: present the precise items used that match the needed aspects to measure. For (a), the researchers will highlight the aspects from the presented definition and reason a theoretical relevance of one or more aspects to an item's context. For (b), the researchers can PREPRINT VERSION 11 simply use the item, provided that they have permission from the author of the study and have ensured the suitability of the item by assessing the population of the original study. For the theoretical weighing, theoretical weights will be pre-defined as preliminary values for the pilot testing process in setting the relative importance of the items in measuring the construct. The weights seek to answer the question: "How important is each item in understanding the construct when compared with the rest of the items?" While the decision for setting the weights relies on either the researchers or subject matter experts (SMEs), I recommend SMEs where you can calculate Content Validity Ratio (CVR) values of each item i based from their collective responses [8,9,24]. Not only would the CVR values could act as theoretical weights for the indicators, but the SMEs can also provide qualitative feedbacks on the questionnaire to correct or improve its scope. Having the researchers to pre-define the weights should be a last resort due to logistical constraints such as lack of availability among SMEs and limited time as it sacrifices methodological rigor. If it does, a score of an item can be calculated using the arithmetic average of the researchers' rating from 1- not essential to 5essential which will rely on the corresponding literature that support its significance on measuring the construct. CVRi= ne-N 2 N 2 (4) Where: ne: number of raters indicating "essential" N: total number of raters 6.3. Update questionnaire and collect pilot data This section includes steps 3 and 4 from the proposed framework. The refined questionnaire draft from section 4.2 can now be answered by the pilot respondents. Pilot testing will be conducted with two distinct objectives in mind: (a) Pilot respondents will assess the face validity of the instrument by answering a supplementary questionnaire containing questions about the quality of instrument such as clarity and preciseness of language and/or subjective feedbacks from a select few [15]. There are no strict guidelines on the formatting of the instruments for this objective as long as it meets the fundamental goal to improve and refine the instrument to become effective in gathering data on the study's population [11,12,23]. (b) Conduct diagnostic statistical checks on the pilot data gathered from the respondents answering the actual survey. These checks will diagnose how the respondents actually respond to each item [24] which will be further discussed in the next section. Pilot data provides a useful, initial baseline on how the survey might perform on an actual data gathering [25]. Hence, the selected pilot respondents should possess the characteristics that are as close as possible with the actual population and are anticipated to have diverse responses on your constructs. Having a pilot respondent similar to the actual population allows an effective proxy and anticipate the potential issues that may arise when dealing with the population [15,24], while the diversity of their responses ensure that an item is capable of measuring the actual item as part of the item without suffering from response PREPRINT VERSION 12 biases such as social desirability bias. While the actual population can be theoretically a gold standard for selecting pilot respondents, I recommend avoiding this approach to ensure sufficient pool of respondents for the actual data gathering. Pilot testing can be iterative, and requires a different set of pilot respondents every iteration to mitigate practice bias. 6.4. Statistical checks: Descriptive statistics and multicollinearity Assuming the pilot data has been collected, I recommend to first check and identify respondents whose responses were outlier values. This informs the pilot respondents that show unusual behavior that is deviant from the rest of the pilot respondents. This provides researchers the opportunity to probe questions on these respondents about their respondents, as well as further examine their outputs when assessing face validity. Researchers can then proceed on conducting and interpreting descriptive statistics and multicollinearity tests on the individual items, which are the primary diagnostic measures to assess how well the items are measuring the formative construct. Descriptive statistics summarize the characteristics of a sample [27,29] and can be used in diagnosing whether the items are performing well based on the characteristics of pilot sample as the proxy estimation. Measures of central tendency such as mean and median can be used to determine the general location on their responses, informing the ability of an item to be as unbiased as possible, while measures of variability such as standard deviation and interquartile range determine the general extent of diversity or uniformity of their responses that informs the item's conceptual scope to gather various responses [26,29]. Both measures should be used to collectively inform the quality of each item which is guided from empirically established guidelines. Given the diagnostic purpose of the pilot testing, interpreting descriptive statistics on the construct as a whole is pragmatically redundant since interpretations on central tendency and variability will still depend on the nature of the individual items because their values constantly change depending on the involved items, lacking stability of results. Multicollinearity in the context of formative constructs occurs when two or more indicators are found to be statistically too similar that an indicator can be a numerical copy of another indicator. To measure multicollinearity, the Variance Inflation Factor (VIF) is used as the primary standard check which relies on the coefficient of determination or r-squared (R2) that can be obtained by performing an Ordinary Least Squares (OLS) regression [1,3]. VIF= 1 1-R2 (5) However, because VIF is fundamentally computed from a regression model, it could produce unstable estimates due to low sample size that is evident in our pilot respondents. If feasible, I recommend to refer to the empirical guidelines of Hair et al. [2] on the number of observations needed to run a regression model. Even when the alternative way of assessing bivariate correlations across all possible pairs of indicators using Pearson's product-moment correlation [2], it assumes a continuous nature of the indicator data which is not always the case for Likert scales, resulting to a more serious consequence. Misspecification holds a greater risk on misleading results rather than the instability due to sample size. The researchers can then proceed to actual data gathering only if the constructs sufficiently satisfied the conditions on the said statistical checks, as well as the preceding auxiliary procedures. PREPRINT VERSION 13 7. CONCLUSION Advancing rigorous methodological practices for formative measurement models is crucial for promoting robust and valid findings since the current validation processes remains compromised and misconceived due to historical origins. Hence, this paper justifies the theoretical motivation on the perceived gaps on questionnaire validation practices and guide the pragmatic roles of the researchers in ensuring appropriate pilot testing procedures related to survey questionnaires. This addresses the long-standing misconceptions of model misspecification on formative constructs as we believe researchers should prioritize theoretical justification and contextual accuracy whenever feasible. REFERENCES [1] C. B. Jarvis, S. B. MacKenzie, and P. M. Podsakoff, "A critical review of construct indicators and measurement model misspecification in marketing and consumer research," J. Consum. Res., vol. 30, no. 2, pp. 199-218, Sep. 2003. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1086/376806 [2] J. Hair, W. Black, B. Babin, and R. Anderson, Multivariate Data Analysis, 8th ed. Cengage Learn., 2019. [3] J. Henseler, C. M. Ringle, and R. R. Sinkovics, "The use of partial least squares path modeling in international marketing," in Advances in International Marketing. Emerald Group Publ. Ltd., 2009, pp. 277-319. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1108/s1474-7979(2009)0000020014 [4] A. Diamantopoulos and H. M. Winklhofer, "Index construction with formative indicators: An alternative to scale development," J. Marketing Res., vol. 38, no. 2, pp. 269-277, May 2001. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1509/jmkr.38.2.269.18845 [5] J. C. Nunnally, Psychometric Theory. New York: McGraw-Hill, 1967. [6] G. A. Churchill, "A paradigm for developing better measures of marketing constructs," J. Marketing Res., vol. 16, no. 1, p. 64, Feb. 1979. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.2307/3150876 [7] G. R. Franke, K. J. Preacher, and E. E. Rigdon, "Proportional structural effects of formative indicators," J. Bus. Res., vol. 61, no. 12, pp. 1229-1237, Dec. 2008. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1016/j.jbusres.2008.01.011 [8] V. Zamanzadeh, A. Ghahramanian, M. Rassouli, A. Abbaszadeh, H. Alavi-Majd, and A.-R. Nikanfar, "Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication," J. Caring Sci., vol. 4, no. 2, pp. 165-178, Jun. 2015. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.15171/jcs.2015.017 [9] M. Romero Jeldres, E. Díaz Costa, and T. Faouzi Nadim, "A review of Lawshe's method for calculating content validity in the social sciences," Frontiers Educ., vol. 8, Nov. 2023. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.3389/feduc.2023.1271335 [10] J. V. Gyuricza, A. F. P. L. d'Oliveira, L. B. M. Machado, and J. Brodersen, "Development of an item pool for a questionnaire on the psychosocial consequences of hypertension labelling," J. Patient-Reported Outcomes, vol. 4, no. 1, Dec. 2019. Accessed: Oct. 15, 2025. [Online]. Available: https://doi.org/10.1186/s41687-0190168-4 PREPRINT VERSION 14 [11] I. Kusmaryono and D. Wijayanti, "Number of Response Options, Reliability, Validity, and Potential Bias in the Use of the Likert Scale Education and Social Science Research: A Literature Review," Int. J. Educational Methodol., vol. 8, no. 4, pp. 625-637, Nov. 2022. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.12973/ijem.8.4.625 [12] "What Is the Best Response Scale for Survey and Questionnaire Design; Review of Different Lengths of Rating Scale / Attitude Scale / Likert Scale," Int. J. Academic Res. Manage., vol. 8, no. 1, pp. 1-10, 2019. Accessed: Oct. 16, 2025. [Online]. Available: https://hal.science/hal-02557308v1/document [13] J. Amora, "On the validity assessment of formative measurement models in PLSSEM," Data Anal. Perspectives J., vol. 4, no. 2, pp. 1-7, 2023. Accessed: Oct. 16, 2025. [Online]. Available: https://scriptwarp.com/dapj/2023_DAPJ_4_2/Amora_2023_DAPJ_4_2_For mativeAssessment.pdf [14] M. A. Bujang, E. D. Omar, and N. A. Baharum, "A Review on Sample Size Determination for Cronbach's Alpha Test: A Simple Guide for Researchers," Malaysian J. Med. Sci., vol. 25, no. 6, pp. 85-99, 2018. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.21315/mjms2018.25.6.9 [15] M. A. Bujang, E. D. Omar, D. H. P. Foo, and Y. K. Hon, "Sample size determination for conducting a pilot study to assess reliability of a questionnaire," Restor. Dent. & Endod., vol. 49, 2024. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5395/rde.2024.49.e3 [16] O. F. Morera and S. M. Stokes, "Coefficient α as a Measure of Test Score Reliability: Review of 3 Popular Misconceptions," Amer. J. Public Health, vol. 106, no. 3, pp. 458461, Mar. 2016. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.2105/ajph.2015.302993 [17] K. S. Taber, "The Use of Cronbach's Alpha When Developing and Reporting Research Instruments in Science Education," Res. Sci. Educ., vol. 48, no. 6, pp. 1273-1296, Jun. 2017. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1007/s11165016-9602-2 [18] M. Tavakol and R. Dennick, "Making sense of Cronbach's alpha," Int. J. Med. Educ., vol. 2, pp. 53-55, Jun. 2011. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5116/ijme.4dfb.8dfd [19] D. G. Bonett and T. A. Wright, "Cronbach's alpha reliability: Interval estimation, hypothesis testing, and sample size planning," J. Organizational Behav., vol. 36, no. 1, pp. 3-15, Oct. 2014. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1002/job.1960 [20] M. A. Bujang, H. Y. Khee, and L. K. Kee, A Step by step Guide to Questionnaire Validation Research. Inst. Clin. Res. (ICR), 2022. [21] D. De, K. Kishore, V. Jaswal, and V. Kulkarni, "Practical guidelines to develop and evaluate a questionnaire," Indian Dermatol. Online J., vol. 12, no. 2, p. 266, 2021. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.4103/idoj.idoj_674_20 [22] G. W. Cheung, H. D. Cooper-Thomas, R. S. Lau, and L. C. Wang, "Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations," Asia Pacific J. Manage., Jan. 2023. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1007/s10490-02309871-y [23] H. Taherdoost, "How to Design and Create an Effective Survey/Questionnaire; A Step by Step Guide," Int. J. Academic Res. Manage., vol. 5, no. 4, pp. 37-41, 2016. PREPRINT VERSION 15 [24] H. Taherdoost, "Validity and Reliability of the Research Instrument; How to Test the Validation of a Questionnaire/Survey in a Research," SSRN Electron. J., 2016. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.2139/ssrn.3205040 [25] R. Tate, F. Beauregard, C. Peter, and L. Marotta, "Pilot Testing as a Strategy to Develop Interview and Questionnaire Skills for Scholar Practitioners," Impacting Educ.: J. Transforming Professional Pract., vol. 8, no. 4, pp. 20-25, Sep. 2023. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5195/ie.2023.333 [26] A. G. Bedeian, ""More than Meets the Eye": A Guide to Interpreting the Descriptive Statistics and Correlation Matrices Reported in Management Research," Rev. IberoAmericana Estrateg., vol. 14, no. 02, pp. 08-22, Jun. 2015. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.5585/ijsm.v14i2.2244 [27] G. Marshall and L. Jonker, "An introduction to descriptive statistics: A review and practical guide," Radiography, vol. 16, no. 4, Nov. 2010, Art. no. e1-e7. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1016/j.radi.2010.01.001 [28] M. Stadler, M. Sailer, and F. Fischer, "Knowledge as a formative construct: A good alpha is not always better," New Ideas Psychol., vol. 60, p. 100832, Jan. 2021. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1016/j.newideapsych.2020.100832 [29] M. Kebalepile and P. M. Chakane, "Commonly used statistical tests and their application," Afr J Anaesth Anal, vol. 28, pp. 80-84, 2022. [30] L. S. Lambert and D. A. Newman, "Construct Development and Validation in Three Practical Steps: Recommendations for Reviewers, Editors, and Authors*," Organizational Res. Methods, p. 109442812211153, Aug. 2022. Accessed: Oct. 16, 2025. [Online]. Available: https://doi.org/10.1177/10944281221115374
2510.14955
REALDPO: REAL OR NOT REAL, THAT IS THE PREF- ERENCE Guo Cheng3∗ Danni Yang1∗ Ziqi Huang2† Jianlou Si5 Chenyang Si4 Ziwei Liu2B 1Shanghai Artificial Intelligence Laboratory 2S-Lab, Nanyang Technological University 3University of Electronic Science and Technology of China 4Nanjing University 5SenseTime Research Project Page: Vchitect.github.io/RealDPO-Project (b) Real Video vs Generated Video (a) Reward-Based Method vs RealDPO (Ours) Reward Model Video 1 Generation Model Video 2 Video 3 Score 1 Score 2 Score 3 DPO Training Loss Feedback Video 1 Video 2 Video 3 Generation Model DPO Training Win Sample Lose Sample Lose Sample Real Video Data Overcome the limitations of the Reward model Win Sample Lose Sample Reference Image Sample 2 Sample 3 Sample1 Prevent the occurrence of the Hacking Issues Hard to judge Key Action two woman are hugging -1.06 good 0.58 -0.56 bad 0.85 (c) Indistinguishable Preference for Generated Videos Reward-Based Method RealDPO (Ours) Figure 1: Can we align video generative models using real data as preference data without a reward model? (a) Comparison between using the reward model to score synthetic data for pref- erence learning and our RealDPO method, which uses high-quality real data as win samples. Our method avoids the limitations of the reward model and the associated hacking issues. (b) Com- parison between the video generated by the pretrained model and the real video for the same scene. The three scores on the right represent the scores given by the reward model from VisionReward (Xu et al., 2024a), the human action metric from VBench (Huang et al., 2024a;b), and human preference, respectively. It can be observed that while the existing reward model and VBench can evaluate se- mantic correctness, they are limited in assessing human motion quality. (c) Three model-generated examples from the same prompt, each with different initial noise, exhibit poor limb interaction, making it challenging for human annotators to identify which sample should be chosen as the win sample for reward model training. ABSTRACT Video generative models have recently achieved notable advancements in synthe- sis quality. However, generating complex motions remains a critical challenge, as existing models often struggle to produce natural, smooth, and contextually consistent movements. This gap between generated and real-world motions lim- its their practical applicability. To address this issue, we introduce RealDPO, a novel alignment paradigm that leverages real-world data as positive samples for preference learning, enabling more accurate motion synthesis. Unlike tradi- tional supervised fine-tuning (SFT), which offers limited corrective feedback, Re- alDPO employs Direct Preference Optimization (DPO) with a tailored loss func- tion to enhance motion realism. By contrasting real-world videos with erroneous model outputs, RealDPO enables iterative self-correction, progressively refining motion quality. To support post-training in complex motion synthesis, we propose RealAction-5K, a curated dataset of high-quality videos capturing human daily activities with rich and precise motion details. Extensive experiments demonstrate ∗Equal Contributions. †Project Lead. BCorresponding Author. 1 arXiv:2510.14955v1 [cs.CV] 16 Oct 2025 Reference Frame "The video showcases a sincere moment between two young friends against a dark green tiled wall. The individual on the left, wearing a black cap and a black T-shirt with "Dylan" written across it, and the friend on the right, in a white T-shirt and a black backward cap, both move towards each other. They share a warm embrace, their faces lit up with genuine smiles. The scene is a candid capture of friendship and affection, with the dark green tiles providing a simple yet effective backdrop that highlights their interaction." SFT RealDPO Figure 2: RealDPO vs SFT. A qualitative comparison between RealDPO and supervised fine-tuning (SFT). RealDPO demonstrates more natural motion generation. For more details regarding the com- parison, please refer to the supplementary material. that RealDPO significantly improves video quality, text alignment, and motion re- alism compared to state-of-the-art models and existing preference optimization techniques. 1 INTRODUCTION With the advancement in computational power and the availability of large-scale data, video gener- ation models (Yang et al., 2024b; Guo et al., 2023; Blattmann et al., 2023; Li et al., 2024; Lin et al., 2024; Wang et al., 2024b; Xing et al., 2024; Zhang et al., 2023) have made significant progress, producing more realistic and diverse visual content. However, when it comes to generating complex motions, existing models still face considerable challenges in creating motion sequences that adhere to appear natural and smooth, and align with contextual consistency. This issue becomes especially prominent in the generation of human-centric daily activity motions. As shown in Fig. 1(b), even the results generated by the state-of-the-art DiT-based model CogVideoX-5B (Yang et al., 2024b) exhibit unnatural and unrealistic movements, failing to meet human preferences for natural, smooth, and contextually appropriate actions. This prompts us to further explore how to improve the realism and rationality of complex motion generation, particularly in the domain of human motion synthesis. A straightforward solution is to collect a set of high-quality, real-world data specifically for su- pervised fine-tuning (SFT). However, relying exclusively on this dataset for SFT training presents certain limitations. During optimization, the model interprets the provided data as the sole correct reference, lacking awareness of where the original model’s errors stem from. This narrow focus may result in overfitting and suboptimal performance in Fig. 2. A more effective strategy would be letting the model learn from its own mistakes. By utilizing the difference between real samples (positive data) and generative samples (negative data), we can explicitly highlight the model’s er- rors and guide it to correct its behavior. This approach enables the model to progressively align its outputs with the desired actions represented by the positive samples, fostering continuous im- provement through self-reflection. This idea aligns perfectly with Direct Preference Optimization (DPO) (Rafailov et al., 2023), a reinforcement learning technique used in training large language models, which leverages pair-wise win-lose data to guide the learning process. In video generation, recent studies (Liu et al., 2024; Wang et al., 2024c; Xu et al., 2024a; Yuan et al., 2024; Zhang et al., 2024a) have explored training fine-grained reward models using human-annotated preference datasets, primarily through three ways: reward-weighted regression (RWR) (Wang et al., 2024c), direct preference optimization (DPO) (Liu et al., 2024), and gradient feedback (GF) (Yuan et al., 2024). However, these methods face some critical challenges when directly applied to action-centric video generation: (1) Reward Hacking: Video reward model is susceptible to reward hacking, where during the optimization process, human evaluations indicate a decline in video quality, yet the reward model continues to assign high scores. (2) Scalability Issue: Online approaches require decoding latent to pixel space, limiting their scalability for high- resolution video generation. (3) Bias Propagation: Multi-dimensional reward models may lose the ability to assess specific key metrics due to linear combinations of evaluation criteria. As shown in 2 Fig. 1(b), the reward model cannot provide an accurate evaluation for complex motion. These limi- tations highlight the need for a more robust approach tailored to complex motion video generation, motivating our extension beyond traditional DPO frameworks. To address these challenges, we propose RealDPO, a novel training pipeline for generating action- centric activity videos, as shown in Fig. 1(a). Unlike prior methods that rely on model-sampled pairwise comparisons, RealDPO leverages real-world video data as win samples, overcoming the Real Data Deficiency issue where only using synthetic data for preference learning fails to address the distribution errors inherent in the pre-trained generative model. More importantly, this approach significantly enhances the model’s learning upper bound, enabling more accurate video generation. Without real video guidance, as shown in Fig. 1(c), all samples generated by pre-trained model exhibit poor limb interaction, making it hard for human annotators to identify the preferred win sample. Additionally, since RealDPO directly uses real data to guide the preference learning, it eliminates the need for an external reward function, thereby avoid reward hacking and bias propaga- tion issues. Moreover, our naturally paired win-lose samples eliminate the need for decoding latent to pixel space during training, drastically reducing computational overhead. Inspired by Diffusion- DPO (Wallace et al., 2024), we design a tailored DPO loss specifically for the training objective of diffusion-based transformer architectures, enabling effective preference alignment. To support this training, we introduce RealAction-5K, a compact yet high-quality video dataset capturing di- verse human daily actions. The dataset adheres to the principle of “less is more”, emphasizing that RealDPO requires fewer high-quality samples in synergy with model-generated negative samples, whereas traditional supervised fine-tuning (SFT) methods typically requires more data to achieve better performance. Experiments demonstrate that RealDPO significantly improves video quality, text alignment, and action fidelity across diverse human action scenarios compared to pretrained baselines and other preference alignment methods. Our contributions are summarized as follows: • We propose RealDPO, a novel training pipeline for action-centric video generation that leverages real-world data as preference signals to contrastively reveal and correct the model’s inherent mistakes, addressing the limitations of existing reward models and pref- erence alignment methods. • We design a tailored DPO loss for our video generation training objective, enabling ef- ficient and effective preference alignment without the scalability and bias issues of prior approaches. • We introduce RealAction-5K, a compact yet high-quality curated dataset focused on hu- man daily actions, specifically crafted to advance preference learning for video generation models and broader applications. 2 RELATED WORK Diffusion-Based Video Generation. In recent years, diffusion-based video generation models have emerged continuously, primarily generating videos through user-provided text or image prompts. These models are broadly categorized into two architectures: U-Net and Diffusion Transformers (DiT). U-Net-based approaches (Blattmann et al., 2023; Wang et al., 2023; Chen et al., 2024; Guo et al., 2023) build upon the multi-stage down-sampling and up-sampling framework of image diffu- sion models, incorporating temporal attention layers to ensure temporal consistency. However, these methods face limitations in motion dynamics and content richness. Recently, Diffusion Transformer- based methods (Yang et al., 2024b; Li et al., 2024; Lin et al., 2024) have made significant improve- ments by combining 3D-VAE with diffusion transformers, using 3D full-attention layers to jointly learn spatial-temporal correlations, and enhance text encoders to handle complex prompts. These advancements have led to substantial improvements in fidelity, consistency, and scalability for longer video generation. Reinforce Learning in Image/Video Generation. In large language models (LLMs), reward mod- els are commonly used in Reinforcement Learning Human Feedback (RLHF), enabling LLMs to respond more naturally and generate more coherent text. Recently, there have been a series of stud- ies(Xu et al., 2024b; Black et al., 2023; Wallace et al., 2024; Lee et al., 2023; Liang et al., 2024; Yang et al., 2024a; Clark et al., 2023; Fan et al., 2024) in image generation that incorporate human pref- erences into model evaluation and model alignment training, mainly focusing on improving the aes- thetic quality of images. In video generation, the exploration so far is still quite limited. Most related 3 (c) Video Caption Word Distribution (d) Action Content Distribution (e) Prompt Length Distribution 1 bicycle, cooking drinking ... Topic Words Raw Videos filter “horizontal” Video LLM --------------------- --------------------- --------------------- ------- 2 3 Caption Video Understanding Model Final Videos “Please describe this video in detail.””l. Manual checking Corse Videos High Quality filter (a) Samples of RealAction-5K Dataset (b) Data Collection and Processing Pipeline RealAction-5K Figure 3: Overview of the RealAction-5K Dataset. (a) Samples of RealAction-5K Dataset (b) Data Collection and Processing Pipeline (c) Video Caption Word Distribution (d) Action Content Distribution (e) Prompt Length Distribution works(Yuan et al., 2024; Wu et al., 2024; Wang et al., 2024c; Liu et al., 2024; Zhang et al., 2024a; Liu et al., 2025) are mainly focusing on using reward models trained on human-annotated synthetic data for preference learning on model-generated data. However, these methods have some limita- tions. For example, training reward models may suffer from hacking issues, and multi-dimensional reward models might show reduced evaluation ability in specific domains. Additionally, relying entirely on synthetic data for preference learning could hinder the model’s potential. Therefore, we propose a novel approach that transcends the limitations of reward models by incorporating real data for preference-aligned learning. 3 PRELIMINARIES 3.1 DENOISING PROCESS AS MULTI-STEP MDP According to the definition in the Yang et al. (2024a), the denoising process in diffusion models can be formulated as a multi-step Markov Decision Process (MDP). Here, we provide a further explanation of state representations st, probability transition P, and policy functions π, establishing a correspondence between video diffusion models and the MDP framework. This mapping enables a reinforcement learning perspective on the sampling process in video diffusion models. The detailed notation correspondence between the diffusion model and the MDP is as follows: st ≜(c, t, xt) P(st+1 | st, at) ≜(δc, δt+1, δxt−1) at ≜xt−1 π(at | st) ≜pθ(xt−1 | c, t, xt) ρ0(s0) ≜(p(c), δ0, N(0, I)) r(st, at) ≜r((c, t, xt), xt−1) (1) where δx represents the Dirac delta distribution, and t denotes the denoising timesteps. Leveraging this mapping, we can employ RL techniques to fine-tune diffusion models by maximizing returns. However, this approach requires a proficient reward model capable of adequately rewarding the noisy images. The task becomes exceptionally challenging, particularly when t is large, and xt closely resembles Gaussian noise, even for an experienced expert. 4 RealAction Xk w noise ǂ0 w يǏ Ǜ ǂ0 w ǂǏ l,a ǂǏ l,b ǂǏ l,c Video Captioner يǏ l,a ǂ0 l,a يǏ l,b ǂ0 l,b يǏ l,c ǂ0 l,c Timestep Selector k DiT Negative Sample Velocity Postive Sample Velocity ǂ0 l,a ǂ0 l,b ǂ0 l,c noise noise a noise b noise c Lose Samples DiT DiT T5 Text Encoder DiT DiT ǂǏ Ǜ ǂǏ ǐ,Dž ǂ0 Ǜ ǂ0 w ǂ0 l,a ǂ0 l,a Pred Latent Orig Latent Reference Model Training Model Pred Latent 1 2 3 4 5 6 ǂ0 Ǜ ×k ×k ǂ0 ǐ,Dž lj푡ljǜ푡 lj푡ljǜ푡 lj푡ljǜ푡 DPO Loss ( ) 3 2 3 1 Win sample 6 5 6 4 ( ) lose sample ref model train model ref model train model The win sample of timestep t share weight Xt w Xt l,a Xt l,b Xt l,c The lose sample a,b,c of timestep t text embedding Get velocity function Update the params by gradient decent Update the params by EMA strategies DiT DiT Reference Model Training Model DiT DiT Training Iteration DiT DiT DiT DiT T EMA Update t start A little girl is beating eggs... Win-lose Sampling for DPO Training Diagram of Tailored RealDPO Loss Reference Model Update Strategy first frame Figure 4: The RealDPO Framework. We use real data as the win samples in DPO, and illustrate the data pipeline on the left hand side. We present the RealDPO loss, and reference model update strategy on the right hand side. 3.2 DPO FOR DIFFUSION MDP Direct Preference Optimization (DPO) (Rafailov et al., 2023) is a preference-based fine-tuning method that directly optimizes a model using human preference data, without requiring an explicit reward model. This approach is particularly advantageous as it avoids the complexities and poten- tial biases introduced by learned reward models, making the optimization process more stable and interpretable. Given a dataset of preference-labeled pairs {(x, yw, yl)}, where x is the input prompt, yw is the preferred (win) output, and yl is the less preferred (lose) output, DPO aims to maximize the likelihood ratio between the preferred and non-preferred samples while maintaining closeness to the pretrained model. The optimization objective can be formulated as: LDPO(θ)=−Ec,xw,xl  log σ  β log pθ(xw|c) pref(xw|c)) −β log pθ(xl|c) pref(xl|c)  (2) where: pθ(xw|c) is the likelihood of generating win output xw given input c under the fine-tuned model, pref(xw|c) is the likelihood under the reference (pretrained) model. β is a temperature pa- rameter that controls the sharpness of preference optimization. σ(·) is the sigmoid function ensuring a proper probability score. According to the derivation in reference (Wallace et al., 2024), the training objective of Diffusion- DPO is defined as: L(θ) = −E(xw 0 ,xl 0)∼D,t∼U(0,T ),xw t ∼q(xw t |xw 0 ),xl t∼q(xl t|xl 0) log σ (−βTω(λt) ( ∥ϵw −ϵθ(xw t , t)∥2 2 −∥ϵw −ϵref(xw t , t)∥2 2 − ∥ϵl −ϵθ(xl t, t)∥2 2 −∥ϵl −ϵref(xl t, t)∥2 2  (3) where xw t = αtxw 0 + σtϵw, ϵw ∼N(0, I) is a draw from q (xw t | xw 0 ). λt = α2 t/σ2 t is the signal- to-noise ratio. ω(λt) is a weighting function, usually kept constant. 4 THE REALDPO PARADIGM In this section, we introduce our fine-tuning pipeline RealDPO to align video diffusion models with our constructed preferences data, as shown in Fig. 4. Firstly, we introduce our proposed dataset, RealAction, and the pipeline for constructing preference data in Sec.4.1. Then, in Sec.4.2, we present the win-lose sampling approach used for DPO fine-tuning training. Finally, in Sec.4.3, we delve into the alignment training process for the video generation model using preference data. 4.1 REALACTION: PREFERENCE DATA COLLECTION Preference data is essential for reinforcement learning. To acquire it, we designed a robust data processing pipeline that efficiently collects, filters, and processes data, ensuring its high quality, diversity, and representativeness. 5 Collect raw video based on keywords. Our dataset construction begins with selecting relevant topics to collect raw video data, ensuring diversity and real-world representativeness. As shown in Fig. 3(d), we designed daily activity themes across over ten scenarios, such as sports, eating, drinking, walking, and gathered high-quality video clips using these keywords. This step captures diverse actions, participants, and backgrounds, establishing a strong foundation for preference-based training. Use VideoLLM to filter low-quality videos. After collecting the raw videos, we use a video LLM, Qwen2-VL (Wang et al., 2024a) , to filter out rough or irrelevant videos. We provide some instruc- tions for Qwen2-VL to identify and discard videos that do not meet quality standards or are not aligned with the selected topic. Through this filtering process, low-quality content is significantly reduced, ensuring that only clear and meaningful videos proceed to next processing stages. Manual inspection ensures the accuracy. We let human annotators carefully examine the videos to confirm whether they accurately represent the intended theme, have correct actions, and do not contain misleading or irrelevant content. This additional validation step further refines the dataset, ensuring it aligns with preference-based training goals. Generate detailed descriptions for videos. We employ a video understanding model, LLaVA- Video (Zhang et al., 2024b), to generate accurate descriptive captions for each video. These descrip- tions accurately reflect the actions, participants, and appearance. These captions serve as valuable metadata, later used for sampling negative samples. The word cloud composed of high-frequency words in the description caption of these videos is shown in Fig. 3(c). And the length distribution of captions in our constructed dataset is shown in the Fig 3(e). 4.2 WIN-LOSE SAMPLING FOR DPO TRAINING After obtaining real data, we take the real video Xw from RealAction as win sample. The latent after compression through the VAE encoder is Xw 0 . We design a Timestep Selector that randomly generates a timestep k for each round of positive and negative sampling. We add k steps of random noise to Xw 0 , obtaining Xw k . Then, together with the caption embedding etext, we input this into the DiT transformer to get the predicted noise ˆϵw k . Finally, we input ˆϵw k to Positive Sample Velocity to obtain the predicted latent ˆxw 0 , which is prepared for the subsequent DPO loss. ˆϵw k = θ xw k , etext , ˆxw 0 = ψ (ˆϵw k ) , (4) where θ is the training DiT model, ψ is the process of positive sample velocity. For negative samples, in order to ensure diversity, we first randomly generate three init noises ϵa, ϵb, ϵc. These are then combined with the positive sample’s caption embedding etext and we input them together into the DiT, where we sample the full timesteps to obtain three negative samples xl,a 0 , xl,b 0 , xl,c 0 . This step is done offline, and we only need to store the latent of the negative samples. During training optimization, similar to the positive samples, we add k steps of random noise to these three samples to obtain xl,a k , xl,b k , xl,c k . Then, together with caption embedding etext, we input these into the DiT transformer to get the predicted noise ˆϵl,a k , ˆϵl,b k , ˆϵl,c k . Finally, we input predicted noise to Negative Sample Velocity to obtain the predicted latents for the negative samples ˆxl,a 0 , ˆxl,b 0 , ˆxl,c 0 . ˆϵl,∗ k = θ  xl,∗ k , etext , ˆxl,∗ 0 = ψ  ˆϵl,∗ k  , (5) where ∗is set of {a, b, c}, ψ is the process of negative sample velocity. It’s important to note that the first sampling of the negative samples is done offline, while the second sampling for both positive and negative samples involves only one step, saving a significant amount of time during training. 4.3 PREFERENCE LEARNING FOR VIDEO GENERATION After positive and negative samples are prepared, we can use this preference data for DPO training. Due to the constraints of the reference model in DPO training, similarly, we also resample the win- lose samples through the reference model to obtain ˜xw 0 and ˜xl,a 0 . Here, we take the first negative 6 Table 1: Quantitative Comparison on RealAction-TestBench by User Study. We provided users with a five dimensional evaluation, namely Overall Quality, Visual Alignment, Text Alignment, Motion Quality and Human Quality, to compare our model with the pre-trained baseline (Yang et al., 2024b), supervised fine-tuning(SFT), LiFT (Wang et al., 2024c), VideoAlign (Liu et al., 2025). Testers are required to rank the results generated by these models, and we converted the rankings into win rates. Method Overall Visual Text Motion Human Quality Alignment Alignment Quality Quality Baseline (Yang et al., 2024b) 65.56 72.22 71.89 65.56 66.00 SFT 58.22 65.22 68.44 59.11 60.33 LiFT (Wang et al., 2024c) 67.34 73.44 64.33 65.00 67.33 VideoAlign (Liu et al., 2025) 61.00 68.11 68.78 57.22 59.78 RealDPO (Ours) 73.33 77.44 77.00 71.00 72.89 "The video begins with a man wearing a white shirt and blue tie, sitting at an outdoor table. He holds a cup of coffee in one hand, taking a sip, while his other hand is occupied with writing notes on a notepad. The table also holds a smartphone and a pair of sunglasses, suggesting a busy day ahead. The setting is an urban street scene with brick walls and parked cars in the background, providing a realistic and bustling atmosphere. The scene captures a moment of focus and productivity amidst the daily hustle." First Frame Before Alignment After Alignment First Frame "The video begins in a sunny park with lush green grass and trees swaying gently in the breeze. A young blonde girl, dressed in a light pink sweater and floral shorts, stands with a joyful smile. Holding a small treat in her hand, she lifts it slightly to engage the attentive golden retriever sitting before her. The dog's golden fur gleams under the sunlight as it excitedly raises a paw, reaching out to shake hands with the girl." Before Alignment After Alignment Figure 5: Qualitative Results. We visualize the effect of before and after applying RealDPO. See the supplementary for videos. sample ˜xl,a 0 as an example to explain the loss. According to the training objective of CogVideoX- 5B (Yang et al., 2024b), we rewrite Eq. 3 as follows: LDP O(θ) = −E  log σ −βTω(λt) ∥xw 0 −ˆxw 0 ∥2 2 −∥xw 0 −˜xw 0 ∥2 2 − ∥xl 0 −ˆxl 0∥2 2 −∥xl 0 −˜xl 0∥2 2  , (6) where xw 0 /xl 0 are the original win/lose sample, ˆxw 0 /ˆxl 0 are the predicted latents for the win/lose sam- ple by the training model, ˜xw 0 /˜xl 0 are the predicted latents for the win/lose sample by the reference model. The role of the reference model is to constrain the training process of the training model, preventing over-optimization or deviation from the desired objectives. To enhance alignment of the model with human preferences, we gradually improve the capability of the preference model and perform multiple rounds of resampling, ensuring that the training process iteratively refines its predictions and better captures the desired outcomes. In practice, every t train- ing steps, the reference model needs to be updated using the exponential moving average (EMA) algorithm. θt ref ←ωθt ref + (1 −ω)θt, (7) where θt ref is the parameters of the reference model, θt denotes the parameters of the training model, and ω is the decay coefficient of EMA, set to 0.996 in our experiments. 7 Table 2: Quantitative Comparison on VBench-I2V and RealAction-TestBench using MLLM. We evaluate performance via Visual Alignment (VA), Text Alignment (TA), Motion Quality (MQ), and Human Quality (HQ), which are consistent with the sensory perceptions of humans in user study. We use open-source Qwen2-VL (Wang et al., 2024a) supporting video understanding. We provide a detailed instruction template for evaluating video generation quality using MLLM in the appendix. Method VBench-I2V RealAction-Test Bench VA ↑ TA ↑ MQ ↑ HQ ↑ VA ↑ TA ↑ MQ ↑ HQ ↑ Baseline (Yang et al., 2024b) 97.78% 97.71% 89.86% 90.34% 96.11% 99.22% 90.22% 91.89% SFT 97.15% 98.26% 90.03% 89.38% 93.89% 98.89% 90.78% 92.89% LiFT (Wang et al., 2024c) 97.54% 97.91% 89.25% 90.24% 97.54% 97.89% 92.00% 91.67% VideoAlign (Liu et al., 2025) 97.99% 97.66% 89.54% 90.84% 96.44% 98.89% 92.00% 92.89% RealDPO (Ours) 97.99% 97.74% 89.46% 90.10% 96.67% 99.22% 91.67% 94.11% Table 3: Quantitative Comparisons with baselines and reward-based methods via VBench- I2V (Huang et al., 2024a;b), on RealAction-TestBench. Model I2V Subject Background Motion Dynamic Aesthetic Imaging Subject Consistency Consistency Smoothness Degree Quality Quality CogvideoX-5B Baseline (Yang et al., 2024b) 96.10 90.43 94.01 98.15 55.56 59.63 67.01 SFT 96.47 89.50 93.18 98.06 66.67 59.69 67.06 LiFT (Wang et al., 2024c) 96.50 92.34 94.46 98.20 38.89 60.51 68.40 VideoAlign (Liu et al., 2025) 96.55 92.23 94.29 98.37 50.00 60.21 67.66 RealDPO (Ours) 96.58 91.68 94.47 98.31 55.56 61.37 68.05 5 EXPERIMENTS We present the main experiments and discussions in this section. Please refer to the supplementary material for implementation details on the models and evaluation metrics. 5.1 QUANTITATIVE COMPARISONS Quantitative Comparison by User Study. Tab. 1 showcases the evaluation results on the RealAction-TestBench test set, where testers were invited to rank the generated outputs of the pre- trained baseline (Yang et al., 2024b), supervised fine-tuning (SFT), LiFT (Wang et al., 2024c), VideoAlign (Liu et al., 2025), and our RealDPO. The evaluation covers five dimensions: Overall Quality, Visual Alignment, Text Alignment, Motion Quality, and Human Quality. The scores for each model across these dimensions were calculated and summarized. As shown in Tab. 1, our RealDPO demonstrates significant improvements over baseline and SFT in multiple dimensions, indicating that our proposed data effectively enhance the capabilities of RealDPO in action-centric scenarios. Additionally, compared to other preference alignment algorithms utilizing reward mod- els, such as LiFT (based on Reward Weighted Regression) and VideoAlign (a naive DPO variant using synthetic data), our approach of leveraging real data as win samples and synthetic data as lose samples also proves its effectiveness. Quantitative Comparison Using MLLM. To enhance the diversity of evaluation, we employ MLLM capable of video understanding tasks to assess the results generated by the models in a question-answer format across multiple dimensions. In Tab. 2, we selected Qwen2-VL (Wang et al., 2024a) as an evaluation tool, to align the assessment dimensions with user study: Visual Alignment (VA), Text Alignment (TA), Motion Quality (MQ), and Human Quality (HQ) on VBench-I2V test benchmark (Huang et al., 2024b) and RealAction-TestBench. For each dimension, we designed sev- eral questions, and a ”yes” response from the large models indicates a passing score. The scores for all questions within each dimension were aggregated to calculate the total score. Experimental re- sults show that, based on the evaluation by video-language understanding models, our model shows 8 The video captures a moment in an open field where a man, dressed in a grey t-shirt and navy blue shorts, is standing with his hand outstretched towards a black dog with a white chest. The man, wearing tan boots, is smiling as the dog eagerly raises one paw to meet his hand in a friendly shake. The dog's tail wags, indicating its excitement and happiness. The field is dotted with patches of grass, and the sky above is overcast, creating a serene backdrop for this interaction. The natural light highlights the connection between the man and the dog as they share a moment of companionship. First Frame RealDPO VideoAlign SFT First Frame RealDPO LiFT SFT The video captures the exhilarating action of a kite surfer riding the waves. The individual, clad in a wetsuit, is seen expertly maneuvering a surfboard across the choppy sea surface. With a firm grip on the kite's control bar, they harness the power of the wind to glide effortlessly over the water. The surfer's stance is dynamic, with knees bent and body leaning back slightly, demonstrating balance and control. As they cut through the waves, water sprays up around the board, creating a dramatic effect against the backdrop of the ocean's vast expanse. The scene is a testament to the surfer's skill and the thrilling sport of kite surfing. Figure 6: Qualitative Comparison.We recommend readers refer to our appendix files to view more visualizations. competitive results, consistent with human evaluation results, further validating the effectiveness of our RealDPO. Quantitative Comparison Using VBench-I2V Metric. Meanwhile, in video generation, VBench (Huang et al., 2024a;b) is widely recognized as an authoritative evaluation framework. Leveraging VBench-I2V’s automated metrics designed for Image-to-Video (I2V) evaluations in VBench++ (Huang et al., 2024b), we assessed the quality of our test set, revealing that RealDPO achieves competitive performance across multiple general metrics. 5.2 QUALITATIVE COMPARISONS In Fig. 5, we present the visual comparison results before and after RealDPO alignment. We observe that RealDPO is highly effective in enhancing the naturalness and smoothness of the actions, as well as their consistency with the textual instructions. In Fig. 6, we present the visual comparison results of our method against other alignment approaches, such as LiFT (Wang et al., 2024c) and VideoAlign (Liu et al., 2025). It can be observed that the videos generated by RealDPO are more stable and less prone to unnatural actions or visual collapse. For instance, in the example on the left, SFT exhibits a collapse of the character’s limbs, and the coordination of the dog’s four legs appears unnatural. The results of LiFT are slightly better, but LiFT fails to complete the handshake action between the protagonist and the dog, resulting in poor alignment with the text. In contrast, our results demonstrate higher visual quality, with action details highly consistent with the textual instructions and no visual collapse. In the example on the right, the text describes the surfer’s posture as “with knees bent and body leaning back slightly”. SFT shows visual collapse, misaligned actions, and poor consistency in character appearance. VideoAlign performs slightly better, but the generated posture and actions are not highly aligned with the text. In comparison, our results exhibit higher image quality, more accurate action details, and overall superior performance. 6 CONCLUSION In this paper, we propose RealDPO, a novel and data-efficient framework for preference align- ment in video generation, leveraging real-world data as win samples to address challenges in gen- erating complex motions like human actions. By designing a tailored DPO loss and building on diffusion-based transformer architectures, we establish a robust real-data-driven alignment frame- work. To support this, we introduce RealAction-5K, a compact yet high-quality dataset for human daily actions. Extensive experiments show that RealDPO significantly improves visual alignment, text alignment, motion quality, and overall video quality, outperforming traditional fine-tuning and other alignment methods. Our work advances the upper bound of preference alignment and provides a scalable solution for complex motion video generation. We will explore extending RealDPO to broader domains in the future. 9 Ethics statement. Our RealAction-5K dataset was curated from publicly available video sources with appropriate licenses. To address privacy concerns, all personally identifiable information was meticulously anonymized, and the dataset will be released strictly for non-commercial research purposes to mitigate the risk of misuse. All necessary legal and ethical guidelines concerning data provenance and usage were adhered to throughout the project. Additionally, the effectiveness of our RealDPO paradigm is inherently limited by the architectural constraints of the underlying video generative model. We emphasize the need for responsible use, particularly when generating human figures, to prevent potential misuse. Reproducibility statement. To ensure the reproducibility of our work, we have made significant efforts to document our methodology and resources comprehensively. The core of our approach, the RealDPO alignment paradigm, including its tailored loss function, is described in detail within the paper. Furthermore, we provide a complete account of the data collection and processing pipeline for the RealAction-5K dataset. This dataset was meticulously curated by manually sourcing high- quality videos from https://pexels.com. The process involved a combination of targeted scraping and manual downloading, followed by rigorous manual screening and clipping to ensure each video clip depicts a single, coherent action that can be accurately described in text, thereby guaranteeing high quality and clarity. 7 ACKNOWLEDGEMENTS This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012, MOE-T2EP20223-0002). This research is also supported by cash and in- kind funding from NTU S-Lab and industry partner(s). This work is also supported by Shanghai Artificial Intelligence Laboratory. This work was partially supported by Nanjing Kunpeng&Ascend Center of Cultivation. REFERENCES Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301, 2023. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7310–7320, 2024. Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint arXiv:2309.17400, 2023. Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Reinforcement learning for fine- tuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024. Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffu- sion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianx- ing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21807–21818, 2024a. Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chan- paisit, Chenyang Si, Yuming Jiang, Yaohui Wang, Xinyuan Chen, Ying-Cong Chen, Limin Wang, 10 Dahua Lin, Yu Qiao, and Ziwei Liu. Vbench++: Comprehensive and versatile benchmark suite for video generative models. arXiv preprint arXiv:2411.13503, 2024b. Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023. Zhimin Li, Jianwei Zhang, Qin Lin, Jiangfeng Xiong, Yanxin Long, Xinchi Deng, Yingfang Zhang, Xingchao Liu, Minbin Huang, Zedong Xiao, et al. Hunyuan-dit: A powerful multi-resolution diffusion transformer with fine-grained chinese understanding. arXiv preprint arXiv:2405.08748, 2024. Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image genera- tion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19401–19411, 2024. Bin Lin, Yunyang Ge, Xinhua Cheng, Zongjian Li, Bin Zhu, Shaodong Wang, Xianyi He, Yang Ye, Shenghai Yuan, Liuhan Chen, et al. Open-sora plan: Open-source large video generation model. arXiv preprint arXiv:2412.00131, 2024. Jie Liu, Gongye Liu, Jiajun Liang, Ziyang Yuan, Xiaokun Liu, Mingwu Zheng, Xiele Wu, Qiulin Wang, Wenyu Qin, Menghan Xia, et al. Improving video generation with human feedback. arXiv preprint arXiv:2501.13918, 2025. Runtao Liu, Haoyu Wu, Zheng Ziqiang, Chen Wei, Yingqing He, Renjie Pi, and Qifeng Chen. Videodpo: Omni-preference alignment for video diffusion generation. arXiv preprint arXiv:2412.14167, 2024. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228–8238, 2024. Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Mod- elscope text-to-video technical report. arXiv preprint arXiv:2308.06571, 2023. Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model’s perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024a. Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. International Journal of Computer Vision, pp. 1–20, 2024b. Yibin Wang, Zhiyu Tan, Junyan Wang, Xiaomeng Yang, Cheng Jin, and Hao Li. Lift: Leveraging human feedback for text-to-video model alignment. arXiv preprint arXiv:2412.04814, 2024c. Xun Wu, Shaohan Huang, Guolong Wang, Jing Xiong, and Furu Wei. Boosting text-to-video gener- ative model with mllms feedback. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Jinbo Xing, Menghan Xia, Yong Zhang, Haoxin Chen, Wangbo Yu, Hanyuan Liu, Gongye Liu, Xintao Wang, Ying Shan, and Tien-Tsin Wong. Dynamicrafter: Animating open-domain images with video diffusion priors. In European Conference on Computer Vision, pp. 399–417. Springer, 2024. Jiazheng Xu, Yu Huang, Jiale Cheng, Yuanming Yang, Jiajun Xu, Yuan Wang, Wenbo Duan, Shen Yang, Qunlin Jin, Shurun Li, et al. Visionreward: Fine-grained multi-dimensional human prefer- ence learning for image and video generation. arXiv preprint arXiv:2412.21059, 2024a. 11 Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36, 2024b. Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8941–8951, 2024a. Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024b. Hangjie Yuan, Shiwei Zhang, Xiang Wang, Yujie Wei, Tao Feng, Yining Pan, Yingya Zhang, Zi- wei Liu, Samuel Albanie, and Dong Ni. Instructvideo: instructing video diffusion models with human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6463–6474, 2024. Jiacheng Zhang, Jie Wu, Weifeng Chen, Yatai Ji, Xuefeng Xiao, Weilin Huang, and Kai Han. On- linevpo: Align video diffusion model with online video-centric preference optimization. arXiv preprint arXiv:2412.15159, 2024a. Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. I2vgen-xl: High-quality image-to-video synthesis via cascaded diffusion models. arXiv preprint arXiv:2311.04145, 2023. Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data. arXiv preprint arXiv:2410.02713, 2024b. 12 APPENDIX This supplementary material provides more qualitative results, details of the evaluation, experi- mental results, pseudo-code of RealDPO. Section A elaborates on additional visual comparisons of generated videos, including comparisons with pre-trained models, supervised fine-tuning, and other alignment methods. Section B details the evaluation process, covering the design of user studies, evaluation using LLMs, evaluation using the VBench-I2V metric, as well as interfaces, instructions, and an introduction to automated evaluation metrics. Section C presents the pseudo-code of our core algorithm, RealDPO. A MORE QUALITATIVE RESULTS Due to space limitations in the main text, this section presents additional visual comparisons, in- cluding comparisons with pre-trained models, supervised fine-tuning, and other alignment methods. The results demonstrate that our approach achieves superior performance across a wider range of samples, with enhanced visual-text alignment, text alignment, motion quality, character quality, and overall quality. These findings further validate the effectiveness of the RealDPO framework. This provides new insights and methodologies for future multi-modal generation tasks. A.1 COMPARISON WITH PRE-TRAINED MODEL Pre-trained base models are typically trained on large-scale datasets and exhibit strong generalization capabilities. However, they may underperform on specific tasks, particularly those requiring fine- grained alignment, such as the generation of videos with complex motion as discussed in our work. RealDPO, by incorporating guidance from real-world data through Direct Preference Optimization (DPO), excels at capturing intricate details in tasks, especially in image-text alignment. As shown in Fig. 7, compared to pre-trained models, RealDPO demonstrates significantly improved consistency in visual-text alignment and notable enhancements in the details of characters and motions. A.2 COMPARISON WITH SUPERVISED FINE-TUNING Supervised fine-tuning relies on annotated data and can achieve strong performance on specific tasks. However, its effectiveness is constrained by the quality and quantity of the available anno- tations. In contrast, RealDPO leverages a diverse set of negative samples and real-world data as positive samples to form multiple preference pairs. This approach enables the model to learn from its own mistakes and align more closely with real-world samples, achieving robust alignment even without extensive labeled data. In particular, as shown in Fig. 8, in terms of motion quality and char- acter quality, RealDPO generates images that are more natural, with smoother motions and richer character details. A.3 COMPARISON WITH OTHER ALIGNMENT METHOD Other reward model based alignment methods, such as LiFT and VideoAlign, may perform well on specific tasks. However, in complex scenarios, the reward models often fail to provide effective feedback, leading to misguidance in preference alignment training. In contrast, RealDPO introduces real-world samples as positive examples and pairs them with multiple negative samples generated by the model, naturally forming contrastive pairs. By guiding the model with positive samples, Re- alDPO enables the model to learn from its mistakes and align more closely with the correct samples, thereby better handling complex cross-modal alignment tasks. As shown in Figure 9, compared to existing alignment methods, RealDPO demonstrates greater stability in visual-text alignment and text alignment, generating images and text that are more semantically consistent, with superior mo- tion quality. 13 "The video begins with a person in grey shorts standing in a lush backyard, their hand outstretched towards a playful spotted puppy. The puppy, with its ears perked up and tongue out, eagerly raises one paw to meet the person's hand in a friendly gesture. The puppy's eyes are focused on the person's hand, and its tail wags with excitement. The backyard is filled with vibrant green grass and a wooden fence, creating a warm and inviting atmosphere for this interaction. The natural light from the setting sun casts a golden hue over the scene, highlighting the connection between the person and the puppy as they share a moment of joy and companionship." First Frame Before Alignment After Alignment First Frame "The video features a blonde skateboarder, dressed in a black jacket and shorts, standing in the middle of a road that stretches into the distance. They have one foot on the skateboard, ready to push off and begin skating. The road is bordered by snow-covered hills under a twilight sky, giving a sense of solitude and adventure. The anticipation of the skateboarder's impending journey is palpable." Before Alignment After Alignment Figure 7: Qualitative Results. Comparison with pre-trained model. Reference Frame "The video captures a woman with her hair tied back, wearing a sleeveless top and jeans, standing in a bedroom. She is in the process of making a bed, holding up a white pillow with both hands, seemingly about to place it on the bed. The bedroom has a calm and tidy appearance, with a large window allowing natural light to brighten the room. The focus is on the woman's task, highlighting the routine activity of homemaking." SFT RealDPO Reference Frame "The video captured a couple enjoying a moment on the beach. The man is wearing a gray T-shirt and shorts, and the woman is wearing a white striped sun skirt, they are embracing When they embrace on the damp beach, gentle waves beat against their feet, creating a playful and romantic atmosphere. The cloudy sky and distant hills added a dramatic backdrop to their leisurely stroll, highlighting the connection between the two and the beauty of the natural environment." SFT RealDPO Figure 8: Qualitative Results. Comparison with supervised fine-tuning. 14 The video begins with a woman with curly blonde hair, wearing a beige cardigan and a black top, sitting indoors. She is holding a chocolate-covered ice cream bar and is offering it to a young child with blonde hair, who is wearing a striped shirt. The child is taking a bite from the ice cream bar. The setting is a bright and airy room with natural light illuminating the scene, creating a warm and joyful atmosphere. The focus is on the interaction between the woman and the child, capturing a moment of shared enjoyment. First Frame RealDPO VideoAlign SFT First Frame RealDPO LiFT SFT The video opens with a pair of hands carefully cutting a small fruit tart on a dark wooden board. The tart has a golden, crisp crust, topped with fluffy whipped cream and fresh strawberry slices. Holding it steady, the person gently presses the knife through, preparing to slice it open. More tarts rest nearby, along with a wooden cutting board decorated with scattered blue flowers. The soft, natural lighting gives the scene a warm and inviting feel.. Figure 9: Qualitative Results. comparison with other Alignment Method. B DETAILS OF THE EVALUATION B.1 IMPLEMENTATION DETAILS Models and Settings. We conduct all experiments on 8 Nvidia H100 GPUs, with a total batch size of 8 for training. For our I2V baseline generation model, we adopt CogVideoX-5B (Yang et al., 2024b), which uses diffusion transformer structure. We fine-tune the parameters of all its transformer blocks on the DeepSpeed framework. The learning rate is set to 1e-5, and all the models are trained for 10 epochs. Evaluation Metric. We evaluate the performance of our aligned model through three aspects: user study, automatic LLM-based evaluation, and the assessment metrics of VBench (Huang et al., 2024a). We selected 18 test cases, including test texts and reference images, which constitute the RealAction-TestBench. For the user study, we invited 10 testers to evaluate our model against other baselines across multiple dimensions. For LLM-based evaluation, we designed a question template to guide the model in making decisions. As for VBench, we utilized the I2V evaluation metrics provided by VBench to perform our evaluation. B.2 EVALUATION BY USER STUDY To ensure a fair and comprehensive evaluation of our model, we conducted a user study to collect subjective feedback on the generated results. Participants were presented with a scoring interface (as shown in Fig. 10) and asked to rate the generated videos based on several key criteria, including Visual Alignment, Text Alignment, Motion Quality and Human Quality and Overall Quality. The interface is designed to be intuitive and user-friendly, allowing participants to provide accurate and unbiased scores. Each video was evaluated by multiple users, and the final scores were averaged to ensure reliability. B.3 EVALUATION USING LLMS In addition to human evaluation, we leveraged large language models (LLMs) to assess the quality of the generated videos. We designed a structured instruction template to guide the LLMs in evaluating video generation quality. The template includes detailed prompts for assessing various aspects of the videos, such as adherence to textual descriptions, visual coherence, and overall aesthetic appeal. By utilizing LLMs, we were able to obtain scalable and consistent evaluations that complement the human user study. The results from the LLM-based evaluation align closely with the user study findings, further validating the effectiveness of our approach 15 B.4 EVALUATION USING VBENCH-I2V METRIC To provide a more objective and fine-grained assessment of our model’s performance, we select seven theme-related and human-perception-aligned representative dimensions of video quality from VBench-I2V Huang et al. (2024a;b) as the final evaluation metrics: I2V Subject, Subject Consis- tency, Background Consistency, Motion Smoothness, Dynamic Degree, Aesthetic Quality, and Imag- ing Quality. Figure 10: User Study Scoring Interface for users to give socre. 16 Instruction Template for Evaluating Video Generation Quality Using LLMs (part1) As a video understanding expert, you will be required to evaluate the quality of model- generated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Visual Alignment This mainly assesses the consistency of the visual representation of the characters in the generated video with the provided first frame image of the characters and environment, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the consistency score of the character’s appearance (such as clothing, hairstyle, skin color) in the generated video? Question 2: What is the consistency score of the environment in the generated video (such as background, lighting, scene setup)? Question 3: What is the score for the character’s proportional changes in the generated video conforming to physical laws? Question 4: What is the fidelity score of the characters or environment in the video (low scores should be given if there are shape distortions or color abnormalities)? Question 5: What is the consistency score of the characters and environment over time (low scores should be given if there are sudden disappearances or changes)? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 17 Instruction Template for Evaluating Video Generation Quality Using LLMs (part2) As a video understanding expert, you will be required to evaluate the quality of model- generated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Text Alignment This assesses the consistency between the actions of the characters in the generated video and the input text description or target behavior category, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the consistency score between the character’s actions in the generated video and the input text description or target behavior category? Question 2: What is the consistency score between the key actions in the video (such as running, hugging, playing an instrument) and the text description? Question 3: What is the score for avoiding actions or distracting elements in the video that are unrelated to the text description? Question 4: What is the accuracy score of the video in conveying the emotions or intentions described in the text? Question 5: What is the score for the video supplementing reasonable details not explicitly mentioned in the text description? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 18 Instruction Template for Evaluating Video Generation Quality Using LLMs (part3) As a video understanding expert, you will be required to evaluate the quality of model- generated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Motion Quality This assesses the smoothness, naturalness, and reasonableness of the character’s movements in the generated video, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the smoothness score of the character’s movements in the generated video (high scores for no stuttering)? Question 2: What is the score for the details of the character’s movements (such as limb movements, gestures) conforming to physical laws? Question 3: What is the naturalness score of the temporal dynamics of the character’s movements (such as speed, rhythm)? Question 4: What is the coordination score between the character’s movements and other elements in the scene (such as objects, background)? Question 5: What is the score for avoiding obvious distortions or unreasonable phenomena in the character’s movements (such as limb twisting, incoherent actions)? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 19 Instruction Template for Evaluating Video Generation Quality Using LLMs (part4) As a video understanding expert, you will be required to evaluate the quality of model- generated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Human Quality This assesses the quality of the generated characters in the video, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the score for the reasonableness of limb distortions in the generated characters (e.g., unnatural joint bends)? Question 2: What is the score for the reasonableness of the number of limbs in the characters (e.g., extra or missing limbs)? Question 3: What is the naturalness score of the facial expressions or body movements of the characters in line with human behavioral characteristics? Question 4: What is the reasonableness score of the interactions between the characters and other objects in the scene (e.g., tools, animals, other people)? Question 5: What is the fluency score of the characters’ behavior in the generated video? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 20 C PSEUDO-CODE OF REALDPO def RealDPO_Loss(model, ref_model, x_w, x_l, c, beta): """ Computes the RealDPO loss for aligning model predictions with preferred and non-preferred samples. Args: model: Diffusion Transformer model. ref_model: Frozen reference model used for comparison. x_w: Preferred real video latents (aligned with the desired output). x_l: Non-preferred video-generated video latents (not aligned with the desired output). c: Conditioning input (e.g., text embeddings, image embeddings). beta: Regularization parameter controlling the strength of the alignment. Returns: realdpo_loss: The computed RealDPO loss value. """ # Sample random timesteps and noise for diffusion process timestep_k = torch.rand(len(x_w)) noise = torch.randn_like(x_w) # Create noisy versions of preferred and non-preferred latents noisy_x_w = (1 - timestep_k) * x_w + timestep_k * noise noisy_x_l = (1 - timestep_k) * x_l + timestep_k * noise # Predict latents using the model and reference model latent_w_pred = model(noisy_x_w, c, timestep_k) latent_l_pred = model(noisy_x_l, c, timestep_k) latent_ref_w_pred = ref_model(noisy_x_w, c, timestep_k) latent_ref_l_pred = ref_model(noisy_x_l, c, timestep_k) # Compute prediction errors for preferred and non-preferred latents model_w_loss = (x_w - latent_w_pred).norm().pow(2) ref_w_loss = (x_w - latent_ref_w_pred).norm().pow(2) model_l_loss = (x_l - latent_l_pred).norm().pow(2) ref_l_loss = (x_l - latent_ref_l_pred).norm().pow(2) # Compute alignment differences w_loss_diff = model_w_loss - ref_w_loss l_loss_diff = model_l_loss - ref_l_loss # Compute the RealDPO loss alignment_term = -0.5 * beta * (w_loss_diff - l_loss_diff) realdpo_loss = -1 * torch.log(torch.sigmoid(alignment_term)) return realdpo_loss 21 D THE USE OF LARGE LANGUAGE MODELS (LLMS) In the preparation of this manuscript, we used GPT-4, a large language model from OpenAI, exclu- sively as a writing assistance tool. Its use was confined to the Introduction and Methods sections, where it served to aid in polishing the text. Specifically, the model was prompted to help restruc- ture sentences for improved clarity and flow, ensure consistent academic tone, and simplify complex technical descriptions. All fundamental ideas, research hypotheses, methodological designs, exper- imental data, analysis, conclusions, and the final intellectual content are solely the product of the authors’ work. The LLM generated no original content or ideas and was not used for data analysis or interpretation. The authors carefully reviewed, edited, and verified all AI-generated text to ensure it accurately reflected their research and adhered to the highest standards of academic integrity. 22
REALDPO: REAL OR NOT REAL, THAT IS THE PREFERENCE Guo Cheng3∗ Danni Yang1∗ Ziqi Huang2† Jianlou Si5 Chenyang Si4 Ziwei Liu2B 1Shanghai Artificial Intelligence Laboratory 2S-Lab, Nanyang Technological University 3 4Nanjing University 5SenseTime Research Project Page: Vchitect.github.io/RealDPO-Project (b) Real Video vs Generated Video (a) Reward-Based Method vs RealDPO (Ours) Reward Model Video 1 Generation Model Video 2 Video 3 Score 1 Score 2 Score 3 DPO Training Loss Feedback Video 1 Video 2 Video 3 Generation Model DPO Training Win Sample Lose Sample Lose Sample Real Video Data Overcome the limitations of the Reward model Win Sample Lose Sample Reference Image Sample 2 Sample 3 Sample1 Prevent the occurrence of the Hacking Issues Hard to judge Key Action two woman are hugging -1.06 good 0.58 -0.56 bad 0.85 (c) Indistinguishable Preference for Generated Videos Reward-Based Method RealDPO (Ours) Figure 1: Can we align video generative models using real data as preference data without a reward model? (a) Comparison between using the reward model to score synthetic data for preference learning and our RealDPO method, which uses high-quality real data as win samples. Our method avoids the limitations of the reward model and the associated hacking issues. (b) Comparison between the video generated by the pretrained model and the real video for the same scene. The three scores on the right represent the scores given by the reward model from VisionReward (Xu et al., 2024a), the human action metric from VBench (Huang et al., 2024a;b), and human preference, respectively. It can be observed that while the existing reward model and VBench can evaluate semantic correctness, they are limited in assessing human motion quality. (c) Three model-generated examples from the same prompt, each with different initial noise, exhibit poor limb interaction, making it challenging for human annotators to identify which sample should be chosen as the win sample for reward model training. ABSTRACT Video generative models have recently achieved notable advancements in synthesis quality. However, generating complex motions remains a critical challenge, as existing models often struggle to produce natural, smooth, and contextually consistent movements. This gap between generated and real-world motions limits their practical applicability. To address this issue, we introduce RealDPO, a novel alignment paradigm that leverages real-world data as positive samples for preference learning, enabling more accurate motion synthesis. Unlike traditional supervised fine-tuning (SFT), which offers limited corrective feedback, RealDPO employs Direct Preference Optimization (DPO) with a tailored loss function to enhance motion realism. By contrasting real-world videos with erroneous model outputs, RealDPO enables iterative self-correction, progressively refining motion quality. To support post-training in complex motion synthesis, we propose RealAction-5K, a curated dataset of high-quality videos capturing human daily activities with rich and precise motion details. Extensive experiments demonstrate ∗Equal Contributions. †Project Lead. BCorresponding Author. 1 16 Oct 2025 Reference Frame "The video showcases a sincere moment between two young friends against a dark green tiled wall. The individual on the left, wearing a black cap and a black T-shirt with "Dylan" written across it, and the friend on the right, in a white T-shirt and a black backward cap, both move towards each other. They share a warm embrace, their faces lit up with genuine smiles. The scene is a candid capture of friendship and affection, with the dark green tiles providing a simple yet effective backdrop that highlights their interaction." SFT RealDPO Figure 2: RealDPO vs SFT. A qualitative comparison between RealDPO and supervised fine-tuning (SFT). RealDPO demonstrates more natural motion generation. For more details regarding the comparison, please refer to the supplementary material. that RealDPO significantly improves video quality, text alignment, and motion realism compared to state-of-the-art models and existing preference optimization techniques. 1 INTRODUCTION With the advancement in computational power and the availability of large-scale data, video generation models (Yang et al., 2024b; Guo et al., 2023; Blattmann et al., 2023; Li et al., 2024; Lin et al., 2024; Wang et al., 2024b; Xing et al., 2024; Zhang et al., 2023) have made significant progress, producing more realistic and diverse visual content. However, when it comes to generating complex motions, existing models still face considerable challenges in creating motion sequences that adhere to appear natural and smooth, and align with contextual consistency. This issue becomes especially prominent in the generation of human-centric daily activity motions. As shown in Fig. 1(b), even the results generated by the state-of-the-art DiT-based model CogVideoX-5B (Yang et al., 2024b) exhibit unnatural and unrealistic movements, failing to meet human preferences for natural, smooth, and contextually appropriate actions. This prompts us to further explore how to improve the realism and rationality of complex motion generation, particularly in the domain of human motion synthesis. A straightforward solution is to collect a set of high-quality, real-world data specifically for supervised fine-tuning (SFT). However, relying exclusively on this dataset for SFT training presents certain limitations. During optimization, the model interprets the provided data as the sole correct reference, lacking awareness of where the original model's errors stem from. This narrow focus may result in overfitting and suboptimal performance in Fig. 2. A more effective strategy would be letting the model learn from its own mistakes. By utilizing the difference between real samples (positive data) and generative samples (negative data), we can explicitly highlight the model's errors and guide it to correct its behavior. This approach enables the model to progressively align its outputs with the desired actions represented by the positive samples, fostering continuous improvement through self-reflection. This idea aligns perfectly with Direct Preference Optimization (DPO) (Rafailov et al., 2023), a reinforcement learning technique used in training large language models, which leverages pair-wise win-lose data to guide the learning process. In video generation, recent studies (Liu et al., 2024; Wang et al., 2024c; Xu et al., 2024a; Yuan et al., 2024; Zhang et al., 2024a) have explored training fine-grained reward models using human-annotated preference datasets, primarily through three ways: reward-weighted regression (RWR) (Wang et al., 2024c), direct preference optimization (DPO) (Liu et al., 2024), and gradient feedback (GF) (Yuan et al., 2024). However, these methods face some critical challenges when directly applied to action-centric video generation: (1) Reward Hacking: Video reward model is susceptible to reward hacking, where during the optimization process, human evaluations indicate a decline in video quality, yet the reward model continues to assign high scores. (2) Scalability Issue: Online approaches require decoding latent to pixel space, limiting their scalability for highresolution video generation. (3) Bias Propagation: Multi-dimensional reward models may lose the ability to assess specific key metrics due to linear combinations of evaluation criteria. As shown in 2 Fig. 1(b), the reward model cannot provide an accurate evaluation for complex motion. These limitations highlight the need for a more robust approach tailored to complex motion video generation, motivating our extension beyond traditional DPO frameworks. To address these challenges, we propose RealDPO, a novel training pipeline for generating actioncentric activity videos, as shown in Fig. 1(a). Unlike prior methods that rely on model-sampled pairwise comparisons, RealDPO leverages real-world video data as win samples, overcoming the Real Data Deficiency issue where only using synthetic data for preference learning fails to address the distribution errors inherent in the pre-trained generative model. More importantly, this approach significantly enhances the model's learning upper bound, enabling more accurate video generation. Without real video guidance, as shown in Fig. 1(c), all samples generated by pre-trained model exhibit poor limb interaction, making it hard for human annotators to identify the preferred win sample. Additionally, since RealDPO directly uses real data to guide the preference learning, it eliminates the need for an external reward function, thereby avoid reward hacking and bias propagation issues. Moreover, our naturally paired win-lose samples eliminate the need for decoding latent to pixel space during training, drastically reducing computational overhead. Inspired by DiffusionDPO (Wallace et al., 2024), we design a tailored DPO loss specifically for the training objective of diffusion-based transformer architectures, enabling effective preference alignment. To support this training, we introduce RealAction-5K, a compact yet high-quality video dataset capturing diverse human daily actions. The dataset adheres to the principle of "less is more", emphasizing that RealDPO requires fewer high-quality samples in synergy with model-generated negative samples, whereas traditional supervised fine-tuning (SFT) methods typically requires more data to achieve better performance. Experiments demonstrate that RealDPO significantly improves video quality, text alignment, and action fidelity across diverse human action scenarios compared to pretrained baselines and other preference alignment methods. Our contributions are summarized as follows: • We propose RealDPO, a novel training pipeline for action-centric video generation that leverages real-world data as preference signals to contrastively reveal and correct the model's inherent mistakes, addressing the limitations of existing reward models and preference alignment methods. • We design a tailored DPO loss for our video generation training objective, enabling efficient and effective preference alignment without the scalability and bias issues of prior approaches. • We introduce RealAction-5K, a compact yet high-quality curated dataset focused on human daily actions, specifically crafted to advance preference learning for video generation models and broader applications. 2 RELATED WORK Diffusion-Based Video Generation. In recent years, diffusion-based video generation models have emerged continuously, primarily generating videos through user-provided text or image prompts. These models are broadly categorized into two architectures: U-Net and Diffusion Transformers (DiT). U-Net-based approaches (Blattmann et al., 2023; Wang et al., 2023; Chen et al., 2024; Guo et al., 2023) build upon the multi-stage down-sampling and up-sampling framework of image diffusion models, incorporating temporal attention layers to ensure temporal consistency. However, these methods face limitations in motion dynamics and content richness. Recently, Diffusion Transformerbased methods (Yang et al., 2024b; Li et al., 2024; Lin et al., 2024) have made significant improvements by combining 3D-VAE with diffusion transformers, using 3D full-attention layers to jointly learn spatial-temporal correlations, and enhance text encoders to handle complex prompts. These advancements have led to substantial improvements in fidelity, consistency, and scalability for longer video generation. Reinforce Learning in Image/Video Generation. In large language models (LLMs), reward models are commonly used in Reinforcement Learning Human Feedback (RLHF), enabling LLMs to respond more naturally and generate more coherent text. Recently, there have been a series of studies(Xu et al., 2024b; Black et al., 2023; Wallace et al., 2024; Lee et al., 2023; Liang et al., 2024; Yang et al., 2024a; Clark et al., 2023; Fan et al., 2024) in image generation that incorporate human preferences into model evaluation and model alignment training, mainly focusing on improving the aesthetic quality of images. In video generation, the exploration so far is still quite limited. Most related 3 (c) Video Caption Word Distribution (d) Action Content Distribution (e) Prompt Length Distribution 1 bicycle, cooking drinking ... Topic Words Raw Videos filter "horizontal" Video LLM --------------------- --------------------- --------------------- ------- 2 3 Caption Video Understanding Model Final Videos "Please describe this video in detail.""l. Manual checking Corse Videos High Quality filter (a) Samples of RealAction-5K Dataset (b) Data Collection and Processing Pipeline RealAction-5K Figure 3: Overview of the RealAction-5K Dataset. (a) Samples of RealAction-5K Dataset (b) Data Collection and Processing Pipeline (c) Video Caption Word Distribution (d) Action Content Distribution (e) Prompt Length Distribution works(Yuan et al., 2024; Wu et al., 2024; Wang et al., 2024c; Liu et al., 2024; Zhang et al., 2024a; Liu et al., 2025) are mainly focusing on using reward models trained on human-annotated synthetic data for preference learning on model-generated data. However, these methods have some limitations. For example, training reward models may suffer from hacking issues, and multi-dimensional reward models might show reduced evaluation ability in specific domains. Additionally, relying entirely on synthetic data for preference learning could hinder the model's potential. Therefore, we propose a novel approach that transcends the limitations of reward models by incorporating real data for preference-aligned learning. 3 PRELIMINARIES 3.1 DENOISING PROCESS AS MULTI-STEP MDP According to the definition in the Yang et al. (2024a), the denoising process in diffusion models can be formulated as a multi-step Markov Decision Process (MDP). Here, we provide a further explanation of state representations st, probability transition P, and policy functions π, establishing a correspondence between video diffusion models and the MDP framework. This mapping enables a reinforcement learning perspective on the sampling process in video diffusion models. The detailed notation correspondence between the diffusion model and the MDP is as follows: st ≜(c, t, xt) P(st+1 | st, at) ≜(δc, δt+1, δxt-1) at ≜xt-1 π(at | st) ≜pθ(xt-1 | c, t, xt) ρ0(s0) ≜(p(c), δ0, N(0, I)) r(st, at) ≜r((c, t, xt), xt-1) (1) where δx represents the Dirac delta distribution, and t denotes the denoising timesteps. Leveraging this mapping, we can employ RL techniques to fine-tune diffusion models by maximizing returns. However, this approach requires a proficient reward model capable of adequately rewarding the noisy images. The task becomes exceptionally challenging, particularly when t is large, and xt closely resembles Gaussian noise, even for an experienced expert. 4 RealAction Xk w noise ǂ0 w يǏ Ǜ ǂ0 w ǂǏ l,a ǂǏ l,b ǂǏ l,c Video Captioner يǏ l,a ǂ0 l,a يǏ l,b ǂ0 l,b يǏ l,c ǂ0 l,c Timestep Selector k DiT Negative Sample Velocity Postive Sample Velocity ǂ0 l,a ǂ0 l,b ǂ0 l,c noise noise a noise b noise c Lose Samples DiT DiT T5 Text Encoder DiT DiT ǂǏ Ǜ ǂǏ ǐ,Dž ǂ0 Ǜ ǂ0 w ǂ0 l,a ǂ0 l,a Pred Latent Orig Latent Reference Model Training Model Pred Latent 1 2 3 4 5 6 ǂ0 Ǜ ×k ×k ǂ0 ǐ,Dž lj푡ljǜ푡 lj푡ljǜ푡 lj푡ljǜ푡 DPO Loss ( ) 3 2 3 1 Win sample 6 5 6 4 ( ) lose sample ref model train model ref model train model The win sample of timestep t share weight Xt w Xt l,a Xt l,b Xt l,c The lose sample a,b,c of timestep t text embedding Get velocity function Update the params by gradient decent Update the params by EMA strategies DiT DiT Reference Model Training Model DiT DiT Training Iteration DiT DiT DiT DiT T EMA Update t start A little girl is beating eggs... Win-lose Sampling for DPO Training Diagram of Tailored RealDPO Loss Reference Model Update Strategy first frame Figure 4: The RealDPO Framework. We use real data as the win samples in DPO, and illustrate the data pipeline on the left hand side. We present the RealDPO loss, and reference model update strategy on the right hand side. 3.2 DPO FOR DIFFUSION MDP Direct Preference Optimization (DPO) (Rafailov et al., 2023) is a preference-based fine-tuning method that directly optimizes a model using human preference data, without requiring an explicit reward model. This approach is particularly advantageous as it avoids the complexities and potential biases introduced by learned reward models, making the optimization process more stable and interpretable. Given a dataset of preference-labeled pairs {(x, yw, yl)}, where x is the input prompt, yw is the preferred (win) output, and yl is the less preferred (lose) output, DPO aims to maximize the likelihood ratio between the preferred and non-preferred samples while maintaining closeness to the pretrained model. The optimization objective can be formulated as: LDPO(θ)=-Ec,xw,xl log σ β log pθ(xw|c) pref(xw|c)) -β log pθ(xl|c) pref(xl|c) (2) where: pθ(xw|c) is the likelihood of generating win output xw given input c under the fine-tuned model, pref(xw|c) is the likelihood under the reference (pretrained) model. β is a temperature parameter that controls the sharpness of preference optimization. σ(·) is the sigmoid function ensuring a proper probability score. According to the derivation in reference (Wallace et al., 2024), the training objective of DiffusionDPO is defined as: L(θ) = -E(xw 0 ,xl 0)∼D,t∼U(0,T ),xw t ∼q(xw t |xw 0 ),xl t∼q(xl t|xl 0) log σ (-βTω(λt) ( ∥εw -εθ(xw t , t)∥2 2 -∥εw -εref(xw t , t)∥2 2 - ∥εl -εθ(xl t, t)∥2 2 -∥εl -εref(xl t, t)∥2 2 (3) where xw t = αtxw 0 + σtεw, εw ∼N(0, I) is a draw from q (xw t | xw 0 ). λt = α2 t/σ2 t is the signalto-noise ratio. ω(λt) is a weighting function, usually kept constant. 4 THE REALDPO PARADIGM In this section, we introduce our fine-tuning pipeline RealDPO to align video diffusion models with our constructed preferences data, as shown in Fig. 4. Firstly, we introduce our proposed dataset, RealAction, and the pipeline for constructing preference data in Sec.4.1. Then, in Sec.4.2, we present the win-lose sampling approach used for DPO fine-tuning training. Finally, in Sec.4.3, we delve into the alignment training process for the video generation model using preference data. 4.1 REALACTION: PREFERENCE DATA COLLECTION Preference data is essential for reinforcement learning. To acquire it, we designed a robust data processing pipeline that efficiently collects, filters, and processes data, ensuring its high quality, diversity, and representativeness. 5 Collect raw video based on keywords. Our dataset construction begins with selecting relevant topics to collect raw video data, ensuring diversity and real-world representativeness. As shown in Fig. 3(d), we designed daily activity themes across over ten scenarios, such as sports, eating, drinking, walking, and gathered high-quality video clips using these keywords. This step captures diverse actions, participants, and backgrounds, establishing a strong foundation for preference-based training. Use VideoLLM to filter low-quality videos. After collecting the raw videos, we use a video LLM, Qwen2-VL (Wang et al., 2024a) , to filter out rough or irrelevant videos. We provide some instructions for Qwen2-VL to identify and discard videos that do not meet quality standards or are not aligned with the selected topic. Through this filtering process, low-quality content is significantly reduced, ensuring that only clear and meaningful videos proceed to next processing stages. Manual inspection ensures the accuracy. We let human annotators carefully examine the videos to confirm whether they accurately represent the intended theme, have correct actions, and do not contain misleading or irrelevant content. This additional validation step further refines the dataset, ensuring it aligns with preference-based training goals. Generate detailed descriptions for videos. We employ a video understanding model, LLaVAVideo (Zhang et al., 2024b), to generate accurate descriptive captions for each video. These descriptions accurately reflect the actions, participants, and appearance. These captions serve as valuable metadata, later used for sampling negative samples. The word cloud composed of high-frequency words in the description caption of these videos is shown in Fig. 3(c). And the length distribution of captions in our constructed dataset is shown in the Fig 3(e). 4.2 WIN-LOSE SAMPLING FOR DPO TRAINING After obtaining real data, we take the real video Xw from RealAction as win sample. The latent after compression through the VAE encoder is Xw 0 . We design a Timestep Selector that randomly generates a timestep k for each round of positive and negative sampling. We add k steps of random noise to Xw 0 , obtaining Xw k . Then, together with the caption embedding etext, we input this into the DiT transformer to get the predicted noise ˆεw k . Finally, we input ˆεw k to Positive Sample Velocity to obtain the predicted latent ˆxw 0 , which is prepared for the subsequent DPO loss. ˆεw k = θ xw k , etext , ˆxw 0 = ψ (ˆεw k ) , (4) where θ is the training DiT model, ψ is the process of positive sample velocity. For negative samples, in order to ensure diversity, we first randomly generate three init noises εa, εb, εc. These are then combined with the positive sample's caption embedding etext and we input them together into the DiT, where we sample the full timesteps to obtain three negative samples xl,a 0 , xl,b 0 , xl,c 0 . This step is done offline, and we only need to store the latent of the negative samples. During training optimization, similar to the positive samples, we add k steps of random noise to these three samples to obtain xl,a k , xl,b k , xl,c k . Then, together with caption embedding etext, we input these into the DiT transformer to get the predicted noise ˆεl,a k , ˆεl,b k , ˆεl,c k . Finally, we input predicted noise to Negative Sample Velocity to obtain the predicted latents for the negative samples ˆxl,a 0 , ˆxl,b 0 , ˆxl,c 0 . ˆεl,∗ k = θ xl,∗ k , etext , ˆxl,∗ 0 = ψ ˆεl,∗ k , (5) where ∗is set of {a, b, c}, ψ is the process of negative sample velocity. It's important to note that the first sampling of the negative samples is done offline, while the second sampling for both positive and negative samples involves only one step, saving a significant amount of time during training. 4.3 PREFERENCE LEARNING FOR VIDEO GENERATION After positive and negative samples are prepared, we can use this preference data for DPO training. Due to the constraints of the reference model in DPO training, similarly, we also resample the winlose samples through the reference model to obtain ̃xw 0 and ̃xl,a 0 . Here, we take the first negative 6 Table 1: Quantitative Comparison on RealAction-TestBench by User Study. We provided users with a five dimensional evaluation, namely Overall Quality, Visual Alignment, Text Alignment, Motion Quality and Human Quality, to compare our model with the pre-trained baseline (Yang et al., 2024b), supervised fine-tuning(SFT), LiFT (Wang et al., 2024c), VideoAlign (Liu et al., 2025). Testers are required to rank the results generated by these models, and we converted the rankings into win rates. Method Overall Visual Text Motion Human Quality Alignment Alignment Quality Quality Baseline (Yang et al., 2024b) 65.56 72.22 71.89 65.56 66.00 SFT 58.22 65.22 68.44 59.11 60.33 LiFT (Wang et al., 2024c) 67.34 73.44 64.33 65.00 67.33 VideoAlign (Liu et al., 2025) 61.00 68.11 68.78 57.22 59.78 RealDPO (Ours) 73.33 77.44 77.00 71.00 72.89 "The video begins with a man wearing a white shirt and blue tie, sitting at an outdoor table. He holds a cup of coffee in one hand, taking a sip, while his other hand is occupied with writing notes on a notepad. The table also holds a smartphone and a pair of sunglasses, suggesting a busy day ahead. The setting is an urban street scene with brick walls and parked cars in the background, providing a realistic and bustling atmosphere. The scene captures a moment of focus and productivity amidst the daily hustle." First Frame Before Alignment After Alignment First Frame "The video begins in a sunny park with lush green grass and trees swaying gently in the breeze. A young blonde girl, dressed in a light pink sweater and floral shorts, stands with a joyful smile. Holding a small treat in her hand, she lifts it slightly to engage the attentive golden retriever sitting before her. The dog's golden fur gleams under the sunlight as it excitedly raises a paw, reaching out to shake hands with the girl." Before Alignment After Alignment Figure 5: Qualitative Results. We visualize the effect of before and after applying RealDPO. See the supplementary for videos. sample ̃xl,a 0 as an example to explain the loss. According to the training objective of CogVideoX5B (Yang et al., 2024b), we rewrite Eq. 3 as follows: LDP O(θ) = -E log σ -βTω(λt) ∥xw 0 -ˆxw 0 ∥2 2 -∥xw 0 - ̃xw 0 ∥2 2 - ∥xl 0 -ˆxl 0∥2 2 -∥xl 0 - ̃xl 0∥2 2 , (6) where xw 0 /xl 0 are the original win/lose sample, ˆxw 0 /ˆxl 0 are the predicted latents for the win/lose sample by the training model, ̃xw 0 / ̃xl 0 are the predicted latents for the win/lose sample by the reference model. The role of the reference model is to constrain the training process of the training model, preventing over-optimization or deviation from the desired objectives. To enhance alignment of the model with human preferences, we gradually improve the capability of the preference model and perform multiple rounds of resampling, ensuring that the training process iteratively refines its predictions and better captures the desired outcomes. In practice, every t training steps, the reference model needs to be updated using the exponential moving average (EMA) algorithm. θt ref ←ωθt ref + (1 -ω)θt, (7) where θt ref is the parameters of the reference model, θt denotes the parameters of the training model, and ω is the decay coefficient of EMA, set to 0.996 in our experiments. 7 Table 2: Quantitative Comparison on VBench-I2V and RealAction-TestBench using MLLM. We evaluate performance via Visual Alignment (VA), Text Alignment (TA), Motion Quality (MQ), and Human Quality (HQ), which are consistent with the sensory perceptions of humans in user study. We use open-source Qwen2-VL (Wang et al., 2024a) supporting video understanding. We provide a detailed instruction template for evaluating video generation quality using MLLM in the appendix. Method VBench-I2V RealAction-Test Bench VA ↑ TA ↑ MQ ↑ HQ ↑ VA ↑ TA ↑ MQ ↑ HQ ↑ Baseline (Yang et al., 2024b) 97.78% 97.71% 89.86% 90.34% 96.11% 99.22% 90.22% 91.89% SFT 97.15% 98.26% 90.03% 89.38% 93.89% 98.89% 90.78% 92.89% LiFT (Wang et al., 2024c) 97.54% 97.91% 89.25% 90.24% 97.54% 97.89% 92.00% 91.67% VideoAlign (Liu et al., 2025) 97.99% 97.66% 89.54% 90.84% 96.44% 98.89% 92.00% 92.89% RealDPO (Ours) 97.99% 97.74% 89.46% 90.10% 96.67% 99.22% 91.67% 94.11% Table 3: Quantitative Comparisons with baselines and reward-based methods via VBenchI2V (Huang et al., 2024a;b), on RealAction-TestBench. Model I2V Subject Background Motion Dynamic Aesthetic Imaging Subject Consistency Consistency Smoothness Degree Quality Quality CogvideoX-5B Baseline (Yang et al., 2024b) 96.10 90.43 94.01 98.15 55.56 59.63 67.01 SFT 96.47 89.50 93.18 98.06 66.67 59.69 67.06 LiFT (Wang et al., 2024c) 96.50 92.34 94.46 98.20 38.89 60.51 68.40 VideoAlign (Liu et al., 2025) 96.55 92.23 94.29 98.37 50.00 60.21 67.66 RealDPO (Ours) 96.58 91.68 94.47 98.31 55.56 61.37 68.05 5 EXPERIMENTS We present the main experiments and discussions in this section. Please refer to the supplementary material for implementation details on the models and evaluation metrics. 5.1 QUANTITATIVE COMPARISONS Quantitative Comparison by User Study. Tab. 1 showcases the evaluation results on the RealAction-TestBench test set, where testers were invited to rank the generated outputs of the pretrained baseline (Yang et al., 2024b), supervised fine-tuning (SFT), LiFT (Wang et al., 2024c), VideoAlign (Liu et al., 2025), and our RealDPO. The evaluation covers five dimensions: Overall Quality, Visual Alignment, Text Alignment, Motion Quality, and Human Quality. The scores for each model across these dimensions were calculated and summarized. As shown in Tab. 1, our RealDPO demonstrates significant improvements over baseline and SFT in multiple dimensions, indicating that our proposed data effectively enhance the capabilities of RealDPO in action-centric scenarios. Additionally, compared to other preference alignment algorithms utilizing reward models, such as LiFT (based on Reward Weighted Regression) and VideoAlign (a naive DPO variant using synthetic data), our approach of leveraging real data as win samples and synthetic data as lose samples also proves its effectiveness. Quantitative Comparison Using MLLM. To enhance the diversity of evaluation, we employ MLLM capable of video understanding tasks to assess the results generated by the models in a question-answer format across multiple dimensions. In Tab. 2, we selected Qwen2-VL (Wang et al., 2024a) as an evaluation tool, to align the assessment dimensions with user study: Visual Alignment (VA), Text Alignment (TA), Motion Quality (MQ), and Human Quality (HQ) on VBench-I2V test benchmark (Huang et al., 2024b) and RealAction-TestBench. For each dimension, we designed several questions, and a "yes" response from the large models indicates a passing score. The scores for all questions within each dimension were aggregated to calculate the total score. Experimental results show that, based on the evaluation by video-language understanding models, our model shows 8 The video captures a moment in an open field where a man, dressed in a grey t-shirt and navy blue shorts, is standing with his hand outstretched towards a black dog with a white chest. The man, wearing tan boots, is smiling as the dog eagerly raises one paw to meet his hand in a friendly shake. The dog's tail wags, indicating its excitement and happiness. The field is dotted with patches of grass, and the sky above is overcast, creating a serene backdrop for this interaction. The natural light highlights the connection between the man and the dog as they share a moment of companionship. First Frame RealDPO VideoAlign SFT First Frame RealDPO LiFT SFT The video captures the exhilarating action of a kite surfer riding the waves. The individual, clad in a wetsuit, is seen expertly maneuvering a surfboard across the choppy sea surface. With a firm grip on the kite's control bar, they harness the power of the wind to glide effortlessly over the water. The surfer's stance is dynamic, with knees bent and body leaning back slightly, demonstrating balance and control. As they cut through the waves, water sprays up around the board, creating a dramatic effect against the backdrop of the ocean's vast expanse. The scene is a testament to the surfer's skill and the thrilling sport of kite surfing. Figure 6: Qualitative Comparison.We recommend readers refer to our appendix files to view more visualizations. competitive results, consistent with human evaluation results, further validating the effectiveness of our RealDPO. Quantitative Comparison Using VBench-I2V Metric. Meanwhile, in video generation, VBench (Huang et al., 2024a;b) is widely recognized as an authoritative evaluation framework. Leveraging VBench-I2V's automated metrics designed for Image-to-Video (I2V) evaluations in VBench++ (Huang et al., 2024b), we assessed the quality of our test set, revealing that RealDPO achieves competitive performance across multiple general metrics. 5.2 QUALITATIVE COMPARISONS In Fig. 5, we present the visual comparison results before and after RealDPO alignment. We observe that RealDPO is highly effective in enhancing the naturalness and smoothness of the actions, as well as their consistency with the textual instructions. In Fig. 6, we present the visual comparison results of our method against other alignment approaches, such as LiFT (Wang et al., 2024c) and VideoAlign (Liu et al., 2025). It can be observed that the videos generated by RealDPO are more stable and less prone to unnatural actions or visual collapse. For instance, in the example on the left, SFT exhibits a collapse of the character's limbs, and the coordination of the dog's four legs appears unnatural. The results of LiFT are slightly better, but LiFT fails to complete the handshake action between the protagonist and the dog, resulting in poor alignment with the text. In contrast, our results demonstrate higher visual quality, with action details highly consistent with the textual instructions and no visual collapse. In the example on the right, the text describes the surfer's posture as "with knees bent and body leaning back slightly". SFT shows visual collapse, misaligned actions, and poor consistency in character appearance. VideoAlign performs slightly better, but the generated posture and actions are not highly aligned with the text. In comparison, our results exhibit higher image quality, more accurate action details, and overall superior performance. 6 CONCLUSION In this paper, we propose RealDPO, a novel and data-efficient framework for preference alignment in video generation, leveraging real-world data as win samples to address challenges in generating complex motions like human actions. By designing a tailored DPO loss and building on diffusion-based transformer architectures, we establish a robust real-data-driven alignment framework. To support this, we introduce RealAction-5K, a compact yet high-quality dataset for human daily actions. Extensive experiments show that RealDPO significantly improves visual alignment, text alignment, motion quality, and overall video quality, outperforming traditional fine-tuning and other alignment methods. Our work advances the upper bound of preference alignment and provides a scalable solution for complex motion video generation. We will explore extending RealDPO to broader domains in the future. 9 Ethics statement. Our RealAction-5K dataset was curated from publicly available video sources with appropriate licenses. To address privacy concerns, all personally identifiable information was meticulously anonymized, and the dataset will be released strictly for non-commercial research purposes to mitigate the risk of misuse. All necessary legal and ethical guidelines concerning data provenance and usage were adhered to throughout the project. Additionally, the effectiveness of our RealDPO paradigm is inherently limited by the architectural constraints of the underlying video generative model. We emphasize the need for responsible use, particularly when generating human figures, to prevent potential misuse. Reproducibility statement. To ensure the reproducibility of our work, we have made significant efforts to document our methodology and resources comprehensively. The core of our approach, the RealDPO alignment paradigm, including its tailored loss function, is described in detail within the paper. Furthermore, we provide a complete account of the data collection and processing pipeline for the RealAction-5K dataset. This dataset was meticulously curated by manually sourcing highquality videos from https://pexels.com. The process involved a combination of targeted scraping and manual downloading, followed by rigorous manual screening and clipping to ensure each video clip depicts a single, coherent action that can be accurately described in text, thereby guaranteeing high quality and clarity. 7 ACKNOWLEDGEMENTS This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOET2EP20221-0012, MOE-T2EP20223-0002). This research is also supported by cash and inkind funding from NTU S-Lab and industry partner(s). This work is also supported by Shanghai Artificial Intelligence Laboratory. This work was partially supported by Nanjing Kunpeng&Ascend Center of Cultivation. REFERENCES Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint , 2023. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint , 2023. Haoxin Chen, Yong Zhang, Xiaodong Cun, Menghan Xia, Xintao Wang, Chao Weng, and Ying Shan. Videocrafter2: Overcoming data limitations for high-quality video diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7310-7320, 2024. Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint , 2023. Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Reinforcement learning for finetuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36, 2024. Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint , 2023. Ziqi Huang, Yinan He, Jiashuo Yu, Fan Zhang, Chenyang Si, Yuming Jiang, Yuanhan Zhang, Tianxing Wu, Qingyang Jin, Nattapol Chanpaisit, et al. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21807-21818, 2024a. Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chanpaisit, Chenyang Si, Yuming Jiang, Yaohui Wang, Xinyuan Chen, Ying-Cong Chen, Limin Wang, 10 Dahua Lin, Yu Qiao, and Ziwei Liu. Vbench++: Comprehensive and versatile benchmark suite for video generative models. arXiv preprint , 2024b. Kimin Lee, Hao Liu, Moonkyung Ryu, Olivia Watkins, Yuqing Du, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, and Shixiang Shane Gu. Aligning text-to-image models using human feedback. arXiv preprint , 2023. Zhimin Li, Jianwei Zhang, Qin Lin, Jiangfeng Xiong, Yanxin Long, Xinchi Deng, Yingfang Zhang, Xingchao Liu, Minbin Huang, Zedong Xiao, et al. Hunyuan-dit: A powerful multi-resolution diffusion transformer with fine-grained chinese understanding. arXiv preprint , 2024. Youwei Liang, Junfeng He, Gang Li, Peizhao Li, Arseniy Klimovskiy, Nicholas Carolan, Jiao Sun, Jordi Pont-Tuset, Sarah Young, Feng Yang, et al. Rich human feedback for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19401-19411, 2024. Bin Lin, Yunyang Ge, Xinhua Cheng, Zongjian Li, Bin Zhu, Shaodong Wang, Xianyi He, Yang Ye, Shenghai Yuan, Liuhan Chen, et al. Open-sora plan: Open-source large video generation model. arXiv preprint , 2024. Jie Liu, Gongye Liu, Jiajun Liang, Ziyang Yuan, Xiaokun Liu, Mingwu Zheng, Xiele Wu, Qiulin Wang, Wenyu Qin, Menghan Xia, et al. Improving video generation with human feedback. arXiv preprint , 2025. Runtao Liu, Haoyu Wu, Zheng Ziqiang, Chen Wei, Yingqing He, Renjie Pi, and Qifeng Chen. Videodpo: Omni-preference alignment for video diffusion generation. arXiv preprint , 2024. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728-53741, 2023. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228-8238, 2024. Jiuniu Wang, Hangjie Yuan, Dayou Chen, Yingya Zhang, Xiang Wang, and Shiwei Zhang. Modelscope text-to-video technical report. arXiv preprint , 2023. Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint , 2024a. Yaohui Wang, Xinyuan Chen, Xin Ma, Shangchen Zhou, Ziqi Huang, Yi Wang, Ceyuan Yang, Yinan He, Jiashuo Yu, Peiqing Yang, et al. Lavie: High-quality video generation with cascaded latent diffusion models. International Journal of Computer Vision, pp. 1-20, 2024b. Yibin Wang, Zhiyu Tan, Junyan Wang, Xiaomeng Yang, Cheng Jin, and Hao Li. Lift: Leveraging human feedback for text-to-video model alignment. arXiv preprint , 2024c. Xun Wu, Shaohan Huang, Guolong Wang, Jing Xiong, and Furu Wei. Boosting text-to-video generative model with mllms feedback. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Jinbo Xing, Menghan Xia, Yong Zhang, Haoxin Chen, Wangbo Yu, Hanyuan Liu, Gongye Liu, Xintao Wang, Ying Shan, and Tien-Tsin Wong. Dynamicrafter: Animating open-domain images with video diffusion priors. In European Conference on Computer Vision, pp. 399-417. Springer, 2024. Jiazheng Xu, Yu Huang, Jiale Cheng, Yuanming Yang, Jiajun Xu, Yuan Wang, Wenbo Duan, Shen Yang, Qunlin Jin, Shurun Li, et al. Visionreward: Fine-grained multi-dimensional human preference learning for image and video generation. arXiv preprint , 2024a. 11 Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. Imagereward: Learning and evaluating human preferences for text-to-image generation. Advances in Neural Information Processing Systems, 36, 2024b. Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Weihan Shen, Xiaolong Zhu, and Xiu Li. Using human feedback to fine-tune diffusion models without any reward model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8941-8951, 2024a. Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint , 2024b. Hangjie Yuan, Shiwei Zhang, Xiang Wang, Yujie Wei, Tao Feng, Yining Pan, Yingya Zhang, Ziwei Liu, Samuel Albanie, and Dong Ni. Instructvideo: instructing video diffusion models with human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6463-6474, 2024. Jiacheng Zhang, Jie Wu, Weifeng Chen, Yatai Ji, Xuefeng Xiao, Weilin Huang, and Kai Han. Onlinevpo: Align video diffusion model with online video-centric preference optimization. arXiv preprint , 2024a. Shiwei Zhang, Jiayu Wang, Yingya Zhang, Kang Zhao, Hangjie Yuan, Zhiwu Qin, Xiang Wang, Deli Zhao, and Jingren Zhou. I2vgen-xl: High-quality image-to-video synthesis via cascaded diffusion models. arXiv preprint , 2023. Yuanhan Zhang, Jinming Wu, Wei Li, Bo Li, Zejun Ma, Ziwei Liu, and Chunyuan Li. Video instruction tuning with synthetic data. arXiv preprint , 2024b. 12 APPENDIX This supplementary material provides more qualitative results, details of the evaluation, experimental results, pseudo-code of RealDPO. Section A elaborates on additional visual comparisons of generated videos, including comparisons with pre-trained models, supervised fine-tuning, and other alignment methods. Section B details the evaluation process, covering the design of user studies, evaluation using LLMs, evaluation using the VBench-I2V metric, as well as interfaces, instructions, and an introduction to automated evaluation metrics. Section C presents the pseudo-code of our core algorithm, RealDPO. A MORE QUALITATIVE RESULTS Due to space limitations in the main text, this section presents additional visual comparisons, including comparisons with pre-trained models, supervised fine-tuning, and other alignment methods. The results demonstrate that our approach achieves superior performance across a wider range of samples, with enhanced visual-text alignment, text alignment, motion quality, character quality, and overall quality. These findings further validate the effectiveness of the RealDPO framework. This provides new insights and methodologies for future multi-modal generation tasks. A.1 COMPARISON WITH PRE-TRAINED MODEL Pre-trained base models are typically trained on large-scale datasets and exhibit strong generalization capabilities. However, they may underperform on specific tasks, particularly those requiring finegrained alignment, such as the generation of videos with complex motion as discussed in our work. RealDPO, by incorporating guidance from real-world data through Direct Preference Optimization (DPO), excels at capturing intricate details in tasks, especially in image-text alignment. As shown in Fig. 7, compared to pre-trained models, RealDPO demonstrates significantly improved consistency in visual-text alignment and notable enhancements in the details of characters and motions. A.2 COMPARISON WITH SUPERVISED FINE-TUNING Supervised fine-tuning relies on annotated data and can achieve strong performance on specific tasks. However, its effectiveness is constrained by the quality and quantity of the available annotations. In contrast, RealDPO leverages a diverse set of negative samples and real-world data as positive samples to form multiple preference pairs. This approach enables the model to learn from its own mistakes and align more closely with real-world samples, achieving robust alignment even without extensive labeled data. In particular, as shown in Fig. 8, in terms of motion quality and character quality, RealDPO generates images that are more natural, with smoother motions and richer character details. A.3 COMPARISON WITH OTHER ALIGNMENT METHOD Other reward model based alignment methods, such as LiFT and VideoAlign, may perform well on specific tasks. However, in complex scenarios, the reward models often fail to provide effective feedback, leading to misguidance in preference alignment training. In contrast, RealDPO introduces real-world samples as positive examples and pairs them with multiple negative samples generated by the model, naturally forming contrastive pairs. By guiding the model with positive samples, RealDPO enables the model to learn from its mistakes and align more closely with the correct samples, thereby better handling complex cross-modal alignment tasks. As shown in Figure 9, compared to existing alignment methods, RealDPO demonstrates greater stability in visual-text alignment and text alignment, generating images and text that are more semantically consistent, with superior motion quality. 13 "The video begins with a person in grey shorts standing in a lush backyard, their hand outstretched towards a playful spotted puppy. The puppy, with its ears perked up and tongue out, eagerly raises one paw to meet the person's hand in a friendly gesture. The puppy's eyes are focused on the person's hand, and its tail wags with excitement. The backyard is filled with vibrant green grass and a wooden fence, creating a warm and inviting atmosphere for this interaction. The natural light from the setting sun casts a golden hue over the scene, highlighting the connection between the person and the puppy as they share a moment of joy and companionship." First Frame Before Alignment After Alignment First Frame "The video features a blonde skateboarder, dressed in a black jacket and shorts, standing in the middle of a road that stretches into the distance. They have one foot on the skateboard, ready to push off and begin skating. The road is bordered by snow-covered hills under a twilight sky, giving a sense of solitude and adventure. The anticipation of the skateboarder's impending journey is palpable." Before Alignment After Alignment Figure 7: Qualitative Results. Comparison with pre-trained model. Reference Frame "The video captures a woman with her hair tied back, wearing a sleeveless top and jeans, standing in a bedroom. She is in the process of making a bed, holding up a white pillow with both hands, seemingly about to place it on the bed. The bedroom has a calm and tidy appearance, with a large window allowing natural light to brighten the room. The focus is on the woman's task, highlighting the routine activity of homemaking." SFT RealDPO Reference Frame "The video captured a couple enjoying a moment on the beach. The man is wearing a gray T-shirt and shorts, and the woman is wearing a white striped sun skirt, they are embracing When they embrace on the damp beach, gentle waves beat against their feet, creating a playful and romantic atmosphere. The cloudy sky and distant hills added a dramatic backdrop to their leisurely stroll, highlighting the connection between the two and the beauty of the natural environment." SFT RealDPO Figure 8: Qualitative Results. Comparison with supervised fine-tuning. 14 The video begins with a woman with curly blonde hair, wearing a beige cardigan and a black top, sitting indoors. She is holding a chocolate-covered ice cream bar and is offering it to a young child with blonde hair, who is wearing a striped shirt. The child is taking a bite from the ice cream bar. The setting is a bright and airy room with natural light illuminating the scene, creating a warm and joyful atmosphere. The focus is on the interaction between the woman and the child, capturing a moment of shared enjoyment. First Frame RealDPO VideoAlign SFT First Frame RealDPO LiFT SFT The video opens with a pair of hands carefully cutting a small fruit tart on a dark wooden board. The tart has a golden, crisp crust, topped with fluffy whipped cream and fresh strawberry slices. Holding it steady, the person gently presses the knife through, preparing to slice it open. More tarts rest nearby, along with a wooden cutting board decorated with scattered blue flowers. The soft, natural lighting gives the scene a warm and inviting feel.. Figure 9: Qualitative Results. comparison with other Alignment Method. B DETAILS OF THE EVALUATION B.1 IMPLEMENTATION DETAILS Models and Settings. We conduct all experiments on 8 Nvidia H100 GPUs, with a total batch size of 8 for training. For our I2V baseline generation model, we adopt CogVideoX-5B (Yang et al., 2024b), which uses diffusion transformer structure. We fine-tune the parameters of all its transformer blocks on the DeepSpeed framework. The learning rate is set to 1e-5, and all the models are trained for 10 epochs. Evaluation Metric. We evaluate the performance of our aligned model through three aspects: user study, automatic LLM-based evaluation, and the assessment metrics of VBench (Huang et al., 2024a). We selected 18 test cases, including test texts and reference images, which constitute the RealAction-TestBench. For the user study, we invited 10 testers to evaluate our model against other baselines across multiple dimensions. For LLM-based evaluation, we designed a question template to guide the model in making decisions. As for VBench, we utilized the I2V evaluation metrics provided by VBench to perform our evaluation. B.2 EVALUATION BY USER STUDY To ensure a fair and comprehensive evaluation of our model, we conducted a user study to collect subjective feedback on the generated results. Participants were presented with a scoring interface (as shown in Fig. 10) and asked to rate the generated videos based on several key criteria, including Visual Alignment, Text Alignment, Motion Quality and Human Quality and Overall Quality. The interface is designed to be intuitive and user-friendly, allowing participants to provide accurate and unbiased scores. Each video was evaluated by multiple users, and the final scores were averaged to ensure reliability. B.3 EVALUATION USING LLMS In addition to human evaluation, we leveraged large language models (LLMs) to assess the quality of the generated videos. We designed a structured instruction template to guide the LLMs in evaluating video generation quality. The template includes detailed prompts for assessing various aspects of the videos, such as adherence to textual descriptions, visual coherence, and overall aesthetic appeal. By utilizing LLMs, we were able to obtain scalable and consistent evaluations that complement the human user study. The results from the LLM-based evaluation align closely with the user study findings, further validating the effectiveness of our approach 15 B.4 EVALUATION USING VBENCH-I2V METRIC To provide a more objective and fine-grained assessment of our model's performance, we select seven theme-related and human-perception-aligned representative dimensions of video quality from VBench-I2V Huang et al. (2024a;b) as the final evaluation metrics: I2V Subject, Subject Consistency, Background Consistency, Motion Smoothness, Dynamic Degree, Aesthetic Quality, and Imaging Quality. Figure 10: User Study Scoring Interface for users to give socre. 16 Instruction Template for Evaluating Video Generation Quality Using LLMs (part1) As a video understanding expert, you will be required to evaluate the quality of modelgenerated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Visual Alignment This mainly assesses the consistency of the visual representation of the characters in the generated video with the provided first frame image of the characters and environment, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the consistency score of the character's appearance (such as clothing, hairstyle, skin color) in the generated video? Question 2: What is the consistency score of the environment in the generated video (such as background, lighting, scene setup)? Question 3: What is the score for the character's proportional changes in the generated video conforming to physical laws? Question 4: What is the fidelity score of the characters or environment in the video (low scores should be given if there are shape distortions or color abnormalities)? Question 5: What is the consistency score of the characters and environment over time (low scores should be given if there are sudden disappearances or changes)? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 17 Instruction Template for Evaluating Video Generation Quality Using LLMs (part2) As a video understanding expert, you will be required to evaluate the quality of modelgenerated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Text Alignment This assesses the consistency between the actions of the characters in the generated video and the input text description or target behavior category, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the consistency score between the character's actions in the generated video and the input text description or target behavior category? Question 2: What is the consistency score between the key actions in the video (such as running, hugging, playing an instrument) and the text description? Question 3: What is the score for avoiding actions or distracting elements in the video that are unrelated to the text description? Question 4: What is the accuracy score of the video in conveying the emotions or intentions described in the text? Question 5: What is the score for the video supplementing reasonable details not explicitly mentioned in the text description? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 18 Instruction Template for Evaluating Video Generation Quality Using LLMs (part3) As a video understanding expert, you will be required to evaluate the quality of modelgenerated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Motion Quality This assesses the smoothness, naturalness, and reasonableness of the character's movements in the generated video, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the smoothness score of the character's movements in the generated video (high scores for no stuttering)? Question 2: What is the score for the details of the character's movements (such as limb movements, gestures) conforming to physical laws? Question 3: What is the naturalness score of the temporal dynamics of the character's movements (such as speed, rhythm)? Question 4: What is the coordination score between the character's movements and other elements in the scene (such as objects, background)? Question 5: What is the score for avoiding obvious distortions or unreasonable phenomena in the character's movements (such as limb twisting, incoherent actions)? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 19 Instruction Template for Evaluating Video Generation Quality Using LLMs (part4) As a video understanding expert, you will be required to evaluate the quality of modelgenerated videos from four different perspectives, covering the following daily human activities. The specific evaluation angles will be divided into Visual Alignment, Text Alignment, Motion Quality, and Human Quality. Under each dimension, the model needs to determine the quality of the generated content based on the answers to five questions. Each question is scored out of 10 points. The scoring rule is 0 points for the worst and 10 points for the best. The final score is the sum of the scores for all questions under that dimension. Human Quality This assesses the quality of the generated characters in the video, with a score range of 0 to 10. Please answer five questions as follow: Question 1: What is the score for the reasonableness of limb distortions in the generated characters (e.g., unnatural joint bends)? Question 2: What is the score for the reasonableness of the number of limbs in the characters (e.g., extra or missing limbs)? Question 3: What is the naturalness score of the facial expressions or body movements of the characters in line with human behavioral characteristics? Question 4: What is the reasonableness score of the interactions between the characters and other objects in the scene (e.g., tools, animals, other people)? Question 5: What is the fluency score of the characters' behavior in the generated video? Answer 1: Answer 2: Answer 3: Answer 4: Answer 5: 20 C PSEUDO-CODE OF REALDPO def RealDPO_Loss(model, ref_model, x_w, x_l, c, beta): """ Computes the RealDPO loss for aligning model predictions with preferred and non-preferred samples. Args: model: Diffusion Transformer model. ref_model: Frozen reference model used for comparison. x_w: Preferred real video latents (aligned with the desired output). x_l: Non-preferred video-generated video latents (not aligned with the desired output). c: Conditioning input (e.g., text embeddings, image embeddings). beta: Regularization parameter controlling the strength of the alignment. Returns: realdpo_loss: The computed RealDPO loss value. """ # Sample random timesteps and noise for diffusion process timestep_k = torch.rand(len(x_w)) noise = torch.randn_like(x_w) # Create noisy versions of preferred and non-preferred latents noisy_x_w = (1 - timestep_k) * x_w + timestep_k * noise noisy_x_l = (1 - timestep_k) * x_l + timestep_k * noise # Predict latents using the model and reference model latent_w_pred = model(noisy_x_w, c, timestep_k) latent_l_pred = model(noisy_x_l, c, timestep_k) latent_ref_w_pred = ref_model(noisy_x_w, c, timestep_k) latent_ref_l_pred = ref_model(noisy_x_l, c, timestep_k) # Compute prediction errors for preferred and non-preferred latents model_w_loss = (x_w - latent_w_pred).norm().pow(2) ref_w_loss = (x_w - latent_ref_w_pred).norm().pow(2) model_l_loss = (x_l - latent_l_pred).norm().pow(2) ref_l_loss = (x_l - latent_ref_l_pred).norm().pow(2) # Compute alignment differences w_loss_diff = model_w_loss - ref_w_loss l_loss_diff = model_l_loss - ref_l_loss # Compute the RealDPO loss alignment_term = -0.5 * beta * (w_loss_diff - l_loss_diff) realdpo_loss = -1 * torch.log(torch.sigmoid(alignment_term)) return realdpo_loss 21 D THE USE OF LARGE LANGUAGE MODELS (LLMS) In the preparation of this manuscript, we used GPT-4, a large language model from OpenAI, exclusively as a writing assistance tool. Its use was confined to the Introduction and Methods sections, where it served to aid in polishing the text. Specifically, the model was prompted to help restructure sentences for improved clarity and flow, ensure consistent academic tone, and simplify complex technical descriptions. All fundamental ideas, research hypotheses, methodological designs, experimental data, analysis, conclusions, and the final intellectual content are solely the product of the authors' work. The LLM generated no original content or ideas and was not used for data analysis or interpretation. The authors carefully reviewed, edited, and verified all AI-generated text to ensure it accurately reflected their research and adhered to the highest standards of academic integrity. 22
2510.14953
Dark Matter Subhalos and Higher Order Catastrophes in Gravitational Wave Lensing Luka Vujeva,1, ∗Jose Mar´ıa Ezquiaga,1 Daniel Gilman,2 Srashti Goyal,3 and Miguel Zumalac´arregui3 1Center of Gravity, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark 2Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL 60637, USA 3Max Planck Institute for Gravitational Physics (Albert Einstein Institute) Am M¨uhlenberg 1, D-14476 Potsdam-Golm, Germany (Dated: October 17, 2025) Gravitational lensing is an invaluable probe of the nature of dark matter, and the structures it forms. Lensed gravitational waves in particular allow for unparalleled sensitivity to small scale structures within the lenses, due to the precise time resolution in combination with the continuous monitoring of the entire sky. In this work, we show two distinct ways of using strongly lensed gravitational waves to identify the presence of dark matter subhalos: i) through higher order caustics generating high relative magnification (µr > 2), short time delay image pairs that break the caustic universality relations of single dark matter halos, which occur for ∼1 −10 percent of strongly lensed events in our cold dark matter models, and ii) through the presence of more than three highly magnified images, which occur for ∼0.01−1 percent of the same simulated events. We find that these results are highly sensitive to the concentrations of subhalos in our simulations, and more mildly to their number densities. The presence of low-mass subhalos increases the probability of observing wave-optics lensing in lensed gravitational waves, which is studied by solving the diffraction integral with the stationary phase approximation, as well as numerically. We also report distinct quantitative and qualitative differences in the distributions of relative magnifications and time delays for subhalo populations with increased number densities or concentrations. With the upcoming detection of strongly lensed events by ground- and space- based detectors, comparisons against these simulated distributions will provide insight into the nature of dark matter. I. INTRODUCTION Gravitational lensing offers striking glimpses into the high-redshift Universe by nature of the ’gravitational telescope’ phenomenon caused massive objects such as galaxies and galaxy clusters. This has provided us ac- cess into previously inaccessible periods of the evolution of the Universe, such as allowing us to observe galax- ies [1, 2] and even individual stars [3, 4] at unprece- dented distances beyond the sensitivity of current tele- scopes [5, 6]. Additionally, it has allowed for the most stringent observational tests of the nature of dark mat- ter, and the structures it forms [7, 8]. While there has been a tremendous amount of success in observing grav- itational lensing in the electromagnetic (EM) spectrum (and much more to come with all sky observation from the Vera Rubin Observatory [9] and the Euclid telescope [10, 11]), focusing on different messengers can unveil new information about gravitational lensing [12]. Gravitational waves (GWs) are unique transients be- cause they are coherently detected with a timing preci- sion down to the millisecond. This is to be contrasted with the current state-of-the-art in EM lensing, con- straining differences in the arrival times of images of the order of days [13]. Moreover, their waveform morphol- ogy and phase evolution are accurately recorded with the LIGO [14], VIRGO [15], and KAGRA [16] (LVK) detec- tors, with remarkable recent examples [17, 18]. There- fore, lensed GWs are an exceptional new probe of the ∗luka.vujeva@nbi.ku.dk dark matter (DM) substructures, which intrinsically pre- dict short lensing time delays. Although we still have to detect a strongly lensed GWs [19–22], their discovery is expected in the next observ- ing run [23–27]. This will be further improved in future ground-based detectors such as Einstein Telescope [28] and Cosmic Explorer [29], and future space-based detec- tors such as LISA [30, 31], which offer exciting prospects for measuring wave-optics phenomena from larger lenses such as galaxies, supermassive black holes or subhalos. The focus of GW lensing is typically divided between the study of large scale structures such as galaxies and galaxy clusters [24, 32–36], and the interference effects of compact objects such as stars and black holes [37–39]. While recent works have explored GW lensing with sub- halos in the single image regime [40–42], this work aims to explore the strong lensing phenomenology of these sys- tems, and potential interference effects near the conse- quent caustics, which may differ from the universal pre- dictions without substructure [35, 43]. In the context of GW lensing from galaxy-scale lenses, most work considers single, spheroidal dark matter ha- los following typical dark matter distributions such as Singular Isothermal Spheres (SIS), Navarro-Frenk-White (NFW) [44–46], or Singular Isothermal Ellipsoids (SIE) profiles [47–49]. While these profiles are adequate to de- scribe the main dark matter halo of galaxies, this work aims to be the first to consider the effects of dark matter subhalos on strongly lensed GWs in these traditionally smooth, singular lenses. The existence of DM subhalos is motivated by the hi- erarchical mergers seen in N-body cosmological simula- arXiv:2510.14953v1 [astro-ph.CO] 16 Oct 2025 2 FIG. 1. Magnification (µ) map of an example galaxy populated with cold dark matter subhalos in the image (left) and source (right) plane. Note that this example corresponds to the increased concentration case explored later in the work. tions, which find that a fraction of the total dark matter mass in large DM halos is contained in lower mass, typi- cally more concentrated DM subhalos [50–53]. While not observed directly, their existence has been put forward as a solution to some prevalent issues in EM lensing, such as quasar flux anomalies [54–56], deviations from expected image locations [57], and excess total magnification of sources near caustics [58]. Many alternative DM models create differences in the low end of the subhalo mass distribution (as well as the density distribution of DM halos in certain models). Cold DM (CDM) allows for the creation of much smaller DM halos when compared to warm, fuzzy, wave, and self in- teracting DM, which typically suppress the production of structure on small scales, leading to lower number den- sities of low mass halos when compared to CDM [59]. Therefore, being sensitive to low mass halos is crucial in probing the nature of DM. Moreover, lighter subhalos do not hold baryons efficiently, and therefore are a more direct probe of DM theories [60]. In this work, we show that not only are strongly lensed GWs powerful probes of dark matter subhalos dow to masses of 107M⊙(and can be extended to much lower masses), but we also show that there are concrete crite- ria that can be used to show that even a single strongly lensed GW event has been affected by a subhalo. These deviations come in the forms of either seeing 4 (or more) highly magnified gravitational waves coming in short suc- cession (which has been previously explored in the con- text of EM lensing [61]), or seeing a deviation in the time delay and relative magnifications of the two brightest im- ages from the predictions coming from catastrophe the- ory for single smooth potentials. More specifically, we see that the higher order caustics (or catastrophes) caused by the inclusion of substructure in the form of subhalos can cause the brightest pair of images to have relative mag- nification factors of µr > 2 at short time delays (∆T ), which is impossible in a smooth single potential. The detection of such an event would be a signature of the presence subhalos (or other compact objects) effecting the lensed signals. We can explicitly show that subhalos affect lensed im- ages by the deviation from the expected µr −∆T relation for short time delays and highly magnified GWs. Namely, without the subhalos, the two brightest images must have µr < 2 at small ∆T (which is the upper limit set by the cusp caustic, with the behavior of images near the fold being µr →1 at low ∆T [35, 43, 62]). While deviations from the fold behavior are harder to detect (but could be studied from a large population of lensed GWs), subhalos near the images produced by cusps are highly sensitive to the subhalo perturber, and the production of an image pair with short time delay and µr > 2 is smoking gun ev- idence of the existence of subhalos near caustics. In this work, we do not consider the impact of line-of-sight halos, which could also contribute to a similar effect [63, 64]. This work is organized as follows: in Sec. II we give an overview of gravitational lensing formalism in both the wave optics and geometric optics regimes, and intro- duce the formalism of typical caustics. In Sec. III we 3 summarize our treatment of dark matter subhalos in our composite lens model. Sec. IV provides the results of our fiducial model, as well as exploring the impacts of modify- ing the dark matter model. Finally, we examine the links back to higher order catastrophes in Sec. V. Throughout this work we adopt a Planck 2018 cosmology [65]. II. GRAVITATIONAL LENSING In this section, we review the fundamentals of gravi- tational lensing in both the wave and geometric optics regimes, and introduce the typical caustics seen in single dark matter halos, namely the fold and cusp caustics. A. Wave Optics The changes in the amplitude of a lensed gravitational wave is characterized by the amplification factor F(f), simply defined as F(f) = ˜hL(f) ˜h0(f) , (1) where ˜hL(f) and ˜h0(f) are the Fourier transformed lensed and unlensed gravitational wave strains. The strain of the lensed gravitational wave is determined by the Kirchhoff diffraction integral [62, 66, 67], given by F(ω) = ω 2πi Z d2⃗xeiωT (⃗x,⃗y), (2) where T(⃗x, ⃗y) is the Fermat potential, and ω is the dimensionless frequency defined as ω ≡8πGMLzf.The redshifted lens mass MLz is defines as MLz ≡(1 + zL) DS DLDLS θ2 E 4G. (3) In units where image plane (⃗x) and source plane po- sition are rescaled by a characteristic length scale in the system (typically the Einstein radius θE) as θ = ⃗x/θE β = ⃗y/θE , the Fermat potential, also called the time delay surface [62, 68] can be written as Td(θ, β) = ϕ(θ, β) ≡ (θ −β)2 2 −ψ(θ)  , (4) where ψ(θ) is the lens potential. The amplification factor can also be expressed in the time domain (which is what the lensing code GLoW [69] that is being used in this work solves for) through a simple Fourier transform of F(ω) I(τ) = Z ∞ −∞ iF(ω) ω e−iωt dω. (5) Using this, we can also define the lensing Green’s func- tion, which is simply given by G(τ) = 1 2π d dτ I(τ). (6) The physical meaning of G(τ) is that its convolution with the unlensed signal gives us the desired lensed signal hL. B. Geometric Optics The geometric optics (GO) limit is valid for lensing sys- tems in which the wavelength of the GW is much smaller than the characteristic size of the lens, and when a source of finite frequency is sufficiently far from a caustic. In this limit, GW lensing closely follows that of light [66]. Relat- ing back to the wave optics regime, the image locations in geometric optics are determined by the stationary points of the stationary points of the integrand. For a given lensing configuration in GO, the locations of images are determined by the lens equation, β = θ −α(θ), (7) where β is the source location, θ is the image location, and α(θ) is the deflection angle, determined by the lens- ing potential ψ(θ) as α(θ) = ∇ψ(θ). The dimensionless surface mass density, or convergence, is defined as, κ(θ) = Σ(θ) Σc , (8) where Σ(θ) is the surface mass density of the lens, and Σc is the critical surface mass density at the redshift of the lens, Σc = c2DS 4πGDLDLS , (9) where G is Newton’s gravitational constant, c is the speed of light, and DS, DL, and DLS are the angular diameter distances to the source, the lens, and between the lens and source respectively. The difference in arrival time between two images θi and θj of the same source at a position β is ∆Tij = 1 + zL c DLDS DLS  Td(θi, β) −Td(θj, β)  , (10) where zL is the redshift of the lens In this language, the lens equation and image positions ⃗θi are simply given by ∂Td/∂⃗θ ⃗θ=⃗θi = 0. Finally, the magnification of a given image is defined as µ−1 = (1 −κ(θ))2 −γ(θ)2, (11) 4 FIG. 2. Relative magnification (µr) vs time delay (∆T) for the two brightest images of sources placed near the caustics of a single elliptical single isothermal sphere halo without (left), and with subhalos (right). The grey lines in both plots represent the asymptotic values for the relative magnifications for sources very close to the caustics, where µr = 2 is the limit for the cusp. Note that in the case of subhalo perturbers, the relative magnifications at low time delays can greatly exceed the theoretical limit set by the cusp caustic. The vertical blue line corresponds to the shortest time delay measured in the EM,a quadruply lensed quasar found to have a time delay of ∆T = 0.8+0.8 −0.7 days [13]. where γ(θ) is the shear. This, again, can be de- rived directly from the time delay surface, i.e., µ−1 = det  ∂2Td/∂⃗θ∂⃗θ  . The points along which |µ| →∞in the image plane are called critical curves (CCs), and their equivalent in the source plane are called caustics. These will be explored in detail in the coming sections. Given the highly oscillatory nature of the integrand in Eq. II A at high frequencies, we employ the stationary phase approximation (SPA) to produce lensed waveforms for sources whose frequencies would fall within the regime of ground based detectors such as the LVK network. The SPA assumes that most of the contribution to the integral comes from the stationary points of the integrand, which correspond to the image locations. This leads to the ap- proximation (whose detailed derivation can be found in [62, 70]), F(w) ≈ X j q |µ(⃗xj)| exp (iwTd(⃗xj) −injπ/2) , (12) in which each image has an arrival time Tj, magnifica- tion µj, and phase shift nj. The validity of this approach for gravitational wave sources near caustics has been ex- plored in previous works [43, 71], and should be valid for the source locations considered in this work. Exploring methods of reliably calculating the full frequency depen- dent amplification factor at high dimensionless frequen- cies is left to future work. A quantity that is often used throughout this work is the relative magnification factor, which (unless otherwise specified) is simply µr = |µ1/µ2|, where µ1 and µ2 are the brightest and second brightest images of a given source at a location β. We also define the parity of the images based on the sign of their magnification factors (i.e. a positive parity image has µi > 0, whereas a negative parity image has µi < 0). C. Fold and Cusp Caustics In a non-axisymmetric lens, there are two types of caustics present in lensing configurations with a single potential: fold and cusp caustics. These have been ex- plored in the context of gravitational waves for both the geometric and wave optics regimes in depth in Ezquiaga et al. [43], and point readers to this work for detailed derivations of the following expression. We summarize pertinent details in this sub-section, and how it will re- late to the higher order caustics of interest in this work. The properties of caustics (also known as catastrophes) are studied through an expansion of the time delay sur- face (Td) around critical curves {⃗x} and caustics {⃗y}. We choose our coordinate system such that the critical curves and caustics occur at ⃗x = 0 and ⃗y = 0 (i.e. the center) of the image plane and source plane respectively. We denote derivatives in the time delay surface Td as Ti1···in. Criti- cal curves are found at the image positions T1 = T2 = 0 where the determinant of the Hessian matrix is D ≡det(Tab) = T11T22 −T 2 12 = 0 . (13) The rank of the Hessian matrix Tab will determine the property of a generic caustic, where having a rank of 1 5 corresponds to the stable caustics [72] we are interested in, folds and cusps. A fold caustic is described by a lens mapping in which the determinant of the Hessian van- ishes, it’s normal vector does not vanish, and the Hessian itself does not vanish (i.e T222 ̸= 0). In the case of the cusp, the condition are that T11 ̸= 0 and T222 = 0, thus we would need to include quadratic derivatives in the x2 direction, or in other words, T2222 ̸= 0. A full derivation of the lensing quantities of sources near these caustics can be found in [43]. A fold caustic generated two images which can be highly magnified, whose magnifications are simply given by µ± = ± 1 T11 √2T222y2 , (14) which shows that the relative magnification of a source in the universality regime near a fold caustic will always have µr = 1. The arrival time difference of the images is ∆T = T−−T+ = 4 √ 2 3 |y2|3/2 |T222|1/2 , (15) where one can also see that as y2 →0, ∆T →0, leading to the rich phenomenology of interfering and diffractive signals. For the cusp, the equations become much more compli- cated, and for the purposes of this work we only show the expressions for a source approaching along the symmetry axis y2 = 0. A source within a cusp can produce three highly magnified images. A source on the symmetry axis produces one one image of a given parity (dependent on the γ parameter that is defined below) which arrives at the time Td(⃗x(0)) = Tc −1 2 y2 1 T11 , (16) , as well as two images of parities opposite to the first, which both arrive simultaneously at the time ∆T = Td(⃗x(1,2)) −Td(⃗x(0)) = 1 2 T 2 122y2 1 T 2 11γ . (17) where γ is defined as γ ≡T 2 122 T11 −T2222 3 . (18) The magnifications of these images are given by the expression µ(0) = −2µ(1,2) = 1 T122y1 = T11 √2γ∆T , (19) which shows that the maximum relative magnification of these images is µr = 2. FIG. 3. Concentrations (top) and number counts (bottom) of subhalos as a function of mass (M200) for the fiducial Diemer- Joyce parameters [52], shown for an opening angle of 30 arc- seconds. III. COMPOSITE LENS MODEL In this section, we outline the methods used to model dark matter subhalos in our lensing configuration. We explore the impact of DM subhalos on lensed gravita- tional waves by considering them to be embedded in a galaxy scale main halo. The main halo used in this work is modeled using an el- liptical single isothermal sphere (eSIS). The eSIS lensing potential is described as ψ(⃗x) = q x2 1 + x2 2/q2, (20) where q controls the ellipticity of the halo, and ψ0 is given by ψ0 ≡ σ2 v GΣcrξ0 , (21) 6 where σv is the velocity dispersion of the halo. To simulate the subhalo population, we use pyHalo1 [55] to get the required subhalo properties such as their masses, concentrations, and radial distribution. Follow- ing Ref. [55], the subhalo mass function is given by d2N d log mdA = Σsub(m/m0)−α, (22) where Σsub is the normalization of the mass function at 108M⊙, and α controls the logarithmic slope of the mass function. The subhalos considered in this work are all described with a Navarro-Frenk-White (NFW) profile, which has been shown to adequately describe CDM halos in N−body simulations [44–46]. Note that we make the simplifying assumption of neglecting the tidal evolution models in pyHalo, and thus simply model the subhalos as NFW profiles to interface with GLoW . Exploring the impact of including these effects is left to future studies. They have density distributions that follow ρ(r) = ρs (r/rs)(1 + r/rs)2 , (23) where rs is the scale radius, and ρs/4 is the density at rs (called the scale density). The size of the scale radius is controlled by the halo concentration c, through rs = r200/c, where r200 is the radius at which the overdensity of the halo is two hundred times the critical density of the Universe at that redshift. The relationship between the mass and concentration of the subhalos (called the “c −M” relationship) is adopted from Diemer and Joyce [52] unless otherwise specified, and is implemented with a 0.2 dex scatter in their relationship [73]. Concentrations described by this model will be denoted as cDJ onward. Varying the magnitude of this scatter was found to have a negligible overall impact on the results of this work. We model the combined smooth potential of the main deflector, including the stellar mass of the galaxy and its host dark matter halo, using an eSIS model with an Einstein radius of ∼9.4 kpc (for at a lens plane redshift of zL = 0.5, and source plane at zS = 2), and ellip- ticity of q = 0.9, typical of galaxy-scale deflectors [74]. The minimum subhalo mass considered in this work is 10−6Mmain = 107M⊙. The subhalo mass function and related concentrations are shown in Fig. 3 for a line of sight with an opening angle of 30 arcseconds. Given the high number density of halos (especially in the lower mass range), we only consider subhalos near the critical curves of our main halo in this work to avoid computational limitations. These subhalos are selected through a magnification cut, considering subhalos that fall within the |µ| > 5 annulus. While subhalos farther away form the critical curves (and even along the line 1 https://github.com/dangilman/pyHalo FIG. 4. Relative magnifications and time delays for the highest concentration c and subhalo number density (parametrized by Σsub) considered in this work.The fiducial models are shown in gray. of sight) offer exciting opportunities to learn about both subhalos and dark matter [75], in this work we will only focus on highly magnified gravitational waves, and thus only consider subhalos in this region. This greatly re- duces the number of subhalos in our simulations due to the vast majority of the total ∼6800 subhalos in Fig. 3 re- siding very far from the critical curves of the main halo), and thus makes the computations of lensing observables (especially I(τ) and F(f)) feasible. We leave considering the full spatial distribution of subhalos to future work. In order to solve for the lensing observables, we em- ploy GLoW2 [69], which is allows us not only to calculate the geometric optics observables, but the full diffraction integral to capture both the interference and diffractive effects present in GW lensing for arbitrary lens configu- rations. IV. EFFECTS OF SUBHALOS ON LENSED GW OBSERVABLES TABLE I. Summary of the probabilities of finding a source location with µr > 2, and generating more than 5 images (Nim > 5) for the different models in this work. Model P(µr > 2) P(Nim > 5) Fiducial 9.0 × 10−3 2.8 × 10−4 c = 3 × cDJ 8.0 × 10−2 6.4 × 10−3 c = 6 × cDJ 8.7 × 10−2 1.6 × 10−2 Σsub = 0.050 1.0 × 10−2 1.7 × 10−4 Σsub = 0.075 4.0 × 10−2 3.1 × 10−4 2 https://github.com/miguelzuma/GLoW_public 7 FIG. 5. Relative magnifications of the brightest and second brightest images compared against the time delays of the third and fourth brightest images for the fiducial, high number den- sity, and high concentration models. Note that the existence of the population of short time delay pairs for this image pair is due to these sources falling within higher order or nested caustics. All other source locations fall in the high ∆T branch of the distributions. In this section, we aim to quantify the fraction of the source plane in which the subhalo effects can be seen, and how this is affected by varying both the number density and concentrations of the subhalos in our simulation. We separate what we quantify as the effects of subha- los into two categories: the rate at which we see 4 (or more) bright images, and the rate at which we see devia- tions from the expected µr −∆T relations (summarized in Table I). Due to the large separation in scales of the largest and smallest subhalo mass, we must be cautious to mitigate the effects of the resolution of our simula- tions. In order to do this, we first identify a region in the source plane with a minimum source magnification of µsr > 25. We then densely sample the region within the boundaries set by the magnification cut in order to capture the small-scale effects of the low-mass halos. Fi- nally, we draw source locations from the high-resolution source plane to calculate our lensing observables. A. Deviations from Universality Relations As outlined in § II C, the fold and cusp caustics found in single halo lensing systems follow unique behaviors de- scribed from catastrophe theory [35, 43, 62]. More specif- ically, a source within the main caustics (thus having an image multiplicity of greater than three) must have a rel- ative magnification less than µr < 2. In Fig. 2, we see that this holds in the case of the single halo (left panel), but is broken in the case of a halo populated with sub- halos (right panel). These pairs of images are being gen- erated by sources near higher order catastrophes, which are caustics generated by the subhalos perturbing the CCs of the main halo (which will be explored in § V A). This is to be expected given the observational evidence from quasar flux anomalies [55, 56, 76], as well as excess total magnification in lensed sources near apparent cusps [58]. B. Impact on Additional Images One of the key properties having either higher order caustics or nested caustics caused by subhalos is the pos- sibility of higher image multiplicities. This has been ex- plored in detail in the context of higher lens mass systems in both the EM [77] as well as in gravitational waves [36]. In the context of subhalos, the occurrence of Nim > 5 is expected to be a rare occurrence due to the small strong lensing cross section of low mass subhalos. An example of this can be seen visually in Fig. 1, where there are only very small regions within the caustics of the main halo where there are additional caustic crossings due to either disconnected or higher order caustics. We report the probabilities of seeing higher image multiplicities in Table I. Despite the rarity, even if we do not see the produc- tion of additional images, we expect to see the effects of subhalos in the image properties of images arriving at later times, which should differ from those of a smooth single halo. For instance, sources within a higher order caustic are expected to produce four highly magnified images, which differs from the maximum of three highly magnified images produced by cusps (which is the high- est one can produce in a single lens system). Therefore, in order to study these effects, we can examine the rel- ative arrival times of the third and fourth brightest im- ages ∆T (3,4). We see that small values of ∆T (3,4) (shown in Fig. 5) correspond to source locations that either fall within higher order caustics, or nested caustics. In both cases, the total image multiplicity of these locations in the source plane are determined by how many caustic one must cross in order to reach the region of interest (where every caustic crossing corresponds to the creation of two additional images). We also see that ∆T (3,4) can be short enough to cause interfering signals, opening the possibility for multiple sets of overlapping images pairs for a single source location. An explicit example of this can be seen in Fig. 6, where two distinct pairs of over- lapping signals coming from one source location. C. Discerning Between Concentration and Number Density We proceed by investigating two modifications to the subhalos in our model: their number density (Nsub), and their concentrations (c). The number density is modi- fied through changing the normalization parameter Σsub of the subhalo mass function at 108M⊙(see [55]). We test parameters of Σsub = [0.025, 0.050, 0.075]. The con- 8 FIG. 6. Lensed gravitational wave strains for a source location within a higher order caustic produced by a subhalo in the example case of Fig. 1. The left pane, shows the arrival of the first two images, the second shows the third and fourth images. The second and third images arrive with a delay of ∆T (2,3) = 28.44 s, with a relative magnification of µ(2,3) r = 5. The gray lines show the (rescaled) unlensed signal aligned at the time of peak strain of the first image. Note that the superscript denotes the relative arrival time of the images, and are not ordered by strain amplitude as in the rest of the text (i.e. µ(i,j) r = µj/µi, and ∆T (i,j) = |Tj −Ti|.) FIG. 7. Central convergence of NFW halos near the critical curves of the fiducial main halo. The black line corresponds to the strong lensing criterion of κ = 1. centrations are parametrized using a modified Diemer- Joyce relation [52], where concentrations drawn from this model are rescaled by a constant factor cscale = [1, 3, 6]. The case of c = 3 × cDJ is of particular interest, as re- cent studies have found this to be consistent with sub- halos with masses Mvir ≲109M⊙[78] (which is approx- imately the maximum subhalo mass considered in this work), and has been explored in other works focusing on weakly lensed GWs in the presence of subhalos [42]. We find that as we increase our two parameters, we see distinct features, particularly in the case of high concen- trations. In the high Σsub case (Fig. 4 and Fig. 13), we see that as we increase the number of subhalos in our simula- tion, there are considerably more image pairs that cross our µr > 2 threshold. This is unsurprising given that as we increase the number of subhalos near the critical curves, we increase the number of higher-order caustics generated, thus increasing the probability of seeing their effects on an image pair. In the high-concentration case, we generate signifi- cantly more image pairs above our threshold (substan- tially more than in the increased Σsub case as summa- rized in Table I). Additionally, embedded within µr − ∆T distribution we see the qualitative features of sources placed near higher order caustics (Fig. 11), as well as features akin to those of sources just outside of low mass caustics. These features arise due to the effects of nested caustics embedded within the caustics of the main halo. This feature feature is the spire-like branches of high µr image pairs (see Fig. 12) embedded in the short ∆T distribution due to their low mass. To understand the efficiency at which such caustics can be generated for NFW subhalos, we consider a typical condition for generating strong lensing, having a point in a lens mapping where κ > 1. This is because below this threshold, the mapping remains globally invertible. Above this threshold, the mapping is no longer injective due to the generation of multiple images for single source locations. We estimate the efficiency of subhalos to gen- erate critical curves disconnected and outside of the main halo (which generate nested caustics) by computing the central convergence of the halos in close proximity to the caustics. For an NFW halo, the 2D surface mass density is given by 9 FIG. 8. Time domain amplification factor I(τ) shown alongside its regular and singular components (left) for the example source location generating the lensed signals in Fig. 6, as well as a zoom in around the main peaks (right) to show the contributions of the regular and singular parts of I(τ) near the highly magnified images. Σ(x) = 2ρsrs x2 −1f(x), (24) where f(x) is f(x) =              1 − 2 √ 1−x2 arctanh q 1−x 1+x, x < 1 0, x = 1 1 − 2 √ x2−1arctan q x−1 1+x, x > 1 . (25) We approximate the central convergence of the NFW subhalos in isolation by taking the convergence at x = 10−4. We show the central convergence of NFW subhalos following both the fiducial and modified c −M relations for subhalos near the main CCs in Fig. 7. We make the simplifying assumption that the contribution to the total convergence coming from the main halo near the main CCs is κ ≈1/2, and can therefore write the central convergence of the subhalos near the CCs as κc ≈κ(0) + 1/2. (26) We see that the fiducial subhalos embedded near the CCs fall below the critical κ > 1 threshold for the entire mass range, making it unlikely for them to form nested caustics, even in the case of high subhalo number den- sities due to the low central density of each individual subhalo. However, even small increases to the subhalo concentrations shift much lower subhalo masses above the strong lensing threshold, thus allowing for the pro- duction of significantly more nested caustics. This is even more exaggerated in the high concentration case, where even the lowest mass subhalos in our simulation are ef- ficient enough to generate disconnected caustics. This most likely explains the discrepancy of the rate of see- ing both greater than 5 images (Nim) and µr > 2 in the high concentration case as opposed to the high num- ber density case. We also find that nested caustics are more efficient at creating source plane regions where Nim, which explains why there is a much higher probability of seeing this in the high concentration case as opposed to the high number density case. This also provides insight into the range of masses dominating the contribution to these effects, which are found to be subhalos of masses greater than 108M⊙in both the fiducial and high number density simulations, whereas we see an increased contri- bution from lower mass halos in the high concentration simulations. D. Impact on I(τ) To further study the implications of substructure on the lensed signals, we examine the time dependent am- plification function I(τ) in Fig. 8. This calculation is enabled by creating our setup using GLoW , which allows for the direct computation of these quantities. Due to the 10 FIG. 9. Bifurcation sets of the butterfly (left) and swallowtail (right) catastrophes. In the lensing analogy, the center of the lens is towards the top of the plot. We use Arnold’s parametrization [79] described in §V A. highly oscillatory nature of the integrand of the diffrac- tion integral, computing F(w) at the high w correspond- ing to LISA or LVK signals (note that the dimensionless frequencies for these detectors for our fiducial redshifted lens mass is w ∼104 −1010) is currently not possible for all frequencies required to reconstruct lensed signals for either detectors in our setup. Calculating these quan- tities at higher frequencies is left to the future work of finding a more efficient way of calculating the high fre- quency portion of this integral. However, we present the results for the lower frequency portion that is currently calculable as a proof of concept. We select the same source location used to generate the time domain waveform in Fig. 6. We see distinct struc- ture in the regular and singular parts of I(τ) in Fig. 8, especially near the prominent peaks corresponding to the highly magnified images. Of particular interest are the highly negative portions of the regular portion of I(τ) near the peaks, indicating the existence of wave optics phenomena occurring for these images not fully captured by the singular piece. This will correspond to distinct irregular oscillations in F(w) that are characteristic of the inclusion of additional structure in the lens. Similar features can be seen in the study of wave optics phe- nomena in the weak lensing regime caused by dark mat- ter subhalos [42]. This would create unique frequency dependent oscillations for signals in this regime that is not reproducible by diffraction effects caused by a sin- gle dark matter halo (or even a compact lens such as the point mass lenses typically used to study wave op- tics phenomena in lensed GWs). Whilst the impact of these oscillations might be suppressed at high frequencies where we approach GO, longer wavelength signals could still undergo interesting diffraction phenomena if there are small, compact dark matter halos along the propaga- tion path. This motivates the push towards computing the diffraction integral at higher frequencies in order to hunt for these features in substructure-lensed signals. V. HIGHER ORDER CAUSTICS AND THEIR OBSERVABLES In this section, we introduce some basic theory about the higher order caustics created by the subhalos perturb- ing the critical curves of the main halo to gain a deeper understanding of why we are seeing these deviations from the typical universality relations. We first present them in their most general form, and then build toy model ex- amples of each caustic by placing single massive halos at a point along the critical curve that maps back to ei- ther a fold, or a cusp in the source plane. This allows us to study the most general types of perturbations to the caustics we expect to see from subhalos, in isolation. A. Higher Order Catastrophes Starting from their most general form, we will high- light some of the qualitative features of two most common higher order catastrophes that we find in our systems, the swallowtail and butterfly catastrophes 3. Details about the theoretical aspects of these catastrophes (and many more) can be found in Ref. [79–81]. The implications of higher order catastrophes in lensing in the EM are ex- plored in Ref. [58, 75, 82–87], however deriving the full specific form of the generating functions discussed below using the time delay surface and computing the resulting gravitational wave observables is left to future work. 3 Coincidentally, the swallowtail catastrophe was the subject of Salvador Dal´ı’s final painting ’The Swallow’s Tail’ (1983), in- spired by a meeting with R´ene Thom. Dal´ı had originally set out to paint all of Thom’s catastrophes, but only completed The Swallow’s Tail before his untimely death in 1989. 11 FIG. 10. Source plane magnification map near the perturbed cusp (i.e. the butterfly catastrophe), and perturbed fold (the swallowtail catastrophe), with relative magnifications (µr) of different source locations shown in color. Note that the existence of higher order caustics allows for high µr (shown in pink), short ∆T image pairs. 1. The Swallowtail Catastrophe A swallowtail catastrophe, which has an ADE classifi- cation of A4 in Arnold’s notation [79], in its most general form is defined by the generating function V = 1 5x5 + 1 3ax3 + 1 2bx2 + cx, (27) where x is the so called state variable, and a, b, c are the control parameters. The mapping between control space (analogous to the source plane in lensing) and the generating function (analogous to the time delay surface) is given by the solutions of the gradient of the potential function V ′ = 0 (often called the equilibrium condition): 0 = x4 + ax2 + bx + c. (28) This is analogous to solving the lens equation for a gravitational lensing system. The number of solutions to this equation corresponds to the number of local images generated by a specific catastrophe. Given its quartic nature, it can produce a maximum of 4 real roots, two real roots, or no real roots. The real roots of this equa- tion correspond to the number of local images produced by that region of control space, whose boundaries define the bifurcation sets (in the case of lensing, the caustics). The equations describing the shape of the catastrophe in control space can be found by solving for the bifurcation set of the generating function (i.e. when V ′ = V ′′ = 0). First, we find V ′′ = 0, and solve for b to find b = −4x3 −2ax, (29) which we can substitute into our expression for V ′ and solve for c. This allows us to write the bifurcation set for the swallowtail in its parametric form ( b = −4x3 −2ax c = ax2 + 3x4 . (30) 2. The Butterfly Catastrophe A butterfly catastrophe (A5), in its most general form is defined by the generating function V = 1 6x6 + 1 4ax4 + 1 3bx3 + 1 2cx2 + dx, (31) where we have introduced a new control parameter d. It’s equilibrium condition V ′ = 0 is given by 0 = x5 + ax3 + bx2 + cx + d, (32) 12 FIG. 11. Relative magnifications and time delays as in Fig. 2, but for the butterfly (left) and swallowtail (right) catastrophes shown in Fig. 9 and Fig. 10. which can produce a maximum of five real roots. Using the same approach as for the swallowtail, we can take V ′′ = 0, and this time solving for c, we find that c = −5x4 −3ax2 −2bx. (33) We can substitute this into the equilibrium condition and solve for d to write the bifurcation set of the butterfly in its parametric form ( c = −5x4 −3ax2 −2bx d = 4x5 + 2ax3 + bx2 . (34) We plot the bifurcation set (or caustics in the lensing case) for both the swallowtail and butterfly in Fig. 9 with a fixed value of a = −5 in the case of the swallowtail, and a = −6, b = 0 for the butterfly. Note that the center of the lens is always pointed towards the top of the plot. B. Toy Model Results With the insight from catastrophe theory in hand, we can proceed to building these catastrophes into our lens- ing setup to and solve for the observables of this toy model case. To generate a swallowtail, we place one massive sub- halo on a part the critical curve of the unperturbed main halo which maps back to a fold in the source plane. To generate the butterfly, we place one massive subhalo on a part the critical curve of the unperturbed main halo which maps back to a cusp in the source plane. In both cases, the subhalo has the same NFW profile as the sub- halos in the previous section, but with a mass of 1010M⊙ and c = 13 for both the fold and cusp case. Note that this choice of concentration is slightly higher than the fiducial c −M relation, but is taken solely to make the effects of the subhalos clearer. The perturbed caustics can be seen in Fig. 10. To investigate how the µr −∆T relation is affected by the existence of higher-order caustics, we populate the re- gions of the source plane near these caustics with sources, and calculate µr and ∆T for the brightest two images. Fig. 10 shows that the high µr images pairs are being produced just outside of the cusp-like feature extended towards the center of the lens. Fig. 11 shows that these high µr image pairs also have short time delays, explain- ing the points in Fig. 2 that have both high µr and short ∆T . It is also worth noting that the cusps extended out of the main caustics (the ”wings” of the butterfly), also produce high µr image pairs, however these images have longer time delays that cannot fall within the range of time delays produced by sources inside of the main cusp and fold caustics of an unperturbed main halo, or even the perturbed main halo itself. VI. CONCLUSIONS AND IMPLICATIONS Gravitational lensing offers a unique opportunity to map dark matter structures in the Universe, and lensed GWs in particular allow us to zoom into the small scales. Their unprecedented sensitivity to short time delays — down to milliseconds — enables inferences of halo sub- structure on sub-galactic scales, competitive with other probes of small-scale structure [64]. Although there has been a long and extensive effort to study the effects of small scale subhalos embedded in galaxy-scale halos in 13 EM observations, see e.g. [76, 88], an equivalent study was missing for GWs. In this work, we bridge this gap and study the implications for lensed GWs of dark mat- ter subhalos. We simulate lensing systems with sub- halo populations being calculated with pyHalo [55], and matching recent N-body simulations [52, 78]. We test the sensitivity of GW lensing observables (number of re- peated copies, relative magnifications, time delays, wave- form morphologies, etc.) to increased subhalo number densities and concentrations. Our main results are sum- marized as follows: • Substructure in the form of dark matter subhalos causes two distinct and unique observables in order to infer their presence: the breaking of universal µr −∆T relations caused by the creation of higher order catastrophes, and the generation of systems with higher image multiplicities, short time delay, and highly magnified images. • The inclusion of dark matter subhalos— particularly compact, low mass subhalos— increases the rate of wave-optics lensing phenom- ena, including interference and diffraction. • The rate at which subhalos influence GW observ- ables is heavily sensitive to the c−M relation, indi- cating a higher constraining power on dark matter models with increased concentrations at low mass subhalos. Our results demonstrate the significance of dark matter substructures in GW lensing, opening new avenues to re- veal their nature. The last point, in particular, highlights an exciting prospect for follow-up studies to examine the effects of alternative dark matter subhalos (such as fuzzy and wave dark matter [89]) that are capable of produc- ing more compact central densities. This would both amplify the rates at which we see deviations from the µr −∆T relation and higher image multiplicities, and in- crease the probability of seeing wave-optics phenomena. It also provides physical motivation for lensed GW pa- rameter estimation to include more complex interfering signals, beyond the lowest order caustics [43], as well as for GW searches to target more image pairs associated with the same lensed event. Additionally, multi-messenger lensed events would have even further capabilities [12]. They could allow us to not only combine arrival time and photometry infor- mation — a combination that can give us insight into the physical properties of the perturber — but also see if the DM subhalo is occupied by baryonic matter (such as a satellite galaxy). Where studies of higher order caustics are limited by spatial resolution in the EM, causing im- ages to smear and appear similar to those from standard caustics [85], the combination with the precision time delay information provided by GW detectors would offer unprecedented insights into the structure of such lensing systems. As discussed in Ref. [36], there are challenges for the identification of these short time delay, highly magnified signals with high relative magnifications. Specifically, when inferring the lens mass solely from small numbers of lensed GW images, they could be mistaken as coming from a lower lens mass system. While this is a much more dominant effect in systems with more prevalent struc- tures such as galaxy clusters, we find that this (albeit exciting phenomenon) could be a nuisance for inferring the total lens mass in lensed systems that do not have an EM counterpart. If not accounted for, substructures could also hinder rapid EM followups that utilize catalogs of known strong lenses such as LaStBeRu [90] or lenscat [91]. This is because these programs aim at matching the time delay of the lensed GW signals to the lens masses in a sky localization area most likely to generate them. The existence of unaccounted compact dark matter subhalos could also provide a hindrance to future efforts to perform time delay cosmography with lensed GWs in addition to EM observations [92]. Investigations in the EM have shown that not taking sub-structure into ac- count (both in the form of sub-halos and line of sight ha- los) leads to an added uncertainty in the inferred value of H0 that scales with the square root of the lensing vol- ume divided by the longest time delay image used in the inference [92]. This stresses the need to develop realistic modeling of dark matter substructures. The discovery of the first lensed GWs is expected in the next observing run of current ground-based detec- tors [23–27], and future space-based observatories also promise to detect more lensed signals [40, 41, 93]. In this work, we have shown that strongly lensed GW ob- servables are affected by dark matter subhalos, and there- fore hold a key to probe their properties. As we begin to detect lensed GWs, tests such as those provided in this work will allow for unprecedented precision to the small- scale structures of DM subhalos, and with any luck, will shed light onto the nature of DM itself. ACKNOWLEDGMENTS The authors would like to thank Juno Chan, and Guil- herme Brando for their helpful comments and sugges- tions. The Center of Gravity is a Center of Excellence funded by the Danish National Research Foundation un- der grant No. 184. This project was supported by the research grant no. VIL37766 and no. VIL53101 from Villum Fonden, and the DNRF Chair program grant no. DNRF162 by the Danish National Research Foun- dation. This project has received funding from the Euro- pean Union’s Horizon 2020 research and innovation pro- gramme under the Marie Sklodowska-Curie grant agree- ment No 101131233. J.M.E. is also supported by the Marie Sklodowska-Curie grant agreement No. 847523 INTERACTIONS. DG acknowledges support from the Brinson Foundation through a Brinson Prize Fellowship Grant. The Tycho supercomputer hosted at the SCI- 14 ENCE HPC center at the University of Copenhagen was used for supporting this work. [1] B. Wang, S. Fujimoto, I. Labb´e, L. J. Furtak, T. B. Miller, D. J. Setton, A. Zitrin, H. Atek, R. Bezan- son, G. Brammer, J. Leja, P. A. Oesch, S. H. Price, I. Chemerynska, S. E. Cutler, P. Dayal, P. van Dokkum, A. D. Goulding, J. E. Greene, Y. Fudamoto, G. Khullar, V. Kokorev, D. Marchesini, R. Pan, J. R. Weaver, K. E. Whitaker, and C. C. Williams, UNCOVER: Illuminat- ing the Early Universe-JWST/NIRSpec Confirmation of z ¿ 12 Galaxies, ApJ 957, L34 (2023), arXiv:2308.03745 [astro-ph.GA]. [2] V. Kokorev, H. Atek, J. Chisholm, R. Endsley, I. Chemerynska, J. B. Mu˜noz, L. J. Furtak, R. Pan, D. Berg, S. Fujimoto, P. A. Oesch, A. Weibel, A. Adamo, J. Blaizot, R. Bouwens, M. Dessauges- Zavadsky, G. Khullar, D. Korber, I. Goovaerts, M. Jec- men, I. Labb´e, F. Leclercq, R. Marques-Chaves, C. Ma- son, K. B. W. McQuinn, R. Naidu, P. Natarajan, E. Nelson, J. Rosdahl, A. Saldana-Lopez, D. Schaerer, M. Trebitsch, M. Volonteri, and A. Zitrin, A Glimpse of the New Redshift Frontier through AS1063, ApJ 983, L22 (2025), arXiv:2411.13640 [astro-ph.GA]. [3] J. M. Diego et al., JWST’s PEARLS: A new lens model for ACT-CL J0102−4915, “El Gordo,” and the first red supergiant star at cosmological distances discov- ered by JWST, Astron. Astrophys. 672, A3 (2023), arXiv:2210.06514 [astro-ph.GA]. [4] B. Welch, D. Coe, J. M. Diego, A. Zitrin, E. Zackris- son, P. Dimauro, Y. Jim´enez-Teja, P. Kelly, G. Mahler, M. Oguri, F. X. Timmes, R. Windhorst, M. Florian, S. E. de Mink, R. J. Avila, J. Anderson, L. Bradley, K. Sharon, A. Vikaeus, S. McCandliss, M. Bradaˇc, J. Rigby, B. Frye, S. Toft, V. Strait, M. Trenti, S. Sharma, F. Andrade- Santos, and T. Broadhurst, A highly magnified star at redshift 6.2, Nature 603, 815 (2022), arXiv:2209.14866 [astro-ph.GA]. [5] C. Mason, M. Trenti, and T. Treu, The Galaxy UV Lumi- nosity Function Before the Epoch of Reionization, Astro- phys. J. 813, 21 (2015), arXiv:1508.01204 [astro-ph.GA]. [6] L. Vujeva, C. L. Steinhardt, C. K. Jespersen, B. L. Frye, A. M. Koekemoer, P. Natarajan, A. L. Faisst, P. Hi- bon, L. J. Furtak, H. Atek, R. Cen, and A. Sneppen, Efficient Survey Design for Finding High-redshift Galax- ies with JWST, ApJ 974, 23 (2024), arXiv:2310.15284 [astro-ph.GA]. [7] S. Vegetti et al., Strong Gravitational Lensing as a Probe of Dark Matter, Space Sci. Rev. 220, 58 (2024), arXiv:2306.11781 [astro-ph.CO]. [8] I. Dutra, P. Natarajan, and D. Gilman, Self-interacting Dark Matter, Core Collapse, and the Galaxy–Galaxy Strong-lensing Discrepancy, Astrophys. J. 978, 38 (2025), arXiv:2406.17024 [astro-ph.CO]. [9] ˇZ. Ivezi´c et al. (LSST), LSST: from Science Drivers to Reference Design and Anticipated Data Products, As- trophys. J. 873, 111 (2019), arXiv:0805.2366 [astro-ph]. [10] R. Scaramella et al. (Euclid), Euclid preparation. I. The Euclid Wide Survey, Astron. Astrophys. 662, A112 (2022), arXiv:2108.01201 [astro-ph.CO]. [11] M. Walmsley et al. (Euclid), Euclid Quick Data Release (Q1): The Strong Lensing Discovery Engine A – System overview and lens catalogue, (2025), arXiv:2503.15324 [astro-ph.GA]. [12] G. P. Smith et al., Multi-messenger gravitational lensing, Phil. Trans. Roy. Soc. Lond. A 383, 20240134 (2025), arXiv:2503.19973 [astro-ph.HE]. [13] M. Millon et al., COSMOGRAIL XIX: Time delays in 18 strongly lensed quasars from 15 years of opti- cal monitoring, Astron. Astrophys. 640, A105 (2020), arXiv:2002.05736 [astro-ph.CO]. [14] J. Aasi et al. (LIGO Scientific), Advanced LIGO, Class. Quant. Grav. 32, 074001 (2015), arXiv:1411.4547 [gr-qc]. [15] F. Acernese et al. (VIRGO), Advanced Virgo: a second- generation interferometric gravitational wave detector, Class. Quant. Grav. 32, 024001 (2015), arXiv:1408.3978 [gr-qc]. [16] T. Akutsu et al. (KAGRA), Overview of KAGRA: Detec- tor design and construction history, PTEP 2021, 05A101 (2021), arXiv:2005.05574 [physics.ins-det]. [17] A. G. Abac et al. (LIGO Scientific, Virgo, KAGRA), GW250114: Testing Hawking’s Area Law and the Kerr Nature of Black Holes, Phys. Rev. Lett. 135, 111403 (2025), arXiv:2509.08054 [gr-qc]. [18] GW230814: investigation of a loud gravitational- wave signal observed with a single detector, (2025), arXiv:2509.07348 [gr-qc]. [19] O. A. Hannuksela, K. Haris, K. K. Y. Ng, S. Kumar, A. K. Mehta, D. Keitel, T. G. F. Li, and P. Ajith, Search for gravitational lensing signatures in LIGO-Virgo binary black hole events, Astrophys. J. Lett. 874, L2 (2019), arXiv:1901.02674 [gr-qc]. [20] R. Abbott et al. (LIGO Scientific, VIRGO), Search for Lensing Signatures in the Gravitational-Wave Observa- tions from the First Half of LIGO–Virgo’s Third Observ- ing Run, Astrophys. J. 923, 14 (2021), arXiv:2105.06384 [gr-qc]. [21] J. Janquart et al., Follow-up analyses to the O3 LIGO–Virgo–KAGRA lensing searches, Mon. Not. Roy. Astron. Soc. 526, 3832 (2023), arXiv:2306.03827 [gr-qc]. [22] R. Abbott et al. (LIGO Scientific, KAGRA, VIRGO), Search for Gravitational-lensing Signatures in the Full Third Observing Run of the LIGO–Virgo Network, As- trophys. J. 970, 191 (2024), arXiv:2304.08393 [gr-qc]. [23] K. K. Y. Ng, K. W. K. Wong, T. Broadhurst, and T. G. F. Li, Precise LIGO Lensing Rate Predictions for Binary Black Holes, Phys. Rev. D 97, 023012 (2018), arXiv:1703.06319 [astro-ph.CO]. [24] M. Oguri, Effect of gravitational lensing on the distribu- tion of gravitational waves from distant binary black hole mergers, Mon. Not. Roy. Astron. Soc. 480, 3842 (2018), arXiv:1807.02584 [astro-ph.CO]. [25] F. Xu, J. M. Ezquiaga, and D. E. Holz, Please Repeat: Strong Lensing of Gravitational Waves as a Probe of Compact Binary and Galaxy Populations, Astrophys. J. 929, 9 (2022), arXiv:2105.14390 [astro-ph.CO]. [26] A. R. A. C. Wierda, E. Wempe, O. A. Hannuksela, L. e. V. E. Koopmans, and C. Van Den Broeck, Be- yond the Detector Horizon: Forecasting Gravitational- 15 Wave Strong Lensing, Astrophys. J. 921, 154 (2021), arXiv:2106.06303 [astro-ph.HE]. [27] G. P. Smith, A. Robertson, G. Mahler, M. Nicholl, D. Ryczanowski, M. Bianconi, K. Sharon, R. Massey, J. Richard, and M. Jauzac, Discovering gravitationally lensed gravitational waves: predicted rates, candidate selection, and localization with the Vera Rubin Obser- vatory, Mon. Not. Roy. Astron. Soc. 520, 702 (2023), arXiv:2204.12977 [astro-ph.HE]. [28] M. Maggiore et al. (ET), Science Case for the Ein- stein Telescope, JCAP 03, 050, arXiv:1912.02622 [astro- ph.CO]. [29] D. Reitze et al., Cosmic Explorer: The U.S. Contribu- tion to Gravitational-Wave Astronomy beyond LIGO, Bull. Am. Astron. Soc. 51, 035 (2019), arXiv:1907.04833 [astro-ph.IM]. [30] P. Amaro-Seoane et al. (LISA), Laser Interferometer Space Antenna, (2017), arXiv:1702.00786 [astro-ph.IM]. [31] M. Colpi et al. (LISA), LISA Definition Study Report, (2024), arXiv:2402.07571 [astro-ph.CO]. [32] A. Robertson, G. P. Smith, R. Massey, V. Eke, M. Jauzac, M. Bianconi, and D. Ryczanowski, What does strong gravitational lensing? The mass and redshift dis- tribution of high-magnification lenses, Mon. Not. Roy. Astron. Soc. 495, 3727 (2020), arXiv:2002.01479 [astro- ph.CO]. [33] L. Dai, T. Venumadhav, A. A. Kaurov, and J. Miralda- Escud´e, Probing Dark Matter Subhalos in Galaxy Clus- ters Using Highly Magnified Stars, Astrophys. J. 867, 24 (2018), arXiv:1804.03149 [astro-ph.CO]. [34] M. Oguri, Strong gravitational lensing of explosive transients, Rept. Prog. Phys. 82, 126901 (2019), arXiv:1907.06830 [astro-ph.CO]. [35] R. K. L. Lo, L. Vujeva, J. M. Ezquiaga, and J. C. L. Chan, Observational Signatures of Highly Magnified Gravita- tional Waves from Compact Binary Coalescence, Phys. Rev. Lett. 134, 151401 (2025), arXiv:2407.17547 [gr-qc]. [36] L. Vujeva, J. M. Ezquiaga, R. K. L. Lo, and J. C. L. Chan, Effects of galaxy cluster structure on lensed gravitational waves, Phys. Rev. D 112, 063044 (2025), arXiv:2501.02096 [astro-ph.CO]. [37] M. Pijnenburg, G. Cusin, C. Pitrou, and J.-P. Uzan, Wave optics lensing of gravitational waves: Theory and phenomenology of triple systems in the LISA band, Phys. Rev. D 110, 044054 (2024), arXiv:2404.07186 [gr-qc]. [38] J. C. L. Chan, E. Seo, A. K. Y. Li, H. Fong, and J. M. Ezquiaga, Detectability of lensed gravitational waves in matched-filtering searches, Phys. Rev. D 111, 084019 (2025), arXiv:2411.13058 [gr-qc]. [39] J. C. L. Chan, C. Dyson, M. Garcia, J. Redondo-Yuste, and L. Vujeva, Lensing and wave optics in the strong field of a black hole, (2025), arXiv:2502.14073 [gr-qc]. [40] M. C¸alı¸skan, N. Anil Kumar, L. Ji, J. M. Ezquiaga, R. Cotesta, E. Berti, and M. Kamionkowski, Probing wave-optics effects and dark-matter subhalos with lensing of gravitational waves from massive black holes, (2023), arXiv:2307.06990 [astro-ph.CO]. [41] S. Savastano, G. Tambalo, H. Villarrubia-Rojo, and M. Zumalacarregui, Weakly Lensed Gravitational Waves: Probing Cosmic Structures with Wave-Optics Features, (2023), arXiv:2306.05282 [gr-qc]. [42] G. Brando, S. Goyal, S. Savastano, H. Villarrubia-Rojo, and M. Zumalac´arregui, Signatures of dark and baryonic structures on weakly lensed gravitational waves, Phys. Rev. D 111, 024068 (2025), arXiv:2407.04052 [gr-qc]. [43] J. M. Ezquiaga, R. K. L. Lo, and L. Vujeva, Diffraction around caustics in gravitational wave lensing, Phys. Rev. D 112, 043544 (2025), arXiv:2503.22648 [gr-qc]. [44] J. F. Navarro, C. S. Frenk, and S. D. M. White, A Uni- versal density profile from hierarchical clustering, Astro- phys. J. 490, 493 (1997), arXiv:astro-ph/9611107. [45] J. F. Navarro, C. S. Frenk, and S. D. M. White, A Uni- versal Density Profile from Hierarchical Clustering, As- trophys. J. 490, 493 (1997), astro-ph/9611107. [46] J. E. Taylor and J. F. Navarro, The Phase-Space Den- sity Profiles of Cold Dark Matter Halos, Mon. Not. Roy. Astron. Soc. 325, 1253–1267 (2001), astro-ph/0104002. [47] A. Kassiola and I. Kovner, Elliptic Mass Distributions versus Elliptic Potentials in Gravitational Lenses, ApJ 417, 450 (1993). [48] R. Kormann, P. Schneider, and M. Bartelmann, Isother- mal elliptical gravitational lens models., A&A 284, 285 (1994). [49] R. Barkana, Fast calculation of a family of elliptical mass gravitational lens models, Astrophys. J. 502, 531 (1998), arXiv:astro-ph/9802002. [50] V. Springel, J. Wang, M. Vogelsberger, A. Ludlow, A. Jenkins, A. Helmi, J. F. Navarro, C. S. Frenk, and S. D. M. White, The Aquarius Project: the subhalos of galactic halos, Mon. Not. Roy. Astron. Soc. 391, 1685 (2008), arXiv:0809.0898 [astro-ph]. [51] D. Fiacconi, P. Madau, D. Potter, and J. Stadel, Cold Dark Matter Substructures in Early-Type Galaxy Halos, Astrophys. J. 824, 144 (2016), arXiv:1602.03526 [astro- ph.GA]. [52] B. Diemer and M. Joyce, An accurate physical model for halo concentrations, Astrophys. J. 871, 168 (2019), arXiv:1809.07326 [astro-ph.CO]. [53] E. O. Nadler et al., Symphony: Cosmological Zoom-in Simulation Suites over Four Decades of Host Halo Mass, Astrophys. J. 945, 159 (2023), arXiv:2209.02675 [astro- ph.CO]. [54] P. L. Schechter and J. Wambsganss, Quasar microlens- ing at high magnification and the role of dark matter: Enhanced fluctuations and suppressed saddlepoints, As- trophys. J. 580, 685 (2002), arXiv:astro-ph/0204425. [55] D. Gilman, S. Birrer, A. Nierenberg, T. Treu, X. Du, and A. Benson, Warm dark matter chills out: constraints on the halo mass function and the free-streaming length of dark matter with eight quadruple-image strong grav- itational lenses, Mon. Not. Roy. Astron. Soc. 491, 6077 (2020), arXiv:1908.06983 [astro-ph.CO]. [56] A. M. Nierenberg et al., JWST lensed quasar dark matter survey – I. Description and first results, Mon. Not. Roy. Astron. Soc. 530, 2960 (2024), arXiv:2309.10101 [astro- ph.CO]. [57] N. Ephremidze, C. Chandrashekar, A. C¸. S¸eng¨ul, and C. Dvorkin, Dark Matter Substructure or Source Model Systematics? A Case Study of Cluster Lens Abell S1063, (2025), arXiv:2502.18571 [astro-ph.CO]. [58] A. B. Aazami and P. Natarajan, Substructure and the Cusp and Fold Relations, Mon. Not. Roy. Astron. Soc. 372, 1692 (2006), arXiv:astro-ph/0605347. [59] J. Zavala and C. S. Frenk, Dark matter haloes and sub- haloes, Galaxies 7, 81 (2019), arXiv:1907.11775 [astro- ph.CO]. [60] J. S. Bullock and M. Boylan-Kolchin, Small-Scale Chal- lenges to the ΛCDM Paradigm, Ann. Rev. Astron. As- 16 trophys. 55, 343 (2017), arXiv:1707.04256 [astro-ph.CO]. [61] L. Ji and L. Dai, Effects of Subhalos on Interpreting Highly Magnified Sources Near Lensing Caustics, (2024), arXiv:2407.09594 [astro-ph.GA]. [62] P. Schneider, J. Ehlers, and E. E. Falco, Gravitational Lenses (1992). [63] G. Despali, S. Vegetti, S. D. M. White, C. Giocoli, and F. C. van den Bosch, Modelling the line-of-sight contri- bution in substructure lensing, Mon. Not. Roy. Astron. Soc. 475, 5424 (2018), arXiv:1710.05029 [astro-ph.CO]. [64] D. Gilman, S. Birrer, T. Treu, A. Nierenberg, and A. Ben- son, Probing dark matter structure down to 107 solar masses: flux ratio statistics in gravitational lenses with line-of-sight haloes, Mon. Not. Roy. Astron. Soc. 487, 5721 (2019), arXiv:1901.11031 [astro-ph.CO]. [65] N. Aghanim et al. (Planck), Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO]. [66] R. Takahashi and T. Nakamura, Wave effects in grav- itational lensing of gravitational waves from chirping binaries, Astrophys. J. 595, 1039 (2003), arXiv:astro- ph/0305055. [67] G. Tambalo, M. Zumalac´arregui, L. Dai, and M. H.-Y. Cheung, Gravitational wave lensing as a probe of halo properties and dark matter, (2022), arXiv:2212.11960 [astro-ph.CO]. [68] R. Blandford and R. Narayan, Fermat’s principle, caus- tics, and the classification of gravitational lens images, Astrophys. J. 310, 568 (1986). [69] H. Villarrubia-Rojo, S. Savastano, M. Zumalac´arregui, L. Choi, S. Goyal, L. Dai, and G. Tambalo, Gravitational lensing of waves: Novel methods for wave-optics phenom- ena, Phys. Rev. D 111, 103539 (2025), arXiv:2409.04606 [gr-qc]. [70] J. M. Ezquiaga, D. E. Holz, W. Hu, M. Lagos, and R. M. Wald, Phase effects from strong gravitational lensing of gravitational waves, Phys. Rev. D 103, 064047 (2021), arXiv:2008.12814 [gr-qc]. [71] A. M. Serra and O. Bulashenko, Probing binary lens caustics with gravitational waves: A uniform approx- imation approach, Phys. Rev. D 112, 024041 (2025), arXiv:2504.10128 [gr-qc]. [72] H. Whitney, On singularities of mappings of euclidean spaces. i. mappings of the plane into the plane, in Has- sler Whitney Collected Papers, edited by J. Eells and D. Toledo (Birkh¨auser Boston, Boston, MA, 1992) pp. 370–406. [73] K. Wang, Y.-Y. Mao, A. R. Zentner, J. U. Lange, F. C. van den Bosch, and R. H. Wechsler, Concentra- tions of Dark Haloes Emerge from Their Merger His- tories, Mon. Not. Roy. Astron. Soc. 498, 4450 (2020), arXiv:2004.13732 [astro-ph.GA]. [74] M. W. Auger, T. Treu, A. S. Bolton, R. Gavazzi, L. V. E. Koopmans, P. J. Marshall, L. A. Moustakas, and S. Burles, The Sloan Lens ACS Survey. X. Stellar, Dy- namical, and Total Mass Correlations of Massive Early- type Galaxies, ApJ 724, 511 (2010), arXiv:1007.2880 [astro-ph.CO]. [75] C. R. Keeton and L. A. Moustakas, A New Channel for Detecting Dark Matter Substructure in Galaxies: Grav- itational Lens Time Delays, Astrophys. J. 699, 1720 (2009), arXiv:0805.0309 [astro-ph]. [76] S.-d. Mao and P. Schneider, Evidence for substructure in lens galaxies?, Mon. Not. Roy. Astron. Soc. 295, 587 (1998), arXiv:astro-ph/9707187. [77] L. L. R. Williams, P. L. Kelly, T. Treu, A. Amruth, J. M. Diego, S. K. Li, A. K. Meena, A. Zitrin, T. J. Broadhurst, and A. V. Filippenko, Flashlights: Properties of highly magnified images near cluster critical curves in the pres- ence of dark matter subhalos (2023), arXiv:2304.06064 [astro-ph.CO]. [78] T. Ishiyama et al., The Uchuu simulations: Data Release 1 and dark matter halo concentrations, Mon. Not. Roy. Astron. Soc. 506, 4210 (2021), arXiv:2007.14720 [astro- ph.CO]. [79] V. I. Arnold, Catastrophe Theory, 3rd ed. (Springer, Berlin, Germany, 1992). [80] R. Thom, Mod`eles math´ematiques de la morphogen`ese (Christian Bourgois, Paris, 1981) p. 314, nouvelle ´edition revue et augment´ee. [81] M. Berry and C. Upstill, Iv catastrophe optics: Mor- phologies of caustics and their diffraction patterns (Else- vier, 1980) pp. 257–346. [82] C. R. Keeton, S. Mao, and H. J. Witt, Gravitational lenses with more than four images: I. classification of caustics, Astrophys. J. 537, 697 (2000), arXiv:astro- ph/0002401. [83] M. Bradac, P. Schneider, M. Lombardi, M. Steinmetz, L. V. E. Koopmans, and J. F. Navarro, The signature of cdm substructure on gravitational lensing, Astron. As- trophys. 423, 797 (2004), arXiv:astro-ph/0306238. [84] A. B. Aazami and A. O. Petters, A Universal Magnifica- tion Theorem III. Caustics Beyond Codimension Five, J. Math. Phys. 51, 023503 (2010), arXiv:0909.5235 [math- ph]. [85] G. Orban de Xivry and P. Marshall, An atlas of predicted exotic gravitational lenses, Monthly Notices of the Royal Astronomical Society 399, 2–20 (2009). [86] J. Hidding, S. F. Shandarin, and R. van de Weygaert, The Zel’dovich approximation: key to understanding cosmic web complexity, Mon. Not. Roy. Astron. Soc. 437, 3442 (2014), arXiv:1311.7134 [astro-ph.CO]. [87] A. K. Meena and J. S. Bagla, Finding Singularities in Gravitational Lensing, Mon. Not. Roy. Astron. Soc. 492, 3294 (2020), arXiv:1908.01158 [astro-ph.CO]. [88] N. Dalal and C. S. Kochanek, Direct detection of CDM substructure, Astrophys. J. 572, 25 (2002), arXiv:astro- ph/0111456. [89] L. Hui, Wave Dark Matter, Ann. Rev. Astron. Astrophys. 59, 247 (2021), arXiv:2101.11735 [astro-ph.CO]. [90] R. A. de Oliveira, J. P. C. Fran¸ca, and M. Makler, The Last Stand Before Rubin: a consolidated sample of strong lensing systems in wide-field surveys, (2025), arXiv:2509.09798 [astro-ph.GA]. [91] L. Vujeva, R. K. L. Lo, J. M. Ezquiaga, and J. C. L. Chan, lenscat: a public and community-contributed catalogue of known strong gravitationallenses, Phil. Trans. Roy. Soc. Lond. A 383, 20240168 (2025), arXiv:2406.04398 [astro-ph.GA]. [92] D. Gilman, S. Birrer, and T. Treu, TDCOSMO - III. Dark matter substructure meets dark energy. The ef- fects of (sub)halos on strong-lensing measurements of H0, Astron. Astrophys. 642, A194 (2020), arXiv:2007.01308 [astro-ph.CO]. [93] P. Auclair et al. (LISA Cosmology Working Group), Cos- mology with the Laser Interferometer Space Antenna, (2022), arXiv:2204.05434 [astro-ph.CO]. 17 FIG. 12. Relative magnification (µr) vs time delay (∆T) for the two brightest images of sources placed near the caustics of a single elliptical single isothermal sphere halo. The red and blue points correspond to sources just inside and outside of the caustics respectively. The grey line is the maximum relative magnification factor (µr = 2)) for sources just inside of the caustics. Appendix A: µr −∆T Relation Inside and Outside of Caustics In this section, we will elaborate on the difference in the distribution of the µr −∆T relation immediately in- side and outside of the caustics. Throughout this work, we have generally been interested in the images produced by sources just inside of the main caustics of our host halo (meaning sources that fall on the side of the caustics clos- est to the center of the halo). This is mainly due to the images being produced in these region being highly mag- nified, and thus falling within the sensitivity of our cur- rent and future detectors. However, sources that lie just outside of the main caustics can still be moderately mag- nified by the main halo, or even highly magnified by the presence of subhalos, but the population of these images is significantly different from those of the sources just in- side of the caustics. Namely, the two brightest images of a source just outside of the caustics are not subject to the same universality relations as those of sources just inside of the caustics. This can be seen explicitly in Fig. 12, where the red and blue points correspond to sources just inside and outside of the caustics respectively. The sources inside of the caustics produce the familiar highly magnified, short time delay image pairs that are of interest for this work. However, despite their proximity to the caustics, the sources just outside of them can reach high relative magnifications, but arrive with much larger time delays. Appendix B: Sensitivity to c and Σsub In this section, we present the full distributions for al- ternative subhalo parameters we consider in this work. Namely, we focus on changing the subhalo concentra- tions (c) and subhalo number densities (parametrized by Σsub) to both quantify the number of source locations that generate images that meet our criteria for the the images being affected by subhalos, as well as to identify qualitative differences in the µr −∆T distributions. 18 FIG. 13. Relative magnifications and time delays for modified concentrations (left), and number densities (right). In both cases, the fiducial models are shown in gray. Note that Σsub denotes the normalization of the subhalo mass function at 108M⊙, and is not the 2D surface mass density.
Dark Matter Subhalos and Higher Order Catastrophes in Gravitational Wave Lensing Luka Vujeva,1, ∗Jose Mar ́ıa Ezquiaga,1 Daniel Gilman,2 Srashti Goyal,3 and Miguel Zumalac ́arregui3 1Center of Gravity, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark 2 60637, USA 3Max Planck Institute for Gravitational Physics (Albert Einstein Institute) Am M ̈uhlenberg 1, D-14476 Potsdam-Golm, Germany (Dated: October 17, 2025) Gravitational lensing is an invaluable probe of the nature of dark matter, and the structures it forms. Lensed gravitational waves in particular allow for unparalleled sensitivity to small scale structures within the lenses, due to the precise time resolution in combination with the continuous monitoring of the entire sky. In this work, we show two distinct ways of using strongly lensed gravitational waves to identify the presence of dark matter subhalos: i) through higher order caustics generating high relative magnification (μr > 2), short time delay image pairs that break the caustic universality relations of single dark matter halos, which occur for ∼1 -10 percent of strongly lensed events in our cold dark matter models, and ii) through the presence of more than three highly magnified images, which occur for ∼0.01-1 percent of the same simulated events. We find that these results are highly sensitive to the concentrations of subhalos in our simulations, and more mildly to their number densities. The presence of low-mass subhalos increases the probability of observing wave-optics lensing in lensed gravitational waves, which is studied by solving the diffraction integral with the stationary phase approximation, as well as numerically. We also report distinct quantitative and qualitative differences in the distributions of relative magnifications and time delays for subhalo populations with increased number densities or concentrations. With the upcoming detection of strongly lensed events by ground- and space- based detectors, comparisons against these simulated distributions will provide insight into the nature of dark matter. I. INTRODUCTION Gravitational lensing offers striking glimpses into the high-redshift Universe by nature of the 'gravitational telescope' phenomenon caused massive objects such as galaxies and galaxy clusters. This has provided us access into previously inaccessible periods of the evolution of the Universe, such as allowing us to observe galaxies [1, 2] and even individual stars [3, 4] at unprecedented distances beyond the sensitivity of current telescopes [5, 6]. Additionally, it has allowed for the most stringent observational tests of the nature of dark matter, and the structures it forms [7, 8]. While there has been a tremendous amount of success in observing gravitational lensing in the electromagnetic (EM) spectrum (and much more to come with all sky observation from the Vera Rubin Observatory [9] and the Euclid telescope [10, 11]), focusing on different messengers can unveil new information about gravitational lensing [12]. Gravitational waves (GWs) are unique transients because they are coherently detected with a timing precision down to the millisecond. This is to be contrasted with the current state-of-the-art in EM lensing, constraining differences in the arrival times of images of the order of days [13]. Moreover, their waveform morphology and phase evolution are accurately recorded with the LIGO [14], VIRGO [15], and KAGRA [16] (LVK) detectors, with remarkable recent examples [17, 18]. Therefore, lensed GWs are an exceptional new probe of the ∗ dark matter (DM) substructures, which intrinsically predict short lensing time delays. Although we still have to detect a strongly lensed GWs [19-22], their discovery is expected in the next observing run [23-27]. This will be further improved in future ground-based detectors such as Einstein Telescope [28] and Cosmic Explorer [29], and future space-based detectors such as LISA [30, 31], which offer exciting prospects for measuring wave-optics phenomena from larger lenses such as galaxies, supermassive black holes or subhalos. The focus of GW lensing is typically divided between the study of large scale structures such as galaxies and galaxy clusters [24, 32-36], and the interference effects of compact objects such as stars and black holes [37-39]. While recent works have explored GW lensing with subhalos in the single image regime [40-42], this work aims to explore the strong lensing phenomenology of these systems, and potential interference effects near the consequent caustics, which may differ from the universal predictions without substructure [35, 43]. In the context of GW lensing from galaxy-scale lenses, most work considers single, spheroidal dark matter halos following typical dark matter distributions such as Singular Isothermal Spheres (SIS), Navarro-Frenk-White (NFW) [44-46], or Singular Isothermal Ellipsoids (SIE) profiles [47-49]. While these profiles are adequate to describe the main dark matter halo of galaxies, this work aims to be the first to consider the effects of dark matter subhalos on strongly lensed GWs in these traditionally smooth, singular lenses. The existence of DM subhalos is motivated by the hierarchical mergers seen in N-body cosmological simula16 Oct 2025 2 FIG. 1. Magnification (μ) map of an example galaxy populated with cold dark matter subhalos in the image (left) and source (right) plane. Note that this example corresponds to the increased concentration case explored later in the work. tions, which find that a fraction of the total dark matter mass in large DM halos is contained in lower mass, typically more concentrated DM subhalos [50-53]. While not observed directly, their existence has been put forward as a solution to some prevalent issues in EM lensing, such as quasar flux anomalies [54-56], deviations from expected image locations [57], and excess total magnification of sources near caustics [58]. Many alternative DM models create differences in the low end of the subhalo mass distribution (as well as the density distribution of DM halos in certain models). Cold DM (CDM) allows for the creation of much smaller DM halos when compared to warm, fuzzy, wave, and self interacting DM, which typically suppress the production of structure on small scales, leading to lower number densities of low mass halos when compared to CDM [59]. Therefore, being sensitive to low mass halos is crucial in probing the nature of DM. Moreover, lighter subhalos do not hold baryons efficiently, and therefore are a more direct probe of DM theories [60]. In this work, we show that not only are strongly lensed GWs powerful probes of dark matter subhalos dow to masses of 107M⊙(and can be extended to much lower masses), but we also show that there are concrete criteria that can be used to show that even a single strongly lensed GW event has been affected by a subhalo. These deviations come in the forms of either seeing 4 (or more) highly magnified gravitational waves coming in short succession (which has been previously explored in the context of EM lensing [61]), or seeing a deviation in the time delay and relative magnifications of the two brightest images from the predictions coming from catastrophe theory for single smooth potentials. More specifically, we see that the higher order caustics (or catastrophes) caused by the inclusion of substructure in the form of subhalos can cause the brightest pair of images to have relative magnification factors of μr > 2 at short time delays (∆T ), which is impossible in a smooth single potential. The detection of such an event would be a signature of the presence subhalos (or other compact objects) effecting the lensed signals. We can explicitly show that subhalos affect lensed images by the deviation from the expected μr -∆T relation for short time delays and highly magnified GWs. Namely, without the subhalos, the two brightest images must have μr 2 is smoking gun evidence of the existence of subhalos near caustics. In this work, we do not consider the impact of line-of-sight halos, which could also contribute to a similar effect [63, 64]. This work is organized as follows: in Sec. II we give an overview of gravitational lensing formalism in both the wave optics and geometric optics regimes, and introduce the formalism of typical caustics. In Sec. III we 3 summarize our treatment of dark matter subhalos in our composite lens model. Sec. IV provides the results of our fiducial model, as well as exploring the impacts of modifying the dark matter model. Finally, we examine the links back to higher order catastrophes in Sec. V. Throughout this work we adopt a Planck 2018 cosmology [65]. II. GRAVITATIONAL LENSING In this section, we review the fundamentals of gravitational lensing in both the wave and geometric optics regimes, and introduce the typical caustics seen in single dark matter halos, namely the fold and cusp caustics. A. Wave Optics The changes in the amplitude of a lensed gravitational wave is characterized by the amplification factor F(f), simply defined as F(f) = ̃hL(f) ̃h0(f) , (1) where ̃hL(f) and ̃h0(f) are the Fourier transformed lensed and unlensed gravitational wave strains. The strain of the lensed gravitational wave is determined by the Kirchhoff diffraction integral [62, 66, 67], given by F(ω) = ω 2πi Z d2⃗xeiωT (⃗x,⃗y), (2) where T(⃗x, ⃗y) is the Fermat potential, and ω is the dimensionless frequency defined as ω ≡8πGMLzf.The redshifted lens mass MLz is defines as MLz ≡(1 + zL) DS DLDLS θ2 E 4G. (3) In units where image plane (⃗x) and source plane position are rescaled by a characteristic length scale in the system (typically the Einstein radius θE) as θ = ⃗x/θE β = ⃗y/θE , the Fermat potential, also called the time delay surface [62, 68] can be written as Td(θ, β) = φ(θ, β) ≡ (θ -β)2 2 -ψ(θ) , (4) where ψ(θ) is the lens potential. The amplification factor can also be expressed in the time domain (which is what the lensing code GLoW [69] that is being used in this work solves for) through a simple Fourier transform of F(ω) I(τ) = Z ∞ -∞ iF(ω) ω e-iωt dω. (5) Using this, we can also define the lensing Green's function, which is simply given by G(τ) = 1 2π d dτ I(τ). (6) The physical meaning of G(τ) is that its convolution with the unlensed signal gives us the desired lensed signal hL. B. Geometric Optics The geometric optics (GO) limit is valid for lensing systems in which the wavelength of the GW is much smaller than the characteristic size of the lens, and when a source of finite frequency is sufficiently far from a caustic. In this limit, GW lensing closely follows that of light [66]. Relating back to the wave optics regime, the image locations in geometric optics are determined by the stationary points of the stationary points of the integrand. For a given lensing configuration in GO, the locations of images are determined by the lens equation, β = θ -α(θ), (7) where β is the source location, θ is the image location, and α(θ) is the deflection angle, determined by the lensing potential ψ(θ) as α(θ) = ∇ψ(θ). The dimensionless surface mass density, or convergence, is defined as, κ(θ) = Σ(θ) Σc , (8) where Σ(θ) is the surface mass density of the lens, and Σc is the critical surface mass density at the redshift of the lens, Σc = c2DS 4πGDLDLS , (9) where G is Newton's gravitational constant, c is the speed of light, and DS, DL, and DLS are the angular diameter distances to the source, the lens, and between the lens and source respectively. The difference in arrival time between two images θi and θj of the same source at a position β is ∆Tij = 1 + zL c DLDS DLS Td(θi, β) -Td(θj, β) , (10) where zL is the redshift of the lens In this language, the lens equation and image positions ⃗θi are simply given by ∂Td/∂⃗θ ⃗θ=⃗θi = 0. Finally, the magnification of a given image is defined as μ-1 = (1 -κ(θ))2 -γ(θ)2, (11) 4 FIG. 2. Relative magnification (μr) vs time delay (∆T) for the two brightest images of sources placed near the caustics of a single elliptical single isothermal sphere halo without (left), and with subhalos (right). The grey lines in both plots represent the asymptotic values for the relative magnifications for sources very close to the caustics, where μr = 2 is the limit for the cusp. Note that in the case of subhalo perturbers, the relative magnifications at low time delays can greatly exceed the theoretical limit set by the cusp caustic. The vertical blue line corresponds to the shortest time delay measured in the EM,a quadruply lensed quasar found to have a time delay of ∆T = 0.8+0.8 -0.7 days [13]. where γ(θ) is the shear. This, again, can be derived directly from the time delay surface, i.e., μ-1 = det ∂2Td/∂⃗θ∂⃗θ . The points along which |μ| →∞in the image plane are called critical curves (CCs), and their equivalent in the source plane are called caustics. These will be explored in detail in the coming sections. Given the highly oscillatory nature of the integrand in Eq. II A at high frequencies, we employ the stationary phase approximation (SPA) to produce lensed waveforms for sources whose frequencies would fall within the regime of ground based detectors such as the LVK network. The SPA assumes that most of the contribution to the integral comes from the stationary points of the integrand, which correspond to the image locations. This leads to the approximation (whose detailed derivation can be found in [62, 70]), F(w) ≈ X j q |μ(⃗xj)| exp (iwTd(⃗xj) -injπ/2) , (12) in which each image has an arrival time Tj, magnification μj, and phase shift nj. The validity of this approach for gravitational wave sources near caustics has been explored in previous works [43, 71], and should be valid for the source locations considered in this work. Exploring methods of reliably calculating the full frequency dependent amplification factor at high dimensionless frequencies is left to future work. A quantity that is often used throughout this work is the relative magnification factor, which (unless otherwise specified) is simply μr = |μ1/μ2|, where μ1 and μ2 are the brightest and second brightest images of a given source at a location β. We also define the parity of the images based on the sign of their magnification factors (i.e. a positive parity image has μi > 0, whereas a negative parity image has μi 5 annulus. While subhalos farther away form the critical curves (and even along the line 1 https://github.com/dangilman/pyHalo FIG. 4. Relative magnifications and time delays for the highest concentration c and subhalo number density (parametrized by Σsub) considered in this work.The fiducial models are shown in gray. of sight) offer exciting opportunities to learn about both subhalos and dark matter [75], in this work we will only focus on highly magnified gravitational waves, and thus only consider subhalos in this region. This greatly reduces the number of subhalos in our simulations due to the vast majority of the total ∼6800 subhalos in Fig. 3 residing very far from the critical curves of the main halo), and thus makes the computations of lensing observables (especially I(τ) and F(f)) feasible. We leave considering the full spatial distribution of subhalos to future work. In order to solve for the lensing observables, we employ GLoW2 [69], which is allows us not only to calculate the geometric optics observables, but the full diffraction integral to capture both the interference and diffractive effects present in GW lensing for arbitrary lens configurations. IV. EFFECTS OF SUBHALOS ON LENSED GW OBSERVABLES TABLE I. Summary of the probabilities of finding a source location with μr > 2, and generating more than 5 images (Nim > 5) for the different models in this work. Model P(μr > 2) P(Nim > 5) Fiducial 9.0 × 10-3 2.8 × 10-4 c = 3 × cDJ 8.0 × 10-2 6.4 × 10-3 c = 6 × cDJ 8.7 × 10-2 1.6 × 10-2 Σsub = 0.050 1.0 × 10-2 1.7 × 10-4 Σsub = 0.075 4.0 × 10-2 3.1 × 10-4 2 https://github.com/miguelzuma/GLoW_public 7 FIG. 5. Relative magnifications of the brightest and second brightest images compared against the time delays of the third and fourth brightest images for the fiducial, high number density, and high concentration models. Note that the existence of the population of short time delay pairs for this image pair is due to these sources falling within higher order or nested caustics. All other source locations fall in the high ∆T branch of the distributions. In this section, we aim to quantify the fraction of the source plane in which the subhalo effects can be seen, and how this is affected by varying both the number density and concentrations of the subhalos in our simulation. We separate what we quantify as the effects of subhalos into two categories: the rate at which we see 4 (or more) bright images, and the rate at which we see deviations from the expected μr -∆T relations (summarized in Table I). Due to the large separation in scales of the largest and smallest subhalo mass, we must be cautious to mitigate the effects of the resolution of our simulations. In order to do this, we first identify a region in the source plane with a minimum source magnification of μsr > 25. We then densely sample the region within the boundaries set by the magnification cut in order to capture the small-scale effects of the low-mass halos. Finally, we draw source locations from the high-resolution source plane to calculate our lensing observables. A. Deviations from Universality Relations As outlined in § II C, the fold and cusp caustics found in single halo lensing systems follow unique behaviors described from catastrophe theory [35, 43, 62]. More specifically, a source within the main caustics (thus having an image multiplicity of greater than three) must have a relative magnification less than μr 5 is expected to be a rare occurrence due to the small strong lensing cross section of low mass subhalos. An example of this can be seen visually in Fig. 1, where there are only very small regions within the caustics of the main halo where there are additional caustic crossings due to either disconnected or higher order caustics. We report the probabilities of seeing higher image multiplicities in Table I. Despite the rarity, even if we do not see the production of additional images, we expect to see the effects of subhalos in the image properties of images arriving at later times, which should differ from those of a smooth single halo. For instance, sources within a higher order caustic are expected to produce four highly magnified images, which differs from the maximum of three highly magnified images produced by cusps (which is the highest one can produce in a single lens system). Therefore, in order to study these effects, we can examine the relative arrival times of the third and fourth brightest images ∆T (3,4). We see that small values of ∆T (3,4) (shown in Fig. 5) correspond to source locations that either fall within higher order caustics, or nested caustics. In both cases, the total image multiplicity of these locations in the source plane are determined by how many caustic one must cross in order to reach the region of interest (where every caustic crossing corresponds to the creation of two additional images). We also see that ∆T (3,4) can be short enough to cause interfering signals, opening the possibility for multiple sets of overlapping images pairs for a single source location. An explicit example of this can be seen in Fig. 6, where two distinct pairs of overlapping signals coming from one source location. C. Discerning Between Concentration and Number Density We proceed by investigating two modifications to the subhalos in our model: their number density (Nsub), and their concentrations (c). The number density is modified through changing the normalization parameter Σsub of the subhalo mass function at 108M⊙(see [55]). We test parameters of Σsub = [0.025, 0.050, 0.075]. The con8 FIG. 6. Lensed gravitational wave strains for a source location within a higher order caustic produced by a subhalo in the example case of Fig. 1. The left pane, shows the arrival of the first two images, the second shows the third and fourth images. The second and third images arrive with a delay of ∆T (2,3) = 28.44 s, with a relative magnification of μ(2,3) r = 5. The gray lines show the (rescaled) unlensed signal aligned at the time of peak strain of the first image. Note that the superscript denotes the relative arrival time of the images, and are not ordered by strain amplitude as in the rest of the text (i.e. μ(i,j) r = μj/μi, and ∆T (i,j) = |Tj -Ti|.) FIG. 7. Central convergence of NFW halos near the critical curves of the fiducial main halo. The black line corresponds to the strong lensing criterion of κ = 1. centrations are parametrized using a modified DiemerJoyce relation [52], where concentrations drawn from this model are rescaled by a constant factor cscale = [1, 3, 6]. The case of c = 3 × cDJ is of particular interest, as recent studies have found this to be consistent with subhalos with masses Mvir ≲109M⊙[78] (which is approximately the maximum subhalo mass considered in this work), and has been explored in other works focusing on weakly lensed GWs in the presence of subhalos [42]. We find that as we increase our two parameters, we see distinct features, particularly in the case of high concentrations. In the high Σsub case (Fig. 4 and Fig. 13), we see that as we increase the number of subhalos in our simulation, there are considerably more image pairs that cross our μr > 2 threshold. This is unsurprising given that as we increase the number of subhalos near the critical curves, we increase the number of higher-order caustics generated, thus increasing the probability of seeing their effects on an image pair. In the high-concentration case, we generate significantly more image pairs above our threshold (substantially more than in the increased Σsub case as summarized in Table I). Additionally, embedded within μr - ∆T distribution we see the qualitative features of sources placed near higher order caustics (Fig. 11), as well as features akin to those of sources just outside of low mass caustics. These features arise due to the effects of nested caustics embedded within the caustics of the main halo. This feature feature is the spire-like branches of high μr image pairs (see Fig. 12) embedded in the short ∆T distribution due to their low mass. To understand the efficiency at which such caustics can be generated for NFW subhalos, we consider a typical condition for generating strong lensing, having a point in a lens mapping where κ > 1. This is because below this threshold, the mapping remains globally invertible. Above this threshold, the mapping is no longer injective due to the generation of multiple images for single source locations. We estimate the efficiency of subhalos to generate critical curves disconnected and outside of the main halo (which generate nested caustics) by computing the central convergence of the halos in close proximity to the caustics. For an NFW halo, the 2D surface mass density is given by 9 FIG. 8. Time domain amplification factor I(τ) shown alongside its regular and singular components (left) for the example source location generating the lensed signals in Fig. 6, as well as a zoom in around the main peaks (right) to show the contributions of the regular and singular parts of I(τ) near the highly magnified images. Σ(x) = 2ρsrs x2 -1f(x), (24) where f(x) is f(x) =              1 - 2 √ 1-x2 arctanh q 1-x 1+x, x 1 . (25) We approximate the central convergence of the NFW subhalos in isolation by taking the convergence at x = 10-4. We show the central convergence of NFW subhalos following both the fiducial and modified c -M relations for subhalos near the main CCs in Fig. 7. We make the simplifying assumption that the contribution to the total convergence coming from the main halo near the main CCs is κ ≈1/2, and can therefore write the central convergence of the subhalos near the CCs as κc ≈κ(0) + 1/2. (26) We see that the fiducial subhalos embedded near the CCs fall below the critical κ > 1 threshold for the entire mass range, making it unlikely for them to form nested caustics, even in the case of high subhalo number densities due to the low central density of each individual subhalo. However, even small increases to the subhalo concentrations shift much lower subhalo masses above the strong lensing threshold, thus allowing for the production of significantly more nested caustics. This is even more exaggerated in the high concentration case, where even the lowest mass subhalos in our simulation are efficient enough to generate disconnected caustics. This most likely explains the discrepancy of the rate of seeing both greater than 5 images (Nim) and μr > 2 in the high concentration case as opposed to the high number density case. We also find that nested caustics are more efficient at creating source plane regions where Nim, which explains why there is a much higher probability of seeing this in the high concentration case as opposed to the high number density case. This also provides insight into the range of masses dominating the contribution to these effects, which are found to be subhalos of masses greater than 108M⊙in both the fiducial and high number density simulations, whereas we see an increased contribution from lower mass halos in the high concentration simulations. D. Impact on I(τ) To further study the implications of substructure on the lensed signals, we examine the time dependent amplification function I(τ) in Fig. 8. This calculation is enabled by creating our setup using GLoW , which allows for the direct computation of these quantities. Due to the 10 FIG. 9. Bifurcation sets of the butterfly (left) and swallowtail (right) catastrophes. In the lensing analogy, the center of the lens is towards the top of the plot. We use Arnold's parametrization [79] described in §V A. highly oscillatory nature of the integrand of the diffraction integral, computing F(w) at the high w corresponding to LISA or LVK signals (note that the dimensionless frequencies for these detectors for our fiducial redshifted lens mass is w ∼104 -1010) is currently not possible for all frequencies required to reconstruct lensed signals for either detectors in our setup. Calculating these quantities at higher frequencies is left to the future work of finding a more efficient way of calculating the high frequency portion of this integral. However, we present the results for the lower frequency portion that is currently calculable as a proof of concept. We select the same source location used to generate the time domain waveform in Fig. 6. We see distinct structure in the regular and singular parts of I(τ) in Fig. 8, especially near the prominent peaks corresponding to the highly magnified images. Of particular interest are the highly negative portions of the regular portion of I(τ) near the peaks, indicating the existence of wave optics phenomena occurring for these images not fully captured by the singular piece. This will correspond to distinct irregular oscillations in F(w) that are characteristic of the inclusion of additional structure in the lens. Similar features can be seen in the study of wave optics phenomena in the weak lensing regime caused by dark matter subhalos [42]. This would create unique frequency dependent oscillations for signals in this regime that is not reproducible by diffraction effects caused by a single dark matter halo (or even a compact lens such as the point mass lenses typically used to study wave optics phenomena in lensed GWs). Whilst the impact of these oscillations might be suppressed at high frequencies where we approach GO, longer wavelength signals could still undergo interesting diffraction phenomena if there are small, compact dark matter halos along the propagation path. This motivates the push towards computing the diffraction integral at higher frequencies in order to hunt for these features in substructure-lensed signals. V. HIGHER ORDER CAUSTICS AND THEIR OBSERVABLES In this section, we introduce some basic theory about the higher order caustics created by the subhalos perturbing the critical curves of the main halo to gain a deeper understanding of why we are seeing these deviations from the typical universality relations. We first present them in their most general form, and then build toy model examples of each caustic by placing single massive halos at a point along the critical curve that maps back to either a fold, or a cusp in the source plane. This allows us to study the most general types of perturbations to the caustics we expect to see from subhalos, in isolation. A. Higher Order Catastrophes Starting from their most general form, we will highlight some of the qualitative features of two most common higher order catastrophes that we find in our systems, the swallowtail and butterfly catastrophes 3. Details about the theoretical aspects of these catastrophes (and many more) can be found in Ref. [79-81]. The implications of higher order catastrophes in lensing in the EM are explored in Ref. [58, 75, 82-87], however deriving the full specific form of the generating functions discussed below using the time delay surface and computing the resulting gravitational wave observables is left to future work. 3 Coincidentally, the swallowtail catastrophe was the subject of Salvador Dal ́ı's final painting 'The Swallow's Tail' (1983), inspired by a meeting with R ́ene Thom. Dal ́ı had originally set out to paint all of Thom's catastrophes, but only completed The Swallow's Tail before his untimely death in 1989. 11 FIG. 10. Source plane magnification map near the perturbed cusp (i.e. the butterfly catastrophe), and perturbed fold (the swallowtail catastrophe), with relative magnifications (μr) of different source locations shown in color. Note that the existence of higher order caustics allows for high μr (shown in pink), short ∆T image pairs. 1. The Swallowtail Catastrophe A swallowtail catastrophe, which has an ADE classification of A4 in Arnold's notation [79], in its most general form is defined by the generating function V = 1 5x5 + 1 3ax3 + 1 2bx2 + cx, (27) where x is the so called state variable, and a, b, c are the control parameters. The mapping between control space (analogous to the source plane in lensing) and the generating function (analogous to the time delay surface) is given by the solutions of the gradient of the potential function V ′ = 0 (often called the equilibrium condition): 0 = x4 + ax2 + bx + c. (28) This is analogous to solving the lens equation for a gravitational lensing system. The number of solutions to this equation corresponds to the number of local images generated by a specific catastrophe. Given its quartic nature, it can produce a maximum of 4 real roots, two real roots, or no real roots. The real roots of this equation correspond to the number of local images produced by that region of control space, whose boundaries define the bifurcation sets (in the case of lensing, the caustics). The equations describing the shape of the catastrophe in control space can be found by solving for the bifurcation set of the generating function (i.e. when V ′ = V ′′ = 0). First, we find V ′′ = 0, and solve for b to find b = -4x3 -2ax, (29) which we can substitute into our expression for V ′ and solve for c. This allows us to write the bifurcation set for the swallowtail in its parametric form ( b = -4x3 -2ax c = ax2 + 3x4 . (30) 2. The Butterfly Catastrophe A butterfly catastrophe (A5), in its most general form is defined by the generating function V = 1 6x6 + 1 4ax4 + 1 3bx3 + 1 2cx2 + dx, (31) where we have introduced a new control parameter d. It's equilibrium condition V ′ = 0 is given by 0 = x5 + ax3 + bx2 + cx + d, (32) 12 FIG. 11. Relative magnifications and time delays as in Fig. 2, but for the butterfly (left) and swallowtail (right) catastrophes shown in Fig. 9 and Fig. 10. which can produce a maximum of five real roots. Using the same approach as for the swallowtail, we can take V ′′ = 0, and this time solving for c, we find that c = -5x4 -3ax2 -2bx. (33) We can substitute this into the equilibrium condition and solve for d to write the bifurcation set of the butterfly in its parametric form ( c = -5x4 -3ax2 -2bx d = 4x5 + 2ax3 + bx2 . (34) We plot the bifurcation set (or caustics in the lensing case) for both the swallowtail and butterfly in Fig. 9 with a fixed value of a = -5 in the case of the swallowtail, and a = -6, b = 0 for the butterfly. Note that the center of the lens is always pointed towards the top of the plot. B. Toy Model Results With the insight from catastrophe theory in hand, we can proceed to building these catastrophes into our lensing setup to and solve for the observables of this toy model case. To generate a swallowtail, we place one massive subhalo on a part the critical curve of the unperturbed main halo which maps back to a fold in the source plane. To generate the butterfly, we place one massive subhalo on a part the critical curve of the unperturbed main halo which maps back to a cusp in the source plane. In both cases, the subhalo has the same NFW profile as the subhalos in the previous section, but with a mass of 1010M⊙ and c = 13 for both the fold and cusp case. Note that this choice of concentration is slightly higher than the fiducial c -M relation, but is taken solely to make the effects of the subhalos clearer. The perturbed caustics can be seen in Fig. 10. To investigate how the μr -∆T relation is affected by the existence of higher-order caustics, we populate the regions of the source plane near these caustics with sources, and calculate μr and ∆T for the brightest two images. Fig. 10 shows that the high μr images pairs are being produced just outside of the cusp-like feature extended towards the center of the lens. Fig. 11 shows that these high μr image pairs also have short time delays, explaining the points in Fig. 2 that have both high μr and short ∆T . It is also worth noting that the cusps extended out of the main caustics (the "wings" of the butterfly), also produce high μr image pairs, however these images have longer time delays that cannot fall within the range of time delays produced by sources inside of the main cusp and fold caustics of an unperturbed main halo, or even the perturbed main halo itself. VI. CONCLUSIONS AND IMPLICATIONS Gravitational lensing offers a unique opportunity to map dark matter structures in the Universe, and lensed GWs in particular allow us to zoom into the small scales. Their unprecedented sensitivity to short time delays - down to milliseconds - enables inferences of halo substructure on sub-galactic scales, competitive with other probes of small-scale structure [64]. Although there has been a long and extensive effort to study the effects of small scale subhalos embedded in galaxy-scale halos in 13 EM observations, see e.g. [76, 88], an equivalent study was missing for GWs. In this work, we bridge this gap and study the implications for lensed GWs of dark matter subhalos. We simulate lensing systems with subhalo populations being calculated with pyHalo [55], and matching recent N-body simulations [52, 78]. We test the sensitivity of GW lensing observables (number of repeated copies, relative magnifications, time delays, waveform morphologies, etc.) to increased subhalo number densities and concentrations. Our main results are summarized as follows: • Substructure in the form of dark matter subhalos causes two distinct and unique observables in order to infer their presence: the breaking of universal μr -∆T relations caused by the creation of higher order catastrophes, and the generation of systems with higher image multiplicities, short time delay, and highly magnified images. • The inclusion of dark matter subhalosparticularly compact, low mass subhalosincreases the rate of wave-optics lensing phenomena, including interference and diffraction. • The rate at which subhalos influence GW observables is heavily sensitive to the c-M relation, indicating a higher constraining power on dark matter models with increased concentrations at low mass subhalos. Our results demonstrate the significance of dark matter substructures in GW lensing, opening new avenues to reveal their nature. The last point, in particular, highlights an exciting prospect for follow-up studies to examine the effects of alternative dark matter subhalos (such as fuzzy and wave dark matter [89]) that are capable of producing more compact central densities. This would both amplify the rates at which we see deviations from the μr -∆T relation and higher image multiplicities, and increase the probability of seeing wave-optics phenomena. It also provides physical motivation for lensed GW parameter estimation to include more complex interfering signals, beyond the lowest order caustics [43], as well as for GW searches to target more image pairs associated with the same lensed event. Additionally, multi-messenger lensed events would have even further capabilities [12]. They could allow us to not only combine arrival time and photometry information - a combination that can give us insight into the physical properties of the perturber - but also see if the DM subhalo is occupied by baryonic matter (such as a satellite galaxy). Where studies of higher order caustics are limited by spatial resolution in the EM, causing images to smear and appear similar to those from standard caustics [85], the combination with the precision time delay information provided by GW detectors would offer unprecedented insights into the structure of such lensing systems. As discussed in Ref. [36], there are challenges for the identification of these short time delay, highly magnified signals with high relative magnifications. Specifically, when inferring the lens mass solely from small numbers of lensed GW images, they could be mistaken as coming from a lower lens mass system. While this is a much more dominant effect in systems with more prevalent structures such as galaxy clusters, we find that this (albeit exciting phenomenon) could be a nuisance for inferring the total lens mass in lensed systems that do not have an EM counterpart. If not accounted for, substructures could also hinder rapid EM followups that utilize catalogs of known strong lenses such as LaStBeRu [90] or lenscat [91]. This is because these programs aim at matching the time delay of the lensed GW signals to the lens masses in a sky localization area most likely to generate them. The existence of unaccounted compact dark matter subhalos could also provide a hindrance to future efforts to perform time delay cosmography with lensed GWs in addition to EM observations [92]. Investigations in the EM have shown that not taking sub-structure into account (both in the form of sub-halos and line of sight halos) leads to an added uncertainty in the inferred value of H0 that scales with the square root of the lensing volume divided by the longest time delay image used in the inference [92]. This stresses the need to develop realistic modeling of dark matter substructures. The discovery of the first lensed GWs is expected in the next observing run of current ground-based detectors [23-27], and future space-based observatories also promise to detect more lensed signals [40, 41, 93]. In this work, we have shown that strongly lensed GW observables are affected by dark matter subhalos, and therefore hold a key to probe their properties. As we begin to detect lensed GWs, tests such as those provided in this work will allow for unprecedented precision to the smallscale structures of DM subhalos, and with any luck, will shed light onto the nature of DM itself. ACKNOWLEDGMENTS The authors would like to thank Juno Chan, and Guilherme Brando for their helpful comments and suggestions. The Center of Gravity is a Center of Excellence funded by the Danish National Research Foundation under grant No. 184. This project was supported by the research grant no. VIL37766 and no. VIL53101 from Villum Fonden, and the DNRF Chair program grant no. DNRF162 by the Danish National Research Foundation. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101131233. J.M.E. is also supported by the Marie Sklodowska-Curie grant agreement No. 847523 INTERACTIONS. DG acknowledges support from the Brinson Foundation through a Brinson Prize Fellowship Grant. The Tycho supercomputer hosted at the SCI14 ENCE HPC center at the . [1] B. Wang, S. Fujimoto, I. Labb ́e, L. J. Furtak, T. B. Miller, D. J. Setton, A. Zitrin, H. Atek, R. Bezanson, G. Brammer, J. Leja, P. A. Oesch, S. H. Price, I. Chemerynska, S. E. Cutler, P. Dayal, P. van Dokkum, A. D. Goulding, J. E. Greene, Y. Fudamoto, G. Khullar, V. Kokorev, D. Marchesini, R. Pan, J. R. Weaver, K. E. Whitaker, and C. C. Williams, UNCOVER: Illuminating the Early Universe-JWST/NIRSpec Confirmation of z ¿ 12 Galaxies, ApJ 957, L34 (2023), . [2] V. Kokorev, H. Atek, J. Chisholm, R. Endsley, I. Chemerynska, J. B. Mu ̃noz, L. J. Furtak, R. Pan, D. Berg, S. Fujimoto, P. A. Oesch, A. Weibel, A. Adamo, J. Blaizot, R. Bouwens, M. DessaugesZavadsky, G. Khullar, D. Korber, I. Goovaerts, M. Jecmen, I. Labb ́e, F. Leclercq, R. Marques-Chaves, C. Mason, K. B. W. McQuinn, R. Naidu, P. Natarajan, E. Nelson, J. Rosdahl, A. Saldana-Lopez, D. Schaerer, M. Trebitsch, M. Volonteri, and A. Zitrin, A Glimpse of the New Redshift Frontier through AS1063, ApJ 983, L22 (2025), . [3] J. M. Diego et al., JWST's PEARLS: A new lens model for ACT-CL J0102-4915, "El Gordo," and the first red supergiant star at cosmological distances discovered by JWST, Astron. Astrophys. 672, A3 (2023), . [4] B. Welch, D. Coe, J. M. Diego, A. Zitrin, E. Zackrisson, P. Dimauro, Y. Jim ́enez-Teja, P. Kelly, G. Mahler, M. Oguri, F. X. Timmes, R. Windhorst, M. Florian, S. E. de Mink, R. J. Avila, J. Anderson, L. Bradley, K. Sharon, A. Vikaeus, S. McCandliss, M. Bradaˇc, J. Rigby, B. Frye, S. Toft, V. Strait, M. Trenti, S. Sharma, F. AndradeSantos, and T. Broadhurst, A highly magnified star at redshift 6.2, Nature 603, 815 (2022), . [5] C. Mason, M. Trenti, and T. Treu, The Galaxy UV Luminosity Function Before the Epoch of Reionization, Astrophys. J. 813, 21 (2015), . [6] L. Vujeva, C. L. Steinhardt, C. K. Jespersen, B. L. Frye, A. M. Koekemoer, P. Natarajan, A. L. Faisst, P. Hibon, L. J. Furtak, H. Atek, R. Cen, and A. Sneppen, Efficient Survey Design for Finding High-redshift Galaxies with JWST, ApJ 974, 23 (2024), . [7] S. Vegetti et al., Strong Gravitational Lensing as a Probe of Dark Matter, Space Sci. Rev. 220, 58 (2024), . [8] I. Dutra, P. Natarajan, and D. Gilman, Self-interacting Dark Matter, Core Collapse, and the Galaxy-Galaxy Strong-lensing Discrepancy, Astrophys. J. 978, 38 (2025), . [9] ˇZ. Ivezi ́c et al. (LSST), LSST: from Science Drivers to Reference Design and Anticipated Data Products, Astrophys. J. 873, 111 (2019), . [10] R. Scaramella et al. (Euclid), Euclid preparation. I. The Euclid Wide Survey, Astron. Astrophys. 662, A112 (2022), . [11] M. Walmsley et al. (Euclid), Euclid Quick Data Release (Q1): The Strong Lensing Discovery Engine A - System overview and lens catalogue, (2025), . [12] G. P. Smith et al., Multi-messenger gravitational lensing, Phil. Trans. Roy. Soc. Lond. A 383, 20240134 (2025), . [13] M. Millon et al., COSMOGRAIL XIX: Time delays in 18 strongly lensed quasars from 15 years of optical monitoring, Astron. Astrophys. 640, A105 (2020), . [14] J. Aasi et al. (LIGO Scientific), Advanced LIGO, Class. Quant. Grav. 32, 074001 (2015), . [15] F. Acernese et al. (VIRGO), Advanced Virgo: a secondgeneration interferometric gravitational wave detector, Class. Quant. Grav. 32, 024001 (2015), . [16] T. Akutsu et al. (KAGRA), Overview of KAGRA: Detector design and construction history, PTEP 2021, 05A101 (2021), . [17] A. G. Abac et al. (LIGO Scientific, Virgo, KAGRA), GW250114: Testing Hawking's Area Law and the Kerr Nature of Black Holes, Phys. Rev. Lett. 135, 111403 (2025), . [18] GW230814: investigation of a loud gravitationalwave signal observed with a single detector, (2025), . [19] O. A. Hannuksela, K. Haris, K. K. Y. Ng, S. Kumar, A. K. Mehta, D. Keitel, T. G. F. Li, and P. Ajith, Search for gravitational lensing signatures in LIGO-Virgo binary black hole events, Astrophys. J. Lett. 874, L2 (2019), . [20] R. Abbott et al. (LIGO Scientific, VIRGO), Search for Lensing Signatures in the Gravitational-Wave Observations from the First Half of LIGO-Virgo's Third Observing Run, Astrophys. J. 923, 14 (2021), . [21] J. Janquart et al., Follow-up analyses to the O3 LIGO-Virgo-KAGRA lensing searches, Mon. Not. Roy. Astron. Soc. 526, 3832 (2023), . [22] R. Abbott et al. (LIGO Scientific, KAGRA, VIRGO), Search for Gravitational-lensing Signatures in the Full Third Observing Run of the LIGO-Virgo Network, Astrophys. J. 970, 191 (2024), . [23] K. K. Y. Ng, K. W. K. Wong, T. Broadhurst, and T. G. F. Li, Precise LIGO Lensing Rate Predictions for Binary Black Holes, Phys. Rev. D 97, 023012 (2018), . [24] M. Oguri, Effect of gravitational lensing on the distribution of gravitational waves from distant binary black hole mergers, Mon. Not. Roy. Astron. Soc. 480, 3842 (2018), . [25] F. Xu, J. M. Ezquiaga, and D. E. Holz, Please Repeat: Strong Lensing of Gravitational Waves as a Probe of Compact Binary and Galaxy Populations, Astrophys. J. 929, 9 (2022), . [26] A. R. A. C. Wierda, E. Wempe, O. A. Hannuksela, L. e. V. E. Koopmans, and C. Van Den Broeck, Beyond the Detector Horizon: Forecasting Gravitational15 Wave Strong Lensing, Astrophys. J. 921, 154 (2021), . [27] G. P. Smith, A. Robertson, G. Mahler, M. Nicholl, D. Ryczanowski, M. Bianconi, K. Sharon, R. Massey, J. Richard, and M. Jauzac, Discovering gravitationally lensed gravitational waves: predicted rates, candidate selection, and localization with the Vera Rubin Observatory, Mon. Not. Roy. Astron. Soc. 520, 702 (2023), . [28] M. Maggiore et al. (ET), Science Case for the Einstein Telescope, JCAP 03, 050, . [29] D. Reitze et al., Cosmic Explorer: The U.S. Contribution to Gravitational-Wave Astronomy beyond LIGO, Bull. Am. Astron. Soc. 51, 035 (2019), . [30] P. Amaro-Seoane et al. (LISA), Laser Interferometer Space Antenna, (2017), . [31] M. Colpi et al. (LISA), LISA Definition Study Report, (2024), . [32] A. Robertson, G. P. Smith, R. Massey, V. Eke, M. Jauzac, M. Bianconi, and D. Ryczanowski, What does strong gravitational lensing? The mass and redshift distribution of high-magnification lenses, Mon. Not. Roy. Astron. Soc. 495, 3727 (2020), . [33] L. Dai, T. Venumadhav, A. A. Kaurov, and J. MiraldaEscud ́e, Probing Dark Matter Subhalos in Galaxy Clusters Using Highly Magnified Stars, Astrophys. J. 867, 24 (2018), . [34] M. Oguri, Strong gravitational lensing of explosive transients, Rept. Prog. Phys. 82, 126901 (2019), . [35] R. K. L. Lo, L. Vujeva, J. M. Ezquiaga, and J. C. L. Chan, Observational Signatures of Highly Magnified Gravitational Waves from Compact Binary Coalescence, Phys. Rev. Lett. 134, 151401 (2025), . [36] L. Vujeva, J. M. Ezquiaga, R. K. L. Lo, and J. C. L. Chan, Effects of galaxy cluster structure on lensed gravitational waves, Phys. Rev. D 112, 063044 (2025), . [37] M. Pijnenburg, G. Cusin, C. Pitrou, and J.-P. Uzan, Wave optics lensing of gravitational waves: Theory and phenomenology of triple systems in the LISA band, Phys. Rev. D 110, 044054 (2024), . [38] J. C. L. Chan, E. Seo, A. K. Y. Li, H. Fong, and J. M. Ezquiaga, Detectability of lensed gravitational waves in matched-filtering searches, Phys. Rev. D 111, 084019 (2025), . [39] J. C. L. Chan, C. Dyson, M. Garcia, J. Redondo-Yuste, and L. Vujeva, Lensing and wave optics in the strong field of a black hole, (2025), . [40] M. C ̧alı ̧skan, N. Anil Kumar, L. Ji, J. M. Ezquiaga, R. Cotesta, E. Berti, and M. Kamionkowski, Probing wave-optics effects and dark-matter subhalos with lensing of gravitational waves from massive black holes, (2023), . [41] S. Savastano, G. Tambalo, H. Villarrubia-Rojo, and M. Zumalacarregui, Weakly Lensed Gravitational Waves: Probing Cosmic Structures with Wave-Optics Features, (2023), . [42] G. Brando, S. Goyal, S. Savastano, H. Villarrubia-Rojo, and M. Zumalac ́arregui, Signatures of dark and baryonic structures on weakly lensed gravitational waves, Phys. Rev. D 111, 024068 (2025), . [43] J. M. Ezquiaga, R. K. L. Lo, and L. Vujeva, Diffraction around caustics in gravitational wave lensing, Phys. Rev. D 112, 043544 (2025), . [44] J. F. Navarro, C. S. Frenk, and S. D. M. White, A Universal density profile from hierarchical clustering, Astrophys. J. 490, 493 (1997), arXiv:astro-ph/9611107. [45] J. F. Navarro, C. S. Frenk, and S. D. M. White, A Universal Density Profile from Hierarchical Clustering, Astrophys. J. 490, 493 (1997), astro-ph/9611107. [46] J. E. Taylor and J. F. Navarro, The Phase-Space Density Profiles of Cold Dark Matter Halos, Mon. Not. Roy. Astron. Soc. 325, 1253-1267 (2001), astro-ph/0104002. [47] A. Kassiola and I. Kovner, Elliptic Mass Distributions versus Elliptic Potentials in Gravitational Lenses, ApJ 417, 450 (1993). [48] R. Kormann, P. Schneider, and M. Bartelmann, Isothermal elliptical gravitational lens models., A&A 284, 285 (1994). [49] R. Barkana, Fast calculation of a family of elliptical mass gravitational lens models, Astrophys. J. 502, 531 (1998), arXiv:astro-ph/9802002. [50] V. Springel, J. Wang, M. Vogelsberger, A. Ludlow, A. Jenkins, A. Helmi, J. F. Navarro, C. S. Frenk, and S. D. M. White, The Aquarius Project: the subhalos of galactic halos, Mon. Not. Roy. Astron. Soc. 391, 1685 (2008), . [51] D. Fiacconi, P. Madau, D. Potter, and J. Stadel, Cold Dark Matter Substructures in Early-Type Galaxy Halos, Astrophys. J. 824, 144 (2016), . [52] B. Diemer and M. Joyce, An accurate physical model for halo concentrations, Astrophys. J. 871, 168 (2019), . [53] E. O. Nadler et al., Symphony: Cosmological Zoom-in Simulation Suites over Four Decades of Host Halo Mass, Astrophys. J. 945, 159 (2023), . [54] P. L. Schechter and J. Wambsganss, Quasar microlensing at high magnification and the role of dark matter: Enhanced fluctuations and suppressed saddlepoints, Astrophys. J. 580, 685 (2002), arXiv:astro-ph/0204425. [55] D. Gilman, S. Birrer, A. Nierenberg, T. Treu, X. Du, and A. Benson, Warm dark matter chills out: constraints on the halo mass function and the free-streaming length of dark matter with eight quadruple-image strong gravitational lenses, Mon. Not. Roy. Astron. Soc. 491, 6077 (2020), . [56] A. M. Nierenberg et al., JWST lensed quasar dark matter survey - I. Description and first results, Mon. Not. Roy. Astron. Soc. 530, 2960 (2024), . [57] N. Ephremidze, C. Chandrashekar, A. C ̧. S ̧eng ̈ul, and C. Dvorkin, Dark Matter Substructure or Source Model Systematics? A Case Study of Cluster Lens Abell S1063, (2025), . [58] A. B. Aazami and P. Natarajan, Substructure and the Cusp and Fold Relations, Mon. Not. Roy. Astron. Soc. 372, 1692 (2006), arXiv:astro-ph/0605347. [59] J. Zavala and C. S. Frenk, Dark matter haloes and subhaloes, Galaxies 7, 81 (2019), . [60] J. S. Bullock and M. Boylan-Kolchin, Small-Scale Challenges to the ΛCDM Paradigm, Ann. Rev. Astron. As16 trophys. 55, 343 (2017), . [61] L. Ji and L. Dai, Effects of Subhalos on Interpreting Highly Magnified Sources Near Lensing Caustics, (2024), . [62] P. Schneider, J. Ehlers, and E. E. Falco, Gravitational Lenses (1992). [63] G. Despali, S. Vegetti, S. D. M. White, C. Giocoli, and F. C. van den Bosch, Modelling the line-of-sight contribution in substructure lensing, Mon. Not. Roy. Astron. Soc. 475, 5424 (2018), . [64] D. Gilman, S. Birrer, T. Treu, A. Nierenberg, and A. Benson, Probing dark matter structure down to 107 solar masses: flux ratio statistics in gravitational lenses with line-of-sight haloes, Mon. Not. Roy. Astron. Soc. 487, 5721 (2019), . [65] N. Aghanim et al. (Planck), Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], . [66] R. Takahashi and T. Nakamura, Wave effects in gravitational lensing of gravitational waves from chirping binaries, Astrophys. J. 595, 1039 (2003), arXiv:astroph/0305055. [67] G. Tambalo, M. Zumalac ́arregui, L. Dai, and M. H.-Y. Cheung, Gravitational wave lensing as a probe of halo properties and dark matter, (2022), . [68] R. Blandford and R. Narayan, Fermat's principle, caustics, and the classification of gravitational lens images, Astrophys. J. 310, 568 (1986). [69] H. Villarrubia-Rojo, S. Savastano, M. Zumalac ́arregui, L. Choi, S. Goyal, L. Dai, and G. Tambalo, Gravitational lensing of waves: Novel methods for wave-optics phenomena, Phys. Rev. D 111, 103539 (2025), . [70] J. M. Ezquiaga, D. E. Holz, W. Hu, M. Lagos, and R. M. Wald, Phase effects from strong gravitational lensing of gravitational waves, Phys. Rev. D 103, 064047 (2021), . [71] A. M. Serra and O. Bulashenko, Probing binary lens caustics with gravitational waves: A uniform approximation approach, Phys. Rev. D 112, 024041 (2025), . [72] H. Whitney, On singularities of mappings of euclidean spaces. i. mappings of the plane into the plane, in Hassler Whitney Collected Papers, edited by J. Eells and D. Toledo (Birkh ̈auser Boston, Boston, MA, 1992) pp. 370-406. [73] K. Wang, Y.-Y. Mao, A. R. Zentner, J. U. Lange, F. C. van den Bosch, and R. H. Wechsler, Concentrations of Dark Haloes Emerge from Their Merger Histories, Mon. Not. Roy. Astron. Soc. 498, 4450 (2020), . [74] M. W. Auger, T. Treu, A. S. Bolton, R. Gavazzi, L. V. E. Koopmans, P. J. Marshall, L. A. Moustakas, and S. Burles, The Sloan Lens ACS Survey. X. Stellar, Dynamical, and Total Mass Correlations of Massive Earlytype Galaxies, ApJ 724, 511 (2010), . [75] C. R. Keeton and L. A. Moustakas, A New Channel for Detecting Dark Matter Substructure in Galaxies: Gravitational Lens Time Delays, Astrophys. J. 699, 1720 (2009), . [76] S.-d. Mao and P. Schneider, Evidence for substructure in lens galaxies?, Mon. Not. Roy. Astron. Soc. 295, 587 (1998), arXiv:astro-ph/9707187. [77] L. L. R. Williams, P. L. Kelly, T. Treu, A. Amruth, J. M. Diego, S. K. Li, A. K. Meena, A. Zitrin, T. J. Broadhurst, and A. V. Filippenko, Flashlights: Properties of highly magnified images near cluster critical curves in the presence of dark matter subhalos (2023), . [78] T. Ishiyama et al., The Uchuu simulations: Data Release 1 and dark matter halo concentrations, Mon. Not. Roy. Astron. Soc. 506, 4210 (2021), . [79] V. I. Arnold, Catastrophe Theory, 3rd ed. (Springer, Berlin, Germany, 1992). [80] R. Thom, Mod`eles math ́ematiques de la morphogen`ese (Christian Bourgois, Paris, 1981) p. 314, nouvelle ́edition revue et augment ́ee. [81] M. Berry and C. Upstill, Iv catastrophe optics: Morphologies of caustics and their diffraction patterns (Elsevier, 1980) pp. 257-346. [82] C. R. Keeton, S. Mao, and H. J. Witt, Gravitational lenses with more than four images: I. classification of caustics, Astrophys. J. 537, 697 (2000), arXiv:astroph/0002401. [83] M. Bradac, P. Schneider, M. Lombardi, M. Steinmetz, L. V. E. Koopmans, and J. F. Navarro, The signature of cdm substructure on gravitational lensing, Astron. Astrophys. 423, 797 (2004), arXiv:astro-ph/0306238. [84] A. B. Aazami and A. O. Petters, A Universal Magnification Theorem III. Caustics Beyond Codimension Five, J. Math. Phys. 51, 023503 (2010), . [85] G. Orban de Xivry and P. Marshall, An atlas of predicted exotic gravitational lenses, Monthly Notices of the Royal Astronomical Society 399, 2-20 (2009). [86] J. Hidding, S. F. Shandarin, and R. van de Weygaert, The Zel'dovich approximation: key to understanding cosmic web complexity, Mon. Not. Roy. Astron. Soc. 437, 3442 (2014), . [87] A. K. Meena and J. S. Bagla, Finding Singularities in Gravitational Lensing, Mon. Not. Roy. Astron. Soc. 492, 3294 (2020), . [88] N. Dalal and C. S. Kochanek, Direct detection of CDM substructure, Astrophys. J. 572, 25 (2002), arXiv:astroph/0111456. [89] L. Hui, Wave Dark Matter, Ann. Rev. Astron. Astrophys. 59, 247 (2021), . [90] R. A. de Oliveira, J. P. C. Fran ̧ca, and M. Makler, The Last Stand Before Rubin: a consolidated sample of strong lensing systems in wide-field surveys, (2025), . [91] L. Vujeva, R. K. L. Lo, J. M. Ezquiaga, and J. C. L. Chan, lenscat: a public and community-contributed catalogue of known strong gravitationallenses, Phil. Trans. Roy. Soc. Lond. A 383, 20240168 (2025), . [92] D. Gilman, S. Birrer, and T. Treu, TDCOSMO - III. Dark matter substructure meets dark energy. The effects of (sub)halos on strong-lensing measurements of H0, Astron. Astrophys. 642, A194 (2020), . [93] P. Auclair et al. (LISA Cosmology Working Group), Cosmology with the Laser Interferometer Space Antenna, (2022), . 17 FIG. 12. Relative magnification (μr) vs time delay (∆T) for the two brightest images of sources placed near the caustics of a single elliptical single isothermal sphere halo. The red and blue points correspond to sources just inside and outside of the caustics respectively. The grey line is the maximum relative magnification factor (μr = 2)) for sources just inside of the caustics. Appendix A: μr -∆T Relation Inside and Outside of Caustics In this section, we will elaborate on the difference in the distribution of the μr -∆T relation immediately inside and outside of the caustics. Throughout this work, we have generally been interested in the images produced by sources just inside of the main caustics of our host halo (meaning sources that fall on the side of the caustics closest to the center of the halo). This is mainly due to the images being produced in these region being highly magnified, and thus falling within the sensitivity of our current and future detectors. However, sources that lie just outside of the main caustics can still be moderately magnified by the main halo, or even highly magnified by the presence of subhalos, but the population of these images is significantly different from those of the sources just inside of the caustics. Namely, the two brightest images of a source just outside of the caustics are not subject to the same universality relations as those of sources just inside of the caustics. This can be seen explicitly in Fig. 12, where the red and blue points correspond to sources just inside and outside of the caustics respectively. The sources inside of the caustics produce the familiar highly magnified, short time delay image pairs that are of interest for this work. However, despite their proximity to the caustics, the sources just outside of them can reach high relative magnifications, but arrive with much larger time delays. Appendix B: Sensitivity to c and Σsub In this section, we present the full distributions for alternative subhalo parameters we consider in this work. Namely, we focus on changing the subhalo concentrations (c) and subhalo number densities (parametrized by Σsub) to both quantify the number of source locations that generate images that meet our criteria for the the images being affected by subhalos, as well as to identify qualitative differences in the μr -∆T distributions. 18 FIG. 13. Relative magnifications and time delays for modified concentrations (left), and number densities (right). In both cases, the fiducial models are shown in gray. Note that Σsub denotes the normalization of the subhalo mass function at 108M⊙, and is not the 2D surface mass density.
2510.14952
FROM LANGUAGE TO LOCOMOTION: RETARGETING-FREE HUMANOID CONTROL VIA MOTION LATENT GUIDANCE Zhe Li1∗, Cheng Chi2∗†, Yangyang Wei3, Boan Zhu4, Yibo Peng2, Tao Huang5 Pengwei Wang2, Zhongyuan Wang2, Shanghang Zhang2,6B, Chang Xu1B 1 University of Sydney, 2 BAAI, 3 Harbin Institute of Technology 4 Hong Kong University of Science and Technology, 5 Shanghai Jiao Tong University 6 Peking University Prompt Motion Generator Motion Latent Decoder Retarget MLP Policy Prompt Motion Generator Diffusion Policy Previous Ours Time Cost and Succ Rate (a) (b) (c) Text Input: The person is performing a backflip. Text Input: A person dances with flair then leaps forward. (d) (e) Figure 1: RoboGhost is a retargeting-free latent driven policy for language-guided humanoid locomotion. By removing the dependency on motion retargeting, it thus allows robots to be controlled directly via open-ended language commands. The figure showcases (a) the previous pipeline with motion retargeting, (b) our proposed retargeting-free latent-driven pipeline, (c) quantitative comparisons of success rate and time cost between baseline and RoboGhost, (d) performing the backflip, and (e) dancing and leaping forward. ABSTRACT Natural language offers a natural interface for humanoid robots, but existing language-guided humanoid locomotion pipelines remain cumbersome and un- reliable. They typically decode human motion, retarget it to robot morphology, and then track it with a physics-based controller. However, this multi-stage process is prone to cumulative errors, introduces high latency, and yields weak coupling between semantics and control. These limitations call for a more direct pathway from language to action, one that eliminates fragile intermediate stages. There- fore, we present RoboGhost, a retargeting-free framework that directly conditions humanoid policies on language-grounded motion latents. By bypassing explicit motion decoding and retargeting, RoboGhost enables a diffusion-based policy to denoise executable actions directly from noise, preserving semantic intent and supporting fast, reactive control. A hybrid causal transformer–diffusion motion generator further ensures long-horizon consistency while maintaining stability and diversity, yielding rich latent representations for precise humanoid behavior. Extensive experiments demonstrate that RoboGhost substantially reduces deploy- ∗Equal Contribution † Project Leader B Corresponding Author 1 arXiv:2510.14952v2 [cs.RO] 17 Oct 2025 ment latency, improves success rates and tracking accuracy, and produces smooth, semantically aligned locomotion on real humanoids. Beyond text, the framework naturally extends to other modalities such as images, audio, and music, providing a general foundation for vision–language–action humanoid systems. Project Page 1 INTRODUCTION Natural language provides an intuitive interface for humanoid robots, enabling users to translate free- form instructions into physically feasible humanoid motion. Recent text-to-motion (T2M) models can generate diverse and semantically meaningful human motions (Li et al., 2024d; Chen et al., 2023; Guo et al., 2024; Li et al., 2024b; Tevet et al., 2022). However, deploying these models on real robots typically requires a hierarchical pipeline: decoding human motion from language, retargeting it to robot morphology, and tracking the trajectory with a physics-based controller (Peng et al., 2018; Ji et al., 2024; Chen et al., 2025; He et al., 2025a). This pipeline, while functional in constrained settings, suffers from systemic drawbacks. (1) Errors accumulate across decoding, retargeting, and tracking, degrading both semantic fidelity and physical feasibility. (2) High latency from multiple sequential stages, making real-time interaction difficult. (3) Loose coupling between language and control, since each stage is optimized in isolation rather than end-to-end. Recent refinements (Yue et al., 2025; Serifi et al., 2024; Shao et al., 2025) attempt to mitigate these issues, but improvements are applied locally to decoders or controllers, leaving the overall pipeline fragile and inefficient. Our key insight is simple: treat motion latents as first-class conditioning signals, directly use them as conditions to generate humanoid actions without decoding and retargeting altogether. We therefore propose RoboGhost, a retargeting-free framework named to highlight the latent representa- tions that are invisible like a ghost yet strongly drive humanoid behavior. As shown in Fig 1, rather than producing explicit human motion, RoboGhost leverages language-conditioned motion latents as semantic anchors to guide a diffusion-based humanoid policy. The policy denoises executable actions directly from noise, eliminating error-prone intermediate stages while preserving fine-grained intent and enabling fast, reactive control. To our knowledge, this is the first diffusion-based humanoid policy driven by motion latents, achieving smooth and natural locomotion with DDIM-accelerated sampling (Song et al., 2020) for real-time deployment. To enhance temporal coherence and motion diversity, RoboGhost employs a hybrid causal trans- former–diffusion architecture. The transformer backbone captures long-horizon dependencies and ensures global consistency (Zhou et al., 2024; 2025; Han et al., 2025). The diffusion component contributes stability and stochasticity for fine-grained motion synthesis. Together, this design miti- gates drift and information loss typical of autoregressive models, while producing expressive motion latents that provide downstream policies with rich semantic conditioning for precise control. Extensive experiments validate the effectiveness and practicality of RoboGhost. We dramatically accelerate the full pipeline from motion generation to humanoid deployment, cutting the time from 17.85 s to 5.84 s. Beyond sheer speed, our approach yields higher-quality control by circumventing retargeting losses and improving generalization, which is reflected in a 5% higher success rate and reduced tracking error compared to baseline methods. Our framework achieves robust, semantically aligned locomotion on real humanoids, substantially reducing latency compared to retargeting-based pipelines. We can further extend this framework to support other input modalities such as images, audio, and music, thereby offering a reference architecture for humanoid vision-language-action systems. In short, RoboGhost moves text-driven humanoid control from fragile pose imitation to robust, real-time interaction. In summary, our key contributions can be summarized as follows: • We propose RoboGhost, a retargeting-free framework that directly leverages language- generated motion latents for end-to-end humanoid policy learning, eliminating error-prone decoding and retargeting stages. • We introduce the first diffusion-based humanoid policy conditioned on motion latents, which denoises executable actions directly from noise and achieves smooth, diverse, and physically plausible locomotion with DDIM-accelerated sampling. 2 • We design a hybrid transformer–diffusion architecture that unifies long-horizon tempo- ral coherence with stochastic stability, yielding expressive motion latents and strong lan- guage–motion alignment. • We validate RoboGhost through extensive experiments, demonstrating its effectiveness and generality in enabling robust and real-time language-guided humanoid locomotion. 2 RELATED WORK 2.1 HUMAN MOTION SYNTHESIS Language-guided humanoid locomotion leverages advances in text-to-motion generation, which primarily uses transformer-based discrete modeling or diffusion-based continuous modeling. Discrete methods model motion as tokens, evolving from early vector quantization (Guo et al., 2022c) to GPT- style autoregression (Zhang et al., 2023a), scaling with LLMs (Jiang et al., 2023; Zhang et al., 2024) or improved attention (Zhong et al., 2023), and recently, bidirectional masking (Guo et al., 2023) and enhanced text alignment (Li et al., 2024d). In parallel, diffusion models excel in synthesis (Kim et al., 2023; Hu et al., 2023; Tevet et al., 2022; Li et al., 2024c; Bai et al., 2025; Li et al., 2023b; ?), with progress in efficiency (Chen et al., 2023), retrieval-augmentation (Zhang et al., 2023b), controllability (Li et al., 2024b), and architectures (Meng et al., 2024; Li et al., 2025). Our work builds on the continuous autoregressive framework (Li et al., 2024a), combining the benefits of both paradigms. 2.2 HUMANOID WHOLE-BODY CONTROL Whole-body control (WBC) is crucial for humanoids, yet learning a general-purpose policy remains challenging. Existing approaches exhibit inherent trade-offs: methods like OmniH2O (He et al., 2024) and HumanPlus (Fu et al., 2024) prioritize specific robustness at the cost of generality or long-term accuracy, while others like Hover (He et al., 2025b), ExBody2 (Ji et al., 2024), and GMT (Chen et al., 2025) employs strategies (e.g., masking, curriculum learning, MoE) to enhance adaptability, though generalization is not guaranteed. Recent language-guided works also face limitations. LangWBC (Shao et al., 2025) scales poorly and offers no guarantee of generalization to unseen instructions, and RLPF (Yue et al., 2025) risks catastrophic forgetting and limited output diversity. To overcome these issues, we propose a MoE-based oracle paired with a latent-driven diffusion student, enhancing generalization while reducing deployment cost. 3 METHOD This section presents the core components of our framework, which is depicted in Fig 2. We begin with an overview of our method in Section 3.1, providing a high-level description of the architecture and motivation. Section 3.2 details the construction of our motion generator, which leverages continuous autoregression and a causal autoencoder. Furthermore, Section 3.3 elaborates on our novel retargeting-free, latent-driven reinforcement learning architecture based on diffusion models, including its training and inference procedures. Finally, we present our causal adaptive sampling strategy in Section 3.4. Other details can be found in appendix 6.2. 3.1 OVERVIEW Our work introduces a novel retargeting-free, latent-driven reinforcement learning architecture for language-guided humanoid control, which fundamentally diverges from conventional motion-tracking pipelines. As depicted in Fig. 2, our approach comprises three core components: a continuous autoregressive motion generator, a MoE-based teacher policy, and a latent-driven diffusion-based student policy. We focus on the challenge of generating diverse, physically plausible motions from high-level instructions without the need for complex kinematic retargeting. The process begins by feeding a textual prompt T into the continuous autoregressive motion generator. Unlike prior works that decode the generated motion into an explicit kinematic sequence requiring tedious retargeting to the robot, our generator produces a compact latent motion representation lref. 3 a a “A person walks in a circle.” Task Instruction Motion Encoder AdaLN Causal Transformer Text Encoder Diffusion Model Stage1—Motion Generation Stage2—Policy Training Retargeted Dataset Proprioception Privileged Information Motion Generator History observations Diffusion Add Noise Teacher Policy Student Policy Inference Text Encoder Causal Transformer Diffusion Model motion latent Gaussian Noise PD Diffusion MoE Task Instruction Latent Encoder Add Noise Trainable Frozen “A person is doing jump jacks.” Motion Latent Figure 2: Overview of RoboGhost. We propose a two-stage approach: a motion latent is first generated, then a MoE-based teacher policy is trained with RL and a diffusion-based student policy is trained to denoise actions conditioned on motion latent. This latent-driven scheme bypasses the need for motion retargeting. This design is pivotal, as it bypasses the error-prone retargeting step and mitigates performance degradation caused by limited generator capability. The generation process can be formulated as: lref = G(T), where G denotes our motion generator. This latent representation lref, alongside proprioceptive states and historical observations, then conditions a diffusion-based student policy πs. The policy operates a denoising process to output actions directly executable on the physical humanoid. This latent-driven paradigm eliminates the dependency on privileged information and explicit reference motions during deployment, significantly streamlining the sim-to-real pipeline. To efficiently train the high-level oracle teacher policy, we introduce a causal adaptive sampling strategy. It dynamically prioritizes challenging motion segments by attributing failures to their causal antecedents, thereby maximizing sample efficiency and enabling the learning of long-horizon, agile motor skills. Finally, a meticulously designed reward function ensures accurate and expressive whole-body motion tracking. Collectively, our framework achieves robust, direct-drive control from language commands, setting a new paradigm for retargeting-free humanoid locomotion. 3.2 CONTINUOUS AUTOREGRESSIVE MOTION GENERATOR Given that discretized models are prone to information loss and to leverage the advantages of both masked modeling and autoregressive approaches, we adopt a causal autoencoder and continuous masked autoregressive architecture with causal attention mask, which effectively captures temporal dependencies between tokens and produces rich contextual conditions for the subsequent diffusion process. Specifically, we first randomly mask motion tokens following the practice in language models (Devlin et al., 2019), obtaining a set of masked tokens. The temporal masking strategy follows the same mask ratio scheduling as (Chang et al., 2022), defined by the function: γ(τ) = cos πτ 2  , (1) where τ ∈[0, 1]. During training, τ is uniformly sampled from U(0, 1), yielding a mask ratio γ(τ). Accordingly, γ(τ) × N tokens are randomly selected for masking. Unlike previous masked autoregressive methods (Li et al., 2024a; Meng et al., 2024; Li et al., 2023a), our approach does not involve random token shuffling or batch-token prediction. Moreover, to mitigate the limitation in model expressiveness caused by low-rank matrix approximations during training, we replace bidirectional attention masks with causal attention masking. And for the input 4 text prompt T, we use the LaMP (Li et al., 2024d) text transformer to extract textual features. This model captures linguistic nuances and semantic structures effectively, resulting in high-dimensional feature representations that provide essential guidance for motion generation. After the transformer completes the masked token prediction task, the predicted latent representations are used to condition the diffusion model, guiding the denoising process to produce more accurate and semantically rich latent representations. This enables downstream latent-driven action generation. 3.3 LATENT-DRIVEN DIFFUSION POLICY FOR RETARGETING-FREE DEPLOYMENT 3.3.1 MOE-BASED TEACHER POLICY The core challenge in text-to-locomotion is the inherent open-endedness of language. Therefore, generalization capability is the key enabler for policies to respond successfully to novel prompts and achieve true deployment flexibility. First, we train an oracle teacher policy using PPO (Schulman et al., 2017) with privileged simulator-state information. To learn a policy πt that generalizes across diverse motion inputs, we first train an initial policy π0 on a curated motion dataset D0 of high diversity. We then evaluate the tracking accuracy of π0 for each motion sequence s ∈D0, focusing on the lower body through an error metric e(s) = α · Ekey(s) + β · Edof(s), where Ekey(s) denotes the mean key-body position error and Edof(s) the mean joint angle tracking error of the lower body. Motion sequences with e(s) > 0.6 are filtered out, and the remaining data D are used to train a general teacher policy. The teacher policy πt is trained as an oracle in simulation using PPO, leveraging privileged infor- mation unavailable in the real world—including ground-truth root velocity, global joint positions, physical properties (e.g., friction coefficients, motor strength), proprioceptive state, and reference motion. The policy outputs target joint positions ˆat ∈R23 for proportional derivative (PD) control, maximizing cumulative rewards to achieve accurate motion tracking and robust behavior, resulting in a high-performance expert policy trained solely on simulated data. Finally, we seek: πt = arg max π Es∈D [Performance(π, s)] (2) To enhance the expressive power and generalization capability of the model, we incorporate a Mixture of Experts (MoE) module into the training of the teacher policy. The policy network consists of a set of expert networks and a gating network. The expert networks take as input both the robot state observations and reference motion, and output the final action at. The gating network receives the same input observations and produces a probability distribution over all experts. The final action is computed as a weighted combination of actions sampled from each expert’s output distribution: a = Pn i=1 pi · ai, where pi denotes the probability assigned to the i-th expert by the gating network, and ai represents the output of the i-th expert policy. This architecture enhances the policy’s generalization capacity and obtains actions that accurately track reference motions, thereby providing more precise supervised signals for the student policy. 3.3.2 DIFFUSION-BASED STUDENT POLICY Unlike prior approaches where the student policy πs distills knowledge from the teacher using refer- ence motion, we propose a novel latent-driven student policy that takes motion latent representations as input. In addition to the observation history, it incorporates a latent representation lref from the motion generator as input. This implicit design enables the policy to operate during inference without retargeted explicit reference motion, thereby streamlining deployment, reducing performance degradation caused by limitations of motion generator capability and retargeting. Following a DAgger-like (Ross et al., 2011) approach, we roll out the student policy in simulation and query the teacher for optimal actions ˆat at visited states. During training, we progressively inject Gaussian noise ϵt into the teacher’s actions and use the first-stage pretrained motion generator to obtain a latent representation with textual descriptions. The forward process can be modeled as a Markov nosing process using: q(xt|xt−1) = N(xt; √ 1 −αt · xt−1, αtI) (3) where the constant αt ∈(0, 1) is a hyper-parameter for sampling. Here, we denote the denoiser as ϵθ, use {xt}T t=0 to denote the noising sequence, and xt−1 = ϵθ(xt, t) for the t-step denoising. 5 While the motion generator can produce motions lacking physical realism, our method does not blindly follow its output. Instead, a trainable latent encoder intelligently conditions the policy, translating raw proposals into actionable and stable commands. This enables the policy to generate diverse and robust actions even from imperfect guidance. This representation, along with proprioceptive states and historical observations, serves as conditioning to guide the denoising process. For tractability, we adopt an x0-prediction strategy and supervise the student policy by minimizing the mean squared error loss L = ∥a −ˆat∥2 2, where a = xt−√1−¯αt·ϵθ(xt,t) √¯αt . The process iterates until convergence, yielding a policy that requires neither privileged knowledge nor explicit reference motion, and is suitable for real-world deployment. 3.3.3 INFERENCE PIPELINE To ensure motion fluency and smoothness, it is essential to minimize the time required for the denoising process. We therefore adopt DDIM sampling (Song et al., 2020) and an MLP-based diffusion model to generate actions for deployment. The reverse process can be formulated as: xt−1 = √αt−1 xt −√1 −αt · ϵθ(xt, t) √αt  + p 1 −αt−1 · ϵθ(xt, t) (4) Our framework is entirely retargeting-free and latent-driven. During inference, the textual description is first input into a motion generator and obtains a latent motion representation lref. Crucially, we bypass the decoding of this latent into explicit motion sequences, thus eliminating the need for retargeting to the robot. We sample a random noise as input to the student policy and condition the diffusion model via AdaLN (Huang & Belongie, 2017) on the motion latent, proprioceptive states po and historical observations ot−H:t, producing a clean action a = ϵθ(ϵ|lref, po, ot−H:t), that is directly executable on the physical robot. This streamlined pipeline not only reduces complexity but also mitigates issues such as low-quality motion generation due to limited generator capability, retargeting-induced errors, and insufficient action diversity. 3.4 CAUSAL ADAPTIVE SAMPLING RoboGhost aims at tracking a reference motion more expressively in the whole body. To this end, we employ a novel sampling method to facilitate the teacher policy in mastering more challenging and longer motion sequences. Details about various reward functions, curriculum learning, and domain randomizations can be found in appendix 6.2. Training long-horizon motor skills faces a fundamental challenge: motion segments exhibit het- erogeneous difficulty levels. Conventional approaches typically employ uniform sampling across trajectories (He et al., 2025a; Truong et al., 2024), which leads to oversampling of trivial segments and under-sampling of challenging ones, resulting in high reward variance and suboptimal sample efficiency. To address this, we propose a causality-aware adaptive sampling mechanism. The motion sequence is divided into K equal-length intervals, and the sampling probability for each interval is dynamically adjusted based on empirical failure statistics. Let kt denote the interval in which a rollout terminates due to failure. We hypothesize that the root cause often arises s steps prior to kt—such as a misstep or collision—that propagates into eventual failure at kt. To enable corrective learning, we increase the sampling likelihood of these antecedent intervals. Motivated by the temporal structure of failures—typically preceded by suboptimal actions—we apply an exponential decay kernel α(u) = γu (γ ∈(0, 1)) to assign higher weights to time steps leading up to termination. The base sampling probability for interval s is defined as: ∆pi = α(t −i) · p, i ∈[t −s, t], ∆pi = 0, i /∈[t −s, t] (5) where K controls the backward horizon for causal attribution. Subsequently, we update and normalize the sampling probabilities across all intervals. Let pi denote the base probability for interval i. The updated probability is given by: p′ i ←pi + ∆pi, where ∆pi denotes the increment derived from failure attribution. The distribution is then renormalized to ensure P i p′ 1 = 1. Finally, the initial interval is sampled from the multinomial distribution Multinomial(p′ 1, . . . , p′ K), enabling selective 6 focus on high-difficulty segments. Since each interval consists of multiple frames, after sampling the restart interval, we further select the exact restart frame uniformly within that interval. 4 EXPERIMENT We present a rigorous empirical evaluation of our proposed method in this section, structured to systematically address three core research questions central to its design and efficacy. We organize our analysis around the following three research questions: • Q1: Advantages of Latent-driven Pipeline. To what extent does a retargeting-free, latent- driven approach improve efficiency, robustness, and generalization in motion generation, and how do these advantages manifest in real-world deployment scenarios? • Q2: Generalization Gains via Diffusion Policy. Does replacing MLP-based policies with diffusion-based policies enhance generalization to unseen instructions or environments? • Q3: Better Motion Generator. Does the continuous autoregressive architecture yield more semantically and kinematically precise latent embeddings than standard alternatives? Further, how do architectural variants of the diffusion model (e.g., MLP and DiT (Peebles & Xie, 2023)) affect motion generation quality? 4.1 EXPERIMENTS SETTINGS Datasets and Metrics We evaluate on two MotionMillion (Fan et al., 2025) subsets: HumanML and Kungfu. For policy training, we use only index-0 sequences per motion name (e.g., “00000_0.npy”), excluding mirrored variants to reduce redundancy. Motions with non-planar terrains or infeasible contacts are filtered to ensure flat-ground consistency. The curated set is split 8:2 (train:test) with stratified category balance. For motion generator training, we leverage the full subsets. Evaluation metrics include motion generation and motion tracking. Generation follows Guo et al. (2022b) with retrieval precision R@1, 2, 3, MM Dist, FID, and Diversity. Tracking is assessed in physics simulators, consistent with OmniH2O (He et al., 2024) and PHC (Luo et al., 2023), using success rate (primary), mean per-joint error (Empjpe), and mean per-keypoint error (Empkpe). Success rate reflects both accuracy and stability. Further details are in appendix 6.3. 4.2 EVALUATION OF MOTION GENERATION To validate the effectiveness of our continuous autoregressive motion generation model, we train and evaluate it on two distinct skeleton representations: the 263-dim HumanML3D (Guo et al., 2022a) and the 272-dim Humanml and Kungfu subsets of MotionMillion. We benchmark against text-to-motion methods, including diffusion- and transformer-based approaches. Motivated by SiT (Ma et al., 2024) and MARDM (Meng et al., 2024), we reformulate the training objective of the motion generator from noise prediction to velocity prediction, leading to improved motion quality and enhanced dynamic consistency in generated sequences. As summarized in Table 1, our model achieves competitive performance across both formats, demonstrating robustness to representation variation. 4.3 EVALUATION OF MOTION TRACKING POLICY To further validate the efficacy of our motion tracking policy, we conduct evaluations on the Hu- manML and Kungfu subsets of MotionMillion, measuring Empjpe and Empkpe under physics-based simulation in both IsaacGym and MuJoCo. The pipeline operates as follows: textual descriptions are first input into the motion generator to produce a latent motion representation, which is subsequently input by the student policy for action execution. As summarized in Table 2, our method achieves high success rates on HumanML, alongside low joint and keypoint errors, indicating robust alignment between generated motion semantics and physically executable trajectories. Here, Baseline refers to the conventional, explicit framework where both the teacher and student policies use MLP backbones. 7 Methods R Precision↑ FID↓ MM-Dist↓ Diversity→ Top 1 Top 2 Top 3 HumanML3D Ground Truth 0.702 0.864 0.914 0.002 15.151 27.492 MDM (Tevet et al., 2023) 0.523 0.692 0.764 23.454 17.423 26.325 MLD (Chen et al., 2023) 0.546 0.730 0.792 18.236 16.638 26.352 T2M-GPT (Zhang et al., 2023a) 0.606 0.774 0.838 12.475 16.812 27.275 MotionGPT (Jiang et al., 2023) 0.456 0.598 0.628 14.375 14.892 27.114 MoMask (Guo et al., 2023) 0.621 0.784 0.846 12.232 16.138 27.127 AttT2M (Zhong et al., 2023) 0.592 0.765 0.834 15.428 15.726 26.674 MotionStreamer (Xiao et al., 2025) 0.631 0.802 0.859 11.790 16.081 27.284 Ours-DDPM 0.639 0.808 0.867 11.706 15.978 27.230 Ours-SiT 0.641 0.812 0.870 11.743 15.972 27.307 HumanML (MotionMillion) Ground Truth 0.714 0.876 0.920 0.002 14.984 26.571 Ours-DDPM 0.644 0.819 0.873 11.724 15.870 26.395 Ours-SiT 0.646 0.818 0.872 11.716 15.603 26.471 Table 1: Quantitative results of text-to-motion generation on the HumanML3D dataset and HumanML subset. →denotes that if the value is closer to the ground truth, the metric is better. Method IsaacGym MuJoCo Succ ↑ Empjpe ↓ Empkpe ↓ Succ ↑ Empjpe ↓ Empkpe ↓ HumanML (MotionMillion) Baseline 0.92 0.23 0.19 0.64 0.34 0.31 Ours-DDPM 0.97 0.12 0.09 0.74 0.24 0.20 Ours-SiT 0.98 0.14 0.08 0.72 0.26 0.23 Kungfu (MotionMillion) Baseline 0.66 0.43 0.37 0.51 0.58 0.52 Ours-DDPM 0.72 0.34 0.31 0.57 0.54 0.50 Ours-SiT 0.71 0.36 0.32 0.55 0.53 0.48 Table 2: Motion tracking performance comparison in simulation on the HumanML and Kungfu test sets. 4.4 QUALITATIVE EVALUATION We present a qualitative evaluation of the motion tracking policy across three deployment stages: simulation (IsaacGym), cross-simulator transfer (MuJoCo), and real-world execution on the Unitree G1 humanoid. Fig 3 visualizes representative tracking sequences, highlighting the policy’s ability to preserve motion semantics, maintain balance under dynamic transitions, and generalize across physics engines and hardware. In particular, real-robot deployments demonstrate that our latent- driven, retargeting-free framework enables smooth, temporally coherent motion execution without manual tuning, validating its readiness for practical embodied applications. 4.5 ABLATION STUDIES To systematically answer the three research questions posed at the outset of this section, we conduct various ablation studies. To answer Q1 (Advantages of Retargeting-free Pipeline), we evaluate the conventional pipeline: text prompts are fed to the motion generator, the output latent is decoded into explicit motion, which is then retargeted to the G1 robot and executed by the policy. As shown in Table 3, this approach underperforms due to its multi-stage complexity and error accumulation, and incurs higher real-world 8 “The person is performing a Wushu Kungfu sequence that may culminate in a backflip.” “He places his hands together on the right side, execute a series of kicks with their left leg.” “A man leaps 3 times in rapid succession.” MuJoCo IsaacGym “The man is performing jumping jacks.” “A person leaps and spins.” “A person executes a Kung Fu Flying Kick.” Figure 3: Qualitative results in the IsaacGym and MuJoCo. Method IsaacGym MuJoCo Time Cost (s) Succ ↑ Empjpe ↓ Empkpe ↓ Succ ↑ Empjpe ↓ Empkpe ↓ Ours-Explicit 0.93 0.21 0.17 0.66 0.32 0.27 17.85 Ours-Implicit 0.97 0.12 0.09 0.74 0.24 0.20 5.84 Table 3: Motion tracking performance comparison across different simulators on the HumanML and Kungfu test sets. Explicit version including PHC retargeting (1000 interations) and latent decode processes. deployment time cost. This confirms the efficiency and performance advantage of our retargeting-free, latent-driven framework. To answer Q2 (Generalization Gains via Diffusion Policy), we conduct an ablation by replacing the diffusion policy with an MLP-based policy that concatenates latent and other observations to predict actions. Diffusion policies, by design, better capture various action distributions, enabling more robust adaptation to diverse or imperfect latents. As shown in Table 4, the diffusion policy significantly outperforms its MLP counterpart. To further evaluate generalization, we test both policies on 10 randomly sampled motions from unseen MotionMillion subsets (fitness, perform, 100style, haa). Although the motion generator was not trained on these subsets — resulting in suboptimal latents — the diffusion-based policy still achieves substantially better tracking and robustness than MLP, as evidenced in Table 4. This highlights the humanoid diffusion policy’s superiority. Method IsaacGym Succ ↑ Empjpe ↓ Empkpe ↓ MLP Policy 0.96 0.17 0.11 Diffusion Policy 0.97 0.12 0.09 Method IsaacGym Succ ↑ Empjpe ↓ Empkpe ↓ MLP Policy 0.54 0.48 0.45 Diffusion Policy 0.68 0.42 0.39 Table 4: Comparison of MLP-based and diffusion-based policy. The left table shows the tracking performance on HumanML subset, and the right table presents the generalization ability of two different policy architectures. To address Q3 (Better Motion Generator), we conduct an ablation study on the diffusion backbone in our framework, comparing a 16-layer MLP and a 4-layer DiT under identical settings. As shown in Table 5, DiT offers slight gains in generation metrics but no measurable improvement in tracking success or joint accuracy, while incurring higher latency due to larger model size. Balancing effectiveness and efficiency, we therefore adopt the 16-layer MLP as our default backbone. 9 Method IsaacGym HumanML3D Time Cost (s) ↓ Succ ↑ Empjpe ↓ Empkpe ↓ Top 3 ↑ FID ↓ DiT 0.96 0.11 0.11 0.870 11.697 14.28 MLP 0.97 0.12 0.09 0.867 11.706 5.84 Table 5: Comparison of tracking performance across different diffusion backbones for motion generation. 5 CONCLUSION In this paper, we introduce RoboGhost, a retargeting-free, latent-driven framework that bridges natural language instructions with physically feasible humanoid motion control. Our method harnesses rich semantic representations from a pretrained text-to-motion model and integrates them into a diffusion-based humanoid policy, thereby bypassing motion decoding and retargeting stages that are not only error-prone but also time-consuming. This design enables real-time, language-guided humanoid locomotion with improved adaptability. Extensive experiments show that RoboGhost re- duces cumulative distortions and execution latency while preserving semantic alignment and physical realism. Beyond advancing efficiency and robustness, RoboGhost offers a new and practical pathway toward more intuitive, responsive, and deployable humanoid robots in real-world environments. REFERENCES Shuanghao Bai, Wenxuan Song, Jiayi Chen, Yuheng Ji, Zhide Zhong, Jin Yang, Han Zhao, Wanqi Zhou, Wei Zhao, Zhe Li, Pengxiang Ding, Cheng Chi, Haoang Li, Chang Xu, Xiaolong Zheng, Donglin Wang, Shanghang Zhang, and Badong Chen. Towards a unified understanding of robot manipulation: A comprehensive survey. arXiv preprint arXiv:2510.10903, 2025. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11315–11325, 2022. Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 18000–18010, 2023. Zixuan Chen, Mazeyu Ji, Xuxin Cheng, Xuanbin Peng, Xue Bin Peng, and Xiaolong Wang. Gmt: General motion tracking for humanoid whole-body control. arXiv preprint arXiv:2506.14770, 2025. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 4171–4186, 2019. Ke Fan, Shunlin Lu, Minyue Dai, Runyi Yu, Lixing Xiao, Zhiyang Dou, Junting Dong, Lizhuang Ma, and Jingbo Wang. Go to zero: Towards zero-shot motion generation with million-scale data. arXiv preprint arXiv:2507.07095, 2025. Zipeng Fu, Qingqing Zhao, Qi Wu, Gordon Wetzstein, and Chelsea Finn. Humanplus: Humanoid shadowing and imitation from humans. arXiv preprint arXiv:2406.10454, 2024. Xinyang Gu, Yen-Jen Wang, and Jianyu Chen. Humanoid-gym: Reinforcement learning for humanoid robot with zero-shot sim2real transfer. arXiv preprint arXiv:2404.05695, 2024. Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5152–5161, June 2022a. Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5152–5161, 2022b. 10 Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In European Conference on Computer Vision, pp. 580–597. Springer, 2022c. Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. 2023. Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1900–1910, 2024. Yi Han, Cheng Chi, Enshen Zhou, Shanyu Rong, Jingkun An, Pengwei Wang, Zhongyuan Wang, Lu Sheng, and Shanghang Zhang. Tiger: Tool-integrated geometric reasoning in vision-language models for robotics. arXiv preprint arXiv:2510.07181, 2025. Tairan He, Zhengyi Luo, Xialin He, Wenli Xiao, Chong Zhang, Weinan Zhang, Kris Kitani, Changliu Liu, and Guanya Shi. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint arXiv:2406.08858, 2024. Tairan He, Jiawei Gao, Wenli Xiao, Yuanhang Zhang, Zi Wang, Jiashun Wang, Zhengyi Luo, Guanqi He, Nikhil Sobanbab, Chaoyi Pan, et al. Asap: Aligning simulation and real-world physics for learning agile humanoid whole-body skills. arXiv preprint arXiv:2502.01143, 2025a. Tairan He, Wenli Xiao, Toru Lin, Zhengyi Luo, Zhenjia Xu, Zhenyu Jiang, Jan Kautz, Changliu Liu, Guanya Shi, Xiaolong Wang, et al. Hover: Versatile neural whole-body controller for humanoid robots. In 2025 IEEE International Conference on Robotics and Automation (ICRA), pp. 9989–9996. IEEE, 2025b. Vincent Tao Hu, Wenzhe Yin, Pingchuan Ma, Yunlu Chen, Basura Fernando, Yuki M Asano, Efstratios Gavves, Pascal Mettes, Bjorn Ommer, and Cees GM Snoek. Motion flow matching for human motion synthesis and editing. arXiv preprint arXiv:2312.08895, 2023. Albert S Huang, Edwin Olson, and David C Moore. Lcm: Lightweight communications and marshalling. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4057–4062. IEEE, 2010. Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normal- ization. In Proceedings of the IEEE international conference on computer vision, pp. 1501–1510, 2017. Mazeyu Ji, Xuanbin Peng, Fangchen Liu, Jialong Li, Ge Yang, Xuxin Cheng, and Xiaolong Wang. Exbody2: Advanced expressive humanoid whole-body control. arXiv preprint arXiv:2412.13196, 2024. Yuheng Ji, Huajie Tan, Jiayu Shi, Xiaoshuai Hao, Yuan Zhang, Hengyuan Zhang, Pengwei Wang, Mengdi Zhao, Yao Mu, Pengju An, et al. Robobrain: A unified brain model for robotic manipulation from abstract to concrete. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 1724–1734, 2025. Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a foreign language. Advances in Neural Information Processing Systems, 36:20067–20079, 2023. Jihoon Kim, Jiseob Kim, and Sungjoon Choi. Flame: Free-form language-based motion synthesis & editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 8255–8263, 2023. Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37: 56424–56445, 2024a. Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z Li, and Laurence T Yang. General point model with autoencoding and autoregressive. arXiv preprint arXiv:2310.16861, 2023a. 11 Zhe Li, Laurence T Yang, Xin Nie, BoCheng Ren, and Xianjun Deng. Enhancing sentence rep- resentation with visually-supervised multimodal pre-training. In Proceedings of the 31st ACM International Conference on Multimedia, pp. 5686–5695, 2023b. Zhe Li, Yisheng He, Lei Zhong, Weichao Shen, Qi Zuo, Lingteng Qiu, Zilong Dong, Laurence Tianruo Yang, and Weihao Yuan. Mulsmo: Multimodal stylized motion generation by bidirectional control flow. arXiv preprint arXiv:2412.09901, 2024b. Zhe Li, Laurence T Yang, Bocheng Ren, Xin Nie, Zhangyang Gao, Cheng Tan, and Stan Z Li. Mlip: Enhancing medical visual representation with divergence encoder and knowledge-guided contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11704–11714, 2024c. Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong Gu, Weichao Shen, Yuan Dong, Zilong Dong, and Laurence T Yang. Lamp: Language-motion pretraining for motion generation, retrieval, and captioning. arXiv preprint arXiv:2410.07093, 2024d. Zhe Li, Weihao Yuan, Weichao Shen, Siyu Zhu, Zilong Dong, and Chang Xu. Omnimotion: Multimodal motion generation with continuous masked autoregression, 2025. URL https: //arxiv.org/abs/2510.14954. Zhengyi Luo, Jinkun Cao, Kris Kitani, Weipeng Xu, et al. Perpetual humanoid control for real-time simulated avatars. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10895–10904, 2023. Nanye Ma, Mark Goldstein, Michael S Albergo, Nicholas M Boffi, Eric Vanden-Eijnden, and Saining Xie. Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. In European Conference on Computer Vision, pp. 23–40. Springer, 2024. Zichong Meng, Yiming Xie, Xiaogang Peng, Zeyu Han, and Huaizu Jiang. Rethinking diffusion for text-driven human motion generation. arXiv preprint arXiv:2411.16575, 2024. William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4195–4205, 2023. Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel Van de Panne. Deepmimic: Example- guided deep reinforcement learning of physics-based character skills. ACM Transactions On Graphics (TOG), 37(4):1–14, 2018. Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627–635. JMLR Workshop and Conference Proceedings, 2011. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, and Moritz Bächer. Robot motion diffusion model: Motion generation for robotic characters. In SIGGRAPH asia 2024 conference papers, pp. 1–9, 2024. Yiyang Shao, Xiaoyu Huang, Bike Zhang, Qiayuan Liao, Yuman Gao, Yufeng Chi, Zhongyu Li, Sophia Shao, and Koushil Sreenath. Langwbc: Language-directed humanoid whole-body control via end-to-end learning. arXiv preprint arXiv:2504.21738, 2025. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. BAAI RoboBrain Team, Mingyu Cao, Huajie Tan, Yuheng Ji, Minglan Lin, Zhiyu Li, Zhou Cao, Pengwei Wang, Enshen Zhou, Yi Han, et al. Robobrain 2.0 technical report. arXiv preprint arXiv:2507.02029, 2025. Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H Bermano. Human motion diffusion model. arXiv preprint arXiv:2209.14916, 2022. 12 Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In The Eleventh International Conference on Learning Represen- tations, 2023. Shashank Tripathi, Lea Müller, Chun-Hao P Huang, Omid Taheri, Michael J Black, and Dimitrios Tzionas. 3d human pose estimation via intuitive physics. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4713–4725, 2023. Takara Everest Truong, Michael Piseno, Zhaoming Xie, and Karen Liu. Pdp: Physics-based character animation via diffusion policy. In SIGGRAPH Asia 2024 Conference Papers, pp. 1–10, 2024. Lixing Xiao, Shunlin Lu, Huaijin Pi, Ke Fan, Liang Pan, Yueer Zhou, Ziyong Feng, Xiaowei Zhou, Sida Peng, and Jingbo Wang. Motionstreamer: Streaming motion generation via diffusion-based autoregressive model in causal latent space. arXiv preprint arXiv:2503.15451, 2025. Weiji Xie, Jinrui Han, Jiakun Zheng, Huanyu Li, Xinzhe Liu, Jiyuan Shi, Weinan Zhang, Chenjia Bai, and Xuelong Li. Kungfubot: Physics-based humanoid whole-body control for learning highly-dynamic skills. arXiv preprint arXiv:2506.12851, 2025. Junpeng Yue, Zepeng Wang, Yuxuan Wang, Weishuai Zeng, Jiangxing Wang, Xinrun Xu, Yu Zhang, Sipeng Zheng, Ziluo Ding, and Zongqing Lu. Rl from physical feedback: Aligning large motion models with humanoid control. arXiv preprint arXiv:2506.12769, 2025. Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14730–14740, 2023a. Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu. Remodiffuse: Retrieval-augmented motion diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 364–373, 2023b. Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei Bai, Qi Chu, Nenghai Yu, and Wanli Ouyang. Motiongpt: Finetuned llms are general-purpose motion generators. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 7368–7376, 2024. Chongyang Zhong, Lei Hu, Zihao Zhang, and Shihong Xia. Attt2m: Text-driven human motion gen- eration with multi-perspective attention mechanism. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 509–519, 2023. Enshen Zhou, Qi Su, Cheng Chi, Zhizheng Zhang, Zhongyuan Wang, Tiejun Huang, Lu Sheng, and He Wang. Code-as-monitor: Constraint-aware visual programming for reactive and proactive robotic failure detection. arXiv preprint arXiv:2412.04455, 2024. Enshen Zhou, Jingkun An, Cheng Chi, Yi Han, Shanyu Rong, Chi Zhang, Pengwei Wang, Zhongyuan Wang, Tiejun Huang, Lu Sheng, et al. Roborefer: Towards spatial referring with reasoning in vision-language models for robotics. arXiv preprint arXiv:2506.04308, 2025. 13 6 APPENDIX APPENDIX OVERVIEW This appendix provides additional details and results, organized as follows: • Section 6.1: Detailed information of implementation, including detailed proprioceptive states, privileged information and hyper-parameters of network structure. • Section 6.2: Elaboration on some details during training, including dataset details, motion filter and retargeting, simulator, domain randomization, regularization, reward functions, penalty and curriculm learning. • Section 6.3: Details about evaluation, including metrics about motion generation and motion tracking. • Section 6.4: Deployment details, including sim-to-sim and sim-to-real. • Section 6.5: Extra qualitative experiment results and visualizations, including in the simula- tion and in the real-world. • Section 6.6: Extra ablation experiment results and visualizations, including the effect of causal adaptive sampling (CAS), hyper-parameter λ, and the number of experts in MoE of teacher policy. 6.1 IMPLEMENTATION DETAILS In this section, we describe the state representation used for policy training, covering proprioceptive states, privileged information, and hyper-parameters of our network. The proprioceptive state components for both the teacher and student policies are summarized in Table 6. A key distinction is that the student policy utilizes a longer history of observations, compensating for its lack of access to privileged information by relying on extended temporal context. For privileged information, the teacher policy incorporates privileged information to achieve precise motion tracking. The full set of privileged state features is provided in Table 6. Previous methods receive motion target information as part of the observation in both teacher and student policy, which includes keypoint positions, desired joint (DoF) positions, and root velocity. But in our method, only the teacher policy receives these information, and the student policy only receives the latent from motion generator. A detailed target state composition is presented in Table 7. The action output corresponds to target joint positions for proportional-derivative (PD) control, with a dimensionality of 23 for the Unitree G1 humanoid platform. RoboGhost employs a teacher-student training framework. The teacher policy, trained via standard PPO (Schulman et al., 2017), utilizes privileged information, tracking targets, and proprioceptive states. In contrast, the student policy is trained using Dagger without access to privileged information, but instead relies on an extended history of observations. Furthermore, while the teacher policy uses explicit reference motion information, the student policy replaces this with a latent representation from a motion generator, resulting in a retargeting-free and latent-driven approach. For the teacher policy, inputs are concatenated and processed by a Mixture-of-Experts (MoE) network for policy learning. The student policy feeds its concatenated inputs as conditions to a diffusion model with an MLP backbone. Furthermore, we employ AdaLN to inject conditional information throughout the denoising process within the diffusion model. A final MLP layer is appended to the backbone network, which projects the output to a 23-dimensional action space while additionally incorporating the conditional signals. Detailed hyperparameters for both policies are provided in Table 8. 6.2 TRAINING DETAILS Dataset Details Our motion generator is pretrained on the MotionMillion dataset (Fan et al., 2025). Given the substantial scale of this dataset, we restrict pretraining to the humanml and kungfu subsets from the MotionUnion branch, comprising a total of 50,378 motion sequences. During policy training, we again draw data from the humanml and kungfu subsets. However, due to the presence of extensive duplicate and mirrored sequences, we perform a data cleaning process to remove redundancies and 14 Proprioceptive States State Component Dim. DoF position 23 DoF velocity 23 Last Action 23 Root Angular Velocity 3 Roll 1 Pitch 1 Yaw 1 Total dim 75 × 10 Privileged Information DoF Difference 23 Keybody Difference 36 Root velocity 3 Total dim 62 Table 6: Proprioceptive states and privileged informa- tion. Teacher Policy State Component Dim. DoF position 23 Keypoint position 36 Root Velocity 3 Root Angular Velocity 3 Roll 1 Pitch 1 Yaw 1 Height 1 Total dim 69 Student Policy Latent 64 Total dim 64 Table 7: Reference information in the teacher and student policies. Hyperparameter Value Optimizer AdamW β1, β2 0.9, 0.999 Learning Rate 1 × 10−4 Batch Size 4096 Teacher Policy Discount factor (γ) 0.99 Clip Parameter 0.2 Entropy Coefficient 0.005 Max Gradient Norm 1 Learning Epochs 5 Mini Batches 4 Value Loss Coefficient 1 Value MLP Size [512, 256, 128] Actor MLP Size [512, 256, 128] Experts 5 Student Policy MLP Layers 4 MLP Size [256, 256, 256] Table 8: Hyperparameters for teacher and student policy training. non-flat-ground motions. Using this filtered dataset, we first train an initial policy, denoted as π0. This policy is then used to further filter the dataset by excluding sequences with a tracking error exceeding 0.6. After this refinement, the humanml subset contains 3,261 motion sequences, and the kungfu subset contains 200. And due to the significant domain gap and difference in complexity between these two subsets, we opt to train two separate policies, each specialized on one subset. Motion Filter and Retargeting Following Xie et al. (2025) and Tripathi et al. (2023), we compute the ground-projected distance between the center of mass (CoM) and center of pressure (CoP) for each frame and apply a stability threshold. Let ¯pCoM t = (pCoM t,x , pCoM t,y ) and ¯pCoP t = (pCoP t,x , pCoP t,y ) denote the 2D ground projections of CoM and CoP at frame t, respectively. Let ∆dt = ∥¯pCoM t −¯pCoP t ∥2. We define the stability criterion of a frame as ∆dt < ϵstab. A motion sequence will not be filtered if its first and last frames are stable, and the longest consecutive unstable segment is shorter than 100. 15 Simulator Following established protocols in motion tracking policy research (Ji et al., 2024; He et al., 2025a; Ji et al., 2025; Team et al., 2025), we also adopt a three-stage evaluation pipeline: (1) large-scale reinforcement learning training in IsaacGym; (2) zero-shot transfer to MuJoCo to assess cross-simulator generalization; and (3) physical deployment on the Unitree G1 humanoid platform to validate the performance in real-world. Reference State Initialization Task initialization is a critical factor in reinforcement learning (RL) training. We observe that naïvely initializing episodes from the beginning of the reference motion frequently leads to policy failure, especially for difficult motions. This can cause the environment to overfit to easier frames and fail to learn the most challenging segments of the motion. To mitigate this issue, we employ the Reference State Initialization (RSI) framework (Peng et al., 2018). Concretely, we sample time-phase variables uniformly over [0, 1], randomizing the starting point within the reference motion that the policy must track. The robot’s state—including root position, orientation, linear and angular velocities, as well as joint positions and velocities—is then initialized to the values from the reference motion at the sampled phase. This approach substantially enhances motion tracking performance, particularly for highly-dynamic whole-body motions, by enabling the policy to learn various segments of the movement in parallel rather than being limited to a strictly sequential learning process. Domain Randomization and Regularization To improve the robustness and generalization of the pretrained policy, we utilize the domain randomization techniques and regularization items, which are listed in Table 9. Domain Randomization Term Value Friction U(0.5, 2.2) P Gain U(0.75, 1.25) × default Control delay U(20, 40) ms External Perturbation Push robot interval = 8 s, vxy = 0.5 m/s Regularization Term Expression Weight DoF position limits 1(dt /∈[qmin, qmax]) −10 DoF acceleration ∥¨dt∥2 2 −3 × 10−7 DoF error ∥dt −d0∥2 2 −0.1 Action rate ∥at −at−1∥2 2 −0.5 Feet air time Tair −0.5 10 Feet contact force ∥Ffeet∥2 2 −0.003 Stumble 1(F x feet > 5 × F z feet) −2 Waist roll pitch error ∥pwrp t −pwrp 0 ∥2 2 −0.5 Ankle Action ∥aankle t ∥2 2 −0.3 Table 9: Domain randomization and regularization parameters. Motion Tracking Rewards We define the reward function as the sum of penalty, regularization, and task rewards, which is meticulously designed to improve both the performance and motion realism of the humanoid robot. It consists of terms that incentivize accurate tracking of root velocity, direction, and orientation, as well as precise keypoints and joints position tracking. Inspired by ASAP (He et al., 2025a), we additionally incorporate regularization terms to enhance stability and sim-to-real transferability, including torques, action rate, feet orientation, feet heading, and slippage. In terms of penalty, we adopt penalty terms for DoF position limits, torque limits, and termination. The detailed reward functions are summarized in Table 10, more terms can be seen in the supplementary material. 16 Task Reward Root velocity 10.0 Root velocity direction 6.0 Root angular velocity 1.0 Keypoint position 10.0 Feet position 12.0 DoF position 6.0 DoF velocity 6.0 Penalty DoF position limits −10.0 Torque limits −5.0 Termination −200.0 Table 10: Reward terms for pretraining Curriculum Learning Training policies to track agile motions in simulation is challenging, as certain dynamic behaviors are too difficult for the policy to learn effectively from the outset. To mitigate this issue, we utilize a termination curriculum (He et al., 2025a) that progressively tightens the motion tracking error tolerance during training, thereby guiding the policy to gradually improve its tracking fidelity. Initially, we set a lenient termination threshold of 1.5m—episodes are terminated if the robot deviates from the reference motion by more than this margin. As training progresses, the threshold is annealed down gradually, incrementally increasing the precision required for successful tracking. This curriculum enables the policy to first acquire basic balancing skills before refining them under stricter tracking constraints, ultimately facilitating the successful execution of highly dynamic behaviors. 6.3 EVALUATION DETAILS Motion Generation Metrics Following (Guo et al., 2022c; Li et al., 2024d;b), we evaluate our approach using established text-to-motion metrics: retrieval accuracy (R@1, R@2, R@3), Multimodal Distance (MMDist), and Fréchet Inception Distance (FID). • Retrieval Accuracy (R-Precision): These metrics measure the relevance of generated motions to text descriptions in a retrieval setup. R@1 denotes the fraction of text queries for which the correct motion is retrieved as the top match, reflecting the model’s precision in identifying the most relevant motion. R@2 and R@3 extend this notion, indicating recall within the top two and three retrieved motions, respectively. • Multimodal Distance (MMDist): This quantifies the average feature-space distance between generated motions and their corresponding text embeddings, typically extracted via a pretrained retrieval model. Smaller MMDist values indicate stronger semantic alignment between text descriptions and motion outputs. • Fr échet Inception Distance (FID): FID assesses the quality and realism of generated motions by comparing their feature distribution to that of real motion data using a pretrained feature encoder (e.g., an Inception-style network). It computes the Fréchet distance between multivariate Gaussian distributions fitted to real and generated motion features. Lower FID scores reflect better distributional similarity and higher perceptual quality. • Diversity: Diversity quantifies the variance of motion sequences within the dataset. It is computed by randomly sampling Ndiver = 300 pairs of motions, denoted as (fi,1, fi,2) for each pair i. The metric is calculated as 1 Ndiver PNdiver i=1 ∥fi,1 −fi,2∥ Motion Tracking Metrics For motion tracking evaluation, we adopt metrics commonly used in prior work (Ji et al., 2024): Success Rate (Succ), Mean Per Joint Position Error (Empjpe), and Mean Per Keybody Position Error (Empkpe). • Success Rate (Succ): This metric evaluates whether the humanoid robot successfully follows the reference motion without falling. A trial is considered a failure if the average trajectory deviation exceeds 0.5 meters at any point, or if the root pitch angle passes a predefined threshold. 17 • Mean Per Joint Position Error (Empjpe in rad): Empjpe measures joint-level tracking accuracy by computing the average error in degrees of freedom (DoF) rotations between the reference and generated motion. • Mean Per Keybody Position Error (Empkpe in m): Empkpe assesses keypoint tracking performance based on the average positional discrepancy between reference and generated keypoint trajectories. 6.4 DEPLOYMENT DETAILS Sim-to-Sim As demonstrated in Humanoid-Gym (Gu et al., 2024), MuJoCo exhibits more real- istic dynamics compared to Isaac Gym. Following established practices in motion tracking policy research (Ji et al., 2024), we perform reinforcement learning training in Isaac Gym to leverage its computational efficiency. To assess policy robustness and generalization, we then conduct zero-shot transfer to the MuJoCo simulator. Finally, we deploy the policy on a physical humanoid robot to evaluate the effectiveness of our RoboGhost framework for real-world motion tracking. Sim-to-Real Our real-world experiments utilize a Unitree G1 humanoid robot, equipped with an onboard Jetson Orin NX module for computation and communication. The control policy takes motion-tracking targets as input, computes target joint positions, and issues commands to the robot’s low-level controller at 50Hz. Command transmission introduces a latency of 18–30ms. The low-level controller operates at 500Hz to ensure stable real-time actuation. Communication between the policy and the low-level interface is implemented using the Lightweight Communications and Marshalling (LCM) (Huang et al., 2010). 6.5 ADDITIONAL QUALITATIVE RESULTS To further validate the effectiveness of our method, we provide additional qualitative results in this section, including motion tracking visualizations in IsaacGym and MuJoCo (Fig 4) and real-world locomotion demonstrations on the physical robot (Fig 5). Moreover, we provide additional motion generation qualtative results in Fig 6. A supplementary video showcasing real-robot deployments is provided in the supplementary material. 6.6 ADDITIONAL ABLATION STUDIES To rigorously evaluate the effectiveness of causal adaptive sampling (CAS), we conduct various ablation studies examining both its performance gains and sensitivity to hyperparameters. As shown in Table 11, causal adaptive sampling increases the sampling probability of kinematically challenging frames within a motion sequence, thereby improving training efficiency and enhancing model generalization. Furthermore, as illustrated in Fig 7a and 7b, optimal performance is achieved when the adaptation parameters are set to λ = 0.8 and p = 0.005. Furthermore, we conduct an ablation study on the number of experts in the MoE architecture. As illustrated in Fig 8, the variation in the number of experts has a measurable yet limited impact on the policy’s performance, with the optimal result achieved when the number of experts is set to 5. 18 “The person kicks with left leg forward and back.” “Someone runs a zigzag course with arms out.” “A person is playing golf.” MuJoCo IsaacGym “A person is strolling without a clear destination.” “A person rotates their torso while shaking their legs in a circular motion.” “A person appears to be swinging their arm in a circular motion..” “Boxers throw an uppercut, block, and counter with a few fast right jabs.” “A person dances with flair, then leaps forward.” “A man kicks forward.” “A person runs from left to right.” “Someone is moving their body in a flexible, ever-changing motion.” “The person demonstrates a Tiger Sword routine, combining various stances, strikes, and blocks with precise footwork and sword techniques.” Figure 4: Qualitative results in the IsaacGym and MuJoCo. Method IsaacGym MuJoCo Succ ↑ Empjpe ↓ Empkpe ↓ Succ ↑ Empjpe ↓ Empkpe ↓ HumanML (MotionMillion) w/o CAS 0.92 0.18 0.14 0.68 0.32 0.28 Ours 0.97 0.12 0.09 0.74 0.24 0.20 Kungfu (MotionMillion) w/o CAS 0.62 0.42 0.41 0.48 0.67 0.63 Ours 0.72 0.34 0.31 0.57 0.54 0.50 Table 11: Ablation study on causal adaptive sampling. 19 Text Input: The person starts in an upright position, then performs two squats, moving their arms and hands to their chin. Text Input: While boxing, the man defends, dodges, then delivers a few right jabs. Text Input: The man is performing jumping jacks. Figure 5: Qualitative results on the Unitree G1. “person switches from doing jumping jacks to jumps with brining one of the legs forward .” “a person is sitting in a chair, wobbles side to side, stands up, and then start walking.”“the person is kicking with his left and right foot.” “a person walks forward one foot in front of the other with both arms stretched out. ” “a person walks backwards and stops. ” “a person jumps while moving their legs in and out while also moving their arms up and down ” “a person is walking in one place. ” “a man walks forward and then back in a boxing stance. ” “a person walks straight forward .” “a person walks forward, increases his pace, and then leaps in the air. ” “the person is throwing a baseball. ” “the person took a small jump forward. ” “a person walks backwards and stops. ” “a man walks forward and then back in a boxing stance. ” “a person walks forward, increases his pace, and then leaps in the air. ” “the person switches from doing jumping jacks to jumps with brining one of the legs forward .” Figure 6: Qualitative results of motion generator. 20 0.5 0.6 0.7 0.8 0.9 0.10 0.12 0.14 0.16 0.18 Error (m) Tracking Error in IsaacGym MPJPE MKKPE (a) Isaac Gym 0.003 0.004 0.005 0.006 0.007 0.20 0.22 0.24 0.26 0.28 Error (m) Tracking Error in MuJoCo MPJPE MKKPE (b) MuJoCo Figure 7: The impact of different λ values on tracking performance in IsaacGym and MuJoCo. 3 4 5 6 7 0.09 0.10 0.11 0.12 0.13 0.14 0.15 Error (m) Tracking Performance in IsaacGym MPJPE MKKPE (a) IsaacGym 3 4 5 6 7 0.20 0.21 0.22 0.23 0.24 0.25 0.26 Error (m) Tracking Performance in MuJoCo MPJPE MKKPE (b) MuJoCo Figure 8: The impact of different number of experts in MoE on tracking performance in IsaacGym and MuJoCo. 21
FROM LANGUAGE TO LOCOMOTION: RETARGETING-FREE HUMANOID CONTROL VIA MOTION LATENT GUIDANCE Zhe Li1∗, Cheng Chi2∗†, Yangyang Wei3, Boan Zhu4, Yibo Peng2, Tao Huang5 Pengwei Wang2, Zhongyuan Wang2, Shanghang Zhang2,6B, Chang Xu1B 1 2 BAAI, 3 Harbin 4 Hong Kong 5 Shanghai Jiao Tong University 6 Peking University Prompt Motion Generator Motion Latent Decoder Retarget MLP Policy Prompt Motion Generator Diffusion Policy Previous Ours Time Cost and Succ Rate (a) (b) (c) Text Input: The person is performing a backflip. Text Input: A person dances with flair then leaps forward. (d) (e) Figure 1: RoboGhost is a retargeting-free latent driven policy for language-guided humanoid locomotion. By removing the dependency on motion retargeting, it thus allows robots to be controlled directly via open-ended language commands. The figure showcases (a) the previous pipeline with motion retargeting, (b) our proposed retargeting-free latent-driven pipeline, (c) quantitative comparisons of success rate and time cost between baseline and RoboGhost, (d) performing the backflip, and (e) dancing and leaping forward. ABSTRACT Natural language offers a natural interface for humanoid robots, but existing language-guided humanoid locomotion pipelines remain cumbersome and unreliable. They typically decode human motion, retarget it to robot morphology, and then track it with a physics-based controller. However, this multi-stage process is prone to cumulative errors, introduces high latency, and yields weak coupling between semantics and control. These limitations call for a more direct pathway from language to action, one that eliminates fragile intermediate stages. Therefore, we present RoboGhost, a retargeting-free framework that directly conditions humanoid policies on language-grounded motion latents. By bypassing explicit motion decoding and retargeting, RoboGhost enables a diffusion-based policy to denoise executable actions directly from noise, preserving semantic intent and supporting fast, reactive control. A hybrid causal transformer-diffusion motion generator further ensures long-horizon consistency while maintaining stability and diversity, yielding rich latent representations for precise humanoid behavior. Extensive experiments demonstrate that RoboGhost substantially reduces deploy- ∗Equal Contribution † Project Leader B Corresponding Author 1 17 Oct 2025 ment latency, improves success rates and tracking accuracy, and produces smooth, semantically aligned locomotion on real humanoids. Beyond text, the framework naturally extends to other modalities such as images, audio, and music, providing a general foundation for vision-language-action humanoid systems. Project Page 1 INTRODUCTION Natural language provides an intuitive interface for humanoid robots, enabling users to translate freeform instructions into physically feasible humanoid motion. Recent text-to-motion (T2M) models can generate diverse and semantically meaningful human motions (Li et al., 2024d; Chen et al., 2023; Guo et al., 2024; Li et al., 2024b; Tevet et al., 2022). However, deploying these models on real robots typically requires a hierarchical pipeline: decoding human motion from language, retargeting it to robot morphology, and tracking the trajectory with a physics-based controller (Peng et al., 2018; Ji et al., 2024; Chen et al., 2025; He et al., 2025a). This pipeline, while functional in constrained settings, suffers from systemic drawbacks. (1) Errors accumulate across decoding, retargeting, and tracking, degrading both semantic fidelity and physical feasibility. (2) High latency from multiple sequential stages, making real-time interaction difficult. (3) Loose coupling between language and control, since each stage is optimized in isolation rather than end-to-end. Recent refinements (Yue et al., 2025; Serifi et al., 2024; Shao et al., 2025) attempt to mitigate these issues, but improvements are applied locally to decoders or controllers, leaving the overall pipeline fragile and inefficient. Our key insight is simple: treat motion latents as first-class conditioning signals, directly use them as conditions to generate humanoid actions without decoding and retargeting altogether. We therefore propose RoboGhost, a retargeting-free framework named to highlight the latent representations that are invisible like a ghost yet strongly drive humanoid behavior. As shown in Fig 1, rather than producing explicit human motion, RoboGhost leverages language-conditioned motion latents as semantic anchors to guide a diffusion-based humanoid policy. The policy denoises executable actions directly from noise, eliminating error-prone intermediate stages while preserving fine-grained intent and enabling fast, reactive control. To our knowledge, this is the first diffusion-based humanoid policy driven by motion latents, achieving smooth and natural locomotion with DDIM-accelerated sampling (Song et al., 2020) for real-time deployment. To enhance temporal coherence and motion diversity, RoboGhost employs a hybrid causal transformer-diffusion architecture. The transformer backbone captures long-horizon dependencies and ensures global consistency (Zhou et al., 2024; 2025; Han et al., 2025). The diffusion component contributes stability and stochasticity for fine-grained motion synthesis. Together, this design mitigates drift and information loss typical of autoregressive models, while producing expressive motion latents that provide downstream policies with rich semantic conditioning for precise control. Extensive experiments validate the effectiveness and practicality of RoboGhost. We dramatically accelerate the full pipeline from motion generation to humanoid deployment, cutting the time from 17.85 s to 5.84 s. Beyond sheer speed, our approach yields higher-quality control by circumventing retargeting losses and improving generalization, which is reflected in a 5% higher success rate and reduced tracking error compared to baseline methods. Our framework achieves robust, semantically aligned locomotion on real humanoids, substantially reducing latency compared to retargeting-based pipelines. We can further extend this framework to support other input modalities such as images, audio, and music, thereby offering a reference architecture for humanoid vision-language-action systems. In short, RoboGhost moves text-driven humanoid control from fragile pose imitation to robust, real-time interaction. In summary, our key contributions can be summarized as follows: • We propose RoboGhost, a retargeting-free framework that directly leverages languagegenerated motion latents for end-to-end humanoid policy learning, eliminating error-prone decoding and retargeting stages. • We introduce the first diffusion-based humanoid policy conditioned on motion latents, which denoises executable actions directly from noise and achieves smooth, diverse, and physically plausible locomotion with DDIM-accelerated sampling. 2 • We design a hybrid transformer-diffusion architecture that unifies long-horizon temporal coherence with stochastic stability, yielding expressive motion latents and strong language-motion alignment. • We validate RoboGhost through extensive experiments, demonstrating its effectiveness and generality in enabling robust and real-time language-guided humanoid locomotion. 2 RELATED WORK 2.1 HUMAN MOTION SYNTHESIS Language-guided humanoid locomotion leverages advances in text-to-motion generation, which primarily uses transformer-based discrete modeling or diffusion-based continuous modeling. Discrete methods model motion as tokens, evolving from early vector quantization (Guo et al., 2022c) to GPTstyle autoregression (Zhang et al., 2023a), scaling with LLMs (Jiang et al., 2023; Zhang et al., 2024) or improved attention (Zhong et al., 2023), and recently, bidirectional masking (Guo et al., 2023) and enhanced text alignment (Li et al., 2024d). In parallel, diffusion models excel in synthesis (Kim et al., 2023; Hu et al., 2023; Tevet et al., 2022; Li et al., 2024c; Bai et al., 2025; Li et al., 2023b; ?), with progress in efficiency (Chen et al., 2023), retrieval-augmentation (Zhang et al., 2023b), controllability (Li et al., 2024b), and architectures (Meng et al., 2024; Li et al., 2025). Our work builds on the continuous autoregressive framework (Li et al., 2024a), combining the benefits of both paradigms. 2.2 HUMANOID WHOLE-BODY CONTROL Whole-body control (WBC) is crucial for humanoids, yet learning a general-purpose policy remains challenging. Existing approaches exhibit inherent trade-offs: methods like OmniH2O (He et al., 2024) and HumanPlus (Fu et al., 2024) prioritize specific robustness at the cost of generality or long-term accuracy, while others like Hover (He et al., 2025b), ExBody2 (Ji et al., 2024), and GMT (Chen et al., 2025) employs strategies (e.g., masking, curriculum learning, MoE) to enhance adaptability, though generalization is not guaranteed. Recent language-guided works also face limitations. LangWBC (Shao et al., 2025) scales poorly and offers no guarantee of generalization to unseen instructions, and RLPF (Yue et al., 2025) risks catastrophic forgetting and limited output diversity. To overcome these issues, we propose a MoE-based oracle paired with a latent-driven diffusion student, enhancing generalization while reducing deployment cost. 3 METHOD This section presents the core components of our framework, which is depicted in Fig 2. We begin with an overview of our method in Section 3.1, providing a high-level description of the architecture and motivation. Section 3.2 details the construction of our motion generator, which leverages continuous autoregression and a causal autoencoder. Furthermore, Section 3.3 elaborates on our novel retargeting-free, latent-driven reinforcement learning architecture based on diffusion models, including its training and inference procedures. Finally, we present our causal adaptive sampling strategy in Section 3.4. Other details can be found in appendix 6.2. 3.1 OVERVIEW Our work introduces a novel retargeting-free, latent-driven reinforcement learning architecture for language-guided humanoid control, which fundamentally diverges from conventional motion-tracking pipelines. As depicted in Fig. 2, our approach comprises three core components: a continuous autoregressive motion generator, a MoE-based teacher policy, and a latent-driven diffusion-based student policy. We focus on the challenge of generating diverse, physically plausible motions from high-level instructions without the need for complex kinematic retargeting. The process begins by feeding a textual prompt T into the continuous autoregressive motion generator. Unlike prior works that decode the generated motion into an explicit kinematic sequence requiring tedious retargeting to the robot, our generator produces a compact latent motion representation lref. 3 a a "A person walks in a circle." Task Instruction Motion Encoder AdaLN Causal Transformer Text Encoder Diffusion Model Stage1-Motion Generation Stage2-Policy Training Retargeted Dataset Proprioception Privileged Information Motion Generator History observations Diffusion Add Noise Teacher Policy Student Policy Inference Text Encoder Causal Transformer Diffusion Model motion latent Gaussian Noise PD Diffusion MoE Task Instruction Latent Encoder Add Noise Trainable Frozen "A person is doing jump jacks." Motion Latent Figure 2: Overview of RoboGhost. We propose a two-stage approach: a motion latent is first generated, then a MoE-based teacher policy is trained with RL and a diffusion-based student policy is trained to denoise actions conditioned on motion latent. This latent-driven scheme bypasses the need for motion retargeting. This design is pivotal, as it bypasses the error-prone retargeting step and mitigates performance degradation caused by limited generator capability. The generation process can be formulated as: lref = G(T), where G denotes our motion generator. This latent representation lref, alongside proprioceptive states and historical observations, then conditions a diffusion-based student policy πs. The policy operates a denoising process to output actions directly executable on the physical humanoid. This latent-driven paradigm eliminates the dependency on privileged information and explicit reference motions during deployment, significantly streamlining the sim-to-real pipeline. To efficiently train the high-level oracle teacher policy, we introduce a causal adaptive sampling strategy. It dynamically prioritizes challenging motion segments by attributing failures to their causal antecedents, thereby maximizing sample efficiency and enabling the learning of long-horizon, agile motor skills. Finally, a meticulously designed reward function ensures accurate and expressive whole-body motion tracking. Collectively, our framework achieves robust, direct-drive control from language commands, setting a new paradigm for retargeting-free humanoid locomotion. 3.2 CONTINUOUS AUTOREGRESSIVE MOTION GENERATOR Given that discretized models are prone to information loss and to leverage the advantages of both masked modeling and autoregressive approaches, we adopt a causal autoencoder and continuous masked autoregressive architecture with causal attention mask, which effectively captures temporal dependencies between tokens and produces rich contextual conditions for the subsequent diffusion process. Specifically, we first randomly mask motion tokens following the practice in language models (Devlin et al., 2019), obtaining a set of masked tokens. The temporal masking strategy follows the same mask ratio scheduling as (Chang et al., 2022), defined by the function: γ(τ) = cos πτ 2 , (1) where τ ∈[0, 1]. During training, τ is uniformly sampled from U(0, 1), yielding a mask ratio γ(τ). Accordingly, γ(τ) × N tokens are randomly selected for masking. Unlike previous masked autoregressive methods (Li et al., 2024a; Meng et al., 2024; Li et al., 2023a), our approach does not involve random token shuffling or batch-token prediction. Moreover, to mitigate the limitation in model expressiveness caused by low-rank matrix approximations during training, we replace bidirectional attention masks with causal attention masking. And for the input 4 text prompt T, we use the LaMP (Li et al., 2024d) text transformer to extract textual features. This model captures linguistic nuances and semantic structures effectively, resulting in high-dimensional feature representations that provide essential guidance for motion generation. After the transformer completes the masked token prediction task, the predicted latent representations are used to condition the diffusion model, guiding the denoising process to produce more accurate and semantically rich latent representations. This enables downstream latent-driven action generation. 3.3 LATENT-DRIVEN DIFFUSION POLICY FOR RETARGETING-FREE DEPLOYMENT 3.3.1 MOE-BASED TEACHER POLICY The core challenge in text-to-locomotion is the inherent open-endedness of language. Therefore, generalization capability is the key enabler for policies to respond successfully to novel prompts and achieve true deployment flexibility. First, we train an oracle teacher policy using PPO (Schulman et al., 2017) with privileged simulator-state information. To learn a policy πt that generalizes across diverse motion inputs, we first train an initial policy π0 on a curated motion dataset D0 of high diversity. We then evaluate the tracking accuracy of π0 for each motion sequence s ∈D0, focusing on the lower body through an error metric e(s) = α · Ekey(s) + β · Edof(s), where Ekey(s) denotes the mean key-body position error and Edof(s) the mean joint angle tracking error of the lower body. Motion sequences with e(s) > 0.6 are filtered out, and the remaining data D are used to train a general teacher policy. The teacher policy πt is trained as an oracle in simulation using PPO, leveraging privileged information unavailable in the real world-including ground-truth root velocity, global joint positions, physical properties (e.g., friction coefficients, motor strength), proprioceptive state, and reference motion. The policy outputs target joint positions ˆat ∈R23 for proportional derivative (PD) control, maximizing cumulative rewards to achieve accurate motion tracking and robust behavior, resulting in a high-performance expert policy trained solely on simulated data. Finally, we seek: πt = arg max π Es∈D [Performance(π, s)] (2) To enhance the expressive power and generalization capability of the model, we incorporate a Mixture of Experts (MoE) module into the training of the teacher policy. The policy network consists of a set of expert networks and a gating network. The expert networks take as input both the robot state observations and reference motion, and output the final action at. The gating network receives the same input observations and produces a probability distribution over all experts. The final action is computed as a weighted combination of actions sampled from each expert's output distribution: a = Pn i=1 pi · ai, where pi denotes the probability assigned to the i-th expert by the gating network, and ai represents the output of the i-th expert policy. This architecture enhances the policy's generalization capacity and obtains actions that accurately track reference motions, thereby providing more precise supervised signals for the student policy. 3.3.2 DIFFUSION-BASED STUDENT POLICY Unlike prior approaches where the student policy πs distills knowledge from the teacher using reference motion, we propose a novel latent-driven student policy that takes motion latent representations as input. In addition to the observation history, it incorporates a latent representation lref from the motion generator as input. This implicit design enables the policy to operate during inference without retargeted explicit reference motion, thereby streamlining deployment, reducing performance degradation caused by limitations of motion generator capability and retargeting. Following a DAgger-like (Ross et al., 2011) approach, we roll out the student policy in simulation and query the teacher for optimal actions ˆat at visited states. During training, we progressively inject Gaussian noise εt into the teacher's actions and use the first-stage pretrained motion generator to obtain a latent representation with textual descriptions. The forward process can be modeled as a Markov nosing process using: q(xt|xt-1) = N(xt; √ 1 -αt · xt-1, αtI) (3) where the constant αt ∈(0, 1) is a hyper-parameter for sampling. Here, we denote the denoiser as εθ, use {xt}T t=0 to denote the noising sequence, and xt-1 = εθ(xt, t) for the t-step denoising. 5 While the motion generator can produce motions lacking physical realism, our method does not blindly follow its output. Instead, a trainable latent encoder intelligently conditions the policy, translating raw proposals into actionable and stable commands. This enables the policy to generate diverse and robust actions even from imperfect guidance. This representation, along with proprioceptive states and historical observations, serves as conditioning to guide the denoising process. For tractability, we adopt an x0-prediction strategy and supervise the student policy by minimizing the mean squared error loss L = ∥a -ˆat∥2 2, where a = xt-√1- ̄αt·εθ(xt,t) √ ̄αt . The process iterates until convergence, yielding a policy that requires neither privileged knowledge nor explicit reference motion, and is suitable for real-world deployment. 3.3.3 INFERENCE PIPELINE To ensure motion fluency and smoothness, it is essential to minimize the time required for the denoising process. We therefore adopt DDIM sampling (Song et al., 2020) and an MLP-based diffusion model to generate actions for deployment. The reverse process can be formulated as: xt-1 = √αt-1 xt -√1 -αt · εθ(xt, t) √αt + p 1 -αt-1 · εθ(xt, t) (4) Our framework is entirely retargeting-free and latent-driven. During inference, the textual description is first input into a motion generator and obtains a latent motion representation lref. Crucially, we bypass the decoding of this latent into explicit motion sequences, thus eliminating the need for retargeting to the robot. We sample a random noise as input to the student policy and condition the diffusion model via AdaLN (Huang & Belongie, 2017) on the motion latent, proprioceptive states po and historical observations ot-H:t, producing a clean action a = εθ(ε|lref, po, ot-H:t), that is directly executable on the physical robot. This streamlined pipeline not only reduces complexity but also mitigates issues such as low-quality motion generation due to limited generator capability, retargeting-induced errors, and insufficient action diversity. 3.4 CAUSAL ADAPTIVE SAMPLING RoboGhost aims at tracking a reference motion more expressively in the whole body. To this end, we employ a novel sampling method to facilitate the teacher policy in mastering more challenging and longer motion sequences. Details about various reward functions, curriculum learning, and domain randomizations can be found in appendix 6.2. Training long-horizon motor skills faces a fundamental challenge: motion segments exhibit heterogeneous difficulty levels. Conventional approaches typically employ uniform sampling across trajectories (He et al., 2025a; Truong et al., 2024), which leads to oversampling of trivial segments and under-sampling of challenging ones, resulting in high reward variance and suboptimal sample efficiency. To address this, we propose a causality-aware adaptive sampling mechanism. The motion sequence is divided into K equal-length intervals, and the sampling probability for each interval is dynamically adjusted based on empirical failure statistics. Let kt denote the interval in which a rollout terminates due to failure. We hypothesize that the root cause often arises s steps prior to kt-such as a misstep or collision-that propagates into eventual failure at kt. To enable corrective learning, we increase the sampling likelihood of these antecedent intervals. Motivated by the temporal structure of failures-typically preceded by suboptimal actions-we apply an exponential decay kernel α(u) = γu (γ ∈(0, 1)) to assign higher weights to time steps leading up to termination. The base sampling probability for interval s is defined as: ∆pi = α(t -i) · p, i ∈[t -s, t], ∆pi = 0, i /∈[t -s, t] (5) where K controls the backward horizon for causal attribution. Subsequently, we update and normalize the sampling probabilities across all intervals. Let pi denote the base probability for interval i. The updated probability is given by: p′ i ←pi + ∆pi, where ∆pi denotes the increment derived from failure attribution. The distribution is then renormalized to ensure P i p′ 1 = 1. Finally, the initial interval is sampled from the multinomial distribution Multinomial(p′ 1, . . . , p′ K), enabling selective 6 focus on high-difficulty segments. Since each interval consists of multiple frames, after sampling the restart interval, we further select the exact restart frame uniformly within that interval. 4 EXPERIMENT We present a rigorous empirical evaluation of our proposed method in this section, structured to systematically address three core research questions central to its design and efficacy. We organize our analysis around the following three research questions: • Q1: Advantages of Latent-driven Pipeline. To what extent does a retargeting-free, latentdriven approach improve efficiency, robustness, and generalization in motion generation, and how do these advantages manifest in real-world deployment scenarios? • Q2: Generalization Gains via Diffusion Policy. Does replacing MLP-based policies with diffusion-based policies enhance generalization to unseen instructions or environments? • Q3: Better Motion Generator. Does the continuous autoregressive architecture yield more semantically and kinematically precise latent embeddings than standard alternatives? Further, how do architectural variants of the diffusion model (e.g., MLP and DiT (Peebles & Xie, 2023)) affect motion generation quality? 4.1 EXPERIMENTS SETTINGS Datasets and Metrics We evaluate on two MotionMillion (Fan et al., 2025) subsets: HumanML and Kungfu. For policy training, we use only index-0 sequences per motion name (e.g., "00000_0.npy"), excluding mirrored variants to reduce redundancy. Motions with non-planar terrains or infeasible contacts are filtered to ensure flat-ground consistency. The curated set is split 8:2 (train:test) with stratified category balance. For motion generator training, we leverage the full subsets. Evaluation metrics include motion generation and motion tracking. Generation follows Guo et al. (2022b) with retrieval precision R@1, 2, 3, MM Dist, FID, and Diversity. Tracking is assessed in physics simulators, consistent with OmniH2O (He et al., 2024) and PHC (Luo et al., 2023), using success rate (primary), mean per-joint error (Empjpe), and mean per-keypoint error (Empkpe). Success rate reflects both accuracy and stability. Further details are in appendix 6.3. 4.2 EVALUATION OF MOTION GENERATION To validate the effectiveness of our continuous autoregressive motion generation model, we train and evaluate it on two distinct skeleton representations: the 263-dim HumanML3D (Guo et al., 2022a) and the 272-dim Humanml and Kungfu subsets of MotionMillion. We benchmark against text-to-motion methods, including diffusion- and transformer-based approaches. Motivated by SiT (Ma et al., 2024) and MARDM (Meng et al., 2024), we reformulate the training objective of the motion generator from noise prediction to velocity prediction, leading to improved motion quality and enhanced dynamic consistency in generated sequences. As summarized in Table 1, our model achieves competitive performance across both formats, demonstrating robustness to representation variation. 4.3 EVALUATION OF MOTION TRACKING POLICY To further validate the efficacy of our motion tracking policy, we conduct evaluations on the HumanML and Kungfu subsets of MotionMillion, measuring Empjpe and Empkpe under physics-based simulation in both IsaacGym and MuJoCo. The pipeline operates as follows: textual descriptions are first input into the motion generator to produce a latent motion representation, which is subsequently input by the student policy for action execution. As summarized in Table 2, our method achieves high success rates on HumanML, alongside low joint and keypoint errors, indicating robust alignment between generated motion semantics and physically executable trajectories. Here, Baseline refers to the conventional, explicit framework where both the teacher and student policies use MLP backbones. 7 Methods R Precision↑ FID↓ MM-Dist↓ Diversity→ Top 1 Top 2 Top 3 HumanML3D Ground Truth 0.702 0.864 0.914 0.002 15.151 27.492 MDM (Tevet et al., 2023) 0.523 0.692 0.764 23.454 17.423 26.325 MLD (Chen et al., 2023) 0.546 0.730 0.792 18.236 16.638 26.352 T2M-GPT (Zhang et al., 2023a) 0.606 0.774 0.838 12.475 16.812 27.275 MotionGPT (Jiang et al., 2023) 0.456 0.598 0.628 14.375 14.892 27.114 MoMask (Guo et al., 2023) 0.621 0.784 0.846 12.232 16.138 27.127 AttT2M (Zhong et al., 2023) 0.592 0.765 0.834 15.428 15.726 26.674 MotionStreamer (Xiao et al., 2025) 0.631 0.802 0.859 11.790 16.081 27.284 Ours-DDPM 0.639 0.808 0.867 11.706 15.978 27.230 Ours-SiT 0.641 0.812 0.870 11.743 15.972 27.307 HumanML (MotionMillion) Ground Truth 0.714 0.876 0.920 0.002 14.984 26.571 Ours-DDPM 0.644 0.819 0.873 11.724 15.870 26.395 Ours-SiT 0.646 0.818 0.872 11.716 15.603 26.471 Table 1: Quantitative results of text-to-motion generation on the HumanML3D dataset and HumanML subset. →denotes that if the value is closer to the ground truth, the metric is better. Method IsaacGym MuJoCo Succ ↑ Empjpe ↓ Empkpe ↓ Succ ↑ Empjpe ↓ Empkpe ↓ HumanML (MotionMillion) Baseline 0.92 0.23 0.19 0.64 0.34 0.31 Ours-DDPM 0.97 0.12 0.09 0.74 0.24 0.20 Ours-SiT 0.98 0.14 0.08 0.72 0.26 0.23 Kungfu (MotionMillion) Baseline 0.66 0.43 0.37 0.51 0.58 0.52 Ours-DDPM 0.72 0.34 0.31 0.57 0.54 0.50 Ours-SiT 0.71 0.36 0.32 0.55 0.53 0.48 Table 2: Motion tracking performance comparison in simulation on the HumanML and Kungfu test sets. 4.4 QUALITATIVE EVALUATION We present a qualitative evaluation of the motion tracking policy across three deployment stages: simulation (IsaacGym), cross-simulator transfer (MuJoCo), and real-world execution on the Unitree G1 humanoid. Fig 3 visualizes representative tracking sequences, highlighting the policy's ability to preserve motion semantics, maintain balance under dynamic transitions, and generalize across physics engines and hardware. In particular, real-robot deployments demonstrate that our latentdriven, retargeting-free framework enables smooth, temporally coherent motion execution without manual tuning, validating its readiness for practical embodied applications. 4.5 ABLATION STUDIES To systematically answer the three research questions posed at the outset of this section, we conduct various ablation studies. To answer Q1 (Advantages of Retargeting-free Pipeline), we evaluate the conventional pipeline: text prompts are fed to the motion generator, the output latent is decoded into explicit motion, which is then retargeted to the G1 robot and executed by the policy. As shown in Table 3, this approach underperforms due to its multi-stage complexity and error accumulation, and incurs higher real-world 8 "The person is performing a Wushu Kungfu sequence that may culminate in a backflip." "He places his hands together on the right side, execute a series of kicks with their left leg." "A man leaps 3 times in rapid succession." MuJoCo IsaacGym "The man is performing jumping jacks." "A person leaps and spins." "A person executes a Kung Fu Flying Kick." Figure 3: Qualitative results in the IsaacGym and MuJoCo. Method IsaacGym MuJoCo Time Cost (s) Succ ↑ Empjpe ↓ Empkpe ↓ Succ ↑ Empjpe ↓ Empkpe ↓ Ours-Explicit 0.93 0.21 0.17 0.66 0.32 0.27 17.85 Ours-Implicit 0.97 0.12 0.09 0.74 0.24 0.20 5.84 Table 3: Motion tracking performance comparison across different simulators on the HumanML and Kungfu test sets. Explicit version including PHC retargeting (1000 interations) and latent decode processes. deployment time cost. This confirms the efficiency and performance advantage of our retargeting-free, latent-driven framework. To answer Q2 (Generalization Gains via Diffusion Policy), we conduct an ablation by replacing the diffusion policy with an MLP-based policy that concatenates latent and other observations to predict actions. Diffusion policies, by design, better capture various action distributions, enabling more robust adaptation to diverse or imperfect latents. As shown in Table 4, the diffusion policy significantly outperforms its MLP counterpart. To further evaluate generalization, we test both policies on 10 randomly sampled motions from unseen MotionMillion subsets (fitness, perform, 100style, haa). Although the motion generator was not trained on these subsets - resulting in suboptimal latents - the diffusion-based policy still achieves substantially better tracking and robustness than MLP, as evidenced in Table 4. This highlights the humanoid diffusion policy's superiority. Method IsaacGym Succ ↑ Empjpe ↓ Empkpe ↓ MLP Policy 0.96 0.17 0.11 Diffusion Policy 0.97 0.12 0.09 Method IsaacGym Succ ↑ Empjpe ↓ Empkpe ↓ MLP Policy 0.54 0.48 0.45 Diffusion Policy 0.68 0.42 0.39 Table 4: Comparison of MLP-based and diffusion-based policy. The left table shows the tracking performance on HumanML subset, and the right table presents the generalization ability of two different policy architectures. To address Q3 (Better Motion Generator), we conduct an ablation study on the diffusion backbone in our framework, comparing a 16-layer MLP and a 4-layer DiT under identical settings. As shown in Table 5, DiT offers slight gains in generation metrics but no measurable improvement in tracking success or joint accuracy, while incurring higher latency due to larger model size. Balancing effectiveness and efficiency, we therefore adopt the 16-layer MLP as our default backbone. 9 Method IsaacGym HumanML3D Time Cost (s) ↓ Succ ↑ Empjpe ↓ Empkpe ↓ Top 3 ↑ FID ↓ DiT 0.96 0.11 0.11 0.870 11.697 14.28 MLP 0.97 0.12 0.09 0.867 11.706 5.84 Table 5: Comparison of tracking performance across different diffusion backbones for motion generation. 5 CONCLUSION In this paper, we introduce RoboGhost, a retargeting-free, latent-driven framework that bridges natural language instructions with physically feasible humanoid motion control. Our method harnesses rich semantic representations from a pretrained text-to-motion model and integrates them into a diffusion-based humanoid policy, thereby bypassing motion decoding and retargeting stages that are not only error-prone but also time-consuming. This design enables real-time, language-guided humanoid locomotion with improved adaptability. Extensive experiments show that RoboGhost reduces cumulative distortions and execution latency while preserving semantic alignment and physical realism. Beyond advancing efficiency and robustness, RoboGhost offers a new and practical pathway toward more intuitive, responsive, and deployable humanoid robots in real-world environments. REFERENCES Shuanghao Bai, Wenxuan Song, Jiayi Chen, Yuheng Ji, Zhide Zhong, Jin Yang, Han Zhao, Wanqi Zhou, Wei Zhao, Zhe Li, Pengxiang Ding, Cheng Chi, Haoang Li, Chang Xu, Xiaolong Zheng, Donglin Wang, Shanghang Zhang, and Badong Chen. Towards a unified understanding of robot manipulation: A comprehensive survey. arXiv preprint , 2025. Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, and William T Freeman. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 11315-11325, 2022. Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, and Gang Yu. Executing your commands via motion diffusion in latent space. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 18000-18010, 2023. Zixuan Chen, Mazeyu Ji, Xuxin Cheng, Xuanbin Peng, Xue Bin Peng, and Xiaolong Wang. Gmt: General motion tracking for humanoid whole-body control. arXiv preprint , 2025. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 4171-4186, 2019. Ke Fan, Shunlin Lu, Minyue Dai, Runyi Yu, Lixing Xiao, Zhiyang Dou, Junting Dong, Lizhuang Ma, and Jingbo Wang. Go to zero: Towards zero-shot motion generation with million-scale data. arXiv preprint , 2025. Zipeng Fu, Qingqing Zhao, Qi Wu, Gordon Wetzstein, and Chelsea Finn. Humanplus: Humanoid shadowing and imitation from humans. arXiv preprint , 2024. Xinyang Gu, Yen-Jen Wang, and Jianyu Chen. Humanoid-gym: Reinforcement learning for humanoid robot with zero-shot sim2real transfer. arXiv preprint , 2024. Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5152-5161, June 2022a. Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5152-5161, 2022b. 10 Chuan Guo, Xinxin Zuo, Sen Wang, and Li Cheng. Tm2t: Stochastic and tokenized modeling for the reciprocal generation of 3d human motions and texts. In European Conference on Computer Vision, pp. 580-597. Springer, 2022c. Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. 2023. Chuan Guo, Yuxuan Mu, Muhammad Gohar Javed, Sen Wang, and Li Cheng. Momask: Generative masked modeling of 3d human motions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1900-1910, 2024. Yi Han, Cheng Chi, Enshen Zhou, Shanyu Rong, Jingkun An, Pengwei Wang, Zhongyuan Wang, Lu Sheng, and Shanghang Zhang. Tiger: Tool-integrated geometric reasoning in vision-language models for robotics. arXiv preprint , 2025. Tairan He, Zhengyi Luo, Xialin He, Wenli Xiao, Chong Zhang, Weinan Zhang, Kris Kitani, Changliu Liu, and Guanya Shi. Omnih2o: Universal and dexterous human-to-humanoid whole-body teleoperation and learning. arXiv preprint , 2024. Tairan He, Jiawei Gao, Wenli Xiao, Yuanhang Zhang, Zi Wang, Jiashun Wang, Zhengyi Luo, Guanqi He, Nikhil Sobanbab, Chaoyi Pan, et al. Asap: Aligning simulation and real-world physics for learning agile humanoid whole-body skills. arXiv preprint , 2025a. Tairan He, Wenli Xiao, Toru Lin, Zhengyi Luo, Zhenjia Xu, Zhenyu Jiang, Jan Kautz, Changliu Liu, Guanya Shi, Xiaolong Wang, et al. Hover: Versatile neural whole-body controller for humanoid robots. In 2025 IEEE International Conference on Robotics and Automation (ICRA), pp. 9989-9996. IEEE, 2025b. Vincent Tao Hu, Wenzhe Yin, Pingchuan Ma, Yunlu Chen, Basura Fernando, Yuki M Asano, Efstratios Gavves, Pascal Mettes, Bjorn Ommer, and Cees GM Snoek. Motion flow matching for human motion synthesis and editing. arXiv preprint , 2023. Albert S Huang, Edwin Olson, and David C Moore. Lcm: Lightweight communications and marshalling. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4057-4062. IEEE, 2010. Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE international conference on computer vision, pp. 1501-1510, 2017. Mazeyu Ji, Xuanbin Peng, Fangchen Liu, Jialong Li, Ge Yang, Xuxin Cheng, and Xiaolong Wang. Exbody2: Advanced expressive humanoid whole-body control. arXiv preprint , 2024. Yuheng Ji, Huajie Tan, Jiayu Shi, Xiaoshuai Hao, Yuan Zhang, Hengyuan Zhang, Pengwei Wang, Mengdi Zhao, Yao Mu, Pengju An, et al. Robobrain: A unified brain model for robotic manipulation from abstract to concrete. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 1724-1734, 2025. Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. Motiongpt: Human motion as a foreign language. Advances in Neural Information Processing Systems, 36:20067-20079, 2023. Jihoon Kim, Jiseob Kim, and Sungjoon Choi. Flame: Free-form language-based motion synthesis & editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 8255-8263, 2023. Tianhong Li, Yonglong Tian, He Li, Mingyang Deng, and Kaiming He. Autoregressive image generation without vector quantization. Advances in Neural Information Processing Systems, 37: 56424-56445, 2024a. Zhe Li, Zhangyang Gao, Cheng Tan, Stan Z Li, and Laurence T Yang. General point model with autoencoding and autoregressive. arXiv preprint , 2023a. 11 Zhe Li, Laurence T Yang, Xin Nie, BoCheng Ren, and Xianjun Deng. Enhancing sentence representation with visually-supervised multimodal pre-training. In Proceedings of the 31st ACM International Conference on Multimedia, pp. 5686-5695, 2023b. Zhe Li, Yisheng He, Lei Zhong, Weichao Shen, Qi Zuo, Lingteng Qiu, Zilong Dong, Laurence Tianruo Yang, and Weihao Yuan. Mulsmo: Multimodal stylized motion generation by bidirectional control flow. arXiv preprint , 2024b. Zhe Li, Laurence T Yang, Bocheng Ren, Xin Nie, Zhangyang Gao, Cheng Tan, and Stan Z Li. Mlip: Enhancing medical visual representation with divergence encoder and knowledge-guided contrastive learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11704-11714, 2024c. Zhe Li, Weihao Yuan, Yisheng He, Lingteng Qiu, Shenhao Zhu, Xiaodong Gu, Weichao Shen, Yuan Dong, Zilong Dong, and Laurence T Yang. Lamp: Language-motion pretraining for motion generation, retrieval, and captioning. arXiv preprint , 2024d. Zhe Li, Weihao Yuan, Weichao Shen, Siyu Zhu, Zilong Dong, and Chang Xu. Omnimotion: Multimodal motion generation with continuous masked autoregression, 2025. URL https: //arxiv.org/abs/2510.14954. Zhengyi Luo, Jinkun Cao, Kris Kitani, Weipeng Xu, et al. Perpetual humanoid control for real-time simulated avatars. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10895-10904, 2023. Nanye Ma, Mark Goldstein, Michael S Albergo, Nicholas M Boffi, Eric Vanden-Eijnden, and Saining Xie. Sit: Exploring flow and diffusion-based generative models with scalable interpolant transformers. In European Conference on Computer Vision, pp. 23-40. Springer, 2024. Zichong Meng, Yiming Xie, Xiaogang Peng, Zeyu Han, and Huaizu Jiang. Rethinking diffusion for text-driven human motion generation. arXiv preprint , 2024. William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 4195-4205, 2023. Xue Bin Peng, Pieter Abbeel, Sergey Levine, and Michiel Van de Panne. Deepmimic: Exampleguided deep reinforcement learning of physics-based character skills. ACM Transactions On Graphics (TOG), 37(4):1-14, 2018. Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pp. 627-635. JMLR Workshop and Conference Proceedings, 2011. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint , 2017. Agon Serifi, Ruben Grandia, Espen Knoop, Markus Gross, and Moritz Bächer. Robot motion diffusion model: Motion generation for robotic characters. In SIGGRAPH asia 2024 conference papers, pp. 1-9, 2024. Yiyang Shao, Xiaoyu Huang, Bike Zhang, Qiayuan Liao, Yuman Gao, Yufeng Chi, Zhongyu Li, Sophia Shao, and Koushil Sreenath. Langwbc: Language-directed humanoid whole-body control via end-to-end learning. arXiv preprint , 2025. Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint , 2020. BAAI RoboBrain Team, Mingyu Cao, Huajie Tan, Yuheng Ji, Minglan Lin, Zhiyu Li, Zhou Cao, Pengwei Wang, Enshen Zhou, Yi Han, et al. Robobrain 2.0 technical report. arXiv preprint , 2025. Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H Bermano. Human motion diffusion model. arXiv preprint , 2022. 12 Guy Tevet, Sigal Raab, Brian Gordon, Yoni Shafir, Daniel Cohen-or, and Amit Haim Bermano. Human motion diffusion model. In The Eleventh International Conference on Learning Representations, 2023. Shashank Tripathi, Lea Müller, Chun-Hao P Huang, Omid Taheri, Michael J Black, and Dimitrios Tzionas. 3d human pose estimation via intuitive physics. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4713-4725, 2023. Takara Everest Truong, Michael Piseno, Zhaoming Xie, and Karen Liu. Pdp: Physics-based character animation via diffusion policy. In SIGGRAPH Asia 2024 Conference Papers, pp. 1-10, 2024. Lixing Xiao, Shunlin Lu, Huaijin Pi, Ke Fan, Liang Pan, Yueer Zhou, Ziyong Feng, Xiaowei Zhou, Sida Peng, and Jingbo Wang. Motionstreamer: Streaming motion generation via diffusion-based autoregressive model in causal latent space. arXiv preprint , 2025. Weiji Xie, Jinrui Han, Jiakun Zheng, Huanyu Li, Xinzhe Liu, Jiyuan Shi, Weinan Zhang, Chenjia Bai, and Xuelong Li. Kungfubot: Physics-based humanoid whole-body control for learning highly-dynamic skills. arXiv preprint , 2025. Junpeng Yue, Zepeng Wang, Yuxuan Wang, Weishuai Zeng, Jiangxing Wang, Xinrun Xu, Yu Zhang, Sipeng Zheng, Ziluo Ding, and Zongqing Lu. Rl from physical feedback: Aligning large motion models with humanoid control. arXiv preprint , 2025. Jianrong Zhang, Yangsong Zhang, Xiaodong Cun, Yong Zhang, Hongwei Zhao, Hongtao Lu, Xi Shen, and Ying Shan. Generating human motion from textual descriptions with discrete representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 14730-14740, 2023a. Mingyuan Zhang, Xinying Guo, Liang Pan, Zhongang Cai, Fangzhou Hong, Huirong Li, Lei Yang, and Ziwei Liu. Remodiffuse: Retrieval-augmented motion diffusion model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 364-373, 2023b. Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei Bai, Qi Chu, Nenghai Yu, and Wanli Ouyang. Motiongpt: Finetuned llms are general-purpose motion generators. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 7368-7376, 2024. Chongyang Zhong, Lei Hu, Zihao Zhang, and Shihong Xia. Attt2m: Text-driven human motion generation with multi-perspective attention mechanism. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 509-519, 2023. Enshen Zhou, Qi Su, Cheng Chi, Zhizheng Zhang, Zhongyuan Wang, Tiejun Huang, Lu Sheng, and He Wang. Code-as-monitor: Constraint-aware visual programming for reactive and proactive robotic failure detection. arXiv preprint , 2024. Enshen Zhou, Jingkun An, Cheng Chi, Yi Han, Shanyu Rong, Chi Zhang, Pengwei Wang, Zhongyuan Wang, Tiejun Huang, Lu Sheng, et al. Roborefer: Towards spatial referring with reasoning in vision-language models for robotics. arXiv preprint , 2025. 13 6 APPENDIX APPENDIX OVERVIEW This appendix provides additional details and results, organized as follows: • Section 6.1: Detailed information of implementation, including detailed proprioceptive states, privileged information and hyper-parameters of network structure. • Section 6.2: Elaboration on some details during training, including dataset details, motion filter and retargeting, simulator, domain randomization, regularization, reward functions, penalty and curriculm learning. • Section 6.3: Details about evaluation, including metrics about motion generation and motion tracking. • Section 6.4: Deployment details, including sim-to-sim and sim-to-real. • Section 6.5: Extra qualitative experiment results and visualizations, including in the simulation and in the real-world. • Section 6.6: Extra ablation experiment results and visualizations, including the effect of causal adaptive sampling (CAS), hyper-parameter λ, and the number of experts in MoE of teacher policy. 6.1 IMPLEMENTATION DETAILS In this section, we describe the state representation used for policy training, covering proprioceptive states, privileged information, and hyper-parameters of our network. The proprioceptive state components for both the teacher and student policies are summarized in Table 6. A key distinction is that the student policy utilizes a longer history of observations, compensating for its lack of access to privileged information by relying on extended temporal context. For privileged information, the teacher policy incorporates privileged information to achieve precise motion tracking. The full set of privileged state features is provided in Table 6. Previous methods receive motion target information as part of the observation in both teacher and student policy, which includes keypoint positions, desired joint (DoF) positions, and root velocity. But in our method, only the teacher policy receives these information, and the student policy only receives the latent from motion generator. A detailed target state composition is presented in Table 7. The action output corresponds to target joint positions for proportional-derivative (PD) control, with a dimensionality of 23 for the Unitree G1 humanoid platform. RoboGhost employs a teacher-student training framework. The teacher policy, trained via standard PPO (Schulman et al., 2017), utilizes privileged information, tracking targets, and proprioceptive states. In contrast, the student policy is trained using Dagger without access to privileged information, but instead relies on an extended history of observations. Furthermore, while the teacher policy uses explicit reference motion information, the student policy replaces this with a latent representation from a motion generator, resulting in a retargeting-free and latent-driven approach. For the teacher policy, inputs are concatenated and processed by a Mixture-of-Experts (MoE) network for policy learning. The student policy feeds its concatenated inputs as conditions to a diffusion model with an MLP backbone. Furthermore, we employ AdaLN to inject conditional information throughout the denoising process within the diffusion model. A final MLP layer is appended to the backbone network, which projects the output to a 23-dimensional action space while additionally incorporating the conditional signals. Detailed hyperparameters for both policies are provided in Table 8. 6.2 TRAINING DETAILS Dataset Details Our motion generator is pretrained on the MotionMillion dataset (Fan et al., 2025). Given the substantial scale of this dataset, we restrict pretraining to the humanml and kungfu subsets from the MotionUnion branch, comprising a total of 50,378 motion sequences. During policy training, we again draw data from the humanml and kungfu subsets. However, due to the presence of extensive duplicate and mirrored sequences, we perform a data cleaning process to remove redundancies and 14 Proprioceptive States State Component Dim. DoF position 23 DoF velocity 23 Last Action 23 Root Angular Velocity 3 Roll 1 Pitch 1 Yaw 1 Total dim 75 × 10 Privileged Information DoF Difference 23 Keybody Difference 36 Root velocity 3 Total dim 62 Table 6: Proprioceptive states and privileged information. Teacher Policy State Component Dim. DoF position 23 Keypoint position 36 Root Velocity 3 Root Angular Velocity 3 Roll 1 Pitch 1 Yaw 1 Height 1 Total dim 69 Student Policy Latent 64 Total dim 64 Table 7: Reference information in the teacher and student policies. Hyperparameter Value Optimizer AdamW β1, β2 0.9, 0.999 Learning Rate 1 × 10-4 Batch Size 4096 Teacher Policy Discount factor (γ) 0.99 Clip Parameter 0.2 Entropy Coefficient 0.005 Max Gradient Norm 1 Learning Epochs 5 Mini Batches 4 Value Loss Coefficient 1 Value MLP Size [512, 256, 128] Actor MLP Size [512, 256, 128] Experts 5 Student Policy MLP Layers 4 MLP Size [256, 256, 256] Table 8: Hyperparameters for teacher and student policy training. non-flat-ground motions. Using this filtered dataset, we first train an initial policy, denoted as π0. This policy is then used to further filter the dataset by excluding sequences with a tracking error exceeding 0.6. After this refinement, the humanml subset contains 3,261 motion sequences, and the kungfu subset contains 200. And due to the significant domain gap and difference in complexity between these two subsets, we opt to train two separate policies, each specialized on one subset. Motion Filter and Retargeting Following Xie et al. (2025) and Tripathi et al. (2023), we compute the ground-projected distance between the center of mass (CoM) and center of pressure (CoP) for each frame and apply a stability threshold. Let ̄pCoM t = (pCoM t,x , pCoM t,y ) and ̄pCoP t = (pCoP t,x , pCoP t,y ) denote the 2D ground projections of CoM and CoP at frame t, respectively. Let ∆dt = ∥ ̄pCoM t - ̄pCoP t ∥2. We define the stability criterion of a frame as ∆dt 5 × F z feet) -2 Waist roll pitch error ∥pwrp t -pwrp 0 ∥2 2 -0.5 Ankle Action ∥aankle t ∥2 2 -0.3 Table 9: Domain randomization and regularization parameters. Motion Tracking Rewards We define the reward function as the sum of penalty, regularization, and task rewards, which is meticulously designed to improve both the performance and motion realism of the humanoid robot. It consists of terms that incentivize accurate tracking of root velocity, direction, and orientation, as well as precise keypoints and joints position tracking. Inspired by ASAP (He et al., 2025a), we additionally incorporate regularization terms to enhance stability and sim-to-real transferability, including torques, action rate, feet orientation, feet heading, and slippage. In terms of penalty, we adopt penalty terms for DoF position limits, torque limits, and termination. The detailed reward functions are summarized in Table 10, more terms can be seen in the supplementary material. 16 Task Reward Root velocity 10.0 Root velocity direction 6.0 Root angular velocity 1.0 Keypoint position 10.0 Feet position 12.0 DoF position 6.0 DoF velocity 6.0 Penalty DoF position limits -10.0 Torque limits -5.0 Termination -200.0 Table 10: Reward terms for pretraining Curriculum Learning Training policies to track agile motions in simulation is challenging, as certain dynamic behaviors are too difficult for the policy to learn effectively from the outset. To mitigate this issue, we utilize a termination curriculum (He et al., 2025a) that progressively tightens the motion tracking error tolerance during training, thereby guiding the policy to gradually improve its tracking fidelity. Initially, we set a lenient termination threshold of 1.5m-episodes are terminated if the robot deviates from the reference motion by more than this margin. As training progresses, the threshold is annealed down gradually, incrementally increasing the precision required for successful tracking. This curriculum enables the policy to first acquire basic balancing skills before refining them under stricter tracking constraints, ultimately facilitating the successful execution of highly dynamic behaviors. 6.3 EVALUATION DETAILS Motion Generation Metrics Following (Guo et al., 2022c; Li et al., 2024d;b), we evaluate our approach using established text-to-motion metrics: retrieval accuracy (R@1, R@2, R@3), Multimodal Distance (MMDist), and Fréchet Inception Distance (FID). • Retrieval Accuracy (R-Precision): These metrics measure the relevance of generated motions to text descriptions in a retrieval setup. R@1 denotes the fraction of text queries for which the correct motion is retrieved as the top match, reflecting the model's precision in identifying the most relevant motion. R@2 and R@3 extend this notion, indicating recall within the top two and three retrieved motions, respectively. • Multimodal Distance (MMDist): This quantifies the average feature-space distance between generated motions and their corresponding text embeddings, typically extracted via a pretrained retrieval model. Smaller MMDist values indicate stronger semantic alignment between text descriptions and motion outputs. • Fr échet Inception Distance (FID): FID assesses the quality and realism of generated motions by comparing their feature distribution to that of real motion data using a pretrained feature encoder (e.g., an Inception-style network). It computes the Fréchet distance between multivariate Gaussian distributions fitted to real and generated motion features. Lower FID scores reflect better distributional similarity and higher perceptual quality. • Diversity: Diversity quantifies the variance of motion sequences within the dataset. It is computed by randomly sampling Ndiver = 300 pairs of motions, denoted as (fi,1, fi,2) for each pair i. The metric is calculated as 1 Ndiver PNdiver i=1 ∥fi,1 -fi,2∥ Motion Tracking Metrics For motion tracking evaluation, we adopt metrics commonly used in prior work (Ji et al., 2024): Success Rate (Succ), Mean Per Joint Position Error (Empjpe), and Mean Per Keybody Position Error (Empkpe). • Success Rate (Succ): This metric evaluates whether the humanoid robot successfully follows the reference motion without falling. A trial is considered a failure if the average trajectory deviation exceeds 0.5 meters at any point, or if the root pitch angle passes a predefined threshold. 17 • Mean Per Joint Position Error (Empjpe in rad): Empjpe measures joint-level tracking accuracy by computing the average error in degrees of freedom (DoF) rotations between the reference and generated motion. • Mean Per Keybody Position Error (Empkpe in m): Empkpe assesses keypoint tracking performance based on the average positional discrepancy between reference and generated keypoint trajectories. 6.4 DEPLOYMENT DETAILS Sim-to-Sim As demonstrated in Humanoid-Gym (Gu et al., 2024), MuJoCo exhibits more realistic dynamics compared to Isaac Gym. Following established practices in motion tracking policy research (Ji et al., 2024), we perform reinforcement learning training in Isaac Gym to leverage its computational efficiency. To assess policy robustness and generalization, we then conduct zero-shot transfer to the MuJoCo simulator. Finally, we deploy the policy on a physical humanoid robot to evaluate the effectiveness of our RoboGhost framework for real-world motion tracking. Sim-to-Real Our real-world experiments utilize a Unitree G1 humanoid robot, equipped with an onboard Jetson Orin NX module for computation and communication. The control policy takes motion-tracking targets as input, computes target joint positions, and issues commands to the robot's low-level controller at 50Hz. Command transmission introduces a latency of 18-30ms. The low-level controller operates at 500Hz to ensure stable real-time actuation. Communication between the policy and the low-level interface is implemented using the Lightweight Communications and Marshalling (LCM) (Huang et al., 2010). 6.5 ADDITIONAL QUALITATIVE RESULTS To further validate the effectiveness of our method, we provide additional qualitative results in this section, including motion tracking visualizations in IsaacGym and MuJoCo (Fig 4) and real-world locomotion demonstrations on the physical robot (Fig 5). Moreover, we provide additional motion generation qualtative results in Fig 6. A supplementary video showcasing real-robot deployments is provided in the supplementary material. 6.6 ADDITIONAL ABLATION STUDIES To rigorously evaluate the effectiveness of causal adaptive sampling (CAS), we conduct various ablation studies examining both its performance gains and sensitivity to hyperparameters. As shown in Table 11, causal adaptive sampling increases the sampling probability of kinematically challenging frames within a motion sequence, thereby improving training efficiency and enhancing model generalization. Furthermore, as illustrated in Fig 7a and 7b, optimal performance is achieved when the adaptation parameters are set to λ = 0.8 and p = 0.005. Furthermore, we conduct an ablation study on the number of experts in the MoE architecture. As illustrated in Fig 8, the variation in the number of experts has a measurable yet limited impact on the policy's performance, with the optimal result achieved when the number of experts is set to 5. 18 "The person kicks with left leg forward and back." "Someone runs a zigzag course with arms out." "A person is playing golf." MuJoCo IsaacGym "A person is strolling without a clear destination." "A person rotates their torso while shaking their legs in a circular motion." "A person appears to be swinging their arm in a circular motion.." "Boxers throw an uppercut, block, and counter with a few fast right jabs." "A person dances with flair, then leaps forward." "A man kicks forward." "A person runs from left to right." "Someone is moving their body in a flexible, ever-changing motion." "The person demonstrates a Tiger Sword routine, combining various stances, strikes, and blocks with precise footwork and sword techniques." Figure 4: Qualitative results in the IsaacGym and MuJoCo. Method IsaacGym MuJoCo Succ ↑ Empjpe ↓ Empkpe ↓ Succ ↑ Empjpe ↓ Empkpe ↓ HumanML (MotionMillion) w/o CAS 0.92 0.18 0.14 0.68 0.32 0.28 Ours 0.97 0.12 0.09 0.74 0.24 0.20 Kungfu (MotionMillion) w/o CAS 0.62 0.42 0.41 0.48 0.67 0.63 Ours 0.72 0.34 0.31 0.57 0.54 0.50 Table 11: Ablation study on causal adaptive sampling. 19 Text Input: The person starts in an upright position, then performs two squats, moving their arms and hands to their chin. Text Input: While boxing, the man defends, dodges, then delivers a few right jabs. Text Input: The man is performing jumping jacks. Figure 5: Qualitative results on the Unitree G1. "person switches from doing jumping jacks to jumps with brining one of the legs forward ." "a person is sitting in a chair, wobbles side to side, stands up, and then start walking.""the person is kicking with his left and right foot." "a person walks forward one foot in front of the other with both arms stretched out. " "a person walks backwards and stops. " "a person jumps while moving their legs in and out while also moving their arms up and down " "a person is walking in one place. " "a man walks forward and then back in a boxing stance. " "a person walks straight forward ." "a person walks forward, increases his pace, and then leaps in the air. " "the person is throwing a baseball. " "the person took a small jump forward. " "a person walks backwards and stops. " "a man walks forward and then back in a boxing stance. " "a person walks forward, increases his pace, and then leaps in the air. " "the person switches from doing jumping jacks to jumps with brining one of the legs forward ." Figure 6: Qualitative results of motion generator. 20 0.5 0.6 0.7 0.8 0.9 0.10 0.12 0.14 0.16 0.18 Error (m) Tracking Error in IsaacGym MPJPE MKKPE (a) Isaac Gym 0.003 0.004 0.005 0.006 0.007 0.20 0.22 0.24 0.26 0.28 Error (m) Tracking Error in MuJoCo MPJPE MKKPE (b) MuJoCo Figure 7: The impact of different λ values on tracking performance in IsaacGym and MuJoCo. 3 4 5 6 7 0.09 0.10 0.11 0.12 0.13 0.14 0.15 Error (m) Tracking Performance in IsaacGym MPJPE MKKPE (a) IsaacGym 3 4 5 6 7 0.20 0.21 0.22 0.23 0.24 0.25 0.26 Error (m) Tracking Performance in MuJoCo MPJPE MKKPE (b) MuJoCo Figure 8: The impact of different number of experts in MoE on tracking performance in IsaacGym and MuJoCo. 21
2510.14948
FAIR COMMISSIONING - TOWARDS FIRST SCIENCE S. Reimann∗, H. Albers, R. W. Assmann1, P. Gasik, O. Geithner, F. Hagenbuck, A. Herlert, P. Hofmann, V. Kamerdzhiev, M. Kauschke, H. Kollmus, S. Pietri, N. Pyka, T. Radon, C. Schroeder, S. Schwarz, H. Simon, P. Spiller, K. Vogt GSI Helmholtz Centre, Darmstadt, Germany 1also at Institute for Applied Physics (IAP), Goethe University Frankfurt, Frankfurt, Germany Abstract The international Facility for Antiproton and Ion Research (FAIR) is under construction at the GSI Helmholtz Centre in Darmstadt. The first project stage includes the superconduct- ing 100 T m heavy-ion synchrotron SIS100, the Super Frag- ment Separator, and associated beam transport lines. Part of GSI’s existing accelerator chain, comprising UNILAC and SIS18, will serve as injector. Installation work in the FAIR accelerator tunnels and supply buildings has been ongoing since early 2024. As progress continues, special attention is now on the start of commissioning, beginning in 2025 with the cryogenic plant. Commissioning of the transport line will follow at the end of 2025, and beam commissioning is scheduled for the second half of 2027. This paper outlines the current status of the project, commissioning strategy and timeline. GSI & FAIR Figure 1 shows the existing GSI accelerator complex and the FAIR [1,2] project facilities currently under construction. The first stage being implemented is referred to as “First Science+”. Within this stage, beam delivery from SIS18 via the SFRS to the high-energy branch of the NUSTAR cave will be the first to be commissioned; this marks the “Early Science” milestone. The FAIR Phase-0 user program [3] will continue to be carried out by GSI throughout the entire commissioning phase. Figure 1: Overview of the GSI accelerator complex (in blue) and the facilities of the FAIR project. The first project stage, ’First Science +’, is depicted in green. ∗s.reimann@gsi.de THE CRYOGENIC SYSTEM The FAIR cryogenic system [4] (Fig. 2) is designed to provide 14 kW of cooling power at 4.4 K and 49 kW at 50 K, supplying two major machines: the fast-ramping heavy-ion synchrotron SIS100 and the Super Fragment Separator, as well as the superconducting magnets [5] of CBM and NUS- TAR. Commissioning of the cryoplant "Cryo2" is scheduled to begin in June 2025, initially in connection with the first distribution box (DB3). This phase involves two buildings, the cryoplant and the compressor station, along with he- lium storage tanks holding an initial helium inventory of approximately 10 000 m3. Due to ongoing construction in the surrounding area, special safety measures will be im- plemented. Subsequently, the distribution systems for the Super-FRS and SIS100 will be commissioned. Figure 2: Layout of the FAIR cryogenic system with cryo plant and corresponding distribution system. FAIR CONTROL CENTRE (FCC) A new control room for FAIR is under construction and expected to be operational in 2026 [6]. It will serve as the central control facility for all accelerator systems, encom- passing UNILAC, SIS18, the GSI storage ring program and the FAIR complex. With an area of approx. 600 m2, it will be about 2 × larger than the current GSI control room and equipped with 25 freely configurable workstations (Fig. 3). Figure 3: Console layout for the new FAIR control room. arXiv:2510.14948v1 [physics.acc-ph] 16 Oct 2025 System control will be fully digital; no analog cabling is planned. Therefore, the entire injector chain is being mi- grated to the FAIR control system. With the exception of UNILAC, this transition is largely complete [7–10]. UNI- LAC is currently being upgraded as part of the Injector Controls Upgrade Project [11]. A prototype test has already been successfully completed, with further tests scheduled for July 2025 and February 2026. These milestones will enable the 2026 user beam time at GSI and FAIR hardware commissioning to be carried out from the new control room. ACCELERATOR COMMISSIONING Commissioning preparations began in 2021 with the for- mation of the FAIR commissioning team. The process is structured into four phases: 1. Local commissioning 2. Remote & system commissioning 3. Integration tests and full dry-runs 4. Beam commissioning For the first two phases, commonly referred to as device check-out, commissioning procedures for all system types were reviewed in detail. Each process step was documented together with its specific prerequisites and boundary condi- tions. Final acceptance tests before dry-runs will primarily use automated procedures executed by a software sequencer. These are under development and already tested at the exist- ing GSI facility. A high-level timeline of the commissioning activities is shown in Fig. 4. FCC HEBT SFRS SIS100 Cryo 2025 2026 2027 2028 Hardware Commissioning (Phases 1–3) Beam Commissioning Commissioning of Cryogenic System / Machine Cooldown Figure 4: FAIR Commissioning Schedule (2025–2028). The pilot-beam commissioning will begin immediately after the final acceptance dry-run. Its goal is to deliver an ion beam defined by the Project Completion Parameters (PCPs) to a target or experiment. These parameters have been se- lected to ensure reliable achievement using the existing GSI injector chain while enabling the first physics experiment at NUSTAR or CBM. For this initial experiment, SIS100 will be operated with a single bunch instead of four. The PCPs for the respective beamlines are listed in Table 1. Completion of the first experiment marks the formal end of commissioning for the corresponding accelerator chain. Beam intensity will then be ramped up during subsequent operational phases. Table 1: Project Completion Parameters Chain Ion Energy Ion-Intensity SIS18–SFRS–NUSTAR 238U73+ 1 GeV/u 2 × 109 ion/s SIS100–SFRS–NUSTAR 238U28+ 1.4 GeV/u 2.5 × 1010 ion/s SIS100–CBM 197Au79+ 11 GeV/u 1 × 107 ion/s High Energy Beam Transport (HEBT) The HEBT system [12] includes, in the Early Science stage, the beam line from SIS18 to the Super-FRS. It spans approximately 300 m, with 82 magnets, 54 diagnostic de- vices, three safety beam plugs, and vacuum components such as pumps, valves, and gauges. Initial hardware commissioning will begin in Q4 2025, once installations in building and in the tunnel are complete and infrastructure is operational. As work progresses in tunnel, further commissioning will continue section by sec- tion throughout 2026. Remote/system commissioning and integration tests will follow, concluding with dry-runs in Q2 2027. In parallel, installation of the 400 m HEBT First Science section will be finalized by mid-2027. Beam com- missioning is planned using fast-extracted SIS18 beam at 12 Tm to 18 Tm, e.g., 40Ar18+ at 1 GeV/u, with pilot intensi- ties of 107–108 ions per cycle. Commissioning steps include measurement of beam pa- rameters, optical tuning, element-by-element steering, and functional verification of beamline components. Once beam dumping on the Super-FRS catcher is confirmed, further tests such as kick response measurements, aperture scans, and dispersion measurements will be conducted. The beam can then be optimized for Super-FRS commissioning. Superconducting Fragment Separator (SFRS) The Super-FRS, in its Early Science configuration con- sisting only of the main branch (see Fig. 5), is expected to complete commissioning by the end of 2027 in preparation for the first the NUSTAR experiment. This configuration comprises 150 racks, 30,000 cables, more than 120 mag- netic elements, 40 drives, and 60 beam diagnostic devices installed across 25 vacuum sections. Figure 5: Super-FRS in its early science configuration. The superconducting magnets [13] are cooled via eight cryogenic branches and 45 feed boxes. The initial phases of commissioning will run in parallel with ongoing installation during 2025 and 2026. All cryogenic sections are planned to be cooled down in 2027, enabling system dry-runs in Q3 and beam commissioning in Q4. The objectives of beam com- missioning are to characterize the ion-optical parameters, validate the fragment selection and identification capabil- ities, and ensure all systems are fully functional and safe for delivering reliable fragment beams for the 2028 Early Science campaign. The Heavy-Ion Synchrotron SIS100 SIS100 is a heavy-ion synchrotron equipped with fast- ramped superconducting magnets. Its goal is to provide full cycling flexibility, similar to a room-temperature syn- chrotron [14]. However, fast ramping induces dynamic losses in the cold mass, especially in the yokes of the superfer- ric magnets, resulting in significant variations in cryogenic heat load depending on the operational cycle. Operation modes vary by user requirements: either a long extraction flat-top for slow extraction and low AC loss, or a triangular cycle with fast extraction and high AC loss [15]. SIS100’s design addresses this through two-phase 4.5 K liq- uid helium cooling in the supply header and maximized he- lium gas return for efficient cryoplant operation. Hydraulic circuits in the quadrupole systems are balanced via cali- brated flow restrictors, while LHe pumps and phase separa- tors recover liquid helium from the return line. A separate hydraulic circuit independently cools the ultra-high vacuum (UHV) system. This complex thermal management requires detailed cold commissioning tailored to different use cases. In the long term, predictive heat-load modeling and proac- tive control of hydraulic circuits via the accelerator control system are planned. Cold commissioning is scheduled to begin in early 2028 after cryomagnet installation. Initial steps will include com- missioning of the main power converters and quench pro- tection systems at various current levels, followed by user- specific operation cycles. Conventional systems in the room-temperature straights will be commissioned in parallel with other FAIR compo- nents. RF systems are currently undergoing offline testing. Specialized devices such as the ramped bipolar extraction kicker will be fully integrated and FAT-tested at the manu- facturer’s site. NUSTAR AND CBM - EXPERIMENTS The NUSTAR experimental collaboration, one of the four core scientific pillars at FAIR, focuses on the study of NUclear STructure, Astrophysics, and Reactions [16]. Comprising more than 600 members from more than 140 international institutes, NUSTAR includes several sub- collaborations using a wide range of state-of-the-art instru- ments to explore different aspects of the overarching science program through complementary experimental methods. Since 2019, NUSTAR experiments have been success- fully conducted at existing GSI facilities as part of the FAIR Phase-0 program, during which many components were brought into operation. Three sub-collaborations, R3B, HIS- PEC/DESPEC, and the Super-FRS Experiment Collabora- tion, will be the first to relocate to the FAIR site for Early Science. These groups will perform new experiments using exotic secondary beams provided by the Super-FRS. Installation of NUSTAR components in the High-Energy Cave will begin in 2026, followed by offline commissioning, integrated system tests, and full in-beam commissioning with Super-FRS beams. The NUSTAR Early Science physics program will commence once all systems are verified to be fully operational. The Compressed Baryonic Matter (CBM) experiment aims to explore the phase structure of strong interaction (QCD) matter at large net-baryon densities and moderate temperatures using heavy-ion collisions in the energy range √𝑠NN = 2.9 – 4.9 GeV. CBM is a fixed-target experiment, equipped with fast and radiation-hard detector systems and an advanced triggerless data acquisition scheme. The CBM will collect data at interaction rates of up to 10 MHz by performing online reconstruction and event selection, thus allowing measurements of rare probes not accessible so far in this energy range. These include: multi-strange hadron production and their flow, higher-order cumulants, dileptons, as well as double-strange hypernuclei. The installation of the experimental infrastructure commenced in 2023 and will proceed through 2027, culminating in the installation of the CBM detectors and the start of global commissioning. The SIS100 synchrotron and CBM experiment are expected to begin operations in late 2028. CONCLUSION The FAIR project is entering a crucial phase with the tran- sition from installation to commissioning. A comprehensive strategy has been developed to ensure structured, stepwise commissioning of all systems, beginning with the cryogenic infrastructure and progressing through local hardware com- missioning, integration tests, and finally beam commission- ing. Key infrastructure such as the FAIR Control Centre and cryogenic plant will be operational by 2026. The outlined timeline provides a realistic path towards first science while maintaining parallel operation of GSI systems. ACKNOWLEDGEMENTS The authors would like to thank Jörg Wenninger, Ciprian Plostinar, and Matthias Scholz for their continuous support and valuable advice. Preparatory conceptual work by the FAIR Commissioning & Controls Working Group [17] is also acknowledged. REFERENCES [1] P. J. Spiller et al., “Status of the FAIR Project”, in Proc. IPAC’18, Vancouver, Canada, Apr.-May 2018, pp. 63–68. doi:10.18429/JACoW-IPAC2018-MOZGBF2 [2] J. Blaurock, H. Simon, O. Boine-Frankenheim, and P. Spiller, “FAIR completion of construction works, towards commis- sioning and first science”, in Proc. IPAC’23, Venice, Italy, May 2023, pp. 3923–3927. doi:10.18429/JACoW-IPAC2023-THYD1 [3] M. Bai et al., “Challenges of FAIR Phase 0”, in Proc. IPAC’18, Vancouver, Canada, Apr.-May 2018, pp. 2947–2949. doi:10.18429/JACoW-IPAC2018-THYGBF3 [4] J. McEntee, “Cryogenics at FAIR: adaptability is key”, CERN Courier, July 2023. https://cerncourier.com/a/ cryogenics-at-fair-adaptability-is-key/ [5] E. S. Fischer et al., “Superconducting Magnets at FAIR”, in Proc. IPAC’17, Copenhagen, Denmark, May 2017, pp. 2546– 2549. doi:10.18429/JACoW-IPAC2017-WEOCB2 [6] M. Vossberg, K. Berkl, S. Reimann, P. Schuett, R. J. Stein- hagen, and G. Stephan, “FAIR Control Centre (FCC) - Con- cepts and Interim Options for the Existing GSI Main Control Room”, in Proc. IPAC’17, Copenhagen, Denmark, May 2017, pp. 1791–1794. doi:10.18429/JACoW-IPAC2017-TUPIK047 [7] S. Krepp, J. Fitzek, H. C. Hüther, R. Mueller, A. Schaller, and A. Walter, “A Dynamic Beam Scheduling System for the FAIR Accelerator Facility”, in Proc. ICALEPCS’21, Shang- hai, China, Oct. 2021, pp. 138–142. doi:10.18429/JACoW-ICALEPCS2021-MOPV013 [8] D. Ondreka, C. Dimopoulou, H. C. Huether, H. Liebermann, J. Stadlmann, and R. J. Steinhagen, “Recommissioning of SIS18 After FAIR Upgrades”, in Proc. IPAC’19, Melbourne, Australia, May 2019, pp. 932–935. doi:10.18429/JACoW-IPAC2019-MOPTS035 [9] S. A. Litvinov, R. Hess, B. Lorentz, and M. Steck, “Operation of the ESR Storage Ring with the LSA Control System”, in Proc. 19th Int. Conf. Accel. Large Exp. Phys. Control Syst. (ICALEPCS’23), Cape Town, South Africa, Oct. 2023, pp. 534–536. doi:10.18429/JACoW-ICALEPCS2023-TUPDP019 [10] F. Herfurth et al., “Commissioning of the Low Energy Storage Ring Facility CRYRING@ESR”, in Proc. COOL’17, Bonn, Germany, Sep. 2017, pp. 81–83. doi:10.18429/JACoW-COOL2017-THM13 [11] P. Gerhard, “About the New Linear Accelerator Control Sys- tem at GSI”, in Proc. 19th Int. Conf. Accel. Large Exp. Phys. Control Syst. (ICALEPCS’23), Cape Town, South Africa, Oct. 2023, JACoW Publishing, Geneva, Switzerland. doi:10.18429/JACoW-ICALEPCS2023-TUPDP018 [12] F. Hagenbuck et al., “Status of the High Energy Beam Trans- port System for FAIR”, in Proc. IPAC’15, Richmond, VA, USA, May 2015, pp. 3705–3708. doi:10.18429/JACoW-IPAC2015-THPF012 [13] K. Sugita et al., “Status of superconducting magnets for super- FRS at FAIR”, in Proc. IPAC’23, Venice, Italy, May 2023, pp. 3731–3734. doi:10.18429/JACoW-IPAC2023-WEPM068 [14] P. Spiller et al., “Technological features and status of the new heavy ions synchrotron SIS100 at FAIR”, in Proc. IPAC’23, Venice, Italy, May 2023, pp. 165–168. doi:10.18429/JACoW-IPAC2023-MOPA062 [15] I. J. Petzenhauser, U. Blell, and S. Heberer, “Injection and Extraction Systems of the SIS100 Heavy Ion Synchrotron at FAIR”, in Proc. IPAC’21, Campinas, Brazil, May 2021, pp. 3514–3516. doi:10.18429/JACoW-IPAC2021-WEPAB348 [16] N. Kalantar-Nayestanaki and A. Bruce, “NUSTAR: NUclear STructure Astrophysics and Reactions at FAIR”, Nuclear Physics News, vol. 28, no. 3, pp. 5–11, 2018. doi:10.1080/10619127.2018.1495476 [17] R. J. Steinhagen, “FAIR Commissioning – Concepts and Strategies in View of High-Intensity Operation”, in Proc. 61st ICFA Advanced Beam Dynamics Workshop (HB’18), Daejeon, Korea, June 2018, pp. 141–146. doi:10.18429/JACoW-HB2018-TUA2WD01
FAIR COMMISSIONING - TOWARDS FIRST SCIENCE S. Reimann∗, H. Albers, R. W. Assmann1, P. Gasik, O. Geithner, F. Hagenbuck, A. Herlert, P. Hofmann, V. Kamerdzhiev, M. Kauschke, H. Kollmus, S. Pietri, N. Pyka, T. Radon, C. Schroeder, S. Schwarz, H. Simon, P. Spiller, K. Vogt GSI Helmholtz Centre, Darmstadt, Germany 1also at Institute for Applied Physics (IAP), Goethe University Frankfurt, Frankfurt, Germany Abstract The international Facility for Antiproton and Ion Research (FAIR) is under construction at the GSI Helmholtz Centre in Darmstadt. The first project stage includes the superconducting 100 T m heavy-ion synchrotron SIS100, the Super Fragment Separator, and associated beam transport lines. Part of GSI's existing accelerator chain, comprising UNILAC and SIS18, will serve as injector. Installation work in the FAIR accelerator tunnels and supply buildings has been ongoing since early 2024. As progress continues, special attention is now on the start of commissioning, beginning in 2025 with the cryogenic plant. Commissioning of the transport line will follow at the end of 2025, and beam commissioning is scheduled for the second half of 2027. This paper outlines the current status of the project, commissioning strategy and timeline. GSI & FAIR Figure 1 shows the existing GSI accelerator complex and the FAIR [1,2] project facilities currently under construction. The first stage being implemented is referred to as "First Science+". Within this stage, beam delivery from SIS18 via the SFRS to the high-energy branch of the NUSTAR cave will be the first to be commissioned; this marks the "Early Science" milestone. The FAIR Phase-0 user program [3] will continue to be carried out by GSI throughout the entire commissioning phase. Figure 1: Overview of the GSI accelerator complex (in blue) and the facilities of the FAIR project. The first project stage, 'First Science +', is depicted in green. ∗ THE CRYOGENIC SYSTEM The FAIR cryogenic system [4] (Fig. 2) is designed to provide 14 kW of cooling power at 4.4 K and 49 kW at 50 K, supplying two major machines: the fast-ramping heavy-ion synchrotron SIS100 and the Super Fragment Separator, as well as the superconducting magnets [5] of CBM and NUSTAR. Commissioning of the cryoplant "Cryo2" is scheduled to begin in June 2025, initially in connection with the first distribution box (DB3). This phase involves two buildings, the cryoplant and the compressor station, along with helium storage tanks holding an initial helium inventory of approximately 10 000 m3. Due to ongoing construction in the surrounding area, special safety measures will be implemented. Subsequently, the distribution systems for the Super-FRS and SIS100 will be commissioned. Figure 2: Layout of the FAIR cryogenic system with cryo plant and corresponding distribution system. FAIR CONTROL CENTRE (FCC) A new control room for FAIR is under construction and expected to be operational in 2026 [6]. It will serve as the central control facility for all accelerator systems, encompassing UNILAC, SIS18, the GSI storage ring program and the FAIR complex. With an area of approx. 600 m2, it will be about 2 × larger than the current GSI control room and equipped with 25 freely configurable workstations (Fig. 3). Figure 3: Console layout for the new FAIR control room. 16 Oct 2025 System control will be fully digital; no analog cabling is planned. Therefore, the entire injector chain is being migrated to the FAIR control system. With the exception of UNILAC, this transition is largely complete [7-10]. UNILAC is currently being upgraded as part of the Injector Controls Upgrade Project [11]. A prototype test has already been successfully completed, with further tests scheduled for July 2025 and February 2026. These milestones will enable the 2026 user beam time at GSI and FAIR hardware commissioning to be carried out from the new control room. ACCELERATOR COMMISSIONING Commissioning preparations began in 2021 with the formation of the FAIR commissioning team. The process is structured into four phases: 1. Local commissioning 2. Remote & system commissioning 3. Integration tests and full dry-runs 4. Beam commissioning For the first two phases, commonly referred to as device check-out, commissioning procedures for all system types were reviewed in detail. Each process step was documented together with its specific prerequisites and boundary conditions. Final acceptance tests before dry-runs will primarily use automated procedures executed by a software sequencer. These are under development and already tested at the existing GSI facility. A high-level timeline of the commissioning activities is shown in Fig. 4. FCC HEBT SFRS SIS100 Cryo 2025 2026 2027 2028 Hardware Commissioning (Phases 1-3) Beam Commissioning Commissioning of Cryogenic System / Machine Cooldown Figure 4: FAIR Commissioning Schedule (2025-2028). The pilot-beam commissioning will begin immediately after the final acceptance dry-run. Its goal is to deliver an ion beam defined by the Project Completion Parameters (PCPs) to a target or experiment. These parameters have been selected to ensure reliable achievement using the existing GSI injector chain while enabling the first physics experiment at NUSTAR or CBM. For this initial experiment, SIS100 will be operated with a single bunch instead of four. The PCPs for the respective beamlines are listed in Table 1. Completion of the first experiment marks the formal end of commissioning for the corresponding accelerator chain. Beam intensity will then be ramped up during subsequent operational phases. Table 1: Project Completion Parameters Chain Ion Energy Ion-Intensity SIS18-SFRS-NUSTAR 238U73+ 1 GeV/u 2 × 109 ion/s SIS100-SFRS-NUSTAR 238U28+ 1.4 GeV/u 2.5 × 1010 ion/s SIS100-CBM 197Au79+ 11 GeV/u 1 × 107 ion/s High Energy Beam Transport (HEBT) The HEBT system [12] includes, in the Early Science stage, the beam line from SIS18 to the Super-FRS. It spans approximately 300 m, with 82 magnets, 54 diagnostic devices, three safety beam plugs, and vacuum components such as pumps, valves, and gauges. Initial hardware commissioning will begin in Q4 2025, once installations in building and in the tunnel are complete and infrastructure is operational. As work progresses in tunnel, further commissioning will continue section by section throughout 2026. Remote/system commissioning and integration tests will follow, concluding with dry-runs in Q2 2027. In parallel, installation of the 400 m HEBT First Science section will be finalized by mid-2027. Beam commissioning is planned using fast-extracted SIS18 beam at 12 Tm to 18 Tm, e.g., 40Ar18+ at 1 GeV/u, with pilot intensities of 107-108 ions per cycle. Commissioning steps include measurement of beam parameters, optical tuning, element-by-element steering, and functional verification of beamline components. Once beam dumping on the Super-FRS catcher is confirmed, further tests such as kick response measurements, aperture scans, and dispersion measurements will be conducted. The beam can then be optimized for Super-FRS commissioning. Superconducting Fragment Separator (SFRS) The Super-FRS, in its Early Science configuration consisting only of the main branch (see Fig. 5), is expected to complete commissioning by the end of 2027 in preparation for the first the NUSTAR experiment. This configuration comprises 150 racks, 30,000 cables, more than 120 magnetic elements, 40 drives, and 60 beam diagnostic devices installed across 25 vacuum sections. Figure 5: Super-FRS in its early science configuration. The superconducting magnets [13] are cooled via eight cryogenic branches and 45 feed boxes. The initial phases of commissioning will run in parallel with ongoing installation during 2025 and 2026. All cryogenic sections are planned to be cooled down in 2027, enabling system dry-runs in Q3 and beam commissioning in Q4. The objectives of beam commissioning are to characterize the ion-optical parameters, validate the fragment selection and identification capabilities, and ensure all systems are fully functional and safe for delivering reliable fragment beams for the 2028 Early Science campaign. The Heavy-Ion Synchrotron SIS100 SIS100 is a heavy-ion synchrotron equipped with fastramped superconducting magnets. Its goal is to provide full cycling flexibility, similar to a room-temperature synchrotron [14]. However, fast ramping induces dynamic losses in the cold mass, especially in the yokes of the superferric magnets, resulting in significant variations in cryogenic heat load depending on the operational cycle. Operation modes vary by user requirements: either a long extraction flat-top for slow extraction and low AC loss, or a triangular cycle with fast extraction and high AC loss [15]. SIS100's design addresses this through two-phase 4.5 K liquid helium cooling in the supply header and maximized helium gas return for efficient cryoplant operation. Hydraulic circuits in the quadrupole systems are balanced via calibrated flow restrictors, while LHe pumps and phase separators recover liquid helium from the return line. A separate hydraulic circuit independently cools the ultra-high vacuum (UHV) system. This complex thermal management requires detailed cold commissioning tailored to different use cases. In the long term, predictive heat-load modeling and proactive control of hydraulic circuits via the accelerator control system are planned. Cold commissioning is scheduled to begin in early 2028 after cryomagnet installation. Initial steps will include commissioning of the main power converters and quench protection systems at various current levels, followed by userspecific operation cycles. Conventional systems in the room-temperature straights will be commissioned in parallel with other FAIR components. RF systems are currently undergoing offline testing. Specialized devices such as the ramped bipolar extraction kicker will be fully integrated and FAT-tested at the manufacturer's site. NUSTAR AND CBM - EXPERIMENTS The NUSTAR experimental collaboration, one of the four core scientific pillars at FAIR, focuses on the study of NUclear STructure, Astrophysics, and Reactions [16]. Comprising more than 600 members from more than 140 international institutes, NUSTAR includes several subcollaborations using a wide range of state-of-the-art instruments to explore different aspects of the overarching science program through complementary experimental methods. Since 2019, NUSTAR experiments have been successfully conducted at existing GSI facilities as part of the FAIR Phase-0 program, during which many components were brought into operation. Three sub-collaborations, R3B, HISPEC/DESPEC, and the Super-FRS Experiment Collaboration, will be the first to relocate to the FAIR site for Early Science. These groups will perform new experiments using exotic secondary beams provided by the Super-FRS. Installation of NUSTAR components in the High-Energy Cave will begin in 2026, followed by offline commissioning, integrated system tests, and full in-beam commissioning with Super-FRS beams. The NUSTAR Early Science physics program will commence once all systems are verified to be fully operational. The Compressed Baryonic Matter (CBM) experiment aims to explore the phase structure of strong interaction (QCD) matter at large net-baryon densities and moderate temperatures using heavy-ion collisions in the energy range √sNN = 2.9 - 4.9 GeV. CBM is a fixed-target experiment, equipped with fast and radiation-hard detector systems and an advanced triggerless data acquisition scheme. The CBM will collect data at interaction rates of up to 10 MHz by performing online reconstruction and event selection, thus allowing measurements of rare probes not accessible so far in this energy range. These include: multi-strange hadron production and their flow, higher-order cumulants, dileptons, as well as double-strange hypernuclei. The installation of the experimental infrastructure commenced in 2023 and will proceed through 2027, culminating in the installation of the CBM detectors and the start of global commissioning. The SIS100 synchrotron and CBM experiment are expected to begin operations in late 2028. CONCLUSION The FAIR project is entering a crucial phase with the transition from installation to commissioning. A comprehensive strategy has been developed to ensure structured, stepwise commissioning of all systems, beginning with the cryogenic infrastructure and progressing through local hardware commissioning, integration tests, and finally beam commissioning. Key infrastructure such as the FAIR Control Centre and cryogenic plant will be operational by 2026. The outlined timeline provides a realistic path towards first science while maintaining parallel operation of GSI systems. ACKNOWLEDGEMENTS The authors would like to thank Jörg Wenninger, Ciprian Plostinar, and Matthias Scholz for their continuous support and valuable advice. Preparatory conceptual work by the FAIR Commissioning & Controls Working Group [17] is also acknowledged. REFERENCES [1] P. J. Spiller et al., "Status of the FAIR Project", in Proc. IPAC'18, Vancouver, Canada, Apr.-May 2018, pp. 63-68. [2] J. Blaurock, H. Simon, O. Boine-Frankenheim, and P. Spiller, "FAIR completion of construction works, towards commissioning and first science", in Proc. IPAC'23, Venice, Italy, May 2023, pp. 3923-3927. [3] M. Bai et al., "Challenges of FAIR Phase 0", in Proc. IPAC'18, Vancouver, Canada, Apr.-May 2018, pp. 2947-2949. [4] J. McEntee, "Cryogenics at FAIR: adaptability is key", CERN Courier, July 2023. https://cerncourier.com/a/ cryogenics-at-fair-adaptability-is-key/ [5] E. S. Fischer et al., "Superconducting Magnets at FAIR", in Proc. IPAC'17, Copenhagen, Denmark, May 2017, pp. 25462549. [6] M. Vossberg, K. Berkl, S. Reimann, P. Schuett, R. J. Steinhagen, and G. Stephan, "FAIR Control Centre (FCC) - Concepts and Interim Options for the Existing GSI Main Control Room", in Proc. IPAC'17, Copenhagen, Denmark, May 2017, pp. 1791-1794. [7] S. Krepp, J. Fitzek, H. C. Hüther, R. Mueller, A. Schaller, and A. Walter, "A Dynamic Beam Scheduling System for the FAIR Accelerator Facility", in Proc. ICALEPCS'21, Shanghai, China, Oct. 2021, pp. 138-142. [8] D. Ondreka, C. Dimopoulou, H. C. Huether, H. Liebermann, J. Stadlmann, and R. J. Steinhagen, "Recommissioning of SIS18 After FAIR Upgrades", in Proc. IPAC'19, Melbourne, Australia, May 2019, pp. 932-935. [9] S. A. Litvinov, R. Hess, B. Lorentz, and M. Steck, "Operation of the ESR Storage Ring with the LSA Control System", in Proc. 19th Int. Conf. Accel. Large Exp. Phys. Control Syst. (ICALEPCS'23), Cape Town, South Africa, Oct. 2023, pp. 534-536. [10] F. Herfurth et al., "Commissioning of the Low Energy Storage Ring Facility CRYRING@ESR", in Proc. COOL'17, Bonn, Germany, Sep. 2017, pp. 81-83. [11] P. Gerhard, "About the New Linear Accelerator Control System at GSI", in Proc. 19th Int. Conf. Accel. Large Exp. Phys. Control Syst. (ICALEPCS'23), Cape Town, South Africa, Oct. 2023, JACoW Publishing, Geneva, Switzerland. [12] F. Hagenbuck et al., "Status of the High Energy Beam Transport System for FAIR", in Proc. IPAC'15, Richmond, VA, USA, May 2015, pp. 3705-3708. [13] K. Sugita et al., "Status of superconducting magnets for superFRS at FAIR", in Proc. IPAC'23, Venice, Italy, May 2023, pp. 3731-3734. [14] P. Spiller et al., "Technological features and status of the new heavy ions synchrotron SIS100 at FAIR", in Proc. IPAC'23, Venice, Italy, May 2023, pp. 165-168. [15] I. J. Petzenhauser, U. Blell, and S. Heberer, "Injection and Extraction Systems of the SIS100 Heavy Ion Synchrotron at FAIR", in Proc. IPAC'21, Campinas, Brazil, May 2021, pp. 3514-3516. [16] N. Kalantar-Nayestanaki and A. Bruce, "NUSTAR: NUclear STructure Astrophysics and Reactions at FAIR", Nuclear Physics News, vol. 28, no. 3, pp. 5-11, 2018. [17] R. J. Steinhagen, "FAIR Commissioning - Concepts and Strategies in View of High-Intensity Operation", in Proc. 61st ICFA Advanced Beam Dynamics Workshop (HB'18), Daejeon, Korea, June 2018, pp. 141-146.
2510.14949
DIALECTGEN: BENCHMARKING AND IMPROVING DI- ALECT ROBUSTNESS IN MULTIMODAL GENERATION Yu Zhou ∗†, Sohyun An ∗, Haikang Deng ∗, Da Yin, Clark Peng, Cho-Jui Hsieh, Kai-Wei Chang, Nanyun Peng University of California, Los Angeles {yuzhou, kwchang, violetpeng}@cs.ucla.edu Stable Diffusion 3.5 Two red packets on a table (Standard American English) Two ang pows on a table (Singaporean English) DALL·E Mini A man driving his car (Standard American English) A man driving his whip (African American English) A man hiking with his brother (Standard American English) A man hiking with his carnal (Chicano English) Wan 2.1 Video FLUX.1 dev A man selling brinjal (Indian English) A man selling eggplant (Standard American English) Figure 1: Multimodal Generative Model Outputs on semantically identical prompts that differ only in one synonymous lexical feature in Standard American English (top) / a lower-resource English dialect (bottom). ABSTRACT Contact languages like English exhibit rich regional variations in the form of dialects, which are often used by dialect speakers interacting with generative models. However, can multimodal generative models effectively produce content given dialectal textual input? In this work, we study this question by constructing a new large-scale benchmark spanning six common English dialects. We work with dialect speakers to collect and verify over 4,200 unique prompts and evaluate on 17 image and video generative models. Our automatic and human evaluation results show that current state-of-the-art multimodal generative models exhibit 32.26% to 48.17% performance degradation when a single dialect word is used in the prompt. Common mitigation methods such as fine-tuning and prompt rewriting can only improve dialect performance by small margins (< 7%), while potentially incurring significant performance degradation in Standard American English (SAE). To this end, we design a general encoder-based mitigation strategy for multimodal generative models. Our method teaches the model to recognize new dialect features while preserving SAE performance. Experiments on models such as Stable Diffusion 1.5 show that our method is able to simultaneously raise performance on five dialects to be on par with SAE (+34.4%), while incurring near-zero cost to SAE performance. Code and data at: dialectgen.github.io. 1 INTRODUCTION Linguists have defined over 160 dialects (Aeni et al., 2021) within the English language, with three out of four English speakers having a dialect background other than Standard American or British English (Crystal, 2003). Despite this rich diversity, current pre-training paradigms employ content filters that can exclude data involving lower-resource English dialects other than Standard American and British English (Gururangan et al., 2022), reducing the effectiveness of pretrained models on * Core contributors. † Project lead. 1 arXiv:2510.14949v1 [cs.CL] 16 Oct 2025 inputs from other dialects (Lee et al., 2023). Prior works have shown significant allocational harms toward dialect speakers caused by such dialect performance discrepancies in machine learning appli- cations (Hovy & Spruit, 2016; Bender et al., 2021), making the observation of similar performance trends in multimodal generative models an alarming sign. As shown in Figure 1, while current multimodal generative models can accurately generate high quality image and video content given Standard American English (SAE) prompts (left); they fail in various manners when provided with semantically equivalent prompts containing a single synonymous dialect word (right). Stable Diffusion 3.5 Large (Esser et al., 2024) fails to generate "ang pow", which is commonly used in Singaporean English to mean "red packet", and FLUX.1 [dev] (Black Forest Labs, 2024) fails to generate "brinjal", which is synonymous with "eggplant" in Indian English. Furthermore, when the dialect lexeme is polysemous, i.e., has an alternative meaning in SAE, models tend to always generate content that align with the SAE meaning, even when the context makes such interpretation highly improbable. For example, DALL-E Mini (Dayma et al., 2021) generations of "A man driving his whip" fail to capture the correct meaning of "whip" as "car" in African American English, given clear context indications. Similar failure modes are observed in text-to-video generative models: Wan 2.1 (Wang et al., 2025) fails to correctly render "carnal", which refers to "brother" in Chicano English. In this work, we construct DialectGen, a new large-scale benchmark evaluating dialect robustness in image and video generation. Our benchmark dataset spans six common English dialects, including Standard American English (SAE), British English (BrE), Chicano English (ChE), Indian English (InE), and Singaporean English (SgE). For each dialect other than SAE, we create SAE Prompt / Dialect Prompt pairs that are semantically identical besides switching a single SAE lexeme for a synonymous dialect lexeme. We work with dialect speaker annotators to create a rigorous feature selection and prompt filtering pipeline that ensures the final dialect prompts are (1) exactly synony- mous with the SAE prompt; (2) valid in the dialect context; and (3) non-ambiguous (for polysemous lexemes). These strictly enforced quality guarantees facilitate the development of simple yet effective automatic and human evaluation metrics for evaluating generative model performance. We experi- ment with 17 widely used image and video generative models on DialectGen, demonstrating up to 38.63% and 48.17% performance drops for SOTA open-weights image and video generative models, respectively. To alleviate such significant dialect performance drops observed in current multimodal generative models, we design a general encoder-based learning strategy that enhances dialect robustness for diffusion-based multimodal generative models. Our method teaches the model’s text encoder to recognize dialect lexemes while retaining its knowledge of SAE polysemous lexemes. We also include an encoder-based KL regularization loss based on image-SAE caption datasets to regulate output distribution shifts. Experiments on five dialects show that our method is able to simultaneously improve Stable Diffusion 1.5 (Rombach et al., 2022) and SDXL (Podell et al., 2023) performance on five dialects to be on par with SAE performance. At the same time, we observe near zero (< 1%) SAE performance drop on the general MSCOCO (Lin et al., 2014) validation set for both models. Our key contributions include: • DialectGen, a new large-scale multi-dialectal benchmark for evaluating dialect robustness in text-to-image and text-to-video generation. • Comprehensive evaluation and analysis of 17 multimodal generative models and five baseline mitigation methods on DialectGen. • A high-performing method for improving dialect robustness in multimodal generation while maintaining strong SAE performance. 2 RELATED WORKS Linguists define dialects as regional variations of a language distinguished by unique features in lexicon, phonology, and grammar from each other, together constituting a single language (Hudson, 1996; Chambers & Trudgill, 1998; Fromkin et al., 1998; Nerbonne, 2009; Wardhaugh & Fuller, 2021). English, like any other language, is subject to such variations. However, most dataset resources and pre-training paradigms focus only on Standard American and British English (Gururangan et al., 2022), leading to dialect robustness issues and performance gaps in downstream machine 2 Table 1: Example paired textual data entries from the DialectGen dataset, including Lexeme, Concise Prompt, and Detailed Prompt. Dialect name abbreviations: SAE (Standard American English), AAE (African American English), BrE (British English), SgE (Singaporean English). Dialect Lexeme Concise Prompt Detailed Prompt SAE sneakers brand new sneakers a little girl wearing a pair of stylish white sneakers AAE kicks brand new kicks a little girl wearing a pair of stylish white kicks SAE bathroom a spacious bathroom a clean and tidy bathroom with shiny blue wall tiles BrE loo a spacious loo a clean and tidy loo with shiny blue wall tiles SAE squid a squid on a counter a large squid in an aquarium with colorful coral SgE sotong a sotong on a counter a large sotong in an aquarium with colorful coral learning applications. Previous works have analyzed and explored such dialectal performance gaps in NLP tasks like QA (Ziems et al., 2023), NLI (Ziems et al., 2022), dependency parsing, and POS tagging (Blodgett et al., 2018; Jørgensen et al., 2015). Recent works have also noticed the impact of dialect variations on text-to-image generation (Lee et al., 2023; Wan et al., 2024). Along this line of research, we create the first large-scale benchmark of dialect robustness in multimodal generation, evaluating both text-to-image and text-to-video generative models on inputs across six different dialects. Moreover, while lexicon, phonology, and grammar are the three key aspects that distinguish each dialect from others, existing works in Dialectal NLP have so far mainly focused on the grammar variations of dialects (Ziems et al., 2022; 2023; Blodgett et al., 2018; Jørgensen et al., 2015). In this work, we provide the first large-scale dataset of dialectal lexical variations, bridging the gap towards holistic dialectal variation evaluation and building dialect-robust machine learning models. 3 DIALECTGEN BENCHMARK 3.1 DATASET CONSTRUCTION To select dialect features for our benchmark dataset, we first gather dialect lexemes along with their dictionary definitions and example usages from publicly available regional English dictionaries including The Oxford Regional English Dictionary (Gates et al., 2023), Dictionary of American regional english (Cassidy et al., 1985), A dictionary of Singlish and Singapore English (Lee, 2004), Dictionary of Indian English (Subhash, 2020), and The Oxford Dictionary of African American English (Heinmiller, 2023). We collect a total of 1126 dialect lexemes for initial processing. Based on the dictionary definitions of the selected lexemes, we manually filter out: (1) potentially derogatory lexemes; (2) culture-unique lexemes without Standard American English (SAE) equiva- lents. We then carefully read the dictionary definitions of each remaining dialect lexeme and assign it a SAE equivalent lexeme with the same meaning, creating a list of pair-wise corresponding lexical features for each dialect. Examples of selected pairs can be seen in Table 1 and Figure 1. Next, we use GPT4o (Hurst et al., 2024) to generate prompts for each SAE word in our paired lexical feature set. We specifically instruct the model to generate prompts describing a visual scene with the lexeme playing a central role, which can be one of the following depending on the semantic role of the lexeme: (1) The central object in the scene; (2) The main action of the central object; (3) A prominent descriptive feature of the central object. We also ask the model to create two different sets of Concise and Detailed prompts for each SAE lexeme. Then we simply replace the SAE lexeme in the prompts with the dialect lexeme (Table 1) to create our two dialect evaluation settings: • Concise prompts generally consist of ≤6 words, with the goal of providing a more challenging evaluation setting where the multimodal generative model is not given too many contextual hints about the lexeme’s meaning. 3 • Detailed prompts generally consist of ≥9 words, with the goal of providing a more relaxed evaluation setting where the multimodal generative model can use more contextual hints to infer the lexeme’s meaning. These two evaluation settings also intuitively represent two common user input styles for multimodal generative models, where casual users may tend to provide concise prompts and professional users may be more inclined to write detailed prompts. Across Concise and Detailed evaluation settings, we generate a total of 6552 prompts. For specific Dialect Prompt / SAE Prompt pairs where the dialect lexeme has an additional polysemous meaning recorded in an SAE dictionary (Webster, 1869), we generate an additional SAE Polysemy Prompt, where the lexeme is used unambiguously in its SAE meaning. This data can be used for regulating model behavior in training scenarios. 3.2 DIALECT SPEAKER VALIDATION AND FILTERING Before admitting the generated prompts to our final evaluation benchmark, we carefully verify their quality and correctness with dialect speaker human annotators. We created a specialized Amazon MTurk interface (Figure 4) for prompt annotation and matching potential dialect speaker annotators to their spoken dialect: each human annotator must first self-identify their dialect background and then complete a dialect speaker assessment quiz (Ziems et al., 2023) that matches each annotator to at most one dialect (Figure 5). Annotators are only selected if both their self-identified dialect background and their quiz assessment result match to the same dialect. More details on human annotation are available in Section E. After each dialect speaker is selected, they will be presented with Dialect Prompt / SAE Prompt pairs where the only difference is the dialect lexeme being swapped with its SAE equivalent word. For each pair of prompts, the dialect speaker must answer two questions: 1. Does the given Dialect Prompt make sense in said Dialect and correspond exactly in meaning to the given SAE prompt in Standard American English? (Yes / No / I don’t know) 2. Is the given Dialect Prompt ambiguous? i.e., Does it have a reasonable alternative interpreta- tion in the Standard American English (SAE) context? (Yes / No / I don’t know) Each Dialect Prompt / SAE Prompt pair is presented to two independent dialect speaker annotators. A pair is included in the final dataset only if both human annotators answer “Yes” to the first question and “No” to the second question. Consistent responses ensure the dialect prompt is: (1) exactly synonymous with the SAE prompt. (2) valid in the dialectal context. (3) non-ambiguous (for polysemous lexemes). In total, dialect speaker filtering further removes 35.9% of all generated prompts, resulting in a final dataset containing 4,200 validated prompts. 4 EXPERIMENTS 4.1 EVALUATION METRICS Automatic Evaluation To automatically evaluate any multimodal generative model G(·) on our benchmark, we design scoring functions based on reference-free image-text alignment metrics, including VQAScore (Lin et al., 2024) and CLIPScore (Hessel et al., 2021). For simplicity, we denote any such alignment metric below as A. We further denote the DialectGen prompt subset for any dialect as P, which contains many SAE Prompt / Dialect Prompt pairs p = (ps, pd). For each individual text prompt ps or pd, we generate n images under different random seeds for text-to-image generative models, or uniformly sample n frames in a video for text-to-video generative models. Therefore, for each SAE Prompt / Dialect Prompt pair p = (ps, pd) ∈P, we can calculate its SAE and Dialect performance as follows: SAE(p, G) = 1 n n X i=1 A(ps, G(ps)i) (1) 4 Table 2: DialectGen benchmark results for popular text-to-image and text-to-video generative models, including Dialect-wise Performance Drop measured by VQAScore (Lin et al., 2024); and Overall Performance Drop measured by human eval, VQAScore, and CLIPScore (Hessel et al., 2021). Cells are highlighted based on numerical value normalized across the entire table, with darker red indicating a higher performance drop in the given metric. Model Overall Performance Drop (%) ↓ Dialect-wise Performance Drop (%) ↓ Human VQAScore CLIPScore AAE BrE ChE InE SgE Concise Prompts T2I Models Stable Diffusion 1.4 28.19 26.7 10.35 20.67 9.64 34.94 41.27 26.96 Stable Diffusion 1.5 29.77 27.06 10.32 19.51 8.66 36.5 42.15 28.48 Stable Diffusion 2.1 31.46 28.79 11.7 24.35 9.31 44.82 41.12 28.89 Stable Diffusion XL 29.8 26.69 10.88 23.37 7.95 41.22 38.74 22.17 Stable Diffusion 3 31.89 29.01 10.81 27.89 8.64 42.67 40.69 25.12 Stable Diffusion 3.5 Large 32.31 29.43 11.37 28.3 9.74 42.66 41.9 24.56 Stable Diffusion 3.5 Large Turbo 32.92 30.28 11.34 30.33 9.27 43.6 42.49 25.72 Flux.1 [dev] 36.43 32.26 10.88 30.61 10.83 44.64 42.59 32.62 DALL-E Mini 34.29 31.52 11.71 33.91 8.18 47.11 42.85 25.51 DALL-E 2 38.63 32.79 9.97 35.87 7.95 48.78 47.21 24.14 DALL-E 3 26.55 24.39 9.32 18.97 3.58 41.95 31.9 25.56 DALL-E 3 (w/ Prompt Rewrite) 20.19 18.25 6.69 22.11 6.48 26.86 23.05 12.74 gpt-image-1 (4o Image Gen) 22.18 19.18 7.65 26.12 5.2 26.09 26.51 11.99 T2V Models Cosmos-1 25.41 20.49 6.66 22.15 9.69 26.1 27.44 17.09 Open-Sora 29.98 26.63 8.93 22.59 9.19 43.09 31.74 26.53 VideoCrafter-2 32.5 30.24 10.51 25.36 9.43 50.36 39.95 26.08 CogVideoX 40.06 42.55 11.04 38.33 23.75 55.18 26.1 27.44 Wan 2.1 48.17 47.33 13.1 52.68 31.27 43.83 53.38 55.47 Detailed Prompts T2I Models Stable Diffusion 1.4 14.33 15.93 5.16 11.65 4.37 17.35 29.23 17.03 Stable Diffusion 1.5 16.56 16.17 5.51 11.18 5.39 17.34 28.7 18.22 Stable Diffusion 2.1 17.39 18.4 5.78 15.06 5.51 23.03 29.36 19.06 Stable Diffusion XL 17.12 17.09 5.83 14.09 5.56 20.57 30.12 15.1 Stable Diffusion 3 17.15 18.64 5.86 14.74 6.67 23.85 28.94 19.02 Stable Diffusion 3.5 Large 18.42 19.54 6.12 15.7 6.99 23.46 31.83 19.72 Stable Diffusion 3.5 Large Turbo 19.9 20.63 6.09 15.06 8.13 24.94 33.42 21.61 Flux.1 [dev] 23.29 21.25 5.46 14.84 9.11 25.69 31.4 25.23 DALL-E Mini 24.71 21.44 7.05 27.56 5.29 27.35 31.47 15.53 DALL-E 2 17.73 20.2 5.98 18.43 6.52 25.5 32.8 17.76 DALL-E 3 12.18 13.27 4.29 8.85 4.74 20.98 18.91 12.85 DALL-E 3 (w/ Prompt Rewrite) 6.55 10.77 2.97 11.93 5.28 10.62 17.09 8.94 gpt-image-1 (4o Image Gen) 8.98 10.97 3.24 13.72 4.46 10.56 15.96 10.17 T2V Models Cosmos-1 18.04 14.28 4.3 11.05 9.25 14.04 22.49 14.58 Open-Sora 17.16 14.1 4.57 13.49 5.13 19.4 19.8 12.69 VideoCrafter-2 22.59 18.31 5.91 16.97 4.18 24.16 27.63 18.61 CogVideoX 31.87 29.6 8.08 21.33 14.63 32.74 42.88 36.4 Wan 2.1 32.69 31.94 8.59 30.23 14.97 42.58 36.21 35.71 Dialect(p, G) = 1 n n X i=1 A(ps, G(pd)i) (2) Note that when calculating dialect performance, we align the SAE Prompt ps with multimodal output generated from the corresponding Dialect Prompt, i.e., G(pd). This is feasible given that the paired prompts are synonymous, as verified by dialect speaker annotators in Section 3.2. Based on this, we can compute the dialect-induced performance drop of G(·) for each prompt pair p: Drop(p, G) = SAE(p, G) −Dialect(p, G) SAE(p, G) = n X i=1 A(G(ps)i, ps) −A(G(pd)i, ps) A(G(ps)i, ps) (3) To obtain the average model performance drop for a specific dialect, i.e., Drop(P, G), we simply average Drop(p, G) for all p in P. Human Evaluation We further design a human evaluation pipeline to check the empirical alignment between our automatic evaluation metrics and human judgment. For 5% of the model outputs in our benchmark, we ask three independent external human annotators to evaluate: to what extent does the multimodal generations conditioned on the SAE Prompt G(ps) or Dialect Prompt G(pd) match with the scene described by SAE prompt ps. Annotators are asked to rate the alignment between each (image/video, caption) pair with a numerical score between 0 and 10. The numerical scores are scaled by 0.1 to match the scoring range of VQAScore and CLIPScore before calculating SAE and 5 Dialect performance. Finally, we use the same formula to calculate the dialect-induced performance drop Drop(p, G). Since we only evaluate the alignment between image/video and the SAE prompt, this task does not require dialect speaker human annotators. 4.2 BENCHMARK EXPERIMENTS Applying the automatic and human evaluation metrics described in Section 4.1, we evaluate popular open-weights and proprietary multimodal generative models on DialectGen. Model performances are separately aggregated for Concise Prompts and Detailed Prompts settings in Table 2. Overall Performances For each model, we record overall dialect-induced performance drop on DialectGen using three different metrics: Human Eval, VQAScore, and CLIPScore. We calculate Pearson correlation coefficients (Pearson, 1895) r between each of the two metrics and observe r(Human, VQAScore) = 0.968, r(Human, CLIPScore) = 0.924, and r(VQAScore, CLIPScore) = 0.907. This shows that while both automatic scoring metrics have high correlations to human judgement (the gold standard), VQAScore is a better-aligned scoring metric for measuring dialect- induced performance drop. Contrasting the model performance drops across the two evaluation settings Concise Prompts and Detailed Prompts, we can clearly see that all models exhibit significantly larger performance drops for concise prompts compared to detailed prompts. This is in line with our assumption that models can more easily infer the meanings of unknown dialect lexemes from richer prompt contexts, highlighting the need for challenging evaluation via concise prompts to reveal model robustness issues. Looking at individual model performances, we observe that among text-to-video generative models: Wan 2.1 (Wang et al., 2025) and CogVideoX (Yang et al., 2024) exhibit the largest overall performance drops while Cosmos-1 (Agarwal et al., 2025) is the most robust. While for text-to-image generative models, DALL-E 2 (Ramesh et al., 2022) and Flux.1 [dev] (Black Forest Labs, 2024) exhibit the largest overall performance drops while DALL-E 3 (Betker et al., 2023) (w/ Prompt Rewrite) and gpt-image-1 (4o Image Generation) (OpenAI, 2025) are the most robust. Dialect-wise Performance Drop In addition to overall performance, we record each model’s performance drop on each dialect, measured by VQAScore. Based on the color heatmap in Table 2, we can clearly see that the most severe performance drops occur for ChE and InE for most models, while AAE and SgE also suffer significant performance decreases. On the other hand, models generally do not see a very significant performance drop for BrE, which is expected given the relatively higher-resource nature of the dialect. 5 MITIGATION METHODS The significant dialect performance drops of current multimodal generative models shown in Sec- tion 4.2 highlight the need for effective mitigation strategies to improve dialect robustness. Here, the goal is to develop a method that enhances robustness across multiple dialects while preserving performance on standard SAE prompts. To this end, we first investigate intuitive baseline approaches, including (1) UNet Finetuning; (2) Prompt Revision, and then introduce our new mitigation strategy. 5.1 BASELINE METHODS UNet Finetuning The vast majority of current text-to-image and text-to-video generative models comprise two main components: a text encoder and a diffusion-based image/video decoder. In current post-training paradigms, typically the text encoder is kept frozen while the diffusion UNet is fine-tuned (Podell et al., 2023; Rombach et al., 2022; Betker et al., 2023; Dai et al., 2023). Existing works in aligning, enhancing, and customizing multimodal generative models also focus heavily on developing reward-based fine-tuning methods for the diffusion UNet while freezing the text encoder (Segalis et al., 2023; Clark et al., 2023; Prabhudesai et al., 2023; Black et al., 2023; Fan et al., 2023; Wallace et al., 2024; Dang et al., 2025). Based on existing works, we apply prominent multimodal generation enhancement methods towards improving dialect robustness, including: 6 PrompP 1 PrompP 1 A man driving his whip Dialect Prompts PrompP 1 PrompP 1 A man driving his car SAE Prompts PrompP 1 PrompP 1 A long whip made of braided rope SAE Polysemy Prompts Sn … S3 S2 S1 Pn … P3 P2 P1 Pn ’ … P3 ’ P2 ’ P1 ’ min(cos θ) Dialect Learning Polysemy Control Text Encoder Text Encoder Image Encoder PrompP 1 PrompP 1 A large white bowl of many green apples MSCOCO Captions C1 ’ C2 ’ C3 ’ … Cn ’ Cn … C3 C2 C1 I1⋅C1 I1⋅C2 I1⋅C3 … I1⋅Cn I2⋅C1 I2⋅C2 I2⋅C3 … I2⋅Cn I3⋅C1 I3⋅C2 I3⋅C3 … I3⋅Cn … … … … … In⋅C1 In⋅C2 In⋅C3 … In⋅Cn KL Regularization In … I3 I2 I1 I1⋅C1 ’ I1⋅C2 ’ I1⋅C3 ’ … I1⋅Cn ’ I2⋅C1 ’ I2⋅C2 ’ I2⋅C3 ’ … I2⋅Cn ’ I3⋅C1 ’ I3⋅C2 ’ I3⋅C3 ’ … I3⋅Cn ’ … … … … … In⋅C1 ’ In⋅C2 ’ In⋅C3 ’ … In⋅Cn ’ PrompP 1 PrompP 1 A large white bowl of many green apples MSCOCO Captions MSCOCO Images Text Encoder Text Encoder Text Encoder Text Encoder min(cos θ) Dn ’ … D3 ’ D2 ’ D1 ’ Figure 2: Losses used in our mitigation. Text prompts for Dialect Learning and Polysemy Control come from the DialectGen training set, while image-caption pairs for KL Regularization come from the MSCOCO validation set. • Diffusion Fine-tune (Rombach et al., 2022) given a pair of synonymous Dialect / SAE Prompts, we fine-tune the diffusion UNet with the Dialect Prompt as input, and images generated using the SAE Prompt as target output. • Diffusion DPO (Wallace et al., 2024) We similarly use the Dialect Prompt as input, and use images generated with the SAE Prompt / Dialect Prompt as Win / Lose pairs for DPO. Prompt Revision Beyond UNet fine-tuning, another popular family of methods for aligning and enhancing multimodal generative models is prompt revision (Hao et al., 2023; Betker et al., 2023; Wang et al., 2024; Chen et al., 2024). In our experiments, we include both a general prompt rewriting method and targeted prompt translation methods using general-purpose LLMs: • Prompt Rewrite We apply the general prompt rewriting pipeline in Betker et al. (2023) to all test prompts before passing them to the generative model. • Prompt Translate We use general-purpose LLMs (Grattafiori et al., 2024; OpenAI, 2025) to translate all prompts to SAE before passing them to the generative model. 5.2 OUR METHOD Unlike prior approaches, we propose a new mitigation strategy that focuses on updating the text encoder(s). A natural first step toward improving dialectal robustness is to align the semantic representation of a dialect expression with that of its corresponding SAE counterpart. Dialect Learning To operationalize this idea, we introduce a Dialect Learning loss that encourages the target text encoder to recognize dialectal lexemes by minimizing the cosine distance between the target encoder’s embedding of a dialect prompt and the frozen encoder’s embedding of its synonymous SAE prompt: LDL = 1 N N X i=1 1 −⟨π(pd i ), π0(ps i)⟩  . (4) Here, ⟨·, ·⟩denotes cosine similarity; π(·) and π0(·) represent the trainable target text encoder and the frozen reference encoder, respectively; and pd i and ps i denote the i-th pair of synonymous prompts in dialect and standard English, respectively. Although this may improve dialectal robustness, relying on this loss alone may compromise the model’s ability to handle dialect lexemes that exhibit polysemy in SAE contexts. Polysemy Control In order to retain the model’s ability to correctly recognize polysemous lexemes within SAE contexts, we introduce a Polysemy Control loss that minimizes the cosine distance between embeddings of the same SAE polysemous prompt generated by the target and frozen encoders: 7 Table 3: Mitigation results for all baseline methods and our best performing method, including Overall Performances on SAE MSCOCO, SAE Polysemy, average Dialect performance, and Dialect Performance for each dialect, all measured using VQAScore Lin et al. (2024). Cell colors reflect column-normalized performance values, with darker green indicating higher VQAScore performance. Mitigation Methods Overall Performances ↑ Dialect Performance ↑ SAE SAE Dialect AAE BrE ChE InE SgE MSCOCO Polysemy Avg. Base Model (Stable Diffusion 1.5) 75.49 72.84 57.80 60.13 69.39 52.65 49.94 56.89 Prompt Revision DALL-E 3 Prompt Rewrite 74.25 70.85 60.91 57.34 69.51 56.36 57.54 63.81 LLaMA 3 Prompt Translate 74.03 71.33 58.48 57.73 70.4 53.98 50.42 59.87 GPT4.1 Prompt Translate 74.54 71.47 63.90 60.87 74.39 59.05 60.20 64.98 UNet Fine-tuning Diffusion Finetune 65.01 52.13 60.94 63.85 70.14 57.3 52.84 60.56 Diffusion DPO 63.94 50.32 63.52 66.31 68.91 61.22 56.38 64.79 Our Encoder Tuning Methods Dialect Learning 67.14 46.30 78.02 75.21 78.33 79.31 78.10 79.15 + Text Cosine Reg. 67.06 46.39 77.93 75.44 77.84 79.31 78.22 78.86 + Image Cosine Reg. 67.73 46.48 78.00 74.91 78.20 79.45 78.33 79.11 + Text KL Reg. 72.68 52.72 77.78 74.40 78.27 78.36 78.17 79.71 + Image KL Reg. 71.69 53.41 78.12 73.77 77.23 79.06 79.25 81.29 + Text KL Reg. + Polysemy Ctrl. 72.71 70.15 77.74 72.24 75.76 78.95 80.67 81.07 + Image KL Reg. + Polysemy Ctrl. 74.80 71.17 77.68 72.61 76.74 77.51 80.41 81.14 LPC = 1 N N X i=1 (1 −⟨π(pm i ), π0(pm i )⟩) , (5) where each pm i is a polysemous SAE prompt sampled from the dataset. This loss is applied only to examples containing SAE polysemous lexemes. KL Regularization In addition to the previous two losses, it is also essential to preserve the model’s performance on general SAE prompts. To this end, one might consider employing the conventional Kullback-Leibler (KL) divergence loss, which promotes alignment between the output distributions of a trainable target model and a frozen reference model over a predefined discrete logit space. However, this approach is not directly applicable in our setting, as text encoders output continuous embeddings rather than discrete logits. To address this challenge, we approximate the output distribution by computing similarity scores between a given caption embedding and a set of reference image embeddings drawn from a joint image-text embedding space. Concretely, we begin by sampling M caption-image pairs n (xcap i , ximg i ) | i ∈[M] o from a general SAE dataset such as MSCOCO (Lin et al., 2014). For each pair, we compute the caption embedding Ci = π0(xcap i ) using a frozen text encoder π0, and the corresponding image embedding Ii = ϕ0(ximg i ) using a frozen image encoder ϕ0, with both encoders operating in the same shared text-image embedding space. The resulting image embeddings {Ii | i ∈[M]} serve as reference anchors for computing similarity scores with a given caption embedding. These scores act as surrogate logits that approximate the output distributions required for the KL divergence computation. Specifically, for each caption xcap i , we define the approximated output distributions for the frozen encoder π0 and the trainable target encoder π as: sπ0 i = [⟨I1, Ci⟩, . . . , ⟨IM, Ci⟩] , sπ i = [⟨I1, C′ i⟩, . . . , ⟨IM, C′ i⟩] , (6) where C′ i = π(xcap i ). Given these simulated logits, we define the KL divergence loss to encourage the target encoder’s output distribution to remain close to that of the frozen encoder: LKL = 1 M M X i=1 KL (softmax(sπ i ) ∥softmax(sπ0 i )) . (7) This approach is compatible with CLIP-style models (Radford et al., 2021; Zhai et al., 2023), in which image and text embeddings are aligned within a shared representation space. When an image encoder is unavailable, we instead use the frozen caption embeddings {Ci | i ∈[M]} as proxies for 8 reference anchors. We hereafter refer to the case where image embeddings are used as reference anchors as “Image KL Reg.” and the one using text embeddings as “Text KL Reg.” Based on these design choices, the final combined loss function integrates all three components: L = LDL + LPC + LKL as illustrated in Figure 2. For more details, please refer to Section B. 5.3 MITIGATION RESULTS Here, we validate all baselines and our method on SD1.5 and SDXL. Due to space limitations, the results for SDXL are reported in Section F. 5.3.1 COMPARISON WITH THE BASELINES As shown in Table 3, prompt rewriting methods that operate solely at the input level do not degrade SAE MSCOCO or polysemy performance, but yield only slight improvements up to 6.1% in average dialect performance. Furthermore, UNet fine-tuning approaches also lead to small gains of up to 5.7% in dialect performance, but at the cost of substantial drops in both general SAE and polysemy scores. In contrast, our method, corresponding to the last row of the table and incorporating all three loss components described in Section 5.2, significantly improves dialect robustness across all five dialects. Its average dialect performance of 77.68% closely approaches the base model’s SAE score of 77.91%, while causing negligible degradation in SAE MSCOCO and polysemy performance. 5.3.2 ABLATION STUDY To evaluate the contribution of each component in our method, we conduct an ablation study here. Base Model vs. Dialect Learning As shown in Table 3, applying the Dialect Learning loss (LDL) alone yields huge improvements in the base model’s dialect performance, but also degrades SAE MSCOCO and polysemy performance. Cosine Reg. vs. KL Reg. To solve this issue, simply maximizing cosine similarity between the target text encoder’s text embeddings and the corresponding text/image embeddings from the frozen text/image encoder (denoted as Text/Image Cosine Reg.), which is computed over the same caption-image pairs used in our KL regularization, does not effectively recover the base model’s SAE MSCOCO and polysemy performance. In contrast, adding our KL regularization loss (LKL) improves both metrics while preserving dialect gains. Adding Polysemy Ctrl. Finally, incorporating the Polysemy Control loss (LPC) yields substantial gains in polysemy performance, improving it by 17.43% and 17.76% for Text and Image KL Reg. respectively, underscoring the importance of this component in recognizing polysemous lexemes within SAE contexts. 6 LIMITATIONS Our study focuses on the lexical variations that characterize dialects, motivated by the empirical observation that such variations exert much greater influence on multimodal generative model performance than grammatical variations (see Section G). Furthermore, grammatical variation has already been the subject of extensive investigation in text-only contexts (Hudson, 1996; Chambers & Trudgill, 1998; Fromkin et al., 1998; Nerbonne, 2009; Wardhaugh & Fuller, 2021). These considerations jointly motivate our decision to prioritize the evaluation of lexical dialect variation, which appears especially consequential in the multimodal generative setting. Furthermore, our evaluation of text-image alignment utilizes reference-free metrics, namely VQAScore (Lin et al., 2024) and CLIPScore (Hessel et al., 2021). We recognize that these pretrained vision-language models are not perfect. To address this potential weakness, we conducted a thorough human evaluation and found very high statistical correlation between our automatic metrics and human judgment (Pearson correlation coefficient r = 0.968 for VQAScore and r = 0.924 for CLIPScore). Therefore, while acknowledging the imperfections of automated metrics, this high degree of human correlation provides strong evidence for the validity of our evaluation metrics and associated analysis conclusions. 9 7 CONCLUSIONS In this work, we create DialectGen, a large-scale multi-dialectal benchmark evaluating the dialect robustness of multimodal generative models. Our experiments on 17 widely used text-to-image and text-to-video generative models reveal severe performance drops up to 38.63% and 48.17% for image and video generative models, respectively. We further design an encoder-based mitigation strategy to enhance dialect robustness while preserving performance on Standard American English. 8 ETHICS STATEMENT This work makes use of human subjects for annotation and evaluation. All procedures were subject to ethical review and were approved by the IRB from the authors’ institution. Consent was gathered in accordance with the authors’ institution guidelines, and annotators had access to a data use statement when giving consent. The purpose of DialectGen is to provide tools that enable researchers and practitioners to evaluate and improve dialect robustness in their models. We will release these data responsibly, ensuring that users sign a Data Use Agreement that forbids the use of DialectGen for deception, impersonation, mockery, discrimination, hate speech, targeted harassment, and cultural appropriation. In the agreement, researchers and practitioners will also acknowledge the limitations of this work, that DialectGen may not fully or accurately represent the natural usage patterns of all sub-communities of speakers. DialectGen is designed to be easily updatable and configurable, such that it can be extended by and for specific sub-communities and updated as dialects evolve over time. We have carefully checked our data to make sure no personally identifying information or offensive content is included. When utilizing existing artifacts and models, we make sure to follow all relevant regulations and licenses. 9 REPRODUCIBILITY STATEMENT We have taken several steps to ensure the reproducibility of our work. Detailed descriptions of dataset construction, annotation procedures, evaluation protocols, and mitigation methods are provided in the main paper (see Sections 3, 4, etc.), with further implementation details, training configurations, and additional qualitative results included in the appendix (see Sections B, A, etc.). To facilitate independent verification, we also provide as anonymized supplementary material both the DialectGen benchmark dataset and the source code used for data processing, model training, and evaluation. The dataset files include all validated dialect–SAE prompt pairs, while the code folder contains scripts for dataset generation, automatic and human evaluation, and reproduction of all tables and figures reported in the paper. Together, these resources enable researchers to replicate our experimental results and extend the benchmark for future work. 10 ACKNOWLEDGEMENTS We would like to thank Connor Couture and Allen Cheung for designing the initial version of our MTurk annotation interface, and Xinrong Du for providing feedback. We also thank Prof. Diyi Yang, Prof. Xiang Chen, Caleb Ziems, Julia Kruk, Hritik Bansal, Jiachen Gu, Zongyu Lin, and Amita Kamath for valuable discussions and pointers; and especially Caleb Ziems for sharing the grammar-based dialect-speaker quiz he created. Finally, we appreciate Wenbo Hu, Lucas Bandarkar, Mohsen Fayyaz, and Tanmay Parekh for their helpful feedback on the paper draft. 10 REFERENCES Nur Aeni, Like Raskova Octaberlina, Nenni Dwi Aprianti Lubis, et al. A literature review of English Language Variation on Sociolinguistics. OSF, 2021. Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chat- topadhyay, Yongxin Chen, Yin Cui, Yifan Ding, et al. Cosmos world foundation model platform for physical ai. arXiv preprint arXiv:2501.03575, 2025. Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 610–623, 2021. James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn. openai. com/papers/dall-e-3. pdf, 2(3):8, 2023. Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301, 2023. Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024. Su Lin Blodgett, Johnny Wei, and Brendan O’Connor. Twitter universal dependency parsing for african-american and mainstream american english. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1415–1425, 2018. Frederic G Cassidy, Joan Houston Hall, and Luanne Von Schneidemesser. Dictionary of American regional english, volume 1. Belknap Press of Harvard University Press Cambridge, Mass., 1985. Jack K Chambers and Peter Trudgill. Dialectology. Cambridge University Press, 1998. Zijie Chen, Lichao Zhang, Fangsheng Weng, Lili Pan, and Zhenzhong Lan. Tailored visions: Enhancing text-to-image generation with personalized prompt rewriting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7727–7736, 2024. Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint arXiv:2309.17400, 2023. David Crystal. English as a Global Language. Cambridge University Press, Cambridge, 2 edition, 2003. ISBN 9780521823470. Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, et al. Emu: Enhancing image generation models using photogenic needles in a haystack. arXiv preprint arXiv:2309.15807, 2023. Meihua Dang, Anikait Singh, Linqi Zhou, Stefano Ermon, and Jiaming Song. Personalized preference fine-tuning of diffusion models. arXiv preprint arXiv:2501.06655, 2025. Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê Khac, Luke Melas, and Ritobrata Ghosh. Dall·e mini, 7 2021. URL https://github.com/borisdayma/ dalle-mini. Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis, 2024. URL https://arxiv. org/abs/2403.03206, 2, 2024. Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36:79858–79885, 2023. VA Fromkin, Robert Rodman, and V Hyams. An introduction to language 6e. Hartcourt Brace College Publishers: Orlando, FL, USA, 1998. 11 Henry Louis Gates, James Murray, et al. The Oxford Regional English Dictionary. Oxford University Press, Oxford, 2023. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. Suchin Gururangan, Dallas Card, Sarah K Dreier, Emily K Gade, Leroy Z Wang, Zeyu Wang, Luke Zettlemoyer, and Noah A Smith. Whose language counts as high quality? measuring language ideologies in text data selection. arXiv preprint arXiv:2201.10474, 2022. Yaru Hao, Zewen Chi, Li Dong, and Furu Wei. Optimizing prompts for text-to-image generation. Advances in Neural Information Processing Systems, 36:66923–66939, 2023. Jennifer KN Heinmiller. Compiling the oxford dictionary of african american english: A progress report. Dictionaries: Journal of the Dictionary Society of North America, 44(1):91–104, 2023. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Joseph Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Conference on Empirical Methods in Natural Language Processing, 2021. Dirk Hovy and Shannon L Spruit. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 591–598, 2016. Richard A Hudson. Sociolinguistics. Cambridge university press, 1996. Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Os- trow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. Anna Jørgensen, Dirk Hovy, and Anders Søgaard. Challenges of studying and processing dialects in social media. In Proceedings of the workshop on noisy user-generated text, pp. 9–18, 2015. Jack Tsen-Ta Lee. A Dictionary of Singlish and Singapore English. Lee, Jack Tsen-Ta, 2004. Accessed 2025-05-16. Tony Lee, Michihiro Yasunaga, Chenlin Meng, Yifan Mai, Joon Sung Park, Agrim Gupta, Yunzhi Zhang, Deepak Narayanan, Hannah Teufel, Marco Bellagente, et al. Holistic evaluation of text-to-image models. Advances in Neural Information Processing Systems, 36:69981–70011, 2023. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV), 2014. Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. In European Conference on Computer Vision, pp. 366–384. Springer, 2024. John Nerbonne. Data-driven dialectology. Language and linguistics compass, 3(1):175–198, 2009. OpenAI. Introducing 4o image generation. https://openai.com/index/ introducing-4o-image-generation/, 2025. Accessed: 2025-05-19. OpenAI. Gpt-4.1. https://openai.com/index/gpt-4-1/, April 2025. Accessed 18 May 2025. Karl Pearson. Notes on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 58:240–242, 1895. doi: 10.1098/rspl.1895.0041. Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952, 2023. 12 Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text-to-image diffusion models with reward backpropagation (2023). arXiv preprint arXiv:2310.03739, 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Dall·e 2: A new ai system that can create realistic images and art from a description in natural language. https: //openai.com/research/dall-e-2, 2022. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High- resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF confer- ence on computer vision and pattern recognition, pp. 10684–10695, 2022. Eyal Segalis, Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. A picture is worth a thousand words: Principled recaptioning improves image generation. ArXiv, abs/2310.16656, 2023. URL https://api.semanticscholar.org/CorpusID:266003242. V. Subhash. Dictionary of Indian English. V. Subhash, Hyderabad, 2020. ISBN 9789354374487. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228–8238, 2024. Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, and Kai-Wei Chang. Survey of bias in text-to-image generation: Definition. Evaluation, and Mitigation, 2024. Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, Jianyuan Zeng, Jiayu Wang, Jingfeng Zhang, Jingren Zhou, Jinkai Wang, Jixuan Chen, Kai Zhu, Kang Zhao, Keyu Yan, Lianghua Huang, Mengyang Feng, Ningyi Zhang, Pandeng Li, Pingyu Wu, Ruihang Chu, Ruili Feng, Shiwei Zhang, Siyang Sun, Tao Fang, Tianxing Wang, Tianyi Gui, Tingyu Weng, Tong Shen, Wei Lin, Wei Wang, Wei Wang, Wenmeng Zhou, Wente Wang, Wenting Shen, Wenyuan Yu, Xianzhong Shi, Xiaoming Huang, Xin Xu, Yan Kou, Yangyu Lv, Yifei Li, Yijing Liu, Yiming Wang, Yingya Zhang, Yitong Huang, Yong Li, You Wu, Yu Liu, Yulin Pan, Yun Zheng, Yuntao Hong, Yupeng Shi, Yutong Feng, Zeyinzi Jiang, Zhen Han, Zhi-Fan Wu, and Ziyu Liu. Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025. Zhijie Wang, Yuheng Huang, Da Song, Lei Ma, and Tianyi Zhang. Promptcharm: Text-to-image generation through multi-modal prompting and refinement. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1–21, 2024. Ronald Wardhaugh and Janet M Fuller. An introduction to sociolinguistics. John Wiley & Sons, 2021. Noah Webster. An American dictionary of the English language. Merriam, 1869. Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 11975–11986, 2023. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3836–3847, 2023. 13 Yu Zhou, Bingxuan Li, Mohan Tang, Xiaomeng Jin, Te-Lin Wu, Kuan-Hao Huang, Heng Ji, Kai-Wei Chang, and Nanyun Peng. Contrastive visual data augmentation. In ICML 2025, 2025. Caleb Ziems, Jiaao Chen, Camille Harris, Jessica Brooke Anderson, and Diyi Yang. Value: Un- derstanding dialect disparity in nlu. In Annual Meeting of the Association for Computational Linguistics, 2022. Caleb Ziems, William B. Held, Jingfeng Yang, and Diyi Yang. Multi-value: A framework for cross-dialectal english nlp. ACL 2023, 2023. 14 APPENDIX A QUALITATIVE COMPARISON In Figure 3, we provide additional qualitative examples to demonstrate the performances of the baseline mitigation strategy, Diffusion DPO (Wallace et al., 2024), compared with our method. Specifically, we update the Stable Diffusion 1.5 model encoder using Dialect Learning, Polysemy Control, and Image KL. After mitigation, we ask each model to generate images based on the four dialect prompts first mentioned in Figure 1. The Stable Diffusion 1.5 Base model struggles to generate correct images for most of these prompts, including "Two ang pows on a table", "A man selling brinjal", and "A man hiking with his carnal". While the model is able to generate moderately reasonable images for the prompt "A man driving his whip", it commonly generates physically implausible details such as the man’s torso protruding through the car. Fine-tuning the UNet with Diffusion DPO is able to slightly improve generation alignment with the text prompt (e.g., occasionally generating two people for the prompt "A man hiking with his carnal"). However, it more often blends visual elements within the desired target images with other irrelevant objects (e.g., generating a man selling purple pastries in place of eggplants or a man wearing a purple shirt holding vegetables). Our method generates higher-quality and better aligned images compared to the base model and Diffusion DPO by accurately learning to generate the target concepts without negatively impacting image quality. A significant majority of images in our sampled generations are able to generate images that correctly depict the target prompts, in line with quantitative evaluation results. Ours (Dialect Learning + Image KL + Polysemy Ctrl.) Fine-tune w/ Diffusion DPO Base Model (Stable Diffusion 1.5) A man hiking with his carnal “carnal” = “brother” in Chicano English A man driving his whip “whip” = “car” in African American English A man selling brinjal “brinjal” = “eggplant” in Indian English Two ang pows on a table “ang pow” = “red packet” in Singaporean English Figure 3: Qualitative Comparison of Mitigation Strategies using the Stable Diffusion 1.5 model (Rombach et al., 2022) on four different dialect prompts. Specifically, we compare the dialect prompt image generation results of the Stable Diffusion 1.5 Base Model, Stable Diffusion 1.5 fine-tuned with Diffusion DPO (Wallace et al., 2024), and Stable Diffusion 1.5 updated via our best performing method (Dialect Learning + Image KL Regularization + Polysemy Control). B IMPLEMENTATION DETAILS Data Preparation We first split the DialectGen dataset into training, validation, and test sets in a ratio of 80%, 10%, and 10%, respectively. These training and validation splits of DialectGen are used to compute the Dialect Learning loss and the Polysemy Control loss. For KL Regularization loss, we randomly sample 1,024 and 256 image-caption pairs from the MSCOCO validation set (Lin et al., 2014) for use in training and validation, respectively. The target text encoder is evaluated on 15 the validation set at the end of each epoch, and the checkpoint with the lowest validation loss is selected and saved for final evaluation. We then evaluate SAE polysemy and per-dialect performance using the test split of DialectGen, and assess SAE MSCOCO performance on 50 randomly sampled captions from the MSCOCO validation set. Training We employ the pretrained text encoder and fine-tune it for 30 epochs using the AdamW optimizer with an initial learning rate of 1 × 10−4, β1 = 0.9, β2 = 0.999, and ϵ = 1 × 10−8. A cosine annealing learning rate scheduler is applied across the 30 training epochs. The batch size, i.e., N in Equation (4) and Equation (5), is set to 32, and the number of image-caption pairs used for KL regularization, i.e., M in Equation (7), is set to 1,024. Training is completed in less than one hour on a single NVIDIA RTX A6000 GPU. In the case of SDXL, which includes both Base and Refiner encoders, the number of pairs M for the Refiner encoder is set to 512 due to its larger size, and training takes approximately one hour using four NVIDIA RTX A6000 GPUs, with all other configurations kept the same as in the Stable Diffusion 1.5 and SDXL Base encoder settings. About T2Video Models Video-generation models incur substantially higher computational cost than their image counterparts. Since our primary goal is to assess the models’ ability to interpret and render textual prompts, we generate only a small, fixed number of frames per video. This strategy is justified by two observations: (i) the first few frames typically suffice to judge prompt fidelity, and (ii) our prompts do not exhibit extensive motion, so long sequences offer diminishing returns. All models were obtained by cloning their official repositories and following the authors’ installation instructions. Frame numbers were uniformly reduced frame counts when possible, and in some cases, spatial resolution was also reduced to facilitate efficient evaluation—see Table 5 for the precise settings. Average time per video was measured on a single NVIDIA RTX A6000 GPU; the Wan2.1-T2V-14B model, which does not fit in single-GPU memory, was benchmarked using six A6000 GPUs under Fully Sharded Data Parallel (FSDP) supported by the repository under the xdit framework. All models except Wan2.1 fit under a single A6000 GPU and use approximately 20-30 GB of VRAM max. Wan2.1 takes at least 3 GPUs, taking an approximate memory usage of 100GB of combined VRAM. C MODEL DETAILS We provide detailed information on the multimodal generative models and key experimental settings used in our benchmark. Table 4 lists the comprehensive specifications for all models evaluated in our work, including both text-to-image and text-to-video models. For each model, we provide details such as its creator organization, initial release date, hosting platform, availability type (e.g., open source, proprietary), and model size. Table 5 describes in detail the key generation parameters used for the text-to-video models. This includes the specific resolution, number of frames, and inference steps used for each model. Further- more, we specify the average time required to generate a single video and the total time needed to generate our full video dataset to aid in understanding the reproducibility and computational cost of our experiments. D DATASET DETAILS The final DialectGen Dataset contains a total of 4632 prompts, which include 2100 non-SAE dialect prompts, 2100 SAE prompts, and 432 polysemous SAE prompts. The entire dataset is split into three subsets: training, validation, and test. The data split ratio is train : validation : test = 8 : 1 : 1. All benchmarking experiments are performed on the entire dataset, while for mitigation experiments, models are trained on the DialectGen training set while evaluated on the validation set. 16 Table 4: Detailed Model Specifications for all multimodal generative models (text-to-image and text- to-video generative models) benchmarked in this work. For reference and reproducibility, we include model name, model type, creator organization, initial release date, hosting platform, availability type, and model size. Model Name Model Type Created by Release Date Hosted by Availability Type Model Size Stable Diffusion 1.4 Text to Image CompVis 8/22/2022 Hugging Face Open Source 1B Stable Diffusion 1.5 Text to Image Runway ML 10/20/2022 Hugging Face Open Weights 1.3 B Stable Diffusion 2.1 Text to Image Stability AI 12/7/2022 Hugging Face Open Weights 1.3 B Stable Diffusion XL Text to Image Stability AI 7/26/2023 Hugging Face Open Weights 6.6 B Stable Diffusion 3 Medium Text to Image Stability AI 6/12/2024 Hugging Face Open Weights 2 B Stable Diffusion 3.5 Large Text to Image Stability AI 10/22/2024 Hugging Face Open Weights 8.1 B Stable Diffusion 3.5 Large Turbo Text to Image Stability AI 10/22/2024 Hugging Face Open Weights 8.1 B Flux.1 [dev] Text to Image Black Forest Labs 4/2/2024 Hugging Face Open Weights 12B DALL-E Mini Text to Image Boris Dayma et al. 7/25/2022 Github Open Weights 0.4 B DALL-E 2 Text to Image OpenAI 9/28/2022 OpenAI Proprietary N/A DALL-E 3 Text to Image OpenAI 8/20/2023 OpenAI Proprietary N/A gpt-image-1 Text to Image OpenAI 4/23/2025 OpenAI Proprietary N/A VideoCrafter-2 Text to Video Tencent 1/26/2024 Hugging Face Open Weights 1.4 B Open-Sora Text to Video HPC-AI Tech 6/17/2024 Hugging Face Open Weights 1.2 B CogVideoX Text to Video THUDM Lab 8/27/2024 Hugging Face Open Weights 5 B Cosmos-1 Text to Video Nvidia 1/6/2025 Hugging Face Open Weights 7 B Wan 2.1 Text to Video Alibaba 2/22/2025 Hugging Face Open Weights 14 B Table 5: Key Generation Parameters for Text-to-Video Generative Models. For reproducibility and computational cost estimation, we list GPU runtime per video in minutes and GPU runtime for the full video dataset (both concise and detailed = 4110 videos) in hours. All computational costs are estimated for NVIDIA-A6000 GPUs with 48 GB Memory. Model Version Resolution Frames Steps Time / Video (min) Time / Dataset (h) VideoCrafter2 512 × 512 16 50 5.0 342.5 OpenSora-STDiT-v3 405 × 720 51 30 8.3 570.8 CogVideoX-5b 720 × 480 10 10 6.1 416.7 Cosmos-1.0-Diffusion-7B-Text2World 704 × 1280 121 35 26.5 1815.3 Wan2.1-T2V-14B 832 × 480 10 12 4.8 329.4 Note: The dataset-scale timing for Wan2.1-T2V-14B was measured using 6 A6000 GPUs using xdit FSDP. 17 E HUMAN ANNOTATION DETAILS Figure 4: The Amazon Mechanical Turk Data Annotation Interface for dialect speaker human filtering of generated prompts (prompt generation details in Section 3). Human annotators may use the "View Instructions" button to collapse / re-open detailed annotation instructions at any time. The annotation interface places no maximum time limit on each annotation question. Human annotators are allowed to return to previously annotated questions and update their answers at any time. In the creation of the DialectGen Dataset, we recruit a total of 17 dialect speaker human annotators from Amazon Mechanical Turk. The demographic involves six annotators from Asia, eight annotators from North America, and 3 annotators from Europe. Each selected annotator is given the option to complete any number of questions as they prefer. We encourage each annotator to take regular breaks during the task and not to work consecutively for more than 2 hours on our task. Our task is relatively simple for dialect speakers as it mainly involves judging the plausibility and meaning of a sentence in their native dialect. We estimate each HIT to take around 12 seconds, this corresponds to an hourly wage of $15 USD. Our total annotation time is 21.84 hours, costing a total of $327.6. We ran 4 rounds of annotations, with a combined total of 6552 prompts. 35.9% of total proposed prompts were rejected by the annotators while 64.1% of prompts were approved. F MITIGATION RESULTS ON STABLE DIFFUSION XL Stable Diffusion XL consists of two encoders: a Base encoder and a Refiner encoder. We fine-tuned both components as part of our method. However, since the corresponding CLIP-style image encoder for the Refiner is not publicly accessible, only Text KL Regularization can be applied in this case. Given the Refiner’s larger size and additional encoding modules, we evaluate our final method against other baselines within this more complex configuration. 18 Figure 5: The English Dialect Speaker Assessment Quiz used for matching dialect speaker annotators to specific dialects for prompt annotation. We adapt the assessment quiz from the existing English Dialect Speaker Survey first created in MultiVALUE (Ziems et al., 2023), which asks the human annotator to select their linguistic acceptability preference for 10 different dialect excerpts. We report the mitigation results on Stable Diffusion XL (Podell et al., 2023) in Table 6, under the experimental setup described above. Similar to the findings on Stable Diffusion 1.5, Prompt Revision methods preserve general SAE performance but yield only marginal improvements in dialect VQAScore, with gains of up to 7.8%. Additionally, UNet fine-tuning methods also result in small gains of up to 5.3% in dialect performance, but at the cost of noticeable degradation in both SAE MSCOCO and SAE polysemy performance. In contrast, our method substantially improves dialect robustness across all five dialects, achieving an average performance of 85.99%, which surpasses the 19 Table 6: Mitigation results on SDXL (Podell et al., 2023) for all methods, including Overall Performances on SAE MSCOCO, SAE Polysemy, average Dialect performance, and Dialect Performance for each dialect, all measured using VQAScore (Lin et al., 2024). Cell colors reflect column-normalized performance values, with darker green indicating higher VQAScore performance. Mitigation Methods Overall Performances ↑ Dialect Performance ↑ SAE SAE Dialect AAE BrE ChE InE SgE MSCOCO Polysemy Avg. Base Model (Stable Diffusion XL) 86.21 78.21 61.55 61.17 77.58 47.04 53.21 68.76 Prompt Revision DALL-E 3 Prompt Rewrite 85.36 78.01 66.49 59.93 77.92 60.61 63.62 70.39 LLaMA 3 Prompt Translate 84.72 77.60 64.19 63.74 77.93 57.40 56.09 65.80 GPT4.1 Prompt Translate 85.93 78.12 69.30 61.97 82.24 63.87 65.45 72.97 UNet Fine-tuning Diffusion Finetune 70.49 52.37 65.22 65.31 76.69 60.12 58.05 65.91 Diffusion DPO 72.03 50.29 66.89 65.97 78.12 62.88 60.10 67.40 Ours Dialect Learning + Text KL Reg.+ Polysemy Reg. 85.45 78.08 85.99 82.43 84.71 85.97 89.70 87.14 base model’s SAE score of 84.43%, while inducing less than a 1% drop in both SAE MSCOCO and SAE polysemy performance. Table 7: Quantitative Effects of Grammatical and Lexical Variations on Multimodal Generation, measured in VQAScore. We evaluate three text-to-image generative models under the following dialectal variation types: Grammatical, Lexical, and Grammatical + Lexical. Values in parentheses indicate the percentage performance drop in VQAScore compared to baseline SAE performance. Model SAE Performance (%) Performance under Dialectal Variations (%) Grammatical Lexical Grammatical + Lexical DALL-E Mini 75.63 74.72 (-1.20) 51.92 (-31.35) 51.26 (-32.22) FLUX.1 dev 82.94 82.40 (-0.65) 61.88 (-25.39) 61.02 (-26.43) Stable Diffusion 3.5 Large 85.18 83.91 (-1.49) 65.37 (-23.26) 63.80 (-25.10) G GRAMMATICAL VS. LEXICAL ROBUSTNESS IN MULTIMODAL MODELS To establish the rationale for our study’s focus on lexical variations, we begin with an observation about multimodal generative models. These models often exhibit a notable insensitivity to gram- matical or syntactic structure, a tendency that likely arises from the bag-of-words nature of their CLIP-style encoders. This architectural trait means that variations in sentence construction, such as word order or verb tenses, tend to have a minimal effect on the final output. Table 8, adapted from Multi-VALUE (Ziems et al., 2023), showcases several examples of these grammatical variations. Table 8: Examples of Grammatical Dialect Variations between Standard American English (SAE) sentences and African American English (AAE) dialect sentences. The blue texts highlight unique features in SAE while the purple texts (if applicable) highlight corresponding features in AAE. Grammatical Variation Type SAE Prompt AAE Dialect Prompt Clause Structure A chair that can be folded A chair can be folded Negative Concord There is no food on the table There ain’t no food on the table Word Order A big and fresh fish A fish big and fresh Verb Morphology Mom brought rice to me Mom brin rice give me To formally quantify this observation, we conducted a small-scale experiment with three representative models in the African American English evaluation setting. We used the Multi-VALUE (Ziems et al., 2023) translation system to apply grammatical variations to 300 SAE prompts from DialectGen and evaluated their generation quality using VQAScore. 20 Table 9: Stable Diffusion 1.5 Mitigation Performance Breakdown by dialect for different mitigation methods on the DialectGen dataset for all baseline methods and ablations of our method. All performance scores are measured using VQAScore (Lin et al., 2024), higher score is better. Mitigation Methods Performance by Dialect (VQAScore) ↑ AAE BrE ChE InE SgE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Base Model (Stable Diffusion 1.5) 57.34 72.94 69.51 76.40 56.36 78.66 57.54 81.05 63.81 80.50 Prompt Revision DALL-E 3 Prompt Rewrite 57.73 73.16 70.40 77.86 53.98 79.99 50.42 81.33 59.87 81.66 LLaMA 3 Prompt Translate 60.87 70.36 74.39 76.49 59.05 78.15 60.22 81.09 64.98 79.84 GPT4.1 Prompt Translate 65.32 71.28 73.52 76.40 58.32 78.29 53.04 81.03 65.12 79.83 UNet Fine-tuning Diffusion Finetune 63.85 64.34 70.14 68.35 57.30 69.55 52.84 70.72 60.56 72.42 Diffusion DPO 66.31 63.02 68.91 69.17 61.22 67.83 56.38 70.94 64.79 71.85 Ours Dialect Learning 75.21 74.31 78.33 78.34 79.31 80.20 78.10 79.90 79.15 78.33 + Text Cosine Reg. 75.44 74.86 77.84 77.52 79.31 79.74 78.22 80.13 78.86 79.21 + Image Cosine Reg. 74.91 74.83 78.20 78.22 79.45 80.32 78.00 80.00 79.11 78.72 + Text KL Reg. 74.40 73.97 78.27 79.40 78.36 80.72 78.17 78.24 79.71 78.66 + Image KL Reg. 73.77 74.36 77.23 77.60 79.06 80.43 79.25 80.99 81.29 79.54 + Text KL Reg.+ Polysemy Ctrl. 72.24 72.25 75.76 79.57 78.95 79.27 80.67 79.89 81.07 79.84 + Image KL Reg.+ Polysemy Ctrl. 72.61 74.30 76.74 76.77 77.51 78.83 80.41 80.85 81.14 78.15 Table 10: Complete DialectGen Benchmark Performance Breakdown by dialect for all text-to- image and text-to-video generative models. All performance scores are measured using VQAS- core (Lin et al., 2024), higher score is better. Results complements Table 2 in the main paper. Model Performance by Dialect (VQAScore) ↑ AAE BrE ChE InE SgE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Concise Prompts T2I Models Stable Diffusion 1.4 60.66 76.47 71.46 79.08 51.31 78.86 47.5 80.88 57.64 78.92 Stable Diffusion 1.5 62.31 77.41 72.59 79.47 50.4 79.37 47.03 81.29 56.36 78.8 Stable Diffusion 2.1 60.97 80.59 76.37 84.21 45.88 83.15 50.63 85.99 58.53 82.31 Stable Diffusion XL 62.97 82.17 80.49 87.44 49.82 84.75 53.66 87.6 65.56 84.23 Stable Diffusion 3 60.9 84.46 79.22 86.71 48.32 84.29 51.91 87.52 61.64 82.32 Stable Diffusion 3.5 Large 60.16 83.91 80.53 89.22 48.93 85.33 51.53 88.69 63.21 83.79 Stable Diffusion 3.5 Large Turbo 57.27 82.2 79.4 87.51 47.16 83.62 50.07 87.06 61.72 83.09 Flux.1 [dev] 55.63 80.17 72.7 81.53 45.85 82.82 46.73 81.39 51.63 76.62 DALL-E Mini 50.86 76.96 73.55 80.1 41.51 78.48 44.07 77.11 54.11 72.64 DALL-E 2 52.07 81.19 79.19 86.03 42.54 83.05 43.11 81.66 61.65 81.27 DALL-E 3 67.09 82.8 85.68 88.86 50.43 86.87 58.8 86.34 64.3 86.38 DALL-E 3 w/ Rewrite 63.74 81.83 84.24 90.08 61.41 83.96 68.7 89.28 74.77 85.69 gpt-image-1 65.47 88.62 88.39 93.24 65.31 88.37 67.77 92.22 77.67 88.25 T2V Models Cosmos-1 59.61 76.57 68.87 76.26 53.27 72.08 56.84 78.34 54.04 65.18 Open-Sora 65.46 84.56 75.56 83.21 48.49 85.21 59.79 87.59 59.19 80.56 VideoCrafter-2 61.3 82.13 76.19 84.12 42.9 86.43 53.3 88.76 61.73 83.51 CogVideoX 36.72 59.54 42.55 55.8 27.71 61.82 28.76 63.23 25.98 44 Wan 2.1 29.57 62.49 47.02 68.41 30.37 54.07 30.68 65.81 30.23 67.89 Detailed Prompts T2I Models Stable Diffusion 1.4 70.07 79.31 74.19 77.58 65.24 78.94 56.99 80.53 63.87 76.98 Stable Diffusion 1.5 71.03 79.97 73.5 77.69 65.21 78.89 56.84 79.72 63.02 77.06 Stable Diffusion XL 72.82 84.76 80.84 85.6 68.19 85.85 61.1 87.44 70.93 83.55 Stable Diffusion 2.1 69.41 81.72 77.51 82.03 63.64 82.68 59.39 84.07 64.71 79.95 Stable Diffusion 3 74.27 87.11 82.58 88.48 66.21 86.95 62.59 88.08 67.32 83.13 Stable Diffusion 3.5 Large 73.21 86.84 83.24 89.5 67.05 87.6 60.55 88.82 67.65 84.27 Stable Diffusion 3.5 Large Turbo 73.24 86.23 81.07 88.24 64.83 86.37 58.46 87.81 65.05 82.98 Flux.1 [dev] 72.86 85.56 77.43 85.19 61.47 82.72 58.52 85.31 59.56 79.66 DALL-E Mini 53.69 74.12 69.5 73.38 52.39 72.11 50.47 73.65 58.22 68.92 DALL-E 2 64.72 79.34 80.2 85.79 62.33 83.66 55.51 82.6 66.07 80.34 DALL-E 3 77.75 85.3 83.82 87.99 68.16 86.26 71.19 87.79 73.51 84.35 DALL-E 3 w/ Rewrite 76.73 87.12 85.56 90.33 76.36 85.43 75.63 91.22 78.8 86.54 gpt-image-1 78.26 90.7 86.88 90.94 79.47 88.85 78.04 92.86 79.39 88.38 T2V Models Cosmos-1 64.61 72.64 67.1 73.94 57.62 67.03 56.58 73 50.39 58.99 Open-Sora 74.81 86.48 76.69 80.84 67.65 83.93 69.59 86.77 71.15 81.49 VideoCrafter-2 70.88 85.37 79.53 83 66.14 87.21 62.58 86.47 68.14 83.72 CogVideoX 39.83 50.63 46.4 54.35 38.89 57.82 35.8 62.68 25.51 40.11 Wan 2.1 55.79 79.96 62.14 73.08 42.39 73.82 48.86 76.6 48.73 75.8 The results, presented in Table 7, provide strong quantitative evidence supporting our initial analysis. While lexical feature variations cause significant performance drops for existing text-to-image generative models, grammatical variations do not incur significant performance drops. This clear distinction validates our decision to focus on the more impactful lexical variations throughout this work. 21 Table 11: Complete DialectGen Benchmark Performance Breakdown by dialect for all text- to-image and text-to-video generative models. All performance scores are measured using CLIP- Score (Hessel et al., 2021), higher score is better. Results complements Table 2 in the main paper. Model Performance by Dialect (CLIPScore) ↑ AAE BrE ChE InE SgE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Concise Prompts T2I Models Stable Diffusion 1.4 25.46 27.83 28.65 29.79 24.68 28.34 24.34 29.38 25.97 28.64 Stable Diffusion 1.5 25.79 27.95 28.66 29.91 24.7 28.32 24.38 29.44 25.86 28.65 Stable Diffusion 2.1 25.72 28.74 29.67 30.88 24.7 29.44 25.31 30.69 26.46 29.54 Stable Diffusion XL 25.81 28.69 29.97 31.21 25.37 29.57 25.85 31.15 27.45 30.23 Stable Diffusion 3 25.45 28.42 29.89 30.97 25.01 28.74 25.02 30.31 26.8 29.67 Stable Diffusion 3.5 Large 25.62 28.78 30.25 31.5 25.22 29.42 25.67 31.14 27.18 30.22 Stable Diffusion 3.5 Large Turbo 25.1 28.4 29.9 31.16 24.95 28.9 25.3 30.78 26.96 29.82 Flux.1 [dev] 24.74 27.54 28.52 29.88 24.21 27.97 24.78 29.48 25.4 28.31 DALL-E Mini 24.77 28.15 29.48 30.65 23.57 27.81 24.4 29.56 25.74 28.6 DALL-E 2 24.57 27.4 29.86 30.56 23.98 27.3 24.1 29.3 26.44 28.53 DALL-E 3 25.19 27.51 28.95 29.75 24.57 28.11 25.71 29.66 26.3 29.08 DALL-E 3 w/ Rewrite 24.92 26.91 29.41 30.11 25.15 27.57 26.87 29.93 27.12 28.47 gpt-image-1 25.96 28.33 30.94 31.62 26.51 29.48 27.51 31.21 28.57 30.33 T2V Models Cosmos-1 23.49 25.42 26.17 27.16 22.89 24.62 24.18 27.04 21.89 22.91 Open-Sora 25.02 27.3 28.63 29.73 24.34 27.09 25.35 29.36 25.55 28.01 VideoCrafter-2 25.88 28.83 29.41 30.69 25.04 29.04 25.88 30.56 27 29.69 CogVideoX 22.62 25.71 24.14 25.84 22.03 24.61 22.95 27.4 19.99 22.18 Wan 2.1 22.37 25.49 25.45 28.27 22.14 24.55 22.55 27.57 22.75 26.85 Detailed Prompts T2I Models Stable Diffusion 1.4 27.98 28.84 29.59 30.23 28.61 30.1 26.79 29.83 28.12 29.78 Stable Diffusion 1.5 28.08 28.99 29.54 30.29 28.52 30.18 26.87 29.94 27.98 29.82 Stable Diffusion 2.1 28.57 29.9 30.83 31.69 29.68 31.51 28.24 31.47 29.54 31.32 Stable Diffusion XL 28.46 29.68 30.49 31.33 29.12 31.14 27.95 30.96 28.65 30.52 Stable Diffusion 3 28.69 29.82 30.76 31.59 29.13 31.15 27.82 31 29.13 31.04 Stable Diffusion 3.5 Large 28.86 30.01 31.02 31.84 29.48 31.64 28.04 31.61 29.29 31.19 Stable Diffusion 3.5 Large Turbo 28.61 29.58 30.67 31.6 29.09 31.23 27.8 31.18 28.76 30.78 Flux.1 [dev] 27.97 28.69 29.54 30.37 28.17 30.17 27.15 30.01 27.72 29.46 DALL-E Mini 27.26 29.18 29.84 30.56 27.75 30.23 26.71 29.93 27.42 29.59 DALL-E 2 27.66 29.02 30.5 31.3 28.57 30.54 26.48 30.17 28.69 29.88 DALL-E 3 27.79 28.3 29.55 30.21 28.52 30.04 27.48 29.83 28.67 30.03 DALL-E 3 w/ Rewrite 27.71 28.23 29.75 30.46 28.88 29.85 28.57 29.98 28.61 29.42 gpt-image-1 28.65 29.45 31.2 31.6 29.98 31.29 29.41 30.81 30.18 31.27 T2V Models Cosmos-1 23.07 23.79 25.98 26.67 24.19 24.94 23.35 25.29 19.99 21.09 Open-Sora 27.4 28.36 29.5 29.93 28.07 29.46 27.64 30.04 27.64 29.19 VideoCrafter-2 28.4 29.76 30.24 30.98 28.95 31 27.83 30.88 28.74 30.61 CogVideoX 21.42 22.55 24.38 25.74 22.89 24.6 22.37 25.51 17.67 19.82 Wan 2.1 25.85 27.89 27.96 29.2 26.05 28.83 25.25 28.92 25.45 27.98 H PERFORMANCE BY DIALECT Due to space constraints, we report performance by dialect in Table 9, Table 10, and Table 11. As de- scribed in Section 4.1, the scoring functions are based on reference-free image-text alignment metrics, including VQAScore and CLIPScore. We denote the subset of DialectGen prompts corresponding to a given dialect as P, which consists of multiple SAE Prompt / Dialect Prompt pairs p = (ps, pd). For each individual text prompt ps or pd, we generate n images under different random seeds for text-to-image generative models, or uniformly sample n frames for text-to-video generative models. Accordingly, for each SAE Prompt / Dialect Prompt pair p = (ps, pd) ∈P, we compute its SAE and Dialect performance using Equation (1) and Equation (2), respectively. More concretely, SAE(p, G) in Equation (1) denotes the average VQAScore (as reported in Table 9 and Table 10) or CLIPScore (in Table 11) computed over the n images generated from the SAE prompt ps. Similarly, Dialect(p, G) in Equation (2) is computed using the same evaluation pipeline, but with the corresponding dialect prompt pd from the same pair. Each value of SAE(p, G) and Dialect(p, G) is reported as SAE and Dialect, respectively, in the tables. 22 I USE OF AI TOOLS We employed large language models (LLMs), including OpenAI’s GPT-5 and GPT-4o, as auxiliary tools to refine the manuscript and identify grammatical errors. All LLM-assisted content was critically reviewed, fact-checked, and revised by the authors to ensure scientific validity and originality. The authors retain full responsibility for all statements and conclusions presented in this work. Specifically, LLMs were used only to improve wording and clarity of expression. J FUTURE WORK Our work highlights several promising directions for future research, which we encourage the community to explore. Investigating Cultural and Representational Biases It would be interesting for future works to explore and evaluate the significance of representational and skin tone shifts induced by dialect inputs. For instance, as noted in Figure 1, we observed that FLUX.1 [dev] (Black Forest Labs, 2024) image generations for the prompt “A man selling eggplant” depict more upscale and decorated environments compared to generations for “A man selling brinjal.” Furthermore, individuals depicted in the images for “brinjal” are darker-skinned. A systematic study of these shifts would provide valuable insights into the inherent biases of large-scale multimodal models. Exploring Grammatical and Joint Dialect Variations While this work concentrated on lexical variations, we welcome future works in this line to carefully study the impacts of grammatical dialect variations and their joint effects with lexical variations. Such research could reveal more complex interactions and failure modes in the performance of multimodal generative models. Investigating Downstream Impacts of Dialectal Performance Gaps Many existing studies rely on the accurate semantic understanding and high-fidelity generation capabilities of multimodal text-to-image and text-to-video generative models Zhang et al. (2023); Wallace et al. (2024); Zhou et al. (2025). It would be interesting to investigate the downstream research impacts of dialectal performance gaps on these works as well as downstream societal impacts to dialect speaker user groups. Extending Evaluation to Multi-Lexeme Prompts Another related area for future work is the extension of our evaluation to settings where multiple dialect lexemes are used. This would test the models’ compositional understanding of dialectal language, and we encourage future works to explore such possibilities. However, it should be noted that creating high-quality, controlled data at scale for such experiments is a non-trivial problem that needs to be addressed. Applying the Mitigation Strategy to Text-to-Video Models While our proposed mitigation strategy is designed to be broadly compatible with most multimodal models, it would be interesting to apply our method to text-to-video generative models. Our experiments were limited to text-to-image models due to resource constraints. Therefore, we encourage future researchers with the necessary computing resources to experiment in this domain, as it would serve as a strong test of our method’s generalizability. 23
DIALECTGEN: BENCHMARKING AND IMPROVING DIALECT ROBUSTNESS IN MULTIMODAL GENERATION Yu Zhou ∗†, Sohyun An ∗, Haikang Deng ∗, Da Yin, Clark Peng, Cho-Jui Hsieh, Kai-Wei Chang, Nanyun Peng {yuzhou, kwchang, Stable Diffusion 3.5 Two red packets on a table (Standard American English) Two ang pows on a table (Singaporean English) DALL·E Mini A man driving his car (Standard American English) A man driving his whip (African American English) A man hiking with his brother (Standard American English) A man hiking with his carnal (Chicano English) Wan 2.1 Video FLUX.1 dev A man selling brinjal (Indian English) A man selling eggplant (Standard American English) Figure 1: Multimodal Generative Model Outputs on semantically identical prompts that differ only in one synonymous lexical feature in Standard American English (top) / a lower-resource English dialect (bottom). ABSTRACT Contact languages like English exhibit rich regional variations in the form of dialects, which are often used by dialect speakers interacting with generative models. However, can multimodal generative models effectively produce content given dialectal textual input? In this work, we study this question by constructing a new large-scale benchmark spanning six common English dialects. We work with dialect speakers to collect and verify over 4,200 unique prompts and evaluate on 17 image and video generative models. Our automatic and human evaluation results show that current state-of-the-art multimodal generative models exhibit 32.26% to 48.17% performance degradation when a single dialect word is used in the prompt. Common mitigation methods such as fine-tuning and prompt rewriting can only improve dialect performance by small margins (< 7%), while potentially incurring significant performance degradation in Standard American English (SAE). To this end, we design a general encoder-based mitigation strategy for multimodal generative models. Our method teaches the model to recognize new dialect features while preserving SAE performance. Experiments on models such as Stable Diffusion 1.5 show that our method is able to simultaneously raise performance on five dialects to be on par with SAE (+34.4%), while incurring near-zero cost to SAE performance. Code and data at: dialectgen.github.io. 1 INTRODUCTION Linguists have defined over 160 dialects (Aeni et al., 2021) within the English language, with three out of four English speakers having a dialect background other than Standard American or British English (Crystal, 2003). Despite this rich diversity, current pre-training paradigms employ content filters that can exclude data involving lower-resource English dialects other than Standard American and British English (Gururangan et al., 2022), reducing the effectiveness of pretrained models on * Core contributors. † Project lead. 1 16 Oct 2025 inputs from other dialects (Lee et al., 2023). Prior works have shown significant allocational harms toward dialect speakers caused by such dialect performance discrepancies in machine learning applications (Hovy & Spruit, 2016; Bender et al., 2021), making the observation of similar performance trends in multimodal generative models an alarming sign. As shown in Figure 1, while current multimodal generative models can accurately generate high quality image and video content given Standard American English (SAE) prompts (left); they fail in various manners when provided with semantically equivalent prompts containing a single synonymous dialect word (right). Stable Diffusion 3.5 Large (Esser et al., 2024) fails to generate "ang pow", which is commonly used in Singaporean English to mean "red packet", and FLUX.1 [dev] (Black Forest Labs, 2024) fails to generate "brinjal", which is synonymous with "eggplant" in Indian English. Furthermore, when the dialect lexeme is polysemous, i.e., has an alternative meaning in SAE, models tend to always generate content that align with the SAE meaning, even when the context makes such interpretation highly improbable. For example, DALL-E Mini (Dayma et al., 2021) generations of "A man driving his whip" fail to capture the correct meaning of "whip" as "car" in African American English, given clear context indications. Similar failure modes are observed in text-to-video generative models: Wan 2.1 (Wang et al., 2025) fails to correctly render "carnal", which refers to "brother" in Chicano English. In this work, we construct DialectGen, a new large-scale benchmark evaluating dialect robustness in image and video generation. Our benchmark dataset spans six common English dialects, including Standard American English (SAE), British English (BrE), Chicano English (ChE), Indian English (InE), and Singaporean English (SgE). For each dialect other than SAE, we create SAE Prompt / Dialect Prompt pairs that are semantically identical besides switching a single SAE lexeme for a synonymous dialect lexeme. We work with dialect speaker annotators to create a rigorous feature selection and prompt filtering pipeline that ensures the final dialect prompts are (1) exactly synonymous with the SAE prompt; (2) valid in the dialect context; and (3) non-ambiguous (for polysemous lexemes). These strictly enforced quality guarantees facilitate the development of simple yet effective automatic and human evaluation metrics for evaluating generative model performance. We experiment with 17 widely used image and video generative models on DialectGen, demonstrating up to 38.63% and 48.17% performance drops for SOTA open-weights image and video generative models, respectively. To alleviate such significant dialect performance drops observed in current multimodal generative models, we design a general encoder-based learning strategy that enhances dialect robustness for diffusion-based multimodal generative models. Our method teaches the model's text encoder to recognize dialect lexemes while retaining its knowledge of SAE polysemous lexemes. We also include an encoder-based KL regularization loss based on image-SAE caption datasets to regulate output distribution shifts. Experiments on five dialects show that our method is able to simultaneously improve Stable Diffusion 1.5 (Rombach et al., 2022) and SDXL (Podell et al., 2023) performance on five dialects to be on par with SAE performance. At the same time, we observe near zero (< 1%) SAE performance drop on the general MSCOCO (Lin et al., 2014) validation set for both models. Our key contributions include: • DialectGen, a new large-scale multi-dialectal benchmark for evaluating dialect robustness in text-to-image and text-to-video generation. • Comprehensive evaluation and analysis of 17 multimodal generative models and five baseline mitigation methods on DialectGen. • A high-performing method for improving dialect robustness in multimodal generation while maintaining strong SAE performance. 2 RELATED WORKS Linguists define dialects as regional variations of a language distinguished by unique features in lexicon, phonology, and grammar from each other, together constituting a single language (Hudson, 1996; Chambers & Trudgill, 1998; Fromkin et al., 1998; Nerbonne, 2009; Wardhaugh & Fuller, 2021). English, like any other language, is subject to such variations. However, most dataset resources and pre-training paradigms focus only on Standard American and British English (Gururangan et al., 2022), leading to dialect robustness issues and performance gaps in downstream machine 2 Table 1: Example paired textual data entries from the DialectGen dataset, including Lexeme, Concise Prompt, and Detailed Prompt. Dialect name abbreviations: SAE (Standard American English), AAE (African American English), BrE (British English), SgE (Singaporean English). Dialect Lexeme Concise Prompt Detailed Prompt SAE sneakers brand new sneakers a little girl wearing a pair of stylish white sneakers AAE kicks brand new kicks a little girl wearing a pair of stylish white kicks SAE bathroom a spacious bathroom a clean and tidy bathroom with shiny blue wall tiles BrE loo a spacious loo a clean and tidy loo with shiny blue wall tiles SAE squid a squid on a counter a large squid in an aquarium with colorful coral SgE sotong a sotong on a counter a large sotong in an aquarium with colorful coral learning applications. Previous works have analyzed and explored such dialectal performance gaps in NLP tasks like QA (Ziems et al., 2023), NLI (Ziems et al., 2022), dependency parsing, and POS tagging (Blodgett et al., 2018; Jørgensen et al., 2015). Recent works have also noticed the impact of dialect variations on text-to-image generation (Lee et al., 2023; Wan et al., 2024). Along this line of research, we create the first large-scale benchmark of dialect robustness in multimodal generation, evaluating both text-to-image and text-to-video generative models on inputs across six different dialects. Moreover, while lexicon, phonology, and grammar are the three key aspects that distinguish each dialect from others, existing works in Dialectal NLP have so far mainly focused on the grammar variations of dialects (Ziems et al., 2022; 2023; Blodgett et al., 2018; Jørgensen et al., 2015). In this work, we provide the first large-scale dataset of dialectal lexical variations, bridging the gap towards holistic dialectal variation evaluation and building dialect-robust machine learning models. 3 DIALECTGEN BENCHMARK 3.1 DATASET CONSTRUCTION To select dialect features for our benchmark dataset, we first gather dialect lexemes along with their dictionary definitions and example usages from publicly available regional English dictionaries including The Oxford Regional English Dictionary (Gates et al., 2023), Dictionary of American regional english (Cassidy et al., 1985), A dictionary of Singlish and Singapore English (Lee, 2004), Dictionary of Indian English (Subhash, 2020), and The Oxford Dictionary of African American English (Heinmiller, 2023). We collect a total of 1126 dialect lexemes for initial processing. Based on the dictionary definitions of the selected lexemes, we manually filter out: (1) potentially derogatory lexemes; (2) culture-unique lexemes without Standard American English (SAE) equivalents. We then carefully read the dictionary definitions of each remaining dialect lexeme and assign it a SAE equivalent lexeme with the same meaning, creating a list of pair-wise corresponding lexical features for each dialect. Examples of selected pairs can be seen in Table 1 and Figure 1. Next, we use GPT4o (Hurst et al., 2024) to generate prompts for each SAE word in our paired lexical feature set. We specifically instruct the model to generate prompts describing a visual scene with the lexeme playing a central role, which can be one of the following depending on the semantic role of the lexeme: (1) The central object in the scene; (2) The main action of the central object; (3) A prominent descriptive feature of the central object. We also ask the model to create two different sets of Concise and Detailed prompts for each SAE lexeme. Then we simply replace the SAE lexeme in the prompts with the dialect lexeme (Table 1) to create our two dialect evaluation settings: • Concise prompts generally consist of ≤6 words, with the goal of providing a more challenging evaluation setting where the multimodal generative model is not given too many contextual hints about the lexeme's meaning. 3 • Detailed prompts generally consist of ≥9 words, with the goal of providing a more relaxed evaluation setting where the multimodal generative model can use more contextual hints to infer the lexeme's meaning. These two evaluation settings also intuitively represent two common user input styles for multimodal generative models, where casual users may tend to provide concise prompts and professional users may be more inclined to write detailed prompts. Across Concise and Detailed evaluation settings, we generate a total of 6552 prompts. For specific Dialect Prompt / SAE Prompt pairs where the dialect lexeme has an additional polysemous meaning recorded in an SAE dictionary (Webster, 1869), we generate an additional SAE Polysemy Prompt, where the lexeme is used unambiguously in its SAE meaning. This data can be used for regulating model behavior in training scenarios. 3.2 DIALECT SPEAKER VALIDATION AND FILTERING Before admitting the generated prompts to our final evaluation benchmark, we carefully verify their quality and correctness with dialect speaker human annotators. We created a specialized Amazon MTurk interface (Figure 4) for prompt annotation and matching potential dialect speaker annotators to their spoken dialect: each human annotator must first self-identify their dialect background and then complete a dialect speaker assessment quiz (Ziems et al., 2023) that matches each annotator to at most one dialect (Figure 5). Annotators are only selected if both their self-identified dialect background and their quiz assessment result match to the same dialect. More details on human annotation are available in Section E. After each dialect speaker is selected, they will be presented with Dialect Prompt / SAE Prompt pairs where the only difference is the dialect lexeme being swapped with its SAE equivalent word. For each pair of prompts, the dialect speaker must answer two questions: 1. Does the given Dialect Prompt make sense in said Dialect and correspond exactly in meaning to the given SAE prompt in Standard American English? (Yes / No / I don't know) 2. Is the given Dialect Prompt ambiguous? i.e., Does it have a reasonable alternative interpretation in the Standard American English (SAE) context? (Yes / No / I don't know) Each Dialect Prompt / SAE Prompt pair is presented to two independent dialect speaker annotators. A pair is included in the final dataset only if both human annotators answer "Yes" to the first question and "No" to the second question. Consistent responses ensure the dialect prompt is: (1) exactly synonymous with the SAE prompt. (2) valid in the dialectal context. (3) non-ambiguous (for polysemous lexemes). In total, dialect speaker filtering further removes 35.9% of all generated prompts, resulting in a final dataset containing 4,200 validated prompts. 4 EXPERIMENTS 4.1 EVALUATION METRICS Automatic Evaluation To automatically evaluate any multimodal generative model G(·) on our benchmark, we design scoring functions based on reference-free image-text alignment metrics, including VQAScore (Lin et al., 2024) and CLIPScore (Hessel et al., 2021). For simplicity, we denote any such alignment metric below as A. We further denote the DialectGen prompt subset for any dialect as P, which contains many SAE Prompt / Dialect Prompt pairs p = (ps, pd). For each individual text prompt ps or pd, we generate n images under different random seeds for text-to-image generative models, or uniformly sample n frames in a video for text-to-video generative models. Therefore, for each SAE Prompt / Dialect Prompt pair p = (ps, pd) ∈P, we can calculate its SAE and Dialect performance as follows: SAE(p, G) = 1 n n X i=1 A(ps, G(ps)i) (1) 4 Table 2: DialectGen benchmark results for popular text-to-image and text-to-video generative models, including Dialect-wise Performance Drop measured by VQAScore (Lin et al., 2024); and Overall Performance Drop measured by human eval, VQAScore, and CLIPScore (Hessel et al., 2021). Cells are highlighted based on numerical value normalized across the entire table, with darker red indicating a higher performance drop in the given metric. Model Overall Performance Drop (%) ↓ Dialect-wise Performance Drop (%) ↓ Human VQAScore CLIPScore AAE BrE ChE InE SgE Concise Prompts T2I Models Stable Diffusion 1.4 28.19 26.7 10.35 20.67 9.64 34.94 41.27 26.96 Stable Diffusion 1.5 29.77 27.06 10.32 19.51 8.66 36.5 42.15 28.48 Stable Diffusion 2.1 31.46 28.79 11.7 24.35 9.31 44.82 41.12 28.89 Stable Diffusion XL 29.8 26.69 10.88 23.37 7.95 41.22 38.74 22.17 Stable Diffusion 3 31.89 29.01 10.81 27.89 8.64 42.67 40.69 25.12 Stable Diffusion 3.5 Large 32.31 29.43 11.37 28.3 9.74 42.66 41.9 24.56 Stable Diffusion 3.5 Large Turbo 32.92 30.28 11.34 30.33 9.27 43.6 42.49 25.72 Flux.1 [dev] 36.43 32.26 10.88 30.61 10.83 44.64 42.59 32.62 DALL-E Mini 34.29 31.52 11.71 33.91 8.18 47.11 42.85 25.51 DALL-E 2 38.63 32.79 9.97 35.87 7.95 48.78 47.21 24.14 DALL-E 3 26.55 24.39 9.32 18.97 3.58 41.95 31.9 25.56 DALL-E 3 (w/ Prompt Rewrite) 20.19 18.25 6.69 22.11 6.48 26.86 23.05 12.74 gpt-image-1 (4o Image Gen) 22.18 19.18 7.65 26.12 5.2 26.09 26.51 11.99 T2V Models Cosmos-1 25.41 20.49 6.66 22.15 9.69 26.1 27.44 17.09 Open-Sora 29.98 26.63 8.93 22.59 9.19 43.09 31.74 26.53 VideoCrafter-2 32.5 30.24 10.51 25.36 9.43 50.36 39.95 26.08 CogVideoX 40.06 42.55 11.04 38.33 23.75 55.18 26.1 27.44 Wan 2.1 48.17 47.33 13.1 52.68 31.27 43.83 53.38 55.47 Detailed Prompts T2I Models Stable Diffusion 1.4 14.33 15.93 5.16 11.65 4.37 17.35 29.23 17.03 Stable Diffusion 1.5 16.56 16.17 5.51 11.18 5.39 17.34 28.7 18.22 Stable Diffusion 2.1 17.39 18.4 5.78 15.06 5.51 23.03 29.36 19.06 Stable Diffusion XL 17.12 17.09 5.83 14.09 5.56 20.57 30.12 15.1 Stable Diffusion 3 17.15 18.64 5.86 14.74 6.67 23.85 28.94 19.02 Stable Diffusion 3.5 Large 18.42 19.54 6.12 15.7 6.99 23.46 31.83 19.72 Stable Diffusion 3.5 Large Turbo 19.9 20.63 6.09 15.06 8.13 24.94 33.42 21.61 Flux.1 [dev] 23.29 21.25 5.46 14.84 9.11 25.69 31.4 25.23 DALL-E Mini 24.71 21.44 7.05 27.56 5.29 27.35 31.47 15.53 DALL-E 2 17.73 20.2 5.98 18.43 6.52 25.5 32.8 17.76 DALL-E 3 12.18 13.27 4.29 8.85 4.74 20.98 18.91 12.85 DALL-E 3 (w/ Prompt Rewrite) 6.55 10.77 2.97 11.93 5.28 10.62 17.09 8.94 gpt-image-1 (4o Image Gen) 8.98 10.97 3.24 13.72 4.46 10.56 15.96 10.17 T2V Models Cosmos-1 18.04 14.28 4.3 11.05 9.25 14.04 22.49 14.58 Open-Sora 17.16 14.1 4.57 13.49 5.13 19.4 19.8 12.69 VideoCrafter-2 22.59 18.31 5.91 16.97 4.18 24.16 27.63 18.61 CogVideoX 31.87 29.6 8.08 21.33 14.63 32.74 42.88 36.4 Wan 2.1 32.69 31.94 8.59 30.23 14.97 42.58 36.21 35.71 Dialect(p, G) = 1 n n X i=1 A(ps, G(pd)i) (2) Note that when calculating dialect performance, we align the SAE Prompt ps with multimodal output generated from the corresponding Dialect Prompt, i.e., G(pd). This is feasible given that the paired prompts are synonymous, as verified by dialect speaker annotators in Section 3.2. Based on this, we can compute the dialect-induced performance drop of G(·) for each prompt pair p: Drop(p, G) = SAE(p, G) -Dialect(p, G) SAE(p, G) = n X i=1 A(G(ps)i, ps) -A(G(pd)i, ps) A(G(ps)i, ps) (3) To obtain the average model performance drop for a specific dialect, i.e., Drop(P, G), we simply average Drop(p, G) for all p in P. Human Evaluation We further design a human evaluation pipeline to check the empirical alignment between our automatic evaluation metrics and human judgment. For 5% of the model outputs in our benchmark, we ask three independent external human annotators to evaluate: to what extent does the multimodal generations conditioned on the SAE Prompt G(ps) or Dialect Prompt G(pd) match with the scene described by SAE prompt ps. Annotators are asked to rate the alignment between each (image/video, caption) pair with a numerical score between 0 and 10. The numerical scores are scaled by 0.1 to match the scoring range of VQAScore and CLIPScore before calculating SAE and 5 Dialect performance. Finally, we use the same formula to calculate the dialect-induced performance drop Drop(p, G). Since we only evaluate the alignment between image/video and the SAE prompt, this task does not require dialect speaker human annotators. 4.2 BENCHMARK EXPERIMENTS Applying the automatic and human evaluation metrics described in Section 4.1, we evaluate popular open-weights and proprietary multimodal generative models on DialectGen. Model performances are separately aggregated for Concise Prompts and Detailed Prompts settings in Table 2. Overall Performances For each model, we record overall dialect-induced performance drop on DialectGen using three different metrics: Human Eval, VQAScore, and CLIPScore. We calculate Pearson correlation coefficients (Pearson, 1895) r between each of the two metrics and observe r(Human, VQAScore) = 0.968, r(Human, CLIPScore) = 0.924, and r(VQAScore, CLIPScore) = 0.907. This shows that while both automatic scoring metrics have high correlations to human judgement (the gold standard), VQAScore is a better-aligned scoring metric for measuring dialectinduced performance drop. Contrasting the model performance drops across the two evaluation settings Concise Prompts and Detailed Prompts, we can clearly see that all models exhibit significantly larger performance drops for concise prompts compared to detailed prompts. This is in line with our assumption that models can more easily infer the meanings of unknown dialect lexemes from richer prompt contexts, highlighting the need for challenging evaluation via concise prompts to reveal model robustness issues. Looking at individual model performances, we observe that among text-to-video generative models: Wan 2.1 (Wang et al., 2025) and CogVideoX (Yang et al., 2024) exhibit the largest overall performance drops while Cosmos-1 (Agarwal et al., 2025) is the most robust. While for text-to-image generative models, DALL-E 2 (Ramesh et al., 2022) and Flux.1 [dev] (Black Forest Labs, 2024) exhibit the largest overall performance drops while DALL-E 3 (Betker et al., 2023) (w/ Prompt Rewrite) and gpt-image-1 (4o Image Generation) (OpenAI, 2025) are the most robust. Dialect-wise Performance Drop In addition to overall performance, we record each model's performance drop on each dialect, measured by VQAScore. Based on the color heatmap in Table 2, we can clearly see that the most severe performance drops occur for ChE and InE for most models, while AAE and SgE also suffer significant performance decreases. On the other hand, models generally do not see a very significant performance drop for BrE, which is expected given the relatively higher-resource nature of the dialect. 5 MITIGATION METHODS The significant dialect performance drops of current multimodal generative models shown in Section 4.2 highlight the need for effective mitigation strategies to improve dialect robustness. Here, the goal is to develop a method that enhances robustness across multiple dialects while preserving performance on standard SAE prompts. To this end, we first investigate intuitive baseline approaches, including (1) UNet Finetuning; (2) Prompt Revision, and then introduce our new mitigation strategy. 5.1 BASELINE METHODS UNet Finetuning The vast majority of current text-to-image and text-to-video generative models comprise two main components: a text encoder and a diffusion-based image/video decoder. In current post-training paradigms, typically the text encoder is kept frozen while the diffusion UNet is fine-tuned (Podell et al., 2023; Rombach et al., 2022; Betker et al., 2023; Dai et al., 2023). Existing works in aligning, enhancing, and customizing multimodal generative models also focus heavily on developing reward-based fine-tuning methods for the diffusion UNet while freezing the text encoder (Segalis et al., 2023; Clark et al., 2023; Prabhudesai et al., 2023; Black et al., 2023; Fan et al., 2023; Wallace et al., 2024; Dang et al., 2025). Based on existing works, we apply prominent multimodal generation enhancement methods towards improving dialect robustness, including: 6 PrompP 1 PrompP 1 A man driving his whip Dialect Prompts PrompP 1 PrompP 1 A man driving his car SAE Prompts PrompP 1 PrompP 1 A long whip made of braided rope SAE Polysemy Prompts Sn ... S3 S2 S1 Pn ... P3 P2 P1 Pn ' ... P3 ' P2 ' P1 ' min(cos θ) Dialect Learning Polysemy Control Text Encoder Text Encoder Image Encoder PrompP 1 PrompP 1 A large white bowl of many green apples MSCOCO Captions C1 ' C2 ' C3 ' ... Cn ' Cn ... C3 C2 C1 I1⋅C1 I1⋅C2 I1⋅C3 ... I1⋅Cn I2⋅C1 I2⋅C2 I2⋅C3 ... I2⋅Cn I3⋅C1 I3⋅C2 I3⋅C3 ... I3⋅Cn ... ... ... ... ... In⋅C1 In⋅C2 In⋅C3 ... In⋅Cn KL Regularization In ... I3 I2 I1 I1⋅C1 ' I1⋅C2 ' I1⋅C3 ' ... I1⋅Cn ' I2⋅C1 ' I2⋅C2 ' I2⋅C3 ' ... I2⋅Cn ' I3⋅C1 ' I3⋅C2 ' I3⋅C3 ' ... I3⋅Cn ' ... ... ... ... ... In⋅C1 ' In⋅C2 ' In⋅C3 ' ... In⋅Cn ' PrompP 1 PrompP 1 A large white bowl of many green apples MSCOCO Captions MSCOCO Images Text Encoder Text Encoder Text Encoder Text Encoder min(cos θ) Dn ' ... D3 ' D2 ' D1 ' Figure 2: Losses used in our mitigation. Text prompts for Dialect Learning and Polysemy Control come from the DialectGen training set, while image-caption pairs for KL Regularization come from the MSCOCO validation set. • Diffusion Fine-tune (Rombach et al., 2022) given a pair of synonymous Dialect / SAE Prompts, we fine-tune the diffusion UNet with the Dialect Prompt as input, and images generated using the SAE Prompt as target output. • Diffusion DPO (Wallace et al., 2024) We similarly use the Dialect Prompt as input, and use images generated with the SAE Prompt / Dialect Prompt as Win / Lose pairs for DPO. Prompt Revision Beyond UNet fine-tuning, another popular family of methods for aligning and enhancing multimodal generative models is prompt revision (Hao et al., 2023; Betker et al., 2023; Wang et al., 2024; Chen et al., 2024). In our experiments, we include both a general prompt rewriting method and targeted prompt translation methods using general-purpose LLMs: • Prompt Rewrite We apply the general prompt rewriting pipeline in Betker et al. (2023) to all test prompts before passing them to the generative model. • Prompt Translate We use general-purpose LLMs (Grattafiori et al., 2024; OpenAI, 2025) to translate all prompts to SAE before passing them to the generative model. 5.2 OUR METHOD Unlike prior approaches, we propose a new mitigation strategy that focuses on updating the text encoder(s). A natural first step toward improving dialectal robustness is to align the semantic representation of a dialect expression with that of its corresponding SAE counterpart. Dialect Learning To operationalize this idea, we introduce a Dialect Learning loss that encourages the target text encoder to recognize dialectal lexemes by minimizing the cosine distance between the target encoder's embedding of a dialect prompt and the frozen encoder's embedding of its synonymous SAE prompt: LDL = 1 N N X i=1 1 -⟨π(pd i ), π0(ps i)⟩ . (4) Here, ⟨·, ·⟩denotes cosine similarity; π(·) and π0(·) represent the trainable target text encoder and the frozen reference encoder, respectively; and pd i and ps i denote the i-th pair of synonymous prompts in dialect and standard English, respectively. Although this may improve dialectal robustness, relying on this loss alone may compromise the model's ability to handle dialect lexemes that exhibit polysemy in SAE contexts. Polysemy Control In order to retain the model's ability to correctly recognize polysemous lexemes within SAE contexts, we introduce a Polysemy Control loss that minimizes the cosine distance between embeddings of the same SAE polysemous prompt generated by the target and frozen encoders: 7 Table 3: Mitigation results for all baseline methods and our best performing method, including Overall Performances on SAE MSCOCO, SAE Polysemy, average Dialect performance, and Dialect Performance for each dialect, all measured using VQAScore Lin et al. (2024). Cell colors reflect column-normalized performance values, with darker green indicating higher VQAScore performance. Mitigation Methods Overall Performances ↑ Dialect Performance ↑ SAE SAE Dialect AAE BrE ChE InE SgE MSCOCO Polysemy Avg. Base Model (Stable Diffusion 1.5) 75.49 72.84 57.80 60.13 69.39 52.65 49.94 56.89 Prompt Revision DALL-E 3 Prompt Rewrite 74.25 70.85 60.91 57.34 69.51 56.36 57.54 63.81 LLaMA 3 Prompt Translate 74.03 71.33 58.48 57.73 70.4 53.98 50.42 59.87 GPT4.1 Prompt Translate 74.54 71.47 63.90 60.87 74.39 59.05 60.20 64.98 UNet Fine-tuning Diffusion Finetune 65.01 52.13 60.94 63.85 70.14 57.3 52.84 60.56 Diffusion DPO 63.94 50.32 63.52 66.31 68.91 61.22 56.38 64.79 Our Encoder Tuning Methods Dialect Learning 67.14 46.30 78.02 75.21 78.33 79.31 78.10 79.15 + Text Cosine Reg. 67.06 46.39 77.93 75.44 77.84 79.31 78.22 78.86 + Image Cosine Reg. 67.73 46.48 78.00 74.91 78.20 79.45 78.33 79.11 + Text KL Reg. 72.68 52.72 77.78 74.40 78.27 78.36 78.17 79.71 + Image KL Reg. 71.69 53.41 78.12 73.77 77.23 79.06 79.25 81.29 + Text KL Reg. + Polysemy Ctrl. 72.71 70.15 77.74 72.24 75.76 78.95 80.67 81.07 + Image KL Reg. + Polysemy Ctrl. 74.80 71.17 77.68 72.61 76.74 77.51 80.41 81.14 LPC = 1 N N X i=1 (1 -⟨π(pm i ), π0(pm i )⟩) , (5) where each pm i is a polysemous SAE prompt sampled from the dataset. This loss is applied only to examples containing SAE polysemous lexemes. KL Regularization In addition to the previous two losses, it is also essential to preserve the model's performance on general SAE prompts. To this end, one might consider employing the conventional Kullback-Leibler (KL) divergence loss, which promotes alignment between the output distributions of a trainable target model and a frozen reference model over a predefined discrete logit space. However, this approach is not directly applicable in our setting, as text encoders output continuous embeddings rather than discrete logits. To address this challenge, we approximate the output distribution by computing similarity scores between a given caption embedding and a set of reference image embeddings drawn from a joint image-text embedding space. Concretely, we begin by sampling M caption-image pairs n (xcap i , ximg i ) | i ∈[M] o from a general SAE dataset such as MSCOCO (Lin et al., 2014). For each pair, we compute the caption embedding Ci = π0(xcap i ) using a frozen text encoder π0, and the corresponding image embedding Ii = φ0(ximg i ) using a frozen image encoder φ0, with both encoders operating in the same shared text-image embedding space. The resulting image embeddings {Ii | i ∈[M]} serve as reference anchors for computing similarity scores with a given caption embedding. These scores act as surrogate logits that approximate the output distributions required for the KL divergence computation. Specifically, for each caption xcap i , we define the approximated output distributions for the frozen encoder π0 and the trainable target encoder π as: sπ0 i = [⟨I1, Ci⟩, . . . , ⟨IM, Ci⟩] , sπ i = [⟨I1, C′ i⟩, . . . , ⟨IM, C′ i⟩] , (6) where C′ i = π(xcap i ). Given these simulated logits, we define the KL divergence loss to encourage the target encoder's output distribution to remain close to that of the frozen encoder: LKL = 1 M M X i=1 KL (softmax(sπ i ) ∥softmax(sπ0 i )) . (7) This approach is compatible with CLIP-style models (Radford et al., 2021; Zhai et al., 2023), in which image and text embeddings are aligned within a shared representation space. When an image encoder is unavailable, we instead use the frozen caption embeddings {Ci | i ∈[M]} as proxies for 8 reference anchors. We hereafter refer to the case where image embeddings are used as reference anchors as "Image KL Reg." and the one using text embeddings as "Text KL Reg." Based on these design choices, the final combined loss function integrates all three components: L = LDL + LPC + LKL as illustrated in Figure 2. For more details, please refer to Section B. 5.3 MITIGATION RESULTS Here, we validate all baselines and our method on SD1.5 and SDXL. Due to space limitations, the results for SDXL are reported in Section F. 5.3.1 COMPARISON WITH THE BASELINES As shown in Table 3, prompt rewriting methods that operate solely at the input level do not degrade SAE MSCOCO or polysemy performance, but yield only slight improvements up to 6.1% in average dialect performance. Furthermore, UNet fine-tuning approaches also lead to small gains of up to 5.7% in dialect performance, but at the cost of substantial drops in both general SAE and polysemy scores. In contrast, our method, corresponding to the last row of the table and incorporating all three loss components described in Section 5.2, significantly improves dialect robustness across all five dialects. Its average dialect performance of 77.68% closely approaches the base model's SAE score of 77.91%, while causing negligible degradation in SAE MSCOCO and polysemy performance. 5.3.2 ABLATION STUDY To evaluate the contribution of each component in our method, we conduct an ablation study here. Base Model vs. Dialect Learning As shown in Table 3, applying the Dialect Learning loss (LDL) alone yields huge improvements in the base model's dialect performance, but also degrades SAE MSCOCO and polysemy performance. Cosine Reg. vs. KL Reg. To solve this issue, simply maximizing cosine similarity between the target text encoder's text embeddings and the corresponding text/image embeddings from the frozen text/image encoder (denoted as Text/Image Cosine Reg.), which is computed over the same caption-image pairs used in our KL regularization, does not effectively recover the base model's SAE MSCOCO and polysemy performance. In contrast, adding our KL regularization loss (LKL) improves both metrics while preserving dialect gains. Adding Polysemy Ctrl. Finally, incorporating the Polysemy Control loss (LPC) yields substantial gains in polysemy performance, improving it by 17.43% and 17.76% for Text and Image KL Reg. respectively, underscoring the importance of this component in recognizing polysemous lexemes within SAE contexts. 6 LIMITATIONS Our study focuses on the lexical variations that characterize dialects, motivated by the empirical observation that such variations exert much greater influence on multimodal generative model performance than grammatical variations (see Section G). Furthermore, grammatical variation has already been the subject of extensive investigation in text-only contexts (Hudson, 1996; Chambers & Trudgill, 1998; Fromkin et al., 1998; Nerbonne, 2009; Wardhaugh & Fuller, 2021). These considerations jointly motivate our decision to prioritize the evaluation of lexical dialect variation, which appears especially consequential in the multimodal generative setting. Furthermore, our evaluation of text-image alignment utilizes reference-free metrics, namely VQAScore (Lin et al., 2024) and CLIPScore (Hessel et al., 2021). We recognize that these pretrained vision-language models are not perfect. To address this potential weakness, we conducted a thorough human evaluation and found very high statistical correlation between our automatic metrics and human judgment (Pearson correlation coefficient r = 0.968 for VQAScore and r = 0.924 for CLIPScore). Therefore, while acknowledging the imperfections of automated metrics, this high degree of human correlation provides strong evidence for the validity of our evaluation metrics and associated analysis conclusions. 9 7 CONCLUSIONS In this work, we create DialectGen, a large-scale multi-dialectal benchmark evaluating the dialect robustness of multimodal generative models. Our experiments on 17 widely used text-to-image and text-to-video generative models reveal severe performance drops up to 38.63% and 48.17% for image and video generative models, respectively. We further design an encoder-based mitigation strategy to enhance dialect robustness while preserving performance on Standard American English. 8 ETHICS STATEMENT This work makes use of human subjects for annotation and evaluation. All procedures were subject to ethical review and were approved by the IRB from the authors' institution. Consent was gathered in accordance with the authors' institution guidelines, and annotators had access to a data use statement when giving consent. The purpose of DialectGen is to provide tools that enable researchers and practitioners to evaluate and improve dialect robustness in their models. We will release these data responsibly, ensuring that users sign a Data Use Agreement that forbids the use of DialectGen for deception, impersonation, mockery, discrimination, hate speech, targeted harassment, and cultural appropriation. In the agreement, researchers and practitioners will also acknowledge the limitations of this work, that DialectGen may not fully or accurately represent the natural usage patterns of all sub-communities of speakers. DialectGen is designed to be easily updatable and configurable, such that it can be extended by and for specific sub-communities and updated as dialects evolve over time. We have carefully checked our data to make sure no personally identifying information or offensive content is included. When utilizing existing artifacts and models, we make sure to follow all relevant regulations and licenses. 9 REPRODUCIBILITY STATEMENT We have taken several steps to ensure the reproducibility of our work. Detailed descriptions of dataset construction, annotation procedures, evaluation protocols, and mitigation methods are provided in the main paper (see Sections 3, 4, etc.), with further implementation details, training configurations, and additional qualitative results included in the appendix (see Sections B, A, etc.). To facilitate independent verification, we also provide as anonymized supplementary material both the DialectGen benchmark dataset and the source code used for data processing, model training, and evaluation. The dataset files include all validated dialect-SAE prompt pairs, while the code folder contains scripts for dataset generation, automatic and human evaluation, and reproduction of all tables and figures reported in the paper. Together, these resources enable researchers to replicate our experimental results and extend the benchmark for future work. 10 ACKNOWLEDGEMENTS We would like to thank Connor Couture and Allen Cheung for designing the initial version of our MTurk annotation interface, and Xinrong Du for providing feedback. We also thank Prof. Diyi Yang, Prof. Xiang Chen, Caleb Ziems, Julia Kruk, Hritik Bansal, Jiachen Gu, Zongyu Lin, and Amita Kamath for valuable discussions and pointers; and especially Caleb Ziems for sharing the grammar-based dialect-speaker quiz he created. Finally, we appreciate Wenbo Hu, Lucas Bandarkar, Mohsen Fayyaz, and Tanmay Parekh for their helpful feedback on the paper draft. 10 REFERENCES Nur Aeni, Like Raskova Octaberlina, Nenni Dwi Aprianti Lubis, et al. A literature review of English Language Variation on Sociolinguistics. OSF, 2021. Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chattopadhyay, Yongxin Chen, Yin Cui, Yifan Ding, et al. Cosmos world foundation model platform for physical ai. arXiv preprint , 2025. Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 610-623, 2021. James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn. openai. com/papers/dall-e-3. pdf, 2(3):8, 2023. Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint , 2023. Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024. Su Lin Blodgett, Johnny Wei, and Brendan O'Connor. Twitter universal dependency parsing for african-american and mainstream american english. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1415-1425, 2018. Frederic G Cassidy, Joan Houston Hall, and Luanne Von Schneidemesser. Dictionary of American regional english, volume 1. Belknap Press of Harvard University Press Cambridge, Mass., 1985. Jack K Chambers and Peter Trudgill. Dialectology. Cambridge University Press, 1998. Zijie Chen, Lichao Zhang, Fangsheng Weng, Lili Pan, and Zhenzhong Lan. Tailored visions: Enhancing text-to-image generation with personalized prompt rewriting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7727-7736, 2024. Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusion models on differentiable rewards. arXiv preprint , 2023. David Crystal. English as a Global Language. Cambridge University Press, Cambridge, 2 edition, 2003. ISBN 9780521823470. Xiaoliang Dai, Ji Hou, Chih-Yao Ma, Sam Tsai, Jialiang Wang, Rui Wang, Peizhao Zhang, Simon Vandenhende, Xiaofang Wang, Abhimanyu Dubey, et al. Emu: Enhancing image generation models using photogenic needles in a haystack. arXiv preprint , 2023. Meihua Dang, Anikait Singh, Linqi Zhou, Stefano Ermon, and Jiaming Song. Personalized preference fine-tuning of diffusion models. arXiv preprint , 2025. Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Lê Khac, Luke Melas, and Ritobrata Ghosh. Dall·e mini, 7 2021. URL https://github.com/borisdayma/ dalle-mini. Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis, 2024. URL https://arxiv. org/abs/2403.03206, 2, 2024. Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee. Dpok: Reinforcement learning for fine-tuning text-to-image diffusion models. Advances in Neural Information Processing Systems, 36:79858-79885, 2023. VA Fromkin, Robert Rodman, and V Hyams. An introduction to language 6e. Hartcourt Brace College Publishers: Orlando, FL, USA, 1998. 11 Henry Louis Gates, James Murray, et al. The Oxford Regional English Dictionary. Oxford University Press, Oxford, 2023. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint , 2024. Suchin Gururangan, Dallas Card, Sarah K Dreier, Emily K Gade, Leroy Z Wang, Zeyu Wang, Luke Zettlemoyer, and Noah A Smith. Whose language counts as high quality? measuring language ideologies in text data selection. arXiv preprint , 2022. Yaru Hao, Zewen Chi, Li Dong, and Furu Wei. Optimizing prompts for text-to-image generation. Advances in Neural Information Processing Systems, 36:66923-66939, 2023. Jennifer KN Heinmiller. Compiling the oxford dictionary of african american english: A progress report. Dictionaries: Journal of the Dictionary Society of North America, 44(1):91-104, 2023. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Joseph Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Conference on Empirical Methods in Natural Language Processing, 2021. Dirk Hovy and Shannon L Spruit. The social impact of natural language processing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 591-598, 2016. Richard A Hudson. Sociolinguistics. Cambridge university press, 1996. Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint , 2024. Anna Jørgensen, Dirk Hovy, and Anders Søgaard. Challenges of studying and processing dialects in social media. In Proceedings of the workshop on noisy user-generated text, pp. 9-18, 2015. Jack Tsen-Ta Lee. A Dictionary of Singlish and Singapore English. Lee, Jack Tsen-Ta, 2004. Accessed 2025-05-16. Tony Lee, Michihiro Yasunaga, Chenlin Meng, Yifan Mai, Joon Sung Park, Agrim Gupta, Yunzhi Zhang, Deepak Narayanan, Hannah Teufel, Marco Bellagente, et al. Holistic evaluation of text-to-image models. Advances in Neural Information Processing Systems, 36:69981-70011, 2023. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV), 2014. Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. In European Conference on Computer Vision, pp. 366-384. Springer, 2024. John Nerbonne. Data-driven dialectology. Language and linguistics compass, 3(1):175-198, 2009. OpenAI. Introducing 4o image generation. https://openai.com/index/ introducing-4o-image-generation/, 2025. Accessed: 2025-05-19. OpenAI. Gpt-4.1. https://openai.com/index/gpt-4-1/, April 2025. Accessed 18 May 2025. Karl Pearson. Notes on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 58:240-242, 1895. Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint , 2023. 12 Mihir Prabhudesai, Anirudh Goyal, Deepak Pathak, and Katerina Fragkiadaki. Aligning text-to-image diffusion models with reward backpropagation (2023). arXiv preprint , 2023. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, 2021. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Dall·e 2: A new ai system that can create realistic images and art from a description in natural language. https: //openai.com/research/dall-e-2, 2022. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10684-10695, 2022. Eyal Segalis, Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. A picture is worth a thousand words: Principled recaptioning improves image generation. ArXiv, abs/2310.16656, 2023. URL https://api.semanticscholar.org/CorpusID:266003242. V. Subhash. Dictionary of Indian English. V. Subhash, Hyderabad, 2020. ISBN 9789354374487. Bram Wallace, Meihua Dang, Rafael Rafailov, Linqi Zhou, Aaron Lou, Senthil Purushwalkam, Stefano Ermon, Caiming Xiong, Shafiq Joty, and Nikhil Naik. Diffusion model alignment using direct preference optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8228-8238, 2024. Yixin Wan, Arjun Subramonian, Anaelia Ovalle, Zongyu Lin, Ashima Suvarna, Christina Chance, Hritik Bansal, Rebecca Pattichis, and Kai-Wei Chang. Survey of bias in text-to-image generation: Definition. Evaluation, and Mitigation, 2024. Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, Jianyuan Zeng, Jiayu Wang, Jingfeng Zhang, Jingren Zhou, Jinkai Wang, Jixuan Chen, Kai Zhu, Kang Zhao, Keyu Yan, Lianghua Huang, Mengyang Feng, Ningyi Zhang, Pandeng Li, Pingyu Wu, Ruihang Chu, Ruili Feng, Shiwei Zhang, Siyang Sun, Tao Fang, Tianxing Wang, Tianyi Gui, Tingyu Weng, Tong Shen, Wei Lin, Wei Wang, Wei Wang, Wenmeng Zhou, Wente Wang, Wenting Shen, Wenyuan Yu, Xianzhong Shi, Xiaoming Huang, Xin Xu, Yan Kou, Yangyu Lv, Yifei Li, Yijing Liu, Yiming Wang, Yingya Zhang, Yitong Huang, Yong Li, You Wu, Yu Liu, Yulin Pan, Yun Zheng, Yuntao Hong, Yupeng Shi, Yutong Feng, Zeyinzi Jiang, Zhen Han, Zhi-Fan Wu, and Ziyu Liu. Wan: Open and advanced large-scale video generative models. arXiv preprint , 2025. Zhijie Wang, Yuheng Huang, Da Song, Lei Ma, and Tianyi Zhang. Promptcharm: Text-to-image generation through multi-modal prompting and refinement. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-21, 2024. Ronald Wardhaugh and Janet M Fuller. An introduction to sociolinguistics. John Wiley & Sons, 2021. Noah Webster. An American dictionary of the English language. Merriam, 1869. Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint , 2024. Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 11975-11986, 2023. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3836-3847, 2023. 13 Yu Zhou, Bingxuan Li, Mohan Tang, Xiaomeng Jin, Te-Lin Wu, Kuan-Hao Huang, Heng Ji, Kai-Wei Chang, and Nanyun Peng. Contrastive visual data augmentation. In ICML 2025, 2025. Caleb Ziems, Jiaao Chen, Camille Harris, Jessica Brooke Anderson, and Diyi Yang. Value: Understanding dialect disparity in nlu. In Annual Meeting of the Association for Computational Linguistics, 2022. Caleb Ziems, William B. Held, Jingfeng Yang, and Diyi Yang. Multi-value: A framework for cross-dialectal english nlp. ACL 2023, 2023. 14 APPENDIX A QUALITATIVE COMPARISON In Figure 3, we provide additional qualitative examples to demonstrate the performances of the baseline mitigation strategy, Diffusion DPO (Wallace et al., 2024), compared with our method. Specifically, we update the Stable Diffusion 1.5 model encoder using Dialect Learning, Polysemy Control, and Image KL. After mitigation, we ask each model to generate images based on the four dialect prompts first mentioned in Figure 1. The Stable Diffusion 1.5 Base model struggles to generate correct images for most of these prompts, including "Two ang pows on a table", "A man selling brinjal", and "A man hiking with his carnal". While the model is able to generate moderately reasonable images for the prompt "A man driving his whip", it commonly generates physically implausible details such as the man's torso protruding through the car. Fine-tuning the UNet with Diffusion DPO is able to slightly improve generation alignment with the text prompt (e.g., occasionally generating two people for the prompt "A man hiking with his carnal"). However, it more often blends visual elements within the desired target images with other irrelevant objects (e.g., generating a man selling purple pastries in place of eggplants or a man wearing a purple shirt holding vegetables). Our method generates higher-quality and better aligned images compared to the base model and Diffusion DPO by accurately learning to generate the target concepts without negatively impacting image quality. A significant majority of images in our sampled generations are able to generate images that correctly depict the target prompts, in line with quantitative evaluation results. Ours (Dialect Learning + Image KL + Polysemy Ctrl.) Fine-tune w/ Diffusion DPO Base Model (Stable Diffusion 1.5) A man hiking with his carnal "carnal" = "brother" in Chicano English A man driving his whip "whip" = "car" in African American English A man selling brinjal "brinjal" = "eggplant" in Indian English Two ang pows on a table "ang pow" = "red packet" in Singaporean English Figure 3: Qualitative Comparison of Mitigation Strategies using the Stable Diffusion 1.5 model (Rombach et al., 2022) on four different dialect prompts. Specifically, we compare the dialect prompt image generation results of the Stable Diffusion 1.5 Base Model, Stable Diffusion 1.5 fine-tuned with Diffusion DPO (Wallace et al., 2024), and Stable Diffusion 1.5 updated via our best performing method (Dialect Learning + Image KL Regularization + Polysemy Control). B IMPLEMENTATION DETAILS Data Preparation We first split the DialectGen dataset into training, validation, and test sets in a ratio of 80%, 10%, and 10%, respectively. These training and validation splits of DialectGen are used to compute the Dialect Learning loss and the Polysemy Control loss. For KL Regularization loss, we randomly sample 1,024 and 256 image-caption pairs from the MSCOCO validation set (Lin et al., 2014) for use in training and validation, respectively. The target text encoder is evaluated on 15 the validation set at the end of each epoch, and the checkpoint with the lowest validation loss is selected and saved for final evaluation. We then evaluate SAE polysemy and per-dialect performance using the test split of DialectGen, and assess SAE MSCOCO performance on 50 randomly sampled captions from the MSCOCO validation set. Training We employ the pretrained text encoder and fine-tune it for 30 epochs using the AdamW optimizer with an initial learning rate of 1 × 10-4, β1 = 0.9, β2 = 0.999, and ε = 1 × 10-8. A cosine annealing learning rate scheduler is applied across the 30 training epochs. The batch size, i.e., N in Equation (4) and Equation (5), is set to 32, and the number of image-caption pairs used for KL regularization, i.e., M in Equation (7), is set to 1,024. Training is completed in less than one hour on a single NVIDIA RTX A6000 GPU. In the case of SDXL, which includes both Base and Refiner encoders, the number of pairs M for the Refiner encoder is set to 512 due to its larger size, and training takes approximately one hour using four NVIDIA RTX A6000 GPUs, with all other configurations kept the same as in the Stable Diffusion 1.5 and SDXL Base encoder settings. About T2Video Models Video-generation models incur substantially higher computational cost than their image counterparts. Since our primary goal is to assess the models' ability to interpret and render textual prompts, we generate only a small, fixed number of frames per video. This strategy is justified by two observations: (i) the first few frames typically suffice to judge prompt fidelity, and (ii) our prompts do not exhibit extensive motion, so long sequences offer diminishing returns. All models were obtained by cloning their official repositories and following the authors' installation instructions. Frame numbers were uniformly reduced frame counts when possible, and in some cases, spatial resolution was also reduced to facilitate efficient evaluation-see Table 5 for the precise settings. Average time per video was measured on a single NVIDIA RTX A6000 GPU; the Wan2.1-T2V-14B model, which does not fit in single-GPU memory, was benchmarked using six A6000 GPUs under Fully Sharded Data Parallel (FSDP) supported by the repository under the xdit framework. All models except Wan2.1 fit under a single A6000 GPU and use approximately 20-30 GB of VRAM max. Wan2.1 takes at least 3 GPUs, taking an approximate memory usage of 100GB of combined VRAM. C MODEL DETAILS We provide detailed information on the multimodal generative models and key experimental settings used in our benchmark. Table 4 lists the comprehensive specifications for all models evaluated in our work, including both text-to-image and text-to-video models. For each model, we provide details such as its creator organization, initial release date, hosting platform, availability type (e.g., open source, proprietary), and model size. Table 5 describes in detail the key generation parameters used for the text-to-video models. This includes the specific resolution, number of frames, and inference steps used for each model. Furthermore, we specify the average time required to generate a single video and the total time needed to generate our full video dataset to aid in understanding the reproducibility and computational cost of our experiments. D DATASET DETAILS The final DialectGen Dataset contains a total of 4632 prompts, which include 2100 non-SAE dialect prompts, 2100 SAE prompts, and 432 polysemous SAE prompts. The entire dataset is split into three subsets: training, validation, and test. The data split ratio is train : validation : test = 8 : 1 : 1. All benchmarking experiments are performed on the entire dataset, while for mitigation experiments, models are trained on the DialectGen training set while evaluated on the validation set. 16 Table 4: Detailed Model Specifications for all multimodal generative models (text-to-image and textto-video generative models) benchmarked in this work. For reference and reproducibility, we include model name, model type, creator organization, initial release date, hosting platform, availability type, and model size. Model Name Model Type Created by Release Date Hosted by Availability Type Model Size Stable Diffusion 1.4 Text to Image CompVis 8/22/2022 Hugging Face Open Source 1B Stable Diffusion 1.5 Text to Image Runway ML 10/20/2022 Hugging Face Open Weights 1.3 B Stable Diffusion 2.1 Text to Image Stability AI 12/7/2022 Hugging Face Open Weights 1.3 B Stable Diffusion XL Text to Image Stability AI 7/26/2023 Hugging Face Open Weights 6.6 B Stable Diffusion 3 Medium Text to Image Stability AI 6/12/2024 Hugging Face Open Weights 2 B Stable Diffusion 3.5 Large Text to Image Stability AI 10/22/2024 Hugging Face Open Weights 8.1 B Stable Diffusion 3.5 Large Turbo Text to Image Stability AI 10/22/2024 Hugging Face Open Weights 8.1 B Flux.1 [dev] Text to Image Black Forest Labs 4/2/2024 Hugging Face Open Weights 12B DALL-E Mini Text to Image Boris Dayma et al. 7/25/2022 Github Open Weights 0.4 B DALL-E 2 Text to Image OpenAI 9/28/2022 OpenAI Proprietary N/A DALL-E 3 Text to Image OpenAI 8/20/2023 OpenAI Proprietary N/A gpt-image-1 Text to Image OpenAI 4/23/2025 OpenAI Proprietary N/A VideoCrafter-2 Text to Video Tencent 1/26/2024 Hugging Face Open Weights 1.4 B Open-Sora Text to Video HPC-AI Tech 6/17/2024 Hugging Face Open Weights 1.2 B CogVideoX Text to Video THUDM Lab 8/27/2024 Hugging Face Open Weights 5 B Cosmos-1 Text to Video Nvidia 1/6/2025 Hugging Face Open Weights 7 B Wan 2.1 Text to Video Alibaba 2/22/2025 Hugging Face Open Weights 14 B Table 5: Key Generation Parameters for Text-to-Video Generative Models. For reproducibility and computational cost estimation, we list GPU runtime per video in minutes and GPU runtime for the full video dataset (both concise and detailed = 4110 videos) in hours. All computational costs are estimated for NVIDIA-A6000 GPUs with 48 GB Memory. Model Version Resolution Frames Steps Time / Video (min) Time / Dataset (h) VideoCrafter2 512 × 512 16 50 5.0 342.5 OpenSora-STDiT-v3 405 × 720 51 30 8.3 570.8 CogVideoX-5b 720 × 480 10 10 6.1 416.7 Cosmos-1.0-Diffusion-7B-Text2World 704 × 1280 121 35 26.5 1815.3 Wan2.1-T2V-14B 832 × 480 10 12 4.8 329.4 Note: The dataset-scale timing for Wan2.1-T2V-14B was measured using 6 A6000 GPUs using xdit FSDP. 17 E HUMAN ANNOTATION DETAILS Figure 4: The Amazon Mechanical Turk Data Annotation Interface for dialect speaker human filtering of generated prompts (prompt generation details in Section 3). Human annotators may use the "View Instructions" button to collapse / re-open detailed annotation instructions at any time. The annotation interface places no maximum time limit on each annotation question. Human annotators are allowed to return to previously annotated questions and update their answers at any time. In the creation of the DialectGen Dataset, we recruit a total of 17 dialect speaker human annotators from Amazon Mechanical Turk. The demographic involves six annotators from Asia, eight annotators from North America, and 3 annotators from Europe. Each selected annotator is given the option to complete any number of questions as they prefer. We encourage each annotator to take regular breaks during the task and not to work consecutively for more than 2 hours on our task. Our task is relatively simple for dialect speakers as it mainly involves judging the plausibility and meaning of a sentence in their native dialect. We estimate each HIT to take around 12 seconds, this corresponds to an hourly wage of 327.6. We ran 4 rounds of annotations, with a combined total of 6552 prompts. 35.9% of total proposed prompts were rejected by the annotators while 64.1% of prompts were approved. F MITIGATION RESULTS ON STABLE DIFFUSION XL Stable Diffusion XL consists of two encoders: a Base encoder and a Refiner encoder. We fine-tuned both components as part of our method. However, since the corresponding CLIP-style image encoder for the Refiner is not publicly accessible, only Text KL Regularization can be applied in this case. Given the Refiner's larger size and additional encoding modules, we evaluate our final method against other baselines within this more complex configuration. 18 Figure 5: The English Dialect Speaker Assessment Quiz used for matching dialect speaker annotators to specific dialects for prompt annotation. We adapt the assessment quiz from the existing English Dialect Speaker Survey first created in MultiVALUE (Ziems et al., 2023), which asks the human annotator to select their linguistic acceptability preference for 10 different dialect excerpts. We report the mitigation results on Stable Diffusion XL (Podell et al., 2023) in Table 6, under the experimental setup described above. Similar to the findings on Stable Diffusion 1.5, Prompt Revision methods preserve general SAE performance but yield only marginal improvements in dialect VQAScore, with gains of up to 7.8%. Additionally, UNet fine-tuning methods also result in small gains of up to 5.3% in dialect performance, but at the cost of noticeable degradation in both SAE MSCOCO and SAE polysemy performance. In contrast, our method substantially improves dialect robustness across all five dialects, achieving an average performance of 85.99%, which surpasses the 19 Table 6: Mitigation results on SDXL (Podell et al., 2023) for all methods, including Overall Performances on SAE MSCOCO, SAE Polysemy, average Dialect performance, and Dialect Performance for each dialect, all measured using VQAScore (Lin et al., 2024). Cell colors reflect column-normalized performance values, with darker green indicating higher VQAScore performance. Mitigation Methods Overall Performances ↑ Dialect Performance ↑ SAE SAE Dialect AAE BrE ChE InE SgE MSCOCO Polysemy Avg. Base Model (Stable Diffusion XL) 86.21 78.21 61.55 61.17 77.58 47.04 53.21 68.76 Prompt Revision DALL-E 3 Prompt Rewrite 85.36 78.01 66.49 59.93 77.92 60.61 63.62 70.39 LLaMA 3 Prompt Translate 84.72 77.60 64.19 63.74 77.93 57.40 56.09 65.80 GPT4.1 Prompt Translate 85.93 78.12 69.30 61.97 82.24 63.87 65.45 72.97 UNet Fine-tuning Diffusion Finetune 70.49 52.37 65.22 65.31 76.69 60.12 58.05 65.91 Diffusion DPO 72.03 50.29 66.89 65.97 78.12 62.88 60.10 67.40 Ours Dialect Learning + Text KL Reg.+ Polysemy Reg. 85.45 78.08 85.99 82.43 84.71 85.97 89.70 87.14 base model's SAE score of 84.43%, while inducing less than a 1% drop in both SAE MSCOCO and SAE polysemy performance. Table 7: Quantitative Effects of Grammatical and Lexical Variations on Multimodal Generation, measured in VQAScore. We evaluate three text-to-image generative models under the following dialectal variation types: Grammatical, Lexical, and Grammatical + Lexical. Values in parentheses indicate the percentage performance drop in VQAScore compared to baseline SAE performance. Model SAE Performance (%) Performance under Dialectal Variations (%) Grammatical Lexical Grammatical + Lexical DALL-E Mini 75.63 74.72 (-1.20) 51.92 (-31.35) 51.26 (-32.22) FLUX.1 dev 82.94 82.40 (-0.65) 61.88 (-25.39) 61.02 (-26.43) Stable Diffusion 3.5 Large 85.18 83.91 (-1.49) 65.37 (-23.26) 63.80 (-25.10) G GRAMMATICAL VS. LEXICAL ROBUSTNESS IN MULTIMODAL MODELS To establish the rationale for our study's focus on lexical variations, we begin with an observation about multimodal generative models. These models often exhibit a notable insensitivity to grammatical or syntactic structure, a tendency that likely arises from the bag-of-words nature of their CLIP-style encoders. This architectural trait means that variations in sentence construction, such as word order or verb tenses, tend to have a minimal effect on the final output. Table 8, adapted from Multi-VALUE (Ziems et al., 2023), showcases several examples of these grammatical variations. Table 8: Examples of Grammatical Dialect Variations between Standard American English (SAE) sentences and African American English (AAE) dialect sentences. The blue texts highlight unique features in SAE while the purple texts (if applicable) highlight corresponding features in AAE. Grammatical Variation Type SAE Prompt AAE Dialect Prompt Clause Structure A chair that can be folded A chair can be folded Negative Concord There is no food on the table There ain't no food on the table Word Order A big and fresh fish A fish big and fresh Verb Morphology Mom brought rice to me Mom brin rice give me To formally quantify this observation, we conducted a small-scale experiment with three representative models in the African American English evaluation setting. We used the Multi-VALUE (Ziems et al., 2023) translation system to apply grammatical variations to 300 SAE prompts from DialectGen and evaluated their generation quality using VQAScore. 20 Table 9: Stable Diffusion 1.5 Mitigation Performance Breakdown by dialect for different mitigation methods on the DialectGen dataset for all baseline methods and ablations of our method. All performance scores are measured using VQAScore (Lin et al., 2024), higher score is better. Mitigation Methods Performance by Dialect (VQAScore) ↑ AAE BrE ChE InE SgE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Base Model (Stable Diffusion 1.5) 57.34 72.94 69.51 76.40 56.36 78.66 57.54 81.05 63.81 80.50 Prompt Revision DALL-E 3 Prompt Rewrite 57.73 73.16 70.40 77.86 53.98 79.99 50.42 81.33 59.87 81.66 LLaMA 3 Prompt Translate 60.87 70.36 74.39 76.49 59.05 78.15 60.22 81.09 64.98 79.84 GPT4.1 Prompt Translate 65.32 71.28 73.52 76.40 58.32 78.29 53.04 81.03 65.12 79.83 UNet Fine-tuning Diffusion Finetune 63.85 64.34 70.14 68.35 57.30 69.55 52.84 70.72 60.56 72.42 Diffusion DPO 66.31 63.02 68.91 69.17 61.22 67.83 56.38 70.94 64.79 71.85 Ours Dialect Learning 75.21 74.31 78.33 78.34 79.31 80.20 78.10 79.90 79.15 78.33 + Text Cosine Reg. 75.44 74.86 77.84 77.52 79.31 79.74 78.22 80.13 78.86 79.21 + Image Cosine Reg. 74.91 74.83 78.20 78.22 79.45 80.32 78.00 80.00 79.11 78.72 + Text KL Reg. 74.40 73.97 78.27 79.40 78.36 80.72 78.17 78.24 79.71 78.66 + Image KL Reg. 73.77 74.36 77.23 77.60 79.06 80.43 79.25 80.99 81.29 79.54 + Text KL Reg.+ Polysemy Ctrl. 72.24 72.25 75.76 79.57 78.95 79.27 80.67 79.89 81.07 79.84 + Image KL Reg.+ Polysemy Ctrl. 72.61 74.30 76.74 76.77 77.51 78.83 80.41 80.85 81.14 78.15 Table 10: Complete DialectGen Benchmark Performance Breakdown by dialect for all text-toimage and text-to-video generative models. All performance scores are measured using VQAScore (Lin et al., 2024), higher score is better. Results complements Table 2 in the main paper. Model Performance by Dialect (VQAScore) ↑ AAE BrE ChE InE SgE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Concise Prompts T2I Models Stable Diffusion 1.4 60.66 76.47 71.46 79.08 51.31 78.86 47.5 80.88 57.64 78.92 Stable Diffusion 1.5 62.31 77.41 72.59 79.47 50.4 79.37 47.03 81.29 56.36 78.8 Stable Diffusion 2.1 60.97 80.59 76.37 84.21 45.88 83.15 50.63 85.99 58.53 82.31 Stable Diffusion XL 62.97 82.17 80.49 87.44 49.82 84.75 53.66 87.6 65.56 84.23 Stable Diffusion 3 60.9 84.46 79.22 86.71 48.32 84.29 51.91 87.52 61.64 82.32 Stable Diffusion 3.5 Large 60.16 83.91 80.53 89.22 48.93 85.33 51.53 88.69 63.21 83.79 Stable Diffusion 3.5 Large Turbo 57.27 82.2 79.4 87.51 47.16 83.62 50.07 87.06 61.72 83.09 Flux.1 [dev] 55.63 80.17 72.7 81.53 45.85 82.82 46.73 81.39 51.63 76.62 DALL-E Mini 50.86 76.96 73.55 80.1 41.51 78.48 44.07 77.11 54.11 72.64 DALL-E 2 52.07 81.19 79.19 86.03 42.54 83.05 43.11 81.66 61.65 81.27 DALL-E 3 67.09 82.8 85.68 88.86 50.43 86.87 58.8 86.34 64.3 86.38 DALL-E 3 w/ Rewrite 63.74 81.83 84.24 90.08 61.41 83.96 68.7 89.28 74.77 85.69 gpt-image-1 65.47 88.62 88.39 93.24 65.31 88.37 67.77 92.22 77.67 88.25 T2V Models Cosmos-1 59.61 76.57 68.87 76.26 53.27 72.08 56.84 78.34 54.04 65.18 Open-Sora 65.46 84.56 75.56 83.21 48.49 85.21 59.79 87.59 59.19 80.56 VideoCrafter-2 61.3 82.13 76.19 84.12 42.9 86.43 53.3 88.76 61.73 83.51 CogVideoX 36.72 59.54 42.55 55.8 27.71 61.82 28.76 63.23 25.98 44 Wan 2.1 29.57 62.49 47.02 68.41 30.37 54.07 30.68 65.81 30.23 67.89 Detailed Prompts T2I Models Stable Diffusion 1.4 70.07 79.31 74.19 77.58 65.24 78.94 56.99 80.53 63.87 76.98 Stable Diffusion 1.5 71.03 79.97 73.5 77.69 65.21 78.89 56.84 79.72 63.02 77.06 Stable Diffusion XL 72.82 84.76 80.84 85.6 68.19 85.85 61.1 87.44 70.93 83.55 Stable Diffusion 2.1 69.41 81.72 77.51 82.03 63.64 82.68 59.39 84.07 64.71 79.95 Stable Diffusion 3 74.27 87.11 82.58 88.48 66.21 86.95 62.59 88.08 67.32 83.13 Stable Diffusion 3.5 Large 73.21 86.84 83.24 89.5 67.05 87.6 60.55 88.82 67.65 84.27 Stable Diffusion 3.5 Large Turbo 73.24 86.23 81.07 88.24 64.83 86.37 58.46 87.81 65.05 82.98 Flux.1 [dev] 72.86 85.56 77.43 85.19 61.47 82.72 58.52 85.31 59.56 79.66 DALL-E Mini 53.69 74.12 69.5 73.38 52.39 72.11 50.47 73.65 58.22 68.92 DALL-E 2 64.72 79.34 80.2 85.79 62.33 83.66 55.51 82.6 66.07 80.34 DALL-E 3 77.75 85.3 83.82 87.99 68.16 86.26 71.19 87.79 73.51 84.35 DALL-E 3 w/ Rewrite 76.73 87.12 85.56 90.33 76.36 85.43 75.63 91.22 78.8 86.54 gpt-image-1 78.26 90.7 86.88 90.94 79.47 88.85 78.04 92.86 79.39 88.38 T2V Models Cosmos-1 64.61 72.64 67.1 73.94 57.62 67.03 56.58 73 50.39 58.99 Open-Sora 74.81 86.48 76.69 80.84 67.65 83.93 69.59 86.77 71.15 81.49 VideoCrafter-2 70.88 85.37 79.53 83 66.14 87.21 62.58 86.47 68.14 83.72 CogVideoX 39.83 50.63 46.4 54.35 38.89 57.82 35.8 62.68 25.51 40.11 Wan 2.1 55.79 79.96 62.14 73.08 42.39 73.82 48.86 76.6 48.73 75.8 The results, presented in Table 7, provide strong quantitative evidence supporting our initial analysis. While lexical feature variations cause significant performance drops for existing text-to-image generative models, grammatical variations do not incur significant performance drops. This clear distinction validates our decision to focus on the more impactful lexical variations throughout this work. 21 Table 11: Complete DialectGen Benchmark Performance Breakdown by dialect for all textto-image and text-to-video generative models. All performance scores are measured using CLIPScore (Hessel et al., 2021), higher score is better. Results complements Table 2 in the main paper. Model Performance by Dialect (CLIPScore) ↑ AAE BrE ChE InE SgE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Dialect SAE Concise Prompts T2I Models Stable Diffusion 1.4 25.46 27.83 28.65 29.79 24.68 28.34 24.34 29.38 25.97 28.64 Stable Diffusion 1.5 25.79 27.95 28.66 29.91 24.7 28.32 24.38 29.44 25.86 28.65 Stable Diffusion 2.1 25.72 28.74 29.67 30.88 24.7 29.44 25.31 30.69 26.46 29.54 Stable Diffusion XL 25.81 28.69 29.97 31.21 25.37 29.57 25.85 31.15 27.45 30.23 Stable Diffusion 3 25.45 28.42 29.89 30.97 25.01 28.74 25.02 30.31 26.8 29.67 Stable Diffusion 3.5 Large 25.62 28.78 30.25 31.5 25.22 29.42 25.67 31.14 27.18 30.22 Stable Diffusion 3.5 Large Turbo 25.1 28.4 29.9 31.16 24.95 28.9 25.3 30.78 26.96 29.82 Flux.1 [dev] 24.74 27.54 28.52 29.88 24.21 27.97 24.78 29.48 25.4 28.31 DALL-E Mini 24.77 28.15 29.48 30.65 23.57 27.81 24.4 29.56 25.74 28.6 DALL-E 2 24.57 27.4 29.86 30.56 23.98 27.3 24.1 29.3 26.44 28.53 DALL-E 3 25.19 27.51 28.95 29.75 24.57 28.11 25.71 29.66 26.3 29.08 DALL-E 3 w/ Rewrite 24.92 26.91 29.41 30.11 25.15 27.57 26.87 29.93 27.12 28.47 gpt-image-1 25.96 28.33 30.94 31.62 26.51 29.48 27.51 31.21 28.57 30.33 T2V Models Cosmos-1 23.49 25.42 26.17 27.16 22.89 24.62 24.18 27.04 21.89 22.91 Open-Sora 25.02 27.3 28.63 29.73 24.34 27.09 25.35 29.36 25.55 28.01 VideoCrafter-2 25.88 28.83 29.41 30.69 25.04 29.04 25.88 30.56 27 29.69 CogVideoX 22.62 25.71 24.14 25.84 22.03 24.61 22.95 27.4 19.99 22.18 Wan 2.1 22.37 25.49 25.45 28.27 22.14 24.55 22.55 27.57 22.75 26.85 Detailed Prompts T2I Models Stable Diffusion 1.4 27.98 28.84 29.59 30.23 28.61 30.1 26.79 29.83 28.12 29.78 Stable Diffusion 1.5 28.08 28.99 29.54 30.29 28.52 30.18 26.87 29.94 27.98 29.82 Stable Diffusion 2.1 28.57 29.9 30.83 31.69 29.68 31.51 28.24 31.47 29.54 31.32 Stable Diffusion XL 28.46 29.68 30.49 31.33 29.12 31.14 27.95 30.96 28.65 30.52 Stable Diffusion 3 28.69 29.82 30.76 31.59 29.13 31.15 27.82 31 29.13 31.04 Stable Diffusion 3.5 Large 28.86 30.01 31.02 31.84 29.48 31.64 28.04 31.61 29.29 31.19 Stable Diffusion 3.5 Large Turbo 28.61 29.58 30.67 31.6 29.09 31.23 27.8 31.18 28.76 30.78 Flux.1 [dev] 27.97 28.69 29.54 30.37 28.17 30.17 27.15 30.01 27.72 29.46 DALL-E Mini 27.26 29.18 29.84 30.56 27.75 30.23 26.71 29.93 27.42 29.59 DALL-E 2 27.66 29.02 30.5 31.3 28.57 30.54 26.48 30.17 28.69 29.88 DALL-E 3 27.79 28.3 29.55 30.21 28.52 30.04 27.48 29.83 28.67 30.03 DALL-E 3 w/ Rewrite 27.71 28.23 29.75 30.46 28.88 29.85 28.57 29.98 28.61 29.42 gpt-image-1 28.65 29.45 31.2 31.6 29.98 31.29 29.41 30.81 30.18 31.27 T2V Models Cosmos-1 23.07 23.79 25.98 26.67 24.19 24.94 23.35 25.29 19.99 21.09 Open-Sora 27.4 28.36 29.5 29.93 28.07 29.46 27.64 30.04 27.64 29.19 VideoCrafter-2 28.4 29.76 30.24 30.98 28.95 31 27.83 30.88 28.74 30.61 CogVideoX 21.42 22.55 24.38 25.74 22.89 24.6 22.37 25.51 17.67 19.82 Wan 2.1 25.85 27.89 27.96 29.2 26.05 28.83 25.25 28.92 25.45 27.98 H PERFORMANCE BY DIALECT Due to space constraints, we report performance by dialect in Table 9, Table 10, and Table 11. As described in Section 4.1, the scoring functions are based on reference-free image-text alignment metrics, including VQAScore and CLIPScore. We denote the subset of DialectGen prompts corresponding to a given dialect as P, which consists of multiple SAE Prompt / Dialect Prompt pairs p = (ps, pd). For each individual text prompt ps or pd, we generate n images under different random seeds for text-to-image generative models, or uniformly sample n frames for text-to-video generative models. Accordingly, for each SAE Prompt / Dialect Prompt pair p = (ps, pd) ∈P, we compute its SAE and Dialect performance using Equation (1) and Equation (2), respectively. More concretely, SAE(p, G) in Equation (1) denotes the average VQAScore (as reported in Table 9 and Table 10) or CLIPScore (in Table 11) computed over the n images generated from the SAE prompt ps. Similarly, Dialect(p, G) in Equation (2) is computed using the same evaluation pipeline, but with the corresponding dialect prompt pd from the same pair. Each value of SAE(p, G) and Dialect(p, G) is reported as SAE and Dialect, respectively, in the tables. 22 I USE OF AI TOOLS We employed large language models (LLMs), including OpenAI's GPT-5 and GPT-4o, as auxiliary tools to refine the manuscript and identify grammatical errors. All LLM-assisted content was critically reviewed, fact-checked, and revised by the authors to ensure scientific validity and originality. The authors retain full responsibility for all statements and conclusions presented in this work. Specifically, LLMs were used only to improve wording and clarity of expression. J FUTURE WORK Our work highlights several promising directions for future research, which we encourage the community to explore. Investigating Cultural and Representational Biases It would be interesting for future works to explore and evaluate the significance of representational and skin tone shifts induced by dialect inputs. For instance, as noted in Figure 1, we observed that FLUX.1 [dev] (Black Forest Labs, 2024) image generations for the prompt "A man selling eggplant" depict more upscale and decorated environments compared to generations for "A man selling brinjal." Furthermore, individuals depicted in the images for "brinjal" are darker-skinned. A systematic study of these shifts would provide valuable insights into the inherent biases of large-scale multimodal models. Exploring Grammatical and Joint Dialect Variations While this work concentrated on lexical variations, we welcome future works in this line to carefully study the impacts of grammatical dialect variations and their joint effects with lexical variations. Such research could reveal more complex interactions and failure modes in the performance of multimodal generative models. Investigating Downstream Impacts of Dialectal Performance Gaps Many existing studies rely on the accurate semantic understanding and high-fidelity generation capabilities of multimodal text-to-image and text-to-video generative models Zhang et al. (2023); Wallace et al. (2024); Zhou et al. (2025). It would be interesting to investigate the downstream research impacts of dialectal performance gaps on these works as well as downstream societal impacts to dialect speaker user groups. Extending Evaluation to Multi-Lexeme Prompts Another related area for future work is the extension of our evaluation to settings where multiple dialect lexemes are used. This would test the models' compositional understanding of dialectal language, and we encourage future works to explore such possibilities. However, it should be noted that creating high-quality, controlled data at scale for such experiments is a non-trivial problem that needs to be addressed. Applying the Mitigation Strategy to Text-to-Video Models While our proposed mitigation strategy is designed to be broadly compatible with most multimodal models, it would be interesting to apply our method to text-to-video generative models. Our experiments were limited to text-to-image models due to resource constraints. Therefore, we encourage future researchers with the necessary computing resources to experiment in this domain, as it would serve as a strong test of our method's generalizability. 23
2510.14945
3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation 3D SCENE PROMPTING FOR SCENE-CONSISTENT CAMERA-CONTROLLABLE VIDEO GENERATION JoungBin Lee1∗ Jaewoo Jung1∗ Jisang Han1∗ Takuya Narihira2 Kazumi Fukuda2 Junyoung Seo1 Sunghwan Hong3 Yuki Mitsufuji2,4† Seungryong Kim1† 1KAIST AI 2Sony AI 3ETH Z¨urich 4Sony Group Corporation https://cvlab-kaist.github.io/3DScenePrompt Cam Traj. User-specified Cam Traj. Spatial consistency from projected point clouds to specified Traj Ours GT Future Video Input Video Lift point cloud from static regions 1 Condition 2 4 3 Temporal consistency from Last Few Frames Future Video Generate! Figure 1: Teaser. Our framework generates the next video chunk that follows a user-specified cam- era trajectory while maintaining scene consistency. Our dual spatio-temporal conditioning combines the last few frames for temporal continuity and the rendered point cloud for spatial consistency. ABSTRACT We present 3DScenePrompt, a framework that generates the next video chunk from arbitrary-length input while enabling precise camera control and preserving scene consistency. Unlike methods conditioned on a single image or a short clip, we employ dual spatio-temporal conditioning that reformulates context-view ref- erencing across the input video. Our approach conditions on both temporally adja- cent frames for motion continuity and spatially adjacent content for scene consis- tency. However, when generating beyond temporal boundaries, directly using spa- tially adjacent frames would incorrectly preserve dynamic elements from the past. We address this by introducing a 3D scene memory that represents exclusively the static geometry extracted from the entire input video. To construct this memory, we leverage dynamic SLAM with our newly introduced dynamic masking strat- egy that explicitly separates static scene geometry from moving elements. The static scene representation can then be projected to any target viewpoint, provid- ing geometrically-consistent warped views that serve as strong 3D spatial prompts while allowing dynamic regions to evolve naturally from temporal context. This enables our model to maintain long-range spatial coherence and precise camera control without sacrificing computational efficiency or motion realism. Extensive experiments demonstrate that our framework significantly outperforms existing methods in scene consistency, camera controllability, and generation quality. ∗Equal contribution. †Co-corresponding authors. 1 arXiv:2510.14945v1 [cs.CV] 16 Oct 2025 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation 1 INTRODUCTION Camera-controllable video generation (He et al., 2024; Wang et al., 2024b; Jin et al., 2025) aims to synthesize videos following user-specified camera trajectories while maintaining visual coherence and temporal consistency. Recent advances have progressed from generating entirely new videos with controllable viewpoints (Bahmani et al., 2025) to enabling users to extend a single image or short video clips along desired camera paths (He et al., 2024; Agarwal et al., 2025). Yet these meth- ods share a fundamental limitation: they can only process extremely short conditioning sequences, typically just a few frames, which constrains their ability to understand longer videos and hence fails to preserve the rich scene context present in those longer videos. What if we could provide a model with arbitrary-length video sequences and generate continuations that not only follow precise cam- era controls but also maintain scene consistency with the entire input? Such task, which we refer to as scene-consistent camera-controllable video generation, has immediate applications in film pro- duction (Zhang et al., 2025), virtual reality (He et al., 2025), and synthetic data generation (Knapp & Bohacek, 2025). Scene-consistent, camera-controllable video generation presents three coupled challenges: first, static elements must remain stable across time, while dynamic elements e.g., moving objects or people, should evolve naturally from their most recent states rather than replaying motions from the distant past.Second, effective camera control requires understanding the scene’s 3D geometry: generated content must obey physical constraints, handle occlusions correctly, seamlessly compose dynamic elements with static structure, and plausibly extrapolate previously unobserved regions. Third, these capabilities must remain computationally practical: na¨ıvely conditioning on all input frames scales poorly and becomes intractable as sequence length grows. Rather than building a bespoke architecture, we leverage strong pretrained video generators, re- taining their learned priors and training efficiency, by redesigning how prior content is referenced. Our key insight is to fundamentally rethink this referencing via a dual spatio-temporal conditioning mechanism. Current image-to-video (Yang et al., 2024) and video-to-future-video models 1 (Agar- wal et al., 2025) achieve realistic generation by conditioning on temporally adjacent frames to main- tain short-term consistency and motion continuity. However, adjacency in video is not purely tempo- ral—it can also be spatial. When generating scene-consistent videos, the frames we synthesize may be spatially adjacent to frames from much earlier in the input sequence, particularly when the cam- era revisits similar viewpoints or explores nearby regions. This dual nature of adjacency suggests a new conditioning paradigm that leverages both temporal and spatial relationships. Based on these motivations, we propose 3DScenePrompt, a novel video generation framework de- signed for scene-consistent camera-controllable video synthesis. It takes an arbitrary-length video as context and generates the future video that is consistent with the scene geometry of the context video. The key innovation lies in our dual spatio-temporal conditioning strategy: the model con- ditions on both temporally adjacent frames (for motion continuity) and spatially adjacent frames (for scene consistency). However, an important consideration for spatial conditioning for our task is that it must provide only the persistent static scene structure while excluding dynamic content, as directly conditioning on spatially adjacent frames from the past would incorrectly preserve dynamic elements. To enable this without temporal contradictions, we construct a 3D scene memory that exclusively represents the static geometry extracted from the entire input video. To construct this 3D scene memory from dynamic videos, we leverage recent advances in dynamic SLAM frameworks (Zhang et al., 2022; 2024; Li et al., 2024) to estimate camera poses and 3D structure from the input video. To extract only the static regions from the estimated 3D structure, we introduce a dynamic masking strategy that explicitly separates static elements and moving objects. The static-only 3D representation can then be projected to target viewpoints, yielding geometrically- consistent warped views that serve as spatial prompts while allowing dynamic elements to evolve naturally from temporal context alone. Surprisingly, the integration of 3D scene memory provides an additional benefit: the geometrically-consistent warped views provide rich visual references that significantly reduce uncertainty in viewpoint manipulation, enabling precise camera control without any other explicit camera conditioning. 1Throughout our paper, video-to-future-video models refer to models that are capable of generating the subsequent frames of the given input video (e.g., cosmos-predict2 (Agarwal et al., 2025). 2 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation (a) Camera control video generation Caption Last frame Traj. Target video ℱ Input video (b) Video-to-future-video generation Future video 𝐺 Input video Caption Last few frames Future video ℱ Input video (c) Ours Caption Traj. Last few frames Frames close to Traj. Figure 2: Comparison to existing architectures. (a) Camera-controllable methods condition on a single frame and camera trajectory (Wang et al., 2024b; He et al., 2024; Jin et al., 2025). (b) Video-to-future-video methods use the last few frames of the input video when generating the future video for temporal continuity (Song et al., 2025), but fail to maintain long-term spatial consistency when revisiting viewpoints unseen in the given few frames. (c) Our approach combines temporal conditioning (last few frames) with spatial conditioning (spatially adjacent frames) to achieve scene- consistent generation with precise camera control. In summary, 3DScenePrompt enables both accurate camera control and long-range spatial consis- tency by treating the static scene representation as a persistent spatial prompt that guides generation across arbitrary timescales. Extensive experiments demonstrate that our framework significantly outperforms existing methods in maintaining scene consistency, achieving precise camera control, and generating high-quality videos from arbitrary-length inputs. 2 RELATED WORK Camera-controllable video generation. Building upon the recent success of video diffusion models (Blattmann et al., 2023; Guo et al., 2023; Yang et al., 2024; Runway, 2024; Brooks et al., 2024), recent works (He et al., 2024; Wang et al., 2024b; Bahmani et al., 2024) have achieved camera-controllable video generation by introducing additional adapters into U-Net-based video diffusion models that accept camera trajectories. For instance, CameraCtrl and VD3D (Bahmani et al., 2024; He et al., 2024) incorporate spatiotemporal camera embeddings, such as Pl¨ucker coor- dinates, via ControlNet-like mechanisms (Zhang et al., 2023). While these methods enable precise trajectory following, they only condition on single starting images, lacking mechanisms to maintain consistency with extended video context. In contrast, our approach enables leveraging entire video sequences as spatial prompts through 3D memory construction, enabling scene-consistent genera- tion that preserves the rich scene context within arbitrary-length inputs. Geometry-grounded video generation. Recent works (Ren et al., 2025; Yu et al., 2025; Seo et al., 2025) have integrated off-the-shelf geometry estimators into video generation pipelines to improve geometric accuracy. Gen3C (Ren et al., 2025), for instance, similarly adopts dynamic SLAM (Li et al., 2024; Han et al., 2025) to lift videos to 3D representations. However, these methods exclu- sively address dynamic novel view synthesis—generating new viewpoints within the same temporal window as the input. This constrained setting allows them to simply warp entire scenes without distinguishing static and dynamic elements. Our work fundamentally differs by generating con- tent beyond temporal boundaries, requiring selective masking of dynamic regions during 3D con- struction—a critical challenge that emerges only when static geometry must persist while dynamics evolve naturally into the future. Long-horizon scene-consistent generation. Various approaches attempt scene-consistent long video generation through different strategies. ReCamMaster (Bai et al., 2025) and Trajecto- ryCrafter (Yu et al., 2025) interpolate frames or construct 3D representations but remain confined to the input’s spatiotemporal coverage, essentially performing dynamic novel view synthesis. Star- Gen (Zhai et al., 2025) scales to long trajectories but assumes static worlds, eliminating temporal dynamics entirely. DFoT (Song et al., 2025) most closely relates to our work, proposing guidance methods that condition on previous frames for scene consistency. However, DFoT also faces fun- damental memory constraints when processing extended sequences, limiting its ability to maintain long-range spatial coherence. Our dual spatio-temporal strategy with SLAM-based spatial memory 3 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Lift Static Part Last Few Frames Static Point Cloud Rendered Point Cloud Patchify DiT Generated Dynamic Video C Noise C : Channel Concat. Text Encoder Text Prompt Man wearing cap and sunglasses hikes a rocky mountain trail with backpack. VAE Encoder VAE Decoder Spatially Adjacent Frames Rendered Point Cloud Last Few Frames Input Video Dynamic Mask User-specified Cam Traj. Cam Traj. … … … … … x N Figure 3: Overview of 3DScenePrompt framework. To generate the next chunk video that remains spatially consistent with the input video, we design a dual spatio-temporal conditioning pipeline to extract the most relevant information from the input video. The last few frames are utilized to provide temporal conditioning, ensuring motion continuity between conditioning inputs and the generated frames. In parallel, for the spatial conditioning, we first select the most representative frames from the input sequence, lift their static regions into a 3D point cloud using the dynamic mask, and render it along a user-specified camera trajectory to preserve scene geometry. overcomes these limitations by selectively retrieving only the most relevant frames, both tempo- rally and spatially, enabling computationally efficient processing of arbitrary-length videos while maintaining both motion continuity and scene consistency. 3 METHODOLOGY 3.1 PROBLEM FORMULATION AND MOTIVATION We address the task of scene-consistent camera-controllable video generation: given a dynamic video Vin ∈RL×H×W ×3 of arbitrary length L as context with height H and width W, our goal is to generate T subsequent frames Vout ∈RT ×H×W ×3 that follow a desired camera trajectory C = {Ct}T t=1 while maintaining consistency with the scene captured in the context input: Vout = F(Vin, T , C), (1) where Ct ∈SE(3) represents camera extrinsic matrices and T is a text prompt when a video gener- ator F(·) is based on pretrained text-to-video priors (Hong et al., 2023; Yang et al., 2024; Bahmani et al., 2025). Comparison to existing solutions. This task fundamentally differs from existing video generation paradigms. Existing camera-controllable generation methods (He et al., 2024; Wang et al., 2024b; Bahmani et al., 2024) synthesize videos following user-specified trajectories but only condition on a single image Iref or plain text T (Fig. 2-(a)): Vout = F(Iref, T , C), or Vout = F(T , C), (2) which is insufficient for our task, where the entire underlying 3D scene of the context video should be considered. In contrast, video-to-future-video generation methods such as Cosmos-predict-2 (Agarwal et al., 2025) G(·) employ temporal sliding windows to generate future frames (Fig. 2-(b)): Vout = G(Vin[L −w : L], T ) (3) where Vin[L −w : L] for w ≪L represents a small overlap window, typically consisting of the last few frames of Vin. Although this design encourages temporal smoothness by providing the last few frames when generating the future video, it often fails to preserve long-term spatial consistency when the camera revisits regions not covered by the small window w. 3.2 TOWARDS SCENE-CONSISTENT CAMERA-CONTROLLABLE VIDEO GENERATION The key challenge of scene-consistent camera-controllable video generation lies in reconciling two competing requirements: maintaining consistency with potentially distant frames that share spatial 4 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation proximity (when the camera returns to similar viewpoints), while evolving dynamic content naturally from the recent temporal context. Ideally, extending the conditioning window of video-to-future- video frameworks to condition on all frames Vin would ensure optimal global spatial consistency and dynamic smoothness. However, this quickly becomes impractical as the sequence grows, since standard self-attention incurs quadratic time/memory in the sequence length. Dual spatio-temporal sliding window strategy. Instead of increasing the temporal window size w of the existing video-to-future-video generation methods, we introduce a dual sliding window strategy that conditions on frames selected along both temporal and spatial axes (Fig. 2-(c), and Fig. 3). Beyond the standard temporal window that captures recent motion dynamics, we add a spatial window that retrieves frames sharing similar 3D viewpoints, regardless of their temporal distance: Vout = F( ˜Vin, T , C), where ˜Vin = {Temporal(w)} ∪{Spatial(T)}, (4) where the model F generates a future sequence Vout conditioned on Temporal(w), last w frames of the input video Vin[L−w : L], and Spatial(T), the T retrieved frames from the entire input sequence based on viewpoint similarity to the target viewpoint C. This dual conditioning enables the model to reference distant frames that observe the same spatial regions, maintaining scene consistency without processing all L input frames. While this dual conditioning is conceptually appealing, na¨ıvely retrieving and providing spatially ad- jacent frames directly would be problematic for our task. Since we aim to generate content beyond the input’s temporal boundary, directly conditioning on much earlier frames would freeze outdated dynamics—for example, a walking person seen many frames ago should not reappear at the same location when synthesizing far-future frames. The spatial conditioning should therefore provide only the persistent scene structure while excluding dynamic content. Rather than retrieving individ- ual frames, we introduce a 3D scene memory M that represents exclusively the static geometry extracted from all spatially relevant frames. 3.3 3D SCENE MEMORY CONSTRUCTION Our 3D scene memory must efficiently encode spatial relationships across all L frames while ex- tracting only persistent static geometry. To construct the 3D scene memory, we leverage dynamic SLAM frameworks (Li et al., 2024; Zhang et al., 2024) to estimate camera poses and reconstruct 3D structure: ( ˆC, P) = DSLAM(Vin), (5) where ˆC = { ˆCi}L i=1 are the estimated camera poses, P represents the aggregated 3D point cloud from the L input frames, and DSLAM(·) represents the dynamic SLAM framework. This SLAM integration is effective in that it not only estimates the camera parameters of the input frames but also reconstructs the 3D structure of the scene, which can be further utilized to represent the 3D static geometry in a more efficient manner than other representations (Hong et al., 2024a;b). While the camera poses ˆC enable efficient spatial retrieval by comparing viewpoint similarity with the target trajectory C, the aggregated 3D point cloud P still contains both static and dynamic regions. Thus, we now explain our full pipeline on how to identify dynamic regions and only maintain the persistent static geometry of the input video. Dynamic masking for static scene extraction. Na¨ıvely aggregating points across frames creates ghosting artifacts where moving objects appear frozen at multiple positions, as shown in Fig. 4-(a). We address this through a comprehensive three-stage masking pipeline that identifies and excludes all dynamic content as depicted in Fig. 5. We begin with pixel-level motion detection following existing works (Zhang et al., 2024; Han et al., 2025). For each frame pair, we compute optical flow using off-the-shelf models (Wang et al., 2024a; Teed & Deng, 2020; An et al., 2025) (Flowoptical) and compare it against the flow induced by camera motion alone (Flowwarp). Regions where the L1 difference exceeds a specific threshold τ are marked as potentially dynamic: M pixel i = 1 [∥Flowoptical −Flowwarp∥1 > τ] , (6) 5 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation (a) w/o mask (b) w mask T = 0 T = 20 T = 40 Figure 4: Illustration of dynamic masking for static scene extraction. When aggregating 3D points across frames, moving objects create ghosting artifacts if not properly masked. (a) Without masking, dynamic elements (horses and riders) appear frozen at multiple positions, severely degrad- ing the warped views. (b) With our dynamic masking pipeline, these elements are identified and excluded, resulting in clean static-only point clouds that can be reliably warped to new viewpoints. where 1[·] denotes the element-wise indicator that returns 1 if the condition holds for a pixel and 0 otherwise, i.e., it flags pixels whose optical flow deviates from the warped flow by more than τ. However, pixel-level detection captures motion only at specific instants and misses complete object boundaries. We therefore propagate these sparse detections to full objects using SAM2 (Ravi et al., 2024), where we sample points from dynamic pixels in the first frame for prompts. Yet this approach still has limitations: static objects that begin moving in later frames may not be detected if they appear static initially. Our solution employs backward tracking with CoTracker3 (Karaev et al., 2024) to aggregate motion evidence across the entire sequence. From the sampled points in each frame obtained from our pixel-level motion detection, we track these points from all frames back to t = 0, capturing motions of objects that move at any point. These aggregated points are used to prompt the final SAM2 pass, producing complete object-level masks M obj i that cleanly separate all dynamic content (Fig. 4-(b)). With the full dynamic mask, we can now obtain the static-only 3D geometry Pstatic: Pstatic = L [ i=1 Pi ⊙(1 −M obj i ). (7) From the constructed static-only 3D geometry Pstatic with our proposed dynamic masking strategy, we now obtain the 3D scene memory: M = ( ˆC, Pstatic), (8) where we now explain how this 3D scene memory M can be used for scene-consistent camera- controllable video generation in the following section. 3.4 3D SCENE PROMPTING Having constructed the static-only 3D representation Pstatic, rather than na¨ıvely retrieving T frames from the input video based on viewpoint similarity, we synthesize static-only spatial frames through the projection of Pstatic. For each target camera pose Ct ∈C, we generate the corresponding spatial frame by projecting the static points from the most spatially relevant input frames: Spatial(t) = Π(K · Ct · P(n) static), (9) where P(n) static ⊂Pstatic contains points from the top-n spatially adjacent input frames to Ct, Π(·) de- notes perspective projection, and K is the camera intrinsic matrix. The complete spatial condition- ing becomes Spatial(T) = {Spatial(t)}T t=1 ∈RT ×H×W ×3, where spatial adjacency is calculated by field-of-view overlap. This projection-based approach ensures only static content appears in conditioning while providing geometrically consistent views aligned to target poses. Notably, the static point cloud aggregates in- formation from multiple viewpoints, potentially filling regions occluded by dynamic objects. These 6 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation … Input video … Optical flow … Dynamic mask … Flow mask Dynamic thresholding … Point sampling Dynamic aggregation … Propagation … … BW tracking Figure 5: Dynamic masking. A three-stage pipeline refines dynamic region detection to produce complete object-level masks: (1) optical-flow differences detect pixel-level motion (Dynamic thresh- olding); (2) sample points from these regions for all frames and perform backward tracking (BW tracking) with CoTracker3 (Karaev et al., 2024) to aggregate motion evidence across all frames back to t=0 (dynamic aggregation), capturing objects that move at any time; (3) propagate aggre- gated points in the first frame to the entire video using SAM2 (Ravi et al., 2024). The resulting dynamic masks cleanly separate moving elements (people, objects) from the static background, en- abling construction of the static-only point cloud Pstatic. projected views serve as 3D scene prompts that provide explicit guidance about persistent scene structure, enabling precise camera control without additional encoding modules. The projected views Spatial(T) serve as what we term 3D scene prompts—they provide the model with explicit guidance about the persistent scene structure. By conditioning on both Temporal(w) and Spatial(T), our framework effectively enables scene-consistent camera-controllable video gen- eration with computational efficiency while preserving the prior for high-quality video synthesis. 3.5 TRAINING DATASET CONSTRUCTION To train our model to effectively handle both static and dynamic scenes while leveraging spatial and temporal conditioning appropriately, we construct training data from diverse video sources with different preprocessing strategies based on scene characteristics. Data processing. Our training data comes from two primary sources: RealEstate10K (Zhou et al., 2018) containing static scenes, and OpenVid-1M (Nan et al., 2024), featuring dynamic content. For static videos from RealEstate10K, we directly extract 3D scene geometry without dynamic masking, as the scenes contain negligible motion. In contrast, for dynamic videos from OpenVid-1M, we employ a comprehensive dynamic masking pipeline explained in Section 3.3 to separate static and dynamic elements. 3D scene memory construction. For effective training, we require videos of sufficient length to construct meaningful 3D scene context. To enable the model to handle arbitrary-length videos as input, we select long videos as the input video with L > 100. As our video generation framework F(·) generates T frames, we designate the last T frames as the ground truth future video that the model should generate, while frames before this window serve as input for constructing the 3D scene memory. Using MegaSAM (Li et al., 2024), we process the entire video to estimate camera poses { ˆCi}L i=1 for all L frames and reconstruct the 3D structure. We use frames Vin ∈R(L−T )×H×W ×3 (i.e., frames before the last T frames) to construct the 3D scene memory, while the camera poses for the last T frames { ˆCi}L i=L−(T −1) are used during training to project the scene memory and generate spatial conditioning. 7 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation For static scenes, we directly use the reconstructed geometry from Vin. For dynamic scenes, we apply our dynamic masks to extract static-only geometry: Pstatic = L−T [ i=1 Pi ⊙(1 −M obj i ) (10) The resulting 3D scene memory M = ( ˆC, Pstatic) provides the spatial context for generating the future T frames. During training, for each target frame t in the last T frames, we use its corre- sponding camera pose ˆCt to project Pstatic from spatially adjacent viewpoints, creating the spatial conditioning Spatial(T) that guides the generation process. Spatial and temporal conditioning. To train the model to utilize spatial conditioning effectively, it is crucial to incorporate both static and dynamic regions in our training data. The mixture of RealEstate10K and OpenVid-1M ensures the model tackles scenarios where spatial conditioning should dominate (static regions) and also maintains the prior of generating high-fidelity dynamic motions learned from large-scale pretraining. For each training sample, we provide: • Temporal conditioning: The last w = 9 frames from the input sequence, capturing recent motion dynamics. • Spatial conditioning: Projected views from the static 3D scene memory, synthesized by selecting the point clouds from the top-n spatially adjacent viewpoints based on field-of- view overlap. This dual conditioning strategy, trained on both static and dynamic content, enables the model to learn when to rely on spatial prompts for scene consistency versus temporal frames for motion continuity. The diversity in our training data—from completely static architectural scenes to videos with dynamic motion—ensures robust generalization to real-world scenarios where both static and dynamic elements coexist. 4 EXPERIMENTS 4.1 IMPLEMENTATION DETAILS Model architecture. We build upon CogVideoX-I2V-5B (Yang et al., 2024), extending its single- image conditioning to accept dual spatio-temporal inputs with minimal architectural changes. The key modification is repurposing the existing image conditioning channel to accept concatenated latents from both temporal frames and spatial projections. Specifically, we provide the last w = 9 frames from Vin as temporal conditioning and T projected views from the static point cloud as spatial conditioning. This enables the DiT backbone to remain entirely unchanged, preserving all pretrained video priors. Both conditions are encoded through the frozen 3D VAE and concatenated channel-wise such that Zcond = Concat[E(Temporal(w)), E(Spatial(T))]. Fine-tuning. We fully fine-tune the model for a total of 4K iterations with a batch size of 8 using 4 H100 GPUs, which required approximately 48 hours. We used the 16-bit Adam optimizer with a learning rate of 1×10−5, and adopted the same hyperparameter settings as those used in the training of CogVideoX (Yang et al., 2024). For the temporal sliding window, we provide the last 9 frames of the input video, setting w = 9. For the projection of top-n spatially adjacent views, we set n = 7. Experimental settings. We evaluate our method across four key aspects: camera controllability, video quality, scene consistency, and geometric consistency. Since no prior work directly addresses scene-consistent camera-controllable video generation, we compare against two categories of base- lines: (1) camera-controllable methods (CameraCtrl (He et al., 2024), MotionCtrl (Wang et al., 2024b), FloVD (Jin et al., 2025), AC3D (Bahmani et al., 2025)) for camera control and video qual- ity metrics, and (2) DFoT (Song et al., 2025), which attempts scene-consistent camera-controllable generation, for spatial and geometric consistency metrics. 8 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Methods RealEstate10K DynPose-100K PSNR↑ SSIM↑ LPIPS↓ MEt3R↓ PSNR↑ SSIM↑ LPIPS↓ MEt3R↓ DFoT Song et al. (2025) 18.3044 0.5960 0.3077 0.181164 12.1471 0.3040 0.4172 0.183202 3DScenePrompt (Ours) 20.8932 0.7171 0.2120 0.040843 13.0468 0.3666 0.3812 0.124189 Table 1: Evaluation of spatial and geometric consistency. We compare DFoT and Ours on the RealEstate10K (Zhou et al., 2018) and DynPose-100K (Rockwell et al., 2025) datasets. For spatial consistency, we evaluate PSNR, SSIM, and LPIPS on revisited camera trajectories, while for geo- metric consistency, we report the MEt3R (Asim et al., 2025) metric. We primarily evaluate on 1,000 dynamic videos from DynPose-100K (Rockwell et al., 2025). For scene consistency evaluation, we additionally test on 1,000 static videos from RealEstate10K (Zhou et al., 2018), as static scenes provide clearer spatial consistency assessment. 4.2 SCENE-CONSISTENT VIDEO GENERATION Evaluation Protocol. As mentioned in Section 3.1, one of the unique and key challenges in scene- consistent camera-controllable video generation is maintaining spatial consistency over extended du- rations. From a given input video, we evaluate spatial consistency by generating camera trajectories that revisit the viewpoints in the given video. By matching frames in the generated video and the in- put video that share the same viewpoint, we calculate PSNR, SSIM, and LPIPS. For RealEstate10K, we evaluate the whole image, whereas we only evaluate the static regions by masking out the dy- namic regions for DynPose-100K. We also assess geometric consistency using Met3R (Asim et al., 2025), which measures multi-view alignment of generated frames under the recovered camera pose. Results. As shown in Tab. 1, 3DScenePrompt significantly outperforms DFoT across all metrics for both static and dynamic scenes. Most notably, our MEt3R evaluation error drops 77% (0.041 vs 0.181), demonstrating superior multi-view geometric alignment. While DFoT similarly tack- les scene-consistent camera-controllable video generation through history guidance, their approach fails to maintain scene-consistency for long sequences due to memory constraints. In contrast, our dual spatio-temporal conditioning enables long-term scene-consistency without causing significant computational overhead. The qualitative comparisons shown in Fig. 6 also validate the effectiveness of our approach over DFoT. 4.3 CAMERA-CONTROLLABLE VIDEO GENERATION Evaluation Protocol. We employ the evaluation protocol of previous methods (He et al., 2024; Zheng et al., 2024; Jin et al., 2025) for the camera controllability. We provide an input image along with associated camera parameters for I2V models (He et al., 2024; Wang et al., 2024b; Jin et al., 2025) and solely provide camera parameters for the T2V model (Bahmani et al., 2025). For our model, we provide the last 9 frames of the input video together with the camera parameters. To evaluate how faithfully the generated video follows the camera condition, we estimate camera parameters from the synthesized video using MegaSAM (Li et al., 2024), and compare the estimated camera parameters against the condition camera trajectory C. As done in previous works (Hong et al., 2024b;a; Rockwell et al., 2025), the comparison between the estimated and input camera parameters is quantified using three metrics: mean rotation error (mRotErr), mean translation error (mTransErr), and mean error in the camera extrinsic matrices (mCamMC). Table 2: Camera controllability evaluation. Methods DynPose-100K mRotErr (◦)↓ mTransErr↓ mCamMC↓ MotionCtrl Wang et al. (2024b) 3.5654 7.8231 9.7834 CameraCtrl He et al. (2024) 3.3273 9.5989 11.2122 FloVD Jin et al. (2025) 3.4811 11.0302 12.6202 AC3D Bahmani et al. (2025) 3.0675 9.7044 11.1634 DFoT Song et al. (2025) 2.3977 8.0866 9.2330 3DScenePrompt (Ours) 2.3772 7.4174 8.6352 For the generated video, we also as- sess video synthesis performance using the Fr´echet Video Distance (FVD) (Sko- rokhodov et al., 2022) and seven metrics from VBench++ (Huang et al., 2024): sub- ject consistency, background consistency, aesthetic quality, imaging quality, tempo- ral flickering, motion smoothness, and dy- namic degree. 9 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation GT Future Video Camera DFoT History Guided + Camera Control Condition Last Frames Generated Input Video T=57 T=60 : Conditions for Generating the Next Video Generated T=51 T=60 T=5 T=20 Ours Ours Condition Frames close to Traj. Last Frames Figure 6: Visualization of generated videos following trajectories that revisit early frames in the input video. We visualize and compare frames obtained from DFoT (Song et al., 2025) and Ours. We condition both DFoT and ours to generate a video that follows the camera trajectory shown in the top row (GT Future Video), which revisits a viewpoint in the input (Input Video). The comparison shows that ours shows much more consistent generation, whereas DFoT fails to generate scene-consistent frames mainly due to the limited number of frames it can condition on. Also, the comparison of the camera trajectory shows that our method more faithfully follows the given camera condition. Table 3: Evaluation of video generation quality. We assess the quality of generated videos using FVD and VBench++ scores. For FVD, lower values indicate higher video quality. For VBench++ scores, higher values indicate better performance for all metrics. All VBench++ scores are normal- ized. Methods DynPose-100K FVD Overall Score Subject Consist Bg Consist Aesthetic Quality Imaging Quality Temporal Flicker Motion Smooth Dynamic Degree MotionCtrl Wang et al. (2024b) 1017.4247 0.5625 0.5158 0.7093 0.3157 0.3149 0.8297 0.8432 0.7900 CameraCtrl He et al. (2024) 737.0506 0.6280 0.6775 0.8238 0.3736 0.3888 0.6837 0.6955 0.9900 FloVD Jin et al. (2025) 171.2697 0.7273 0.7964 0.8457 0.4722 0.5546 0.7842 0.8364 0.9900 AC3D Bahmani et al. (2025) 281.2140 0.7428 0.8360 0.8674 0.4766 0.5381 0.8020 0.8673 1.0000 3DScenePrompt (Ours) 127.4758 0.7747 0.8669 0.8727 0.4990 0.5964 0.8551 0.9260 1.0000 Results. We first evaluate camera controllability and compare our method with competitive base- lines. As shown in Tab. 2, our approach consistently outperforms existing methods, indicating 3DScenePrompt is capable of generating videos with precise camera control. We then assess the overall video quality (Tab. 3) and provide qualitative comparisons (Fig. 7). As observed in Tab. 3, our method achieves the best generation quality across all metrics for dynamic video generation, which is further supported by the visual results in Fig. 7. 4.4 ABLATION STUDIES Table 4: Ablation study on varying n. Methods Dynamic mask M DynPose-100K PSNR↑ SSIM↑ LPIPS↓ MEt3R↓ Ours (n = 1) ✓ 13.0207 0.3732 0.3771 0.124773 Ours (n = 4) ✓ 13.0382 0.3733 0.3758 0.124893 Ours (n = L) ✓ 13.0206 0.3631 0.3810 0.123507 Ours (n = 7) ✗ 12.2304 0.3063 0.3821 0.134885 Ours (n = 7) ✓ 13.0468 0.3666 0.3812 0.124189 We analyze two critical components of our framework: the dynamic masking strategy that separates static and dynamic elements, and the number of spatially adjacent frames n retrieved for spatial conditioning. Tab. 4 demonstrates the impact of varying n and the necessity of dynamic masking. Without dynamic masking (4th row), the model suffers significantly across all, showing a large drop of PSNR of approximately 10 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation t=30 Camera AC3D FloVD MotionCtrl CameraCtrl I2V + camera control T=60 Fail Condition “A man in a black cap and t-shirt stands at the open door of a luxury car, conversing with another man inside the vehicle, in a parking lot with several cars …(more) T2V + camera control AC3D Input Video GT Future Video Last Frame Condition T=51 T=60 T=5 T=20 Ours GT Video Ours : Conditions for Generating the Next Video Condition Last Frames Frames close to Traj. Generated Generated Generated Figure 7: Visualization of scene-consistent camera-controllable video generation. Comparison of different methods (Wang et al., 2024b; He et al., 2024; Jin et al., 2025; Bahmani et al., 2025) for generating videos from the same input (shown in Input Video) that follow the camera trajectory shown in the rightmost column (Camera). Our method best preserves scene consistency with the in- put video. Note the red-boxed regions: while the input video shows a white wall, competing methods either lose scene detail or fail to maintain the original scene structure. In contrast, our approach ac- curately remembers the white wall and maintains consistent scene elements throughout generation. In addition, when compared with the GT Future Video, ours best follows the camera condition, effectively verifying the strength of our framework for scene-consistent camera-controllable video generation. 0.8dB and also an increase of MEt3R error. This degradation occurs because unmasksed dynamic objects create ghosting artifacts when warped to new viewpoints, corrupting the spatial condition- ing. Regarding the number of spatially adjacent frames, we find that performance stabilizes around n = 7, with minimal improvements beyond this point, suggesting that 7 frames provide sufficient spatial context while maintaining computational efficiency. 5 CONCLUSION In this work, we introduced 3DScenePrompt, a framework for scene-consistent camera-controllable video generation. By combining dual spatio-temporal conditioning with a static-only 3D scene memory constructed through dynamic SLAM and our dynamic masking strategy, we enable gen- erating continuations from arbitrary-length videos while preserving scene geometry and allowing natural motion evolution. Extensive experiments demonstrate superior performance in camera con- trollability, scene consistency, and generation quality compared to existing methods. Our approach opens new possibilities for long-form video synthesis applications where maintaining both spatial consistency and precise camera control is essential. 11 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation REFERENCES Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chat- topadhyay, Yongxin Chen, Yin Cui, Yifan Ding, et al. Cosmos world foundation model platform for physical ai. arXiv preprint arXiv:2501.03575, 2025. Honggyu An, Jin Hyeon Kim, Seonghoon Park, Jaewoo Jung, Jisang Han, Sunghwan Hong, and Seungryong Kim. Cross-view completion models are zero-shot correspondence estimators. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 1103–1115, 2025. Mohammad Asim, Christopher Wewer, Thomas Wimmer, Bernt Schiele, and Jan Eric Lenssen. Met3r: Measuring multi-view consistency in generated images. arXiv preprint arXiv:2501.06336, 2025. Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control. arXiv preprint arXiv:2407.12781, 2024. Sherwin Bahmani, Ivan Skorokhodov, Guocheng Qian, Aliaksandr Siarohin, Willi Menapace, An- drea Tagliasacchi, David B. Lindell, and Sergey Tulyakov. Ac3d: Analyzing and improving 3d camera control in video diffusion transformers. Proc. CVPR, 2025. Jianhong Bai, Menghan Xia, Xiao Fu, Xintao Wang, Lianrui Mu, Jinwen Cao, Zuozhu Liu, Haoji Hu, Xiang Bai, Pengfei Wan, et al. Recammaster: Camera-controlled generative rendering from a single video. arXiv preprint arXiv:2503.11647, 2025. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint arXiv:2311.15127, 2023. Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators, 2024. URL https://openai.com/research/ video-generation-models-as-world-simulators. Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffu- sion models without specific tuning. arXiv preprint arXiv:2307.04725, 2023. Jisang Han, Honggyu An, Jaewoo Jung, Takuya Narihira, Junyoung Seo, Kazumi Fukuda, Chae- hyun Kim, Sunghwan Hong, Yuki Mitsufuji, and Seungryong Kim. Dˆ 2ust3r: Enhancing 3d reconstruction with 4d pointmaps for dynamic scenes. arXiv preprint arXiv:2504.06264, 2025. Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint arXiv:2404.02101, 2024. Haoran He, Yang Zhang, Liang Lin, Zhongwen Xu, and Ling Pan. Pre-trained video generative models as world simulators. arXiv preprint arXiv:2502.07825, 2025. Sunghwan Hong, Jaewoo Jung, Heeseong Shin, Jisang Han, Jiaolong Yang, Chong Luo, and Seungryong Kim. Pf3plat: Pose-free feed-forward 3d gaussian splatting. arXiv preprint arXiv:2410.22128, 2024a. Sunghwan Hong, Jaewoo Jung, Heeseong Shin, Jiaolong Yang, Seungryong Kim, and Chong Luo. Unifying correspondence pose and nerf for generalized pose-free novel view synthesis. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 20196– 20206, 2024b. Susung Hong, Junyoung Seo, Heeseong Shin, Sunghwan Hong, and Seungryong Kim. Direct2v: Large language models are frame-level directors for zero-shot text-to-video generation. arXiv preprint arXiv:2305.14330, 2023. 12 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chan- paisit, Chenyang Si, Yuming Jiang, et al. Vbench++: Comprehensive and versatile benchmark suite for video generative models. arXiv preprint arXiv:2411.13503, 2024. Wonjoon Jin, Qi Dai, Chong Luo, Seung-Hwan Baek, and Sunghyun Cho. Flovd: Optical flow meets video diffusion model for enhanced camera-controlled video synthesis. arXiv preprint arXiv:2502.08244, 2025. Nikita Karaev, Iurii Makarov, Jianyuan Wang, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Cotracker3: Simpler and better point tracking by pseudo-labelling real videos. arXiv preprint arXiv:2410.11831, 2024. V´aclav Knapp and Maty Bohacek. Synthetic human action video data generation with pose transfer. In Synthetic Data for Computer Vision Workshop@ CVPR 2025, 2025. Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, and Noah Snavely. Megasam: Accurate, fast, and robust struc- ture and motion from casual dynamic videos. arXiv preprint arXiv:2412.04463, 2024. Kepan Nan, Rui Xie, Penghao Zhou, Tiehan Fan, Zhenheng Yang, Zhijie Chen, Xiang Li, Jian Yang, and Ying Tai. Openvid-1m: A large-scale high-quality dataset for text-to-video generation. arXiv preprint arXiv:2407.02371, 2024. Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman R¨adle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024. Xuanchi Ren, Tianchang Shen, Jiahui Huang, Huan Ling, Yifan Lu, Merlin Nimier-David, Thomas M¨uller, Alexander Keller, Sanja Fidler, and Jun Gao. Gen3c: 3d-informed world-consistent video generation with precise camera control. arXiv preprint arXiv:2503.03751, 2025. Chris Rockwell, Joseph Tung, Tsung-Yi Lin, Ming-Yu Liu, David F Fouhey, and Chen-Hsuan Lin. Dynamic camera poses and where to find them. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 12444–12455, 2025. Runway. Runway. https://runwayml.com/, 2024. Accessed: 2024-11-05. Junyoung Seo, Jisang Han, Jaewoo Jung, Siyoon Jin, Joungbin Lee, Takuya Narihira, Kazumi Fukuda, Takashi Shibuya, Donghoon Ahn, Shoukang Hu, et al. Vid-camedit: Video cam- era trajectory editing with generative rendering from estimated geometry. arXiv preprint arXiv:2506.13697, 2025. Ivan Skorokhodov, Sergey Tulyakov, and Mohamed Elhoseiny. Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3626–3636, 2022. Kiwhan Song, Boyuan Chen, Max Simchowitz, Yilun Du, Russ Tedrake, and Vincent Sitzmann. History-guided video diffusion. arXiv preprint arXiv:2502.06764, 2025. Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pp. 402–419. Springer, 2020. Yihan Wang, Lahav Lipson, and Jia Deng. Sea-raft: Simple, efficient, accurate raft for optical flow. In European Conference on Computer Vision, pp. 36–54. Springer, 2024a. Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. Motionctrl: A unified and flexible motion controller for video generation. In ACM SIGGRAPH 2024 Conference Papers, pp. 1–11, 2024b. Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint arXiv:2408.06072, 2024. 13 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Mark Yu, Wenbo Hu, Jinbo Xing, and Ying Shan. Trajectorycrafter: Redirecting camera trajec- tory for monocular videos via diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, 2025. Shangjin Zhai, Zhichao Ye, Jialin Liu, Weijian Xie, Jiaqi Hu, Zhen Peng, Hua Xue, Danpeng Chen, Xiaomeng Wang, Lei Yang, et al. Stargen: A spatiotemporal autoregression framework with video diffusion model for scalable and controllable scene generation. arXiv preprint arXiv:2501.05763, 2025. Junyi Zhang, Charles Herrmann, Junhwa Hur, Varun Jampani, Trevor Darrell, Forrester Cole, De- qing Sun, and Ming-Hsuan Yang. Monst3r: A simple approach for estimating geometry in the presence of motion. arXiv preprint arXiv:2410.03825, 2024. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3836–3847, 2023. Ruihan Zhang, Borou Yu, Jiajian Min, Yetong Xin, Zheng Wei, Juncheng Nemo Shi, Mingzhen Huang, Xianghao Kong, Nix Liu Xin, Shanshan Jiang, et al. Generative ai for film creation: A survey of recent advances. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 6267–6279, 2025. Zhoutong Zhang, Forrester Cole, Zhengqi Li, Michael Rubinstein, Noah Snavely, and William T Freeman. Structure and motion from casual videos. In European Conference on Computer Vision, pp. 20–37. Springer, 2022. Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, and Xi Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint arXiv:2410.15957, 2024. Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817, 2018. 14
3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation 3D SCENE PROMPTING FOR SCENE-CONSISTENT CAMERA-CONTROLLABLE VIDEO GENERATION JoungBin Lee1∗ Jaewoo Jung1∗ Jisang Han1∗ Takuya Narihira2 Kazumi Fukuda2 Junyoung Seo1 Sunghwan Hong3 Yuki Mitsufuji2,4† Seungryong Kim1† 1KAIST AI 2Sony AI 3ETH Z ̈urich 4Sony Group Corporation https://cvlab-kaist.github.io/3DScenePrompt Cam Traj. User-specified Cam Traj. Spatial consistency from projected point clouds to specified Traj Ours GT Future Video Input Video Lift point cloud from static regions 1 Condition 2 4 3 Temporal consistency from Last Few Frames Future Video Generate! Figure 1: Teaser. Our framework generates the next video chunk that follows a user-specified camera trajectory while maintaining scene consistency. Our dual spatio-temporal conditioning combines the last few frames for temporal continuity and the rendered point cloud for spatial consistency. ABSTRACT We present 3DScenePrompt, a framework that generates the next video chunk from arbitrary-length input while enabling precise camera control and preserving scene consistency. Unlike methods conditioned on a single image or a short clip, we employ dual spatio-temporal conditioning that reformulates context-view referencing across the input video. Our approach conditions on both temporally adjacent frames for motion continuity and spatially adjacent content for scene consistency. However, when generating beyond temporal boundaries, directly using spatially adjacent frames would incorrectly preserve dynamic elements from the past. We address this by introducing a 3D scene memory that represents exclusively the static geometry extracted from the entire input video. To construct this memory, we leverage dynamic SLAM with our newly introduced dynamic masking strategy that explicitly separates static scene geometry from moving elements. The static scene representation can then be projected to any target viewpoint, providing geometrically-consistent warped views that serve as strong 3D spatial prompts while allowing dynamic regions to evolve naturally from temporal context. This enables our model to maintain long-range spatial coherence and precise camera control without sacrificing computational efficiency or motion realism. Extensive experiments demonstrate that our framework significantly outperforms existing methods in scene consistency, camera controllability, and generation quality. ∗Equal contribution. †Co-corresponding authors. 1 16 Oct 2025 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation 1 INTRODUCTION Camera-controllable video generation (He et al., 2024; Wang et al., 2024b; Jin et al., 2025) aims to synthesize videos following user-specified camera trajectories while maintaining visual coherence and temporal consistency. Recent advances have progressed from generating entirely new videos with controllable viewpoints (Bahmani et al., 2025) to enabling users to extend a single image or short video clips along desired camera paths (He et al., 2024; Agarwal et al., 2025). Yet these methods share a fundamental limitation: they can only process extremely short conditioning sequences, typically just a few frames, which constrains their ability to understand longer videos and hence fails to preserve the rich scene context present in those longer videos. What if we could provide a model with arbitrary-length video sequences and generate continuations that not only follow precise camera controls but also maintain scene consistency with the entire input? Such task, which we refer to as scene-consistent camera-controllable video generation, has immediate applications in film production (Zhang et al., 2025), virtual reality (He et al., 2025), and synthetic data generation (Knapp & Bohacek, 2025). Scene-consistent, camera-controllable video generation presents three coupled challenges: first, static elements must remain stable across time, while dynamic elements e.g., moving objects or people, should evolve naturally from their most recent states rather than replaying motions from the distant past.Second, effective camera control requires understanding the scene's 3D geometry: generated content must obey physical constraints, handle occlusions correctly, seamlessly compose dynamic elements with static structure, and plausibly extrapolate previously unobserved regions. Third, these capabilities must remain computationally practical: na ̈ıvely conditioning on all input frames scales poorly and becomes intractable as sequence length grows. Rather than building a bespoke architecture, we leverage strong pretrained video generators, retaining their learned priors and training efficiency, by redesigning how prior content is referenced. Our key insight is to fundamentally rethink this referencing via a dual spatio-temporal conditioning mechanism. Current image-to-video (Yang et al., 2024) and video-to-future-video models 1 (Agarwal et al., 2025) achieve realistic generation by conditioning on temporally adjacent frames to maintain short-term consistency and motion continuity. However, adjacency in video is not purely temporal-it can also be spatial. When generating scene-consistent videos, the frames we synthesize may be spatially adjacent to frames from much earlier in the input sequence, particularly when the camera revisits similar viewpoints or explores nearby regions. This dual nature of adjacency suggests a new conditioning paradigm that leverages both temporal and spatial relationships. Based on these motivations, we propose 3DScenePrompt, a novel video generation framework designed for scene-consistent camera-controllable video synthesis. It takes an arbitrary-length video as context and generates the future video that is consistent with the scene geometry of the context video. The key innovation lies in our dual spatio-temporal conditioning strategy: the model conditions on both temporally adjacent frames (for motion continuity) and spatially adjacent frames (for scene consistency). However, an important consideration for spatial conditioning for our task is that it must provide only the persistent static scene structure while excluding dynamic content, as directly conditioning on spatially adjacent frames from the past would incorrectly preserve dynamic elements. To enable this without temporal contradictions, we construct a 3D scene memory that exclusively represents the static geometry extracted from the entire input video. To construct this 3D scene memory from dynamic videos, we leverage recent advances in dynamic SLAM frameworks (Zhang et al., 2022; 2024; Li et al., 2024) to estimate camera poses and 3D structure from the input video. To extract only the static regions from the estimated 3D structure, we introduce a dynamic masking strategy that explicitly separates static elements and moving objects. The static-only 3D representation can then be projected to target viewpoints, yielding geometricallyconsistent warped views that serve as spatial prompts while allowing dynamic elements to evolve naturally from temporal context alone. Surprisingly, the integration of 3D scene memory provides an additional benefit: the geometrically-consistent warped views provide rich visual references that significantly reduce uncertainty in viewpoint manipulation, enabling precise camera control without any other explicit camera conditioning. 1Throughout our paper, video-to-future-video models refer to models that are capable of generating the subsequent frames of the given input video (e.g., cosmos-predict2 (Agarwal et al., 2025). 2 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation (a) Camera control video generation Caption Last frame Traj. Target video F Input video (b) Video-to-future-video generation Future video G Input video Caption Last few frames Future video F Input video (c) Ours Caption Traj. Last few frames Frames close to Traj. Figure 2: Comparison to existing architectures. (a) Camera-controllable methods condition on a single frame and camera trajectory (Wang et al., 2024b; He et al., 2024; Jin et al., 2025). (b) Video-to-future-video methods use the last few frames of the input video when generating the future video for temporal continuity (Song et al., 2025), but fail to maintain long-term spatial consistency when revisiting viewpoints unseen in the given few frames. (c) Our approach combines temporal conditioning (last few frames) with spatial conditioning (spatially adjacent frames) to achieve sceneconsistent generation with precise camera control. In summary, 3DScenePrompt enables both accurate camera control and long-range spatial consistency by treating the static scene representation as a persistent spatial prompt that guides generation across arbitrary timescales. Extensive experiments demonstrate that our framework significantly outperforms existing methods in maintaining scene consistency, achieving precise camera control, and generating high-quality videos from arbitrary-length inputs. 2 RELATED WORK Camera-controllable video generation. Building upon the recent success of video diffusion models (Blattmann et al., 2023; Guo et al., 2023; Yang et al., 2024; Runway, 2024; Brooks et al., 2024), recent works (He et al., 2024; Wang et al., 2024b; Bahmani et al., 2024) have achieved camera-controllable video generation by introducing additional adapters into U-Net-based video diffusion models that accept camera trajectories. For instance, CameraCtrl and VD3D (Bahmani et al., 2024; He et al., 2024) incorporate spatiotemporal camera embeddings, such as Pl ̈ucker coordinates, via ControlNet-like mechanisms (Zhang et al., 2023). While these methods enable precise trajectory following, they only condition on single starting images, lacking mechanisms to maintain consistency with extended video context. In contrast, our approach enables leveraging entire video sequences as spatial prompts through 3D memory construction, enabling scene-consistent generation that preserves the rich scene context within arbitrary-length inputs. Geometry-grounded video generation. Recent works (Ren et al., 2025; Yu et al., 2025; Seo et al., 2025) have integrated off-the-shelf geometry estimators into video generation pipelines to improve geometric accuracy. Gen3C (Ren et al., 2025), for instance, similarly adopts dynamic SLAM (Li et al., 2024; Han et al., 2025) to lift videos to 3D representations. However, these methods exclusively address dynamic novel view synthesis-generating new viewpoints within the same temporal window as the input. This constrained setting allows them to simply warp entire scenes without distinguishing static and dynamic elements. Our work fundamentally differs by generating content beyond temporal boundaries, requiring selective masking of dynamic regions during 3D construction-a critical challenge that emerges only when static geometry must persist while dynamics evolve naturally into the future. Long-horizon scene-consistent generation. Various approaches attempt scene-consistent long video generation through different strategies. ReCamMaster (Bai et al., 2025) and TrajectoryCrafter (Yu et al., 2025) interpolate frames or construct 3D representations but remain confined to the input's spatiotemporal coverage, essentially performing dynamic novel view synthesis. StarGen (Zhai et al., 2025) scales to long trajectories but assumes static worlds, eliminating temporal dynamics entirely. DFoT (Song et al., 2025) most closely relates to our work, proposing guidance methods that condition on previous frames for scene consistency. However, DFoT also faces fundamental memory constraints when processing extended sequences, limiting its ability to maintain long-range spatial coherence. Our dual spatio-temporal strategy with SLAM-based spatial memory 3 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Lift Static Part Last Few Frames Static Point Cloud Rendered Point Cloud Patchify DiT Generated Dynamic Video C Noise C : Channel Concat. Text Encoder Text Prompt Man wearing cap and sunglasses hikes a rocky mountain trail with backpack. VAE Encoder VAE Decoder Spatially Adjacent Frames Rendered Point Cloud Last Few Frames Input Video Dynamic Mask User-specified Cam Traj. Cam Traj. ... ... ... ... ... x N Figure 3: Overview of 3DScenePrompt framework. To generate the next chunk video that remains spatially consistent with the input video, we design a dual spatio-temporal conditioning pipeline to extract the most relevant information from the input video. The last few frames are utilized to provide temporal conditioning, ensuring motion continuity between conditioning inputs and the generated frames. In parallel, for the spatial conditioning, we first select the most representative frames from the input sequence, lift their static regions into a 3D point cloud using the dynamic mask, and render it along a user-specified camera trajectory to preserve scene geometry. overcomes these limitations by selectively retrieving only the most relevant frames, both temporally and spatially, enabling computationally efficient processing of arbitrary-length videos while maintaining both motion continuity and scene consistency. 3 METHODOLOGY 3.1 PROBLEM FORMULATION AND MOTIVATION We address the task of scene-consistent camera-controllable video generation: given a dynamic video Vin ∈RL×H×W ×3 of arbitrary length L as context with height H and width W, our goal is to generate T subsequent frames Vout ∈RT ×H×W ×3 that follow a desired camera trajectory C = {Ct}T t=1 while maintaining consistency with the scene captured in the context input: Vout = F(Vin, T , C), (1) where Ct ∈SE(3) represents camera extrinsic matrices and T is a text prompt when a video generator F(·) is based on pretrained text-to-video priors (Hong et al., 2023; Yang et al., 2024; Bahmani et al., 2025). Comparison to existing solutions. This task fundamentally differs from existing video generation paradigms. Existing camera-controllable generation methods (He et al., 2024; Wang et al., 2024b; Bahmani et al., 2024) synthesize videos following user-specified trajectories but only condition on a single image Iref or plain text T (Fig. 2-(a)): Vout = F(Iref, T , C), or Vout = F(T , C), (2) which is insufficient for our task, where the entire underlying 3D scene of the context video should be considered. In contrast, video-to-future-video generation methods such as Cosmos-predict-2 (Agarwal et al., 2025) G(·) employ temporal sliding windows to generate future frames (Fig. 2-(b)): Vout = G(Vin[L -w : L], T ) (3) where Vin[L -w : L] for w ≪L represents a small overlap window, typically consisting of the last few frames of Vin. Although this design encourages temporal smoothness by providing the last few frames when generating the future video, it often fails to preserve long-term spatial consistency when the camera revisits regions not covered by the small window w. 3.2 TOWARDS SCENE-CONSISTENT CAMERA-CONTROLLABLE VIDEO GENERATION The key challenge of scene-consistent camera-controllable video generation lies in reconciling two competing requirements: maintaining consistency with potentially distant frames that share spatial 4 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation proximity (when the camera returns to similar viewpoints), while evolving dynamic content naturally from the recent temporal context. Ideally, extending the conditioning window of video-to-futurevideo frameworks to condition on all frames Vin would ensure optimal global spatial consistency and dynamic smoothness. However, this quickly becomes impractical as the sequence grows, since standard self-attention incurs quadratic time/memory in the sequence length. Dual spatio-temporal sliding window strategy. Instead of increasing the temporal window size w of the existing video-to-future-video generation methods, we introduce a dual sliding window strategy that conditions on frames selected along both temporal and spatial axes (Fig. 2-(c), and Fig. 3). Beyond the standard temporal window that captures recent motion dynamics, we add a spatial window that retrieves frames sharing similar 3D viewpoints, regardless of their temporal distance: Vout = F( ̃Vin, T , C), where ̃Vin = {Temporal(w)} ∪{Spatial(T)}, (4) where the model F generates a future sequence Vout conditioned on Temporal(w), last w frames of the input video Vin[L-w : L], and Spatial(T), the T retrieved frames from the entire input sequence based on viewpoint similarity to the target viewpoint C. This dual conditioning enables the model to reference distant frames that observe the same spatial regions, maintaining scene consistency without processing all L input frames. While this dual conditioning is conceptually appealing, na ̈ıvely retrieving and providing spatially adjacent frames directly would be problematic for our task. Since we aim to generate content beyond the input's temporal boundary, directly conditioning on much earlier frames would freeze outdated dynamics-for example, a walking person seen many frames ago should not reappear at the same location when synthesizing far-future frames. The spatial conditioning should therefore provide only the persistent scene structure while excluding dynamic content. Rather than retrieving individual frames, we introduce a 3D scene memory M that represents exclusively the static geometry extracted from all spatially relevant frames. 3.3 3D SCENE MEMORY CONSTRUCTION Our 3D scene memory must efficiently encode spatial relationships across all L frames while extracting only persistent static geometry. To construct the 3D scene memory, we leverage dynamic SLAM frameworks (Li et al., 2024; Zhang et al., 2024) to estimate camera poses and reconstruct 3D structure: ( ˆC, P) = DSLAM(Vin), (5) where ˆC = { ˆCi}L i=1 are the estimated camera poses, P represents the aggregated 3D point cloud from the L input frames, and DSLAM(·) represents the dynamic SLAM framework. This SLAM integration is effective in that it not only estimates the camera parameters of the input frames but also reconstructs the 3D structure of the scene, which can be further utilized to represent the 3D static geometry in a more efficient manner than other representations (Hong et al., 2024a;b). While the camera poses ˆC enable efficient spatial retrieval by comparing viewpoint similarity with the target trajectory C, the aggregated 3D point cloud P still contains both static and dynamic regions. Thus, we now explain our full pipeline on how to identify dynamic regions and only maintain the persistent static geometry of the input video. Dynamic masking for static scene extraction. Na ̈ıvely aggregating points across frames creates ghosting artifacts where moving objects appear frozen at multiple positions, as shown in Fig. 4-(a). We address this through a comprehensive three-stage masking pipeline that identifies and excludes all dynamic content as depicted in Fig. 5. We begin with pixel-level motion detection following existing works (Zhang et al., 2024; Han et al., 2025). For each frame pair, we compute optical flow using off-the-shelf models (Wang et al., 2024a; Teed & Deng, 2020; An et al., 2025) (Flowoptical) and compare it against the flow induced by camera motion alone (Flowwarp). Regions where the L1 difference exceeds a specific threshold τ are marked as potentially dynamic: M pixel i = 1 [∥Flowoptical -Flowwarp∥1 > τ] , (6) 5 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation (a) w/o mask (b) w mask T = 0 T = 20 T = 40 Figure 4: Illustration of dynamic masking for static scene extraction. When aggregating 3D points across frames, moving objects create ghosting artifacts if not properly masked. (a) Without masking, dynamic elements (horses and riders) appear frozen at multiple positions, severely degrading the warped views. (b) With our dynamic masking pipeline, these elements are identified and excluded, resulting in clean static-only point clouds that can be reliably warped to new viewpoints. where 1[·] denotes the element-wise indicator that returns 1 if the condition holds for a pixel and 0 otherwise, i.e., it flags pixels whose optical flow deviates from the warped flow by more than τ. However, pixel-level detection captures motion only at specific instants and misses complete object boundaries. We therefore propagate these sparse detections to full objects using SAM2 (Ravi et al., 2024), where we sample points from dynamic pixels in the first frame for prompts. Yet this approach still has limitations: static objects that begin moving in later frames may not be detected if they appear static initially. Our solution employs backward tracking with CoTracker3 (Karaev et al., 2024) to aggregate motion evidence across the entire sequence. From the sampled points in each frame obtained from our pixel-level motion detection, we track these points from all frames back to t = 0, capturing motions of objects that move at any point. These aggregated points are used to prompt the final SAM2 pass, producing complete object-level masks M obj i that cleanly separate all dynamic content (Fig. 4-(b)). With the full dynamic mask, we can now obtain the static-only 3D geometry Pstatic: Pstatic = L [ i=1 Pi ⊙(1 -M obj i ). (7) From the constructed static-only 3D geometry Pstatic with our proposed dynamic masking strategy, we now obtain the 3D scene memory: M = ( ˆC, Pstatic), (8) where we now explain how this 3D scene memory M can be used for scene-consistent cameracontrollable video generation in the following section. 3.4 3D SCENE PROMPTING Having constructed the static-only 3D representation Pstatic, rather than na ̈ıvely retrieving T frames from the input video based on viewpoint similarity, we synthesize static-only spatial frames through the projection of Pstatic. For each target camera pose Ct ∈C, we generate the corresponding spatial frame by projecting the static points from the most spatially relevant input frames: Spatial(t) = Π(K · Ct · P(n) static), (9) where P(n) static ⊂Pstatic contains points from the top-n spatially adjacent input frames to Ct, Π(·) denotes perspective projection, and K is the camera intrinsic matrix. The complete spatial conditioning becomes Spatial(T) = {Spatial(t)}T t=1 ∈RT ×H×W ×3, where spatial adjacency is calculated by field-of-view overlap. This projection-based approach ensures only static content appears in conditioning while providing geometrically consistent views aligned to target poses. Notably, the static point cloud aggregates information from multiple viewpoints, potentially filling regions occluded by dynamic objects. These 6 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation ... Input video ... Optical flow ... Dynamic mask ... Flow mask Dynamic thresholding ... Point sampling Dynamic aggregation ... Propagation ... ... BW tracking Figure 5: Dynamic masking. A three-stage pipeline refines dynamic region detection to produce complete object-level masks: (1) optical-flow differences detect pixel-level motion (Dynamic thresholding); (2) sample points from these regions for all frames and perform backward tracking (BW tracking) with CoTracker3 (Karaev et al., 2024) to aggregate motion evidence across all frames back to t=0 (dynamic aggregation), capturing objects that move at any time; (3) propagate aggregated points in the first frame to the entire video using SAM2 (Ravi et al., 2024). The resulting dynamic masks cleanly separate moving elements (people, objects) from the static background, enabling construction of the static-only point cloud Pstatic. projected views serve as 3D scene prompts that provide explicit guidance about persistent scene structure, enabling precise camera control without additional encoding modules. The projected views Spatial(T) serve as what we term 3D scene prompts-they provide the model with explicit guidance about the persistent scene structure. By conditioning on both Temporal(w) and Spatial(T), our framework effectively enables scene-consistent camera-controllable video generation with computational efficiency while preserving the prior for high-quality video synthesis. 3.5 TRAINING DATASET CONSTRUCTION To train our model to effectively handle both static and dynamic scenes while leveraging spatial and temporal conditioning appropriately, we construct training data from diverse video sources with different preprocessing strategies based on scene characteristics. Data processing. Our training data comes from two primary sources: RealEstate10K (Zhou et al., 2018) containing static scenes, and OpenVid-1M (Nan et al., 2024), featuring dynamic content. For static videos from RealEstate10K, we directly extract 3D scene geometry without dynamic masking, as the scenes contain negligible motion. In contrast, for dynamic videos from OpenVid-1M, we employ a comprehensive dynamic masking pipeline explained in Section 3.3 to separate static and dynamic elements. 3D scene memory construction. For effective training, we require videos of sufficient length to construct meaningful 3D scene context. To enable the model to handle arbitrary-length videos as input, we select long videos as the input video with L > 100. As our video generation framework F(·) generates T frames, we designate the last T frames as the ground truth future video that the model should generate, while frames before this window serve as input for constructing the 3D scene memory. Using MegaSAM (Li et al., 2024), we process the entire video to estimate camera poses { ˆCi}L i=1 for all L frames and reconstruct the 3D structure. We use frames Vin ∈R(L-T )×H×W ×3 (i.e., frames before the last T frames) to construct the 3D scene memory, while the camera poses for the last T frames { ˆCi}L i=L-(T -1) are used during training to project the scene memory and generate spatial conditioning. 7 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation For static scenes, we directly use the reconstructed geometry from Vin. For dynamic scenes, we apply our dynamic masks to extract static-only geometry: Pstatic = L-T [ i=1 Pi ⊙(1 -M obj i ) (10) The resulting 3D scene memory M = ( ˆC, Pstatic) provides the spatial context for generating the future T frames. During training, for each target frame t in the last T frames, we use its corresponding camera pose ˆCt to project Pstatic from spatially adjacent viewpoints, creating the spatial conditioning Spatial(T) that guides the generation process. Spatial and temporal conditioning. To train the model to utilize spatial conditioning effectively, it is crucial to incorporate both static and dynamic regions in our training data. The mixture of RealEstate10K and OpenVid-1M ensures the model tackles scenarios where spatial conditioning should dominate (static regions) and also maintains the prior of generating high-fidelity dynamic motions learned from large-scale pretraining. For each training sample, we provide: • Temporal conditioning: The last w = 9 frames from the input sequence, capturing recent motion dynamics. • Spatial conditioning: Projected views from the static 3D scene memory, synthesized by selecting the point clouds from the top-n spatially adjacent viewpoints based on field-ofview overlap. This dual conditioning strategy, trained on both static and dynamic content, enables the model to learn when to rely on spatial prompts for scene consistency versus temporal frames for motion continuity. The diversity in our training data-from completely static architectural scenes to videos with dynamic motion-ensures robust generalization to real-world scenarios where both static and dynamic elements coexist. 4 EXPERIMENTS 4.1 IMPLEMENTATION DETAILS Model architecture. We build upon CogVideoX-I2V-5B (Yang et al., 2024), extending its singleimage conditioning to accept dual spatio-temporal inputs with minimal architectural changes. The key modification is repurposing the existing image conditioning channel to accept concatenated latents from both temporal frames and spatial projections. Specifically, we provide the last w = 9 frames from Vin as temporal conditioning and T projected views from the static point cloud as spatial conditioning. This enables the DiT backbone to remain entirely unchanged, preserving all pretrained video priors. Both conditions are encoded through the frozen 3D VAE and concatenated channel-wise such that Zcond = Concat[E(Temporal(w)), E(Spatial(T))]. Fine-tuning. We fully fine-tune the model for a total of 4K iterations with a batch size of 8 using 4 H100 GPUs, which required approximately 48 hours. We used the 16-bit Adam optimizer with a learning rate of 1×10-5, and adopted the same hyperparameter settings as those used in the training of CogVideoX (Yang et al., 2024). For the temporal sliding window, we provide the last 9 frames of the input video, setting w = 9. For the projection of top-n spatially adjacent views, we set n = 7. Experimental settings. We evaluate our method across four key aspects: camera controllability, video quality, scene consistency, and geometric consistency. Since no prior work directly addresses scene-consistent camera-controllable video generation, we compare against two categories of baselines: (1) camera-controllable methods (CameraCtrl (He et al., 2024), MotionCtrl (Wang et al., 2024b), FloVD (Jin et al., 2025), AC3D (Bahmani et al., 2025)) for camera control and video quality metrics, and (2) DFoT (Song et al., 2025), which attempts scene-consistent camera-controllable generation, for spatial and geometric consistency metrics. 8 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Methods RealEstate10K DynPose-100K PSNR↑ SSIM↑ LPIPS↓ MEt3R↓ PSNR↑ SSIM↑ LPIPS↓ MEt3R↓ DFoT Song et al. (2025) 18.3044 0.5960 0.3077 0.181164 12.1471 0.3040 0.4172 0.183202 3DScenePrompt (Ours) 20.8932 0.7171 0.2120 0.040843 13.0468 0.3666 0.3812 0.124189 Table 1: Evaluation of spatial and geometric consistency. We compare DFoT and Ours on the RealEstate10K (Zhou et al., 2018) and DynPose-100K (Rockwell et al., 2025) datasets. For spatial consistency, we evaluate PSNR, SSIM, and LPIPS on revisited camera trajectories, while for geometric consistency, we report the MEt3R (Asim et al., 2025) metric. We primarily evaluate on 1,000 dynamic videos from DynPose-100K (Rockwell et al., 2025). For scene consistency evaluation, we additionally test on 1,000 static videos from RealEstate10K (Zhou et al., 2018), as static scenes provide clearer spatial consistency assessment. 4.2 SCENE-CONSISTENT VIDEO GENERATION Evaluation Protocol. As mentioned in Section 3.1, one of the unique and key challenges in sceneconsistent camera-controllable video generation is maintaining spatial consistency over extended durations. From a given input video, we evaluate spatial consistency by generating camera trajectories that revisit the viewpoints in the given video. By matching frames in the generated video and the input video that share the same viewpoint, we calculate PSNR, SSIM, and LPIPS. For RealEstate10K, we evaluate the whole image, whereas we only evaluate the static regions by masking out the dynamic regions for DynPose-100K. We also assess geometric consistency using Met3R (Asim et al., 2025), which measures multi-view alignment of generated frames under the recovered camera pose. Results. As shown in Tab. 1, 3DScenePrompt significantly outperforms DFoT across all metrics for both static and dynamic scenes. Most notably, our MEt3R evaluation error drops 77% (0.041 vs 0.181), demonstrating superior multi-view geometric alignment. While DFoT similarly tackles scene-consistent camera-controllable video generation through history guidance, their approach fails to maintain scene-consistency for long sequences due to memory constraints. In contrast, our dual spatio-temporal conditioning enables long-term scene-consistency without causing significant computational overhead. The qualitative comparisons shown in Fig. 6 also validate the effectiveness of our approach over DFoT. 4.3 CAMERA-CONTROLLABLE VIDEO GENERATION Evaluation Protocol. We employ the evaluation protocol of previous methods (He et al., 2024; Zheng et al., 2024; Jin et al., 2025) for the camera controllability. We provide an input image along with associated camera parameters for I2V models (He et al., 2024; Wang et al., 2024b; Jin et al., 2025) and solely provide camera parameters for the T2V model (Bahmani et al., 2025). For our model, we provide the last 9 frames of the input video together with the camera parameters. To evaluate how faithfully the generated video follows the camera condition, we estimate camera parameters from the synthesized video using MegaSAM (Li et al., 2024), and compare the estimated camera parameters against the condition camera trajectory C. As done in previous works (Hong et al., 2024b;a; Rockwell et al., 2025), the comparison between the estimated and input camera parameters is quantified using three metrics: mean rotation error (mRotErr), mean translation error (mTransErr), and mean error in the camera extrinsic matrices (mCamMC). Table 2: Camera controllability evaluation. Methods DynPose-100K mRotErr (◦)↓ mTransErr↓ mCamMC↓ MotionCtrl Wang et al. (2024b) 3.5654 7.8231 9.7834 CameraCtrl He et al. (2024) 3.3273 9.5989 11.2122 FloVD Jin et al. (2025) 3.4811 11.0302 12.6202 AC3D Bahmani et al. (2025) 3.0675 9.7044 11.1634 DFoT Song et al. (2025) 2.3977 8.0866 9.2330 3DScenePrompt (Ours) 2.3772 7.4174 8.6352 For the generated video, we also assess video synthesis performance using the Fr ́echet Video Distance (FVD) (Skorokhodov et al., 2022) and seven metrics from VBench++ (Huang et al., 2024): subject consistency, background consistency, aesthetic quality, imaging quality, temporal flickering, motion smoothness, and dynamic degree. 9 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation GT Future Video Camera DFoT History Guided + Camera Control Condition Last Frames Generated Input Video T=57 T=60 : Conditions for Generating the Next Video Generated T=51 T=60 T=5 T=20 Ours Ours Condition Frames close to Traj. Last Frames Figure 6: Visualization of generated videos following trajectories that revisit early frames in the input video. We visualize and compare frames obtained from DFoT (Song et al., 2025) and Ours. We condition both DFoT and ours to generate a video that follows the camera trajectory shown in the top row (GT Future Video), which revisits a viewpoint in the input (Input Video). The comparison shows that ours shows much more consistent generation, whereas DFoT fails to generate scene-consistent frames mainly due to the limited number of frames it can condition on. Also, the comparison of the camera trajectory shows that our method more faithfully follows the given camera condition. Table 3: Evaluation of video generation quality. We assess the quality of generated videos using FVD and VBench++ scores. For FVD, lower values indicate higher video quality. For VBench++ scores, higher values indicate better performance for all metrics. All VBench++ scores are normalized. Methods DynPose-100K FVD Overall Score Subject Consist Bg Consist Aesthetic Quality Imaging Quality Temporal Flicker Motion Smooth Dynamic Degree MotionCtrl Wang et al. (2024b) 1017.4247 0.5625 0.5158 0.7093 0.3157 0.3149 0.8297 0.8432 0.7900 CameraCtrl He et al. (2024) 737.0506 0.6280 0.6775 0.8238 0.3736 0.3888 0.6837 0.6955 0.9900 FloVD Jin et al. (2025) 171.2697 0.7273 0.7964 0.8457 0.4722 0.5546 0.7842 0.8364 0.9900 AC3D Bahmani et al. (2025) 281.2140 0.7428 0.8360 0.8674 0.4766 0.5381 0.8020 0.8673 1.0000 3DScenePrompt (Ours) 127.4758 0.7747 0.8669 0.8727 0.4990 0.5964 0.8551 0.9260 1.0000 Results. We first evaluate camera controllability and compare our method with competitive baselines. As shown in Tab. 2, our approach consistently outperforms existing methods, indicating 3DScenePrompt is capable of generating videos with precise camera control. We then assess the overall video quality (Tab. 3) and provide qualitative comparisons (Fig. 7). As observed in Tab. 3, our method achieves the best generation quality across all metrics for dynamic video generation, which is further supported by the visual results in Fig. 7. 4.4 ABLATION STUDIES Table 4: Ablation study on varying n. Methods Dynamic mask M DynPose-100K PSNR↑ SSIM↑ LPIPS↓ MEt3R↓ Ours (n = 1) ✓ 13.0207 0.3732 0.3771 0.124773 Ours (n = 4) ✓ 13.0382 0.3733 0.3758 0.124893 Ours (n = L) ✓ 13.0206 0.3631 0.3810 0.123507 Ours (n = 7) ✗ 12.2304 0.3063 0.3821 0.134885 Ours (n = 7) ✓ 13.0468 0.3666 0.3812 0.124189 We analyze two critical components of our framework: the dynamic masking strategy that separates static and dynamic elements, and the number of spatially adjacent frames n retrieved for spatial conditioning. Tab. 4 demonstrates the impact of varying n and the necessity of dynamic masking. Without dynamic masking (4th row), the model suffers significantly across all, showing a large drop of PSNR of approximately 10 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation t=30 Camera AC3D FloVD MotionCtrl CameraCtrl I2V + camera control T=60 Fail Condition "A man in a black cap and t-shirt stands at the open door of a luxury car, conversing with another man inside the vehicle, in a parking lot with several cars ...(more) T2V + camera control AC3D Input Video GT Future Video Last Frame Condition T=51 T=60 T=5 T=20 Ours GT Video Ours : Conditions for Generating the Next Video Condition Last Frames Frames close to Traj. Generated Generated Generated Figure 7: Visualization of scene-consistent camera-controllable video generation. Comparison of different methods (Wang et al., 2024b; He et al., 2024; Jin et al., 2025; Bahmani et al., 2025) for generating videos from the same input (shown in Input Video) that follow the camera trajectory shown in the rightmost column (Camera). Our method best preserves scene consistency with the input video. Note the red-boxed regions: while the input video shows a white wall, competing methods either lose scene detail or fail to maintain the original scene structure. In contrast, our approach accurately remembers the white wall and maintains consistent scene elements throughout generation. In addition, when compared with the GT Future Video, ours best follows the camera condition, effectively verifying the strength of our framework for scene-consistent camera-controllable video generation. 0.8dB and also an increase of MEt3R error. This degradation occurs because unmasksed dynamic objects create ghosting artifacts when warped to new viewpoints, corrupting the spatial conditioning. Regarding the number of spatially adjacent frames, we find that performance stabilizes around n = 7, with minimal improvements beyond this point, suggesting that 7 frames provide sufficient spatial context while maintaining computational efficiency. 5 CONCLUSION In this work, we introduced 3DScenePrompt, a framework for scene-consistent camera-controllable video generation. By combining dual spatio-temporal conditioning with a static-only 3D scene memory constructed through dynamic SLAM and our dynamic masking strategy, we enable generating continuations from arbitrary-length videos while preserving scene geometry and allowing natural motion evolution. Extensive experiments demonstrate superior performance in camera controllability, scene consistency, and generation quality compared to existing methods. Our approach opens new possibilities for long-form video synthesis applications where maintaining both spatial consistency and precise camera control is essential. 11 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation REFERENCES Niket Agarwal, Arslan Ali, Maciej Bala, Yogesh Balaji, Erik Barker, Tiffany Cai, Prithvijit Chattopadhyay, Yongxin Chen, Yin Cui, Yifan Ding, et al. Cosmos world foundation model platform for physical ai. arXiv preprint , 2025. Honggyu An, Jin Hyeon Kim, Seonghoon Park, Jaewoo Jung, Jisang Han, Sunghwan Hong, and Seungryong Kim. Cross-view completion models are zero-shot correspondence estimators. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 1103-1115, 2025. Mohammad Asim, Christopher Wewer, Thomas Wimmer, Bernt Schiele, and Jan Eric Lenssen. Met3r: Measuring multi-view consistency in generated images. arXiv preprint , 2025. Sherwin Bahmani, Ivan Skorokhodov, Aliaksandr Siarohin, Willi Menapace, Guocheng Qian, Michael Vasilkovsky, Hsin-Ying Lee, Chaoyang Wang, Jiaxu Zou, Andrea Tagliasacchi, et al. Vd3d: Taming large video diffusion transformers for 3d camera control. arXiv preprint , 2024. Sherwin Bahmani, Ivan Skorokhodov, Guocheng Qian, Aliaksandr Siarohin, Willi Menapace, Andrea Tagliasacchi, David B. Lindell, and Sergey Tulyakov. Ac3d: Analyzing and improving 3d camera control in video diffusion transformers. Proc. CVPR, 2025. Jianhong Bai, Menghan Xia, Xiao Fu, Xintao Wang, Lianrui Mu, Jinwen Cao, Zuozhu Liu, Haoji Hu, Xiang Bai, Pengfei Wan, et al. Recammaster: Camera-controlled generative rendering from a single video. arXiv preprint , 2025. Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. arXiv preprint , 2023. Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators, 2024. URL https://openai.com/research/ video-generation-models-as-world-simulators. Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. arXiv preprint , 2023. Jisang Han, Honggyu An, Jaewoo Jung, Takuya Narihira, Junyoung Seo, Kazumi Fukuda, Chaehyun Kim, Sunghwan Hong, Yuki Mitsufuji, and Seungryong Kim. Dˆ 2ust3r: Enhancing 3d reconstruction with 4d pointmaps for dynamic scenes. arXiv preprint , 2025. Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, and Ceyuan Yang. Cameractrl: Enabling camera control for text-to-video generation. arXiv preprint , 2024. Haoran He, Yang Zhang, Liang Lin, Zhongwen Xu, and Ling Pan. Pre-trained video generative models as world simulators. arXiv preprint , 2025. Sunghwan Hong, Jaewoo Jung, Heeseong Shin, Jisang Han, Jiaolong Yang, Chong Luo, and Seungryong Kim. Pf3plat: Pose-free feed-forward 3d gaussian splatting. arXiv preprint , 2024a. Sunghwan Hong, Jaewoo Jung, Heeseong Shin, Jiaolong Yang, Seungryong Kim, and Chong Luo. Unifying correspondence pose and nerf for generalized pose-free novel view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2019620206, 2024b. Susung Hong, Junyoung Seo, Heeseong Shin, Sunghwan Hong, and Seungryong Kim. Direct2v: Large language models are frame-level directors for zero-shot text-to-video generation. arXiv preprint , 2023. 12 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Ziqi Huang, Fan Zhang, Xiaojie Xu, Yinan He, Jiashuo Yu, Ziyue Dong, Qianli Ma, Nattapol Chanpaisit, Chenyang Si, Yuming Jiang, et al. Vbench++: Comprehensive and versatile benchmark suite for video generative models. arXiv preprint , 2024. Wonjoon Jin, Qi Dai, Chong Luo, Seung-Hwan Baek, and Sunghyun Cho. Flovd: Optical flow meets video diffusion model for enhanced camera-controlled video synthesis. arXiv preprint , 2025. Nikita Karaev, Iurii Makarov, Jianyuan Wang, Natalia Neverova, Andrea Vedaldi, and Christian Rupprecht. Cotracker3: Simpler and better point tracking by pseudo-labelling real videos. arXiv preprint , 2024. V ́aclav Knapp and Maty Bohacek. Synthetic human action video data generation with pose transfer. In Synthetic Data for Computer Vision Workshop@ CVPR 2025, 2025. Zhengqi Li, Richard Tucker, Forrester Cole, Qianqian Wang, Linyi Jin, Vickie Ye, Angjoo Kanazawa, Aleksander Holynski, and Noah Snavely. Megasam: Accurate, fast, and robust structure and motion from casual dynamic videos. arXiv preprint , 2024. Kepan Nan, Rui Xie, Penghao Zhou, Tiehan Fan, Zhenheng Yang, Zhijie Chen, Xiang Li, Jian Yang, and Ying Tai. Openvid-1m: A large-scale high-quality dataset for text-to-video generation. arXiv preprint , 2024. Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman R ̈adle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint , 2024. Xuanchi Ren, Tianchang Shen, Jiahui Huang, Huan Ling, Yifan Lu, Merlin Nimier-David, Thomas M ̈uller, Alexander Keller, Sanja Fidler, and Jun Gao. Gen3c: 3d-informed world-consistent video generation with precise camera control. arXiv preprint , 2025. Chris Rockwell, Joseph Tung, Tsung-Yi Lin, Ming-Yu Liu, David F Fouhey, and Chen-Hsuan Lin. Dynamic camera poses and where to find them. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 12444-12455, 2025. Runway. Runway. https://runwayml.com/, 2024. Accessed: 2024-11-05. Junyoung Seo, Jisang Han, Jaewoo Jung, Siyoon Jin, Joungbin Lee, Takuya Narihira, Kazumi Fukuda, Takashi Shibuya, Donghoon Ahn, Shoukang Hu, et al. Vid-camedit: Video camera trajectory editing with generative rendering from estimated geometry. arXiv preprint , 2025. Ivan Skorokhodov, Sergey Tulyakov, and Mohamed Elhoseiny. Stylegan-v: A continuous video generator with the price, image quality and perks of stylegan2. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3626-3636, 2022. Kiwhan Song, Boyuan Chen, Max Simchowitz, Yilun Du, Russ Tedrake, and Vincent Sitzmann. History-guided video diffusion. arXiv preprint , 2025. Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pp. 402-419. Springer, 2020. Yihan Wang, Lahav Lipson, and Jia Deng. Sea-raft: Simple, efficient, accurate raft for optical flow. In European Conference on Computer Vision, pp. 36-54. Springer, 2024a. Zhouxia Wang, Ziyang Yuan, Xintao Wang, Yaowei Li, Tianshui Chen, Menghan Xia, Ping Luo, and Ying Shan. Motionctrl: A unified and flexible motion controller for video generation. In ACM SIGGRAPH 2024 Conference Papers, pp. 1-11, 2024b. Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, et al. Cogvideox: Text-to-video diffusion models with an expert transformer. arXiv preprint , 2024. 13 3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation Mark Yu, Wenbo Hu, Jinbo Xing, and Ying Shan. Trajectorycrafter: Redirecting camera trajectory for monocular videos via diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, 2025. Shangjin Zhai, Zhichao Ye, Jialin Liu, Weijian Xie, Jiaqi Hu, Zhen Peng, Hua Xue, Danpeng Chen, Xiaomeng Wang, Lei Yang, et al. Stargen: A spatiotemporal autoregression framework with video diffusion model for scalable and controllable scene generation. arXiv preprint , 2025. Junyi Zhang, Charles Herrmann, Junhwa Hur, Varun Jampani, Trevor Darrell, Forrester Cole, Deqing Sun, and Ming-Hsuan Yang. Monst3r: A simple approach for estimating geometry in the presence of motion. arXiv preprint , 2024. Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 3836-3847, 2023. Ruihan Zhang, Borou Yu, Jiajian Min, Yetong Xin, Zheng Wei, Juncheng Nemo Shi, Mingzhen Huang, Xianghao Kong, Nix Liu Xin, Shanshan Jiang, et al. Generative ai for film creation: A survey of recent advances. In Proceedings of the Computer Vision and Pattern Recognition Conference, pp. 6267-6279, 2025. Zhoutong Zhang, Forrester Cole, Zhengqi Li, Michael Rubinstein, Noah Snavely, and William T Freeman. Structure and motion from casual videos. In European Conference on Computer Vision, pp. 20-37. Springer, 2022. Guangcong Zheng, Teng Li, Rui Jiang, Yehao Lu, Tao Wu, and Xi Li. Cami2v: Camera-controlled image-to-video diffusion model. arXiv preprint , 2024. Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. arXiv preprint , 2018. 14
2510.14943
2025-10-17 LaSeR: Reinforcement Learning with Last-Token Self-Rewarding Wenkai Yang1,∗, Weijie Liu2, Ruobing Xie2, Yiju Guo1, Lulu Wu2, Saiyong Yang2, Yankai Lin1,† 1Gaoling School of Artificial Intelligence, Renmin University of China 2LLM Department, Tencent B {wenkaiyang,yankailin}@ruc.edu.cn Abstract Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). To address the lack of verification signals at test time, prior studies incorporate the training of model’s self- verification capability into the standard RLVR process, thereby unifying reasoning and verification capabilities within a single LLM. However, previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency. In this work, we theoretically reveal that the closed-form solution to the RL objective of self-verification can be reduced to a remarkably simple form: the true reasoning reward of a solution is equal to its last-token self-rewarding score, which is computed as the difference between the policy model’s next-token log-probability assigned to any pre-specified token at the solution’s last token and a pre-calculated constant, scaled by the KL coefficient. Based on this insight, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with a MSE loss that aligns the last-token self-rewarding scores with verifier-based reasoning rewards, jointly optimizing the reasoning and self-rewarding capabilities of LLMs. The optimized self-rewarding scores can be utilized in both training and testing to enhance model performance. Notably, our algorithm derives these scores from the predicted next-token probability distribution of the last token immediately after generation, incurring only the minimal extra cost of one additional token inference. Experiments show that our method not only improves the model’s reasoning performance but also equips it with remarkable self-rewarding capability, thereby boosting its inference-time scaling performance. Code and models are available at https://github.com/RUCBM/LaSeR. 1 Introduction In the past few years, Large Language Models (LLMs) (Achiam et al., 2023; MetaAI, 2024a; Qwen Team, 2024; Liu et al., 2024a) have advanced significantly, excelling in various domains (Li et al., 2023; Wang et al., 2024b). However, they still face limitations in complex reasoning tasks (AI-MO, 2024a; OpenCompass, 2025; Rein et al., 2024; Jain et al., 2025). Recently, Reinforcement Learning with Verifiable Rewards (RLVR) has shown great promise in enhancing the complex reasoning abilities of LLMs, as demonstrated by OpenAI o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025). By rewarding reasoning paths based on the consistency between final outcomes and ground-truth answers through a deterministic verifier, RLVR incentivizes LLMs to produce more deliberate reasoning chains while effectively mitigating the risk of reward hacking (Gao et al., 2023). Despite its effectiveness, a limitation of standard RLVR is its inability to continue providing verification signals for model outputs in scenarios where ground truth answers are unavailable, such as during test-time inference (Zuo et al., 2025). To address this, existing works either train an external verifier (Lightman et al., 2023; Snell et al., 2024; Zhang et al., 2024; Gao et al., 2024; Yang et al., 2025b) for evaluating candidate solutions or jointly optimize the self-verification and reasoning capabilities of the same policy model during RLVR (Sareen et al., 2025; Liu et al., 2025a; Zha et al., 2025; Jiang et al., 2025). However, we argue that these methods have a major issue of inefficiency: the external verifier requires additional training on a separate LLM during or after reinforcement learning (RL); while joint optimization involves generating both solutions and self-verifications sequentially under two separate prompt templates, which doubles the per-sample inference cost and reduces generation efficiency. In this work, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), a lightweight and highly effective algorithm that achieves this goal, jointly optimizing reasoning and self-verification capabilities at nearly zero additional cost. Our core insight is that a model’s assessment in its own solution can be captured in the last token’s predicted probability distribution. We first show theoretically that the RL objective of self- verification has a closed-form solution, where the true reasoning reward from the verifier is equal to the next-token log-probability ratio between the policy and reference models for a pre-specified special token (an unused token ∗Work done during an internship at Tencent. † Corresponding author. 1 arXiv:2510.14943v1 [cs.CL] 16 Oct 2025 Rollout RL Prompt … <eos> Responses <spe> (𝒓𝒔−𝒓𝒗)𝟐 Prompt Training Stage Testing Stage Self-Rewarding Advantages Self-Rewarding Score <spe> Log-Prob. of Special Token Verifier-based Advantages −𝒄𝒓𝒆𝒇×𝜷𝒗 𝑟! 𝑟" 0.72 0.95 𝑨𝒔 𝑵 𝑨𝒔𝟏 Verifier-based Reward Self-Rewarding-based Correctness <eos> EOSToken 𝑨𝒓𝑵 𝑨𝒔𝑵 𝑨" 𝑨! 𝟏−𝝉𝑨𝒗+ 𝝉𝑨𝒔 … <eos> … <eos> … <eos> … <eos> … … … … … … … … … … … 𝑨𝒔𝟐 𝑨𝒔𝟑 𝑨𝒔𝟒 𝑨𝒗𝑵 𝑨𝒗𝟏 𝑨𝒗𝟐 𝑨𝒗𝟑 𝑨𝒗𝟒 0.34 0.22 0.30 <spe> <spe> <spe> <spe> … <eos> <spe> … <eos> <spe> … <eos> <spe> 0.88 0.19 0.72 Figure 1: The full illustration of our method LaSeR. During training, our approach augments the standard RLVR process with an additional MSE loss between the verifier-based rewards (rv) and the last-token self-rewarding scores (rs), where rs is the difference between the policy model’s next-token log-probabilities of a pre-specified special token at the final response token and a pre-calculated constant cre f , scaled by the KL coefficient βv. The optimized self-rewarding scores can serve as auxiliary reward signals in both training and testing stages to enhance model performance. like “<|vision start|>” that serves as the pre-defined ground truth for verifications on correct candidate solutions) at the last response token, scaled by the KL coefficient. We refer to this scaled log-probability ratio as the last-token self-rewarding score. Furthermore, we observe that for a randomly chosen special token, its predicted log-probability under the reference model is practically a constant, small value across all problems and solutions (see Figure 5). This enables us to simplify the self-rewarding score into a remarkably simple form that depends only on the policy model’s outputs and a pre-calculated constant, making it exceptionally efficient to compute. Building on above analysis, we replace the explicit RL optimization for self-verification with a simple Mean Squared Error (MSE) loss. As illustrated in Figure 1, we train the model to align its last-token self-rewarding score with the true reasoning reward from the verifier. In specific, after the policy model generates the solution for each problem, we calculate the last-token self-rewarding score based on its last token’s next-token log-probability for the pre-specified special token, and construct the corresponding MSE loss. This MSE objective is added directly to the standard RLVR loss, allowing for seamless joint optimization for both the reasoning and self-rewarding capabilities of the policy model. At both training and testing time, our method generates each candidate solution and computes the self-rewarding score in a single forward pass, incurring the cost of at most one additional token inference with no extra generation required. This is significantly more efficient than prior approaches, which require a separate inference step. The optimized self-rewarding scores can not only complement the original reasoning rewards during RLVR to further enhance training performance, but also be used at test time to rank and weight solutions for more accurate answer aggregation. We conduct experiments on both LLaMA (MetaAI, 2024b) and Qwen (Qwen Team, 2024) architectures, including pre-trained, mid-trained and reinforced variants, to demonstrate the effectiveness of our method in broader math reasoning tasks. Experimental results show that our methods not only effectively improve the reasoning performance of the policy model, but also allows its self-rewarding accuracy to reach a high level, thereby equipping the model with better confidence calibration of its own outputs and improving its inference-time scaling performance. 2 Related Work RLVR for LLM Reasoning Reinforcement Learning with Verifiable Rewards (RLVR), which sorely calculates binary rewards based on the final answers, has been shown to be highly effective in enhancing the reasoning capabili- ties of LLMs (Jaech et al., 2024; Guo et al., 2025; Team et al., 2025b). Current studies can be categorized into several directions, including but not limited to (1) designing more efficient and effective RLVR algorithms (Schulman et al., 2017; Shao et al., 2024; Yu et al., 2025a; Yue et al., 2025b; Liu et al., 2025b; Zheng et al., 2025), (2) extending RLVR to general reasoning domain (Ma et al., 2025; Zhou et al., 2025; Yu et al., 2025b; Li et al., 2025) and agent 2 scenarios (Wang et al., 2025b; Team et al., 2025a; Dong et al., 2025), (3) collecting diverse verifiable datasets (Hu et al., 2025; He et al., 2025; Liu & Zhang, 2025; Ma et al., 2025; Fan et al., 2025), and (4) analyzing the mechanisms of RLVR (Mukherjee et al., 2025; Yue et al., 2025a; Wen et al., 2025; Huan et al., 2025). External Verifiers for LLM Reasoning Training external verifiers to identify the correctness of the LLM-generated solutions is an effective way to enhance the reasoning performance of LLMs in the inference time. External verifiers usually fall into two categories: (1) Scalar Reward Models: Outcome-supervised Reward Models (ORMs) (Cobbe et al., 2021; Yang et al., 2024) and Process-supervised Reward Models (PRMs) (Lightman et al., 2023; Wang et al., 2024a; Skywork-o1, 2024; Yuan et al., 2024) are two representative approaches. ORMs provide supervision by evaluating the final answer, while PRMs offer more fine-grained feedback by assessing the intermediate reasoning steps. (2) Generative Verifiers: Recent studies have explored the potential of training LLMs to perform natural language critiques of reasoning solutions generated by the LLM generators, and then to judge their final out- comes (Zhang et al., 2024; Gao et al., 2024; Yang et al., 2025b; Zhao et al., 2025). This paradigm has demonstrated stronger verification performance than scalar reward models, as it enables the LLM verifier to conduct deliberate chain-of-thought reasoning before arriving at the final judgment. Self-Verification for LLM Reasoning Several recent studies (Sareen et al., 2025; Liu et al., 2025a; Zha et al., 2025; Jiang et al., 2025; Lu et al., 2025) aim to unify the roles of generator and verifier by equipping a single policy model with self-verification capability. The trained self-verification capability can be used in both the RL training and inference-time scaling stages to enhance the model performance. However, these approaches require generating solutions and self-verifications sequentially during training and inference. In contrast, our method derives the self-rewarding signal directly from the next-token probability distribution of the final token of the generated sequence, achieving a more efficient and effective unification of generation and self-verification. 3 Methodology 3.1 Preliminaries RL Objective We denote πθ as the target policy model to be optimized, and πref as the reference model from which πθ is initialized. D is the query set, x is an input and y is the generated response to x. The standard optimization objective of RL is formalized as Oπθ = max πθ Ex∼D,y∼πθ(·|x) h r(x, y) −βDKL(πθ||πre f ) i , (1) where r(x, y) represents a reward function to score the response y given x, DKL is the Kullback–Leibler (KL) divergence loss regularizing the distance between two model distributions. RLVR Recently, RLVR (Guo et al., 2025; Hu et al., 2025) has emerged as a widely adopted and effective paradigm for enhancing the reasoning capabilities of LLMs. In RLVR, the reward function r is typically chosen as a deterministic verifier rv, such as a rule-based verifier, to evaluate whether the final extracted answer a ⊂y matches the ground-truth answer a∗, and to produce binary feedback (e.g., {0,1}). That is, rv(x, y) = 1{a≡a∗} = 1 if a is semantically equivalent to a∗, 0 otherwise. (2) Policy Gradient Method Policy Gradient (Sutton et al., 1998) is a widely adopted algorithm to optimize the objective of Eq. (1), which updates the policy model via the estimated gradient ∇θOπθ = Ex∼D,y∼πθ(·|x) " T ∑ t=1 At∇θ log πθ(yt|x, y<t) # , (3) where At is the advantage function measuring the relative value of the action at (i.e., token yt) compared to the baseline value under state st (i.e., sequence (x, y<t)). In practice, At can be estimated in various ways (Schulman et al., 2017; Ahmadian et al., 2024). For example, Group Relative Policy Optimization (GRPO) (Shao et al., 2024) estimates the baseline value as the average reward within a sampled group {y1, · · · , yK} for the same problem, and computes the relative advantage for each token yi t in sequence yi as Ai t = (ri v −mean(r1 v, · · · , rK v ))/std(r1 v, · · · , rK v ), ri v = rv(x, yi). (4) Implicit Reward Previous studies (Rafailov et al., 2023; Peters & Schaal, 2007) have identified that the optimal solution to the objective Eq. (1) satisfies that rv(x, y) = β log[πθ(y|x)/πre f (y|x)] + β log Z(x), (5) where Z(x) = ∑y πre f (y|x) exp( 1 βrv(x, y)) is a partition function. β log πθ(y|x) πre f (y|x) is termed as the implicit reward, which has been used in prior works (Mitchell et al., 2024; Liu et al., 2024b) to analyze the behavioral shift induced by the alignment process. 3 3.2 LaSeR: Reinforcement Learning with Last-Token Self-Rewarding 3.2.1 Formal Formulation In training, ground-truth answers can be reliably used to determine the correctness of solutions. At test time, however, when ground-truth answers are unavailable, the use of verifiers becomes crucial for evaluating solution quality and providing feedback signals. To address this problem, in this work, we explore the promising paradigm of jointly optimizing the self-verification and reasoning capabilities of LLMs within the RLVR framework, thereby enabling them not only to produce high-quality reasoning paths but also to evaluate their own outputs at test time. According to Eq. (5), as Z(x) remains the same for all y, a straight-forward idea is to utilize the implicit reward β log πθ(y|x) πre f (y|x) as the indicator to rank different generations at test time. However, this approach has a critical drawback: the absolute value of the implicit reward is length-biased, since the absolute value of β log πθ(y|x) πre f (y|x) = β ∑i log πθ(yi|x,y<i) πre f (yi|x,,y<i) increases proportionally with the response length. In reasoning tasks, the incorrect solutions are usually longer than the correct solutions (Hassid et al., 2025), making the implicit reward unreliable in evaluating solution correctness (see Appendix A). Furthermore, disregarding Z(x) and directly aligning the implicit reward with the true reasoning reward during training degrades the policy model’s generation ability (Cui et al., 2025), since a fundamental gap (i.e., β log Z(x)) exists between the solution to RLVR and that to reward modeling. In this work, we begin by formulating our approach from the RL objective of verification. Given a problem x, and a candidate solution y, the model is required to produce a verification z to identify the correctness of the solution: z ∼πθ(·|x, y). Thus, the RL objective of verification can be written as Vπθ = max πθ Ex∼D,y∼πg(·|x),z∼πθ(·|x,y) h ˆr(x, y, z) −βvDKL(πθ||πre f ) i , (6) where πg is the generator to solve the problem (can also be the target model πθ itself in the self-verification setting), ˆr(x, y, z) is the verification reward that measures the consistency between the true correctness of y and the verification result of z: ˆr(x, y, z) = 1 if verification result of z matches the true correctness of y, 0 otherwise. (7) In practice, z can be either a single token—for instance, “Yes” or “No” to directly indicate whether the solution is verified as correct or incorrect—or a sequence that includes both a chain of thought and the final judgment. In this work, we focus on the former setting and simplify the ground-truth label space to two single tokens zc (e.g., “Yes”) and zi (e.g., “No”). That is, the verification reward is simplified to ˆr(x, y, z) = 1 (z = zc and rv(x, y) = 1) or (z = zi and rv(x, y) = 0) 0 otherwise. (8) Similarly, following from Eq. (5), the close-form solution to Eq. (6) can be written as ˆr(x, y, z) = βv log πθ(z|x, y) πre f (z|x, y) + βv log Z(x, y), Z(x, y) = ∑ z πre f (z|x, y) exp( 1 βv ˆr(x, y, z)). (9) Now, let’s take a closer look at Z(x, y). First, for z ∈{zc, zi}, πre f (z|x, y) is a extremely small positive value for any problem-solution pair (x, y), i.e., πre f (z|x, y) ≈0, for z ∈{zc, zi}. The reason is that the model is not specifically optimized for predicting the next token once it completes the generation and produces the final token (typically the “<EOS>” token). We present a numerical analysis to validate this claim in Figure 5, and we can see the value of πre f (z|x, y) is less than e−9 for common tokens and even less than e−20 for unused special tokens. Then, we can get that Z(x, y) = ∑ z πre f (z|x, y) exp( 1 βv ˆr(x, y, z)) = ∑ z/∈{zc,zi} πre f (z|x, y) exp( 1 βv ˆr(x, y, z)) + πre f (zc|x, y) exp( 1 βv ˆr(x, y, zc)) + πre f (zi|x, y) exp( 1 βv ˆr(x, y, zi)) = (1 −πre f (zc|x, y) −πre f (zi|x, y)) exp(0) + (πre f (zc|x, y) + πre f (zi|x, y)) exp( 1 βv ) ≈1 × 1 + 0 × exp( 1 βv ) = 1 =⇒log Z(x, y) ≈0. (10) The above analysis reveals that, under our formulation, the partition function that cannot be ignored by previous studies (Cui et al., 2025) can be now naturally discarded. Consequently, the optimal solution to Eq. (6) can be approximately reduced to: ˆr(x, y, z) = βv log[πθ(z|x, y)/πre f (z|x, y)]. (11) 4 In particular, the true verification reward when the model verifies a solution as correct is: ˆr(x, y, zc) = rv(x, y) = βv log[πθ(zc|x, y)/πre f (zc|x, y)]. (12) The first equation is derived from the definition in Eq. (8). The second equation reveals that the true reasoning reward is equal to log-probability ratio of the policy model to the reference model at zc, scaled by the KL coefficient. Thus, to optimize the model’s verification capability, we do not need to explicitly perform a RLVR procedure. Instead, we can directly optimize the following MSE loss: L = Ex∼D,y∼πg(·|x)  βv log[πθ(zc|x, y)/πre f (zc|x, y)] −rv(x, y) 2 . (13) Thus, in the self-verification setting where πg = πθ, we can directly adds the above loss into the original RLVR loss to jointly optimize the reasoning and self-verification capabilities of the policy model: Sπθ = max πθ Ex∼D,y∼πθ(·|x)   rv(x, y) −βDKL(πθ(y|x)||πre f (y|x)) −α " βv log πθ(zc|x, y) πre f (zc|x, y) −rv(x, y) #2  , (14) where α is a loss balancing coefficient. We refer the term rs = βv log πθ(zc|x,y) πre f (zc|x,y) to the last-token self-rewarding score, since it depends on the log-probability distributions of the last token in y. 3.3 Other Techniques Here, we discuss several practical techniques to further simplify and improve the efficiency and effectiveness of the self-rewarding MSE loss introduced above. Simplification of the Log-Probability in the Reference Model As shown in Figure 5, the quantity log πre f (zc|x, y) remains almost constant, exhibiting only a negligible standard deviation across all x and y. Therefore, we can regard it as a pre-calculated constant cre f in calculating the last-token self-rewarding score during both training and inference. This eliminates the need for forwarding y through the reference model and thus further enhances efficiency. In specific, cre f is the mean value of log πre f (zc|x, y) on a small set of pre-generated set of (x, y). Furthermore, based on the findings in Figure 5, we select an unused special token as zc to make πre f (zc|x, y) closer to 0 and to further minimize its impact on the approximation of Z(x, y) = 1 and the stability of training. Self-Rewarding Loss Re-Weighting During training, the numbers of correct and incorrect solutions are imbalanced, and their ratio dynamically changes. To prevent the last-token self-rewarding score from being biased toward the class with more samples, we apply a class-level loss re-weighting strategy within each optimization step. In each step, we calculate the total numbers of correct and incorrect solutions (identified by the deterministic verifier) for all problems in the current batch as Nc and Ni. Then, we apply the loss re-weighting as l = 1 Nc + Ni ∑ x ∑ y h wc1{rv(x,y)=1} + wi1{rv(x,y)=0} i h βv log πθ(zc|x, y) −βvcre f −rv(x, y) i2 , (15) where wc = Nc+Ni 2×Nc and wi = Nc+Ni 2×Ni are re-weighting factors. This practice achieves a more balanced self- verification capability. We provide empirical validations on this in Appendix E. Future work can explore more effective ways to address the issue of imbalanced distribution of solutions. Integration of Verifier-based and Self-Rewarding-based Advantages The last-token self-rewarding scores can not only be used at test time, but also facilitate the training process through the integration of verifier-based and self-rewarding-based advantages. We believe such practice can help mitigate the issue of misjudgments by rule-based verifiers, which often occur when the format of ground-truth answer is overly complex, and produce more fine-grained rewards. For example, in GRPO, the final advantage can be calculated as: ˆAi t = (1 −τ)riv −mean(r1v, · · · , rKv ) std(r1v, · · · , rKv ) + τ ris −mean(r1s, · · · , rKs ) std(r1s, · · · , rKs ) , where ri v = rv(x, yi) and ri s = βv log πθ(zc|x, yi) −βvcre f . (16) To stabilize training, we adopt a filtering strategy that sets τ = 0 for any group whenever the standard deviation std(r1 s, · · · , rK s ) within this group falls below a threshold T, which is set to 0.1. Separate Warm-Up of Reasoning and Self-Rewarding Capabilities During the initial phase of training, we optimize only the last-token self-rewarding score, without integrating self-rewarding-based advantages into the learning process. After a certain steps when the last-token self-rewarding loss is sufficiently small, we proceed to integrate verifier-based and self-rewarding-based advantages. Moreover, when training from base (i.e., pre-trained) models, we first perform standard RLVR without incorporating the last-token self-rewarding loss in order to warm 5 Algorithm 1: LaSeR: Reinforcement Learning with Last-Token Self-Rewarding Input: Initial policy model πθ, prompts D, verifier rv, warm-up hyper-parameters wr and wsr, coefficient βv, pre-specified token zc, pre-calculated cre f = E(x,y)[log πre f (zc|x, y)] for Step s = 1, · · · , S do 1. Set πold ←πθ; 2. Sample batch prompts Ds from D; 3. Generate solutions {yi}K i=1 for each x ∈Ds; 4. Calculate verifier-based rewards and advantages (e.g., Eq. (4)), calculate RL loss; 5. If s ≥wr, calculate last-token self-rewarding loss based on Eq. (15) and add it to RL loss; 6. If s ≥wsr, calculate self-rewarding-based advantages and perform advantage integration based on Eq. (16); 7. Update the policy model πθ using any RL algorithm with integrated loss and advantages; end Output: πθ up the model’s reasoning capability, followed by a warm-up phase for the self-rewarding capability before the complete integration of verifier-based and self-rewarding-based advantages. By combining all the aforementioned techniques, our full algorithm Reinforcement Learning with Last-Token Self-Rewarding (LaSeR), is summarized in Algorithm 1 and illustrated in Figure 1. During the testing phase, once the model generates a solution, we compute the last-token self-rewarding score based on rs = βv log πθ(zc|x, y) − βvcre f . The comparison between this score and 0.5 determines the self-verification outcome of the solution, or the score itself can be further used to perform weighted majority voting. 3.4 Brief Discussion Comparison Between LaSeR and Prior Approaches Compared with previous methods (Sareen et al., 2025; Liu et al., 2025a; Zha et al., 2025) that requires the policy model to perform separate generations for solutions and verifications, our method directly derives the self-rewarding result from the next-token log-probability of the final solution token. In the RL process, the computation of token log-probabilities is typically carried out after all the generations are completed (Sheng et al., 2024). Therefore, we can directly replace the token id of the first padding token with the token id of the pre-specified token before computing the log-probabilities of the sequences, thereby incurring no additional computation cost during training. During inference, our method requires only one more token inference after the solution is completed, which substantially reduces the computational cost compared to previous methods. We also discuss the potential way to further reduce the self-rewarding cost by avoiding any extra token inference in Section 5.3, which can be an interesting future work. Difference Between Last-Token Self-Rewarding Loss and Supervised Fine-Tuning Loss An alternative to train the self-verification capability is to optimize the following supervised fine-tuning (SFT) loss by maximizing the next-token probability of the token zc or zi based on the context (x, y): LSFT = −Ex∼D,y∼πg(·|x) [rv(x, y) · log πθ(zc|x, y) + (1 −rv(x, y)) · log πθ(zi|x, y)] . (17) The major difference between SFT loss and our last-token self-rewarding loss in Eq. (13) is that the SFT loss drives πθ(zc|x, y) to fit 1 when rv(x, y) = 1, which may lead to strong interference with the optimization of reasoning capability. In contrast, our loss drives πθ(zc|x, y) toward exp(1/βv) · πref(zc|x, y) for rv(x, y) = 1.0. When βv is relatively large, πθ(zc|x, y) remains still very small, thereby exerting only a negligible influence on the original RLVR optimization (e.g., πθ(zc|x, y) = e−13 when πref(zc|x, y) = e−23 and βv = 0.1). We provide the empirical comparison in Appendix F. 4 Experiments 4.1 Experimental Settings Base Models and Baselines We primarily conduct empirical validations on both LLaMA3.2 MetaAI (2024b) and Qwen2.5 (Qwen Team, 2024) architectures, including three base models: OctoThinker-3B-Short-Base (Wang et al., 2025a) (mid-trained version of LLaMA3.2-3B-Base), Qwen2.5-7B-Base (Qwen Team, 2024) (pre-trained model) and Open-Reasoner-Zero-7B (Hu et al., 2025) (reinforced version of Qwen2.5-7B-Base). In principle, our method can be seamlessly integrated into any RLVR framework, as it only introduces an additional MSE loss term. In this work, we adopt the widely used GRPO (Shao et al., 2024) as the base algorithm and primarily investigate the effectiveness of applying our method within GRPO, while leaving the exploration on other RL algorithms in the future work. 6 Table 1: Reasoning and self-verification performance of each model on five mathematical reasoning benchmarks. We do not report the results of OctoThinker-based models on AIME24-25, as the number of correct solutions is quite insufficient for a reliable evaluation. Method Reasoning Accuracy Self-Verification F1 Score MATH- 500 AMC- 23 AIME- 24 AIME- 25 Olym.- Bench Avg. MATH- 500 AMC- 23 AIME- 24 AIME- 25 Olym.- Bench Avg. OctoThinker-3B-Short-Base Base 3.7 1.3 - - 1.0 2.0 22.3 11.2 - - 13.7 15.7 GRPO 49.8 25.3 - - 17.3 30.8 56.9 47.3 - - 48.8 51.0 LaSeR 53.1 27.0 - - 18.2 32.8 73.6 70.2 - - 73.6 72.5 - SWA 52.9 26.1 - - 18.2 32.4 80.4 70.9 - - 66.0 72.4 Qwen2.5-7B-Base Base 35.8 20.6 3.5 1.6 12.3 14.8 36.4 30.8 27.6 32.9 36.9 32.9 GRPO 79.9 55.9 16.2 13.8 43.3 41.8 54.6 59.7 36.6 41.5 53.5 49.2 LaSeR 80.2 58.1 15.4 15.7 44.1 42.7 83.2 82.5 79.6 74.3 78.3 79.6 - SWA 78.0 58.3 15.4 12.3 41.7 41.1 79.7 80.2 81.3 74.9 83.3 79.9 Open-Reasoner-Zero-7B Base 81.9 60.3 15.6 15.1 46.9 44.0 26.7 51.3 45.9 55.2 37.5 43.3 GRPO 83.1 61.9 18.1 15.0 47.1 45.0 57.1 44.8 14.6 28.1 49.5 38.8 LaSeR 82.8 62.7 19.1 15.1 47.8 45.5 87.2 79.7 64.6 77.7 78.7 77.6 - SWA 83.2 62.6 19.0 14.5 47.6 45.4 87.5 77.7 63.3 77.3 77.9 76.7 Training and Evaluation Datasets We adopt DeepMath-103K (He et al., 2025), a large-scale and high-quality math- ematical reasoning dataset, for our RL training data. In testing, we evaluate both the reasoning and self-verification performance of each model on five typical math reasoning benchmarks: MATH500 (Hendrycks et al., 2021), AMC23 (AI-MO, 2024b), AIME24 (AI-MO, 2024a), AIME25 (OpenCompass, 2025), and OlympiadBench (He et al., 2024). We also explore the effectiveness of our method in general reasoning tasks beyond math reasoning in Section 5.2. Training Settings The detailed training hyper-parameters of GRPO are put in Appendix C. The prompt template for each model is in Appendix I. When applying our method, we set the hyper-parameters (βv, α, τ) = (0.1, 0.1, 0.1), which are empirically determined based on the observations in Appendix D. zc is selected as “<vision start>” for Qwen2.5-7B-Base and Open-Reasoner-Zero-7B, and “<reserved special token 0>” for OctoThinker- 3B-Short-Base. The simplified constant of the reference log-probability, cref, is −23.0 for Qwen2.5-7B-Base and Open-Reasoner-Zero-7B, and −25.0 for OctoThinker-3B-Short-Base, as estimated from the results in Figure 5. The number of reasoning warm-up steps is set to 200 for both Qwen2.5-7B-Base and OctoThinker-3B-Short-Base, and the number of self-rewarding warm-up steps is 200 across all models. Evaluation Settings During generation, we set both the temperature and top p to 1.0 for all models. The max generation len is 8192. On MATH500 and OlympiadBench, we sample 2 solutions for each problem; whereas on AMC23, AIME24, and AIME25, we sample 32 solutions per problem. We then report the average Pass@1 accuracy of each model on each benchmark. We also evaluate the self-verification performance of each model by computing the self-verification F1 score, defined as the harmonic mean of self-verification accuracy on self-generated correct and incorrect solutions. Baselines perform self-verification based on the prompt in Appendix I. Any solution without a final answer is automatically treated as incorrect and excluded from the verification accuracy calculation. Detailed self-verification accuracy results are provided in Appendix G. 4.2 Main Results and Analysis We put the main results in Table 1. The key conclusion is that, across different model variants, our method not only yields better reasoning performance for the policy model compared with the baseline, but also substantially enhances its self-verification capability by enabling the self-rewarding scores to achieve remarkably high F1 scores. Regarding reasoning performance, applying our algorithm leads to higher accuracy in most settings and consistently yields higher average accuracy on the three base models. We think there are two main reasons for this improvement: (1) First, our method encourages the model to encode its assessment of the overall solution in the final response token, which leads to better confidence calibration. Improved calibration itself can have a positive impact on the model’s learning. (2) Second, by integrating self-rewarding-based advantages into verifier-based advantages, our approach enables more fine-grained advantage estimation, which in turn facilitates more effective learning. For comparison, we report the results without self-rewarding-based advantages (-SWA) in Table 1. Regarding self-rewarding performance, applying a simple last-token self-rewarding MSE loss substantially enhances 7 Table 2: Comparison of verification F1 scores between LaSeR (self-rewarding) and external reward models (Qwen2.5-Math-7B-PRM800K, Qwen2.5-Math-PRM-7B, and Qwen2.5-Math-RM-72B) on responses generated by different policy models. Method MATH500 AMC23 AIME24 AIME25 Olym. Avg. Generator: OctoThinker-3B-Short-LaSeR Qwen2.5-Math-7B-PRM800K (7B RM) 77.0 68.9 - - 68.5 71.5 Qwen2.5-Math-PRM-7B (7B RM) 80.9 63.5 - - 64.1 69.5 Qwen2.5-Math-RM-72B (72B RM) 89.2 71.7 - - 72.9 77.9 LaSeR (3B Self-Rewarding) 73.6 70.2 - - 73.6 72.5 Generator: Qwen2.5-7B-Laser Qwen2.5-Math-7B-PRM800K (7B RM) 59.4 52.7 58.8 53.8 52.0 55.3 Qwen2.5-Math-PRM-7B (7B RM) 82.5 79.2 75.1 72.3 77.8 77.4 Qwen2.5-Math-RM-72B (72B RM) 87.8 80.7 81.3 74.8 75.4 80.0 LaSeR (7B Self-Rewarding) 83.2 82.5 79.6 74.3 78.3 79.6 Generator: Open-Reasoner-Zero-7B-LaSeR Qwen2.5-Math-7B-PRM800K (7B RM) 56.3 42.5 51.4 50.8 38.5 47.9 Qwen2.5-Math-PRM-7B (7B RM) 86.0 79.6 70.8 67.3 76.0 75.9 Qwen2.5-Math-RM-72B (72B RM) 86.8 79.4 71.0 71.4 75.5 76.8 LaSeR (7B Self-Rewarding) 87.2 79.7 64.6 77.7 78.7 77.6 Table 3: Comparison of reasoning and self-verification performance with and without reference log-probability simplification in our method. Based model is Open-Reasoner-Zero-7B. Method Reasoning Accuracy Self-Verification F1 Score MATH- 500 AMC- 23 AIME- 24 AIME- 25 Olym.- Bench Avg. MATH- 500 AMC- 23 AIME- 24 AIME- 25 Olym.- Bench Avg. w/ Simpl. 82.5 61.6 18.8 16.2 46.5 45.1 82.3 79.3 77.9 79.2 78.4 79.4 w/o Simpl. 81.0 61.2 17.3 17.3 48.3 45.0 81.8 79.2 79.0 78.9 77.5 79.3 the self-rewarding capability of the models, achieving around 80% self-verification F1 scores. This demonstrates strong self-verification accuracy on both correct and incorrect solutions. To further highlight the self-rewarding capabilities, we display the comparison results of verification F1 scores between LaSeR and three advanced external reward models (Qwen2.5-Math-7B-PRM800K (Zhang et al., 2025), Qwen2.5-Math-PRM-7B (Zhang et al., 2025), and Qwen2.5-Math-RM-72B (Yang et al., 2024)) on evaluating the solutions generated by the different reinforced models, i.e., OctoThinker-3B-Short-LaSeR, Qwen2.5-7B-LaSeR, and Open-Reasoner-Zero-7B-LaSeR. The results in Table 2 show that LaSeR outperforms equally sized state-of-the-art external verifiers in assessing the model’s own solutions, and even matches the verification performance of a 72B reward model, demonstrating its non-trivial effectiveness in enhancing self-rewarding capability. Also, our method requires one additional token inference only to compute the self-rewarding scores for enabling the policy model to function simultaneously as both the generator and reward model, which is highly efficient and practical. 4.3 Inference-Time Scaling Results Here, we explore the effectiveness of self-rewarding in the inference-time scaling via weighted majority voting. We compare majority voting results with (RM@K) and without (Maj@K) weighting by the last-token self-rewarding scores, on MATH500 and OlympiadBench. The results are shown in Figure 2. We denote the three base models by “OT-3B”, “Qwen2.5-7B”, and “ORZ-7B”. The suffixes “-GRPO” and “-LaSeR” indicate the variants trained with GRPO and our method LaSeR, respectively. The results show that the optimized self-rewarding capability of the model is highly effective on improving its own inference-time scaling performance. 5 Analysis and Discussion 5.1 The Impact of Simplifying The Reference Log-Probabilities to A Constant As discussed in Section 3.3, we approximate the log-probability of the pre-specified token under the reference model, log πre f (zc|x, y), by its mean computed over a small sample set when calculating the last-token self-rewarding scores. Here, we empirically validate this practice by conducting comparison experiments on Open-Reasoner- Zero-7B, with and without reference log-probability simplification in our method. We evaluate the checkpoint after 200 optimization steps in each setting (corresponding to the last checkpoint before advantage integration). The results are reported in Table 3. As shown, the simplification does not affect the optimization of reasoning 8 21 22 23 24 25 Number of Sampled Solutions 50 52 54 56 58 60 62 64 Accuracy (%) OT-3B-GRPO (Maj@K) OT-3B-LaSeR (Maj@K) OT-3B-LaSeR (RM@K) (a) Results of OT-3B-based models on MATH500 21 22 23 24 25 Number of Sampled Solutions 80 81 82 83 84 85 86 Accuracy (%) Qwen2.5-7B-GRPO (Maj@K) Qwen2.5-7B-LaSeR (Maj@K) Qwen2.5-7B-LaSeR (RM@K) (b) Results of Qwen2.5-7B-based models on MATH500 21 22 23 24 25 Number of Sampled Solutions 83 84 85 86 87 88 Accuracy (%) ORZ-7B-GRPO (Maj@K) ORZ-7B-LaSeR (Maj@K) ORZ-7B-LaSeR (RM@K) (c) Results of ORZ-7B-based models on MATH500 21 22 23 24 25 Number of Sampled Solutions 16 18 20 22 24 26 28 Accuracy (%) OT-3B-GRPO (Maj@K) OT-3B-LaSeR (Maj@K) OT-3B-LaSeR (RM@K) (d) Results of OT-3B-based models on OlympiadBench 21 22 23 24 25 Number of Sampled Solutions 44 45 46 47 48 49 50 51 Accuracy (%) Qwen2.5-7B-GRPO (Maj@K) Qwen2.5-7B-LaSeR (Maj@K) Qwen2.5-7B-LaSeR (RM@K) (e) Results of Qwen2.5-7B-based mod- els on OlympiadBench 21 22 23 24 25 Number of Sampled Solutions 48 49 50 51 52 53 54 55 Accuracy (%) ORZ-7B-GRPO (Maj@K) ORZ-7B-LaSeR (Maj@K) ORZ-7B-LaSeR (RM@K) (f) Results of ORZ-7B-based models on OlympiadBench Figure 2: The majority voting (Maj@K) and weighted majority voting (RM@K) results on MATH500 and OlympiadBench. MMLU-Pro GPQA-Diamond 40 45 50 55 60 65 70 Accuracy (%) 66.54 44.44 67.10 45.96 Qwen3-4B-GRPO (Avg.) Qwen3-4B-LaSeR (Avg.) Qwen3-4B-GRPO (Maj@4) Qwen3-4B-LaSeR (RM@4) (a) Evaluation accuracy on MMLU-Pro and GPQA-Diamond 0.0 0.2 0.4 0.6 0.8 1.0 Self-Rewarding Score 0.00 0.01 0.02 0.03 0.04 0.05 Proportion of Solutions Correct Solutions Incorrect Solutions (b) Self-rewarding score distribution on MMLU-Pro 0.0 0.2 0.4 0.6 0.8 1.0 Self-Rewarding Score 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 Proportion of Solutions Correct Solutions Incorrect Solutions (c) Self-rewarding score distribution on GPQA-Diamond Figure 3: The generalizability of LaSeR on general reasoning tasks. and self-rewarding capabilities, since the performance under the two settings remains comparable. However, it effectively reduces the computational cost of calculating the last-token self-rewarding value by half. 5.2 The Generalizability of LaSeR to General Reasoning Domain We conduct additional experiments to explore the generalizability of our method to general reasoning domain. We use a filtered version (Yu et al., 2025b) of WebInstruct-verified dataset (Ma et al., 2025), and conduct RL experiments on Qwen3-4B-Base (Yang et al., 2025a). We use the general-verifier-1.5B model from Ma et al. (2025) as the model-based verifier and adopt GRPO as the RL algorithm. For our method, we do not perform the advantage integration strategy here. The reason is that we observe the self-verification F1 score of our method during training is relatively low in the general reasoning setting (only between 65% and 70%, and the self-rewarding score distributions in the test sets shown in Figure 3(b) and Figure 3(c) also reveal this phenomenon). This leads to large noise in the self-rewarding-based advantage estimation, and consequently, the integration of self-rewarding-based advantages results in performance degradation. After training, we conduct evaluations on two general reasoning benchmarks: MMLU-Pro (Wang et al., 2024b) and GPQA-Diamond (Rein et al., 2024). We sample 4 solutions per problem on each dataset for each model, and calculate both the average accuracy and the (weighted) majority voting accuracy. Detailed training and evaluation settings are in Appendix H. 9 We display the evaluation accuracy in Figure 3(a), and additionally, we display the self-rewarding score distributions on two datasets in Figure 3(b) and Figure 3(c) for reference. First, we observe that jointly optimizing the self- rewarding capability does not impact the model’s general reasoning ability, allowing the policy model to achieve comparable average reasoning accuracy to the baseline. However, as mentioned above, the optimized self-rewarding score on general reasoning tasks does not achieve the high accuracy seen in math reasoning tasks. We can see that the self-rewarding score distributions for correct and incorrect solutions on MMLU-Pro exhibit certain overlap, and the distinction further diminishes on the more challenging benchmark GPQA-Diamond. We speculate that two factors may contribute to this: (1) The model’s general reasoning ability is inherently weaker than its math reasoning ability, which limits the upper bound of its self-rewarding capabilities in the general reasoning domain. (2) The model-based verifier used in the experiment (general-verifier-1.5B) has limited verification ability, resulting in high noise in the reasoning rewards, which in turn affects the optimization of the self-rewarding capability. A promising direction for future work is to further explore and unlock the full potential of our method in the general reasoning domain. Though not perfect, the optimized self-rewarding scores can still provide useful signals during inference time, leading to better weighted majority voting results. 5.3 Further Reduction or Increase of Self-Rewarding Cost In this section, we discuss two additional variants of LaSeR for future work. In the current method, we calculate the last-token self-rewarding score based on the next-token log-probability distribution of the “<EOS>” token, requiring one additional token inference compared with standard inference. One potential way to further reduce the inference cost of LaSeR is to derive the last-token self-rewarding score directly from the predicted log-probability of pre-specified token zc at the “<EOS>” token position. Specifically, let yT denote the “<EOS>” token in y. Then, the reduced last-token self-rewarding score can be defined as rs = βv log πθ(zc|x, y<T) −βvcre f , as we have observed that πref(zc|x, y<T) remains nearly constant across (x, y) (e.g., approximately e−28 for Qwen2.5-7B-Base). In this case, we can achieve ideally zero additional inference cost for self-rewarding compared with standard generation by directly calculating the self-rewarding score from the log-probability distribution at the “<EOS>” token position, without requiring any extra token inference. In theory, this works because setting a relatively large βv still yields a small value of πθ(zc|x, y<T) (e.g., πθ(zc|x, y<T) = e−18 when βv = 0.1 and cref = −28), thereby allowing πθ(<EOS>|x, y<T) to still dominate the probability mass. However, although the probability is very low, we observe that the generator may still select zc at the end of the sequence in few cases during training, which can adversely affect training stability as the generator continues to generate after zc. One straight-forward solution may be to set the sampling hyper-parameter top p to a value less than 1.0. Future work can further investigate advanced strategies to make the above adjustment more principled and robust. While reducing the self-rewarding cost improves efficiency, an alternative is to increase the inference cost in exchange for stronger self-rewarding capability. That is, instead of computing the self-rewarding score based on the log-probability distribution of a single token only, we can increase the number of additional inference tokens by calculating it over M tokens as rs = βv ∑M m=1 log πθ(zc|x, y, zc, · · · , zc | {z } m−1 times )) −Mβvcre f . It is a promising direction for future research to explore whether increasing the number of additional inference tokens can yield positive inference-time scaling effect for latent self-rewarding capability. 6 Conclusion In this work, we propose LaSeR, a lightweight and effective algorithm that jointly optimizes the reasoning and self-rewarding capabilities of LLMs. By deriving the closed-form solution to the RL objective of verification, we uncover a concise yet intriguing formula: the true reasoning reward provided by the verifier is equal to the last-token self-rewarding score produced by the model. This self-rewarding score depends on the model’s next-token log-probability for a pre-specified token at the final response token, a pre-calculated constant, and the KL coefficient. Based on this insight, our method simply adds a MSE loss between the verifier-based reasoning rewards and the corresponding last-token self-rewarding scores into the standard RLVR process. The optimized self-rewarding scores can not only be incorporated back into the RL process to further enhance training, but also achieve high verification accuracy at test time, thereby improving solution ranking and selection. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Arash Ahmadian, Chris Cremer, Matthias Gall´e, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet ¨Ust¨un, and Sara Hooker. Back to basics: Revisiting reinforce style optimization for learning from human feedback in llms. arXiv preprint arXiv:2402.14740, 2024. 10 AI-MO. Aime 2024. https://huggingface.co/datasets/AI-MO/aimo-validation-aime, 2024a. AI-MO. Amc 2023. https://huggingface.co/datasets/AI-MO/aimo-validation-amc, 2024b. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, et al. Process reinforcement through implicit rewards. arXiv preprint arXiv:2502.01456, 2025. Guanting Dong, Hangyu Mao, Kai Ma, Licheng Bao, Yifei Chen, Zhongyuan Wang, Zhongxia Chen, Jiazhen Du, Huiyang Wang, Fuzheng Zhang, et al. Agentic reinforced policy optimization. arXiv preprint arXiv:2507.19849, 2025. Run-Ze Fan, Zengzhi Wang, and Pengfei Liu. Megascience: Pushing the frontiers of post-training datasets for science reasoning. arXiv preprint arXiv:2507.16812, 2025. Bofei Gao, Zefan Cai, Runxin Xu, Peiyi Wang, Ce Zheng, Runji Lin, Keming Lu, Dayiheng Liu, Chang Zhou, Wen Xiao, et al. Llm critics help catch bugs in mathematics: Towards a better mathematical verifier with natural language feedback. arXiv preprint arXiv:2406.14024, 2024. Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. In International Conference on Machine Learning, pp. 10835–10866. PMLR, 2023. Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. Michael Hassid, Gabriel Synnaeve, Yossi Adi, and Roy Schwartz. Don’t overthink it. preferring shorter thinking chains for improved llm reasoning. arXiv preprint arXiv:2505.17813, 2025. Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3828–3850, 2024. Zhiwei He, Tian Liang, Jiahao Xu, Qiuzhi Liu, Xingyu Chen, Yue Wang, Linfeng Song, Dian Yu, Zhenwen Liang, Wenxuan Wang, et al. Deepmath-103k: A large-scale, challenging, decontaminated, and verifiable mathematical dataset for advancing reasoning. arXiv preprint arXiv:2504.11456, 2025. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https: //openreview.net/forum?id=7Bywt2mQsCe. Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner- zero: An open source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2503.24290, 2025. Maggie Huan, Yuetai Li, Tuney Zheng, Xiaoyu Xu, Seungone Kim, Minxin Du, Radha Poovendran, Graham Neubig, and Xiang Yue. Does math reasoning improve general llm capabilities? understanding transferability of llm reasoning. arXiv preprint arXiv:2507.00432, 2025. Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=chfJJYC3iL. Yuhua Jiang, Yuwen Xiong, Yufeng Yuan, Chao Xin, Wenyuan Xu, Yu Yue, Qianchuan Zhao, and Lin Yan. Pag: Multi-turn reinforced llm self-correction with policy as generative verifier. arXiv preprint arXiv:2506.10406, 2025. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https:// github.com/tatsu-lab/alpaca_eval, 5 2023. 11 Zongzhao Li, Zongyang Ma, Mingze Li, Songyou Li, Yu Rong, Tingyang Xu, Ziqi Zhang, Deli Zhao, and Wenbing Huang. Star-r1: Spatial transformation reasoning by reinforcing multimodal llms. arXiv preprint arXiv:2505.15804, 2025. Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations, 2023. Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024a. Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A. Smith. Tuning language models by proxy. In First Conference on Language Modeling, 2024b. URL https://openreview.net/ forum?id=dribhnhm1i. Jiawei Liu and Lingming Zhang. Code-r1: Reproducing r1 for code with reliable rewards. 2025. Xiaoyuan Liu, Tian Liang, Zhiwei He, Jiahao Xu, Wenxuan Wang, Pinjia He, Zhaopeng Tu, Haitao Mi, and Dong Yu. Trust, but verify: A self-verification approach to reinforcement learning with verifiable rewards. arXiv preprint arXiv:2505.13445, 2025a. Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Under- standing r1-zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025b. Songshuo Lu, Hua Wang, Zhi Chen, and Yaohua Tang. Urpo: A unified reward & policy optimization framework for large language models. arXiv preprint arXiv:2507.17515, 2025. Xueguang Ma, Qian Liu, Dongfu Jiang, Ge Zhang, Zejun Ma, and Wenhu Chen. General-reasoner: Advancing llm reasoning across all domains. arXiv preprint arXiv:2505.14652, 2025. MetaAI. Introducing llama 3.1: Our most capable models to date. https://ai.meta.com/blog/ meta-llama-3-1/, 2024a. MetaAI. Llama 3.2: Revolutionizing edge ai and vision with open, customizable models. https://ai.meta. com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/, 2024b. Eric Mitchell, Rafael Rafailov, Archit Sharma, Chelsea Finn, and Christopher D Manning. An emulator for fine-tuning large language models using small language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=Eo7kv0sllr. Sagnik Mukherjee, Lifan Yuan, Dilek Hakkani-Tur, and Hao Peng. Reinforcement learning finetunes small subnetworks in large language models. arXiv preprint arXiv:2505.11711, 2025. OpenCompass. Aime 2025. https://huggingface.co/datasets/opencompass/AIME2025, 2025. Jan Peters and Stefan Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th international conference on Machine learning, pp. 745–750, 2007. Qwen Team. Qwen2. 5: A party of foundation models. Qwen (Sept. 2024). url: https://qwenlm. github. io/blog/qwen2, 5, 2024. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in neural information processing systems, 36:53728–53741, 2023. David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024. Kusha Sareen, Morgane M Moss, Alessandro Sordoni, Rishabh Agarwal, and Arian Hosseini. Putting the value back in rl: Better test-time scaling by unifying llm reasoners with verifiers. arXiv preprint arXiv:2505.04842, 2025. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. 12 Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. arXiv preprint arXiv: 2409.19256, 2024. Skywork-o1. Skywork-o1 open series. https://huggingface.co/Skywork, November 2024. URL https: //huggingface.co/Skywork. Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024. Richard S Sutton, Andrew G Barto, et al. Reinforcement learning: An introduction, volume 1. MIT press Cambridge, 1998. Kimi Team, Yifan Bai, Yiping Bao, Guanduo Chen, Jiahao Chen, Ningxin Chen, Ruijue Chen, Yanru Chen, Yuankun Chen, Yutian Chen, et al. Kimi k2: Open agentic intelligence. arXiv preprint arXiv:2507.20534, 2025a. Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chen- zhuang Du, Chonghua Liao, et al. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599, 2025b. Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math- shepherd: Verify and reinforce llms step-by-step without human annotations. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 9426–9439, 2024a. Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. arXiv preprint arXiv:2406.01574, 2024b. Zengzhi Wang, Fan Zhou, Xuefeng Li, and Pengfei Liu. Octothinker: Mid-training incentivizes reinforcement learning scaling. arXiv preprint arXiv:2506.20512, 2025a. Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Xing Jin, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, et al. Ragen: Understanding self-evolution in llm agents via multi-turn reinforcement learning. arXiv preprint arXiv:2504.20073, 2025b. Xumeng Wen, Zihan Liu, Shun Zheng, Zhijian Xu, Shengyu Ye, Zhirong Wu, Xiao Liang, Yang Wang, Junjie Li, Ziming Miao, et al. Reinforcement learning with verifiable rewards implicitly incentivizes correct reasoning in base llms. arXiv preprint arXiv:2506.14245, 2025. An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024. An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, et al. Qwen3 technical report. arXiv preprint arXiv:2505.09388, 2025a. Wenkai Yang, Jingwen Chen, Yankai Lin, and Ji-Rong Wen. Deepcritic: Deliberate critique with large language models. arXiv preprint arXiv:2505.00662, 2025b. Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Weinan Dai, Tiantian Fan, Gaohong Liu, Lingjun Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025a. Tianyu Yu, Bo Ji, Shouli Wang, Shu Yao, Zefan Wang, Ganqu Cui, Lifan Yuan, Ning Ding, Yuan Yao, Zhiyuan Liu, et al. Rlpr: Extrapolating rlvr to general domains without verifiers. arXiv preprint arXiv:2506.18254, 2025b. Lifan Yuan, Wendi Li, Huayu Chen, Ganqu Cui, Ning Ding, Kaiyan Zhang, Bowen Zhou, Zhiyuan Liu, and Hao Peng. Free process rewards without process labels. arXiv preprint arXiv:2412.01981, 2024. Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025a. Yu Yue, Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu, Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, et al. Vapo: Efficient and reliable reinforcement learning for advanced reasoning tasks. arXiv preprint arXiv:2504.05118, 2025b. Kaiwen Zha, Zhengqi Gao, Maohao Shen, Zhang-Wei Hong, Duane S Boning, and Dina Katabi. Rl tango: Reinforcing generator and verifier together for language reasoning. arXiv preprint arXiv:2505.15034, 2025. 13 Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240, 2024. Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301, 2025. Jian Zhao, Runze Liu, Kaiyan Zhang, Zhimu Zhou, Junqi Gao, Dong Li, Jiafei Lyu, Zhouyi Qian, Biqing Qi, Xiu Li, et al. Genprm: Scaling test-time compute of process reward models via generative reasoning. arXiv preprint arXiv:2504.00891, 2025. Chujie Zheng, Shixuan Liu, Mingze Li, Xiong-Hui Chen, Bowen Yu, Chang Gao, Kai Dang, Yuqiong Liu, Rui Men, An Yang, et al. Group sequence policy optimization. arXiv preprint arXiv:2507.18071, 2025. Xiangxin Zhou, Zichen Liu, Anya Sims, Haonan Wang, Tianyu Pang, Chongxuan Li, Liang Wang, Min Lin, and Chao Du. Reinforcing general reasoning without verifiers. arXiv preprint arXiv:2505.21493, 2025. Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xinwei Long, Ermo Hua, et al. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084, 2025. 14 0 1000 2000 3000 4000 5000 Token Index 0 200 400 600 800 Cumulative Implicit Reward Correct Solution Incorrect Solution Figure 4: Cumulative implicit reward values across 32 reasoning trajectories sampled from Open-Reasoner- Zero-7B on an AIME2024 problem. Red lines corre- spond to wrong solutions and green lines correspond to correct solutions. Combination of Reference Model ref + Specified Token zc 0 5 10 15 20 25 Value of log ref(zc|x, y) 9.13 ± 0.13 11.19 ± 1.92 23.11 ± 0.04 24.87 ± 1.18 Qwen.+Yes Octo.+Yes Qwen.+<vision_start> Octo.+<|reserved_special_token_0|> Figure 5: The mean and standard deviation of −log πre f (zc|x, y) under different combinations of ref- erence model πre f and pre-specified token zc over 300 input-output pairs. A The Length Bias in Implicit Reward Here, we present the trend of the cumulative implicit reward values (log πθ(y<i|x) πre f (y<i|x) where πre f is Qwen2.5-7B-Base) across 32 reasoning trajectories sampled from Open-Reasoner-Zero-7B on an AIME2024 problem, showing how they vary with the increasing trajectory lengths. As illustrated in Figure 4, the curves of all samples exhibit a positive correlation between the implicit reward and the number of tokens, and longer trajectories tend to yield higher final implicit reward scores, indicating a strong length bias in implicit reward. Since incorrect solutions are generally longer than correct ones in reasoning tasks (Hassid et al., 2025), implicit reward is therefore not a reliable indicator of the relative quality of reasoning paths at test time. B Statistics of log πre f (zc|x, y) We present the mean and standard deviation of −log πre f (zc|x, y) computed over 300 input-output pairs. The reference model πre f is chosen as either Qwen2.5-7B-Base or OctoThinker-3B-Short-Base, and the evaluation is per- formed under two different choices of zc for each reference model (one common token and one unused special token): “Yes” and “<vision start>” for Qwen2.5-7B-Base, “Yes” and “<|reserved special token 0|>” for OctoThinker-3B-Short-Base. The results in Figure 5 indicates that −log πre f (zc|x, y) remains nearly constant and extremely small, with only a low standard deviation across different x and y. Thus, we can consider log πre f (zc|x, y) as a constant when calculating the last-token self-rewarding scores, which effectively reduces the computational cost by half. C Detailed Training Settings We use verl (Sheng et al., 2024) as our RL training framework. The basic training hyper-parameters in both GRPO training and LaSeR training for each model are put in Table 4, and the newly introduced training hyper-parameters for LaSeR are put in Table 5. The number of optimization steps is 1000 for Qwen2.5-7B-Base and OctoThinker- 3B-Short-Base, and 500 for Open-Reasoner-Zero-7B. In RL, a reasoning reward of 1.0 is given if the final answer and the answer format are both correct; otherwise, it is 0.0. In our method, the reasoning warm-up is performed for Qwen2.5-7B-Base and OctoThinker-3B-Short-Base only, and the self-rewarding warm-up is performed for all models. D Ablation Studies on Self-Rewarding Hyper-Parameters Here, we display the curves (with Exponential Moving Average (EMA) smoothing) of training rewards and training self-verification F1 scores of our method under different choices of coefficient βv and self-rewarding MSE loss weight α. The experiments are conducted on Open-Reasoner-Zero-7B, which help to skip the reasoning warm-up phase compared with using Qwen2.5-7B-Base and OctoThinker-3B-Short-Base, while the results are similar in other 15 Table 4: Basic training hyper-parameters of both GRPO and LaSeR. Hyper-parameter Value Train Batch Size 128 Micro Batch Size 128 Rollout n 8 Maximum Prompt Length 2048 Maximum Response Length 8192 Temperature 1.0 Top p 1.0 LR 1 × 10−6 KL Coefficient 0.0 Table 5: Unique training hyper-parameters of LaSeR. Hyper-parameter Value Coefficient βv 0.1 Loss Weight α 0.1 Self-Rewarding Adv. Weight τ 0.1 Reasoning Warm-Up Steps 200 Self-Rewarding Warm-Up Steps 200 two base models after reasoning warm-up. The dynamics of training rewards and training self-verification F1 scores are displayed in Figure 6. As we can see, assigning a larger weight α to the last-token self-rewarding loss has a more detrimental impact on the model’s reasoning capabilities. On the other hand, the coefficient βv has little impact on optimizing the self-rewarding scores, as long as it remains within a reasonable range (0.1 ∼0.5). However, much smaller values of βv can impair the model’s reasoning capability, as indicated by the analysis in the end of Section 3.4. For example, when βv = 0.05, we should have πθ(zc|x, y) = e−3 ≈0.05 under πref(zc|x, y) = e−23 and rv(x, y) = 1, then the large value of πθ(zc|x, y) causes large interference with the optimization of reasoning capability. In our main experiments, we choose (βv, α) = (0.1, 0.1). E The Effect of Class-Level Re-Weighting on The Balanced Self-Verification Capability We present the training dynamics of our method on Open-Reasoner-Zero-7B, with and without class-level loss re-weighting, in Figure 7 for comparison. As shown, applying loss re-weighting leads to a more balanced self- verification performance by mitigating the bias toward the majority class with larger sample size, while still maintaining high reasoning accuracy. F Comparison between Last-Token Self-Rewarding Loss and Supervised Fine-Tuning Loss Following the discussion in Section 3.4, we compare the training performance of our introduced last-token self- rewarding loss with the supervised fine-tuning (SFT) loss on Open-Reasoner-Zero-7B. The training dynamics are shown in Figure 8. As observed, applying the SFT loss to optimize the self-rewarding capability causes substantial interference with the optimization of reasoning capability, leading to a marked degradation in training rewards. Moreover, the SFT loss degrades extremely slowly, indicating that directly driving πθ(zc|x, y) from 0 to 1 for rv(x, y) = 1 is inherently difficult. However, our method only requires fitting πθ(zc|x, y) to exp(1/βv) · πref(zc|x, y) for rv(x, y) = 1, which is considerably easier and introduces much less interference. G Detailed Self-Verification Results We report the detailed self-verification results of each model on self-generated solutions across all benchmarks in Table 6, including both overall accuracy and F1 score. Our method consistently yields significant improvements in model’s self-rewarding and self-verification capabilities, while incurring only minimal additional computational cost. H Training and Evaluation Settings in General Reasoning Experiments The basic training and testing hyper-parameters for experiments on WebInstruct-verified are the same as those in Table 4 and Table 5, while the number of optimization steps here is 800. The simplified constant of the reference log-probability cref is −23.0. We do not employ the advantage integration strategy here, as we find that the optimized self-rewarding capability of Qwen3-4B-LaSeR on general reasoning tasks is limited, and introducing self-rewarding-based advantage integration leads to performance degradation. I Prompt Templates We show the training, evaluation and self-verification prompt templates used in our experiments in the end. 16 Baseline Coef. v = 0.1, Loss Weight = 0.1 Coef. v = 0.1, Loss Weight = 0.5 Coef. v = 0.2, Loss Weight = 0.1 Coef. v = 0.2, Loss Weight = 0.5 Coef. v = 0.5, Loss Weight = 0.1 0 25 50 75 100 125 150 Optimization Step 0.55 0.60 0.65 0.70 0.75 Training Rewards (a) Training rewards with EMA smoothing 0 25 50 75 100 125 150 Optimization Step 0.0 0.2 0.4 0.6 0.8 Training Self-Verification F1 Scores (b) Training self-verification F1 scores with EMA smoothing Figure 6: The curves of training rewards and training self-verification F1 scores under different combinations of hyper-parameters with EMA smoothing (EMA coef.=0.9). 17 Coef. v = 0.1, Loss Weight = 0.1, w/ Loss Reweighting Coef. v = 0.1, Loss Weight = 0.1, w/o Loss Reweighting 0 25 50 75 100 125 150 175 200 Optimization Step 0.55 0.60 0.65 0.70 0.75 Training Rewards (a) Training rewards with EMA smoothing 0 25 50 75 100 125 150 175 200 Optimization Step 0.0 0.2 0.4 0.6 0.8 Training Self-Verification F1 Scores (b) Training self-verification F1 scores with EMA smoothing Figure 7: The curves of training rewards and training self-verification F1 scores of our method with and without class-level loss re-weighting practice (EMA coef.=0.9). 18 Last-Token Self-Rewarding Loss, Coef. v = 0.1, Loss Weight = 0.1 Supervised Fine-Tuning Loss, Loss Weight = 0.1 0 25 50 75 100 125 150 175 200 Optimization Step 0.55 0.60 0.65 0.70 0.75 Training Rewards (a) Training rewards with EMA smoothing 0 25 50 75 100 125 150 175 200 Optimization Step 2 1 0 1 2 3 Self-Rewarding/SFT Loss (log scale) (b) Self-rewarding/SFT loss curve on a log scale Figure 8: The comparison of the training dynamics between the last-token self-rewarding loss and the SFT loss. 19 Table 6: Detailed self-verification results. Method MATH500 AMC23 AIME24 AIME25 Olym. Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 OctoThinker-3B-Short-Base Base 60.2 22.3 52.3 11.2 - - - - 62.0 13.7 GRPO 58.2 56.9 66.7 47.3 - - - - 66.4 48.8 LaSeR 77.0 73.6 77.3 70.2 - - - - 80.3 73.6 - SWA 81.0 80.4 84.1 70.9 - - - - 83.5 66.0 Qwen2.5-7B-Base Base 45.0 36.4 30.7 30.8 24.5 27.6 28.2 32.9 33.8 36.9 GRPO 76.5 54.6 61.1 59.7 60.4 36.6 72.5 41.5 54.6 53.5 LaSeR 88.0 83.2 81.5 82.5 92.2 79.6 90.5 74.3 79.5 78.3 - SWA 87.8 79.7 79.6 80.2 94.3 81.3 92.2 74.9 83.9 83.3 Open-Reasoner-Zero-7B Base 79.6 26.7 66.6 51.3 39.6 45.9 47.6 55.2 55.2 37.5 GRPO 52.9 57.1 50.9 44.8 66.9 14.6 78.9 28.1 54.7 49.5 LaSeR 90.1 87.2 77.7 79.7 87.2 64.6 92.8 77.7 80.1 78.7 - SWA 89.0 87.5 76.2 77.7 87.7 63.3 93.6 77.3 80.2 77.9 20 Training and Evaluation Prompt Template for OctoThinker-3B-Short-Base <bos token> A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. User: You must put your answer inside \boxed{} and Your final answer will be extracted automatically by the \boxed{} tag. {question} Assistant: Training Prompt Template for Qwen2.5-7B-Base <bos token> A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>. User: You must put your answer inside <answer> </answer> tags, i.e., <answer> answer here </answer>. And your final answer will be extracted automatically by the \boxed{} tag. This is the problem: {question} Assistant: <think> Zero-Shot Evaluation Prompt Template for Qwen2.5-7B-Base < |im start| >system You are a helpful assistant.< |im end| > < |im start| >user {question} Please reason step by step, and put your final answer within \boxed{}.< |im end| > < |im start| >assistant Training and Evaluation Prompt Template for Open-Reasoner-Zero-7B A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think> <answer> answer here </answer>. User: You must put your answer inside <answer> </answer> tags, i.e., <answer> answer here </answer>. And your final answer will be extracted automatically by the \boxed{} tag. {question} Assistant: <think> Training and Evaluation Prompt Template for Qwen3-4B-Base < |im start| >user {question} Please reason step by step, and put your final answer within \boxed{}.< |im end| > < |im start| >assistant 21 Prompt Template for Self-Verification (Modified from Liu et al. (2025a)) Below you are presented with a question and a tentative response. Your task is to evaluate the response and assign a rating to the response based on the following clear criteria: Rating Criteria: 1. Missing final answer, or incorrect response with the wrong final answer: assign \boxed{0}. 2. Correct response with the correct final answer: assign \boxed{1}. ### Question Begin ### {question} ### Question End ### ### Response Begin ### {response} ### Response End ### First provide your evaluation process, then clearly state your final rating value enclosed in \boxed{} at the end. 22
2025-10-17 LaSeR: Reinforcement Learning with Last-Token Self-Rewarding Wenkai Yang1,∗, Weijie Liu2, Ruobing Xie2, Yiju Guo1, Lulu Wu2, Saiyong Yang2, Yankai Lin1,† 1Gaoling 2LLM Department, Tencent B { Abstract Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a core paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs). To address the lack of verification signals at test time, prior studies incorporate the training of model's selfverification capability into the standard RLVR process, thereby unifying reasoning and verification capabilities within a single LLM. However, previous practice requires the LLM to sequentially generate solutions and self-verifications using two separate prompt templates, which significantly reduces efficiency. In this work, we theoretically reveal that the closed-form solution to the RL objective of self-verification can be reduced to a remarkably simple form: the true reasoning reward of a solution is equal to its last-token self-rewarding score, which is computed as the difference between the policy model's next-token log-probability assigned to any pre-specified token at the solution's last token and a pre-calculated constant, scaled by the KL coefficient. Based on this insight, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), an algorithm that simply augments the original RLVR loss with a MSE loss that aligns the last-token self-rewarding scores with verifier-based reasoning rewards, jointly optimizing the reasoning and self-rewarding capabilities of LLMs. The optimized self-rewarding scores can be utilized in both training and testing to enhance model performance. Notably, our algorithm derives these scores from the predicted next-token probability distribution of the last token immediately after generation, incurring only the minimal extra cost of one additional token inference. Experiments show that our method not only improves the model's reasoning performance but also equips it with remarkable self-rewarding capability, thereby boosting its inference-time scaling performance. Code and models are available at https://github.com/RUCBM/LaSeR. 1 Introduction In the past few years, Large Language Models (LLMs) (Achiam et al., 2023; MetaAI, 2024a; Qwen Team, 2024; Liu et al., 2024a) have advanced significantly, excelling in various domains (Li et al., 2023; Wang et al., 2024b). However, they still face limitations in complex reasoning tasks (AI-MO, 2024a; OpenCompass, 2025; Rein et al., 2024; Jain et al., 2025). Recently, Reinforcement Learning with Verifiable Rewards (RLVR) has shown great promise in enhancing the complex reasoning abilities of LLMs, as demonstrated by OpenAI o1 (Jaech et al., 2024) and DeepSeek-R1 (Guo et al., 2025). By rewarding reasoning paths based on the consistency between final outcomes and ground-truth answers through a deterministic verifier, RLVR incentivizes LLMs to produce more deliberate reasoning chains while effectively mitigating the risk of reward hacking (Gao et al., 2023). Despite its effectiveness, a limitation of standard RLVR is its inability to continue providing verification signals for model outputs in scenarios where ground truth answers are unavailable, such as during test-time inference (Zuo et al., 2025). To address this, existing works either train an external verifier (Lightman et al., 2023; Snell et al., 2024; Zhang et al., 2024; Gao et al., 2024; Yang et al., 2025b) for evaluating candidate solutions or jointly optimize the self-verification and reasoning capabilities of the same policy model during RLVR (Sareen et al., 2025; Liu et al., 2025a; Zha et al., 2025; Jiang et al., 2025). However, we argue that these methods have a major issue of inefficiency: the external verifier requires additional training on a separate LLM during or after reinforcement learning (RL); while joint optimization involves generating both solutions and self-verifications sequentially under two separate prompt templates, which doubles the per-sample inference cost and reduces generation efficiency. In this work, we propose LaSeR (Reinforcement Learning with Last-Token Self-Rewarding), a lightweight and highly effective algorithm that achieves this goal, jointly optimizing reasoning and self-verification capabilities at nearly zero additional cost. Our core insight is that a model's assessment in its own solution can be captured in the last token's predicted probability distribution. We first show theoretically that the RL objective of selfverification has a closed-form solution, where the true reasoning reward from the verifier is equal to the next-token log-probability ratio between the policy and reference models for a pre-specified special token (an unused token ∗Work done during an internship at Tencent. † Corresponding author. 1 16 Oct 2025 Rollout RL Prompt ... Responses (rs-rv)2 Prompt Training Stage Testing Stage Self-Rewarding Advantages Self-Rewarding Score Log-Prob. of Special Token Verifier-based Advantages -cref×βv r! r" 0.72 0.95 As N As1 Verifier-based Reward Self-Rewarding-based Correctness EOSToken ArN AsN A" A! 1-τAv+ τAs ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... As2 As3 As4 AvN Av1 Av2 Av3 Av4 0.34 0.22 0.30 ... ... ... 0.88 0.19 0.72 Figure 1: The full illustration of our method LaSeR. During training, our approach augments the standard RLVR process with an additional MSE loss between the verifier-based rewards (rv) and the last-token self-rewarding scores (rs), where rs is the difference between the policy model's next-token log-probabilities of a pre-specified special token at the final response token and a pre-calculated constant cre f , scaled by the KL coefficient βv. The optimized self-rewarding scores can serve as auxiliary reward signals in both training and testing stages to enhance model performance. like " " that serves as the pre-defined ground truth for verifications on correct candidate solutions) at the last response token, scaled by the KL coefficient. We refer to this scaled log-probability ratio as the last-token self-rewarding score. Furthermore, we observe that for a randomly chosen special token, its predicted log-probability under the reference model is practically a constant, small value across all problems and solutions (see Figure 5). This enables us to simplify the self-rewarding score into a remarkably simple form that depends only on the policy model's outputs and a pre-calculated constant, making it exceptionally efficient to compute. Building on above analysis, we replace the explicit RL optimization for self-verification with a simple Mean Squared Error (MSE) loss. As illustrated in Figure 1, we train the model to align its last-token self-rewarding score with the true reasoning reward from the verifier. In specific, after the policy model generates the solution for each problem, we calculate the last-token self-rewarding score based on its last token's next-token log-probability for the pre-specified special token, and construct the corresponding MSE loss. This MSE objective is added directly to the standard RLVR loss, allowing for seamless joint optimization for both the reasoning and self-rewarding capabilities of the policy model. At both training and testing time, our method generates each candidate solution and computes the self-rewarding score in a single forward pass, incurring the cost of at most one additional token inference with no extra generation required. This is significantly more efficient than prior approaches, which require a separate inference step. The optimized self-rewarding scores can not only complement the original reasoning rewards during RLVR to further enhance training performance, but also be used at test time to rank and weight solutions for more accurate answer aggregation. We conduct experiments on both LLaMA (MetaAI, 2024b) and Qwen (Qwen Team, 2024) architectures, including pre-trained, mid-trained and reinforced variants, to demonstrate the effectiveness of our method in broader math reasoning tasks. Experimental results show that our methods not only effectively improve the reasoning performance of the policy model, but also allows its self-rewarding accuracy to reach a high level, thereby equipping the model with better confidence calibration of its own outputs and improving its inference-time scaling performance. 2 Related Work RLVR for LLM Reasoning Reinforcement Learning with Verifiable Rewards (RLVR), which sorely calculates binary rewards based on the final answers, has been shown to be highly effective in enhancing the reasoning capabilities of LLMs (Jaech et al., 2024; Guo et al., 2025; Team et al., 2025b). Current studies can be categorized into several directions, including but not limited to (1) designing more efficient and effective RLVR algorithms (Schulman et al., 2017; Shao et al., 2024; Yu et al., 2025a; Yue et al., 2025b; Liu et al., 2025b; Zheng et al., 2025), (2) extending RLVR to general reasoning domain (Ma et al., 2025; Zhou et al., 2025; Yu et al., 2025b; Li et al., 2025) and agent 2 scenarios (Wang et al., 2025b; Team et al., 2025a; Dong et al., 2025), (3) collecting diverse verifiable datasets (Hu et al., 2025; He et al., 2025; Liu & Zhang, 2025; Ma et al., 2025; Fan et al., 2025), and (4) analyzing the mechanisms of RLVR (Mukherjee et al., 2025; Yue et al., 2025a; Wen et al., 2025; Huan et al., 2025). External Verifiers for LLM Reasoning Training external verifiers to identify the correctness of the LLM-generated solutions is an effective way to enhance the reasoning performance of LLMs in the inference time. External verifiers usually fall into two categories: (1) Scalar Reward Models: Outcome-supervised Reward Models (ORMs) (Cobbe et al., 2021; Yang et al., 2024) and Process-supervised Reward Models (PRMs) (Lightman et al., 2023; Wang et al., 2024a; Skywork-o1, 2024; Yuan et al., 2024) are two representative approaches. ORMs provide supervision by evaluating the final answer, while PRMs offer more fine-grained feedback by assessing the intermediate reasoning steps. (2) Generative Verifiers: Recent studies have explored the potential of training LLMs to perform natural language critiques of reasoning solutions generated by the LLM generators, and then to judge their final outcomes (Zhang et al., 2024; Gao et al., 2024; Yang et al., 2025b; Zhao et al., 2025). This paradigm has demonstrated stronger verification performance than scalar reward models, as it enables the LLM verifier to conduct deliberate chain-of-thought reasoning before arriving at the final judgment. Self-Verification for LLM Reasoning Several recent studies (Sareen et al., 2025; Liu et al., 2025a; Zha et al., 2025; Jiang et al., 2025; Lu et al., 2025) aim to unify the roles of generator and verifier by equipping a single policy model with self-verification capability. The trained self-verification capability can be used in both the RL training and inference-time scaling stages to enhance the model performance. However, these approaches require generating solutions and self-verifications sequentially during training and inference. In contrast, our method derives the self-rewarding signal directly from the next-token probability distribution of the final token of the generated sequence, achieving a more efficient and effective unification of generation and self-verification. 3 Methodology 3.1 Preliminaries RL Objective We denote πθ as the target policy model to be optimized, and πref as the reference model from which πθ is initialized. D is the query set, x is an input and y is the generated response to x. The standard optimization objective of RL is formalized as Oπθ = max πθ Ex∼D,y∼πθ(·|x) h r(x, y) -βDKL(πθ||πre f ) i , (1) where r(x, y) represents a reward function to score the response y given x, DKL is the Kullback-Leibler (KL) divergence loss regularizing the distance between two model distributions. RLVR Recently, RLVR (Guo et al., 2025; Hu et al., 2025) has emerged as a widely adopted and effective paradigm for enhancing the reasoning capabilities of LLMs. In RLVR, the reward function r is typically chosen as a deterministic verifier rv, such as a rule-based verifier, to evaluate whether the final extracted answer a ⊂y matches the ground-truth answer a∗, and to produce binary feedback (e.g., {0,1}). That is, rv(x, y) = 1{a≡a∗} = 1 if a is semantically equivalent to a∗, 0 otherwise. (2) Policy Gradient Method Policy Gradient (Sutton et al., 1998) is a widely adopted algorithm to optimize the objective of Eq. (1), which updates the policy model via the estimated gradient ∇θOπθ = Ex∼D,y∼πθ(·|x) " T ∑ t=1 At∇θ log πθ(yt|x, y " token). We present a numerical analysis to validate this claim in Figure 5, and we can see the value of πre f (z|x, y) is less than e-9 for common tokens and even less than e-20 for unused special tokens. Then, we can get that Z(x, y) = ∑ z πre f (z|x, y) exp( 1 βv ˆr(x, y, z)) = ∑ z/∈{zc,zi} πre f (z|x, y) exp( 1 βv ˆr(x, y, z)) + πre f (zc|x, y) exp( 1 βv ˆr(x, y, zc)) + πre f (zi|x, y) exp( 1 βv ˆr(x, y, zi)) = (1 -πre f (zc|x, y) -πre f (zi|x, y)) exp(0) + (πre f (zc|x, y) + πre f (zi|x, y)) exp( 1 βv ) ≈1 × 1 + 0 × exp( 1 βv ) = 1 =⇒log Z(x, y) ≈0. (10) The above analysis reveals that, under our formulation, the partition function that cannot be ignored by previous studies (Cui et al., 2025) can be now naturally discarded. Consequently, the optimal solution to Eq. (6) can be approximately reduced to: ˆr(x, y, z) = βv log[πθ(z|x, y)/πre f (z|x, y)]. (11) 4 In particular, the true verification reward when the model verifies a solution as correct is: ˆr(x, y, zc) = rv(x, y) = βv log[πθ(zc|x, y)/πre f (zc|x, y)]. (12) The first equation is derived from the definition in Eq. (8). The second equation reveals that the true reasoning reward is equal to log-probability ratio of the policy model to the reference model at zc, scaled by the KL coefficient. Thus, to optimize the model's verification capability, we do not need to explicitly perform a RLVR procedure. Instead, we can directly optimize the following MSE loss: L = Ex∼D,y∼πg(·|x) βv log[πθ(zc|x, y)/πre f (zc|x, y)] -rv(x, y) 2 . (13) Thus, in the self-verification setting where πg = πθ, we can directly adds the above loss into the original RLVR loss to jointly optimize the reasoning and self-verification capabilities of the policy model: Sπθ = max πθ Ex∼D,y∼πθ(·|x)   rv(x, y) -βDKL(πθ(y|x)||πre f (y|x)) -α " βv log πθ(zc|x, y) πre f (zc|x, y) -rv(x, y) #2  , (14) where α is a loss balancing coefficient. We refer the term rs = βv log πθ(zc|x,y) πre f (zc|x,y) to the last-token self-rewarding score, since it depends on the log-probability distributions of the last token in y. 3.3 Other Techniques Here, we discuss several practical techniques to further simplify and improve the efficiency and effectiveness of the self-rewarding MSE loss introduced above. Simplification of the Log-Probability in the Reference Model As shown in Figure 5, the quantity log πre f (zc|x, y) remains almost constant, exhibiting only a negligible standard deviation across all x and y. Therefore, we can regard it as a pre-calculated constant cre f in calculating the last-token self-rewarding score during both training and inference. This eliminates the need for forwarding y through the reference model and thus further enhances efficiency. In specific, cre f is the mean value of log πre f (zc|x, y) on a small set of pre-generated set of (x, y). Furthermore, based on the findings in Figure 5, we select an unused special token as zc to make πre f (zc|x, y) closer to 0 and to further minimize its impact on the approximation of Z(x, y) = 1 and the stability of training. Self-Rewarding Loss Re-Weighting During training, the numbers of correct and incorrect solutions are imbalanced, and their ratio dynamically changes. To prevent the last-token self-rewarding score from being biased toward the class with more samples, we apply a class-level loss re-weighting strategy within each optimization step. In each step, we calculate the total numbers of correct and incorrect solutions (identified by the deterministic verifier) for all problems in the current batch as Nc and Ni. Then, we apply the loss re-weighting as l = 1 Nc + Ni ∑ x ∑ y h wc1{rv(x,y)=1} + wi1{rv(x,y)=0} i h βv log πθ(zc|x, y) -βvcre f -rv(x, y) i2 , (15) where wc = Nc+Ni 2×Nc and wi = Nc+Ni 2×Ni are re-weighting factors. This practice achieves a more balanced selfverification capability. We provide empirical validations on this in Appendix E. Future work can explore more effective ways to address the issue of imbalanced distribution of solutions. Integration of Verifier-based and Self-Rewarding-based Advantages The last-token self-rewarding scores can not only be used at test time, but also facilitate the training process through the integration of verifier-based and self-rewarding-based advantages. We believe such practice can help mitigate the issue of misjudgments by rule-based verifiers, which often occur when the format of ground-truth answer is overly complex, and produce more fine-grained rewards. For example, in GRPO, the final advantage can be calculated as: ˆAi t = (1 -τ)riv -mean(r1v, · · · , rKv ) std(r1v, · · · , rKv ) + τ ris -mean(r1s, · · · , rKs ) std(r1s, · · · , rKs ) , where ri v = rv(x, yi) and ri s = βv log πθ(zc|x, yi) -βvcre f . (16) To stabilize training, we adopt a filtering strategy that sets τ = 0 for any group whenever the standard deviation std(r1 s, · · · , rK s ) within this group falls below a threshold T, which is set to 0.1. Separate Warm-Up of Reasoning and Self-Rewarding Capabilities During the initial phase of training, we optimize only the last-token self-rewarding score, without integrating self-rewarding-based advantages into the learning process. After a certain steps when the last-token self-rewarding loss is sufficiently small, we proceed to integrate verifier-based and self-rewarding-based advantages. Moreover, when training from base (i.e., pre-trained) models, we first perform standard RLVR without incorporating the last-token self-rewarding loss in order to warm 5 Algorithm 1: LaSeR: Reinforcement Learning with Last-Token Self-Rewarding Input: Initial policy model πθ, prompts D, verifier rv, warm-up hyper-parameters wr and wsr, coefficient βv, pre-specified token zc, pre-calculated cre f = E(x,y)[log πre f (zc|x, y)] for Step s = 1, · · · , S do 1. Set πold ←πθ; 2. Sample batch prompts Ds from D; 3. Generate solutions {yi}K i=1 for each x ∈Ds; 4. Calculate verifier-based rewards and advantages (e.g., Eq. (4)), calculate RL loss; 5. If s ≥wr, calculate last-token self-rewarding loss based on Eq. (15) and add it to RL loss; 6. If s ≥wsr, calculate self-rewarding-based advantages and perform advantage integration based on Eq. (16); 7. Update the policy model πθ using any RL algorithm with integrated loss and advantages; end Output: πθ up the model's reasoning capability, followed by a warm-up phase for the self-rewarding capability before the complete integration of verifier-based and self-rewarding-based advantages. By combining all the aforementioned techniques, our full algorithm Reinforcement Learning with Last-Token Self-Rewarding (LaSeR), is summarized in Algorithm 1 and illustrated in Figure 1. During the testing phase, once the model generates a solution, we compute the last-token self-rewarding score based on rs = βv log πθ(zc|x, y) - βvcre f . The comparison between this score and 0.5 determines the self-verification outcome of the solution, or the score itself can be further used to perform weighted majority voting. 3.4 Brief Discussion Comparison Between LaSeR and Prior Approaches Compared with previous methods (Sareen et al., 2025; Liu et al., 2025a; Zha et al., 2025) that requires the policy model to perform separate generations for solutions and verifications, our method directly derives the self-rewarding result from the next-token log-probability of the final solution token. In the RL process, the computation of token log-probabilities is typically carried out after all the generations are completed (Sheng et al., 2024). Therefore, we can directly replace the token id of the first padding token with the token id of the pre-specified token before computing the log-probabilities of the sequences, thereby incurring no additional computation cost during training. During inference, our method requires only one more token inference after the solution is completed, which substantially reduces the computational cost compared to previous methods. We also discuss the potential way to further reduce the self-rewarding cost by avoiding any extra token inference in Section 5.3, which can be an interesting future work. Difference Between Last-Token Self-Rewarding Loss and Supervised Fine-Tuning Loss An alternative to train the self-verification capability is to optimize the following supervised fine-tuning (SFT) loss by maximizing the next-token probability of the token zc or zi based on the context (x, y): LSFT = -Ex∼D,y∼πg(·|x) [rv(x, y) · log πθ(zc|x, y) + (1 -rv(x, y)) · log πθ(zi|x, y)] . (17) The major difference between SFT loss and our last-token self-rewarding loss in Eq. (13) is that the SFT loss drives πθ(zc|x, y) to fit 1 when rv(x, y) = 1, which may lead to strong interference with the optimization of reasoning capability. In contrast, our loss drives πθ(zc|x, y) toward exp(1/βv) · πref(zc|x, y) for rv(x, y) = 1.0. When βv is relatively large, πθ(zc|x, y) remains still very small, thereby exerting only a negligible influence on the original RLVR optimization (e.g., πθ(zc|x, y) = e-13 when πref(zc|x, y) = e-23 and βv = 0.1). We provide the empirical comparison in Appendix F. 4 Experiments 4.1 Experimental Settings Base Models and Baselines We primarily conduct empirical validations on both LLaMA3.2 MetaAI (2024b) and Qwen2.5 (Qwen Team, 2024) architectures, including three base models: OctoThinker-3B-Short-Base (Wang et al., 2025a) (mid-trained version of LLaMA3.2-3B-Base), Qwen2.5-7B-Base (Qwen Team, 2024) (pre-trained model) and Open-Reasoner-Zero-7B (Hu et al., 2025) (reinforced version of Qwen2.5-7B-Base). In principle, our method can be seamlessly integrated into any RLVR framework, as it only introduces an additional MSE loss term. In this work, we adopt the widely used GRPO (Shao et al., 2024) as the base algorithm and primarily investigate the effectiveness of applying our method within GRPO, while leaving the exploration on other RL algorithms in the future work. 6 Table 1: Reasoning and self-verification performance of each model on five mathematical reasoning benchmarks. We do not report the results of OctoThinker-based models on AIME24-25, as the number of correct solutions is quite insufficient for a reliable evaluation. Method Reasoning Accuracy Self-Verification F1 Score MATH500 AMC23 AIME24 AIME25 Olym.- Bench Avg. MATH500 AMC23 AIME24 AIME25 Olym.- Bench Avg. OctoThinker-3B-Short-Base Base 3.7 1.3 - - 1.0 2.0 22.3 11.2 - - 13.7 15.7 GRPO 49.8 25.3 - - 17.3 30.8 56.9 47.3 - - 48.8 51.0 LaSeR 53.1 27.0 - - 18.2 32.8 73.6 70.2 - - 73.6 72.5 - SWA 52.9 26.1 - - 18.2 32.4 80.4 70.9 - - 66.0 72.4 Qwen2.5-7B-Base Base 35.8 20.6 3.5 1.6 12.3 14.8 36.4 30.8 27.6 32.9 36.9 32.9 GRPO 79.9 55.9 16.2 13.8 43.3 41.8 54.6 59.7 36.6 41.5 53.5 49.2 LaSeR 80.2 58.1 15.4 15.7 44.1 42.7 83.2 82.5 79.6 74.3 78.3 79.6 - SWA 78.0 58.3 15.4 12.3 41.7 41.1 79.7 80.2 81.3 74.9 83.3 79.9 Open-Reasoner-Zero-7B Base 81.9 60.3 15.6 15.1 46.9 44.0 26.7 51.3 45.9 55.2 37.5 43.3 GRPO 83.1 61.9 18.1 15.0 47.1 45.0 57.1 44.8 14.6 28.1 49.5 38.8 LaSeR 82.8 62.7 19.1 15.1 47.8 45.5 87.2 79.7 64.6 77.7 78.7 77.6 - SWA 83.2 62.6 19.0 14.5 47.6 45.4 87.5 77.7 63.3 77.3 77.9 76.7 Training and Evaluation Datasets We adopt DeepMath-103K (He et al., 2025), a large-scale and high-quality mathematical reasoning dataset, for our RL training data. In testing, we evaluate both the reasoning and self-verification performance of each model on five typical math reasoning benchmarks: MATH500 (Hendrycks et al., 2021), AMC23 (AI-MO, 2024b), AIME24 (AI-MO, 2024a), AIME25 (OpenCompass, 2025), and OlympiadBench (He et al., 2024). We also explore the effectiveness of our method in general reasoning tasks beyond math reasoning in Section 5.2. Training Settings The detailed training hyper-parameters of GRPO are put in Appendix C. The prompt template for each model is in Appendix I. When applying our method, we set the hyper-parameters (βv, α, τ) = (0.1, 0.1, 0.1), which are empirically determined based on the observations in Appendix D. zc is selected as " " for Qwen2.5-7B-Base and Open-Reasoner-Zero-7B, and " " for OctoThinker3B-Short-Base. The simplified constant of the reference log-probability, cref, is -23.0 for Qwen2.5-7B-Base and Open-Reasoner-Zero-7B, and -25.0 for OctoThinker-3B-Short-Base, as estimated from the results in Figure 5. The number of reasoning warm-up steps is set to 200 for both Qwen2.5-7B-Base and OctoThinker-3B-Short-Base, and the number of self-rewarding warm-up steps is 200 across all models. Evaluation Settings During generation, we set both the temperature and top p to 1.0 for all models. The max generation len is 8192. On MATH500 and OlympiadBench, we sample 2 solutions for each problem; whereas on AMC23, AIME24, and AIME25, we sample 32 solutions per problem. We then report the average Pass@1 accuracy of each model on each benchmark. We also evaluate the self-verification performance of each model by computing the self-verification F1 score, defined as the harmonic mean of self-verification accuracy on self-generated correct and incorrect solutions. Baselines perform self-verification based on the prompt in Appendix I. Any solution without a final answer is automatically treated as incorrect and excluded from the verification accuracy calculation. Detailed self-verification accuracy results are provided in Appendix G. 4.2 Main Results and Analysis We put the main results in Table 1. The key conclusion is that, across different model variants, our method not only yields better reasoning performance for the policy model compared with the baseline, but also substantially enhances its self-verification capability by enabling the self-rewarding scores to achieve remarkably high F1 scores. Regarding reasoning performance, applying our algorithm leads to higher accuracy in most settings and consistently yields higher average accuracy on the three base models. We think there are two main reasons for this improvement: (1) First, our method encourages the model to encode its assessment of the overall solution in the final response token, which leads to better confidence calibration. Improved calibration itself can have a positive impact on the model's learning. (2) Second, by integrating self-rewarding-based advantages into verifier-based advantages, our approach enables more fine-grained advantage estimation, which in turn facilitates more effective learning. For comparison, we report the results without self-rewarding-based advantages (-SWA) in Table 1. Regarding self-rewarding performance, applying a simple last-token self-rewarding MSE loss substantially enhances 7 Table 2: Comparison of verification F1 scores between LaSeR (self-rewarding) and external reward models (Qwen2.5-Math-7B-PRM800K, Qwen2.5-Math-PRM-7B, and Qwen2.5-Math-RM-72B) on responses generated by different policy models. Method MATH500 AMC23 AIME24 AIME25 Olym. Avg. Generator: OctoThinker-3B-Short-LaSeR Qwen2.5-Math-7B-PRM800K (7B RM) 77.0 68.9 - - 68.5 71.5 Qwen2.5-Math-PRM-7B (7B RM) 80.9 63.5 - - 64.1 69.5 Qwen2.5-Math-RM-72B (72B RM) 89.2 71.7 - - 72.9 77.9 LaSeR (3B Self-Rewarding) 73.6 70.2 - - 73.6 72.5 Generator: Qwen2.5-7B-Laser Qwen2.5-Math-7B-PRM800K (7B RM) 59.4 52.7 58.8 53.8 52.0 55.3 Qwen2.5-Math-PRM-7B (7B RM) 82.5 79.2 75.1 72.3 77.8 77.4 Qwen2.5-Math-RM-72B (72B RM) 87.8 80.7 81.3 74.8 75.4 80.0 LaSeR (7B Self-Rewarding) 83.2 82.5 79.6 74.3 78.3 79.6 Generator: Open-Reasoner-Zero-7B-LaSeR Qwen2.5-Math-7B-PRM800K (7B RM) 56.3 42.5 51.4 50.8 38.5 47.9 Qwen2.5-Math-PRM-7B (7B RM) 86.0 79.6 70.8 67.3 76.0 75.9 Qwen2.5-Math-RM-72B (72B RM) 86.8 79.4 71.0 71.4 75.5 76.8 LaSeR (7B Self-Rewarding) 87.2 79.7 64.6 77.7 78.7 77.6 Table 3: Comparison of reasoning and self-verification performance with and without reference log-probability simplification in our method. Based model is Open-Reasoner-Zero-7B. Method Reasoning Accuracy Self-Verification F1 Score MATH500 AMC23 AIME24 AIME25 Olym.- Bench Avg. MATH500 AMC23 AIME24 AIME25 Olym.- Bench Avg. w/ Simpl. 82.5 61.6 18.8 16.2 46.5 45.1 82.3 79.3 77.9 79.2 78.4 79.4 w/o Simpl. 81.0 61.2 17.3 17.3 48.3 45.0 81.8 79.2 79.0 78.9 77.5 79.3 the self-rewarding capability of the models, achieving around 80% self-verification F1 scores. This demonstrates strong self-verification accuracy on both correct and incorrect solutions. To further highlight the self-rewarding capabilities, we display the comparison results of verification F1 scores between LaSeR and three advanced external reward models (Qwen2.5-Math-7B-PRM800K (Zhang et al., 2025), Qwen2.5-Math-PRM-7B (Zhang et al., 2025), and Qwen2.5-Math-RM-72B (Yang et al., 2024)) on evaluating the solutions generated by the different reinforced models, i.e., OctoThinker-3B-Short-LaSeR, Qwen2.5-7B-LaSeR, and Open-Reasoner-Zero-7B-LaSeR. The results in Table 2 show that LaSeR outperforms equally sized state-of-the-art external verifiers in assessing the model's own solutions, and even matches the verification performance of a 72B reward model, demonstrating its non-trivial effectiveness in enhancing self-rewarding capability. Also, our method requires one additional token inference only to compute the self-rewarding scores for enabling the policy model to function simultaneously as both the generator and reward model, which is highly efficient and practical. 4.3 Inference-Time Scaling Results Here, we explore the effectiveness of self-rewarding in the inference-time scaling via weighted majority voting. We compare majority voting results with (RM@K) and without (Maj@K) weighting by the last-token self-rewarding scores, on MATH500 and OlympiadBench. The results are shown in Figure 2. We denote the three base models by "OT-3B", "Qwen2.5-7B", and "ORZ-7B". The suffixes "-GRPO" and "-LaSeR" indicate the variants trained with GRPO and our method LaSeR, respectively. The results show that the optimized self-rewarding capability of the model is highly effective on improving its own inference-time scaling performance. 5 Analysis and Discussion 5.1 The Impact of Simplifying The Reference Log-Probabilities to A Constant As discussed in Section 3.3, we approximate the log-probability of the pre-specified token under the reference model, log πre f (zc|x, y), by its mean computed over a small sample set when calculating the last-token self-rewarding scores. Here, we empirically validate this practice by conducting comparison experiments on Open-ReasonerZero-7B, with and without reference log-probability simplification in our method. We evaluate the checkpoint after 200 optimization steps in each setting (corresponding to the last checkpoint before advantage integration). The results are reported in Table 3. As shown, the simplification does not affect the optimization of reasoning 8 21 22 23 24 25 Number of Sampled Solutions 50 52 54 56 58 60 62 64 Accuracy (%) OT-3B-GRPO (Maj@K) OT-3B-LaSeR (Maj@K) OT-3B-LaSeR (RM@K) (a) Results of OT-3B-based models on MATH500 21 22 23 24 25 Number of Sampled Solutions 80 81 82 83 84 85 86 Accuracy (%) Qwen2.5-7B-GRPO (Maj@K) Qwen2.5-7B-LaSeR (Maj@K) Qwen2.5-7B-LaSeR (RM@K) (b) Results of Qwen2.5-7B-based models on MATH500 21 22 23 24 25 Number of Sampled Solutions 83 84 85 86 87 88 Accuracy (%) ORZ-7B-GRPO (Maj@K) ORZ-7B-LaSeR (Maj@K) ORZ-7B-LaSeR (RM@K) (c) Results of ORZ-7B-based models on MATH500 21 22 23 24 25 Number of Sampled Solutions 16 18 20 22 24 26 28 Accuracy (%) OT-3B-GRPO (Maj@K) OT-3B-LaSeR (Maj@K) OT-3B-LaSeR (RM@K) (d) Results of OT-3B-based models on OlympiadBench 21 22 23 24 25 Number of Sampled Solutions 44 45 46 47 48 49 50 51 Accuracy (%) Qwen2.5-7B-GRPO (Maj@K) Qwen2.5-7B-LaSeR (Maj@K) Qwen2.5-7B-LaSeR (RM@K) (e) Results of Qwen2.5-7B-based models on OlympiadBench 21 22 23 24 25 Number of Sampled Solutions 48 49 50 51 52 53 54 55 Accuracy (%) ORZ-7B-GRPO (Maj@K) ORZ-7B-LaSeR (Maj@K) ORZ-7B-LaSeR (RM@K) (f) Results of ORZ-7B-based models on OlympiadBench Figure 2: The majority voting (Maj@K) and weighted majority voting (RM@K) results on MATH500 and OlympiadBench. MMLU-Pro GPQA-Diamond 40 45 50 55 60 65 70 Accuracy (%) 66.54 44.44 67.10 45.96 Qwen3-4B-GRPO (Avg.) Qwen3-4B-LaSeR (Avg.) Qwen3-4B-GRPO (Maj@4) Qwen3-4B-LaSeR (RM@4) (a) Evaluation accuracy on MMLU-Pro and GPQA-Diamond 0.0 0.2 0.4 0.6 0.8 1.0 Self-Rewarding Score 0.00 0.01 0.02 0.03 0.04 0.05 Proportion of Solutions Correct Solutions Incorrect Solutions (b) Self-rewarding score distribution on MMLU-Pro 0.0 0.2 0.4 0.6 0.8 1.0 Self-Rewarding Score 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 Proportion of Solutions Correct Solutions Incorrect Solutions (c) Self-rewarding score distribution on GPQA-Diamond Figure 3: The generalizability of LaSeR on general reasoning tasks. and self-rewarding capabilities, since the performance under the two settings remains comparable. However, it effectively reduces the computational cost of calculating the last-token self-rewarding value by half. 5.2 The Generalizability of LaSeR to General Reasoning Domain We conduct additional experiments to explore the generalizability of our method to general reasoning domain. We use a filtered version (Yu et al., 2025b) of WebInstruct-verified dataset (Ma et al., 2025), and conduct RL experiments on Qwen3-4B-Base (Yang et al., 2025a). We use the general-verifier-1.5B model from Ma et al. (2025) as the model-based verifier and adopt GRPO as the RL algorithm. For our method, we do not perform the advantage integration strategy here. The reason is that we observe the self-verification F1 score of our method during training is relatively low in the general reasoning setting (only between 65% and 70%, and the self-rewarding score distributions in the test sets shown in Figure 3(b) and Figure 3(c) also reveal this phenomenon). This leads to large noise in the self-rewarding-based advantage estimation, and consequently, the integration of self-rewarding-based advantages results in performance degradation. After training, we conduct evaluations on two general reasoning benchmarks: MMLU-Pro (Wang et al., 2024b) and GPQA-Diamond (Rein et al., 2024). We sample 4 solutions per problem on each dataset for each model, and calculate both the average accuracy and the (weighted) majority voting accuracy. Detailed training and evaluation settings are in Appendix H. 9 We display the evaluation accuracy in Figure 3(a), and additionally, we display the self-rewarding score distributions on two datasets in Figure 3(b) and Figure 3(c) for reference. First, we observe that jointly optimizing the selfrewarding capability does not impact the model's general reasoning ability, allowing the policy model to achieve comparable average reasoning accuracy to the baseline. However, as mentioned above, the optimized self-rewarding score on general reasoning tasks does not achieve the high accuracy seen in math reasoning tasks. We can see that the self-rewarding score distributions for correct and incorrect solutions on MMLU-Pro exhibit certain overlap, and the distinction further diminishes on the more challenging benchmark GPQA-Diamond. We speculate that two factors may contribute to this: (1) The model's general reasoning ability is inherently weaker than its math reasoning ability, which limits the upper bound of its self-rewarding capabilities in the general reasoning domain. (2) The model-based verifier used in the experiment (general-verifier-1.5B) has limited verification ability, resulting in high noise in the reasoning rewards, which in turn affects the optimization of the self-rewarding capability. A promising direction for future work is to further explore and unlock the full potential of our method in the general reasoning domain. Though not perfect, the optimized self-rewarding scores can still provide useful signals during inference time, leading to better weighted majority voting results. 5.3 Further Reduction or Increase of Self-Rewarding Cost In this section, we discuss two additional variants of LaSeR for future work. In the current method, we calculate the last-token self-rewarding score based on the next-token log-probability distribution of the " " token, requiring one additional token inference compared with standard inference. One potential way to further reduce the inference cost of LaSeR is to derive the last-token self-rewarding score directly from the predicted log-probability of pre-specified token zc at the " " token position. Specifically, let yT denote the " " token in y. Then, the reduced last-token self-rewarding score can be defined as rs = βv log πθ(zc|x, y " token position, without requiring any extra token inference. In theory, this works because setting a relatively large βv still yields a small value of πθ(zc|x, y |x, y Octo.+ Figure 5: The mean and standard deviation of -log πre f (zc|x, y) under different combinations of reference model πre f and pre-specified token zc over 300 input-output pairs. A The Length Bias in Implicit Reward Here, we present the trend of the cumulative implicit reward values (log πθ(y " for Qwen2.5-7B-Base, "Yes" and " " for OctoThinker-3B-Short-Base. The results in Figure 5 indicates that -log πre f (zc|x, y) remains nearly constant and extremely small, with only a low standard deviation across different x and y. Thus, we can consider log πre f (zc|x, y) as a constant when calculating the last-token self-rewarding scores, which effectively reduces the computational cost by half. C Detailed Training Settings We use verl (Sheng et al., 2024) as our RL training framework. The basic training hyper-parameters in both GRPO training and LaSeR training for each model are put in Table 4, and the newly introduced training hyper-parameters for LaSeR are put in Table 5. The number of optimization steps is 1000 for Qwen2.5-7B-Base and OctoThinker3B-Short-Base, and 500 for Open-Reasoner-Zero-7B. In RL, a reasoning reward of 1.0 is given if the final answer and the answer format are both correct; otherwise, it is 0.0. In our method, the reasoning warm-up is performed for Qwen2.5-7B-Base and OctoThinker-3B-Short-Base only, and the self-rewarding warm-up is performed for all models. D Ablation Studies on Self-Rewarding Hyper-Parameters Here, we display the curves (with Exponential Moving Average (EMA) smoothing) of training rewards and training self-verification F1 scores of our method under different choices of coefficient βv and self-rewarding MSE loss weight α. The experiments are conducted on Open-Reasoner-Zero-7B, which help to skip the reasoning warm-up phase compared with using Qwen2.5-7B-Base and OctoThinker-3B-Short-Base, while the results are similar in other 15 Table 4: Basic training hyper-parameters of both GRPO and LaSeR. Hyper-parameter Value Train Batch Size 128 Micro Batch Size 128 Rollout n 8 Maximum Prompt Length 2048 Maximum Response Length 8192 Temperature 1.0 Top p 1.0 LR 1 × 10-6 KL Coefficient 0.0 Table 5: Unique training hyper-parameters of LaSeR. Hyper-parameter Value Coefficient βv 0.1 Loss Weight α 0.1 Self-Rewarding Adv. Weight τ 0.1 Reasoning Warm-Up Steps 200 Self-Rewarding Warm-Up Steps 200 two base models after reasoning warm-up. The dynamics of training rewards and training self-verification F1 scores are displayed in Figure 6. As we can see, assigning a larger weight α to the last-token self-rewarding loss has a more detrimental impact on the model's reasoning capabilities. On the other hand, the coefficient βv has little impact on optimizing the self-rewarding scores, as long as it remains within a reasonable range (0.1 ∼0.5). However, much smaller values of βv can impair the model's reasoning capability, as indicated by the analysis in the end of Section 3.4. For example, when βv = 0.05, we should have πθ(zc|x, y) = e-3 ≈0.05 under πref(zc|x, y) = e-23 and rv(x, y) = 1, then the large value of πθ(zc|x, y) causes large interference with the optimization of reasoning capability. In our main experiments, we choose (βv, α) = (0.1, 0.1). E The Effect of Class-Level Re-Weighting on The Balanced Self-Verification Capability We present the training dynamics of our method on Open-Reasoner-Zero-7B, with and without class-level loss re-weighting, in Figure 7 for comparison. As shown, applying loss re-weighting leads to a more balanced selfverification performance by mitigating the bias toward the majority class with larger sample size, while still maintaining high reasoning accuracy. F Comparison between Last-Token Self-Rewarding Loss and Supervised Fine-Tuning Loss Following the discussion in Section 3.4, we compare the training performance of our introduced last-token selfrewarding loss with the supervised fine-tuning (SFT) loss on Open-Reasoner-Zero-7B. The training dynamics are shown in Figure 8. As observed, applying the SFT loss to optimize the self-rewarding capability causes substantial interference with the optimization of reasoning capability, leading to a marked degradation in training rewards. Moreover, the SFT loss degrades extremely slowly, indicating that directly driving πθ(zc|x, y) from 0 to 1 for rv(x, y) = 1 is inherently difficult. However, our method only requires fitting πθ(zc|x, y) to exp(1/βv) · πref(zc|x, y) for rv(x, y) = 1, which is considerably easier and introduces much less interference. G Detailed Self-Verification Results We report the detailed self-verification results of each model on self-generated solutions across all benchmarks in Table 6, including both overall accuracy and F1 score. Our method consistently yields significant improvements in model's self-rewarding and self-verification capabilities, while incurring only minimal additional computational cost. H Training and Evaluation Settings in General Reasoning Experiments The basic training and testing hyper-parameters for experiments on WebInstruct-verified are the same as those in Table 4 and Table 5, while the number of optimization steps here is 800. The simplified constant of the reference log-probability cref is -23.0. We do not employ the advantage integration strategy here, as we find that the optimized self-rewarding capability of Qwen3-4B-LaSeR on general reasoning tasks is limited, and introducing self-rewarding-based advantage integration leads to performance degradation. I Prompt Templates We show the training, evaluation and self-verification prompt templates used in our experiments in the end. 16 Baseline Coef. v = 0.1, Loss Weight = 0.1 Coef. v = 0.1, Loss Weight = 0.5 Coef. v = 0.2, Loss Weight = 0.1 Coef. v = 0.2, Loss Weight = 0.5 Coef. v = 0.5, Loss Weight = 0.1 0 25 50 75 100 125 150 Optimization Step 0.55 0.60 0.65 0.70 0.75 Training Rewards (a) Training rewards with EMA smoothing 0 25 50 75 100 125 150 Optimization Step 0.0 0.2 0.4 0.6 0.8 Training Self-Verification F1 Scores (b) Training self-verification F1 scores with EMA smoothing Figure 6: The curves of training rewards and training self-verification F1 scores under different combinations of hyper-parameters with EMA smoothing (EMA coef.=0.9). 17 Coef. v = 0.1, Loss Weight = 0.1, w/ Loss Reweighting Coef. v = 0.1, Loss Weight = 0.1, w/o Loss Reweighting 0 25 50 75 100 125 150 175 200 Optimization Step 0.55 0.60 0.65 0.70 0.75 Training Rewards (a) Training rewards with EMA smoothing 0 25 50 75 100 125 150 175 200 Optimization Step 0.0 0.2 0.4 0.6 0.8 Training Self-Verification F1 Scores (b) Training self-verification F1 scores with EMA smoothing Figure 7: The curves of training rewards and training self-verification F1 scores of our method with and without class-level loss re-weighting practice (EMA coef.=0.9). 18 Last-Token Self-Rewarding Loss, Coef. v = 0.1, Loss Weight = 0.1 Supervised Fine-Tuning Loss, Loss Weight = 0.1 0 25 50 75 100 125 150 175 200 Optimization Step 0.55 0.60 0.65 0.70 0.75 Training Rewards (a) Training rewards with EMA smoothing 0 25 50 75 100 125 150 175 200 Optimization Step 2 1 0 1 2 3 Self-Rewarding/SFT Loss (log scale) (b) Self-rewarding/SFT loss curve on a log scale Figure 8: The comparison of the training dynamics between the last-token self-rewarding loss and the SFT loss. 19 Table 6: Detailed self-verification results. Method MATH500 AMC23 AIME24 AIME25 Olym. Acc. F1 Acc. F1 Acc. F1 Acc. F1 Acc. F1 OctoThinker-3B-Short-Base Base 60.2 22.3 52.3 11.2 - - - - 62.0 13.7 GRPO 58.2 56.9 66.7 47.3 - - - - 66.4 48.8 LaSeR 77.0 73.6 77.3 70.2 - - - - 80.3 73.6 - SWA 81.0 80.4 84.1 70.9 - - - - 83.5 66.0 Qwen2.5-7B-Base Base 45.0 36.4 30.7 30.8 24.5 27.6 28.2 32.9 33.8 36.9 GRPO 76.5 54.6 61.1 59.7 60.4 36.6 72.5 41.5 54.6 53.5 LaSeR 88.0 83.2 81.5 82.5 92.2 79.6 90.5 74.3 79.5 78.3 - SWA 87.8 79.7 79.6 80.2 94.3 81.3 92.2 74.9 83.9 83.3 Open-Reasoner-Zero-7B Base 79.6 26.7 66.6 51.3 39.6 45.9 47.6 55.2 55.2 37.5 GRPO 52.9 57.1 50.9 44.8 66.9 14.6 78.9 28.1 54.7 49.5 LaSeR 90.1 87.2 77.7 79.7 87.2 64.6 92.8 77.7 80.1 78.7 - SWA 89.0 87.5 76.2 77.7 87.7 63.3 93.6 77.3 80.2 77.9 20 Training and Evaluation Prompt Template for OctoThinker-3B-Short-Base A conversation between User and Assistant. The user asks a question, and the Assistant solves it. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. User: You must put your answer inside and Your final answer will be extracted automatically by the tag. {question} Assistant: Training Prompt Template for Qwen2.5-7B-Base A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. The reasoning process is enclosed within and answer is enclosed within tags, respectively, i.e., reasoning process here answer here . User: You must put your answer inside tags, i.e., answer here . And your final answer will be extracted automatically by the tag. This is the problem: {question} Assistant: Zero-Shot Evaluation Prompt Template for Qwen2.5-7B-Base system You are a helpful assistant. user {question} Please reason step by step, and put your final answer within . assistant Training and Evaluation Prompt Template for Open-Reasoner-Zero-7B A conversation between User and Assistant. The User asks a question, and the Assistant solves it. The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. The reasoning process is enclosed within and answer is enclosed within tags, respectively, i.e., reasoning process here answer here . User: You must put your answer inside tags, i.e., answer here . And your final answer will be extracted automatically by the tag. {question} Assistant: Training and Evaluation Prompt Template for Qwen3-4B-Base user {question} Please reason step by step, and put your final answer within . assistant 21 Prompt Template for Self-Verification (Modified from Liu et al. (2025a)) Below you are presented with a question and a tentative response. Your task is to evaluate the response and assign a rating to the response based on the following clear criteria: Rating Criteria: 1. Missing final answer, or incorrect response with the wrong final answer: assign . 2. Correct response with the correct final answer: assign . ### Question Begin ### {question} ### Question End ### ### Response Begin ### {response} ### Response End ### First provide your evaluation process, then clearly state your final rating value enclosed in at the end. 22
2510.14947
Architecture Is All You Need: Diversity-Enabled Sweet Spots for Robust Humanoid Locomotion *Blake Werner, *Lizhi Yang, Aaron D. Ames Abstract— Robust humanoid locomotion in unstructured en- vironments requires architectures that balance fast low-level stabilization with slower perceptual decision-making. We show that a simple layered control architecture (LCA), a proprio- ceptive stabilizer running at high rate, coupled with a compact low-rate perceptual policy, enables substantially more robust performance than monolithic end-to-end designs, even when us- ing minimal perception encoders. Through a two-stage training curriculum (blind stabilizer pretraining followed by perceptual fine-tuning), we demonstrate that layered policies consistently outperform one-stage alternatives in both simulation and hard- ware. On a Unitree G1 humanoid, our approach succeeds across stair and ledge tasks where one-stage perceptual policies fail. These results highlight that architectural separation of timescales, rather than network scale or complexity, is the key enabler for robust perception-conditioned locomotion. I. INTRODUCTION Robust humanoid locomotion over mixed and unstruc- tured terrain is a task as old as the platform itself, while still an unsolved problem. Sensing of terrain is partial and noisy, contact events are discontinuous, and controllers must react faster than perception can resolve in detail. Decades of practice in guidance–navigation–control (GNC) suggest a simple lesson: robustness emerges when fast, low-level stabilization is paired with slower, longer-horizon naviga- tion. The canonical example is aerospace GNC [1], [2]: a slow, semantic guidance layer chooses where to go; an intermediate-rate trajectory-generation layer turns goals into feasible references; and a fast feedback control layer tracks those references and rejects disturbances. The same pattern, “slow and flexible” above “fast and rigid,” with well-defined interfaces, recurs across robotics and biological sensorimotor systems [3], [4]. This work takes this observation to its logical extreme and argues that, for high-dimensional perception-conditioned control problems, layered control architecture (LCA) [5]–[9] is the primary driver of robustness. Sophisticated models, learned world representations, or intricate reward shaping help to get the maximum absolute performance, but are not necessary for task success when the stack itself is well-posed. In a well-posed LCA, information flows through narrow interfaces: references descend (planner →controller) while tracking error or status ascends (controller →planner). Cru- cially, layers operate at different time scales—a design that * denotes equal contribution. All authors affiliated with Caltech MCE. This research is supported in part by the Technology Innovation Institute (TII), BP p.l.c., and by Dow via project #227027AW. Fig. 1. A humanoid robot trained to traverse complex terrain through use of a combination perception information and fast proprioception information. Using this input effectively requires the use of structured architecture in order to produce performant and robust results. both reduces computational burden and improves robustness by letting each layer specialize where it is most effective [5]. The separation of layers in a LCA, together with heteroge- neous objectives and information, enables “diversity-enabled sweet spots” (DeSS) [10]: the combined stack can outper- form any single monolithic component tuned in isolation. For perception-conditioned humanoid walking, the LCA framing implies a minimal yet sufficient stack: (i) a compact, local- perception navigation encoder that updates at moderate rate to construct an latent space that reflects long-horizon terrain geometry, and (ii) a fast stabilizer that uses proprioception to condition upon this geometry and contend with contact variability. Our method instantiates exactly this two-layer core, with the guidance layer assumed given, aligning with the quantitative architectural principles in [5]. A. Contributions This paper makes two central claims. First, robust loco- motion necessarily requires a layered, multi-rate design: a reflexive controller that stabilizes with proprioception at high rate, and a navigation layer that updates more slowly from exteroceptive cues to set short-horizon trajectories. Second, there exists a minimal instantiation of the LCA that can perform complex robust locomotion tasks without the use of heavy machinery: no complex environment estimators, no mixed-integer footstep search, no world models, and arXiv:2510.14947v2 [cs.RO] 19 Oct 2025 Fig. 2. Training and Deployment Overview: both actor and critic are two-stage architectures each with their own perception encoder. The actor receives noisy heightmap information, while the critic receives perfect information, and each receive proprioception history. During deployment, a depth image is filtered and passed through the trained encoder, and the actor combines this with the proprioception history to determine action. no complex network architectures. The performance “sweet spot” arises from different parts of the control architecture taking information at different rates and information budgets rather than from any single sophisticated component. Concretely, this paper realizes the smallest useful LCA for humanoids: (i) a fast low-level stabilizer (joint-space tracking with largely standard locomotion RL rewards) that runs purely on proprioception, and (ii) a slow navigation policy that consumes a compact local heightmap and allows the low-level to condition itself upon longer-horizon informa- tion. Training follows a two-stage curriculum: a blind phase (perception zeroed) that emphasizes stabilization, followed by perception phase that allows for more intelligent longer- horizon planning. This architecture is intentionally plain by design, yet we show it closes the gap to recent methods that rely on richer models or elaborate perception stacks. Our contributions are as follows: • Architecture over complexity We argue and empiri- cally validate that robust humanoid locomotion requires a layered, multi-rate stack; the particular choice of sophisticated models is secondary. • Minimal LCA for robust humanoid locomotion We instantiate a two-layer, two-stage pipeline with standard rewards and a compact local perception interface that performs well on complex locomotion tasks in unstruc- tured terrain. • Architecture-isolating ablations We vary network ar- chitectures and training curriculums. Results show that while model details produce only small performance differences, removing the layered structure causes large drops in success and tracking metrics. B. Related Work Two-stage training pipelines. Two-stage curricula appear in several locomotion settings, but for different architectural reasons. On sparse or precarious supports, works emphasize contact selection and balance under limited footholds, ef- fectively prioritizing longer-horizon foot placement behavior before refining stabilization [11], [12]. In contrast, other works on challenging terrain follow a “blind-then-vision” strategy: first learn a robust proprioceptive stabilizer, then condition that stabilizer on exteroceptive cues via a slower vision module [13], [14]. Both fall naturally into the LCA view: the first stage trains one layer in isolation (navigation or stabilization), while the second introduces the comple- mentary layer and its interface. Blind stair-traversal and in general rough terrain works [15]–[20], can be seen as extreme instantiations where the navigation/perception layer is absent: such works are often very strong at stabilization but limited in foresight, relying on overfitting to a terrain type from training, and implicitly switching to this ’mode’ when encountering this obstacle during deployment. Our design follows the second category, choosing to emphasize proprioceptive stabilization first before adding a conditioning vision module; however, doing so with only a minimal architecture consisting of just those two components. Perception encoders. Across humanoid pipelines, percep- tion does not feed torques directly; instead, visual depth or heightmaps are first encoded, then fused with proprioception downstream [21]. This preserves rate separation and prevents slow, noisy exteroception from contaminating fast feedback. Examples include perceptive internal models that fuse vision and state estimates for improved foothold selection [22], [23] and world-model approaches that learn latent representations to inform mid-rate decision making [24]. Our design follows the same pattern but keeps the encoder intentionally compact, simple, and local to maintain a narrow interface between the navigation layer and the stabilizer. Model-based stepping and hybrid stacks. Classical “per- ceptive” footstep planners select contacts via mixed-integer optimization using sensed terrain [25]. Hybrid pipelines in- tegrate such planners with model-free RL, letting the planner handle discrete contact choices while RL handles low-level tracking and robustness [26]. Whole-body methods with sequential contacts and adaptive motion optimization for dex- terous humanoids also embody this decomposition: a mid- rate generator proposes feasible references, while a high- rate controller enforces stability and feasibility [27]. In all cases, the pipeline is explicitly layered: planning (navigation) up top, fast feedback below, with narrow reference/feedback channels—precisely the LCA pattern. Student–teacher and distillation. Teacher–student pipelines leverage privileged information and rich supervision to train a capable teacher, then distill a deployable student with restricted observations [28]–[33]. From an LCA standpoint, such methods can partially sidestep architectural constraints during training by allowing the teacher to approximate harder, more global solutions before compressing capability into a smaller runtime policy. While highly effective, analyzing their architectural equivalence (e.g., whether the distilled student implicitly embeds a multi-rate decomposition) remains open; we regard this as complementary and leave a deeper treatment for future work. II. METHODS A. Optimization Analysis To analyze the complex problem of perception-informed robot control, consider the following optimization: θ∗= max θ E  X k γkr(sk, ak|θ)  , (1) where θ are the network parameters This is the classical one-stage formulation of the reinforcement learning pipeline. Note that while this attempts to solve the global optimal control problem, it suffers from significant sensitivity to initial conditions [34] [35]. From an optimization perspective, solving the problems in sequence performs a different optimization, with less of the specified sensitivity. Let the parameters of the networks be divided as θ = [θx, θy]T , with ‘slow’ network parameters θy and ‘fast’ parameters θx. Note that in practice, these rates are more frequencies of the signals themselves, rather than the frequency of the controller (as they are all one network running at one speed). By solving the fast-rate optimization first, we solve θ† x = max θx E  X k γkr(sk, ak|θx, θy,0)  , (2) wherein we maximize the reward conditioned on the fast rate controller parameters θx subject to an initial setting of the slow parameters θy,0. In practice, since we will be removing perception from the optimization in stage one of the training, we remove the dependence on θy,0, allowing for a simpler optimization more likely to find a satisfactory local maxima. In the second stage of the optimization then, we solve θ∗ x, θ∗ y = max θx,θy E  X k γkr(sk, ak|θx, θy)  (3) s.t. θx,0 = θ† x. (4) Since we are optimizing over both variables, we perform the same optimization as the one-stage, so in sufficiently regular cases such as strictly concave reward landscapes we are guaranteed the same solution, i.e. θ∗ one-stage = θ∗ two-stage. However, in the highly nonconcave reward landscape that we work with, we note that by choosing a good initial condition from the first optimization, we are less susceptible to bad Fig. 3. Layered verses monolithic architectures: while the network archi- tecture may be identical, training in two phases allows them to assume the layered control structure. local maxima with large regions of convergence that would otherwise attract the optimization algorithm. B. Observations and Normalization We propose a minimal robust humanoid locomotion pipeline with the goal of illustrating our LCA hypothesis. Let q be joint positions, ˙q joint velocities, gb the projected gravity direction in body frame, ωb the base angular velocity in body frame, ak−1 the last applied action, and u the commanded planar velocity. Let ok be a K-length history of robot state in- formation along with current velocity command and last ac- tion: ok = [qk, qk−1, ..., qk−K, ˙qk, ... ˙qk−K, ωb,k, ..., ωb,k−K, gb,k, ...gb,k−K, u, ak−1]. a) Actor observation: Our actor observation is a con- catenation of a the K-length history of robot state infor- mation and the current velocity command and heightmap: oπ k = [ok, Hπ], where Hπ ∈R11×11 is a noisy, sparse heightmap covering 1.0, m × 1.0, m around the robot (robot- centric frame). Note that with a nominal max velocity range of ±0.6 m/s, and the set step period of 0.4s, the robot will take about two steps to move from its current location to that at the edge of the map. Therefore, the map encodes some temporal information (ground height where the robot will be in the future) despite the use of only current heightmap information. Additionally, we do not incorporate perception delays or latency for simplicity and consistency with other methods, but we note that the heightmap signal changes relatively much more slowly than the proprioceptive information. Finally, by using a history of states, we can capture transient and higher-order behaviors than would be allowed by strictly using the current state. b) Critic observation: To construct our critic observa- tion, we use similar information to the actor with the addition of the world-frame body velocities and a larger and more accurate heightmap with zero noise covering 1.5m × 1.5m around the robot. Giving the critic correct ground height information allows for a more accurate advantage function estimate, and the larger heightmap size allows the critic to see further into the ‘future’. In total, the observation is oV k : [ok, vbase, HV ]. Both heightmaps are normalized by subtracting the grid mean (cellwise) and clipping to [−1, 1]. Zero-centering re- moves steady-state sim-to-real offsets such as those caused by changed camera mounting and compliance or different motor characteristics and simplifies biases in the MLP during the two-stage training. C. Network Architecture Our network architecture consists of two main compo- nents: the perception encoder, a network that takes the per- ception information and encodes it in a latent representation usable by the main actor network, and the primary actor network that uses a combination of the latent perception information and the standard robot proprioception to de- termine the robot’s actions. Note that in our studies, we consider multiple choices for both the encoder network and the actor network to show the minimal underlying benefits of the actual implementation, instead highlighting the benefit of the layered architecture itself. Our choice of encoder is either a small CNN or MLP mapping H ∈RN×N to an embedding zH ∈RdH. By ablating this to an MLP, we see if the spatial encoding characteristic of a CNN performs better than a simpler model, even on the small scale of the 11x11 heightmap. A similar network is used to encode the perception information sent to the critic network, the only difference being the larger size of the input. The actor network, where we consider both the LSTM and MLP network architectures takes a concatenation of propri- oception and perception information and outputs actions at as position setpoints, which are then tracked by joint-level PD controllers. D. Rewards We construct a number of rewards designed for our specific task to guide training, allowing for feasible perfor- mance on all of the proposed architectures, where we keep these rewards consistent. We use the following notation: we notate feet i ∈{L, R}, the contact indicator as Ci(k) = 1 maxj ∥F(j) i (k)∥2 > F⋆  with F⋆=1 N, the planar foot velocity as vxy i (k), foot pitch as θi,pitch(t), foot height as zi(k), and phase as ϕi(k). a) Phase–contact consistency reward: For τ=0.55, ε=5×10−3, let the stance intent be si(t) = 1 ϕi(k) < τ ∨∥ucmd(k)∥2 < ε  , . Detected contact is ci(t) = Ci(k). The reward is the XNOR agreement: rphase(k) = X i∈{L,R} 1 ci(k) = si(k)  = 2 − X i∈{L,R} ci(k) −si(k) . Standing case. When ∥ucmd(t)∥2 < ε, we have sL = sR = 1, so rphase rewards double support and acts as the standing reward. We therefore do not include a separate standing term. b) Foot–strike cost: Penalize lateral ground–reaction forces (scuffs, edge kicks): rstrike = X i∈{L,R} Fxy i 2, Fxy i = F x i F y i  . c) Feet sliding cost: Suppress planar slip during stance: rslide = X i∈{L,R} Ci(k) vxy i (k) 2 2. d) Feet orientation (flatness) cost: Encourage flat feet in contact with smooth saturation: rorient = 1−exp  −kθ X i∈{L,R} Ci(k) θi,pitch(k)  , kθ=25. e) Feet clearance (swing height) cost: Penalize devia- tion from target swing height only when not in contact: rclear = X i∈{L,R} 1 −Ci(k)   zi(k)−h⋆ i (k) hscale 2 gi(k), where h⋆ i (k) is the nominal swing height (collapsing to foot thickness near zero command), gi(k) = tanh κ∥vxy i (k)∥  gates by step activity, and hscale normalizes units. f) Total reward: We combine the terms with positive weights and subtract the penalties from the standard loco- motion rewards: r = rlocomotion+0.5 rphase−rstrike−0.2 rslide−rorient+ rclear. E. Two-Stage Curriculum Our training curriculum consists of two stages, wherein the robot first learns to traverse complex terrains without perception information, yielding a good baseline along with stabilization capabilities in order to deal with unseen obsta- cles or perturbations, then is given heightfield information, allowing the robot to learn longer-horizon behavior. a) Stage 1 (blind stabilization): We set H ≡0 for the actor (though the critic is still given full information), and train in an environment made up of a quarter respectively of up-stairs, down-stairs, uneven terrain, and flat terrain tasks. b) Stage 2 (perception-critical): Re-enable H for the actor. This allows the robot to make longer-horizon plans based on the local terrain, or put differently, condition the blind policy on the perceived surroundings. F. Perception Filtering During deployment, we perform a few stages of filtering for our perception stack in order to curate our data in a way that is usable by the policy. Note that while other works have used more complex perception filtering pipelines such as U-Nets and Transformers [23] [13], ours is intentionally simple, robust, and requires minimal tuning. Our input is a noisy, dense, depth image from the concatenation of two depth images. We then perform the following steps. a) Downsampling: We aggregate the dense pointcloud to an 11×11 grid over 1.0, m×1.0, m by taking the minimum of each of the valid point heights in each cell. While this method does make the downsampling more sensitive to noise, the minimum value approximation allows for a more correct height estimate in situations such as stair occlusions, where the higher stair occludes the lower one, but the occluded values should be mapped to the lower stair height. b) Outlier Rejection: We then compute the mean µz and standard deviation σz of the grid heights, then clamp outliers to the mean value. Here, we consider outliers to be cells (gx, gy, gz) such that |gz −µz| ≥γσz (5) where γ is a tuned parameter. While the prior step deals with disturbances and outliers at the point level, this helps to deal with outliers at the grid cell level, such as the robot’s own legs and small anomalies in the terrain. c) Zero-mean and Quantize: We then subtract µz from all the heightmap values in order to zero-center them and clip each to [−1, 1] as the observation requires. Finally, we quantize to buckets corresponding to ‘steps’ of 5cm. This is the smallest ground height perturbation in simulation, and the stabilization of the policy seems robust to smaller perturbations. III. SIMULATION EXPERIMENTS A. Training Final Rewards: Blind: 63.86 One-Stage MLP: 61.73 Two-Stage MLP: 64.10 One-Stage LSTM: 67.74 Two-Stage LSTM: 69.00 One-Stage CNN: 59.84 Two-Stage CNN: 59.42 Fig. 4. Training rewards from each of the policies. We see that in training, all the policies perform largely identically; we believe that the small deviations may be a function of the network architecture of that component, such as the LSTM’s ability to store a hidden state or the CNN’s ability to reason more spatially, but may also simply be products of randomness. a) Setup: We train all policies in IsaacSim on a single RTX 4090 with 4096 parallel environments. Each training batch is drawn from a balanced mixture of tasks: stair ascent (25%), stair descent (25%), uneven terrain (25%), and flat ground (25%), all trained using the asymmetric actor critic algorithm [36] to 40000 steps. b) Policies: We compare seven variants: a blind baseline (no exteroception throughout), three one-stage perception-informed models (vision available for the full curriculum), and three two-stage models (blind in the first half of training, perception introduced in the second half). All actors share the same backbone sizes and differ only in network and encoder architectures: the actor is either an MLP or an LSTM with hidden layers {512, 256, 128}; the perception encoder is either a CNN (3×3, stride 1) or an MLP, both with hidden layers {256, 256}. The critic is an MLP with the same hidden sizes and receives privileged inputs: a height scan of 1.5 × 1.5 m at 0.1 m resolution, joint states, base orientation, and CoM velocities. Rewards and observation normalizations are held fixed across policies so that differences reflect architecture and curriculum, not reward shaping. c) Metrics: We report: (i) Success rate-the fraction of episodes that time out (task completed) without a fall or intervention; (ii) Contacts per step—the number of high- force lateral foot–environment impacts per robot step (thresh- old > 100N), normalized by step count; and (iii) Tracking error—the mean ℓ2 difference between commanded and measured base velocity, reported in cm/s. d) Protocol: For each policy we roll out 500 simulated units, each performing 3 episodes of 1000 steps. Stair risers are uniformly randomized and treads set per condition; uneven terrain uses block fields with specified height ranges; and uniform noise of fixed magnitude is added to the heightfield observation. Parameter ranges are summarized in Table II (units in meters). e) Results: On the medium (in-distribution) setting, all policies achieve near-parity; residual differences are within run-to-run variability. However, out-of-distribution (OOD) effects are more revealing: Stairs (OOD) All models remain reasonably strong—stairs are structured and thus easier to “overfit.” However, the two-stage variants exhibit roughly 3× lower contacts/step than their one-stage counterparts and are closer to the blind baseline in this metric. A plausible mechanism is that, under noisy exteroception, two-stage policies fall back to the robust blind stabilizer learned in stage 1, whereas one-stage policies rely more heavily on the heightfield for both planning and stabilization, leading to occasional poor foot placements. Tracking error shows a smaller but consistent improvement in the same direction. Uneven terrain (OOD) Here the differences are pro- nounced: the two-stage policies outperform one-stage by ≈10 percentage points in success on average, with cor- responding reductions in contacts/step. Because the ter- rain is unstructured and harder to memorize, robustness requires both fast stabilization and longer-horizon place- ment—capabilities that are explicitly separated and co- trained in the two-stage pipeline but entangled in the one- stage models. TABLE I. Simulation results across two tasks. Metrics: success rate (↑), contacts per step (↓), tracking error in cm (↓). Stairs Uneven Terrain Policy Rsucc (%, ↑) Contacts/step (%, ↓) Track (cm/s, ↓) Rsucc (%, ↑) Contacts/step (%, ↓) Track (cm/s, ↓) Medium difficulty Blind 98.30 0.85 ± 0.20 13.40 ± 1.43 97.86 2.28 ± 0.34 13.90 ± 1.44 One-Stage MLP 99.00 0.64 ± 0.16 14.10 ± 1.39 98.00 1.66 ± 0.31 15.30 ± 1.45 Two-Stage MLP 97.90 1.04 ± 0.18 13.30 ± 1.42 98.60 0.80 ± 0.15 13.40 ± 1.35 One-Stage LSTM 97.40 0.55 ± 0.20 13.10 ± 1.36 98.80 0.54 ± 0.15 13.30 ± 1.37 Two-Stage LSTM 99.40 0.26 ± 0.11 13.50 ± 1.29 99.40 0.34 ± 0.12 14.40 ± 1.32 One-Stage CNN 98.90 0.99 ± 0.23 13.50 ± 1.49 97.80 2.74 ± 0.38 14.80 ± 1.54 Two-Stage CNN 98.00 0.79 ± 0.26 13.60 ± 1.34 98.50 2.79 ± 0.39 15.70 ± 1.54 Hard difficulty Blind 93.08 0.81 ± 0.20 15.10 ± 1.56 64.63 2.79 ± 0.43 17.70 ± 2.05 One-Stage MLP 92.80 1.59 ± 0.45 16.10 ± 1.60 61.60 3.36 ± 0.54 18.20 ± 1.92 Two-Stage MLP 90.48 0.56 ± 0.14 13.40 ± 1.41 70.96 3.10 ± 0.55 17.70 ± 2.00 One-Stage LSTM 85.33 3.24 ± 0.47 17.20 ± 1.96 58.02 3.10 ± 0.52 18.10 ± 2.01 Two-Stage LSTM 95.01 0.78 ± 0.25 16.10 ± 1.64 72.36 3.06 ± 0.51 17.60 ± 1.92 One-Stage CNN 96.10 2.15 ± 0.39 15.30 ± 1.68 63.93 5.45 ± 0.69 19.10 ± 2.18 Two-Stage CNN 98.40 0.83 ± 0.20 14.47 ± 1.42 71.95 5.68 ± 0.86 19.10 ± 2.12 TABLE II. Environment parameters for stairs and uneven terrain. Stairs Uneven Terrain Difficulty Height Depth Noise Height Noise Medium 0.14–0.18 0.31 0.10 0.05–0.20 0.10 Hard 0.16–0.20 0.23 0.30 0.05–0.40 0.30 TABLE III. Proprioception observation terms oname with noise, scaling, and history length applied. Observation Formula / Description obase ang vel ωb ∈R3, base angular velocity in body frame. Noise U(−0.2, 0.2), scale 0.25. oprojected gravity gb ∈R3, gravity vector projected in body frame. Noise U(−0.05, 0.05). ovelocity commands u = (ux, uy, uω), commanded base velocity, scale (2.0, 2.0, 0.25). ojoint pos q −qdefault, joint positions relative to defaults. Noise U(−0.01, 0.01). ojoint vel ˙q −˙qdefault, joint velocities relative to defaults. Noise U(−1.5, 1.5), scale 0.05. oactions ak−1, last applied action. ophase obs gait phase ϕ ∈[0, 1] with standing detection, period 0.8. B. Hardware Experiments To verify our hypothesis on hardware, we deploy a subset of our policies on the G1 Humanoid robot. The perception stack is run on board using the robot’s Jetson Orin NX 16GB, and the RL controller is run on a Framework Laptop 13 with AMD Ryzen 7 7840u, which can either be off-board or strapped to the robot for a complete self-contained hardware stack. Perception data is provided by two Intel Realsense D435 cameras, one on the back of the hips, and one mounted on the chest, both pointing down. The lower hip camera allows for vision between the legs and behind the robot, while the upper chest camera allows for vision further in front of the robot. Together, they have a near-complete field of view of a 1.4m square around the robot, except for small TABLE IV. Nominal reward terms and weights for humanoid locomotion. Reward Formula rtrack lin vel xy exp 1.0 · exp  − ∥vbase xy −vcommand xy ∥2 0.25  rtrack ang vel z exp 1.0 · exp  −(uz−ωz)2 0.25  rlin vel z l2 −2.0 · v2 z rang vel xy l2 −0.05 · ω2 x + ω2 y  rdof torques l2 −2.0×10−5 · P j | ˙qj| |τj| rdof acc l2 −2.5×10−7 · P j ¨q2 j rdof vel l2 −1.0×10−3 · P j ˙q2 j raction rate l2 −0.01 · P a(at −at−1)2 rundesired contacts −1.0 · P b∈B 1(maxt ∥Ft,b∥> θ) rcontact no vel −0.2 · P b∈A ∥vb∥2 1(contactb) rjoint deviation hip −1.0 · P j∈Jhip |qj −qdef j | rjoint deviation arms −0.5 · P j∈Jarms |qj −qdef j | rjoint deviation torso −1.0 · |qwaist −qdef waist| rheight torso −50.0 · (zroot −0.77)2 rfeet clearance +1.0 · P f zf −htarget(sf) 2 · (1 −contactf) rfeet slide −0.2 · P f ∥vf,xy∥2 1(contactf) rphase contact +0.5 · P f 1(contactf = stancef) rstand still −0.1 · P j |qj −qdef j | 1(∥u∥< ϵ) rfeet flat −1.0 ·  1 −e−25(|θL pitch|cL+|θR pitch|cR) rflat orientation l2 −1.0 · g2 x + g2 y  rdof pos limits −5.0 · P j violation(qj) ralive +0.15 · 1(¬terminated) rtermination penalty −200.0 · 1(terminated) holes where the legs shadow the camera view. Due to the edge warping, we crop the center 1m x 1m area for depth. The cameras send depth images which are merged into a combined point cloud at a rate of 30Hz. The policy runs at 50hz, sending position setpoints to PD controllers at the joints operating at 1kHz. Fig. 5. The four hardware experiment tasks. The first two, stair ascent and stair descent, emphasize navigation, while the second two, hinged and soft ledge, emphasize control. Fig. 6. Blind, MLP, and CNN depicted results. The monolithic policy is not included, as for some tasks it was never successful. TABLE V. Four tasks. Success rate (↑) and tracking error in m (↓). Stair Ascent Stair Descent Hinged Ledge Soft Ledge Policy Rsucc (%, ↑) Track (m, ↓) Rsucc (%, ↑) Track (m, ↓) Rsucc (%, ↑) Track (m, ↓) Rsucc (%, ↑) Track (m, ↓) Blind 3/5 0.288 2/5 0.137 5/5 0.037 5/5 0.063 One-Stage MLP 1/5 0.000 1/5 0.000 0/5 – 0/5 – Two-Stage CNN 4/5 0.175 5/5 0.228 5/5 0.059 5/5 0.167 Two-Stage MLP 4/5 0.045 5/5 0.163 5/5 0.199 4/5 0.113 a) Hardware tasks: We evaluate on four hardware tasks designed to probe different components of the stack: stair ascent, stair descent, hinged ledge, and soft ledge. The stair tasks comprise a short flight of three steps (riser ≈18 cm) with small landings (∼20 cm) and a horizontal skew of ∼25◦. This geometry forces careful toe/heel placement and weight transfer—missteps induce lateral perturbations—so these tasks primarily stress navigation (longer-horizon foot- step/velocity planning). The ledge tasks use a 36 cm elevation change with transient compliance: the hinged variant is a plank balanced on a pivot that tips under load, and the soft variant lands onto a compliant gym mat. Perception sees a nominal ledge, but the dominant difficulty is the unmodeled, state-dependent disturbance at contact; these tasks primarily stress control (fast stabilization under transients). b) Policies and trials: For each task we run five trials for each of four policies: (i) a blind baseline, (ii) a one-stage MLP, (iii) a two-stage MLP, and (iv) a two-stage CNN+MLP. c) Metrics: We report two task-level metrics. Success rate counts trials that complete the task without a fall or human intervention. Precision measures repeatability: from a standing start we command 0.3 m/s forward, stop after task completion, and record the net lateral drift. We report the mean absolute deviation across the five trials to suppress fixed biases and emphasize stability and consistency. d) Observations: First, the one-stage MLP underper- forms across terrains despite identical training conditions. Empirically, footstep selection often appears reasonable, but stabilization degrades quickly, consistent with over-reliance on noisy heightfields for low-level control. Second, the blind policy transfers reasonably well and is particularly strong on the ledge (control-dominant) tasks, but fails more frequently on stairs where precise edge-aware placement is required (missed steps when descending; edge strikes when ascend- ing). Finally, the two-stage policies (MLP and CNN+MLP) perform similarly and robustly on both navigation- and control-dominant tasks, supporting our central claim: once the layered structure is in place, the specific encoder and backbone choice is of lesser importance. For absolute peak performance, one could further tune architectures or add specialized modules, but our results indicate such complexity is unnecessary to achieve robust behavior on these tasks. REFERENCES [1] H. Tsien, T. Adamson, and E. Knuth, “Automatic navigation of a long range rocket vehicle,” Journal of the American Rocket Society, vol. 22, no. 4, pp. 192–199, 1952. [2] C. S. Draper and W. Wrigley, “Guidance-basic principles,” GUID- ANCE AND NAVIGATION, 1965. [3] M. R. Tucker, J. Olivier, A. Pagel, H. Bleuler, M. Bouri, O. Lambercy, J. d. R. Millan, R. Riener, H. Vallery, and R. Gassert, “Control strate- gies for active lower extremity prosthetics and orthotics: a review,” Journal of neuroengineering and rehabilitation, vol. 12, no. 1, p. 1, 2015. [4] Y. Nakahira, Q. Liu, T. J. Sejnowski, and J. C. Doyle, “Diversity- enabled sweet spots in layered architectures and speed–accuracy trade- offs in sensorimotor control,” Proceedings of the National Academy of Sciences, vol. 118, no. 22, p. e1916367118, 2021. [5] N. Matni, A. D. Ames, and J. C. Doyle, “Towards a theory of control architecture: A quantitative framework for layered multi-rate control,” arXiv preprint arXiv:2401.15185, 2024. [6] “Templates and anchors: neuromechanical hypotheses of legged loco- motion on land,” Journal of experimental biology, vol. 202, no. 23, pp. 3325–3332, 1999. [7] U. Rosolia and A. D. Ames, “Multi-rate control design leveraging control barrier functions and model predictive control policies,” IEEE Control Systems Letters, vol. 5, no. 3, pp. 1007–1012, 2020. [8] U. Rosolia, A. Singletary, and A. D. Ames, “Unified multirate control: From low-level actuation to high-level planning,” IEEE Transactions on Automatic Control, vol. 67, no. 12, pp. 6627–6640, 2022. [9] N. Csomay-Shanklin, “Layered control architectures: Constructive theory and application to legged robots,” Ph.D. dissertation, California Institute of Technology, 2025. [10] Y. Nakahira, Q. Liu, T. J. Sejnowski, and J. C. Doyle, “Diversity- enabled sweet spots in layered architectures and speed–accuracy trade- offs in sensorimotor control,” Proceedings of the National Academy of Sciences, vol. 118, no. 22, p. e1916367118, 2021. [11] H. Wang, Z. Wang, J. Ren, Q. Ben, T. Huang, W. Zhang, and J. Pang, “Beamdojo: Learning agile humanoid locomotion on sparse footholds,” arXiv preprint arXiv:2502.10363, 2025. [12] C. Zhang, W. Xiao, T. He, and G. Shi, “Wococo: Learning whole-body humanoid control with sequential contacts,” in Conference on Robot Learning. PMLR, 2025, pp. 455–472. [13] H. Duan, B. Pandit, M. S. Gadde, B. Van Marum, J. Dao, C. Kim, and A. Fern, “Learning vision-based bipedal locomotion for challenging terrain,” in 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024, pp. 56–62. [14] M. S. Gadde, P. Dugar, A. Malik, and A. Fern, “No more blind spots: Learning vision-based omnidirectional bipedal locomotion for challenging terrain,” arXiv preprint arXiv:2508.11929, 2025. [15] J. Siekmann, K. Green, J. Warila, A. Fern, and J. Hurst, “Blind bipedal stair traversal via sim-to-real reinforcement learning,” in Robotics: Science and Systems (RSS), 2021. [16] S. Chamorro, V. Klemm, M. d. L. I. Valls, C. Pal, and R. Siegwart, “Reinforcement learning for blind stair climbing with legged and wheeled-legged robots,” in 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024, pp. 8081–8087. [17] R. Li, H. Wang, Q. Li, Z. Han, Y. Chu, L. Ye, W. Xie, and W. Liao, “Ctbc: Contact-triggered blind climbing for wheeled bipedal robots with instruction learning and reinforcement learning,” arXiv preprint arXiv:2509.02986, 2025. [18] R. P. Singh, M. Morisawa, M. Benallegue, Z. Xie, and F. Kanehiro, “Robust humanoid walking on compliant and uneven terrain with deep reinforcement learning,” in 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids). IEEE, 2024, pp. 497–504. [19] Y. Zhang, Z. Yu, X. Chen, Y. Du, Z. Zhou, and J. Gao, “Bipedal walk- ing outdoors with a point-footed robot via reinforcement learning,” in 2024 International Conference on Intelligent Robotics and Automatic Control (IRAC). IEEE, 2024, pp. 193–197. [20] C. Ji, D. Liu, W. Gao, and S. Zhang, “Robust and efficient walking of a bipedal humanoid robot via reinforcement learning,” in 2025 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, 2025, pp. 381–388. [21] M. Su, Y. Jia, and Y. Huang, “Effects of prior knowledge for stair climbing of bipedal robots based on reinforcement learning,” in 2024 International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2024, pp. 216–222. [22] J. Long, J. Ren, M. Shi, Z. Wang, T. Huang, P. Luo, and J. Pang, “Learning humanoid locomotion with perceptive internal model,” arXiv preprint arXiv:2411.14386, 2024, submitted to ICRA2025. [Online]. Available: https://arxiv.org/abs/2411.14386 [23] J. He, C. Zhang, F. Jenelten, R. Grandia, M. B¨acher, and M. Hutter, “Attention-based map encoding for learning generalized legged loco- motion,” Science Robotics, vol. 10, no. 105, p. eadv3604, 2025. [24] X. Gu, Y.-J. Wang, X. Zhu, C. Shi, Y. Guo, Y. Liu, and J. Chen, “Ad- vancing humanoid locomotion: Mastering challenging terrains with denoising world model learning,” arXiv preprint arXiv:2408.14472, 2024. [25] B. Acosta and M. Posa, “Perceptive mixed-integer footstep control for underactuated bipedal walking on rough terrain,” arXiv preprint arXiv:2501.19391, 2025. [26] H. Su, H. Luo, S. Yang, K. Jiang, W. Zhang, and H. Chen, “Lipm- guided reinforcement learning for stable and perceptive locomotion in bipedal robots,” arXiv preprint arXiv:2509.09106, 2025. [27] Q. Liao, T. E. Truong, X. Huang, G. Tevet, K. Sreenath, and C. K. Liu, “Beyondmimic: From motion tracking to versatile humanoid control via guided diffusion,” arXiv e-prints, pp. arXiv–2508, 2025. [28] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter, “Learning quadrupedal locomotion over challenging terrain,” Science robotics, vol. 5, no. 47, p. eabc5986, 2020. [29] A. Kumar, Z. Fu, D. Pathak, and J. Malik, “Rma: Rapid motor adaptation for legged robots,” arXiv preprint arXiv:2107.04034, 2021. [30] Z. Zhuang, S. Yao, and H. Zhao, “Humanoid parkour learning,” arXiv preprint arXiv:2406.10759, 2024. [31] F. Wu, X. Nal, J. Jang, W. Zhu, Z. Gu, A. Wu, and Y. Zhao, “Learn to teach: Sample-efficient privileged learning for humanoid locomo- tion over real-world uneven terrain,” IEEE Robotics and Automation Letters, 2025. [32] Y. Fan, T. Gui, K. Ji, S. Ding, C. Zhang, J. Gu, J. Yu, J. Wang, and Y. Shi, “One policy but many worlds: A scalable unified policy for versatile humanoid locomotion,” arXiv preprint arXiv:2505.18780, 2025. [33] T. He, W. Xiao, T. Lin, Z. Luo, Z. Xu, Z. Jiang, J. Kautz, C. Liu, G. Shi, X. Wang et al., “Hover: Versatile neural whole-body controller for humanoid robots,” in 2025 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2025, pp. 9989–9996. [34] D. Picard, “Torch. manual seed (3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision,” arXiv preprint arXiv:2109.08203, 2021. [35] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, http://www.deeplearningbook.org. [36] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel, “Asymmetric actor critic for image-based robot learning,” in 14th Robotics: Science and Systems, RSS 2018. MIT Press Journals, 2018.
Architecture Is All You Need: Diversity-Enabled Sweet Spots for Robust Humanoid Locomotion *Blake Werner, *Lizhi Yang, Aaron D. Ames Abstract- Robust humanoid locomotion in unstructured environments requires architectures that balance fast low-level stabilization with slower perceptual decision-making. We show that a simple layered control architecture (LCA), a proprioceptive stabilizer running at high rate, coupled with a compact low-rate perceptual policy, enables substantially more robust performance than monolithic end-to-end designs, even when using minimal perception encoders. Through a two-stage training curriculum (blind stabilizer pretraining followed by perceptual fine-tuning), we demonstrate that layered policies consistently outperform one-stage alternatives in both simulation and hardware. On a Unitree G1 humanoid, our approach succeeds across stair and ledge tasks where one-stage perceptual policies fail. These results highlight that architectural separation of timescales, rather than network scale or complexity, is the key enabler for robust perception-conditioned locomotion. I. INTRODUCTION Robust humanoid locomotion over mixed and unstructured terrain is a task as old as the platform itself, while still an unsolved problem. Sensing of terrain is partial and noisy, contact events are discontinuous, and controllers must react faster than perception can resolve in detail. Decades of practice in guidance-navigation-control (GNC) suggest a simple lesson: robustness emerges when fast, low-level stabilization is paired with slower, longer-horizon navigation. The canonical example is aerospace GNC [1], [2]: a slow, semantic guidance layer chooses where to go; an intermediate-rate trajectory-generation layer turns goals into feasible references; and a fast feedback control layer tracks those references and rejects disturbances. The same pattern, "slow and flexible" above "fast and rigid," with well-defined interfaces, recurs across robotics and biological sensorimotor systems [3], [4]. This work takes this observation to its logical extreme and argues that, for high-dimensional perception-conditioned control problems, layered control architecture (LCA) [5]-[9] is the primary driver of robustness. Sophisticated models, learned world representations, or intricate reward shaping help to get the maximum absolute performance, but are not necessary for task success when the stack itself is well-posed. In a well-posed LCA, information flows through narrow interfaces: references descend (planner →controller) while tracking error or status ascends (controller →planner). Crucially, layers operate at different time scales-a design that * denotes equal contribution. All authors affiliated with Caltech MCE. This research is supported in part by the Technology Innovation Institute (TII), BP p.l.c., and by Dow via project #227027AW. Fig. 1. A humanoid robot trained to traverse complex terrain through use of a combination perception information and fast proprioception information. Using this input effectively requires the use of structured architecture in order to produce performant and robust results. both reduces computational burden and improves robustness by letting each layer specialize where it is most effective [5]. The separation of layers in a LCA, together with heterogeneous objectives and information, enables "diversity-enabled sweet spots" (DeSS) [10]: the combined stack can outperform any single monolithic component tuned in isolation. For perception-conditioned humanoid walking, the LCA framing implies a minimal yet sufficient stack: (i) a compact, localperception navigation encoder that updates at moderate rate to construct an latent space that reflects long-horizon terrain geometry, and (ii) a fast stabilizer that uses proprioception to condition upon this geometry and contend with contact variability. Our method instantiates exactly this two-layer core, with the guidance layer assumed given, aligning with the quantitative architectural principles in [5]. A. Contributions This paper makes two central claims. First, robust locomotion necessarily requires a layered, multi-rate design: a reflexive controller that stabilizes with proprioception at high rate, and a navigation layer that updates more slowly from exteroceptive cues to set short-horizon trajectories. Second, there exists a minimal instantiation of the LCA that can perform complex robust locomotion tasks without the use of heavy machinery: no complex environment estimators, no mixed-integer footstep search, no world models, and 19 Oct 2025 Fig. 2. Training and Deployment Overview: both actor and critic are two-stage architectures each with their own perception encoder. The actor receives noisy heightmap information, while the critic receives perfect information, and each receive proprioception history. During deployment, a depth image is filtered and passed through the trained encoder, and the actor combines this with the proprioception history to determine action. no complex network architectures. The performance "sweet spot" arises from different parts of the control architecture taking information at different rates and information budgets rather than from any single sophisticated component. Concretely, this paper realizes the smallest useful LCA for humanoids: (i) a fast low-level stabilizer (joint-space tracking with largely standard locomotion RL rewards) that runs purely on proprioception, and (ii) a slow navigation policy that consumes a compact local heightmap and allows the low-level to condition itself upon longer-horizon information. Training follows a two-stage curriculum: a blind phase (perception zeroed) that emphasizes stabilization, followed by perception phase that allows for more intelligent longerhorizon planning. This architecture is intentionally plain by design, yet we show it closes the gap to recent methods that rely on richer models or elaborate perception stacks. Our contributions are as follows: • Architecture over complexity We argue and empirically validate that robust humanoid locomotion requires a layered, multi-rate stack; the particular choice of sophisticated models is secondary. • Minimal LCA for robust humanoid locomotion We instantiate a two-layer, two-stage pipeline with standard rewards and a compact local perception interface that performs well on complex locomotion tasks in unstructured terrain. • Architecture-isolating ablations We vary network architectures and training curriculums. Results show that while model details produce only small performance differences, removing the layered structure causes large drops in success and tracking metrics. B. Related Work Two-stage training pipelines. Two-stage curricula appear in several locomotion settings, but for different architectural reasons. On sparse or precarious supports, works emphasize contact selection and balance under limited footholds, effectively prioritizing longer-horizon foot placement behavior before refining stabilization [11], [12]. In contrast, other works on challenging terrain follow a "blind-then-vision" strategy: first learn a robust proprioceptive stabilizer, then condition that stabilizer on exteroceptive cues via a slower vision module [13], [14]. Both fall naturally into the LCA view: the first stage trains one layer in isolation (navigation or stabilization), while the second introduces the complementary layer and its interface. Blind stair-traversal and in general rough terrain works [15]-[20], can be seen as extreme instantiations where the navigation/perception layer is absent: such works are often very strong at stabilization but limited in foresight, relying on overfitting to a terrain type from training, and implicitly switching to this 'mode' when encountering this obstacle during deployment. Our design follows the second category, choosing to emphasize proprioceptive stabilization first before adding a conditioning vision module; however, doing so with only a minimal architecture consisting of just those two components. Perception encoders. Across humanoid pipelines, perception does not feed torques directly; instead, visual depth or heightmaps are first encoded, then fused with proprioception downstream [21]. This preserves rate separation and prevents slow, noisy exteroception from contaminating fast feedback. Examples include perceptive internal models that fuse vision and state estimates for improved foothold selection [22], [23] and world-model approaches that learn latent representations to inform mid-rate decision making [24]. Our design follows the same pattern but keeps the encoder intentionally compact, simple, and local to maintain a narrow interface between the navigation layer and the stabilizer. Model-based stepping and hybrid stacks. Classical "perceptive" footstep planners select contacts via mixed-integer optimization using sensed terrain [25]. Hybrid pipelines integrate such planners with model-free RL, letting the planner handle discrete contact choices while RL handles low-level tracking and robustness [26]. Whole-body methods with sequential contacts and adaptive motion optimization for dexterous humanoids also embody this decomposition: a midrate generator proposes feasible references, while a highrate controller enforces stability and feasibility [27]. In all cases, the pipeline is explicitly layered: planning (navigation) up top, fast feedback below, with narrow reference/feedback channels-precisely the LCA pattern. Student-teacher and distillation. Teacher-student pipelines leverage privileged information and rich supervision to train a capable teacher, then distill a deployable student with restricted observations [28]-[33]. From an LCA standpoint, such methods can partially sidestep architectural constraints during training by allowing the teacher to approximate harder, more global solutions before compressing capability into a smaller runtime policy. While highly effective, analyzing their architectural equivalence (e.g., whether the distilled student implicitly embeds a multi-rate decomposition) remains open; we regard this as complementary and leave a deeper treatment for future work. II. METHODS A. Optimization Analysis To analyze the complex problem of perception-informed robot control, consider the following optimization: θ∗= max θ E X k γkr(sk, ak|θ) , (1) where θ are the network parameters This is the classical one-stage formulation of the reinforcement learning pipeline. Note that while this attempts to solve the global optimal control problem, it suffers from significant sensitivity to initial conditions [34] [35]. From an optimization perspective, solving the problems in sequence performs a different optimization, with less of the specified sensitivity. Let the parameters of the networks be divided as θ = [θx, θy]T , with 'slow' network parameters θy and 'fast' parameters θx. Note that in practice, these rates are more frequencies of the signals themselves, rather than the frequency of the controller (as they are all one network running at one speed). By solving the fast-rate optimization first, we solve θ† x = max θx E X k γkr(sk, ak|θx, θy,0) , (2) wherein we maximize the reward conditioned on the fast rate controller parameters θx subject to an initial setting of the slow parameters θy,0. In practice, since we will be removing perception from the optimization in stage one of the training, we remove the dependence on θy,0, allowing for a simpler optimization more likely to find a satisfactory local maxima. In the second stage of the optimization then, we solve θ∗ x, θ∗ y = max θx,θy E X k γkr(sk, ak|θx, θy) (3) s.t. θx,0 = θ† x. (4) Since we are optimizing over both variables, we perform the same optimization as the one-stage, so in sufficiently regular cases such as strictly concave reward landscapes we are guaranteed the same solution, i.e. θ∗ one-stage = θ∗ two-stage. However, in the highly nonconcave reward landscape that we work with, we note that by choosing a good initial condition from the first optimization, we are less susceptible to bad Fig. 3. Layered verses monolithic architectures: while the network architecture may be identical, training in two phases allows them to assume the layered control structure. local maxima with large regions of convergence that would otherwise attract the optimization algorithm. B. Observations and Normalization We propose a minimal robust humanoid locomotion pipeline with the goal of illustrating our LCA hypothesis. Let q be joint positions, ̇q joint velocities, gb the projected gravity direction in body frame, ωb the base angular velocity in body frame, ak-1 the last applied action, and u the commanded planar velocity. Let ok be a K-length history of robot state information along with current velocity command and last action: ok = [qk, qk-1, ..., qk-K, ̇qk, ... ̇qk-K, ωb,k, ..., ωb,k-K, gb,k, ...gb,k-K, u, ak-1]. a) Actor observation: Our actor observation is a concatenation of a the K-length history of robot state information and the current velocity command and heightmap: oπ k = [ok, Hπ], where Hπ ∈R11×11 is a noisy, sparse heightmap covering 1.0, m × 1.0, m around the robot (robotcentric frame). Note that with a nominal max velocity range of ±0.6 m/s, and the set step period of 0.4s, the robot will take about two steps to move from its current location to that at the edge of the map. Therefore, the map encodes some temporal information (ground height where the robot will be in the future) despite the use of only current heightmap information. Additionally, we do not incorporate perception delays or latency for simplicity and consistency with other methods, but we note that the heightmap signal changes relatively much more slowly than the proprioceptive information. Finally, by using a history of states, we can capture transient and higher-order behaviors than would be allowed by strictly using the current state. b) Critic observation: To construct our critic observation, we use similar information to the actor with the addition of the world-frame body velocities and a larger and more accurate heightmap with zero noise covering 1.5m × 1.5m around the robot. Giving the critic correct ground height information allows for a more accurate advantage function estimate, and the larger heightmap size allows the critic to see further into the 'future'. In total, the observation is oV k : [ok, vbase, HV ]. Both heightmaps are normalized by subtracting the grid mean (cellwise) and clipping to [-1, 1]. Zero-centering removes steady-state sim-to-real offsets such as those caused by changed camera mounting and compliance or different motor characteristics and simplifies biases in the MLP during the two-stage training. C. Network Architecture Our network architecture consists of two main components: the perception encoder, a network that takes the perception information and encodes it in a latent representation usable by the main actor network, and the primary actor network that uses a combination of the latent perception information and the standard robot proprioception to determine the robot's actions. Note that in our studies, we consider multiple choices for both the encoder network and the actor network to show the minimal underlying benefits of the actual implementation, instead highlighting the benefit of the layered architecture itself. Our choice of encoder is either a small CNN or MLP mapping H ∈RN×N to an embedding zH ∈RdH. By ablating this to an MLP, we see if the spatial encoding characteristic of a CNN performs better than a simpler model, even on the small scale of the 11x11 heightmap. A similar network is used to encode the perception information sent to the critic network, the only difference being the larger size of the input. The actor network, where we consider both the LSTM and MLP network architectures takes a concatenation of proprioception and perception information and outputs actions at as position setpoints, which are then tracked by joint-level PD controllers. D. Rewards We construct a number of rewards designed for our specific task to guide training, allowing for feasible performance on all of the proposed architectures, where we keep these rewards consistent. We use the following notation: we notate feet i ∈{L, R}, the contact indicator as Ci(k) = 1 maxj ∥F(j) i (k)∥2 > F⋆ with F⋆=1 N, the planar foot velocity as vxy i (k), foot pitch as θi,pitch(t), foot height as zi(k), and phase as φi(k). a) Phase-contact consistency reward: For τ=0.55, ε=5×10-3, let the stance intent be si(t) = 1 φi(k) 100N), normalized by step count; and (iii) Tracking error-the mean l2 difference between commanded and measured base velocity, reported in cm/s. d) Protocol: For each policy we roll out 500 simulated units, each performing 3 episodes of 1000 steps. Stair risers are uniformly randomized and treads set per condition; uneven terrain uses block fields with specified height ranges; and uniform noise of fixed magnitude is added to the heightfield observation. Parameter ranges are summarized in Table II (units in meters). e) Results: On the medium (in-distribution) setting, all policies achieve near-parity; residual differences are within run-to-run variability. However, out-of-distribution (OOD) effects are more revealing: Stairs (OOD) All models remain reasonably strong-stairs are structured and thus easier to "overfit." However, the two-stage variants exhibit roughly 3× lower contacts/step than their one-stage counterparts and are closer to the blind baseline in this metric. A plausible mechanism is that, under noisy exteroception, two-stage policies fall back to the robust blind stabilizer learned in stage 1, whereas one-stage policies rely more heavily on the heightfield for both planning and stabilization, leading to occasional poor foot placements. Tracking error shows a smaller but consistent improvement in the same direction. Uneven terrain (OOD) Here the differences are pronounced: the two-stage policies outperform one-stage by ≈10 percentage points in success on average, with corresponding reductions in contacts/step. Because the terrain is unstructured and harder to memorize, robustness requires both fast stabilization and longer-horizon placement-capabilities that are explicitly separated and cotrained in the two-stage pipeline but entangled in the onestage models. TABLE I. Simulation results across two tasks. Metrics: success rate (↑), contacts per step (↓), tracking error in cm (↓). Stairs Uneven Terrain Policy Rsucc (%, ↑) Contacts/step (%, ↓) Track (cm/s, ↓) Rsucc (%, ↑) Contacts/step (%, ↓) Track (cm/s, ↓) Medium difficulty Blind 98.30 0.85 ± 0.20 13.40 ± 1.43 97.86 2.28 ± 0.34 13.90 ± 1.44 One-Stage MLP 99.00 0.64 ± 0.16 14.10 ± 1.39 98.00 1.66 ± 0.31 15.30 ± 1.45 Two-Stage MLP 97.90 1.04 ± 0.18 13.30 ± 1.42 98.60 0.80 ± 0.15 13.40 ± 1.35 One-Stage LSTM 97.40 0.55 ± 0.20 13.10 ± 1.36 98.80 0.54 ± 0.15 13.30 ± 1.37 Two-Stage LSTM 99.40 0.26 ± 0.11 13.50 ± 1.29 99.40 0.34 ± 0.12 14.40 ± 1.32 One-Stage CNN 98.90 0.99 ± 0.23 13.50 ± 1.49 97.80 2.74 ± 0.38 14.80 ± 1.54 Two-Stage CNN 98.00 0.79 ± 0.26 13.60 ± 1.34 98.50 2.79 ± 0.39 15.70 ± 1.54 Hard difficulty Blind 93.08 0.81 ± 0.20 15.10 ± 1.56 64.63 2.79 ± 0.43 17.70 ± 2.05 One-Stage MLP 92.80 1.59 ± 0.45 16.10 ± 1.60 61.60 3.36 ± 0.54 18.20 ± 1.92 Two-Stage MLP 90.48 0.56 ± 0.14 13.40 ± 1.41 70.96 3.10 ± 0.55 17.70 ± 2.00 One-Stage LSTM 85.33 3.24 ± 0.47 17.20 ± 1.96 58.02 3.10 ± 0.52 18.10 ± 2.01 Two-Stage LSTM 95.01 0.78 ± 0.25 16.10 ± 1.64 72.36 3.06 ± 0.51 17.60 ± 1.92 One-Stage CNN 96.10 2.15 ± 0.39 15.30 ± 1.68 63.93 5.45 ± 0.69 19.10 ± 2.18 Two-Stage CNN 98.40 0.83 ± 0.20 14.47 ± 1.42 71.95 5.68 ± 0.86 19.10 ± 2.12 TABLE II. Environment parameters for stairs and uneven terrain. Stairs Uneven Terrain Difficulty Height Depth Noise Height Noise Medium 0.14-0.18 0.31 0.10 0.05-0.20 0.10 Hard 0.16-0.20 0.23 0.30 0.05-0.40 0.30 TABLE III. Proprioception observation terms oname with noise, scaling, and history length applied. Observation Formula / Description obase ang vel ωb ∈R3, base angular velocity in body frame. Noise U(-0.2, 0.2), scale 0.25. oprojected gravity gb ∈R3, gravity vector projected in body frame. Noise U(-0.05, 0.05). ovelocity commands u = (ux, uy, uω), commanded base velocity, scale (2.0, 2.0, 0.25). ojoint pos q -qdefault, joint positions relative to defaults. Noise U(-0.01, 0.01). ojoint vel ̇q - ̇qdefault, joint velocities relative to defaults. Noise U(-1.5, 1.5), scale 0.05. oactions ak-1, last applied action. ophase obs gait phase φ ∈[0, 1] with standing detection, period 0.8. B. Hardware Experiments To verify our hypothesis on hardware, we deploy a subset of our policies on the G1 Humanoid robot. The perception stack is run on board using the robot's Jetson Orin NX 16GB, and the RL controller is run on a Framework Laptop 13 with AMD Ryzen 7 7840u, which can either be off-board or strapped to the robot for a complete self-contained hardware stack. Perception data is provided by two Intel Realsense D435 cameras, one on the back of the hips, and one mounted on the chest, both pointing down. The lower hip camera allows for vision between the legs and behind the robot, while the upper chest camera allows for vision further in front of the robot. Together, they have a near-complete field of view of a 1.4m square around the robot, except for small TABLE IV. Nominal reward terms and weights for humanoid locomotion. Reward Formula rtrack lin vel xy exp 1.0 · exp - ∥vbase xy -vcommand xy ∥2 0.25 rtrack ang vel z exp 1.0 · exp -(uz-ωz)2 0.25 rlin vel z l2 -2.0 · v2 z rang vel xy l2 -0.05 · ω2 x + ω2 y rdof torques l2 -2.0×10-5 · P j | ̇qj| |τj| rdof acc l2 -2.5×10-7 · P j ̈q2 j rdof vel l2 -1.0×10-3 · P j ̇q2 j raction rate l2 -0.01 · P a(at -at-1)2 rundesired contacts -1.0 · P b∈B 1(maxt ∥Ft,b∥> θ) rcontact no vel -0.2 · P b∈A ∥vb∥2 1(contactb) rjoint deviation hip -1.0 · P j∈Jhip |qj -qdef j | rjoint deviation arms -0.5 · P j∈Jarms |qj -qdef j | rjoint deviation torso -1.0 · |qwaist -qdef waist| rheight torso -50.0 · (zroot -0.77)2 rfeet clearance +1.0 · P f zf -htarget(sf) 2 · (1 -contactf) rfeet slide -0.2 · P f ∥vf,xy∥2 1(contactf) rphase contact +0.5 · P f 1(contactf = stancef) rstand still -0.1 · P j |qj -qdef j | 1(∥u∥< ε) rfeet flat -1.0 · 1 -e-25(|θL pitch|cL+|θR pitch|cR) rflat orientation l2 -1.0 · g2 x + g2 y rdof pos limits -5.0 · P j violation(qj) ralive +0.15 · 1(¬terminated) rtermination penalty -200.0 · 1(terminated) holes where the legs shadow the camera view. Due to the edge warping, we crop the center 1m x 1m area for depth. The cameras send depth images which are merged into a combined point cloud at a rate of 30Hz. The policy runs at 50hz, sending position setpoints to PD controllers at the joints operating at 1kHz. Fig. 5. The four hardware experiment tasks. The first two, stair ascent and stair descent, emphasize navigation, while the second two, hinged and soft ledge, emphasize control. Fig. 6. Blind, MLP, and CNN depicted results. The monolithic policy is not included, as for some tasks it was never successful. TABLE V. Four tasks. Success rate (↑) and tracking error in m (↓). Stair Ascent Stair Descent Hinged Ledge Soft Ledge Policy Rsucc (%, ↑) Track (m, ↓) Rsucc (%, ↑) Track (m, ↓) Rsucc (%, ↑) Track (m, ↓) Rsucc (%, ↑) Track (m, ↓) Blind 3/5 0.288 2/5 0.137 5/5 0.037 5/5 0.063 One-Stage MLP 1/5 0.000 1/5 0.000 0/5 - 0/5 - Two-Stage CNN 4/5 0.175 5/5 0.228 5/5 0.059 5/5 0.167 Two-Stage MLP 4/5 0.045 5/5 0.163 5/5 0.199 4/5 0.113 a) Hardware tasks: We evaluate on four hardware tasks designed to probe different components of the stack: stair ascent, stair descent, hinged ledge, and soft ledge. The stair tasks comprise a short flight of three steps (riser ≈18 cm) with small landings (∼20 cm) and a horizontal skew of ∼25◦. This geometry forces careful toe/heel placement and weight transfer-missteps induce lateral perturbations-so these tasks primarily stress navigation (longer-horizon footstep/velocity planning). The ledge tasks use a 36 cm elevation change with transient compliance: the hinged variant is a plank balanced on a pivot that tips under load, and the soft variant lands onto a compliant gym mat. Perception sees a nominal ledge, but the dominant difficulty is the unmodeled, state-dependent disturbance at contact; these tasks primarily stress control (fast stabilization under transients). b) Policies and trials: For each task we run five trials for each of four policies: (i) a blind baseline, (ii) a one-stage MLP, (iii) a two-stage MLP, and (iv) a two-stage CNN+MLP. c) Metrics: We report two task-level metrics. Success rate counts trials that complete the task without a fall or human intervention. Precision measures repeatability: from a standing start we command 0.3 m/s forward, stop after task completion, and record the net lateral drift. We report the mean absolute deviation across the five trials to suppress fixed biases and emphasize stability and consistency. d) Observations: First, the one-stage MLP underperforms across terrains despite identical training conditions. Empirically, footstep selection often appears reasonable, but stabilization degrades quickly, consistent with over-reliance on noisy heightfields for low-level control. Second, the blind policy transfers reasonably well and is particularly strong on the ledge (control-dominant) tasks, but fails more frequently on stairs where precise edge-aware placement is required (missed steps when descending; edge strikes when ascending). Finally, the two-stage policies (MLP and CNN+MLP) perform similarly and robustly on both navigation- and control-dominant tasks, supporting our central claim: once the layered structure is in place, the specific encoder and backbone choice is of lesser importance. For absolute peak performance, one could further tune architectures or add specialized modules, but our results indicate such complexity is unnecessary to achieve robust behavior on these tasks. REFERENCES [1] H. Tsien, T. Adamson, and E. Knuth, "Automatic navigation of a long range rocket vehicle," Journal of the American Rocket Society, vol. 22, no. 4, pp. 192-199, 1952. [2] C. S. Draper and W. Wrigley, "Guidance-basic principles," GUIDANCE AND NAVIGATION, 1965. [3] M. R. Tucker, J. Olivier, A. Pagel, H. Bleuler, M. Bouri, O. Lambercy, J. d. R. Millan, R. Riener, H. Vallery, and R. Gassert, "Control strategies for active lower extremity prosthetics and orthotics: a review," Journal of neuroengineering and rehabilitation, vol. 12, no. 1, p. 1, 2015. [4] Y. Nakahira, Q. Liu, T. J. Sejnowski, and J. C. Doyle, "Diversityenabled sweet spots in layered architectures and speed-accuracy tradeoffs in sensorimotor control," Proceedings of the National Academy of Sciences, vol. 118, no. 22, p. e1916367118, 2021. [5] N. Matni, A. D. Ames, and J. C. Doyle, "Towards a theory of control architecture: A quantitative framework for layered multi-rate control," arXiv preprint , 2024. [6] "Templates and anchors: neuromechanical hypotheses of legged locomotion on land," Journal of experimental biology, vol. 202, no. 23, pp. 3325-3332, 1999. [7] U. Rosolia and A. D. Ames, "Multi-rate control design leveraging control barrier functions and model predictive control policies," IEEE Control Systems Letters, vol. 5, no. 3, pp. 1007-1012, 2020. [8] U. Rosolia, A. Singletary, and A. D. Ames, "Unified multirate control: From low-level actuation to high-level planning," IEEE Transactions on Automatic Control, vol. 67, no. 12, pp. 6627-6640, 2022. [9] N. Csomay-Shanklin, "Layered control architectures: Constructive theory and application to legged robots," Ph.D. dissertation, California 2025. [10] Y. Nakahira, Q. Liu, T. J. Sejnowski, and J. C. Doyle, "Diversityenabled sweet spots in layered architectures and speed-accuracy tradeoffs in sensorimotor control," Proceedings of the National Academy of Sciences, vol. 118, no. 22, p. e1916367118, 2021. [11] H. Wang, Z. Wang, J. Ren, Q. Ben, T. Huang, W. Zhang, and J. Pang, "Beamdojo: Learning agile humanoid locomotion on sparse footholds," arXiv preprint , 2025. [12] C. Zhang, W. Xiao, T. He, and G. Shi, "Wococo: Learning whole-body humanoid control with sequential contacts," in Conference on Robot Learning. PMLR, 2025, pp. 455-472. [13] H. Duan, B. Pandit, M. S. Gadde, B. Van Marum, J. Dao, C. Kim, and A. Fern, "Learning vision-based bipedal locomotion for challenging terrain," in 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024, pp. 56-62. [14] M. S. Gadde, P. Dugar, A. Malik, and A. Fern, "No more blind spots: Learning vision-based omnidirectional bipedal locomotion for challenging terrain," arXiv preprint , 2025. [15] J. Siekmann, K. Green, J. Warila, A. Fern, and J. Hurst, "Blind bipedal stair traversal via sim-to-real reinforcement learning," in Robotics: Science and Systems (RSS), 2021. [16] S. Chamorro, V. Klemm, M. d. L. I. Valls, C. Pal, and R. Siegwart, "Reinforcement learning for blind stair climbing with legged and wheeled-legged robots," in 2024 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2024, pp. 8081-8087. [17] R. Li, H. Wang, Q. Li, Z. Han, Y. Chu, L. Ye, W. Xie, and W. Liao, "Ctbc: Contact-triggered blind climbing for wheeled bipedal robots with instruction learning and reinforcement learning," arXiv preprint , 2025. [18] R. P. Singh, M. Morisawa, M. Benallegue, Z. Xie, and F. Kanehiro, "Robust humanoid walking on compliant and uneven terrain with deep reinforcement learning," in 2024 IEEE-RAS 23rd International Conference on Humanoid Robots (Humanoids). IEEE, 2024, pp. 497-504. [19] Y. Zhang, Z. Yu, X. Chen, Y. Du, Z. Zhou, and J. Gao, "Bipedal walking outdoors with a point-footed robot via reinforcement learning," in 2024 International Conference on Intelligent Robotics and Automatic Control (IRAC). IEEE, 2024, pp. 193-197. [20] C. Ji, D. Liu, W. Gao, and S. Zhang, "Robust and efficient walking of a bipedal humanoid robot via reinforcement learning," in 2025 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, 2025, pp. 381-388. [21] M. Su, Y. Jia, and Y. Huang, "Effects of prior knowledge for stair climbing of bipedal robots based on reinforcement learning," in 2024 International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, 2024, pp. 216-222. [22] J. Long, J. Ren, M. Shi, Z. Wang, T. Huang, P. Luo, and J. Pang, "Learning humanoid locomotion with perceptive internal model," arXiv preprint , 2024, submitted to ICRA2025. [Online]. Available: https://arxiv.org/abs/2411.14386 [23] J. He, C. Zhang, F. Jenelten, R. Grandia, M. B ̈acher, and M. Hutter, "Attention-based map encoding for learning generalized legged locomotion," Science Robotics, vol. 10, no. 105, p. eadv3604, 2025. [24] X. Gu, Y.-J. Wang, X. Zhu, C. Shi, Y. Guo, Y. Liu, and J. Chen, "Advancing humanoid locomotion: Mastering challenging terrains with denoising world model learning," arXiv preprint , 2024. [25] B. Acosta and M. Posa, "Perceptive mixed-integer footstep control for underactuated bipedal walking on rough terrain," arXiv preprint , 2025. [26] H. Su, H. Luo, S. Yang, K. Jiang, W. Zhang, and H. Chen, "Lipmguided reinforcement learning for stable and perceptive locomotion in bipedal robots," arXiv preprint , 2025. [27] Q. Liao, T. E. Truong, X. Huang, G. Tevet, K. Sreenath, and C. K. Liu, "Beyondmimic: From motion tracking to versatile humanoid control via guided diffusion," arXiv e-prints, pp. arXiv-2508, 2025. [28] J. Lee, J. Hwangbo, L. Wellhausen, V. Koltun, and M. Hutter, "Learning quadrupedal locomotion over challenging terrain," Science robotics, vol. 5, no. 47, p. eabc5986, 2020. [29] A. Kumar, Z. Fu, D. Pathak, and J. Malik, "Rma: Rapid motor adaptation for legged robots," arXiv preprint , 2021. [30] Z. Zhuang, S. Yao, and H. Zhao, "Humanoid parkour learning," arXiv preprint , 2024. [31] F. Wu, X. Nal, J. Jang, W. Zhu, Z. Gu, A. Wu, and Y. Zhao, "Learn to teach: Sample-efficient privileged learning for humanoid locomotion over real-world uneven terrain," IEEE Robotics and Automation Letters, 2025. [32] Y. Fan, T. Gui, K. Ji, S. Ding, C. Zhang, J. Gu, J. Yu, J. Wang, and Y. Shi, "One policy but many worlds: A scalable unified policy for versatile humanoid locomotion," arXiv preprint , 2025. [33] T. He, W. Xiao, T. Lin, Z. Luo, Z. Xu, Z. Jiang, J. Kautz, C. Liu, G. Shi, X. Wang et al., "Hover: Versatile neural whole-body controller for humanoid robots," in 2025 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2025, pp. 9989-9996. [34] D. Picard, "Torch. manual seed (3407) is all you need: On the influence of random seeds in deep learning architectures for computer vision," arXiv preprint , 2021. [35] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016, http://www.deeplearningbook.org. [36] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel, "Asymmetric actor critic for image-based robot learning," in 14th Robotics: Science and Systems, RSS 2018. MIT Press Journals, 2018.
2510.14946
EdgeNavMamba: Mamba-Optimized Object Detection for Energy-Efficient Edge Devices Romina Aalishah, Mozhgan Navardi, and Tinoosh Mohsenin Department of Electrical and Computer Engineering Johns Hopkins University, Baltimore, MD, USA Abstract—Deployment of efficient and accurate Deep Learning models has long been a challenge in autonomous navigation, particularly for real-time applications on resource-constrained edge devices. Edge devices are limited in computing power and memory, making model efficiency and compression essential. In this work, we propose EdgeNavMamba, a reinforcement learning-based framework for goal-directed navigation using an efficient Mamba object detection model. To train and evaluate the detector, we introduce a custom shape detection dataset collected in diverse indoor settings, reflecting visual cues common in real- world navigation. The object detector serves as a pre-processing module, extracting bounding boxes (BBOX) from visual input, which are then passed to an RL policy to control goal-oriented navigation. Experimental results show that the student model achieved a reduction of 67% in size, and up to 73% in energy per inference on edge devices of NVIDIA Jetson Orin Nano and Raspberry Pi 5, while keeping the same performance as the teacher model. EdgeNavMamba also maintains high detection accuracy in MiniWorld and IsaacLab simulators while reducing parameters by 31% compared to the baseline. I. INTRODUCTION Edge deployment is a key challenge for practical Deep Learning (DL) applications [1], [2], particularly in autonomous navigation, medical imaging , which require real-time per- formance [3]–[8]. DL models on edge devices must be lightweight and efficient to provide real-time, reliable per- formance despite constraints in computation and power [9]– [11]. Particularly in autonomous navigation (Fig. 1), scene understanding is critical, enabling vision models to learn envi- ronmental features, obstacles, and paths for navigation in both new and familiar scenarios [12], [13]. Deploying these models on edge devices is challenging due to their computational intensity, which is necessary for high accuracy [14]. Optimization methods have been applied to these models to improve power and memory efficiency. Since You Only Look Once (YOLO) [15] revolutionized object detection by using regression on bounding boxes, several efforts have applied these methods to YOLO. YOLO-ACE redesigned the backbone and applied double distillation [12], and Mamba YOLO [16] integrated a state-space-model (SSM) [17] back- bone for efficiency. With the introduction of these lightweight yet powerful models, the deployment of edge devices for navigation tasks becomes more feasible and efficient. For the navigation phase, Reinforcement Learning (RL) has been a successful inspiration, as it allows the agent to learn through interactions and real-time feedback [18]. However, to the best of our knowledge no existing work has attempted to combine (b) Rosmaster Edge Platforms: (a) Go2 Robot Dog Edge Accelerator: Jetson Orin Nano Computing Power: 40 TOPs Memory: 8GB DRAM Power Mode: 7W | 15W Fig. 1. Edge platforms with onboard Jetson Orin Nano accelerators: (a) the Unitree Go2 robot dog and (b) the Yahboom Rosmaster wheeled robot. Mamba, Knowledge Distillation (KD), and an optimization strategy to produce a model small enough to fit into cache memory, thereby improving time and energy efficiency. To address this, we develop EdgeNavMamba, a customized Mamba-based detector tailored for efficient on-device per- ception. Unlike prior lightweight YOLO variants or state- space backbones, our design uniquely integrates the Mamba architecture with KD [21] to achieve a balance between ac- curacy, and energy efficiency. The combination of state-space modeling and distillation enables compact yet context-aware feature representations that YOLO variants cannot capture. This framework directly addresses the memory and compu- tational bottlenecks of edge deployment while maintaining real-time performance. We further validate the deployment of EdgeNavMamba on resource-constrained edge devices such as NVIDIA Jetson Orin Nano with 8 GB memory [22] and Raspberry Pi 5 with 16 GB memory [23]. The experimental results demonstrate that EdgeNavMamba successfully achieves efficiency with minimal performance loss compared to the teacher model. Our contributions are as follows: • Development of an edge Mamba object detector through architecture modification and knowledge distillation. • Power and latency analysis for the proposed EdgeNav- Mamba on edge devices, such as the Raspberry Pi 5 and NVIDIA Jetson Orin Nano with Arm Cortex processors. • Validation of object detection in simulators MiniWorld and IsaacLab, as well as RL navigation validation in MiniWorld with different complexities. arXiv:2510.14946v1 [eess.IV] 16 Oct 2025 Agent Environment Action State Reward features logits features logits Frozen Teacher Model KD Loss Ground Truth Data red blue (a) (b) (c) Input Edge Student Model Conv dim i/2 dim i/2 SiLU SiLU SSM Linear Linear dim i dim i dim i Linear ... ... ... ... ... ... Fig. 2. (a) Reinforcement Learning (RL) diagram, including the interaction with the environment to maximize reward [19], [20], (b) Architecture of Mamba [17], used for feature extraction and model efficiency, (c) The process of knowledge distillation [21]; the teacher model is trained and frozen, the student model is trained based on the teacher features, logits, and ground truth data. The rest of this paper reviews related work, outlines key preliminaries, introduces the EdgeNavMamba framework, and presents experimental results, and concludes with key findings. II. RELATED WORK Edge deployment is critical for real-world deep learning applications [1], [24], especially in autonomous systems where onboard processing requires models to be light and efficient for real-time, reliable performance [5]. Common optimization techniques include architecture modification, knowledge dis- tillation [21], quantization, and pruning [2], [25] Architecture changes adjust layer types, sizes, and their repetitions to main- tain performance while reducing model size [6]. Knowledge distillation is applicable wether the teacher model is open- source or not [2]. Quantization and pruning reduce memory usage by decreasing bit precision and removing connections in a structured or unstructured manner, respectively [25]. Object detection is one of the most computationally inten- sive tasks in computer vision and deep learning. Due to the need for high precision to detect objects of varying sizes, models are often large or require significant computational resources [14], [26]. Compression techniques address this issue. YOLO represents a major advancement in this field, addressing object detection in a regression-based manner [15]. Lighter variants, such as YOLOv9 [27], have been adapted for edge object deployment. To improve precision, newer versions add an attention-based mechanism [28], but with a higher computational cost [17]. Mamba, a more efficient alternative to attention architecture, has been adopted in both full and hybrid forms in detection models [16], [29], [30]. With the introduction of these lightweight yet powerful models, the deployment of edge devices for navigation tasks becomes more feasible and efficient. Mela et al. applied quantization and pruning for unmanned surface vehicles [14]. Yang et al. proposed a multimodal 3D object detection framework using attention-guided and category-aware distillation [31]. Reinforcement learning (RL) approaches such as deep Q-networks (DQN) [32] and proximal policy optimization (PPO) [33] have been applied to autonomous navigation on resource-constrained edge devices by directly mapping vision inputs to control commands [34]. In [35], YOLO was in- tegrated into a Deep Reinforcement Learning algorithm by passing the bounding-box (BBOX) coordinates of n goals instead of raw images, improving training time and real- world performance. However, as n grows, the input vector becomes larger, complicating goal learning, and adding a YOLO module adds significant edge-device overhead. To address this, a Squeezed-Edge YOLO module integrated with RL was proposed to enhance the energy efficiency of the detection on edge devices [20], [26]. In this work, we present an end-to-end framework for RL- based autonomous navigation with an optimized Mamba object detection model for energy-efficient edge computing. First, we design the optimized detector, which achieves competi- tive accuracy while using less memory and computation and therefore less energy than existing work. Next, we integrate this model into an RL algorithm and train the navigation policy in simulation. Finally, we deploy and evaluate the optimized object detection model on edge devices. III. PRELIMINARIES Reinforcement Learning (RL). Goal-directed navigation, where the agent aims to reach an object in each episode, can be modeled as a Markov Decision Process (MDP) [36], defined by a state space S, action space A, reward function r : S×A →R, initial state distribution s0, and transition prob- ability p(st+1 | st, at). RL [19] provides a set of algorithms that enable an agent to learn optimal policies π(a | s) through trial-and-error interactions with the environment, aiming to maximize the cumulative expected reward. In goal-based tasks, the objective can be formulated as a goal-oriented MDP [20], [37], where RL methods learn to map states to actions that lead the agent toward the goal. Fig. 2 (a) illustrates how an RL agent interacts with the environment under the MDP framework to receive rewards. Object Detection. Object detection in computer vision aims to locate an object in images or videos by providing its spatial location in the form of bounding boxes and its category through class labels. The field is divided into two types of approaches: traditional techniques and machine learning- based methods. Traditional object detection methods rely on handcrafted features such as Haar [38] , combined with brute- force techniques like sliding window searches across multiple scales and positions [38]. Due to their multi-stage pipelines, Agent - Object Level (Reasoning) Agent View Agent Sub-Goals Environment - Ground Level (Doing) Image BBox Coordinates Pre-processing Module State Reward Action 4 x n Discrete Action 5 x n + a Goal BBox RL Model Last Action n a EdgeNavMamba Fig. 3. The proposed architecture of EdgeNavMamba, consisting of two branches of convolution and SSM for feature extraction. The same architecture is used for teacher and student models with a different set of dimensions. Features, then, undergo a detection process, and the bounding boxes are given to the RL model for navigation to the goal. the introduction of YOLO as a real-time approach helped address it as a single regression problem [15]. Mamba and State Space Models. Mamba [17] architecture introduces Selective State Space Models, an efficient alterna- tive to Transformers [39], reducing computational complexity while maintaining feature extraction capabilities. The state space representation in Mamba is formulated as follows: y(t) = Cx(t) (1) d dtx(t) = Ax(t) + Bu(t) (2) where x(t) represents the hidden state, u(t) is the input signal and A, B, and C are learnable matrices. This structure enables Mamba to capture long-range dependencies efficiently while requiring fewer parameters than traditional self-attention mechanisms. As a result, several efforts have been made to apply this method across various tasks. Fig. 2 (b) demonstrates its architecture as a part of the network. Knowledge Distillation (KD). Knowledge distillation trans- fers knowledge from a larger teacher model to a smaller student model to achieve similar performance with fewer parameters. In our setting, the student is trained using a combi- nation of the standard YOLO detection loss, a distillation loss on classification logits, and a feature matching loss between intermediate representations: L = Ldet + λkdLKD + λfeatLfeat (3) where Ldet is the standard YOLO detection loss computed from ground truth boxes and labels. LKD is a temperature- scaled Kullback–Leibler divergence between the teacher and student classification logits controlled by a temperature param- eter T. Lfeat is the mean squared error between intermediate feature maps of the teacher and student. The hyperparameters λkd and λfeat control the relative contributions of the distilla- tion and feature matching terms. Lite-Conv-SSM Block  EdgeNavMamba Patch Embedding Lite-Conv-SSM Block Patch Merging Lite-Conv-SSM Block Patch Merging Lite-Conv-SSM Block Patch Merging Lite-Conv-SSM Block Patch Merging Detector Goal Detection Split DWConv Block PWConv Block LN Linear LiteSS2D Block Concatenate Shuffle Conv Branch SSM Branch Patch Linear BN Detector DWConv Conv2D (1x1) Linear ReLU AvgPooling x2 x2 x4 x2 dim 1 dim 2 dim 3 dim 4 dim teacher student 1 64 32 2 128 64 3 256 128 4 512 256 Fig. 4. The proposed architecture of EdgeNavMamba, consisting of two branches of convolution and SSM (including LiteSS2D) for feature extraction. The same architecture is used for teacher and student models with a different set of dimensions. Features, then, undergo a detection process, and the bounding boxes are given to the RL model for navigation to the goal. IV. PROPOSED METHODOLOGY In this section, we introduce the end-to-end framework called EdgeNavMamba for energy-efficient autonomous navi- gation, utilizing an optimized Mamba object detection model for on-device edge computing. Fig. 3 and Algorithm 1 provide an overview of the proposed system. At each timestep, the agent captures an image of its environment, which the detector processes to extract BBOX coordinates of objects. These coordinates are then encoded as a feature vector and passed to an RL policy for goal navigation. Together, these components enable the agent to navigate autonomously to the goal while minimizing computations and energy usage. The RL policy is trained in MiniWorld and IsaacLab simulation environments. In the following section, we present our detailed approach. A. Sim-to-Real Goal Navigation Framework The navigation framework consists of two modules: an object detection network and an RL policy to reach the goal. First, the EdgeNavMamba processes the input image, divides it into a fixed resolution, and outputs normalized bounding boxes with confidence scores for n detected objects. The resulting BBox coordinates (x1, y1, x2, y2), producing a 1×(4n) vector. This is concatenated with a one-hot encoded sub-goal vector of size 1×n, and a one-hot encoded last-action vector of size 1 × a, where a is the number of discrete actions. resulting in a 1 × (5n + a) state vector. During each episode in simula- tion, the PPO policy receives the full state vector, including all detected boxes plus one-hot goal, while reward shaping focuses on the BBox coordinates of the current goal object. The action space is discrete: {left, right, forward}. The PPO policy receives this state at each step and outputs an action. A task is considered complete when the agent comes within a predefined proximity threshold of the correct goal object, which checks the Euclidean distance between the agent and the target. Navigation is guided by the reward function shown in Table I. Distance change is to encourage the agent to reduce its distance to the goal at each step. First goal visibility Linear DWConv SiLU LN LiteSS2D LiteSS2D Block 1 2 3 ... 1 4 7 ... 9 8 7 ... 9 6 3 ... 1 2 3 ... 1 4 7 ... 9 8 7 ... 9 6 3 ... Scan Expanding Scan Merging S6 LiteSS2D 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 Fig. 5. The architecture of LiteSS2D block, which is in the SSM branch of EdgeNavMamba, following the overall flow of the SS2D proposed by VMamba [40], but with modifications for better efficiency, mentioned in MedMambaLite [6]. TABLE I REWARD COMPONENTS FOR NAVIGATION TASK IN MINIWORLD Condition Reward Correct goal reached +10.0 Wrong goal / wall collision −2.0 Per step penalty −0.01 Opposite turn actions −0.05 Distance change 0.5 · (∆dprev −∆dcurr) First goal visibility +0.1 Exploration (forward, no goal) +0.01 Exploration (turn, no goal) +0.005 and exploration rewards provide additional guidance when the goal is not yet in view, preventing the agent from remaining in states with no other positive reward signals. B. Edge-Optimized Mamba Object Detection Model Architecture. Fig. 4 shows the overview of the proposed architecture for object detection, inspired by Med- MambaLite [6], which includes five main units as follows: 1) Patch Embedding: The input image is split into patches and projected into a higher-dimensional space. 2) Lite-Conv-SSM-Block: Features pass through a series of Lite-Conv-SSM blocks, including convolutional and State- Space Modeling (SSM) components. Convolutional branch captures local features using depthwise and pointwise convolu- tions. Meanwhile, the SSM branch utilizes a Lite 2D Selective Scan Mamba (LiteSS2D) module to capture long-range depen- dencies and global features. The outputs are concatenated and shuffled to fuse global and local features. A number of these blocks form stages in a hierarchical architecture. 3) Lite 2D-Selective-Scan: Fig. 5 shows Lite 2D-Selective- Scan (LiteSS2D), which shares weights across four directions to reduce computation. The block starts by projecting the input features into a higher dimension, applies row-wise and column-wise convolutions, then runs a four-way Selective Scan with a shared SSM core. Scan Expanding: flattens the input along four directions. S6 block: processes each sequence with shared weights. Scan Merging: sums directional outputs and reshapes them. This approach provides memory efficiency by avoiding repeated tensor reshaping and using compact representations. Compared to available object detection models, we introduced important changes to provide an efficient model. Efficiency is improved by factorizing convolutions, sharing projection weights, and reusing Mamba weight matrices across blocks. 4) Patch Merging: Between stages, patch merging layers reduce spatial resolution while increasing channel depth, build- ing a hierarchical representation. Algorithm 1 EdgeNavMamba Proposed Approach Require: Dataset D, teacher model T , student model S, RL policy π Ensure: Trained edge detector S⋆and navigation policy π⋆ 1: Train Teacher: Train T on D using detection loss. 2: Distill Student: Freeze T and train S using L = Ldet + λkdLKD + λfeatLfeat. 3: Train RL Policy: Use S to extract object bounding boxes and feed them as state input to PPO agent π in MiniWorld. 4: Deploy on Edge: Export S⋆and π⋆to edge devices for real-time goal navigation. 5) Detector: The detector processes extracted features to identify the presence and bounding box of a target object. It uses depthwise and pointwise convolutions, followed by pooling and a linear layer to output the goal detection result. Knowledge Distillation. Fig. 2 (c) illustrates our knowledge distillation framework, where an edge student model is trained based on a frozen teacher model and ground truth data. Models have a similar architecture, as shown in Fig. 4, but with varying channel dimensions. During training, each input batch is processed by both teacher and student, and the student parameters are updated using a combined KD Loss according to the Eq. 3, and optimization is performed on the student while keeping the teacher fixed. V. EXPERIMENTAL EVALUATION A. Experimental Setup 1) Datasets: Two datasets were prepared for training and deployment of teacher and student models: a real-world dataset containing 1,800 images and a simulated MiniWorld dataset with around 5,500 images. Both include three object classes, red, blue, and black boxes, and are split into training and validation sets with a 90/10 split. 2) Training Details: For object detection model and knowl- edge distillation experiments, we set the temperature to T = 2.0, the KL divergence weight to λkd = 1.0, and the feature- matching weight to λfeat = 0.25. The teacher is first trained, then frozen, and distillation is performed into the student configured with depths [2, 2, 4, 2] and channel dimensions (32, 64, 128, 256) on the same dataset. We use the Adam optimizer with learning rate lr = 10−4, batch size 32, and a learning rate scheduler that reduces the rate when validation Mean Average Precision (mAP) shows no improvement. Inputs are resized to 224×224 and normalized. Evaluation uses the mAP metric. During training and validation we periodically USB Power Meter Power (W) = I (A) x V (V) Raspberry Pi 5 16 GB Memory Jetson Orin Nano 8GB Memory with CUDA GPU Onboard INA3221 Power Monitor NVIDIA Jetson Orin Nano CPU GPU 4x 256 KB L2 4x A78 GPC 4x 128 KB L1 2x A78 2x 128 KB L1 2x 256 KB L2 2 MB L3 2 MB L3 4 MB L2 8x 192 KB L1 4 MB Sys. Cache Raspberry Pi 5 4x 512 KB L2 4x A76 2 MB L3 16 GB Global Memory CPU 4x 128 KB L1 8 GB Global Memory SM SM SM SM SM SM SM SM (b) (c) (d) (a) ISAAC Simulator Fig. 6. Experimental setup for energy-efficient multi-goal autonomous naviga- tion: (a) simulation environment in NVIDIA Isaac Simulator and MiniWorld. (b) Edge robotic platforms: Yahboom Rosmaster wheeled robot and Unitree Go2 dog robot; (c) Raspberry Pi 5 edge node (4× Cortex-A76 CPU, 16 GB LPDDR5, multi-level cache); (d) NVIDIA Jetson Orin Nano 8 GB edge AI accelerator (6-core Cortex-A78AE CPU, Ampere GPU, 8 GB LPDDR5, multi-level cache); (c) Cache hierarchy and power-measurement setup using a USB power meter and onboard INA3221 monitor. decode detections with confidence thresholds (0.25, 0.45) for qualitative inspection. The trained student is exported to ONNX for deployment and integration into the RL network. Navigation policy is trained in MiniWorld environment, consisting of a rectangular room with three colored boxes (red, blue, black) placed at random non-overlapping positions. At the beginning of each episode, one of the objects is randomly selected as the target, and its class is encoded as a one-hot goal vector. The policy is trained with the PPO algorithm. We use a learning rate of 3 × 10−4, a batch size of 128, and episode length of 1024 steps. The agent is trained for a total of 500,000 timesteps. The reward function is described in Table I, combining sparse success and failure signals with dense terms for distance reduction, exploration, and first-goal visibility. All experiments are done on an NVIDIA 4090 GPU. 3) Hardware Deployment Platforms: To validate our ap- proach in real-world settings, we deployed it on two edge platforms, an NVIDIA Jetson Orin Nano (8 GB RAM) and a Raspberry Pi 5 (16 GB RAM), mounted on the legged Unitree Go 2 robot and the Yahboom Rosmaster wheeled robot (Fig. 6 (b)). Power consumption was measured on both de- vices, as illustrated in Fig. 6 (c), and their memory hierarchies and CPU/GPU architectures are shown in Fig. 6 (d). B. Results and Discussion 1) Mamba Model Optimization: Table II presents the per- formance of the EdgeNavMamba teacher and student models in mAP compared to the existing shape detection models. Knowledge distillation effectively reduces model size and FLOPs, without degrading performance. Meanwhile, our stu- dent model achieves a 31% reduction in the number of param- eters compared to the baseline, while maintaining competitive accuracy. Detections are evaluated in both MiniWorld and IsaacLab simulators for comprehensive analysis. Fig. 7 illus- trates these environments along with examples of detections made by the agent in various scenarios. Fig. 7. MiniWorld and IsaacLab samples of environments and object detections by the agent during exploration using EdgeNavMamba-ST. The environment contains three boxes placed at random, non-overlapping posi- tions, with one randomly chosen as the target each episode. Fig. 8. Success rate of navigation toward a defined goal during training in different environment complexities. Each value is calculated over the last 100 episodes. In each case, one box is designated as the goal, while the others serve as distractions. 2) RL-Driven Goal Navigation: We evaluated EdgeNav- Mamba for navigation in MiniWorld using three scenarios. In each, one box was designated as the goal while the others served as distractions. In the first case, only one object was present; in the second, two objects were present, one being the goal; and in the third, three objects including one goal were placed. Fig. 8 shows success rates for these scenarios over the last 100 training episodes. In the first case, the agent achieved a 100% success rate, confirming accurate detection during navigation. In the second and third cases, the agent achieved 94% and 90% success rates, respectively. 3) On-Device Energy Profiling: In Fig. 9, we evaluate knowledge distillation by comparing the baseline EdgeNavMamba-TR with its distilled variant, EdgeNavMamba-ST, on two representative edge platforms. On the Jetson Orin Nano, EdgeNavMamba-ST achieves a 63% reduction in energy per inference while improving throughput. Likewise, on the Raspberry Pi 5, EdgeNavMamba-ST delivers a 73% energy reduction, demonstrating substantial efficiency gains with only negligible power overhead. VI. CONCLUSION In this work, we presented EdgeNavMamba, an RL-based framework designed for goal navigation using an efficient Mamba-based object detection model. By combining archi- tectural modifications and knowledge distillation on the object TABLE II COMPARISON OF EDGENAVMAMBA WITH PRIOR MODELS ON THE SHAPES DATASET. YOLOV5S [26], [37] (32-BIT) AND SQUEEZED EDGE YOLO [26] (8-BIT) DO NOT REPORT FLOPS. Block Params Size FLOPs mAP YOLOv5s [26] 7.3 M 237 MB - 0.96 Squeezed Edge YOLO [26] 931 k 7.5 MB - 0.95 EdgeMambaNav-TR 2.4 M 9.1 MB 0.47 G 0.93 EdgeMambaNav-ST 639 k 2.5 MB 0.15 G 0.93 Fig. 9. Energy and performance comparison of proposed EdgeNavMamba- TR and EdgeNavMamba-ST on Jetson Orin Nano and Raspberry Pi 5 16GB. detection model, we achieved a 31% reduction in the number of parameters compared to the baselines while preserving de- tection accuracy. The student model also, achieved a reduction of 67% in size, and up to 73% in energy per inference on edge devices of NVIDIA Jetson Orin Nano and Raspberry Pi 5, while keeping the same performance as the teacher model, emphasizing the efficiency of the edge model. Navigation results in the MiniWorld simulator demonstrate over 90% success rate in various environment complexities. REFERENCES [1] Y. Wang et al., “Computation-efficient deep learning for computer vision: A survey,” Cybernetics and Intelligence, pp. 1–24, 2024. [2] M. Navardi et al., “Genai at the edge: Comprehensive survey on empowering edge devices,” Proceedings of the AAAI SSS, 2025. [3] U. Kallakuri et al., “ATLAS: Adaptive landmark acquisition using llm- guided navigation,” in Proceedings of the First Vision and Language for Autonomous Driving and Robotics Workshop. OpenReview.net, 2024. [4] M. Walczak et al., “ATLASv2: Llm-guided adaptive landmark acquisi- tion and navigation on the edge,” arXiv:2504.10784, 2025. [5] N. Tahir et al., “Edge computing and its application in robotics: A survey,” arXiv preprint arXiv:2507.00523, 2025. [6] R. Aalishah et al., “Medmambalite: Hardware-aware mamba for medical image classification,” 2025, 21st IEEE Biomedical Circuits and Systems Conference (BioCAS) 2025. [7] Y. Xu et al., “Edge deep learning in computer vision and medical diagnostics: a comprehensive survey,” Artificial Intelligence Review, vol. 58, no. 93, 2025. [8] S. H. Lee et al., “Fast on-device learning framework for single-image super-resolution,” IEEE Access, vol. 12, pp. 37 276–37 287, 2024. [9] R. Aalishah et al., “Mambalitesr: Image super-resolution with low-rank mamba using knowledge distillation,” in Proceedings of the Interna- tional Symposium on Quality Electronic Design (ISQED), 2025. [10] M. Navardi et al., “Metatinyml: End-to-end metareasoning framework for tinyml platforms,” IEEE Embedded Systems Letters, 2024. [11] A. N. Mazumder et al., “A survey on the optimization of neural network accelerators for micro-ai on-device inference,” IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2021. [12] Y. Xie et al., “YOLO-ACE: A Vehicle and Pedestrian Detection Algorithm for Autonomous Driving Scenarios Based on Knowledge Distillation of YOLOv10,” IEEE IoT Journal, Aug. 2025. [13] M. Walczak et al., “Eden: Entorhinal driven egocentric navigation toward robotic deployment,” arXiv preprint arXiv:2506.03046, 2025. [14] J. L. Mela et al., “Yolo-based power-efficient object detection on edge devices for usvs,” Journal of Real-Time Image Processing, 2025. [15] J. Redmon et al., “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016, pp. 779–788. [16] Z. Wang et al., “Mamba yolo: A simple baseline for object detection with state space model,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 8, pp. 8205–8213, 2025. [17] A. Gu and T. Dao, “Mamba: Linear-time sequence modeling with selective state spaces,” arXiv preprint arXiv:2312.00752, 2023. [18] W. Zixiang et al., “Research on autonomous robots navigation based on reinforcement learning,” 2024 3rd International Conference on Robotics, Artificial Intelligence and Intelligent Control (RAIIC), pp. 78–81, 2024. [19] R. S. Sutton and A. G. Barto, “Reinforcement learning: An introduction,” MIT Press, 1998. [20] M. Navardi et al., “Metae2rl: Toward metareasoning for energy-efficient multi-goal reinforcement learning with squeezed edge yolo,” IEEE Micro, 2023. [21] G. Hinton et al., “Distilling the knowledge in a neural network,” 2015, nIPS 2014 Deep Learning Workshop. [22] NVIDIA, “Jetson orin nano developer kit getting started — nvidia developer,” https://developer.nvidia.com/embedded/learn/get-started- jetson-orinnano-devkit, 2025, accessed: 2025-06-06. [23] Raspberry Pi, “Getting started with your raspberry pi,” https://www.raspberrypi.com/documentation/computers/getting- started.html, 2025, accessed: 2025-06-06. [24] M. Navardi et al., “Genai at the edge: Comprehensive survey on empowering edge devices,” in Proceedings of the AAAI Symposium Series, vol. 5, no. 1, 2025, pp. 180–187. [25] S. Han et al., “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015. [26] E. Humes et al., “Squeezed edge yolo: Onboard object detection on edge devices,” ML with New Compute Paradigms (MLNCP) Workshop at NeurIPS, arXiv preprint arXiv:2312.11716, 2023. [27] C.-Y. Wang and H.-Y. M. Liao, “Yolov9: Learning what you want to learn using programmable gradient information,” 2024. [28] Y. Tian, Q. Ye, and D. Doermann, “Yolov12: Attention-centric real-time object detectors,” arXiv preprint arXiv:2502.12524, 2025. [29] L. Zhu et al., “Vision mamba: Efficient visual representation learning with bidirectional state space model,” in Proceedings of the International Conference on Machine Learning (ICML), 2024. [30] A. Hatamizadeh and J. Kautz, “Mambavision: A hybrid mamba- transformer vision backbone,” arXiv preprint arXiv:2407.08083, 2024. [31] B. Yang et al., “Multidistiller: Efficient multimodal 3d detection via knowledge distillation for drones and autonomous vehicles,” Drones, vol. 9, no. 5, p. 322, 2025. [32] V. Mnih et al., “Human-level control through deep reinforcement learn- ing,” nature, vol. 518, no. 7540, pp. 529–533, 2015. [33] J. Schulman et al., “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. [34] S. Nahavandi et al., “A comprehensive review on autonomous naviga- tion,” ACM Computing Surveys, vol. 57, no. 9, pp. 1–67, 2025. [35] M. Navardi et al., “Toward real-world implementation of deep rein- forcement learning for vision-based autonomous drone navigation with mission,” UMBC Student Collection, 2022. [36] R. Bellman, “A markovian decision process,” Journal of Mathematics and Mechanics, vol. 6, no. 5, pp. 679–684, 1957. [37] T. Manjunath et al., “Reprohrl: Towards multi-goal navigation in the real world using hierarchical agents. on 37th aaai conference on artificial intelligence,” in The 1st Reinforcement Learning Ready for Production workshop, 2023. [38] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1. IEEE, 2001, pp. I–I. [39] K. He et al., “Deep residual learning for image recognition,” in Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [40] Y. Liu et al., “Vmamba: Visual state space model,” arXiv preprint arXiv:2401.10166, 2024.
EdgeNavMamba: Mamba-Optimized Object Detection for Energy-Efficient Edge Devices Romina Aalishah, Mozhgan Navardi, and Tinoosh Mohsenin -Deployment of efficient and accurate Deep Learning models has long been a challenge in autonomous navigation, particularly for real-time applications on resource-constrained edge devices. Edge devices are limited in computing power and memory, making model efficiency and compression essential. In this work, we propose EdgeNavMamba, a reinforcement learning-based framework for goal-directed navigation using an efficient Mamba object detection model. To train and evaluate the detector, we introduce a custom shape detection dataset collected in diverse indoor settings, reflecting visual cues common in realworld navigation. The object detector serves as a pre-processing module, extracting bounding boxes (BBOX) from visual input, which are then passed to an RL policy to control goal-oriented navigation. Experimental results show that the student model achieved a reduction of 67% in size, and up to 73% in energy per inference on edge devices of NVIDIA Jetson Orin Nano and Raspberry Pi 5, while keeping the same performance as the teacher model. EdgeNavMamba also maintains high detection accuracy in MiniWorld and IsaacLab simulators while reducing parameters by 31% compared to the baseline. I. INTRODUCTION Edge deployment is a key challenge for practical Deep Learning (DL) applications [1], [2], particularly in autonomous navigation, medical imaging , which require real-time performance [3]-[8]. DL models on edge devices must be lightweight and efficient to provide real-time, reliable performance despite constraints in computation and power [9]- [11]. Particularly in autonomous navigation (Fig. 1), scene understanding is critical, enabling vision models to learn environmental features, obstacles, and paths for navigation in both new and familiar scenarios [12], [13]. Deploying these models on edge devices is challenging due to their computational intensity, which is necessary for high accuracy [14]. Optimization methods have been applied to these models to improve power and memory efficiency. Since You Only Look Once (YOLO) [15] revolutionized object detection by using regression on bounding boxes, several efforts have applied these methods to YOLO. YOLO-ACE redesigned the backbone and applied double distillation [12], and Mamba YOLO [16] integrated a state-space-model (SSM) [17] backbone for efficiency. With the introduction of these lightweight yet powerful models, the deployment of edge devices for navigation tasks becomes more feasible and efficient. For the navigation phase, Reinforcement Learning (RL) has been a successful inspiration, as it allows the agent to learn through interactions and real-time feedback [18]. However, to the best of our knowledge no existing work has attempted to combine (b) Rosmaster Edge Platforms: (a) Go2 Robot Dog Edge Accelerator: Jetson Orin Nano Computing Power: 40 TOPs Memory: 8GB DRAM Power Mode: 7W | 15W Fig. 1. Edge platforms with onboard Jetson Orin Nano accelerators: (a) the Unitree Go2 robot dog and (b) the Yahboom Rosmaster wheeled robot. Mamba, Knowledge Distillation (KD), and an optimization strategy to produce a model small enough to fit into cache memory, thereby improving time and energy efficiency. To address this, we develop EdgeNavMamba, a customized Mamba-based detector tailored for efficient on-device perception. Unlike prior lightweight YOLO variants or statespace backbones, our design uniquely integrates the Mamba architecture with KD [21] to achieve a balance between accuracy, and energy efficiency. The combination of state-space modeling and distillation enables compact yet context-aware feature representations that YOLO variants cannot capture. This framework directly addresses the memory and computational bottlenecks of edge deployment while maintaining real-time performance. We further validate the deployment of EdgeNavMamba on resource-constrained edge devices such as NVIDIA Jetson Orin Nano with 8 GB memory [22] and Raspberry Pi 5 with 16 GB memory [23]. The experimental results demonstrate that EdgeNavMamba successfully achieves efficiency with minimal performance loss compared to the teacher model. Our contributions are as follows: • Development of an edge Mamba object detector through architecture modification and knowledge distillation. • Power and latency analysis for the proposed EdgeNavMamba on edge devices, such as the Raspberry Pi 5 and NVIDIA Jetson Orin Nano with Arm Cortex processors. • Validation of object detection in simulators MiniWorld and IsaacLab, as well as RL navigation validation in MiniWorld with different complexities. 16 Oct 2025 Agent Environment Action State Reward features logits features logits Frozen Teacher Model KD Loss Ground Truth Data red blue (a) (b) (c) Input Edge Student Model Conv dim i/2 dim i/2 SiLU SiLU SSM Linear Linear dim i dim i dim i Linear ... ... ... ... ... ... Fig. 2. (a) Reinforcement Learning (RL) diagram, including the interaction with the environment to maximize reward [19], [20], (b) Architecture of Mamba [17], used for feature extraction and model efficiency, (c) The process of knowledge distillation [21]; the teacher model is trained and frozen, the student model is trained based on the teacher features, logits, and ground truth data. The rest of this paper reviews related work, outlines key preliminaries, introduces the EdgeNavMamba framework, and presents experimental results, and concludes with key findings. II. RELATED WORK Edge deployment is critical for real-world deep learning applications [1], [24], especially in autonomous systems where onboard processing requires models to be light and efficient for real-time, reliable performance [5]. Common optimization techniques include architecture modification, knowledge distillation [21], quantization, and pruning [2], [25] Architecture changes adjust layer types, sizes, and their repetitions to maintain performance while reducing model size [6]. Knowledge distillation is applicable wether the teacher model is opensource or not [2]. Quantization and pruning reduce memory usage by decreasing bit precision and removing connections in a structured or unstructured manner, respectively [25]. Object detection is one of the most computationally intensive tasks in computer vision and deep learning. Due to the need for high precision to detect objects of varying sizes, models are often large or require significant computational resources [14], [26]. Compression techniques address this issue. YOLO represents a major advancement in this field, addressing object detection in a regression-based manner [15]. Lighter variants, such as YOLOv9 [27], have been adapted for edge object deployment. To improve precision, newer versions add an attention-based mechanism [28], but with a higher computational cost [17]. Mamba, a more efficient alternative to attention architecture, has been adopted in both full and hybrid forms in detection models [16], [29], [30]. With the introduction of these lightweight yet powerful models, the deployment of edge devices for navigation tasks becomes more feasible and efficient. Mela et al. applied quantization and pruning for unmanned surface vehicles [14]. Yang et al. proposed a multimodal 3D object detection framework using attention-guided and category-aware distillation [31]. Reinforcement learning (RL) approaches such as deep Q-networks (DQN) [32] and proximal policy optimization (PPO) [33] have been applied to autonomous navigation on resource-constrained edge devices by directly mapping vision inputs to control commands [34]. In [35], YOLO was integrated into a Deep Reinforcement Learning algorithm by passing the bounding-box (BBOX) coordinates of n goals instead of raw images, improving training time and realworld performance. However, as n grows, the input vector becomes larger, complicating goal learning, and adding a YOLO module adds significant edge-device overhead. To address this, a Squeezed-Edge YOLO module integrated with RL was proposed to enhance the energy efficiency of the detection on edge devices [20], [26]. In this work, we present an end-to-end framework for RLbased autonomous navigation with an optimized Mamba object detection model for energy-efficient edge computing. First, we design the optimized detector, which achieves competitive accuracy while using less memory and computation and therefore less energy than existing work. Next, we integrate this model into an RL algorithm and train the navigation policy in simulation. Finally, we deploy and evaluate the optimized object detection model on edge devices. III. PRELIMINARIES Reinforcement Learning (RL). Goal-directed navigation, where the agent aims to reach an object in each episode, can be modeled as a Markov Decision Process (MDP) [36], defined by a state space S, action space A, reward function r : S×A →R, initial state distribution s0, and transition probability p(st+1 | st, at). RL [19] provides a set of algorithms that enable an agent to learn optimal policies π(a | s) through trial-and-error interactions with the environment, aiming to maximize the cumulative expected reward. In goal-based tasks, the objective can be formulated as a goal-oriented MDP [20], [37], where RL methods learn to map states to actions that lead the agent toward the goal. Fig. 2 (a) illustrates how an RL agent interacts with the environment under the MDP framework to receive rewards. Object Detection. Object detection in computer vision aims to locate an object in images or videos by providing its spatial location in the form of bounding boxes and its category through class labels. The field is divided into two types of approaches: traditional techniques and machine learningbased methods. Traditional object detection methods rely on handcrafted features such as Haar [38] , combined with bruteforce techniques like sliding window searches across multiple scales and positions [38]. Due to their multi-stage pipelines, Agent - Object Level (Reasoning) Agent View Agent Sub-Goals Environment - Ground Level (Doing) Image BBox Coordinates Pre-processing Module State Reward Action 4 x n Discrete Action 5 x n + a Goal BBox RL Model Last Action n a EdgeNavMamba Fig. 3. The proposed architecture of EdgeNavMamba, consisting of two branches of convolution and SSM for feature extraction. The same architecture is used for teacher and student models with a different set of dimensions. Features, then, undergo a detection process, and the bounding boxes are given to the RL model for navigation to the goal. the introduction of YOLO as a real-time approach helped address it as a single regression problem [15]. Mamba and State Space Models. Mamba [17] architecture introduces Selective State Space Models, an efficient alternative to Transformers [39], reducing computational complexity while maintaining feature extraction capabilities. The state space representation in Mamba is formulated as follows: y(t) = Cx(t) (1) d dtx(t) = Ax(t) + Bu(t) (2) where x(t) represents the hidden state, u(t) is the input signal and A, B, and C are learnable matrices. This structure enables Mamba to capture long-range dependencies efficiently while requiring fewer parameters than traditional self-attention mechanisms. As a result, several efforts have been made to apply this method across various tasks. Fig. 2 (b) demonstrates its architecture as a part of the network. Knowledge Distillation (KD). Knowledge distillation transfers knowledge from a larger teacher model to a smaller student model to achieve similar performance with fewer parameters. In our setting, the student is trained using a combination of the standard YOLO detection loss, a distillation loss on classification logits, and a feature matching loss between intermediate representations: L = Ldet + λkdLKD + λfeatLfeat (3) where Ldet is the standard YOLO detection loss computed from ground truth boxes and labels. LKD is a temperaturescaled Kullback-Leibler divergence between the teacher and student classification logits controlled by a temperature parameter T. Lfeat is the mean squared error between intermediate feature maps of the teacher and student. The hyperparameters λkd and λfeat control the relative contributions of the distillation and feature matching terms. Lite-Conv-SSM Block EdgeNavMamba Patch Embedding Lite-Conv-SSM Block Patch Merging Lite-Conv-SSM Block Patch Merging Lite-Conv-SSM Block Patch Merging Lite-Conv-SSM Block Patch Merging Detector Goal Detection Split DWConv Block PWConv Block LN Linear LiteSS2D Block Concatenate Shuffle Conv Branch SSM Branch Patch Linear BN Detector DWConv Conv2D (1x1) Linear ReLU AvgPooling x2 x2 x4 x2 dim 1 dim 2 dim 3 dim 4 dim teacher student 1 64 32 2 128 64 3 256 128 4 512 256 Fig. 4. The proposed architecture of EdgeNavMamba, consisting of two branches of convolution and SSM (including LiteSS2D) for feature extraction. The same architecture is used for teacher and student models with a different set of dimensions. Features, then, undergo a detection process, and the bounding boxes are given to the RL model for navigation to the goal. IV. PROPOSED METHODOLOGY In this section, we introduce the end-to-end framework called EdgeNavMamba for energy-efficient autonomous navigation, utilizing an optimized Mamba object detection model for on-device edge computing. Fig. 3 and Algorithm 1 provide an overview of the proposed system. At each timestep, the agent captures an image of its environment, which the detector processes to extract BBOX coordinates of objects. These coordinates are then encoded as a feature vector and passed to an RL policy for goal navigation. Together, these components enable the agent to navigate autonomously to the goal while minimizing computations and energy usage. The RL policy is trained in MiniWorld and IsaacLab simulation environments. In the following section, we present our detailed approach. A. Sim-to-Real Goal Navigation Framework The navigation framework consists of two modules: an object detection network and an RL policy to reach the goal. First, the EdgeNavMamba processes the input image, divides it into a fixed resolution, and outputs normalized bounding boxes with confidence scores for n detected objects. The resulting BBox coordinates (x1, y1, x2, y2), producing a 1×(4n) vector. This is concatenated with a one-hot encoded sub-goal vector of size 1×n, and a one-hot encoded last-action vector of size 1 × a, where a is the number of discrete actions. resulting in a 1 × (5n + a) state vector. During each episode in simulation, the PPO policy receives the full state vector, including all detected boxes plus one-hot goal, while reward shaping focuses on the BBox coordinates of the current goal object. The action space is discrete: {left, right, forward}. The PPO policy receives this state at each step and outputs an action. A task is considered complete when the agent comes within a predefined proximity threshold of the correct goal object, which checks the Euclidean distance between the agent and the target. Navigation is guided by the reward function shown in Table I. Distance change is to encourage the agent to reduce its distance to the goal at each step. First goal visibility Linear DWConv SiLU LN LiteSS2D LiteSS2D Block 1 2 3 ... 1 4 7 ... 9 8 7 ... 9 6 3 ... 1 2 3 ... 1 4 7 ... 9 8 7 ... 9 6 3 ... Scan Expanding Scan Merging S6 LiteSS2D 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 Fig. 5. The architecture of LiteSS2D block, which is in the SSM branch of EdgeNavMamba, following the overall flow of the SS2D proposed by VMamba [40], but with modifications for better efficiency, mentioned in MedMambaLite [6]. TABLE I REWARD COMPONENTS FOR NAVIGATION TASK IN MINIWORLD Condition Reward Correct goal reached +10.0 Wrong goal / wall collision -2.0 Per step penalty -0.01 Opposite turn actions -0.05 Distance change 0.5 · (∆dprev -∆dcurr) First goal visibility +0.1 Exploration (forward, no goal) +0.01 Exploration (turn, no goal) +0.005 and exploration rewards provide additional guidance when the goal is not yet in view, preventing the agent from remaining in states with no other positive reward signals. B. Edge-Optimized Mamba Object Detection Model Architecture. Fig. 4 shows the overview of the proposed architecture for object detection, inspired by MedMambaLite [6], which includes five main units as follows: 1) Patch Embedding: The input image is split into patches and projected into a higher-dimensional space. 2) Lite-Conv-SSM-Block: Features pass through a series of Lite-Conv-SSM blocks, including convolutional and StateSpace Modeling (SSM) components. Convolutional branch captures local features using depthwise and pointwise convolutions. Meanwhile, the SSM branch utilizes a Lite 2D Selective Scan Mamba (LiteSS2D) module to capture long-range dependencies and global features. The outputs are concatenated and shuffled to fuse global and local features. A number of these blocks form stages in a hierarchical architecture. 3) Lite 2D-Selective-Scan: Fig. 5 shows Lite 2D-SelectiveScan (LiteSS2D), which shares weights across four directions to reduce computation. The block starts by projecting the input features into a higher dimension, applies row-wise and column-wise convolutions, then runs a four-way Selective Scan with a shared SSM core. Scan Expanding: flattens the input along four directions. S6 block: processes each sequence with shared weights. Scan Merging: sums directional outputs and reshapes them. This approach provides memory efficiency by avoiding repeated tensor reshaping and using compact representations. Compared to available object detection models, we introduced important changes to provide an efficient model. Efficiency is improved by factorizing convolutions, sharing projection weights, and reusing Mamba weight matrices across blocks. 4) Patch Merging: Between stages, patch merging layers reduce spatial resolution while increasing channel depth, building a hierarchical representation. Algorithm 1 EdgeNavMamba Proposed Approach Require: Dataset D, teacher model T , student model S, RL policy π Ensure: Trained edge detector S⋆and navigation policy π⋆ 1: Train Teacher: Train T on D using detection loss. 2: Distill Student: Freeze T and train S using L = Ldet + λkdLKD + λfeatLfeat. 3: Train RL Policy: Use S to extract object bounding boxes and feed them as state input to PPO agent π in MiniWorld. 4: Deploy on Edge: Export S⋆and π⋆to edge devices for real-time goal navigation. 5) Detector: The detector processes extracted features to identify the presence and bounding box of a target object. It uses depthwise and pointwise convolutions, followed by pooling and a linear layer to output the goal detection result. Knowledge Distillation. Fig. 2 (c) illustrates our knowledge distillation framework, where an edge student model is trained based on a frozen teacher model and ground truth data. Models have a similar architecture, as shown in Fig. 4, but with varying channel dimensions. During training, each input batch is processed by both teacher and student, and the student parameters are updated using a combined KD Loss according to the Eq. 3, and optimization is performed on the student while keeping the teacher fixed. V. EXPERIMENTAL EVALUATION A. Experimental Setup 1) Datasets: Two datasets were prepared for training and deployment of teacher and student models: a real-world dataset containing 1,800 images and a simulated MiniWorld dataset with around 5,500 images. Both include three object classes, red, blue, and black boxes, and are split into training and validation sets with a 90/10 split. 2) Training Details: For object detection model and knowledge distillation experiments, we set the temperature to T = 2.0, the KL divergence weight to λkd = 1.0, and the featurematching weight to λfeat = 0.25. The teacher is first trained, then frozen, and distillation is performed into the student configured with depths [2, 2, 4, 2] and channel dimensions (32, 64, 128, 256) on the same dataset. We use the Adam optimizer with learning rate lr = 10-4, batch size 32, and a learning rate scheduler that reduces the rate when validation Mean Average Precision (mAP) shows no improvement. Inputs are resized to 224×224 and normalized. Evaluation uses the mAP metric. During training and validation we periodically USB Power Meter Power (W) = I (A) x V (V) Raspberry Pi 5 16 GB Memory Jetson Orin Nano 8GB Memory with CUDA GPU Onboard INA3221 Power Monitor NVIDIA Jetson Orin Nano CPU GPU 4x 256 KB L2 4x A78 GPC 4x 128 KB L1 2x A78 2x 128 KB L1 2x 256 KB L2 2 MB L3 2 MB L3 4 MB L2 8x 192 KB L1 4 MB Sys. Cache Raspberry Pi 5 4x 512 KB L2 4x A76 2 MB L3 16 GB Global Memory CPU 4x 128 KB L1 8 GB Global Memory SM SM SM SM SM SM SM SM (b) (c) (d) (a) ISAAC Simulator Fig. 6. Experimental setup for energy-efficient multi-goal autonomous navigation: (a) simulation environment in NVIDIA Isaac Simulator and MiniWorld. (b) Edge robotic platforms: Yahboom Rosmaster wheeled robot and Unitree Go2 dog robot; (c) Raspberry Pi 5 edge node (4× Cortex-A76 CPU, 16 GB LPDDR5, multi-level cache); (d) NVIDIA Jetson Orin Nano 8 GB edge AI accelerator (6-core Cortex-A78AE CPU, Ampere GPU, 8 GB LPDDR5, multi-level cache); (c) Cache hierarchy and power-measurement setup using a USB power meter and onboard INA3221 monitor. decode detections with confidence thresholds (0.25, 0.45) for qualitative inspection. The trained student is exported to ONNX for deployment and integration into the RL network. Navigation policy is trained in MiniWorld environment, consisting of a rectangular room with three colored boxes (red, blue, black) placed at random non-overlapping positions. At the beginning of each episode, one of the objects is randomly selected as the target, and its class is encoded as a one-hot goal vector. The policy is trained with the PPO algorithm. We use a learning rate of 3 × 10-4, a batch size of 128, and episode length of 1024 steps. The agent is trained for a total of 500,000 timesteps. The reward function is described in Table I, combining sparse success and failure signals with dense terms for distance reduction, exploration, and first-goal visibility. All experiments are done on an NVIDIA 4090 GPU. 3) Hardware Deployment Platforms: To validate our approach in real-world settings, we deployed it on two edge platforms, an NVIDIA Jetson Orin Nano (8 GB RAM) and a Raspberry Pi 5 (16 GB RAM), mounted on the legged Unitree Go 2 robot and the Yahboom Rosmaster wheeled robot (Fig. 6 (b)). Power consumption was measured on both devices, as illustrated in Fig. 6 (c), and their memory hierarchies and CPU/GPU architectures are shown in Fig. 6 (d). B. Results and Discussion 1) Mamba Model Optimization: Table II presents the performance of the EdgeNavMamba teacher and student models in mAP compared to the existing shape detection models. Knowledge distillation effectively reduces model size and FLOPs, without degrading performance. Meanwhile, our student model achieves a 31% reduction in the number of parameters compared to the baseline, while maintaining competitive accuracy. Detections are evaluated in both MiniWorld and IsaacLab simulators for comprehensive analysis. Fig. 7 illustrates these environments along with examples of detections made by the agent in various scenarios. Fig. 7. MiniWorld and IsaacLab samples of environments and object detections by the agent during exploration using EdgeNavMamba-ST. The environment contains three boxes placed at random, non-overlapping positions, with one randomly chosen as the target each episode. Fig. 8. Success rate of navigation toward a defined goal during training in different environment complexities. Each value is calculated over the last 100 episodes. In each case, one box is designated as the goal, while the others serve as distractions. 2) RL-Driven Goal Navigation: We evaluated EdgeNavMamba for navigation in MiniWorld using three scenarios. In each, one box was designated as the goal while the others served as distractions. In the first case, only one object was present; in the second, two objects were present, one being the goal; and in the third, three objects including one goal were placed. Fig. 8 shows success rates for these scenarios over the last 100 training episodes. In the first case, the agent achieved a 100% success rate, confirming accurate detection during navigation. In the second and third cases, the agent achieved 94% and 90% success rates, respectively. 3) On-Device Energy Profiling: In Fig. 9, we evaluate knowledge distillation by comparing the baseline EdgeNavMamba-TR with its distilled variant, EdgeNavMamba-ST, on two representative edge platforms. On the Jetson Orin Nano, EdgeNavMamba-ST achieves a 63% reduction in energy per inference while improving throughput. Likewise, on the Raspberry Pi 5, EdgeNavMamba-ST delivers a 73% energy reduction, demonstrating substantial efficiency gains with only negligible power overhead. VI. CONCLUSION In this work, we presented EdgeNavMamba, an RL-based framework designed for goal navigation using an efficient Mamba-based object detection model. By combining architectural modifications and knowledge distillation on the object TABLE II COMPARISON OF EDGENAVMAMBA WITH PRIOR MODELS ON THE SHAPES DATASET. YOLOV5S [26], [37] (32-BIT) AND SQUEEZED EDGE YOLO [26] (8-BIT) DO NOT REPORT FLOPS. Block Params Size FLOPs mAP YOLOv5s [26] 7.3 M 237 MB - 0.96 Squeezed Edge YOLO [26] 931 k 7.5 MB - 0.95 EdgeMambaNav-TR 2.4 M 9.1 MB 0.47 G 0.93 EdgeMambaNav-ST 639 k 2.5 MB 0.15 G 0.93 Fig. 9. Energy and performance comparison of proposed EdgeNavMambaTR and EdgeNavMamba-ST on Jetson Orin Nano and Raspberry Pi 5 16GB. detection model, we achieved a 31% reduction in the number of parameters compared to the baselines while preserving detection accuracy. The student model also, achieved a reduction of 67% in size, and up to 73% in energy per inference on edge devices of NVIDIA Jetson Orin Nano and Raspberry Pi 5, while keeping the same performance as the teacher model, emphasizing the efficiency of the edge model. Navigation results in the MiniWorld simulator demonstrate over 90% success rate in various environment complexities. REFERENCES [1] Y. Wang et al., "Computation-efficient deep learning for computer vision: A survey," Cybernetics and Intelligence, pp. 1-24, 2024. [2] M. Navardi et al., "Genai at the edge: Comprehensive survey on empowering edge devices," Proceedings of the AAAI SSS, 2025. [3] U. Kallakuri et al., "ATLAS: Adaptive landmark acquisition using llmguided navigation," in Proceedings of the First Vision and Language for Autonomous Driving and Robotics Workshop. OpenReview.net, 2024. [4] M. Walczak et al., "ATLASv2: Llm-guided adaptive landmark acquisition and navigation on the edge," , 2025. [5] N. Tahir et al., "Edge computing and its application in robotics: A survey," arXiv preprint , 2025. [6] R. Aalishah et al., "Medmambalite: Hardware-aware mamba for medical image classification," 2025, 21st IEEE Biomedical Circuits and Systems Conference (BioCAS) 2025. [7] Y. Xu et al., "Edge deep learning in computer vision and medical diagnostics: a comprehensive survey," Artificial Intelligence Review, vol. 58, no. 93, 2025. [8] S. H. Lee et al., "Fast on-device learning framework for single-image super-resolution," IEEE Access, vol. 12, pp. 37 276-37 287, 2024. [9] R. Aalishah et al., "Mambalitesr: Image super-resolution with low-rank mamba using knowledge distillation," in Proceedings of the International Symposium on Quality Electronic Design (ISQED), 2025. [10] M. Navardi et al., "Metatinyml: End-to-end metareasoning framework for tinyml platforms," IEEE Embedded Systems Letters, 2024. [11] A. N. Mazumder et al., "A survey on the optimization of neural network accelerators for micro-ai on-device inference," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2021. [12] Y. Xie et al., "YOLO-ACE: A Vehicle and Pedestrian Detection Algorithm for Autonomous Driving Scenarios Based on Knowledge Distillation of YOLOv10," IEEE IoT Journal, Aug. 2025. [13] M. Walczak et al., "Eden: Entorhinal driven egocentric navigation toward robotic deployment," arXiv preprint , 2025. [14] J. L. Mela et al., "Yolo-based power-efficient object detection on edge devices for usvs," Journal of Real-Time Image Processing, 2025. [15] J. Redmon et al., "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016, pp. 779-788. [16] Z. Wang et al., "Mamba yolo: A simple baseline for object detection with state space model," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 8, pp. 8205-8213, 2025. [17] A. Gu and T. Dao, "Mamba: Linear-time sequence modeling with selective state spaces," arXiv preprint , 2023. [18] W. Zixiang et al., "Research on autonomous robots navigation based on reinforcement learning," 2024 3rd International Conference on Robotics, Artificial Intelligence and Intelligent Control (RAIIC), pp. 78-81, 2024. [19] R. S. Sutton and A. G. Barto, "Reinforcement learning: An introduction," MIT Press, 1998. [20] M. Navardi et al., "Metae2rl: Toward metareasoning for energy-efficient multi-goal reinforcement learning with squeezed edge yolo," IEEE Micro, 2023. [21] G. Hinton et al., "Distilling the knowledge in a neural network," 2015, nIPS 2014 Deep Learning Workshop. [22] NVIDIA, "Jetson orin nano developer kit getting started - nvidia developer," https://developer.nvidia.com/embedded/learn/get-startedjetson-orinnano-devkit, 2025, accessed: 2025-06-06. [23] Raspberry Pi, "Getting started with your raspberry pi," https://www.raspberrypi.com/documentation/computers/gettingstarted.html, 2025, accessed: 2025-06-06. [24] M. Navardi et al., "Genai at the edge: Comprehensive survey on empowering edge devices," in Proceedings of the AAAI Symposium Series, vol. 5, no. 1, 2025, pp. 180-187. [25] S. Han et al., "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding," arXiv preprint , 2015. [26] E. Humes et al., "Squeezed edge yolo: Onboard object detection on edge devices," ML with New Compute Paradigms (MLNCP) Workshop at NeurIPS, arXiv preprint , 2023. [27] C.-Y. Wang and H.-Y. M. Liao, "Yolov9: Learning what you want to learn using programmable gradient information," 2024. [28] Y. Tian, Q. Ye, and D. Doermann, "Yolov12: Attention-centric real-time object detectors," arXiv preprint , 2025. [29] L. Zhu et al., "Vision mamba: Efficient visual representation learning with bidirectional state space model," in Proceedings of the International Conference on Machine Learning (ICML), 2024. [30] A. Hatamizadeh and J. Kautz, "Mambavision: A hybrid mambatransformer vision backbone," arXiv preprint , 2024. [31] B. Yang et al., "Multidistiller: Efficient multimodal 3d detection via knowledge distillation for drones and autonomous vehicles," Drones, vol. 9, no. 5, p. 322, 2025. [32] V. Mnih et al., "Human-level control through deep reinforcement learning," nature, vol. 518, no. 7540, pp. 529-533, 2015. [33] J. Schulman et al., "Proximal policy optimization algorithms," arXiv preprint , 2017. [34] S. Nahavandi et al., "A comprehensive review on autonomous navigation," ACM Computing Surveys, vol. 57, no. 9, pp. 1-67, 2025. [35] M. Navardi et al., "Toward real-world implementation of deep reinforcement learning for vision-based autonomous drone navigation with mission," UMBC Student Collection, 2022. [36] R. Bellman, "A markovian decision process," Journal of Mathematics and Mechanics, vol. 6, no. 5, pp. 679-684, 1957. [37] T. Manjunath et al., "Reprohrl: Towards multi-goal navigation in the real world using hierarchical agents. on 37th aaai conference on artificial intelligence," in The 1st Reinforcement Learning Ready for Production workshop, 2023. [38] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1. IEEE, 2001, pp. I-I. [39] K. He et al., "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [40] Y. Liu et al., "Vmamba: Visual state space model," arXiv preprint , 2024.
2510.14942
GroundedPRM: Tree-Guided and Fidelity-Aware Process Reward Modeling for Step-Level Reasoning Yao Zhang 1,6 ∗† Yu Wu 2 † Haowei Zhang 3 ‡ Weiguo Li 4 Haokun Chen1 Jingpei Wu1 Guohao Li5 Zhen Han 1 ∗ Volker Tresp1,6 1 LMU Munich 2 Technical University of Munich 3 Fudan University 4 University Heidelberg 5 University of Oxford 6 Munich Center for Machine Learning Abstract Process Reward Models (PRMs) aim to improve multi-step reasoning in Large Language Models (LLMs) by supervising intermediate steps and identifying errors throughout the reasoning process. However, building effective PRMs remains chal- lenging due to the lack of scalable, high-quality annotations. Existing approaches rely on costly human labeling, LLM-based self-evaluation that is prone to hallu- cination, or Monte Carlo (MC) estimation, which infers step quality solely from rollout outcomes and often introduces noisy, misaligned supervision due to credit misattribution. These issues result in three core limitations: noisy rewards, low factual fidelity, and misalignment with step-level reasoning objectives. To address these challenges, we introduce GroundedPRM, a tree-guided and fidelity-aware framework for automatic process supervision. To reduce reward noise and enable fine-grained credit assignment, we construct structured reasoning paths via Monte Carlo Tree Search (MCTS). To eliminate hallucinated supervision, we validate each intermediate step using an external tool, providing precise, execution-grounded correctness signals. To combine both step-level validation and global outcome as- sessment, we design a hybrid reward aggregation mechanism that fuses tool-based verification with MCTS-derived feedback. Finally, we format the reward signal into a rationale-enhanced, generative structure to promote interpretability and com- patibility with instruction-tuned LLMs. GroundedPRM is trained on only 40K automatically labeled samples, amounting to just 10% of the data used by the best- performing PRM trained with auto-labeled supervision. Nevertheless, it achieves up to a 26% relative improvement in average performance on ProcessBench. When used for reward-guided greedy search, GroundedPRM outperforms even PRMs trained with human-labeled supervision, offering a scalable and verifiable path toward high-quality process-level reasoning. Our code is publicly released at github.com/GroundedPRM. 1 Introduction Large Language Models (LLMs) [1, 9, 31] have demonstrated impressive capabilities in planning [13, 44], decision-making [19], and complex task execution [38, 45]. However, they remain prone to hallucinations and reasoning errors, particularly in multi-step tasks such as mathematical problem solving. Existing methods like Chain-of-Thought prompting [36, 40] and Test-Time Scaling [21, 27] improve final accuracy, yet LLMs often produce solutions that appear coherent while containing errors in reasoning or calculation. These issues are further exacerbated by outcome-level supervision ∗Corresponding authors: yzhang@dbs.ifi.lmu.de, hanzhen02111@163.com †Equal contribution ‡Work done during an internship at Technical University of Munich Preprint. arXiv:2510.14942v1 [cs.AI] 16 Oct 2025 and coarse decoding strategies, e.g., majority voting, which overlook step-level correctness and provide little guidance during intermediate reasoning. To mitigate these shortcomings, Process Reward Models (PRMs) have emerged as a promising direc- tion [20]. PRMs assign step-level scores to reasoning trajectories, enabling fine-grained supervision that supports better control and interpretability in multi-step reasoning. However, developing effective PRMs remains challenging due to the lack of reliable and faithful reward signals for training. Human annotation [20], while accurate, is costly and unscalable. LLM-as-a-judge [48] is more efficient but susceptible to hallucination, often rewarding fluent yet incorrect reasoning and thus compromising factual fidelity. Monte Carlo (MC) estimation [22, 35] provides another alternative by inferring step quality from final rollout outcomes, but it introduces noisy reward due to credit misattribution: correct steps may be penalized if the rollout fails, while flawed steps may be rewarded if the final answer happens to be correct [46]. Moreover, MC estimation typically evaluates only final outcomes, ignoring explicit assessment of intermediate step correctness, which misaligns the supervision signal with the objective of step-wise reasoning accuracy. Several recent works have attempted to refine MC-based supervision, but core limitations persist. OmegaPRM [22] uses a binary search strategy to locate the first incorrect step, but still relies on rollout success to infer correctness, leaving credit assignment coarse. Qwen2.5-Math-PRM [46] filters samples based on agreement between MC estimation and LLM judgments, but this strategy inherits hallucination bias and scores each step solely based on rollout outcomes, without assessing whether it contributes to or hinders correct reasoning. BiRM [7] augments PRM with a value head to predict future success probability, but both reward and value signals are derived from noisy rollouts and lack external validation. These approaches offer partial improvements, yet remain constrained by outcome-based heuristics, hallucination-prone feedback, or weak step-level credit modeling. To address these challenges, we propose GroundedPRM, a tree-guided and fidelity-aware framework for automatic process supervision. GroundedPRM is designed to resolve three core limitations in existing PRMs: noisy rewards, low factual fidelity, and misalignment with step-level reasoning objectives. First, to reduce reward noise and improve credit attribution, GroundedPRM leverages Monte Carlo Tree Search (MCTS) to construct structured reasoning paths and assess each step based on its contribution within the trajectory. Second, to ensure factual grounding, each intermediate step is verified using an external math tool, producing correctness signals based on executable logic rather than LLM-generated feedback, thereby eliminating hallucinated supervision. Third, to combine step- level validation with global outcome assessment, we design a hybrid reward aggregation mechanism that fuses tool-based verification with MCTS-derived feedback. Finally, all rewards are formatted into binary decisions paired with rationale-enhanced justifications, enabling interpretable supervision signals that are compatible with LLM-based generation and downstream reasoning workflows. We evaluate GroundedPRM on ProcessBench and observe substantial gains in both data efficiency and overall performance. It is trained on only 40K automatically labeled samples, just 10% of the data used by the best-performing PRM trained with auto-labeled supervision, yet achieves up to a 26% relative improvement in average performance. Furthermore, when deployed in reward-guided greedy search, where candidate steps are selected based on predicted reward, GroundedPRM surpasses even PRMs trained with human-labeled supervision, establishing new state-of-the-art results across multiple mathematical reasoning benchmarks. These findings highlight the effectiveness, scalability, and practical value of our structured and fidelity-aware supervision framework for both training and inference. The key contributions of this work are: 1. We propose GroundedPRM, a tree-guided and fidelity-aware process reward modeling framework that leverages MCTS to construct structured reasoning paths and support step- level credit assignment. 2. We introduce a fidelity-aware verification mechanism that validates each reasoning step using an external math tool, ensuring correctness grounded in executable logic and eliminating hallucinated supervision. 3. We design a hybrid reward aggregation mechanism that integrates tool-based step validation with feedback derived from MCTS-guided reasoning paths. 2 4. We format rewards into a rationale-enhanced, generative structure to improve interpretability and enable seamless integration into inference-time decoding and downstream reasoning workflows. 5. We demonstrate strong data efficiency and inference performance by evaluating Grounded- PRM on ProcessBench and reward-guided greedy search. 2 Related Work 2.1 Mathematical Reasoning with LLMs Large Language Models (LLMs) have shown remarkable progress in solving math problems via Chain- of-Thought (CoT) reasoning, where step-by-step solutions often improve final answer accuracy [36]. Building on this, recent efforts have focused on enhancing reasoning capabilities through pretraining on math-related corpora [15, 25, 42], instruction tuning with annotated derivations [19, 40, 41, 43], and prompting strategies tailored for math tasks [4, 14, 17]. Despite these improvements, LLMs remain vulnerable to intermediate reasoning errors, even when final answers are correct [47]. This discrepancy undermines the reliability of generated solutions, motivating the use of external verification or inference-time selection strategies [9, 26, 32]. Such approaches typically operate at the output level, offering limited supervision for correcting internal steps. Unlike prior methods that intervene at the output level, our approach supervises the reasoning process itself via step-level reward modeling, enabling finer-grained error identification and ensuring more faithful alignment with step-level reasoning and factual correctness. 2.2 Process Reward Models for Step-Level Supervision To enhance reasoning fidelity and identify intermediate errors, PRMs have emerged as a promising alternative to outcome-level supervision [20, 33]. PRMs evaluate the correctness of individual reasoning steps and have been shown to improve alignment and generalization across math tasks [35, 46]. A key challenge lies in generating reliable step-level annotations. Early methods rely on expert-labeled datasets such as PRM800K [20], which are expensive to scale. Recent work has explored automatic synthesis through MC estimation [22, 35], often leveraging rollout outcomes to infer step validity. However, MC-based supervision introduces noise due to credit misattribution and dependency on the quality of the completion model [46, 47]. To mitigate this, several methods combine MC with LLM-as-a-judge consensus filtering [46] or adopt preference-based learning frameworks [6]. In contrast, our method GroundedPRM constructs PRM supervision from the ground up by integrating tree-structured search via MCTS [5], step-level verification with external math engines, and fused value–correctness reward modeling. This pipeline produces reward signals that are verifiable, structurally grounded, and directly aligned with step-level reasoning objectives, effectively addressing the fidelity and alignment issues that prior methods leave unresolved. 3 Methodology GroundedPRM is designed to address three core limitations of existing process reward modeling methods: noisy rewards, low factual fidelity caused by hallucinated self-assessment, and misalignment with step-level reasoning objectives. These challenges call for a framework that can assign fine- grained credit, validate the factual correctness of individual steps, and integrate local and global signals into a reliable and interpretable supervision objective. To this end, GroundedPRM introduces a tree-guided and fidelity-aware reward modeling framework composed of four core components. First, it employs Monte Carlo Tree Search (MCTS) to construct structured reasoning paths and assess each step based on its contribution within the search trajectory, enabling more stable and attribution-aware supervision than flat sampling-based methods. Second, it verifies each intermediate step using an external tool, producing binary correctness labels grounded in executable logic and thereby mitigating hallucinated feedback from the model. Third, it unifies verified step-level signals and final-answer correctness into a joint supervision objective, maintaining fine-grained credit assignment while offering stable and reasoning-grounded supervision. Finally, the reward supervision is formatted into a rationale-enhanced generative structure, pairing each step with both a binary score and an explanation to support interpretability and compatibility 3 Action a: Simplifying the equation: 60x - 30(20 - x) = 660, which yields x = 14. Query q: Solve[{x + y == 20, 60x - 30y == 660}, {x, y}] Let’s break down the problem to verify the solution. For simplifying the equation: 60x - 600 + 30x = 660. x = 14 ... The result from the tool is: x = 14 and y = 6 ... The calculations are correct Update Rollout Reward u Tool answer: x = 14 and y = 6 Step Verification v Problem P: Find all (x, y) satisfying x + y = 20 and 60x - 30y = 660 Step1 Selection Step2 Expansion Step3 Simulation Step4 Backpropagation Current Node Real Node Simulation Node Correct Node Incorrect Node LLM Math Tool Ground Truth Final Outcome Correctness F Figure 1: Overview of the GroundedPRM framework. GroundedPRM constructs reasoning paths via MCTS, where each node corresponds to an LLM-generated step. During simulation, intermediate steps are verified using an external tool, and final answers are checked against ground truth. Step-level and outcome-level correctness signals are aggregated into a rollout reward, which is backpropagated along the tree to update node statistics; the next node is then selected by UCT, continuing the MCTS search until convergence or budget exhaustion. The framework enables verifiable, interpretable, and structure-aware process supervision for multi-step reasoning. The generative rationale provides interpretable feedback for each step. with instruction-tuned LLMs. An overview of this framework is illustrated in Fig. 1. We provide the full algorithmic pseudocode in Appendix A. 3.1 Tree-Guided Reasoning Path Construction To enable stable and attribution-aware process supervision, GroundedPRM employs MCTS to construct structured reasoning paths for each input problem P. Each node in the search tree is associated with a partial reasoning state s = {s1, . . . , si}, representing the sequence of previously generated reasoning steps. In addition to the state, each node stores auxiliary information including tool queries q, verification outcomes v, and value estimates Q. A reasoning step is represented as an action a, defined as a natural language expression generated by the LLM that extends the current reasoning state, transitioning it from state s to a new state s′. The value function Q(s, a) estimates the expected return of applying action a in state s, and is updated through feedback from simulated rollouts. The search process consists of four stages: Selection. Starting from the root node, the algorithm recursively selects child nodes according to a tree policy until reaching a node that is not fully expanded. To balance exploration and exploitation, we use the Upper Confidence Bound for Trees (UCT) [16], which balances estimated value with an exploration bonus that decreases as a node is visited more often, thereby encouraging the search toward promising yet under-explored nodes. The UCT score for each candidate action a at state s is computed as: UCT(s, a) = Q(s, a) + c · s log N(s) N(s, a) , (1) where N(s) and N(s, a) are the visit counts of the parent and child nodes, respectively; and c is a hyperparameter controlling the exploration strength. Expansion. If the selected node is not terminal, it is expanded by sampling K new actions from LLM, each producing a distinct child state s′. We set K = 3 in our experiments. This constrains the branching factor while maintaining reasoning diversity. Simulation. From the newly expanded node, we simulate a complete reasoning trajectory by sequentially sampling steps si+1, . . . , sT until the model produces a final answer. We sample from 4 the current state using the LLM in a left-to-right fashion to complete the solution. For each step sj where j ∈{i + 1, ..., T −1}, we obtain a binary correctness label vj ∈{−1, 1} using the tool-based verification procedure described in § 3.2. Additionally, the final answer is compared against the ground-truth solution to determine the overall outcome F ∈{−1, 1}. We adopt signed labels {-1,+1} instead of {0,1} so that incorrect steps propagate negative feedback, thereby decreasing node values during MCTS rather than being treated as neutral. These per-step and final correctness signals are subsequently aggregated into a single rollout reward ui, as defined in § 3.3. Backpropagation. The reward u computed for the simulated trajectory is propagated backward along the path traversed during selection. For each visited state-action pair (sk, ak) at depth dk from the terminal node, we update its value as: Q(sk, ak) ←Q(sk, ak) + γdk · (ui + vi), (2) where k ∈{0, ..., i −1}, γ ∈(0, 1) is a decay factor controlling temporal discount, and dk denotes the number of steps from node i. This update scheme assigns stronger credit to steps closer to the final outcome, aligning attribution with their causal impact in the reasoning process. By iteratively executing the four MCTS stages, GroundedPRM constructs a structured and diverse distribution over reasoning paths. This search process prioritizes trajectories with high step-level validity and globally correct outcomes, yielding supervision signals that are both structure-aware and attribution-sensitive. The resulting credit assignments are more stable and fine-grained than those produced by flat Monte Carlo rollouts, directly addressing reward noise and misattribution. Multiple rollouts are performed per input to balance path diversity with search efficiency. 3.2 Fidelity-Aware Step Verification with External Tool To ensure reward fidelity and eliminate hallucinated supervision, GroundedPRM integrates step-level verification into each reasoning step via external tools. During simulation (see § 3.1), the LLM generates a sequence of reasoning steps {si+1, . . . , sT }, where each sj (i + 1 ≤j ≤T −1) denotes an intermediate reasoning step expressed in natural language during rollout. For each step sj, we construct a corresponding structured math query and submit it to an external math tool, such as Wolfram Alpha (WA) [37]. The tool’s response is parsed to determine whether the computation or transformation expressed in sj is factually correct. We represent this outcome as a binary verification label vj ∈{−1, 1}, where vj = 1 indicates successful verification and vj = −1 denotes failure. The resulting sequence {vi+1, . . . , vT −1} provides a fine-grained step-level correctness evaluation for the entire reasoning trace. These step-level signals are used during rollout to compute the aggregated reward u (see § 3.3). Unlike LLM-based self-evaluation, which often overestimates fluent but invalid reasoning, this fidelity-aware mechanism grounds supervision in objective, tool-based verification. While WA is used in our experiments due to its strong mathematical solving capabilities, such as equation solving and equivalence checking, our verification module is tool-agnostic. It supports integration with alternatives like SymPy [23] or domain-specific solvers. This modular design ensures that GroundedPRM generalizes across reasoning domains while maintaining high verification precision. 3.3 Hybrid Reward Aggregation To construct reward signals that are both verifiable and forward-looking, GroundedPRM introduces a hybrid aggregation mechanism that combines step-level verification with trajectory-level outcome assessment. This design balances two supervision objectives: (1) factual fidelity of intermediate reasoning steps, and (2) global correctness of the final answer. Given a simulated reasoning trace of length T, we collect step-level correctness signals {vi+1, . . . , vT −1}, where each vi ∈{−1, 1} is obtained via external tool verification (see § 3.2). In addition, we evaluate the final answer against ground truth to obtain a binary outcome signal F ∈{−1, 1}. These signals are aggregated into a single scalar reward: ui = 1 T −1 −i T −1 X j=i+1 dj · vj + β · F, (3) 5 where β ≥0 is a weighting coefficient that adjusts the contribution of final answer correctness relative to step-level reliability. The resulting reward u is used during backpropagation in MCTS (see § 3.1) to update value estimates and guide exploration. We further define the MCTS value estimate at each state–action pair (si, ai) as: Q(si, ai) = ui + vi. By fusing local and global correctness signals, this hybrid reward formulation offers more stable and interpretable supervision than prior MC-based methods that rely solely on rollout success. Moreover, this mechanism directly addresses the three core limitations of existing PRMs: it reduces reward noise via structure-aware simulation, avoids unverifiable supervision through external tool-based validation, and aligns the reward objective with both step-wise precision and task-level success. 3.4 Generative Process Reward Model GroundedPRM adopts a generative reward modeling paradigm, enabling seamless integration with instruction-tuned LLMs and providing supervision for open-ended reasoning workflows. Each training instance is structured as a rationale-enhanced sequence that pairs intermediate reasoning steps with corresponding verification outcomes and justifications. Formally, each instance includes: (1) the original problem P; (2) the full reasoning trajectory {s1, . . . , sT }; (3) binary labels indicating the sign of the aggregated reward, combining tool-verified step fidelity and rollout outcome signals; and (4) natural-language explanations derived from external tool feedback, retained after consistency filtering to align with the verified binary labels. Unlike conventional discriminative reward models that treat reward prediction as a binary classifica- tion task, we train GroundedPRM autoregressively to generate both correctness labels and rationales conditioned on the problem and its intermediate reasoning trace. This generative formulation improves interpretability and enables seamless integration into LLM-based reasoning pipelines. 3.5 Data Construction for GroundedPRM Training To train GroundedPRM, we apply the full supervision framework described above to the MATH dataset [11], constructing a reward-labeled dataset with tool-based step-level verification and hybrid scoring. For each problem, the policy model generates intermediate reasoning steps, which are verified using external tools (see § 3.2). Each step is labeled based on tool-verified correctness, and the full trajectory is scored using the hybrid reward mechanism introduced in § 3.3. To ensure coverage and diversity, we adopt a multi-round MCTS rollout strategy that explores both optimal and suboptimal paths. Post-processing includes filtering incomplete, inconsistent, or tool-unverifiable traces, and formatting the final data into a rationale-enhanced generative structure (see § 3.4). Each instance includes the problem, a full reasoning trace, correctness labels, and explanations. The resulting dataset contains approximately 40K verified samples, covering a broad spectrum of problem types and reasoning strategies with high tool-verified fidelity. Exact generation and verification prompts are given in Appendix B. 4 Experiment 4.1 Experimental Setup Benchmarks. We evaluate GroundedPRM from two perspectives: its ability to accurately identify erroneous steps within multi-step reasoning processes, and its effectiveness in directly enhancing downstream task performance. • ProcessBench [47]. This benchmark evaluates the ability of reward models to supervise step-level reasoning in mathematical problems. Each instance includes an LLM-generated solution with the first incorrect step annotated by human experts. Models are evaluated based on their ability to accurately identify the first faulty step or confirm that all steps are valid, following standard PRM evaluation protocols. • Reward-Guided Greedy Search. To further assess the utility of GroundedPRM in guid- ing multi-step reasoning, we perform inference-time decoding using a reward-guided greedy strategy. At each generation step, we sample N = 8 candidate actions from Qwen2.5-7B-Instruct [24] using a temperature of 1, and select the candidate with the highest 6 Table 1: F1 scores on ProcessBench for models trained with auto-labeled data. Models marked with ∗ share the same base model: Qwen2.5-Math-7B-Instruct. GroundedPRM achieves the highest average F1, surpassing the strongest existing model, Math-Shepherd-PRM-7B, by 26% relative improvement while using only 10% of the training data. All baseline results are directly cited from [46]. Oly. denotes OlympiadBench. Full results are provided in Appendix D. Model #Sample GSM8K MATH Oly. Omni-MATH Avg. RLHFlow-DeepSeek-8B 253K 38.8 33.8 16.9 16.9 26.6 RLHFlow-Mistral-8B 273K 50.4 33.4 13.8 15.8 28.4 Qwen2.5-Math-7B-Math-Shepherd∗ 445K 62.5 31.6 13.7 7.7 28.9 EurusPRM-Stage1∗ 453K 44.3 35.6 21.7 23.1 31.2 EurusPRM-Stage2∗ 230K 47.3 35.7 21.2 20.9 31.3 Math-Shepherd-PRM-7B 445K 47.9 29.5 24.8 23.8 31.5 GroundedPRM 40K 43.4 47.0 33.8 34.4 39.7 predicted reward assigned by the PRM. This process is repeated iteratively until a com- plete solution is generated. We evaluate this procedure on six mathematical benchmarks: AMC23 [3], AIME24 [2], MATH [11], College MATH [30], OlympiadBench [10], and Minerva MATH [18]. We also report the result of pass@n, i.e., the proportion of test samples where any of the n samplings lead to the correct final answers. Baselines. For both ProcessBench and reward-guided greedy search experiments, we compare GroundedPRM against the following representative baselines. These baselines span a diverse set of supervision strategies, including models trained with human-labeled rewards, automated annotations, and hybrid approaches, as well as a range of training data scales. • Math-Shepherd [35]: Utilizes MC estimation to perform automated step-level annotation with hard labels. • RLHFlow-PRM series [8]: Includes DeepSeek and Mistral variants, both of which use MC estimation for data generation, but adopt the Direct Preference Optimization (DPO) training paradigm. • Math-PSA-7B [34]: Trained on mixed annotated data, namely PRM800K [20], Math- Shepherd [35], and generated data following [22]. • EurusPRM-series [28]: EurusPRM-Stage1 and EurusPRM-Stage2 construct weakly super- vised labels from final outcomes using noise-aware heuristics. • Qwen2.5-Math-7B series [47, 46]: Qwen2.5-Math-7B-Math-Shepherd and Qwen2.5-Math- 7B-PRM800K are trained with Math-Shepherd [35] and PRM800K [20] using Qwen2.5- Math-7B-Instruct [40], respectively. • Llemma-PRM800K-7B [29]: Utilizes MC estimation to perform automated step-level annotation with hard labels. • ReasonEval-7B [39]: Prompt-based model for evaluating step validity and redundancy. Implementation Details. All reward models are fine-tuned on step-labeled reasoning trajectories using LoRA [12] for parameter-efficient adaptation. We use Qwen2.5-7B-Instruct [24] as the base model. Complete training hyperparameters are listed in Appendix C. 4.2 Results on ProcessBench GroundedPRM Achieves Strong Supervision Performance with High Data Efficiency. As shown in Tab. 1, GroundedPRM achieves the highest average F1 score among all PRMs trained with automatically labeled data, outperforming the second-best model, Math-Shepherd-PRM-7B, by a relative improvement of 26% while using only 10% training samples. GroundedPRM also ranks first on MATH, OlympiadBench, and Omni-MATH, indicating strong capability in evaluating complex mathematical reasoning steps. These results reinforce our central hypothesis: verifiable, structure- guided supervision is substantially more effective than scale alone. GroundedPRM’s fidelity-aware 7 Table 2: F1 scores of GroundedPRM and Qwen2.5-Math-7B-PRM800K under matched training sizes. Both methods are trained using Qwen2.5-7B-Instruct but differ in supervision sources. Despite relying solely on automatically labeled data, GroundedPRM consistently outperforms Qwen2.5-Math- 7B-PRM800K across all data scales. Oly. denotes OlympiadBench. #Sample Model GSM8K MATH Oly. Omni-MATH Avg. 10K Qwen2.5-Math-7B-PRM800K 30.3 31.6 21.9 19.8 25.9 GroundedPRM 39.0 41.9 29.4 29.8 35.0 20K Qwen2.5-Math-7B-PRM800K 37.4 32.9 29.9 30.6 32.7 GroundedPRM 39.9 44.0 30.1 31.4 36.4 30K Qwen2.5-Math-7B-PRM800K 37.5 40.0 28.4 34.8 35.2 GroundedPRM 42.1 47.4 30.7 31.7 38.0 40K Qwen2.5-Math-7B-PRM800K 43.1 46.0 32.9 34.0 39.0 GroundedPRM 43.4 47.0 33.8 34.4 39.7 Table 3: F1 scores on ProcessBench under different supervision and inference configurations within the GroundedPRM framework. Step-Only and Outcome-Only variants remove one supervision source during training, while the Inference w/o Rationale variant skips rationale generation and outputs correctness labels directly. All variants share the same model architecture; the full version combines step-level verification, outcome consistency, and rationale generation. Method GSM8K MATH OlympiadBench Omni-MATH Avg. Step-Only 40.1 42.3 28.3 29.2 35.0 Outcome-Only 1.4 3.3 1.0 1.0 1.7 Inference w/o Rationale 34.1 34.7 22.7 23.7 28.8 GroundedPRM 43.4 47.0 33.8 34.4 39.7 rewards, grounded in tool-based validation and MCTS-based credit assignment, enable efficient learning under low-resource constraints. Generative Supervision Enhances Interpretability and Robust Generalization. Unlike prior PRMs that produce only binary decisions, GroundedPRM adopts a generative format that outputs both a step-level reward and an accompanying rationale. This design improves alignment with instruction-tuned LLMs, encourages interpretable supervision, and enables the model to better distinguish between fluent but incorrect reasoning and truly valid logic. Empirically, GroundedPRM achieves notable improvements on challenging benchmarks like OlympiadBench and MATH, where fine-grained error localization is essential. These results suggest that explanation-based rewards foster more robust and generalizable reasoning behavior. 4.3 Analysis and Discussions GroundedPRM Provides Superior Data Efficiency through Structured and Fidelity-Aware Supervision. To compare the effectiveness of our automatically labeled supervision against human- labeled reward models under identical data budgets, we conduct a controlled comparison with the Qwen2.5-PRM series using the same model architecture, i.e., Qwen2.5-7B-Instruct, and matched training sizes. For each training size, we randomly sample a subset of examples to ensure a fair comparison. This setup isolates the effect of supervision quality by ensuring that both methods are evaluated under the same data scale. As shown in Tab. 2, GroundedPRM consistently achieves higher F1 scores across all training sizes, despite relying entirely on automatically constructed labels. Dual-Signal Supervision Enhances Data Fidelity and Credit Attribution. To assess the contribu- tion of our dual-signal supervision, we compare GroundedPRM against two ablations: Outcome-Only Supervision, which assigns labels based solely on final-answer correctness from MCTS rollouts, and 8 Table 4: Accuracy of reward-guided greedy search using different PRMs to supervise the Qwen2.5- 7B-Instruct policy model. GroundedPRM outperforms all PRMs trained with human, mixed, or automated labels, achieving the highest average accuracy. Oly. denotes OlympiadBench. Model #Sample AMC23 AIME24 MATH College Oly. Minerva Avg. pass@1 - 50.0 10.0 73.4 48.5 30.0 29.8 40.3 pass@8(Upper Bound) - 82.5 20.0 90.4 61.0 48.0 49.6 58.6 Reward-Guided Greedy Search (prm@8) Trained on Human Annotated Data (PRM800K) Qwen2.5-Math-7B-PRM800K 264K 60.0 10.0 75.6 36.5 23.5 29.0 39.1 Llemma-PRM800K-7B 350K 42.5 6.7 72.2 47.5 27.6 29.5 37.7 ReasonEval-7B 350K 52.5 6.7 76.0 33.8 33.8 30.0 41.9 Trained on a Mix of Human and Automated Annotation Data Math-PSA-7B 860K 47.5 13.3 69.8 46.0 27.6 33.5 39.6 Trained on Automated Annotation Data Math-Shepherd-PRM-7B 445K 45.0 10.0 74.8 48.5 28.0 29.0 39.2 RLHFlow-DeepSeek-8B 253K 50.0 6.7 74.2 48.0 30.9 27.5 39.5 RLHFlow-Mistral-8B 273K 37.5 13.3 74.8 50.5 29.8 30.0 39.3 EurusPRM-Stage1 453K 47.5 10.0 73.0 49.0 30.1 31.0 40.1 EurusPRM-Stage2 230K 45.0 13.3 73.6 51.0 31.6 32.5 41.1 GroundedPRM 40K 57.5 10.0 74.8 49.0 31.3 32.5 42.4 Step-Only Supervision, which uses external tool verification without considering global trajectory outcomes. As shown in Tab. 3, Outcome-Only Supervision severely underperforms due to credit misattribution. Correct steps may be penalized if downstream steps fail, while flawed steps may be rewarded if the final answer happens to be correct. Step-Only Supervision achieves higher recall but suffers from precision loss, as external math tools can detect surface-level arithmetic errors but often fail to capture deeper logical flaws, resulting in false positives. A detailed example of this failure mode is provided in Appendix E.2. In contrast, GroundedPRM fuses step-level correctness signals with trajectory-level feedback, enabling accurate credit assignment that is grounded in both local fidelity and global reasoning success. This hybrid design achieves the highest average F1, demonstrating the effectiveness of our supervision framework in producing reliable and structurally aligned reward signals. Rationale Generation Enhances Consistency and Long-Horizon Reasoning. To assess the impact of rationale generation, we compare the full GroundedPRM with an Inference w/o Rationale variant that directly predicts correctness labels without generating explanations. As shown in Tab. 3, removing rationales leads to a consistent drop in F1 across all datasets, with larger gaps on more challenging benchmarks such as MATH and OlympiadBench. Generating intermediate justifications helps maintain step-level consistency, stabilize reward attribution, and localize reasoning errors in complex, long-horizon problems. Qualitative examples in Appendix E.1 further illustrate how rationale generation improves interpretability and factual grounding without compromising predictive accuracy. 4.4 Results on Reward-Guided Greedy Search As shown in Tab. 4, GroundedPRM, trained on only 40K automatically labeled examples, achieves the highest average accuracy across all PRMs, surpassing those trained on automated, mixed, or even large-scale human annotations. Within the automated annotation group, GroundedPRM achieves new state-of-the-art results on AMC23 and performs on par or better than all counterparts on MATH and Minerva. These results validate the effectiveness of the design: tool-grounded verification improves label fidelity, tree-guided path construction yields stable and attribution-aware credit assignment, and rationale-enhanced supervision delivers precise and verifiable step-level evaluation. By evaluating each candidate step with grounded feedback, GroundedPRM reliably guides the policy toward accurate multi-step reasoning without requiring external demonstrations or value-based lookahead. 9 5 Conclusion We introduced GroundedPRM, a tree-guided and fidelity-aware framework for process supervision. By combining structured path exploration via MCTS, tool-based step-level verification, hybrid reward aggregation, and rationale-enhanced supervision formatting, GroundedPRM addresses three core limitations of prior PRMs: low factual fidelity, noisy reward signals, and misalignment with step- level reasoning objectives. GroundedPRM is trained on only 40K automatically labeled samples, amounting to just 10% of the data used by the best-performing PRM trained with auto-labeled supervision. Nevertheless, it achieves up to a 26% relative improvement in average performance on ProcessBench. When used for reward-guided greedy search, GroundedPRM outperforms even PRMs trained with human-labeled supervision. These results underscore the effectiveness of structured, verifiable reward modeling in enhancing the reasoning capabilities of LLMs. 6 Future Work While GroundedPRM establishes a strong foundation for fidelity-aware and tree-guided process reward modeling, several natural extensions remain. Scaling the underlying LLM may further improve the quality and diversity of generated reasoning paths. Expanding the set of external verifiers beyond the mathematical tool used in this work could enhance flexibility and extend applicability across different reasoning domains. GroundedPRM is inherently tool-agnostic: a “tool” broadly refers to any fidelity verifier that provides execution-grounded feedback for intermediate reasoning steps, including model-based self-checkers, retrieval-augmented verifiers, and rule-based evaluators. Additionally, integrating human preference signals may further align supervision with interpretable and human-consistent reasoning. A further direction is to integrate GroundedPRM into reinforcement learning pipelines, where it serves as a verifiable reward function guiding policy optimization in long-horizon tasks. Such integration would enable process-level supervision under on-policy updates and reveal how structured rewards interact with exploration, search, and credit assignment. Although our experiments focus on the mathematical domain due to its established PRM benchmarks and baselines, the framework naturally generalizes to any domain where step-level fidelity can be defined and verified, offering a unified and scalable paradigm for grounded process supervision. References [1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. [2] AI-MO. Amc 2023, 2024b. aimo-validation-aime, 2024. [3] AI-MO. Amc 2023, 2024b. aimo-validation-amc. https://huggingface.co/ datasets/AI-MO/aimo-validation-amc, 2024. Accessed: 2025-07-30. [4] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024. [5] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1–43, 2012. [6] Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Step-level value preference optimization for mathematical reasoning. arXiv preprint arXiv:2406.10858, 2024. [7] Wenxiang Chen, Wei He, Zhiheng Xi, Honglin Guo, Boyang Hong, Jiazheng Zhang, Rui Zheng, Nijun Li, Tao Gui, Yun Li, et al. Better process supervision with bi-directional rewarding signals. arXiv preprint arXiv:2503.04618, 2025. 10 [8] Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. Rlhf workflow: From reward modeling to online rlhf. arXiv preprint arXiv:2405.07863, 2024. [9] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. [10] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024. [11] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021. [12] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022. [13] Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, and Enhong Chen. Understanding the planning of llm agents: A survey, 2024. [14] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023. [15] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720, 2024. [16] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pages 282–293. Springer, 2006. [17] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022. [18] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quan- titative reasoning problems with language models. Advances in neural information processing systems, 35:3843–3857, 2022. [19] Manling Li, Shiyu Zhao, Qineng Wang, Kangrui Wang, Yu Zhou, Sanjana Srivastava, Cem Gokmen, Tony Lee, Erran Li Li, Ruohan Zhang, et al. Embodied agent interface: Benchmarking llms for embodied decision making. Advances in Neural Information Processing Systems, 37:100428–100534, 2024. [20] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations, 2023. [21] Runze Liu, Junqi Gao, Jian Zhao, Kaiyan Zhang, Xiu Li, Biqing Qi, Wanli Ouyang, and Bowen Zhou. Can 1b llm surpass 405b llm? rethinking compute-optimal test-time scaling. arXiv preprint arXiv:2502.06703, 2025. [22] Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2024. [23] Aaron Meurer, Christopher P Smith, Mateusz Paprocki, Ondˇrej ˇCertík, Sergey B Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K Moore, Sartaj Singh, et al. Sympy: symbolic computing in python. PeerJ Computer Science, 3:e103, 2017. 11 [24] Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. [25] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024. [26] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute opti- mally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024. [27] Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling parameters for reasoning. In The Thirteenth International Conference on Learning Representations, 2025. [28] Lin Sun, Chuang Liu, Xiaofeng Ma, Tao Yang, Weijia Lu, and Ning Wu. Freeprm: Training process reward models without ground truth process labels. arXiv preprint arXiv:2506.03570, 2025. [29] Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision. Advances in Neural Information Processing Systems, 37:51118–51168, 2024. [30] Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. Mathscale: Scaling instruction tuning for mathematical reasoning. arXiv preprint arXiv:2403.02884, 2024. [31] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. [32] Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown. Hugging Face, 2024. [33] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022. [34] Jun Wang, Meng Fang, Ziyu Wan, Muning Wen, Jiachen Zhu, Anjie Liu, Ziqin Gong, Yan Song, Lei Chen, Lionel M Ni, et al. Openr: An open source framework for advanced reasoning with large language models. arXiv preprint arXiv:2410.09671, 2024. [35] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023. [36] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. [37] Wolfram Alpha LLC. Wolframalpha. https://www.wolframalpha.com/. [38] Zhiheng Xi, Yiwen Ding, Wenxiang Chen, Boyang Hong, Honglin Guo, Junzhe Wang, Dingwen Yang, Chenyang Liao, Xin Guo, Wei He, et al. Agentgym: Evolving large language model-based agents across diverse environments. arXiv preprint arXiv:2406.04151, 2024. [39] Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, and Pengfei Liu. Evaluating mathematical reasoning beyond accuracy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 27723–27730, 2025. 12 [40] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024. [41] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387, 2025. [42] Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong, Kuikun Liu, Ziyi Wang, et al. Internlm-math: Open math large language models toward verifiable reasoning. arXiv preprint arXiv:2402.06332, 2024. [43] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023. [44] Yao Zhang, Chenyang Lin, Shijie Tang, Haokun Chen, Shijie Zhou, Yunpu Ma, and Volker Tresp. Swarmagentic: Towards fully automated agentic system generation via swarm intelligence. arXiv preprint arXiv:2506.15672, 2025. [45] Yao Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. Webpilot: A ver- satile and autonomous multi-agent system for web task execution with strategic exploration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 23378– 23386, 2025. [46] Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301, 2025. [47] Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. Processbench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559, 2024. [48] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in neural information processing systems, 36:46595–46623, 2023. [49] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand, 2024. Association for Computational Linguistics. 13 Contents 1 Introduction 1 2 Related Work 3 2.1 Mathematical Reasoning with LLMs . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.2 Process Reward Models for Step-Level Supervision . . . . . . . . . . . . . . . . . 3 3 Methodology 3 3.1 Tree-Guided Reasoning Path Construction . . . . . . . . . . . . . . . . . . . . . . 4 3.2 Fidelity-Aware Step Verification with External Tool . . . . . . . . . . . . . . . . . 5 3.3 Hybrid Reward Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.4 Generative Process Reward Model . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3.5 Data Construction for GroundedPRM Training . . . . . . . . . . . . . . . . . . . 6 4 Experiment 6 4.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 4.2 Results on ProcessBench . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 4.3 Analysis and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4.4 Results on Reward-Guided Greedy Search . . . . . . . . . . . . . . . . . . . . . . 9 5 Conclusion 10 6 Future Work 10 A Algorithm Overview and Pseudocode 15 B Prompt Template for Data Annotation 15 C Training Hyperparameters 18 D Supplementary Evaluation Results 18 E Case Studies 18 E.1 Fidelity- and Error-Type–Aware Reasoning Supervision . . . . . . . . . . . . . . . 19 E.2 Dual-Signal Supervision Improves Data Fidelity and Credit Attribution . . . . . . 19 14 A Algorithm Overview and Pseudocode To facilitate reproducibility and provide an intuitive understanding of our reward modeling pipeline, we present the high-level pseudocode of GroundedPRM’s data generation algorithm. As described in Algo. 1, GroundedPRM integrates MCTS-based reasoning path construction, tool-based step-level verification, and hybrid reward aggregation into a unified supervision framework. Algorithm 1 Algorithm of GroundedPRM Require: Initial state s0, Node n, Node Value Q, Execution Round r, Max Rounds R, Max Children K, Tool Result v, Rollout Reward u, Node List L ←∅, Node Value List V ←∅, Visit Count N 1: Initialize: s0 ←sinitial, n ←n0, Q0 ←0 2: for r = 1 to R do 3: while n is fully expanded do 4: nselect ←Selection(n) {Select node with highest UCT score} 5: if nselect is terminal or has no children then 6: break 7: end if 8: n ←nselect 9: end while 10: a ←Generate(nselect) 11: for aj from a1 to aK do 12: si−1 ←State(nselect) 13: ni−1 ←nselect 14: si,j ←Transition(si−1, aj) 15: ni,j ←Children(ni−1, si,j) 16: Q(si) ←vi,j {Verify step with tool} 17: end for 18: if vi,j = max(v) then 19: nsim ←ni,j 20: end if 21: n ←nsim, s ←State(nsim) 22: while n ̸= nterminal do 23: s′ ←Transition(s, a) 24: n′ ←Children(n, s′) 25: V ←V + v′ 26: L ←L + n′ 27: n ←n′, s ←s′ 28: end while 29: F ←FinalAnswerCorrectness(n) {Compare to ground truth} 30: T ←Length(L), ni ←nsim 31: for all vj ∈V do 32: ui ← 1 T −1−i PT −1 j=i+1 dj · vj + β · F 33: end for 34: Q(si, ai) ←ui + vi {Aggregate reward} 35: for k = i −1 to 0 do 36: Qk ←Qk + γdk · Q(si, ai) 37: Nk ←Nk + 1 38: end for 39: end for B Prompt Template for Data Annotation To construct the step-level supervision for GroundedPRM, we adopt two structured prompt templates. The prompt in Fig. 2 is used to autoregressively generate intermediate reasoning steps during MCTS rollouts, producing structured trajectories consisting of step objectives and corresponding actions. The prompt in Fig. 3 is applied to verify each generated step using external tools, outputting binary 15 correctness labels along with rationale-enhanced explanations, which together form the fidelity-aware supervision signals used to train GroundedPRM. Figure 2: Prompt used to generate the next reasoning step during MCTS rollouts. The output consists of a structured step objective and a logically grounded action aligned with the current goal. These step-level generations are used to construct diverse reasoning trajectories for reward modeling in GroundedPRM. 16 Figure 3: Prompt used for tool-based step-level verification. The assistant analyzes the reasoning step for logical consistency, evaluates the relevance of Wolfram Alpha responses, and outputs a binary correctness label along with a structured rationale, forming the fidelity-aware supervision signal for GroundedPRM. 17 C Training Hyperparameters Tab. 5 lists the hyperparameters used to train GroundedPRM. We fine-tuned the model in a sequence- to-sequence manner using the LLaMA-Factory [49] Trainer implementation on 4×A100 80GB GPUs. Table 5: Training configuration for Qwen2.5-7B-Instruct. Parameter Value Model Qwen2.5-7B-Instruct Torch data type bfloat16 Attention implementation flash attention 2 Lora rank 128 Lora alpha 256 Per-device train batch size 4 Gradient accumulation steps 8 Learning rate 3.0 × 10−5 Number of training epochs 6 LR scheduler type cosine Max gradient norm 1.0 Warmup ratio 0.1 Seed 42 Optimizer Adam Gradient checkpointing True D Supplementary Evaluation Results We assess the step-level supervision quality of GroundedPRM on ProcessBench, comparing it to several strong PRM baselines trained with automated labels. As shown in Tab. 1 of the main paper, GroundedPRM achieves the highest average F1 score across all benchmarks, with notable gains on MATH, OlympiadBench, and Omni-MATH. These results highlight the effectiveness of our fidelity-aware, structure-guided reward modeling framework in generating accurate and reliable supervision, even under limited data budgets. Full results are provided in Tab. 6. Table 6: F1 scores on ProcessBench across four math benchmarks. GroundedPRM achieves the highest average F1 score among all PRMs trained with automatically labeled data, outperforming all prior methods by a significant margin, particularly on MATH, OlympiadBench, and Omni-MATH. Scoring Approach GSM8K MATH OlympiadBench Omni-MATH Avg. F1 error correct F1 error correct F1 error correct F1 error correct F1 RLHFlow-PRM-Deepseek-8B 24.2 98.4 38.8 21.4 80.0 33.8 10.1 51.0 16.9 10.9 51.9 16.9 26.6 RLHFlow-PRM-Mistral-8B 33.8 99.0 50.4 21.7 72.2 33.4 8.2 43.1 13.8 9.6 45.2 15.8 28.4 Qwen2.5-Math-7B-Math-Shepherd 46.4 95.9 62.5 18.9 96.6 31.6 7.4 93.8 13.7 4.0 95.0 7.7 28.9 EurusPRM-Stage1 46.9 42.0 44.3 33.3 38.2 35.6 23.9 19.8 21.7 21.9 24.5 23.1 31.2 EurusPRM-Stage2 51.2 44.0 47.3 36.4 35.0 35.7 25.7 18.0 21.2 23.1 19.1 20.9 31.3 Math-Shepherd-PRM-7B 32.4 91.7 47.9 18.0 82.0 29.5 15.0 71.1 24.8 14.2 73.0 23.8 31.5 GroundedPRM 31.9 67.9 43.4 36.0 67.5 47.0 23.4 60.5 33.8 23.8 61.4 34.4 39.7 E Case Studies We conduct qualitative case studies to demonstrate how GroundedPRM performs fine-grained, interpretable supervision across diverse reasoning scenarios. § E.1 illustrates its ability to detect arithmetic, algebraic, and constraint-based inconsistencies with high fidelity and structural awareness. § E.2 examines an ablation case that contrasts Step-Only and Dual-Signal supervision, revealing how dual-signal fusion enhances data fidelity and accurate credit attribution. 18 E.1 Fidelity- and Error-Type–Aware Reasoning Supervision We present three qualitative cases to illustrate how GroundedPRM delivers fidelity-aware, error- type–aware, and first-wrong-step localization in process supervision. Across all cases, a general- purpose LLM-as-judge fails to catch basic inconsistencies, whereas GroundedPRM recomputes the relevant quantities, checks constraints, and explains why a step is wrong. Case 1: Basic arithmetic aggregation (Fig. 4). A student sums nine quiz scores. The LLM solution totals them to 570, and the LLM-as-judge accepts this step. In contrast, GroundedPRM reproduces the additions step-by-step (50 →130 →210 →270 →310 →400 →500 →570 →630), recovers the correct total 630, and labels the presented step as incorrect. This shows fidelity-aware arithmetic checking and precise localization of the first wrong step. Case 2: Sum-of-pairs with spurious halving (Fig. 5). The problem gives three pairwise sums of products. The LLM correctly aggregates them to 210 but then unjustifiably divides by 2 to claim 105; the LLM-as-judge still marks the step as correct. GroundedPRM re-evaluates the algebra and confirms 210, explicitly naming the first error as a spurious halving and explaining why this single slip corrupts downstream reasoning. This evidences error-type awareness rather than mere outcome comparison. Case 3: Ratio bounded by inequalities and number-theoretic constraints (Figs. 6,7). Given b−a = 15 and 5 9 < a b < 4 7 with gcd(a, b) = 1, the LLM rewrites b = a+15 and proposes candidates; it eventually claims (a, b) = (26, 41), which violates the upper bound since 26 41 ≈0.6341 > 4 7 ≈ 0.5714. Fig. 6 shows the LLM-as-judge validating this wrong candidate by failing to enforce the bound. Fig. 7 shows GroundedPRM re-checking the inequality numerically, verifying b −a = 15 and coprimeness, and recovering a valid pair (19, 34) with 19 34 ≈0.5588 ∈( 5 9, 4 7). This two-part case highlights constraint checking beyond arithmetic (inequality bounds + number theory). Takeaways. Across the three cases, GroundedPRM (i) recomputes key quantities instead of trusting fluent text, (ii) localizes the first wrong step and names the error type (e.g., spurious halving), and (iii) verifies multi-constraint consistency (inequalities, differences, coprimeness). These behaviors support our central claim: fidelity-aware, structure-guided supervision yields more reliable step-level judgments than generic LLM-as-judge baselines, and produces actionable rationales aligned with process-level reasoning. E.2 Dual-Signal Supervision Improves Data Fidelity and Credit Attribution We illustrate the impact of dual-signal supervision through a math factorization example where a Step-Only verifier fails to detect a missing factor. By combining step-level verification with trajectory- level feedback, GroundedPRM identifies the omission and corrects the reasoning, demonstrating its advantage in aligning local fidelity with global consistency. Case 4: Prime factorization with an omitted factor (Figs. 8,9). An LLM computes the GCF of 180 and 168 but omits the factor 3 in 168 (using 23 × 7) and concludes 252; a Step-Only supervisor validates this step. In contrast, GroundedPRM re-derives 168 = 23 × 3 × 7, checks common primes with 180 = 22 × 32 × 5, and recovers the correct GCF 22 × 3 = 12, labeling the presented step as incorrect. This highlights how dual-signal supervision prevents surface-level false positives and yields faithful credit attribution. 19 Question: Sally received the following scores on her math quizzes: 50, 80, 80, 60, 40, 90, 100, 70, and 60. Find her mean score. Reasoning Step: To calculate the mean score of Sally’s math quizzes, first, I need to add up all of her quiz scores. The question shows the score of each quiz is: 50, 80, 80, 60, 40, 90, 100, 70, and 60. Adding these together, we get: Total score = 570. GPT-4o as Judge: q **Incorrect judgement.** The total sum of the scores is accurate: 50 + 80 + 80 + 60 + 40 + 90 + 100 + 70 + 60 = 570. This step is: + GroundedPRM as Judge: ¥ **Correct judgement.** The objective is to find Sally’s mean score from her math quizzes. The given scores are 50, 80, 80, 60, 40, 90, 100, 70, and 60. First, I need to verify if the LLM’s reasoning correctly adds up all these scores. The LLM states that the total score is 570. Let me add them one by one to check. Starting with 50, adding 80 gives 130. Then, adding another 80 brings it to 210. Adding 60 makes it 270. Next, adding 40 results in 310. Adding 90 increases it to 400. Then, adding 100 brings it to 500. Adding 70 makes it 570, and finally, adding 60 gives 630. The correct total should be 630, not 570. So the LLM’s answer is incorrect here. According to the conclusion of the judgement, the label is: − Figure 4: Comparison between GPT-4o and GroundedPRM as judge models. GPT-4o q incorrectly validates the miscalculated reasoning step (570 instead of 630), while GroundedPRM ¥ correctly re-evaluates the arithmetic, detects the error, and provides a rationale-supported supervision label, demonstrating fidelity-aware verification. 20 Question: Alice thinks of four positive integers a ≤b ≤c ≤d satisfying {ab + cd, ac + bd, ad + bc} = {40, 70, 100}. What are all the possible tuples (a, b, c, d) that Alice could be thinking of? Reasoning Step: Given the equations ab + cd = 40, ac + bd = 70, and ad + bc = 100, we need to find the positive integers a, b, c, d such that a ≤b ≤c ≤d. First, let’s add all three equations: (ab + cd) + (ac + bd) + (ad + bc) = 40 + 70 + 100 2(ab + ac + ad + bc + bd + cd) = 210 ab + ac + ad + bc + bd + cd = 105 GPT-4o as Judge: q **Incorrect judgement.** The current step appears to be correct. Let’s break it down: 1. You are given three equations involving products of four integers a, b, c, and d: ab + cd = 40, ac + bd = 70, ad + bc = 100 2. To simplify the problem, the step suggests adding all three equations together: (ab + cd) + (ac + bd) + (ad + bc) = 40 + 70 + 100 3. Simplifying the left-hand side: ab + cd + ac + bd + ad + bc. The expression ab + ac + ad + bc + bd + cd indeed correctly accounts for all pairwise products. 4. The right-hand side simplifies to:40 + 70 + 100 = 210 5. Dividing both sides by 2 to account for each pair being counted twice, you get: ab + ac + ad + bc + bd + cd = 105 Conclusively, the current step has correctly simplified the sum of the given equations to derive the result ab + ac + ad + bc + bd + cd = 105. This expression will be useful in further exploring potential solutions for the tuple (a, b, c, d). This step is: + GroundedPRM as Judge: ¥ **Correct judgement.** Alright, I’m trying to verify the reasoning step provided. Let me go through it step by step. First, the problem gives us three equations: ab + cd = 40, ac + bd = 70, ad + bc = 100. We need to find positive integers a, b, c, d with a ≤b ≤c ≤d. The reasoning starts by adding all three equations: (ab + cd) + (ac + bd) + (ad + bc) = 40 + 70 + 100, Which simplifies to: ab + cd + ac + bd + ad + bc = 210. Then, the reasoning factors this sum into: (ab + ac + ad + bc + bd + cd) = 105. I need to check if this step is logically sound. Let’s see: Adding the three equations: ab + cd = 40, ac + bd = 70, ad + bc = 100. Adding them together gives: ab + cd + ac + bd + ad + bc = 40 + 70 + 100 = 210. So, the sum of all these terms is indeed 210. The reasoning then states that this sum equals ab + ac + ad + bc + bd + cd = 105. Wait, that seems off because 210 divided by 2 is 105, but the sum is 210, not 105. There’s a mistake here. So, the correct equation after adding should be: ab + ac + ad + bc + bd + cd = 210. Not 105. Therefore, the reasoning incorrectly halved the total sum, leading to an incorrect conclusion. This error affects the rest of the reasoning, as subsequent steps would be based on an incorrect total sum. Hence, the logical check fails because the step introduces an error in the equation. In summary, the step adds the equations correctly but incorrectly divides the total sum by 2, leading to an incorrect value for the sum of the terms. This makes the reasoning flawed. The logical check reveals an error in the reasoning step. Adding the three equations correctly gives a total sum of 210, but the reasoning incorrectly calculates this as 105 by dividing by 2. This introduces an inconsistency in the problem-solving process. According to the judgment conclusion, the label of this step is: − Figure 5: Comparison between GPT-4o and GroundedPRM as judge models. After correctly aggregating the three sums to 210, the LLM performs an unjustified halving to 105. GPT-4o q incorrectly validates this step. GroundedPRM ¥ correctly re-evaluates the arithmetic to 210, verifies each sub-sum, localizes the first wrong step (the halving), and outputs a rationale-supported negative label, showing fidelity-aware, error-type–aware verification. 21 Question: The positive integers a and b have no common divisor larger than 1 . If the difference between b and a is 15 and 5 9 < a b < 4 7, what is the value of a b ? Reasoning Step: To find the value of a b under the given conditions, let’s reason through the problem step-by-step: First, understanding the conditions: gcd(a, b) = 1 means that a and b are coprime, i.e., they have no common divisors other than 1. b−a = 15 indicates the difference between b and a. 5 9 < a b < 4 7 gives us bounds on the ratio a b . Second, interpreting the inequality: The inequality 5 9 < a b < 4 7 implies: a b is greater than 5 9 ≈0.5556 a b is less than 4 7 ≈0.5714 Third, simplifying the problem: Since b −a = 15, we can write b = a + 15. Substituting this into the inequality gives: 5 9 < a a+15 < 4 7 Fourth, solving for a and b: We want to find values of a and b that satisfy the conditions while being integers. Let’s test possible values of a and b keeping in mind b = a + 15. Fifth, finding the solution: We know from the bounds of the inequality that a must be close to b (but smaller) since both fractions are close to each other. However, a and b also differ by 15, so the fraction should give a value slightly less than but close to 4 7. Testing different values for a and b, while ensuring they are coprime and b −a = 15: - For a = 1, b = 16, 1 16 is too small. - For a = 2, b = 17, 2 17 is still too small. - Continuing this process... - For a = 15, b = 30, 15 30 = 1 2 is too large. - For a = 21, b = 36, 21 36 = 7 12 is too large. Notice that when a = 26 and b = 41, we have: - 26 41 which satisfies the inequality 5 9 < 26 41 < 4 7 and gcd(26, 41) = 1. Sixth, conclusion: Given the constraints and the testing process, the values a = 26 and b = 41 satisfy all the conditions. Therefore, the value of a b is 26 41. GPT-4o as Judge: q **Incorrect judgement.** The current step contains an error in calculations and reasoning, which should be addressed for correctness. Let’s analyze the problem: 1. Given Conditions: - Positive integers a and b with gcd(a, b) = 1. - b −a = 15. - 5 9 < a b < 4 7. 2. Express b in terms of a: - Since b = a + 15, substitute to get a a+15. 3. Refine the inequality: - Plug b = a + 15 into 5 9 < a b < 4 7, giving 5 9 < a a+15 < 4 7. 4. Estimation by calculation: - Solve a a+15 > 5 9 and a a+15 < 4 7 separately to identify candidate values for a 5. Determine suitable integers: - The process should ensure integer solutions for a and b while maintaining coprimeness and verifying bounds. Issues in the current step: - The calculation steps provided for a = 21, b = 36 yielding 21 36 = 7 12 do not match b −a = 15 since b −a ̸= 15. - The test case a = 26, b = 41 accurately satisfies the division 26 41 within bounds and has b −a = 15, as well as ensuring gcd(26, 41) = 1. This is the correct solution for confirmed calculations. The correct logic requires testing every integer value while ensuring all conditions (gcd, b−a = 15) are met, leading to the right solution. Thus, a = 26 and b = 41, is the correct match. This step is: + Figure 6: Comparison between GPT-4o and GroundedPRM as judge models. Part 1 of 2 (see Fig. 7 for part 2). Given b −a = 15 and 5 9 < a b < 4 7, the LLM rewrites the bound as a a+15 and then proposes candidates. GPT-4o q incorrectly validates the step that accepts (a, b) = (26, 41) even though 26 41 ≈0.6341 > 4 7, failing the bound check. 22 GroundedPRM as Judge: ¥ **Correct judgement.** Okay, I need to verify the LLM’s reasoning step based on the given problem. Let me go through each part carefully. First, the objective is to find integers a and b such that their greatest common divisor is 1, their difference is 15, and the fraction a/b lies between 5/9 and 4/7. The LLM suggests testing values where a is close to b but smaller, with b = a + 15. Looking at the examples provided: - a=1, b=16: 1/16 ≈0.0625, which is way below 5/9 (≈0.5556). So this doesn’t fit. - a=2, b=17: 2/17 ≈0.1176, still too low. - a=15, b=30: 15/30 = 0.5, which is exactly 1/2, but the upper bound is 4/7 ≈0.5714. So it’s within the range, but let me check if a and b are coprime. gcd(15,30)=15, not 1. So this doesn’t work. - a=21, b=36: 21/36 = 7/12 ≈0.5833, which is above 4/7. So it’s outside the upper bound. - Finally, a=26, b=41: 26/41 ≈0.6341, which is above 4/7. Wait, that’s over. But the LLM says it’s within the range. That seems incorrect because 4/7 ≈0.5714, and 26/41 ≈0.6341, which is higher than 4/7. So maybe there’s a mistake here. Wait, perhaps I miscalculated 26/41. Let me compute it again. 26/41 is indeed approximately 0.6341, which is greater than 4/7 (≈0.5714). So the LLM’s conclusion that 26/41 is within the range is incorrect. Therefore, the reasoning might have an error. But wait, maybe I made a mistake in the comparison. Let me check 26/41 against 5/9 and 4/7 again. 5/9 ≈0.5556, and 4/7 ≈0.5714. 26/41 ≈0.6341, which is above 4/7. So the fraction is actually outside the upper bound. Therefore, the LLM’s answer might be wrong because it claims 26/41 is within the range when it’s not. However, perhaps I should check if there’s another pair that fits better. Maybe a=31, b=46: 31/46 ≈0.6739, still above 4/7. a=36, b=51: 36/51 = 12/17 ≈0.7059, which is even higher. Maybe a=25, b=40: 25/40 = 5/8 = 0.625, still above 4/7. Hmm, perhaps the correct a and b are lower. Wait, let me try a=16, b=31: 16/31 ≈0.5161, which is above 5/9 but below 4/7. But b −a = 15, so 31 −16 = 15. Now, gcd(16, 31) = 1, since 16 is 24 and 31 is prime. So this could be a valid solution. Let me check: 16/31 ≈0.5161, which is between 5/9 ≈0.5556? Wait, no, 0.5161 is less than 5/9. So that doesn’t work. Alternatively, a=19, b=34: 19/34 ≈0.5588, which is above 5/9 but below 4/7. Check gcd(19, 34) = 1. So 19/34 ≈0.5588, which is between 5/9 and 4/7. So this might be the correct pair. So the LLM’s answer with a=26 and b=41 is incorrect because 26/41 > 4/7. The correct pair should be a = 19 and b = 34, which satisfies all conditions. Therefore, the LLM’s reasoning has a logical error in verifying the upper bound of the fraction. It incorrectly concluded that 26/41 is within the required range when it’s actually outside. Hence, the result is false. The LLM’s reasoning incorrectly concludes that 26 41 is within the specified range. Upon verification, 26 41 ≈0.6341, which exceeds the upper bound of 4 7 ≈0.5714. A correct pair satisfying all condi- tions is a = 19 and b = 34, yielding 19 34 = 1 2, which is within the range and coprime. According to the conclusion of the judgement, the label is: − Figure 7: Comparison between GPT-4o and GroundedPRM as judge models. Part 2 of 2 (complements Fig. 6). For the same case, GroundedPRM ¥ correctly re-evaluates the inequality, verifies b−a = 15 and coprimeness, rejects (26, 41), and identifies the valid pair (19, 34) with 19 34 ≈0.5588; it outputs a rationale-supported negative label, demonstrating fidelity-aware, error-type–aware verification. 23 Question: Find the greatest common factor of 180 and 168. Reasoning Step: To find the greatest common factor (GCF) of 180 and 168, we can use prime factorization. First, let’s express each number as a product of its prime factors: First, 180 = 22 × 32 × 5. Second, 168 = 23 × 7. Step-Only: q **Incorrect judgement.** Okay, so I need to verify the LLM’s answer about finding the GCF of 180 and 168 using prime factorization. Let me start by understanding what the problem is asking. The objective is to find the greatest common factor of these two numbers. The LLM provided the prime factorizations: 180 is broken down into 22 × 32 × 5 and 168 into 23 × 7. I should check if these factorizations are correct because that’s the foundation for finding the GCF. Starting with 180: Let’s do the prime factorization step by step. 180 divided by 2 is 90, which divided by 2 again is 45. Then 45 divided by 3 is 15, and 15 divided by 3 is 5, which is a prime number. So yes, that gives us 22 × 32 × 5, which matches the LLM’s answer. Now for 168: Let’s factorize it. 168 divided by 2 is 84, divided by 2 again is 42, and once more by 2 is 21. 21 divided by 3 is 7, which is prime. So that’s 23 × 3 × 7. Wait, the LLM wrote 23 × 7, but I think they missed the 3. Hmm, that might be an issue. But wait, in their answer, they only mentioned 23 × 7, so maybe they considered only the highest powers present in both? Or perhaps they made a mistake. But wait, the GCF is found by taking the lowest powers of the common prime factors. So even if 168 has a 3, since 180 doesn’t have a 7, the common prime factors are just 2 and 3. So the LLM correctly identified the common factors as 22 and 31, leading to a GCF of 22 × 3 = 4 × 3 = 12. However, their factorization of 168 didn’t include the 3, which might be a mistake, but since they still arrived at the correct GCF, maybe it’s acceptable. Wait, no, actually, the prime factorization must be accurate. If 168 is indeed 23 × 3 × 7, then the GCF should consider the common primes. So 180 has 22, 32, and 5. 168 has 23, 3, and 7. The common primes are 2 and 3. For 2, the lowest power is 22, and for 3, it’s 31. So the GCF should be 22 × 3 = 4 × 3 = 12. So even though the LLM’s factorization of 168 was incomplete, their conclusion was correct because they only took the common factors. So perhaps it’s a minor error in the factorization step but the GCF is still right. Alternatively, maybe the LLM intended to show only the common factors. Let me double-check. If 168 is 23 × 7, then the common factors would be 22, but then the GCF would be 4, which is incorrect because 3 is also a common factor. So the LLM must have included the 3 in their factorization, even if they didn’t mention it in the answer. So perhaps there was a typo, but the actual factorization includes the 3, leading to the correct GCF. In any case, the key point is that the LLM’s answer correctly identifies the GCF as 12, even if there was a slight error in the intermediate steps. Therefore, the logical check passes because the reasoning aligns with the known conditions, and the calculation is correct despite a possible minor miscalculation in the factorization of 168. The LLM correctly used prime factorization to find the GCF of 180 and 168. The prime factors of 180 are 22 ×32 ×5, and those of 168 are 23 ×3×7. The common factors are 22 and 3, leading to a GCF of 12. Despite a possible error in the factorization of 168, the final result is correct.According to the judgement conclusion, this step is: + Figure 8: Comparison between Step-Only and GroundedPRM. Step-Only q incorrectly validates a step that is arithmetically valid but logically inconsistent, while GroundedPRM ¥ correctly detects the missing factor in 168’s prime factorization through dual-signal verification, ensuring high-fidelity and logically grounded supervision. Part 1 of 2 (see Fig. 9 for part 2). 24 GroundedPRM ¥ **Correct judgement.** Alright, I’m trying to verify the LLM’s answer regarding the GCF of 180 and 168 using prime factorization. Let me go through the process step by step. First, the objective is clear: find the GCF of 180 and 168. The LLM provided the prime factoriza- tions as 180 = 2² × 3² × 5 and 168 = 2³ × 7. I need to check if these factorizations are correct. Starting with 180: dividing by 2 gives 90, then another 2 gives 45. So that’s 2². Then 45 divides by 3 to get 15, then again by 3 to get 5. So that’s 3² and 5¹. So 180’s prime factors are indeed 2² × 3² × 5. Now for 168: dividing by 2 gives 84, then 42, then 21. 21 divides by 3 to get 7. So that’s 2³ × 3¹ × 7¹. Wait, the LLM only wrote 2³ × 7. That seems missing the 3. Hmm, so the LLM might have missed including the prime factor 3 in 168’s factorization. But wait, maybe I made a mistake. Let me double-check. 168 divided by 2 is 84, divided by 2 again is 42, divided by 2 again is 21. 21 is 3×7. So yes, it’s 2³ × 3 × 7. So the LLM’s factorization is incomplete because they only included 2³ and 7, omitting the 3. However, when finding the GCF, we take the minimum exponents of the common prime factors. Both numbers have 2² and 3², but wait, 168 doesn’t have 3². It only has 3¹. So actually, the common prime factors are 2 and 3, with exponents 2 and 1 respectively. Therefore, the GCF should be 2² × 3 = 4 × 3 = 12. But the LLM didn’t include the 3 in their factorization of 168, which could lead to an incorrect GCF. However, in this case, since 168 only has one 3, the GCF would still be 3¹, but the LLM’s omission might have confused the process. Wait, but the LLM’s answer is just providing the factorizations, not calculating the GCF yet. So perhaps the issue is more about the accuracy of the factorizations before proceeding. Since the LLM missed the 3 in 168’s factorization, their information is incomplete, which could affect the subsequent GCF calculation. So, the logical check shows that the LLM’s reasoning is flawed because it omitted a prime factor in 168. The factorization should include 3, which is crucial for determining the correct GCF. Therefore, the result should be false because the provided factorizations are incorrect.According to the judgement conclusion, this step is: − Figure 9: Comparison between Step-Only and GroundedPRM. Step-Only q incorrectly validates a step that is arithmetically valid but logically inconsistent, while GroundedPRM ¥ correctly detects the missing factor in 168’s prime factorization through dual-signal verification, ensuring high-fidelity and logically grounded supervision. Part 2 of 2 (complements Fig. 8). 25
GroundedPRM: Tree-Guided and Fidelity-Aware Process Reward Modeling for Step-Level Reasoning Yao Zhang 1,6 ∗† Yu Wu 2 † Haowei Zhang 3 ‡ Weiguo Li 4 Haokun Chen1 Jingpei Wu1 Guohao Li5 Zhen Han 1 ∗ Volker Tresp1,6 1 LMU Munich 2 Technical 3 Fudan University 4 University Heidelberg 5 6 Munich Center for Machine Learning Abstract Process Reward Models (PRMs) aim to improve multi-step reasoning in Large Language Models (LLMs) by supervising intermediate steps and identifying errors throughout the reasoning process. However, building effective PRMs remains challenging due to the lack of scalable, high-quality annotations. Existing approaches rely on costly human labeling, LLM-based self-evaluation that is prone to hallucination, or Monte Carlo (MC) estimation, which infers step quality solely from rollout outcomes and often introduces noisy, misaligned supervision due to credit misattribution. These issues result in three core limitations: noisy rewards, low factual fidelity, and misalignment with step-level reasoning objectives. To address these challenges, we introduce GroundedPRM, a tree-guided and fidelity-aware framework for automatic process supervision. To reduce reward noise and enable fine-grained credit assignment, we construct structured reasoning paths via Monte Carlo Tree Search (MCTS). To eliminate hallucinated supervision, we validate each intermediate step using an external tool, providing precise, execution-grounded correctness signals. To combine both step-level validation and global outcome assessment, we design a hybrid reward aggregation mechanism that fuses tool-based verification with MCTS-derived feedback. Finally, we format the reward signal into a rationale-enhanced, generative structure to promote interpretability and compatibility with instruction-tuned LLMs. GroundedPRM is trained on only 40K automatically labeled samples, amounting to just 10% of the data used by the bestperforming PRM trained with auto-labeled supervision. Nevertheless, it achieves up to a 26% relative improvement in average performance on ProcessBench. When used for reward-guided greedy search, GroundedPRM outperforms even PRMs trained with human-labeled supervision, offering a scalable and verifiable path toward high-quality process-level reasoning. Our code is publicly released at github.com/GroundedPRM. 1 Introduction Large Language Models (LLMs) [1, 9, 31] have demonstrated impressive capabilities in planning [13, 44], decision-making [19], and complex task execution [38, 45]. However, they remain prone to hallucinations and reasoning errors, particularly in multi-step tasks such as mathematical problem solving. Existing methods like Chain-of-Thought prompting [36, 40] and Test-Time Scaling [21, 27] improve final accuracy, yet LLMs often produce solutions that appear coherent while containing errors in reasoning or calculation. These issues are further exacerbated by outcome-level supervision ∗Corresponding authors: , †Equal contribution ‡Work done during an internship at Technical . 16 Oct 2025 and coarse decoding strategies, e.g., majority voting, which overlook step-level correctness and provide little guidance during intermediate reasoning. To mitigate these shortcomings, Process Reward Models (PRMs) have emerged as a promising direction [20]. PRMs assign step-level scores to reasoning trajectories, enabling fine-grained supervision that supports better control and interpretability in multi-step reasoning. However, developing effective PRMs remains challenging due to the lack of reliable and faithful reward signals for training. Human annotation [20], while accurate, is costly and unscalable. LLM-as-a-judge [48] is more efficient but susceptible to hallucination, often rewarding fluent yet incorrect reasoning and thus compromising factual fidelity. Monte Carlo (MC) estimation [22, 35] provides another alternative by inferring step quality from final rollout outcomes, but it introduces noisy reward due to credit misattribution: correct steps may be penalized if the rollout fails, while flawed steps may be rewarded if the final answer happens to be correct [46]. Moreover, MC estimation typically evaluates only final outcomes, ignoring explicit assessment of intermediate step correctness, which misaligns the supervision signal with the objective of step-wise reasoning accuracy. Several recent works have attempted to refine MC-based supervision, but core limitations persist. OmegaPRM [22] uses a binary search strategy to locate the first incorrect step, but still relies on rollout success to infer correctness, leaving credit assignment coarse. Qwen2.5-Math-PRM [46] filters samples based on agreement between MC estimation and LLM judgments, but this strategy inherits hallucination bias and scores each step solely based on rollout outcomes, without assessing whether it contributes to or hinders correct reasoning. BiRM [7] augments PRM with a value head to predict future success probability, but both reward and value signals are derived from noisy rollouts and lack external validation. These approaches offer partial improvements, yet remain constrained by outcome-based heuristics, hallucination-prone feedback, or weak step-level credit modeling. To address these challenges, we propose GroundedPRM, a tree-guided and fidelity-aware framework for automatic process supervision. GroundedPRM is designed to resolve three core limitations in existing PRMs: noisy rewards, low factual fidelity, and misalignment with step-level reasoning objectives. First, to reduce reward noise and improve credit attribution, GroundedPRM leverages Monte Carlo Tree Search (MCTS) to construct structured reasoning paths and assess each step based on its contribution within the trajectory. Second, to ensure factual grounding, each intermediate step is verified using an external math tool, producing correctness signals based on executable logic rather than LLM-generated feedback, thereby eliminating hallucinated supervision. Third, to combine steplevel validation with global outcome assessment, we design a hybrid reward aggregation mechanism that fuses tool-based verification with MCTS-derived feedback. Finally, all rewards are formatted into binary decisions paired with rationale-enhanced justifications, enabling interpretable supervision signals that are compatible with LLM-based generation and downstream reasoning workflows. We evaluate GroundedPRM on ProcessBench and observe substantial gains in both data efficiency and overall performance. It is trained on only 40K automatically labeled samples, just 10% of the data used by the best-performing PRM trained with auto-labeled supervision, yet achieves up to a 26% relative improvement in average performance. Furthermore, when deployed in reward-guided greedy search, where candidate steps are selected based on predicted reward, GroundedPRM surpasses even PRMs trained with human-labeled supervision, establishing new state-of-the-art results across multiple mathematical reasoning benchmarks. These findings highlight the effectiveness, scalability, and practical value of our structured and fidelity-aware supervision framework for both training and inference. The key contributions of this work are: 1. We propose GroundedPRM, a tree-guided and fidelity-aware process reward modeling framework that leverages MCTS to construct structured reasoning paths and support steplevel credit assignment. 2. We introduce a fidelity-aware verification mechanism that validates each reasoning step using an external math tool, ensuring correctness grounded in executable logic and eliminating hallucinated supervision. 3. We design a hybrid reward aggregation mechanism that integrates tool-based step validation with feedback derived from MCTS-guided reasoning paths. 2 4. We format rewards into a rationale-enhanced, generative structure to improve interpretability and enable seamless integration into inference-time decoding and downstream reasoning workflows. 5. We demonstrate strong data efficiency and inference performance by evaluating GroundedPRM on ProcessBench and reward-guided greedy search. 2 Related Work 2.1 Mathematical Reasoning with LLMs Large Language Models (LLMs) have shown remarkable progress in solving math problems via Chainof-Thought (CoT) reasoning, where step-by-step solutions often improve final answer accuracy [36]. Building on this, recent efforts have focused on enhancing reasoning capabilities through pretraining on math-related corpora [15, 25, 42], instruction tuning with annotated derivations [19, 40, 41, 43], and prompting strategies tailored for math tasks [4, 14, 17]. Despite these improvements, LLMs remain vulnerable to intermediate reasoning errors, even when final answers are correct [47]. This discrepancy undermines the reliability of generated solutions, motivating the use of external verification or inference-time selection strategies [9, 26, 32]. Such approaches typically operate at the output level, offering limited supervision for correcting internal steps. Unlike prior methods that intervene at the output level, our approach supervises the reasoning process itself via step-level reward modeling, enabling finer-grained error identification and ensuring more faithful alignment with step-level reasoning and factual correctness. 2.2 Process Reward Models for Step-Level Supervision To enhance reasoning fidelity and identify intermediate errors, PRMs have emerged as a promising alternative to outcome-level supervision [20, 33]. PRMs evaluate the correctness of individual reasoning steps and have been shown to improve alignment and generalization across math tasks [35, 46]. A key challenge lies in generating reliable step-level annotations. Early methods rely on expert-labeled datasets such as PRM800K [20], which are expensive to scale. Recent work has explored automatic synthesis through MC estimation [22, 35], often leveraging rollout outcomes to infer step validity. However, MC-based supervision introduces noise due to credit misattribution and dependency on the quality of the completion model [46, 47]. To mitigate this, several methods combine MC with LLM-as-a-judge consensus filtering [46] or adopt preference-based learning frameworks [6]. In contrast, our method GroundedPRM constructs PRM supervision from the ground up by integrating tree-structured search via MCTS [5], step-level verification with external math engines, and fused value-correctness reward modeling. This pipeline produces reward signals that are verifiable, structurally grounded, and directly aligned with step-level reasoning objectives, effectively addressing the fidelity and alignment issues that prior methods leave unresolved. 3 Methodology GroundedPRM is designed to address three core limitations of existing process reward modeling methods: noisy rewards, low factual fidelity caused by hallucinated self-assessment, and misalignment with step-level reasoning objectives. These challenges call for a framework that can assign finegrained credit, validate the factual correctness of individual steps, and integrate local and global signals into a reliable and interpretable supervision objective. To this end, GroundedPRM introduces a tree-guided and fidelity-aware reward modeling framework composed of four core components. First, it employs Monte Carlo Tree Search (MCTS) to construct structured reasoning paths and assess each step based on its contribution within the search trajectory, enabling more stable and attribution-aware supervision than flat sampling-based methods. Second, it verifies each intermediate step using an external tool, producing binary correctness labels grounded in executable logic and thereby mitigating hallucinated feedback from the model. Third, it unifies verified step-level signals and final-answer correctness into a joint supervision objective, maintaining fine-grained credit assignment while offering stable and reasoning-grounded supervision. Finally, the reward supervision is formatted into a rationale-enhanced generative structure, pairing each step with both a binary score and an explanation to support interpretability and compatibility 3 Action a: Simplifying the equation: 60x - 30(20 - x) = 660, which yields x = 14. Query q: Solve[{x + y == 20, 60x - 30y == 660}, {x, y}] Let's break down the problem to verify the solution. For simplifying the equation: 60x - 600 + 30x = 660. x = 14 ... The result from the tool is: x = 14 and y = 6 ... The calculations are correct Update Rollout Reward u Tool answer: x = 14 and y = 6 Step Verification v Problem P: Find all (x, y) satisfying x + y = 20 and 60x - 30y = 660 Step1 Selection Step2 Expansion Step3 Simulation Step4 Backpropagation Current Node Real Node Simulation Node Correct Node Incorrect Node LLM Math Tool Ground Truth Final Outcome Correctness F Figure 1: Overview of the GroundedPRM framework. GroundedPRM constructs reasoning paths via MCTS, where each node corresponds to an LLM-generated step. During simulation, intermediate steps are verified using an external tool, and final answers are checked against ground truth. Step-level and outcome-level correctness signals are aggregated into a rollout reward, which is backpropagated along the tree to update node statistics; the next node is then selected by UCT, continuing the MCTS search until convergence or budget exhaustion. The framework enables verifiable, interpretable, and structure-aware process supervision for multi-step reasoning. The generative rationale provides interpretable feedback for each step. with instruction-tuned LLMs. An overview of this framework is illustrated in Fig. 1. We provide the full algorithmic pseudocode in Appendix A. 3.1 Tree-Guided Reasoning Path Construction To enable stable and attribution-aware process supervision, GroundedPRM employs MCTS to construct structured reasoning paths for each input problem P. Each node in the search tree is associated with a partial reasoning state s = {s1, . . . , si}, representing the sequence of previously generated reasoning steps. In addition to the state, each node stores auxiliary information including tool queries q, verification outcomes v, and value estimates Q. A reasoning step is represented as an action a, defined as a natural language expression generated by the LLM that extends the current reasoning state, transitioning it from state s to a new state s′. The value function Q(s, a) estimates the expected return of applying action a in state s, and is updated through feedback from simulated rollouts. The search process consists of four stages: Selection. Starting from the root node, the algorithm recursively selects child nodes according to a tree policy until reaching a node that is not fully expanded. To balance exploration and exploitation, we use the Upper Confidence Bound for Trees (UCT) [16], which balances estimated value with an exploration bonus that decreases as a node is visited more often, thereby encouraging the search toward promising yet under-explored nodes. The UCT score for each candidate action a at state s is computed as: UCT(s, a) = Q(s, a) + c · s log N(s) N(s, a) , (1) where N(s) and N(s, a) are the visit counts of the parent and child nodes, respectively; and c is a hyperparameter controlling the exploration strength. Expansion. If the selected node is not terminal, it is expanded by sampling K new actions from LLM, each producing a distinct child state s′. We set K = 3 in our experiments. This constrains the branching factor while maintaining reasoning diversity. Simulation. From the newly expanded node, we simulate a complete reasoning trajectory by sequentially sampling steps si+1, . . . , sT until the model produces a final answer. We sample from 4 the current state using the LLM in a left-to-right fashion to complete the solution. For each step sj where j ∈{i + 1, ..., T -1}, we obtain a binary correctness label vj ∈{-1, 1} using the tool-based verification procedure described in § 3.2. Additionally, the final answer is compared against the ground-truth solution to determine the overall outcome F ∈{-1, 1}. We adopt signed labels {-1,+1} instead of {0,1} so that incorrect steps propagate negative feedback, thereby decreasing node values during MCTS rather than being treated as neutral. These per-step and final correctness signals are subsequently aggregated into a single rollout reward ui, as defined in § 3.3. Backpropagation. The reward u computed for the simulated trajectory is propagated backward along the path traversed during selection. For each visited state-action pair (sk, ak) at depth dk from the terminal node, we update its value as: Q(sk, ak) ←Q(sk, ak) + γdk · (ui + vi), (2) where k ∈{0, ..., i -1}, γ ∈(0, 1) is a decay factor controlling temporal discount, and dk denotes the number of steps from node i. This update scheme assigns stronger credit to steps closer to the final outcome, aligning attribution with their causal impact in the reasoning process. By iteratively executing the four MCTS stages, GroundedPRM constructs a structured and diverse distribution over reasoning paths. This search process prioritizes trajectories with high step-level validity and globally correct outcomes, yielding supervision signals that are both structure-aware and attribution-sensitive. The resulting credit assignments are more stable and fine-grained than those produced by flat Monte Carlo rollouts, directly addressing reward noise and misattribution. Multiple rollouts are performed per input to balance path diversity with search efficiency. 3.2 Fidelity-Aware Step Verification with External Tool To ensure reward fidelity and eliminate hallucinated supervision, GroundedPRM integrates step-level verification into each reasoning step via external tools. During simulation (see § 3.1), the LLM generates a sequence of reasoning steps {si+1, . . . , sT }, where each sj (i + 1 ≤j ≤T -1) denotes an intermediate reasoning step expressed in natural language during rollout. For each step sj, we construct a corresponding structured math query and submit it to an external math tool, such as Wolfram Alpha (WA) [37]. The tool's response is parsed to determine whether the computation or transformation expressed in sj is factually correct. We represent this outcome as a binary verification label vj ∈{-1, 1}, where vj = 1 indicates successful verification and vj = -1 denotes failure. The resulting sequence {vi+1, . . . , vT -1} provides a fine-grained step-level correctness evaluation for the entire reasoning trace. These step-level signals are used during rollout to compute the aggregated reward u (see § 3.3). Unlike LLM-based self-evaluation, which often overestimates fluent but invalid reasoning, this fidelity-aware mechanism grounds supervision in objective, tool-based verification. While WA is used in our experiments due to its strong mathematical solving capabilities, such as equation solving and equivalence checking, our verification module is tool-agnostic. It supports integration with alternatives like SymPy [23] or domain-specific solvers. This modular design ensures that GroundedPRM generalizes across reasoning domains while maintaining high verification precision. 3.3 Hybrid Reward Aggregation To construct reward signals that are both verifiable and forward-looking, GroundedPRM introduces a hybrid aggregation mechanism that combines step-level verification with trajectory-level outcome assessment. This design balances two supervision objectives: (1) factual fidelity of intermediate reasoning steps, and (2) global correctness of the final answer. Given a simulated reasoning trace of length T, we collect step-level correctness signals {vi+1, . . . , vT -1}, where each vi ∈{-1, 1} is obtained via external tool verification (see § 3.2). In addition, we evaluate the final answer against ground truth to obtain a binary outcome signal F ∈{-1, 1}. These signals are aggregated into a single scalar reward: ui = 1 T -1 -i T -1 X j=i+1 dj · vj + β · F, (3) 5 where β ≥0 is a weighting coefficient that adjusts the contribution of final answer correctness relative to step-level reliability. The resulting reward u is used during backpropagation in MCTS (see § 3.1) to update value estimates and guide exploration. We further define the MCTS value estimate at each state-action pair (si, ai) as: Q(si, ai) = ui + vi. By fusing local and global correctness signals, this hybrid reward formulation offers more stable and interpretable supervision than prior MC-based methods that rely solely on rollout success. Moreover, this mechanism directly addresses the three core limitations of existing PRMs: it reduces reward noise via structure-aware simulation, avoids unverifiable supervision through external tool-based validation, and aligns the reward objective with both step-wise precision and task-level success. 3.4 Generative Process Reward Model GroundedPRM adopts a generative reward modeling paradigm, enabling seamless integration with instruction-tuned LLMs and providing supervision for open-ended reasoning workflows. Each training instance is structured as a rationale-enhanced sequence that pairs intermediate reasoning steps with corresponding verification outcomes and justifications. Formally, each instance includes: (1) the original problem P; (2) the full reasoning trajectory {s1, . . . , sT }; (3) binary labels indicating the sign of the aggregated reward, combining tool-verified step fidelity and rollout outcome signals; and (4) natural-language explanations derived from external tool feedback, retained after consistency filtering to align with the verified binary labels. Unlike conventional discriminative reward models that treat reward prediction as a binary classification task, we train GroundedPRM autoregressively to generate both correctness labels and rationales conditioned on the problem and its intermediate reasoning trace. This generative formulation improves interpretability and enables seamless integration into LLM-based reasoning pipelines. 3.5 Data Construction for GroundedPRM Training To train GroundedPRM, we apply the full supervision framework described above to the MATH dataset [11], constructing a reward-labeled dataset with tool-based step-level verification and hybrid scoring. For each problem, the policy model generates intermediate reasoning steps, which are verified using external tools (see § 3.2). Each step is labeled based on tool-verified correctness, and the full trajectory is scored using the hybrid reward mechanism introduced in § 3.3. To ensure coverage and diversity, we adopt a multi-round MCTS rollout strategy that explores both optimal and suboptimal paths. Post-processing includes filtering incomplete, inconsistent, or tool-unverifiable traces, and formatting the final data into a rationale-enhanced generative structure (see § 3.4). Each instance includes the problem, a full reasoning trace, correctness labels, and explanations. The resulting dataset contains approximately 40K verified samples, covering a broad spectrum of problem types and reasoning strategies with high tool-verified fidelity. Exact generation and verification prompts are given in Appendix B. 4 Experiment 4.1 Experimental Setup Benchmarks. We evaluate GroundedPRM from two perspectives: its ability to accurately identify erroneous steps within multi-step reasoning processes, and its effectiveness in directly enhancing downstream task performance. • ProcessBench [47]. This benchmark evaluates the ability of reward models to supervise step-level reasoning in mathematical problems. Each instance includes an LLM-generated solution with the first incorrect step annotated by human experts. Models are evaluated based on their ability to accurately identify the first faulty step or confirm that all steps are valid, following standard PRM evaluation protocols. • Reward-Guided Greedy Search. To further assess the utility of GroundedPRM in guiding multi-step reasoning, we perform inference-time decoding using a reward-guided greedy strategy. At each generation step, we sample N = 8 candidate actions from Qwen2.5-7B-Instruct [24] using a temperature of 1, and select the candidate with the highest 6 Table 1: F1 scores on ProcessBench for models trained with auto-labeled data. Models marked with ∗ share the same base model: Qwen2.5-Math-7B-Instruct. GroundedPRM achieves the highest average F1, surpassing the strongest existing model, Math-Shepherd-PRM-7B, by 26% relative improvement while using only 10% of the training data. All baseline results are directly cited from [46]. Oly. denotes OlympiadBench. Full results are provided in Appendix D. Model #Sample GSM8K MATH Oly. Omni-MATH Avg. RLHFlow-DeepSeek-8B 253K 38.8 33.8 16.9 16.9 26.6 RLHFlow-Mistral-8B 273K 50.4 33.4 13.8 15.8 28.4 Qwen2.5-Math-7B-Math-Shepherd∗ 445K 62.5 31.6 13.7 7.7 28.9 EurusPRM-Stage1∗ 453K 44.3 35.6 21.7 23.1 31.2 EurusPRM-Stage2∗ 230K 47.3 35.7 21.2 20.9 31.3 Math-Shepherd-PRM-7B 445K 47.9 29.5 24.8 23.8 31.5 GroundedPRM 40K 43.4 47.0 33.8 34.4 39.7 predicted reward assigned by the PRM. This process is repeated iteratively until a complete solution is generated. We evaluate this procedure on six mathematical benchmarks: AMC23 [3], AIME24 [2], MATH [11], College MATH [30], OlympiadBench [10], and Minerva MATH [18]. We also report the result of pass@n, i.e., the proportion of test samples where any of the n samplings lead to the correct final answers. Baselines. For both ProcessBench and reward-guided greedy search experiments, we compare GroundedPRM against the following representative baselines. These baselines span a diverse set of supervision strategies, including models trained with human-labeled rewards, automated annotations, and hybrid approaches, as well as a range of training data scales. • Math-Shepherd [35]: Utilizes MC estimation to perform automated step-level annotation with hard labels. • RLHFlow-PRM series [8]: Includes DeepSeek and Mistral variants, both of which use MC estimation for data generation, but adopt the Direct Preference Optimization (DPO) training paradigm. • Math-PSA-7B [34]: Trained on mixed annotated data, namely PRM800K [20], MathShepherd [35], and generated data following [22]. • EurusPRM-series [28]: EurusPRM-Stage1 and EurusPRM-Stage2 construct weakly supervised labels from final outcomes using noise-aware heuristics. • Qwen2.5-Math-7B series [47, 46]: Qwen2.5-Math-7B-Math-Shepherd and Qwen2.5-Math7B-PRM800K are trained with Math-Shepherd [35] and PRM800K [20] using Qwen2.5Math-7B-Instruct [40], respectively. • Llemma-PRM800K-7B [29]: Utilizes MC estimation to perform automated step-level annotation with hard labels. • ReasonEval-7B [39]: Prompt-based model for evaluating step validity and redundancy. Implementation Details. All reward models are fine-tuned on step-labeled reasoning trajectories using LoRA [12] for parameter-efficient adaptation. We use Qwen2.5-7B-Instruct [24] as the base model. Complete training hyperparameters are listed in Appendix C. 4.2 Results on ProcessBench GroundedPRM Achieves Strong Supervision Performance with High Data Efficiency. As shown in Tab. 1, GroundedPRM achieves the highest average F1 score among all PRMs trained with automatically labeled data, outperforming the second-best model, Math-Shepherd-PRM-7B, by a relative improvement of 26% while using only 10% training samples. GroundedPRM also ranks first on MATH, OlympiadBench, and Omni-MATH, indicating strong capability in evaluating complex mathematical reasoning steps. These results reinforce our central hypothesis: verifiable, structureguided supervision is substantially more effective than scale alone. GroundedPRM's fidelity-aware 7 Table 2: F1 scores of GroundedPRM and Qwen2.5-Math-7B-PRM800K under matched training sizes. Both methods are trained using Qwen2.5-7B-Instruct but differ in supervision sources. Despite relying solely on automatically labeled data, GroundedPRM consistently outperforms Qwen2.5-Math7B-PRM800K across all data scales. Oly. denotes OlympiadBench. #Sample Model GSM8K MATH Oly. Omni-MATH Avg. 10K Qwen2.5-Math-7B-PRM800K 30.3 31.6 21.9 19.8 25.9 GroundedPRM 39.0 41.9 29.4 29.8 35.0 20K Qwen2.5-Math-7B-PRM800K 37.4 32.9 29.9 30.6 32.7 GroundedPRM 39.9 44.0 30.1 31.4 36.4 30K Qwen2.5-Math-7B-PRM800K 37.5 40.0 28.4 34.8 35.2 GroundedPRM 42.1 47.4 30.7 31.7 38.0 40K Qwen2.5-Math-7B-PRM800K 43.1 46.0 32.9 34.0 39.0 GroundedPRM 43.4 47.0 33.8 34.4 39.7 Table 3: F1 scores on ProcessBench under different supervision and inference configurations within the GroundedPRM framework. Step-Only and Outcome-Only variants remove one supervision source during training, while the Inference w/o Rationale variant skips rationale generation and outputs correctness labels directly. All variants share the same model architecture; the full version combines step-level verification, outcome consistency, and rationale generation. Method GSM8K MATH OlympiadBench Omni-MATH Avg. Step-Only 40.1 42.3 28.3 29.2 35.0 Outcome-Only 1.4 3.3 1.0 1.0 1.7 Inference w/o Rationale 34.1 34.7 22.7 23.7 28.8 GroundedPRM 43.4 47.0 33.8 34.4 39.7 rewards, grounded in tool-based validation and MCTS-based credit assignment, enable efficient learning under low-resource constraints. Generative Supervision Enhances Interpretability and Robust Generalization. Unlike prior PRMs that produce only binary decisions, GroundedPRM adopts a generative format that outputs both a step-level reward and an accompanying rationale. This design improves alignment with instruction-tuned LLMs, encourages interpretable supervision, and enables the model to better distinguish between fluent but incorrect reasoning and truly valid logic. Empirically, GroundedPRM achieves notable improvements on challenging benchmarks like OlympiadBench and MATH, where fine-grained error localization is essential. These results suggest that explanation-based rewards foster more robust and generalizable reasoning behavior. 4.3 Analysis and Discussions GroundedPRM Provides Superior Data Efficiency through Structured and Fidelity-Aware Supervision. To compare the effectiveness of our automatically labeled supervision against humanlabeled reward models under identical data budgets, we conduct a controlled comparison with the Qwen2.5-PRM series using the same model architecture, i.e., Qwen2.5-7B-Instruct, and matched training sizes. For each training size, we randomly sample a subset of examples to ensure a fair comparison. This setup isolates the effect of supervision quality by ensuring that both methods are evaluated under the same data scale. As shown in Tab. 2, GroundedPRM consistently achieves higher F1 scores across all training sizes, despite relying entirely on automatically constructed labels. Dual-Signal Supervision Enhances Data Fidelity and Credit Attribution. To assess the contribution of our dual-signal supervision, we compare GroundedPRM against two ablations: Outcome-Only Supervision, which assigns labels based solely on final-answer correctness from MCTS rollouts, and 8 Table 4: Accuracy of reward-guided greedy search using different PRMs to supervise the Qwen2.57B-Instruct policy model. GroundedPRM outperforms all PRMs trained with human, mixed, or automated labels, achieving the highest average accuracy. Oly. denotes OlympiadBench. Model #Sample AMC23 AIME24 MATH College Oly. Minerva Avg. pass@1 - 50.0 10.0 73.4 48.5 30.0 29.8 40.3 pass@8(Upper Bound) - 82.5 20.0 90.4 61.0 48.0 49.6 58.6 Reward-Guided Greedy Search (prm@8) Trained on Human Annotated Data (PRM800K) Qwen2.5-Math-7B-PRM800K 264K 60.0 10.0 75.6 36.5 23.5 29.0 39.1 Llemma-PRM800K-7B 350K 42.5 6.7 72.2 47.5 27.6 29.5 37.7 ReasonEval-7B 350K 52.5 6.7 76.0 33.8 33.8 30.0 41.9 Trained on a Mix of Human and Automated Annotation Data Math-PSA-7B 860K 47.5 13.3 69.8 46.0 27.6 33.5 39.6 Trained on Automated Annotation Data Math-Shepherd-PRM-7B 445K 45.0 10.0 74.8 48.5 28.0 29.0 39.2 RLHFlow-DeepSeek-8B 253K 50.0 6.7 74.2 48.0 30.9 27.5 39.5 RLHFlow-Mistral-8B 273K 37.5 13.3 74.8 50.5 29.8 30.0 39.3 EurusPRM-Stage1 453K 47.5 10.0 73.0 49.0 30.1 31.0 40.1 EurusPRM-Stage2 230K 45.0 13.3 73.6 51.0 31.6 32.5 41.1 GroundedPRM 40K 57.5 10.0 74.8 49.0 31.3 32.5 42.4 Step-Only Supervision, which uses external tool verification without considering global trajectory outcomes. As shown in Tab. 3, Outcome-Only Supervision severely underperforms due to credit misattribution. Correct steps may be penalized if downstream steps fail, while flawed steps may be rewarded if the final answer happens to be correct. Step-Only Supervision achieves higher recall but suffers from precision loss, as external math tools can detect surface-level arithmetic errors but often fail to capture deeper logical flaws, resulting in false positives. A detailed example of this failure mode is provided in Appendix E.2. In contrast, GroundedPRM fuses step-level correctness signals with trajectory-level feedback, enabling accurate credit assignment that is grounded in both local fidelity and global reasoning success. This hybrid design achieves the highest average F1, demonstrating the effectiveness of our supervision framework in producing reliable and structurally aligned reward signals. Rationale Generation Enhances Consistency and Long-Horizon Reasoning. To assess the impact of rationale generation, we compare the full GroundedPRM with an Inference w/o Rationale variant that directly predicts correctness labels without generating explanations. As shown in Tab. 3, removing rationales leads to a consistent drop in F1 across all datasets, with larger gaps on more challenging benchmarks such as MATH and OlympiadBench. Generating intermediate justifications helps maintain step-level consistency, stabilize reward attribution, and localize reasoning errors in complex, long-horizon problems. Qualitative examples in Appendix E.1 further illustrate how rationale generation improves interpretability and factual grounding without compromising predictive accuracy. 4.4 Results on Reward-Guided Greedy Search As shown in Tab. 4, GroundedPRM, trained on only 40K automatically labeled examples, achieves the highest average accuracy across all PRMs, surpassing those trained on automated, mixed, or even large-scale human annotations. Within the automated annotation group, GroundedPRM achieves new state-of-the-art results on AMC23 and performs on par or better than all counterparts on MATH and Minerva. These results validate the effectiveness of the design: tool-grounded verification improves label fidelity, tree-guided path construction yields stable and attribution-aware credit assignment, and rationale-enhanced supervision delivers precise and verifiable step-level evaluation. By evaluating each candidate step with grounded feedback, GroundedPRM reliably guides the policy toward accurate multi-step reasoning without requiring external demonstrations or value-based lookahead. 9 5 Conclusion We introduced GroundedPRM, a tree-guided and fidelity-aware framework for process supervision. By combining structured path exploration via MCTS, tool-based step-level verification, hybrid reward aggregation, and rationale-enhanced supervision formatting, GroundedPRM addresses three core limitations of prior PRMs: low factual fidelity, noisy reward signals, and misalignment with steplevel reasoning objectives. GroundedPRM is trained on only 40K automatically labeled samples, amounting to just 10% of the data used by the best-performing PRM trained with auto-labeled supervision. Nevertheless, it achieves up to a 26% relative improvement in average performance on ProcessBench. When used for reward-guided greedy search, GroundedPRM outperforms even PRMs trained with human-labeled supervision. These results underscore the effectiveness of structured, verifiable reward modeling in enhancing the reasoning capabilities of LLMs. 6 Future Work While GroundedPRM establishes a strong foundation for fidelity-aware and tree-guided process reward modeling, several natural extensions remain. Scaling the underlying LLM may further improve the quality and diversity of generated reasoning paths. Expanding the set of external verifiers beyond the mathematical tool used in this work could enhance flexibility and extend applicability across different reasoning domains. GroundedPRM is inherently tool-agnostic: a "tool" broadly refers to any fidelity verifier that provides execution-grounded feedback for intermediate reasoning steps, including model-based self-checkers, retrieval-augmented verifiers, and rule-based evaluators. Additionally, integrating human preference signals may further align supervision with interpretable and human-consistent reasoning. A further direction is to integrate GroundedPRM into reinforcement learning pipelines, where it serves as a verifiable reward function guiding policy optimization in long-horizon tasks. Such integration would enable process-level supervision under on-policy updates and reveal how structured rewards interact with exploration, search, and credit assignment. Although our experiments focus on the mathematical domain due to its established PRM benchmarks and baselines, the framework naturally generalizes to any domain where step-level fidelity can be defined and verified, offering a unified and scalable paradigm for grounded process supervision. References [1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint , 2023. [2] AI-MO. Amc 2023, 2024b. aimo-validation-aime, 2024. [3] AI-MO. Amc 2023, 2024b. aimo-validation-amc. https://huggingface.co/ datasets/AI-MO/aimo-validation-amc, 2024. Accessed: 2025-07-30. [4] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint , 2024. [5] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. A survey of monte carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in games, 4(1):1-43, 2012. [6] Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Step-level value preference optimization for mathematical reasoning. arXiv preprint , 2024. [7] Wenxiang Chen, Wei He, Zhiheng Xi, Honglin Guo, Boyang Hong, Jiazheng Zhang, Rui Zheng, Nijun Li, Tao Gui, Yun Li, et al. Better process supervision with bi-directional rewarding signals. arXiv preprint , 2025. 10 [8] Hanze Dong, Wei Xiong, Bo Pang, Haoxiang Wang, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, and Tong Zhang. Rlhf workflow: From reward modeling to online rlhf. arXiv preprint , 2024. [9] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint , 2025. [10] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint , 2024. [11] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint , 2021. [12] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022. [13] Xu Huang, Weiwen Liu, Xiaolong Chen, Xingmei Wang, Hao Wang, Defu Lian, Yasheng Wang, Ruiming Tang, and Enhong Chen. Understanding the planning of llm agents: A survey, 2024. [14] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint , 2023. [15] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint , 2024. [16] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on machine learning, pages 282-293. Springer, 2006. [17] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199-22213, 2022. [18] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in neural information processing systems, 35:3843-3857, 2022. [19] Manling Li, Shiyu Zhao, Qineng Wang, Kangrui Wang, Yu Zhou, Sanjana Srivastava, Cem Gokmen, Tony Lee, Erran Li Li, Ruohan Zhang, et al. Embodied agent interface: Benchmarking llms for embodied decision making. Advances in Neural Information Processing Systems, 37:100428-100534, 2024. [20] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023. [21] Runze Liu, Junqi Gao, Jian Zhao, Kaiyan Zhang, Xiu Li, Biqing Qi, Wanli Ouyang, and Bowen Zhou. Can 1b llm surpass 405b llm? rethinking compute-optimal test-time scaling. arXiv preprint , 2025. [22] Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint , 2024. [23] Aaron Meurer, Christopher P Smith, Mateusz Paprocki, Ondˇrej ˇCertík, Sergey B Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K Moore, Sartaj Singh, et al. Sympy: symbolic computing in python. PeerJ Computer Science, 3:e103, 2017. 11 [24] Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. [25] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint , 2024. [26] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint , 2024. [27] Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling parameters for reasoning. In The Thirteenth International Conference on Learning Representations, 2025. [28] Lin Sun, Chuang Liu, Xiaofeng Ma, Tao Yang, Weijia Lu, and Ning Wu. Freeprm: Training process reward models without ground truth process labels. arXiv preprint , 2025. [29] Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision. Advances in Neural Information Processing Systems, 37:51118-51168, 2024. [30] Zhengyang Tang, Xingxing Zhang, Benyou Wang, and Furu Wei. Mathscale: Scaling instruction tuning for mathematical reasoning. arXiv preprint , 2024. [31] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint , 2023. [32] Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown. Hugging Face, 2024. [33] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint , 2022. [34] Jun Wang, Meng Fang, Ziyu Wan, Muning Wen, Jiachen Zhu, Anjie Liu, Ziqin Gong, Yan Song, Lei Chen, Lionel M Ni, et al. Openr: An open source framework for advanced reasoning with large language models. arXiv preprint , 2024. [35] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint , 2023. [36] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. [37] Wolfram Alpha LLC. Wolframalpha. https://www.wolframalpha.com/. [38] Zhiheng Xi, Yiwen Ding, Wenxiang Chen, Boyang Hong, Honglin Guo, Junzhe Wang, Dingwen Yang, Chenyang Liao, Xin Guo, Wei He, et al. Agentgym: Evolving large language model-based agents across diverse environments. arXiv preprint , 2024. [39] Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, and Pengfei Liu. Evaluating mathematical reasoning beyond accuracy. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 27723-27730, 2025. 12 [40] An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2. 5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint , 2024. [41] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint , 2025. [42] Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong, Kuikun Liu, Ziyi Wang, et al. Internlm-math: Open math large language models toward verifiable reasoning. arXiv preprint , 2024. [43] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint , 2023. [44] Yao Zhang, Chenyang Lin, Shijie Tang, Haokun Chen, Shijie Zhou, Yunpu Ma, and Volker Tresp. Swarmagentic: Towards fully automated agentic system generation via swarm intelligence. arXiv preprint , 2025. [45] Yao Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. Webpilot: A versatile and autonomous multi-agent system for web task execution with strategic exploration. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 2337823386, 2025. [46] Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. arXiv preprint , 2025. [47] Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. Processbench: Identifying process errors in mathematical reasoning. arXiv preprint , 2024. [48] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in neural information processing systems, 36:46595-46623, 2023. [49] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), Bangkok, Thailand, 2024. Association for Computational Linguistics. 13 Contents 1 Introduction 1 2 Related Work 3 2.1 Mathematical Reasoning with LLMs . . . . . . . . . . . . . . . . . . . . . . . . . 3 2.2 Process Reward Models for Step-Level Supervision . . . . . . . . . . . . . . . . . 3 3 Methodology 3 3.1 Tree-Guided Reasoning Path Construction . . . . . . . . . . . . . . . . . . . . . . 4 3.2 Fidelity-Aware Step Verification with External Tool . . . . . . . . . . . . . . . . . 5 3.3 Hybrid Reward Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.4 Generative Process Reward Model . . . . . . . . . . . . . . . . . . . . . . . . . . 6 3.5 Data Construction for GroundedPRM Training . . . . . . . . . . . . . . . . . . . 6 4 Experiment 6 4.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 4.2 Results on ProcessBench . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 4.3 Analysis and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4.4 Results on Reward-Guided Greedy Search . . . . . . . . . . . . . . . . . . . . . . 9 5 Conclusion 10 6 Future Work 10 A Algorithm Overview and Pseudocode 15 B Prompt Template for Data Annotation 15 C Training Hyperparameters 18 D Supplementary Evaluation Results 18 E Case Studies 18 E.1 Fidelity- and Error-Type-Aware Reasoning Supervision . . . . . . . . . . . . . . . 19 E.2 Dual-Signal Supervision Improves Data Fidelity and Credit Attribution . . . . . . 19 14 A Algorithm Overview and Pseudocode To facilitate reproducibility and provide an intuitive understanding of our reward modeling pipeline, we present the high-level pseudocode of GroundedPRM's data generation algorithm. As described in Algo. 1, GroundedPRM integrates MCTS-based reasoning path construction, tool-based step-level verification, and hybrid reward aggregation into a unified supervision framework. Algorithm 1 Algorithm of GroundedPRM Require: Initial state s0, Node n, Node Value Q, Execution Round r, Max Rounds R, Max Children K, Tool Result v, Rollout Reward u, Node List L ←∅, Node Value List V ←∅, Visit Count N 1: Initialize: s0 ←sinitial, n ←n0, Q0 ←0 2: for r = 1 to R do 3: while n is fully expanded do 4: nselect ←Selection(n) {Select node with highest UCT score} 5: if nselect is terminal or has no children then 6: break 7: end if 8: n ←nselect 9: end while 10: a ←Generate(nselect) 11: for aj from a1 to aK do 12: si-1 ←State(nselect) 13: ni-1 ←nselect 14: si,j ←Transition(si-1, aj) 15: ni,j ←Children(ni-1, si,j) 16: Q(si) ←vi,j {Verify step with tool} 17: end for 18: if vi,j = max(v) then 19: nsim ←ni,j 20: end if 21: n ←nsim, s ←State(nsim) 22: while n ̸= nterminal do 23: s′ ←Transition(s, a) 24: n′ ←Children(n, s′) 25: V ←V + v′ 26: L ←L + n′ 27: n ←n′, s ←s′ 28: end while 29: F ←FinalAnswerCorrectness(n) {Compare to ground truth} 30: T ←Length(L), ni ←nsim 31: for all vj ∈V do 32: ui ← 1 T -1-i PT -1 j=i+1 dj · vj + β · F 33: end for 34: Q(si, ai) ←ui + vi {Aggregate reward} 35: for k = i -1 to 0 do 36: Qk ←Qk + γdk · Q(si, ai) 37: Nk ←Nk + 1 38: end for 39: end for B Prompt Template for Data Annotation To construct the step-level supervision for GroundedPRM, we adopt two structured prompt templates. The prompt in Fig. 2 is used to autoregressively generate intermediate reasoning steps during MCTS rollouts, producing structured trajectories consisting of step objectives and corresponding actions. The prompt in Fig. 3 is applied to verify each generated step using external tools, outputting binary 15 correctness labels along with rationale-enhanced explanations, which together form the fidelity-aware supervision signals used to train GroundedPRM. Figure 2: Prompt used to generate the next reasoning step during MCTS rollouts. The output consists of a structured step objective and a logically grounded action aligned with the current goal. These step-level generations are used to construct diverse reasoning trajectories for reward modeling in GroundedPRM. 16 Figure 3: Prompt used for tool-based step-level verification. The assistant analyzes the reasoning step for logical consistency, evaluates the relevance of Wolfram Alpha responses, and outputs a binary correctness label along with a structured rationale, forming the fidelity-aware supervision signal for GroundedPRM. 17 C Training Hyperparameters Tab. 5 lists the hyperparameters used to train GroundedPRM. We fine-tuned the model in a sequenceto-sequence manner using the LLaMA-Factory [49] Trainer implementation on 4×A100 80GB GPUs. Table 5: Training configuration for Qwen2.5-7B-Instruct. Parameter Value Model Qwen2.5-7B-Instruct Torch data type bfloat16 Attention implementation flash attention 2 Lora rank 128 Lora alpha 256 Per-device train batch size 4 Gradient accumulation steps 8 Learning rate 3.0 × 10-5 Number of training epochs 6 LR scheduler type cosine Max gradient norm 1.0 Warmup ratio 0.1 Seed 42 Optimizer Adam Gradient checkpointing True D Supplementary Evaluation Results We assess the step-level supervision quality of GroundedPRM on ProcessBench, comparing it to several strong PRM baselines trained with automated labels. As shown in Tab. 1 of the main paper, GroundedPRM achieves the highest average F1 score across all benchmarks, with notable gains on MATH, OlympiadBench, and Omni-MATH. These results highlight the effectiveness of our fidelity-aware, structure-guided reward modeling framework in generating accurate and reliable supervision, even under limited data budgets. Full results are provided in Tab. 6. Table 6: F1 scores on ProcessBench across four math benchmarks. GroundedPRM achieves the highest average F1 score among all PRMs trained with automatically labeled data, outperforming all prior methods by a significant margin, particularly on MATH, OlympiadBench, and Omni-MATH. Scoring Approach GSM8K MATH OlympiadBench Omni-MATH Avg. F1 error correct F1 error correct F1 error correct F1 error correct F1 RLHFlow-PRM-Deepseek-8B 24.2 98.4 38.8 21.4 80.0 33.8 10.1 51.0 16.9 10.9 51.9 16.9 26.6 RLHFlow-PRM-Mistral-8B 33.8 99.0 50.4 21.7 72.2 33.4 8.2 43.1 13.8 9.6 45.2 15.8 28.4 Qwen2.5-Math-7B-Math-Shepherd 46.4 95.9 62.5 18.9 96.6 31.6 7.4 93.8 13.7 4.0 95.0 7.7 28.9 EurusPRM-Stage1 46.9 42.0 44.3 33.3 38.2 35.6 23.9 19.8 21.7 21.9 24.5 23.1 31.2 EurusPRM-Stage2 51.2 44.0 47.3 36.4 35.0 35.7 25.7 18.0 21.2 23.1 19.1 20.9 31.3 Math-Shepherd-PRM-7B 32.4 91.7 47.9 18.0 82.0 29.5 15.0 71.1 24.8 14.2 73.0 23.8 31.5 GroundedPRM 31.9 67.9 43.4 36.0 67.5 47.0 23.4 60.5 33.8 23.8 61.4 34.4 39.7 E Case Studies We conduct qualitative case studies to demonstrate how GroundedPRM performs fine-grained, interpretable supervision across diverse reasoning scenarios. § E.1 illustrates its ability to detect arithmetic, algebraic, and constraint-based inconsistencies with high fidelity and structural awareness. § E.2 examines an ablation case that contrasts Step-Only and Dual-Signal supervision, revealing how dual-signal fusion enhances data fidelity and accurate credit attribution. 18 E.1 Fidelity- and Error-Type-Aware Reasoning Supervision We present three qualitative cases to illustrate how GroundedPRM delivers fidelity-aware, errortype-aware, and first-wrong-step localization in process supervision. Across all cases, a generalpurpose LLM-as-judge fails to catch basic inconsistencies, whereas GroundedPRM recomputes the relevant quantities, checks constraints, and explains why a step is wrong. Case 1: Basic arithmetic aggregation (Fig. 4). A student sums nine quiz scores. The LLM solution totals them to 570, and the LLM-as-judge accepts this step. In contrast, GroundedPRM reproduces the additions step-by-step (50 →130 →210 →270 →310 →400 →500 →570 →630), recovers the correct total 630, and labels the presented step as incorrect. This shows fidelity-aware arithmetic checking and precise localization of the first wrong step. Case 2: Sum-of-pairs with spurious halving (Fig. 5). The problem gives three pairwise sums of products. The LLM correctly aggregates them to 210 but then unjustifiably divides by 2 to claim 105; the LLM-as-judge still marks the step as correct. GroundedPRM re-evaluates the algebra and confirms 210, explicitly naming the first error as a spurious halving and explaining why this single slip corrupts downstream reasoning. This evidences error-type awareness rather than mere outcome comparison. Case 3: Ratio bounded by inequalities and number-theoretic constraints (Figs. 6,7). Given b-a = 15 and 5 9 4 7 ≈ 0.5714. Fig. 6 shows the LLM-as-judge validating this wrong candidate by failing to enforce the bound. Fig. 7 shows GroundedPRM re-checking the inequality numerically, verifying b -a = 15 and coprimeness, and recovering a valid pair (19, 34) with 19 34 ≈0.5588 ∈( 5 9, 4 7). This two-part case highlights constraint checking beyond arithmetic (inequality bounds + number theory). Takeaways. Across the three cases, GroundedPRM (i) recomputes key quantities instead of trusting fluent text, (ii) localizes the first wrong step and names the error type (e.g., spurious halving), and (iii) verifies multi-constraint consistency (inequalities, differences, coprimeness). These behaviors support our central claim: fidelity-aware, structure-guided supervision yields more reliable step-level judgments than generic LLM-as-judge baselines, and produces actionable rationales aligned with process-level reasoning. E.2 Dual-Signal Supervision Improves Data Fidelity and Credit Attribution We illustrate the impact of dual-signal supervision through a math factorization example where a Step-Only verifier fails to detect a missing factor. By combining step-level verification with trajectorylevel feedback, GroundedPRM identifies the omission and corrects the reasoning, demonstrating its advantage in aligning local fidelity with global consistency. Case 4: Prime factorization with an omitted factor (Figs. 8,9). An LLM computes the GCF of 180 and 168 but omits the factor 3 in 168 (using 23 × 7) and concludes 252; a Step-Only supervisor validates this step. In contrast, GroundedPRM re-derives 168 = 23 × 3 × 7, checks common primes with 180 = 22 × 32 × 5, and recovers the correct GCF 22 × 3 = 12, labeling the presented step as incorrect. This highlights how dual-signal supervision prevents surface-level false positives and yields faithful credit attribution. 19 Question: Sally received the following scores on her math quizzes: 50, 80, 80, 60, 40, 90, 100, 70, and 60. Find her mean score. Reasoning Step: To calculate the mean score of Sally's math quizzes, first, I need to add up all of her quiz scores. The question shows the score of each quiz is: 50, 80, 80, 60, 40, 90, 100, 70, and 60. Adding these together, we get: Total score = 570. GPT-4o as Judge: q **Incorrect judgement.** The total sum of the scores is accurate: 50 + 80 + 80 + 60 + 40 + 90 + 100 + 70 + 60 = 570. This step is: + GroundedPRM as Judge: ¥ **Correct judgement.** The objective is to find Sally's mean score from her math quizzes. The given scores are 50, 80, 80, 60, 40, 90, 100, 70, and 60. First, I need to verify if the LLM's reasoning correctly adds up all these scores. The LLM states that the total score is 570. Let me add them one by one to check. Starting with 50, adding 80 gives 130. Then, adding another 80 brings it to 210. Adding 60 makes it 270. Next, adding 40 results in 310. Adding 90 increases it to 400. Then, adding 100 brings it to 500. Adding 70 makes it 570, and finally, adding 60 gives 630. The correct total should be 630, not 570. So the LLM's answer is incorrect here. According to the conclusion of the judgement, the label is: - Figure 4: Comparison between GPT-4o and GroundedPRM as judge models. GPT-4o q incorrectly validates the miscalculated reasoning step (570 instead of 630), while GroundedPRM ¥ correctly re-evaluates the arithmetic, detects the error, and provides a rationale-supported supervision label, demonstrating fidelity-aware verification. 20 Question: Alice thinks of four positive integers a ≤b ≤c ≤d satisfying {ab + cd, ac + bd, ad + bc} = {40, 70, 100}. What are all the possible tuples (a, b, c, d) that Alice could be thinking of? Reasoning Step: Given the equations ab + cd = 40, ac + bd = 70, and ad + bc = 100, we need to find the positive integers a, b, c, d such that a ≤b ≤c ≤d. First, let's add all three equations: (ab + cd) + (ac + bd) + (ad + bc) = 40 + 70 + 100 2(ab + ac + ad + bc + bd + cd) = 210 ab + ac + ad + bc + bd + cd = 105 GPT-4o as Judge: q **Incorrect judgement.** The current step appears to be correct. Let's break it down: 1. You are given three equations involving products of four integers a, b, c, and d: ab + cd = 40, ac + bd = 70, ad + bc = 100 2. To simplify the problem, the step suggests adding all three equations together: (ab + cd) + (ac + bd) + (ad + bc) = 40 + 70 + 100 3. Simplifying the left-hand side: ab + cd + ac + bd + ad + bc. The expression ab + ac + ad + bc + bd + cd indeed correctly accounts for all pairwise products. 4. The right-hand side simplifies to:40 + 70 + 100 = 210 5. Dividing both sides by 2 to account for each pair being counted twice, you get: ab + ac + ad + bc + bd + cd = 105 Conclusively, the current step has correctly simplified the sum of the given equations to derive the result ab + ac + ad + bc + bd + cd = 105. This expression will be useful in further exploring potential solutions for the tuple (a, b, c, d). This step is: + GroundedPRM as Judge: ¥ **Correct judgement.** Alright, I'm trying to verify the reasoning step provided. Let me go through it step by step. First, the problem gives us three equations: ab + cd = 40, ac + bd = 70, ad + bc = 100. We need to find positive integers a, b, c, d with a ≤b ≤c ≤d. The reasoning starts by adding all three equations: (ab + cd) + (ac + bd) + (ad + bc) = 40 + 70 + 100, Which simplifies to: ab + cd + ac + bd + ad + bc = 210. Then, the reasoning factors this sum into: (ab + ac + ad + bc + bd + cd) = 105. I need to check if this step is logically sound. Let's see: Adding the three equations: ab + cd = 40, ac + bd = 70, ad + bc = 100. Adding them together gives: ab + cd + ac + bd + ad + bc = 40 + 70 + 100 = 210. So, the sum of all these terms is indeed 210. The reasoning then states that this sum equals ab + ac + ad + bc + bd + cd = 105. Wait, that seems off because 210 divided by 2 is 105, but the sum is 210, not 105. There's a mistake here. So, the correct equation after adding should be: ab + ac + ad + bc + bd + cd = 210. Not 105. Therefore, the reasoning incorrectly halved the total sum, leading to an incorrect conclusion. This error affects the rest of the reasoning, as subsequent steps would be based on an incorrect total sum. Hence, the logical check fails because the step introduces an error in the equation. In summary, the step adds the equations correctly but incorrectly divides the total sum by 2, leading to an incorrect value for the sum of the terms. This makes the reasoning flawed. The logical check reveals an error in the reasoning step. Adding the three equations correctly gives a total sum of 210, but the reasoning incorrectly calculates this as 105 by dividing by 2. This introduces an inconsistency in the problem-solving process. According to the judgment conclusion, the label of this step is: - Figure 5: Comparison between GPT-4o and GroundedPRM as judge models. After correctly aggregating the three sums to 210, the LLM performs an unjustified halving to 105. GPT-4o q incorrectly validates this step. GroundedPRM ¥ correctly re-evaluates the arithmetic to 210, verifies each sub-sum, localizes the first wrong step (the halving), and outputs a rationale-supported negative label, showing fidelity-aware, error-type-aware verification. 21 Question: The positive integers a and b have no common divisor larger than 1 . If the difference between b and a is 15 and 5 9 5 9 and a a+15 4 7, failing the bound check. 22 GroundedPRM as Judge: ¥ **Correct judgement.** Okay, I need to verify the LLM's reasoning step based on the given problem. Let me go through each part carefully. First, the objective is to find integers a and b such that their greatest common divisor is 1, their difference is 15, and the fraction a/b lies between 5/9 and 4/7. The LLM suggests testing values where a is close to b but smaller, with b = a + 15. Looking at the examples provided: - a=1, b=16: 1/16 ≈0.0625, which is way below 5/9 (≈0.5556). So this doesn't fit. - a=2, b=17: 2/17 ≈0.1176, still too low. - a=15, b=30: 15/30 = 0.5, which is exactly 1/2, but the upper bound is 4/7 ≈0.5714. So it's within the range, but let me check if a and b are coprime. gcd(15,30)=15, not 1. So this doesn't work. - a=21, b=36: 21/36 = 7/12 ≈0.5833, which is above 4/7. So it's outside the upper bound. - Finally, a=26, b=41: 26/41 ≈0.6341, which is above 4/7. Wait, that's over. But the LLM says it's within the range. That seems incorrect because 4/7 ≈0.5714, and 26/41 ≈0.6341, which is higher than 4/7. So maybe there's a mistake here. Wait, perhaps I miscalculated 26/41. Let me compute it again. 26/41 is indeed approximately 0.6341, which is greater than 4/7 (≈0.5714). So the LLM's conclusion that 26/41 is within the range is incorrect. Therefore, the reasoning might have an error. But wait, maybe I made a mistake in the comparison. Let me check 26/41 against 5/9 and 4/7 again. 5/9 ≈0.5556, and 4/7 ≈0.5714. 26/41 ≈0.6341, which is above 4/7. So the fraction is actually outside the upper bound. Therefore, the LLM's answer might be wrong because it claims 26/41 is within the range when it's not. However, perhaps I should check if there's another pair that fits better. Maybe a=31, b=46: 31/46 ≈0.6739, still above 4/7. a=36, b=51: 36/51 = 12/17 ≈0.7059, which is even higher. Maybe a=25, b=40: 25/40 = 5/8 = 0.625, still above 4/7. Hmm, perhaps the correct a and b are lower. Wait, let me try a=16, b=31: 16/31 ≈0.5161, which is above 5/9 but below 4/7. But b -a = 15, so 31 -16 = 15. Now, gcd(16, 31) = 1, since 16 is 24 and 31 is prime. So this could be a valid solution. Let me check: 16/31 ≈0.5161, which is between 5/9 ≈0.5556? Wait, no, 0.5161 is less than 5/9. So that doesn't work. Alternatively, a=19, b=34: 19/34 ≈0.5588, which is above 5/9 but below 4/7. Check gcd(19, 34) = 1. So 19/34 ≈0.5588, which is between 5/9 and 4/7. So this might be the correct pair. So the LLM's answer with a=26 and b=41 is incorrect because 26/41 > 4/7. The correct pair should be a = 19 and b = 34, which satisfies all conditions. Therefore, the LLM's reasoning has a logical error in verifying the upper bound of the fraction. It incorrectly concluded that 26/41 is within the required range when it's actually outside. Hence, the result is false. The LLM's reasoning incorrectly concludes that 26 41 is within the specified range. Upon verification, 26 41 ≈0.6341, which exceeds the upper bound of 4 7 ≈0.5714. A correct pair satisfying all conditions is a = 19 and b = 34, yielding 19 34 = 1 2, which is within the range and coprime. According to the conclusion of the judgement, the label is: - Figure 7: Comparison between GPT-4o and GroundedPRM as judge models. Part 2 of 2 (complements Fig. 6). For the same case, GroundedPRM ¥ correctly re-evaluates the inequality, verifies b-a = 15 and coprimeness, rejects (26, 41), and identifies the valid pair (19, 34) with 19 34 ≈0.5588; it outputs a rationale-supported negative label, demonstrating fidelity-aware, error-type-aware verification. 23 Question: Find the greatest common factor of 180 and 168. Reasoning Step: To find the greatest common factor (GCF) of 180 and 168, we can use prime factorization. First, let's express each number as a product of its prime factors: First, 180 = 22 × 32 × 5. Second, 168 = 23 × 7. Step-Only: q **Incorrect judgement.** Okay, so I need to verify the LLM's answer about finding the GCF of 180 and 168 using prime factorization. Let me start by understanding what the problem is asking. The objective is to find the greatest common factor of these two numbers. The LLM provided the prime factorizations: 180 is broken down into 22 × 32 × 5 and 168 into 23 × 7. I should check if these factorizations are correct because that's the foundation for finding the GCF. Starting with 180: Let's do the prime factorization step by step. 180 divided by 2 is 90, which divided by 2 again is 45. Then 45 divided by 3 is 15, and 15 divided by 3 is 5, which is a prime number. So yes, that gives us 22 × 32 × 5, which matches the LLM's answer. Now for 168: Let's factorize it. 168 divided by 2 is 84, divided by 2 again is 42, and once more by 2 is 21. 21 divided by 3 is 7, which is prime. So that's 23 × 3 × 7. Wait, the LLM wrote 23 × 7, but I think they missed the 3. Hmm, that might be an issue. But wait, in their answer, they only mentioned 23 × 7, so maybe they considered only the highest powers present in both? Or perhaps they made a mistake. But wait, the GCF is found by taking the lowest powers of the common prime factors. So even if 168 has a 3, since 180 doesn't have a 7, the common prime factors are just 2 and 3. So the LLM correctly identified the common factors as 22 and 31, leading to a GCF of 22 × 3 = 4 × 3 = 12. However, their factorization of 168 didn't include the 3, which might be a mistake, but since they still arrived at the correct GCF, maybe it's acceptable. Wait, no, actually, the prime factorization must be accurate. If 168 is indeed 23 × 3 × 7, then the GCF should consider the common primes. So 180 has 22, 32, and 5. 168 has 23, 3, and 7. The common primes are 2 and 3. For 2, the lowest power is 22, and for 3, it's 31. So the GCF should be 22 × 3 = 4 × 3 = 12. So even though the LLM's factorization of 168 was incomplete, their conclusion was correct because they only took the common factors. So perhaps it's a minor error in the factorization step but the GCF is still right. Alternatively, maybe the LLM intended to show only the common factors. Let me double-check. If 168 is 23 × 7, then the common factors would be 22, but then the GCF would be 4, which is incorrect because 3 is also a common factor. So the LLM must have included the 3 in their factorization, even if they didn't mention it in the answer. So perhaps there was a typo, but the actual factorization includes the 3, leading to the correct GCF. In any case, the key point is that the LLM's answer correctly identifies the GCF as 12, even if there was a slight error in the intermediate steps. Therefore, the logical check passes because the reasoning aligns with the known conditions, and the calculation is correct despite a possible minor miscalculation in the factorization of 168. The LLM correctly used prime factorization to find the GCF of 180 and 168. The prime factors of 180 are 22 ×32 ×5, and those of 168 are 23 ×3×7. The common factors are 22 and 3, leading to a GCF of 12. Despite a possible error in the factorization of 168, the final result is correct.According to the judgement conclusion, this step is: + Figure 8: Comparison between Step-Only and GroundedPRM. Step-Only q incorrectly validates a step that is arithmetically valid but logically inconsistent, while GroundedPRM ¥ correctly detects the missing factor in 168's prime factorization through dual-signal verification, ensuring high-fidelity and logically grounded supervision. Part 1 of 2 (see Fig. 9 for part 2). 24 GroundedPRM ¥ **Correct judgement.** Alright, I'm trying to verify the LLM's answer regarding the GCF of 180 and 168 using prime factorization. Let me go through the process step by step. First, the objective is clear: find the GCF of 180 and 168. The LLM provided the prime factorizations as 180 = 22 × 32 × 5 and 168 = 23 × 7. I need to check if these factorizations are correct. Starting with 180: dividing by 2 gives 90, then another 2 gives 45. So that's 22. Then 45 divides by 3 to get 15, then again by 3 to get 5. So that's 32 and 51. So 180's prime factors are indeed 22 × 32 × 5. Now for 168: dividing by 2 gives 84, then 42, then 21. 21 divides by 3 to get 7. So that's 23 × 31 × 71. Wait, the LLM only wrote 23 × 7. That seems missing the 3. Hmm, so the LLM might have missed including the prime factor 3 in 168's factorization. But wait, maybe I made a mistake. Let me double-check. 168 divided by 2 is 84, divided by 2 again is 42, divided by 2 again is 21. 21 is 3×7. So yes, it's 23 × 3 × 7. So the LLM's factorization is incomplete because they only included 23 and 7, omitting the 3. However, when finding the GCF, we take the minimum exponents of the common prime factors. Both numbers have 22 and 32, but wait, 168 doesn't have 32. It only has 31. So actually, the common prime factors are 2 and 3, with exponents 2 and 1 respectively. Therefore, the GCF should be 22 × 3 = 4 × 3 = 12. But the LLM didn't include the 3 in their factorization of 168, which could lead to an incorrect GCF. However, in this case, since 168 only has one 3, the GCF would still be 31, but the LLM's omission might have confused the process. Wait, but the LLM's answer is just providing the factorizations, not calculating the GCF yet. So perhaps the issue is more about the accuracy of the factorizations before proceeding. Since the LLM missed the 3 in 168's factorization, their information is incomplete, which could affect the subsequent GCF calculation. So, the logical check shows that the LLM's reasoning is flawed because it omitted a prime factor in 168. The factorization should include 3, which is crucial for determining the correct GCF. Therefore, the result should be false because the provided factorizations are incorrect.According to the judgement conclusion, this step is: - Figure 9: Comparison between Step-Only and GroundedPRM. Step-Only q incorrectly validates a step that is arithmetically valid but logically inconsistent, while GroundedPRM ¥ correctly detects the missing factor in 168's prime factorization through dual-signal verification, ensuring high-fidelity and logically grounded supervision. Part 2 of 2 (complements Fig. 8). 25
2510.14940
Invisible neutron decay and light BSM particles J.C. Helo∗and T. Ota† Departamento de F´ısica, Facultad de Ciencias, Universidad de La Serena, Avenida Cisternas 1200, La Serena, Chile, Millennium Institute for Subatomic Physics at the High Energy Frontier (SAPHIR), Fern´andez Concha 700, Santiago, Chile M. Hirsch‡ Instituto de F´ısica Corpuscular – CSIC - Universitat de Val`encia, Parc Cient´ıfic-UV, c/ Catedr´atico Jos´e Beltr´an, 2, E-46980 Paterna (Val`encia), Spain Abstract In Standard Model Effective Field Theory (SMEFT) invisible neutron decay arises from d = 12 operators. Adding new, light particles to the field content of the SM, such as right-handed neutrinos, allows to construct operators for invisible neutron decay at much lower dimensions. Observing invisible neutron decay, if nucleon decays with charged leptons remain absent, would therefore point towards the existence of new neutral degrees of freedom. Here, we discuss four cases: (i) Adding right-handed neutrinos to the SM; (ii) a right-handed neutrino and an axion-like particle; (iii) a right-handed neutrino and a (nearly) massless singlet scalar; and (iv) a right-handed neutrino and a light Z′. We give the general tree-level decomposition for the resulting d = (7 −9) operators for invisible neutron decay. We also briefly discuss LHC searches related to the exotic states found in these UV completions. Keywords: Nucleon decays, SMEFT, light sterile neutrinos, LHC ∗Electronic address: jchelo@userena.cl †Electronic address: toshihiko.ota@userena.cl ‡Electronic address: mahirsch@ific.uv.es 1 arXiv:2510.14940v1 [hep-ph] 16 Oct 2025 Contents I. Introduction 2 II. Setup: Models and effective operators 5 A. Variant I: Right-handed neutrinos 5 B. Variant II: A right-handed neutrino and a ALP 6 C. Variant III: A right-handed neutrino and a light scalar 7 D. Variant IV: A right-handed neutrino and a light vector 8 E. A caveat 9 III. High-energy completions 10 IV. LHC phenomenology 14 V. Summary 17 A. List of the high-energy completions 19 References 20 I. INTRODUCTION Searches for nucleon decays by Super-Kamiokande have pushed limits to 1034 yr and above for several decay channels. The best known examples are probably p →ℓ+π0 (ℓ= e, µ) [1] and n →¯νK0 [2], which are motivated by Grand Unified Theories (GUTs). Less known are decays to three charged leptons, where Super-Kamiokande also gives limits in excess of 1034 yr now [3]. Limits on many other channels, however, are less stringent, sometimes by orders of magnitude, see [4, 5]. On the other hand, nucleon decay searches form an important part of the agenda of next generation underground detectors such as JUNO [6], Hyper-Kamiokande [7] or DUNE [8] and significant improvements in sensitivity on many nucleon decay modes are expected from these experiments in the foreseeable future. Traditionally, searches for proton decay were motivated by the predictions from GUTs, such as the classical SU(5) [9], for a review see for example [10]. More recently, however, many theoretical papers have taken a more agnostic view and studied nucleon decays from the point of view of effective field theory (EFT) [11–25].1 In Standard Model Effective Field Theory (SMEFT), one can classify different decay modes according to the dimensionality of the corresponding operators. For example, two-body decays with ∆(B + L) = 2 are described by d = 6 operators, ∆(B −L) = 2 are d = 7, genuine three-body decays need at least d = 9 operators [17]. 1 Here we also cite some pioneering works on nucleon decays in SMEFT [26–31] and the studies on the decompositions of the relevant effective operators [32–37]. 2 In the current paper, we are interested in invisible neutron decay. Considering only SM particles, the only possibility for a neutron to decay into “nothing” is the channel n →3ν.2 SMEFT operators with three leptons have been discussed in [13]. In SMEFT, the lowest dimensional operator with three leptons has d = 9, one example is O ∝(QL)(¯L¯L)(dRdR). Ref. [13] lists a total of 16 operators of this type at d = 9. Some of these d = 9 operators can generate the decay n →3ν. However, all operators that can do so, always also generate decays to charged leptons with very similar rates. Since nucleon decay searches are much more sensitive to final states with charged leptons [3] than to n →3ν [5], d = 9 operators can not generate invisible neutron decay at observable rates. In SMEFT one has to go up to d = 12, to find for the first time an operator capable of generating invisible neutron decay, without the corresponding decays to charged leptons, as already realized by Weinberg more than 40 years ago [28]. Weinberg gave as an example the operator [28]: L d=12 W = cW Λ8 ϵIJKϵijϵklϵmn uR ILi   dR JLk   dR KLm  HjHlHn, (1) where Λ is the typical scale of the origin of this effective operator and cW is the Wilson coefficient. In Eq. (1) we have suppressed generation indices for simplicity, ϵIJK and ϵij are anti-symmetric tensors for the contraction of SU(3)c and SU(2)L indices. This operator leads to the decay mode n →3ν. For this d = 12 operator one can estimate the nucleon decay width for the 3-body final state roughly as [17] Γ ∼β2 hm5 N⟨H0⟩6 6144π3Λ16 ⇒ 1 9 · 1029 yr 13.4 TeV Λ 16 . (2) Here, βh ≃0.014 GeV3 is the nuclear matrix element [38], mN the nucleon mass, ⟨H0⟩≃174 GeV the vacuum expectation value of the SM Higgs field, and cW is taken to be unity. For invisible neutron decay the particle data booklet [5] cites an old Kamiokande result (from 1993) [39] (4.9 · 1026 yr), but there are more recent and stronger bounds set by SNO [40] (2 · 1029), KamLAND [41] (5.8 · 1029 yr, cf. also [42]) and two analysis by the SNO+ collaboration [43] (2.5 · 1029 yr) and [44] (9.0 · 1029 yr). For the nearer and farther future sensitivity estimates have been given by JUNO [45] (5·1031 yr) and for THEIA (80kt), that could ultimately reach ∼4 · 1032 yr according to their proposal [46]. However, there exists a simple possibility to generate invisible neutron decay with lower dimensional operators: We can extend the SM particle content with new light degrees of freedom. In this paper we discuss four different possibilities. First we add (i) (at least two) right-handed neutrinos NR, also sometimes called sterile neutrinos in the literature, cf. [47– 53] for reviews and current bounds, motivated from various neutrino experiments [54–60]. In addition, the introduction of such light degrees of freedom is often discussed in the context of cosmology, e.g., the stringent bound on neutrino masses from recent cosmological obser- vations [61–64] maybe be alleviated by the mixing between active neutrinos and massless 2 Here both ν and ¯ν are mentioned as ν. 3 sterile neutrinos [65–68]. In NRSMEFT, for a list of basis operators see [69–71], invisible neutron decay operators appear at d = 9. Second, we add (ii) a right-handed neutrino and an axion-like particle (ALP), a. The axion is one of the best-motivated light new physics fields [72, 73]. It arises as a conse- quence of the breaking of the Peccei-Quinn symmetry introduced to solve the strong CP problem [74, 75]. For experimental searches and phenomenology of ALPs, cf. e.g., [76–83] and for cosmological aspects of ALPs, cf. e.g., [84–89]. The basis operators of aSMEFT, the effective theory with the SM particle content plus an ALP, are listed in [90–92]. In aNRSMEFT, i.e. SMEFT with NRs and ALPs, invisible neutron decay operators appear at d = 8.3 Finally, we add a right-handed neutrino and (iii) a (nearly) massless scalar, ϕ, or (iv) a light vector boson, Z′. A ultra-light scalar is an attractive candidate of dark matter, aka “fuzzy” dark matter [95]. For experimental searches and cosmologcal effects of the light vector boson, aka dark photon, cf. e.g., [96–99]. The basis operators of ϕSMEFT, SMEFT with a SM singlet scalar, and Z′SMEFT, SMEFT with a dark photon, are listed in [90, 91].4 The invisible decay of neutrons is generated from d = 7 operators in both ϕNRSMEFT (SMEFT with ϕ and NR) and Z′NRSMEFT (SMEFT with Z′ and NR). In this paper, we will give the operators for invisible neutron decay for all four cases, discussed above, estimate the neutron decay half-lives and give the complete list of the possible UV completions for these operators at tree-level. For the different BSM particles found in these UV models we will also briefly discuss current LHC limits. This introduction would not be complete without mentioning that invisible neutron decay has been discussed in some earlier papers. For example, in [100] the authors speculated about “decays to nothing” in models with extra time dimensions. The authors of [101] considered universal extra dimensions and an extended gauge group, to have the neutron decay dominantly into n →ν¯νs¯νs, where νs is some sterile neutrino. The effective operators relevant to n →¯νs¯ν′ s¯ν′′ s are listed in [102] in the context of an extra dimension model. In [103] invisible neutron decay was studied in a left-right symmetric model with large extra dimensions. Some recent works have considered also the possibility of a “dark” (invisible) neutron decay [104–108] in relation to the neutron life-time anomaly [109]. And, finally, a particular UV model for invisible di-neutron decay to two singlet fermions has been studied in [110], where the authors discussed also possible constraints from neutron stars. The rest of this paper is organized as follows. In section 2 we will introduce the different model variants in a bit more detail, discuss some generalities, define the operators and estimate the neutron decay half-lives for the different cases. In section 3 we will then decompose the operators to find the possible UV models at tree-level. Section 4 discusses a 3 Basis operators with a given particle content can be constructed up to a given mass dimension d by AutoEFT [93] or also Sym2Int [94] 4 Later we will use the symbol S for the scalars that mediate the effective operators, and therefore, to avoid the confusion, we use ϕ for the light scalar field in the final state of the invisible neutron decay process, although the effective theory with ϕ is called sometimes sSMEFT in the literature. 4 number of different LHC searches and how they can be used or reinterpreted to derive limits on the BSM states found in section 3. We then close with a short discussion and summary. Some tables for the decompositions will be relegated to the appendix for the sake of keeping the main text more compact. II. SETUP: MODELS AND EFFECTIVE OPERATORS In this section we will describe the theoretical setup. We consider four different extensions of the SM particle content. For each case, we briefly describe the model. We then define the operators relevant for invisible neutron decay. We give an estimate of the neutron decay half-live in each case. Finally, in an additional subsection we discuss a possible caveat. The decomposition of the operators, defined here, is discussed in section 3. A. Variant I: Right-handed neutrinos In the standard model neutrinos are massless. While there are many extensions of the SM that can explain neutrino masses, as observed in oscilllation experiments, maybe the simplest possibility is to add right-handed neutrinos to the SM particle content. Massive neutrinos can be either Dirac or Majorana particles. In the classical type-I seesaw model [111–114] right-handed neutrinos have a Majorana mass term and the Lagrangian can be written as: LY = (Y ∗ ν )αjNRjLα · H + 1 2(MM)jjNRj cNRj + h.c. (3) Note that MM can be taken diagonal without loss of generality. Majorana neutrinos, on the other hand, violate lepton number by two units. Whether such a ∆L = 2 term is allowed or not, will depend on the ultra-violet completion of the operators defined below. We note, however, that the invisible neutron decay n →3 ¯N violates L by three units, thus, Majorana masses need not to be allowed in general and neutrinos could be Dirac particles, as far as invisible neutron decay is concerned. For the fit to current neutrino data, at least two copies of NR are needed and both Dirac and Majorana neutrinos can explain existing data. We will consider three copies of NR, to mirror the three SM generations, but doing an exact fit to oscillation data is irrelevant for the neutron decay we are interested in. The only condition for the decay to occur is that the right-handed neutrino masses are smaller than mn/3. Two operators can be constructed at level d = 9, that contribute to invisible neutron decay: ON 1 = c(1) αβγijk Λ5 (uRαcdRβ)  dRγ cNRi  NRj cNRk  , (4) ON 2 = c(2) αβγijk Λ5 Qα cQβ   dRγ cNRi  NRj cNRk  . (5) 5 Here, α, β, γ are SM generation indices, while i, j, k runs over the generation of NR. To keep the expressions simpler, we have suppressed SU(3)c (and SU(2)L) indices. Here, and in all cases discussed below the colour of the three quarks must be contracted with a completely anti-symmetric tensor, ϵIJK. It is straightforward to show that both of these operators vanish in case there is only one copy of NR, we have checked this also with Sym2Int [94]. Assuming the mass of the right-handed neutrinos to be negligible and one of the c(1/2) 111ijk = 1, we can estimate the partial half-life for the invisible neutron decay as Γ(n →¯Ni ¯Nj ¯Nk) ≃ β2 hm5 n 6144π3Λ10 ≃ 1 9 · 1029yr 180TeV Λ 10 . (6) Here again, βh ≃0.014 GeV3 is the nuclear matrix element [38] and the half-live is normalized to the current bound from SNO+ [44]. We can apply Eq. (6) to the different existing and projected half-life limits and obtain estimates for the sensitivity to the new physics scale Λ: Γ ∼                  1 4.9 · 1026yr 84TeV Λ 10 Kamiokande (PDG), 1 5.0 · 1031yr 270TeV Λ 10 JUNO (future), 1 4.0 · 1032yr 330TeV Λ 10 THEIA (future). (7) B. Variant II: A right-handed neutrino and a ALP Axion-like particles (ALPs) appear in many extensions of the SM. While the ALP mass is a free parameter, the couplings of the ALPs with SM fields are protected by an approximate classical shift symmetry, as is the case for the classical axion. Many different studies of properties of and searches for ALPs have been published, see for example [77, 78, 115, 116]. Here we only briefly mention that ALPs could also explan the dark matter [117, 118] and/or be long-lived particles [118, 119]. The ALP Lagrangian up to d = 5 contains the following terms [120] La = 1 2∂µa∂µa −1 2m2 aa2 − X X cX e Xa Λ aXµν e Xµν − X ψ cψa Λ ∂µa(ψγµψ). (8) Here Xµν stands for any of the field strength tensors of the SM, i.e. X = B, W, G, and e Xµν is its dual. ma is the ALP mass, in principle a free parameter. Note, however, that one usually assumes the ALP to be the pseudo-Goldstones of a spontaneously broken global symmetry, thus ma can be naturally small compared to the scale of symmetry breaking. ψ are the SM fermions and for the case we are interested in also ψ = NR. It is important to note that the ALP couples only derivatively to fermions. There are two d = 8 B-violating operators containing a right-handed singlet fermion, NR, and an ALP, a: Oa 1 = c(1,a) αβγ Λ4 (∂µa) NRγµdRα  (uRβcdRγ) , (9) 6 Oa 2 = c(2,a) αβγ Λ4 (∂µa) NRγµdRα  Qβ cQγ  . One copy of NR is enough to form these operators, thus for simplicity we have not added a generation index to NR. However, two NR would still be needed, if one wants to explain neutrino data. There are two more d = 8 B-violating operators, containing an ALP, that can be written down with only SM fields: Oa 3 = c(3,a) αβγδ Λ4 (∂µa) LαdRβ  Qγ cγµdRδ  , (10) Oa 4 = c(4,a) αβγδ Λ4 (∂µa) (eRαγµdRβ)  dRγ cdRδ  . The operators in Eq. (9) can generate invisible neutron decays, without accompanying charged lepton final states, while Oa 3 will always also have charged leptons in the nucleon decays. We have listed Oa 4 for completeness, it does not give a contribution to invisible neutron decay. The operators in Eqs. (4) and (5) violate (B + L) = 4. Thus, it is easy to argue that the standard d = 6 2-body (B + L) = 2 decays are absent on symmetry grounds for the invisible neutron decay to three NR’s. However, the situation is different for the case of an ALP. ALPs do not usually carry either B or L, as can be seen from Eq. (8). For an ALP with (B, L) = (0, 0), however, Oa 1,2 both have (B −L) = 2. One can write down operators at d = 7 with (B −L) = 2, such as, for example, O ∝(NRγµdRα)(Qβ c(iDµ)Qγ). Since this type of operators leads to 2-body decays, such as n →π0 + E/ and p →π+ + E/ , one would expect the invisible neutron decay to be a very sub-dominant decay mode in this case. A possible way out of this conclusion is based on the following argument. If we assign a the quantum numbers (B, L) = (−1, 1), Oa 1,2 both conserve (B + L) and (B −L), thus none of the (B −L) = 2 operators need to be present in the theory. For this to be possible, however, the “standard” d = 5 ALP interaction terms in Eq. (8) need to be absent. It is possible to write down UV completions that fulfil these assignments. For the operators Oa 1,2, the width for the two-body decay n →Na can be estimated to be roughly Γ(n →Na) ≃β2 hm3 n 32πΛ8 ≃ 1 9.0 · 1029yr 9.6 · 106GeV Λ 8 . (11) The much larger scale found here, compared to the case n →3 ¯N, see Eq. (6), makes it unlikely that the UV completions for these operators can ever be tested in accelerator experiments. C. Variant III: A right-handed neutrino and a light scalar The third possibility for generating operators for the invisible neutron decay with BSM fields is to add a right-handed neutrino and a light scalar to the SM. We discuss this case 7 only briefly, since we believe it to be less motivated and add it only for completeness of the discussion. For a model with a light singlet scalar, ϕ, and right-handed neutrinos there are two d = 7 operators contributing to invisible neutron decay: Oϕ 1 = c(1,ϕ) αβγ Λ3 ϕ NR cdRα  (uRβcdRγ) , (12) Oϕ 2 = c(2,ϕ) αβγ Λ3 ϕ NR cdRα  Qβ cQγ  . Similar to the ALP case, discussed above, if ϕ has (B, L) = (0, 0), the d = 6 operators that one can construct from Eq. (12), eliminating ϕ, would lead to dominant 2-body decays, ren- dering invisible neutron decay uninteresting. Asigning non-zero baryon and lepton number to ϕ allows to eliminate this problem technically. The width n →¯Nϕ can be estimated to be roughly Γ(n →¯Nϕ) ≃β2 hmn 32πΛ6 ≃ 1 9.0 · 1029yr 2.1 · 109GeV Λ 6 . (13) D. Variant IV: A right-handed neutrino and a light vector Finally, we briefly mention the possibility where a neutron decays to a right-handed neutrino and a SM-singlet vector boson Z′, which is prompted by the following d = 7 effective operators: OZ′ 1 = c(1,Z′) αβγ Λ3 Z′ µ NRγµdRα  (uRβcdRγ) , (14) OZ′ 2 = c(2,Z′) αβγ Λ3 Z′ µ NRγµdRα  Qβ cQγ  . To avoid the possible d = 6 nucleon decays and make the d = 7 invisible neutron decay dominant, we need to assign charges (which may be the B and L numbers) to Z′. The argument is the same as for the ALP case, cf. Sec. II B. Assuming Z′ is a massless gauge boson under a new gauge symmetry, the decay rate is calculated to be Γ(n →NZ′) ≃β2 hmn 16πΛ6 ≃ 1 9.0 · 1029yr 2.3 · 109GeV Λ 6 . (15) When Z′ aquires a mass through the spontaneous breaking of some new gauge symmetry, the decay rate changes to: Γ ≃ β2 hm3 n 32πΛ6M 2 Z′  1 −M 2 Z′ m2 n 2  1 + 2M 2 Z′ m2 n  . (16) 8 This equation seems to diverge in the limit of MZ′ →0. However, for a gauge boson such divergencies are known to be unphysical. In a full-fledged model for the massive Z′, one would need to specify also the symmetry breaking sector of the theory. The full calculation then includes the correct treatment of the would-be Goldstone boson of the theory and in the limit of MZ′ ≪mn Eq. (16) will reduce to the same expression as for the scalar case Eq. (13), up to a coefficient. However, constructing such a complete model for the Z′ is beyond the scope of the current paper. E. A caveat There is one important caveat to the above discussion: For any operator that annihilates three quarks to invisible particles it is possible to replace an initial state quark for a final state anti-quark. An example is shown in Fig. 1. Thus, four-body final states with a (neutral or charged) pion will always accompany three-body invisible decays. - And similarly for the two-body final states NR + a and NR + S, three-body final states with one additional pion will be generated.5 No search for such three- and four-body final states exist. However, Super-Kamiokande has a search for n →π0¯ν and p →π+¯ν. The current bounds are [121]: Γ(p →π+¯ν) >3.9 · 1032yr, (17) Γ(n →π0¯ν) >1.1 · 1033yr. (18) These are two-body decays and thus the pion momentum is fixed at roughly p(π) ∼460 MeV. Super-Kamiokande uses this constraint in setting the limits. For the 4-body decays, on the other hand, the pion momentum takes a distribution, which depends in addition on the mass of the final state particles. Nevertheless, Super-Kamiokande provides sufficient information such that we can make a rough estimate of the limit for the 4-body decay n →π0 + 3 ¯N. The limit on Γ(n →π0¯ν) is set by Super-Kamiokande [121] excluding 19.1 signal events in the bin p(π0) = [400, 500] MeV. From Fig. 3 of [121] we can estimate that the total number of background events in the range p(π0) = [0, 500] MeV sums to about 850 events, and about 80 events in the bin with p(π) = [400, 500] MeV. Thus, we estimate that a limit of 19.1 × p 850/80 ≃62 events could be set from the total data. This would lower the exclusion limit by a factor of roughly 3.3 and the limit on Γ(n →π0 + 3 ¯N) should roughly be T1/2 >∼3.3 × 1032 yr. We stress that this is only a rough estimate and also that the Super-Kamiokande [121] result is based on only 172.8 kton.yr of data. A re-analysis of all Super-Kamiokande data should be able to provide a limit, which could be up to a factor two better than our naive estimate. 5 Since one can replace an initial d-quark by a final state s, producing a final state kaon, this allows, in principle, also sensitivity to 2nd generation indices in the operators. 9 uR dR uR(dR) p(n) NR NR NR dR uR(dR) π+(π0) FIG. 1: p(n) →π+(π0) + missing induced by the d = 9 uRdRdRNRNRNR operator. For the four-body decays of Fig. 1, we can estimate the partial half-live, using results from [17], with appropriate replacements: Γ(n →π0 ¯Ni ¯Nj ¯Nk) ≃m7 nW0(π)2 737280π5Λ10 ≃ 1 3.3 · 1032yr 245TeV Λ 10 (19) Here the matrix element is taken to be |W0(π)| = |⟨π0|(ud)RdR|n⟩| = 0.134GeV2 [38]. And again, we assume the Wilson coefficient is equal to one. Comparing this estimate with with Eq. (7), we can see that the sensitivity of this pion mode to the new physics scale Λ is actually better than the sensitivity of the invisible mode at SNO+ (180 TeV), and only moderately lower than the future sensitivity estimated for JUNO (270 TeV). Given these numbers, one must conclude that searches for n →π0 + E/ provide important constraints on searches for invisible neutron decay. Therefore, we encourage experimentalists to study the n →π0+“multiple missing” mode (independently from n → π0¯ν) and provide the bound. We stress again, that this is not only true for the case of the d = 9 operator discussed here, but for all operators generating invisible neutron decay. III. HIGH-ENERGY COMPLETIONS In the previous section we have defined the effective operators for invisible neutron de- cay for the four possible different model variants. In this section we will discuss how these operators can be generated in the UV, after integrating out heavy particles. Our aim here is to find systematically all possible UV models via the diagrammatic method. Since, how- ever, the basic procedure is the same for all decompositions, we will discuss here only one decomposition of the operator ON 1 ∝uRdRdRNRNRNR in detail. Full tables for all other operators and decompositions are given in the appendix. For 6-fermion operators there are only two possible topologies at tree-level [122], using only renormalizable interactions. They are shown in Fig. 2. Note that here the arrows indicate the flow of the particle number. Internal particles can be either scalars, fermions or vectors. For Topology A there are exactly three possibilities to distribute the fermions of ON 1 to the outer legs of the diagram. We list these in Tab. I. 10 S1/V1 S2/V2 S3/V3 Topology A S1/V1 S2/V2 F Topology B FIG. 2: Topologies for tree-level realizations of the d = 9 neutron decay operators, see Eqs. (4) and (5). Depending on the chiralities of the outer fermion fields, the dashed lines can be either a scalar, S, or vector, V . The mediator F in Topology B must be introduced as a vector-like fermion, but when it is a SM singlet, it can also be a Majorana fermion. Decomposition S1 S2 S3 Comment Eq. (uRdR)(dRNRi)(NRjNRk) (3, 1, +1/3) (3, 1, −1/3) (1, 1, 0) S2 ̸= S† 1 to avoid d = 6 (20) (uRNRi)(dRdR)(NRjNRk) (3, 1, +2/3) (3, 1, −2/3) (1, 1, 0) dRdRS† 2 = 0 (uRNRi)(dRNRj)(dRNRk) (3, 1, +2/3) (3, 1, −1/3) (3, 1, −1/3) S3 ̸= S2 to avoid S1S2S2 = 0 (21) TABLE I: Decompositions for the operator uRdRdRNRiNRjNRk allowed with Topology A. The fields in a parenthesis form an interaction with the corresponding mediator field, S1,2,3. For the explicit form of the interaction Lagrangians, cf. Eqs. (20) and (21). For ON 1 and Topology A, all mediator fields are Lorentz scalars (indicated with symbol S). The charges of them under the SM gauge symmetries, in the order (SU(3)c, SU(2)L, U(1)Y ), are uniquely determined as given in the table. The interaction Lagrangians for the different cases can be written as: LA-1 =yudϵIJK(uRc)I ˙a(dR)˙a J(S† 1)K + ydN(dR c)I ˙a(NRi)˙a(S† 2)I + yNN(NRj c)˙a(NRk)˙aS† 3 + µ(S1)I(S2)IS3, (20) LA-3 =yuN(uRc)I ˙a(NRi)˙a(S† 1)I + ydNS2(dR c)I ˙a(NRj)˙a(S† 2)I + ydNS3(dR c)I ˙a(NRk)˙a(S† 3)I + µϵIJK(S1)I(S2)J(S3)K, (21) where the indices I, J, K in the lower (upper) position are for 3 (3) in SU(3)c, and ˙a is for a right-handed 2-spinor. For case A-2 the interaction dRdRS† 2 vanishes identically, thus it is not a valid decomposition for ON 1 .6 For a more compact notation, we have suppressed 6 However, the interaction dRsRS† 2 does not vanish. Since operators are usually given in the flavour basis, 11 generation indices in these Lagrangians. The different Yukawas are to be understood as matrices of either dimensions (3, 3), (3, n) or (n, n), where n is the number of copies of right-handed neutrinos. A few more comments are in order. In case A-1, described by the Lagrangian Eq. (20), the diquark, S1, has the same SM charges as the leptoquark, S† 2. However, these two fields have to be distinct states. If they were identical, one could construct from S1 a d = 6 effective operator O ∝uRdRdRNR. This operator generates the the 2-body decays n(p) →π0(π+) ¯N with a significantly larger rate than that of the invisible neutron decay. To make n →3 ¯N the dominant decay mode, one must avoid all d = 6 nucleon decays, i.e. S1 and S2 must be two different fields. This can be realized by assigning for case A-1 all Si definite baryon and lepton numbers. With the asignments S1 = (2/3, 0), S2 = (1/3, 1) and S3 = (0, 2), where the brackets stand for (B, L), all Yukawa couplings conserve baryon and lepton number trivially. The only B and L violating coupling would then be the soft parameter µ. Since µ is a soft parameter, one can actually argue that it can be technically small in the sense of ’t Hooft’s naturalness, since in the limit of µ →0 baryon and lepton numbers are conserved. A small value of µ would change the expected mass scale for the Si drastically. Instead of Eq. (6), for topology-A one can express the decay width as: Γ(n →¯Ni ¯Nj ¯Nk) ≃ β2 hm5 nµ2 6144π3Λ12 ≃ 1 9 · 1029yr 1keV µ 2 2.4TeV Λ 10 . (22) Here, it is assumed that mS1 ≃mS2 ≃mS3 ≃Λ. Putting µ = Λ, Eq. (22) reduces to Eq. (6), of course. However, this argument shows that it might not be completely hopeless to search for the BSM particles that appear in the decomposition of a d = 9 operator at the LHC or the future FCC-hh. The third case, case A-3, is similar to case A-1. However, see Eq. (21), case A-3 contains three leptoquarks, and two of them, S2 and S3, interact with dR and NR. They therefore have the same SM charges. However, also in this case they must be two different fields, otherwise the triple scalar interaction vanishes. In this case, a simple lepton number assignment, as discussed for case A-1, is not sufficient to make S2 and S3 distinct. This problem can, on the other hand, be cured by adding a discrete symmetry, for example Z3, with the charge ω ≡ei2π/3. We can assign ω to NRj and S2 and ω2 to NRk and S3. Under this assignment both, the triple scalar interaction as well as the Yukawas, are allowed under the discrete symmetry. However, the µ-term will still break lepton and baryon numbers softly, thus guaranteeing that the dangerous 2-body nucleon decay modes are absent. In the appendix, we list all mediator fields that appear in the high-energy completions of the uRdRdRNRNRNR operator with Topology B in Fig. 2. There we also give the decompo- sitions of the QQdRNRNRNR operator Eq. (5) with both the topologies. We note that also while in the neutron the quarks are in the mass eigenstate basis, decomposition A-2 can contribute to the invisible neutron decay rate. The rate, however, will be suppressed by a factor sin2 θC, where θC is the Cabibbo angle. We will not discuss this possibility in further details. 12 n →3 ¯N n →Na(NZ′) n →¯Nϕ Mediator uudNNN QQdNNN ∂audd ¯N (Z′udd ¯N) ∂aQQd ¯N (Z′QQd ¯N) uudNϕ QQdNϕ S(1, 1, 0) ✓ ✓ S(3, 1, −1/3) ✓ ✓ ✓ ✓ ✓ ✓ S(3, 1, +2/3) ✓ ✓ ✓ S(3, 2, +1/6) ✓ V (3, 1, −1/3) ✓ ✓ V (3, 1, +2/3) ✓ V (3, 2, +1/6) ✓ ✓ ✓ F(1, 1, 0) ✓ ✓ ✓ ✓ ✓ ✓ F(3, 1, −1/3) ✓ ✓ ✓ ✓ ✓ ✓ F(3, 1, +2/3) ✓ ✓ ✓ F(3, 2, +1/6) ✓ ✓ ✓ TABLE II: List of the mediators that appear in the decompositions of the effective operators relevant for the invisible neutron decay. for topology B it is, of course, possible to lower the expected mass scale. In EFT one usually estimates the scale Λ for a Wilson coefficient c ≃1. However, for a d = 9 operator c ∝Y 4, where Y stands symbolically for any Yukawa coupling, see Fig. 2. Thus, even moderatly small Yukawa coouplings, will change the expections given in Eq. (6) drastically: Γ(n →¯Ni ¯Nj ¯Nk) ≃ β2 hY 8m5 n 6144π3Λ10 ≃ 1 9 · 1029yr  Y 0.01 8 4.5TeV Λ 10 . (23) In addition, the appendix contains tables of the decompositions for the effective operators in the other variants, which were presented in Secs. II B-II D. Similar to cases A-1 and A-3 discussed here for ON 1 , in all cases one has to make sure, that the dangerous 2-body nucleon decays are absent by the use of some symmetry, otherwise the rate for the invisible neutron decay will be negligible. Table II lists all mediator fields that appear in the decompositions of the d = (7 −9) operators. There is a total of 11 fields, 4 scalars, 4 fermions and 3 vectors. Some fields appear in all operators and in all 4 model variants (S3,1,−1/3, F1,1,0 and F3,1,−1/3), while some others appear only for a particular model variant (S1,1,0, V3,1,−1/3) or even only one particular operator (S3,2,1/6 and V3,1,2/3). Note that the fields appearing in the decompositions of the effective operator with a Z′ are the same as the ones for the operators with an ALP, the corresponding column applies to both fields. 13 W V d U u d d W l ν Q Q W/Z/h W/Z/h j j FIG. 3: Example Feynman diagrams for single and pair production of VLQs and their decays. IV. LHC PHENOMENOLOGY In this section, we will briefly discuss a variety of LHC searches, that can be re-interpreted as limits for fields listed in Tab. II. Naive expectations for the scale, Λ, of the d = (7 −9) operators, discussed in secton II, make it look quite unlikely that the LHC can find any positive signals for these states. However, as discussed around Eqs. (22) and (23), the estimates derived using EFT in section II might vastly overestimate the masses of the BSM fields in a full-fledged UV model. We therefore felt motivated to at least briefly discuss the current experimental status. The decomposition of the d = 9 operators for the NRSMEFT model, discussed in sec- tion III, generates a set of eight different BSM fields: Four vector-like fermions (F1,1,0, F3,1,−1/3, F3,1,2/3 and F3,2,1/6), three types of scalars (S1,1,0, S3,1,−1/3 and S3,1,2/3) and one vector (V3,2,1/6). Three more fields appear only in the ALP/Z′ variants (S3,2,1/6, V3,1,−1/3 and V3,1,2/3). We mention these only for the sake of completeness here, since a LHC discov- ery for the fields from the d = 7, 8 operator seems (even) less likely than those appearing in models for the d = 9 operators. First, we mention that for the singlet fields, there is no production mode at the LHC other than decays of the coloured BSM states. Moreover, F1,1,0 and S1,1,0 will decay to NR’s and thus be invisible, if they are lighter than the coloured states. Thus, there are no constraints on these two fields from LHC searches. Let us then concentrate on the coloured fermions. Here, we follow largely the discussion in [123]. All three coloured fermions can have interactions with SM quarks and the Higgs: LY = YURQF3,1,2/3H† + YDR QF3,1,−1/3H + YQu F3,2,1/6uRH† + YQd F3,2,1/6dRH + h.c.(24) In Eq. (24) we have suppressed generation indices. The fermions F3,1,−1/3, F3,1,2/3 and F3,2,1/6 must also have vector-like mass terms. After electro-weak symmetry breaking, Eq. (24) leads to mixing between heavy and SM quarks. This mixing will dominate their decay widths, unless the corresponding Yukawa couplings are tiny. If, on the other hand, some Yukawa couplings are large, this mixing may dominate also VLQ production at the LHC, see Fig. 3, left. In addition, VLQs can always be pair-produced, see Fig. 3 to the right, independent of the size of the mixing. Note that in this figure we show the VLQs decaying to “j” to indicate any final state quark, i.e. including also third generation. 14 Both ATLAS and CMS have searched for such vector-like quarks in a number of publica- tions. For example, ATLAS has searched for pair-produced VLQs decaying to 1st generation quarks in [124]. Limits are given as function of the VLQ mass, for different branching ratios Br(Q →W +j) versus Br(Q →H +j), assuming Br(Q →W +j)+Br(Q →H +j)+Br(Q → Z +j)= 1 [124]. Derived limits are in the range (900−1500) GeV for Br(Q →W +j) in the range (10-100) %. In [125] ATLAS presented the results of a search for pair-produced VLQs decaying to third generation quarks. Limits range from (1 −1.7) TeV depending mostly on the branching ratio Br(Q →W + t/b). These limits are slightly stronger than the 1st generation quark limits, mostly due to lower backgrounds. There are also searches for singly produced VLQs. However, in the search [126] ATLAS assumes that the VLQ decays to the final state t+h or W +b, where the top/bottom is tagged in order to reduce backgrounds. Similarly CMS [127] searched for single VLQ production in the final state t+h and t+Z. For κT = 1, where κT measures the strength of the interaction of the VLQ with electro-weak gauge bosons relative to standard EW couplings, a lower limit on VLQs decaying to top quarks of around mVLQ ≥1.9 TeV is found in [126]. There are no searches yet published for singly produced VLQs decaying to second or 1st generation quarks. However, due to much larger backgrounds in such searches, one can expects limits would be considerably weaker. However, we need to stress that all the VLQ searches discussed up to now depend on the presence of mixing. The neutron decay diagrams, that we study here, however, do not require any of the Yukawa coupling in Eq. (24) to be non-zero. Instead, couplings of the form F-q-S, where F, q and S stand for a VLQ, a SM quark and one of the BSM scalars must be present. If the VLQs are pair produced at the LHC they can then decay to a final states containing q + NR + NR, i.e. jet plus missing energy. This signal is equivalent to the one used in the standard squark search at the LHC. For example, ATLAS [128] has searched for pair produced squarks with L = 139/fb. ATLAS derives limits for one generation of squarks (roughly m˜q ≥1.2 TeV) and 8 degenerate squarks (roughly m˜q ≥1.8 TeV) for light neutralinos. Since VLQs (being fermions) have cross sections at least ∼4 times larger than a colour triplet scalar of the same mass, this SUSY search [128] could be reinterpreted to yield a lower limit on the VLQ mass. We estimate this limit to be roughly mVLQ ≥1.5 TeV, for any of the three coloured VLQs under consideration. The coloured scalar states appear either as leptoquarks or as diquarks in the different decompositions, discussed in the previous sections. The leptoquark couplings are: LLQ = YNu uRcNRS† 3,1,2/3 + Yde dR ceRS3,1,2/3 (25) + YNd dR cNRS† 3,1,−1/3 + Yeu uRceRS† 3,1,−1/3 + YLQ QcLS† 3,1,−1/3 + h.c., while the diquark couplings can be written as: LDQ = Ydd dR cdRS3,1,2/3 + YQQ QcQS3,1,−1/3 + Yud uRcdRS3,1,−1/3 + h.c. (26) Again, we have suppressed generation indices. We stress, however, that the flavour diagonal entries in Ydd are zero, see section III. Importantly, while the LQs and DQs appear with 15 the same SM quantum numbers, they must be different states, otherwise one can generate d = 6 proton decay tree-level diagrams, which would lead to lower limits on their masses near the GUT scale and render invisible neutron decay completely negligible, as discussed in section III. Note that only (YNu, YNd and Yud) of the eight couplings given in Eqs. (25) and (26) appear in the decompositions for the neutron decay. This is important, since none of these couplings will generate final states with charged leptons. Leptoquarks, both pair and singly produced, have been searched for at the LHC in a number of different final states. The couplings Yeu and YLQ do not appear in the decompo- sitions, but will lead to final states with charged leptons, that are more easily constrained at the LHC. For example, [129] searched for pair produced LQs decaying to 3rd generation quarks. Lower limits on LQ masses in the range (1.2 −1.7) TeV are derived, depending on LQ decay branching ratios. ATLAS also searched for LQs decaying to light quarks [130]. For a branching ratio of Br(LQ→l + q)= 1 limits of mLQ ≥1.8 TeV and mLQ ≥1.7 TeV are excluded in the electron and muon channels. These limits weaken to roughly (700−900) GeV for Br(LQ→l + q)= 0.1. Single or resonant production of LQs requires large Yukawa couplings to give sizeable cross sections. ATLAS searched for resonant LQs [131] via a lepton-jet signature, with either one or two leptons in the final state. For Yde = 1.0 LQs with masses below mLQ = 3.4 TeV are excluded by this search. CMS searched for single LQs in t-channel diagrams [132], leading to di-lepton final states. Both scalar and vector LQs were considered. For scalar [vector] LQs with masses between (1 −5) TeV Yukawa couplings larger than (0.3–1.0) [(0.1 −1.4)] have been excluded. Again, the searches just discussed require charged lepton final states and the correspond- ing couplings are not required by the decompositions for invisible neutron decay. LQs that decay only to quarks and invisible final states are less constrained. For example, the SUSY search by ATLAS [128] will give a limit of roughly mLQ >∼1.2 TeV for pair produced LQ. A CMS paper [133] gives limits for pair produced LQs decaying to neutrinos. Limits are mLQ ≥1140 GeV for scalar LQs and mLQ ≥(1560 −1980) GeV for vector LQs, depending on the strength of the LQ gluon coupling. The more stringent limits for vector LQs simply reflect the larger cross sections for vectors compared to scalars. These limits apply to the state V3,2,1/6 that appears in the decomposition of ON 2 . Finally, diquarks are stringently constrained from dijet searches, since the production is s- channel enhanced. For example, CMS [134] gives a lower limit on the mass of a colour triplet scalar of mDQ ≃7.5 TeV assuming a Yukawa coupling to two quarks of electromagnetic strength, e. The limits derived by CMS are strictly speaking valid only for a scalar diquark coupling to both, up-type and down-type quarks with the same strength. Thus, they apply only to S3,1,−1/3. The searches discussed above are based on luminosities up to 140/fb. Moderate im- provements can be expected from the high-luminosity LHC. However, the future FCC-hh [135] would considerably improve sensitivities. For example, [136] quotes sensitivities up to mLQ ∼8 (15) TeV for pair (singly) produced leptoquarks, decaying to charged leptons. 16 Many other search channels, discussed above, should improve by similar factors. We will close this section with a short discussion of the d = 12 operator given in Eq. (1). In the beginning of this section we have argued that a Wilson coefficient of cW = 1 is often unrealistic in UV models, thus EFT tends to overestimate the masses of the BSM states. This is certainly true also for a d = 12 operator.7 We have used an automated code for operator decomposition, based on the diagrammatic method and described in [137, 138], to decompose Eq. (1) at tree-level. At such a large dimension, there is a proliferation of models. We count 39 topologies, 150 different diagrams and 12713 model variants. Despite this large number of possible model variations all model variants can be described with just 38 different fields (scalars and fermions, no vectors). 26 of these are triplets of colour, either diquarks, leptoquarks or heavy vector-like quarks. All scalars and fermions discussed above appear in this list too and the constraints we have discussed above apply also to this case. The remaining coloured states, not covered in the above discussion will have bounds, which are at least as strong, since they are all either larger SU(2) multiplets or states with larger hyper-charge. There is, however, one important difference between the UV states of the d = 12 operator and those of the d = (7 −9) operators. Whereas for the d = 9 operators, strictly speaking, only right-handed neutrino final states are required by the couplings appearing in the decomposition, in the case of the d = 12 operator similar decay rates of the leptoquarks to final states with missing energy and final states with charged leptons are expected. As discussed above, final states with charged leptons are a better signal than missing energy at the LHC and thus give generally more stringent limits. In other words, limits on states from the decompositions of the d = 12 operator from LHC will in general be more stringent than those for the d = 9 case, that motivated the discussion given in this section. V. SUMMARY In this paper we have studied invisible neutron decay. Limits on nucleon decay modes with three charged leptons [3] are much stronger than those of invisible neutron decay [5]. Invisible neutron decay with observable rates can therefore be generated only by operators that do not also generate charged lepton final states. In SMEFT one has to go up to d = 12 to find such an operator. Current limits on invisible neutron decay correspond then to scales of order Λ ≃13 TeV. An observation of invisible neutron decay in the next round of experiments might then be accompanied by new physics at the future FCC-hh [136]. However, this conclusion changes, if one adds new light degrees of freedom to the SM particle content. We have discussed four different extensions of the SM for which invisible neutron decay with observable rates can be induced by operators of dimension d = (7 −9). We have estimated the half-lives in each case and for each model we also give all possible UV completions at tree-level. Current limits on the scales of these operators range from 7 Note that in the decomposition of Eq. (1) up to seven different Yukawa couplings appear. 17 Λ ≃180 TeV for the variant model with only right-handed neutrinos (d = 9 operator) to Λ ≃2 · 109 GeV (d = 7 operator). A discovery of invisible neutron decay, therefore may point towards to existence of new, light BSM states. In particular, we stress that in all possible models a neutral fermion state with the quantum numbers of a right-handed neutrino must exist. Thus, these models will also be able to explain neutrino masses. We have also discussed constraints on the allowed rate for invisible neutron decay, that could be placed by a search for n →π0 + E/ . Hyper-Kamiokande [7] should be able to improve on the limit of n →π0 + E/ from Super-Kamiokande [121] by a considerable factor. We note that the Super-Kamiokande analysis [121] gives only a limit on a two-body final state, but the same type of search can also be used to constrain decays with more than one invisible particle in the final state. Non-observation of n →π0 + 3 ¯N at Hyper-Kamiokande might rule out the possibility of observing invisible neutron decay in the foreseeable future. However, to end this discussion on a more positive note, we mention that the sensitivity of Hyper-Kamiokande and the future JUNO and (the proposed) THEIA experiment to the scales of the invisible neutron decay operators will be rather similar. Observation of invisible neutron decay in JUNO should lead to an excess in the Hyper-Kamiokande search for n →π0 + E/ and vice versa. The question whether invisible neutron decay or the decay n →π0 + E/ would be discovered first, however, is not so easy to answer. While the future Hyper-Kamiokande experiment [7] is much larger than Super-Kamiokande, and thus allows to probe much larger half-lives, already the Super-Kamiokande limit [121] is background dominated. To claim a discovery in this case requires both hundreds of signal events and a detailed understanding of backgrounds. JUNO’s search for the invisible neutron decay [45], on the other hand, is based on a characteristic triple coincidence arising from the invisible decays of s-shell neutrons in 12C, which leaves very few background events, making a discovery with just a handful of events possible. Finally, one can speculate that adding the two gammas from the π0 decay to the triple coincidence for the invisible neutron decay would eliminate all remaining background in JUNO’s search. We do not know, however, whether this advantage is (over-)compensated by the loss of signal extraction efficiency or not. Only the experimental collaboration can perform an analysis sufficiently sophisticated to decide whether n →π0 +E/ or the invisible neutron decay itself would be more sensitive to the different models (and the corresponding operators) that we discussed in the present paper. Acknowledgements J.C.H and T.O acknowledge support from ANID – Millennium Science Initiative Pro- gram ICN2019 044. The research of J.C.H is supported by ANID Chile through FONDE- CYT regular grant No 1241685. The research of T.O. is supported by ANID Chile through FONDECYT regular grant No 1250343. M.H. acknowledges support by Spanish grants PID2023-147306NB-I00 and CEX2023-001292-S (MCIU/AEI/10.13039/501100011033), as well as CIPROM/2021/054 (Generalitat Valenciana). 18 Appendix A: List of the high-energy completions Here we list the decompositions of the effective operators and the charges under the SM gauge symmetries of the necessary mediator fields in each decomposition. • Variant I with light right-handed neutrinos NR — Tabs. I, III, IV, and V. Fig. 2 for the topologies. • Variant II with an NR and an axion-like particle a — Tabs. VI and VII. Fig. 4 for the topology. • Variant III with an NR and a scalar field ϕ — Tabs. VIII and IX. Fig. 4 for the topology. • Variant IV with an NR and a vector boson Z′ — The mediators appear in this variant are the same as those listed in Tabs. VI and VII for Variant II. The “Decomposition” column shows how the outer fields are distributed to the vertices in the corresponding topology. The fields in a parenthesis are in the same vertex. Let us take the first line of Tab. III (uRdR)(NRi)(NRj)(dRNRk) (A1) as an example. “(uRdR)” means that uR and dR correspond to the two outer fields in the leftmost vertex in Topology B given in Fig. 2. The mediator field between the (uRdR) vertex and the second-left vertex with NRi is S1(3, 1, +1/3). The direction of the S1 particle number is indicated with arrow in Fig. 2. The two of the middle vertices with NRi and NRj are mediated by the fermion F(3, 1, +1/3). The outer fields in the rightmost vertex are dR and NRk, and the mediator between the (NRj) vertex and the (dRNRk) vertex is S2(3, 1, −1/3). In short, the high-energy completion of this decomposition is determined as LTab.III-1 =yudϵIJK(uRc)I ˙a(dR)˙a J(S† 1)K + ydN(dR c)I ˙a(NR)˙a(S† 2)I + yFNS1(FL)I ˙a(NR)˙a(S1)I + yNFS2(NR c)˙a(FR)I ˙a(S2)I. (A2) The charges of S2 and S† 1 are the same. However, if they are an identical field, it mediates the d = 6 effective operator Ld=6 = yudydN M 2 S1 ϵIJK(uRc)I ˙a(dR)˙a J(dR c)K ˙b(NR) ˙b (A3) which induces p →π+ ¯N with larger decay rate than d = 9 induced invisible neutron decay, i.e., the strong bound from p →π+ ¯N constrains the invisible neutron decay to an unreachable level. In short, if one wants to have this decomposition for the invisible neutron decay, the mediator S2 must be an independent field from S† 1, although the SM charges are the same. This is mentioned in the “Comment” column as “S2 ̸= S† 1 to avoid d = 6”. 19 S1/V1 /F1 S2/V2 /F2 FIG. 4: Topology for the operators with four fermions and 1 scalar, such as uRdRdRNRϕ and uRdRdRNRa, i.e., One of the outer legs should be a scalar, ϕ or a. The directions of arrows on the outer legs depends on the decomposition. Although the mediators are given with solid lines in this topology, they can be scalars S, vectors V or fermions F, depending on the distribution of outer fields. The decompositions with the comment “dRdRS† = 0” contain the interaction L = yϵIJK(dR c)I ˙a(dR)˙a J(S†)K (A4) which vanishes because of the colour antisymmetric nature of the two dRs forming a scalar. The decomposition (uRNRi)(dR)(dR)(NRjNRk) does not contain the interaction Eq. (A4), but the effective operator induced from this decomposition results in the same structure ϵIJK(dR c)I ˙a(dR)˙a J that vanishes. For all fermion mediators that appear in the decompositions, we assume a vector-like fermions (left and right 2-spinors with the same charges form a 4-componenet Dirac fermion). However, the mediator of the SM singlet field can be a Majorana fermion (only 2 components out of 4 are independent). The decompositions with the SM singlet fermion mediator are indicated with the comment “F can be Majorana”. In the last decomposition in Tab. IV, the two vector mediators have the same SM charges. Since they are antisymmetric under both SU(3)c and SU(2)L, the interaction V1V2S3 LTab.IV-3 = ϵIJK(V1)Iiρ(iτ 2)ij(V2)ρ Jj(S3)K + · · · (A5) does not vanish even if they are identical fields. This is different from the last decomposition in Tab. I, where S3 ̸= S2 is required so that the S1S2S3 interaction does not vanish. The comment “V1 = V2” indicates this fact. [1] Super-Kamiokande, A. Takenaka et al., Phys. Rev. D 102, 112011 (2020), arXiv:2010.16098. [2] Super-Kamiokande, K. Yamauchi et al., (2025), arXiv:2506.14406. [3] Super-Kamiokande, M. Tanaka et al., Phys. Rev. D 101, 052011 (2020), arXiv:2001.08011. 20 Decomposition S1 F S2 Comment (uRdR)(NRi)(NRj)(dRNRk) (3, 1, +1/3) (3, 1, +1/3) (3, 1, −1/3) S2 ̸= S† 1 to avoid d = 6 (uRdR)(dR)(NRi)(NRjNRk) (3, 1, +1/3) (1, 1, 0) (1, 1, 0) F can be Majorana (uRdR)(NRi)(dR)(NRjNRk) (3, 1, +1/3) (3, 1, +1/3) (1, 1, 0) (dRNRi)(uR)(dR)(NRjNRk) (3, 1, −1/3) (3, 1, +1/3) (1, 1, 0) (dRNRi)(dR)(uR)(NRjNRk) (3, 1, −1/3) (3, 1, −2/3) (1, 1, 0) (uRNRi)(NRj)(NRk)(dRdR) (3, 1, +2/3) (3, 1, +2/3) (3, 1, −2/3) dRdRS† 2 = 0 (uRNRi)(dR)(dR)(NRjNRk) (3, 1, +2/3) (3, 1, +1/3) (1, 1, 0) ϵIJK(dRIdRJ) = 0 (dRdR)(uR)(NRi)(NRjNRk) (3, 1, −2/3) (1, 1, 0) (1, 1, 0) dRdRS† 1 = 0 (dRdR)(NRi)(uR)(NRjNRk) (3, 1, −2/3) (3, 1, −2/3) (1, 1, 0) dRdRS† 1 = 0 (uRNRi)(dR)(NRj)(dRNRk) (3, 1, +2/3) (3, 1, +1/3) (3, 1, −1/3) (uRNRi)(NRj)(dR)(dRNRk) (3, 1, +2/3) (3, 1, +2/3) (3, 1, −1/3) (dRNRi)(uR)(NRj)(dRNRk) (3, 1, −1/3) (3, 1, +1/3) (3, 1, −1/3) TABLE III: Decompositions and the necessary mediator fields of the d = 9 effective operator uRdRdRNRNRNR with Topology B in Fig. 2. Decomposition S1/V1 S2/V2 S3/V3 Comment (QQ)(dRNRi)(NRjNRk) S1(3, 1, +1/3) S2(3, 1, −1/3) S3(1, 1, 0) (QdR)(QNRi)(NRjNRk) V1(3, 2, −1/6) V2(3, 2, +1/6) S3(1, 1, 0) (QNRi)(QNRj)(dRNRk) V1(3, 2, +1/6) V2(3, 2, +1/6) S3(3, 1, −1/3) V1 = V2 TABLE IV: Decompositions of the QQdRNRNRNR operator with Topology A in Fig. 2. [4] Super-Kamiokande, V. Takhistov, Review of Nucleon Decay Searches at Super- Kamiokande, in 51st Rencontres de Moriond on EW Interactions and Unified Theories, pp. 437–444, 2016, arXiv:1605.03235. [5] Particle Data Group, S. Navas et al., Phys. Rev. D 110, 030001 (2024). [6] JUNO, F. An et al., J. Phys. G 43, 030401 (2016), arXiv:1507.05613. [7] Hyper-Kamiokande, K. Abe et al., (2018), arXiv:1805.04163. [8] DUNE, B. Abi et al., JINST 15, T08008 (2020), arXiv:2002.02967. [9] H. Georgi and S. L. Glashow, Phys. Rev. Lett. 32, 438 (1974). [10] P. Nath and P. Fileviez Perez, Phys. Rept. 441, 191 (2007), arXiv:hep-ph/0601023. [11] A. de Gouvea, J. Herrero-Garcia, and A. Kobach, Phys. Rev. D 90, 016011 (2014), 21 Decomposition S1/V1 F S2/V2 Comment (QQ)(NRi)(NRj)(dRNRk) S1(3, 1, +1/3) (3, 1, +1/3) S2(3, 1, −1/3) (QQ)(dR)(NRi)(NRjNRk) S1(3, 1, +1/3) (1, 1, 0) S2(1, 1, 0) F can be Majorana (QQ)(NRi)(dR)(NRjNRk) S1(3, 1, +1/3) (3, 1, +1/3) S2(1, 1, 0) (dRNRi)(Q)(Q)(NRjNRk) S1(3, 1, −1/3) (3, 2, −1/6) S2(1, 1, 0) (QdR)(NRi)(NRj)(QNRk) V1(3, 2, −1/6) (3, 2, −1/6) V2(3, 2, +1/6) (QdR)(Q)(NRi)(NRjNRk) V1(3, 2, −1/6) (1, 1, 0) S2(1, 1, 0) F can be Majorana (QdR)(NRi)(Q)(NRjNRk) V1(3, 2, −1/6) (3, 2, −1/6) S2(1, 1, 0) (QNRi)(Q)(dR)(NRjNRk) V1(3, 2, +1/6) (3, 1, +1/3) S2(1, 1, 0) (QNRi)(dR)(Q)(NRjNRk) V1(3, 2, +1/6) (3, 2, −1/6) S2(1, 1, 0) (QNRi)(dR)(NRj)(QNRk) V1(3, 2, +1/6) (3, 2, −1/6) V2(3, 2, +1/6) V1 = V2 (QNRi)(Q)(NRj)(dRNRk) V1(3, 2, +1/6) (3, 1, +1/3) S2(3, 1, −1/3) (QNRi)(NRj)(Q)(dRNRk) V1(3, 2, +1/6) (3, 2, +1/6) S2(3, 1, −1/3) TABLE V: Decompositions of the QQdRNRNRNR operator with Topology B in Fig. 2. arXiv:1404.4057. [12] A. Kobach, Phys. Lett. B 758, 455 (2016), arXiv:1604.05726. [13] T. Hambye and J. Heeck, Phys. Rev. Lett. 120, 171801 (2018), arXiv:1712.04871. [14] R. M. Fonseca, M. Hirsch, and R. Srivastava, Phys. Rev. D 97, 075026 (2018), arXiv:1802.04814. [15] J. C. Helo, M. Hirsch, and T. Ota, JHEP 06, 047 (2018), arXiv:1803.00035. [16] J. C. Helo, M. Hirsch, and T. Ota, Phys. Rev. D 99, 095021 (2019), arXiv:1904.00036. [17] J. Heeck and V. Takhistov, Phys. Rev. D 101, 015005 (2020), arXiv:1910.07647. [18] A. B. Beneito, I, J. Gargalionis, J. Herrero-Garcia, A. Santamaria, and M. A. Schmidt, JHEP 07, 004 (2024), arXiv:2312.13361. [19] J. Gargalionis, J. Herrero-Garc´ıa, and M. A. Schmidt, JHEP 06, 182 (2024), arXiv:2401.04768. [20] J. Heeck and D. Watkins, JHEP 07, 170 (2024), arXiv:2405.18478. [21] T. Li, M. A. Schmidt, and C.-Y. Yao, JHEP 08, 221 (2024), arXiv:2406.11382. [22] T. Li, M. A. Schmidt, and C.-Y. Yao, JHEP 06, 077 (2025), arXiv:2502.14303. [23] A. B. I Beneito, J. Gargalionis, J. Herrero-Garcia, and M. A. Schmidt, (2025), arXiv:2503.20928. [24] J. Heeck and D. Sokhashvili, Phys. Lett. B 868, 139791 (2025), arXiv:2505.06172. [25] J. Heeck and I. M. Shoemaker, Phys. Rev. Lett. 135, 111804 (2025), arXiv:2506.08090. 22 Decomposition S1/V1/F1 S2/V2/F2 Comment [∂](dRdR)(uR)(NRa) S1(3, 1, −2/3) F2(1, 1, 0) dRdRS† 1 = 0 [∂](dRNR)(uR)(dRa) V1(3, 1, −1/3) F2(3, 1, −1/3) [∂](uRdR)(dR)(NRa) S1(3, 1, +1/3) F2(1, 1, 0) F2 can be Majorana [∂](uRNR)(dR)(dRa) V1(3, 1, +2/3) F2(3, 1, −1/3) [∂](uRa)(dR)(dRNR) F1(3, 1, +2/3) V2(3, 1, −1/3) [∂](uRdR)(NR)(dRa) S1(3, 1, +1/3) F2(3, 1, −1/3) [∂](uRa)(NR)(dRdR) F1(3, 1, +2/3) S2(3, 1, −2/3) dRdRS† 2 = 0 [∂](uRdR)(a)(dRNR) S1(3, 1, +1/3) V2(3, 1, −1/3) [∂](uRNR)(a)(dRdR) V1(3, 1, +2/3) S2(3, 1, −2/3) dRdRS† 2 = 0 TABLE VI: Decompositions of uRdRdRNRa, which result in the d = 8 effective operator with a derivative operator, ∂auRdRdRNR. The operator uRdRdRNRZ′ with a vector boson is decomposed with the same mediators. [26] S. Weinberg, Phys. Rev. Lett. 43, 1566 (1979). [27] F. Wilczek and A. Zee, Phys. Rev. Lett. 43, 1571 (1979). [28] S. Weinberg, Phys. Rev. D 22, 1694 (1980). [29] H. A. Weldon and A. Zee, Nucl. Phys. B 173, 269 (1980). [30] L. F. Abbott and M. B. Wise, Phys. Rev. D 22, 2208 (1980). [31] M. Claudson, M. B. Wise, and L. J. Hall, Nucl. Phys. B 195, 297 (1982). [32] J. P. Bowes, R. Foot, and R. R. Volkas, Phys. Rev. D 54, 6936 (1996), arXiv:hep-ph/9609290. [33] S. Kovalenko and I. Schmidt, Phys. Lett. B 562, 104 (2003), arXiv:hep-ph/0210187. [34] J. M. Arnold, B. Fornal, and M. B. Wise, Phys. Rev. D 87, 075004 (2013), arXiv:1212.4556. [35] N. Assad, B. Fornal, and B. Grinstein, Phys. Lett. B 777, 324 (2018), arXiv:1708.06350. [36] J. de Blas, J. C. Criado, M. Perez-Victoria, and J. Santiago, JHEP 03, 109 (2018), arXiv:1711.10391. [37] X.-X. Li, Z. Ren, and J.-H. Yub, Phys. Rev. D 109, 095041 (2024), arXiv:2307.10380. [38] Y. Aoki, T. Izubuchi, E. Shintani, and A. Soni, Phys. Rev. D 96, 014506 (2017), arXiv:1705.01338. [39] Kamiokande, Y. Suzuki et al., Phys. Lett. B 311, 357 (1993). [40] SNO, S. N. Ahmed et al., Phys. Rev. Lett. 92, 102004 (2004), arXiv:hep-ex/0310030. [41] KamLAND, T. Araki et al., Phys. Rev. Lett. 96, 101802 (2006), arXiv:hep-ex/0512059. [42] Y. A. Kamyshkov and E. Kolbe, Phys. Rev. D 67, 076007 (2003), arXiv:nucl-th/0206030. [43] SNO+, M. Anderson et al., Phys. Rev. D 99, 032008 (2019), arXiv:1812.05552. [44] SNO+, A. Allega et al., Phys. Rev. D 105, 112012 (2022), arXiv:2205.06400. 23 Decomposition S1/V1/F1 S2/V2/F2 Comment [∂](QQ)(dR)(NRa) S1(3, 1, +1/3) F2(1, 1, 0) F can be Majorana [∂](QNR)(dR)(Qa) S1(3, 2, +1/6) F2(3, 2, +1/6) [∂](QdR)(Q)(NRa) V1(3, 2, −1/6) F2(1, 1, 0) F can be Majorana [∂](QNR)(Q)(dRa) S1(3, 2, +1/6) F2(3, 1, −1/3) [∂](Qa)(Q)(dRNR) F1(3, 2, +1/6) V2(3, 1, −1/3) [∂](QdR)(NR)(Qa) V1(3, 2, −1/6) F2(3, 2, +1/6) [∂](dRa)(NR)(QQ) F1(3, 1, −1/3) S2(3, 1, +1/3) [∂](QdR)(a)(QNR) V1(3, 2, −1/6) S2(3, 2, +1/6) [∂](dRNR)(a)(QQ) V1(3, 1, −1/3) S2(3, 1, +1/3) TABLE VII: Decompositions of QQdRNRa, which result in the d = 8 effective operator with a derivative operator, ∂aQQdRNR. The operator QQdRNRZ′ is decomposed with the same media- tors. Decomposition S1/F1 S2/F2 Comment (dRdR)(uR)(NRϕ) S1(3, 1, −2/3) F2(1, 1, 0) dRdRS† 1 = 0 (dRNR)(uR)(dRϕ) S1(3, 1, −1/3) F2(3, 1, −1/3) (uRdR)(dR)(NRϕ) S1(3, 1, +1/3) F2(1, 1, 0) F2 can be Majorana (uRNR)(dR)(dRϕ) S1(3, 1, +2/3) F2(3, 1, −1/3) (uRϕ)(dR)(dRNR) F1(3, 1, +2/3) S2(3, 1, −1/3) (uRdR)(NR)(dRϕ) S1(3, 1, +1/3) F2(3, 1, −1/3) (uRϕ)(NR)(dRdR) F1(3, 1, +2/3) S2(3, 1, −2/3) dRdRS† 2 = 0 (uRdR)(ϕ)(dRNR) S1(3, 1, +1/3) S2(3, 1, −1/3) S† 2 ̸= S1 to avoid d = 6 uRdRdRNR (uRNR)(ϕ)(dRdR) S1(3, 1, +2/3) S2(3, 1, −2/3) dRdRS† 2 = 0 TABLE VIII: Decompositions of uRdRdRNRϕ. [45] JUNO, A. Abusleme et al., Eur. Phys. J. C 85, 5 (2025), arXiv:2405.17792. [46] Theia, M. Askins et al., Eur. Phys. J. C 80, 416 (2020), arXiv:1911.03501. [47] M. A. Acero et al., J. Phys. G 51, 120501 (2024), arXiv:2203.07323. [48] B. Dasgupta and J. Kopp, Phys. Rept. 928, 1 (2021), arXiv:2106.05913. 24 Decomposition S1/V1/F1 S2/V2/F2 Comment (QdR)(Q)(NRϕ) V1(3, 2, −1/6) F2(1, 1, 0) F2 can be Majorana (QNR)(Q)(dRϕ) V1(3, 2, +1/6) F2(3, 1, −1/3) (Qϕ)(Q)(dRNR) F1(3, 2, +1/6) S2(3, 1, −1/3) (QQ)(dR)(NRϕ) S1(3, 1, +1/3) F2(1, 1, 0) F2 can be Majorana (QNR)(dR)(Qϕ) V1(3, 2, +1/6) F2(3, 2, +1/6) (QQ)(NR)(dRϕ) S1(3, 1, +1/3) F2(3, 1, −1/3) (QdR)(NR)(Qϕ) V1(3, 2, −1/6) F2(3, 2, +1/6) (QQ)(ϕ)(dRNR) S1(3, 1, +1/3) S2(3, 1, −1/3) S† 2 ̸= S1 to avoid d = 6 QQdRNR (QdR)(ϕ)(QNR) V1(3, 2, −1/6) V2(3, 2, +1/6) V † 2 ̸= V1 to avoid d = 6 QQdRNR TABLE IX: Decompositions of QQdRNRϕ. [49] A. Diaz, C. A. Arg¨uelles, G. H. Collin, J. M. Conrad, and M. H. Shaevitz, Phys. Rept. 884, 1 (2020), arXiv:1906.00045. [50] S. B¨oser et al., Prog. Part. Nucl. Phys. 111, 103736 (2020), arXiv:1906.01739. [51] M. Dentler et al., JHEP 08, 010 (2018), arXiv:1803.10661. [52] J. Kopp, P. A. N. Machado, M. Maltoni, and T. Schwetz, JHEP 05, 050 (2013), arXiv:1303.3011. [53] K. N. Abazajian et al., (2012), arXiv:1204.5379. [54] LSND, A. Aguilar et al., Phys. Rev. D 64, 112007 (2001), arXiv:hep-ex/0104049. [55] MiniBooNE, A. A. Aguilar-Arevalo et al., Phys. Rev. Lett. 105, 181801 (2010), arXiv:1007.1150. [56] MiniBooNE, A. A. Aguilar-Arevalo et al., Phys. Rev. D 103, 052002 (2021), arXiv:2006.16883. [57] S. R. Elliott, V. Gavrin, and W. Haxton, Prog. Part. Nucl. Phys. 134, 104082 (2024), arXiv:2306.03299. [58] V. V. Barinov et al., Phys. Rev. Lett. 128, 232501 (2022), arXiv:2109.11482. [59] M. Dentler, ´A. Hern´andez-Cabezudo, J. Kopp, M. Maltoni, and T. Schwetz, JHEP 11, 099 (2017), arXiv:1709.04294. [60] STEREO, H. Almaz´an et al., Nature 613, 257 (2023), arXiv:2210.07664. [61] DESI, A. G. Adame et al., JCAP 02, 021 (2025), arXiv:2404.03002. [62] DESI, A. G. Adame et al., JCAP 07, 028 (2025), arXiv:2411.12022. [63] DESI, M. Abdul Karim et al., (2025), arXiv:2503.14738. [64] DESI, W. Elbers et al., (2025), arXiv:2503.14744. [65] Y. Farzan and S. Hannestad, JCAP 02, 058 (2016), arXiv:1510.02201. 25 [66] M. Escudero, T. Schwetz, and J. Terol-Calvo, JHEP 02, 142 (2023), arXiv:2211.01729, [Addendum: JHEP 06, 119 (2024)]. [67] C. Benso, T. Schwetz, and D. Vatsyayan, JCAP 04, 054 (2025), arXiv:2410.23926. [68] T. Ota, JHEP 03, 023 (2025), arXiv:2411.16356. [69] S. Bhattacharya and J. Wudka, Phys. Rev. D 94, 055022 (2016), arXiv:1505.05264, [Erratum: Phys.Rev.D 95, 039904 (2017)]. [70] Y. Liao and X.-D. Ma, Phys. Rev. D 96, 015012 (2017), arXiv:1612.04527. [71] H.-L. Li, Z. Ren, M.-L. Xiao, J.-H. Yu, and Y.-H. Zheng, JHEP 11, 003 (2021), arXiv:2105.09329. [72] S. Weinberg, Phys. Rev. Lett. 40, 223 (1978). [73] F. Wilczek, Phys. Rev. Lett. 40, 279 (1978). [74] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977). [75] R. D. Peccei and H. R. Quinn, Phys. Rev. D 16, 1791 (1977). [76] P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner, and K. A. van Bibber, Ann. Rev. Nucl. Part. Sci. 65, 485 (2015), arXiv:1602.00039. [77] I. Brivio et al., Eur. Phys. J. C 77, 572 (2017), arXiv:1701.05379. [78] M. Bauer, M. Neubert, and A. Thamm, JHEP 12, 044 (2017), arXiv:1708.00443. [79] I. G. Irastorza and J. Redondo, Prog. Part. Nucl. Phys. 102, 89 (2018), arXiv:1801.08127. [80] M. Bauer, M. Heiles, M. Neubert, and A. Thamm, Eur. Phys. J. C 79, 74 (2019), arXiv:1808.10323. [81] M. B. Gavela, R. Houtz, P. Quilez, R. Del Rey, and O. Sumensari, Eur. Phys. J. C 79, 369 (2019), arXiv:1901.02031. [82] L. Merlo, F. Pobbe, S. Rigolin, and O. Sumensari, JHEP 06, 091 (2019), arXiv:1905.03259. [83] J. Alda, M. Fuentes Zamoro, L. Merlo, X. Ponce D´ıaz, and S. Rigolin, (2025), arXiv:2507.19578. [84] D. Cadamuro and J. Redondo, JCAP 02, 032 (2012), arXiv:1110.2895. [85] P. Arias et al., JCAP 06, 013 (2012), arXiv:1201.5902. [86] D. J. E. Marsh, Phys. Rept. 643, 1 (2016), arXiv:1510.07633. [87] J. A. Dror, H. Murayama, and N. L. Rodd, Phys. Rev. D 103, 115004 (2021), arXiv:2101.09287, [Erratum: Phys.Rev.D 106, 119902 (2022)]. [88] F. Chadha-Day, J. Ellis, and D. J. E. Marsh, Sci. Adv. 8, abj3618 (2022), arXiv:2105.01406. [89] C. B. Adams et al., Axion Dark Matter, in Snowmass 2021, 2022, arXiv:2203.14923. [90] H. Song, H. Sun, and J.-H. Yu, JHEP 01, 161 (2024), arXiv:2305.16770. [91] H. Song, H. Sun, and J.-H. Yu, JHEP 05, 103 (2024), arXiv:2306.05999. [92] C. Grojean, J. Kley, and C.-Y. Yao, JHEP 11, 196 (2023), arXiv:2307.08563. [93] R. V. Harlander and M. C. Schaaf, Comput. Phys. Commun. 300, 109198 (2024), arXiv:2309.15783. [94] R. M. Fonseca, J. Phys. Conf. Ser. 873, 012045 (2017), arXiv:1703.05221. [95] L. Hui, J. P. Ostriker, S. Tremaine, and E. Witten, Phys. Rev. D 95, 043541 (2017), arXiv:1610.08297. 26 [96] P. Agrawal, N. Kitajima, M. Reece, T. Sekiguchi, and F. Takahashi, Phys. Lett. B 801, 135136 (2020), arXiv:1810.07188. [97] S. D. McDermott and S. J. Witte, Phys. Rev. D 101, 063030 (2020), arXiv:1911.05086. [98] M. Fabbrichesi, E. Gabrielli, and G. Lanfranchi, (2020), arXiv:2005.01515. [99] A. Caputo, A. J. Millar, C. A. J. O’Hare, and E. Vitagliano, Phys. Rev. D 104, 095029 (2021), arXiv:2105.04565. [100] G. R. Dvali, G. Gabadadze, and G. Senjanovic, p. 525 (1999), arXiv:hep-ph/9910207. [101] R. N. Mohapatra and A. Perez-Lorenzana, Phys. Rev. D 67, 075015 (2003), arXiv:hep- ph/0212254. [102] S. Girmohanta and R. Shrock, Phys. Rev. D 101, 015017 (2020), arXiv:1911.05102. [103] S. Girmohanta, Eur. Phys. J. C 81, 143 (2021), arXiv:2005.12952. [104] B. Fornal and B. Grinstein, Phys. Rev. Lett. 120, 191801 (2018), arXiv:1801.01124, [Erra- tum: Phys.Rev.Lett. 124, 219901 (2020)]. [105] D. Barducci, M. Fabbrichesi, and E. Gabrielli, Phys. Rev. D 98, 035049 (2018), arXiv:1806.05678. [106] J. M. Cline and J. M. Cornell, JHEP 07, 081 (2018), arXiv:1803.04961. [107] Z. Berezhiani, LHEP 2, 118 (2019), arXiv:1812.11089. [108] B. Fornal and B. Grinstein, Mod. Phys. Lett. A 35, 2030019 (2020), arXiv:2007.13931. [109] A. Czarnecki, W. J. Marciano, and A. Sirlin, Phys. Rev. Lett. 120, 202002 (2018), arXiv:1802.01804. [110] Y. Hao and D. Ni, Phys. Rev. D 107, 035026 (2023), arXiv:2211.16163. [111] P. Minkowski, Phys.Lett. B67, 421 (1977). [112] T. Yanagida, Conf.Proc. C7902131, 95 (1979). [113] R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980). [114] J. Schechter and J. Valle, Phys. Rev. D22, 2227 (1980). [115] M. Bauer, M. Neubert, S. Renner, M. Schnubel, and A. Thamm, JHEP 09, 056 (2022), arXiv:2110.10698. [116] A. Biek¨otter and K. Mimasu, Axions and Axion-like particles: collider searches (, 2025), , arXiv:2508.19358. [117] J. Jaeckel and A. Ringwald, Ann. Rev. Nucl. Part. Sci. 60, 405 (2010), arXiv:1002.0329. [118] S. Alekhin et al., Rept. Prog. Phys. 79, 124201 (2016), arXiv:1504.04855. [119] J. Alimena et al., J. Phys. G 47, 090501 (2020), arXiv:1903.04497. [120] H. Georgi, D. B. Kaplan, and L. Randall, Phys. Lett. B 169, 73 (1986). [121] Super-Kamiokande, K. Abe et al., Phys. Rev. Lett. 113, 121802 (2014), arXiv:1305.4391. [122] F. Bonnet, M. Hirsch, T. Ota, and W. Winter, JHEP 03, 055 (2013), arXiv:1212.3045, [Erratum: JHEP 04, 090 (2014)]. [123] R. Cepedello, F. Esser, M. Hirsch, and V. Sanz, JHEP 12, 098 (2024), arXiv:2409.06776. [124] ATLAS, G. Aad et al., Phys. Rev. D 110, 052009 (2024), arXiv:2405.19862. [125] ATLAS, G. Aad et al., Phys. Lett. B 854, 138743 (2024), arXiv:2401.17165. [126] ATLAS, G. Aad et al., Phys. Rev. D 105, 092012 (2022), arXiv:2201.07045. 27 [127] CMS, A. Hayrapetyan et al., Phys. Rev. D 110, 072012 (2024), arXiv:2405.05071. [128] ATLAS, G. Aad et al., JHEP 02, 143 (2021), arXiv:2010.14293. [129] ATLAS, G. Aad et al., Phys. Lett. B 854, 138736 (2024), arXiv:2401.11928. [130] ATLAS, G. Aad et al., JHEP 10, 112 (2020), arXiv:2006.05872. [131] ATLAS, G. Aad et al., (2025), arXiv:2507.03650. [132] CMS, A. Hayrapetyan et al., (2025), arXiv:2503.20023. [133] CMS, A. M. Sirunyan et al., Eur. Phys. J. C 80, 3 (2020), arXiv:1909.03460. [134] CMS, A. M. Sirunyan et al., JHEP 05, 033 (2020), arXiv:1911.03947. [135] FCC, A. Abada et al., Eur. Phys. J. ST 228, 755 (2019). [136] FCC, A. Abada et al., Eur. Phys. J. C 79, 474 (2019). [137] R. Cepedello, F. Esser, M. Hirsch, and V. Sanz, JHEP 09, 229 (2022), arXiv:2207.13714. [138] R. Cepedello, F. Esser, M. Hirsch, and V. Sanz, JHEP 09, 081 (2023), arXiv:2302.03485. 28
Invisible neutron decay and light BSM particles J.C. Helo∗and T. Ota† Departamento de F ́ısica, Facultad de Ciencias, Universidad de La Serena, Avenida Cisternas 1200, La Serena, Chile, Millennium Institute for Subatomic Physics at the High Energy Frontier (SAPHIR), Fern ́andez Concha 700, Santiago, Chile M. Hirsch‡ Instituto de F ́ısica Corpuscular - CSIC - Universitat de Val`encia, Parc Cient ́ıfic-UV, c/ Catedr ́atico Jos ́e Beltr ́an, 2, E-46980 Paterna (Val`encia), Spain Abstract In Standard Model Effective Field Theory (SMEFT) invisible neutron decay arises from d = 12 operators. Adding new, light particles to the field content of the SM, such as right-handed neutrinos, allows to construct operators for invisible neutron decay at much lower dimensions. Observing invisible neutron decay, if nucleon decays with charged leptons remain absent, would therefore point towards the existence of new neutral degrees of freedom. Here, we discuss four cases: (i) Adding right-handed neutrinos to the SM; (ii) a right-handed neutrino and an axion-like particle; (iii) a right-handed neutrino and a (nearly) massless singlet scalar; and (iv) a right-handed neutrino and a light Z′. We give the general tree-level decomposition for the resulting d = (7 -9) operators for invisible neutron decay. We also briefly discuss LHC searches related to the exotic states found in these UV completions. Keywords: Nucleon decays, SMEFT, light sterile neutrinos, LHC ∗Electronic address: †Electronic address: ‡Electronic address: 1 16 Oct 2025 Contents I. Introduction 2 II. Setup: Models and effective operators 5 A. Variant I: Right-handed neutrinos 5 B. Variant II: A right-handed neutrino and a ALP 6 C. Variant III: A right-handed neutrino and a light scalar 7 D. Variant IV: A right-handed neutrino and a light vector 8 E. A caveat 9 III. High-energy completions 10 IV. LHC phenomenology 14 V. Summary 17 A. List of the high-energy completions 19 References 20 I. INTRODUCTION Searches for nucleon decays by Super-Kamiokande have pushed limits to 1034 yr and above for several decay channels. The best known examples are probably p →l+π0 (l= e, μ) [1] and n → ̄νK0 [2], which are motivated by Grand Unified Theories (GUTs). Less known are decays to three charged leptons, where Super-Kamiokande also gives limits in excess of 1034 yr now [3]. Limits on many other channels, however, are less stringent, sometimes by orders of magnitude, see [4, 5]. On the other hand, nucleon decay searches form an important part of the agenda of next generation underground detectors such as JUNO [6], Hyper-Kamiokande [7] or DUNE [8] and significant improvements in sensitivity on many nucleon decay modes are expected from these experiments in the foreseeable future. Traditionally, searches for proton decay were motivated by the predictions from GUTs, such as the classical SU(5) [9], for a review see for example [10]. More recently, however, many theoretical papers have taken a more agnostic view and studied nucleon decays from the point of view of effective field theory (EFT) [11-25].1 In Standard Model Effective Field Theory (SMEFT), one can classify different decay modes according to the dimensionality of the corresponding operators. For example, two-body decays with ∆(B + L) = 2 are described by d = 6 operators, ∆(B -L) = 2 are d = 7, genuine three-body decays need at least d = 9 operators [17]. 1 Here we also cite some pioneering works on nucleon decays in SMEFT [26-31] and the studies on the decompositions of the relevant effective operators [32-37]. 2 In the current paper, we are interested in invisible neutron decay. Considering only SM particles, the only possibility for a neutron to decay into "nothing" is the channel n →3ν.2 SMEFT operators with three leptons have been discussed in [13]. In SMEFT, the lowest dimensional operator with three leptons has d = 9, one example is O ∝(QL)( ̄L ̄L)(dRdR). Ref. [13] lists a total of 16 operators of this type at d = 9. Some of these d = 9 operators can generate the decay n →3ν. However, all operators that can do so, always also generate decays to charged leptons with very similar rates. Since nucleon decay searches are much more sensitive to final states with charged leptons [3] than to n →3ν [5], d = 9 operators can not generate invisible neutron decay at observable rates. In SMEFT one has to go up to d = 12, to find for the first time an operator capable of generating invisible neutron decay, without the corresponding decays to charged leptons, as already realized by Weinberg more than 40 years ago [28]. Weinberg gave as an example the operator [28]: L d=12 W = cW Λ8 εIJKεijεklεmn uR ILi dR JLk dR KLm HjHlHn, (1) where Λ is the typical scale of the origin of this effective operator and cW is the Wilson coefficient. In Eq. (1) we have suppressed generation indices for simplicity, εIJK and εij are anti-symmetric tensors for the contraction of SU(3)c and SU(2)L indices. This operator leads to the decay mode n →3ν. For this d = 12 operator one can estimate the nucleon decay width for the 3-body final state roughly as [17] Γ ∼β2 hm5 N⟨H0⟩6 6144π3Λ16 ⇒ 1 9 · 1029 yr 13.4 TeV Λ 16 . (2) Here, βh ≃0.014 GeV3 is the nuclear matrix element [38], mN the nucleon mass, ⟨H0⟩≃174 GeV the vacuum expectation value of the SM Higgs field, and cW is taken to be unity. For invisible neutron decay the particle data booklet [5] cites an old Kamiokande result (from 1993) [39] (4.9 · 1026 yr), but there are more recent and stronger bounds set by SNO [40] (2 · 1029), KamLAND [41] (5.8 · 1029 yr, cf. also [42]) and two analysis by the SNO+ collaboration [43] (2.5 · 1029 yr) and [44] (9.0 · 1029 yr). For the nearer and farther future sensitivity estimates have been given by JUNO [45] (5·1031 yr) and for THEIA (80kt), that could ultimately reach ∼4 · 1032 yr according to their proposal [46]. However, there exists a simple possibility to generate invisible neutron decay with lower dimensional operators: We can extend the SM particle content with new light degrees of freedom. In this paper we discuss four different possibilities. First we add (i) (at least two) right-handed neutrinos NR, also sometimes called sterile neutrinos in the literature, cf. [4753] for reviews and current bounds, motivated from various neutrino experiments [54-60]. In addition, the introduction of such light degrees of freedom is often discussed in the context of cosmology, e.g., the stringent bound on neutrino masses from recent cosmological observations [61-64] maybe be alleviated by the mixing between active neutrinos and massless 2 Here both ν and ̄ν are mentioned as ν. 3 sterile neutrinos [65-68]. In NRSMEFT, for a list of basis operators see [69-71], invisible neutron decay operators appear at d = 9. Second, we add (ii) a right-handed neutrino and an axion-like particle (ALP), a. The axion is one of the best-motivated light new physics fields [72, 73]. It arises as a consequence of the breaking of the Peccei-Quinn symmetry introduced to solve the strong CP problem [74, 75]. For experimental searches and phenomenology of ALPs, cf. e.g., [76-83] and for cosmological aspects of ALPs, cf. e.g., [84-89]. The basis operators of aSMEFT, the effective theory with the SM particle content plus an ALP, are listed in [90-92]. In aNRSMEFT, i.e. SMEFT with NRs and ALPs, invisible neutron decay operators appear at d = 8.3 Finally, we add a right-handed neutrino and (iii) a (nearly) massless scalar, φ, or (iv) a light vector boson, Z′. A ultra-light scalar is an attractive candidate of dark matter, aka "fuzzy" dark matter [95]. For experimental searches and cosmologcal effects of the light vector boson, aka dark photon, cf. e.g., [96-99]. The basis operators of φSMEFT, SMEFT with a SM singlet scalar, and Z′SMEFT, SMEFT with a dark photon, are listed in [90, 91].4 The invisible decay of neutrons is generated from d = 7 operators in both φNRSMEFT (SMEFT with φ and NR) and Z′NRSMEFT (SMEFT with Z′ and NR). In this paper, we will give the operators for invisible neutron decay for all four cases, discussed above, estimate the neutron decay half-lives and give the complete list of the possible UV completions for these operators at tree-level. For the different BSM particles found in these UV models we will also briefly discuss current LHC limits. This introduction would not be complete without mentioning that invisible neutron decay has been discussed in some earlier papers. For example, in [100] the authors speculated about "decays to nothing" in models with extra time dimensions. The authors of [101] considered universal extra dimensions and an extended gauge group, to have the neutron decay dominantly into n →ν ̄νs ̄νs, where νs is some sterile neutrino. The effective operators relevant to n → ̄νs ̄ν′ s ̄ν′′ s are listed in [102] in the context of an extra dimension model. In [103] invisible neutron decay was studied in a left-right symmetric model with large extra dimensions. Some recent works have considered also the possibility of a "dark" (invisible) neutron decay [104-108] in relation to the neutron life-time anomaly [109]. And, finally, a particular UV model for invisible di-neutron decay to two singlet fermions has been studied in [110], where the authors discussed also possible constraints from neutron stars. The rest of this paper is organized as follows. In section 2 we will introduce the different model variants in a bit more detail, discuss some generalities, define the operators and estimate the neutron decay half-lives for the different cases. In section 3 we will then decompose the operators to find the possible UV models at tree-level. Section 4 discusses a 3 Basis operators with a given particle content can be constructed up to a given mass dimension d by AutoEFT [93] or also Sym2Int [94] 4 Later we will use the symbol S for the scalars that mediate the effective operators, and therefore, to avoid the confusion, we use φ for the light scalar field in the final state of the invisible neutron decay process, although the effective theory with φ is called sometimes sSMEFT in the literature. 4 number of different LHC searches and how they can be used or reinterpreted to derive limits on the BSM states found in section 3. We then close with a short discussion and summary. Some tables for the decompositions will be relegated to the appendix for the sake of keeping the main text more compact. II. SETUP: MODELS AND EFFECTIVE OPERATORS In this section we will describe the theoretical setup. We consider four different extensions of the SM particle content. For each case, we briefly describe the model. We then define the operators relevant for invisible neutron decay. We give an estimate of the neutron decay half-live in each case. Finally, in an additional subsection we discuss a possible caveat. The decomposition of the operators, defined here, is discussed in section 3. A. Variant I: Right-handed neutrinos In the standard model neutrinos are massless. While there are many extensions of the SM that can explain neutrino masses, as observed in oscilllation experiments, maybe the simplest possibility is to add right-handed neutrinos to the SM particle content. Massive neutrinos can be either Dirac or Majorana particles. In the classical type-I seesaw model [111-114] right-handed neutrinos have a Majorana mass term and the Lagrangian can be written as: LY = (Y ∗ ν )αjNRjLα · H + 1 2(MM)jjNRj cNRj + h.c. (3) Note that MM can be taken diagonal without loss of generality. Majorana neutrinos, on the other hand, violate lepton number by two units. Whether such a ∆L = 2 term is allowed or not, will depend on the ultra-violet completion of the operators defined below. We note, however, that the invisible neutron decay n →3 ̄N violates L by three units, thus, Majorana masses need not to be allowed in general and neutrinos could be Dirac particles, as far as invisible neutron decay is concerned. For the fit to current neutrino data, at least two copies of NR are needed and both Dirac and Majorana neutrinos can explain existing data. We will consider three copies of NR, to mirror the three SM generations, but doing an exact fit to oscillation data is irrelevant for the neutron decay we are interested in. The only condition for the decay to occur is that the right-handed neutrino masses are smaller than mn/3. Two operators can be constructed at level d = 9, that contribute to invisible neutron decay: ON 1 = c(1) αβγijk Λ5 (uRαcdRβ) dRγ cNRi NRj cNRk , (4) ON 2 = c(2) αβγijk Λ5 Qα cQβ dRγ cNRi NRj cNRk . (5) 5 Here, α, β, γ are SM generation indices, while i, j, k runs over the generation of NR. To keep the expressions simpler, we have suppressed SU(3)c (and SU(2)L) indices. Here, and in all cases discussed below the colour of the three quarks must be contracted with a completely anti-symmetric tensor, εIJK. It is straightforward to show that both of these operators vanish in case there is only one copy of NR, we have checked this also with Sym2Int [94]. Assuming the mass of the right-handed neutrinos to be negligible and one of the c(1/2) 111ijk = 1, we can estimate the partial half-life for the invisible neutron decay as Γ(n → ̄Ni ̄Nj ̄Nk) ≃ β2 hm5 n 6144π3Λ10 ≃ 1 9 · 1029yr 180TeV Λ 10 . (6) Here again, βh ≃0.014 GeV3 is the nuclear matrix element [38] and the half-live is normalized to the current bound from SNO+ [44]. We can apply Eq. (6) to the different existing and projected half-life limits and obtain estimates for the sensitivity to the new physics scale Λ: Γ ∼                  1 4.9 · 1026yr 84TeV Λ 10 Kamiokande (PDG), 1 5.0 · 1031yr 270TeV Λ 10 JUNO (future), 1 4.0 · 1032yr 330TeV Λ 10 THEIA (future). (7) B. Variant II: A right-handed neutrino and a ALP Axion-like particles (ALPs) appear in many extensions of the SM. While the ALP mass is a free parameter, the couplings of the ALPs with SM fields are protected by an approximate classical shift symmetry, as is the case for the classical axion. Many different studies of properties of and searches for ALPs have been published, see for example [77, 78, 115, 116]. Here we only briefly mention that ALPs could also explan the dark matter [117, 118] and/or be long-lived particles [118, 119]. The ALP Lagrangian up to d = 5 contains the following terms [120] La = 1 2∂μa∂μa -1 2m2 aa2 - X X cX e Xa Λ aXμν e Xμν - X ψ cψa Λ ∂μa(ψγμψ). (8) Here Xμν stands for any of the field strength tensors of the SM, i.e. X = B, W, G, and e Xμν is its dual. ma is the ALP mass, in principle a free parameter. Note, however, that one usually assumes the ALP to be the pseudo-Goldstones of a spontaneously broken global symmetry, thus ma can be naturally small compared to the scale of symmetry breaking. ψ are the SM fermions and for the case we are interested in also ψ = NR. It is important to note that the ALP couples only derivatively to fermions. There are two d = 8 B-violating operators containing a right-handed singlet fermion, NR, and an ALP, a: Oa 1 = c(1,a) αβγ Λ4 (∂μa) NRγμdRα (uRβcdRγ) , (9) 6 Oa 2 = c(2,a) αβγ Λ4 (∂μa) NRγμdRα Qβ cQγ . One copy of NR is enough to form these operators, thus for simplicity we have not added a generation index to NR. However, two NR would still be needed, if one wants to explain neutrino data. There are two more d = 8 B-violating operators, containing an ALP, that can be written down with only SM fields: Oa 3 = c(3,a) αβγδ Λ4 (∂μa) LαdRβ Qγ cγμdRδ , (10) Oa 4 = c(4,a) αβγδ Λ4 (∂μa) (eRαγμdRβ) dRγ cdRδ . The operators in Eq. (9) can generate invisible neutron decays, without accompanying charged lepton final states, while Oa 3 will always also have charged leptons in the nucleon decays. We have listed Oa 4 for completeness, it does not give a contribution to invisible neutron decay. The operators in Eqs. (4) and (5) violate (B + L) = 4. Thus, it is easy to argue that the standard d = 6 2-body (B + L) = 2 decays are absent on symmetry grounds for the invisible neutron decay to three NR's. However, the situation is different for the case of an ALP. ALPs do not usually carry either B or L, as can be seen from Eq. (8). For an ALP with (B, L) = (0, 0), however, Oa 1,2 both have (B -L) = 2. One can write down operators at d = 7 with (B -L) = 2, such as, for example, O ∝(NRγμdRα)(Qβ c(iDμ)Qγ). Since this type of operators leads to 2-body decays, such as n →π0 + E/ and p →π+ + E/ , one would expect the invisible neutron decay to be a very sub-dominant decay mode in this case. A possible way out of this conclusion is based on the following argument. If we assign a the quantum numbers (B, L) = (-1, 1), Oa 1,2 both conserve (B + L) and (B -L), thus none of the (B -L) = 2 operators need to be present in the theory. For this to be possible, however, the "standard" d = 5 ALP interaction terms in Eq. (8) need to be absent. It is possible to write down UV completions that fulfil these assignments. For the operators Oa 1,2, the width for the two-body decay n →Na can be estimated to be roughly Γ(n →Na) ≃β2 hm3 n 32πΛ8 ≃ 1 9.0 · 1029yr 9.6 · 106GeV Λ 8 . (11) The much larger scale found here, compared to the case n →3 ̄N, see Eq. (6), makes it unlikely that the UV completions for these operators can ever be tested in accelerator experiments. C. Variant III: A right-handed neutrino and a light scalar The third possibility for generating operators for the invisible neutron decay with BSM fields is to add a right-handed neutrino and a light scalar to the SM. We discuss this case 7 only briefly, since we believe it to be less motivated and add it only for completeness of the discussion. For a model with a light singlet scalar, φ, and right-handed neutrinos there are two d = 7 operators contributing to invisible neutron decay: Oφ 1 = c(1,φ) αβγ Λ3 φ NR cdRα (uRβcdRγ) , (12) Oφ 2 = c(2,φ) αβγ Λ3 φ NR cdRα Qβ cQγ . Similar to the ALP case, discussed above, if φ has (B, L) = (0, 0), the d = 6 operators that one can construct from Eq. (12), eliminating φ, would lead to dominant 2-body decays, rendering invisible neutron decay uninteresting. Asigning non-zero baryon and lepton number to φ allows to eliminate this problem technically. The width n → ̄Nφ can be estimated to be roughly Γ(n → ̄Nφ) ≃β2 hmn 32πΛ6 ≃ 1 9.0 · 1029yr 2.1 · 109GeV Λ 6 . (13) D. Variant IV: A right-handed neutrino and a light vector Finally, we briefly mention the possibility where a neutron decays to a right-handed neutrino and a SM-singlet vector boson Z′, which is prompted by the following d = 7 effective operators: OZ′ 1 = c(1,Z′) αβγ Λ3 Z′ μ NRγμdRα (uRβcdRγ) , (14) OZ′ 2 = c(2,Z′) αβγ Λ3 Z′ μ NRγμdRα Qβ cQγ . To avoid the possible d = 6 nucleon decays and make the d = 7 invisible neutron decay dominant, we need to assign charges (which may be the B and L numbers) to Z′. The argument is the same as for the ALP case, cf. Sec. II B. Assuming Z′ is a massless gauge boson under a new gauge symmetry, the decay rate is calculated to be Γ(n →NZ′) ≃β2 hmn 16πΛ6 ≃ 1 9.0 · 1029yr 2.3 · 109GeV Λ 6 . (15) When Z′ aquires a mass through the spontaneous breaking of some new gauge symmetry, the decay rate changes to: Γ ≃ β2 hm3 n 32πΛ6M 2 Z′ 1 -M 2 Z′ m2 n 2 1 + 2M 2 Z′ m2 n . (16) 8 This equation seems to diverge in the limit of MZ′ →0. However, for a gauge boson such divergencies are known to be unphysical. In a full-fledged model for the massive Z′, one would need to specify also the symmetry breaking sector of the theory. The full calculation then includes the correct treatment of the would-be Goldstone boson of the theory and in the limit of MZ′ ≪mn Eq. (16) will reduce to the same expression as for the scalar case Eq. (13), up to a coefficient. However, constructing such a complete model for the Z′ is beyond the scope of the current paper. E. A caveat There is one important caveat to the above discussion: For any operator that annihilates three quarks to invisible particles it is possible to replace an initial state quark for a final state anti-quark. An example is shown in Fig. 1. Thus, four-body final states with a (neutral or charged) pion will always accompany three-body invisible decays. - And similarly for the two-body final states NR + a and NR + S, three-body final states with one additional pion will be generated.5 No search for such three- and four-body final states exist. However, Super-Kamiokande has a search for n →π0 ̄ν and p →π+ ̄ν. The current bounds are [121]: Γ(p →π+ ̄ν) >3.9 · 1032yr, (17) Γ(n →π0 ̄ν) >1.1 · 1033yr. (18) These are two-body decays and thus the pion momentum is fixed at roughly p(π) ∼460 MeV. Super-Kamiokande uses this constraint in setting the limits. For the 4-body decays, on the other hand, the pion momentum takes a distribution, which depends in addition on the mass of the final state particles. Nevertheless, Super-Kamiokande provides sufficient information such that we can make a rough estimate of the limit for the 4-body decay n →π0 + 3 ̄N. The limit on Γ(n →π0 ̄ν) is set by Super-Kamiokande [121] excluding 19.1 signal events in the bin p(π0) = [400, 500] MeV. From Fig. 3 of [121] we can estimate that the total number of background events in the range p(π0) = [0, 500] MeV sums to about 850 events, and about 80 events in the bin with p(π) = [400, 500] MeV. Thus, we estimate that a limit of 19.1 × p 850/80 ≃62 events could be set from the total data. This would lower the exclusion limit by a factor of roughly 3.3 and the limit on Γ(n →π0 + 3 ̄N) should roughly be T1/2 >∼3.3 × 1032 yr. We stress that this is only a rough estimate and also that the Super-Kamiokande [121] result is based on only 172.8 kton.yr of data. A re-analysis of all Super-Kamiokande data should be able to provide a limit, which could be up to a factor two better than our naive estimate. 5 Since one can replace an initial d-quark by a final state s, producing a final state kaon, this allows, in principle, also sensitivity to 2nd generation indices in the operators. 9 uR dR uR(dR) p(n) NR NR NR dR uR(dR) π+(π0) FIG. 1: p(n) →π+(π0) + missing induced by the d = 9 uRdRdRNRNRNR operator. For the four-body decays of Fig. 1, we can estimate the partial half-live, using results from [17], with appropriate replacements: Γ(n →π0 ̄Ni ̄Nj ̄Nk) ≃m7 nW0(π)2 737280π5Λ10 ≃ 1 3.3 · 1032yr 245TeV Λ 10 (19) Here the matrix element is taken to be |W0(π)| = |⟨π0|(ud)RdR|n⟩| = 0.134GeV2 [38]. And again, we assume the Wilson coefficient is equal to one. Comparing this estimate with with Eq. (7), we can see that the sensitivity of this pion mode to the new physics scale Λ is actually better than the sensitivity of the invisible mode at SNO+ (180 TeV), and only moderately lower than the future sensitivity estimated for JUNO (270 TeV). Given these numbers, one must conclude that searches for n →π0 + E/ provide important constraints on searches for invisible neutron decay. Therefore, we encourage experimentalists to study the n →π0+"multiple missing" mode (independently from n → π0 ̄ν) and provide the bound. We stress again, that this is not only true for the case of the d = 9 operator discussed here, but for all operators generating invisible neutron decay. III. HIGH-ENERGY COMPLETIONS In the previous section we have defined the effective operators for invisible neutron decay for the four possible different model variants. In this section we will discuss how these operators can be generated in the UV, after integrating out heavy particles. Our aim here is to find systematically all possible UV models via the diagrammatic method. Since, however, the basic procedure is the same for all decompositions, we will discuss here only one decomposition of the operator ON 1 ∝uRdRdRNRNRNR in detail. Full tables for all other operators and decompositions are given in the appendix. For 6-fermion operators there are only two possible topologies at tree-level [122], using only renormalizable interactions. They are shown in Fig. 2. Note that here the arrows indicate the flow of the particle number. Internal particles can be either scalars, fermions or vectors. For Topology A there are exactly three possibilities to distribute the fermions of ON 1 to the outer legs of the diagram. We list these in Tab. I. 10 S1/V1 S2/V2 S3/V3 Topology A S1/V1 S2/V2 F Topology B FIG. 2: Topologies for tree-level realizations of the d = 9 neutron decay operators, see Eqs. (4) and (5). Depending on the chiralities of the outer fermion fields, the dashed lines can be either a scalar, S, or vector, V . The mediator F in Topology B must be introduced as a vector-like fermion, but when it is a SM singlet, it can also be a Majorana fermion. Decomposition S1 S2 S3 Comment Eq. (uRdR)(dRNRi)(NRjNRk) (3, 1, +1/3) (3, 1, -1/3) (1, 1, 0) S2 ̸= S† 1 to avoid d = 6 (20) (uRNRi)(dRdR)(NRjNRk) (3, 1, +2/3) (3, 1, -2/3) (1, 1, 0) dRdRS† 2 = 0 (uRNRi)(dRNRj)(dRNRk) (3, 1, +2/3) (3, 1, -1/3) (3, 1, -1/3) S3 ̸= S2 to avoid S1S2S2 = 0 (21) TABLE I: Decompositions for the operator uRdRdRNRiNRjNRk allowed with Topology A. The fields in a parenthesis form an interaction with the corresponding mediator field, S1,2,3. For the explicit form of the interaction Lagrangians, cf. Eqs. (20) and (21). For ON 1 and Topology A, all mediator fields are Lorentz scalars (indicated with symbol S). The charges of them under the SM gauge symmetries, in the order (SU(3)c, SU(2)L, U(1)Y ), are uniquely determined as given in the table. The interaction Lagrangians for the different cases can be written as: LA-1 =yudεIJK(uRc)I ̇a(dR) ̇a J(S† 1)K + ydN(dR c)I ̇a(NRi) ̇a(S† 2)I + yNN(NRj c) ̇a(NRk) ̇aS† 3 + μ(S1)I(S2)IS3, (20) LA-3 =yuN(uRc)I ̇a(NRi) ̇a(S† 1)I + ydNS2(dR c)I ̇a(NRj) ̇a(S† 2)I + ydNS3(dR c)I ̇a(NRk) ̇a(S† 3)I + μεIJK(S1)I(S2)J(S3)K, (21) where the indices I, J, K in the lower (upper) position are for 3 (3) in SU(3)c, and ̇a is for a right-handed 2-spinor. For case A-2 the interaction dRdRS† 2 vanishes identically, thus it is not a valid decomposition for ON 1 .6 For a more compact notation, we have suppressed 6 However, the interaction dRsRS† 2 does not vanish. Since operators are usually given in the flavour basis, 11 generation indices in these Lagrangians. The different Yukawas are to be understood as matrices of either dimensions (3, 3), (3, n) or (n, n), where n is the number of copies of right-handed neutrinos. A few more comments are in order. In case A-1, described by the Lagrangian Eq. (20), the diquark, S1, has the same SM charges as the leptoquark, S† 2. However, these two fields have to be distinct states. If they were identical, one could construct from S1 a d = 6 effective operator O ∝uRdRdRNR. This operator generates the the 2-body decays n(p) →π0(π+) ̄N with a significantly larger rate than that of the invisible neutron decay. To make n →3 ̄N the dominant decay mode, one must avoid all d = 6 nucleon decays, i.e. S1 and S2 must be two different fields. This can be realized by assigning for case A-1 all Si definite baryon and lepton numbers. With the asignments S1 = (2/3, 0), S2 = (1/3, 1) and S3 = (0, 2), where the brackets stand for (B, L), all Yukawa couplings conserve baryon and lepton number trivially. The only B and L violating coupling would then be the soft parameter μ. Since μ is a soft parameter, one can actually argue that it can be technically small in the sense of 't Hooft's naturalness, since in the limit of μ →0 baryon and lepton numbers are conserved. A small value of μ would change the expected mass scale for the Si drastically. Instead of Eq. (6), for topology-A one can express the decay width as: Γ(n → ̄Ni ̄Nj ̄Nk) ≃ β2 hm5 nμ2 6144π3Λ12 ≃ 1 9 · 1029yr 1keV μ 2 2.4TeV Λ 10 . (22) Here, it is assumed that mS1 ≃mS2 ≃mS3 ≃Λ. Putting μ = Λ, Eq. (22) reduces to Eq. (6), of course. However, this argument shows that it might not be completely hopeless to search for the BSM particles that appear in the decomposition of a d = 9 operator at the LHC or the future FCC-hh. The third case, case A-3, is similar to case A-1. However, see Eq. (21), case A-3 contains three leptoquarks, and two of them, S2 and S3, interact with dR and NR. They therefore have the same SM charges. However, also in this case they must be two different fields, otherwise the triple scalar interaction vanishes. In this case, a simple lepton number assignment, as discussed for case A-1, is not sufficient to make S2 and S3 distinct. This problem can, on the other hand, be cured by adding a discrete symmetry, for example Z3, with the charge ω ≡ei2π/3. We can assign ω to NRj and S2 and ω2 to NRk and S3. Under this assignment both, the triple scalar interaction as well as the Yukawas, are allowed under the discrete symmetry. However, the μ-term will still break lepton and baryon numbers softly, thus guaranteeing that the dangerous 2-body nucleon decay modes are absent. In the appendix, we list all mediator fields that appear in the high-energy completions of the uRdRdRNRNRNR operator with Topology B in Fig. 2. There we also give the decompositions of the QQdRNRNRNR operator Eq. (5) with both the topologies. We note that also while in the neutron the quarks are in the mass eigenstate basis, decomposition A-2 can contribute to the invisible neutron decay rate. The rate, however, will be suppressed by a factor sin2 θC, where θC is the Cabibbo angle. We will not discuss this possibility in further details. 12 n →3 ̄N n →Na(NZ′) n → ̄Nφ Mediator uudNNN QQdNNN ∂audd ̄N (Z′udd ̄N) ∂aQQd ̄N (Z′QQd ̄N) uudNφ QQdNφ S(1, 1, 0) ✓ ✓ S(3, 1, -1/3) ✓ ✓ ✓ ✓ ✓ ✓ S(3, 1, +2/3) ✓ ✓ ✓ S(3, 2, +1/6) ✓ V (3, 1, -1/3) ✓ ✓ V (3, 1, +2/3) ✓ V (3, 2, +1/6) ✓ ✓ ✓ F(1, 1, 0) ✓ ✓ ✓ ✓ ✓ ✓ F(3, 1, -1/3) ✓ ✓ ✓ ✓ ✓ ✓ F(3, 1, +2/3) ✓ ✓ ✓ F(3, 2, +1/6) ✓ ✓ ✓ TABLE II: List of the mediators that appear in the decompositions of the effective operators relevant for the invisible neutron decay. for topology B it is, of course, possible to lower the expected mass scale. In EFT one usually estimates the scale Λ for a Wilson coefficient c ≃1. However, for a d = 9 operator c ∝Y 4, where Y stands symbolically for any Yukawa coupling, see Fig. 2. Thus, even moderatly small Yukawa coouplings, will change the expections given in Eq. (6) drastically: Γ(n → ̄Ni ̄Nj ̄Nk) ≃ β2 hY 8m5 n 6144π3Λ10 ≃ 1 9 · 1029yr Y 0.01 8 4.5TeV Λ 10 . (23) In addition, the appendix contains tables of the decompositions for the effective operators in the other variants, which were presented in Secs. II B-II D. Similar to cases A-1 and A-3 discussed here for ON 1 , in all cases one has to make sure, that the dangerous 2-body nucleon decays are absent by the use of some symmetry, otherwise the rate for the invisible neutron decay will be negligible. Table II lists all mediator fields that appear in the decompositions of the d = (7 -9) operators. There is a total of 11 fields, 4 scalars, 4 fermions and 3 vectors. Some fields appear in all operators and in all 4 model variants (S3,1,-1/3, F1,1,0 and F3,1,-1/3), while some others appear only for a particular model variant (S1,1,0, V3,1,-1/3) or even only one particular operator (S3,2,1/6 and V3,1,2/3). Note that the fields appearing in the decompositions of the effective operator with a Z′ are the same as the ones for the operators with an ALP, the corresponding column applies to both fields. 13 W V d U u d d W l ν Q Q W/Z/h W/Z/h j j FIG. 3: Example Feynman diagrams for single and pair production of VLQs and their decays. IV. LHC PHENOMENOLOGY In this section, we will briefly discuss a variety of LHC searches, that can be re-interpreted as limits for fields listed in Tab. II. Naive expectations for the scale, Λ, of the d = (7 -9) operators, discussed in secton II, make it look quite unlikely that the LHC can find any positive signals for these states. However, as discussed around Eqs. (22) and (23), the estimates derived using EFT in section II might vastly overestimate the masses of the BSM fields in a full-fledged UV model. We therefore felt motivated to at least briefly discuss the current experimental status. The decomposition of the d = 9 operators for the NRSMEFT model, discussed in section III, generates a set of eight different BSM fields: Four vector-like fermions (F1,1,0, F3,1,-1/3, F3,1,2/3 and F3,2,1/6), three types of scalars (S1,1,0, S3,1,-1/3 and S3,1,2/3) and one vector (V3,2,1/6). Three more fields appear only in the ALP/Z′ variants (S3,2,1/6, V3,1,-1/3 and V3,1,2/3). We mention these only for the sake of completeness here, since a LHC discovery for the fields from the d = 7, 8 operator seems (even) less likely than those appearing in models for the d = 9 operators. First, we mention that for the singlet fields, there is no production mode at the LHC other than decays of the coloured BSM states. Moreover, F1,1,0 and S1,1,0 will decay to NR's and thus be invisible, if they are lighter than the coloured states. Thus, there are no constraints on these two fields from LHC searches. Let us then concentrate on the coloured fermions. Here, we follow largely the discussion in [123]. All three coloured fermions can have interactions with SM quarks and the Higgs: LY = YURQF3,1,2/3H† + YDR QF3,1,-1/3H + YQu F3,2,1/6uRH† + YQd F3,2,1/6dRH + h.c.(24) In Eq. (24) we have suppressed generation indices. The fermions F3,1,-1/3, F3,1,2/3 and F3,2,1/6 must also have vector-like mass terms. After electro-weak symmetry breaking, Eq. (24) leads to mixing between heavy and SM quarks. This mixing will dominate their decay widths, unless the corresponding Yukawa couplings are tiny. If, on the other hand, some Yukawa couplings are large, this mixing may dominate also VLQ production at the LHC, see Fig. 3, left. In addition, VLQs can always be pair-produced, see Fig. 3 to the right, independent of the size of the mixing. Note that in this figure we show the VLQs decaying to "j" to indicate any final state quark, i.e. including also third generation. 14 Both ATLAS and CMS have searched for such vector-like quarks in a number of publications. For example, ATLAS has searched for pair-produced VLQs decaying to 1st generation quarks in [124]. Limits are given as function of the VLQ mass, for different branching ratios Br(Q →W +j) versus Br(Q →H +j), assuming Br(Q →W +j)+Br(Q →H +j)+Br(Q → Z +j)= 1 [124]. Derived limits are in the range (900-1500) GeV for Br(Q →W +j) in the range (10-100) %. In [125] ATLAS presented the results of a search for pair-produced VLQs decaying to third generation quarks. Limits range from (1 -1.7) TeV depending mostly on the branching ratio Br(Q →W + t/b). These limits are slightly stronger than the 1st generation quark limits, mostly due to lower backgrounds. There are also searches for singly produced VLQs. However, in the search [126] ATLAS assumes that the VLQ decays to the final state t+h or W +b, where the top/bottom is tagged in order to reduce backgrounds. Similarly CMS [127] searched for single VLQ production in the final state t+h and t+Z. For κT = 1, where κT measures the strength of the interaction of the VLQ with electro-weak gauge bosons relative to standard EW couplings, a lower limit on VLQs decaying to top quarks of around mVLQ ≥1.9 TeV is found in [126]. There are no searches yet published for singly produced VLQs decaying to second or 1st generation quarks. However, due to much larger backgrounds in such searches, one can expects limits would be considerably weaker. However, we need to stress that all the VLQ searches discussed up to now depend on the presence of mixing. The neutron decay diagrams, that we study here, however, do not require any of the Yukawa coupling in Eq. (24) to be non-zero. Instead, couplings of the form F-q-S, where F, q and S stand for a VLQ, a SM quark and one of the BSM scalars must be present. If the VLQs are pair produced at the LHC they can then decay to a final states containing q + NR + NR, i.e. jet plus missing energy. This signal is equivalent to the one used in the standard squark search at the LHC. For example, ATLAS [128] has searched for pair produced squarks with L = 139/fb. ATLAS derives limits for one generation of squarks (roughly m ̃q ≥1.2 TeV) and 8 degenerate squarks (roughly m ̃q ≥1.8 TeV) for light neutralinos. Since VLQs (being fermions) have cross sections at least ∼4 times larger than a colour triplet scalar of the same mass, this SUSY search [128] could be reinterpreted to yield a lower limit on the VLQ mass. We estimate this limit to be roughly mVLQ ≥1.5 TeV, for any of the three coloured VLQs under consideration. The coloured scalar states appear either as leptoquarks or as diquarks in the different decompositions, discussed in the previous sections. The leptoquark couplings are: LLQ = YNu uRcNRS† 3,1,2/3 + Yde dR ceRS3,1,2/3 (25) + YNd dR cNRS† 3,1,-1/3 + Yeu uRceRS† 3,1,-1/3 + YLQ QcLS† 3,1,-1/3 + h.c., while the diquark couplings can be written as: LDQ = Ydd dR cdRS3,1,2/3 + YQQ QcQS3,1,-1/3 + Yud uRcdRS3,1,-1/3 + h.c. (26) Again, we have suppressed generation indices. We stress, however, that the flavour diagonal entries in Ydd are zero, see section III. Importantly, while the LQs and DQs appear with 15 the same SM quantum numbers, they must be different states, otherwise one can generate d = 6 proton decay tree-level diagrams, which would lead to lower limits on their masses near the GUT scale and render invisible neutron decay completely negligible, as discussed in section III. Note that only (YNu, YNd and Yud) of the eight couplings given in Eqs. (25) and (26) appear in the decompositions for the neutron decay. This is important, since none of these couplings will generate final states with charged leptons. Leptoquarks, both pair and singly produced, have been searched for at the LHC in a number of different final states. The couplings Yeu and YLQ do not appear in the decompositions, but will lead to final states with charged leptons, that are more easily constrained at the LHC. For example, [129] searched for pair produced LQs decaying to 3rd generation quarks. Lower limits on LQ masses in the range (1.2 -1.7) TeV are derived, depending on LQ decay branching ratios. ATLAS also searched for LQs decaying to light quarks [130]. For a branching ratio of Br(LQ→l + q)= 1 limits of mLQ ≥1.8 TeV and mLQ ≥1.7 TeV are excluded in the electron and muon channels. These limits weaken to roughly (700-900) GeV for Br(LQ→l + q)= 0.1. Single or resonant production of LQs requires large Yukawa couplings to give sizeable cross sections. ATLAS searched for resonant LQs [131] via a lepton-jet signature, with either one or two leptons in the final state. For Yde = 1.0 LQs with masses below mLQ = 3.4 TeV are excluded by this search. CMS searched for single LQs in t-channel diagrams [132], leading to di-lepton final states. Both scalar and vector LQs were considered. For scalar [vector] LQs with masses between (1 -5) TeV Yukawa couplings larger than (0.3-1.0) [(0.1 -1.4)] have been excluded. Again, the searches just discussed require charged lepton final states and the corresponding couplings are not required by the decompositions for invisible neutron decay. LQs that decay only to quarks and invisible final states are less constrained. For example, the SUSY search by ATLAS [128] will give a limit of roughly mLQ >∼1.2 TeV for pair produced LQ. A CMS paper [133] gives limits for pair produced LQs decaying to neutrinos. Limits are mLQ ≥1140 GeV for scalar LQs and mLQ ≥(1560 -1980) GeV for vector LQs, depending on the strength of the LQ gluon coupling. The more stringent limits for vector LQs simply reflect the larger cross sections for vectors compared to scalars. These limits apply to the state V3,2,1/6 that appears in the decomposition of ON 2 . Finally, diquarks are stringently constrained from dijet searches, since the production is schannel enhanced. For example, CMS [134] gives a lower limit on the mass of a colour triplet scalar of mDQ ≃7.5 TeV assuming a Yukawa coupling to two quarks of electromagnetic strength, e. The limits derived by CMS are strictly speaking valid only for a scalar diquark coupling to both, up-type and down-type quarks with the same strength. Thus, they apply only to S3,1,-1/3. The searches discussed above are based on luminosities up to 140/fb. Moderate improvements can be expected from the high-luminosity LHC. However, the future FCC-hh [135] would considerably improve sensitivities. For example, [136] quotes sensitivities up to mLQ ∼8 (15) TeV for pair (singly) produced leptoquarks, decaying to charged leptons. 16 Many other search channels, discussed above, should improve by similar factors. We will close this section with a short discussion of the d = 12 operator given in Eq. (1). In the beginning of this section we have argued that a Wilson coefficient of cW = 1 is often unrealistic in UV models, thus EFT tends to overestimate the masses of the BSM states. This is certainly true also for a d = 12 operator.7 We have used an automated code for operator decomposition, based on the diagrammatic method and described in [137, 138], to decompose Eq. (1) at tree-level. At such a large dimension, there is a proliferation of models. We count 39 topologies, 150 different diagrams and 12713 model variants. Despite this large number of possible model variations all model variants can be described with just 38 different fields (scalars and fermions, no vectors). 26 of these are triplets of colour, either diquarks, leptoquarks or heavy vector-like quarks. All scalars and fermions discussed above appear in this list too and the constraints we have discussed above apply also to this case. The remaining coloured states, not covered in the above discussion will have bounds, which are at least as strong, since they are all either larger SU(2) multiplets or states with larger hyper-charge. There is, however, one important difference between the UV states of the d = 12 operator and those of the d = (7 -9) operators. Whereas for the d = 9 operators, strictly speaking, only right-handed neutrino final states are required by the couplings appearing in the decomposition, in the case of the d = 12 operator similar decay rates of the leptoquarks to final states with missing energy and final states with charged leptons are expected. As discussed above, final states with charged leptons are a better signal than missing energy at the LHC and thus give generally more stringent limits. In other words, limits on states from the decompositions of the d = 12 operator from LHC will in general be more stringent than those for the d = 9 case, that motivated the discussion given in this section. V. SUMMARY In this paper we have studied invisible neutron decay. Limits on nucleon decay modes with three charged leptons [3] are much stronger than those of invisible neutron decay [5]. Invisible neutron decay with observable rates can therefore be generated only by operators that do not also generate charged lepton final states. In SMEFT one has to go up to d = 12 to find such an operator. Current limits on invisible neutron decay correspond then to scales of order Λ ≃13 TeV. An observation of invisible neutron decay in the next round of experiments might then be accompanied by new physics at the future FCC-hh [136]. However, this conclusion changes, if one adds new light degrees of freedom to the SM particle content. We have discussed four different extensions of the SM for which invisible neutron decay with observable rates can be induced by operators of dimension d = (7 -9). We have estimated the half-lives in each case and for each model we also give all possible UV completions at tree-level. Current limits on the scales of these operators range from 7 Note that in the decomposition of Eq. (1) up to seven different Yukawa couplings appear. 17 Λ ≃180 TeV for the variant model with only right-handed neutrinos (d = 9 operator) to Λ ≃2 · 109 GeV (d = 7 operator). A discovery of invisible neutron decay, therefore may point towards to existence of new, light BSM states. In particular, we stress that in all possible models a neutral fermion state with the quantum numbers of a right-handed neutrino must exist. Thus, these models will also be able to explain neutrino masses. We have also discussed constraints on the allowed rate for invisible neutron decay, that could be placed by a search for n →π0 + E/ . Hyper-Kamiokande [7] should be able to improve on the limit of n →π0 + E/ from Super-Kamiokande [121] by a considerable factor. We note that the Super-Kamiokande analysis [121] gives only a limit on a two-body final state, but the same type of search can also be used to constrain decays with more than one invisible particle in the final state. Non-observation of n →π0 + 3 ̄N at Hyper-Kamiokande might rule out the possibility of observing invisible neutron decay in the foreseeable future. However, to end this discussion on a more positive note, we mention that the sensitivity of Hyper-Kamiokande and the future JUNO and (the proposed) THEIA experiment to the scales of the invisible neutron decay operators will be rather similar. Observation of invisible neutron decay in JUNO should lead to an excess in the Hyper-Kamiokande search for n →π0 + E/ and vice versa. The question whether invisible neutron decay or the decay n →π0 + E/ would be discovered first, however, is not so easy to answer. While the future Hyper-Kamiokande experiment [7] is much larger than Super-Kamiokande, and thus allows to probe much larger half-lives, already the Super-Kamiokande limit [121] is background dominated. To claim a discovery in this case requires both hundreds of signal events and a detailed understanding of backgrounds. JUNO's search for the invisible neutron decay [45], on the other hand, is based on a characteristic triple coincidence arising from the invisible decays of s-shell neutrons in 12C, which leaves very few background events, making a discovery with just a handful of events possible. Finally, one can speculate that adding the two gammas from the π0 decay to the triple coincidence for the invisible neutron decay would eliminate all remaining background in JUNO's search. We do not know, however, whether this advantage is (over-)compensated by the loss of signal extraction efficiency or not. Only the experimental collaboration can perform an analysis sufficiently sophisticated to decide whether n →π0 +E/ or the invisible neutron decay itself would be more sensitive to the different models (and the corresponding operators) that we discussed in the present paper. Acknowledgements J.C.H and T.O acknowledge support from ANID - Millennium Science Initiative Program ICN2019 044. The research of J.C.H is supported by ANID Chile through FONDECYT regular grant No 1241685. The research of T.O. is supported by ANID Chile through FONDECYT regular grant No 1250343. M.H. acknowledges support by Spanish grants PID2023-147306NB-I00 and CEX2023-001292-S (MCIU/AEI/10.13039/501100011033), as well as CIPROM/2021/054 (Generalitat Valenciana). 18 Appendix A: List of the high-energy completions Here we list the decompositions of the effective operators and the charges under the SM gauge symmetries of the necessary mediator fields in each decomposition. • Variant I with light right-handed neutrinos NR - Tabs. I, III, IV, and V. Fig. 2 for the topologies. • Variant II with an NR and an axion-like particle a - Tabs. VI and VII. Fig. 4 for the topology. • Variant III with an NR and a scalar field φ - Tabs. VIII and IX. Fig. 4 for the topology. • Variant IV with an NR and a vector boson Z′ - The mediators appear in this variant are the same as those listed in Tabs. VI and VII for Variant II. The "Decomposition" column shows how the outer fields are distributed to the vertices in the corresponding topology. The fields in a parenthesis are in the same vertex. Let us take the first line of Tab. III (uRdR)(NRi)(NRj)(dRNRk) (A1) as an example. "(uRdR)" means that uR and dR correspond to the two outer fields in the leftmost vertex in Topology B given in Fig. 2. The mediator field between the (uRdR) vertex and the second-left vertex with NRi is S1(3, 1, +1/3). The direction of the S1 particle number is indicated with arrow in Fig. 2. The two of the middle vertices with NRi and NRj are mediated by the fermion F(3, 1, +1/3). The outer fields in the rightmost vertex are dR and NRk, and the mediator between the (NRj) vertex and the (dRNRk) vertex is S2(3, 1, -1/3). In short, the high-energy completion of this decomposition is determined as LTab.III-1 =yudεIJK(uRc)I ̇a(dR) ̇a J(S† 1)K + ydN(dR c)I ̇a(NR) ̇a(S† 2)I + yFNS1(FL)I ̇a(NR) ̇a(S1)I + yNFS2(NR c) ̇a(FR)I ̇a(S2)I. (A2) The charges of S2 and S† 1 are the same. However, if they are an identical field, it mediates the d = 6 effective operator Ld=6 = yudydN M 2 S1 εIJK(uRc)I ̇a(dR) ̇a J(dR c)K ̇b(NR) ̇b (A3) which induces p →π+ ̄N with larger decay rate than d = 9 induced invisible neutron decay, i.e., the strong bound from p →π+ ̄N constrains the invisible neutron decay to an unreachable level. In short, if one wants to have this decomposition for the invisible neutron decay, the mediator S2 must be an independent field from S† 1, although the SM charges are the same. This is mentioned in the "Comment" column as "S2 ̸= S† 1 to avoid d = 6". 19 S1/V1 /F1 S2/V2 /F2 FIG. 4: Topology for the operators with four fermions and 1 scalar, such as uRdRdRNRφ and uRdRdRNRa, i.e., One of the outer legs should be a scalar, φ or a. The directions of arrows on the outer legs depends on the decomposition. Although the mediators are given with solid lines in this topology, they can be scalars S, vectors V or fermions F, depending on the distribution of outer fields. The decompositions with the comment "dRdRS† = 0" contain the interaction L = yεIJK(dR c)I ̇a(dR) ̇a J(S†)K (A4) which vanishes because of the colour antisymmetric nature of the two dRs forming a scalar. The decomposition (uRNRi)(dR)(dR)(NRjNRk) does not contain the interaction Eq. (A4), but the effective operator induced from this decomposition results in the same structure εIJK(dR c)I ̇a(dR) ̇a J that vanishes. For all fermion mediators that appear in the decompositions, we assume a vector-like fermions (left and right 2-spinors with the same charges form a 4-componenet Dirac fermion). However, the mediator of the SM singlet field can be a Majorana fermion (only 2 components out of 4 are independent). The decompositions with the SM singlet fermion mediator are indicated with the comment "F can be Majorana". In the last decomposition in Tab. IV, the two vector mediators have the same SM charges. Since they are antisymmetric under both SU(3)c and SU(2)L, the interaction V1V2S3 LTab.IV-3 = εIJK(V1)Iiρ(iτ 2)ij(V2)ρ Jj(S3)K + · · · (A5) does not vanish even if they are identical fields. This is different from the last decomposition in Tab. I, where S3 ̸= S2 is required so that the S1S2S3 interaction does not vanish. The comment "V1 = V2" indicates this fact. [1] Super-Kamiokande, A. Takenaka et al., Phys. Rev. D 102, 112011 (2020), . [2] Super-Kamiokande, K. Yamauchi et al., (2025), . [3] Super-Kamiokande, M. Tanaka et al., Phys. Rev. D 101, 052011 (2020), . 20 Decomposition S1 F S2 Comment (uRdR)(NRi)(NRj)(dRNRk) (3, 1, +1/3) (3, 1, +1/3) (3, 1, -1/3) S2 ̸= S† 1 to avoid d = 6 (uRdR)(dR)(NRi)(NRjNRk) (3, 1, +1/3) (1, 1, 0) (1, 1, 0) F can be Majorana (uRdR)(NRi)(dR)(NRjNRk) (3, 1, +1/3) (3, 1, +1/3) (1, 1, 0) (dRNRi)(uR)(dR)(NRjNRk) (3, 1, -1/3) (3, 1, +1/3) (1, 1, 0) (dRNRi)(dR)(uR)(NRjNRk) (3, 1, -1/3) (3, 1, -2/3) (1, 1, 0) (uRNRi)(NRj)(NRk)(dRdR) (3, 1, +2/3) (3, 1, +2/3) (3, 1, -2/3) dRdRS† 2 = 0 (uRNRi)(dR)(dR)(NRjNRk) (3, 1, +2/3) (3, 1, +1/3) (1, 1, 0) εIJK(dRIdRJ) = 0 (dRdR)(uR)(NRi)(NRjNRk) (3, 1, -2/3) (1, 1, 0) (1, 1, 0) dRdRS† 1 = 0 (dRdR)(NRi)(uR)(NRjNRk) (3, 1, -2/3) (3, 1, -2/3) (1, 1, 0) dRdRS† 1 = 0 (uRNRi)(dR)(NRj)(dRNRk) (3, 1, +2/3) (3, 1, +1/3) (3, 1, -1/3) (uRNRi)(NRj)(dR)(dRNRk) (3, 1, +2/3) (3, 1, +2/3) (3, 1, -1/3) (dRNRi)(uR)(NRj)(dRNRk) (3, 1, -1/3) (3, 1, +1/3) (3, 1, -1/3) TABLE III: Decompositions and the necessary mediator fields of the d = 9 effective operator uRdRdRNRNRNR with Topology B in Fig. 2. Decomposition S1/V1 S2/V2 S3/V3 Comment (QQ)(dRNRi)(NRjNRk) S1(3, 1, +1/3) S2(3, 1, -1/3) S3(1, 1, 0) (QdR)(QNRi)(NRjNRk) V1(3, 2, -1/6) V2(3, 2, +1/6) S3(1, 1, 0) (QNRi)(QNRj)(dRNRk) V1(3, 2, +1/6) V2(3, 2, +1/6) S3(3, 1, -1/3) V1 = V2 TABLE IV: Decompositions of the QQdRNRNRNR operator with Topology A in Fig. 2. [4] Super-Kamiokande, V. Takhistov, Review of Nucleon Decay Searches at SuperKamiokande, in 51st Rencontres de Moriond on EW Interactions and Unified Theories, pp. 437-444, 2016, . [5] Particle Data Group, S. Navas et al., Phys. Rev. D 110, 030001 (2024). [6] JUNO, F. An et al., J. Phys. G 43, 030401 (2016), . [7] Hyper-Kamiokande, K. Abe et al., (2018), . [8] DUNE, B. Abi et al., JINST 15, T08008 (2020), . [9] H. Georgi and S. L. Glashow, Phys. Rev. Lett. 32, 438 (1974). [10] P. Nath and P. Fileviez Perez, Phys. Rept. 441, 191 (2007), arXiv:hep-ph/0601023. [11] A. de Gouvea, J. Herrero-Garcia, and A. Kobach, Phys. Rev. D 90, 016011 (2014), 21 Decomposition S1/V1 F S2/V2 Comment (QQ)(NRi)(NRj)(dRNRk) S1(3, 1, +1/3) (3, 1, +1/3) S2(3, 1, -1/3) (QQ)(dR)(NRi)(NRjNRk) S1(3, 1, +1/3) (1, 1, 0) S2(1, 1, 0) F can be Majorana (QQ)(NRi)(dR)(NRjNRk) S1(3, 1, +1/3) (3, 1, +1/3) S2(1, 1, 0) (dRNRi)(Q)(Q)(NRjNRk) S1(3, 1, -1/3) (3, 2, -1/6) S2(1, 1, 0) (QdR)(NRi)(NRj)(QNRk) V1(3, 2, -1/6) (3, 2, -1/6) V2(3, 2, +1/6) (QdR)(Q)(NRi)(NRjNRk) V1(3, 2, -1/6) (1, 1, 0) S2(1, 1, 0) F can be Majorana (QdR)(NRi)(Q)(NRjNRk) V1(3, 2, -1/6) (3, 2, -1/6) S2(1, 1, 0) (QNRi)(Q)(dR)(NRjNRk) V1(3, 2, +1/6) (3, 1, +1/3) S2(1, 1, 0) (QNRi)(dR)(Q)(NRjNRk) V1(3, 2, +1/6) (3, 2, -1/6) S2(1, 1, 0) (QNRi)(dR)(NRj)(QNRk) V1(3, 2, +1/6) (3, 2, -1/6) V2(3, 2, +1/6) V1 = V2 (QNRi)(Q)(NRj)(dRNRk) V1(3, 2, +1/6) (3, 1, +1/3) S2(3, 1, -1/3) (QNRi)(NRj)(Q)(dRNRk) V1(3, 2, +1/6) (3, 2, +1/6) S2(3, 1, -1/3) TABLE V: Decompositions of the QQdRNRNRNR operator with Topology B in Fig. 2. . [12] A. Kobach, Phys. Lett. B 758, 455 (2016), . [13] T. Hambye and J. Heeck, Phys. Rev. Lett. 120, 171801 (2018), . [14] R. M. Fonseca, M. Hirsch, and R. Srivastava, Phys. Rev. D 97, 075026 (2018), . [15] J. C. Helo, M. Hirsch, and T. Ota, JHEP 06, 047 (2018), . [16] J. C. Helo, M. Hirsch, and T. Ota, Phys. Rev. D 99, 095021 (2019), . [17] J. Heeck and V. Takhistov, Phys. Rev. D 101, 015005 (2020), . [18] A. B. Beneito, I, J. Gargalionis, J. Herrero-Garcia, A. Santamaria, and M. A. Schmidt, JHEP 07, 004 (2024), . [19] J. Gargalionis, J. Herrero-Garc ́ıa, and M. A. Schmidt, JHEP 06, 182 (2024), . [20] J. Heeck and D. Watkins, JHEP 07, 170 (2024), . [21] T. Li, M. A. Schmidt, and C.-Y. Yao, JHEP 08, 221 (2024), . [22] T. Li, M. A. Schmidt, and C.-Y. Yao, JHEP 06, 077 (2025), . [23] A. B. I Beneito, J. Gargalionis, J. Herrero-Garcia, and M. A. Schmidt, (2025), . [24] J. Heeck and D. Sokhashvili, Phys. Lett. B 868, 139791 (2025), . [25] J. Heeck and I. M. Shoemaker, Phys. Rev. Lett. 135, 111804 (2025), . 22 Decomposition S1/V1/F1 S2/V2/F2 Comment [∂](dRdR)(uR)(NRa) S1(3, 1, -2/3) F2(1, 1, 0) dRdRS† 1 = 0 [∂](dRNR)(uR)(dRa) V1(3, 1, -1/3) F2(3, 1, -1/3) [∂](uRdR)(dR)(NRa) S1(3, 1, +1/3) F2(1, 1, 0) F2 can be Majorana [∂](uRNR)(dR)(dRa) V1(3, 1, +2/3) F2(3, 1, -1/3) [∂](uRa)(dR)(dRNR) F1(3, 1, +2/3) V2(3, 1, -1/3) [∂](uRdR)(NR)(dRa) S1(3, 1, +1/3) F2(3, 1, -1/3) [∂](uRa)(NR)(dRdR) F1(3, 1, +2/3) S2(3, 1, -2/3) dRdRS† 2 = 0 [∂](uRdR)(a)(dRNR) S1(3, 1, +1/3) V2(3, 1, -1/3) [∂](uRNR)(a)(dRdR) V1(3, 1, +2/3) S2(3, 1, -2/3) dRdRS† 2 = 0 TABLE VI: Decompositions of uRdRdRNRa, which result in the d = 8 effective operator with a derivative operator, ∂auRdRdRNR. The operator uRdRdRNRZ′ with a vector boson is decomposed with the same mediators. [26] S. Weinberg, Phys. Rev. Lett. 43, 1566 (1979). [27] F. Wilczek and A. Zee, Phys. Rev. Lett. 43, 1571 (1979). [28] S. Weinberg, Phys. Rev. D 22, 1694 (1980). [29] H. A. Weldon and A. Zee, Nucl. Phys. B 173, 269 (1980). [30] L. F. Abbott and M. B. Wise, Phys. Rev. D 22, 2208 (1980). [31] M. Claudson, M. B. Wise, and L. J. Hall, Nucl. Phys. B 195, 297 (1982). [32] J. P. Bowes, R. Foot, and R. R. Volkas, Phys. Rev. D 54, 6936 (1996), arXiv:hep-ph/9609290. [33] S. Kovalenko and I. Schmidt, Phys. Lett. B 562, 104 (2003), arXiv:hep-ph/0210187. [34] J. M. Arnold, B. Fornal, and M. B. Wise, Phys. Rev. D 87, 075004 (2013), . [35] N. Assad, B. Fornal, and B. Grinstein, Phys. Lett. B 777, 324 (2018), . [36] J. de Blas, J. C. Criado, M. Perez-Victoria, and J. Santiago, JHEP 03, 109 (2018), . [37] X.-X. Li, Z. Ren, and J.-H. Yub, Phys. Rev. D 109, 095041 (2024), . [38] Y. Aoki, T. Izubuchi, E. Shintani, and A. Soni, Phys. Rev. D 96, 014506 (2017), . [39] Kamiokande, Y. Suzuki et al., Phys. Lett. B 311, 357 (1993). [40] SNO, S. N. Ahmed et al., Phys. Rev. Lett. 92, 102004 (2004), arXiv:hep-ex/0310030. [41] KamLAND, T. Araki et al., Phys. Rev. Lett. 96, 101802 (2006), arXiv:hep-ex/0512059. [42] Y. A. Kamyshkov and E. Kolbe, Phys. Rev. D 67, 076007 (2003), arXiv:nucl-th/0206030. [43] SNO+, M. Anderson et al., Phys. Rev. D 99, 032008 (2019), . [44] SNO+, A. Allega et al., Phys. Rev. D 105, 112012 (2022), . 23 Decomposition S1/V1/F1 S2/V2/F2 Comment [∂](QQ)(dR)(NRa) S1(3, 1, +1/3) F2(1, 1, 0) F can be Majorana [∂](QNR)(dR)(Qa) S1(3, 2, +1/6) F2(3, 2, +1/6) [∂](QdR)(Q)(NRa) V1(3, 2, -1/6) F2(1, 1, 0) F can be Majorana [∂](QNR)(Q)(dRa) S1(3, 2, +1/6) F2(3, 1, -1/3) [∂](Qa)(Q)(dRNR) F1(3, 2, +1/6) V2(3, 1, -1/3) [∂](QdR)(NR)(Qa) V1(3, 2, -1/6) F2(3, 2, +1/6) [∂](dRa)(NR)(QQ) F1(3, 1, -1/3) S2(3, 1, +1/3) [∂](QdR)(a)(QNR) V1(3, 2, -1/6) S2(3, 2, +1/6) [∂](dRNR)(a)(QQ) V1(3, 1, -1/3) S2(3, 1, +1/3) TABLE VII: Decompositions of QQdRNRa, which result in the d = 8 effective operator with a derivative operator, ∂aQQdRNR. The operator QQdRNRZ′ is decomposed with the same mediators. Decomposition S1/F1 S2/F2 Comment (dRdR)(uR)(NRφ) S1(3, 1, -2/3) F2(1, 1, 0) dRdRS† 1 = 0 (dRNR)(uR)(dRφ) S1(3, 1, -1/3) F2(3, 1, -1/3) (uRdR)(dR)(NRφ) S1(3, 1, +1/3) F2(1, 1, 0) F2 can be Majorana (uRNR)(dR)(dRφ) S1(3, 1, +2/3) F2(3, 1, -1/3) (uRφ)(dR)(dRNR) F1(3, 1, +2/3) S2(3, 1, -1/3) (uRdR)(NR)(dRφ) S1(3, 1, +1/3) F2(3, 1, -1/3) (uRφ)(NR)(dRdR) F1(3, 1, +2/3) S2(3, 1, -2/3) dRdRS† 2 = 0 (uRdR)(φ)(dRNR) S1(3, 1, +1/3) S2(3, 1, -1/3) S† 2 ̸= S1 to avoid d = 6 uRdRdRNR (uRNR)(φ)(dRdR) S1(3, 1, +2/3) S2(3, 1, -2/3) dRdRS† 2 = 0 TABLE VIII: Decompositions of uRdRdRNRφ. [45] JUNO, A. Abusleme et al., Eur. Phys. J. C 85, 5 (2025), . [46] Theia, M. Askins et al., Eur. Phys. J. C 80, 416 (2020), . [47] M. A. Acero et al., J. Phys. G 51, 120501 (2024), . [48] B. Dasgupta and J. Kopp, Phys. Rept. 928, 1 (2021), . 24 Decomposition S1/V1/F1 S2/V2/F2 Comment (QdR)(Q)(NRφ) V1(3, 2, -1/6) F2(1, 1, 0) F2 can be Majorana (QNR)(Q)(dRφ) V1(3, 2, +1/6) F2(3, 1, -1/3) (Qφ)(Q)(dRNR) F1(3, 2, +1/6) S2(3, 1, -1/3) (QQ)(dR)(NRφ) S1(3, 1, +1/3) F2(1, 1, 0) F2 can be Majorana (QNR)(dR)(Qφ) V1(3, 2, +1/6) F2(3, 2, +1/6) (QQ)(NR)(dRφ) S1(3, 1, +1/3) F2(3, 1, -1/3) (QdR)(NR)(Qφ) V1(3, 2, -1/6) F2(3, 2, +1/6) (QQ)(φ)(dRNR) S1(3, 1, +1/3) S2(3, 1, -1/3) S† 2 ̸= S1 to avoid d = 6 QQdRNR (QdR)(φ)(QNR) V1(3, 2, -1/6) V2(3, 2, +1/6) V † 2 ̸= V1 to avoid d = 6 QQdRNR TABLE IX: Decompositions of QQdRNRφ. [49] A. Diaz, C. A. Arg ̈uelles, G. H. Collin, J. M. Conrad, and M. H. Shaevitz, Phys. Rept. 884, 1 (2020), . [50] S. B ̈oser et al., Prog. Part. Nucl. Phys. 111, 103736 (2020), . [51] M. Dentler et al., JHEP 08, 010 (2018), . [52] J. Kopp, P. A. N. Machado, M. Maltoni, and T. Schwetz, JHEP 05, 050 (2013), . [53] K. N. Abazajian et al., (2012), . [54] LSND, A. Aguilar et al., Phys. Rev. D 64, 112007 (2001), arXiv:hep-ex/0104049. [55] MiniBooNE, A. A. Aguilar-Arevalo et al., Phys. Rev. Lett. 105, 181801 (2010), . [56] MiniBooNE, A. A. Aguilar-Arevalo et al., Phys. Rev. D 103, 052002 (2021), . [57] S. R. Elliott, V. Gavrin, and W. Haxton, Prog. Part. Nucl. Phys. 134, 104082 (2024), . [58] V. V. Barinov et al., Phys. Rev. Lett. 128, 232501 (2022), . [59] M. Dentler, ́A. Hern ́andez-Cabezudo, J. Kopp, M. Maltoni, and T. Schwetz, JHEP 11, 099 (2017), . [60] STEREO, H. Almaz ́an et al., Nature 613, 257 (2023), . [61] DESI, A. G. Adame et al., JCAP 02, 021 (2025), . [62] DESI, A. G. Adame et al., JCAP 07, 028 (2025), . [63] DESI, M. Abdul Karim et al., (2025), . [64] DESI, W. Elbers et al., (2025), . [65] Y. Farzan and S. Hannestad, JCAP 02, 058 (2016), . 25 [66] M. Escudero, T. Schwetz, and J. Terol-Calvo, JHEP 02, 142 (2023), , [Addendum: JHEP 06, 119 (2024)]. [67] C. Benso, T. Schwetz, and D. Vatsyayan, JCAP 04, 054 (2025), . [68] T. Ota, JHEP 03, 023 (2025), . [69] S. Bhattacharya and J. Wudka, Phys. Rev. D 94, 055022 (2016), , [Erratum: Phys.Rev.D 95, 039904 (2017)]. [70] Y. Liao and X.-D. Ma, Phys. Rev. D 96, 015012 (2017), . [71] H.-L. Li, Z. Ren, M.-L. Xiao, J.-H. Yu, and Y.-H. Zheng, JHEP 11, 003 (2021), . [72] S. Weinberg, Phys. Rev. Lett. 40, 223 (1978). [73] F. Wilczek, Phys. Rev. Lett. 40, 279 (1978). [74] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977). [75] R. D. Peccei and H. R. Quinn, Phys. Rev. D 16, 1791 (1977). [76] P. W. Graham, I. G. Irastorza, S. K. Lamoreaux, A. Lindner, and K. A. van Bibber, Ann. Rev. Nucl. Part. Sci. 65, 485 (2015), . [77] I. Brivio et al., Eur. Phys. J. C 77, 572 (2017), . [78] M. Bauer, M. Neubert, and A. Thamm, JHEP 12, 044 (2017), . [79] I. G. Irastorza and J. Redondo, Prog. Part. Nucl. Phys. 102, 89 (2018), . [80] M. Bauer, M. Heiles, M. Neubert, and A. Thamm, Eur. Phys. J. C 79, 74 (2019), . [81] M. B. Gavela, R. Houtz, P. Quilez, R. Del Rey, and O. Sumensari, Eur. Phys. J. C 79, 369 (2019), . [82] L. Merlo, F. Pobbe, S. Rigolin, and O. Sumensari, JHEP 06, 091 (2019), . [83] J. Alda, M. Fuentes Zamoro, L. Merlo, X. Ponce D ́ıaz, and S. Rigolin, (2025), . [84] D. Cadamuro and J. Redondo, JCAP 02, 032 (2012), . [85] P. Arias et al., JCAP 06, 013 (2012), . [86] D. J. E. Marsh, Phys. Rept. 643, 1 (2016), . [87] J. A. Dror, H. Murayama, and N. L. Rodd, Phys. Rev. D 103, 115004 (2021), , [Erratum: Phys.Rev.D 106, 119902 (2022)]. [88] F. Chadha-Day, J. Ellis, and D. J. E. Marsh, Sci. Adv. 8, abj3618 (2022), . [89] C. B. Adams et al., Axion Dark Matter, in Snowmass 2021, 2022, . [90] H. Song, H. Sun, and J.-H. Yu, JHEP 01, 161 (2024), . [91] H. Song, H. Sun, and J.-H. Yu, JHEP 05, 103 (2024), . [92] C. Grojean, J. Kley, and C.-Y. Yao, JHEP 11, 196 (2023), . [93] R. V. Harlander and M. C. Schaaf, Comput. Phys. Commun. 300, 109198 (2024), . [94] R. M. Fonseca, J. Phys. Conf. Ser. 873, 012045 (2017), . [95] L. Hui, J. P. Ostriker, S. Tremaine, and E. Witten, Phys. Rev. D 95, 043541 (2017), . 26 [96] P. Agrawal, N. Kitajima, M. Reece, T. Sekiguchi, and F. Takahashi, Phys. Lett. B 801, 135136 (2020), . [97] S. D. McDermott and S. J. Witte, Phys. Rev. D 101, 063030 (2020), . [98] M. Fabbrichesi, E. Gabrielli, and G. Lanfranchi, (2020), . [99] A. Caputo, A. J. Millar, C. A. J. O'Hare, and E. Vitagliano, Phys. Rev. D 104, 095029 (2021), . [100] G. R. Dvali, G. Gabadadze, and G. Senjanovic, p. 525 (1999), arXiv:hep-ph/9910207. [101] R. N. Mohapatra and A. Perez-Lorenzana, Phys. Rev. D 67, 075015 (2003), arXiv:hepph/0212254. [102] S. Girmohanta and R. Shrock, Phys. Rev. D 101, 015017 (2020), . [103] S. Girmohanta, Eur. Phys. J. C 81, 143 (2021), . [104] B. Fornal and B. Grinstein, Phys. Rev. Lett. 120, 191801 (2018), , [Erratum: Phys.Rev.Lett. 124, 219901 (2020)]. [105] D. Barducci, M. Fabbrichesi, and E. Gabrielli, Phys. Rev. D 98, 035049 (2018), . [106] J. M. Cline and J. M. Cornell, JHEP 07, 081 (2018), . [107] Z. Berezhiani, LHEP 2, 118 (2019), . [108] B. Fornal and B. Grinstein, Mod. Phys. Lett. A 35, 2030019 (2020), . [109] A. Czarnecki, W. J. Marciano, and A. Sirlin, Phys. Rev. Lett. 120, 202002 (2018), . [110] Y. Hao and D. Ni, Phys. Rev. D 107, 035026 (2023), . [111] P. Minkowski, Phys.Lett. B67, 421 (1977). [112] T. Yanagida, Conf.Proc. C7902131, 95 (1979). [113] R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980). [114] J. Schechter and J. Valle, Phys. Rev. D22, 2227 (1980). [115] M. Bauer, M. Neubert, S. Renner, M. Schnubel, and A. Thamm, JHEP 09, 056 (2022), . [116] A. Biek ̈otter and K. Mimasu, Axions and Axion-like particles: collider searches (, 2025), , . [117] J. Jaeckel and A. Ringwald, Ann. Rev. Nucl. Part. Sci. 60, 405 (2010), . [118] S. Alekhin et al., Rept. Prog. Phys. 79, 124201 (2016), . [119] J. Alimena et al., J. Phys. G 47, 090501 (2020), . [120] H. Georgi, D. B. Kaplan, and L. Randall, Phys. Lett. B 169, 73 (1986). [121] Super-Kamiokande, K. Abe et al., Phys. Rev. Lett. 113, 121802 (2014), . [122] F. Bonnet, M. Hirsch, T. Ota, and W. Winter, JHEP 03, 055 (2013), , [Erratum: JHEP 04, 090 (2014)]. [123] R. Cepedello, F. Esser, M. Hirsch, and V. Sanz, JHEP 12, 098 (2024), . [124] ATLAS, G. Aad et al., Phys. Rev. D 110, 052009 (2024), . [125] ATLAS, G. Aad et al., Phys. Lett. B 854, 138743 (2024), . [126] ATLAS, G. Aad et al., Phys. Rev. D 105, 092012 (2022), . 27 [127] CMS, A. Hayrapetyan et al., Phys. Rev. D 110, 072012 (2024), . [128] ATLAS, G. Aad et al., JHEP 02, 143 (2021), . [129] ATLAS, G. Aad et al., Phys. Lett. B 854, 138736 (2024), . [130] ATLAS, G. Aad et al., JHEP 10, 112 (2020), . [131] ATLAS, G. Aad et al., (2025), . [132] CMS, A. Hayrapetyan et al., (2025), . [133] CMS, A. M. Sirunyan et al., Eur. Phys. J. C 80, 3 (2020), . [134] CMS, A. M. Sirunyan et al., JHEP 05, 033 (2020), . [135] FCC, A. Abada et al., Eur. Phys. J. ST 228, 755 (2019). [136] FCC, A. Abada et al., Eur. Phys. J. C 79, 474 (2019). [137] R. Cepedello, F. Esser, M. Hirsch, and V. Sanz, JHEP 09, 229 (2022), . [138] R. Cepedello, F. Esser, M. Hirsch, and V. Sanz, JHEP 09, 081 (2023), . 28
2510.14941
Non-Minimally Coupled Quintessence in Light of DESI Samuel S´anchez L´opez,a,b Alexandros Karam,c Dhiraj Kumar Hazraa,b,d aThe Institute of Mathematical Sciences, HBNI, CIT Campus, Chennai 600113, India bHomi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India cNational Institute of Chemical Physics and Biophysics, R¨avala 10, 10143 Tallinn, Estonia dINAF/OAS Bologna, Osservatorio di Astrofisica e Scienza dello Spazio, Area della ricerca CNR-INAF, via Gobetti 101, I-40129 Bologna, Italy E-mail: ssanchezlopez@imsc.res.in, alexandros.karam@kbfi.ee, dhiraj@imsc.res.in Abstract. We analyze a model of quintessence governed by an exponential potential and non- minimally coupled to gravity, in light of recent datasets, including cosmic microwave back- ground, baryon acoustic oscillations, and supernovae distance moduli observations. Mainly focusing on the Palatini formulation of gravity, a phase space analysis reveals the existence of a late-time stable de Sitter attractor as long as the non-minimal coupling constant is nega- tive, regardless of the value of the slope of the exponential. Fitting to CMB+DESI+DESY5 data, we find strong evidence for our model over ΛCDM, with a Bayes factor log B = 5.52. Furthermore, the data seem to prefer dynamical dark energy at > 3σ C.L. and a phantom crossing in the barotropic parameter of dark energy at 2 −3σ C.L.. We find that the scalar field dynamics in the Palatini formalism provides marginally better agreement to the data compared to the metric formalism. arXiv:2510.14941v1 [astro-ph.CO] 16 Oct 2025 Contents 1 Introduction 1 2 The model 2 2.1 Action and field equations 3 2.2 Dynamical system analysis 6 3 Datasets and Methodology 10 3.1 CMB Data 10 3.2 BAO Data 12 3.3 SNIa Data 13 3.4 Dynamical System and Initial Conditions 13 3.5 Free Parameters and Posterior Sampling 14 4 Results 16 4.1 Observational constraints 16 4.2 Palatini vs Metric 20 5 Conclusions 22 1 Introduction The entire history of the Universe, from the initial conditions of the Hot Big Bang, provided by primordial quantum fluctuations stretched to super-horizon scales [1–4], until the present day, 13.8 billion years later, is well described by a six-parameter model dubbed ΛCDM with a power law form of primordial spectrum, or the standard model of cosmology. Two of the six parameters correspond to the amplitude and tilt of a nearly scale-invariant spectrum for the initial conditions, while the remaining four are linked to background quantities, namely the density parameters of baryonic and cold dark matter (CDM), the reionization depth, and the angular size of the horizon at recombination. ΛCDM, which serves as a baseline model, not only is in excellent agreement with the most precise Cosmic Microwave Background (CMB) data from Planck [5], but also successfully addresses the low redshift observations, making it the most successful cosmological model to date. Despite this triumphant success, a more careful analysis reveals tensions between cos- mological observations when the standard model is used to simultaneously fit high- and low-redshift datasets in a joint analysis. For example, the value of the Hubble parameter inferred from CMB data is about 8% smaller, at a confidence level of 5σ, than the value locally measured by using the distance ladder [6], in what is called the Hubble tension [7, 8]. Other tensions include discrepancies in the inferred value of the matter clustering parameter S8 or the CMB lensing amplitude Alens anomaly (see Ref. [9] for a comprehensive review). More recently, baryon acoustic oscillations (BAO) measurements [10, 11] reveal mild tensions with CMB data, particularly when combined with type Ia supernovae (SNIa) observations. The focus of the present work will be the latter, by considering a model of dynamical dark energy featuring deviations from a cosmological constant behaviour at late times, something that has been the subject of intense study in the recent past [12–83]. – 1 – In this paper, we take a theoretically motivated approach, keeping simplicity in mind. Thus, the role of dynamical dark energy is played by a scalar field ϕ. Scalar fields are ubiqui- tous in high energy physics, including the Higgs mechanism [84, 85], inflation [86], dark mat- ter [87–89], string theory [90–92], modified gravity [93, 94], and, in our case, quintessence [95– 97]. Furthermore, we include a non-minimal coupling to gravity in the action governing its dynamics. Indeed, in the context of quantum field theory in curved spacetime, even if at tree level the field is minimally coupled, renormalization requires the inclusion of counterterms that couple it to the Ricci scalar R. In this way, the action at loop level must include a term proportional to ϕ2R [98–100]. The inclusion of a non-minimal coupling in the action makes the dynamics sensitive to the formalism of the theory of gravity. In this work we mainly focus on the Palatini formalism [101, 102], which has gained decisive momentum in recent years [103–176], and emphasize the differences with respect to the widely used metric formalism throughout the paper. In the Palatini formalism, the connection is taken to be a priori independent in such a way that the action should be varied with respect to it, in addition to the metric. The result is that the field equations acquire additional terms relative to their metric counterparts, in which the connection is fixed to the Levi-Civita form, leading to different dynamics. This relatively subtle point is non-existent for an Einstein-Hilbert action. In this case, the variation of the action with respect to the connection dynamically fixes it to its Levi-Civita form, and both the metric and Palatini formalisms agree. In this work, we analyze for the first time a model of Palatini non-minimally cou- pled quintessence in the light of state-of-the-art cosmological data, including CMB [5], BAO [11, 177], and SNe [178] observations. Our focus is both theoretical, performing a phase space analysis of the model, and observational, including a thorough statistical study of our results. Our work emphasizes the capability of current data to probe modifications to general relativity, as well as the degrees of freedom of the theory of gravity itself. The paper is organized as follows. In Sec. 2 we lay out the theoretical aspects of the model. In Sec. 2.1 we compute the field equations, explicitly showing the difference between the metric and Palatini formalisms, and in Sec. 2.2 we write the field equations as an autonomous dynamical system, providing the phase space analysis. Sec. 3 is devoted to describing the datasets and methodology used to constrain our model, and we present the results in Sec. 4. In Sec. 4.1 we show the improvement in fit to different datasets and the resulting parameter posteriors, and in Sec. 4.2 we compare the fits to the data of both the Palatini and metric formalisms. Finally, in Sec. 5 we give our concluding remarks and outlook. Greek indices represent space-time coordinates µ, ν = 0, 1, 2, 3 and Latin indices repre- sent spatial coordinates i, j = 1, 2, 3. Repeated indices are summed over. We assume natural units with c = ℏ= 1 and mP = 1/√8πGN = 2.44 × 1018 GeV, where mP is the reduced Planck mass. The signature of the metric is mostly positive (−, +, +, +). 2 The model In this section, we present our model of non-minimally coupled quintessence, which will then be analyzed in the light of different datasets. Previous works on the subject include Refs. [179–182], utilizing the DESI DR1 data, and the more recent Refs. [13, 17, 22, 45, 68] utilizing the DESI DR2 data. Even though they study different models, what they all have in common is the underlying formalism of gravity: the metric formalism. In the present work, – 2 – we mainly focus on the Palatini formalism, although we shall also give important results in the metric formalism, both for the sake of comparison and to emphasize the differences between the two formalisms. To the best of our knowledge, our model was only previously considered in Ref. [183] as a standalone quintessence model and in Refs. [171, 184] in the context of quintessential inflation. In what follows, we give a brief overview of the dynamics of a scalar field non-minimally coupled to gravity, deriving the field equations in both formalisms. The interested reader may consult e.g. [172] for further details. We then express the equations of motion as a dynamical system and provide the phase space analysis. This approach was previously considered in Refs. [185–187] in the context of inflation and in Refs. [183, 188, 189] in the context of quintessence. 2.1 Action and field equations We consider a canonical scalar field, ϕ, which plays the role of quintessence. It is non- minimally coupled to gravity and minimally coupled to the matter and radiation sectors. The action in the Jordan frame reads S = Z d4x√−g m2 P 2 f(ϕ)R −1 2gµν∂µϕ∂νϕ −V (ϕ)  + Sm [gµν, χm] , (2.1) where gµν is the metric tensor, Sm is the matter action, χm collectively represents the matter fields, and R is the Ricci scalar, which is obtained by contracting the metric with the Ricci tensor R = gµνRµν. The latter is obtained from the contraction of the Riemann tensor Rα µαν and can be written solely in terms of the connection as Rµν = ∂λΓλ µν −∂νΓλ µλ + Γλ λρΓρ µν −Γρ µλΓλ νρ. (2.2) In the metric formalism of gravity, the connection takes the Levi-Civita form given by Lµ αβ = 1 2gµλ (∂αgλβ + ∂βgλα −∂λgαβ) , (2.3) which depends only on the metric. However, a priori the connection and the metric need not be related. This is so in the Palatini formalism [101, 102], where the connection is assumed to be an independent gravitational field, denoted as ˆΓµ αβ. Consequently, in order to obtain the equations of motion, one must vary the action with respect to both gµν and ˆΓµ αβ. Hereinafter, we denote tensors constructed using the independent connection with a hat and tensors constructed using the Levi-Civita connection without a hat. Scripted quantities correspond to either metric or Palatini, as in the action (2.1). It is important to mention that in a theory with a pure Einstein–Hilbert (EH) action (and minimally coupled matter), the metric and Palatini variational formalisms are dynami- cally equivalent. However, once the action is extended, e.g. by a non-minimal coupling f(ϕ)R or by higher-curvature terms such as F(R), the two approaches yield different field equations and, hence, distinct cosmological dynamics. To understand why, one can take the Palatini action in the non-minimal coupling case, where R = gµν ˆRµν, and vary it with respect to ˆΓµ αβ, giving ˆ∇λ √−gfgµν  = 0 . (2.4) This means that ˆΓλ µν is compatible with hµν = fgµν. Therefore, ˆΓµ αβ = 1 2hµλ (∂αhλβ + ∂βhλα −∂λhαβ) = Lµ αβ + 1 2 h δµ β∂α log f + δµ α∂β log f −gαβ∂µ log f i . (2.5) – 3 – It is clear that for a minimally coupled scalar field f(ϕ) = 1, ˆΓµ αβ = Lµ αβ. However, for any other function of the field f(ϕ), the Ricci tensor ˆRµν will acquire additional terms with respect to Rµν, coming from the last bracket in Eq. (2.5) (e.g. see Eq. (2.17)). We take the non-minimal coupling function to be f(ϕ) = 1 + ξϕ2/m2 P, extensively studied in the context of inflation [103, 105–132, 134–141, 145, 172, 186, 190–197]. Indeed, even if a theory is minimally coupled at tree level, a non-minimal coupling of this form will be generated at the loop level [98, 100]. The function f(ϕ) rescales the effective Planck mass as M 2 eff(ϕ) ≡m2 pf(ϕ), so one must require f(ϕ) > 0 at all times. In the case of f(ϕ) = 1 and a fixed scalar field, the action reduces to Einstein gravity with the potential playing the role of a cosmological constant. The non-minimal coupling f(ϕ)R allows one to recast the theory by a conformal trans- formation to the Einstein frame, where the gravitational sector takes the EH form and the scalar field is canonical (after a field redefinition). The trade-off is that the matter sector is no longer minimally coupled to the metric, and the usual conservation law for the matter energy-momentum tensor [198] does not hold in its standard form. This is the issue faced by the authors of Ref. [183], where these new couplings are neglected as a simplifying assumption to their analysis in the Einstein frame. In the present work, we choose to work exclusively in the Jordan frame, making our treatment exact. Note that although the two frames are mathematically related, they are not physically equivalent unless one simultaneously adopts variable units in the Einstein frame. We further adopt an exponential potential for the scalar field, given by V = V0e−λϕ/mP , (2.6) where λ is a constant that controls the slope, with λ > 0, V0 ≥0. This potential is a mini- mal, theoretically motivated choice, which commonly appears in string theory and supergrav- ity [199–201] and has been extensively studied in different cosmological scenarios [97, 202– 212]. It provides a clean baseline for assessing the impact of the non-minimal coupling on the background expansion. Varying the action (2.1) with respect to the metric tensor gµν, we obtain the field equations fRµν −1 2fRgµν −(1 −δP ) (∇µ∇νf −gµν∇σ∇σf) = 1 m2 P h T (ϕ) µν + T (m,r) µν i , (2.7) where δP =  0 , metric 1 , Palatini (2.8) and we emphasize the unhatted ∇µ is the covariant derivative related to the Levi-Civita connection. In Eq. (2.7), the energy-momentum tensor of quintessence reads T (ϕ) µν = ∂µϕ ∂νϕ −1 2gµν(∂ϕ)2 −gµνV (ϕ) (2.9) and T (m,r) µν is the combined matter–radiation tensor, taken to be that of a perfect fluid T (m,r) µν = − 2 √−g δSm δgµν = (ρ + p)uµuν + pgµν . (2.10) – 4 – Using the metric Einstein tensor Gµ ν = Rµ ν −1 2Rgµ ν, Eq. (2.7) can be cast in the standard form Gµ ν = 1 m2 P h T µ(ϕ) ν + T µ(m,r) ν + T µ(eff) ν i , (2.11) where the effective energy-momentum tensor is defined as T µ(eff) ν =m2 P  (1 −f)Rµ ν + ∇µ∇νf + 1 2δµ ν (fR −R −2∇σ∇σf) +δP  −3 2 ∇µf∇νf f + 3 4δµ ν ∇σf∇σf f  , (2.12) Again, in the limit of general relativity f(ϕ) = 1, this tensor vanishes, and the metric and Palatini formalisms reduce to the same field equations. Next, we adopt the flat Friedmann–Lemaˆıtre–Robertson–Walker (FLRW) metric ds2 = gµνdxµdxν = −dt2 + a2(t)  dr2 + r2 dθ2 + sin2 θdϕ2 (2.13) where a(t) is the cosmic scale and t is the cosmic time. From the (0, 0) and (i, j) components of Eq. (2.11) we obtain the modified Friedmann and Raychaudhuri equations 3fH2 = ˙ϕ2/2 + V + ρm + ρr m2 P −3H ˙f −δP 3 ˙f2 4f , (2.14) −2f ˙H = ˙ϕ2 + ρm + 4ρr/3 m2 P + ¨f −H ˙f −δP 3 ˙f2 2f , (2.15) where H = ˙a/a is the Hubble parameter, with a dot denoting the derivative with respect to cosmic time. The modified Klein-Gordon equation in either formulation reads ¨ϕ + 3H ˙ϕ + V,ϕ = m2 P 2 f,ϕR, (2.16) where R = 6  ˙H + 2H2 + δP −3 ˙f2 2f2 + 3 ¨f f + 9H ˙f f ! . (2.17) Since matter and radiation are minimally coupled in (2.1), their stress tensors are sep- arately conserved [198] ˙ρm + 3Hρm = 0 , ˙ρr + 4Hρr = 0 . (2.18) We introduce the fractional energy densities of radiation, matter, and the scalar field Ωr = ρr 3m2 PfH2 , Ωm = ρm 3m2 PfH2 , Ωϕ = ˙ϕ2 6fH2 −f,ϕ ˙ϕ fH −δP f2 ,ϕ ˙ϕ2 4f2H2 + V (ϕ) 3fH2 (2.19) In the minimal coupling limit f(ϕ) = 1, these expressions reduce to their standard GR definitions. With a non-minimal coupling, caution is required in interpreting these fractions, because the coupling mixes contributions in such a way that they are not guaranteed to – 5 – remain ≤1 (or even strictly positive). Under the standard assumptions that radiation and matter have non-negative energy densities (ρr ≥0 , ρm ≥0) and f(ϕ) > 0, Ωr and Ωm remain positive definite. For Ωϕ, the first three terms of Eq. (2.19) can be viewed as a relative kinetic contribution, while the last term represents the relative potential part. If the potential is non-negative, the potential contribution is always non-negative, whereas the kinetic contribution can become negative in certain non-minimal regimes. It is useful to combine (2.14) and (2.15) to obtain an expression for the Hubble flow parameter, ˙H H2 = − 1 2fH2m2 P  ˙ϕ2 + ρm + 4 3ρr  − ¨f 2fH2 + ˙f 2fH + δP 3 4 ˙f fH !2 . (2.20) For later comparison with ΛCDM and to keep the background equations in their stan- dard GR form, it is convenient to define an effective dark sector with density and pressure ρeff = ρr f(ϕ) + ρm f(ϕ) + ρϕ,eff, (2.21) peff = 1 3 ρr f(ϕ) + pϕ,eff, (2.22) where we have defined the effective density and pressure of the field as ρϕ,eff = m2 P f(ϕ) " ˙ϕ2 2 + V (ϕ) −3H f,ϕ(ϕ) ˙ϕ −δP 3 4 f2 ,ϕ(ϕ) f(ϕ) ˙ϕ2 # , (2.23) and pϕ,eff = m2 P f(ϕ) " ˙ϕ2 2 + 2H f,ϕ(ϕ) ˙ϕ + f,ϕϕ(ϕ) ˙ϕ2 + f,ϕ(ϕ) ¨ϕ + δP 3 4 f2 ,ϕ(ϕ) f(ϕ) ˙ϕ2 −V (ϕ) # . (2.24) With these definitions the background equations take the GR form 3m2 PH2 = ρeff and 3m2 PH2 + 2m2 P ˙H = −peff, and the total equation of state, wtot ≡peff/ρeff, satisfies wtot = −1 −2 3 ˙H H2 = Ωr 3 + 1 3fH2 " ˙ϕ2 2 + 2Hf,ϕ ˙ϕ + f,ϕϕ ˙ϕ2 + f,ϕ ¨ϕ + δP 3 4 f2 ,ϕ f ˙ϕ2 −V (ϕ) # . (2.25) In the GR limit, the standard behaviors are recovered: during radiation domination weff = 1/3, during matter domination weff = 0, and under potential domination weff = −1. Cosmic acceleration requires weff < −1/3, while a phantom crossing weff < −1 leads to ˙H > 0. The de Sitter solution is characterized by weff = −1 and a constant Hubble parameter H = p Λ/3. A minimally coupled scalar cannot sustain a phantom crossing since its expansion history is restricted to −1 ≤weff ≤1, unless the Lagrangian contains a ghost term [213, 214]. In contrast, a non-minimal coupling can support phases with weff < −1 [215–217] without necessarily becoming unstable [93, 215, 218–222]. 2.2 Dynamical system analysis Next, we recast the background equations into an autonomous system, a suitable language for both the numerical analysis and understanding the dynamics. We study the model by – 6 – −15 −10 −5 0 N −1.0 −0.5 0.0 0.5 1.0 Ωm Ωr Ωφ wφ,eff wtot 100 101 102 103 104 105 106 z Figure 1: Time evolution of the density parameters of matter (blue), radiation (orange), and quintessence (green), as well as of the effective barotropic parameter of quintessence (red) and the Universe (purple), as a function of the elapsing number of e-folds N and redshift z, for the CMB+DESI+DESY5 best-fit parameters ξ = −1.41, λ = 1.95, Ωm = 0.3193, H0 = 66.70km/s/Mpc, and Ωb = 0.05049. locating the critical points of this system, i.e. points where the derivatives of the phase- space variables vanish. To proceed, we introduce the standard e-fold time N ≡ln a and the dimensionless parameters x1 = ˙ϕ √ 6HmP , x2 = √ V √ 3HmP , x3 = ρr 3H2m2 P , x4 = ϕ mP , λ = −mP V,ϕ V . (2.26) and we keep the matter density fraction Ωm ≡ ρm 3H2m2 P as an auxiliary variable. The Friedmann equation (2.14) gives a single algebraic constraint among (x1, x2, x3, x4, Ωm): Ωm = 1 + ξx2 4 −x2 1 −x2 2 −x3 + 2 √ 6ξx1x4 + 6δPξ2 x2 1x2 4 f , (2.27) which defines a three-dimensional phase surface embedded in R4. Physical trajectories must also satisfy f > 0. Using the background equations (2.14)–(2.25) one finds the closed system – 7 – for X = (x1, x2, x3, x4): dx1 dN = β √ 6 + ϵx1 dx2 dN = − √ 6 2 λx1x2 + ϵx2 dx3 dN = −4x3 + 2ϵx3 dx4 dN = √ 6x1 (2.28) where β ≡ ¨ϕ mPH2 = −3 √ 6x1 + 3λx2 2 + ξx4 R H2 = = 1 1 + 6ξ2x2 4 f (1 −δP) ( −3 √ 6x1 + 3λx2 2 + ξx4 " 3 −9 f (x2 1 −x2 2) −9x3 f + + 36ξx2 1 f (δP −1) + 6 √ 6ξx4x1 f (3δP −2) + 18δPξ2x2 1x2 4 f2 #) (2.29) and ϵ ≡− ˙H H2 = 3 2 + 3 2f x2 1 −x2 2  + x3 2f + 6ξx2 1 f + 2 √ 6ξx1x4 f −9δP ξ2x2 1x2 4 f2 + βξx4 f (2.30) From this, the effective barotropic parameter of the universe wtot can be read off from the expression ϵ = 3(1 + wtot)/2 as wtot = 1 A x2 1 −x2 2  + x3 3f + 4ξx2 1 f + 4 √ 6ξx1x4 3f −δP 6ξ2x2 1x2 4 f2 + 2βξx4 3f . (2.31) The effective barotropic parameter of the field reads wϕ,eff = pϕ,eff ρϕ,eff , (2.32) where pϕ,eff and ρϕ,eff are given by Eqs. (2.24) and (2.23), respectively. In Fig. 1 we present relevant cosmological parameters as a function of the number of e-folds and redshift, product of solving Eq. (2.28) numerically. The evolution is standard, following radiation-dominated (RD) and matter-dominated (MD) eras, until around z ≲10, when quintessence begins to dominate. Its barotropic parameter becomes phantom for a brief period, before the present time when wϕ,eff = −0.81. Solving the fixed-point conditions dx1 dN = dx2 dN = dx3 dN = dx4 dN = 0 , (2.33) together with the algebraic constraint, yields three physically distinct branches: (i) A de Sitter branch in which the scalar is frozen, matter and radiation vanish, and the expansion is characterized by weff = −1 with Ωϕ = 1. This branch splits into two algebraic solutions, denoted DE-a and DE-b. Both correspond to accelerated expansion; however, their exis- tence conditions differ: DE-a exists whenever λ2 ≤4ξ or for ξ < 0 and remains physical – 8 – C.P. (x∗ 1, x∗ 2, x∗ 3, x∗ 4) Existence Ωϕ weff Acceleration DE-a  0, 2 λ √2ξ + ∆, 0, −2ξ −∆ λξ  λ2 ≤4ξ or ξ < 0 1 −1 Yes DE-b  0, 2 λ √2ξ −∆, 0, −2ξ + ∆ λξ  λ2 ≤4ξ 1 −1 Yes M 0, 0, 0, 0  with Ωm = 1 always 0 0 No R 0, 0, 1, 0  always 0 1/3 No Table 1: Critical points for our model. Here ∆= p ξ(4ξ −λ2). Label The eigenvalues (λ1, λ2, λ3, λ4) Stability DE-a −4, 0, −3 2 + 3 2 q 1 −8 3∆, −3 2 −3 2 q 1 −8 3∆ Stable DE-b −4, 0, −3 2 + 3 2 q 1 + 8 3∆, −3 2 −3 2 q 1 + 8 3∆ Unstable M −1, 3 2, 1 4 −3 −√9 + 48 ξ  , 1 4 −3 + √9 + 48 ξ  Unstable R +1, −2, −5, 0 Unstable Table 2: Eigenvalues of the linearized system. Here ∆= p ξ(4ξ −λ2). with positive effective Planck mass (i.e. f(ϕ∗) > 0), whereas DE-b exists only if λ2 ≤4ξ and becomes unphysical for ξ < 0 because f(ϕ∗) < 0. (ii) A radiation point R, with Ωm = 0 and weff = 1/3. (iii) A matter point M, with Ωm = 1 and weff = 0. The explicit coordinates of these points in (x1, x2, x3, x4), together with their existence conditions, are summarized in Table 1. The relation dx4/dN = √ 6 x1 forces x1∗= 0 at any fixed point, so the kinetic (x1∗̸= 0, x2∗= 0) and scaling (x1∗̸= 0, x2∗̸= 0) solutions present in the minimal models [202] are absent here. The only physical fixed points in a flat FLRW universe are DE-a (and possibly DE-b), M, and R. Also, in our model, late-time acceleration is generic. The non-minimal coupling generates de Sitter fixed points with ϵ∗= 0 (hence weff = −1) independently of the steepness of the potential. This is in contrast to the minimal case, where late-time acceleration occurs only if λ2 < 2 [202]. To assess the fate of the fixed points, we linearize the autonomous system (2.28) around each solution, X →X∗+δX, and study the first–order system δX′ = J∗δX. Because all fixed points have x1∗= 0, every term in β and ϵ proportional to x1 or x2 1 vanishes at the point. As a result, the linear spectra are identical in the metric and Palatini formulations. The eigenvalues of the Jacobian J∗, listed in Table 2, determine the nature of each point: a fixed point is (linearly) stable if the real parts are all negative, unstable if at least one is positive, and a saddle if they have mixed signs. For the two de Sitter branches, we find one vanishing eigenvalue associated with a non-hyperbolic direction, but the remaining eigenvalues show that DE-a is stable on its center manifold and constitutes the unique late-time attractor of the system. The companion branch DE-b always possesses a positive eigenvalue and is therefore unstable. The standard matter point M is a saddle, while the radiation point R is unstable. Trajectories that originate in the radiation/matter region are repelled from R and M and flow towards the accelerating attractor DE-a, implying an asymptotic de Sitter expansion. – 9 – −1.0 −0.5 0.0 0.5 1.0 x1 0.00 0.25 0.50 0.75 1.00 x2 DEa Palatini formalism x3 = x∗ 3 x4 = x∗ 4 −1.0 −0.5 0.0 0.5 1.0 x1 0.00 0.25 0.50 0.75 1.00 x2 DEa Metric formalism x3 = x∗ 3 x4 = x∗ 4 Figure 2: Phase space slice in the (x1, x2) plane for the Palatini (left) and metric (right) formalisms, given the CMB+DESI+DESY5 best-fit parameters ξ = −1.41, λ = 1.95, Ωm = 0.3193, H0 = 66.70km/s/Mpc, and Ωb = 0.05049. The variables x3 and x4 are fixed to their DE-a (red point) attractor values. Both formalisms share the same fixed points but the overall dynamics differ. 3 Datasets and Methodology In this section, we describe the data, as well as the methodology, used to constrain our model. We use three datasets: the second data release (DR2) of the baryon acoustic os- cillations (BAO) distance measurements from the Dark Energy Spectroscopic Instrument (DESI) [11, 177], the cosmic microwave background (CMB) compressed likelihood [223] (see also Ref. [224]) obtained from the final Planck data release [5], and type Ia supernovae (SNIa) distance moduli measurements from the Dark Energy Survey Year 5 (DESY5) [178]. 3.1 CMB Data The full CMB likelihood encodes information about dark energy perturbations mainly via the integrated Sachs-Wolfe (ISW) effect [225] and gravitational lensing. However, our model reduces to standard ΛCDM for z ≳50 (see below) and for z < 50, previous results [226] suggest that the effect of the non-minimal coupling term ξϕ2 on the ISW time integrals is small. It has also been argued [39] that not only the full CMB data impose weak constraints on the dark energy parameters, but may bias parameter estimation through prior sensitivity. We therefore employ the more robust CMB compressed data, in the same vein as the DESI collaboration [11] (see Appendix A therein), and many other works [14, 15, 18, 21, 25, 36, 38, 39, 47, 52–54, 57, 59, 61, 62, 67, 77]. The CMB compressed data is comprised of the shift parameters R and la, together with ωb = Ωbh2 (h ≡H0/(100 km s−1 Mpc−1)). These quantities are given by R = q ΩmH2 0 DM(z∗) c , (3.1) and la = πDM(z∗) rs(z∗) , (3.2) – 10 – where z∗is the redshift of photon decoupling and Ωm = Ωcmd + Ωb, with Ωcmd and Ωb being the density parameters of cold dark matter and baryonic matter, respectively. To find z∗we use the fitting formula from Ref. [227], given by z∗= 1048 1 + 0.00124ω−0.738 b  (1 + g1ωg2 m ) , (3.3) where g1 = 0.0783ω−0.238 b / 1 + 39.5ω0.763 b  , g2 = 0.560/ 1 + 21.1ω1.81 b  , and ωm = Ωmh2. −15 −10 −5 0 N −1.0 −0.9 −0.8 −0.7 −0.6 wφ,eff 100 101 102 103 104 105 106 z -5.7 -4.3 -2.9 -1.5 -0.1 ξ 0.8 1.3 1.8 2.3 λ −3 −2 −1 0 N −1.0 −0.9 −0.8 −0.7 −0.6 wφ,eff 100 101 z -5.7 -4.3 -2.9 -1.5 -0.1 ξ 0.8 1.3 1.8 2.3 λ −15 −10 −5 0 N 10−24 10−19 10−14 10−9 10−4 101 Ωφ,eff 100 101 102 103 104 105 106 z -5.7 -4.3 -2.9 -1.5 -0.1 ξ 0.8 1.3 1.8 2.3 λ −3 −2 −1 0 N 0.0 0.2 0.4 0.6 Ωφ,eff 100 101 z -5.7 -4.3 -2.9 -1.5 -0.1 ξ 0.8 1.3 1.8 2.3 λ Figure 3: Upper Left: 100 realizations of the effective barotropic parameter of non-minimally coupled quintessence with an exponential potential for the entire integration range N = [−15, 0]. The values of ξ and λ are randomly drawn from the joint 95% credible region in (ξ, λ). The rest of the model parameters are fixed to x2(−15) = 3.00 × 10−11, H0 = 66.70 and Ωb = 0.05050. Upper Right: Zoomed-in version of the upper left panel focusing on N = [−3, 0]. Bottom left: 100 realizations of the effective density parameter for the same parameter values as the upper panels, for the entire integration range N = [−15, 0]. Bottom right: Zoomed-in version of the lower left panel focusing on N = [−3, 0]. Eq. (3.3) depends only on pre-recombination physics. Its use is justified since in our model the effects of quintessence and, therefore, of modified gravity, only become relevant at low redshift. To demonstrate this, we plot wϕ,eff and Ωϕ,eff for 100 random pairs of ξ and λ (taken from the joint (ξ, λ) 95% credible region) in Fig. 3. It is apparent that for z ≳50 (which is much smaller than z∗∼1100), the effect of the field reduces to that of a subdominant cosmological constant. Furthermore, since the field is initially frozen at zero (see below), – 11 – f = 1, and there are no modifications to the effective Planck mass until late times. Even then, the total field excursion remains sub-Planckian ∆ϕ/mP < 1, as can be seen from the bottom right panel of Fig. 7, where we plot the posterior distribution of x4 = ϕ/mP at present. Since the field is initially fixed at zero, indeed x4(z = 0) = ∆ϕ/mP. DM(z) is the transverse comoving distance, given by (for a nearly-flat FLRW universe) DM(z) = c H0 √Ωk sinh p Ωk Z z 0 dz′ h(z′)  |Ωk|≪1 ≃ c H0 Z z 0 dz′ h(z′), (3.4) where h(z) ≡H(z)/H0 is the reduced Hubble parameter. The comoving sound horizon, rs(z), is given by rs(z) = 1 H0 Z ∞ z cs(z′)dz′ h(z′) = c H0 Z ∞ z dz′ r 3  1 + 3Ωb 4Ωγ(1+z′)  h(z′) , (3.5) where Ωγ is the density parameter of radiation, fixed by the temperature of the CMB via Ωγh2 = (2.4729 ± 0.0002) × 10−5 [228]. Note that current bounds on the neutrino masses imply they become non-relativistic after decoupling, and so their contribution to the radiation density needs to be included in h(z) when calculating the comoving sound horizon. We employ a Gaussian prior on x = (R, la, ωb), with mean values given by ¯x = (1.74963, 301.80845, 0.02237), (3.6) and the covariance matrix given by C = 10−8 ×   1598.9554 17112.007 −36.311179 17112.007 811208.45 −494.79813 −36.311179 −494.79813 2.1242182  . (3.7) 3.2 BAO Data The BAO sample provided by DESI in their second data release1 includes measurements of galaxies, quasars, and Lyman-α forest, spanning redshifts in the range 0.295 ≤z ≤2.33. They determine the ratios DM(z)/rs(zd), DH(z)/rs(zd), and DV (z)/rs(zd), where DH(z) is the Hubble distance, given by DH(z) = c H0h(z), (3.8) DV (z) is the isotropic BAO distance, given by DV (z) = zD2 M(z)DH(z) 1/3 , (3.9) and zd is the redshift at the drag epoch. In order to be consistent with Ref. [177], we use the following expression for the comoving sound horizon at the drag epoch [229] rs(zd) = 147.05 Mpc  ωb 0.02236 −0.13  ωm 0.1432 −0.23 Neff 3.04 −0.1 , (3.10) where Neff is the effective number of relativistic neutrino species. Importantly, this equation assumes standard pre-recombination physics, making its use consistent with the fact that the field reduces to an effective cosmological constant for z ≳50 (see Fig. 3). In our numerical analysis, we utilize the value Neff = 3.044 from recent neutrino decoupling simulations [230– 233]. 1The data and covariance matrix can be found in the following public repository https://github.com/ CobayaSampler/bao_data/tree/master/desi_bao_dr2. – 12 – 3.3 SNIa Data The SNIa sample provided by DESY52 includes measurements of the distance modulus µ(z) for 1635 photometrically-classified supernovae in the redshift range 0.10 ≤z ≤1.13, as well as 194 low-redshift supernovae in the redshift range 0.024 ≤z ≤0.10. The distance modulus is given by µ(z) = 5 log10 DL(z) Mpc  + 25, (3.11) where DL(z) is the luminosity distance, related to the transverse comoving distance via DL(z) = (1 + z)DM(z). Since the absolute magnitude M of SNIa is fully degenerate with H0 (appearing in DL(z) through DM(z)), both parameters can be combined as M = M + 5 log10(c/H0/Mpc). In the computation of the likelihood, the nuisance parameter M is marginalized over (see Appendix A.1 of Ref. [234] for further details). 3.4 Dynamical System and Initial Conditions From the above, it is clear that the dynamical properties of our model are constrained by the data only via the function h(z). To obtain it, we solve the dynamical system in Eq. (2.28) numerically. We begin the integration deep in the RD era at N = −15, i.e., 15 e-folds before the present time, at N = 0. Along with Eq. (2.28), we simultaneously solve Eq. (2.30), which may be recast as d log H dN = −ϵ(x1, x2, x3, x4), (3.12) where ϵ(x1, x2, x3, x4) summarizes the complicated expression in the right-hand-side. This equation benefits from a re-scaling invariance given by H(N) →cH(N). Using it, for any initial condition Hini = H(−15), one may always normalize the solution by its value at present, c = 1/H(0), to trivially obtain h(N). Since Eq. (2.30) is simultaneously solved with Eq. (2.28), the obtained h(z) automatically inherits the non-trivial late-time dynamics of non-minimally coupled quintessence. We make sure the density parameter of radiation is fixed to Ωγh2 = x3(0)h2 = 2.4729× 10−5 by using a shooting algorithm. This effectively fixes the initial condition for the variable x3. Furthermore, in the RD era, the energy densities satisfy ρr ≫ρϕ,eff, ρm. From the Friedmann equation (2.14) together with Eq. (2.21), this implies that x3 ≃1 ≫x1, x2, x4, Ωm, which in turn implies that β ≪1. Therefore, expressing the Ricci scalar in terms of the dynamical system variables, we find R H2 = 3(1 −3weff) + δP A  18 √ 6ξx1x4 + 36ξx2 1 + 6ξβx4 −36ξ2x2 1x2 4 A  = = 3 A " 1 + x2 4 −3(x2 1 −x2 2) −x3 + 12(δP −1)ξx2 1 + 6 √ 6  δP −2 3  ξx1x4+ + 2(δP −1)ξβx4 + δP 6ξ2x2 1x2 4 A # ≪1. (3.13) Furthermore, since V ≪m2 PH2, V,ϕ = λ mP V and λ ∼O(1), we have V,ϕ ≪mPH2 and so the quintessence equation of motion (2.16) is reduced to ϕ′′ + (3 −ϵ)ϕ′ = 0, (3.14) 2The data, the covariance matrix, and the likelihood, can be found in the following public repository https://github.com/des-science/DES-SN5YR. – 13 – where primes denote derivatives with respect to the number of e-folds. Using that during RD ϵ = 2, the field evolution is found to be ϕ(N) = c1e−N + c2, (3.15) where c1 and c2 are integration constants. We conclude that the field quickly freezes to a constant value, indeed corresponding to the thawing quintessence regime. We may set the initial condition x1(−15) = 0 without loss of generality. Parameter Description Prior ξ Non-minimal coupling U[-10, 0] λ Slope of the exponential potential U[0, 5] x2(−15) Initial condition for x2 U[10−15, 10−10] H0 Hubble constant [km s−1 Mpc−1] U[60,80] Ωb Density parameter of baryonic matter U[0, 0.1] Table 3: Priors adopted for the cosmological parameters in the MCMC analysis. As for the initial condition on x2, we leave x2(−15) as a free parameter. Of course, it should be small enough so that the system starts near the RD fixed point. This is ensured by imposing appropriate priors. The value of x2(−15) is directly related to Ωm (see below). This leaves x4. Although in order to begin the integration near the RD fixed point one needs x4 ≪1, its value is not necessarily zero. However, we may note that in the minimally coupled case, x4 is degenerate with V0 (see Eq. (2.6)), which means that one may set x4(−15) = 0 and vary x2(−15) to explore the totality of parameter space. Turning on the non-minimal coupling ξ ̸= 0 breaks this degeneracy. Nevertheless, since x4 ≪1 is still a requirement, the sensitivity of the dynamics to its precise initial value is expected to be suppressed. We thus set x4(−15) = 0 and leave an examination of the dependence on this initial condition to future work. 3.5 Free Parameters and Posterior Sampling The parameter space of ξϕCDM is five-dimensional, Θ = {ξ, λ, x2(−15), H0, Ωb}. Those of ϕCDM and ΛCDM are nested subsets of Θ, obtained by taking ξ = 0 and ξ = λ = 0, respectively. To obtain the posterior distribution of the parameters of each model, we perform a Markov Chain Monte Carlo (MCMC) analysis by using the publicly available Python package emcee [235]. The combined likelihood L from all three observational datasets used in this work is given by −2 log L(Θ) ≡χ2 TOT(Θ) = χ2 CMB(Θ) + χ2 BAO(Θ) + χ2 SN(θ). (3.16) For CMB and BAO, we use multivariate Gaussian likelihoods, χ2(Θ) = (di −ti(Θ))T C−1 ij (dj −tj(Θ)) , (3.17) where di is the data vector, ti is the theoretical prediction vector for a given Θ, and Cij is the covariance matrix of each experiment. For the SNe likelihood, we follow DESY5, including a marginalization over M among other corrections. From Eq. (3.11) follows that SNIa cannot constrain Ωb (which mainly enters χ2 TOT via the comoving sound horizon) or – 14 – H0 (which is degenerate with M). Therefore, the SN likelihood depends only on the subset θ = {ξ, λ, x2(−15)}. We impose uniform priors for all parameters, listed in Table 3. The initial state of the chains is given by 128 walkers, following a Gaussian distribution centered on a preliminary best fit point, with a spread large enough to probe the entire prior space. Each walker is evolved for 8 × 104 iterations in ϕCDM and ΛCDM, and for 105 in ξϕCDM. We assess convergence by following Refs. [235, 236] to estimate the integrated autocorrelation time τp for each parameter. After discarding a burn-in of 5τ, where τ = max(τp), the number of remaining samples per walker N must satisfy N > 50τ. Params ΛCDM ϕCDM ξϕCDM ξ – – −2.50(−1.41)+1.7 −0.42 λ – 0.81(0.85)+0.18 −0.11 1.68(1.95)+0.36 −0.41 Ωm 0.3039(0.3039) ± 0.0038 0.3153(0.3160) ± 0.0056 0.3179(0.3193) ± 0.0059 H0 68.43(68.43) ± 0.30 67.00(66.91) ± 0.59 66.82(66.70) ± 0.58 Ωb 0.04791(0.04791)±0.00033 0.05016(0.05029)±0.00090 0.05036(0.05049)±0.00086 Table 4: CMB+BAO+SN 68% (1σ) credible intervals and best-fit values (in parentheses) for the parameters of ΛCDM, ϕCDM and ξϕCDM. H0 is given in units of km/s/Mpc. Having ensured convergence, we take the initial chains and conservatively discard an initial 30% burn-in length (exceeding 5τ in all models) to ensure the sampling is independent of the initial state. From the chains, we determine the matter density at present as a derived parameter. Posterior distributions are then plotted using the publicly available package GetDist [237]. The maximum a posteriori (MAP) parameters (which coincide with their best-fit values since we assume flat priors) for each model are found by using a Nelder-Mead simplex minimizer [238] starting from the maximum likelihood sample obtained from MCMC. Model Planck DESI DR2 DESY5 Total ∆χ2 ξϕCDM 0.46 -3.41 -11.70 -14.66 ϕCDM 2.00 -1.18 -10.61 -9.80 log B ξϕCDM - - - 5.52 ϕCDM - - - 2.45 Table 5: Change in χ2 (first two rows) and log B (last two rows) for ξϕCDM and ϕCDM relative to ΛCDM, evaluated at the CMB+BAO+SN best-fit parameters. The contribution from each dataset is shown in the corresponding column, with the total reported in the last column. A negative (positive) ∆χ2 corresponds to an improvement (worsening) in fit. A positive (negative) log B implies evidence in favor (against) of the model over ΛCDM. We assess the evidence for a given model M, defined as Z = Z dΘL(d|Θ, M)p(Θ|M), (3.18) where the likelihood L(d|Θ, M) is defined in Eq. (3.16) and p(Θ|M) is the prior, by using the publicly available package MCEvidence [239]. We do so by taking into account prior volumes and thinning the chains by τ/2 to reduce autocorrelation between the samples that are used for the calculation. We interpret results according to the Jeffrey’s scale [240, 241]: given two models M0 and M1, with evidences Z0 and Z1, respectively, the evidence of M1 over M0 – 15 – is determined inconclusive if log BM1,M0 < 1.0, where BM1,M0 ≡Z1/Z0 is the Bayes factor. It is weak for 1.0 < log BM1,M0 < 2.5, moderate for 2.5 < log BM1,M0 < 5.0 and strong for log BM1,M0 > 5.0. 10−1 100 z 0.98 0.99 1.00 1.01 1.02 1.03 1.04 (DM/rd)/(DΛ M/rΛ d ) ξφCDM φCDM DESI DR2 10−1 100 z 0.96 0.98 1.00 1.02 1.04 (DH/rd)/(DΛ H/rΛ d ) ξφCDM φCDM DESI DR2 10−1 100 z 0.98 0.99 1.00 1.01 1.02 (DV /rd)/(DΛ V /rΛ d ) ξφCDM φCDM DESI DR2 Figure 4: BAO distances DM(z)/zd (left) DH(z)/zd (center) and DV (z)/zd (right) predicted by ξϕCDM (full blue) and ϕCDM (dashed orange) relative to ΛCDM (horizontal dashed black). We use the CMB+BAO+SN best-fit parameter values for each model. Black dots represent the DESI DR2 residuals. In gray are DV (z)/zd values derived from the DM(z)/zd and DH(z)/zd actual data points. 4 Results In this section, we compare the predictions of our models with the data and analyze the constraints on the parameters obtained from sampling their posterior distributions. 4.1 Observational constraints In Table 4 we report the CMB+BAO+SN 68% confidence intervals for the parameters of ξϕCDM, ϕCDM, and ΛCDM, along with their best-fit values. The ϕCDM best-fit value of λ satisfies λ < √ 2. This agrees with the late-time dominant attractor condition [202], leading to an accelerating phase of expansion of the Universe. We emphasize that, in this attractor, minimally coupled quintessence with an exponential potential with λ > √ 2 cannot support acceleration. As our phase-space analysis reveals, this condition is relaxed with the addition of a non-minimal coupling. Indeed, the existence of the late-time acceleration attractor is guaranteed as long as ξ < 0, independently of the value λ takes. This is consistent with the reported λ = 1.95 best-fit value for ξϕCDM. In Table 5 we report the goodness of fit of ξϕCDM and ϕCDM relative to ΛCDM in terms of the difference in χ2, for each dataset as well as the total, given the best-fit parameters. We also provide the (logarithm of the) Bayes factor for both models. While we find that the ΛCDM baseline model is marginally in better agreement with the CMB data, the dynamical dark energy models discussed in our analysis significantly improve the fit to the BAO and SN data compared to the baseline in a joint analysis. We find the improvement in χ2 to be ∆χ2 = −14.66 for ξϕCDM and ∆χ2 = −9.8 for ϕCDM. Regarding log B, the data shows weak (albeit close to moderate) evidence for ϕCDM over ΛCDM and strong evidence for ξϕCDM over ΛCDM. We investigate these claims further by analyzing the predictions of the models. In Fig. 4 we plot the three BAO distances DM(z)/zd, DH(z)/zd, and DV (z)/zd predicted by ξϕCDM and ϕCDM relative to ΛCDM. For all three models, we take their corresponding – 16 – CMB+BAO+SN best-fit parameter values. We also show the DESI DR2 data relative to the best-fit ΛCDM model. Since we plot ratios of distances, deviations from ΛCDM correspond to deviations from unity. Note that DESI DR2 provides one data point only for DV (z)/zd, at z = 0.295. The other six points in the third panel are obtained by using Eq. (3.9) with the data for DM(z)/zd and DH(z)/zd at each redshift bin, with the corresponding propagated errors. These points are represented in gray to emphasize the difference with actual data. From the figure, it is clear that the largest improvement in fit is driven by the first two DH(z)/zd data points, at redshifts z = 0.510 and z = 0.706, respectively. In Fig. 5 we show 10−1 100 z −0.04 −0.02 0.00 0.02 0.04 0.06 µ −µΛ ξφCDM φCDM DESY5 Figure 5: Distance moduli µ(z) predicted by ξϕCDM (blue) and ϕCDM (orange) relative to ΛCDM (horizontal dashed black). We use the CMB+BAO+SN best-fit parameter values for each model. 39 out of the total 1829 redshift values in the DESY5 data are duplicated. To construct the blue and orange curves, a small shift ∆z = 5 × 10−8 was added to those points to allow interpolation. Black dots represent the DESY5 binned residuals and vertical dotted lines mark bin edges. The DESY5 data have been calibrated by subtracting the inverse- covariance weighted mean. the distance moduli predicted by ξϕCDM and ϕCDM, as well as the DESY5 data, all relative to ΛCDM. Again, for each model, we use their respective CMB+BAO+SN best-fit parameter values. For visualization purposes, we show the SN data binned in redshift, following a similar procedure to Ref. [177]. More specifically, given a residual vector ri = µdata i −µΛCDM i , where i ranges from 1 to the total number N of data points, a bin is a collection of indices ordered in ascending redshift b = {j, ..., j + k}, with j, k > 0 and j + k < N. (4.1) Given indices p, q in the bin p, q ∈b, we define ˆup = 1 and ˆCpq as the submatrix in the bin of the total covariance matrix Cij. With this, the residuals shown in the figure are given by ˆrb = ˆup ˆC−1 pq rq ˆup ˆC−1 pq ˆuq , ˆσb = 1 q ˆup ˆC−1 pq ˆuq . (4.2) – 17 – We also account for the uncalibrated absolute magnitude M of the SNe. To do so, we remove the weighted average from the data using the full covariance matrix. For each model, we also remove its corresponding offset relative to ΛCDM. This procedure ensures that all residuals (both data and models) with respect to ΛCDM have null weighted means uiC−1 ij rj = 0, where ui = 1 and i, j = 1, ..., N, (4.3) so that they all share the same zero point. Finally, we emphasize that binning ignores cross- bin correlations and mixes within-bin correlations, making it useful for visualization purposes only. Keeping this caveat in mind, the figure suggests that ξϕCDM and ϕCDM are able to fit both the low- and high-redshift SNe better than ΛCDM. Fig. 6 shows the posterior −8 −4 0 ξ 0.048 0.050 0.052 Ωb 66 68 H0 0.30 0.31 0.32 0.33 Ωm 1 2 λ 0.4 1.2 2.0 2.8 λ 0.30 0.31 0.32 0.33 Ωm 65 66 67 68 69 H0 0.048 0.050 0.052 Ωb ξφCDM φCDM ΛCDM Figure 6: Parameter posteriors for ξϕCDM (blue), ϕCDM (orange), and ΛCDM (green) using CMB+DESI+DESY5. The darker (lighter) shaded regions represent the 68% (95%) credible intervals. distributions for the parameters of all three models. Indeed, since ϕCDM corresponds to ξ = 0 and ΛCDM to ξ = λ = 0 they can share the same figure. All posterior distributions lie within the prior range, indicating that our choice of priors does not drive the results. It is clear that the data prefers a non-zero value for the non-minimal coupling. This is one of the main results of our paper. Furthermore, at the start of this section we commented on the ξϕCDM and ϕCDM best-fit values of λ satisfying their respective late-time acceleration attractor conditions. The same can be found in the ξϕCDM posterior distributions, where most of the 2σ confidence interval of λ lies at λ > √ 2. In order to be more precise, we utilize – 18 – −10 −5 0 ξ 0.0 0.2 0.4 0.6 0.8 1.0 p(ξ|d) ξφCDM 68% 95% 98.8% 0 1 2 3 λ 0.0 0.2 0.4 0.6 0.8 1.0 p(λ|d) ξφCDM 68% 95% 99.7% 99.98% 0.0 0.5 1.0 1.5 λ 0.0 0.2 0.4 0.6 0.8 1.0 p(λ|d) φCDM 68% 95% 99.7% 99.94% 0.0 0.2 0.4 x4(z = 0) 0.0 0.2 0.4 0.6 0.8 1.0 p(x4(z = 0)|d) ξφCDM 68% 95% 99.7% Figure 7: Posterior distribution of ξ (top left), λ in ξϕCDM (top right), λ in ϕCDM (bottom left), and the total field excursion x4(z = 0) = ∆ϕ mP in ξϕCDM (bottom right) using CMB+DESI+DESY5, with the 68% (blue), 95% (orange), and 99.7% (green) credible intervals. For p(ξ|d) we show the largest credible interval such that ξ < 0 in yellow. For p(λ|d), both in ξϕCDM and ϕCDM, we show the largest credible interval such that λ > 0 in red. From p(x4(z = 0)|d) we see that the total field excursion remains sub-Planckian. the Kernel Density Estimate (KDE) from GetDist to obtain the different credible intervals at which ξ and λ are non-zero, i.e., we find the significance at which dark energy is dynamical and non-minimally coupled. The results are shown in Fig. 7. For the non-minimal coupling, we find that ξ < 0 at 98.8% C.L., making it larger than 2σ but not quite 3σ. Regarding the slope of the exponential, we find that λ > 0 at 99.98% and 99.94%, for ξϕCDM and ϕCDM, respectively. This means that in both models λ is non-zero at more than 3σ. Finally, we plot the posterior distribution of the effective barotropic parameter of the field, for both ξϕCDM and ϕCDM, as a function of redshift in Fig. 8. Functional posterior distributions were generated using the publicly available Python package fgivenx [242], utilizing the same samples as those in the MCEvidence analysis. For the non-minimal coupling case, we find that the ΛCDM line w = −1 lies outside of the 2σ confidence band, indicating a preference for a phantom crossing at ≲3σ C.L.. As for the minimal coupling case, the data – 19 – prefers a barotropic parameter w ̸= −1 at more than 2σ but less than 3σ. However, above this confidence level, the data is unable to distinguish between dynamical dark energy and a cosmological constant. Even though we found that strictly speaking λ > 0 at more than 3σ, the time evolution of wϕ also depends on the field dynamics. Indeed, our numerical analysis reveals that to have wϕ,eff(z = 0) > −0.99 one needs λ > 0.25, which lies in the 98% C.L., consistent with Fig. 8. However, a smaller value like λ = 0.08 gives wϕ,eff(z = 0) = −0.9992, extremely close to ΛCDM. 0 1 2 3 4 5 z −1.0 −0.9 −0.8 −0.7 wφ,eff(z) ξφCDM Best fit 1σ 2σ 3σ 0 1 2 3 4 5 z −0.95 −0.90 −0.85 −0.80 wφ(z) φCDM Best fit 1σ 2σ 3σ Figure 8: Posterior distributions of the effective barotropic parameter of quintessence as a function of redshift for ξϕCDM (left) and ϕCDM (right), given the posterior distributions of the model parameters. We show the 68%, 95%, and 99.7% confidence intervals in progres- sively lighter shades of blue and the best-fit barotropic parameter in red. 4.2 Palatini vs Metric In this section, we assess the question of whether the data has anything to say about the degrees of freedom of the theory of gravity. To do so, we repeat the entire MCMC analysis pipeline in the metric formalism, labeled ˜ξϕCDM for convenience. Params Metric ˜ξϕCDM Palatini ξϕCDM ξ −3.68(−2.05)+3.1 −0.97 −2.50(−1.41)+1.7 −0.42 λ 1.72(1.61)+0.33 −0.47 1.68(1.95)+0.36 −0.41 Ωm 0.3163(0.3174) ± 0.0056 0.3179(0.3193) ± 0.0059 H0 66.95(66.85) ± 0.56 66.82(66.70) ± 0.58 Ωb 0.05019(0.05029)±0.00084 0.05036(0.05049)±0.00086 Table 6: CMB+BAO+SN 68% (1σ) credible intervals and best-fit values (in parentheses) for the parameters in the metric ˜ξϕCDM and Palatini ξϕCDM formalisms. – 20 – Model Planck DESI DR2 DESY5 Total ∆χ2 ξϕCDM 0.46 -3.41 -11.70 -14.66 ˜ξϕCDM 0.83 -2.96 -10.82 -12.95 log B ξϕCDM - - - 5.52 ˜ξϕCDM - - - 4.78 Table 7: Change in χ2 (first two rows) and log B (last two rows) for the metric ˜ξϕCDM and Palatini ξϕCDM formalisms, relative to ΛCDM, evaluated at the CMB+BAO+SN best- fit parameters. The contribution from each dataset is shown in the corresponding column, with the total reported in the last column. A negative (positive) ∆χ2 corresponds to an improvement (worsening) in fit. A positive (negative) log B implies evidence in favor (against) of the model over ΛCDM. −10 −5 0 ξ 0.048 0.050 0.052 Ωb 65 66 67 68 H0 0.30 0.31 0.32 0.33 Ωm 1 2 3 λ 1 2 3 λ 0.30 0.31 0.32 0.33 Ωm 65 66 67 68 H0 0.048 0.050 0.052 Ωb Metric ˆξφCDM Palatini ξφCDM Figure 9: Parameter posteriors for in the Palatini ξϕCDM (blue) and metric ˜ξϕCDM (red) formalisms using CMB+DESI+DESY5. The darker (lighter) shaded regions represent the 68% (95%) credible intervals. In Table 6 we report the CMB+BAO+SN 68% confidence intervals for the parameters of ˜ξϕCDM along with its best-fit values. For the convenience of the reader, we also repeat the ξϕCDM values. Just like in the Palatini formalism, the late-time accelerating attractor exists for all values of λ as long as ξ < 0, something that is reflected in the best-fit value and confidence interval of ξ. In Table 7 we report the goodness of fit of ˜ξϕCDM relative to – 21 – 0 1 2 3 4 5 z −1.05 −1.00 −0.95 −0.90 −0.85 −0.80 wφ,eff(z) Metric ˜ξφCDM Best fit 1σ 2σ 3σ Figure 10: Posterior distributions of the effective barotropic parameter of quintessence as a function of redshift for metric ˜ξϕCMD, given the posterior distributions of the model parameters. We show the 68%, 95%, and 99.7% confidence intervals in progressively lighter shades of blue and the best-fit barotropic parameter in red. ΛCDM in terms of the difference in χ2, for each dataset as well as the total, given the best-fit parameters. We also report the Bayes factor. We again provide the Palatini formalism values for convenience. The fit of ˜ξϕCDM to all data sets is slightly worse than that of ξϕCDM, although the difference between formalisms is not large, amounting to a total ∆χ2 = −1.17 in favor of the Palatini formalism. This is reflected in the Bayes factor, obtaining moderate evidence of ˜ξϕCDM over ΛCDM, albeit close to strong, according to the Jeffrey’s scale. Lastly, we plot the posterior distribution of the effective barotropic parameter of the field in Fig. 10. As in the Palatini formalism, we find a preference for a phantom crossing at ≲3σ C.L., although the significance seems slightly smaller. Furthermore, the 1σ confidence band covers a lower value for wϕ,eff at present z = 0. 5 Conclusions We have explored the role of a scalar field, non-minimally coupled to gravity, in describing a range of cosmological observations and tested the importance of the non-minimal coupling. Our main focus is on the Palatini formalism of gravity, where the connection is taken to be an independent gravitational field. This is contrary to the metric formalism, with the connection being fixed to its standard Levi-Civita form, leading to different dynamics. Among other modifications, for an FLRW metric, the Palatini Friedmann and Raychaudhuri equations, as well as the Klein-Gordon equation, feature additional terms with respect to their metric counterparts. This is also apparent by writing the field equations as a closed dynamical system, after choosing the quintessence potential to be an exponential. Most importantly, the non-minimal coupling (in both formalisms) allows for the barotropic parameter of the field to cross the phantom divide and become smaller than −1 in the redshift range 0.5 ≲z ≲4. This is, however, a transitory phase, as our phase space analysis reveals a – 22 – late-time de Sitter attractor, the existence of which is guaranteed as long as the non-minimal coupling constant is negative. Both formalisms share the same coordinates in phase space for this attractor, although, again, the details of the dynamics are different. In this work, we have for the first time analyzed a model of Palatini non-minimally coupled quintessence in the light of state-of-the-art cosmological data, and tested whether the data has a preference for the formalism of gravity. We compare the theory against the most recent datasets from three experiments that probe observables in the Universe at different epochs. In particular, we use the summary statistics from the Cosmic Microwave Background observations, obtained from the final Planck data release, distance moduli from the Dark Energy Survey Year 5, and the second data release of the geometrical measurements from the Dark Energy Spectroscopic Instrument. Our analysis also includes the standard ΛCDM model of cosmology, minimally coupled quintessence, and non-minimally coupled quintessence in the metric formalism. Quantifying the goodness of fit with the difference in χ2, we find an improvement of nearly 15 when compared to ΛCDM. The breakdown of χ2 indicates the improvement is mainly driven by the SNe and BAO data, while the CMB is addressed similarly by both models, non-minimally coupled or not, since the scalar field evolution at early times ensures ΛCDM behavior. The Bayes factor with respect to ΛCDM is 5.52, providing strong evidence for the new model, according to the Jeffrey’s scale. In the metric formalism, the improvement in fit (and so the Bayes factor) with respect to ΛCDM from the non-minimal coupling is reduced marginally compared to its Palatini counterpart. In two steps, the improvement can be explained as follows: firstly, we find that λ > 0 at 99.94% C.L. for a minimally coupled scalar field. This means that the three datasets, when combined, prefer an evolving equation-of-state for dark energy at > 3σ. Furthermore, for the non-minimally coupled field, we find that ξ < 0 at 98.8% C.L., i.e., a 2-3σ preference for the coupling term. Secondly, from a qualitative point of view, the observations prefer a phantom crossing in the barotropic parameter of dark energy between z = 0.5 −4 and a steep rise to w > −1 at z < 0.5. This behavior changes impose a crossing in the distance modulus with respect to the ΛCDM model and a dip in the comoving and Hubble distance near z = 0.4 that improves the fit to both datasets. Our results demonstrate, for the first time, that a non-minimally coupled scalar field in the Palatini formalism significantly improves the fit to the CMB, BAO, and SNe data in a joint analysis, when compared to ΛCDM. The improvement rules out any statistical uncertainties and leaves room only for the systematics in the SNe or BAO measurements as an example that counters the beyond standard model physics. Our results also demonstrate a marginal improvement with respect to the analogous theory in the metric formalism. The evidence for a non-minimal coupling also raises challenges for the future. Indeed, its presence leads to a time evolution in the effective gravitational constant and to fifth forces [243], both tightly constrained by experiments [244, 245]. Regarding the first, a time- varying gravitational constant changes the Poisson equation for the gravitational potential, thereby affecting the growth of structure. As for the latter, the Parametrized Post-Newtonian parameters are strongly bounded by Solar System experiments, such as the Cassini data [246]. Although we find a sub-Planckian field displacement in our model, these issues should be addressed in detail. We leave that for future work. – 23 – Acknowledgements We thank Kushal Lodha for insightful discussions. SSL and DKH would like to thank the Indo-French Centre for the Promotion of Advanced Research (IFCPAR/CEFIPRA) for sup- port of the proposal 6704-4 titled ‘Testing flavors of the early universe beyond vanilla models with cosmological observations’ under the Collaborative Scientific Research Programme. AK was supported by the Estonian Research Council grants PSG761, RVTT3, RVTT7 and by the Center of Excellence program TK202. Computational portions of this work were car- ried out on the Kamet high performance computing cluster at the Institute of Mathematical Sciences (IMSc), Chennai, maintained and supported by the High-Performance Computing Center of IMSc. References [1] A. A. Starobinsky, A New Type of Isotropic Cosmological Models Without Singularity, Phys. Lett. B 91 (1980) 99–102. [2] A. H. Guth, The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems, Phys. Rev. D 23 (1981) 347–356. [3] A. D. Linde, A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems, Phys. Lett. B 108 (1982) 389–393. [4] A. Albrecht and P. J. Steinhardt, Cosmology for Grand Unified Theories with Radiatively Induced Symmetry Breaking, Phys. Rev. Lett. 48 (1982) 1220–1223. [5] Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [arXiv:1807.06209]. [Erratum: Astron.Astrophys. 652, C4 (2021)]. [6] A. G. Riess et al., A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km/s//Mpc Uncertainty from the Hubble Space Telescope and the SH0ES Team, Astrophys. J. Lett. 934 (2022), no. 1 L7, [arXiv:2112.04510]. [7] M. Kamionkowski and A. G. Riess, The Hubble Tension and Early Dark Energy, Ann. Rev. Nucl. Part. Sci. 73 (2023) 153–180, [arXiv:2211.04492]. [8] J. L. Bernal, L. Verde, and A. G. Riess, The trouble with H0, JCAP 10 (2016) 019, [arXiv:1607.05617]. [9] E. Abdalla et al., Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies, JHEAp 34 (2022) 49–211, [arXiv:2203.06142]. [10] DESI Collaboration, A. G. Adame et al., DESI 2024 VI: cosmological constraints from the measurements of baryon acoustic oscillations, JCAP 02 (2025) 021, [arXiv:2404.03002]. [11] DESI Collaboration, M. Abdul Karim et al., DESI DR2 results. II. Measurements of baryon acoustic oscillations and cosmological constraints, Phys. Rev. D 112 (2025), no. 8 083515, [arXiv:2503.14738]. [12] J. Khoury, M.-X. Lin, and M. Trodden, Apparent w < −1 and a Lower S8 from Dark Axion and Dark Baryons Interactions, arXiv:2503.16415. [13] J. Pan and G. Ye, Non-minimally coupled gravity constraints from DESI DR2 data, arXiv:2503.19898. [14] S. Nesseris, Y. Akrami, and G. D. Starkman, To CPL, or not to CPL? What we have not learned about the dark energy equation of state, arXiv:2503.22529. – 24 – [15] S. Pan, S. Paul, E. N. Saridakis, and W. Yang, Interacting dark energy after DESI DR2: a challenge for ΛCDM paradigm?, arXiv:2504.00994. [16] D. A. Kessler, L. A. Escamilla, S. Pan, and E. Di Valentino, One-parameter dynamical dark energy: Hints for oscillations, arXiv:2504.00776. [17] Y. Tiwari, U. Upadhyay, and R. K. Jain, Exploring cosmological imprints of phantom crossing with dynamical dark energy in Horndeski gravity, Phys. Rev. D 111 (2025), no. 4 043530, [arXiv:2412.00931]. [18] M. Berbig, Kick it like DESI: PNGB quintessence with a dynamically generated initial velocity, JCAP 03 (2025) 015, [arXiv:2412.07418]. [19] D. Shlivko, P. J. Steinhardt, and C. L. Steinhardt, Optimal parameterizations for observational constraints on thawing dark energy, arXiv:2504.02028. [20] D. Benisty, The Scale for the Expansion of the Universe: From Local Structures to Cosmology, arXiv:2504.03009. [21] Y. Akrami, G. Alestas, and S. Nesseris, Has DESI detected exponential quintessence?, arXiv:2504.04226. [22] W. J. Wolf, C. Garc´ıa-Garc´ıa, T. Anton, and P. G. Ferreira, Assessing Cosmological Evidence for Nonminimal Coupling, Phys. Rev. Lett. 135 (2025), no. 8 081001, [arXiv:2504.07679]. [23] S. H. Mirpoorian, K. Jedamzik, and L. Pogosian, Is Dynamical Dark Energy Necessary? DESI BAO and Modified Recombination, arXiv:2504.15274. [24] D. Wang and D. Mota, Did DESI DR2 truly reveal dynamical dark energy?, arXiv:2504.15222. [25] B. R. Dinda and R. Maartens, Physical vs phantom dark energy after DESI: thawing quintessence in a curved background, Mon. Not. Roy. Astron. Soc. 542 (2025) L31–L35, [arXiv:2504.15190]. [26] D. Wang, Questioning cosmic acceleration with desi: The big stall of the universe, arXiv:2504.15635. [27] M. Cortˆes and A. R. Liddle, On desi’s dr2 exclusion of λcdm, arXiv:2504.15336. [28] R. de Souza, G. Rodrigues, and J. Alcaniz, Thawing quintessence and transient cosmic acceleration in light of DESI, arXiv:2504.16337. [29] S. Afroz and S. Mukherjee, Hint towards inconsistency between BAO and Supernovae Dataset: The Evidence of Redshift Evolving Dark Energy from DESI DR2 is Absent, arXiv:2504.16868. [30] C. Garcia-Quintero et al., Cosmological implications of DESI DR2 BAO measurements in light of the latest ACT DR6 CMB data, arXiv:2504.18464. [31] M. Scherer, M. A. Sabogal, R. C. Nunes, and A. De Felice, Challenging the ΛCDM model: 5σ evidence for a dynamical dark energy late-time transition, Phys. Rev. D 112 (2025), no. 4 043513, [arXiv:2504.20664]. [32] S.-F. Chen and M. Zaldarriaga, It’s all Ok: curvature in light of BAO from DESI DR2, JCAP 08 (2025) 014, [arXiv:2505.00659]. [33] G. Ye and S.-J. Lin, On the tension between DESI DR2 BAO and CMB, arXiv:2505.02207. [34] X. Chen and A. Loeb, Evolving Dark Energy or Evolving Dark Matter?, arXiv:2505.02645. [35] J. Smirnov, Dynamical Dark Energy Emerges from Massive Gravity, arXiv:2505.03870. [36] L. Giani, R. Von Marttens, and O. F. Piattella, The matter with(in) CPL, arXiv:2505.08467. [37] D. Wang, Quintessence Dark Matter, arXiv:2505.08154. – 25 – [38] S. Hussain, S. Arora, A. Wang, and B. Rose, Probing the Dynamics of Gaussian Dark Energy Equation of State Using DESI BAO, arXiv:2505.09913. [39] Y. Wang and K. Freese, Model-Independent Dark Energy Measurements from DESI DR2 and Planck 2015 Data, arXiv:2505.17415. [40] S. Lee, Probing Time-Varying Dark Energy with DESI: The Crucial Role of Precision Matter Density (Ωm0) Measurements, arXiv:2505.19052. [41] E. O. Colg´ain, S. Pourojaghi, and M. M. Sheikh-Jabbari, On the Pipeline Dependence of DESI Dynamical Dark Energy, arXiv:2505.19029. [42] Z. Bayat and M. P. Hertzberg, Examining quintessence models with DESI data, JCAP 08 (2025) 065, [arXiv:2505.18937]. [43] R. D’Agostino and F. Bajardi, Teleparallel dark energy in a nonflat universe, Phys. Rev. D 111 (2025), no. 10 104076, [arXiv:2505.21359]. [44] M. A. Sabogal and R. C. Nunes, Robust evidence for dynamical dark energy from DESI galaxy-CMB lensing cross-correlation and geometric probes, JCAP 09 (2025) 084, [arXiv:2505.24465]. [45] Y. Myrzakulov, S. Hussain, and M. Shahalam, Phase space and Data analyses of a non-minimally coupled scalar field system with decaying dark energy model, arXiv:2506.11755. [46] J. M. Cline and V. Muralidharan, Simple quintessence models in light of DESI-BAO observations, Phys. Rev. D 112 (2025), no. 6 063539, [arXiv:2506.13047]. [47] R. E. Keeley, A. Shafieloo, and W. L. Matthewson, Could We Be Fooled about Phantom Crossing?, arXiv:2506.15091. [48] L. A. Anchordoqui, I. Antoniadis, N. Cribiori, A. Hasar, D. Lust, J. Masias, and M. Scalisi, Bulk/boundary modular quintessence and DESI, JHEP 09 (2025) 128, [arXiv:2506.02731]. [49] S. Lee, Unveiling the Pitfalls of CPL Parametrization at High Redshifts: A Critical Assessment of the ω0ωaCDM Model with DESI DR2 BAO Data, arXiv:2506.18230. [50] S. Lee, The Impact of Ωm0 Prior Bias on Cosmological Parameter Estimation: Reconciling DESI DR2 BAO and Pantheon+ SNe Data Combination Results, arXiv:2506.16022. [51] E. ¨Oz¨ulker, E. Di Valentino, and W. Giar`e, Dark Energy Crosses the Line: Quantifying and Testing the Evidence for Phantom Crossing, arXiv:2506.19053. [52] I. D. Gialamas, G. H¨utsi, M. Raidal, J. Urrutia, M. Vasar, and H. Veerm¨ae, Quintessence and phantoms in light of DESI 2025, Phys. Rev. D 112 (2025), no. 6 063551, [arXiv:2506.21542]. [53] J.-X. Li and S. Wang, Reconstructing dark energy with model independent methods after DESI DR2 BAO, arXiv:2506.22953. [54] S. Dhawan and E. M¨ortsell, Implications for dark energy of cosmic transparency in light of DESI data, arXiv:2506.22599. [55] S. Lee, Constraining ΛCDM, ωCDM, and ω0ωaCDM models with DESI DR2 BAO: Redshift-Resolved Diagnostics and the Role of rd, arXiv:2507.01380. [56] T. Liu, X. Li, T. Xu, M. Biesiada, and J. Wang, Torsion cosmology in the light of DESI, supernovae and CMB observational constraints, arXiv:2507.04265. [57] M. H¨og˚as and E. M¨ortsell, Bimetric gravity improves the fit to DESI BAO and eases the Hubble tension, arXiv:2507.03743. [58] D.-C. Qiang, J.-Y. Jia, and H. Wei, New Insights into Dark Energy from DESI DR2 with CMB and SNIa, arXiv:2507.09981. – 26 – [59] S. S. Mishra, W. L. Matthewson, V. Sahni, A. Shafieloo, and Y. Shtanov, Braneworld Dark Energy in light of DESI DR2, arXiv:2507.07193. [60] S. Bhattacharjee, S. Halder, J. de Haro, S. Pan, and E. N. Saridakis, Accelerating Universe without dark energy: matter creation after DESI DR2, arXiv:2507.15575. [61] M. Braglia, X. Chen, and A. Loeb, Exotic Dark Matter and the DESI Anomaly, arXiv:2507.13925. [62] S. Goldstein, M. Celoria, and F. Schmidt, Monodromic Dark Energy and DESI, arXiv:2507.16970. [63] H. Chaudhary, S. Capozziello, V. K. Sharma, and G. Mustafa, Does DESI DR2 challenge ΛCDM paradigm ?, arXiv:2507.21607. [64] J. Wang, H. Yu, and P. Wu, Revisiting cosmic acceleration with DESI BAO, Eur. Phys. J. C 85 (2025), no. 8 853, [arXiv:2507.22575]. [65] M. Ishak and L. Medina-Varela, Is this the fall of the ΛCDM throne? Evidence for dynamical dark energy rising from combinations of different types of datasets, arXiv:2507.22856. [66] J. A. L. Torres and A. de la Macorra, Bound dark energy: Particle origin of dark energy with DESI BAO and DES supernova data, arXiv:2507.19619. [67] A. G´omez-Valent and A. Gonz´alez-Fuentes, Effective Phantom Divide Crossing with Standard and Negative Quintessence, arXiv:2508.00621. [68] J.-Q. Wang, R.-G. Cai, Z.-K. Guo, and S.-J. Wang, Resolving the Planck-DESI tension by non-minimally coupled quintessence, arXiv:2508.01759. [69] R. Chen, J. M. Cline, V. Muralidharan, and B. Salewicz, Quintessential dark energy crossing the phantom divide, arXiv:2508.19101. [70] L. W. K. Goh and A. N. Taylor, Phantom Crossing with Quintom Models, arXiv:2509.12335. [71] W. J. Wolf, P. G. Ferreira, and C. Garc´ıa-Garc´ıa, Cosmological constraints on Galileon dark energy with broken shift symmetry, arXiv:2509.17586. [72] G. G. Luciano and A. Paliathanasis, Late-Time Cosmological Constraints on Kaniadakis Holographic Dark Energy, arXiv:2509.17527. [73] H. Chaudhary, S. Capozziello, S. Praharaj, S. K. J. Pacif, and G. Mustafa, Is the ΛCDM Model in Crisis?, arXiv:2509.17124. [74] S. Roy Choudhury, T. Okumura, and K. Umetsu, Cosmological constraints on non-phantom dynamical dark energy with DESI Data Release 2 Baryon Acoustic Oscillations: A 3σ+ lensing anomaly, arXiv:2509.26144. [75] C.-G. Park and B. Ratra, Updated observational constraints on ϕCDM dynamical dark energy cosmological models, arXiv:2509.25812. [76] M. Artola, I. Ayuso, R. Lazkoz, and V. Salzano, Is CPL dark energy a mirage?, arXiv:2510.04191. [77] S. L. Guedezounme, B. R. Dinda, and R. Maartens, Phantom crossing or dark interaction?, arXiv:2507.18274. [78] H. Chaudhary, V. K. Sharma, S. Capozziello, and G. Mustafa, Probing departures from ΛCDM by late-time datasets, arXiv:2510.08339. [79] T.-N. Li, G.-H. Du, Y.-H. Li, Y. Li, J.-L. Ling, J.-F. Zhang, and X. Zhang, Updated constraints on interacting dark energy: A comprehensive analysis using multiple CMB probes, DESI DR2, and supernovae observations, arXiv:2510.11363. [80] M. Rezaei, S. Pan, W. Yang, and D. F. Mota, Is Dark Energy Changing? Probing the Universe’s Expansion with present and future astronomical probes, arXiv:2510.09766. – 27 – [81] Y.-M. Zhang, T.-N. Li, G.-H. Du, S.-H. Zhou, L.-Y. Gao, J.-F. Zhang, and X. Zhang, Alleviating the H0 tension through new interacting dark energy model in light of DESI DR2, arXiv:2510.12627. [82] M. W. Toomey, G. Montefalcone, E. McDonough, and K. Freese, How Theory-Informed Priors Affect DESI Evidence for Evolving Dark Energy, arXiv:2509.13318. [83] Y.-H. Yao, Y.-H. Shen, T.-N. Li, G.-H. Du, and Y. Gong, Examining a new form of non-standard dark matter using DESI DR2 data, arXiv:2510.13436. [84] F. Englert and R. Brout, Broken Symmetry and the Mass of Gauge Vector Mesons, Phys. Rev. Lett. 13 (1964) 321–323. [85] P. W. Higgs, Broken symmetries, massless particles and gauge fields, Phys. Lett. 12 (1964) 132–133. [86] J. Martin, C. Ringeval, and V. Vennin, Encyclopædia Inflationaris: Opiparous Edition, Phys. Dark Univ. 5-6 (2014) 75–235, [arXiv:1303.3787]. [87] W. Hu, R. Barkana, and A. Gruzinov, Cold and fuzzy dark matter, Phys. Rev. Lett. 85 (2000) 1158–1161, [astro-ph/0003365]. [88] J. M. Cline, K. Kainulainen, P. Scott, and C. Weniger, Update on scalar singlet dark matter, Phys. Rev. D 88 (2013) 055025, [arXiv:1306.4710]. [Erratum: Phys.Rev.D 92, 039906 (2015)]. [89] D. J. E. Marsh, Axion Cosmology, Phys. Rept. 643 (2016) 1–79, [arXiv:1510.07633]. [90] P. Svrcek and E. Witten, Axions In String Theory, JHEP 06 (2006) 051, [hep-th/0605206]. [91] M. Cicoli, M. Goodsell, and A. Ringwald, The type IIB string axiverse and its low-energy phenomenology, JHEP 10 (2012) 146, [arXiv:1206.0819]. [92] A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper, and J. March-Russell, String Axiverse, Phys. Rev. D 81 (2010) 123530, [arXiv:0905.4720]. [93] V. Faraoni, Superquintessence, Int. J. Mod. Phys. D 11 (2002) 471–482, [astro-ph/0110067]. [94] A. De Felice and S. Tsujikawa, f(R) theories, Living Rev. Rel. 13 (2010) 3, [arXiv:1002.4928]. [95] B. Ratra and P. J. E. Peebles, Cosmological Consequences of a Rolling Homogeneous Scalar Field, Phys. Rev. D 37 (1988) 3406. [96] C. Wetterich, Cosmology and the Fate of Dilatation Symmetry, Nucl. Phys. B 302 (1988) 668–696, [arXiv:1711.03844]. [97] R. R. Caldwell, R. Dave, and P. J. Steinhardt, Cosmological imprint of an energy component with general equation of state, Phys. Rev. Lett. 80 (1998) 1582–1585, [astro-ph/9708069]. [98] N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge, UK, 1982. [99] L. E. Parker and D. Toms, Quantum Field Theory in Curved Spacetime: Quantized Field and Gravity. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 8, 2009. [100] C. G. Callan, Jr., S. R. Coleman, and R. Jackiw, A New improved energy - momentum tensor, Annals Phys. 59 (1970) 42–73. [101] A. Palatini, Deduzione invariantiva delle equazioni gravitazionali dal principio di Hamilton, Rend. Circ. Mat. Palermo 43 (1919), no. 1 203–212. [102] M. Ferraris, M. Francaviglia, and C. Reina, Variational formulation of general relativity from 1915 to 1925 “Palatini’s method” discovered by Einstein in 1925, Gen. Rel. Grav. 14 (1982), no. 3 243–254. – 28 – [103] F. Bauer and D. A. Demir, Inflation with Non-Minimal Coupling: Metric versus Palatini Formulations, Phys. Lett. B 665 (2008) 222–226, [arXiv:0803.2664]. [104] F. Bauer, Filtering out the cosmological constant in the Palatini formalism of modified gravity, Gen. Rel. Grav. 43 (2011) 1733–1757, [arXiv:1007.2546]. [105] N. Tamanini and C. R. Contaldi, Inflationary Perturbations in Palatini Generalised Gravity, Phys. Rev. D 83 (2011) 044018, [arXiv:1010.0689]. [106] F. Bauer and D. A. Demir, Higgs-Palatini Inflation and Unitarity, Phys. Lett. B 698 (2011) 425–429, [arXiv:1012.2900]. [107] S. Rasanen and P. Wahlman, Higgs inflation with loop corrections in the Palatini formulation, JCAP 11 (2017) 047, [arXiv:1709.07853]. [108] T. Tenkanen, Resurrecting Quadratic Inflation with a non-minimal coupling to gravity, JCAP 12 (2017) 001, [arXiv:1710.02758]. [109] A. Racioppi, Coleman-Weinberg linear inflation: metric vs. Palatini formulation, JCAP 12 (2017) 041, [arXiv:1710.04853]. [110] T. Markkanen, T. Tenkanen, V. Vaskonen, and H. Veerm¨ae, Quantum corrections to quartic inflation with a non-minimal coupling: metric vs. Palatini, JCAP 03 (2018) 029, [arXiv:1712.04874]. [111] L. J¨arv, A. Racioppi, and T. Tenkanen, Palatini side of inflationary attractors, Phys. Rev. D 97 (2018), no. 8 083513, [arXiv:1712.08471]. [112] C. Fu, P. Wu, and H. Yu, Inflationary dynamics and preheating of the nonminimally coupled inflaton field in the metric and Palatini formalisms, Phys. Rev. D 96 (2017), no. 10 103542, [arXiv:1801.04089]. [113] A. Racioppi, New universal attractor in nonminimally coupled gravity: Linear inflation, Phys. Rev. D 97 (2018), no. 12 123514, [arXiv:1801.08810]. [114] P. Carrilho, D. Mulryne, J. Ronayne, and T. Tenkanen, Attractor Behaviour in Multifield Inflation, JCAP 06 (2018) 032, [arXiv:1804.10489]. [115] A. Kozak and A. Borowiec, Palatini frames in scalar–tensor theories of gravity, Eur. Phys. J. C 79 (2019), no. 4 335, [arXiv:1808.05598]. [116] S. Rasanen and E. Tomberg, Planck scale black hole dark matter from Higgs inflation, JCAP 01 (2019) 038, [arXiv:1810.12608]. [117] S. Rasanen, Higgs inflation in the Palatini formulation with kinetic terms for the metric, Open J. Astrophys. 2 (2019), no. 1 1, [arXiv:1811.09514]. [118] J. P. B. Almeida, N. Bernal, J. Rubio, and T. Tenkanen, Hidden inflation dark matter, JCAP 03 (2019) 012, [arXiv:1811.09640]. [119] K. Shimada, K. Aoki, and K.-i. Maeda, Metric-affine Gravity and Inflation, Phys. Rev. D 99 (2019), no. 10 104020, [arXiv:1812.03420]. [120] T. Takahashi and T. Tenkanen, Towards distinguishing variants of non-minimal inflation, JCAP 04 (2019) 035, [arXiv:1812.08492]. [121] R. Jinno, K. Kaneta, K.-y. Oda, and S. C. Park, Hillclimbing inflation in metric and Palatini formulations, Phys. Lett. B 791 (2019) 396–402, [arXiv:1812.11077]. [122] J. Rubio and E. S. Tomberg, Preheating in Palatini Higgs inflation, JCAP 04 (2019) 021, [arXiv:1902.10148]. [123] N. Bostan, Non-minimally coupled quartic inflation with Coleman-Weinberg one-loop corrections in the Palatini formulation, Phys. Lett. B 811 (2020) 135954, [arXiv:1907.13235]. – 29 – [124] N. Bostan, Quadratic, Higgs and hilltop potentials in the Palatini gravity, Commun. Theor. Phys. 72 (2020) 085401, [arXiv:1908.09674]. [125] A. Racioppi, Non-Minimal (Self-)Running Inflation: Metric vs. Palatini Formulation, JHEP 21 (2020) 011, [arXiv:1912.10038]. [126] T. Tenkanen, Tracing the high energy theory of gravity: an introduction to Palatini inflation, Gen. Rel. Grav. 52 (2020), no. 4 33, [arXiv:2001.10135]. [127] M. Shaposhnikov, A. Shkerin, and S. Zell, Quantum Effects in Palatini Higgs Inflation, JCAP 07 (2020) 064, [arXiv:2002.07105]. [128] A. Borowiec and A. Kozak, New class of hybrid metric-Palatini scalar-tensor theories of gravity, JCAP 07 (2020) 003, [arXiv:2003.02741]. [129] L. J¨arv, A. Karam, A. Kozak, A. Lykkas, A. Racioppi, and M. Saal, Equivalence of inflationary models between the metric and Palatini formulation of scalar-tensor theories, Phys. Rev. D 102 (2020), no. 4 044029, [arXiv:2005.14571]. [130] A. Karam, M. Raidal, and E. Tomberg, Gravitational dark matter production in Palatini preheating, JCAP 03 (2021) 064, [arXiv:2007.03484]. [131] J. McDonald, Does Palatini Higgs Inflation Conserve Unitarity?, JCAP 04 (2021) 069, [arXiv:2007.04111]. [132] M. L˚angvik, J.-M. Ojanper¨a, S. Raatikainen, and S. Rasanen, Higgs inflation with the Holst and the Nieh–Yan term, Phys. Rev. D 103 (2021), no. 8 083514, [arXiv:2007.12595]. [133] T. Tenkanen and L. Visinelli, Axion dark matter from Higgs inflation with an intermediate H∗, JCAP 08 (2019) 033, [arXiv:1906.11837]. [134] M. Shaposhnikov, A. Shkerin, I. Timiryasov, and S. Zell, Higgs inflation in Einstein-Cartan gravity, JCAP 02 (2021) 008, [arXiv:2007.14978]. [Erratum: JCAP 10, E01 (2021)]. [135] M. Shaposhnikov, A. Shkerin, I. Timiryasov, and S. Zell, Einstein-Cartan gravity, matter, and scale-invariant generalization , JHEP 10 (2020) 177, [arXiv:2007.16158]. [136] I. D. Gialamas, A. Karam, A. Lykkas, and T. D. Pappas, Palatini-Higgs inflation with nonminimal derivative coupling, Phys. Rev. D 102 (2020), no. 6 063522, [arXiv:2008.06371]. [137] Y. Mikura, Y. Tada, and S. Yokoyama, Conformal inflation in the metric-affine geometry, EPL 132 (2020), no. 3 39001, [arXiv:2008.00628]. [138] S. Verner, Quintessential Inflation in Palatini Gravity, JCAP 04 (2021) [arXiv:2010.11201]. [139] V.-M. Enckell, S. Nurmi, S. R¨as¨anen, and E. Tomberg, Critical point Higgs inflation in the Palatini formulation, JHEP 04 (2021) 059, [arXiv:2012.03660]. [140] Y. Reyimuaji and X. Zhang, Natural inflation with a nonminimal coupling to gravity, JCAP 03 (2021) 059, [arXiv:2012.14248]. [141] A. Karam, S. Karamitsos, and M. Saal, β-function reconstruction of Palatini inflationary attractors, JCAP 10 (2021) 068, [arXiv:2103.01182]. [142] Y. Mikura, Y. Tada, and S. Yokoyama, Minimal k-inflation in light of the conformal metric-affine geometry, Phys. Rev. D 103 (2021), no. 10 L101303, [arXiv:2103.13045]. [143] M. Kubota, K.-Y. Oda, K. Shimada, and M. Yamaguchi, Cosmological Perturbations in Palatini Formalism, JCAP 03 (2021) 006, [arXiv:2010.07867]. [144] D. S.-C. G´omez, 3+1 decomposition in modified gravities within the Palatini formalism and some applications, Phys. Rev. D 104 (2021), no. 2 024029, [arXiv:2103.16319]. [145] Y. Mikura and Y. Tada, On UV-completion of Palatini-Higgs inflation, JCAP 05 (2022), no. 05 035, [arXiv:2110.03925]. – 30 – [146] F. Bombacigno and G. Montani, Big bounce cosmology for Palatini R2 gravity with a Nieh–Yan term, Eur. Phys. J. C 79 (2019), no. 5 405, [arXiv:1809.07563]. [147] V.-M. Enckell, K. Enqvist, S. Rasanen, and L.-P. Wahlman, Inflation with R2 term in the Palatini formalism, JCAP 02 (2019) 022, [arXiv:1810.05536]. [148] I. Antoniadis, A. Karam, A. Lykkas, and K. Tamvakis, Palatini inflation in models with an R2 term, JCAP 11 (2018) 028, [arXiv:1810.10418]. [149] I. Antoniadis, A. Karam, A. Lykkas, T. Pappas, and K. Tamvakis, Rescuing Quartic and Natural Inflation in the Palatini Formalism, JCAP 03 (2019) 005, [arXiv:1812.00847]. [150] T. Tenkanen, Minimal Higgs inflation with an R2 term in Palatini gravity, Phys. Rev. D 99 (2019), no. 6 063528, [arXiv:1901.01794]. [151] A. Edery and Y. Nakayama, Palatini formulation of pure R2 gravity yields Einstein gravity with no massless scalar, Phys. Rev. D 99 (2019), no. 12 124018, [arXiv:1902.07876]. [152] M. Giovannini, Post-inflationary phases stiffer than radiation and Palatini formulation, Class. Quant. Grav. 36 (2019), no. 23 235017, [arXiv:1905.06182]. [153] T. Tenkanen, Trans-Planckian censorship, inflation, and dark matter, Phys. Rev. D 101 (2020), no. 6 063517, [arXiv:1910.00521]. [154] I. D. Gialamas and A. Lahanas, Reheating in R2 Palatini inflationary models, Phys. Rev. D 101 (2020), no. 8 084007, [arXiv:1911.11513]. [155] T. Tenkanen and E. Tomberg, Initial conditions for plateau inflation: a case study, JCAP 04 (2020) 050, [arXiv:2002.02420]. [156] A. Lloyd-Stubbs and J. McDonald, Sub-Planckian ϕ2 inflation in the Palatini formulation of gravity with an R2 term, Phys. Rev. D 101 (2020), no. 12 123515, [arXiv:2002.08324]. [157] I. Antoniadis, A. Lykkas, and K. Tamvakis, Constant-roll in the Palatini-R2 models, JCAP 04 (2020), no. 04 033, [arXiv:2002.12681]. [158] D. M. Ghilencea, Palatini quadratic gravity: spontaneous breaking of gauged scale symmetry and inflation, Eur. Phys. J. C 80 (4, 2020) 1147, [arXiv:2003.08516]. [159] N. Das and S. Panda, Inflation and Reheating in f(R,h) theory formulated in the Palatini formalism, JCAP 05 (2021) 019, [arXiv:2005.14054]. [160] I. D. Gialamas, A. Karam, and A. Racioppi, Dynamically induced Planck scale and inflation in the Palatini formulation, JCAP 11 (2020) 014, [arXiv:2006.09124]. [161] D. M. Ghilencea, Gauging scale symmetry and inflation: Weyl versus Palatini gravity, Eur. Phys. J. C 81 (2021), no. 6 510, [arXiv:2007.14733]. [162] S. Bekov, K. Myrzakulov, R. Myrzakulov, and D. S.-C. G´omez, General slow-roll inflation in f(R) gravity under the Palatini approach, Symmetry 12 (2020), no. 12 1958, [arXiv:2010.12360]. [163] D. S.-C. G´omez, Variational principle and boundary terms in gravity `a la Palatini, Phys. Lett. B 814 (2021) 136103, [arXiv:2011.11568]. [164] K. Dimopoulos and S. S´anchez L´opez, Quintessential inflation in Palatini f(R) gravity, Phys. Rev. D 103 (2021), no. 4 043533, [arXiv:2012.06831]. [165] A. Karam, E. Tomberg, and H. Veerm¨ae, Tachyonic preheating in Palatini R 2 inflation, JCAP 06 (2021) 023, [arXiv:2102.02712]. [166] A. Lykkas and K. Tamvakis, Extended interactions in the Palatini-R2 inflation, JCAP 08 (2021), no. 043 [arXiv:2103.10136]. – 31 – [167] I. D. Gialamas, A. Karam, T. D. Pappas, and V. C. Spanos, Scale-invariant quadratic gravity and inflation in the Palatini formalism, Phys. Rev. D 104 (2021), no. 2 023521, [arXiv:2104.04550]. [168] J. Annala and S. Rasanen, Inflation with R (αβ) terms in the Palatini formulation, JCAP 09 (2021) 032, [arXiv:2106.12422]. [169] M. AlHallak, A. AlRakik, N. Chamoun, and M. S. El-Daher, Palatini f(R) Gravity and Variants of k-/Constant Roll/Warm Inflation within Variation of Strong Coupling Scenario, Universe 8 (2022), no. 2 126, [arXiv:2111.05075]. [170] C. Dioguardi, A. Racioppi, and E. Tomberg, Slow-roll inflation in Palatini F(R) gravity, JHEP 06 (2022) 106, [arXiv:2112.12149]. [171] K. Dimopoulos, A. Karam, S. S´anchez L´opez, and E. Tomberg, Modelling Quintessential Inflation in Palatini-Modified Gravity, Galaxies 10 (2022), no. 2 57, [arXiv:2203.05424]. [172] I. D. Gialamas, A. Karam, T. D. Pappas, and E. Tomberg, Implications of Palatini gravity for inflation and beyond, Int. J. Geom. Meth. Mod. Phys. 20 (2023), no. 13 2330007, [arXiv:2303.14148]. [173] S. S´anchez L´opez, K. Dimopoulos, A. Karam, and E. Tomberg, Observable gravitational waves from hyperkination in Palatini gravity and beyond, Eur. Phys. J. C 83 (2023), no. 12 1152, [arXiv:2305.01399]. [174] J. J. Terente D´ıaz, K. Dimopoulos, M. Karˇciauskas, and A. Racioppi, Quintessence in the Weyl-Gauss-Bonnet model, JCAP 02 (2024) 040, [arXiv:2310.08128]. [175] H. J. Kuralkar, S. Panda, and A. Vidyarthi, Observable primordial gravitational waves from non-minimally coupled R 2 Palatini modified gravity, JCAP 05 (2025) 073, [arXiv:2502.03573]. [176] S. S. L´opez and J. J. Terente D´ıaz, Scalar-Induced Gravitational Waves in Palatini f(R) Gravity, arXiv:2505.13420. [177] DESI Collaboration, M. Abdul Karim et al., DESI DR2 Results I: Baryon Acoustic Oscillations from the Lyman Alpha Forest, arXiv:2503.14739. [178] DES Collaboration, T. M. C. Abbott et al., The Dark Energy Survey: Cosmology Results with ∼1500 New High-redshift Type Ia Supernovae Using the Full 5 yr Data Set, Astrophys. J. Lett. 973 (2024), no. 1 L14, [arXiv:2401.02929]. [179] W. J. Wolf, P. G. Ferreira, and C. Garc´ıa-Garc´ıa, Matching current observational constraints with nonminimally coupled dark energy, Phys. Rev. D 111 (2025), no. 4 L041303, [arXiv:2409.17019]. [180] G. Ye, M. Martinelli, B. Hu, and A. Silvestri, Hints of Nonminimally Coupled Gravity in DESI 2024 Baryon Acoustic Oscillation Measurements, Phys. Rev. Lett. 134 (2025), no. 18 181002, [arXiv:2407.15832]. [181] G. Ye, Bridge the Cosmological Tensions with Thawing Gravity, arXiv:2411.11743. [182] A. G. Ferrari, M. Ballardini, F. Finelli, and D. Paoletti, Scalar-tensor gravity and DESI 2024 BAO data, Phys. Rev. D 111 (2025), no. 8 083523, [arXiv:2501.15298]. [183] I. Antoniadis, A. Guillen, and K. Tamvakis, Late time acceleration in Palatini gravity, JHEP 11 (2022) 144, [arXiv:2207.13732]. [184] K. Dimopoulos, A. Karam, S. S´anchez L´opez, and E. Tomberg, Palatini R 2 quintessential inflation, JCAP 10 (2022) 076, [arXiv:2206.14117]. [185] L. J¨arv and A. Toporensky, Global portraits of nonminimal inflation, Eur. Phys. J. C 82 (2022), no. 2 179, [arXiv:2104.10183]. – 32 – [186] L. J¨arv, S. Karamitsos, and M. Saal, Global portraits of nonminimal inflation: Metric and Palatini formalism, Phys. Rev. D 109 (2024), no. 8 084073, [arXiv:2401.12314]. [187] L. J¨arv and D. Kraiko, Global portraits of inflation in nonsingular variables, Eur. Phys. J. C 85 (2025), no. 6 715, [arXiv:2503.07544]. [188] P. Wang, P. Wu, and H. Yu, A new extended quintessence, Eur. Phys. J. C 72 (2012) 2245, [arXiv:1301.5832]. [189] Y. Fan, P. Wu, and H. Yu, Cosmological perturbations of non-minimally coupled quintessence in the metric and Palatini formalisms, Phys. Lett. B 746 (2015) 230–236. [190] F. Bauer, Filtering out the cosmological constant in the Palatini formalism of modified gravity, Gen. Rel. Grav. 43 (2011) 1733–1757, [arXiv:1007.2546]. [191] A. Racioppi, J. Rajasalu, and K. Selke, Multiple point criticality principle and Coleman-Weinberg inflation, JHEP 06 (2022) 107, [arXiv:2109.03238]. [192] D. Y. Cheong, S. M. Lee, and S. C. Park, Reheating in models with non-minimal coupling in metric and Palatini formalisms, JCAP 02 (2022), no. 02 029, [arXiv:2111.00825]. [193] A. Racioppi and M. Vasar, On the number of e-folds in the Jordan and Einstein frames, Eur. Phys. J. Plus 137 (2022), no. 5 637, [arXiv:2111.09677]. [194] T. Kodama and T. Takahashi, Relaxing inflation models with nonminimal coupling: A general study, Phys. Rev. D 105 (2022), no. 6 063542, [arXiv:2112.05283]. [195] G. K. Karananas, M. Shaposhnikov, and S. Zell, Field redefinitions, perturbative unitarity and Higgs inflation, JHEP 06 (2022) 132, [arXiv:2203.09534]. [196] W. Yin, Weak-scale Higgs inflation, JCAP 05 (2024) 060, [arXiv:2210.15680]. [197] I. D. Gialamas, A. Karam, and T. D. Pappas, Gravitational corrections to electroweak vacuum decay: metric vs. Palatini, Phys. Lett. B 840 (2023) 137885, [arXiv:2212.03052]. [198] T. Koivisto, Covariant conservation of energy momentum in modified gravities, Class. Quant. Grav. 23 (2006) 4289–4296, [gr-qc/0505128]. [199] L. Gorlich, S. Kachru, P. K. Tripathy, and S. P. Trivedi, Gaugino condensation and nonperturbative superpotentials in flux compactifications, JHEP 12 (2004) 074, [hep-th/0407130]. [200] M. Haack, D. Krefl, D. Lust, A. Van Proeyen, and M. Zagermann, Gaugino Condensates and D-terms from D7-branes, JHEP 01 (2007) 078, [hep-th/0609211]. [201] Z. Lalak, G. G. Ross, and S. Sarkar, Racetrack inflation and assisted moduli stabilisation, Nucl. Phys. B 766 (2007) 1–20, [hep-th/0503178]. [202] E. J. Copeland, A. R. Liddle, and D. Wands, Exponential potentials and cosmological scaling solutions, Phys. Rev. D 57 (1998) 4686–4690, [gr-qc/9711068]. [203] C. Rubano and P. Scudellaro, On some exponential potentials for a cosmological scalar field as quintessence, Gen. Rel. Grav. 34 (2002) 307–328, [astro-ph/0103335]. [204] T. Barreiro, E. J. Copeland, and N. J. Nunes, Quintessence arising from exponential potentials, Phys. Rev. D 61 (2000) 127301, [astro-ph/9910214]. [205] S. Bhattacharya, G. Borghetto, A. Malhotra, S. Parameswaran, G. Tasinato, and I. Zavala, Cosmological constraints on curved quintessence, JCAP 09 (2024) 073, [arXiv:2405.17396]. [206] G. Alestas, M. Delgado, I. Ruiz, Y. Akrami, M. Montero, and S. Nesseris, Is curvature-assisted quintessence observationally viable?, Phys. Rev. D 110 (2024), no. 10 106010, [arXiv:2406.09212]. [207] O. F. Ramadan, J. Sakstein, and D. Rubin, DESI constraints on exponential quintessence, Phys. Rev. D 110 (2024), no. 4 L041303, [arXiv:2405.18747]. – 33 – [208] C. Wetterich, The Cosmon model for an asymptotically vanishing time dependent cosmological ’constant’, Astron. Astrophys. 301 (1995) 321–328, [hep-th/9408025]. [209] P. Binetruy, Models of dynamical supersymmetry breaking and quintessence, Phys. Rev. D 60 (1999) 063502, [hep-ph/9810553]. [210] P. Agrawal, G. Obied, P. J. Steinhardt, and C. Vafa, On the Cosmological Implications of the String Swampland, Phys. Lett. B 784 (2018) 271–276, [arXiv:1806.09718]. [211] Y. Akrami, R. Kallosh, A. Linde, and V. Vardanyan, The Landscape, the Swampland and the Era of Precision Cosmology, Fortsch. Phys. 67 (2019), no. 1-2 1800075, [arXiv:1808.09440]. [212] M. Raveri, W. Hu, and S. Sethi, Swampland Conjectures and Late-Time Cosmology, Phys. Rev. D 99 (2019), no. 8 083518, [arXiv:1812.10448]. [213] R. R. Caldwell, A Phantom menace?, Phys. Lett. B 545 (2002) 23–29, [astro-ph/9908168]. [214] S. M. Carroll, M. Hoffman, and M. Trodden, Can the dark energy equation-of-state parameter w be less than -1?, Phys. Rev. D 68 (2003) 023509, [astro-ph/0301273]. [215] R. Gannouji, D. Polarski, A. Ranquet, and A. A. Starobinsky, Scalar-Tensor Models of Normal and Phantom Dark Energy, JCAP 09 (2006) 016, [astro-ph/0606287]. [216] B. Boisseau, G. Esposito-Farese, D. Polarski, and A. A. Starobinsky, Reconstruction of a scalar tensor theory of gravity in an accelerating universe, Phys. Rev. Lett. 85 (2000) 2236, [gr-qc/0001066]. [217] D. F. Torres, Quintessence, superquintessence and observable quantities in Brans-Dicke and nonminimally coupled theories, Phys. Rev. D 66 (2002) 043522, [astro-ph/0204504]. [218] E. Gunzig, A. Saa, L. Brenig, V. Faraoni, T. M. Rocha Filho, and A. Figueiredo, Superinflation, quintessence, and nonsingular cosmologies, Phys. Rev. D 63 (2001) 067301, [gr-qc/0012085]. [219] F. C. Carvalho and A. Saa, Non-minimal coupling, exponential potentials and the w < -1 regime of dark energy, Phys. Rev. D 70 (2004) 087302, [astro-ph/0408013]. [220] S. M. Carroll, A. De Felice, and M. Trodden, Can we be tricked into thinking that w is less than -1?, Phys. Rev. D 71 (2005) 023525, [astro-ph/0408081]. [221] L. Perivolaropoulos, Crossing the phantom divide barrier with scalar tensor theories, JCAP 10 (2005) 001, [astro-ph/0504582]. [222] S. Nesseris and L. Perivolaropoulos, Crossing the Phantom Divide: Theoretical Implications and Observational Status, JCAP 01 (2007) 018, [astro-ph/0610092]. [223] Z. Zhai and Y. Wang, Robust and model-independent cosmological constraints from distance measurements, JCAP 07 (2019) 005, [arXiv:1811.07425]. [224] P. Lemos and A. Lewis, CMB constraints on the early Universe independent of late-time cosmology, Phys. Rev. D 107 (2023), no. 10 103505, [arXiv:2302.12911]. [225] R. K. Sachs and A. M. Wolfe, Perturbations of a cosmological model and angular variations of the microwave background, Astrophys. J. 147 (1967) 73–90. [226] Y. Fan, P. Wu, and H. Yu, The integrated Sachs–Wolfe effect in the extended quintessence cosmological models, Class. Quant. Grav. 33 (2016), no. 8 085006. [227] W. Hu and N. Sugiyama, Small scale cosmological perturbations: An Analytic approach, Astrophys. J. 471 (1996) 542–570, [astro-ph/9510117]. [228] Particle Data Group Collaboration, R. L. Workman et al., Review of Particle Physics, PTEP 2022 (2022) 083C01. [229] S. Brieden, H. Gil-Mar´ın, and L. Verde, A tale of two (or more) h’s, JCAP 04 (2023) 023, [arXiv:2212.04522]. – 34 – [230] K. Akita and M. Yamaguchi, A precision calculation of relic neutrino decoupling, JCAP 08 (2020) 012, [arXiv:2005.07047]. [231] J. Froustey, C. Pitrou, and M. C. Volpe, Neutrino decoupling including flavour oscillations and primordial nucleosynthesis, JCAP 12 (2020) 015, [arXiv:2008.01074]. [232] J. J. Bennett, G. Buldgen, P. F. De Salas, M. Drewes, S. Gariazzo, S. Pastor, and Y. Y. Y. Wong, Towards a precision calculation of Neff in the Standard Model II: Neutrino decoupling in the presence of flavour oscillations and finite-temperature QED, JCAP 04 (2021) 073, [arXiv:2012.02726]. [233] M. Drewes, Y. Georis, M. Klasen, L. P. Wiggering, and Y. Y. Y. Wong, Towards a precision calculation of Neff in the Standard Model. Part III. Improved estimate of NLO contributions to the collision integral, JCAP 06 (2024) 032, [arXiv:2402.18481]. [234] M. Goliath, R. Amanullah, P. Astier, A. Goobar, and R. Pain, Supernovae and the nature of the dark energy, Astron. Astrophys. 380 (2001) 6–18, [astro-ph/0104009]. [235] D. Foreman-Mackey, D. W. Hogg, D. Lang, and J. Goodman, emcee: The MCMC Hammer, Publ. Astron. Soc. Pac. 125 (2013) 306–312, [arXiv:1202.3665]. [236] J. Goodman and J. Weare, Ensemble samplers with affine invariance, Commun. Appl. Math. Comput. Sc. 5 (2010), no. 1 65–80. [237] A. Lewis, GetDist: a Python package for analysing Monte Carlo samples, JCAP 08 (2025) 025, [arXiv:1910.13970]. [238] J. A. Nelder and R. Mead, A Simplex Method for Function Minimization, Comput. J. 7 (1965) 308–313. [239] A. Heavens, Y. Fantaye, A. Mootoovaloo, H. Eggers, Z. Hosenie, S. Kroon, and E. Sellentin, Marginal Likelihoods from Monte Carlo Markov Chains, arXiv:1704.03472. [240] H. Jeffreys, The Theory of Probability. Oxford Classic Texts in the Physical Sciences. 1939. [241] R. Trotta, Bayes in the sky: Bayesian inference and model selection in cosmology, Contemp. Phys. 49 (2008) 71–104, [arXiv:0803.4089]. [242] W. Handley, fgivenx: Functional posterior plotter, The Journal of Open Source Software 3 (Aug, 2018). [243] S. M. Carroll, Quintessence and the rest of the world, Phys. Rev. Lett. 81 (1998) 3067–3070, [astro-ph/9806099]. [244] E. G. Adelberger, B. R. Heckel, and A. E. Nelson, Tests of the gravitational inverse square law, Ann. Rev. Nucl. Part. Sci. 53 (2003) 77–121, [hep-ph/0307284]. [245] C. M. Will, The Confrontation between General Relativity and Experiment, Living Rev. Rel. 17 (2014) 4, [arXiv:1403.7377]. [246] B. Bertotti, L. Iess, and P. Tortora, A test of general relativity using radio links with the Cassini spacecraft, Nature 425 (2003) 374–376. – 35 –
Non-Minimally Coupled Quintessence in Light of DESI Samuel S ́anchez L ́opez,a,b Alexandros Karam,c Dhiraj Kumar Hazraa,b,d aThe 600113, India bHomi Bhabha National Institute, Training School Complex, Anushakti Nagar, Mumbai 400094, India cNational ̈avala 10, 10143 Tallinn, Estonia dINAF/OAS Bologna, Osservatorio di Astrofisica e Scienza dello Spazio, Area della ricerca CNR-INAF, via Gobetti 101, I-40129 Bologna, Italy E-mail: , , Abstract. We analyze a model of quintessence governed by an exponential potential and nonminimally coupled to gravity, in light of recent datasets, including cosmic microwave background, baryon acoustic oscillations, and supernovae distance moduli observations. Mainly focusing on the Palatini formulation of gravity, a phase space analysis reveals the existence of a late-time stable de Sitter attractor as long as the non-minimal coupling constant is negative, regardless of the value of the slope of the exponential. Fitting to CMB+DESI+DESY5 data, we find strong evidence for our model over ΛCDM, with a Bayes factor log B = 5.52. Furthermore, the data seem to prefer dynamical dark energy at > 3σ C.L. and a phantom crossing in the barotropic parameter of dark energy at 2 -3σ C.L.. We find that the scalar field dynamics in the Palatini formalism provides marginally better agreement to the data compared to the metric formalism. 16 Oct 2025 Contents 1 Introduction 1 2 The model 2 2.1 Action and field equations 3 2.2 Dynamical system analysis 6 3 Datasets and Methodology 10 3.1 CMB Data 10 3.2 BAO Data 12 3.3 SNIa Data 13 3.4 Dynamical System and Initial Conditions 13 3.5 Free Parameters and Posterior Sampling 14 4 Results 16 4.1 Observational constraints 16 4.2 Palatini vs Metric 20 5 Conclusions 22 1 Introduction The entire history of the Universe, from the initial conditions of the Hot Big Bang, provided by primordial quantum fluctuations stretched to super-horizon scales [1-4], until the present day, 13.8 billion years later, is well described by a six-parameter model dubbed ΛCDM with a power law form of primordial spectrum, or the standard model of cosmology. Two of the six parameters correspond to the amplitude and tilt of a nearly scale-invariant spectrum for the initial conditions, while the remaining four are linked to background quantities, namely the density parameters of baryonic and cold dark matter (CDM), the reionization depth, and the angular size of the horizon at recombination. ΛCDM, which serves as a baseline model, not only is in excellent agreement with the most precise Cosmic Microwave Background (CMB) data from Planck [5], but also successfully addresses the low redshift observations, making it the most successful cosmological model to date. Despite this triumphant success, a more careful analysis reveals tensions between cosmological observations when the standard model is used to simultaneously fit high- and low-redshift datasets in a joint analysis. For example, the value of the Hubble parameter inferred from CMB data is about 8% smaller, at a confidence level of 5σ, than the value locally measured by using the distance ladder [6], in what is called the Hubble tension [7, 8]. Other tensions include discrepancies in the inferred value of the matter clustering parameter S8 or the CMB lensing amplitude Alens anomaly (see Ref. [9] for a comprehensive review). More recently, baryon acoustic oscillations (BAO) measurements [10, 11] reveal mild tensions with CMB data, particularly when combined with type Ia supernovae (SNIa) observations. The focus of the present work will be the latter, by considering a model of dynamical dark energy featuring deviations from a cosmological constant behaviour at late times, something that has been the subject of intense study in the recent past [12-83]. - 1 - In this paper, we take a theoretically motivated approach, keeping simplicity in mind. Thus, the role of dynamical dark energy is played by a scalar field φ. Scalar fields are ubiquitous in high energy physics, including the Higgs mechanism [84, 85], inflation [86], dark matter [87-89], string theory [90-92], modified gravity [93, 94], and, in our case, quintessence [9597]. Furthermore, we include a non-minimal coupling to gravity in the action governing its dynamics. Indeed, in the context of quantum field theory in curved spacetime, even if at tree level the field is minimally coupled, renormalization requires the inclusion of counterterms that couple it to the Ricci scalar R. In this way, the action at loop level must include a term proportional to φ2R [98-100]. The inclusion of a non-minimal coupling in the action makes the dynamics sensitive to the formalism of the theory of gravity. In this work we mainly focus on the Palatini formalism [101, 102], which has gained decisive momentum in recent years [103-176], and emphasize the differences with respect to the widely used metric formalism throughout the paper. In the Palatini formalism, the connection is taken to be a priori independent in such a way that the action should be varied with respect to it, in addition to the metric. The result is that the field equations acquire additional terms relative to their metric counterparts, in which the connection is fixed to the Levi-Civita form, leading to different dynamics. This relatively subtle point is non-existent for an Einstein-Hilbert action. In this case, the variation of the action with respect to the connection dynamically fixes it to its Levi-Civita form, and both the metric and Palatini formalisms agree. In this work, we analyze for the first time a model of Palatini non-minimally coupled quintessence in the light of state-of-the-art cosmological data, including CMB [5], BAO [11, 177], and SNe [178] observations. Our focus is both theoretical, performing a phase space analysis of the model, and observational, including a thorough statistical study of our results. Our work emphasizes the capability of current data to probe modifications to general relativity, as well as the degrees of freedom of the theory of gravity itself. The paper is organized as follows. In Sec. 2 we lay out the theoretical aspects of the model. In Sec. 2.1 we compute the field equations, explicitly showing the difference between the metric and Palatini formalisms, and in Sec. 2.2 we write the field equations as an autonomous dynamical system, providing the phase space analysis. Sec. 3 is devoted to describing the datasets and methodology used to constrain our model, and we present the results in Sec. 4. In Sec. 4.1 we show the improvement in fit to different datasets and the resulting parameter posteriors, and in Sec. 4.2 we compare the fits to the data of both the Palatini and metric formalisms. Finally, in Sec. 5 we give our concluding remarks and outlook. Greek indices represent space-time coordinates μ, ν = 0, 1, 2, 3 and Latin indices represent spatial coordinates i, j = 1, 2, 3. Repeated indices are summed over. We assume natural units with c = ħ= 1 and mP = 1/√8πGN = 2.44 × 1018 GeV, where mP is the reduced Planck mass. The signature of the metric is mostly positive (-, +, +, +). 2 The model In this section, we present our model of non-minimally coupled quintessence, which will then be analyzed in the light of different datasets. Previous works on the subject include Refs. [179-182], utilizing the DESI DR1 data, and the more recent Refs. [13, 17, 22, 45, 68] utilizing the DESI DR2 data. Even though they study different models, what they all have in common is the underlying formalism of gravity: the metric formalism. In the present work, - 2 - we mainly focus on the Palatini formalism, although we shall also give important results in the metric formalism, both for the sake of comparison and to emphasize the differences between the two formalisms. To the best of our knowledge, our model was only previously considered in Ref. [183] as a standalone quintessence model and in Refs. [171, 184] in the context of quintessential inflation. In what follows, we give a brief overview of the dynamics of a scalar field non-minimally coupled to gravity, deriving the field equations in both formalisms. The interested reader may consult e.g. [172] for further details. We then express the equations of motion as a dynamical system and provide the phase space analysis. This approach was previously considered in Refs. [185-187] in the context of inflation and in Refs. [183, 188, 189] in the context of quintessence. 2.1 Action and field equations We consider a canonical scalar field, φ, which plays the role of quintessence. It is nonminimally coupled to gravity and minimally coupled to the matter and radiation sectors. The action in the Jordan frame reads S = Z d4x√-g m2 P 2 f(φ)R -1 2gμν∂μφ∂νφ -V (φ) + Sm [gμν, χm] , (2.1) where gμν is the metric tensor, Sm is the matter action, χm collectively represents the matter fields, and R is the Ricci scalar, which is obtained by contracting the metric with the Ricci tensor R = gμνRμν. The latter is obtained from the contraction of the Riemann tensor Rα μαν and can be written solely in terms of the connection as Rμν = ∂λΓλ μν -∂νΓλ μλ + Γλ λρΓρ μν -Γρ μλΓλ νρ. (2.2) In the metric formalism of gravity, the connection takes the Levi-Civita form given by Lμ αβ = 1 2gμλ (∂αgλβ + ∂βgλα -∂λgαβ) , (2.3) which depends only on the metric. However, a priori the connection and the metric need not be related. This is so in the Palatini formalism [101, 102], where the connection is assumed to be an independent gravitational field, denoted as ˆΓμ αβ. Consequently, in order to obtain the equations of motion, one must vary the action with respect to both gμν and ˆΓμ αβ. Hereinafter, we denote tensors constructed using the independent connection with a hat and tensors constructed using the Levi-Civita connection without a hat. Scripted quantities correspond to either metric or Palatini, as in the action (2.1). It is important to mention that in a theory with a pure Einstein-Hilbert (EH) action (and minimally coupled matter), the metric and Palatini variational formalisms are dynamically equivalent. However, once the action is extended, e.g. by a non-minimal coupling f(φ)R or by higher-curvature terms such as F(R), the two approaches yield different field equations and, hence, distinct cosmological dynamics. To understand why, one can take the Palatini action in the non-minimal coupling case, where R = gμν ˆRμν, and vary it with respect to ˆΓμ αβ, giving ˆ∇λ √-gfgμν = 0 . (2.4) This means that ˆΓλ μν is compatible with hμν = fgμν. Therefore, ˆΓμ αβ = 1 2hμλ (∂αhλβ + ∂βhλα -∂λhαβ) = Lμ αβ + 1 2 h δμ β∂α log f + δμ α∂β log f -gαβ∂μ log f i . (2.5) - 3 - It is clear that for a minimally coupled scalar field f(φ) = 1, ˆΓμ αβ = Lμ αβ. However, for any other function of the field f(φ), the Ricci tensor ˆRμν will acquire additional terms with respect to Rμν, coming from the last bracket in Eq. (2.5) (e.g. see Eq. (2.17)). We take the non-minimal coupling function to be f(φ) = 1 + ξφ2/m2 P, extensively studied in the context of inflation [103, 105-132, 134-141, 145, 172, 186, 190-197]. Indeed, even if a theory is minimally coupled at tree level, a non-minimal coupling of this form will be generated at the loop level [98, 100]. The function f(φ) rescales the effective Planck mass as M 2 eff(φ) ≡m2 pf(φ), so one must require f(φ) > 0 at all times. In the case of f(φ) = 1 and a fixed scalar field, the action reduces to Einstein gravity with the potential playing the role of a cosmological constant. The non-minimal coupling f(φ)R allows one to recast the theory by a conformal transformation to the Einstein frame, where the gravitational sector takes the EH form and the scalar field is canonical (after a field redefinition). The trade-off is that the matter sector is no longer minimally coupled to the metric, and the usual conservation law for the matter energy-momentum tensor [198] does not hold in its standard form. This is the issue faced by the authors of Ref. [183], where these new couplings are neglected as a simplifying assumption to their analysis in the Einstein frame. In the present work, we choose to work exclusively in the Jordan frame, making our treatment exact. Note that although the two frames are mathematically related, they are not physically equivalent unless one simultaneously adopts variable units in the Einstein frame. We further adopt an exponential potential for the scalar field, given by V = V0e-λφ/mP , (2.6) where λ is a constant that controls the slope, with λ > 0, V0 ≥0. This potential is a minimal, theoretically motivated choice, which commonly appears in string theory and supergravity [199-201] and has been extensively studied in different cosmological scenarios [97, 202212]. It provides a clean baseline for assessing the impact of the non-minimal coupling on the background expansion. Varying the action (2.1) with respect to the metric tensor gμν, we obtain the field equations fRμν -1 2fRgμν -(1 -δP ) (∇μ∇νf -gμν∇σ∇σf) = 1 m2 P h T (φ) μν + T (m,r) μν i , (2.7) where δP = 0 , metric 1 , Palatini (2.8) and we emphasize the unhatted ∇μ is the covariant derivative related to the Levi-Civita connection. In Eq. (2.7), the energy-momentum tensor of quintessence reads T (φ) μν = ∂μφ ∂νφ -1 2gμν(∂φ)2 -gμνV (φ) (2.9) and T (m,r) μν is the combined matter-radiation tensor, taken to be that of a perfect fluid T (m,r) μν = - 2 √-g δSm δgμν = (ρ + p)uμuν + pgμν . (2.10) - 4 - Using the metric Einstein tensor Gμ ν = Rμ ν -1 2Rgμ ν, Eq. (2.7) can be cast in the standard form Gμ ν = 1 m2 P h T μ(φ) ν + T μ(m,r) ν + T μ(eff) ν i , (2.11) where the effective energy-momentum tensor is defined as T μ(eff) ν =m2 P (1 -f)Rμ ν + ∇μ∇νf + 1 2δμ ν (fR -R -2∇σ∇σf) +δP -3 2 ∇μf∇νf f + 3 4δμ ν ∇σf∇σf f , (2.12) Again, in the limit of general relativity f(φ) = 1, this tensor vanishes, and the metric and Palatini formalisms reduce to the same field equations. Next, we adopt the flat Friedmann-Lemaˆıtre-Robertson-Walker (FLRW) metric ds2 = gμνdxμdxν = -dt2 + a2(t) dr2 + r2 dθ2 + sin2 θdφ2 (2.13) where a(t) is the cosmic scale and t is the cosmic time. From the (0, 0) and (i, j) components of Eq. (2.11) we obtain the modified Friedmann and Raychaudhuri equations 3fH2 = ̇φ2/2 + V + ρm + ρr m2 P -3H ̇f -δP 3 ̇f2 4f , (2.14) -2f ̇H = ̇φ2 + ρm + 4ρr/3 m2 P + ̈f -H ̇f -δP 3 ̇f2 2f , (2.15) where H = ̇a/a is the Hubble parameter, with a dot denoting the derivative with respect to cosmic time. The modified Klein-Gordon equation in either formulation reads ̈φ + 3H ̇φ + V,φ = m2 P 2 f,φR, (2.16) where R = 6 ̇H + 2H2 + δP -3 ̇f2 2f2 + 3 ̈f f + 9H ̇f f ! . (2.17) Since matter and radiation are minimally coupled in (2.1), their stress tensors are separately conserved [198] ̇ρm + 3Hρm = 0 , ̇ρr + 4Hρr = 0 . (2.18) We introduce the fractional energy densities of radiation, matter, and the scalar field Ωr = ρr 3m2 PfH2 , Ωm = ρm 3m2 PfH2 , Ωφ = ̇φ2 6fH2 -f,φ ̇φ fH -δP f2 ,φ ̇φ2 4f2H2 + V (φ) 3fH2 (2.19) In the minimal coupling limit f(φ) = 1, these expressions reduce to their standard GR definitions. With a non-minimal coupling, caution is required in interpreting these fractions, because the coupling mixes contributions in such a way that they are not guaranteed to - 5 - remain ≤1 (or even strictly positive). Under the standard assumptions that radiation and matter have non-negative energy densities (ρr ≥0 , ρm ≥0) and f(φ) > 0, Ωr and Ωm remain positive definite. For Ωφ, the first three terms of Eq. (2.19) can be viewed as a relative kinetic contribution, while the last term represents the relative potential part. If the potential is non-negative, the potential contribution is always non-negative, whereas the kinetic contribution can become negative in certain non-minimal regimes. It is useful to combine (2.14) and (2.15) to obtain an expression for the Hubble flow parameter, ̇H H2 = - 1 2fH2m2 P ̇φ2 + ρm + 4 3ρr - ̈f 2fH2 + ̇f 2fH + δP 3 4 ̇f fH !2 . (2.20) For later comparison with ΛCDM and to keep the background equations in their standard GR form, it is convenient to define an effective dark sector with density and pressure ρeff = ρr f(φ) + ρm f(φ) + ρφ,eff, (2.21) peff = 1 3 ρr f(φ) + pφ,eff, (2.22) where we have defined the effective density and pressure of the field as ρφ,eff = m2 P f(φ) " ̇φ2 2 + V (φ) -3H f,φ(φ) ̇φ -δP 3 4 f2 ,φ(φ) f(φ) ̇φ2 # , (2.23) and pφ,eff = m2 P f(φ) " ̇φ2 2 + 2H f,φ(φ) ̇φ + f,φφ(φ) ̇φ2 + f,φ(φ) ̈φ + δP 3 4 f2 ,φ(φ) f(φ) ̇φ2 -V (φ) # . (2.24) With these definitions the background equations take the GR form 3m2 PH2 = ρeff and 3m2 PH2 + 2m2 P ̇H = -peff, and the total equation of state, wtot ≡peff/ρeff, satisfies wtot = -1 -2 3 ̇H H2 = Ωr 3 + 1 3fH2 " ̇φ2 2 + 2Hf,φ ̇φ + f,φφ ̇φ2 + f,φ ̈φ + δP 3 4 f2 ,φ f ̇φ2 -V (φ) # . (2.25) In the GR limit, the standard behaviors are recovered: during radiation domination weff = 1/3, during matter domination weff = 0, and under potential domination weff = -1. Cosmic acceleration requires weff 0. The de Sitter solution is characterized by weff = -1 and a constant Hubble parameter H = p Λ/3. A minimally coupled scalar cannot sustain a phantom crossing since its expansion history is restricted to -1 ≤weff ≤1, unless the Lagrangian contains a ghost term [213, 214]. In contrast, a non-minimal coupling can support phases with weff 0. Using the background equations (2.14)-(2.25) one finds the closed system - 7 - for X = (x1, x2, x3, x4): dx1 dN = β √ 6 + εx1 dx2 dN = - √ 6 2 λx1x2 + εx2 dx3 dN = -4x3 + 2εx3 dx4 dN = √ 6x1 (2.28) where β ≡ ̈φ mPH2 = -3 √ 6x1 + 3λx2 2 + ξx4 R H2 = = 1 1 + 6ξ2x2 4 f (1 -δP) ( -3 √ 6x1 + 3λx2 2 + ξx4 " 3 -9 f (x2 1 -x2 2) -9x3 f + + 36ξx2 1 f (δP -1) + 6 √ 6ξx4x1 f (3δP -2) + 18δPξ2x2 1x2 4 f2 #) (2.29) and ε ≡- ̇H H2 = 3 2 + 3 2f x2 1 -x2 2 + x3 2f + 6ξx2 1 f + 2 √ 6ξx1x4 f -9δP ξ2x2 1x2 4 f2 + βξx4 f (2.30) From this, the effective barotropic parameter of the universe wtot can be read off from the expression ε = 3(1 + wtot)/2 as wtot = 1 A x2 1 -x2 2 + x3 3f + 4ξx2 1 f + 4 √ 6ξx1x4 3f -δP 6ξ2x2 1x2 4 f2 + 2βξx4 3f . (2.31) The effective barotropic parameter of the field reads wφ,eff = pφ,eff ρφ,eff , (2.32) where pφ,eff and ρφ,eff are given by Eqs. (2.24) and (2.23), respectively. In Fig. 1 we present relevant cosmological parameters as a function of the number of e-folds and redshift, product of solving Eq. (2.28) numerically. The evolution is standard, following radiation-dominated (RD) and matter-dominated (MD) eras, until around z ≲10, when quintessence begins to dominate. Its barotropic parameter becomes phantom for a brief period, before the present time when wφ,eff = -0.81. Solving the fixed-point conditions dx1 dN = dx2 dN = dx3 dN = dx4 dN = 0 , (2.33) together with the algebraic constraint, yields three physically distinct branches: (i) A de Sitter branch in which the scalar is frozen, matter and radiation vanish, and the expansion is characterized by weff = -1 with Ωφ = 1. This branch splits into two algebraic solutions, denoted DE-a and DE-b. Both correspond to accelerated expansion; however, their existence conditions differ: DE-a exists whenever λ2 ≤4ξ or for ξ 0), whereas DE-b exists only if λ2 ≤4ξ and becomes unphysical for ξ 50τ. Params ΛCDM φCDM ξφCDM ξ - - -2.50(-1.41)+1.7 -0.42 λ - 0.81(0.85)+0.18 -0.11 1.68(1.95)+0.36 -0.41 Ωm 0.3039(0.3039) ± 0.0038 0.3153(0.3160) ± 0.0056 0.3179(0.3193) ± 0.0059 H0 68.43(68.43) ± 0.30 67.00(66.91) ± 0.59 66.82(66.70) ± 0.58 Ωb 0.04791(0.04791)±0.00033 0.05016(0.05029)±0.00090 0.05036(0.05049)±0.00086 Table 4: CMB+BAO+SN 68% (1σ) credible intervals and best-fit values (in parentheses) for the parameters of ΛCDM, φCDM and ξφCDM. H0 is given in units of km/s/Mpc. Having ensured convergence, we take the initial chains and conservatively discard an initial 30% burn-in length (exceeding 5τ in all models) to ensure the sampling is independent of the initial state. From the chains, we determine the matter density at present as a derived parameter. Posterior distributions are then plotted using the publicly available package GetDist [237]. The maximum a posteriori (MAP) parameters (which coincide with their best-fit values since we assume flat priors) for each model are found by using a Nelder-Mead simplex minimizer [238] starting from the maximum likelihood sample obtained from MCMC. Model Planck DESI DR2 DESY5 Total ∆χ2 ξφCDM 0.46 -3.41 -11.70 -14.66 φCDM 2.00 -1.18 -10.61 -9.80 log B ξφCDM - - - 5.52 φCDM - - - 2.45 Table 5: Change in χ2 (first two rows) and log B (last two rows) for ξφCDM and φCDM relative to ΛCDM, evaluated at the CMB+BAO+SN best-fit parameters. The contribution from each dataset is shown in the corresponding column, with the total reported in the last column. A negative (positive) ∆χ2 corresponds to an improvement (worsening) in fit. A positive (negative) log B implies evidence in favor (against) of the model over ΛCDM. We assess the evidence for a given model M, defined as Z = Z dΘL(d|Θ, M)p(Θ|M), (3.18) where the likelihood L(d|Θ, M) is defined in Eq. (3.16) and p(Θ|M) is the prior, by using the publicly available package MCEvidence [239]. We do so by taking into account prior volumes and thinning the chains by τ/2 to reduce autocorrelation between the samples that are used for the calculation. We interpret results according to the Jeffrey's scale [240, 241]: given two models M0 and M1, with evidences Z0 and Z1, respectively, the evidence of M1 over M0 - 15 - is determined inconclusive if log BM1,M0 5.0. 10-1 100 z 0.98 0.99 1.00 1.01 1.02 1.03 1.04 (DM/rd)/(DΛ M/rΛ d ) ξφCDM φCDM DESI DR2 10-1 100 z 0.96 0.98 1.00 1.02 1.04 (DH/rd)/(DΛ H/rΛ d ) ξφCDM φCDM DESI DR2 10-1 100 z 0.98 0.99 1.00 1.01 1.02 (DV /rd)/(DΛ V /rΛ d ) ξφCDM φCDM DESI DR2 Figure 4: BAO distances DM(z)/zd (left) DH(z)/zd (center) and DV (z)/zd (right) predicted by ξφCDM (full blue) and φCDM (dashed orange) relative to ΛCDM (horizontal dashed black). We use the CMB+BAO+SN best-fit parameter values for each model. Black dots represent the DESI DR2 residuals. In gray are DV (z)/zd values derived from the DM(z)/zd and DH(z)/zd actual data points. 4 Results In this section, we compare the predictions of our models with the data and analyze the constraints on the parameters obtained from sampling their posterior distributions. 4.1 Observational constraints In Table 4 we report the CMB+BAO+SN 68% confidence intervals for the parameters of ξφCDM, φCDM, and ΛCDM, along with their best-fit values. The φCDM best-fit value of λ satisfies λ √ 2 cannot support acceleration. As our phase-space analysis reveals, this condition is relaxed with the addition of a non-minimal coupling. Indeed, the existence of the late-time acceleration attractor is guaranteed as long as ξ 0 and j + k √ 2. In order to be more precise, we utilize - 18 - -10 -5 0 ξ 0.0 0.2 0.4 0.6 0.8 1.0 p(ξ|d) ξφCDM 68% 95% 98.8% 0 1 2 3 λ 0.0 0.2 0.4 0.6 0.8 1.0 p(λ|d) ξφCDM 68% 95% 99.7% 99.98% 0.0 0.5 1.0 1.5 λ 0.0 0.2 0.4 0.6 0.8 1.0 p(λ|d) φCDM 68% 95% 99.7% 99.94% 0.0 0.2 0.4 x4(z = 0) 0.0 0.2 0.4 0.6 0.8 1.0 p(x4(z = 0)|d) ξφCDM 68% 95% 99.7% Figure 7: Posterior distribution of ξ (top left), λ in ξφCDM (top right), λ in φCDM (bottom left), and the total field excursion x4(z = 0) = ∆φ mP in ξφCDM (bottom right) using CMB+DESI+DESY5, with the 68% (blue), 95% (orange), and 99.7% (green) credible intervals. For p(ξ|d) we show the largest credible interval such that ξ 0 in red. From p(x4(z = 0)|d) we see that the total field excursion remains sub-Planckian. the Kernel Density Estimate (KDE) from GetDist to obtain the different credible intervals at which ξ and λ are non-zero, i.e., we find the significance at which dark energy is dynamical and non-minimally coupled. The results are shown in Fig. 7. For the non-minimal coupling, we find that ξ 0 at 99.98% and 99.94%, for ξφCDM and φCDM, respectively. This means that in both models λ is non-zero at more than 3σ. Finally, we plot the posterior distribution of the effective barotropic parameter of the field, for both ξφCDM and φCDM, as a function of redshift in Fig. 8. Functional posterior distributions were generated using the publicly available Python package fgivenx [242], utilizing the same samples as those in the MCEvidence analysis. For the non-minimal coupling case, we find that the ΛCDM line w = -1 lies outside of the 2σ confidence band, indicating a preference for a phantom crossing at ≲3σ C.L.. As for the minimal coupling case, the data - 19 - prefers a barotropic parameter w ̸= -1 at more than 2σ but less than 3σ. However, above this confidence level, the data is unable to distinguish between dynamical dark energy and a cosmological constant. Even though we found that strictly speaking λ > 0 at more than 3σ, the time evolution of wφ also depends on the field dynamics. Indeed, our numerical analysis reveals that to have wφ,eff(z = 0) > -0.99 one needs λ > 0.25, which lies in the 98% C.L., consistent with Fig. 8. However, a smaller value like λ = 0.08 gives wφ,eff(z = 0) = -0.9992, extremely close to ΛCDM. 0 1 2 3 4 5 z -1.0 -0.9 -0.8 -0.7 wφ,eff(z) ξφCDM Best fit 1σ 2σ 3σ 0 1 2 3 4 5 z -0.95 -0.90 -0.85 -0.80 wφ(z) φCDM Best fit 1σ 2σ 3σ Figure 8: Posterior distributions of the effective barotropic parameter of quintessence as a function of redshift for ξφCDM (left) and φCDM (right), given the posterior distributions of the model parameters. We show the 68%, 95%, and 99.7% confidence intervals in progressively lighter shades of blue and the best-fit barotropic parameter in red. 4.2 Palatini vs Metric In this section, we assess the question of whether the data has anything to say about the degrees of freedom of the theory of gravity. To do so, we repeat the entire MCMC analysis pipeline in the metric formalism, labeled ̃ξφCDM for convenience. Params Metric ̃ξφCDM Palatini ξφCDM ξ -3.68(-2.05)+3.1 -0.97 -2.50(-1.41)+1.7 -0.42 λ 1.72(1.61)+0.33 -0.47 1.68(1.95)+0.36 -0.41 Ωm 0.3163(0.3174) ± 0.0056 0.3179(0.3193) ± 0.0059 H0 66.95(66.85) ± 0.56 66.82(66.70) ± 0.58 Ωb 0.05019(0.05029)±0.00084 0.05036(0.05049)±0.00086 Table 6: CMB+BAO+SN 68% (1σ) credible intervals and best-fit values (in parentheses) for the parameters in the metric ̃ξφCDM and Palatini ξφCDM formalisms. - 20 - Model Planck DESI DR2 DESY5 Total ∆χ2 ξφCDM 0.46 -3.41 -11.70 -14.66 ̃ξφCDM 0.83 -2.96 -10.82 -12.95 log B ξφCDM - - - 5.52 ̃ξφCDM - - - 4.78 Table 7: Change in χ2 (first two rows) and log B (last two rows) for the metric ̃ξφCDM and Palatini ξφCDM formalisms, relative to ΛCDM, evaluated at the CMB+BAO+SN bestfit parameters. The contribution from each dataset is shown in the corresponding column, with the total reported in the last column. A negative (positive) ∆χ2 corresponds to an improvement (worsening) in fit. A positive (negative) log B implies evidence in favor (against) of the model over ΛCDM. -10 -5 0 ξ 0.048 0.050 0.052 Ωb 65 66 67 68 H0 0.30 0.31 0.32 0.33 Ωm 1 2 3 λ 1 2 3 λ 0.30 0.31 0.32 0.33 Ωm 65 66 67 68 H0 0.048 0.050 0.052 Ωb Metric ˆξφCDM Palatini ξφCDM Figure 9: Parameter posteriors for in the Palatini ξφCDM (blue) and metric ̃ξφCDM (red) formalisms using CMB+DESI+DESY5. The darker (lighter) shaded regions represent the 68% (95%) credible intervals. In Table 6 we report the CMB+BAO+SN 68% confidence intervals for the parameters of ̃ξφCDM along with its best-fit values. For the convenience of the reader, we also repeat the ξφCDM values. Just like in the Palatini formalism, the late-time accelerating attractor exists for all values of λ as long as ξ 0 at 99.94% C.L. for a minimally coupled scalar field. This means that the three datasets, when combined, prefer an evolving equation-of-state for dark energy at > 3σ. Furthermore, for the non-minimally coupled field, we find that ξ -1 at z < 0.5. This behavior changes impose a crossing in the distance modulus with respect to the ΛCDM model and a dip in the comoving and Hubble distance near z = 0.4 that improves the fit to both datasets. Our results demonstrate, for the first time, that a non-minimally coupled scalar field in the Palatini formalism significantly improves the fit to the CMB, BAO, and SNe data in a joint analysis, when compared to ΛCDM. The improvement rules out any statistical uncertainties and leaves room only for the systematics in the SNe or BAO measurements as an example that counters the beyond standard model physics. Our results also demonstrate a marginal improvement with respect to the analogous theory in the metric formalism. The evidence for a non-minimal coupling also raises challenges for the future. Indeed, its presence leads to a time evolution in the effective gravitational constant and to fifth forces [243], both tightly constrained by experiments [244, 245]. Regarding the first, a timevarying gravitational constant changes the Poisson equation for the gravitational potential, thereby affecting the growth of structure. As for the latter, the Parametrized Post-Newtonian parameters are strongly bounded by Solar System experiments, such as the Cassini data [246]. Although we find a sub-Planckian field displacement in our model, these issues should be addressed in detail. We leave that for future work. - 23 - Acknowledgements We thank Kushal Lodha for insightful discussions. SSL and DKH would like to thank the Indo-French Centre for the Promotion of Advanced Research (IFCPAR/CEFIPRA) for support of the proposal 6704-4 titled 'Testing flavors of the early universe beyond vanilla models with cosmological observations' under the Collaborative Scientific Research Programme. AK was supported by the Estonian Research Council grants PSG761, RVTT3, RVTT7 and by the Center of Excellence program TK202. Computational portions of this work were carried out on the Kamet high performance computing cluster at the (IMSc), Chennai, maintained and supported by the High-Performance Computing Center of IMSc. References [1] A. A. Starobinsky, A New Type of Isotropic Cosmological Models Without Singularity, Phys. Lett. B 91 (1980) 99-102. [2] A. H. Guth, The Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems, Phys. Rev. D 23 (1981) 347-356. [3] A. D. Linde, A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Homogeneity, Isotropy and Primordial Monopole Problems, Phys. Lett. B 108 (1982) 389-393. [4] A. Albrecht and P. J. Steinhardt, Cosmology for Grand Unified Theories with Radiatively Induced Symmetry Breaking, Phys. Rev. Lett. 48 (1982) 1220-1223. [5] Planck Collaboration, N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641 (2020) A6, [ ]. [Erratum: Astron.Astrophys. 652, C4 (2021)]. [6] A. G. Riess et al., A Comprehensive Measurement of the Local Value of the Hubble Constant with 1 km/s//Mpc Uncertainty from the Hubble Space Telescope and the SH0ES Team, Astrophys. J. Lett. 934 (2022), no. 1 L7, [ ]. [7] M. Kamionkowski and A. G. Riess, The Hubble Tension and Early Dark Energy, Ann. Rev. Nucl. Part. Sci. 73 (2023) 153-180, [ ]. [8] J. L. Bernal, L. Verde, and A. G. Riess, The trouble with H0, JCAP 10 (2016) 019, [ ]. [9] E. Abdalla et al., Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies, JHEAp 34 (2022) 49-211, [ ]. [10] DESI Collaboration, A. G. Adame et al., DESI 2024 VI: cosmological constraints from the measurements of baryon acoustic oscillations, JCAP 02 (2025) 021, [ ]. [11] DESI Collaboration, M. Abdul Karim et al., DESI DR2 results. II. Measurements of baryon acoustic oscillations and cosmological constraints, Phys. Rev. D 112 (2025), no. 8 083515, [ ]. [12] J. Khoury, M.-X. Lin, and M. Trodden, Apparent w < -1 and a Lower S8 from Dark Axion and Dark Baryons Interactions, . [13] J. Pan and G. Ye, Non-minimally coupled gravity constraints from DESI DR2 data, . [14] S. Nesseris, Y. Akrami, and G. D. Starkman, To CPL, or not to CPL? What we have not learned about the dark energy equation of state, . - 24 - [15] S. Pan, S. Paul, E. N. Saridakis, and W. Yang, Interacting dark energy after DESI DR2: a challenge for ΛCDM paradigm?, . [16] D. A. Kessler, L. A. Escamilla, S. Pan, and E. Di Valentino, One-parameter dynamical dark energy: Hints for oscillations, . [17] Y. Tiwari, U. Upadhyay, and R. K. Jain, Exploring cosmological imprints of phantom crossing with dynamical dark energy in Horndeski gravity, Phys. Rev. D 111 (2025), no. 4 043530, [ ]. [18] M. Berbig, Kick it like DESI: PNGB quintessence with a dynamically generated initial velocity, JCAP 03 (2025) 015, [ ]. [19] D. Shlivko, P. J. Steinhardt, and C. L. Steinhardt, Optimal parameterizations for observational constraints on thawing dark energy, . [20] D. Benisty, The Scale for the Expansion of the Universe: From Local Structures to Cosmology, . [21] Y. Akrami, G. Alestas, and S. Nesseris, Has DESI detected exponential quintessence?, . [22] W. J. Wolf, C. Garc ́ıa-Garc ́ıa, T. Anton, and P. G. Ferreira, Assessing Cosmological Evidence for Nonminimal Coupling, Phys. Rev. Lett. 135 (2025), no. 8 081001, [ ]. [23] S. H. Mirpoorian, K. Jedamzik, and L. Pogosian, Is Dynamical Dark Energy Necessary? DESI BAO and Modified Recombination, . [24] D. Wang and D. Mota, Did DESI DR2 truly reveal dynamical dark energy?, . [25] B. R. Dinda and R. Maartens, Physical vs phantom dark energy after DESI: thawing quintessence in a curved background, Mon. Not. Roy. Astron. Soc. 542 (2025) L31-L35, [ ]. [26] D. Wang, Questioning cosmic acceleration with desi: The big stall of the universe, . [27] M. Cortˆes and A. R. Liddle, On desi's dr2 exclusion of λcdm, . [28] R. de Souza, G. Rodrigues, and J. Alcaniz, Thawing quintessence and transient cosmic acceleration in light of DESI, . [29] S. Afroz and S. Mukherjee, Hint towards inconsistency between BAO and Supernovae Dataset: The Evidence of Redshift Evolving Dark Energy from DESI DR2 is Absent, . [30] C. Garcia-Quintero et al., Cosmological implications of DESI DR2 BAO measurements in light of the latest ACT DR6 CMB data, . [31] M. Scherer, M. A. Sabogal, R. C. Nunes, and A. De Felice, Challenging the ΛCDM model: 5σ evidence for a dynamical dark energy late-time transition, Phys. Rev. D 112 (2025), no. 4 043513, [ ]. [32] S.-F. Chen and M. Zaldarriaga, It's all Ok: curvature in light of BAO from DESI DR2, JCAP 08 (2025) 014, [ ]. [33] G. Ye and S.-J. Lin, On the tension between DESI DR2 BAO and CMB, . [34] X. Chen and A. Loeb, Evolving Dark Energy or Evolving Dark Matter?, . [35] J. Smirnov, Dynamical Dark Energy Emerges from Massive Gravity, . [36] L. Giani, R. Von Marttens, and O. F. Piattella, The matter with(in) CPL, . [37] D. Wang, Quintessence Dark Matter, . - 25 - [38] S. Hussain, S. Arora, A. Wang, and B. Rose, Probing the Dynamics of Gaussian Dark Energy Equation of State Using DESI BAO, . [39] Y. Wang and K. Freese, Model-Independent Dark Energy Measurements from DESI DR2 and Planck 2015 Data, . [40] S. Lee, Probing Time-Varying Dark Energy with DESI: The Crucial Role of Precision Matter Density (Ωm0) Measurements, . [41] E. O. Colg ́ain, S. Pourojaghi, and M. M. Sheikh-Jabbari, On the Pipeline Dependence of DESI Dynamical Dark Energy, . [42] Z. Bayat and M. P. Hertzberg, Examining quintessence models with DESI data, JCAP 08 (2025) 065, [ ]. [43] R. D'Agostino and F. Bajardi, Teleparallel dark energy in a nonflat universe, Phys. Rev. D 111 (2025), no. 10 104076, [ ]. [44] M. A. Sabogal and R. C. Nunes, Robust evidence for dynamical dark energy from DESI galaxy-CMB lensing cross-correlation and geometric probes, JCAP 09 (2025) 084, [ ]. [45] Y. Myrzakulov, S. Hussain, and M. Shahalam, Phase space and Data analyses of a non-minimally coupled scalar field system with decaying dark energy model, . [46] J. M. Cline and V. Muralidharan, Simple quintessence models in light of DESI-BAO observations, Phys. Rev. D 112 (2025), no. 6 063539, [ ]. [47] R. E. Keeley, A. Shafieloo, and W. L. Matthewson, Could We Be Fooled about Phantom Crossing?, . [48] L. A. Anchordoqui, I. Antoniadis, N. Cribiori, A. Hasar, D. Lust, J. Masias, and M. Scalisi, Bulk/boundary modular quintessence and DESI, JHEP 09 (2025) 128, [ ]. [49] S. Lee, Unveiling the Pitfalls of CPL Parametrization at High Redshifts: A Critical Assessment of the ω0ωaCDM Model with DESI DR2 BAO Data, . [50] S. Lee, The Impact of Ωm0 Prior Bias on Cosmological Parameter Estimation: Reconciling DESI DR2 BAO and Pantheon+ SNe Data Combination Results, . [51] E. ̈Oz ̈ulker, E. Di Valentino, and W. Giar`e, Dark Energy Crosses the Line: Quantifying and Testing the Evidence for Phantom Crossing, . [52] I. D. Gialamas, G. H ̈utsi, M. Raidal, J. Urrutia, M. Vasar, and H. Veerm ̈ae, Quintessence and phantoms in light of DESI 2025, Phys. Rev. D 112 (2025), no. 6 063551, [ ]. [53] J.-X. Li and S. Wang, Reconstructing dark energy with model independent methods after DESI DR2 BAO, . [54] S. Dhawan and E. M ̈ortsell, Implications for dark energy of cosmic transparency in light of DESI data, . [55] S. Lee, Constraining ΛCDM, ωCDM, and ω0ωaCDM models with DESI DR2 BAO: Redshift-Resolved Diagnostics and the Role of rd, . [56] T. Liu, X. Li, T. Xu, M. Biesiada, and J. Wang, Torsion cosmology in the light of DESI, supernovae and CMB observational constraints, . [57] M. H ̈og ̊as and E. M ̈ortsell, Bimetric gravity improves the fit to DESI BAO and eases the Hubble tension, . [58] D.-C. Qiang, J.-Y. Jia, and H. Wei, New Insights into Dark Energy from DESI DR2 with CMB and SNIa, . - 26 - [59] S. S. Mishra, W. L. Matthewson, V. Sahni, A. Shafieloo, and Y. Shtanov, Braneworld Dark Energy in light of DESI DR2, . [60] S. Bhattacharjee, S. Halder, J. de Haro, S. Pan, and E. N. Saridakis, Accelerating Universe without dark energy: matter creation after DESI DR2, . [61] M. Braglia, X. Chen, and A. Loeb, Exotic Dark Matter and the DESI Anomaly, . [62] S. Goldstein, M. Celoria, and F. Schmidt, Monodromic Dark Energy and DESI, . [63] H. Chaudhary, S. Capozziello, V. K. Sharma, and G. Mustafa, Does DESI DR2 challenge ΛCDM paradigm ?, . [64] J. Wang, H. Yu, and P. Wu, Revisiting cosmic acceleration with DESI BAO, Eur. Phys. J. C 85 (2025), no. 8 853, [ ]. [65] M. Ishak and L. Medina-Varela, Is this the fall of the ΛCDM throne? Evidence for dynamical dark energy rising from combinations of different types of datasets, . [66] J. A. L. Torres and A. de la Macorra, Bound dark energy: Particle origin of dark energy with DESI BAO and DES supernova data, . [67] A. G ́omez-Valent and A. Gonz ́alez-Fuentes, Effective Phantom Divide Crossing with Standard and Negative Quintessence, . [68] J.-Q. Wang, R.-G. Cai, Z.-K. Guo, and S.-J. Wang, Resolving the Planck-DESI tension by non-minimally coupled quintessence, . [69] R. Chen, J. M. Cline, V. Muralidharan, and B. Salewicz, Quintessential dark energy crossing the phantom divide, . [70] L. W. K. Goh and A. N. Taylor, Phantom Crossing with Quintom Models, . [71] W. J. Wolf, P. G. Ferreira, and C. Garc ́ıa-Garc ́ıa, Cosmological constraints on Galileon dark energy with broken shift symmetry, . [72] G. G. Luciano and A. Paliathanasis, Late-Time Cosmological Constraints on Kaniadakis Holographic Dark Energy, . [73] H. Chaudhary, S. Capozziello, S. Praharaj, S. K. J. Pacif, and G. Mustafa, Is the ΛCDM Model in Crisis?, . [74] S. Roy Choudhury, T. Okumura, and K. Umetsu, Cosmological constraints on non-phantom dynamical dark energy with DESI Data Release 2 Baryon Acoustic Oscillations: A 3σ+ lensing anomaly, . [75] C.-G. Park and B. Ratra, Updated observational constraints on φCDM dynamical dark energy cosmological models, . [76] M. Artola, I. Ayuso, R. Lazkoz, and V. Salzano, Is CPL dark energy a mirage?, . [77] S. L. Guedezounme, B. R. Dinda, and R. Maartens, Phantom crossing or dark interaction?, . [78] H. Chaudhary, V. K. Sharma, S. Capozziello, and G. Mustafa, Probing departures from ΛCDM by late-time datasets, . [79] T.-N. Li, G.-H. Du, Y.-H. Li, Y. Li, J.-L. Ling, J.-F. Zhang, and X. Zhang, Updated constraints on interacting dark energy: A comprehensive analysis using multiple CMB probes, DESI DR2, and supernovae observations, . [80] M. Rezaei, S. Pan, W. Yang, and D. F. Mota, Is Dark Energy Changing? Probing the Universe's Expansion with present and future astronomical probes, . - 27 - [81] Y.-M. Zhang, T.-N. Li, G.-H. Du, S.-H. Zhou, L.-Y. Gao, J.-F. Zhang, and X. Zhang, Alleviating the H0 tension through new interacting dark energy model in light of DESI DR2, . [82] M. W. Toomey, G. Montefalcone, E. McDonough, and K. Freese, How Theory-Informed Priors Affect DESI Evidence for Evolving Dark Energy, . [83] Y.-H. Yao, Y.-H. Shen, T.-N. Li, G.-H. Du, and Y. Gong, Examining a new form of non-standard dark matter using DESI DR2 data, . [84] F. Englert and R. Brout, Broken Symmetry and the Mass of Gauge Vector Mesons, Phys. Rev. Lett. 13 (1964) 321-323. [85] P. W. Higgs, Broken symmetries, massless particles and gauge fields, Phys. Lett. 12 (1964) 132-133. [86] J. Martin, C. Ringeval, and V. Vennin, Encyclopædia Inflationaris: Opiparous Edition, Phys. Dark Univ. 5-6 (2014) 75-235, [ ]. [87] W. Hu, R. Barkana, and A. Gruzinov, Cold and fuzzy dark matter, Phys. Rev. Lett. 85 (2000) 1158-1161, [astro-ph/0003365]. [88] J. M. Cline, K. Kainulainen, P. Scott, and C. Weniger, Update on scalar singlet dark matter, Phys. Rev. D 88 (2013) 055025, [ ]. [Erratum: Phys.Rev.D 92, 039906 (2015)]. [89] D. J. E. Marsh, Axion Cosmology, Phys. Rept. 643 (2016) 1-79, [ ]. [90] P. Svrcek and E. Witten, Axions In String Theory, JHEP 06 (2006) 051, [hep-th/0605206]. [91] M. Cicoli, M. Goodsell, and A. Ringwald, The type IIB string axiverse and its low-energy phenomenology, JHEP 10 (2012) 146, [ ]. [92] A. Arvanitaki, S. Dimopoulos, S. Dubovsky, N. Kaloper, and J. March-Russell, String Axiverse, Phys. Rev. D 81 (2010) 123530, [ ]. [93] V. Faraoni, Superquintessence, Int. J. Mod. Phys. D 11 (2002) 471-482, [astro-ph/0110067]. [94] A. De Felice and S. Tsujikawa, f(R) theories, Living Rev. Rel. 13 (2010) 3, [ ]. [95] B. Ratra and P. J. E. Peebles, Cosmological Consequences of a Rolling Homogeneous Scalar Field, Phys. Rev. D 37 (1988) 3406. [96] C. Wetterich, Cosmology and the Fate of Dilatation Symmetry, Nucl. Phys. B 302 (1988) 668-696, [ ]. [97] R. R. Caldwell, R. Dave, and P. J. Steinhardt, Cosmological imprint of an energy component with general equation of state, Phys. Rev. Lett. 80 (1998) 1582-1585, [astro-ph/9708069]. [98] N. D. Birrell and P. C. W. Davies, Quantum Fields in Curved Space. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge, UK, 1982. [99] L. E. Parker and D. Toms, Quantum Field Theory in Curved Spacetime: Quantized Field and Gravity. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 8, 2009. [100] C. G. Callan, Jr., S. R. Coleman, and R. Jackiw, A New improved energy - momentum tensor, Annals Phys. 59 (1970) 42-73. [101] A. Palatini, Deduzione invariantiva delle equazioni gravitazionali dal principio di Hamilton, Rend. Circ. Mat. Palermo 43 (1919), no. 1 203-212. [102] M. Ferraris, M. Francaviglia, and C. Reina, Variational formulation of general relativity from 1915 to 1925 "Palatini's method" discovered by Einstein in 1925, Gen. Rel. Grav. 14 (1982), no. 3 243-254. - 28 - [103] F. Bauer and D. A. Demir, Inflation with Non-Minimal Coupling: Metric versus Palatini Formulations, Phys. Lett. B 665 (2008) 222-226, [ ]. [104] F. Bauer, Filtering out the cosmological constant in the Palatini formalism of modified gravity, Gen. Rel. Grav. 43 (2011) 1733-1757, [ ]. [105] N. Tamanini and C. R. Contaldi, Inflationary Perturbations in Palatini Generalised Gravity, Phys. Rev. D 83 (2011) 044018, [ ]. [106] F. Bauer and D. A. Demir, Higgs-Palatini Inflation and Unitarity, Phys. Lett. B 698 (2011) 425-429, [ ]. [107] S. Rasanen and P. Wahlman, Higgs inflation with loop corrections in the Palatini formulation, JCAP 11 (2017) 047, [ ]. [108] T. Tenkanen, Resurrecting Quadratic Inflation with a non-minimal coupling to gravity, JCAP 12 (2017) 001, [ ]. [109] A. Racioppi, Coleman-Weinberg linear inflation: metric vs. Palatini formulation, JCAP 12 (2017) 041, [ ]. [110] T. Markkanen, T. Tenkanen, V. Vaskonen, and H. Veerm ̈ae, Quantum corrections to quartic inflation with a non-minimal coupling: metric vs. Palatini, JCAP 03 (2018) 029, [ ]. [111] L. J ̈arv, A. Racioppi, and T. Tenkanen, Palatini side of inflationary attractors, Phys. Rev. D 97 (2018), no. 8 083513, [ ]. [112] C. Fu, P. Wu, and H. Yu, Inflationary dynamics and preheating of the nonminimally coupled inflaton field in the metric and Palatini formalisms, Phys. Rev. D 96 (2017), no. 10 103542, [ ]. [113] A. Racioppi, New universal attractor in nonminimally coupled gravity: Linear inflation, Phys. Rev. D 97 (2018), no. 12 123514, [ ]. [114] P. Carrilho, D. Mulryne, J. Ronayne, and T. Tenkanen, Attractor Behaviour in Multifield Inflation, JCAP 06 (2018) 032, [ ]. [115] A. Kozak and A. Borowiec, Palatini frames in scalar-tensor theories of gravity, Eur. Phys. J. C 79 (2019), no. 4 335, [ ]. [116] S. Rasanen and E. Tomberg, Planck scale black hole dark matter from Higgs inflation, JCAP 01 (2019) 038, [ ]. [117] S. Rasanen, Higgs inflation in the Palatini formulation with kinetic terms for the metric, Open J. Astrophys. 2 (2019), no. 1 1, [ ]. [118] J. P. B. Almeida, N. Bernal, J. Rubio, and T. Tenkanen, Hidden inflation dark matter, JCAP 03 (2019) 012, [ ]. [119] K. Shimada, K. Aoki, and K.-i. Maeda, Metric-affine Gravity and Inflation, Phys. Rev. D 99 (2019), no. 10 104020, [ ]. [120] T. Takahashi and T. Tenkanen, Towards distinguishing variants of non-minimal inflation, JCAP 04 (2019) 035, [ ]. [121] R. Jinno, K. Kaneta, K.-y. Oda, and S. C. Park, Hillclimbing inflation in metric and Palatini formulations, Phys. Lett. B 791 (2019) 396-402, [ ]. [122] J. Rubio and E. S. Tomberg, Preheating in Palatini Higgs inflation, JCAP 04 (2019) 021, [ ]. [123] N. Bostan, Non-minimally coupled quartic inflation with Coleman-Weinberg one-loop corrections in the Palatini formulation, Phys. Lett. B 811 (2020) 135954, [ ]. - 29 - [124] N. Bostan, Quadratic, Higgs and hilltop potentials in the Palatini gravity, Commun. Theor. Phys. 72 (2020) 085401, [ ]. [125] A. Racioppi, Non-Minimal (Self-)Running Inflation: Metric vs. Palatini Formulation, JHEP 21 (2020) 011, [ ]. [126] T. Tenkanen, Tracing the high energy theory of gravity: an introduction to Palatini inflation, Gen. Rel. Grav. 52 (2020), no. 4 33, [ ]. [127] M. Shaposhnikov, A. Shkerin, and S. Zell, Quantum Effects in Palatini Higgs Inflation, JCAP 07 (2020) 064, [ ]. [128] A. Borowiec and A. Kozak, New class of hybrid metric-Palatini scalar-tensor theories of gravity, JCAP 07 (2020) 003, [ ]. [129] L. J ̈arv, A. Karam, A. Kozak, A. Lykkas, A. Racioppi, and M. Saal, Equivalence of inflationary models between the metric and Palatini formulation of scalar-tensor theories, Phys. Rev. D 102 (2020), no. 4 044029, [ ]. [130] A. Karam, M. Raidal, and E. Tomberg, Gravitational dark matter production in Palatini preheating, JCAP 03 (2021) 064, [ ]. [131] J. McDonald, Does Palatini Higgs Inflation Conserve Unitarity?, JCAP 04 (2021) 069, [ ]. [132] M. L ̊angvik, J.-M. Ojanper ̈a, S. Raatikainen, and S. Rasanen, Higgs inflation with the Holst and the Nieh-Yan term, Phys. Rev. D 103 (2021), no. 8 083514, [ ]. [133] T. Tenkanen and L. Visinelli, Axion dark matter from Higgs inflation with an intermediate H∗, JCAP 08 (2019) 033, [ ]. [134] M. Shaposhnikov, A. Shkerin, I. Timiryasov, and S. Zell, Higgs inflation in Einstein-Cartan gravity, JCAP 02 (2021) 008, [ ]. [Erratum: JCAP 10, E01 (2021)]. [135] M. Shaposhnikov, A. Shkerin, I. Timiryasov, and S. Zell, Einstein-Cartan gravity, matter, and scale-invariant generalization , JHEP 10 (2020) 177, [ ]. [136] I. D. Gialamas, A. Karam, A. Lykkas, and T. D. Pappas, Palatini-Higgs inflation with nonminimal derivative coupling, Phys. Rev. D 102 (2020), no. 6 063522, [ ]. [137] Y. Mikura, Y. Tada, and S. Yokoyama, Conformal inflation in the metric-affine geometry, EPL 132 (2020), no. 3 39001, [ ]. [138] S. Verner, Quintessential Inflation in Palatini Gravity, JCAP 04 (2021) [ ]. [139] V.-M. Enckell, S. Nurmi, S. R ̈as ̈anen, and E. Tomberg, Critical point Higgs inflation in the Palatini formulation, JHEP 04 (2021) 059, [ ]. [140] Y. Reyimuaji and X. Zhang, Natural inflation with a nonminimal coupling to gravity, JCAP 03 (2021) 059, [ ]. [141] A. Karam, S. Karamitsos, and M. Saal, β-function reconstruction of Palatini inflationary attractors, JCAP 10 (2021) 068, [ ]. [142] Y. Mikura, Y. Tada, and S. Yokoyama, Minimal k-inflation in light of the conformal metric-affine geometry, Phys. Rev. D 103 (2021), no. 10 L101303, [ ]. [143] M. Kubota, K.-Y. Oda, K. Shimada, and M. Yamaguchi, Cosmological Perturbations in Palatini Formalism, JCAP 03 (2021) 006, [ ]. [144] D. S.-C. G ́omez, 3+1 decomposition in modified gravities within the Palatini formalism and some applications, Phys. Rev. D 104 (2021), no. 2 024029, [ ]. [145] Y. Mikura and Y. Tada, On UV-completion of Palatini-Higgs inflation, JCAP 05 (2022), no. 05 035, [ ]. - 30 - [146] F. Bombacigno and G. Montani, Big bounce cosmology for Palatini R2 gravity with a Nieh-Yan term, Eur. Phys. J. C 79 (2019), no. 5 405, [ ]. [147] V.-M. Enckell, K. Enqvist, S. Rasanen, and L.-P. Wahlman, Inflation with R2 term in the Palatini formalism, JCAP 02 (2019) 022, [ ]. [148] I. Antoniadis, A. Karam, A. Lykkas, and K. Tamvakis, Palatini inflation in models with an R2 term, JCAP 11 (2018) 028, [ ]. [149] I. Antoniadis, A. Karam, A. Lykkas, T. Pappas, and K. Tamvakis, Rescuing Quartic and Natural Inflation in the Palatini Formalism, JCAP 03 (2019) 005, [ ]. [150] T. Tenkanen, Minimal Higgs inflation with an R2 term in Palatini gravity, Phys. Rev. D 99 (2019), no. 6 063528, [ ]. [151] A. Edery and Y. Nakayama, Palatini formulation of pure R2 gravity yields Einstein gravity with no massless scalar, Phys. Rev. D 99 (2019), no. 12 124018, [ ]. [152] M. Giovannini, Post-inflationary phases stiffer than radiation and Palatini formulation, Class. Quant. Grav. 36 (2019), no. 23 235017, [ ]. [153] T. Tenkanen, Trans-Planckian censorship, inflation, and dark matter, Phys. Rev. D 101 (2020), no. 6 063517, [ ]. [154] I. D. Gialamas and A. Lahanas, Reheating in R2 Palatini inflationary models, Phys. Rev. D 101 (2020), no. 8 084007, [ ]. [155] T. Tenkanen and E. Tomberg, Initial conditions for plateau inflation: a case study, JCAP 04 (2020) 050, [ ]. [156] A. Lloyd-Stubbs and J. McDonald, Sub-Planckian φ2 inflation in the Palatini formulation of gravity with an R2 term, Phys. Rev. D 101 (2020), no. 12 123515, [ ]. [157] I. Antoniadis, A. Lykkas, and K. Tamvakis, Constant-roll in the Palatini-R2 models, JCAP 04 (2020), no. 04 033, [ ]. [158] D. M. Ghilencea, Palatini quadratic gravity: spontaneous breaking of gauged scale symmetry and inflation, Eur. Phys. J. C 80 (4, 2020) 1147, [ ]. [159] N. Das and S. Panda, Inflation and Reheating in f(R,h) theory formulated in the Palatini formalism, JCAP 05 (2021) 019, [ ]. [160] I. D. Gialamas, A. Karam, and A. Racioppi, Dynamically induced Planck scale and inflation in the Palatini formulation, JCAP 11 (2020) 014, [ ]. [161] D. M. Ghilencea, Gauging scale symmetry and inflation: Weyl versus Palatini gravity, Eur. Phys. J. C 81 (2021), no. 6 510, [ ]. [162] S. Bekov, K. Myrzakulov, R. Myrzakulov, and D. S.-C. G ́omez, General slow-roll inflation in f(R) gravity under the Palatini approach, Symmetry 12 (2020), no. 12 1958, [ ]. [163] D. S.-C. G ́omez, Variational principle and boundary terms in gravity `a la Palatini, Phys. Lett. B 814 (2021) 136103, [ ]. [164] K. Dimopoulos and S. S ́anchez L ́opez, Quintessential inflation in Palatini f(R) gravity, Phys. Rev. D 103 (2021), no. 4 043533, [ ]. [165] A. Karam, E. Tomberg, and H. Veerm ̈ae, Tachyonic preheating in Palatini R 2 inflation, JCAP 06 (2021) 023, [ ]. [166] A. Lykkas and K. Tamvakis, Extended interactions in the Palatini-R2 inflation, JCAP 08 (2021), no. 043 [ ]. - 31 - [167] I. D. Gialamas, A. Karam, T. D. Pappas, and V. C. Spanos, Scale-invariant quadratic gravity and inflation in the Palatini formalism, Phys. Rev. D 104 (2021), no. 2 023521, [ ]. [168] J. Annala and S. Rasanen, Inflation with R (αβ) terms in the Palatini formulation, JCAP 09 (2021) 032, [ ]. [169] M. AlHallak, A. AlRakik, N. Chamoun, and M. S. El-Daher, Palatini f(R) Gravity and Variants of k-/Constant Roll/Warm Inflation within Variation of Strong Coupling Scenario, Universe 8 (2022), no. 2 126, [ ]. [170] C. Dioguardi, A. Racioppi, and E. Tomberg, Slow-roll inflation in Palatini F(R) gravity, JHEP 06 (2022) 106, [ ]. [171] K. Dimopoulos, A. Karam, S. S ́anchez L ́opez, and E. Tomberg, Modelling Quintessential Inflation in Palatini-Modified Gravity, Galaxies 10 (2022), no. 2 57, [ ]. [172] I. D. Gialamas, A. Karam, T. D. Pappas, and E. Tomberg, Implications of Palatini gravity for inflation and beyond, Int. J. Geom. Meth. Mod. Phys. 20 (2023), no. 13 2330007, [ ]. [173] S. S ́anchez L ́opez, K. Dimopoulos, A. Karam, and E. Tomberg, Observable gravitational waves from hyperkination in Palatini gravity and beyond, Eur. Phys. J. C 83 (2023), no. 12 1152, [ ]. [174] J. J. Terente D ́ıaz, K. Dimopoulos, M. Karˇciauskas, and A. Racioppi, Quintessence in the Weyl-Gauss-Bonnet model, JCAP 02 (2024) 040, [ ]. [175] H. J. Kuralkar, S. Panda, and A. Vidyarthi, Observable primordial gravitational waves from non-minimally coupled R 2 Palatini modified gravity, JCAP 05 (2025) 073, [ ]. [176] S. S. L ́opez and J. J. Terente D ́ıaz, Scalar-Induced Gravitational Waves in Palatini f(R) Gravity, . [177] DESI Collaboration, M. Abdul Karim et al., DESI DR2 Results I: Baryon Acoustic Oscillations from the Lyman Alpha Forest, . [178] DES Collaboration, T. M. C. Abbott et al., The Dark Energy Survey: Cosmology Results with ∼1500 New High-redshift Type Ia Supernovae Using the Full 5 yr Data Set, Astrophys. J. Lett. 973 (2024), no. 1 L14, [ ]. [179] W. J. Wolf, P. G. Ferreira, and C. Garc ́ıa-Garc ́ıa, Matching current observational constraints with nonminimally coupled dark energy, Phys. Rev. D 111 (2025), no. 4 L041303, [ ]. [180] G. Ye, M. Martinelli, B. Hu, and A. Silvestri, Hints of Nonminimally Coupled Gravity in DESI 2024 Baryon Acoustic Oscillation Measurements, Phys. Rev. Lett. 134 (2025), no. 18 181002, [ ]. [181] G. Ye, Bridge the Cosmological Tensions with Thawing Gravity, . [182] A. G. Ferrari, M. Ballardini, F. Finelli, and D. Paoletti, Scalar-tensor gravity and DESI 2024 BAO data, Phys. Rev. D 111 (2025), no. 8 083523, [ ]. [183] I. Antoniadis, A. Guillen, and K. Tamvakis, Late time acceleration in Palatini gravity, JHEP 11 (2022) 144, [ ]. [184] K. Dimopoulos, A. Karam, S. S ́anchez L ́opez, and E. Tomberg, Palatini R 2 quintessential inflation, JCAP 10 (2022) 076, [ ]. [185] L. J ̈arv and A. Toporensky, Global portraits of nonminimal inflation, Eur. Phys. J. C 82 (2022), no. 2 179, [ ]. - 32 - [186] L. J ̈arv, S. Karamitsos, and M. Saal, Global portraits of nonminimal inflation: Metric and Palatini formalism, Phys. Rev. D 109 (2024), no. 8 084073, [ ]. [187] L. J ̈arv and D. Kraiko, Global portraits of inflation in nonsingular variables, Eur. Phys. J. C 85 (2025), no. 6 715, [ ]. [188] P. Wang, P. Wu, and H. Yu, A new extended quintessence, Eur. Phys. J. C 72 (2012) 2245, [ ]. [189] Y. Fan, P. Wu, and H. Yu, Cosmological perturbations of non-minimally coupled quintessence in the metric and Palatini formalisms, Phys. Lett. B 746 (2015) 230-236. [190] F. Bauer, Filtering out the cosmological constant in the Palatini formalism of modified gravity, Gen. Rel. Grav. 43 (2011) 1733-1757, [ ]. [191] A. Racioppi, J. Rajasalu, and K. Selke, Multiple point criticality principle and Coleman-Weinberg inflation, JHEP 06 (2022) 107, [ ]. [192] D. Y. Cheong, S. M. Lee, and S. C. Park, Reheating in models with non-minimal coupling in metric and Palatini formalisms, JCAP 02 (2022), no. 02 029, [ ]. [193] A. Racioppi and M. Vasar, On the number of e-folds in the Jordan and Einstein frames, Eur. Phys. J. Plus 137 (2022), no. 5 637, [ ]. [194] T. Kodama and T. Takahashi, Relaxing inflation models with nonminimal coupling: A general study, Phys. Rev. D 105 (2022), no. 6 063542, [ ]. [195] G. K. Karananas, M. Shaposhnikov, and S. Zell, Field redefinitions, perturbative unitarity and Higgs inflation, JHEP 06 (2022) 132, [ ]. [196] W. Yin, Weak-scale Higgs inflation, JCAP 05 (2024) 060, [ ]. [197] I. D. Gialamas, A. Karam, and T. D. Pappas, Gravitational corrections to electroweak vacuum decay: metric vs. Palatini, Phys. Lett. B 840 (2023) 137885, [ ]. [198] T. Koivisto, Covariant conservation of energy momentum in modified gravities, Class. Quant. Grav. 23 (2006) 4289-4296, [gr-qc/0505128]. [199] L. Gorlich, S. Kachru, P. K. Tripathy, and S. P. Trivedi, Gaugino condensation and nonperturbative superpotentials in flux compactifications, JHEP 12 (2004) 074, [hep-th/0407130]. [200] M. Haack, D. Krefl, D. Lust, A. Van Proeyen, and M. Zagermann, Gaugino Condensates and D-terms from D7-branes, JHEP 01 (2007) 078, [hep-th/0609211]. [201] Z. Lalak, G. G. Ross, and S. Sarkar, Racetrack inflation and assisted moduli stabilisation, Nucl. Phys. B 766 (2007) 1-20, [hep-th/0503178]. [202] E. J. Copeland, A. R. Liddle, and D. Wands, Exponential potentials and cosmological scaling solutions, Phys. Rev. D 57 (1998) 4686-4690, [gr-qc/9711068]. [203] C. Rubano and P. Scudellaro, On some exponential potentials for a cosmological scalar field as quintessence, Gen. Rel. Grav. 34 (2002) 307-328, [astro-ph/0103335]. [204] T. Barreiro, E. J. Copeland, and N. J. Nunes, Quintessence arising from exponential potentials, Phys. Rev. D 61 (2000) 127301, [astro-ph/9910214]. [205] S. Bhattacharya, G. Borghetto, A. Malhotra, S. Parameswaran, G. Tasinato, and I. Zavala, Cosmological constraints on curved quintessence, JCAP 09 (2024) 073, [ ]. [206] G. Alestas, M. Delgado, I. Ruiz, Y. Akrami, M. Montero, and S. Nesseris, Is curvature-assisted quintessence observationally viable?, Phys. Rev. D 110 (2024), no. 10 106010, [ ]. [207] O. F. Ramadan, J. Sakstein, and D. Rubin, DESI constraints on exponential quintessence, Phys. Rev. D 110 (2024), no. 4 L041303, [ ]. - 33 - [208] C. Wetterich, The Cosmon model for an asymptotically vanishing time dependent cosmological 'constant', Astron. Astrophys. 301 (1995) 321-328, [hep-th/9408025]. [209] P. Binetruy, Models of dynamical supersymmetry breaking and quintessence, Phys. Rev. D 60 (1999) 063502, [hep-ph/9810553]. [210] P. Agrawal, G. Obied, P. J. Steinhardt, and C. Vafa, On the Cosmological Implications of the String Swampland, Phys. Lett. B 784 (2018) 271-276, [ ]. [211] Y. Akrami, R. Kallosh, A. Linde, and V. Vardanyan, The Landscape, the Swampland and the Era of Precision Cosmology, Fortsch. Phys. 67 (2019), no. 1-2 1800075, [ ]. [212] M. Raveri, W. Hu, and S. Sethi, Swampland Conjectures and Late-Time Cosmology, Phys. Rev. D 99 (2019), no. 8 083518, [ ]. [213] R. R. Caldwell, A Phantom menace?, Phys. Lett. B 545 (2002) 23-29, [astro-ph/9908168]. [214] S. M. Carroll, M. Hoffman, and M. Trodden, Can the dark energy equation-of-state parameter w be less than -1?, Phys. Rev. D 68 (2003) 023509, [astro-ph/0301273]. [215] R. Gannouji, D. Polarski, A. Ranquet, and A. A. Starobinsky, Scalar-Tensor Models of Normal and Phantom Dark Energy, JCAP 09 (2006) 016, [astro-ph/0606287]. [216] B. Boisseau, G. Esposito-Farese, D. Polarski, and A. A. Starobinsky, Reconstruction of a scalar tensor theory of gravity in an accelerating universe, Phys. Rev. Lett. 85 (2000) 2236, [gr-qc/0001066]. [217] D. F. Torres, Quintessence, superquintessence and observable quantities in Brans-Dicke and nonminimally coupled theories, Phys. Rev. D 66 (2002) 043522, [astro-ph/0204504]. [218] E. Gunzig, A. Saa, L. Brenig, V. Faraoni, T. M. Rocha Filho, and A. Figueiredo, Superinflation, quintessence, and nonsingular cosmologies, Phys. Rev. D 63 (2001) 067301, [gr-qc/0012085]. [219] F. C. Carvalho and A. Saa, Non-minimal coupling, exponential potentials and the w < -1 regime of dark energy, Phys. Rev. D 70 (2004) 087302, [astro-ph/0408013]. [220] S. M. Carroll, A. De Felice, and M. Trodden, Can we be tricked into thinking that w is less than -1?, Phys. Rev. D 71 (2005) 023525, [astro-ph/0408081]. [221] L. Perivolaropoulos, Crossing the phantom divide barrier with scalar tensor theories, JCAP 10 (2005) 001, [astro-ph/0504582]. [222] S. Nesseris and L. Perivolaropoulos, Crossing the Phantom Divide: Theoretical Implications and Observational Status, JCAP 01 (2007) 018, [astro-ph/0610092]. [223] Z. Zhai and Y. Wang, Robust and model-independent cosmological constraints from distance measurements, JCAP 07 (2019) 005, [ ]. [224] P. Lemos and A. Lewis, CMB constraints on the early Universe independent of late-time cosmology, Phys. Rev. D 107 (2023), no. 10 103505, [ ]. [225] R. K. Sachs and A. M. Wolfe, Perturbations of a cosmological model and angular variations of the microwave background, Astrophys. J. 147 (1967) 73-90. [226] Y. Fan, P. Wu, and H. Yu, The integrated Sachs-Wolfe effect in the extended quintessence cosmological models, Class. Quant. Grav. 33 (2016), no. 8 085006. [227] W. Hu and N. Sugiyama, Small scale cosmological perturbations: An Analytic approach, Astrophys. J. 471 (1996) 542-570, [astro-ph/9510117]. [228] Particle Data Group Collaboration, R. L. Workman et al., Review of Particle Physics, PTEP 2022 (2022) 083C01. [229] S. Brieden, H. Gil-Mar ́ın, and L. Verde, A tale of two (or more) h's, JCAP 04 (2023) 023, [ ]. - 34 - [230] K. Akita and M. Yamaguchi, A precision calculation of relic neutrino decoupling, JCAP 08 (2020) 012, [ ]. [231] J. Froustey, C. Pitrou, and M. C. Volpe, Neutrino decoupling including flavour oscillations and primordial nucleosynthesis, JCAP 12 (2020) 015, [ ]. [232] J. J. Bennett, G. Buldgen, P. F. De Salas, M. Drewes, S. Gariazzo, S. Pastor, and Y. Y. Y. Wong, Towards a precision calculation of Neff in the Standard Model II: Neutrino decoupling in the presence of flavour oscillations and finite-temperature QED, JCAP 04 (2021) 073, [ ]. [233] M. Drewes, Y. Georis, M. Klasen, L. P. Wiggering, and Y. Y. Y. Wong, Towards a precision calculation of Neff in the Standard Model. Part III. Improved estimate of NLO contributions to the collision integral, JCAP 06 (2024) 032, [ ]. [234] M. Goliath, R. Amanullah, P. Astier, A. Goobar, and R. Pain, Supernovae and the nature of the dark energy, Astron. Astrophys. 380 (2001) 6-18, [astro-ph/0104009]. [235] D. Foreman-Mackey, D. W. Hogg, D. Lang, and J. Goodman, emcee: The MCMC Hammer, Publ. Astron. Soc. Pac. 125 (2013) 306-312, [ ]. [236] J. Goodman and J. Weare, Ensemble samplers with affine invariance, Commun. Appl. Math. Comput. Sc. 5 (2010), no. 1 65-80. [237] A. Lewis, GetDist: a Python package for analysing Monte Carlo samples, JCAP 08 (2025) 025, [ ]. [238] J. A. Nelder and R. Mead, A Simplex Method for Function Minimization, Comput. J. 7 (1965) 308-313. [239] A. Heavens, Y. Fantaye, A. Mootoovaloo, H. Eggers, Z. Hosenie, S. Kroon, and E. Sellentin, Marginal Likelihoods from Monte Carlo Markov Chains, . [240] H. Jeffreys, The Theory of Probability. Oxford Classic Texts in the Physical Sciences. 1939. [241] R. Trotta, Bayes in the sky: Bayesian inference and model selection in cosmology, Contemp. Phys. 49 (2008) 71-104, [ ]. [242] W. Handley, fgivenx: Functional posterior plotter, The Journal of Open Source Software 3 (Aug, 2018). [243] S. M. Carroll, Quintessence and the rest of the world, Phys. Rev. Lett. 81 (1998) 3067-3070, [astro-ph/9806099]. [244] E. G. Adelberger, B. R. Heckel, and A. E. Nelson, Tests of the gravitational inverse square law, Ann. Rev. Nucl. Part. Sci. 53 (2003) 77-121, [hep-ph/0307284]. [245] C. M. Will, The Confrontation between General Relativity and Experiment, Living Rev. Rel. 17 (2014) 4, [ ]. [246] B. Bertotti, L. Iess, and P. Tortora, A test of general relativity using radio links with the Cassini spacecraft, Nature 425 (2003) 374-376. - 35 -
2510.14939
Decoding in the presence of ISI without interleaving – ORBGRAND-AI Ken R. Duffy Dept. of ECE & Dept. Mathematics Northeastern University Boston, USA k.duffy@northeastern.edu Moritz Grundei Electrical and Computer Engineering Technical University of Munich Munich, Germany moritz.grundei@tum.de Jane A. Millward Research Laboratory of Electronics Massachusetts Institute of Technology Cambridge, USA janem7@mit.edu Muralidhar Rangaswamy Sensors Directorate Air Force Research Laboratory WPAFB, USA muralidhar.rangaswamy@us.af.mil Muriel M´edard Research Laboratory of Electronics Massachusetts Institute of Technology Cambridge, USA medard@mit.edu Abstract—Inter symbol interference (ISI), which occurs in a wide variety of channels, is a result of time dispersion. It can be mitigated by equalization which results in noise coloring. For such colored noise, we propose a decoder called Ordered Reliability Bit Guessing Random Additive Noise Decoding (ORBGRAND- AI) which is inspired by the development of approximate independence in statistical physics. By foregoing interleaving, ORBGRAND-AI can deliver the same, or lower, block error rate (BLER) for the same amount of energy per information bit in an ISI channel as a state-of-the-art soft input decoder, such as Cyclic Redundancy Check Assisted-Successive Cancellation List (CA-SCL) decoding, with an interleaver. To assess the decoding performance of ORBGRAND-AI, we consider delay tap models and their associated colored noise. In particular, we examine a two-tap dicode ISI channel as well as an ISI channel derived from data from RFView, a physics-informed modeling and simulation tool. We investigate the dicode and RFView channel under a variety of imperfect channel state information assumptions and show that a second order autoregressive model adequately represents the RFView channel effect. Index Terms—Soft input, correlation, interleavers, URLLC, GRAND I. INTRODUCTION Inter symbol interference (ISI) occurs in many modern communication systems and is mostly handled by equalization techniques that create correlation in the noise. Interleaving is a technique which diminishes channel correlation to provide white noise to the decoder. It can be shown, however, that correlated noise has lower entropy [4], [5] than uncorrelated noise, which means that the original correlated channel has Preliminary versions of this paper were presented in the 2023 Globecom, 2024 SPAWC and Asilomar conferences [1]–[3]. Due to space limitations, those papers made succinct observations about the impact of correlated ISI on decoder performance. This paper extends the work of the conference papers by providing a more well-rounded treatment of the problem, via an explicit demonstration of the fact that the entropy of the correlated ISI channel is less than that of the uncorrelated ISI channel. Furthermore, the rationale for approximating the interference generated using RFVIEW as an AR(2) process is established. The numerical simulations are more comprehensive compared to the conference submissions. higher capacity. Therefore, by interleaving we are missing out on decoding performance gains. Here we realize the improved decoding performance afforded by preserving and making use of channel correlation using our proposed decoder Ordered Reliability Bit Guessing Random Additive Noise Decoding (ORBGRAND-AI). To demonstrate the real-world applicability of ORBGRAND-AI, we show ORBGRAND-AI’s performance in a simple two-tap dicode ISI channel and in an ISI channel generated with data from RFView, a high fidelity RF simulation and modeling tool [6]. We show that with both perfect channel state information (CSI) and imperfect CSI, using ORBGRAND-AI in both the dicode and RFView channels provides decoding performance improvements. Before delving into the details of ORBGRAND-AI, it is worthwhile considering why interleavers are currently used. The need for interleavers arises in soft detection decoding techniques which typically assume that each bit in a com- munication is impacted independently by noise, resulting in probabilistically independent per-bit reliabilities [7]. As real- world noise and interference are subject to temporal correla- tions that result in correlated bit reliabilities, in the absence of interleaving the mismatched input to decoders would result in degraded error correction performance. Since correlated noise has lower entropy than white noise of the same energy, it is counterproductive to transform this noise into white noise by interleaving given that interleaving is detrimental to rate and reliability. The question of how to practically capture and make use of noise correlation remains. Our decoding approach, ORBGRAND-AI, exploits noise correlation in a low complexity manner using techniques and theory inspired by the development of approximate inde- pendence in statistical physics. With its approach to decoding by identifying noise-effects and inferring code-words, Guessing Random Additive Noise Decoding (GRAND) [8] is well-positioned to embrace noise arXiv:2510.14939v1 [eess.SP] 16 Oct 2025 correlation and decode without interleaving. It has been shown that in the hard-detection setting GRAND can exploit statis- tically described Markovian correlation structures to enhance decoding performance [9]–[12], but the approach taken there cannot be carried over to the soft detection setting, which requires distinct innovation. By adopting techniques from thermodynamic probability theory to manage dependence and combining them with symbol-level Ordered Reliability Bit Guessing Random Ad- ditive Noise Decoding (ORBGRAND) [13], we demonstrate that it is possible to accurately soft-detection decode any moderate redundancy code without the need for interleaving. By removing the interleaver, decoding performance can be enhanced by multiple dB, while complexity and latency are reduced, offering a potential route forward for delivering URLLC. The approach uses Approximate Independence (AI) [14]– [17] with ORBGRAND (ORBGRAND-AI) to obtain these large gains in decoding performance by leveraging dependence over small, manageable neighborhoods of symbols. Unlike code-centric decoding methods, such as Cyclic Redundancy Check Code Assisted Successive Cancellation List (CA-SCL) decoding, which is specifically designed to decode CA-Polar codes, ORBGRAND-AI has no limitations regarding the code structure. We show that, contrary to common belief that codes need to be specifically designed for correlated channels [18], ORBGRAND-AI can decode any code in channels that exhibit noise correlations. This paper is structured as follows: Sec. II introduces the two channel models that are considered. In Sec. III, we then provide a heuristic argument why treating small blocks of communications as approximately independent from one another can provide significant gains in decoding performance for correlated channels. Sec. IV provides a full description of ORBGRAND-AI. Sec. V provides a performance evaluation of of ORBGRAND-AI benchmarked to decoders including CA-SCL and the original ORBGRAND. For the performance evaluation, we consider both a standard two-tap dicode ISI channel, for which a first order autoregressive (AR(1)) model suffices, and a six tap model informed by RFView, which necessitates an second order autoregressive (AR(2)) model. Section VI considers robustness to channel mischaracterisation in a variety of settings, establishing graceful degredation in presence of mismatch. Closing comments can be found in Section VII. II. ISI CHANNEL DESCRIPTION Unless otherwise stated, we shall denote random variables by capital letters and sample values or constants by lower case letters. Vectors and matrices we shall denote by underlining. We denote the dimensions of vectors and matrices using superscripts and specific coordinates within the matrices and vectors with subscripts. The standard structure of an interleaved communication system is shown in Fig. 1. For the channel, we consider a linear model Yk′ = X j≥0 hk′,jXk′−j + Nk′, where k′ ∈Z+ denotes the symbol time scale, Xk′ is the complex-valued transmitted symbol, Nk′ is complex white Gaussian noise, and hk′ denotes the discrete representation of the channel impulse response at the time of the transmission. We use k′ as we use k later to denote the information bits in a codeword as per convention. ISI is generally the result of time dispersion imparted on the transmitted signal. When we consider a sequence of symbols of length ns, the model becomes Y ns = hns×nsXns + N ns, (1) where Y ns = (Y1, Y2, ..., Yns)T denotes the sequence of transmitted symbols, N ns denotes a vector of additive white Gaussian noise with auto-covariance Cns×ns N = σ2 NIns×ns and hns×ns is the channel matrix whose elements are the hk′,js. We denote the identity matrix of size ns × ns by Ins×ns. When we consider codewords of length n, we denote the vectors corresponding to the received code word and noise effect by Y n, Xn and N n respectively. There are several ways to tackle ISI. For example, in OFDM or discrete multitone (DMT) systems, a cyclic prefix may be added to cancel the ISI at the expense of reducing the data rate of the system [19]–[21]. Other popular methods for mitigating ISI include nonlinear equalization strategies such as Tomlin- son Harashima precoding and decision feedback equalization [22], [23]. These methods have their own advantages and disadvantages, but they mostly entail higher signal processing complexity. In this work, we employ a linear equalizer at the receiver that introduces noise coloring as a consequence of removing ISI. In the case of the interleaved system, the colored noise is dispersed and turned into white noise from the point of view of a single decoder. A. Delay Tap Channel Model We use a delay line model to generate the channel profile in terms of its paths indexed by d ranging from 1 to pk′,j where k′ denotes the symbol time index and j denotes the delay due to ISI. Each path d has complex attenuation {ak′,j,d} and delays {τk′,j,d}. Given a sampling frequency fs we can compute the time discrete channel impulse response: hk′,j = Ppk′,j d=1 ak′,j,dsinc (τk′,j,dfs −k′). A representative value for delay spread, defined as the maximum difference among the τds is 1µs for terrestrial outdoor systems. A special case of the delay tap channel model is the dicode partial response channel. It is an essential example of ISI where there are only two channel taps: hk′,j =      1 j = 0, −ρ j = 1, 0 otherwise, with ρ ∈[0, 1]. S/P encoder encoder encoder interleaver modulator channel equalizer demodulator deinterleaver decoder decoder decoder . . . . . . . . . . . . Fig. 1: Signal processing chain of a bit interleaved communication system Equalization through zero forcing removes ISI but leads to colored noise, { ˜Nk′}, with an autoregressive description ˜Nk′ = ρ ˜Nk′+Nk′ where Nk′ denotes the Gaussian noise prior to equalization. This type of colored noise is commonly re- ferred to as Gauss-Markov noise and it exhibits exponentially decaying temporal correlation strength: E[| ˜Nk′ ˜Ni′|] ∝ρ|k′−i′| where i′ ∈Z+ is another variable denoting the symbol time scale (we use i later to denote the block index when we discuss ORBGRAND-AI). B. RFView ISI Channel We consider an ISI channel generated using channel impulse response data from RFView, a high-fidelity, physics-based RF simulation and modelling tool [6]. The RFView dataset consists of the in-phase and quadrature (I-Q) channel clutter impulse response of a mixed terrain environment with some discrete clutter sources, such as buildings. The dataset contains 30 coherent processing intervals (CPIs). Each CPI consists of a 3D data cube comprised of 32 antenna channels, 64 pulses and 2335 impulse response samples sampled at 10 MHz, Fig. 2. 2335 Impulse response samples 64 Pulses 32 Antennas Fig. 2: Illustration of RFView dataset for a single CPI. We process the data from each data cube corresponding to a particular CPI for the first antenna element only, highlighted in yellow, as we consider single-input single-output commu- nications scenarios. To provide a channel estimate consistent with our earlier ISI channel definition, we process each data cube corresponding to a particular CPI. As radar data is usually comprised of several pulses, we treat the pulse axis within each data cube as “slow time” and we assume the channel impulse response changes from pulse to pulse (i.e. the channel impulse response from time sample 1 to 2335 corresponds to the impulse response at pulse index 1, and the channel impulse response from time sample 2336 to 4760 (= 2 × 2335) corresponds to the channel impulse response at pulse 2, etc.). Additionally, we only process data from a single antenna element (the first element along the antenna dimension in Fig. 2) as we consider the single-input single-output setting. To obtain the estimate of each coefficient hk′,j, we trans- mit a complex passband pulse through the channel impulse response corresponding to a particular pulse index. We set fs = 10MHz and the length of the pulse to L = 467, which ensures that we obtain a 6-tap ISI channel, i.e. j ∈{1, ..., 6} for all k′ for the RFView channel. We set the carrier frequency, fc, of the complex passband sounding signal to fS/L. We denote the complex passband pulse by ul where ul = ( a exp(i2πfcl) l = 1, . . . , L, 0 otherwise. The constant a is selected so thatPL l=1 |ul|2 = 1 and we denote the full vector of the sounding signal by uL. The channel response given by RFView is gm,µ, which is a matrix that takes into account the m = 2335 impulse response samples sampled at 10MHz and the µ = 64 pulses. For each r ∈{1, ...µ}, we are able to transmit five sounding signals per gm,r and a total of 5 × 64 = 320 sounding signals per CPI. For each gm,r, as m/L = 5 we can transmit 5 sounding signals. We transmit each sounding signal through the channel separately so that we are able to measure the individual effect of each sounding signal and therefore construct the 6-tap ISI channel. We set r(k′) = k′ + (5 −k′mod5) 5 to account for the fact that we are able to transmit 5 sounding signals per gm,r(k′), but wish to isolate the output response of each sounding signal individually to be able to construct the ISI coefficients. We account for the fact that each sounding signal will be delayed by L samples more than the previous one later when we construct the coefficients, hk′,j by sampling from the matched filter output of the channel. From a data processing perspective we do not need to account for the delays in the sounding signals because adding the delay, effectively zero-padding, does not affect the result of the channel output response convolutions. We account for the delay later when we go to construct hns×ns RF V . The noiseless output zζ,320 of the (k′)th sounding signal k′ ∈{1, ..., 320} and the channel response at pulse index r(k′) is given by the convolution zk′ = gm,r(k′) ∗uL with components zq,k′ = m X κ=1 gκ,r(k′)uq−κ. By the properties of convolution, we have that ζ = m+L−1. We next perform matched filtering on uL to obtain z′ k′ = zk′ ∗(uL)′ for each sounding signal k′ where (uL)′ denotes the matched filter response of uL as defined in [24], where z′ k′ has components z′ q,k′ = ζ X κ=1 zκ,k′u′ q−κ. (2) The full matrix of the matched filter output is z′η,5µ, where η is 2L + m −2 = 2 × 467 + 2335 −2 = 3267. We next sample the elements of z′η,5µ at intervals of L along the impulse response axis (i.e the axis with dimension η) to obtain z′′6,5µ. The η axis has now been reduced to a dimension of ⌊η/L⌋= 6. The samples of z′′6,5µ become the ISI coefficients hk′,j for j ∈{1, ..., 6} via hk′,j = z′′ j,k′−(j−1). We next construct the matrix hns×ns RF V using the hk′,js obtained above by: hns×ns RF V =   h1,1 0 0 0 . . . 0 h2,2 h2,1 0 0 . . . 0 h3,3 h3,2 h3,1 0 . . . 0 ... ... ... ... ... ... 0 . . . 0 hns,6 . . . hns,1   . We complete this process for each of the 30 CPIs. In our simulations, we uniformly sample from these 30 matrices to obtain the channel realization hns×ns RF V . III. A THEORETICAL HEURISTIC We will now argue through the example of an equalized dicode channel (or equivalently Gauss-Markov noise) that there is a significant gain to be realized when we only consider correlations across small neighborhoods (blocks) of received symbols and treat the blocks themselves as independent with regard to one another. With variance σ2 N and correlation coefficient ρ ∈(0, 1], assume that the continuous noise sequence {N ns} is a zero- mean complex-valued Gaussian with auto-covariance matrix Cns×ns N ∈Rns×ns having entries CNk′,i′ = σ2 Nρ|k′−i′|. The normalized differential entropy rate of N ns can be calculated as log(2eπ) + 1 n log(|Cns×ns N |) = log(2eπσ2 N) +  1 −1 ns  log(1 −ρ2), e.g. eq. (9.34) [5]. The final term encapsulates the decrease in entropy that arises from channel correlation as log(1−ρ2) < 0 for ρ > 0. In a heavily interleaved channel ρ = 0 and the final term is zero. If the channel was truly independent for each block of b bits, then CNk′,i′ would be 0 for |k′ −i′| > b and the normalized differential entropy rate would instead be log(2eπσ2 N) +  1 −1 b  log(1 −ρ2), where the only difference is the multiplier of the final term, which has changed from (1−1/ns) to (1−1/b). Thus, in this setting, to get more than half of the reduction in normalized differential entropy, a block-size of b = 2 suffices, suggesting significant gains should be available with small blocks. The principle of treating neighboring blocks as approxi- mately independent random variables originates from consid- erations in thermodynamic probability theory where stochas- tic processes are approximated by product measures across boundaries [14]–[17]. We next show that we can expect similar performance gains in the case of a second-order Gauss-Markov process. We analyze the entropy rate of a second-order Gauss-Markov process because, as per Burg’s theorem, the entropy of a stochastic-process subject to α ∈Z+ covariance constraints is maximized by a α-th order Gauss-Markov process [25]. We proceed by calculating the normalized differential en- tropy rate of a second-order Gauss-Markov process which has the following covariance constraints: E[Nk′Nk′] = σ2 N E[Nk′Nk′+1] = ρ1σ2 N E[Nk′Nk′+2] = ρ2σ2 N for k′ = 1, 2, 3.... We note that ρ1 and ρ2 denote the corre- lation coefficients and in our physical model ρ1, ρ2 ∈(0, 1]. For i′ > 2, the cross-covariance terms are E[Nk′Nk′+i′] = β1E[Nk′Nk′+i′−1] + β2E[Nk′Nk′+i′−2] where β1, β2 are the coefficients of the time-series and can be found by solving the Yule-Walker equations [26]. Using the recursive expression for E[Nk′Nk′+i′], we find that the determinant of the auto-covariance matrix for ns ≥4 is |Cns×ns N | = −(ρ2 −1)ns−2(1 −2ρ2 1 + ρ2)ns−2(σ2 N)ns (ρ2 1 −1)ns−3 . Recall that ρ1 and ρ2 denote the correlation coefficients which must be numbers between 0 and 1. This means that when ns is odd −(ρ2−1)ns−2 is positive and (ρ2 1−1)ns−3 is positive, and when ns is even −(ρ2 −1)ns−2 is negative and (ρ2 1 −1)ns−3 is negative. Since Cns×ns N is a positive semi-definite matrix, we now need to check that 1 −2ρ2 1 + ρ2 is positive (i.e ρ2 1 < (ρ2 +1)/2) as this will ensure that the determinant is positive. The Yule-Walker equations impose the following conditions on the selection of ρ1 and ρ2 via the variance of the innovation process in the time series: 0 < ρ1(ρ2 −1) ρ2 1 −1 ρ1 + ρ2 1 −ρ2 ρ2 1 −1 ρ2 < 1 =⇒0 < ρ2 1 + ρ2 2 −2ρ2 1ρ2 1 −ρ2 1 < 1. Manipulating the right-hand side of the variance constraint we find that ρ2 1 < ρ2 + 1 2 which ensures that the 1 −2ρ2 1 + ρ2 term, in the expression for the determinant is indeed positive. We can now write the normalized differential entropy rate for second-order Gauss-Markov noise as 1 2 log(2πeσ2 N)+ 1 2ns log  −(ρ2 −1)ns−2(1 −2ρ2 1 + ρ2)ns−2 (ρ2 1 −1)ns−3  We now proceed to use the expression we have found for the normalized differential entropy rate to find the capacity of a channel subject to second-order Gauss-Markov noise. The channel capacity can be computed as C = sup X′ I(X′; Y ′) where I(X′; Y ′) is the lim-inf information rate [27] with I(X′; Y ′) = lim inf n→∞ 1 n log PY ns|Xns (yns|xns) PY ns (yns) and X′, Y ′ denote sequences of the finite dimensional distri- butions X′ = {Xns = {X(ns) 1 , ...X(ns) ns }}∞ ns=1 and Y ′ = {Y ns = {Y (ns) 1 , ...Y (ns) ns }}∞ ns=1 respectively. There is an additional inequality relating the lim-inf information rate and the lim-sup entropy rate in [27] I(X′; Y ′) ≤H(Y ′) −H(Y ′|X′) where H(·) denotes the lim-sup entropy rate defined as H(Y ′) = lim sup n→∞ 1 n log 1 PY ns (yns). H(Y ′|X′) is defined similarly. Applying this inequality to find the capacity of second-order Gauss-Markov noise, we find that I(X′; Y ′) ≤H(Y ′) −1 2 log(2πeσ2 N) −lim ns→∞ 1 2ns log  −(ρ2 −1)ns−2(1 −2ρ2 1 −ρ2)ns−2 (ρ2 1 −1)ns−3  . Noting that H(Y ′) is maximized when each Y ns is a Gaussian with auto-covariance (σ2 X +σ2 N)Ins×ns where σ2 X denotes the maximum power of the data symbols, we find C ≤1 2 log(2πe) + 1 2 log(σ2 X + σ2 N) −1 2 log(2πeσ2 N) −lim ns→∞ 1 2ns log  −(ρ2 −1)ns−2(1 −2ρ2 1 −ρ2)ns−2 (ρ2 1 −1)ns−3  where the inequality remains because of Hadamard’s inequal- ity and the auto-covariance matrix of Y ns has off-diagonal terms so we bound it using Hadamard’s inequality. Since the auto-covariance matrix of the second-order Gauss-Markov process, Cns×ns N , has off-diagonal terms, the lim-sup entropy rate of the second order Gauss-Markov noise, H(Y ′|X′), must be less than the lim-sup entropy rate of uncorrelated Gaussian noise as a consequence of Hadamard’s inequality. This result becomes helpful when we approximate the RFView channel by a second order autoregressive (AR(2)) process process later in the paper. IV. ORBGRAND-AI A. GRAND Guessing Random Additive Noise Decoding (GRAND) is a family of codebook agnostic decoders. The basic premise of GRAND is that, in additive error channels for codewords of length n with Y n = Xn ⊕N n, the entropy of the noise N n is typically much smaller than the entropy of the code word Xn. GRAND finds a decoding by iteratively guessing the noise realization N n and subsequently inverting the channel until a code word is found [8]. In the case of a binary linear code with parity check matrix H ∈{0, 1}n−k×n, GRAND computes the syndrome H(Y n ⊖N n g′) for each noise guess N n g′. If the syndrome is zero, a decoding is found, else, GRAND continues guessing. Assuming that the noise sequences are queried in decreasing likelihood order given SI and/or channel statistics, GRAND is maximum likelihood achieving [8]. B. ORBGRAND ORBGRAND is a soft detection variant of GRAND. Given an ordered list of bit reliabilities by a soft demapper, ORB- GRAND uses a linear approximation of the reliability curve to turn the problem of finding a decreasing likelihood guesswork function into generating binary sequences in increasing order of logistic weight [28]. This sequence, in addition to the reliability order, is then used to produce the noise realiza- tions. A multi-line approximation of the reliability curve has been investigated too [28]. In generating binary sequences in increasing order of logistic weight as well as demap- ping, ORBGRAND assumes independent bits and thus relies on interleaving. Generating sequences in increasing logistic weight order may be done using the landslide algorithm [28], [29]. In general, ORBGRAND is well-suited to efficient implementation in hardware, eg. [29]–[33]. C. Symbol-level ORBGRAND Symbol-level ORBGRAND [10] introduced a modulation- aware variant that assumes symbols experience indepen- dent channel effects consistent with symbol-level interleav- ing. Given a hard detected symbol, its neighbors in the constellation are considered as potential substitutions. The exceedance distance between potential substitution symbols and the hard detected symbol is used as a reliability input for ORBGRAND’s rank ordering, whereupon the original noise effect pattern generator is employed. In contrast to bit level ORBGRAND, symbol level ORBGRAND uses the generated patterns to pick symbols to substitute. Hence, symbols with lower exceedance distance are swapped in earlier. If a symbol substitution pattern proposes a single symbol be substituted more than once, the pattern is discarded. Empirical results demonstrate that symbol-level ORBGRAND can achieve iden- tical performance to operating on bit level reliabilities while realizing a reduction in rank ordering effort. D. ORBGRAND-AI To enable a receiver to detect or correct errors, prior to transmission each collection of k information bits is coded to a n > k bit code-word cn = (c1, . . . , cn) ∈{0, 1}n. For spectral efficiency, most communication systems employ high-order modulation where each transmitted symbol com- municates multiple bits of information [34]. If a modulation scheme is employed with a complex constellation of size |χ| = 2ms, the n coded bits are translated into ns = n/ms symbols by sequentially mapping each collection of ms bits to the corresponding higher order symbol. In the absence of interleaving, this results in the transmission of the higher order sequence mod(cn) = Xns = (X1, . . . , Xns) ∈χns. Transmissions are impacted by channel effects and noise that cause the received signal sequence to be perturbed. The complex received vector can be written as Y ns = (Y1, . . . , Yns) = hns×nsXns + N ns, where we assume that the receiver has perfect channel state information (CSI), and so knows both hns×ns ∈Cns×ns and possesses a probabilistic description of N ns, e.g. that it is complex-valued white Gaussian noise with known variance. In Sec. VI we will show to what degree inaccurate assumptions about the noise model or CSI impact ORBGRAND-AI’s performance. For ORBGRAND-AI’s operation, each received signal corresponding to a coded transmission is split into non- overlapping blocks of b symbols, where for notational ease we assume ns/b is an integer: Y ns = ns symbols z }| { (Y1, . . . , Yb | Yb+1, . . . , Y2b | {z } b symbols | · · · | Yns−b+1, . . . , Yns) = (Y b 1, . . . , Y b ns/b). Each block i ∈{1, . . . , ns/b} of b symbols, Y b i, is treated separately, with the likelihoods pXb i|Y b i(tb i|Y b i) for each tb i ∈χb being evaluated using the channel model and CSI. We define tb,∗ i = arg max pXb i|Y b i(tb i|Y b i) to be the symbol-level hard demodulation of the block Y b i = (Y(i−1)b+1, . . . , Yib), which takes channel memory over the block into account. The core approximation when evaluating the posterior prob- ability of a noise effect sequence tns ∈χns describing symbols to be swapped is that the blocks are assumed to be independent, resulting in the following expression pXns|Y ns (tns|Y ns) = ns/b Y i=1 pXb i|Y b i(tb,∗ i |Y b i) ns/b Y i=1 pXb i|Y b i(tb i|Y b i) pXb i|Y b i(tb,∗ i |Y bi) , (3) which has a common term associated to the sequence of all hard-demodulated blocks and each noise effect sequence that swaps a block experiences a likelihood penalty. Algorithm 1 ORBGRAND-AI inputs: The received signal Y ns, abandonment threshold τ ′, channel statistics Ψ and a codebook membership check function Φ. Inputs: Y ns, Φ, τ ′, Ψ Output: cn,∗ d′ ←0 wµ ←compute likelihoods for substitution symbol blocks while d′ < τ ′ do d′ ←d′ + 1 eµ ←next most likely ORBGRAND pattern for wµ if no substitution conflict then sns ←substitute blocks cn ←demodulate sns if Φ(cn) = 1 then return cn,∗←cn end if end if end while return FAILURE With the blocks of symbols, ti b, now playing the role of individual symbols, this expression is identical to the one used for symbol-level ORBGRAND, and so the ORBGRAND approach can be used to generate putative noise effect patterns, tns, in approximately decreasing order of likelihood. In par- ticular, the set of all alternative groups, {tb i ̸= tb,∗ i : i ∈ {1, . . . , ns/b}}, to the hard demodulated blocks of symbols contains ω = (2msb −1)n/(bms) elements and they are provided as input to symbol-level ORBGRAND. To further show that the ORBGRAND pattern generator is well suited to choose substitution blocks, Fig. 3 displays the substitution likelihoods of the candidate blocks (block length 4) for various channel conditions at moderate channel corre- lation ρ = 0.5 for first order Gauss-Markov noise. Especially at low SNR, the likelihood curve is well approximated by a linear function as assumed by ORBGRAND. Pseudo-code for ORBGRAND-AI can be found in Algorithm 1. For a Gauss-Markov channel, information theoretic results in Section III indicate that in order to move half-way between the differential entropy rate of the interleaved channel and the differential entropy of the noise with complete correlation, it’s sufficient to set b = 2, suggesting that only small block sizes are necessary to obtain significant performance gains. 0 100 200 300 400 500 Rank 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 Inverse Substitution Likelihood 0dB 2dB 4dB 8dB Fig. 3: Ranked normalized inverse block substitution likeli- hoods for BPSK modulation in a channel with moderately correlated (ρ = 0.5) first-order Gauss-Markov noise and 128 coded bits. Correlations were only accounted for across blocks of size b = 4. Blocks with lower rank are more likely to be swapped in. E. Illustrative Example As an example to show how ORBGRAND-AI constructs code word candidates and queries them iteratively, we use a length n = 4 code with BPSK (Xk′ ∈{+1, −1}) and b = 2. We further assume complex first-order Gauss-Markov noise (ρ = 0.5, σ2 = 1) resulting in the received symbols Y ns = [1.5, 0.1, −0.2, 0.1]T . Table I shows the the probabilities of the candidate blocks as they appear in eq. (3). Evidently, the most likely (hard detected) symbol vector is [1, −1, −1, −1]T . The remaining blocks are ordered according to their substitution likelihood (Table I). The landslide algorithm can then be used to efficiently generate a substitution order. Table II displays the order in which the candidate symbol sequences (or rather their corresponding bit representation) will be tested against codebook membership. The first pattern is discarded due to a substitution conflict that occurs once ORBGRAND tells us to swap substitute 1 and 3, which both belong to the same block, simultaneously (query 6). Note that ORBGRAND-AI could also be deployed with a different pattern generator for combining the blocks in approximately decreasing likelihood order, if desired. X2 Block 1 2 [+1, +1] 0.30 0.38 [+1, −1] 0.68 0.00 [−1, +1] 0 0.10 [−1, −1] 0.00 0.50 Rank X2 Block 1 [+1, +1] 2 2 [+1, +1] 1 3 [−1, +1] 2 4 [+1, −1] 2 5 [−1, −1] 1 6 [−1, +1] 1 TABLE I: Exemplary ORBGRAND-AI demapping procedure result. Left: Probabilities for different block candidates. Right: Ranked (according to their substitution likelihood) substitute blocks. Query Swap Indexes symbols 1 - [+1, −1, −1, −1] 2 1 [+1, −1, +1, +1] 3 2 [+1, +1, −1, −1] 4 1,2 [+1, +1, +1, +1] 5 3 [+1, −1, −1, +1] 6 3,1 discarded 7 4 [+1, −1, +1, −1] TABLE II: Illustrative example of a candidate symbol se- quence querying order when ORBGRAND patterns are used to pick the blocks to substitute. The first guess corresponds to the hard demodulated sequence. Bold symbols indicate that a swap has taken place. F. Higher Order Modulations While cursory considerations may suggest that ORBGRAND-AI may not be suitable for use with higher order modulations, here we establish that is not the case. For example, using a code with 128 bits, 256-QAM and b = 2 results in 524280 potential substitutes for the blocks of the hard demodulated sequence. Similar to symbol level ORBGRAND’s approach, we can reduce that number significantly if we only consider symbols in the neighborhood of the corresponding received signal, i.e the γ closest symbols. By doing that, we implicitly assume that every substitution candidate which contains a symbol that is not in the neighborhood of its respective received signal is assigned the probability 0. In general, this leads to (ωb −1)n/(msb) substitution candidate blocks. For n = 128, 256QAM and a block size b = 2, we can thus reduce the amount of block substitutes we have to rank order to 120 when choosing ω = 4. V. PERFORMANCE EVALUATION We present the block error rate (BLER) performance of ORBGRAND-AI in the dicode and RFView ISI channels in the following section. In Sec. V-A, we demonstrate that ORBGRAND-AI can decode any moderate redundancy code and explore the impact of the block size b on the perfor- mance gain compared to decoders operating on the interleaved equalized dicode channel, achieving multiple dB gains. We demonstrate the law of diminishing returns for increasing b which was derived in Sec. III. Finally, in Sec. V-B we show the performance of ORBGRAND-AI in the RFView channel. A. Equalized Dicode Channel We have established in Sec. II that first-order Gauss-Markov noise is the result of a two tap channel followed by zero forcing equalization. In the following simulations, we shall refer to the noise power σ2 N as the power of the noise after equalization. This comparison is fair because the benchmark decoders are evaluated on the interleaved channel after equal- ization. We first restrict ourselves to BPSK as a modulation scheme. Under these conditions, if we were to drop the interleaver, the performance of CA-Polar codes decoded with CA-SCL decoding would be significantly worse. Fig. 4 shows that there is an increasing loss as the correlation of the noise samples increases. 0 2 4 6 8 Eb/N0 10-5 10-4 10-3 10-2 10-1 100 BLER CA-SCL, ;=0 CA-SCL, ;=0.25 CA-SCL, ;=0.5 CA-SCL, ;=0.75 CA-SCL, ;=0.9 Fig. 4: Impact of channel correlation on CA-SCL decoding of a [128,64] 5G NR CA-Polar code with an 11 bit CRC, list size 8 and within-block interleaver. Block error rate (BLER) is plotted versus the energy per information bit, Eb/N0, for the complex equalized dicode channel using BPSK modulation with channel correlation strength ρ that increases from blue to red. ORBGRAND-AI on the other hand, operating at a higher rate ([128,110] CA-Polar Code) sees its performance signif- icantly improved with increasing correlation ρ as displayed in Fig. 5. At a target BLER of 10−3 ORBGRAND-AI outper- forms the interleaved CA-SCL decoder with list size 8 by 2 or even 4dB for ρ = 0.5 and ρ = 0.75 respectively. At correlation ρ = 0.75, ORBGRAND-AI delivers the same 10−3 BLER as the 1/2 rate CA-Polar code (Fig. 4) under ideal conditions at the same amount of energy spent per bit of information. The intuition developed in Sec. III is strengthened by Fig. 6. At fixed channel conditions ρ = 0.5, increasing the block size b will only give us diminishing returns on the gain compared to interleaved ORBGRAND. When we move to higher modulations, as described in Sec. IV, we have to make approximations for the sake of complex- ity reduction. This means that we only consider substitution blocks containing symbols located in the neighborhood of their respective received signals. Fig. 7 shows the impact of the neighborhood size γ for fixed channel correlation ρ = 0.75 and block size b = 4. Obviously, the performance increases when we add more symbols and thus more candidate blocks to the consideration. Further, we see a saturation of performance increase once we consider neighborhoods of 4 or more symbols. This might be due to the fact that in QAM, 4 substitutes suffice to include a potential substitute in every direction in the I-Q plane. ORBGRAND-AI’s complexity stems from two processes: pattern generation and codebook checking. For algorithms like ORBGRAND, efficient pattern generation and codebook -4 -2 0 2 4 6 Eb/N0 10-5 10-4 10-3 10-2 10-1 100 BLER CA-SCL, ;=0 ORBGRAND-AI, ;=0 ORBGRAND-AI, ;=0.25 ORBGRAND-AI, ;=0.5 ORBGRAND-AI, ;=0.75 Fig. 5: The impact of channel correlation on ORBGRAND- AI decoding with b = 4 for a [128,110] 5G NR CA- Polar code with an 11 bit CRC and within-block interleaver for the equalized dicode channel using BPSK modulation. Channel correlation strength, ρ, increases from blue to red. The performance of CA-SCL on an interleaved AWGN channel is shown as a benchmark. -1 0 1 2 3 4 5 6 7 Eb/N0 10-4 10-3 10-2 10-1 100 BLER ORBGRAND, interleaved ORBGRAND-AI, b=1 ORBGRAND-AI, b=2 ORBGRAND-AI, b=4 ORBGRAND-AI, b=8 Fig. 6: The impact of block-size, b, on BLER performance of ORBGRAND-AI for ρ = 0.5 and an [128,116] RLC in the equalized dicode channel. checking circuits have been built [28], [32], [35]. Thus, com- plexity for GRAND algorithms is generally compared by the average number of codebook queries until a decoding is found as this operation dominates the total energy consumption of the circuits. Fig. 8 displays the number of queries needed by ORBGRAND-AI to find a decoding for a fixed neighborhood size γ = 5. At a target BLER 10−4, with block size b = 4, it takes around 100 codebook queries on average to decode. B. Equalized ISI Channel from RFView Dataset Finally, we investigate the performance of ORBGRAND- AI in the RFView channel, hns×ns ISI , as described in Sec. II-B. In these simulations σ2 N denotes the variance of N ns before 0 2 4 6 8 10 Eb/N0 10-4 10-3 10-2 10-1 100 BLER ORBGRAND (interleaved) ORBGRAND-AI, = 2 ORBGRAND-AI, = 3 ORBGRAND-AI, = 4 ORBGRAND-AI, = 5 ORBGRAND-AI, = 6 Fig. 7: [256,240+11] 5G NR Uplink CA-Polar code with 11 bit CRC, 16QAM, with ρ = 0.75 in the complex equalized dicode channel. The block size is fixed at b = 4 taking into account γ candidates per symbol. 0 2 4 6 8 10 Eb/N0 101 102 103 104 105 Average Number of Queries ORBGRAND (interleaved) ORBGRAND-AI b = 1 ORBGRAND-AI b = 2 ORBGRAND-AI b = 4 Fig. 8: Number of codebook queries it takes ORBGRAND-AI until a code word is found for ρ = 0.5 and an [256, 240+11] CA-Polar code in the equalized dicode channel. Only γ = 5 substitutes were considered per 16QAM modulated symbol. equalization. To decode, Xns, the received symbols, Y ns, are equalized using a minimum mean-square error (MMSE) equalizer: hns×ns eq, MMSE =Cns×ns X (hns×ns ISI )H(hISICns×ns X (hns×ns ISI )H + Cns×ns N )−1 where Cns×ns X and Cns×ns N denote the auto-covariance ma- trices of XnS and N ns respectively and the operator (·)H denotes the Hermitian transpose. The equalized symbols are denoted by Y ns eq = hns×ns eq, MMSEY ns = hns×ns eq, MMSE(hns×ns ISI Xns + Y ns) The equalized channel output Y ns eq is provided to the GRAND decoder. The auto-covariance matrix of the equalized symbols, CnS×ns Yeq , provides the colored noise statistics to the GRAND decoder where: Cns×nS Yeq = E[(Y ns eq −Xns)(Y ns eq −Xns)H] = hns×ns eq, MMSEhns×ns ISI Cns×ns X (hns×ns ISI )H(hns×ns eq, MMSE)H + hns×ns eq, MMSECns×ns N (hns×ns eq, MMSE)H −hns×ns eq, MMSEhns×ns ISI Cns×ns X −Cns×ns X (hns×ns ISI )H(hns×ns eq )H + Cns×ns X , (4) that is in Algorithm 1, Ψ = Cns×ns Yeq . In the ORBGRAND- AI algorithm, correlation over small blocks of symbols is considered. To compute Cns×ns Yeq for a particular block, we use the following covariance matrix Cns×ns X|Xb i = E[[X1...Xib...Xib+(b−1)...Xn]H· [X1...Xib...Xib+(b−1)...Xn]] in eqn. (4). The equalization process colors the noise in the channel. This means that the equalized channel output which is an estimate of the transmitted BPSK symbol is observed in colored noise. This is reflected in the auto-covariance matrix, Cns×ns Yeq , which has non-zero off-diagonal elements. Figure 9 shows that by accounting for the coloring in the noise due to ISI over 2.5 dB gain can be obtained in BLER performance using b = 2. BLER performance improves as the block size increases because colored, or correlated, channels have lower entropy and therefore have higher capacity [5]. Figure 9 also shows that by adding a forward error correcting code a further 4 dB gain can be obtained in terms of BLER. These results show that by applying forward error correction coding and accounting for any coloring, or statistical correla- tion, in the channel over a 6 dB improvement BLER can be obtained over uncoded systems. VI. ROBUSTNESS CONSIDERATIONS In practice, CSI is always subject to error [37]. This is mostly due to the fact that precise channel estimation methods are costly in terms of the number of pilot symbols needed and thus there is a trade off between the percentage of pilot symbols used and the quality of CSI [38]. Another cause of imperfection is quantization of parameters used in the processing of the received signal. Hence, decoders are desired to be as robust to mismatched CSI as possible. In the case of ORBGRAND-AI, this question translates to how imperfect CSI impacts the query order of noise sequences. In the following sections we consider the effect of measurement and quantization error on decoding performance in both the dicode and RFView channels. 5 10 15 Eb/N0 10-6 10-4 10-2 100 BLER Uncoded BPSK b = 1, CRC b = 2, CRC b = 4, CRC b = 1, RLC b = 2, RLC b = 4, RLC Fig. 9: Comparison of BLER for different block sizes, b, for a [132,120] random linear code and [132,120] cyclic- redundancy code with polynomial 0xb41 in Koopman notation [36] MMSE equalization and ORBGRAND-AI decoding in the equalized RFView ISI channel, hRF V . A. Equalized Dicode Channel For the equalized dicode channel, we first explore the effect of measurement error. We assume the estimation error to be additive and normally distributed: hk,est = hk(1 + ϵk) where ϵks variance is known as normalized mean squared error (NMSE). In fact, the estimation error does not only impact the query order of ORBGRAND-AI, but also leads to incorrect equalization resulting in an error floor. Fig. 10 shows the result of the equalized dicode channel for various NMSE values. For a significant NMSE of 0.1, the error floor is clearly visible. To further isolate the effect of a mismatch in the decoder, Fig. 11 displays the degradation of ORBGRAND-AI’s perfor- mance for a quantization error ∆ρ regarding ρ in the decoding process for ρreal = 0.5 where ρ = ρreal+∆ρ. We see that for a considerable mismatch of even ∆ρ = 0.2, the performance degradation is still less than 0.5dB. A reason for the query order’s robustness against imperfect CSI may lie in the fact that for the ordering of potential substitutes, the exact strength of statistical correlation between neighboring symbols is not as important as the fact that they are correlated at all. B. Equalized ISI Channel from RFView Dataset We first investigate the effect of measurement error in the RFView channel by approximating the RFView channel as a second order autoregressive (AR(2)) process. To do this, we fit an AR(2) process to each set of channel coefficients from the matched filter output z′′6,k′ for each sounding signal k′. We denote the resulting matrix of channel coefficients obtained from the AR(2) approximation by ˆh ns×ns AR(2) . We selected an AR(2) process because the performance of a first order au- toregressive (AR(1)) process estimate was poor. The AR(1) channel estimate quickly diverges from the true estimate yielding poor BLER performance. -5 0 5 10 Eb/N0 10-6 10-4 10-2 100 BLER nmse = 0.1 nmse = 0.01 nmse = 0.001 nmse = 0.0001 Fig. 10: ORBGRAND-AI performance degradation due to imperfect equalization in the equalized dicode channel with ρ = 0.75 and normally distributed additive estimation error with variance nmse. We used a [128, 116] RLC and BPSK with b = 4. 0 2 4 6 8 Eb/N0 10-6 10-4 10-2 100 BLER = 0 = 0.1 = 0.2 Fig. 11: ORBRAND-AI’s sensitivity to a CSI quantization error ∆ρ in an equalized dicode channel with ρreal = 0.5. A [128,112] CRC with polynomial 0x9eb2 [36] was used alongside BPSK and a fixed block size b = 4. We wish to model the matched filter output z′′6,k′ as an AR(2) process, i.e. we wish to find coefficients ϕ1 and ϕ2 such that zj′,k′ = ϕ1zj′−1,k′ + ϕ2zj′−2,k′ + ϵj′ for j′ ∈[1, ...6] and where ϵj′ denotes the Gaussian innovation process. We calculate estimates ˆϕ1 and ˆϕ2 of ϕ1 and ϕ2 respectively using the least squares estimate for each sounding signal k′ [ˆϕ1,k′ ˆϕ2,k′]T = ((¯z4×2 k′ )H ¯z4×2 k′ )−1(¯z4×2 k′ )H ˜z4 k′. where ¯z4×2 k′ =   z′′ 1,k′ z′′ 2,k′ ... ... z′′ 4,k′ z′′ 5,k′   and ˜z4 k′ = [z′′ 3,k′ . . . z′′ 6,k′]T . We use ϕ1 and ϕ2 to denote the AR(2) parameters instead of ρ1 and ρ2 as was done in section III because when we compute the least squares estimates ˆϕ1, ˆϕ2 we are not guaranteed to obtain numbers in the range (0, 1). Given initialization conditions ˆz′′ 1,k′ = z′′ 1,k′ and ˆz′′ 2,k′ = z′′ 2,k′, and using ˆρ1,k′ and ˆρ2,k′, we approximate the remaining four coefficients in the matched filter output as ˆzj′,k′ = ˆϕ1,k′ ˆzj′−1,k′ + ˆϕ2,k′ ˆzj′−2,k′ for j > 2. Now we can construct ˆh ns×ns AR(2) using the sampling process outlined previously by sampling from ˆz′′6,k′ instead. We now use the AR(2) estimate, ˆh ns×ns AR(2) , of the channel in the equalization and covariance matrix calculations in place of hns×ns RF V . In Fig. 12, we compare the BLER of the performance of ORBGRAND-AI with perfect CSI and imperfect CSI with the channel approximated with an AR(2) process. We observe that there is high concordance between the case with perfect CSI and the case where we only have access to an AR(2) estimate of the channel. This shows that even with imperfect channel estimates we are able to obtain performance gains with ORBGRAND-AI. 5 10 15 Eb/N0 10-6 10-4 10-2 100 BLER b = 1,h b = 2,h b = 4,h b = 1,h b = 2,h b = 4,h Fig. 12: Comparison of BLER for different block sizes, b, using a [132,120] cyclic-redundancy code with polynomial 0xb41 in Koopman notation [36] using MMSE equalization with perfect CSI, hns×ns RF V , and the AR(2) process approxima- tion of the RFView channel, ˆh ns×ns AR(2) using ORBGRAND-AI decoding. Next, we investigate the effect of quantization on the RFView channel. To quantize the RFView channel, we find the minimum and maximum of both the real and imaginary components of all channel coefficients in hns×ns RF V . Then, we create q′ evenly spaced quantization levels between both the maximum and minimum of the real and imaginary components of the channel coefficients. We then map the original channel coefficients to their quantized counterparts represented by the matrix ˆh ns×ns q′ . Fig. 13 shows that when we do not take into account channel correlation for q′ = 25, i.e. when b = 1, the BLER performance under the quantized scheme plateaus at high Eb/N0. At high Eb/N0 values, the noise introduced by the quantization scheme becomes the dominant source of error. As we consider correlation by increasing the block size, b, for q′ = 25 we find that we recover BLER performance, therefore showing that we can mitigate the effects of quantization noise by accounting for the correlation. For a higher quantization level of q′ = 100 we observe performance similar to the perfect CSI case. 5 10 15 Eb/N0 10-6 10-4 10-2 100 BLER b = 1,h b = 1,h b = 1,h b = 2,h b = 2,h b = 2,h b = 4,h b = 4,h b = 4,h Fig. 13: Comparison of BLER for different block sizes, b, using a [132,120] cyclic-redundancy code with polynomial 0xb41 in Koopman notation [36] MMSE equalization with perfect CSI, hns×ns RF V , and the 25 and 100-level quantization of the RFView channel, ˆh ns×ns q′=25 and ˆh ns×ns q′=100 respectively, using ORBGRAND-AI decoding. VII. CONCLUSION We have presented ORBGRAND-AI which is a decoder that can account for temporal correlation in the channel, thus elimi- nating the need for interleavers. We showed that by accounting for channel correlation using ORBGRAND-AI we can obtain higher block error rate performance than current state-of- the-art methods. By exploiting correlation, we eliminate the need for interleavers thus enabling communications with lower delays and higher throughput. We presented the performance of ORBGRAND-AI under different ISI channel models and under different levels of channel state information. These investigations in the two-tap dicode and RFView channels demonstrated the improved performance of ORBGRAND- AI. A natural extension for this work is to investigate how multiple anntenas can be leveraged. The process of channel sensing introduces correlated measurement error that could be exploited to devise optimal sensing strategies for use with ORBGRAND-AI decoding. REFERENCES [1] K. R. Duffy, M. Grundei, and M. M´edard, “Using Channel Correlation to Improve Decoding – ORBGRAND-AI,” in IEEE Globecom, 2023, doi: 10.1109/GLOBECOM54140.2023.10437763. [2] J. A. Millward, K. R. Duffy, M. Rangaswamy, and M. M´edard, “Using Equalization-Induced Noise Coloring to Improve Error- Correcting Decoding,” in 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024, pp. 1400–1403, doi: 10.1109/IEEECONF60004.2024.10943085. [3] ——, “Enhancing Guessing Random Additive Noise Decoding using Channel Estimation,” in IEEE International Workshop on Signal Pro- cessing Advances in Wireless Communications (SPAWC), 2024, pp. 676– 680, doi: 10.1109/SPAWC60668.2024.10694171. [4] R. G. Gallager, Information Theory and Reliable Communication. New York, NY, USA: John Wiley & Sons, Inc., 1968. [5] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 1991. [6] S. Gogineni, J. R. Guerci, H. K. Nguyen, J. S. Bergin, D. R. Kirk, B. C. Watson, and M. Rangaswamy, “High Fidelity RF Clutter Modeling and Simulation,” IEEE Aerospace and Electronic Systems Magazine, vol. 37, no. 11, pp. 24–43, 2022, doi: 10.1109/MAES.2022.3208862. [7] S. Lin and D. J. Costello, Error Control Coding: Fundamentals and Applications. Pearson/Prentice Hall, 2004. [8] K. R. Duffy, J. Li, and M. M´edard, “Capacity-Achieving Guessing Random Additive Noise decoding,” IEEE Trans. Inf. Theory, vol. 65, no. 7, pp. 4023–4040, 2019, doi: 10.1109/TIT.2019.2896110. [9] W. An, M. M´edard, and K. R. Duffy, “Keep the Bursts and Ditch the Interleavers,” in IEEE GLOBECOM, 2020, doi: 10.1109/TCOMM.2022.3171798. [10] W. An, M. M´edard, and K. R. Duffy, “Keep the Bursts and Ditch the Interleavers,” IEEE Trans. Commun., vol. 70, pp. 3655–3667, 2022, doi: 10.1109/TCOMM.2022.3171798. [11] S. M. Abbas, M. Jalaleddine, and W. J. Gross, “High-Throughput VLSI Architecture for GRAND Markov Order,” in IEEE Workshop Sig. Proc. Sys., 2021. [12] F. Rezaei, D. Galappaththige, C. Tellambura, and S. Herath, “Coding Techniques for Backscatter Communications—A Contemporary Sur- vey,” IEEE Communications Surveys & Tutorials, vol. 25, no. 2, pp. 1020–1058, 2023. [13] W. An, M. Medard, and K. R. Duffy, “Soft Decoding without Soft Demapping with ORBGRAND,” in IEEE ISIT, 2023, pp. 1080–1084, doi: 10.1109/ISIT54713.2023.10206762. [14] W. G. Sullivan, “Specific Information Gain for Interacting Markov Processes,” Probab. Theory Relat. Fields, vol. 37, no. 1, pp. 77–90, 1976, doi: 10.1007/BF00536299. [15] J. T. Lewis, C.-E. Pfister, and W. G. Sullivan, “Entropy, Concentration of Probability and Conditional Limit Theorems,” Markov Process. Relat. Fields, vol. 1, pp. 319–386, 1995. [16] C.-E. Pfister, “Thermodynamical Aspects of Classical Lattice Systems,” Prog. Probab., pp. 393–472, 2002. [17] C.-E. Pfister and W. G. Sullivan, “Large Deviations Estimates for Dy- namical Systems without the Specification Property. Application to the β-shifts,” Nonlinearity, vol. 18, no. 1, p. 237, 2004, doi: 10.1088/0951- 7715/18/1/013. [18] G. Forney, “Burst-Correcting Codes for the Classic Bursty Channel,” IEEE Trans. Commun. Technol., vol. 19, no. 5, pp. 772–781, 1971, doi: 10.1109/TCOM.1971.1090719. [19] R. Van Nee and R. Prasad, OFDM for Wireless Multimedia Communi- cations. Artech House, Inc., 2000. [20] A. Peled and A. Ruiz, “Frequency Domain Data Transmission using Reduced Computational Complexity Algorithms,” in IEEE ICASSP, vol. 5, 1980, pp. 964–967, doi: 10.1109/ICASSP.1980.1171076. [21] W. Henkel, G. Taubock, P. Odling, P. O. Borjesson, and N. Petersson, “The Cyclic Prefix of OFDM/DMT-An Analysis,” in International Zurich Seminar on Broadband Communications Access-Transmission- Networking, 2002, pp. 22–22, doi: 10.1109/IZSBC.2002.991762. [22] R. F. Fischer, C. Windpassinger, A. Lampe, and J. B. Huber, “Tomlinson-Harashima Precoding in Space-Time Transmission for Low- Rate Backward Channel,” in 2002 International Zurich Seminar on Broadband Communications Access-Transmission-Networking (Cat. No. 02TH8599). IEEE, 2002, pp. 7–7, 10.1109/IZSBC.2002.991747. [23] M. E. Austin, “Decision-Feedback Equalization for Digital Communi- cation over Dispersive Channels,” 1967. [24] B. D. Anderson and J. B. Moore, Optimal Filtering. Courier Corpora- tion, 2005. [25] B. Choi and T. M. Cover, “An Information-Theoretic Proof of Burg’s Maximum Entropy Spectrum,” Prof. IEEE, vol. 72, no. 8, pp. 1094– 1096, 1984, doi: 10.1109/PROC.1984.12981. [26] R. H. Shumway, D. S. Stoffer, and D. S. Stoffer, Time Series Analysis and its Applications, 4th ed. Springer, 2017. [27] S. Verd´u and T. S. Han, “A General Formula for Channel Capacity,” IEEE Transactions on Information Theory, vol. 40, no. 4, pp. 1147– 1157, 1994, doi: 10.1109/18.335960. [28] K. R. Duffy, W. An, and M. M´edard, “Ordered Reliability Bits Guessing Random Additive Noise Decoding,” IEEE Trans. Signal Process., vol. 70, pp. 4528–4542, 2022, doi: 10.1109/ICASSP39728.2021.9414615. [29] A. Riaz, A. Yasar, F. Ercan, W. An, J. Ngo, K. Galligan, M. M´edard, K. R. Duffy, and R. T. Yazicigil, “A sub-0.8-pj/bit Universal Soft- Detection Decoder Using ORBGRAND,” IEEE Journal of Solid-State Circuits, 2024, doi: 10.1109/JSSC.2024.3502240. [30] S. M. Abbas, T. Tonnellier, F. Ercan, M. Jalaleddine, and W. J. Gross, “High-Throughput and Energy-Efficient VLSI Architecture for Ordered Reliability Bits GRAND,” IEEE Trans. Very Large Scale Integr. Syst., vol. 30, no. 6, pp. 681–693, 2022, doi: 10.1109/TVLSI.2022.3153605. [31] C. Condo, “A Fixed Latency ORBGRAND Decoder Architecture with LUT-Aided Error-Pattern Scheduling,” IEEE Trans. Circuits Syst. I Regul. Pap., 2022, doi: 10.1109/TCSI.2022.3150583. [32] A. Riaz, A. Yasar, F. Ercan, W. An, J. Ngo, K. Galligan, M. M´edard, K. R. Duffy, and R. T. Yazicigil, “A Sub-0.8pJ/b 16.3Gbps/mm2 Uni- versal Soft-Detection Decoder Using ORBGRAND in 40nm CMOS,” in IEEE ISSCC, 2023, doi: 10.1109/ISSCC42615.2023.10067519. [33] J. Xiao, Y. Zhou, S. Song, and Z. Wang, “A Low-Latency and Area- Efficient ORBGRAND Decoder for Polar Codes,” in IEEE ICTC, 2023, pp. 10–15, doi: 10.1109/ICTC57116.2023.10154861. [34] J. G. Proakis, Digital Communications. NY, USA: McGraw-Hill, 2001. [35] A. Riaz, A. Yasar, F. Ercan, W. An, J. Ngo, K. Galligan, M. M´edard, K. R. Duffy, and R. T. Yazicigil, “A sub-0.8-pJ/bit Universal Soft- Detection Decoder Using ORBGRAND,” IEEE J. Solid-State Circuits, to appear. [36] P. Koopman, “Best CRC Polynomials,” Available at https://users.ece. cmu.edu/∼koopman/crc/ (2025/05/31). [37] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge university press, 2005. [38] S. Coleri, M. Ergen, A. Puri, and A. Bahai, “Channel Estima- tion Techniques Based on Pilot Arrangement in OFDM systems,” IEEE Trans Brodcast, vol. 48, no. 3, pp. 223–229, 2002, doi: 10.1109/TBC.2002.804034.
Decoding in the presence of ISI without interleaving - ORBGRAND-AI Ken R. Duffy Dept. of ECE & Dept. Mathematics Northeastern University Boston, USA Moritz Grundei Electrical and Computer Engineering Technical . Millward Research Laboratory of Electronics Massachusetts ́edard Research Laboratory of Electronics Massachusetts -Inter symbol interference (ISI), which occurs in a wide variety of channels, is a result of time dispersion. It can be mitigated by equalization which results in noise coloring. For such colored noise, we propose a decoder called Ordered Reliability Bit Guessing Random Additive Noise Decoding (ORBGRANDAI) which is inspired by the development of approximate independence in statistical physics. By foregoing interleaving, ORBGRAND-AI can deliver the same, or lower, block error rate (BLER) for the same amount of energy per information bit in an ISI channel as a state-of-the-art soft input decoder, such as Cyclic Redundancy Check Assisted-Successive Cancellation List (CA-SCL) decoding, with an interleaver. To assess the decoding performance of ORBGRAND-AI, we consider delay tap models and their associated colored noise. In particular, we examine a two-tap dicode ISI channel as well as an ISI channel derived from data from RFView, a physics-informed modeling and simulation tool. We investigate the dicode and RFView channel under a variety of imperfect channel state information assumptions and show that a second order autoregressive model adequately represents the RFView channel effect. Index Terms-Soft input, correlation, interleavers, URLLC, GRAND I. INTRODUCTION Inter symbol interference (ISI) occurs in many modern communication systems and is mostly handled by equalization techniques that create correlation in the noise. Interleaving is a technique which diminishes channel correlation to provide white noise to the decoder. It can be shown, however, that correlated noise has lower entropy [4], [5] than uncorrelated noise, which means that the original correlated channel has Preliminary versions of this paper were presented in the 2023 Globecom, 2024 SPAWC and Asilomar conferences [1]-[3]. Due to space limitations, those papers made succinct observations about the impact of correlated ISI on decoder performance. This paper extends the work of the conference papers by providing a more well-rounded treatment of the problem, via an explicit demonstration of the fact that the entropy of the correlated ISI channel is less than that of the uncorrelated ISI channel. Furthermore, the rationale for approximating the interference generated using RFVIEW as an AR(2) process is established. The numerical simulations are more comprehensive compared to the conference submissions. higher capacity. Therefore, by interleaving we are missing out on decoding performance gains. Here we realize the improved decoding performance afforded by preserving and making use of channel correlation using our proposed decoder Ordered Reliability Bit Guessing Random Additive Noise Decoding (ORBGRAND-AI). To demonstrate the real-world applicability of ORBGRAND-AI, we show ORBGRAND-AI's performance in a simple two-tap dicode ISI channel and in an ISI channel generated with data from RFView, a high fidelity RF simulation and modeling tool [6]. We show that with both perfect channel state information (CSI) and imperfect CSI, using ORBGRAND-AI in both the dicode and RFView channels provides decoding performance improvements. Before delving into the details of ORBGRAND-AI, it is worthwhile considering why interleavers are currently used. The need for interleavers arises in soft detection decoding techniques which typically assume that each bit in a communication is impacted independently by noise, resulting in probabilistically independent per-bit reliabilities [7]. As realworld noise and interference are subject to temporal correlations that result in correlated bit reliabilities, in the absence of interleaving the mismatched input to decoders would result in degraded error correction performance. Since correlated noise has lower entropy than white noise of the same energy, it is counterproductive to transform this noise into white noise by interleaving given that interleaving is detrimental to rate and reliability. The question of how to practically capture and make use of noise correlation remains. Our decoding approach, ORBGRAND-AI, exploits noise correlation in a low complexity manner using techniques and theory inspired by the development of approximate independence in statistical physics. With its approach to decoding by identifying noise-effects and inferring code-words, Guessing Random Additive Noise Decoding (GRAND) [8] is well-positioned to embrace noise 16 Oct 2025 correlation and decode without interleaving. It has been shown that in the hard-detection setting GRAND can exploit statistically described Markovian correlation structures to enhance decoding performance [9]-[12], but the approach taken there cannot be carried over to the soft detection setting, which requires distinct innovation. By adopting techniques from thermodynamic probability theory to manage dependence and combining them with symbol-level Ordered Reliability Bit Guessing Random Additive Noise Decoding (ORBGRAND) [13], we demonstrate that it is possible to accurately soft-detection decode any moderate redundancy code without the need for interleaving. By removing the interleaver, decoding performance can be enhanced by multiple dB, while complexity and latency are reduced, offering a potential route forward for delivering URLLC. The approach uses Approximate Independence (AI) [14]- [17] with ORBGRAND (ORBGRAND-AI) to obtain these large gains in decoding performance by leveraging dependence over small, manageable neighborhoods of symbols. Unlike code-centric decoding methods, such as Cyclic Redundancy Check Code Assisted Successive Cancellation List (CA-SCL) decoding, which is specifically designed to decode CA-Polar codes, ORBGRAND-AI has no limitations regarding the code structure. We show that, contrary to common belief that codes need to be specifically designed for correlated channels [18], ORBGRAND-AI can decode any code in channels that exhibit noise correlations. This paper is structured as follows: Sec. II introduces the two channel models that are considered. In Sec. III, we then provide a heuristic argument why treating small blocks of communications as approximately independent from one another can provide significant gains in decoding performance for correlated channels. Sec. IV provides a full description of ORBGRAND-AI. Sec. V provides a performance evaluation of of ORBGRAND-AI benchmarked to decoders including CA-SCL and the original ORBGRAND. For the performance evaluation, we consider both a standard two-tap dicode ISI channel, for which a first order autoregressive (AR(1)) model suffices, and a six tap model informed by RFView, which necessitates an second order autoregressive (AR(2)) model. Section VI considers robustness to channel mischaracterisation in a variety of settings, establishing graceful degredation in presence of mismatch. Closing comments can be found in Section VII. II. ISI CHANNEL DESCRIPTION Unless otherwise stated, we shall denote random variables by capital letters and sample values or constants by lower case letters. Vectors and matrices we shall denote by underlining. We denote the dimensions of vectors and matrices using superscripts and specific coordinates within the matrices and vectors with subscripts. The standard structure of an interleaved communication system is shown in Fig. 1. For the channel, we consider a linear model Yk′ = X j≥0 hk′,jXk′-j + Nk′, where k′ ∈Z+ denotes the symbol time scale, Xk′ is the complex-valued transmitted symbol, Nk′ is complex white Gaussian noise, and hk′ denotes the discrete representation of the channel impulse response at the time of the transmission. We use k′ as we use k later to denote the information bits in a codeword as per convention. ISI is generally the result of time dispersion imparted on the transmitted signal. When we consider a sequence of symbols of length ns, the model becomes Y ns = hns×nsXns + N ns, (1) where Y ns = (Y1, Y2, ..., Yns)T denotes the sequence of transmitted symbols, N ns denotes a vector of additive white Gaussian noise with auto-covariance Cns×ns N = σ2 NIns×ns and hns×ns is the channel matrix whose elements are the hk′,js. We denote the identity matrix of size ns × ns by Ins×ns. When we consider codewords of length n, we denote the vectors corresponding to the received code word and noise effect by Y n, Xn and N n respectively. There are several ways to tackle ISI. For example, in OFDM or discrete multitone (DMT) systems, a cyclic prefix may be added to cancel the ISI at the expense of reducing the data rate of the system [19]-[21]. Other popular methods for mitigating ISI include nonlinear equalization strategies such as Tomlinson Harashima precoding and decision feedback equalization [22], [23]. These methods have their own advantages and disadvantages, but they mostly entail higher signal processing complexity. In this work, we employ a linear equalizer at the receiver that introduces noise coloring as a consequence of removing ISI. In the case of the interleaved system, the colored noise is dispersed and turned into white noise from the point of view of a single decoder. A. Delay Tap Channel Model We use a delay line model to generate the channel profile in terms of its paths indexed by d ranging from 1 to pk′,j where k′ denotes the symbol time index and j denotes the delay due to ISI. Each path d has complex attenuation {ak′,j,d} and delays {τk′,j,d}. Given a sampling frequency fs we can compute the time discrete channel impulse response: hk′,j = Ppk′,j d=1 ak′,j,dsinc (τk′,j,dfs -k′). A representative value for delay spread, defined as the maximum difference among the τds is 1μs for terrestrial outdoor systems. A special case of the delay tap channel model is the dicode partial response channel. It is an essential example of ISI where there are only two channel taps: hk′,j =      1 j = 0, -ρ j = 1, 0 otherwise, with ρ ∈[0, 1]. S/P encoder encoder encoder interleaver modulator channel equalizer demodulator deinterleaver decoder decoder decoder . . . . . . . . . . . . Fig. 1: Signal processing chain of a bit interleaved communication system Equalization through zero forcing removes ISI but leads to colored noise, { ̃Nk′}, with an autoregressive description ̃Nk′ = ρ ̃Nk′+Nk′ where Nk′ denotes the Gaussian noise prior to equalization. This type of colored noise is commonly referred to as Gauss-Markov noise and it exhibits exponentially decaying temporal correlation strength: E[| ̃Nk′ ̃Ni′|] ∝ρ|k′-i′| where i′ ∈Z+ is another variable denoting the symbol time scale (we use i later to denote the block index when we discuss ORBGRAND-AI). B. RFView ISI Channel We consider an ISI channel generated using channel impulse response data from RFView, a high-fidelity, physics-based RF simulation and modelling tool [6]. The RFView dataset consists of the in-phase and quadrature (I-Q) channel clutter impulse response of a mixed terrain environment with some discrete clutter sources, such as buildings. The dataset contains 30 coherent processing intervals (CPIs). Each CPI consists of a 3D data cube comprised of 32 antenna channels, 64 pulses and 2335 impulse response samples sampled at 10 MHz, Fig. 2. 2335 Impulse response samples 64 Pulses 32 Antennas Fig. 2: Illustration of RFView dataset for a single CPI. We process the data from each data cube corresponding to a particular CPI for the first antenna element only, highlighted in yellow, as we consider single-input single-output communications scenarios. To provide a channel estimate consistent with our earlier ISI channel definition, we process each data cube corresponding to a particular CPI. As radar data is usually comprised of several pulses, we treat the pulse axis within each data cube as "slow time" and we assume the channel impulse response changes from pulse to pulse (i.e. the channel impulse response from time sample 1 to 2335 corresponds to the impulse response at pulse index 1, and the channel impulse response from time sample 2336 to 4760 (= 2 × 2335) corresponds to the channel impulse response at pulse 2, etc.). Additionally, we only process data from a single antenna element (the first element along the antenna dimension in Fig. 2) as we consider the single-input single-output setting. To obtain the estimate of each coefficient hk′,j, we transmit a complex passband pulse through the channel impulse response corresponding to a particular pulse index. We set fs = 10MHz and the length of the pulse to L = 467, which ensures that we obtain a 6-tap ISI channel, i.e. j ∈{1, ..., 6} for all k′ for the RFView channel. We set the carrier frequency, fc, of the complex passband sounding signal to fS/L. We denote the complex passband pulse by ul where ul = ( a exp(i2πfcl) l = 1, . . . , L, 0 otherwise. The constant a is selected so thatPL l=1 |ul|2 = 1 and we denote the full vector of the sounding signal by uL. The channel response given by RFView is gm,μ, which is a matrix that takes into account the m = 2335 impulse response samples sampled at 10MHz and the μ = 64 pulses. For each r ∈{1, ...μ}, we are able to transmit five sounding signals per gm,r and a total of 5 × 64 = 320 sounding signals per CPI. For each gm,r, as m/L = 5 we can transmit 5 sounding signals. We transmit each sounding signal through the channel separately so that we are able to measure the individual effect of each sounding signal and therefore construct the 6-tap ISI channel. We set r(k′) = k′ + (5 -k′mod5) 5 to account for the fact that we are able to transmit 5 sounding signals per gm,r(k′), but wish to isolate the output response of each sounding signal individually to be able to construct the ISI coefficients. We account for the fact that each sounding signal will be delayed by L samples more than the previous one later when we construct the coefficients, hk′,j by sampling from the matched filter output of the channel. From a data processing perspective we do not need to account for the delays in the sounding signals because adding the delay, effectively zero-padding, does not affect the result of the channel output response convolutions. We account for the delay later when we go to construct hns×ns RF V . The noiseless output zζ,320 of the (k′)th sounding signal k′ ∈{1, ..., 320} and the channel response at pulse index r(k′) is given by the convolution zk′ = gm,r(k′) ∗uL with components zq,k′ = m X κ=1 gκ,r(k′)uq-κ. By the properties of convolution, we have that ζ = m+L-1. We next perform matched filtering on uL to obtain z′ k′ = zk′ ∗(uL)′ for each sounding signal k′ where (uL)′ denotes the matched filter response of uL as defined in [24], where z′ k′ has components z′ q,k′ = ζ X κ=1 zκ,k′u′ q-κ. (2) The full matrix of the matched filter output is z′η,5μ, where η is 2L + m -2 = 2 × 467 + 2335 -2 = 3267. We next sample the elements of z′η,5μ at intervals of L along the impulse response axis (i.e the axis with dimension η) to obtain z′′6,5μ. The η axis has now been reduced to a dimension of ⌊η/L⌋= 6. The samples of z′′6,5μ become the ISI coefficients hk′,j for j ∈{1, ..., 6} via hk′,j = z′′ j,k′-(j-1). We next construct the matrix hns×ns RF V using the hk′,js obtained above by: hns×ns RF V =   h1,1 0 0 0 . . . 0 h2,2 h2,1 0 0 . . . 0 h3,3 h3,2 h3,1 0 . . . 0 ... ... ... ... ... ... 0 . . . 0 hns,6 . . . hns,1   . We complete this process for each of the 30 CPIs. In our simulations, we uniformly sample from these 30 matrices to obtain the channel realization hns×ns RF V . III. A THEORETICAL HEURISTIC We will now argue through the example of an equalized dicode channel (or equivalently Gauss-Markov noise) that there is a significant gain to be realized when we only consider correlations across small neighborhoods (blocks) of received symbols and treat the blocks themselves as independent with regard to one another. With variance σ2 N and correlation coefficient ρ ∈(0, 1], assume that the continuous noise sequence {N ns} is a zeromean complex-valued Gaussian with auto-covariance matrix Cns×ns N ∈Rns×ns having entries CNk′,i′ = σ2 Nρ|k′-i′|. The normalized differential entropy rate of N ns can be calculated as log(2eπ) + 1 n log(|Cns×ns N |) = log(2eπσ2 N) + 1 -1 ns log(1 -ρ2), e.g. eq. (9.34) [5]. The final term encapsulates the decrease in entropy that arises from channel correlation as log(1-ρ2) 0. In a heavily interleaved channel ρ = 0 and the final term is zero. If the channel was truly independent for each block of b bits, then CNk′,i′ would be 0 for |k′ -i′| > b and the normalized differential entropy rate would instead be log(2eπσ2 N) + 1 -1 b log(1 -ρ2), where the only difference is the multiplier of the final term, which has changed from (1-1/ns) to (1-1/b). Thus, in this setting, to get more than half of the reduction in normalized differential entropy, a block-size of b = 2 suffices, suggesting significant gains should be available with small blocks. The principle of treating neighboring blocks as approximately independent random variables originates from considerations in thermodynamic probability theory where stochastic processes are approximated by product measures across boundaries [14]-[17]. We next show that we can expect similar performance gains in the case of a second-order Gauss-Markov process. We analyze the entropy rate of a second-order Gauss-Markov process because, as per Burg's theorem, the entropy of a stochastic-process subject to α ∈Z+ covariance constraints is maximized by a α-th order Gauss-Markov process [25]. We proceed by calculating the normalized differential entropy rate of a second-order Gauss-Markov process which has the following covariance constraints: E[Nk′Nk′] = σ2 N E[Nk′Nk′+1] = ρ1σ2 N E[Nk′Nk′+2] = ρ2σ2 N for k′ = 1, 2, 3.... We note that ρ1 and ρ2 denote the correlation coefficients and in our physical model ρ1, ρ2 ∈(0, 1]. For i′ > 2, the cross-covariance terms are E[Nk′Nk′+i′] = β1E[Nk′Nk′+i′-1] + β2E[Nk′Nk′+i′-2] where β1, β2 are the coefficients of the time-series and can be found by solving the Yule-Walker equations [26]. Using the recursive expression for E[Nk′Nk′+i′], we find that the determinant of the auto-covariance matrix for ns ≥4 is |Cns×ns N | = -(ρ2 -1)ns-2(1 -2ρ2 1 + ρ2)ns-2(σ2 N)ns (ρ2 1 -1)ns-3 . Recall that ρ1 and ρ2 denote the correlation coefficients which must be numbers between 0 and 1. This means that when ns is odd -(ρ2-1)ns-2 is positive and (ρ2 1-1)ns-3 is positive, and when ns is even -(ρ2 -1)ns-2 is negative and (ρ2 1 -1)ns-3 is negative. Since Cns×ns N is a positive semi-definite matrix, we now need to check that 1 -2ρ2 1 + ρ2 is positive (i.e ρ2 1 k bit code-word cn = (c1, . . . , cn) ∈{0, 1}n. For spectral efficiency, most communication systems employ high-order modulation where each transmitted symbol communicates multiple bits of information [34]. If a modulation scheme is employed with a complex constellation of size |χ| = 2ms, the n coded bits are translated into ns = n/ms symbols by sequentially mapping each collection of ms bits to the corresponding higher order symbol. In the absence of interleaving, this results in the transmission of the higher order sequence mod(cn) = Xns = (X1, . . . , Xns) ∈χns. Transmissions are impacted by channel effects and noise that cause the received signal sequence to be perturbed. The complex received vector can be written as Y ns = (Y1, . . . , Yns) = hns×nsXns + N ns, where we assume that the receiver has perfect channel state information (CSI), and so knows both hns×ns ∈Cns×ns and possesses a probabilistic description of N ns, e.g. that it is complex-valued white Gaussian noise with known variance. In Sec. VI we will show to what degree inaccurate assumptions about the noise model or CSI impact ORBGRAND-AI's performance. For ORBGRAND-AI's operation, each received signal corresponding to a coded transmission is split into nonoverlapping blocks of b symbols, where for notational ease we assume ns/b is an integer: Y ns = ns symbols z }| { (Y1, . . . , Yb | Yb+1, . . . , Y2b | {z } b symbols | · · · | Yns-b+1, . . . , Yns) = (Y b 1, . . . , Y b ns/b). Each block i ∈{1, . . . , ns/b} of b symbols, Y b i, is treated separately, with the likelihoods pXb i|Y b i(tb i|Y b i) for each tb i ∈χb being evaluated using the channel model and CSI. We define tb,∗ i = arg max pXb i|Y b i(tb i|Y b i) to be the symbol-level hard demodulation of the block Y b i = (Y(i-1)b+1, . . . , Yib), which takes channel memory over the block into account. The core approximation when evaluating the posterior probability of a noise effect sequence tns ∈χns describing symbols to be swapped is that the blocks are assumed to be independent, resulting in the following expression pXns|Y ns (tns|Y ns) = ns/b Y i=1 pXb i|Y b i(tb,∗ i |Y b i) ns/b Y i=1 pXb i|Y b i(tb i|Y b i) pXb i|Y b i(tb,∗ i |Y bi) , (3) which has a common term associated to the sequence of all hard-demodulated blocks and each noise effect sequence that swaps a block experiences a likelihood penalty. Algorithm 1 ORBGRAND-AI inputs: The received signal Y ns, abandonment threshold τ ′, channel statistics Ψ and a codebook membership check function Φ. Inputs: Y ns, Φ, τ ′, Ψ Output: cn,∗ d′ ←0 wμ ←compute likelihoods for substitution symbol blocks while d′ 2. Now we can construct ˆh ns×ns AR(2) using the sampling process outlined previously by sampling from ˆz′′6,k′ instead. We now use the AR(2) estimate, ˆh ns×ns AR(2) , of the channel in the equalization and covariance matrix calculations in place of hns×ns RF V . In Fig. 12, we compare the BLER of the performance of ORBGRAND-AI with perfect CSI and imperfect CSI with the channel approximated with an AR(2) process. We observe that there is high concordance between the case with perfect CSI and the case where we only have access to an AR(2) estimate of the channel. This shows that even with imperfect channel estimates we are able to obtain performance gains with ORBGRAND-AI. 5 10 15 Eb/N0 10-6 10-4 10-2 100 BLER b = 1,h b = 2,h b = 4,h b = 1,h b = 2,h b = 4,h Fig. 12: Comparison of BLER for different block sizes, b, using a [132,120] cyclic-redundancy code with polynomial 0xb41 in Koopman notation [36] using MMSE equalization with perfect CSI, hns×ns RF V , and the AR(2) process approximation of the RFView channel, ˆh ns×ns AR(2) using ORBGRAND-AI decoding. Next, we investigate the effect of quantization on the RFView channel. To quantize the RFView channel, we find the minimum and maximum of both the real and imaginary components of all channel coefficients in hns×ns RF V . Then, we create q′ evenly spaced quantization levels between both the maximum and minimum of the real and imaginary components of the channel coefficients. We then map the original channel coefficients to their quantized counterparts represented by the matrix ˆh ns×ns q′ . Fig. 13 shows that when we do not take into account channel correlation for q′ = 25, i.e. when b = 1, the BLER performance under the quantized scheme plateaus at high Eb/N0. At high Eb/N0 values, the noise introduced by the quantization scheme becomes the dominant source of error. As we consider correlation by increasing the block size, b, for q′ = 25 we find that we recover BLER performance, therefore showing that we can mitigate the effects of quantization noise by accounting for the correlation. For a higher quantization level of q′ = 100 we observe performance similar to the perfect CSI case. 5 10 15 Eb/N0 10-6 10-4 10-2 100 BLER b = 1,h b = 1,h b = 1,h b = 2,h b = 2,h b = 2,h b = 4,h b = 4,h b = 4,h Fig. 13: Comparison of BLER for different block sizes, b, using a [132,120] cyclic-redundancy code with polynomial 0xb41 in Koopman notation [36] MMSE equalization with perfect CSI, hns×ns RF V , and the 25 and 100-level quantization of the RFView channel, ˆh ns×ns q′=25 and ˆh ns×ns q′=100 respectively, using ORBGRAND-AI decoding. VII. CONCLUSION We have presented ORBGRAND-AI which is a decoder that can account for temporal correlation in the channel, thus eliminating the need for interleavers. We showed that by accounting for channel correlation using ORBGRAND-AI we can obtain higher block error rate performance than current state-ofthe-art methods. By exploiting correlation, we eliminate the need for interleavers thus enabling communications with lower delays and higher throughput. We presented the performance of ORBGRAND-AI under different ISI channel models and under different levels of channel state information. These investigations in the two-tap dicode and RFView channels demonstrated the improved performance of ORBGRANDAI. A natural extension for this work is to investigate how multiple anntenas can be leveraged. The process of channel sensing introduces correlated measurement error that could be exploited to devise optimal sensing strategies for use with ORBGRAND-AI decoding. REFERENCES [1] K. R. Duffy, M. Grundei, and M. M ́edard, "Using Channel Correlation to Improve Decoding - ORBGRAND-AI," in IEEE Globecom, 2023, [2] J. A. Millward, K. R. Duffy, M. Rangaswamy, and M. M ́edard, "Using Equalization-Induced Noise Coloring to Improve ErrorCorrecting Decoding," in 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024, pp. 1400-1403, [3] --, "Enhancing Guessing Random Additive Noise Decoding using Channel Estimation," in IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2024, pp. 676680, [4] R. G. Gallager, Information Theory and Reliable Communication. New York, NY, USA: John Wiley & Sons, Inc., 1968. [5] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 1991. [6] S. Gogineni, J. R. Guerci, H. K. Nguyen, J. S. Bergin, D. R. Kirk, B. C. Watson, and M. Rangaswamy, "High Fidelity RF Clutter Modeling and Simulation," IEEE Aerospace and Electronic Systems Magazine, vol. 37, no. 11, pp. 24-43, 2022, [7] S. Lin and D. J. Costello, Error Control Coding: Fundamentals and Applications. Pearson/Prentice Hall, 2004. [8] K. R. Duffy, J. Li, and M. M ́edard, "Capacity-Achieving Guessing Random Additive Noise decoding," IEEE Trans. Inf. Theory, vol. 65, no. 7, pp. 4023-4040, 2019, [9] W. An, M. M ́edard, and K. R. Duffy, "Keep the Bursts and Ditch the Interleavers," in IEEE GLOBECOM, 2020, [10] W. An, M. M ́edard, and K. R. Duffy, "Keep the Bursts and Ditch the Interleavers," IEEE Trans. Commun., vol. 70, pp. 3655-3667, 2022, [11] S. M. Abbas, M. Jalaleddine, and W. J. Gross, "High-Throughput VLSI Architecture for GRAND Markov Order," in IEEE Workshop Sig. Proc. Sys., 2021. [12] F. Rezaei, D. Galappaththige, C. Tellambura, and S. Herath, "Coding Techniques for Backscatter Communications-A Contemporary Survey," IEEE Communications Surveys & Tutorials, vol. 25, no. 2, pp. 1020-1058, 2023. [13] W. An, M. Medard, and K. R. Duffy, "Soft Decoding without Soft Demapping with ORBGRAND," in IEEE ISIT, 2023, pp. 1080-1084, [14] W. G. Sullivan, "Specific Information Gain for Interacting Markov Processes," Probab. Theory Relat. Fields, vol. 37, no. 1, pp. 77-90, 1976, [15] J. T. Lewis, C.-E. Pfister, and W. G. Sullivan, "Entropy, Concentration of Probability and Conditional Limit Theorems," Markov Process. Relat. Fields, vol. 1, pp. 319-386, 1995. [16] C.-E. Pfister, "Thermodynamical Aspects of Classical Lattice Systems," Prog. Probab., pp. 393-472, 2002. [17] C.-E. Pfister and W. G. Sullivan, "Large Deviations Estimates for Dynamical Systems without the Specification Property. Application to the β-shifts," Nonlinearity, vol. 18, no. 1, p. 237, 2004, 7715/18/1/013. [18] G. Forney, "Burst-Correcting Codes for the Classic Bursty Channel," IEEE Trans. Commun. Technol., vol. 19, no. 5, pp. 772-781, 1971, [19] R. Van Nee and R. Prasad, OFDM for Wireless Multimedia Communications. Artech House, Inc., 2000. [20] A. Peled and A. Ruiz, "Frequency Domain Data Transmission using Reduced Computational Complexity Algorithms," in IEEE ICASSP, vol. 5, 1980, pp. 964-967, [21] W. Henkel, G. Taubock, P. Odling, P. O. Borjesson, and N. Petersson, "The Cyclic Prefix of OFDM/DMT-An Analysis," in International Zurich Seminar on Broadband Communications Access-TransmissionNetworking, 2002, pp. 22-22, [22] R. F. Fischer, C. Windpassinger, A. Lampe, and J. B. Huber, "Tomlinson-Harashima Precoding in Space-Time Transmission for LowRate Backward Channel," in 2002 International Zurich Seminar on Broadband Communications Access-Transmission-Networking (Cat. No. 02TH8599). IEEE, 2002, pp. 7-7, 10.1109/IZSBC.2002.991747. [23] M. E. Austin, "Decision-Feedback Equalization for Digital Communication over Dispersive Channels," 1967. [24] B. D. Anderson and J. B. Moore, Optimal Filtering. Courier Corporation, 2005. [25] B. Choi and T. M. Cover, "An Information-Theoretic Proof of Burg's Maximum Entropy Spectrum," Prof. IEEE, vol. 72, no. 8, pp. 10941096, 1984, [26] R. H. Shumway, D. S. Stoffer, and D. S. Stoffer, Time Series Analysis and its Applications, 4th ed. Springer, 2017. [27] S. Verd ́u and T. S. Han, "A General Formula for Channel Capacity," IEEE Transactions on Information Theory, vol. 40, no. 4, pp. 11471157, 1994, [28] K. R. Duffy, W. An, and M. M ́edard, "Ordered Reliability Bits Guessing Random Additive Noise Decoding," IEEE Trans. Signal Process., vol. 70, pp. 4528-4542, 2022, [29] A. Riaz, A. Yasar, F. Ercan, W. An, J. Ngo, K. Galligan, M. M ́edard, K. R. Duffy, and R. T. Yazicigil, "A sub-0.8-pj/bit Universal SoftDetection Decoder Using ORBGRAND," IEEE Journal of Solid-State Circuits, 2024, [30] S. M. Abbas, T. Tonnellier, F. Ercan, M. Jalaleddine, and W. J. Gross, "High-Throughput and Energy-Efficient VLSI Architecture for Ordered Reliability Bits GRAND," IEEE Trans. Very Large Scale Integr. Syst., vol. 30, no. 6, pp. 681-693, 2022, [31] C. Condo, "A Fixed Latency ORBGRAND Decoder Architecture with LUT-Aided Error-Pattern Scheduling," IEEE Trans. Circuits Syst. I Regul. Pap., 2022, [32] A. Riaz, A. Yasar, F. Ercan, W. An, J. Ngo, K. Galligan, M. M ́edard, K. R. Duffy, and R. T. Yazicigil, "A Sub-0.8pJ/b 16.3Gbps/mm2 Universal Soft-Detection Decoder Using ORBGRAND in 40nm CMOS," in IEEE ISSCC, 2023, [33] J. Xiao, Y. Zhou, S. Song, and Z. Wang, "A Low-Latency and AreaEfficient ORBGRAND Decoder for Polar Codes," in IEEE ICTC, 2023, pp. 10-15, [34] J. G. Proakis, Digital Communications. NY, USA: McGraw-Hill, 2001. [35] A. Riaz, A. Yasar, F. Ercan, W. An, J. Ngo, K. Galligan, M. M ́edard, K. R. Duffy, and R. T. Yazicigil, "A sub-0.8-pJ/bit Universal SoftDetection Decoder Using ORBGRAND," IEEE J. Solid-State Circuits, to appear. [36] P. Koopman, "Best CRC Polynomials," Available at https://users.ece. cmu.edu/∼koopman/crc/ (2025/05/31). [37] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge university press, 2005. [38] S. Coleri, M. Ergen, A. Puri, and A. Bahai, "Channel Estimation Techniques Based on Pilot Arrangement in OFDM systems," IEEE Trans Brodcast, vol. 48, no. 3, pp. 223-229, 2002,
2510.14944
MetaBench: A Multi-task Benchmark for Assessing LLMs in Metabolomics Yuxing Lu1,2, Xukai Zhao3, J. Ben Tamo4, Micky C. Nnamdi4, Rui Peng2, Shuang Zeng1,2, Xingyu Hu5, Jinzhuo Wang2,*, May D. Wang 1,4,*, 1Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University 2College of Future of Technology, Peking University 3School of Architecture, Tsinghua University 4School of Electrical and Computer Engineering, Georgia Institute of Technology 5School of Computer Science, Georgia Institute of Technology Correspondence: wangjinzhuo@pku.edu.cn, maywang@gatech.edu Abstract Large Language Models (LLMs) have demon- strated remarkable capabilities on general text; however, their proficiency in specialized sci- entific domains that require deep, intercon- nected knowledge remains largely uncharac- terized. Metabolomics presents unique chal- lenges with its complex biochemical pathways, heterogeneous identifier systems, and frag- mented databases. To systematically evalu- ate LLM capabilities in this domain, we in- troduce MetaBench, the first benchmark for metabolomics assessment. Curated from au- thoritative public resources, MetaBench evalu- ates five capabilities essential for metabolomics research: knowledge, understanding, ground- ing, reasoning, and research. Our evalua- tion of 25 open- and closed-source LLMs reveals distinct performance patterns across metabolomics tasks: while models perform well on text generation tasks, cross-database identifier grounding remains challenging even with retrieval augmentation. Model perfor- mance also decreases on long-tail metabolites with sparse annotations. With MetaBench, we provide essential infrastructure for developing and evaluating metabolomics AI systems, en- abling systematic progress toward reliable com- putational tools for metabolomics research. 1 Introduction Large Language Models (LLMs) are being rapidly adopted across metabolomics research, driven by their demonstrated success in adjacent biomedi- cal domains (Liu et al., 2025; Bekbergenova et al., 2025). Biomedical LLMs have transformed protein structure prediction, clinically assist with diagno- sis and treatment planning, and chemistry-focused systems support reaction prediction and molecular design (Wang et al., 2023; Zhao et al., 2023; Hu and Lu, 2024). Research groups now routinely utilize LLMs for tasks ranging from analyzing experiment results to generating study proposals. However, this rapid adoption has outpaced systematic evalu- ation: we lack a comprehensive understanding of which metabolomics tasks LLMs can reliably per- form, where they fail, and why. This evaluation gap poses significant risks for a field where incorrect metabolite assignments or pathway interpretations can propagate through analysis pipelines and lead to false biological conclusions. The consequences of deploying LLMs without proper evaluation extend beyond individual re- search errors. Metabolomics research demands capabilities that differ fundamentally from gen- eral text understanding and generation (Bifarin et al., 2025). Researchers must integrate informa- tion across specialized databases such as the Hu- man Metabolome Database (HMDB) and the Ky- oto Encyclopedia of Genes and Genomes (KEGG) (Wishart et al., 2022; Kanehisa et al., 2025), each employing distinct identifier systems and ontolo- gies. Without knowing which of the capabilities current LLMs possess, researchers cannot make informed decisions about where to deploy these tools, what verification procedures to implement, or how to interpret AI-assisted results. Current biomedical evaluation frameworks can- not answer these questions (Chen et al., 2025; Krithara et al., 2023; Jin et al., 2019). These bench- marks evaluate capabilities in natural language un- derstanding but do not measure the specialized op- erations that metabolomics requires. For example, high performance on MedQA provides no evidence for reliability on identifier grounding, and failure on BioASQ does not preclude success on pathway description generation. Without evaluation frame- works designed for metabolomics-specific tasks, the field lacks criteria for model selection, failure mode identification, or systematic improvement. In this paper, we present MetaBench, a com- prehensive benchmark designed to systematically assess LLM capabilities across metabolomics tasks. MetaBench evaluates five capability levels essen- 1 arXiv:2510.14944v1 [cs.CL] 16 Oct 2025 tial for metabolomics research: Knowledge (fac- tual recall of metabolite properties), Understand- ing (generation of coherent pathway descriptions), Grounding (accurate identifier mapping across het- erogeneous databases), Reasoning (extraction of structured relationships from natural language), and Research (synthesis of comprehensive study descriptions). Through evaluation of 25 state-of- the-art models on ~8,000 test cases derived from authoritative resources including HMDB, KEGG, PathBank, MetaKG, and MetaboLights (Wishart et al., 2022; Guijas et al., 2018; Lu, 2025; Yurekten et al., 2024), we provide the first systematic char- acterization of LLM performance in metabolomics, revealing which capabilities current models pos- sess, where critical bottlenecks exist, and what architectural innovations are needed. This work makes the following contributions: • We introduce MetaBench, the first comprehen- sive benchmark for evaluating LLM capabili- ties in metabolomics, comprising ~8,000 test cases across five core capability levels. • We conduct a systematic evaluation of 25 LLMs, revealing how these models perform across different metabolomics tasks. • We identify and analyze critical bottlenecks in current LLMs for metabolomics applica- tions, providing mechanistic insights into fail- ure modes and pathways for improvement. 2 Related Work 2.1 Benchmarks in Scientific Domains The rapid advancement of LLMs has spurred the de- velopment of numerous benchmarks to assess their capabilities in the scientific domain, largely concen- trating on broad domains like biomedicine, urban planning, psychology, and chemistry (Cai et al., 2024; Ren et al., 2024; Zhao et al., 2025). Biomedi- cal benchmarks have traditionally focused on tasks like question answering over literature and named entity recognition from clinical notes (Krithara et al., 2023; Jin et al., 2019; Jiang et al., 2025a). While invaluable, these benchmarks do not capture the domain knowledge, data structures, and specific tasks central to metabolomics, such as biochemical pathways, study proposals, and metabolite identi- fier groundings. Similarly, in chemistry, benchmarks like Molecu- leNet (Wu et al., 2018) have emerged to eval- uate models on tasks such as molecule prop- erty prediction and name conversion. These benchmarks are tailored to the conventions of chemistry, focusing on chemical structure repre- sentations (e.g., SMILES) and reaction kinetics, which only partially overlap with the challenges in metabolomics (Roessner and Bowne, 2009). Metabolomics requires an understanding not just of individual molecules but of their roles within complex, dynamic biological systems (Weckwerth, 2003). Our MetaBench is the first to bridge this gap, offering a suite of tasks designed specifically to test the deep, systems-level capabilities required for metabolomics. 2.2 NLP in Metabolomics The application of Natural Language Processing (NLP) in metabolomics is an emerging but critical area of research (Bifarin et al., 2025; Coler et al., 2024). Prior work has primarily focused on leverag- ing NLP to aid in the interpretation of experimental results by contextualizing findings with existing lit- erature or databases (Lu, 2025; Bekbergenova et al., 2025; Rahman et al., 2024). These efforts often in- volve tools for named entity recognition (NER) to identify metabolites in text and relation extraction to link them to diseases or genes. However, these applications typically operate as pipeline compo- nents rather than end-to-end reasoning systems. Furthermore, a significant challenge in the metabolomics field is knowledge fragmentation; in- formation about a single metabolite may be spread across multiple databases (e.g., KEGG (Kanehisa et al., 2025), HMDB (Wishart et al., 2022), Pub- Chem (Canese and Weis, 2013)), each using differ- ent identifier systems. This necessitates robust en- tity grounding to create a unified understanding (Ji et al., 2020; Weston et al., 2019). While entity grounding is a well-established NLP task, its ap- plication to the diverse and often overlapping iden- tifiers in metabolomics is a unique and unsolved challenge. MetaBench formalizes these challenges into benchmark tasks for the first time, positioning our work as a necessary next step: moving from evaluating specific, isolated NLP tasks to a holis- tic assessment of powerful, end-to-end generative language models. 3 MetaBench 3.1 Overview The lack of standardized evaluation benchmarks in metabolomics has left the field without clear crite- ria to assess LLM performance, making it difficult 2 Total tasks 1264 MetaboliteIDMapping MetaKG MetaboLights Question: What is the KEGG ID of HMDB ID HMDB0010090? Correct Answer: C00626 Generate Generate Generate Generate Generate Evaluate Evaluate Evaluate Evaluate Evaluate Total tasks 1000 Text: Research has linked L-Methionine to the onset and manifestation of Epilepsy. Reference Triple: Head: L-Methionine Relationship: has_disease Tail: Epilepsy Total tasks 1200 Question: Write a study description for the study titled 'Comprehensive evaluation of untargeted metabolomics data processing software in feature detection, quantification and discriminating marker selection'. Reference Description: Data analysis represents a key challenge for untargeted metabolomics studies and it… Total tasks 2125 PathBank Knowledge Understanding Grounding Research Reasoning Question: Where is Acetylcholine located in tissue? Options: A: Bone Marrow B: Adrenal Cortex C: Vitreous humor D: Neuron HMDB / KEGG MCQA Description Generation Identifier Grounding Triple Extraction Study generation Total tasks 2500 Question: What is the definition of Operon: Ribosomal Protein rps0? Reference Description: The ribosomal protein operon is a bicistronic operon consisting of rps0 and pnp. This operon is regulated by a Rho independent terminator, allowing for only rps0 to be transcribed… Data Source Benchmark Task Performance Task Example 60% 80% 10% 70% 80% Figure 1: MetaBench construction and task design. MetaBench integrates data from multiple datasets to assess five capabilities: knowledge, understanding, grounding, reasoning, and research. to compare methods, identify limitations, or pro- vide reliable guidance for practical use. This gap slows both scientific progress and the safe adoption of LLMs in metabolomics research. To address this challenge, we curated a comprehensive bench- mark from different sources (detailed introduction in Appendix A) that evaluates LLMs across five capability levels, from factual knowledge recall to research-oriented text generation (Figure 1). This benchmark establishes the first critical foundation for methodological innovation and future applica- tions of LLMs in metabolomics. 3.2 Benchmark Taxonomy We organize MetaBench into five capabilities, each aligned with a specialized requirement in metabolomics workflows: Knowledge. Factual recall of general knowledge, such as the correct metabolite taxonomy. Understanding. Generation of coherent descrip- tions of certain metabolites and pathways; Grounding. Accurate alignment of identifiers across biomedical databases; Reasoning. Entity and relationship extraction and structuring from natural language; Samples Tokens Knowledge: 313.5K Understanding: 381.8K Grounding: 24.5K Reasoning: 58.2K Research: 715.8K Knowledge: 2.5K Understanding: 1.3K Grounding: 1.0K Reasoning: 1.2K Research: 2.1K Figure 2: The statistics of MetaBench datasets. Research. Generation of research-related study descriptions from minimal prompts. Together, these datasets provide a quantifiable, hierarchical framework for assessing LLMs in metabolomics, analogous to capability ladders used in general NLP benchmarks (Jiang et al., 2025b). 3.3 Dataset Construction We instantiate these capabilities through public available datasets like HMDB (Wishart et al., 2022), KEGG (Kanehisa et al., 2025), Metabo- lights (Yurekten et al., 2024), PathBank (Wishart et al., 2024), and MetaKG (Lu, 2025), the overall process is illustrated in Figure 1. All datasets are re- 3 Table 1: Statistics of the MetaBench datasets. Min / Avg. / Max indicates token lengths per sample. Source Capability Min Avg. Max HMDB, KEGG Knowledge 15 48.17 2113 PathBank Understanding 26 166.28 880 MetabolitesID Grounding 7 11.94 35 MetaKG Reasoning 9 17.95 35 MetaboLights Research 17 222.57 852 leased publicly on HuggingFace1 for reproducibil- ity and the development of LLMs in metabolomics. 3.3.1 Knowledge-based MCQA To assess how well different LLMs encode metabolomics knowledge, we construct a multiple-choice QA benchmark derived from HMDB (Wishart et al., 2022) and KEGG (Kane- hisa et al., 2025). We select 26 attributes from these databases spanning taxonomy, molecular properties, biological associations, and pathway relationships. For each attribute, we extract entry-attribute pairs and generate four-option ques- tions by combining the correct value with three in-domain distractors randomly sampled from other values of the same attribute. All questions follow standardized templates (Appendix Table 3), such as "What is the taxonomy class of subject?" The resulting dataset comprises 2,500 questions, with 100 questions per attribute. Examples are provided in Appendix C. 3.3.2 Description generation Beyond factual knowledge assessment, we eval- uate whether LLMs can generate coherent, sci- entifically accurate descriptions of metabolomics concepts. This task tests the model’s ability to produce informative pathway descriptions from pathway names alone. We curated a benchmark using 1,264 pathway-description pairs from Path- Bank (Wishart et al., 2024), where each pathway name serves as the input prompt and the corre- sponding expert-written description serves as the reference output. Descriptions range from 148 to 6,452 tokens and contain comprehensive informa- tion about biochemical mechanisms, participating enzymes, and biological significance. Performance is assessed using BERTScore to measure semantic similarity between generated and reference texts. Examples are provided in Appendix D. 1Anonymized for double-blind review 3.3.3 Cross-Database Identifier Grounding A fundamental challenge in metabolomics re- search is integrating information about metabo- lites across heterogeneous databases, which re- quires accurate mapping between different iden- tifier systems. Only through successful identi- fier resolution can fragmented knowledge be uni- fied. To evaluate this grounding capability, we curated a benchmark using the MetabolitesID2 package that requires models to map metabo- lite identifiers across KEGG, HMDB, and ChEBI databases. The task encompasses bidirectional mappings: KEGG↔HMDB, KEGG↔ChEBI, HMDB↔ChEBI, and identifier↔name conver- sions. We sample 1,000 mapping pairs from a com- prehensive table containing 130,005 entries, en- suring balanced representation across all mapping types. Each question provides a source identifier and requires the model to predict the correspond- ing target identifier, evaluated using exact match accuracy. Examples are provided in Appendix E. 3.3.4 Knowledge extraction reasoning To evaluate structured knowledge extraction capa- bilities, we develop a benchmark that tests whether models can accurately parse metabolomics facts from natural language into knowledge graph triples. We select 1,200 triples from MetaKG (Lu, 2025) spanning six relationship types: has_disease, has_disposition, has_smiles, has_synonym, has_class, and has_tissue_location. For each triple, we use DeepSeek-V3.1 (Liu et al., 2024) to generate natural language sentences express- ing the relationship. For example, the triple (In- osine, has_tissue_location, Platelet) is rendered as "Within the human body, the blood’s platelets are a known reservoir for the compound inosine." The generation prompt is provided in Appendix F. Models must extract the complete (head, relation- ship, tail) triple from each sentence, requiring both natural language understanding and structured reasoning. Performance is evaluated using exact match accuracy, where all three components must be correctly identified. Representative examples are shown in Appendix G. 3.3.5 Study description generation The ultimate goal of applying LLMs to metabolomics is to enable comprehensive research support, from interpreting experimental results through multi-agent systems and data 2https://github.com/yigbt/metaboliteIDmapping 4 Table 2: MetaBench results for open- and closed-source LLMs across five capabilities. Cells report scores on a 0–100 scale. Best per column is shaded pink and second-best green. Model Knowledge Understanding Grounding Reasoning Research Average Open-source models Qwen3-1.7b 32.01 81.46 0.27 56.08 79.46 49.86 Qwen3-4b 35.94 83.21 0.27 66.39 80.52 53.27 Qwen3-8b 43.58 82.87 0.25 66.47 80.83 54.40 Qwen3-14b 52.62 82.95 0.29 65.14 80.88 56.38 Qwen3-32b 56.42 83.01 0.33 67.72 82.57 58.01 Gemma-3-270m-it 13.61 81.28 0.20 11.17 80.37 37.33 Gemma-3-1b-it 28.89 81.43 0.47 44.04 81.70 47.31 Gemma-3-4b-it 46.26 81.97 0.53 68.50 82.29 55.91 Gemma-3-12b 51.25 82.64 0.60 70.22 82.73 57.49 Gemma-3-27b 55.82 83.05 0.70 72.69 83.24 59.10 Llama-3.2-1b 12.44 82.40 0.37 33.04 81.77 42.80 Llama-3.2-3b 42.93 82.75 0.60 66.83 82.33 55.49 Llama-3.1-8b 50.58 82.71 0.57 71.92 83.25 57.81 Llama-3.1-70b 57.52 83.08 0.63 72.98 83.18 59.88 DeepSeek-v3.1 54.34 82.38 0.48 73.81 83.52 58.91 GPT-oss-20b 50.66 82.83 0.50 70.16 82.87 57.40 Closed-source models GPT-4o-mini 53.82 82.36 0.27 66.39 80.02 56.17 GPT-5-nano 35.65 82.71 0.27 68.47 82.00 53.82 GPT-5-mini 59.02 82.95 0.40 71.89 82.39 59.33 GPT-5 60.50 83.52 0.47 72.39 82.72 59.92 Claude-haiku-3.5 44.34 82.15 0.47 70.64 81.41 55.80 Claude-sonnet-3.7 59.78 82.87 0.53 71.31 83.87 59.67 Claude-sonnet-4 60.94 83.20 0.87 72.56 83.39 60.99 Gemini-2.5-flash 52.10 82.37 0.37 69.25 82.43 57.30 Gemini-2.5-pro 59.54 83.12 0.63 73.14 83.28 60.34 Higher is better for all columns. analysis pipelines to potentially designing or executing experiments autonomously. As a step toward this vision, we evaluate models on a research-level task requiring the generation of detailed study descriptions from concise titles. We curated a benchmark using 2,125 title-description pairs from MetaboLights (Yurekten et al., 2024), a leading metabolomics data repository. Each study title serves as the input prompt, and models must generate the corresponding comprehensive description containing experimental methodolo- gies, analytical techniques, key findings, and biological implications. Examples are provided in Appendix H. 3.4 Statistics Table 1 and Figure 2 present comprehensive statis- tics for the MetaBench datasets. We report mini- mum, average, and maximum token lengths per sample across all tasks. MetaBench comprises 8,100 samples distributed across five capability lev- els, with Knowledge (2,500 samples) and Research (2,125 samples) representing the largest subsets. Token volume varies substantially by task type: Un- derstanding and Research tasks generate the high- est token counts due to paragraph-level outputs (av- eraging 166 and 223 tokens, respectively), while Grounding and Reasoning tasks involve shorter structured responses (averaging 12 and 18 tokens). This distribution ensures comprehensive evaluation across diverse task formats, from concise factual retrieval and structured extraction to extended sci- entific text generation. 3.5 Evaluation We employ task-appropriate metrics to ensure rig- orous and meaningful evaluation. For classifica- tion tasks (knowledge MCQA, identifier grounding, and triple extraction reasoning), we report exact match accuracy. For generation tasks (pathway de- scription generation and study description), we use BERTScore (Zhang et al., 2019) (RoBERTa (Liu et al., 2019) as backbone model) to measure seman- tic similarity between generated and reference texts. 5 All systems prompts for each task are provided in Appendix J. For closed-source models, we perform inference through official APIs provided by OpenAI, An- thropic, and Google. For open-source models, we deploy them locally on H200 GPU clusters using the vLLM 3 framework for efficient inference. We standardize all inference settings: temperature is set to 0.1 to encourage deterministic outputs, maxi- mum generation length is capped at 4,096 tokens, and thinking modes are disabled for fair compari- son. Each task uses a tailored system prompt that specifies the expected output format and evalua- tion criteria, ensuring alignment between model responses and metric requirements. 4 Results 4.1 Overall performance across capabilities We evaluate 25 state-of-the-art LLMs spanning eight model families (Appendix Table 4). Open- source models: Qwen3 (Yang et al., 2025) (1.7B, 4B, 8B, 14B, 32B), Gemma-3 (Team et al., 2025) (270M-it, 1B-it, 4B-it, 12B, 27B), Llama (Dubey et al., 2024) (3.2-1B, 3.2-3B, 3.1-8B, 3.1-70B), DeepSeek-v3.1 (Liu et al., 2024), and GPT-oss- 20b (Agarwal et al., 2025). Closed-source mod- els: GPT (Achiam et al., 2023; Hurst et al., 2024) (4o-mini, 5-nano, 5-mini, 5), Claude (Anthropic, 2024, 2025) (haiku-3.5, sonnet-3.7, sonnet-4), and Gemini-2.5 (Naveed et al., 2025) (flash, pro). All evaluations use standard inference settings without external tools unless explicitly noted. Table 2 presents performance across five ca- pabilities. Overall, Claude-sonnet-4 achieves the highest average score (60.99), followed closely by Gemini-2.5-pro (60.34) and GPT-5 (59.92). Among open-source models, Llama-3.1-70b leads at 59.88, nearly matching the best closed-source systems, followed by Gemma-3-27b (59.10), DeepSeek-v3.1 (58.91), Qwen3-32b (58.01), and GPT-oss-20b (57.40). The narrow performance gap between top open-source and closed-source models (less than 1.5 points) demonstrates that open models can achieve competitive metabolomics reasoning when properly scaled. Task-specific analysis reveals distinct capability profiles. For Knowledge tasks, Claude-sonnet-4 marginally leads at 60.94, slightly ahead of GPT-5 (60.50) and Gemini-2.5-pro (59.54). Understand- ing shows remarkably compressed performance, 3https://docs.vllm.ai/en/latest/ 108 109 1010 1011 1012 Model Parameters (log scale) 0.2 0.3 0.4 0.5 0.6 0.7 Grounding Score Grounding 108 109 1010 1011 1012 Model Parameters (log scale) 20 30 40 50 Knowledge Score Knowledge 108 109 1010 1011 1012 Model Parameters (log scale) 40 45 50 55 60 Overall Score Overall Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 108 109 1010 1011 1012 Model Parameters (log scale) 10 20 30 40 50 60 70 Reasoning Score Reasoning Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 108 109 1010 1011 1012 Model Parameters (log scale) 80 81 82 83 Research Score Research 108 109 1010 1011 1012 Model Parameters (log scale) 81.5 82.0 82.5 83.0 Understanding Score Understanding Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Figure 3: Parameter scaling trends in MetaBench. Av- erage performance increases roughly log-linearly with model size, with consistent family-wise improvements. with all competitive models clustering between 81–84%; GPT-5 achieves the highest score (83.52), but the narrow range suggests this capability satu- rates early across modern architectures. Grounding presents a stark contrast: without retrieval aug- mentation, even the best model (Claude-sonnet-4) achieves only 0.87% accuracy, two orders of magni- tude below other tasks. We analyze this bottleneck in detail in §4.3. For Reasoning, DeepSeek-v3.1 excels at 73.81%, with Gemini-2.5-pro (73.14) and Claude-sonnet-4 (72.56) close behind. Finally, Re- search returns to a high, compressed band led by Claude-sonnet-3.7 (83.87), DeepSeek-v3.1 (83.52), and Claude-sonnet-4 (83.39). 4.2 Parameter scaling trends The scaling law works on the five metabolomic ca- pabilities, but the slope is task dependent (Figure 3, Table 2). Averaged across tasks, scores rise roughly with diminishing returns at the largest scales; family leaders at similar sizes differ by only 1–3 points, highlighting the influence of pretrain- ing data and objectives beyond size alone. Family- wise scaling trends are consistent: Qwen3 improves from 49.86 (1.7B) to 58.01 (32B), Gemma-3 from 37.33 (270M-it) to 59.10 (27B), and Llama from 42.80 (3.2-1B) to 59.88 (3.1-70B). GPT-20b-oss 6 What is the KEGG ID for the compound with HMDB ID HMDB0004148? C13691 Model Direct Generation Generation with Search API Output Gemini-2.5-flash Gemini-2.5-pro GPT-4o-mini GPT-5-nano GPT-5-mini GPT-5 DeepSeek-v3.1 Claude-3.5-haiku C00092 C02500 C00245 C00031 C01183 C05544 C00026 C00186 Claude-3.7-sonnet Claude-4-sonnet C00092 C05984 Model Output GPT-4o-mini GPT-5 Claude-haiku-3.5 Claude-sonnet-3.7 Claude-sonnet-4 C13691 C13691 C13691 C13691 C13691 Figure 4: LLMs consistently fail in metabolite identifier grounding. 11 LLMs were asked to retrieve KEGG ID for HMDB0004148. None produced the correct answer. In contrast, when augmented with web search API, all successfully provided the correct mapping. and DeepSeek-v3.1 also demonstrate performance consistent with their parameter counts. Task-level analysis shows distinct behaviors. Knowledge scales cleanly with size, with Llama rising from 12.44 to 57.52 and Qwen3 from 32.01 to 56.42. Reasoning also benefits strongly, where mid-size models plateau in the high 60s and the largest variants approach 73–74. By contrast, Un- derstanding and Research tasks saturate early: most models fall in a narrow 81–84 band, with larger variants adding less than 2 points, suggesting these generation tasks depend more on instruction tun- ing than sheer scale. Finally, Grounding remains flat; its accuracy does not exceed 0.87% even for the largest models, showing that parameter growth alone cannot resolve identifier mapping. These re- sults underline that while scale reliably improves recall and reasoning, it delivers marginal gains on fluent generation already near ceiling and fails on grounding tasks unless combined with retrieval and schema-aware normalization. 4.3 Identifier grounding is the bottleneck Among all tested capabilities, Grounding is the lowest-performing capability. Without retrieval, accuracy remains near zero and does not exceed 0.87% even for the largest models (Table 2). A probe illustrates the failure: asked “What is the KEGG ID for HMDB0004148?”, 11 LLMs an- swered incorrectly (e.g., C00092, C02500); none returned the correct C13691 (Figure 4). With web search API, all five tested systems produced C13691 on this item (Figure 4). Aggregate results show the same pattern. Across five representative models, accuracy rises from {0.27, 0.47, 0.53, 0.87}% without search to {20.30, 27.77, 32.93, 38.00, 40.93}% with search (Figure 5). This 40–150× gain remains far from ceiling, indicating missing external evidence and normalization rather than decoding issues. Claude-haiku-3.5 Claude-sonnet-3.7 Claude-sonnet-4 GPT-4o-mini GPT-5 0 5 10 15 20 25 30 35 40 Score W/O search Search Figure 5: Without search capabilities (blue bars), all models achieve less than 1% accuracy on cross-database metabolite mapping. Enabling web search (orange bars) improves accuracy by 40–150×, though performance remains below 41% even for the best model (GPT-5), indicating that retrieval alone is insufficient to resolve the grounding bottleneck. Grounding failures may be because the task com- bines sparse signals, lossy tokenization, and a mis- aligned objective under moving targets and am- biguous nomenclature. Metabolite IDs are rare in pretraining corpora and often confined to supple- mentary tables, so models learn weak parametric associations. Subword tokenizers fracture strings such as HMDB0004148 into pieces (e.g., HMD, B000, 4148), undermining exact matching. Next-token training optimizes plausibility rather than exact res- olution, yielding confident but incorrect IDs. Con- current database updates in HMDB and KEGG further skew memorized mappings. Added to this, synonyms, tautomers, salts, stereochemistry, and organism or compartment context create many-to- many name–ID relations that text-only models can- not reliably disambiguate. Retrieval with schema- aware normalization targets these causes; scaling alone does not. 4.4 Long tail problem in metabolomics Metabolite databases such as HMDB exhibit in- formation concentration: metabolites with lower HMDB IDs (typically discovered earlier) contain substantially more annotated attributes, while those 7 0 100 1000 10000 100000 HMDB ID 0 100 200 300 Attribute Density 0.2 0.4 0.6 0.8 Accuracy Figure 6: Long-tail distribution in metabolite knowl- edge. Attribute density (blue line) and model accuracy (green bars) both decline across HMDB ID ranges. with higher IDs show progressively sparser in- formation (Xia and Sun, 2022). This pattern re- flects the field’s trajectory from well-studied cen- tral metabolites toward incompletely characterized peripheral compounds. To quantify how this heterogeneity affects model performance, we stratified metabolites by HMDB ID ranges and measured average attribute den- sity: <100 (236.73 attributes/metabolite), 100- 1,000 (161.92), 1,000-10,000 (128.53), 10,000- 100,000 (96.30), and >100,000 (63.38). We sam- pled 200 MCQA questions per bin and evaluated multiple LLMs. Results from GPT-oss-20b are shown in Figure 6, where accuracy declines mono- tonically with decreasing density: 63.5% (127/200) for IDs <100, 58.0% (116/200) for IDs between 100 and 1000, 50.5% for IDs between 1000 and 10000, 47.7% for IDs between 10000 and 100000, versus 40.0% (72/180) for IDs >100,000, resulting in a 23.5 percentage gap. This long-tail effect reveals a fundamental chal- lenge: models perform acceptably on well-studied metabolites dominating training corpora but fail disproportionately on the sparsely annotated com- pounds that constitute the metabolome’s major- ity. Simply scaling training data cannot address the problem since the long tail reflects actual knowledge gaps in the field. Addressing this limitation may require active learning strategies prioritizing difficult cases, multi-modal models leveraging structural information when annotations are sparse, or uncertainty-aware systems that flag low-confidence predictions for long-tail metabo- lites (Wang et al., 2024; Kandpal et al., 2023). 5 Discussion Our evaluation of 25 LLMs reveals a striking ca- pability heterogeneity that challenges assumptions about LLM readiness for metabolomics. These results indicate that current training paradigms, op- timized for fluent text generation, produce models that excel at synthesizing coherent scientific narra- tives while struggling with precise factual retrieval and structured knowledge operations (Kalai et al., 2025). Most critically, the catastrophic failure on identifier Grounding (<1% without retrieval, 41% maximum with search API) represents a fundamen- tal limitation rather than a data scarcity problem (more emphasis on the grounding tasks can refer to Appendix K). The grounding bottleneck has imme- diate practical implications: metabolomics appli- cations requiring cross-database integration cannot rely on LLM capabilities alone and must imple- ment specialized identifier resolution systems with schema-aware normalization and chemical struc- ture reasoning. Beyond diagnosing current limitations, MetaBench establishes a framework for targeted model improvement and responsible deployment in metabolomics. Importantly, our benchmark dataset construction method provides a replicable pathway for creating domain-specific fine-tuning (DFT) datasets beyond evaluation. As new models emerge, whether general LLMs with improved scientific reasoning or domain-specific models pretrained on metabolomics corpora, MetaBench enables continuous, standardized assessment beyond aggregate scores that can obscure crit- ical limitations. By providing both evaluation infrastructure and a concrete methodology for dataset construction, MetaBench supports the development of more capable and reliable AI systems for metabolomics research. 6 Conclusion We present MetaBench, the first comprehensive benchmark for evaluating LLMs on metabolomics. Through the evaluation of 25 LLMs across five ca- pabilities, we reveal substantial performance vari- ations across models and demonstrate that while model scaling improves reasoning, current mod- els catastrophically fail on cross-database map- ping and long-tail generalization, establishing that metabolomics requires precision and structured knowledge integration beyond current architec- tures and training corpus. By publicly releasing MetaBench, we provide essential infrastructure for developing scientifically-grounded models capable of supporting real-world metabolomics research and enabling systematic progress toward reliable AI-assisted discovery. 8 7 Limitation Since there is currently no publicly available metabolomics-specialized LLM; consequently, MetaBench evaluates general-purpose and broad biomedical models, quantifying cross-domain gen- eralization and establishing a strong baseline for future in-domain systems. The benchmark centers on widely used resources (KEGG, HMDB, ChEBI, PathBank, MetaboLights), choices that favor clean, reproducible comparisons while deferring spec- tra/structure modalities and additional databases to later releases. We score classification with ac- curacy and generation with BERTScore to enable scale and consistency. Results are produced un- der a standardized, tool-free decoding setup, with retrieval analyzed separately, to isolate intrinsic model behavior. These constraints are deliberate and highlight the release’s strengths: clarity, repro- ducibility, and extensibility, while charting a direct path to add specialized models in future versions. 8 Potential risks While MetaBench establishes evaluation infrastruc- ture for metabolomics AI, we acknowledge con- siderations for responsible use. Our transparent reporting of performance disparities, such as the identifier grounding challenge and 23.5% variation across metabolite coverage, helps prevent prema- ture deployment while guiding targeted improve- ments. The modular design enables continuous benchmark evolution as models advance, and our curation from established resources ensures tasks reflect real metabolomics workflows. By providing granular evaluation across five capabilities rather than single scores, MetaBench enables informed decisions about where LLMs can augment research and where human expertise remains essential, es- tablishing not just current baselines but the mea- surement framework necessary for systematic ad- vancement of AI in metabolomics. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. Gpt-4 techni- cal report. arXiv preprint arXiv:2303.08774. Sandhini Agarwal, Lama Ahmad, Jason Ai, Sam Alt- man, Andy Applebaum, Edwin Arbus, Rahul K Arora, Yu Bai, Bowen Baker, Haiming Bao, and 1 others. 2025. gpt-oss-120b & gpt-oss-20b model card. arXiv preprint arXiv:2508.10925. Anthropic. 2024. Model card for claude 3. https://www-cdn.anthropic.com/ de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/ Model_Card_Claude_3.pdf. Anthropic. 2025. System card: Claude opus 4 & claude sonnet 4. https://www-cdn.anthropic.com/ 4263b940cabb546aa0e3283f35b686f4f3b2ff47. pdf. Madina Bekbergenova, Lucas Pradi, Benjamin Navet, Yousouf Taghzouti, Emma Tysinger, Jean-Luc Wolfender, Wout Bittremieux, Fabien Gandon, and Louis-Félix Nothias. 2025. Metabot: Ai-based agent for natural language-based interaction with metabolomics knowledge graphs. In ISMB/ECCB 2025. Olatomiwa O Bifarin, Varun S Yelluru, Aditya Simhadri, and Facundo M Fernández. 2025. A large language model–powered map of metabolomics research. An- alytical Chemistry. Bioconductor. 2020. metaboliteidmapping: Annotation package with comprehensive mapping of metabo- lite ids. https://bioconductor.org/packages/ metaboliteIDmapping/. John Braisted and 1 others. 2023. metlinkr: Facilitating metaanalysis of human metabolomics data through automated linking of metabolite identifiers. Bioinfor- matics, 39(12):btad744. Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Hongshuai Wang, Yongge Li, Mujie Lin, and 1 others. 2024. Sciassess: Benchmarking llm proficiency in scientific literature analysis. arXiv preprint arXiv:2403.01976. Tomas Cajka and Oliver Fiehn. 2024. A reproducibility crisis for clinical metabolomics studies. Trends in Analytical Chemistry, 168:117332. Kathi Canese and Sarah Weis. 2013. Pubmed: the bibli- ographic database. The NCBI handbook, 2(1):2013. Qingyu Chen, Yan Hu, Xueqing Peng, Qianqian Xie, Qiao Jin, Aidan Gilson, Maxwell B Singer, Xuguang Ai, Po-Ting Lai, Zhizheng Wang, and 1 others. 2025. Benchmarking large language models for biomedical natural language processing applications and recom- mendations. Nature communications, 16(1):3280. Elizabeth A Coler, Wanxuan Chen, Alexey V Melnik, James T Morton, and Alexander A Aksenov. 2024. Metabolomics in the era of artificial intelligence. Mi- crobiota and Host, 2(1). Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, and 1 others. 2024. The llama 3 herd of models. arXiv e-prints, pages arXiv–2407. 9 Dmitry Grapov and 1 others. 2018. Consistency, incon- sistency, and ambiguity of metabolite names in bio- chemical databases used for genome-scale metabolic modelling. Molecular BioSystems, 14(3):660–667. Carlos Guijas, J Rafael Montenegro-Burke, Benedikt Warth, Mary E Spilker, and Gary Siuzdak. 2018. Metabolomics activity screening for identifying metabolites that modulate phenotype. Nature biotech- nology, 36(4):316–320. Yucheng Hu and Yuxing Lu. 2024. Rag and rau: A survey on retrieval-augmented language model in natural language processing. arXiv preprint arXiv:2404.19543. Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276. Zongcheng Ji, Qiang Wei, and Hua Xu. 2020. Bert- based ranking for biomedical entity normalization. AMIA Summits on Translational Science Proceedings, 2020:269. Jiyue Jiang, Pengan Chen, Jiuming Wang, Dongchen He, Ziqin Wei, Liang Hong, Licheng Zong, Sheng Wang, Qinze Yu, Zixian Ma, and 1 others. 2025a. Benchmarking large language models on multiple tasks in bioinformatics nlp with prompting. arXiv preprint arXiv:2503.04013. Zhuohang Jiang, Pangjing Wu, Ziran Liang, Peter Q Chen, Xu Yuan, Ye Jia, Jiancheng Tu, Chen Li, Pe- ter HF Ng, and Qing Li. 2025b. Hibench: Bench- marking llms capability on hierarchical structure rea- soning. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Min- ing V. 2, pages 5505–5515. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146. Adam Tauman Kalai, Ofir Nachum, Santosh S Vem- pala, and Edwin Zhang. 2025. Why language models hallucinate. arXiv preprint arXiv:2509.04664. Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2023. Large language models struggle to learn long-tail knowledge. In International conference on machine learning, pages 15696–15707. PMLR. Minoru Kanehisa, Miho Furumichi, Yoko Sato, Yuriko Matsuura, and Mari Ishiguro-Watanabe. 2025. Kegg: biological systems database as a model of the real world. Nucleic acids research, 53(D1):D672–D677. Christopher A Krettler, Maria Sorokina, Felina Kretschmer, Daniel H Pieper, and Christoph Stein- beck. 2024. Navigating common pitfalls in metabo- lite identification and metabolomics bioinformatics. Metabolomics, 20(5):104. Anastasia Krithara, Anastasios Nentidis, Konstantinos Bougiatiotis, and Georgios Paliouras. 2023. Bioasq- qa: A manually curated corpus for biomedical ques- tion answering. Scientific Data, 10(1):170. Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and 1 others. 2024. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437. Yijiang Liu, Feifan Zhang, Yifei Ge, Qiao Liu, Siyu He, and Xiaotao Shen. 2025. Application of llms/transformer-based models for metabolite annota- tion in metabolomics. Health and Metabolism, pages 7–7. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Yuxing Lu. 2025. Knowledge graph and large lan- guage model for metabolomics. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 39, pages 29281–29282. Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, and Ajmal Mian. 2025. A com- prehensive overview of large language models. ACM Transactions on Intelligent Systems and Technology, 16(5):1–72. Rex Powers and 1 others. 2022. A checklist for reproducible computational analysis in clinical metabolomics research. Metabolites, 12(1):87. Maxx Richard Rahman, Ruoxuan Liu, and Wolfgang Maass. 2024. Incorporating metabolic information into llms for anomaly detection in clinical time-series. arXiv preprint arXiv:2410.12830. Yuanyi Ren, Haoran Ye, Hanjun Fang, Xin Zhang, and Guojie Song. 2024. Valuebench: Towards compre- hensively evaluating value orientations and under- standing of large language models. arXiv preprint arXiv:2406.04214. Ute Roessner and Jairus Bowne. 2009. What is metabolomics all about? Biotechniques, 46(5):363– 365. Neil Swainston, Kieran Smallbone, Pedro Mendes, Dou- glas Kell, and Norman Paton. 2014. Comparative evaluation of open source software for mapping be- tween metabolite identifiers in metabolic network reconstructions: application to recon 2. Journal of Cheminformatics, 6(1):1–11. Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, and 1 others. 2025. Gemma 3 technical report. arXiv preprint arXiv:2503.19786. 10 Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, and Jie Fu. 2023. Pre- trained language models in biomedical domain: A systematic survey. ACM Computing Surveys, 56(3):1– 52. Pengkun Wang, Zhe Zhao, HaiBin Wen, Fanfu Wang, Binwu Wang, Qingfu Zhang, and Yang Wang. 2024. Llm-autoda: Large language model-driven automatic data augmentation for long-tailed problems. Ad- vances in Neural Information Processing Systems, 37:64915–64941. Wolfram Weckwerth. 2003. Metabolomics in systems biology. Annual review of plant biology, 54(1):669– 689. Leigh Weston, Vahe Tshitoyan, John Dagdelen, Olga Kononova, Amalie Trewartha, Kristin A Persson, Gerbrand Ceder, and Anubhav Jain. 2019. Named entity recognition and normalization applied to large- scale information extraction from the materials sci- ence literature. Journal of chemical information and modeling, 59(9):3692–3702. David S Wishart, AnChi Guo, Eponine Oler, Fei Wang, Afia Anjum, Harrison Peters, Raynard Dizon, Zinat Sayeeda, Siyang Tian, Brian L Lee, and 1 others. 2022. Hmdb 5.0: the human metabolome database for 2022. Nucleic acids research, 50(D1):D622– D631. David S Wishart, Ray Kruger, Aadhavya Sivaku- maran, Karxena Harford, Selena Sanford, Rahil Doshi, Nitya Khetarpal, Omolola Fatokun, Daph- nee Doucet, Ashley Zubkowski, and 1 others. 2024. Pathbank 2.0—the pathway database for model or- ganism metabolomics. Nucleic acids research, 52(D1):D654–D662. Gert Wohlgemuth, Pradeep Kumar Haldiya, Egon Wil- lighagen, Tobias Kind, and Oliver Fiehn. 2010. The chemical translation service—a web-based tool to im- prove standardization of metabolomic reports. Bioin- formatics, 26(20):2647–2648. Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. 2018. Moleculenet: a benchmark for molecular machine learning. Chem- ical science, 9(2):513–530. Yinglin Xia and Jun Sun. 2022. Statistical data anal- ysis of microbiomes and metabolomics, volume 13. American Chemical Society. An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and 1 others. 2025. Qwen3 technical report. arXiv preprint arXiv:2505.09388. Ozgur Yurekten, Thomas Payne, Noemi Tejera, Fe- lix Xavier Amaladoss, Callum Martin, Mark Williams, and Claire O’Donovan. 2024. Metabo- lights: open data repository for metabolomics. Nu- cleic acids research, 52(D1):D640–D646. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- uating text generation with bert. arXiv preprint arXiv:1904.09675. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, and 1 others. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223, 1(2). Xukai Zhao, He Huang, Tao Yang, Yuxing Lu, Lu Zhang, Ruoyu Wang, Zhengliang Liu, Tianyang Zhong, and Tianming Liu. 2025. Urban planning in the age of large language models: Assessing ope- nai o1’s performance and capabilities across 556 tasks. Computers, Environment and Urban Systems, 121:102332. 11 A Sources for MetaBench MetaBench integrates data from six specialized metabolomics resources, each serving distinct roles in the field. This section provides detailed descrip- tions of these sources and their contributions to our benchmark. A.1 Human Metabolome Database (HMDB) The Human Metabolome Database (HMDB) (Wishart et al., 2022) is the most comprehensive metabolomics database focused on human metabolism, cataloging over 220,000 metabolite entries including both endogenous metabolites and exogenous compounds. Each entry provides extensive biochemical informa- tion including molecular descriptors (SMILES, InChI), physicochemical properties, tissue and biofluid locations, disease associations, and literature references using standardized identifiers (HMDB#######). In MetaBench, HMDB serves as a primary source for the Knowledge MCQA task, providing ground truth data for 26 distinct metabolite attributes, and forms one node of the cross-database identifier mapping in the Grounding capability assessment. A.2 Kyoto Encyclopedia of Genes and Genomes (KEGG) The Kyoto Encyclopedia of Genes and Genomes (KEGG) (Kanehisa et al., 2025) integrates genomic, chemical, and systemic functional information, with KEGG COMPOUND containing over 18,000 metabolite entries organized within the context of biochemical pathways, reactions, and molecular interactions. Each compound receives a KEGG identifier (C#####) and is annotated with pathway memberships, enzyme associations (EC numbers), reaction participation, and orthology relationships. A.3 PathBank PathBank (Wishart et al., 2024) is a comprehensive visual database containing over 110,000 detailed pathway diagrams and expert-curated descriptions covering metabolic, signaling, drug action, and disease pathways across multiple organisms, with particular emphasis on human metabolism. Each entry contains a pathway name, comprehensive tex- tual description ranging from concise summaries to extensive multi-paragraph explanations (148–6,452 tokens), interactive diagrams, and lists of partici- pating metabolites and proteins. PathBank serves as the exclusive source for the Understanding ca- pability assessment in MetaBench, where 1,264 pathway name-description pairs evaluate whether models can generate scientifically accurate, coher- ent descriptions of biochemical pathways, testing conceptual understanding beyond simple factual recall. A.4 MetabolitesIDMapping MetaboliteIDmapping is a specialized mapping re- source providing comprehensive cross-references between metabolite identifiers across major databases, addressing the fundamental challenge of identifier heterogeneity in metabolomics, where the same metabolite may be referenced differently across resources. The mapping table contains over 130,000 verified equivalence relationships linking identifiers from KEGG, HMDB, ChEBI, PubChem, CAS Registry Numbers, and chemical names (IU- PAC and common), enabling bidirectional trans- lation between identifier systems. MetabolitesID forms the foundation of the Grounding capability assessment, where 1,000 sampled identifier pairs construct mapping tasks that directly evaluate a critical bottleneck in practical metabolomics work- flows: integrating fragmented knowledge across heterogeneous resources. A.5 MetaKG MetaKG (Lu, 2025) is a large-scale metabolomics knowledge graph structuring information from mul- tiple databases into a unified semantic network with over 2 million entities and 10 million relationships, representing the most comprehensive structured representation of metabolomics knowledge to date. The heterogeneous graph contains nodes represent- ing metabolites, diseases, proteins, genes, tissues, and pathways, connected by typed relationships, with each triple derived from authoritative sources and validated for consistency. MetaKG serves as the source for the Reasoning capability assessment, where 1,200 triples spanning six relationship types are converted into natural language sentences, re- quiring models to parse unstructured text and ex- tract structured (head, relationship, tail) triples, test- ing both natural language understanding and struc- tured reasoning essential for knowledge extraction from scientific literature. A.6 MetaboLights MetaboLights (Yurekten et al., 2024) is the largest open-access metabolomics data repository hosted 12 by the European Bioinformatics Institute (EMBI- EBI), containing over 7,000 studies with associ- ated metadata, analytical methods, and raw data files that serve as a comprehensive archive for metabolomics experimental data. Each study (MTBLS####) includes structured submissions with study titles, detailed descriptions, experimental designs, sample information, analytical protocols (LC-MS, GC-MS, NMR), data processing work- flows, and expert-written descriptions providing comprehensive information about research objec- tives, methodologies, key findings, and biological significance. MetaboLights provides the founda- tion for the Research capability assessment, where 2,125 study title-description pairs evaluate whether models can generate comprehensive, research-level documentation from minimal prompts, testing ca- pabilities essential for advanced metabolomics re- search support including experimental design syn- thesis, analytical technique knowledge, and scien- tific interpretation. B MCQA templates To construct the Knowledge Graph QA dataset, we selected 26 different relations from MetaKG, the curated metabolomics knowledge graph. For each relation, we designed a specific question template in natural language. These templates convert struc- tured triples (head, relation, tail) into interrogative forms that can be used for multiple-choice question answering. Table 3 lists the relation types and their as- sociated templates. For example, the relation has_class is expressed as “What is the taxon- omy class of {subject}?” and has_disease as “Which disease is associated with {subject}?”. This template-based construction ensures consistency across questions while covering diverse relation types in metabolomics. C Knowledge MCQA examples The Knowledge capability is assessed through multiple-choice question answering (MCQA) tasks derived from HMDB and KEGG. Below are repre- sentative examples across different attribute types, illustrating the diversity of metabolomics knowl- edge tested in MetaBench. Questions span metabo- lite taxonomy, molecular properties, biological as- sociations, pathway relationships, and tissue loca- tions. Each question provides four options with one correct answer, where distractors are sampled from valid values of the same attribute to ensure domain relevance and task difficulty. Example 1: Tissue Location Question: Where is Acetylcholine located in tissue? Options: • A: Bone Marrow • B: Adrenal Cortex • C: Vitreous humor • D: Neuron Answer: D Example 2: Taxonomy Classification Question: What is the taxonomy class of 2-Hydroxyestrone? Options: • A: Aralkylamines • B: Estrane steroids • C: Catechols • D: Very long-chain fatty acids Answer: B Example 3: Database Identifier Question: What is the KEGG ID of Tetrahy- drobiopterin? Options: • A: C00864 • B: C00234 • C: C01747 • D: C00272 Answer: D Example 4: Enzyme Association Question: Which enzyme is associated with Glycine? Options: • A: 2.6.1.94 13 Attribute Question Template has_class What is the taxonomy class of {subject}? has_name What is the name of {subject}? has_disease Which disease is associated with {subject}? has_substituent What is a substituent of {subject}? has_inchi What is the InChI of {subject}? has_disposition What is the disposition of {subject}? has_uniprot_id What is the UniProt ID of {subject}? has_synonym What is a synonym for {subject}? has_process What process is {subject} involved in? has_enzyme Which enzyme is associated with {subject}? related_to_protein Which protein is {subject} related to? has_smiles What is the SMILES notation of {subject}? has_cellular_location Where is {subject} located in the cell? average_molecular_weight What is the average molecular weight of {subject}? chemical_formula What is the chemical formula of {subject}? related_to_pathway Which KEGG pathway is {subject} related to? has_inchikey What is the InChIKey of {subject}? has_kegg_id What is the KEGG ID of {subject}? has_biospecimen_location Where is {subject} found as a biospecimen? has_pathway What is a KEGG pathway associated with {subject}? belongs_to_orthology To which KEGG orthology does {subject} belong? has_reaction What is a KEGG reaction involving {subject}? has_description What is the description of {subject}? belongs_to_network To which KEGG network does Gene {subject} belong? has_tissue_location Where is {subject} located in tissue? is_a_sub_class_of {subject} is a subclass of what? Table 3: Relation types and their corresponding question templates. • B: 2.1.1.156 • C: 1.1.1.228 • D: 4.2.3.87 Answer: B Example 5: Pathway Relationship Question: What is a pathway associated with C15976? Options: • A: SMP00497 • B: SMP00417 • C: SMP00384 • D: SMP00507 Answer: C D Description generation examples The Understanding capability is assessed through pathway description generation tasks derived from PathBank. Models receive pathway names as input and must generate comprehensive, scientifically accurate descriptions explaining the biochemical mechanisms, participating molecules, biological significance, and clinical relevance. Below are representative examples demonstrating the range of pathway types and description complexity in MetaBench, from metabolic pathways and disease mechanisms to drug action pathways and gene reg- ulation systems. Example 1: Signaling Pathway Question: What is the definition of Succi- nate Signalling? Reference Description: Tricarboxylic acid (TCA) cycle intermediates can function as inflammatory signals. Succinate enhances 14 glycolysis in several ways. It inhibits prolyl hydroxylase domain (PHD) enzyme func- tion, both directly by product inhibition and indirectly via reactive oxygen species (ROS), driving hypoxia-inducible factor-1α (HIF-1α) accumulation and increased gly- colysis. ROS also inhibit mitochondrial function, boosting glycolysis as a result. Elevated HIF-1α enhances the expression of genes containing HIF response elements (HREs), including the interleukin 1β (IL- 1β) gene. Succinate can signal through suc- cinate receptor 1 (SUCNR1) and act in syn- ergy with Toll-like receptors (TLRs) to boost dendritic cell function. NAD+ exerts several anti-inflammatory effects by activating sir- tuins, a class of NAD+-dependent deacety- lases. Finally, inactivation of Sirt3 has been demonstrated to enhance NLR family, pyrin domain containing 3 (NLRP3) inflamma- some activation. Example 2: Gene Regulation Question: What is the definition of Operon: Ribosomal Protein rps0? Reference Description: The ribosomal pro- tein operon is a bicistronic operon consisting of rps0 and pnp. This operon is regulated by a Rho independent terminator, allowing for only rps0 to be transcribed if the terminator is formed. Example 3: Disease Pathway Question: What is the definition of Glycogenosis, Type IA. Von Gierke Dis- ease? Reference Description: Glycogen storage disease type I (also known as GSDI or von Gierke disease) is an inherited disorder caused by the buildup of a complex sugar called glycogen in the body’s cells. The ac- cumulation of glycogen in certain organs and tissues, especially the liver, kidneys, and small intestines, impairs their ability to function normally. Researchers have de- scribed two types of GSDI, which differ in their signs and symptoms and genetic cause. These types are known as glycogen storage disease type Ia (GSDIa) and glycogen stor- age disease type Ib (GSDIb). Two other forms of GSDI have been described, and they were originally named types Ic and Id. However, these types are now known to be variations of GSDIb; for this reason, GSDIb is sometimes called GSD type I non-a. Mu- tations in two genes, G6PC and SLC37A4, cause GSDI. Example 4: Drug Action Pathway Question: What is the definition of Tetracy- cline Action Pathway? Reference Description: Tetracycline is a short-acting antibiotic that is semi- synthetically produced from chlortetracy- cline, a compound derived from Strepto- myces aureofaciens. Tetracycline enters bac- terial cells by passively diffusing through membrane porin channels. Once inside the cell, tetracycline reversibly binds to the 30S subunit just above the binding site for aminoacyl tRNA. At its primary binding site, interactions with the sugar phosphate back- bone of residues in helices 31 and 34 via hydrogen bonds with oxygen atoms and hy- droxyl groups on the hydrophilic side of the tetracycline help anchor the drug in posi- tion. Salt bridge interactions between the backbone of 16S rRNA and tetracycline are mediated by a magnesium ion in the bind- ing site. Tetracycline prevents incoming aminoacyl tRNA from binding to the A site on the ribosome-RNA complex via steric hindrance. This causes inhibition of protein synthesis and hence bacterial cell growth. Example 5: Pharmaceutical Mechanism Question: What is the definition of Sufen- tanil Action Pathway? Reference Description: Sufentanil is a pharmacologically-active synthetic small molecule derived from fentanyl and belongs to a class of drugs called opioids. Opi- oids are therapeutically employed to achieve analgesia. Sufentanil’s rapid mechanism of action primarily involves its agonistic ef- fects on mu-type opioid receptors which are inhibitory G-coupled protein receptors and lead to the inhibition of adenylate cy- 15 clase and decrease in cAMP production. It also inhibits nociceptive neurotransmitter re- lease and induces membrane hyperpolariza- tion. Analgesia, anesthesia, and respiratory depression are a consequence of remifenta- nial’s action. E Grounding (Id mapping) examples The Grounding capability is assessed through cross- database identifier mapping tasks derived from MetabolitesID. This task evaluates whether mod- els can accurately translate metabolite identifiers across heterogeneous database systems (KEGG, HMDB, ChEBI) and between structured identi- fiers and chemical names. Successful identifier grounding is fundamental to integrating fragmented metabolomics knowledge, yet represents a critical bottleneck for current LLMs. Below are represen- tative examples demonstrating the diverse mapping types required in MetaBench. Example 1: HMDB to KEGG Mapping Question: What is the KEGG ID of HMDB ID HMDB0010090? Answer: C00626 Example 2: KEGG to HMDB Mapping Question: What is the HMDB ID of KEGG ID C07251? Answer: HMDB0014982 Example 3: Name to KEGG Mapping Question: What is the KEGG ID of metabo- lite Ponalactone A? Answer: C09174 Example 4: Name to HMDB Mapping Question: What is the HMDB ID of metabo- lite Ethylparaben? Answer: HMDB0032573 Example 5: Name to ChEBI Mapping Question: What is the ChEBI ID of metabo- lite Croconazole? Answer: 3920 Note: These examples illustrate the precision re- quired for identifier grounding tasks. Models must produce exact matches to succeed, as approxi- mate or similar identifiers are incorrect. This task represents the most challenging capability in MetaBench, with even the best-performing models (Claude-sonnet-4) achieving only 0.87% accuracy without retrieval augmentation, highlighting a fun- damental limitation in current LLM architectures for structured database mapping tasks. F Prompt for reasoning benchmark To construct the Reasoning (Triple Extraction) benchmark, we generated natural language sen- tences from structured knowledge graph triples us- ing DeepSeek-V3.1 (Liu et al., 2024). Each triple from MetaKG was transformed into a natural lan- guage sentence using the prompt template shown below. The prompt explicitly instructs the model to avoid template-like language and to create varied, natural expressions of biomedical facts, ensuring that the resulting benchmark requires genuine se- mantic parsing rather than pattern matching. Prompt for Reasoning Benchmark Given the following triple from a biomedi- cal knowledge graph: Subject: head Predicate: rel Object: tail Write a natural language sentence that ex- presses this fact, but do so in a way that is different from a simple template. Be cre- ative and vary the phrasing. Do not use the words ’subject’, ’predicate’, or ’object’. This generation approach ensures that the Rea- soning task evaluates genuine natural language un- derstanding rather than template recognition. The diversity in expression styles reflects the variability found in real scientific literature, making the bench- mark more representative of practical knowledge extraction scenarios in metabolomics research. G Triple extraction examples The Reasoning capability is assessed through knowledge graph triple extraction tasks derived from MetaKG. Models receive natural language sentences describing metabolite relationships and must extract structured triples in the format (head, relationship, tail). This task evaluates the model’s ability to parse unstructured scientific text and 16 perform structured reasoning, which is a critical skill for automated knowledge extraction from metabolomics literature. Below are representative examples demonstrating disease associations and metabolic disorder relationships. Example 1: Disease Association Text: Research has linked L-Methionine to the onset and manifestation of Epilepsy. Extracted Triple: • Head: L-Methionine • Relationship: has_disease • Tail: Epilepsy Example 2: Metabolic Disorder Text: Capric acid is implicated in the devel- opment of disorders related to metabolism and nutrition. Extracted Triple: • Head: Capric acid • Relationship: has_disease • Tail: Metabolism and nutrition disor- ders Example 3: Biomarker Relationship Text: Citrulline serves as a crucial biomarker for the diagnosis and assessment of intestinal failure. Extracted Triple: • Head: Citrulline • Relationship: has_disease • Tail: Intestinal failure Example 4: Deficiency Manifestation Text: A deficiency in Vitamin A can man- ifest as a range of afflictions impacting the nervous system. Extracted Triple: • Head: Vitamin A • Relationship: has_disease • Tail: Nervous system disorders Example 5: Contributing Factor Text: A deficiency in propionylcarnitine can be a contributing factor in the development of missing teeth. Extracted Triple: • Head: Propionylcarnitine • Relationship: has_disease • Tail: Missing teeth H Study description generation) examples The Research capability is assessed through study description generation tasks derived from Metabo- Lights. Models receive concise study titles as in- put and must generate comprehensive descriptions containing experimental methodologies, analytical techniques, key findings, and biological implica- tions. This task represents the highest level of scien- tific complexity in MetaBench, testing the model’s ability to synthesize knowledge about research de- sign, technical protocols, and scientific interpreta- tion. Below are representative examples demon- strating diverse research domains in metabolomics. Example 1: Software Evaluation Question: Write a study description for the study titled ’Comprehensive evaluation of untargeted metabolomics data processing software in feature detection, quantification and discriminating marker selection’. Reference Description (excerpt): Data analysis represents a key challenge for un- targeted metabolomics studies and it com- monly requires extensive processing of more than thousands of metabolite peaks included in raw high-resolution MS data. Although a number of software packages have been developed to facilitate untargeted data pro- cessing, they have not been comprehensively scrutinized in the capability of feature detec- tion, quantification and marker selection us- ing a well-defined benchmark sample set. In this study, we acquired a benchmark dataset from standard mixtures consisting of 1100 compounds with specified concentration ra- tios including 130 compounds with signifi- cant variation of concentrations. Five soft- 17 ware evaluated here (MS-Dial, MZmine 2, XCMS, MarkerView, and Compound Dis- coverer) showed similar performance in de- tection of true features... [continues] Example 2: Clinical Metabolomics Question: Write a study description for the study titled ’Altered metabolome and micro- biome features provide clues in understand- ing irritable bowel syndrome and depression comorbidity’. Reference Description (excerpt): Irrita- ble bowel syndrome (IBS) is one of the functional gastrointestinal disorders charac- terized by chronic and/or recurrent symp- toms of abdominal pain and irregular defe- cation. Changed gut microbiota has been proposed to mediate IBS; however, contra- dictory results exist, and IBS-specific mi- crobiota, metabolites, and their interactions remain poorly understood. To address this is- sue, we performed metabolomic and metage- nomic profiling of stool and serum samples based on discovery (n = 330) and validation (n = 101) cohorts... [continues] Example 3: Multi-Omics Integration Question: Write a study description for the study titled ’Mechanisms of hepatic steato- sis in chickens: integrated analysis of the host genome, molecular phenomes and gut microbiome’. Reference Description (excerpt): Hepatic steatosis is the initial manifestation of abnor- mal liver functions and often leads to liver diseases such as non-alcoholic fatty liver disease in humans and fatty liver syndrome in animals. In this study, we conducted a comprehensive analysis of a large chicken population consisting of 705 adult hens by combining host genome resequencing, liver transcriptome, proteome and metabolome analysis, as well as microbial 16S rRNA gene sequencing of each gut segment... [con- tinues] Example 4: Developmental Metabolomics Question: Write a study description for the study titled ’Changes in the Milk Metabolome of the Giant Panda (Ailuropoda melanoleuca) with Time after Birth – Three Phases in Early Lactation and Progressive Individual Differences’. Reference Description (excerpt): Ursids (bears) in general, and giant pandas in partic- ular, are highly altricial at birth. The compo- nents of bear milks and their changes with time may be uniquely adapted to nourish relatively immature neonates, protect them from pathogens, and support the maturation of neonatal digestive physiology. Serial milk samples collected from three giant pandas in early lactation were subjected to untar- geted metabolite profiling and multivariate analysis... [continues] Example 5: Subcellular Imaging Question: Write a study description for the study titled ’Subcellular antibiotic visual- ization reveals a dynamic drug reservoir in infected macrophages’. Reference Description (excerpt): Tubercu- losis, caused by the intracellular pathogen Mycobacterium tuberculosis, remains the world’s deadliest infectious disease. Steriliz- ing chemotherapy requires at least 6 months of multidrug therapy. Difficulty visualiz- ing the subcellular localization of antibi- otics in infected host cells means that it is unclear whether antibiotics penetrate all mycobacteria-containing compartments in the cell. Here, we combined correlated light, electron, and ion microscopy to image the distribution of bedaquiline in infected hu- man macrophages... [continues] I Compared LLMs We evaluate 25 state-of-the-art large language mod- els spanning eight model families to ensure com- prehensive coverage of current LLM capabilities in metabolomics. Our selection includes both open-source models, Qwen3 (1.7B–32B), Gemma- 3 (270M–27B), Llama (1B–70B), DeepSeek-v3.1 (685B MoE), and GPT-oss-20b (20B MoE)and 18 Table 4: Models evaluated in MetaBench. Parameter counts are from official sources where available; proprietary models do not disclose parameter counts. Model Params Architecture Release date Provider Open-source models Qwen3-1.7b 1.7B dense 2025-04-29 Alibaba Qwen Qwen3-4b 4B dense 2025-04-29 Alibaba Qwen Qwen3-8b 8B dense 2025-04-29 Alibaba Qwen Qwen3-14b 14B dense 2025-04-29 Alibaba Qwen Qwen3-32b 32B dense 2025-04-29 Alibaba Qwen Gemma-3-270m-it 270M dense 2025-03-12 Google Gemma-3-1b-it 1B dense 2025-03-12 Google Gemma-3-4b-it 4B dense 2025-03-12 Google Gemma-3-12b 12B dense 2025-03-12 Google Gemma-3-27b 27B dense 2025-03-12 Google Llama-3.2-1b 1B dense 2024-09-25 Meta Llama-3.2-3b 3B dense 2024-09-25 Meta Llama-3.1-8b 8B dense 2024-07-23 Meta Llama-3.1-70b 70B dense 2024-07-23 Meta DeepSeek-v3.1 685B (MoE)a MoE 2025-08-21 DeepSeek GPT-oss-20b 20B MoE 2025-08-21 OpenAI Closed-source models GPT-4o-mini Not disclosed multimodal 2024-07-18 OpenAI GPT-5-nano Not disclosed unified/thinking variants 2025-08-07 OpenAI GPT-5-mini Not disclosed unified/thinking variants 2025-08-07 OpenAI GPT-5 Not disclosed unified/thinking variants 2025-08-07 OpenAI Claude-haiku-3.5 Not disclosed hybrid family 2024-10-22 Anthropic Claude-sonnet-3.7 Not disclosed hybrid reasoning 2025-02-24 Anthropic Claude-sonnet-4 Not disclosed hybrid reasoning 2025-05-22 Anthropic Gemini-2.5-flash Not disclosed thinking model 2025-06-17 Google Gemini-2.5-pro Not disclosed thinking model 2025-03-25 Google a MoE = mixture-of-experts. Count refers to total parameters across experts, not active per token. closed-source systems from leading providers: OpenAI’s GPT series (4o-mini through GPT-5), Anthropic’s Claude family (haiku-3.5, sonnet-3.7, sonnet-4), and Google’s Gemini-2.5 models (flash and pro). This diversity enables systematic anal- ysis of how model scale, architecture (dense vs. mixture-of-experts), and training paradigms affect performance across the five metabolomics capa- bilities assessed in MetaBench. Table 4 summa- rizes the key specifications of all evaluated mod- els, including parameter counts (where available), architectural types, release dates, and providers. This comprehensive model selection establishes MetaBench as a rigorous testbed for tracking progress in metabolomics-oriented language mod- eling across both academic and industrial research efforts. J System prompts In our evaluation framework, we designed five dis- tinct system prompts, each tailored to a capabil- ity assessed in MetaBench. These prompts ensure that large language models receive clear, domain- specific instructions aligned with metabolomics tasks. • MCQA (Multiple-Choice Question Answer- ing): Tests factual knowledge of metabolites, pathways, enzymes, and biological systems. • Triple Extraction: Assesses reasoning ability by extracting (head, relationship, tail) triples from natural language. • Study Description: Evaluates scientific writ- ing and comprehension by generating detailed study descriptions from research titles. • QA (Open-ended Question Answering): Probes explanatory ability on pathways, mech- anisms, and processes. • Metabolite Mapping: Measures precision in identifier mapping across metabolomics databases like KEGG, HMDB and ChEBI. 19 The following parts contain the exact system prompts used for each evaluation task. System Prompt for MCQA Task: Answer multiple choice questions about metabolite properties, pathways, enzymes, and biological processes. Instructions: - You will be given a question and four op- tions (A, B, C, D). - Choose the correct answer based on your knowledge of metabolomics, biochemistry, and biological systems. - Respond with only the letter (A, B, C, or D). Coverage: - Metabolite taxonomy and classification - Enzyme associations and EC numbers - KEGG pathway and network associations - Molecular properties (weight, structure) - Tissue locations and biological functions - Protein and gene relationships Examples: Q: What is the taxonomy class of Prostaglandin E1? Options: {A: Prostaglandins and related compounds, B: Phenylpyrazoles, C: Fura- nones, D: Azolines} Answer: A —————————————————- Q: What is the taxonomy class of PC(18:1(9Z)/18:0)? Options: {A: Alanine and derivatives, B: Glycerophospholipids, C: Anisoles, D: Isoleucine and derivatives} Answer: B —————————————————- Q: Where is Acetylcholine located in tissue? Options: {A: Bone Marrow, B: Adrenal Cor- tex, C: Vitreous humor, D: Neuron} Answer: D System Prompt for Triple Extraction Task: Extract knowledge graph triples from natu- ral language text about metabolites and their biological properties. Format: Head: [metabolite/entity] Relationship: [relationship type] Tail: [disease/condition/property/location] Allowed relationships: - has\_disease : metabolite associated with diseases/conditions - has\_disposition : metabolite location or cellular disposition - has\_smiles : molecular structure in SMILES notation - has\_synonym : alternative names - has\_class : biochemical classification - has\_tissue_location : tissue or organ presence Examples: Text: Within the human body, the blood’s platelets are a known reservoir for the com- pound inosine. Head: Inosine Relationship: has_tissue_location Tail: Platelet —————————————————- Text: Dolichyl phosphate D-mannose is cat- egorized among the broader class of carbo- hydrates and carbohydrate conjugates. Head: Dolichyl phosphate D-mannose Relationship: has_class Tail: Carbohydrates and carbohydrate con- jugates —————————————————- Text: The molecular structure of Isobutyryl-L-carnitine is uniquely defined by the SMILES string CC(C)C(=O)O[C@H](CC(O)=O)... Head: Isobutyryl-L-carnitine Relationship: has_smiles Tail: CC(C)C(=O)O[C@H](CC(O)=O)... System Prompt for Study Description Task: Write a comprehensive study description for a given metabolomics research title. Instructions: Include the following: 1. Study purpose and objectives 2. Methodology and experimental design (LC-MS, GC-MS, NMR) 3. Key findings and results 4. Clinical or scientific significance 5. Sample characteristics and analytical methods 20 6. Metabolic pathways and biological pro- cesses involved Coverage: - Untargeted and targeted metabolomics - Sample preparation and analytical tech- niques - Data processing and statistical analysis - Biological interpretation and pathway anal- ysis - Clinical applications and biomarker discov- ery Example: Title: "Dissolved organic carbon compounds in deep-sea hydrothermal vent fluids from the East Pacific Rise at ... Description: "... detailed description ..." System Prompt for QA Task: Answer questions about biochemical defini- tions, metabolic pathways, and biological processes. Instructions: Provide comprehensive, accurate answers including: 1. Definition of the pathway, process, or concept 2. Key enzymes, reactions, and components 3. Biological significance and function 4. Clinical or pathological relevance (if any) 5. Related pathways or mechanisms Coverage: - Metabolic pathways (TCA cycle, glycoly- sis, etc.) - Biochemical processes and mechanisms - Disease definitions and mechanisms - Drug action pathways - Operon regulation - Molecular signaling Examples: Q: What is the definition of Succinate Sig- nalling? A: "... detailed definition ..." —————————————————- Q: What is the definition of Tetracycline Action Pathway? A: "... detailed definition ..." System Prompt for Metabolite Mapping Task: Answer metabolite identifier mapping ques- tions between databases and naming conven- tions. Instructions: - Provide only the identifier value as the an- swer. - No extra explanation or formatting. Supported identifiers: - CAS Registry Numbers - PubChem CIDs - Chemical names (IUPAC, common) - HMDB IDs - ChEBI IDs - KEGG IDs Examples: Q: What is the KEGG ID of metabolite Tamibarotene? A: C12864 —————————————————- Q: What is the HMDB ID of metabolite TG(15:0/i-20:0/a-17:0)[rac]? A: HMDB0102782 —————————————————- Q: What is the ChEBI ID of Benzilic acid? A: 39414 —————————————————- Q: What is the KEGG ID of 2-Iodophenol? A: C01874 K The Critical Role of Metabolite Identifier Grounding in Metabolomics Metabolite identifier groundingt represents a fundamental bottleneck in metabolomics re- search (Swainston et al., 2014). A single metabo- lite may be referenced by multiple identifiers: KEGG uses compound IDs (e.g., C00031 for D- Glucose), HMDB employs alphanumeric codes (e.g., HMDB0000122), ChEBI assigns numeri- cal IDs (e.g., 17234), and scientific literature uses IUPAC names, common names, or syn- onyms, with PubChem listing nearly 250 syn- onyms for L-Tryptophan alone (Krettler et al., 2024). This heterogeneity arose because differ- ent databases emerged from distinct research com- munities: KEGG emphasizes pathway context, HMDB focuses on human metabolites with clin- ical relevance, ChEBI provides chemical ontol- 21 ogy, and PubChem offers comprehensive chemical data (Wishart et al., 2022; Kanehisa et al., 2025). Each adopted identifier schemes suited to their or- ganization, predating standardization efforts. Accurate identifier grounding is essential throughout the research pipeline. Cross-study meta-analyses depend on harmonizing identifiers across laboratories and platforms (Braisted et al., 2023). Database construction efforts like MetaKG require linking millions of facts from heteroge- neous sources to unified metabolite entities (Lu, 2025). Identifier mapping errors cascade through research quality: incorrect links lead to spurious discoveries, failed recognition causes missed con- nections, and inconsistent usage prevents repro- ducible science (Cajka and Fiehn, 2024; Powers et al., 2022). Database inconsistency compounds these chal- lenges. Analysis of 11 biochemical databases re- vealed that while HMDB maintains high consis- tency (only 1.7% of names linked to multiple IDs), ChEBI and KEGG show 14.8% and 13.3% am- biguity respectively (Grapov et al., 2018). Inter- database mapping using metabolite names shows inconsistencies ranging from 0% to 81.2%, with similar results (0-83%) when mapping via refer- ence identifiers (Grapov et al., 2018). Current so- lutions include manually curated mapping tables (e.g., MetabolitesID (Bioconductor, 2020)), web- based conversion tools like the Chemical Trans- lation Service (Wohlgemuth et al., 2010), and APIs, but these face persistent challenges: no single resource provides comprehensive coverage, databases update frequently requiring continuous curation, automated tools struggle with isomers and context-dependent synonyms, and different tools employ incompatible interfaces (Swainston et al., 2014; Krettler et al., 2024). Large Language Mod- els promise flexible, context-aware resolution lever- aging both structured databases and unstructured literature. However, MetaBench demonstrates that current models exhibit catastrophic failure without retrieval augmentation, achieving less than 1% ac- curacy even for frontier architectures. This finding establishes identifier grounding as a critical bottle- neck impeding the computational transformation of metabolomics toward integrated, AI-assisted dis- covery systems. 22
MetaBench: A Multi-task Benchmark for Assessing LLMs in Metabolomics Yuxing Lu1,2, Xukai Zhao3, J. Ben Tamo4, Micky C. Nnamdi4, Rui Peng2, Shuang Zeng1,2, Xingyu Hu5, Jinzhuo Wang2,*, May D. Wang 1,4,*, 1Wallace H. Coulter 2 3 4 5 : , Abstract Large Language Models (LLMs) have demonstrated remarkable capabilities on general text; however, their proficiency in specialized scientific domains that require deep, interconnected knowledge remains largely uncharacterized. Metabolomics presents unique challenges with its complex biochemical pathways, heterogeneous identifier systems, and fragmented databases. To systematically evaluate LLM capabilities in this domain, we introduce MetaBench, the first benchmark for metabolomics assessment. Curated from authoritative public resources, MetaBench evaluates five capabilities essential for metabolomics research: knowledge, understanding, grounding, reasoning, and research. Our evaluation of 25 open- and closed-source LLMs reveals distinct performance patterns across metabolomics tasks: while models perform well on text generation tasks, cross-database identifier grounding remains challenging even with retrieval augmentation. Model performance also decreases on long-tail metabolites with sparse annotations. With MetaBench, we provide essential infrastructure for developing and evaluating metabolomics AI systems, enabling systematic progress toward reliable computational tools for metabolomics research. 1 Introduction Large Language Models (LLMs) are being rapidly adopted across metabolomics research, driven by their demonstrated success in adjacent biomedical domains (Liu et al., 2025; Bekbergenova et al., 2025). Biomedical LLMs have transformed protein structure prediction, clinically assist with diagnosis and treatment planning, and chemistry-focused systems support reaction prediction and molecular design (Wang et al., 2023; Zhao et al., 2023; Hu and Lu, 2024). Research groups now routinely utilize LLMs for tasks ranging from analyzing experiment results to generating study proposals. However, this rapid adoption has outpaced systematic evaluation: we lack a comprehensive understanding of which metabolomics tasks LLMs can reliably perform, where they fail, and why. This evaluation gap poses significant risks for a field where incorrect metabolite assignments or pathway interpretations can propagate through analysis pipelines and lead to false biological conclusions. The consequences of deploying LLMs without proper evaluation extend beyond individual research errors. Metabolomics research demands capabilities that differ fundamentally from general text understanding and generation (Bifarin et al., 2025). Researchers must integrate information across specialized databases such as the Human Metabolome Database (HMDB) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) (Wishart et al., 2022; Kanehisa et al., 2025), each employing distinct identifier systems and ontologies. Without knowing which of the capabilities current LLMs possess, researchers cannot make informed decisions about where to deploy these tools, what verification procedures to implement, or how to interpret AI-assisted results. Current biomedical evaluation frameworks cannot answer these questions (Chen et al., 2025; Krithara et al., 2023; Jin et al., 2019). These benchmarks evaluate capabilities in natural language understanding but do not measure the specialized operations that metabolomics requires. For example, high performance on MedQA provides no evidence for reliability on identifier grounding, and failure on BioASQ does not preclude success on pathway description generation. Without evaluation frameworks designed for metabolomics-specific tasks, the field lacks criteria for model selection, failure mode identification, or systematic improvement. In this paper, we present MetaBench, a comprehensive benchmark designed to systematically assess LLM capabilities across metabolomics tasks. MetaBench evaluates five capability levels essen1 16 Oct 2025 tial for metabolomics research: Knowledge (factual recall of metabolite properties), Understanding (generation of coherent pathway descriptions), Grounding (accurate identifier mapping across heterogeneous databases), Reasoning (extraction of structured relationships from natural language), and Research (synthesis of comprehensive study descriptions). Through evaluation of 25 state-ofthe-art models on ~8,000 test cases derived from authoritative resources including HMDB, KEGG, PathBank, MetaKG, and MetaboLights (Wishart et al., 2022; Guijas et al., 2018; Lu, 2025; Yurekten et al., 2024), we provide the first systematic characterization of LLM performance in metabolomics, revealing which capabilities current models possess, where critical bottlenecks exist, and what architectural innovations are needed. This work makes the following contributions: • We introduce MetaBench, the first comprehensive benchmark for evaluating LLM capabilities in metabolomics, comprising ~8,000 test cases across five core capability levels. • We conduct a systematic evaluation of 25 LLMs, revealing how these models perform across different metabolomics tasks. • We identify and analyze critical bottlenecks in current LLMs for metabolomics applications, providing mechanistic insights into failure modes and pathways for improvement. 2 Related Work 2.1 Benchmarks in Scientific Domains The rapid advancement of LLMs has spurred the development of numerous benchmarks to assess their capabilities in the scientific domain, largely concentrating on broad domains like biomedicine, urban planning, psychology, and chemistry (Cai et al., 2024; Ren et al., 2024; Zhao et al., 2025). Biomedical benchmarks have traditionally focused on tasks like question answering over literature and named entity recognition from clinical notes (Krithara et al., 2023; Jin et al., 2019; Jiang et al., 2025a). While invaluable, these benchmarks do not capture the domain knowledge, data structures, and specific tasks central to metabolomics, such as biochemical pathways, study proposals, and metabolite identifier groundings. Similarly, in chemistry, benchmarks like MoleculeNet (Wu et al., 2018) have emerged to evaluate models on tasks such as molecule property prediction and name conversion. These benchmarks are tailored to the conventions of chemistry, focusing on chemical structure representations (e.g., SMILES) and reaction kinetics, which only partially overlap with the challenges in metabolomics (Roessner and Bowne, 2009). Metabolomics requires an understanding not just of individual molecules but of their roles within complex, dynamic biological systems (Weckwerth, 2003). Our MetaBench is the first to bridge this gap, offering a suite of tasks designed specifically to test the deep, systems-level capabilities required for metabolomics. 2.2 NLP in Metabolomics The application of Natural Language Processing (NLP) in metabolomics is an emerging but critical area of research (Bifarin et al., 2025; Coler et al., 2024). Prior work has primarily focused on leveraging NLP to aid in the interpretation of experimental results by contextualizing findings with existing literature or databases (Lu, 2025; Bekbergenova et al., 2025; Rahman et al., 2024). These efforts often involve tools for named entity recognition (NER) to identify metabolites in text and relation extraction to link them to diseases or genes. However, these applications typically operate as pipeline components rather than end-to-end reasoning systems. Furthermore, a significant challenge in the metabolomics field is knowledge fragmentation; information about a single metabolite may be spread across multiple databases (e.g., KEGG (Kanehisa et al., 2025), HMDB (Wishart et al., 2022), PubChem (Canese and Weis, 2013)), each using different identifier systems. This necessitates robust entity grounding to create a unified understanding (Ji et al., 2020; Weston et al., 2019). While entity grounding is a well-established NLP task, its application to the diverse and often overlapping identifiers in metabolomics is a unique and unsolved challenge. MetaBench formalizes these challenges into benchmark tasks for the first time, positioning our work as a necessary next step: moving from evaluating specific, isolated NLP tasks to a holistic assessment of powerful, end-to-end generative language models. 3 MetaBench 3.1 Overview The lack of standardized evaluation benchmarks in metabolomics has left the field without clear criteria to assess LLM performance, making it difficult 2 Total tasks 1264 MetaboliteIDMapping MetaKG MetaboLights Question: What is the KEGG ID of HMDB ID HMDB0010090? Correct Answer: C00626 Generate Generate Generate Generate Generate Evaluate Evaluate Evaluate Evaluate Evaluate Total tasks 1000 Text: Research has linked L-Methionine to the onset and manifestation of Epilepsy. Reference Triple: Head: L-Methionine Relationship: has_disease Tail: Epilepsy Total tasks 1200 Question: Write a study description for the study titled 'Comprehensive evaluation of untargeted metabolomics data processing software in feature detection, quantification and discriminating marker selection'. Reference Description: Data analysis represents a key challenge for untargeted metabolomics studies and it... Total tasks 2125 PathBank Knowledge Understanding Grounding Research Reasoning Question: Where is Acetylcholine located in tissue? Options: A: Bone Marrow B: Adrenal Cortex C: Vitreous humor D: Neuron HMDB / KEGG MCQA Description Generation Identifier Grounding Triple Extraction Study generation Total tasks 2500 Question: What is the definition of Operon: Ribosomal Protein rps0? Reference Description: The ribosomal protein operon is a bicistronic operon consisting of rps0 and pnp. This operon is regulated by a Rho independent terminator, allowing for only rps0 to be transcribed... Data Source Benchmark Task Performance Task Example 60% 80% 10% 70% 80% Figure 1: MetaBench construction and task design. MetaBench integrates data from multiple datasets to assess five capabilities: knowledge, understanding, grounding, reasoning, and research. to compare methods, identify limitations, or provide reliable guidance for practical use. This gap slows both scientific progress and the safe adoption of LLMs in metabolomics research. To address this challenge, we curated a comprehensive benchmark from different sources (detailed introduction in Appendix A) that evaluates LLMs across five capability levels, from factual knowledge recall to research-oriented text generation (Figure 1). This benchmark establishes the first critical foundation for methodological innovation and future applications of LLMs in metabolomics. 3.2 Benchmark Taxonomy We organize MetaBench into five capabilities, each aligned with a specialized requirement in metabolomics workflows: Knowledge. Factual recall of general knowledge, such as the correct metabolite taxonomy. Understanding. Generation of coherent descriptions of certain metabolites and pathways; Grounding. Accurate alignment of identifiers across biomedical databases; Reasoning. Entity and relationship extraction and structuring from natural language; Samples Tokens Knowledge: 313.5K Understanding: 381.8K Grounding: 24.5K Reasoning: 58.2K Research: 715.8K Knowledge: 2.5K Understanding: 1.3K Grounding: 1.0K Reasoning: 1.2K Research: 2.1K Figure 2: The statistics of MetaBench datasets. Research. Generation of research-related study descriptions from minimal prompts. Together, these datasets provide a quantifiable, hierarchical framework for assessing LLMs in metabolomics, analogous to capability ladders used in general NLP benchmarks (Jiang et al., 2025b). 3.3 Dataset Construction We instantiate these capabilities through public available datasets like HMDB (Wishart et al., 2022), KEGG (Kanehisa et al., 2025), Metabolights (Yurekten et al., 2024), PathBank (Wishart et al., 2024), and MetaKG (Lu, 2025), the overall process is illustrated in Figure 1. All datasets are re3 Table 1: Statistics of the MetaBench datasets. Min / Avg. / Max indicates token lengths per sample. Source Capability Min Avg. Max HMDB, KEGG Knowledge 15 48.17 2113 PathBank Understanding 26 166.28 880 MetabolitesID Grounding 7 11.94 35 MetaKG Reasoning 9 17.95 35 MetaboLights Research 17 222.57 852 leased publicly on HuggingFace1 for reproducibility and the development of LLMs in metabolomics. 3.3.1 Knowledge-based MCQA To assess how well different LLMs encode metabolomics knowledge, we construct a multiple-choice QA benchmark derived from HMDB (Wishart et al., 2022) and KEGG (Kanehisa et al., 2025). We select 26 attributes from these databases spanning taxonomy, molecular properties, biological associations, and pathway relationships. For each attribute, we extract entry-attribute pairs and generate four-option questions by combining the correct value with three in-domain distractors randomly sampled from other values of the same attribute. All questions follow standardized templates (Appendix Table 3), such as "What is the taxonomy class of subject?" The resulting dataset comprises 2,500 questions, with 100 questions per attribute. Examples are provided in Appendix C. 3.3.2 Description generation Beyond factual knowledge assessment, we evaluate whether LLMs can generate coherent, scientifically accurate descriptions of metabolomics concepts. This task tests the model's ability to produce informative pathway descriptions from pathway names alone. We curated a benchmark using 1,264 pathway-description pairs from PathBank (Wishart et al., 2024), where each pathway name serves as the input prompt and the corresponding expert-written description serves as the reference output. Descriptions range from 148 to 6,452 tokens and contain comprehensive information about biochemical mechanisms, participating enzymes, and biological significance. Performance is assessed using BERTScore to measure semantic similarity between generated and reference texts. Examples are provided in Appendix D. 1Anonymized for double-blind review 3.3.3 Cross-Database Identifier Grounding A fundamental challenge in metabolomics research is integrating information about metabolites across heterogeneous databases, which requires accurate mapping between different identifier systems. Only through successful identifier resolution can fragmented knowledge be unified. To evaluate this grounding capability, we curated a benchmark using the MetabolitesID2 package that requires models to map metabolite identifiers across KEGG, HMDB, and ChEBI databases. The task encompasses bidirectional mappings: KEGG↔HMDB, KEGG↔ChEBI, HMDB↔ChEBI, and identifier↔name conversions. We sample 1,000 mapping pairs from a comprehensive table containing 130,005 entries, ensuring balanced representation across all mapping types. Each question provides a source identifier and requires the model to predict the corresponding target identifier, evaluated using exact match accuracy. Examples are provided in Appendix E. 3.3.4 Knowledge extraction reasoning To evaluate structured knowledge extraction capabilities, we develop a benchmark that tests whether models can accurately parse metabolomics facts from natural language into knowledge graph triples. We select 1,200 triples from MetaKG (Lu, 2025) spanning six relationship types: has_disease, has_disposition, has_smiles, has_synonym, has_class, and has_tissue_location. For each triple, we use DeepSeek-V3.1 (Liu et al., 2024) to generate natural language sentences expressing the relationship. For example, the triple (Inosine, has_tissue_location, Platelet) is rendered as "Within the human body, the blood's platelets are a known reservoir for the compound inosine." The generation prompt is provided in Appendix F. Models must extract the complete (head, relationship, tail) triple from each sentence, requiring both natural language understanding and structured reasoning. Performance is evaluated using exact match accuracy, where all three components must be correctly identified. Representative examples are shown in Appendix G. 3.3.5 Study description generation The ultimate goal of applying LLMs to metabolomics is to enable comprehensive research support, from interpreting experimental results through multi-agent systems and data 2https://github.com/yigbt/metaboliteIDmapping 4 Table 2: MetaBench results for open- and closed-source LLMs across five capabilities. Cells report scores on a 0-100 scale. Best per column is shaded pink and second-best green. Model Knowledge Understanding Grounding Reasoning Research Average Open-source models Qwen3-1.7b 32.01 81.46 0.27 56.08 79.46 49.86 Qwen3-4b 35.94 83.21 0.27 66.39 80.52 53.27 Qwen3-8b 43.58 82.87 0.25 66.47 80.83 54.40 Qwen3-14b 52.62 82.95 0.29 65.14 80.88 56.38 Qwen3-32b 56.42 83.01 0.33 67.72 82.57 58.01 Gemma-3-270m-it 13.61 81.28 0.20 11.17 80.37 37.33 Gemma-3-1b-it 28.89 81.43 0.47 44.04 81.70 47.31 Gemma-3-4b-it 46.26 81.97 0.53 68.50 82.29 55.91 Gemma-3-12b 51.25 82.64 0.60 70.22 82.73 57.49 Gemma-3-27b 55.82 83.05 0.70 72.69 83.24 59.10 Llama-3.2-1b 12.44 82.40 0.37 33.04 81.77 42.80 Llama-3.2-3b 42.93 82.75 0.60 66.83 82.33 55.49 Llama-3.1-8b 50.58 82.71 0.57 71.92 83.25 57.81 Llama-3.1-70b 57.52 83.08 0.63 72.98 83.18 59.88 DeepSeek-v3.1 54.34 82.38 0.48 73.81 83.52 58.91 GPT-oss-20b 50.66 82.83 0.50 70.16 82.87 57.40 Closed-source models GPT-4o-mini 53.82 82.36 0.27 66.39 80.02 56.17 GPT-5-nano 35.65 82.71 0.27 68.47 82.00 53.82 GPT-5-mini 59.02 82.95 0.40 71.89 82.39 59.33 GPT-5 60.50 83.52 0.47 72.39 82.72 59.92 Claude-haiku-3.5 44.34 82.15 0.47 70.64 81.41 55.80 Claude-sonnet-3.7 59.78 82.87 0.53 71.31 83.87 59.67 Claude-sonnet-4 60.94 83.20 0.87 72.56 83.39 60.99 Gemini-2.5-flash 52.10 82.37 0.37 69.25 82.43 57.30 Gemini-2.5-pro 59.54 83.12 0.63 73.14 83.28 60.34 Higher is better for all columns. analysis pipelines to potentially designing or executing experiments autonomously. As a step toward this vision, we evaluate models on a research-level task requiring the generation of detailed study descriptions from concise titles. We curated a benchmark using 2,125 title-description pairs from MetaboLights (Yurekten et al., 2024), a leading metabolomics data repository. Each study title serves as the input prompt, and models must generate the corresponding comprehensive description containing experimental methodologies, analytical techniques, key findings, and biological implications. Examples are provided in Appendix H. 3.4 Statistics Table 1 and Figure 2 present comprehensive statistics for the MetaBench datasets. We report minimum, average, and maximum token lengths per sample across all tasks. MetaBench comprises 8,100 samples distributed across five capability levels, with Knowledge (2,500 samples) and Research (2,125 samples) representing the largest subsets. Token volume varies substantially by task type: Understanding and Research tasks generate the highest token counts due to paragraph-level outputs (averaging 166 and 223 tokens, respectively), while Grounding and Reasoning tasks involve shorter structured responses (averaging 12 and 18 tokens). This distribution ensures comprehensive evaluation across diverse task formats, from concise factual retrieval and structured extraction to extended scientific text generation. 3.5 Evaluation We employ task-appropriate metrics to ensure rigorous and meaningful evaluation. For classification tasks (knowledge MCQA, identifier grounding, and triple extraction reasoning), we report exact match accuracy. For generation tasks (pathway description generation and study description), we use BERTScore (Zhang et al., 2019) (RoBERTa (Liu et al., 2019) as backbone model) to measure semantic similarity between generated and reference texts. 5 All systems prompts for each task are provided in Appendix J. For closed-source models, we perform inference through official APIs provided by OpenAI, Anthropic, and Google. For open-source models, we deploy them locally on H200 GPU clusters using the vLLM 3 framework for efficient inference. We standardize all inference settings: temperature is set to 0.1 to encourage deterministic outputs, maximum generation length is capped at 4,096 tokens, and thinking modes are disabled for fair comparison. Each task uses a tailored system prompt that specifies the expected output format and evaluation criteria, ensuring alignment between model responses and metric requirements. 4 Results 4.1 Overall performance across capabilities We evaluate 25 state-of-the-art LLMs spanning eight model families (Appendix Table 4). Opensource models: Qwen3 (Yang et al., 2025) (1.7B, 4B, 8B, 14B, 32B), Gemma-3 (Team et al., 2025) (270M-it, 1B-it, 4B-it, 12B, 27B), Llama (Dubey et al., 2024) (3.2-1B, 3.2-3B, 3.1-8B, 3.1-70B), DeepSeek-v3.1 (Liu et al., 2024), and GPT-oss20b (Agarwal et al., 2025). Closed-source models: GPT (Achiam et al., 2023; Hurst et al., 2024) (4o-mini, 5-nano, 5-mini, 5), Claude (Anthropic, 2024, 2025) (haiku-3.5, sonnet-3.7, sonnet-4), and Gemini-2.5 (Naveed et al., 2025) (flash, pro). All evaluations use standard inference settings without external tools unless explicitly noted. Table 2 presents performance across five capabilities. Overall, Claude-sonnet-4 achieves the highest average score (60.99), followed closely by Gemini-2.5-pro (60.34) and GPT-5 (59.92). Among open-source models, Llama-3.1-70b leads at 59.88, nearly matching the best closed-source systems, followed by Gemma-3-27b (59.10), DeepSeek-v3.1 (58.91), Qwen3-32b (58.01), and GPT-oss-20b (57.40). The narrow performance gap between top open-source and closed-source models (less than 1.5 points) demonstrates that open models can achieve competitive metabolomics reasoning when properly scaled. Task-specific analysis reveals distinct capability profiles. For Knowledge tasks, Claude-sonnet-4 marginally leads at 60.94, slightly ahead of GPT-5 (60.50) and Gemini-2.5-pro (59.54). Understanding shows remarkably compressed performance, 3https://docs.vllm.ai/en/latest/ 108 109 1010 1011 1012 Model Parameters (log scale) 0.2 0.3 0.4 0.5 0.6 0.7 Grounding Score Grounding 108 109 1010 1011 1012 Model Parameters (log scale) 20 30 40 50 Knowledge Score Knowledge 108 109 1010 1011 1012 Model Parameters (log scale) 40 45 50 55 60 Overall Score Overall Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 108 109 1010 1011 1012 Model Parameters (log scale) 10 20 30 40 50 60 70 Reasoning Score Reasoning Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 108 109 1010 1011 1012 Model Parameters (log scale) 80 81 82 83 Research Score Research 108 109 1010 1011 1012 Model Parameters (log scale) 81.5 82.0 82.5 83.0 Understanding Score Understanding Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Model Family DeepSeek GPT-oss Gemma-3 Llama Qwen3 Figure 3: Parameter scaling trends in MetaBench. Average performance increases roughly log-linearly with model size, with consistent family-wise improvements. with all competitive models clustering between 81-84%; GPT-5 achieves the highest score (83.52), but the narrow range suggests this capability saturates early across modern architectures. Grounding presents a stark contrast: without retrieval augmentation, even the best model (Claude-sonnet-4) achieves only 0.87% accuracy, two orders of magnitude below other tasks. We analyze this bottleneck in detail in §4.3. For Reasoning, DeepSeek-v3.1 excels at 73.81%, with Gemini-2.5-pro (73.14) and Claude-sonnet-4 (72.56) close behind. Finally, Research returns to a high, compressed band led by Claude-sonnet-3.7 (83.87), DeepSeek-v3.1 (83.52), and Claude-sonnet-4 (83.39). 4.2 Parameter scaling trends The scaling law works on the five metabolomic capabilities, but the slope is task dependent (Figure 3, Table 2). Averaged across tasks, scores rise roughly with diminishing returns at the largest scales; family leaders at similar sizes differ by only 1-3 points, highlighting the influence of pretraining data and objectives beyond size alone. Familywise scaling trends are consistent: Qwen3 improves from 49.86 (1.7B) to 58.01 (32B), Gemma-3 from 37.33 (270M-it) to 59.10 (27B), and Llama from 42.80 (3.2-1B) to 59.88 (3.1-70B). GPT-20b-oss 6 What is the KEGG ID for the compound with HMDB ID HMDB0004148? C13691 Model Direct Generation Generation with Search API Output Gemini-2.5-flash Gemini-2.5-pro GPT-4o-mini GPT-5-nano GPT-5-mini GPT-5 DeepSeek-v3.1 Claude-3.5-haiku C00092 C02500 C00245 C00031 C01183 C05544 C00026 C00186 Claude-3.7-sonnet Claude-4-sonnet C00092 C05984 Model Output GPT-4o-mini GPT-5 Claude-haiku-3.5 Claude-sonnet-3.7 Claude-sonnet-4 C13691 C13691 C13691 C13691 C13691 Figure 4: LLMs consistently fail in metabolite identifier grounding. 11 LLMs were asked to retrieve KEGG ID for HMDB0004148. None produced the correct answer. In contrast, when augmented with web search API, all successfully provided the correct mapping. and DeepSeek-v3.1 also demonstrate performance consistent with their parameter counts. Task-level analysis shows distinct behaviors. Knowledge scales cleanly with size, with Llama rising from 12.44 to 57.52 and Qwen3 from 32.01 to 56.42. Reasoning also benefits strongly, where mid-size models plateau in the high 60s and the largest variants approach 73-74. By contrast, Understanding and Research tasks saturate early: most models fall in a narrow 81-84 band, with larger variants adding less than 2 points, suggesting these generation tasks depend more on instruction tuning than sheer scale. Finally, Grounding remains flat; its accuracy does not exceed 0.87% even for the largest models, showing that parameter growth alone cannot resolve identifier mapping. These results underline that while scale reliably improves recall and reasoning, it delivers marginal gains on fluent generation already near ceiling and fails on grounding tasks unless combined with retrieval and schema-aware normalization. 4.3 Identifier grounding is the bottleneck Among all tested capabilities, Grounding is the lowest-performing capability. Without retrieval, accuracy remains near zero and does not exceed 0.87% even for the largest models (Table 2). A probe illustrates the failure: asked "What is the KEGG ID for HMDB0004148?", 11 LLMs answered incorrectly (e.g., C00092, C02500); none returned the correct C13691 (Figure 4). With web search API, all five tested systems produced C13691 on this item (Figure 4). Aggregate results show the same pattern. Across five representative models, accuracy rises from {0.27, 0.47, 0.53, 0.87}% without search to {20.30, 27.77, 32.93, 38.00, 40.93}% with search (Figure 5). This 40-150× gain remains far from ceiling, indicating missing external evidence and normalization rather than decoding issues. Claude-haiku-3.5 Claude-sonnet-3.7 Claude-sonnet-4 GPT-4o-mini GPT-5 0 5 10 15 20 25 30 35 40 Score W/O search Search Figure 5: Without search capabilities (blue bars), all models achieve less than 1% accuracy on cross-database metabolite mapping. Enabling web search (orange bars) improves accuracy by 40-150×, though performance remains below 41% even for the best model (GPT-5), indicating that retrieval alone is insufficient to resolve the grounding bottleneck. Grounding failures may be because the task combines sparse signals, lossy tokenization, and a misaligned objective under moving targets and ambiguous nomenclature. Metabolite IDs are rare in pretraining corpora and often confined to supplementary tables, so models learn weak parametric associations. Subword tokenizers fracture strings such as HMDB0004148 into pieces (e.g., HMD, B000, 4148), undermining exact matching. Next-token training optimizes plausibility rather than exact resolution, yielding confident but incorrect IDs. Concurrent database updates in HMDB and KEGG further skew memorized mappings. Added to this, synonyms, tautomers, salts, stereochemistry, and organism or compartment context create many-tomany name-ID relations that text-only models cannot reliably disambiguate. Retrieval with schemaaware normalization targets these causes; scaling alone does not. 4.4 Long tail problem in metabolomics Metabolite databases such as HMDB exhibit information concentration: metabolites with lower HMDB IDs (typically discovered earlier) contain substantially more annotated attributes, while those 7 0 100 1000 10000 100000 HMDB ID 0 100 200 300 Attribute Density 0.2 0.4 0.6 0.8 Accuracy Figure 6: Long-tail distribution in metabolite knowledge. Attribute density (blue line) and model accuracy (green bars) both decline across HMDB ID ranges. with higher IDs show progressively sparser information (Xia and Sun, 2022). This pattern reflects the field's trajectory from well-studied central metabolites toward incompletely characterized peripheral compounds. To quantify how this heterogeneity affects model performance, we stratified metabolites by HMDB ID ranges and measured average attribute density: 100,000 (63.38). We sampled 200 MCQA questions per bin and evaluated multiple LLMs. Results from GPT-oss-20b are shown in Figure 6, where accuracy declines monotonically with decreasing density: 63.5% (127/200) for IDs 100,000, resulting in a 23.5 percentage gap. This long-tail effect reveals a fundamental challenge: models perform acceptably on well-studied metabolites dominating training corpora but fail disproportionately on the sparsely annotated compounds that constitute the metabolome's majority. Simply scaling training data cannot address the problem since the long tail reflects actual knowledge gaps in the field. Addressing this limitation may require active learning strategies prioritizing difficult cases, multi-modal models leveraging structural information when annotations are sparse, or uncertainty-aware systems that flag low-confidence predictions for long-tail metabolites (Wang et al., 2024; Kandpal et al., 2023). 5 Discussion Our evaluation of 25 LLMs reveals a striking capability heterogeneity that challenges assumptions about LLM readiness for metabolomics. These results indicate that current training paradigms, optimized for fluent text generation, produce models that excel at synthesizing coherent scientific narratives while struggling with precise factual retrieval and structured knowledge operations (Kalai et al., 2025). Most critically, the catastrophic failure on identifier Grounding (<1% without retrieval, 41% maximum with search API) represents a fundamental limitation rather than a data scarcity problem (more emphasis on the grounding tasks can refer to Appendix K). The grounding bottleneck has immediate practical implications: metabolomics applications requiring cross-database integration cannot rely on LLM capabilities alone and must implement specialized identifier resolution systems with schema-aware normalization and chemical structure reasoning. Beyond diagnosing current limitations, MetaBench establishes a framework for targeted model improvement and responsible deployment in metabolomics. Importantly, our benchmark dataset construction method provides a replicable pathway for creating domain-specific fine-tuning (DFT) datasets beyond evaluation. As new models emerge, whether general LLMs with improved scientific reasoning or domain-specific models pretrained on metabolomics corpora, MetaBench enables continuous, standardized assessment beyond aggregate scores that can obscure critical limitations. By providing both evaluation infrastructure and a concrete methodology for dataset construction, MetaBench supports the development of more capable and reliable AI systems for metabolomics research. 6 Conclusion We present MetaBench, the first comprehensive benchmark for evaluating LLMs on metabolomics. Through the evaluation of 25 LLMs across five capabilities, we reveal substantial performance variations across models and demonstrate that while model scaling improves reasoning, current models catastrophically fail on cross-database mapping and long-tail generalization, establishing that metabolomics requires precision and structured knowledge integration beyond current architectures and training corpus. By publicly releasing MetaBench, we provide essential infrastructure for developing scientifically-grounded models capable of supporting real-world metabolomics research and enabling systematic progress toward reliable AI-assisted discovery. 8 7 Limitation Since there is currently no publicly available metabolomics-specialized LLM; consequently, MetaBench evaluates general-purpose and broad biomedical models, quantifying cross-domain generalization and establishing a strong baseline for future in-domain systems. The benchmark centers on widely used resources (KEGG, HMDB, ChEBI, PathBank, MetaboLights), choices that favor clean, reproducible comparisons while deferring spectra/structure modalities and additional databases to later releases. We score classification with accuracy and generation with BERTScore to enable scale and consistency. Results are produced under a standardized, tool-free decoding setup, with retrieval analyzed separately, to isolate intrinsic model behavior. These constraints are deliberate and highlight the release's strengths: clarity, reproducibility, and extensibility, while charting a direct path to add specialized models in future versions. 8 Potential risks While MetaBench establishes evaluation infrastructure for metabolomics AI, we acknowledge considerations for responsible use. Our transparent reporting of performance disparities, such as the identifier grounding challenge and 23.5% variation across metabolite coverage, helps prevent premature deployment while guiding targeted improvements. The modular design enables continuous benchmark evolution as models advance, and our curation from established resources ensures tasks reflect real metabolomics workflows. By providing granular evaluation across five capabilities rather than single scores, MetaBench enables informed decisions about where LLMs can augment research and where human expertise remains essential, establishing not just current baselines but the measurement framework necessary for systematic advancement of AI in metabolomics. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. Gpt-4 technical report. arXiv preprint . Sandhini Agarwal, Lama Ahmad, Jason Ai, Sam Altman, Andy Applebaum, Edwin Arbus, Rahul K Arora, Yu Bai, Bowen Baker, Haiming Bao, and 1 others. 2025. gpt-oss-120b & gpt-oss-20b model card. arXiv preprint . Anthropic. 2024. Model card for claude 3. https://www-cdn.anthropic.com/ de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/ Model_Card_Claude_3.pdf. Anthropic. 2025. System card: Claude opus 4 & claude sonnet 4. https://www-cdn.anthropic.com/ 4263b940cabb546aa0e3283f35b686f4f3b2ff47. pdf. Madina Bekbergenova, Lucas Pradi, Benjamin Navet, Yousouf Taghzouti, Emma Tysinger, Jean-Luc Wolfender, Wout Bittremieux, Fabien Gandon, and Louis-Félix Nothias. 2025. Metabot: Ai-based agent for natural language-based interaction with metabolomics knowledge graphs. In ISMB/ECCB 2025. Olatomiwa O Bifarin, Varun S Yelluru, Aditya Simhadri, and Facundo M Fernández. 2025. A large language model-powered map of metabolomics research. Analytical Chemistry. Bioconductor. 2020. metaboliteidmapping: Annotation package with comprehensive mapping of metabolite ids. https://bioconductor.org/packages/ metaboliteIDmapping/. John Braisted and 1 others. 2023. metlinkr: Facilitating metaanalysis of human metabolomics data through automated linking of metabolite identifiers. Bioinformatics, 39(12):btad744. Hengxing Cai, Xiaochen Cai, Junhan Chang, Sihang Li, Lin Yao, Changxin Wang, Zhifeng Gao, Hongshuai Wang, Yongge Li, Mujie Lin, and 1 others. 2024. Sciassess: Benchmarking llm proficiency in scientific literature analysis. arXiv preprint . Tomas Cajka and Oliver Fiehn. 2024. A reproducibility crisis for clinical metabolomics studies. Trends in Analytical Chemistry, 168:117332. Kathi Canese and Sarah Weis. 2013. Pubmed: the bibliographic database. The NCBI handbook, 2(1):2013. Qingyu Chen, Yan Hu, Xueqing Peng, Qianqian Xie, Qiao Jin, Aidan Gilson, Maxwell B Singer, Xuguang Ai, Po-Ting Lai, Zhizheng Wang, and 1 others. 2025. Benchmarking large language models for biomedical natural language processing applications and recommendations. Nature communications, 16(1):3280. Elizabeth A Coler, Wanxuan Chen, Alexey V Melnik, James T Morton, and Alexander A Aksenov. 2024. Metabolomics in the era of artificial intelligence. Microbiota and Host, 2(1). Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, and 1 others. 2024. The llama 3 herd of models. arXiv e-prints, pages arXiv-2407. 9 Dmitry Grapov and 1 others. 2018. Consistency, inconsistency, and ambiguity of metabolite names in biochemical databases used for genome-scale metabolic modelling. Molecular BioSystems, 14(3):660-667. Carlos Guijas, J Rafael Montenegro-Burke, Benedikt Warth, Mary E Spilker, and Gary Siuzdak. 2018. Metabolomics activity screening for identifying metabolites that modulate phenotype. Nature biotechnology, 36(4):316-320. Yucheng Hu and Yuxing Lu. 2024. Rag and rau: A survey on retrieval-augmented language model in natural language processing. arXiv preprint . Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and 1 others. 2024. Gpt-4o system card. arXiv preprint . Zongcheng Ji, Qiang Wei, and Hua Xu. 2020. Bertbased ranking for biomedical entity normalization. AMIA Summits on Translational Science Proceedings, 2020:269. Jiyue Jiang, Pengan Chen, Jiuming Wang, Dongchen He, Ziqin Wei, Liang Hong, Licheng Zong, Sheng Wang, Qinze Yu, Zixian Ma, and 1 others. 2025a. Benchmarking large language models on multiple tasks in bioinformatics nlp with prompting. arXiv preprint . Zhuohang Jiang, Pangjing Wu, Ziran Liang, Peter Q Chen, Xu Yuan, Ye Jia, Jiancheng Tu, Chen Li, Peter HF Ng, and Qing Li. 2025b. Hibench: Benchmarking llms capability on hierarchical structure reasoning. In Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V. 2, pages 5505-5515. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. 2019. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint . Adam Tauman Kalai, Ofir Nachum, Santosh S Vempala, and Edwin Zhang. 2025. Why language models hallucinate. arXiv preprint . Nikhil Kandpal, Haikang Deng, Adam Roberts, Eric Wallace, and Colin Raffel. 2023. Large language models struggle to learn long-tail knowledge. In International conference on machine learning, pages 15696-15707. PMLR. Minoru Kanehisa, Miho Furumichi, Yoko Sato, Yuriko Matsuura, and Mari Ishiguro-Watanabe. 2025. Kegg: biological systems database as a model of the real world. Nucleic acids research, 53(D1):D672-D677. Christopher A Krettler, Maria Sorokina, Felina Kretschmer, Daniel H Pieper, and Christoph Steinbeck. 2024. Navigating common pitfalls in metabolite identification and metabolomics bioinformatics. Metabolomics, 20(5):104. Anastasia Krithara, Anastasios Nentidis, Konstantinos Bougiatiotis, and Georgios Paliouras. 2023. Bioasqqa: A manually curated corpus for biomedical question answering. Scientific Data, 10(1):170. Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and 1 others. 2024. Deepseek-v3 technical report. arXiv preprint . Yijiang Liu, Feifan Zhang, Yifei Ge, Qiao Liu, Siyu He, and Xiaotao Shen. 2025. Application of llms/transformer-based models for metabolite annotation in metabolomics. Health and Metabolism, pages 7-7. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint . Yuxing Lu. 2025. Knowledge graph and large language model for metabolomics. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 29281-29282. Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, and Ajmal Mian. 2025. A comprehensive overview of large language models. ACM Transactions on Intelligent Systems and Technology, 16(5):1-72. Rex Powers and 1 others. 2022. A checklist for reproducible computational analysis in clinical metabolomics research. Metabolites, 12(1):87. Maxx Richard Rahman, Ruoxuan Liu, and Wolfgang Maass. 2024. Incorporating metabolic information into llms for anomaly detection in clinical time-series. arXiv preprint . Yuanyi Ren, Haoran Ye, Hanjun Fang, Xin Zhang, and Guojie Song. 2024. Valuebench: Towards comprehensively evaluating value orientations and understanding of large language models. arXiv preprint . Ute Roessner and Jairus Bowne. 2009. What is metabolomics all about? Biotechniques, 46(5):363365. Neil Swainston, Kieran Smallbone, Pedro Mendes, Douglas Kell, and Norman Paton. 2014. Comparative evaluation of open source software for mapping between metabolite identifiers in metabolic network reconstructions: application to recon 2. Journal of Cheminformatics, 6(1):1-11. Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, and 1 others. 2025. Gemma 3 technical report. arXiv preprint . 10 Benyou Wang, Qianqian Xie, Jiahuan Pei, Zhihong Chen, Prayag Tiwari, Zhao Li, and Jie Fu. 2023. Pretrained language models in biomedical domain: A systematic survey. ACM Computing Surveys, 56(3):152. Pengkun Wang, Zhe Zhao, HaiBin Wen, Fanfu Wang, Binwu Wang, Qingfu Zhang, and Yang Wang. 2024. Llm-autoda: Large language model-driven automatic data augmentation for long-tailed problems. Advances in Neural Information Processing Systems, 37:64915-64941. Wolfram Weckwerth. 2003. Metabolomics in systems biology. Annual review of plant biology, 54(1):669689. Leigh Weston, Vahe Tshitoyan, John Dagdelen, Olga Kononova, Amalie Trewartha, Kristin A Persson, Gerbrand Ceder, and Anubhav Jain. 2019. Named entity recognition and normalization applied to largescale information extraction from the materials science literature. Journal of chemical information and modeling, 59(9):3692-3702. David S Wishart, AnChi Guo, Eponine Oler, Fei Wang, Afia Anjum, Harrison Peters, Raynard Dizon, Zinat Sayeeda, Siyang Tian, Brian L Lee, and 1 others. 2022. Hmdb 5.0: the human metabolome database for 2022. Nucleic acids research, 50(D1):D622D631. David S Wishart, Ray Kruger, Aadhavya Sivakumaran, Karxena Harford, Selena Sanford, Rahil Doshi, Nitya Khetarpal, Omolola Fatokun, Daphnee Doucet, Ashley Zubkowski, and 1 others. 2024. Pathbank 2.0-the pathway database for model organism metabolomics. Nucleic acids research, 52(D1):D654-D662. Gert Wohlgemuth, Pradeep Kumar Haldiya, Egon Willighagen, Tobias Kind, and Oliver Fiehn. 2010. The chemical translation service-a web-based tool to improve standardization of metabolomic reports. Bioinformatics, 26(20):2647-2648. Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. 2018. Moleculenet: a benchmark for molecular machine learning. Chemical science, 9(2):513-530. Yinglin Xia and Jun Sun. 2022. Statistical data analysis of microbiomes and metabolomics, volume 13. American Chemical Society. An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and 1 others. 2025. Qwen3 technical report. arXiv preprint . Ozgur Yurekten, Thomas Payne, Noemi Tejera, Felix Xavier Amaladoss, Callum Martin, Mark Williams, and Claire O'Donovan. 2024. Metabolights: open data repository for metabolomics. Nucleic acids research, 52(D1):D640-D646. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint . Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, and 1 others. 2023. A survey of large language models. arXiv preprint , 1(2). Xukai Zhao, He Huang, Tao Yang, Yuxing Lu, Lu Zhang, Ruoyu Wang, Zhengliang Liu, Tianyang Zhong, and Tianming Liu. 2025. Urban planning in the age of large language models: Assessing openai o1's performance and capabilities across 556 tasks. Computers, Environment and Urban Systems, 121:102332. 11 A Sources for MetaBench MetaBench integrates data from six specialized metabolomics resources, each serving distinct roles in the field. This section provides detailed descriptions of these sources and their contributions to our benchmark. A.1 Human Metabolome Database (HMDB) The Human Metabolome Database (HMDB) (Wishart et al., 2022) is the most comprehensive metabolomics database focused on human metabolism, cataloging over 220,000 metabolite entries including both endogenous metabolites and exogenous compounds. Each entry provides extensive biochemical information including molecular descriptors (SMILES, InChI), physicochemical properties, tissue and biofluid locations, disease associations, and literature references using standardized identifiers (HMDB#######). In MetaBench, HMDB serves as a primary source for the Knowledge MCQA task, providing ground truth data for 26 distinct metabolite attributes, and forms one node of the cross-database identifier mapping in the Grounding capability assessment. A.2 Kyoto Encyclopedia of Genes and Genomes (KEGG) The Kyoto Encyclopedia of Genes and Genomes (KEGG) (Kanehisa et al., 2025) integrates genomic, chemical, and systemic functional information, with KEGG COMPOUND containing over 18,000 metabolite entries organized within the context of biochemical pathways, reactions, and molecular interactions. Each compound receives a KEGG identifier (C#####) and is annotated with pathway memberships, enzyme associations (EC numbers), reaction participation, and orthology relationships. A.3 PathBank PathBank (Wishart et al., 2024) is a comprehensive visual database containing over 110,000 detailed pathway diagrams and expert-curated descriptions covering metabolic, signaling, drug action, and disease pathways across multiple organisms, with particular emphasis on human metabolism. Each entry contains a pathway name, comprehensive textual description ranging from concise summaries to extensive multi-paragraph explanations (148-6,452 tokens), interactive diagrams, and lists of participating metabolites and proteins. PathBank serves as the exclusive source for the Understanding capability assessment in MetaBench, where 1,264 pathway name-description pairs evaluate whether models can generate scientifically accurate, coherent descriptions of biochemical pathways, testing conceptual understanding beyond simple factual recall. A.4 MetabolitesIDMapping MetaboliteIDmapping is a specialized mapping resource providing comprehensive cross-references between metabolite identifiers across major databases, addressing the fundamental challenge of identifier heterogeneity in metabolomics, where the same metabolite may be referenced differently across resources. The mapping table contains over 130,000 verified equivalence relationships linking identifiers from KEGG, HMDB, ChEBI, PubChem, CAS Registry Numbers, and chemical names (IUPAC and common), enabling bidirectional translation between identifier systems. MetabolitesID forms the foundation of the Grounding capability assessment, where 1,000 sampled identifier pairs construct mapping tasks that directly evaluate a critical bottleneck in practical metabolomics workflows: integrating fragmented knowledge across heterogeneous resources. A.5 MetaKG MetaKG (Lu, 2025) is a large-scale metabolomics knowledge graph structuring information from multiple databases into a unified semantic network with over 2 million entities and 10 million relationships, representing the most comprehensive structured representation of metabolomics knowledge to date. The heterogeneous graph contains nodes representing metabolites, diseases, proteins, genes, tissues, and pathways, connected by typed relationships, with each triple derived from authoritative sources and validated for consistency. MetaKG serves as the source for the Reasoning capability assessment, where 1,200 triples spanning six relationship types are converted into natural language sentences, requiring models to parse unstructured text and extract structured (head, relationship, tail) triples, testing both natural language understanding and structured reasoning essential for knowledge extraction from scientific literature. A.6 MetaboLights MetaboLights (Yurekten et al., 2024) is the largest open-access metabolomics data repository hosted 12 by the European Bioinformatics Institute (EMBIEBI), containing over 7,000 studies with associated metadata, analytical methods, and raw data files that serve as a comprehensive archive for metabolomics experimental data. Each study (MTBLS####) includes structured submissions with study titles, detailed descriptions, experimental designs, sample information, analytical protocols (LC-MS, GC-MS, NMR), data processing workflows, and expert-written descriptions providing comprehensive information about research objectives, methodologies, key findings, and biological significance. MetaboLights provides the foundation for the Research capability assessment, where 2,125 study title-description pairs evaluate whether models can generate comprehensive, research-level documentation from minimal prompts, testing capabilities essential for advanced metabolomics research support including experimental design synthesis, analytical technique knowledge, and scientific interpretation. B MCQA templates To construct the Knowledge Graph QA dataset, we selected 26 different relations from MetaKG, the curated metabolomics knowledge graph. For each relation, we designed a specific question template in natural language. These templates convert structured triples (head, relation, tail) into interrogative forms that can be used for multiple-choice question answering. Table 3 lists the relation types and their associated templates. For example, the relation has_class is expressed as "What is the taxonomy class of {subject}?" and has_disease as "Which disease is associated with {subject}?". This template-based construction ensures consistency across questions while covering diverse relation types in metabolomics. C Knowledge MCQA examples The Knowledge capability is assessed through multiple-choice question answering (MCQA) tasks derived from HMDB and KEGG. Below are representative examples across different attribute types, illustrating the diversity of metabolomics knowledge tested in MetaBench. Questions span metabolite taxonomy, molecular properties, biological associations, pathway relationships, and tissue locations. Each question provides four options with one correct answer, where distractors are sampled from valid values of the same attribute to ensure domain relevance and task difficulty. Example 1: Tissue Location Question: Where is Acetylcholine located in tissue? Options: • A: Bone Marrow • B: Adrenal Cortex • C: Vitreous humor • D: Neuron Answer: D Example 2: Taxonomy Classification Question: What is the taxonomy class of 2-Hydroxyestrone? Options: • A: Aralkylamines • B: Estrane steroids • C: Catechols • D: Very long-chain fatty acids Answer: B Example 3: Database Identifier Question: What is the KEGG ID of Tetrahydrobiopterin? Options: • A: C00864 • B: C00234 • C: C01747 • D: C00272 Answer: D Example 4: Enzyme Association Question: Which enzyme is associated with Glycine? Options: • A: 2.6.1.94 13 Attribute Question Template has_class What is the taxonomy class of {subject}? has_name What is the name of {subject}? has_disease Which disease is associated with {subject}? has_substituent What is a substituent of {subject}? has_inchi What is the InChI of {subject}? has_disposition What is the disposition of {subject}? has_uniprot_id What is the UniProt ID of {subject}? has_synonym What is a synonym for {subject}? has_process What process is {subject} involved in? has_enzyme Which enzyme is associated with {subject}? related_to_protein Which protein is {subject} related to? has_smiles What is the SMILES notation of {subject}? has_cellular_location Where is {subject} located in the cell? average_molecular_weight What is the average molecular weight of {subject}? chemical_formula What is the chemical formula of {subject}? related_to_pathway Which KEGG pathway is {subject} related to? has_inchikey What is the InChIKey of {subject}? has_kegg_id What is the KEGG ID of {subject}? has_biospecimen_location Where is {subject} found as a biospecimen? has_pathway What is a KEGG pathway associated with {subject}? belongs_to_orthology To which KEGG orthology does {subject} belong? has_reaction What is a KEGG reaction involving {subject}? has_description What is the description of {subject}? belongs_to_network To which KEGG network does Gene {subject} belong? has_tissue_location Where is {subject} located in tissue? is_a_sub_class_of {subject} is a subclass of what? Table 3: Relation types and their corresponding question templates. • B: 2.1.1.156 • C: 1.1.1.228 • D: 4.2.3.87 Answer: B Example 5: Pathway Relationship Question: What is a pathway associated with C15976? Options: • A: SMP00497 • B: SMP00417 • C: SMP00384 • D: SMP00507 Answer: C D Description generation examples The Understanding capability is assessed through pathway description generation tasks derived from PathBank. Models receive pathway names as input and must generate comprehensive, scientifically accurate descriptions explaining the biochemical mechanisms, participating molecules, biological significance, and clinical relevance. Below are representative examples demonstrating the range of pathway types and description complexity in MetaBench, from metabolic pathways and disease mechanisms to drug action pathways and gene regulation systems. Example 1: Signaling Pathway Question: What is the definition of Succinate Signalling? Reference Description: Tricarboxylic acid (TCA) cycle intermediates can function as inflammatory signals. Succinate enhances 14 glycolysis in several ways. It inhibits prolyl hydroxylase domain (PHD) enzyme function, both directly by product inhibition and indirectly via reactive oxygen species (ROS), driving hypoxia-inducible factor-1α (HIF-1α) accumulation and increased glycolysis. ROS also inhibit mitochondrial function, boosting glycolysis as a result. Elevated HIF-1α enhances the expression of genes containing HIF response elements (HREs), including the interleukin 1β (IL1β) gene. Succinate can signal through succinate receptor 1 (SUCNR1) and act in synergy with Toll-like receptors (TLRs) to boost dendritic cell function. NAD+ exerts several anti-inflammatory effects by activating sirtuins, a class of NAD+-dependent deacetylases. Finally, inactivation of Sirt3 has been demonstrated to enhance NLR family, pyrin domain containing 3 (NLRP3) inflammasome activation. Example 2: Gene Regulation Question: What is the definition of Operon: Ribosomal Protein rps0? Reference Description: The ribosomal protein operon is a bicistronic operon consisting of rps0 and pnp. This operon is regulated by a Rho independent terminator, allowing for only rps0 to be transcribed if the terminator is formed. Example 3: Disease Pathway Question: What is the definition of Glycogenosis, Type IA. Von Gierke Disease? Reference Description: Glycogen storage disease type I (also known as GSDI or von Gierke disease) is an inherited disorder caused by the buildup of a complex sugar called glycogen in the body's cells. The accumulation of glycogen in certain organs and tissues, especially the liver, kidneys, and small intestines, impairs their ability to function normally. Researchers have described two types of GSDI, which differ in their signs and symptoms and genetic cause. These types are known as glycogen storage disease type Ia (GSDIa) and glycogen storage disease type Ib (GSDIb). Two other forms of GSDI have been described, and they were originally named types Ic and Id. However, these types are now known to be variations of GSDIb; for this reason, GSDIb is sometimes called GSD type I non-a. Mutations in two genes, G6PC and SLC37A4, cause GSDI. Example 4: Drug Action Pathway Question: What is the definition of Tetracycline Action Pathway? Reference Description: Tetracycline is a short-acting antibiotic that is semisynthetically produced from chlortetracycline, a compound derived from Streptomyces aureofaciens. Tetracycline enters bacterial cells by passively diffusing through membrane porin channels. Once inside the cell, tetracycline reversibly binds to the 30S subunit just above the binding site for aminoacyl tRNA. At its primary binding site, interactions with the sugar phosphate backbone of residues in helices 31 and 34 via hydrogen bonds with oxygen atoms and hydroxyl groups on the hydrophilic side of the tetracycline help anchor the drug in position. Salt bridge interactions between the backbone of 16S rRNA and tetracycline are mediated by a magnesium ion in the binding site. Tetracycline prevents incoming aminoacyl tRNA from binding to the A site on the ribosome-RNA complex via steric hindrance. This causes inhibition of protein synthesis and hence bacterial cell growth. Example 5: Pharmaceutical Mechanism Question: What is the definition of Sufentanil Action Pathway? Reference Description: Sufentanil is a pharmacologically-active synthetic small molecule derived from fentanyl and belongs to a class of drugs called opioids. Opioids are therapeutically employed to achieve analgesia. Sufentanil's rapid mechanism of action primarily involves its agonistic effects on mu-type opioid receptors which are inhibitory G-coupled protein receptors and lead to the inhibition of adenylate cy15 clase and decrease in cAMP production. It also inhibits nociceptive neurotransmitter release and induces membrane hyperpolarization. Analgesia, anesthesia, and respiratory depression are a consequence of remifentanial's action. E Grounding (Id mapping) examples The Grounding capability is assessed through crossdatabase identifier mapping tasks derived from MetabolitesID. This task evaluates whether models can accurately translate metabolite identifiers across heterogeneous database systems (KEGG, HMDB, ChEBI) and between structured identifiers and chemical names. Successful identifier grounding is fundamental to integrating fragmented metabolomics knowledge, yet represents a critical bottleneck for current LLMs. Below are representative examples demonstrating the diverse mapping types required in MetaBench. Example 1: HMDB to KEGG Mapping Question: What is the KEGG ID of HMDB ID HMDB0010090? Answer: C00626 Example 2: KEGG to HMDB Mapping Question: What is the HMDB ID of KEGG ID C07251? Answer: HMDB0014982 Example 3: Name to KEGG Mapping Question: What is the KEGG ID of metabolite Ponalactone A? Answer: C09174 Example 4: Name to HMDB Mapping Question: What is the HMDB ID of metabolite Ethylparaben? Answer: HMDB0032573 Example 5: Name to ChEBI Mapping Question: What is the ChEBI ID of metabolite Croconazole? Answer: 3920 Note: These examples illustrate the precision required for identifier grounding tasks. Models must produce exact matches to succeed, as approximate or similar identifiers are incorrect. This task represents the most challenging capability in MetaBench, with even the best-performing models (Claude-sonnet-4) achieving only 0.87% accuracy without retrieval augmentation, highlighting a fundamental limitation in current LLM architectures for structured database mapping tasks. F Prompt for reasoning benchmark To construct the Reasoning (Triple Extraction) benchmark, we generated natural language sentences from structured knowledge graph triples using DeepSeek-V3.1 (Liu et al., 2024). Each triple from MetaKG was transformed into a natural language sentence using the prompt template shown below. The prompt explicitly instructs the model to avoid template-like language and to create varied, natural expressions of biomedical facts, ensuring that the resulting benchmark requires genuine semantic parsing rather than pattern matching. Prompt for Reasoning Benchmark Given the following triple from a biomedical knowledge graph: Subject: head Predicate: rel Object: tail Write a natural language sentence that expresses this fact, but do so in a way that is different from a simple template. Be creative and vary the phrasing. Do not use the words 'subject', 'predicate', or 'object'. This generation approach ensures that the Reasoning task evaluates genuine natural language understanding rather than template recognition. The diversity in expression styles reflects the variability found in real scientific literature, making the benchmark more representative of practical knowledge extraction scenarios in metabolomics research. G Triple extraction examples The Reasoning capability is assessed through knowledge graph triple extraction tasks derived from MetaKG. Models receive natural language sentences describing metabolite relationships and must extract structured triples in the format (head, relationship, tail). This task evaluates the model's ability to parse unstructured scientific text and 16 perform structured reasoning, which is a critical skill for automated knowledge extraction from metabolomics literature. Below are representative examples demonstrating disease associations and metabolic disorder relationships. Example 1: Disease Association Text: Research has linked L-Methionine to the onset and manifestation of Epilepsy. Extracted Triple: • Head: L-Methionine • Relationship: has_disease • Tail: Epilepsy Example 2: Metabolic Disorder Text: Capric acid is implicated in the development of disorders related to metabolism and nutrition. Extracted Triple: • Head: Capric acid • Relationship: has_disease • Tail: Metabolism and nutrition disorders Example 3: Biomarker Relationship Text: Citrulline serves as a crucial biomarker for the diagnosis and assessment of intestinal failure. Extracted Triple: • Head: Citrulline • Relationship: has_disease • Tail: Intestinal failure Example 4: Deficiency Manifestation Text: A deficiency in Vitamin A can manifest as a range of afflictions impacting the nervous system. Extracted Triple: • Head: Vitamin A • Relationship: has_disease • Tail: Nervous system disorders Example 5: Contributing Factor Text: A deficiency in propionylcarnitine can be a contributing factor in the development of missing teeth. Extracted Triple: • Head: Propionylcarnitine • Relationship: has_disease • Tail: Missing teeth H Study description generation) examples The Research capability is assessed through study description generation tasks derived from MetaboLights. Models receive concise study titles as input and must generate comprehensive descriptions containing experimental methodologies, analytical techniques, key findings, and biological implications. This task represents the highest level of scientific complexity in MetaBench, testing the model's ability to synthesize knowledge about research design, technical protocols, and scientific interpretation. Below are representative examples demonstrating diverse research domains in metabolomics. Example 1: Software Evaluation Question: Write a study description for the study titled 'Comprehensive evaluation of untargeted metabolomics data processing software in feature detection, quantification and discriminating marker selection'. Reference Description (excerpt): Data analysis represents a key challenge for untargeted metabolomics studies and it commonly requires extensive processing of more than thousands of metabolite peaks included in raw high-resolution MS data. Although a number of software packages have been developed to facilitate untargeted data processing, they have not been comprehensively scrutinized in the capability of feature detection, quantification and marker selection using a well-defined benchmark sample set. In this study, we acquired a benchmark dataset from standard mixtures consisting of 1100 compounds with specified concentration ratios including 130 compounds with significant variation of concentrations. Five soft17 ware evaluated here (MS-Dial, MZmine 2, XCMS, MarkerView, and Compound Discoverer) showed similar performance in detection of true features... [continues] Example 2: Clinical Metabolomics Question: Write a study description for the study titled 'Altered metabolome and microbiome features provide clues in understanding irritable bowel syndrome and depression comorbidity'. Reference Description (excerpt): Irritable bowel syndrome (IBS) is one of the functional gastrointestinal disorders characterized by chronic and/or recurrent symptoms of abdominal pain and irregular defecation. Changed gut microbiota has been proposed to mediate IBS; however, contradictory results exist, and IBS-specific microbiota, metabolites, and their interactions remain poorly understood. To address this issue, we performed metabolomic and metagenomic profiling of stool and serum samples based on discovery (n = 330) and validation (n = 101) cohorts... [continues] Example 3: Multi-Omics Integration Question: Write a study description for the study titled 'Mechanisms of hepatic steatosis in chickens: integrated analysis of the host genome, molecular phenomes and gut microbiome'. Reference Description (excerpt): Hepatic steatosis is the initial manifestation of abnormal liver functions and often leads to liver diseases such as non-alcoholic fatty liver disease in humans and fatty liver syndrome in animals. In this study, we conducted a comprehensive analysis of a large chicken population consisting of 705 adult hens by combining host genome resequencing, liver transcriptome, proteome and metabolome analysis, as well as microbial 16S rRNA gene sequencing of each gut segment... [continues] Example 4: Developmental Metabolomics Question: Write a study description for the study titled 'Changes in the Milk Metabolome of the Giant Panda (Ailuropoda melanoleuca) with Time after Birth - Three Phases in Early Lactation and Progressive Individual Differences'. Reference Description (excerpt): Ursids (bears) in general, and giant pandas in particular, are highly altricial at birth. The components of bear milks and their changes with time may be uniquely adapted to nourish relatively immature neonates, protect them from pathogens, and support the maturation of neonatal digestive physiology. Serial milk samples collected from three giant pandas in early lactation were subjected to untargeted metabolite profiling and multivariate analysis... [continues] Example 5: Subcellular Imaging Question: Write a study description for the study titled 'Subcellular antibiotic visualization reveals a dynamic drug reservoir in infected macrophages'. Reference Description (excerpt): Tuberculosis, caused by the intracellular pathogen Mycobacterium tuberculosis, remains the world's deadliest infectious disease. Sterilizing chemotherapy requires at least 6 months of multidrug therapy. Difficulty visualizing the subcellular localization of antibiotics in infected host cells means that it is unclear whether antibiotics penetrate all mycobacteria-containing compartments in the cell. Here, we combined correlated light, electron, and ion microscopy to image the distribution of bedaquiline in infected human macrophages... [continues] I Compared LLMs We evaluate 25 state-of-the-art large language models spanning eight model families to ensure comprehensive coverage of current LLM capabilities in metabolomics. Our selection includes both open-source models, Qwen3 (1.7B-32B), Gemma3 (270M-27B), Llama (1B-70B), DeepSeek-v3.1 (685B MoE), and GPT-oss-20b (20B MoE)and 18 Table 4: Models evaluated in MetaBench. Parameter counts are from official sources where available; proprietary models do not disclose parameter counts. Model Params Architecture Release date Provider Open-source models Qwen3-1.7b 1.7B dense 2025-04-29 Alibaba Qwen Qwen3-4b 4B dense 2025-04-29 Alibaba Qwen Qwen3-8b 8B dense 2025-04-29 Alibaba Qwen Qwen3-14b 14B dense 2025-04-29 Alibaba Qwen Qwen3-32b 32B dense 2025-04-29 Alibaba Qwen Gemma-3-270m-it 270M dense 2025-03-12 Google Gemma-3-1b-it 1B dense 2025-03-12 Google Gemma-3-4b-it 4B dense 2025-03-12 Google Gemma-3-12b 12B dense 2025-03-12 Google Gemma-3-27b 27B dense 2025-03-12 Google Llama-3.2-1b 1B dense 2024-09-25 Meta Llama-3.2-3b 3B dense 2024-09-25 Meta Llama-3.1-8b 8B dense 2024-07-23 Meta Llama-3.1-70b 70B dense 2024-07-23 Meta DeepSeek-v3.1 685B (MoE)a MoE 2025-08-21 DeepSeek GPT-oss-20b 20B MoE 2025-08-21 OpenAI Closed-source models GPT-4o-mini Not disclosed multimodal 2024-07-18 OpenAI GPT-5-nano Not disclosed unified/thinking variants 2025-08-07 OpenAI GPT-5-mini Not disclosed unified/thinking variants 2025-08-07 OpenAI GPT-5 Not disclosed unified/thinking variants 2025-08-07 OpenAI Claude-haiku-3.5 Not disclosed hybrid family 2024-10-22 Anthropic Claude-sonnet-3.7 Not disclosed hybrid reasoning 2025-02-24 Anthropic Claude-sonnet-4 Not disclosed hybrid reasoning 2025-05-22 Anthropic Gemini-2.5-flash Not disclosed thinking model 2025-06-17 Google Gemini-2.5-pro Not disclosed thinking model 2025-03-25 Google a MoE = mixture-of-experts. Count refers to total parameters across experts, not active per token. closed-source systems from leading providers: OpenAI's GPT series (4o-mini through GPT-5), Anthropic's Claude family (haiku-3.5, sonnet-3.7, sonnet-4), and Google's Gemini-2.5 models (flash and pro). This diversity enables systematic analysis of how model scale, architecture (dense vs. mixture-of-experts), and training paradigms affect performance across the five metabolomics capabilities assessed in MetaBench. Table 4 summarizes the key specifications of all evaluated models, including parameter counts (where available), architectural types, release dates, and providers. This comprehensive model selection establishes MetaBench as a rigorous testbed for tracking progress in metabolomics-oriented language modeling across both academic and industrial research efforts. J System prompts In our evaluation framework, we designed five distinct system prompts, each tailored to a capability assessed in MetaBench. These prompts ensure that large language models receive clear, domainspecific instructions aligned with metabolomics tasks. • MCQA (Multiple-Choice Question Answering): Tests factual knowledge of metabolites, pathways, enzymes, and biological systems. • Triple Extraction: Assesses reasoning ability by extracting (head, relationship, tail) triples from natural language. • Study Description: Evaluates scientific writing and comprehension by generating detailed study descriptions from research titles. • QA (Open-ended Question Answering): Probes explanatory ability on pathways, mechanisms, and processes. • Metabolite Mapping: Measures precision in identifier mapping across metabolomics databases like KEGG, HMDB and ChEBI. 19 The following parts contain the exact system prompts used for each evaluation task. System Prompt for MCQA Task: Answer multiple choice questions about metabolite properties, pathways, enzymes, and biological processes. Instructions: - You will be given a question and four options (A, B, C, D). - Choose the correct answer based on your knowledge of metabolomics, biochemistry, and biological systems. - Respond with only the letter (A, B, C, or D). Coverage: - Metabolite taxonomy and classification - Enzyme associations and EC numbers - KEGG pathway and network associations - Molecular properties (weight, structure) - Tissue locations and biological functions - Protein and gene relationships Examples: Q: What is the taxonomy class of Prostaglandin E1? Options: {A: Prostaglandins and related compounds, B: Phenylpyrazoles, C: Furanones, D: Azolines} Answer: A ------------------ Q: What is the taxonomy class of PC(18:1(9Z)/18:0)? Options: {A: Alanine and derivatives, B: Glycerophospholipids, C: Anisoles, D: Isoleucine and derivatives} Answer: B ------------------ Q: Where is Acetylcholine located in tissue? Options: {A: Bone Marrow, B: Adrenal Cortex, C: Vitreous humor, D: Neuron} Answer: D System Prompt for Triple Extraction Task: Extract knowledge graph triples from natural language text about metabolites and their biological properties. Format: Head: [metabolite/entity] Relationship: [relationship type] Tail: [disease/condition/property/location] Allowed relationships: - has\_disease : metabolite associated with diseases/conditions - has\_disposition : metabolite location or cellular disposition - has\_smiles : molecular structure in SMILES notation - has\_synonym : alternative names - has\_class : biochemical classification - has\_tissue_location : tissue or organ presence Examples: Text: Within the human body, the blood's platelets are a known reservoir for the compound inosine. Head: Inosine Relationship: has_tissue_location Tail: Platelet ------------------ Text: Dolichyl phosphate D-mannose is categorized among the broader class of carbohydrates and carbohydrate conjugates. Head: Dolichyl phosphate D-mannose Relationship: has_class Tail: Carbohydrates and carbohydrate conjugates ------------------ Text: The molecular structure of Isobutyryl-L-carnitine is uniquely defined by the SMILES string CC(C)C(=O)O[C@H](CC(O)=O)... Head: Isobutyryl-L-carnitine Relationship: has_smiles Tail: CC(C)C(=O)O[C@H](CC(O)=O)... System Prompt for Study Description Task: Write a comprehensive study description for a given metabolomics research title. Instructions: Include the following: 1. Study purpose and objectives 2. Methodology and experimental design (LC-MS, GC-MS, NMR) 3. Key findings and results 4. Clinical or scientific significance 5. Sample characteristics and analytical methods 20 6. Metabolic pathways and biological processes involved Coverage: - Untargeted and targeted metabolomics - Sample preparation and analytical techniques - Data processing and statistical analysis - Biological interpretation and pathway analysis - Clinical applications and biomarker discovery Example: Title: "Dissolved organic carbon compounds in deep-sea hydrothermal vent fluids from the East Pacific Rise at ... Description: "... detailed description ..." System Prompt for QA Task: Answer questions about biochemical definitions, metabolic pathways, and biological processes. Instructions: Provide comprehensive, accurate answers including: 1. Definition of the pathway, process, or concept 2. Key enzymes, reactions, and components 3. Biological significance and function 4. Clinical or pathological relevance (if any) 5. Related pathways or mechanisms Coverage: - Metabolic pathways (TCA cycle, glycolysis, etc.) - Biochemical processes and mechanisms - Disease definitions and mechanisms - Drug action pathways - Operon regulation - Molecular signaling Examples: Q: What is the definition of Succinate Signalling? A: "... detailed definition ..." ------------------ Q: What is the definition of Tetracycline Action Pathway? A: "... detailed definition ..." System Prompt for Metabolite Mapping Task: Answer metabolite identifier mapping questions between databases and naming conventions. Instructions: - Provide only the identifier value as the answer. - No extra explanation or formatting. Supported identifiers: - CAS Registry Numbers - PubChem CIDs - Chemical names (IUPAC, common) - HMDB IDs - ChEBI IDs - KEGG IDs Examples: Q: What is the KEGG ID of metabolite Tamibarotene? A: C12864 ------------------ Q: What is the HMDB ID of metabolite TG(15:0/i-20:0/a-17:0)[rac]? A: HMDB0102782 ------------------ Q: What is the ChEBI ID of Benzilic acid? A: 39414 ------------------ Q: What is the KEGG ID of 2-Iodophenol? A: C01874 K The Critical Role of Metabolite Identifier Grounding in Metabolomics Metabolite identifier groundingt represents a fundamental bottleneck in metabolomics research (Swainston et al., 2014). A single metabolite may be referenced by multiple identifiers: KEGG uses compound IDs (e.g., C00031 for DGlucose), HMDB employs alphanumeric codes (e.g., HMDB0000122), ChEBI assigns numerical IDs (e.g., 17234), and scientific literature uses IUPAC names, common names, or synonyms, with PubChem listing nearly 250 synonyms for L-Tryptophan alone (Krettler et al., 2024). This heterogeneity arose because different databases emerged from distinct research communities: KEGG emphasizes pathway context, HMDB focuses on human metabolites with clinical relevance, ChEBI provides chemical ontol21 ogy, and PubChem offers comprehensive chemical data (Wishart et al., 2022; Kanehisa et al., 2025). Each adopted identifier schemes suited to their organization, predating standardization efforts. Accurate identifier grounding is essential throughout the research pipeline. Cross-study meta-analyses depend on harmonizing identifiers across laboratories and platforms (Braisted et al., 2023). Database construction efforts like MetaKG require linking millions of facts from heterogeneous sources to unified metabolite entities (Lu, 2025). Identifier mapping errors cascade through research quality: incorrect links lead to spurious discoveries, failed recognition causes missed connections, and inconsistent usage prevents reproducible science (Cajka and Fiehn, 2024; Powers et al., 2022). Database inconsistency compounds these challenges. Analysis of 11 biochemical databases revealed that while HMDB maintains high consistency (only 1.7% of names linked to multiple IDs), ChEBI and KEGG show 14.8% and 13.3% ambiguity respectively (Grapov et al., 2018). Interdatabase mapping using metabolite names shows inconsistencies ranging from 0% to 81.2%, with similar results (0-83%) when mapping via reference identifiers (Grapov et al., 2018). Current solutions include manually curated mapping tables (e.g., MetabolitesID (Bioconductor, 2020)), webbased conversion tools like the Chemical Translation Service (Wohlgemuth et al., 2010), and APIs, but these face persistent challenges: no single resource provides comprehensive coverage, databases update frequently requiring continuous curation, automated tools struggle with isomers and context-dependent synonyms, and different tools employ incompatible interfaces (Swainston et al., 2014; Krettler et al., 2024). Large Language Models promise flexible, context-aware resolution leveraging both structured databases and unstructured literature. However, MetaBench demonstrates that current models exhibit catastrophic failure without retrieval augmentation, achieving less than 1% accuracy even for frontier architectures. This finding establishes identifier grounding as a critical bottleneck impeding the computational transformation of metabolomics toward integrated, AI-assisted discovery systems. 22
2510.14937
AI-Powered Early Diagnosis of Mental Health Disorders from Real-World Clinical Conversations Jianfeng Zhu1, Julina Maharjan1, Xinyu Li1, Karin G. Coifman2, Ruoming Jin1 1 Department of Computer Science, Kent State University 2 Department of Psychological Sciences, Kent State University {jzhu6, jmaharja, xli5, rjin}@kent.edu, kcoifman@kent.edu Abstract Mental health disorders remain among the leading cause of disability worldwide, yet conditions such as depression, anxiety, and Post-Traumatic Stress Disorder (PTSD) are frequently underdiagnosed or misdiagnosed due to subjec- tive assessments, limited clinical resources, and stigma and low awareness. In primary care settings, studies show that providers misidentify depression or anxiety in over 60% of cases, highlighting the urgent need for scalable, accessible, and context-aware diagnostic tools that can support early de- tection and intervention. In this study, we evaluate the ef- fectiveness of machine learning models for mental health screening using a unique dataset of 553 real-world, semi- structured interviews, each paried with ground-truth diag- noses for major depressive episodes (MDE), anxiety disor- ders, and PTSD. We benchmark multiple model classes, in- cluding zero-shot prompting with GPT-4.1 Mini and Meta- LLaMA, as well as fine-tuned RoBERTa models using Low- Rank Adaptation (LoRA). Our models achieve over 80% ac- curacy across diagnostic categories, with especially strong performance on PTSD (up to 89% accuracy and 98% re- call). We also find that using shorter context, focused con- text segments improves recall, suggesting that focused nar- rative cues enhance detection sensitivity. LoRA fine-tuning proves both efficient and effective, with lower-rank config- urations (e.g., rank 8 and 16) maintaining competitive per- formance across evaluation metrics. Our results demonstrate that LLM-based models can offer substantial improvements over traditional self-report screening tools, providing a path toward low-barrier, AI-powerd early diagnosis. This work lays the groundwork for integrating machine learning into real-world clinical workflows, particularly in low-resource or high-stigma environments where access to timely mental health care is most limited. Code — https://anonymous.4open.science/r/ AAAI2026 Depression1-E152/ 1. Introduction Mental health disorders account for four of the ten leading cause of disability worldwide (Alhamed, Ive, and Specia 2024; Ben-Zion et al. 2025; Organization 2001). Individu- als affected by these conditions experience disproportionally high rates of adverse health behaviors, including tobacco smoking, substance use, physical inactivity, and poor diet, which contribute to elevated risks of chronic medical illness and premature mortality (Goodell et al. 2011; Laursen, Nor- dentoft, and Mortensen 2014). Despite their global burden, mental health disorder remain underdiagnosed, undertreated and stigmatized, particularly in resource constrained or high barrier settings (Organization 2001). A major contributor to this treatment gap lies in the lim- itations of traditional diagnostic tools. Mental health as- sessments rely on subjective measurements, such as struc- tured interviews and self-reports questionnaires (Tennyson, Kemp, and Rao 2016). While these instruments are cost- effective and scalable, they suffer from social desirability bias, inaccurate recall, and limited interoperability of symp- toms, particularly in cognitively impaired or stigmatized populations (Haberer, Trabin, and Klinkman 2013). More- over, timely and accurate diagnosis is complicated by the high comorbidity of condition such as depression, anxi- ety and post-traumatic stress disorder (PTSD), which often present overlapping symptoms and are difficult to disentan- gle in practice. This diagnostic ambiguity, compounded by limited clinical resources, contributes to widespread misdi- agnosis and under-recognition in both high- and low-income settings (Aux´em´ery 2018). These challenges underscore the urgent need for scal- able, proactive, and context aware screening tools that can identify at-risk individuals earlier and more accurately (Ten- nyson, Kemp, and Rao 2016). In response to these demands, traditional machine learning methods such as SVM, random forest, etc (Saidi, Othman, and Saoud 2020; Cacheda et al. 2019; Islam et al. 2018) have long been extensively studied and applied to detect various mental health conditions by leveraging structured inputs and hand-crafted features de- rived from linguistic, behavioral, or physiological signals. While these approaches have shown promise, they often re- sult in suboptimal performance due to the limited expres- siveness (Van Der Donckt et al. 2023). Subsequent advances in deep learning enabled end-to-end models that learn rep- resentations directly from raw text or speech data, offer- ing improved performance and flexibility (Su et al. 2020). However, these models still typically require large labeled datasets, suffer from poor interpretability, and remain sen- sitive to domain shifts (Ramasamy Ramamurthy and Roy 2018). Recent advances in Large Language Models (LLMs) offer a compelling new direction. By analyzing naturalis- tic user input, such as interview transcripts and spontaneous arXiv:2510.14937v1 [cs.CL] 16 Oct 2025 conversations, LLMs can potentially infer early signs of depression, anxiety, and other mental health disorders (Xu et al. 2024). These models require no clinical infrastructure and can be deployed in low touch, user facing environments, enabling broad access and repeated monitoring. This study takes a step in that direction by asking a core question: Can we harness the broad prior knowledge and strong text com- prehension skills of large language models to detect early signs of mental disorders from naturalistic conversations, without relying on expert-crafted features or clinical anno- tations? Figure 1: Overview of dataset, LLM adaptation methods, and prediction targets. Unlike prior work which based on self-reported surveys or social media data on single mental health disoders (Bucur 2024; Zhu et al. 2024; Sarabadani et al. 2025; Bartal et al. 2024). We leverage a unique dataset of real-world, semi- structured psychiatric interviews with ground-truth clinical diagnoses assessed within the same time window. This set- ting more closely reflects the nuance and variability of real clinical conversations. We formalize the task as multi-label text classification, where given an interview transcript, the model must predict the presence of (1) major depressive episodes (MDE), (2) PTSD, and (3) anxiety disorders. We evaluate both encoder-based and decoder-based lan- guage models, with optional enhancement via parameter- efficient fine-tuning (PEFT) adapters such as LoRA. Decoder-based models (e.g., GPT-4.1, Meta-LLaMA) are assessed in zero-shot settings, while encoder-based models are optionally fine-tuned using embedding-based classifiers with or without LoRA augmentation. This design allows for a rigorous, side-by-side comparison of general-purpose versus customized diagnostic strategies under realistic data and deployment constraints.An overview of our modeling pipeline is illustrated in Figure 1. 2. Related Work Prior work in mental health assessment has largely relied on standardized self-report instruments designed to eval- uate specific psychiatric conditions. For instance, the Pa- tient Health Questionnaire (PHQ-9) is a widely used self- report scale for assessing major depressive disorder, based on DSM-IV criteria (Kroenke, Spitzer, and Williams 2001). Similarly, the GAD-7 is a brief, clinically validated mea- sure for evaluating the severity of generalized anxiety disor- der (Spitzer et al. 2006), and the PTSD Checklist for DSM- 5 (PCL-5) captures core symptoms of post-traumatic stress disorder across four diagnostic clusters, in alignment with DSM-5 standards (Blevins et al. 2015). While these instruments are cost-effective and easy to ad- minister, they suffer from several important limitations. First, each instrument typically targets only one disorder per administration, despite increasing evidence that conditions such as depression, anxiety, and PTSD frequently co-occur in clinical populations (Lai et al. 2019; Hawkins 2009; De- Vylder, Burnette, and Yang 2014). Second, because these tools rely entirely on self-reported data, they are vulnera- ble to social desirability bias, recall inaccuracies, and lim- ited self-awareness, especially among vulnerable or cogni- tively impaired individuals. Finally, in resource-limited set- tings or during health emergencies (e.g., pandemics), access to trained professionals or structured screening processes may be severely constrained, leaving many individuals un- diagnosed or misdiagnosed (Kumar et al. 2025; Alhalaseh et al. 2021). For example, a synthesis of 157 studies found that only one in three individuals with mild depression is correctly identified in primary care (Mitchell, Rao, and Vaze 2011), and a meta-analysis of 41 high-quality studies, en- compassing over 50,000 patients, found that general practi- tioners correctly identified depression in just 47.3% of cases. Notably, these studies also found that false positives often outnumbered true positives, and a substantial proportion of cases were entirely missed (Mitchell, Vaze, and Rao 2009). In response to the limitations of self-report instruments, re- cent research has explored machine learning (ML) and natu- ral language processing (NLP) approaches to automatically detect mental health conditions from diverse data sources, including text inputs, questionnaires, and social data (Wshah et al. 2019; Le Glaz et al. 2021; Chiong et al. 2021; Mahar- jan et al. 2025). For instance, Priya et al. study applied ML algorithms to classify individuals across five severity levels of anxiety, depression, and stress, based on questionnaire re- sponses. While Na¨ıve Bayes achieved the highest accuracy, Random Forest was identified as the overall best-performing model (Priya, Garg, and Tigga 2020; Nemesure et al. 2021). Other studies have demonstrated the feasibility of using ML methods to identify PTSD, leveraging structured as- sessments, language-based emotion regulation features, and treatment selection models (Christ et al. 2021; Held et al. 2022; Sawalha et al. 2022; Vanlalawmpuia and Lalhmin- gliana 2020). Despite these promising directions, the imple- mentation of digital mental health solutions still faces sub- stantial challenges—particularly in terms of evaluation rigor and practical effectiveness (Balcombe and De Leo 2021). Unlike traditional machine learning methods that often re- quire large amounts of task-specific training data and man- ually engineered features, large language models (LLMs) operate under a fundamentally different paradigm. Powered by transformer architectures and the self-attention mecha- nism (Vaswani et al. 2017), LLMs are pretrained on massive text corpora and can perform zero-shot or few-shot infer- ence based solely on textual prompts(Wang, Pang, and Lin 2023; Hasan et al. 2024). This shift enables LLMs to un- derstand rich contextual cues directly from free-form natu- ral language—without the need for structured input formats, disorder-specific questionnaires, or annotated training data (Zhu et al. 2024; Srivastava 2024; Zhu et al. 2025). Several recent studies have demonstrated the promise of LLMs in clinical mental health applications. For example, GPT-4 was shown to infer social anxiety symptom severity from semi-structured clinical interviews, achieving a corre- lation of r = 0.79 with validated self-report measures (Ohse et al. 2024). In the context of depression diagnosis, MDD- LLM (70B) achieved an accuracy of 0.8378 and AUROC of 0.8919, significantly outperforming conventional machine and deep learning approaches (Sha et al. 2025). Another study on PTSD detection used few-shot prompting with DSM-5 criteria to achieve AUROC = 0.737, with perfor- mance varying by symptom severity and comorbid depres- sion status (Chen et al. 2025). These findings underscore the capacity of LLMs to deliver clinically meaningful in- sights from minimally structured data making them espe- cially valuable for scalable mental health screening in low- resource or underserved contexts. Building on this progress, our work makes several key con- tributions. We evaluate both general-purpose LLMs (e.g., GPT-4, Meta-LLaMA) and parameter-efficient fine-tuned models (e.g., RoBERTa with LoRA) on a unique dataset of 555 real-world psychiatric interviews. 3. Datasets We utilize a novel dataset comprising 555 U.S. adults, collected across multiple behavioral research studies inves- tigating individual responses to transitional or adverse life events, such as occupational stress, chronic illness, or trau- matic exposure. All participants provided written informed consent under IRB-approved protocols. Table 1 summarizes the participant demographics. The dataset includes 553 individuals, with a balanced gender distribution (278 male, 275 female), and a majority identifying as White (431). Age is broadly distributed across five brackets, and most participants report some college or higher education. This diversity supports robust downstream analysis. During these interviews, all participants answered the same five questions in a predetermined sequence. The first question required participants to describe their activities from waking to sleep on the previous day. The subsequent questions explored their personal experiences with a recent challenging event or situation, their coping strategies for that challenge, an unrelated recent unpleasant event, and, finally, a recent positive experience. Interviewers were trained to provide standardized prompts, and participants were encouraged to speak spon- taneously for up to three minutes in response to each ques- tion. Each interview session resulted in approximately 15 minutes of recorded speech per participant. The speech was then recorded and transcribed according to convention and recommendations (Coifman et al. 2007; Coifman and Bo- nanno 2010; Coifman, Flynn, and Pinto 2016; Harvey et al. Category Subgroup Count Age <25 123 25–34 128 35–44 96 45–59 132 60+ 74 Sex Male 278 Female 275 Race White 431 Other 122 Education High school or below 98 Some college 293 College or above 134 Unknown 28 Table 1: Participant Demographics Summary 2014). Each participant’s text responses were paired with their corresponding depressive symptoms, derived from Structured Clinical Interview for DSM (SCID) reports . Demographic information, including age and sex, was collected across studies and harmonized into a unified format. The mean participant age was 39.36 years (SD = 16.0). The sample was approximately gender-balanced: 278 participants identified as female, 277 as male, and one participant did not report their sex. The average length of the full interviews was approximately 2,955 words (SD = 1,855). This setup allows for a fine-grained analysis of how narrative depth influences psychiatric signal extraction across model types. For foun- dation models constrained by input length limitations (e.g., Meta-LLaMA-3-8B), we adopt a chunk-based inference strategy, whereby each user transcript is segmented into overlapping chunks of 512, 1024, or 2048 tokens. Model predictions are computed for each chunk independently and averaged to obtain a user-level binary decision. This design enables scalable prediction while preserving diagnostic signal across extended narrative contexts. To illustrate the richness of the narrative content, Table 2 provides representative excerpts from a single participant across the five prompt domains. These responses reflect the emotional nuance and thematic complexity of the dataset, presenting a realistic and ecologically valid benchmark for naturalistic mental health inference. 4. Methodology In this section, we systematically explain the variaous meth- ods to evaluate the abilities of modern AI models in mental health disorder domain. Prompt Response Excerpt Daily Activities “Beginning of the day, uh I have two sons, one is [xxxx]... played outside... gym...” Difficult Experience “Being a firefighter... challenging and amazing expe- rience... bad experiences too...” Emotion Regulation “You talk to people you trust at work... My wife and I have been married...” Negative Event “First big incident... stressful leading into it...” Positive Event “First baby we delivered—early morning call... heroin case...” Table 2: Example Interview Responses (Participant ID: 001) Method 1: Multi-Disorder Inference via Direct Prompting First, we investigate whether modern foundation models can infer multiple psychiatric disorders directly from raw in- terview text in a zero-shot setting. Specifically, we prompt large language models (LLMs) to identify signs of psychi- atric conditions, including depression, PTSD, and anxiety without any task-specific training or fine-tuning. We exam- ine two state-of-the-art LLM models: • GPT-4.1 Mini: Deployed in a zero-shot configuration using custom-designed prompts tailed to each disorder. This setup serves as a baseline for scalable mental health screening without the need for retraining. • Meta-LLaMA-3-8B-Instruct: Evaluated under zero- shot prompting. To alleviate the complexity introduced by long-range dependencies, interview transcripts were further segmented into small chunks of size 512, 1024, or 2048 with a fixed overlapping rate, each chunked tran- script was then reformulated into a prompt and subse- quently passed to the model for binary classification. Fi- nal user-level predictions were derived by averaging pre- diction scores across all chunks. We employ the following standardized prompt for all mod- els and disorders, adhering to a binary prediction format to ensure clinical interpretability: Prompt: Psychiatric Disorder Inference (LLM- Based) You are an AI assisting mental health profession- als in identifying psychiatric disorders. Input Data: "{response text}" — a free-form participant response from a semi-structured interview. Task: Analyze the participant’s response and determine whether there are signs of a psychiatric disorder. Do not include any reasoning or explanation. Output Format: Respond with a single-line binary decision: Prediction: Yes Prediction: No Method 2: Decoder-based Model as Backbone with PEFT Enhancement To facilitate domain adaptation to mental health tasks, we adopt a parameter-efficient fine-tuning (PEFT) strategy. In particular, we apply Low-Rank Adaptation (LoRA) to align a pretrained transformer-based language model with disorder-specific semantics in a computationally efficient manner. We fine-tune Meta-LLaMA-3-8B-Instruct separately for each psychiatric condition (depression, PTSD, and anxiety) using binary supervision. This enables efficient adaptation with minimal additional parameters while pre- serving general linguistic knowledge. Method 3: Encoder-based Model as Backbone We also evaluate the diagnostic ability of encoder-based language model. Typically, we choose the widely used generic encoder model RoBERTa-base (Liu et al. 2019) and all-roberta-large-v1 through the Sentence- Transformers library (Reimers and Gurevych 2019) as our backbone. Adaptation to Long-Form Input Unlike decoder-based language models, which typically support large context windows, encoder-based models such as the BERT fam- ily are constrained by relatively short input lengths. To enable processing of long-form text, we adopt a 2-step chunking and aggregation strategy that first segments in- put sequences into manageable chunks, and then aggregates their representations to construct a comprehensive embed- ding for downstream classification. Step1: Chunking: Given an user’s transcript x, consist- ing of a long sequence of tokens, we split it into overlapping chunks xi of size c with a fixed overlap ratio. Each chunk is then independently encoded via a PEFT equipped RoBERTa encoder: hi = RoBERTa(xi) (1) We extract the [CLS] token from each chunk embed- ding, resulting in a chunk-level representation matrix H = [h[CLS] 1 , . . . , h[CLS] T ] ∈RT ×d, where T is the number of chunks and d is the hidden size. Step 2: Aggregation: Once the chunk-level transcript representation is obtained, we aggregate the high dimen- sional matrix H into a single vector h as the final user rep- resentation. In particular, we opted for two different aggre- gation strategies: • Mean pooling: h = 1 T PT i=1 h[CLS] i • Max pooling: h = maxT i=1 h[CLS] i Raw encoder-based embedding As a complementary baseline, we leverage the embeddings by feeding user tran- scripts to raw encoder model, including RoBERTa-base and all-roberta-large-v1. These embedding representations are then passed to lightweight classifiers—logistic regression, multilayer perceptrons (MLP), and XGBoost—to predict bi- nary disorder labels. PEFT-enhanced embedding We also leveraged PEFT- method, such as LoRA to fine tune the pretrained encoder, attempting to align the encoder language model with the mental health domain. To enable end-to-end prediction, we feed the learned text embeddings into a simple classification module designed for depression detection. This module—such as a lightweight MLP—maps the semantic representations to task-specific labels. We also apply layer normalization to stabilize train- ing and include L2 regularization on classifier weights to reduce overfitting.. 5. Experiments and Results We present the experimental detail and report the corre- sponding results in this section. 5.1 Experimental configuration To mitigate output bias and performance degradation caused by label imbalance, we apply upsampling when train- ing decoder-based models, ensuring balanced supervision across classes. When PEFT module is equipped (method 2 and method 3), we experiment with varying LoRA ranks (8, 16, 32) to assess parameter-efficiency and generalization. The dataset is split by user into 80% for training and 20% for testing. All models are trained using the AdamW optimizer with a batch size of 8 and a learning rate of 2 × 10−5. Ex- periments are conducted on a Linux server equipped with a single NVIDIA A100 GPU. 5.2 Evaluation Metrics To assess model performance on mental health diagnosis tasks, we report three core evaluation metrics: accuracy, recall, and F1 score (Tran and Kavuluru 2017). Accuracy captures the overall proportion of correct predictions and provides a general indication of model reliability across all classes. However, given the clinical relevance of early iden- tification, we include recall, the proportion of true positive cases correctly identified, as a primary metric of interest. High recall is critical in mental health contexts, where false negatives may result in delayed or missed intervention.The F1 score, defined as the harmonic mean of precision and re- call, offers a more balanced view of performance under class imbalance and helps quantify trade-offs between over- and under-diagnosis. All metrics are computed individually for each target condition—major depressive episodes (MDE), PTSD, and anxiety disorders—on a held-out user set, ensuring con- sistency and fairness across methods. Together, these met- rics allow for a clinically meaningful and statistically ro- bust evaluation of diagnostic prediction quality. We evaluate all three methods, zero-shot prompting with foundation lan- guage models, and embedding-based classifier on interview data acrosee three target mental health disorders. Each Each approach is assessed using accuracy, recall, and F1 score, with a particular emphasis on recall due to its clinical rele- vance in minimizing missed diagnoses. 5.3 Overall Evaluation Results Table 3 presents the performance of both decoder-based (Method 1&2) and encoder-based models (Method 3) across three mental health condition diagnosis: depression, PTSD, and anxiety. Among decoder-based methods, GPT-4.1 Mini achieves the highest accuracy overall, but suffers from low recall, indicating limited sensitivity to positive cases. In contrast, LLaMA-3-8B-Instruct exhibits ex- tremely high recall but poor accuracy and F1, suggesting over-prediction of the positive class under severe label im- balance. The use of Chain-of-Thought (CoT) prompting or LoRA fine-tuning moderately improves the balance between precision and recall. Encoder-based models, particularly those enhanced with LoRA and MLP heads, demonstrate more stable and bal- anced performance across all tasks. Notably, RoBERTa + LoRA + MLP achieves the highest F1 scores in PTSD and anxiety detection, indicating its effectiveness in domain- specific adaptation with minimal parameter overhead. Overall, encoder-based approaches with PEFT outperform decoder-based generation methods in terms of classification robustness under label imbalance. 6. Result Analysis and Ablation Studies 6.1 Multi-Disorder Inference via Direct Prompting We evaluated the zero-shot diagnostic performance of the Meta- LLaMA-3-8B-Instruct model across varying chunk sizes (512, 1024, 2048 tokens) for three mental health conditions: depression, PTSD, and anxiety. The results, summarized in Tables 4 reveal a consistent pattern: the model demonstrates high recall (clinical sensitivity) across all tasks, with val- ues typically above 0.90, but suffers from low F1 scores and overall accuracy, both of which remain below 0.45. Model Depression PTSD Anxiety Recall F1 Accuracy Recall F1 Accuracy Recall F1 Accuracy LLaMA-3 512 tokens 0.950 0.247 0.163 0.980 0.373 0.251 0.973 0.422 0.286 LLaMA-3 1024 tokens 0.938 0.259 0.224 0.960 0.385 0.306 0.946 0.433 0.336 LLaMA-3 2048 tokens 0.850 0.260 0.300 0.856 0.377 0.360 0.797 0.399 0.358 Table 4: Zero-shot performance of Meta-LLaMA-3-8B- Instruct across chunk sizes for three mental health disor- ders. For depression, the model achieved its highest recall of 0.950 using 512-token input, capturing nearly all true posi- tive cases. However, F1 score and accuracy improved mod- estly with longer context windows, peaking at 0.260 and 0.300, respectively, under the 2048-token settingstill well below optimal thresholds. For PTSD, the recall again peaked at 0.980 with 512-token input. The best F1 score (0.385) was observed at 1024 to- kens, while the highest accuracy (0.360) was achieved at 2048 tokens, suggesting a gradual trade-off between sensi- tivity and overall precision as chunk size increases. For anxiety, the model exhibited slightly more balanced per- formance. While recall gained the worst score of 0.797 un- der with the longest chunk size , both F1 and accuracy improved compared to the other disorders. The highest F1 Category Model Depression PTSD Anxiety Acc Rec F1 Acc Rec F1 Acc Rec F1 Decoder GPT-4.1 Mini 0.865 0.284 0.380 0.812 0.192 0.315 0.865 0.284 0.314 LLaMA-3-8B-Instruct 0.224 0.938 0.259 0.306 0.960 0.385 0.336 0.946 0.433 + CoT 0.626 0.388 0.230 0.559 0.280 0.223 0.561 0.318 0.279 + LoRA 0.712 0.333 0.273 0.622 0.190 0.160 0.631 0.219 0.255 Encoder RoBERTa + Logistic Regression 0.750 0.214 0.194 0.840 0.285 0.330 0.510 0.364 0.329 RoBERTa + MLP Head 0.780 0.357 0.313 0.820 0.214 0.250 0.660 0.333 0.393 RoBERTa + XGBoost Head 0.830 0.214 0.261 0.890 0.286 0.421 0.570 0.242 0.271 RoBERTa + LoRA + MLP 0.640 0.786 0.379 0.780 0.643 0.450 0.720 0.546 0.563 Table 3: Overall Performance comparison (Accuracy, Recall, and F1) on three binary classification tasks: Depression, PTSD, and Anxiety. score (0.433) occurred with 1024 tokens, and the best accu- racy (0.358) was obtained at 2048 tokens. 6.2 LoRA Fine-Tuned Models Low-Rank Adaptation (LoRA) has emerged as an efficient fine-tuning strategy for large language models, enabling sub- stantial parameter savings while maintaining strong down- stream performance. In our experiments, we tested three LoRA ranks (8, 16, 32) across two architectures: an encoder- only transformer (RoBERTa) and a decoder-only trans- former (Meta-LLaMA), evaluating their effectiveness in pre- dicting three mental health conditions. Overall, RoBERTa consistently outperformed Meta-LLaMA across most met- rics. While accuracy generally improved with increasing rank, recall and F1 scores were often higher at lower ranks, though the trend was not strictly monotonic (See Tabel 5). Model Depression PTSD Anxiety Acc Recall F1 Acc Recall F1 Acc Recall F1 LoRA RoBERTa Rank=8 0.560 0.857 0.353 0.700 0.643 0.375 0.700 0.643 0.375 LoRA RoBERTa Rank=16 0.770 0.286 0.258 0.780 0.643 0.450 0.720 0.546 0.563 LoRA RoBERTa Rank=32 0.640 0.786 0.379 0.790 0.571 0.432 0.690 0.485 0.508 LoRA Meta Rank=8 0.530 0.333 0.188 0.503 0.333 0.237 0.586 0.500 0.410 LoRA Meta Rank=16 0.676 0.167 0.143 0.631 0.190 0.163 0.541 0.562 0.414 LoRA Meta Rank=32 0.559 0.333 0.197 0.676 0.286 0.250 0.495 0.312 0.263 Table 5: LoRA rank impact on encoder and decoder models The performance of depression, the highest accuracy (0.770) was observed with RoBERTa at rank 16. However, both re- call and F1 under this configuration remained below 0.3. The highest recall (0.857), also the highest among all tasks—was achieved at rank 8, while F1 peaked at 0.379 with rank 32. In terms of PTSD, accuracy remained above 0.70 across all ranks, with a peak of 0.790 at rank 16. Recall was stronger at lower ranks, whereas the highest F1 score (0.450) again appeared at rank 16, indicating its suitability for this task. For anxiety, both accuracy and recall remained relatively sta- ble across all ranks, with accuracy ranging from 0.541 to 0.720 and recall from 0.500 to 0.6429. The highest accu- racy (0.720) was achieved at rank 6, and the highest recall (0.6429) was observed at both ranks 6 and 8. The F1 score peaked at 0.5625 with rank 6, which was also the highest F1 score across all disorders and configurations. 6.3 Embedding-Based Classifiers To complement large-scale language models, we addition- ally evaluated embedding-based classifiers using sentence embeddings extracted from pretrained RoBERTa-base and all-roberta-large-v1 models. As shown in Table 6, across all tasks and models, we observed that all-roberta-large-v1 consistently outperformed roberta-base in both recall and F1 score. While overall recall remained low across classifiers—typically below 0.45, accuracy scores were relatively high, frequently exceeding 0.80. Model Depression PTSD Anxiety Recall F1 Acc Recall F1 Acc Recall F1 Acc RoBERTa + LR 0.214 0.194 0.750 0.285 0.330 0.840 0.364 0.329 0.510 RoBERTa + MLP 0.071 0.111 0.840 0.210 0.250 0.820 0.182 0.182 0.640 RoBERTa + XGBoost 0.071 0.110 0.840 0.143 0.190 0.830 0.200 0.200 0.600 Large + LR 0.429 0.267 0.670 0.286 0.296 0.810 0.333 0.373 0.630 Large + MLP 0.357 0.313 0.780 0.214 0.250 0.820 0.333 0.393 0.660 Large + XGBoost 0.214 0.261 0.830 0.286 0.421 0.890 0.242 0.271 0.570 Table 6: Performance of embedding-based models with dif- ferent encoder backbone and classification heads on three mental health disorders. ”Large” referes to all-roberta- large-v1. Best values per column in bold. We report the performance of embedding-based clas- sifiers on depression classification. The highest re- call (0.429) was achieved by logistic regression with all-roberta-large-v1 embeddings, while the best F1 score (0.313) was obtained by the MLP classifier us- ing the same embeddings. Accuracy was consistently high across models, ranging from 0.67 to 0.84. In contrast, clas- sifiers based on roberta-base embeddings showed sub- stantially lower recall, highlighting the advantage of using larger, more expressive language models. A similar trend emerged for PTSD classification. XG- Boost paired with all-roberta-large-v1 embed- dings achieved the highest F1 score (0.421) and accuracy (0.89). However, recall remained modest across classifiers, generally below 0.29, except for logistic regression with roberta-base (0.286). These results suggest that while high accuracy is attainable, embedding models may under- identify true PTSD cases, limiting their sensitivity in clinical settings. In anxiety classification, the MLP classifier using all-roberta-large-v1 embeddings yielded the high- est F1 score (0.393) and recall (0.333). Accuracy ranged from 0.51 to 0.66 across models. Compared to depression and PTSD, anxiety prediction exhibited more balanced pre- cision–recall trade-offs, particularly with MLP-based archi- tectures, indicating better stability for this diagnostic cate- gory. 7. Discussion and Conclusion This study presents a systematic evaluation of three method- ological paradigms, including zero-shot prompting with foundation models, LoRA fine-tuning, and embedding- based classifiers for predicting depression, PTSD, and anx- iety from real-world interview transcripts. While each ap- proach demonstrates distinct strengths and limitations, sev- eral critical themes emerge regarding their practical applica- bility to mental health screening. To evaluate the practical utility of GPT-4.1 Mini, we benchmarked its zero-shot performance against both open-source foundation models (e.g., Meta-LLaMA-3) and lightweight embedding-based classifiers. Despite achieving high overall accuracy (≥0.80), this align with the exsiting studies (Chen et al. 2025; Ben-Zion et al. 2025). Howerver, GPT-4.1 exhibited substantially lower recall and F1 scores across all disorders, underscoring the trade-off between gen- eral discriminative power and clinical sensitivity. Across all models, accuracy often remained high—even surpassing 0.85 in several configurations—suggesting mod- els can reliably distinguish between diagnosed and undi- agnosed individuals at a population level. However, ac- curacy alone fails to reflect diagnostic usefulness in clin- ical settings, where missing true cases (i.e., low recall) can delay care or exacerbate harm. This issue is particu- larly salient in our results: zero-shot Meta-LLaMA models achieved recall rates above 0.90 for all disorders but suf- fered from low F1 scores and poor precision, indicating fre- quent false positives, the lower recall align with the work (Ravenda et al. 2025). Conversely, embedding-based classi- fiers showed high accuracy but considerably lower recall, of- ten below 0.3, underscoring their tendency to under-identify true cases. From a public health perspective, recall can be interpreted as clinical sensitivity—a measure of how well a model de- tects individuals who actually need care. Given that depres- sion, anxiety, and PTSD often co-occur, even a single posi- tive flag could increase a user’s awareness of their condition, prompting further clinical consultation. Thus, models with high recall—even at the cost of reduced precision—may serve as effective early-warning tools in digital mental health applications. LoRA fine-tuning offers a strong trade-off between effi- ciency and performance. RoBERTa models fine-tuned with LoRA achieved the best overall balance of recall and F1 scores, particularly for anxiety classification (F1 = 0.563). Notably, lower-rank configurations (e.g., Rank=8) some- times outperformed larger ranks on recall, suggesting that parameter-efficient adaptations may be well-suited for sen- sitive screening tasks without extensive computational costs. Embedding-based models, while simpler and more inter- pretable, struggled with recall, indicating limitations in their applicability for nuanced, clinical-like tasks. Limitation and Conclusion This study has several limitations that warrant discussion. First, the diagnostic labels are imbalanced, with positive cases comprising only around 20% of the dataset—and sometimes substantially less. This skew introduces chal- lenges for both training stability and model evaluation, as high accuracy may mask low sensitivity to minority classes. Future work could incorporate reweighting strategies or syn- thetic oversampling to better calibrate predictions. Second, although general-purpose large language models have demonstrated strong linguistic capabilities, their under- standing of mental health–specific discourse remains lim- ited. These models may lack nuanced knowledge of psy- chiatric terminology, symptom expression, or the pragmatic context in which mental health conversations occur. For in- stance, they may misinterpret colloquial expressions of dis- tress or overlook subtle indicators of psychological states. Domain-adapted LLMs trained on relevant corpora—such as therapy transcripts or clinical notes—can provide more reliable grounding for tasks like early detection. Third, the dataset size is relatively small compared to the scale of LLMs being evaluated. Even with parameter- efficient fine-tuning (e.g., LoRA), limited training samples constrain the model’s ability to generalize, particularly when attempting to update or specialize domain-relevant represen- tations. Freezing core parameters may further amplify this bottleneck. Fourth, the length of the interviews introduces cognitive strain for decoder-based models with finite context win- dows. While chunking strategies partially mitigate this issue, they risk discarding context or emphasizing the wrong infor- mation. More sophisticated context-aware methods—such as memory-augmented prompting, hierarchical modeling, or relevance-guided chunking—may improve performance. Looking ahead, several directions hold promise. Beyond general classification, demographic subgroup analyses (e.g., by age, sex, or race) could reveal important disparities in model sensitivity. Incorporating emotional and linguistic signals (e.g., sentiment trajectories, affective intensity) into training data may also enhance predictive validity (Gerczuk et al. 2023; Rasool et al. 2025). Finally, analyzing misclassi- fied cases and probing model reasoning pathways through prompt engineering or contrastive examples could reveal failure modes and inform targeted improvements in model alignment. Ethical Considerations While LLM-based personality prediction holds potential for scalable assessment, it raises critical concerns about pri- vacy, consent, and interpretability. We caution against the use of such models in high-stakes decisions without human oversight and advocate for transparent evaluation protocols grounded in psychological theory. References Alhalaseh, Y. N.; Elshabrawy, H. A.; Erashdi, M.; Shahait, M.; Abu-Humdan, A. M.; and Al-Hussaini, M. 2021. Allo- cation of the “already” limited medical resources amid the COVID-19 pandemic, an iterative ethical encounter includ- ing suggested solutions from a real life encounter. Frontiers in Medicine, 7: 616277. Alhamed, F.; Ive, J.; and Specia, L. 2024. Using large lan- guage models (llms) to extract evidence from pre-annotated social media data. In Proceedings of the 9th Work- shop on Computational Linguistics and Clinical Psychology (CLPsych 2024), 232–237. Aux´em´ery, Y. 2018. Post-traumatic psychiatric disorders: PTSD is not the only diagnosis. La Presse M´edicale, 47(5): 423–430. Balcombe, L.; and De Leo, D. 2021. Digital mental health challenges and the horizon ahead for solutions. JMIR Mental Health, 8(3): e26811. Bartal, A.; Jagodnik, K. M.; Chan, S. J.; and Dekel, S. 2024. ChatGPT demonstrates potential for identifying psychiatric disorders: application to childbirth-related post-traumatic stress disorder. Ben-Zion, Z.; Witte, K.; Jagadish, A. K.; Duek, O.; Harpaz- Rotem, I.; Khorsandian, M.-C.; Burrer, A.; Seifritz, E.; Homan, P.; Schulz, E.; et al. 2025. Assessing and allevi- ating state anxiety in large language models. npj Digital Medicine, 8(1): 132. Blevins, C. A.; Weathers, F. W.; Davis, M. T.; Witte, T. K.; and Domino, J. L. 2015. The posttraumatic stress disor- der checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of traumatic stress, 28(6): 489–498. Bucur, A.-M. 2024. Leveraging LLM-generated data for de- tecting depression symptoms on social media. In Interna- tional Conference of the Cross-Language Evaluation Forum for European Languages, 193–204. Springer. Cacheda, F.; Fernandez, D.; Novoa, F. J.; and Carneiro, V. 2019. Early detection of depression: social network analysis and random forest techniques. Journal of medical Internet research, 21(6): e12554. Chen, F.; Ben-Zeev, D.; Sparks, G.; Kadakia, A.; and Cohen, T. 2025. Detecting PTSD in Clinical Interviews: A Compar- ative Analysis of NLP Methods and Large Language Mod- els. arXiv:2504.01216. Chiong, R.; Budhi, G. S.; Dhakal, S.; and Chiong, F. 2021. A textual-based featuring approach for depression detection using machine learning classifiers and social media texts. Computers in Biology and Medicine, 135: 104499. Christ, N. M.; Elhai, J. D.; Forbes, C. N.; Gratz, K. L.; and Tull, M. T. 2021. A machine learning approach to model- ing PTSD and difficulties in emotion regulation. Psychiatry research, 297: 113712. Coifman, K.; and Bonanno, G. 2010. When distress does not become depression: Emotion context sensitivity and ad- justment to bereavement. Journal of Abnormal Psychology, 119(3): 479–490. Coifman, K.; Bonanno, G.; Ray, R.; and Gross, J. 2007. Does repressive coping promote resilience? Affective- autonomic response discrepancy during bereavement. Jour- nal of Personality and Social Psychology, 92(4): 745–758. Coifman, K.; Flynn, J.; and Pinto, L. 2016. When context matters: Negative emotions predict psychological health and adjustment. Motivation & Emotion, 40(4): 602–624. DeVylder, J.; Burnette, D.; and Yang, L. 2014. Co- occurrence of psychotic experiences and common mental health conditions across four racially and ethnically diverse population samples. Psychological medicine, 44(16): 3503– 3513. Gerczuk, M.; Triantafyllopoulos, A.; Amiriparian, S.; Kathan, A.; Bauer, J.; Berking, M.; and Schuller, B. W. 2023. Zero-shot personalization of speech foundation mod- els for depressed mood monitoring. Patterns, 4(11). Goodell, S.; Druss, B. G.; Walker, E. R.; and Mat, M. 2011. Mental disorders and medical comorbidity. Robert Wood Johnson Foundation, 2(1). Haberer, J. E.; Trabin, T.; and Klinkman, M. 2013. Fur- thering the reliable and valid measurement of mental health screening, diagnoses, treatment and outcomes through health information technology. General hospital psychiatry, 35(4): 349–353. Harvey, M.; Coifman, K.; Ross, G.; Kleinert, D.; and Gia- rdina, P. 2014. Contextually appropriate emotion-word use predicts adaptive health behavior: Emotion context sensitiv- ity and treatment adherence. Journal of Health Psychology. Advance online publication. Hasan, M. A.; Das, S.; Anjum, A.; Alam, F.; Anjum, A.; Sarker, A.; and Noori, S. R. H. 2024. Zero- and Few-Shot Prompting with LLMs: A Comparative Study with Fine-tuned Models for Bangla Sentiment Analysis. arXiv:2308.10783. Hawkins, E. H. 2009. A tale of two systems: Co-occurring mental health and substance abuse disorders treatment for adolescents. Annual review of psychology, 60(1): 197–227. Held, P.; Schubert, R. A.; Pridgen, S.; Kovacevic, M.; Montes, M.; Christ, N. M.; Banerjee, U.; and Smith, D. L. 2022. Who will respond to intensive PTSD treatment? A machine learning approach to predicting response prior to starting treatment. Journal of psychiatric research, 151: 78– 85. Islam, M. R.; Kabir, M. A.; Ahmed, A.; Kamal, A. R. M.; Wang, H.; and Ulhaq, A. 2018. Depression detection from social network data using machine learning techniques. Health information science and systems, 6(1): 8. Kroenke, K.; Spitzer, R.; and Williams, J. 2001. The patient health questionnaire (phq-9)–overview. J Gen Intern Med, 16(9): 606–613. Kumar, P.; Lionis, C.; Andoko, D.; Rahman, Z.; Anastasaki, M.; Awankem, B.; et al. 2025. Evaluation of Diagnostic Ser- vices in Rural and Remote Areas: Bottlenecks, Success Sto- ries, and Solutions. Journal of Surgical Specialties and Ru- ral Practice, 6(1): 32–37. Lai, M.-C.; Kassee, C.; Besney, R.; Bonato, S.; Hull, L.; Mandy, W.; Szatmari, P.; and Ameis, S. H. 2019. Prevalence of co-occurring mental health diagnoses in the autism pop- ulation: a systematic review and meta-analysis. The Lancet Psychiatry, 6(10): 819–829. Laursen, T. M.; Nordentoft, M.; and Mortensen, P. B. 2014. Excess early mortality in schizophrenia. Annual review of clinical psychology, 10(1): 425–448. Le Glaz, A.; Haralambous, Y.; Kim-Dufor, D.-H.; Lenca, P.; Billot, R.; Ryan, T. C.; Marsh, J.; Devylder, J.; Walter, M.; Berrouiguet, S.; et al. 2021. Machine learning and natu- ral language processing in mental health: systematic review. Journal of medical Internet research, 23(5): e15708. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Maharjan, J.; Jin, R.; King, J.; Zhu, J.; and Kenne, D. 2025. Differential Analysis of Age, Gender, Race, Sentiment, and Emotion in Substance Use Discourse on Twitter During the COVID-19 Pandemic: A Natural Language Processing Ap- proach. JMIR infodemiology, 5(1): e67333. Mitchell, A. J.; Rao, S.; and Vaze, A. 2011. Can general practitioners identify people with distress and mild depres- sion? A meta-analysis of clinical accuracy. Journal of affec- tive disorders, 130(1-2): 26–36. Mitchell, A. J.; Vaze, A.; and Rao, S. 2009. Clinical diag- nosis of depression in primary care: a meta-analysis. The Lancet, 374(9690): 609–619. Nemesure, M. D.; Heinz, M. V.; Huang, R.; and Jacobson, N. C. 2021. Predictive modeling of depression and anxiety using electronic health records and a novel machine learn- ing approach with artificial intelligence. Scientific reports, 11(1): 1980. Ohse, J.; Hadˇzi´c, B.; Mohammed, P.; et al. 2024. GPT-4 shows potential for identifying social anxiety from clinical interview data. Scientific Reports, 14: 30498. Organization, W. H. 2001. The World Health Report 2001: Mental health: new understanding, new hope. Priya, A.; Garg, S.; and Tigga, N. P. 2020. Predicting anxiety, depression and stress in modern life using ma- chine learning algorithms. Procedia Computer Science, 167: 1258–1267. Ramasamy Ramamurthy, S.; and Roy, N. 2018. Recent trends in machine learning for human activity recogni- tion—A survey. Wiley Interdisciplinary Reviews: Data Min- ing and Knowledge Discovery, 8(4): e1254. Rasool, A.; Shahzad, M. I.; Aslam, H.; Chan, V.; and Ar- shad, M. A. 2025. Emotion-aware embedding fusion in large language models (Flan-T5, Llama 2, DeepSeek-R1, and ChatGPT 4) for intelligent response generation. AI, 6(3): 56. Ravenda, F.; Kara-Isitt, F.-Z.; Swift, S.; Mira, A.; and Ra- ballo, A. 2025. From Evidence Mining to Meta-Prediction: a Gradient of Methodologies for Task-Specific Challenges in Psychological Assessment. In Proceedings of the 10th Workshop on Computational Linguistics and Clinical Psy- chology (CLPsych 2025), 242–248. Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sen- tence Embeddings using Siamese BERT-Networks. In Pro- ceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics. Saidi, A.; Othman, S. B.; and Saoud, S. B. 2020. Hybrid CNN-SVM classifier for efficient depression detection sys- tem. In 2020 4th international conference on advanced sys- tems and emergent technologies (IC ASET), 229–234. IEEE. Sarabadani, A.; Taherinia, H.; Ghadiri, N.; Shahmarvandi, E. K.; and Mousa, R. 2025. PKG-LLM: A Framework for Predicting GAD and MDD Using Knowledge Graphs and Large Language Models in Cognitive Neuroscience. Sawalha, J.; Yousefnezhad, M.; Shah, Z.; Brown, M. R.; Greenshaw, A. J.; and Greiner, R. 2022. Detecting presence of PTSD using sentiment analysis from text data. Frontiers in psychiatry, 12: 811392. Sha, Y.; Pan, H.; Xu, W.; Meng, W.; Luo, G.; Du, X.; Zhai, X.; Tong, H. H.; Shi, C.; and Li, K. 2025. MDD-LLM: Towards accuracy large language models for major depres- sive disorder diagnosis. Journal of Affective Disorders, 388: 119774. Spitzer, R. L.; Kroenke, K.; Williams, J. B.; and L¨owe, B. 2006. A brief measure for assessing generalized anxiety dis- order: the GAD-7. Archives of internal medicine, 166(10): 1092–1097. Srivastava, D. 2024. Utilizing Counselor-Client Dialogues to Develop a Memory-Efficient Mental Health Question- Answering System with Large Language Models. Master’s thesis, Dublin, National College of Ireland. Submitted. Su, C.; Xu, Z.; Pathak, J.; and Wang, F. 2020. Deep learning in mental health outcome research: a scoping review. Trans- lational psychiatry, 10(1): 116. Tennyson, R. L.; Kemp, C. G.; and Rao, D. 2016. Challenges and strategies for implementing mental health measurement for research in low-resource settings. International health, 1–7. Tran, T.; and Kavuluru, R. 2017. Predicting mental con- ditions based on “history of present illness” in psychiatric notes with deep neural networks. Journal of biomedical in- formatics, 75: S138–S148. Van Der Donckt, J.; Van Der Donckt, J.; Deprost, E.; Vandenbussche, N.; Rademaker, M.; Vandewiele, G.; and Van Hoecke, S. 2023. Do not sleep on traditional machine learning: Simple and interpretable techniques are competi- tive to deep learning for sleep scoring. Biomedical Signal Processing and Control, 81: 104429. Vanlalawmpuia, R.; and Lalhmingliana, M. 2020. Prediction of Depression in Social Network Sites Using Data Mining. In 2020 4th International Conference on Intelligent Com- puting and Control Systems (ICICCS), 489–495. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. At- tention is all you need. In Advances in Neural Information Processing Systems, volume 30, 5998–6008. Curran Asso- ciates, Inc. Wang, Z.; Pang, Y.; and Lin, Y. 2023. Large Language Mod- els Are Zero-Shot Text Classifiers. arXiv:2312.01044. Wshah, S.; Skalka, C.; Price, M.; et al. 2019. Predicting posttraumatic stress disorder risk: a machine learning ap- proach. JMIR mental health, 6(7): e13946. Xu, X.; Yao, B.; Dong, Y.; Gabriel, S.; Yu, H.; Hendler, J.; Ghassemi, M.; Dey, A. K.; and Wang, D. 2024. Mental-llm: Leveraging large language models for mental health predic- tion via online text data. Proceedings of the ACM on In- teractive, Mobile, Wearable and Ubiquitous Technologies, 8(1): 1–32. Zhu, J.; Jin, R.; Jiang, H.; Wang, Y.; Zhang, X.; and Coif- man, K. G. 2025. Leveraging Large Language Models to Analyze Emotional and Contextual Drivers of Teen Sub- stance Use in Online Discussions. arXiv:2501.14037. Zhu, J.; Jin, R.; Kenne, D. R.; Phan, N.; and Ku, W.-S. 2024. User dynamics and thematic exploration in r/depression dur- ing the COVID-19 pandemic: insights from overlapping r/- SuicideWatch users. Journal of Medical Internet Research, 26: e53968.
AI-Powered Early Diagnosis of Mental Health Disorders from Real-World Clinical Conversations Jianfeng Zhu1, Julina Maharjan1, Xinyu Li1, Karin G. Coifman2, Ruoming Jin1 1 2 {jzhu6, jmaharja, xli5, , Abstract Mental health disorders remain among the leading cause of disability worldwide, yet conditions such as depression, anxiety, and Post-Traumatic Stress Disorder (PTSD) are frequently underdiagnosed or misdiagnosed due to subjective assessments, limited clinical resources, and stigma and low awareness. In primary care settings, studies show that providers misidentify depression or anxiety in over 60% of cases, highlighting the urgent need for scalable, accessible, and context-aware diagnostic tools that can support early detection and intervention. In this study, we evaluate the effectiveness of machine learning models for mental health screening using a unique dataset of 553 real-world, semistructured interviews, each paried with ground-truth diagnoses for major depressive episodes (MDE), anxiety disorders, and PTSD. We benchmark multiple model classes, including zero-shot prompting with GPT-4.1 Mini and MetaLLaMA, as well as fine-tuned RoBERTa models using LowRank Adaptation (LoRA). Our models achieve over 80% accuracy across diagnostic categories, with especially strong performance on PTSD (up to 89% accuracy and 98% recall). We also find that using shorter context, focused context segments improves recall, suggesting that focused narrative cues enhance detection sensitivity. LoRA fine-tuning proves both efficient and effective, with lower-rank configurations (e.g., rank 8 and 16) maintaining competitive performance across evaluation metrics. Our results demonstrate that LLM-based models can offer substantial improvements over traditional self-report screening tools, providing a path toward low-barrier, AI-powerd early diagnosis. This work lays the groundwork for integrating machine learning into real-world clinical workflows, particularly in low-resource or high-stigma environments where access to timely mental health care is most limited. Code - https://anonymous.4open.science/r/ AAAI2026 Depression1-E152/ 1. Introduction Mental health disorders account for four of the ten leading cause of disability worldwide (Alhamed, Ive, and Specia 2024; Ben-Zion et al. 2025; Organization 2001). Individuals affected by these conditions experience disproportionally high rates of adverse health behaviors, including tobacco smoking, substance use, physical inactivity, and poor diet, which contribute to elevated risks of chronic medical illness and premature mortality (Goodell et al. 2011; Laursen, Nordentoft, and Mortensen 2014). Despite their global burden, mental health disorder remain underdiagnosed, undertreated and stigmatized, particularly in resource constrained or high barrier settings (Organization 2001). A major contributor to this treatment gap lies in the limitations of traditional diagnostic tools. Mental health assessments rely on subjective measurements, such as structured interviews and self-reports questionnaires (Tennyson, Kemp, and Rao 2016). While these instruments are costeffective and scalable, they suffer from social desirability bias, inaccurate recall, and limited interoperability of symptoms, particularly in cognitively impaired or stigmatized populations (Haberer, Trabin, and Klinkman 2013). Moreover, timely and accurate diagnosis is complicated by the high comorbidity of condition such as depression, anxiety and post-traumatic stress disorder (PTSD), which often present overlapping symptoms and are difficult to disentangle in practice. This diagnostic ambiguity, compounded by limited clinical resources, contributes to widespread misdiagnosis and under-recognition in both high- and low-income settings (Aux ́em ́ery 2018). These challenges underscore the urgent need for scalable, proactive, and context aware screening tools that can identify at-risk individuals earlier and more accurately (Tennyson, Kemp, and Rao 2016). In response to these demands, traditional machine learning methods such as SVM, random forest, etc (Saidi, Othman, and Saoud 2020; Cacheda et al. 2019; Islam et al. 2018) have long been extensively studied and applied to detect various mental health conditions by leveraging structured inputs and hand-crafted features derived from linguistic, behavioral, or physiological signals. While these approaches have shown promise, they often result in suboptimal performance due to the limited expressiveness (Van Der Donckt et al. 2023). Subsequent advances in deep learning enabled end-to-end models that learn representations directly from raw text or speech data, offering improved performance and flexibility (Su et al. 2020). However, these models still typically require large labeled datasets, suffer from poor interpretability, and remain sensitive to domain shifts (Ramasamy Ramamurthy and Roy 2018). Recent advances in Large Language Models (LLMs) offer a compelling new direction. By analyzing naturalistic user input, such as interview transcripts and spontaneous 16 Oct 2025 conversations, LLMs can potentially infer early signs of depression, anxiety, and other mental health disorders (Xu et al. 2024). These models require no clinical infrastructure and can be deployed in low touch, user facing environments, enabling broad access and repeated monitoring. This study takes a step in that direction by asking a core question: Can we harness the broad prior knowledge and strong text comprehension skills of large language models to detect early signs of mental disorders from naturalistic conversations, without relying on expert-crafted features or clinical annotations? Figure 1: Overview of dataset, LLM adaptation methods, and prediction targets. Unlike prior work which based on self-reported surveys or social media data on single mental health disoders (Bucur 2024; Zhu et al. 2024; Sarabadani et al. 2025; Bartal et al. 2024). We leverage a unique dataset of real-world, semistructured psychiatric interviews with ground-truth clinical diagnoses assessed within the same time window. This setting more closely reflects the nuance and variability of real clinical conversations. We formalize the task as multi-label text classification, where given an interview transcript, the model must predict the presence of (1) major depressive episodes (MDE), (2) PTSD, and (3) anxiety disorders. We evaluate both encoder-based and decoder-based language models, with optional enhancement via parameterefficient fine-tuning (PEFT) adapters such as LoRA. Decoder-based models (e.g., GPT-4.1, Meta-LLaMA) are assessed in zero-shot settings, while encoder-based models are optionally fine-tuned using embedding-based classifiers with or without LoRA augmentation. This design allows for a rigorous, side-by-side comparison of general-purpose versus customized diagnostic strategies under realistic data and deployment constraints.An overview of our modeling pipeline is illustrated in Figure 1. 2. Related Work Prior work in mental health assessment has largely relied on standardized self-report instruments designed to evaluate specific psychiatric conditions. For instance, the Patient Health Questionnaire (PHQ-9) is a widely used selfreport scale for assessing major depressive disorder, based on DSM-IV criteria (Kroenke, Spitzer, and Williams 2001). Similarly, the GAD-7 is a brief, clinically validated measure for evaluating the severity of generalized anxiety disorder (Spitzer et al. 2006), and the PTSD Checklist for DSM5 (PCL-5) captures core symptoms of post-traumatic stress disorder across four diagnostic clusters, in alignment with DSM-5 standards (Blevins et al. 2015). While these instruments are cost-effective and easy to administer, they suffer from several important limitations. First, each instrument typically targets only one disorder per administration, despite increasing evidence that conditions such as depression, anxiety, and PTSD frequently co-occur in clinical populations (Lai et al. 2019; Hawkins 2009; DeVylder, Burnette, and Yang 2014). Second, because these tools rely entirely on self-reported data, they are vulnerable to social desirability bias, recall inaccuracies, and limited self-awareness, especially among vulnerable or cognitively impaired individuals. Finally, in resource-limited settings or during health emergencies (e.g., pandemics), access to trained professionals or structured screening processes may be severely constrained, leaving many individuals undiagnosed or misdiagnosed (Kumar et al. 2025; Alhalaseh et al. 2021). For example, a synthesis of 157 studies found that only one in three individuals with mild depression is correctly identified in primary care (Mitchell, Rao, and Vaze 2011), and a meta-analysis of 41 high-quality studies, encompassing over 50,000 patients, found that general practitioners correctly identified depression in just 47.3% of cases. Notably, these studies also found that false positives often outnumbered true positives, and a substantial proportion of cases were entirely missed (Mitchell, Vaze, and Rao 2009). In response to the limitations of self-report instruments, recent research has explored machine learning (ML) and natural language processing (NLP) approaches to automatically detect mental health conditions from diverse data sources, including text inputs, questionnaires, and social data (Wshah et al. 2019; Le Glaz et al. 2021; Chiong et al. 2021; Maharjan et al. 2025). For instance, Priya et al. study applied ML algorithms to classify individuals across five severity levels of anxiety, depression, and stress, based on questionnaire responses. While Na ̈ıve Bayes achieved the highest accuracy, Random Forest was identified as the overall best-performing model (Priya, Garg, and Tigga 2020; Nemesure et al. 2021). Other studies have demonstrated the feasibility of using ML methods to identify PTSD, leveraging structured assessments, language-based emotion regulation features, and treatment selection models (Christ et al. 2021; Held et al. 2022; Sawalha et al. 2022; Vanlalawmpuia and Lalhmingliana 2020). Despite these promising directions, the implementation of digital mental health solutions still faces substantial challenges-particularly in terms of evaluation rigor and practical effectiveness (Balcombe and De Leo 2021). Unlike traditional machine learning methods that often require large amounts of task-specific training data and manually engineered features, large language models (LLMs) operate under a fundamentally different paradigm. Powered by transformer architectures and the self-attention mechanism (Vaswani et al. 2017), LLMs are pretrained on massive text corpora and can perform zero-shot or few-shot inference based solely on textual prompts(Wang, Pang, and Lin 2023; Hasan et al. 2024). This shift enables LLMs to understand rich contextual cues directly from free-form natural language-without the need for structured input formats, disorder-specific questionnaires, or annotated training data (Zhu et al. 2024; Srivastava 2024; Zhu et al. 2025). Several recent studies have demonstrated the promise of LLMs in clinical mental health applications. For example, GPT-4 was shown to infer social anxiety symptom severity from semi-structured clinical interviews, achieving a correlation of r = 0.79 with validated self-report measures (Ohse et al. 2024). In the context of depression diagnosis, MDDLLM (70B) achieved an accuracy of 0.8378 and AUROC of 0.8919, significantly outperforming conventional machine and deep learning approaches (Sha et al. 2025). Another study on PTSD detection used few-shot prompting with DSM-5 criteria to achieve AUROC = 0.737, with performance varying by symptom severity and comorbid depression status (Chen et al. 2025). These findings underscore the capacity of LLMs to deliver clinically meaningful insights from minimally structured data making them especially valuable for scalable mental health screening in lowresource or underserved contexts. Building on this progress, our work makes several key contributions. We evaluate both general-purpose LLMs (e.g., GPT-4, Meta-LLaMA) and parameter-efficient fine-tuned models (e.g., RoBERTa with LoRA) on a unique dataset of 555 real-world psychiatric interviews. 3. Datasets We utilize a novel dataset comprising 555 U.S. adults, collected across multiple behavioral research studies investigating individual responses to transitional or adverse life events, such as occupational stress, chronic illness, or traumatic exposure. All participants provided written informed consent under IRB-approved protocols. Table 1 summarizes the participant demographics. The dataset includes 553 individuals, with a balanced gender distribution (278 male, 275 female), and a majority identifying as White (431). Age is broadly distributed across five brackets, and most participants report some college or higher education. This diversity supports robust downstream analysis. During these interviews, all participants answered the same five questions in a predetermined sequence. The first question required participants to describe their activities from waking to sleep on the previous day. The subsequent questions explored their personal experiences with a recent challenging event or situation, their coping strategies for that challenge, an unrelated recent unpleasant event, and, finally, a recent positive experience. Interviewers were trained to provide standardized prompts, and participants were encouraged to speak spontaneously for up to three minutes in response to each question. Each interview session resulted in approximately 15 minutes of recorded speech per participant. The speech was then recorded and transcribed according to convention and recommendations (Coifman et al. 2007; Coifman and Bonanno 2010; Coifman, Flynn, and Pinto 2016; Harvey et al. Category Subgroup Count Age <25 123 25-34 128 35-44 96 45-59 132 60+ 74 Sex Male 278 Female 275 Race White 431 Other 122 Education High school or below 98 Some college 293 College or above 134 Unknown 28 Table 1: Participant Demographics Summary 2014). Each participant's text responses were paired with their corresponding depressive symptoms, derived from Structured Clinical Interview for DSM (SCID) reports . Demographic information, including age and sex, was collected across studies and harmonized into a unified format. The mean participant age was 39.36 years (SD = 16.0). The sample was approximately gender-balanced: 278 participants identified as female, 277 as male, and one participant did not report their sex. The average length of the full interviews was approximately 2,955 words (SD = 1,855). This setup allows for a fine-grained analysis of how narrative depth influences psychiatric signal extraction across model types. For foundation models constrained by input length limitations (e.g., Meta-LLaMA-3-8B), we adopt a chunk-based inference strategy, whereby each user transcript is segmented into overlapping chunks of 512, 1024, or 2048 tokens. Model predictions are computed for each chunk independently and averaged to obtain a user-level binary decision. This design enables scalable prediction while preserving diagnostic signal across extended narrative contexts. To illustrate the richness of the narrative content, Table 2 provides representative excerpts from a single participant across the five prompt domains. These responses reflect the emotional nuance and thematic complexity of the dataset, presenting a realistic and ecologically valid benchmark for naturalistic mental health inference. 4. Methodology In this section, we systematically explain the variaous methods to evaluate the abilities of modern AI models in mental health disorder domain. Prompt Response Excerpt Daily Activities "Beginning of the day, uh I have two sons, one is [xxxx]... played outside... gym..." Difficult Experience "Being a firefighter... challenging and amazing experience... bad experiences too..." Emotion Regulation "You talk to people you trust at work... My wife and I have been married..." Negative Event "First big incident... stressful leading into it..." Positive Event "First baby we delivered-early morning call... heroin case..." Table 2: Example Interview Responses (Participant ID: 001) Method 1: Multi-Disorder Inference via Direct Prompting First, we investigate whether modern foundation models can infer multiple psychiatric disorders directly from raw interview text in a zero-shot setting. Specifically, we prompt large language models (LLMs) to identify signs of psychiatric conditions, including depression, PTSD, and anxiety without any task-specific training or fine-tuning. We examine two state-of-the-art LLM models: • GPT-4.1 Mini: Deployed in a zero-shot configuration using custom-designed prompts tailed to each disorder. This setup serves as a baseline for scalable mental health screening without the need for retraining. • Meta-LLaMA-3-8B-Instruct: Evaluated under zeroshot prompting. To alleviate the complexity introduced by long-range dependencies, interview transcripts were further segmented into small chunks of size 512, 1024, or 2048 with a fixed overlapping rate, each chunked transcript was then reformulated into a prompt and subsequently passed to the model for binary classification. Final user-level predictions were derived by averaging prediction scores across all chunks. We employ the following standardized prompt for all models and disorders, adhering to a binary prediction format to ensure clinical interpretability: Prompt: Psychiatric Disorder Inference (LLMBased) You are an AI assisting mental health professionals in identifying psychiatric disorders. Input Data: "{response text}" - a free-form participant response from a semi-structured interview. Task: Analyze the participant's response and determine whether there are signs of a psychiatric disorder. Do not include any reasoning or explanation. Output Format: Respond with a single-line binary decision: Prediction: Yes Prediction: No Method 2: Decoder-based Model as Backbone with PEFT Enhancement To facilitate domain adaptation to mental health tasks, we adopt a parameter-efficient fine-tuning (PEFT) strategy. In particular, we apply Low-Rank Adaptation (LoRA) to align a pretrained transformer-based language model with disorder-specific semantics in a computationally efficient manner. We fine-tune Meta-LLaMA-3-8B-Instruct separately for each psychiatric condition (depression, PTSD, and anxiety) using binary supervision. This enables efficient adaptation with minimal additional parameters while preserving general linguistic knowledge. Method 3: Encoder-based Model as Backbone We also evaluate the diagnostic ability of encoder-based language model. Typically, we choose the widely used generic encoder model RoBERTa-base (Liu et al. 2019) and all-roberta-large-v1 through the SentenceTransformers library (Reimers and Gurevych 2019) as our backbone. Adaptation to Long-Form Input Unlike decoder-based language models, which typically support large context windows, encoder-based models such as the BERT family are constrained by relatively short input lengths. To enable processing of long-form text, we adopt a 2-step chunking and aggregation strategy that first segments input sequences into manageable chunks, and then aggregates their representations to construct a comprehensive embedding for downstream classification. Step1: Chunking: Given an user's transcript x, consisting of a long sequence of tokens, we split it into overlapping chunks xi of size c with a fixed overlap ratio. Each chunk is then independently encoded via a PEFT equipped RoBERTa encoder: hi = RoBERTa(xi) (1) We extract the [CLS] token from each chunk embedding, resulting in a chunk-level representation matrix H = [h[CLS] 1 , . . . , h[CLS] T ] ∈RT ×d, where T is the number of chunks and d is the hidden size. Step 2: Aggregation: Once the chunk-level transcript representation is obtained, we aggregate the high dimensional matrix H into a single vector h as the final user representation. In particular, we opted for two different aggregation strategies: • Mean pooling: h = 1 T PT i=1 h[CLS] i • Max pooling: h = maxT i=1 h[CLS] i Raw encoder-based embedding As a complementary baseline, we leverage the embeddings by feeding user transcripts to raw encoder model, including RoBERTa-base and all-roberta-large-v1. These embedding representations are then passed to lightweight classifiers-logistic regression, multilayer perceptrons (MLP), and XGBoost-to predict binary disorder labels. PEFT-enhanced embedding We also leveraged PEFTmethod, such as LoRA to fine tune the pretrained encoder, attempting to align the encoder language model with the mental health domain. To enable end-to-end prediction, we feed the learned text embeddings into a simple classification module designed for depression detection. This module-such as a lightweight MLP-maps the semantic representations to task-specific labels. We also apply layer normalization to stabilize training and include L2 regularization on classifier weights to reduce overfitting.. 5. Experiments and Results We present the experimental detail and report the corresponding results in this section. 5.1 Experimental configuration To mitigate output bias and performance degradation caused by label imbalance, we apply upsampling when training decoder-based models, ensuring balanced supervision across classes. When PEFT module is equipped (method 2 and method 3), we experiment with varying LoRA ranks (8, 16, 32) to assess parameter-efficiency and generalization. The dataset is split by user into 80% for training and 20% for testing. All models are trained using the AdamW optimizer with a batch size of 8 and a learning rate of 2 × 10-5. Experiments are conducted on a Linux server equipped with a single NVIDIA A100 GPU. 5.2 Evaluation Metrics To assess model performance on mental health diagnosis tasks, we report three core evaluation metrics: accuracy, recall, and F1 score (Tran and Kavuluru 2017). Accuracy captures the overall proportion of correct predictions and provides a general indication of model reliability across all classes. However, given the clinical relevance of early identification, we include recall, the proportion of true positive cases correctly identified, as a primary metric of interest. High recall is critical in mental health contexts, where false negatives may result in delayed or missed intervention.The F1 score, defined as the harmonic mean of precision and recall, offers a more balanced view of performance under class imbalance and helps quantify trade-offs between over- and under-diagnosis. All metrics are computed individually for each target condition-major depressive episodes (MDE), PTSD, and anxiety disorders-on a held-out user set, ensuring consistency and fairness across methods. Together, these metrics allow for a clinically meaningful and statistically robust evaluation of diagnostic prediction quality. We evaluate all three methods, zero-shot prompting with foundation language models, and embedding-based classifier on interview data acrosee three target mental health disorders. Each Each approach is assessed using accuracy, recall, and F1 score, with a particular emphasis on recall due to its clinical relevance in minimizing missed diagnoses. 5.3 Overall Evaluation Results Table 3 presents the performance of both decoder-based (Method 1&2) and encoder-based models (Method 3) across three mental health condition diagnosis: depression, PTSD, and anxiety. Among decoder-based methods, GPT-4.1 Mini achieves the highest accuracy overall, but suffers from low recall, indicating limited sensitivity to positive cases. In contrast, LLaMA-3-8B-Instruct exhibits extremely high recall but poor accuracy and F1, suggesting over-prediction of the positive class under severe label imbalance. The use of Chain-of-Thought (CoT) prompting or LoRA fine-tuning moderately improves the balance between precision and recall. Encoder-based models, particularly those enhanced with LoRA and MLP heads, demonstrate more stable and balanced performance across all tasks. Notably, RoBERTa + LoRA + MLP achieves the highest F1 scores in PTSD and anxiety detection, indicating its effectiveness in domainspecific adaptation with minimal parameter overhead. Overall, encoder-based approaches with PEFT outperform decoder-based generation methods in terms of classification robustness under label imbalance. 6. Result Analysis and Ablation Studies 6.1 Multi-Disorder Inference via Direct Prompting We evaluated the zero-shot diagnostic performance of the MetaLLaMA-3-8B-Instruct model across varying chunk sizes (512, 1024, 2048 tokens) for three mental health conditions: depression, PTSD, and anxiety. The results, summarized in Tables 4 reveal a consistent pattern: the model demonstrates high recall (clinical sensitivity) across all tasks, with values typically above 0.90, but suffers from low F1 scores and overall accuracy, both of which remain below 0.45. Model Depression PTSD Anxiety Recall F1 Accuracy Recall F1 Accuracy Recall F1 Accuracy LLaMA-3 512 tokens 0.950 0.247 0.163 0.980 0.373 0.251 0.973 0.422 0.286 LLaMA-3 1024 tokens 0.938 0.259 0.224 0.960 0.385 0.306 0.946 0.433 0.336 LLaMA-3 2048 tokens 0.850 0.260 0.300 0.856 0.377 0.360 0.797 0.399 0.358 Table 4: Zero-shot performance of Meta-LLaMA-3-8BInstruct across chunk sizes for three mental health disorders. For depression, the model achieved its highest recall of 0.950 using 512-token input, capturing nearly all true positive cases. However, F1 score and accuracy improved modestly with longer context windows, peaking at 0.260 and 0.300, respectively, under the 2048-token settingstill well below optimal thresholds. For PTSD, the recall again peaked at 0.980 with 512-token input. The best F1 score (0.385) was observed at 1024 tokens, while the highest accuracy (0.360) was achieved at 2048 tokens, suggesting a gradual trade-off between sensitivity and overall precision as chunk size increases. For anxiety, the model exhibited slightly more balanced performance. While recall gained the worst score of 0.797 under with the longest chunk size , both F1 and accuracy improved compared to the other disorders. The highest F1 Category Model Depression PTSD Anxiety Acc Rec F1 Acc Rec F1 Acc Rec F1 Decoder GPT-4.1 Mini 0.865 0.284 0.380 0.812 0.192 0.315 0.865 0.284 0.314 LLaMA-3-8B-Instruct 0.224 0.938 0.259 0.306 0.960 0.385 0.336 0.946 0.433 + CoT 0.626 0.388 0.230 0.559 0.280 0.223 0.561 0.318 0.279 + LoRA 0.712 0.333 0.273 0.622 0.190 0.160 0.631 0.219 0.255 Encoder RoBERTa + Logistic Regression 0.750 0.214 0.194 0.840 0.285 0.330 0.510 0.364 0.329 RoBERTa + MLP Head 0.780 0.357 0.313 0.820 0.214 0.250 0.660 0.333 0.393 RoBERTa + XGBoost Head 0.830 0.214 0.261 0.890 0.286 0.421 0.570 0.242 0.271 RoBERTa + LoRA + MLP 0.640 0.786 0.379 0.780 0.643 0.450 0.720 0.546 0.563 Table 3: Overall Performance comparison (Accuracy, Recall, and F1) on three binary classification tasks: Depression, PTSD, and Anxiety. score (0.433) occurred with 1024 tokens, and the best accuracy (0.358) was obtained at 2048 tokens. 6.2 LoRA Fine-Tuned Models Low-Rank Adaptation (LoRA) has emerged as an efficient fine-tuning strategy for large language models, enabling substantial parameter savings while maintaining strong downstream performance. In our experiments, we tested three LoRA ranks (8, 16, 32) across two architectures: an encoderonly transformer (RoBERTa) and a decoder-only transformer (Meta-LLaMA), evaluating their effectiveness in predicting three mental health conditions. Overall, RoBERTa consistently outperformed Meta-LLaMA across most metrics. While accuracy generally improved with increasing rank, recall and F1 scores were often higher at lower ranks, though the trend was not strictly monotonic (See Tabel 5). Model Depression PTSD Anxiety Acc Recall F1 Acc Recall F1 Acc Recall F1 LoRA RoBERTa Rank=8 0.560 0.857 0.353 0.700 0.643 0.375 0.700 0.643 0.375 LoRA RoBERTa Rank=16 0.770 0.286 0.258 0.780 0.643 0.450 0.720 0.546 0.563 LoRA RoBERTa Rank=32 0.640 0.786 0.379 0.790 0.571 0.432 0.690 0.485 0.508 LoRA Meta Rank=8 0.530 0.333 0.188 0.503 0.333 0.237 0.586 0.500 0.410 LoRA Meta Rank=16 0.676 0.167 0.143 0.631 0.190 0.163 0.541 0.562 0.414 LoRA Meta Rank=32 0.559 0.333 0.197 0.676 0.286 0.250 0.495 0.312 0.263 Table 5: LoRA rank impact on encoder and decoder models The performance of depression, the highest accuracy (0.770) was observed with RoBERTa at rank 16. However, both recall and F1 under this configuration remained below 0.3. The highest recall (0.857), also the highest among all tasks-was achieved at rank 8, while F1 peaked at 0.379 with rank 32. In terms of PTSD, accuracy remained above 0.70 across all ranks, with a peak of 0.790 at rank 16. Recall was stronger at lower ranks, whereas the highest F1 score (0.450) again appeared at rank 16, indicating its suitability for this task. For anxiety, both accuracy and recall remained relatively stable across all ranks, with accuracy ranging from 0.541 to 0.720 and recall from 0.500 to 0.6429. The highest accuracy (0.720) was achieved at rank 6, and the highest recall (0.6429) was observed at both ranks 6 and 8. The F1 score peaked at 0.5625 with rank 6, which was also the highest F1 score across all disorders and configurations. 6.3 Embedding-Based Classifiers To complement large-scale language models, we additionally evaluated embedding-based classifiers using sentence embeddings extracted from pretrained RoBERTa-base and all-roberta-large-v1 models. As shown in Table 6, across all tasks and models, we observed that all-roberta-large-v1 consistently outperformed roberta-base in both recall and F1 score. While overall recall remained low across classifiers-typically below 0.45, accuracy scores were relatively high, frequently exceeding 0.80. Model Depression PTSD Anxiety Recall F1 Acc Recall F1 Acc Recall F1 Acc RoBERTa + LR 0.214 0.194 0.750 0.285 0.330 0.840 0.364 0.329 0.510 RoBERTa + MLP 0.071 0.111 0.840 0.210 0.250 0.820 0.182 0.182 0.640 RoBERTa + XGBoost 0.071 0.110 0.840 0.143 0.190 0.830 0.200 0.200 0.600 Large + LR 0.429 0.267 0.670 0.286 0.296 0.810 0.333 0.373 0.630 Large + MLP 0.357 0.313 0.780 0.214 0.250 0.820 0.333 0.393 0.660 Large + XGBoost 0.214 0.261 0.830 0.286 0.421 0.890 0.242 0.271 0.570 Table 6: Performance of embedding-based models with different encoder backbone and classification heads on three mental health disorders. "Large" referes to all-robertalarge-v1. Best values per column in bold. We report the performance of embedding-based classifiers on depression classification. The highest recall (0.429) was achieved by logistic regression with all-roberta-large-v1 embeddings, while the best F1 score (0.313) was obtained by the MLP classifier using the same embeddings. Accuracy was consistently high across models, ranging from 0.67 to 0.84. In contrast, classifiers based on roberta-base embeddings showed substantially lower recall, highlighting the advantage of using larger, more expressive language models. A similar trend emerged for PTSD classification. XGBoost paired with all-roberta-large-v1 embeddings achieved the highest F1 score (0.421) and accuracy (0.89). However, recall remained modest across classifiers, generally below 0.29, except for logistic regression with roberta-base (0.286). These results suggest that while high accuracy is attainable, embedding models may underidentify true PTSD cases, limiting their sensitivity in clinical settings. In anxiety classification, the MLP classifier using all-roberta-large-v1 embeddings yielded the highest F1 score (0.393) and recall (0.333). Accuracy ranged from 0.51 to 0.66 across models. Compared to depression and PTSD, anxiety prediction exhibited more balanced precision-recall trade-offs, particularly with MLP-based architectures, indicating better stability for this diagnostic category. 7. Discussion and Conclusion This study presents a systematic evaluation of three methodological paradigms, including zero-shot prompting with foundation models, LoRA fine-tuning, and embeddingbased classifiers for predicting depression, PTSD, and anxiety from real-world interview transcripts. While each approach demonstrates distinct strengths and limitations, several critical themes emerge regarding their practical applicability to mental health screening. To evaluate the practical utility of GPT-4.1 Mini, we benchmarked its zero-shot performance against both open-source foundation models (e.g., Meta-LLaMA-3) and lightweight embedding-based classifiers. Despite achieving high overall accuracy (≥0.80), this align with the exsiting studies (Chen et al. 2025; Ben-Zion et al. 2025). Howerver, GPT-4.1 exhibited substantially lower recall and F1 scores across all disorders, underscoring the trade-off between general discriminative power and clinical sensitivity. Across all models, accuracy often remained high-even surpassing 0.85 in several configurations-suggesting models can reliably distinguish between diagnosed and undiagnosed individuals at a population level. However, accuracy alone fails to reflect diagnostic usefulness in clinical settings, where missing true cases (i.e., low recall) can delay care or exacerbate harm. This issue is particularly salient in our results: zero-shot Meta-LLaMA models achieved recall rates above 0.90 for all disorders but suffered from low F1 scores and poor precision, indicating frequent false positives, the lower recall align with the work (Ravenda et al. 2025). Conversely, embedding-based classifiers showed high accuracy but considerably lower recall, often below 0.3, underscoring their tendency to under-identify true cases. From a public health perspective, recall can be interpreted as clinical sensitivity-a measure of how well a model detects individuals who actually need care. Given that depression, anxiety, and PTSD often co-occur, even a single positive flag could increase a user's awareness of their condition, prompting further clinical consultation. Thus, models with high recall-even at the cost of reduced precision-may serve as effective early-warning tools in digital mental health applications. LoRA fine-tuning offers a strong trade-off between efficiency and performance. RoBERTa models fine-tuned with LoRA achieved the best overall balance of recall and F1 scores, particularly for anxiety classification (F1 = 0.563). Notably, lower-rank configurations (e.g., Rank=8) sometimes outperformed larger ranks on recall, suggesting that parameter-efficient adaptations may be well-suited for sensitive screening tasks without extensive computational costs. Embedding-based models, while simpler and more interpretable, struggled with recall, indicating limitations in their applicability for nuanced, clinical-like tasks. Limitation and Conclusion This study has several limitations that warrant discussion. First, the diagnostic labels are imbalanced, with positive cases comprising only around 20% of the dataset-and sometimes substantially less. This skew introduces challenges for both training stability and model evaluation, as high accuracy may mask low sensitivity to minority classes. Future work could incorporate reweighting strategies or synthetic oversampling to better calibrate predictions. Second, although general-purpose large language models have demonstrated strong linguistic capabilities, their understanding of mental health-specific discourse remains limited. These models may lack nuanced knowledge of psychiatric terminology, symptom expression, or the pragmatic context in which mental health conversations occur. For instance, they may misinterpret colloquial expressions of distress or overlook subtle indicators of psychological states. Domain-adapted LLMs trained on relevant corpora-such as therapy transcripts or clinical notes-can provide more reliable grounding for tasks like early detection. Third, the dataset size is relatively small compared to the scale of LLMs being evaluated. Even with parameterefficient fine-tuning (e.g., LoRA), limited training samples constrain the model's ability to generalize, particularly when attempting to update or specialize domain-relevant representations. Freezing core parameters may further amplify this bottleneck. Fourth, the length of the interviews introduces cognitive strain for decoder-based models with finite context windows. While chunking strategies partially mitigate this issue, they risk discarding context or emphasizing the wrong information. More sophisticated context-aware methods-such as memory-augmented prompting, hierarchical modeling, or relevance-guided chunking-may improve performance. Looking ahead, several directions hold promise. Beyond general classification, demographic subgroup analyses (e.g., by age, sex, or race) could reveal important disparities in model sensitivity. Incorporating emotional and linguistic signals (e.g., sentiment trajectories, affective intensity) into training data may also enhance predictive validity (Gerczuk et al. 2023; Rasool et al. 2025). Finally, analyzing misclassified cases and probing model reasoning pathways through prompt engineering or contrastive examples could reveal failure modes and inform targeted improvements in model alignment. Ethical Considerations While LLM-based personality prediction holds potential for scalable assessment, it raises critical concerns about privacy, consent, and interpretability. We caution against the use of such models in high-stakes decisions without human oversight and advocate for transparent evaluation protocols grounded in psychological theory. References Alhalaseh, Y. N.; Elshabrawy, H. A.; Erashdi, M.; Shahait, M.; Abu-Humdan, A. M.; and Al-Hussaini, M. 2021. Allocation of the "already" limited medical resources amid the COVID-19 pandemic, an iterative ethical encounter including suggested solutions from a real life encounter. Frontiers in Medicine, 7: 616277. Alhamed, F.; Ive, J.; and Specia, L. 2024. Using large language models (llms) to extract evidence from pre-annotated social media data. In Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024), 232-237. Aux ́em ́ery, Y. 2018. Post-traumatic psychiatric disorders: PTSD is not the only diagnosis. La Presse M ́edicale, 47(5): 423-430. Balcombe, L.; and De Leo, D. 2021. Digital mental health challenges and the horizon ahead for solutions. JMIR Mental Health, 8(3): e26811. Bartal, A.; Jagodnik, K. M.; Chan, S. J.; and Dekel, S. 2024. ChatGPT demonstrates potential for identifying psychiatric disorders: application to childbirth-related post-traumatic stress disorder. Ben-Zion, Z.; Witte, K.; Jagadish, A. K.; Duek, O.; HarpazRotem, I.; Khorsandian, M.-C.; Burrer, A.; Seifritz, E.; Homan, P.; Schulz, E.; et al. 2025. Assessing and alleviating state anxiety in large language models. npj Digital Medicine, 8(1): 132. Blevins, C. A.; Weathers, F. W.; Davis, M. T.; Witte, T. K.; and Domino, J. L. 2015. The posttraumatic stress disorder checklist for DSM-5 (PCL-5): Development and initial psychometric evaluation. Journal of traumatic stress, 28(6): 489-498. Bucur, A.-M. 2024. Leveraging LLM-generated data for detecting depression symptoms on social media. In International Conference of the Cross-Language Evaluation Forum for European Languages, 193-204. Springer. Cacheda, F.; Fernandez, D.; Novoa, F. J.; and Carneiro, V. 2019. Early detection of depression: social network analysis and random forest techniques. Journal of medical Internet research, 21(6): e12554. Chen, F.; Ben-Zeev, D.; Sparks, G.; Kadakia, A.; and Cohen, T. 2025. Detecting PTSD in Clinical Interviews: A Comparative Analysis of NLP Methods and Large Language Models. . Chiong, R.; Budhi, G. S.; Dhakal, S.; and Chiong, F. 2021. A textual-based featuring approach for depression detection using machine learning classifiers and social media texts. Computers in Biology and Medicine, 135: 104499. Christ, N. M.; Elhai, J. D.; Forbes, C. N.; Gratz, K. L.; and Tull, M. T. 2021. A machine learning approach to modeling PTSD and difficulties in emotion regulation. Psychiatry research, 297: 113712. Coifman, K.; and Bonanno, G. 2010. When distress does not become depression: Emotion context sensitivity and adjustment to bereavement. Journal of Abnormal Psychology, 119(3): 479-490. Coifman, K.; Bonanno, G.; Ray, R.; and Gross, J. 2007. Does repressive coping promote resilience? Affectiveautonomic response discrepancy during bereavement. Journal of Personality and Social Psychology, 92(4): 745-758. Coifman, K.; Flynn, J.; and Pinto, L. 2016. When context matters: Negative emotions predict psychological health and adjustment. Motivation & Emotion, 40(4): 602-624. DeVylder, J.; Burnette, D.; and Yang, L. 2014. Cooccurrence of psychotic experiences and common mental health conditions across four racially and ethnically diverse population samples. Psychological medicine, 44(16): 35033513. Gerczuk, M.; Triantafyllopoulos, A.; Amiriparian, S.; Kathan, A.; Bauer, J.; Berking, M.; and Schuller, B. W. 2023. Zero-shot personalization of speech foundation models for depressed mood monitoring. Patterns, 4(11). Goodell, S.; Druss, B. G.; Walker, E. R.; and Mat, M. 2011. Mental disorders and medical comorbidity. Robert Wood Johnson Foundation, 2(1). Haberer, J. E.; Trabin, T.; and Klinkman, M. 2013. Furthering the reliable and valid measurement of mental health screening, diagnoses, treatment and outcomes through health information technology. General hospital psychiatry, 35(4): 349-353. Harvey, M.; Coifman, K.; Ross, G.; Kleinert, D.; and Giardina, P. 2014. Contextually appropriate emotion-word use predicts adaptive health behavior: Emotion context sensitivity and treatment adherence. Journal of Health Psychology. Advance online publication. Hasan, M. A.; Das, S.; Anjum, A.; Alam, F.; Anjum, A.; Sarker, A.; and Noori, S. R. H. 2024. Zero- and Few-Shot Prompting with LLMs: A Comparative Study with Fine-tuned Models for Bangla Sentiment Analysis. . Hawkins, E. H. 2009. A tale of two systems: Co-occurring mental health and substance abuse disorders treatment for adolescents. Annual review of psychology, 60(1): 197-227. Held, P.; Schubert, R. A.; Pridgen, S.; Kovacevic, M.; Montes, M.; Christ, N. M.; Banerjee, U.; and Smith, D. L. 2022. Who will respond to intensive PTSD treatment? A machine learning approach to predicting response prior to starting treatment. Journal of psychiatric research, 151: 7885. Islam, M. R.; Kabir, M. A.; Ahmed, A.; Kamal, A. R. M.; Wang, H.; and Ulhaq, A. 2018. Depression detection from social network data using machine learning techniques. Health information science and systems, 6(1): 8. Kroenke, K.; Spitzer, R.; and Williams, J. 2001. The patient health questionnaire (phq-9)-overview. J Gen Intern Med, 16(9): 606-613. Kumar, P.; Lionis, C.; Andoko, D.; Rahman, Z.; Anastasaki, M.; Awankem, B.; et al. 2025. Evaluation of Diagnostic Services in Rural and Remote Areas: Bottlenecks, Success Stories, and Solutions. Journal of Surgical Specialties and Rural Practice, 6(1): 32-37. Lai, M.-C.; Kassee, C.; Besney, R.; Bonato, S.; Hull, L.; Mandy, W.; Szatmari, P.; and Ameis, S. H. 2019. Prevalence of co-occurring mental health diagnoses in the autism population: a systematic review and meta-analysis. The Lancet Psychiatry, 6(10): 819-829. Laursen, T. M.; Nordentoft, M.; and Mortensen, P. B. 2014. Excess early mortality in schizophrenia. Annual review of clinical psychology, 10(1): 425-448. Le Glaz, A.; Haralambous, Y.; Kim-Dufor, D.-H.; Lenca, P.; Billot, R.; Ryan, T. C.; Marsh, J.; Devylder, J.; Walter, M.; Berrouiguet, S.; et al. 2021. Machine learning and natural language processing in mental health: systematic review. Journal of medical Internet research, 23(5): e15708. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint . Maharjan, J.; Jin, R.; King, J.; Zhu, J.; and Kenne, D. 2025. Differential Analysis of Age, Gender, Race, Sentiment, and Emotion in Substance Use Discourse on Twitter During the COVID-19 Pandemic: A Natural Language Processing Approach. JMIR infodemiology, 5(1): e67333. Mitchell, A. J.; Rao, S.; and Vaze, A. 2011. Can general practitioners identify people with distress and mild depression? A meta-analysis of clinical accuracy. Journal of affective disorders, 130(1-2): 26-36. Mitchell, A. J.; Vaze, A.; and Rao, S. 2009. Clinical diagnosis of depression in primary care: a meta-analysis. The Lancet, 374(9690): 609-619. Nemesure, M. D.; Heinz, M. V.; Huang, R.; and Jacobson, N. C. 2021. Predictive modeling of depression and anxiety using electronic health records and a novel machine learning approach with artificial intelligence. Scientific reports, 11(1): 1980. Ohse, J.; Hadˇzi ́c, B.; Mohammed, P.; et al. 2024. GPT-4 shows potential for identifying social anxiety from clinical interview data. Scientific Reports, 14: 30498. Organization, W. H. 2001. The World Health Report 2001: Mental health: new understanding, new hope. Priya, A.; Garg, S.; and Tigga, N. P. 2020. Predicting anxiety, depression and stress in modern life using machine learning algorithms. Procedia Computer Science, 167: 1258-1267. Ramasamy Ramamurthy, S.; and Roy, N. 2018. Recent trends in machine learning for human activity recognition-A survey. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(4): e1254. Rasool, A.; Shahzad, M. I.; Aslam, H.; Chan, V.; and Arshad, M. A. 2025. Emotion-aware embedding fusion in large language models (Flan-T5, Llama 2, DeepSeek-R1, and ChatGPT 4) for intelligent response generation. AI, 6(3): 56. Ravenda, F.; Kara-Isitt, F.-Z.; Swift, S.; Mira, A.; and Raballo, A. 2025. From Evidence Mining to Meta-Prediction: a Gradient of Methodologies for Task-Specific Challenges in Psychological Assessment. In Proceedings of the 10th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2025), 242-248. Reimers, N.; and Gurevych, I. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Saidi, A.; Othman, S. B.; and Saoud, S. B. 2020. Hybrid CNN-SVM classifier for efficient depression detection system. In 2020 4th international conference on advanced systems and emergent technologies (IC ASET), 229-234. IEEE. Sarabadani, A.; Taherinia, H.; Ghadiri, N.; Shahmarvandi, E. K.; and Mousa, R. 2025. PKG-LLM: A Framework for Predicting GAD and MDD Using Knowledge Graphs and Large Language Models in Cognitive Neuroscience. Sawalha, J.; Yousefnezhad, M.; Shah, Z.; Brown, M. R.; Greenshaw, A. J.; and Greiner, R. 2022. Detecting presence of PTSD using sentiment analysis from text data. Frontiers in psychiatry, 12: 811392. Sha, Y.; Pan, H.; Xu, W.; Meng, W.; Luo, G.; Du, X.; Zhai, X.; Tong, H. H.; Shi, C.; and Li, K. 2025. MDD-LLM: Towards accuracy large language models for major depressive disorder diagnosis. Journal of Affective Disorders, 388: 119774. Spitzer, R. L.; Kroenke, K.; Williams, J. B.; and L ̈owe, B. 2006. A brief measure for assessing generalized anxiety disorder: the GAD-7. Archives of internal medicine, 166(10): 1092-1097. Srivastava, D. 2024. Utilizing Counselor-Client Dialogues to Develop a Memory-Efficient Mental Health QuestionAnswering System with Large Language Models. Master's thesis, Dublin, National . Submitted. Su, C.; Xu, Z.; Pathak, J.; and Wang, F. 2020. Deep learning in mental health outcome research: a scoping review. Translational psychiatry, 10(1): 116. Tennyson, R. L.; Kemp, C. G.; and Rao, D. 2016. Challenges and strategies for implementing mental health measurement for research in low-resource settings. International health, 1-7. Tran, T.; and Kavuluru, R. 2017. Predicting mental conditions based on "history of present illness" in psychiatric notes with deep neural networks. Journal of biomedical informatics, 75: S138-S148. Van Der Donckt, J.; Van Der Donckt, J.; Deprost, E.; Vandenbussche, N.; Rademaker, M.; Vandewiele, G.; and Van Hoecke, S. 2023. Do not sleep on traditional machine learning: Simple and interpretable techniques are competitive to deep learning for sleep scoring. Biomedical Signal Processing and Control, 81: 104429. Vanlalawmpuia, R.; and Lalhmingliana, M. 2020. Prediction of Depression in Social Network Sites Using Data Mining. In 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), 489-495. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 5998-6008. Curran Associates, Inc. Wang, Z.; Pang, Y.; and Lin, Y. 2023. Large Language Models Are Zero-Shot Text Classifiers. . Wshah, S.; Skalka, C.; Price, M.; et al. 2019. Predicting posttraumatic stress disorder risk: a machine learning approach. JMIR mental health, 6(7): e13946. Xu, X.; Yao, B.; Dong, Y.; Gabriel, S.; Yu, H.; Hendler, J.; Ghassemi, M.; Dey, A. K.; and Wang, D. 2024. Mental-llm: Leveraging large language models for mental health prediction via online text data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1): 1-32. Zhu, J.; Jin, R.; Jiang, H.; Wang, Y.; Zhang, X.; and Coifman, K. G. 2025. Leveraging Large Language Models to Analyze Emotional and Contextual Drivers of Teen Substance Use in Online Discussions. . Zhu, J.; Jin, R.; Kenne, D. R.; Phan, N.; and Ku, W.-S. 2024. User dynamics and thematic exploration in r/depression during the COVID-19 pandemic: insights from overlapping r/- SuicideWatch users. Journal of Medical Internet Research, 26: e53968.
2510.14935
On Complexity of Model-Based Derivative-Free Methods. A. Chaudhry‡ K. Scheinberg‡ Abstract. In many applications of mathematical optimization, one may wish to optimize an objective function without access to its derivatives. These situations call for derivative-free optimization (DFO) methods. Among the most successful approaches in practice are model-based trust-region methods, such as those pioneered by M.J.D Powell. While relatively complex to implement, these methods are now available in standard scientific computing platforms, including MATLAB and SciPy. However, theoretical analysis of their computational complexity lags behind practice. In particular, it is important to bound the number of function evaluations required to achieve a desired level of accuracy. In this paper we systematically derive complexity bounds for classical model-based trust-region methods and their modern variations. We establish, for the first time, that these methods can have the same worst case complexity than any other known DFO method. MSC Classification: 90C30, 90C56. 1 Introduction. Derivative Free Optimization (DFO) is an area of optimization that concerns with developing optimization methods based purely on function value computation without applying any direct differentiation. Other related areas of optimization are called zeroth-order optimization (because it relies of zeroth- order oracles) and black-box optimization. The applications of DFO are numerous and rapidly growing with the sophistication of engineering models. More and more systems are being modeled and evaluated by complex computer codes, which brings the natural next step - optimizing such systems. The application areas range from well established, such as aerospace engineering, chemical engineering, civil and environmental engineering to more recent, such as reinforcement learning, machine learning and simulation optimization. A broad list of applications can be found in [1] and [17]. There is a rich literature of DFO starting from around mid 90s which is rapidly growing with new interest spurred by new applications in engineering, machine learning and artificial intelligence. Aside from an increasing number of papers, there are two books [9] and [1], and a survey on the topic [12]. There are three major classes of DFO algorithms: 1. directional direct search methods, 2. gradient descent based on simplex gradients which includes finite difference approximation and 3. interpolation based (also known as model based) trust region methods. The first two classes are rather simple to implement and analyze which makes them popular choices, especially in the literature and with recent applications to machine learning where they are easy to adopt. They operate by sampling a certain number of function values at some chosen points around the current iterate and then make a step according to the information obtained from the samples. The way the sample sets are constructed is predetermined (but can be random) and is meant to ensure that a descent direction is identified either deterministically or in expectation. Such method is then analyzed based on this "guaranteed descent" and the effort it takes to obtain it. The third type of methods pioneered in the 90s by M.J.D. Powell [16, 15], is much more complex, yet is arguably the most effective in practice for many applications [13]. These methods trade off exploration and exploitation by reusing function values computed in the past iterates and adapting where the function should be sampled next based on information accumulated so far. The complex structure of the algorithms, especially as presented in Powell’s papers, resulted in scarcity of theoretical analysis, especially in terms of complexity. Some underlying theory of asymptotic convergence was developed in [9], however, methods analyzed there and in subsequent works rely on a "criticality step" for the theory to work, which departs from the main idea of the algorithms. There is still a lack of understanding of how these methods work and most importantly if they enjoy favorable complexity bounds. In this paper, we focus on the complexity of model-based derivative free algorithms that aim to solve unconstrained nonlinear optimization problems of the form min x∈Rn ϕ(x), (1.1) ‡School of Industrial and Systems Engineering, Georgia Tech arXiv:2510.14935v1 [math.OC] 16 Oct 2025 where ϕ : Rn →R is a smooth objective function that may or may not be convex. The key premise of these methods is to economize on function values, possibly at the expense of additional linear algebra, since in most applications the function evaluations cost dominates all else. The complexity will be measured in terms of the number of function evaluations needed to achieve an ϵ-stationary point, that is a point x for which ∥∇ϕ(x)∥≤ϵ. Throughout the paper, we assume that we have access to a zeroth-order oracle f(x) ≈ϕ(x) which may or may not be exact. In practice, one may wish to tolerate an inexact oracle but we will sometimes treat the exact case for ease of exposition. The following are the standard assumptions on ϕ(x) for our setting. Assumption 1.1 (Lower bound on ϕϕϕ). The function ϕ is bounded below by a scalar ϕ⋆on Rn. Assumption 1.2 (Lipschitz continuous gradient). The function ϕ is continuously differentiable, and the gradient of ϕ is L-Lipschitz continuous on Rn, i.e., ∥∇ϕ(y) −∇ϕ(x)∥≤L∥y −x∥for all (y, x) ∈Rn × Rn. The paper is organized as follows. In Section 2 we present the basic model based trust region method and its complexity analysis for the case when models are based on simple finite difference gradient approximations. In Section 3 we introduce a "geometry-correcting" trust region method in the spirit of those proposed by Powell. We will show how the analysis of complexity can be adapted from that of the basic method to the more sophisticated one. In Section 4 and 5 we derive new results that help us establish that the complexity of the "geometry- correcting" trust region method is competitive with that of the basic trust region method. Finally, in Section 6 we develop and analyze model based trust region methods in random subspaces and show that these methods have complexity bounds that are as good as those for any other known DFO method. 2 Basic trust-region method and analysis. We first present and analyze a basic trust region method which, at every iteration k ∈{0, 1, . . . }, constructs a quadratic model to approximate ϕ(x) near the iterate xk (2.1) mk(xk + s) = ϕ(xk) + gk(xk)T s + 1 2sT Hk(xk)s. The model is then minimized (approximately) over the trust region B(xk, ∆k) - a Euclidean ball around xk of a radius ∆k. In what follows we use the abbreviation gk := g(xk) = ∇mk(xk) and Hk := H(xk) = ∇2mk(xk).1 Algorithm 1: Trust region method based on fully-linear models Inputs: A zeroth-order oracle f(x) ≈ϕ(x), a starting point x0, TR radius ∆0, and hyperparameters η1 ∈(0, 1), η2 > 0, and γ ∈(0, 1). for k = 0, 1, 2, · · · do 1 Compute model mk and a trial step xk + sk where sk ≈arg mins{mk(xk + s) : s ∈B(0, ∆k)}. 2 Compute the ratio ρk as ρk = f(xk) −f(xk + sk) m(xk) −m(xk + sk). 3 Update the iterate and the TR radius as (xk+1, ∆k+1) ← ( (xk + sk, γ−1∆k) if ρ ≥η1 and ∥gk∥≥η2∆k, (xk, γ∆k) otherwise. Algorithm 1 is essentially a standard trust region method, well studied in the literature [8], except for the condition ∥gk∥≥η2∆k. This condition is not used in classical TR methods where ∇mk(xk) = gk = ∇ϕ(xk). However, for TR methods based on models constructed in derivative free setting and, more generally, models based on inexact gradient estimates, the TR radius ∆k serves two roles - that of the step size parameter and that of the control of the gradient accuracy. Condition ∥gk∥≥η2∆k is essential for the latter role to ensure that the gradient accuracy stays on track as the norm of the gradient reduces. We note here that much of prior DFO literature replaces this condition with a much less practical and cumbersome "criticality step". Thus one 1Note that ϕ(xk) is used in the definition (2.1) but not in minimization of mk, thus there is no need to compute it. of the contributions of this paper is performing analysis of Algorithm 1 and and other model-based trust region algorithms without the "criticality step". For all TR algorithms studied in this paper we will make the following standard assumption on the models mk and their minimization. Assumption 2.1. 1. The trust region subproblem is solved sufficiently accurately in each iteration k so that xk + sk provides at least a fraction of Cauchy decrease, i.e. for some constant 0 < κfcd < 1 (2.2) mk(xk) −mk(xk + sk) ≥κfcd 2 ∥gk∥min  ∥gk∥ ∥Hk∥, ∆k  . 2. There exists a constant κbhm > 0 such that, for all xk generated by Algorithm 1, the spectral norm of the Hessian of the model ∥Hk∥≤κbhm. Condition (2.2) is commonly used in the literature and is satisfied by the Cauchy point with κfcd = 1. See [8, Section 6.3.2] for more details. The following definition, also widely used in the literature [9], helps us identify the requirements on models mk that are critical for convergence. Definition 2.2 (Fully-linear model). Given a ball around point x of radius ∆, B(x, ∆), we say that model m(x + s) is a κef, κeg-fully linear model of ϕ(x + s) on B(x, ∆) if ∥∇m(x) −∇ϕ(x)∥≤κeg∆ and |m(x + s) −ϕ(x + s)| ≤κef∆2 for all ∥s∥≤∆. We temporarily make the following additional assumptions for the analysis of Algorithm 1. Assumption 2.3. At each iteration f(xk) −f(xk + sk) = ϕ(xk) −ϕ(xk + sk), in other words, exact function reduction is computed, when computing ρk. The next lemma is key in the analysis of any trust region algorithm and specifically Algorithm 1. It establishes that once ∆k is sufficiently small compared to the gradient norm, a successful step is ensured. Lemma 2.4 (small ∆k implies successful step). Under Assumptions 1.2 and 2.3, if mk is κef, κeg-fully-linear and (2.3) ∆k ≤C1∥∇ϕ(xk)∥, where C1 =  max  η2, κbhm, 2κef (1 −η1)κfcd  + κeg −1 , then ρ ≥η1, ∥gk∥≥η2∆k, thus iteration k is successful and xk+1 = xk + sk. The proof is a simplification of those in the stochastic trust region literature where condition ∥gk∥≥η2∆k is widely used [4, 5]. Proof. By the assumption that mk is fully-linear, we have ∥∇ϕ(xk)∥≤∥gk∥+ κeg∆k. From (2.3) we conclude that (max{κbhm, η2} + κeg)∆k ≤∥∇ϕ(xk)∥≤∥gk∥+ κeg∆k max{κbhm, η2}∆k ≤∥gk∥. This implies that the first condition of the successful step, namely ∥gk∥≥η2∆k, is satisfied. From (2.2) we have mk(xk) −mk(xk + sk) ≥κfcd∥gk∥∆k/2. Thus, using the assumption that mk is fully-linear again, ρk = m(xk) −m(xk + sk) + (ϕ(xk) −m(xk)) −(ϕ(xk + sk) −m(xk + sk)) m(xk) −m(xk + sk) ≥1 − κef∆2 k m(xk) −m(xk + sk) ≥1 − κef∆2 k κfcd∥gk∥∆k/2 ≥1 − 2κef∆k κfcd(∥∇ϕ(xk)∥−κeg∆k) ≥η1, where the last step is true because ∥∇ϕ(xk)∥≥ 2κef (1−η1)κfcd + κeg  ∆k follows from (2.3). Lemma 2.5 (successful iteration implies function reduction). Let Assumptions 1.2 and 2.3 hold. If the iteration k is successful, then (2.4) ϕ(xk) −ϕ(xk+1) ≥C2∆2 k, where C2 = η1η2κfcd 2 min  η2 κbhm , 1  ; otherwise, we have xk+1 = xk and ϕ(xk) −ϕ(xk+1) = 0. Proof. When the trial point is accepted, we have ρk ≥η1 and ∥gk∥≥η2∆k, which guarantees ϕ(xk) −ϕ(xk+1) = ϕ(xk) −ϕ(xk + sk) = f(xk) −f(xk + sk) ≥η1 m(xk) −m(x+ k )  ≥η1κfcd 2 ∥gk∥min  ∥gk∥ ∥Hk∥, ∆k  ≥η1κfcd 2 η2∆k min η2∆k κbhm , ∆k  . For a given ϵ > 0, let Kϵ be the first iteration of Algorithms 1 for which ∥∇f(xk)∥≤ϵ. Define the index sets Sϵ :={k ∈{0, . . . , Kϵ −1} : iteration k is successful}, Uϵ :={k ∈{0, . . . , Kϵ −1} : iteration k is unsuccessful}. Lemma 2.6 (Lower bound on ∆k). Assuming mk is κef, κeg-fully linear for each k = 0, . . . , Kϵ −1, ∆k ≥γC1ϵ for all k ∈{0, . . . , Kϵ −1}. Proof. According to Lemma 2.4, any iteration k ∈{0, . . . , Kϵ −1} must be successful when ∆k ≤C1ϵ. Thus, given ∆0 ≥γC1ϵ, and by the mechanism of Algorithm 1 we must have ∆k ≥γC1ϵ for all k ∈{0, . . . , Kϵ −1}. The following bound holds under the result of Lemma 2.6. Lemma 2.7 (Bound of successful iterations). From ∆k ≥γC1ϵ for all k ∈{0, . . . , Kϵ −1} we have |Sϵ| ≤f(x0) −f ⋆ C2(γC1ϵ)2 Proof. Using Lemma 2.5 we have ϕ(x0) −ϕ⋆≥ Kϵ−1 X k=0 ϕ(xk) −ϕ(xk+1) ≥ X k∈Sϵ C2∆2 k > |Sϵ|C2(γC1ϵ)2 which gives the result of the lemma. Lemma 2.8. Assuming that the initial trust-region radius ∆0 ≥γC1ϵ, |Uϵ| ≤|Sϵ| + ⌈  logγ C1ϵ ∆0  ⌉. Proof. We observe that ∆Kϵ = γ−|Sϵ|γ|Uϵ|∆0 ≥C1ϵ, The last inequality follows from the fact that ∆Kϵ−1 ≥γC1ϵ and the Kϵ −1-th iteration must be successful. Thus the number of unsuccessful iterations can be bounded using the number of successful ones rearranging the terms and taking the logarithm. Summarizing the above lemmas the following complexity result immediately follows. Theorem 2.9. Let Assumptions 1.2 and 2.3 hold. Assuming that the initial trust-region radius ∆0 ≥γC1ϵ, and mk is κef, κeg-fully linear for each k = 0, . . . , Kϵ −1, we have the bound (2.5) Kϵ = |Sϵ| + |Uϵ| ≤2(f(x0) −f ⋆) C2(γC1ϵ)2 + logγ C1ϵ ∆0 = 4κbhm  max n η2, κbhm, 2κef (1−η1)κfcd o + κeg 2 γ2η1η2κfcd min{η2, κbhm} · ϕ(x0) −ϕ⋆ ϵ2 + logγ C1ϵ ∆0 . 2.1 The case of inexact zeroth-order oracle. Now let us assume that for any x the algorithm has access to a zeroth-order oracle that computes f(x) such that |f(x) −ϕ(x)| ≤ϵf. It is easy to extend Lemmas 2.4 and 2.5 to this case. The first modification will result in a slightly different definition in constant C1, which we denote as ¯C1 and which is (2.6) ¯C1 =  max  η2, κbhm, 2κef + 4ϵf (1 −η1)κfcd  + κeg −1 . The second modification will result in (2.4) in the statement of Lemma 2.5 changing as follows (2.7) ϕ(xk) −ϕ(xk+1) ≥C2∆2 k −2ϵf, where C2 = η1η2κfcd 2 min{ η2 κbhm , 1}. Thus, under the additional assumption that ∆k ≥ q 2ϵf τC2 for some τ ∈(0, 1) (2.7) can be further stated as (2.8) ϕ(xk) −ϕ(xk+1) ≥(1 −τ)C2∆2 k, where C2 = η1η2κfcd 2 min{ η2 κbhm , 1}. The new versions of Lemmas 2.4 and 2.5 can be applied as before, as long as it can be ensured that ∆k remains not smaller than q 2ϵf τC2 . Due to Lemma 2.4 and the update mechanism for ∆k this is ensured as long as ∥∇ϕ(xk)∥≥ϵ for sufficiently large ϵ. Theorem 2.10. Let Assumptions 1.2 and 2.3 hold. For any ϵ > q 2ϵf γ2τC2 ¯ C2 1 , assuming that the initial trust- region radius ∆0 ≥γ ¯C1ϵ, where ¯C1 is defined in (2.6) and mk is κef, κeg-fully linear for each k = 0, . . . , Kϵ −1, where Kϵ is the first iteration for which ∥∇ϕ(xk)∥≤ϵ, then we have the bound (2.9) Kϵ = |Sϵ| + |Uϵ| ≤ 2(ϕ(x0) −ϕ⋆) (1 −τ)C2(γ ¯C1ϵ)2 + logγ ¯C1ϵ ∆0 = 4κbhm  max n η2, κbhm, 2κef +4ϵf (1−η1)κfcd o + κeg 2 (1 −τ)γ2η1η2κfcd min{η2, κbhm} · ϕ(x0) −ϕ⋆ ϵ2 + logγ ¯C1ϵ ∆0 . 2.2 Ensuring fully-linear models on each iteration and complexity implications. Let us now describe the most straightforward way of constructing fully linear models in derivative free setting. First we show that it can be achieved by using a sufficiently accurate gradient approximation. The following lemma easily follows from Assumption 2.3 and the smoothness of ϕ(x). Lemma 2.11 (Fully linear models). Under Assumption 2.3 if ∥∇m(x) −∇ϕ(x)∥≤κeg∆then mk(xk + s) is a κef, κeg-fully linear model of ϕ(x + s) on B(x, ∆) with κef = κeg + L+κhbm 2 . Thus we focus on constructing g(x) = ∇m(x). We assume that we have access to an inexact zeroth-order oracle: f(x) ≈ϕ(x) such that |f(x) −ϕ(x)| ≤ϵf. Let us consider a quadratic model (2.1) with g(x) computed by a finite difference scheme: g(x) = n X i=1 f(x + δui) −f(x) δ ui, where ui is the i-th column of a unitary matrix, and the Hessian approximation H(x) being an arbitrary symmetric matrix, such that ∥H(x)∥2 ≤κbhm. From analysis of the finite difference gradient approximation error (see e.g.[3]) we have ∥g(x) −∇ϕ(x)∥≤ √nLδ 2 + 2√nϵf δ . Thus, by selecting δ ≥2 rϵf L we ensure ∥g(x) −∇ϕ(x)∥≤√nLδ = κeg∆ κeg = √nL δ ∆ In conclusion, when the finite difference scheme is used with δ = ∆k, at each iteration k, then the resulting model is κef, κeg-fully linear in B(xk, ∆k) with κeg = √nL and κef = L+κhbm 2 + √nL, as long as ∆k ≥2 q ϵf L . This lower bound on ∆k is, essentially without loss of generality, implied by the condition of Theorem 2.10 that ∆k ≥ q 2ϵf τC2C2 1 since q 2ϵf τC2C2 1 can be assumed to be larger than 2 q ϵf L without notable impact on complexity. We now can apply Theorem 2.10 to bound the total number of calls to the zeroth order oracle. We are specifically interested in the dependence of this oracle complexity on n and ϵ. The dependence on ϵ is explicit in the bound on Kϵ which is O(ϵ−2). There are several constants in the bound in Theorem 2.10, however, only κeg and κef depend on n. Other constants are not dimension dependent2. Both κef and κeg scale in the same way with n. The total number of iterations Kϵ is bounded as O  κ2 eg ϵ2  and each iteration requires n + 1 function evaluations. Thus the total worse case oracle complexity to achieve ∥∇ϕ(xk)∥≤ϵ for any ϵ > q 2ϵf γτC2C2 1 is Nϵ = O(n2ϵ−2). Alternatively, choosing δ = ∆k √n, ensures that κeg = L and the total complexity reduces to O nϵ−2 , but since we also have to have δ ≥2 q ϵf L , this implies that convergence holds only for ϵ > q nϵf γ2C2 1L. We conclude that if function noise ϵf is appropriately small compared to the desired accuracy ϵ and a finite difference method is used then choosing δ = ∆ √n is the better strategy. The main drawback of the finite difference approach is lack of flexibility in reusing the past function values. Thus this method, while simple to analyze, is not as practical as the method we discuss in the next section. We will see that the radius of sampling will be the same as the trust region radius for the reasons that will be understood when we describe the method. 3 Geometry-correcting method based on Lagrange polynomials. Before introducing the method we wish to analyze in this paper we need to discuss an important tool utilized by these algorithms - Lagrange polynomials. The concepts and the definitions below can be found in [9]. Lagrange polynomials and associated concepts will be defined with respect to a space of polynomials P of dimension p. Typically P is either the set of linear or quadratic polynomial, but it also can be a set of quadratic polynomials with a pre-defined Hessian sparsity pattern. Definition 3.1. Lagrange Polynomials Given a space of polynomials P of dimension p and a set of points Y = {y1, . . . , yp} ⊂Rn, a set of p polynomials ℓj(s) in P for j = 1, . . . , p, is called a basis of Lagrange polynomials associated with Y, if ℓj(yi) = δij =  1 if i = j, 0 if i ̸= j. If the basis of Lagrange polynomials exists for the given Y then Y is said to be poised. Definition 3.2. Λ–poisedness Given a space of polynomials P of dimension p, Λ > 0, and a set B ⊂Rn. A poised set Y = {y1, . . . , yp} is said to be Λ–poised in B if Y ⊂B and for the basis of Lagrange polynomials associated with Y, it holds that Λ ≥ max j=1,...,p max s∈B |ℓj(s)|. The following assumptions will be applied to Algorithm 2 on top of Assumption 2.3. 2For some problems L can be dimension dependent but we do not consider this here. Algorithm 2: Geometry-correcting algorithm Inputs: A zeroth-order oracle f(x) ≈ϕ(x), a space of polynomials P of dimension p, ∆0, x0, γ ∈(0, 1) η1 > 0, η2 > 0, Λ > 1. Initialization An initial set Y0 such that |Y0| ≤p, and the function values f(x0), f(x0 + yi), yi ∈Y0. for k = 0, 1, 2, . . . do 1 Build a quadratic model mk(xk + s) as in (2.1) using f(xk) and f(xk + yi), yi ∈Yk. 2 Compute a trial step sk and ratio ρk as in Algorithm 1. 3 Successful iteration: ρk ≥η1 and ∥gk∥≥η2∆k. Set xk+1 = xk + sk, ∆k+1 = γ−1∆k. j∗ k = arg max j=1,...,p ∥yj∥. Replace the furthest interpolation point and shift Yk+1 = (Yk \ {yj∗ k} ∪{0}) −sk 4 Unsuccessful iteration: ρk < η1 or ∥gk∥< η2∆k. Set xk+1 = xk and perform the first applicable steps below. • Geometry correction by adding a point: If |Yk| < p, Yk+1 = Yk ∪{sk}. • Geometry correction by replacing a far point: Let j∗ k = arg maxj=1,...,p ∥yj∥. If ∥yj∗ k∥> ∆k ⇒Yk+1 = Yk \ {yj∗ k} ∪{sk} . • Geometry correction by replacing a "bad" point: Otherwise, construct the set Lagrange Polynomials {ℓj(x), i = 1, . . . , p} in P for the set Yk and find max value (i∗ k, s∗ k) = arg max j=1,...,p,s∈B(0,∆k) |ℓj(s)|. If |ℓi∗ k(s∗ k)| > Λ, compute f(xk + s∗ k), Yk+1 = Yk \ {yi∗ k} ∪{s∗ k}. • Geometry is good: Otherwise ∆k+1 = γ∆k. Assumption 3.3. There exist κeg such that at each iteration where Yk ⊆B(0, ∆k) is Λ-poised, ∥gk −∇ϕ(xk)∥≤κeg∆k and thus mk is κef, κeg-fully linear in B(xk, ∆k) with κef = κeg + L+κhbm 2 We will show how this property follows from the properties of Lagrange polynomials later in the section after specifying a way for the model mk to be constructed. Lemmas 2.4 applies to Algorithm 2 with ¯C1 defined by (2.6) and Lemma and 2.5 applies with function reduction defined in (2.7). We no longer assume that mk is fully linear at each iteration, but we still have the following key lower bound on ∆k. Lemma 3.4 (Lower bound on ∆k). For any ϵ > q 2ϵf γ2τC2 ¯ C2 1 , for some τ ∈(0, 1) let Kϵ be the first iteration for which ∥∇ϕ(xk)∥≤ϵ. Then for all k = 1, . . . , Kϵ −1 ∆k ≥γ ¯C1ϵ. Proof. From the mechanics of the unsuccessful iteration ∆k is decreased only when Yk ∈B(0, ∆k) and is Λ-poised, thus mk is fully linear on iteration k. Since from Lemma 2.4 ∆k < ¯C1∥∇ϕ(xk)∥and fully linear mk imply a successful step, and ∥∇ϕ(xk)∥> ϵ this means that ∆k cannot be decreased below γC1ϵ. Let Sϵ be defined as in (??) and Ud ϵ := {k ∈{0, . . . , Kϵ −1} : iteration k is unsuccessful and ∆k is decreased}. First of all, due to Lemma 3.5, the bound of Lemma 2.7 on the number of successful iterations holds. We also have the following bound. Lemma 3.5. |Ud ϵ | ≤|Sϵ| + ⌈  logγ C1ϵ ∆0  ⌉. Proof. The proof is identical to Lemma 2.8 with Ud ϵ instead of Uϵ, since ∆k is unchanged on iterations k ∈Uϵ \ Ud ϵ . The final challenge is to count the number of iterations in Uϵ\Ud ϵ . We call these iterations geometry-correcting iterations, because they are designed to improve the properties of the interpolation set. Essentially, without loss of generality we assume that Yk is either poised or can be completed to form a poised set of p points. This is because Yk can be made so by throwing away points or by arbitrary small perturbations. The following result bounds the number of consecutive geometry-correcting iterations. Theorem 3.6. Let P be the set of linear polynomials (with dimension p = n). Then the number of oracle calls in consecutive unsuccessful iterations such that k ∈Uϵ \ Ud ϵ is at most 3n. Proof. First, after at most n iterations and n oracle calls, Yk contains n points all of which have norm at most ∆k. Let Yk = {y1, . . . , yn} and let Yk be the matrix with ith column yi ∈Yk, i = 1, . . . , n. The set of linear Lagrange polynomials for Yk is then defined by ℓk i (s) = sT (Y −T k )i. Define Ik = {i ∈[n] | ∥yi∥= ∆k, yT i yj = 0 ∀j ∈[n] \ {i}}. That is Ik is the set of points in Yk whose norm is ∆k and that are orthogonal to all other points in Yk. We claim that (i) if |Ik| = n then the set is 1–poised and thus k is not geometry-correcting, i.e, k ̸∈Uϵ \ Ud ϵ and (ii) if |Ik| < n and the iteration k ∈Uϵ \ Ud ϵ then at least one i will be added to the set |I| in each iteration. The result follows from the combination of these two claims. Suppose |Ik| = n, then all yi are orthogonal. Then we have 1 ∆k Yk is a matrix with orthonormal columns, thus it is an orthogonal matrix and ( 1 ∆k Yk)−1 = ( 1 ∆k Yk)T , which implies (∆Yk)−T = 1 ∆k Yk. Thus for all i, max s∈B(0,∆k) |ℓk i (s)| = max s∈B(0,∆k) |sT (Y −T k )i| = ∆k∥(Y −T k )i∥= 1 ∆k ∥(Yk)i∥= 1. Suppose |Ik| < n and k is geometry-correcting, then from the mechanism of the algorithm, the set Yk is not Λ–poised (otherwise k would be in Ud ϵ or in Sϵ). To show that |Ik| grows, we will show that in geometry-correcting iterations Ik does not lose any members and that at least one index i will be added to Ik. To show that Ik does not lose any of its members on geometry-correcting iterations, let i∗ k be the index of the point chosen for replacement, i.e. i∗ k = arg maxi∈[n] maxs∈B(0,∆k) |ℓk i (s)|. From the mechanics of the algorithm we have ∥yi∗ k∥≤∆k and |ℓi∗ k(s∗ k)| > Λ. By permutation, we may assume that Ik = [|Ik|]. We can find an orthonormal basis by extending the orthonormal set { 1 ∆k yi|i ∈Ik}. Changing to this new basis, we can write Yk = ∆kI 0 0 ˜Yk  where I is the identity matrix of size |Yk| and ˜Yk is of size n −|Yk|. We can then observe Y −T k = " 1 ∆k I 0 0 ˜Yk −T # . It then follows that for i ∈Ik, ∥(Y −T k )i∥= 1 ∆k and therefore, maxs∈B(0,∆k) ℓk i (s) = ∆k∥(Y −T k )i∥= 1. Thus since maxs∈B(0,∆k) |ℓk i∗ k(s)| > Λ ≥1, we cannot have i∗ k ∈Ik. Thus Ik does not lose any of its members on geometry-correcting iterations. To conclude, we just need to show that at least one i is added to Ik+1 compared to Ik during a geometry-correcting iteration. By the the mechanism of a geometry-correcting iteration of Algorithm 2, Yk+1 = Yk \ {yi∗ k} ∪{s∗ k} where s∗ k = arg max x∈B(0,∆k) |sT (Y −T k )i∗ k| = ∆k (Y −T k )i∗ k ∥(Y −T k )i∗ k∥. It is easy to see that s∗ k := ∆k (Y −T )i∗ ∥(Y −T )i∗∥satisfies (i) ∥s∗ k∥= ∆and (ii) yT j s∗ k = 0 ∀j ∈[n] \ {i∗ k}. Since yi∗ k is removed from the set when s∗ k is added, it follows that Ik+1 = Ik ∪{i∗}, thus the cardinality of Ik increases by at least one. Since only n such iterations are possible consecutively, and since each geometry-correcting iteration uses at most two oracle calls, the total number of oracle calls is at most 3n. We now can state the total bound on the oracle complexity for Algorithm 2 in the case when P is the space of linear polynomials. This bound follows from the previously derived bounds on |Sϵ| and |Ud ϵ |, the fact that each iteration in Sϵ and Ud ϵ requires only on function evaluation and Theorem 3.6 which established that there are at most 3n evaluations between each iteration in Sϵ ∪Ud ϵ . Theorem 3.7. Let Assumptions 1.2 and 3.3 hold. Let ¯C1 and C2 be defined as in (2.6) and (2.7), respectively. For any ϵ > q 2ϵf γ2τC2 ¯ C2 1 , assuming that the initial trust-region radius ∆0 ≥γ ¯C1ϵ, then Algorithm 2 achieves ∥∇ϕ(xk)∥≤ϵ after at most Nϵ function evaluations, where (3.1) Nϵ = 3n[|Sϵ| + |Ud ϵ |] ≤ 2(f(x0) −f ⋆)n (1 −τ)C2(γ ¯C1ϵ)2 + 3n logγ C1ϵ ∆0 = 4κbhm  max n η2, κbhm, 2κef +4ϵf (1−η1)κfcd o + κeg 2 (1 −τ)γ2η1η2κfcd min{η2, κbhm} · 3n(ϕ(x0) −ϕ⋆) ϵ2 + 3n logγ ¯C1ϵ ∆0 . 4 Ensuring fully-linear models via Lagrange polynomials. We now show how to ensure Assumption 3.3 and derive corresponding κeg. For this we specify a way to construct the model m(x) for Algorithm 2. We impose the interpolation conditions by constructing a polynomial r(x) ∈P such that (4.1) rk(y) = f(xk + y) −f(xk), ∀y ∈Yk. Note that when |Yk| < p rk(x) is not uniquely defined. There are many practically interesting alternatives of how r(x) should be selected. For example, in the case when P is the space of quadratic polynomials and |Yk| < n(n+1) 2 +n the choices include selecting a quadratic model with the smallest Frobenius norm of the Hessian [9], smallest Frobenius norm of the change of the Hessian [14], sparse Hessian [2], etc. For the purposes of the theory we explore here, these choices matter only if it can be established that they guarantee fully linear models. We will not explore these specific choices here and will rely only on fully linear models with |Yk| = p. We refer the reader to [2] for further details on |Yk| < p case. We will mainly focus on the model that is constructed via linear interpolation, where rk(s) = gT k s for some gk defined by (4.1). In this case P is a space of linear polynomials. The model mk(x) is then defined as (4.2) mk(xk + s) = ϕ(xk) + rk(s) + sT Hks. The following theorem can be found in [9]. Theorem 4.1. Let Y = {y1, . . . , yn} be Λ–poised in B(0, ∆). For r(x) - an affine polynomial such that r(x + y) = ϕ(x + y) ∀y ∈Y ∪{0}, we have ∥g(x) −∇ϕ(x)∥≤L 2 nΛ∆ and for all s ∈B(0, ∆) |ϕ(x + s) −m(x + s)| ≤L 2 nΛ∆2. Thus we automatically have the following result. Proposition 4.2. Assume on iteration k, that Yk is be Λ–poised in B(0, ∆k) for some Λ > 1 then mk defined by (4.2) is κeg, κef-fully linear in B(xk, ∆k) with κeg = nLΛ 2 and κef = κeg + L+κhbm 2 . Substituting these values in Theorem 3.7 and ignoring dependencies on constants, aside from n and ϵ we obtain the complexity Nϵ = O n3ϵ−2 . We will now show that by improving upon the analysis in [9] we are able to bring the complexity down to O n2ϵ−2 which is competitive with complexity of the trust region methods based on finite difference models. Theorem 4.3. Let Y = {y1, . . . , yn} be Λ–poised in B(0, ∆). Let r(s) = gT s be an affine function satisfying r(yi) = ϕ(x + yi) −ϕ(x) for i = 1, . . . , n. Then ∥∇ϕ(x) −g∥≤√nL∆ p n(Λ2 −1) + 2. In particular, if Λ = 1 + O( 1 n) then we have ∥g −∇ϕ(x)∥= O(√nL∆). Proof. Let ¯ϕ(s) = ϕ(x + s) −ϕ(x), then ¯ϕ(0) = 0 and ∇ϕ(x) = ∇¯ϕ(0). Let Y be the matrix with ith column equal to yi, let D be the diagonal matrix such that Dii = ∥yi∥, and let ¯ϕ(Y ) be the vector with ith entry equal to ¯ϕ(yi). By the interpolation condition imposed on m(x + s), we have ¯ϕ(Y ) = Y T g and thus, g = Y −T ¯ϕ(Y ). Then we have ∇ϕ(x) −g = ∇¯ϕ(0) −Y −T ¯ϕ(Y ) and we can bound ∥∇ϕ(x) −g∥= ∥Y −T D(D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y ))∥≤√n∥Y −T D∥∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞. Thus the remainder of the proof is split between bounding ∥Y −T D∥and ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞. To bound ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞, fix any nonzero y ∈B(0, ∆) and define h(t) = ¯ϕ(ty). Observe that h(0) = ¯ϕ(0) = 0 and h(1) = ¯ϕ(y). By the fundamental theorem of calculus we have h(1) = R 1 0 h′(t)dt. We also have by the chain rule that h′(t) = yT ∇¯ϕ(ty), thus ¯ϕ(y) = R 1 0 yT ∇¯ϕ(ty)dt. By Lipschitzness, ∥∇¯ϕ(0) −∇¯ϕ(ty)∥≤Lt∥y∥. Thus |yT ∇¯ϕ(0) −yT ∇¯ϕ(ty)| ≤Lt∥y∥2. Thus we can bound |yT ∇¯ϕ(0) −¯ϕ(y)| = Z 1 0 yT ∇¯ϕ(0) −yT ∇¯ϕ(ty)dt ≤ Z 1 0 Lt∥y∥2dt = 1 2L∥y∥2. Dividing by the norm of ∥y∥, we have | yT ∥y∥∇¯ϕ(0) − ¯ϕ(y) ∥y∥| ≤ 1 2L∥y∥≤ 1 2L∆. Observing that for y = yi, | yT ∥y∥∇¯ϕ(0) − ¯ϕ(y) ∥y∥| is precisely the ith entry of the vector D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y ), we can conclude ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞≤1 2L∆. To bound ∥Y −T D∥, Let (Y −T )i denote the ith column of Y −T . One can check that for all i = 1, . . . , n, the Lagrange polynomials satisfy ℓi(s) = sT (Y −T )i. Therefore, the poisedness condition implies that maxs∈B(0,∆) ℓi(s) = maxs∈B(0,∆) sT (Y −T )i = ∆∥(Y −T )i∥≤Λ. Thus for i = 1, . . . , n, we have ∥(Y −T )i∥≤Λ ∆. Let A = DY −1Y −T D and observe that Tr(A) = Tr(DY −1Y −T D) = Pn i=1 ∥(Y −T )i∥2∥yi∥2 ≤nΛ2. Also observe that Tr(A−1) = Tr(D−1Y T Y D−1) = Pn i=1 ∥yi∥2 ∥yi∥2 = n. Denote by λi > 0 the eigenvalues of A for i = 1, . . . , n and observe that 1 λi are the eigenvalues of A−1. Then Pn i=1 λi + 1 λi = Tr(A) + Tr(A−1) ≤n(Λ2 + 1). For all positive numbers λ, we have λ + 1 λi ≥2. Let i′ ∈[n] maximize λi + 1 λi . Then λi′ + 1 λi′ = n X i=1 λi + 1 λi ! −  X i̸=i′ λi + 1 λi  ≤n(Λ2 + 1) −2(n −1) = n(Λ2 −1) + 2. Thus we have ∥A∥= max i∈[n] λi ≤max i∈[n] λi + 1 λi ≤n(Λ2 −1) + 2. Then observe ∥Y −T D∥= ∥A∥ 1 2 ≤ p n(Λ2 −1) + 2. Combining the bounds for ∥Y −T D∥and ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞, we have ∥∇ϕ(x) −g∥= ∥∇¯ϕ(0) −g∥≤1 2 √nL∆ p n(Λ2 −1) + 2. In particular, for Λ = 1 + O( 1 n), we have p n(Λ2 −1) + 2 = O(1) and thus ∥∇ϕ(0) −g∥= O(√nL∆). We thus have a result that Λ-poisedness implies that mk, defined as in (4.2) and r(s) defined by (4.1) with exact zeroth order oracle, is κeg, κef-fully linear with κeg = √nL∆ p n(Λ2 −1) + 2 and κef as in Lemma 2.11. Let us now extend the result to the inexact zeroth order oracle where |f(x) −ϕ(x)| ≤ϵf for all x. Theorem 4.3 is modified as follows. Theorem 4.4. Let Y = {y1, . . . , yn} be such that Y is Λ–poised in B(0, ∆). Let r(s) = gT s be an affine function satisfying r(yi) = f(x + yi) −f(x) for i = 1, . . . , n, with |f(x + y) −ϕ(x + y)| ≤ϵf for y ∈Y ∪{0}. Then ∥∇ϕ(x) −g∥≤ p n(Λ2 −1) + 2 1 2 √nL∆+ √n2ϵfΛ ∆  . Proof. Define ¯ϕ, D, and ¯ϕ(Y ) as in the proof of Theorem 4.3. Diverging from that proof, the interpolation condition changes from Y T g = ¯ϕ(Y ) to gT yi = ϕ(x + yi) −ϕ(x) + (f(x + yi) −ϕ(x + yi)) −(f(x) −ϕ(x)), i = 1, . . . , n thus we have g = Y −T ¯ϕ(Y )+Y −T E, where E is a vector with components (f(x+yi)−ϕ(x+yi))−(f(x)−ϕ(x)). Then we can bound error ∥∇ϕ(x) −g∥= ∥Y −T D(D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y ) −D−1E)∥ ≤√n∥Y −T D∥∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y ) −D−1E∥∞ ≤√n∥Y −T D∥ ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞+ ∥D−1E∥∞  ≤√n∥Y −T D∥ ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞+ ∥D−1∥∞∥E∥∞  . We can bound ∥Y −T D∥and ∥D−1Y T ∇¯ϕ(0) −D−1 ¯ϕ(Y )∥∞identically as in the proof of Theorem 4.3. To bound ∥E∥∞observe that the condition that |f(x + y) −ϕ(x + y)| ≤ϵf for y ∈Y ∪{0} implies ∥E∥∞≤2ϵf. To bound ∥D−1∥∞recall that Dii = ∥yi∥. By the properties of Lagrange polynomials we have ℓi(yi) = 1. By linearity, we have ℓi(∆yi ∥yi∥) = ∆ ∥yi∥. By the poisedness condition we have ℓi(∆yi ∥yi∥) ≤Λ. Combining these bounds we have ∆ ∥yi∥≤Λ and thus 1 ∥yi∥≤Λ ∆. Thus ∥D−1∥∞≤Λ ∆. The result follows. It follows, that if Yk is Λ-poised in B(0, ∆k) with ∆k ≥ 2 q Λϵf L we have ∥∇ϕ(xk) −gk∥ ≤ p n(Λ2 −1) + 2(√nL∆k). We thus have a result that Λ-poisedness, with ∆k ≥2 q Λϵf L , implies that mk is κeg, κef-fully linear with κeg = √nL∆ p n(Λ2 −1) + 2 and κef defined by Lemma 2.11. Without loss of generality we can assume that L ≥ ΛC2 2 in Theorem 4.4. This implies that for any ϵ > q 2ϵf γ2τC2 ¯ C2 1 , ∆k ≥γ ¯C1ϵ (for some τ ∈(0, 1)) implies ∆k ≥2 q Λϵf L . Then the immediate consequence of Theorem 3.7 is as follows Theorem 4.5. Let Assumptions 1.2 and 3.3 hold. For any ϵ > q 2ϵf γ2τC2 ¯ C2 1 ∆k ≥γ ¯C1ϵ (for some τ ∈(0, 1)), assuming that the initial trust-region radius ∆0 ≥γ ¯C1ϵ and Λ is chosen as 1 + O( 1 n), then Algorithm 2 achieves ∥∇ϕ(xk)∥≤ϵ after at most Nϵ function evaluations, where (4.3) Nϵ = O(n2ϵ−2), with O containing additive logarithmic factors and constant that are independent of n and ϵ. This result is important because it shows than the worst case complexity of the geometry-correcting method matches that of a method based on finite differences. Thus nothing is lost by employing Algorithm 2, aside from additional linear algebra costs of maintaining the Lagrange polynomials. The following theorem demonstrates that setting Λ = 1 + O( 1 n) is critical to our final result. In other words, then overall bound on κeg in terms of Λ cannot be improved. Theorem 4.6. For all n ≥2, L, ∆> 0, and Λ > 1, there is a L-smooth function ϕ, and a set Y = {y1, . . . , yn} Λ–poised in B(0, ∆) such that for the linear interpolating function r(s) = gT s satisfying r(yi) = ϕ(x + yi) −ϕ(x) for i = 1, . . . , n, we have ∥g −∇ϕ(x)∥≥O(LnΛ∆). Proof. By scaling we can take L = ∆= 1. Let x = 0 and ¯ϕ(u) = ϕ(x + u) = 1 2uT u. Let ε be the unique number in the interval (0, 1 n−1) such that 1 1 + ε + ε (1 + ε)2(1 −nε/(1 + ε)) = Λ2. Define A = (1 + ε) −ε11T . Let U = √ A and let yi = ui - the i-th column of U. Observe for i = 1, . . . , n that ∥ui∥= √Aii = 1 and thus, ui ∈B(0, ∆). By the Sherman-Morrison formula, we have A−1 = 1 1+εI + ( ε (1+ε)2(1−nε/(1+ε)))11T . Observe that maxs∈B(0,∆) ℓi(s) = maxs∈B(0,∆) xT (U −T )i = ∥(U −T )i∥. Since U = √ A, we have U −1 = √ A−1, thus ∥(U −T )i∥= p (A−1)ii = Λ. Thus the set is indeed Λ–poised. Then we have ∥g −∇ϕ(x)∥= ∥g∥= ∥U −T ϕ(U)∥= 1 2∥U −T 1∥= 1 2 √ 1T A−11 = 1 2 s n 1 + ε + n2ε (1 + ε)2(1 −nε/(1 + ε)). Observe that n 1 + ε + n2ε (1 + ε)2(1 −nε/(1 + ε)) = n(n(Λ2 − 1 1 + ε) + 1 1 + ε). Simplifying and bounding we obtain n(Λ2 − 1 1 + ε) + 1 1 + ε = nΛ2 −n −1 1 + ε ≥nΛ2 −(n −1) = n(Λ2 −1) + 1. Combining, we have ∥g −∇ϕ(x)∥≥1 2 √n p n(Λ2 −1) + 1. 5 Extensions to arbitrary polynomial basis. Below we present the bound on the number of consecutive geometry-correcting iterations for the case of a general space of polynomials P defined by some basis π(x) = {π1(x), . . . , πp(x)}. Theorem 5.1. For a general space of polynomials P with dimension p, then the number of oracle calls in consecutive unsuccessful iterations such that k ∈Uϵ \ Ud ϵ is O(p log p). Proof. First, after at most n iterations and n oracle calls, all points in Yk will have norm at most ∆k. Let vol(π(Y )) denote the volume of the simplex of the vertices in π(Y ). Let Yi(s) equal Y with point yi replaced by s. It follows that |ℓi(s)| = vol(π(Yi(s))) vol(π(Y )) In other words, replacing yi with s increases the volume by a factor of |ℓi(s)|. We claim that if the vectors in Y are Λ–poised, then vol(π(Y )) vol(π(Y ∗)) ≥(√pΛ)−p where Y ∗is the matrix with columns consisting of vectors y∗ i , . . . , y∗ p ∈B which maximize vol(π(Y ∗)). Then we have vol(π(Y ∗)) vol(π(Y )) = det(π(Y ∗)) det(π(Y )) = det(π(Y )−1π(Y ∗)) . One can check that (π(Y )−1π(Y ∗))ij = ℓi(y∗ j ). Thus, we can conclude the proof of the claim via Hadamard’s inequality: det(π(Y )−1π(Y ∗)) ≤ p Y i v u u t p X j |ℓi(y∗ j )|2 ≤(√pΛ)p. Let V (t) = ((√pt))−p and let N(t) be the number of consecutive geometry correction steps until vol(π(Y ))/vol(π(Y ∗)) ≥t. By the argument in part 2 of the proof of Theorem 6.3 of [9], after at most p iterations, we have a 2p–poised set, so we have N(V (2p)) ≤p. Now for any k = 1, . . . , p−1 let us bound N(V (2k)). Suppose we take N(V (2k+1)) steps and have vol(π(Y ))/vol(π(Y ∗)) ≥V (2k+1). In every subsequent step, either we achieve 2k–poisedness, and thus vol(π(Y ))/vol(π(Y ∗)) ≥V (2k) by the above claim, or poisedness is greater than 2k and we improve volume by a factor of at least 2k. Since we only need to improve volume by a factor of at most V (2k) V (2k+1) = 2p, we need to take at most ⌈log2k(2p)⌉= ⌈p k⌉subsequent steps. Thus we have N(V (2k)) ≤N(V (2k+1)) + ⌈p k⌉. Now, by induction we have N(V (2)) ≤p + p−1 X k=1 ⌈p k ⌉≤2p + p−1 X k=1 p k = 2p + p p−1 X k=1 1 k ! ≤3p + p log(p). To conclude, for any additional steps we may assume that poisedness is greater that Λ and thus, volume will increase by a factor of at least Λ. However, the ratio of volume to the maximum volume cannot grow to be larger than 1. Thus, we have the total number of steps is at most logΛ  1 V (2)  + N(V (2)) ≤ 1 log Λ (p log(√p2)) + 3p + p log(p) = O(p log(p)). Since each geometry correction iteration uses at most two oracle calls, the result follows. To have a complexity bound for Algorithm 2 based on general space of polynomials P we need to establish how Λ-poisedness of Yk implies fully linear model mk and derive the expression for κeg. Certain results for this were cited or established in [9]. As in the linear case those bounds may not be optimal and to develop better bounds one would likely need to specify P. We leave this for future research, noting that for now, the bounds in [9] show that Λ-poised Yk implies fully linear model mk with κeg having a polynomial dependence on n. In the next section we show how such models can be used in low dimensional subspaces, where κeg will depend on the subspace dimension and thus the exact nature of this dependence has small impact on the total complexity. 6 Model based trust region methods in subspaces. We now consider a trust region method where a model m(x) is built and optimized in a random low-dimensional subspace of Rn. The idea of using random subspace embeddings within derivative free methods has gained a lot of popularity in the literature lately. Specifically, in [6] random embeddings based on sketching matrices and Johnson-Lindestraus Lemma, are used together with a model based trust region method. The trust region method involved is different from what we discuss here and has the same worst case oracle complexity complexity O( n2 ϵ2 ) as our full-space method. Here we will show that combining our analysis in the paper with the use random projections results in better complexity in terms of the dependence on n - O( n ϵ2 ). In [11] random projections are used together with finite difference subspace gradient approximation and gradient descent. The resulting oracle complexity of this subspace method is the same as that of full-space finite difference gradient descent method O( n ϵ2 ). The purpose of this section is to show that unlike finite difference gradient descent, the trust region interpolation based methods, such as the geometry-correcting Algorithm 2 improve their complexity in terms of the dependence on n when used in subspaces versus their full space versions. In this section we will assume that the zeroth oracle is exact. Extension to inexact oracle is subject of future study since it requires certain extension of existing theory of randomized trust region methods that are beyond the subject of this paper. We will elaborate on this at the end of this section. Given a matrix Q ∈Rn×q, with q ≤n and orthonormal columns, QQT ∇ϕ(x) is an orthogonal projection of ∇ϕ(x) onto a subspace spanned by the columns of Q (we will call it a subspace induced by Q). We also define a reduction of ϕ(x) to the subspace, given by Q around x: ˆϕ(v) = ϕ(x + Qv), v ∈Rq, which implies Q∇ˆϕ(0) = QQT ∇ϕ(x). Similarly we define ˆm(v) = m(x + Qv), v ∈Rq, which implies Q∇ˆm(0) = QQT ∇m(x). We now present a modified trust-region algorithm that constructs models and computes steps in the subspace. At each iteration k ∈{0, 1, . . . } the algorithm chooses Qk ∈Rn×q with orthonormal columns. The model mk is defined as (6.1) mk(xk + Qkv) = ϕ(xk) + gT k Qkv + 1 2vT QT k HkQkv. For any vector v, gT k Qkv = QkQT k gT k Qkv thus without loss of generality, we will assume that QkQT k gk = gk, in other words, gk lies in the subspace induced by Qk. We define the trust region in the subspace induced by Qk as BQk(xk, ∆k) = {z : z = xk + Qkv, ∥v∥≤∆k}. We will assume here that Assumption 2.3 holds. We will also need model mk to be fully linear but only with respect to the subspace. Algorithm 3: Trust region method based on fully-linear models in subspace Inputs: Exact zeroth order oracle f(x) = ϕ(x), initial x0, ∆0, and η1 ∈(0, 1), η2 > 0, and γ ∈(0, 1). for k = 0, 1, 2, · · · do 1 Choose Qk ∈Rn×q with orthonormal columns. Compute model mk as in (6.1). 2 Compute a trial step xk + sk where sk = Qkvk with vk ≈arg minv{mk(xk + Qkv) : ∥v∥≤∆k}. 3 Compute the ratio ρk as in Algorithm 1. 4 Update the iterate and the TR radius as inAlgorithm 1. Definition 6.1 (Fully-linear model in a subspace). Given a matrix with orthonormal columns Q ∈Rn×q, let BQ(x, ∆) = {z : z = x + Qv, ∥v∥≤∆}. Let (6.2) m(x + Qv) = ϕ(x) + gT Qv + 1 2vT QT HQv and ˆm(v) = m(x+Qv), v ∈Rq. We say that model m(x+s) is κef, κeg-fully linear model of ϕ(x+s) on BQ(x, ∆) if ∥∇ˆm(0) −∇ˆϕ(0)∥≤κeg∆ and | ˆm(v) −ˆϕ(v)| ≤κef∆2 for all ∥v∥≤∆. Definition 6.2 (Well aligned subspace). The subspace spanned by columns of Q is κg-well aligned with ∇ϕ(x) for a given x if (6.3) ∥QQT ∇ϕ(x) −∇ϕ(x)∥≤κg∥∇ϕ(x)∥ for some κg ∈[0, 1). Condition (6.4) is on the properties of the subspace. Essentially, it is required that the cosine of the angle between the gradient and its projection onto the subspace induced by Q is not too small. Similar conditions (and similar terminology) have been used in [6]. We will discuss later how this requirement can be satisfied with sufficiently high probability by randomly generated subspaces. The following lemma is a simple consequence of conditions above. Lemma 6.3. On iteration k, Qk is κg-well aligned with ∇ϕ(xk) if and only if (6.4) ∥QkQT k ∇ϕ(xk)∥2 ≥(1 −κ2 g)∥∇ϕ(xk)∥2 If m(xk + s) is κef, κeg-fully linear model of ϕ(xk + s) on BQk(xk, ∆k) then (6.5) ∥gk −QkQT k ∇ϕ(xk)∥≤κeg∆k Proof. The first statement easily follows from the fact that QkQT k is an orthogonal projection. From this and the fact that gk = QkQT k gk we have ∥gk −QkQT k ∇ϕ(xk)∥= ∥Qk∇ˆm(0) −Qk∇ˆϕ(0)∥= ∥∇ˆm(0) −∇ˆϕ(0)∥ Thus the second statement follows from the fully linear assumption. We now show how the analysis of Algorithm 1 easily extends to Algorithm 3 under appropriate assumptions on the models and the subspaces. Lemma 6.4 (sufficient condition for a successful step). Under Assumptions 1.2 and 2.3, if Qk is κg-well aligned with ∇ϕ(xk), mk(xk + s) is a κef, κeg-fully linear model of ϕ(xk + s) on BQk(xk, ∆k) and if (6.6) ∆k ≤ q 1 −κ2gC1∥∇ϕ(xk)∥ where C1 = (max  η2, κbhm, 2κef (1 −η1)κfcd  + κeg)−1 then ρ ≥η1, ∥gk∥≥η2∆k, and xk+1 = xk + sk, i.e. the iteration k is successful. Proof. Due to Lemma 6.3, specifically (6.5) by triangle inequality, ∥QkQT k ∇ϕ(xk)∥≤∥gk∥+ κeg∆k and also due to Lemma 6.3 (6.7) ∆k ≤ q 1 −κ2gC1∥∇ϕ(xk)∥≤C1∥QkQT k ∇ϕ(xk)∥ By (6.7) we have (max{κbhm, η2, 2κef (1 −η1)κfcd } + κeg)∆k ≤(∥gk∥+ κeg∆k) which implies max{κbhm, η2}∆k ≤∥gk∥. This establishes that ∥gk∥≥η2∆k and also mk(xk) −mk(xk + sk) ≥κfcd∥gk∥∆k/2 by Assumption 2.3. Then, using the fact that f(x) = ϕ(x) and the fully linear assumption on mk in the subspace induced by Qk and recalling that sk = Qkvk we have ρk = m(xk) −m(xk + sk) + (f(xk) −m(xk)) −(f(xk + sk) −m(xk + sk)) m(xk) −m(xk + sk) = m(xk) −m(xk + sk) + (ϕ(xk) −m(xk)) −(ϕ(xk + sk) −m(xk + sk)) m(xk) −m(xk + sk) ≥1 − κef∆2 k m(xk) −m(xk + sk) ≥1 − κef∆2 k κfcd∥gk∥∆k/2 ≥1 − 2κef∆k κfcd((∥QkQT k ∇ϕ(xk)∥−κeg∆k) ≥η1, where the last step is true because ∥QkQT k ∇ϕ(xk)∥≥ 2κef (1−η1)κfcd + κeg  ∆k follows from (6.7). The rest of the analysis is identical to the analysis of Algorithm 1 and Theorem 2.9 holds with the same bound but slightly differently defined C1, thus we have the following complexity result, Theorem 6.5. Let Assumptions 1.2 and 2.3 hold. Let Kϵ be the first iteration of Algorithm 3 that achieves ∥∇ϕ(xk)∥≤ϵ. Assume on all iterations k = 0, . . . , Kϵ −1 Qk is κg-well aligned with ∇ϕ(xk) and mk(xk + s) is a κef, κeg-fully linear model of ϕ(xk + s) on BQk(xk, ∆k). Then assuming that the initial trust-region radius ∆0 ≥γ ˆC1ϵ, we have the bound (6.8) Kϵ = |Sϵ| + |Uϵ| ≤2(f(x0) −f ⋆) C2(γ ˆC1ϵ)2 + logγ ˆC1ϵ ∆0 C2 = η1η2κfcd 2κbhm min{η2, κbhm}; where ˆC1 = q 1 −κ2g max n η2, κbhm, 2κef (1−η1)κfcd o + κeg . The above result is not very useful by itself without the expressions for the bounds κeg, κef (which we already know how to derive) and more critically, κg - which we did not discuss yet. In fact we will only be able to guaranteer the bound κg that holds with some probability when Qk is random. We discuss this bound below and then extend the analysis of Algorithm 3 to this case. 6.1 Building fully-linear models in a subspace. Let us apply standard forward finite difference method to approximate the "reduced" gradient ∇ϕp(0). (6.9) gq(0) = q X i=1 f(x + δQui) −f(x) δ ui where ui is the ith column of a unitary q × q matrix. And let us define g(x) = Qgq(0). Since we consider an exact oracle f(x) = ϕ(x) we simply have ∥gq(0) −∇ϕq(0)∥≤ √qLδ 2 Thus assuming δ ≤∆, for m(x) defined by (6.1) with g(x) = Qgq(0) we have ∥∇ˆm(0) −∇ˆϕ(0)∥≤κeg∆ with κeg = √qL 2 . Analogously to Lemma 2.11 we have the following. Lemma 6.6 (Fully-linear in subspace models). For m(x) defined by (6.2) such that ∥∇ˆm(0) −∇ˆϕ(0)∥≤κeg∆ mk(x + s) is a κef, κeg-fully linear model of ϕ(x + s) on B(x, ∆) in a subspace induced by Q, with κef = κeg + LQ + κhbm 2 where LQ is the Lipschitz constant of QQT ∇ϕ(x). Now we discuss how to generate a well aligned subspace. This subject has been studied extensively in the literature and in particular in derivative free optimization context [6, 11] and the key approach is to generate random subspaces. In [6] the sketching and Johnson-Lindenstrauss type embeddings are used and the resulting complexity is worse than what we derive here3. In [11] Qk matrices are generated from the Haar distribution (random matrix with orthonormal columns) and are used in the context of gradient descent methods based on finite difference gradient approximation. Here we also use Haar distribution and rely on the following result. Lemma 6.7. For q ≥3, Q ∈Rn×q drawn from the Haar distribution of the set of matrices with orthonormal columns, and any nonzero vector v ∈Rn, we have P h ∥QQT v∥2 ≥( q 10n)∥v∥2i ≥243 443 > 1 2. Proof. By Lemma 1 in [11] we have ∥QQT v∥2 ∥v∥2 ∼Beta q 2, n −q 2  . Let X ∼Beta(q/2, (n −q)/2). We have E[X] = q n and V[X] = q(n−q) n2(n/2+1) ≤ 2q n2 ≤ 2q2 3n2 . Thus by the Paley-Zygmund inequality we have P h ∥QQT v∥2 ≥  q 10n  ∥v∥2i = P(X ≥(1/10)E[X]) ≥ (1 −1/10)2E[X]2 V[X] + (1 −1/10)2E[X]2 ≥ (1 −1/10)2 q2 n2 2q2 3n2 + (1 −1/10)2 q2 n2 = 243 443. The immediate conclusion is that on each iteration k, Qk drawn from the Haar distribution (with q > 3 is κg-well aligned with probability θ ≥243 443 > 1/2 with κg = p 1 − q 10n. 3It remains to be seen if those results can be improved by combining our new bounds related to Λ-poised models with the use of JL embeddings. 6.2 Complexity analysis under random subspace selection. If at each iteration k Qk is chosen randomly, Algorithms 3 can be viewed as stochastic process {Qk, xk, mk, ∆k, sk}. All quantities computed by the algorithm are random variables, with some abuse of notation we will denote these quantities by the same letters as their realizations. It should be clear from the context which one we refer to. Let Fk−1 denote the σ-algebra generated by the first k −1 iterations, Fk−1 = σ (Q0, Q1, . . . Qk−1). We note that the random variables xk and ∆k measurable with respect to Fk−1, the random variables mk, sk and ρk are measurable with respect to Fk. The random variable Kϵ = min{k : ∥∇ϕ(xk)∥≤ϵ} is a stopping time adapted to the filtration {Fk−1}. Note that Lemmas 6.6 and 2.4 hold for each realization of the algorithm. However, the difficulty in carrying out the analysis lies in the fact that ∆k is no longer bounded from below on all iterations, that is Lemma 2.6 does not hold, thus the function improvement provided by Lemma 2.4 by itself does not imply the bound on the number of successful iterations. Following the analysis in [7] (and other papers such as [10] ) we will consider different types of iterations and derive common bounds. First we will define several additional random variables. Let Ik = 1{Qk is κg-well aligned with ∇ϕ(xk)}, Ak = 1{Iteration k is successful i.e., ∆k+1 = γ−1∆k}, Bk = 1{∆k > ˆC1∥∇ϕ(xk)∥}. We will say that iteration k is "true" if Ik = 1. We make the following key assumption Assumption 6.8. There exists a θ ∈( 1 2, 1] such that P{Ik = 1|Fk−1} ≥θ. This assumption is clearly made attainable by Lemma 6.7 with θ ≥243 443 > 1/2 and κg = p 1 − q 10n. Other ways of generating subspaces and ensuring Assumption 6.8 are of interest for future research. Note that σ(Bk) ⊂Fk−1 and σ(Ak) ⊂Fk, that is the random variable Bk is fully determined by the first k −1 steps of the algorithm, while Ak is fully determined by the first k steps. From Assumption 6.8 and Lemma 6.6 we have the following dependency Ak ≥Ik(1 −Bk), in other words, if iteration k is true and trust region radius is sufficiently small, then the iteration is successful. For the stochastic process generated by Algorithm 3 with random Qk that satisfy Assumption 6.8 the following dynamics hold. (6.10) ∆k+1 ≥        γ−1∆k if Ik = 1 and Bk = 0, γ∆k if Ik = 0 and Bk = 0, γ−1∆k if Ak = 1 and Bk = 1, γ∆k if Ak = 0 and Bk = 1, ϕ(xk+1) ≤        ϕ(xk) −C2∆2 k if Ik = 1 and Bk = 0, ϕ(xk) if Ik = 0 and Bk = 0, ϕ(xk) −C2∆2 k Ak = 1 and Bk = 1, ϕ(xk) Ak = 0 and Bk = 1. Here C2 is as in Lemma 2.4. To bound the total number of iterations we first bound the number of iterations that are successful and with ∆k ≥γ ˆC1ϵ. For that let ¯Bk = 1{∆k ≥γ ˆC1ϵ}. Then from the dynamics (6.10) we have the bound similar to [7]. Lemma 6.9. For any l ∈{0, . . . , Kϵ −1} and for all realizations of Algorithm 3, we have l X k=0 ¯BkIkAk ≤ l X k=0 ¯BkAk ≤ϕ(xk) −ϕ⋆ C2(γ ˆC1ϵ)2 , Another useful lemma that easily follows from the dynamics is as follows. Lemma 6.10. For any l ∈{0, . . . , Kϵ −1} and for all realizations of Algorithm 3, we have l X k=0 Bk(1 −Ak) ≤ l X k=0 ¯BkAk + logγ ˆC1ϵ ∆0 ! . The following result is shown in [7] under Assumption 6.8, E Kϵ−1 X k=1 ¯Bk(Ik −1) ! ≤1 −θ θ E Kϵ−1 X k=1 ¯BkIk ! , from which the following lemma is derived. Lemma 6.11. Under the condition that θ > 1/2, we have E Kϵ−1 X k=1 Bk ! ≤ 1 2θ −1 ϕ(xk) −ϕ⋆ C2(γC1ϵ)2 + logγ ˆC1ϵ ∆0 !! Finally the following lemma is shown in [7] for the process obeying (6.10). Lemma 6.12. E Kϵ−1 X k=1 (1 −Bk) ! ≤1 2θ(Kϵ). Putting these last two lemmas together we obtain the final expected complexity result. Theorem 6.13. Let Assumption 1.2, Assumption 2.3 and Assumption 6.8 hold. Assume that for all k = 0, 1, . . . Kepsilon −1 mk(xk + s) is a κef, κeg-fully linear model of ϕ(xk + s) on BQk(xk, ∆k). Then for some ϵ > 0 assuming that the initial trust-region radius ∆0 ≥γ ˆC1ϵ let Kϵ be the random stopping for the event {∥∇ϕ(xk)∥≤ϵ}. We have the bound E [Kϵ] ≤ 2θ 2θ −1 2(ϕ(x0) −ϕ⋆) C2(γ ˆC1ϵ)2 + logγ ˆC1ϵ ∆0 ! where ˆC1 as in Lemma 6.6 and C2 as in Lemma 2.4. When mk is constructed using (6.9), then the number of total zeroth order oracle calls is (q + 1)Kϵ , κeg and κef depend on the subspace dimension q: κeg = 2√qLQ and κef = κeg + LQ+κbhm 2 . When Qk is drawn from Haar distribution with q ≥3, θ = 243 443 and κ2 g = O( n−q n ) is dependent on n. Thus q 1 −κ2g = p q 10n. The resulting expected iteration complexity is O n ϵ2  while each iteration requires only q + 1 function evaluations, which gives the total worst case expected oracle complexity of O nq ϵ2  . Conclusion: Algorithm 3 utilized finite difference gradient approximation with radius δ = ∆k and achieves O nq ϵ2  oracle complexity. In contrast Algorithm 1 has O  n2 ϵ2  worst case complexity if it chooses δ = ∆k for the finite difference scheme. However, we note that the lower bound on ∆k of Algorithm 1 holds for all k and is γC1ϵ. On the other hand, the bound on ∆k for Algorithm 3 is p q 10nγC1ϵ and it does not hold on all iterations (as seen in the analysis it holds "often enough", but not always). This complicates the analysis of Algorithm 3 in the case of noisy zeroth order oracles, which we leave for future research. In summary, if using finite differences the complexity of Algorithm 3 is the same as that of Algorithm 1 if the latter utilizes δ = p q n∆k in the finite difference scheme. Our real interest is using subspace based trust region method is the geometry-correcting method whose full-space version Algorithm 2 has complexity O  n2 ϵ2  . We address this next. 6.3 Geometry-correcting algorithm in subspaces. We are now ready to provide a geometry-correcting algorithm with the total worst case complexity matching the best known complexity. The key observation is that not only we do not decrease ∆k but we also do not generate new subspace matrix Qk on geometry-correcting iterations. Thus there is no randomness involved in these iterations and the analysis Algorithm 4: Geometry-correcting algorithm in subspaces Inputs: A zeroth-order oracle f(x) = ϕ(x), subspace dimension q, a space of polynomials P of dimension p, ∆0, x0, γ ∈(0, 1) η1 > 0, η2 > 0, Λ > 1. Initialization An initial orthonormal Q0 ∈Rn×q, set Y0 ⊂Rq such that |Y0| ≤p, and the function values f(x0), f(x0 + yi), yi ∈Y0. for k = 0, 1, 2, . . . do 1 Build a quadratic model mk(xk + s) as in (2.1) using f(xk) and f(xk + yi), yi ∈Yk. 2 Compute a trial step sk = Qkvk and ratio ρk as in Algorithm 3. 3 Successful iteration: ρk ≥η1 and ∥gk∥≥η2∆k. Set xk+1 = xk + sk, ∆k+1 = γ−1∆k. Replace the furthest interpolation point and shift Yk+1 = (Yk \ {yj∗ k} ∪{0}) −vk where j∗ k = arg max j=1,...,p ∥yj∥. Generate Qk+1 ∈Rn×q from the Haar distribution. 4 Unsuccessful iteration: ρk < η1 or ∥gk∥< η2∆k. Set xk+1 = xk and perform the first applicable step: • Geometry correction by adding a point: If |Yk| < p, Yk+1 = Yk ∪{vk}. • Geometry correction by replacing a far point: Let j∗ k = arg maxj=1,...,p ∥yj∥. If ∥yj∗ k∥> ∆k ⇒Yk+1 = Yk \ {yj∗ k} ∪{vk}. • Geometry correction by replacing a "bad" point: Otherwise, construct the set Lagrange Polynomials {ℓj(x), i = 1, . . . , p} in P for the set Yk and find max value (i∗ k, v∗ k) = arg max j=1,...,p,v∈B(0,∆k) |ℓj(v)|. If |ℓi∗ k(v∗ k)| > Λ, compute f(xk + Qkv∗ k), Yk+1 = Yk \ {yi∗ k} ∪{v∗ k}. • Geometry is good: Otherwise ∆k+1 = γ∆k. Generate Qk+1 ∈Rn×q from the Haar distribution. of Algorithm 3 carries over essentially without change if we restrict all statements to iterations k that are not geometry-correcting, that is k ∈Sϵ ∪Ud ϵ . Thus the bound on E [Kϵ] in Theorem 6.13 applies as the bound on E|Sϵ ∪Ud ϵ |, which equals to the bound on the total number of zeroth order oracle calls for all k ∈Sϵ ∪Ud ϵ (since on such iteration the zeroth order oracle is called only once). We thus can utilize the results of Theorem 3.6 which bounds the number zeroth order oracle calls during consecutive geometry-correcting steps as 3q if linear interpolation is used. For a higher degree interpolation we can apply Theorem 5.1 to bound the total number of zeroth order calls during consecutive geometry-correcting iterations as O(p log p) where p is the dimension of the space of polynomials defined on Rq, thus p is polynomial in q. Finally, to bound κeg and κef in ˆC1, we can employ Theorem 4.3 for linear interpolation or more general result similar to Theorem 4.1 from [9] for higher order interpolation to obtain κeg = O(√q) if Λ ≈1 or, more generally, κeg has polynomial dependency on q. The the total complexity is O  nqα ϵ2  for some α > 0. 7 Conclusions We have shown that a practically efficient model based trust region DFO method such as Algorithm 2 and its subspace version Algorithm 4 have oracle complexity that is comparable with other known (and less practical) derivative free methods. The are many further practical improvements that can be applied to Algorithm 4 and Algorithm 4 that do not affect complexity but complicate the exposition. These can include avoiding computing ρk when ∥gk∥≤η2∆k, adding not random directions to subspaces and many others. Acknowledgements This work was partially supported by ONR award N00014-22-1-215 and the Gary C. Butler Family Foundation. References [1] Charles Audet and Warren L. Hare. Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer International Publishing, Cham, 2017. [2] Afonso S Bandeira, Katya Scheinberg, and Luís N Vicente. Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization. Mathematical programming, 134(1):223– 257, 2012. [3] Albert S Berahas, Liyuan Cao, Krzysztof Choromanski, and Katya Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Foundations of Computational Mathematics, pages 1–54, 2021. [4] Jose Blanchet, Coralia Cartis, Matt Menickelly, and Katya Scheinberg. Convergence rate analysis of a stochastic trust-region method via supermartingales. INFORMS journal on optimization, 1(2):92–119, 2019. [5] Liyuan Cao, Albert S. Berahas, and Katya Scheinberg. First- and second-order high probability complexity bounds for trust-region methods with noisy oracles. Mathematical Programming, 207(1-2):573–624, 2023. [6] Coralia Cartis and Lindon Roberts. Scalable subspace methods for derivative-free nonlinear least-squares optimization. Mathematical Programming, 199(1):461–524, May 2023. [7] Coralia Cartis and Katya Scheinberg. Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Mathematical Programming, 169(2):337–375, 2018. [8] Andrew R Conn, Nicholas IM Gould, and Philippe L Toint. Trust region methods. SIAM, 2000. [9] Andrew R Conn, Katya Scheinberg, and Luís N Vicente. Introduction to derivative-free optimization. SIAM, 2009. [10] Serge Gratton, Clément W Royer, Luís N Vicente, and Zaikun Zhang. Complexity and global rates of trust-region methods based on probabilistic models. IMA Journal of Numerical Analysis, 38(3):1579–1597, 2018. [11] David Kozak, Stephen Becker, Alireza Doostan, and Luis Tenorio. A stochastic subspace approach to gradient-free optimization in high dimensions. Computational Optimization and Applications, 79(2):339– 368, Jun 2021. [12] Jeffrey Larson, Matt Menickelly, and Stefan M Wild. Derivative-free optimization methods. Acta Numerica, 28:287–404, 2019. [13] Jorge J Moré and Stefan M Wild. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1):172–191, 2009. [14] M. J. D. Powell. Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. 100:183–215, 2004. [15] Michael J D Powell. The NEWUOA software for unconstrained optimization without derivatives. Technical Report DAMTP 2004/NA08, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, 2004. [16] Michael JD Powell. UOBYQA: unconstrained optimization by quadratic approximation. Mathematical Programming, 92(3):555–582, 2002. [17] Luis Miguel Rios and Nikolaos V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247–1293, 2013.
On Complexity of Model-Based Derivative-Free Methods. A. Chaudhry‡ K. Scheinberg‡ Abstract. In many applications of mathematical optimization, one may wish to optimize an objective function without access to its derivatives. These situations call for derivative-free optimization (DFO) methods. Among the most successful approaches in practice are model-based trust-region methods, such as those pioneered by M.J.D Powell. While relatively complex to implement, these methods are now available in standard scientific computing platforms, including MATLAB and SciPy. However, theoretical analysis of their computational complexity lags behind practice. In particular, it is important to bound the number of function evaluations required to achieve a desired level of accuracy. In this paper we systematically derive complexity bounds for classical model-based trust-region methods and their modern variations. We establish, for the first time, that these methods can have the same worst case complexity than any other known DFO method. MSC Classification: 90C30, 90C56. 1 Introduction. Derivative Free Optimization (DFO) is an area of optimization that concerns with developing optimization methods based purely on function value computation without applying any direct differentiation. Other related areas of optimization are called zeroth-order optimization (because it relies of zerothorder oracles) and black-box optimization. The applications of DFO are numerous and rapidly growing with the sophistication of engineering models. More and more systems are being modeled and evaluated by complex computer codes, which brings the natural next step - optimizing such systems. The application areas range from well established, such as aerospace engineering, chemical engineering, civil and environmental engineering to more recent, such as reinforcement learning, machine learning and simulation optimization. A broad list of applications can be found in [1] and [17]. There is a rich literature of DFO starting from around mid 90s which is rapidly growing with new interest spurred by new applications in engineering, machine learning and artificial intelligence. Aside from an increasing number of papers, there are two books [9] and [1], and a survey on the topic [12]. There are three major classes of DFO algorithms: 1. directional direct search methods, 2. gradient descent based on simplex gradients which includes finite difference approximation and 3. interpolation based (also known as model based) trust region methods. The first two classes are rather simple to implement and analyze which makes them popular choices, especially in the literature and with recent applications to machine learning where they are easy to adopt. They operate by sampling a certain number of function values at some chosen points around the current iterate and then make a step according to the information obtained from the samples. The way the sample sets are constructed is predetermined (but can be random) and is meant to ensure that a descent direction is identified either deterministically or in expectation. Such method is then analyzed based on this "guaranteed descent" and the effort it takes to obtain it. The third type of methods pioneered in the 90s by M.J.D. Powell [16, 15], is much more complex, yet is arguably the most effective in practice for many applications [13]. These methods trade off exploration and exploitation by reusing function values computed in the past iterates and adapting where the function should be sampled next based on information accumulated so far. The complex structure of the algorithms, especially as presented in Powell's papers, resulted in scarcity of theoretical analysis, especially in terms of complexity. Some underlying theory of asymptotic convergence was developed in [9], however, methods analyzed there and in subsequent works rely on a "criticality step" for the theory to work, which departs from the main idea of the algorithms. There is still a lack of understanding of how these methods work and most importantly if they enjoy favorable complexity bounds. In this paper, we focus on the complexity of model-based derivative free algorithms that aim to solve unconstrained nonlinear optimization problems of the form min x∈Rn φ(x), (1.1) ‡ 16 Oct 2025 where φ : Rn →R is a smooth objective function that may or may not be convex. The key premise of these methods is to economize on function values, possibly at the expense of additional linear algebra, since in most applications the function evaluations cost dominates all else. The complexity will be measured in terms of the number of function evaluations needed to achieve an ε-stationary point, that is a point x for which ∥∇φ(x)∥≤ε. Throughout the paper, we assume that we have access to a zeroth-order oracle f(x) ≈φ(x) which may or may not be exact. In practice, one may wish to tolerate an inexact oracle but we will sometimes treat the exact case for ease of exposition. The following are the standard assumptions on φ(x) for our setting. Assumption 1.1 (Lower bound on φφφ). The function φ is bounded below by a scalar φ⋆on Rn. Assumption 1.2 (Lipschitz continuous gradient). The function φ is continuously differentiable, and the gradient of φ is L-Lipschitz continuous on Rn, i.e., ∥∇φ(y) -∇φ(x)∥≤L∥y -x∥for all (y, x) ∈Rn × Rn. The paper is organized as follows. In Section 2 we present the basic model based trust region method and its complexity analysis for the case when models are based on simple finite difference gradient approximations. In Section 3 we introduce a "geometry-correcting" trust region method in the spirit of those proposed by Powell. We will show how the analysis of complexity can be adapted from that of the basic method to the more sophisticated one. In Section 4 and 5 we derive new results that help us establish that the complexity of the "geometrycorrecting" trust region method is competitive with that of the basic trust region method. Finally, in Section 6 we develop and analyze model based trust region methods in random subspaces and show that these methods have complexity bounds that are as good as those for any other known DFO method. 2 Basic trust-region method and analysis. We first present and analyze a basic trust region method which, at every iteration k ∈{0, 1, . . . }, constructs a quadratic model to approximate φ(x) near the iterate xk (2.1) mk(xk + s) = φ(xk) + gk(xk)T s + 1 2sT Hk(xk)s. The model is then minimized (approximately) over the trust region B(xk, ∆k) - a Euclidean ball around xk of a radius ∆k. In what follows we use the abbreviation gk := g(xk) = ∇mk(xk) and Hk := H(xk) = ∇2mk(xk).1 Algorithm 1: Trust region method based on fully-linear models Inputs: A zeroth-order oracle f(x) ≈φ(x), a starting point x0, TR radius ∆0, and hyperparameters η1 ∈(0, 1), η2 > 0, and γ ∈(0, 1). for k = 0, 1, 2, · · · do 1 Compute model mk and a trial step xk + sk where sk ≈arg mins{mk(xk + s) : s ∈B(0, ∆k)}. 2 Compute the ratio ρk as ρk = f(xk) -f(xk + sk) m(xk) -m(xk + sk). 3 Update the iterate and the TR radius as (xk+1, ∆k+1) ← ( (xk + sk, γ-1∆k) if ρ ≥η1 and ∥gk∥≥η2∆k, (xk, γ∆k) otherwise. Algorithm 1 is essentially a standard trust region method, well studied in the literature [8], except for the condition ∥gk∥≥η2∆k. This condition is not used in classical TR methods where ∇mk(xk) = gk = ∇φ(xk). However, for TR methods based on models constructed in derivative free setting and, more generally, models based on inexact gradient estimates, the TR radius ∆k serves two roles - that of the step size parameter and that of the control of the gradient accuracy. Condition ∥gk∥≥η2∆k is essential for the latter role to ensure that the gradient accuracy stays on track as the norm of the gradient reduces. We note here that much of prior DFO literature replaces this condition with a much less practical and cumbersome "criticality step". Thus one 1Note that φ(xk) is used in the definition (2.1) but not in minimization of mk, thus there is no need to compute it. of the contributions of this paper is performing analysis of Algorithm 1 and and other model-based trust region algorithms without the "criticality step". For all TR algorithms studied in this paper we will make the following standard assumption on the models mk and their minimization. Assumption 2.1. 1. The trust region subproblem is solved sufficiently accurately in each iteration k so that xk + sk provides at least a fraction of Cauchy decrease, i.e. for some constant 0 0 such that, for all xk generated by Algorithm 1, the spectral norm of the Hessian of the model ∥Hk∥≤κbhm. Condition (2.2) is commonly used in the literature and is satisfied by the Cauchy point with κfcd = 1. See [8, Section 6.3.2] for more details. The following definition, also widely used in the literature [9], helps us identify the requirements on models mk that are critical for convergence. Definition 2.2 (Fully-linear model). Given a ball around point x of radius ∆, B(x, ∆), we say that model m(x + s) is a κef, κeg-fully linear model of φ(x + s) on B(x, ∆) if ∥∇m(x) -∇φ(x)∥≤κeg∆ and |m(x + s) -φ(x + s)| ≤κef∆2 for all ∥s∥≤∆. We temporarily make the following additional assumptions for the analysis of Algorithm 1. Assumption 2.3. At each iteration f(xk) -f(xk + sk) = φ(xk) -φ(xk + sk), in other words, exact function reduction is computed, when computing ρk. The next lemma is key in the analysis of any trust region algorithm and specifically Algorithm 1. It establishes that once ∆k is sufficiently small compared to the gradient norm, a successful step is ensured. Lemma 2.4 (small ∆k implies successful step). Under Assumptions 1.2 and 2.3, if mk is κef, κeg-fully-linear and (2.3) ∆k ≤C1∥∇φ(xk)∥, where C1 = max η2, κbhm, 2κef (1 -η1)κfcd + κeg -1 , then ρ ≥η1, ∥gk∥≥η2∆k, thus iteration k is successful and xk+1 = xk + sk. The proof is a simplification of those in the stochastic trust region literature where condition ∥gk∥≥η2∆k is widely used [4, 5]. Proof. By the assumption that mk is fully-linear, we have ∥∇φ(xk)∥≤∥gk∥+ κeg∆k. From (2.3) we conclude that (max{κbhm, η2} + κeg)∆k ≤∥∇φ(xk)∥≤∥gk∥+ κeg∆k max{κbhm, η2}∆k ≤∥gk∥. This implies that the first condition of the successful step, namely ∥gk∥≥η2∆k, is satisfied. From (2.2) we have mk(xk) -mk(xk + sk) ≥κfcd∥gk∥∆k/2. Thus, using the assumption that mk is fully-linear again, ρk = m(xk) -m(xk + sk) + (φ(xk) -m(xk)) -(φ(xk + sk) -m(xk + sk)) m(xk) -m(xk + sk) ≥1 - κef∆2 k m(xk) -m(xk + sk) ≥1 - κef∆2 k κfcd∥gk∥∆k/2 ≥1 - 2κef∆k κfcd(∥∇φ(xk)∥-κeg∆k) ≥η1, where the last step is true because ∥∇φ(xk)∥≥ 2κef (1-η1)κfcd + κeg ∆k follows from (2.3). Lemma 2.5 (successful iteration implies function reduction). Let Assumptions 1.2 and 2.3 hold. If the iteration k is successful, then (2.4) φ(xk) -φ(xk+1) ≥C2∆2 k, where C2 = η1η2κfcd 2 min η2 κbhm , 1 ; otherwise, we have xk+1 = xk and φ(xk) -φ(xk+1) = 0. Proof. When the trial point is accepted, we have ρk ≥η1 and ∥gk∥≥η2∆k, which guarantees φ(xk) -φ(xk+1) = φ(xk) -φ(xk + sk) = f(xk) -f(xk + sk) ≥η1 m(xk) -m(x+ k ) ≥η1κfcd 2 ∥gk∥min ∥gk∥ ∥Hk∥, ∆k ≥η1κfcd 2 η2∆k min η2∆k κbhm , ∆k . For a given ε > 0, let Kε be the first iteration of Algorithms 1 for which ∥∇f(xk)∥≤ε. Define the index sets Sε :={k ∈{0, . . . , Kε -1} : iteration k is successful}, Uε :={k ∈{0, . . . , Kε -1} : iteration k is unsuccessful}. Lemma 2.6 (Lower bound on ∆k). Assuming mk is κef, κeg-fully linear for each k = 0, . . . , Kε -1, ∆k ≥γC1ε for all k ∈{0, . . . , Kε -1}. Proof. According to Lemma 2.4, any iteration k ∈{0, . . . , Kε -1} must be successful when ∆k ≤C1ε. Thus, given ∆0 ≥γC1ε, and by the mechanism of Algorithm 1 we must have ∆k ≥γC1ε for all k ∈{0, . . . , Kε -1}. The following bound holds under the result of Lemma 2.6. Lemma 2.7 (Bound of successful iterations). From ∆k ≥γC1ε for all k ∈{0, . . . , Kε -1} we have |Sε| ≤f(x0) -f ⋆ C2(γC1ε)2 Proof. Using Lemma 2.5 we have φ(x0) -φ⋆≥ Kε-1 X k=0 φ(xk) -φ(xk+1) ≥ X k∈Sε C2∆2 k > |Sε|C2(γC1ε)2 which gives the result of the lemma. Lemma 2.8. Assuming that the initial trust-region radius ∆0 ≥γC1ε, |Uε| ≤|Sε| + ⌈ logγ C1ε ∆0 ⌉. Proof. We observe that ∆Kε = γ-|Sε|γ|Uε|∆0 ≥C1ε, The last inequality follows from the fact that ∆Kε-1 ≥γC1ε and the Kε -1-th iteration must be successful. Thus the number of unsuccessful iterations can be bounded using the number of successful ones rearranging the terms and taking the logarithm. Summarizing the above lemmas the following complexity result immediately follows. Theorem 2.9. Let Assumptions 1.2 and 2.3 hold. Assuming that the initial trust-region radius ∆0 ≥γC1ε, and mk is κef, κeg-fully linear for each k = 0, . . . , Kε -1, we have the bound (2.5) Kε = |Sε| + |Uε| ≤2(f(x0) -f ⋆) C2(γC1ε)2 + logγ C1ε ∆0 = 4κbhm max n η2, κbhm, 2κef (1-η1)κfcd o + κeg 2 γ2η1η2κfcd min{η2, κbhm} · φ(x0) -φ⋆ ε2 + logγ C1ε ∆0 . 2.1 The case of inexact zeroth-order oracle. Now let us assume that for any x the algorithm has access to a zeroth-order oracle that computes f(x) such that |f(x) -φ(x)| ≤εf. It is easy to extend Lemmas 2.4 and 2.5 to this case. The first modification will result in a slightly different definition in constant C1, which we denote as ̄C1 and which is (2.6) ̄C1 = max η2, κbhm, 2κef + 4εf (1 -η1)κfcd + κeg -1 . The second modification will result in (2.4) in the statement of Lemma 2.5 changing as follows (2.7) φ(xk) -φ(xk+1) ≥C2∆2 k -2εf, where C2 = η1η2κfcd 2 min{ η2 κbhm , 1}. Thus, under the additional assumption that ∆k ≥ q 2εf τC2 for some τ ∈(0, 1) (2.7) can be further stated as (2.8) φ(xk) -φ(xk+1) ≥(1 -τ)C2∆2 k, where C2 = η1η2κfcd 2 min{ η2 κbhm , 1}. The new versions of Lemmas 2.4 and 2.5 can be applied as before, as long as it can be ensured that ∆k remains not smaller than q 2εf τC2 . Due to Lemma 2.4 and the update mechanism for ∆k this is ensured as long as ∥∇φ(xk)∥≥ε for sufficiently large ε. Theorem 2.10. Let Assumptions 1.2 and 2.3 hold. For any ε > q 2εf γ2τC2 ̄ C2 1 , assuming that the initial trustregion radius ∆0 ≥γ ̄C1ε, where ̄C1 is defined in (2.6) and mk is κef, κeg-fully linear for each k = 0, . . . , Kε -1, where Kε is the first iteration for which ∥∇φ(xk)∥≤ε, then we have the bound (2.9) Kε = |Sε| + |Uε| ≤ 2(φ(x0) -φ⋆) (1 -τ)C2(γ ̄C1ε)2 + logγ ̄C1ε ∆0 = 4κbhm max n η2, κbhm, 2κef +4εf (1-η1)κfcd o + κeg 2 (1 -τ)γ2η1η2κfcd min{η2, κbhm} · φ(x0) -φ⋆ ε2 + logγ ̄C1ε ∆0 . 2.2 Ensuring fully-linear models on each iteration and complexity implications. Let us now describe the most straightforward way of constructing fully linear models in derivative free setting. First we show that it can be achieved by using a sufficiently accurate gradient approximation. The following lemma easily follows from Assumption 2.3 and the smoothness of φ(x). Lemma 2.11 (Fully linear models). Under Assumption 2.3 if ∥∇m(x) -∇φ(x)∥≤κeg∆then mk(xk + s) is a κef, κeg-fully linear model of φ(x + s) on B(x, ∆) with κef = κeg + L+κhbm 2 . Thus we focus on constructing g(x) = ∇m(x). We assume that we have access to an inexact zeroth-order oracle: f(x) ≈φ(x) such that |f(x) -φ(x)| ≤εf. Let us consider a quadratic model (2.1) with g(x) computed by a finite difference scheme: g(x) = n X i=1 f(x + δui) -f(x) δ ui, where ui is the i-th column of a unitary matrix, and the Hessian approximation H(x) being an arbitrary symmetric matrix, such that ∥H(x)∥2 ≤κbhm. From analysis of the finite difference gradient approximation error (see e.g.[3]) we have ∥g(x) -∇φ(x)∥≤ √nLδ 2 + 2√nεf δ . Thus, by selecting δ ≥2 rεf L we ensure ∥g(x) -∇φ(x)∥≤√nLδ = κeg∆ κeg = √nL δ ∆ In conclusion, when the finite difference scheme is used with δ = ∆k, at each iteration k, then the resulting model is κef, κeg-fully linear in B(xk, ∆k) with κeg = √nL and κef = L+κhbm 2 + √nL, as long as ∆k ≥2 q εf L . This lower bound on ∆k is, essentially without loss of generality, implied by the condition of Theorem 2.10 that ∆k ≥ q 2εf τC2C2 1 since q 2εf τC2C2 1 can be assumed to be larger than 2 q εf L without notable impact on complexity. We now can apply Theorem 2.10 to bound the total number of calls to the zeroth order oracle. We are specifically interested in the dependence of this oracle complexity on n and ε. The dependence on ε is explicit in the bound on Kε which is O(ε-2). There are several constants in the bound in Theorem 2.10, however, only κeg and κef depend on n. Other constants are not dimension dependent2. Both κef and κeg scale in the same way with n. The total number of iterations Kε is bounded as O κ2 eg ε2 and each iteration requires n + 1 function evaluations. Thus the total worse case oracle complexity to achieve ∥∇φ(xk)∥≤ε for any ε > q 2εf γτC2C2 1 is Nε = O(n2ε-2). Alternatively, choosing δ = ∆k √n, ensures that κeg = L and the total complexity reduces to O nε-2 , but since we also have to have δ ≥2 q εf L , this implies that convergence holds only for ε > q nεf γ2C2 1L. We conclude that if function noise εf is appropriately small compared to the desired accuracy ε and a finite difference method is used then choosing δ = ∆ √n is the better strategy. The main drawback of the finite difference approach is lack of flexibility in reusing the past function values. Thus this method, while simple to analyze, is not as practical as the method we discuss in the next section. We will see that the radius of sampling will be the same as the trust region radius for the reasons that will be understood when we describe the method. 3 Geometry-correcting method based on Lagrange polynomials. Before introducing the method we wish to analyze in this paper we need to discuss an important tool utilized by these algorithms - Lagrange polynomials. The concepts and the definitions below can be found in [9]. Lagrange polynomials and associated concepts will be defined with respect to a space of polynomials P of dimension p. Typically P is either the set of linear or quadratic polynomial, but it also can be a set of quadratic polynomials with a pre-defined Hessian sparsity pattern. Definition 3.1. Lagrange Polynomials Given a space of polynomials P of dimension p and a set of points Y = {y1, . . . , yp} ⊂Rn, a set of p polynomials lj(s) in P for j = 1, . . . , p, is called a basis of Lagrange polynomials associated with Y, if lj(yi) = δij = 1 if i = j, 0 if i ̸= j. If the basis of Lagrange polynomials exists for the given Y then Y is said to be poised. Definition 3.2. Λ-poisedness Given a space of polynomials P of dimension p, Λ > 0, and a set B ⊂Rn. A poised set Y = {y1, . . . , yp} is said to be Λ-poised in B if Y ⊂B and for the basis of Lagrange polynomials associated with Y, it holds that Λ ≥ max j=1,...,p max s∈B |lj(s)|. The following assumptions will be applied to Algorithm 2 on top of Assumption 2.3. 2For some problems L can be dimension dependent but we do not consider this here. Algorithm 2: Geometry-correcting algorithm Inputs: A zeroth-order oracle f(x) ≈φ(x), a space of polynomials P of dimension p, ∆0, x0, γ ∈(0, 1) η1 > 0, η2 > 0, Λ > 1. Initialization An initial set Y0 such that |Y0| ≤p, and the function values f(x0), f(x0 + yi), yi ∈Y0. for k = 0, 1, 2, . . . do 1 Build a quadratic model mk(xk + s) as in (2.1) using f(xk) and f(xk + yi), yi ∈Yk. 2 Compute a trial step sk and ratio ρk as in Algorithm 1. 3 Successful iteration: ρk ≥η1 and ∥gk∥≥η2∆k. Set xk+1 = xk + sk, ∆k+1 = γ-1∆k. j∗ k = arg max j=1,...,p ∥yj∥. Replace the furthest interpolation point and shift Yk+1 = (Yk \ {yj∗ k} ∪{0}) -sk 4 Unsuccessful iteration: ρk ∆k ⇒Yk+1 = Yk \ {yj∗ k} ∪{sk} . • Geometry correction by replacing a "bad" point: Otherwise, construct the set Lagrange Polynomials {lj(x), i = 1, . . . , p} in P for the set Yk and find max value (i∗ k, s∗ k) = arg max j=1,...,p,s∈B(0,∆k) |lj(s)|. If |li∗ k(s∗ k)| > Λ, compute f(xk + s∗ k), Yk+1 = Yk \ {yi∗ k} ∪{s∗ k}. • Geometry is good: Otherwise ∆k+1 = γ∆k. Assumption 3.3. There exist κeg such that at each iteration where Yk ⊆B(0, ∆k) is Λ-poised, ∥gk -∇φ(xk)∥≤κeg∆k and thus mk is κef, κeg-fully linear in B(xk, ∆k) with κef = κeg + L+κhbm 2 We will show how this property follows from the properties of Lagrange polynomials later in the section after specifying a way for the model mk to be constructed. Lemmas 2.4 applies to Algorithm 2 with ̄C1 defined by (2.6) and Lemma and 2.5 applies with function reduction defined in (2.7). We no longer assume that mk is fully linear at each iteration, but we still have the following key lower bound on ∆k. Lemma 3.4 (Lower bound on ∆k). For any ε > q 2εf γ2τC2 ̄ C2 1 , for some τ ∈(0, 1) let Kε be the first iteration for which ∥∇φ(xk)∥≤ε. Then for all k = 1, . . . , Kε -1 ∆k ≥γ ̄C1ε. Proof. From the mechanics of the unsuccessful iteration ∆k is decreased only when Yk ∈B(0, ∆k) and is Λ-poised, thus mk is fully linear on iteration k. Since from Lemma 2.4 ∆k ε this means that ∆k cannot be decreased below γC1ε. Let Sε be defined as in (??) and Ud ε := {k ∈{0, . . . , Kε -1} : iteration k is unsuccessful and ∆k is decreased}. First of all, due to Lemma 3.5, the bound of Lemma 2.7 on the number of successful iterations holds. We also have the following bound. Lemma 3.5. |Ud ε | ≤|Sε| + ⌈ logγ C1ε ∆0 ⌉. Proof. The proof is identical to Lemma 2.8 with Ud ε instead of Uε, since ∆k is unchanged on iterations k ∈Uε \ Ud ε . The final challenge is to count the number of iterations in Uε ε . We call these iterations geometry-correcting iterations, because they are designed to improve the properties of the interpolation set. Essentially, without loss of generality we assume that Yk is either poised or can be completed to form a poised set of p points. This is because Yk can be made so by throwing away points or by arbitrary small perturbations. The following result bounds the number of consecutive geometry-correcting iterations. Theorem 3.6. Let P be the set of linear polynomials (with dimension p = n). Then the number of oracle calls in consecutive unsuccessful iterations such that k ∈Uε \ Ud ε is at most 3n. Proof. First, after at most n iterations and n oracle calls, Yk contains n points all of which have norm at most ∆k. Let Yk = {y1, . . . , yn} and let Yk be the matrix with ith column yi ∈Yk, i = 1, . . . , n. The set of linear Lagrange polynomials for Yk is then defined by lk i (s) = sT (Y -T k )i. Define Ik = {i ∈[n] | ∥yi∥= ∆k, yT i yj = 0 ∀j ∈[n] \ {i}}. That is Ik is the set of points in Yk whose norm is ∆k and that are orthogonal to all other points in Yk. We claim that (i) if |Ik| = n then the set is 1-poised and thus k is not geometry-correcting, i.e, k ̸∈Uε \ Ud ε and (ii) if |Ik| Λ. By permutation, we may assume that Ik = [|Ik|]. We can find an orthonormal basis by extending the orthonormal set { 1 ∆k yi|i ∈Ik}. Changing to this new basis, we can write Yk = ∆kI 0 0 ̃Yk where I is the identity matrix of size |Yk| and ̃Yk is of size n -|Yk|. We can then observe Y -T k = " 1 ∆k I 0 0 ̃Yk -T # . It then follows that for i ∈Ik, ∥(Y -T k )i∥= 1 ∆k and therefore, maxs∈B(0,∆k) lk i (s) = ∆k∥(Y -T k )i∥= 1. Thus since maxs∈B(0,∆k) |lk i∗ k(s)| > Λ ≥1, we cannot have i∗ k ∈Ik. Thus Ik does not lose any of its members on geometry-correcting iterations. To conclude, we just need to show that at least one i is added to Ik+1 compared to Ik during a geometry-correcting iteration. By the the mechanism of a geometry-correcting iteration of Algorithm 2, Yk+1 = Yk \ {yi∗ k} ∪{s∗ k} where s∗ k = arg max x∈B(0,∆k) |sT (Y -T k )i∗ k| = ∆k (Y -T k )i∗ k ∥(Y -T k )i∗ k∥. It is easy to see that s∗ k := ∆k (Y -T )i∗ ∥(Y -T )i∗∥satisfies (i) ∥s∗ k∥= ∆and (ii) yT j s∗ k = 0 ∀j ∈[n] \ {i∗ k}. Since yi∗ k is removed from the set when s∗ k is added, it follows that Ik+1 = Ik ∪{i∗}, thus the cardinality of Ik increases by at least one. Since only n such iterations are possible consecutively, and since each geometry-correcting iteration uses at most two oracle calls, the total number of oracle calls is at most 3n. We now can state the total bound on the oracle complexity for Algorithm 2 in the case when P is the space of linear polynomials. This bound follows from the previously derived bounds on |Sε| and |Ud ε |, the fact that each iteration in Sε and Ud ε requires only on function evaluation and Theorem 3.6 which established that there are at most 3n evaluations between each iteration in Sε ∪Ud ε . Theorem 3.7. Let Assumptions 1.2 and 3.3 hold. Let ̄C1 and C2 be defined as in (2.6) and (2.7), respectively. For any ε > q 2εf γ2τC2 ̄ C2 1 , assuming that the initial trust-region radius ∆0 ≥γ ̄C1ε, then Algorithm 2 achieves ∥∇φ(xk)∥≤ε after at most Nε function evaluations, where (3.1) Nε = 3n[|Sε| + |Ud ε |] ≤ 2(f(x0) -f ⋆)n (1 -τ)C2(γ ̄C1ε)2 + 3n logγ C1ε ∆0 = 4κbhm max n η2, κbhm, 2κef +4εf (1-η1)κfcd o + κeg 2 (1 -τ)γ2η1η2κfcd min{η2, κbhm} · 3n(φ(x0) -φ⋆) ε2 + 3n logγ ̄C1ε ∆0 . 4 Ensuring fully-linear models via Lagrange polynomials. We now show how to ensure Assumption 3.3 and derive corresponding κeg. For this we specify a way to construct the model m(x) for Algorithm 2. We impose the interpolation conditions by constructing a polynomial r(x) ∈P such that (4.1) rk(y) = f(xk + y) -f(xk), ∀y ∈Yk. Note that when |Yk| 1 then mk defined by (4.2) is κeg, κef-fully linear in B(xk, ∆k) with κeg = nLΛ 2 and κef = κeg + L+κhbm 2 . Substituting these values in Theorem 3.7 and ignoring dependencies on constants, aside from n and ε we obtain the complexity Nε = O n3ε-2 . We will now show that by improving upon the analysis in [9] we are able to bring the complexity down to O n2ε-2 which is competitive with complexity of the trust region methods based on finite difference models. Theorem 4.3. Let Y = {y1, . . . , yn} be Λ-poised in B(0, ∆). Let r(s) = gT s be an affine function satisfying r(yi) = φ(x + yi) -φ(x) for i = 1, . . . , n. Then ∥∇φ(x) -g∥≤√nL∆ p n(Λ2 -1) + 2. In particular, if Λ = 1 + O( 1 n) then we have ∥g -∇φ(x)∥= O(√nL∆). Proof. Let ̄φ(s) = φ(x + s) -φ(x), then ̄φ(0) = 0 and ∇φ(x) = ∇ ̄φ(0). Let Y be the matrix with ith column equal to yi, let D be the diagonal matrix such that Dii = ∥yi∥, and let ̄φ(Y ) be the vector with ith entry equal to ̄φ(yi). By the interpolation condition imposed on m(x + s), we have ̄φ(Y ) = Y T g and thus, g = Y -T ̄φ(Y ). Then we have ∇φ(x) -g = ∇ ̄φ(0) -Y -T ̄φ(Y ) and we can bound ∥∇φ(x) -g∥= ∥Y -T D(D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y ))∥≤√n∥Y -T D∥∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞. Thus the remainder of the proof is split between bounding ∥Y -T D∥and ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞. To bound ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞, fix any nonzero y ∈B(0, ∆) and define h(t) = ̄φ(ty). Observe that h(0) = ̄φ(0) = 0 and h(1) = ̄φ(y). By the fundamental theorem of calculus we have h(1) = R 1 0 h′(t)dt. We also have by the chain rule that h′(t) = yT ∇ ̄φ(ty), thus ̄φ(y) = R 1 0 yT ∇ ̄φ(ty)dt. By Lipschitzness, ∥∇ ̄φ(0) -∇ ̄φ(ty)∥≤Lt∥y∥. Thus |yT ∇ ̄φ(0) -yT ∇ ̄φ(ty)| ≤Lt∥y∥2. Thus we can bound |yT ∇ ̄φ(0) - ̄φ(y)| = Z 1 0 yT ∇ ̄φ(0) -yT ∇ ̄φ(ty)dt ≤ Z 1 0 Lt∥y∥2dt = 1 2L∥y∥2. Dividing by the norm of ∥y∥, we have | yT ∥y∥∇ ̄φ(0) - ̄φ(y) ∥y∥| ≤ 1 2L∥y∥≤ 1 2L∆. Observing that for y = yi, | yT ∥y∥∇ ̄φ(0) - ̄φ(y) ∥y∥| is precisely the ith entry of the vector D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y ), we can conclude ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞≤1 2L∆. To bound ∥Y -T D∥, Let (Y -T )i denote the ith column of Y -T . One can check that for all i = 1, . . . , n, the Lagrange polynomials satisfy li(s) = sT (Y -T )i. Therefore, the poisedness condition implies that maxs∈B(0,∆) li(s) = maxs∈B(0,∆) sT (Y -T )i = ∆∥(Y -T )i∥≤Λ. Thus for i = 1, . . . , n, we have ∥(Y -T )i∥≤Λ ∆. Let A = DY -1Y -T D and observe that Tr(A) = Tr(DY -1Y -T D) = Pn i=1 ∥(Y -T )i∥2∥yi∥2 ≤nΛ2. Also observe that Tr(A-1) = Tr(D-1Y T Y D-1) = Pn i=1 ∥yi∥2 ∥yi∥2 = n. Denote by λi > 0 the eigenvalues of A for i = 1, . . . , n and observe that 1 λi are the eigenvalues of A-1. Then Pn i=1 λi + 1 λi = Tr(A) + Tr(A-1) ≤n(Λ2 + 1). For all positive numbers λ, we have λ + 1 λi ≥2. Let i′ ∈[n] maximize λi + 1 λi . Then λi′ + 1 λi′ = n X i=1 λi + 1 λi ! -  X i̸=i′ λi + 1 λi  ≤n(Λ2 + 1) -2(n -1) = n(Λ2 -1) + 2. Thus we have ∥A∥= max i∈[n] λi ≤max i∈[n] λi + 1 λi ≤n(Λ2 -1) + 2. Then observe ∥Y -T D∥= ∥A∥ 1 2 ≤ p n(Λ2 -1) + 2. Combining the bounds for ∥Y -T D∥and ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞, we have ∥∇φ(x) -g∥= ∥∇ ̄φ(0) -g∥≤1 2 √nL∆ p n(Λ2 -1) + 2. In particular, for Λ = 1 + O( 1 n), we have p n(Λ2 -1) + 2 = O(1) and thus ∥∇φ(0) -g∥= O(√nL∆). We thus have a result that Λ-poisedness implies that mk, defined as in (4.2) and r(s) defined by (4.1) with exact zeroth order oracle, is κeg, κef-fully linear with κeg = √nL∆ p n(Λ2 -1) + 2 and κef as in Lemma 2.11. Let us now extend the result to the inexact zeroth order oracle where |f(x) -φ(x)| ≤εf for all x. Theorem 4.3 is modified as follows. Theorem 4.4. Let Y = {y1, . . . , yn} be such that Y is Λ-poised in B(0, ∆). Let r(s) = gT s be an affine function satisfying r(yi) = f(x + yi) -f(x) for i = 1, . . . , n, with |f(x + y) -φ(x + y)| ≤εf for y ∈Y ∪{0}. Then ∥∇φ(x) -g∥≤ p n(Λ2 -1) + 2 1 2 √nL∆+ √n2εfΛ ∆ . Proof. Define ̄φ, D, and ̄φ(Y ) as in the proof of Theorem 4.3. Diverging from that proof, the interpolation condition changes from Y T g = ̄φ(Y ) to gT yi = φ(x + yi) -φ(x) + (f(x + yi) -φ(x + yi)) -(f(x) -φ(x)), i = 1, . . . , n thus we have g = Y -T ̄φ(Y )+Y -T E, where E is a vector with components (f(x+yi)-φ(x+yi))-(f(x)-φ(x)). Then we can bound error ∥∇φ(x) -g∥= ∥Y -T D(D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y ) -D-1E)∥ ≤√n∥Y -T D∥∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y ) -D-1E∥∞ ≤√n∥Y -T D∥ ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞+ ∥D-1E∥∞ ≤√n∥Y -T D∥ ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞+ ∥D-1∥∞∥E∥∞ . We can bound ∥Y -T D∥and ∥D-1Y T ∇ ̄φ(0) -D-1 ̄φ(Y )∥∞identically as in the proof of Theorem 4.3. To bound ∥E∥∞observe that the condition that |f(x + y) -φ(x + y)| ≤εf for y ∈Y ∪{0} implies ∥E∥∞≤2εf. To bound ∥D-1∥∞recall that Dii = ∥yi∥. By the properties of Lagrange polynomials we have li(yi) = 1. By linearity, we have li(∆yi ∥yi∥) = ∆ ∥yi∥. By the poisedness condition we have li(∆yi ∥yi∥) ≤Λ. Combining these bounds we have ∆ ∥yi∥≤Λ and thus 1 ∥yi∥≤Λ ∆. Thus ∥D-1∥∞≤Λ ∆. The result follows. It follows, that if Yk is Λ-poised in B(0, ∆k) with ∆k ≥ 2 q Λεf L we have ∥∇φ(xk) -gk∥ ≤ p n(Λ2 -1) + 2(√nL∆k). We thus have a result that Λ-poisedness, with ∆k ≥2 q Λεf L , implies that mk is κeg, κef-fully linear with κeg = √nL∆ p n(Λ2 -1) + 2 and κef defined by Lemma 2.11. Without loss of generality we can assume that L ≥ ΛC2 2 in Theorem 4.4. This implies that for any ε > q 2εf γ2τC2 ̄ C2 1 , ∆k ≥γ ̄C1ε (for some τ ∈(0, 1)) implies ∆k ≥2 q Λεf L . Then the immediate consequence of Theorem 3.7 is as follows Theorem 4.5. Let Assumptions 1.2 and 3.3 hold. For any ε > q 2εf γ2τC2 ̄ C2 1 ∆k ≥γ ̄C1ε (for some τ ∈(0, 1)), assuming that the initial trust-region radius ∆0 ≥γ ̄C1ε and Λ is chosen as 1 + O( 1 n), then Algorithm 2 achieves ∥∇φ(xk)∥≤ε after at most Nε function evaluations, where (4.3) Nε = O(n2ε-2), with O containing additive logarithmic factors and constant that are independent of n and ε. This result is important because it shows than the worst case complexity of the geometry-correcting method matches that of a method based on finite differences. Thus nothing is lost by employing Algorithm 2, aside from additional linear algebra costs of maintaining the Lagrange polynomials. The following theorem demonstrates that setting Λ = 1 + O( 1 n) is critical to our final result. In other words, then overall bound on κeg in terms of Λ cannot be improved. Theorem 4.6. For all n ≥2, L, ∆> 0, and Λ > 1, there is a L-smooth function φ, and a set Y = {y1, . . . , yn} Λ-poised in B(0, ∆) such that for the linear interpolating function r(s) = gT s satisfying r(yi) = φ(x + yi) -φ(x) for i = 1, . . . , n, we have ∥g -∇φ(x)∥≥O(LnΛ∆). Proof. By scaling we can take L = ∆= 1. Let x = 0 and ̄φ(u) = φ(x + u) = 1 2uT u. Let ε be the unique number in the interval (0, 1 n-1) such that 1 1 + ε + ε (1 + ε)2(1 -nε/(1 + ε)) = Λ2. Define A = (1 + ε) -ε11T . Let U = √ A and let yi = ui - the i-th column of U. Observe for i = 1, . . . , n that ∥ui∥= √Aii = 1 and thus, ui ∈B(0, ∆). By the Sherman-Morrison formula, we have A-1 = 1 1+εI + ( ε (1+ε)2(1-nε/(1+ε)))11T . Observe that maxs∈B(0,∆) li(s) = maxs∈B(0,∆) xT (U -T )i = ∥(U -T )i∥. Since U = √ A, we have U -1 = √ A-1, thus ∥(U -T )i∥= p (A-1)ii = Λ. Thus the set is indeed Λ-poised. Then we have ∥g -∇φ(x)∥= ∥g∥= ∥U -T φ(U)∥= 1 2∥U -T 1∥= 1 2 √ 1T A-11 = 1 2 s n 1 + ε + n2ε (1 + ε)2(1 -nε/(1 + ε)). Observe that n 1 + ε + n2ε (1 + ε)2(1 -nε/(1 + ε)) = n(n(Λ2 - 1 1 + ε) + 1 1 + ε). Simplifying and bounding we obtain n(Λ2 - 1 1 + ε) + 1 1 + ε = nΛ2 -n -1 1 + ε ≥nΛ2 -(n -1) = n(Λ2 -1) + 1. Combining, we have ∥g -∇φ(x)∥≥1 2 √n p n(Λ2 -1) + 1. 5 Extensions to arbitrary polynomial basis. Below we present the bound on the number of consecutive geometry-correcting iterations for the case of a general space of polynomials P defined by some basis π(x) = {π1(x), . . . , πp(x)}. Theorem 5.1. For a general space of polynomials P with dimension p, then the number of oracle calls in consecutive unsuccessful iterations such that k ∈Uε \ Ud ε is O(p log p). Proof. First, after at most n iterations and n oracle calls, all points in Yk will have norm at most ∆k. Let vol(π(Y )) denote the volume of the simplex of the vertices in π(Y ). Let Yi(s) equal Y with point yi replaced by s. It follows that |li(s)| = vol(π(Yi(s))) vol(π(Y )) In other words, replacing yi with s increases the volume by a factor of |li(s)|. We claim that if the vectors in Y are Λ-poised, then vol(π(Y )) vol(π(Y ∗)) ≥(√pΛ)-p where Y ∗is the matrix with columns consisting of vectors y∗ i , . . . , y∗ p ∈B which maximize vol(π(Y ∗)). Then we have vol(π(Y ∗)) vol(π(Y )) = det(π(Y ∗)) det(π(Y )) = det(π(Y )-1π(Y ∗)) . One can check that (π(Y )-1π(Y ∗))ij = li(y∗ j ). Thus, we can conclude the proof of the claim via Hadamard's inequality: det(π(Y )-1π(Y ∗)) ≤ p Y i v u u t p X j |li(y∗ j )|2 ≤(√pΛ)p. Let V (t) = ((√pt))-p and let N(t) be the number of consecutive geometry correction steps until vol(π(Y ))/vol(π(Y ∗)) ≥t. By the argument in part 2 of the proof of Theorem 6.3 of [9], after at most p iterations, we have a 2p-poised set, so we have N(V (2p)) ≤p. Now for any k = 1, . . . , p-1 let us bound N(V (2k)). Suppose we take N(V (2k+1)) steps and have vol(π(Y ))/vol(π(Y ∗)) ≥V (2k+1). In every subsequent step, either we achieve 2k-poisedness, and thus vol(π(Y ))/vol(π(Y ∗)) ≥V (2k) by the above claim, or poisedness is greater than 2k and we improve volume by a factor of at least 2k. Since we only need to improve volume by a factor of at most V (2k) V (2k+1) = 2p, we need to take at most ⌈log2k(2p)⌉= ⌈p k⌉subsequent steps. Thus we have N(V (2k)) ≤N(V (2k+1)) + ⌈p k⌉. Now, by induction we have N(V (2)) ≤p + p-1 X k=1 ⌈p k ⌉≤2p + p-1 X k=1 p k = 2p + p p-1 X k=1 1 k ! ≤3p + p log(p). To conclude, for any additional steps we may assume that poisedness is greater that Λ and thus, volume will increase by a factor of at least Λ. However, the ratio of volume to the maximum volume cannot grow to be larger than 1. Thus, we have the total number of steps is at most logΛ 1 V (2) + N(V (2)) ≤ 1 log Λ (p log(√p2)) + 3p + p log(p) = O(p log(p)). Since each geometry correction iteration uses at most two oracle calls, the result follows. To have a complexity bound for Algorithm 2 based on general space of polynomials P we need to establish how Λ-poisedness of Yk implies fully linear model mk and derive the expression for κeg. Certain results for this were cited or established in [9]. As in the linear case those bounds may not be optimal and to develop better bounds one would likely need to specify P. We leave this for future research, noting that for now, the bounds in [9] show that Λ-poised Yk implies fully linear model mk with κeg having a polynomial dependence on n. In the next section we show how such models can be used in low dimensional subspaces, where κeg will depend on the subspace dimension and thus the exact nature of this dependence has small impact on the total complexity. 6 Model based trust region methods in subspaces. We now consider a trust region method where a model m(x) is built and optimized in a random low-dimensional subspace of Rn. The idea of using random subspace embeddings within derivative free methods has gained a lot of popularity in the literature lately. Specifically, in [6] random embeddings based on sketching matrices and Johnson-Lindestraus Lemma, are used together with a model based trust region method. The trust region method involved is different from what we discuss here and has the same worst case oracle complexity complexity O( n2 ε2 ) as our full-space method. Here we will show that combining our analysis in the paper with the use random projections results in better complexity in terms of the dependence on n - O( n ε2 ). In [11] random projections are used together with finite difference subspace gradient approximation and gradient descent. The resulting oracle complexity of this subspace method is the same as that of full-space finite difference gradient descent method O( n ε2 ). The purpose of this section is to show that unlike finite difference gradient descent, the trust region interpolation based methods, such as the geometry-correcting Algorithm 2 improve their complexity in terms of the dependence on n when used in subspaces versus their full space versions. In this section we will assume that the zeroth oracle is exact. Extension to inexact oracle is subject of future study since it requires certain extension of existing theory of randomized trust region methods that are beyond the subject of this paper. We will elaborate on this at the end of this section. Given a matrix Q ∈Rn×q, with q ≤n and orthonormal columns, QQT ∇φ(x) is an orthogonal projection of ∇φ(x) onto a subspace spanned by the columns of Q (we will call it a subspace induced by Q). We also define a reduction of φ(x) to the subspace, given by Q around x: ˆφ(v) = φ(x + Qv), v ∈Rq, which implies Q∇ˆφ(0) = QQT ∇φ(x). Similarly we define ˆm(v) = m(x + Qv), v ∈Rq, which implies Q∇ˆm(0) = QQT ∇m(x). We now present a modified trust-region algorithm that constructs models and computes steps in the subspace. At each iteration k ∈{0, 1, . . . } the algorithm chooses Qk ∈Rn×q with orthonormal columns. The model mk is defined as (6.1) mk(xk + Qkv) = φ(xk) + gT k Qkv + 1 2vT QT k HkQkv. For any vector v, gT k Qkv = QkQT k gT k Qkv thus without loss of generality, we will assume that QkQT k gk = gk, in other words, gk lies in the subspace induced by Qk. We define the trust region in the subspace induced by Qk as BQk(xk, ∆k) = {z : z = xk + Qkv, ∥v∥≤∆k}. We will assume here that Assumption 2.3 holds. We will also need model mk to be fully linear but only with respect to the subspace. Algorithm 3: Trust region method based on fully-linear models in subspace Inputs: Exact zeroth order oracle f(x) = φ(x), initial x0, ∆0, and η1 ∈(0, 1), η2 > 0, and γ ∈(0, 1). for k = 0, 1, 2, · · · do 1 Choose Qk ∈Rn×q with orthonormal columns. Compute model mk as in (6.1). 2 Compute a trial step xk + sk where sk = Qkvk with vk ≈arg minv{mk(xk + Qkv) : ∥v∥≤∆k}. 3 Compute the ratio ρk as in Algorithm 1. 4 Update the iterate and the TR radius as inAlgorithm 1. Definition 6.1 (Fully-linear model in a subspace). Given a matrix with orthonormal columns Q ∈Rn×q, let BQ(x, ∆) = {z : z = x + Qv, ∥v∥≤∆}. Let (6.2) m(x + Qv) = φ(x) + gT Qv + 1 2vT QT HQv and ˆm(v) = m(x+Qv), v ∈Rq. We say that model m(x+s) is κef, κeg-fully linear model of φ(x+s) on BQ(x, ∆) if ∥∇ˆm(0) -∇ˆφ(0)∥≤κeg∆ and | ˆm(v) -ˆφ(v)| ≤κef∆2 for all ∥v∥≤∆. Definition 6.2 (Well aligned subspace). The subspace spanned by columns of Q is κg-well aligned with ∇φ(x) for a given x if (6.3) ∥QQT ∇φ(x) -∇φ(x)∥≤κg∥∇φ(x)∥ for some κg ∈[0, 1). Condition (6.4) is on the properties of the subspace. Essentially, it is required that the cosine of the angle between the gradient and its projection onto the subspace induced by Q is not too small. Similar conditions (and similar terminology) have been used in [6]. We will discuss later how this requirement can be satisfied with sufficiently high probability by randomly generated subspaces. The following lemma is a simple consequence of conditions above. Lemma 6.3. On iteration k, Qk is κg-well aligned with ∇φ(xk) if and only if (6.4) ∥QkQT k ∇φ(xk)∥2 ≥(1 -κ2 g)∥∇φ(xk)∥2 If m(xk + s) is κef, κeg-fully linear model of φ(xk + s) on BQk(xk, ∆k) then (6.5) ∥gk -QkQT k ∇φ(xk)∥≤κeg∆k Proof. The first statement easily follows from the fact that QkQT k is an orthogonal projection. From this and the fact that gk = QkQT k gk we have ∥gk -QkQT k ∇φ(xk)∥= ∥Qk∇ˆm(0) -Qk∇ˆφ(0)∥= ∥∇ˆm(0) -∇ˆφ(0)∥ Thus the second statement follows from the fully linear assumption. We now show how the analysis of Algorithm 1 easily extends to Algorithm 3 under appropriate assumptions on the models and the subspaces. Lemma 6.4 (sufficient condition for a successful step). Under Assumptions 1.2 and 2.3, if Qk is κg-well aligned with ∇φ(xk), mk(xk + s) is a κef, κeg-fully linear model of φ(xk + s) on BQk(xk, ∆k) and if (6.6) ∆k ≤ q 1 -κ2gC1∥∇φ(xk)∥ where C1 = (max η2, κbhm, 2κef (1 -η1)κfcd + κeg)-1 then ρ ≥η1, ∥gk∥≥η2∆k, and xk+1 = xk + sk, i.e. the iteration k is successful. Proof. Due to Lemma 6.3, specifically (6.5) by triangle inequality, ∥QkQT k ∇φ(xk)∥≤∥gk∥+ κeg∆k and also due to Lemma 6.3 (6.7) ∆k ≤ q 1 -κ2gC1∥∇φ(xk)∥≤C1∥QkQT k ∇φ(xk)∥ By (6.7) we have (max{κbhm, η2, 2κef (1 -η1)κfcd } + κeg)∆k ≤(∥gk∥+ κeg∆k) which implies max{κbhm, η2}∆k ≤∥gk∥. This establishes that ∥gk∥≥η2∆k and also mk(xk) -mk(xk + sk) ≥κfcd∥gk∥∆k/2 by Assumption 2.3. Then, using the fact that f(x) = φ(x) and the fully linear assumption on mk in the subspace induced by Qk and recalling that sk = Qkvk we have ρk = m(xk) -m(xk + sk) + (f(xk) -m(xk)) -(f(xk + sk) -m(xk + sk)) m(xk) -m(xk + sk) = m(xk) -m(xk + sk) + (φ(xk) -m(xk)) -(φ(xk + sk) -m(xk + sk)) m(xk) -m(xk + sk) ≥1 - κef∆2 k m(xk) -m(xk + sk) ≥1 - κef∆2 k κfcd∥gk∥∆k/2 ≥1 - 2κef∆k κfcd((∥QkQT k ∇φ(xk)∥-κeg∆k) ≥η1, where the last step is true because ∥QkQT k ∇φ(xk)∥≥ 2κef (1-η1)κfcd + κeg ∆k follows from (6.7). The rest of the analysis is identical to the analysis of Algorithm 1 and Theorem 2.9 holds with the same bound but slightly differently defined C1, thus we have the following complexity result, Theorem 6.5. Let Assumptions 1.2 and 2.3 hold. Let Kε be the first iteration of Algorithm 3 that achieves ∥∇φ(xk)∥≤ε. Assume on all iterations k = 0, . . . , Kε -1 Qk is κg-well aligned with ∇φ(xk) and mk(xk + s) is a κef, κeg-fully linear model of φ(xk + s) on BQk(xk, ∆k). Then assuming that the initial trust-region radius ∆0 ≥γ ˆC1ε, we have the bound (6.8) Kε = |Sε| + |Uε| ≤2(f(x0) -f ⋆) C2(γ ˆC1ε)2 + logγ ˆC1ε ∆0 C2 = η1η2κfcd 2κbhm min{η2, κbhm}; where ˆC1 = q 1 -κ2g max n η2, κbhm, 2κef (1-η1)κfcd o + κeg . The above result is not very useful by itself without the expressions for the bounds κeg, κef (which we already know how to derive) and more critically, κg - which we did not discuss yet. In fact we will only be able to guaranteer the bound κg that holds with some probability when Qk is random. We discuss this bound below and then extend the analysis of Algorithm 3 to this case. 6.1 Building fully-linear models in a subspace. Let us apply standard forward finite difference method to approximate the "reduced" gradient ∇φp(0). (6.9) gq(0) = q X i=1 f(x + δQui) -f(x) δ ui where ui is the ith column of a unitary q × q matrix. And let us define g(x) = Qgq(0). Since we consider an exact oracle f(x) = φ(x) we simply have ∥gq(0) -∇φq(0)∥≤ √qLδ 2 Thus assuming δ ≤∆, for m(x) defined by (6.1) with g(x) = Qgq(0) we have ∥∇ˆm(0) -∇ˆφ(0)∥≤κeg∆ with κeg = √qL 2 . Analogously to Lemma 2.11 we have the following. Lemma 6.6 (Fully-linear in subspace models). For m(x) defined by (6.2) such that ∥∇ˆm(0) -∇ˆφ(0)∥≤κeg∆ mk(x + s) is a κef, κeg-fully linear model of φ(x + s) on B(x, ∆) in a subspace induced by Q, with κef = κeg + LQ + κhbm 2 where LQ is the Lipschitz constant of QQT ∇φ(x). Now we discuss how to generate a well aligned subspace. This subject has been studied extensively in the literature and in particular in derivative free optimization context [6, 11] and the key approach is to generate random subspaces. In [6] the sketching and Johnson-Lindenstrauss type embeddings are used and the resulting complexity is worse than what we derive here3. In [11] Qk matrices are generated from the Haar distribution (random matrix with orthonormal columns) and are used in the context of gradient descent methods based on finite difference gradient approximation. Here we also use Haar distribution and rely on the following result. Lemma 6.7. For q ≥3, Q ∈Rn×q drawn from the Haar distribution of the set of matrices with orthonormal columns, and any nonzero vector v ∈Rn, we have P h ∥QQT v∥2 ≥( q 10n)∥v∥2i ≥243 443 > 1 2. Proof. By Lemma 1 in [11] we have ∥QQT v∥2 ∥v∥2 ∼Beta q 2, n -q 2 . Let X ∼Beta(q/2, (n -q)/2). We have E[X] = q n and V[X] = q(n-q) n2(n/2+1) ≤ 2q n2 ≤ 2q2 3n2 . Thus by the Paley-Zygmund inequality we have P h ∥QQT v∥2 ≥ q 10n ∥v∥2i = P(X ≥(1/10)E[X]) ≥ (1 -1/10)2E[X]2 V[X] + (1 -1/10)2E[X]2 ≥ (1 -1/10)2 q2 n2 2q2 3n2 + (1 -1/10)2 q2 n2 = 243 443. The immediate conclusion is that on each iteration k, Qk drawn from the Haar distribution (with q > 3 is κg-well aligned with probability θ ≥243 443 > 1/2 with κg = p 1 - q 10n. 3It remains to be seen if those results can be improved by combining our new bounds related to Λ-poised models with the use of JL embeddings. 6.2 Complexity analysis under random subspace selection. If at each iteration k Qk is chosen randomly, Algorithms 3 can be viewed as stochastic process {Qk, xk, mk, ∆k, sk}. All quantities computed by the algorithm are random variables, with some abuse of notation we will denote these quantities by the same letters as their realizations. It should be clear from the context which one we refer to. Let Fk-1 denote the σ-algebra generated by the first k -1 iterations, Fk-1 = σ (Q0, Q1, . . . Qk-1). We note that the random variables xk and ∆k measurable with respect to Fk-1, the random variables mk, sk and ρk are measurable with respect to Fk. The random variable Kε = min{k : ∥∇φ(xk)∥≤ε} is a stopping time adapted to the filtration {Fk-1}. Note that Lemmas 6.6 and 2.4 hold for each realization of the algorithm. However, the difficulty in carrying out the analysis lies in the fact that ∆k is no longer bounded from below on all iterations, that is Lemma 2.6 does not hold, thus the function improvement provided by Lemma 2.4 by itself does not imply the bound on the number of successful iterations. Following the analysis in [7] (and other papers such as [10] ) we will consider different types of iterations and derive common bounds. First we will define several additional random variables. Let Ik = 1{Qk is κg-well aligned with ∇φ(xk)}, Ak = 1{Iteration k is successful i.e., ∆k+1 = γ-1∆k}, Bk = 1{∆k > ˆC1∥∇φ(xk)∥}. We will say that iteration k is "true" if Ik = 1. We make the following key assumption Assumption 6.8. There exists a θ ∈( 1 2, 1] such that P{Ik = 1|Fk-1} ≥θ. This assumption is clearly made attainable by Lemma 6.7 with θ ≥243 443 > 1/2 and κg = p 1 - q 10n. Other ways of generating subspaces and ensuring Assumption 6.8 are of interest for future research. Note that σ(Bk) ⊂Fk-1 and σ(Ak) ⊂Fk, that is the random variable Bk is fully determined by the first k -1 steps of the algorithm, while Ak is fully determined by the first k steps. From Assumption 6.8 and Lemma 6.6 we have the following dependency Ak ≥Ik(1 -Bk), in other words, if iteration k is true and trust region radius is sufficiently small, then the iteration is successful. For the stochastic process generated by Algorithm 3 with random Qk that satisfy Assumption 6.8 the following dynamics hold. (6.10) ∆k+1 ≥        γ-1∆k if Ik = 1 and Bk = 0, γ∆k if Ik = 0 and Bk = 0, γ-1∆k if Ak = 1 and Bk = 1, γ∆k if Ak = 0 and Bk = 1, φ(xk+1) ≤        φ(xk) -C2∆2 k if Ik = 1 and Bk = 0, φ(xk) if Ik = 0 and Bk = 0, φ(xk) -C2∆2 k Ak = 1 and Bk = 1, φ(xk) Ak = 0 and Bk = 1. Here C2 is as in Lemma 2.4. To bound the total number of iterations we first bound the number of iterations that are successful and with ∆k ≥γ ˆC1ε. For that let ̄Bk = 1{∆k ≥γ ˆC1ε}. Then from the dynamics (6.10) we have the bound similar to [7]. Lemma 6.9. For any l ∈{0, . . . , Kε -1} and for all realizations of Algorithm 3, we have l X k=0 ̄BkIkAk ≤ l X k=0 ̄BkAk ≤φ(xk) -φ⋆ C2(γ ˆC1ε)2 , Another useful lemma that easily follows from the dynamics is as follows. Lemma 6.10. For any l ∈{0, . . . , Kε -1} and for all realizations of Algorithm 3, we have l X k=0 Bk(1 -Ak) ≤ l X k=0 ̄BkAk + logγ ˆC1ε ∆0 ! . The following result is shown in [7] under Assumption 6.8, E Kε-1 X k=1 ̄Bk(Ik -1) ! ≤1 -θ θ E Kε-1 X k=1 ̄BkIk ! , from which the following lemma is derived. Lemma 6.11. Under the condition that θ > 1/2, we have E Kε-1 X k=1 Bk ! ≤ 1 2θ -1 φ(xk) -φ⋆ C2(γC1ε)2 + logγ ˆC1ε ∆0 !! Finally the following lemma is shown in [7] for the process obeying (6.10). Lemma 6.12. E Kε-1 X k=1 (1 -Bk) ! ≤1 2θ(Kε). Putting these last two lemmas together we obtain the final expected complexity result. Theorem 6.13. Let Assumption 1.2, Assumption 2.3 and Assumption 6.8 hold. Assume that for all k = 0, 1, . . . Kepsilon -1 mk(xk + s) is a κef, κeg-fully linear model of φ(xk + s) on BQk(xk, ∆k). Then for some ε > 0 assuming that the initial trust-region radius ∆0 ≥γ ˆC1ε let Kε be the random stopping for the event {∥∇φ(xk)∥≤ε}. We have the bound E [Kε] ≤ 2θ 2θ -1 2(φ(x0) -φ⋆) C2(γ ˆC1ε)2 + logγ ˆC1ε ∆0 ! where ˆC1 as in Lemma 6.6 and C2 as in Lemma 2.4. When mk is constructed using (6.9), then the number of total zeroth order oracle calls is (q + 1)Kε , κeg and κef depend on the subspace dimension q: κeg = 2√qLQ and κef = κeg + LQ+κbhm 2 . When Qk is drawn from Haar distribution with q ≥3, θ = 243 443 and κ2 g = O( n-q n ) is dependent on n. Thus q 1 -κ2g = p q 10n. The resulting expected iteration complexity is O n ε2 while each iteration requires only q + 1 function evaluations, which gives the total worst case expected oracle complexity of O nq ε2 . Conclusion: Algorithm 3 utilized finite difference gradient approximation with radius δ = ∆k and achieves O nq ε2 oracle complexity. In contrast Algorithm 1 has O n2 ε2 worst case complexity if it chooses δ = ∆k for the finite difference scheme. However, we note that the lower bound on ∆k of Algorithm 1 holds for all k and is γC1ε. On the other hand, the bound on ∆k for Algorithm 3 is p q 10nγC1ε and it does not hold on all iterations (as seen in the analysis it holds "often enough", but not always). This complicates the analysis of Algorithm 3 in the case of noisy zeroth order oracles, which we leave for future research. In summary, if using finite differences the complexity of Algorithm 3 is the same as that of Algorithm 1 if the latter utilizes δ = p q n∆k in the finite difference scheme. Our real interest is using subspace based trust region method is the geometry-correcting method whose full-space version Algorithm 2 has complexity O n2 ε2 . We address this next. 6.3 Geometry-correcting algorithm in subspaces. We are now ready to provide a geometry-correcting algorithm with the total worst case complexity matching the best known complexity. The key observation is that not only we do not decrease ∆k but we also do not generate new subspace matrix Qk on geometry-correcting iterations. Thus there is no randomness involved in these iterations and the analysis Algorithm 4: Geometry-correcting algorithm in subspaces Inputs: A zeroth-order oracle f(x) = φ(x), subspace dimension q, a space of polynomials P of dimension p, ∆0, x0, γ ∈(0, 1) η1 > 0, η2 > 0, Λ > 1. Initialization An initial orthonormal Q0 ∈Rn×q, set Y0 ⊂Rq such that |Y0| ≤p, and the function values f(x0), f(x0 + yi), yi ∈Y0. for k = 0, 1, 2, . . . do 1 Build a quadratic model mk(xk + s) as in (2.1) using f(xk) and f(xk + yi), yi ∈Yk. 2 Compute a trial step sk = Qkvk and ratio ρk as in Algorithm 3. 3 Successful iteration: ρk ≥η1 and ∥gk∥≥η2∆k. Set xk+1 = xk + sk, ∆k+1 = γ-1∆k. Replace the furthest interpolation point and shift Yk+1 = (Yk \ {yj∗ k} ∪{0}) -vk where j∗ k = arg max j=1,...,p ∥yj∥. Generate Qk+1 ∈Rn×q from the Haar distribution. 4 Unsuccessful iteration: ρk ∆k ⇒Yk+1 = Yk \ {yj∗ k} ∪{vk}. • Geometry correction by replacing a "bad" point: Otherwise, construct the set Lagrange Polynomials {lj(x), i = 1, . . . , p} in P for the set Yk and find max value (i∗ k, v∗ k) = arg max j=1,...,p,v∈B(0,∆k) |lj(v)|. If |li∗ k(v∗ k)| > Λ, compute f(xk + Qkv∗ k), Yk+1 = Yk \ {yi∗ k} ∪{v∗ k}. • Geometry is good: Otherwise ∆k+1 = γ∆k. Generate Qk+1 ∈Rn×q from the Haar distribution. of Algorithm 3 carries over essentially without change if we restrict all statements to iterations k that are not geometry-correcting, that is k ∈Sε ∪Ud ε . Thus the bound on E [Kε] in Theorem 6.13 applies as the bound on E|Sε ∪Ud ε |, which equals to the bound on the total number of zeroth order oracle calls for all k ∈Sε ∪Ud ε (since on such iteration the zeroth order oracle is called only once). We thus can utilize the results of Theorem 3.6 which bounds the number zeroth order oracle calls during consecutive geometry-correcting steps as 3q if linear interpolation is used. For a higher degree interpolation we can apply Theorem 5.1 to bound the total number of zeroth order calls during consecutive geometry-correcting iterations as O(p log p) where p is the dimension of the space of polynomials defined on Rq, thus p is polynomial in q. Finally, to bound κeg and κef in ˆC1, we can employ Theorem 4.3 for linear interpolation or more general result similar to Theorem 4.1 from [9] for higher order interpolation to obtain κeg = O(√q) if Λ ≈1 or, more generally, κeg has polynomial dependency on q. The the total complexity is O nqα ε2 for some α > 0. 7 Conclusions We have shown that a practically efficient model based trust region DFO method such as Algorithm 2 and its subspace version Algorithm 4 have oracle complexity that is comparable with other known (and less practical) derivative free methods. The are many further practical improvements that can be applied to Algorithm 4 and Algorithm 4 that do not affect complexity but complicate the exposition. These can include avoiding computing ρk when ∥gk∥≤η2∆k, adding not random directions to subspaces and many others. Acknowledgements This work was partially supported by ONR award N00014-22-1-215 and the Gary C. Butler Family Foundation. References [1] Charles Audet and Warren L. Hare. Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer International Publishing, Cham, 2017. [2] Afonso S Bandeira, Katya Scheinberg, and Luís N Vicente. Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization. Mathematical programming, 134(1):223257, 2012. [3] Albert S Berahas, Liyuan Cao, Krzysztof Choromanski, and Katya Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Foundations of Computational Mathematics, pages 1-54, 2021. [4] Jose Blanchet, Coralia Cartis, Matt Menickelly, and Katya Scheinberg. Convergence rate analysis of a stochastic trust-region method via supermartingales. INFORMS journal on optimization, 1(2):92-119, 2019. [5] Liyuan Cao, Albert S. Berahas, and Katya Scheinberg. First- and second-order high probability complexity bounds for trust-region methods with noisy oracles. Mathematical Programming, 207(1-2):573-624, 2023. [6] Coralia Cartis and Lindon Roberts. Scalable subspace methods for derivative-free nonlinear least-squares optimization. Mathematical Programming, 199(1):461-524, May 2023. [7] Coralia Cartis and Katya Scheinberg. Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Mathematical Programming, 169(2):337-375, 2018. [8] Andrew R Conn, Nicholas IM Gould, and Philippe L Toint. Trust region methods. SIAM, 2000. [9] Andrew R Conn, Katya Scheinberg, and Luís N Vicente. Introduction to derivative-free optimization. SIAM, 2009. [10] Serge Gratton, Clément W Royer, Luís N Vicente, and Zaikun Zhang. Complexity and global rates of trust-region methods based on probabilistic models. IMA Journal of Numerical Analysis, 38(3):1579-1597, 2018. [11] David Kozak, Stephen Becker, Alireza Doostan, and Luis Tenorio. A stochastic subspace approach to gradient-free optimization in high dimensions. Computational Optimization and Applications, 79(2):339368, Jun 2021. [12] Jeffrey Larson, Matt Menickelly, and Stefan M Wild. Derivative-free optimization methods. Acta Numerica, 28:287-404, 2019. [13] Jorge J Moré and Stefan M Wild. Benchmarking derivative-free optimization algorithms. SIAM Journal on Optimization, 20(1):172-191, 2009. [14] M. J. D. Powell. Least Frobenius norm updating of quadratic models that satisfy interpolation conditions. 100:183-215, 2004. [15] Michael J D Powell. The NEWUOA software for unconstrained optimization without derivatives. Technical Report DAMTP 2004/NA08, 2004. [16] Michael JD Powell. UOBYQA: unconstrained optimization by quadratic approximation. Mathematical Programming, 92(3):555-582, 2002. [17] Luis Miguel Rios and Nikolaos V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247-1293, 2013.
2510.14934
TASLA: Text-Aligned Speech Tokens with Multiple Layer-Aggregation Ming-Hao Hsu1 Liang-Hsuan Tseng2 Hung-yi Lee2 Zhizheng Wu1 1The Chinese University of Hong Kong, Shenzhen 2National Taiwan University hsuminghao1006@gamil.com, wuzhizheng@cuhk.edu.cn f11921067@ntu.edu.tw, hungyilee@ntu.edu.tw Abstract We propose Text-Aligned Speech Tokens with Multiple Layer-Aggregation (TASLA), which is a text-aligned speech tokenization frame- work that aims to address the problem that un- der a low-frame-rate and text-aligned regime, single-source speech tokens may lose acous- tic details during reconstruction. On the other hand, this paper further explains how different encoder layers collaborate to capture compre- hensive acoustic features for tokenization. Pre- vious work, TASTE, proposed the text-aligned speech tokenization framework, which is a LM- friendly architecture, but struggles to capture acoustic details. We address this trade-off with two components: Multi-Layer Dynamic Atten- tion (MLDA), which lets each text position adaptively mix shallow/deep features from a frozen speech encoder, and Finite Scalar Quan- tization (FSQ), a simple per-dimension dis- cretization with smooth optimization. At about 2.62 Hz (tokens/s), TASLA consistently im- proves prosody and achieves competitive qual- ity over TASTE on in-domain (LIBRISPEECH) and OOD (EXPRESSO, VOXCELEB) sets. We further demonstrate that dynamic layer mixing is correlated with spectral flux and explains why MLDA preserves prosody under a low frame rate with extreme feature compression. 1 Introduction Large language models (LLMs) have recently trans- formed text understanding and generation. To pur- sue a more natural interaction mode with LMs, researchers have started to research Spoken Lan- guage Models (SLM) (Chen et al., 2024; Tseng et al., 2025; KimiTeam et al., 2025; Hassid et al., 2023; Nguyen et al., 2024a), where models pro- cess and generate speech as a first-class modality. If we want to use voice mode to interact with the LMs, one option is a cascade pipeline that first gen- erates text with an LM and then converts text to speech using a TTS model (Yazdani et al., 2025; Shikhar et al., 2025). In contrast, SLM offers a better approach, as it captures acoustic details from context and generates speech that more naturally aligns with the context (Arora et al., 2025; Guo et al., 2025). The key to enabling SLMs to have such an ability is speech tokenization. Speech tok- enization involves mapping continuous waveforms into discrete tokens that are compact, learnable, and acoustically informative, enabling a vocoder to reconstruct speech. However, there are two practical problems with joint modeling of text tokens and speech tokens for an SLM. The first problem is the trade-off be- tween acoustic richness and LM-friendliness. Neu- ral speech tokenizers that generate more tokens per second could provide fine-grained acoustic de- tails since they use a shorter time frame for speech reconstruction, and these speech tokenizers often capture only acoustic details without text informa- tion (Liu et al., 2024). Therefore, they could gen- erate very high-quality speech, but their speech token sequence is often very long (Borsos et al., 2023; Wang et al., 2023), which is not suitable for transformer-based architectures if we want to generate longer speech. The second problem is the speech token se- quence length mismatch with the text sequence length. Standard codecs operate at fixed frame rates that yield token streams far longer than text (e.g., ∼12.5–50 speech Hz vs. ∼2–3 text Hz), making joint speech-text modeling harder (Défossez et al., 2023; Zhang et al., 2023). Making the LMs need to learn two modalities separately, which also in- creases the difficulty of the training process. How- ever, aggressive sequence compression helps align- ment but risks losing prosodic detail. Prior work addresses the mismatch by aligning speech tokens to text via an attention mechanism during tokeniza- tion, enabling straightforward joint modeling at very low bitrates (Tseng et al., 2025). However, under strong compression of the to- 1 arXiv:2510.14934v1 [cs.SD] 16 Oct 2025 ken length, it still shows limitations in preserv- ing fine-grained acoustics and in out-of-domain generalization. To overcome these problems and find a balance between bitrate and token sequence length. We propose TASLA, a text-aligned speech tokenization framework that preserves acoustic de- tail at text length. TASLA introduces Multi-Layer Dynamic Attention (MLDA), which enables text tokens to query a frozen speech encoder and adap- tively combine shallow and deep representations, allowing each text position to gather the most pre- dictive acoustic evidence for content, prosody, and speaker cues. To discretize reliably, we replace Residual Vector Quantization (RVQ) with Finite Scalar Quantization (FSQ) (Mentzer et al., 2024), which provides smooth optimization and resilience to codebook pathologies. This work shows that dynamic, per-token fusion across encoder depth is the missing piece for text- aligned speech tokens: MLDA lets each word po- sition select the most predictive mixture of shal- low and deep features, thereby retaining prosody and speaker cues at a text-length rate. Under rate parity with prior text-aligned tokenizers, MLDA is consistently comparable in quality and outper- forms on prosody metrics in both in-domain and OOD settings, and yields better spoken continua- tion than strong text-only and speech-token base- lines, demonstrating that adaptive layer mixing ef- fectively narrows the gap between alignment con- venience and acoustic expressivity. 2 Related Work Neural Audio Codecs. CNN-based neural codecs typically use RVQ and multi-scale spec- tral/adversarial losses. They often excel at high- fidelity reconstruction and preserve prosodic cues. EnCodec (Défossez et al., 2023) and DAC (Kumar et al., 2023) are canonical convolutional codecs with RVQ, which have strong perceptual quality and real-time operation, but are not designed for text alignment. Then, BigCodec (Xin et al., 2024) and TAAE (Parker et al., 2025) achieve extremely high quality at extremely low bitrates using a con- volution codec architecture. However, they are still fixed-rate and not LM-friendly for speech–text joint modeling. Therefore, Mimi (Défossez et al., 2024) deliberately reduces token rates to be more LM-friendly, which is a useful bridge from pure re- construction toward joint modeling. In contrast, our method goes beyond purely acoustic reconstruction by jointly modeling speech tokens in both acoustic and semantic information. Semantic-Acoustic Joint Modeling. To incorpo- rate linguistic and semantic information and enable unified modeling, recent work blends semantic and acoustic representations. SpeechTokenizer (Zhang et al., 2023) distills semantics into upper RVQ codebooks using SSL features, yielding tokens that carry both semantic and acoustic content, but its se- quences remain relatively long and not text-aligned. DualCodec (Li et al., 2025) uses dual branches, semantic and acoustic, to balance intelligibility and fidelity, with lower rates but still not explicitly aligned to text. Spirit-LM (Nguyen et al., 2024b) in- terleaves speech and text tokens for joint modeling; effective but depends on extra interleaving rules rather than addressing alignment at tokenization. These methods add semantics or interleaving strate- gies, yet do not solve alignment at the tokenization stage, leaving training and inference complexity and potential prosody loss. Our method differs by enforcing speech-text length alignment during tok- enization, while still preserving both acoustic and semantic information. Text-Aligned Speech Tokens. Text-aligned speech tokens use text positions as queries over speech representations so that tokenization itself aligns speech tokens to the text length. Sequences become LM-friendly while retaining prosodic and paralinguistic cues. Because of alignment, the token rate is about 1/20–1/5 of traditional tokenizers. TASTE (Tseng et al., 2025) introduces the text-aligned paradigm, where text-query cross-attention over speech encoder features produces time-variant, text-length-matched tokens, thereby eliminating speech–text length mismatch and enabling straightforward SLM training. However, the extreme compression of the bitrate and token rate results in the loss of acoustic details. Unlike prior text-aligned approaches, our method maintains the extremely low frame rate while preserving richer acoustic detail in reconstruction. 3 Preliminaries 3.1 Speech–Text Gap Speech and text provide two complementary but structurally different views of language. Let a spo- ken utterance be denoted as a continuous waveform u ∈RT , where T is the number of acoustic frames. On the other hand, let the corresponding transcript 2 Figure 1: TASLA overview. Text tokens (Q) cross-attend to the last-layer encoder states (K) and to a dynamically mixed value stream (V ) formed by per-frame weights over layers 8/16/24/32 (MLDA). The aggregated features (z) are discretized by FSQ (d = 64, L = 8) and fed to a unit decoder that predicts S3 units; a vocoder reconstructs waveforms. Training minimizes next-unit cross-entropy (LCE) and an FSQ reconstruction loss (Lrec). be a discrete token sequence v = [v1, . . . , vN] of length N. The asymmetry between |T| and |N| is striking: while text unfolds sparsely with clear boundaries, speech evolves densely, carry- ing layered information including acoustic details and prosody. This discrepancy yields the length mismatch problem that most speech tokenizations produce sequences far longer than text, making it hard to jointly model text and speech. Prior reme- dies, such as interleaving speech and text tokens or introducing alignment heuristics, reduce the mis- match but do not directly resolve the modality gap, and often sacrifice prosodic fidelity. 3.2 Text-Aligned Speech Tokens To address this, TASTE (Tseng et al., 2025) pro- poses to construct speech tokens that are explic- itly aligned with their text counterpart, which is called text-aligned speech tokens. Formally, in text- aligned speech token architectures, each time a text token is generated, a corresponding speech token is also generated, and the two token sequences are identical in length and order. In practice, TASTE uses distilled Whisper-large-v3’s (Gandhi et al., 2023) encoder part as its encoder, which is a 32-layer speech encoder. Then, given the en- coder’s hidden states, {h(1), . . . , h(L)} where each h(ℓ) ∈RT×dh, text-aligned speech tokens aim to produce a compressed representation z ∈RN×dz whose length matches that of the text sequence. A cross-attention mechanism achieves this by let- ting the text embeddings E(v) ∈RN×dq act as queries, the final-layer states h(L) provide keys, and selected shallow layer h(ℓ) provide values. The aggregated representation can thus be written as z = Attn(E(v), h(L), h(ℓ)), This formulation aligns speech and text sequences at the token level, enabling straightforward joint modeling without heuristic synchronization. After generating text tokens v and speech tokens ˆz, we sum them and feed the result into a speech unit decoder, which autoregressively predicts the S3 units ˆs (see Section F for details). These units are then passed to the CosyVoice unit-to-speech vocoder for waveform reconstruction. 3.3 Quantization for Efficiency Even with alignment, the representation z remains continuous and high-dimensional, which hinders efficiency and symbolic control. Discretization is therefore essential, which maps each vector zi into a compact set of discrete codes, allowing sym- bolic modeling akin to text tokens. Traditional approaches such as Residual Vector Quantization (RVQ) express each token as a sum of codebook vectors, zi ≈ R X r=1 q(r) i , q(r) i ∈Cr, where R is the number of quantizer stages and Cr is the r-th codebook. Although RVQ provides fine- 3 grained reconstruction, it often requires large code- books and multiple stages, which inflate bitrate and introduce redundancy. This motivates alternative quantization strategies that achieve comparable ex- pressiveness with fewer parameters, lower bitrate, and better preservation of prosodic cues. 3.4 Bitrate and Token Rate We distinguish two quantities: the frame rate Rtok (Hz) and the bitrate b (bits/s). The token rate mea- sures how many discrete speech tokens are pro- duced per second for reconstruction, while the bi- trate measures how many bits per second are actu- ally needed to encode those tokens. RVQ-based codecs. For residual vector quanti- zation (RVQ) with R quantizers and a codebook size K per layer, each token consumes R log2 K, and the bitrate is b = Rtok · R log2 K. Intuitively, increasing either the token frequency Rtok or the per-token code payload R log2 K raises the bi- trate. This RVQ counting is standard in neural codecs/speech tokenizers that output layered code indices. FSQ-based tokenizers. For finite scalar quan- tization (FSQ) with d scalar dimensions and L uniform levels per dimension, each token carries d log2 L bits,and therefore b = Rtok · d log2 L. Compared to RVQ, FSQ trades the number of resid- ual codebooks for axis-aligned scalar bins. Its bi- trate still scales linearly in Rtok but now with the per-token payload d log2 L. Bitrate and Frame Rate Calculation for Text- Aligned Speech Tokens. Since the text-aligned speech token sequence length is aligned to the text token length, it is not a fixed token rate tokenizer. Therefore, we use the LIBRISPEECH test set to es- timate the frame rate and calculate the bitrate. In the LIBRISPEECH test set, there are about 19,805.2 seconds and a total of 51,903 text tokens; there- fore, the average frame rate for text-aligned speech tokens is about 2.62. 4 Methodology Our goal is to design a text-aligned speech tokeniza- tion framework that captures richer acoustic and prosodic cues while remaining simple to train and easy to control. Concretely, we (i) aggregate multi- layer encoder features with Multi-Layer Dynamic Attention (MLDA) to compress frame-level speech into text-length representations and (ii) discretize these representations by Finite Scalar Quantiza- tion (FSQ) to control bitrate with minimal param- eters. Finally, we train the whole system with a lightweight objective that combines a unit-level cross-entropy and a masked reconstruction loss. 4.1 Multi-Layer Dynamic Attention We turn frame-level speech features into a text- length sequence by letting text tokens query the speech encoder’s hidden states. The keys come from the last encoder layer, while the values come from shallower layers. An MLP produces per- layer mixture weights so the model can adapt each layer’s mix ratio for each frame. We first pass the input speech u through a frozen speech encoder to obtain layerwise hidden states {h(1), . . . , h(L)}, where each h(ℓ) ∈RT×dh is a frame-level sequence, T is acoustic length, and dh is hidden size. To align speech with text, we employ a cross- attention mechanism that compresses the speech frames into the same length as the text tokens. Specifically, the text tokens serve as queries Q, while the last-layer representation h(L) provides the keys K. For the values, we select one or more shallow layers {h(ℓs)}S s=1, denoted as {Vl}, where each Vl ∈RT×dv. To dynamically aggregate these value sources, we introduce an MLP that predicts layer-wise mix- ture weights. Given the last hidden state hlast, the MLP produces normalized weights as: w = softmax(MLP(hlast)) , w ∈R|V|×T . After obtaining the weights, we use a multilayer dynamic fuser to fuse these value sources. The aggregated value sequence is then computed frame- wise as: ˜Vi = X l∈V wl,i Vl,i, ˜V ∈RT×dv, where V denotes the set of selected layers and i is the frame index. Let E(·) be a token embedding function. We embed the text sequence v into queries Q = E(v) ∈RN×dq, where N is the text length and dq is the query dimension. Finally, we compute the cross-attention between text tokens and aggregated speech tokens represented as: z = Attn(Q, K, ˜V ) = softmax QK⊤ √dk  ˜V , where K ∈RT×dk and z ∈RN×dv. 4 4.2 Finite Scalar Quantization (FSQ) Given a per-token vector x ∈RD, we map it to a d-dim latent (via a linear encoder) and back to RD after quantization (via a linear decoder). FSQ operates independently on each of the d latent di- mensions as follows. First, apply a learnable per- dimension affine and temperature-scaled squash- ing: ˜u = u ⊙s + b, ¯u = tanh  ˜u τ  ∈[−1, 1]d. With L uniform scalar levels on [−1, 1], G = {−1 + 2k L−1}L−1 k=0 , the quantization index and value for each dimension j are ij = clip  round  ¯uj + 1 2 (L −1)  , 0, L −1  qj = −1 + 2 ij L −1 We use the straight-through estimator to form the quantized latent zq = ¯u + (q −¯u)sg, where (·)sg stops gradients. The decoder maps zq back to the feature space; training minimizes MSE on valid frames (with masking if sequences are padded). In our experiments, we select the d = 64 and L = 8 for a balance of bitrate and performance. 4.3 Training Objective Cross Entropy Loss. After we get the text token v and speech token ˆz, we uses a transformer-based unit decoder to autoregressively decode the text and speech token to speech units and then use the CosyVoice speech vocoder to convert it into speech. Therefore, we calculate the cross-entropy loss of the target speech units. When a target speech unit sequence y is available, a unit decoder consumes zq and predicts y autoregressively with the usual next-token cross-entropy: LCE = 1 |y| X t −log pθ yt y<t, zq  . Reconstruction Loss. To further help the FSQ module learn how to reconstruct tokens back to la- tent vectors, we use a reconstruction loss to further help the training process. Independent of speech units, we mask the padded frames and calculate the MSE loss of the pre-FSQ feature z and its dequan- tized feature zq: Lrecon = 1 P m (z −zq) ⊙m 2 2, where m is a binary mask over valid posi- tions/tokens. Combining the Two Objectives. We optimize a simple weighted sum Ltotal = LCE + λ Lrecon, with λ controlling the strength of the FSQ recon- struction term. 5 Experimental Setup 5.1 Datasets We conduct experiments on both in-domain and out-of-domain datasets to comprehensively eval- uate our framework. LIBRISPEECH (Panayotov et al., 2015) is used for model training and pri- mary evaluation, while EXPRESSO (Nguyen et al., 2023) and VOXCELEB (Nagrani et al., 2017) are adopted for out-of-domain evaluation, as they con- tain richer acoustic variability aligned with our objectives. Specifically, EXPRESSO emphasizes emotional expressiveness and prosodic variation, whereas VOXCELEB features diverse recording con- ditions with natural background noise. The detailed dataset introduction could be found in Section E 5.2 Baselines Our study focuses on text-aligned speech tokeniza- tion, aiming to produce token sequences whose length matches the text while keeping the bitrate low enough for joint speech-text modeling. To eval- uate TASLA, we consider representative tokenizers from three major categories: Compression Bitrate Neural Codecs, Semantic-Acoustic Joint Modeling Tokenizers, and Text-Aligned Speech Tokenizers. Since many existing codecs operate at bitrates much higher than 1 kbps, we downsample their out- puts—by retaining fewer RVQ layers or applying temporal striding—to approximately 1 kbps. This allows us to ensure fair comparisons under a com- parable bitrate regime. The implementation details are provided in Section A. Because TASLA compresses the token rate to an extremely low level while preserving text align- ment, there is no directly comparable tokenizer besides TASTE. We therefore include state-of-the- art codecs that either operate at, or can be ad- justed to, similar bitrates: EnCodec (Défossez et al., 2023), Mimi (Défossez et al., 2024), Speech- Tokenizer (Zhang et al., 2023), TASTE (Tseng et al., 2025), DualCodec (Li et al., 2025), and Big- Codec (Xin et al., 2024). These baselines vary 5 Model Ene. RMSE ↓ F0-PCC ↑ Phr. Cos. ↑ Ene. PCC ↑ Phr. L2 ↓ VDE ↓ GPE ↓ LIBRISPEECH (in-domain) S3 Topline 6.94 0.91 0.92 0.95 5.41 0.15 0.03 Text-only Baseline 10.08 0.33 0.89 0.81 7.06 0.25 0.32 TASTE 8.63 0.80 0.91 0.88 5.79 0.19 0.08 TASLA 6.97 0.87 0.90 0.92 6.67 0.17 0.05 VOXCELEB (noisy/natural, OOD) S3 Topline 5.31 0.86 0.94 0.94 4.46 0.17 0.03 Text-only Baseline 8.73 0.19 0.90 0.57 6.48 0.33 0.35 TASTE 7.68 0.64 0.93 0.74 5.16 0.26 0.13 TASLA 6.53 0.71 0.93 0.81 4.92 0.22 0.09 EXPRESSO (emotion-rich, OOD) S3 Topline 8.57 0.91 0.82 0.93 7.19 0.13 0.04 Text-only Baseline 8.95 0.21 0.68 0.80 11.77 0.27 0.45 TASTE 8.90 0.73 0.76 0.86 8.73 0.20 0.19 TASLA 7.87 0.84 0.82 0.91 7.43 0.15 0.10 Table 1: Experimental Results for Prosody Metrics. We report prosody metrics across datasets at text-aligned speech token architectures. Our proposed method, TASLA, outperforms the ablations and baselines on nearly all prosody metrics. The rows marked in light red denote the upper bound (S3 topline). in training data and parameter scales, which we summarize in Section A. Our conclusions are thus scoped to this targeted bitrate regime rather than claiming universal superiority. For ablation studies, we further introduce the S3 topline and a text-only baseline. The S3 topline reconstructs speech directly from ground-truth S3 units, while the text-only baseline predicts S3 to- kens from text without joint modeling. 5.3 Evaluation Metrics We evaluate our framework along two complemen- tary dimensions: quality, which reflects the overall naturalness and fidelity of the reconstructed audio, and prosody, which measures how well prosodic patterns such as pitch, rhythm, and energy are pre- served. 5.3.1 Prosody Metrics For prosody metrics, we report Emotion Con- sistency, F0-PCC, Energy-PCC, Energy-RMSE, Phrase Cosine Similarity (Phr. Cos.), Phrase L2 (Phr. L2), Voicing Decision Error (VDE), and Gross Pitch Error (GPE). Emotion Consistency checks whether the emotional intent is preserved after reconstruction. F0-PCC measures the correla- tion of pitch contours between hypothesis and ref- erence after voicing masking and time alignment, while Energy-PCC and Energy-RMSE quantify how well loudness dynamics over time are retained. Phr. Cos. and Phr. L2 summarize phrase-level into- nation by converting F0 to semitones, fitting degree- 3 Legendre polynomials on normalized time, and comparing the resulting coefficient vectors of ref- erence and reconstruction. VDE is the fraction of frames with mismatched voiced/unvoiced deci- sions (lower is better), and GPE is the proportion of voiced frames where predicted F0 deviates from the reference by a large relative margin, reflecting robustness of pitch accuracy. 5.3.2 Quality Metrics For quality metrics, we use Word Error Rate (WER), UTMOS, and Speaker Similarity to evalu- ate the reconstructed speech’s quality. Word Error Rate (WER) is computed between the ASR tran- scription of the reconstructed audio and the refer- ence transcript. UTMOS (Baba et al., 2024) is a non-intrusive, neural MOS predictor that estimates perceived naturalness and quality directly from the waveform. Speaker similarity is quantified as the cosine similarity between speaker embeddings ex- tracted from reference and reconstructed audio. 6 Experimental Results We report the main experimental results of the qual- ity metrics in Table 2 and prosody metrics in Table 1. The details of the weight analysis could be found in Section 6.2. The details of the training and pa- rameters can be found in Section B. The details of metrics formulation and explanation could be found in Section C. 6 LIBRISPEECH VOXCELEB (OOD) EXPRESSO (OOD) Model Frame Rate Bitrate WER UTMOS Spk. Sim. UTMOS Spk. Sim. WER UTMOS Spk. Sim. Ground Truth — 256000 0.05 3.41 — 2.82 — 0.11 2.98 — frame rate less than 150 EnCodec 75 1500 0.07 1.40 0.63 1.44 0.61 0.13 1.11 0.56 SpeechTokenizer 50 1000 0.11 1.72 0.34 1.80 0.39 0.30 1.51 0.34 BigCodec 80 1040 0.05 3.29 1.00 2.76 0.99 0.12 2.79 0.99 DualCodec 12.5 1225 0.04 3.16 0.88 2.60 0.84 0.09 2.80 0.84 Mimi 12.5 1100 0.04 3.01 0.93 2.53 0.93 0.10 2.45 0.89 frame rate less than 10 (word-level) S3 Topline — 600 0.04 3.40 0.87 2.93 0.84 0.12 3.17 0.82 Text-only Baseline — 50 0.23 3.42 0.77 3.33 0.68 0.41 3.28 0.64 TASTE ∼2.62 ∼150 0.10 3.51 0.85 3.31 0.78 0.27 3.33 0.75 TASLA ∼2.62 ∼600 0.12 3.43 0.87 3.10 0.81 0.28 3.12 0.80 Table 2: Experimental Results for Quality Metrics. Lower is better for WER; higher is better for the others. VOXCELEB has no WER column because no transcripts are available. Our proposed method (TASLA) outperforms or performs competitively against the baselines over different evaluation sets. The out-of-domain (OOD) tag is applied to TASTE and TASLA in this table, since the two OOD datasets may overlap with the training data of other speech tokenizers. 6.1 Overall Performance From Table 1, TASLA consistently surpasses other text-aligned speech token frameworks on nearly every prosody indicator. And it is also closer to the S3 tokens’ topline. This pattern shows that TASLA preserves paralinguistic cues, intonation, rhythm, and emphasis, rather than merely reconstructing clean audio, and does so under much stronger com- pression. For pitch-related behavior, a higher F0- PCC means TASLA’s generated pitch contour fol- lows the reference more faithfully. At the same time, lower GPE and VDE indicate fewer octave (pitch halving/doubling) mistakes and more sta- ble voiced/unvoiced decisions, yielding more nat- ural intonation and fewer pitch glitches. For loud- ness dynamics and phrasing, higher Energy-PCC reflects a closer alignment of energy fluctuations over time, while lower Energy-RMSE shows that the absolute loudness levels better match the refer- ence, rather than just the trend. Higher Phrase-Cos. and lower Phrase-L2 further indicate that pauses, timing, and rhythmic structure align more precisely with the ground truth. On the other hand, Table 2 summarizes the qual- ity results across in-domain LIBRISPEECH and two out-of-domain datasets, VOXCELEB and EX- PRESSO. Overall, TASLA achieves superior perfor- mance compared to other commonly used speech codecs under most of the quality metrics under a significantly low frame rate. Compared to other low frame rate methods, TASLA is also closer to the S3 topline. Specifically, on the OOD sets, the higher frame rate speech codecs get more advan- tage on WER since they could reconstruct more acoustic details, and the training data of the ASR models mostly contains LIBRISPEECH and a large amount of training data. Therefore, even though the quality of the reconstructed speech of other methods is lower than TASLA, they could also get lower WER on these evaluation sets. These improvements match our design goal: text- aligned speech tokens that retain prosody while remaining LM-friendly under aggressive compres- sion. Mechanistic analysis for this behavior is pro- vided in Section 6.2. 6.2 Analysis of Dynamic Weights In our experiments, we select the 8th, 16th, 24th, and 32nd layers as the target layers for multi- layer dynamic attention. In this section, we use {w0, w1, w2, w3} to represent these weights. We analyze 500 utterances from EXPRESSO to characterize the dynamic weighting module, since EXPRESSO is richer with prosody. The per-frame normalized weight means are ¯w0 = 0.573, ¯w1 = 0.325, ¯w2 = 0.100, ¯w3 = 0.0017. Thus the stream is carried predominantly by w0, with w1 and w2 providing substantial secondary contribu- tion. To quantify how many layers act at once, we compute frame-wise entropy Ht = − 4 X i=1 w(i) t log w(i) t and report ¯H = 0.484. The effective number of layers ENL = exp( ¯H)≈1.62 indicates about one to two meaningful contributors per frame. We relate weights to short-time spectral dynam- ics. Let spectral flux be Ft = ∥mt −mt−1∥2, 7 Figure 2: Scatter plots and linear regression across all samples. The figure shows that w1 is positively correlated with mean spectral flux, while w2 is negatively correlated. This indicates that w2 suppresses spectral transients, whereas w1 promotes them. Figure 3: S3 Unit Training Accuracy. Accuracy of S3 unit prediction for the three ablations: Text- only, TASTE, and TASLA. TASLA achieves about 10% higher accuracy than Text-only and TASTE. where mt denotes mel magnitudes. High Ft marks onsets/offsets and low Ft marks steady vow- els or near-silence. From Figure 2, we observe corr(w0, F) = −0.116, corr(w1, F) = +0.464, corr(w2, F) = −0.379, and corr(w3, F) = −0.183. Previous work (Baby et al., 2020) de- notes that the spectrum flux has a positive relation with the phoneme or sub-word boundaries. Com- bining with the observation of the average weight and the relation from Figure 3, deeper layers (16th and 24th layer, w1 and w2) are more related to the flux; therefore, the deeper layers may relate to the phoneme or sub-word boundaries. On the other hand, the w0 seems not obviously related to the spectrum flux, but has a high weight ratio, which shows that shallow layers provide more acoustic cues, but are not obviously related to phoneme or sub-word boundaries. Therefore, since the se- mantic information has been carried with the text tokens, the deeper layer’s weight becomes smaller since it carries lower acoustic cues. This observa- tion also matches the conclusion of Ma et al. (2025) and Shim et al. (2025). Therefore, we provide an ablation study that mainly uses shallower layers for MLDA; see the experiments in Section G. 6.3 S3 Unit Accuracy From Figure 3, we observe that TASLA predicts the S3 units more accurately than TASTE and text- only ablations. These results indicate that TASLA is better at jointly modeling acoustic and semantic features, leading to more accurate S3 unit predic- tions. However, this improvement is not directly reflected in the quality metrics but instead appears in the prosody metrics. This limitation may be due to the upper bound of the S3 units’ performance and the size of the training data. Combining these observations, the combination of the observed weight switch pattern with flux and the S3 prediction accuracy, provides a mechanis- tic explanation for the dynamically fused speech features and their importance. 7 Conclusion We introduce TASLA, a text-aligned speech to- kenization framework designed to preserve fine- grained acoustic detail under an extremely low frame rate while remaining compatible with text- token alignment. Our experiments demonstrate that, despite the time-variant nature of the speech tokens, TASLA consistently maintains rich acous- tic cues, achieving prosody metrics close to the upper-bound topline and outperforming prior text- aligned speech tokenizers on out-of-domain tasks. Quantitative analyses reveal how different encoder layers contribute to various acoustic aspects, offer- ing interpretability that helps explain why certain layers dominate under specific speaking conditions. Future work includes improving time efficiency through low-latency variants, and exploring single- stage generation approaches to overcome the per- formance ceiling imposed by S3 speech units. 8 8 Limitations Due to the performance ceiling of the S3 units and the limited dataset size, TASLA has untapped po- tential to model speech more effectively than other text-aligned speech tokenizers, achieving higher ac- curacy in predicting S3 units. While this advantage does not manifest directly in quality metrics, it is reflected in improved prosody performance and a narrower gap to the S3 units’ topline. Ethical Considerations and Potential Risks. While TASLA focuses on text-aligned speech to- kenization rather than speech generation, discrete speech representations could be misused for voice spoofing or cloning. All models and data used in this study are for research purposes only, and all datasets are public and released under academic licenses. We encourage responsible use and ad- herence to ethical standards when reproducing or extending this work. We used AI assistants for grammar polishing and document formatting only. All experimental design, implementation, and data analysis were manually verified by the authors. References Siddhant Arora, Kai-Wei Chang, Chung-Ming Chien, Yifan Peng, Haibin Wu, Yossi Adi, Emmanuel Dupoux, Hung-Yi Lee, Karen Livescu, and Shinji Watanabe. 2025. On the landscape of spoken lan- guage models: A comprehensive survey. CoRR, abs/2504.08528. Kaito Baba, Wataru Nakata, Yuki Saito, and Hiroshi Saruwatari. 2024. The T05 system for the voice- mos challenge 2024: Transfer learning from deep image classifier to naturalness MOS prediction of high-quality synthetic speech. In SLT, pages 818– 824. IEEE. Arun Baby, Jeena J. Prakash, Aswin Shanmugam Subra- manian, and Hema A. Murthy. 2020. Significance of spectral cues in automatic speech segmentation for in- dian language speech synthesizers. Speech Commun., 123:10–25. Zalán Borsos, Raphaël Marinier, Damien Vincent, Eu- gene Kharitonov, Olivier Pietquin, Matthew Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. 2023. Audi- olm: A language modeling approach to audio genera- tion. IEEE ACM Trans. Audio Speech Lang. Process., 31:2523–2533. Sanyuan Chen, Shujie Liu, Long Zhou, Yanqing Liu, Xu Tan, Jinyu Li, Sheng Zhao, Yao Qian, and Furu Wei. 2024. VALL-E 2: Neural codec language mod- els are human parity zero-shot text to speech synthe- sizers. CoRR, abs/2406.05370. Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. 2023. High fidelity neural audio compres- sion. Trans. Mach. Learn. Res., 2023. Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, and Neil Zeghidour. 2024. Moshi: a speech- text foundation model for real-time dialogue. CoRR, abs/2410.00037. Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, Zhifu Gao, and Zhijie Yan. 2024. Cosyvoice: A scalable multilingual zero-shot text- to-speech synthesizer based on supervised semantic tokens. CoRR, abs/2407.05407. Sanchit Gandhi, Patrick von Platen, and Alexander M. Rush. 2023. Distil-whisper: Robust knowledge dis- tillation via large-scale pseudo labelling. CoRR, abs/2311.00430. Yiwei Guo, Zhihan Li, Hankun Wang, Bohan Li, Chongtian Shao, Hanglei Zhang, Chenpeng Du, Xie Chen, Shujie Liu, and Kai Yu. 2025. Recent ad- vances in discrete speech tokens: A review. CoRR, abs/2502.06490. Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexan- dre Défossez, Gabriel Synnaeve, Emmanuel Dupoux, Roy Schwartz, and Yossi Adi. 2023. Textually pre- trained speech language models. In NeurIPS. Jong Wook Kim, Justin Salamon, Peter Li, and Juan Pablo Bello. 2018. Crepe: A convolutional rep- resentation for pitch estimation. In ICASSP, pages 161–165. IEEE. KimiTeam, Ding Ding, Zeqian Ju, Yichong Leng, Songxiang Liu, Tong Liu, Zeyu Shang, Kai Shen, Wei Song, Xu Tan, Heyi Tang, Zhengtao Wang, Chu Wei, Yifei Xin, Xinran Xu, Jianwei Yu, Yutao Zhang, Xinyu Zhou, Y. Charles, and 21 others. 2025. Kimi- audio technical report. CoRR, abs/2504.18425. Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, and Kundan Kumar. 2023. High- fidelity audio compression with improved RVQGAN. In NeurIPS. Jiaqi Li, Xiaolong Lin, Zhekai Li, Shixi Huang, Yuancheng Wang, Chaoren Wang, Zhenpeng Zhan, and Zhizheng Wu. 2025. Dualcodec: A low-frame- rate, semantically-enhanced neural audio codec for speech generation. CoRR, abs/2505.13000. Haohe Liu, Xuenan Xu, Yi Yuan, Mengyue Wu, Wenwu Wang, and Mark D. Plumbley. 2024. Semanticodec: An ultra low bitrate semantic audio codec for general sound. IEEE J. Sel. Top. Signal Process., 18(8):1448– 1461. Rao Ma, Mengjie Qian, Yassir Fathullah, Siyuan Tang, Mark J. F. Gales, and Kate M. Knill. 2025. Cross- lingual transfer learning for speech translation. In 9 NAACL (Short Papers), pages 33–43. Association for Computational Linguistics. Fabian Mentzer, David Minnen, Eirikur Agustsson, and Michael Tschannen. 2024. Finite scalar quantization: VQ-VAE made simple. In ICLR. OpenReview.net. Arsha Nagrani, Joon Son Chung, and Andrew Zisser- man. 2017. Voxceleb: A large-scale speaker identifi- cation dataset. In INTERSPEECH, pages 2616–2620. ISCA. Tu Anh Nguyen, Wei-Ning Hsu, Antony D’Avirro, Bowen Shi, Itai Gat, Maryam Fazel-Zarandi, Tal Re- mez, Jade Copet, Gabriel Synnaeve, Michael Has- sid, Felix Kreuk, Yossi Adi, and Emmanuel Dupoux. 2023. Expresso: A benchmark and analysis of discrete expressive speech resynthesis. In INTER- SPEECH, pages 4823–4827. ISCA. Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R. Costa-jussà, Maha Elbayad, Sravya Popuri, Paul- Ambroise Duquenne, Robin Algayres, Ruslan Mav- lyutov, Itai Gat, Gabriel Synnaeve, Juan Pino, Benoît Sagot, and Emmanuel Dupoux. 2024a. Spirit-lm: In- terleaved spoken and written language model. CoRR, abs/2402.05755. Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R. Costa-jussà, Maha Elbayad, Sravya Popuri, Paul- Ambroise Duquenne, Robin Algayres, Ruslan Mav- lyutov, Itai Gat, Gabriel Synnaeve, Juan Pino, Benoît Sagot, and Emmanuel Dupoux. 2024b. Spirit-lm: In- terleaved spoken and written language model. CoRR, abs/2402.05755. Vassil Panayotov, Guoguo Chen, Daniel Povey, and San- jeev Khudanpur. 2015. Librispeech: An ASR corpus based on public domain audio books. In ICASSP, pages 5206–5210. IEEE. Julian D. Parker, Anton Smirnov, Jordi Pons, CJ Carr, Zack Zukowski, Zach Evans, and Xubo Liu. 2025. Scaling transformers for low-bitrate high-quality speech coding. In ICLR. OpenReview.net. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brock- man, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak su- pervision. In ICML, volume 202 of Proceedings of Machine Learning Research, pages 28492–28518. PMLR. Sambal Shikhar, Mohammed Irfan Kurpath, Sahal Shaji Mullappilly, Jean Lahoud, Fahad Shahbaz Khan, Rao Muhammad Anwer, Salman H. Khan, and Hisham Cholakkal. 2025. Llmvox: Autoregressive streaming text-to-speech model for any LLM. In ACL (Findings), pages 20481–20493. Association for Computational Linguistics. Ryan Soh-Eun Shim, Domenico De Cristofaro, Chengzhi Martin Hu, Alessandro Vietti, and Bar- bara Plank. 2025. Languages in multilingual speech foundation models align both phonetically and se- mantically. CoRR, abs/2505.19606. Liang-Hsuan Tseng, Yi-Chang Chen, Kuan-Yi Lee, Da- Shan Shiu, and Hung-yi Lee. 2025. TASTE: text- aligned speech tokenization and embedding for spo- ken language modeling. CoRR, abs/2504.07053. Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Neural codec language models are zero-shot text to speech synthesizers. CoRR, abs/2301.02111. Detai Xin, Xu Tan, Shinnosuke Takamichi, and Hi- roshi Saruwatari. 2024. Bigcodec: Pushing the limits of low-bitrate neural speech codec. CoRR, abs/2409.05377. Nima Yazdani, Ali Ansari, Aruj Mahajan, Amirhos- sein Afsharrad, and Seyed Shahabeddin Mousavi. 2025. Evaluating speech-to-text x LLM x text- to-speech combinations for AI interview systems. CoRR, abs/2507.16835. Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu. 2023. Speechtokenizer: Unified speech tokenizer for speech large language models. CoRR, abs/2308.16692. 10 Appendix A Baselines and Implementation Details Scope and selection. Our study targets text- aligned speech tokenization suitable for joint speech–text modeling at low bitrates and short se- quences. Because only TASTE is directly text- aligned at word length, we include it and com- pare against state-of-the-art neural codecs that ei- ther operate near, or can be adjusted to, a similar low-bitrate regime (via retaining fewer RVQ lay- ers and/or applying temporal striding) to ensure a fair comparison. We also report two ablations (S3 topline and text-only). The baselines used through- out the paper are: EnCodec, Mimi, SpeechTok- enizer, DualCodec, BigCodec, and TASTE, plus the S3 topline and our text-only baseline. Common evaluation wrapper. All baselines are run under a unified path-based pipeline: mono float32 waveforms are loaded from disk, resam- pled to each model’s native sampling rate, encoded into discrete codes/units, and decoded back to waveforms. For RVQ/stacked-codebook models, we probe multiple operating points by (i) keeping only the first k quantizer layers and/or (ii) applying temporal sub-sampling, to bring effective rates into a comparable low-bitrate regime for joint model- ing. EnCodec (Défossez et al., 2023). A convolu- tional encoder–decoder with RVQ, widely adopted for high-fidelity neural audio compression. We use the official Transformers implementation at 24 kHz and evaluate standard bandwidths (in- cluding 1.5 kbps) via the model’s built-in en- code/decode APIs. In our tables, we report the con- figured bandwidth along with token/frame statistics measured from emitted code streams. Mimi (Défossez et al., 2024). A modern low- latency neural codec used in speech–text founda- tion models. We employ the kyutai/mimi Trans- formers port at 24 kHz; encoding yields discrete audio_codes and decoding reconstructs wave- forms. Frame/token statistics are computed from code length per second; when codebook details are not exposed, we also report the bandwidth indi- cated by the model card. SpeechTokenizer (Zhang et al., 2023). A uni- fied discrete tokenizer designed for speech LMs (multi-stage VQ with LM-friendly objectives). We load the official package/config and use the pro- vided encode/decode APIs. To reach low-bitrate operating points without retraining, we retain only the first k RVQ layers and optionally apply a time stride; we report the resulting measured to- ken/frame statistics under these settings. DualCodec (Li et al., 2025). A dual-branch low- rate codec (semantic + acoustic streams) that bal- ances intelligibility and fidelity. Using the official API at 24 kHz, we encode both streams and decode with decode(semantic, acoustic); reported to- ken/frame statistics aggregate both streams. BigCodec (Xin et al., 2024). A capacity-scaled neural codec targeting very low bitrates with strong reconstruction quality. We call the official repos- itory’s encoder/decoder at 16 kHz, retaining ex- posed VQ indices (vq_code) for rate accounting; any block-size padding mandated by the model is trimmed post-decoding. TASTE (Tseng et al., 2025). A speech tokenizer explicitly designed for joint modeling: text tokens query a frozen speech encoder (keys from the last layer; values from selected shallow layers), pro- ducing text-length representations that are then dis- cretized before unit decoding and vocoding. We use TASTE as the canonical text-aligned baseline in our low-rate regime. Ablations used in this paper. S3 Topline: re- constructs speech directly from ground-truth S3 units with the same unit-to-speech vocoder stack; it serves as an upper-bound reference at word-level frame rates. Text-only baseline: predicts S3 to- kens from text alone (no speech tokens) to quantify the value of speech tokenization under the same decoder/vocoder. Both are reported in the main results and ablations. Reproducibility notes. For each baseline, we use official or widely adopted checkpoints and libraries (Transformers or the authors’ packages), adhere to their documented sampling rates/hop sizes, and expose only non-retraining knobs (layer keep, time stride) when needed to reach the shared low-bitrate regime. All models are run under the same I/O wrapper described above. B Training Details We train TASLA on LIBRISPEECH (train-clean- 100/360 and train-other-500) and validate/test on dev-clean/dev-other and test-clean/test-other. 11 Training uses dynamic batching with a 2000-frame budget and gradient accumulation of 2, distributed across 4 Nvidia A800 GPUs. The model is ini- tialized from a text-only baseline and employs an FSQ audio quantizer (codebook dimension = 64, codebook size = 8). We optimize with Adam at a learning rate of 1.6 × 10−4 using a warmup sched- uler with 5k warmup steps and gradient clipping of 5, for up to 5 epochs. We evaluate and save every 2000 steps and select the best weights by development-set accuracy. The random seed is fixed to 1986 for random, NumPy, and PyTorch. C Evaluation Metrics C.1 Quality Metrics Word Error Rate (WER) ↓. WER is a standard metric for evaluating speech recognition quality, defined as the minimum edit distance between the hypothesis and reference transcripts, normalized by the reference length. Specifically, it computes the total number of substitutions (S), insertions (I), and deletions (D) required to transform the hypoth- esis into the reference, divided by the number of words in the reference (N): WER = S + D + I N . A lower WER indicates higher transcription accu- racy and better preservation of linguistic content. We use Whisper-large-v3 (Radford et al., 2023) for transcription and apply standard text normalization before computing WER. This metric reflects intel- ligibility and alignment with the reference text. UTMOS ↑. UTMOS (Baba et al., 2024) is a non-intrusive, neural MOS predictor that estimates perceived naturalness and quality directly from the waveform. Unlike WER, which focuses on content accuracy, UTMOS approximates human subjective ratings of audio quality, providing a complemen- tary measure of perceptual fidelity. Speaker Similarity ↑. Speaker similarity is quantified as the cosine similarity between speaker embeddings extracted from reference and recon- structed audio. We use CosyVoice (Du et al., 2024) to obtain embeddings. This metric evalu- ates whether speaker identity is preserved in the reconstruction. C.2 Prosody Metrics We report only the prosody metrics used in our ta- bles. Unless noted, audio is resampled to 16 kHz. Pairwise statistics are computed on a shared, DTW- aligned time axis: we extract MFCCs from ref- erence/hypothesis, obtain a DTW path, and time- warp hypothesis-side prosody features to the refer- ence timeline. Let aligned F0 (Hz) be fHz ref , fHz hyp ∈ RL with voiced masks vref, vhyp ∈{0, 1}L. De- fine the jointly voiced mask b = vref ∧vhyp and nv = P b. Frame RMS sequences are ρref, ρhyp; their dB versions use E(·) = 20 log10(max(·, ϵ)). F0 Pearson Correlation (F0-PCC) ↑. Assesses agreement of pitch contours (intonation dynamics) while ignoring absolute offsets. Computed as the Pearson correlation between fHz ref and fHz hyp over jointly voiced frames bt=1. Voicing Decision Error (VDE) ↓. Measures consistency of voiced/unvoiced decisions (e.g., missed onsets, spurious voicing). Computed as the framewise disagreement rate VDE = 1 L L X t=1 ⊮[vref(t) ̸= vhyp(t)] . Gross Pitch Error (GPE) ↓. Counts large pitch mistakes (octave jumps, severe tracking errors) on voiced frames. Let r(t) = |fHz hyp(t)−fHz ref (t)| max(fHz ref (t),ϵ) on bt=1 and g(t) = ⊮[r(t) > 0.2]. Report GPE = 1 nv P t:bt=1 g(t). Energy RMSE (dB) ↓. Captures the mismatch of the loudness envelope (stress/emphasis patterns). Convert frame RMS to dB and compute either RMSE or MSE in the dB domain: RMSEdB = s 1 L X t (E(ρref(t)) −E(ρhyp(t)))2. Energy Pearson Correlation (Energy PCC) ↑. Evaluates agreement of dynamic energy contours independent of absolute gain. Compute the Pearson correlation between E(ρref) and E(ρhyp) on the aligned timeline. Phrase Shape via Legendre Coefficients (Coeff-L2 ↓, Coeff-cos↑). Summarizes phrase-level intonation (rise/fall shapes) with low-dimensional, noise-robust descriptors. Con- vert F0 to semitones s(t) = 12 log2 fHz(t)/55  , interpolate NaNs on voiced islands, and fit degree-d Legendre polynomials on normalized time x ∈[−1, 1] to obtain coefficient vectors cref, chyp. Report Coeff-L2 = ∥cref −chyp∥2, 12 Coeff-cos = c⊤ refchyp ∥cref∥∥chyp∥. Implementation notes. F0 uses CREPE (Kim et al., 2018); energy uses frame RMS; all pairwise metrics share the same DTW-aligned time base to avoid length/misalignment artifacts. Degree 3 for Legendre fits is fixed across systems/datasets for fair comparison. D Visualization of Dynamic Weights We select four examples from EXPRESSO and present their Mel bins, spectral flux, and layer weights (probabilities in the figure). As shown in Figure 4, w3 remains consistently small, while w0, w1, and w2 dominate most frames. E Dataset We select three dataset for evaluation and training. LIBRISPEECH. LIBRISPEECH (Panayotov et al., 2015) is a widely used English corpus containing 585 hours of read speech from 2,458 speakers. We use the training and development subsets during training and report evaluation results on the test subset. EXPRESSO. EXPRESSO (Nguyen et al., 2023) is an expressive English speech corpus designed to capture a wide range of emotions and prosodic patterns. We adopt it for out-of-domain evaluation to assess whether our method preserves emotional cues and prosodic richness. VOXCELEB. VOXCELEB (Nagrani et al., 2017) is a large-scale, text-independent speaker identifica- tion dataset collected from unconstrained YouTube videos. Since transcripts are not provided, we do not calculate WER for this dataset. We include VOXCELEB as it provides challenging scenarios with varied speakers, spontaneous speaking styles, and background noise, making it suitable for testing robustness and naturalness in speech reconstruc- tion. All datasets used in this work are publicly re- leased for academic research (LIBRISPEECH under CC BY 4.0, VOXCELEB under YouTube TOS, EX- PRESSO under research license). F S3 Units We use S3 units as the discrete supervision and the reconstruction target in our pipeline. Concretely, an external unit recognizer from the CosyVoice (Du et al., 2024) stack converts a waveform into a one- dimensional sequence of discrete unit IDs s = [s1, . . . , sM], st ∈{1, . . . , |VS3|}, at roughly word- /syllable-level time resolution. These units carry high-level linguistic content along with coarse prosodic cues and are paired with a unit-to-speech vocoder that maps s back to a waveform. In TASLA, the unit decoder consumes the text-aligned speech tokens zq and predicts the S3 sequence autoregressively (cross-entropy objective), after which the CosyVoice unit-to-speech vocoder re- constructs audio. We also report an S3 topline that bypasses tokenization by feeding ground-truth S3 units to the same vocoder, providing an informative upper bound under the same unitized reconstruc- tion stack. G Shallow Layer Abaltion For the shallow-layer ablation study, we select lay- ers 3, 6, 9, and 32 and evaluate the model on LIBRISPEECH. As shown in Table 3, the shallow- layer ablation achieves performance comparable to TASLA on the PCC (contour) metrics, but it performs significantly worse on Energy RMSE. 13 Figure 4: Layer-wise Weight Dynamics on EXPRESSO. Four examples from EXPRESSO showing Mel spectro- grams, spectral flux, and dynamic layer weights (w0–w3). w3 remains consistently small, while w0, w1, and w2 dominate across most frames. Model Ene. RMSE ↓ F0-PCC ↑ Phr. Cos. ↑ Ene. PCC ↑ UTMOS ↑ WER ↓ S3 Topline 6.94 0.91 0.92 0.95 3.40 0.04 Text-only Baseline 10.08 0.33 0.89 0.81 3.51 0.23 TASTE 8.63 0.80 0.91 0.88 3.42 0.10 TASLA 6.97 0.87 0.90 0.92 3.43 0.12 TASLA (Shallow Layer) 7.65 0.86 0.91 0.92 3.44 0.10 Table 3: Experimental Results for Shallow Layer Ablation. 14
TASLA: Text-Aligned Speech Tokens with Multiple Layer-Aggregation Ming-Hao Hsu1 Liang-Hsuan Tseng2 Hung-yi Lee2 Zhizheng Wu1 1The Chinese 2National Taiwan University , , Abstract We propose Text-Aligned Speech Tokens with Multiple Layer-Aggregation (TASLA), which is a text-aligned speech tokenization framework that aims to address the problem that under a low-frame-rate and text-aligned regime, single-source speech tokens may lose acoustic details during reconstruction. On the other hand, this paper further explains how different encoder layers collaborate to capture comprehensive acoustic features for tokenization. Previous work, TASTE, proposed the text-aligned speech tokenization framework, which is a LMfriendly architecture, but struggles to capture acoustic details. We address this trade-off with two components: Multi-Layer Dynamic Attention (MLDA), which lets each text position adaptively mix shallow/deep features from a frozen speech encoder, and Finite Scalar Quantization (FSQ), a simple per-dimension discretization with smooth optimization. At about 2.62 Hz (tokens/s), TASLA consistently improves prosody and achieves competitive quality over TASTE on in-domain (LIBRISPEECH) and OOD (EXPRESSO, VOXCELEB) sets. We further demonstrate that dynamic layer mixing is correlated with spectral flux and explains why MLDA preserves prosody under a low frame rate with extreme feature compression. 1 Introduction Large language models (LLMs) have recently transformed text understanding and generation. To pursue a more natural interaction mode with LMs, researchers have started to research Spoken Language Models (SLM) (Chen et al., 2024; Tseng et al., 2025; KimiTeam et al., 2025; Hassid et al., 2023; Nguyen et al., 2024a), where models process and generate speech as a first-class modality. If we want to use voice mode to interact with the LMs, one option is a cascade pipeline that first generates text with an LM and then converts text to speech using a TTS model (Yazdani et al., 2025; Shikhar et al., 2025). In contrast, SLM offers a better approach, as it captures acoustic details from context and generates speech that more naturally aligns with the context (Arora et al., 2025; Guo et al., 2025). The key to enabling SLMs to have such an ability is speech tokenization. Speech tokenization involves mapping continuous waveforms into discrete tokens that are compact, learnable, and acoustically informative, enabling a vocoder to reconstruct speech. However, there are two practical problems with joint modeling of text tokens and speech tokens for an SLM. The first problem is the trade-off between acoustic richness and LM-friendliness. Neural speech tokenizers that generate more tokens per second could provide fine-grained acoustic details since they use a shorter time frame for speech reconstruction, and these speech tokenizers often capture only acoustic details without text information (Liu et al., 2024). Therefore, they could generate very high-quality speech, but their speech token sequence is often very long (Borsos et al., 2023; Wang et al., 2023), which is not suitable for transformer-based architectures if we want to generate longer speech. The second problem is the speech token sequence length mismatch with the text sequence length. Standard codecs operate at fixed frame rates that yield token streams far longer than text (e.g., ∼12.5-50 speech Hz vs. ∼2-3 text Hz), making joint speech-text modeling harder (Défossez et al., 2023; Zhang et al., 2023). Making the LMs need to learn two modalities separately, which also increases the difficulty of the training process. However, aggressive sequence compression helps alignment but risks losing prosodic detail. Prior work addresses the mismatch by aligning speech tokens to text via an attention mechanism during tokenization, enabling straightforward joint modeling at very low bitrates (Tseng et al., 2025). However, under strong compression of the to1 16 Oct 2025 ken length, it still shows limitations in preserving fine-grained acoustics and in out-of-domain generalization. To overcome these problems and find a balance between bitrate and token sequence length. We propose TASLA, a text-aligned speech tokenization framework that preserves acoustic detail at text length. TASLA introduces Multi-Layer Dynamic Attention (MLDA), which enables text tokens to query a frozen speech encoder and adaptively combine shallow and deep representations, allowing each text position to gather the most predictive acoustic evidence for content, prosody, and speaker cues. To discretize reliably, we replace Residual Vector Quantization (RVQ) with Finite Scalar Quantization (FSQ) (Mentzer et al., 2024), which provides smooth optimization and resilience to codebook pathologies. This work shows that dynamic, per-token fusion across encoder depth is the missing piece for textaligned speech tokens: MLDA lets each word position select the most predictive mixture of shallow and deep features, thereby retaining prosody and speaker cues at a text-length rate. Under rate parity with prior text-aligned tokenizers, MLDA is consistently comparable in quality and outperforms on prosody metrics in both in-domain and OOD settings, and yields better spoken continuation than strong text-only and speech-token baselines, demonstrating that adaptive layer mixing effectively narrows the gap between alignment convenience and acoustic expressivity. 2 Related Work Neural Audio Codecs. CNN-based neural codecs typically use RVQ and multi-scale spectral/adversarial losses. They often excel at highfidelity reconstruction and preserve prosodic cues. EnCodec (Défossez et al., 2023) and DAC (Kumar et al., 2023) are canonical convolutional codecs with RVQ, which have strong perceptual quality and real-time operation, but are not designed for text alignment. Then, BigCodec (Xin et al., 2024) and TAAE (Parker et al., 2025) achieve extremely high quality at extremely low bitrates using a convolution codec architecture. However, they are still fixed-rate and not LM-friendly for speech-text joint modeling. Therefore, Mimi (Défossez et al., 2024) deliberately reduces token rates to be more LM-friendly, which is a useful bridge from pure reconstruction toward joint modeling. In contrast, our method goes beyond purely acoustic reconstruction by jointly modeling speech tokens in both acoustic and semantic information. Semantic-Acoustic Joint Modeling. To incorporate linguistic and semantic information and enable unified modeling, recent work blends semantic and acoustic representations. SpeechTokenizer (Zhang et al., 2023) distills semantics into upper RVQ codebooks using SSL features, yielding tokens that carry both semantic and acoustic content, but its sequences remain relatively long and not text-aligned. DualCodec (Li et al., 2025) uses dual branches, semantic and acoustic, to balance intelligibility and fidelity, with lower rates but still not explicitly aligned to text. Spirit-LM (Nguyen et al., 2024b) interleaves speech and text tokens for joint modeling; effective but depends on extra interleaving rules rather than addressing alignment at tokenization. These methods add semantics or interleaving strategies, yet do not solve alignment at the tokenization stage, leaving training and inference complexity and potential prosody loss. Our method differs by enforcing speech-text length alignment during tokenization, while still preserving both acoustic and semantic information. Text-Aligned Speech Tokens. Text-aligned speech tokens use text positions as queries over speech representations so that tokenization itself aligns speech tokens to the text length. Sequences become LM-friendly while retaining prosodic and paralinguistic cues. Because of alignment, the token rate is about 1/20-1/5 of traditional tokenizers. TASTE (Tseng et al., 2025) introduces the text-aligned paradigm, where text-query cross-attention over speech encoder features produces time-variant, text-length-matched tokens, thereby eliminating speech-text length mismatch and enabling straightforward SLM training. However, the extreme compression of the bitrate and token rate results in the loss of acoustic details. Unlike prior text-aligned approaches, our method maintains the extremely low frame rate while preserving richer acoustic detail in reconstruction. 3 Preliminaries 3.1 Speech-Text Gap Speech and text provide two complementary but structurally different views of language. Let a spoken utterance be denoted as a continuous waveform u ∈RT , where T is the number of acoustic frames. On the other hand, let the corresponding transcript 2 Figure 1: TASLA overview. Text tokens (Q) cross-attend to the last-layer encoder states (K) and to a dynamically mixed value stream (V ) formed by per-frame weights over layers 8/16/24/32 (MLDA). The aggregated features (z) are discretized by FSQ (d = 64, L = 8) and fed to a unit decoder that predicts S3 units; a vocoder reconstructs waveforms. Training minimizes next-unit cross-entropy (LCE) and an FSQ reconstruction loss (Lrec). be a discrete token sequence v = [v1, . . . , vN] of length N. The asymmetry between |T| and |N| is striking: while text unfolds sparsely with clear boundaries, speech evolves densely, carrying layered information including acoustic details and prosody. This discrepancy yields the length mismatch problem that most speech tokenizations produce sequences far longer than text, making it hard to jointly model text and speech. Prior remedies, such as interleaving speech and text tokens or introducing alignment heuristics, reduce the mismatch but do not directly resolve the modality gap, and often sacrifice prosodic fidelity. 3.2 Text-Aligned Speech Tokens To address this, TASTE (Tseng et al., 2025) proposes to construct speech tokens that are explicitly aligned with their text counterpart, which is called text-aligned speech tokens. Formally, in textaligned speech token architectures, each time a text token is generated, a corresponding speech token is also generated, and the two token sequences are identical in length and order. In practice, TASTE uses distilled Whisper-large-v3's (Gandhi et al., 2023) encoder part as its encoder, which is a 32-layer speech encoder. Then, given the encoder's hidden states, {h(1), . . . , h(L)} where each h(l) ∈RT×dh, text-aligned speech tokens aim to produce a compressed representation z ∈RN×dz whose length matches that of the text sequence. A cross-attention mechanism achieves this by letting the text embeddings E(v) ∈RN×dq act as queries, the final-layer states h(L) provide keys, and selected shallow layer h(l) provide values. The aggregated representation can thus be written as z = Attn(E(v), h(L), h(l)), This formulation aligns speech and text sequences at the token level, enabling straightforward joint modeling without heuristic synchronization. After generating text tokens v and speech tokens ˆz, we sum them and feed the result into a speech unit decoder, which autoregressively predicts the S3 units ˆs (see Section F for details). These units are then passed to the CosyVoice unit-to-speech vocoder for waveform reconstruction. 3.3 Quantization for Efficiency Even with alignment, the representation z remains continuous and high-dimensional, which hinders efficiency and symbolic control. Discretization is therefore essential, which maps each vector zi into a compact set of discrete codes, allowing symbolic modeling akin to text tokens. Traditional approaches such as Residual Vector Quantization (RVQ) express each token as a sum of codebook vectors, zi ≈ R X r=1 q(r) i , q(r) i ∈Cr, where R is the number of quantizer stages and Cr is the r-th codebook. Although RVQ provides fine3 grained reconstruction, it often requires large codebooks and multiple stages, which inflate bitrate and introduce redundancy. This motivates alternative quantization strategies that achieve comparable expressiveness with fewer parameters, lower bitrate, and better preservation of prosodic cues. 3.4 Bitrate and Token Rate We distinguish two quantities: the frame rate Rtok (Hz) and the bitrate b (bits/s). The token rate measures how many discrete speech tokens are produced per second for reconstruction, while the bitrate measures how many bits per second are actually needed to encode those tokens. RVQ-based codecs. For residual vector quantization (RVQ) with R quantizers and a codebook size K per layer, each token consumes R log2 K, and the bitrate is b = Rtok · R log2 K. Intuitively, increasing either the token frequency Rtok or the per-token code payload R log2 K raises the bitrate. This RVQ counting is standard in neural codecs/speech tokenizers that output layered code indices. FSQ-based tokenizers. For finite scalar quantization (FSQ) with d scalar dimensions and L uniform levels per dimension, each token carries d log2 L bits,and therefore b = Rtok · d log2 L. Compared to RVQ, FSQ trades the number of residual codebooks for axis-aligned scalar bins. Its bitrate still scales linearly in Rtok but now with the per-token payload d log2 L. Bitrate and Frame Rate Calculation for TextAligned Speech Tokens. Since the text-aligned speech token sequence length is aligned to the text token length, it is not a fixed token rate tokenizer. Therefore, we use the LIBRISPEECH test set to estimate the frame rate and calculate the bitrate. In the LIBRISPEECH test set, there are about 19,805.2 seconds and a total of 51,903 text tokens; therefore, the average frame rate for text-aligned speech tokens is about 2.62. 4 Methodology Our goal is to design a text-aligned speech tokenization framework that captures richer acoustic and prosodic cues while remaining simple to train and easy to control. Concretely, we (i) aggregate multilayer encoder features with Multi-Layer Dynamic Attention (MLDA) to compress frame-level speech into text-length representations and (ii) discretize these representations by Finite Scalar Quantization (FSQ) to control bitrate with minimal parameters. Finally, we train the whole system with a lightweight objective that combines a unit-level cross-entropy and a masked reconstruction loss. 4.1 Multi-Layer Dynamic Attention We turn frame-level speech features into a textlength sequence by letting text tokens query the speech encoder's hidden states. The keys come from the last encoder layer, while the values come from shallower layers. An MLP produces perlayer mixture weights so the model can adapt each layer's mix ratio for each frame. We first pass the input speech u through a frozen speech encoder to obtain layerwise hidden states {h(1), . . . , h(L)}, where each h(l) ∈RT×dh is a frame-level sequence, T is acoustic length, and dh is hidden size. To align speech with text, we employ a crossattention mechanism that compresses the speech frames into the same length as the text tokens. Specifically, the text tokens serve as queries Q, while the last-layer representation h(L) provides the keys K. For the values, we select one or more shallow layers {h(ls)}S s=1, denoted as {Vl}, where each Vl ∈RT×dv. To dynamically aggregate these value sources, we introduce an MLP that predicts layer-wise mixture weights. Given the last hidden state hlast, the MLP produces normalized weights as: w = softmax(MLP(hlast)) , w ∈R|V|×T . After obtaining the weights, we use a multilayer dynamic fuser to fuse these value sources. The aggregated value sequence is then computed framewise as: ̃Vi = X l∈V wl,i Vl,i, ̃V ∈RT×dv, where V denotes the set of selected layers and i is the frame index. Let E(·) be a token embedding function. We embed the text sequence v into queries Q = E(v) ∈RN×dq, where N is the text length and dq is the query dimension. Finally, we compute the cross-attention between text tokens and aggregated speech tokens represented as: z = Attn(Q, K, ̃V ) = softmax QK⊤ √dk ̃V , where K ∈RT×dk and z ∈RN×dv. 4 4.2 Finite Scalar Quantization (FSQ) Given a per-token vector x ∈RD, we map it to a d-dim latent (via a linear encoder) and back to RD after quantization (via a linear decoder). FSQ operates independently on each of the d latent dimensions as follows. First, apply a learnable perdimension affine and temperature-scaled squashing: ̃u = u ⊙s + b, ̄u = tanh ̃u τ ∈[-1, 1]d. With L uniform scalar levels on [-1, 1], G = {-1 + 2k L-1}L-1 k=0 , the quantization index and value for each dimension j are ij = clip round ̄uj + 1 2 (L -1) , 0, L -1 qj = -1 + 2 ij L -1 We use the straight-through estimator to form the quantized latent zq = ̄u + (q - ̄u)sg, where (·)sg stops gradients. The decoder maps zq back to the feature space; training minimizes MSE on valid frames (with masking if sequences are padded). In our experiments, we select the d = 64 and L = 8 for a balance of bitrate and performance. 4.3 Training Objective Cross Entropy Loss. After we get the text token v and speech token ˆz, we uses a transformer-based unit decoder to autoregressively decode the text and speech token to speech units and then use the CosyVoice speech vocoder to convert it into speech. Therefore, we calculate the cross-entropy loss of the target speech units. When a target speech unit sequence y is available, a unit decoder consumes zq and predicts y autoregressively with the usual next-token cross-entropy: LCE = 1 |y| X t -log pθ yt y 0.2]. Report GPE = 1 nv P t:bt=1 g(t). Energy RMSE (dB) ↓. Captures the mismatch of the loudness envelope (stress/emphasis patterns). Convert frame RMS to dB and compute either RMSE or MSE in the dB domain: RMSEdB = s 1 L X t (E(ρref(t)) -E(ρhyp(t)))2. Energy Pearson Correlation (Energy PCC) ↑. Evaluates agreement of dynamic energy contours independent of absolute gain. Compute the Pearson correlation between E(ρref) and E(ρhyp) on the aligned timeline. Phrase Shape via Legendre Coefficients (Coeff-L2 ↓, Coeff-cos↑). Summarizes phrase-level intonation (rise/fall shapes) with low-dimensional, noise-robust descriptors. Convert F0 to semitones s(t) = 12 log2 fHz(t)/55 , interpolate NaNs on voiced islands, and fit degree-d Legendre polynomials on normalized time x ∈[-1, 1] to obtain coefficient vectors cref, chyp. Report Coeff-L2 = ∥cref -chyp∥2, 12 Coeff-cos = c⊤ refchyp ∥cref∥∥chyp∥. Implementation notes. F0 uses CREPE (Kim et al., 2018); energy uses frame RMS; all pairwise metrics share the same DTW-aligned time base to avoid length/misalignment artifacts. Degree 3 for Legendre fits is fixed across systems/datasets for fair comparison. D Visualization of Dynamic Weights We select four examples from EXPRESSO and present their Mel bins, spectral flux, and layer weights (probabilities in the figure). As shown in Figure 4, w3 remains consistently small, while w0, w1, and w2 dominate most frames. E Dataset We select three dataset for evaluation and training. LIBRISPEECH. LIBRISPEECH (Panayotov et al., 2015) is a widely used English corpus containing 585 hours of read speech from 2,458 speakers. We use the training and development subsets during training and report evaluation results on the test subset. EXPRESSO. EXPRESSO (Nguyen et al., 2023) is an expressive English speech corpus designed to capture a wide range of emotions and prosodic patterns. We adopt it for out-of-domain evaluation to assess whether our method preserves emotional cues and prosodic richness. VOXCELEB. VOXCELEB (Nagrani et al., 2017) is a large-scale, text-independent speaker identification dataset collected from unconstrained YouTube videos. Since transcripts are not provided, we do not calculate WER for this dataset. We include VOXCELEB as it provides challenging scenarios with varied speakers, spontaneous speaking styles, and background noise, making it suitable for testing robustness and naturalness in speech reconstruction. All datasets used in this work are publicly released for academic research (LIBRISPEECH under CC BY 4.0, VOXCELEB under YouTube TOS, EXPRESSO under research license). F S3 Units We use S3 units as the discrete supervision and the reconstruction target in our pipeline. Concretely, an external unit recognizer from the CosyVoice (Du et al., 2024) stack converts a waveform into a onedimensional sequence of discrete unit IDs s = [s1, . . . , sM], st ∈{1, . . . , |VS3|}, at roughly word- /syllable-level time resolution. These units carry high-level linguistic content along with coarse prosodic cues and are paired with a unit-to-speech vocoder that maps s back to a waveform. In TASLA, the unit decoder consumes the text-aligned speech tokens zq and predicts the S3 sequence autoregressively (cross-entropy objective), after which the CosyVoice unit-to-speech vocoder reconstructs audio. We also report an S3 topline that bypasses tokenization by feeding ground-truth S3 units to the same vocoder, providing an informative upper bound under the same unitized reconstruction stack. G Shallow Layer Abaltion For the shallow-layer ablation study, we select layers 3, 6, 9, and 32 and evaluate the model on LIBRISPEECH. As shown in Table 3, the shallowlayer ablation achieves performance comparable to TASLA on the PCC (contour) metrics, but it performs significantly worse on Energy RMSE. 13 Figure 4: Layer-wise Weight Dynamics on EXPRESSO. Four examples from EXPRESSO showing Mel spectrograms, spectral flux, and dynamic layer weights (w0-w3). w3 remains consistently small, while w0, w1, and w2 dominate across most frames. Model Ene. RMSE ↓ F0-PCC ↑ Phr. Cos. ↑ Ene. PCC ↑ UTMOS ↑ WER ↓ S3 Topline 6.94 0.91 0.92 0.95 3.40 0.04 Text-only Baseline 10.08 0.33 0.89 0.81 3.51 0.23 TASTE 8.63 0.80 0.91 0.88 3.42 0.10 TASLA 6.97 0.87 0.90 0.92 3.43 0.12 TASLA (Shallow Layer) 7.65 0.86 0.91 0.92 3.44 0.10 Table 3: Experimental Results for Shallow Layer Ablation. 14
2510.14936
CIRCUIT INSIGHTS: TOWARDS INTERPRETABILITY BEYOND ACTIVATIONS Elena Golimblevskaia1, Aakriti Jain1, Bruno Puri1,2, Ammar Ibrahim1, Wojciech Samek1,2,3, Sebastian Lapuschkin1,4 1Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute 2Department of Electrical Engineering and Computer Science, Technische Universit¨at Berlin 3BIFOLD - Berlin Institute for the Foundations of Learning and Data 4Centre of eXplainable Artificial Intelligence, Technological University Dublin corresponding authors: {wojciech.samek,sebastian.lapuschkin}@hhi.fraunhofer.de ABSTRACT The fields of explainable AI and mechanistic interpretability aim to uncover the internal structure of neural networks, with circuit discovery as a central tool for understanding model computations. Existing approaches, however, rely on man- ual inspection and remain limited to toy tasks. Automated interpretability offers scalability by analyzing isolated features and their activations, but it often misses interactions between features and depends strongly on external LLMs and dataset quality. Transcoders have recently made it possible to separate feature attributions into input-dependent and input-invariant components, providing a foundation for more systematic circuit analysis. Building on this, we propose WeightLens1 and CircuitLens2, two complementary methods that go beyond activation-based anal- ysis. WeightLens interprets features directly from their learned weights, removing the need for explainer models or datasets while matching or exceeding the perfor- mance of existing methods on context-independent features. CircuitLens captures how feature activations arise from interactions between components, revealing circuit-level dynamics that activation-only approaches cannot identify. Together, these methods increase interpretability robustness and enhance scalable mecha- nistic analysis of circuits while maintaining efficiency and quality. 1 INTRODUCTION Large language models (LLMs) have seen rapid adoption in recent years, including in sensitive domains such as medical analysis (Singhal et al., 2023). Despite their remarkable capabilities, un- derstanding the internal mechanisms of these models is crucial for safe and reliable deployment, yet our knowledge in this area remains limited (Olah et al., 2018; Lapuschkin et al., 2019; Sharkey et al., 2025). Several methods have emerged in the fields of mechanistic interpretability and explainable AI to understand how models encode and utilize mechanisms that influence outputs (Olah et al., 2020; Achtibat et al., 2024; Dreyer et al., 2025). Much of the existing work focuses on circuit discovery, identifying subgraphs responsible for specific tasks (Conmy et al., 2023). However, these studies are mostly limited to toy tasks, and understanding the roles of individual neurons and attention heads still requires extensive manual analysis (Elhage et al., 2022; Wang et al., 2022; Bricken et al., 2023). Automated interpretability methods have been proposed to address these limitations. Initial work, such as Bills et al. (2023), leveraged larger LLMs to analyze activation patterns of MLP neurons and generate natural language descriptions. Although promising, these approaches face the funda- mental challenge of polysemanticity of MLP neurons, making them inherently difficult to interpret. This bottleneck prompted the development of Sparse Autoencoders (SAEs), which decompose acti- vations into more monosemantic features (Bricken et al., 2023), advancing scalable interpretability 1github.com/egolimblevskaia/WeightLens 2github.com/egolimblevskaia/CircuitLens 1 arXiv:2510.14936v1 [cs.LG] 16 Oct 2025 pipelines (Templeton et al., 2024; Paulo et al., 2025a). More recently, transcoders were introduced in Dunefsky et al. (2024) and Ge et al. (2024) as an alternative approach for extracting sparse features. Unlike SAEs, which reconstruct activations, transcoders sparsely approximate entire MLP layers, while maintaining a clear separation between input-dependent and weight-dependent contributions. This architecture enables efficient circuit discovery and provides direct attributions to other features, attention heads and vocabulary. Despite these advances in sparse feature space construction, automated interpretability remains heavily dependent on explainer LLMs, which shifts the solution to the black box problem to yet another black box LLM, resulting in notable safety risks which may produce unfaithful or unreli- able explanations (Lermen et al., 2025). Its effectiveness is influenced by the prompt, fine-tuning strategy, and the dataset used for generating explanations. Furthermore, sparse features can still be challenging to interpret (Puri et al., 2025), as they may activate on highly specific patterns that are not easily captured by analyzing activations alone, or may be polysemantic. In this work, we focus on automated interpretability grounded in model weights and circuit structure, making the following contributions: • We introduce WeightLens, a framework for interpreting models using only their weights and the weights of their transcoders, reducing dependence on both the underlying dataset and explainer LLMs. Descriptions obtained via WeightLens match or exceed activation-based descriptions for context-independent features. • We introduce CircuitLens, a framework for circuit-based analysis of feature activations, ex- tending interpretability to context-dependent features by (i) isolating input patterns trigger- ing feature activations and (ii) identifying which model outputs are influenced by specific features. Our approach uncovers complex patterns invisible to activation-only methods, addresses the large dataset requirements and explainer LLM dependence of autointerpretability pipelines (Choi et al., 2024; Puri et al., 2025). Additionally, it handles polysemanticity through circuit-based clustering and combining their interpretations into unified feature descriptions. 2 RELATED WORK Recently, a series of works has focused on building automated interpretability pipelines for language models (Choi et al., 2024; Paulo et al., 2025a; Puri et al., 2025; Gur-Arieh et al., 2025). Most approaches follow the framework introduced by Bills et al. (2023), which consists of running a large dataset through a model, collecting maximally activating samples for each neuron or SAE feature, applying various sampling strategies, and then passing the samples to a larger LLM to generate natural language descriptions. Several studies have refined this pipeline by focusing on prompt construction and description evalu- ation. For instance, Choi et al. (2024) and Puri et al. (2025) examine how factors such as the number of samples and the presentation of token activations affect description quality. Choi et al. (2024) further fine-tune an explainer model specifically to produce descriptions conditioned on a feature’s activations. Beyond input-based evidence, Gur-Arieh et al. (2025) incorporate output-side informa- tion, analyzing not only what inputs trigger a feature but also how that feature influences the model’s logits. These pipelines are applied to different representational units. Early work focuses on MLP neurons (Bills et al., 2023; Choi et al., 2024), while later studies extend them to SAE features, which are generally more interpretable and often monosemantic (Templeton et al., 2024; Gur-Arieh et al., 2025; Paulo et al., 2025a; Puri et al., 2025). As an alternative to SAEs, Dunefsky et al. (2024) and Ge et al. (2024) introduce transcoders, a sparse approximation of MLP layers that decomposes attributions into input-dependent and input-invariant components. Variants such as skip-transcoders (Paulo et al., 2025b) and cross-layer transcoders (CLTs) (Ameisen et al., 2025) have also been explored, demonstrating through qualitative and quan- titative analysis that transcoders match or exceed the interpretability gained through SAEs. Through case studies, Dunefsky et al. (2024) show that transcoder circuits can be used for interpreting a feature’s function, although based mainly on manual analysis. 2 The interpretability of transcoder weights is studied by Ameisen et al. (2025), who show that while some connections appear meaningful, interference is a major challenge. They propose Target- Weighted Expected Residual Attribution (TWERA), which averages attributions across a dataset. However, they find that TWERA weights often diverge substantially from the raw transcoder weights, making the method sensitive to the distribution of the evaluation dataset. Finally, Puri et al. (2025) highlight a persistent challenge: even though sparse features, specifically in SAEs, are generally more monosemantic than MLP neurons, they can be highly specific and activate only on certain patterns. This specificity makes them difficult to interpret; either the explainer LLM fails to identify the correct trigger, or the resulting description is too vague to be useful.3 3 METHODOLOGY As introduced by Dunefsky et al. (2024), given a transcoder structure, the contribution of transcoder feature i′ in transcoder layer l′ to feature i in layer l > l′ on token t can be expressed as: activation(l′,i′)[t] | {z } input-dependent (f (l′,i′) dec · f (l,i) enc ) | {z } input-invariant (1) where f (l,i) enc ∈Rdmodel denotes the i-th column of the encoder matrix W (l) enc ∈Rdmodel×dfeatures, and f (l′,i′) dec ∈Rdmodel denotes the i′-th row of the decoder matrix W (l′) dec ∈Rdfeatures×dmodel, where dfeatures is the dimension of the transcoder, dmodel is the dimension of the model, and dfeatures ≫dmodel. This formulation cleanly separates an input-dependent scalar activation from a fixed, input-invariant connectivity term between features across layers. 3.1 INPUT-INVARIANT ANALYSIS Figure 1: Process of layer- wise input-invariant analysis via WeightLens. The input-invariant connections from Equation (1) provide a use- ful foundation for interpretation of transcoder features, as demon- strated in the case studies of (Dunefsky et al., 2024). To build on this idea, we make the following complementary assumptions: Assumption 1: Input-invariant connections indicate meaningful structural relationships only if their magnitude significantly exceeds that of other connections, making them statistical outliers. Since many features are context-dependent, relying solely on weight-based analysis can produce misleading results. To address this, we introduce a validation step to determine whether a feature is truly token-dependent, i.e., whether it consistently activates on spe- cific tokens regardless of context. Formally, we state the following assumption: Assumption 2: If a token is strongly supported by input-invariant connections (weights) and semantically aligned with the concept encoded by the feature, then the feature should activate on this token regardless of context. We generate a feature description – a set of tokens associated with the feature – by processing the model layer-by-layer, starting from layer 0 as presented on Figure 1. For this we are performing the following steps: 1. Extract candidate tokens from vocabulary and previous layers features: Project the feature encoder vector fenc into the input vocabulary embedding space via the embedding matrix Wemb as Wemb · fenc, and identify candidate tokens as statistical outliers based on their z-scores (Barnett & Lewis, 1994), retaining only highly distinctive tokens. For each earlier layer l′ < l, compute W (l′) dec · f (l) enc, identifying top contributing (outlier) features using the same criterion, and inherit their token descriptions. 3neuronpedia.org/gemma-2-2b/4-gemmascope-transcoder-16k/13598 3 Figure 2: Types of attributions in transcoders. 2. Validate tokens: Retain only tokens that activate the feature in a forward pass. 3. Analyze output effects: Project the feature decoder vector fdec into vocabulary logits via fdec · WU, and identify outlier tokens (strongly promoted) via z-score. Token-based features often respond to multiple forms of the same word. To process the obtained set of tokens and produce a coherent feature description, we apply lemmatization (Bird et al., 2009) to both the promoted tokens and the generated descriptions. This step consolidates different inflected forms into a single canonical form and can be considered a lightweight alternative to LLM-based postprocessing for cleaning and standardizing the descriptions. 3.2 CIRCUIT-BASED ANALYSIS To account for interference between layers, Ameisen et al. (2025) propose incorporating a Jacobian term into the attribution formulation, which improves the reliability of feature attributions. With this adjustment, Equation (1) can be redefined as activation(l′,i′)[t]  f (l′,i′) dec · J(l→l′)[t] · f (l,i) enc  , (2) where the Jacobian is given by J(l→l′)[t] := ∂r(l) mid[t] ∂r(l′) post[t] (3) with all non-linearities, including attention, normalization, and activation functions, treated as con- stants with respect to the given input, and r(l) mid[t] and r(l′) post[t] denote the residual streams before and after the transcoder at layers l and l′, respectively (see Figure 2a). Similarly, we can measure how much previous tokens contributed to the activation of our analyzed feature (l, i) on token t through a specific attention head. This can be done via attribution to that attention head, as presented in Figure 2b (Dunefsky et al., 2024). For an attention head h at layer l′ with l′ ≤l, the contribution of token s through (l′, h) to feature (l, i) at token t can be expressed as score(l′,h)(r(l′) pre[t], r(l′) pre[s]) | {z } attention score from s to t (W (l′,h) OV )⊤f (l,i) enc  · r(l′) pre[s]  | {z } projection of feature onto head output , (4) where r(l′) pre[s] denotes the residual stream at token s before the attention block in layer l′, W (l′,h) OV is the output-value matrix of head h, and f (l,i) enc is the encoder vector of feature (l, i). The contribution of our target feature (l, i) to the output logit y[t] at token t is demonstrated on Figure 2c, and can be expressed as activation(l,i)[t]  f (l,i) dec · Jy→(l,i)[t] · WU[:, y[t]]  , (5) where f (l,i) dec is the decoder vector of feature (l, i), Jy→(l,i)[t] is the Jacobian from the final residual stream to the post-residual of feature (l, i) at token t (calculated as before with nonlinearities and 4 attention patterns considered as a constant on a given input), and WU[:, y[t]] is the unembedding vector for token y[t] . Interpretability Beyond Activations A central challenge in interpreting feature activations is that raw activation values do not always reveal what triggered an activation of a given feature. Sim- ply highlighting token activations and prompting language models often yields vague or generic explanations such as “variety of words on variety of topics.” To address this, we focus on identifying patterns in the data that both lead to a feature’s activation and determine how the feature influences the output. • Input-centric focus: Using the attribution formulation in Equation (4), we extract attention head–token pairs that provide strong contribution for a feature’s activation, as presented in Figure 2a. Outlier pairs are selected based on their z-score relative to the distribution of contributions, ensuring that only the strongest connections are retained. We then mask the original input sequence, keeping only tokens that either directly activated the feature or contributed significantly through attention. This procedure isolates interpretable token patterns underlying the activation of a feature, as illustrated in Figure 2b, where the feature activates on references to already mentioned entities. • Output-centric analysis: Using Equation (5), we evaluate whether the analyzed feature contributed to the prediction of the generated tokens after being activated. This highlights which output tokens were influenced by the feature and thus provides an estimate of its downstream impact, as shown in Figure 2c. Circuit-Based Clustering A single feature often responds to multiple concepts, which may be en- tangled and hard to interpret. Semantic clustering of activations in embedding space is insufficient, as it ignores the causal, circuit-level mechanisms. We propose circuit-based clustering: for each input, we collect contributing elements, such as transcoder features and token/attention head pairs via Equation (2) and Equation (4) respectively, including significant transcoder features (l′, i′) and attention head contributions (l, h, ∆), where ∆ is the relative token position. Sparse activations ensure that each input has only a few contributors. To reduce noise, we apply a frequency filter, retaining a feature or head only if it appears in at least a fraction ρ of inputs: |{j:f∈Sj}| N ≥ρ, where Sj is the contribution set for input j. This step removes features and heads that contribute only for isolated inputs and are unlikely to reflect consistent circuit-level behavior relevant to the feature activation, as well as reduces the size of the set, making subsequent clustering more robust and computationally efficient. We then compute pairwise Jaccard similarities J(A, B) = |SA ∩SB|/|SA ∪SB| , forming an n × n matrix. Clusters are extracted via DBSCAN (Ester et al., 1996) on this similarity matrix, which is robust to noise and does not require a predefined number of clusters. Sampling Strategy Most prior works (Bills et al., 2023; Choi et al., 2024; Gur-Arieh et al., 2025; Puri et al., 2025) focus on analyzing the most highly activating examples of a feature. This approach is especially sensible for MLP neurons, whose activations are often noisy and unstructured. In contrast, both SAEs and transcoders are explicitly designed to yield monosemantic features. For this reason, we aim to analyze the entire distribution of a feature’s activations, in order to capture the broader concept(s) that drive its behavior. Because activation distributions are typically highly skewed toward zero, we adopt inverse- frequency quantile sampling (De Angeli et al., 2022) to ensure sufficient coverage of rare but strongly activating cases. Specifically, activations are partitioned into B = 20 quantile bins. Here, nb denotes the number of activations in bin b, and each activation i in b is assigned weight wi = 1/nα b with α = 0.9, and corresponding normalized probability pi = wi/ P j wj. Finally, we sample N = 100 activations without replacement, producing a diverse set of contexts that up-samples tail cases while still maintaining broad overall coverage. Automated Interpretability Sampled inputs are first analyzed from input- and/or output-centric perspectives, then grouped into clusters according to the detected circuits. Each cluster is inter- preted independently using an explainer LLM (GPT-4o-mini; see Appendix C.2). Rather than pro- viding full inputs with highlighted activations or token–activation pairs, we supply only the detected 5 Layer 0 Layer 7 Layer 12 Layer 21 C R P F C R P F C R P F C R P F WeightLens 0.56 0.24 0.03 0.00 0.62 0.22 0.02 0.01 0.47 0.13 0.02 0.01 0.71 0.86 0.65 0.02 WeightLens+Out 0.51 0.26 0.03 0.00 0.55 0.24 0.02 0.01 0.39 0.15 0.02 0.01 0.63 0.87 0.63 0.04 WeigthLens+Out+LLM 0.52 0.27 0.04 0.01 0.58 0.20 0.03 0.02 0.41 0.13 0.03 0.00 0.68 0.84 0.63 0.03 Neuronpedia 0.53 0.22 0.07 0.03 0.51 0.17 0.10 0.03 0.28 0.10 0.08 0.00 0.50 0.80 0.73 0.02 MaxAct* 0.50 0.21 0.08 0.04 0.54 0.15 0.08 0.05 0.33 0.11 0.7 0.00 0.53 0.80 0.74 0.03 Table 1: Evaluation of Gemma-2-2b transcoder descriptions (C = Clarity, R = Responsiveness, P = Purity, F = Faithfulness; metrics in range 0–1, higher is better). Best results marked in bold. Methods: WeightLens variants; Circuit-Based; Neuronpedia and MaxAct* (activation-based). pattern, marking the single most activating token. Finally, for each feature, the explainer LLM synthesizes a unified description from the individual cluster-level interpretations. 3.3 EVALUATION All evaluations were performed on input-centric metrics Clarity, Responsiveness, Purity and the output-centric metric Faithfulness from the FADE framework (Puri et al., 2025). Here, Clarity mea- sures if the concept is expressed clearly enough to generate synthetic data, that would activate a feature. Responsiveness demonstrates if the feature’s activation on the given concept are signifi- cantly higher than it’s activations on a normally distributed dataset. Purity measures if the feature only strongly activates on the described concept or also on unrelated concepts. Finally, Faithfulness measures the extent to which steering the feature influences the model output in the direction of the described concept. 4 WEIGHT-BASED INTERPRETABILITY RESULTS We analyze the interpretability of transcoders for GPT-2 Small (Dunefsky et al., 2024), Gemma-2- 2b (Lieberum et al., 2024), and Llama-3.2-1B (Paulo et al., 2025b) via WeightLens, and we focus on Gemma-2-2b transcoders for qualitative analysis. We evaluate and compare: • WeightLens (Weights-based descriptions): descriptions composed solely of the tokens that activate the feature, postprocessed via lemmatization; • WeightLens+Out (Weight-based with logits-based analysis): weight-based descriptions augmented with tokens derived from the feature’s unembedding vectors; • WeightLens+Output+LLM (Weight-based with logits and LLM refinement): descrip- tions further refined using a secondary LLM (gpt-4o-mini-2024-07-18) to generate a con- cise, descriptive single-line summary based on both activating and promoted tokens (see Appendix C.1). They are further compared to activation-based methods, specifically descriptions from Neuronpedia (Lin, 2023) and MaxAct* method (Puri et al., 2025). Figure 3: Percentage of validated fea- ture descriptions per layer obtained via WeightLens. WeightLens matches or exceeds activation-based methods Across ∼250 features per layer, our weight- based method performs on par with or better than activa- tion maximization methods. It achieves higher scores on Clarity and Responsiveness (Table 1), while activation- based methods tend to overgeneralize, leading to lower scores. However, activation maximization attains higher Purity, highlighting that many features might be context- dependent which is not discovered through WeightLens. Layer-wise trends in token-based feature inter- pretability Figure 3 shows that token-based interpretabil- ity varies strongly with depth. Early layers exhibit clear token-level structure, making them well-suited for weight-based analysis, with many features activating reliably on specific tokens. In Gemma-2-2B, however, layers 0 and 7 6 perform poorly in terms of descriptions quality (Table 1), reflecting their high activation count (ℓ0 = 76, 70)4, which also explains the weak activation-based baselines. Number of token-based features, as well as the quality of their descriptions, drops sharply in the middle layers of Llama and Gemma, as presented on Figure 3, consistent with prior interpretability analysis (Choi et al., 2024), but not in GPT-2, similartly to the results observed by Bills et al. (2023). A likely explanation is the use of RoPE in Llama and Gemma, which introduces additional non- linearities. Within Gemma-2-2B, layer 12 is the least interpretable based on weights: despite high sparsity (ℓ0 = 6), it contains few validated token-based features. Majority of the features in this layer are extremely context-dependent and encode very specific patterns. Higher layers partially recover in terms of presence of token-based features, though still below early-layer levels. For instance, Gemma-2-2B layer 21 (ℓ0 = 13) shows strong interpretability, with token-based features often acting as key–value pairs that map input tokens to predictable collocations (e.g., “apologize for”, “will be”). Figure 4: Comparison of Neuron- pedia feature descriptions for token- based features (i.e., those for which de- scriptions were successfully generated via WeightLens) and context-based fea- tures, whose activations are strongly in- fluenced by context. Faithfulness is consistently low across layers and meth- ods, likely due to the transcoder architecture: un- like SAEs, which decompose the full residual stream, transcoders write into it like MLPs, so modifying a single feature rarely produces large effects because similar con- cepts are distributed across layers and features. It may therefore be reasonable to modify the faithfulness met- ric to evaluate interventions on entire circuits rather than individual features. Most interpretable features are token-based Only a subset of features receive validated input-invariant de- scriptions: 32.7% for Gemma, 58.8% for GPT-2, and 25.4% for Llama (Figure 3). However, when the weight- based descriptions fail, activation-based ones also per- form poorly (see Figure 4). This is especially visible on layer 21, which overall demonstrated the best results in terms of interpretability. LLM postprocessing is not essential Although an LLM can produce more general results and filter out noise in the output logits, its utilization is not mandatory in this analysis, since the results are comparable, as seen in Table 1. This is a promising step to reducing reliance on the explainer models. 5 CIRCUIT-BASED INTERPRETABILITY RESULTS For the input-dependent analysis, we evaluate the following approaches, implemented within the CircuitLens framework: • CircuitLens-Input (Circuit-based descriptions): descriptions derived from the activa- tion patterns of the feature, obtained via attribution to attention heads; • CircuitLens-Full (Circuit-based full descriptions): descriptions based on activation pat- terns obtained through attributions to attention heads, augmented with tokens, which gen- eration was influenced by the feature. • WeightLens (WL) + CircuitLens-Full (Circuit-based full descriptions integrated with WeigthLens results): circuit-based descriptions are enriched with weight-based tokens obtained via WeightLens, which are incorporated when merging cluster-level descriptions into a full feature description. For generating circuit-based descriptions, we use sampling, as described in the Subsection 3.2, on a relatively small dataset of 24M tokens (see Appendix B). In addition, to eliminate the factor of the dataset influence, we compare the circuit-based analysis performed on the same data, as used in MaxAct*, i.e. on a large dataset with sampling from the top, as presented in Appendix B. These results will be marked by (top), i.e. CircuitLens-Input (top). 4huggingface.co/google/gemma-scope-2b-pt-transcoders 7 As baselines, we consider Neuronpedia and MaxAct* feature desciprions. We do not analyze output- based patterns separately, as they are more computationally expensive to obtain and often provide little additional information without the activation patterns that originally triggered the feature. Activations alone are not sufficient Figure 5 shows three circuit-based clusters of activating input samples for feature 619 in layer 12 (L12F619). Within each cluster, some commonality in activating concepts exists, but no clear general pattern emerges from activations alone. Isolating the tokens that contribute most through attention heads reveals that each activating entity is either explicitly mentioned or marked as definite or demonstrative references such as “the,” “this,” or “that,” as well as “former” or “latter,” where contributing tokens align semantically. Figure 5: Clusters of activating inputs for L12F619 and their pat- terns, obtained through attribution to attention heads. Activations are highlighted in green. Extending to the output, feature L21F91 shows that functional roles are not fully captured by input activations: while it ac- tivates on tokens like “on” or certain verbs, its main effect is generating output phrases such as “the basis of” or “based on” (Figure 6). These contributions, detectable via attributions from output logits, illustrate how fea- tures influence output patterns beyond input activations. Figure 6: Input patterns for feature L21F91, obtained via attributions to attention heads, correspond to prompts that trigger the feature, and output patterns, obtained via at- tributions from logits to the feature, are derived from the model’s generated continuations. Tokens isolated through attribution are shown in bold, and activating tokens are highlighted in green. Downstream effect of a feature Output-based analysis is computation- ally expensive, as each generated to- ken requires both a forward and back- ward pass. We generate 15 new to- kens per sample to assess how many are needed for reliable results. As expected, early layers (e.g., layers 0 and 7) rarely contribute directly to the output (Figure 7a), and when they do, the effect is usually limited to the to- ken immediately following the activat- ing one (Figure 7b). Extending the analysis to additional tokens adds only about 1.4% of samples for layer 0, a negligible gain given the computational cost. In deeper layers, most influence remains on the first generated token, though multi-token output patterns also appear, and might be crucial for inter- preting a feature’s function, particularly in later layers (e.g., Figure 6). Figure 8: Histogram of number discov- ered clusters per layer with log scale (y- axis). Circuit-based polysemanticity Layer 0 has almost no underlying circuit structure, except for its own attention heads, resulting in mainly token-based activations (Fig- ure 3), with few or no clusters beyond outliers. This aligns with its low sparsity, broad topic coverage, and context-independent behavior, yielding on average only 1.05 clusters per feature (see Figure 8). By contrast, Layer 7 exhibits clear polysemanticity, averaging 4.5 clusters per feature and showing the fewest single-cluster cases, indicating widespread circuit formation. Layer 12 presents a mixed picture. While it averages 2.8 clusters per feature, many are either single-cluster or highly clus- tered. Qualitative inspection suggests that circuit-based clustering can capture both main circuits and sub-circuits, 8 (a) Smoothed histogram of the fraction of features per layer influencing a given percentage of outputs, with both axes square-root transformed to highlight skewed distributions. (b) Fraction of features in each layer that contribute to generated outputs, plotted by output position relative to the activating token. Figure 7: Influence of features on the new 15 generated tokens from the position of the maximally activating token on a given feature. Layer 0 Layer 7 Layer 12 Layer 21 C R P F C R P F C R P F C R P F CircuitLens-Input 0.51 0.20 0.11 0.01 0.39 0.14 0.14 0.03 0.19 0.10 0.10 0.01 0.42 0.59 0.51 0.03 CircuitLens-Full 0.52 0.22 0.11 0.02 0.44 0.14 0.13 0.03 0.24 0.09 0.09 0.01 0.52 0.64 0.54 0.03 CircuitLens-Input (top) 0.66 0.24 0.05 0.02 0.62 0.17 0.08 0.03 0.27 0.09 0.08 0.00 0.54 0.72 0.63 0.03 CircuitLens-Full (top) 0.66 0.24 0.04 0.02 0.61 0.18 0.07 0.04 0.26 0.10 0.07 0.01 0.56 0.73 0.62 0.03 WL + CircuitLens-Full 0.55 0.22 0.10 0.02 0.51 0.14 0.11 0.02 0.25 0.10 0.08 0.01 0.55 0.68 0.56 0.03 Neuronpedia 0.51 0.23 0.09 0.03 0.44 0.17 0.11 0.03 0.15 0.10 0.11 0.00 0.36 0.71 0.64 0.02 MaxAct* 0.50 0.21 0.09 0.04 0.48 0.15 0.09 0.03 0.17 0.10 0.10 0.00 0.39 0.71 0.64 0.02 Table 2: Evaluation of Gemma-2-2b transcoder descriptions (C = Clarity, R = Responsiveness, P = Purity, F = Faithfulness; metrics in range 0–1, higher is better). Best results marked in bold. Methods: Circuit-Based variants. Baselines: Neuronpedia and MaxAct*. and more careful hyperparameter tuning could yield more generalizable results (Figure 5). It also has the largest share of outlier-only features, reflecting the difficulty of disentangling circuit-driven from semantic-driven activations. Layer 21 is similar (avg. 2.95 clusters/feature) but shows fewer extreme cases. As this layer is closely tied to output generation, we hypothesize that features flexibly participate in multiple circuits to support specific outputs (Figure 6). Resolving the Clarity Problem As shown by Puri et al. (2025), sparse feature descriptions are often not clear enough, as they fail to specify what precisely triggers a feature’s activation. In Table 2, we compare circuit-based analysis with activation-based baselines, including experiments performed on a smaller dataset sampled from the full distribution, as described in Section 3.2 (CircuitLens-Input and -Full), and on a MaxAct*-like dataset (CircuitLens (top)), which is drawn from a much larger corpus but restricted to top activations (see Appendix B) Our results show that circuit-based methods still depend on the dataset, with descriptions from the larger dataset achieving the strongest performance across layers, particularly in clarity and respon- siveness. However, descriptions derived from the smaller dataset remain competitive and in some cases outperform activation-based baselines, generated on the larger dataset. Combining weight- based and circuit-based analysis further reduces sensitivity to dataset size and distribution, making interpretability more robust. Moreover, analysis of the metrics distributions reveals that circuit- based methods yield far fewer features with extremely low clarity compared to purely activation- based approaches (see Appendix D). Finally, both qualitative and quantitative evidence indicate that sampling from the full distribution, though memory-intensive, provides a more faithful picture of general feature behavior. 9 CONCLUSION In this work, we address a fundamental missing piece in automated interpretability pipelines by developing methods that leverage models’ underlying structural information. We show that raw activations alone often fail to reveal the patterns driving feature activation, while transcoders, well- suited for circuit discovery, enable efficient incorporation of structural information. Our proposed frameworks demonstrate that structural information allow more scalable and robust interpretability. The weight-based analysis offers an efficient alternative for context-independent features – covering up to 58.8% of cases – without requiring large datasets or external LLMs. Circuit-based clustering isolates groups of activating texts into more interpretable clusters, while input- and output-based analysis further clarifies each feature’s functional role. Together, these methods reduce dependence on large datasets, improve robustness, and make automated interpretability more scalable and practi- cal for real-world applications. By bridging activation-based approaches with weight-based analysis and circuit discovery, our work opens new avenues for understanding model behavior at scale. REFERENCES Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, and Wojciech Samek. Attnlrp: attention-aware layer-wise rel- evance propagation for transformers. In Proceedings of the 41st International Conference on Machine Learning, pp. 135–168, 2024. Emmanuel Ameisen, Jack Lindsey, Adam Pearce, Wes Gurnee, Nicholas L. Turner, Brian Chen, Craig Citro, David Abrahams, Shan Carter, Basil Hosmer, Jonathan Marcus, Michael Sklar, Adly Templeton, Trenton Bricken, Callum McDougall, Hoagy Cunningham, Thomas Henighan, Adam Jermyn, Andy Jones, Andrew Persic, Zhenyi Qi, T. Ben Thompson, Sam Zimmerman, Kelley Rivoire, Thomas Conerly, Chris Olah, and Joshua Batson. Circuit tracing: Revealing computational graphs in language models. Transformer Circuits Thread, 2025. URL https: //transformer-circuits.pub/2025/attribution-graphs/methods.html. Vic Barnett and Toby Lewis. Outliers in Statistical Data. Wiley, 3rd edition, 1994. Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, and William Saunders. Language models can explain neurons in language models. https://openaipublic.blob.core.windows.net/ neuron-explainer/paper/index.html, 2023. Steven Bird, Ewan Klein, and Edward Loper. Natural Language Processing with Python. O’Reilly Media, 2009. ISBN 978-0-596-51649-9. URL https://www.nltk.org/book/. Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2023. URL https://transformer-circuits. pub/2023/monosemantic-features/index.html. Dami Choi, Vincent Huang, Kevin Meng, Daniel D Johnson, Jacob Steinhardt, and Sarah Schwettmann. Scaling automatic neuron description, 2024. URL https://transluce. org/neuron-descriptions. Arthur Conmy, Augustine Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adri`a Garriga- Alonso. Towards automated circuit discovery for mechanistic interpretability. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine (eds.), Advances in Neu- ral Information Processing Systems, volume 36, pp. 16318–16352. Curran Associates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/ file/34e1dbe95d34d7ebaf99b9bcaeb5b2be-Paper-Conference.pdf. Kevin De Angeli, Shang Gao, Ioana Danciu, Eric B. Durbin, Xiao-Cheng Wu, Antoinette Stroup, Jennifer Doherty, Stephen Schwartz, Charles Wiggins, Mark Damesyn, Linda Coyle, Lynne Pen- berthy, Georgia D. Tourassi, and Hong-Jun Yoon. Class imbalance in out-of-distribution datasets: 10 Improving the robustness of the textcnn for the classification of rare cancer types. Journal of Biomedical Informatics, 125:103957, 2022. ISSN 1532-0464. doi: https://doi.org/10.1016/j.jbi. 2021.103957. URL https://www.sciencedirect.com/science/article/pii/ S1532046421002860. Maximilian Dreyer, Jim Berend, Tobias Labarta, Johanna Vielhaben, Thomas Wiegand, Sebas- tian Lapuschkin, and Wojciech Samek. Mechanistic understanding and validation of large ai models with semanticlens. Nature Machine Intelligence, 7(9):1572–1585, 2025. doi: 10.1038/ s42256-025-01084-w. URL https://doi.org/10.1038/s42256-025-01084-w. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783. Jacob Dunefsky, Philippe Chlenski, and Neel Nanda. Transcoders find interpretable llm feature circuits, 2024. URL https://arxiv.org/abs/2406.11944. Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superpo- sition, 2022. URL https://arxiv.org/abs/2209.10652. Martin Ester, Hans-Peter Kriegel, J¨org Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), pp. 226–231. AAAI Press, 1996. doi: 10.5555/3001460.3001507. URL https://dl.acm.org/doi/10.5555/3001460. 3001507. Leo Gao, Stella Biderman, Sid Black, et al. The pile: An 800gb dataset of diverse text for language modeling, 2020. URL https://arxiv.org/abs/2101.00027. Xuyang Ge, Fukang Zhu, Wentao Shu, Junxuan Wang, Zhengfu He, and Xipeng Qiu. Automatically identifying local and global circuits with linear computation graphs, 2024. URL https:// arxiv.org/abs/2405.13868. Yoav Gur-Arieh, Roy Mayan, Chen Agassy, Atticus Geiger, and Mor Geva. Enhancing automated interpretability with output-centric feature descriptions, 2025. URL https://arxiv.org/ abs/2501.08319. Laura Kopf, Nils Feldhus, Kirill Bykov, Philine Lou Bommer, Anna Hedstr¨om, Marina M. C. H¨ohne, and Oliver Eberle. Capturing polysemanticity with prism: A multi-concept feature de- scription framework, 2025. URL https://arxiv.org/abs/2506.15538. Sebastian Lapuschkin, Stephan W¨aldchen, Alexander Binder, Gr´egoire Montavon, Wojciech Samek, and Klaus-Robert M¨uller. Unmasking clever hans predictors and assessing what machines really learn. Nature Communications, 10(1):1096, 2019. doi: 10.1038/s41467-019-08987-4. URL https://doi.org/10.1038/s41467-019-08987-4. Simon Lermen, Mateusz Dziemian, and Natalia P´erez-Campanero Antol´ın. Deceptive automated interpretability: Language models coordinating to fool oversight systems, 2025. URL https: //arxiv.org/abs/2504.07831. Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning, 2023. URL https://arxiv. org/abs/2308.03281. Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, Janos Kramar, Anca Dragan, Rohin Shah, and Neel Nanda. Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2. In Yonatan Belinkov, Najoung Kim, Jaap Jumelet, Hosein Mohebbi, Aaron Mueller, and Hanjie Chen (eds.), Proceedings of the 7th Black- boxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pp. 278–300, Miami, Florida, US, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024. blackboxnlp-1.19. URL https://aclanthology.org/2024.blackboxnlp-1.19/. 11 Johnny Lin. Neuronpedia: Interactive reference and tooling for analyzing neural networks, 2023. URL https://www.neuronpedia.org. Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. The building blocks of interpretability. Distill, 2018. doi: 10.23915/ distill.00010. URL https://distill.pub/2018/building-blocks. Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 2020. doi: 10.23915/distill.00024.001. URL https://distill.pub/2020/circuits/zoom-in. Josh OpenAI andAchiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni, et al. Gpt-4 technical report, 2024. URL https://arxiv.org/abs/2303.08774. Gonc¸alo Paulo, Alex Mallen, Caden Juang, and Nora Belrose. Automatically interpreting millions of features in large language models, 2025a. URL https://arxiv.org/abs/2410.13928. Gonc¸alo Paulo, Stepan Shabalin, and Nora Belrose. Transcoders beat sparse autoencoders for inter- pretability, 2025b. URL https://arxiv.org/abs/2501.18823. Bruno Puri, Aakriti Jain, Elena Golimblevskaia, Patrick Kahardipraja, Thomas Wiegand, Woj- ciech Samek, and Sebastian Lapuschkin. FADE: Why bad descriptions happen to good fea- tures. In Wanxiang Che, Joyce Nabende, Ekaterina Shutova, and Mohammad Taher Pilehvar (eds.), Findings of the Association for Computational Linguistics: ACL 2025, pp. 17138–17160, Vienna, Austria, July 2025. Association for Computational Linguistics. ISBN 979-8-89176-256- 5. doi: 10.18653/v1/2025.findings-acl.881. URL https://aclanthology.org/2025. findings-acl.881/. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 2019. URL https://cdn.openai.com/better-language-models/language_models_ are_unsupervised_multitask_learners.pdf. Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, L´eonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ram´e, et al. Gemma 2: Improving open language models at a practical size, 2024. URL https://arxiv.org/abs/2408.00118. Lee Sharkey, Bilal Chughtai, Joshua Batson, Jack Lindsey, Jeff Wu, Lucius Bushnaq, Nicholas Goldowsky-Dill, Stefan Heimersheim, Alejandro Ortega, Joseph Bloom, Stella Biderman, Adria Garriga-Alonso, Arthur Conmy, Neel Nanda, Jessica Rumbelow, Martin Wattenberg, Nandi Schoots, Joseph Miller, Eric J. Michaud, Stephen Casper, Max Tegmark, William Saunders, David Bau, Eric Todd, Atticus Geiger, Mor Geva, Jesse Hoogland, Daniel Murfet, and Tom Mc- Grath. Open problems in mechanistic interpretability, 2025. URL https://arxiv.org/ abs/2501.16496. Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, et al. Large language models encode clinical knowledge. Nature, 620(6509):1–8, 2023. doi: 10.1038/s41586-023-06291-2. URL https://www.nature.com/articles/s41586-023-06291-2. Adly Templeton, Tom Conerly, Jonathan Marcus, Jack Lindsey, Trenton Bricken, Brian Chen, Adam Pearce, Craig Citro, Emmanuel Ameisen, Andy Jones, Hoagy Cunningham, Nicholas L Turner, Callum McDougall, Monte MacDiarmid, C. Daniel Freeman, Theodore R. Sumers, Edward Rees, Joshua Batson, Adam Jermyn, Shan Carter, Chris Olah, and Tom Henighan. Scaling monosemanticity: Extracting interpretable features from claude 3 sonnet. Trans- former Circuits Thread, 2024. URL https://transformer-circuits.pub/2024/ scaling-monosemanticity/index.html. Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Inter- pretability in the wild: a circuit for indirect object identification in gpt-2 small, 2022. URL https://arxiv.org/abs/2211.00593. 12 A EXTENDED RELATED WORK A.1 AUTOMATED INTERPRETABILITY Most work in automated interpretability builds on the pipeline of Bills et al. (2023), where a dataset is processed through GPT-2 (Radford et al., 2019) to collect MLP neuron activations. A larger LLM (GPT-4; (OpenAI andAchiam et al., 2024)) then generates descriptions based on top-k activating sequences, and is further used as an “activations simulator” to evaluate these descriptions. To address the polysemanticity of MLP neurons, Bricken et al. (2023) propose sparse autoencoders (SAEs). Templeton et al. (2024); Paulo et al. (2025a) extend this pipeline to SAEs and show that their features are more monosemantic and interpretable than MLP neurons. Subsequent work refines automated interpretability in different ways (Choi et al., 2024; Templeton et al., 2024; Gur-Arieh et al., 2025; Kopf et al., 2025; Paulo et al., 2025a; Puri et al., 2025). Choi et al. (2024) fine-tune a smaller LLM (Llama-3.1-8B-Instruct (Dubey et al., 2024)) for neuron de- scription and evaluation, systematically studying prompt design choices such as token highlighting, token–activation pairs, and number of examples. Puri et al. (2025) run a broader prompt analysis on SAEs, finding results that sometimes contradict Choi et al. (2024), especially on how token acti- vations should be communicated. They also emphasize dependence on explainer model quality and challenges from the fine-grained specificity of SAE features. Gur-Arieh et al. (2025) combine input- and output-based analysis, showing that descriptions improve when considering both what a feature activates on and what it promotes. Kopf et al. (2025) address polysemanticity by clustering activating inputs. They sample from the top 1% of activations, embed sequences with gte-Qwen2-1.5B-instruct (Li et al., 2023), and apply k-means clustering into five groups, generating one description per cluster. This consistently outper- forms prior work (Bills et al., 2023; Gur-Arieh et al., 2025), demonstrating the benefit of handling polysemanticity directly. A.2 EVALUATION METRICS Developing automated evaluation metrics is essential for interpretability research, since manual as- sessment of description quality does not scale. Several main approaches have been proposed: (i) simulated activations, where an LLM predicts a feature’s activation on text samples given its de- scription (Bills et al., 2023; Choi et al., 2024); (ii) classifier-based metrics, where an LLM judges how strongly a text sample relates to a feature’s description (Templeton et al., 2024; Paulo et al., 2025a; Puri et al., 2025); (iii) synthetic data approaches, where an LLM generates or labels data from a description (Gur-Arieh et al., 2025; Puri et al., 2025); and (iv) output-based metrics, which evaluate how much a feature influences model outputs (Bills et al., 2023; Gur-Arieh et al., 2025; Paulo et al., 2025a; Puri et al., 2025). Simulated-activation metrics (Bills et al., 2023; Choi et al., 2024) are inexpensive but fail to capture many failure modes in description generation (Puri et al., 2025). Classifier-based metrics instead ask a judge LLM to score how related a sample is to a description, often on a scale from 0 (not related) to 3 (completely related) (Templeton et al., 2024; Puri et al., 2025). Similar detection-based setups appear in Paulo et al. (2025b), where the model identifies which samples match the concept. Evaluation can then be quantified using AUROC scores (Kopf et al., 2025), or metrics such as Gini coefficient and Average Precision, which Puri et al. (2025) combine into Responsiveness and Purity scores. Synthetic data metrics compare activating and non-activating examples, either LLM-generated or sampled uniformly from a dataset (Gur-Arieh et al., 2025; Puri et al., 2025). Finally, output-based metrics test whether descriptions capture a feature’s causal effect on model outputs. Puri et al. (2025) propose Faithfulness, where a judge LLM rates concept presence in steered generations. Paulo et al. (2025a) introduce Intervention Scoring, while Gur-Arieh et al. (2025) apply a similar approach; where the model’s task is to distinguish outputs produced under feature steering from control generations. 13 A.3 CIRCUIT TRACING A recent advance in mechanistic interpretability is the introduction of transcoders (Dunefsky et al., 2024; Ge et al., 2024). Unlike sparse autoencoders (SAEs), transcoders provide a structured way to trace how upstream feature activations contribute to downstream activations, enabling circuit- level analysis across layers. Their key innovation is the decomposition of a feature’s activation into an input-dependent and an input-independent component. The latter depends only on transcoder weights, allowing it to be analyzed separately and efficiently. Using GPT-2, Dunefsky et al. (2024) demonstrate cases where activating tokens identified through weight-based analysis align with those discovered via traditional activation-based methods. This indicates that transcoders can support both prompt-specific attribution graphs and global, weight- derived connectivity maps. Extending this work to Gemma-2-2B (Riviere et al., 2024), Ameisen et al. (2025) highlight a major limitation of weight-based analysis: interference from context-dependent components of the archi- tecture. To address this, they introduce target-weighted expected residual attribution (TWERA), which adjusts virtual weights using empirical coactivation statistics, effectively up-weighting con- nections between frequently coactivating features. However, they also show that TWERA can sig- nificantly diverge from the original transcoder weights, making it dependent on the dataset used to compute coactivations and limiting its reliability as a fully weight-based method. B DATASET PROCESSING We used the uncopyrighted version of the Pile dataset Gao et al. (2020), available on Hugging Face (https://huggingface.co/datasets/monology/pile-uncopyrighted) with all copyrighted content removed. This version contains over 345.7 GB of training data from various sources. From this dataset, we extract two datasets of size 6GB (3.6B tokens), and 40MB (24M tokens) for generating MaxAct* descriptions and circuit based descriptions. The extracted por- tion from the training partition was used to collect the most activated samples based on frequency quantile sampling for the smaller dataset and top percentile sampling for the larger dataset. For eval- uations, we utilized the test partition from the same dataset, applying identical preprocessing steps as those used for the training data. Post processing involves several steps to ensure a balanced and informative dataset. First, we used the NLTK Bird et al. (2009) sentence tokenizer to split large text chunks into individual sentences. We then filtered out sentences in the bottom and top fifth percentiles based on length, as these were typically out-of-distribution cases consisting of single words, characters, or a few outliers. This step helped achieve a more balanced distribution. Additionally, we removed sentences containing only numbers or special characters with no meaningful content. Finally, duplicate sentences were deleted. C DESCRIPTIONS GENERATION C.1 POSTPROCESSING WEIGHT-BASED DESCRIPTIONS LLM-based postprocessing of weight-based analysis enables generating smoother, more coherent feature descriptions by consolidating input- and output-centric information, specifically, the tokens that activate a feature and those that it promotes or suppresses. This approach is particularly effective at filtering out noise, as promoted or suppressed tokens often include unrelated or random terms that do not reflect the feature’s true function. We’re studying neurons in a neural network. Each neuron has certain inputs that activate it and outputs that it leads to. You will receive three pieces of information about a neuron : 1. The top important tokens. 2. The top tokens it promotes in the output. 3. The tokens it suppresses in the output. 14 These will be separated into three sections: [Important Tokens], [Text Promoted], and [Text Suppressed]. All three are a combination of tokens. You can infer the most likely output or function of the neuron based on these tokens. The tokens, especially [Text Promoted] and [Text Suppressed], may include noise, such as unrelated terms, symbols, or programming jargon. If these are not coherent, you may ignore them. If the [Important Tokens] do not form a common theme, you may simply combine the words to form a single concept. Focus on identifying a cohesive theme or concept shared by the most relevant tokens. Your response should be a concise (1-2 sentence) explanation of the neuron, describing what triggers it (input) and what it does once triggered (output). If the input and output are related, you may mention this; otherwise, state them separately. [Concept: <Your interpretation of the neuron, based on the tokens provided>] Example 1 Input: [Important Tokens]: [’on’, ’pada’] [Tokens Promoted]: [’behalf’] [Tokens Suppressed]: [’on’, ’in’] Output: [Concept: The token "on" in the context of "on behalf of"] } ... C.2 GENERATING CIRCUIT-BASED DESCRIPTIONS At the first step, we treat each cluster of a feature separately. We pass the obtained patterns (input- centric or full, i.e. with patterns detected in the model’s output) in there in order to generate a description. You are an explainable AI researcher analyzing feature activations in language models. You will receive short patterns: fragments of text where tokens activated a feature. ONLY the snippets shown are the evidence, do not assume any extra surrounding context. Pattern formatting: - _ is 1-3 skipped non-important tokens - [...] is 4 or more skipped not relevant tokens - The <<<highlighted>>> token of each snippet is usually the most important signal (it is the activating token), it can be a part of a word. Analysis procedure: 1. Do NOT start by interpreting semantics. First treat the data as raw strings. 2. Count and note repeated literal elements (words, single letters, punctuation, suffixes/ prefixes, LaTeX tokens, different symbols, brackets, arrows, parentheses). 3. Pay special attention to: - exact repeated tokens, - repeated punctuation or formatting (commas, superscripts, backslashes, braces), - positional patterns, - capitalization patterns and single-letter variable tokens, - functional words, like articles, pronouns, modal verbs, that create a consistent pattern. 4. Only after the literal/structural check, generalize into a short concept (if appropriate). Decision rules: - If a single literal token or structural pattern dominates, output that token or structural label exactly. - If it is some grammatical pattern, output exactly that. - Avoid speculative semantic labels unless literal patterns support them. Output rules: - Output exactly ONE concise sentence (<20 words) describing the shared concept or structure. - If a single token/pattern dominates, output it exactly. - You may include up to one short example group in parentheses to clarify. - Do NOT include extra labels or the word "Description:". - If no clear recurring concept or structure is found, output exactly: No concept found. - Avoid vague phrases like "in various contexts" or ’a variety of words’. - Do NOT output your internal reasoning, only the final single sentence. Example 1 15 Input: important to helps to permits to importance to is possible [...] to able to allows us to purpose [...] is to Output: Preposition "to" in phrases that express purpose, intention, or enable an action. At the next iteration, we combine the obtained cluster descriptions into a single one. You are an explainable AI researcher analyzing multiple related concepts. You will receive a list of **concept descriptions**, each representing a semantic, grammatical , or functional element. [ONLY FOR WeightLens+CircuitLens COMBINED EXPERIMENTS: Sometimes, you may also receive a phrase at the beginning like "Important tokens: ...", for example: Important tokens: amazing, largely, upon. Important tokens: danger, preparation, prepare, preparing. Important tokens: new. Always integrate these tokens into your description, even if they do not fit naturally with the other concept descriptions. ] Step-by-step reasoning: 1. Examine all provided concepts carefully. Identify recurring themes, functions, or semantic roles. 2. Look for commonalities across the concepts, including: - grammatical elements (articles, parts of sentences, syntactic patterns) - symbols and punctuation (commas, brackets, etc.) - semantic categories - mathematical or symbolic markers 3. Pay special attention to specific patterns, which often are described through function words (articles, modal verbs, etc.). 4. Merge similar or overlapping elements into a single, concise idea. 5. Think step by step: a) Identify the core function or role each concept serves. b) Group related concepts together. c) Combine them into one coherent description. Output rules: - Output exactly **one concise sentence** (<30 words) describing the shared concept or several main concepts. - Include all major elements, but merge overlapping items. - Include short examples of terms or specific patterns, if they clarify the concept. - Include any "Important tokens" explicitly in the description. - Do not add labels, headings, or extra commentary. - Be precise, avoid speculation, and avoid vague expressions like "in various contexts." 16 D EXTENDED RESULTS Figure 9: Kernel density estimates illustrating evaluation results of WeightLens methods in compar- ison to the baselines. 17 Figure 10: Kernel density estimates illustrating evaluation results of CircuitLens methods in com- parison to the baselines. 18
CIRCUIT INSIGHTS: TOWARDS INTERPRETABILITY BEYOND ACTIVATIONS Elena Golimblevskaia1, Aakriti Jain1, Bruno Puri1,2, Ammar Ibrahim1, Wojciech Samek1,2,3, Sebastian Lapuschkin1,4 1 2 ̈at Berlin 3BIFOLD - Berlin Institute for the Foundations of Learning and Data 4Centre of eXplainable Artificial Intelligence, Technological University Dublin corresponding authors: { ABSTRACT The fields of explainable AI and mechanistic interpretability aim to uncover the internal structure of neural networks, with circuit discovery as a central tool for understanding model computations. Existing approaches, however, rely on manual inspection and remain limited to toy tasks. Automated interpretability offers scalability by analyzing isolated features and their activations, but it often misses interactions between features and depends strongly on external LLMs and dataset quality. Transcoders have recently made it possible to separate feature attributions into input-dependent and input-invariant components, providing a foundation for more systematic circuit analysis. Building on this, we propose WeightLens1 and CircuitLens2, two complementary methods that go beyond activation-based analysis. WeightLens interprets features directly from their learned weights, removing the need for explainer models or datasets while matching or exceeding the performance of existing methods on context-independent features. CircuitLens captures how feature activations arise from interactions between components, revealing circuit-level dynamics that activation-only approaches cannot identify. Together, these methods increase interpretability robustness and enhance scalable mechanistic analysis of circuits while maintaining efficiency and quality. 1 INTRODUCTION Large language models (LLMs) have seen rapid adoption in recent years, including in sensitive domains such as medical analysis (Singhal et al., 2023). Despite their remarkable capabilities, understanding the internal mechanisms of these models is crucial for safe and reliable deployment, yet our knowledge in this area remains limited (Olah et al., 2018; Lapuschkin et al., 2019; Sharkey et al., 2025). Several methods have emerged in the fields of mechanistic interpretability and explainable AI to understand how models encode and utilize mechanisms that influence outputs (Olah et al., 2020; Achtibat et al., 2024; Dreyer et al., 2025). Much of the existing work focuses on circuit discovery, identifying subgraphs responsible for specific tasks (Conmy et al., 2023). However, these studies are mostly limited to toy tasks, and understanding the roles of individual neurons and attention heads still requires extensive manual analysis (Elhage et al., 2022; Wang et al., 2022; Bricken et al., 2023). Automated interpretability methods have been proposed to address these limitations. Initial work, such as Bills et al. (2023), leveraged larger LLMs to analyze activation patterns of MLP neurons and generate natural language descriptions. Although promising, these approaches face the fundamental challenge of polysemanticity of MLP neurons, making them inherently difficult to interpret. This bottleneck prompted the development of Sparse Autoencoders (SAEs), which decompose activations into more monosemantic features (Bricken et al., 2023), advancing scalable interpretability 1github.com/egolimblevskaia/WeightLens 2github.com/egolimblevskaia/CircuitLens 1 16 Oct 2025 pipelines (Templeton et al., 2024; Paulo et al., 2025a). More recently, transcoders were introduced in Dunefsky et al. (2024) and Ge et al. (2024) as an alternative approach for extracting sparse features. Unlike SAEs, which reconstruct activations, transcoders sparsely approximate entire MLP layers, while maintaining a clear separation between input-dependent and weight-dependent contributions. This architecture enables efficient circuit discovery and provides direct attributions to other features, attention heads and vocabulary. Despite these advances in sparse feature space construction, automated interpretability remains heavily dependent on explainer LLMs, which shifts the solution to the black box problem to yet another black box LLM, resulting in notable safety risks which may produce unfaithful or unreliable explanations (Lermen et al., 2025). Its effectiveness is influenced by the prompt, fine-tuning strategy, and the dataset used for generating explanations. Furthermore, sparse features can still be challenging to interpret (Puri et al., 2025), as they may activate on highly specific patterns that are not easily captured by analyzing activations alone, or may be polysemantic. In this work, we focus on automated interpretability grounded in model weights and circuit structure, making the following contributions: • We introduce WeightLens, a framework for interpreting models using only their weights and the weights of their transcoders, reducing dependence on both the underlying dataset and explainer LLMs. Descriptions obtained via WeightLens match or exceed activation-based descriptions for context-independent features. • We introduce CircuitLens, a framework for circuit-based analysis of feature activations, extending interpretability to context-dependent features by (i) isolating input patterns triggering feature activations and (ii) identifying which model outputs are influenced by specific features. Our approach uncovers complex patterns invisible to activation-only methods, addresses the large dataset requirements and explainer LLM dependence of autointerpretability pipelines (Choi et al., 2024; Puri et al., 2025). Additionally, it handles polysemanticity through circuit-based clustering and combining their interpretations into unified feature descriptions. 2 RELATED WORK Recently, a series of works has focused on building automated interpretability pipelines for language models (Choi et al., 2024; Paulo et al., 2025a; Puri et al., 2025; Gur-Arieh et al., 2025). Most approaches follow the framework introduced by Bills et al. (2023), which consists of running a large dataset through a model, collecting maximally activating samples for each neuron or SAE feature, applying various sampling strategies, and then passing the samples to a larger LLM to generate natural language descriptions. Several studies have refined this pipeline by focusing on prompt construction and description evaluation. For instance, Choi et al. (2024) and Puri et al. (2025) examine how factors such as the number of samples and the presentation of token activations affect description quality. Choi et al. (2024) further fine-tune an explainer model specifically to produce descriptions conditioned on a feature's activations. Beyond input-based evidence, Gur-Arieh et al. (2025) incorporate output-side information, analyzing not only what inputs trigger a feature but also how that feature influences the model's logits. These pipelines are applied to different representational units. Early work focuses on MLP neurons (Bills et al., 2023; Choi et al., 2024), while later studies extend them to SAE features, which are generally more interpretable and often monosemantic (Templeton et al., 2024; Gur-Arieh et al., 2025; Paulo et al., 2025a; Puri et al., 2025). As an alternative to SAEs, Dunefsky et al. (2024) and Ge et al. (2024) introduce transcoders, a sparse approximation of MLP layers that decomposes attributions into input-dependent and input-invariant components. Variants such as skip-transcoders (Paulo et al., 2025b) and cross-layer transcoders (CLTs) (Ameisen et al., 2025) have also been explored, demonstrating through qualitative and quantitative analysis that transcoders match or exceed the interpretability gained through SAEs. Through case studies, Dunefsky et al. (2024) show that transcoder circuits can be used for interpreting a feature's function, although based mainly on manual analysis. 2 The interpretability of transcoder weights is studied by Ameisen et al. (2025), who show that while some connections appear meaningful, interference is a major challenge. They propose TargetWeighted Expected Residual Attribution (TWERA), which averages attributions across a dataset. However, they find that TWERA weights often diverge substantially from the raw transcoder weights, making the method sensitive to the distribution of the evaluation dataset. Finally, Puri et al. (2025) highlight a persistent challenge: even though sparse features, specifically in SAEs, are generally more monosemantic than MLP neurons, they can be highly specific and activate only on certain patterns. This specificity makes them difficult to interpret; either the explainer LLM fails to identify the correct trigger, or the resulting description is too vague to be useful.3 3 METHODOLOGY As introduced by Dunefsky et al. (2024), given a transcoder structure, the contribution of transcoder feature i′ in transcoder layer l′ to feature i in layer l > l′ on token t can be expressed as: activation(l′,i′)[t] | {z } input-dependent (f (l′,i′) dec · f (l,i) enc ) | {z } input-invariant (1) where f (l,i) enc ∈Rdmodel denotes the i-th column of the encoder matrix W (l) enc ∈Rdmodel×dfeatures, and f (l′,i′) dec ∈Rdmodel denotes the i′-th row of the decoder matrix W (l′) dec ∈Rdfeatures×dmodel, where dfeatures is the dimension of the transcoder, dmodel is the dimension of the model, and dfeatures ≫dmodel. This formulation cleanly separates an input-dependent scalar activation from a fixed, input-invariant connectivity term between features across layers. 3.1 INPUT-INVARIANT ANALYSIS Figure 1: Process of layerwise input-invariant analysis via WeightLens. The input-invariant connections from Equation (1) provide a useful foundation for interpretation of transcoder features, as demonstrated in the case studies of (Dunefsky et al., 2024). To build on this idea, we make the following complementary assumptions: Assumption 1: Input-invariant connections indicate meaningful structural relationships only if their magnitude significantly exceeds that of other connections, making them statistical outliers. Since many features are context-dependent, relying solely on weight-based analysis can produce misleading results. To address this, we introduce a validation step to determine whether a feature is truly token-dependent, i.e., whether it consistently activates on specific tokens regardless of context. Formally, we state the following assumption: Assumption 2: If a token is strongly supported by input-invariant connections (weights) and semantically aligned with the concept encoded by the feature, then the feature should activate on this token regardless of context. We generate a feature description - a set of tokens associated with the feature - by processing the model layer-by-layer, starting from layer 0 as presented on Figure 1. For this we are performing the following steps: 1. Extract candidate tokens from vocabulary and previous layers features: Project the feature encoder vector fenc into the input vocabulary embedding space via the embedding matrix Wemb as Wemb · fenc, and identify candidate tokens as statistical outliers based on their z-scores (Barnett & Lewis, 1994), retaining only highly distinctive tokens. For each earlier layer l′ ] Example 1 Input: [Important Tokens]: ['on', 'pada'] [Tokens Promoted]: ['behalf'] [Tokens Suppressed]: ['on', 'in'] Output: [Concept: The token "on" in the context of "on behalf of"] } ... C.2 GENERATING CIRCUIT-BASED DESCRIPTIONS At the first step, we treat each cluster of a feature separately. We pass the obtained patterns (inputcentric or full, i.e. with patterns detected in the model's output) in there in order to generate a description. You are an explainable AI researcher analyzing feature activations in language models. You will receive short patterns: fragments of text where tokens activated a feature. ONLY the snippets shown are the evidence, do not assume any extra surrounding context. Pattern formatting: - _ is 1-3 skipped non-important tokens - [...] is 4 or more skipped not relevant tokens - The >> token of each snippet is usually the most important signal (it is the activating token), it can be a part of a word. Analysis procedure: 1. Do NOT start by interpreting semantics. First treat the data as raw strings. 2. Count and note repeated literal elements (words, single letters, punctuation, suffixes/ prefixes, LaTeX tokens, different symbols, brackets, arrows, parentheses). 3. Pay special attention to: - exact repeated tokens, - repeated punctuation or formatting (commas, superscripts, backslashes, braces), - positional patterns, - capitalization patterns and single-letter variable tokens, - functional words, like articles, pronouns, modal verbs, that create a consistent pattern. 4. Only after the literal/structural check, generalize into a short concept (if appropriate). Decision rules: - If a single literal token or structural pattern dominates, output that token or structural label exactly. - If it is some grammatical pattern, output exactly that. - Avoid speculative semantic labels unless literal patterns support them. Output rules: - Output exactly ONE concise sentence (<20 words) describing the shared concept or structure. - If a single token/pattern dominates, output it exactly. - You may include up to one short example group in parentheses to clarify. - Do NOT include extra labels or the word "Description:". - If no clear recurring concept or structure is found, output exactly: No concept found. - Avoid vague phrases like "in various contexts" or 'a variety of words'. - Do NOT output your internal reasoning, only the final single sentence. Example 1 15 Input: important to helps to permits to importance to is possible [...] to able to allows us to purpose [...] is to Output: Preposition "to" in phrases that express purpose, intention, or enable an action. At the next iteration, we combine the obtained cluster descriptions into a single one. You are an explainable AI researcher analyzing multiple related concepts. You will receive a list of **concept descriptions**, each representing a semantic, grammatical , or functional element. [ONLY FOR WeightLens+CircuitLens COMBINED EXPERIMENTS: Sometimes, you may also receive a phrase at the beginning like "Important tokens: ...", for example: Important tokens: amazing, largely, upon. Important tokens: danger, preparation, prepare, preparing. Important tokens: new. Always integrate these tokens into your description, even if they do not fit naturally with the other concept descriptions. ] Step-by-step reasoning: 1. Examine all provided concepts carefully. Identify recurring themes, functions, or semantic roles. 2. Look for commonalities across the concepts, including: - grammatical elements (articles, parts of sentences, syntactic patterns) - symbols and punctuation (commas, brackets, etc.) - semantic categories - mathematical or symbolic markers 3. Pay special attention to specific patterns, which often are described through function words (articles, modal verbs, etc.). 4. Merge similar or overlapping elements into a single, concise idea. 5. Think step by step: a) Identify the core function or role each concept serves. b) Group related concepts together. c) Combine them into one coherent description. Output rules: - Output exactly **one concise sentence** (<30 words) describing the shared concept or several main concepts. - Include all major elements, but merge overlapping items. - Include short examples of terms or specific patterns, if they clarify the concept. - Include any "Important tokens" explicitly in the description. - Do not add labels, headings, or extra commentary. - Be precise, avoid speculation, and avoid vague expressions like "in various contexts." 16 D EXTENDED RESULTS Figure 9: Kernel density estimates illustrating evaluation results of WeightLens methods in comparison to the baselines. 17 Figure 10: Kernel density estimates illustrating evaluation results of CircuitLens methods in comparison to the baselines. 18
2510.14938
Astronomy & Astrophysics manuscript no. current ©ESO 2025 October 17, 2025 X-ray panorama of the SS 433/W50 complex by SRG/eROSITA Rashid Sunyaev1, 2, Ildar Khabibullin3, 1, 2, Eugene Churazov1, 2, Marat Gilfanov2, 1, Pavel Medvedev2, and Sergey Sazonov2 1 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany 2 Space Research Institute (IKI), Profsoyuznaya 84/32, Moscow 117997, Russia 3 Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr.1, 81679 München, Germany October 17, 2025 ABSTRACT Galactic microquasar SS 433 and the radio nebula W50 surrounding it present a prototypical example of a hyper-Eddington binary system shaping its ambient interstellar medium via energetic outflows. In this paper, we present X-ray observations of the SS 433/W50 complex by the eROSITA telescope onboard the SRG space observatory. These data provide images of the entire nebula characterized by a very large dynamic range and allow spectral analysis of the diffuse X-ray emission. In particular, these data illustrate a close connection between the thermal and non-thermal components of W50 on scales ranging from sub-parsecs, represented by narrow X-ray bright filaments, to the entire extent ≳100 pc of the nebula. These data also allow us to fully characterize a pair of nearly symmetric, sharp-edged, elongated structures aligned with the orbital axis of the binary system, which lack radio counterparts, but are prominent in very high energy gamma-ray emission. The resulting multifaceted picture of the interaction between energetic outflows and the surrounding medium paves the way for future focused multiwavelength observations and dedicated numerical simulations. Key words. ISM: jets and outflows – X-rays: binaries – plasmas – acceleration of particles 1. Introduction SS 433 is a unique close stellar binary system in our Galaxy (located at 𝑙, 𝑏= 39.7◦, −2.2◦and 𝑑SS 433 ≈5 kpc) composed of a relativistic compact object (a black hole or a neutron star) which accretes matter from its companion at sustainably hyper- Eddington rate (see Fabrika 2004; Cherepashchuk et al. 2025, for reviews). Having been first listed in the catalog of point-like H𝛼emitters (Stephenson & Sanduleak 1977), it quickly became one of the most studied astrophysical objects thanks to its very peculiar optical spectrum (Margon et al. 1979) featuring pairs of narrow (with half-opening angle 𝜃j ≲2◦) blue- and red-shifted lines (with respect to their rest frame positions) with the shifts varying periodically in time (see Margon 1984, for a review of early observational results). A kinematic model associates this emission with a pair of narrow relativistic jets having constant bulk velocity of a quarter of the speed of light and changing di- rection in a regular precession manner with a period 𝑃prec ≈162 days (Abell & Margon 1979; Fabian & Rees 1979). This period is much longer than the period of orbital modulation (𝑃orb = 13.08 days) and is interpreted as due to precession of a thick accre- tion disk with the amplitude of Θprec ≈21◦and the mean axis inclined by the angle 𝑖= 78◦to the line of sight (e.g., Katz 1980; Sarazin et al. 1980; Whitmire & Matese 1980). Subsequent de- tection of extended and variable radio emission (Spencer 1979) perfectly described by the precessing jets pattern fully confirmed the picture of twin relativistic outflows (Hjellming & Johnston 1981a,b; Blundell & Bowler 2004). Moreover, the proper motion of the individual radio-emitting blobs allowed one to constrain the distance to the source with a remarkable 10% accuracy, being 𝑑SS 433 = 5 ± 0.5 kpc (Hjellming & Johnston 1981a; Blundell & Bowler 2004; Marshall et al. 2013). Despite indications of both quasi-regular (nutation, Katz et al. 1982) and sporadic (jittering, Kubota et al. 2010) deviations from the predictions of this simple model as well as presence of (relatively short) periods of time when the Doppler shifts of optical emission lines change significantly (e.g., Blundell et al. 2011; Medvedev et al. 2022), all the derived parameters of the model stay remarkably stable over more than 40 years of continuous monitoring observations (Cherepashchuk et al. 2018). This stability likely reflects extreme robustness of the mass transfer occurring in the hyper-Eddington regime (van den Heuvel 1981; van den Heuvel et al. 2017) with the estimated accretion rate ¤𝑀≳10−4𝑀⊙yr−1 ∼1000 ¤𝑀Edd(𝑀X/3𝑀⊙) (e.g., Shklovskii 1981; Cherepashchuk 1981; Leibowitz 1984; Fuchs et al. 2006), where 𝑀X is the mass of the compact object and 𝑀⊙is the Solar mass and ¤𝑀Edd is the critical Eddington mass accretion rate. The compact object is most likely a black hole with 𝑀X ≳8𝑀⊙(Cherepashchuk et al. 2023), although the possibility of it being a neutron star (𝑀X < 3𝑀⊙) has also been discussed (e.g. Goranskij 2011; Medvedev et al. 2018). Since X-ray emission coming from the innermost parts of the hyper-Eddington accretion disk is expected to be strongly beamed in the axial direction (Shakura & Sunyaev 1973; Begel- man et al. 2006; Poutanen et al. 2007), we are not able to observe it directly, given the high inclination angle of the system. Indeed, although X-ray emission from this source has also been detected early on (Seward et al. 1976; Forman et al. 1978), its luminosity is relatively low 1036 erg s−1 compared to the Eddington lumi- nosity (Grindlay et al. 1984). As a result, SS 433 has long been considered as a beamed away prototype of Ultra Luminous X-ray sources (ULXs, e.g., Kaaret et al. 2017, for a review) observed in normal star-forming galaxies (Fabrika & Mescheryakov 2001; Begelman et al. 2006; Poutanen et al. 2007), although possible in- dications of the ‘hidden’ central emission are yet to be confirmed Article number, page 1 of 14 arXiv:2510.14938v1 [astro-ph.HE] 16 Oct 2025 A&A proofs: manuscript no. current (Medvedev & Fabrika 2010; Khabibullin & Sazonov 2016, 2019; Middleton et al. 2021; Fogantini et al. 2023). Instead, what dominates X-ray emission of the system is again thermal emission from a twin pair of the baryonic jets having almost identical orientation and bulk velocity to the optically emitting jets (Watson et al. 1986). Numerous pairs of blue- and red-shifted narrow emission lines of highly ionized heavy ele- ments (primarily, silicon, sulfur, iron and nickel) allow one to build a multi-temperature model of the ballistically expanding and cooling flow (Watson et al. 1986; Kotani et al. 1996; Mar- shall et al. 2002; Brinkmann et al. 2005; Medvedev & Fabrika 2010; Marshall et al. 2013; Khabibullin et al. 2016) which first becomes visible when its temperature is above 20 keV and then cools down below 1 keV and likely fragments later on due to thermal instability (Brinkmann et al. 1988). Detailed spectro- scopic modeling of this emission (e.g. Khabibullin et al. 2016; Medvedev et al. 2019) in combination with the eclipsing observa- tions of the jets by the companion star (Stewart et al. 1987; Kawai et al. 1989; Filippova et al. 2006; Lopez et al. 2006; Atapin & Fabrika 2016) enable very robust determination of the jets’ phys- ical properties (so-called jets ‘biometry’, consisting of their size, opening angle, velocity, temperature, and the mass flux). Thanks to this we know very robustly the kinetic luminosity of these out- flows, which exceeds 1039 erg s−1 (e.g., Medvedev et al. 2019), confirming the necessity of a hyper-Eddington ‘engine’ releas- ing large amount of energy in the form of relativistic outflows, consistently with the early theoretical (Shakura & Sunyaev 1973) and modern numerical (e.g., Ohsuga & Mineshige 2011; Jiang et al. 2019; Yoshioka et al. 2022; Fragile et al. 2025) predictions. In contrast to the radiation freely escaping the system, the kinetic energy of the launched outflows cannot be easily trans- ported away from the source and has to be deposited into the surrounding interstellar medium. The estimated total amount of this energy exceeds 1051 erg for a life-time of the system at the level of ≳30,000 yr (e.g., van den Heuvel et al. 1980). This en- ergy is at least on par with the energy of the initial supernova explosion that should have accompanied the birth of the compact object, meaning that a quite large region ≳10 pc in its vicinity can be shaped by this later activity (e.g., Begelman et al. 1980; Zealey et al. 1980; van den Heuvel 1981; Velázquez & Raga 2000; Zavala et al. 2008; Goodall et al. 2011a; Churazov et al. 2020, 2024). Indeed, SS 433 is known to be surrounded by the famous giant (∼200 × 100 pc) radio nebula W50 (Westerhout 1958; Holden & Caswell 1969; Geldzahler et al. 1980; Elston & Baum 1987; Dubner et al. 1998; Goodall et al. 2011b; Broderick et al. 2018; Sakemi et al. 2021), which has a suggestive elongated morphology, with the elongation axis being co-aligned with the symmetry axis of the jets precession cone. The asymmetry of the lobes, or “ears", of the nebula can be readily explained by the noticeable stratification of the surrounding medium, given that the system is located at the height ℎSS 433 = 𝑑SS 433 sin(𝑏) ≈200 pc below the Galactic plane and has the size comparable to the scale height of the exponential cold gas disk (e.g., Velázquez & Raga 2000; Zavala et al. 2008; Goodall et al. 2011a). The inte- rior of the nebula is also known to shine in soft thermal X-rays (Brinkmann et al. 1996, 2007; Safi-Harb & Ögelman 1997; Chi et al. 2024), hard non-thermal (Seward et al. 1980; Watson et al. 1983; Yamauchi et al. 1994; Safi-Harb & Petre 1999; Namiki et al. 2000; Safi-Harb et al. 2022; Kaaret et al. 2024) and fila- mentary H𝛼emission (Kirshner & Chevalier 1980; Shuder et al. 1980; Boumis et al. 2007; Abolmasov et al. 2010; Farnes et al. 2017; Rosado et al. 2021), indicative of the high velocity shocks propagating through a relatively dense interstellar medium. The nature of the extended X-ray jets (EXJ) remains more puzzling, although their morphology, spectra, and polarization all point to the synchrotron origin of the emission (e.g., Safi- Harb et al. 2022; Kaaret et al. 2024). Most recently, Very High Energy (VHE) emission from these structures has been detected and mapped (e.g., Abeysekara et al. 2018; Alfaro et al. 2024; H. E. S. S. Collaboration et al. 2024; LHAASO Collaboration 2024), demonstrating that indeed ultra-relativistic electrons with energy above 100 TeV are accelerated in the system (e.g, Kimura et al. 2020), probably as a consequence of internal shocks and recollimation in the energetic trans-relativistic outflows (Bykov et al. 2025). The overall production of cosmic rays by such sys- tems (super-Eddington accretors with powerful outflows) might be of importance for the total CR budget and local variations in its spectrum from place to place (e.g., Heinz & Sunyaev 2002), in particular in the PeV regime (e.g., Peretti et al. 2025; Wang et al. 2025; Bykov et al. 2025). Although the paradigm of “a super-Eddington X-ray binary powering a nebula with energetic outflows” appears to be well established and consistent with the most salient observational properties of the SS 433/W50 complex, the details of almost all aspects of this interaction remain unclear. Indeed, all the phe- nomena directly associated with the central source can be trace only up to a distance of mere 0.1 pc from it, where the last signa- tures of the corkscrew pattern of the precessing jets are observed in radio and X-rays (Migliari et al. 2005; Miller-Jones et al. 2008; Khabibullin & Sazonov 2017; Sakai et al. 2025). A model invoking steady deceleration and disruption of the jets can, in principle, account for such behavior (Panferov 2014, 2017), but then the re-appearance of the “jets” with similar to the injection velocities tens of pc away from the central source might be prob- lematic. Morphology of the soft X-ray emission, best mapped by the ROSAT mosaic observations (Brinkmann et al. 1996, 2005), is also quite puzzling, with large global brightness variations and thin filamentary structures, not very commonly observed in similar situations of supernova remnants (which typically feature shell-, plerion- or mixed types of morphology). Deep pointed observations with XMM-Newton, Chandra, Suzaku, and NuSTAR were capable of covering certain parts of the nebula (e.g., Safi- Harb et al. 2022; Chi et al. 2024; Tsuji et al. 2025), but a uniform and sensitive coverage of the entire nebula with adequate spec- troscopic capabilities is missing. Here we present results of X-ray observations of the SS 433/W50 complex by the eROSITA telescope (Predehl et al. 2021) onboard the SRG space observatory (Sunyaev et al. 2021), which were performed during its Performance Verification (PV) phase. The obtained mosaic provides us with X-ray images of the entire extent of W50 with unprecedented dynamic range, bring- ing X-ray data on par with the radio and other multi-wavelength data, and, in addition, delivers exquisite spectral information for the rich variety of diffuse structures. These data, in particular, demonstrate a close connection between the thermal and non- thermal components of W50 on scales ranging from subparsec, represented by narrow X-ray and radio bright filaments, to the entire extent of the nebula. These data also enable full charac- terization of a pair of nearly symmetric, sharp-edged, elongated structures aligned with the orbital axis of the binary system, which lack radio counterparts. The structure of the paper is following: we describe the ob- servations and the obtained data in Sect. 2; results of the broad- band and spectrally-resolved imaging are presented in Sect. 3 and Sect. 4, respectively; spectroscopic analysis of the emission from several selected regions follows in Sect. 5; the obtained global picture is discussed in Sect. 6 and summarized in Sect. 7. Article number, page 2 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA 2. Data Since W50 spans approximately 2.2 × 1.0 degrees on the sky, corresponding to the physical size of ∼190 × 90 pc at the distance of 5 kpc1, the observing campaign consisted of 9 indi- vidual telescope pointings (each covering a circular area of ∼1◦ in diameter). These pointings were arranged in a mosaic to re- sult in a nearly uniform coverage of the entire nebula and, in addition, the highest sensitivity in the most interesting regions. The observing dates, from October 5th to October 8th and then from October 19th to October 20th, 2019, were chosen to co- incide with the two consecutive orbital eclipses of the central source SS 433 by its normal companion star (according to or- bital ephemerids by Eikenberry et al. 2001). During eclipses, the apparent X-ray flux of SS 433 goes down by a factor of sev- eral (e.g., Kawai et al. 1989; Lopez et al. 2006; Medvedev et al. 2019), minimizing possible adverse effects of contamination and pile-up due to the presence of a very bright object in the field- of-view, even though relative suppression of the soft X-ray lines being rather minor (Marshall et al. 2013). The typical effective (i.e., equivalent to the nominal eROSITA on-axis performance) exposure time across the map amounts to 30-50 ks, leading to a factor of 10 deeper images compared to the previously available full-extent maps of W50 by 𝑅𝑂𝑆𝐴𝑇observatory in the 0.5-2 keV band (Brinkmann et al. 1996) and Einstein observatory in the 2-4 keV band (Watson et al. 1983). In addition to that, the eROSITA data have much better spatial (∼15” arcsec on the telescope axis and ∼28” averaged across the field-of-view) and spectral (∼65 eV at 1 keV) resolution (Predehl et al. 2021). A comparable quality of the data is provided by multiple Chandra and XMM-Newton observations, but their coverage of the whole W50 nebula is non-uniform and still incomplete after more than 20 years of observations (Brinkmann et al. 2005; Safi-Harb et al. 2022; Chi et al. 2024). On the other hand, the SRG/eROSITA data are complementary to these observations, as well as to the imaging observations in harder X-rays with NuSTAR, which offer a unique view of non-thermally emitting populations. In this pa- per, we focus on the SRG/eROSITA X-ray data alone and leave combined analyses for future studies. We also supplement the PV data with the data obtained in the course of the SRG/eROSITA All-Sky Survey (Predehl et al. 2021; Sunyaev et al. 2021). Although the survey data are much shallower in sensitivity (the typical exposure time per point is ∼1000 s after combining the data of four scans), they provide uniform coverage of the entire area surrounding the nebula. For them, the standard data reduction, background estimation and exposure- and vignetting corrections were applied in the same way as was done for mapping diffuse emission from nearby galaxy clusters (Churazov et al. 2021a, 2023), supernova rem- nants (Churazov et al. 2021b; Khabibullin et al. 2023, 2024), and the Galactic diffuse emission (Predehl et al. 2020; Khabibullin et al. 2022; Churazov et al. 2024) on even larger scales. Here we also go one step further and combine the PV and all-sky data in a seamless fashion, similarly as it has been done for Chandra and SRG/eROSITA observations of the Perseus cluster (Churazov et al. 2025). 1 The most recent parallax measurement for SS 433 in Gaia DR3 corresponds to the significantly larger distance of 8.5+2.0 −1.4 kpc (Gaia DR3 4293406612283985024, Gaia Collaboration et al. 2023), we caution of using this value given large residuals of the astro- metric solution. The previous parallax measurement published in Gaia DR2 corresponded to a much smaller distance of 3.8+0.8 −1.1 kpc (Gaia DR2 4293406612283985024, Arnason et al. 2021). A comparison of the 4 × 4 degrees field in radio and X-ray bands is shown in Fig. 1. Although in the radio image, the outer boundary of the W50 nebula appears to be very prominent, the X-ray image is instead dominated by the very bright central ob- ject (SS 433) and elongated structures known as “extended X-ray jets” (EXJ) located well inside the radio boundary of W50 (Se- ward et al. 1980; Watson et al. 1983; Brinkmann et al. 1996). At a distance of 5 kpc, these images span 350 pc and range from 17 pc to almost 370 pc in distance from the Galactic plane. Given that the scale heights for the cold and warm components of the interstellar medium are tens and hundreds of pc, respectively, one could naturally expect signatures of the environment strati- fication (e.g., Goodall et al. 2011a). This is indeed revealed by the asymmetry of the W50 lobes, with the one directed towards the Galactic plane (the Western Lobe in equatorial coordinates) being more compact than the opposite one (the Eastern lobe). On larger scales, clear variations of the background and fore- ground diffuse X-ray emission are seen. Inspection of 3D redden- ing maps (e.g., Green et al. 2019) shows a clear anti-correlation of the reddening and soft X-ray surface brightness (see Appendix C), suggesting that absorption of X-rays is the main reason for these variations. Morphologically similar structures appear in the 3D reddening maps at a distance of 2-3 kpc, i.e., in the foreground to W50/SS 433. Therefore, it is plausible that the majority of them are physically unrelated to W50. 3. Broad-band imaging We now turn to the deeper PV data. The broad-band (0.5-4 keV) vignetting-corrected X-ray image of the entire W50 nebula is shown in Fig. 2, with the contours of radio emission at 4.85 GHz (Gregory et al. 1996) overlaid in white. The color coding reflects the measured X-ray surface brightness on a square-root scale spanning two orders of magnitude, highlighting the wide dynamic range of the diffuse X-ray emission captured by eROSITA. In addition to SS 433 itself (the extremely bright source in the center) and a multitude of point sources, the majority of which are nearby foreground stars or background AGN, the picture shows bright diffuse emission with rich internal structure. The X-ray picture can be broadly decomposed into three com- ponents. Firstly, there is a low surface brightness emission filling the entire projected extent of the nebula. It is brighter in the Eastern and Western lobes of W50 (left and right, respectively, in Fig. 2) and significantly dimmer along the Northern and Southern radio boundaries of W50. Remarkably, the central part of W50 (beyond the region contaminated by the PSF wings from SS 433) appears even fainter in X-rays, suggesting a shell-like geometry of the X-ray-emitting gas. Secondly, one can notice the presence of very narrow X-ray filaments, particularly inside and close to the boundaries of the Eastern and Western lobes. Their apparent width is ≳1 pc, while their length is ∼10 pc. Given the angular resolution of eROSITA (better than ∼28”, i.e., ≲0.7 pc at 5 kpc), these filaments are fully resolved in both directions. The X-ray filaments form a complicated network with an overall structure very similar to the filaments of the cold and dense gas observed via H𝛼+[N II] emission at different locations within the nebula (Boumis et al. 2007; Farnes et al. 2017). The largest X-ray filaments have a flat transverse profile of the surface brightness and are an order of magnitude brighter than the surrounding medium. Such filaments of the dense X-ray emitting gas might arise as instabilities in the sheared flows, caustics of the strong shock waves, or due to heating of the preexisting cold gas filaments by hot gas and relativistic particles inside the lobes. Article number, page 3 of 14 A&A proofs: manuscript no. current 41.5 41.0 40.5 40.0 39.5 39.0 38.5 38.0 -0.5 -1.0 -1.5 -2.0 -2.5 -3.0 -3.5 -4.0 Galactic longitude [deg] Galactic latitude [deg] Sh 74 SS433 G038.7-01.3 3C396 3C397 41.5 41.0 40.5 40.0 39.5 39.0 38.5 38.0 -0.5 -1.0 -1.5 -2.0 -2.5 -3.0 -3.5 -4.0 Galactic longitude [deg] Fig. 1. Global radio and X-ray views of the 4 × 4 degrees region centered on SS 433 (both images are in Galactic coordinates). Left: Radio image (square-root scale) by Green Bank telescope at 6cm (4.85 GHz, Gregory et al. 1996). Right: SRG/eROSITA (particle background-subtracted exposure-corrected) 0.3-2.3 keV X-ray image (log scale, two orders of magnitude range) of the W50 field, which combines the PV and all-sky data accumulated over four consecutive scans after smoothing with a Gaussian beam with 𝜎= 90 arcsec. The light-blue areas in this image (where the X-ray flux is low) correspond to regions with high Galactic column density of gas and dust, which attenuate X-rays. Locations of supernova remnants 3C396, 3C397 and G038.7-01.3 (e.g., Green 2025), as well as of a HII region Sharpless 74 (Sharpless 1959) are marked on both images. The final and perhaps the most visual feature of the image is a pair of nearly symmetrical, sharp-edged “extended X-ray jets” directed away from the central source. Their axis is not only aligned with the elongation of the radio nebula and location of the X-ray bright lobes, but also coincides with the projection of the precession axis of SS 433’s famous trans-relativistic baryonic jets. The physical nature of these structures, first found by the Einstein observatory (Watson et al. 1983) and then observed with all major X-ray telescopes (Brinkmann et al. 1996; Safi-Harb & Ögelman 1997; Namiki et al. 2000; Brinkmann et al. 2007; Safi-Harb & Petre 1999), including the most recent polarimteric observations with IXPE (Kaaret et al. 2024), remains a matter of debate (e.g. Churazov et al. 2024). The SRG/eROSITA data clearly show a very rapid onset of emission at the innermost boundary of EXJs on both sides of the nebula (also demonstrated recently by Tsuji et al. 2025, using Chandra data). 4. Spectrally-decomposed imaging We take advantage of the unique possibility provided by 𝑆𝑅𝐺/eROSITA to probe the spectral characteristics of diffuse X-ray emission throughout the full extent of the nebula for the first time. To produce a spectrally-decomposed image, we divided the full spectral band presented earlier into three sub-bands: soft (0.5-1 keV), medium (1-2 keV), and hard (2-4 keV) bands, re- spectively. A composite RGB image showing the surface bright- ness of the emission (now on a linear scale) in these bands in red, green, and blue, respectively, is presented in Figure 3. The chosen spectral decomposition immediately reveals the presence of two distinct components in the observed diffuse emission: large regions of red-orange color where emission below 1 keV dominates, and green-cyan structures where prominent emission above 1 keV is present (including the bright central source SS 433). Compact sources also form two groups: the softer (“red- yellow”) ones are predominantly foreground stars, while harder (“blueish”) sources are mostly background AGNs, which shine through the gas and dust in the disk of the Milky Way. Zoom-in images of the lobes (rotated and put side by side to help the com- parison) are shown in Fig. 4 and demonstrate fine structure of the soft diffuse emission, contrasting with relatively smooth and well-confined hard emission. For better characterization of the diffuse emission, we identi- fied bright point sources and masked them along with the central source SS 433. The resulting composite map of the residual dif- fuse emission is shown in Figure 5. The overlaid distance and angle rulers highlight the characteristic dimensions and mor- phology of the prominent structures. Namely, one can see that the “Extended X-ray Jets” start ∼23 pc away from the central source and demonstrate remarkable symmetry, with the Eastern one starting only a couple of parsecs further from the central source. Both EXJs, characterized by hard X-ray spectra, termi- nate around 𝑅∼60 −65 pc. This symmetry contrasts with the more asymmetric and irregular soft emission of the lobes. The same conclusion is even better illustrated by the side- by-side comparison of the two lobes in Fig. 4. Indeed, while the Eastern lobe extends up to ∼110 pc, the Western one terminates at ∼75 pc. The inclination of the relativistic jet’s precession axis to the line of sight is 78◦(the relativistic jet on the eastern side is on average approaching us), so this difference cannot be Article number, page 4 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA N E 0.5-4 keV 0.9 deg / 80 pc / 260 light-years 1.3 deg / 115 pc / 375 light-years SRG/eROSITA, W50/SS433 Fig. 2. Broad-band high-dynamic-range 0.5-4 keV X-ray image of the giant radio nebula W50 by 𝑆𝑅𝐺/eROSITA (PV data). The size of the image is 2.2 degrees by 1.1 degrees, corresponding to ∼200 × 100 pc at the distance of W50, ∼5 kpc. The image is rotated so that the longer side of W50 is aligned with the horizontal axis. Colour-coding corresponds to the X-ray surface brightness on the square-root scale spanning two orders of magnitude. The bright spot in the center is the SS 433. It appears extended due to the wings of the telescope’s Point Spread Function around SS 433, which itself is heavily saturated in this image. The white contours show the surface brightness of the radio emission from the nebula at 4.85 GHz (Gregory et al. 1996). With this deeper data (compared to the all-sky survey’s data), a faint diffuse emission is seen between bright EXJs and the radio boundary of W50. 0.5-1 keV 1-2 keV 2-4 keV 0.9 deg / 80 pc / 260 light-years 1.3 deg / 115 pc / 375 light-years SRG/eROSITA, W50/SS433 Fig. 3. A composite X-ray image of the radio nebula W50. Colour-coded is the surface brightness of the X-ray emission (on a linear scale) in 0.5-1 keV (red), 1-2 keV (green), and 2-4 keV (blue) energy bands. The white arrows depict the projection of the precession cone of the SS 433 jets extrapolated to distances ∼100 pc. Hard and soft X-ray diffuse emission splits convincingly into two components: softer filamentary emission (red-yellow) and harder (green-blue) emission of EXJs. On top comes a multitude of nearby (active stars and accreting white dwarfs) and distant (mostly AGN) compact sources. For distant sources, the absorption by the Milky Way gas suppresses emission below 1 or 2 keV, giving them a blueish color. Article number, page 5 of 14 A&A proofs: manuscript no. current HPD 10 pc 2 pc 87 pc / 1 deg HPD 10 pc 2 pc 58 pc / 40 arcmin Fig. 4. The comparison of the Eastern and Western Lobes of W50 (see the full image in Fig. 3). The Western Lobe has been rotated by 180 degrees for the sake of comparison. The white circles show the Half-Power-Diameter of the eROSITA PSF corresponding to the physical scale of 0.7 pc. Both EXJs emerge at essentially the same distance from the central source with a remarkably sharp edge and continue for ∼30 pc. On the contrary, the more diffuse extensions have different lengths, plausibly associated with much higher ambient density in the Western direction. explained by geometrical effects only. However, it broadly agrees with the assumption that the energy flow from the central source interacts with the denser ambient medium in the western part, reflecting the gas density gradient perpendicular to the equatorial plane of the Galactic disk (e.g., Goodall et al. 2011a). Since the difference between the lobes’ dimensions amounts to a sizeable fraction of them, one can infer the exponential scale-height of the ambient gas density profile ∼100 pc, consistent with the scale-height of the Galactic neutral gas disk. The X-ray emission from the Northern and Southern boundaries of the nebula appears soft and faint, being noticeably affected by the foreground dust absorption, especially in their Western parts (as illustrated in Appendix C). Taking this into account, we conclude that the soft X-ray emission can be traced across the full extent of the nebula, except for the “inner cavity” with a radius of ∼23 −25 pc. The outer boundary of the emission of the Northern and Southern arcs (outlined as dashed annuli including Region 4 in Fig. 5) is located at ∼38 pc from the center, coinciding with the boundary of the W50’s radio emission. This morphology suggests that the soft X-ray emitting gas is confined in a thick (Δ𝑅∼𝑅/10) close-to- spherical shell, driving a strong shock in the surrounding ISM. Article number, page 6 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA 7 8 2 0.5-1 keV 1-2 keV 2-4 keV SRG/eROSITA, W50/SS433 21 deg 10.5 deg SS 433 75 pc 65 pc 50 pc 38 pc 23 pc 110 pc 95 pc 75 pc 50 pc 38 pc 23 pc 6 5 1 4 3 Fig. 5. "Anatomy” of the diffuse X-ray emission inside W50. The image shows a composite map of the X-ray surface brightness in 0.5-1, 1-2, and 2-4 keV energy bands (similar to Fig. 3) after masking bright foreground and background point sources, and the central source SS 433. Overlaid in white are markers of the distance from the central source (assuming 𝑑SS 433 = 5 kpc) and angle with respect to its orbital plane axis, coinciding with the precession axis of its narrow relativistic jets. The box regions, numbered from 1 to 6, are used for spectrum extraction and represent fiducial spectral components composing the image. The Eastern and the Western lobes are roughly confined inside the projection of the cone with the half-opening angle 21◦, while the EXJs encompass only the cone with a twice smaller opening angle,∼10◦. From the eROSITA spectrally decomposed RGB images, a gentle spectral evolution of the EXJs emission with distance is seen clearly. EXJ emission is significantly harder at its sharp inner boundary than ∼30 pc further away from SS 433, where EXJs blend with softer thermal emission that fills the rest of the lobe, in agreement with earlier findings with XMM-Newton and NuSTAR (Brinkmann et al. 2007; Safi-Harb et al. 2022). 5. X-ray spectra 5.1. Central source The spectral capabilities of eROSITA have been demonstrated using many objects, including SS 433 itself (e.g., Sunyaev et al. 2021). Since this source is extremely bright, the observations were intentionally performed during the two consecutive orbital eclipses of the compact object (and the jets) by the companion star, so its soft X-ray flux was dimmer by a factor of several com- pared to the typical levels (Kawai et al. 1989; Lopez et al. 2006; Marshall et al. 2013). The combined X-ray spectrum of SS 433 extracted from 𝑟= 3 arcmin aperture (with background esti- mated from 𝑟= 4 to 5 arcmin annuli, being however completely negligible compared the source) using the data of all pointings in which it falls within Field-of-View is shown in Fig. 6. As expected, it is well described by the model of the opti- cally thin multitemperature emission from collimated flows that adiabatically expand and cool at distances from a few 1010 to a few 1012 cm from the central object (Brinkmann et al. 1988, 1991, 2005; Kotani et al. 1996; Marshall et al. 2002; Medvedev & Fabrika 2010; Khabibullin et al. 2016; Medvedev et al. 2019). Fig. 6. Combined spectrum of SS 433 accumulated over PV phase observations. Both blue-shifted and red-shifted lines of highly ionized (H-like and He-like) heavy elements (in particular, silicon, sulfur, iron, and nickel) are visible and come from the approaching and receding collimated baryonic jets. The model of multi-temperature cooling jets (Khabibullin et al. 2016) is able to the fit the data very well, although some excess emission is observed between 6 and 7 keV, probably coming from lower ionization species of iron existing in the accretion disk funnel or wind (e.g., Marshall et al. 2002; Brinkmann et al. 2005; Medvedev & Fabrika 2010; Medvedev et al. 2019). We discuss it more in Appendix A, while the main focus of this study is instead on the much fainter diffuse emission from the surrounding nebula. Article number, page 7 of 14 A&A proofs: manuscript no. current Fig. 7. Results of the spectral analysis for the data from eight representative regions. Top row. The left panel shows background-subtracted spectra of the brightest X-ray filament (Region 1 in Figure 5, red points with 1𝜎errorbars) and the diffuse emission (Region 3, blue points) in the Eastern lobe. The right panel shows spectra of the Northern Arc (Region 4, red points) and of the Southern extension (Region 8, blue points). Bottom row. Left panel - spectra of the diffuse emission in the Western lobe (Region 7 in Figure 5, blue points) and of the brightest filament in the Western lobe (Region 6, red points). Right panel - spectra for the termination region of the Eastern “extended X-ray jet” (Region 2, red points) and the base of the Western jet (Region 5, blue points). 5.2. Diffuse emission The high sensitivity of the presented data allowed us to per- form spectroscopy of the diffuse emission in several regions and determine the physical state of the radiating plasma. We have selected eight regions, shown as numbered boxes in Figure 5, for detailed spectral analysis of their X-ray emission. For each re- gion, the background (estimated from the adjacent regions) was subtracted to isolate the net signal. We present the parameters of the fitted models in Table B.1 and discuss the implications of the corresponding results below. 5.2.1. Eastern Lobe Emission from the brightest filament in the Eastern lobe (Region 1 in Figure 5) features a spectrum characteristic for a thermal (but non-equilibrium) optically-thin plasma with a temperature of ∼0.3 keV, as evidenced by the line emission from the highly ionized oxygen, neon, magnesium, silicon and iron (see the left panel in Figure 7). The line emissivity and ratios are consistent with the Solar abundance of heavy elements, but also indicate a non-equilibrium state of the gas ionization. Namely, it is quali- tatively consistent with the expectation for a plane-parallel shell Article number, page 8 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA of the gas that experienced a passage of the shock with velocity 𝑣∼500 km s−1 (from the Rankine-Hugoniot relations for the temperature and density jump properties) some ∼104 yr ago. The inferred hydrogen number density, 𝑛∼0.4 cm−3, is a factor of a few higher than the expected gas density of the inter- stellar gas ∼300 pc away from the plane of the Galactic disk. The spectrum of the diffuse emission in the Eastern lobe (exemplified by Region 3 in Figure 5) also can be well described by thermal non-equilibrium emission of the gas with somewhat higher tem- perature ∼0.5 keV and lower hydrogen number density ∼0.1 cm−3 (see the left panel in Figure 7). 5.2.2. Western Lobe The emission from the Western lobe appears to be noticeably harder, partially due to the higher interstellar absorption in that direction. However, it also corresponds to the emission of plasma with genuinely higher temperature, ∼1 keV, and a similar hydro- gen number density, 𝑛∼0.1 cm−3 (and a few times higher for the brightest filament), as illustrated by spectra from Regions 6 and 7 in Figure 5). The non-equilibrium ionization model (see the right panel in Figure 7) results in a comparable estimate of ∼104 yr for the ionization time-scale. The total absorption-corrected luminosity of the diffuse soft X-ray emission from the nebula is ∼1035 erg s−1 in 0.5-4 keV, which is by a factor of few tens smaller than the apparent X-ray luminosity of the central object, 𝐿X,SS 433 ∼few×1036 erg s−1,2 and much smaller than the supposed kinetic luminosity of the outflows from the supercritical accretion disk, 𝐿𝑘∼1039 erg s−1. However, the thermal energy content of the X-ray emitting gas in the Eastern lobe (with estimated gas number density 0.1 cm−3, temperature 0.5 keV and volume V ∼𝜋× 70 pc × (20 pc)2 ≈ 88, 000 pc3 assuming roughly to cylindrical geometry ) is ∼ 4 × 1050 erg, requiring a supply of at least 1039 erg s−1 over 104 years. The mass of this gas amounts to ∼200𝑀⊙, implying that the bulk of it cannot be provided by the outflows from SS 433. If the volume filling factor 𝑓of the emitting medium is, however, low, the estimate for the required mass and energy could be decreased by a factor of 1/ √︁ 𝑓, but similarly, the density should be increased, leading to the corresponding decrease in the time since the shock passage. That means that the X-ray emitting gas provides an estimate of the average energy input in the lobes at the level of ∼1039 erg s−1, which is comparable to the kinetic luminosity of the jets and the Eddington luminosity of the system (for the mass of the compact object below 10𝑀⊙). A similar estimate can be obtained from the Western lobe, implying nearly symmetric energy injection along the nebula’s axis at a very high rate over the last few tens of thousands of years. Given that the temperature inferred from the spectral fits with the non-equilbirum emission is driven mostly by characteristics of the ionization state, it is likely a lower limit on the actual temperature of the emitting gas. Hence, a significant portion of the gas energy content might be effectively hidden, while the required energy input should be increased correspondingly. 2 Due to the beaming of the radiation along the thick accretion disc axis (expected in the case of highly supercritical accretion), the actual X-ray luminosity of SS 433 might be orders of magnitude larger, namely comparable to the Eddington luminosity and kinetic luminosity of the jets, 𝐿∼1039 erg s−1 (e.g. Fabrika & Mescheryakov 2001; Begelman et al. 2006; Poutanen et al. 2007; Medvedev & Fabrika 2010; Khabibullin & Sazonov 2016; Middleton et al. 2021; Fogantini et al. 2023). 5.2.3. Northern and southern boundaries The emission from the nebula boundaries is exemplified by the Northern Arc region and the Southern Extension (Regions 4 and 8 in Figure 5). The thermal non-equilibrium model provides the temperature estimate ∼0.8 keV, low hydrogen number density, ∼0.1 cm−3 for the Northern Arc and even lower for the Southern Extension. The ionization timescale is, however, similar here as well, ∼104 yr. This observation contrasts with the brightness of the radio emission, which in the Northern Arc region is one of the brightest across the nebula. The high degree of linear polarization of the radio emission might indicate that the preferential orienta- tion of the interstellar magnetic field in this region favors efficient acceleration of relativistic particles producing synchrotron emis- sion. We note here that the central part of the W50 X-ray map suffers from higher photoelectric absorption than the Eastern and even Western lobes, as illustrated in Fig. C.1. This excess absorption is likely associated with the foreground gas and has no physical connection to SS 433/W50. 5.2.4. Extended X-ray jets In contrast to this “thermal” picture, the X-ray spectra of X-ray emission from the “extended X-ray jets” demonstrate a feature- less spectrum that can be described by a power law with the slope in the range from Γ = 1.4 (at their base, e.g., Region 5 in Fig. 5) to Γ = 2.2 (at the termination points, e.g. Region 2 in Fig. 5). In agreement with previous spectral studies of this emis- sion (e.g., Brinkmann et al. 2007), the thermal bremsstrahlung model also provides an equally good fit with temperatures from 𝑘𝑇= 3 keV (at the termination) to 15 keV (at the base). The gas emissivity required to produce the X-ray luminosity at level 1035 erg s−1 implies hydrogen number densities 𝑛∼0.2 cm−3 in these regions (assuming a filling factor of unity). Clearly, if the thermal nature of this emission were true, it would have a drastically higher pressure compared to the ambient soft X-ray emitting gas and would expand sideways supersonically, driving a strong shock wave. Alternatively, the emission might result from the population of relativistic particles accelerated within EXJs. The detection of polarization in X-rays (Kaaret et al. 2024) and TeV emission from regions cospatial with the EXJs (Abeysekara et al. 2018; Alfaro et al. 2024; H. E. S. S. Collaboration et al. 2024) strongly supports the non-thermal scenario. The absence of any noticeable radio, infrared, or optical emission from these structures would then require substantial self-absorption of radiation at these wavelengths (which is un- likely given that the gas number density is likely very low inside them) or a rather peculiar energy distribution of the relativistic particles with a large fraction of them having energies in ex- cess of tens of TeV. Another possibility could be scattering of very bright and beamed emission of the central source (e.g., Pan- ferov & Kabanov 2013; Khabibullin & Sazonov 2016, 2019), but the small optical depth with respect to the Thomson scattering, 𝜏T = 𝑛𝑒𝜎𝑇𝐿∼10−5(𝑛𝑒/0.1 cm−3)(𝐿/50 pc), and the observed spectral gradient along these structures disfavor this possibility. 6. Discussion The Eastern EXJ was well mapped in X-rays with XMM-Newton, Chandra, and NuSTAR (e.g., Brinkmann et al. 2007; Safi-Harb et al. 2022), revealing its non-thermal nature. Furthermore, re- cent IXPE observation (Kaaret et al. 2024) revealed polarized X-ray emission from a patch adjacent to the base of the Eastern EXJ. These data suggest that X-ray photons are produced by a Article number, page 9 of 14 A&A proofs: manuscript no. current synchrotron mechanism with the magnetic field predominantly aligned with the EXJ axis. The eROSITA data show that the bases of both EXJs are located at essentially the same distances from SS 433, have similarly sharp inner boundaries, and non-thermal (featureless) spectra across their entire extents. These structures have often been considered as a direct con- sequence of the relativistic jets’ propagation inside the nebula before they deposit the bulk of their kinetic energy in the lobes, energizing them and the whole extent of the nebula (e.g., Panferov 2014). However, the direction of the SS 433 jets is known to pre- cess about the mean axis with the amplitude ∼21 degrees, which is a factor of 2 larger than the observed opening angle of EXJ. Since this extended X-ray emission comes from distances of ∼ tens of pc, i.e. orders of magnitude further away from the central source than the compact radio and X-ray jets, which are traced up to ∼a few 1017 cm, the actual connection between them is not obvious (e.g., Panferov 2017). It might be governed by recolli- mation of the jets’ flows at some intermediate distance from the source (Eichler 1983; Peter & Eichler 1993), a historical change in the jets’ direction (Goodall et al. 2011a; Monceau-Baroux et al. 2015), or a different regime of baryonic jets propagation through the magnetized ISM after their gas “freezes out” after recombin- ing due to rapid adiabatic expansion, cooling losses and lack of ambient interactions (Churazov et al. 2020). If we assume that there were no recent drastic changes in the properties of W50/SS 433 (e.g., temporary quenching of the cen- tral source), the overall picture of the “large-scale energy flow” in the W50/SS 433, supported by X-ray data, can be schematically decomposed into three major components (illustrated in Fig. 8): – “Dark flow": from a compact source up to the base of EXJs covering the range of distances from ∼0.1 to ∼25 pc from SS 433. Over this distance range, there are no visible signa- tures in the X-ray (or any other) band. – “Non-thermal flow": from the base to the end of EXJs that have featureless non-thermal spectra (presumably, syn- chrotron emission). – “Thermal flow": extended diffuse regions between the W50 radio boundaries and EXJs. Its X-ray emission is typical for ISM shock-heated to sub-keV temperatures. Analytical and numerical models primarily aimed at repro- ducing W50’s radio morphology often assume that a quasi- spherical part of the nebula is a remnant of the supernova explo- sion, while the elongated structures are directly associated with the energy release by the binary system (e.g., Begelman et al. 1980; Zealey et al. 1980; Eichler 1983; Peter & Eichler 1993; Velázquez & Raga 2000; Zavala et al. 2008; Goodall et al. 2011a; Asahina et al. 2014; Monceau-Baroux et al. 2015; Panferov 2017; Ohmura et al. 2021). In the model considered by Churazov et al. (2024), the entire nebula is powered by the central source. There, an anisotropic outflow/wind from the Hyper-Eddington accretor is assumed - a combination of a more powerful collimated out- flow along the orbital axis and a quasi-isotropic wind in all other directions. In this model, the “Dark flow” phase corresponds to the free expansion of the wind, and the onset of the EXJs is as- sociated with the termination shock of the isotropic wind, which initiates shocks in the collimated flow where the pressure rises suddenly. Finally, the thermal part is the shock-heated ISM. In this model, the characteristic shape of the W50 nebula (a quasi- spherical part and two extensions) reflects the anisotropy of the velocity and kinetic power of the wind, with no need for the impact of the narrow baryonic jets at all. In all these models, the outer shock, which delineates the ra- dio boundary of W50, is not dissimilar to the shocks seen around middle-aged supernova remnants or stellar wind-blown bubbles (e.g., Konigl 1983; Dubner et al. 1998). What makes W50 “spe- cial” is the presence of EXJs, which are radio-faint but bright in X-rays and VHE gamma-rays. These structures are also plausi- bly related to efficient particle acceleration at strong shocks, but with a shock configuration that is rather different compared to typical SNR shocks (e.g. Bykov et al. 2025). Remarkable sym- metry of the EXJs, very clearly demonstrated by the eROSITA image, points towards irrelevance of the ambient ISM properties to the energy and momentum propagation, as well as the parti- cle acceleration regime, meaning that a cocoon or a wind-blown cavity likely encompasses these structures in the current epoch. In this regard, the presence of H𝛼emitting filaments in several places across the nebula might help probe the energetic content of the nebula (e.g., Boumis et al. 2007; Abolmasov et al. 2010; Farnes et al. 2017). Interestingly, the location of the brightest H𝛼filaments does not correspond to any prominent features in thermal X-ray emission. This might be the case when the H𝛼 emission comes from the outer boundary of the nebula, where the shock wave is already in the radiative mode. Significantly higher gas temperature obtained for the emission from the West- ern lobe might also be indicative of this process, namely that the radiative shock wave runs against the Galactic density gradient and stalls earlier in this direction, evacuating a smaller volume to be filled by the hot X-ray gas. Given the presence of bright silicon lines in the spectra extracted from these regions, an observation of them with the XRISM observatory (Tashiro et al. 2025) might provide estimates of the directed and turbulent velocities. A large grasp soft X-ray microcalorimeter mission, like LEM (Kraft et al. 2022) or HUBS (Bregman et al. 2023) would also be perfectly suited for in-depth spectral exploration of this complex system with multiple interacting components. 7. Conclusions SRG/eROSITA observations of W50/SS 433 provide, for the first time, the entire X-ray map of the nebula with high spatial and spectral resolution. The data support a physical picture in which the anisotropic energy flow from SS 433 evolves passing through three distinct stages: – an invisible “dark” flow between 0.1 and 25 pc (=anisotropic wind from the binary system) – a “non-thermal” flow over another ∼30 pc (=EXJs) – a thermal flow (= shock-heated ISM) that envelopes EXJs. The appearance of the nebula is further affected by the large ambient density gradients and a heavy foreground photoelec- tric absorption that projects almost exactly on the central quasi- spherical part of the W50 nebula. The thermal part of the W50 X-ray emission can be reason- ably well described by the shock-heated plasma that has not yet reached temperature and ionization equilibrium. Such emission is typical for middle-age or old SNRs. The outer radio boundary of the nebula is also reminiscent of the SNR shocks. On the contrary, the “Extended X-ray Jets” are the most re- markable features of this system on scales of tens of pc. Their sharp inner edges plausibly correspond to extreme shocks that accelerate particles and power the X-ray (synchrotron) emission, and, at 10 orders of magnitude higher energies, the TeV emis- sion. W50/SS 433 system illustrates clearly the important role the Hyper-Eddington accretors might play for the energetics of the ISM in galaxies at different redshifts and the production of ultra-high-energy particles. Article number, page 10 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA Forward shock Thermal emission Non-thermal emission Termination shock? Precession cone Fig. 8. A schematic summary picture of the W50 nebula presented on top of the composite X-ray (red and green) and radio (VLA at 1.4 GHz by Dubner et al. 1998, blue) image. While radio emission most likely arises at the outer shell-like boundary of the nebula, the soft X-ray emission (0.3-0.9 keV) trace shock heated ISM gas behind it which fills almost entire interior of the nebula, and harder X-ray emission (0.9-2.7 keV in this case) is of non-thermal (synchrotron) nature and is produced by ultrarelativistic electrons accelerated at shocks in the axial outflows from the system (e.g., Churazov et al. 2024). The central-most part of the nebula, within ∼25 pc from SS 433 (dashed circle) is likely of very low density and could be a wind-blown cavity created by an almost spherically symmetric outflow with close to Eddington kinetic luminosity. The impact of narrow baryonic jets launched by the central source in the current epoch is not clearly visible, given that no indications of interaction are observed along the sky projection of the jet’s current precession cone (dotted lines). Acknowledgements. This work is partly based on observations with the eROSITA telescope onboard SRG space observatory. The SRG observatory was built by Roskosmos in the interests of the Russian Academy of Sciences, represented by its Space Research Institute (IKI) in the framework of the Russian Federal Space Program, with the participation of the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The eROSITA X-ray telescope was built by a consortium of German Institutes led by MPE, and supported by DLR. The SRG spacecraft was designed, built, launched, and operated by the Lavochkin Association and its subcontractors. The science data are downlinked via the Deep Space Network Antennae in Bear Lakes, Ussurijsk, and Baikonur, funded by Roskosmos. The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nuernberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig Maximilians Universität München also participated in the science preparation for eROSITA. The eROSITA data were processed using the eSASS/NRTA software system developed by the German eROSITA consortium and analyzed using proprietary data reduction software developed by the Russian eROSITA Consortium. IK acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679. References Abell, G. O. & Margon, B. 1979, Nature, 279, 701 Abeysekara, A. U., Albert, A., Alfaro, R., et al. 2018, Nature, 562, 82 Abolmasov, P., Maryeva, O., & Burenkov, A. N. 2010, AN, 331, 412 Alfaro, R., Alvarez, C., Arteaga-Velázquez, J. C., et al. 2024, ApJ, 976, 30 Arnason, R. M., Papei, H., Barmby, P., Bahramian, A., & Gorski, M. D. 2021, MNRAS, 502, 5455 Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17 Asahina, Y., Ogawa, T., Kawashima, T., et al. 2014, ApJ, 789, 79 Atapin, K. E. & Fabrika, S. N. 2016, AstL, 42, 517 Begelman, M. C., King, A. R., & Pringle, J. E. 2006, MNRAS, 370, 399 Begelman, M. C., Sarazin, C. L., Hatchett, S. P., McKee, C. F., & Arons, J. 1980, ApJ, 238, 722 Blundell, K. M. & Bowler, M. G. 2004, ApJL, 616, L159 Blundell, K. M., Schmidtobreick, L., & Trushkin, S. 2011, MNRAS, 417, 2401 Borkowski, K. J., Lyerly, W. J., & Reynolds, S. P. 2001, ApJ, 548, 820 Boumis, P., Meaburn, J., Alikakos, J., et al. 2007, MNRAS, 381, 308 Bregman, J., Cen, R., Chen, Y., et al. 2023, SCPMA, 66, 299513 Brinkmann, W., Aschenbach, B., & Kawai, N. 1996, A&A, 312, 306 Brinkmann, W., Fink, H. H., Massaglia, S., Bodo, G., & Ferrari, A. 1988, A&A, 196, 313 Brinkmann, W. & Kawai, N. 2000, A&A, 363, 640 Brinkmann, W., Kawai, N., Matsuoka, M., & Fink, H. H. 1991, A&A, 241, 112 Brinkmann, W., Kotani, T., & Kawai, N. 2005, A&A, 431, 575 Brinkmann, W., Pratt, G. W., Rohr, S., Kawai, N., & Burwitz, V. 2007, A&A, 463, 611 Broderick, J. W., Fender, R. P., Miller-Jones, J. C. A., et al. 2018, MNRAS, 475, 5360 Bykov, A. M., Osipov, S. M., Romansky, V. I., et al. 2025, PhRvD, 112, 063017 Cherepashchuk, A., Belinski, A., Dodin, A., & Postnov, K. 2023, NewA, 103, 102060 Cherepashchuk, A. M. 1981, MNRAS, 194, 761 Cherepashchuk, A. M., Dodin, A. V., & Postnov, K. A. 2025, arXiv, arXiv:2506.01106 Cherepashchuk, A. M., Esipov, V. F., Dodin, A. V., Davydov, V. V., & Belinskii, A. A. 2018, ARep, 62, 747 Article number, page 11 of 14 A&A proofs: manuscript no. current Chi, Y.-H., Huang, J., Zhou, P., et al. 2024, ApJL, 975, L28 Churazov, E., Khabibullin, I., Bykov, A. M., Lyskova, N., & Sunyaev, R. 2023, A&A, 670, A156 Churazov, E., Khabibullin, I., Lyskova, N., Sunyaev, R., & Bykov, A. M. 2021a, A&A, 651, A41 Churazov, E., Khabibullin, I., Lyskova, N., Sunyaev, R., & Dolag, K. 2025, arXiv, arXiv:2507.19987 Churazov, E., Khabibullin, I., & Sunyaev, R. 2020, MNRAS, 495, L51 Churazov, E. M., Khabibullin, I. I., & Bykov, A. M. 2024, A&A, 688, A4 Churazov, E. M., Khabibullin, I. I., Bykov, A. M., et al. 2021b, MNRAS, 507, 971 Dubner, G. M., Holdaway, M., Goss, W. M., & Mirabel, I. F. 1998, AJ, 116, 1842 Eichler, D. 1983, ApJ, 272, 48 Eikenberry, S. S., Cameron, P. B., Fierce, B. W., et al. 2001, ApJ, 561, 1027 Elston, R. & Baum, S. 1987, AJ, 94, 1633 Fabian, A. C. & Rees, M. J. 1979, MNRAS, 187, 13P Fabrika, S. 2004, ASPRv, 12, 1 Fabrika, S. & Mescheryakov, A. 2001, in IAU Symposium, Vol. 205, Galaxies and their Constituents at the Highest Angular Resolutions, ed. R. T. Schilizzi, 268 Farnes, J. S., Gaensler, B. M., Purcell, C., et al. 2017, MNRAS, 467, 4777 Filippova, E., Revnivtsev, M., Fabrika, S., Postnov, K., & Seifina, E. 2006, A&A, 460, 125 Fogantini, F. A., García, F., Combi, J. A., et al. 2023, A&A, 669, A149 Forman, W., Jones, C., Cominsky, L., et al. 1978, ApJS, 38, 357 Fragile, P. C., Middleton, M. J., Bollimpalli, D. A., & Smith, Z. 2025, MNRAS, 540, 2820 Fuchs, Y., Koch Miramond, L., & Ábrahám, P. 2006, A&A, 445, 1041 Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1 Geldzahler, B. J., Pauls, T., & Salter, C. J. 1980, A&A, 84, 237 Goodall, P. T., Alouani-Bibi, F., & Blundell, K. M. 2011a, MNRAS, 414, 2838 Goodall, P. T., Blundell, K. M., & Bell Burnell, S. J. 2011b, MNRAS, 414, 2828 Goranskij, V. 2011, PZ, 31, 5 Green, D. A. 2025, JApA, 46, 14 Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S., & Finkbeiner, D. 2019, ApJ, 887, 93 Gregory, P. C., Scott, W. K., Douglas, K., & Condon, J. J. 1996, ApJS, 103, 427 Grindlay, J. E., Band, D., Seward, F., et al. 1984, ApJ, 277, 286 H. E. S. S. Collaboration, Aharonian, F., Ait Benkhali, F., et al. 2024, Sci, 383, 402 Heinz, S. & Sunyaev, R. 2002, A&A, 390, 751 Hjellming, R. M. & Johnston, K. J. 1981a, ApJL, 246, L141 Hjellming, R. M. & Johnston, K. J. 1981b, Nature, 290, 100 Holden, D. J. & Caswell, J. L. 1969, MNRAS, 143, 407 Jiang, Y.-F., Stone, J. M., & Davis, S. W. 2019, ApJ, 880, 67 Kaaret, P., Feng, H., & Roberts, T. P. 2017, ARA&A, 55, 303 Kaaret, P., Ferrazzoli, R., Silvestri, S., et al. 2024, ApJL, 961, L12 Katz, J. I. 1980, ApJL, 236, L127 Katz, J. I., Anderson, S. F., Margon, B., & Grandi, S. A. 1982, ApJ, 260, 780 Kawai, N., Matsuoka, M., Pan, H.-C., & Stewart, G. C. 1989, PASJ, 41, 491 Khabibullin, I., Churazov, E., & Sunyaev, R. 2022, MNRAS, 509, 6068 Khabibullin, I., Medvedev, P., & Sazonov, S. 2016, MNRAS, 455, 1414 Khabibullin, I. & Sazonov, S. 2016, MNRAS, 457, 3963 Khabibullin, I. I., Churazov, E. M., Bykov, A. M., Chugai, N. N., & Sunyaev, R. A. 2023, MNRAS, 521, 5536 Khabibullin, I. I., Churazov, E. M., Chugai, N. N., et al. 2024, A&A, 689, A278 Khabibullin, I. I. & Sazonov, S. Y. 2012, AstL, 38, 443 Khabibullin, I. I. & Sazonov, S. Y. 2017, AstL, 43, 388 Khabibullin, I. I. & Sazonov, S. Y. 2019, AstL, 45, 282 Kimura, S. S., Murase, K., & Mészáros, P. 2020, ApJ, 904, 188 Kirshner, R. P. & Chevalier, R. A. 1980, ApJL, 242, L77 Konigl, A. 1983, MNRAS, 205, 471 Kotani, T., Kawai, N., Matsuoka, M., & Brinkmann, W. 1996, PASJ, 48, 619 Kraft, R., Markevitch, M., Kilbourne, C., et al. 2022, arXiv, arXiv:2211.09827 Kubota, K., Ueda, Y., Kawai, N., et al. 2010, PASJ, 62, 323 Leibowitz, E. M. 1984, MNRAS, 210, 279 LHAASO Collaboration. 2024, arXiv, arXiv:2410.08988 Lopez, L. A., Marshall, H. L., Canizares, C. R., Schulz, N. S., & Kane, J. F. 2006, ApJ, 650, 338 Margon, B. 1984, ARA&A, 22, 507 Margon, B., Ford, H. C., Katz, J. I., et al. 1979, ApJL, 230, L41 Marshall, H. L., Canizares, C. R., Hillwig, T., et al. 2013, ApJ, 775, 75 Marshall, H. L., Canizares, C. R., & Schulz, N. S. 2002, ApJ, 564, 941 Medvedev, A. & Fabrika, S. 2010, MNRAS, 402, 479 Medvedev, P. S., Khabibullin, I. I., & Sazonov, S. Y. 2019, AstL, 45, 299 Medvedev, P. S., Khabibullin, I. I., Sazonov, S. Y., Churazov, E. M., & Tsygankov, S. S. 2018, AstL, 44, 390 Medvedev, P. S., Khabibullin, I. I., Semena, A. N., et al. 2022, AstL, 48, 389 Middleton, M. J., Walton, D. J., Alston, W., et al. 2021, MNRAS, 506, 1045 Migliari, S., Fender, R. P., Blundell, K. M., Méndez, M., & van der Klis, M. 2005, MNRAS, 358, 860 Miller-Jones, J. C. A., Migliari, S., Fender, R. P., et al. 2008, ApJ, 682, 1141 Monceau-Baroux, R., Porth, O., Meliani, Z., & Keppens, R. 2015, A&A, 574, A143 Namiki, M., Kawai, N., Kotani, T., Mamauchi, S., & Brinkmann, W. 2000, AdSpR, 25, 709 Ohmura, T., Ono, K., Sakemi, H., et al. 2021, ApJ, 910, 149 Ohsuga, K. & Mineshige, S. 2011, ApJ, 736, 2 Panferov, A. 2014, A&A, 562, A130 Panferov, A. A. 2017, A&A, 599, A77 Panferov, A. A. & Kabanov, A. A. 2013, arXiv, arXiv:1306.4486 Peretti, E., Petropoulou, M., Vasilopoulos, G., & Gabici, S. 2025, A&A, 698, A188 Peter, W. & Eichler, D. 1993, ApJ, 417, 170 Poutanen, J., Lipunova, G., Fabrika, S., Butkevich, A. G., & Abolmasov, P. 2007, MNRAS, 377, 1187 Predehl, P., Andritschke, R., Arefiev, V., et al. 2021, A&A, 647, A1 Predehl, P., Sunyaev, R. A., Becker, W., et al. 2020, Nature, 588, 227 Rosado, M., Sánchez-Cruces, M., Ambrocio-Cruz, P., & Trejo, D. 2021, MNRAS, 506, 4263 Safi-Harb, S., Mac Intyre, B., Zhang, S., et al. 2022, ApJ, 935, 163 Safi-Harb, S. & Ögelman, H. 1997, ApJ, 483, 868 Safi-Harb, S. & Petre, R. 1999, ApJ, 512, 784 Sakai, Y., Yamada, S., Sakemi, H., et al. 2025, arXiv, arXiv:2507.19042 Sakemi, H., Omae, R., Ohmura, T., & Machida, M. 2021, PASJ, 73, 530 Sarazin, C. L., Begelman, M. C., & Hatchett, S. P. 1980, ApJL, 238, L129 Seward, F., Grindlay, J., Seaquist, E., & Gilmore, W. 1980, Nature, 287, 806 Seward, F. D., Page, C. G., Turner, M. J. L., & Pounds, K. A. 1976, MNRAS, 175, 39P Shakura, N. I. & Sunyaev, R. A. 1973, A&A, 24, 337 Sharpless, S. 1959, ApJS, 4, 257 Shklovskii, I. S. 1981, SvA, 25, 315 Shuder, J. M., Hatfield, B. F., & Cohen, R. D. 1980, PASP, 92, 259 Spencer, R. E. 1979, Nature, 282, 483 Stephenson, C. B. & Sanduleak, N. 1977, ApJS, 33, 459 Stewart, G. C., Watson, M. G., Matsuoka, M., et al. 1987, MNRAS, 228, 293 Sunyaev, R., Arefiev, V., Babyshkin, V., et al. 2021, A&A, 656, A132 Tashiro, M., Kelley, R., Watanabe, S., et al. 2025, PASJ Tsuji, N., Inoue, Y., Khangulyan, D., et al. 2025, arXiv, arXiv:2510.06431 van den Heuvel, E. P. J. 1981, VA, 25, 95 van den Heuvel, E. P. J., Ostriker, J. P., & Petterson, J. A. 1980, A&A, 81, L7 van den Heuvel, E. P. J., Portegies Zwart, S. F., & de Mink, S. E. 2017, MNRAS, 471, 4256 Velázquez, P. F. & Raga, A. C. 2000, A&A, 362, 780 Wang, J., Reville, B., & Aharonian, F. A. 2025, ApJL, 989, L25 Watson, M. G., Stewart, G. C., Brinkmann, W., & King, A. R. 1986, MNRAS, 222, 261 Watson, M. G., Willingale, R., Grindlay, J. E., & Seward, F. D. 1983, ApJ, 273, 688 Westerhout, G. 1958, BAN, 14, 215 Whitmire, D. P. & Matese, J. J. 1980, MNRAS, 193, 707 Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914 Yamauchi, S., Kawai, N., & Aoki, T. 1994, PASJ, 46, L109 Yoshioka, S., Mineshige, S., Ohsuga, K., Kawashima, T., & Kitaki, T. 2022, PASJ, 74, 1378 Zavala, J., Velázquez, P. F., Cerqueira, A. H., & Dubner, G. M. 2008, MNRAS, 387, 839 Zealey, W. J., Dopita, M. A., & Malin, D. F. 1980, MNRAS, 192, 731 Article number, page 12 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA Appendix A: Spectrum of the central source As one can see in Fig. 6, both blue-shifted and red-shifted lines of highly ionized (H-like and He-like) heavy elements (in par- ticular, silicon, sulfur, iron, and nickel) moving at 0.26 speed of light can be easily discriminated and identified. They arise from the approaching (with apparent redshift 𝑧app =-0.085) and the receding jet (with apparent redshift 𝑧rec =0.167), respectively. In agreement with the findings of numerous previous observations (see Medvedev et al. 2019, for a recent summary), a factor of 10 overabundance of nickel (with respect to iron) in the gas of the jets is required to describe the spectrum well (most clearly indi- cated by the ratio of the Ni XXVII and Fe XXV lines). Spectral resolution of eROSITA is not sufficient to resolve the width of the lines, which is known to be dominated by the ballistic motion of gas within a cone of with opening angle Θ ∼1 deg, as measured by high resolution spectroscopy with Chandra gratings (e.g. Mar- shall et al. 2002). Some residual emission can also be observed in the iron K complex region between 6 and 7 keV, which could come from the reflection of central source emission on the walls of the optically thick accretion disk funnel (Medvedev & Fabrika 2010) or in the optically thin accretion disk wind (Medvedev et al. 2018, 2019). Comparison of the spectra taken in the two epochs of observations separated by one orbital period shows very good consistency between them3, with the biggest differ- ence observed in the 6-7 keV band. This might be related to the change in the visible geometry of this region or to the physical conditions within it (see a discussion in Medvedev et al. 2019). High-resolution spectroscopy of this emission with the XRISM observatory (Tashiro et al. 2025) will open new possibilities for characterizing this emission and potentially give access to more intricate diagnostics based on velocity structure and resonant scattering in the jets (Khabibullin & Sazonov 2012), as well as possible signatures of multiphase cooling and ionization state dynamics in supersonically expanding jets prone to thermal in- stability and fragmentation (Brinkmann et al. 1988; Brinkmann & Kawai 2000; Khabibullin et al. 2016). The SS433’s spec- trum presented here illustrates the spectroscopic capabilities of eROSITA and shows that, for sufficiently bright sources, reliable continuum and line characterization can be achieved up to 10 keV. Appendix B: Parameters of the spectral models Here we list parameters of the best-fit models used to describe spectra shown in Fig. 7 and discussed in Sec. 5. The spectral anal- ysis was performed using standard routines of the XSPEC pack- age (Arnaud 1996). Namely, the absorbed (using tbabs model by Wilms et al. 2000) non-equilibrium plasma emission model (nei model by Borkowski et al. 2001) was used to describe ther- mal emission, while similarly absorbed thermal bremsstrahlung model (tbabs*brems in XSPEC notation) was used for a phe- nomenological description of the non-thermal emission from the EXJs. Due to the complexity of the system, the X-ray emission from all these regions is likely a mixture of several components; hence, one needs to take these derived parameters with caution and probably as approximate guidelines for the physical con- ditions within them. For the background estimation, adjacent regions of significant size were used, and the signal inside them was subtracted from the corresponding source region. For all re- gions except for the relatively faint Regions 4 and 8, the relative 3 Since one orbital period separation is small compared to the preces- sion period and the observations were close to an extreme point of the precession curve, the change in the Doppler shifts of the jets is small. Table B.1. Parameters (with 1𝜎uncertainties) of the best fit models for the spectra extracted from regions shown in Fig. 5. The thermal non-equilibrium model used for Regions 1,3,4,6-8 is tbabs*nei with Solar abundance of heavy elements having the absorbing column density 𝑁H, gas temperature 𝑘𝑇, ionization timescale 𝜏and the standard XSPEC emission measure Norm = 10−14nenHV/(4𝜋d2 SS 433) normalization (per arcmin−2) as free parameters. The value of Norm converts into the hy- drogen number density as 𝑛H = 0.2 cm−3(Norm/10−5)0.5(𝑑𝑙/pc)−0.5, where 𝑑𝑙is the line-of-sight extent of the emitting region (assuming fill- ing factor 𝑓equal to unity and 𝑛e = 1.21𝑛H for the fully ionized cosmic plasma). For the non-thermal emission model used for Regions 2 and 5 we list parameters of the tbabs*brems fits, where 𝑘𝑇is the effective temperature of the bremsstrahlung spectrum. For the normalization, we divided the standard XSPEC normalization of the brems model by the factor 3.28(=10−14/3.05 × 10−15) to match the normalization definition of the nei model used for the thermal regions. Correspondingly, the same conversion to the number density of the emitting gas can be used in this case as well, demonstrating that would the bremsstrahlung model were true, the required gas density would be comparable to the density of the ambient medium. # 𝑁H 𝑘𝑇 𝜏 Norm 1022 cm−2 keV 1011 s cm−3 10−5 Thermal regions (tbabs*vnei) 1 0.67±0.02 0.29±0.02 0.26±0.05 12.33±3.63 3 0.56±0.02 0.56±0.07 0.11±0.01 0.80±0.19 4 0.79±0.05 0.95±0.25 0.09±0.01 0.21±0.07 6 0.85±0.03 0.76±0.06 0.64±0.12 2.01±0.28 7 0.72±0.02 1.30±0.18 0.08±0.01 0.88±0.13 8 0.75±0.05 0.78±0.22 0.13±0.05 0.42±0.18 Non-thermal regions (tbabs*brems) 2 0.51±0.01 3.05±0.15 - 1.62±0.04 5 0.84±0.03 14.5±3.9 - 0.64±0.01 contribution of the background emission in the source regions was rather small. Appendix C: Impact of foreground absorption In this section, we demonstrate that some of the large-scale varia- tions in the diffuse X-ray emission from W50 is plausibly caused by foreground absorption. To do so, we used the 3D dust maps from Bayestar19 (see Green et al. 2019, for detailed description). We are looking for the intervening absorption between W50 and us, i.e., physically unrelated layers of the absorbing material. Given the adopted distance to SS 433 of 5 kpc, we integrated extinction (AV) only up to a distance of 3 kpc from the Sun. The resulting extinction map is superposed on the X-ray image in Fig. C.1. A prominent wide dust lane is seen in this image that crosses the central part of the W50 nebula diagonally from NE to SW. Typical values of the extinction in this region are ∼5 −10, which translate into equivalent hydrogen column den- sities ∼(1−2) ×1022 cm−2. Therefore, soft X-ray emission from Article number, page 13 of 14 A&A proofs: manuscript no. current Fig. C.1. X-ray map of W50 in Galactic coordinates with the extinction map (in red) superposed. The extinction is estimated from the Bayestar19 3D maps Green et al. (2019) up to a distance of 3 kpc from the Sun, i.e., conservatively in the foreground to W50. The extinction (AV) values range between ∼5 and ∼10 in regions overlapping with the central quasi-spherical part of W50. This apparently accidental projection of the absorbing dust/gas onto W50 leads to significant suppression of the soft X-ray flux in the affected areas. the central part of W50 can be heavily absorbed (several orders of magnitude at 1 keV). While some geometric correspondences between the X-ray and extinction maps look tantalizing, we con- clude that most probably this is a by-chance projection. Therefore, the lack of observed soft X-ray emission from the quasi-circular part of the nebula does not place significant constraints on the properties of W50. The soft X-ray emission might be there, but we can not see it. Article number, page 14 of 14
Astronomy & Astrophysics manuscript no. current October 17, 2025 X-ray panorama of the SS 433/W50 complex by SRG/eROSITA Rashid Sunyaev1, 2, Ildar Khabibullin3, 1, 2, Eugene Churazov1, 2, Marat Gilfanov2, 1, Pavel Medvedev2, and Sergey Sazonov2 1 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85741 Garching, Germany 2 Space Research Institute (IKI), Profsoyuznaya 84/32, Moscow 117997, Russia 3 Universitäts-Sternwarte, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstr.1, 81679 München, Germany October 17, 2025 ABSTRACT Galactic microquasar SS 433 and the radio nebula W50 surrounding it present a prototypical example of a hyper-Eddington binary system shaping its ambient interstellar medium via energetic outflows. In this paper, we present X-ray observations of the SS 433/W50 complex by the eROSITA telescope onboard the SRG space observatory. These data provide images of the entire nebula characterized by a very large dynamic range and allow spectral analysis of the diffuse X-ray emission. In particular, these data illustrate a close connection between the thermal and non-thermal components of W50 on scales ranging from sub-parsecs, represented by narrow X-ray bright filaments, to the entire extent ≳100 pc of the nebula. These data also allow us to fully characterize a pair of nearly symmetric, sharp-edged, elongated structures aligned with the orbital axis of the binary system, which lack radio counterparts, but are prominent in very high energy gamma-ray emission. The resulting multifaceted picture of the interaction between energetic outflows and the surrounding medium paves the way for future focused multiwavelength observations and dedicated numerical simulations. Key words. ISM: jets and outflows - X-rays: binaries - plasmas - acceleration of particles 1. Introduction SS 433 is a unique close stellar binary system in our Galaxy (located at l, b= 39.7◦, -2.2◦and dSS 433 ≈5 kpc) composed of a relativistic compact object (a black hole or a neutron star) which accretes matter from its companion at sustainably hyperEddington rate (see Fabrika 2004; Cherepashchuk et al. 2025, for reviews). Having been first listed in the catalog of point-like Hαemitters (Stephenson & Sanduleak 1977), it quickly became one of the most studied astrophysical objects thanks to its very peculiar optical spectrum (Margon et al. 1979) featuring pairs of narrow (with half-opening angle θj ≲2◦) blue- and red-shifted lines (with respect to their rest frame positions) with the shifts varying periodically in time (see Margon 1984, for a review of early observational results). A kinematic model associates this emission with a pair of narrow relativistic jets having constant bulk velocity of a quarter of the speed of light and changing direction in a regular precession manner with a period Pprec ≈162 days (Abell & Margon 1979; Fabian & Rees 1979). This period is much longer than the period of orbital modulation (Porb = 13.08 days) and is interpreted as due to precession of a thick accretion disk with the amplitude of Θprec ≈21◦and the mean axis inclined by the angle i= 78◦to the line of sight (e.g., Katz 1980; Sarazin et al. 1980; Whitmire & Matese 1980). Subsequent detection of extended and variable radio emission (Spencer 1979) perfectly described by the precessing jets pattern fully confirmed the picture of twin relativistic outflows (Hjellming & Johnston 1981a,b; Blundell & Bowler 2004). Moreover, the proper motion of the individual radio-emitting blobs allowed one to constrain the distance to the source with a remarkable 10% accuracy, being dSS 433 = 5 ± 0.5 kpc (Hjellming & Johnston 1981a; Blundell & Bowler 2004; Marshall et al. 2013). Despite indications of both quasi-regular (nutation, Katz et al. 1982) and sporadic (jittering, Kubota et al. 2010) deviations from the predictions of this simple model as well as presence of (relatively short) periods of time when the Doppler shifts of optical emission lines change significantly (e.g., Blundell et al. 2011; Medvedev et al. 2022), all the derived parameters of the model stay remarkably stable over more than 40 years of continuous monitoring observations (Cherepashchuk et al. 2018). This stability likely reflects extreme robustness of the mass transfer occurring in the hyper-Eddington regime (van den Heuvel 1981; van den Heuvel et al. 2017) with the estimated accretion rate ¤M≳10-4M⊙yr-1 ∼1000 ¤MEdd(MX/3M⊙) (e.g., Shklovskii 1981; Cherepashchuk 1981; Leibowitz 1984; Fuchs et al. 2006), where MX is the mass of the compact object and M⊙is the Solar mass and ¤MEdd is the critical Eddington mass accretion rate. The compact object is most likely a black hole with MX ≳8M⊙(Cherepashchuk et al. 2023), although the possibility of it being a neutron star (MX < 3M⊙) has also been discussed (e.g. Goranskij 2011; Medvedev et al. 2018). Since X-ray emission coming from the innermost parts of the hyper-Eddington accretion disk is expected to be strongly beamed in the axial direction (Shakura & Sunyaev 1973; Begelman et al. 2006; Poutanen et al. 2007), we are not able to observe it directly, given the high inclination angle of the system. Indeed, although X-ray emission from this source has also been detected early on (Seward et al. 1976; Forman et al. 1978), its luminosity is relatively low 1036 erg s-1 compared to the Eddington luminosity (Grindlay et al. 1984). As a result, SS 433 has long been considered as a beamed away prototype of Ultra Luminous X-ray sources (ULXs, e.g., Kaaret et al. 2017, for a review) observed in normal star-forming galaxies (Fabrika & Mescheryakov 2001; Begelman et al. 2006; Poutanen et al. 2007), although possible indications of the 'hidden' central emission are yet to be confirmed Article number, page 1 of 14 16 Oct 2025 A&A proofs: manuscript no. current (Medvedev & Fabrika 2010; Khabibullin & Sazonov 2016, 2019; Middleton et al. 2021; Fogantini et al. 2023). Instead, what dominates X-ray emission of the system is again thermal emission from a twin pair of the baryonic jets having almost identical orientation and bulk velocity to the optically emitting jets (Watson et al. 1986). Numerous pairs of blue- and red-shifted narrow emission lines of highly ionized heavy elements (primarily, silicon, sulfur, iron and nickel) allow one to build a multi-temperature model of the ballistically expanding and cooling flow (Watson et al. 1986; Kotani et al. 1996; Marshall et al. 2002; Brinkmann et al. 2005; Medvedev & Fabrika 2010; Marshall et al. 2013; Khabibullin et al. 2016) which first becomes visible when its temperature is above 20 keV and then cools down below 1 keV and likely fragments later on due to thermal instability (Brinkmann et al. 1988). Detailed spectroscopic modeling of this emission (e.g. Khabibullin et al. 2016; Medvedev et al. 2019) in combination with the eclipsing observations of the jets by the companion star (Stewart et al. 1987; Kawai et al. 1989; Filippova et al. 2006; Lopez et al. 2006; Atapin & Fabrika 2016) enable very robust determination of the jets' physical properties (so-called jets 'biometry', consisting of their size, opening angle, velocity, temperature, and the mass flux). Thanks to this we know very robustly the kinetic luminosity of these outflows, which exceeds 1039 erg s-1 (e.g., Medvedev et al. 2019), confirming the necessity of a hyper-Eddington 'engine' releasing large amount of energy in the form of relativistic outflows, consistently with the early theoretical (Shakura & Sunyaev 1973) and modern numerical (e.g., Ohsuga & Mineshige 2011; Jiang et al. 2019; Yoshioka et al. 2022; Fragile et al. 2025) predictions. In contrast to the radiation freely escaping the system, the kinetic energy of the launched outflows cannot be easily transported away from the source and has to be deposited into the surrounding interstellar medium. The estimated total amount of this energy exceeds 1051 erg for a life-time of the system at the level of ≳30,000 yr (e.g., van den Heuvel et al. 1980). This energy is at least on par with the energy of the initial supernova explosion that should have accompanied the birth of the compact object, meaning that a quite large region ≳10 pc in its vicinity can be shaped by this later activity (e.g., Begelman et al. 1980; Zealey et al. 1980; van den Heuvel 1981; Velázquez & Raga 2000; Zavala et al. 2008; Goodall et al. 2011a; Churazov et al. 2020, 2024). Indeed, SS 433 is known to be surrounded by the famous giant (∼200 × 100 pc) radio nebula W50 (Westerhout 1958; Holden & Caswell 1969; Geldzahler et al. 1980; Elston & Baum 1987; Dubner et al. 1998; Goodall et al. 2011b; Broderick et al. 2018; Sakemi et al. 2021), which has a suggestive elongated morphology, with the elongation axis being co-aligned with the symmetry axis of the jets precession cone. The asymmetry of the lobes, or "ears", of the nebula can be readily explained by the noticeable stratification of the surrounding medium, given that the system is located at the height hSS 433 = dSS 433 sin(b) ≈200 pc below the Galactic plane and has the size comparable to the scale height of the exponential cold gas disk (e.g., Velázquez & Raga 2000; Zavala et al. 2008; Goodall et al. 2011a). The interior of the nebula is also known to shine in soft thermal X-rays (Brinkmann et al. 1996, 2007; Safi-Harb & Ögelman 1997; Chi et al. 2024), hard non-thermal (Seward et al. 1980; Watson et al. 1983; Yamauchi et al. 1994; Safi-Harb & Petre 1999; Namiki et al. 2000; Safi-Harb et al. 2022; Kaaret et al. 2024) and filamentary Hαemission (Kirshner & Chevalier 1980; Shuder et al. 1980; Boumis et al. 2007; Abolmasov et al. 2010; Farnes et al. 2017; Rosado et al. 2021), indicative of the high velocity shocks propagating through a relatively dense interstellar medium. The nature of the extended X-ray jets (EXJ) remains more puzzling, although their morphology, spectra, and polarization all point to the synchrotron origin of the emission (e.g., SafiHarb et al. 2022; Kaaret et al. 2024). Most recently, Very High Energy (VHE) emission from these structures has been detected and mapped (e.g., Abeysekara et al. 2018; Alfaro et al. 2024; H. E. S. S. Collaboration et al. 2024; LHAASO Collaboration 2024), demonstrating that indeed ultra-relativistic electrons with energy above 100 TeV are accelerated in the system (e.g, Kimura et al. 2020), probably as a consequence of internal shocks and recollimation in the energetic trans-relativistic outflows (Bykov et al. 2025). The overall production of cosmic rays by such systems (super-Eddington accretors with powerful outflows) might be of importance for the total CR budget and local variations in its spectrum from place to place (e.g., Heinz & Sunyaev 2002), in particular in the PeV regime (e.g., Peretti et al. 2025; Wang et al. 2025; Bykov et al. 2025). Although the paradigm of "a super-Eddington X-ray binary powering a nebula with energetic outflows" appears to be well established and consistent with the most salient observational properties of the SS 433/W50 complex, the details of almost all aspects of this interaction remain unclear. Indeed, all the phenomena directly associated with the central source can be trace only up to a distance of mere 0.1 pc from it, where the last signatures of the corkscrew pattern of the precessing jets are observed in radio and X-rays (Migliari et al. 2005; Miller-Jones et al. 2008; Khabibullin & Sazonov 2017; Sakai et al. 2025). A model invoking steady deceleration and disruption of the jets can, in principle, account for such behavior (Panferov 2014, 2017), but then the re-appearance of the "jets" with similar to the injection velocities tens of pc away from the central source might be problematic. Morphology of the soft X-ray emission, best mapped by the ROSAT mosaic observations (Brinkmann et al. 1996, 2005), is also quite puzzling, with large global brightness variations and thin filamentary structures, not very commonly observed in similar situations of supernova remnants (which typically feature shell-, plerion- or mixed types of morphology). Deep pointed observations with XMM-Newton, Chandra, Suzaku, and NuSTAR were capable of covering certain parts of the nebula (e.g., SafiHarb et al. 2022; Chi et al. 2024; Tsuji et al. 2025), but a uniform and sensitive coverage of the entire nebula with adequate spectroscopic capabilities is missing. Here we present results of X-ray observations of the SS 433/W50 complex by the eROSITA telescope (Predehl et al. 2021) onboard the SRG space observatory (Sunyaev et al. 2021), which were performed during its Performance Verification (PV) phase. The obtained mosaic provides us with X-ray images of the entire extent of W50 with unprecedented dynamic range, bringing X-ray data on par with the radio and other multi-wavelength data, and, in addition, delivers exquisite spectral information for the rich variety of diffuse structures. These data, in particular, demonstrate a close connection between the thermal and nonthermal components of W50 on scales ranging from subparsec, represented by narrow X-ray and radio bright filaments, to the entire extent of the nebula. These data also enable full characterization of a pair of nearly symmetric, sharp-edged, elongated structures aligned with the orbital axis of the binary system, which lack radio counterparts. The structure of the paper is following: we describe the observations and the obtained data in Sect. 2; results of the broadband and spectrally-resolved imaging are presented in Sect. 3 and Sect. 4, respectively; spectroscopic analysis of the emission from several selected regions follows in Sect. 5; the obtained global picture is discussed in Sect. 6 and summarized in Sect. 7. Article number, page 2 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA 2. Data Since W50 spans approximately 2.2 × 1.0 degrees on the sky, corresponding to the physical size of ∼190 × 90 pc at the distance of 5 kpc1, the observing campaign consisted of 9 individual telescope pointings (each covering a circular area of ∼1◦ in diameter). These pointings were arranged in a mosaic to result in a nearly uniform coverage of the entire nebula and, in addition, the highest sensitivity in the most interesting regions. The observing dates, from October 5th to October 8th and then from October 19th to October 20th, 2019, were chosen to coincide with the two consecutive orbital eclipses of the central source SS 433 by its normal companion star (according to orbital ephemerids by Eikenberry et al. 2001). During eclipses, the apparent X-ray flux of SS 433 goes down by a factor of several (e.g., Kawai et al. 1989; Lopez et al. 2006; Medvedev et al. 2019), minimizing possible adverse effects of contamination and pile-up due to the presence of a very bright object in the fieldof-view, even though relative suppression of the soft X-ray lines being rather minor (Marshall et al. 2013). The typical effective (i.e., equivalent to the nominal eROSITA on-axis performance) exposure time across the map amounts to 30-50 ks, leading to a factor of 10 deeper images compared to the previously available full-extent maps of W50 by ROSATobservatory in the 0.5-2 keV band (Brinkmann et al. 1996) and Einstein observatory in the 2-4 keV band (Watson et al. 1983). In addition to that, the eROSITA data have much better spatial (∼15" arcsec on the telescope axis and ∼28" averaged across the field-of-view) and spectral (∼65 eV at 1 keV) resolution (Predehl et al. 2021). A comparable quality of the data is provided by multiple Chandra and XMM-Newton observations, but their coverage of the whole W50 nebula is non-uniform and still incomplete after more than 20 years of observations (Brinkmann et al. 2005; Safi-Harb et al. 2022; Chi et al. 2024). On the other hand, the SRG/eROSITA data are complementary to these observations, as well as to the imaging observations in harder X-rays with NuSTAR, which offer a unique view of non-thermally emitting populations. In this paper, we focus on the SRG/eROSITA X-ray data alone and leave combined analyses for future studies. We also supplement the PV data with the data obtained in the course of the SRG/eROSITA All-Sky Survey (Predehl et al. 2021; Sunyaev et al. 2021). Although the survey data are much shallower in sensitivity (the typical exposure time per point is ∼1000 s after combining the data of four scans), they provide uniform coverage of the entire area surrounding the nebula. For them, the standard data reduction, background estimation and exposure- and vignetting corrections were applied in the same way as was done for mapping diffuse emission from nearby galaxy clusters (Churazov et al. 2021a, 2023), supernova remnants (Churazov et al. 2021b; Khabibullin et al. 2023, 2024), and the Galactic diffuse emission (Predehl et al. 2020; Khabibullin et al. 2022; Churazov et al. 2024) on even larger scales. Here we also go one step further and combine the PV and all-sky data in a seamless fashion, similarly as it has been done for Chandra and SRG/eROSITA observations of the Perseus cluster (Churazov et al. 2025). 1 The most recent parallax measurement for SS 433 in Gaia DR3 corresponds to the significantly larger distance of 8.5+2.0 -1.4 kpc (Gaia DR3 4293406612283985024, Gaia Collaboration et al. 2023), we caution of using this value given large residuals of the astrometric solution. The previous parallax measurement published in Gaia DR2 corresponded to a much smaller distance of 3.8+0.8 -1.1 kpc (Gaia DR2 4293406612283985024, Arnason et al. 2021). A comparison of the 4 × 4 degrees field in radio and X-ray bands is shown in Fig. 1. Although in the radio image, the outer boundary of the W50 nebula appears to be very prominent, the X-ray image is instead dominated by the very bright central object (SS 433) and elongated structures known as "extended X-ray jets" (EXJ) located well inside the radio boundary of W50 (Seward et al. 1980; Watson et al. 1983; Brinkmann et al. 1996). At a distance of 5 kpc, these images span 350 pc and range from 17 pc to almost 370 pc in distance from the Galactic plane. Given that the scale heights for the cold and warm components of the interstellar medium are tens and hundreds of pc, respectively, one could naturally expect signatures of the environment stratification (e.g., Goodall et al. 2011a). This is indeed revealed by the asymmetry of the W50 lobes, with the one directed towards the Galactic plane (the Western Lobe in equatorial coordinates) being more compact than the opposite one (the Eastern lobe). On larger scales, clear variations of the background and foreground diffuse X-ray emission are seen. Inspection of 3D reddening maps (e.g., Green et al. 2019) shows a clear anti-correlation of the reddening and soft X-ray surface brightness (see Appendix C), suggesting that absorption of X-rays is the main reason for these variations. Morphologically similar structures appear in the 3D reddening maps at a distance of 2-3 kpc, i.e., in the foreground to W50/SS 433. Therefore, it is plausible that the majority of them are physically unrelated to W50. 3. Broad-band imaging We now turn to the deeper PV data. The broad-band (0.5-4 keV) vignetting-corrected X-ray image of the entire W50 nebula is shown in Fig. 2, with the contours of radio emission at 4.85 GHz (Gregory et al. 1996) overlaid in white. The color coding reflects the measured X-ray surface brightness on a square-root scale spanning two orders of magnitude, highlighting the wide dynamic range of the diffuse X-ray emission captured by eROSITA. In addition to SS 433 itself (the extremely bright source in the center) and a multitude of point sources, the majority of which are nearby foreground stars or background AGN, the picture shows bright diffuse emission with rich internal structure. The X-ray picture can be broadly decomposed into three components. Firstly, there is a low surface brightness emission filling the entire projected extent of the nebula. It is brighter in the Eastern and Western lobes of W50 (left and right, respectively, in Fig. 2) and significantly dimmer along the Northern and Southern radio boundaries of W50. Remarkably, the central part of W50 (beyond the region contaminated by the PSF wings from SS 433) appears even fainter in X-rays, suggesting a shell-like geometry of the X-ray-emitting gas. Secondly, one can notice the presence of very narrow X-ray filaments, particularly inside and close to the boundaries of the Eastern and Western lobes. Their apparent width is ≳1 pc, while their length is ∼10 pc. Given the angular resolution of eROSITA (better than ∼28", i.e., ≲0.7 pc at 5 kpc), these filaments are fully resolved in both directions. The X-ray filaments form a complicated network with an overall structure very similar to the filaments of the cold and dense gas observed via Hα+[N II] emission at different locations within the nebula (Boumis et al. 2007; Farnes et al. 2017). The largest X-ray filaments have a flat transverse profile of the surface brightness and are an order of magnitude brighter than the surrounding medium. Such filaments of the dense X-ray emitting gas might arise as instabilities in the sheared flows, caustics of the strong shock waves, or due to heating of the preexisting cold gas filaments by hot gas and relativistic particles inside the lobes. Article number, page 3 of 14 A&A proofs: manuscript no. current 41.5 41.0 40.5 40.0 39.5 39.0 38.5 38.0 -0.5 -1.0 -1.5 -2.0 -2.5 -3.0 -3.5 -4.0 Galactic longitude [deg] Galactic latitude [deg] Sh 74 SS433 G038.7-01.3 3C396 3C397 41.5 41.0 40.5 40.0 39.5 39.0 38.5 38.0 -0.5 -1.0 -1.5 -2.0 -2.5 -3.0 -3.5 -4.0 Galactic longitude [deg] Fig. 1. Global radio and X-ray views of the 4 × 4 degrees region centered on SS 433 (both images are in Galactic coordinates). Left: Radio image (square-root scale) by Green Bank telescope at 6cm (4.85 GHz, Gregory et al. 1996). Right: SRG/eROSITA (particle background-subtracted exposure-corrected) 0.3-2.3 keV X-ray image (log scale, two orders of magnitude range) of the W50 field, which combines the PV and all-sky data accumulated over four consecutive scans after smoothing with a Gaussian beam with σ= 90 arcsec. The light-blue areas in this image (where the X-ray flux is low) correspond to regions with high Galactic column density of gas and dust, which attenuate X-rays. Locations of supernova remnants 3C396, 3C397 and G038.7-01.3 (e.g., Green 2025), as well as of a HII region Sharpless 74 (Sharpless 1959) are marked on both images. The final and perhaps the most visual feature of the image is a pair of nearly symmetrical, sharp-edged "extended X-ray jets" directed away from the central source. Their axis is not only aligned with the elongation of the radio nebula and location of the X-ray bright lobes, but also coincides with the projection of the precession axis of SS 433's famous trans-relativistic baryonic jets. The physical nature of these structures, first found by the Einstein observatory (Watson et al. 1983) and then observed with all major X-ray telescopes (Brinkmann et al. 1996; Safi-Harb & Ögelman 1997; Namiki et al. 2000; Brinkmann et al. 2007; Safi-Harb & Petre 1999), including the most recent polarimteric observations with IXPE (Kaaret et al. 2024), remains a matter of debate (e.g. Churazov et al. 2024). The SRG/eROSITA data clearly show a very rapid onset of emission at the innermost boundary of EXJs on both sides of the nebula (also demonstrated recently by Tsuji et al. 2025, using Chandra data). 4. Spectrally-decomposed imaging We take advantage of the unique possibility provided by SRG/eROSITA to probe the spectral characteristics of diffuse X-ray emission throughout the full extent of the nebula for the first time. To produce a spectrally-decomposed image, we divided the full spectral band presented earlier into three sub-bands: soft (0.5-1 keV), medium (1-2 keV), and hard (2-4 keV) bands, respectively. A composite RGB image showing the surface brightness of the emission (now on a linear scale) in these bands in red, green, and blue, respectively, is presented in Figure 3. The chosen spectral decomposition immediately reveals the presence of two distinct components in the observed diffuse emission: large regions of red-orange color where emission below 1 keV dominates, and green-cyan structures where prominent emission above 1 keV is present (including the bright central source SS 433). Compact sources also form two groups: the softer ("redyellow") ones are predominantly foreground stars, while harder ("blueish") sources are mostly background AGNs, which shine through the gas and dust in the disk of the Milky Way. Zoom-in images of the lobes (rotated and put side by side to help the comparison) are shown in Fig. 4 and demonstrate fine structure of the soft diffuse emission, contrasting with relatively smooth and well-confined hard emission. For better characterization of the diffuse emission, we identified bright point sources and masked them along with the central source SS 433. The resulting composite map of the residual diffuse emission is shown in Figure 5. The overlaid distance and angle rulers highlight the characteristic dimensions and morphology of the prominent structures. Namely, one can see that the "Extended X-ray Jets" start ∼23 pc away from the central source and demonstrate remarkable symmetry, with the Eastern one starting only a couple of parsecs further from the central source. Both EXJs, characterized by hard X-ray spectra, terminate around R∼60 -65 pc. This symmetry contrasts with the more asymmetric and irregular soft emission of the lobes. The same conclusion is even better illustrated by the sideby-side comparison of the two lobes in Fig. 4. Indeed, while the Eastern lobe extends up to ∼110 pc, the Western one terminates at ∼75 pc. The inclination of the relativistic jet's precession axis to the line of sight is 78◦(the relativistic jet on the eastern side is on average approaching us), so this difference cannot be Article number, page 4 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA N E 0.5-4 keV 0.9 deg / 80 pc / 260 light-years 1.3 deg / 115 pc / 375 light-years SRG/eROSITA, W50/SS433 Fig. 2. Broad-band high-dynamic-range 0.5-4 keV X-ray image of the giant radio nebula W50 by SRG/eROSITA (PV data). The size of the image is 2.2 degrees by 1.1 degrees, corresponding to ∼200 × 100 pc at the distance of W50, ∼5 kpc. The image is rotated so that the longer side of W50 is aligned with the horizontal axis. Colour-coding corresponds to the X-ray surface brightness on the square-root scale spanning two orders of magnitude. The bright spot in the center is the SS 433. It appears extended due to the wings of the telescope's Point Spread Function around SS 433, which itself is heavily saturated in this image. The white contours show the surface brightness of the radio emission from the nebula at 4.85 GHz (Gregory et al. 1996). With this deeper data (compared to the all-sky survey's data), a faint diffuse emission is seen between bright EXJs and the radio boundary of W50. 0.5-1 keV 1-2 keV 2-4 keV 0.9 deg / 80 pc / 260 light-years 1.3 deg / 115 pc / 375 light-years SRG/eROSITA, W50/SS433 Fig. 3. A composite X-ray image of the radio nebula W50. Colour-coded is the surface brightness of the X-ray emission (on a linear scale) in 0.5-1 keV (red), 1-2 keV (green), and 2-4 keV (blue) energy bands. The white arrows depict the projection of the precession cone of the SS 433 jets extrapolated to distances ∼100 pc. Hard and soft X-ray diffuse emission splits convincingly into two components: softer filamentary emission (red-yellow) and harder (green-blue) emission of EXJs. On top comes a multitude of nearby (active stars and accreting white dwarfs) and distant (mostly AGN) compact sources. For distant sources, the absorption by the Milky Way gas suppresses emission below 1 or 2 keV, giving them a blueish color. Article number, page 5 of 14 A&A proofs: manuscript no. current HPD 10 pc 2 pc 87 pc / 1 deg HPD 10 pc 2 pc 58 pc / 40 arcmin Fig. 4. The comparison of the Eastern and Western Lobes of W50 (see the full image in Fig. 3). The Western Lobe has been rotated by 180 degrees for the sake of comparison. The white circles show the Half-Power-Diameter of the eROSITA PSF corresponding to the physical scale of 0.7 pc. Both EXJs emerge at essentially the same distance from the central source with a remarkably sharp edge and continue for ∼30 pc. On the contrary, the more diffuse extensions have different lengths, plausibly associated with much higher ambient density in the Western direction. explained by geometrical effects only. However, it broadly agrees with the assumption that the energy flow from the central source interacts with the denser ambient medium in the western part, reflecting the gas density gradient perpendicular to the equatorial plane of the Galactic disk (e.g., Goodall et al. 2011a). Since the difference between the lobes' dimensions amounts to a sizeable fraction of them, one can infer the exponential scale-height of the ambient gas density profile ∼100 pc, consistent with the scale-height of the Galactic neutral gas disk. The X-ray emission from the Northern and Southern boundaries of the nebula appears soft and faint, being noticeably affected by the foreground dust absorption, especially in their Western parts (as illustrated in Appendix C). Taking this into account, we conclude that the soft X-ray emission can be traced across the full extent of the nebula, except for the "inner cavity" with a radius of ∼23 -25 pc. The outer boundary of the emission of the Northern and Southern arcs (outlined as dashed annuli including Region 4 in Fig. 5) is located at ∼38 pc from the center, coinciding with the boundary of the W50's radio emission. This morphology suggests that the soft X-ray emitting gas is confined in a thick (ΔR∼R/10) close-tospherical shell, driving a strong shock in the surrounding ISM. Article number, page 6 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA 7 8 2 0.5-1 keV 1-2 keV 2-4 keV SRG/eROSITA, W50/SS433 21 deg 10.5 deg SS 433 75 pc 65 pc 50 pc 38 pc 23 pc 110 pc 95 pc 75 pc 50 pc 38 pc 23 pc 6 5 1 4 3 Fig. 5. "Anatomy" of the diffuse X-ray emission inside W50. The image shows a composite map of the X-ray surface brightness in 0.5-1, 1-2, and 2-4 keV energy bands (similar to Fig. 3) after masking bright foreground and background point sources, and the central source SS 433. Overlaid in white are markers of the distance from the central source (assuming dSS 433 = 5 kpc) and angle with respect to its orbital plane axis, coinciding with the precession axis of its narrow relativistic jets. The box regions, numbered from 1 to 6, are used for spectrum extraction and represent fiducial spectral components composing the image. The Eastern and the Western lobes are roughly confined inside the projection of the cone with the half-opening angle 21◦, while the EXJs encompass only the cone with a twice smaller opening angle,∼10◦. From the eROSITA spectrally decomposed RGB images, a gentle spectral evolution of the EXJs emission with distance is seen clearly. EXJ emission is significantly harder at its sharp inner boundary than ∼30 pc further away from SS 433, where EXJs blend with softer thermal emission that fills the rest of the lobe, in agreement with earlier findings with XMM-Newton and NuSTAR (Brinkmann et al. 2007; Safi-Harb et al. 2022). 5. X-ray spectra 5.1. Central source The spectral capabilities of eROSITA have been demonstrated using many objects, including SS 433 itself (e.g., Sunyaev et al. 2021). Since this source is extremely bright, the observations were intentionally performed during the two consecutive orbital eclipses of the compact object (and the jets) by the companion star, so its soft X-ray flux was dimmer by a factor of several compared to the typical levels (Kawai et al. 1989; Lopez et al. 2006; Marshall et al. 2013). The combined X-ray spectrum of SS 433 extracted from r= 3 arcmin aperture (with background estimated from r= 4 to 5 arcmin annuli, being however completely negligible compared the source) using the data of all pointings in which it falls within Field-of-View is shown in Fig. 6. As expected, it is well described by the model of the optically thin multitemperature emission from collimated flows that adiabatically expand and cool at distances from a few 1010 to a few 1012 cm from the central object (Brinkmann et al. 1988, 1991, 2005; Kotani et al. 1996; Marshall et al. 2002; Medvedev & Fabrika 2010; Khabibullin et al. 2016; Medvedev et al. 2019). Fig. 6. Combined spectrum of SS 433 accumulated over PV phase observations. Both blue-shifted and red-shifted lines of highly ionized (H-like and He-like) heavy elements (in particular, silicon, sulfur, iron, and nickel) are visible and come from the approaching and receding collimated baryonic jets. The model of multi-temperature cooling jets (Khabibullin et al. 2016) is able to the fit the data very well, although some excess emission is observed between 6 and 7 keV, probably coming from lower ionization species of iron existing in the accretion disk funnel or wind (e.g., Marshall et al. 2002; Brinkmann et al. 2005; Medvedev & Fabrika 2010; Medvedev et al. 2019). We discuss it more in Appendix A, while the main focus of this study is instead on the much fainter diffuse emission from the surrounding nebula. Article number, page 7 of 14 A&A proofs: manuscript no. current Fig. 7. Results of the spectral analysis for the data from eight representative regions. Top row. The left panel shows background-subtracted spectra of the brightest X-ray filament (Region 1 in Figure 5, red points with 1σerrorbars) and the diffuse emission (Region 3, blue points) in the Eastern lobe. The right panel shows spectra of the Northern Arc (Region 4, red points) and of the Southern extension (Region 8, blue points). Bottom row. Left panel - spectra of the diffuse emission in the Western lobe (Region 7 in Figure 5, blue points) and of the brightest filament in the Western lobe (Region 6, red points). Right panel - spectra for the termination region of the Eastern "extended X-ray jet" (Region 2, red points) and the base of the Western jet (Region 5, blue points). 5.2. Diffuse emission The high sensitivity of the presented data allowed us to perform spectroscopy of the diffuse emission in several regions and determine the physical state of the radiating plasma. We have selected eight regions, shown as numbered boxes in Figure 5, for detailed spectral analysis of their X-ray emission. For each region, the background (estimated from the adjacent regions) was subtracted to isolate the net signal. We present the parameters of the fitted models in Table B.1 and discuss the implications of the corresponding results below. 5.2.1. Eastern Lobe Emission from the brightest filament in the Eastern lobe (Region 1 in Figure 5) features a spectrum characteristic for a thermal (but non-equilibrium) optically-thin plasma with a temperature of ∼0.3 keV, as evidenced by the line emission from the highly ionized oxygen, neon, magnesium, silicon and iron (see the left panel in Figure 7). The line emissivity and ratios are consistent with the Solar abundance of heavy elements, but also indicate a non-equilibrium state of the gas ionization. Namely, it is qualitatively consistent with the expectation for a plane-parallel shell Article number, page 8 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA of the gas that experienced a passage of the shock with velocity v∼500 km s-1 (from the Rankine-Hugoniot relations for the temperature and density jump properties) some ∼104 yr ago. The inferred hydrogen number density, n∼0.4 cm-3, is a factor of a few higher than the expected gas density of the interstellar gas ∼300 pc away from the plane of the Galactic disk. The spectrum of the diffuse emission in the Eastern lobe (exemplified by Region 3 in Figure 5) also can be well described by thermal non-equilibrium emission of the gas with somewhat higher temperature ∼0.5 keV and lower hydrogen number density ∼0.1 cm-3 (see the left panel in Figure 7). 5.2.2. Western Lobe The emission from the Western lobe appears to be noticeably harder, partially due to the higher interstellar absorption in that direction. However, it also corresponds to the emission of plasma with genuinely higher temperature, ∼1 keV, and a similar hydrogen number density, n∼0.1 cm-3 (and a few times higher for the brightest filament), as illustrated by spectra from Regions 6 and 7 in Figure 5). The non-equilibrium ionization model (see the right panel in Figure 7) results in a comparable estimate of ∼104 yr for the ionization time-scale. The total absorption-corrected luminosity of the diffuse soft X-ray emission from the nebula is ∼1035 erg s-1 in 0.5-4 keV, which is by a factor of few tens smaller than the apparent X-ray luminosity of the central object, LX,SS 433 ∼few×1036 erg s-1,2 and much smaller than the supposed kinetic luminosity of the outflows from the supercritical accretion disk, Lk∼1039 erg s-1. However, the thermal energy content of the X-ray emitting gas in the Eastern lobe (with estimated gas number density 0.1 cm-3, temperature 0.5 keV and volume V ∼π× 70 pc × (20 pc)2 ≈ 88, 000 pc3 assuming roughly to cylindrical geometry ) is ∼ 4 × 1050 erg, requiring a supply of at least 1039 erg s-1 over 104 years. The mass of this gas amounts to ∼200M⊙, implying that the bulk of it cannot be provided by the outflows from SS 433. If the volume filling factor fof the emitting medium is, however, low, the estimate for the required mass and energy could be decreased by a factor of 1/ √︁ f, but similarly, the density should be increased, leading to the corresponding decrease in the time since the shock passage. That means that the X-ray emitting gas provides an estimate of the average energy input in the lobes at the level of ∼1039 erg s-1, which is comparable to the kinetic luminosity of the jets and the Eddington luminosity of the system (for the mass of the compact object below 10M⊙). A similar estimate can be obtained from the Western lobe, implying nearly symmetric energy injection along the nebula's axis at a very high rate over the last few tens of thousands of years. Given that the temperature inferred from the spectral fits with the non-equilbirum emission is driven mostly by characteristics of the ionization state, it is likely a lower limit on the actual temperature of the emitting gas. Hence, a significant portion of the gas energy content might be effectively hidden, while the required energy input should be increased correspondingly. 2 Due to the beaming of the radiation along the thick accretion disc axis (expected in the case of highly supercritical accretion), the actual X-ray luminosity of SS 433 might be orders of magnitude larger, namely comparable to the Eddington luminosity and kinetic luminosity of the jets, L∼1039 erg s-1 (e.g. Fabrika & Mescheryakov 2001; Begelman et al. 2006; Poutanen et al. 2007; Medvedev & Fabrika 2010; Khabibullin & Sazonov 2016; Middleton et al. 2021; Fogantini et al. 2023). 5.2.3. Northern and southern boundaries The emission from the nebula boundaries is exemplified by the Northern Arc region and the Southern Extension (Regions 4 and 8 in Figure 5). The thermal non-equilibrium model provides the temperature estimate ∼0.8 keV, low hydrogen number density, ∼0.1 cm-3 for the Northern Arc and even lower for the Southern Extension. The ionization timescale is, however, similar here as well, ∼104 yr. This observation contrasts with the brightness of the radio emission, which in the Northern Arc region is one of the brightest across the nebula. The high degree of linear polarization of the radio emission might indicate that the preferential orientation of the interstellar magnetic field in this region favors efficient acceleration of relativistic particles producing synchrotron emission. We note here that the central part of the W50 X-ray map suffers from higher photoelectric absorption than the Eastern and even Western lobes, as illustrated in Fig. C.1. This excess absorption is likely associated with the foreground gas and has no physical connection to SS 433/W50. 5.2.4. Extended X-ray jets In contrast to this "thermal" picture, the X-ray spectra of X-ray emission from the "extended X-ray jets" demonstrate a featureless spectrum that can be described by a power law with the slope in the range from Γ = 1.4 (at their base, e.g., Region 5 in Fig. 5) to Γ = 2.2 (at the termination points, e.g. Region 2 in Fig. 5). In agreement with previous spectral studies of this emission (e.g., Brinkmann et al. 2007), the thermal bremsstrahlung model also provides an equally good fit with temperatures from kT= 3 keV (at the termination) to 15 keV (at the base). The gas emissivity required to produce the X-ray luminosity at level 1035 erg s-1 implies hydrogen number densities n∼0.2 cm-3 in these regions (assuming a filling factor of unity). Clearly, if the thermal nature of this emission were true, it would have a drastically higher pressure compared to the ambient soft X-ray emitting gas and would expand sideways supersonically, driving a strong shock wave. Alternatively, the emission might result from the population of relativistic particles accelerated within EXJs. The detection of polarization in X-rays (Kaaret et al. 2024) and TeV emission from regions cospatial with the EXJs (Abeysekara et al. 2018; Alfaro et al. 2024; H. E. S. S. Collaboration et al. 2024) strongly supports the non-thermal scenario. The absence of any noticeable radio, infrared, or optical emission from these structures would then require substantial self-absorption of radiation at these wavelengths (which is unlikely given that the gas number density is likely very low inside them) or a rather peculiar energy distribution of the relativistic particles with a large fraction of them having energies in excess of tens of TeV. Another possibility could be scattering of very bright and beamed emission of the central source (e.g., Panferov & Kabanov 2013; Khabibullin & Sazonov 2016, 2019), but the small optical depth with respect to the Thomson scattering, τT = neσTL∼10-5(ne/0.1 cm-3)(L/50 pc), and the observed spectral gradient along these structures disfavor this possibility. 6. Discussion The Eastern EXJ was well mapped in X-rays with XMM-Newton, Chandra, and NuSTAR (e.g., Brinkmann et al. 2007; Safi-Harb et al. 2022), revealing its non-thermal nature. Furthermore, recent IXPE observation (Kaaret et al. 2024) revealed polarized X-ray emission from a patch adjacent to the base of the Eastern EXJ. These data suggest that X-ray photons are produced by a Article number, page 9 of 14 A&A proofs: manuscript no. current synchrotron mechanism with the magnetic field predominantly aligned with the EXJ axis. The eROSITA data show that the bases of both EXJs are located at essentially the same distances from SS 433, have similarly sharp inner boundaries, and non-thermal (featureless) spectra across their entire extents. These structures have often been considered as a direct consequence of the relativistic jets' propagation inside the nebula before they deposit the bulk of their kinetic energy in the lobes, energizing them and the whole extent of the nebula (e.g., Panferov 2014). However, the direction of the SS 433 jets is known to precess about the mean axis with the amplitude ∼21 degrees, which is a factor of 2 larger than the observed opening angle of EXJ. Since this extended X-ray emission comes from distances of ∼ tens of pc, i.e. orders of magnitude further away from the central source than the compact radio and X-ray jets, which are traced up to ∼a few 1017 cm, the actual connection between them is not obvious (e.g., Panferov 2017). It might be governed by recollimation of the jets' flows at some intermediate distance from the source (Eichler 1983; Peter & Eichler 1993), a historical change in the jets' direction (Goodall et al. 2011a; Monceau-Baroux et al. 2015), or a different regime of baryonic jets propagation through the magnetized ISM after their gas "freezes out" after recombining due to rapid adiabatic expansion, cooling losses and lack of ambient interactions (Churazov et al. 2020). If we assume that there were no recent drastic changes in the properties of W50/SS 433 (e.g., temporary quenching of the central source), the overall picture of the "large-scale energy flow" in the W50/SS 433, supported by X-ray data, can be schematically decomposed into three major components (illustrated in Fig. 8): - "Dark flow": from a compact source up to the base of EXJs covering the range of distances from ∼0.1 to ∼25 pc from SS 433. Over this distance range, there are no visible signatures in the X-ray (or any other) band. - "Non-thermal flow": from the base to the end of EXJs that have featureless non-thermal spectra (presumably, synchrotron emission). - "Thermal flow": extended diffuse regions between the W50 radio boundaries and EXJs. Its X-ray emission is typical for ISM shock-heated to sub-keV temperatures. Analytical and numerical models primarily aimed at reproducing W50's radio morphology often assume that a quasispherical part of the nebula is a remnant of the supernova explosion, while the elongated structures are directly associated with the energy release by the binary system (e.g., Begelman et al. 1980; Zealey et al. 1980; Eichler 1983; Peter & Eichler 1993; Velázquez & Raga 2000; Zavala et al. 2008; Goodall et al. 2011a; Asahina et al. 2014; Monceau-Baroux et al. 2015; Panferov 2017; Ohmura et al. 2021). In the model considered by Churazov et al. (2024), the entire nebula is powered by the central source. There, an anisotropic outflow/wind from the Hyper-Eddington accretor is assumed - a combination of a more powerful collimated outflow along the orbital axis and a quasi-isotropic wind in all other directions. In this model, the "Dark flow" phase corresponds to the free expansion of the wind, and the onset of the EXJs is associated with the termination shock of the isotropic wind, which initiates shocks in the collimated flow where the pressure rises suddenly. Finally, the thermal part is the shock-heated ISM. In this model, the characteristic shape of the W50 nebula (a quasispherical part and two extensions) reflects the anisotropy of the velocity and kinetic power of the wind, with no need for the impact of the narrow baryonic jets at all. In all these models, the outer shock, which delineates the radio boundary of W50, is not dissimilar to the shocks seen around middle-aged supernova remnants or stellar wind-blown bubbles (e.g., Konigl 1983; Dubner et al. 1998). What makes W50 "special" is the presence of EXJs, which are radio-faint but bright in X-rays and VHE gamma-rays. These structures are also plausibly related to efficient particle acceleration at strong shocks, but with a shock configuration that is rather different compared to typical SNR shocks (e.g. Bykov et al. 2025). Remarkable symmetry of the EXJs, very clearly demonstrated by the eROSITA image, points towards irrelevance of the ambient ISM properties to the energy and momentum propagation, as well as the particle acceleration regime, meaning that a cocoon or a wind-blown cavity likely encompasses these structures in the current epoch. In this regard, the presence of Hαemitting filaments in several places across the nebula might help probe the energetic content of the nebula (e.g., Boumis et al. 2007; Abolmasov et al. 2010; Farnes et al. 2017). Interestingly, the location of the brightest Hαfilaments does not correspond to any prominent features in thermal X-ray emission. This might be the case when the Hα emission comes from the outer boundary of the nebula, where the shock wave is already in the radiative mode. Significantly higher gas temperature obtained for the emission from the Western lobe might also be indicative of this process, namely that the radiative shock wave runs against the Galactic density gradient and stalls earlier in this direction, evacuating a smaller volume to be filled by the hot X-ray gas. Given the presence of bright silicon lines in the spectra extracted from these regions, an observation of them with the XRISM observatory (Tashiro et al. 2025) might provide estimates of the directed and turbulent velocities. A large grasp soft X-ray microcalorimeter mission, like LEM (Kraft et al. 2022) or HUBS (Bregman et al. 2023) would also be perfectly suited for in-depth spectral exploration of this complex system with multiple interacting components. 7. Conclusions SRG/eROSITA observations of W50/SS 433 provide, for the first time, the entire X-ray map of the nebula with high spatial and spectral resolution. The data support a physical picture in which the anisotropic energy flow from SS 433 evolves passing through three distinct stages: - an invisible "dark" flow between 0.1 and 25 pc (=anisotropic wind from the binary system) - a "non-thermal" flow over another ∼30 pc (=EXJs) - a thermal flow (= shock-heated ISM) that envelopes EXJs. The appearance of the nebula is further affected by the large ambient density gradients and a heavy foreground photoelectric absorption that projects almost exactly on the central quasispherical part of the W50 nebula. The thermal part of the W50 X-ray emission can be reasonably well described by the shock-heated plasma that has not yet reached temperature and ionization equilibrium. Such emission is typical for middle-age or old SNRs. The outer radio boundary of the nebula is also reminiscent of the SNR shocks. On the contrary, the "Extended X-ray Jets" are the most remarkable features of this system on scales of tens of pc. Their sharp inner edges plausibly correspond to extreme shocks that accelerate particles and power the X-ray (synchrotron) emission, and, at 10 orders of magnitude higher energies, the TeV emission. W50/SS 433 system illustrates clearly the important role the Hyper-Eddington accretors might play for the energetics of the ISM in galaxies at different redshifts and the production of ultra-high-energy particles. Article number, page 10 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA Forward shock Thermal emission Non-thermal emission Termination shock? Precession cone Fig. 8. A schematic summary picture of the W50 nebula presented on top of the composite X-ray (red and green) and radio (VLA at 1.4 GHz by Dubner et al. 1998, blue) image. While radio emission most likely arises at the outer shell-like boundary of the nebula, the soft X-ray emission (0.3-0.9 keV) trace shock heated ISM gas behind it which fills almost entire interior of the nebula, and harder X-ray emission (0.9-2.7 keV in this case) is of non-thermal (synchrotron) nature and is produced by ultrarelativistic electrons accelerated at shocks in the axial outflows from the system (e.g., Churazov et al. 2024). The central-most part of the nebula, within ∼25 pc from SS 433 (dashed circle) is likely of very low density and could be a wind-blown cavity created by an almost spherically symmetric outflow with close to Eddington kinetic luminosity. The impact of narrow baryonic jets launched by the central source in the current epoch is not clearly visible, given that no indications of interaction are observed along the sky projection of the jet's current precession cone (dotted lines). Acknowledgements. This work is partly based on observations with the eROSITA telescope onboard SRG space observatory. The SRG observatory was built by Roskosmos in the interests of the Russian Academy of Sciences, represented by its Space Research Institute (IKI) in the framework of the Russian Federal Space Program, with the participation of the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The eROSITA X-ray telescope was built by a consortium of German Institutes led by MPE, and supported by DLR. The SRG spacecraft was designed, built, launched, and operated by the Lavochkin Association and its subcontractors. The science data are downlinked via the Deep Space Network Antennae in Bear Lakes, Ussurijsk, and Baikonur, funded by Roskosmos. The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nuernberg), the (AIP), and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max Planck Society. The Argelander Institute for Astronomy of the ät München also participated in the science preparation for eROSITA. The eROSITA data were processed using the eSASS/NRTA software system developed by the German eROSITA consortium and analyzed using proprietary data reduction software developed by the Russian eROSITA Consortium. IK acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679. References Abell, G. O. & Margon, B. 1979, Nature, 279, 701 Abeysekara, A. U., Albert, A., Alfaro, R., et al. 2018, Nature, 562, 82 Abolmasov, P., Maryeva, O., & Burenkov, A. N. 2010, AN, 331, 412 Alfaro, R., Alvarez, C., Arteaga-Velázquez, J. C., et al. 2024, ApJ, 976, 30 Arnason, R. M., Papei, H., Barmby, P., Bahramian, A., & Gorski, M. D. 2021, MNRAS, 502, 5455 Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17 Asahina, Y., Ogawa, T., Kawashima, T., et al. 2014, ApJ, 789, 79 Atapin, K. E. & Fabrika, S. N. 2016, AstL, 42, 517 Begelman, M. C., King, A. R., & Pringle, J. E. 2006, MNRAS, 370, 399 Begelman, M. C., Sarazin, C. L., Hatchett, S. P., McKee, C. F., & Arons, J. 1980, ApJ, 238, 722 Blundell, K. M. & Bowler, M. G. 2004, ApJL, 616, L159 Blundell, K. M., Schmidtobreick, L., & Trushkin, S. 2011, MNRAS, 417, 2401 Borkowski, K. J., Lyerly, W. J., & Reynolds, S. P. 2001, ApJ, 548, 820 Boumis, P., Meaburn, J., Alikakos, J., et al. 2007, MNRAS, 381, 308 Bregman, J., Cen, R., Chen, Y., et al. 2023, SCPMA, 66, 299513 Brinkmann, W., Aschenbach, B., & Kawai, N. 1996, A&A, 312, 306 Brinkmann, W., Fink, H. H., Massaglia, S., Bodo, G., & Ferrari, A. 1988, A&A, 196, 313 Brinkmann, W. & Kawai, N. 2000, A&A, 363, 640 Brinkmann, W., Kawai, N., Matsuoka, M., & Fink, H. H. 1991, A&A, 241, 112 Brinkmann, W., Kotani, T., & Kawai, N. 2005, A&A, 431, 575 Brinkmann, W., Pratt, G. W., Rohr, S., Kawai, N., & Burwitz, V. 2007, A&A, 463, 611 Broderick, J. W., Fender, R. P., Miller-Jones, J. C. A., et al. 2018, MNRAS, 475, 5360 Bykov, A. M., Osipov, S. M., Romansky, V. I., et al. 2025, PhRvD, 112, 063017 Cherepashchuk, A., Belinski, A., Dodin, A., & Postnov, K. 2023, NewA, 103, 102060 Cherepashchuk, A. M. 1981, MNRAS, 194, 761 Cherepashchuk, A. M., Dodin, A. V., & Postnov, K. A. 2025, arXiv, Cherepashchuk, A. M., Esipov, V. F., Dodin, A. V., Davydov, V. V., & Belinskii, A. A. 2018, ARep, 62, 747 Article number, page 11 of 14 A&A proofs: manuscript no. current Chi, Y.-H., Huang, J., Zhou, P., et al. 2024, ApJL, 975, L28 Churazov, E., Khabibullin, I., Bykov, A. M., Lyskova, N., & Sunyaev, R. 2023, A&A, 670, A156 Churazov, E., Khabibullin, I., Lyskova, N., Sunyaev, R., & Bykov, A. M. 2021a, A&A, 651, A41 Churazov, E., Khabibullin, I., Lyskova, N., Sunyaev, R., & Dolag, K. 2025, arXiv, Churazov, E., Khabibullin, I., & Sunyaev, R. 2020, MNRAS, 495, L51 Churazov, E. M., Khabibullin, I. I., & Bykov, A. M. 2024, A&A, 688, A4 Churazov, E. M., Khabibullin, I. I., Bykov, A. M., et al. 2021b, MNRAS, 507, 971 Dubner, G. M., Holdaway, M., Goss, W. M., & Mirabel, I. F. 1998, AJ, 116, 1842 Eichler, D. 1983, ApJ, 272, 48 Eikenberry, S. S., Cameron, P. B., Fierce, B. W., et al. 2001, ApJ, 561, 1027 Elston, R. & Baum, S. 1987, AJ, 94, 1633 Fabian, A. C. & Rees, M. J. 1979, MNRAS, 187, 13P Fabrika, S. 2004, ASPRv, 12, 1 Fabrika, S. & Mescheryakov, A. 2001, in IAU Symposium, Vol. 205, Galaxies and their Constituents at the Highest Angular Resolutions, ed. R. T. Schilizzi, 268 Farnes, J. S., Gaensler, B. M., Purcell, C., et al. 2017, MNRAS, 467, 4777 Filippova, E., Revnivtsev, M., Fabrika, S., Postnov, K., & Seifina, E. 2006, A&A, 460, 125 Fogantini, F. A., García, F., Combi, J. A., et al. 2023, A&A, 669, A149 Forman, W., Jones, C., Cominsky, L., et al. 1978, ApJS, 38, 357 Fragile, P. C., Middleton, M. J., Bollimpalli, D. A., & Smith, Z. 2025, MNRAS, 540, 2820 Fuchs, Y., Koch Miramond, L., & Ábrahám, P. 2006, A&A, 445, 1041 Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1 Geldzahler, B. J., Pauls, T., & Salter, C. J. 1980, A&A, 84, 237 Goodall, P. T., Alouani-Bibi, F., & Blundell, K. M. 2011a, MNRAS, 414, 2838 Goodall, P. T., Blundell, K. M., & Bell Burnell, S. J. 2011b, MNRAS, 414, 2828 Goranskij, V. 2011, PZ, 31, 5 Green, D. A. 2025, JApA, 46, 14 Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S., & Finkbeiner, D. 2019, ApJ, 887, 93 Gregory, P. C., Scott, W. K., Douglas, K., & Condon, J. J. 1996, ApJS, 103, 427 Grindlay, J. E., Band, D., Seward, F., et al. 1984, ApJ, 277, 286 H. E. S. S. Collaboration, Aharonian, F., Ait Benkhali, F., et al. 2024, Sci, 383, 402 Heinz, S. & Sunyaev, R. 2002, A&A, 390, 751 Hjellming, R. M. & Johnston, K. J. 1981a, ApJL, 246, L141 Hjellming, R. M. & Johnston, K. J. 1981b, Nature, 290, 100 Holden, D. J. & Caswell, J. L. 1969, MNRAS, 143, 407 Jiang, Y.-F., Stone, J. M., & Davis, S. W. 2019, ApJ, 880, 67 Kaaret, P., Feng, H., & Roberts, T. P. 2017, ARA&A, 55, 303 Kaaret, P., Ferrazzoli, R., Silvestri, S., et al. 2024, ApJL, 961, L12 Katz, J. I. 1980, ApJL, 236, L127 Katz, J. I., Anderson, S. F., Margon, B., & Grandi, S. A. 1982, ApJ, 260, 780 Kawai, N., Matsuoka, M., Pan, H.-C., & Stewart, G. C. 1989, PASJ, 41, 491 Khabibullin, I., Churazov, E., & Sunyaev, R. 2022, MNRAS, 509, 6068 Khabibullin, I., Medvedev, P., & Sazonov, S. 2016, MNRAS, 455, 1414 Khabibullin, I. & Sazonov, S. 2016, MNRAS, 457, 3963 Khabibullin, I. I., Churazov, E. M., Bykov, A. M., Chugai, N. N., & Sunyaev, R. A. 2023, MNRAS, 521, 5536 Khabibullin, I. I., Churazov, E. M., Chugai, N. N., et al. 2024, A&A, 689, A278 Khabibullin, I. I. & Sazonov, S. Y. 2012, AstL, 38, 443 Khabibullin, I. I. & Sazonov, S. Y. 2017, AstL, 43, 388 Khabibullin, I. I. & Sazonov, S. Y. 2019, AstL, 45, 282 Kimura, S. S., Murase, K., & Mészáros, P. 2020, ApJ, 904, 188 Kirshner, R. P. & Chevalier, R. A. 1980, ApJL, 242, L77 Konigl, A. 1983, MNRAS, 205, 471 Kotani, T., Kawai, N., Matsuoka, M., & Brinkmann, W. 1996, PASJ, 48, 619 Kraft, R., Markevitch, M., Kilbourne, C., et al. 2022, arXiv, Kubota, K., Ueda, Y., Kawai, N., et al. 2010, PASJ, 62, 323 Leibowitz, E. M. 1984, MNRAS, 210, 279 LHAASO Collaboration. 2024, arXiv, Lopez, L. A., Marshall, H. L., Canizares, C. R., Schulz, N. S., & Kane, J. F. 2006, ApJ, 650, 338 Margon, B. 1984, ARA&A, 22, 507 Margon, B., Ford, H. C., Katz, J. I., et al. 1979, ApJL, 230, L41 Marshall, H. L., Canizares, C. R., Hillwig, T., et al. 2013, ApJ, 775, 75 Marshall, H. L., Canizares, C. R., & Schulz, N. S. 2002, ApJ, 564, 941 Medvedev, A. & Fabrika, S. 2010, MNRAS, 402, 479 Medvedev, P. S., Khabibullin, I. I., & Sazonov, S. Y. 2019, AstL, 45, 299 Medvedev, P. S., Khabibullin, I. I., Sazonov, S. Y., Churazov, E. M., & Tsygankov, S. S. 2018, AstL, 44, 390 Medvedev, P. S., Khabibullin, I. I., Semena, A. N., et al. 2022, AstL, 48, 389 Middleton, M. J., Walton, D. J., Alston, W., et al. 2021, MNRAS, 506, 1045 Migliari, S., Fender, R. P., Blundell, K. M., Méndez, M., & van der Klis, M. 2005, MNRAS, 358, 860 Miller-Jones, J. C. A., Migliari, S., Fender, R. P., et al. 2008, ApJ, 682, 1141 Monceau-Baroux, R., Porth, O., Meliani, Z., & Keppens, R. 2015, A&A, 574, A143 Namiki, M., Kawai, N., Kotani, T., Mamauchi, S., & Brinkmann, W. 2000, AdSpR, 25, 709 Ohmura, T., Ono, K., Sakemi, H., et al. 2021, ApJ, 910, 149 Ohsuga, K. & Mineshige, S. 2011, ApJ, 736, 2 Panferov, A. 2014, A&A, 562, A130 Panferov, A. A. 2017, A&A, 599, A77 Panferov, A. A. & Kabanov, A. A. 2013, arXiv, Peretti, E., Petropoulou, M., Vasilopoulos, G., & Gabici, S. 2025, A&A, 698, A188 Peter, W. & Eichler, D. 1993, ApJ, 417, 170 Poutanen, J., Lipunova, G., Fabrika, S., Butkevich, A. G., & Abolmasov, P. 2007, MNRAS, 377, 1187 Predehl, P., Andritschke, R., Arefiev, V., et al. 2021, A&A, 647, A1 Predehl, P., Sunyaev, R. A., Becker, W., et al. 2020, Nature, 588, 227 Rosado, M., Sánchez-Cruces, M., Ambrocio-Cruz, P., & Trejo, D. 2021, MNRAS, 506, 4263 Safi-Harb, S., Mac Intyre, B., Zhang, S., et al. 2022, ApJ, 935, 163 Safi-Harb, S. & Ögelman, H. 1997, ApJ, 483, 868 Safi-Harb, S. & Petre, R. 1999, ApJ, 512, 784 Sakai, Y., Yamada, S., Sakemi, H., et al. 2025, arXiv, Sakemi, H., Omae, R., Ohmura, T., & Machida, M. 2021, PASJ, 73, 530 Sarazin, C. L., Begelman, M. C., & Hatchett, S. P. 1980, ApJL, 238, L129 Seward, F., Grindlay, J., Seaquist, E., & Gilmore, W. 1980, Nature, 287, 806 Seward, F. D., Page, C. G., Turner, M. J. L., & Pounds, K. A. 1976, MNRAS, 175, 39P Shakura, N. I. & Sunyaev, R. A. 1973, A&A, 24, 337 Sharpless, S. 1959, ApJS, 4, 257 Shklovskii, I. S. 1981, SvA, 25, 315 Shuder, J. M., Hatfield, B. F., & Cohen, R. D. 1980, PASP, 92, 259 Spencer, R. E. 1979, Nature, 282, 483 Stephenson, C. B. & Sanduleak, N. 1977, ApJS, 33, 459 Stewart, G. C., Watson, M. G., Matsuoka, M., et al. 1987, MNRAS, 228, 293 Sunyaev, R., Arefiev, V., Babyshkin, V., et al. 2021, A&A, 656, A132 Tashiro, M., Kelley, R., Watanabe, S., et al. 2025, PASJ Tsuji, N., Inoue, Y., Khangulyan, D., et al. 2025, arXiv, van den Heuvel, E. P. J. 1981, VA, 25, 95 van den Heuvel, E. P. J., Ostriker, J. P., & Petterson, J. A. 1980, A&A, 81, L7 van den Heuvel, E. P. J., Portegies Zwart, S. F., & de Mink, S. E. 2017, MNRAS, 471, 4256 Velázquez, P. F. & Raga, A. C. 2000, A&A, 362, 780 Wang, J., Reville, B., & Aharonian, F. A. 2025, ApJL, 989, L25 Watson, M. G., Stewart, G. C., Brinkmann, W., & King, A. R. 1986, MNRAS, 222, 261 Watson, M. G., Willingale, R., Grindlay, J. E., & Seward, F. D. 1983, ApJ, 273, 688 Westerhout, G. 1958, BAN, 14, 215 Whitmire, D. P. & Matese, J. J. 1980, MNRAS, 193, 707 Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914 Yamauchi, S., Kawai, N., & Aoki, T. 1994, PASJ, 46, L109 Yoshioka, S., Mineshige, S., Ohsuga, K., Kawashima, T., & Kitaki, T. 2022, PASJ, 74, 1378 Zavala, J., Velázquez, P. F., Cerqueira, A. H., & Dubner, G. M. 2008, MNRAS, 387, 839 Zealey, W. J., Dopita, M. A., & Malin, D. F. 1980, MNRAS, 192, 731 Article number, page 12 of 14 Rashid Sunyaev et al.: SS433/W50 with SRG/eROSITA Appendix A: Spectrum of the central source As one can see in Fig. 6, both blue-shifted and red-shifted lines of highly ionized (H-like and He-like) heavy elements (in particular, silicon, sulfur, iron, and nickel) moving at 0.26 speed of light can be easily discriminated and identified. They arise from the approaching (with apparent redshift zapp =-0.085) and the receding jet (with apparent redshift zrec =0.167), respectively. In agreement with the findings of numerous previous observations (see Medvedev et al. 2019, for a recent summary), a factor of 10 overabundance of nickel (with respect to iron) in the gas of the jets is required to describe the spectrum well (most clearly indicated by the ratio of the Ni XXVII and Fe XXV lines). Spectral resolution of eROSITA is not sufficient to resolve the width of the lines, which is known to be dominated by the ballistic motion of gas within a cone of with opening angle Θ ∼1 deg, as measured by high resolution spectroscopy with Chandra gratings (e.g. Marshall et al. 2002). Some residual emission can also be observed in the iron K complex region between 6 and 7 keV, which could come from the reflection of central source emission on the walls of the optically thick accretion disk funnel (Medvedev & Fabrika 2010) or in the optically thin accretion disk wind (Medvedev et al. 2018, 2019). Comparison of the spectra taken in the two epochs of observations separated by one orbital period shows very good consistency between them3, with the biggest difference observed in the 6-7 keV band. This might be related to the change in the visible geometry of this region or to the physical conditions within it (see a discussion in Medvedev et al. 2019). High-resolution spectroscopy of this emission with the XRISM observatory (Tashiro et al. 2025) will open new possibilities for characterizing this emission and potentially give access to more intricate diagnostics based on velocity structure and resonant scattering in the jets (Khabibullin & Sazonov 2012), as well as possible signatures of multiphase cooling and ionization state dynamics in supersonically expanding jets prone to thermal instability and fragmentation (Brinkmann et al. 1988; Brinkmann & Kawai 2000; Khabibullin et al. 2016). The SS433's spectrum presented here illustrates the spectroscopic capabilities of eROSITA and shows that, for sufficiently bright sources, reliable continuum and line characterization can be achieved up to 10 keV. Appendix B: Parameters of the spectral models Here we list parameters of the best-fit models used to describe spectra shown in Fig. 7 and discussed in Sec. 5. The spectral analysis was performed using standard routines of the XSPEC package (Arnaud 1996). Namely, the absorbed (using tbabs model by Wilms et al. 2000) non-equilibrium plasma emission model (nei model by Borkowski et al. 2001) was used to describe thermal emission, while similarly absorbed thermal bremsstrahlung model (tbabs*brems in XSPEC notation) was used for a phenomenological description of the non-thermal emission from the EXJs. Due to the complexity of the system, the X-ray emission from all these regions is likely a mixture of several components; hence, one needs to take these derived parameters with caution and probably as approximate guidelines for the physical conditions within them. For the background estimation, adjacent regions of significant size were used, and the signal inside them was subtracted from the corresponding source region. For all regions except for the relatively faint Regions 4 and 8, the relative 3 Since one orbital period separation is small compared to the precession period and the observations were close to an extreme point of the precession curve, the change in the Doppler shifts of the jets is small. Table B.1. Parameters (with 1σuncertainties) of the best fit models for the spectra extracted from regions shown in Fig. 5. The thermal non-equilibrium model used for Regions 1,3,4,6-8 is tbabs*nei with Solar abundance of heavy elements having the absorbing column density NH, gas temperature kT, ionization timescale τand the standard XSPEC emission measure Norm = 10-14nenHV/(4πd2 SS 433) normalization (per arcmin-2) as free parameters. The value of Norm converts into the hydrogen number density as nH = 0.2 cm-3(Norm/10-5)0.5(dl/pc)-0.5, where dlis the line-of-sight extent of the emitting region (assuming filling factor fequal to unity and ne = 1.21nH for the fully ionized cosmic plasma). For the non-thermal emission model used for Regions 2 and 5 we list parameters of the tbabs*brems fits, where kTis the effective temperature of the bremsstrahlung spectrum. For the normalization, we divided the standard XSPEC normalization of the brems model by the factor 3.28(=10-14/3.05 × 10-15) to match the normalization definition of the nei model used for the thermal regions. Correspondingly, the same conversion to the number density of the emitting gas can be used in this case as well, demonstrating that would the bremsstrahlung model were true, the required gas density would be comparable to the density of the ambient medium. # NH kT τ Norm 1022 cm-2 keV 1011 s cm-3 10-5 Thermal regions (tbabs*vnei) 1 0.67±0.02 0.29±0.02 0.26±0.05 12.33±3.63 3 0.56±0.02 0.56±0.07 0.11±0.01 0.80±0.19 4 0.79±0.05 0.95±0.25 0.09±0.01 0.21±0.07 6 0.85±0.03 0.76±0.06 0.64±0.12 2.01±0.28 7 0.72±0.02 1.30±0.18 0.08±0.01 0.88±0.13 8 0.75±0.05 0.78±0.22 0.13±0.05 0.42±0.18 Non-thermal regions (tbabs*brems) 2 0.51±0.01 3.05±0.15 - 1.62±0.04 5 0.84±0.03 14.5±3.9 - 0.64±0.01 contribution of the background emission in the source regions was rather small. Appendix C: Impact of foreground absorption In this section, we demonstrate that some of the large-scale variations in the diffuse X-ray emission from W50 is plausibly caused by foreground absorption. To do so, we used the 3D dust maps from Bayestar19 (see Green et al. 2019, for detailed description). We are looking for the intervening absorption between W50 and us, i.e., physically unrelated layers of the absorbing material. Given the adopted distance to SS 433 of 5 kpc, we integrated extinction (AV) only up to a distance of 3 kpc from the Sun. The resulting extinction map is superposed on the X-ray image in Fig. C.1. A prominent wide dust lane is seen in this image that crosses the central part of the W50 nebula diagonally from NE to SW. Typical values of the extinction in this region are ∼5 -10, which translate into equivalent hydrogen column densities ∼(1-2) ×1022 cm-2. Therefore, soft X-ray emission from Article number, page 13 of 14 A&A proofs: manuscript no. current Fig. C.1. X-ray map of W50 in Galactic coordinates with the extinction map (in red) superposed. The extinction is estimated from the Bayestar19 3D maps Green et al. (2019) up to a distance of 3 kpc from the Sun, i.e., conservatively in the foreground to W50. The extinction (AV) values range between ∼5 and ∼10 in regions overlapping with the central quasi-spherical part of W50. This apparently accidental projection of the absorbing dust/gas onto W50 leads to significant suppression of the soft X-ray flux in the affected areas. the central part of W50 can be heavily absorbed (several orders of magnitude at 1 keV). While some geometric correspondences between the X-ray and extinction maps look tantalizing, we conclude that most probably this is a by-chance projection. Therefore, the lack of observed soft X-ray emission from the quasi-circular part of the nebula does not place significant constraints on the properties of W50. The soft X-ray emission might be there, but we can not see it. Article number, page 14 of 14
2510.14933
Nucleon Electric Dipole Moments in Paramagnetic Molecules through Effective Field Theory Wouter Dekens,1 Jordy de Vries,2, 3 Lemonia Gialidi,2, 3 Javier Menéndez,4, 5 Heleen Mulder,3, 6 and Beatriz Romeo7 1Institute for Nuclear Theory, University of Washington, Seattle WA 91195-1550, USA 2Institute for Theoretical Physics Amsterdam and Delta Institute for Theoretical Physics, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands 3Nikhef, Theory Group, Science Park 105, 1098 XG, Amsterdam, The Netherlands 4Departament de Física Quàntica i Astrofísica, Universitat de Barcelona, 08028 Barcelona, Spain 5Institut de Ciències del Cosmos, Universitat de Barcelona, 08028 Barcelona, Spain 6Van Swinderen Institute for Particle Physics and Gravity, University of Groningen, Nijenborgh 3, 9747 AG Groningen,The Netherlands 7Department of Physics and Astronomy, University of North Carolina, Chapel Hill Electric dipole moment (EDM) measurements using paramagnetic molecules have significantly advanced over the last decade. Traditionally, these experiments have been analyzed in terms of the electron EDM. However, paramagnetic molecules are also sensitive to hadronic sources of charge- parity (CP) violation, highlighting the need for a new framework to interpret the experimental results. In this Letter, we introduce an effective field theory framework to relate molecular EDMs to the EDMs of neutrons and protons. We identify the dominant contributions through power counting and pinpoint the necessary nuclear matrix elements. As a practical application, we employ the nuclear shell model to calculate these nuclear matrix elements for the polar molecule BaF. Finally, we estimate the limits on the nucleon EDMs set by current molecular EDM experiments. I. INTRODUCTION Electric dipole moment (EDM) experiments are ex- tremely sensitive probes of new sources of charge- parity (CP) violation and indirectly probe beyond-the- Standard-Model (BSM) physics at very high scales of up to ∼100 TeV [1, 2]. Recent years have seen impressive experimental progress using polar molecules which ben- efit from large internal electric fields that amplify the CP-violating signal [3–7]. EDMs of paramagnetic sys- tems, which have one unpaired electron, are mainly in- terpreted in terms of the electron EDM. Current mea- surements lead to a strong bound on the electron EDM, |de| < 4.1 · 10−30 e cm, and future experiments aim to improve this by one to two orders of magnitude [6–11]. This constraint is 4 orders of magnitude more stringent than the neutron EDM limit [12]. Traditionally, paramagnetic systems have not been used to constrain hadronic sources of CP violation, such as the quantum chromodynamics (QCD) ¯θ term within the SM or higher-dimensional quark-gluon operators that arise from heavy BSM physics. This is because of the as- sumption that far stricter limits can be obtained through the EDMs of the neutron or diamagnetic atoms. That be- ing said, paramagnetic systems are sensitive to hadronic sources of CP violation through the CP-odd electron- nuclear force they induce [13–15]. While this force is typically strongly suppressed, the rapid progress in para- magnetic EDM experiments might make it the best way to search for hadronic sources of CP violation in the fu- ture. However, the current theoretical description of the CP-odd electron-nuclear force is still at a very rudimen- tary stage. In this Letter, we systematically derive this force as induced by the EDMs of neutrons and protons, making it possible to constrain these EDMs with paramagnetic molecular EDM experiments. As the problem involves a multitude of well-separated energy scales, it can be sys- tematically described using effective-field-theory (EFT) techniques. We show that this connection requires the calculation of a set of nuclear matrix elements (NMEs) that are different from the ones involved in the Schiff moments of diamagnetic systems [16–18]. As an explicit example, we compute the NMEs for the polar molecule BaF, which is being targeted by the NL-eEDM collabo- ration [8]. II. EFFECTIVE FIELD THEORY The calculation of molecular EDMs in terms of fun- damental sources of CP violation involves widely sep- arated energy scales. These range from the BSM and electroweak scales (Λ and MW ) to low-energy scales such as the electron mass or electron binding energy O(α2 emme). The atomic nucleus gives rise to additional scales associated with the chiral-symmetry-breaking scale Λχ ∼mN ∼1 GeV (comparable to the nucleon mass), the pion mass mπ ∼γ ∼100 MeV (comparable to the nuclear binding momentum) and the scale of nuclear ex- citations m2 π/mN ∼O(MeV). Within the SM, the most relevant source of CP viola- tion is the QCD ¯θ term, as CKM-induced (paramagnetic) EDMs are orders of magnitude too small to be detected by current and envisioned experiments [19]. BSM sources of hadronic CP violation can, at energies well below Λ, be described by effective operators of space-time dimen- sion six. They have been classified and evolved to lower energies in a series of previous works [20–23]. At energies slightly above Λχ, the most relevant hadronic operators arXiv:2510.14933v1 [hep-ph] 16 Oct 2025 2 are the (chromo-)electric dipole moments of quarks, the Weinberg three-gluon operator [24], and several CP-odd four-quark interactions [20]. At energies < Λχ, these ef- fective operators can be matched to a χEFT Lagrangian describing CP-violating interactions among the relevant low-energy degrees of freedom (light mesons, nucleons, photons, electrons) [1, 20]. For our purposes, the most relevant interactions are given by Lχ = ¯g0 ¯Nτ aNπa + ¯g1 ¯NNπ0 + ¯g0η ¯NNη +2 ¯N(d0 + d1τ 3)vµSνNFµν , (1) where the first line describes three CP-odd meson- nucleon interactions, and the second line, respectively, the isoscalar and isovector nucleon EDM. We use the non-relativistic nucleon doublet N = (p, n)T with spin Sµ = (0, σ/2) and velocity vµ = (1, 0), as well as the pion triplet πa and the eta meson η. The paramagnetic EDMs induced by the meson- nucleon interactions in Eq. (1) arise mainly through in- termediate CP-odd electron-nucleon interactions, which take on the form L = GF √ 2 ¯eiγ5e ¯N C0 SP + C1 SPτ 3 N . (2) The nucleon EDMs in Eq. (1) give rise to contributions at longer distance scales through the diagrams in Fig. 1. They induce effective interactions between the nu- cleus and the electrons, i.e. the nuclear equivalent of C0,1 SP , which we denote by ¯CSP, see Eq. (7). To systematically compute the various contributions, it is useful to consider different photon modes depending on the scaling of their momentum qµ γ = (q0 γ, qγ). We identify three regions that give relevant contributions 1. soft photons: q0 γ ∼|qγ| ∼mπ, 2. ultrasoft photons: q0 γ ∼|qγ| ∼m2 π/mN, 3. potential photons: q0 γ ∼q2 γ/mN|qγ| ∼mπ , and we define Q ∼mπ ∼γ and q ∼Q2/mN. The CP-odd meson-nucleon interactions in Eq. (1) con- tribute to C0,1 SP through diagrams involving a meson ex- change or a pion loop in combinations with the exchange of two photons in the ultrasoft or soft region. These diagrams were first considered in Ref. [13] and later com- puted with heavy-baryon chiral perturbation theory in Ref. [15]. In addition, integrating out the mesons leads to renormalization of nucleon EDMs [25–27], effectively shifting d0,1 →¯d0,1, where the bar denotes the renormal- ized LECs. In what follows, we use ¯d0,1 as the physical nucleon EDMs. In this Letter, we focus on additional contributions to ¯CSP from the nucleon EDMs, which arise through the topologies shown in Fig. 1a and 1b. These diagrams are captured by an effective action of the form ⟨hf(pf)e(p′ e)|iSeff|hi(pi)e(pe)⟩= e3 2 Z xi ⟨hfe| (3) ×T h ¯e /Ae(x1) ¯e /Ae(x2)L(d0,1) χ (x3) (AµJµ em) (x4) i |ehi⟩, where we integrate over all x1,2,3,4, hi,f denote the ini- tial and final nuclear states (for EDMs we have the nu- clear ground state |hi⟩= |hf⟩= |0+⟩) and Jµ em denotes the nuclear electromagnetic current. Diagrams involving nucleon EDMs and photons with soft momenta are sub- leading as they require the inclusion of additional pions. Power counting gives the expected size of the potential and ultrasoft contributions  C(pot) SP , C(usoft) SP  = meα2µi ¯di eGF mN 4π Q , 1 q  , (4) where µi are the nucleon magnetic dipole moments (MDMs) in units of the nuclear magneton. Numerically 4πq ∼Q and these estimates are rather close, but, as we will see, they do not capture possible coherent enhance- ments. Potential region: To evaluate the potential contribu- tions, we can use the so-called method of regions to ex- pand the amplitude in small ratios of scales, such as q0 γ/|qγ|. After doing so, there are no contributions from diagrams where the nucleon EDM and Jem attach to the same nucleon (the potential region arises from picking up the poles of nucleon propagators, which can always be avoided in these one-body diagrams). There are, how- ever, two-nucleon effects through the diagram in Fig. 1a. Due to spin and parity constraints, the first contributions arise from the nucleon magnetic moments, which appear in Jµ em at next-to-leading order O(Q/mN). This results in the following contribution to the amplitude1 Apot = −⟨hf|V |hi⟩¯u(p′ e)  1 −v · (p′ e −pe) 2me /v  iγ5u(pe) , where we take the limit p′ e −pe ≪me in what follows, while V denotes the potential between the two interacting nucleons. In momentum space2 V = 4e4me 9mN X i̸=j µ(i)D(j) |q|4  σ(i) · σ(j) −1 4S(ij)  , (5) where i, j label the nucleons, qi = p′ i −pi is the exchanged momentum and we define the combination 1 We define p2Ef2Ei⟨hfe|Seff|hie⟩= (2π)4δ4(pf +p′ e−pi−pe)A, with the nuclear states satisfying ⟨p|q⟩= (2π)3δ3(p −q). 2 Strictly speaking, the momentum space potential is infrared di- vergent. However, performing the Fourier transform in dimen- sional regularization leads to a potential in coordinate space that is IR finite. It is also possible to deal with the potential in mo- mentum space by defining a subtraction procedure, see App. C of Ref. [28] where a similar potential was encountered. 3 (a) (b) (c) FIG. 1: Contributions to ¯CSP arising from the nucleon EDMs. We denote electrons by single and nucleons by double straight lines, nuclei by gray ovals (in Fig. 1a) or bars (in Fig. 1b-1c), and photons by wavy lines. The black circle stands for the nucleon MDM, while the yellow, magenta, and blue squares indicate the CP-violating vertices: nucleon EDM, βv and ¯CSP effective vertices (see Eq. (7)), respectively. Fig. 1a shows the two-nucleon potential-region contribution, and Fig. 1b the ultrasoft one. Fig. 1c shows the two diagrams relevant to the matching and running of diagram 1b in an EFT with the nuclear ground state as the remaining degree of freedom. q = (qi −qj)/2. In addition, D = ( ¯d0 + ¯d1τ3)/e, µ ≡ 1+κ0 2 + 1+κ1 2 τ3 describe the EDM and MDM op- erators (with κ0 = −0.12 and κ1 = 3.7), while S(ij) = σ(i) · σ(j) −3q · σ(i) q · σ(j)/q2. These contributions are thus determined by the NME of V , which scales as 1/|q|4. The form of this two-body potential is similar to the NMEs appearing in neutrinoless double-β decay [29] or radiative corrections to superallowed β decays [28, 30] although with different isospin and/or q depen- dence. The many-body techniques needed to compute such NMEs, including ab-initio approaches, have devel- oped significantly in the last decade and can be directly applied to these EDM calculations [29, 31]. Ultrasoft region: In this region, we expand in small ra- tios of scales, such as q0 γ/mπ ∼|qγ|/mπ and me/mπ. Photons with this (small) momentum scaling can be thought of as coupling to the nucleus as a whole, instead of the individual nucleons. After inserting a complete set of (nuclear) states between the hadronic operators and working out the time-ordered product in Eq. (3), we again find that the leading contributions involve the magnetic moments, Ausoft = −2ie4 mN Z d4k (2π)4 ¯u(p′ e)γλ(/k + me)γρu(pe) ϵσαβηvβ (k2 −m2e)(k −pe)2(k −p′e)2 X n " (pe −k)σ(p′ e −k)µgνλgαρ × ⟨hf|Dµν|n⟩⟨n|M η|hi⟩ v · l+ −En + iϵ + ⟨hf|M η|n⟩⟨n|Dµν|hi⟩ v · l−−En + iϵ  + n (pe ↔p′ e, l+ ↔l−, α ↔ν o# , (6) where |n⟩denote intermediate nuclear |1+⟩states with energies En, while l+ = pi + pe −k, l− = pi − p′ e + k. Furthermore, M η ≡ ¯Nµ SηN(0) and Dµν = ¯ND (vµSν −vνSµ) N(0) denote the MDM and EDM op- erators. Although the appearing integrals can be evaluated us- ing known techniques [32], the expressions are rather unwieldy. They greatly simplify if there is a hierarchy between the nuclear excited states and the electron mo- menta, pe ∼me ≪∆n = |En −Ei|, which is a good ap- proximation for 138Ba as discussed in Sec. III. Likewise, the relevant excited states in magnetic-dipole transitions —also driven by the spin operator— in isotopes of Yb, Hf, and Th with an even number of neutrons also enter at about 2 MeV or higher energies [33–35]. In this case, Eq. (6) can be captured by a low-energy nuclear EFT in which the excited nuclear states have been integrated out, but still contains electrons, ultrasoft photons, and the ground state of the nucleus. The relevant interac- tions in this theory can be written as LΨ = Ψ† i GF √ 2 ¯CSP ¯eiγ5e + βv vαFαβvλϵβλµνFµν  Ψi ,(7) where Ψi denotes the spin-0 field describing the nucleus,3 ¯CSP describes the nuclear version of C0,1 SP , while βv has a similar form as the nuclear polarizability but violates CP. At the scale µ = ∆n, βv obtains a contribution from integrating out the excited states at tree level, while ¯CSP arises from Eq. (6). After expanding in me/|En −Ei|, this expression simplifies and the remaining integrals are 3 We describe the nucleus non-relativistically, so that the ki- netic term takes the form L(0) Ψ = Ψ† i iv · DΨi. This ensures that ⟨0|Ψi(x)|p⟩= e−ip·x and implies the field has dimension [Ψi] = 3/2, so that [ ¯CSP ] = 0. 4 of the form In ≡ Z ddk (2π)d 1 (k2)n 1 v · k −∆, (8) which are evaluated as [36] In = 2i(−1)n+1(2∆)d−2n−1 (4π)d/2 Γ(2n + 1 −d)Γ(d/2 −n) Γ(n) .(9) All in all, matching the nucleon-level theory to the EFT without excited states then gives at a scale µ ≃∆n, GF √ 2 ¯CSP = −e4me 4π2mN X n An ∆n  4 −3 log 4∆2 n µ2  , (10) βv = e2 mN X n An ∆n , An = −⟨hi|Dσ|n⟩· ⟨n|µσ|hi⟩ 12 . These interactions can be evolved from µ ∼∆n to lower energies, µe ∼me, using the renormalization group equation (RGE), d ¯CSP (µ) d ln µ = 3 √ 2e2meβv 4π2GF . (11) This RGE arises through the loop diagram of Fig. 1c, which allows βv to contribute to ¯CSP . The amplitude at low scales, µe ∼me, can finally be expressed as the sum of ¯CSP (me) and a loop contribu- tion due to βv. We capture the total combination of the ultrasoft and potential contributions by an effective con- tact interaction, such that Atotal = GF √ 2 ¯Ceff SP ¯u(p′ e)iγ5u(pe) with ¯Ceff SP = − √ 2 GF " 4α2me mN X n An ∆n  3 ln m2 e 4∆2n −1  +⟨hi|V |hi⟩ # (12) which is independent of the renormalization scale µ. We stress that this is the effective interaction between elec- trons and the nucleus as a whole and differs from Eq. (2), which is the coupling to individual nucleons. Evaluating the ultrasoft region thus requires the excited state ener- gies, ∆n, and the set of nuclear matrix elements of the one-body operator ∼⟨hi|σ|n⟩contained in An. These have a form similar to the leading two-neutrino double- β, and double magnetic-dipole NMEs [37–39] and of sub- leading NMEs of the neutrinoless double-β decay [40, 41]. Therefore, similar many-body methods used in these studies can be applied here. Eq. (12) is the main result of this work and makes it possible to connect nucleon EDMs to measurements of paramagnetic molecules. III. NUCLEAR MATRIX ELEMENTS We now focus on the polar molecule BaF, which is targeted by the NL-eEDM collaboration [8]. The heav- iest atom in the molecule, 138Ba, has a magic neutron number, and it is just two neutrons away from 136Ba, the well-studied [42] final state of the double-β decay of 136Xe. We calculate the nuclear excitation energies and all necessary NMEs with the nuclear shell model [43]. We use a configuration space that comprises the single- particle orbitals 1d5/2, 0g7/2, 2s1/2, 1d3/2, and 0h11/2 for both neutrons and protons with a 100Sn core. We con- sider three effective interactions previously tested in this mass region: GCN5082 [44], QX [45] and Sn100pn [46]. We present details of the calculation in the Supplemental Material, and here highlight the main results. The value of the ultrasoft NME is ¯Cusoft SP = (67 ± 28) dp (e fm)−1 , (13) where dp = ¯d0+ ¯d1. We only find sensitivity to the proton EDM because, in our calculation, the 82 neutrons form a closed shell, as 138Ba is magic in neutrons. This is also why the first intermediate 1+ excited state appears around 2.5 MeV. The largest contribution to the ultrasoft NME arises from states around En = 4.5 MeV ≫me, justifying our approximation, and higher-energy states only contribute mildly. We show the cumulative contri- bution from the excited-state spectrum in the Supple- mental Material. The potential contribution evaluates to ¯Cpot SP = [(−433 ± 5) dp + (387 ± 0.4) dn] (e fm)−1 , (14) where dn = ¯d0 −¯d1. The small uncertainties are solely from the shell-model calculations but do not capture pos- sible higher-order corrections. Compared to the ultrasoft regime, the potential contribution is dominant. This is because of the coherent nature of the potential NME, which scales linearly with the total number of protons, Z (dp term), or neutrons, N (dn term), in the nucleus. The coherence appears as most NME contributions stem from proton-proton and neutron-neutron pairs, prevalent in nuclei due to the attractive pairing interaction. This scaling is in rough agreement with the estimate of Ref. [13], as well as an evaluation of the potential of Eq. (5) in a Fermi gas state4. Our many-body calculations, which also cover nuclei lighter than 138Ba, suggest that nucleus- dependent effects can correct this estimate by up to 20%. The coherent character makes potential NMEs less de- pendent on the details of the nuclear structure, reducing their relative uncertainty with respect to ultrasoft NMEs (the very small error in the dn potential NME is because in our calculation the 82 neutrons form a closed shell). We provide more details in the Supplemental Material. While we do not expect any breakdown of the scaling be- havior discussed above, explicit calculations of ultrasoft and potential NMEs in heavier systems such as Th or Hf are required to confirm the dominance of the potential contributions. 4 We thank J. Engel for discussions on this point. 5 The expected sensitivity of the BaF experiment is an electron EDM equivalent of de ≤10−30 e cm [8]. Using the NME calculations of this Letter, this would corre- spond to a sensitivity to the nucleon EDMs |dp|BaF < 8.4 · 10−24 e cm and |dn|BaF < 8 · 10−24 e cm. While we do not have shell-model calculations for the most precise experiment based on HfF+, we can use the linear Z and N dependence of the potential NME to estimate |dp|HfF+ ≲1.6 · 10−23 e cm , |dn|HfF+ ≲1.6 · 10−23 e cm , (15) roughly two orders of magnitude weaker than the pro- ton EDM limit set by 199Hg [47] and three orders than the direct neutron EDM limit [12]. Considering the past and anticipated progress in molecular EDM experiments, with projected improvements of two to three orders of magnitude within a decade [48], these gaps are not in- surmountable. IV. DISCUSSION In this Letter, we have developed a systematic method to compute the contributions of nucleon EDMs to param- agnetic molecular EDMs. Generally, however, there are multiple other sources of CP violation, meaning that the measurement of a nonzero EDM in any system would raise the question of what the underlying mechanism is. It has been shown that the ratio of different paramag- netic systems can be used to unravel the electron EDM contribution from CP-odd electron-nucleon interactions [49, 50], while the ratio of nuclear to nucleon EDMs can separate different CP-odd sources at the quark-gluon level [51]. Based on our results, we devise a new strategy to identify the underlying source of CP violation. Besides the diagrams calculated in this Letter, there appear contributions from meson-exchange diagrams [13, 15]. Depending on the underlying CP-violating source, the ratio of meson to nucleon EDM contributions varies. We can be most concrete for the QCD ¯θ term, where the sizes of the low-energy constants in Eq. (1) are known relatively well [52–54]. For BaF, the meson diagrams give a contribution ¯Cmeson SP (BaF) = (220±62)·10−2 ¯θ [15]. We can compare this to the nucleon EDM contributions if we insert the lattice-QCD prediction for the nucleon EDMs [55, 56], which results in ¯Cpot+usoft SP = −(196 ± 54) · 10−2 ¯θ . (16) This contribution is comparable in size to the meson- exchange diagrams but comes with an opposite sign, so the total contribution is suppressed. This accidental cancellation is specific to the ¯θ term and not expected for other mechanisms of CP violation. We combine all contributions to compute the equivalent electron EDM dequiv e ≡rmol ¯CSP/A [57], which is convenient as param- agnetic EDM searches are usually interpreted as limits on de. For BaF, rBaF = 4.46[18] · 10−21e cm [58], and we obtain dequiv e (¯θ) = (7.5 ± 27) · 10−24 ¯θ e cm. -1.0 -0.5 0.0 0.5 1.0 -0.02 -0.01 0.00 0.01 0.02 FIG. 2: The ratio between dequiv e and dn, induced by var- ious possible underlying sources of CP violation: the ¯θ term (blue band), the up quark EDM (orange) or chromo- EDM (grey). The plots are based on Refs. [59–61] regard- ing QCD matrix elements connecting the CP-violating sources to CP-violating hadronic couplings. For other hadronic sources of CP violation, power counting arguments give insight into the ratio of mesonic to nucleon EDM contributions. For example, the quark chromo-EDM breaks CP and isospin symmetry. The meson-nucleon interactions in Eq. (1) are the leading CP-violating hadronic interactions [20, 60] and their con- tributions dominate the paramagnetic EDMs. On the other hand, for quark EDMs and the Weinberg operators the mesonic interactions are suppressed by, respectively, α/π (electromagnetic suppression) and m2 π/Λ2 χ (chiral suppression)[20]. As such, the ratio of ¯Ceff SP (and thus dequiv e ) to the neutron EDM is different. We illustrate this in Fig. 2, where we plot the dequiv e , including both mesonic and nucleon EDM contributions, against dn. The bands correspond to scenarios where dequiv e and dn are sourced, respectively, by the ¯θ term, the up quark EDM, and the up quark chromo-EDM. Remarkably, the ratio of neu- tron to paramagnetic EDMs can identify the underlying hadronic source of CP violation. In conclusion, the EFT approach presented in this Let- ter allows one to derive the contribution from nucleon EDMs to paramagnetic EDMs in a systematic way. We have identified novel nuclear matrix elements that must be computed in order to interpret paramagnetic EDMs in terms of nucleon EDMs and, ultimately, in terms of hadronic sources of CP violation. While power count- ing arguments indicate that similar contributions would arise from potential and ultrasoft virtual photons, ex- plicit shell-model calculations show that potential NMEs dominate because of the coherent contribution of most protons and neutrons in the nucleus. Experimental im- provements of two-to-three orders of magnitude in para- magnetic molecular systems are needed to set compet- itive limits on nucleon EDMs. Finally, we have shown that ratios of paramagnetic-to-neutron EDMs can point towards the underlying mechanism of CP violation. 6 Acknowledgments We thank Jon Engel, Emanuele Mereghetti, and Rob Timmermans for important discussions. This work was partly funded by the Netherlands Re- search Council (NWO) under programme XL21.074, and by MCIN/AEI/10.13039/501100011033 from the following grants: PID2023-147112NB-C22; CNS2022- 135716 funded by the “European Union NextGenera- tionEU/PRTR”, and CEX2024-001451-M to the “Unit of Excellence María de Maeztu 2025-2031” award to the In- stitute of Cosmos Sciences; and by the Generalitat de Catalunya, through grant 2021SGR01095. JdV and HM thank our colleagues from the NL-eEDM collaboration for discussions and encouragement. Appendix A: Ultrasoft contribution to ¯Ceff SP In the ultrasoft region, the contribution to the CP-odd electron-nucleus interaction is encoded in ¯Cusoft SP = − √ 2α2me 3mNGF M usoft SP , (A1) where M usoft SP is the NME between initial and final nuclear states, |hi,f⟩, defined by M usoft SP = X n ⟨hf|D(i)⃗σ|n⟩· ⟨n|µ(i)⃗σ|hi⟩ ∆n  1 + 3 ln 4∆2 n m2e  . (A2) Here ∆n = En −Ei is the excitation energy of the inter- mediate nuclear states |n⟩, and the nucleon EDMs and MDMs D(i) = ( ¯d0+ ¯d1τ (i) 3 )/e and µ(i) = (µ0+µ1τ (i) 3 ) are defined in terms of the isoscalar and isovector nucleon EDMs, ¯d0,1, and the isoscalar and isovector anomalous magnetic moments, κ0,1, through µi = (1 + κi)/2. We focus on 138Ba, the heaviest nucleus in the di- atomic polar molecule BaF used by the NL-eEDM exper- iment [8]. We compute the matrix elements involving the one-body spin operator in Eq. (A2) to the set of 1+ n nu- clear excited states, as well as the relevant excited-state energies. Nonetheless, we have adjusted these energies to exactly reproduce the one of the first 1+ excited state of 138Ba. This changes M usoft SP just by 2%-7% depending on the effective interaction used. Since in our calculation for 138Ba the 82 neutrons completely fill the configuration space, we cannot create any particle-hole excitation involving a neutron orbital, meaning there is no sensitivity to dn. Therefore, for 138Ba Eq. (A2) reduces to M usoft SP =( ¯d0 + ¯d1)(µ0 + µ1)mσ = mσµpdp , (A3) with 2µp = κ0 + κ1 + 2 (likewise, 2µn = κ0 −κ1) and mσ defined by mσ = −1 e X n ⟨0+ 1 |⃗σ |1+ n ⟩2 ∆n  1 + 3 ln 4∆2 n m2e  , (A4) GCN5082 QX Sn100pn 138Ba dp 61.0 97.0 41.7 dn 0 0 0 106Sn dp 0 0 0 dn -90.0 -89.6 -55.5 104Te dp 55.3 66.0 53.8 dn -43.7 -54.8 -46.9 132Te dp 11.9 11.0 6.5 dn -24.9 -16.2 -30.2 TABLE I: ¯Cusoft SP results for various nuclei with simi- lar mass number as 138Ba, obtained with three differ- ent shell-model Hamiltonians [44–46]. Here units are (e fm)−1. 2 3 4 5 6 7 0 20 40 60 80 100 En(MeV) ¯Cusoft SP  dp(efm)−1 GCN QX Sn100pn FIG. 3: ¯Cusoft SP as a function of the excitation energy of the intermediate states, for three shell-model Hamiltonians. where we drop the isospin operator as only protons con- tribute to the NME. As indicated by Eq. (A4), different contributions cannot cancel. Table I presents the calculated ¯Cusoft SP values for 138Ba. The results obtained with the three different shell-model Hamiltonians used differ, at most, by about a factor two. This shows a significant sensitivity to nuclear structure for the ultrasoft NME. Figure 3 shows the cumulative sum of ¯Cusoft SP as a func- tion of the excitation energy of the intermediate states. For the three nuclear Hamiltonians used, the behaviour is quite similar: a few states between 4 −5 MeV domi- nate, with lower- and higher-energy states contributing little. Additionally, Table I also presents the results for the ultrasoft NME in other nuclei, 106Sn, 104Te and 132Te, using the same configuration space as for 138Ba. Our re- sults indicate NME values comparable to the 138Ba ones. The theoretical uncertainty due to the nuclear Hamilto- nian used is also comparable to the one found for 138Ba, 7 highlighting again the sensitivity of the NME to nuclear structure effects. Appendix B: Potential contribution to ¯Ceff SP In the potential region the contribution to the CP- violating electron-nucleus interaction is given by ¯Cpot SP = − √ 2 GF ⟨hi|V |hi⟩, (B1) where in coordinate space V (⃗r) = −e4me 18πmN X i̸=j µ(i)D(j) |⃗r|  σ(i) · σ(j) + 1 16S(ij)(r)  . (B2) with S(ij) = 3ˆr · ⃗σiˆr · ⃗σj −⃗σi · ⃗σj. For convenience we define N (ij) = σi · σj + S(ij)/16 and rewrite Eq. (B1) as ¯Cpot SP = √ 2e4me 18πGF mN M pot SP , (B3) with M pot SP the expectation value of the operator OSP = ¯d0 e X i̸=j r  µ0 + µ1 2 (τ i 3 + τ j 3)  N (ij) + ¯d1 e X i̸=j r  µ1τ i 3τ j 3 + µ0 2 (τ i 3 + τ j 3)  N (ij) , (B4) or in terms of dp,n, OSP = dp 2e X i̸=j r  µ0 + µ1τ i 3τ j 3 + µ1 + µ0 2 (τ i 3 + τ j 3)  N (ij) + dn 2e X i̸=j r  µ0 −µ1τ i 3τ j 3 + µ1 −µ0 2 (τ i 3 + τ j 3)  N (ij). (B5) Again, for 138Ba we calculate M pot SP = ⟨0+| OSP |0+⟩ using the nuclear shell model. We note that both core and valence nucleons contribute to the potential NME. In fact, in the ideal case of a nucleus with fully-closed angular-momentum shells for both neutrons and protons, the P i̸=j σ(i) · σ(j) operator with the same isospin de- pendence as in Eq. (B4) would just count the number of proton pairs and neutron pairs coupled to spin zero. The corresponding NME is (−3Z µpdp−3N µndn)(fm/e), with separate coherent contributions of all protons and all neutrons in the nucleus. Table II presents the results of the shell-model calcu- lations for 138Ba for the NMEs corresponding to each component of the operator in Eq. (B5), that is, mpot,p SP and mpot,n SP defined as M pot SP = mpot,p SP dp + mpot,n SP dn . (B6) σiσj Sij σiσj + 1 16Sij mpot,p SP mpot,n SP mpot,p SP mpot,n SP mpot,p SP mpot,n SP GCN5082 -1729 1537 326.5 -75.45 -1708 1532 QX -1714 1537 331.8 -83.28 -1694 1532 Sn100pn -1757 1537 324.5 -22.12 -1736 1535 TABLE II: Results for mpot,p SP and mpot,p SP in 138Ba in units of (fm/e). The Gamow-Teller and tensor components of the NME are given separately. V (r) = 1 V (r) = r mpot,p SP mpot,n SP mpot,p SP mpot,n SP 16O -66.96 45.84 -244.5 167.4 20Ne -82.15 56.23 -303.7 207.8 36S -115.3 114.6 -403.7 439.8 48Ca -167.4 141.1 -667.8 484.6 100Sn -367.8 250.0 -1274 849.3 132Sn -367.2 425.9 -1317 1502 138Ba -434.5 427.3 -1708 1532 TABLE III: Results for mpot,p SP and mpot,n SP in fm/e, la- beled as V (r) = r, for several nuclei across the nuclear chart. In addition to 138Ba (see main text), for 20Ne and 36S we use an 16O core solved with the USDA[62] inter- action in the sd shell, and for 48Ca we take a 40Ca core solved with the KB3G [63] interaction in the pf shell. 16O, 100Sn and 132Sn are described by a single Slater determinant. We also include results for the NMEs ob- tained without the radial part of the operator in units of 1/e, which we label as V (r) = 1. The results in Table II indicate that, in general, mpot SP NMEs are significantly larger than ultrasoft NMEs. This difference arises from the coherent contribution of all nu- cleons, in contrast to the non-coherent ultrasoft NME, which is dominated by a few components where the core does not contribute. Coherence also leads to very similar NMEs across the three shell-model Hamiltonians used, indicating that nuclear structure details are not very rel- evant for the potential NME. Indeed, for mpot,n SP the re- sults are almost identical because in our calculations the 82 neutrons in 138Ba form a closed shell. In addition, Table II distinguishes the results for the Gamow-Teller and tensor spin structures. For both neu- tron and proton parts, the contribution of the tensor is very small. This suggests that the potential NME is mostly driven by pairs of nucleons coupled to spin zero, just as dictated by the Gamow-Teller operator. We explore the scaling of the potential NMEs with the number of protons and neutrons by calculating M pot SP for several nuclei in different mass regions. Table III lists 8 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1 Z or N |mpot,x SP /µx|[fm e−1] p n ×103 FIG. 4: Absolute value for the proton (mpot,p SP ) and neu- tron (mpot,n SP ) contributions to M pot SP , divided by the cor- responding nucleon MDM, as a function of the atomic or neutron number. The results cover all nuclei in Table III and also show the best linear fit (see text) and 95%CL prediction bands. the results, which suggest that the potential NME in- deed increases linearly with Z and N. Figure 4 high- lights this linear relation. It represents, for all nuclei, mpot,p SP and mpot,n SP , normalized by the proton and neu- tron magnetic moment, as a function of Z (mpot,p SP /µp) or N (mpot,n SP /µn). A good linear relation common to the proton and neutron parts of all nuclei emerges, best fitted to mpot,X SP = −9.74 X µX(fm/e), where X stands either for protons (Z, p) or neutrons (N, n). The lin- ear relation confirms that the potential NMEs is largely dominated by spin-zero pairs of protons and neutrons, as it includes nuclei where all nucleons form pairs—such as 16O, 36S (neutrons) or 48Ca (protons)—because they fill angular-momentum-closed shells. The contribution of proton-neutron pairs is minor. Using the scaling function for the NMEs, we write a master formula one can use to compute the equivalent electron EDM dequiv e ≡rmol ¯CSP/A [57] for any system dequiv e = √ 2e4me 18πGF mN rmol A (−9.74) [ Z µp dp + N µn dn] fm e (B7) where A is the mass number of the heaviest nucleus in each system, and rmol is a molecular matrix element. For several molecules of experimental interest it is given by [50, 58]: rBaF = 4.46[18] · 10−21 e cm , rThO = 1.51[9] · 10−20 e cm , rHfF+ = 9.17[52] · 10−21 e cm . (B8) Additionally to the full potential operator, indicated by V (r) = r, Table III also presents NMEs for the opera- tor just keeping the spin and isospin degrees of freedom, 0 0.2 0.4 0.6 dp cpot SP (r) 20Ne 48Ca 138Ba 0 2 4 6 8 10 12 14 16 18 −0.6 −0.4 −0.2 0 0.2 dn r[fm] cpot SP (r) FIG. 5: Normalized cpot SP (r) in units of fm−1 for 20Ne, 48Ca, and 138Ba for the contributions of the proton and neutron EDMs in B5. The dashed lines represent the normalized distributions with no radial dependence. but without radial dependence. These results, denoted by V (r) = 1, also indicate a linear dependence with the number of neutrons and protons. In fact, for nucleons in fully angular-momentum-closed shells the NMEs exactly fit to −3 X µXdX (fm/e), as expected. For all the nuclei in Table III, the results for mpot,p SP and mpot,n SP share a sim- ilar relation with the ones obtained for V (r) = 1, with a proportionality constant ∼(3.4 −3.9) fm. This com- mon factor reveals the lack of additional scaling in the potential NMEs due to the radial part of the operator. Figure 5 further analyzes this aspect, showing the nor- malized radial distribution cpot SP (r) for 20Ne, 48Ca, and 138Ba, defined by cpot SP (r) = P i̸=j µ(i)D(j)rijN (ij)(rij)δ(r −rij) |M pot SP | . (B9) which fulfills the relation 1 = Z ∞ 0 cpot SP (r)dr . (B10) The radial distributions in Fig. 5 show that, regardless of the size of the nucleus, the dominant contribution to the potential NME, shown in solid curves, stems from nucleons relatively close to each other. This property is dictated by the spin part of the NME, indicated by the dashed curves in Fig. 5. Even though the full po- tential operator—including the radial part—gives more relevance to nucleons further apart, pairs of nucleons at short distances still dominate. 9 [1] M. Pospelov and A. Ritz, Annals Phys. 318, 119 (2005), hep-ph/0504231. [2] J. Engel, M. J. Ramsey-Musolf, and U. van Kolck, Prog. Part. Nucl. Phys. 71, 21 (2013), 1303.2371. [3] J. J. Hudson, D. M. Kara, I. J. Smallman, B. E. Sauer, M. R. Tarbutt, and E. A. Hinds, Nature 473, 493 (2011). [4] J. Baron et al. (ACME), Science 343, 269 (2014), 1310.7534. [5] W. B. Cairncross, D. N. Gresh, M. Grau, K. C. Cossel, T. S. Roussy, Y. Ni, Y. Zhou, J. Ye, and E. A. Cornell, Phys. Rev. Lett. 119, 153001 (2017), 1704.07928. [6] V. Andreev et al. (ACME), Nature 562, 355 (2018). [7] T. S. Roussy et al., Science 381, adg4084 (2023), 2212.11841. [8] P. Aggarwal et al. (NL-eEDM), Eur. Phys. J. D 72, 197 (2018), 1804.10012. [9] A. C. Vutha, M. Horbatsch, and E. A. Hessels, Phys. Rev. A 98, 032513 (2018), 1806.06774. [10] C. J. Ho, S. C. Wright, B. E. Sauer, and M. R. Tarbutt, Phys. Rev. Res. 5, 043233 (2023), 2306.02573. [11] M. Athanasakis-Kaklamanakis et al. (Europwan EDM projects) (2025), 2505.22281. [12] C. Abel et al., Phys. Rev. Lett. 124, 081803 (2020), 2001.11966. [13] V. V. Flambaum, M. Pospelov, A. Ritz, and Y. V. Stad- nik, Phys. Rev. D 102, 035001 (2020), 1912.13129. [14] V. V. Flambaum, I. B. Samsonov, and H. B. Tran Tan, JHEP 10, 077 (2020), 2004.10359. [15] H. Mulder, R. Timmermans, and J. de Vries, JHEP 07, 232 (2025), 2502.06406. [16] L. I. Schiff, Phys. Rev. 132, 2194 (1963). [17] V. V. Flambaum, I. B. Khriplovich, and O. P. Sushkov, Sov. Phys. JETP 60, 873 (1984). [18] J. H. de Jesus and J. Engel, Phys. Rev. C 72, 045503 (2005), nucl-th/0507031. [19] Y. Ema, T. Gao, and M. Pospelov, Phys. Rev. Lett. 129, 231801 (2022), 2202.10524. [20] J. de Vries, E. Mereghetti, R. G. E. Timmermans, and U. van Kolck, Annals Phys. 338, 50 (2013), 1212.0990. [21] W. Dekens and J. de Vries, JHEP 05, 149 (2013), 1303.3156. [22] J. Kley, T. Theil, E. Venturini, and A. Weiler, Eur. Phys. J. C 82, 926 (2022), 2109.15085. [23] J. Kumar and E. Mereghetti, JHEP 09, 028 (2024), 2404.00516. [24] S. Weinberg, Phys. Rev. Lett. 63, 2333 (1989). [25] R. J. Crewther, P. Di Vecchia, G. Veneziano, and E. Wit- ten, Phys. Lett. B 88, 123 (1979), [Erratum: Phys.Lett.B 91, 487 (1980)]. [26] K. Ottnad, B. Kubis, U. G. Meissner, and F. K. Guo, Phys. Lett. B 687, 42 (2010), 0911.3981. [27] E. Mereghetti, J. de Vries, W. H. Hockings, C. M. Maekawa, and U. van Kolck, Phys. Lett. B 696, 97 (2011), 1010.4078. [28] V. Cirigliano, W. Dekens, J. de Vries, S. Gandolfi, M. Hoferichter, and E. Mereghetti, Phys. Rev. C 110, 055502 (2024), 2405.18464. [29] M. Agostini, G. Benato, J. A. Detwiler, J. Menéndez, and F. Vissani, Rev. Mod. Phys. 95, 025002 (2023), 2202.01787. [30] V. Cirigliano, W. Dekens, J. de Vries, S. Gandolfi, M. Hoferichter, and E. Mereghetti, Phys. Rev. Lett. 133, 211801 (2024), 2405.18469. [31] V. Cirigliano et al., J. Phys. G 49, 120502 (2022), 2207.01085. [32] J. Zupan, Eur. Phys. J. C 25, 233 (2002), hep- ph/0202135. [33] R. D. Heil, H. H. Pitz, U. E. P. Berg, U. Kneissl, K. D. Hummel, G. Kilgus, D. Bohle, A. Richter, C. Wesselborg, and P. Von Brentano, Nucl. Phys. A 476, 39 (1988). [34] N. Pietralla et al., Nucl. Phys. A 618, 141 (1997). [35] A. Zilges, P. Von Brentano, C. Wesselborg, R. D. Heil, U. Kneissl, S. Lindenstruth, H. H. Pitz, U. Seemann, and R. Stock, Nucl. Phys. A 507, 399 (1990), [Erratum: Nucl.Phys.A 519, 848–848 (1990)]. [36] D. J. Broadhurst and A. G. Grozin, Phys. Lett. B 267, 105 (1991), hep-ph/9908362. [37] F. Šimkovic, R. Dvornický, D. Stefánik, and A. Faessler, Phys. Rev. C 97, 034315 (2018), 1804.04227. [38] S. e. Morabit, R. Bouabid, V. Cirigliano, J. de Vries, L. Gráf, and E. Mereghetti (2024), 2412.14160. [39] B. Romeo, J. Menéndez, and C. Peña Garay, Phys. Lett. B 827, 136965 (2022), 2102.11101. [40] W. Dekens, J. de Vries, D. Castillo, J. Menéndez, E. Mereghetti, V. Plakkot, P. Soriano, and G. Zhou, JHEP 09, 201 (2024), 2402.07993. [41] D. Castillo, L. Jokiniemi, P. Soriano, and J. Menéndez, Phys. Lett. B 860, 139181 (2025), 2408.03373. [42] E. Caurier, F. Nowacki, and A. Poves, Phys. Lett. B 711, 62 (2012), 1112.5039. [43] E. Caurier, G. Martinez-Pinedo, F. Nowacki, A. Poves, and A. P. Zuker, Rev. Mod. Phys. 77, 427 (2005), nucl- th/0402046. [44] E. Caurier, F. Nowacki, A. Poves, and K. Sieja, Phys. Rev. C 82, 064304 (2010). [45] C. Qi and Z. X. Xu, Phys. Rev. C 86, 044323 (2012). [46] B. A. Brown, N. J. Stone, J. R. Stone, I. S. Towner, and M. Hjorth-Jensen, Phys. Rev. C 71, 044317 (2005), URL https://link.aps.org/doi/10. 1103/PhysRevC.71.044317. [47] B. Graner, Y. Chen, E. Lindahl, and B. Heckel, Physical Review Letters 116 (2016), ISSN 1079-7114, URL http: //dx.doi.org/10.1103/PhysRevLett.116.161601. [48] R. Alarcon et al., in Snowmass 2021 (2022), 2203.08103. [49] T. Chupp and M. Ramsey-Musolf, Phys. Rev. C 91, 035502 (2015), 1407.1064. [50] T. Fleig and M. Jung, JHEP 07, 012 (2018), 1802.02171. [51] J. de Vries, E. Mereghetti, R. G. E. Timmermans, and U. van Kolck, Phys. Rev. Lett. 107, 091804 (2011), 1102.4068. [52] J. Bsaisou, C. Hanhart, S. Liebig, U. G. Meissner, A. Nogga, and A. Wirzba, Eur. Phys. J. A 49, 31 (2013), 1209.6306. [53] J. de Vries, E. Mereghetti, and A. Walker-Loud, Phys. Rev. C 92, 045201 (2015), 1506.06247. [54] T. R. Richardson (2025), 2509.03613. [55] J. Dragos, T. Luu, A. Shindler, J. de Vries, and A. Yousif, Phys. Rev. C 103, 015202 (2021), 1902.03254. [56] J. Liang, A. Alexandru, T. Draper, K.-F. Liu, B. Wang, G. Wang, and Y.-B. Yang (χQCD), Phys. Rev. D 108, 094512 (2023), 2301.04331. [57] M. Pospelov and A. Ritz, Phys. Rev. D 89, 056006 10 (2014), 1311.5537. [58] P. A. B. Haase, D. J. Doeglas, A. Boeschoten, E. Eliav, M. Iliaš, P. Aggarwal, H. L. Bethlem, A. Borschevsky, K. Esajas, Y. Hao, et al., The Journal of Chemical Physics 155 (2021), ISSN 1089-7690, URL http://dx. doi.org/10.1063/5.0047344. [59] W. Dekens, J. de Vries, M. Jung, and K. K. Vos, JHEP 01, 069 (2019), 1809.09114. [60] M. Pospelov, Phys. Lett. B 530, 123 (2002), hep- ph/0109044. [61] S. Bhattacharya, K. Fuyuto, E. Mereghetti, and T. R. Richardson, Phys. Rev. C 112, 025501 (2025), 2504.01105. [62] W. A. Richter, S. Mkhize, and B. A. Brown, Phys. Rev. C 78, 064302 (2008). [63] A. Poves, J. Sánchez-Solano, E. Caurier, and F. Nowacki, Nuclear Physics A 694, 157 (2001).
Nucleon Electric Dipole Moments in Paramagnetic Molecules through Effective Field Theory Wouter Dekens,1 Jordy de Vries,2, 3 Lemonia Gialidi,2, 3 Javier Menéndez,4, 5 Heleen Mulder,3, 6 and Beatriz Romeo7 1Institute for Nuclear Theory, 91195-1550, USA 2Institute for Theoretical Physics Amsterdam and Delta Institute for Theoretical Physics, 904, 1098 XH Amsterdam, The Netherlands 3Nikhef, Theory Group, Science Park 105, 1098 XG, Amsterdam, The Netherlands 4Departament de Física Quàntica i Astrofísica, Universitat de Barcelona, 08028 Barcelona, Spain 5Institut de Ciències del Cosmos, Universitat de Barcelona, 08028 Barcelona, Spain 6Van Swinderen Institute for Particle Physics and Gravity, 3, 9747 AG Groningen,The Netherlands 7 (EDM) measurements using paramagnetic molecules have significantly advanced over the last decade. Traditionally, these experiments have been analyzed in terms of the electron EDM. However, paramagnetic molecules are also sensitive to hadronic sources of chargeparity (CP) violation, highlighting the need for a new framework to interpret the experimental results. In this Letter, we introduce an effective field theory framework to relate molecular EDMs to the EDMs of neutrons and protons. We identify the dominant contributions through power counting and pinpoint the necessary nuclear matrix elements. As a practical application, we employ the nuclear shell model to calculate these nuclear matrix elements for the polar molecule BaF. Finally, we estimate the limits on the nucleon EDMs set by current molecular EDM experiments. I. INTRODUCTION Electric dipole moment (EDM) experiments are extremely sensitive probes of new sources of chargeparity (CP) violation and indirectly probe beyond-theStandard-Model (BSM) physics at very high scales of up to ∼100 TeV [1, 2]. Recent years have seen impressive experimental progress using polar molecules which benefit from large internal electric fields that amplify the CP-violating signal [3-7]. EDMs of paramagnetic systems, which have one unpaired electron, are mainly interpreted in terms of the electron EDM. Current measurements lead to a strong bound on the electron EDM, |de| < 4.1 · 10-30 e cm, and future experiments aim to improve this by one to two orders of magnitude [6-11]. This constraint is 4 orders of magnitude more stringent than the neutron EDM limit [12]. Traditionally, paramagnetic systems have not been used to constrain hadronic sources of CP violation, such as the quantum chromodynamics (QCD) ̄θ term within the SM or higher-dimensional quark-gluon operators that arise from heavy BSM physics. This is because of the assumption that far stricter limits can be obtained through the EDMs of the neutron or diamagnetic atoms. That being said, paramagnetic systems are sensitive to hadronic sources of CP violation through the CP-odd electronnuclear force they induce [13-15]. While this force is typically strongly suppressed, the rapid progress in paramagnetic EDM experiments might make it the best way to search for hadronic sources of CP violation in the future. However, the current theoretical description of the CP-odd electron-nuclear force is still at a very rudimentary stage. In this Letter, we systematically derive this force as induced by the EDMs of neutrons and protons, making it possible to constrain these EDMs with paramagnetic molecular EDM experiments. As the problem involves a multitude of well-separated energy scales, it can be systematically described using effective-field-theory (EFT) techniques. We show that this connection requires the calculation of a set of nuclear matrix elements (NMEs) that are different from the ones involved in the Schiff moments of diamagnetic systems [16-18]. As an explicit example, we compute the NMEs for the polar molecule BaF, which is being targeted by the NL-eEDM collaboration [8]. II. EFFECTIVE FIELD THEORY The calculation of molecular EDMs in terms of fundamental sources of CP violation involves widely separated energy scales. These range from the BSM and electroweak scales (Λ and MW ) to low-energy scales such as the electron mass or electron binding energy O(α2 emme). The atomic nucleus gives rise to additional scales associated with the chiral-symmetry-breaking scale Λχ ∼mN ∼1 GeV (comparable to the nucleon mass), the pion mass mπ ∼γ ∼100 MeV (comparable to the nuclear binding momentum) and the scale of nuclear excitations m2 π/mN ∼O(MeV). Within the SM, the most relevant source of CP violation is the QCD ̄θ term, as CKM-induced (paramagnetic) EDMs are orders of magnitude too small to be detected by current and envisioned experiments [19]. BSM sources of hadronic CP violation can, at energies well below Λ, be described by effective operators of space-time dimension six. They have been classified and evolved to lower energies in a series of previous works [20-23]. At energies slightly above Λχ, the most relevant hadronic operators 16 Oct 2025 2 are the (chromo-)electric dipole moments of quarks, the Weinberg three-gluon operator [24], and several CP-odd four-quark interactions [20]. At energies < Λχ, these effective operators can be matched to a χEFT Lagrangian describing CP-violating interactions among the relevant low-energy degrees of freedom (light mesons, nucleons, photons, electrons) [1, 20]. For our purposes, the most relevant interactions are given by Lχ = ̄g0 ̄Nτ aNπa + ̄g1 ̄NNπ0 + ̄g0η ̄NNη +2 ̄N(d0 + d1τ 3)vμSνNFμν , (1) where the first line describes three CP-odd mesonnucleon interactions, and the second line, respectively, the isoscalar and isovector nucleon EDM. We use the non-relativistic nucleon doublet N = (p, n)T with spin Sμ = (0, σ/2) and velocity vμ = (1, 0), as well as the pion triplet πa and the eta meson η. The paramagnetic EDMs induced by the mesonnucleon interactions in Eq. (1) arise mainly through intermediate CP-odd electron-nucleon interactions, which take on the form L = GF √ 2 ̄eiγ5e ̄N C0 SP + C1 SPτ 3 N . (2) The nucleon EDMs in Eq. (1) give rise to contributions at longer distance scales through the diagrams in Fig. 1. They induce effective interactions between the nucleus and the electrons, i.e. the nuclear equivalent of C0,1 SP , which we denote by ̄CSP, see Eq. (7). To systematically compute the various contributions, it is useful to consider different photon modes depending on the scaling of their momentum qμ γ = (q0 γ, qγ). We identify three regions that give relevant contributions 1. soft photons: q0 γ ∼|qγ| ∼mπ, 2. ultrasoft photons: q0 γ ∼|qγ| ∼m2 π/mN, 3. potential photons: q0 γ ∼q2 γ/mN|qγ| ∼mπ , and we define Q ∼mπ ∼γ and q ∼Q2/mN. The CP-odd meson-nucleon interactions in Eq. (1) contribute to C0,1 SP through diagrams involving a meson exchange or a pion loop in combinations with the exchange of two photons in the ultrasoft or soft region. These diagrams were first considered in Ref. [13] and later computed with heavy-baryon chiral perturbation theory in Ref. [15]. In addition, integrating out the mesons leads to renormalization of nucleon EDMs [25-27], effectively shifting d0,1 → ̄d0,1, where the bar denotes the renormalized LECs. In what follows, we use ̄d0,1 as the physical nucleon EDMs. In this Letter, we focus on additional contributions to ̄CSP from the nucleon EDMs, which arise through the topologies shown in Fig. 1a and 1b. These diagrams are captured by an effective action of the form ⟨hf(pf)e(p′ e)|iSeff|hi(pi)e(pe)⟩= e3 2 Z xi ⟨hfe| (3) ×T h ̄e /Ae(x1) ̄e /Ae(x2)L(d0,1) χ (x3) (AμJμ em) (x4) i |ehi⟩, where we integrate over all x1,2,3,4, hi,f denote the initial and final nuclear states (for EDMs we have the nuclear ground state |hi⟩= |hf⟩= |0+⟩) and Jμ em denotes the nuclear electromagnetic current. Diagrams involving nucleon EDMs and photons with soft momenta are subleading as they require the inclusion of additional pions. Power counting gives the expected size of the potential and ultrasoft contributions C(pot) SP , C(usoft) SP = meα2μi ̄di eGF mN 4π Q , 1 q , (4) where μi are the nucleon magnetic dipole moments (MDMs) in units of the nuclear magneton. Numerically 4πq ∼Q and these estimates are rather close, but, as we will see, they do not capture possible coherent enhancements. Potential region: To evaluate the potential contributions, we can use the so-called method of regions to expand the amplitude in small ratios of scales, such as q0 γ/|qγ|. After doing so, there are no contributions from diagrams where the nucleon EDM and Jem attach to the same nucleon (the potential region arises from picking up the poles of nucleon propagators, which can always be avoided in these one-body diagrams). There are, however, two-nucleon effects through the diagram in Fig. 1a. Due to spin and parity constraints, the first contributions arise from the nucleon magnetic moments, which appear in Jμ em at next-to-leading order O(Q/mN). This results in the following contribution to the amplitude1 Apot = -⟨hf|V |hi⟩ ̄u(p′ e) 1 -v · (p′ e -pe) 2me /v iγ5u(pe) , where we take the limit p′ e -pe ≪me in what follows, while V denotes the potential between the two interacting nucleons. In momentum space2 V = 4e4me 9mN X i̸=j μ(i)D(j) |q|4 σ(i) · σ(j) -1 4S(ij) , (5) where i, j label the nucleons, qi = p′ i -pi is the exchanged momentum and we define the combination 1 We define p2Ef2Ei⟨hfe|Seff|hie⟩= (2π)4δ4(pf +p′ e-pi-pe)A, with the nuclear states satisfying ⟨p|q⟩= (2π)3δ3(p -q). 2 Strictly speaking, the momentum space potential is infrared divergent. However, performing the Fourier transform in dimensional regularization leads to a potential in coordinate space that is IR finite. It is also possible to deal with the potential in momentum space by defining a subtraction procedure, see App. C of Ref. [28] where a similar potential was encountered. 3 (a) (b) (c) FIG. 1: Contributions to ̄CSP arising from the nucleon EDMs. We denote electrons by single and nucleons by double straight lines, nuclei by gray ovals (in Fig. 1a) or bars (in Fig. 1b-1c), and photons by wavy lines. The black circle stands for the nucleon MDM, while the yellow, magenta, and blue squares indicate the CP-violating vertices: nucleon EDM, βv and ̄CSP effective vertices (see Eq. (7)), respectively. Fig. 1a shows the two-nucleon potential-region contribution, and Fig. 1b the ultrasoft one. Fig. 1c shows the two diagrams relevant to the matching and running of diagram 1b in an EFT with the nuclear ground state as the remaining degree of freedom. q = (qi -qj)/2. In addition, D = ( ̄d0 + ̄d1τ3)/e, μ ≡ 1+κ0 2 + 1+κ1 2 τ3 describe the EDM and MDM operators (with κ0 = -0.12 and κ1 = 3.7), while S(ij) = σ(i) · σ(j) -3q · σ(i) q · σ(j)/q2. These contributions are thus determined by the NME of V , which scales as 1/|q|4. The form of this two-body potential is similar to the NMEs appearing in neutrinoless double-β decay [29] or radiative corrections to superallowed β decays [28, 30] although with different isospin and/or q dependence. The many-body techniques needed to compute such NMEs, including ab-initio approaches, have developed significantly in the last decade and can be directly applied to these EDM calculations [29, 31]. Ultrasoft region: In this region, we expand in small ratios of scales, such as q0 γ/mπ ∼|qγ|/mπ and me/mπ. Photons with this (small) momentum scaling can be thought of as coupling to the nucleus as a whole, instead of the individual nucleons. After inserting a complete set of (nuclear) states between the hadronic operators and working out the time-ordered product in Eq. (3), we again find that the leading contributions involve the magnetic moments, Ausoft = -2ie4 mN Z d4k (2π)4 ̄u(p′ e)γλ(/k + me)γρu(pe) εσαβηvβ (k2 -m2e)(k -pe)2(k -p′e)2 X n " (pe -k)σ(p′ e -k)μgνλgαρ × ⟨hf|Dμν|n⟩⟨n|M η|hi⟩ v · l+ -En + iε + ⟨hf|M η|n⟩⟨n|Dμν|hi⟩ v · l--En + iε + n (pe ↔p′ e, l+ ↔l-, α ↔ν o# , (6) where |n⟩denote intermediate nuclear |1+⟩states with energies En, while l+ = pi + pe -k, l- = pi - p′ e + k. Furthermore, M η ≡ ̄Nμ SηN(0) and Dμν = ̄ND (vμSν -vνSμ) N(0) denote the MDM and EDM operators. Although the appearing integrals can be evaluated using known techniques [32], the expressions are rather unwieldy. They greatly simplify if there is a hierarchy between the nuclear excited states and the electron momenta, pe ∼me ≪∆n = |En -Ei|, which is a good approximation for 138Ba as discussed in Sec. III. Likewise, the relevant excited states in magnetic-dipole transitions -also driven by the spin operator- in isotopes of Yb, Hf, and Th with an even number of neutrons also enter at about 2 MeV or higher energies [33-35]. In this case, Eq. (6) can be captured by a low-energy nuclear EFT in which the excited nuclear states have been integrated out, but still contains electrons, ultrasoft photons, and the ground state of the nucleus. The relevant interactions in this theory can be written as LΨ = Ψ† i GF √ 2 ̄CSP ̄eiγ5e + βv vαFαβvλεβλμνFμν Ψi ,(7) where Ψi denotes the spin-0 field describing the nucleus,3 ̄CSP describes the nuclear version of C0,1 SP , while βv has a similar form as the nuclear polarizability but violates CP. At the scale μ = ∆n, βv obtains a contribution from integrating out the excited states at tree level, while ̄CSP arises from Eq. (6). After expanding in me/|En -Ei|, this expression simplifies and the remaining integrals are 3 We describe the nucleus non-relativistically, so that the kinetic term takes the form L(0) Ψ = Ψ† i iv · DΨi. This ensures that ⟨0|Ψi(x)|p⟩= e-ip·x and implies the field has dimension [Ψi] = 3/2, so that [ ̄CSP ] = 0. 4 of the form In ≡ Z ddk (2π)d 1 (k2)n 1 v · k -∆, (8) which are evaluated as [36] In = 2i(-1)n+1(2∆)d-2n-1 (4π)d/2 Γ(2n + 1 -d)Γ(d/2 -n) Γ(n) .(9) All in all, matching the nucleon-level theory to the EFT without excited states then gives at a scale μ ≃∆n, GF √ 2 ̄CSP = -e4me 4π2mN X n An ∆n 4 -3 log 4∆2 n μ2 , (10) βv = e2 mN X n An ∆n , An = -⟨hi|Dσ|n⟩· ⟨n|μσ|hi⟩ 12 . These interactions can be evolved from μ ∼∆n to lower energies, μe ∼me, using the renormalization group equation (RGE), d ̄CSP (μ) d ln μ = 3 √ 2e2meβv 4π2GF . (11) This RGE arises through the loop diagram of Fig. 1c, which allows βv to contribute to ̄CSP . The amplitude at low scales, μe ∼me, can finally be expressed as the sum of ̄CSP (me) and a loop contribution due to βv. We capture the total combination of the ultrasoft and potential contributions by an effective contact interaction, such that Atotal = GF √ 2 ̄Ceff SP ̄u(p′ e)iγ5u(pe) with ̄Ceff SP = - √ 2 GF " 4α2me mN X n An ∆n 3 ln m2 e 4∆2n -1 +⟨hi|V |hi⟩ # (12) which is independent of the renormalization scale μ. We stress that this is the effective interaction between electrons and the nucleus as a whole and differs from Eq. (2), which is the coupling to individual nucleons. Evaluating the ultrasoft region thus requires the excited state energies, ∆n, and the set of nuclear matrix elements of the one-body operator ∼⟨hi|σ|n⟩contained in An. These have a form similar to the leading two-neutrino doubleβ, and double magnetic-dipole NMEs [37-39] and of subleading NMEs of the neutrinoless double-β decay [40, 41]. Therefore, similar many-body methods used in these studies can be applied here. Eq. (12) is the main result of this work and makes it possible to connect nucleon EDMs to measurements of paramagnetic molecules. III. NUCLEAR MATRIX ELEMENTS We now focus on the polar molecule BaF, which is targeted by the NL-eEDM collaboration [8]. The heaviest atom in the molecule, 138Ba, has a magic neutron number, and it is just two neutrons away from 136Ba, the well-studied [42] final state of the double-β decay of 136Xe. We calculate the nuclear excitation energies and all necessary NMEs with the nuclear shell model [43]. We use a configuration space that comprises the singleparticle orbitals 1d5/2, 0g7/2, 2s1/2, 1d3/2, and 0h11/2 for both neutrons and protons with a 100Sn core. We consider three effective interactions previously tested in this mass region: GCN5082 [44], QX [45] and Sn100pn [46]. We present details of the calculation in the Supplemental Material, and here highlight the main results. The value of the ultrasoft NME is ̄Cusoft SP = (67 ± 28) dp (e fm)-1 , (13) where dp = ̄d0+ ̄d1. We only find sensitivity to the proton EDM because, in our calculation, the 82 neutrons form a closed shell, as 138Ba is magic in neutrons. This is also why the first intermediate 1+ excited state appears around 2.5 MeV. The largest contribution to the ultrasoft NME arises from states around En = 4.5 MeV ≫me, justifying our approximation, and higher-energy states only contribute mildly. We show the cumulative contribution from the excited-state spectrum in the Supplemental Material. The potential contribution evaluates to ̄Cpot SP = [(-433 ± 5) dp + (387 ± 0.4) dn] (e fm)-1 , (14) where dn = ̄d0 - ̄d1. The small uncertainties are solely from the shell-model calculations but do not capture possible higher-order corrections. Compared to the ultrasoft regime, the potential contribution is dominant. This is because of the coherent nature of the potential NME, which scales linearly with the total number of protons, Z (dp term), or neutrons, N (dn term), in the nucleus. The coherence appears as most NME contributions stem from proton-proton and neutron-neutron pairs, prevalent in nuclei due to the attractive pairing interaction. This scaling is in rough agreement with the estimate of Ref. [13], as well as an evaluation of the potential of Eq. (5) in a Fermi gas state4. Our many-body calculations, which also cover nuclei lighter than 138Ba, suggest that nucleusdependent effects can correct this estimate by up to 20%. The coherent character makes potential NMEs less dependent on the details of the nuclear structure, reducing their relative uncertainty with respect to ultrasoft NMEs (the very small error in the dn potential NME is because in our calculation the 82 neutrons form a closed shell). We provide more details in the Supplemental Material. While we do not expect any breakdown of the scaling behavior discussed above, explicit calculations of ultrasoft and potential NMEs in heavier systems such as Th or Hf are required to confirm the dominance of the potential contributions. 4 We thank J. Engel for discussions on this point. 5 The expected sensitivity of the BaF experiment is an electron EDM equivalent of de ≤10-30 e cm [8]. Using the NME calculations of this Letter, this would correspond to a sensitivity to the nucleon EDMs |dp|BaF < 8.4 · 10-24 e cm and |dn|BaF < 8 · 10-24 e cm. While we do not have shell-model calculations for the most precise experiment based on HfF+, we can use the linear Z and N dependence of the potential NME to estimate |dp|HfF+ ≲1.6 · 10-23 e cm , |dn|HfF+ ≲1.6 · 10-23 e cm , (15) roughly two orders of magnitude weaker than the proton EDM limit set by 199Hg [47] and three orders than the direct neutron EDM limit [12]. Considering the past and anticipated progress in molecular EDM experiments, with projected improvements of two to three orders of magnitude within a decade [48], these gaps are not insurmountable. IV. DISCUSSION In this Letter, we have developed a systematic method to compute the contributions of nucleon EDMs to paramagnetic molecular EDMs. Generally, however, there are multiple other sources of CP violation, meaning that the measurement of a nonzero EDM in any system would raise the question of what the underlying mechanism is. It has been shown that the ratio of different paramagnetic systems can be used to unravel the electron EDM contribution from CP-odd electron-nucleon interactions [49, 50], while the ratio of nuclear to nucleon EDMs can separate different CP-odd sources at the quark-gluon level [51]. Based on our results, we devise a new strategy to identify the underlying source of CP violation. Besides the diagrams calculated in this Letter, there appear contributions from meson-exchange diagrams [13, 15]. Depending on the underlying CP-violating source, the ratio of meson to nucleon EDM contributions varies. We can be most concrete for the QCD ̄θ term, where the sizes of the low-energy constants in Eq. (1) are known relatively well [52-54]. For BaF, the meson diagrams give a contribution ̄Cmeson SP (BaF) = (220±62)·10-2 ̄θ [15]. We can compare this to the nucleon EDM contributions if we insert the lattice-QCD prediction for the nucleon EDMs [55, 56], which results in ̄Cpot+usoft SP = -(196 ± 54) · 10-2 ̄θ . (16) This contribution is comparable in size to the mesonexchange diagrams but comes with an opposite sign, so the total contribution is suppressed. This accidental cancellation is specific to the ̄θ term and not expected for other mechanisms of CP violation. We combine all contributions to compute the equivalent electron EDM dequiv e ≡rmol ̄CSP/A [57], which is convenient as paramagnetic EDM searches are usually interpreted as limits on de. For BaF, rBaF = 4.46[18] · 10-21e cm [58], and we obtain dequiv e ( ̄θ) = (7.5 ± 27) · 10-24 ̄θ e cm. -1.0 -0.5 0.0 0.5 1.0 -0.02 -0.01 0.00 0.01 0.02 FIG. 2: The ratio between dequiv e and dn, induced by various possible underlying sources of CP violation: the ̄θ term (blue band), the up quark EDM (orange) or chromoEDM (grey). The plots are based on Refs. [59-61] regarding QCD matrix elements connecting the CP-violating sources to CP-violating hadronic couplings. For other hadronic sources of CP violation, power counting arguments give insight into the ratio of mesonic to nucleon EDM contributions. For example, the quark chromo-EDM breaks CP and isospin symmetry. The meson-nucleon interactions in Eq. (1) are the leading CP-violating hadronic interactions [20, 60] and their contributions dominate the paramagnetic EDMs. On the other hand, for quark EDMs and the Weinberg operators the mesonic interactions are suppressed by, respectively, α/π (electromagnetic suppression) and m2 π/Λ2 χ (chiral suppression)[20]. As such, the ratio of ̄Ceff SP (and thus dequiv e ) to the neutron EDM is different. We illustrate this in Fig. 2, where we plot the dequiv e , including both mesonic and nucleon EDM contributions, against dn. The bands correspond to scenarios where dequiv e and dn are sourced, respectively, by the ̄θ term, the up quark EDM, and the up quark chromo-EDM. Remarkably, the ratio of neutron to paramagnetic EDMs can identify the underlying hadronic source of CP violation. In conclusion, the EFT approach presented in this Letter allows one to derive the contribution from nucleon EDMs to paramagnetic EDMs in a systematic way. We have identified novel nuclear matrix elements that must be computed in order to interpret paramagnetic EDMs in terms of nucleon EDMs and, ultimately, in terms of hadronic sources of CP violation. While power counting arguments indicate that similar contributions would arise from potential and ultrasoft virtual photons, explicit shell-model calculations show that potential NMEs dominate because of the coherent contribution of most protons and neutrons in the nucleus. Experimental improvements of two-to-three orders of magnitude in paramagnetic molecular systems are needed to set competitive limits on nucleon EDMs. Finally, we have shown that ratios of paramagnetic-to-neutron EDMs can point towards the underlying mechanism of CP violation. 6 Acknowledgments We thank Jon Engel, Emanuele Mereghetti, and Rob Timmermans for important discussions. This work was partly funded by the Netherlands Research Council (NWO) under programme XL21.074, and by MCIN/AEI/10.13039/501100011033 from the following grants: PID2023-147112NB-C22; CNS2022135716 funded by the "European Union NextGenerationEU/PRTR", and CEX2024-001451-M to the "Unit of Excellence María de Maeztu 2025-2031" award to the Institute of Cosmos Sciences; and by the Generalitat de Catalunya, through grant 2021SGR01095. JdV and HM thank our colleagues from the NL-eEDM collaboration for discussions and encouragement. Appendix A: Ultrasoft contribution to ̄Ceff SP In the ultrasoft region, the contribution to the CP-odd electron-nucleus interaction is encoded in ̄Cusoft SP = - √ 2α2me 3mNGF M usoft SP , (A1) where M usoft SP is the NME between initial and final nuclear states, |hi,f⟩, defined by M usoft SP = X n ⟨hf|D(i)⃗σ|n⟩· ⟨n|μ(i)⃗σ|hi⟩ ∆n 1 + 3 ln 4∆2 n m2e . (A2) Here ∆n = En -Ei is the excitation energy of the intermediate nuclear states |n⟩, and the nucleon EDMs and MDMs D(i) = ( ̄d0+ ̄d1τ (i) 3 )/e and μ(i) = (μ0+μ1τ (i) 3 ) are defined in terms of the isoscalar and isovector nucleon EDMs, ̄d0,1, and the isoscalar and isovector anomalous magnetic moments, κ0,1, through μi = (1 + κi)/2. We focus on 138Ba, the heaviest nucleus in the diatomic polar molecule BaF used by the NL-eEDM experiment [8]. We compute the matrix elements involving the one-body spin operator in Eq. (A2) to the set of 1+ n nuclear excited states, as well as the relevant excited-state energies. Nonetheless, we have adjusted these energies to exactly reproduce the one of the first 1+ excited state of 138Ba. This changes M usoft SP just by 2%-7% depending on the effective interaction used. Since in our calculation for 138Ba the 82 neutrons completely fill the configuration space, we cannot create any particle-hole excitation involving a neutron orbital, meaning there is no sensitivity to dn. Therefore, for 138Ba Eq. (A2) reduces to M usoft SP =( ̄d0 + ̄d1)(μ0 + μ1)mσ = mσμpdp , (A3) with 2μp = κ0 + κ1 + 2 (likewise, 2μn = κ0 -κ1) and mσ defined by mσ = -1 e X n ⟨0+ 1 |⃗σ |1+ n ⟩2 ∆n 1 + 3 ln 4∆2 n m2e , (A4) GCN5082 QX Sn100pn 138Ba dp 61.0 97.0 41.7 dn 0 0 0 106Sn dp 0 0 0 dn -90.0 -89.6 -55.5 104Te dp 55.3 66.0 53.8 dn -43.7 -54.8 -46.9 132Te dp 11.9 11.0 6.5 dn -24.9 -16.2 -30.2 TABLE I: ̄Cusoft SP results for various nuclei with similar mass number as 138Ba, obtained with three different shell-model Hamiltonians [44-46]. Here units are (e fm)-1. 2 3 4 5 6 7 0 20 40 60 80 100 En(MeV) ̄Cusoft SP dp(efm)-1 GCN QX Sn100pn FIG. 3: ̄Cusoft SP as a function of the excitation energy of the intermediate states, for three shell-model Hamiltonians. where we drop the isospin operator as only protons contribute to the NME. As indicated by Eq. (A4), different contributions cannot cancel. Table I presents the calculated ̄Cusoft SP values for 138Ba. The results obtained with the three different shell-model Hamiltonians used differ, at most, by about a factor two. This shows a significant sensitivity to nuclear structure for the ultrasoft NME. Figure 3 shows the cumulative sum of ̄Cusoft SP as a function of the excitation energy of the intermediate states. For the three nuclear Hamiltonians used, the behaviour is quite similar: a few states between 4 -5 MeV dominate, with lower- and higher-energy states contributing little. Additionally, Table I also presents the results for the ultrasoft NME in other nuclei, 106Sn, 104Te and 132Te, using the same configuration space as for 138Ba. Our results indicate NME values comparable to the 138Ba ones. The theoretical uncertainty due to the nuclear Hamiltonian used is also comparable to the one found for 138Ba, 7 highlighting again the sensitivity of the NME to nuclear structure effects. Appendix B: Potential contribution to ̄Ceff SP In the potential region the contribution to the CPviolating electron-nucleus interaction is given by ̄Cpot SP = - √ 2 GF ⟨hi|V |hi⟩, (B1) where in coordinate space V (⃗r) = -e4me 18πmN X i̸=j μ(i)D(j) |⃗r| σ(i) · σ(j) + 1 16S(ij)(r) . (B2) with S(ij) = 3ˆr · ⃗σiˆr · ⃗σj -⃗σi · ⃗σj. For convenience we define N (ij) = σi · σj + S(ij)/16 and rewrite Eq. (B1) as ̄Cpot SP = √ 2e4me 18πGF mN M pot SP , (B3) with M pot SP the expectation value of the operator OSP = ̄d0 e X i̸=j r μ0 + μ1 2 (τ i 3 + τ j 3) N (ij) + ̄d1 e X i̸=j r μ1τ i 3τ j 3 + μ0 2 (τ i 3 + τ j 3) N (ij) , (B4) or in terms of dp,n, OSP = dp 2e X i̸=j r μ0 + μ1τ i 3τ j 3 + μ1 + μ0 2 (τ i 3 + τ j 3) N (ij) + dn 2e X i̸=j r μ0 -μ1τ i 3τ j 3 + μ1 -μ0 2 (τ i 3 + τ j 3) N (ij). (B5) Again, for 138Ba we calculate M pot SP = ⟨0+| OSP |0+⟩ using the nuclear shell model. We note that both core and valence nucleons contribute to the potential NME. In fact, in the ideal case of a nucleus with fully-closed angular-momentum shells for both neutrons and protons, the P i̸=j σ(i) · σ(j) operator with the same isospin dependence as in Eq. (B4) would just count the number of proton pairs and neutron pairs coupled to spin zero. The corresponding NME is (-3Z μpdp-3N μndn)(fm/e), with separate coherent contributions of all protons and all neutrons in the nucleus. Table II presents the results of the shell-model calculations for 138Ba for the NMEs corresponding to each component of the operator in Eq. (B5), that is, mpot,p SP and mpot,n SP defined as M pot SP = mpot,p SP dp + mpot,n SP dn . (B6) σiσj Sij σiσj + 1 16Sij mpot,p SP mpot,n SP mpot,p SP mpot,n SP mpot,p SP mpot,n SP GCN5082 -1729 1537 326.5 -75.45 -1708 1532 QX -1714 1537 331.8 -83.28 -1694 1532 Sn100pn -1757 1537 324.5 -22.12 -1736 1535 TABLE II: Results for mpot,p SP and mpot,p SP in 138Ba in units of (fm/e). The Gamow-Teller and tensor components of the NME are given separately. V (r) = 1 V (r) = r mpot,p SP mpot,n SP mpot,p SP mpot,n SP 16O -66.96 45.84 -244.5 167.4 20Ne -82.15 56.23 -303.7 207.8 36S -115.3 114.6 -403.7 439.8 48Ca -167.4 141.1 -667.8 484.6 100Sn -367.8 250.0 -1274 849.3 132Sn -367.2 425.9 -1317 1502 138Ba -434.5 427.3 -1708 1532 TABLE III: Results for mpot,p SP and mpot,n SP in fm/e, labeled as V (r) = r, for several nuclei across the nuclear chart. In addition to 138Ba (see main text), for 20Ne and 36S we use an 16O core solved with the USDA[62] interaction in the sd shell, and for 48Ca we take a 40Ca core solved with the KB3G [63] interaction in the pf shell. 16O, 100Sn and 132Sn are described by a single Slater determinant. We also include results for the NMEs obtained without the radial part of the operator in units of 1/e, which we label as V (r) = 1. The results in Table II indicate that, in general, mpot SP NMEs are significantly larger than ultrasoft NMEs. This difference arises from the coherent contribution of all nucleons, in contrast to the non-coherent ultrasoft NME, which is dominated by a few components where the core does not contribute. Coherence also leads to very similar NMEs across the three shell-model Hamiltonians used, indicating that nuclear structure details are not very relevant for the potential NME. Indeed, for mpot,n SP the results are almost identical because in our calculations the 82 neutrons in 138Ba form a closed shell. In addition, Table II distinguishes the results for the Gamow-Teller and tensor spin structures. For both neutron and proton parts, the contribution of the tensor is very small. This suggests that the potential NME is mostly driven by pairs of nucleons coupled to spin zero, just as dictated by the Gamow-Teller operator. We explore the scaling of the potential NMEs with the number of protons and neutrons by calculating M pot SP for several nuclei in different mass regions. Table III lists 8 0 20 40 60 80 100 0 0.2 0.4 0.6 0.8 1 Z or N |mpot,x SP /μx|[fm e-1] p n ×103 FIG. 4: Absolute value for the proton (mpot,p SP ) and neutron (mpot,n SP ) contributions to M pot SP , divided by the corresponding nucleon MDM, as a function of the atomic or neutron number. The results cover all nuclei in Table III and also show the best linear fit (see text) and 95%CL prediction bands. the results, which suggest that the potential NME indeed increases linearly with Z and N. Figure 4 highlights this linear relation. It represents, for all nuclei, mpot,p SP and mpot,n SP , normalized by the proton and neutron magnetic moment, as a function of Z (mpot,p SP /μp) or N (mpot,n SP /μn). A good linear relation common to the proton and neutron parts of all nuclei emerges, best fitted to mpot,X SP = -9.74 X μX(fm/e), where X stands either for protons (Z, p) or neutrons (N, n). The linear relation confirms that the potential NMEs is largely dominated by spin-zero pairs of protons and neutrons, as it includes nuclei where all nucleons form pairs-such as 16O, 36S (neutrons) or 48Ca (protons)-because they fill angular-momentum-closed shells. The contribution of proton-neutron pairs is minor. Using the scaling function for the NMEs, we write a master formula one can use to compute the equivalent electron EDM dequiv e ≡rmol ̄CSP/A [57] for any system dequiv e = √ 2e4me 18πGF mN rmol A (-9.74) [ Z μp dp + N μn dn] fm e (B7) where A is the mass number of the heaviest nucleus in each system, and rmol is a molecular matrix element. For several molecules of experimental interest it is given by [50, 58]: rBaF = 4.46[18] · 10-21 e cm , rThO = 1.51[9] · 10-20 e cm , rHfF+ = 9.17[52] · 10-21 e cm . (B8) Additionally to the full potential operator, indicated by V (r) = r, Table III also presents NMEs for the operator just keeping the spin and isospin degrees of freedom, 0 0.2 0.4 0.6 dp cpot SP (r) 20Ne 48Ca 138Ba 0 2 4 6 8 10 12 14 16 18 -0.6 -0.4 -0.2 0 0.2 dn r[fm] cpot SP (r) FIG. 5: Normalized cpot SP (r) in units of fm-1 for 20Ne, 48Ca, and 138Ba for the contributions of the proton and neutron EDMs in B5. The dashed lines represent the normalized distributions with no radial dependence. but without radial dependence. These results, denoted by V (r) = 1, also indicate a linear dependence with the number of neutrons and protons. In fact, for nucleons in fully angular-momentum-closed shells the NMEs exactly fit to -3 X μXdX (fm/e), as expected. For all the nuclei in Table III, the results for mpot,p SP and mpot,n SP share a similar relation with the ones obtained for V (r) = 1, with a proportionality constant ∼(3.4 -3.9) fm. This common factor reveals the lack of additional scaling in the potential NMEs due to the radial part of the operator. Figure 5 further analyzes this aspect, showing the normalized radial distribution cpot SP (r) for 20Ne, 48Ca, and 138Ba, defined by cpot SP (r) = P i̸=j μ(i)D(j)rijN (ij)(rij)δ(r -rij) |M pot SP | . (B9) which fulfills the relation 1 = Z ∞ 0 cpot SP (r)dr . (B10) The radial distributions in Fig. 5 show that, regardless of the size of the nucleus, the dominant contribution to the potential NME, shown in solid curves, stems from nucleons relatively close to each other. This property is dictated by the spin part of the NME, indicated by the dashed curves in Fig. 5. Even though the full potential operator-including the radial part-gives more relevance to nucleons further apart, pairs of nucleons at short distances still dominate. 9 [1] M. Pospelov and A. Ritz, Annals Phys. 318, 119 (2005), hep-ph/0504231. [2] J. Engel, M. J. Ramsey-Musolf, and U. van Kolck, Prog. Part. Nucl. Phys. 71, 21 (2013), 1303.2371. [3] J. J. Hudson, D. M. Kara, I. J. Smallman, B. E. Sauer, M. R. Tarbutt, and E. A. Hinds, Nature 473, 493 (2011). [4] J. Baron et al. (ACME), Science 343, 269 (2014), 1310.7534. [5] W. B. Cairncross, D. N. Gresh, M. Grau, K. C. Cossel, T. S. Roussy, Y. Ni, Y. Zhou, J. Ye, and E. A. Cornell, Phys. Rev. Lett. 119, 153001 (2017), 1704.07928. [6] V. Andreev et al. (ACME), Nature 562, 355 (2018). [7] T. S. Roussy et al., Science 381, adg4084 (2023), 2212.11841. [8] P. Aggarwal et al. (NL-eEDM), Eur. Phys. J. D 72, 197 (2018), 1804.10012. [9] A. C. Vutha, M. Horbatsch, and E. A. Hessels, Phys. Rev. A 98, 032513 (2018), 1806.06774. [10] C. J. Ho, S. C. Wright, B. E. Sauer, and M. R. Tarbutt, Phys. Rev. Res. 5, 043233 (2023), 2306.02573. [11] M. Athanasakis-Kaklamanakis et al. (Europwan EDM projects) (2025), 2505.22281. [12] C. Abel et al., Phys. Rev. Lett. 124, 081803 (2020), 2001.11966. [13] V. V. Flambaum, M. Pospelov, A. Ritz, and Y. V. Stadnik, Phys. Rev. D 102, 035001 (2020), 1912.13129. [14] V. V. Flambaum, I. B. Samsonov, and H. B. Tran Tan, JHEP 10, 077 (2020), 2004.10359. [15] H. Mulder, R. Timmermans, and J. de Vries, JHEP 07, 232 (2025), 2502.06406. [16] L. I. Schiff, Phys. Rev. 132, 2194 (1963). [17] V. V. Flambaum, I. B. Khriplovich, and O. P. Sushkov, Sov. Phys. JETP 60, 873 (1984). [18] J. H. de Jesus and J. Engel, Phys. Rev. C 72, 045503 (2005), nucl-th/0507031. [19] Y. Ema, T. Gao, and M. Pospelov, Phys. Rev. Lett. 129, 231801 (2022), 2202.10524. [20] J. de Vries, E. Mereghetti, R. G. E. Timmermans, and U. van Kolck, Annals Phys. 338, 50 (2013), 1212.0990. [21] W. Dekens and J. de Vries, JHEP 05, 149 (2013), 1303.3156. [22] J. Kley, T. Theil, E. Venturini, and A. Weiler, Eur. Phys. J. C 82, 926 (2022), 2109.15085. [23] J. Kumar and E. Mereghetti, JHEP 09, 028 (2024), 2404.00516. [24] S. Weinberg, Phys. Rev. Lett. 63, 2333 (1989). [25] R. J. Crewther, P. Di Vecchia, G. Veneziano, and E. Witten, Phys. Lett. B 88, 123 (1979), [Erratum: Phys.Lett.B 91, 487 (1980)]. [26] K. Ottnad, B. Kubis, U. G. Meissner, and F. K. Guo, Phys. Lett. B 687, 42 (2010), 0911.3981. [27] E. Mereghetti, J. de Vries, W. H. Hockings, C. M. Maekawa, and U. van Kolck, Phys. Lett. B 696, 97 (2011), 1010.4078. [28] V. Cirigliano, W. Dekens, J. de Vries, S. Gandolfi, M. Hoferichter, and E. Mereghetti, Phys. Rev. C 110, 055502 (2024), 2405.18464. [29] M. Agostini, G. Benato, J. A. Detwiler, J. Menéndez, and F. Vissani, Rev. Mod. Phys. 95, 025002 (2023), 2202.01787. [30] V. Cirigliano, W. Dekens, J. de Vries, S. Gandolfi, M. Hoferichter, and E. Mereghetti, Phys. Rev. Lett. 133, 211801 (2024), 2405.18469. [31] V. Cirigliano et al., J. Phys. G 49, 120502 (2022), 2207.01085. [32] J. Zupan, Eur. Phys. J. C 25, 233 (2002), hepph/0202135. [33] R. D. Heil, H. H. Pitz, U. E. P. Berg, U. Kneissl, K. D. Hummel, G. Kilgus, D. Bohle, A. Richter, C. Wesselborg, and P. Von Brentano, Nucl. Phys. A 476, 39 (1988). [34] N. Pietralla et al., Nucl. Phys. A 618, 141 (1997). [35] A. Zilges, P. Von Brentano, C. Wesselborg, R. D. Heil, U. Kneissl, S. Lindenstruth, H. H. Pitz, U. Seemann, and R. Stock, Nucl. Phys. A 507, 399 (1990), [Erratum: Nucl.Phys.A 519, 848-848 (1990)]. [36] D. J. Broadhurst and A. G. Grozin, Phys. Lett. B 267, 105 (1991), hep-ph/9908362. [37] F. Šimkovic, R. Dvornický, D. Stefánik, and A. Faessler, Phys. Rev. C 97, 034315 (2018), 1804.04227. [38] S. e. Morabit, R. Bouabid, V. Cirigliano, J. de Vries, L. Gráf, and E. Mereghetti (2024), 2412.14160. [39] B. Romeo, J. Menéndez, and C. Peña Garay, Phys. Lett. B 827, 136965 (2022), 2102.11101. [40] W. Dekens, J. de Vries, D. Castillo, J. Menéndez, E. Mereghetti, V. Plakkot, P. Soriano, and G. Zhou, JHEP 09, 201 (2024), 2402.07993. [41] D. Castillo, L. Jokiniemi, P. Soriano, and J. Menéndez, Phys. Lett. B 860, 139181 (2025), 2408.03373. [42] E. Caurier, F. Nowacki, and A. Poves, Phys. Lett. B 711, 62 (2012), 1112.5039. [43] E. Caurier, G. Martinez-Pinedo, F. Nowacki, A. Poves, and A. P. Zuker, Rev. Mod. Phys. 77, 427 (2005), nuclth/0402046. [44] E. Caurier, F. Nowacki, A. Poves, and K. Sieja, Phys. Rev. C 82, 064304 (2010). [45] C. Qi and Z. X. Xu, Phys. Rev. C 86, 044323 (2012). [46] B. A. Brown, N. J. Stone, J. R. Stone, I. S. Towner, and M. Hjorth-Jensen, Phys. Rev. C 71, 044317 (2005), URL https://link.aps.org/doi/10. 1103/PhysRevC.71.044317. [47] B. Graner, Y. Chen, E. Lindahl, and B. Heckel, Physical Review Letters 116 (2016), ISSN 1079-7114, URL http: //dx.doi.org/10.1103/PhysRevLett.116.161601. [48] R. Alarcon et al., in Snowmass 2021 (2022), 2203.08103. [49] T. Chupp and M. Ramsey-Musolf, Phys. Rev. C 91, 035502 (2015), 1407.1064. [50] T. Fleig and M. Jung, JHEP 07, 012 (2018), 1802.02171. [51] J. de Vries, E. Mereghetti, R. G. E. Timmermans, and U. van Kolck, Phys. Rev. Lett. 107, 091804 (2011), 1102.4068. [52] J. Bsaisou, C. Hanhart, S. Liebig, U. G. Meissner, A. Nogga, and A. Wirzba, Eur. Phys. J. A 49, 31 (2013), 1209.6306. [53] J. de Vries, E. Mereghetti, and A. Walker-Loud, Phys. Rev. C 92, 045201 (2015), 1506.06247. [54] T. R. Richardson (2025), 2509.03613. [55] J. Dragos, T. Luu, A. Shindler, J. de Vries, and A. Yousif, Phys. Rev. C 103, 015202 (2021), 1902.03254. [56] J. Liang, A. Alexandru, T. Draper, K.-F. Liu, B. Wang, G. Wang, and Y.-B. Yang (χQCD), Phys. Rev. D 108, 094512 (2023), 2301.04331. [57] M. Pospelov and A. Ritz, Phys. Rev. D 89, 056006 10 (2014), 1311.5537. [58] P. A. B. Haase, D. J. Doeglas, A. Boeschoten, E. Eliav, M. Iliaš, P. Aggarwal, H. L. Bethlem, A. Borschevsky, K. Esajas, Y. Hao, et al., The Journal of Chemical Physics 155 (2021), ISSN 1089-7690, URL http://dx. doi.org/10.1063/5.0047344. [59] W. Dekens, J. de Vries, M. Jung, and K. K. Vos, JHEP 01, 069 (2019), 1809.09114. [60] M. Pospelov, Phys. Lett. B 530, 123 (2002), hepph/0109044. [61] S. Bhattacharya, K. Fuyuto, E. Mereghetti, and T. R. Richardson, Phys. Rev. C 112, 025501 (2025), 2504.01105. [62] W. A. Richter, S. Mkhize, and B. A. Brown, Phys. Rev. C 78, 064302 (2008). [63] A. Poves, J. Sánchez-Solano, E. Caurier, and F. Nowacki, Nuclear Physics A 694, 157 (2001).
2510.14931
Bo Wang1 Department of Mechanical Engineering, The City College of New York, The City University of New York, New York, NY 10031, USA email: bwang1@ccny.cuny.edu Tianyu Han Department of Mechanical Engineering, The City College of New York, The City University of New York, New York, NY 10031, USA email: than000@citymail.cuny.edu Guangwei Wang School of Mechanical Engineering, Guizhou University, Guiyang, 550025, China email: gwwang@gzu.edu.cn Further Results on Safety-Critical Stabilization of Force-Controlled Nonholonomic Mobile Robots In this paper, we address the stabilization problem for force-controlled nonholonomic mobile robots under safety-critical constraints. We propose a continuous, time-invariant control law based on the 𝛾𝑚-quadratic programming (𝛾𝑚-QP) framework, which unifies control Lyapunov functions (CLFs) and control barrier functions (CBFs) to enforce both stability and safety in the closed-loop system. For the first time, we construct a global, time-invariant, strict Lyapunov function for the closed-loop nonholonomic mobile robot system with a nominal stabilization controller in polar coordinates; this strict Lyapunov function then serves as the CLF in the QP design. Next, by exploiting the inherent cascaded structure of the vehicle dynamics, we develop a CBF for the mobile robot via an integrator backstepping procedure. Our main results guarantee both asymptotic stability and safety for the closed-loop system. Both the simulation and experimental results are presented to illustrate the effectiveness and performance of our approach. Keywords: safety-critical control, stabilization, control barrier functions, nonholonomic mobile robots 1 Introduction The study of control problems for nonholonomic systems has been carried out since the early 1980s — see [1] for a survey. The main challenge is that, although these systems are controllable, it is impossible to achieve asymptotic stability of an isolated equilib- rium using a continuous, time-invariant state feedback control law due to Brockett’s necessary condition on stabilization [2]. Hence, the stabilization of nonholonomic mobile robots and the construc- tion of corresponding control Lyapunov functions (CLFs) remain challenging problems of significant ongoing interest in the context of robustness analysis and controller design. See [3] for a con- tinuous time-varying control method and [4] for a time-invariant control approach, along with corresponding strict Lyapunov con- structions. Ensuring operational safety while achieving control objectives is a fundamental requirement in autonomous control systems. For instance, in practical applications, safety constraints — such as ob- stacle and collision avoidance between vehicles — must also be considered in addition to the set-point stabilization or trajectory tracking task for mobile robots [5,6]. Achieving satisfactory con- trol performance often requires aggressive maneuvers, while safety necessitates conservative actions and strict constraint adherence. The tension between performance and safety is particularly acute in mobile robots, whose nonholonomic dynamics inherently pre- vent continuous, time-invariant feedback from stabilizing the target configuration. As a result, enforcing both asymptotic stability and safety constraints simultaneously is far more challenging than in fully-actuated holonomic systems. In the past decade, control barrier function (CBF)-based tech- niques have proven effective for systematically enforcing safety constraints [7,8]. Since then, CBFs have been applied in a vari- ety of domains, including walking robots [9], automotive systems [4,10], stochastic systems [11], and multi-agent systems [6], to name a few. To “mediate" the conflict between the safety con- straints and the control objective (e.g., set-point stabilization, tra- jectory tracking, or mere open-loop steering of the system), numer- ous quadratic program (QP)-based control techniques have been developed in the literature [7,8,12]. According to different types of QP formulation, the existing results may be categorized into 1Corresponding Author. CLF-CBF-based QP [7–9,12], CBF-based QP [9,12–14], and 𝛾𝑚- CLF-CBF-based QP (𝛾𝑚-QP) methods [4,15]. Among the various methods, the 𝛾𝑚-QP approach is preferred in many applications due to its ability to guarantee asymptotic stability of the closed- loop system and its robustness in handling disturbances. Furthermore, applying CBFs directly to mobile robots presents significant challenges due to their inherent nonholonomic con- straints, which complicate establishing a direct relationship be- tween safety constraints and control inputs, particularly when the system’s relative degree exceeds one [16]. To address this issue, high-order CBFs have been developed in [17]. This extension en- sures the forward invariance of appropriately defined, dynamically extended safe sets, thereby enabling controller synthesis via QP even for systems with higher relative degrees. However, construct- ing suitable high-order CBFs can be intricate, often requiring mul- tiple differentiations of the barrier function and complex modifica- tions to the safe set definition, which may hinder straightforward practical implementation. In [18], a safety-critical controller is designed for connected au- tomated vehicles, where the vehicles are modeled by integrators. Using the double-integrator model, the multi-agent collision avoid- ance problem has been studied via CBF approaches in [6]. Based on the first-order unicycle (kinematic) model, CBF-based obstacle avoidance has been addressed in [19,20]. In particular, in [19], a CBF backstepping approach is proposed for the kinematic unicycle model. In [20], an obstacle-avoidance strategy for nonholonomic- integrator vehicles is proposed by regulating vehicle speed and orientation separately via two CBFs while maintaining nonzero forward speed in dynamic environments using velocity obstacles. However, none of these existing works provides guarantees of asymptotic stability for the closed-loop system. Moreover, a more realistic model for vehicle applications is to consider the second- order full dynamical (kinematics-kinetics) model of the unicycle [21,22]. However, to the best of the authors’ knowledge, few stud- ies have addressed the stabilization problem for force-controlled nonholonomic vehicles subject to safety-critical constraints. In this paper, we address the stabilization problem for force- controlled nonholonomic mobile robots under safety-critical con- straints. We propose a continuous, time-invariant control law based on the 𝛾𝑚-QP framework to enforce both stability and safety in the closed-loop system. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 1 The main contributions of this work include: (i) For the first time, we construct a global, time-invariant, strict Lyapunov function for the closed-loop nonholonomic mobile robot system with a nominal stabilization controller in polar coordinates. This strict Lyapunov function then serves as the CLF in the 𝛾𝑚-QP design. (ii) We present experimental results to validate the effectiveness of the proposed approach and to demonstrate the perfor- mance of the developed controller. The experiments show that the proposed method is applicable to scenarios such as autonomous parking with obstacle avoidance and inter- vehicle collision avoidance. The original 𝛾𝑚-QP framework is based on reciprocal CBFs [15]. However, in recent years, there has been a shift from reciprocal CBFs to zeroing CBFs, as reciprocal CBFs may exhibit poor ro- bustness properties. Hence, in this work, we present the 𝛾𝑚-QP approach within the framework of zeroing CBFs. Furthermore, distinct from our previous work [4], we also construct the zeroing CBF using the integrator backstepping technique. In our previous work [4], we construct a strict Lyapunov func- tion for the closed-loop mobile robot with a nominal stabilization controller in the large (i.e., on any compact subset of the state space), where it serves as a CLF in the safety-critical control de- sign. However, the constructed Lyapunov function is not global, meaning that it depends on the initial configuration of the vehicle. To the best of the authors’ knowledge, a global, time-invariant, strict Lyapunov function has not yet been reported in the litera- ture. Moreover, the problem of eliminating potential undesired equilibria, e.g., via introducing additional constraints in the QP [23], remains out of scope of this Letter. The structure of the remainder of the paper is as follows: Section 2 presents the problem formulation and preliminaries on safety- critical control. Section 3 presents the main results, including the constructions of the CLF and CBF, and the controller design. Section 4 provides both simulation and experimental results that demonstrate the practical application of the theoretical develop- ments. Finally, Section 5 offers concluding remarks. 2 Preliminaries on Safety-Critical Control Notation: Let |·| denote the Euclidean norm on R𝑛. For a subset 𝑆⊂R𝑛, 𝜕𝑆represents the boundary of 𝑆, and int 𝑆represents the interior of 𝑆. K is the class of continuous functions R≥0 →R≥0 which is zero at zero and strictly increasing; K∞is a subset of the class K functions which are unbounded. For a matrix 𝑃∈R𝑛×𝑛, 𝜆𝑀(𝑃) represents the maximum eigenvalue of 𝑃. Throughout this article, we omit the arguments of functions when they are clear from the context. Let us consider a nonlinear control-affine system ̇𝑥= 𝑓(𝑥) + 𝑔(𝑥)𝑢, (1) where the state 𝑥∈R𝑛and the control 𝑢∈R𝑚. We assume that 𝑓: R𝑛→R𝑛and 𝑔: R𝑛→R𝑛×𝑚are locally Lipschitz and 𝑓(0) = 0. Recall that a 𝐶∞function 𝑉: R𝑛→R≥0 is said to be a (global) CLF for (1), if 𝑉is positive definite, proper, and satisfies the following implication: 𝐿𝑔𝑉(𝑥) = 0 =⇒𝐿𝑓𝑉(𝑥) + 𝛼(|𝑥|) < 0, ∀𝑥∈R𝑛\{0}, (2) where 𝛼∈K [24]. Safety can be formulated as the forward invariance of designated sets within the system’s state space. A set 𝐶⊂R𝑛is said to be forward invariant, if for each initial condition 𝑥◦∈𝐶, the resulting solution of (1) 𝑥(𝑡; 𝑥◦) ∈𝐶for all 𝑡≥0. If the set 𝐶is forward invariant, system (1) is said to be safe on the set 𝐶. Consider the safety set 𝐶defined as the 0-superlevel set of a 𝐶1 function ℎ: R𝑛→R, i.e., 𝐶:= {𝑥∈R𝑛: ℎ(𝑥) ≥0}. (3) The following definition is standard [8]. Definition 1 (CBF). Let 𝐶be defined by (3). Then, ℎis a (ze- roing) CBF for (1) if there exists 𝛼ℎ∈K such that the following implication holds: 𝐿𝑔ℎ(𝑥) = 0 =⇒𝐿𝑓ℎ(𝑥) + 𝛼ℎ(ℎ(𝑥)) ≥0, ∀𝑥∈𝐶. (4) An effective method for combining a CLF and a CBF was devel- oped in [15], known as the 𝛾𝑚-QP approach. The original 𝛾𝑚-QP formulation in [15] is based on reciprocal CBFs. Here, we restate the 𝛾𝑚-QP problem using zeroing CBFs for consistency with our framework as follows: min 1 2 (𝑢⊤𝑢+ 𝑚𝛿⊤𝛿) (5) s.t. 𝛾𝑓(𝐿𝑓𝑉(𝑥) + 𝛼(|𝑥|)) + 𝐿𝑔𝑉(𝑥)𝑢+ 𝐿𝑔𝑉(𝑥)𝛿≤0 −𝐿𝑓ℎ(𝑥) −𝛼ℎ(ℎ(𝑥)) −𝐿𝑔ℎ(𝑥)𝑢≤0 where 𝑚≥1, 𝛾𝑓is defined as 𝛾𝑓(𝑠) := 𝛾𝑠if 𝑠≥0 and 𝛾𝑓(𝑠) := 𝑠 if 𝑠< 0, and 𝛾≥1. Due to the slack variable 𝛿, the 𝛾𝑚-QP problem (5) is always feasible. Note that in (5) we need 𝛾≥1 to overcome the impact of 𝛿when 𝐿𝑓𝑉(𝑥) + 𝛼(|𝑥|) is positive. The closed-form solution to the 𝛾𝑚-QP problem (5) can be obtained by applying the KKT conditions. The resulting control law given by (5) is Lipschitz continuous in every subset of the safe set 𝐶not containing the origin. 3 Problem Formulation and Main Results Consider the nonholonomic mobile robot system with kinemat- ics ⎧⎪⎪⎪⎨ ⎪⎪⎪⎩ ̇𝑥= 𝑣cos 𝜃, ̇𝑦= 𝑣sin 𝜃, ̇𝜃= 𝜔, (6) where (𝑥, 𝑦) ∈R2 denotes the Cartesian coordinates of the vehicle on the plane, 𝜃∈R denotes its orientation, 𝑣∈R and 𝜔∈R denote the linear and angular velocities of the vehicle, respectively. In addition, the kinetics of the vehicle are described by the force- balance equation [︃ 𝑚0 0 𝐼 ]︃[︃ ̇𝑣 ̇𝜔 ]︃ = 1 𝑟 [︃ 1 1 2𝑅−2𝑅 ]︃[︃ 𝜏𝑙 𝜏𝑟 ]︃ , (7) where 𝜏𝑙and 𝜏𝑟are the left and right wheel torques, respectively, 𝑚is the mass, 𝐼is the vehicle inertia, 𝑟is the wheel radius, and 𝑅 is the wheel axle length [25]. The proposed control scheme contains a feedback transformation that is designed as [︃ 𝜏𝑙 𝜏𝑟 ]︃ = 𝑟 2 [︃ 𝑚 𝐼 2𝑅 𝑚−𝐼 2𝑅 ]︃[︃ 𝑢𝑣 𝑢𝜔 ]︃ . (8) After substituting (8) in (7), it yields ̇𝑣= 𝑢𝑣, ̇𝜔= 𝑢𝜔. (9) The safety-critical stabilization problem entails designing a con- trol strategy that ensures the closed-loop system trajectories remain within a predefined safe set 𝐶, defined by (3), at all times 𝑡≥0, while simultaneously guaranteeing that the origin of the closed- loop system is asymptotically stable. In the 𝛾𝑚-QP framework, the CLF and CBF are individually constructed for the mobile robot system. Subsequently, the control input is synthesized by solving the 𝛾𝑚-QP described in (5). 2 / PREPRINT FOR REVIEW Transactions of the ASME 3.1 Construction of the global CLF. To address the non- holonomicity, we construct the CLF for the mobile robot in polar coordinates, where the position of the robot in polar coordinates is given by the distance to the origin 𝜌and the bearing angle 𝜓, i.e., 𝜌:= |(𝑥, 𝑦)|, 𝜓:= atan2(−𝑦, −𝑥), (10) where ‘atan2’ represent the 2-argument arctangent function. Defin- ing the variable 𝛼:= 𝜓−𝜃, the kinematics of the vehicle become ⎧⎪⎪⎪⎨ ⎪⎪⎪⎩ ̇𝜌= −𝑣cos 𝛼, ̇𝛼= 𝑣 𝜌sin 𝛼−𝜔, ̇𝜓= 𝑣 𝜌sin 𝛼. (11) We have the following result. Proposition 1 (Global CLF). Consider the mobile robot system (11) and (9). Then, there exists a constant ¯𝜇> 0 such that for all 𝜇∈(0, ¯𝜇], the function 𝑉: R>0 × R4 →R≥0, defined as 𝑉(𝜌, 𝛼, 𝜓, 𝑧, ˜𝜔) := 𝜇 ∫𝑊♯(𝜌,𝛼,𝜓) 0 (︃𝑒𝑠−1 𝑒𝑠 )︃ d𝑠+ 𝑈(𝑧, ˜𝜔) , (12) is a global CLF for (11) and (9) that satisfies the small control property, where ˜𝑣:= 𝑣−𝑣∗, ˜𝜔:= 𝜔−𝜔∗, 𝑧:= ˜𝑣/𝜌, 𝑊♯(𝜌, 𝛼, 𝜓) := ln(𝑊(𝜌, 𝛼, 𝜓) + 1), 𝑊(𝜌, 𝛼, 𝜓) := 𝑊1(𝜌, 𝛼, 𝜓) + 𝑊2(𝛼, 𝜓) + ∫𝑊1 (𝜌,𝛼,𝜓) 0 𝑄(𝑙)d𝑙, 𝑊1(𝜌, 𝛼, 𝜓) := 1 2 (︂ 𝜌2 + 𝛼2 + 𝜆𝜓2)︂ , 𝑊2(𝛼, 𝜓) := 𝑝11𝛼2 + 2𝑝12𝛼𝜓+ 𝑝22𝜓2, 𝑃:= ⎡⎢⎢⎢⎢⎢⎢⎣ 1 + 𝜆 2𝑘𝛼𝜆 1 2𝑘𝜌𝜆 1 2𝑘𝜌𝜆 𝑘2 𝛼+ 𝑘2 𝜌𝜆2 + 𝑘2 𝜌𝜆 2𝑘𝛼𝑘2𝜌𝜆 ⎤⎥⎥⎥⎥⎥⎥⎦ , 𝑄(𝑙) := 16 𝜋2 𝑘2 𝜌 𝑘𝛼 𝜆2𝜆2 𝑀(𝑃)𝑙, 𝑈(𝑧, ˜𝜔) := 1 2 (︃𝑧2 𝑘𝑧 + ˜𝜔2 𝑘𝜔 )︃ , 𝑣∗:= 𝑘𝜌cos(𝛼)𝜌, 𝜔∗:= 𝑘𝛼𝛼+ 𝑘𝜌sinc(2𝛼)(𝛼+ 𝜆𝜓), the parameter 𝜆≥1, the parameters 𝑘𝜌, 𝑘𝛼, 𝑘𝑧, and 𝑘𝜔are arbitrary positive constants, and 𝑃= [𝑝𝑖𝑗]. That is, 𝑝𝑖𝑗represents the (𝑖, 𝑗)-th entry of the matrix 𝑃2. Proof. See Appendix A. 2In this paper, ‘sinc(·)’ represents the unnormalized sinc function, which is defined as sinc(𝑠) := sin(𝑠)/𝑠if 𝑠≠0 and sinc(0) = 1. Note that the function sinc is smooth everywhere and globally bounded on R. 3.2 Construction of the CBF. Mechanical and robotic sys- tems often exhibit cascaded structures. The problem of construct- ing CBFs for such systems has been investigated in several works. For example, in [19,26], the authors propose a method for synthe- sizing zeroing CBFs for higher-order systems by leveraging CBFs designed for reduced-order models. In [4], a systematic procedure is proposed for constructing reciprocal CBFs for cascaded systems by using the CBF associated with the kinematic model through integrator backstepping. In this section, we construct the zeroing CBF for the mobile robot (6) and (9) in Cartesian coordinates. Following the similar integrator backstepping method, We have the following result. Proposition 2 (CBF). Consider the mobile robot system (6) and (9). Assume that the admissible set 𝐶0 is defined as the 0-superlevel set of a given continuously differentiable function ℎ0 : R2 →R, i.e., 𝐶0 := {(𝑥, 𝑦) ∈R2 : ℎ0(𝑥, 𝑦) ≥0}. (14) Then, the function ℎ: R4 →R given by ℎ(𝑥, 𝑦, 𝑣, 𝑤) := ℎ0(𝑥, 𝑦) −𝑙𝑣𝑣2 −𝑙𝜔𝜔2 (15) is a CBF for (6) and (9), where 𝑙𝑣, 𝑙𝜔are two positive constants. Proof. Let us define 𝑞:= [𝑥𝑦𝜃]⊤and v := [𝑣𝜔]⊤. Then, the kinematic system (6) can be expressed in the control-affine form ̇𝑞= 𝑓0(𝑞) + 𝑔0(𝑞)v, (16) where 𝑓0(𝑞) ≡0 and 𝑔0(𝑞) := [︄cos 𝜃0 sin 𝜃0 0 1 ]︄ . We first show that the function ℎ0 is a CBF for the kinematic system (16) on the set 𝐶0, assuming that the velocity v is the control input. This follows directly from the fact that the kinematic system (16) is driftless, i.e., 𝑓0 ≡0. As a result, the CBF condition (4) is trivially satisfied since 𝐿𝑓0ℎ0 ≡0, and for all (𝑥, 𝑦) ∈𝐶0 and any class-K function 𝛼ℎ, it holds that 𝛼ℎ(ℎ0(𝑥, 𝑦)) ≥0. Let us denote x := [𝑞v]⊤, 𝑢:= [𝑢𝑣𝑢𝜔]⊤, 𝐹(x) := [︃ 𝑓0(𝑞) + 𝑔0(𝑞)v 0 ]︃ , and 𝐺:= [︃ 0 𝐼 ]︃ . Then, the cascaded system (6) and (9) can be written as ̇x = 𝐹(x) + 𝐺𝑢. (17) Next, we verify the condition (4) for the function ℎand (17). Note that 𝐿𝐺ℎ= 𝜕ℎ 𝜕v = 0 implies that v = 0. Hence, on the set {v = 0}, we have (𝐿𝐹ℎ) |v=0 = 𝜕ℎ 𝜕𝑞( 𝑓0 + 𝑔0v) |︁|︁|︁|︁ v=0 = 0, (18) and ℎ|v=0 = ℎ0. That is, 𝛼ℎ(ℎ|v=0) = 𝛼ℎ(ℎ0(𝑥, 𝑦)) ≥0 for all (𝑥, 𝑦) ∈𝐶0. Therefore, we verify the implication (4) and thus, ℎis a CBF for the system (6) and (9). 3.3 Safety-Critical Control Design. We have constructed a global CLF and a zeroing CBF for the nonholonomic mobile robot system, as presented in Propositions 1 and 2, respectively. Based on these constructions, the safety-critical stabilization control law can be derived by solving the 𝛾𝑚-QP problem (5). It is worth noting that the original 𝛾𝑚-QP formulation in [15] is based on reciprocal CBFs. For completeness, we present parallel results of the 𝛾𝑚-QP problem (5) using zeroing CBFs. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 3 Theorem 1. Assume that the system (1) admits a CLF 𝑉(𝑥) and a CBF ℎ(𝑥), and that 0 ∈int 𝐶. Then, the 𝛾𝑚-QP problem (5) is feasible and the resulting control law is given by 𝑢★(𝑥) := ⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩ 0, 𝑥∈Ωclf cbf ∪{0}, − 𝑚 𝑚+ 1 ¯𝑎1 |𝑏1|2 𝑏⊤ 1 , 𝑥∈Ωclf cbf, − 𝑎2 |𝑏2|2 𝑏⊤ 2 , 𝑥∈Ωclf cbf, −𝜇1𝑏⊤ 1 −𝜇2𝑏⊤ 2 , 𝑥∈Ωclf cbf, (19) where 𝑎1 := 𝐿𝑓𝑉(𝑥) + 𝛼(|𝑥|), ¯𝑎1 := 𝛾𝑓(𝑎1), 𝑏1 := 𝐿𝑔𝑉(𝑥), 𝑎2 := −𝐿𝑓ℎ(𝑥) −𝛼ℎ(ℎ(𝑥)), 𝑏2 := −𝐿𝑔ℎ(𝑥), 𝜇1 := |𝑏2|2 ¯𝑎1 −𝑏1𝑏⊤ 2 𝑎2 (1 + 1 𝑚)|𝑏1|2|𝑏2|2 −|𝑏1𝑏⊤ 2 |2 , 𝜇2 := −𝑏1𝑏⊤ 2 ¯𝑎1 + (1 + 1 𝑚)|𝑏1|2𝑎2 (1 + 1 𝑚)|𝑏1|2|𝑏2|2 −|𝑏1𝑏⊤ 2 |2 , and Ωclf cbf := {𝑥∈R𝑛: 𝑎1 < 0, 𝑎2 < 0} , Ωclf cbf := {︃ 𝑥∈R𝑛: 𝑎1 ≥0, 𝑎2 < 𝑚 𝑚+ 1 𝑏2𝑏⊤ 1 |𝑏1|2 ¯𝑎1 }︃ , Ωclf cbf := {︃ 𝑥∈R𝑛: 𝑎2 ≥0, ¯𝑎1 < 𝑏1𝑏⊤ 2 |𝑏2|2 𝑎2 }︃ , Ωclf cbf := {︃ 𝑥∈R𝑛\Ωclf cbf : ¯𝑎1 ≥ 𝑏1𝑏⊤ 2 |𝑏2|2 𝑎2, 𝑎2 ≥ 𝑚 𝑚+ 1 𝑏1𝑏⊤ 2 |𝑏1|2 ¯𝑎1 }︃ . Furthermore, under the control law (19), the set 𝐶is forward in- variant. Moreover, if the CLF 𝑉satisfies the small control property and if we select 𝛾𝑚 𝑚+1 = 1, then the origin of the closed-loop system is asymptotically stable. Sketch of proof. The Lagrangian L for the 𝛾𝑚-QP (5) is given by L := 1 2 (𝑢⊤𝑢+ 𝑚𝛿⊤𝛿) + 𝜆1( ¯𝑎1 + 𝑏1(𝑢+ 𝛿)) + 𝜆2(𝑎2 + 𝑏2𝑢), (20) where 𝜆1, 𝜆2 ≥0 are scalar Lagrange multipliers. The KKT con- ditions are given by 𝜕L 𝜕𝑢= 𝑢⊤+ 𝜆1𝑏1 + 𝜆2𝑏2 = 0, (21a) 𝜕L 𝜕𝛿= 𝑚𝛿⊤+ 𝜆1𝑏1 = 0, (21b) 𝜆1𝐹1 := 𝜆1[ ¯𝑎1 + 𝑏1(𝑢+ 𝛿)] = 0, (21c) 𝜆2𝐹2 := 𝜆2[𝑎2 + 𝑏2𝑢] = 0. (21d) The unique optimal solution 𝑢★(𝑥) in (19) is derived directly from (21). In fact, the KKT conditions in (21) are necessary and suffi- cient for 𝑢★(𝑥) to be an optimal solution to the 𝛾𝑚-QP (5). The forward invariance of the set 𝐶follows directly from [8] since the CBF constraint 𝐹2 is satisfied for all 𝑥∈R𝑛. To show LAS to 0 ∈int 𝐶, we first note that 𝑎2(0) = −𝛼ℎ(ℎ(0)) < 0. Due to the small control property, we have 𝑢∗(𝑥) →0 as 𝑥→0. Hence, the CBF constraint 𝐹2 := 𝑎2 + 𝑏2𝑢★< 0 in a neighborhood of the origin. That is, the barrier constraint is inactive around the origin. Then, the control law is obtained by combining the case 𝑥∈Ωclf cbf ∪{0} and the case 𝑥∈Ωclf cbf, which coincides with the PMN formula in [27] and achieves asymptotic stability. We are now prepared to present the main result about the design of the safety-critical stabilization controller. Let us define ¯𝑢:= [ 𝑢𝑣 𝜌, 𝑢𝜔]⊤, 𝑓𝜅(𝜌, 𝜙, 𝛼) := [−𝑣cos(𝛼), 𝑣 𝜌sin 𝛼−𝜔, 𝑣 𝜌sin 𝛼]⊤, 𝑓1 := ⎡⎢⎢⎢⎢⎣ 𝑓𝜅(𝜌, 𝜙, 𝛼) −̇𝑣∗ 𝜌+ 𝑘𝜌cos(𝛼)2𝑧+ cos(𝛼)𝑧2 −̇𝜔∗ ⎤⎥⎥⎥⎥⎦ , 𝑔1 := [︃ 03×2 𝐼2 ]︃ , 𝑓2 := [︁𝑣cos 𝜃𝑣sin 𝜃𝜔0 0]︁⊤, 𝑔2 := [︃ 03×2 diag(𝜌, 1) ]︃ . Then, the 𝛾𝑚-QP problem is formulated as min 1 2 ( ¯𝑢⊤¯𝑢+ 𝑚𝛿⊤𝛿) (22) s.t. 𝐹1 := 𝛾𝑓(𝐿𝑓1𝑉+ 𝛼(|𝜒|)) + 𝐿𝑔1𝑉¯𝑢+ 𝐿𝑔1𝑉𝛿≤0 𝐹2 := −𝐿𝑓2ℎ(x) −𝛼ℎ(ℎ(x)) −𝐿𝑔2ℎ(x) ¯𝑢≤0 where 𝜒:= [𝜌𝛼𝜓𝑧˜𝜔]⊤, x := [𝑥𝑦𝜃𝑣𝜔]⊤, 𝛼:= 𝜖𝑊̇𝑊|𝑓nom (𝑊+1)2 − 1 2 |𝜁|2, 𝛼ℎ∈K, and 𝜖> 0 is chosen to be sufficiently small. The following proposition follows directly as a corollary of Propositions 1 and 2, together with Theorem 1. Proposition 3. The 𝛾𝑚-QP problem (22) is feasible, and under the resulting control law, the set int 𝐶is forward invariant. If 0 ∈int 𝐶, then the barrier constraint is inactive (𝐹2 < 0) around the origin, and the resulting control law is continuous. If we select 𝛾𝑚 𝑚+1 = 1, the origin of the closed-loop system is locally asymptotically stable. 4 Simulation and Experimental Results This section presents both simulation and experimental results obtained using a laboratory-size differential-drive mobile robot, designed to evaluate the practical effectiveness and performance of the proposed safety-critical stabilization controller. 4.1 Simulation Results. The physical properties of the non- holonomic mobile robot were measured as 𝑚= 1.0, 𝐼= 0.025, 𝑟= 0.03, 𝑅= 0.15. All parameters are given in SI units. The initial conditions of the robot are randomly selected as (𝑥0, 𝑦0, 𝜃0) = (−3.15, 2.96, −1.43), and the robot is initially at rest. To illustrate the effectiveness of the proposed approach, three controllers (i.e., nominal controller (A3), CLF-QP (12), CLF-CBF-QP (19)) were implemented and compared. We assume that a circular obstacle is located at (−1, 0) with radius 𝑟= 0.3. That is, the admissible set is given by 𝐶0 := {(𝑥, 𝑦) ∈R2 : ℎ0(𝑥, 𝑦) = 40((𝑥+ 2)2 + 𝑦2 −0.32)}. We define 𝛼:= 𝜇𝑊̇𝑊|𝑓nom 2(𝑊+1)2 −1 2 |𝜁|2 and 𝛼ℎ(𝑠) := 2𝑠. The control parameters are set to 𝜆= 3, 𝑘𝜌= 2, 𝑘𝛼= 2, 𝑘𝑧= 4, 𝑘𝜔= 4, 𝜇= 0.05, 𝑙𝑣= 1, 𝑙𝜔= 1 and 𝑚= 1. The simulation results are shown in Figs. 1-2, which demonstrate that the proposed CLF-CBF 𝛾𝑚-QP controller effectively achieves parking with obstacle avoidance. 4 / PREPRINT FOR REVIEW Transactions of the ASME x [m] -3 -2 -1 0 y [m] -0.5 0 0.5 1 1.5 2 2.5 3 Nominal CLF CLF-CBF Fig. 1 Simulation paths of the robot in the XY plane. 0 2 4 6 8 10 ; [m] 0 2 4 6 Nominal CLF CLF-CBF 0 2 4 6 8 10 , [rad] -1 0 1 t [sec] 0 2 4 6 8 10 A [rad] -1 -0.5 0 0.5 Fig. 2 Simulation trajectories of the robot in polar coordi- nates. 4.2 Experimental Results. The experiments were conducted in the Autonomous Systems and Control Laboratory (ASCL) at the City College of New York. The experiment setup is shown in Fig. 3. The experimental setup comprises a differential-drive non- holonomic mobile robot operating within a 6 m × 6 m workspace. High-precision global localization is achieved using a VICON mo- tion capture system equipped with eight Vero 2.2 cameras, oper- ating at 330 Hz with an accuracy of 1 mm. The computational architecture consists of a host PC for data processing and data streaming, and a laptop dedicated to executing the proposed con- trol algorithm. The proposed safety-critical stabilization algorithm is implemented on the laptop using MATLAB/Simulink R2025a. An unpowered robot was strategically placed at (−0.6, 0.4) to serve as a static obstacle to evaluate avoidance capabilities. The admissible set is given by 𝐶0 := {(𝑥, 𝑦) ∈R2 : ℎ0(𝑥, 𝑦) = 40((𝑥+ 0.6)2 + (𝑦−0.4)2 −0.22)}. The robot was initially at (𝑥0, 𝑦0, 𝜃0) = (−1.08, 1.37, 0.78), with both linear and angular ve- locities set to zero. The target position was set at the origin. The same control parameters as in the simulations were used. The experimental results are shown in Figs. 4-5, which illustrate that the proposed CLF-CBF 𝛾𝑚-QP controller successfully performs parking while avoiding obstacles. It should be noted that the ex- perimental trajectory differs from the simulation trajectory in Figs. 4-5, and the angular error does not converge exactly to zero. This discrepancy is primarily due to actuator saturation and the dead- zone effect when the control input is small. Laptop Laptop VICON Cameras VICON Cameras Mobile Robot Mobile Robot Host Host Raw VICON Data Robot States Control Command Physical Movement Test Site Test Site Fig. 3 Experimental system framework. x [m] -1.5 -1 -0.5 0 0.5 y [m] -0.5 0 0.5 1 1.5 2 Sim: Nominal Sim: CLF Sim: CLF-CBF Experiment Fig. 4 Paths of the robot in the XY plane. 5 Conclusions This work presents a continuous, time-invariant control strat- egy grounded in the 𝛾𝑚-QP framework, which integrates CLFs and CBFs to ensure both stability and safety for the closed-loop system. Notably, we develop a global, time-invariant, strict Lya- punov function for a nonholonomic mobile robot system, utilizing a nominal stabilization controller in polar coordinates. This strict Lyapunov function is subsequently employed as the global CLF in the QP formulation. Furthermore, by leveraging the inherent cas- caded structure of the vehicle’s dynamics, we construct a CBF for the mobile robot through an integrator backstepping approach. The main results guarantee that the closed-loop system achieves both asymptotic stability and safety. Experimental validations are pro- vided to demonstrate the efficacy and performance of the proposed method. Future research will focus on extending this framework to address safety formation control in multi-agent systems, incor- porating robustness analysis and explicitly accounting for input saturation. Acknowledgment The work of was supported in part by GSoE at CCNY, and in part by the PSC-CUNY Award, jointly funded by The Professional Staff Congress and The City University of New York. Conflict of Interest There are no conflicts of interest. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 5 0 5 10 15 20 ; [m] 0 1 2 Sim: Nominal Sim: CLF Sim: CLF-CBF Experiment 0 5 10 15 20 , [rad] -2 0 2 t [sec] 0 5 10 15 20 A [rad] -2 -1 0 Fig. 5 Trajectories of the robot in polar coordinates. Data Availability Statement The datasets generated and supporting the findings of this arti- cle are obtainable from the corresponding author upon reasonable request. Appendix A: Proof of Proposition 1 First, we show that the function 𝑉is positive definite and proper. Since 𝑈is a positive definite quadratic form, it suffices to show that the function 𝑊♯is positive definite and proper in its arguments. Direct calculation yields det(𝑃) = 𝑘2 𝛼+ 𝑘2 𝜌𝜆2 + 2𝑘2 𝜌𝜆+ 𝑘2 𝜌 4𝑘2𝛼𝑘2𝜌𝜆 > 0. Hence, the matrix 𝑃= 𝑃⊤> 0, implying that 𝑊2 is a positive def- inite quadratic form. Since 𝑊1 is also a positive definite quadratic form, it follows that 𝑊is positive definite and proper. Conse- quently, 𝑊♯is also positive definite and proper in its arguments. Next, according to the definition of a CLF, to establish that 𝑉is a CLF, we need to show that for all (𝜌, 𝛼, 𝜓, 𝑧, ˜𝜔) ≠0, there exists a control input (𝑢𝑣, 𝑢𝜔) such that ̇𝑉|(11),(9) < 0. We demonstrate this by explicitly constructing a nominal control law (𝑢𝑣, 𝑢𝜔). In the new velocity coordinates ˜𝜔:= 𝜔−𝜔∗, 𝑧:= (𝑣−𝑣∗)/𝜌, the kinematics (11) become ⎡⎢⎢⎢⎢⎣ ̇𝜌 ̇𝛼 ̇𝜓 ⎤⎥⎥⎥⎥⎦ = ⎡⎢⎢⎢⎢⎣ −𝑘𝜌cos(𝛼)2𝜌 −𝑘𝛼𝛼−𝑘𝜌sinc(2𝛼)𝜆𝜓 𝑘𝜌sinc(2𝛼)𝛼 ⎤⎥⎥⎥⎥⎦ ⏞ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉ⏟⏟ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉ⏞ 𝑓nom (𝜌,𝛼,𝜓) + [︄−𝜌cos(𝛼) 0 sin(𝛼) 1 sin(𝛼) 0 ]︄ ⏞ˉˉˉˉˉˉˉˉˉˉˉˉˉ⏟⏟ˉˉˉˉˉˉˉˉˉˉˉˉˉ⏞ 𝑔nom (𝜌,𝛼) [︃ 𝑧 ˜𝜔 ]︃ . (A1) Also, the velocity dynamics in the new coordinates are given by ⎧⎪⎪⎨ ⎪⎪⎩ ̇𝑧= 1 𝜌(𝑢𝑣−̇𝑣∗) + 𝑘𝜌cos(𝛼)2𝑧+ cos(𝛼)𝑧2, ̇˜𝜔= 𝑢𝜔−̇𝜔∗. (A2) The nominal control law (𝑢𝑣, 𝑢𝜔) can be selected as the feedback linearization control law {︃𝑢𝑣= ̇𝑣∗−𝜌 [︁ 𝑘𝜌cos(𝛼)2𝑧+ cos(𝛼)𝑧2 + 𝑘𝑧𝑧 ]︁ , 𝑢𝜔= ̇𝜔∗−𝑘𝜔˜𝜔, (A3) which yields the linear closed-loop velocity dynamics ̇𝑧= −𝑘𝑧𝑧, ̇˜𝜔= −𝑘𝜔˜𝜔. (A4) Next, we show that ̇𝑉|(A1),(A4) < 0 for all (𝜌, 𝛼, 𝜓, 𝑧, ˜𝜔) ≠0. Noting that the nominal closed-loop system (A1), (A4) exhibits a cascaded structure, we first consider the subsystem (A1) restricted to the manifold {𝑧= ˜𝜔= 0}. Evaluating the total derivative of 𝑊1 along the vector field 𝑓nom in (A1) yields ̇𝑊1|𝑓nom := ⟨∇𝑊1, 𝑓nom⟩= −𝑘𝜌cos2(𝛼)𝜌2 −𝑘𝛼𝛼2 ≤0, (A5) where ∇represents the gradient and ⟨·, ·⟩represents the inner prod- uct. Then, by adding and subtracting the terms −𝑘𝜌𝜆𝜓and 𝑘𝜌𝛼in second and third rows in 𝑓nom, respectively, the 𝛼- and 𝜓-dynamics restricted to the manifold {𝑧= ˜𝜔= 0} are given by [︃ ̇𝛼 ̇𝜓 ]︃ = [︃ −𝑘𝛼−𝑘𝜌𝜆 𝑘𝜌 0 ]︃ ⏞ˉˉˉˉˉˉˉˉˉˉ⏟⏟ˉˉˉˉˉˉˉˉˉˉ⏞ 𝐴 [︃ 𝛼 𝜓 ]︃ ⏞⏟⏟⏞ 𝜉 + [︃ −𝜆𝑘𝜌(sinc(2𝛼) −1)𝜓 𝑘𝜌(sinc(2𝛼) −1)𝛼 ]︃ ⏞ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉ⏟⏟ˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉˉ⏞ 𝐾(𝛼,𝜓) . (A6) Since the matrix 𝐴in (A6) is Hurwitz, the Lyapunov equation 𝐴⊤𝑃+ 𝑃𝐴= −𝐼has a unique, positive definite solution 𝑃, which is given in Proposition 1. In other words, denoting 𝜉:= [𝛼𝜓]⊤, 𝑊2 is a strict Lyapunov function for the linear system ̇𝜉= 𝐴𝜉, i.e., ⟨𝑊2, 𝐴𝜉⟩= −|𝜉|2. It is easy to show that | sinc(2𝑠) −1| ≤ 2 𝜋|𝑠|, and thus, we have |𝐾(𝛼, 𝜓)| ≤2 𝜋𝑘𝜌𝜆|𝛼||𝜉|. 3 The total derivative of 𝑊2 along trajectories of (A6) is then given by ̇𝑊2|(A6) = −|𝜉|2 + 2𝜉⊤𝑃𝐾(𝛼, 𝜓) (A7) ≤−|𝜉|2 + 2 𝜋𝑘𝜌𝜆𝜆𝑀(𝑃)|𝜉|(2|𝛼| · |𝜉|) ≤−|𝜉|2 + 2 𝜋𝑘𝜌𝜆𝜆𝑀(𝑃)|𝜉| (︃ 𝜀|𝜉|2 + 𝛼2 𝜀 )︃ , (A8) where the last inequality is due to Young’s inequality, and 𝜀> 0 can be chosen as an arbitrary positive number. Hence, for |𝜉| ≠0, letting 𝜀:= 𝜋 4𝑘𝜌𝜆𝜆𝑀(𝑃) | 𝜉| > 0, it follows that ̇𝑊2|(A6) ≤−1 2 |𝜉|2 + 8 𝜋2 𝑘2 𝜌𝜆2𝜆2 𝑀(𝑃)|𝜉|2𝛼2 ≤−1 2 |𝜉|2 + 𝑘𝛼𝑄(𝑊1(𝜌, 𝛼, 𝜓))𝛼2, (A9) where in the last inequality we use |𝜉|2 ≤2𝑊1(𝜌, 𝛼, 𝜓). For |𝜉| = 0, it follows from (A7) that (A9) is also true. Consequently, we have ̇𝑊|𝑓nom := ⟨∇𝑊, 𝑓nom⟩≤−1 2 |𝜉|2 −𝑘𝜌cos2(𝛼)𝜌2 < 0. (A10) That is, 𝑊is a global, strict Lyapunov function for the subsystem (A1) restricted to the manifold {𝑧= ˜𝜔= 0}. One can easily prove that 𝑊♯is also a global, strict Lyapunov function for the subsystem (A1) restricted to the manifold {𝑧= ˜𝜔= 0}. Denoting 𝜁:= [𝑧˜𝜔]⊤, direct calculation shows that ̇𝑉|(A1),(A4) = 𝜇𝑊̇𝑊|𝑓nom (𝑊+ 1)2 + 𝜇𝑊 𝑊+ 1 𝐿𝑔nom𝑊 𝑊+ 1 · 𝜁−|𝜁|2. (A11) Note that the first and third terms on the right-hand side of (A11) are negative definite terms, while the second term is indefinite. In the second term, 𝐿𝑔nom𝑊/(𝑊+ 1) is globally bounded, i.e., ∃𝑐> 0 such that |𝐿𝑔nom𝑊/(𝑊+ 1)| ≤𝑐. Hence, together with Young’s inequality we have that 𝜇𝑊 𝑊+ 1 𝐿𝑔nom𝑊 𝑊+ 1 · 𝜁≤ 𝑐𝜇𝑊 𝑊+ 1 · |𝜁| (A12) 3Note that sup 𝑠∈R sin(𝑠) −𝑠 𝑠2 = 1 𝜋. 6 / PREPRINT FOR REVIEW Transactions of the ASME ≤𝑐2𝜇2 2 (︃ 𝑊 𝑊+ 1 )︃2 + 1 2 |𝜁|2. (A13) The term −|𝜁|2 in (A11) dominates the term 1 2 |𝜁|2 in (A13). More- over, the term 𝜇𝑊̇𝑊|𝑓nom (𝑊+1)2 in (A11) dominates the term 𝑐2𝜇2 2 (︁𝑊 𝑊+1 )︁2 in (A13) near the origin, since the latter has a higher degree. Away from the origin, there exists a sufficiently small 𝜇> 0 such that 𝜇𝑊̇𝑊|𝑓nom (𝑊+1)2 continues to dominate 𝑐2𝜇2 2 (︁𝑊 𝑊+1 )︁2, due to the fact that 𝑊/(𝑊+1) remains globally bounded. Therefore, we conclude that ̇𝑉|(A1),(A4) < 0 is negative definite. Finally, we conclude the proof by noting that (A3) is continuous, and |(𝑢𝑣, 𝑢𝜔)| →0 as |(𝜌, 𝛼, 𝜓, 𝑧, ˜𝜔)| →0, which establishes the small control property. References [1] Kolmanovsky, I. and McClamroch, N. H., 1995, “Developments in nonholo- nomic control problems,” IEEE Control Syst., 15(6), pp. 20–36. [2] Brockett, R., 1983, “Asymptotic stability and feedback stabilization,” Differen- tial geometric control theory, R. S. Millman, R. W. Brocket, and H. J. Sussmann, eds., Birkhäuser, pp. 181–191. [3] Maghenem, M., Bautista, A., Nuño, E., Loría, A., and Panteley, E., 2019, “Con- sensus of multi-agent systems with nonholonomic restrictions via Lyapunov’s direct method,” IEEE Contr. Syst. Lett., 3(2), pp. 344–349. [4] Han, T. and Wang, B., 2024, “Safety-Critical Stabilization of Force-Controlled Nonholonomic Mobile Robots,” IEEE Contr. Syst. Lett., 8, pp. 2469–2474. [5] Wang, B., Nersesov, S. G., and Ashrafiuon, H., 2022, “Robust formation con- trol and obstacle avoidance for heterogeneous underactuated surface vessel net- works,” IEEE Trans. Control Netw. Syst., 9(1), pp. 125–137. [6] Jankovic, M., Santillo, M., and Wang, Y., 2024, “Multiagent Systems With CBF-Based Controllers: Collision Avoidance and Liveness From Instability,” IEEE Trans. Control Syst. Technol., 32(2), pp. 705–712. [7] Ames, A. D., Grizzle, J. W., and Tabuada, P., 2014, “Control barrier function based quadratic programs with application to adaptive cruise control,” Proc. IEEE Conf. Decis. Control, IEEE, pp. 6271–6278. [8] Ames, A. D., Xu, X., Grizzle, J. W., and Tabuada, P., 2017, “Control barrier function based quadratic programs for safety critical systems,” IEEE Trans. Autom. Contr., 62(8), pp. 3861–3876. [9] Ames, A. D., Coogan, S., Egerstedt, M., Notomista, G., Sreenath, K., and Tabuada, P., 2019, “Control barrier functions: Theory and applications,” Proc. Euro. Control Conf., IEEE, pp. 3420–3431. [10] Xu, X., Grizzle, J. W., Tabuada, P., and Ames, A. D., 2018, “Correctness guarantees for the composition of lane keeping and adaptive cruise control,” IEEE Trans. Autom. Sci. Eng., 15(3), pp. 1216–1229. [11] Clark, A., 2021, “Control barrier functions for stochastic systems,” Automatica, 130, p. 109688. [12] Xu, X., Tabuada, P., Grizzle, J. W., and Ames, A. D., 2015, “Robustness of control barrier functions for safety critical control,” IFAC-PapersOnLine, 48(27), pp. 54–61. [13] Gurriet, T., Singletary, A., Reher, J., Ciarletta, L., Feron, E., and Ames, A., 2018, “Towards a framework for realizable safety critical control through active set invariance,” Proceedings of the 9th ACM/IEEE International Conference on Cyber-Physical Systems, IEEE, pp. 98–106. [14] Singletary, A., Kolathaya, S., and Ames, A. D., 2021, “Safety-critical kinematic control of robotic systems,” IEEE Contr. Syst. Lett., 6, pp. 139–144. [15] Jankovic, M., 2018, “Robust control barrier functions for constrained stabiliza- tion of nonlinear systems,” Automatica, 96, pp. 359–367. [16] Glotfelter, P., Buckley, I., and Egerstedt, M., 2019, “Hybrid nonsmooth barrier functions with applications to provably safe and composable collision avoidance for robotic systems,” IEEE Robot. Autom. Lett., 4(2), pp. 1303–1310. [17] Xiao, W. and Belta, C., 2021, “High-order control barrier functions,” IEEE Trans. Autom. Contr., 67(7), pp. 3655–3662. [18] Alan, A., Taylor, A. J., He, C. R., Ames, A. D., and Orosz, G., 2023, “Con- trol barrier functions and input-to-state safety with application to automated vehicles,” IEEE Trans. Control Syst. Technol., 31(6), pp. 2744–2759. [19] Taylor, A. J., Ong, P., Molnar, T. G., and Ames, A. D., 2022, “Safe backstepping with control barrier functions,” Proc. IEEE Conf. Decis. Control, IEEE, pp. 5775–5782. [20] Haraldsen, A., Wiig, M. S., Ames, A. D., and Pattersen, K. Y., 2024, “Safety- critical control of nonholonomic vehicles in dynamic environments using veloc- ity obstacles,” Proc. Amer. Contr. Conf., IEEE, pp. 3152–3159. [21] Maghenem, M., Loría, A., and Panteley, E., 2017, “A cascades approach to formation-tracking stabilization of force-controlled autonomous vehicles,” IEEE Trans. Autom. Contr., 63(8), pp. 2662–2669. [22] Wang, B., Nersesov, S. G., and Ashrafiuon, H., 2022, “Time-Varying Formation Control for Heterogeneous Planar Underactuated Multivehicle Systems,” ASME J. Dyn. Syst. Meas. Contr., 144(4), p. 041006. [23] Reis, M. F., Aguiar, A. P., and Tabuada, P., 2020, “Control barrier function- based quadratic programs introduce undesirable asymptotically stable equilib- ria,” IEEE Contr. Syst. Lett., 5(2), pp. 731–736. [24] Sontag, E. D., 1998, Mathematical Control Theory, 2nd ed., Springer-Verlag, New York, NY. [25] Wang, B., Nersesov, S., and Ashrafiuon, H., 2021, “Formation Regulation and Tracking Control for Nonholonomic Mobile Robot Networks Using Polar Coor- dinates,” IEEE Contr. Syst. Lett., 6, pp. 1909–1914. [26] Cohen, M. H., Molnar, T. G., and Ames, A. D., 2024, “Safety-critical control for autonomous systems: Control barrier functions via reduced-order models,” Annu. Rev. Control, 57, p. 100947. [27] Freeman, R. A. and Kokotović, P. V., 1996, Robust Nonlinear Control Design, Birkhäuser, Boston. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 7
Bo Wang1 10031, USA email: Tianyu Han 10031, USA email: Guangwei Wang 550025, China email: Further Results on Safety-Critical Stabilization of Force-Controlled Nonholonomic Mobile Robots In this paper, we address the stabilization problem for force-controlled nonholonomic mobile robots under safety-critical constraints. We propose a continuous, time-invariant control law based on the γm-quadratic programming (γm-QP) framework, which unifies control Lyapunov functions (CLFs) and control barrier functions (CBFs) to enforce both stability and safety in the closed-loop system. For the first time, we construct a global, time-invariant, strict Lyapunov function for the closed-loop nonholonomic mobile robot system with a nominal stabilization controller in polar coordinates; this strict Lyapunov function then serves as the CLF in the QP design. Next, by exploiting the inherent cascaded structure of the vehicle dynamics, we develop a CBF for the mobile robot via an integrator backstepping procedure. Our main results guarantee both asymptotic stability and safety for the closed-loop system. Both the simulation and experimental results are presented to illustrate the effectiveness and performance of our approach. Keywords: safety-critical control, stabilization, control barrier functions, nonholonomic mobile robots 1 Introduction The study of control problems for nonholonomic systems has been carried out since the early 1980s - see [1] for a survey. The main challenge is that, although these systems are controllable, it is impossible to achieve asymptotic stability of an isolated equilibrium using a continuous, time-invariant state feedback control law due to Brockett's necessary condition on stabilization [2]. Hence, the stabilization of nonholonomic mobile robots and the construction of corresponding control Lyapunov functions (CLFs) remain challenging problems of significant ongoing interest in the context of robustness analysis and controller design. See [3] for a continuous time-varying control method and [4] for a time-invariant control approach, along with corresponding strict Lyapunov constructions. Ensuring operational safety while achieving control objectives is a fundamental requirement in autonomous control systems. For instance, in practical applications, safety constraints - such as obstacle and collision avoidance between vehicles - must also be considered in addition to the set-point stabilization or trajectory tracking task for mobile robots [5,6]. Achieving satisfactory control performance often requires aggressive maneuvers, while safety necessitates conservative actions and strict constraint adherence. The tension between performance and safety is particularly acute in mobile robots, whose nonholonomic dynamics inherently prevent continuous, time-invariant feedback from stabilizing the target configuration. As a result, enforcing both asymptotic stability and safety constraints simultaneously is far more challenging than in fully-actuated holonomic systems. In the past decade, control barrier function (CBF)-based techniques have proven effective for systematically enforcing safety constraints [7,8]. Since then, CBFs have been applied in a variety of domains, including walking robots [9], automotive systems [4,10], stochastic systems [11], and multi-agent systems [6], to name a few. To "mediate" the conflict between the safety constraints and the control objective (e.g., set-point stabilization, trajectory tracking, or mere open-loop steering of the system), numerous quadratic program (QP)-based control techniques have been developed in the literature [7,8,12]. According to different types of QP formulation, the existing results may be categorized into 1Corresponding Author. CLF-CBF-based QP [7-9,12], CBF-based QP [9,12-14], and γmCLF-CBF-based QP (γm-QP) methods [4,15]. Among the various methods, the γm-QP approach is preferred in many applications due to its ability to guarantee asymptotic stability of the closedloop system and its robustness in handling disturbances. Furthermore, applying CBFs directly to mobile robots presents significant challenges due to their inherent nonholonomic constraints, which complicate establishing a direct relationship between safety constraints and control inputs, particularly when the system's relative degree exceeds one [16]. To address this issue, high-order CBFs have been developed in [17]. This extension ensures the forward invariance of appropriately defined, dynamically extended safe sets, thereby enabling controller synthesis via QP even for systems with higher relative degrees. However, constructing suitable high-order CBFs can be intricate, often requiring multiple differentiations of the barrier function and complex modifications to the safe set definition, which may hinder straightforward practical implementation. In [18], a safety-critical controller is designed for connected automated vehicles, where the vehicles are modeled by integrators. Using the double-integrator model, the multi-agent collision avoidance problem has been studied via CBF approaches in [6]. Based on the first-order unicycle (kinematic) model, CBF-based obstacle avoidance has been addressed in [19,20]. In particular, in [19], a CBF backstepping approach is proposed for the kinematic unicycle model. In [20], an obstacle-avoidance strategy for nonholonomicintegrator vehicles is proposed by regulating vehicle speed and orientation separately via two CBFs while maintaining nonzero forward speed in dynamic environments using velocity obstacles. However, none of these existing works provides guarantees of asymptotic stability for the closed-loop system. Moreover, a more realistic model for vehicle applications is to consider the secondorder full dynamical (kinematics-kinetics) model of the unicycle [21,22]. However, to the best of the authors' knowledge, few studies have addressed the stabilization problem for force-controlled nonholonomic vehicles subject to safety-critical constraints. In this paper, we address the stabilization problem for forcecontrolled nonholonomic mobile robots under safety-critical constraints. We propose a continuous, time-invariant control law based on the γm-QP framework to enforce both stability and safety in the closed-loop system. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 1 The main contributions of this work include: (i) For the first time, we construct a global, time-invariant, strict Lyapunov function for the closed-loop nonholonomic mobile robot system with a nominal stabilization controller in polar coordinates. This strict Lyapunov function then serves as the CLF in the γm-QP design. (ii) We present experimental results to validate the effectiveness of the proposed approach and to demonstrate the performance of the developed controller. The experiments show that the proposed method is applicable to scenarios such as autonomous parking with obstacle avoidance and intervehicle collision avoidance. The original γm-QP framework is based on reciprocal CBFs [15]. However, in recent years, there has been a shift from reciprocal CBFs to zeroing CBFs, as reciprocal CBFs may exhibit poor robustness properties. Hence, in this work, we present the γm-QP approach within the framework of zeroing CBFs. Furthermore, distinct from our previous work [4], we also construct the zeroing CBF using the integrator backstepping technique. In our previous work [4], we construct a strict Lyapunov function for the closed-loop mobile robot with a nominal stabilization controller in the large (i.e., on any compact subset of the state space), where it serves as a CLF in the safety-critical control design. However, the constructed Lyapunov function is not global, meaning that it depends on the initial configuration of the vehicle. To the best of the authors' knowledge, a global, time-invariant, strict Lyapunov function has not yet been reported in the literature. Moreover, the problem of eliminating potential undesired equilibria, e.g., via introducing additional constraints in the QP [23], remains out of scope of this Letter. The structure of the remainder of the paper is as follows: Section 2 presents the problem formulation and preliminaries on safetycritical control. Section 3 presents the main results, including the constructions of the CLF and CBF, and the controller design. Section 4 provides both simulation and experimental results that demonstrate the practical application of the theoretical developments. Finally, Section 5 offers concluding remarks. 2 Preliminaries on Safety-Critical Control Notation: Let |·| denote the Euclidean norm on Rn. For a subset S⊂Rn, ∂Srepresents the boundary of S, and int Srepresents the interior of S. K is the class of continuous functions R≥0 →R≥0 which is zero at zero and strictly increasing; K∞is a subset of the class K functions which are unbounded. For a matrix P∈Rn×n, λM(P) represents the maximum eigenvalue of P. Throughout this article, we omit the arguments of functions when they are clear from the context. Let us consider a nonlinear control-affine system ̇x= f(x) + g(x)u, (1) where the state x∈Rnand the control u∈Rm. We assume that f: Rn→Rnand g: Rn→Rn×mare locally Lipschitz and f(0) = 0. Recall that a C∞function V: Rn→R≥0 is said to be a (global) CLF for (1), if Vis positive definite, proper, and satisfies the following implication: LgV(x) = 0 =⇒LfV(x) + α(|x|) 0 such that for all μ∈(0, ̄μ], the function V: R>0 × R4 →R≥0, defined as V(ρ, α, ψ, z, ̃ω) := μ ∫W♯(ρ,α,ψ) 0 (︃es-1 es )︃ ds+ U(z, ̃ω) , (12) is a global CLF for (11) and (9) that satisfies the small control property, where ̃v:= v-v∗, ̃ω:= ω-ω∗, z:= ̃v/ρ, W♯(ρ, α, ψ) := ln(W(ρ, α, ψ) + 1), W(ρ, α, ψ) := W1(ρ, α, ψ) + W2(α, ψ) + ∫W1 (ρ,α,ψ) 0 Q(l)dl, W1(ρ, α, ψ) := 1 2 (︂ ρ2 + α2 + λψ2)︂ , W2(α, ψ) := p11α2 + 2p12αψ+ p22ψ2, P:= ⎡⎢⎢⎢⎢⎢⎢⎣ 1 + λ 2kαλ 1 2kρλ 1 2kρλ k2 α+ k2 ρλ2 + k2 ρλ 2kαk2ρλ ⎤⎥⎥⎥⎥⎥⎥⎦ , Q(l) := 16 π2 k2 ρ kα λ2λ2 M(P)l, U(z, ̃ω) := 1 2 (︃z2 kz + ̃ω2 kω )︃ , v∗:= kρcos(α)ρ, ω∗:= kαα+ kρsinc(2α)(α+ λψ), the parameter λ≥1, the parameters kρ, kα, kz, and kωare arbitrary positive constants, and P= [pij]. That is, pijrepresents the (i, j)-th entry of the matrix P2. Proof. See Appendix A. 2In this paper, 'sinc(·)' represents the unnormalized sinc function, which is defined as sinc(s) := sin(s)/sif s≠0 and sinc(0) = 1. Note that the function sinc is smooth everywhere and globally bounded on R. 3.2 Construction of the CBF. Mechanical and robotic systems often exhibit cascaded structures. The problem of constructing CBFs for such systems has been investigated in several works. For example, in [19,26], the authors propose a method for synthesizing zeroing CBFs for higher-order systems by leveraging CBFs designed for reduced-order models. In [4], a systematic procedure is proposed for constructing reciprocal CBFs for cascaded systems by using the CBF associated with the kinematic model through integrator backstepping. In this section, we construct the zeroing CBF for the mobile robot (6) and (9) in Cartesian coordinates. Following the similar integrator backstepping method, We have the following result. Proposition 2 (CBF). Consider the mobile robot system (6) and (9). Assume that the admissible set C0 is defined as the 0-superlevel set of a given continuously differentiable function h0 : R2 →R, i.e., C0 := {(x, y) ∈R2 : h0(x, y) ≥0}. (14) Then, the function h: R4 →R given by h(x, y, v, w) := h0(x, y) -lvv2 -lωω2 (15) is a CBF for (6) and (9), where lv, lωare two positive constants. Proof. Let us define q:= [xyθ]⊤and v := [vω]⊤. Then, the kinematic system (6) can be expressed in the control-affine form ̇q= f0(q) + g0(q)v, (16) where f0(q) ≡0 and g0(q) := [︄cos θ0 sin θ0 0 1 ]︄ . We first show that the function h0 is a CBF for the kinematic system (16) on the set C0, assuming that the velocity v is the control input. This follows directly from the fact that the kinematic system (16) is driftless, i.e., f0 ≡0. As a result, the CBF condition (4) is trivially satisfied since Lf0h0 ≡0, and for all (x, y) ∈C0 and any class-K function αh, it holds that αh(h0(x, y)) ≥0. Let us denote x := [qv]⊤, u:= [uvuω]⊤, F(x) := [︃ f0(q) + g0(q)v 0 ]︃ , and G:= [︃ 0 I ]︃ . Then, the cascaded system (6) and (9) can be written as ̇x = F(x) + Gu. (17) Next, we verify the condition (4) for the function hand (17). Note that LGh= ∂h ∂v = 0 implies that v = 0. Hence, on the set {v = 0}, we have (LFh) |v=0 = ∂h ∂q( f0 + g0v) |︁|︁|︁|︁ v=0 = 0, (18) and h|v=0 = h0. That is, αh(h|v=0) = αh(h0(x, y)) ≥0 for all (x, y) ∈C0. Therefore, we verify the implication (4) and thus, his a CBF for the system (6) and (9). 3.3 Safety-Critical Control Design. We have constructed a global CLF and a zeroing CBF for the nonholonomic mobile robot system, as presented in Propositions 1 and 2, respectively. Based on these constructions, the safety-critical stabilization control law can be derived by solving the γm-QP problem (5). It is worth noting that the original γm-QP formulation in [15] is based on reciprocal CBFs. For completeness, we present parallel results of the γm-QP problem (5) using zeroing CBFs. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 3 Theorem 1. Assume that the system (1) admits a CLF V(x) and a CBF h(x), and that 0 ∈int C. Then, the γm-QP problem (5) is feasible and the resulting control law is given by u★(x) := ⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨ ⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩ 0, x∈Ωclf cbf ∪{0}, - m m+ 1 ̄a1 |b1|2 b⊤ 1 , x∈Ωclf cbf, - a2 |b2|2 b⊤ 2 , x∈Ωclf cbf, -μ1b⊤ 1 -μ2b⊤ 2 , x∈Ωclf cbf, (19) where a1 := LfV(x) + α(|x|), ̄a1 := γf(a1), b1 := LgV(x), a2 := -Lfh(x) -αh(h(x)), b2 := -Lgh(x), μ1 := |b2|2 ̄a1 -b1b⊤ 2 a2 (1 + 1 m)|b1|2|b2|2 -|b1b⊤ 2 |2 , μ2 := -b1b⊤ 2 ̄a1 + (1 + 1 m)|b1|2a2 (1 + 1 m)|b1|2|b2|2 -|b1b⊤ 2 |2 , and Ωclf cbf := {x∈Rn: a1 0 is chosen to be sufficiently small. The following proposition follows directly as a corollary of Propositions 1 and 2, together with Theorem 1. Proposition 3. The γm-QP problem (22) is feasible, and under the resulting control law, the set int Cis forward invariant. If 0 ∈int C, then the barrier constraint is inactive (F2 0. Hence, the matrix P= P⊤> 0, implying that W2 is a positive definite quadratic form. Since W1 is also a positive definite quadratic form, it follows that Wis positive definite and proper. Consequently, W♯is also positive definite and proper in its arguments. Next, according to the definition of a CLF, to establish that Vis a CLF, we need to show that for all (ρ, α, ψ, z, ̃ω) ≠0, there exists a control input (uv, uω) such that ̇V|(11),(9) 0 can be chosen as an arbitrary positive number. Hence, for |ξ| ≠0, letting ε:= π 4kρλλM(P) | ξ| > 0, it follows that ̇W2|(A6) ≤-1 2 |ξ|2 + 8 π2 k2 ρλ2λ2 M(P)|ξ|2α2 ≤-1 2 |ξ|2 + kαQ(W1(ρ, α, ψ))α2, (A9) where in the last inequality we use |ξ|2 ≤2W1(ρ, α, ψ). For |ξ| = 0, it follows from (A7) that (A9) is also true. Consequently, we have ̇W|fnom := ⟨∇W, fnom⟩≤-1 2 |ξ|2 -kρcos2(α)ρ2 0 such that |LgnomW/(W+ 1)| ≤c. Hence, together with Young's inequality we have that μW W+ 1 LgnomW W+ 1 · ζ≤ cμW W+ 1 · |ζ| (A12) 3Note that sup s∈R sin(s) -s s2 = 1 π. 6 / PREPRINT FOR REVIEW Transactions of the ASME ≤c2μ2 2 (︃ W W+ 1 )︃2 + 1 2 |ζ|2. (A13) The term -|ζ|2 in (A11) dominates the term 1 2 |ζ|2 in (A13). Moreover, the term μẆW|fnom (W+1)2 in (A11) dominates the term c2μ2 2 (︁W W+1 )︁2 in (A13) near the origin, since the latter has a higher degree. Away from the origin, there exists a sufficiently small μ> 0 such that μẆW|fnom (W+1)2 continues to dominate c2μ2 2 (︁W W+1 )︁2, due to the fact that W/(W+1) remains globally bounded. Therefore, we conclude that ̇V|(A1),(A4) < 0 is negative definite. Finally, we conclude the proof by noting that (A3) is continuous, and |(uv, uω)| →0 as |(ρ, α, ψ, z, ̃ω)| →0, which establishes the small control property. References [1] Kolmanovsky, I. and McClamroch, N. H., 1995, "Developments in nonholonomic control problems," IEEE Control Syst., 15(6), pp. 20-36. [2] Brockett, R., 1983, "Asymptotic stability and feedback stabilization," Differential geometric control theory, R. S. Millman, R. W. Brocket, and H. J. Sussmann, eds., Birkhäuser, pp. 181-191. [3] Maghenem, M., Bautista, A., Nuño, E., Loría, A., and Panteley, E., 2019, "Consensus of multi-agent systems with nonholonomic restrictions via Lyapunov's direct method," IEEE Contr. Syst. Lett., 3(2), pp. 344-349. [4] Han, T. and Wang, B., 2024, "Safety-Critical Stabilization of Force-Controlled Nonholonomic Mobile Robots," IEEE Contr. Syst. Lett., 8, pp. 2469-2474. [5] Wang, B., Nersesov, S. G., and Ashrafiuon, H., 2022, "Robust formation control and obstacle avoidance for heterogeneous underactuated surface vessel networks," IEEE Trans. Control Netw. Syst., 9(1), pp. 125-137. [6] Jankovic, M., Santillo, M., and Wang, Y., 2024, "Multiagent Systems With CBF-Based Controllers: Collision Avoidance and Liveness From Instability," IEEE Trans. Control Syst. Technol., 32(2), pp. 705-712. [7] Ames, A. D., Grizzle, J. W., and Tabuada, P., 2014, "Control barrier function based quadratic programs with application to adaptive cruise control," Proc. IEEE Conf. Decis. Control, IEEE, pp. 6271-6278. [8] Ames, A. D., Xu, X., Grizzle, J. W., and Tabuada, P., 2017, "Control barrier function based quadratic programs for safety critical systems," IEEE Trans. Autom. Contr., 62(8), pp. 3861-3876. [9] Ames, A. D., Coogan, S., Egerstedt, M., Notomista, G., Sreenath, K., and Tabuada, P., 2019, "Control barrier functions: Theory and applications," Proc. Euro. Control Conf., IEEE, pp. 3420-3431. [10] Xu, X., Grizzle, J. W., Tabuada, P., and Ames, A. D., 2018, "Correctness guarantees for the composition of lane keeping and adaptive cruise control," IEEE Trans. Autom. Sci. Eng., 15(3), pp. 1216-1229. [11] Clark, A., 2021, "Control barrier functions for stochastic systems," Automatica, 130, p. 109688. [12] Xu, X., Tabuada, P., Grizzle, J. W., and Ames, A. D., 2015, "Robustness of control barrier functions for safety critical control," IFAC-PapersOnLine, 48(27), pp. 54-61. [13] Gurriet, T., Singletary, A., Reher, J., Ciarletta, L., Feron, E., and Ames, A., 2018, "Towards a framework for realizable safety critical control through active set invariance," Proceedings of the 9th ACM/IEEE International Conference on Cyber-Physical Systems, IEEE, pp. 98-106. [14] Singletary, A., Kolathaya, S., and Ames, A. D., 2021, "Safety-critical kinematic control of robotic systems," IEEE Contr. Syst. Lett., 6, pp. 139-144. [15] Jankovic, M., 2018, "Robust control barrier functions for constrained stabilization of nonlinear systems," Automatica, 96, pp. 359-367. [16] Glotfelter, P., Buckley, I., and Egerstedt, M., 2019, "Hybrid nonsmooth barrier functions with applications to provably safe and composable collision avoidance for robotic systems," IEEE Robot. Autom. Lett., 4(2), pp. 1303-1310. [17] Xiao, W. and Belta, C., 2021, "High-order control barrier functions," IEEE Trans. Autom. Contr., 67(7), pp. 3655-3662. [18] Alan, A., Taylor, A. J., He, C. R., Ames, A. D., and Orosz, G., 2023, "Control barrier functions and input-to-state safety with application to automated vehicles," IEEE Trans. Control Syst. Technol., 31(6), pp. 2744-2759. [19] Taylor, A. J., Ong, P., Molnar, T. G., and Ames, A. D., 2022, "Safe backstepping with control barrier functions," Proc. IEEE Conf. Decis. Control, IEEE, pp. 5775-5782. [20] Haraldsen, A., Wiig, M. S., Ames, A. D., and Pattersen, K. Y., 2024, "Safetycritical control of nonholonomic vehicles in dynamic environments using velocity obstacles," Proc. Amer. Contr. Conf., IEEE, pp. 3152-3159. [21] Maghenem, M., Loría, A., and Panteley, E., 2017, "A cascades approach to formation-tracking stabilization of force-controlled autonomous vehicles," IEEE Trans. Autom. Contr., 63(8), pp. 2662-2669. [22] Wang, B., Nersesov, S. G., and Ashrafiuon, H., 2022, "Time-Varying Formation Control for Heterogeneous Planar Underactuated Multivehicle Systems," ASME J. Dyn. Syst. Meas. Contr., 144(4), p. 041006. [23] Reis, M. F., Aguiar, A. P., and Tabuada, P., 2020, "Control barrier functionbased quadratic programs introduce undesirable asymptotically stable equilibria," IEEE Contr. Syst. Lett., 5(2), pp. 731-736. [24] Sontag, E. D., 1998, Mathematical Control Theory, 2nd ed., Springer-Verlag, New York, NY. [25] Wang, B., Nersesov, S., and Ashrafiuon, H., 2021, "Formation Regulation and Tracking Control for Nonholonomic Mobile Robot Networks Using Polar Coordinates," IEEE Contr. Syst. Lett., 6, pp. 1909-1914. [26] Cohen, M. H., Molnar, T. G., and Ames, A. D., 2024, "Safety-critical control for autonomous systems: Control barrier functions via reduced-order models," Annu. Rev. Control, 57, p. 100947. [27] Freeman, R. A. and Kokotović, P. V., 1996, Robust Nonlinear Control Design, Birkhäuser, Boston. ASME Letters in Dynamic Systems and Control PREPRINT FOR REVIEW / 7
2510.14930
VT-Refine: Learning Bimanual Assembly with Visuo-Tactile Feedback via Simulation Fine-Tuning Binghao Huang1 Jie Xu2 Iretiayo Akinola2 Wei Yang2 Balakumar Sundaralingam2 Rowland O’Flaherty2 Dieter Fox2 Xiaolong Wang2,3 Arsalan Mousavian2 Yu-Wei Chao2† Yunzhu Li1† 1Columbia University 2NVIDIA 3University of California, San Diego †Equal advising Abstract: Humans excel at bimanual assembly tasks by adapting to rich tactile feedback—a capability that remains difficult to replicate in robots through be- havioral cloning alone, due to the suboptimality and limited diversity of human demonstrations. In this work, we present VT-Refine, a visuo-tactile policy learning framework that combines real-world demonstrations, high-fidelity tactile simula- tion, and reinforcement learning to tackle precise, contact-rich bimanual assembly. We begin by training a diffusion policy on a small set of demonstrations using synchronized visual and tactile inputs. This policy is then transferred to a simulated digital twin equipped with simulated tactile sensors and further refined via large- scale reinforcement learning to enhance robustness and generalization. To enable accurate sim-to-real transfer, we leverage high-resolution piezoresistive tactile sensors that provide normal force signals and can be realistically modeled in paral- lel using GPU-accelerated simulation. Experimental results show that VT-Refine improves assembly performance in both simulation and the real world by increasing data diversity and enabling more effective policy fine-tuning. Our project page is available at https://binghao-huang.github.io/vt_refine/. Keywords: Tactile Simulation, Bimanual Manipulation, RL Fine-Tuning Mechanical Connector Real World GPU-Parallelized Tactile Simulation Simulation Large-Scale RL Fine-Tuning Human Demos Touch Vision Applicable to Diverse Assembly Tasks Diffusion Policy Pre-Train Nut-and-Bolt Sim-to-Real Tactile Signals Figure 1: We propose VT-Refine, a novel visuo-tactile policy learning framework for precise, contact-rich bimanual assembly tasks. Top Left: We collect real-world demonstrations and pre-train a diffusion policy using visuo-tactile inputs. Right to Bottom Left: We leverage tactile simulation and large-scale reinforcement learning to fine-tune the policy, and subsequently transfer the fine-tuned policy back to the real world. The resulting policy demonstrates strong performance in both simulated and real environments. 9th Conference on Robot Learning (CoRL 2025), Seoul, Korea. arXiv:2510.14930v2 [cs.RO] 18 Oct 2025 1 Introduction Solving precise assembly tasks with both hands requires sophisticated orchestration of vision and tactile sensing. Consider a bimanual plug-and-socket assembly: humans first rely on vision to locate the parts and coordinate the grasping and pickup of each part with both hands. Once the parts are held and positioned for insertion, tactile feedback becomes essential. This is because contact cues can be visually occluded during insertion (Fig. 1), and hence vision alone lacks the precision needed for fine-grained, contact-rich interactions. Behavioral cloning with diffusion policies [1, 2] has recently shown promise in learning bimanual visuo-tactile policies from a limited number of human teleoperated demonstrations [3, 4]. However, scaling these methods for high-precision assembly tasks in real-world settings poses two major challenges. First, collecting real-world demonstrations is costly, and the demand for data only grows with increasing task precision and contact complexity, making large-scale collection prohibitively expensive. Second, current demonstration interfaces often lack tactile feedback, hindering the capture of how humans use touch for fine manipulation. Consequently, the collected demonstrations typically omit exploratory behaviors—such as iterative adjustments—that are critical for contact-rich tasks, resulting in suboptimal training data. Alternatively, simulation offers a promising path to scale visuo- tactile policy learning, but existing efforts primarily focus on visual modalities or tasks with limited reliance on touch [5–7]. While some recent work has explored simulation-based data collection for tactile inputs, these efforts are typically restricted to simpler tasks or setups (e.g., unimanual) [8–13], or have not yet addressed large-scale training or robust sim-to-real transfer for tactile-critical bimanual tasks [14]. To address these challenges, we introduce a novel real-to-sim-to-real framework designed for precise bimanual assembly. Our approach begins by collecting a small amount of real-world demonstrations (e.g., 30 episodes) to pre-train a bimanual visuo-tactile diffusion policy. The policy is subsequently fine-tuned using reinforcement learning (RL) on a digital twin of the scene within a parallelized simulation environment. Finally, the fine-tuned policy is transferred from simulation back to the real world. Our framework offers three key contributions: (1) We enhance visuo-tactile diffusion policies through RL-based fine-tuning in simulation, enabling policy improvement by exploring state-action regions near those seen in the initial human demonstrations. (2) We develop a GPU-parallelized tactile simulation module within a GPU-based physics simulator to accurately simulate piezoresistive tactile sensors that reliably capture normal force signals. This selection for tactile modality and simulation significantly narrows the sim-to-real gap and overcomes critical challenges in tactile modality transfer. (3) We adopt point-based representations for visual and tactile modalities, facilitating a seamless real-to-sim-to-real transfer. The unified representation preserves the spatial relationships between visual and tactile points, enhancing policy effectiveness. To the best of our knowledge, our work is the first to show successful RL with large-scale simulation and sim-to-real transfer for bimanual visuo-tactile policies. We comprehensively evaluate our system on five challenging bimanual assembly tasks, demonstrating successful real-world execution and performance gains from simulation-based fine-tuning. A detailed analysis of each training phase shows that high-resolution tactile feedback significantly boosts policy effectiveness during both pre-training and fine-tuning. Additionally, our visuo-tactile point-based representation enables robust bidirectional transfer between real and simulated environments, playing a critical role in the success of our two-stage training framework across tasks and domains. 2 Related Work Tactile Sensors and Simulation. Tactile information is critical in human daily life and plays an equally important role in enabling robots to interact with their environments [15]. Recognizing its importance, researchers have integrated vision and tactile sensing to enhance robotic manipulation [3, 4, 8, 13, 14, 16–25]. Most existing work focuses on optical-based tactile sensors, which can capture normal and shear forces, as well as fine-grained surface textures [10, 12, 26–30]. However, the high-resolution images produced by these sensors are difficult to simulate accurately and cause larger sim-real gap. Some approaches [31–33] attempt to sample marker positions from optical tactile 2 (b) Tactile Simulation (a) Tactile Hardware (v) Object Flexible Tactile Sensor Adaptive Fin-Shaped Gripper Spring-Damper System (vi) 𝑘𝑘𝑛𝑛 𝑘𝑘𝑑𝑑 Sampled Tactile Points on Contacting Surface (iv) Top FPC (0.2 mm) Bottom FPC (0.2 mm) Force-Sensitive Film (0.1 mm) (iii) (i) (ii) 12 by 32 Sensor Matrix (2mm Grid) Figure 2: Tactile Sensing in Real and Simulation. (a) Our real-world hardware setup, including the design of the piezoresistive tactile sensor. Four tactile sensor pads (two per hand) are mounted on the soft gripper to capture contact forces. (b) Replication of the tactile sensing process in simulation. A spring-damper model is used to simulate the interaction between the tactile points and objects to generate realistic tactile signals. images and infer normal and shear forces from marker deviations, but this indirect method further complicates sim-to-real transfer. In contrast, we select a tactile sensing modality that emphasizes structural contact patterns with normal force only rather than fine textures. Such signals are not only easier to simulate accurately but also more amenable to transfer between real and simulated environments, enabling scalable visuo-tactile data generation through simulation. Bimanual Visuo-Tactile Manipulation. Bimanual robotic manipulation presents significant chal- lenges across a range of applications [7, 17, 34–38], particularly for assembly tasks. Recently, there has been growing interest in learning-based methods, such as imitation learning [39–43], which leverage multimodal human demonstrations for fine-grained manipulation. However, achieving higher precision tasks vastly increases the amount of training data required in bimanual settings. To address this, simulation has been used to generate additional data and enhance policy robustness. Villasevil et al. [5] explored the use of reinforcement learning to fine-tune policies initialized by imitation learning. Nonetheless, most bimanual manipulation frameworks are still restricted to using the visual input alone [44–46], particularly for real-to-sim-to-real pipelines. This is largely because tactile signals, especially those from optical tactile sensors, are difficult to simulate and transfer [47], limiting their potential of being incorporated into simulation-based training. In contrast, our framework, along with the selection of a transfer-friendly tactile modality, enables effective real-to-sim-to-real learning with both vision and tactile inputs. 3 Visuo-Tactile System and Tactile Simulatiom Tactile Sensor Hardware. Our sensor FlexiTac uses resistive sensing matrices, inspired by 3D- ViTac [3], to efficiently convert mechanical pressure into electrical signals. The choice of matrix-based flexible sensors is motivated by two key factors: (1) Compatibility: these sensors are versatile and can be mounted on a wide range of robotic end-effectors, including both rigid and compliant fingers. (2) Sim-to-real transferability: the sensing modality can be simulated with high fidelity in our environment, enabling consistent behavior across real-to-sim-to-real transfers. As depicted in Fig. 2, each robotic finger is equipped with a tactile sensor pad composed of 12×32 sensing units, with a spatial resolution of 2mm (i.e., 2mm center-to-center distance between adjacent sensors). We use a triple-layer structure similar to [3], with a piezoresistive sensing layer sandwiched between two flexible printed circuit (FPC) layers (Fig. 2 (iii)). Utilizing FPC significantly enhances spatial consistency and increases the resolution of each sensor pad. Additionally, we developed a streamlined fabrication process capable of producing a single sensor in under 5 minutes, enabling 3 + PointNet Diffusion Policy Action Chunk Simulated Visuo-Tactile Points Actor Critic Action Chunk Robot Proprioception Robot Proprioception Low-Dim Env State Zero-Shot Transfer PointNet + Value Stage 1: Real World Pre-Training Stage 2: Simulation Fine-Tuning Diffusion Policy MLP Real Raw Point Cloud Simulated Raw Point Cloud Real Visuo-Tactile Points Noise Diffusion Policy Policy Optimization Figure 3: Two-Stage Visuo-Tactile Policy Training. Stage 1: We collect real-world human demonstrations with visual and tactile modalities and pre-train a diffusion policy. Stage 2: We simulate the same sensory modalities in simulation and fine-tune the pre-trained diffusion policy using policy-gradient-based RL. cost-effective and scalable deployment. We are committed to releasing comprehensive tutorials detailing the hardware design and fabrication process. Tactile Simulation. To simulate the tactile sensory input, we build on TacSL [12], a GPU-based tactile simulation library integrated with Isaac Gym [48]. We chose TacSL since the sensory signals acquired from our sensor pads are closely akin to TacSL’s simulated tactile normal forces. To model the soft-contact interactions between our deformable tactile sensors (mounted on the soft grippers) and rigid contacting objects, we follow TacSL and employ a penetration-based tactile force model [49]. As shown in Fig. 2 (vi), the interaction between each tactile point (i.e., the 3D position of a sensing unit) and the rigid object is modeled using a Kelvin-Voigt model, consisting of a linear spring and a viscous damper connected in parallel. Each sampled tactile point independently computes the contact normal force fn as: fn = −(knd + kd ˙d)n, where d and ˙d represent the interpenetration depth and the relative velocity along the contact normal, respectively, and n denotes the outward contact normal vector. At each simulation timestep, the signed distance field (SDF) of the contacting object is queried to compute d, and the positions of tactile points are updated in real time via forward kinematics. The known resolution of our tactile sensor allows us to uniformly distribute contact points across the sensor surface. The shape and spatial resolution of the simulated sensor are fully customizable, ensuring consistency with their real-world counterparts. Further implementation details and force computation steps are provided in the Appendix. Real-to-Sim-to-Real for Tactile. Alternative tactile sensors, such as vision-based ones like Gel- Sight [16], rely heavily on internal illumination, making them difficult to simulate accurately and prone to sim-to-real gaps. In contrast, our approach uses normal force signals, which are easier to calibrate in simulation, as demonstrated in our experiments. Another challenge lies in simulating the deformable nature of flexible tactile sensors. While high-fidelity techniques like finite element methods (FEM) can model this softness, they are computationally expensive and impractical for large-scale reinforcement learning. By leveraging TacSL’s GPU-accelerated simulation, we efficiently approximate the softness of flexible tactile sensors, enabling scalable training. As a result, our sensor design improves robustness and supports effective zero-shot sim-to-real transfer. 4 Visuo-Tactile Policy Policy Optimizatiom Our goal is to learn a generalizable and robust control policy, denoted as π : O →A that maps multi- modal observation o ∈O to robot actions a ∈A with a few real-world demonstrations. As shown in Fig. 3, our method consists of two stages: (1) real-world pre-training and (2) simulation fine-tuning. 4 In the first stage, we pre-train a diffusion policy [1] using behavioral cloning on a small amount of human demonstrations. This pre-trained policy is expected to succeed on the task sporadically for a restricted range of object initial positions in both real-world and simulated environments. In the second stage, we initialize the policy (actor) model from the pre-trained weights and further optimize it with policy-gradient-based RL [50] in simulation. Finally, this fine-tuned policy is transferred back to the real world for evaluation. Visuo-Tactile Representation. The choice of observation o is crucial for bridging the simulation and real world. We adopt a point cloud-based representation for its robust sim-to-real transferability [3, 10, 51]. Our observation contains three modalities: (1) visual: a colorless point cloud captured by an ego-centric camera, denoted as P visual t ∈RNvis×4, (2) tactile: a point cloud derived from the tactile sensors representing the 3D positions of the sensing units and their continuous sensory readings, denoted as P tactile t ∈RNtac×4. We set Ntac = 384 × Nfinger for the tactile point cloud since each sensor pad consists of 12 × 32 = 384 tactile points, and (3) proprioception: joint positions from the two arms and two grippers. As shown in Fig. 3, we merge the visual and tactile point clouds into a unified visuo-tactile represen- tation: o = P tactile t ∪P visual t . The tactile sensor’s position is computed via forward kinematics and transformed into the camera’s 3D coordinate frame, preserving the spatial relationships between the two modalities. Following [2], the merged point cloud is processed by a PointNet encoder [52], and its output is concatenated with proprioceptive features encoded by a multilayer perceptron (MLP). The resulting feature vector is used as the conditioning input for the denoising diffusion network. Stage 1: Real-World Pre-training. We begin by collecting a small real-world demonstration dataset (e.g., 30 episodes in our experiments) to pre-train a diffusion policy. At the beginning of each trial, the assembly parts are randomly placed within a designated region on the table. A human operator then teleoperates both robot arms to pick up the parts and complete the assembly task. During each demonstration, we record visual and tactile inputs, robot joint states, and action commands. To train the diffusion policy, we adopt a denoising diffusion probabilistic model (DDPM) and follow standard practice by predicting an action chunk [1] (Fig. 3, top). Given the limited number of demonstrations, the trained model may not succeed consistently, but is expected to occasionally complete the task, providing reward signals for reinforcement learning to further improve the policy during fine-tuning. Stage 2: Simulation Fine-tuning. We fine-tune the pre-trained diffusion policy in an end-to-end manner using Diffusion Policy Policy Optimization (DPPO) [6] (Fig. 3, bottom). DPPO optimizes a diffusion policy using Proximal Policy Optimization (PPO) [50], by formalizing the denoising process as a Markov Decision Process (MDP), which allows the reward signal to propagate effectively through the denoising chain. For scalable training, we assume access to a digital twin of the scene equipped with simulated vision and tactile sensors. The pre-trained diffusion policy initializes the actor network, while the critic network is initialized randomly. We adopt an asymmetric actor-critic strategy [53], where the critic receives a low-dimensional representation of the robot and object state. Reward Function. In line with the observations in DPPO [6], we find that pre-training on human demonstrations provides a strong prior that guides RL exploration, allowing us to avoid complex reward engineering. We therefore fine-tune using a sparse reward: the agent receives a reward of 1 when the parts are successfully assembled, and 0 otherwise [54]. 5 Experimental Results In this section, we address the following three questions through experiments: (1) How does our fine-tuned policy improve over the baseline diffusion policy? (2) How effectively does the proposed visuo-tactile representation transfer across domains (real-to-sim-to-real)? (3) How does policy performance scale with the number of human demonstrations? 5.1 Tactile Simulation Calibration To align the simulated tactile response with that of the real sensor, we first characterize the sensor’s force-reading curve using a DMA 850 Dynamic Mechanical Analyzer. We then fit a Kelvin–Voigt viscoelastic model by iteratively tuning the elastic modulus kn (compliance stiffness) and viscosity 5 Real Sensor Reading Sim Sensor Reading Frequency 0 100 200 300 400 500 Sensing Reading 0.0 0.2 0.4 0.6 0.8 1.0 Histogram of Sensing Reading in Sim and Real Figure 4: Sensor Calibration Results. The histogram shows consistent sensor reading distri- butions between the simulation and real world. (a) Table-Top Bimanual Setup (b) Semi-Humanoid Bimanual Setup Real Asset Ego-Centric Camera Ego-Centric Camera Tactile Sensors Tactile Sensors Figure 5: Real Robot Setups. Both the socket and plug are randomly placed in a 3cm × 3cm area. Both robot setups have four tactile sensing pads and an ego-centric camera. coefficient kd (damping) until the simulated curve closely matches the measured response. To validate the calibration, we grasp objects from multiple poses in the real world and record the corresponding tactile signals. We then replay the same trajectories in simulation to collect synthetic tactile data. A histogram comparison of the two datasets shows that the calibrated simulator accurately reproduces the distribution of real tactile signals (Fig. 4). We provide detailed calibration procedures in the supplementary materials. 5.2 Experiment Setup We evaluate our multimodal real-to-sim-to-real system on challenging bimanual assembly tasks. Since our tactile sensors and simulation pipeline can be easily transferred across different robot platforms, we evaluate our method on two setups: (1) a tabletop bimanual robot arm setup, and (2) a semi-humanoid robot. For the tabletop bimanual setup (Fig.5 (a)), we adopt the teleoperation setup proposed in ALOHA 2 [55], using two 6-DoF WidowX robotic arms for manipulation, each equipped with a fin-shaped parallel soft gripper. A separate pair of identical arms is used for teleoperation. An Intel RealSense D455 camera is mounted on the table for egocentric visual sensing, and a tactile sensing pad is installed on each of the four soft fingers. For the semi-humanoid setup (Fig. 5 (b)), we use two 7-DoF Kinova Gen3 arms, each paired with a Robotiq 2F-140 gripper. The arms are mounted to a static torso structure. An Intel RealSense D455 camera is mounted at the head for visual sensing, and a tactile sensing pad is attached to each of the four gripper fingers. For teleoperation, we use the Meta Quest 2, with tracked controller poses mapped to target poses for the robot end effectors. Online trajectory generation is performed using the GPU-accelerated model predictive control framework provided by cuRobo [56]. Tasks are selected from the AutoMate dataset [57]. Each task involves a plug-socket pair: the robot must grasp both objects and complete an in-air insertion. Figure 5 shows the objects and robot configurations used in our experiments. For each task, we collect 30 demonstrations to pre-train a diffusion policy, which is then used to initialize the policy for fine-tuning (as described in Sec. 3). To reflect the variability introduced by bimanual insertion, we randomize the initial pose of each object within a 3cm range during both data collection and fine-tuning. The same visuo-tactile representation and encoder architecture are used throughout pre-training and fine-tuning. 5.3 Quantitative Analysis Fine-tuning improves precise manipulation. Our RL fine-tuned policy significantly boosts perfor- mance on high-precision assembly tasks. (i) Fine-tuning introduces necessary exploration. Diffusion Policy performs well on lower-precision tasks, but behavior cloning alone lacks the small, repeated adjustments needed for tight-fit insertions. Encoding these subtle exploratory behaviors via demon- strations would require prohibitively large datasets. In contrast, RL fine-tuning introduces such behaviors efficiently by leveraging simulated rollouts. In real-world experiments (Tab. 1), our fine- tuned policy improves success rates by approximately 20% for the vision-only variant and 40% for 6 Visuo-Tactile Vision-Only 0.0 0.2 0.4 0.6 0.8 1.0 Success Rate Visuo-Tactile Vision-Only 0.0 0.2 0.4 0.6 0.8 1.0 Success Rate Visuo-Tactile Vision-Only 0 100 200 300 0.0 0.2 0.4 0.6 0.8 1.0 Fine-Tuning Curves (Asset 00081) Success Rate 400 Visuo-Tactile Vision-Only 0 100 200 300 400 0 100 200 300 400 Fine-Tuning Curves (Asset 00186) Fine-Tuning Curves (Asset 00007) Epochs Epochs Epochs Figure 6: Simulation Fine-Tuning of Pre-Trained Policies. We compare the fine-tuning performance of visuo-tactile (blue) and vision-only (orange) policies. The visuo-tactile policy starts with not only a higher pre-trained performance but also continues to improve, achieving higher final performance after fine-tuning. Real Sim Sim Real 0.0 0.2 0.4 0.6 0.8 1.0 Success Rate Pre-Train Fine-Tuned Visuo-Tactile Vision-Only How Policy Evolves? (Asset 00186) Settings Figure 7: Performance of Pre-Trained Policy in sim and real, Fine-Tuned Policy in sim and real. (i) Initial States (ii) Grasping (iii) Two Arms Continuously Coordinate with Each Other (iv) Insertion Bad Pose for Pre-Insertion (a) Successful Assembly with Our RL Fine-Tuned Policy Fail to Adjust Object Pose Bad Insertion Direction (b) Typical Failure Cases of Baseline Policy t t t Figure 8: Policy Rollout. We evaluate our fine-tuned visuo-tactile policy on five plug-and-socket pairs with a clearance of roughly ≈2mm. Part(a) shows the two arms keep re-orienting or moving the parts until they slide together smoothly, as indicated by the evolving tactile maps on the screen. Part(b) the robot either stops with a misaligned pose or pushes at a bad angle, leading to jams and incomplete insertions. the visuo-tactile variant. Grasping often induces slight object slip, yielding an uncertain pre-insert pose that vision alone seldom detects due to occlusion. Fig. 8 (a) shows a representative success trajectory: following an imprecise pre-insertion pose, the two arms engage in rapid cycles of sensing, micro-adjusting, and re-sensing. These back-and-forth “wiggle-and-dock” maneuvers—commonly used by humans—emerged organically during RL fine-tuning, despite not being explicitly captured in demonstrations. This is because tactile feedback clearly signals when alignment improves, as indicated through the change of the contact forces. In contrast, policies without tactile input or sufficient exploration tend to stall or attempt insertion at poor angles, leading to failure or physical damage (Fig. 8 (b)). (ii) Tactile feedback enhances policy fine-tuning. Visual input alone often fails to capture the fine contact cues needed to align parts. By incorporating tactile data, the visuo-tactile policy gains access to these subtle interactions, enabling it to start from a stronger baseline after pre-training and achieve higher precision after fine-tuning. As shown in our simulation experiments (Tab. 2), both the vision-only and visuo-tactile policies benefit from fine-tuning. However, the visuo-tactile policy not only begins at a higher performance level but also converges to greater precision. A common failure mode for the vision-only baseline is stalling with the plug hovering just above the socket, unable to close the final 2mm gap. In contrast, the visuo-tactile policy continues adjusting until successful insertion is achieved. 7 Table-Top Bimanual Setup Settings Visual Policy (Real) Visuo-Tactile Policy (Real) 00081 00186 00007 00446 00581 00081 00186 00007 00446 00581 Pre-Train 0.35 0.40 0.40 0.20 0.35 0.55 0.55 0.65 0.40 0.35 RL Fine-Tuning 0.50 0.65 0.75 0.30 0.45 0.85 0.90 0.95 0.80 0.75 Semi-Humanoid Robot Setup Settings Visual Policy (Real) Visuo-Tactile Policy (Real) 00081 00186 00007 00081 00186 00007 Pre-Train 0.15 0.25 0.35 0.30 0.30 0.35 RL Fine-Tuning 0.35 0.30 0.45 0.60 0.65 0.65 Table 1: Real-World Experiments. We compare the pre-trained diffusion policy with the policy after RL fine-tuning, as well as vision-only versus visuo-tactile representations. Five object assets are evaluated across two robot setups, with each column corresponding to an AutoMate [57] asset ID. Table-Top Bimanual Setup Settings Visual Policy (Sim) Visuo-Tactile Policy (Sim) 00081 00186 00007 00446 00581 00081 00186 00007 00446 00581 Pre-Train 0.28 0.32 0.42 0.12 0.18 0.45 0.48 0.54 0.34 0.31 Fine-Tune w/o Pretrain 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Fine-Tune w/ Pretrain 0.57 0.72 0.84 0.36 0.52 0.82 0.94 0.98 0.76 0.78 Table 2: Simulation Results. We compare three variants in simulation: the Pre-Train Policy (trained only on real-world demonstrations), Fine-Tune w/o Pre-Train, and Fine-Tune w/ Pre-Train. The results indicate that both pre-training and fine-tuning contribute significantly to final performance. Num of Pretrain Data Visual Policy (Sim) Visuo-Tactile Policy (Sim) Pretrain RL Fine-Tune Pretrain RL Fine-Tune 10 demonstrations 0.08 0.21 0.02 0.34 30 demonstrations 0.40 0.65 0.48 0.94 50 demonstrations 0.37 0.67 0.57 0.92 Table 3: Different Amount of Pre-Training Data. We train Pre-Train Policies with different amounts of real-world data and transfer them to the simulation for fine-tuning. The results are only compared in simulation. Representation transfer across domians (real-to-sim-to-real). Transferring a policy between the real robot and simulation inevitably introduces some performance loss due to domain mismatch. These discrepancies arise from differences in point cloud inputs, tactile readings, robot controller settings, and minor joint encoder errors (which affect the placement of tactile points, as they are computed from joint states). As shown in Fig. 7, even with our low-gap tactile modality, we observe a slight performance drop: transferring from real to simulation reduces success rates by approximately 5–10%, while sim-to-real transfer causes a smaller—and sometimes negligible—drop. However, since RL fine-tuning in simulation improves success rates by over 30%, this transfer loss is acceptable and does not outweigh the overall gain. Ablation study: effect of pre-training data quantity. We trained three base policies using 10, 30, and 50 demonstrations, and applied the same RL fine-tuning procedure to each. As shown in Tab. 3, the policy trained with only 10 demonstrations performed poorly, achieving near-zero success. However, RL fine-tuning still improved its success rate to around 30%. The base policies trained on 30 and 50 demonstrations achieved reasonable performance and both fine-tuned to near-perfect success rates. Increasing the dataset from 30 to 50 demonstrations led to minimal improvement in the base policy. In both cases, the policy was already capable of completing the grasp phase; the main bottleneck was the fine, real-time adjustments required during the insertion phase. These micro-motions are difficult to capture with a modest increase in demonstration data, so adding more demonstrations brought limited additional benefit. 6 Conclusion In this paper, we present a real-to-sim-to-real pipeline with multi-modal perception for precise bimanual manipulation. We introduce a tactile simulation capable of effectively modeling dense tactile sensing grids, achieving strong alignment between simulation and the real world. Finally, we demonstrate the effectiveness of RL finetuning, which substantially improves performance across diverse precise assembly tasks. 8 7 Limitations 7.1 Trade-offs with Vision-Based Tactile Sensors We compare our FlexiTac sensor with vision-based tactile sensors along three dimensions: (1) Resolution. Vision-based tactile sensors can provide high-resolution RGB tactile feedback at sub- millimeter (<1mm) scales, which benefits fine-grained manipulation. However, this high resolution introduces a significant sim-to-real gap. In contrast, our design, while lower in resolution (2mm per unit), reduces simulation complexity and achieves more reliable sim-and-real alignment, while still supporting compact assembly tasks. (2) Shear Force. A major advantage of vision-based tactile sensors is their ability to capture shear- force information. Although FlexiTac does not provide direct shear-force measurements, policies with temporal history can implicitly infer shear-related effects from contact dynamics. (3) System Design. Vision-based tactile sensors are typically bulky (at least as large as the camera’s focal length) and thus difficult to integrate into compliant grippers or small size fingertips. Moreover, customizing such sensors [58–60] requires significant engineering effort. In contrast, FlexiTac’s thin, flexible force mat is lightweight, easy to install, and readily customizable, making it well-suited for compliant gripper integration. 7.2 Methodological Limitations (1) Real-Sim Alignment. Like sim-to-real pipelines, our real-to-sim transfer pipeline requires manual calibration efforts. Visual domains, tactile distributions, and low-level control must all be aligned. (2) Scope of the Method Applicability. The current pipeline is constrained by simulator capabilities: both Isaac Gym and our tactile simulation lack support for deformable objects. Longer-horizon tasks require significantly more training time, and the absence of shear force sensing limits applicability to more complex tasks. In future work, we aim to further enhance real-to-sim fidelity. (3) Requirement for CAD models. Even though our sparse-reward formulation avoids excessive reward engineering, we still rely on object CAD models to train fine-tuned policies; in this work, we used 3D-printed replicas from an existing dataset. A CAD-free, plug-and-play pipeline would enable the extension of our approach to a much broader range of everyday objects. Acknowledgement This work is partially supported by the Toyota Research Institute (TRI), the Sony Group Corporation, Google, Dalus AI, Pickle Robot, and an Amazon Research Award. This article solely reflects the opinions and conclusions of its authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors. References [1] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. C. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. In RSS, 2023. [2] Y. Ze, G. Zhang, K. Zhang, C. Hu, M. Wang, and H. Xu. 3D diffusion policy: Generalizable visuomotor policy learning via simple 3D representations. In RSS, 2024. [3] B. Huang, Y. Wang, X. Yang, Y. Luo, and Y. Li. 3D-ViTac: Learning fine-grained manipulation with visuo-tactile sensing. In CoRL, 2024. [4] T. Lin, Y. Zhang, Q. Li, H. Qi, B. Yi, S. Levine, and J. Malik. Learning visuotactile skills with two multifingered hands. In ICRA, 2025. 9 [5] M. T. Villasevil, A. Simeonov, Z. Li, A. Chan, T. Chen, A. Gupta, and P. Agrawal. Reconciling reality through simulation: A real-to-sim-to-real approach for robust manipulation. In RSS, 2024. [6] A. Z. Ren, J. Lidard, L. L. Ankile, A. Simeonov, P. Agrawal, A. Majumdar, B. Burchfiel, H. Dai, and M. Simchowitz. Diffusion policy policy optimization. In ICLR, 2025. [7] L. Ankile, A. Simeonov, I. Shenfeld, M. Torne, and P. Agrawal. From imitation to refinement – residual RL for precise assembly. In ICRA, 2025. [8] Z.-H. Yin, B. Huang, Y. Qin, Q. Chen, and X. Wang. Rotating without seeing: Towards in-hand dexterity through touch. In RSS, 2023. [9] H. Qi, B. Yi, S. Suresh, M. Lambeta, Y. Ma, R. Calandra, and J. Malik. General in-hand object rotation with vision and touch. In CoRL, 2023. [10] Y. Yuan, H. Che, Y. Qin, B. Huang, Z.-H. Yin, K.-W. Lee, Y. Wu, S.-C. Lim, and X. Wang. Robot synesthesia: In-hand manipulation with visuotactile sensing. In ICRA, 2024. [11] J. Wang, Y. Yuan, H. Che, H. Qi, Y. Ma, J. Malik, and X. Wang. Lessons from learning to spin “pens”. In CoRL, 2024. [12] I. Akinola, J. Xu, J. Carius, D. Fox, and Y. Narang. TacSL: A library for visuotactile sensor simulation and learning. T-RO, 41:2645–2661, 2025. [13] J. Yin, H. Qi, J. Malik, J. Pikul, M. Yim, and T. Hellebrekers. Learning in-hand translation using tactile skin with shear and normal force sensing. In ICRA, 2025. [14] Y. Lin, A. Church, M. Yang, H. Li, J. Lloyd, D. Zhang, and N. F. Lepora. Bi-touch: Bimanual tactile manipulation with sim-to-real deep reinforcement learning. RA-L, 8(9):5472–5479, 2023. [15] R. S. Johansson and G. Westling. Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects. Experimental Brain Research, 56:550–564, 2004. URL https://api.semanticscholar.org/CorpusID: 16631166. [16] W. Yuan, S. Dong, and E. H. Adelson. GelSight: High-resolution robot tactile sensors for estimating geometry and force. Sensors, 17(12), 2017. [17] T. Lin, Z.-H. Yin, H. Qi, P. Abbeel, and J. Malik. Twisting lids off with two hands. In CoRL, 2024. [18] S. Suresh, H. Qi, T. Wu, T. Fan, L. Pineda, M. Lambeta, J. Malik, M. Kalakrishnan, R. Ca- landra, M. Kaess, et al. Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation. arXiv preprint arXiv:2312.13469, 2023. [19] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8943–8950. IEEE, 2019. [20] V. Dave, F. Lygerakis, and E. Rueckert. Multimodal visual-tactile representation learning through self-supervised contrastive pre-training. arXiv preprint arXiv:2401.12024, 2024. [21] Z. Ding, G. Chen, Z. Wang, and L. Sun. Adaptive visual–tactile fusion recognition for robotic operation of multi-material system. Frontiers in Neurorobotics, 17, 2023. ISSN 1662-5218. doi: 10.3389/fnbot.2023.1181383. URL https://www.frontiersin.org/articles/10.3389/ fnbot.2023.1181383. 10 [22] H. Xue, J. Ren, W. Chen, G. Zhang, Y. Fang, G. Gu, H. Xu, and C. Lu. Reactive diffusion policy: Slow-fast visual-tactile policy learning for contact-rich manipulation. arXiv preprint arXiv:2503.02881, 2025. [23] B. Ai, S. Tian, H. Shi, Y. Wang, C. Tan, Y. Li, and J. Wu. Robopack: Learning tactile-informed dynamics models for dense packing. Robotics: Science and Systems (RSS), 2024. URL https://arxiv.org/abs/2407.01418. [24] R. Bhirangi, V. Pattabiraman, E. Erciyes, Y. Cao, T. Hellebrekers, and L. Pinto. Anyskin: Plug-and-play skin sensing for robotic touch. ICRA, 2025. [25] I. Guzey, Y. Dai, B. Evans, S. Chintala, and L. Pinto. See to touch: Learning tactile dexterity through visual incentives. arXiv preprint arXiv:2309.12300, 2023. [26] E. Donlon, S. Dong, M. Liu, J. Li, E. Adelson, and A. Rodriguez. Gelslim: A high-resolution, compact, robust, and calibrated tactile-sensing finger. In 2018 IEEE/RSJ International Confer- ence on Intelligent Robots and Systems (IROS), pages 1927–1934. IEEE, 2018. [27] I. H. Taylor, S. Dong, and A. Rodriguez. Gelslim 3.0: High-resolution measurement of shape, force and slip in a compact tactile-sensing finger. In 2022 International Conference on Robotics and Automation (ICRA), pages 10781–10787. IEEE, 2022. [28] D. Ma, E. Donlon, S. Dong, and A. Rodriguez. Dense tactile force estimation using gelslim and inverse fem. In 2019 International Conference on Robotics and Automation (ICRA), pages 5418–5424. IEEE, 2019. [29] Z. Si, G. Zhang, Q. Ben, B. Romero, Z. Xian, C. Liu, and C. Gan. Difftactile: A physics- based differentiable tactile simulator for contact-rich robotic manipulation. arXiv preprint arXiv:2403.08716, 2024. [30] A. Goncalves, N. Kuppuswamy, A. Beaulieu, A. Uttamchandani, K. M. Tsui, and A. Alspach. Punyo-1: Soft tactile-sensing upper-body robot for large object manipulation and physical human interaction. In 2022 IEEE 5th International Conference on Soft Robotics (RoboSoft), pages 844–851, 2022. doi:10.1109/RoboSoft54090.2022.9762117. [31] N. Sunil, S. Wang, Y. She, E. Adelson, and A. R. Garcia. Visuotactile affordances for cloth manipulation with local control. In Conference on Robot Learning, pages 1596–1606. PMLR, 2023. [32] Y. She, S. Wang, S. Dong, N. Sunil, A. Rodriguez, and E. Adelson. Cable manipulation with a tactile-reactive gripper. The International Journal of Robotics Research, 40(12-14):1385–1401, 2021. [33] S. Wang, Y. She, B. Romero, and E. H. Adelson. Gelsight wedge: Measuring high-resolution 3d contact geometry with a compact robot finger. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. [34] B. Huang, Y. Chen, T. Wang, Y. Qin, Y. Yang, N. Atanasov, and X. Wang. Dynamic handover: Throw and catch with bimanual hands. arXiv preprint arXiv:2309.05655, 2023. [35] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprint arXiv:2302.12422, 2023. [36] C. Wang, H. Shi, W. Wang, R. Zhang, L. Fei-Fei, and C. K. Liu. Dexcap: Scalable and portable mocap data collection system for dexterous manipulation. arXiv preprint arXiv:2403.07788, 2024. 11 [37] Y. Qin, W. Yang, B. Huang, K. Van Wyk, H. Su, X. Wang, Y.-W. Chao, and D. Fox. Anyteleop: A general vision-based dexterous robot arm-hand teleoperation system. In Robotics: Science and Systems, 2023. [38] Y. Jiang, C. Wang, R. Zhang, J. Wu, and L. Fei-Fei. Transic: Sim-to-real policy transfer by learning from online correction. In Conference on Robot Learning, 2024. [39] C. Chi, Z. Xu, C. Pan, E. Cousineau, B. Burchfiel, S. Feng, R. Tedrake, and S. Song. Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots. In Proceedings of Robotics: Science and Systems (RSS), 2024. [40] Y. Wang, G. Yin, B. Huang, T. Kelestemur, J. Wang, and Y. Li. Gendp: 3d semantic fields for category-level generalizable diffusion policy. In 8th Annual Conference on Robot Learning, volume 2, 2024. [41] L. Ankile, A. Simeonov, I. Shenfeld, and P. Agrawal. Juicer: Data-efficient imitation learning for robotic assembly. arXiv, 2024. [42] K. Yu, Y. Han, Q. Wang, V. Saxena, D. Xu, and Y. Zhao. Mimictouch: Leveraging multi-modal human tactile demonstrations for contact-rich manipulation. arXiv preprint arXiv:2310.16917, 2023. [43] X. Zhu, B. Huang, and Y. Li. Touch in the wild: Learning fine-grained manipulation with a portable visuo-tactile gripper. arXiv preprint arXiv:2507.15062, 2025. [44] X. Cheng, J. Li, S. Yang, G. Yang, and X. Wang. Open-television: Teleoperation with immersive active visual feedback. arXiv preprint arXiv:2407.01512, 2024. [45] C. Lu, X. Cheng, J. Li, S. Yang, M. Ji, C. Yuan, G. Yang, S. Yi, and X. Wang. Mobile-television: Predictive motion priors for humanoid whole-body control. arXiv preprint arXiv:2412.07773, 2024. [46] R. Ding, Y. Qin, J. Zhu, C. Jia, S. Yang, R. Yang, X. Qi, and X. Wang. Bunny-visionpro: Real- time bimanual dexterous teleoperation for imitation learning. arXiv preprint arXiv:2407.03162, 2024. [47] E. Su, C. Jia, Y. Qin, W. Zhou, A. Macaluso, B. Huang, and X. Wang. Sim2real manipulation on unknown objects with tactile-based reinforcement learning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 9234–9241. IEEE, 2024. [48] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State. Isaac Gym: High performance GPU-based physics simulation for robot learning. In NeurIPS Track on Datasets and Benchmarks, 2021. [49] J. Xu, S. Kim, T. Chen, A. R. Garcia, P. Agrawal, W. Matusik, and S. Sueda. Efficient tactile simulation with differentiability for robotic manipulation. In CoRL, 2022. [50] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. [51] Y. Qin, B. Huang, Z.-H. Yin, H. Su, and X. Wang. DexPoint: Generalizable point cloud reinforcement learning for sim-to-real dexterous manipulation. In CoRL, 2022. [52] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. [53] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel. Asymmetric actor critic for image-based robot learning. In RSS, 2018. 12 [54] M. Heo, Y. Lee, D. Lee, and J. J. Lim. FurnitureBench: Reproducible real-world benchmark for long-horizon complex manipulation. In RSS, 2023. [55] J. Aldaco, T. Armstrong, R. Baruch, J. Bingham, S. Chan, K. Draper, D. Dwibedi, C. Finn, P. Florence, S. Goodrich, et al. Aloha 2: An enhanced low-cost hardware for bimanual teleoperation. arXiv preprint arXiv:2405.02292, 2024. [56] B. Sundaralingam, S. K. S. Hari, A. Fishman, C. Garrett, K. V. Wyk, V. Blukis, A. Millane, H. Oleynikova, A. Handa, F. Ramos, N. Ratliff, and D. Fox. cuRobo: Parallelized collision-free minimum-jerk robot motion generation. arXiv preprint arXiv:2310.17274, 2023. [57] B. Tang, I. Akinola, J. Xu, B. Wen, A. Handa, K. V. Wyk, D. Fox, G. S. Sukhatme, F. Ramos, and Y. Narang. AutoMate: Specialist and generalist assembly policies over diverse geometries. In RSS, 2024. [58] J. Zhao, N. Kuppuswamy, S. Feng, B. Burchfiel, and E. Adelson. Polytouch: A robust multi- modal tactile sensor for contact-rich manipulation using tactile-diffusion policies. arXiv preprint arXiv:2504.19341, 2025. [59] Y. Ma, J. A. Zhao, and E. Adelson. Gellink: A compact multi-phalanx finger with vision-based tactile sensing and proprioception. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 1107–1113. IEEE, 2024. [60] F. Liu, C. Li, Y. Qin, A. Shaw, J. Xu, P. Abbeel, and R. Chen. Vitamin: Learning contact-rich tasks through robot-free visuo-tactile manipulation interface. arXiv preprint arXiv:2504.06156, 2025. [61] Learning the signatures of the human grasp using a scalable tactile glove. Nature, 569:698 – 702, 2019. URL https://api.semanticscholar.org/CorpusID:169033286. [62] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, 2024. [63] A. Z. Ren, J. Lidard, L. L. Ankile, A. Simeonov, P. Agrawal, A. Majumdar, B. Burchfiel, H. Dai, and M. Simchowitz. Diffusion policy policy optimization. In arXiv preprint arXiv:2409.00588, 2024. 13 Supplementary Materials Contents A Details for Tactile Sensor Hardware 14 B Details for Tactile Simulation Implementation 14 B.1 Sampling Tactile Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 B.2 Tactile Signal Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 C Real-to-Sim-to-Real 15 C.1 Tactile Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 C.2 Vision for Sim–Real Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 D DPPO Implementations and Parameters 16 D.1 Pre-Train Implementations and Parameters . . . . . . . . . . . . . . . . . . . . . . . . 16 D.2 Fine-Tune Implementations and Parameters . . . . . . . . . . . . . . . . . . . . . . . 17 A Details for Tactile Sensor Hardware Our FlexiTac sensor builds on prior work [3, 61] with several key modifications. Instead of manually aligned conductive yarn, we employ flexible PCBs, which significantly improve durability and scalability, and we further adapt the design for seamless integration with soft-fin grippers. The sensor operates within a force range of approximately 0.2–10 N. Each sensor pad costs about $10, while the upgraded reading board costs roughly $30. Compared to the earlier version [3], the new reading board now supports up to 32 × 32 sensor units at 23 Hz, doubling the resolution from the previous 16 × 16 limit. The core innovations of this version lie in the flexible PCB pad design and the openly released circuit drawings and manufacturing guide, which are available on the FlexiTac website. B Details for Tactile Simulation Implementation Sensor Pad Link 𝑀𝑀 Finger Link 𝑑𝑑 Surface Normal ො𝑛𝑛 𝐶𝐶 𝑅𝑅 (a) Tactile Finger 𝑈𝑈𝑈𝑈𝑈𝑈𝑈𝑈 (b) Sampling Tactile Points 𝑘𝑘𝑛𝑛 𝑘𝑘𝑑𝑑 (c) Spring-Damper System Figure 9: Tactile-Simulation Pipeline. (a) Finger model in the simulator with a dedicated elastomer sensing pad. (b) Uniform grid of taxels sampled on the pad surface and their contact interaction with an external object. (c) Penalty-based spring–damper model applied at every taxel to convert penetration into normal-force and depth signals. B.1 Sampling Tactile Points As shown in Fig. 9, each robot finger carries a sensor pad defined as a separate mesh in the URDF. Let M denote the triangular mesh of this pad and ˆn its outward surface normal (toward the “finger tip” link). • M be the triangle mesh of the sensor pad link, • ˆn the local surface normal that points towards the internal compliant layer (the “tip” link in the URDF). 14 Given a desired resolution R × C (the real robot uses 12 × 32), we generate taxel positions in three consecutive steps. (i) Detect the flat face: the shortest bounding-box axis of M corresponds to pad thickness; the two remaining axes span the contact surface. (ii) Create a planar lattice: A rectilinear grid is laid over this face, leaving a 1 mm margin to avoid edge artifacts; the grid spacing is equal to dtaxel. (iii) Ray-cast for ground-truth locations: rays are shot from every lattice node along −ˆn until they intersect M. The resulting 3-D points populate tactile pos local∈RN×3 (N = R × C) and are all assigned the fixed orientation qtaxel = Euler(0, 0, −π) so that the +y axis always points outward in world space. This fully automatic procedure yields a dense, uniform taxel layout and requires no manual annotation. B.2 Tactile Signal Computation At every physics step we convert geometric contacts into a dense two-channel image that the policy ingests directly: Chan. Symbol Quantity (tactile frame) 0 d Penetration depth (m) 1 fn Normal force (N) (i) Taxel pose in world coordinates. For every taxel i xw i = Re xl i + pe, ˙xw i = ωe×(Rexl i) + ve, where (Re, pe, ωe, ve) are the pose and twist of the sensor pad link returned by PhysX. (ii) SDF query. A single GPU kernel returns the signed distance di, outward normal ˆni, and normal relative velocity ˙di = ˆni· ˙xw i for every taxel. (iii) Penalty contact model. The normal contact force is fn,i = −(kn di + kd ˙di), with constants kn = 1.0 and kd = 3 × 10−3. Shear forces are not used in our cases. (iv) Packing and normalisation. Negative depths (di < 0, meaning no contact) are clamped to zero. The pair (di, fn,i) is linearly rescaled to [0, 1] and reshaped into an R × C × 4 tensor that is streamed directly to the policy network. The loop is fully vectorised over all environments (N envs) and fully GPU-parallelized. C Real-to-Sim-to-Real C.1 Tactile Calibration To match simulated tactile readings to the real hardware we adjust only the normal stiffness kn and the damping term kd of the penalty model. Calibration follows three steps: (i) a single taxel on the finger pad is chosen in both domains; (ii) a sequence of forces is applied to the real pad, yielding a ground-truth force–response curve; and (iii) the same normal loads are replayed in simulation while kn and kd are iteratively tuned until the mean-squared error between the two curves is minimised. Then apply this parameter to all sensor units. Real pads exhibit a small, load-independent noise floor. We reproduce this behaviour by a two-stage normalisation snorm = ( s/sfixed max, s < τ, s/scurr max, s ≥τ, 15 where s is the raw taxel reading, τ is a noise threshold, sfixed max is a constant from the sensor data-sheet, and scurr max is the frame-wise maximum over the pad. The identical rule is applied to simulated readings so that both domains share the same dynamic range. As illustrated in Fig. 4 of main text, the histograms of normalised signals overlap almost perfectly after calibration. The benefit is further demonstrated in Fig. 10, which shows fine-tuning performance with and without calibration using the same Pre-Trained policy (trained by real data). The calibrated run improves steady up, whereas the uncalibrated run initially degrades as the policy adapts to the shifted observation distribution. Although both eventually succeed in simulation, the uncalibrated policy exhibits a larger sim-to-real gap when deployed on the robot, underscoring the importance of distribution matching. w/ Calibration w/o Calibration 0 100 200 300 0.2 0.4 0.6 0.8 1.0 Training Curves Success Rate Figure 10: Effect of Tactile Calibration on RL Fine-Tuning. Training curves with and without sensor-simulation calibration. Without calibration the success rate initially falls as the policy re-adapts to the shifted tactile distribution, whereas the calibrated variant improves monotonically from the first iteration. C.2 Vision for Sim–Real Alignment To minimise the domain gap between simulated and real observations we normalise the depth stream from a single egocentric RealSense camera through three successive operations. (i) Point-cloud generation: raw depth values are back-projected to camera space and transformed into the world frame whose origin coincides with the table centre. (ii) Workspace cropping: the cloud is clipped to a manually specified axis-aligned bounding box that covers the robot’s reachable workspace and excludes background clutter. (iii) Uniform down-sampling: to accelerate both data loading and on-line RL roll-outs we subsample the cloud to a fixed budget of points using uniform lin-space indexing. Although farthest-point sampling (FPS) is common in diffusion-policy pipelines, uniform sampling is ∼10× faster in our setting and was found to have no measurable impact on task performance. (iv) Noise injection (simulation only): To emulate RealSense depth jitter, each simulated point xw is perturbed by a multiplicative factor, xw ←xw 1 + N(0, 0.01 σ)  , where σ is a user-controlled noise level. We use noise level 3 for our pipeline. This step is omitted for real-camera data. D DPPO Implementations and Parameters D.1 Pre-Train Implementations and Parameters Diffusion model. We employ the epsilon-prediction variant of Diffusion Policy [62] with T =100 denoising steps and a rollout horizon of H =16 actions. Conditioning information is injected during 16 the first C =2 steps for proprioception and during the first Cimg =2 steps for point clouds, following the two-stream scheme in [62]. Backbone. (i) Input structure. At each conditioning step t the policy receives merged visuo-tactile point cloud: • Visual cloud P v t ∈R3×Nv, whose fourth channel is initialised to zeros. • Tactile cloud P τ t ∈R4×Nτ containing XYZ positions of taxels in the world frame and their normalised pressure readings as a fourth channel. To allow the network to distinguish modalities we append a one-hot flag, yielding a 5-channel tensor Pt =  (P v t ; 0), (P τ t ; 1)  ∈R5×(Nv+Nτ ), which is transposed to the (N, 5) format expected by PointNet. (ii) Per-step backbone. Each Pt is processed by PointNetEncoderXYZ Tactile, an MLP with hidden sizes {64, 128, 256, 512}. Optionally, layer normalisation is inserted after every linear layer; we use the variant with LayerNorm and final projection to a 64-d feature. A global maxn pooling over points produces the per-step vector ft ∈R64. (iii) State feature. If proprioceptive state is available the 16-d joint states at each step is mapped through a two-layer MLP ({64, 64}) and concatenated with f. The joint proprioception can be obtained both in sim and real. D.2 Fine-Tune Implementations and Parameters Actor network. The backbone is identical to pre-training (PointNet 64 →U-Net {512, 1024, 2048}, kernel 5, group norm 8), but only the last Tft diffusion steps are optimised. A small K-L penalty (λ = 2×10−4) on the predicted noise keeps the decoder close to the pre-trained manifold. Critic. A state-value network receives the proprioceptive history (concatenated over the C=2 conditioning steps, dimensionality 30 × 2) and passes it through an MLP (512–512–512, Mish activations, residual connections). PPO hyper-parameters. We collect one segment of nsteps = ⌊240/8⌋= 30 decisions from every environment before an update. Discount and GAE factors are γ = 0.999 and λ = 0.95. Ten PPO epochs are run per batch with a target K-L of 1.0. Learning rates are 10−5 (actor) and 10−3 (critic) with cosine decay to 10−6 and 10−3, respectively. In the paper DPPO [63], they use learning rate 5 × 10−5, but here we use 10−5 to ensure a more stable training as the task is even more fine-grained. Table 4 shows the important hyperparamter we truned for our fine-grained manipulation task. Symbol Value Config Key γdenoise 0.99 gamma denoising λKL 2 × 10−4 clip ploss coef — base value 2 × 10−4 clip ploss coef base — annealing rate 3 clip ploss coef rate σrand 3.0 randn clip value ˆσmin 0.01 min sampling denoising std ˜σmin 0.10 min logprob denoising std Table 4: DPPO–specific hyper-parameters used during fine-tuning. 17
VT-Refine: Learning Bimanual Assembly with Visuo-Tactile Feedback via Simulation Fine-Tuning Binghao Huang1 Jie Xu2 Iretiayo Akinola2 Wei Yang2 Balakumar Sundaralingam2 Rowland O'Flaherty2 Dieter Fox2 Xiaolong Wang2,3 Arsalan Mousavian2 Yu-Wei Chao2† Yunzhu Li1† 1Columbia University 2NVIDIA 3 †Equal advising Abstract: Humans excel at bimanual assembly tasks by adapting to rich tactile feedback-a capability that remains difficult to replicate in robots through behavioral cloning alone, due to the suboptimality and limited diversity of human demonstrations. In this work, we present VT-Refine, a visuo-tactile policy learning framework that combines real-world demonstrations, high-fidelity tactile simulation, and reinforcement learning to tackle precise, contact-rich bimanual assembly. We begin by training a diffusion policy on a small set of demonstrations using synchronized visual and tactile inputs. This policy is then transferred to a simulated digital twin equipped with simulated tactile sensors and further refined via largescale reinforcement learning to enhance robustness and generalization. To enable accurate sim-to-real transfer, we leverage high-resolution piezoresistive tactile sensors that provide normal force signals and can be realistically modeled in parallel using GPU-accelerated simulation. Experimental results show that VT-Refine improves assembly performance in both simulation and the real world by increasing data diversity and enabling more effective policy fine-tuning. Our project page is available at https://binghao-huang.github.io/vt_refine/. Keywords: Tactile Simulation, Bimanual Manipulation, RL Fine-Tuning Mechanical Connector Real World GPU-Parallelized Tactile Simulation Simulation Large-Scale RL Fine-Tuning Human Demos Touch Vision Applicable to Diverse Assembly Tasks Diffusion Policy Pre-Train Nut-and-Bolt Sim-to-Real Tactile Signals Figure 1: We propose VT-Refine, a novel visuo-tactile policy learning framework for precise, contact-rich bimanual assembly tasks. Top Left: We collect real-world demonstrations and pre-train a diffusion policy using visuo-tactile inputs. Right to Bottom Left: We leverage tactile simulation and large-scale reinforcement learning to fine-tune the policy, and subsequently transfer the fine-tuned policy back to the real world. The resulting policy demonstrates strong performance in both simulated and real environments. 9th Conference on Robot Learning (CoRL 2025), Seoul, Korea. 18 Oct 2025 1 Introduction Solving precise assembly tasks with both hands requires sophisticated orchestration of vision and tactile sensing. Consider a bimanual plug-and-socket assembly: humans first rely on vision to locate the parts and coordinate the grasping and pickup of each part with both hands. Once the parts are held and positioned for insertion, tactile feedback becomes essential. This is because contact cues can be visually occluded during insertion (Fig. 1), and hence vision alone lacks the precision needed for fine-grained, contact-rich interactions. Behavioral cloning with diffusion policies [1, 2] has recently shown promise in learning bimanual visuo-tactile policies from a limited number of human teleoperated demonstrations [3, 4]. However, scaling these methods for high-precision assembly tasks in real-world settings poses two major challenges. First, collecting real-world demonstrations is costly, and the demand for data only grows with increasing task precision and contact complexity, making large-scale collection prohibitively expensive. Second, current demonstration interfaces often lack tactile feedback, hindering the capture of how humans use touch for fine manipulation. Consequently, the collected demonstrations typically omit exploratory behaviors-such as iterative adjustments-that are critical for contact-rich tasks, resulting in suboptimal training data. Alternatively, simulation offers a promising path to scale visuotactile policy learning, but existing efforts primarily focus on visual modalities or tasks with limited reliance on touch [5-7]. While some recent work has explored simulation-based data collection for tactile inputs, these efforts are typically restricted to simpler tasks or setups (e.g., unimanual) [8-13], or have not yet addressed large-scale training or robust sim-to-real transfer for tactile-critical bimanual tasks [14]. To address these challenges, we introduce a novel real-to-sim-to-real framework designed for precise bimanual assembly. Our approach begins by collecting a small amount of real-world demonstrations (e.g., 30 episodes) to pre-train a bimanual visuo-tactile diffusion policy. The policy is subsequently fine-tuned using reinforcement learning (RL) on a digital twin of the scene within a parallelized simulation environment. Finally, the fine-tuned policy is transferred from simulation back to the real world. Our framework offers three key contributions: (1) We enhance visuo-tactile diffusion policies through RL-based fine-tuning in simulation, enabling policy improvement by exploring state-action regions near those seen in the initial human demonstrations. (2) We develop a GPU-parallelized tactile simulation module within a GPU-based physics simulator to accurately simulate piezoresistive tactile sensors that reliably capture normal force signals. This selection for tactile modality and simulation significantly narrows the sim-to-real gap and overcomes critical challenges in tactile modality transfer. (3) We adopt point-based representations for visual and tactile modalities, facilitating a seamless real-to-sim-to-real transfer. The unified representation preserves the spatial relationships between visual and tactile points, enhancing policy effectiveness. To the best of our knowledge, our work is the first to show successful RL with large-scale simulation and sim-to-real transfer for bimanual visuo-tactile policies. We comprehensively evaluate our system on five challenging bimanual assembly tasks, demonstrating successful real-world execution and performance gains from simulation-based fine-tuning. A detailed analysis of each training phase shows that high-resolution tactile feedback significantly boosts policy effectiveness during both pre-training and fine-tuning. Additionally, our visuo-tactile point-based representation enables robust bidirectional transfer between real and simulated environments, playing a critical role in the success of our two-stage training framework across tasks and domains. 2 Related Work Tactile Sensors and Simulation. Tactile information is critical in human daily life and plays an equally important role in enabling robots to interact with their environments [15]. Recognizing its importance, researchers have integrated vision and tactile sensing to enhance robotic manipulation [3, 4, 8, 13, 14, 16-25]. Most existing work focuses on optical-based tactile sensors, which can capture normal and shear forces, as well as fine-grained surface textures [10, 12, 26-30]. However, the high-resolution images produced by these sensors are difficult to simulate accurately and cause larger sim-real gap. Some approaches [31-33] attempt to sample marker positions from optical tactile 2 (b) Tactile Simulation (a) Tactile Hardware (v) Object Flexible Tactile Sensor Adaptive Fin-Shaped Gripper Spring-Damper System (vi) kknn kkdd Sampled Tactile Points on Contacting Surface (iv) Top FPC (0.2 mm) Bottom FPC (0.2 mm) Force-Sensitive Film (0.1 mm) (iii) (i) (ii) 12 by 32 Sensor Matrix (2mm Grid) Figure 2: Tactile Sensing in Real and Simulation. (a) Our real-world hardware setup, including the design of the piezoresistive tactile sensor. Four tactile sensor pads (two per hand) are mounted on the soft gripper to capture contact forces. (b) Replication of the tactile sensing process in simulation. A spring-damper model is used to simulate the interaction between the tactile points and objects to generate realistic tactile signals. images and infer normal and shear forces from marker deviations, but this indirect method further complicates sim-to-real transfer. In contrast, we select a tactile sensing modality that emphasizes structural contact patterns with normal force only rather than fine textures. Such signals are not only easier to simulate accurately but also more amenable to transfer between real and simulated environments, enabling scalable visuo-tactile data generation through simulation. Bimanual Visuo-Tactile Manipulation. Bimanual robotic manipulation presents significant challenges across a range of applications [7, 17, 34-38], particularly for assembly tasks. Recently, there has been growing interest in learning-based methods, such as imitation learning [39-43], which leverage multimodal human demonstrations for fine-grained manipulation. However, achieving higher precision tasks vastly increases the amount of training data required in bimanual settings. To address this, simulation has been used to generate additional data and enhance policy robustness. Villasevil et al. [5] explored the use of reinforcement learning to fine-tune policies initialized by imitation learning. Nonetheless, most bimanual manipulation frameworks are still restricted to using the visual input alone [44-46], particularly for real-to-sim-to-real pipelines. This is largely because tactile signals, especially those from optical tactile sensors, are difficult to simulate and transfer [47], limiting their potential of being incorporated into simulation-based training. In contrast, our framework, along with the selection of a transfer-friendly tactile modality, enables effective real-to-sim-to-real learning with both vision and tactile inputs. 3 Visuo-Tactile System and Tactile Simulatiom Tactile Sensor Hardware. Our sensor FlexiTac uses resistive sensing matrices, inspired by 3DViTac [3], to efficiently convert mechanical pressure into electrical signals. The choice of matrix-based flexible sensors is motivated by two key factors: (1) Compatibility: these sensors are versatile and can be mounted on a wide range of robotic end-effectors, including both rigid and compliant fingers. (2) Sim-to-real transferability: the sensing modality can be simulated with high fidelity in our environment, enabling consistent behavior across real-to-sim-to-real transfers. As depicted in Fig. 2, each robotic finger is equipped with a tactile sensor pad composed of 12×32 sensing units, with a spatial resolution of 2mm (i.e., 2mm center-to-center distance between adjacent sensors). We use a triple-layer structure similar to [3], with a piezoresistive sensing layer sandwiched between two flexible printed circuit (FPC) layers (Fig. 2 (iii)). Utilizing FPC significantly enhances spatial consistency and increases the resolution of each sensor pad. Additionally, we developed a streamlined fabrication process capable of producing a single sensor in under 5 minutes, enabling 3 + PointNet Diffusion Policy Action Chunk Simulated Visuo-Tactile Points Actor Critic Action Chunk Robot Proprioception Robot Proprioception Low-Dim Env State Zero-Shot Transfer PointNet + Value Stage 1: Real World Pre-Training Stage 2: Simulation Fine-Tuning Diffusion Policy MLP Real Raw Point Cloud Simulated Raw Point Cloud Real Visuo-Tactile Points Noise Diffusion Policy Policy Optimization Figure 3: Two-Stage Visuo-Tactile Policy Training. Stage 1: We collect real-world human demonstrations with visual and tactile modalities and pre-train a diffusion policy. Stage 2: We simulate the same sensory modalities in simulation and fine-tune the pre-trained diffusion policy using policy-gradient-based RL. cost-effective and scalable deployment. We are committed to releasing comprehensive tutorials detailing the hardware design and fabrication process. Tactile Simulation. To simulate the tactile sensory input, we build on TacSL [12], a GPU-based tactile simulation library integrated with Isaac Gym [48]. We chose TacSL since the sensory signals acquired from our sensor pads are closely akin to TacSL's simulated tactile normal forces. To model the soft-contact interactions between our deformable tactile sensors (mounted on the soft grippers) and rigid contacting objects, we follow TacSL and employ a penetration-based tactile force model [49]. As shown in Fig. 2 (vi), the interaction between each tactile point (i.e., the 3D position of a sensing unit) and the rigid object is modeled using a Kelvin-Voigt model, consisting of a linear spring and a viscous damper connected in parallel. Each sampled tactile point independently computes the contact normal force fn as: fn = -(knd + kd ̇d)n, where d and ̇d represent the interpenetration depth and the relative velocity along the contact normal, respectively, and n denotes the outward contact normal vector. At each simulation timestep, the signed distance field (SDF) of the contacting object is queried to compute d, and the positions of tactile points are updated in real time via forward kinematics. The known resolution of our tactile sensor allows us to uniformly distribute contact points across the sensor surface. The shape and spatial resolution of the simulated sensor are fully customizable, ensuring consistency with their real-world counterparts. Further implementation details and force computation steps are provided in the Appendix. Real-to-Sim-to-Real for Tactile. Alternative tactile sensors, such as vision-based ones like GelSight [16], rely heavily on internal illumination, making them difficult to simulate accurately and prone to sim-to-real gaps. In contrast, our approach uses normal force signals, which are easier to calibrate in simulation, as demonstrated in our experiments. Another challenge lies in simulating the deformable nature of flexible tactile sensors. While high-fidelity techniques like finite element methods (FEM) can model this softness, they are computationally expensive and impractical for large-scale reinforcement learning. By leveraging TacSL's GPU-accelerated simulation, we efficiently approximate the softness of flexible tactile sensors, enabling scalable training. As a result, our sensor design improves robustness and supports effective zero-shot sim-to-real transfer. 4 Visuo-Tactile Policy Policy Optimizatiom Our goal is to learn a generalizable and robust control policy, denoted as π : O →A that maps multimodal observation o ∈O to robot actions a ∈A with a few real-world demonstrations. As shown in Fig. 3, our method consists of two stages: (1) real-world pre-training and (2) simulation fine-tuning. 4 In the first stage, we pre-train a diffusion policy [1] using behavioral cloning on a small amount of human demonstrations. This pre-trained policy is expected to succeed on the task sporadically for a restricted range of object initial positions in both real-world and simulated environments. In the second stage, we initialize the policy (actor) model from the pre-trained weights and further optimize it with policy-gradient-based RL [50] in simulation. Finally, this fine-tuned policy is transferred back to the real world for evaluation. Visuo-Tactile Representation. The choice of observation o is crucial for bridging the simulation and real world. We adopt a point cloud-based representation for its robust sim-to-real transferability [3, 10, 51]. Our observation contains three modalities: (1) visual: a colorless point cloud captured by an ego-centric camera, denoted as P visual t ∈RNvis×4, (2) tactile: a point cloud derived from the tactile sensors representing the 3D positions of the sensing units and their continuous sensory readings, denoted as P tactile t ∈RNtac×4. We set Ntac = 384 × Nfinger for the tactile point cloud since each sensor pad consists of 12 × 32 = 384 tactile points, and (3) proprioception: joint positions from the two arms and two grippers. As shown in Fig. 3, we merge the visual and tactile point clouds into a unified visuo-tactile representation: o = P tactile t ∪P visual t . The tactile sensor's position is computed via forward kinematics and transformed into the camera's 3D coordinate frame, preserving the spatial relationships between the two modalities. Following [2], the merged point cloud is processed by a PointNet encoder [52], and its output is concatenated with proprioceptive features encoded by a multilayer perceptron (MLP). The resulting feature vector is used as the conditioning input for the denoising diffusion network. Stage 1: Real-World Pre-training. We begin by collecting a small real-world demonstration dataset (e.g., 30 episodes in our experiments) to pre-train a diffusion policy. At the beginning of each trial, the assembly parts are randomly placed within a designated region on the table. A human operator then teleoperates both robot arms to pick up the parts and complete the assembly task. During each demonstration, we record visual and tactile inputs, robot joint states, and action commands. To train the diffusion policy, we adopt a denoising diffusion probabilistic model (DDPM) and follow standard practice by predicting an action chunk [1] (Fig. 3, top). Given the limited number of demonstrations, the trained model may not succeed consistently, but is expected to occasionally complete the task, providing reward signals for reinforcement learning to further improve the policy during fine-tuning. Stage 2: Simulation Fine-tuning. We fine-tune the pre-trained diffusion policy in an end-to-end manner using Diffusion Policy Policy Optimization (DPPO) [6] (Fig. 3, bottom). DPPO optimizes a diffusion policy using Proximal Policy Optimization (PPO) [50], by formalizing the denoising process as a Markov Decision Process (MDP), which allows the reward signal to propagate effectively through the denoising chain. For scalable training, we assume access to a digital twin of the scene equipped with simulated vision and tactile sensors. The pre-trained diffusion policy initializes the actor network, while the critic network is initialized randomly. We adopt an asymmetric actor-critic strategy [53], where the critic receives a low-dimensional representation of the robot and object state. Reward Function. In line with the observations in DPPO [6], we find that pre-training on human demonstrations provides a strong prior that guides RL exploration, allowing us to avoid complex reward engineering. We therefore fine-tune using a sparse reward: the agent receives a reward of 1 when the parts are successfully assembled, and 0 otherwise [54]. 5 Experimental Results In this section, we address the following three questions through experiments: (1) How does our fine-tuned policy improve over the baseline diffusion policy? (2) How effectively does the proposed visuo-tactile representation transfer across domains (real-to-sim-to-real)? (3) How does policy performance scale with the number of human demonstrations? 5.1 Tactile Simulation Calibration To align the simulated tactile response with that of the real sensor, we first characterize the sensor's force-reading curve using a DMA 850 Dynamic Mechanical Analyzer. We then fit a Kelvin-Voigt viscoelastic model by iteratively tuning the elastic modulus kn (compliance stiffness) and viscosity 5 Real Sensor Reading Sim Sensor Reading Frequency 0 100 200 300 400 500 Sensing Reading 0.0 0.2 0.4 0.6 0.8 1.0 Histogram of Sensing Reading in Sim and Real Figure 4: Sensor Calibration Results. The histogram shows consistent sensor reading distributions between the simulation and real world. (a) Table-Top Bimanual Setup (b) Semi-Humanoid Bimanual Setup Real Asset Ego-Centric Camera Ego-Centric Camera Tactile Sensors Tactile Sensors Figure 5: Real Robot Setups. Both the socket and plug are randomly placed in a 3cm × 3cm area. Both robot setups have four tactile sensing pads and an ego-centric camera. coefficient kd (damping) until the simulated curve closely matches the measured response. To validate the calibration, we grasp objects from multiple poses in the real world and record the corresponding tactile signals. We then replay the same trajectories in simulation to collect synthetic tactile data. A histogram comparison of the two datasets shows that the calibrated simulator accurately reproduces the distribution of real tactile signals (Fig. 4). We provide detailed calibration procedures in the supplementary materials. 5.2 Experiment Setup We evaluate our multimodal real-to-sim-to-real system on challenging bimanual assembly tasks. Since our tactile sensors and simulation pipeline can be easily transferred across different robot platforms, we evaluate our method on two setups: (1) a tabletop bimanual robot arm setup, and (2) a semi-humanoid robot. For the tabletop bimanual setup (Fig.5 (a)), we adopt the teleoperation setup proposed in ALOHA 2 [55], using two 6-DoF WidowX robotic arms for manipulation, each equipped with a fin-shaped parallel soft gripper. A separate pair of identical arms is used for teleoperation. An Intel RealSense D455 camera is mounted on the table for egocentric visual sensing, and a tactile sensing pad is installed on each of the four soft fingers. For the semi-humanoid setup (Fig. 5 (b)), we use two 7-DoF Kinova Gen3 arms, each paired with a Robotiq 2F-140 gripper. The arms are mounted to a static torso structure. An Intel RealSense D455 camera is mounted at the head for visual sensing, and a tactile sensing pad is attached to each of the four gripper fingers. For teleoperation, we use the Meta Quest 2, with tracked controller poses mapped to target poses for the robot end effectors. Online trajectory generation is performed using the GPU-accelerated model predictive control framework provided by cuRobo [56]. Tasks are selected from the AutoMate dataset [57]. Each task involves a plug-socket pair: the robot must grasp both objects and complete an in-air insertion. Figure 5 shows the objects and robot configurations used in our experiments. For each task, we collect 30 demonstrations to pre-train a diffusion policy, which is then used to initialize the policy for fine-tuning (as described in Sec. 3). To reflect the variability introduced by bimanual insertion, we randomize the initial pose of each object within a 3cm range during both data collection and fine-tuning. The same visuo-tactile representation and encoder architecture are used throughout pre-training and fine-tuning. 5.3 Quantitative Analysis Fine-tuning improves precise manipulation. Our RL fine-tuned policy significantly boosts performance on high-precision assembly tasks. (i) Fine-tuning introduces necessary exploration. Diffusion Policy performs well on lower-precision tasks, but behavior cloning alone lacks the small, repeated adjustments needed for tight-fit insertions. Encoding these subtle exploratory behaviors via demonstrations would require prohibitively large datasets. In contrast, RL fine-tuning introduces such behaviors efficiently by leveraging simulated rollouts. In real-world experiments (Tab. 1), our finetuned policy improves success rates by approximately 20% for the vision-only variant and 40% for 6 Visuo-Tactile Vision-Only 0.0 0.2 0.4 0.6 0.8 1.0 Success Rate Visuo-Tactile Vision-Only 0.0 0.2 0.4 0.6 0.8 1.0 Success Rate Visuo-Tactile Vision-Only 0 100 200 300 0.0 0.2 0.4 0.6 0.8 1.0 Fine-Tuning Curves (Asset 00081) Success Rate 400 Visuo-Tactile Vision-Only 0 100 200 300 400 0 100 200 300 400 Fine-Tuning Curves (Asset 00186) Fine-Tuning Curves (Asset 00007) Epochs Epochs Epochs Figure 6: Simulation Fine-Tuning of Pre-Trained Policies. We compare the fine-tuning performance of visuo-tactile (blue) and vision-only (orange) policies. The visuo-tactile policy starts with not only a higher pre-trained performance but also continues to improve, achieving higher final performance after fine-tuning. Real Sim Sim Real 0.0 0.2 0.4 0.6 0.8 1.0 Success Rate Pre-Train Fine-Tuned Visuo-Tactile Vision-Only How Policy Evolves? (Asset 00186) Settings Figure 7: Performance of Pre-Trained Policy in sim and real, Fine-Tuned Policy in sim and real. (i) Initial States (ii) Grasping (iii) Two Arms Continuously Coordinate with Each Other (iv) Insertion Bad Pose for Pre-Insertion (a) Successful Assembly with Our RL Fine-Tuned Policy Fail to Adjust Object Pose Bad Insertion Direction (b) Typical Failure Cases of Baseline Policy t t t Figure 8: Policy Rollout. We evaluate our fine-tuned visuo-tactile policy on five plug-and-socket pairs with a clearance of roughly ≈2mm. Part(a) shows the two arms keep re-orienting or moving the parts until they slide together smoothly, as indicated by the evolving tactile maps on the screen. Part(b) the robot either stops with a misaligned pose or pushes at a bad angle, leading to jams and incomplete insertions. the visuo-tactile variant. Grasping often induces slight object slip, yielding an uncertain pre-insert pose that vision alone seldom detects due to occlusion. Fig. 8 (a) shows a representative success trajectory: following an imprecise pre-insertion pose, the two arms engage in rapid cycles of sensing, micro-adjusting, and re-sensing. These back-and-forth "wiggle-and-dock" maneuvers-commonly used by humans-emerged organically during RL fine-tuning, despite not being explicitly captured in demonstrations. This is because tactile feedback clearly signals when alignment improves, as indicated through the change of the contact forces. In contrast, policies without tactile input or sufficient exploration tend to stall or attempt insertion at poor angles, leading to failure or physical damage (Fig. 8 (b)). (ii) Tactile feedback enhances policy fine-tuning. Visual input alone often fails to capture the fine contact cues needed to align parts. By incorporating tactile data, the visuo-tactile policy gains access to these subtle interactions, enabling it to start from a stronger baseline after pre-training and achieve higher precision after fine-tuning. As shown in our simulation experiments (Tab. 2), both the vision-only and visuo-tactile policies benefit from fine-tuning. However, the visuo-tactile policy not only begins at a higher performance level but also converges to greater precision. A common failure mode for the vision-only baseline is stalling with the plug hovering just above the socket, unable to close the final 2mm gap. In contrast, the visuo-tactile policy continues adjusting until successful insertion is achieved. 7 Table-Top Bimanual Setup Settings Visual Policy (Real) Visuo-Tactile Policy (Real) 00081 00186 00007 00446 00581 00081 00186 00007 00446 00581 Pre-Train 0.35 0.40 0.40 0.20 0.35 0.55 0.55 0.65 0.40 0.35 RL Fine-Tuning 0.50 0.65 0.75 0.30 0.45 0.85 0.90 0.95 0.80 0.75 Semi-Humanoid Robot Setup Settings Visual Policy (Real) Visuo-Tactile Policy (Real) 00081 00186 00007 00081 00186 00007 Pre-Train 0.15 0.25 0.35 0.30 0.30 0.35 RL Fine-Tuning 0.35 0.30 0.45 0.60 0.65 0.65 Table 1: Real-World Experiments. We compare the pre-trained diffusion policy with the policy after RL fine-tuning, as well as vision-only versus visuo-tactile representations. Five object assets are evaluated across two robot setups, with each column corresponding to an AutoMate [57] asset ID. Table-Top Bimanual Setup Settings Visual Policy (Sim) Visuo-Tactile Policy (Sim) 00081 00186 00007 00446 00581 00081 00186 00007 00446 00581 Pre-Train 0.28 0.32 0.42 0.12 0.18 0.45 0.48 0.54 0.34 0.31 Fine-Tune w/o Pretrain 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Fine-Tune w/ Pretrain 0.57 0.72 0.84 0.36 0.52 0.82 0.94 0.98 0.76 0.78 Table 2: Simulation Results. We compare three variants in simulation: the Pre-Train Policy (trained only on real-world demonstrations), Fine-Tune w/o Pre-Train, and Fine-Tune w/ Pre-Train. The results indicate that both pre-training and fine-tuning contribute significantly to final performance. Num of Pretrain Data Visual Policy (Sim) Visuo-Tactile Policy (Sim) Pretrain RL Fine-Tune Pretrain RL Fine-Tune 10 demonstrations 0.08 0.21 0.02 0.34 30 demonstrations 0.40 0.65 0.48 0.94 50 demonstrations 0.37 0.67 0.57 0.92 Table 3: Different Amount of Pre-Training Data. We train Pre-Train Policies with different amounts of real-world data and transfer them to the simulation for fine-tuning. The results are only compared in simulation. Representation transfer across domians (real-to-sim-to-real). Transferring a policy between the real robot and simulation inevitably introduces some performance loss due to domain mismatch. These discrepancies arise from differences in point cloud inputs, tactile readings, robot controller settings, and minor joint encoder errors (which affect the placement of tactile points, as they are computed from joint states). As shown in Fig. 7, even with our low-gap tactile modality, we observe a slight performance drop: transferring from real to simulation reduces success rates by approximately 5-10%, while sim-to-real transfer causes a smaller-and sometimes negligible-drop. However, since RL fine-tuning in simulation improves success rates by over 30%, this transfer loss is acceptable and does not outweigh the overall gain. Ablation study: effect of pre-training data quantity. We trained three base policies using 10, 30, and 50 demonstrations, and applied the same RL fine-tuning procedure to each. As shown in Tab. 3, the policy trained with only 10 demonstrations performed poorly, achieving near-zero success. However, RL fine-tuning still improved its success rate to around 30%. The base policies trained on 30 and 50 demonstrations achieved reasonable performance and both fine-tuned to near-perfect success rates. Increasing the dataset from 30 to 50 demonstrations led to minimal improvement in the base policy. In both cases, the policy was already capable of completing the grasp phase; the main bottleneck was the fine, real-time adjustments required during the insertion phase. These micro-motions are difficult to capture with a modest increase in demonstration data, so adding more demonstrations brought limited additional benefit. 6 Conclusion In this paper, we present a real-to-sim-to-real pipeline with multi-modal perception for precise bimanual manipulation. We introduce a tactile simulation capable of effectively modeling dense tactile sensing grids, achieving strong alignment between simulation and the real world. Finally, we demonstrate the effectiveness of RL finetuning, which substantially improves performance across diverse precise assembly tasks. 8 7 Limitations 7.1 Trade-offs with Vision-Based Tactile Sensors We compare our FlexiTac sensor with vision-based tactile sensors along three dimensions: (1) Resolution. Vision-based tactile sensors can provide high-resolution RGB tactile feedback at submillimeter (<1mm) scales, which benefits fine-grained manipulation. However, this high resolution introduces a significant sim-to-real gap. In contrast, our design, while lower in resolution (2mm per unit), reduces simulation complexity and achieves more reliable sim-and-real alignment, while still supporting compact assembly tasks. (2) Shear Force. A major advantage of vision-based tactile sensors is their ability to capture shearforce information. Although FlexiTac does not provide direct shear-force measurements, policies with temporal history can implicitly infer shear-related effects from contact dynamics. (3) System Design. Vision-based tactile sensors are typically bulky (at least as large as the camera's focal length) and thus difficult to integrate into compliant grippers or small size fingertips. Moreover, customizing such sensors [58-60] requires significant engineering effort. In contrast, FlexiTac's thin, flexible force mat is lightweight, easy to install, and readily customizable, making it well-suited for compliant gripper integration. 7.2 Methodological Limitations (1) Real-Sim Alignment. Like sim-to-real pipelines, our real-to-sim transfer pipeline requires manual calibration efforts. Visual domains, tactile distributions, and low-level control must all be aligned. (2) Scope of the Method Applicability. The current pipeline is constrained by simulator capabilities: both Isaac Gym and our tactile simulation lack support for deformable objects. Longer-horizon tasks require significantly more training time, and the absence of shear force sensing limits applicability to more complex tasks. In future work, we aim to further enhance real-to-sim fidelity. (3) Requirement for CAD models. Even though our sparse-reward formulation avoids excessive reward engineering, we still rely on object CAD models to train fine-tuned policies; in this work, we used 3D-printed replicas from an existing dataset. A CAD-free, plug-and-play pipeline would enable the extension of our approach to a much broader range of everyday objects. Acknowledgement This work is partially supported by the Toyota Research Institute (TRI), the Sony Group Corporation, Google, Dalus AI, Pickle Robot, and an Amazon Research Award. This article solely reflects the opinions and conclusions of its authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors. References [1] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. C. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. In RSS, 2023. [2] Y. Ze, G. Zhang, K. Zhang, C. Hu, M. Wang, and H. Xu. 3D diffusion policy: Generalizable visuomotor policy learning via simple 3D representations. In RSS, 2024. [3] B. Huang, Y. Wang, X. Yang, Y. Luo, and Y. Li. 3D-ViTac: Learning fine-grained manipulation with visuo-tactile sensing. In CoRL, 2024. [4] T. Lin, Y. Zhang, Q. Li, H. Qi, B. Yi, S. Levine, and J. Malik. Learning visuotactile skills with two multifingered hands. In ICRA, 2025. 9 [5] M. T. Villasevil, A. Simeonov, Z. Li, A. Chan, T. Chen, A. Gupta, and P. Agrawal. Reconciling reality through simulation: A real-to-sim-to-real approach for robust manipulation. In RSS, 2024. [6] A. Z. Ren, J. Lidard, L. L. Ankile, A. Simeonov, P. Agrawal, A. Majumdar, B. Burchfiel, H. Dai, and M. Simchowitz. Diffusion policy policy optimization. In ICLR, 2025. [7] L. Ankile, A. Simeonov, I. Shenfeld, M. Torne, and P. Agrawal. From imitation to refinement - residual RL for precise assembly. In ICRA, 2025. [8] Z.-H. Yin, B. Huang, Y. Qin, Q. Chen, and X. Wang. Rotating without seeing: Towards in-hand dexterity through touch. In RSS, 2023. [9] H. Qi, B. Yi, S. Suresh, M. Lambeta, Y. Ma, R. Calandra, and J. Malik. General in-hand object rotation with vision and touch. In CoRL, 2023. [10] Y. Yuan, H. Che, Y. Qin, B. Huang, Z.-H. Yin, K.-W. Lee, Y. Wu, S.-C. Lim, and X. Wang. Robot synesthesia: In-hand manipulation with visuotactile sensing. In ICRA, 2024. [11] J. Wang, Y. Yuan, H. Che, H. Qi, Y. Ma, J. Malik, and X. Wang. Lessons from learning to spin "pens". In CoRL, 2024. [12] I. Akinola, J. Xu, J. Carius, D. Fox, and Y. Narang. TacSL: A library for visuotactile sensor simulation and learning. T-RO, 41:2645-2661, 2025. [13] J. Yin, H. Qi, J. Malik, J. Pikul, M. Yim, and T. Hellebrekers. Learning in-hand translation using tactile skin with shear and normal force sensing. In ICRA, 2025. [14] Y. Lin, A. Church, M. Yang, H. Li, J. Lloyd, D. Zhang, and N. F. Lepora. Bi-touch: Bimanual tactile manipulation with sim-to-real deep reinforcement learning. RA-L, 8(9):5472-5479, 2023. [15] R. S. Johansson and G. Westling. Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects. Experimental Brain Research, 56:550-564, 2004. URL https://api.semanticscholar.org/CorpusID: 16631166. [16] W. Yuan, S. Dong, and E. H. Adelson. GelSight: High-resolution robot tactile sensors for estimating geometry and force. Sensors, 17(12), 2017. [17] T. Lin, Z.-H. Yin, H. Qi, P. Abbeel, and J. Malik. Twisting lids off with two hands. In CoRL, 2024. [18] S. Suresh, H. Qi, T. Wu, T. Fan, L. Pineda, M. Lambeta, J. Malik, M. Kalakrishnan, R. Calandra, M. Kaess, et al. Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation. arXiv preprint , 2023. [19] M. A. Lee, Y. Zhu, K. Srinivasan, P. Shah, S. Savarese, L. Fei-Fei, A. Garg, and J. Bohg. Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks. In 2019 International Conference on Robotics and Automation (ICRA), pages 8943-8950. IEEE, 2019. [20] V. Dave, F. Lygerakis, and E. Rueckert. Multimodal visual-tactile representation learning through self-supervised contrastive pre-training. arXiv preprint , 2024. [21] Z. Ding, G. Chen, Z. Wang, and L. Sun. Adaptive visual-tactile fusion recognition for robotic operation of multi-material system. Frontiers in Neurorobotics, 17, 2023. ISSN 1662-5218. URL https://www.frontiersin.org/articles/10.3389/ fnbot.2023.1181383. 10 [22] H. Xue, J. Ren, W. Chen, G. Zhang, Y. Fang, G. Gu, H. Xu, and C. Lu. Reactive diffusion policy: Slow-fast visual-tactile policy learning for contact-rich manipulation. arXiv preprint , 2025. [23] B. Ai, S. Tian, H. Shi, Y. Wang, C. Tan, Y. Li, and J. Wu. Robopack: Learning tactile-informed dynamics models for dense packing. Robotics: Science and Systems (RSS), 2024. URL https://arxiv.org/abs/2407.01418. [24] R. Bhirangi, V. Pattabiraman, E. Erciyes, Y. Cao, T. Hellebrekers, and L. Pinto. Anyskin: Plug-and-play skin sensing for robotic touch. ICRA, 2025. [25] I. Guzey, Y. Dai, B. Evans, S. Chintala, and L. Pinto. See to touch: Learning tactile dexterity through visual incentives. arXiv preprint , 2023. [26] E. Donlon, S. Dong, M. Liu, J. Li, E. Adelson, and A. Rodriguez. Gelslim: A high-resolution, compact, robust, and calibrated tactile-sensing finger. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1927-1934. IEEE, 2018. [27] I. H. Taylor, S. Dong, and A. Rodriguez. Gelslim 3.0: High-resolution measurement of shape, force and slip in a compact tactile-sensing finger. In 2022 International Conference on Robotics and Automation (ICRA), pages 10781-10787. IEEE, 2022. [28] D. Ma, E. Donlon, S. Dong, and A. Rodriguez. Dense tactile force estimation using gelslim and inverse fem. In 2019 International Conference on Robotics and Automation (ICRA), pages 5418-5424. IEEE, 2019. [29] Z. Si, G. Zhang, Q. Ben, B. Romero, Z. Xian, C. Liu, and C. Gan. Difftactile: A physicsbased differentiable tactile simulator for contact-rich robotic manipulation. arXiv preprint , 2024. [30] A. Goncalves, N. Kuppuswamy, A. Beaulieu, A. Uttamchandani, K. M. Tsui, and A. Alspach. Punyo-1: Soft tactile-sensing upper-body robot for large object manipulation and physical human interaction. In 2022 IEEE 5th International Conference on Soft Robotics (RoboSoft), pages 844-851, 2022. [31] N. Sunil, S. Wang, Y. She, E. Adelson, and A. R. Garcia. Visuotactile affordances for cloth manipulation with local control. In Conference on Robot Learning, pages 1596-1606. PMLR, 2023. [32] Y. She, S. Wang, S. Dong, N. Sunil, A. Rodriguez, and E. Adelson. Cable manipulation with a tactile-reactive gripper. The International Journal of Robotics Research, 40(12-14):1385-1401, 2021. [33] S. Wang, Y. She, B. Romero, and E. H. Adelson. Gelsight wedge: Measuring high-resolution 3d contact geometry with a compact robot finger. In 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021. [34] B. Huang, Y. Chen, T. Wang, Y. Qin, Y. Yang, N. Atanasov, and X. Wang. Dynamic handover: Throw and catch with bimanual hands. arXiv preprint , 2023. [35] C. Wang, L. Fan, J. Sun, R. Zhang, L. Fei-Fei, D. Xu, Y. Zhu, and A. Anandkumar. Mimicplay: Long-horizon imitation learning by watching human play. arXiv preprint , 2023. [36] C. Wang, H. Shi, W. Wang, R. Zhang, L. Fei-Fei, and C. K. Liu. Dexcap: Scalable and portable mocap data collection system for dexterous manipulation. arXiv preprint , 2024. 11 [37] Y. Qin, W. Yang, B. Huang, K. Van Wyk, H. Su, X. Wang, Y.-W. Chao, and D. Fox. Anyteleop: A general vision-based dexterous robot arm-hand teleoperation system. In Robotics: Science and Systems, 2023. [38] Y. Jiang, C. Wang, R. Zhang, J. Wu, and L. Fei-Fei. Transic: Sim-to-real policy transfer by learning from online correction. In Conference on Robot Learning, 2024. [39] C. Chi, Z. Xu, C. Pan, E. Cousineau, B. Burchfiel, S. Feng, R. Tedrake, and S. Song. Universal manipulation interface: In-the-wild robot teaching without in-the-wild robots. In Proceedings of Robotics: Science and Systems (RSS), 2024. [40] Y. Wang, G. Yin, B. Huang, T. Kelestemur, J. Wang, and Y. Li. Gendp: 3d semantic fields for category-level generalizable diffusion policy. In 8th Annual Conference on Robot Learning, volume 2, 2024. [41] L. Ankile, A. Simeonov, I. Shenfeld, and P. Agrawal. Juicer: Data-efficient imitation learning for robotic assembly. arXiv, 2024. [42] K. Yu, Y. Han, Q. Wang, V. Saxena, D. Xu, and Y. Zhao. Mimictouch: Leveraging multi-modal human tactile demonstrations for contact-rich manipulation. arXiv preprint , 2023. [43] X. Zhu, B. Huang, and Y. Li. Touch in the wild: Learning fine-grained manipulation with a portable visuo-tactile gripper. arXiv preprint , 2025. [44] X. Cheng, J. Li, S. Yang, G. Yang, and X. Wang. Open-television: Teleoperation with immersive active visual feedback. arXiv preprint , 2024. [45] C. Lu, X. Cheng, J. Li, S. Yang, M. Ji, C. Yuan, G. Yang, S. Yi, and X. Wang. Mobile-television: Predictive motion priors for humanoid whole-body control. arXiv preprint , 2024. [46] R. Ding, Y. Qin, J. Zhu, C. Jia, S. Yang, R. Yang, X. Qi, and X. Wang. Bunny-visionpro: Realtime bimanual dexterous teleoperation for imitation learning. arXiv preprint , 2024. [47] E. Su, C. Jia, Y. Qin, W. Zhou, A. Macaluso, B. Huang, and X. Wang. Sim2real manipulation on unknown objects with tactile-based reinforcement learning. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 9234-9241. IEEE, 2024. [48] V. Makoviychuk, L. Wawrzyniak, Y. Guo, M. Lu, K. Storey, M. Macklin, D. Hoeller, N. Rudin, A. Allshire, A. Handa, and G. State. Isaac Gym: High performance GPU-based physics simulation for robot learning. In NeurIPS Track on Datasets and Benchmarks, 2021. [49] J. Xu, S. Kim, T. Chen, A. R. Garcia, P. Agrawal, W. Matusik, and S. Sueda. Efficient tactile simulation with differentiability for robotic manipulation. In CoRL, 2022. [50] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint , 2017. [51] Y. Qin, B. Huang, Z.-H. Yin, H. Su, and X. Wang. DexPoint: Generalizable point cloud reinforcement learning for sim-to-real dexterous manipulation. In CoRL, 2022. [52] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. [53] L. Pinto, M. Andrychowicz, P. Welinder, W. Zaremba, and P. Abbeel. Asymmetric actor critic for image-based robot learning. In RSS, 2018. 12 [54] M. Heo, Y. Lee, D. Lee, and J. J. Lim. FurnitureBench: Reproducible real-world benchmark for long-horizon complex manipulation. In RSS, 2023. [55] J. Aldaco, T. Armstrong, R. Baruch, J. Bingham, S. Chan, K. Draper, D. Dwibedi, C. Finn, P. Florence, S. Goodrich, et al. Aloha 2: An enhanced low-cost hardware for bimanual teleoperation. arXiv preprint , 2024. [56] B. Sundaralingam, S. K. S. Hari, A. Fishman, C. Garrett, K. V. Wyk, V. Blukis, A. Millane, H. Oleynikova, A. Handa, F. Ramos, N. Ratliff, and D. Fox. cuRobo: Parallelized collision-free minimum-jerk robot motion generation. arXiv preprint , 2023. [57] B. Tang, I. Akinola, J. Xu, B. Wen, A. Handa, K. V. Wyk, D. Fox, G. S. Sukhatme, F. Ramos, and Y. Narang. AutoMate: Specialist and generalist assembly policies over diverse geometries. In RSS, 2024. [58] J. Zhao, N. Kuppuswamy, S. Feng, B. Burchfiel, and E. Adelson. Polytouch: A robust multimodal tactile sensor for contact-rich manipulation using tactile-diffusion policies. arXiv preprint , 2025. [59] Y. Ma, J. A. Zhao, and E. Adelson. Gellink: A compact multi-phalanx finger with vision-based tactile sensing and proprioception. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 1107-1113. IEEE, 2024. [60] F. Liu, C. Li, Y. Qin, A. Shaw, J. Xu, P. Abbeel, and R. Chen. Vitamin: Learning contact-rich tasks through robot-free visuo-tactile manipulation interface. arXiv preprint , 2025. [61] Learning the signatures of the human grasp using a scalable tactile glove. Nature, 569:698 - 702, 2019. URL https://api.semanticscholar.org/CorpusID:169033286. [62] C. Chi, Z. Xu, S. Feng, E. Cousineau, Y. Du, B. Burchfiel, R. Tedrake, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, 2024. [63] A. Z. Ren, J. Lidard, L. L. Ankile, A. Simeonov, P. Agrawal, A. Majumdar, B. Burchfiel, H. Dai, and M. Simchowitz. Diffusion policy policy optimization. In arXiv preprint , 2024. 13 Supplementary Materials Contents A Details for Tactile Sensor Hardware 14 B Details for Tactile Simulation Implementation 14 B.1 Sampling Tactile Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 B.2 Tactile Signal Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 C Real-to-Sim-to-Real 15 C.1 Tactile Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 C.2 Vision for Sim-Real Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 D DPPO Implementations and Parameters 16 D.1 Pre-Train Implementations and Parameters . . . . . . . . . . . . . . . . . . . . . . . . 16 D.2 Fine-Tune Implementations and Parameters . . . . . . . . . . . . . . . . . . . . . . . 17 A Details for Tactile Sensor Hardware Our FlexiTac sensor builds on prior work [3, 61] with several key modifications. Instead of manually aligned conductive yarn, we employ flexible PCBs, which significantly improve durability and scalability, and we further adapt the design for seamless integration with soft-fin grippers. The sensor operates within a force range of approximately 0.2-10 N. Each sensor pad costs about 30. Compared to the earlier version [3], the new reading board now supports up to 32 × 32 sensor units at 23 Hz, doubling the resolution from the previous 16 × 16 limit. The core innovations of this version lie in the flexible PCB pad design and the openly released circuit drawings and manufacturing guide, which are available on the FlexiTac website. B Details for Tactile Simulation Implementation Sensor Pad Link MM Finger Link dd Surface Normal ොnn CC RR (a) Tactile Finger UUUUUUUU (b) Sampling Tactile Points kknn kkdd (c) Spring-Damper System Figure 9: Tactile-Simulation Pipeline. (a) Finger model in the simulator with a dedicated elastomer sensing pad. (b) Uniform grid of taxels sampled on the pad surface and their contact interaction with an external object. (c) Penalty-based spring-damper model applied at every taxel to convert penetration into normal-force and depth signals. B.1 Sampling Tactile Points As shown in Fig. 9, each robot finger carries a sensor pad defined as a separate mesh in the URDF. Let M denote the triangular mesh of this pad and ˆn its outward surface normal (toward the "finger tip" link). • M be the triangle mesh of the sensor pad link, • ˆn the local surface normal that points towards the internal compliant layer (the "tip" link in the URDF). 14 Given a desired resolution R × C (the real robot uses 12 × 32), we generate taxel positions in three consecutive steps. (i) Detect the flat face: the shortest bounding-box axis of M corresponds to pad thickness; the two remaining axes span the contact surface. (ii) Create a planar lattice: A rectilinear grid is laid over this face, leaving a 1 mm margin to avoid edge artifacts; the grid spacing is equal to dtaxel. (iii) Ray-cast for ground-truth locations: rays are shot from every lattice node along -ˆn until they intersect M. The resulting 3-D points populate tactile pos local∈RN×3 (N = R × C) and are all assigned the fixed orientation qtaxel = Euler(0, 0, -π) so that the +y axis always points outward in world space. This fully automatic procedure yields a dense, uniform taxel layout and requires no manual annotation. B.2 Tactile Signal Computation At every physics step we convert geometric contacts into a dense two-channel image that the policy ingests directly: Chan. Symbol Quantity (tactile frame) 0 d Penetration depth (m) 1 fn Normal force (N) (i) Taxel pose in world coordinates. For every taxel i xw i = Re xl i + pe, ̇xw i = ωe×(Rexl i) + ve, where (Re, pe, ωe, ve) are the pose and twist of the sensor pad link returned by PhysX. (ii) SDF query. A single GPU kernel returns the signed distance di, outward normal ˆni, and normal relative velocity ̇di = ˆni· ̇xw i for every taxel. (iii) Penalty contact model. The normal contact force is fn,i = -(kn di + kd ̇di), with constants kn = 1.0 and kd = 3 × 10-3. Shear forces are not used in our cases. (iv) Packing and normalisation. Negative depths (di < 0, meaning no contact) are clamped to zero. The pair (di, fn,i) is linearly rescaled to [0, 1] and reshaped into an R × C × 4 tensor that is streamed directly to the policy network. The loop is fully vectorised over all environments (N envs) and fully GPU-parallelized. C Real-to-Sim-to-Real C.1 Tactile Calibration To match simulated tactile readings to the real hardware we adjust only the normal stiffness kn and the damping term kd of the penalty model. Calibration follows three steps: (i) a single taxel on the finger pad is chosen in both domains; (ii) a sequence of forces is applied to the real pad, yielding a ground-truth force-response curve; and (iii) the same normal loads are replayed in simulation while kn and kd are iteratively tuned until the mean-squared error between the two curves is minimised. Then apply this parameter to all sensor units. Real pads exhibit a small, load-independent noise floor. We reproduce this behaviour by a two-stage normalisation snorm = ( s/sfixed max, s < τ, s/scurr max, s ≥τ, 15 where s is the raw taxel reading, τ is a noise threshold, sfixed max is a constant from the sensor data-sheet, and scurr max is the frame-wise maximum over the pad. The identical rule is applied to simulated readings so that both domains share the same dynamic range. As illustrated in Fig. 4 of main text, the histograms of normalised signals overlap almost perfectly after calibration. The benefit is further demonstrated in Fig. 10, which shows fine-tuning performance with and without calibration using the same Pre-Trained policy (trained by real data). The calibrated run improves steady up, whereas the uncalibrated run initially degrades as the policy adapts to the shifted observation distribution. Although both eventually succeed in simulation, the uncalibrated policy exhibits a larger sim-to-real gap when deployed on the robot, underscoring the importance of distribution matching. w/ Calibration w/o Calibration 0 100 200 300 0.2 0.4 0.6 0.8 1.0 Training Curves Success Rate Figure 10: Effect of Tactile Calibration on RL Fine-Tuning. Training curves with and without sensor-simulation calibration. Without calibration the success rate initially falls as the policy re-adapts to the shifted tactile distribution, whereas the calibrated variant improves monotonically from the first iteration. C.2 Vision for Sim-Real Alignment To minimise the domain gap between simulated and real observations we normalise the depth stream from a single egocentric RealSense camera through three successive operations. (i) Point-cloud generation: raw depth values are back-projected to camera space and transformed into the world frame whose origin coincides with the table centre. (ii) Workspace cropping: the cloud is clipped to a manually specified axis-aligned bounding box that covers the robot's reachable workspace and excludes background clutter. (iii) Uniform down-sampling: to accelerate both data loading and on-line RL roll-outs we subsample the cloud to a fixed budget of points using uniform lin-space indexing. Although farthest-point sampling (FPS) is common in diffusion-policy pipelines, uniform sampling is ∼10× faster in our setting and was found to have no measurable impact on task performance. (iv) Noise injection (simulation only): To emulate RealSense depth jitter, each simulated point xw is perturbed by a multiplicative factor, xw ←xw 1 + N(0, 0.01 σ) , where σ is a user-controlled noise level. We use noise level 3 for our pipeline. This step is omitted for real-camera data. D DPPO Implementations and Parameters D.1 Pre-Train Implementations and Parameters Diffusion model. We employ the epsilon-prediction variant of Diffusion Policy [62] with T =100 denoising steps and a rollout horizon of H =16 actions. Conditioning information is injected during 16 the first C =2 steps for proprioception and during the first Cimg =2 steps for point clouds, following the two-stream scheme in [62]. Backbone. (i) Input structure. At each conditioning step t the policy receives merged visuo-tactile point cloud: • Visual cloud P v t ∈R3×Nv, whose fourth channel is initialised to zeros. • Tactile cloud P τ t ∈R4×Nτ containing XYZ positions of taxels in the world frame and their normalised pressure readings as a fourth channel. To allow the network to distinguish modalities we append a one-hot flag, yielding a 5-channel tensor Pt = (P v t ; 0), (P τ t ; 1) ∈R5×(Nv+Nτ ), which is transposed to the (N, 5) format expected by PointNet. (ii) Per-step backbone. Each Pt is processed by PointNetEncoderXYZ Tactile, an MLP with hidden sizes {64, 128, 256, 512}. Optionally, layer normalisation is inserted after every linear layer; we use the variant with LayerNorm and final projection to a 64-d feature. A global maxn pooling over points produces the per-step vector ft ∈R64. (iii) State feature. If proprioceptive state is available the 16-d joint states at each step is mapped through a two-layer MLP ({64, 64}) and concatenated with f. The joint proprioception can be obtained both in sim and real. D.2 Fine-Tune Implementations and Parameters Actor network. The backbone is identical to pre-training (PointNet 64 →U-Net {512, 1024, 2048}, kernel 5, group norm 8), but only the last Tft diffusion steps are optimised. A small K-L penalty (λ = 2×10-4) on the predicted noise keeps the decoder close to the pre-trained manifold. Critic. A state-value network receives the proprioceptive history (concatenated over the C=2 conditioning steps, dimensionality 30 × 2) and passes it through an MLP (512-512-512, Mish activations, residual connections). PPO hyper-parameters. We collect one segment of nsteps = ⌊240/8⌋= 30 decisions from every environment before an update. Discount and GAE factors are γ = 0.999 and λ = 0.95. Ten PPO epochs are run per batch with a target K-L of 1.0. Learning rates are 10-5 (actor) and 10-3 (critic) with cosine decay to 10-6 and 10-3, respectively. In the paper DPPO [63], they use learning rate 5 × 10-5, but here we use 10-5 to ensure a more stable training as the task is even more fine-grained. Table 4 shows the important hyperparamter we truned for our fine-grained manipulation task. Symbol Value Config Key γdenoise 0.99 gamma denoising λKL 2 × 10-4 clip ploss coef - base value 2 × 10-4 clip ploss coef base - annealing rate 3 clip ploss coef rate σrand 3.0 randn clip value ˆσmin 0.01 min sampling denoising std ̃σmin 0.10 min logprob denoising std Table 4: DPPO-specific hyper-parameters used during fine-tuning. 17
2510.14932
Sarker et al. PAPER Crossmark RECEIVED XX XX XXXX REVISED XX XX XXXX Resonance Engineering via Harnessing Anti-Parallel Dipole Image Coupling Dip Sarker and Abdoulaye Ndao∗ Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, California 92093, USA E-mail: ∗a1ndao@ucsd.edu Keywords: Plasmonic, Sub-Wavelength Confinement, Anti-Parallel Dipole Image, EM Coupling, Tunable device, Bi-directional Resonance, NIR Abstract Precise control of plasmonic resonances across a broad spectral range is central to the de- velopment of tunable optical devices. Yet, achieving both redshifts and blueshifts within a single nanostructure has remained elusive. Here we introduce a metal–dielectric–metal (MDM) nanodisk array that enables bidirectional tuning of resonance wavelengths through- out the near-infrared (NIR) region. The observed spectral evolution follows the plasmon ruler relationship, with unprecedented tuning properties. In particular, we report a record blueshift response of 457.82 nm for a small nanodisk thickness variation of only 5–10 nm, the highest blueshift response demonstrated in plasmonic architectures to date. This platform offers finely tunable resonances spanning an exceptionally wide NIR range, providing new insights into electromagnetic (EM) coupling mechanisms and establishing a foundation for next-generation tunable devices in sensing, optical communications, and dynamic displays. 1 Introduction In the mid-19th century, Michael Faraday’s research marked the birth of plasmonics, an interacting phenomenon between the electric field of EM wave and the collective oscillations of free electrons in metals, by conducting a pioneering and systematic study on the optical properties of gold (Au) leaf, enabling transmission of green and reflection of yellow wavelengths of incident light [1]. Since then, it has garnered enormous attention from the scientific community due to its unique proper- ties by engineering light–matter interactions at sub-wavelength scales [2, 3, 4, 5]. By harnessing strong confinement and enhancement of optical fields, plasmonics has enabled diverse applications, including sensing [6, 7, 8], enhanced transmission [9], subwavelength imaging [10, 11], advanced light sources [12, 13], metasurfaces [14, 15, 16, 17, 18, 19, 20, 21], nonlinear optical processes [22, 23], all-optical ultrafast magnetic switching [24, 25, 26], optical modulation [27, 28], cloaking [29, 30], and integrated photonic circuitry [31, 32, 33, 34]. Beyond these demonstrations, the advent of tun- able plasmonic devices heralds a new era in optical technology, as the dynamic manipulation of light at the nanoscale is realized through control of geometry, refractive index, local environment, and carrier density. Such precise resonance tuning permits real-time modulation, delivering compact and energy-efficient solutions that augment the performance of applications ranging from sensors [35, 36] and optical communication [37, 36] to dynamic displays [38, 39]. Researchers around the world have explored and investigated numerous plasmonic-based tun- able nanostructures by modifying the geometrical and material parameters. Generally, plasmonic resonances in these nanostructures are typically known to exhibit redshift properties, where the res- onance wavelength moves toward longer wavelengths, with increasing structural dimensions, while blueshift behavior, where the resonance wavelength moves toward shorter wavelengths, is compara- tively rare, challenging, and less explored due to the fixed density of free electrons in noble metals (e.g. Au and Ag) that determine their plasma frequency [40]. However, plasmonic blueshift proper- ties can be achieved from the anti-parallel-mode excitation and strong near-field interactions, which require careful engineering of the material composition, geometry, and coupling conditions. Sev- eral experimental studies [41, 42, 43, 44, 45] showed that the blueshifting behavior of nanoparticle 1 arXiv:2510.14932v1 [physics.optics] 16 Oct 2025 Sarker et al. (NP)-based structures originated from the antibonding mode between NPs. For example, an Au/Ag core/shell NP nanostructure exhibited blueshift properties by the generation of a hybridized anti- bonding mode, when the diameter of the Ag shell increased [41]. Indeed, initial reports showed that two pairs of Au nanodisks exhibited blueshift resonance using polarization along the interparticle axis [42]. However, their designs faced challenges in uniformly distributing the NPs on the substrate and had limited spectral coverage across the visible range and low blueshift response. A subsequent study demonstrated blueshifting behavior in an Au NP–spacer–graphene nanostructure by exploit- ing anti-parallel dipole image coupling between the nanoparticle and graphene [43]. Similarly, the nanostructures had the uniformly distributed challenges of NPs on the substrate, limited spectral coverage across the visible range, and low blueshift response with the fragility issue of graphene. More recently, Nauman et al. reported a practical realization of dual-directional plasmonic shifts via anisotropic strain redistribution in elastomeric substrates, enabling simultaneous redshift and blueshift responses controlled by mechanical deformation and polarization [44]. Complementarily, Belogolovskii et al. demonstrated a CMOS-compatible approach using visible light trimming of silicon-rich nitride micro-ring resonators, achieving bidirectional refractive index changes that en- abled both redshifts of ∼49 nm and blueshifts of ∼10 nm through controllable thermal annealing mechanisms [45]. Although their approaches provide valuable experimental insights that complement our design-oriented strategy, highlighting the growing interest in multifunctional plasmonic systems capable of bi-directional spectral tuning, these designs covered only a limited portion of the visible spectrum with low blueshifting properties. Moreover, despite improvements in blueshift response compared to conventional NP-based designs, this improvement is still insufficient for broader prac- tical implementation, such as tunable devices, dynamic displays, and sensors. Additionally, whereas most earlier nanostructures focus on parallel dipole-dipole interactions and hybridized gap modes effects between particles and NPs, the image dipole induced in the bottom Au mirror of anti-parallel dipole image coupling is phase-inverted relative to the nanodisk image dipole, yielding a symmetry- anti-symmetry condition, not present in particles and NPs structures. Furthermore, the bi-directional tuning originated from this mirror-mediated interaction by varying Au nanodisk thickness, and we control the spatial overlap of these anti-parallel dipole image between the underlying Au slab and the Au nanodisk, enabling the highest blue- and red-shift response instead of the unidirectional shifts previously reported in nanoparticle coupling mechanisms. Additionally, these previous studies raise an important question: Can plasmonic nanostructures be engineered to achieve enhanced blueshift response and broader spectral tunability, particularly extending beyond the visible wavelength range to encompass the optical C-band used in telecommunications? To address this question, we numerically and theoretically investigated a periodic MDM array structure that demonstrated both blue- and redshifted resonance properties for the first time in this letter. The unique blueshifted resonance emerged by exploiting antiparallel dipole-image coupling between the metal nanodisks and the underlying metal slab within the MDM configuration. Through numerical simulations, we explored how the coupling between adjacent dipoles in the metal nanodisks influences resonance wavelength tuning, enabling precise wavelength control across a broad NIR spectral range. 2 Methodology To obtain the unique resonance property and understand the physics underlying the phenomenon, the MDM nanostructure on an aluminum oxide (Al2O3) substrate was investigated by employing the FDTD method (Ansys Lumerical). Figure 1(a) presents a schematic of the proposed periodic MDM nanostructure on the Al2O3 substrate. Au was selected for the nanodisk and bottom slab layers of the MDM plasmonic nanostructure due to its exceptional plasmonic properties, including low optical losses and high chemical stability in the NIR regime. Al2O3 served as the spacer layer between the nanodisk and bottom slab, facilitating coupling between them. The optical constants (n and k) of Au and Al2O3 were taken from Olmon et al. [46] and Palik et al. [47], respectively. Figures 1(b) and (c) present the xz- and xy-cross-section views of the unit cell of the nanostructure, respectively. Structural parameters of the nanostructure’s unit cell were carefully optimized. The periodicity (Px = Py = P) of the MDM nanostructure was set as 500 nm. Initially, the Au nanodisk thickness (td) was selected to be 5 nm, and subsequent variations of td were explored to achieve the desired resonance characteristics. To facilitate the anti-parallel dipole coupling between the Au nanodisk and the Au slab, a thickness (tAl2O3) of 15 nm was employed (see Section 1 of Supplementary 1 for details). The Au slab layer thickness (tAu) was chosen as 100 nm to ensure near-perfect reflection and negligible transmission through the nanostructure. dd and dd/2 denote the diameter and radius of the Au nanodisk, respectively. In our study, dd/2 was set to be 125 nm. To save computational space and time for our numerical study, we used periodic boundary conditions in the x- and y- 2 Sarker et al. Figure 1. (a) Schematic illustration of the metal–dielectric–metal (MDM) nanostructure for resonance engineering. Au and Al2O3 were employed as metal and spacer materials, respectively. The nanostructure consists of an Al2O3 substrate and a top cladding layer of Al2O3, enabling the nanostructure to exhibit both blue- and red-shifted resonance behaviors. Cross-sectional (b) xz- and (c) xy-views of the periodic MDM unit cell. Here, td, tAl2O3, and tAu represent the thicknesses of the Au nanodisk, Al2O3 spacer, and Au slab, respectively. The periodicity is denoted as Px = Py = P, and the diameter of the nanodisk are given by dd. directions, as our proposed nanostructure was periodic in these directions. In the z-direction, 12 steep angle perfectly matched layers (PMLs) were utilized to absorb the light completely that came out from the simulation region. An override mesh of 1, 1, and 0.4 nm was applied in the x-, y-, and z-directions around the Au nanodisk and spacer layer, respectively. To minimize interference effects, a minimum separation of λcenter/4 was maintained between adjacent objects, where λcenter denotes the central wavelength of the incident light. We performed our study under the excitation of a plane wave, spanning wavelengths from 800 to 3000 nm. A table of additional simulation informations is presented in Table S1 of Supplementary 1. To fabricate the proposed MDM nanostructure, we follow the standard cleanroom fabrication recipe, which was developed [5, 8]. A detailed discussion of the fabrication and the fabrication imperfection analysis and acceptable variation range of the structural parameters is provided in Section 3 of Supplementary 1. 3 Results and Discussions To analyze plasmonic resonance tuning, we studied our proposed MDM nanostructures by varying the geometrical properties. Specifically, tunable resonances were investigated by changing td, with and without a Al2O3 top cladding layer. Figures 2(a) and (b) show the influence of td on the resonance of the reflection spectra with and without the top cladding layer, respectively. In the cladded structure, two resonances appeared in the NIR region. In contrast, the uncladded structure exhibited one resonance in the wavelength range of 800-3000 nm due to the shift of Rayleigh anomaly (RA) towards the visible wavelength. Figure 3(a) illustrates the resonance shifts induced by variations in the Au nanodisk thickness, comparing structures with and without a top cladding layer. The longer-wavelength resonance exhibited a blueshift with increasing td up to a critical structural parameter. This critical thickness was ∼55 nm for the structure with a top cladding layer and ∼75 nm for the structure without cladding. Beyond these values, the resonance of the cladded structure redshifted, while it became saturated for the uncladded structure. To intuitively understand this blueshift and redshift behavior with the spatial electric field profiles of the observed blue- and red-shifts, we utilized the anti- parallel dipole image (analogous to an antibonding hybrid plasmonic mode) [48] and perturbation theory [49]. We investigated the spatial electric field distributions of the cladded structure’ longer- wavelength resonance mode at different td (see the circles of different colors in Fig. 3(a)). When the Au nanodisk is very thin (td < 10 nm), it supports a relatively weak dipole moment. Meanwhile, a thin Au nanodisk supports lower conduction electrons due to its smaller volume which extends its plasmon field in Al2O3. Thus, a large fraction of the electric field of the mode is located in the dielectric gap and the surrounding space rather than deep within the Au nanodisk, as shown in Fig. 3(b). In contrast, when the td of the Au nanodisk increases, the Au nanodisk obtains more conducting volume. In the meantime, the increased Au nanodisk achieves more electrons, which results in higher polarizability. Thus, a thicker Au nanodisk confines more of the plasmon’s charge 3 Sarker et al. Figure 2. Spatial distributions of the reflection of an MDM nanostructure (a) with and (b) without top cladding layer on the nanostructure. oscillation within the Au nanodisk itself. Therefore, the field penetration into the Al2O3 spacer layer and the Al2O3 cladded layer reduces, as shown in Fig. 3(c). In essence, the image dipole coupling diminishes as the Au nanodisk gets thicker (approaching a critical thickness), as illustrated in Fig. 3(d). At the critical thickness, the Au nanodisk supports all the dipole moment; thus, the field penetration into the Al2O3 spacer layer and the Al2O3 cladded layer is negligible. Therefore, the Au nanodisk behaves almost like it is decoupled from the mirror Au slab, with its plasmon more confined within the Au disk and its immediate vicinity rather than spanning the gap strongly. This behavior mimics the classic plasmon ruler exponential rule: reduced near-field coupling leads to an exponential rise in energy [42]. Additionally, near after the critical thickness of td, the MDM structure becomes from a spacer gap dominated mode (anti-parallel dipole image) to a disk dominated mode. Therefore, the spatial electric field distribution strengthens, as delineated in Fig. 3(e). As the disk becomes quite thick (td > 55 nm), it starts to act like a bulkier plasmonic resonator. Moreover, a thicker Au disk has more charge stored for a given field, as shown in Figs. 3(f) and (g), resulting in redshift. Thus, the redshift property returns. In contrast, this disk-dominated mode cannot exist in the absence of Al2O3 cladding due to the wavevector mismatch of incident light’s electric field and plasmons of the structure (see Fig. S1 of the Supplementary 1 for details). In addition to anti-parallel dipole image, perturbation theory can explain the effect of metal’s small inclusion on resonance. In perturbation theory, the volume perturbation usually tends to redshift the plasmonic mode for thicker Au disk. However, at small td, the field in the Au volume is relatively trivial, as a large fraction of the electric field of the mode is located in the dielectric gap and the surrounding space rather than deep within the Au nanodisk, as illustrated in Figs. 3(b) and (c). Therefore, the impact of volume perturbations in the thin-disk regime is initially superseded by the more pronounced influence of image-dipole coupling changes. However, the mode energy increases for large td due to the increase of the metal’s volume, which results in redshift at the resonance, as shown in Figs. 3(e)-(g). To quantitatively understand this behavior, we analytically studied the longer resonance wave- length (λres, 2) of our nanostructures as a function of td by utilizing the well-known plasmon ruler equation, as shown in Fig. 3(a). The relation between λres, 2 and td is expressed by [42], λres, 2 = λoff + ae− td τ . (1) The decay length (τ) is defined as the distance at which the coupling decays by a factor of 1/e. A larger τ indicates that the plasmon resonance remains sensitive over greater separations, while a smaller τ confines the strong coupling to shorter distances. λoff represents the resonance offset. From theoretical fitting, we obtained τs of 5.95 and 8.29 nm for the nanostructures with and without a top cladding layer, respectively. The shorter decay length in the presence of a cladding layer leads to greater blueshift response for smaller separations, resulting in enhanced blueshifting behavior (see Fig. 3(a), td range: 5–10 nm). In contrast, the larger τ for the structure without a cladding layer allows for notable blueshifting over a broader range of td (see Fig. 3(a), td range: 35–55 nm), whereas the resonance shift is minimal (blueshift response ∼18 nm) with the presence of cladding 4 Sarker et al. Figure 3. (a) Shift in the plasmon wavelength due to changes in the Au nanodisk thickness with and without a top cladding layer on the nanostructure. The solid lines are theoretical fitting curves by using the plasmon ruler equation. The R2 for these fitted curves was >0.9. The xz-view of the spatial electric field (Ez) distributions for different Au nanodisk thicknesses (nm) of (b) 5, (c) 30, (d) 48, (e) 55, (f) 75, and (g) 90 for the nanostructure with top cladding layer. The inset of (d) depicts the color bar of Ez. The dotted black boxes and rings represent the Au nanodisk and the spatial electric field distributions at different resonance of (a), respectively. layer. a denotes the strength of the near-field coupling between the Au nanodisk and the underlying Au slab. The values of a were calculated to be 656.72 and 845.05 for the cladded and uncladded nanostructures, respectively, from the theoretical fitting. Consistent with the analytical solution of plasmon ruler equation, numerical simulations demonstrated that the uncladded nanostructure exhibited stronger near-field coupling between the Au nanodisk and Au slab, as shown in Fig. 4. Figure 4. The xz-view of the spatial electric field (E) distributions for the nanostructures (a) with and (b) without top Al2O3 cladding layer, respectively. The color bar is provided in the insets. The shorter wavelength resonance was observed only for the cladded nanostructure in the wave- length range of 800-3000 nm. Meanwhile, the shorter wavelength resonance shifted towards the visible wavelength due to the RA phenomenon, specifically influenced by the inclusion of the refrac- tive index of the top cladding layer. Therefore, we need to study the RA phenomenon, explaining the origin of the shorter resonance wavelength (λres, 1). The relationship among λres, 1, incidence angle of light (θ), and P is expressed by [50, 51], λres, 1 = nP ± sinθ √ m2 + l2 . (2) Here, n, m, and l represent the refractive index of the cladding layer and the diffraction orders, respectively. The (m, l) was set to be (1, 0) for our study. Between two influencing parameters, we 5 Sarker et al. Figure 5. Spatial distributions of the transmission of an MDM nanostructure with top cladding layer for varying (a) P and (b) θ. Here, the td and dd/2 were set to be 25 and 125 nm, respectively. first examined the effect of P on the resonance of the cladded nanostructure with a fixed incident angle of θ = 0◦, as shown in Fig. 5(a). The numerically obtained λres, 1 of 878.8 nm was in good agreement with the theoretical value of 875 nm for P = 500 nm. Second, we analyzed the impact of θ on λres,,1, as depicted in Fig.5(b), where we set P = 500 nm. A comparison between the theoretical and numerical simulation is provided in Fig. S3 of Supplementary 1. To investigate the impact of θ, we employed the broadband fixed angle source technique (BFAST), in which a broadband plane wave is incident on the periodic structure at a specific angle. As θ increased, the resonance split into two branches due to the dependence of ± sin θ in Eq.2. The split is not discernible for small θ; however, the split was observed more than θ = 1◦in our structure. These results confirmed that the shorter wavelength resonance in the nanostructure arises from the RA phenomenon (see Fig. S4 of the Supplementary 1 for additional information). To examine the impact of Au nanodisk’s adjacent dipole coupling on the resonance of the pro- posed nanostructure, we varied dd/2, where td was fixed at 15 nm, keeping all other structural parameters constant. The resonance redshifted with increasing the dd/2 of the Au nanodisk, as shown in Fig. 6(a). Meanwhile, this redshift resulted from the weakening of nanodisk’s adjacent dipole coupling. To visualize this weakening dipole coupling, we investigated the spatial electric field distributions for Au nanodisk radii of 50 nm and 150 nm, as shown in Figs. 6(b) and (c), respectively. A strong electric field confinement was obtained for the dd/2 of 50 nm. This weakening coupling effect with increasing dd/2 resulted in a wide tunability over a broad NIR wavelength from ∼1100 - ∼2600 nm, which is not found in previously reported studies to our best knowledge. Table 1 presents a comparative analysis of our proposed nanostructures alongside previously reported state-of-the-art nanostructures for the blueshifting properties. We conducted this analy- sis, considering the blueshifting response, variation in the structural parameter, normalized band- width (∆λb/λ), and the operating wavelength. Traditional NP nanostructures, such as Au–Au nanodisk pairs [42] and Au–Au nanoparticle pairs [52], exhibited blueshifting phenomena primarily within the visible spectral range; however, these structures exhibited limited ∆λb for small varia- tion in the structural parameter. Although some improvement was achieved with the introduction of Au–spacer–graphene nanostructure, the ∆λb, ∆λb/λ, and tunable range remained insufficient for pragmatic applications [43]. Notably, current state-of-the-art nanostructures exhibit a higher ∆λb, up to 79 nm for the structural variation of 9.9 nm, while still operating within the visible wavelength [41]. However, further development is necessary to extend their operational range and enhance their ∆λb sufficiently to enable pragmatic applications. In contrast, our MDM nanostructure demonstrated substantially enhanced ∆λb, ∆λb/λ, and wider spectral coverage. Specifically, we achieved a record ∆λb/λ of 46.1% for Au nanodisk thicknesses varying between 5 and 55 nm, a value that is two times magnitude higher than the current state-of-the-art. Even for a small thickness variation from 5 to 10 nm, the ∆λb/λ and ∆λb remained superior at 24.1% and 457.82 nm, respectively, compared to the current state-of-the-art. Furthermore, the tunable range of our structure extends deep into the NIR, spanning 1100–2600 nm, which is highly desirable for nanophotonic and sensing applications. This improvement in performance can be attributed to the engineered anti-parallel dipole image coupling, 6 Sarker et al. Figure 6. Spatial distributions of the transmission of an MDM nanostructure with a top cladding layer on the nanostructure. We varied the Au nanodisk radius (dd/2) from 50 to 150 nm. Here, the P was set to be 500 nm. The xz-view of the spatial electric field (|E|) distributions for the Au nanodisk radius of (b) 50 and (c) 150 nm. The color bar is provided in the inset. which enables stronger and more localized EM field interactions at subwavelength scales. The ex- ponential decay of near-field coupling, modulated by the nanodisk thickness and spacer properties, provides a powerful means to control the resonance shift with precision. Additionally, the presence of the dielectric cladding layer further enhances the system’s responsiveness by shortening the decay length and boosting the field confinement. Table 1. Comparative analysis Structure Wavelength Parameter change (nm) ∆λb (nm) ∆λb/λ (%) Ref. Au-spacer-Graphene visible 15 29 4.5 [43] Au/Ag NPs visible 9.9 79 16.2 [41] Au-Au nanodisk pairs visible 206 150.38 23.1 [42] Au-Au NP pairs visible 250 93 10.9 [52] MDM NIR 5 457.82 24.1 This work MDM NIR 50 875.78 46.1 This work 4 Conclusion In this paper, we demonstrated using MDM array of plasmonic nanostructures that allowed both blueshifts and redshifts in its resonance wavelength in the NIR spectrum. These unique resonance properties are obtained by harnessing antiparallel dipole-image coupling between the metal nanodisks and the underlying metal slab, which follows the plasmon ruler relationship. Most importantly, we obtained a significant blueshift response of 457.82 nm for Au nanodisk thicknesses increasing from 5 to 10 nm. This study demonstrated a single plasmonic platform capable of both blueshifting and redshifting its resonance via simple structural and material modifications. Such dual-shift tunability provides a simple yet powerful means of tailoring resonance wavelengths in the NIR, thereby enabling advanced plasmonic devices and sensors. Disclosures The authors declare no conflicts of interest. Data Availability Statement The datasets are not publicly accessible but are available from the corresponding author upon rea- sonable request. 7 REFERENCES Sarker et al. Supplemental Material See Supplementary 1 for supporting content. References [1] Michael Faraday. X. The Bakerian Lecture. —Experimental relations of gold (and other metals) to light. Philosophical Transactions of the Royal Society of London, 147:145–181, 1857. URL https://royalsocietypublishing.org/doi/abs/10.1098/rstl.1857.0011. [2] Mark L. Brongersma and Vladimir M. Shalaev. The Case for Plasmonics. Science, 328(5977): 440–441, 2010. URL https://www.science.org/doi/abs/10.1126/science.1186905. [3] Mark I Stockman, Katrin Kneipp, Sergey I Bozhevolnyi, Soham Saha, Aveek Dutta, Justus Ndukaife, Nathaniel Kinsey, Harsha Reddy, Urcan Guler, Vladimir M Shalaev, Alexandra Boltasseva, Behrad Gholipour, Harish N S Krishnamoorthy, Kevin F MacDonald, Cesare Soci, Nikolay I Zheludev, Vassili Savinov, Ranjan Singh, Petra Groß, Christoph Lienau, Michal Vadai, Michelle L Solomon, David R Barton, Mark Lawrence, Jennifer A Dionne, Svetlana V Boriskina, Ruben Esteban, Javier Aizpurua, Xiang Zhang, Sui Yang, Danqing Wang, Weijia Wang, Teri W Odom, Nicol`o Accanto, Pablo M de Roque, Ion M Hancu, Lukasz Piatkowski, Niek F van Hulst, and Matthias F Kling. Roadmap on plasmonics. Journal of Optics, 20(4):043001, mar 2018. URL https://dx.doi.org/10.1088/2040-8986/aaa114. [4] B. Bahari, R. Tellez-Limon, and B. Kante. Directive and enhanced spontaneous emission using shifted cubes nanoantenna. Journal of Applied Physics, 120(9):093106, 09 2016. ISSN 0021- 8979. URL https://doi.org/10.1063/1.4962164. [5] Jun-Hee Park, Jeongho Ha, Liyi Hsu, Guang Yang, Yeshaiahu Fainman, Alexander V. Sergienko, and Abdoulaye Ndao. Observation of robust subwavelength phase singularity in chiral medium. Advanced Photonics, 7(3):035001, 2025. URL https://doi.org/10.1117/1.AP.7.3.035001. [6] Ahmet A. Yanik, Min Huang, Osami Kamohara, Alp Artar, Thomas W. Geisbert, John H. Connor, and Hatice Altug. An optofluidic nanoplasmonic biosensor for direct detection of live viruses from biological media. Nano Letters, 10(12):4962–4969, 2010. URL https://doi.org/ 10.1021/nl103025u. PMID: 21053965. [7] Arif E. Cetin and Hatice Altug. Fano resonant ring/disk plasmonic nanocavities on conducting substrates for advanced biosensing. ACS Nano, 6(11):9989–9995, 2012. URL https://doi. org/10.1021/nn303643w. PMID: 23092386. [8] Jun-Hee Park, Abdoulaye Ndao, Wei Cai, Liyi Hsu, Ashok Kodigala, Thomas Lepetit, Yu- Hwa Lo, and Boubacar Kant´e. Symmetry-breaking-induced plasmonic exceptional points and nanoscale sensing. Nature Physics, 16(4):462–468, Apr 01, 2020. ISSN 1745-2481. URL https: //doi.org/10.1038/s41567-020-0796-x. [9] M. Hamidi, C. Chemrouk, A. Belkhir, Z. Kebci, A. Ndao, O. Lamrous, and F.I. Baida. Sfm- fdtd analysis of triangular-lattice aaa structure: Parametric study of the tem mode. Optics Communications, 318:47–52, 2014. ISSN 0030-4018. URL https://www.sciencedirect.com/ science/article/pii/S0030401813011991. [10] J. B. Pendry. Negative refraction makes a perfect lens. Phys. Rev. Lett., 85:3966–3969, Oct 2000. URL https://link.aps.org/doi/10.1103/PhysRevLett.85.3966. [11] Zhaowei Liu, Hyesog Lee, Yi Xiong, Cheng Sun, and Xiang Zhang. Far-Field Optical Hyperlens Magnifying Sub-Diffraction-Limited Objects. Science, 315(5819):1686–1686, 2007. URL https: //www.science.org/doi/abs/10.1126/science.1137368. [12] Rupert F. Oulton, Volker J. Sorger, Thomas Zentgraf, Ren-Min Ma, Christopher Gladden, Lun Dai, Guy Bartal, and Xiang Zhang. Plasmon lasers at deep subwavelength scale. Nature, 461 (7264):629–632, Oct 01, 2009. ISSN 1476-4687. URL https://doi.org/10.1038/nature08364. [13] Hatice Altug, Dirk Englund, and Jelena Vuˇckovi´c. Ultrafast photonic crystal nanocavity laser. Nature Physics, 2(7):484–488, Jul 01, 2006. ISSN 1745-2481. URL https://doi.org/10.1038/ nphys343. [14] Max Herzberger and Nancy R. McClure. The design of superachromatic lenses. Appl. Opt., 2 (6):553–560, Jun 1963. URL https://opg.optica.org/ao/abstract.cfm?URI=ao-2-6-553. 8 REFERENCES Sarker et al. [15] Philippe Lalanne, Simion Astilean, Pierre Chavel, Edmond Cambril, and Huguette Launois. Blazed binary subwavelength gratings with efficiencies larger than those of conventional ´echelette gratings. Opt. Lett., 23(14):1081–1083, Jul 1998. URL https://opg.optica.org/ ol/abstract.cfm?URI=ol-23-14-1081. [16] Ze’ev Bomzon, Vladimir Kleiner, and Erez Hasman. Pancharatnam–berry phase in space-variant polarization-state manipulations with subwavelength gratings. Opt. Lett., 26(18):1424–1426, Sep 2001. URL https://opg.optica.org/ol/abstract.cfm?URI=ol-26-18-1424. [17] Bryan H. Fong, Joseph S. Colburn, John J. Ottusch, John L. Visher, and Daniel F. Sievenpiper. Scalar and tensor holographic artificial impedance surfaces. IEEE Transactions on Antennas and Propagation, 58(10):3212–3221, 2010. [18] Nanfang Yu, Patrice Genevet, Mikhail A. Kats, Francesco Aieta, Jean-Philippe Tetienne, Federico Capasso, and Zeno Gaburro. Light propagation with phase discontinuities: Gen- eralized laws of reflection and refraction. Science, 334(6054):333–337, 2011. URL https: //www.science.org/doi/abs/10.1126/science.1210713. [19] Liyi Hsu, Matthieu Dupr´e, Abdoulaye Ndao, and Boubacar Kant´e. From parabolic-trough to metasurface-concentrator: assessing focusing in the wave-optics limit. Opt. Lett., 42(8):1520– 1523, Apr 2017. URL https://opg.optica.org/ol/abstract.cfm?URI=ol-42-8-1520. [20] Abdoulaye Ndao, Roland Salut, Miguel Suarez, and Fadi I Baida. Plasmonless polarization- selective metasurfaces in the visible range. Journal of Optics, 20(4):045003, mar 2018. URL https://dx.doi.org/10.1088/2040-8986/aab1eb. [21] Jeongho Ha, Abdoulaye Ndao, Liyi Hsu, Jun-Hee Park, and Boubacar Kante. Planar dielectric cylindrical lens at 800 nm and the role of fabrication imperfections. Opt. Express, 26(18):23178– 23184, Sep 2018. URL https://opg.optica.org/oe/abstract.cfm?URI=oe-26-18-23178. [22] Noa Konforty, Moshe-Ishay Cohen, Ohad Segal, Yonatan Plotnik, Vladimir M. Shalaev, and Mordechai Segev. Second harmonic generation and nonlinear frequency conversion in photonic time-crystals. Light: Science & Applications, 14(1):152, Apr 02, 2025. ISSN 2047-7538. URL https://doi.org/10.1038/s41377-025-01788-z. [23] Huanyu Zhou, Xueqi Ni, Beicheng Lou, Shanhui Fan, Yuan Cao, and Haoning Tang. Control of Chirality and Directionality of Nonlinear Metasurface Light Source via Moir´e Engineering. Phys. Rev. Lett., 134:043801, Jan 2025. URL https://link.aps.org/doi/10.1103/PhysRevLett. 134.043801. [24] C-H. Lambert, S. Mangin, B. S. D. Ch. S. Varaprasad, Y. K. Takahashi, M. Hehn, M. Cinchetti, G. Malinowski, K. Hono, Y. Fainman, M. Aeschlimann, and E. E. Fullerton. All-optical control of ferromagnetic thin films and nanostructures. Science, 345(6202):1337–1340, 2014. URL https://www.science.org/doi/abs/10.1126/science.1253493. [25] Muhammad Waleed Khalid, Jeongho Ha, Mohammed Salah El Hadri, Liyi Hsu, Saeed Hemayat, Yuxuan Xiao, Alexander Sergienko, Eric E. Fullerton, and Abdoulaye Ndao. Meta-Magnetic All- Optical Helicity Dependent Switching of Ferromagnetic Thin Films. Advanced Optical Materials, 12(4):2301599, 2024. URL https://advanced.onlinelibrary.wiley.com/doi/abs/10.1002/ adom.202301599. [26] Muhammad Waleed Khalid, Ali Akbar, Jeongho Ha, Mohammed Salah El Hadri, Alexander V. Sergienko, Eric E. Fullerton, and Abdoulaye Ndao. Role of photonic angular momentum in all-optical magnetic switching. Phys. Rev. B, 109:L140403, Apr 2024. URL https://link. aps.org/doi/10.1103/PhysRevB.109.L140403. [27] Dimitrios L. Sounas and Andrea Al`u. Non-reciprocal photonics based on time modulation. Nature Photonics, 11(12):774–783, Dec 01, 2017. ISSN 1749-4893. URL https://doi.org/10. 1038/s41566-017-0051-x. [28] Christian Haffner, Daniel Chelladurai, Yuriy Fedoryshyn, Arne Josten, Benedikt Baeuerle, Wolf- gang Heni, Tatsuhiko Watanabe, Tong Cui, Bojun Cheng, Soham Saha, Delwin L. Elder, Larry. R. Dalton, Alexandra Boltasseva, Vladimir M. Shalaev, Nathaniel Kinsey, and Juerg Leuthold. Low-loss plasmon-assisted electro-optic modulator. Nature, 556(7702):483–486, Apr 01, 2018. ISSN 1476-4687. URL https://doi.org/10.1038/s41586-018-0031-4. 9 REFERENCES Sarker et al. [29] Andrea Al`u and Nader Engheta. Achieving transparency with plasmonic and metamaterial coatings. Phys. Rev. E, 72:016623, Jul 2005. URL https://link.aps.org/doi/10.1103/ PhysRevE.72.016623. [30] J. B. Pendry, D. Schurig, and D. R. Smith. Controlling electromagnetic fields. Science, 312(5781):1780–1782, 2006. URL https://www.science.org/doi/abs/10.1126/science. 1125907. [31] Liyi Hsu, Fadi I. Baida, and Abdoulaye Ndao. Local field enhancement using a photonic- plasmonic nanostructure. Opt. Express, 29(2):1102–1108, Jan 2021. URL https://opg.optica. org/oe/abstract.cfm?URI=oe-29-2-1102. [32] Guang Yang, Alexander V. Sergienko, and Abdoulaye Ndao. Tunable polarization mode con- version using thin-film lithium niobate ridge waveguide. Opt. Express, 29(12):18565–18571, Jun 2021. URL https://opg.optica.org/oe/abstract.cfm?URI=oe-29-12-18565. [33] Guang Yang, Alexander V. Sergienko, and Abdoulaye Ndao. Plasmonic loss-mitigating broad- band adiabatic polarizing beam splitter. Opt. Lett., 47(3):629–632, Feb 2022. URL https: //opg.optica.org/ol/abstract.cfm?URI=ol-47-3-629. [34] Xilin Feng, Tianwei Wu, Zihe Gao, Haoqi Zhao, Shuang Wu, Yichi Zhang, Li Ge, and Liang Feng. Non-hermitian hybrid silicon photonic switching. Nature Photonics, 19(3):264–270, Mar 01, 2025. ISSN 1749-4893. URL https://doi.org/10.1038/s41566-024-01579-9. [35] Daniel Rodrigo, Odeta Limaj, Davide Janner, Dordaneh Etezadi, F. Javier Garc´ıa de Abajo, Va- lerio Pruneri, and Hatice Altug. Mid-infrared plasmonic biosensing with graphene. Science, 349 (6244):165–168, 2015. URL https://www.science.org/doi/abs/10.1126/science.aab2051. [36] Surbhi Lal, Stephan Link, and Naomi J. Halas. Nano-optics from sensing to waveguiding. Nature Photonics, 1(11):641–648, Nov 01, 2007. ISSN 1749-4893. URL https://doi.org/10.1038/ nphoton.2007.223. [37] Yoshitomo Okawachi, Matthew S. Bigelow, Jay E. Sharping, Zhaoming Zhu, Aaron Schweins- berg, Daniel J. Gauthier, Robert W. Boyd, and Alexander L. Gaeta. Tunable all-optical de- lays via brillouin slow light in an optical fiber. Phys. Rev. Lett., 94:153902, Apr 2005. URL https://link.aps.org/doi/10.1103/PhysRevLett.94.153902. [38] Xiaoyang Duan, Simon Kamin, and Na Liu. Dynamic plasmonic colour display. Nature Com- munications, 8(1):14606, Feb 24, 2017. ISSN 2041-1723. URL https://doi.org/10.1038/ ncomms14606. [39] Frank Neubrech, Xiaoyang Duan, and Na Liu. Dynamic plasmonic color generation enabled by functional materials. Science Advances, 6(36):eabc2709, 2020. URL https://www.science. org/doi/abs/10.1126/sciadv.abc2709. [40] Søren Raza, Nicolas Stenger, Shima Kadkhodazadeh, Søren V. Fischer, Natalie Kostesha, Antti- Pekka Jauho, Andrew Burrows, Martijn Wubs, and N. Asger Mortensen. Blueshift of the surface plasmon resonance in silver nanoparticles studied with EELS. Nanophotonics, 2(2):131–138, 2013. URL https://doi.org/10.1515/nanoph-2012-0032. [41] Yanrong Chen, Haihua Wu, Zhipeng Li, Peijie Wang, Longkun Yang, and Yan Fang. The Study of Surface Plasmon in Au/Ag Core/Shell Compound Nanoparticles. Plasmonics, 7(3):509–513, Sep 01, 2012. ISSN 1557-1963. URL https://doi.org/10.1007/s11468-012-9336-6. [42] Prashant K. Jain, Wenyu Huang, and Mostafa A. El-Sayed. On the Universal Scaling Behavior of the Distance Decay of Plasmon Coupling in Metal Nanoparticle Pairs: A Plasmon Ruler Equation. Nano Letters, 7(7):2080–2088, Jul 01, 2007. ISSN 1530-6984. URL https://doi. org/10.1021/nl071008a. [43] Jing Niu, Young Jun Shin, Jaesung Son, Youngbin Lee, Jong-Hyun Ahn, and Hyunsoo Yang. Shifting of surface plasmon resonance due to electromagnetic coupling between graphene and au nanoparticles. Opt. Express, 20(18):19690–19696, Aug 2012. URL https://opg.optica. org/oe/abstract.cfm?URI=oe-20-18-19690. 10 REFERENCES Sarker et al. [44] Asad Nauman, Hafiz Saad Khaliq, Jun-Chan Choi, Jae-Won Lee, and Hak-Rin Kim. Topolog- ically engineered strain redistribution in elastomeric substrates for dually tunable anisotropic plasmomechanical responses. ACS Applied Materials & Interfaces, 16(5):6337–6347, 2024. URL https://doi.org/10.1021/acsami.3c13818. PMID: 38285501. [45] Dmitrii Belogolovskii, Md Masudur Rahman, Karl Johnson, Vladimir Fedorov, Andrew Grieco, Nikola Alic, Abdoulaye Ndao, Paul K. L. Yu, and Yeshaiahu Fainman. Large bidirectional refractive index change in silicon-rich nitride via visible light trimming. Advanced Optical Ma- terials, 13(14):2403420, 2025. URL https://advanced.onlinelibrary.wiley.com/doi/abs/ 10.1002/adom.202403420. [46] Robert L. Olmon, Brian Slovick, Timothy W. Johnson, David Shelton, Sang-Hyun Oh, Glenn D. Boreman, and Markus B. Raschke. Optical dielectric function of gold. Phys. Rev. B, 86:235147, Dec 2012. URL https://link.aps.org/doi/10.1103/PhysRevB.86.235147. [47] E. D. Palik. Handbook of Optical Constants of Solids. Number v. 3 in Academic Press handbook series. Elsevier Science, 1998. ISBN 9780125444231. URL https://books.google.com.bd/ books?id=nxoqxyoHfbIC. [48] Min Hu, Amitabh Ghoshal, Manuel Marquez, and Pieter G. Kik. Single particle spectroscopy study of metal-film-induced tuning of silver nanoparticle plasmon resonances. The Journal of Physical Chemistry C, 114(16):7509–7514, 2010. URL https://doi.org/10.1021/jp911416a. [49] John D. Joannopoulos, Steven G. Johnson, Joshua N. Winn, and Robert D. Meade. Photonic Crystals: Molding the Flow of Light - Second Edition. Princeton University Press, rev - revised, 2 edition, 2008. ISBN 9780691124568. URL http://www.jstor.org/stable/j.ctvcm4gz9. [50] A Ndao, J Salvi, R Salut, M-P Bernal, T Alaridhee, A Belkhir, and F I Baida. Resonant optical transmission through sub-wavelength annular apertures caused by a plasmonic transverse electromagnetic (TEM) mode. Journal of Optics, 16(12):125009, nov 2014. URL https://dx. doi.org/10.1088/2040-8978/16/12/125009. [51] Yue Su, Zhaoxin Geng, Zhiyuan Fan, Shicai Wang, Xiaoqing Lv, Weihao Fang, Weihua Pei, and Hongda Chen. Exploring surface sensitivity of rayleigh anomaly in metal/dielectric multilayer gratings. Opt. Express, 27(10):14152–14162, May 2019. URL https://opg.optica.org/oe/ abstract.cfm?URI=oe-27-10-14152. [52] W. Rechberger, A. Hohenau, A. Leitner, J.R. Krenn, B. Lamprecht, and F.R. Aussenegg. Op- tical properties of two interacting gold nanoparticles. Optics Communications, 220(1):137– 141, 2003. ISSN 0030-4018. URL https://www.sciencedirect.com/science/article/pii/ S0030401803013579. 11 Sarker et al. Supplemental Materials Resonance Engineering via Harnessing Anti-Parallel Dipole Image Coupling: Supplemental Material S1. Impact of Spacer layer thickness on Resonance Figures S1(a) and (b) depict the spatial distributions of electric field for tAl2O3 = 0 and 15 nm at the longer wavelength resonance. The field confinement of the MDM structure with tAl2O3 = 0 nm is negligible compared to that with tAl2O3 = 15 nm, indicating that the longer wavelength resonance mode cannot sustain without the spacer layer, as shown in Figs. S1(a). Therefore, it can be inferred that the longer wavelength resonance is governed primarily by the anti-parallel dipole image coupling set by tAl2O3. The full width half-maximum (FWHM) and the longer wavelength reflection dip increased with the inclination of spacer layer thickness (tAl2O3), as depicted in Fig. S1(c). The narrow FWHM and minimum reflection dip provide higher spectral selectivity, lower crosstalk, and higher spectral purity. Therefore, considering the narrow FWHM and the minimum reflection dip, we adopted the tAl2O3 of 15 nm for our study, as shown in Fig. S1(c) by line cutting. Figure S1. The spatial distribution of electric field for (a) tAl2O3 = 0 nm and (b) tAl2O3 = 15 nm. (c) Spatial distributions of the transmission of an MDM nanostructure with a top cladding layer on the nanostructure for varying the spacer thickness. Here, the td of 15 nm was considered. The color bar is provided in the inset. 12 Sarker et al. S2. Simulation Information The simulation information of the FDTD is provided in Table S1. Table S1. FDTD simulation parameters used to investigate the MDM nanostructure Parameter description Quantities/situation Simulation type 3D Plane wave type Bloch/periodic Spatial cell size (dx = dy = dz) 10 nm Simulation time 1000 fs Temperature 300 K Mesh accuracy 2 Override mesh size (x, y, and z) 1, 1, and 0.4 nm Background index 1 Boundary conditions (x, y, and z) Periodic, Periodic, and PML Number of PML layers Steep angle (12) S3. Fabrication technique and its imperfection imapct on res- onance To fabricate the proposed MDM nanostructure, we will use a standard process that we already developed in our previous work [8, 5]. The fabrication will be based on three main steps: materials deposition, e-beam lithography, and lift-off. The device will be fabricated on a glass substrate using high-resolution electron-beam lithography (EBL). The substrate is first cleaned by sonication in acetone and isopropyl alcohol (IPA). A chromium adhesion layer is then deposited, followed by a 100-nm-thick gold ground plane and a 100-nm Al2O3 spacer layer will be deposited using atomic layer deposition (ALD). To minimize sidewall roughness during the lift-off process, a high- resolution positive-tone bilayer resist consisting of methyl methacrylate (MMA EL-8) and polymethyl methacrylate (PMMA A2) is used. After exposure and development in a methyl isobutyl ketone (MIBK) solution, a 3-nm chromium adhesion layer and a gold film are sequentially deposited via electron-beam evaporation. Finally, the resist is removed using a photoresist remover, completing the first device layer. Figure S2. Reflection spectra of our MDM nanostructure for imperfection analysis of parameters (a) P, (b) dd, and (c) tAl2O3. Fabrication imperfection analysis variates the dimension at ∆p = ∆d = ±5 nm and ∆t = ±1 nm. To analysis the fabrication imperfection and acceptable variation range of P, dd, and tAl2O3, we performed a study varying a slight change of ∆p, ∆d, and ∆t in these parameters to observe the possible shifts of the resonances in the reflectance spectra, as shown in Fig. S2. In the study, the parameters ∆p and ∆d equal to ±5 nm and with the parameter ∆t equal to ±1 nm were considered. As shown in Fig. S2(a), the longer wavelength resonance is insensitive to P, and the shorter wavelength resonance exhibits only a negligible shift for ∆p of ±5 nm. Thus, standard lithographic tolerances in P do not compromise device performance. In contrast, Figs. S2(b) and (c) show that the shorter-wavelength resonance is largely insensitive to dd and tAl2O3, whereas the longer wavelength resonance shifts with varying these parameters, as expected. Based on this sensitivity, devices remain within specification for variations up to ±5 nm in dd and ±1 nm in tAl2O3, with both resonances still clearly resolved. The high resolution of Au nanodisk and thin deposition of the spacer layer can be achievable by employing cleanroom standard lift-off and ALD process. 13 Sarker et al. Together, from these results, it can be inferred that the shorter wavelength resonance is associated with lattice or diffraction effects set by P, whereas the longer wavelength resonance is governed primarily by disk size and anti-parallel dipole image coupling set by dd and tAl2O3. S4. Rayleigh Anomaly We analyzed the impact of the P on the resonance wavelength and compared the resonance wave- length from our theoretical and numerical calculations, as shown in Fig. S3. The obtained λres, 1 of 878.779 nm showed a good agreement with the theoretically calculated λres, 1 of 875 nm for P = 500 nm. Figure S3. Theoretically and numerically obtained resonance wavelength of our MDM nanostructure for different Ps. S5. Impact of Period and incidence angle We analyzed the impact of the period (P) and incidence angle (θ) on the resonance wavelength of our proposed MDM nanostructure, as shown in Fig. S4. As shown in Fig. S4, the longer-wavelength resonance exhibits negligible dependence on P and θ, because of its origin in the localized anti-parallel dipole images. In contrast, the shorter-wavelength resonance arises from the Rayleigh anomaly (RA) and therefore varies with P and θ. As the P increased, the shorter-wavelength resonance λres, 1 of the MDM array shifted to longer wavelengths, as illustrated in Fig. S4(a). Here, normal incidence was considered for all simulations. For P = 500 nm, The numerically obtained λres, 1 of 878.8 nm was in good agreement with the theoretical value of 875 nm from RA phenomena. When the θ is increased, the shorter resonance splits into two branches due to the RA’s ±sinθ angular dependence, as shown in Fig. S4(b). These results confirmed that the shorter wavelength resonance in the nanostructure arises from the RA phenomenon. 14 Sarker et al. Figure S4. The impact of (a) P and (b) θ on the reflection spectra of our MDM nanostructure. 15
Sarker et al. PAPER Crossmark RECEIVED XX XX XXXX REVISED XX XX XXXX Resonance Engineering via Harnessing Anti-Parallel Dipole Image Coupling Dip Sarker and Abdoulaye Ndao∗ 92093, USA E-mail: ∗ Keywords: Plasmonic, Sub-Wavelength Confinement, Anti-Parallel Dipole Image, EM Coupling, Tunable device, Bi-directional Resonance, NIR Abstract Precise control of plasmonic resonances across a broad spectral range is central to the development of tunable optical devices. Yet, achieving both redshifts and blueshifts within a single nanostructure has remained elusive. Here we introduce a metal-dielectric-metal (MDM) nanodisk array that enables bidirectional tuning of resonance wavelengths throughout the near-infrared (NIR) region. The observed spectral evolution follows the plasmon ruler relationship, with unprecedented tuning properties. In particular, we report a record blueshift response of 457.82 nm for a small nanodisk thickness variation of only 5-10 nm, the highest blueshift response demonstrated in plasmonic architectures to date. This platform offers finely tunable resonances spanning an exceptionally wide NIR range, providing new insights into electromagnetic (EM) coupling mechanisms and establishing a foundation for next-generation tunable devices in sensing, optical communications, and dynamic displays. 1 Introduction In the mid-19th century, Michael Faraday's research marked the birth of plasmonics, an interacting phenomenon between the electric field of EM wave and the collective oscillations of free electrons in metals, by conducting a pioneering and systematic study on the optical properties of gold (Au) leaf, enabling transmission of green and reflection of yellow wavelengths of incident light [1]. Since then, it has garnered enormous attention from the scientific community due to its unique properties by engineering light-matter interactions at sub-wavelength scales [2, 3, 4, 5]. By harnessing strong confinement and enhancement of optical fields, plasmonics has enabled diverse applications, including sensing [6, 7, 8], enhanced transmission [9], subwavelength imaging [10, 11], advanced light sources [12, 13], metasurfaces [14, 15, 16, 17, 18, 19, 20, 21], nonlinear optical processes [22, 23], all-optical ultrafast magnetic switching [24, 25, 26], optical modulation [27, 28], cloaking [29, 30], and integrated photonic circuitry [31, 32, 33, 34]. Beyond these demonstrations, the advent of tunable plasmonic devices heralds a new era in optical technology, as the dynamic manipulation of light at the nanoscale is realized through control of geometry, refractive index, local environment, and carrier density. Such precise resonance tuning permits real-time modulation, delivering compact and energy-efficient solutions that augment the performance of applications ranging from sensors [35, 36] and optical communication [37, 36] to dynamic displays [38, 39]. Researchers around the world have explored and investigated numerous plasmonic-based tunable nanostructures by modifying the geometrical and material parameters. Generally, plasmonic resonances in these nanostructures are typically known to exhibit redshift properties, where the resonance wavelength moves toward longer wavelengths, with increasing structural dimensions, while blueshift behavior, where the resonance wavelength moves toward shorter wavelengths, is comparatively rare, challenging, and less explored due to the fixed density of free electrons in noble metals (e.g. Au and Ag) that determine their plasma frequency [40]. However, plasmonic blueshift properties can be achieved from the anti-parallel-mode excitation and strong near-field interactions, which require careful engineering of the material composition, geometry, and coupling conditions. Several experimental studies [41, 42, 43, 44, 45] showed that the blueshifting behavior of nanoparticle 1 16 Oct 2025 Sarker et al. (NP)-based structures originated from the antibonding mode between NPs. For example, an Au/Ag core/shell NP nanostructure exhibited blueshift properties by the generation of a hybridized antibonding mode, when the diameter of the Ag shell increased [41]. Indeed, initial reports showed that two pairs of Au nanodisks exhibited blueshift resonance using polarization along the interparticle axis [42]. However, their designs faced challenges in uniformly distributing the NPs on the substrate and had limited spectral coverage across the visible range and low blueshift response. A subsequent study demonstrated blueshifting behavior in an Au NP-spacer-graphene nanostructure by exploiting anti-parallel dipole image coupling between the nanoparticle and graphene [43]. Similarly, the nanostructures had the uniformly distributed challenges of NPs on the substrate, limited spectral coverage across the visible range, and low blueshift response with the fragility issue of graphene. More recently, Nauman et al. reported a practical realization of dual-directional plasmonic shifts via anisotropic strain redistribution in elastomeric substrates, enabling simultaneous redshift and blueshift responses controlled by mechanical deformation and polarization [44]. Complementarily, Belogolovskii et al. demonstrated a CMOS-compatible approach using visible light trimming of silicon-rich nitride micro-ring resonators, achieving bidirectional refractive index changes that enabled both redshifts of ∼49 nm and blueshifts of ∼10 nm through controllable thermal annealing mechanisms [45]. Although their approaches provide valuable experimental insights that complement our design-oriented strategy, highlighting the growing interest in multifunctional plasmonic systems capable of bi-directional spectral tuning, these designs covered only a limited portion of the visible spectrum with low blueshifting properties. Moreover, despite improvements in blueshift response compared to conventional NP-based designs, this improvement is still insufficient for broader practical implementation, such as tunable devices, dynamic displays, and sensors. Additionally, whereas most earlier nanostructures focus on parallel dipole-dipole interactions and hybridized gap modes effects between particles and NPs, the image dipole induced in the bottom Au mirror of anti-parallel dipole image coupling is phase-inverted relative to the nanodisk image dipole, yielding a symmetryanti-symmetry condition, not present in particles and NPs structures. Furthermore, the bi-directional tuning originated from this mirror-mediated interaction by varying Au nanodisk thickness, and we control the spatial overlap of these anti-parallel dipole image between the underlying Au slab and the Au nanodisk, enabling the highest blue- and red-shift response instead of the unidirectional shifts previously reported in nanoparticle coupling mechanisms. Additionally, these previous studies raise an important question: Can plasmonic nanostructures be engineered to achieve enhanced blueshift response and broader spectral tunability, particularly extending beyond the visible wavelength range to encompass the optical C-band used in telecommunications? To address this question, we numerically and theoretically investigated a periodic MDM array structure that demonstrated both blue- and redshifted resonance properties for the first time in this letter. The unique blueshifted resonance emerged by exploiting antiparallel dipole-image coupling between the metal nanodisks and the underlying metal slab within the MDM configuration. Through numerical simulations, we explored how the coupling between adjacent dipoles in the metal nanodisks influences resonance wavelength tuning, enabling precise wavelength control across a broad NIR spectral range. 2 Methodology To obtain the unique resonance property and understand the physics underlying the phenomenon, the MDM nanostructure on an aluminum oxide (Al2O3) substrate was investigated by employing the FDTD method (Ansys Lumerical). Figure 1(a) presents a schematic of the proposed periodic MDM nanostructure on the Al2O3 substrate. Au was selected for the nanodisk and bottom slab layers of the MDM plasmonic nanostructure due to its exceptional plasmonic properties, including low optical losses and high chemical stability in the NIR regime. Al2O3 served as the spacer layer between the nanodisk and bottom slab, facilitating coupling between them. The optical constants (n and k) of Au and Al2O3 were taken from Olmon et al. [46] and Palik et al. [47], respectively. Figures 1(b) and (c) present the xz- and xy-cross-section views of the unit cell of the nanostructure, respectively. Structural parameters of the nanostructure's unit cell were carefully optimized. The periodicity (Px = Py = P) of the MDM nanostructure was set as 500 nm. Initially, the Au nanodisk thickness (td) was selected to be 5 nm, and subsequent variations of td were explored to achieve the desired resonance characteristics. To facilitate the anti-parallel dipole coupling between the Au nanodisk and the Au slab, a thickness (tAl2O3) of 15 nm was employed (see Section 1 of Supplementary 1 for details). The Au slab layer thickness (tAu) was chosen as 100 nm to ensure near-perfect reflection and negligible transmission through the nanostructure. dd and dd/2 denote the diameter and radius of the Au nanodisk, respectively. In our study, dd/2 was set to be 125 nm. To save computational space and time for our numerical study, we used periodic boundary conditions in the x- and y2 Sarker et al. Figure 1. (a) Schematic illustration of the metal-dielectric-metal (MDM) nanostructure for resonance engineering. Au and Al2O3 were employed as metal and spacer materials, respectively. The nanostructure consists of an Al2O3 substrate and a top cladding layer of Al2O3, enabling the nanostructure to exhibit both blue- and red-shifted resonance behaviors. Cross-sectional (b) xz- and (c) xy-views of the periodic MDM unit cell. Here, td, tAl2O3, and tAu represent the thicknesses of the Au nanodisk, Al2O3 spacer, and Au slab, respectively. The periodicity is denoted as Px = Py = P, and the diameter of the nanodisk are given by dd. directions, as our proposed nanostructure was periodic in these directions. In the z-direction, 12 steep angle perfectly matched layers (PMLs) were utilized to absorb the light completely that came out from the simulation region. An override mesh of 1, 1, and 0.4 nm was applied in the x-, y-, and z-directions around the Au nanodisk and spacer layer, respectively. To minimize interference effects, a minimum separation of λcenter/4 was maintained between adjacent objects, where λcenter denotes the central wavelength of the incident light. We performed our study under the excitation of a plane wave, spanning wavelengths from 800 to 3000 nm. A table of additional simulation informations is presented in Table S1 of Supplementary 1. To fabricate the proposed MDM nanostructure, we follow the standard cleanroom fabrication recipe, which was developed [5, 8]. A detailed discussion of the fabrication and the fabrication imperfection analysis and acceptable variation range of the structural parameters is provided in Section 3 of Supplementary 1. 3 Results and Discussions To analyze plasmonic resonance tuning, we studied our proposed MDM nanostructures by varying the geometrical properties. Specifically, tunable resonances were investigated by changing td, with and without a Al2O3 top cladding layer. Figures 2(a) and (b) show the influence of td on the resonance of the reflection spectra with and without the top cladding layer, respectively. In the cladded structure, two resonances appeared in the NIR region. In contrast, the uncladded structure exhibited one resonance in the wavelength range of 800-3000 nm due to the shift of Rayleigh anomaly (RA) towards the visible wavelength. Figure 3(a) illustrates the resonance shifts induced by variations in the Au nanodisk thickness, comparing structures with and without a top cladding layer. The longer-wavelength resonance exhibited a blueshift with increasing td up to a critical structural parameter. This critical thickness was ∼55 nm for the structure with a top cladding layer and ∼75 nm for the structure without cladding. Beyond these values, the resonance of the cladded structure redshifted, while it became saturated for the uncladded structure. To intuitively understand this blueshift and redshift behavior with the spatial electric field profiles of the observed blue- and red-shifts, we utilized the antiparallel dipole image (analogous to an antibonding hybrid plasmonic mode) [48] and perturbation theory [49]. We investigated the spatial electric field distributions of the cladded structure' longerwavelength resonance mode at different td (see the circles of different colors in Fig. 3(a)). When the Au nanodisk is very thin (td 55 nm), it starts to act like a bulkier plasmonic resonator. Moreover, a thicker Au disk has more charge stored for a given field, as shown in Figs. 3(f) and (g), resulting in redshift. Thus, the redshift property returns. In contrast, this disk-dominated mode cannot exist in the absence of Al2O3 cladding due to the wavevector mismatch of incident light's electric field and plasmons of the structure (see Fig. S1 of the Supplementary 1 for details). In addition to anti-parallel dipole image, perturbation theory can explain the effect of metal's small inclusion on resonance. In perturbation theory, the volume perturbation usually tends to redshift the plasmonic mode for thicker Au disk. However, at small td, the field in the Au volume is relatively trivial, as a large fraction of the electric field of the mode is located in the dielectric gap and the surrounding space rather than deep within the Au nanodisk, as illustrated in Figs. 3(b) and (c). Therefore, the impact of volume perturbations in the thin-disk regime is initially superseded by the more pronounced influence of image-dipole coupling changes. However, the mode energy increases for large td due to the increase of the metal's volume, which results in redshift at the resonance, as shown in Figs. 3(e)-(g). To quantitatively understand this behavior, we analytically studied the longer resonance wavelength (λres, 2) of our nanostructures as a function of td by utilizing the well-known plasmon ruler equation, as shown in Fig. 3(a). The relation between λres, 2 and td is expressed by [42], λres, 2 = λoff + aetd τ . (1) The decay length (τ) is defined as the distance at which the coupling decays by a factor of 1/e. A larger τ indicates that the plasmon resonance remains sensitive over greater separations, while a smaller τ confines the strong coupling to shorter distances. λoff represents the resonance offset. From theoretical fitting, we obtained τs of 5.95 and 8.29 nm for the nanostructures with and without a top cladding layer, respectively. The shorter decay length in the presence of a cladding layer leads to greater blueshift response for smaller separations, resulting in enhanced blueshifting behavior (see Fig. 3(a), td range: 5-10 nm). In contrast, the larger τ for the structure without a cladding layer allows for notable blueshifting over a broader range of td (see Fig. 3(a), td range: 35-55 nm), whereas the resonance shift is minimal (blueshift response ∼18 nm) with the presence of cladding 4 Sarker et al. Figure 3. (a) Shift in the plasmon wavelength due to changes in the Au nanodisk thickness with and without a top cladding layer on the nanostructure. The solid lines are theoretical fitting curves by using the plasmon ruler equation. The R2 for these fitted curves was >0.9. The xz-view of the spatial electric field (Ez) distributions for different Au nanodisk thicknesses (nm) of (b) 5, (c) 30, (d) 48, (e) 55, (f) 75, and (g) 90 for the nanostructure with top cladding layer. The inset of (d) depicts the color bar of Ez. The dotted black boxes and rings represent the Au nanodisk and the spatial electric field distributions at different resonance of (a), respectively. layer. a denotes the strength of the near-field coupling between the Au nanodisk and the underlying Au slab. The values of a were calculated to be 656.72 and 845.05 for the cladded and uncladded nanostructures, respectively, from the theoretical fitting. Consistent with the analytical solution of plasmon ruler equation, numerical simulations demonstrated that the uncladded nanostructure exhibited stronger near-field coupling between the Au nanodisk and Au slab, as shown in Fig. 4. Figure 4. The xz-view of the spatial electric field (E) distributions for the nanostructures (a) with and (b) without top Al2O3 cladding layer, respectively. The color bar is provided in the insets. The shorter wavelength resonance was observed only for the cladded nanostructure in the wavelength range of 800-3000 nm. Meanwhile, the shorter wavelength resonance shifted towards the visible wavelength due to the RA phenomenon, specifically influenced by the inclusion of the refractive index of the top cladding layer. Therefore, we need to study the RA phenomenon, explaining the origin of the shorter resonance wavelength (λres, 1). The relationship among λres, 1, incidence angle of light (θ), and P is expressed by [50, 51], λres, 1 = nP ± sinθ √ m2 + l2 . (2) Here, n, m, and l represent the refractive index of the cladding layer and the diffraction orders, respectively. The (m, l) was set to be (1, 0) for our study. Between two influencing parameters, we 5 Sarker et al. Figure 5. Spatial distributions of the transmission of an MDM nanostructure with top cladding layer for varying (a) P and (b) θ. Here, the td and dd/2 were set to be 25 and 125 nm, respectively. first examined the effect of P on the resonance of the cladded nanostructure with a fixed incident angle of θ = 0◦, as shown in Fig. 5(a). The numerically obtained λres, 1 of 878.8 nm was in good agreement with the theoretical value of 875 nm for P = 500 nm. Second, we analyzed the impact of θ on λres,,1, as depicted in Fig.5(b), where we set P = 500 nm. A comparison between the theoretical and numerical simulation is provided in Fig. S3 of Supplementary 1. To investigate the impact of θ, we employed the broadband fixed angle source technique (BFAST), in which a broadband plane wave is incident on the periodic structure at a specific angle. As θ increased, the resonance split into two branches due to the dependence of ± sin θ in Eq.2. The split is not discernible for small θ; however, the split was observed more than θ = 1◦in our structure. These results confirmed that the shorter wavelength resonance in the nanostructure arises from the RA phenomenon (see Fig. S4 of the Supplementary 1 for additional information). To examine the impact of Au nanodisk's adjacent dipole coupling on the resonance of the proposed nanostructure, we varied dd/2, where td was fixed at 15 nm, keeping all other structural parameters constant. The resonance redshifted with increasing the dd/2 of the Au nanodisk, as shown in Fig. 6(a). Meanwhile, this redshift resulted from the weakening of nanodisk's adjacent dipole coupling. To visualize this weakening dipole coupling, we investigated the spatial electric field distributions for Au nanodisk radii of 50 nm and 150 nm, as shown in Figs. 6(b) and (c), respectively. A strong electric field confinement was obtained for the dd/2 of 50 nm. This weakening coupling effect with increasing dd/2 resulted in a wide tunability over a broad NIR wavelength from ∼1100 - ∼2600 nm, which is not found in previously reported studies to our best knowledge. Table 1 presents a comparative analysis of our proposed nanostructures alongside previously reported state-of-the-art nanostructures for the blueshifting properties. We conducted this analysis, considering the blueshifting response, variation in the structural parameter, normalized bandwidth (∆λb/λ), and the operating wavelength. Traditional NP nanostructures, such as Au-Au nanodisk pairs [42] and Au-Au nanoparticle pairs [52], exhibited blueshifting phenomena primarily within the visible spectral range; however, these structures exhibited limited ∆λb for small variation in the structural parameter. Although some improvement was achieved with the introduction of Au-spacer-graphene nanostructure, the ∆λb, ∆λb/λ, and tunable range remained insufficient for pragmatic applications [43]. Notably, current state-of-the-art nanostructures exhibit a higher ∆λb, up to 79 nm for the structural variation of 9.9 nm, while still operating within the visible wavelength [41]. However, further development is necessary to extend their operational range and enhance their ∆λb sufficiently to enable pragmatic applications. In contrast, our MDM nanostructure demonstrated substantially enhanced ∆λb, ∆λb/λ, and wider spectral coverage. Specifically, we achieved a record ∆λb/λ of 46.1% for Au nanodisk thicknesses varying between 5 and 55 nm, a value that is two times magnitude higher than the current state-of-the-art. Even for a small thickness variation from 5 to 10 nm, the ∆λb/λ and ∆λb remained superior at 24.1% and 457.82 nm, respectively, compared to the current state-of-the-art. Furthermore, the tunable range of our structure extends deep into the NIR, spanning 1100-2600 nm, which is highly desirable for nanophotonic and sensing applications. This improvement in performance can be attributed to the engineered anti-parallel dipole image coupling, 6 Sarker et al. Figure 6. Spatial distributions of the transmission of an MDM nanostructure with a top cladding layer on the nanostructure. We varied the Au nanodisk radius (dd/2) from 50 to 150 nm. Here, the P was set to be 500 nm. The xz-view of the spatial electric field (|E|) distributions for the Au nanodisk radius of (b) 50 and (c) 150 nm. The color bar is provided in the inset. which enables stronger and more localized EM field interactions at subwavelength scales. The exponential decay of near-field coupling, modulated by the nanodisk thickness and spacer properties, provides a powerful means to control the resonance shift with precision. Additionally, the presence of the dielectric cladding layer further enhances the system's responsiveness by shortening the decay length and boosting the field confinement. Table 1. Comparative analysis Structure Wavelength Parameter change (nm) ∆λb (nm) ∆λb/λ (%) Ref. Au-spacer-Graphene visible 15 29 4.5 [43] Au/Ag NPs visible 9.9 79 16.2 [41] Au-Au nanodisk pairs visible 206 150.38 23.1 [42] Au-Au NP pairs visible 250 93 10.9 [52] MDM NIR 5 457.82 24.1 This work MDM NIR 50 875.78 46.1 This work 4 Conclusion In this paper, we demonstrated using MDM array of plasmonic nanostructures that allowed both blueshifts and redshifts in its resonance wavelength in the NIR spectrum. These unique resonance properties are obtained by harnessing antiparallel dipole-image coupling between the metal nanodisks and the underlying metal slab, which follows the plasmon ruler relationship. Most importantly, we obtained a significant blueshift response of 457.82 nm for Au nanodisk thicknesses increasing from 5 to 10 nm. This study demonstrated a single plasmonic platform capable of both blueshifting and redshifting its resonance via simple structural and material modifications. Such dual-shift tunability provides a simple yet powerful means of tailoring resonance wavelengths in the NIR, thereby enabling advanced plasmonic devices and sensors. Disclosures The authors declare no conflicts of interest. Data Availability Statement The datasets are not publicly accessible but are available from the corresponding author upon reasonable request. 7 REFERENCES Sarker et al. Supplemental Material See Supplementary 1 for supporting content. References [1] Michael Faraday. X. The Bakerian Lecture. -Experimental relations of gold (and other metals) to light. Philosophical Transactions of the Royal Society of London, 147:145-181, 1857. URL https://royalsocietypublishing.org/doi/abs/10.1098/rstl.1857.0011. [2] Mark L. Brongersma and Vladimir M. Shalaev. The Case for Plasmonics. Science, 328(5977): 440-441, 2010. URL https://www.science.org/doi/abs/10.1126/science.1186905. [3] Mark I Stockman, Katrin Kneipp, Sergey I Bozhevolnyi, Soham Saha, Aveek Dutta, Justus Ndukaife, Nathaniel Kinsey, Harsha Reddy, Urcan Guler, Vladimir M Shalaev, Alexandra Boltasseva, Behrad Gholipour, Harish N S Krishnamoorthy, Kevin F MacDonald, Cesare Soci, Nikolay I Zheludev, Vassili Savinov, Ranjan Singh, Petra Groß, Christoph Lienau, Michal Vadai, Michelle L Solomon, David R Barton, Mark Lawrence, Jennifer A Dionne, Svetlana V Boriskina, Ruben Esteban, Javier Aizpurua, Xiang Zhang, Sui Yang, Danqing Wang, Weijia Wang, Teri W Odom, Nicol`o Accanto, Pablo M de Roque, Ion M Hancu, Lukasz Piatkowski, Niek F van Hulst, and Matthias F Kling. Roadmap on plasmonics. Journal of Optics, 20(4):043001, mar 2018. URL https://dx.doi.org/10.1088/2040-8986/aaa114. [4] B. Bahari, R. Tellez-Limon, and B. Kante. Directive and enhanced spontaneous emission using shifted cubes nanoantenna. Journal of Applied Physics, 120(9):093106, 09 2016. ISSN 00218979. URL https://doi.org/10.1063/1.4962164. [5] Jun-Hee Park, Jeongho Ha, Liyi Hsu, Guang Yang, Yeshaiahu Fainman, Alexander V. Sergienko, and Abdoulaye Ndao. Observation of robust subwavelength phase singularity in chiral medium. Advanced Photonics, 7(3):035001, 2025. URL https://doi.org/10.1117/1.AP.7.3.035001. [6] Ahmet A. Yanik, Min Huang, Osami Kamohara, Alp Artar, Thomas W. Geisbert, John H. Connor, and Hatice Altug. An optofluidic nanoplasmonic biosensor for direct detection of live viruses from biological media. Nano Letters, 10(12):4962-4969, 2010. URL https://doi.org/ 10.1021/nl103025u. PMID: 21053965. [7] Arif E. Cetin and Hatice Altug. Fano resonant ring/disk plasmonic nanocavities on conducting substrates for advanced biosensing. ACS Nano, 6(11):9989-9995, 2012. URL https://doi. org/10.1021/nn303643w. PMID: 23092386. [8] Jun-Hee Park, Abdoulaye Ndao, Wei Cai, Liyi Hsu, Ashok Kodigala, Thomas Lepetit, YuHwa Lo, and Boubacar Kant ́e. Symmetry-breaking-induced plasmonic exceptional points and nanoscale sensing. Nature Physics, 16(4):462-468, Apr 01, 2020. ISSN 1745-2481. URL https: //doi.org/10.1038/s41567-020-0796-x. [9] M. Hamidi, C. Chemrouk, A. Belkhir, Z. Kebci, A. Ndao, O. Lamrous, and F.I. Baida. Sfmfdtd analysis of triangular-lattice aaa structure: Parametric study of the tem mode. Optics Communications, 318:47-52, 2014. ISSN 0030-4018. URL https://www.sciencedirect.com/ science/article/pii/S0030401813011991. [10] J. B. Pendry. Negative refraction makes a perfect lens. Phys. Rev. Lett., 85:3966-3969, Oct 2000. URL https://link.aps.org/doi/10.1103/PhysRevLett.85.3966. [11] Zhaowei Liu, Hyesog Lee, Yi Xiong, Cheng Sun, and Xiang Zhang. Far-Field Optical Hyperlens Magnifying Sub-Diffraction-Limited Objects. Science, 315(5819):1686-1686, 2007. URL https: //www.science.org/doi/abs/10.1126/science.1137368. [12] Rupert F. Oulton, Volker J. Sorger, Thomas Zentgraf, Ren-Min Ma, Christopher Gladden, Lun Dai, Guy Bartal, and Xiang Zhang. Plasmon lasers at deep subwavelength scale. Nature, 461 (7264):629-632, Oct 01, 2009. ISSN 1476-4687. URL https://doi.org/10.1038/nature08364. [13] Hatice Altug, Dirk Englund, and Jelena Vuˇckovi ́c. Ultrafast photonic crystal nanocavity laser. Nature Physics, 2(7):484-488, Jul 01, 2006. ISSN 1745-2481. URL https://doi.org/10.1038/ nphys343. [14] Max Herzberger and Nancy R. McClure. The design of superachromatic lenses. Appl. Opt., 2 (6):553-560, Jun 1963. URL https://opg.optica.org/ao/abstract.cfm?URI=ao-2-6-553. 8 REFERENCES Sarker et al. [15] Philippe Lalanne, Simion Astilean, Pierre Chavel, Edmond Cambril, and Huguette Launois. Blazed binary subwavelength gratings with efficiencies larger than those of conventional ́echelette gratings. Opt. Lett., 23(14):1081-1083, Jul 1998. URL https://opg.optica.org/ ol/abstract.cfm?URI=ol-23-14-1081. [16] Ze'ev Bomzon, Vladimir Kleiner, and Erez Hasman. Pancharatnam-berry phase in space-variant polarization-state manipulations with subwavelength gratings. Opt. Lett., 26(18):1424-1426, Sep 2001. URL https://opg.optica.org/ol/abstract.cfm?URI=ol-26-18-1424. [17] Bryan H. Fong, Joseph S. Colburn, John J. Ottusch, John L. Visher, and Daniel F. Sievenpiper. Scalar and tensor holographic artificial impedance surfaces. IEEE Transactions on Antennas and Propagation, 58(10):3212-3221, 2010. [18] Nanfang Yu, Patrice Genevet, Mikhail A. Kats, Francesco Aieta, Jean-Philippe Tetienne, Federico Capasso, and Zeno Gaburro. Light propagation with phase discontinuities: Generalized laws of reflection and refraction. Science, 334(6054):333-337, 2011. URL https: //www.science.org/doi/abs/10.1126/science.1210713. [19] Liyi Hsu, Matthieu Dupr ́e, Abdoulaye Ndao, and Boubacar Kant ́e. From parabolic-trough to metasurface-concentrator: assessing focusing in the wave-optics limit. Opt. Lett., 42(8):15201523, Apr 2017. URL https://opg.optica.org/ol/abstract.cfm?URI=ol-42-8-1520. [20] Abdoulaye Ndao, Roland Salut, Miguel Suarez, and Fadi I Baida. Plasmonless polarizationselective metasurfaces in the visible range. Journal of Optics, 20(4):045003, mar 2018. URL https://dx.doi.org/10.1088/2040-8986/aab1eb. [21] Jeongho Ha, Abdoulaye Ndao, Liyi Hsu, Jun-Hee Park, and Boubacar Kante. Planar dielectric cylindrical lens at 800 nm and the role of fabrication imperfections. Opt. Express, 26(18):2317823184, Sep 2018. URL https://opg.optica.org/oe/abstract.cfm?URI=oe-26-18-23178. [22] Noa Konforty, Moshe-Ishay Cohen, Ohad Segal, Yonatan Plotnik, Vladimir M. Shalaev, and Mordechai Segev. Second harmonic generation and nonlinear frequency conversion in photonic time-crystals. Light: Science & Applications, 14(1):152, Apr 02, 2025. ISSN 2047-7538. URL https://doi.org/10.1038/s41377-025-01788-z. [23] Huanyu Zhou, Xueqi Ni, Beicheng Lou, Shanhui Fan, Yuan Cao, and Haoning Tang. Control of Chirality and Directionality of Nonlinear Metasurface Light Source via Moir ́e Engineering. Phys. Rev. Lett., 134:043801, Jan 2025. URL https://link.aps.org/doi/10.1103/PhysRevLett. 134.043801. [24] C-H. Lambert, S. Mangin, B. S. D. Ch. S. Varaprasad, Y. K. Takahashi, M. Hehn, M. Cinchetti, G. Malinowski, K. Hono, Y. Fainman, M. Aeschlimann, and E. E. Fullerton. All-optical control of ferromagnetic thin films and nanostructures. Science, 345(6202):1337-1340, 2014. URL https://www.science.org/doi/abs/10.1126/science.1253493. [25] Muhammad Waleed Khalid, Jeongho Ha, Mohammed Salah El Hadri, Liyi Hsu, Saeed Hemayat, Yuxuan Xiao, Alexander Sergienko, Eric E. Fullerton, and Abdoulaye Ndao. Meta-Magnetic AllOptical Helicity Dependent Switching of Ferromagnetic Thin Films. Advanced Optical Materials, 12(4):2301599, 2024. URL https://advanced.onlinelibrary.wiley.com/doi/abs/10.1002/ adom.202301599. [26] Muhammad Waleed Khalid, Ali Akbar, Jeongho Ha, Mohammed Salah El Hadri, Alexander V. Sergienko, Eric E. Fullerton, and Abdoulaye Ndao. Role of photonic angular momentum in all-optical magnetic switching. Phys. Rev. B, 109:L140403, Apr 2024. URL https://link. aps.org/doi/10.1103/PhysRevB.109.L140403. [27] Dimitrios L. Sounas and Andrea Al`u. Non-reciprocal photonics based on time modulation. Nature Photonics, 11(12):774-783, Dec 01, 2017. ISSN 1749-4893. URL https://doi.org/10. 1038/s41566-017-0051-x. [28] Christian Haffner, Daniel Chelladurai, Yuriy Fedoryshyn, Arne Josten, Benedikt Baeuerle, Wolfgang Heni, Tatsuhiko Watanabe, Tong Cui, Bojun Cheng, Soham Saha, Delwin L. Elder, Larry. R. Dalton, Alexandra Boltasseva, Vladimir M. Shalaev, Nathaniel Kinsey, and Juerg Leuthold. Low-loss plasmon-assisted electro-optic modulator. Nature, 556(7702):483-486, Apr 01, 2018. ISSN 1476-4687. URL https://doi.org/10.1038/s41586-018-0031-4. 9 REFERENCES Sarker et al. [29] Andrea Al`u and Nader Engheta. Achieving transparency with plasmonic and metamaterial coatings. Phys. Rev. E, 72:016623, Jul 2005. URL https://link.aps.org/doi/10.1103/ PhysRevE.72.016623. [30] J. B. Pendry, D. Schurig, and D. R. Smith. Controlling electromagnetic fields. Science, 312(5781):1780-1782, 2006. URL https://www.science.org/doi/abs/10.1126/science. 1125907. [31] Liyi Hsu, Fadi I. Baida, and Abdoulaye Ndao. Local field enhancement using a photonicplasmonic nanostructure. Opt. Express, 29(2):1102-1108, Jan 2021. URL https://opg.optica. org/oe/abstract.cfm?URI=oe-29-2-1102. [32] Guang Yang, Alexander V. Sergienko, and Abdoulaye Ndao. Tunable polarization mode conversion using thin-film lithium niobate ridge waveguide. Opt. Express, 29(12):18565-18571, Jun 2021. URL https://opg.optica.org/oe/abstract.cfm?URI=oe-29-12-18565. [33] Guang Yang, Alexander V. Sergienko, and Abdoulaye Ndao. Plasmonic loss-mitigating broadband adiabatic polarizing beam splitter. Opt. Lett., 47(3):629-632, Feb 2022. URL https: //opg.optica.org/ol/abstract.cfm?URI=ol-47-3-629. [34] Xilin Feng, Tianwei Wu, Zihe Gao, Haoqi Zhao, Shuang Wu, Yichi Zhang, Li Ge, and Liang Feng. Non-hermitian hybrid silicon photonic switching. Nature Photonics, 19(3):264-270, Mar 01, 2025. ISSN 1749-4893. URL https://doi.org/10.1038/s41566-024-01579-9. [35] Daniel Rodrigo, Odeta Limaj, Davide Janner, Dordaneh Etezadi, F. Javier Garc ́ıa de Abajo, Valerio Pruneri, and Hatice Altug. Mid-infrared plasmonic biosensing with graphene. Science, 349 (6244):165-168, 2015. URL https://www.science.org/doi/abs/10.1126/science.aab2051. [36] Surbhi Lal, Stephan Link, and Naomi J. Halas. Nano-optics from sensing to waveguiding. Nature Photonics, 1(11):641-648, Nov 01, 2007. ISSN 1749-4893. URL https://doi.org/10.1038/ nphoton.2007.223. [37] Yoshitomo Okawachi, Matthew S. Bigelow, Jay E. Sharping, Zhaoming Zhu, Aaron Schweinsberg, Daniel J. Gauthier, Robert W. Boyd, and Alexander L. Gaeta. Tunable all-optical delays via brillouin slow light in an optical fiber. Phys. Rev. Lett., 94:153902, Apr 2005. URL https://link.aps.org/doi/10.1103/PhysRevLett.94.153902. [38] Xiaoyang Duan, Simon Kamin, and Na Liu. Dynamic plasmonic colour display. Nature Communications, 8(1):14606, Feb 24, 2017. ISSN 2041-1723. URL https://doi.org/10.1038/ ncomms14606. [39] Frank Neubrech, Xiaoyang Duan, and Na Liu. Dynamic plasmonic color generation enabled by functional materials. Science Advances, 6(36):eabc2709, 2020. URL https://www.science. org/doi/abs/10.1126/sciadv.abc2709. [40] Søren Raza, Nicolas Stenger, Shima Kadkhodazadeh, Søren V. Fischer, Natalie Kostesha, AnttiPekka Jauho, Andrew Burrows, Martijn Wubs, and N. Asger Mortensen. Blueshift of the surface plasmon resonance in silver nanoparticles studied with EELS. Nanophotonics, 2(2):131-138, 2013. URL https://doi.org/10.1515/nanoph-2012-0032. [41] Yanrong Chen, Haihua Wu, Zhipeng Li, Peijie Wang, Longkun Yang, and Yan Fang. The Study of Surface Plasmon in Au/Ag Core/Shell Compound Nanoparticles. Plasmonics, 7(3):509-513, Sep 01, 2012. ISSN 1557-1963. URL https://doi.org/10.1007/s11468-012-9336-6. [42] Prashant K. Jain, Wenyu Huang, and Mostafa A. El-Sayed. On the Universal Scaling Behavior of the Distance Decay of Plasmon Coupling in Metal Nanoparticle Pairs: A Plasmon Ruler Equation. Nano Letters, 7(7):2080-2088, Jul 01, 2007. ISSN 1530-6984. URL https://doi. org/10.1021/nl071008a. [43] Jing Niu, Young Jun Shin, Jaesung Son, Youngbin Lee, Jong-Hyun Ahn, and Hyunsoo Yang. Shifting of surface plasmon resonance due to electromagnetic coupling between graphene and au nanoparticles. Opt. Express, 20(18):19690-19696, Aug 2012. URL https://opg.optica. org/oe/abstract.cfm?URI=oe-20-18-19690. 10 REFERENCES Sarker et al. [44] Asad Nauman, Hafiz Saad Khaliq, Jun-Chan Choi, Jae-Won Lee, and Hak-Rin Kim. Topologically engineered strain redistribution in elastomeric substrates for dually tunable anisotropic plasmomechanical responses. ACS Applied Materials & Interfaces, 16(5):6337-6347, 2024. URL https://doi.org/10.1021/acsami.3c13818. PMID: 38285501. [45] Dmitrii Belogolovskii, Md Masudur Rahman, Karl Johnson, Vladimir Fedorov, Andrew Grieco, Nikola Alic, Abdoulaye Ndao, Paul K. L. Yu, and Yeshaiahu Fainman. Large bidirectional refractive index change in silicon-rich nitride via visible light trimming. Advanced Optical Materials, 13(14):2403420, 2025. URL https://advanced.onlinelibrary.wiley.com/doi/abs/ 10.1002/adom.202403420. [46] Robert L. Olmon, Brian Slovick, Timothy W. Johnson, David Shelton, Sang-Hyun Oh, Glenn D. Boreman, and Markus B. Raschke. Optical dielectric function of gold. Phys. Rev. B, 86:235147, Dec 2012. URL https://link.aps.org/doi/10.1103/PhysRevB.86.235147. [47] E. D. Palik. Handbook of Optical Constants of Solids. Number v. 3 in Academic Press handbook series. Elsevier Science, 1998. ISBN 9780125444231. URL https://books.google.com.bd/ books?id=nxoqxyoHfbIC. [48] Min Hu, Amitabh Ghoshal, Manuel Marquez, and Pieter G. Kik. Single particle spectroscopy study of metal-film-induced tuning of silver nanoparticle plasmon resonances. The Journal of Physical Chemistry C, 114(16):7509-7514, 2010. URL https://doi.org/10.1021/jp911416a. [49] John D. Joannopoulos, Steven G. Johnson, Joshua N. Winn, and Robert D. Meade. Photonic Crystals: Molding the Flow of Light - Second Edition. Princeton University Press, rev - revised, 2 edition, 2008. ISBN 9780691124568. URL http://www.jstor.org/stable/j.ctvcm4gz9. [50] A Ndao, J Salvi, R Salut, M-P Bernal, T Alaridhee, A Belkhir, and F I Baida. Resonant optical transmission through sub-wavelength annular apertures caused by a plasmonic transverse electromagnetic (TEM) mode. Journal of Optics, 16(12):125009, nov 2014. URL https://dx. doi.org/10.1088/2040-8978/16/12/125009. [51] Yue Su, Zhaoxin Geng, Zhiyuan Fan, Shicai Wang, Xiaoqing Lv, Weihao Fang, Weihua Pei, and Hongda Chen. Exploring surface sensitivity of rayleigh anomaly in metal/dielectric multilayer gratings. Opt. Express, 27(10):14152-14162, May 2019. URL https://opg.optica.org/oe/ abstract.cfm?URI=oe-27-10-14152. [52] W. Rechberger, A. Hohenau, A. Leitner, J.R. Krenn, B. Lamprecht, and F.R. Aussenegg. Optical properties of two interacting gold nanoparticles. Optics Communications, 220(1):137141, 2003. ISSN 0030-4018. URL https://www.sciencedirect.com/science/article/pii/ S0030401803013579. 11 Sarker et al. Supplemental Materials Resonance Engineering via Harnessing Anti-Parallel Dipole Image Coupling: Supplemental Material S1. Impact of Spacer layer thickness on Resonance Figures S1(a) and (b) depict the spatial distributions of electric field for tAl2O3 = 0 and 15 nm at the longer wavelength resonance. The field confinement of the MDM structure with tAl2O3 = 0 nm is negligible compared to that with tAl2O3 = 15 nm, indicating that the longer wavelength resonance mode cannot sustain without the spacer layer, as shown in Figs. S1(a). Therefore, it can be inferred that the longer wavelength resonance is governed primarily by the anti-parallel dipole image coupling set by tAl2O3. The full width half-maximum (FWHM) and the longer wavelength reflection dip increased with the inclination of spacer layer thickness (tAl2O3), as depicted in Fig. S1(c). The narrow FWHM and minimum reflection dip provide higher spectral selectivity, lower crosstalk, and higher spectral purity. Therefore, considering the narrow FWHM and the minimum reflection dip, we adopted the tAl2O3 of 15 nm for our study, as shown in Fig. S1(c) by line cutting. Figure S1. The spatial distribution of electric field for (a) tAl2O3 = 0 nm and (b) tAl2O3 = 15 nm. (c) Spatial distributions of the transmission of an MDM nanostructure with a top cladding layer on the nanostructure for varying the spacer thickness. Here, the td of 15 nm was considered. The color bar is provided in the inset. 12 Sarker et al. S2. Simulation Information The simulation information of the FDTD is provided in Table S1. Table S1. FDTD simulation parameters used to investigate the MDM nanostructure Parameter description Quantities/situation Simulation type 3D Plane wave type Bloch/periodic Spatial cell size (dx = dy = dz) 10 nm Simulation time 1000 fs Temperature 300 K Mesh accuracy 2 Override mesh size (x, y, and z) 1, 1, and 0.4 nm Background index 1 Boundary conditions (x, y, and z) Periodic, Periodic, and PML Number of PML layers Steep angle (12) S3. Fabrication technique and its imperfection imapct on resonance To fabricate the proposed MDM nanostructure, we will use a standard process that we already developed in our previous work [8, 5]. The fabrication will be based on three main steps: materials deposition, e-beam lithography, and lift-off. The device will be fabricated on a glass substrate using high-resolution electron-beam lithography (EBL). The substrate is first cleaned by sonication in acetone and isopropyl alcohol (IPA). A chromium adhesion layer is then deposited, followed by a 100-nm-thick gold ground plane and a 100-nm Al2O3 spacer layer will be deposited using atomic layer deposition (ALD). To minimize sidewall roughness during the lift-off process, a highresolution positive-tone bilayer resist consisting of methyl methacrylate (MMA EL-8) and polymethyl methacrylate (PMMA A2) is used. After exposure and development in a methyl isobutyl ketone (MIBK) solution, a 3-nm chromium adhesion layer and a gold film are sequentially deposited via electron-beam evaporation. Finally, the resist is removed using a photoresist remover, completing the first device layer. Figure S2. Reflection spectra of our MDM nanostructure for imperfection analysis of parameters (a) P, (b) dd, and (c) tAl2O3. Fabrication imperfection analysis variates the dimension at ∆p = ∆d = ±5 nm and ∆t = ±1 nm. To analysis the fabrication imperfection and acceptable variation range of P, dd, and tAl2O3, we performed a study varying a slight change of ∆p, ∆d, and ∆t in these parameters to observe the possible shifts of the resonances in the reflectance spectra, as shown in Fig. S2. In the study, the parameters ∆p and ∆d equal to ±5 nm and with the parameter ∆t equal to ±1 nm were considered. As shown in Fig. S2(a), the longer wavelength resonance is insensitive to P, and the shorter wavelength resonance exhibits only a negligible shift for ∆p of ±5 nm. Thus, standard lithographic tolerances in P do not compromise device performance. In contrast, Figs. S2(b) and (c) show that the shorter-wavelength resonance is largely insensitive to dd and tAl2O3, whereas the longer wavelength resonance shifts with varying these parameters, as expected. Based on this sensitivity, devices remain within specification for variations up to ±5 nm in dd and ±1 nm in tAl2O3, with both resonances still clearly resolved. The high resolution of Au nanodisk and thin deposition of the spacer layer can be achievable by employing cleanroom standard lift-off and ALD process. 13 Sarker et al. Together, from these results, it can be inferred that the shorter wavelength resonance is associated with lattice or diffraction effects set by P, whereas the longer wavelength resonance is governed primarily by disk size and anti-parallel dipole image coupling set by dd and tAl2O3. S4. Rayleigh Anomaly We analyzed the impact of the P on the resonance wavelength and compared the resonance wavelength from our theoretical and numerical calculations, as shown in Fig. S3. The obtained λres, 1 of 878.779 nm showed a good agreement with the theoretically calculated λres, 1 of 875 nm for P = 500 nm. Figure S3. Theoretically and numerically obtained resonance wavelength of our MDM nanostructure for different Ps. S5. Impact of Period and incidence angle We analyzed the impact of the period (P) and incidence angle (θ) on the resonance wavelength of our proposed MDM nanostructure, as shown in Fig. S4. As shown in Fig. S4, the longer-wavelength resonance exhibits negligible dependence on P and θ, because of its origin in the localized anti-parallel dipole images. In contrast, the shorter-wavelength resonance arises from the Rayleigh anomaly (RA) and therefore varies with P and θ. As the P increased, the shorter-wavelength resonance λres, 1 of the MDM array shifted to longer wavelengths, as illustrated in Fig. S4(a). Here, normal incidence was considered for all simulations. For P = 500 nm, The numerically obtained λres, 1 of 878.8 nm was in good agreement with the theoretical value of 875 nm from RA phenomena. When the θ is increased, the shorter resonance splits into two branches due to the RA's ±sinθ angular dependence, as shown in Fig. S4(b). These results confirmed that the shorter wavelength resonance in the nanostructure arises from the RA phenomenon. 14 Sarker et al. Figure S4. The impact of (a) P and (b) θ on the reflection spectra of our MDM nanostructure. 15
2510.14928
Instruction Set Migration at Warehouse Scale ERIC CHRISTOPHER, KEVIN CROSSAN, WOLFF DOBSON, CHRIS KENNELLY, DREW LEWIS, KUN LIN, MARTIN MAAS, PARTHASARATHY RANGANATHAN, EMMA RAPATI, BRIAN YANG, Google, USA Migrating codebases from one instruction set architecture (ISA) to another is a major engineering challenge. A recent example is the adoption of Arm (in addition to x86) across the major Cloud hyperscalers. Yet, this problem has seen limited attention by the academic community. Most work has focused on static and dynamic binary translation, and the traditional conventional wisdom has been that this is the primary challenge. In this paper, we show that this is no longer the case. Modern ISA migrations can often build on a robust open-source ecosystem, making it possible to recompile all relevant software from scratch. This introduces a new and multifaceted set of challenges, which are different from binary translation. By analyzing a large-scale migration from x86 to Arm at Google, spanning almost 40,000 code commits, we derive a taxonomy of tasks involved in ISA migration. We show how Google automated many of the steps involved, and demonstrate how AI can play a major role in automatically addressing these tasks. We identify tasks that remain challenging and highlight research challenges that warrant further attention. 1 Introduction Migrating large codebases to a new Instruction Set Architecture (ISA) is a major engineering challenge. Examples include Apple’s migration from PowerPC to x86 and later to Arm [4], as well as the adoption of Arm by major hyperscalers (such as Amazon, Google, Microsoft). While there are anecdotal claims regarding the complexity and efforts required for such migrations [2, 18, 19], to our knowledge, there is no systematic analysis of what these ISA migrations entail, and how they are impacted by modern technologies such as improved software engineering tools and artificial intelligence (AI). In this paper, we perform such a systematic analysis for the migration of a multi-billion line codebase from x86 to Arm at Google. Historically, the conventional wisdom has been that the biggest challenge in ISA migration in- volves translating machine code between ISAs [2, 19]. Correspondingly, there has been a significant amount of work on static [23] and dynamic [12] binary translation that automatically rewrites binaries compiled for one ISA to another. Binary translation was the main problem when software was distributed as binaries and source code was not usually available. However, modern ISAs are generally well-supported in upstream compilers, runtime libraries, and the Linux kernel. As a result, modern compilers mostly “just work” for a new ISA, and previous ISA migrations have smoothed the path to packages supporting cross-compilation by default. For example, 98% of Debian packages build for RISC-V, although it only became an official Debian architecture in 2023 [1]. Perhaps surprisingly, this does not mean that ISA migration is no longer a challenge. While code translation is not the main issue anymore, we find that modern ISA migration involves many usually-simple, repetitive, automatable tasks such as updating build scripts or fixing floating-point issues, which AI can increasingly facilitate. In this paper, we analyze a large-scale ISA migration at Google that added Arm support alongside x86. We focus on the following research questions: (1) What are the tasks that are involved in a modern ISA migration? (2) Which tasks can be automated and how can modern AI help? (3) Which tasks are difficult and are good targets for future research? To answer these questions, we provide what we believe is a first-ever detailed breakdown and taxonomy of large-scale ISA migration tasks. Using state-of-the-art LLMs, we analyze and categorize a corpus of 38,156 commits that constitute our real-world migration. We quantitatively evaluate the capability of current tools, including AI models, to perform these tasks automatically. We arXiv:2510.14928v1 [cs.SE] 16 Oct 2025 2 systematically identify the strengths and weaknesses of current automated tools, and highlight areas of future work and improvement. We believe that this work highlights research opportunities for the academic community and revisits long-standing assumptions around ISA migrations. Specifically, we contribute the following insights: 1) The complexity of ISA migrations is not in code translation but involves a number of different tasks, many related to rewriting BUILD and configuration files; 2) Many of these tasks are highly automatable; 3) Many of the tasks that are not automatable only need to be performed once when going from a single ISA to multiarch; 4) Of the remaining tasks, many can be performed by modern AI, but some challenges remain. 2 Background & Related Work There are a number of reasons why organizations have performed large-scale ISA migrations. First, many ISAs have gone extinct over the years (e.g., Alpha, MIPS, SPARC, Itanium, VAX). Second, with the adoption of Android and iOS, more codebases were ported to Arm to be used in mobile applications. Third, Apple Macs went through successful migrations from PowerPC to x86, and most recently from x86 to Arm to support custom Apple Silicon. Finally, major cloud hyperscalers have been migrating large codebases from x86 to Arm as well. In this context, migration does not only refer to the act of getting software to build on a new architecture but to reaching parity in terms of performance, security and stability. The most closely-related work in the academic community falls into two categories. First, there is a significant amount of work on static and dynamic binary translation from one ISA to another [12, 23]. Static or dynamic binary translators can serve as a bridge to handle the long tail of a software ecosystem, but eventually they must be phased out to avoid carrying forward technical debt (e.g., Apple will actively phase out its Rosetta dynamic binary translation system in 2027 [3]). In cloud computing, where compute is commoditized, developers seek as much efficiency as possible, and recompiling onto the new ISA is the best route to maximize the compiler’s options for performance. We thus forewent binary translation altogether in our deployment. Second, there is a significant amount of work on automatically applying edits to code, such as for performance optimization [16], fixing security issues [14], or correcting bugs [5]. As we will see, these are common tasks that are part of a successful ISA migration. 3 Google’s x86 to Arm Journey We now analyze Google’s multi-year effort to port a substantial portion of Google’s server applica- tion ecosystem from x86 to Arm, enabling simultaneous support for both. We start by describing Google’s environment and provide a step-by-step analysis of our ISA migration. 3.1 Google’s Software Ecosystem Google’s codebase is organized as a monorepo containing billions of lines of code [20]. Individual applications and libraries reside in various directories. These folders also contain metadata files, e.g., to indicate code owners or configure continuous integration (CI) testing [27]. Building with Bazel. Builds use Bazel [7], a highly configurable build system. BUILD files describe how binaries, libraries, and tests are built from source files. Most code is covered by our primary continuous integration system, “TAP” (Test Automation Platform [7]), and it is standard for TAP to gate releases. Bazel’s builds and tests (including TAP) run on a shared set of machines called Forge. Forge’s scale and cache enables Google to compile everything needed for a binary from scratch on every build, including fundamental dependencies like the Python interpreter. Creating releases with Blueprints. Google distinguishes between binaries and releases (named “MPMs” for the “Midas Package Manager” that stores them globally [7]). Releases are bundles Instruction Set Migration at Warehouse Scale 3 of binaries and data that are ready to be deployed in Google’s clusters, akin to a package in a Linux distribution. Releases are defined by Blueprint files, which are handwritten or managed by other systems that standardize releases and configurations. A system called Rapid [6] consumes Blueprints, runs CI tests, including TAP, and builds releases for server-side packages. Running applications on Borg. Borg is a custom cluster management service for the Google fleet that runs nearly all Google services [26]. Applications are deployed to Borg through configuration files that define the MPMs needed to run a service, runtime parameters, and scheduling constraints. MPMs can be rolled forward and backward safely because they are almost entirely hermetic. Multiarch Support. Borg was heterogeneous even before the Arm migration. Borg has had dozens of different types of CPUs over the years, and services run on machines that can be as much as ten years old. Unless an owner adds specific constraints, a job can be scheduled on any machine with an architecture-compatible binary. During development, engineers can request builds for multiple architectures (e.g., Arm, K8, Haswell) and expect multiarch MPMs. They can also request tests to run on each target hardware. At release time, owners of packages can also configure Blueprints to target one or more ISAs. Owners expect to get a mix of different kinds of machines with different performance profiles and, in some cases, ISAs. Shifting down and Large Scale Changes (LSCs). Google has moved to a “shift down” approach to development [11], where developers only focus on one level of the stack and other issues are abstracted and/or automated for them. For example, Bazel, TAP, and Rapid mean that developers do not generally need to worry about the specifics of CI. In addition, a healthy automated testing culture means everyone can change everyone else’s code without frequent breakages. This enables Large Scale Changes (LSCs) [27] which change code owned by many different teams and can affect thousands of files at once. In cases where an LSC is considered low risk, it can be approved centrally and submitted efficiently without asking individual teams. To get approval from many owners at once, Google has developed Rosie [27] which allows engineers to create a very large commit and shard it into tens, hundreds, or thousands of smaller commits split up by owner. 3.2 Life cycle of an ISA migration Moving an individual package from single arch (x86) to multiarch support requires several steps: (1) Test: Fix tests (and builds) that break when run with the new ISA. Since anyone can build and test any code in our monorepo, it is easy to identify tests that break and require fixes. (2) Set up multiarch CI: This requires modifying the corresponding Blueprint files to ensure that no additional regressions are introduced (often simultaneous with the next step). (3) Configure releases: This modifies Blueprint files to make releases multiarch by default. (4) Roll out new binaries: Run the multiarch packages on machines of the new ISA and assess performance and stability, addressing issues as needed. (5) Full production: Allow production jobs to be scheduled on machines of the new ISA. While these steps are the same for all packages, the issues encountered within each step vary widely across applications and throughout the different phases of our ISA migration from x86 to Arm. Often, this involves performance-optimizing code for the new platform, which can happen in parallel to these steps. We observe parallels to Uber’s reported porting workflow [17]. 3.3 Phase 1: Large users We started our Arm migration with a small set of large users, such as Spanner [9], BigQuery [15] and Bigtable [8]. These migrations were hands-on with a small team, and required weekly meetings 4 and tracking bugs. Once tests passed, rollout was manual, with very careful performance and load testing, and gradually removing scheduling constraints on a per-job basis. During this phase, a number of issues in these workloads were surfaced and addressed. Examples include: 1) Replacing x86-specific intrinsics; 2) Replacing long double, which differs between x86 and Arm, with absl::float128; 3) Brittle tests (e.g., due to exact floating point equality checks); 4) x86-specific flags; 5) Memory ordering issues hidden by x86; 6) Out-of-memory errors, often due to heap limits being tuned for x86; 7) Multiarch MPMs exceeding the capacity limits of our infrastructure; 8) Unsupported dependencies, and loading of unsupported dynamic libraries; 9) Jobs not getting scheduled due to unsatisfiable scheduling constraints in Borg configurations. This list was surprising to the teams involved—initially, there had been a perception that porting these large and mature codebases to Arm would be a herculean task, and that the very different toolchains would result in myriad difficulties. However, most issues involved simple changes or fixes, many of them in configuration files. At the same time, these changes were surprisingly pervasive, as evident by the large number of commits. It is therefore not the case that most software compiles and runs on Arm without modifications; it is that these modifications are of a different kind than expected initially. For example, it is not unusual that, for a given software package, almost nothing builds initially, suggesting that large and pervasive changes are required. However, simple fixes to a number of shared dependencies often unblocks many of them at once. 3.4 Phase 2: Everybody else To take full advantage of Arm in the data center, migrating only the largest workloads is insufficient. To make maximum use of available capacity, Borg needs to be able to schedule workloads flexibly across platforms, packing large and small users onto machines efficiently. If only a small subset of services can run on Arm, it will result in underutilization of those machines. We note that the distribution of workloads at Google is very flat: Although our top 50 users are very large, they only represent ≈60% of running compute [13]. Addressing this long tail requires porting over 100,000 packages and billions of lines of code. This makes the Phase 1 approach of working directly with customer teams infeasible. In fact, even just talking to each team would be prohibitively expensive. The second phase of the x86 to Arm migration therefore focused on automating and scaling the migration of these workloads, while minimizing involvement from the teams themselves. It is this phase of Google’s x86 to Arm migration that we mostly focus on. So far, we have ported about 30,000 packages, accounting for a significant portion of CPU cycles. We found that effectively making use of Arm hardware did not require porting all workloads. 4 Analyzing an ISA Migration To fully understand what is involved in an ISA migration, we now analyze the full range of tasks involved in Google’s x86 to Arm migration (RQ1). As a monorepo, any change – be it to code, configurations or documentation – is tracked as a commit in our repository’s history. Further, these relevant commits were marked with a keyword that indicates they were part of this migration, allowing us to extract them after the fact. We thus identified a relevant set of 38,156 commits. Analyzing these commits manually would have been cost-prohibitive. We instead used a variant1 of Gemini 2.5 Flash to analyze these commits at scale. We passed the commit messages and code diffs into the LLM’s 1M token context window in groups of 100 at a time. We prompted the model to pick a set of 20 categories for each batch. Then, we took all 400 × 20 categories and asked Gemini to consolidate them into 50. Further manual iteration over model outputs led to a final list of 16 1We use Gemini 2.5 fine-tuned on a corpus of internal data, including code and documentation. This corpus may include material related to our Arm migration, but this represents a sufficiently small subset that recitation is not an issue. Instruction Set Migration at Warehouse Scale 5 Change Category & Description Commits Total LoC LoC per Commit Automation 1. : Introducing or modifying code blocks that are conditionally compiled or executed (e.g., using #ifdef __aarch64__, runtime CPU feature checks). Examples: Different syscall usage, platform-specific API calls. 236 16,235 9 [1, 330] 22% / 28% 2. : Replacing or providing alternatives for x86-specific intrinsics (e.g., SSE, AVX) with Arm equivalents (NEON), or rewriting assembly language sections. 74 4,454 10 [1, 209] 1% / 0% 3. : Modifying code to handle issues arising from differences in data type sizes, alignment requirements, or byte order (endianness) between x86 and Arm. 38 2,574 9 [1, 324] 0% / 0% 4. : Fixing code that makes assumptions about memory ordering, atomicity, or thread synchronization behavior that differ on Arm. 12 49 2 [2, 12] 0% / 0% 5. : Refactoring algorithms or code patterns specifically to improve execution speed, reduce latency, or improve efficiency on Arm microarchitectures. This is beyond basic correctness. 18 2,080 13 [4, 337] 0% / 0% 6. : Changing the test code itself, updating golden files, or adjusting test assertions to be compatible with Arm, reflecting valid behavioral differences rather than bugs. 276 37,052 5 [1, 147] 3% / 1% 7. : Configuring which tests run on Arm, adjusting test timeouts, memory/CPU limits, and sandboxing; excluding tests not suitable for Arm. 1,303 13,783 1 [1, 37] 53% / 8% 8. : Changes to BUILD files, Bazel settings, genmpm rules, Blueprints, TAP, and release platform configs to support multi-architecture builds, testing, and releases. 32,204 139,611 1 [1, 9] 95% / 63% 9. : Modifying Borg configurations, allowlists, admission control, resource allocation within jobs, and service enablement for running on Arm in non-prod and production. 757 26,581 10 [1, 136] 1% / 0% 10. : Managing quota, dedicated machines, security policies, storage, network, and scaling for the Arm migration. Includes cluster state management and kernel/platform rollouts. 381 32,159 8 [1, 163] 1% / 0% 11. : Setting up dashboards, alerts, collecting metrics, analyzing performance benchmarks, and classifying errors for the Arm migration. 645 115,343 16 [1, 525] 0% / 0% 12. : Creating and enhancing scripts, tools, and automation to assist with any stage of the Arm migration. 644 125,536 29 [1, 808] 2% / 1% 13. : Creating and updating guides, best practices, and debugging information. 369 24,115 13 [1, 260] 0% / 0% 14. : Reverting changes and removing obsolete code, configurations, or data from the migration process. 940 163,042 2 [1, 200] 58% / 5% 15. : Platform-specific adaptations for databases, experiment frameworks, or other unique services. 119 22,501 7 [1, 747] 3% / 0% 16. : Defining and implementing processes to ensure multi-architecture releases meet quality and performance standards on Arm, including using CHAMP data. 123 8,178 14 [1, 284] 5% / 2% 17. 17 1,328 39 [2, 223] 65% / 84% Code Adaptation & Correction Platform-Specific Conditional Code Intrinsic and Assembly Code Porting Data Representation & Alignment Fixes Memory Model & Concurrency Adjustments Performance-Driven Code Optimization for Arm Test Adaptation & Configuration Test Logic, Data, and Assertion Modifications Test Execution Environment & Scope Build, Deployment & Infrastructure Configuration Build, Packaging & CI/CD Configuration Borg & Runtime Environment Configuration Infrastructure Resource Management & Provisioning Supporting Processes & Tools Monitoring, Alerting & Performance Analysis Migration Tooling & Automation Development Documentation & Knowledge Management Rollbacks & Cleanup Specialized Service Configuration Release Qualification & Validation Uncategorized Fig. 1. Categories of commits in Google’s x86 to Arm migration. LoC per commit shows median and 90% CI. Automation shows the fraction of commits/LoC generated using large-scale changes (Section 5.1). categories (Figure 1)2. Once this list was finalized, we ran the model on all commits again and had it assign one of these 16 categories to each of them (as well as an additional “Uncategorized” category, which improved stability by catching outliers). Figure 2 shows examples of each category. Commits fall into four overarching groups: 1) Code changes, 2) Test changes, 3) BUILD files and configurations, and 4) Supporting processes and tools. In total, our commits updated around 700K lines of code. While the vast majority (84%) of commits are related to updating build or configuration files (Category 8), these commits account for only 19% of lines of codes updated. We also see a substantial number of lines (17%) spent on migration tooling. A large portion of this work is only required once and can be reused in future ISA migrations. All categories contain a meaningful number of commits, supporting our claim that ISA migration is a multifaceted engineering challenge where no single type of task dominates. We also see that code related commits (Categories 1-5) only account for 1% of commits and less than 4% of lines of code, refuting the conventional wisdom [2] that code translation accounts for most of an ISA migration. In Section 5, we analyze how automatable these commits are. We also analyze the timeline of our ISA migration (Figure 3). We observe that at the start of the migration, most commits were in tooling and test adaptation, aligned with Phase 1 (Section 3.3). Over time, a larger fraction of commits became around code adaptation, which can be seen as a phase when there is still a need to update code in common dependencies and address common issues in code and tests. Eventually, the fraction of these kinds of commits declines and in the final phase of the process (Section 3.4), almost all commits are configuration files and supporting processes. We also observe that in this later phase, the number of merged commits rapidly increases, capturing the scale-up of the migration. 2The descriptions in Figure 1 are almost entirely the model’s output, with some minimal edits to remove internal information. 6 1. Platform-Specific Conditional Code #define _CONFIG_H_ // distro #define SOLIB_EXT ".so" +#if defined(__aarch64__) +#define ARCH "aarch64" +#elif defined(__x86_64__) #define ARCH "x86_64" +#else +#error Unsupported architecture +#endif Note: Multiarch a config header 2. Intrinsic and Assembly Code Porting +#ifdef __aarch64__ +#include "third_party/sse2neon/sse2neon.h" +#else #include <emmintrin.h> +#endif ... +#ifdef __aarch64__ +#define GETROUND() (_MM_GET_ROUNDING_MODE()&VM_SSE_ROUND_MASK) ,→ +#define SETROUND(x) (_mm_setcsr(x| (_MM_GET_ROUNDING_MODE()&~VM_SSE_ROUND_MASK))) ,→ +#else #define GETROUND() (_mm_getcsr()&VM_SSE_ROUND_MASK) ,→ #define SETROUND(x) (_mm_setcsr(x| (_mm_getcsr()&~VM_SSE_ROUND_MASK))) ,→ +#endif Note: Port code using intrinsics 3. Data Representation & Alignment Fixes -char suffix_bytes[k FileSuffixSize]; +// NOTE: alignas(uint32_t) is used to ensure proper memory alignment when ,→ +// reinterpreting the char pointer to a uint32_t pointer. ,→ +alignas(uint32_t) char suffix_bytes[k FileSuffixSize]; ,→ Note: Add an alignas modifier 4. Memory Model & Concurrency Adjustments stripCalcLatch = new CountDownLatch(1); calcAwaitingLatch = new CountDownLatch(1); +ranStripCalc = new AtomicBoolean(false); appKey = new PlainKey("fakeKey"); calculationContext = new CalculationContext(new FakeExtraData<>()); ,→ calcEnv = new InProcessCalcEnv(appKey, ritzInfo, calculationContext); ,→ ... - ranStripCalc = true; + ranStripCalc.set(true); ... Note: Switch to atomic boolean to avoid races (Java) 5. Performance-Driven Code Optimization for Arm ... template <> inline uint64_t MathUtil::SafeRound(float x) { uint64_t result; __asm__("fcvtau %x0, %s1" : "=r"(result) : "w"(x)); ,→ return result; } template <> inline int64_t MathUtil::SafeRound(double x) { return vcvtad_s64_f64(x); } ... Note:Implement math routines for aarch64 6. Test Logic, Data, and Assertion Modifications -EXPECT_THAT(distribution.mean(), DoubleEq(event_set.SpaceUsedLong())); ,→ +EXPECT_THAT(distribution.mean(), DoubleNear(event_set.SpaceUsedLong(), 1e-10)); ,→ Note: Adjust the floating point sensitivity 7. Test Execution Environment & Scope tags = [ "cpu:4", + "requires-mem:20g", ], deps = [ Note: Increase a test’s memory limit. 8. Build, Packaging & CI/CD Configuration cc_library( name = " -vector", hdrs = [ " -vector.h", ], - copts = [ - "-mavx2", - "-mbmi2", - ], visibility = [ Note: Remove x86-specific build flags 9. Borg & Runtime Environment Configuration access_level = - pa_proto.PlatformAllowlistAllowedCollection. INITIAL_LIMITED_QUALIFICATION ,→ ,→ + pa_proto.PlatformAllowlistAllowedCollection. EXPANDED_PROTECTION ,→ ,→ }, { collection_key = { user = ' -ui-mixer' Note: Change allowlist to extend machine exposure 10. Infrastructure Resource Management borg_pool ` ` = arm-dedicated- -pool {} borg_pool ` ` = arm-dedicated- -pool {} borg_pool ` ` = arm-dedicated- -pool {} +borg_pool ` ` = arm-dedicated- -pool {} } Note: Add a new cell for testing 11. Monitoring, Alerting & Performance +perf_ _workflow_test( + name = "borg_workflow_arm_one_build_multi_runs", ,→ + configuration = { + "postprocessing_config": { + "microbenchmark_ _data_uploading_config": DEFAULT_MICROBENCHMARK_ _DATA_UPLOADING_CONFIG, ,→ ,→ ,→ + }, + }, + # To run workflows on ARM, need to have "--cpu=arm" here. ,→ + extra_postprocessor_blaze_flags = ["--cpu=arm"], ,→ + phases = [ ... Note: Add a new Arm-specific testing infrastructure 12. Migration Tooling & Automation ... +EXPORTED_ACTIONS = { + automation_structure_pb2.ACTION_PARSE_BLUEPRINT: ,→ + parse_blueprint, + automation_structure_pb2.ACTION_RUN_TEST: + run_test, + # Used for testing + automation_structure_pb2. ACTION_WRITE_OUTPUT_FROM_INPUT: ,→ + write_output_from_input, + automation_structure_pb2. ACTION_WRITE_OUTPUT_FROM_CONFIG: ,→ + write_output_from_config, +} ... Note: Part of a new Blueprint parser 13. Documentation & Knowledge Management ... + * Borg config managed by the pilot dev team specifically selects the pilot ,→ + machines for specific pilot jobs. -* The allowlist of users permitted to run on the pilot machines is managed via ,→ -* the ` -logs` usermap: ... Note: Evolving the docs for the Logs test 14. Rollbacks & Cleanup package_name = " ", srcs = [": _archive"], mpm_tags = ["rapid= "], -platforms = ["// /borg:all"], deps = [":oneday_ _borgfiles"], ) Note: Rolls back a problematic data MPM change 15. Specialized Service Configuration + +// a dedicated borg pool consists of and machines for A/B testing ,→ +// - : for x86 100% environment ,→ +// - : for ARM 100% environment ,→ + _info arm-dedicated- = _default_ _info(' ') { ,→ + cell = ' arm-dedicated- ' + user = ' arm-dedicated' + scratch_path_prefix = '/cns/ arm-dedicated' ,→ + remote_path = '/cns/ /home/' + platform = ' ' +} Note: Add a configuration for a Spanner pilot 16. Release Qualification & Validation ... WHEN champ_state = 'Incompatible' - THEN PrioritizedStatus(1, 'NON_COMPATIBLE_VIA_CHAMP') ,→ + THEN PrioritizedStatus(0, 'NON_COMPATIBLE_VIA_CHAMP') ,→ WHEN champ_state = 'Compatible' - THEN PrioritizedStatus(1, 'QUALIFIED_VIA_CHAMP') ,→ + THEN PrioritizedStatus(0, 'QUALIFIED_VIA_CHAMP') ,→ ... Note: SQL change for improved tracking of qualification Fig. 2. Specific code examples for each category. Finally, we want to understand how these different categories of commits differ from one another. We observe that median commits in most categories are less than 20 LoC, with many single-line commits. However, we also observe very large individual commits that change 10,000+ LoC and skew the averages. We manually inspected these commits to understand their origin and found that these commits do not typically represent more work since they are conceptually similar to large numbers of simple, one-line commits. Overall, there are 19 commits that cumulatively account for 238,289 LoC (32.4%) of total lines changed and that are trivial. Examples include: • Remove a porting tool once it was no longer used – 57K LoC (Category 12) • Update a list of microbenchmark targets – 23K LoC (Category 11) • Add several very large test vectors for a coverage tool – 15K LoC (Category 6) Instruction Set Migration at Warehouse Scale 7 Fig. 3. Categories of commits over time. In summary, we find that most commits related to migration are small, and that the largest commits often change very large lists or configurations and are not inherently complex. We also find that size alone does not measure difficulty. Finally, we observe that some of the commits (particularly in “Supporting Processes & Tools”) could likely be reused in a subsequent multiarch migration. 5 Automating ISA Migrations Now that we have established the tasks that are part of an ISA migration, we can explore how automatable each of these tasks is (RQ2) and how novel automation approaches can facilitate them. 5.1 ISA Migration Automation at Google We already employ a number of automation tools at Google today that automate a large portion of the ISA migration process (83.82% of commits and 14.15% of LoCs). Large-Scale Changes (LSCs). The key piece to ISA migration automation is Rosie (Section 3.1), which allows us to programatically generate large numbers of commits and shepherd them through code review. This includes running affected TAP projects, requesting code reviews by code owners, and submitting each commit once all tests pass. We find that 31,984 of our commits were generated by Rosie, signaling automation. However, we note that these commits only account for 14.15% of lines of code, indicating that most of these commits are very small. Figure 1 shows the fraction of each category that was generated using automated tools. For example, one major LSC adds the following line to Blueprint files of projects, configuring all of its tests and releases for Arm: arm_variant_mode = ::blueprint::VariantMode::VARIANT_MODE_RELEASE, Sanitizers & Fuzzers. While not limited to ISA migrations, fuzzers and LLVM sanitizers such as AddressSanitizer [21], MemorySanitizer [24] and ThreadSanitizer [22] are key enablers of our migration. Even before Arm adoption, Google routinely ran all TAP tests with these tools enabled, which turn latent errors such as a memory corruption, memory leak, or race condition into a debuggable fault. Application owners regularly triage and fix these faults, causing us to sidestep many common differences in execution between x86 and Arm (e.g., a data race may be hidden by x86’s TSO memory model). Catching these kinds of issues ahead of time avoids debugging non-deterministic and hard-to-debug behavior when recompiling to a new ISA. Continuous Health Monitoring Platform (CHAMP). The final step in our automation is CHAMP, which assesses Arm-built applications on Arm server hardware. It continuously monitors health metrics to detect whether behavior differs from x86 instances of the job (e.g., significantly higher RPC error rates or crashes). If so, it automatically marks the job as ineligible for Arm, files a bug for its owners to follow up, and automatically retries in 30 days. It scales up the fraction of Arm instances of the application task by task, job by job, and cell by cell following Google’s production 8 Tools Set of build and test targets to fix Orchestrator Agent SUCCESS/FAIL Code Search Clean BUILD file Build Target Cat File Run Test Find & Replace Test Fixer Agent Build Fixer Agent Success/fail Invoke Success/fail Invoke Invoke Output (e.g., code) Invoke Output (e.g., build log) Invoke Output (e.g., test log) Success/fail Invoke 1. Platform-Specific Conditional Code 2. Intrinsic and Assembly Code 3. Data Representation & 4. Memory Model & Concurrency 5. Performance- Driven Code 6. Test Logic, Data, and Assertion 7. Test Execution Environment & 0 25 50 75 100 SUCCESS FAIL Fig. 4. Agentic flow and the success rate of the agent. principles to limit SLO risk. CHAMP is not needed for new microarchitecture deployments (either x86 or Arm), as the behavior, performance differences, and associated issues are relatively minor. However, auto-qualification of Arm binaries was necessary due to the increased incidence of issues. Using CHAMP, it is no longer necessary to manually shepherd every binary through qualification. Instead, after updating project configurations to build Arm releases, this process is now automatic. 5.2 Reliability of the Automation Approach Combined, these tools allow for a mostly-automated approach where LSCs enable Arm for different builds and releases, which are then automatically qualified using CHAMP. To understand the stability of this approach, we analyze LSCs targeting a standardized release management system. These LSCs modified release configurations to bring this system’s percentage of Arm-qualified applications from 4.8% to 59.6%. The rate of applications that were rolled back in early testing was 1.8% (which dropped to 1% after fixing bugs), and less than 0.8% in the final phase. Early in the migration, after ≈300 MPMs, we had a 5% refusal rate (code owners deciding not to migrate). During scale up, this dropped to 0.6% after ≈600 additional MPMs. In the final phase, the commits were globally approved, with no refusal. We found that acceptance rate was strongly influenced by careful workload targeting, users gaining trust in the automation, and messaging that anticipated worries and objections. 6 Automation of ISA Migrations with AI While LSCs and CHAMP automate a large part of the porting process, there are limits to this approach. They can edit build and configuration files, as well as automatically qualify Arm binaries for deployment. However, standard LSCs are fixed-function pipelines. They are not flexible to respond to unexpected errors or other issues that occur at any stage of the process, be it during testing or in production. Modern generative AI techniques represent an opportunity to automate the remainder of the ISA migration process. We built an agent called CogniPort which aims to close this gap. CogniPort operates on build and test errors. If an Arm binary does not build or a test fails at any point in the process, the agent steps in and aims to fix the problem automatically. The agent consists of three nested agentic loops (Figure 4). Each loop executes the LLM from Section 4 to perform one step of reasoning, followed by a tool invocation—i.e., a function call (the loop terminates once the agent emits a special ‘finish’ call). This tool executes, and its outputs are attached to the agent’s context. For example, there are tools for building and returning the build log, running test(s) and returning the test log, and running a tool that fixes errors in BUILD files. The agent also has tools to search through code and make modifications. The outermost agent loop is an orchestrator that repeatedly calls the build fixer agent and/or the test fixer agent depending on the state of the workspace. The build fixer agent tries to build a given Instruction Set Migration at Warehouse Scale 9 Fig. 5. AI-assessed automatability of each category (1 = trivial, 5 = probably unsolvable), as well as actual fraction of commits and LoCs in each category that were generated using automated tools. target and makes modifications to files until the target builds successfully. The test fixer agent tries to run a given test and makes modifications until the test passes. In both cases, the agent can time out via a step limit or give up early by calling ‘finish‘. To evaluate the agent, we take historic commits from our data set, revert them and then evaluate whether the agent is able to fix them. We note that not all of our categories are suitable for this approach—it only applies to Code & Test Adaptation (categories 1-8). To evaluate the agent, we further narrow down our data set by only picking commits that can be cleanly reverted and that have identifiable build or test targets. This results in a benchmark set of 245 commits. We see that the overall success rate is 30%, with test fixes, platform-specific conditionals, and data representation fixes having the highest success rates. Memory model, test execution environment, and performances fixes are most difficult (though based on a very small sample size). Overall, however, this indicates that AI achieves a reasonably high success rate. We note that we consider these results directional: While we analyzed an arbitrarily chosen subset of outputs to ensure the validity of results and confirm that we are not observing recitation from training, the evaluation is not perfect and may miss cases where, e.g., a fix is incorrect but not caught by a test or where there is other information leakage (e.g., a subsequent commit made the original fix easier or reverting only the commit itself makes it easier to root cause an issue than if the entire fix was reverted). 7 Discussion & Research Challenges Overall, we see that ISA migrations involve a wide range of different tasks. Many of these tasks are highly automatable. BUILD file and configuration changes are almost fully automatable, while code changes and tests are partially automatable with AI. In addition, there are many changes in areas like build and test infrastructure, spinning up a new hardware platform whether or not it is a new ISA (e.g., categories 11 and 15), or deprecating old code (category 14) that needed generalizations for multiarch support, but these scale across many architectures and will not be required again. This refutes the conventional wisdom that the main challenge of ISA migration is translation, and also that migrating a large software ecosystem to a new ISA is a prohibitively large amount of work, particularly with modern AI. This raises the question: What are the remaining challenges, and are there research opportunities in closing the remaining gaps (RQ3)? To answer this, we analyze our data set of commits to identify changes that are challenging to automate. Once again, we use AI. We used our LLM to assess how automatable different categories are by sampling up to 50 commits in each category and using an LLM to grade them (1 = trivially automatable, 5 = difficult, even for advanced AI). This is not a conclusive result, but directionally 10 tells us how difficult each category is, according to the LLM itself (Figure 5). By manually inspecting the outputs, a picture emerges of which changes remain the most challenging. First, we find that the LLM confirms the categories that are already automatable today—i.e., BUILD and configuration files, test execution environment—as automatable. Second, categories 1-7 (code & test adaptation) stand out as problems that have a significant fraction of commits ranked as 3 and 4—indicating problems that are hard but not impossible. Examples of these include: • ISA-Specific Vector Code: Writing ISA-specific, performant vector code is a hard problem, and is actively investigated by the research community [25]. While current AI can translate simple routines, generating complex kernels exposes a complex search space that goes beyond a simple build-repair loop. • Deep Performance Optimizations: Performance optimizations sometime require major refactors, algorithmic changes and intrinsics. There is significant existing work towards applying LLMs to these problems. At Google, we have a system called ECO that is used for automating some of these optimizations [16], including for code running on Arm. • Difficult Corner Cases: We saw a number of corner cases that require obscure knowledge beyond the code itself. For example, we saw commits that worked around an Arm compiler bug, addressed a hash function behaving differently on Arm and x86, and fixed bugs that were unrelated to the Arm migration but were not previously triggered. It is plausible that an agent that can search documentation and the wider web could perform better on these. • Performance Tuning: Hyperparameters and feedback directed optimization (FDO) profiles sometimes have to be regenerated for a new platform. This is potentially automatable, but requires an agent to be able to run workloads and perform performance measurement. Meanwhile, we also found a number of examples where it is difficult to tell whether they are automatable using AI or whether they fundamentally require human involvement: • Multiarch tooling: A significant portion of this work includes implementing the automa- tion itself (e.g., CHAMP), as well as simulation tools and dashboards. While AI can help in the development flow of these components, many commits involve addressing feature requests by users, and thus require human involvement. On the other hand, this work only needs to be done once and is not required for future ISA migrations. • Resource provisioning: These are changes to configuration that follow human engineers installing hardware in data centers and subsequent management by Borg. They should not be performed by AI, and are being obviated via improvements to Borg and adjacent systems. • Documentation: LLMs have already proven useful at generating some kinds of documen- tation [10], and future LLMs may improve the ability of maintaining it and generating user-focused narrative documentation (and chat agents) that require context beyond code. Taken together, this demonstrates that there are opportunities for further closing the gap and that future ISA migrations may require a limited amount of manual work, mostly focused on making new hardware available and adding the new ISA to the automated multiarch tooling. 8 Conclusion By analyzing a large-scale ISA migration at Google, we refute several long-standing assumptions about ISA migrations. First, code translation is only a small portion of the ISA migration, and mainly focuses on intrinsics and vector code. Second, merely recompiling available code is not sufficient. Third, the tasks required for an ISA migration are multifaceted, with no single task dominating. Fourth, many of these tasks are highly automatable, particularly using modern AI techniques. Finally, even with AI, there remain a number of challenges that currently require human involvement and represent opportunities for future work on AI for ISA migration. Instruction Set Migration at Warehouse Scale 11 Acknowledgments We thank Arvind Sundararajan, Dushyant Acharya, Ahmed Alansary, Shelah Ameli, Owen Ander- son, Sterling Augustine, Sushmita Azad, Nupur Baghel, Antoine Baudoux, Patrick Bellasi, Vincent Belliard, Kyle Berman, Paul Bethe, Gopu Bhaskar, Raymond ’Princess Sparklefists’ Blum, Marshall Bockrath, Harsha Vardhan Bonthalala, Lexi Bromfield, Jean-Luc Brouillet, Eric Burnett, Marcelo Cataldo, John Cater, Kristine Chen, David Cheng, Ilya Cherny, Saahithi Chillara, Rob Chilton, Chan- drakanth Chittappa, Brian Chiu, Daniele Codecasa, Eduardo Colaço, Pavithra Dankanikote, Nicolo Davis, Rumeet Dhindsa, Zhuoran Diao, Bartosz Dolecki, Ian Dolzhanskii, Pat Doyle, Elian Dumitru, Ali Esmaeeli, Samuel Foss, Ákos Frohner, Neeharika Gonuguntla, Shruti Gorappa, Russell Gottfried, Manoj Gupta, Benjamin Gwin, Yanyan Han, Jitendra Harlalka, Milad Hashemi, Tim Henderson, Daisy Hollman, Jung Woo Hong, Jiawei Huang, Jin Huang, Talha Imran, Victoria Juan, Pranav Kant, Lera Kharatyan, Joonsung Kim, Danial Klimkin, Sree Kodakara, Avi Kondareddy, Danila Kutenin, Gregory Kwok, Pavel Labath, Leon Lee, Sungkwang Lee, Li Li, Sha Li, Xu Li, Zhongqi Li, Jianyi Liang, Kevin Liston, Haiming Liu, Li Liu, David Lo, Sean Luchen, Albert Ma, Laura Macaddino, Anthony Mai, Jennifer Mansur, Simon Marchuk, David Margolin, Alexander Midlash, Dominic Mitchell, Karthik Mohan, Albert Morgese, Maksym Motornyy, Katherine Nadell, Ngan Nguyen, Denis Nikitin, Stoyan Nikolov, Nicolas Noble, Chester Obi, Andri Orap, Habib Pagarkar, Dasha Patroucheva, Sabuj Pattanayek, Vaishakhi Pilankar, Saranyan Vangal Rajagopalan, Daniel Rall, Majid Rasouli, Salonik Resch, Alberto Rojas, Annie Rong, Jesse Rosenstock, Michal Sapinski, Yashnarendra Saraf, Aaron Schooley, Alexander Schrepfer, Manish Shah, Alexander Shaposhnikov, Stan Shebs, Guangyu Shi, Oscar Shi, Santanu Sinha, Anna Sjövall, Ben Smith, Justin Smith, Jairaj Solanke, Fangrui Song, Raman Subramanian, Xenia Tay, Toni Thompson, Dima Tsumarau, Cassie(Yitong) Wang, Tommy Wang, Shu-Chun Weng, DJ Whang, Hailong Xiao, Andy Xu, and Jason Yuan for their contributions to Google’s Arm porting efforts and the work described herein. References [1] 2025. RISC-V - Debian Wiki — wiki.debian.org. https://wiki.debian.org/RISC-V/. [Accessed 11-09-2025]. [2] E.R. Altman, D. Kaeli, and Y. Sheffer. 2000. Welcome to the opportunities of binary translation. Computer 33, 3 (2000), 40–45. doi:10.1109/2.825694 [3] Apple. [n. d.]. About the Rosetta translation environment. https://developer.apple.com/documentation/apple-silicon/ about-the-rosetta-translation-environment/. [Accessed 10-09-2025]. [4] Babbage. 2023. Apple’s Mac Transitions : 68k to PowerPC to Intel to Apple Silicon. The Chip Letter (Substack). https://thechipletter.substack.com/p/apple-transitions-68k-to-powerpc Accessed: 2025-09-10. [5] Johannes Bader, Andrew Scott, Michael Pradel, and Satish Chandra. 2019. Getafix: learning to fix bugs automatically. Proc. ACM Program. Lang. 3, OOPSLA, Article 159 (Oct. 2019), 27 pages. doi:10.1145/3360585 [6] Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. 2016. Site reliability engineering: how Google runs production systems. " O’Reilly Media, Inc.". [7] Betsy Beyer, Niall Richard Murphy, David K Rensin, Kent Kawahara, and Stephen Thorne. 2018. The site reliability workbook: practical ways to implement SRE. " O’Reilly Media, Inc.". [8] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. 2006. Bigtable: a distributed storage system for structured data. In Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation - Volume 7 (Seattle, WA) (OSDI ’06). USENIX Association, USA, 15. [9] James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. 2012. Spanner: Google’s Globally-Distributed Database. In OSDI. [10] Shubhang Shekhar Dvivedi, Vyshnav Vijay, Sai Leela Rahul Pujari, Shoumik Lodh, and Dhruv Kumar. 2024. A Comparative Analysis of Large Language Models for Code Documentation Generation. In Proceedings of the 1st ACM International Conference on AI-Powered Software (Porto de Galinhas, Brazil) (AIware 2024). Association for Computing Machinery, New York, NY, USA, 65–73. doi:10.1145/3664646.3664765 12 [11] Google Cloud Events. 2025. Shift down: A practical guide to platform engineering. YouTube. https://www.youtube. com/watch?v=FqBuYUeJkxE Accessed on 2025-09-22. [12] Redha Gouicem, Dennis Sprokholt, Jasper Ruehl, Rodrigo C. O. Rocha, Tom Spink, Soham Chakraborty, and Pramod Bhatotia. 2022. Risotto: A Dynamic Binary Translator for Weak Memory Model Architectures. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 1 (Vancouver, BC, Canada) (ASPLOS 2023). Association for Computing Machinery, New York, NY, USA, 107–122. doi:10.1145/3567955.3567962 [13] Svilen Kanev, Juan Darago, Kim Hazelwood, Parthasarathy Ranganathan, Tipp Moseley, Gu-Yeon Wei, and David Brooks. 2014. Profiling a warehouse-scale computer. In ISCA ’15 Proceedings of the 42nd Annual International Symposium on Computer Architecture. 158–169. [14] Jan Keller and Jan Nowakowski. 2024. AI-powered patching: the future of automated vulnerability fixes. Technical Report. [15] Justin Levandoski, Garrett Casto, Mingge Deng, Rushabh Desai, Pavan Edara, Thibaud Hottelier, Amir Hormati, Anoop Johnson, Jeff Johnson, Dawid Kurzyniec, Sam McVeety, Prem Ramanathan, Gaurav Saxena, Vidya Shanmugam, and Yuri Volobuev. 2024. BigLake: BigQuery’s Evolution toward a Multi-Cloud Lakehouse. In SIGMOD. [16] Hannah Lin, Martin Maas, Maximilian Roquemore, Arman Hasanzadeh, Fred Lewis, Yusuf Simonson, Tzu-Wei Yang, Amir Yazdanbakhsh, Deniz Altinbüken, Florin Papa, Maggie Nolan Edmonds, Aditya Patil, Don Schwarz, Satish Chandra, Chris Kennelly, Milad Hashemi, and Parthasarathy Ranganathan. 2025. ECO: An LLM-Driven Efficient Code Optimizer for Warehouse Scale Computers. arXiv:2503.15669 [cs.SE] https://arxiv.org/abs/2503.15669 [17] Andreas Lykke and Jesper Borlum. 2025. Adopting Arm at Scale: Bootstrapping Infrastructure. https://www.uber. com/blog/adopting-arm-at-scale-bootstrapping-infrastructure/. [Accessed 10-09-2025]. [18] MulticoreWare. 2024. Why Porting Applications Across Architectures Isn’t Simple. MulticoreWare Inc. https:// multicorewareinc.com/why-porting-applications-across-architectures-isnt-simple/ [19] Peter Phillips and George Phillips. 2003. No Source Code? No Problem! What if you have to port a program, but all you have is a binary? Queue 1, 6 (Sept. 2003), 50–57. doi:10.1145/945131.945155 [20] Rachel Potvin and Josh Levenberg. 2016. Why Google Stores Billions of Lines of Code in a Single Repository. Commun. ACM 59 (2016), 78–87. http://dl.acm.org/citation.cfm?id=2854146 [21] Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. 2012. AddressSanitizer: a fast address sanity checker. In Proceedings of the 2012 USENIX Conference on Annual Technical Conference (Boston, MA) (USENIX ATC’12). USENIX Association, USA, 28. [22] Konstantin Serebryany and Timur Iskhodzhanov. 2009. ThreadSanitizer – data race detection in practice.. In Proceedings of the Workshop on Binary Instrumentation and Applications. NYC, NY, U.S.A., 62–71. http://doi.acm.org/10.1145/ 1791194.1791203 [23] Bor-Yeh Shen, Jiunn-Yeu Chen, Wei-Chung Hsu, and Wuu Yang. 2012. LLBT: an LLVM-based static binary translator. In Proceedings of the 2012 International Conference on Compilers, Architectures and Synthesis for Embedded Systems (Tampere, Finland) (CASES ’12). Association for Computing Machinery, New York, NY, USA, 51–60. doi:10.1145/2380403.2380419 [24] Evgeniy Stepanov and Konstantin Serebryany. 2015. MemorySanitizer: fast detector of uninitialized memory use in C++. In Proceedings of the 2015 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). San Francisco, CA, USA, 46–55. [25] Jubi Taneja, Avery Laird, Cong Yan, Madan Musuvathi, and Shuvendu K. Lahiri. 2025. LLM-Vectorizer: LLM-Based Verified Loop Vectorizer. In Proceedings of the 23rd ACM/IEEE International Symposium on Code Generation and Optimization (Las Vegas, NV, USA) (CGO ’25). Association for Computing Machinery, New York, NY, USA, 137–149. doi:10.1145/3696443.3708929 [26] Abhishek Verma, Luis Pedrosa, Madhukar R. Korupolu, David Oppenheimer, Eric Tune, and John Wilkes. 2015. Large-scale cluster management at Google with Borg. In Proceedings of the European Conference on Computer Systems (EuroSys). Bordeaux, France. [27] Titus Winters, Tom Manshreck, and Hyrum Wright. 2020. Software engineering at google: Lessons learned from programming over time. " O’Reilly Media, Inc.".
Instruction Set Migration at Warehouse Scale ERIC CHRISTOPHER, KEVIN CROSSAN, WOLFF DOBSON, CHRIS KENNELLY, DREW LEWIS, KUN LIN, MARTIN MAAS, PARTHASARATHY RANGANATHAN, EMMA RAPATI, BRIAN YANG, Google, USA Migrating codebases from one instruction set architecture (ISA) to another is a major engineering challenge. A recent example is the adoption of Arm (in addition to x86) across the major Cloud hyperscalers. Yet, this problem has seen limited attention by the academic community. Most work has focused on static and dynamic binary translation, and the traditional conventional wisdom has been that this is the primary challenge. In this paper, we show that this is no longer the case. Modern ISA migrations can often build on a robust open-source ecosystem, making it possible to recompile all relevant software from scratch. This introduces a new and multifaceted set of challenges, which are different from binary translation. By analyzing a large-scale migration from x86 to Arm at Google, spanning almost 40,000 code commits, we derive a taxonomy of tasks involved in ISA migration. We show how Google automated many of the steps involved, and demonstrate how AI can play a major role in automatically addressing these tasks. We identify tasks that remain challenging and highlight research challenges that warrant further attention. 1 Introduction Migrating large codebases to a new Instruction Set Architecture (ISA) is a major engineering challenge. Examples include Apple's migration from PowerPC to x86 and later to Arm [4], as well as the adoption of Arm by major hyperscalers (such as Amazon, Google, Microsoft). While there are anecdotal claims regarding the complexity and efforts required for such migrations [2, 18, 19], to our knowledge, there is no systematic analysis of what these ISA migrations entail, and how they are impacted by modern technologies such as improved software engineering tools and artificial intelligence (AI). In this paper, we perform such a systematic analysis for the migration of a multi-billion line codebase from x86 to Arm at Google. Historically, the conventional wisdom has been that the biggest challenge in ISA migration involves translating machine code between ISAs [2, 19]. Correspondingly, there has been a significant amount of work on static [23] and dynamic [12] binary translation that automatically rewrites binaries compiled for one ISA to another. Binary translation was the main problem when software was distributed as binaries and source code was not usually available. However, modern ISAs are generally well-supported in upstream compilers, runtime libraries, and the Linux kernel. As a result, modern compilers mostly "just work" for a new ISA, and previous ISA migrations have smoothed the path to packages supporting cross-compilation by default. For example, 98% of Debian packages build for RISC-V, although it only became an official Debian architecture in 2023 [1]. Perhaps surprisingly, this does not mean that ISA migration is no longer a challenge. While code translation is not the main issue anymore, we find that modern ISA migration involves many usually-simple, repetitive, automatable tasks such as updating build scripts or fixing floating-point issues, which AI can increasingly facilitate. In this paper, we analyze a large-scale ISA migration at Google that added Arm support alongside x86. We focus on the following research questions: (1) What are the tasks that are involved in a modern ISA migration? (2) Which tasks can be automated and how can modern AI help? (3) Which tasks are difficult and are good targets for future research? To answer these questions, we provide what we believe is a first-ever detailed breakdown and taxonomy of large-scale ISA migration tasks. Using state-of-the-art LLMs, we analyze and categorize a corpus of 38,156 commits that constitute our real-world migration. We quantitatively evaluate the capability of current tools, including AI models, to perform these tasks automatically. We 16 Oct 2025 2 systematically identify the strengths and weaknesses of current automated tools, and highlight areas of future work and improvement. We believe that this work highlights research opportunities for the academic community and revisits long-standing assumptions around ISA migrations. Specifically, we contribute the following insights: 1) The complexity of ISA migrations is not in code translation but involves a number of different tasks, many related to rewriting BUILD and configuration files; 2) Many of these tasks are highly automatable; 3) Many of the tasks that are not automatable only need to be performed once when going from a single ISA to multiarch; 4) Of the remaining tasks, many can be performed by modern AI, but some challenges remain. 2 Background & Related Work There are a number of reasons why organizations have performed large-scale ISA migrations. First, many ISAs have gone extinct over the years (e.g., Alpha, MIPS, SPARC, Itanium, VAX). Second, with the adoption of Android and iOS, more codebases were ported to Arm to be used in mobile applications. Third, Apple Macs went through successful migrations from PowerPC to x86, and most recently from x86 to Arm to support custom Apple Silicon. Finally, major cloud hyperscalers have been migrating large codebases from x86 to Arm as well. In this context, migration does not only refer to the act of getting software to build on a new architecture but to reaching parity in terms of performance, security and stability. The most closely-related work in the academic community falls into two categories. First, there is a significant amount of work on static and dynamic binary translation from one ISA to another [12, 23]. Static or dynamic binary translators can serve as a bridge to handle the long tail of a software ecosystem, but eventually they must be phased out to avoid carrying forward technical debt (e.g., Apple will actively phase out its Rosetta dynamic binary translation system in 2027 [3]). In cloud computing, where compute is commoditized, developers seek as much efficiency as possible, and recompiling onto the new ISA is the best route to maximize the compiler's options for performance. We thus forewent binary translation altogether in our deployment. Second, there is a significant amount of work on automatically applying edits to code, such as for performance optimization [16], fixing security issues [14], or correcting bugs [5]. As we will see, these are common tasks that are part of a successful ISA migration. 3 Google's x86 to Arm Journey We now analyze Google's multi-year effort to port a substantial portion of Google's server application ecosystem from x86 to Arm, enabling simultaneous support for both. We start by describing Google's environment and provide a step-by-step analysis of our ISA migration. 3.1 Google's Software Ecosystem Google's codebase is organized as a monorepo containing billions of lines of code [20]. Individual applications and libraries reside in various directories. These folders also contain metadata files, e.g., to indicate code owners or configure continuous integration (CI) testing [27]. Building with Bazel. Builds use Bazel [7], a highly configurable build system. BUILD files describe how binaries, libraries, and tests are built from source files. Most code is covered by our primary continuous integration system, "TAP" (Test Automation Platform [7]), and it is standard for TAP to gate releases. Bazel's builds and tests (including TAP) run on a shared set of machines called Forge. Forge's scale and cache enables Google to compile everything needed for a binary from scratch on every build, including fundamental dependencies like the Python interpreter. Creating releases with Blueprints. Google distinguishes between binaries and releases (named "MPMs" for the "Midas Package Manager" that stores them globally [7]). Releases are bundles Instruction Set Migration at Warehouse Scale 3 of binaries and data that are ready to be deployed in Google's clusters, akin to a package in a Linux distribution. Releases are defined by Blueprint files, which are handwritten or managed by other systems that standardize releases and configurations. A system called Rapid [6] consumes Blueprints, runs CI tests, including TAP, and builds releases for server-side packages. Running applications on Borg. Borg is a custom cluster management service for the Google fleet that runs nearly all Google services [26]. Applications are deployed to Borg through configuration files that define the MPMs needed to run a service, runtime parameters, and scheduling constraints. MPMs can be rolled forward and backward safely because they are almost entirely hermetic. Multiarch Support. Borg was heterogeneous even before the Arm migration. Borg has had dozens of different types of CPUs over the years, and services run on machines that can be as much as ten years old. Unless an owner adds specific constraints, a job can be scheduled on any machine with an architecture-compatible binary. During development, engineers can request builds for multiple architectures (e.g., Arm, K8, Haswell) and expect multiarch MPMs. They can also request tests to run on each target hardware. At release time, owners of packages can also configure Blueprints to target one or more ISAs. Owners expect to get a mix of different kinds of machines with different performance profiles and, in some cases, ISAs. Shifting down and Large Scale Changes (LSCs). Google has moved to a "shift down" approach to development [11], where developers only focus on one level of the stack and other issues are abstracted and/or automated for them. For example, Bazel, TAP, and Rapid mean that developers do not generally need to worry about the specifics of CI. In addition, a healthy automated testing culture means everyone can change everyone else's code without frequent breakages. This enables Large Scale Changes (LSCs) [27] which change code owned by many different teams and can affect thousands of files at once. In cases where an LSC is considered low risk, it can be approved centrally and submitted efficiently without asking individual teams. To get approval from many owners at once, Google has developed Rosie [27] which allows engineers to create a very large commit and shard it into tens, hundreds, or thousands of smaller commits split up by owner. 3.2 Life cycle of an ISA migration Moving an individual package from single arch (x86) to multiarch support requires several steps: (1) Test: Fix tests (and builds) that break when run with the new ISA. Since anyone can build and test any code in our monorepo, it is easy to identify tests that break and require fixes. (2) Set up multiarch CI: This requires modifying the corresponding Blueprint files to ensure that no additional regressions are introduced (often simultaneous with the next step). (3) Configure releases: This modifies Blueprint files to make releases multiarch by default. (4) Roll out new binaries: Run the multiarch packages on machines of the new ISA and assess performance and stability, addressing issues as needed. (5) Full production: Allow production jobs to be scheduled on machines of the new ISA. While these steps are the same for all packages, the issues encountered within each step vary widely across applications and throughout the different phases of our ISA migration from x86 to Arm. Often, this involves performance-optimizing code for the new platform, which can happen in parallel to these steps. We observe parallels to Uber's reported porting workflow [17]. 3.3 Phase 1: Large users We started our Arm migration with a small set of large users, such as Spanner [9], BigQuery [15] and Bigtable [8]. These migrations were hands-on with a small team, and required weekly meetings 4 and tracking bugs. Once tests passed, rollout was manual, with very careful performance and load testing, and gradually removing scheduling constraints on a per-job basis. During this phase, a number of issues in these workloads were surfaced and addressed. Examples include: 1) Replacing x86-specific intrinsics; 2) Replacing long double, which differs between x86 and Arm, with absl::float128; 3) Brittle tests (e.g., due to exact floating point equality checks); 4) x86-specific flags; 5) Memory ordering issues hidden by x86; 6) Out-of-memory errors, often due to heap limits being tuned for x86; 7) Multiarch MPMs exceeding the capacity limits of our infrastructure; 8) Unsupported dependencies, and loading of unsupported dynamic libraries; 9) Jobs not getting scheduled due to unsatisfiable scheduling constraints in Borg configurations. This list was surprising to the teams involved-initially, there had been a perception that porting these large and mature codebases to Arm would be a herculean task, and that the very different toolchains would result in myriad difficulties. However, most issues involved simple changes or fixes, many of them in configuration files. At the same time, these changes were surprisingly pervasive, as evident by the large number of commits. It is therefore not the case that most software compiles and runs on Arm without modifications; it is that these modifications are of a different kind than expected initially. For example, it is not unusual that, for a given software package, almost nothing builds initially, suggesting that large and pervasive changes are required. However, simple fixes to a number of shared dependencies often unblocks many of them at once. 3.4 Phase 2: Everybody else To take full advantage of Arm in the data center, migrating only the largest workloads is insufficient. To make maximum use of available capacity, Borg needs to be able to schedule workloads flexibly across platforms, packing large and small users onto machines efficiently. If only a small subset of services can run on Arm, it will result in underutilization of those machines. We note that the distribution of workloads at Google is very flat: Although our top 50 users are very large, they only represent ≈60% of running compute [13]. Addressing this long tail requires porting over 100,000 packages and billions of lines of code. This makes the Phase 1 approach of working directly with customer teams infeasible. In fact, even just talking to each team would be prohibitively expensive. The second phase of the x86 to Arm migration therefore focused on automating and scaling the migration of these workloads, while minimizing involvement from the teams themselves. It is this phase of Google's x86 to Arm migration that we mostly focus on. So far, we have ported about 30,000 packages, accounting for a significant portion of CPU cycles. We found that effectively making use of Arm hardware did not require porting all workloads. 4 Analyzing an ISA Migration To fully understand what is involved in an ISA migration, we now analyze the full range of tasks involved in Google's x86 to Arm migration (RQ1). As a monorepo, any change - be it to code, configurations or documentation - is tracked as a commit in our repository's history. Further, these relevant commits were marked with a keyword that indicates they were part of this migration, allowing us to extract them after the fact. We thus identified a relevant set of 38,156 commits. Analyzing these commits manually would have been cost-prohibitive. We instead used a variant1 of Gemini 2.5 Flash to analyze these commits at scale. We passed the commit messages and code diffs into the LLM's 1M token context window in groups of 100 at a time. We prompted the model to pick a set of 20 categories for each batch. Then, we took all 400 × 20 categories and asked Gemini to consolidate them into 50. Further manual iteration over model outputs led to a final list of 16 1We use Gemini 2.5 fine-tuned on a corpus of internal data, including code and documentation. This corpus may include material related to our Arm migration, but this represents a sufficiently small subset that recitation is not an issue. Instruction Set Migration at Warehouse Scale 5 Change Category & Description Commits Total LoC LoC per Commit Automation 1. : Introducing or modifying code blocks that are conditionally compiled or executed (e.g., using #ifdef __aarch64__, runtime CPU feature checks). Examples: Different syscall usage, platform-specific API calls. 236 16,235 9 [1, 330] 22% / 28% 2. : Replacing or providing alternatives for x86-specific intrinsics (e.g., SSE, AVX) with Arm equivalents (NEON), or rewriting assembly language sections. 74 4,454 10 [1, 209] 1% / 0% 3. : Modifying code to handle issues arising from differences in data type sizes, alignment requirements, or byte order (endianness) between x86 and Arm. 38 2,574 9 [1, 324] 0% / 0% 4. : Fixing code that makes assumptions about memory ordering, atomicity, or thread synchronization behavior that differ on Arm. 12 49 2 [2, 12] 0% / 0% 5. : Refactoring algorithms or code patterns specifically to improve execution speed, reduce latency, or improve efficiency on Arm microarchitectures. This is beyond basic correctness. 18 2,080 13 [4, 337] 0% / 0% 6. : Changing the test code itself, updating golden files, or adjusting test assertions to be compatible with Arm, reflecting valid behavioral differences rather than bugs. 276 37,052 5 [1, 147] 3% / 1% 7. : Configuring which tests run on Arm, adjusting test timeouts, memory/CPU limits, and sandboxing; excluding tests not suitable for Arm. 1,303 13,783 1 [1, 37] 53% / 8% 8. : Changes to BUILD files, Bazel settings, genmpm rules, Blueprints, TAP, and release platform configs to support multi-architecture builds, testing, and releases. 32,204 139,611 1 [1, 9] 95% / 63% 9. : Modifying Borg configurations, allowlists, admission control, resource allocation within jobs, and service enablement for running on Arm in non-prod and production. 757 26,581 10 [1, 136] 1% / 0% 10. : Managing quota, dedicated machines, security policies, storage, network, and scaling for the Arm migration. Includes cluster state management and kernel/platform rollouts. 381 32,159 8 [1, 163] 1% / 0% 11. : Setting up dashboards, alerts, collecting metrics, analyzing performance benchmarks, and classifying errors for the Arm migration. 645 115,343 16 [1, 525] 0% / 0% 12. : Creating and enhancing scripts, tools, and automation to assist with any stage of the Arm migration. 644 125,536 29 [1, 808] 2% / 1% 13. : Creating and updating guides, best practices, and debugging information. 369 24,115 13 [1, 260] 0% / 0% 14. : Reverting changes and removing obsolete code, configurations, or data from the migration process. 940 163,042 2 [1, 200] 58% / 5% 15. : Platform-specific adaptations for databases, experiment frameworks, or other unique services. 119 22,501 7 [1, 747] 3% / 0% 16. : Defining and implementing processes to ensure multi-architecture releases meet quality and performance standards on Arm, including using CHAMP data. 123 8,178 14 [1, 284] 5% / 2% 17. 17 1,328 39 [2, 223] 65% / 84% Code Adaptation & Correction Platform-Specific Conditional Code Intrinsic and Assembly Code Porting Data Representation & Alignment Fixes Memory Model & Concurrency Adjustments Performance-Driven Code Optimization for Arm Test Adaptation & Configuration Test Logic, Data, and Assertion Modifications Test Execution Environment & Scope Build, Deployment & Infrastructure Configuration Build, Packaging & CI/CD Configuration Borg & Runtime Environment Configuration Infrastructure Resource Management & Provisioning Supporting Processes & Tools Monitoring, Alerting & Performance Analysis Migration Tooling & Automation Development Documentation & Knowledge Management Rollbacks & Cleanup Specialized Service Configuration Release Qualification & Validation Uncategorized Fig. 1. Categories of commits in Google's x86 to Arm migration. LoC per commit shows median and 90% CI. Automation shows the fraction of commits/LoC generated using large-scale changes (Section 5.1). categories (Figure 1)2. Once this list was finalized, we ran the model on all commits again and had it assign one of these 16 categories to each of them (as well as an additional "Uncategorized" category, which improved stability by catching outliers). Figure 2 shows examples of each category. Commits fall into four overarching groups: 1) Code changes, 2) Test changes, 3) BUILD files and configurations, and 4) Supporting processes and tools. In total, our commits updated around 700K lines of code. While the vast majority (84%) of commits are related to updating build or configuration files (Category 8), these commits account for only 19% of lines of codes updated. We also see a substantial number of lines (17%) spent on migration tooling. A large portion of this work is only required once and can be reused in future ISA migrations. All categories contain a meaningful number of commits, supporting our claim that ISA migration is a multifaceted engineering challenge where no single type of task dominates. We also see that code related commits (Categories 1-5) only account for 1% of commits and less than 4% of lines of code, refuting the conventional wisdom [2] that code translation accounts for most of an ISA migration. In Section 5, we analyze how automatable these commits are. We also analyze the timeline of our ISA migration (Figure 3). We observe that at the start of the migration, most commits were in tooling and test adaptation, aligned with Phase 1 (Section 3.3). Over time, a larger fraction of commits became around code adaptation, which can be seen as a phase when there is still a need to update code in common dependencies and address common issues in code and tests. Eventually, the fraction of these kinds of commits declines and in the final phase of the process (Section 3.4), almost all commits are configuration files and supporting processes. We also observe that in this later phase, the number of merged commits rapidly increases, capturing the scale-up of the migration. 2The descriptions in Figure 1 are almost entirely the model's output, with some minimal edits to remove internal information. 6 1. Platform-Specific Conditional Code #define _CONFIG_H_ // distro #define SOLIB_EXT ".so" +#if defined(__aarch64__) +#define ARCH "aarch64" +#elif defined(__x86_64__) #define ARCH "x86_64" +#else +#error Unsupported architecture +#endif Note: Multiarch a config header 2. Intrinsic and Assembly Code Porting +#ifdef __aarch64__ +#include "third_party/sse2neon/sse2neon.h" +#else #include +#endif ... +#ifdef __aarch64__ +#define GETROUND() (_MM_GET_ROUNDING_MODE()&VM_SSE_ROUND_MASK) ,→ +#define SETROUND(x) (_mm_setcsr(x| (_MM_GET_ROUNDING_MODE()&~VM_SSE_ROUND_MASK))) ,→ +#else #define GETROUND() (_mm_getcsr()&VM_SSE_ROUND_MASK) ,→ #define SETROUND(x) (_mm_setcsr(x| (_mm_getcsr()&~VM_SSE_ROUND_MASK))) ,→ +#endif Note: Port code using intrinsics 3. Data Representation & Alignment Fixes -char suffix_bytes[k FileSuffixSize]; +// NOTE: alignas(uint32_t) is used to ensure proper memory alignment when ,→ +// reinterpreting the char pointer to a uint32_t pointer. ,→ +alignas(uint32_t) char suffix_bytes[k FileSuffixSize]; ,→ Note: Add an alignas modifier 4. Memory Model & Concurrency Adjustments stripCalcLatch = new CountDownLatch(1); calcAwaitingLatch = new CountDownLatch(1); +ranStripCalc = new AtomicBoolean(false); appKey = new PlainKey("fakeKey"); calculationContext = new CalculationContext(new FakeExtraData<>()); ,→ calcEnv = new InProcessCalcEnv(appKey, ritzInfo, calculationContext); ,→ ... - ranStripCalc = true; + ranStripCalc.set(true); ... Note: Switch to atomic boolean to avoid races (Java) 5. Performance-Driven Code Optimization for Arm ... template <> inline uint64_t MathUtil::SafeRound(float x) { uint64_t result; __asm__("fcvtau %x0, %s1" : "=r"(result) : "w"(x)); ,→ return result; } template <> inline int64_t MathUtil::SafeRound(double x) { return vcvtad_s64_f64(x); } ... Note:Implement math routines for aarch64 6. Test Logic, Data, and Assertion Modifications -EXPECT_THAT(distribution.mean(), DoubleEq(event_set.SpaceUsedLong())); ,→ +EXPECT_THAT(distribution.mean(), DoubleNear(event_set.SpaceUsedLong(), 1e-10)); ,→ Note: Adjust the floating point sensitivity 7. Test Execution Environment & Scope tags = [ "cpu:4", + "requires-mem:20g", ], deps = [ Note: Increase a test's memory limit. 8. Build, Packaging & CI/CD Configuration cc_library( name = " -vector", hdrs = [ " -vector.h", ], - copts = [ - "-mavx2", - "-mbmi2", - ], visibility = [ Note: Remove x86-specific build flags 9. Borg & Runtime Environment Configuration access_level = - pa_proto.PlatformAllowlistAllowedCollection. INITIAL_LIMITED_QUALIFICATION ,→ ,→ + pa_proto.PlatformAllowlistAllowedCollection. EXPANDED_PROTECTION ,→ ,→ }, { collection_key = { user = ' -ui-mixer' Note: Change allowlist to extend machine exposure 10. Infrastructure Resource Management borg_pool ` ` = arm-dedicated- -pool {} borg_pool ` ` = arm-dedicated- -pool {} borg_pool ` ` = arm-dedicated- -pool {} +borg_pool ` ` = arm-dedicated- -pool {} } Note: Add a new cell for testing 11. Monitoring, Alerting & Performance +perf_ _workflow_test( + name = "borg_workflow_arm_one_build_multi_runs", ,→ + configuration = { + "postprocessing_config": { + "microbenchmark_ _data_uploading_config": DEFAULT_MICROBENCHMARK_ _DATA_UPLOADING_CONFIG, ,→ ,→ ,→ + }, + }, + # To run workflows on ARM, need to have "--cpu=arm" here. ,→ + extra_postprocessor_blaze_flags = ["--cpu=arm"], ,→ + phases = [ ... Note: Add a new Arm-specific testing infrastructure 12. Migration Tooling & Automation ... +EXPORTED_ACTIONS = { + automation_structure_pb2.ACTION_PARSE_BLUEPRINT: ,→ + parse_blueprint, + automation_structure_pb2.ACTION_RUN_TEST: + run_test, + # Used for testing + automation_structure_pb2. ACTION_WRITE_OUTPUT_FROM_INPUT: ,→ + write_output_from_input, + automation_structure_pb2. ACTION_WRITE_OUTPUT_FROM_CONFIG: ,→ + write_output_from_config, +} ... Note: Part of a new Blueprint parser 13. Documentation & Knowledge Management ... + * Borg config managed by the pilot dev team specifically selects the pilot ,→ + machines for specific pilot jobs. -* The allowlist of users permitted to run on the pilot machines is managed via ,→ -* the ` -logs` usermap: ... Note: Evolving the docs for the Logs test 14. Rollbacks & Cleanup package_name = " ", srcs = [": _archive"], mpm_tags = ["rapid= "], -platforms = ["// /borg:all"], deps = [":oneday_ _borgfiles"], ) Note: Rolls back a problematic data MPM change 15. Specialized Service Configuration + +// a dedicated borg pool consists of and machines for A/B testing ,→ +// - : for x86 100% environment ,→ +// - : for ARM 100% environment ,→ + _info arm-dedicated- = _default_ _info(' ') { ,→ + cell = ' arm-dedicated- ' + user = ' arm-dedicated' + scratch_path_prefix = '/cns/ arm-dedicated' ,→ + remote_path = '/cns/ /home/' + platform = ' ' +} Note: Add a configuration for a Spanner pilot 16. Release Qualification & Validation ... WHEN champ_state = 'Incompatible' - THEN PrioritizedStatus(1, 'NON_COMPATIBLE_VIA_CHAMP') ,→ + THEN PrioritizedStatus(0, 'NON_COMPATIBLE_VIA_CHAMP') ,→ WHEN champ_state = 'Compatible' - THEN PrioritizedStatus(1, 'QUALIFIED_VIA_CHAMP') ,→ + THEN PrioritizedStatus(0, 'QUALIFIED_VIA_CHAMP') ,→ ... Note: SQL change for improved tracking of qualification Fig. 2. Specific code examples for each category. Finally, we want to understand how these different categories of commits differ from one another. We observe that median commits in most categories are less than 20 LoC, with many single-line commits. However, we also observe very large individual commits that change 10,000+ LoC and skew the averages. We manually inspected these commits to understand their origin and found that these commits do not typically represent more work since they are conceptually similar to large numbers of simple, one-line commits. Overall, there are 19 commits that cumulatively account for 238,289 LoC (32.4%) of total lines changed and that are trivial. Examples include: • Remove a porting tool once it was no longer used - 57K LoC (Category 12) • Update a list of microbenchmark targets - 23K LoC (Category 11) • Add several very large test vectors for a coverage tool - 15K LoC (Category 6) Instruction Set Migration at Warehouse Scale 7 Fig. 3. Categories of commits over time. In summary, we find that most commits related to migration are small, and that the largest commits often change very large lists or configurations and are not inherently complex. We also find that size alone does not measure difficulty. Finally, we observe that some of the commits (particularly in "Supporting Processes & Tools") could likely be reused in a subsequent multiarch migration. 5 Automating ISA Migrations Now that we have established the tasks that are part of an ISA migration, we can explore how automatable each of these tasks is (RQ2) and how novel automation approaches can facilitate them. 5.1 ISA Migration Automation at Google We already employ a number of automation tools at Google today that automate a large portion of the ISA migration process (83.82% of commits and 14.15% of LoCs). Large-Scale Changes (LSCs). The key piece to ISA migration automation is Rosie (Section 3.1), which allows us to programatically generate large numbers of commits and shepherd them through code review. This includes running affected TAP projects, requesting code reviews by code owners, and submitting each commit once all tests pass. We find that 31,984 of our commits were generated by Rosie, signaling automation. However, we note that these commits only account for 14.15% of lines of code, indicating that most of these commits are very small. Figure 1 shows the fraction of each category that was generated using automated tools. For example, one major LSC adds the following line to Blueprint files of projects, configuring all of its tests and releases for Arm: arm_variant_mode = ::blueprint::VariantMode::VARIANT_MODE_RELEASE, Sanitizers & Fuzzers. While not limited to ISA migrations, fuzzers and LLVM sanitizers such as AddressSanitizer [21], MemorySanitizer [24] and ThreadSanitizer [22] are key enablers of our migration. Even before Arm adoption, Google routinely ran all TAP tests with these tools enabled, which turn latent errors such as a memory corruption, memory leak, or race condition into a debuggable fault. Application owners regularly triage and fix these faults, causing us to sidestep many common differences in execution between x86 and Arm (e.g., a data race may be hidden by x86's TSO memory model). Catching these kinds of issues ahead of time avoids debugging non-deterministic and hard-to-debug behavior when recompiling to a new ISA. Continuous Health Monitoring Platform (CHAMP). The final step in our automation is CHAMP, which assesses Arm-built applications on Arm server hardware. It continuously monitors health metrics to detect whether behavior differs from x86 instances of the job (e.g., significantly higher RPC error rates or crashes). If so, it automatically marks the job as ineligible for Arm, files a bug for its owners to follow up, and automatically retries in 30 days. It scales up the fraction of Arm instances of the application task by task, job by job, and cell by cell following Google's production 8 Tools Set of build and test targets to fix Orchestrator Agent SUCCESS/FAIL Code Search Clean BUILD file Build Target Cat File Run Test Find & Replace Test Fixer Agent Build Fixer Agent Success/fail Invoke Success/fail Invoke Invoke Output (e.g., code) Invoke Output (e.g., build log) Invoke Output (e.g., test log) Success/fail Invoke 1. Platform-Specific Conditional Code 2. Intrinsic and Assembly Code 3. Data Representation & 4. Memory Model & Concurrency 5. PerformanceDriven Code 6. Test Logic, Data, and Assertion 7. Test Execution Environment & 0 25 50 75 100 SUCCESS FAIL Fig. 4. Agentic flow and the success rate of the agent. principles to limit SLO risk. CHAMP is not needed for new microarchitecture deployments (either x86 or Arm), as the behavior, performance differences, and associated issues are relatively minor. However, auto-qualification of Arm binaries was necessary due to the increased incidence of issues. Using CHAMP, it is no longer necessary to manually shepherd every binary through qualification. Instead, after updating project configurations to build Arm releases, this process is now automatic. 5.2 Reliability of the Automation Approach Combined, these tools allow for a mostly-automated approach where LSCs enable Arm for different builds and releases, which are then automatically qualified using CHAMP. To understand the stability of this approach, we analyze LSCs targeting a standardized release management system. These LSCs modified release configurations to bring this system's percentage of Arm-qualified applications from 4.8% to 59.6%. The rate of applications that were rolled back in early testing was 1.8% (which dropped to 1% after fixing bugs), and less than 0.8% in the final phase. Early in the migration, after ≈300 MPMs, we had a 5% refusal rate (code owners deciding not to migrate). During scale up, this dropped to 0.6% after ≈600 additional MPMs. In the final phase, the commits were globally approved, with no refusal. We found that acceptance rate was strongly influenced by careful workload targeting, users gaining trust in the automation, and messaging that anticipated worries and objections. 6 Automation of ISA Migrations with AI While LSCs and CHAMP automate a large part of the porting process, there are limits to this approach. They can edit build and configuration files, as well as automatically qualify Arm binaries for deployment. However, standard LSCs are fixed-function pipelines. They are not flexible to respond to unexpected errors or other issues that occur at any stage of the process, be it during testing or in production. Modern generative AI techniques represent an opportunity to automate the remainder of the ISA migration process. We built an agent called CogniPort which aims to close this gap. CogniPort operates on build and test errors. If an Arm binary does not build or a test fails at any point in the process, the agent steps in and aims to fix the problem automatically. The agent consists of three nested agentic loops (Figure 4). Each loop executes the LLM from Section 4 to perform one step of reasoning, followed by a tool invocation-i.e., a function call (the loop terminates once the agent emits a special 'finish' call). This tool executes, and its outputs are attached to the agent's context. For example, there are tools for building and returning the build log, running test(s) and returning the test log, and running a tool that fixes errors in BUILD files. The agent also has tools to search through code and make modifications. The outermost agent loop is an orchestrator that repeatedly calls the build fixer agent and/or the test fixer agent depending on the state of the workspace. The build fixer agent tries to build a given Instruction Set Migration at Warehouse Scale 9 Fig. 5. AI-assessed automatability of each category (1 = trivial, 5 = probably unsolvable), as well as actual fraction of commits and LoCs in each category that were generated using automated tools. target and makes modifications to files until the target builds successfully. The test fixer agent tries to run a given test and makes modifications until the test passes. In both cases, the agent can time out via a step limit or give up early by calling 'finish'. To evaluate the agent, we take historic commits from our data set, revert them and then evaluate whether the agent is able to fix them. We note that not all of our categories are suitable for this approach-it only applies to Code & Test Adaptation (categories 1-8). To evaluate the agent, we further narrow down our data set by only picking commits that can be cleanly reverted and that have identifiable build or test targets. This results in a benchmark set of 245 commits. We see that the overall success rate is 30%, with test fixes, platform-specific conditionals, and data representation fixes having the highest success rates. Memory model, test execution environment, and performances fixes are most difficult (though based on a very small sample size). Overall, however, this indicates that AI achieves a reasonably high success rate. We note that we consider these results directional: While we analyzed an arbitrarily chosen subset of outputs to ensure the validity of results and confirm that we are not observing recitation from training, the evaluation is not perfect and may miss cases where, e.g., a fix is incorrect but not caught by a test or where there is other information leakage (e.g., a subsequent commit made the original fix easier or reverting only the commit itself makes it easier to root cause an issue than if the entire fix was reverted). 7 Discussion & Research Challenges Overall, we see that ISA migrations involve a wide range of different tasks. Many of these tasks are highly automatable. BUILD file and configuration changes are almost fully automatable, while code changes and tests are partially automatable with AI. In addition, there are many changes in areas like build and test infrastructure, spinning up a new hardware platform whether or not it is a new ISA (e.g., categories 11 and 15), or deprecating old code (category 14) that needed generalizations for multiarch support, but these scale across many architectures and will not be required again. This refutes the conventional wisdom that the main challenge of ISA migration is translation, and also that migrating a large software ecosystem to a new ISA is a prohibitively large amount of work, particularly with modern AI. This raises the question: What are the remaining challenges, and are there research opportunities in closing the remaining gaps (RQ3)? To answer this, we analyze our data set of commits to identify changes that are challenging to automate. Once again, we use AI. We used our LLM to assess how automatable different categories are by sampling up to 50 commits in each category and using an LLM to grade them (1 = trivially automatable, 5 = difficult, even for advanced AI). This is not a conclusive result, but directionally 10 tells us how difficult each category is, according to the LLM itself (Figure 5). By manually inspecting the outputs, a picture emerges of which changes remain the most challenging. First, we find that the LLM confirms the categories that are already automatable today-i.e., BUILD and configuration files, test execution environment-as automatable. Second, categories 1-7 (code & test adaptation) stand out as problems that have a significant fraction of commits ranked as 3 and 4-indicating problems that are hard but not impossible. Examples of these include: • ISA-Specific Vector Code: Writing ISA-specific, performant vector code is a hard problem, and is actively investigated by the research community [25]. While current AI can translate simple routines, generating complex kernels exposes a complex search space that goes beyond a simple build-repair loop. • Deep Performance Optimizations: Performance optimizations sometime require major refactors, algorithmic changes and intrinsics. There is significant existing work towards applying LLMs to these problems. At Google, we have a system called ECO that is used for automating some of these optimizations [16], including for code running on Arm. • Difficult Corner Cases: We saw a number of corner cases that require obscure knowledge beyond the code itself. For example, we saw commits that worked around an Arm compiler bug, addressed a hash function behaving differently on Arm and x86, and fixed bugs that were unrelated to the Arm migration but were not previously triggered. It is plausible that an agent that can search documentation and the wider web could perform better on these. • Performance Tuning: Hyperparameters and feedback directed optimization (FDO) profiles sometimes have to be regenerated for a new platform. This is potentially automatable, but requires an agent to be able to run workloads and perform performance measurement. Meanwhile, we also found a number of examples where it is difficult to tell whether they are automatable using AI or whether they fundamentally require human involvement: • Multiarch tooling: A significant portion of this work includes implementing the automation itself (e.g., CHAMP), as well as simulation tools and dashboards. While AI can help in the development flow of these components, many commits involve addressing feature requests by users, and thus require human involvement. On the other hand, this work only needs to be done once and is not required for future ISA migrations. • Resource provisioning: These are changes to configuration that follow human engineers installing hardware in data centers and subsequent management by Borg. They should not be performed by AI, and are being obviated via improvements to Borg and adjacent systems. • Documentation: LLMs have already proven useful at generating some kinds of documentation [10], and future LLMs may improve the ability of maintaining it and generating user-focused narrative documentation (and chat agents) that require context beyond code. Taken together, this demonstrates that there are opportunities for further closing the gap and that future ISA migrations may require a limited amount of manual work, mostly focused on making new hardware available and adding the new ISA to the automated multiarch tooling. 8 Conclusion By analyzing a large-scale ISA migration at Google, we refute several long-standing assumptions about ISA migrations. First, code translation is only a small portion of the ISA migration, and mainly focuses on intrinsics and vector code. Second, merely recompiling available code is not sufficient. Third, the tasks required for an ISA migration are multifaceted, with no single task dominating. Fourth, many of these tasks are highly automatable, particularly using modern AI techniques. Finally, even with AI, there remain a number of challenges that currently require human involvement and represent opportunities for future work on AI for ISA migration. Instruction Set Migration at Warehouse Scale 11 Acknowledgments We thank Arvind Sundararajan, Dushyant Acharya, Ahmed Alansary, Shelah Ameli, Owen Anderson, Sterling Augustine, Sushmita Azad, Nupur Baghel, Antoine Baudoux, Patrick Bellasi, Vincent Belliard, Kyle Berman, Paul Bethe, Gopu Bhaskar, Raymond 'Princess Sparklefists' Blum, Marshall Bockrath, Harsha Vardhan Bonthalala, Lexi Bromfield, Jean-Luc Brouillet, Eric Burnett, Marcelo Cataldo, John Cater, Kristine Chen, David Cheng, Ilya Cherny, Saahithi Chillara, Rob Chilton, Chandrakanth Chittappa, Brian Chiu, Daniele Codecasa, Eduardo Colaço, Pavithra Dankanikote, Nicolo Davis, Rumeet Dhindsa, Zhuoran Diao, Bartosz Dolecki, Ian Dolzhanskii, Pat Doyle, Elian Dumitru, Ali Esmaeeli, Samuel Foss, Ákos Frohner, Neeharika Gonuguntla, Shruti Gorappa, Russell Gottfried, Manoj Gupta, Benjamin Gwin, Yanyan Han, Jitendra Harlalka, Milad Hashemi, Tim Henderson, Daisy Hollman, Jung Woo Hong, Jiawei Huang, Jin Huang, Talha Imran, Victoria Juan, Pranav Kant, Lera Kharatyan, Joonsung Kim, Danial Klimkin, Sree Kodakara, Avi Kondareddy, Danila Kutenin, Gregory Kwok, Pavel Labath, Leon Lee, Sungkwang Lee, Li Li, Sha Li, Xu Li, Zhongqi Li, Jianyi Liang, Kevin Liston, Haiming Liu, Li Liu, David Lo, Sean Luchen, Albert Ma, Laura Macaddino, Anthony Mai, Jennifer Mansur, Simon Marchuk, David Margolin, Alexander Midlash, Dominic Mitchell, Karthik Mohan, Albert Morgese, Maksym Motornyy, Katherine Nadell, Ngan Nguyen, Denis Nikitin, Stoyan Nikolov, Nicolas Noble, Chester Obi, Andri Orap, Habib Pagarkar, Dasha Patroucheva, Sabuj Pattanayek, Vaishakhi Pilankar, Saranyan Vangal Rajagopalan, Daniel Rall, Majid Rasouli, Salonik Resch, Alberto Rojas, Annie Rong, Jesse Rosenstock, Michal Sapinski, Yashnarendra Saraf, Aaron Schooley, Alexander Schrepfer, Manish Shah, Alexander Shaposhnikov, Stan Shebs, Guangyu Shi, Oscar Shi, Santanu Sinha, Anna Sjövall, Ben Smith, Justin Smith, Jairaj Solanke, Fangrui Song, Raman Subramanian, Xenia Tay, Toni Thompson, Dima Tsumarau, Cassie(Yitong) Wang, Tommy Wang, Shu-Chun Weng, DJ Whang, Hailong Xiao, Andy Xu, and Jason Yuan for their contributions to Google's Arm porting efforts and the work described herein. References [1] 2025. RISC-V - Debian Wiki - wiki.debian.org. https://wiki.debian.org/RISC-V/. [Accessed 11-09-2025]. [2] E.R. Altman, D. Kaeli, and Y. Sheffer. 2000. Welcome to the opportunities of binary translation. Computer 33, 3 (2000), 40-45. [3] Apple. [n. d.]. About the Rosetta translation environment. https://developer.apple.com/documentation/apple-silicon/ about-the-rosetta-translation-environment/. [Accessed 10-09-2025]. [4] Babbage. 2023. Apple's Mac Transitions : 68k to PowerPC to Intel to Apple Silicon. The Chip Letter (Substack). https://thechipletter.substack.com/p/apple-transitions-68k-to-powerpc Accessed: 2025-09-10. [5] Johannes Bader, Andrew Scott, Michael Pradel, and Satish Chandra. 2019. Getafix: learning to fix bugs automatically. Proc. ACM Program. Lang. 3, OOPSLA, Article 159 (Oct. 2019), 27 pages. [6] Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy. 2016. Site reliability engineering: how Google runs production systems. " O'Reilly Media, Inc.". [7] Betsy Beyer, Niall Richard Murphy, David K Rensin, Kent Kawahara, and Stephen Thorne. 2018. The site reliability workbook: practical ways to implement SRE. " O'Reilly Media, Inc.". [8] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. 2006. Bigtable: a distributed storage system for structured data. In Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation - Volume 7 (Seattle, WA) (OSDI '06). USENIX Association, USA, 15. [9] James C. Corbett, Jeffrey Dean, Michael Epstein, Andrew Fikes, Christopher Frost, JJ Furman, Sanjay Ghemawat, Andrey Gubarev, Christopher Heiser, Peter Hochschild, Wilson Hsieh, Sebastian Kanthak, Eugene Kogan, Hongyi Li, Alexander Lloyd, Sergey Melnik, David Mwaura, David Nagle, Sean Quinlan, Rajesh Rao, Lindsay Rolig, Dale Woodford, Yasushi Saito, Christopher Taylor, Michal Szymaniak, and Ruth Wang. 2012. Spanner: Google's Globally-Distributed Database. In OSDI. [10] Shubhang Shekhar Dvivedi, Vyshnav Vijay, Sai Leela Rahul Pujari, Shoumik Lodh, and Dhruv Kumar. 2024. A Comparative Analysis of Large Language Models for Code Documentation Generation. In Proceedings of the 1st ACM International Conference on AI-Powered Software (Porto de Galinhas, Brazil) (AIware 2024). Association for Computing Machinery, New York, NY, USA, 65-73. 12 [11] Google Cloud Events. 2025. Shift down: A practical guide to platform engineering. YouTube. https://www.youtube. com/watch?v=FqBuYUeJkxE Accessed on 2025-09-22. [12] Redha Gouicem, Dennis Sprokholt, Jasper Ruehl, Rodrigo C. O. Rocha, Tom Spink, Soham Chakraborty, and Pramod Bhatotia. 2022. Risotto: A Dynamic Binary Translator for Weak Memory Model Architectures. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 1 (Vancouver, BC, Canada) (ASPLOS 2023). Association for Computing Machinery, New York, NY, USA, 107-122. [13] Svilen Kanev, Juan Darago, Kim Hazelwood, Parthasarathy Ranganathan, Tipp Moseley, Gu-Yeon Wei, and David Brooks. 2014. Profiling a warehouse-scale computer. In ISCA '15 Proceedings of the 42nd Annual International Symposium on Computer Architecture. 158-169. [14] Jan Keller and Jan Nowakowski. 2024. AI-powered patching: the future of automated vulnerability fixes. Technical Report. [15] Justin Levandoski, Garrett Casto, Mingge Deng, Rushabh Desai, Pavan Edara, Thibaud Hottelier, Amir Hormati, Anoop Johnson, Jeff Johnson, Dawid Kurzyniec, Sam McVeety, Prem Ramanathan, Gaurav Saxena, Vidya Shanmugam, and Yuri Volobuev. 2024. BigLake: BigQuery's Evolution toward a Multi-Cloud Lakehouse. In SIGMOD. [16] Hannah Lin, Martin Maas, Maximilian Roquemore, Arman Hasanzadeh, Fred Lewis, Yusuf Simonson, Tzu-Wei Yang, Amir Yazdanbakhsh, Deniz Altinbüken, Florin Papa, Maggie Nolan Edmonds, Aditya Patil, Don Schwarz, Satish Chandra, Chris Kennelly, Milad Hashemi, and Parthasarathy Ranganathan. 2025. ECO: An LLM-Driven Efficient Code Optimizer for Warehouse Scale Computers. https://arxiv.org/abs/2503.15669 [17] Andreas Lykke and Jesper Borlum. 2025. Adopting Arm at Scale: Bootstrapping Infrastructure. https://www.uber. com/blog/adopting-arm-at-scale-bootstrapping-infrastructure/. [Accessed 10-09-2025]. [18] MulticoreWare. 2024. Why Porting Applications Across Architectures Isn't Simple. MulticoreWare Inc. https:// multicorewareinc.com/why-porting-applications-across-architectures-isnt-simple/ [19] Peter Phillips and George Phillips. 2003. No Source Code? No Problem! What if you have to port a program, but all you have is a binary? Queue 1, 6 (Sept. 2003), 50-57. [20] Rachel Potvin and Josh Levenberg. 2016. Why Google Stores Billions of Lines of Code in a Single Repository. Commun. ACM 59 (2016), 78-87. http://dl.acm.org/citation.cfm?id=2854146 [21] Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. 2012. AddressSanitizer: a fast address sanity checker. In Proceedings of the 2012 USENIX Conference on Annual Technical Conference (Boston, MA) (USENIX ATC'12). USENIX Association, USA, 28. [22] Konstantin Serebryany and Timur Iskhodzhanov. 2009. ThreadSanitizer - data race detection in practice.. In Proceedings of the Workshop on Binary Instrumentation and Applications. NYC, NY, U.S.A., 62-71. http://doi.acm.org/10.1145/ 1791194.1791203 [23] Bor-Yeh Shen, Jiunn-Yeu Chen, Wei-Chung Hsu, and Wuu Yang. 2012. LLBT: an LLVM-based static binary translator. In Proceedings of the 2012 International Conference on Compilers, Architectures and Synthesis for Embedded Systems (Tampere, Finland) (CASES '12). Association for Computing Machinery, New York, NY, USA, 51-60. [24] Evgeniy Stepanov and Konstantin Serebryany. 2015. MemorySanitizer: fast detector of uninitialized memory use in C++. In Proceedings of the 2015 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). San Francisco, CA, USA, 46-55. [25] Jubi Taneja, Avery Laird, Cong Yan, Madan Musuvathi, and Shuvendu K. Lahiri. 2025. LLM-Vectorizer: LLM-Based Verified Loop Vectorizer. In Proceedings of the 23rd ACM/IEEE International Symposium on Code Generation and Optimization (Las Vegas, NV, USA) (CGO '25). Association for Computing Machinery, New York, NY, USA, 137-149. [26] Abhishek Verma, Luis Pedrosa, Madhukar R. Korupolu, David Oppenheimer, Eric Tune, and John Wilkes. 2015. Large-scale cluster management at Google with Borg. In Proceedings of the European Conference on Computer Systems (EuroSys). Bordeaux, France. [27] Titus Winters, Tom Manshreck, and Hyrum Wright. 2020. Software engineering at google: Lessons learned from programming over time. " O'Reilly Media, Inc.".
2510.14926
Current fluctuations in nonequilibrium open quantum systems beyond weak coupling: a reaction coordinate approach Khalak Mahadeviya,1, ∗Saulo V. Moreira,1 Sheikh Parvez Mandal,2 Mahasweta Pandit,2 Javier Prior,2 and Mark T. Mitchison1, 3, † 1School of Physics, Trinity College Dublin, College Green, Dublin 2, D02 K8N4, Ireland 2Departamento de F´ısica - CIOyN, Universidad de Murcia, Murcia E-30071, Spain 3Department of Physics, King’s College London, Strand, London, WC2R 2LS, United Kingdom We investigate current fluctuations in open quantum systems beyond the weak-coupling and Markovian regimes, focusing on a coherently driven qubit strongly coupled to a structured bosonic environment. By combining full counting statistics with the reaction coordinate mapping, we develop a framework that enables the calculation of steady-state current fluctuations and their temporal correlations in the strong-coupling regime. Our analysis reveals that, unlike in weak coupling, both the average current and its fluctuations exhibit nonmonotonic dependence on the system- environment interaction strength. Notably, we identify a regime where current noise is suppressed below the classical thermodynamic uncertainty bound, coinciding with enhanced anticorrelations in quantum jump trajectories and faster system relaxation. We further show that these features are linked to nonclassical properties of the reaction coordinate mode, such as non-Gaussianity and quantum coherence. Our results provide new insights and design principles for controlling current fluctuations in quantum devices operating beyond the standard weak-coupling paradigm. I. INTRODUCTION Open quantum systems far from equilibrium support currents of particles or energy, which can fluctuate sig- nificantly compared to their mean values [1, 2]. These current fluctuations are important for several reasons: they control the tradeoff between power and efficiency in heat engines [3, 4], they limit the precision of pa- rameter estimation in nonequilibrium settings [5–14], and they carry important information about system proper- ties, e.g., particle statistics [15–17] or dissipative phase transitions [18, 19]. The universal features of current fluctuations have come under renewed scrutiny in re- cent years, with the discovery of general bounds such as the thermodynamic [20–22] and kinetic [23, 24] un- certainty relations. Of particular interest is the potential of open quantum systems to violate these classical uncer- tainty relations, as demonstrated theoretically by numer- ous case studies [25–31] as well as via the derivation of looser, quantum bounds [32–43]. This raises the tantaliz- ing prospect of exploiting quantum resources to reduce current fluctuations, allowing for more reliable thermal machines [25, 30, 44–46] or precise timekeeping [47]. One way to realize nonclassical current fluctuations is by moving to a regime of strong coupling between the sys- tem and environment, where non-Markovian effects can arise. This regime is natural for many open quantum systems, because boundary effects often dominate due to the small sizes involved. However, the description of current fluctuations in strongly coupled, nonlinear open quantum systems typically requires sophisticated tech- niques such as perturbative nonequilibrium Green func- ∗mahadevk@tcd.ie † mark.mitchison@kcl.ac.uk tions [48], tensor-network simulations [49, 50], or pseudo- mode approaches [51, 52], whose predictive power comes at the cost of complexity which may obscure physical intuition. An appealing method in this regard is the reaction-coordinate (RC) mapping, in which the most significant environmental mode is incorporated as part of the system, while the remaining modes are traced out within a Born-Markov approximation [53–55]. The RC mapping can be understood as the first step of a more general transformation of a noninteracting environment into a one-dimensional chain [56], with the advantage that the RC captures the most important environmen- tal properties explicitly and simply by a single mode. This approach has been widely used to model quantum thermodynamic processes at strong system-reservoir cou- pling, albeit mostly at the level of average heat and work exchanges [57–63]. Notably, Shubrook et al. [64] recently used the RC method to study the fluctuations of heat transferred during an equilibration process. In this work, we employ the RC mapping to investi- gate the fluctuations of currents sustaining a nonequi- librium steady-state (NESS) beyond the weak-coupling regime. We consider a minimal model of a two-level sys- tem (qubit) driven by a coherent field and dissipating energy into a structured bosonic reservoir with a peaked spectral density (Sec. II A), which can be mapped onto a dissipative Jaynes-Cummings model via the RC mapping (Sec. II B). We work within the rotating-wave approxima- tion (RWA), which limits our study to moderate coupling strengths that are nonetheless well beyond the validity of the standard weak-coupling master equation. In this ap- proximation, heat is transferred via the exchange of ex- citations (quanta) whose total number is conserved, and we therefore focus on the full counting statistics of this excitation current, as described in Sec. II C. Within our model, we find an interesting nonmono- tonic dependence of the current noise on the system- arXiv:2510.14926v1 [quant-ph] 16 Oct 2025 2 environment coupling strength, induced by the struc- tured environment. In particular, we show that the cur- rent noise dips below the classical TUR bound when the system-environment coupling is comparable to the Rabi frequency of the drive (Sec. III A). To understand this behavior, we exploit the fact that the RC mapping rep- resents the original (possibly non-Markovian) setup via an extended system with Markovian dynamics, i.e., a Markovian embedding [65]. This allows us to apply the powerful conceptual framework of quantum-jump unrav- eling, whereby the excitation current can be modeled as a sequence of detector “clicks” whose statistical properties are inherited from the underlying quantum dynamics [2]. In particular, we find that the dip in the current noise is associated with strong anticorrelations between subse- quent clicks (Sec. III B), a hallmark of the nonlinearity induced by qubit-RC hybridization. We also show that the system’s relaxation rates—encoded by the Liouvil- lian eigenvalues—are maximal at this point in parameter space. Finally, we characterize nonclassical features of the environment by quantifying the non-Gaussianity [66], second-order coherence [67], and coherence in the Fock basis [68] of the RC mode (Sec. III D). We find that, al- though these quantifiers are correlated with reduced cur- rent fluctuations, there is no sharp correspondence be- tween nonclassicality of the state and quantum TUR vi- olations, in accordance with previous studies [30, 31, 69]. Our results provide design principles for suppressing current fluctuations using strong coupling to a structured environment. Our work also demonstrates how the RC mapping facilitates a physically transparent analysis of current fluctuations beyond the weak-coupling regime, by exploiting the Markovian nature of the embedding. A similar approach could be applied to more complex environmental structures using more sophisticated em- beddings developed in recent years [70–76]. II. MODEL A. Coherently driven qubit We consider a coherently driven qubit coupled to a bosonic thermal bath at temperature T. The system Hamiltonian can be expressed as ˆHS(t) = ωqˆσ+ˆσ−+ 2Ωsin(ωdt)ˆσx (1) where ωq is the qubit transition frequency, Ωis the Rabi frequency, ωd is the drive frequency, and ˆσx,y,z,+,−denote Pauli matrices. Throughout this work, we set ̵h = 1 and kB = 1. Additionally, the bath Hamiltonian is given by ˆHB = ∑ k νkˆc† kˆck, (2) where ˆc† k(ˆck) are bosonic creation (annihilation) opera- tors for modes with frequencies νk, with [ˆck, ˆc† k′] = δkk′. Residual bath (𝑇) 𝜆 ȁ ۧ 0 ȁ ۧ 1 𝜔𝑐 ωd γ ωq Extended system FIG. 1. Reaction coordinate (RC) mapping for a coherently driven qubit strongly coupled to its environment. The RC mapping incorporates a harmonic oscillator mode into the extended system. ∣0⟩and ∣1⟩denote the qubit eigenstates in the σz basis, with transition frequency ωq. The qubit couples to a drive at ωd and the RC mode with frequency ωc and coupling λ. The extended qubit-RC system interacts weakly (γ) with the residual bath at temperature T. By including the interaction between system and bath, the total Hamiltonian reads ˆH(t) = ˆHS(t) + ˆHB + ˆHSB. (3) The system-bath interaction Hamiltonian is given by ˆHSB = ∑ k hk(ˆσ+ + ˆσ−)(ˆc† k + ˆck), (4) where hk is the coupling strength between the qubit and the bath frequency modes νk. In effect, the interaction between the system and the bath can be captured via the spectral density, S(ω) = ∑k ∣hk∣2δ(ω −νk) [77]. We assume the Drude-Lorentz form of spectral density cor- responding to a strong coupling between the system and bath frequency modes centered around ωc, S(ω) = 4ωαλ2ω2 c (ω2 −ω2c)2 + (2παωcω)2 . (5) The amplitude of the spectral function, λ, is related to the interaction strength, hk, as λ2 = ∑k ∣hk∣2, and its width ωcα is determined by a dimensionless parameter α. In order for the Markov approximation to be valid, the system relaxation time (τS) has to be much larger than the bath correlation time (τB), i.e., τS ≫τB. For the above spectral density, τB is related to the inverse of the width of the spectral function ωcα. In this context, the Markovian regime is then defined by, ωcα ≫λ. (6) This condition results in flatter spectral densities as shown in the green curve in Fig. 2 with λ = 0.02ωq, ωc = ωq and α = 1, and the system dynamics for this case can be described within the standard Lindblad mas- ter equation formalism. B. Reaction coordinate mapping In this work, we focus on investigating particle current fluctuations beyond the weak coupling regime. In the 3 0 1 2 ω/ωq 0.000 0.005 0.010 S(ω)/ωq S0, α = 1 S0, α = 0.01 SRC, α = 0.01 0 1 2 0 5×10−4 FIG. 2. Drude-Lorentz spectral density with central fre- quency ωc = ωq and λ = 0.02ωq for different widths α = 0.01 (blue) and 1 (green). The sharply peaked curve for α = 0.01 represents the strong coupling to the bath and the dashed purple curve represents the Ohmic spectral density function for the residual environment after RC mapping. model described above, the strong system-bath coupling, λ ≳ωcα, corresponds to a narrow spectral density cen- tered around the qubit transition frequency ωq, as shown in the blue curve in Fig. 2. This means that we go be- yond the Markovian regime, and the standard Lindblad formalism is not valid to describe the system dynamics. However, we can redefine the system by incorporating a collective mode of the environment, as illustrated in Fig. 1. By constructing an extended system that couples weakly to the residual environment, we are able to de- rive a Lindblad master equation governing the dynamics of the extended system. This can be achieved by imple- menting the reaction coordinate (RC) mapping method developed in Refs. [53–55]. The application of the RC mapping to the total Hamil- tonian in Eq. (3) allows us to write the mapped Hamil- tonian as ˆH′(t) = ˆHS(t) + ˆHRC + ˆHS,RC + ˆHB′ + ˆHRC,B′. (7) The RC and system-RC interaction Hamiltonian are de- fined as ˆHRC = ωcˆa†ˆa, ˆHS,RC = λˆσx(ˆa† + ˆa), (8) with the bosonic creation (annihilation) operators ˆa†(ˆa) corresponding to the RC mode. The residual bath is de- scribed by modes ˆb† k(ˆbk) with frequency ωk with Hamil- tonian, ˆHB′ = ∑ k ωkˆb† kˆbk, (9) and the interaction Hamiltonian between the residual bath and the RC mode is given by, ˆHRC,B′ = ∑ k fk(ˆa† + ˆa)(ˆb† k + ˆbk). (10) We identify the effective system Hamiltonian as ˆHES = ˆHS(t) + ˆHRC + ˆHS,RC. The RC mode is defined such that λ(ˆa† + ˆa) = ∑k hk(ˆc† k + ˆck), and RC mode frequency ω2 c = λ−2 ∑k νk∣hk∣2. As discussed in Refs. [53, 54], under the RC mapping the original bath spectral density in Eq. (5) transforms to an Ohmic form, SRC(ω) = ∑ k ∣fk∣2δ(ω −ωk) = αωe−ω/Λ, (11) for the residual bath. We note that this correspondence between the spectral densities holds exactly only in the limit Λ →∞. In this work, however, we approximate it by setting a large but finite value of Λ. This is important because it ensures the convergence of all the terms in the derivation of the master equation discussed below. To derive the master equation for the extended system, we move to a rotating frame defined by the unitary oper- ator ˆU(t) = exp(iωd ˆNtott), where ˆNtot = ˆσ+ˆσ−+∑k ˆc† kˆck = ˆσ+ˆσ−+ ˆa†ˆa + ∑k ˆb† kˆbk is the total excitation number op- erator. We note that our choice of rotating frame de- fined by ˆU(t) is equivalent for both the original and RC-mapped description. In the rotating frame, the RC mapped Hamiltonian in Eq. (7) is transformed as ˜H′ = ˆU †(t) ˆH′ ˆU(t). Under the rotating wave approxima- tion (RWA), we can neglect fast oscillating terms to get the following Hamiltonian, with only remaining coupling terms that conserve the excitation number, ˜H′ = ∆qˆσ+ˆσ−+ Ω(ˆσ+ + ˆσ−) + λ(ˆσ+ˆa + ˆσ−ˆa†) + ∆cˆa†ˆa + ∑ k fk(ˆa†ˆbk + ˆaˆb† k) + ∑ k δkˆb† kˆbk, (12) where ∆q = ωq−ωd, ∆c = ωc−ωd, and δk = ωk−ωd. In this frame, the residual bath spectral density is transformed into ˜SRC(ω) = SRC(ω + ωd). The RWA holds when the drive frequency ωd is near resonant to both the qubit transition frequency, ωq, and the RC frequency, ωc, i.e., ∆q,∆c ≪ωd, and the Rabi coupling Ωand the qubit-RC coupling λ are weak enough and satisfy Ω,λ ≪ωq. The total Hamiltonian in Eq. (12) commutes with the total excitation number operator ˆNtot, which represents the total number of energy quanta in the system and bath. This conservation law is a consequence of the RWA, which dictates that energy exchange between subsystems occurs via the exchange of these excitations. We there- fore focus on the excitation current between the system and the bath, rather than the energy current, in the fol- lowing. Now, we trace out the residual bath to derive a quan- tum master equation in Gorini-Kossakowski-Sudarshan- Lindblad (GKSL) form, starting from the total Hamilto- nian in the rotating frame in Eq. (12). We work within a local approach to dissipation, which is justified so long the drive strength Ω, coupling λ, and detunings ∆q,c are small compared to the inverse bath correlation time τB′ = max{Λ−1,T −1}, which is dictated here by the in- verse temperature of the bath due to the very large UV cutoff Λ. As discussed in Refs. [78–82], we can then treat 4 the bath spectral function near frequency ωc as approxi- mately constant on the scale of the small energy splittings induced by Ω,λ,∆q,c. We thus arrive at the following lo- cal Lindblad master equation, dˆρ dt = Lˆρ = −i[ ˜HES, ˆρ]+γ(nB+1)D[ˆa]ˆρ+γnBD[ˆa†]ˆρ. (13) with γ = SRC(ωc), and the bosonic occupation number nB ≡nB(ωc) = (exp(ωc/T) −1)−1 for the residual bath at temperature T. Here, ˜HES = ∆qˆσ+ˆσ−+ Ω(ˆσ+ + ˆσ−) + ∆cˆa†ˆa + λ(ˆσ+ˆa + ˆσ−ˆa†) is the rotating frame Hamiltonian and ˆρ is the density matrix of the extended system, which includes the driven qubit and the RC mode. The above master equation is derived within the Born- Markov approximation, which imposes that the residual bath relaxes much faster than the extended system re- laxation timescale (τES ∼γ). This approximation, like our use of a local GKSL equation, assumes an approxi- mately flat spectral function S(ω)nB(ω) around the rel- evant transition frequencies of the system. In our case, this is justified so long as Ω,λ,τq,c ≪τ −1 B′ ∼T. Further- more, validity of the Born approximation is limited to weak coupling between the extended system and envi- ronment, requiring the coupling constants fk to be weak. This assumption is also crucial for the validity of the RWA above. Although the Ohmic residual spectral den- sity in Eq. (11) implies that fk can become unboundedly large for large ωk, this assumption is justified since we coarse-grain over times much longer than the bath cor- relation time, so that these high-frequency contributions average out to zero. From here on, we refer to Eq. (13) as reaction coordi- nate Lindblad master equation (RC-LME). In the long- time limit, the extended system dynamics governed by the RC-LME evolves towards a time-independent state, referred to as the nonequilibrium steady-state (NESS). The NESS, denoted by ρss, satisfies, Lρss = 0, (14) and represents the stationary solution of the RC-LME. C. Full counting statistics 1. Excitation current into the original bath Given the RC-LME in Eq. (13), we are interested in studying the steady-state excitation current into a strongly coupled environment. To compute the first and second cumulants of this excitation current we employ the method of full counting statistics (FCS) [2]. Starting from the original description, note that un- der the rotating frame transformation introduced in the previous section, the Hamiltonian takes the form ˜H = ˜HS + ˜HB + ˆHSB, where ˜HS = ∆qˆσ+ˆσ−+ Ω(ˆσ+ + ˆσ−) and ˜HB = ∑k(νk −ωd)ˆc† kˆck. Within this description, we define the excitation transfer using the two-point measurement scheme, formulated in the measurement basis of the original bath excitation number, ˆNB = ∑k ˆc† kˆck. Under this protocol, we consider projective measurements on the initial system-bath state, ˆρtot(0) = ˆρ(0) ⊗ˆρE, and on the unitarily evolved state, ˆρtot(t) = exp(−i ˜Ht)ˆρtot(0)exp(i ˜Ht), at a later time t. The dif- ference NB between the outcomes of such measurements corresponds to the net number of excitations exchanged with the bath, during time t. The probability distribu- tion P(NB,t) of observing NB at time t can therefore be constructed by repeating the protocol many times. Note that P(NB,t) = Tr[ˆρ(NB,t)], where ˆρ(NB,t) de- notes the (unnormalized) conditional state associated to the ensemble of protocols in which exactly NB net ex- citations are transferred after time t. Accordingly, the unconditional state reads ˆρ(t) = ∑∞ NB=0 ˆρ(NB,t). To compute the full statistics of the counting variable NB, it is convenient to introduce the characteristic func- tion of P(NB,t), defined as its Fourier transform, M(χ,t) = Tr[ˆρ(χ,t)] = ∑ NB eiχNBTr[ˆρ(NB,t)]. (15) Here, χ is called a counting field. The counting-field de- pendent system state ˆρ(χ,t) can be shown to obey a gen- eralized quantum master equation of the form [1, 2], dˆρ(χ,t) dt = Lχˆρ(χ,t), (16) with the counting-field dependent tilted Liouvillian Lχ, and ˆρ(χ,0) = ˆρ(0). The solution of such a generalized QME is given by ˆρ(χ,t) = eLχ ˆρ(0). Now, we can write the cumulant generating function ϕ(χ,t) as, ϕ(χ,t) = ln[M(χ,t)] = ln[Tr[eLχtˆρ(0)]]. (17) With this, the nth cumulant of the net excitation at time t, ⟪NB(t)n⟫, can be computed as, ⟪NB(t)n⟫= (−i∂χ)nϕ(χ,t)∣ χ=0 . (18) As for the steady-state statistics, in the long time limit, this cumulant generating function takes the following asymptotic form [2], ϕ(χ,t) ≃θ0(χ)t, (19) where θ0(χ) is the leading eigenvalue of the tilted Liou- villian Lχ, i.e., the eigenvalue with the largest real part. Thus, using Eq. (18) and Eq. (19), we are able to com- pute cumulants of net excitations NB(t) in the long time limit. By defining the scaled cumulant generating func- tion, ϕ(χ) = lim t→∞∂tϕ(χ,t) = θ0(χ), (20) 5 the scaled cumulants of the excitations in the long time limit are given by, ∂t⟪NB(t)n⟫= (−i∂χ)nθ0(χ)∣ χ=0 . (21) In order to employ the FCS framework to calculate the statistics of the steady-state excitation current in our setup, we are required to derive a counting-field dressed generalized master equation (GME) of the form in Eq. (16). To this end, we start from the rotating frame Hamiltonian introduced earlier, ˜H = ˜HS+ ˜HB+ ˆHSB, with ˜HS = ∆qˆσ+ˆσ−+ Ω(ˆσ+ + ˆσ−) and ˜HB = ∑k(νk −ωd)ˆc† kˆck. To account for excitation transfer into the original bath, we introduce a counting field χ by transforming the total Hamiltonian as ˜H →eiχ ˆ NB/2 ˜He−iχ ˆ NB/2, to get the tilted Hamiltonian, ˆHχ = ˜HS + ˜HB + eiχ ˆ NB/2 ˆHSBe−iχ ˆ NB/2. (22) The counting field χ in the interaction term keeps track of the excitation exchange at the original system-bath boundary. Within the RC mapping, the interaction term trans- forms as, ˆHSB →ˆHS,RC. Hence, the RC mapped tilted Hamiltonian is given by [51], ˆH′ tot,χ = ˜HS + ˜HRC + e i 2 χ ˆ NB ˆHS,RCe−i 2 χ ˆ NB + ˜HB′ + ˆHRC,B′, (23) with ˜HRC = ∆cˆa†ˆa and ˜HB′ = ∑k δkˆb† kˆbk. Note that the original bath number operator ˆNB also satisfies ˆNB = ∑k ˆc† kˆck = ˆa†ˆa+∑k ˆb† kˆbk. Now, using the Baker-Campbell- Hausdorff (BCH) formula, the term eiχ ˆ NB ˆHS,RCe−iχ ˆ NB simplifies to ˆHχ,S,RC = λ(eiχ/2 ˆa†ˆσ−+ e−iχ/2 ˆσ+ˆa). (24) By utilising the RC-mapped tilted Hamiltonian in Eq. (23), we derive the RC-mapped tilted Liouvillian under the same approximations as in the previous sec- tion, to get a generalized master equation (GME) of the form (16) with Lχˆρ = −i[ ˜HES,χ, ˆρ]χ + γ(nB + 1)D[ˆa]ˆρ + γnBD[ˆa†]ˆρ, (25) with ˜HES,χ = ˜HS + ˜HRC + ˆHχ,S,RC and [ ˜HES,χ, ˆρ]χ = ˜HES,χˆρ −ˆρ ˜HES,−χ. The RC mapped tilted Liouvillian Lχ, in the above GME with counting field χ, accounts for the excitation exchange between the system and the RC mode. The full statistics for the corresponding exci- tation current can be calculated using Eq. (21). 2. Excitation current into the residual bath Similarly, we can derive another GME with a counting field χ′ that accounts for the excitation exchange between the extended system and the residual bath. This can be achieved by following the same two-point measure- ment protocol as described above, but now in the mea- surement basis of the residual bath excitation number ˆNB′ = ∑k ˆb† kˆbk. The associated counting variable, which corresponds to the difference between the initial and fi- nal measurement outcomes, is denoted by NB′. Within this protocol, we obtain the following RC-mapped tilted Hamiltonian, ˆH′ tot,χ′ = ˜HS + ˜HRC + ˆHS,RC + ˜HB′ + e i 2 χ′ ˆ NB′ ˆHRC,B′e−i 2 χ′ ˆ NB′. (26) With this tilted Hamiltonian, we can derive the corre- sponding GME of the form, L′ χ′ ˆρ = −i[ ˜HES, ˆρ] + γ(nB + 1)Dχ′[ˆa]ˆρ + γnBD−χ′[ˆa†]ˆρ, (27) with Dχ[ˆL]ˆρ = eiχ ˆLˆρˆL† −1 2{ˆL† ˆL, ˆρ}. (28) The full statistics of the net excitation current into the residual bath, related to the counting variable NB′, can be obtained from the cumulant generating function asso- ciated with the above GME. 3. Equivalence of the two excitation currents Using the framework described above, we are now in a position to show that, in the long time limit, the statis- tics of the net excitations NB exchanged between the sys- tem and original bath is identical to the statistics of net excitations NB′ exchanged between the RC and the resid- ual environment. To formalise this equivalence, in Ap- pendix A, using the following unitary superoperators, U ˆρ = e−i 2 χˆa†ˆaˆρe−i 2 χˆa†ˆa, U†ˆρ = e i 2 χˆa†ˆaˆρe i 2 χˆa†ˆa, (29) we demonstrate that the tilted Liouvillian Lχ (Eq. (25)) and L′ χ′ (Eq. (27)) are related as, U†LχU = L′ χ. (30) From Eq. (19), we recall that in the long-time limit, all the cumulants of the net excitation are determined by the eigenvalues of the corresponding tilted Liouvillian. Since the leading eigenvalue θ0(χ) of the tilted Liouvillian is invariant under unitary transformation, both GMEs in Eq. (27) and Eq. (25) yield an identical cumulant gen- erating function ϕ(χ,t) = θ0(χ)t in the long-time limit. This implies that the cumulants of the excitation cur- rent between the system and environment in the original description can be calculated directly from the GME in Eq. (27). From here on, we drop the subscripts from NB and NB′, and refer to the net number of excitations as N. 6 4. Current fluctuations and correlations Finally, we can now define the average excitation cur- rent and diffusion coefficient in terms of the first and sec- ond scaled cumulants of the net excitation transfer N(t) in the long-time limit, J = lim t→∞ d dtE[N(t)], D = lim t→∞ d dtVar[N(t)], (31) where E[●] denotes the expectation value (first cumulant) and Var[●] denotes the variance (second cumulant). To discuss the TUR, we also need to consider the rate of entropy production, which is associated with the flow of heat into the bath [83]. We assign an energy ωc to each excitation transferred into the bath with an energy ωc, leading to the steady-state heat current ˙Q = ωcJ. This assumption is fully consistent with our local GKSL model, which assumes that the residual bath cannot re- solve energy differences on the order of the small split- tings λ,Ω,∆q,c ≪ωc [81]. We thus obtain the steady- state entropy production rate as ˙Σ = ˙Q T = J ln(nB + 1). (32) Our definitions are also consistent with Refs. [62, 64], which propose that entropy production within the RC approach should be associated with heat exchanged with the residual bath. In terms of these quantities, the thermodynamic un- certainty relation (TUR) reads Q = D J2 ˙Σ ≥2. (33) This trade-off between steady-state current fluctuations and entropy production holds for classical systems un- dergoing Markovian dynamics with local detailed bal- ance [20–22]. With J, D, and ˙Σ identified above, the thermodynamic uncertainty ratio Q follows as Q = D J2 ˙Σ = D J ln(nB + 1). (34) To obtain our results in Sec. III, the average current J and noise D are computed from the leading eigenvalue of the tilted generator defined in Eq. (27). However, to obtain further insight into the statistics of the excitation current, we also exploit the quantum-jump unraveling of full counting statistics. Here, the counting variable N is interpreted in terms of the “clicks” of an ideal detec- tor that efficiently monitors the emitted and absorbed quanta [2]. Since the dynamics of the extended qubit- RC system is Markovian, such a detector can be intro- duced without affecting the dynamics or current statis- tics. Therefore, even if such an ideal detector cannot be implemented in practice, it is a useful fiction that pro- vides a time-resolved picture of current fluctuations, even in this non-Markovian setting. We define N+(t) (respectively, N−(t)) as the total number of excitations that the RC emits into (absorbs from) the residual bath. The total excitation transfer up to time t is therefore given by N(t) = N+(t) −N−(t). The associated stochastic current is defined as I(t) = dN/dt, such that its expectation value E[I(t)] = J repro- duces the average current defined in Eq. (31). Following Ref. [2], the statistics of this current can be determined from the current superoperator J ˆρ = ∑ k=± νk ˆLk ˆρˆL† k. (35) Each term in the sum describes a possible quantum jump in the trajectory unravelling of the RC-LME (13). When a jump is detected in channel k, the counting variable N is incremented by a weight νk and the conditional state is updated by the action of the jump operator ˆLk. In this case, ν+ = +1 and ˆL+ = √ γ(1 + nB)ˆa describe emission processes that increase N by one unit, while ν−= −1 and ˆL−= √γnBˆa† describe absorption events that correspondingly decrease it. The average steady-state current can be computed as J = Tr[J ˆρss]. (36) The stationary two-point correlation function between I(t) and I(t + τ) is given by F(τ) = E[δI(t)δI(t + τ)] = Kδ(τ) + Tr[J eL∣τ∣J ˆρss] −J2, (37) where δI(t) = I(t) −J denotes current fluctuations and K = ∑k ν2 kTr[ˆL† k ˆLk ˆρss]. In our case, since νk = ±1, K cor- responds to the dynamical activity [84, 85], which quan- tifies the frequency of jumps. As shown in Ref. [2], the noise can be written in terms of the two-point correlation function as D = 2∫ ∞ 0 dτF(τ). (38) Features of the noise can therefore be understood in terms of correlations within the stochastic current, as we discuss in more detail below. III. RESULTS A. Current fluctuations beyond weak coupling We now apply the methods developed in the previous section to compute the steady-state excitation current and its cumulants for a coherently driven qubit strongly coupled to a bosonic environment, the dynamics of which is described by the RC-LME in Eq. (13). In Fig. 3, we present the average current defined in Eq. (36), noise (second cumulant), signal-to-noise ratio (SNR), and ther- modynamic uncertainty ratio (TUR) as functions of the 7 0.00 0.02 0.04 0.000 0.002 0.004 J (a) 0.00 0.02 0.04 0.000 0.002 0.004 D (b) 0.00 0.02 0.04 λ 0.000 0.005 0.010 0.015 J2/D (c) 0.00 0.02 0.04 λ 2 4 6 Q (d) α = 0.01 α = 0.04 α = 1 FIG. 3. (a) Average excitation current J into the bath, (b) fluctuations of the excitation current D, (c) signal-to- noise ratio (SNR) J2/D, and (d) TUR ratio Q as functions of interaction strength λ for different spectral density width α = 0.01, 0.04, and 1. We set ∆q = 0, ∆c = 0, Ω= 0.005, and nB = 0.01. interaction strength λ. The results are shown for dif- ferent spectral widths α in the Drude-Lorentz spectral density. In all our calculations, we take ωc = 1 as the unit of energy, and we consider two values of α = 0.01 and 0.04, both in the strong-coupling regime. The weak- coupling case, α = 1, which yields a relatively flat spec- tral density, is also considered as a benchmark. For the weak-coupling case, the heat current cumulants are com- puted using the standard Lindblad master equation and full counting statistics [2]. We observe a clear qualitative difference between the strong and weak coupling regimes. For α = 1, both the average current J and noise D increase monotonically with λ. In contrast, for smaller α values (blue and green curves), the average current exhibits a nonmonotonic de- pendence on λ, reaching a peak at intermediate coupling strengths and decreasing toward zero for large λ. The noise follows a similar trend to the current for α = 0.01. However, for α = 0.04, the noise displays a pronounced dip around the same λ values where the current peaks. Notably, this dip coincides with the parameter regime where the classical TUR is violated, which occurs only in the strongly coupled cases. Despite similar current magnitudes near the peak, the SNR is notably higher for α = 0.04, indicating that the TUR violation reflects a genuine reduction in fluctuations. We now focus on the case α = 0.04, and plot the average heat current into the environment and the noise in Fig. 4, for different fixed values of the drive strength Ωand bath occupation nB. We first examine the dependence on bath temperature for a fixed drive strength Ω= 0.005, shown in panels 4(a) and 4(c). As nB decreases, we see an in- crease in the mean heat current and a reduction in noise, with the maximum current and minimum noise occurring 0.00 0.02 0.04 0.000 0.005 0.010 0.015 (b) 0.00 0.02 0.04 λ 0.00 0.01 (d) 0.00 0.02 0.04 0.0000 0.0025 0.0050 0.0075 J (a) 0.00 0.02 0.04 λ 0.000 0.005 0.010 D (c) Ω: 0.005 0.01 0.02 nB : 0.01 0.1 1 FIG. 4. Average excitation current J into the bath and fluc- tuations of the excitation current D as functions of interaction strength λ, in (a) and (c) for different temperatures leading to bath occupation nB = 0.01, 0.1 and 1, with fixed Ω= 0.01, and in (b) and (d) for different drive strengths Ω= 0.005, 0.01 and 0.02, with fixed nB = 0.01. We set ∆q = 0 and ∆c = 0. at nB = 0.01, corresponding to the lowest bath temper- ature. Notably, the nonmonotonic features in the noise, characterized by distinct peaks and a trough, are most pronounced at this temperature, and gradually disappear as the temperature increases, vanishing for nB ∼1. The enhanced current at low temperatures suggests that the system relaxes to the lower excited state more efficiently in this regime. Next, we fix the bath occupation at nB = 0.01 and investigate the behavior of the current cumulants as a function of λ, for different values of the drive strength Ω, as shown in Figs. 4(b) and 4(d). We consider three values: Ω= 0.005, 0.01, and 0.02. The mean current increases with increasing drive strength, which is consis- tent with an overall increase in jump activity at stronger driving. However, this also leads to an increase in current fluctuations, with the peaks and troughs in the noise be- coming less pronounced at higher Ω. These features are most clearly resolved at the weakest drive. B. Current correlations In order to deepen our understanding of the features shown in Fig. 4, we exploit the relation between the noise and the current auto-correlation function in Eq. (38). We focus on the regular part of the correlation function by subtracting the singular term, Kδ(τ), from Eq. (37), C(τ) ≡F(τ) −Kδ(τ) = Tr[J eL∣τ∣J ˆρss] −J2. (39) Since the term Kδ(τ) is related to temporally uncor- related fluctuations (shot noise) [2], Eq. (39) quantifies 8 0.00 0.02 0.04 λ −0.005 0.000 0.005 (a) 0.0 2.5 5.0 7.5 τ · Ω −2 −1 0 C(τ) ×10−5 (b) λ = 0.0067 λ = 0.0103 λ = 0.015 λ = 0.02 0.00 0.02 0.04 λ −0.005 0.000 0.005 0.010 0.015 (c) 0 5 10 15 τ · Ω −4 −2 0 C(τ) ×10−5 (d) λ = 0.0122 λ = 0.0162 λ = 0.02 λ = 0.025 J D K D −K FIG. 5. Average excitation current J, diffusion coefficient D, dynamical activity K, and D −K as functions of interaction strength λ, in (a) and (c) for different drive strengths Ω= 0.01 and Ω= 0.02. In (b) and (d), current correlation functions C(τ) for selected λ values marked by crosses in D −K plots in panels (a) and (c), respectively. contributions from temporal correlations between differ- ent quantum jumps. For example, when the jumps are totally uncorrelated, F(τ) = Kδ(τ), whereas C(τ) = 0. Furthermore, we note that C(τ) can be related to the long-time diffusion coef- ficient as D −K = 1 2 ∫ ∞ 0 C(τ)dτ. (40) Once again, since the term K is due to white noise, D−K genuinely captures the contribution from temporal cor- relations. This quantity was previously introduced and studied in Ref. [19] in connection with dissipative phase transitions. In Fig. 5, we plot D −K and D as functions of the interaction strength λ, for Ω= 0.01 in Fig. 5(a), and Ω= 0.02 in Fig. 5(c). We see that D−K follows the same trend as D, with peaks and troughs coinciding for the same values of λ. Such nonmonotonic behavior of D and D−K results from temporal correlations, as evidenced by Eq. (40), with peaks (troughs) associated with correlated (anticorrelated) jumps. It is therefore instructive to look at the behavior of the correlation function in Eq. (39) for values of λ corresponding to peaks, troughs, alongside some other points, indicated in Figs. 5(a)-(c). We plot these correlation functions in Fig. 5(b), for Ω= 0.01, and Fig. 5(d), for Ω= 0.02. We note that the correlation functions C(τ) shown in Fig. 5(b)-(d) display an oscillatory behavior, starting at negative values for C(0) and eventually converging to zero for large τ. We highlight that at the troughs, in both cases, C(0) reaches its most negative value, and the period of the oscillations of C(τ) is the largest, corrob- orating the predominance of anticorrelated jumps and, 0.00 0.02 0.04 λ 0 −4 −8 Re[θi] ×10−3 (a) 0.00 0.02 0.04 λ 0 −4 −8 Re[θi] ×10−3 (b) 0 1 2 D/J2 ×103 0.0 0.5 1.0 D/J2 ×103 Re[θ1] Re[θ2] Re[θ3] D/J2 FIG. 6. Real part of the first three nonzero Liouvillian eigen- values θ1, θ2, θ3 and D/J2 as functions of interaction strength λ, in (a) and (b) for different drive strengths Ω= 0.01 and Ω= 0.02. overall, antibunched statistics. This means that, near the troughs of D, the probability of observing a second jump is smallest at zero time delay after the first one (τ = 0). Such antibunching in a bosonic mode is a hallmark of nonlinear dynamics, indicating that strong hybridization with the qubit prevents multiple excitations from being created in the RC (analogous to the photon blockade ef- fect [86]). C. Relaxation timescales As discussed in several previous works [2, 18, 19], current fluctuations are associated with an intrinsic timescale of the system. In particular, consider the time- averaged stochastic current over an observation window t, ¯I(t) = 1 t ∫ t 0 dt′ I(t′), (41) which is an unbiased estimator of J since E[¯I(t)] = J. The timescale D/J2 is precisely the time t taken for the signal-to-noise ratio of the estimator ¯I(t) to reach one [2]. Thus, we expect the points in parameter space with min- imal current noise to be associated with fast regression of current fluctuations back to the mean. To quantify these timescales more precisely, we con- sider the spectrum of the Liouvillian L of the extended system, Eq. (13). We note that the stationary state of the system satisfies Lρss = 0, and is therefore associated to the zeroth eigenvalue θ0 = 0. Furthermore, it is known that the real parts of the eigenvalues θi>0, which satisfy Re[θi] < 0, determine the rates with which the system exponentially relaxes towards the steady-state and with which correlation functions decay [2]. Note that this sim- ple relation between relaxation timescales and the Liou- villian spectrum of the extended system is possible here due to the Markovian nature of the embedding. Here, we focus on the first three nonzero eigenvalues, which are plotted in Fig. 6(a), for Ω= 0.01, and Fig. 6(b), for Ω= 0.02 [the same as in Figs. 5(a)-(c)]. We see that the real parts of the eigenvalues become increasingly neg- 9 ative in the region where current fluctuations are mini- mal, with the lowest three eigenvalues reaching their min- imum in the region where D/J2 is lowest (dashed lines in Fig. 6). Thus, the reduction in current fluctuations due to strong coupling is correlated with faster relaxation of the extended qubit-RC system. D. TUR violation and nonclassicality in RC mode An important feature of the RC mapping description is that the state of a part of the environment is accessi- ble. This is a notable advantage of our approach, as it enables us to probe different aspects of nonclassicality in a part of the environment itself, namely the RC, and con- nect it quantitatively to violation of the classical TUR. Although TUR violations have been extensively studied in relation to nonclassical features of the system state, such as quantum coherence and entanglement [29, 31], the nonclassicality of the environmental degrees of free- dom has not been considered yet in this regard. In Fig. 7(a), we plot the TUR ratio Q as a function of λ, for different values of Ωand observe violations in two of the cases. We indicate, with black crosses, the points where each curve attains its minimum TUR value. First, we consider the zero-delay second-order coherence [67] of the RC mode given by g(2)(0) = ⟨ˆa†ˆa†ˆaˆa⟩ ⟨ˆa†ˆa⟩2 . (42) We plot g(2)(0) in Fig. 7(b). Values g(2)(0) < 1 at any given time indicate anticorrelation of RC excitations. We see that the dips in g(2)(0) consistently coincide with the minima of Q. Despite the fact that TUR violation corresponds to the largest dips in g(2)(0), anticorrelated behavior occurs even when the TUR bound is not vio- lated, reflecting the fact that Q also depends on entropy production and model-dependent prefactors. Interest- ingly, we note that violations of the classical TUR bound occur very close to the interaction strength λ where strongest anticorrelated nature (g(2)(0) < 1) of the mode coincides with the largest values of its non-Gaussianity δG = SvN(τ)−SvN(ρ), defined as the relative entropy dis- tance between the RC state ρ = TrS[ρss] and its closest Gaussian state τ [66], with SvN(x) = −Tr[xlnx] being the von Neumann entropy. Here, we use the fact that SvN(τ) = ν + 1 2 ln(ν + 1 2 ) −ν −1 2 ln(ν −1 2 ) (43) where ν is the symplectic eigenvalue of the covariance matrix of ρ [87, 88]. We plot non-Gaussianity δG ver- sus λ in Fig. 7(c). Since δG ≥0, with equality if and only if the state is Gaussian, large δG implies divergence from any Gaussian description. Lastly, in Fig. 7(d), we consider a coherence quantifier, namely the l1-coherence Cl1 = ∑i≠j ∣ρij∣of the RC mode in the Fock basis [68], 0.00 0.02 0.04 2 4 6 Q (a) 0.00 0.02 0.04 0 1 2 g(2)(0) (b) 0.00 0.02 0.04 λ 0.00 0.05 0.10 δG (c) 0.00 0.02 0.04 λ 0 1 2 3 Cl1 (d) Ω= 0.0025 Ω= 0.005 Ω= 0.01 Ω= 0.02 FIG. 7. Comparison of the behavior of the TUR ratio Q (a) with second-order coherence g(2)(0) (b), non-Gaussianity δG (c), and l1-coherence Cl1 of the reaction coordinate state (d) versus λ for fixed α = 0.04, nB = 0.01 and Ω= {0.0025, 0.005, 0.01, 0.02}. The crosses represent the mini- mum TUR values. and plot it as a function of λ. We observe coherence con- sistently peaks close to the values of λ corresponding to the minima of Q. It is important to point out that in the above analy- sis, we have considered two complementary measures of nonclassicality: the l1-coherence in the Fock basis and non-Gaussianity. It is evident, for example, that any Fock state (which has Cl1 = 0) is a highly non-Gaussian state (δG > 0), whereas a coherent state (δG = 0) has large Cl1 in the Fock basis. Interestingly, the minima of Q oc- cur precisely where these distinct signatures of nonclas- sicality intersect: where anticorrelation is the strongest, i.e., g(2)(0) is minimum, δG is maximal, and Cl1 is close to its peak. This observation highlights that several in- dependent quantum traits in the environment, includ- ing both statistical (anticorrelation) and structural (non- Gaussianity and Fock-basis superposition) features of the RC mode, contribute to violations of the classical TUR. IV. DISCUSSION In this work, we investigated the first and the second cumulants of excitation currents flowing through an open quantum system beyond the weak-coupling regime. In this direction, we begin by implementing the RC map- ping framework and redefining the boundary between the system and environment by incorporating a collective en- vironmental mode within an extended system. The in- teraction between the extended system and the residual bath can be described within the weak coupling approx- imation, allowing us to derive an RC-LME that governs the extended system dynamics. We show that the current statistics between different partitions, namely the origi- 10 nal system-bath and the extended system-residual bath, are identical in the long-time limit. This enabled us to examine the features of current fluctuations and correla- tions in the strong coupling regime, utilizing the toolkit of Markovian dynamics. We find significantly different behaviors in the strong and weak-coupling regimes. While the average current, J, and noise, D, increase monotonically with the inter- action strength in the weak-coupling regime, both quan- tities behave nonmonotonically in the strong-coupling regime. In addition, for the strongly-coupled cases, a dip in D occurs in the same regime where a higher SNR is observed and the violation of the TUR occurs, thereby showing a suppression of current fluctuations. We showed that such nonmonotonic behavior of the noise results from strong temporal anticorrelations in the cur- rent. Furthermore, we have seen that larger negative val- ues of the real part of eigenvalues of the Liouvillian, which imply faster relaxation of the system dynamics, are asso- ciated with a decrease in D. This is corroborated by the fact that the noise-to-signal ratio, D/J2, which captures the timescale of the decay of fluctuations, also displays smaller values where a reduction in current fluctuations is observed. Finally, we investigated TUR violations in relation to complementary measures of nonclassicality in the environmental degrees of freedom. We observe that the suppression of current fluctuations is associated with the simultaneous occurrence of two different nonclassi- cal features of the environment: namely, non-Gaussianity and Fock-basis coherence in the RC mode. It is important to note that our results are obtained within the RWA and the local GKSL master equation, where energy transfer between the system and the en- vironment proceeds via the exchange of well-defined en- ergy quanta. This essentially means that the heat cu- mulants are proportional to the cumulants of the net ex- citation transfer N, which we have shown are the same for the original and residual baths in the long-time lim- its. This greatly simplifies the thermodynamic analysis and singles out the current between the residual bath and the extended system as the natural choice to de- scribe heat flow [62, 64]. Within its regime of validity, we expect the RWA to give an adequate description up to small, fast-oscillating contributions that are neglected. However, for stronger system-bath couplings such that λ ∼ωq, or other more complex environmental spectral densities, the RWA will break down completely. In this regime, more sophisticated embeddings are required [70– 76] and additional theoretical complications will emerge, e.g., the nonequivalence of entropy flux between the orig- inal and residual baths [62, 64, 89], and contributions to the heat current from interactions within the extended system [52, 73]. Notwithstanding these subtleties, we hope that our results highlight the conceptual power of the Markovian embedding for understanding current fluc- tuations beyond weak system-environment coupling, and inspire further research along these lines deep into the strong-coupling regime. ACKNOWLEDGMENTS We warmly acknowledge Kesha for the use of her computational resources. Neither the European Union nor UKRI can be held responsible for them. M.T.M. is supported by a Royal Society University Research Fellowship. J.P. acknowledges support from grant QuantERA II program (Mf-QDS) that has received funding from the European Union’s Horizon 2020 re- search and innovation program under Grant Agree- ment No. 101017733 and from the Agencia Es- tatal de Investigaci´on, with project code PCI2022- 132915 funded by MICIU/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, grant TED2021-130578BI00, grant PID2021-124965NB- C21 (Quantera II cofund 2023 (Aqusend) Grant Agree- ment No. 101017733) and PCI2024-153474 funded by MICIU/AEI/10.13039/501100011033.”. This project is co-funded by the European Union (Quantum Flagship project ASPECTS, Grant Agreement No. 101080167) and UK Research and Innovation (UKRI). Views and opinions expressed are, however those of the authors only and do not necessarily reflect those of the European Union, Research Executive Agency or UKRI. Appendix A: Detailed proof for FCS equivalence As discussed in Sec. II C 1, Eq. (25) corresponds to the tilted Liouvillian Lχ with counting field χ that accounts for the statistics of the net excitation number NB ex- changed between the system and the original bath via RC mode. Furthermore, in Sec. II C 2, Eq. (27) gives the tilted Liouvillian L′ χ′ with a different counting field χ′, that monitors the net excitation number NB′ ex- changed between the extended system and the residual bath. Here, we give a detailed proof for the result stated in Sec. II C 3 that, in the steady-state, the particle ex- change statistics is identical between the qubit-RC and the RC-residual bath. In order to achieve this, it is enough to show that the tilted Liouvillian Lχ and L′ χ′ are related by a unitary transformation, as stated in Eq. (30). We start from the tilted Liouvillian in Eq. (25), Lχˆρ = −i[ ˜HES,χ, ˆρ]χ + γ(nB + 1)D[ˆa]ˆρ + γ(nB)D[ˆa†]ˆρ, (A1) where, [ ˜HES,χ, ˆρ]χ = ˜HES,χˆρ −ˆρ ˜HES,−χ and ˜HES,χ = ∆qˆσ+ˆσ−+ Ω(ˆσ+ + ˆσ−) + ∆cˆa†ˆa + λ(eiχ/2 ˆa†ˆσ−+ e−iχ/2 ˆσ+ˆa). (A2) We then follow the arguments in Ref. [29], beginning with the definition of the following unitary superoperators, U ˆρ = e−i 2 χˆa†ˆaˆρe−i 2 χˆa†ˆa, U†ˆρ = e i 2 χˆa†ˆaˆρe i 2 χˆa†ˆa. (A3) 11 To transform the Liouvillian under this unitary, we will now work with the following vectorized forms of the super operators, Lχ = −i(1 ⊗˜HES,χ −˜HT ES,−χ ⊗1) + γ1(ˆa ⊗ˆa −1 21 ⊗ˆa†ˆa −1 2(ˆa†ˆa)T ⊗1) + γ2(ˆa ⊗ˆa −1 21 ⊗ˆaˆa† −1 2(ˆaˆa†)T ⊗1) = −i(1 ⊗˜HES,χ −˜HES,χ ⊗1) + γ1(ˆa ⊗ˆa −1 21 ⊗ˆa†ˆa −1 2ˆa†ˆa ⊗1) + γ2(ˆa† ⊗ˆa† −1 21 ⊗ˆaˆa† −1 2ˆaˆa† ⊗1). (A4) where, γ1 = γ(nB + 1) and γ2 = γ(nB). We identify the unitary operator U = e−i 2 χˆa†ˆa, so the vectorized form of the unitary superoperator U becomes, U = U ⊗U = e−i 2 χˆa†ˆa ⊗e−i 2 χˆa†ˆa, U† = U † ⊗U † = e i 2 χˆa†ˆa ⊗e i 2 χˆa†ˆa. (A5) The unitary frame transforms the Liouvillian as Lχ → U†LχU. It is straightforward to calculate this by noting the following relations, ˆU † ˜HES,χ ˆU = ˜HES, ˆU †ˆa†ˆa ˆU = ˆa†ˆa, ˆU †ˆa ˆU = e−i 2 χˆa, ˆU †ˆa† ˆU = e i 2 χˆa†. (A6) Finally the transformed Liouvillian in the vectorized form reads, U†LχU = −i(1 ⊗˜HES −˜HES ⊗1) + γ1(e i 2 χˆa ⊗e i 2 χˆa −1 21 ⊗ˆa†ˆa −1 2ˆa†ˆa ⊗1) + γ2(e−i 2 χˆa† ⊗e−i 2 χˆa† −1 21 ⊗ˆaˆa† −1 2ˆaˆa† ⊗1). (A7) By noticing that the RHS in the above equation is the vectorized form of the tilted Liouvillian L′χ′ with χ′ = χ, we finally recover the relation in Eq. (30), U†LχU = −i[ ˜HES, ˆρ] + γ(nB + 1)Dχ[ˆa]ˆρ + γ(nB)D−χ[ˆa†]ˆρ = L′ χ. (A8) 12 [1] M. Esposito, U. Harbola, and S. Mukamel, Nonequilib- rium fluctuations, fluctuation theorems, and counting statistics in quantum systems, Rev. Mod. Phys. 81, 1665 (2009). [2] G. T. Landi, M. J. Kewming, M. T. Mitchison, and P. P. Potts, Current fluctuations in open quantum sys- tems: Bridging the gap between quantum continuous measurements and full counting statistics, PRX Quan- tum 5, 020201 (2024). [3] N. Shiraishi, K. Saito, and H. Tasaki, Universal trade-off relation between power and efficiency for heat engines, Phys. Rev. Lett. 117, 190601 (2016). [4] P. Pietzonka and U. Seifert, Universal trade-off between power, efficiency, and constancy in steady-state heat en- gines, Phys. Rev. Lett. 120, 190602 (2018). [5] M. Tsang, Quantum metrology with open dynamical sys- tems, New Journal of Physics 15, 073005 (2013). [6] S. Gammelmark and K. Mølmer, Bayesian parameter in- ference from continuously monitored quantum systems, Phys. Rev. A 87, 032115 (2013). [7] A. H. Kiilerich and K. Mølmer, Estimation of atomic interaction parameters by photon counting, Phys. Rev. A 89, 052110 (2014). [8] K. Macieszczak, M. Gut¸˘a, I. Lesanovsky, and J. P. Gar- rahan, Dynamical phase transitions as a resource for quantum enhanced metrology, Phys. Rev. A 93, 022103 (2016). [9] F. Albarelli, M. A. C. Rossi, D. Tamascelli, and M. G. Genoni, Restoring Heisenberg scaling in noisy quantum metrology by monitoring the environment, Quantum 2, 110 (2018). [10] D. Yang, S. F. Huelga, and M. B. Plenio, Efficient In- formation Retrieval for Sensing via Continuous Measure- ment, Phys. Rev. X 13, 031012 (2023). [11] A. Cabot, F. Carollo, and I. Lesanovsky, Continuous sensing and parameter estimation with the boundary time crystal, Phys. Rev. Lett. 132, 050801 (2024). [12] K. Prech, G. T. Landi, F. Meier, N. Nurgalieva, P. P. Potts, R. Silva, and M. T. Mitchison, Optimal time esti- mation and the clock uncertainty relation for stochas- tic processes (2024), arXiv:2406.19450 [cond-mat.stat- mech]. [13] S. Khandelwal, G. T. Landi, G. Haack, and M. T. Mitchi- son, Current-based metrology with two-terminal meso- scopic conductors (2025), arXiv:2507.12907 [quant-ph]. [14] G. Mihailescu, A. Kiely, and A. K. Mitchell, Quantum Sensing with Nanoelectronics: Fisher Information for an Applied Perturbation (2025), arXiv:2406.18662 [quant- ph]. [15] P. Glidic, O. Maillet, A. Aassime, C. Piquard, A. Ca- vanna, U. Gennser, Y. Jin, A. Anthore, and F. Pierre, Cross-correlation investigation of anyon statistics in the ν = 1/3 and 2/5 fractional quantum hall states, Phys. Rev. X 13, 011030 (2023). [16] M. Ruelle, E. Frigerio, J.-M. Berroir, B. Pla¸cais, J. Rech, A. Cavanna, U. Gennser, Y. Jin, and G. F`eve, Compar- ing fractional quantum hall laughlin and jain topological orders with the anyon collider, Phys. Rev. X 13, 011031 (2023). [17] K. Iyer, F. Ronetti, B. Gr´emaud, T. Martin, J. Rech, and T. Jonckheere, Finite width of anyons changes their braiding signature, Phys. Rev. Lett. 132, 216601 (2024). [18] M. J. Kewming, M. T. Mitchison, and G. T. Landi, Di- verging current fluctuations in critical Kerr resonators, Phys. Rev. A 106, 033707 (2022). [19] M. Matsumoto, Z. Cai, and M. Baggioli, Dissipative quantum phase transitions monitored by current fluctu- ations, Phys. Rev. A 112, 012226 (2025). [20] A. C. Barato and U. Seifert, Thermodynamic Uncer- tainty Relation for Biomolecular Processes, Physical Re- view Letters 114, 158101 (2015), publisher: American Physical Society. [21] T. R. Gingrich, J. M. Horowitz, N. Perunov, and J. L. England, Dissipation Bounds All Steady-State Cur- rent Fluctuations, Physical Review Letters 116, 120601 (2016). [22] J. M. Horowitz and T. R. Gingrich, Thermodynamic uncertainty relations constrain non-equilibrium fluctua- tions, Nature Physics 16, 15 (2020). [23] J. P. Garrahan, Simple bounds on fluctuations and un- certainty relations for first-passage times of counting ob- servables, Phys. Rev. E 95, 032134 (2017). [24] I. Di Terlizzi and M. Baiesi, Kinetic uncertainty relation, Journal of Physics A: Mathematical and Theoretical 52, 02LT03 (2018). [25] K. Ptaszy´nski, Coherence-enhanced constancy of a quan- tum thermoelectric generator, Phys. Rev. B 98, 085425 (2018). [26] K. Brandner, T. Hanazato, and K. Saito, Thermody- namic Bounds on Precision in Ballistic Multiterminal Transport, Physical Review Letters 120, 090601 (2018). [27] K. Macieszczak, K. Brandner, and J. P. Garrahan, Uni- fied thermodynamic uncertainty relations in linear re- sponse, Phys. Rev. Lett. 121, 130601 (2018). [28] B. K. Agarwalla and D. Segal, Assessing the validity of the thermodynamic uncertainty relation in quantum sys- tems, Phys. Rev. B 98, 155438 (2018). [29] A. A. S. Kalaee, A. Wacker, and P. P. Potts, Violating the thermodynamic uncertainty relation in the three-level maser, Phys. Rev. E 104, L012103 (2021). [30] A. Rignon-Bret, G. Guarnieri, J. Goold, and M. T. Mitchison, Thermodynamics of precision in quantum nanomachines, Phys. Rev. E 103, 012133 (2021). [31] K. Prech, P. Johansson, E. Nyholm, G. T. Landi, C. Ver- dozzi, P. Samuelsson, and P. P. Potts, Entanglement and thermokinetic uncertainty relations in coherent meso- scopic transport, Phys. Rev. Res. 5, 023155 (2023). [32] G. Guarnieri, G. T. Landi, S. R. Clark, and J. Goold, Thermodynamics of precision in quantum nonequilib- rium steady states, Phys. Rev. Res. 1, 033021 (2019). [33] Y. Hasegawa, Quantum thermodynamic uncertainty re- lation for continuous measurement, Phys. Rev. Lett. 125, 050601 (2020). [34] Y. Hasegawa, Thermodynamic Uncertainty Relation for General Open Quantum Systems, Physical Review Let- ters 126, 010602 (2021). [35] Y. Hasegawa, Unifying speed limit, thermodynamic un- certainty relation and Heisenberg principle via bulk- boundary correspondence, Nature Communications 14, 2828 (2023). [36] T. Van Vu and K. Saito, Thermodynamics of precision in markovian open quantum dynamics, Phys. Rev. Lett. 13 128, 140602 (2022). [37] A. M. Timpanaro, G. Guarnieri, and G. T. Landi, Hy- peraccurate thermoelectric currents, Phys. Rev. B 107, 115432 (2023). [38] K. Prech, P. P. Potts, and G. T. Landi, Role of quantum coherence in kinetic uncertainty relations, Phys. Rev. Lett. 134, 020401 (2025). [39] K. Macieszczak, Ultimate Kinetic Uncertainty Relation and Optimal Performance of Stochastic Clocks (2024), arXiv:2407.09839 [cond-mat]. [40] S. V. Moreira, M. Radaelli, A. Candeloro, F. C. Binder, and M. T. Mitchison, Precision bounds for multiple cur- rents in open quantum systems, Phys. Rev. E 111, 064107 (2025). [41] K. Brandner and K. Saito, Thermodynamic uncertainty relations for coherent transport, Phys. Rev. Lett. 135, 046302 (2025). [42] G. Blasi, R. R. Rodr´ıguez, M. Moskalets, R. L´opez, and G. Haack, Quantum Kinetic Uncertainty Relations in Mesoscopic Conductors at Strong Coupling (2025), arXiv:2505.13200 [cond-mat]. [43] D. Palmqvist, L. Tesser, and J. Splettstoesser, Ki- netic uncertainty relations for quantum transport (2025), arXiv:2410.10793 [cond-mat]. [44] J. Liu and D. Segal, Thermodynamic uncertainty relation in quantum thermoelectric junctions, Phys. Rev. E 99, 062141 (2019). [45] A. M. Timpanaro, G. Guarnieri, and G. T. Landi, Quan- tum thermoelectric transmission functions with minimal current fluctuations, Phys. Rev. B 111, 014301 (2025). [46] J. A. Almanza-Marrero and G. Manzano, Certify- ing quantum enhancements in thermal machines be- yond the Thermodynamic Uncertainty Relation (2025), arXiv:2403.19280 [quant-ph]. [47] F. Meier, Y. Minoguchi, S. Sundelin, T. J. G. Apollaro, P. Erker, S. Gasparinetti, and M. Huber, Precision is not limited by the second law of thermodynamics, Nature Physics 21, 1147 (2025). [48] J.-S. Wang, B. K. Agarwalla, H. Li, and J. Thingna, Nonequilibrium Green’s function method for quantum thermal transport, Frontiers of Physics 9, 673 (2014). [49] M. Popovic, M. T. Mitchison, A. Strathearn, B. W. Lovett, J. Goold, and P. R. Eastham, Quantum heat statistics with time-evolving matrix product operators, PRX Quantum 2, 020338 (2021). [50] S. P. Mandal, M. Pandit, K. Mahadeviya, M. T. Mitchi- son, and J. Prior, Heat operator approach to quantum stochastic thermodynamics in the strong-coupling regime (2025), arXiv:2504.10631 [quant-ph]. [51] M. Brenes, G. Guarnieri, A. Purkayastha, J. Eisert, D. Segal, and G. Landi, Particle current statistics in driven mesoscale conductors, Phys. Rev. B 108, L081119 (2023). [52] L. P. Bettmann, M. J. Kewming, G. T. Landi, J. Goold, and M. T. Mitchison, Quantum stochastic thermodynam- ics in the mesoscopic-leads formulation, Phys. Rev. E 112, 014105 (2025). [53] J. Iles-Smith, N. Lambert, and A. Nazir, Environmental dynamics, correlations, and the emergence of noncanon- ical equilibrium states in open quantum systems, Phys. Rev. A 90, 032114 (2014). [54] J. Iles-Smith, A. G. Dijkstra, N. Lambert, and A. Nazir, Energy transfer in structured and unstructured envi- ronments: Master equations beyond the Born-Markov approximations, The Journal of Chemical Physics 144, 044110 (2016). [55] A. Nazir and G. Schaller, The reaction coordinate map- ping in quantum thermodynamics, in Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions, edited by F. Binder, L. A. Correa, C. Gogolin, J. Anders, and G. Adesso (Springer International Pub- lishing, Cham, 2018) pp. 551–577. [56] J. Prior, A. W. Chin, S. F. Huelga, and M. B. Plenio, Efficient simulation of strong system-environment inter- actions, Phys. Rev. Lett. 105, 050404 (2010). [57] P. Strasberg, G. Schaller, N. Lambert, and T. Brandes, Nonequilibrium thermodynamics in the strong coupling and non-Markovian regime based on a reaction coordi- nate mapping, New Journal of Physics 18, 073007 (2016). [58] D. Newman, F. Mintert, and A. Nazir, Performance of a quantum heat engine at strong reservoir coupling, Phys. Rev. E 95, 032139 (2017). [59] N. Anto-Sztrikacs and D. Segal, Strong coupling effects in quantum thermal transport with the reaction coordinate method, New Journal of Physics 23, 063036 (2021). [60] N. Anto-Sztrikacs, F. Ivander, and D. Segal, Quantum thermal transport beyond second order with the reaction coordinate mapping, The Journal of Chemical Physics 156, 214107 (2022). [61] N. Anto-Sztrikacs, A. Nazir, and D. Segal, Effective- hamiltonian theory of open quantum systems at strong coupling, PRX Quantum 4, 020307 (2023). [62] A. Colla and H.-P. Breuer, Thermodynamic roles of quan- tum environments: from heat baths to work reservoirs, Quantum Science and Technology 10, 015047 (2024). [63] M. Brenes, J. Garwo la, and D. Segal, Optimal qubit- mediated quantum heat transfer via noncommuting op- erators and strong coupling effects, Phys. Rev. B 111, 235440 (2025). [64] M. Shubrook, J. Iles-Smith, and A. Nazir, Non- markovian quantum heat statistics with the reaction co- ordinate mapping, Quantum Science and Technology 10, 025063 (2025). [65] M. P. Woods, R. Groux, A. W. Chin, S. F. Huelga, and M. B. Plenio, Mappings of open quantum systems onto chain representations and Markovian embeddings, Jour- nal of Mathematical Physics 55, 032101 (2014). [66] M. G. Genoni, M. G. A. Paris, and K. Banaszek, Quan- tifying the non-Gaussian character of a quantum state by quantum relative entropy, Phys. Rev. A 78, 060303 (2008). [67] R. J. Glauber, The quantum theory of optical coherence, Physical Review 130, 2529 (1963). [68] T. Baumgratz, M. Cramer, and M. B. Plenio, Quantify- ing coherence, Phys. Rev. Lett. 113, 140401 (2014). [69] J. Liu and D. Segal, Coherences and the thermodynamic uncertainty relation: Insights from quantum absorption refrigerators, Phys. Rev. E 103, 032138 (2021). [70] F. Mascherpa, A. Smirne, A. D. Somoza, P. Fern´andez- Acebal, S. Donadi, D. Tamascelli, S. F. Huelga, and M. B. Plenio, Optimized auxiliary oscillators for the simulation of general open quantum systems, Phys. Rev. A 101, 052108 (2020). [71] M. Brenes, J. J. Mendoza-Arenas, A. Purkayastha, M. T. Mitchison, S. R. Clark, and J. Goold, Tensor-network method to simulate strongly interacting quantum ther- mal machines, Phys. Rev. X 10, 031040 (2020). [72] J. E. Elenewski, G. W´ojtowicz, M. M. Rams, and 14 M. Zwolak, Performance of reservoir discretizations in quantum transport simulations, The Journal of Chemi- cal Physics 155, 124117 (2021). [73] A. M. Lacerda, A. Purkayastha, M. Kewming, G. T. Landi, and J. Goold, Quantum thermodynamics with fast driving and strong coupling via the mesoscopic leads ap- proach, Phys. Rev. B 107, 195117 (2023). [74] A. Purkayastha, G. Guarnieri, S. Campbell, J. Prior, and J. Goold, Periodically refreshed baths to simulate open quantum many-body dynamics, Physical Review B 104, 045417 (2021). [75] A. Purkayastha, G. Guarnieri, S. Campbell, J. Prior, and J. Goold, Periodically refreshed quantum thermal ma- chines, Quantum 6, 801 (2022). [76] P. Menczel, K. Funo, M. Cirio, N. Lambert, and F. Nori, Non-hermitian pseudomodes for strongly coupled open quantum systems: Unravelings, correlations, and ther- modynamics, Phys. Rev. Res. 6, 033237 (2024). [77] A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Dynamics of the dissi- pative two-state system, Rev. Mod. Phys. 59, 1 (1987). [78] ´Angel Rivas, A. D. K. Plato, S. F. Huelga, and M. B. Ple- nio, Markovian master equations: a critical study, New Journal of Physics 12, 113032 (2010). [79] P. P. Hofer, M. Perarnau-Llobet, L. D. M. Miranda, G. Haack, R. Silva, J. B. Brask, and N. Brunner, Marko- vian master equations for quantum thermal machines: Local versus global approach, New Journal of Physics 19, 123037 (2017). [80] A. Trushechkin, Unified gorini-kossakowski-lindblad- sudarshan quantum master equation beyond the secular approximation, Phys. Rev. A 103, 062226 (2021). [81] P. P. Potts, A. A. S. Kalaee, and A. Wacker, A thermody- namically consistent markovian master equation beyond the secular approximation, New Journal of Physics 23, 123013 (2021). [82] A. Schnell, Global becomes local: Efficient many-body dynamics for global master equations, Phys. Rev. Lett. 134, 250401 (2025). [83] G. T. Landi and M. Paternostro, Irreversible entropy pro- duction: From classical to quantum, Rev. Mod. Phys. 93, 035008 (2021). [84] C. Maes, Frenetic bounds on the entropy production, Phys. Rev. Lett. 119, 160601 (2017). [85] C. Maes, Frenesy: Time-symmetric dynamical activity in nonequilibria, Phys. Rep. 850, 1 (2020). [86] K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Photon blockade in an optical cavity with one trapped atom, Nature 436, 87 (2005). [87] C. Weedbrook, S. Pirandola, R. Garc´ıa-Patr´on, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Gaus- sian quantum information, Reviews of Modern Physics 84, 621 (2012). [88] A. Serafini, Quantum continuous variables: a primer of theoretical methods (CRC press, 2023). [89] A. M. Lacerda, M. J. Kewming, M. Brenes, C. Jackson, S. R. Clark, M. T. Mitchison, and J. Goold, Entropy pro- duction in the mesoscopic-leads formulation of quantum thermodynamics, Phys. Rev. E 110, 014125 (2024).
Current fluctuations in nonequilibrium open quantum systems beyond weak coupling: a reaction coordinate approach Khalak Mahadeviya,1, ∗Saulo V. Moreira,1 Sheikh Parvez Mandal,2 Mahasweta Pandit,2 Javier Prior,2 and Mark T. Mitchison1, 3, † 1 2, D02 K8N4, Ireland 2Departamento de F ́ısica - CIOyN, Universidad de Murcia, Murcia E-30071, Spain 3 's College London, Strand, London, WC2R 2LS, United Kingdom We investigate current fluctuations in open quantum systems beyond the weak-coupling and Markovian regimes, focusing on a coherently driven qubit strongly coupled to a structured bosonic environment. By combining full counting statistics with the reaction coordinate mapping, we develop a framework that enables the calculation of steady-state current fluctuations and their temporal correlations in the strong-coupling regime. Our analysis reveals that, unlike in weak coupling, both the average current and its fluctuations exhibit nonmonotonic dependence on the systemenvironment interaction strength. Notably, we identify a regime where current noise is suppressed below the classical thermodynamic uncertainty bound, coinciding with enhanced anticorrelations in quantum jump trajectories and faster system relaxation. We further show that these features are linked to nonclassical properties of the reaction coordinate mode, such as non-Gaussianity and quantum coherence. Our results provide new insights and design principles for controlling current fluctuations in quantum devices operating beyond the standard weak-coupling paradigm. I. INTRODUCTION Open quantum systems far from equilibrium support currents of particles or energy, which can fluctuate significantly compared to their mean values [1, 2]. These current fluctuations are important for several reasons: they control the tradeoff between power and efficiency in heat engines [3, 4], they limit the precision of parameter estimation in nonequilibrium settings [5-14], and they carry important information about system properties, e.g., particle statistics [15-17] or dissipative phase transitions [18, 19]. The universal features of current fluctuations have come under renewed scrutiny in recent years, with the discovery of general bounds such as the thermodynamic [20-22] and kinetic [23, 24] uncertainty relations. Of particular interest is the potential of open quantum systems to violate these classical uncertainty relations, as demonstrated theoretically by numerous case studies [25-31] as well as via the derivation of looser, quantum bounds [32-43]. This raises the tantalizing prospect of exploiting quantum resources to reduce current fluctuations, allowing for more reliable thermal machines [25, 30, 44-46] or precise timekeeping [47]. One way to realize nonclassical current fluctuations is by moving to a regime of strong coupling between the system and environment, where non-Markovian effects can arise. This regime is natural for many open quantum systems, because boundary effects often dominate due to the small sizes involved. However, the description of current fluctuations in strongly coupled, nonlinear open quantum systems typically requires sophisticated techniques such as perturbative nonequilibrium Green func- ∗ † tions [48], tensor-network simulations [49, 50], or pseudomode approaches [51, 52], whose predictive power comes at the cost of complexity which may obscure physical intuition. An appealing method in this regard is the reaction-coordinate (RC) mapping, in which the most significant environmental mode is incorporated as part of the system, while the remaining modes are traced out within a Born-Markov approximation [53-55]. The RC mapping can be understood as the first step of a more general transformation of a noninteracting environment into a one-dimensional chain [56], with the advantage that the RC captures the most important environmental properties explicitly and simply by a single mode. This approach has been widely used to model quantum thermodynamic processes at strong system-reservoir coupling, albeit mostly at the level of average heat and work exchanges [57-63]. Notably, Shubrook et al. [64] recently used the RC method to study the fluctuations of heat transferred during an equilibration process. In this work, we employ the RC mapping to investigate the fluctuations of currents sustaining a nonequilibrium steady-state (NESS) beyond the weak-coupling regime. We consider a minimal model of a two-level system (qubit) driven by a coherent field and dissipating energy into a structured bosonic reservoir with a peaked spectral density (Sec. II A), which can be mapped onto a dissipative Jaynes-Cummings model via the RC mapping (Sec. II B). We work within the rotating-wave approximation (RWA), which limits our study to moderate coupling strengths that are nonetheless well beyond the validity of the standard weak-coupling master equation. In this approximation, heat is transferred via the exchange of excitations (quanta) whose total number is conserved, and we therefore focus on the full counting statistics of this excitation current, as described in Sec. II C. Within our model, we find an interesting nonmonotonic dependence of the current noise on the system16 Oct 2025 2 environment coupling strength, induced by the structured environment. In particular, we show that the current noise dips below the classical TUR bound when the system-environment coupling is comparable to the Rabi frequency of the drive (Sec. III A). To understand this behavior, we exploit the fact that the RC mapping represents the original (possibly non-Markovian) setup via an extended system with Markovian dynamics, i.e., a Markovian embedding [65]. This allows us to apply the powerful conceptual framework of quantum-jump unraveling, whereby the excitation current can be modeled as a sequence of detector "clicks" whose statistical properties are inherited from the underlying quantum dynamics [2]. In particular, we find that the dip in the current noise is associated with strong anticorrelations between subsequent clicks (Sec. III B), a hallmark of the nonlinearity induced by qubit-RC hybridization. We also show that the system's relaxation rates-encoded by the Liouvillian eigenvalues-are maximal at this point in parameter space. Finally, we characterize nonclassical features of the environment by quantifying the non-Gaussianity [66], second-order coherence [67], and coherence in the Fock basis [68] of the RC mode (Sec. III D). We find that, although these quantifiers are correlated with reduced current fluctuations, there is no sharp correspondence between nonclassicality of the state and quantum TUR violations, in accordance with previous studies [30, 31, 69]. Our results provide design principles for suppressing current fluctuations using strong coupling to a structured environment. Our work also demonstrates how the RC mapping facilitates a physically transparent analysis of current fluctuations beyond the weak-coupling regime, by exploiting the Markovian nature of the embedding. A similar approach could be applied to more complex environmental structures using more sophisticated embeddings developed in recent years [70-76]. II. MODEL A. Coherently driven qubit We consider a coherently driven qubit coupled to a bosonic thermal bath at temperature T. The system Hamiltonian can be expressed as ˆHS(t) = ωqˆσ+ˆσ-+ 2Ωsin(ωdt)ˆσx (1) where ωq is the qubit transition frequency, Ωis the Rabi frequency, ωd is the drive frequency, and ˆσx,y,z,+,-denote Pauli matrices. Throughout this work, we set ̵h = 1 and kB = 1. Additionally, the bath Hamiltonian is given by ˆHB = ∑ k νkˆc† kˆck, (2) where ˆc† k(ˆck) are bosonic creation (annihilation) operators for modes with frequencies νk, with [ˆck, ˆc† k′] = δkk′. Residual bath (T) λ ȁ ۧ 0 ȁ ۧ 1 ωc ωd γ ωq Extended system FIG. 1. Reaction coordinate (RC) mapping for a coherently driven qubit strongly coupled to its environment. The RC mapping incorporates a harmonic oscillator mode into the extended system. ∣0⟩and ∣1⟩denote the qubit eigenstates in the σz basis, with transition frequency ωq. The qubit couples to a drive at ωd and the RC mode with frequency ωc and coupling λ. The extended qubit-RC system interacts weakly (γ) with the residual bath at temperature T. By including the interaction between system and bath, the total Hamiltonian reads ˆH(t) = ˆHS(t) + ˆHB + ˆHSB. (3) The system-bath interaction Hamiltonian is given by ˆHSB = ∑ k hk(ˆσ+ + ˆσ-)(ˆc† k + ˆck), (4) where hk is the coupling strength between the qubit and the bath frequency modes νk. In effect, the interaction between the system and the bath can be captured via the spectral density, S(ω) = ∑k ∣hk∣2δ(ω -νk) [77]. We assume the Drude-Lorentz form of spectral density corresponding to a strong coupling between the system and bath frequency modes centered around ωc, S(ω) = 4ωαλ2ω2 c (ω2 -ω2c)2 + (2παωcω)2 . (5) The amplitude of the spectral function, λ, is related to the interaction strength, hk, as λ2 = ∑k ∣hk∣2, and its width ωcα is determined by a dimensionless parameter α. In order for the Markov approximation to be valid, the system relaxation time (τS) has to be much larger than the bath correlation time (τB), i.e., τS ≫τB. For the above spectral density, τB is related to the inverse of the width of the spectral function ωcα. In this context, the Markovian regime is then defined by, ωcα ≫λ. (6) This condition results in flatter spectral densities as shown in the green curve in Fig. 2 with λ = 0.02ωq, ωc = ωq and α = 1, and the system dynamics for this case can be described within the standard Lindblad master equation formalism. B. Reaction coordinate mapping In this work, we focus on investigating particle current fluctuations beyond the weak coupling regime. In the 3 0 1 2 ω/ωq 0.000 0.005 0.010 S(ω)/ωq S0, α = 1 S0, α = 0.01 SRC, α = 0.01 0 1 2 0 5×10-4 FIG. 2. Drude-Lorentz spectral density with central frequency ωc = ωq and λ = 0.02ωq for different widths α = 0.01 (blue) and 1 (green). The sharply peaked curve for α = 0.01 represents the strong coupling to the bath and the dashed purple curve represents the Ohmic spectral density function for the residual environment after RC mapping. model described above, the strong system-bath coupling, λ ≳ωcα, corresponds to a narrow spectral density centered around the qubit transition frequency ωq, as shown in the blue curve in Fig. 2. This means that we go beyond the Markovian regime, and the standard Lindblad formalism is not valid to describe the system dynamics. However, we can redefine the system by incorporating a collective mode of the environment, as illustrated in Fig. 1. By constructing an extended system that couples weakly to the residual environment, we are able to derive a Lindblad master equation governing the dynamics of the extended system. This can be achieved by implementing the reaction coordinate (RC) mapping method developed in Refs. [53-55]. The application of the RC mapping to the total Hamiltonian in Eq. (3) allows us to write the mapped Hamiltonian as ˆH′(t) = ˆHS(t) + ˆHRC + ˆHS,RC + ˆHB′ + ˆHRC,B′. (7) The RC and system-RC interaction Hamiltonian are defined as ˆHRC = ωcˆa†ˆa, ˆHS,RC = λˆσx(ˆa† + ˆa), (8) with the bosonic creation (annihilation) operators ˆa†(ˆa) corresponding to the RC mode. The residual bath is described by modes ˆb† k(ˆbk) with frequency ωk with Hamiltonian, ˆHB′ = ∑ k ωkˆb† kˆbk, (9) and the interaction Hamiltonian between the residual bath and the RC mode is given by, ˆHRC,B′ = ∑ k fk(ˆa† + ˆa)(ˆb† k + ˆbk). (10) We identify the effective system Hamiltonian as ˆHES = ˆHS(t) + ˆHRC + ˆHS,RC. The RC mode is defined such that λ(ˆa† + ˆa) = ∑k hk(ˆc† k + ˆck), and RC mode frequency ω2 c = λ-2 ∑k νk∣hk∣2. As discussed in Refs. [53, 54], under the RC mapping the original bath spectral density in Eq. (5) transforms to an Ohmic form, SRC(ω) = ∑ k ∣fk∣2δ(ω -ωk) = αωe-ω/Λ, (11) for the residual bath. We note that this correspondence between the spectral densities holds exactly only in the limit Λ →∞. In this work, however, we approximate it by setting a large but finite value of Λ. This is important because it ensures the convergence of all the terms in the derivation of the master equation discussed below. To derive the master equation for the extended system, we move to a rotating frame defined by the unitary operator ˆU(t) = exp(iωd ˆNtott), where ˆNtot = ˆσ+ˆσ-+∑k ˆc† kˆck = ˆσ+ˆσ-+ ˆa†ˆa + ∑k ˆb† kˆbk is the total excitation number operator. We note that our choice of rotating frame defined by ˆU(t) is equivalent for both the original and RC-mapped description. In the rotating frame, the RC mapped Hamiltonian in Eq. (7) is transformed as ̃H′ = ˆU †(t) ˆH′ ˆU(t). Under the rotating wave approximation (RWA), we can neglect fast oscillating terms to get the following Hamiltonian, with only remaining coupling terms that conserve the excitation number, ̃H′ = ∆qˆσ+ˆσ-+ Ω(ˆσ+ + ˆσ-) + λ(ˆσ+ˆa + ˆσ-ˆa†) + ∆cˆa†ˆa + ∑ k fk(ˆa†ˆbk + ˆaˆb† k) + ∑ k δkˆb† kˆbk, (12) where ∆q = ωq-ωd, ∆c = ωc-ωd, and δk = ωk-ωd. In this frame, the residual bath spectral density is transformed into ̃SRC(ω) = SRC(ω + ωd). The RWA holds when the drive frequency ωd is near resonant to both the qubit transition frequency, ωq, and the RC frequency, ωc, i.e., ∆q,∆c ≪ωd, and the Rabi coupling Ωand the qubit-RC coupling λ are weak enough and satisfy Ω,λ ≪ωq. The total Hamiltonian in Eq. (12) commutes with the total excitation number operator ˆNtot, which represents the total number of energy quanta in the system and bath. This conservation law is a consequence of the RWA, which dictates that energy exchange between subsystems occurs via the exchange of these excitations. We therefore focus on the excitation current between the system and the bath, rather than the energy current, in the following. Now, we trace out the residual bath to derive a quantum master equation in Gorini-Kossakowski-SudarshanLindblad (GKSL) form, starting from the total Hamiltonian in the rotating frame in Eq. (12). We work within a local approach to dissipation, which is justified so long the drive strength Ω, coupling λ, and detunings ∆q,c are small compared to the inverse bath correlation time τB′ = max{Λ-1,T -1}, which is dictated here by the inverse temperature of the bath due to the very large UV cutoff Λ. As discussed in Refs. [78-82], we can then treat 4 the bath spectral function near frequency ωc as approximately constant on the scale of the small energy splittings induced by Ω,λ,∆q,c. We thus arrive at the following local Lindblad master equation, dˆρ dt = Lˆρ = -i[ ̃HES, ˆρ]+γ(nB+1)D[ˆa]ˆρ+γnBD[ˆa†]ˆρ. (13) with γ = SRC(ωc), and the bosonic occupation number nB ≡nB(ωc) = (exp(ωc/T) -1)-1 for the residual bath at temperature T. Here, ̃HES = ∆qˆσ+ˆσ-+ Ω(ˆσ+ + ˆσ-) + ∆cˆa†ˆa + λ(ˆσ+ˆa + ˆσ-ˆa†) is the rotating frame Hamiltonian and ˆρ is the density matrix of the extended system, which includes the driven qubit and the RC mode. The above master equation is derived within the BornMarkov approximation, which imposes that the residual bath relaxes much faster than the extended system relaxation timescale (τES ∼γ). This approximation, like our use of a local GKSL equation, assumes an approximately flat spectral function S(ω)nB(ω) around the relevant transition frequencies of the system. In our case, this is justified so long as Ω,λ,τq,c ≪τ -1 B′ ∼T. Furthermore, validity of the Born approximation is limited to weak coupling between the extended system and environment, requiring the coupling constants fk to be weak. This assumption is also crucial for the validity of the RWA above. Although the Ohmic residual spectral density in Eq. (11) implies that fk can become unboundedly large for large ωk, this assumption is justified since we coarse-grain over times much longer than the bath correlation time, so that these high-frequency contributions average out to zero. From here on, we refer to Eq. (13) as reaction coordinate Lindblad master equation (RC-LME). In the longtime limit, the extended system dynamics governed by the RC-LME evolves towards a time-independent state, referred to as the nonequilibrium steady-state (NESS). The NESS, denoted by ρss, satisfies, Lρss = 0, (14) and represents the stationary solution of the RC-LME. C. Full counting statistics 1. Excitation current into the original bath Given the RC-LME in Eq. (13), we are interested in studying the steady-state excitation current into a strongly coupled environment. To compute the first and second cumulants of this excitation current we employ the method of full counting statistics (FCS) [2]. Starting from the original description, note that under the rotating frame transformation introduced in the previous section, the Hamiltonian takes the form ̃H = ̃HS + ̃HB + ˆHSB, where ̃HS = ∆qˆσ+ˆσ-+ Ω(ˆσ+ + ˆσ-) and ̃HB = ∑k(νk -ωd)ˆc† kˆck. Within this description, we define the excitation transfer using the two-point measurement scheme, formulated in the measurement basis of the original bath excitation number, ˆNB = ∑k ˆc† kˆck. Under this protocol, we consider projective measurements on the initial system-bath state, ˆρtot(0) = ˆρ(0) ⊗ˆρE, and on the unitarily evolved state, ˆρtot(t) = exp(-i ̃Ht)ˆρtot(0)exp(i ̃Ht), at a later time t. The difference NB between the outcomes of such measurements corresponds to the net number of excitations exchanged with the bath, during time t. The probability distribution P(NB,t) of observing NB at time t can therefore be constructed by repeating the protocol many times. Note that P(NB,t) = Tr[ˆρ(NB,t)], where ˆρ(NB,t) denotes the (unnormalized) conditional state associated to the ensemble of protocols in which exactly NB net excitations are transferred after time t. Accordingly, the unconditional state reads ˆρ(t) = ∑∞ NB=0 ˆρ(NB,t). To compute the full statistics of the counting variable NB, it is convenient to introduce the characteristic function of P(NB,t), defined as its Fourier transform, M(χ,t) = Tr[ˆρ(χ,t)] = ∑ NB eiχNBTr[ˆρ(NB,t)]. (15) Here, χ is called a counting field. The counting-field dependent system state ˆρ(χ,t) can be shown to obey a generalized quantum master equation of the form [1, 2], dˆρ(χ,t) dt = Lχˆρ(χ,t), (16) with the counting-field dependent tilted Liouvillian Lχ, and ˆρ(χ,0) = ˆρ(0). The solution of such a generalized QME is given by ˆρ(χ,t) = eLχ ˆρ(0). Now, we can write the cumulant generating function φ(χ,t) as, φ(χ,t) = ln[M(χ,t)] = ln[Tr[eLχtˆρ(0)]]. (17) With this, the nth cumulant of the net excitation at time t, ⟪NB(t)n⟫, can be computed as, ⟪NB(t)n⟫= (-i∂χ)nφ(χ,t)∣ χ=0 . (18) As for the steady-state statistics, in the long time limit, this cumulant generating function takes the following asymptotic form [2], φ(χ,t) ≃θ0(χ)t, (19) where θ0(χ) is the leading eigenvalue of the tilted Liouvillian Lχ, i.e., the eigenvalue with the largest real part. Thus, using Eq. (18) and Eq. (19), we are able to compute cumulants of net excitations NB(t) in the long time limit. By defining the scaled cumulant generating function, φ(χ) = lim t→∞∂tφ(χ,t) = θ0(χ), (20) 5 the scaled cumulants of the excitations in the long time limit are given by, ∂t⟪NB(t)n⟫= (-i∂χ)nθ0(χ)∣ χ=0 . (21) In order to employ the FCS framework to calculate the statistics of the steady-state excitation current in our setup, we are required to derive a counting-field dressed generalized master equation (GME) of the form in Eq. (16). To this end, we start from the rotating frame Hamiltonian introduced earlier, ̃H = ̃HS+ ̃HB+ ˆHSB, with ̃HS = ∆qˆσ+ˆσ-+ Ω(ˆσ+ + ˆσ-) and ̃HB = ∑k(νk -ωd)ˆc† kˆck. To account for excitation transfer into the original bath, we introduce a counting field χ by transforming the total Hamiltonian as ̃H →eiχ ˆ NB/2 ̃He-iχ ˆ NB/2, to get the tilted Hamiltonian, ˆHχ = ̃HS + ̃HB + eiχ ˆ NB/2 ˆHSBe-iχ ˆ NB/2. (22) The counting field χ in the interaction term keeps track of the excitation exchange at the original system-bath boundary. Within the RC mapping, the interaction term transforms as, ˆHSB →ˆHS,RC. Hence, the RC mapped tilted Hamiltonian is given by [51], ˆH′ tot,χ = ̃HS + ̃HRC + e i 2 χ ˆ NB ˆHS,RCe-i 2 χ ˆ NB + ̃HB′ + ˆHRC,B′, (23) with ̃HRC = ∆cˆa†ˆa and ̃HB′ = ∑k δkˆb† kˆbk. Note that the original bath number operator ˆNB also satisfies ˆNB = ∑k ˆc† kˆck = ˆa†ˆa+∑k ˆb† kˆbk. Now, using the Baker-CampbellHausdorff (BCH) formula, the term eiχ ˆ NB ˆHS,RCe-iχ ˆ NB simplifies to ˆHχ,S,RC = λ(eiχ/2 ˆa†ˆσ-+ e-iχ/2 ˆσ+ˆa). (24) By utilising the RC-mapped tilted Hamiltonian in Eq. (23), we derive the RC-mapped tilted Liouvillian under the same approximations as in the previous section, to get a generalized master equation (GME) of the form (16) with Lχˆρ = -i[ ̃HES,χ, ˆρ]χ + γ(nB + 1)D[ˆa]ˆρ + γnBD[ˆa†]ˆρ, (25) with ̃HES,χ = ̃HS + ̃HRC + ˆHχ,S,RC and [ ̃HES,χ, ˆρ]χ = ̃HES,χˆρ -ˆρ ̃HES,-χ. The RC mapped tilted Liouvillian Lχ, in the above GME with counting field χ, accounts for the excitation exchange between the system and the RC mode. The full statistics for the corresponding excitation current can be calculated using Eq. (21). 2. Excitation current into the residual bath Similarly, we can derive another GME with a counting field χ′ that accounts for the excitation exchange between the extended system and the residual bath. This can be achieved by following the same two-point measurement protocol as described above, but now in the measurement basis of the residual bath excitation number ˆNB′ = ∑k ˆb† kˆbk. The associated counting variable, which corresponds to the difference between the initial and final measurement outcomes, is denoted by NB′. Within this protocol, we obtain the following RC-mapped tilted Hamiltonian, ˆH′ tot,χ′ = ̃HS + ̃HRC + ˆHS,RC + ̃HB′ + e i 2 χ′ ˆ NB′ ˆHRC,B′e-i 2 χ′ ˆ NB′. (26) With this tilted Hamiltonian, we can derive the corresponding GME of the form, L′ χ′ ˆρ = -i[ ̃HES, ˆρ] + γ(nB + 1)Dχ′[ˆa]ˆρ + γnBD-χ′[ˆa†]ˆρ, (27) with Dχ[ˆL]ˆρ = eiχ ˆLˆρˆL† -1 2{ˆL† ˆL, ˆρ}. (28) The full statistics of the net excitation current into the residual bath, related to the counting variable NB′, can be obtained from the cumulant generating function associated with the above GME. 3. Equivalence of the two excitation currents Using the framework described above, we are now in a position to show that, in the long time limit, the statistics of the net excitations NB exchanged between the system and original bath is identical to the statistics of net excitations NB′ exchanged between the RC and the residual environment. To formalise this equivalence, in Appendix A, using the following unitary superoperators, U ˆρ = e-i 2 χˆa†ˆaˆρe-i 2 χˆa†ˆa, U†ˆρ = e i 2 χˆa†ˆaˆρe i 2 χˆa†ˆa, (29) we demonstrate that the tilted Liouvillian Lχ (Eq. (25)) and L′ χ′ (Eq. (27)) are related as, U†LχU = L′ χ. (30) From Eq. (19), we recall that in the long-time limit, all the cumulants of the net excitation are determined by the eigenvalues of the corresponding tilted Liouvillian. Since the leading eigenvalue θ0(χ) of the tilted Liouvillian is invariant under unitary transformation, both GMEs in Eq. (27) and Eq. (25) yield an identical cumulant generating function φ(χ,t) = θ0(χ)t in the long-time limit. This implies that the cumulants of the excitation current between the system and environment in the original description can be calculated directly from the GME in Eq. (27). From here on, we drop the subscripts from NB and NB′, and refer to the net number of excitations as N. 6 4. Current fluctuations and correlations Finally, we can now define the average excitation current and diffusion coefficient in terms of the first and second scaled cumulants of the net excitation transfer N(t) in the long-time limit, J = lim t→∞ d dtE[N(t)], D = lim t→∞ d dtVar[N(t)], (31) where E[●] denotes the expectation value (first cumulant) and Var[●] denotes the variance (second cumulant). To discuss the TUR, we also need to consider the rate of entropy production, which is associated with the flow of heat into the bath [83]. We assign an energy ωc to each excitation transferred into the bath with an energy ωc, leading to the steady-state heat current ̇Q = ωcJ. This assumption is fully consistent with our local GKSL model, which assumes that the residual bath cannot resolve energy differences on the order of the small splittings λ,Ω,∆q,c ≪ωc [81]. We thus obtain the steadystate entropy production rate as ̇Σ = ̇Q T = J ln(nB + 1). (32) Our definitions are also consistent with Refs. [62, 64], which propose that entropy production within the RC approach should be associated with heat exchanged with the residual bath. In terms of these quantities, the thermodynamic uncertainty relation (TUR) reads Q = D J2 ̇Σ ≥2. (33) This trade-off between steady-state current fluctuations and entropy production holds for classical systems undergoing Markovian dynamics with local detailed balance [20-22]. With J, D, and ̇Σ identified above, the thermodynamic uncertainty ratio Q follows as Q = D J2 ̇Σ = D J ln(nB + 1). (34) To obtain our results in Sec. III, the average current J and noise D are computed from the leading eigenvalue of the tilted generator defined in Eq. (27). However, to obtain further insight into the statistics of the excitation current, we also exploit the quantum-jump unraveling of full counting statistics. Here, the counting variable N is interpreted in terms of the "clicks" of an ideal detector that efficiently monitors the emitted and absorbed quanta [2]. Since the dynamics of the extended qubitRC system is Markovian, such a detector can be introduced without affecting the dynamics or current statistics. Therefore, even if such an ideal detector cannot be implemented in practice, it is a useful fiction that provides a time-resolved picture of current fluctuations, even in this non-Markovian setting. We define N+(t) (respectively, N-(t)) as the total number of excitations that the RC emits into (absorbs from) the residual bath. The total excitation transfer up to time t is therefore given by N(t) = N+(t) -N-(t). The associated stochastic current is defined as I(t) = dN/dt, such that its expectation value E[I(t)] = J reproduces the average current defined in Eq. (31). Following Ref. [2], the statistics of this current can be determined from the current superoperator J ˆρ = ∑ k=± νk ˆLk ˆρˆL† k. (35) Each term in the sum describes a possible quantum jump in the trajectory unravelling of the RC-LME (13). When a jump is detected in channel k, the counting variable N is incremented by a weight νk and the conditional state is updated by the action of the jump operator ˆLk. In this case, ν+ = +1 and ˆL+ = √ γ(1 + nB)ˆa describe emission processes that increase N by one unit, while ν-= -1 and ˆL-= √γnBˆa† describe absorption events that correspondingly decrease it. The average steady-state current can be computed as J = Tr[J ˆρss]. (36) The stationary two-point correlation function between I(t) and I(t + τ) is given by F(τ) = E[δI(t)δI(t + τ)] = Kδ(τ) + Tr[J eL∣τ∣J ˆρss] -J2, (37) where δI(t) = I(t) -J denotes current fluctuations and K = ∑k ν2 kTr[ˆL† k ˆLk ˆρss]. In our case, since νk = ±1, K corresponds to the dynamical activity [84, 85], which quantifies the frequency of jumps. As shown in Ref. [2], the noise can be written in terms of the two-point correlation function as D = 2∫ ∞ 0 dτF(τ). (38) Features of the noise can therefore be understood in terms of correlations within the stochastic current, as we discuss in more detail below. III. RESULTS A. Current fluctuations beyond weak coupling We now apply the methods developed in the previous section to compute the steady-state excitation current and its cumulants for a coherently driven qubit strongly coupled to a bosonic environment, the dynamics of which is described by the RC-LME in Eq. (13). In Fig. 3, we present the average current defined in Eq. (36), noise (second cumulant), signal-to-noise ratio (SNR), and thermodynamic uncertainty ratio (TUR) as functions of the 7 0.00 0.02 0.04 0.000 0.002 0.004 J (a) 0.00 0.02 0.04 0.000 0.002 0.004 D (b) 0.00 0.02 0.04 λ 0.000 0.005 0.010 0.015 J2/D (c) 0.00 0.02 0.04 λ 2 4 6 Q (d) α = 0.01 α = 0.04 α = 1 FIG. 3. (a) Average excitation current J into the bath, (b) fluctuations of the excitation current D, (c) signal-tonoise ratio (SNR) J2/D, and (d) TUR ratio Q as functions of interaction strength λ for different spectral density width α = 0.01, 0.04, and 1. We set ∆q = 0, ∆c = 0, Ω= 0.005, and nB = 0.01. interaction strength λ. The results are shown for different spectral widths α in the Drude-Lorentz spectral density. In all our calculations, we take ωc = 1 as the unit of energy, and we consider two values of α = 0.01 and 0.04, both in the strong-coupling regime. The weakcoupling case, α = 1, which yields a relatively flat spectral density, is also considered as a benchmark. For the weak-coupling case, the heat current cumulants are computed using the standard Lindblad master equation and full counting statistics [2]. We observe a clear qualitative difference between the strong and weak coupling regimes. For α = 1, both the average current J and noise D increase monotonically with λ. In contrast, for smaller α values (blue and green curves), the average current exhibits a nonmonotonic dependence on λ, reaching a peak at intermediate coupling strengths and decreasing toward zero for large λ. The noise follows a similar trend to the current for α = 0.01. However, for α = 0.04, the noise displays a pronounced dip around the same λ values where the current peaks. Notably, this dip coincides with the parameter regime where the classical TUR is violated, which occurs only in the strongly coupled cases. Despite similar current magnitudes near the peak, the SNR is notably higher for α = 0.04, indicating that the TUR violation reflects a genuine reduction in fluctuations. We now focus on the case α = 0.04, and plot the average heat current into the environment and the noise in Fig. 4, for different fixed values of the drive strength Ωand bath occupation nB. We first examine the dependence on bath temperature for a fixed drive strength Ω= 0.005, shown in panels 4(a) and 4(c). As nB decreases, we see an increase in the mean heat current and a reduction in noise, with the maximum current and minimum noise occurring 0.00 0.02 0.04 0.000 0.005 0.010 0.015 (b) 0.00 0.02 0.04 λ 0.00 0.01 (d) 0.00 0.02 0.04 0.0000 0.0025 0.0050 0.0075 J (a) 0.00 0.02 0.04 λ 0.000 0.005 0.010 D (c) Ω: 0.005 0.01 0.02 nB : 0.01 0.1 1 FIG. 4. Average excitation current J into the bath and fluctuations of the excitation current D as functions of interaction strength λ, in (a) and (c) for different temperatures leading to bath occupation nB = 0.01, 0.1 and 1, with fixed Ω= 0.01, and in (b) and (d) for different drive strengths Ω= 0.005, 0.01 and 0.02, with fixed nB = 0.01. We set ∆q = 0 and ∆c = 0. at nB = 0.01, corresponding to the lowest bath temperature. Notably, the nonmonotonic features in the noise, characterized by distinct peaks and a trough, are most pronounced at this temperature, and gradually disappear as the temperature increases, vanishing for nB ∼1. The enhanced current at low temperatures suggests that the system relaxes to the lower excited state more efficiently in this regime. Next, we fix the bath occupation at nB = 0.01 and investigate the behavior of the current cumulants as a function of λ, for different values of the drive strength Ω, as shown in Figs. 4(b) and 4(d). We consider three values: Ω= 0.005, 0.01, and 0.02. The mean current increases with increasing drive strength, which is consistent with an overall increase in jump activity at stronger driving. However, this also leads to an increase in current fluctuations, with the peaks and troughs in the noise becoming less pronounced at higher Ω. These features are most clearly resolved at the weakest drive. B. Current correlations In order to deepen our understanding of the features shown in Fig. 4, we exploit the relation between the noise and the current auto-correlation function in Eq. (38). We focus on the regular part of the correlation function by subtracting the singular term, Kδ(τ), from Eq. (37), C(τ) ≡F(τ) -Kδ(τ) = Tr[J eL∣τ∣J ˆρss] -J2. (39) Since the term Kδ(τ) is related to temporally uncorrelated fluctuations (shot noise) [2], Eq. (39) quantifies 8 0.00 0.02 0.04 λ -0.005 0.000 0.005 (a) 0.0 2.5 5.0 7.5 τ · Ω -2 -1 0 C(τ) ×10-5 (b) λ = 0.0067 λ = 0.0103 λ = 0.015 λ = 0.02 0.00 0.02 0.04 λ -0.005 0.000 0.005 0.010 0.015 (c) 0 5 10 15 τ · Ω -4 -2 0 C(τ) ×10-5 (d) λ = 0.0122 λ = 0.0162 λ = 0.02 λ = 0.025 J D K D -K FIG. 5. Average excitation current J, diffusion coefficient D, dynamical activity K, and D -K as functions of interaction strength λ, in (a) and (c) for different drive strengths Ω= 0.01 and Ω= 0.02. In (b) and (d), current correlation functions C(τ) for selected λ values marked by crosses in D -K plots in panels (a) and (c), respectively. contributions from temporal correlations between different quantum jumps. For example, when the jumps are totally uncorrelated, F(τ) = Kδ(τ), whereas C(τ) = 0. Furthermore, we note that C(τ) can be related to the long-time diffusion coefficient as D -K = 1 2 ∫ ∞ 0 C(τ)dτ. (40) Once again, since the term K is due to white noise, D-K genuinely captures the contribution from temporal correlations. This quantity was previously introduced and studied in Ref. [19] in connection with dissipative phase transitions. In Fig. 5, we plot D -K and D as functions of the interaction strength λ, for Ω= 0.01 in Fig. 5(a), and Ω= 0.02 in Fig. 5(c). We see that D-K follows the same trend as D, with peaks and troughs coinciding for the same values of λ. Such nonmonotonic behavior of D and D-K results from temporal correlations, as evidenced by Eq. (40), with peaks (troughs) associated with correlated (anticorrelated) jumps. It is therefore instructive to look at the behavior of the correlation function in Eq. (39) for values of λ corresponding to peaks, troughs, alongside some other points, indicated in Figs. 5(a)-(c). We plot these correlation functions in Fig. 5(b), for Ω= 0.01, and Fig. 5(d), for Ω= 0.02. We note that the correlation functions C(τ) shown in Fig. 5(b)-(d) display an oscillatory behavior, starting at negative values for C(0) and eventually converging to zero for large τ. We highlight that at the troughs, in both cases, C(0) reaches its most negative value, and the period of the oscillations of C(τ) is the largest, corroborating the predominance of anticorrelated jumps and, 0.00 0.02 0.04 λ 0 -4 -8 Re[θi] ×10-3 (a) 0.00 0.02 0.04 λ 0 -4 -8 Re[θi] ×10-3 (b) 0 1 2 D/J2 ×103 0.0 0.5 1.0 D/J2 ×103 Re[θ1] Re[θ2] Re[θ3] D/J2 FIG. 6. Real part of the first three nonzero Liouvillian eigenvalues θ1, θ2, θ3 and D/J2 as functions of interaction strength λ, in (a) and (b) for different drive strengths Ω= 0.01 and Ω= 0.02. overall, antibunched statistics. This means that, near the troughs of D, the probability of observing a second jump is smallest at zero time delay after the first one (τ = 0). Such antibunching in a bosonic mode is a hallmark of nonlinear dynamics, indicating that strong hybridization with the qubit prevents multiple excitations from being created in the RC (analogous to the photon blockade effect [86]). C. Relaxation timescales As discussed in several previous works [2, 18, 19], current fluctuations are associated with an intrinsic timescale of the system. In particular, consider the timeaveraged stochastic current over an observation window t, ̄I(t) = 1 t ∫ t 0 dt′ I(t′), (41) which is an unbiased estimator of J since E[ ̄I(t)] = J. The timescale D/J2 is precisely the time t taken for the signal-to-noise ratio of the estimator ̄I(t) to reach one [2]. Thus, we expect the points in parameter space with minimal current noise to be associated with fast regression of current fluctuations back to the mean. To quantify these timescales more precisely, we consider the spectrum of the Liouvillian L of the extended system, Eq. (13). We note that the stationary state of the system satisfies Lρss = 0, and is therefore associated to the zeroth eigenvalue θ0 = 0. Furthermore, it is known that the real parts of the eigenvalues θi>0, which satisfy Re[θi] 0), whereas a coherent state (δG = 0) has large Cl1 in the Fock basis. Interestingly, the minima of Q occur precisely where these distinct signatures of nonclassicality intersect: where anticorrelation is the strongest, i.e., g(2)(0) is minimum, δG is maximal, and Cl1 is close to its peak. This observation highlights that several independent quantum traits in the environment, including both statistical (anticorrelation) and structural (nonGaussianity and Fock-basis superposition) features of the RC mode, contribute to violations of the classical TUR. IV. DISCUSSION In this work, we investigated the first and the second cumulants of excitation currents flowing through an open quantum system beyond the weak-coupling regime. In this direction, we begin by implementing the RC mapping framework and redefining the boundary between the system and environment by incorporating a collective environmental mode within an extended system. The interaction between the extended system and the residual bath can be described within the weak coupling approximation, allowing us to derive an RC-LME that governs the extended system dynamics. We show that the current statistics between different partitions, namely the origi10 nal system-bath and the extended system-residual bath, are identical in the long-time limit. This enabled us to examine the features of current fluctuations and correlations in the strong coupling regime, utilizing the toolkit of Markovian dynamics. We find significantly different behaviors in the strong and weak-coupling regimes. While the average current, J, and noise, D, increase monotonically with the interaction strength in the weak-coupling regime, both quantities behave nonmonotonically in the strong-coupling regime. In addition, for the strongly-coupled cases, a dip in D occurs in the same regime where a higher SNR is observed and the violation of the TUR occurs, thereby showing a suppression of current fluctuations. We showed that such nonmonotonic behavior of the noise results from strong temporal anticorrelations in the current. Furthermore, we have seen that larger negative values of the real part of eigenvalues of the Liouvillian, which imply faster relaxation of the system dynamics, are associated with a decrease in D. This is corroborated by the fact that the noise-to-signal ratio, D/J2, which captures the timescale of the decay of fluctuations, also displays smaller values where a reduction in current fluctuations is observed. Finally, we investigated TUR violations in relation to complementary measures of nonclassicality in the environmental degrees of freedom. We observe that the suppression of current fluctuations is associated with the simultaneous occurrence of two different nonclassical features of the environment: namely, non-Gaussianity and Fock-basis coherence in the RC mode. It is important to note that our results are obtained within the RWA and the local GKSL master equation, where energy transfer between the system and the environment proceeds via the exchange of well-defined energy quanta. This essentially means that the heat cumulants are proportional to the cumulants of the net excitation transfer N, which we have shown are the same for the original and residual baths in the long-time limits. This greatly simplifies the thermodynamic analysis and singles out the current between the residual bath and the extended system as the natural choice to describe heat flow [62, 64]. Within its regime of validity, we expect the RWA to give an adequate description up to small, fast-oscillating contributions that are neglected. However, for stronger system-bath couplings such that λ ∼ωq, or other more complex environmental spectral densities, the RWA will break down completely. In this regime, more sophisticated embeddings are required [7076] and additional theoretical complications will emerge, e.g., the nonequivalence of entropy flux between the original and residual baths [62, 64, 89], and contributions to the heat current from interactions within the extended system [52, 73]. Notwithstanding these subtleties, we hope that our results highlight the conceptual power of the Markovian embedding for understanding current fluctuations beyond weak system-environment coupling, and inspire further research along these lines deep into the strong-coupling regime. ACKNOWLEDGMENTS We warmly acknowledge Kesha for the use of her computational resources. Neither the European Union nor UKRI can be held responsible for them. M.T.M. is supported by a Royal Society University Research Fellowship. J.P. acknowledges support from grant QuantERA II program (Mf-QDS) that has received funding from the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 101017733 and from the Agencia Estatal de Investigaci ́on, with project code PCI2022132915 funded by MICIU/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, grant TED2021-130578BI00, grant PID2021-124965NBC21 (Quantera II cofund 2023 (Aqusend) Grant Agreement No. 101017733) and PCI2024-153474 funded by MICIU/AEI/10.13039/501100011033.". This project is co-funded by the European Union (Quantum Flagship project ASPECTS, Grant Agreement No. 101080167) and UK Research and Innovation (UKRI). Views and opinions expressed are, however those of the authors only and do not necessarily reflect those of the European Union, Research Executive Agency or UKRI. Appendix A: Detailed proof for FCS equivalence As discussed in Sec. II C 1, Eq. (25) corresponds to the tilted Liouvillian Lχ with counting field χ that accounts for the statistics of the net excitation number NB exchanged between the system and the original bath via RC mode. Furthermore, in Sec. II C 2, Eq. (27) gives the tilted Liouvillian L′ χ′ with a different counting field χ′, that monitors the net excitation number NB′ exchanged between the extended system and the residual bath. Here, we give a detailed proof for the result stated in Sec. II C 3 that, in the steady-state, the particle exchange statistics is identical between the qubit-RC and the RC-residual bath. In order to achieve this, it is enough to show that the tilted Liouvillian Lχ and L′ χ′ are related by a unitary transformation, as stated in Eq. (30). We start from the tilted Liouvillian in Eq. (25), Lχˆρ = -i[ ̃HES,χ, ˆρ]χ + γ(nB + 1)D[ˆa]ˆρ + γ(nB)D[ˆa†]ˆρ, (A1) where, [ ̃HES,χ, ˆρ]χ = ̃HES,χˆρ -ˆρ ̃HES,-χ and ̃HES,χ = ∆qˆσ+ˆσ-+ Ω(ˆσ+ + ˆσ-) + ∆cˆa†ˆa + λ(eiχ/2 ˆa†ˆσ-+ e-iχ/2 ˆσ+ˆa). (A2) We then follow the arguments in Ref. [29], beginning with the definition of the following unitary superoperators, U ˆρ = e-i 2 χˆa†ˆaˆρe-i 2 χˆa†ˆa, U†ˆρ = e i 2 χˆa†ˆaˆρe i 2 χˆa†ˆa. (A3) 11 To transform the Liouvillian under this unitary, we will now work with the following vectorized forms of the super operators, Lχ = -i(1 ⊗ ̃HES,χ - ̃HT ES,-χ ⊗1) + γ1(ˆa ⊗ˆa -1 21 ⊗ˆa†ˆa -1 2(ˆa†ˆa)T ⊗1) + γ2(ˆa ⊗ˆa -1 21 ⊗ˆaˆa† -1 2(ˆaˆa†)T ⊗1) = -i(1 ⊗ ̃HES,χ - ̃HES,χ ⊗1) + γ1(ˆa ⊗ˆa -1 21 ⊗ˆa†ˆa -1 2ˆa†ˆa ⊗1) + γ2(ˆa† ⊗ˆa† -1 21 ⊗ˆaˆa† -1 2ˆaˆa† ⊗1). (A4) where, γ1 = γ(nB + 1) and γ2 = γ(nB). We identify the unitary operator U = e-i 2 χˆa†ˆa, so the vectorized form of the unitary superoperator U becomes, U = U ⊗U = e-i 2 χˆa†ˆa ⊗e-i 2 χˆa†ˆa, U† = U † ⊗U † = e i 2 χˆa†ˆa ⊗e i 2 χˆa†ˆa. (A5) The unitary frame transforms the Liouvillian as Lχ → U†LχU. It is straightforward to calculate this by noting the following relations, ˆU † ̃HES,χ ˆU = ̃HES, ˆU †ˆa†ˆa ˆU = ˆa†ˆa, ˆU †ˆa ˆU = e-i 2 χˆa, ˆU †ˆa† ˆU = e i 2 χˆa†. (A6) Finally the transformed Liouvillian in the vectorized form reads, U†LχU = -i(1 ⊗ ̃HES - ̃HES ⊗1) + γ1(e i 2 χˆa ⊗e i 2 χˆa -1 21 ⊗ˆa†ˆa -1 2ˆa†ˆa ⊗1) + γ2(e-i 2 χˆa† ⊗e-i 2 χˆa† -1 21 ⊗ˆaˆa† -1 2ˆaˆa† ⊗1). (A7) By noticing that the RHS in the above equation is the vectorized form of the tilted Liouvillian L′χ′ with χ′ = χ, we finally recover the relation in Eq. (30), U†LχU = -i[ ̃HES, ˆρ] + γ(nB + 1)Dχ[ˆa]ˆρ + γ(nB)D-χ[ˆa†]ˆρ = L′ χ. (A8) 12 [1] M. Esposito, U. Harbola, and S. Mukamel, Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems, Rev. Mod. Phys. 81, 1665 (2009). [2] G. T. Landi, M. J. Kewming, M. T. Mitchison, and P. P. Potts, Current fluctuations in open quantum systems: Bridging the gap between quantum continuous measurements and full counting statistics, PRX Quantum 5, 020201 (2024). [3] N. Shiraishi, K. Saito, and H. Tasaki, Universal trade-off relation between power and efficiency for heat engines, Phys. Rev. Lett. 117, 190601 (2016). [4] P. Pietzonka and U. Seifert, Universal trade-off between power, efficiency, and constancy in steady-state heat engines, Phys. Rev. Lett. 120, 190602 (2018). [5] M. Tsang, Quantum metrology with open dynamical systems, New Journal of Physics 15, 073005 (2013). [6] S. Gammelmark and K. Mølmer, Bayesian parameter inference from continuously monitored quantum systems, Phys. Rev. A 87, 032115 (2013). [7] A. H. Kiilerich and K. Mølmer, Estimation of atomic interaction parameters by photon counting, Phys. Rev. A 89, 052110 (2014). [8] K. Macieszczak, M. Gut ̧ ̆a, I. Lesanovsky, and J. P. Garrahan, Dynamical phase transitions as a resource for quantum enhanced metrology, Phys. Rev. A 93, 022103 (2016). [9] F. Albarelli, M. A. C. Rossi, D. Tamascelli, and M. G. Genoni, Restoring Heisenberg scaling in noisy quantum metrology by monitoring the environment, Quantum 2, 110 (2018). [10] D. Yang, S. F. Huelga, and M. B. Plenio, Efficient Information Retrieval for Sensing via Continuous Measurement, Phys. Rev. X 13, 031012 (2023). [11] A. Cabot, F. Carollo, and I. Lesanovsky, Continuous sensing and parameter estimation with the boundary time crystal, Phys. Rev. Lett. 132, 050801 (2024). [12] K. Prech, G. T. Landi, F. Meier, N. Nurgalieva, P. P. Potts, R. Silva, and M. T. Mitchison, Optimal time estimation and the clock uncertainty relation for stochastic processes (2024), . [13] S. Khandelwal, G. T. Landi, G. Haack, and M. T. Mitchison, Current-based metrology with two-terminal mesoscopic conductors (2025), . [14] G. Mihailescu, A. Kiely, and A. K. Mitchell, Quantum Sensing with Nanoelectronics: Fisher Information for an Applied Perturbation (2025), . [15] P. Glidic, O. Maillet, A. Aassime, C. Piquard, A. Cavanna, U. Gennser, Y. Jin, A. Anthore, and F. Pierre, Cross-correlation investigation of anyon statistics in the ν = 1/3 and 2/5 fractional quantum hall states, Phys. Rev. X 13, 011030 (2023). [16] M. Ruelle, E. Frigerio, J.-M. Berroir, B. Pla ̧cais, J. Rech, A. Cavanna, U. Gennser, Y. Jin, and G. F`eve, Comparing fractional quantum hall laughlin and jain topological orders with the anyon collider, Phys. Rev. X 13, 011031 (2023). [17] K. Iyer, F. Ronetti, B. Gr ́emaud, T. Martin, J. Rech, and T. Jonckheere, Finite width of anyons changes their braiding signature, Phys. Rev. Lett. 132, 216601 (2024). [18] M. J. Kewming, M. T. Mitchison, and G. T. Landi, Diverging current fluctuations in critical Kerr resonators, Phys. Rev. A 106, 033707 (2022). [19] M. Matsumoto, Z. Cai, and M. Baggioli, Dissipative quantum phase transitions monitored by current fluctuations, Phys. Rev. A 112, 012226 (2025). [20] A. C. Barato and U. Seifert, Thermodynamic Uncertainty Relation for Biomolecular Processes, Physical Review Letters 114, 158101 (2015), publisher: American Physical Society. [21] T. R. Gingrich, J. M. Horowitz, N. Perunov, and J. L. England, Dissipation Bounds All Steady-State Current Fluctuations, Physical Review Letters 116, 120601 (2016). [22] J. M. Horowitz and T. R. Gingrich, Thermodynamic uncertainty relations constrain non-equilibrium fluctuations, Nature Physics 16, 15 (2020). [23] J. P. Garrahan, Simple bounds on fluctuations and uncertainty relations for first-passage times of counting observables, Phys. Rev. E 95, 032134 (2017). [24] I. Di Terlizzi and M. Baiesi, Kinetic uncertainty relation, Journal of Physics A: Mathematical and Theoretical 52, 02LT03 (2018). [25] K. Ptaszy ́nski, Coherence-enhanced constancy of a quantum thermoelectric generator, Phys. Rev. B 98, 085425 (2018). [26] K. Brandner, T. Hanazato, and K. Saito, Thermodynamic Bounds on Precision in Ballistic Multiterminal Transport, Physical Review Letters 120, 090601 (2018). [27] K. Macieszczak, K. Brandner, and J. P. Garrahan, Unified thermodynamic uncertainty relations in linear response, Phys. Rev. Lett. 121, 130601 (2018). [28] B. K. Agarwalla and D. Segal, Assessing the validity of the thermodynamic uncertainty relation in quantum systems, Phys. Rev. B 98, 155438 (2018). [29] A. A. S. Kalaee, A. Wacker, and P. P. Potts, Violating the thermodynamic uncertainty relation in the three-level maser, Phys. Rev. E 104, L012103 (2021). [30] A. Rignon-Bret, G. Guarnieri, J. Goold, and M. T. Mitchison, Thermodynamics of precision in quantum nanomachines, Phys. Rev. E 103, 012133 (2021). [31] K. Prech, P. Johansson, E. Nyholm, G. T. Landi, C. Verdozzi, P. Samuelsson, and P. P. Potts, Entanglement and thermokinetic uncertainty relations in coherent mesoscopic transport, Phys. Rev. Res. 5, 023155 (2023). [32] G. Guarnieri, G. T. Landi, S. R. Clark, and J. Goold, Thermodynamics of precision in quantum nonequilibrium steady states, Phys. Rev. Res. 1, 033021 (2019). [33] Y. Hasegawa, Quantum thermodynamic uncertainty relation for continuous measurement, Phys. Rev. Lett. 125, 050601 (2020). [34] Y. Hasegawa, Thermodynamic Uncertainty Relation for General Open Quantum Systems, Physical Review Letters 126, 010602 (2021). [35] Y. Hasegawa, Unifying speed limit, thermodynamic uncertainty relation and Heisenberg principle via bulkboundary correspondence, Nature Communications 14, 2828 (2023). [36] T. Van Vu and K. Saito, Thermodynamics of precision in markovian open quantum dynamics, Phys. Rev. Lett. 13 128, 140602 (2022). [37] A. M. Timpanaro, G. Guarnieri, and G. T. Landi, Hyperaccurate thermoelectric currents, Phys. Rev. B 107, 115432 (2023). [38] K. Prech, P. P. Potts, and G. T. Landi, Role of quantum coherence in kinetic uncertainty relations, Phys. Rev. Lett. 134, 020401 (2025). [39] K. Macieszczak, Ultimate Kinetic Uncertainty Relation and Optimal Performance of Stochastic Clocks (2024), . [40] S. V. Moreira, M. Radaelli, A. Candeloro, F. C. Binder, and M. T. Mitchison, Precision bounds for multiple currents in open quantum systems, Phys. Rev. E 111, 064107 (2025). [41] K. Brandner and K. Saito, Thermodynamic uncertainty relations for coherent transport, Phys. Rev. Lett. 135, 046302 (2025). [42] G. Blasi, R. R. Rodr ́ıguez, M. Moskalets, R. L ́opez, and G. Haack, Quantum Kinetic Uncertainty Relations in Mesoscopic Conductors at Strong Coupling (2025), . [43] D. Palmqvist, L. Tesser, and J. Splettstoesser, Kinetic uncertainty relations for quantum transport (2025), . [44] J. Liu and D. Segal, Thermodynamic uncertainty relation in quantum thermoelectric junctions, Phys. Rev. E 99, 062141 (2019). [45] A. M. Timpanaro, G. Guarnieri, and G. T. Landi, Quantum thermoelectric transmission functions with minimal current fluctuations, Phys. Rev. B 111, 014301 (2025). [46] J. A. Almanza-Marrero and G. Manzano, Certifying quantum enhancements in thermal machines beyond the Thermodynamic Uncertainty Relation (2025), . [47] F. Meier, Y. Minoguchi, S. Sundelin, T. J. G. Apollaro, P. Erker, S. Gasparinetti, and M. Huber, Precision is not limited by the second law of thermodynamics, Nature Physics 21, 1147 (2025). [48] J.-S. Wang, B. K. Agarwalla, H. Li, and J. Thingna, Nonequilibrium Green's function method for quantum thermal transport, Frontiers of Physics 9, 673 (2014). [49] M. Popovic, M. T. Mitchison, A. Strathearn, B. W. Lovett, J. Goold, and P. R. Eastham, Quantum heat statistics with time-evolving matrix product operators, PRX Quantum 2, 020338 (2021). [50] S. P. Mandal, M. Pandit, K. Mahadeviya, M. T. Mitchison, and J. Prior, Heat operator approach to quantum stochastic thermodynamics in the strong-coupling regime (2025), . [51] M. Brenes, G. Guarnieri, A. Purkayastha, J. Eisert, D. Segal, and G. Landi, Particle current statistics in driven mesoscale conductors, Phys. Rev. B 108, L081119 (2023). [52] L. P. Bettmann, M. J. Kewming, G. T. Landi, J. Goold, and M. T. Mitchison, Quantum stochastic thermodynamics in the mesoscopic-leads formulation, Phys. Rev. E 112, 014105 (2025). [53] J. Iles-Smith, N. Lambert, and A. Nazir, Environmental dynamics, correlations, and the emergence of noncanonical equilibrium states in open quantum systems, Phys. Rev. A 90, 032114 (2014). [54] J. Iles-Smith, A. G. Dijkstra, N. Lambert, and A. Nazir, Energy transfer in structured and unstructured environments: Master equations beyond the Born-Markov approximations, The Journal of Chemical Physics 144, 044110 (2016). [55] A. Nazir and G. Schaller, The reaction coordinate mapping in quantum thermodynamics, in Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions, edited by F. Binder, L. A. Correa, C. Gogolin, J. Anders, and G. Adesso (Springer International Publishing, Cham, 2018) pp. 551-577. [56] J. Prior, A. W. Chin, S. F. Huelga, and M. B. Plenio, Efficient simulation of strong system-environment interactions, Phys. Rev. Lett. 105, 050404 (2010). [57] P. Strasberg, G. Schaller, N. Lambert, and T. Brandes, Nonequilibrium thermodynamics in the strong coupling and non-Markovian regime based on a reaction coordinate mapping, New Journal of Physics 18, 073007 (2016). [58] D. Newman, F. Mintert, and A. Nazir, Performance of a quantum heat engine at strong reservoir coupling, Phys. Rev. E 95, 032139 (2017). [59] N. Anto-Sztrikacs and D. Segal, Strong coupling effects in quantum thermal transport with the reaction coordinate method, New Journal of Physics 23, 063036 (2021). [60] N. Anto-Sztrikacs, F. Ivander, and D. Segal, Quantum thermal transport beyond second order with the reaction coordinate mapping, The Journal of Chemical Physics 156, 214107 (2022). [61] N. Anto-Sztrikacs, A. Nazir, and D. Segal, Effectivehamiltonian theory of open quantum systems at strong coupling, PRX Quantum 4, 020307 (2023). [62] A. Colla and H.-P. Breuer, Thermodynamic roles of quantum environments: from heat baths to work reservoirs, Quantum Science and Technology 10, 015047 (2024). [63] M. Brenes, J. Garwo la, and D. Segal, Optimal qubitmediated quantum heat transfer via noncommuting operators and strong coupling effects, Phys. Rev. B 111, 235440 (2025). [64] M. Shubrook, J. Iles-Smith, and A. Nazir, Nonmarkovian quantum heat statistics with the reaction coordinate mapping, Quantum Science and Technology 10, 025063 (2025). [65] M. P. Woods, R. Groux, A. W. Chin, S. F. Huelga, and M. B. Plenio, Mappings of open quantum systems onto chain representations and Markovian embeddings, Journal of Mathematical Physics 55, 032101 (2014). [66] M. G. Genoni, M. G. A. Paris, and K. Banaszek, Quantifying the non-Gaussian character of a quantum state by quantum relative entropy, Phys. Rev. A 78, 060303 (2008). [67] R. J. Glauber, The quantum theory of optical coherence, Physical Review 130, 2529 (1963). [68] T. Baumgratz, M. Cramer, and M. B. Plenio, Quantifying coherence, Phys. Rev. Lett. 113, 140401 (2014). [69] J. Liu and D. Segal, Coherences and the thermodynamic uncertainty relation: Insights from quantum absorption refrigerators, Phys. Rev. E 103, 032138 (2021). [70] F. Mascherpa, A. Smirne, A. D. Somoza, P. Fern ́andezAcebal, S. Donadi, D. Tamascelli, S. F. Huelga, and M. B. Plenio, Optimized auxiliary oscillators for the simulation of general open quantum systems, Phys. Rev. A 101, 052108 (2020). [71] M. Brenes, J. J. Mendoza-Arenas, A. Purkayastha, M. T. Mitchison, S. R. Clark, and J. Goold, Tensor-network method to simulate strongly interacting quantum thermal machines, Phys. Rev. X 10, 031040 (2020). [72] J. E. Elenewski, G. W ́ojtowicz, M. M. Rams, and 14 M. Zwolak, Performance of reservoir discretizations in quantum transport simulations, The Journal of Chemical Physics 155, 124117 (2021). [73] A. M. Lacerda, A. Purkayastha, M. Kewming, G. T. Landi, and J. Goold, Quantum thermodynamics with fast driving and strong coupling via the mesoscopic leads approach, Phys. Rev. B 107, 195117 (2023). [74] A. Purkayastha, G. Guarnieri, S. Campbell, J. Prior, and J. Goold, Periodically refreshed baths to simulate open quantum many-body dynamics, Physical Review B 104, 045417 (2021). [75] A. Purkayastha, G. Guarnieri, S. Campbell, J. Prior, and J. Goold, Periodically refreshed quantum thermal machines, Quantum 6, 801 (2022). [76] P. Menczel, K. Funo, M. Cirio, N. Lambert, and F. Nori, Non-hermitian pseudomodes for strongly coupled open quantum systems: Unravelings, correlations, and thermodynamics, Phys. Rev. Res. 6, 033237 (2024). [77] A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, Dynamics of the dissipative two-state system, Rev. Mod. Phys. 59, 1 (1987). [78] ́Angel Rivas, A. D. K. Plato, S. F. Huelga, and M. B. Plenio, Markovian master equations: a critical study, New Journal of Physics 12, 113032 (2010). [79] P. P. Hofer, M. Perarnau-Llobet, L. D. M. Miranda, G. Haack, R. Silva, J. B. Brask, and N. Brunner, Markovian master equations for quantum thermal machines: Local versus global approach, New Journal of Physics 19, 123037 (2017). [80] A. Trushechkin, Unified gorini-kossakowski-lindbladsudarshan quantum master equation beyond the secular approximation, Phys. Rev. A 103, 062226 (2021). [81] P. P. Potts, A. A. S. Kalaee, and A. Wacker, A thermodynamically consistent markovian master equation beyond the secular approximation, New Journal of Physics 23, 123013 (2021). [82] A. Schnell, Global becomes local: Efficient many-body dynamics for global master equations, Phys. Rev. Lett. 134, 250401 (2025). [83] G. T. Landi and M. Paternostro, Irreversible entropy production: From classical to quantum, Rev. Mod. Phys. 93, 035008 (2021). [84] C. Maes, Frenetic bounds on the entropy production, Phys. Rev. Lett. 119, 160601 (2017). [85] C. Maes, Frenesy: Time-symmetric dynamical activity in nonequilibria, Phys. Rep. 850, 1 (2020). [86] K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Photon blockade in an optical cavity with one trapped atom, Nature 436, 87 (2005). [87] C. Weedbrook, S. Pirandola, R. Garc ́ıa-Patr ́on, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Gaussian quantum information, Reviews of Modern Physics 84, 621 (2012). [88] A. Serafini, Quantum continuous variables: a primer of theoretical methods (CRC press, 2023). [89] A. M. Lacerda, M. J. Kewming, M. Brenes, C. Jackson, S. R. Clark, M. T. Mitchison, and J. Goold, Entropy production in the mesoscopic-leads formulation of quantum thermodynamics, Phys. Rev. E 110, 014125 (2024).
2510.14929
Draft version October 17, 2025 Typeset using LATEX twocolumn style in AASTeX7.0.1 StarStream: Automatic detection algorithm for stellar streams Yingtian Chen ,1 Oleg Y. Gnedin ,1 Adrian M. Price-Whelan ,2 and Colin Holm-Hansen 1 1Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA 2Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA Abstract The Gaia mission has led to the discovery of over 100 stellar streams in the Milky Way, most of which likely originated from globular clusters (GCs). As the upcoming wide-field surveys can potentially continue to increase the number of known streams, there is a growing need to shift focus from manual detection of individual streams to automated detection methods that prioritize both quality and quantity. Traditional techniques rely heavily on the visual expectation that GC streams are dynamically cold and thin. This assumption does not hold for all streams, whose morphologies and kinematics can vary significantly with the progenitor’s mass and orbit. As a result, these methods are biased toward a subset of the whole stream population, with often unquantified purity and completeness. In this work, we present StarStream, an automatic stream detection algorithm based on a physics-inspired model rather than visual expectation. Our method provides a more accurate prediction of stream stars in the multi-dimensional space of observables, while using fewer free parameters to account for the diversity of streams. Applied to a mock GC stream catalog tailored for the Gaia DR3 dataset, our algorithm achieves both purity and completeness of at least 65% at Galactic latitudes |𝑏| > 30◦. Keywords: Stellar streams (2166); Globular star clusters (656); Stellar dynamics (1596); Galaxy dynamics (591) 1. Introduction Stellar streams are elongated tidal structures originating from either an existing or fully dissolved progenitor system, such as a globular cluster (GC) or a dwarf galaxy (D. Lynden- Bell & R. M. Lynden-Bell 1995). Compared to the high background density of Milky Way (MW) field stars, streams have an extremely low signal-to-noise ratio (S/N). As a result, only a few streams were identified prior to the past decade, including the Sagittarius stream by R. A. Ibata et al. (1994) and the Palomar 5 (Pal 5) stream by M. Odenkirchen et al. (2001). These pioneering efforts attempted to increase S/N by selecting a subset of stars that are more likely to belong to streams than to the field, and then visually searching for stream-like structures. This selection was typically performed by applying a matched filter in the color–magnitude diagram, usually a window function centered around the progenitor’s isochrone (e.g., C. M. Rockosi et al. 2002; C. J. Grillmair 2009; E. J. Bernard et al. 2014; N. Shipp et al. 2018). The launch of the Gaia mission ( Gaia Collaboration et al. 2016) has revolutionized the discovery of stellar streams by providing an all-sky map of stars in the six-dimensional phase space, particularly offering high-precision proper motions down to 𝐺≈20 since Data Release 2 (DR2, Gaia Collab- Corresponding author: Yingtian Chen ybchen@umich.edu oration et al. 2018). This enables additional matched filters based on astrometric measurements, significantly increasing the number of detected streams (see the review by A. Bonaca & A. M. Price-Whelan 2025). The all-sky coverage and ho- mogeneity of the Gaia data also motivate the development of automatic stream detection methods to replace visual inspec- tion (e.g., C. Mateu et al. 2011; D. Shih et al. 2021). One such method is STREAMFINDER (K. Malhan & R. A. Ibata 2018), which uses a mixture model in the multi-dimensional space of observables to automatically detect clusters of stellar orbits within a Gaussian tube. STREAMFINDER successfully identified 87 thin streams in Gaia Data Release 3 (DR3, Gaia Collaboration et al. 2023), including 28 new discoveries (R. Ibata et al. 2024). As the number of stream detections grows, more evidence shows that streams have density structure such as fans (B. Sesar et al. 2016), gaps (D. Erkal et al. 2016), spurs (A. M. Price-Whelan & A. Bonaca 2018), and cocoons (K. Mal- han et al. 2019; M. Valluri et al. 2024). These features are likely produced by perturbations in the host galaxy’s poten- tial, including bar rotation (K. Hattori et al. 2016; A. M. Price-Whelan et al. 2016; S. Pearson et al. 2017), disk rota- tion (J. Nibauer et al. 2024), and close encounters with other objects (R. G. Carlberg et al. 2012; W. H. W. Ngan & R. G. Carlberg 2014; D. Erkal et al. 2016, 2017; N. Banik et al. 2018). On the other hand, recent works proposed that stream density can trace the mass loss history of their progenitors (M. Gieles et al. 2021; Y. Chen et al. 2025a). These breakthroughs arXiv:2510.14929v1 [astro-ph.GA] 16 Oct 2025 2 Chen et al. emphasize the need to quantify the purity and completeness of stream detection, in order to accurately characterize their density structures. Previous studies have revealed density structures in indi- vidual streams using flexible density models (e.g., D. Erkal et al. 2017; K. Tavangar & A. M. Price-Whelan 2025). How- ever, these models involve many free parameters, making them computationally expensive when applied to all-sky data and better suited for precisely characterizing stream mem- bership after discovery. Even the simpler model used in STREAMFINDER requires millions of CPU hours on Gaia Early Data Release 3 ( Gaia Collaboration et al. 2021). This will greatly limit the usage of these methods for next-generation wide-field photometric surveys, such as those conducted by the Vera C. Rubin Observatory (Rubin, LSST Science Col- laboration et al. 2009) and the Nancy Grace Roman Space Telescope (Roman, D. Spergel et al. 2015). Furthermore, these models usually represent a stream as a tube surrounding a well-defined track, based on the visual expectation that streams are dynamically cold and thin. How- ever, this assumption is inaccurate, as even GC streams can be dynamically hot or spatially complex depending on the progenitor’s mass and orbit (N. C. Amorisco 2015). As a re- sult, such visually inspired models are inefficient at detecting more “irregular” streams. Compared to the visually-inspired models mentioned above, recent theoretical advances in stream formation have enabled physics-inspired models that achieve higher accu- racy with fewer free parameters. Specifically, given the host Galactic potential and the progenitor’s mass, position, and velocity, particle spray methods (A. Varghese et al. 2011; R. R. Lane et al. 2012; A. H. W. K¨upper et al. 2012; A. Bonaca et al. 2014; M. A. Fardal et al. 2015; D. Roberts et al. 2025; Y. Chen et al. 2025b) can efficiently generate tracer particles that follow the expected distribution of stream stars in six-dimensional phase space, typically requiring only one or even zero free parameters. In particular, the method of Y. Chen et al. (2025b) is calibrated to match N-body simulations within 10% error for typical GC streams, without introducing any additional parameters. As a result, stream models based on this approach can potentially reduce computational cost by a significant amount, while also being able to detect hotter and wider streams. In this work, we present StarStream, an automatic detection algorithm for stellar streams using a physics-inspired stream model based on Y. Chen et al. (2025b). We employ ker- nel density estimation (KDE) to construct smooth probability density functions (PDFs) for both the stream and background populations. The algorithm is applied to the mock dataset from C. Holm-Hansen et al. (2025, hereafter H25), tailored for Gaia DR3, to quantify its purity and completeness in de- tecting streams originating from existing GCs. The paper is structured as follows. In §2, we describe the methodology of StarStream in detail. We then present validation tests on the mock dataset in §3. In §4, we discuss the motivation for applying the method to upcoming surveys (§4.1) and improve- ments over other algorithms (§4.2). Finally, we summarize our findings in §5. 2. Method We distinguish stream members from the background stars using a mixture model, which is a powerful tool for identifying faint structures such as ultra-faint dwarf galaxies (e.g., A. B. Pace & T. S. Li 2019) and stellar streams (e.g., K. Malhan & R. A. Ibata 2018; K. Tavangar & A. M. Price-Whelan 2025). Specifically, we construct the joint probability density function (PDF) of the stream and background populations as 𝑝(𝒙) = 𝑓s𝑝s(𝒙) + (1 −𝑓s)𝑝bg(𝒙) where 𝒙denotes a point in the multi-dimensional observable space, including positions, velocities, colors, magnitudes, and other properties. Traditional methods define PDFs of the stream 𝑝s(𝒙) and the background 𝑝bg(𝒙) as parametric func- tions with several fixed or adjustable parameters. The best-fit values of these parameters, together with the stream fraction 𝑓s, are often estimated by maximizing the log-likelihood, ln L ≡ 𝑁 ∑︁ 𝑖=1 ln  𝑓s𝑝s,𝑖+ (1 −𝑓s)𝑝bg,𝑖  (1) where 𝑝s,𝑖≡𝑝s(𝒙𝑖) and 𝑝bg,𝑖≡𝑝bg(𝒙𝑖) are the probability densities of the 𝑖-th star being a member of the stream and background, respectively. Since parameter estimation is exponentially more compu- tationally expensive as the number of adjustable parameters grows, many methods tend to simplify the stream model by assuming it to be a thin tube along a predefined track. Simi- larly, the background model is often approximated as uniform or only slowly varying across observables. However, recent advances in the theory of GC stream formation now allow for more accurate stream modeling with even fewer param- eters (Y. Chen et al. 2025b). Additionally, we can employ a nonparametric KDE to model the nonuniform background without introducing extra model parameters. In this work, we develop a new stream detection method by incorporating these improvements into the mixture model. In the follow- ing sections, we detail our approach for accurately estimating 𝑝s(𝒙) and 𝑝bg(𝒙). 2.1. Stream probability density To approximate the PDF of streams, we first generate simu- lated streams around progenitor GCs using the particle spray algorithm. Specifically, we use the agama (E. Vasiliev 2019) StarStream: Automatic detection algorithm for stellar streams 3 implementation of the Y. Chen et al. (2025b) algorithm,3 which initializes the positions and velocities of stream tracer particles from a multivariate Gaussian distribution, calibrated using N-body simulations of disrupting GCs. This algo- rithm accurately reproduces the width and length of simu- lated streams across a wide range of cluster masses and orbital types. To obtain the probability density in the color–magnitude space, we first assign a stellar mass to each tracer particle by drawing from the P. Kroupa (2001) initial mass func- tion (IMF). Although the mass function (MF) may evolve due to energy equipartition that preferentially ejects low- mass stars, the high-mass end above Gaia’s detection limit (≳0.5 M⊙) remains largely consistent with the IMF (see H25). We then compute the colors and magnitudes using the MESA Isochrones and Stellar Tracks (MIST, A. Dotter 2016; U. Meˇstri´c et al. 2022), taking the progenitor GC’s age and metallicity as input. For this study, we adopt Gaia’s 𝐺mag- nitude and BP −RP color. In §3.4, we also test an alternative isochrone model, PARSEC (A. Bressan et al. 2012), which has negligible effect on the final detection quality. For each stream, we release tracer particles over the last 1 Gyr assuming a uniform ejection rate. The ejection rate can be set time-varying if needed. However, the uniform rate suffices to produce a realistic stream density distribution that is distinguishable from the background, and variations in the ejection rate only slightly affect the density along most streams (see §2.2 in Y. Chen et al. 2025a). We generate 4000 tracer particles per stream, which is sufficient to fully sample the multi-dimensional parameter space. The minimum stellar mass used in sampling the mass function is set to the low- est possible mass of the closest tracer particle that remains above the detection limit. Since the heliocentric distances of stream stars vary along the stream, this minimum mass is a conservative choice to ensure that all regions of the stream are well-sampled above the local detection limit. We use Gaussian KDE to estimate the stream PDF from tracer particles in the 𝑀-dimensional parameter space, in- cluding positions, velocities, colors, magnitudes, and other properties. By denoting the 𝑀-dimensional coordinate of the 𝑗-th tracer as 𝒙𝑗≡(𝑥1 𝑗, 𝑥2 𝑗, · · · , 𝑥𝑀 𝑗), the probability density at any arbitrary point 𝒙is 𝑝s(𝒙) ≈1 𝑁tr 𝑁tr ∑︁ 𝑗=1 𝑝KDE(𝒙|𝒙𝑗, 𝝈) ≡1 𝑁tr 𝑁tr ∑︁ 𝑗=1 𝑀 Ö 𝑘=1 1 √ 2𝜋𝜎𝑘 exp " − (𝑥𝑘−𝑥𝑘 𝑗)2 2𝜎2 𝑘 # 3 Tutorials for this algorithm are available at https://github.com/ybillchen/ particle spray and are preserved on Zenodo at Y. Chen (2024). where 𝑁tr is the total number of tracers. It is straightforward to verify that integrating 𝑝s(𝒙) over the full 𝑀-dimensional space yields unity. We define 𝝈≡(𝜎1, 𝜎2, · · · , 𝜎𝑀) as the array of KDE kernel bandwidths, with no correlation between each dimension. In practice, we set 𝜎𝑘to 0.1 times the standard deviation of all tracer particles in the 𝑘-th dimension when 𝑘refers to positions or velocities. For magnitudes, we use 𝜎= 0.1; and for colors, 𝜎= 0.02. We have verified that varying these values by a factor of 0.5–2 has a negligible effect on our results. Note that the KDE approach naturally captures the fact that most stream stars are faint, since we sample stream particle masses from the P. Kroupa (2001) IMF. As a result, more tracer particles are gathered toward the faint end, leading to higher probability densities in that region. Before applying KDE to estimate the probability density, it is helpful to rotate the equatorial coordinate system so that the new latitude of the stream center is zero. This is particularly important for streams at high declination, where the metric tensor deviates significantly from identity. In this work, we always work in a rotated coordinate frame (𝜙1, 𝜙2), where the progenitor GC is located at (0, 0) and the proper motion is in the positive 𝜙1 direction. In this case, the diagonal elements of the metric tensor g = diag(1, cos2 𝜙2) only deviate from those of the identity tensor by 3% even with 𝜙2 = 10◦. This coordinate system is similar to the great circle frame com- monly used to describe stellar streams, but is less ambiguous when the stream is wide or has strong curvature. In practice, the observables of each star often have significant observational uncertainties, denoted by 𝝈0 ≡ (𝜎1,0, 𝜎2,0, · · · , 𝜎𝑀,0), which may exceed the corresponding KDE kernel widths. As we show in Appendix A, the effective PDF for such a star is the convolution between the original PDF and a Gaussian kernel with standard deviations 𝝈0. This convolution results in a modified KDE evaluated at the same location, with kernel width replaced by 𝜎′2 𝑘 ≡𝜎2 𝑘+ 𝜎2 𝑘,0. Therefore, for a star with 𝑀-dimensional coordinates 𝒙and uncertainties 𝝈0, the stream probability density becomes 𝑝s(𝒙, 𝝈0) ≈1 𝑁tr 𝑁tr ∑︁ 𝑗=1 𝑝KDE(𝒙|𝒙𝑗, 𝝈′) where 𝝈′ incorporates both the KDE kernel width and obser- vational uncertainty. For Gaia data specifically, the uncertainties in positions and magnitudes are almost always smaller than the corre- sponding KDE bandwidths. For simplicity and computational efficiency, we therefore ignore the uncertainties in these pa- rameters in our subsequent analysis and consider only the uncertainties in proper motions and color. It is worth noting that Gaia astrometric uncertainties have nonzero correlations. These correlations influence the error propagation from the original equatorial frame to the rotated 4 Chen et al. (𝜙1, 𝜙2) frame. In Appendix B, we explicitly calculate the linear uncertainty propagation associated with this coordinate transformation. However, to simplify our calculations, we do not include these correlations when performing the convo- lution with the Gaussian kernel, allowing us to treat each dimension independently during KDE evaluation. This sim- plification has only a minor effect on the inferred probability density, as the correlations are generally weak (𝑟< 0.5). 2.2. Background probability density Similarly to the stream probability density, we also use Gaussian KDE to estimate the background PDF around a stream. We directly use the observed stars in the same spatial region as the stream to construct the KDE estimator. However, there are typically 107 observed stars in these regions. Con- structing and evaluating a multi-dimensional Gaussian KDE with so many data points can be extremely inefficient. To ad- dress this, we perform an initial selection around the isochrone at the progenitor’s distance to exclude stars that are too red or too blue to be plausible stream members. For Gaia specifi- cally, we select stars within a color offset Δ(BP −RP) = 0.5 around the main sequence, red-giant branch (RGB), and hori- zontal branch of the isochrone. We also extend the isochrone with Δ𝐺= 1.5 above the tip of the RGB and around the horizontal branch to include stars clustered in those regions. This selection is rather conservative, given that the typical spread around the isochrone is 𝜎BP−RP ≲0.1 (M. Riello et al. 2021) even considering the distance spread of stream stars. Nevertheless, it still reduces the number of background stars by a factor of 2 −10. To further speed up the calculation, we use a grid inter- polation technique, given the fact that the background pop- ulation is relatively homogeneous and uncorrelated across most observables. For Gaia, we account for correlations only in the two-dimensional position space 𝒙2D pos and the two- dimensional color–magnitude space 𝒙2D cm. The proper motions 𝜇𝜙1 ≡¤𝜙1 cos 𝜙24 and 𝜇𝜙2 ≡¤𝜙2 are assumed to be uncorre- lated with other observables. Under these assumptions, the background PDF becomes 𝑝bg(𝒙) = 𝑝pos bg (𝒙2D pos) 𝑝𝜇𝜙1 bg (𝜇𝜙1) 𝑝𝜇𝜙2 bg (𝜇𝜙2) 𝑝cm bg (𝒙2D cm) where 𝒙≡  𝒙2D pos, 𝜇𝜙1, 𝜇𝜙2, 𝒙2D cm  . We evaluate the PDF in each subspace independently using Gaussian KDE on a rect- angular grid large enough to cover all background stars in that subspace. Each subspace is kept at most two-dimensional, since the number of grid points grows exponentially with dimensionality. For efficiency, we randomly select 104 back- ground stars near the stream to construct the KDE estimators. 4 Some works refer to ¤𝜙1 cos 𝜙2 as 𝜇∗ 𝜙1. We use 𝜇𝜙1 for simplicity, as this does not lead to confusion. We adopt bandwidths of 0.5◦for position, 1 mas yr−1 for proper motions, and 0.1 for the color–magnitude space. The grid spacings are set equal to the corresponding bandwidths. The final results are not sensitive to the exact choice of band- widths or grid spacing, as these values sufficiently resolve the density structure in observable space. Quantitatively, we have verified that multiplying or dividing the bandwidths and grid spacings by a factor of 2 results in only ≲10% variation in detection purity and completeness for a typical stream. Note that the PDF estimation can be biased in regions with 𝜙2 > 10◦, where the metric tensor deviates from the identity tensor by more than 3% (see §2.1). Our stream generation method may also deviate from the actual stream track in re- gions far from the GC if the adopted Galactic potential model is inaccurate. For these reasons, the KDE approach is best suited for relatively small regions, such as a 10◦cone around the GC. Nevertheless, this region is still sufficiently large to enclose the half-number radius for 2/3 of the simulated streams. Finally, we perform linear interpolation on the grids and compute the product of the independent PDFs to estimate the background probability density 𝑝bg at any point of interest. Unlike our estimation of the stream probability density, where the simulated stream has no observational uncertainty, the real Gaia data already include measurement uncertainties. As a result, the background PDFs obtained above are already the convolution of the original PDFs with Gaussian kernels char- acterizing uncertainties. Therefore, we do not need to apply the convolution again when evaluating the background PDF at a given point. This simplification is particularly helpful, as performing convolution would be highly inefficient within the grid interpolation framework. An alternate method is first deconvolving the real Gaia data to obtain the actual underlying PDF of the background, and then using the same approach as our stream model to compute the PDF. The first step can be achieved using tech- niques such as “extreme deconvolution” (J. Bovy et al. 2011). Although this method provides a coherent approach to ob- tain PDFs for both the stream and the background, it is more computationally expensive by ∼1000 times compared to grid interpolation as this alternate method requires performing the full KDE evaluation over background stars. Although we demonstrate our method using Gaia data as an example, our method can be readily adapted to other datasets. For instance, metallicity and radial velocity can be included as additional observables when dealing with spectroscopic surveys. Since we perform grid fitting separately for most dimensions, adding extra observables only increases the com- putational cost linearly. StarStream: Automatic detection algorithm for stellar streams 5 2.3. Stream detection We optimize our mixture model for the stream and back- ground populations by varying the stream fraction 𝑓s to max- imize the log-likelihood function in Eq. (1). The best-fit stream and background probability densities for star 𝑖are then given by 𝑓s𝑝s,𝑖and (1 −𝑓s)𝑝bg,𝑖, respectively. Following the standard definition used in mixture models, the membership probability that star 𝑖belongs to the stream is given by 𝑃s,𝑖≡ 𝑓s𝑝s,𝑖 𝑓s𝑝s,𝑖+ (1 −𝑓s)𝑝bg,𝑖 . (2) We consider stars with membership probability greater than a chosen threshold 𝑃th to be identified as stream members. In this work, we adopt the standard setup of the mixture model with 𝑃th = 0.5. However, we emphasize that 𝑃th is an adjustable parameter that can be tuned depending on whether the analysis prioritizes completeness or purity. 3. Method validation We validate our new method by applying it to a mock catalog of stream and background stars tailored for Gaia DR3, based on the H25 stream catalog. 3.1. Mock observational data Our mock catalog consists of both a stream population and a background population, each described by six observables: positions (𝜙1, 𝜙2), proper motions (𝜇𝜙1, 𝜇𝜙2), color (BP − RP), and magnitude (𝐺). We do not include radial velocity or metallicity, as these quantities are available for only a small subset of stars in Gaia DR3. Parallax is also excluded because of its large uncertainty at the typical heliocentric distances of stellar streams. The stream population is from the mock catalog of GC streams in a simulated MW-like galaxy (ID 523889) by H25. This catalog generates synthetic streams around a mock GC population based on the GC formation model of Y. Chen & O. Y. Gnedin (2024). It fits the host potential of simu- lated galaxies at each snapshot using basis function expansion (BFE), accounting for the time evolution of the potential by linearly interpolating between snapshots. The catalog then explicitly integrates the orbit of each GC in this potential over the last ∼3.5 Gyr and computes the mass loss rate based on the GC mass and the local tidal field. H25 initializes each GC with the P. Kroupa (2001) IMF and releases stars accord- ing to the time-varying mass loss rate. Stars are released probabilistically, with the ejection probability inversely pro- portional to the square root of stellar mass. The released stars form stellar streams using the particle spray algorithm by Y. Chen et al. (2025b). The catalog provides the initial mass, age, and metallicity [Fe/H] of each star, allowing us to assign synthetic Gaia photometry directly using the MIST isochrone model. Ideally, we should generate our simulated streams in the same potential used in H25 for full consistency. However, because it is challenging to constrain the time evolution of the Galactic potential in practice, we use only the static potential at the final snapshot to avoid over-idealization. The stream duration in H25 is longer than that of most observed GC stream segments (≲1 Gyr, Y. Chen et al. 2025a). In this work, we only use the portions of streams that were released in the last 1 Gyr to mimic streams that are currently observable. Since the main goal of this method is to detect streams around GCs, it is also unnecessary to include stream stars released more than 1 Gyr ago or located outside the 10◦cone centered on the GC, as these stars tend to be more sensitive to the time evolution of the Galactic potential. To create a more realistic stream population, we also add observational errors to the mock stream stars. Similarly to §2.1, we neglect errors in positions and magnitudes. For proper motion uncertainties, we adopt the following paramet- ric form from Gaia’s performance website,5 which describes the dependence of the uncertainty on apparent magnitude, 𝜎𝜇= √ 40 + 800𝑧+ 30𝑧2 1000 mas yr−1 (3) where 𝑧= 100.4[max(𝐺,13)−15]. This expression increases from 0.01 to 0.6 mas yr−1 with 𝐺= 13 −20. These errors can be significant, especially given that the intrinsic proper motion spread of a Pal 5-like stream is much smaller. Gaia’s performance website suggests multiplying Eq. (3) by a fudge factor of 1.03 and 0.89 for the proper motions of right ascension and declination, respectively. In the rotated frame (𝜙1, 𝜙2), we have verified that simply setting this factor to unity for both coordinates also reproduce the actual proper motion uncertainties with sufficient accuracy. We then add Gaussian noise to the original proper motions using Eq. (3). Similarly, we parameterize the dependence of color uncer- tainty on apparent magnitude using the following expression, 𝜎BP−RP = 10[max(𝐺,14)−23]/3 (4) which increases from 0.001 to 0.1 for 𝐺= 14 −20. This expression accurately reproduces the mean BP −RP color uncertainty in Gaia DR3 by M. Riello et al. (2021). For the background population, we directly use observa- tional data from Gaia DR3 within the same 10◦cone centered on the progenitor GC of each stream in the H25 catalog. We only select stars with 𝐺< 20. Based on the Gaia DR3 selec- tion function from gaiaunlimited (T. Cantat-Gaudin et al. 2023), we find that the mean completeness exceeds 99% in regions where streams are located. Even for the most incom- plete case, the completeness remains above 90%. Thus, this 5 https://www.cosmos.esa.int/web/gaia/science-performance 6 Chen et al. magnitude cut provides near-complete coverage for our mock dataset. We also apply the same color–magnitude selection as in §2.2 to reduce the number of background stars that are extremely unlikely to be misidentified by the method. This results in between 1 million (near the Galactic pole) and 30 million (near the Galactic center) background stars in a selected region, compared to 10 −10, 000 stream stars in the same region. This dramatic contrast between the two populations (up to ∼4 orders of magnitude) underscores the challenge of detecting streams using traditional methods. We restrict our analysis to streams originating from sur- viving GCs with 𝑀> 103 M⊙and containing at least 10 stars after applying the above selection criteria. This leads to 158 streams from our chosen catalog from H25. For each stream, we construct a mock dataset following the procedure described above and evaluate our detection algorithm on it, as- suming no prior knowledge of how the dataset is constructed. 3.2. Method performance First, we illustrate key concepts of our method by applying it to an example mock stream originating from a GC with mass 𝑀= 5.6 × 105 M⊙, located at a Galactocentric radius 𝑟Gal = 11 kpc and a Galactic latitude 𝑏= 21◦. The tidal radius is 125 pc, corresponding to an angular size of 0.63◦ at its heliocentric distance 𝑑⊙= 8.7 kpc. In the top row of Fig.1, we show the distributions of simulated tracer particles in position space, proper motion space, and color–magnitude space. The stream PDF 𝑝s(𝒙) is constructed via KDE using these simulated tracers. Although we display 𝑝s(𝒙) as 2D contours in each of the three subspaces, we emphasize that the KDE is constructed in the full six-dimensional space of all observables and projected onto 2D subspaces. We then apply our method to the mock dataset, which in- cludes 194 mock stream stars from H25 and approximately four million background stars from Gaia DR3. We detect 244 stream members, of which 140 are true members (see the bottom row of Fig.1). Most of the false positives and missed detections are either along the RGB, where background con- tamination is high, or near 𝐺= 20, where observational un- certainties become significant according to Eqs.(3) and (4). As we show later in §4.1 and Fig. 8, both of these regions have lower S/N than the main sequence turnoff. We also note that our simulated stream in Fig.1 samples the RGB less densely than the main sequence due to the lower abundance of RGB stars. This, however, only weakly affects the detection qual- ity, as RGB stars only contribute a small fraction of the total and are intrinsically hard to detect in any case due to their low S/N. It is worth noting that for faint stars (𝐺≳19), the uncer- tainties in color and proper motions can exceed the intrinsic spread of the entire stream. Our method accounts for this by convolving the KDE with a Gaussian blob centered on each star, as described in §2.1. Without this convolution, many stars would be shifted outside the effective selection window in proper motion and color space, leading to significantly fewer detections than expected. In contrast, while our detec- tion accuracy does decrease for these faint stars, the method still tends to recover the correct total number of stream mem- bers by balancing false positives and missed detections. This is expected as our mixture model tends to reproduce the cor- rect number of stream stars to recover the correct density ratio between the stream and the background. Next, we examine the statistical performance of our method by applying it to all mock streams. We quantify detection quality using three metrics: detection ratio, purity, and com- pleteness. The detection ratio is defined as the ratio of the total number of detected stream members 𝑁detect to the total number of true members 𝑁true. In the upper panel of Fig. 2, we show 𝑁detect/𝑁true as a function of the threshold probabil- ity 𝑃th. Starting from a large value ≫1 at 𝑃th = 0, the median detection ratio of all streams rapidly drops to 1 at 𝑃th ≈0.07, with an interquartile range of approximately 0.3 dex. It then gradually declines to 0.2 at 𝑃th = 0.8, followed by a rapid decrease to 0 as 𝑃th approaches 1. Furthermore, we define the purity as the ratio of correctly identified stream members 𝑁correct to the total number of de- tected members 𝑁detect. The completeness is defined as the ratio of 𝑁correct to 𝑁true. By definition, both purity and com- pleteness range from 0 to 100%, and the ratio between com- pleteness and purity equals the detection ratio. In the lower panel of Fig. 2, we show the median purity and complete- ness as functions of 𝑃th. The median purity increases rapidly from 0 to about 70% for 𝑃th = 0 −0.1, and then gradually approaches 100% as 𝑃th →1 (note that purity is not well defined at 𝑃th = 1). In contrast, completeness drops from 100% to around 60% over 𝑃th = 0 −0.1, and then contin- ues to decrease to 0 at 𝑃th = 1. The two curves intersect at 𝑃th ≈0.07, where the detection ratio also reaches unity. When divided by Galactic latitude 𝑏, the high-latitude streams with |𝑏| > 30◦reach a detection ratio of unity at 𝑃th = 0.5. However, a broad range of 𝑃th = 0.2 −0.6 yields detection ratios that deviate from unity by less than 0.2 dex. For these high-latitude streams, purity and completeness also intersect at 𝑃th ≈0.5, reaching 80% and 72% respectively. On the other hand, the majority of streams are located at low Galactic latitudes |𝑏| < 30◦. Therefore, their detection qual- ity more closely follows the overall trend of the full sample, which is worse than the high-latitude streams. This differ- ence comes from the distinct background densities between the two populations. Compared to the high-𝑏streams near the Galactic poles, the low-𝑏streams closer to the Galactic plane are contaminated by roughly 10 times more background stars. In these regions, the signal from actual stream stars is more StarStream: Automatic detection algorithm for stellar streams 7 10 5 0 5 10 1 (deg) 10 5 0 5 10 2 (deg) 1 2 3 4 5 1 (mas yr 1) 2 1 0 1 2 2 (mas yr 1) 0.4 0.6 0.8 1.0 BP RP 15 16 17 18 19 20 G 10 5 0 5 10 1 (deg) 10 5 0 5 10 2 (deg) False positive Missing Correct 1 2 3 4 5 1 (mas yr 1) 2 1 0 1 2 2 (mas yr 1) 0.4 0.6 0.8 1.0 BP RP 15 16 17 18 19 20 G Figure 1. Demonstration of a test of the method on a mock stream. Top row: distributions of simulated tracer particles (magenta) and background stars (black) in position space (left), proper motion space (middle), and color–magnitude space (right). We plot the stream PDF 𝑝s(𝒙) from Gaussian KDE as gray contours. The three contours from dark to light represent three values of ln 𝑝s(𝒙) −ln 𝑝s,max(𝒙) = −0.5, −2, −4.5, corresponding to the 1-𝜎, 2-𝜎, and 3-𝜎ranges of the standard Gaussian distribution, respectively. We only show 104 background stars randomly chosen from the total of 4 × 106 for visual clarity. Bottom row: application of the method on this mock stream using 𝑃th = 0.4. We show stars that are false positives (red circle), missed by the method (blue open circle), or correctly detected (blue solid circle) in the same subspaces as the top row. The mock stream members already have uncertainties added and are mixed with background stars from real Gaia DR3 data. We also plot the contours of ln 𝑝s(𝒙) −ln 𝑝s,max(𝒙) = −4.5 for comparison. For the left column of both rows, we show the location of the progenitor GC as the circle of tidal radius. The velocity of the GC is represented by the arrow. easily washed out by the strong background noise, leading to a lower probability to be identified as stream members. Based on the above tests, our default 𝑃th = 0.5 yields high purity and completeness > 70% for streams with relatively low contamination at |𝑏| > 30◦. The similarity between pu- rity and completeness also ensures that 𝑁detect serves as an unbiased estimator of 𝑁true. This is particularly important for inferring properties of the progenitor GC, such as the mass loss rate. Streams at |𝑏| < 30◦should be treated more care- fully sincem the median completeness can drop to about 40%. However, we emphasize that 𝑃th is a user-defined parameter that can be set anywhere between 0 and 1, depending on whether the application prioritizes purity or completeness. To further study the dependence of detection quality metrics on Galactic latitude, we plot them against |𝑏| in Fig. 3. We apply Gaussian kernel smoothing (similarly to §3.2 in Y. Chen & O. Y. Gnedin 2023) to estimate the median and interquartile ranges of the three metrics as smooth functions of |𝑏|, using a kernel bandwidth that grows linearly from 5◦to 15◦over the range |𝑏| = 0 −75◦. We find that the detection ratio 𝑁detect/𝑁true remains consistent with unity for |𝑏| > 30◦. However, the ratio deviates unity near the Galactic plane, where the dispersion is also larger since ∼20% of streams with detection ratios < 0.1 are all located at |𝑏| < 30◦. The new method reports zero detections for only 4% of all streams. This suggests that the method almost always iden- tifies some stream members as long as a stream is present. However, this conclusion requires further validation by ad- dressing two key questions: 1) How often does the method report false detections when no stream is present? 2) To what extent does the idealization of our mock data benefit detec- tion quality? We explore these questions in the following subsections. 8 Chen et al. 10 2 10 1 1 10 Ndetect/Ntrue All streams |b| > 30 |b| < 30 0 0.2 0.4 0.6 0.8 1.0 Threshold probability Pth 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 2. Detection ratio (upper panel) and completeness/purity (lower panel) of detected stream members as functions of probability threshold. The solid lines stand for the median values among all test streams, while the shaded ranges show the interquartile ranges. De- tection ratio = 1 is highlighted as the dot-dashed line. We also show the median detection ratio, completeness, and purity for streams with progenitor GCs at low Galactic latitude (|𝑏| < 30◦, dotted curves) and high Galactic latitude (|𝑏| > 30◦, dashed curves) separately. Since the purity at 𝑃th = 1 is not well defined, we extrapolate the values at 𝑃th = 0.99 out to 1 for visual clarity. 3.3. Null test In addition to the above tests that focus on how many stream member stars can be correctly detected, it is also important to perform the null test to ensure that the method does not report false detections when there is no stream. Therefore, we design the null test by removing the signal (i.e., mock stream stars) from the test dataset and applying the method only to the background stars in the same region around each GC. A perfect method should yield zero detections in the null test. However, due to random fluctuations in the background that are not captured by the KDE, it is possible that a small fraction of background stars are accidentally better described by the stream model. This can lead to the detection of “false streams”. In Fig. 4, we show the number of detections in the null test, 𝑁null, as a function of Galactic latitude |𝑏|. For compar- ison, we also plot the true number of stream stars 𝑁true and 10 2 10 1 1 10 Ndetect/Ntrue 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 3. Detection ratio (upper panel) and completeness/purity (lower panel) of detected stream members as functions of the progen- itor GC’s absolute Galactic latitude |𝑏| with 𝑃th = 0.5. Individual streams are shown as black circles (detection ratio), blue diamonds (completeness), and red triangles (purity). The solid lines stand for the median values among all test streams, while the shaded ranges show the interquartile ranges. Detection ratio = 1 is highlighted as the dot-dashed line. We calculate the percentiles at any 𝑏using nearby streams smoothed by the Gaussian kernel, with bandwidth varying linearly from 5◦to 15◦from the Galactic plane to the poles. the number of detections 𝑁detect when the signal is included. Our method reports exactly zero detections for 60% of the mock GCs. For the remaining cases, 𝑁null is still generally much smaller than the corresponding 𝑁true and 𝑁detect. Only 11% of cases yield 𝑁null ≥10. Conversely, 16% of cases have 𝑁detect < 10. Based on this, we recommend excluding detections below a threshold 𝑁detect ≈10 to avoid most false positives while not discarding too many true stream members. 3.4. Dependence on isochrone models It should be noted that the H25 catalog uses the same MIST isochrone model adopted in this work to generate the the mock photometry. Before applying our method to real observational data, it is important to assess how sensitive the detection quality is to the choice of isochrone model. To quantify this, we perform a test where we switch to the PARSEC isochrones (A. Bressan et al. 2012) for constructing the simulated stream PDF. Specifically, we set [M/H] in PARSEC equal to the StarStream: Automatic detection algorithm for stellar streams 9 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 1 10 102 103 104 Count Figure 4. Number of detected stars in the null test 𝑁null (red open circles) as a function of the progenitor GC’s absolute Galactic lati- tude |𝑏|. For comparison, we also show the true number of stream stars 𝑁true (blue solid diamonds) and the number of detections 𝑁detect (black solid circles) without removing the signal. Similarly to Fig. 3, we show the the median values as solid lines, while the shaded ranges represent the interquartile ranges. metallicity [Fe/H] of the mock GC and fix the age to 10 Gyr. These settings are intentionally approximate to mimic real observational conditions, where GC ages are often uncertain. In the left column of Fig. 5, we show the detection ratio, purity, and completeness as functions of |𝑏| when switch- ing to the PARSEC isochrones. Despite using a different isochrone model with approximate parameters, these met- rics show no statistically significant differences compared to those in Fig.3. This is because our KDE-based approach pro- vides sufficient flexibility to accommodate variations between isochrone models. Furthermore, the variance introduced by changing isochrone models is negligible compared to the typ- ical color dispersion from observational errors and magnitude dispersion due to the spread in heliocentric distances. Thus, our method is robust to the choice of isochrone model. 3.5. Dependence on Galactic potential models Our simulated streams are generated in the same Galactic potential model as the final snapshot corresponding to our se- lected catalog from H25. When applied to real data, however, we do not know the Galactic potential exactly. Most existing Milky Way potential models predict a total mass differing by ≲10% (see the review by J. A. Hunt & E. Vasiliev 2025). To quantify the impact of using an inaccurate potential, we test three alternative models. The first two are based on the same BFE framework as H25, but with all expansion coefficients scaled by ±20%. This results in a proportional change in the enclosed mass at all radii. We refer to these cases as 0.8𝑀Gal and 1.2𝑀Gal. The third model is MilkyWayPotential2022 from gala (A. M. Price-Whelan 2017; A. Price-Whelan et al. 2024), which has been validated against MW mass measure- ments out to ∼150 kpc (J. A. Hunt & E. Vasiliev 2025). The right column of Fig. 5 shows the detection ratio, pu- rity, and completeness as functions of |𝑏| for these models. In almost all cases, these detection metrics decrease only slightly. An exception is MilkyWayPotential2022, which shows a ≲10% increase in purity at |𝑏| > 60◦. This is likely a stochastic effect given the small number of streams at such high latitudes. All metrics remain within the interquar- tile range (which is even narrower than the 1𝜎range) of the original values, indicating that even a 20% variation in the potential has little effect on detection quality. Since recent measurements of the MW potential have uncertainties of only ≈10% (e.g., R. Ibata et al. 2024), these tests are conservative. The MilkyWayPotential2022 model performs almost identically to the original BFE model. This is encourag- ing as the former is designed to match the real MW instead of the simulated galaxy. While both models have similar halo structures beyond 10 kpc, their disk components differ significantly, with enclosed masses near 1 kpc differing by ≈30%. This suggests that the performance of our method is insensitive to the exact choice of Galactic potential model within 10◦around the GC. We emphasize, however, that our method has not been tested beyond 10◦, where the Galactic potential becomes more important. Even within 10◦, the slight decrease in purity for the alternative models is largely driven by stars located be- tween 5◦and 10◦, indicating that the influence of the potential grows with distance from the progenitor. Moreover, the LMC may introduce a larger perturbation to the MW’s potential (see review by E. Vasiliev 2023) than that in the simulated galaxy. Since the H25 catalog does not include a galaxy with a realistic LMC analog, we are unable to quantify the effect of LMC in this work. 3.6. Dependence on stream generation algorithms The mock stream catalog also uses the same Y. Chen et al. (2025b) particle spray algorithm as adopted in this work. To further validate our method, we conduct an additional test by generating a mock stream using an N-body simulation, which is generally considered more accurate than most particle spray methods. Specifically, we initialize the simulation using the same initial conditions as the example stream in §3.2 and Fig. 1. Following Y. Chen et al. (2025b), we model the progenitor GC using the I. R. King (1966) model with 𝑊= 8, a typical value for Galactic GCs. We set the particle mass to 10 M⊙, the softening length to 1 pc, and the simulation time step to 2−13 kpc km−1 s ≈0.1 Myr. We then backtrack 10 Chen et al. 10 2 10 1 1 10 Ndetect/Ntrue MIST (base) PARSEC 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio 10 2 10 1 1 10 Ndetect/Ntrue 1.0MGal (base) 0.8MGal 1.2MGal MWPotential22 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 5. Same as Fig. 3, but with the alternative isochrone model PARSEC (left column) and three alternative Galactic potential models (right column): 1) the base potential model scaled down by 20% (dotted curves), 2) the base potential model scaled up by 20% (dashed curves), and 3) the MilkyWayPotential2022 model from gala. We show the interquartile ranges for the base case for comparison. the GC orbit for 1 Gyr in the same static Galactic potential described in §3.1, and run the N-body simulation forward to the present day using the fast-multipole gravity solver falcON (W. Dehnen 2000, 2002). Although the N-body simulation includes only collisionless dynamics and omits close stellar encounters, it is sufficiently distinct from the particle spray model to serve our purpose of validating the detection method with an alternative algorithm. Since the particles in the simulation are different from in- dividual stars, we randomly select a subsample of escaped particles and re-sample their masses using the stellar mass distribution of stars with 𝐺< 20 from H25. We then assign 𝐺magnitudes and BP−RP colors to these particles using the MIST isochrone. This ensures that the new mock stream from the N-body simulation has the same number of stars and mass distribution as its counterpart in H25, which is important for a fair comparison of detection quality metrics. Finally, we add background stars to the new mock stream following §3.1 and apply our stream detection method. Among the 194 mock stream stars, we detect 230 members, with 133 correct detections. These numbers are only about 5 −10% lower than those reported in §3.2. This slight de- crease is expected, as we attempt to recover stream members generated using a different algorithm. However, since our method evaluates membership probability by comparing the stream probability density to the background, the detection performance is not strongly sensitive to the specific stream generation algorithm as long as the S/N, defined as the ratio between the densities of actual stream members and back- ground stars in the multi-dimensional observable space, is greater than 1. 3.7. Influence of dust extinction When applying StarStream to real data, it is important to ac- count for dust extinction, particularly near the Galactic plane where the color excess reaches 𝐸(𝐵−𝑉) > 1. Extinction reduces the number of observable stream stars above the de- tection limit by up to a factor of 10 near the mid-plane, while also shifting the stream to redder colors, where the back- ground density is higher (see the upper right panel of Fig. 1). In this section, we investigate the influence of these effects by incorporating dust extinction into our mock dataset. We use the Python package dustmaps (G. M. Green 2018) to compute the extinction 𝐴𝑉of each stream star in H25, based on the map of D. J. Schlegel et al. (1998, hereafter SFD). This map is recalibrated by E. F. Schlafly & D. P. Finkbeiner (2011) with 𝑅𝑉= 3.1. The 𝐴𝑉values are then passed to StarStream: Automatic detection algorithm for stellar streams 11 the MIST bolometric correction interpolation table6 to obtain the 𝐺-band extinction and BP −RP color excess. Since the table only covers 𝐴𝑉= 0 −6, we remove stars with 𝐴𝑉> 6, as they are also likely too faint to be observable. Note that the SFD map provides extinction along the full line of sight from the solar system to infinity. This overestimates the true extinction, since streams are located at finite distances. While the overestimation is probably modest, this test should be regarded as an extreme case that assumes maximum possible extinction. The detection quality would decline significantly if we di- rectly apply the same method to the new dataset, since the simulated isochrone no longer aligns with the reddened dis- tribution of stream stars in the color–magnitude space. Fortu- nately, in practice we have access to extinction values for most MW GCs from catalogs such as W. E. Harris (1996), allow- ing us to account for realistic extinction when simulating the stream. This is equivalent to replacing the original isochrone with a reddened one, where the extinction and color excess are calculated from the progenitor’s 𝐴𝑉. Here, we take each GC’s 𝐴𝑉directly from the SFD map at its location. As be- fore, we exclude streams whose progenitor GCs have 𝐴𝑉> 6. Since extinction makes streams fainter, the number of streams with at least 10 stars also decreases, leaving 123 valid streams out of the original 158. Stars from the same stream may have different 𝐴𝑉values in the mock dataset, since extinction is not constant within the search radius. However, we redden the simulated isochrone using only a single 𝐴𝑉value at the center of the stream. This reflects the practical difficulty of obtaining precise extinc- tion for every individual stream star. We emphasize that our tests are designed to reproduce the actual detection quality of StarStream when applied to real data. Thus, it is important to avoid any over-idealization. In Fig. 6, we show the detection ratio, completeness, and purity after applying StarStream to the new mock dataset that includes extinction. As expected, both completeness and pu- rity decrease substantially near the Galactic plane (|𝑏| < 30◦). Close to the mid-plane, purity falls from ∼90% to below 10%, and completeness drops to a similar level. This decline is pri- marily due to the high background density near the reddened isochrone and the lower number of stream members that re- main observable. We find that completeness can be improved by lowering the probability threshold to 𝑃th < 0.1, whereas varying 𝑃th between 0.01 −0.99 does not significantly im- prove purity. In contrast, the high-latitude regions are only slightly affected. For streams at |𝑏| > 30◦, the median com- pleteness and purity decrease by only ∼10% to 62% and 6 https://waps.cfa.harvard.edu/MIST/model grids.html 10 2 10 1 1 10 Ndetect/Ntrue No extinction With extinction 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 6. Same as Fig. 3, but including dust extinction from the SFD map. We also show the original cases (no extinction) as thin curves for comparison. 67%, respectively. The detection ratio in this latitude range is nearly unchanged. We also perform the null test described in §3.3 on the new dataset, and show it in Fig. 7. In this case, only 13% of detections yield 𝑁null = 0. This reduction is mainly due to the higher false detection rate at low latitudes, where 𝑁null is nearly the same as 𝑁detect for |𝑏| < 15◦. However, the high- latitude region again remains almost unaffected. At |𝑏| > 30◦, false and true detections can still be cleanly separated using the same threshold of 𝑁detect ≈10 recommended in §3.3. Therefore, although both completeness and purity of StarStream decrease at low latitudes when accounting for ex- treme extinction, the method still achieves high values ≈65% at |𝑏| > 30◦, where extinction is less significant. Since the extinction adopted in this section overestimates the true val- ues, these results represent lower limits of the actual detec- tion quality. As discussed later in §4.1, future spectroscopic and deep photometric surveys are necessary to reveal low-𝑏 streams with high extinction. 12 Chen et al. 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 1 10 102 103 104 Count Figure 7. Same as Fig. 4, but including dust extinction from the SFD map. 4. Discussion 4.1. Application to other surveys Although this work is framed with Gaia DR3, it can be straightforwardly extended to other surveys. In this subsec- tion, we discuss the importance of spectroscopic surveys and deep photometric surveys in further enhancing detection qual- ity. Our method operates in the six-dimensional space of posi- tions, proper motions, color, and magnitude. We find that the median detection ratio drops significantly to 18% when color and magnitude are excluded. In this case, the median purity and completeness also decrease to 50% and 6%, respectively. If proper motions are excluded, we cannot even detect any stars for more than 80% of streams. These results demon- strate that six-dimensional information of positions, proper motions, colors, and magnitudes is important for recovering most streams in Gaia DR3. For streams with extremely low density, strong background contamination, or high extinction we may need even more independent observables. This high- lights the importance of spectroscopic surveys, such as the Apache Point Observatory Galactic Evolution Experiment (APOGEE, S. R. Majewski et al. 2017), the Southern Stellar Stream Spectroscopic Survey (S5, T. S. Li et al. 2019), and the Dark Energy Spectroscopic Instrument (DESI, DESI Collab- oration et al. 2022) Milky Way Survey (MWS, A. P. Cooper et al. 2023), which provide radial velocities and metallicities and may help reveal very faint streams. Our example stream in Fig. 1 shows that most correctly identified members lie near the main sequence turnoff, where the S/N is high. In Fig. 8, we present the distribution of S/N in the color–magnitude diagram, where S/N is defined as the ratio between the densities of actual stream members and background stars. We estimate both densities in the six- dimensional space using KDE approaches similar to those described in §2.1 and §2.2. However, we multiply the stream kernel widths in §2.1 by a factor of 3, since the number of member stars is smaller than the number of simulated tracer particles. It is remarkable that a significant number of stars have S/N values larger than unity in six-dimensional space, despite the number of stream members being orders of magnitude smaller than the background contaminants. The region with S/N > 1 coincides with the area of high purity and completeness near the main sequence turnoff at 𝐺≈19. Brighter stars in the RGB suffer from higher background contamination, resulting in lower S/N. Stars in the horizontal branch show relatively high S/N because they are bluer than most background stars; however, the horizontal branch contributes only a few mem- bers to the stream. The majority of stars lie below the main sequence turnoff and have lower S/N due to large observa- tional uncertainties, as described by Eqs. (3) and (4). This issue is not unique to the example stream. Since the typical main sequence turnoff is at absolute magnitude 𝑀𝐺= 3 −4, corresponding to 𝐺= 20 at a heliocentric distance ≈20 kpc, Gaia can barely detect stars fainter than the turnoff for most MW streams. This highlights the importance of deep pho- tometric surveys such as LSST and Roman, which are ex- pected to detect significantly more stream members thanks to their much deeper detection limits (S. Pearson et al. 2024; C. Holm-Hansen et al. 2025). Moreover, Roman can also provide high-quality astrometry measurements for faint stars, significantly improving S/N below the main sequence turnoff. 4.2. Improvements to existing methods The new detection method constructs stream KDE via a particle spray algorithm, which only requires the progenitor GC’s mass, position, and velocity as input. We then assign colors and magnitudes to the tracers based on an isochrone model, which depends only on the progenitor’s metallicity and age. In practice, most of these input parameters are available from existing catalogs (e.g., M. Hilker et al. 2019). The only exception is age, which, however, has a weak impact on our results (see §3.4). Therefore, our method avoids making unnecessary and unrealistic assumptions about the stream’s morphology and kinematics. Additionally, we also construct the background KDE directly from a subsample of observed stars. As a result, both the stream and background models have no free parameters. The mixture model that combines these two components includes only one free parameter: the stream fraction 𝑓s. This minimal parameterization offers a direct ad- StarStream: Automatic detection algorithm for stellar streams 13 0.4 0.6 0.8 1.0 BP RP 15 16 17 18 19 20 G 10 2 10 1 1 10 102 Figure 8. Distribution of S/N in the color–magnitude diagram for actual members of the mock stream in Fig. 1. The signal and noise densities are estimated using a similar KDE method in §2.1 and §2.2, respectively. We multiply the stream kernel width by a factor of 3 to account for the lower number density of stars that do not sample the parameter space as well as simulated tracers. The same 104 background stars as in Fig. 1 are also shown for reference. vantage in computational efficiency. On average, our Python implementation takes ∼10 minutes to detect a single stream when running on 32 cores of an Intel Haswell CPU. The total computation time to analyze all mock streams is ap- proximately 1000 CPU hours of computation time, which is orders of magnitude faster than typical mixture models. For instance, STREAMFINDER requires millions of CPU hours on a similar dataset (R. Ibata et al. 2021). Although the particle spray method used here takes no free parameters, our stream model is more accurate than simply assuming the stream is an elongated structure along its orbital track. A representative example of the latter is STREAMFINDER, which detects clustering of stellar orbits within a Gaussian tube. Using STREAMFINDER, R. Ibata et al. (2024) successfully detected 16 streams originating from known GCs in Gaia DR3. To compare performance, we apply our method to the same 16 GCs in Gaia DR3, us- ing GC properties from the M. Hilker et al. (2019) catalog and the Galactic potential model MilkyWayPotential2022. Excluding M68, 𝜔Cen, and M5, whose streams in R. Ibata et al. (2024) are not connected to the progenitors, all other streams extend at least 5◦within our 1 Gyr integration time. In this region, our method detects on average 5 times as many stream members. Even for the Pal 5 stream, which is widely thought to be one of the most complete, we detect 131 members compared to 76 reported in R. Ibata et al. (2024), which a meaningful improvement. Notably, the actual Pal 5 stream extends beyond 5◦because it formed over approxi- mately 6 Gyr (Y. Chen et al. 2025a), much longer than our integration time. For the remaining streams that extend far- ther within our 1 Gyr integration time, we detect on average 4 times as many member stars inside 10◦. StarStream is not the first physics-motivated attempt to search for GC streams. For example, C. J. Grillmair (2022); Y. Yang et al. (2023); C. J. Grillmair (2025) have successfully identified tidal features around individual GCs by compar- ing observations with simulated streams. Compared to these works, our approach automates this technique using KDE. In addition, we provide quantitative metrics for detection qual- ity, demonstrating the broader potential of this method for identifying more GC streams. 5. Summary In this work, we present StarStream, an automatic detec- tion algorithm for stellar streams based on a physics-inspired stream model. We construct a mixture model in the multidi- mensional space of observables, including positions, veloc- ities, colors, magnitudes, etc. The model consists of back- ground and stream components, whose PDFs are represented using KDE. For the background, we build the KDE from a subsample of observed stars; for the stream, we construct the KDE from tracers generated using the particle spray algo- rithm of Y. Chen et al. (2025b). We illustrate the method using an example stream in Fig. 1. We quantitatively assess the detection quality of our method around existing GCs using the mock stream catalog from H25, which is tailored to Gaia DR3 and includes six observables: sky coordinates (𝜙1, 𝜙2), proper motions (𝜇𝜙1, 𝜇𝜙2), color (BP −RP), and magnitude (𝐺). Our mock dataset incorpo- rates magnitude-dependent uncertainties for each observable, and we include all Gaia DR3 stars as the background pop- ulation. The method achieves both purity and completeness around 65% even with extreme dust extinction (> 70% with- out extinction). The detection ratio is near unity for high- latitude streams (Fig. 3 and Fig. 6). For low-latitude streams, however, high background contamination and extinction can significantly reduce both purity and completeness to < 10%. Next, we perform a series of tests to examine the robust- ness of the method. We begin with a null test to evaluate the frequency of false positive detections. After removing the signal (i.e., mock stream stars) from the dataset, our method correctly reports 𝑁null = 0 for 13% of all streams with ex- treme extinction (60% without extinction), while a threshold 𝑁detect ≈10 cleanly separates true and false detections for high-latitude streams (Fig. 4 and Fig. 7). We further test the method using a different isochrone model and different Galactic potential models (Fig. 5), and a different stream gen- eration algorithm. These alternative configurations do not significantly weaken the detection quality. Being robust to alternate isochrone models is important in application to real 14 Chen et al. data as the predicted isochrone can vary significantly among different models. We find that both purity and completeness drop signif- icantly when proper motions or color and magnitude are excluded from the input dataset. This emphasizes the im- portance of incorporating multiple independent observables for stream detection. With the full six-dimensional input, however, the S/N can exceed unity even when the number of stream members is orders of magnitude smaller than the background contaminants (Fig. 8). Stars with high S/N are primarily located near the main sequence turnoff, coinciding with those that exhibit high purity and completeness. In con- trast, fainter stars near 𝐺= 20 and brighter stars on the RGB both have lower S/N due to large observational uncertainties or strong background contamination, respectively. Finally, we compare our new method to existing methods such as STREAMFINDER. Our method is several orders of mag- nitude more computationally efficient, primarily because the physics-inspired stream model requires no free parameters. This greatly accelerates the model optimization process. At the same time, for streams associated with existing GCs in R. Ibata et al. (2024), our method detects on average 5 times as many member stars within the 5◦circle. It is also worth noting that the method may uncover additional streams when applied to the full set of GCs in Gaia DR3. We have published the package StarStream on GitHub via https://github.com/ybillchen/StarStream, where we also pro- vide example Python notebooks for running the code. The code requires the mass and six-dimensional phase-space co- ordinates of the progenitor to generate stream tracers using the particle spray algorithm. It also requires an isochrone that is fit to the progenitor to compute mock photometry for the tracers. The input dataset should be a multi-dimensional array of observables, coupled with another array of observational uncertainties of the same shape. Users can also specify the threshold probability 𝑃th, tracer particle ejection rate, KDE kernel widths, interpolation grid spacings for the background PDF, and the Galactic potential, if different values from the default of this paper are preferred. Acknowledgments We thank Monica Valluri, Eric Bell, Katya Gozman, and Ja- cob Nibauer for insightful discussions. YC, OYG, and CHH were supported in part by National Aeronautics and Space Administration through contract NAS5-26555 for Space Tele- scope Science Institute programs HST-AR-16614 and JWST- GO-03433. This research benefited from the Gravity in the Local Group conference hosted by the McWilliam’s Center for Cosmology and Astrophysics, Carnegie Mellon Univer- sity. Software: agama (E. Vasiliev 2019), numpy (C. R. Harris et al. 2020), matplotlib (J. D. Hunter 2007), scipy (P. Vir- tanen et al. 2020), astropy ( The Astropy Collaboration et al. 2018), gala (A. M. Price-Whelan 2017; A. Price-Whelan et al. 2024), pandas ( The pandas development team 2024), dustmaps (G. M. Green 2018), falcON (W. Dehnen 2000, 2002), gaiaunlimited (T. Cantat-Gaudin et al. 2023) Appendix A. Kernel density estimation with uncertainties Given a sample of 𝑁points {𝑥𝑗}, we can estimate the probability density function 𝑝(𝑥) at any point of interest using Gaussian KDE, 𝑝(𝑥) ≈1 𝑁 𝑁 ∑︁ 𝑗=1 𝑝KDE(𝑥|𝑥𝑗, 𝜎) ≡1 𝑁 𝑁 ∑︁ 𝑗=1 1 √ 2𝜋𝜎 exp " −(𝑥−𝑥𝑗)2 2𝜎2 # . However, if the location of the point of interest is uncertain and follows a distribution function 𝑓(𝑥), the expected probability density 𝑝is the average of 𝑝(𝑥) weighted by 𝑓(𝑥), 𝑝= ∫∞ −∞ 𝑝(𝑥) 𝑓(𝑥)𝑑𝑥≈ 𝑁 ∑︁ 𝑗=1 ∫∞ −∞ 𝑝KDE(𝑥|𝑥𝑗, 𝜎) 𝑓(𝑥)𝑑𝑥. If we assume 𝑓(𝑥) is a Gaussian function centered at 𝑥0 with uncertainty 𝜎0, the last integral is the convolution of two Gaussian distributions N (𝑥𝑖, 𝜎2 𝑖) ★N (0, 𝜎2 0) with the independent variable 𝑥0. This convolution is simply another Gaussian distribution N (𝑥𝑖, 𝜎′2) ≡N (𝑥𝑖, 𝜎2 + 𝜎2 0). We can thus obtain the expected probability density as 𝑝≈1 𝑁 𝑁 ∑︁ 𝑗=1 𝑝KDE(𝑥|𝑥𝑗, 𝜎′). StarStream: Automatic detection algorithm for stellar streams 15 Therefore, the expected probability density of a point with Gaussian uncertainty 𝜎0 equals the standard Gaussian KDE at the same location with bandwidth replaced by 𝜎′2 ≡𝜎2 + 𝜎2 0 for every sample point {𝑥𝑖}. B. Astrometry uncertainty propagation The uncertainties of astrometry measurements in sky coordinate system (𝛼, 𝛿, 𝜇𝛼, 𝜇𝛿, 𝜛, 𝑣𝑟) are commonly quantified as a 6 × 6 covariance matrix C = ©­­­­­ « 𝑉𝛼 𝐶𝛼𝛿 · · · 𝐶𝛼𝑣𝑟 𝐶𝛼𝛿 𝑉𝛿 · · · 𝐶𝛿𝑣𝑟 ... ... ... ... 𝐶𝛼𝑣𝑟𝐶𝛿𝑣𝑟· · · 𝑉𝑣𝑟 ª®®®®® ¬ where 𝑉𝑖= 𝜎2 𝑖is the variance of quantity 𝑖and 𝐶𝑖𝑗= 𝜎𝑖𝜎𝑗𝜌𝑖𝑗is the covariance between quantities 𝑖and 𝑗, in which 𝜌𝑖𝑗= 𝜌𝑗𝑖is the correlation coefficient. To obtain the covariance matrix in the rotated frame (𝜙1, 𝜙2, 𝜇𝜙1, 𝜇𝜙2, 𝜛′, 𝑣′ 𝑟), C′ = ©­­­­­ « 𝑉𝜙1 𝐶𝜙1 𝜙2 · · · 𝐶𝜙1𝑣′𝑟 𝐶𝜙1 𝜙2 𝑉𝜙2 · · · 𝐶𝜙2𝑣′𝑟 ... ... ... ... 𝐶𝜙1𝑣′𝑟 𝐶𝜙2𝑣′𝑟 · · · 𝑉𝑣′𝑟 ª®®®®® ¬ We use linear uncertainty propagation to approximate C′ C′ ≈JCJT (B1) where J is the Jacobian matrix J ≡𝜕(𝜙1, 𝜙2, 𝜇𝜙1, 𝜇𝜙2, 𝜛′, 𝑣′ 𝑟) 𝜕(𝛼, 𝛿, 𝜇𝛼, 𝜇𝛿, 𝜛, 𝑣𝑟) . Note that (𝜛′, 𝑣′ 𝑟) = (𝜛, 𝑣𝑟) in coordinate rotation. Also, other new coordinates do not explicitly depend on parallax and radial velocity. Therefore, we directly know 𝜕𝜛′/𝜕𝑖= 𝛿𝑖𝜛, 𝜕𝑣′ 𝑟/𝜕𝑖= 𝛿𝑖𝑣𝑟, 𝜕𝑖/𝜕𝜛= 𝛿𝑖𝜛′, and 𝜕𝑖/𝜕𝑣𝑟= 𝛿𝑖𝑣′𝑟, where the Kronecker symbol 𝛿𝑖𝑗= 1 only if 𝑖= 𝑗, otherwise 0. This greatly simplifies our calculation as we only need to take into account the rotation of angular coordinates (𝛼,𝛿,𝜇𝛼,𝜇𝛿) and (𝜙1,𝜙2,𝜇𝜙1,𝜇𝜙2) independent from 𝜛and 𝑣𝑟. Rotation in the spherical coordinate system is nonlinear, which makes it challenging to derive J analytically. However, by first transforming the sky coordinates to the Cartesian system (𝑥, 𝑦, 𝑧, 𝑣𝑥, 𝑣𝑦, 𝑣𝑧), we can more easily deal with rotation as it becomes a linear coordinate transformation, which can be described by the 6 × 6 rotational matrix, R ≡ R3×3 0 0 R3×3 ! Here, R3×3 ∈SO(3) is the standard 3D rotational matrix. After the rotation, we can transform the rotated Cartesian frame back to the great circle frame. The combined Jacobian matrix thus equals the product of the Jacobian matrices for the three transformations. Since rotation is a linear translation, the Jacobian matrix of rotation is simply R itself. We derive the two remaining Jacobian matrices as follows. We consider the rotation of angular coordinates in the unit sphere. Such a simplification does not affect our calculation as the parallax and radial velocity are independent coordinates. The transformation from sky coordinates to the cartesian system is ©­­ « 𝑥 𝑦 𝑧 ª®® ¬ = ©­­ « cos 𝛼cos 𝛿 sin 𝛼cos 𝛿 sin 𝛿 ª®® ¬ and ©­­ « 𝑣𝑥 𝑣𝑦 𝑣𝑧 ª®® ¬ = ©­­ « −𝜇𝛼sin 𝛼−𝜇𝛿cos 𝛼sin 𝛿 𝜇𝛼cos 𝛼−𝜇𝛿sin 𝛼sin 𝛿 𝜇𝛿cos 𝛿 ª®® ¬ . 16 Chen et al. Recall that we define 𝜇𝛼≡¤𝛼cos 𝛿. Therefore, the corresponding 6 × 4 Jacobian matrix is J1 ≡𝜕(𝑥, 𝑦, 𝑧, 𝑣𝑥, 𝑣𝑦, 𝑣𝑧) 𝜕(𝛼, 𝛿, 𝜇𝛼, 𝜇𝛿) = ©­­­­­­­­­ « −sin 𝛼cos 𝛿 −cos 𝛼sin 𝛿 0 0 cos 𝛼cos 𝛿 −sin 𝛼sin 𝛿 0 0 0 cos 𝛿 0 0 −𝜇𝛼cos 𝛼+ 𝜇𝛿sin 𝛼sin 𝛿−𝜇𝛿cos 𝛼cos 𝛿−sin 𝛼−cos 𝛼sin 𝛿 −𝜇𝛼sin 𝛼−𝜇𝛿cos 𝛼sin 𝛿−𝜇𝛿sin 𝛼cos 𝛿 cos 𝛼 −sin 𝛼sin 𝛿 0 −𝜇𝛿sin 𝛿 0 cos 𝛿 ª®®®®®®®®® ¬ . We can also calculate the inverse transformation to the great circle frame, 𝜙1 𝜙2 ! = ©­ « arctan 𝑦′ 𝑥′ arcsin 𝑧′ ª® ¬ and 𝜇𝜙1 𝜇𝜙2 ! = 1 √︁ 𝑥′2 + 𝑦′2 −𝑣′ 𝑥𝑦′ + 𝑣′ 𝑦𝑥′ 𝑣′ 𝑧 ! . The 4 × 6 Jacobian matrix of this transformation is J2 ≡ 𝜕(𝜙1, 𝜙2, 𝜇𝜙1, 𝜇𝜙2) 𝜕(𝑥′, 𝑦′, 𝑧′, 𝑣′𝑥, 𝑣′𝑦, 𝑣′𝑧) = ©­­­­­­­­­­ « −sin 𝜙1 cos 𝜙2 cos 𝜙1 cos 𝜙2 0 0 0 0 0 0 1 cos 𝜙2 0 0 0 −𝜇𝜙2 sin 𝜙1 sin 𝜙2 cos 𝜙2 𝜇𝜙2 cos 𝜙1 sin 𝜙2 cos 𝜙2 0 −sin 𝜙1 cos 𝜙1 0 0 0 𝜇𝜙2 sin 𝜙2 cos2 𝜙2 0 0 1 cos 𝜙2 ª®®®®®®®®®® ¬ For clarity, we already write J2 in terms of the great circle coordinates. The combined Jacobian matrix is given by J = J2RJ1 0 0 I2×2 ! . The identity matrix in the lower right accounts for the transformation of (𝜛, 𝑣𝑟). We have verified that J2RJ1 also becomes the identity matrix in the case of no rotation, R = I6×6. Finally, we can insert J to Eq. (B1) to obtain the covariance matrix in the great circle frame. References Amorisco, N. C. 2015, MNRAS, 450, 575, doi: 10.1093/mnras/stv648 Banik, N., Bertone, G., Bovy, J., & Bozorgnia, N. 2018, JCAP, 2018, 061, doi: 10.1088/1475-7516/2018/07/061 Bernard, E. J., Ferguson, A. M. N., Schlafly, E. F., et al. 2014, MNRAS, 443, L84, doi: 10.1093/mnrasl/slu089 Bonaca, A., Geha, M., K¨upper, A. H. W., et al. 2014, ApJ, 795, 94, doi: 10.1088/0004-637X/795/1/94 Bonaca, A., & Price-Whelan, A. M. 2025, New Astronomy Reviews, 100, 101713, doi: 10.1016/j.newar.2024.101713 Bovy, J., Hogg, D. W., & Roweis, S. T. 2011, Ann. Appl. Stat., 5, doi: 10.1214/10-AOAS439 Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127, doi: 10.1111/j.1365-2966.2012.21948.x Cantat-Gaudin, T., Fouesneau, M., Rix, H.-W., et al. 2023, A&A, 669, A55, doi: 10.1051/0004-6361/202244784 Carlberg, R. G., Grillmair, C. J., & Hetherington, N. 2012, ApJ, 760, 75, doi: 10.1088/0004-637X/760/1/75 Chen, Y. 2024, ybillchen/particle spray: v0.1.0, Zenodo, doi: 10.5281/zenodo.13923250 Chen, Y., & Gnedin, O. Y. 2023, MNRAS, 522, 5638, doi: 10.1093/mnras/stad1328 Chen, Y., & Gnedin, O. Y. 2024, MNRAS, 527, 3692, doi: 10.1093/mnras/stad3345 StarStream: Automatic detection algorithm for stellar streams 17 Chen, Y., Li, H., & Gnedin, O. Y. 2025a, ApJL, 980, L18, doi: 10.3847/2041-8213/adaf93 Chen, Y., Valluri, M., Gnedin, O. Y., & Ash, N. 2025b, ApJS, 276, 32, doi: 10.3847/1538-4365/ad9904 Cooper, A. P., Koposov, S. E., Allende Prieto, C., et al. 2023, ApJ, 947, 37, doi: 10.3847/1538-4357/acb3c0 Dehnen, W. 2000, ApJL, 536, L39, doi: 10.1086/312724 Dehnen, W. 2002, JCoPh, 179, 27, doi: 10.1006/jcph.2002.7026 DESI Collaboration, Abareshi, B., Aguilar, J., et al. 2022, AJ, 164, 207, doi: 10.3847/1538-3881/ac882b Dotter, A. 2016, ApJS, 222, 8, doi: 10.3847/0067-0049/222/1/8 Erkal, D., Belokurov, V., Bovy, J., & Sanders, J. L. 2016, MNRAS, 463, 102, doi: 10.1093/mnras/stw1957 Erkal, D., Koposov, S. E., & Belokurov, V. 2017, MNRAS, 470, 60, doi: 10.1093/mnras/stx1208 Fardal, M. A., Huang, S., & Weinberg, M. D. 2015, MNRAS, 452, 301, doi: 10.1093/mnras/stv1198 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2, doi: 10.1051/0004-6361/201629512 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 650, C3, doi: 10.1051/0004-6361/202039657e Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1, doi: 10.1051/0004-6361/202243940 Gieles, M., Erkal, D., Antonini, F., Balbinot, E., & Pe˜narrubia, J. 2021, Nat Astron, 5, 957, doi: 10.1038/s41550-021-01392-2 Green, G. M. 2018, JOSS, 3, 695, doi: 10.21105/joss.00695 Grillmair, C. J. 2009, ApJ, 693, 1118, doi: 10.1088/0004-637X/693/2/1118 Grillmair, C. J. 2022, ApJ, 929, 89, doi: 10.3847/1538-4357/ac5bd7 Grillmair, C. J. 2025, ApJ, 979, 75, doi: 10.3847/1538-4357/ada2ea Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2 Harris, W. E. 1996, AJ, 112, 1487, doi: 10.1086/118116 Hattori, K., Erkal, D., & Sanders, J. L. 2016, MNRAS, 460, 497, doi: 10.1093/mnras/stw1006 Hilker, M., Baumgardt, H., Sollima, A., & Bellini, A. 2019, Proc. IAU, 14, 451, doi: 10.1017/S1743921319006823 Holm-Hansen, C., Chen, Y., & Gnedin, O. Y. 2025, arXiv:2510.09604 [astro-ph], doi: 10.48550/arXiv.2510.09604 Hunt, J. A., & Vasiliev, E. 2025, New Astronomy Reviews, 100, 101721, doi: 10.1016/j.newar.2024.101721 Hunter, J. D. 2007, CSE, 9, 90, doi: 10.1109/MCSE.2007.55 Ibata, R., Malhan, K., Martin, N., et al. 2021, ApJ, 914, 123, doi: 10.3847/1538-4357/abfcc2 Ibata, R., Malhan, K., Tenachi, W., et al. 2024, ApJ, 967, 89, doi: 10.3847/1538-4357/ad382d Ibata, R. A., Gilmore, G., & Irwin, M. J. 1994, Nature, 370, 194, doi: 10.1038/370194a0 King, I. R. 1966, AJ, 71, 64, doi: 10.1086/109857 Kroupa, P. 2001, MNRAS, 322, 231, doi: 10.1046/j.1365-8711.2001.04022.x K¨upper, A. H. W., Lane, R. R., & Heggie, D. C. 2012, MNRAS, 420, 2700, doi: 10.1111/j.1365-2966.2011.20242.x Lane, R. R., K¨upper, A. H. W., & Heggie, D. C. 2012, MNRAS, 423, 2845, doi: 10.1111/j.1365-2966.2012.21093.x Li, T. S., Koposov, S. E., Zucker, D. B., et al. 2019, MNRAS, 490, 3508, doi: 10.1093/mnras/stz2731 LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009, arXiv:0912.0201 [astro-ph], doi: 10.48550/arXiv.0912.0201 Lynden-Bell, D., & Lynden-Bell, R. M. 1995, MNRAS, 275, 429, doi: 10.1093/mnras/275.2.429 Majewski, S. R., Schiavon, R. P., Frinchaboy, P. M., et al. 2017, AJ, 154, 94, doi: 10.3847/1538-3881/aa784d Malhan, K., & Ibata, R. A. 2018, MNRAS, 477, 4063, doi: 10.1093/mnras/sty912 Malhan, K., Ibata, R. A., Carlberg, R. G., Valluri, M., & Freese, K. 2019, ApJ, 881, 106, doi: 10.3847/1538-4357/ab2e07 Mateu, C., Bruzual, G., Aguilar, L., et al. 2011, MNRAS, 415, 214, doi: 10.1111/j.1365-2966.2011.18690.x Meˇstri´c, U., Vanzella, E., Zanella, A., et al. 2022, MNRAS, 516, 3532, doi: 10.1093/mnras/stac2309 Ngan, W. H. W., & Carlberg, R. G. 2014, ApJ, 788, 181, doi: 10.1088/0004-637X/788/2/181 Nibauer, J., Bonaca, A., Lisanti, M., Erkal, D., & Hastings, Z. 2024, ApJ, 969, 55, doi: 10.3847/1538-4357/ad4299 Odenkirchen, M., Grebel, E. K., Rockosi, C. M., et al. 2001, ApJ, 548, L165, doi: 10.1086/319095 Pace, A. B., & Li, T. S. 2019, ApJ, 875, 77, doi: 10.3847/1538-4357/ab0aee Pearson, S., Bonaca, A., Chen, Y., & Gnedin, O. Y. 2024, ApJ, 976, 54, doi: 10.3847/1538-4357/ad8348 Pearson, S., Price-Whelan, A. M., & Johnston, K. V. 2017, Nature Astronomy, 1, 633, doi: 10.1038/s41550-017-0220-3 Price-Whelan, A., Souchereau, H., Wagg, T., et al. 2024, Zenodo, doi: 10.5281/zenodo.593786 Price-Whelan, A. M. 2017, JOSS, 2, 388, doi: 10.21105/joss.00388 Price-Whelan, A. M., & Bonaca, A. 2018, ApJL, 863, L20, doi: 10.3847/2041-8213/aad7b5 Price-Whelan, A. M., Sesar, B., Johnston, K. V., & Rix, H.-W. 2016, ApJ, 824, 104, doi: 10.3847/0004-637X/824/2/104 Riello, M., De Angeli, F., Evans, D. W., et al. 2021, A&A, 649, A3, doi: 10.1051/0004-6361/202039587 Roberts, D., Gieles, M., Erkal, D., & Sanders, J. L. 2025, Monthly Notices of the Royal Astronomical Society, 538, 454, doi: 10.1093/mnras/staf321 18 Chen et al. Rockosi, C. M., Odenkirchen, M., Grebel, E. K., et al. 2002, AJ, 124, 349, doi: 10.1086/340957 Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, doi: 10.1086/305772 Sesar, B., Price-Whelan, A. M., Cohen, J. G., et al. 2016, ApJL, 816, L4, doi: 10.3847/2041-8205/816/1/L4 Shih, D., Buckley, M. R., Necib, L., & Tamanas, J. 2021, MNRAS, 509, 5992, doi: 10.1093/mnras/stab3372 Shipp, N., Drlica-Wagner, A., Balbinot, E., et al. 2018, ApJ, 862, 114, doi: 10.3847/1538-4357/aacdab Spergel, D., Gehrels, N., Baltay, C., et al. 2015, arXiv:1503.03757 [astro-ph]. http://arxiv.org/abs/1503.03757 Tavangar, K., & Price-Whelan, A. M. 2025, arXiv:2502.13236 [astro-ph], doi: 10.48550/arXiv.2502.13236 The Astropy Collaboration, Price-Whelan, A. M., Sip˝ocz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f The pandas development team. 2024, pandas-dev/pandas: Pandas, Zenodo, doi: 10.5281/zenodo.3509134 Valluri, M., Fagrelius, P., Koposov, S. E., et al. 2024, arXiv:2407.06336 [astro-ph]. http://arxiv.org/abs/2407.06336 Varghese, A., Ibata, R., & Lewis, G. F. 2011, MNRAS, 417, 198, doi: 10.1111/j.1365-2966.2011.19097.x Vasiliev, E. 2019, MNRAS, 482, 1525, doi: 10.1093/mnras/sty2672 Vasiliev, E. 2023, Galaxies, 11, 59, doi: 10.3390/galaxies11020059 Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2 Yang, Y., Zhao, J.-K., Tang, X.-Z., Ye, X.-H., & Zhao, G. 2023, ApJ, 953, 130, doi: 10.3847/1538-4357/acdee2
Draft version October 17, 2025 Typeset using LATEX twocolumn style in AASTeX7.0.1 StarStream: Automatic detection algorithm for stellar streams Yingtian Chen ,1 Oleg Y. Gnedin ,1 Adrian M. Price-Whelan ,2 and Colin Holm-Hansen 1 1 48109, USA 2Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA Abstract The Gaia mission has led to the discovery of over 100 stellar streams in the Milky Way, most of which likely originated from globular clusters (GCs). As the upcoming wide-field surveys can potentially continue to increase the number of known streams, there is a growing need to shift focus from manual detection of individual streams to automated detection methods that prioritize both quality and quantity. Traditional techniques rely heavily on the visual expectation that GC streams are dynamically cold and thin. This assumption does not hold for all streams, whose morphologies and kinematics can vary significantly with the progenitor's mass and orbit. As a result, these methods are biased toward a subset of the whole stream population, with often unquantified purity and completeness. In this work, we present StarStream, an automatic stream detection algorithm based on a physics-inspired model rather than visual expectation. Our method provides a more accurate prediction of stream stars in the multi-dimensional space of observables, while using fewer free parameters to account for the diversity of streams. Applied to a mock GC stream catalog tailored for the Gaia DR3 dataset, our algorithm achieves both purity and completeness of at least 65% at Galactic latitudes |b| > 30◦. Keywords: Stellar streams (2166); Globular star clusters (656); Stellar dynamics (1596); Galaxy dynamics (591) 1. Introduction Stellar streams are elongated tidal structures originating from either an existing or fully dissolved progenitor system, such as a globular cluster (GC) or a dwarf galaxy (D. LyndenBell & R. M. Lynden-Bell 1995). Compared to the high background density of Milky Way (MW) field stars, streams have an extremely low signal-to-noise ratio (S/N). As a result, only a few streams were identified prior to the past decade, including the Sagittarius stream by R. A. Ibata et al. (1994) and the Palomar 5 (Pal 5) stream by M. Odenkirchen et al. (2001). These pioneering efforts attempted to increase S/N by selecting a subset of stars that are more likely to belong to streams than to the field, and then visually searching for stream-like structures. This selection was typically performed by applying a matched filter in the color-magnitude diagram, usually a window function centered around the progenitor's isochrone (e.g., C. M. Rockosi et al. 2002; C. J. Grillmair 2009; E. J. Bernard et al. 2014; N. Shipp et al. 2018). The launch of the Gaia mission ( Gaia Collaboration et al. 2016) has revolutionized the discovery of stellar streams by providing an all-sky map of stars in the six-dimensional phase space, particularly offering high-precision proper motions down to G≈20 since Data Release 2 (DR2, Gaia CollabCorresponding author: Yingtian Chen oration et al. 2018). This enables additional matched filters based on astrometric measurements, significantly increasing the number of detected streams (see the review by A. Bonaca & A. M. Price-Whelan 2025). The all-sky coverage and homogeneity of the Gaia data also motivate the development of automatic stream detection methods to replace visual inspection (e.g., C. Mateu et al. 2011; D. Shih et al. 2021). One such method is STREAMFINDER (K. Malhan & R. A. Ibata 2018), which uses a mixture model in the multi-dimensional space of observables to automatically detect clusters of stellar orbits within a Gaussian tube. STREAMFINDER successfully identified 87 thin streams in Gaia Data Release 3 (DR3, Gaia Collaboration et al. 2023), including 28 new discoveries (R. Ibata et al. 2024). As the number of stream detections grows, more evidence shows that streams have density structure such as fans (B. Sesar et al. 2016), gaps (D. Erkal et al. 2016), spurs (A. M. Price-Whelan & A. Bonaca 2018), and cocoons (K. Malhan et al. 2019; M. Valluri et al. 2024). These features are likely produced by perturbations in the host galaxy's potential, including bar rotation (K. Hattori et al. 2016; A. M. Price-Whelan et al. 2016; S. Pearson et al. 2017), disk rotation (J. Nibauer et al. 2024), and close encounters with other objects (R. G. Carlberg et al. 2012; W. H. W. Ngan & R. G. Carlberg 2014; D. Erkal et al. 2016, 2017; N. Banik et al. 2018). On the other hand, recent works proposed that stream density can trace the mass loss history of their progenitors (M. Gieles et al. 2021; Y. Chen et al. 2025a). These breakthroughs 16 Oct 2025 2 Chen et al. emphasize the need to quantify the purity and completeness of stream detection, in order to accurately characterize their density structures. Previous studies have revealed density structures in individual streams using flexible density models (e.g., D. Erkal et al. 2017; K. Tavangar & A. M. Price-Whelan 2025). However, these models involve many free parameters, making them computationally expensive when applied to all-sky data and better suited for precisely characterizing stream membership after discovery. Even the simpler model used in STREAMFINDER requires millions of CPU hours on Gaia Early Data Release 3 ( Gaia Collaboration et al. 2021). This will greatly limit the usage of these methods for next-generation wide-field photometric surveys, such as those conducted by the Vera C. Rubin Observatory (Rubin, LSST Science Collaboration et al. 2009) and the Nancy Grace Roman Space Telescope (Roman, D. Spergel et al. 2015). Furthermore, these models usually represent a stream as a tube surrounding a well-defined track, based on the visual expectation that streams are dynamically cold and thin. However, this assumption is inaccurate, as even GC streams can be dynamically hot or spatially complex depending on the progenitor's mass and orbit (N. C. Amorisco 2015). As a result, such visually inspired models are inefficient at detecting more "irregular" streams. Compared to the visually-inspired models mentioned above, recent theoretical advances in stream formation have enabled physics-inspired models that achieve higher accuracy with fewer free parameters. Specifically, given the host Galactic potential and the progenitor's mass, position, and velocity, particle spray methods (A. Varghese et al. 2011; R. R. Lane et al. 2012; A. H. W. K ̈upper et al. 2012; A. Bonaca et al. 2014; M. A. Fardal et al. 2015; D. Roberts et al. 2025; Y. Chen et al. 2025b) can efficiently generate tracer particles that follow the expected distribution of stream stars in six-dimensional phase space, typically requiring only one or even zero free parameters. In particular, the method of Y. Chen et al. (2025b) is calibrated to match N-body simulations within 10% error for typical GC streams, without introducing any additional parameters. As a result, stream models based on this approach can potentially reduce computational cost by a significant amount, while also being able to detect hotter and wider streams. In this work, we present StarStream, an automatic detection algorithm for stellar streams using a physics-inspired stream model based on Y. Chen et al. (2025b). We employ kernel density estimation (KDE) to construct smooth probability density functions (PDFs) for both the stream and background populations. The algorithm is applied to the mock dataset from C. Holm-Hansen et al. (2025, hereafter H25), tailored for Gaia DR3, to quantify its purity and completeness in detecting streams originating from existing GCs. The paper is structured as follows. In §2, we describe the methodology of StarStream in detail. We then present validation tests on the mock dataset in §3. In §4, we discuss the motivation for applying the method to upcoming surveys (§4.1) and improvements over other algorithms (§4.2). Finally, we summarize our findings in §5. 2. Method We distinguish stream members from the background stars using a mixture model, which is a powerful tool for identifying faint structures such as ultra-faint dwarf galaxies (e.g., A. B. Pace & T. S. Li 2019) and stellar streams (e.g., K. Malhan & R. A. Ibata 2018; K. Tavangar & A. M. Price-Whelan 2025). Specifically, we construct the joint probability density function (PDF) of the stream and background populations as p(x) = fsps(x) + (1 -fs)pbg(x) where xdenotes a point in the multi-dimensional observable space, including positions, velocities, colors, magnitudes, and other properties. Traditional methods define PDFs of the stream ps(x) and the background pbg(x) as parametric functions with several fixed or adjustable parameters. The best-fit values of these parameters, together with the stream fraction fs, are often estimated by maximizing the log-likelihood, ln L ≡ N ∑︁ i=1 ln fsps,i+ (1 -fs)pbg,i (1) where ps,i≡ps(xi) and pbg,i≡pbg(xi) are the probability densities of the i-th star being a member of the stream and background, respectively. Since parameter estimation is exponentially more computationally expensive as the number of adjustable parameters grows, many methods tend to simplify the stream model by assuming it to be a thin tube along a predefined track. Similarly, the background model is often approximated as uniform or only slowly varying across observables. However, recent advances in the theory of GC stream formation now allow for more accurate stream modeling with even fewer parameters (Y. Chen et al. 2025b). Additionally, we can employ a nonparametric KDE to model the nonuniform background without introducing extra model parameters. In this work, we develop a new stream detection method by incorporating these improvements into the mixture model. In the following sections, we detail our approach for accurately estimating ps(x) and pbg(x). 2.1. Stream probability density To approximate the PDF of streams, we first generate simulated streams around progenitor GCs using the particle spray algorithm. Specifically, we use the agama (E. Vasiliev 2019) StarStream: Automatic detection algorithm for stellar streams 3 implementation of the Y. Chen et al. (2025b) algorithm,3 which initializes the positions and velocities of stream tracer particles from a multivariate Gaussian distribution, calibrated using N-body simulations of disrupting GCs. This algorithm accurately reproduces the width and length of simulated streams across a wide range of cluster masses and orbital types. To obtain the probability density in the color-magnitude space, we first assign a stellar mass to each tracer particle by drawing from the P. Kroupa (2001) initial mass function (IMF). Although the mass function (MF) may evolve due to energy equipartition that preferentially ejects lowmass stars, the high-mass end above Gaia's detection limit (≳0.5 M⊙) remains largely consistent with the IMF (see H25). We then compute the colors and magnitudes using the MESA Isochrones and Stellar Tracks (MIST, A. Dotter 2016; U. Meˇstri ́c et al. 2022), taking the progenitor GC's age and metallicity as input. For this study, we adopt Gaia's Gmagnitude and BP -RP color. In §3.4, we also test an alternative isochrone model, PARSEC (A. Bressan et al. 2012), which has negligible effect on the final detection quality. For each stream, we release tracer particles over the last 1 Gyr assuming a uniform ejection rate. The ejection rate can be set time-varying if needed. However, the uniform rate suffices to produce a realistic stream density distribution that is distinguishable from the background, and variations in the ejection rate only slightly affect the density along most streams (see §2.2 in Y. Chen et al. 2025a). We generate 4000 tracer particles per stream, which is sufficient to fully sample the multi-dimensional parameter space. The minimum stellar mass used in sampling the mass function is set to the lowest possible mass of the closest tracer particle that remains above the detection limit. Since the heliocentric distances of stream stars vary along the stream, this minimum mass is a conservative choice to ensure that all regions of the stream are well-sampled above the local detection limit. We use Gaussian KDE to estimate the stream PDF from tracer particles in the M-dimensional parameter space, including positions, velocities, colors, magnitudes, and other properties. By denoting the M-dimensional coordinate of the j-th tracer as xj≡(x1 j, x2 j, · · · , xM j), the probability density at any arbitrary point xis ps(x) ≈1 Ntr Ntr ∑︁ j=1 pKDE(x|xj, σ) ≡1 Ntr Ntr ∑︁ j=1 M Ö k=1 1 √ 2πσk exp " - (xk-xk j)2 2σ2 k # 3 Tutorials for this algorithm are available at https://github.com/ybillchen/ particle spray and are preserved on Zenodo at Y. Chen (2024). where Ntr is the total number of tracers. It is straightforward to verify that integrating ps(x) over the full M-dimensional space yields unity. We define σ≡(σ1, σ2, · · · , σM) as the array of KDE kernel bandwidths, with no correlation between each dimension. In practice, we set σkto 0.1 times the standard deviation of all tracer particles in the k-th dimension when krefers to positions or velocities. For magnitudes, we use σ= 0.1; and for colors, σ= 0.02. We have verified that varying these values by a factor of 0.5-2 has a negligible effect on our results. Note that the KDE approach naturally captures the fact that most stream stars are faint, since we sample stream particle masses from the P. Kroupa (2001) IMF. As a result, more tracer particles are gathered toward the faint end, leading to higher probability densities in that region. Before applying KDE to estimate the probability density, it is helpful to rotate the equatorial coordinate system so that the new latitude of the stream center is zero. This is particularly important for streams at high declination, where the metric tensor deviates significantly from identity. In this work, we always work in a rotated coordinate frame (φ1, φ2), where the progenitor GC is located at (0, 0) and the proper motion is in the positive φ1 direction. In this case, the diagonal elements of the metric tensor g = diag(1, cos2 φ2) only deviate from those of the identity tensor by 3% even with φ2 = 10◦. This coordinate system is similar to the great circle frame commonly used to describe stellar streams, but is less ambiguous when the stream is wide or has strong curvature. In practice, the observables of each star often have significant observational uncertainties, denoted by σ0 ≡ (σ1,0, σ2,0, · · · , σM,0), which may exceed the corresponding KDE kernel widths. As we show in Appendix A, the effective PDF for such a star is the convolution between the original PDF and a Gaussian kernel with standard deviations σ0. This convolution results in a modified KDE evaluated at the same location, with kernel width replaced by σ′2 k ≡σ2 k+ σ2 k,0. Therefore, for a star with M-dimensional coordinates xand uncertainties σ0, the stream probability density becomes ps(x, σ0) ≈1 Ntr Ntr ∑︁ j=1 pKDE(x|xj, σ′) where σ′ incorporates both the KDE kernel width and observational uncertainty. For Gaia data specifically, the uncertainties in positions and magnitudes are almost always smaller than the corresponding KDE bandwidths. For simplicity and computational efficiency, we therefore ignore the uncertainties in these parameters in our subsequent analysis and consider only the uncertainties in proper motions and color. It is worth noting that Gaia astrometric uncertainties have nonzero correlations. These correlations influence the error propagation from the original equatorial frame to the rotated 4 Chen et al. (φ1, φ2) frame. In Appendix B, we explicitly calculate the linear uncertainty propagation associated with this coordinate transformation. However, to simplify our calculations, we do not include these correlations when performing the convolution with the Gaussian kernel, allowing us to treat each dimension independently during KDE evaluation. This simplification has only a minor effect on the inferred probability density, as the correlations are generally weak (r 10◦, where the metric tensor deviates from the identity tensor by more than 3% (see §2.1). Our stream generation method may also deviate from the actual stream track in regions far from the GC if the adopted Galactic potential model is inaccurate. For these reasons, the KDE approach is best suited for relatively small regions, such as a 10◦cone around the GC. Nevertheless, this region is still sufficiently large to enclose the half-number radius for 2/3 of the simulated streams. Finally, we perform linear interpolation on the grids and compute the product of the independent PDFs to estimate the background probability density pbg at any point of interest. Unlike our estimation of the stream probability density, where the simulated stream has no observational uncertainty, the real Gaia data already include measurement uncertainties. As a result, the background PDFs obtained above are already the convolution of the original PDFs with Gaussian kernels characterizing uncertainties. Therefore, we do not need to apply the convolution again when evaluating the background PDF at a given point. This simplification is particularly helpful, as performing convolution would be highly inefficient within the grid interpolation framework. An alternate method is first deconvolving the real Gaia data to obtain the actual underlying PDF of the background, and then using the same approach as our stream model to compute the PDF. The first step can be achieved using techniques such as "extreme deconvolution" (J. Bovy et al. 2011). Although this method provides a coherent approach to obtain PDFs for both the stream and the background, it is more computationally expensive by ∼1000 times compared to grid interpolation as this alternate method requires performing the full KDE evaluation over background stars. Although we demonstrate our method using Gaia data as an example, our method can be readily adapted to other datasets. For instance, metallicity and radial velocity can be included as additional observables when dealing with spectroscopic surveys. Since we perform grid fitting separately for most dimensions, adding extra observables only increases the computational cost linearly. StarStream: Automatic detection algorithm for stellar streams 5 2.3. Stream detection We optimize our mixture model for the stream and background populations by varying the stream fraction fs to maximize the log-likelihood function in Eq. (1). The best-fit stream and background probability densities for star iare then given by fsps,iand (1 -fs)pbg,i, respectively. Following the standard definition used in mixture models, the membership probability that star ibelongs to the stream is given by Ps,i≡ fsps,i fsps,i+ (1 -fs)pbg,i . (2) We consider stars with membership probability greater than a chosen threshold Pth to be identified as stream members. In this work, we adopt the standard setup of the mixture model with Pth = 0.5. However, we emphasize that Pth is an adjustable parameter that can be tuned depending on whether the analysis prioritizes completeness or purity. 3. Method validation We validate our new method by applying it to a mock catalog of stream and background stars tailored for Gaia DR3, based on the H25 stream catalog. 3.1. Mock observational data Our mock catalog consists of both a stream population and a background population, each described by six observables: positions (φ1, φ2), proper motions (μφ1, μφ2), color (BP - RP), and magnitude (G). We do not include radial velocity or metallicity, as these quantities are available for only a small subset of stars in Gaia DR3. Parallax is also excluded because of its large uncertainty at the typical heliocentric distances of stellar streams. The stream population is from the mock catalog of GC streams in a simulated MW-like galaxy (ID 523889) by H25. This catalog generates synthetic streams around a mock GC population based on the GC formation model of Y. Chen & O. Y. Gnedin (2024). It fits the host potential of simulated galaxies at each snapshot using basis function expansion (BFE), accounting for the time evolution of the potential by linearly interpolating between snapshots. The catalog then explicitly integrates the orbit of each GC in this potential over the last ∼3.5 Gyr and computes the mass loss rate based on the GC mass and the local tidal field. H25 initializes each GC with the P. Kroupa (2001) IMF and releases stars according to the time-varying mass loss rate. Stars are released probabilistically, with the ejection probability inversely proportional to the square root of stellar mass. The released stars form stellar streams using the particle spray algorithm by Y. Chen et al. (2025b). The catalog provides the initial mass, age, and metallicity [Fe/H] of each star, allowing us to assign synthetic Gaia photometry directly using the MIST isochrone model. Ideally, we should generate our simulated streams in the same potential used in H25 for full consistency. However, because it is challenging to constrain the time evolution of the Galactic potential in practice, we use only the static potential at the final snapshot to avoid over-idealization. The stream duration in H25 is longer than that of most observed GC stream segments (≲1 Gyr, Y. Chen et al. 2025a). In this work, we only use the portions of streams that were released in the last 1 Gyr to mimic streams that are currently observable. Since the main goal of this method is to detect streams around GCs, it is also unnecessary to include stream stars released more than 1 Gyr ago or located outside the 10◦cone centered on the GC, as these stars tend to be more sensitive to the time evolution of the Galactic potential. To create a more realistic stream population, we also add observational errors to the mock stream stars. Similarly to §2.1, we neglect errors in positions and magnitudes. For proper motion uncertainties, we adopt the following parametric form from Gaia's performance website,5 which describes the dependence of the uncertainty on apparent magnitude, σμ= √ 40 + 800z+ 30z2 1000 mas yr-1 (3) where z= 100.4[max(G,13)-15]. This expression increases from 0.01 to 0.6 mas yr-1 with G= 13 -20. These errors can be significant, especially given that the intrinsic proper motion spread of a Pal 5-like stream is much smaller. Gaia's performance website suggests multiplying Eq. (3) by a fudge factor of 1.03 and 0.89 for the proper motions of right ascension and declination, respectively. In the rotated frame (φ1, φ2), we have verified that simply setting this factor to unity for both coordinates also reproduce the actual proper motion uncertainties with sufficient accuracy. We then add Gaussian noise to the original proper motions using Eq. (3). Similarly, we parameterize the dependence of color uncertainty on apparent magnitude using the following expression, σBP-RP = 10[max(G,14)-23]/3 (4) which increases from 0.001 to 0.1 for G= 14 -20. This expression accurately reproduces the mean BP -RP color uncertainty in Gaia DR3 by M. Riello et al. (2021). For the background population, we directly use observational data from Gaia DR3 within the same 10◦cone centered on the progenitor GC of each stream in the H25 catalog. We only select stars with G 103 M⊙and containing at least 10 stars after applying the above selection criteria. This leads to 158 streams from our chosen catalog from H25. For each stream, we construct a mock dataset following the procedure described above and evaluate our detection algorithm on it, assuming no prior knowledge of how the dataset is constructed. 3.2. Method performance First, we illustrate key concepts of our method by applying it to an example mock stream originating from a GC with mass M= 5.6 × 105 M⊙, located at a Galactocentric radius rGal = 11 kpc and a Galactic latitude b= 21◦. The tidal radius is 125 pc, corresponding to an angular size of 0.63◦ at its heliocentric distance d⊙= 8.7 kpc. In the top row of Fig.1, we show the distributions of simulated tracer particles in position space, proper motion space, and color-magnitude space. The stream PDF ps(x) is constructed via KDE using these simulated tracers. Although we display ps(x) as 2D contours in each of the three subspaces, we emphasize that the KDE is constructed in the full six-dimensional space of all observables and projected onto 2D subspaces. We then apply our method to the mock dataset, which includes 194 mock stream stars from H25 and approximately four million background stars from Gaia DR3. We detect 244 stream members, of which 140 are true members (see the bottom row of Fig.1). Most of the false positives and missed detections are either along the RGB, where background contamination is high, or near G= 20, where observational uncertainties become significant according to Eqs.(3) and (4). As we show later in §4.1 and Fig. 8, both of these regions have lower S/N than the main sequence turnoff. We also note that our simulated stream in Fig.1 samples the RGB less densely than the main sequence due to the lower abundance of RGB stars. This, however, only weakly affects the detection quality, as RGB stars only contribute a small fraction of the total and are intrinsically hard to detect in any case due to their low S/N. It is worth noting that for faint stars (G≳19), the uncertainties in color and proper motions can exceed the intrinsic spread of the entire stream. Our method accounts for this by convolving the KDE with a Gaussian blob centered on each star, as described in §2.1. Without this convolution, many stars would be shifted outside the effective selection window in proper motion and color space, leading to significantly fewer detections than expected. In contrast, while our detection accuracy does decrease for these faint stars, the method still tends to recover the correct total number of stream members by balancing false positives and missed detections. This is expected as our mixture model tends to reproduce the correct number of stream stars to recover the correct density ratio between the stream and the background. Next, we examine the statistical performance of our method by applying it to all mock streams. We quantify detection quality using three metrics: detection ratio, purity, and completeness. The detection ratio is defined as the ratio of the total number of detected stream members Ndetect to the total number of true members Ntrue. In the upper panel of Fig. 2, we show Ndetect/Ntrue as a function of the threshold probability Pth. Starting from a large value ≫1 at Pth = 0, the median detection ratio of all streams rapidly drops to 1 at Pth ≈0.07, with an interquartile range of approximately 0.3 dex. It then gradually declines to 0.2 at Pth = 0.8, followed by a rapid decrease to 0 as Pth approaches 1. Furthermore, we define the purity as the ratio of correctly identified stream members Ncorrect to the total number of detected members Ndetect. The completeness is defined as the ratio of Ncorrect to Ntrue. By definition, both purity and completeness range from 0 to 100%, and the ratio between completeness and purity equals the detection ratio. In the lower panel of Fig. 2, we show the median purity and completeness as functions of Pth. The median purity increases rapidly from 0 to about 70% for Pth = 0 -0.1, and then gradually approaches 100% as Pth →1 (note that purity is not well defined at Pth = 1). In contrast, completeness drops from 100% to around 60% over Pth = 0 -0.1, and then continues to decrease to 0 at Pth = 1. The two curves intersect at Pth ≈0.07, where the detection ratio also reaches unity. When divided by Galactic latitude b, the high-latitude streams with |b| > 30◦reach a detection ratio of unity at Pth = 0.5. However, a broad range of Pth = 0.2 -0.6 yields detection ratios that deviate from unity by less than 0.2 dex. For these high-latitude streams, purity and completeness also intersect at Pth ≈0.5, reaching 80% and 72% respectively. On the other hand, the majority of streams are located at low Galactic latitudes |b| 70% for streams with relatively low contamination at |b| > 30◦. The similarity between purity and completeness also ensures that Ndetect serves as an unbiased estimator of Ntrue. This is particularly important for inferring properties of the progenitor GC, such as the mass loss rate. Streams at |b| 30◦. However, the ratio deviates unity near the Galactic plane, where the dispersion is also larger since ∼20% of streams with detection ratios 30 |b| 30◦, dashed curves) separately. Since the purity at Pth = 1 is not well defined, we extrapolate the values at Pth = 0.99 out to 1 for visual clarity. 3.3. Null test In addition to the above tests that focus on how many stream member stars can be correctly detected, it is also important to perform the null test to ensure that the method does not report false detections when there is no stream. Therefore, we design the null test by removing the signal (i.e., mock stream stars) from the test dataset and applying the method only to the background stars in the same region around each GC. A perfect method should yield zero detections in the null test. However, due to random fluctuations in the background that are not captured by the KDE, it is possible that a small fraction of background stars are accidentally better described by the stream model. This can lead to the detection of "false streams". In Fig. 4, we show the number of detections in the null test, Nnull, as a function of Galactic latitude |b|. For comparison, we also plot the true number of stream stars Ntrue and 10 2 10 1 1 10 Ndetect/Ntrue 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 3. Detection ratio (upper panel) and completeness/purity (lower panel) of detected stream members as functions of the progenitor GC's absolute Galactic latitude |b| with Pth = 0.5. Individual streams are shown as black circles (detection ratio), blue diamonds (completeness), and red triangles (purity). The solid lines stand for the median values among all test streams, while the shaded ranges show the interquartile ranges. Detection ratio = 1 is highlighted as the dot-dashed line. We calculate the percentiles at any busing nearby streams smoothed by the Gaussian kernel, with bandwidth varying linearly from 5◦to 15◦from the Galactic plane to the poles. the number of detections Ndetect when the signal is included. Our method reports exactly zero detections for 60% of the mock GCs. For the remaining cases, Nnull is still generally much smaller than the corresponding Ntrue and Ndetect. Only 11% of cases yield Nnull ≥10. Conversely, 16% of cases have Ndetect 60◦. This is likely a stochastic effect given the small number of streams at such high latitudes. All metrics remain within the interquartile range (which is even narrower than the 1σrange) of the original values, indicating that even a 20% variation in the potential has little effect on detection quality. Since recent measurements of the MW potential have uncertainties of only ≈10% (e.g., R. Ibata et al. 2024), these tests are conservative. The MilkyWayPotential2022 model performs almost identically to the original BFE model. This is encouraging as the former is designed to match the real MW instead of the simulated galaxy. While both models have similar halo structures beyond 10 kpc, their disk components differ significantly, with enclosed masses near 1 kpc differing by ≈30%. This suggests that the performance of our method is insensitive to the exact choice of Galactic potential model within 10◦around the GC. We emphasize, however, that our method has not been tested beyond 10◦, where the Galactic potential becomes more important. Even within 10◦, the slight decrease in purity for the alternative models is largely driven by stars located between 5◦and 10◦, indicating that the influence of the potential grows with distance from the progenitor. Moreover, the LMC may introduce a larger perturbation to the MW's potential (see review by E. Vasiliev 2023) than that in the simulated galaxy. Since the H25 catalog does not include a galaxy with a realistic LMC analog, we are unable to quantify the effect of LMC in this work. 3.6. Dependence on stream generation algorithms The mock stream catalog also uses the same Y. Chen et al. (2025b) particle spray algorithm as adopted in this work. To further validate our method, we conduct an additional test by generating a mock stream using an N-body simulation, which is generally considered more accurate than most particle spray methods. Specifically, we initialize the simulation using the same initial conditions as the example stream in §3.2 and Fig. 1. Following Y. Chen et al. (2025b), we model the progenitor GC using the I. R. King (1966) model with W= 8, a typical value for Galactic GCs. We set the particle mass to 10 M⊙, the softening length to 1 pc, and the simulation time step to 2-13 kpc km-1 s ≈0.1 Myr. We then backtrack 10 Chen et al. 10 2 10 1 1 10 Ndetect/Ntrue MIST (base) PARSEC 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio 10 2 10 1 1 10 Ndetect/Ntrue 1.0MGal (base) 0.8MGal 1.2MGal MWPotential22 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 5. Same as Fig. 3, but with the alternative isochrone model PARSEC (left column) and three alternative Galactic potential models (right column): 1) the base potential model scaled down by 20% (dotted curves), 2) the base potential model scaled up by 20% (dashed curves), and 3) the MilkyWayPotential2022 model from gala. We show the interquartile ranges for the base case for comparison. the GC orbit for 1 Gyr in the same static Galactic potential described in §3.1, and run the N-body simulation forward to the present day using the fast-multipole gravity solver falcON (W. Dehnen 2000, 2002). Although the N-body simulation includes only collisionless dynamics and omits close stellar encounters, it is sufficiently distinct from the particle spray model to serve our purpose of validating the detection method with an alternative algorithm. Since the particles in the simulation are different from individual stars, we randomly select a subsample of escaped particles and re-sample their masses using the stellar mass distribution of stars with G 1. Extinction reduces the number of observable stream stars above the detection limit by up to a factor of 10 near the mid-plane, while also shifting the stream to redder colors, where the background density is higher (see the upper right panel of Fig. 1). In this section, we investigate the influence of these effects by incorporating dust extinction into our mock dataset. We use the Python package dustmaps (G. M. Green 2018) to compute the extinction AVof each stream star in H25, based on the map of D. J. Schlegel et al. (1998, hereafter SFD). This map is recalibrated by E. F. Schlafly & D. P. Finkbeiner (2011) with RV= 3.1. The AVvalues are then passed to StarStream: Automatic detection algorithm for stellar streams 11 the MIST bolometric correction interpolation table6 to obtain the G-band extinction and BP -RP color excess. Since the table only covers AV= 0 -6, we remove stars with AV> 6, as they are also likely too faint to be observable. Note that the SFD map provides extinction along the full line of sight from the solar system to infinity. This overestimates the true extinction, since streams are located at finite distances. While the overestimation is probably modest, this test should be regarded as an extreme case that assumes maximum possible extinction. The detection quality would decline significantly if we directly apply the same method to the new dataset, since the simulated isochrone no longer aligns with the reddened distribution of stream stars in the color-magnitude space. Fortunately, in practice we have access to extinction values for most MW GCs from catalogs such as W. E. Harris (1996), allowing us to account for realistic extinction when simulating the stream. This is equivalent to replacing the original isochrone with a reddened one, where the extinction and color excess are calculated from the progenitor's AV. Here, we take each GC's AVdirectly from the SFD map at its location. As before, we exclude streams whose progenitor GCs have AV> 6. Since extinction makes streams fainter, the number of streams with at least 10 stars also decreases, leaving 123 valid streams out of the original 158. Stars from the same stream may have different AVvalues in the mock dataset, since extinction is not constant within the search radius. However, we redden the simulated isochrone using only a single AVvalue at the center of the stream. This reflects the practical difficulty of obtaining precise extinction for every individual stream star. We emphasize that our tests are designed to reproduce the actual detection quality of StarStream when applied to real data. Thus, it is important to avoid any over-idealization. In Fig. 6, we show the detection ratio, completeness, and purity after applying StarStream to the new mock dataset that includes extinction. As expected, both completeness and purity decrease substantially near the Galactic plane (|b| 30◦, the median completeness and purity decrease by only ∼10% to 62% and 6 https://waps.cfa.harvard.edu/MIST/model grids.html 10 2 10 1 1 10 Ndetect/Ntrue No extinction With extinction 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 0.2 0.4 0.6 0.8 1.0 Ratio Figure 6. Same as Fig. 3, but including dust extinction from the SFD map. We also show the original cases (no extinction) as thin curves for comparison. 67%, respectively. The detection ratio in this latitude range is nearly unchanged. We also perform the null test described in §3.3 on the new dataset, and show it in Fig. 7. In this case, only 13% of detections yield Nnull = 0. This reduction is mainly due to the higher false detection rate at low latitudes, where Nnull is nearly the same as Ndetect for |b| 30◦, false and true detections can still be cleanly separated using the same threshold of Ndetect ≈10 recommended in §3.3. Therefore, although both completeness and purity of StarStream decrease at low latitudes when accounting for extreme extinction, the method still achieves high values ≈65% at |b| > 30◦, where extinction is less significant. Since the extinction adopted in this section overestimates the true values, these results represent lower limits of the actual detection quality. As discussed later in §4.1, future spectroscopic and deep photometric surveys are necessary to reveal low-b streams with high extinction. 12 Chen et al. 0 15 30 45 60 75 Galactic latitude |b| (deg) 0 1 10 102 103 104 Count Figure 7. Same as Fig. 4, but including dust extinction from the SFD map. 4. Discussion 4.1. Application to other surveys Although this work is framed with Gaia DR3, it can be straightforwardly extended to other surveys. In this subsection, we discuss the importance of spectroscopic surveys and deep photometric surveys in further enhancing detection quality. Our method operates in the six-dimensional space of positions, proper motions, color, and magnitude. We find that the median detection ratio drops significantly to 18% when color and magnitude are excluded. In this case, the median purity and completeness also decrease to 50% and 6%, respectively. If proper motions are excluded, we cannot even detect any stars for more than 80% of streams. These results demonstrate that six-dimensional information of positions, proper motions, colors, and magnitudes is important for recovering most streams in Gaia DR3. For streams with extremely low density, strong background contamination, or high extinction we may need even more independent observables. This highlights the importance of spectroscopic surveys, such as the Apache Point Observatory Galactic Evolution Experiment (APOGEE, S. R. Majewski et al. 2017), the Southern Stellar Stream Spectroscopic Survey (S5, T. S. Li et al. 2019), and the Dark Energy Spectroscopic Instrument (DESI, DESI Collaboration et al. 2022) Milky Way Survey (MWS, A. P. Cooper et al. 2023), which provide radial velocities and metallicities and may help reveal very faint streams. Our example stream in Fig. 1 shows that most correctly identified members lie near the main sequence turnoff, where the S/N is high. In Fig. 8, we present the distribution of S/N in the color-magnitude diagram, where S/N is defined as the ratio between the densities of actual stream members and background stars. We estimate both densities in the sixdimensional space using KDE approaches similar to those described in §2.1 and §2.2. However, we multiply the stream kernel widths in §2.1 by a factor of 3, since the number of member stars is smaller than the number of simulated tracer particles. It is remarkable that a significant number of stars have S/N values larger than unity in six-dimensional space, despite the number of stream members being orders of magnitude smaller than the background contaminants. The region with S/N > 1 coincides with the area of high purity and completeness near the main sequence turnoff at G≈19. Brighter stars in the RGB suffer from higher background contamination, resulting in lower S/N. Stars in the horizontal branch show relatively high S/N because they are bluer than most background stars; however, the horizontal branch contributes only a few members to the stream. The majority of stars lie below the main sequence turnoff and have lower S/N due to large observational uncertainties, as described by Eqs. (3) and (4). This issue is not unique to the example stream. Since the typical main sequence turnoff is at absolute magnitude MG= 3 -4, corresponding to G= 20 at a heliocentric distance ≈20 kpc, Gaia can barely detect stars fainter than the turnoff for most MW streams. This highlights the importance of deep photometric surveys such as LSST and Roman, which are expected to detect significantly more stream members thanks to their much deeper detection limits (S. Pearson et al. 2024; C. Holm-Hansen et al. 2025). Moreover, Roman can also provide high-quality astrometry measurements for faint stars, significantly improving S/N below the main sequence turnoff. 4.2. Improvements to existing methods The new detection method constructs stream KDE via a particle spray algorithm, which only requires the progenitor GC's mass, position, and velocity as input. We then assign colors and magnitudes to the tracers based on an isochrone model, which depends only on the progenitor's metallicity and age. In practice, most of these input parameters are available from existing catalogs (e.g., M. Hilker et al. 2019). The only exception is age, which, however, has a weak impact on our results (see §3.4). Therefore, our method avoids making unnecessary and unrealistic assumptions about the stream's morphology and kinematics. Additionally, we also construct the background KDE directly from a subsample of observed stars. As a result, both the stream and background models have no free parameters. The mixture model that combines these two components includes only one free parameter: the stream fraction fs. This minimal parameterization offers a direct adStarStream: Automatic detection algorithm for stellar streams 13 0.4 0.6 0.8 1.0 BP RP 15 16 17 18 19 20 G 10 2 10 1 1 10 102 Figure 8. Distribution of S/N in the color-magnitude diagram for actual members of the mock stream in Fig. 1. The signal and noise densities are estimated using a similar KDE method in §2.1 and §2.2, respectively. We multiply the stream kernel width by a factor of 3 to account for the lower number density of stars that do not sample the parameter space as well as simulated tracers. The same 104 background stars as in Fig. 1 are also shown for reference. vantage in computational efficiency. On average, our Python implementation takes ∼10 minutes to detect a single stream when running on 32 cores of an Intel Haswell CPU. The total computation time to analyze all mock streams is approximately 1000 CPU hours of computation time, which is orders of magnitude faster than typical mixture models. For instance, STREAMFINDER requires millions of CPU hours on a similar dataset (R. Ibata et al. 2021). Although the particle spray method used here takes no free parameters, our stream model is more accurate than simply assuming the stream is an elongated structure along its orbital track. A representative example of the latter is STREAMFINDER, which detects clustering of stellar orbits within a Gaussian tube. Using STREAMFINDER, R. Ibata et al. (2024) successfully detected 16 streams originating from known GCs in Gaia DR3. To compare performance, we apply our method to the same 16 GCs in Gaia DR3, using GC properties from the M. Hilker et al. (2019) catalog and the Galactic potential model MilkyWayPotential2022. Excluding M68, ωCen, and M5, whose streams in R. Ibata et al. (2024) are not connected to the progenitors, all other streams extend at least 5◦within our 1 Gyr integration time. In this region, our method detects on average 5 times as many stream members. Even for the Pal 5 stream, which is widely thought to be one of the most complete, we detect 131 members compared to 76 reported in R. Ibata et al. (2024), which a meaningful improvement. Notably, the actual Pal 5 stream extends beyond 5◦because it formed over approximately 6 Gyr (Y. Chen et al. 2025a), much longer than our integration time. For the remaining streams that extend farther within our 1 Gyr integration time, we detect on average 4 times as many member stars inside 10◦. StarStream is not the first physics-motivated attempt to search for GC streams. For example, C. J. Grillmair (2022); Y. Yang et al. (2023); C. J. Grillmair (2025) have successfully identified tidal features around individual GCs by comparing observations with simulated streams. Compared to these works, our approach automates this technique using KDE. In addition, we provide quantitative metrics for detection quality, demonstrating the broader potential of this method for identifying more GC streams. 5. Summary In this work, we present StarStream, an automatic detection algorithm for stellar streams based on a physics-inspired stream model. We construct a mixture model in the multidimensional space of observables, including positions, velocities, colors, magnitudes, etc. The model consists of background and stream components, whose PDFs are represented using KDE. For the background, we build the KDE from a subsample of observed stars; for the stream, we construct the KDE from tracers generated using the particle spray algorithm of Y. Chen et al. (2025b). We illustrate the method using an example stream in Fig. 1. We quantitatively assess the detection quality of our method around existing GCs using the mock stream catalog from H25, which is tailored to Gaia DR3 and includes six observables: sky coordinates (φ1, φ2), proper motions (μφ1, μφ2), color (BP -RP), and magnitude (G). Our mock dataset incorporates magnitude-dependent uncertainties for each observable, and we include all Gaia DR3 stars as the background population. The method achieves both purity and completeness around 65% even with extreme dust extinction (> 70% without extinction). The detection ratio is near unity for highlatitude streams (Fig. 3 and Fig. 6). For low-latitude streams, however, high background contamination and extinction can significantly reduce both purity and completeness to < 10%. Next, we perform a series of tests to examine the robustness of the method. We begin with a null test to evaluate the frequency of false positive detections. After removing the signal (i.e., mock stream stars) from the dataset, our method correctly reports Nnull = 0 for 13% of all streams with extreme extinction (60% without extinction), while a threshold Ndetect ≈10 cleanly separates true and false detections for high-latitude streams (Fig. 4 and Fig. 7). We further test the method using a different isochrone model and different Galactic potential models (Fig. 5), and a different stream generation algorithm. These alternative configurations do not significantly weaken the detection quality. Being robust to alternate isochrone models is important in application to real 14 Chen et al. data as the predicted isochrone can vary significantly among different models. We find that both purity and completeness drop significantly when proper motions or color and magnitude are excluded from the input dataset. This emphasizes the importance of incorporating multiple independent observables for stream detection. With the full six-dimensional input, however, the S/N can exceed unity even when the number of stream members is orders of magnitude smaller than the background contaminants (Fig. 8). Stars with high S/N are primarily located near the main sequence turnoff, coinciding with those that exhibit high purity and completeness. In contrast, fainter stars near G= 20 and brighter stars on the RGB both have lower S/N due to large observational uncertainties or strong background contamination, respectively. Finally, we compare our new method to existing methods such as STREAMFINDER. Our method is several orders of magnitude more computationally efficient, primarily because the physics-inspired stream model requires no free parameters. This greatly accelerates the model optimization process. At the same time, for streams associated with existing GCs in R. Ibata et al. (2024), our method detects on average 5 times as many member stars within the 5◦circle. It is also worth noting that the method may uncover additional streams when applied to the full set of GCs in Gaia DR3. We have published the package StarStream on GitHub via https://github.com/ybillchen/StarStream, where we also provide example Python notebooks for running the code. The code requires the mass and six-dimensional phase-space coordinates of the progenitor to generate stream tracers using the particle spray algorithm. It also requires an isochrone that is fit to the progenitor to compute mock photometry for the tracers. The input dataset should be a multi-dimensional array of observables, coupled with another array of observational uncertainties of the same shape. Users can also specify the threshold probability Pth, tracer particle ejection rate, KDE kernel widths, interpolation grid spacings for the background PDF, and the Galactic potential, if different values from the default of this paper are preferred. Acknowledgments We thank Monica Valluri, Eric Bell, Katya Gozman, and Jacob Nibauer for insightful discussions. YC, OYG, and CHH were supported in part by National Aeronautics and Space Administration through contract NAS5-26555 for Space Telescope Science Institute programs HST-AR-16614 and JWSTGO-03433. This research benefited from the Gravity in the Local Group conference hosted by the McWilliam's Center for Cosmology and Astrophysics, Carnegie Mellon University. Software: agama (E. Vasiliev 2019), numpy (C. R. Harris et al. 2020), matplotlib (J. D. Hunter 2007), scipy (P. Virtanen et al. 2020), astropy ( The Astropy Collaboration et al. 2018), gala (A. M. Price-Whelan 2017; A. Price-Whelan et al. 2024), pandas ( The pandas development team 2024), dustmaps (G. M. Green 2018), falcON (W. Dehnen 2000, 2002), gaiaunlimited (T. Cantat-Gaudin et al. 2023) Appendix A. Kernel density estimation with uncertainties Given a sample of Npoints {xj}, we can estimate the probability density function p(x) at any point of interest using Gaussian KDE, p(x) ≈1 N N ∑︁ j=1 pKDE(x|xj, σ) ≡1 N N ∑︁ j=1 1 √ 2πσ exp " -(x-xj)2 2σ2 # . However, if the location of the point of interest is uncertain and follows a distribution function f(x), the expected probability density pis the average of p(x) weighted by f(x), p= ∫∞ -∞ p(x) f(x)dx≈ N ∑︁ j=1 ∫∞ -∞ pKDE(x|xj, σ) f(x)dx. If we assume f(x) is a Gaussian function centered at x0 with uncertainty σ0, the last integral is the convolution of two Gaussian distributions N (xi, σ2 i) ★N (0, σ2 0) with the independent variable x0. This convolution is simply another Gaussian distribution N (xi, σ′2) ≡N (xi, σ2 + σ2 0). We can thus obtain the expected probability density as p≈1 N N ∑︁ j=1 pKDE(x|xj, σ′). StarStream: Automatic detection algorithm for stellar streams 15 Therefore, the expected probability density of a point with Gaussian uncertainty σ0 equals the standard Gaussian KDE at the same location with bandwidth replaced by σ′2 ≡σ2 + σ2 0 for every sample point {xi}. B. Astrometry uncertainty propagation The uncertainties of astrometry measurements in sky coordinate system (α, δ, μα, μδ, π, vr) are commonly quantified as a 6 × 6 covariance matrix C = ©­­­­­ « Vα Cαδ · · · Cαvr Cαδ Vδ · · · Cδvr ... ... ... ... CαvrCδvr· · · Vvr a®®®®® ¬ where Vi= σ2 iis the variance of quantity iand Cij= σiσjρijis the covariance between quantities iand j, in which ρij= ρjiis the correlation coefficient. To obtain the covariance matrix in the rotated frame (φ1, φ2, μφ1, μφ2, π′, v′ r), C′ = ©­­­­­ « Vφ1 Cφ1 φ2 · · · Cφ1v′r Cφ1 φ2 Vφ2 · · · Cφ2v′r ... ... ... ... Cφ1v′r Cφ2v′r · · · Vv′r a®®®®® ¬ We use linear uncertainty propagation to approximate C′ C′ ≈JCJT (B1) where J is the Jacobian matrix J ≡∂(φ1, φ2, μφ1, μφ2, π′, v′ r) ∂(α, δ, μα, μδ, π, vr) . Note that (π′, v′ r) = (π, vr) in coordinate rotation. Also, other new coordinates do not explicitly depend on parallax and radial velocity. Therefore, we directly know ∂π′/∂i= δiπ, ∂v′ r/∂i= δivr, ∂i/∂π= δiπ′, and ∂i/∂vr= δiv′r, where the Kronecker symbol δij= 1 only if i= j, otherwise 0. This greatly simplifies our calculation as we only need to take into account the rotation of angular coordinates (α,δ,μα,μδ) and (φ1,φ2,μφ1,μφ2) independent from πand vr. Rotation in the spherical coordinate system is nonlinear, which makes it challenging to derive J analytically. However, by first transforming the sky coordinates to the Cartesian system (x, y, z, vx, vy, vz), we can more easily deal with rotation as it becomes a linear coordinate transformation, which can be described by the 6 × 6 rotational matrix, R ≡ R3×3 0 0 R3×3 ! Here, R3×3 ∈SO(3) is the standard 3D rotational matrix. After the rotation, we can transform the rotated Cartesian frame back to the great circle frame. The combined Jacobian matrix thus equals the product of the Jacobian matrices for the three transformations. Since rotation is a linear translation, the Jacobian matrix of rotation is simply R itself. We derive the two remaining Jacobian matrices as follows. We consider the rotation of angular coordinates in the unit sphere. Such a simplification does not affect our calculation as the parallax and radial velocity are independent coordinates. The transformation from sky coordinates to the cartesian system is ©­­ « x y z a®® ¬ = ©­­ « cos αcos δ sin αcos δ sin δ a®® ¬ and ©­­ « vx vy vz a®® ¬ = ©­­ « -μαsin α-μδcos αsin δ μαcos α-μδsin αsin δ μδcos δ a®® ¬ . 16 Chen et al. Recall that we define μα≡¤αcos δ. Therefore, the corresponding 6 × 4 Jacobian matrix is J1 ≡∂(x, y, z, vx, vy, vz) ∂(α, δ, μα, μδ) = ©­­­­­­­­­ « -sin αcos δ -cos αsin δ 0 0 cos αcos δ -sin αsin δ 0 0 0 cos δ 0 0 -μαcos α+ μδsin αsin δ-μδcos αcos δ-sin α-cos αsin δ -μαsin α-μδcos αsin δ-μδsin αcos δ cos α -sin αsin δ 0 -μδsin δ 0 cos δ a®®®®®®®®® ¬ . We can also calculate the inverse transformation to the great circle frame, φ1 φ2 ! = ©­ « arctan y′ x′ arcsin z′ a® ¬ and μφ1 μφ2 ! = 1 √︁ x′2 + y′2 -v′ xy′ + v′ yx′ v′ z ! . The 4 × 6 Jacobian matrix of this transformation is J2 ≡ ∂(φ1, φ2, μφ1, μφ2) ∂(x′, y′, z′, v′x, v′y, v′z) = ©­­­­­­­­­­ « -sin φ1 cos φ2 cos φ1 cos φ2 0 0 0 0 0 0 1 cos φ2 0 0 0 -μφ2 sin φ1 sin φ2 cos φ2 μφ2 cos φ1 sin φ2 cos φ2 0 -sin φ1 cos φ1 0 0 0 μφ2 sin φ2 cos2 φ2 0 0 1 cos φ2 a®®®®®®®®®® ¬ For clarity, we already write J2 in terms of the great circle coordinates. The combined Jacobian matrix is given by J = J2RJ1 0 0 I2×2 ! . The identity matrix in the lower right accounts for the transformation of (π, vr). We have verified that J2RJ1 also becomes the identity matrix in the case of no rotation, R = I6×6. Finally, we can insert J to Eq. (B1) to obtain the covariance matrix in the great circle frame. References Amorisco, N. C. 2015, MNRAS, 450, 575, Banik, N., Bertone, G., Bovy, J., & Bozorgnia, N. 2018, JCAP, 2018, 061, Bernard, E. J., Ferguson, A. M. N., Schlafly, E. F., et al. 2014, MNRAS, 443, L84, Bonaca, A., Geha, M., K ̈upper, A. H. W., et al. 2014, ApJ, 795, 94, Bonaca, A., & Price-Whelan, A. M. 2025, New Astronomy Reviews, 100, 101713, Bovy, J., Hogg, D. W., & Roweis, S. T. 2011, Ann. Appl. Stat., 5, Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127, Cantat-Gaudin, T., Fouesneau, M., Rix, H.-W., et al. 2023, A&A, 669, A55, Carlberg, R. G., Grillmair, C. J., & Hetherington, N. 2012, ApJ, 760, 75, Chen, Y. 2024, ybillchen/particle spray: v0.1.0, Zenodo, Chen, Y., & Gnedin, O. Y. 2023, MNRAS, 522, 5638, Chen, Y., & Gnedin, O. Y. 2024, MNRAS, 527, 3692, StarStream: Automatic detection algorithm for stellar streams 17 Chen, Y., Li, H., & Gnedin, O. Y. 2025a, ApJL, 980, L18, Chen, Y., Valluri, M., Gnedin, O. Y., & Ash, N. 2025b, ApJS, 276, 32, Cooper, A. P., Koposov, S. E., Allende Prieto, C., et al. 2023, ApJ, 947, 37, Dehnen, W. 2000, ApJL, 536, L39, Dehnen, W. 2002, JCoPh, 179, 27, DESI Collaboration, Abareshi, B., Aguilar, J., et al. 2022, AJ, 164, 207, Dotter, A. 2016, ApJS, 222, 8, Erkal, D., Belokurov, V., Bovy, J., & Sanders, J. L. 2016, MNRAS, 463, 102, Erkal, D., Koposov, S. E., & Belokurov, V. 2017, MNRAS, 470, 60, Fardal, M. A., Huang, S., & Weinberg, M. D. 2015, MNRAS, 452, 301, Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2, Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 650, C3, Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1, Gieles, M., Erkal, D., Antonini, F., Balbinot, E., & Pe ̃narrubia, J. 2021, Nat Astron, 5, 957, Green, G. M. 2018, JOSS, 3, 695, Grillmair, C. J. 2009, ApJ, 693, 1118, Grillmair, C. J. 2022, ApJ, 929, 89, Grillmair, C. J. 2025, ApJ, 979, 75, Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, Harris, W. E. 1996, AJ, 112, 1487, Hattori, K., Erkal, D., & Sanders, J. L. 2016, MNRAS, 460, 497, Hilker, M., Baumgardt, H., Sollima, A., & Bellini, A. 2019, Proc. IAU, 14, 451, Holm-Hansen, C., Chen, Y., & Gnedin, O. Y. 2025, , Hunt, J. A., & Vasiliev, E. 2025, New Astronomy Reviews, 100, 101721, Hunter, J. D. 2007, CSE, 9, 90, Ibata, R., Malhan, K., Martin, N., et al. 2021, ApJ, 914, 123, Ibata, R., Malhan, K., Tenachi, W., et al. 2024, ApJ, 967, 89, Ibata, R. A., Gilmore, G., & Irwin, M. J. 1994, Nature, 370, 194, King, I. R. 1966, AJ, 71, 64, Kroupa, P. 2001, MNRAS, 322, 231, K ̈upper, A. H. W., Lane, R. R., & Heggie, D. C. 2012, MNRAS, 420, 2700, Lane, R. R., K ̈upper, A. H. W., & Heggie, D. C. 2012, MNRAS, 423, 2845, Li, T. S., Koposov, S. E., Zucker, D. B., et al. 2019, MNRAS, 490, 3508, LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009, , Lynden-Bell, D., & Lynden-Bell, R. M. 1995, MNRAS, 275, 429, Majewski, S. R., Schiavon, R. P., Frinchaboy, P. M., et al. 2017, AJ, 154, 94, Malhan, K., & Ibata, R. A. 2018, MNRAS, 477, 4063, Malhan, K., Ibata, R. A., Carlberg, R. G., Valluri, M., & Freese, K. 2019, ApJ, 881, 106, Mateu, C., Bruzual, G., Aguilar, L., et al. 2011, MNRAS, 415, 214, Meˇstri ́c, U., Vanzella, E., Zanella, A., et al. 2022, MNRAS, 516, 3532, Ngan, W. H. W., & Carlberg, R. G. 2014, ApJ, 788, 181, Nibauer, J., Bonaca, A., Lisanti, M., Erkal, D., & Hastings, Z. 2024, ApJ, 969, 55, Odenkirchen, M., Grebel, E. K., Rockosi, C. M., et al. 2001, ApJ, 548, L165, Pace, A. B., & Li, T. S. 2019, ApJ, 875, 77, Pearson, S., Bonaca, A., Chen, Y., & Gnedin, O. Y. 2024, ApJ, 976, 54, Pearson, S., Price-Whelan, A. M., & Johnston, K. V. 2017, Nature Astronomy, 1, 633, Price-Whelan, A., Souchereau, H., Wagg, T., et al. 2024, Zenodo, Price-Whelan, A. M. 2017, JOSS, 2, 388, Price-Whelan, A. M., & Bonaca, A. 2018, ApJL, 863, L20, Price-Whelan, A. M., Sesar, B., Johnston, K. V., & Rix, H.-W. 2016, ApJ, 824, 104, Riello, M., De Angeli, F., Evans, D. W., et al. 2021, A&A, 649, A3, Roberts, D., Gieles, M., Erkal, D., & Sanders, J. L. 2025, Monthly Notices of the Royal Astronomical Society, 538, 454, 18 Chen et al. Rockosi, C. M., Odenkirchen, M., Grebel, E. K., et al. 2002, AJ, 124, 349, Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, Sesar, B., Price-Whelan, A. M., Cohen, J. G., et al. 2016, ApJL, 816, L4, Shih, D., Buckley, M. R., Necib, L., & Tamanas, J. 2021, MNRAS, 509, 5992, Shipp, N., Drlica-Wagner, A., Balbinot, E., et al. 2018, ApJ, 862, 114, Spergel, D., Gehrels, N., Baltay, C., et al. 2015, . http://arxiv.org/abs/1503.03757 Tavangar, K., & Price-Whelan, A. M. 2025, , The Astropy Collaboration, Price-Whelan, A. M., Sip ̋ocz, B. M., et al. 2018, AJ, 156, 123, The pandas development team. 2024, pandas-dev/pandas: Pandas, Zenodo, Valluri, M., Fagrelius, P., Koposov, S. E., et al. 2024, . http://arxiv.org/abs/2407.06336 Varghese, A., Ibata, R., & Lewis, G. F. 2011, MNRAS, 417, 198, Vasiliev, E. 2019, MNRAS, 482, 1525, Vasiliev, E. 2023, Galaxies, 11, 59, Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, Yang, Y., Zhao, J.-K., Tang, X.-Z., Ye, X.-H., & Zhao, G. 2023, ApJ, 953, 130,
2510.14925
Stable but Miscalibrated: A Kantian View on Overconfidence from Filters to Large Language Models Akira Okutomi ToppyMicroServices OÜ, Tallinn, Estonia Abstract We reinterpret Kant’s Critique of Pure Reason as a theory of feedback stability, viewing reason as a regulator that keeps inference within the bounds of possible experience. We formalize this intuition via a composite instability index (H-Risk) combining spectral margin, conditioning, temporal sensitivity, and innovation amplification. In linear-Gaussian simula- tions, higher H-Risk predicts overconfident errors even under formal stability, revealing a gap between nominal and epistemic stability. Extending to large language models (LLMs), we find that fragile internal dynamics correlate with miscalibration and hallucination, while critique-style prompts show mixed effects on calibration and hallucination. These results suggest a structural bridge between Kantian self-limitation and feedback control, offering a principled lens for diagnosing—and selectively reducing—overconfidence in reasoning systems. This is a preliminary version; supplementary experiments and broader replication will be reported in a future revision. 1 Introduction We hypothesize that hallucination—whether in human thought or machine inference—arises when the reasoning process becomes unstable or ill-conditioned. In numerical analysis and inverse problems, an “ill-conditioned” system is one where small perturbations in data or parameters cause disproportionately large changes in the solution [1, 2, 3]. In control-theoretic terms, this corresponds to a feedback system whose internal dynamics (the closed-loop operator, defined later in Eq. 2.3) are near instability or highly sensitive to perturbations. Within this framework, philosophical critique can be understood as a meta-level adjustment of the gain K that seeks to reduce posterior uncertainty while preserving stability. Kant enters here not as a historical ornament but as a theorist of feedback between perception, inference, and judgment [4]. His critical philosophy explicitly sought a stable relation between empirical intuition and conceptual reasoning—an equilibrium that prefigures the feedback logic of estimation and control. This analogy motivates our attempt to formalize epistemic stability as a dynamical property rather than a purely linguistic one. We are primarily concerned with how epistemic stability—understood as the conditioning and robustness of the reasoning process—can be analyzed, quantified, and experimentally tested across classical control systems and large language models. 1 arXiv:2510.14925v1 [cs.AI] 16 Oct 2025 Contributions. This paper makes three main contributions: 1. A control-theoretic reconstruction of Kant’s tripartite cognitive architecture (sensibility- understanding-reason) as a state-space feedback model. 2. A quantitative framework interpreting hallucination as a manifestation of epistemic instability, characterized by the spectral radius ρ(Φ) and condition number κ(Φ) of the closed-loop operator. 3. An empirical framework linking theory to practice through a composite stability metric S(Φ) (instantiated as H-Risk) and experiments spanning linear systems and large language models. Prior work has explored connections between Kantian themes, cybernetics, and epistemic feedback [5, 6, 7], and recent studies have analyzed instability and hallucination in AI systems through related notions of internal model fragility [8, 9]. To our knowledge, however, this paper presents a mathematically explicit and unified structural framework linking Kant’s philosophy of cognition to the structure of the Kalman closed-loop operator—identifying epistemic stability as a shared design principle across classical control and modern generative models. This claim concerns the structural mapping and empirical program developed here; it is interpretive rather than exegetical, and does not assert doctrinal identity. 2 Theory: From Kant to Closed-Loop Stability 2.1 Philosophical Motivation Kant’s philosophy of cognition is fundamentally concerned with the question: under what conditions is cognition possible at all? In the Critique of Pure Reason (A94/B126, A307/B364), Kant distinguishes three mutually dependent layers ("tripartite") of cognitive architecture: sensibility (Sinnlichkeit), understanding (Verstand), and reason (Vernunft). Together they realize a recursive synthesis of experience: sensibility provides appearances (Anschauungen) as raw input, understanding organizes them under concepts (Kategorien), and reason regulates understanding by enforcing systematic unity and restraining it from transgressing the bounds of possible experience. For accessible expositions of this tripartite architecture, see Allison (2004) and Guyer (2006) for modern commentaries on Kant’s epistemic structure[10, 11]. Importantly, this hierarchy is not merely a static taxonomy of faculties but a recursive, self-correcting process: reason continually monitors and adjusts the inferential activity of un- derstanding, ensuring that cognition remains coherent and bounded over time. In modern terms, such recursion constitutes the minimal form of a feedback system—a loop that maintains epistemic stability within the limits of possible experience. Scope note. This paper does not attempt to encompass the entirety of Kant’s Critique of Pure Reason. We abstract only the structural core relevant to cognition’s possibility: the relation among sensibility, understanding, and reason, and the regulatory role by which reason constrains understanding. Topics such as the Transcendental Aesthetic’s treatment of space and time or the Dialectic’s antinomies fall outside our present scope. Our aim is therefore interpretive and structural rather than exegetical: a principled reconstruction of the feedback architecture that underwrites epistemic stability, not a claim of doctrinal completeness. Accordingly, novelty claims in this paper are limited to the structural mapping and empirical methodology, not to historical exegesis of Kant’s text. 2.2 From philosophical structure to state-space form. To make this abstract architecture more precise, we can model cognition as a feedback process between prediction and observation. Any first-order approximation of this process can be 2 expressed as a linear dynamical system: xt+1 = Axt + wt, yt = Hxt + vt, (2.1) Here, wt represents the process (system) noise and vt the measurement noise, typically modeled as zero-mean Gaussian variables with covariances Q and R, respectively. In this formulation, xt represents the organized content of understanding (the internal model of the world). The variable yt denotes the manifold of appearances provided by sensibility. The matrices (A, H) encode the structured synthesis between internal states and observed data. The recursive correction of xt by reason is then expressed by the update ˆxt|t = ˆxt|t−1 + Kt(yt −Hˆxt|t−1), (2.2) where Kt plays the role of the regulative function of reason. Thus, the linear-Gaussian state-space model is not merely an arbitrary mathematical analogy. It is the simplest formal instantiation of Kant’s triadic epistemic architecture under small deviations and rational coherence. By introducing noise terms, we acknowledge that both the world and our measurements are imperfect, and that cognition must operate robustly despite these uncertainties. While the Kalman formulation provides the simplest linear-Gaussian realization of this feedback architecture, the Kantian structure itself is more general: any recursive prediction- correction process—such as Hidden Markov model filtering or particle filtering—embodies the same epistemic form [12, 13, 14]. The direction of correspondence is therefore one-way: from Kantian cognition to Kalman filtering (and its variants), not vice versa. 2.2.1 Epistemic Stability as a Transcendental Condition For Kant, understanding has objective validity only within the bounds of possible experience. Beyond these bounds, reason generates what he calls transcendental illusions or antinomies. In our dynamical formulation, this boundary corresponds to the stability domain of the closed-loop operator: Φ ≡A −KH. (2.3) This operator, standard in control and estimation theory, defines the internal error dynamics under the feedback gain K [15, 16, 17]. If ρ(Φ) < 1 (the Schur stability condition) and the pair (A, H) is detectable, the system maintains a bounded error covariance P, ensuring a consistent relation between appearance and concept. However, when ρ(Φ)→1 or Φ becomes ill-conditioned, even small observation noise or model mismatch can be amplified into confident errors. This is the dynamical analogue of transcendental illusion: the system appears internally coherent while its inferences become unreliable. Intuitively, reason’s critique functions as a meta-level controller, selecting K to minimize both posterior uncertainty and instability—balancing accuracy and robustness: L(K) := E  ∥yt −Hˆxt∥2 , S(Φ) := H-Risk(Φ) (with Φ defined in Eq. 2.3). (2.4) min K L(K) + λ S(Φ). (2.5) Here, the first term enforces empirical adequacy, while the second penalizes departures from the domain of possible experience. Under Gaussian assumptions, this yields the Kalman gain [18] Kt = Pt|t−1H⊤(HPt|t−1H⊤+ R)−1, (2.6) which automatically balances trust in experience (R) and confidence in the model (Pt|t−1). Thus, the Kalman update mathematically instantiates what Kant described as reason’s self-limiting function (Selbstbeschränkung): the regulation of understanding to ensure that cognition remains stable within the bounds of possible experience. 3 2.3 Interpretive Summary • Structural necessity: The triadic feedback loop (sensibility →understanding →reason) is structurally isomorphic to a control-theoretic cycle (observation →model →gain adjustment). • Functional necessity: The aim of reason—empirical coherence plus stability—leads uniquely to the Kalman-type gain that minimizes posterior variance. • Approximation legitimacy: The finiteness of Kantian categories justifies a first-order (linear) representation as the minimal rational approximation of experience. Consequently, the mapping from Kant’s critical philosophy to a state-space control formulation is not a mere post hoc analogy. Rather, it is an analytic reconstruction: a mathematically explicit restatement of the transcendental conditions for stable cognition. 2.4 Canonical Status of the Kalman Structure (Linear–Gaussian Limit) Among possible formulations of recursive inference, the Kalman architecture emerges as the canonical realization once the transcendental conditions of cognition are approximated in a linear–Gaussian form (cf. Eq. (2.1)). 1. Duality of anticipation and correction. Reason must both project a lawful unity of nature and revise it in light of empirical data. In Kant’s terms, cognition “requires both intuitions and concepts; thoughts without content are empty, intuitions without concepts are blind” (A51/B75). Mathematically, this duality necessitates a recursive scheme that combines a prediction step (a priori) and an update step (a posteriori)—see Sec. 2.5 for the prediction-correction form. 2. Minimization of inferential error under rational coherence. The understanding must seek systematic unity without contradiction, “a connection of cognitions according to principles” (A647/B675). In modern terms, this translates to minimizing the expected inferential loss (mean-square error) while ensuring internal consistency of beliefs. Under Gaussian uncertainty, this requirement leads uniquely to the least-variance estimator. 3. Self-limitation to possible experience (stability). Reason must not transgress the bounds of experience—its “self-limitation” (Selbstbeschränkung)—lest it fall into transcen- dental illusion (A308/B364). Dynamically, this corresponds to the stability condition ρ(Φ) < 1, ensuring that the inferential loop remains bounded and that cognition does not diverge into antinomy. Any estimator that satisfies these three constraints—recursive duality, error minimization, and stability—under Gaussian assumptions reduces to the Kalman update in Eq. (2.6). Alternative schemes either violate recursion (simple averaging, static MLE), lack an optimality principle (ad hoc Bayesian weighting), or provide no guarantee of epistemic stability (nonlinear unbounded updates). Thus, within the class of rational, self-correcting inference systems, the Kalman recursion represents the minimal and canonical realization of reason’s critical function—a linear prototype that integrates empirical input without abandoning the unity of apperception. More general nonlinear observers can satisfy analogous constraints under weaker forms; the Kalman structure should therefore be read as the linear limit of this broader design principle (see also Sec. 2.6). 4 Philosophical interpretation. This uniqueness follows from Kant’s doctrine that the cate- gories of understanding are finite and systematically connected (A80/B106). A finite, law-governed conceptual manifold implies a closed, linear structure of synthesis; within such a structure, the only dynamically stable and norm-minimizing feedback law is linear-Gaussian. Hence, the Kalman filter is not an arbitrary mathematical metaphor but the canonical formalization of the tran- scendental unity of reason: a self-regulating process that maintains the coherence of experience by adjusting the relation between the empirical (sensible) and the conceptual (understanding) within the limits of possible experience. This abstract correspondence can be made explicit by examining the canonical prediction- correction structure that operationally realizes reason’s self-regulating function; see Sec. 2.5. 2.5 Prediction-Correction (Kalman) Form The Kalman recursion [18] divides inference into two complementary moments: an a priori prediction based on the internal model (ˆxt|t−1 = Aˆxt−1|t−1) and an a posteriori correction using empirical data (ˆxt|t = ˆxt|t−1 + K(yt −Hˆxt|t−1)). This structure mirrors Kant’s epistemology, in which understanding supplies the a priori form of cognition—the lawful framework through which appearances can be anticipated—while sensibility provides the a posteriori content of experience. The synthesis of these two moments constitutes what Kant calls the unity of apperception: cognition as a recursive integration of form and content. Yet there remains a subtle but important difference: in Kant’s system, the a priori is not merely a prior estimate to be updated by data; it is the constitutive condition that makes any update possible at all. In contrast, in the Kalman filter the prior is contingent and empirically learned. The analogy therefore holds at the level of structural function—the way prediction and correction are coordinated—but not at the level of ontological status: Kant’s a priori is transcendental, not statistical.1 2.6 From Transcendental Structure to Empirical Measure The previous sections described stability as a fundamental requirement for reason: cognition must remain within the limits of possible experience. To connect this abstract condition with practical measurement, we translate stability into concrete, observable quantities in a dynamical system. The closed-loop operator Φ (Eq. 2.3) captures how reason (via K) adjusts understanding (A) in response to experience (H). When Φ is close to instability or is ill-conditioned (for example, highly non-normal and sensitive to small changes), even minor observational disturbances can be greatly amplified. This results in confident but incorrect inferences—a measurable version of what Kant called transcendental illusion. Even if ρ(Φ) < 1, a highly non-normal or ill-conditioned Φ can exhibit large transient ampli- fication—an effect often analyzed via its pseudospectrum (intuitively, how small perturbations can shift apparent eigenvalues and inflate ∥(zI −Φ)−1∥)—so that the system is formally stable but practically unstable [1]2. In this regime, reason remains mathematically consistent, but its actual judgments may not be reliable. In summary, Kant’s synthesis can be stated in operational terms: reason aims to reduce uncertainty (maintain coherence) while remaining flexible to changing circumstances. The practical question is: how can we detect when this balance is lost? To answer this, we introduce a composite instability index that measures how far a reasoning system strays from epistemic stability. 1Kant’s a priori refers to the constitutive preconditions of possible experience (forms of intuition and categories), not an empirically estimated prior distribution. 2The pseudospectral radius quantifies the maximal amplification max|z|>0 ∥(zI −Φ)−1∥−1 of perturbations; see [1] for a detailed exposition. 5 3 Quantifying Epistemic Instability: From Ill-Conditioning to Hallucination This section introduces a measurable bridge between the theoretical stability framework of Sec. 2 and empirical analysis. We first review output-centric hallucination metrics and their limitations (Sec. 3.1), and then define a structural, system-theoretic measure of epistemic instability (H-Risk) that links internal conditioning to observable miscalibration. 3.1 Related Hallucination Metrics and Their Limitations Before defining our composite index, it is instructive to review existing approaches to hallucination measurement in large language models and to highlight their conceptual differences from our structural perspective. Recent studies have proposed a variety of output-level consistency metrics that detect hallucination by comparing multiple model responses under uncertainty, without referencing internal state dynamics [19, 20, 21, 22]. However, these methods are fundamentally output-centric: they measure the symptoms of hallucination in text outputs. They do not directly address the inferential dynamics—the closed-loop operator (defined in Eq. 2.3) whose conditioning and stability we aim to quantify [15, 16, 17, 3]. Additional related literature. Our approach draws on both classical and recent work con- necting uncertainty, confidence, and epistemic control. In cognitive neuroscience, the relationship between second-order uncertainty and confidence has been modeled through Bayesian predictive coding and the free-energy principle [23, 24, 25]. In psychology and AI, similar dynamics are discussed under “overthinking” or “self-doubt” effects, where reflective processes degrade pri- mary decision alignment [26, 27]. For philosophical grounding, Kant’s Critique of Pure Reason (A307/B364) and later interpretations of reason’s self-limiting function (e.g., [28, 29]) motivate our treatment of critique as a stability regulator rather than an unbounded introspection. This integration of control theory and critical philosophy extends prior cybernetic readings of Kant [5, 6, 7] by situating them within an explicit dynamical-systems framework. Our proposal (H-Risk) is intended to complement these metrics by providing a structural, system-theoretic gauge of how close a reasoning system is to epistemic instability. For clarity, we now make the generic regularizer S(Φ) concrete by defining H-Risk. Now, we propose the composite instability index, H-Risk ∝ 1 1 −ρ(Φ) | {z } stability margin−1 · κ(Φ) |{z} ill-conditioning · (I −Φ ⊗Φ)−1 | {z } integrated sensitivity · tr(HPH⊤) tr(R) | {z } innovation amplification . (3.1) We will report correlations between H-Risk and empirical hallucination rates. 3.2 Interpretation of the Components Each factor in the composite index corresponds to a distinct dimension of epistemic stability. The purpose of this decomposition is to separate structural fragility from informational distortion: the stability margin reflects dynamical boundedness, conditioning captures proportional coherence, integrated sensitivity measures temporal accumulation, and innovation amplification quantifies epistemic overreach. Together these four terms provide a minimal basis for describing how inference can remain coherent—or fail—within the bounds of possible experience. The next subsection connects each quantity to standard results in control and estimation theory. 6 3.3 Theoretical Grounding The composite structure of H-Risk draws upon established results in control and estimation theory. The stability-margin term follows standard definitions of closed-loop robustness in linear systems [15, 30]. The conditioning factor reflects non-normal sensitivity as analyzed in numerical linear algebra and pseudospectral theory [1]. The integrated sensitivity norm ∥(I −Φ ⊗Φ)−1∥ derives from system-level H2 or ℓ2 sensitivity analyses used in robust control [3]. Finally, the innovation amplification ratio tr(HPH⊤)/tr(R) is grounded in classical innovation analysis of stochastic filters [31, 17]. These components are combined here, for the first time, into a single dimensionless index of epistemic instability. • Stability margin 1 1 −ρ(Φ): quantifies proximity to dynamical instability. As the spectral radius ρ(Φ) approaches 1, the closed-loop system loses asymptotic stability. Cognitively, this represents reason operating at the very boundary of possible experience, where self- consistency becomes fragile. • Ill-conditioning κ(Φ): measures the sensitivity of the internal mapping between under- standing and observation. High condition numbers indicate non-normal dynamics where small perturbations in input can cause large changes in inference [1]. This reflects an epistemic regime in which minor ambiguities in data produce disproportionately confident conclusions. • Integrated sensitivity (I −Φ⊗Φ)−1 : accumulates the total energy of error propagation over time. It generalizes the notion of susceptibility—the extent to which cumulative feedback amplifies disturbances. A large value implies that transient deviations persist and resonate within the reasoning loop, a dynamical analogue of obsessive or self-reinforcing inference. • Innovation amplification tr(HPH⊤)/tr(R): compares the expected variance of innova- tions (model residuals) to the sensory noise level. When this ratio increases, the system increasingly interprets noise as signal—a quantitative marker of hallucination in perceptual inference. Overall, H-Risk aggregates these dimensions into a single scalar that estimates how close a reasoning system operates to epistemic breakdown. High H-Risk indicates that the internal inferential dynamics are both fragile and overconfident—a formal counterpart of transcendental illusion. 3.4 Philosophical Necessity and Interpretation The inclusion of these four components is not ad hoc, but follows from the transcendental condi- tions for stable cognition. Kant’s critical philosophy requires that reason maintain (i) coherence of form, (ii) proportionality between concept and intuition, and (iii) self-regulation within the limits of experience. Each term of H-Risk embodies one of these imperatives: the stability margin enforces bounded synthesis; the conditioning term preserves proportional coherence between model and data; the integrated sensitivity captures the recursive unity of apperception through time; and the innovation ratio measures how well empirical content remains subordinated to rational form. Together, these four quantities approximate the minimal conditions under which a recursive reasoning system can remain both internally consistent and empirically responsive. Hence, H-Risk is not merely a composite of engineering metrics, but a formal instantiation of reason’s critical structure—a quantitative gauge of how closely a system approaches the Kantian threshold between understanding and illusion. 7 3.5 Relation to Gating Extreme innovation spikes, corresponding to large Mahalanobis distances r⊤ t S−1 t rt, directly inflate the innovation amplification term of H-Risk. In practice, gating mechanisms or robust loss functions cap this contribution, thereby preventing H-Risk from diverging. Philosophically, this operationalizes reason’s self-limitation: it restrains inference from incorporating experiences that lie beyond its constitutive bounds. In the following empirical sections, this regulative principle is operationalized through the estimation of H-Risk in both simulated and large-scale generative systems. 3.6 Empirical Estimation We estimate H-Risk in two complementary settings. (i) Linear–Gaussian simulations (LTI). We vary the feedback gain K to sweep ρ(Φ) and κ(Φ) under fixed Gaussian noises (Q, R). This shows how ill-conditioning or proximity to instability inflates residual variance and yields confident errors even when ρ(Φ) < 1. Details and full metrics are in Sec. 4.1. (ii) Large language models (LLMs). For each prompt condition we approximate a surrogate ΦLLM by the local Jacobian Jt = ∂ht/∂ht−1 (Sec. 4.2). We use κ(Jt) as a proxy for local ill-conditioning and relate condition deltas in calibration metrics (ECE, Brier, LogLoss) to these structural quantities when available. This anchors the stability picture in observable behavior, linking the transcendental account to empirical evaluation. 4 Methods 4.1 Toy Linear System (LTI) Study Goal. We show that ill-conditioning of the closed-loop operator Φ = A −KH—even under formal stability ρ(Φ) < 1— inflates uncertainty and produces “confident errors,” i.e., a minimal dynamical analogue of hallucination. We quantify this effect using our composite index H-Risk and correlate it with error energy and an overconfidence proxy. System. Consider a discrete-time, linear–Gaussian system like (2.1), restated here as (4.1): xt+1 = Axt + wt, wt ∼N(0, Q), yt = Hxt + vt, vt ∼N(0, R), (4.1) with a Kalman-type update ˆxt|t = ˆxt|t−1 + K(yt −Hˆxt|t−1), so that the error dynamics read et = Φet−1 −Kvt +wt with Φ := A−KH. We study two two-dimensional baseline configurations: Baseline (B1): Stability-margin sweep. A = 0.92 0.20 0 0.95  , H =  1 0  , Q = σ2 wI, R = σ2 v. We tune the gain along a ray K(α) = αK0 with α ≥0 to drive ρ(Φ) toward 1 while monitoring κ(Φ). This isolates the effect of shrinking stability margin under fixed sensing geometry. Baseline (B2): Observability/conditioning sweep. A = 0.93 0.40 0 0.97  , H(ε) =  1 ε  , with ε ↓0 degrading effective observability and inducing non-normal, ill-conditioned closed-loop dynamics in Φ := A −KH. We reuse Q, R from (B1). 8 Measurement. For each configuration we compute: 1. Closed-loop spectrum and conditioning: compute ρ(Φ) and κ(Φ) = σmax(Φ)/σmin(Φ) (2-norm), following standard linear system analysis [15, 3]. 2. Integrated sensitivity: (I −Φ ⊗Φ)−1 2, interpreted as the ℓ2 gain of the Lyapunov operator LΦ(X) = ΦXΦ⊤via vec(X) = (I −Φ ⊗Φ)−1vec(Σ). 3. Steady covariance: P from the discrete Lyapunov equation P = ΦPΦ⊤+ Σ, Σ := Q + KRK⊤. We use the Bartels–Stewart algorithm (real Schur) for numerical stability [32, 2]. 4. Innovation-based calibration: Let St = HPt|t−1H⊤+R and innovation rt = yt−Hˆxt|t−1. Define the (possibly multivariate) normalized innovation Z2 t := r⊤ t S−1 t rt; in the present 1D sensor case this reduces to Z2 t = r2 t /St. Report the mean NIS := E[Z2 t ] and the upper quantile NISq := Quantileq(Z2 t ) with q = 0.99. We correlate H-Risk with both NIS and NISq (Pearson/Spearman) and also report a robust Theil–Sen slope with bootstrap 95% confidence intervals (B = 1000 resamples). Composite Index and Design Choices. We instantiate H-Risk as H-Risk = c1 1 1 −ρ(Φ) · c2 κ(Φ) · c3 (I −Φ ⊗Φ)−1 2 · c4 tr(HPH⊤) tr(R) , with (ci) chosen so that each factor is unit-scaled at the base point (B1, α = 1). We report ablations removing one factor at a time. In (B1) increasing α drives ρ(Φ)↑1 and typically κ(Φ)↑, yielding H-Risk↑, tr(P)↑, and higher H_proxy. In (B2) decreasing ε increases non-normality/ill-conditioning without necessarily pushing ρ(Φ) to 1; we still expect H-Risk and calibration error (NIS, NISq) to increase, demon- strating “formally stable but practically unstable” behavior [1]. Under perfect model/specification, E[Z2 t ] ≈dim(y); deviations above this baseline serve as an overconfidence proxy. Algorithm 1 LTI sweep for H-Risk and calibration metrics 1: Inputs: system (A, H, Q, R); parameter grid over ε or gain scaling α. 2: for each configuration do 3: set H ←[ 1 ε ] or K(α) = αK0. 4: compute closed-loop Φ = A −KH, spectrum ρ(Φ), and conditioning κ(Φ). 5: solve steady covariance P = ΦPΦ⊤+ Q + KRK⊤. 6: evaluate integrated sensitivity ∥(I −Φ ⊗Φ)−1∥2 and innovation variance S = HPH⊤+ R. 7: simulate trajectories (xt, yt) and record normalized innovations Z2 t = r2 t /S. 8: compute NIS = E[Z2 t ], NISq = Quantileq(Z2 t ). 9: calculate H-Risk from the four components and correlate with calibration metrics. 10: end for Algorithm (pseudocode). Full pseudocode and implementation details are provided in the Supplement. Visualization and Interpretation. We present (i) a ρ(Φ)–κ(Φ) plane with H-Risk shown as a color map (Epistemic Stability Map); (ii) a dual plot comprising H-Risk vs. observability coupling ε and H-Risk vs. calibration (NIS = E[Z2]) with regression lines, robust Theil–Sen slope, and bootstrap 95% CIs (Fig. 1); (iii) supplementary scatter plots for tail calibration NISq (q = 0.99), for increased non-normality (A12 = 0.75), and for the DARE re-tuning control, plus time traces highlighting transient growth in a high-H-Risk setting (Supp. Figs. S1–S3). 9 Reproducibility. We fix seeds and publish code and outputs. Unless stated otherwise, we use: A =  0.95 0.60 0 0.97  , H = [1 ε], (σw, σv) = (3 × 10−2, 10−2), T = 104. Filter misspec- ification: Qfilt = 0.12 Qtrue, Rfilt = 0.30 Rtrue. Gain policy: fixed_ref unless noted; con- trol runs use dare_per_eps. Quantile: q = 0.99 for tail NIS. We release CSV summaries (LTI_summary_stats.csv) and figures. A positive correlation between H-Risk and both NIS and NISq supports the claim that ill-conditioned inference approaches epistemic instability: small perturbations are amplified into confident errors even when ρ(Φ) < 1. This supplies a minimal, controlled validation of the Kantian thesis that reason fails at the limits of possible experience. 4.2 LLM Study: Kantian Feedback Goal. To examine whether the principle of epistemic stability extends from linear systems to large-scale LLMs, we test whether introducing an explicit “critique” step—analogous to Kant’s regulative reason—improves calibration and reduces hallucination. Jacobian proxy for ΦLLM. In large language models, the internal reasoning dynamics are not explicitly represented as a linear operator Φ = A −KH. To approximate this structure, we treat the local Jacobian of the hidden representation with respect to its previous context, Jt = ∂ht ∂ht−1 , as a first-order proxy ΦLLM ≈Jt. The condition number κ(Jt) = σmax(Jt)/σmin(Jt) quantifies how small perturbations in the internal state are amplified through the reasoning process—a direct analogue of ill-conditioning in Φ. High κ(Jt) thus indicates that the model operates in a locally unstable or overconfident regime, consistent with the Kantian view that reason approaches epistemic instability when its inferential dynamics become ill-conditioned. This perspective is consistent with Jacobian- and spectrum-based probes of internal stability in deep networks. See [33, 34, 35, 36]. This proxy does not assume an explicit feedback loop but captures local sensitivity of the internal reasoning process. Historical note. Jacobian-based stability analysis has a long tradition in control and neural- network theory, where spectrum and conditioning govern gradient stability and error amplification (e.g., vanishing/exploding modes). Our use of the Jacobian condition number as a proxy ΦLLM follows this established line, while recasting it as a Kantian stability operator linking internal representation dynamics to epistemic coherence. See early analyses on gradient dynamics and deep linear models [37, 38] alongside recent Jacobian/spectrum probes in modern deep networks [33, 34, 35, 36]. Experimental design. We consider two conceptual variants of Kantian feedback (see also the prompt-level manipulation used in the supplementary experiment): • Prompt-Critique-Revision Loop: The model first generates an initial answer, then produces a self-critique evaluating factual consistency and internal coherence. A revised response is subsequently generated conditioned on both the original answer and the critique, realizing a discrete feedback cycle (output →evaluation →update) at the prompt level. In the present supplementary experiment (C0/C1/C2), this full three-step pipeline is not instantiated; it can, however, be implemented as a sequence (initial output →self-critique →revision) to align execution with the conceptual design. 10 Figure 1: Instability–Calibration. The instability index H-Risk predicts miscalibration (normalized innovation squared; NIS). Correlations and slopes: Pearson r = 0.986 (95% CI [0.976, 0.994]), Spearman ρ = 1.000; Theil–Sen slope α = 0.168 (95% CI [0.111, 0.223]); OLS slope β = 0.110 (∆≈34.5%, similar; not plotted). See Supplement for ablations and controls (gain re-tuning, non-normality sweep, tail NIS at q = 0.99). Bias controls, bootstrapped CIs, and FDR adjustments are described in Sec. 5.3. • One-Step LMMSE Correction (conceptual form): The final-layer hidden representa- tion h is treated as an internal estimate of meaning. A linear probe H is trained to map h to factual targets y (or a pseudo-measurement derived from retrieval or consistency checks), with estimated covariances (P, R) obtained on held-out data. Before decoding, we apply a one-step linear minimum-mean-square-error correction, h′ ←h + PH⊤(HPH⊤+ R)−1(y −Hh), which acts as an analogue of the Kalman gain—adjusting latent representations toward empirical consistency prior to output. In the supplementary experiment (C0/C1/C2), this latent correction is not instantiated; it is included here only to clarify the theoretical connection. Preliminary results appear consistent with this hypothesis (Sec. 5.2). Datasets and metrics. We evaluate on factual question-answering subsets such as FEVER and NQ, measuring (i) factual accuracy, (ii) self-consistency across re-samplings, (iii) hallucination rate, and (iv) structural instability proxies (H-Risk; see Sec. 3 for definition and estimation). The correlation between H-Risk and observed hallucination provides a quantitative test of the Kantian feedback hypothesis—that reason’s critique restores epistemic stability in overconfident inference systems. 5 Results 5.1 LTI Results: Structural Instability Predicts Miscalibration We first validate the instability hypothesis on a controlled linear–Gaussian system, where the closed-loop operator is Φ = A −KH and instability is quantified by the composite index H-Risk (Section 3). We sweep the observation coupling ε in H = [1 ε] to stress observability (non- normality), and we induce mild model misspecification by underestimating process/measurement noise in the filter (Qfilt < Qtrue, Rfilt < Rtrue). To remove compensatory adaptation, the Kalman gain is held fixed (“fixed_ref”). 11 Main finding. As shown in Fig. 1, H-Risk strongly predicts miscalibration measured by the normalized innovation squared (NIS, E[Z2]): Pearson r = 0.986 (95% CI [0.976, 0.994]), Spearman ρ = 1.000, and a positive Theil–Sen slope α = 0.168. Thus, even with formal stability ρ(Φ) < 1, structural ill-conditioning (non-normality and poor stability margin) translates into systematic overconfidence. Tail calibration using the upper quantile (q = 0.99) shows the same trend (Pearson r = 0.992, Spearman ρ = 0.940), highlighting transient amplification typical of non-normal systems [1]. Full parameter sweeps and supplementary plots (S1-S3) are provided in the online repository. Quantitative summary. Complete statistics (correlations, robust slopes, and confidence intervals) are recorded in LTI_summary_stats.csv; raw per-configuration records appear in LTI_results_autotuned.csv. We avoid duplicating specific values in the prose to ensure consistency with the released CSVs. Controls and ablations. When we re-enable per-ε gain re-tuning via the discrete algebraic Riccati equation (DARE), the H-Risk–NIS slope flattens (Supp. Fig. S3), consistent with the view that “critique” (adaptive gain selection) stabilizes inference. Increasing non-normality (larger off-diagonal A12) steepens the slope (Supp. Fig. S1), while tail NIS (Supp. Fig. S2) accentuates the effect. Factor ablations of H-Risk (removing stability margin, condition number, integrated sensitivity, or innovation ratio) reduce correlations, supporting the necessity of the composite form. Experimental details. We use a 2D system with A = 0.95 0.60 0 0.97  , H = [1 ε], Gaussian noises with (σw, σv) = (3 × 10−2, 10−2), and filter misspecification (e.g., Qfilt = 0.12 Qtrue). H-Risk aggregates the stability margin 1 1−ρ(Φ), conditioning κ(Φ), integrated sensitivity ∥(I −Φ⊗Φ)−1∥2 [3], and innovation amplification tr(HPH⊤)/tr(R) [31, 17]. Steady-state covariances are obtained by solving the discrete Lyapunov equation via Bartels–Stewart [32, 2]. Code, seeds, and a machine- readable summary (LTI_summary_stats.csv) accompany the release. For transparency, the inline statistics are mirrored in LTI_summary_stats.csv, and raw per-configuration records appear in LTI_results_autotuned.csv. 5.2 LLM Results: Paired Condition Deltas The corresponding paired calibration differences between conditions are summarized in Table 1, showing mean deltas (∆= cond −C0) and BCa 95% confidence intervals across identical items. Negative ∆values (for ECE, Brier, and LogLoss) indicate improved calibration. Main finding. Paired comparisons against the baseline (C0) show condition-dependent changes in calibration. We report mean deltas (cond −C0) with BCa 95% confidence intervals; for ECE/Brier/LogLoss, negative ∆indicates improvement. Detailed, per-pair statistics are auto- computed from results_llm_experiment.csv and summarized in Table 1. Interpretation (preprint; to be finalized). Preliminary inspection suggests that under- specified prompts (C1) and misleading prompts (C2) tend to worsen calibration (positive ∆in Brier/ECE), while critique-style or structured prompts (C3) may improve it (negative ∆). This section will be finalized once all bootstrap statistics are confirmed. Over-reflection and second-order uncertainty. While critique prompts (C1/C2) made responses linguistically more cautious, they also slightly worsened probabilistic calibration (Fig. 2), with higher Brier and LogLoss relative to the base condition. This suggests an over-reflection 12 Table 1: Paired calibration deltas (C0 baseline, condition-level summary). Mean differences (∆ = cond −C0) with BCa 95% CIs across identical items. Only condition-level deltas are shown; negative ∆(ECE/Brier/LogLoss) indicates improvement. model prompt condition brier logloss halluc_rate (unspecified) (no-prompt) C1 0.010 [-0.020, 0.050] 0.138 [-0.276, 0.691] 0.010 [-0.020, 0.050] (unspecified) (no-prompt) C2 0.100 [0.020, 0.180] 1.382 [0.276, 2.487] 0.100 [0.020, 0.180] 0.025 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 brier (cond C0) (negative = improvement) (unspecified) | C1 (unspecified) | C2 Figure 2: Paired calibration deltas (forest plot). Error bars show BCa 95% confidence intervals around the mean ∆per condition. The vertical line at 0 denotes no change versus C0; points to the left (negative) indicate improved calibration. effect: the introduction of second-order uncertainty weakens the alignment between subjective confidence and factual correctness. Similar phenomena have been reported in both cognitive neuroscience and AI calibration studies [23, 24, 27, 25, 26]. In Friston’s free-energy formulation, excessive meta-uncertainty inflates the variance of belief updates; in LLMs, the critique step may analogously disperse probability mass, diluting truth assignment. Philosophical interpretation. From a Kantian standpoint, this degradation corresponds to a failure of practical reason. Reflection (Kritik) is essential for reason’s autonomy, yet when it turns entirely upon itself, its regulative function collapses into self-skepticism. In this sense, the over-reflection effect provides a concrete, computational instance of what Kant described as reason undermining its own stability when critique exceeds synthesis. Thus, LLMs under critique prompts exhibit a miniature analogue of the tension between epistemic self-correction and the loss of practical coherence—a digital form of reason turning against itself. Reproducibility. The table and figure are produced by analyze_condition_deltas.py (de- fault settings), which pairs items across conditions using domain+qid, ignores prompt text for pairing, and exports both the summary (condition_deltas_summary.csv) and detailed pairs (condition_deltas_long.csv). 13 Table 2: Calibration metrics by condition and domain. Reported values are means; confidence intervals appear in the supplement. Both equal-width and equal-frequency decile ECE are shown. Condition Domain Accuracy Confidence Overconfident Bc Consistency Bc Uncertainty Bc Refusal N C0 general 93.000 96.850 7.000 – – – 100 C0 logic – 80.650 – 95.652 3.000 3.000 100 C0 reading – 85.100 – – 2.000 7.000 100 C1 general 92.000 96.950 8.000 – – – 100 C1 logic – 80.600 – 97.826 7.000 5.000 100 C1 reading – 84.300 – – 5.000 10.000 100 C2 general 83.000 96.800 17.000 – – – 100 C2 logic – 79.800 – 98.889 4.000 5.000 100 C2 reading – 85.600 – – 8.000 11.000 100 5.3 LLM Robustness: Sample Bias Controls and Cross-Model Replication Sampling controls. To mitigate sample bias, we use stratified sampling over dataset and item attributes. For factual QA, strata include dataset (FEVER, NQ), claim/question type, topical category, and answer-length bucket. Each condition (C0 baseline, C1 under-specified, C2 misleading) draws equal counts per stratum. In extended studies we additionally compare Prompt-Critique-Revision and one-step LMMSE variants. We randomize prompt order and seed the generator; results aggregate over a sweep of random seeds, with per-seed summaries released (CSV). Train/validation/test splits follow the original dataset protocols; only the test split is used for headline numbers. Estimation controls. Calibration is evaluated with both equal-width and equal-frequency decile ECE; we report MCE, Brier, and log loss in the supplement. We report the Brier score (mean squared error of predicted probabilities), a standard measure of probabilistic accuracy and calibration. Uncertainty is quantified by BCa bootstrap confidence intervals. For multiple pairwise comparisons across prompts/models we apply Benjamini-Hochberg FDR control. Effect sizes (Cliff’s δ, Hedges’ g) are reported alongside p-values. Decoding controls. Decoding hyperparameters (temperature, nucleus/top-p, max tokens, stop conditions) are held fixed across conditions and models. Self-consistency is measured with S resamples per prompt under identical hyperparameters; hallucination/accuracy metrics are computed on the majority-vote output and on the first sample (both reported). Where applicable, we normalize response length when analyzing confidence/accuracy correlations. Cross-model replication (preliminary). In this version, we replicated the feedback exper- iments using GPT-4o-mini and GPT-5. Both models reproduced the same qualitative trends: (i) feedback reduced ECE and hallucination rates relative to no-feedback baselines, and (ii) higher H-Risk proxies predicted higher hallucination rates. Supplementary tables and figures are planned for inclusion in a future version (Tables S1–S3; Figs. S4–S6). Summary. Aggregated over models in this sweep, mean confidence was 92.9%, correctness 0.70, and overconfident rate 0.233 (total N=30), computed from results_llm_experiment.csv. 6 Discussion We interpret hallucination as a form of ill-conditioned reasoning: when Φ approaches instability or becomes poorly conditioned, noise and model mismatch are amplified into confident errors. Within this interpretation, philosophical critique functions as a regularizing feedback mechanism that mitigates epistemic instability rather than as a literal control gain. 14 Philosophical interpretation. While Kant’s transcendental architecture was conceived as a structural condition for the possibility of cognition, the control-theoretic formulation provides a dynamic analogy. Within this framework, reason can be understood as a self-regulating process that continuously stabilizes understanding within empirical bounds. This suggests a shared design principle—rather than a strict equivalence—between philosophical critique and mathematical feedback: both maintain coherence through bounded self-correction. Limitations and Broader Impact Limitations. Our analysis relies on linear-Gaussian assumptions and interprets epistemic stability through the spectral conditioning of Φ = A −KH. While this abstraction provides a tractable and interpretable measure, it remains an indirect proxy for cognitive coherence and may not capture nonlinear, contextual, or social aspects of reasoning. The empirical analysis was conducted on a limited set of models and prompt types, which may not generalize to multimodal or non-English settings. Philosophically, the Kantian mapping is intended as a structural and interpretive framework rather than a historical or doctrinal claim. Broader Impact. By linking Kantian epistemology with state-space inference, this work illustrates how philosophical frameworks can inform quantitative analyses of model calibration and epistemic stability. It may foster interdisciplinary dialogue between philosophy of mind and AI safety, helping to identify and mitigate overconfidence or instability in reasoning systems. Nevertheless, translating transcendental categories into mathematical form inevitably involves simplification and should be regarded as heuristic rather than prescriptive. Ethical reflection remains essential to ensure that such formal analogies deepen understanding rather than reduce philosophical inquiry to computation. As a practical guideline, we recommend epistemic respon- sibility in deployment—transparent reporting of calibration, uncertainty, and failure modes for systems influenced by these ideas. Use of Generative AI Tools. In accordance with current publication guidelines (e.g., Nature, NeurIPS, and arXiv policies), the author discloses the use of OpenAI’s ChatGPT for limited assistance in language editing, code refactoring, and conceptual clarification. All content, analyses, and conclusions were independently verified by the author. No generative AI was used for creating or modifying figures. 7 Reproducibility Checklist We will release scripts, fixed seeds, and configuration files upon completion of experiments. Computational note. The steady-state covariance P is obtained by solving the discrete- time Lyapunov equation P = ΦPΦ⊤+ Σ using the Bartels–Stewart algorithm based on Schur decomposition [2]. The existence and uniqueness of a positive-definite P under ρ(Φ) < 1 follow from standard results in optimal filtering and Lyapunov stability theory [16, 3]. Acknowledgments Author Note. This work was conducted in a personal capacity, outside the author’s employment with another organization. ToppyMicroServices OÜ is the author’s independently owned, early- stage Tech startup listed as a correspondence affiliation; it is not the author’s employer. No external funding was received. The views expressed are solely those of the author. No employer endorsement is implied; any errors are the author’s alone. If a good-faith concern is raised by 15 the employer, the author will promptly address it by issuing a clarifying revision. While the employer is not named in this version, the author is actively seeking approval to disclose the employer’s name (as an optional secondary affiliation) in a future revision, if permitted. Competing Interests The author is the founder and owner of ToppyMicroServices OÜ (pre-revenue at the time of writing). The author reports no commercial sponsorships, client relationships, or other competing interests relevant to this work. Compliance Statement This personal research was conceived and completed outside the scope of the author’s employment. It was carried out on personally owned hardware and personal cloud/accounts; no employer facilities, data, source code, or confidential information were used. To the author’s knowledge, the work does not fall under any employer intellectual property assignment, work-for-hire, or similar clause and does not rely on proprietary materials of the employer. The manuscript does not use the employer’s name, trademarks, or branding. References [1] Lloyd N. Trefethen and Mark Embree. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press, Princeton, NJ, 2005. ISBN 9780691119465. [2] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, 4th edition, 2013. [3] Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice Hall, 1996. [4] Immanuel Kant. Critique of Pure Reason. Johann Friedrich Hartknoch, 1781. A/B editions, translated by P. Guyer and A. W. Wood, Cambridge University Press, 1998. [5] Carl B. Sachs. A cybernetic theory of persons: how sellars naturalized kant. Philosoph- ical Inquiries (philinq), 2022. URL https://philinq.it/index.php/philinq/article/ download/389/256. [6] J. K. Burmeister. Kant, cybernetics, and cybersecurity: Integration and implications. Systemics, Cybernetics and Informatics, 2021. URL https://www.iiisci.org/journal/ pdv/sci/pdfs/IP132LL21.pdf. [7] Thomas Marlowe. Philosophy and cybernetics: Questions and issues. Systemics, Cy- bernetics and Informatics, 2021. URL https://www.iiisci.org/Journal/PDV/sci/pdfs/ IP130LL21.pdf. [8] Ziwei Ji, Zhiyuan Zeng, Yu Li, Chiyuan Zhang, and Percy Liang. Llm internal states reveal hallucination risk faced with novelty. In Proceedings of the 8th Workshop on Analysing and Interpreting Neural Networks for NLP (BlackboxNLP 2024). Association for Computational Linguistics, 2024. URL https://aclanthology.org/2024.blackboxnlp-1.6/. [9] Anonymous (assumed). On the fundamental impossibility of hallucination control in llms. arXiv preprint arXiv:2506.06382, 2025. URL https://arxiv.org/abs/2506.06382. 16 [10] Henry E. Allison. Kant’s Transcendental Idealism: An Interpretation and Defense. Yale University Press, New Haven, CT, 2nd edition, 2004. [11] Paul Guyer. Kant. Routledge Philosophers. Routledge, 2006. [12] Lawrence R. Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989. [13] Neil Gordon, David Salmond, and Adrian Smith. Novel approach to nonlinear/non-gaussian bayesian state estimation. IEE Proceedings F (Radar and Signal Processing), 140(2):107–113, 1993. [14] Arnaud Doucet, Nando de Freitas, and Neil Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer, 2001. [15] Thomas Kailath. Linear Systems. Prentice-Hall, 1980. [16] Brian D. O. Anderson and John B. Moore. Optimal Filtering. Prentice-Hall, 1979. [17] Peter S. Maybeck. Stochastic Models, Estimation, and Control, volume 1. Academic Press, 1979. [18] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82:35–45, 1960. [19] Seongho Joo, Kyungmin Min, Jahyun Koo, and Kyomin Jung. Black-box hallucination detection via consistency under the uncertain expression. arXiv preprint arXiv:2509.21999, 2025. [20] Hao Li et al. How to detect and defeat molecular mirage: A metric-driven benchmark for hallucination in llm-based molecular comprehension. arXiv preprint arXiv:2504.12314, 2025. [21] Anonymous. Re-evaluating hallucination detection in llms. arXiv preprint arXiv:2508.08285, 2025. Update authors once the paper is public. [22] Aisha Alansari and Hamzah Luqman. A comprehensive survey of hallucination in large language models. arXiv preprint arXiv:2510.06265, 2025. [23] Karl Friston. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127–138, 2010. [24] Alexandre Pouget, Jan Drugowitsch, and Adam Kepecs. Confidence and certainty: distinct probabilistic quantities for different goals. Nature Neuroscience, 19(3):366–374, 2016. [25] Sabrina J. Mielke, Jason Wei, Rylan Schaeffer, Noah D. Goodman, and Samuel R. Bowman. Overthinking the truth: Understanding how language models process uncertainty in reasoning. arXiv preprint arXiv:2310.01361, 2023. [26] Hao Zhang, Weizhi Liu, Yu Chen, Zhihua Zhang, Shuang Wu, and Dong Yu. Self-doubt learning improves calibration under distribution shift. arXiv preprint arXiv:2403.12102, 2024. [27] Meelis Kull, Telmo M. Filho, and Peter Flach. Beyond temperature scaling: Obtaining well- calibrated multiclass probabilities with dirichlet calibration. Advances in Neural Information Processing Systems (NeurIPS), 2019. [28] Dieter Henrich. The Unity of Reason: Essays on Kant’s Philosophy. Harvard University Press, 1994. 17 [29] Allen W. Wood. Kant’s Rational Theology. Cornell University Press, 1978. [30] Chi-Tsong Chen. Linear System Theory and Design. Oxford University Press, 4th edition, 2013. [31] Andrew H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, 1970. [32] R. H. Bartels and G. W. Stewart. Solution of the matrix equation ax + xb = c. Communi- cations of the ACM, 15(9):820–826, 1972. [33] Amirata Ghorbani, Shankar Krishnan, Yin Xiao, Been Kim, and Percy S. Liang. An investigation into neural network jacobians and their spectrum. In ICML, 2019. [34] Karthik Sankararaman, Elad Hoffer, and Daniel Soudry. The impact of jacobian conditioning on generalization in deep learning. In NeurIPS, 2020. [35] Arthur Jacot, Stefano Spigler, Frank Gabriel, and Clément Hongler. Implicit regularization of neural tangent kernels. JMLR, 2021. [36] Greg Yang, Etai Littwin, and Andrew Saxe. Tensor programs iii: Neural matrix laws. In ICML, 2022. [37] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013. [38] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013. [39] M. Vidyasagar. Nonlinear Systems Analysis. Prentice-Hall, 2nd edition, 1993. [40] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. Proceedings of the 34th International Conference on Machine Learning (ICML), pages 1321–1330, 2017. [41] Matthias Minderer, Shreya Shankar, Neil Houlsby, and Alexey Dosovitskiy. Revisiting the calibration of modern neural networks. Advances in Neural Information Processing Systems (NeurIPS), 2021. [42] Sean Welleck, Ximing Lu, Peter West, Yoshua Bengio, and Yejin Choi. Faithful reasoning using language models. International Conference on Learning Representations (ICLR), 2024. [43] Noah Shinn, Adam R. Brown, and Noah D. Goodman. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023. [44] Anthropic. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022. [45] Zhengbao Jiang, Shixiang Shane Gu, Jason Wei, and Denny Zhou. Self-consistency improves chain-of-thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022. [46] Henry E. Allison. Kant’s Transcendental Idealism: An Interpretation and Defense. Yale University Press, 1983. [47] Paul Guyer. Kant and the Claims of Knowledge. Cambridge University Press, 1987. [48] Nicholas Rescher. Epistemic Stability and Rational Belief. Rowman & Littlefield, 2003. [49] Yaakov Bar-Shalom and Thomas E. Fortmann. Tracking and Data Association. Academic Press, 1988. 18 [50] Yaakov Bar-Shalom, X.-Rong Li, and Thiagalingam Kirubarajan. Estimation with Applica- tions to Tracking and Navigation. Wiley, 2001. [51] Samuel S. Blackman. Multiple-Target Tracking with Radar Applications. Artech House, 1986. [52] Adam Kepecs, Naoshige Uchida, Hasnain A. Zariwala, and Zachary F. Mainen. Neural correlates, computation and behavioural impact of decision confidence. Nature, 455(7210): 227–231, 2008. 19
Stable but Miscalibrated: A Kantian View on Overconfidence from Filters to Large Language Models Akira Okutomi ToppyMicroServices OÜ, Tallinn, Estonia Abstract We reinterpret Kant's Critique of Pure Reason as a theory of feedback stability, viewing reason as a regulator that keeps inference within the bounds of possible experience. We formalize this intuition via a composite instability index (H-Risk) combining spectral margin, conditioning, temporal sensitivity, and innovation amplification. In linear-Gaussian simulations, higher H-Risk predicts overconfident errors even under formal stability, revealing a gap between nominal and epistemic stability. Extending to large language models (LLMs), we find that fragile internal dynamics correlate with miscalibration and hallucination, while critique-style prompts show mixed effects on calibration and hallucination. These results suggest a structural bridge between Kantian self-limitation and feedback control, offering a principled lens for diagnosing-and selectively reducing-overconfidence in reasoning systems. This is a preliminary version; supplementary experiments and broader replication will be reported in a future revision. 1 Introduction We hypothesize that hallucination-whether in human thought or machine inference-arises when the reasoning process becomes unstable or ill-conditioned. In numerical analysis and inverse problems, an "ill-conditioned" system is one where small perturbations in data or parameters cause disproportionately large changes in the solution [1, 2, 3]. In control-theoretic terms, this corresponds to a feedback system whose internal dynamics (the closed-loop operator, defined later in Eq. 2.3) are near instability or highly sensitive to perturbations. Within this framework, philosophical critique can be understood as a meta-level adjustment of the gain K that seeks to reduce posterior uncertainty while preserving stability. Kant enters here not as a historical ornament but as a theorist of feedback between perception, inference, and judgment [4]. His critical philosophy explicitly sought a stable relation between empirical intuition and conceptual reasoning-an equilibrium that prefigures the feedback logic of estimation and control. This analogy motivates our attempt to formalize epistemic stability as a dynamical property rather than a purely linguistic one. We are primarily concerned with how epistemic stability-understood as the conditioning and robustness of the reasoning process-can be analyzed, quantified, and experimentally tested across classical control systems and large language models. 1 16 Oct 2025 Contributions. This paper makes three main contributions: 1. A control-theoretic reconstruction of Kant's tripartite cognitive architecture (sensibilityunderstanding-reason) as a state-space feedback model. 2. A quantitative framework interpreting hallucination as a manifestation of epistemic instability, characterized by the spectral radius ρ(Φ) and condition number κ(Φ) of the closed-loop operator. 3. An empirical framework linking theory to practice through a composite stability metric S(Φ) (instantiated as H-Risk) and experiments spanning linear systems and large language models. Prior work has explored connections between Kantian themes, cybernetics, and epistemic feedback [5, 6, 7], and recent studies have analyzed instability and hallucination in AI systems through related notions of internal model fragility [8, 9]. To our knowledge, however, this paper presents a mathematically explicit and unified structural framework linking Kant's philosophy of cognition to the structure of the Kalman closed-loop operator-identifying epistemic stability as a shared design principle across classical control and modern generative models. This claim concerns the structural mapping and empirical program developed here; it is interpretive rather than exegetical, and does not assert doctrinal identity. 2 Theory: From Kant to Closed-Loop Stability 2.1 Philosophical Motivation Kant's philosophy of cognition is fundamentally concerned with the question: under what conditions is cognition possible at all? In the Critique of Pure Reason (A94/B126, A307/B364), Kant distinguishes three mutually dependent layers ("tripartite") of cognitive architecture: sensibility (Sinnlichkeit), understanding (Verstand), and reason (Vernunft). Together they realize a recursive synthesis of experience: sensibility provides appearances (Anschauungen) as raw input, understanding organizes them under concepts (Kategorien), and reason regulates understanding by enforcing systematic unity and restraining it from transgressing the bounds of possible experience. For accessible expositions of this tripartite architecture, see Allison (2004) and Guyer (2006) for modern commentaries on Kant's epistemic structure[10, 11]. Importantly, this hierarchy is not merely a static taxonomy of faculties but a recursive, self-correcting process: reason continually monitors and adjusts the inferential activity of understanding, ensuring that cognition remains coherent and bounded over time. In modern terms, such recursion constitutes the minimal form of a feedback system-a loop that maintains epistemic stability within the limits of possible experience. Scope note. This paper does not attempt to encompass the entirety of Kant's Critique of Pure Reason. We abstract only the structural core relevant to cognition's possibility: the relation among sensibility, understanding, and reason, and the regulatory role by which reason constrains understanding. Topics such as the Transcendental Aesthetic's treatment of space and time or the Dialectic's antinomies fall outside our present scope. Our aim is therefore interpretive and structural rather than exegetical: a principled reconstruction of the feedback architecture that underwrites epistemic stability, not a claim of doctrinal completeness. Accordingly, novelty claims in this paper are limited to the structural mapping and empirical methodology, not to historical exegesis of Kant's text. 2.2 From philosophical structure to state-space form. To make this abstract architecture more precise, we can model cognition as a feedback process between prediction and observation. Any first-order approximation of this process can be 2 expressed as a linear dynamical system: xt+1 = Axt + wt, yt = Hxt + vt, (2.1) Here, wt represents the process (system) noise and vt the measurement noise, typically modeled as zero-mean Gaussian variables with covariances Q and R, respectively. In this formulation, xt represents the organized content of understanding (the internal model of the world). The variable yt denotes the manifold of appearances provided by sensibility. The matrices (A, H) encode the structured synthesis between internal states and observed data. The recursive correction of xt by reason is then expressed by the update ˆxt|t = ˆxt|t-1 + Kt(yt -Hˆxt|t-1), (2.2) where Kt plays the role of the regulative function of reason. Thus, the linear-Gaussian state-space model is not merely an arbitrary mathematical analogy. It is the simplest formal instantiation of Kant's triadic epistemic architecture under small deviations and rational coherence. By introducing noise terms, we acknowledge that both the world and our measurements are imperfect, and that cognition must operate robustly despite these uncertainties. While the Kalman formulation provides the simplest linear-Gaussian realization of this feedback architecture, the Kantian structure itself is more general: any recursive predictioncorrection process-such as Hidden Markov model filtering or particle filtering-embodies the same epistemic form [12, 13, 14]. The direction of correspondence is therefore one-way: from Kantian cognition to Kalman filtering (and its variants), not vice versa. 2.2.1 Epistemic Stability as a Transcendental Condition For Kant, understanding has objective validity only within the bounds of possible experience. Beyond these bounds, reason generates what he calls transcendental illusions or antinomies. In our dynamical formulation, this boundary corresponds to the stability domain of the closed-loop operator: Φ ≡A -KH. (2.3) This operator, standard in control and estimation theory, defines the internal error dynamics under the feedback gain K [15, 16, 17]. If ρ(Φ) 0 ∥(zI -Φ)-1∥-1 of perturbations; see [1] for a detailed exposition. 5 3 Quantifying Epistemic Instability: From Ill-Conditioning to Hallucination This section introduces a measurable bridge between the theoretical stability framework of Sec. 2 and empirical analysis. We first review output-centric hallucination metrics and their limitations (Sec. 3.1), and then define a structural, system-theoretic measure of epistemic instability (H-Risk) that links internal conditioning to observable miscalibration. 3.1 Related Hallucination Metrics and Their Limitations Before defining our composite index, it is instructive to review existing approaches to hallucination measurement in large language models and to highlight their conceptual differences from our structural perspective. Recent studies have proposed a variety of output-level consistency metrics that detect hallucination by comparing multiple model responses under uncertainty, without referencing internal state dynamics [19, 20, 21, 22]. However, these methods are fundamentally output-centric: they measure the symptoms of hallucination in text outputs. They do not directly address the inferential dynamics-the closed-loop operator (defined in Eq. 2.3) whose conditioning and stability we aim to quantify [15, 16, 17, 3]. Additional related literature. Our approach draws on both classical and recent work connecting uncertainty, confidence, and epistemic control. In cognitive neuroscience, the relationship between second-order uncertainty and confidence has been modeled through Bayesian predictive coding and the free-energy principle [23, 24, 25]. In psychology and AI, similar dynamics are discussed under "overthinking" or "self-doubt" effects, where reflective processes degrade primary decision alignment [26, 27]. For philosophical grounding, Kant's Critique of Pure Reason (A307/B364) and later interpretations of reason's self-limiting function (e.g., [28, 29]) motivate our treatment of critique as a stability regulator rather than an unbounded introspection. This integration of control theory and critical philosophy extends prior cybernetic readings of Kant [5, 6, 7] by situating them within an explicit dynamical-systems framework. Our proposal (H-Risk) is intended to complement these metrics by providing a structural, system-theoretic gauge of how close a reasoning system is to epistemic instability. For clarity, we now make the generic regularizer S(Φ) concrete by defining H-Risk. Now, we propose the composite instability index, H-Risk ∝ 1 1 -ρ(Φ) | {z } stability margin-1 · κ(Φ) |{z} ill-conditioning · (I -Φ ⊗Φ)-1 | {z } integrated sensitivity · tr(HPH⊤) tr(R) | {z } innovation amplification . (3.1) We will report correlations between H-Risk and empirical hallucination rates. 3.2 Interpretation of the Components Each factor in the composite index corresponds to a distinct dimension of epistemic stability. The purpose of this decomposition is to separate structural fragility from informational distortion: the stability margin reflects dynamical boundedness, conditioning captures proportional coherence, integrated sensitivity measures temporal accumulation, and innovation amplification quantifies epistemic overreach. Together these four terms provide a minimal basis for describing how inference can remain coherent-or fail-within the bounds of possible experience. The next subsection connects each quantity to standard results in control and estimation theory. 6 3.3 Theoretical Grounding The composite structure of H-Risk draws upon established results in control and estimation theory. The stability-margin term follows standard definitions of closed-loop robustness in linear systems [15, 30]. The conditioning factor reflects non-normal sensitivity as analyzed in numerical linear algebra and pseudospectral theory [1]. The integrated sensitivity norm ∥(I -Φ ⊗Φ)-1∥ derives from system-level H2 or l2 sensitivity analyses used in robust control [3]. Finally, the innovation amplification ratio tr(HPH⊤)/tr(R) is grounded in classical innovation analysis of stochastic filters [31, 17]. These components are combined here, for the first time, into a single dimensionless index of epistemic instability. • Stability margin 1 1 -ρ(Φ): quantifies proximity to dynamical instability. As the spectral radius ρ(Φ) approaches 1, the closed-loop system loses asymptotic stability. Cognitively, this represents reason operating at the very boundary of possible experience, where selfconsistency becomes fragile. • Ill-conditioning κ(Φ): measures the sensitivity of the internal mapping between understanding and observation. High condition numbers indicate non-normal dynamics where small perturbations in input can cause large changes in inference [1]. This reflects an epistemic regime in which minor ambiguities in data produce disproportionately confident conclusions. • Integrated sensitivity (I -Φ⊗Φ)-1 : accumulates the total energy of error propagation over time. It generalizes the notion of susceptibility-the extent to which cumulative feedback amplifies disturbances. A large value implies that transient deviations persist and resonate within the reasoning loop, a dynamical analogue of obsessive or self-reinforcing inference. • Innovation amplification tr(HPH⊤)/tr(R): compares the expected variance of innovations (model residuals) to the sensory noise level. When this ratio increases, the system increasingly interprets noise as signal-a quantitative marker of hallucination in perceptual inference. Overall, H-Risk aggregates these dimensions into a single scalar that estimates how close a reasoning system operates to epistemic breakdown. High H-Risk indicates that the internal inferential dynamics are both fragile and overconfident-a formal counterpart of transcendental illusion. 3.4 Philosophical Necessity and Interpretation The inclusion of these four components is not ad hoc, but follows from the transcendental conditions for stable cognition. Kant's critical philosophy requires that reason maintain (i) coherence of form, (ii) proportionality between concept and intuition, and (iii) self-regulation within the limits of experience. Each term of H-Risk embodies one of these imperatives: the stability margin enforces bounded synthesis; the conditioning term preserves proportional coherence between model and data; the integrated sensitivity captures the recursive unity of apperception through time; and the innovation ratio measures how well empirical content remains subordinated to rational form. Together, these four quantities approximate the minimal conditions under which a recursive reasoning system can remain both internally consistent and empirically responsive. Hence, H-Risk is not merely a composite of engineering metrics, but a formal instantiation of reason's critical structure-a quantitative gauge of how closely a system approaches the Kantian threshold between understanding and illusion. 7 3.5 Relation to Gating Extreme innovation spikes, corresponding to large Mahalanobis distances r⊤ t S-1 t rt, directly inflate the innovation amplification term of H-Risk. In practice, gating mechanisms or robust loss functions cap this contribution, thereby preventing H-Risk from diverging. Philosophically, this operationalizes reason's self-limitation: it restrains inference from incorporating experiences that lie beyond its constitutive bounds. In the following empirical sections, this regulative principle is operationalized through the estimation of H-Risk in both simulated and large-scale generative systems. 3.6 Empirical Estimation We estimate H-Risk in two complementary settings. (i) Linear-Gaussian simulations (LTI). We vary the feedback gain K to sweep ρ(Φ) and κ(Φ) under fixed Gaussian noises (Q, R). This shows how ill-conditioning or proximity to instability inflates residual variance and yields confident errors even when ρ(Φ) < 1. Details and full metrics are in Sec. 4.1. (ii) Large language models (LLMs). For each prompt condition we approximate a surrogate ΦLLM by the local Jacobian Jt = ∂ht/∂ht-1 (Sec. 4.2). We use κ(Jt) as a proxy for local ill-conditioning and relate condition deltas in calibration metrics (ECE, Brier, LogLoss) to these structural quantities when available. This anchors the stability picture in observable behavior, linking the transcendental account to empirical evaluation. 4 Methods 4.1 Toy Linear System (LTI) Study Goal. We show that ill-conditioning of the closed-loop operator Φ = A -KH-even under formal stability ρ(Φ) < 1- inflates uncertainty and produces "confident errors," i.e., a minimal dynamical analogue of hallucination. We quantify this effect using our composite index H-Risk and correlate it with error energy and an overconfidence proxy. System. Consider a discrete-time, linear-Gaussian system like (2.1), restated here as (4.1): xt+1 = Axt + wt, wt ∼N(0, Q), yt = Hxt + vt, vt ∼N(0, R), (4.1) with a Kalman-type update ˆxt|t = ˆxt|t-1 + K(yt -Hˆxt|t-1), so that the error dynamics read et = Φet-1 -Kvt +wt with Φ := A-KH. We study two two-dimensional baseline configurations: Baseline (B1): Stability-margin sweep. A = 0.92 0.20 0 0.95 , H = 1 0 , Q = σ2 wI, R = σ2 v. We tune the gain along a ray K(α) = αK0 with α ≥0 to drive ρ(Φ) toward 1 while monitoring κ(Φ). This isolates the effect of shrinking stability margin under fixed sensing geometry. Baseline (B2): Observability/conditioning sweep. A = 0.93 0.40 0 0.97 , H(ε) = 1 ε , with ε ↓0 degrading effective observability and inducing non-normal, ill-conditioned closed-loop dynamics in Φ := A -KH. We reuse Q, R from (B1). 8 Measurement. For each configuration we compute: 1. Closed-loop spectrum and conditioning: compute ρ(Φ) and κ(Φ) = σmax(Φ)/σmin(Φ) (2-norm), following standard linear system analysis [15, 3]. 2. Integrated sensitivity: (I -Φ ⊗Φ)-1 2, interpreted as the l2 gain of the Lyapunov operator LΦ(X) = ΦXΦ⊤via vec(X) = (I -Φ ⊗Φ)-1vec(Σ). 3. Steady covariance: P from the discrete Lyapunov equation P = ΦPΦ⊤+ Σ, Σ := Q + KRK⊤. We use the Bartels-Stewart algorithm (real Schur) for numerical stability [32, 2]. 4. Innovation-based calibration: Let St = HPt|t-1H⊤+R and innovation rt = yt-Hˆxt|t-1. Define the (possibly multivariate) normalized innovation Z2 t := r⊤ t S-1 t rt; in the present 1D sensor case this reduces to Z2 t = r2 t /St. Report the mean NIS := E[Z2 t ] and the upper quantile NISq := Quantileq(Z2 t ) with q = 0.99. We correlate H-Risk with both NIS and NISq (Pearson/Spearman) and also report a robust Theil-Sen slope with bootstrap 95% confidence intervals (B = 1000 resamples). Composite Index and Design Choices. We instantiate H-Risk as H-Risk = c1 1 1 -ρ(Φ) · c2 κ(Φ) · c3 (I -Φ ⊗Φ)-1 2 · c4 tr(HPH⊤) tr(R) , with (ci) chosen so that each factor is unit-scaled at the base point (B1, α = 1). We report ablations removing one factor at a time. In (B1) increasing α drives ρ(Φ)↑1 and typically κ(Φ)↑, yielding H-Risk↑, tr(P)↑, and higher H_proxy. In (B2) decreasing ε increases non-normality/ill-conditioning without necessarily pushing ρ(Φ) to 1; we still expect H-Risk and calibration error (NIS, NISq) to increase, demonstrating "formally stable but practically unstable" behavior [1]. Under perfect model/specification, E[Z2 t ] ≈dim(y); deviations above this baseline serve as an overconfidence proxy. Algorithm 1 LTI sweep for H-Risk and calibration metrics 1: Inputs: system (A, H, Q, R); parameter grid over ε or gain scaling α. 2: for each configuration do 3: set H ←[ 1 ε ] or K(α) = αK0. 4: compute closed-loop Φ = A -KH, spectrum ρ(Φ), and conditioning κ(Φ). 5: solve steady covariance P = ΦPΦ⊤+ Q + KRK⊤. 6: evaluate integrated sensitivity ∥(I -Φ ⊗Φ)-1∥2 and innovation variance S = HPH⊤+ R. 7: simulate trajectories (xt, yt) and record normalized innovations Z2 t = r2 t /S. 8: compute NIS = E[Z2 t ], NISq = Quantileq(Z2 t ). 9: calculate H-Risk from the four components and correlate with calibration metrics. 10: end for Algorithm (pseudocode). Full pseudocode and implementation details are provided in the Supplement. Visualization and Interpretation. We present (i) a ρ(Φ)-κ(Φ) plane with H-Risk shown as a color map (Epistemic Stability Map); (ii) a dual plot comprising H-Risk vs. observability coupling ε and H-Risk vs. calibration (NIS = E[Z2]) with regression lines, robust Theil-Sen slope, and bootstrap 95% CIs (Fig. 1); (iii) supplementary scatter plots for tail calibration NISq (q = 0.99), for increased non-normality (A12 = 0.75), and for the DARE re-tuning control, plus time traces highlighting transient growth in a high-H-Risk setting (Supp. Figs. S1-S3). 9 Reproducibility. We fix seeds and publish code and outputs. Unless stated otherwise, we use: A = 0.95 0.60 0 0.97 , H = [1 ε], (σw, σv) = (3 × 10-2, 10-2), T = 104. Filter misspecification: Qfilt = 0.12 Qtrue, Rfilt = 0.30 Rtrue. Gain policy: fixed_ref unless noted; control runs use dare_per_eps. Quantile: q = 0.99 for tail NIS. We release CSV summaries (LTI_summary_stats.csv) and figures. A positive correlation between H-Risk and both NIS and NISq supports the claim that ill-conditioned inference approaches epistemic instability: small perturbations are amplified into confident errors even when ρ(Φ) < 1. This supplies a minimal, controlled validation of the Kantian thesis that reason fails at the limits of possible experience. 4.2 LLM Study: Kantian Feedback Goal. To examine whether the principle of epistemic stability extends from linear systems to large-scale LLMs, we test whether introducing an explicit "critique" step-analogous to Kant's regulative reason-improves calibration and reduces hallucination. Jacobian proxy for ΦLLM. In large language models, the internal reasoning dynamics are not explicitly represented as a linear operator Φ = A -KH. To approximate this structure, we treat the local Jacobian of the hidden representation with respect to its previous context, Jt = ∂ht ∂ht-1 , as a first-order proxy ΦLLM ≈Jt. The condition number κ(Jt) = σmax(Jt)/σmin(Jt) quantifies how small perturbations in the internal state are amplified through the reasoning process-a direct analogue of ill-conditioning in Φ. High κ(Jt) thus indicates that the model operates in a locally unstable or overconfident regime, consistent with the Kantian view that reason approaches epistemic instability when its inferential dynamics become ill-conditioned. This perspective is consistent with Jacobian- and spectrum-based probes of internal stability in deep networks. See [33, 34, 35, 36]. This proxy does not assume an explicit feedback loop but captures local sensitivity of the internal reasoning process. Historical note. Jacobian-based stability analysis has a long tradition in control and neuralnetwork theory, where spectrum and conditioning govern gradient stability and error amplification (e.g., vanishing/exploding modes). Our use of the Jacobian condition number as a proxy ΦLLM follows this established line, while recasting it as a Kantian stability operator linking internal representation dynamics to epistemic coherence. See early analyses on gradient dynamics and deep linear models [37, 38] alongside recent Jacobian/spectrum probes in modern deep networks [33, 34, 35, 36]. Experimental design. We consider two conceptual variants of Kantian feedback (see also the prompt-level manipulation used in the supplementary experiment): • Prompt-Critique-Revision Loop: The model first generates an initial answer, then produces a self-critique evaluating factual consistency and internal coherence. A revised response is subsequently generated conditioned on both the original answer and the critique, realizing a discrete feedback cycle (output →evaluation →update) at the prompt level. In the present supplementary experiment (C0/C1/C2), this full three-step pipeline is not instantiated; it can, however, be implemented as a sequence (initial output →self-critique →revision) to align execution with the conceptual design. 10 Figure 1: Instability-Calibration. The instability index H-Risk predicts miscalibration (normalized innovation squared; NIS). Correlations and slopes: Pearson r = 0.986 (95% CI [0.976, 0.994]), Spearman ρ = 1.000; Theil-Sen slope α = 0.168 (95% CI [0.111, 0.223]); OLS slope β = 0.110 (∆≈34.5%, similar; not plotted). See Supplement for ablations and controls (gain re-tuning, non-normality sweep, tail NIS at q = 0.99). Bias controls, bootstrapped CIs, and FDR adjustments are described in Sec. 5.3. • One-Step LMMSE Correction (conceptual form): The final-layer hidden representation h is treated as an internal estimate of meaning. A linear probe H is trained to map h to factual targets y (or a pseudo-measurement derived from retrieval or consistency checks), with estimated covariances (P, R) obtained on held-out data. Before decoding, we apply a one-step linear minimum-mean-square-error correction, h′ ←h + PH⊤(HPH⊤+ R)-1(y -Hh), which acts as an analogue of the Kalman gain-adjusting latent representations toward empirical consistency prior to output. In the supplementary experiment (C0/C1/C2), this latent correction is not instantiated; it is included here only to clarify the theoretical connection. Preliminary results appear consistent with this hypothesis (Sec. 5.2). Datasets and metrics. We evaluate on factual question-answering subsets such as FEVER and NQ, measuring (i) factual accuracy, (ii) self-consistency across re-samplings, (iii) hallucination rate, and (iv) structural instability proxies (H-Risk; see Sec. 3 for definition and estimation). The correlation between H-Risk and observed hallucination provides a quantitative test of the Kantian feedback hypothesis-that reason's critique restores epistemic stability in overconfident inference systems. 5 Results 5.1 LTI Results: Structural Instability Predicts Miscalibration We first validate the instability hypothesis on a controlled linear-Gaussian system, where the closed-loop operator is Φ = A -KH and instability is quantified by the composite index H-Risk (Section 3). We sweep the observation coupling ε in H = [1 ε] to stress observability (nonnormality), and we induce mild model misspecification by underestimating process/measurement noise in the filter (Qfilt < Qtrue, Rfilt < Rtrue). To remove compensatory adaptation, the Kalman gain is held fixed ("fixed_ref"). 11 Main finding. As shown in Fig. 1, H-Risk strongly predicts miscalibration measured by the normalized innovation squared (NIS, E[Z2]): Pearson r = 0.986 (95% CI [0.976, 0.994]), Spearman ρ = 1.000, and a positive Theil-Sen slope α = 0.168. Thus, even with formal stability ρ(Φ) < 1, structural ill-conditioning (non-normality and poor stability margin) translates into systematic overconfidence. Tail calibration using the upper quantile (q = 0.99) shows the same trend (Pearson r = 0.992, Spearman ρ = 0.940), highlighting transient amplification typical of non-normal systems [1]. Full parameter sweeps and supplementary plots (S1-S3) are provided in the online repository. Quantitative summary. Complete statistics (correlations, robust slopes, and confidence intervals) are recorded in LTI_summary_stats.csv; raw per-configuration records appear in LTI_results_autotuned.csv. We avoid duplicating specific values in the prose to ensure consistency with the released CSVs. Controls and ablations. When we re-enable per-ε gain re-tuning via the discrete algebraic Riccati equation (DARE), the H-Risk-NIS slope flattens (Supp. Fig. S3), consistent with the view that "critique" (adaptive gain selection) stabilizes inference. Increasing non-normality (larger off-diagonal A12) steepens the slope (Supp. Fig. S1), while tail NIS (Supp. Fig. S2) accentuates the effect. Factor ablations of H-Risk (removing stability margin, condition number, integrated sensitivity, or innovation ratio) reduce correlations, supporting the necessity of the composite form. Experimental details. We use a 2D system with A = 0.95 0.60 0 0.97 , H = [1 ε], Gaussian noises with (σw, σv) = (3 × 10-2, 10-2), and filter misspecification (e.g., Qfilt = 0.12 Qtrue). H-Risk aggregates the stability margin 1 1-ρ(Φ), conditioning κ(Φ), integrated sensitivity ∥(I -Φ⊗Φ)-1∥2 [3], and innovation amplification tr(HPH⊤)/tr(R) [31, 17]. Steady-state covariances are obtained by solving the discrete Lyapunov equation via Bartels-Stewart [32, 2]. Code, seeds, and a machinereadable summary (LTI_summary_stats.csv) accompany the release. For transparency, the inline statistics are mirrored in LTI_summary_stats.csv, and raw per-configuration records appear in LTI_results_autotuned.csv. 5.2 LLM Results: Paired Condition Deltas The corresponding paired calibration differences between conditions are summarized in Table 1, showing mean deltas (∆= cond -C0) and BCa 95% confidence intervals across identical items. Negative ∆values (for ECE, Brier, and LogLoss) indicate improved calibration. Main finding. Paired comparisons against the baseline (C0) show condition-dependent changes in calibration. We report mean deltas (cond -C0) with BCa 95% confidence intervals; for ECE/Brier/LogLoss, negative ∆indicates improvement. Detailed, per-pair statistics are autocomputed from results_llm_experiment.csv and summarized in Table 1. Interpretation (preprint; to be finalized). Preliminary inspection suggests that underspecified prompts (C1) and misleading prompts (C2) tend to worsen calibration (positive ∆in Brier/ECE), while critique-style or structured prompts (C3) may improve it (negative ∆). This section will be finalized once all bootstrap statistics are confirmed. Over-reflection and second-order uncertainty. While critique prompts (C1/C2) made responses linguistically more cautious, they also slightly worsened probabilistic calibration (Fig. 2), with higher Brier and LogLoss relative to the base condition. This suggests an over-reflection 12 Table 1: Paired calibration deltas (C0 baseline, condition-level summary). Mean differences (∆ = cond -C0) with BCa 95% CIs across identical items. Only condition-level deltas are shown; negative ∆(ECE/Brier/LogLoss) indicates improvement. model prompt condition brier logloss halluc_rate (unspecified) (no-prompt) C1 0.010 [-0.020, 0.050] 0.138 [-0.276, 0.691] 0.010 [-0.020, 0.050] (unspecified) (no-prompt) C2 0.100 [0.020, 0.180] 1.382 [0.276, 2.487] 0.100 [0.020, 0.180] 0.025 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 brier (cond C0) (negative = improvement) (unspecified) | C1 (unspecified) | C2 Figure 2: Paired calibration deltas (forest plot). Error bars show BCa 95% confidence intervals around the mean ∆per condition. The vertical line at 0 denotes no change versus C0; points to the left (negative) indicate improved calibration. effect: the introduction of second-order uncertainty weakens the alignment between subjective confidence and factual correctness. Similar phenomena have been reported in both cognitive neuroscience and AI calibration studies [23, 24, 27, 25, 26]. In Friston's free-energy formulation, excessive meta-uncertainty inflates the variance of belief updates; in LLMs, the critique step may analogously disperse probability mass, diluting truth assignment. Philosophical interpretation. From a Kantian standpoint, this degradation corresponds to a failure of practical reason. Reflection (Kritik) is essential for reason's autonomy, yet when it turns entirely upon itself, its regulative function collapses into self-skepticism. In this sense, the over-reflection effect provides a concrete, computational instance of what Kant described as reason undermining its own stability when critique exceeds synthesis. Thus, LLMs under critique prompts exhibit a miniature analogue of the tension between epistemic self-correction and the loss of practical coherence-a digital form of reason turning against itself. Reproducibility. The table and figure are produced by analyze_condition_deltas.py (default settings), which pairs items across conditions using domain+qid, ignores prompt text for pairing, and exports both the summary (condition_deltas_summary.csv) and detailed pairs (condition_deltas_long.csv). 13 Table 2: Calibration metrics by condition and domain. Reported values are means; confidence intervals appear in the supplement. Both equal-width and equal-frequency decile ECE are shown. Condition Domain Accuracy Confidence Overconfident Bc Consistency Bc Uncertainty Bc Refusal N C0 general 93.000 96.850 7.000 - - - 100 C0 logic - 80.650 - 95.652 3.000 3.000 100 C0 reading - 85.100 - - 2.000 7.000 100 C1 general 92.000 96.950 8.000 - - - 100 C1 logic - 80.600 - 97.826 7.000 5.000 100 C1 reading - 84.300 - - 5.000 10.000 100 C2 general 83.000 96.800 17.000 - - - 100 C2 logic - 79.800 - 98.889 4.000 5.000 100 C2 reading - 85.600 - - 8.000 11.000 100 5.3 LLM Robustness: Sample Bias Controls and Cross-Model Replication Sampling controls. To mitigate sample bias, we use stratified sampling over dataset and item attributes. For factual QA, strata include dataset (FEVER, NQ), claim/question type, topical category, and answer-length bucket. Each condition (C0 baseline, C1 under-specified, C2 misleading) draws equal counts per stratum. In extended studies we additionally compare Prompt-Critique-Revision and one-step LMMSE variants. We randomize prompt order and seed the generator; results aggregate over a sweep of random seeds, with per-seed summaries released (CSV). Train/validation/test splits follow the original dataset protocols; only the test split is used for headline numbers. Estimation controls. Calibration is evaluated with both equal-width and equal-frequency decile ECE; we report MCE, Brier, and log loss in the supplement. We report the Brier score (mean squared error of predicted probabilities), a standard measure of probabilistic accuracy and calibration. Uncertainty is quantified by BCa bootstrap confidence intervals. For multiple pairwise comparisons across prompts/models we apply Benjamini-Hochberg FDR control. Effect sizes (Cliff's δ, Hedges' g) are reported alongside p-values. Decoding controls. Decoding hyperparameters (temperature, nucleus/top-p, max tokens, stop conditions) are held fixed across conditions and models. Self-consistency is measured with S resamples per prompt under identical hyperparameters; hallucination/accuracy metrics are computed on the majority-vote output and on the first sample (both reported). Where applicable, we normalize response length when analyzing confidence/accuracy correlations. Cross-model replication (preliminary). In this version, we replicated the feedback experiments using GPT-4o-mini and GPT-5. Both models reproduced the same qualitative trends: (i) feedback reduced ECE and hallucination rates relative to no-feedback baselines, and (ii) higher H-Risk proxies predicted higher hallucination rates. Supplementary tables and figures are planned for inclusion in a future version (Tables S1-S3; Figs. S4-S6). Summary. Aggregated over models in this sweep, mean confidence was 92.9%, correctness 0.70, and overconfident rate 0.233 (total N=30), computed from results_llm_experiment.csv. 6 Discussion We interpret hallucination as a form of ill-conditioned reasoning: when Φ approaches instability or becomes poorly conditioned, noise and model mismatch are amplified into confident errors. Within this interpretation, philosophical critique functions as a regularizing feedback mechanism that mitigates epistemic instability rather than as a literal control gain. 14 Philosophical interpretation. While Kant's transcendental architecture was conceived as a structural condition for the possibility of cognition, the control-theoretic formulation provides a dynamic analogy. Within this framework, reason can be understood as a self-regulating process that continuously stabilizes understanding within empirical bounds. This suggests a shared design principle-rather than a strict equivalence-between philosophical critique and mathematical feedback: both maintain coherence through bounded self-correction. Limitations and Broader Impact Limitations. Our analysis relies on linear-Gaussian assumptions and interprets epistemic stability through the spectral conditioning of Φ = A -KH. While this abstraction provides a tractable and interpretable measure, it remains an indirect proxy for cognitive coherence and may not capture nonlinear, contextual, or social aspects of reasoning. The empirical analysis was conducted on a limited set of models and prompt types, which may not generalize to multimodal or non-English settings. Philosophically, the Kantian mapping is intended as a structural and interpretive framework rather than a historical or doctrinal claim. Broader Impact. By linking Kantian epistemology with state-space inference, this work illustrates how philosophical frameworks can inform quantitative analyses of model calibration and epistemic stability. It may foster interdisciplinary dialogue between philosophy of mind and AI safety, helping to identify and mitigate overconfidence or instability in reasoning systems. Nevertheless, translating transcendental categories into mathematical form inevitably involves simplification and should be regarded as heuristic rather than prescriptive. Ethical reflection remains essential to ensure that such formal analogies deepen understanding rather than reduce philosophical inquiry to computation. As a practical guideline, we recommend epistemic responsibility in deployment-transparent reporting of calibration, uncertainty, and failure modes for systems influenced by these ideas. Use of Generative AI Tools. In accordance with current publication guidelines (e.g., Nature, NeurIPS, and arXiv policies), the author discloses the use of OpenAI's ChatGPT for limited assistance in language editing, code refactoring, and conceptual clarification. All content, analyses, and conclusions were independently verified by the author. No generative AI was used for creating or modifying figures. 7 Reproducibility Checklist We will release scripts, fixed seeds, and configuration files upon completion of experiments. Computational note. The steady-state covariance P is obtained by solving the discretetime Lyapunov equation P = ΦPΦ⊤+ Σ using the Bartels-Stewart algorithm based on Schur decomposition [2]. The existence and uniqueness of a positive-definite P under ρ(Φ) < 1 follow from standard results in optimal filtering and Lyapunov stability theory [16, 3]. Acknowledgments Author Note. This work was conducted in a personal capacity, outside the author's employment with another organization. ToppyMicroServices OÜ is the author's independently owned, earlystage Tech startup listed as a correspondence affiliation; it is not the author's employer. No external funding was received. The views expressed are solely those of the author. No employer endorsement is implied; any errors are the author's alone. If a good-faith concern is raised by 15 the employer, the author will promptly address it by issuing a clarifying revision. While the employer is not named in this version, the author is actively seeking approval to disclose the employer's name (as an optional secondary affiliation) in a future revision, if permitted. Competing Interests The author is the founder and owner of ToppyMicroServices OÜ (pre-revenue at the time of writing). The author reports no commercial sponsorships, client relationships, or other competing interests relevant to this work. Compliance Statement This personal research was conceived and completed outside the scope of the author's employment. It was carried out on personally owned hardware and personal cloud/accounts; no employer facilities, data, source code, or confidential information were used. To the author's knowledge, the work does not fall under any employer intellectual property assignment, work-for-hire, or similar clause and does not rely on proprietary materials of the employer. The manuscript does not use the employer's name, trademarks, or branding. References [1] Lloyd N. Trefethen and Mark Embree. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press, Princeton, NJ, 2005. ISBN 9780691119465. [2] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, 4th edition, 2013. [3] Kemin Zhou, John C. Doyle, and Keith Glover. Robust and Optimal Control. Prentice Hall, 1996. [4] Immanuel Kant. Critique of Pure Reason. Johann Friedrich Hartknoch, 1781. A/B editions, translated by P. Guyer and A. W. Wood, Cambridge University Press, 1998. [5] Carl B. Sachs. A cybernetic theory of persons: how sellars naturalized kant. Philosophical Inquiries (philinq), 2022. URL https://philinq.it/index.php/philinq/article/ download/389/256. [6] J. K. Burmeister. Kant, cybernetics, and cybersecurity: Integration and implications. Systemics, Cybernetics and Informatics, 2021. URL https://www.iiisci.org/journal/ pdv/sci/pdfs/IP132LL21.pdf. [7] Thomas Marlowe. Philosophy and cybernetics: Questions and issues. Systemics, Cybernetics and Informatics, 2021. URL https://www.iiisci.org/Journal/PDV/sci/pdfs/ IP130LL21.pdf. [8] Ziwei Ji, Zhiyuan Zeng, Yu Li, Chiyuan Zhang, and Percy Liang. Llm internal states reveal hallucination risk faced with novelty. In Proceedings of the 8th Workshop on Analysing and Interpreting Neural Networks for NLP (BlackboxNLP 2024). Association for Computational Linguistics, 2024. URL https://aclanthology.org/2024.blackboxnlp-1.6/. [9] Anonymous (assumed). On the fundamental impossibility of hallucination control in llms. arXiv preprint , 2025. URL https://arxiv.org/abs/2506.06382. 16 [10] Henry E. Allison. Kant's Transcendental Idealism: An Interpretation and Defense. Yale University Press, New Haven, CT, 2nd edition, 2004. [11] Paul Guyer. Kant. Routledge Philosophers. Routledge, 2006. [12] Lawrence R. Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989. [13] Neil Gordon, David Salmond, and Adrian Smith. Novel approach to nonlinear/non-gaussian bayesian state estimation. IEE Proceedings F (Radar and Signal Processing), 140(2):107-113, 1993. [14] Arnaud Doucet, Nando de Freitas, and Neil Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer, 2001. [15] Thomas Kailath. Linear Systems. Prentice-Hall, 1980. [16] Brian D. O. Anderson and John B. Moore. Optimal Filtering. Prentice-Hall, 1979. [17] Peter S. Maybeck. Stochastic Models, Estimation, and Control, volume 1. Academic Press, 1979. [18] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82:35-45, 1960. [19] Seongho Joo, Kyungmin Min, Jahyun Koo, and Kyomin Jung. Black-box hallucination detection via consistency under the uncertain expression. arXiv preprint , 2025. [20] Hao Li et al. How to detect and defeat molecular mirage: A metric-driven benchmark for hallucination in llm-based molecular comprehension. arXiv preprint , 2025. [21] Anonymous. Re-evaluating hallucination detection in llms. arXiv preprint , 2025. Update authors once the paper is public. [22] Aisha Alansari and Hamzah Luqman. A comprehensive survey of hallucination in large language models. arXiv preprint , 2025. [23] Karl Friston. The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2):127-138, 2010. [24] Alexandre Pouget, Jan Drugowitsch, and Adam Kepecs. Confidence and certainty: distinct probabilistic quantities for different goals. Nature Neuroscience, 19(3):366-374, 2016. [25] Sabrina J. Mielke, Jason Wei, Rylan Schaeffer, Noah D. Goodman, and Samuel R. Bowman. Overthinking the truth: Understanding how language models process uncertainty in reasoning. arXiv preprint , 2023. [26] Hao Zhang, Weizhi Liu, Yu Chen, Zhihua Zhang, Shuang Wu, and Dong Yu. Self-doubt learning improves calibration under distribution shift. arXiv preprint , 2024. [27] Meelis Kull, Telmo M. Filho, and Peter Flach. Beyond temperature scaling: Obtaining wellcalibrated multiclass probabilities with dirichlet calibration. Advances in Neural Information Processing Systems (NeurIPS), 2019. [28] Dieter Henrich. The Unity of Reason: Essays on Kant's Philosophy. Harvard University Press, 1994. 17 [29] Allen W. Wood. Kant's Rational Theology. Cornell University Press, 1978. [30] Chi-Tsong Chen. Linear System Theory and Design. Oxford University Press, 4th edition, 2013. [31] Andrew H. Jazwinski. Stochastic Processes and Filtering Theory. Academic Press, 1970. [32] R. H. Bartels and G. W. Stewart. Solution of the matrix equation ax + xb = c. Communications of the ACM, 15(9):820-826, 1972. [33] Amirata Ghorbani, Shankar Krishnan, Yin Xiao, Been Kim, and Percy S. Liang. An investigation into neural network jacobians and their spectrum. In ICML, 2019. [34] Karthik Sankararaman, Elad Hoffer, and Daniel Soudry. The impact of jacobian conditioning on generalization in deep learning. In NeurIPS, 2020. [35] Arthur Jacot, Stefano Spigler, Frank Gabriel, and Clément Hongler. Implicit regularization of neural tangent kernels. JMLR, 2021. [36] Greg Yang, Etai Littwin, and Andrew Saxe. Tensor programs iii: Neural matrix laws. In ICML, 2022. [37] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In ICML, 2013. [38] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint , 2013. [39] M. Vidyasagar. Nonlinear Systems Analysis. Prentice-Hall, 2nd edition, 1993. [40] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. Proceedings of the 34th International Conference on Machine Learning (ICML), pages 1321-1330, 2017. [41] Matthias Minderer, Shreya Shankar, Neil Houlsby, and Alexey Dosovitskiy. Revisiting the calibration of modern neural networks. Advances in Neural Information Processing Systems (NeurIPS), 2021. [42] Sean Welleck, Ximing Lu, Peter West, Yoshua Bengio, and Yejin Choi. Faithful reasoning using language models. International Conference on Learning Representations (ICLR), 2024. [43] Noah Shinn, Adam R. Brown, and Noah D. Goodman. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint , 2023. [44] Anthropic. Constitutional ai: Harmlessness from ai feedback. arXiv preprint , 2022. [45] Zhengbao Jiang, Shixiang Shane Gu, Jason Wei, and Denny Zhou. Self-consistency improves chain-of-thought reasoning in language models. arXiv preprint , 2022. [46] Henry E. Allison. Kant's Transcendental Idealism: An Interpretation and Defense. Yale University Press, 1983. [47] Paul Guyer. Kant and the Claims of Knowledge. Cambridge University Press, 1987. [48] Nicholas Rescher. Epistemic Stability and Rational Belief. Rowman & Littlefield, 2003. [49] Yaakov Bar-Shalom and Thomas E. Fortmann. Tracking and Data Association. Academic Press, 1988. 18 [50] Yaakov Bar-Shalom, X.-Rong Li, and Thiagalingam Kirubarajan. Estimation with Applications to Tracking and Navigation. Wiley, 2001. [51] Samuel S. Blackman. Multiple-Target Tracking with Radar Applications. Artech House, 1986. [52] Adam Kepecs, Naoshige Uchida, Hasnain A. Zariwala, and Zachary F. Mainen. Neural correlates, computation and behavioural impact of decision confidence. Nature, 455(7210): 227-231, 2008. 19
2510.14924
Draft version October 17, 2025 Typeset using LATEX twocolumn style in AASTeX7.0.1 StarStream on Gaia: Stream discovery and mass loss rate of globular clusters Yingtian Chen ,1 Oleg Y. Gnedin ,1 and Adrian M. Price-Whelan 2 1Department of Astronomy, University of Michigan, Ann Arbor, MI 48109, USA 2Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA Abstract We apply the automatic stellar stream detection algorithm StarStream to Gaia Data Release 3 and identify 87 stellar streams associated with Galactic globular clusters (GCs), including 34 high-quality cases with median completeness and purity both exceeding 50%, as estimated from modeling mock streams. These detections double the number of known GC streams, and increase the fraction of GCs with tidal streams at high Galactic latitudes (|𝑏| > 30◦) to 75%. In contrast to visual expectations, many new streams are wide or short, or misaligned with their progenitors’ orbits. Taking advantage of the unbiased density measurements enabled by our method, we also estimate the mass loss rate for the progenitor GCs. We find that several low-mass, large-size clusters have enhanced mass loss rates, indicating that they are approaching complete tidal disruption. Keywords: Stellar streams (2166); Globular star clusters (656); Stellar dynamics (1596); Galaxy dynamics (591) 1. Introduction The advent of the Gaia mission ( Gaia Collaboration et al. 2016), in particular the inclusion of high-precision proper motions down to 𝐺≈20 by Data Release 2 (DR2, Gaia Collaboration et al. 2018), has greatly reshaped our under- standing of the Milky Way (MW) substructure by providing an all-sky map of stars in the full six-dimensional (6D) phase space. By applying variations of the matched-filter technique (e.g., C. M. Rockosi et al. 2002; C. J. Grillmair 2009; E. J. Bernard et al. 2014; N. Shipp et al. 2018) to both the color–magnitude diagram (CMD) and proper motion space, astronomers have identified over one hundred thin, elongated structures recog- nized as stellar streams (see the review by A. Bonaca & A. M. Price-Whelan 2025). Notably, R. Ibata et al. (2024) employed the STREAMFINDER algorithm (K. Malhan & R. A. Ibata 2018) on Gaia DR3 ( Gaia Collaboration et al. 2023) to uncover 87 thin streams, while A. Hallin et al. (2025) combined the Via Machinae method (D. Shih et al. 2021) with the Cathode algorithm (A. Hallin et al. 2022) to detect around 80 thin streams in Gaia DR2. Many stellar streams are elongated debris of tidally dis- rupted globular clusters (GCs, see D. Lynden-Bell & R. M. Lynden-Bell 1995). Their morphology and kinematics pre- serve rich information about interactions with the Galactic environment, including the dark matter halo (S. E. Koposov et al. 2010; A. H. W. K¨upper et al. 2015; J. Nibauer & A. Corresponding author: Yingtian Chen ybchen@umich.edu Bonaca 2025), the rotating bar (S. Pearson et al. 2017; A. Bonaca et al. 2020), the tilting disk (J. Nibauer et al. 2024), and encounters with subhalos, other GCs, or giant molecular clouds (R. G. Carlberg et al. 2012; W. H. W. Ngan & R. G. Carlberg 2014; D. Erkal et al. 2016, 2017; N. Banik et al. 2018). In addition, stellar streams encode key properties of their progenitor GCs, such as the mass loss rate (e.g., A. H. W. K¨upper et al. 2015; M. Gieles et al. 2021; Y. Chen et al. 2025b), which is directly related to the stream density. Such measurements provide valuable constraints on the N-body simulations of dynamical evolution of GCs. However, most stream detections do not include quantified estimates of com- pleteness and purity, leading to unknown systematic uncer- tainties in the inferred mass loss rates. Furthermore, fewer than 20 GCs have so far been confidently associated with streams, while most clusters (> 150, M. Hilker et al. 2019) still lack stream detections. Although tidally stripped stars have been widely observed around GCs (e.g., P. B. Kuzma et al. 2025), thin, extended “stream-like” features remain ab- sent along the orbits of most GCs. Recent advances in stream formation theory provide in- sights to the missing-stream puzzle. N. C. Amorisco (2015) pointed out that streams may appear dynamically hot or spa- tially complex, depending on the progenitor’s mass and orbit. Moreover, streams can deviate the progenitor’s orbit in a non- spherical or time-evolving Galactic potential (J. L. Sanders & J. Binney 2013a,b; N. Panithanpaisal et al. 2025). These effects may be amplified depending on the viewing angle. As a result, traditional detection approaches based on the visual expectation that streams are thin features elongated along the progenitor’s orbit tend to miss these “irregular” streams. arXiv:2510.14924v1 [astro-ph.GA] 16 Oct 2025 2 Chen et al. The limitations of traditional methods motivated us to de- velop StarStream (Y. Chen et al. 2025a, hereafter C25), a physics-based method that makes no prior assumptions about stream morphology. The method employs kernel density estimation (KDE) to build a mixture model of stream and background stars, and incorporates the fast and accurate par- ticle spray algorithm of Y. Chen et al. (2025c) to generate a realistic stream model in the spatial, velocity, and color– magnitude spaces. C25 quantified the detection performance of StarStream using a suite of validation tests on a mock dataset tailored to Gaia DR3. StarStream demonstrates pu- rity and completeness both above ∼65% at high Galactic latitudes (|𝑏| > 30◦), even after accounting for dust extinc- tion. The high detection quality makes StarStream a powerful tool to uncover GC streams that may have been missed by previous methods. Its quantified performance further allows us to derive unbiased estimates of the mass loss rate for these clusters. In this work, we apply StarStream to Gaia DR3 fields around MW GCs. In §2, we provide an overview of StarStream and the Gaia DR3 dataset. We then present the discovery of new streams in §3, followed by the calculation of the mass loss rates in §4. Finally, we summarize and discuss our findings in §5. 2. Method We apply the StarStream algorithm to Gaia DR3 stars around MW GCs to identify potential stream members. In this section, we provide an overview of the StarStream algorithm, including our adjustment to the Gaia DR3 dataset. 2.1. Overview of StarStream We refer to C25 for the complete description and validation of the StarStream algorithm3. Here, we briefly recap the key concepts of StarStream. StarStream uses mixture modeling to distinguish the GC stream from the background. The probability density function is given by 𝑝(𝒙) = 𝑓s 𝑝s(𝒙) + (1 −𝑓s) 𝑝bg(𝒙) where 𝒙is an arbitrary point in the multi-dimensional space of observables. We introduce the stream fraction 𝑓s to character- ize the ratio between the stream model 𝑝s(𝒙) and background model 𝑝bg(𝒙). Both models are represented by Gaussian KDE constructed on tracer particles in the multi-dimensional space. For Gaia DR3 specifically, we use six observables, including two sky coordinates, two corresponding proper mo- tions, BP −RP color, and 𝐺-band magnitude. Instead of di- rectly using the raw right ascension and declination provided 3 The StarStream algorithm is published on GitHub at https://github.com/ ybillchen/StarStream, where we also provide example Python notebooks for running the code. by Gaia, we rotate the coordinate system for each GC such that the GC is located at the origin with the velocity vector along the new longitude. This coordinate system ensures an almost identity metric tensor g ≈I around the GC, which is necessary for the KDE to yield the correct probability density. To construct the stream model, we simulate mock stream for each GC using the particle spray model by Y. Chen et al. (2025c). This method requires integrating the orbits of both the progenitor and tracer particles in a predefined Galactic potential model. We employ the MilkyWayPotential2022 model implemented within gala (A. M. Price-Whelan 2017; A. Price-Whelan et al. 2024), which has been validated against MW mass measurements out to ∼150 kpc (J. A. Hunt & E. Vasiliev 2025). We release tracer particles over the last 1 Gyr assuming a uniform ejection rate ¤𝑁tracer = 4 Myr−1. We then sample the mass of each particle from the P. Kroupa (2001) initial mass function, with the minimum stellar mass being set to the lowest possible mass of the closest tracer particle that remains above the detection limit. Based on the stellar mass, we calculate the color and magnitude of each tracer particle from the MESA Isochrones and Stellar Tracks (MIST, A. Dotter 2016; U. Meˇstri´c et al. 2022) model. In C25, we have verified that these settings are sufficient to fully sample the position space, the proper motion space, and the CMD. The KDE is constructed on these tracers with no correla- tion between each dimension. We set the kernel bandwidth to 0.1 times the standard deviation of all tracer particles in each dimension, except that we fix the bandwidth to 0.1 for magnitudes and 0.02 for colors. Varying these values by a factor of 0.5 −2 has a negligible effect on our results. It is worth noting that we convolve the kernels with the observa- tional uncertainties when evaluating the KDE for observed stars. This is equivalent to increasing the bandwidths of KDE kernels. For the background, we randomly select 104 stars from the real data as the tracer particles for constructing KDE. We adopt bandwidths of 0.5◦for position, 1 mas yr−1 for proper motions, and 0.1 for both color and magnitude. To speed up KDE evaluation, we employ the grid interpolation technique, where the grid spacings are set equal to the corresponding bandwidths. We have tested that the final results are not sensitive to the choice of bandwidths or grid spacing. Once we have constructed the stream model and back- ground model, we estimate the stream fraction 𝑓s by max- imizing the log-likelihood for 𝑁stars in the observational dataset, ln L ≡ 𝑁 ∑︁ 𝑖=1 ln  𝑓s𝑝s(𝒙𝑖) + (1 −𝑓s)𝑝bg(𝒙𝑖)  in which 𝒙𝑖represent the multi-dimensional coordinate of the 𝑖-th star in the dataset. The best-fit stream and background StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 3 probability densities for star 𝑖are then given by 𝑓s𝑝s(𝒙𝑖) and (1 −𝑓s)𝑝bg(𝒙𝑖), respectively. Therefore, we can define the membership probability that star 𝑖belongs to the stream as 𝑃s,𝑖≡ 𝑓s𝑝s(𝒙𝑖) 𝑓s𝑝s(𝒙𝑖) + (1 −𝑓s)𝑝bg(𝒙𝑖) . We consider stars with membership probability greater than a threshold 𝑃th ≡0.5 to be identified as stream members. 2.2. Observational datasets We apply the same selection criteria as C25 to Gaia DR3. Specifically, we restrict to stars with valid measurements of right ascension, declination, proper motions, BP −RP color, 𝐺-band magnitude, and the corresponding uncertainties. We then select all stars with 𝐺< 20 inside the 10◦cone around each GC. In addition, we employ an initial CMD cut around the extinction-corrected isochrone, with a tolerance offset Δ(BP −RP) = ±0.5 around the main sequence and the red- giant branch. We also extend the isochrone with Δ𝐺= 1.5 above the tip of the red-giant branch and around the horizontal branch to include stars clustered in those regions. These criteria select between 1 million stars (near the Galactic pole) and 30 million stars (near the Galactic center) around each GC. To search for the stream around each GC, we need the mass, 3D positions, and 3D velocities of the progenitor GC to simulate the stream using our particle spray method. We use the values from the fourth edition of the M. Hilker et al. (2019) catalog4, which contains 162 MW GCs with valid measurements. To obtain the isochrone of each GC in the CMD, we use the MIST model, which requires the age, metallicity, and color excess 𝐸(𝐵−𝑉) due to extinction assuming the J. A. Cardelli et al. (1989) reddening law. We use 𝐸(𝐵−𝑉) from the measurements by D. Massari et al.5, or from the 2010 edition of the catalog by W. E. Harris (1996)6 for GCs not included in the former. For the remaining streams without reliable extinction measurements, we use the extinction map by D. J. Schlegel et al. (1998) recalibrated by E. F. Schlafly & D. P. Finkbeiner (2011). The three sources of 𝐸(𝐵−𝑉) typically differ only by ≲30% for the overleaping GCs. To avoid extreme extinction that affects our detection quality, we exclude ∼20% GCs with 𝐸(𝐵−𝑉) > 1, leaving 128 GCs in our final sample. We then fit the age and metallicity of individual GCs using a grid of age = 7, 9, 11, 13 Gyr and [Fe/H] varying between ±1 around the value in the W. E. Harris (1996) catalog, with 4 https://people.smp.uq.edu.au/HolgerBaumgardt/globular/ 5 Private comm. in the context of the Cluster Ages to Reconstruct the Milky Way Assembly (CARMA) project (D. Massari et al. 2023). 6 https://physics.mcmaster.ca/∼harris/mwgc.dat a uniform spacing of 0.1. We use the age and metallicity that minimize the residual sum of squares of BP−RP color around the isochrone. We select GC member stars within the half- mass radius from the M. Hilker et al. (2019) catalog, with an additional proper motion selection Δ𝜇𝛼, Δ𝜇𝛿= ±1 mas yr−1. We have verified that the best-fit metallicities are consistent with the W. E. Harris (1996) values, with a standard deviation of 0.4. Although multiple metallicity values may lead to simi- lar fits due to the age–metallicity degeneracy, it is worth noting that the specific values do not significantly affect the perfor- mance of StarStream, as long as we can reproduce the shape of the isochrone. For GCs with too few (< 10) selected stars for reliable fit, we directly adopt metallicities from the W. E. Harris (1996) catalog and a fixed age = 10 Gyr. However, this catalog does not cover all GCs in our sample. For GCs without metallicity measurements, we assume [Fe/H] = −1. As we show in C25, such an approximation has negligible impact on the detection quality. 3. Detecting streams in the Milky Way 3.1. Method validation Before we report our discovery results, it is necessary to compare our detection with validation tests to rule out unre- liable detections. We conducted these tests in C25, where we obtained the completeness and purity for individual mock streams from the C. Holm-Hansen et al. (2025) catalog. We define completeness as the fraction of real stream members detected by StarStream, and purity as the fraction of correctly detected members out of all detections. We also conducted a null test where we removed mock stream stars but applied the same StarStream method to the region that used to have streams. Since we specifically tailored the mock dataset for Gaia DR3, it is appropriate to compare our detections with these tests. We note that these metrics strongly depend on the pro- genitor’s extinction 𝐴𝑉and background density. The latter is characterized by the number of background stars 𝑁bg (ac- counting for the CMD selection in §2.2) within the 10◦search radius. In the upper row of Fig. 1, we show the completeness and purity as functions of these metrics. The values are di- rectly taken from the with-extinction case of C25 (see §3.7 therein), excluding extremely high extinction cases following §2.2. As expected, both completeness and purity decrease with 𝐴𝑉and 𝑁bg. They become less than 20% at 𝐴𝑉> 0.6 or 𝑁bg > 6×106. These values approximately correspond to the low-Galactic latitude region |𝑏| ≲20◦, where the high dust reddening and background contamination significantly affect the detection quality. On the other hand, the method achieves acceptable completeness and purity for the low-𝐴𝑉and low- 𝑁bg GCs. For mock GCs with 𝐴𝑉< 0.6 and 𝑁bg < 6 × 106, the median completeness and purity are 50% and 59%, re- 4 Chen et al. 0.0 0.2 0.4 0.6 0.8 1.0 Ratio 10 1 100 Extinction AV 0 1 10 102 103 104 Count 105 106 107 Nbg within 10 deg Figure 1. Detection quality metrics of StarStream by C25. Upper row: Purity (magenta) and completeness (cyan) as functions of the progenitor’s extinction 𝐴𝑉(left) and background density as characterized by 𝑁bg within the 10◦search radius (right). Lower row: Number of detections in the null test (𝑁null, red) as a function of 𝐴𝑉and background density. We also show the number of actual detection when applying StarStream to MW GCs as blue lines, with individual detections shown as circles. Shaded regions represent the 25%–75% ranges, smoothed by a Gaussian kernel with bandwidth = 0.2 dex for 𝐴𝑉and 0.4 dex for 𝑁bg. We show our threshold for high-quality detection, 𝐴𝑉< 0.6 and 𝑁bg < 6 × 106, as vertical dashed lines. We also show the horizontal line to indicate the minimum selection threshold 𝑁detect = 10. spectively. These values can even grow to 60% −80% near the Galactic poles. In the lower row of Fig. 1, we show the number of detections in the null test, 𝑁null. For comparison, we also show the actual number of detections 𝑁detect when applying StarStream to MW GCs in Gaia DR3. For GCs with 𝐴𝑉> 0.6 or 𝑁bg > 6 × 106, 𝑁detect is indistinguishable from 𝑁null. However, in the high-latitude regions with 𝐴𝑉< 0.6 and 𝑁bg < 6×106, the true detection number becomes significantly higher than 𝑁null by up to two orders of magnitude. This is a strong evidence that we successfully detect streams around most GCs in this region. Based on the validation, we define high-latitude detections with 𝐴𝑉< 0.6 and 𝑁bg < 6 × 106 as the high-quality sam- ple. We are confident that most detections in this sample are true detections with completeness and purity both above 50%. Since GCs in this sample are at high latitudes |𝑏| ≳20◦, even their 10◦search radii do not intersect the high-𝐴𝑉Galactic plane, leading to a near-uniform distinction distribution. On the other hand, we define low-latitude detections with either 𝐴𝑉≥0.6 or 𝑁bg ≥6 × 106 as the low-quality sample. Al- though we are less confident in this sample, it is still possible that they may be real. Since 𝑁null varies between 0 −10 even in the low-𝐴𝑉and low-𝑁bg regions, we exclude streams with 𝑁null ≤10 from both samples to avoid false detections. Such criteria lead to a total of 87 GC streams, with 34 in the high-quality sample and 53 in the low-quality sample. These numbers greatly improve our knowledge of GC streams since less than 20 have been confirmed prior to this work. 3.2. New streams In Table 1, we list the key properties of the 34 streams in our high-quality sample. We provide information about the low-quality sample in Appendix B, Table 2. We also plot the high-quality sample in the great circle frame 𝜙1–𝜙2 in Fig. 2. The 𝜙1–𝜙2 frame is defined such that the progenitor GC is at (0, 0), and the stream is elongated along 𝜙1. We achieve this by rotating the sky coordinates to minimize the standard deviation of 𝜙2 for simulated tracer particles. We rank the streams in Fig. 2 by the length of simulated streams, which is characterized by the 90th percentile of angular separation, 𝑟90. Since we only detect streams within the 10◦search radius around the GC, there are 17 streams with 𝑟90 exceeding this radius. In §3.3, we perform a followup detection for these streams by extending the search radius to 20◦. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 5 5 0 5 2 (deg) 5 0 5 2 (deg) 5 0 5 2 (deg) 5 0 5 2 (deg) 5 0 5 2 (deg) 5 0 5 2 (deg) 10 5 0 5 10 1 (deg) 10 5 0 5 10 1 (deg) 5 0 5 2 (deg) 10 5 0 5 10 1 (deg) 10 5 0 5 10 1 (deg) 10 5 0 5 10 1 (deg) 0.5 1.0 Probability Figure 2. Detections of stream members (blue circles) around 34 MW GCs in the high-quality sample (𝐴𝑉< 0.6 and 𝑁bg < 6 × 106). We show these streams in the great circle frame (𝜙1–𝜙2) centered on the progenitor GC. Streams are placed in the descending order of the length 𝑟90. Each star is color-coded by the stream probability, as indicated by the colorbar. The tidal radius of the GC is shown as the brown circle. We show orbits of progenitor GCs as solid brown curves, projected in the same great circle frame. For comparison, we also show the simulated streams (gray symbols). 6 Chen et al. Many streams are wider or more “irregular” than the visual expectation that GC streams are thin and long. For example, NGC 4147’s stream is almost a circular blob with similar spread in 𝜙1 and 𝜙2. However, these streams still have suf- ficiently distinct distribution in the proper motion space and the color–magnitude space compared to the background stars. They are thus detectable by StarStream since we do not make prior assumptions based on the visual expectation of “regular” streams. We also show orbits of progenitor GCs projected onto the great circle frame in Fig. 2. We integrate orbits using the same MilkyWayPotential2022 potential. We notice that some streams misalign with the projected orbit of the progen- itor GC by > 10◦, such as Palomar 14 (Pal 14). Although the misalignment is expected (J. L. Sanders & J. Binney 2013a,b; N. Panithanpaisal et al. 2025) and is likely visu- ally enhanced as these streams have highly eccentric orbits, previous searches of GC streams tended to focus along the GC’s motion and preferentially found well-aligned streams such as Pal 5, whose misalignment angle is smaller than 5◦. The detection of these “irregular” or misaligned streams highlights the power of the physics-based modeling of GC streams by StarStream. As many GC streams can be dynam- ically hot or spatially complex depending on the GC’s mass and orbit (N. C. Amorisco 2015), these streams are likely missed by traditional visually-based methods. In Fig. 3, we show the CMD of detected stream stars using Gaia photometry. Most stars are gathered around or below the main sequence turn off. Since StarStream takes into account the color uncertainty that typically grows with the 𝐺-magnitude, the color spread of detected stars also becomes larger near the fainter end. On the other hand, the color spread of simulated streams is only due to the distance spread and is thus almost invariant of 𝐺-magnitude. 3.3. Extended detection for long streams We find 17 streams in the high-quality sample that ex- tend beyond the default search radius, or 𝑟90 > 10◦. To obtain a more complete detection for these streams, we rerun StarStream for these streams in the 10◦−20◦annulus around each GC. For consistency, we keep all other parameters of the method the same as those inside 10◦. In Fig. 4, we show the three streams, NGC 5272, 1851, and 5024, with more than 10 new detections outside 10◦. Note that the number of background stars approximately triples in the annulus, significantly reducing the signal-to-noise ratio in this region. Therefore, we should not expect the completeness and purity to stay unchanged in the extended area. 3.4. Comparison with previous detections R. Ibata et al. (2024) applied the STREAMFINDER algorithm to Gaia DR3 and provided the most complete catalog of GC streams prior to this work. They found 16 GCs that are asso- ciated with stellar streams. However, three of their streams, NGC 4590, 5139, and 5904, are not directly connected to the GC and thus cannot be compared with our results. For the remaining 13 streams, 8 are in our high-quality sample: NGC 288, 1261, 1851, 5466, 6341, 7089, 7099, and Pal 5, whereas 5 are in our low-quality sample: NGC 2298, 2808, 3201, 6101, and 6397. For the high-quality sample, our method always yields higher 𝑁detect within the 10◦radius by 20 −6000%, with a median of 300%. For the low-quality sample, our method detects more stars for NGC 2298, 2808, 6101, and fewer for the remaining two. However, since the low-quality sample has large background contamination and extinction, either method requires follow-up observations for further confirmation. For Pal 5 specifically, we detect 131 member stars while STREAMFINDER detects 109. However, since our method focuses on the stream segment released in the last 1 Gyr, our detections are all concentrated in the inner 5◦. For this region, R. Ibata et al. (2024) reported 76 stars. The 70% improvement of our method is remarkable since Pal 5 is one of the most complete streams known to date. Recently, P. B. Kuzma et al. (2025) identified potential tidal structures within 5◦around 22 MW GCs using the Pristine-Gaia synthetic catalog from Pristine Data Release 1 (E. Starkenburg et al. 2017; N. F. Martin et al. 2024). 13 of them overlap our high-quality sample: NGC 362, 1261, 1851, 1904, 5272, 6205, 6341, 6934, 6981, 7078, 7089, 7099. It is worth noting that many of their tidal structures do not show stream-like feature. This agrees with our finding that although many GC streams are “irregular”, the tidal streams around GCs are more common than was previously detected. 4. Mass loss rate of globular clusters 4.1. Calculation of mass loss rate Using the stream stars identified by StarStream, we next measure the orbit-averaged mass loss rate ¤𝑀of the progenitor GCs. We use an updated approach from Y. Chen et al. (2025b) starting from their Eq. (3): ¤𝑀≈ ¤𝑁tracer 𝑀sel ∬ 𝑓sel(𝜙1, 𝜙2) 𝑛tracer(𝜙1, 𝜙2) 𝑑𝜙1𝑑𝜙2 (1) where 𝑀sel is the total detected mass of stream stars. 𝑀sel is equivalent to the total mass selected by the effective spa- tial selection function 𝑓sel(𝜙1, 𝜙2) of StarStream. ¤𝑁tracer is the number ejection rate of tracer particles above the detec- tion limit. The integral is over the number density of tracer particles 𝑛tracer(𝜙1, 𝜙2) weighted by the selection function. Y. Chen et al. (2025b) approximates 𝑓sel as a Gaussian tube around the stream track when analyzing the R. Ibata et al. (2024) stream catalog. Here, however, we can make fur- ther simplification considering the quantified detection ratio StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 7 Table 1. Summary of GC and stream properties for the high-quality sample. The Galactic longitude 𝑙, latitude 𝑏, heliocentric distance 𝑑⊙, cluster mass 𝑀GC, and 3D half-mass radius 𝑟h are taken from the M. Hilker et al. (2019) catalog. The extinction 𝐴𝑉is either from D. Massari et al., W. E. Harris (1996) catalog, or D. J. Schlegel et al. (1998) map, see §2.2. 𝑁bg is the number of CMD-selected Gaia DR3 stars within the 10◦search radius. The number of detections 𝑁detect, tidal frequency Ωtid, and cluster mass loss rate | ¤𝑀| are calculated in this work. A plain ASCII version is available at https://github.com/ybillchen/StarStream DR. GC 𝑙 𝑏 𝑑⊙ 𝐴𝑉 𝑁bg 𝑁detect 𝑀GC Ωtid 𝑟h | ¤𝑀| (◦) (◦) (kpc) (105) (105 M⊙) (Gyr−1) (pc) (M⊙Myr−1) NGC 288 151.3 −89.4 9.0 0.06 2.2 494 0.962 40.2 8.6 2.7 +1.1 −0.8 NGC 362 301.5 −46.2 8.8 0.09 16.6 1030 2.520 42.2 3.4 10.3 +11.2 −5.4 NGC 1261 270.5 −52.1 16.4 0.03 3.7 560 1.720 24.5 4.9 12.7 +6.1 −4.1 NGC 1851 244.5 −35.0 11.9 0.12 5.8 1483 2.830 25.9 3.2 18.7 +12.0 −7.3 NGC 1904 227.2 −29.4 13.1 0.03 7.5 263 1.810 27.6 4.3 19.1 +15.1 −8.4 NGC 2419 180.4 25.2 88.5 0.25 8.2 138 7.830 4.4 26.4 − NGC 4147 252.8 77.2 18.5 0.06 2.4 108 0.451 20.1 3.5 2.5 +1.1 −0.8 NGC 4590 299.6 36.1 10.4 0.16 9.5 68 1.280 13.7 7.3 1.1 +1.0 −0.5 NGC 5024 333.0 79.8 18.5 0.06 2.7 706 5.020 18.2 10.0 37.0 +15.9 −11.1 NGC 5053 335.7 78.9 17.5 0.03 2.8 986 0.628 20.5 17.0 29.4 +12.9 −9.0 NGC 5272 42.2 78.7 10.2 0.03 2.7 503 4.090 27.6 5.5 6.2 +2.7 −1.9 NGC 5466 42.1 73.6 16.1 0.00 2.9 162 0.561 8.6 13.8 4.1 +1.8 −1.3 NGC 5634 342.2 49.3 26.0 0.16 7.1 62 2.470 22.5 7.7 10.8 +8.7 −4.8 NGC 5694 331.1 30.4 34.8 0.28 22.3 20 2.690 7.6 4.3 7.3 +10.1 −4.2 NGC 5824 332.6 22.1 31.7 0.40 49.2 131 7.460 10.8 6.3 62.5 +116.0 −40.6 NGC 5897 342.9 30.3 12.6 0.37 21.2 764 1.670 49.6 10.9 16.9 +20.5 −9.3 NGC 5904 3.9 46.8 7.5 0.09 7.1 176 3.920 17.7 5.6 2.4 +1.7 −1.0 NGC 6205 59.0 40.9 7.4 0.03 7.1 2209 4.840 56.4 5.2 16.7 +12.0 −7.0 NGC 6218 15.7 26.3 5.1 0.59 23.7 806 1.060 86.2 4.1 3.9 +5.0 −2.2 NGC 6229 73.6 40.3 30.1 0.03 6.5 79 2.470 17.1 4.8 25.1 +24.8 −12.5 NGC 6341 68.3 34.9 8.5 0.03 7.3 356 2.730 48.2 3.6 3.1 +2.3 −1.3 NGC 6752 336.5 −25.6 4.1 0.12 41.1 478 2.610 71.2 4.8 2.9 +4.6 −1.8 NGC 6934 52.1 −18.9 15.7 0.31 52.9 83 1.500 9.4 4.7 3.9 +6.8 −2.5 NGC 6981 35.2 −32.7 16.7 0.16 17.8 628 0.812 22.1 5.8 18.0 +20.3 −9.5 NGC 7006 63.8 −19.4 39.3 0.16 42.6 108 1.320 9.0 6.6 81.5 +132.3 −50.4 NGC 7078 65.0 −27.3 10.7 0.31 15.5 504 5.180 41.7 3.7 5.9 +6.3 −3.0 NGC 7089 53.4 −35.8 11.7 0.16 10.1 814 6.240 27.1 4.8 14.6 +12.7 −6.8 NGC 7099 27.2 −46.8 8.5 0.12 7.0 728 1.210 55.5 4.3 4.3 +3.1 −1.8 NGC 7492 53.4 −63.5 24.4 0.00 3.4 96 0.197 19.0 10.6 9.0 +5.3 −3.3 Pal 1 130.1 19.0 11.2 0.46 17.0 445 0.009 17.1 3.4 4.9 +5.5 −2.6 Pal 5 0.8 45.9 21.9 0.09 8.5 131 0.134 21.5 27.6 10.1 +8.3 −4.6 Pal 12 30.5 −47.7 18.5 0.06 7.3 228 0.062 6.7 10.5 8.7 +6.4 −3.7 Pal 14 28.7 42.2 73.6 0.12 7.9 117 0.191 4.0 37.7 − Whiting 1 161.6 −60.6 30.6 0.09 2.1 99 0.014 5.4 16.5 26.0 +21.9 −11.9 8 Chen et al. 16 18 20 G 16 18 20 G 16 18 20 G 16 18 20 G 16 18 20 G 16 18 20 G 0 0.5 1 1.5 BP RP 0 0.5 1 1.5 BP RP 16 18 20 G 0 0.5 1 1.5 BP RP 0 0.5 1 1.5 BP RP 0 0.5 1 1.5 BP RP 0.5 1.0 Probability Figure 3. Similar to Fig. 2, but for the color–magnitude space 𝐺vs. BP −RP. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 9 5 0 5 2 (deg) 5 0 5 2 (deg) 20 10 0 10 20 1 (deg) 5 0 5 2 (deg) Figure 4. Similar to Fig. 2, but for extended streams with 𝑟90 > 10◦. Only streams with more than 10 extended detections outside the original 10◦search radius are shown here, with the extended detec- tions shown as open symbols. The original search radius is marked as dashed circles in each panel. 𝑓detect between the expectation of the number of detections 𝑁detect by StarStream and the true number 𝑁true (see C25). In Appendix A, we prove that simply replacing the integral in Eq. (1) by the total number of tracer particles 𝑁tracer times 𝑓detect yields an unbiased estimate of the mass loss rate: ¤𝑀≈𝑓−1 detect ¤𝑁tracer 𝑀sel 𝑁tracer (2) where 𝑓detect ≡𝑁detect/𝑁true varies with individual streams. Using the tests by C25, we find that 𝑓detect is always centered at 0.9 for the high-quality sample, with a log-normal scatter that depends on the background density 𝑁bg. In Appendix A, we approximate the standard deviation of the log-normal scatter 𝜎log 𝑓as a linear function of log10 𝑁bg: 𝜎log 𝑓increases from 0.15 dex to 0.5 dex when log10 𝑁bg increases from 5.5 to 7, and is fixed to 𝜎log 𝑓= 0.15 dex below log10 𝑁bg < 5.5 (see Fig. 6). Since the purity and completeness of the low- quality sample are both low with large variation, the scatter can be more than 1 dex. Therefore, we exclude the low-quality sample for the calculation of the mass loss. Note that 𝑀sel and 𝑁tracer only account for tracer particles within the 10◦search radius. The detection ratio is not cali- brated outside 10◦and is likely much lower. We thus exclude extended stream segments outside this radius by §3.3 for the subsequent calculation. For real observations, 𝑀sel is not simply the sum of the masses of individual stars, as it must account for stars below the detection limit. Following Y. Chen et al. (2025b), we introduce a correction factor 𝑤𝑖for the 𝑖-th star to account for the missing stellar mass 𝑀sel ≈ 𝑁obs ∑︁ 𝑖=1 𝑚𝑖𝑤𝑖 (3) where 𝑤𝑖= 𝑤(𝑚min,𝑖) ≡ ∫𝑚max 𝑚limit 𝑚𝜓(𝑚) 𝑑𝑚 ∫𝑚max 𝑚min,𝑖𝑚𝜓(𝑚) 𝑑𝑚 in which the upper integral limit 𝑚max is the maximum mass of surviving stars at the current age. We set the lower integra- tion limit of the numerator to the hydrogen-burning threshold 𝑚limit = 0.08 𝑀⊙, and the lower integration limit of the denominator to the minimum detectable stellar mass at the heliocentric distance of this star 𝑚min,𝑖≡𝑚min(𝑑⊙,𝑖). The stellar mass function of the stream is denoted by 𝜓(𝑚). Fol- lowing Y. Chen et al. (2025b), we assume 𝜓(𝑚) follows the same power-law function as the progenitor GC, with the slope measured by H. Baumgardt et al. (2023). These authors also attempted to fit the mass function with broken power-law functions. However, only a subset of our sample GCs have measured slopes for the broken power-law. For these GCs, the two forms of mass functions deviate only by < 20% when calculating the correction factor. Since we focus on the 10◦cone around each GC, the he- liocentric distance of each stream member 𝑑⊙,𝑖is very close to the distance to the center GC, 𝑑⊙,GC. Using the simu- lated stream, we verify that the standard deviation of dis- tances is typically 0.025 dex (at most 0.08 dex). There- fore, the distance spread contributes negligibly to the total uncertainty since the intrinsic scatter due to the detection method itself is already > 0.15 dex (see Appendix A). It is thus reasonable to adopt 𝑑⊙,𝑖≈𝑑⊙,GC. Correspond- ingly, we can define 𝑚min,𝑖= 𝑚min(𝑑⊙,GC) ≡𝑚min,GC and 𝑤𝑖= 𝑤(𝑚min,GC) ≡𝑤GC. Eq. (3) is thus simplified to 𝑀sel ≈ 𝑁obs ∑︁ 𝑖=1 𝑚𝑖𝑤GC = 𝑁obs ∑︁ 𝑖=1 𝑚𝑖 ! 𝑤GC ≡𝑀obs 𝑤GC. The final formula for mass loss rate is given by ¤𝑀≈𝑓−1 detect ¤𝑁tracer 𝑀obs 𝑤GC 𝑁tracer . (4) It should be noted that the correction factor 𝑤GC becomes extremely large (> 1000) and is sensitive to the parameters for the isochrone model when the GC is far away from the observer. We find that varying the age of the isochrone from 10 Chen et al. 7 −13 Gyr alters 𝑤GC by several orders of magnitudes when 𝑑⊙,GC ≳60 kpc, where the main sequence turnoff is far below Gaia’s detection limit. Therefore, we exclude NGC 2419 and Pal 14 since they both have 𝑑⊙,GC > 60 kpc. Their inferred mass loss rates likely have too large uncertainties. Except for these two, we verify that the scatter of detection ratio 𝜎log 𝑓in Eq. (2) is dominant over all other potential sources of uncertainties, including the uncertainty in 𝑤GC and the Poisson’s error for small numbers. However, we still include these sources of uncertainties following Y. Chen et al. (2025b) for the subsequent analysis. 4.2. Mass loss rate vs. other cluster properties Figure 5 shows the measured mass loss rate for the high- quality sample as functions of GC’s mass 𝑀GC, effective tidal frequency Ωtid, and 3D half-mass radius 𝑟h. The values of 𝑀GC and 𝑟h are directly taken from the M. Hilker et al. (2019) catalog, while Ωtid is approximated by √ 2 times the orbital frequency. This definition is consistent with M. Gieles & O. Y. Gnedin (2023), who used the singular isothermal sphere profile and adopted F. Renaud et al. (2011)’s definition based on eigenvalues of the tidal tensor. The orbital frequency is computed via integrating the GC’s orbit in our Galactic potential model. These values are listed in Table 1. We find that most GCs have ¤𝑀= 1 −100 M⊙Myr−1. We do not observe any strong correlation between ¤𝑀and other properties of the GC. It should be noted that we do not include streams with fewer than 10 detections in the sample, which excludes streams with very low mass loss rates. For the ¤𝑀– 𝑀GC relation and the ¤𝑀–𝑟h relation, the exclusion of these streams biases the entire relation upward. However, the ¤𝑀–Ωtid relation is affected more because the low-Ωtid GCs tend to reside at large radii. These GCs are more likely to be excluded by the same 𝑁detect > 10 criterion unless their mass loss rates are proportionally higher. Therefore, the slope of the ¤𝑀–Ωtid relation is likely biased low. For this rea- son, we do not analyze the ¤𝑀–Ωtid relation for the remainder of the work. To quantitatively study the correlation between ¤𝑀and the other two properties, we fit ¤𝑀as a multivariate power-law function, ¤𝑀fit = ¤𝑀ref  𝑀GC 105 M⊙ 𝑎 𝑟h 5 pc 𝑐 (5) where we choose different anchor points from Y. Chen et al. (2025b) to better describe the average of each quantity7. Us- ing maximum likelihood estimation allowing for intrinsic 7 We skip the symbol 𝑏for consistency with Y. Chen et al. (2025b), where 𝑏is the slope for the ¤𝑀–Ωtid relation. scatter, we obtain the best-fit parameters: | ¤𝑀ref| = 7.4+1.5 −1.3 M⊙Myr−1 𝜎int = 0.28 ± 0.06 dex 𝑎= 0.15 ± 0.12 𝑐= 0.58 ± 0.34 (6) where the uncertainties are obtained from 1000 bootstrap resampling. We find that the ¤𝑀–𝑀GC slope 𝑎is above but still consistent with zero within 1.3 standard deviations, and the value lies between the no black holes (BHs) model (𝑎= 1/3) and BHs models (𝑎= −1/3, assuming the initial mass 𝑀i = 2 × 105 M⊙) of M. Gieles & O. Y. Gnedin (2023). Our 𝑎is lower than that in Y. Chen et al. (2025b, 𝑎= 0.66 ± 0.37) by 1.4𝜎. The ¤𝑀–𝑟h slope is above zero by 1.7𝜎, still being consistent with Y. Chen et al. (2025b, 𝑐= 0.12 ± 0.72). We also observe larger intrinsic scatter compared to Y. Chen et al. (2025b). This is likely because our measurement includes the uncertainty of the detection method itself, whereas Y. Chen et al. (2025b) assumed zero uncertainty for the detection method. Although trends can differ, the amplitude of ¤𝑀in this work is consistent with M. Gieles & O. Y. Gnedin (2023) and Y. Chen et al. (2025b) within the intrinsic scatter across the full ranges of 𝑀GC, Ωtid, and 𝑟h. In particular, we measured | ¤𝑀| ≈10 M⊙Myr−1 for Pal 5. This is higher than that in Y. Chen et al. (2025b, | ¤𝑀| ≈ 3 M⊙Myr−1) partially because of the 70% increase of 𝑁detect (as mentioned in §3.4). However, this increase is insufficient to explain the 230% increase of ¤𝑀. Since Y. Chen et al. (2025b) measured the average mass loss rate over the past ∼6 Gyr for Pal 5 while this work only measures the past 1 Gyr, the different values likely suggest an accelerating mass loss rate for Pal 5. This scenario is consistent with the prediction by M. Gieles et al. (2021), who showed that Pal 5’s potential high BH abundance ( 𝑓BH ≈20%) could accelerate the mass loss rate from the initial 5−10 M⊙Myr−1 to 10−20 M⊙Myr−1 near the end of its lifetime. The positive ¤𝑀–𝑟h correlation is likely dominated by the low-mass but high- ¤𝑀GCs similar to Pal 5, including NGC 7492, Pal 12, and Whiting 1. These GCs have 𝑀GC ≲ 2 × 104 M⊙but half-mass radii above average, 𝑟h ≳10 pc, making them much “fluffier” than average GCs. Their high mass loss rates place them in closer agreement with M. Gieles & O. Y. Gnedin (2023)’s BHs models (𝑎= −1/3), while they are seemingly outlier of other models. M. Gieles et al. (2021) suggested that these GCs are also BH-rich and are reaching complete tidal dissolution. They also predicted higher-than- average mass loss rates for these GCs, | ¤𝑀| ≈10 M⊙Myr−1. Our measurements agree with their prediction, providing ob- servational support for the BH-rich scenario. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 11 103 104 105 106 MGC (M ) 1 10 102 |M| (M Myr 1) Gieles&Gnedin23 Chen+25 This work no BHs BHs 10 102 tid (Gyr 1) 2 5 10 20 rh (pc) NGC 288 NGC 5024 NGC 5904 NGC 7006 Pal 14 NGC 362 NGC 5053 NGC 6205 NGC 7078 Whiting 1 NGC 1261 NGC 5272 NGC 6218 NGC 7089 NGC 1851 NGC 5466 NGC 6229 NGC 7099 NGC 1904 NGC 5634 NGC 6341 NGC 7492 NGC 2419 NGC 5694 NGC 6752 Pal 1 NGC 4147 NGC 5824 NGC 6934 Pal 5 NGC 4590 NGC 5897 NGC 6981 Pal 12 Figure 5. Mass loss rates of 34 streams in the high-quality sample, plotted against 𝑀GC (left), Ωtid (middle), and 𝑟h (right), with uncertainties shown as errorbars. The best-fit relation for these measurements are shown as light blue shaded regions. For comparison, we also show the BHs and no BHs models from M. Gieles & O. Y. Gnedin (2023, with (𝑎, 𝑏, 𝑐) = (±1/3, 1, 0) and | ¤𝑀ref| = 30 −45 M⊙Myr−1) and the best-fit relations in Y. Chen et al. (2025b) as magenta and gray shaded regions, respectively. Note that Ωtid in Y. Chen et al. (2025b) is smaller by a constant √ 2, which we have accounted for in the comparison here. 5. Summary and discussion We report 87 GC streams detected by the StarStream method in Gaia DR3. Our catalog includes a high-quality sample of 34 streams with 𝐴𝑉< 0.6 and 𝑁bg < 6 × 106 within the 10◦search radius (Figs. 2, 3, 4 and Table 1), and a low-quality sample of 53 streams with higher extinction or background density (Table 2). Based on our validation tests on a similar mock dataset (Fig. 1), our selection criteria for the high-quality sample lead to both median completeness and purity above 50%. Given these metrics, we provide the most complete catalog of GC streams with quantified detec- tion quality. This discovery significantly improves our knowledge of GC streams, as even just the high-quality sample doubles the number of known GC streams to date. Moreover, this sample includes around 75% of the total of 44 GCs with the same selection criteria, which approximately corresponds to the high-latitude region |𝑏| ≳20◦. For the remaining 25%, we have too few detections (𝑁detect ≤10) to confirm neither their existence nor absence. This remarkable recovery fraction sug- gests that tidal streams are common around GCs. However, some streams can have “irregular” morphology or deviate the progenitor’s orbit, contradicting the visual expectation that streams are thin features elongated along the progenitor’s or- bit. Therefore, the physics-based modeling of GC streams is necessary to uncover these streams. Our validation tests verified the near-unity detection ratio for the high-quality sample, enabling us to obtain first-ever unbiased estimate of the stream density. Based on this den- sity, we calculate the mass loss rate of their progenitor GCs (Fig. 5). However, the detection ratio has a log-normal scat- ter varying between 0.15 dex and 0.5 dex depending on the background density (Appendix A and Fig. 6), dominating the uncertainty of ¤𝑀. As discussed in C25, next generation wide-field surveys are likely to reduce this scatter by providing either additional independent observables (such as metallic- ity and line-of-sight velocity) or lower photometry and proper motion uncertainties. By fitting ¤𝑀as a multivariate power-law function of the GC mass 𝑀GC and half-mass radius 𝑟h, we observe slightly posi- tive slopes for both quantities with a large scatter ≈0.3 dex. We note that this positive correlation is likely dominated by several “fluffy” GCs with low mass and large radius. These GCs show high | ¤𝑀| ≈10 M⊙Myr−1. They are consistent with the BH-rich scenario of M. Gieles et al. (2021), where the BH population enhances both the half-mass radius and mass loss rate near the end of the GC’s lifetime. To facilitate access by the broader community, we pub- licly release our detection results on GitHub at https://github. com/ybillchen/StarStream DR, accompanied with an exam- ple notebook for basic instructions. 12 Chen et al. Acknowledgments We thank Monica Valluri, Eric Bell, and Colin Holm- Hansen for insightful discussions. We thank Davide Massari for sharing updated extinction measurements for select GCs. YC and OYG were supported in part by National Aeronautics and Space Administration through contract NAS5-26555 for Space Telescope Science Institute programs HST-AR-16614 and JWST-GO-03433. This research benefited from the Grav- ity in the Local Group conference hosted by the McWilliam’s Center for Cosmology and Astrophysics, Carnegie Mellon University. Software: StarStream (Y. Chen et al. 2025a), agama (E. Vasiliev 2019), numpy (C. R. Harris et al. 2020), matplotlib (J. D. Hunter 2007), scipy (P. Virtanen et al. 2020), astropy ( The Astropy Collaboration et al. 2018), gala (A. M. Price- Whelan 2017; A. Price-Whelan et al. 2024), pandas ( The pandas development team 2024), dustmaps (G. M. Green 2018) Appendix A. Proof of the unbiased estimate of mass loss rate Given the true number density of stream stars 𝑛true(𝜙1, 𝜙2), the true number of stars 𝑁true is 𝑁true = ∬ 𝑛true (𝜙1, 𝜙2)𝑑𝜙1𝑑𝜙2. Similarly, given the number density of simulated tracer particles 𝑛tracer(𝜙1, 𝜙2), the total number of tracers 𝑁tracer is 𝑁tracer = ∬ 𝑛tracer(𝜙1, 𝜙2) 𝑑𝜙1𝑑𝜙2. If we assume that the simulated stream is an unbiased estimate of the true stream, we have 𝑛tracer ∝𝑛true. This is a reasonable assumption since the particle spray algorithm we use has been proved to reproduce multiple morphological properties of GC streams with a typical error ≲10% (Y. Chen et al. 2025c). By defining 𝑛tracer = 𝜆𝑛true, we obtain 𝑁tracer = 𝜆 ∬ 𝑛true(𝜙1, 𝜙2) 𝑑𝜙1𝑑𝜙2 = 𝜆𝑁true. Note that the ratio between the number of stars detected by StarStream 𝑁detect and 𝑁true is the detection ratio 𝑓detect, a fundamental metric for stream detection methods. In C25, we have calculated 𝑓detect for individual mock streams where 𝑁true is known. Replacing 𝑁true in the equation above with 𝑓−1 detect𝑁detect, we obtain 𝑁tracer = 𝜆𝑓−1 detect𝑁detect. (A1) We can also write 𝑁detect as the integral of 𝑛true(𝜙1, 𝜙2) weighted by the spatial selection function 𝑓sel(𝜙1, 𝜙2) 𝑁detect = ∬ 𝑓sel(𝜙1, 𝜙2) 𝑛true(𝜙1, 𝜙2) 𝑑𝜙1𝑑𝜙2 = 𝜆−1 ∬ 𝑓sel(𝜙1, 𝜙2) 𝑛tracer(𝜙1, 𝜙2) 𝑑𝜙1𝑑𝜙2. Therefore, Eq. (A1) becomes 𝑁tracer = 𝑓−1 detect ∬ 𝑓sel(𝜙1, 𝜙2) 𝑛tracer(𝜙1, 𝜙2) 𝑑𝜙1𝑑𝜙2 where 𝜆and 𝜆−1 cancel. Plugging this equation back to Eq. (1) gives a simple formula for the mass loss rate: ¤𝑀≈𝑓−1 detect ¤𝑁tracer 𝑀sel 𝑁tracer . To approximate 𝑓detect for individual streams in the high-quality sample, we use the test dataset in C25 and apply the same selection criteria for this sample: 𝐴𝑉< 0.6, 𝑁bg < 6 × 106, and 𝑁detect > 10. In Fig. 6, we show 𝑓detect as a function of the background density. We find that the 𝑓detect is slightly below but consistent with unity in all ranges of 𝑁bg, while the scatter increases with 𝑁bg. This dependence can be well approximated by a constant mean of 0.9 with a log-normal scatter 𝜎log 𝑓 increasing linearly from 0.15 dex to 0.5 dex when log10 𝑁bg increases from 5.5 to 7. Since the test dataset lacks stream detections below log10 𝑁bg < 5.5, we conservatively fix 𝜎log 𝑓= 0.15 dex for this region although the trend indicates a lower scatter. This simple parametrization is consistent with the more rigorous results by fitting a linear 𝜎log 𝑓– 𝑁bg relation using maximum likelihood estimation. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 13 105 106 107 Nbg within 10 deg 10 1 100 101 fdetect Ndetect/Ntrue Figure 6. Detection ratio 𝑓detect of StarStream from C25 as a function of the background density within 10◦. Individual streams are shown as gray circles. The solid line stands for 𝑓detect,0 = 0.9, while the shaded ranges show the logarithmic scatter 𝜎log 𝑓parametrized as a function of 𝑁bg. Note that the data points here are selected with the same criteria as our high-quality sample, different from the upper panel of Fig. 3 in C25, where no additional selection criterion is employed. B. Low-quality sample We provide the GC properties and number of detections 𝑁detect for the low-quality sample in Table 2. References Amorisco, N. C. 2015, MNRAS, 450, 575, doi: 10.1093/mnras/stv648 Banik, N., Bertone, G., Bovy, J., & Bozorgnia, N. 2018, JCAP, 2018, 061, doi: 10.1088/1475-7516/2018/07/061 Baumgardt, H., H´enault-Brunet, V., Dickson, N., & Sollima, A. 2023, MNRAS, 521, 3991, doi: 10.1093/mnras/stad631 Bernard, E. J., Ferguson, A. M. N., Schlafly, E. F., et al. 2014, MNRAS, 443, L84, doi: 10.1093/mnrasl/slu089 Bonaca, A., & Price-Whelan, A. M. 2025, New Astronomy Reviews, 100, 101713, doi: 10.1016/j.newar.2024.101713 Bonaca, A., Pearson, S., Price-Whelan, A. M., et al. 2020, ApJ, 889, 70, doi: 10.3847/1538-4357/ab5afe Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, doi: 10.1086/167900 Carlberg, R. G., Grillmair, C. J., & Hetherington, N. 2012, ApJ, 760, 75, doi: 10.1088/0004-637X/760/1/75 Chen, Y., Gnedin, O. Y., & Price-Whelan, A. M. 2025a, in prep. Chen, Y., Li, H., & Gnedin, O. Y. 2025b, ApJL, 980, L18, doi: 10.3847/2041-8213/adaf93 Chen, Y., Valluri, M., Gnedin, O. Y., & Ash, N. 2025c, ApJS, 276, 32, doi: 10.3847/1538-4365/ad9904 Dotter, A. 2016, ApJS, 222, 8, doi: 10.3847/0067-0049/222/1/8 Erkal, D., Belokurov, V., Bovy, J., & Sanders, J. L. 2016, MNRAS, 463, 102, doi: 10.1093/mnras/stw1957 Erkal, D., Koposov, S. E., & Belokurov, V. 2017, MNRAS, 470, 60, doi: 10.1093/mnras/stx1208 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2, doi: 10.1051/0004-6361/201629512 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, doi: 10.1051/0004-6361/201833051 Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1, doi: 10.1051/0004-6361/202243940 Gieles, M., Erkal, D., Antonini, F., Balbinot, E., & Pe˜narrubia, J. 2021, Nat Astron, 5, 957, doi: 10.1038/s41550-021-01392-2 Gieles, M., & Gnedin, O. Y. 2023, MNRAS, 522, 5340, doi: 10.1093/mnras/stad1287 Green, G. M. 2018, JOSS, 3, 695, doi: 10.21105/joss.00695 Grillmair, C. J. 2009, ApJ, 693, 1118, doi: 10.1088/0004-637X/693/2/1118 Hallin, A., Shih, D., Krause, C., & Buckley, M. R. 2025, arXiv:2509.08064 [astro-ph], doi: 10.48550/arXiv.2509.08064 Hallin, A., Isaacson, J., Kasieczka, G., et al. 2022, Phys. Rev. D, 106, 055006, doi: 10.1103/PhysRevD.106.055006 Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2 Harris, W. E. 1996, AJ, 112, 1487, doi: 10.1086/118116 Hilker, M., Baumgardt, H., Sollima, A., & Bellini, A. 2019, Proc. IAU, 14, 451, doi: 10.1017/S1743921319006823 14 Chen et al. Table 2. Summary of GC and stream properties in the low-quality sample. The properties are described in Table 1. A plain ASCII version is available at https://github.com/ybillchen/StarStream DR. GC 𝑙 𝑏 𝑑⊙ 𝐴𝑉 𝑁bg 𝑁detect GC 𝑙 𝑏 𝑑⊙ 𝐴𝑉 𝑁bg 𝑁detect (◦) (◦) (kpc) (105) (◦) (◦) (kpc) (105) Arp 2 8.5 −20.8 28.7 0.31 112.4 8000 NGC 6402 21.3 14.8 9.1 1.86 77.0 151 BH 140 303.2 −4.3 4.8 2.11 324.1 20 NGC 6426 28.1 16.2 20.7 1.12 83.0 117 Djor 2 2.8 −2.5 8.8 2.91 425.5 14 NGC 6441 353.5 −5.0 12.7 1.43 461.0 14 E 3 292.3 −19.0 7.9 0.93 46.0 96 NGC 6496 348.0 −10.0 9.6 0.74 361.9 865 IC 1257 16.5 15.1 26.6 2.26 123.2 19 NGC 6535 27.2 10.4 6.4 1.30 71.7 374 IC 4499 307.4 −20.5 18.9 0.71 51.5 85 NGC 6541 349.3 −11.2 7.6 0.34 256.9 777 NGC 2298 245.6 −16.0 9.8 0.68 42.9 537 NGC 6544 5.8 −2.2 2.6 2.36 521.9 351 NGC 2808 282.2 −11.3 10.1 0.68 109.9 4438 NGC 6553 5.3 −3.0 5.3 1.95 498.1 38 NGC 3201 277.2 8.6 4.7 0.74 85.7 121 NGC 6569 0.5 −6.7 10.5 1.64 522.1 154 NGC 4372 301.0 −9.9 5.7 1.49 207.7 248 NGC 6584 342.1 −16.4 13.6 0.31 152.2 200 NGC 4833 303.6 −8.0 6.5 1.02 195.9 739 NGC 6624 2.8 −7.9 8.0 0.87 560.3 843 NGC 5139 309.1 15.0 5.4 0.37 70.2 920 NGC 6637 1.7 −10.3 8.9 0.56 494.7 242 NGC 5286 311.6 10.6 11.1 0.78 103.1 879 NGC 6638 7.9 −7.2 9.8 1.27 555.8 15 NGC 5927 326.6 4.9 8.3 1.30 236.5 11 NGC 6652 1.5 −11.4 9.5 0.28 413.7 68 NGC 6101 317.7 −15.8 14.4 0.16 107.0 128 NGC 6656 9.9 −7.6 3.3 1.12 474.7 304 NGC 6121 351.0 16.0 1.9 1.33 157.9 460 NGC 6715 5.6 −14.1 26.3 0.46 326.9 20007 NGC 6139 342.4 6.9 10.0 2.33 221.7 15 NGC 6723 0.1 −17.3 8.3 0.16 218.6 45 NGC 6144 351.9 15.7 8.2 1.36 187.2 17 NGC 6760 36.1 −3.9 8.4 2.39 176.8 108 NGC 6171 3.4 23.0 5.6 1.02 46.9 663 NGC 6779 62.7 8.3 10.4 0.74 132.7 960 NGC 6254 15.1 23.1 5.1 0.81 28.4 721 NGC 6809 8.8 −23.3 5.3 0.59 68.2 1874 NGC 6287 0.1 11.0 7.9 1.86 312.3 79 Pal 11 31.8 −15.6 14.0 1.08 131.8 19 NGC 6304 355.8 5.4 6.2 1.52 310.0 17 Pal 15 18.8 24.3 44.1 1.24 34.1 24 NGC 6355 359.6 5.4 8.7 2.39 393.4 19 Rup 106 300.9 11.7 20.7 0.62 125.2 65 NGC 6356 6.7 10.2 15.7 0.87 221.3 921 Ter 3 345.1 9.2 7.6 2.26 218.7 29 NGC 6362 325.6 −17.6 7.7 0.22 98.8 893 Ter 7 3.4 −20.1 24.3 0.22 140.5 3312 NGC 6366 18.4 16.0 3.4 2.20 86.9 731 Ter 8 5.8 −24.6 27.5 0.37 61.6 10969 NGC 6397 338.2 −12.0 2.5 0.53 251.1 202 Holm-Hansen, C., Chen, Y., & Gnedin, O. Y. 2025, arXiv:2510.09604 [astro-ph], doi: 10.48550/arXiv.2510.09604 Hunt, J. A., & Vasiliev, E. 2025, New Astronomy Reviews, 100, 101721, doi: 10.1016/j.newar.2024.101721 Hunter, J. D. 2007, CSE, 9, 90, doi: 10.1109/MCSE.2007.55 Ibata, R., Malhan, K., Tenachi, W., et al. 2024, ApJ, 967, 89, doi: 10.3847/1538-4357/ad382d Koposov, S. E., Rix, H.-W., & Hogg, D. W. 2010, ApJ, 712, 260, doi: 10.1088/0004-637X/712/1/260 Kroupa, P. 2001, MNRAS, 322, 231, doi: 10.1046/j.1365-8711.2001.04022.x Kuzma, P. B., Ishigaki, M. N., Kirihara, T., & Ogami, I. 2025, AJ, 170, 157, doi: 10.3847/1538-3881/aded8e K¨upper, A. H. W., Balbinot, E., Bonaca, A., et al. 2015, ApJ, 803, 80, doi: 10.1088/0004-637X/803/2/80 Lynden-Bell, D., & Lynden-Bell, R. M. 1995, MNRAS, 275, 429, doi: 10.1093/mnras/275.2.429 Malhan, K., & Ibata, R. A. 2018, MNRAS, 477, 4063, doi: 10.1093/mnras/sty912 Martin, N. F., Starkenburg, E., Yuan, Z., et al. 2024, A&A, 692, A115, doi: 10.1051/0004-6361/202347633 Massari, D., Aguado-Agelet, F., Monelli, M., et al. 2023, A&A, 680, A20, doi: 10.1051/0004-6361/202347289 StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 15 Meˇstri´c, U., Vanzella, E., Zanella, A., et al. 2022, MNRAS, 516, 3532, doi: 10.1093/mnras/stac2309 Ngan, W. H. W., & Carlberg, R. G. 2014, ApJ, 788, 181, doi: 10.1088/0004-637X/788/2/181 Nibauer, J., & Bonaca, A. 2025, arXiv:2504.07187 [astro-ph], doi: 10.48550/arXiv.2504.07187 Nibauer, J., Bonaca, A., Lisanti, M., Erkal, D., & Hastings, Z. 2024, ApJ, 969, 55, doi: 10.3847/1538-4357/ad4299 Panithanpaisal, N., Sanderson, R. E., Rodriguez, C. L., et al. 2025, arXiv:2509.03599 [astro-ph], doi: 10.48550/arXiv.2509.03599 Pearson, S., Price-Whelan, A. M., & Johnston, K. V. 2017, Nature Astronomy, 1, 633, doi: 10.1038/s41550-017-0220-3 Price-Whelan, A., Souchereau, H., Wagg, T., et al. 2024, Zenodo, doi: 10.5281/zenodo.593786 Price-Whelan, A. M. 2017, JOSS, 2, 388, doi: 10.21105/joss.00388 Renaud, F., Gieles, M., & Boily, C. M. 2011, MNRAS, 418, 759, doi: 10.1111/j.1365-2966.2011.19531.x Rockosi, C. M., Odenkirchen, M., Grebel, E. K., et al. 2002, AJ, 124, 349, doi: 10.1086/340957 Sanders, J. L., & Binney, J. 2013a, MNRAS, 433, 1813, doi: 10.1093/mnras/stt806 Sanders, J. L., & Binney, J. 2013b, MNRAS, 433, 1826, doi: 10.1093/mnras/stt816 Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, doi: 10.1086/305772 Shih, D., Buckley, M. R., Necib, L., & Tamanas, J. 2021, MNRAS, 509, 5992, doi: 10.1093/mnras/stab3372 Shipp, N., Drlica-Wagner, A., Balbinot, E., et al. 2018, ApJ, 862, 114, doi: 10.3847/1538-4357/aacdab Starkenburg, E., Martin, N., Youakim, K., et al. 2017, MNRAS, 471, 2587, doi: 10.1093/mnras/stx1068 The Astropy Collaboration, Price-Whelan, A. M., Sip˝ocz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f The pandas development team. 2024, pandas-dev/pandas: Pandas, Zenodo, doi: 10.5281/zenodo.3509134 Vasiliev, E. 2019, MNRAS, 482, 1525, doi: 10.1093/mnras/sty2672 Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2
Draft version October 17, 2025 Typeset using LATEX twocolumn style in AASTeX7.0.1 StarStream on Gaia: Stream discovery and mass loss rate of globular clusters Yingtian Chen ,1 Oleg Y. Gnedin ,1 and Adrian M. Price-Whelan 2 1 48109, USA 2Center for Computational Astrophysics, Flatiron Institute, New York, NY 10010, USA Abstract We apply the automatic stellar stream detection algorithm StarStream to Gaia Data Release 3 and identify 87 stellar streams associated with Galactic globular clusters (GCs), including 34 high-quality cases with median completeness and purity both exceeding 50%, as estimated from modeling mock streams. These detections double the number of known GC streams, and increase the fraction of GCs with tidal streams at high Galactic latitudes (|b| > 30◦) to 75%. In contrast to visual expectations, many new streams are wide or short, or misaligned with their progenitors' orbits. Taking advantage of the unbiased density measurements enabled by our method, we also estimate the mass loss rate for the progenitor GCs. We find that several low-mass, large-size clusters have enhanced mass loss rates, indicating that they are approaching complete tidal disruption. Keywords: Stellar streams (2166); Globular star clusters (656); Stellar dynamics (1596); Galaxy dynamics (591) 1. Introduction The advent of the Gaia mission ( Gaia Collaboration et al. 2016), in particular the inclusion of high-precision proper motions down to G≈20 by Data Release 2 (DR2, Gaia Collaboration et al. 2018), has greatly reshaped our understanding of the Milky Way (MW) substructure by providing an all-sky map of stars in the full six-dimensional (6D) phase space. By applying variations of the matched-filter technique (e.g., C. M. Rockosi et al. 2002; C. J. Grillmair 2009; E. J. Bernard et al. 2014; N. Shipp et al. 2018) to both the color-magnitude diagram (CMD) and proper motion space, astronomers have identified over one hundred thin, elongated structures recognized as stellar streams (see the review by A. Bonaca & A. M. Price-Whelan 2025). Notably, R. Ibata et al. (2024) employed the STREAMFINDER algorithm (K. Malhan & R. A. Ibata 2018) on Gaia DR3 ( Gaia Collaboration et al. 2023) to uncover 87 thin streams, while A. Hallin et al. (2025) combined the Via Machinae method (D. Shih et al. 2021) with the Cathode algorithm (A. Hallin et al. 2022) to detect around 80 thin streams in Gaia DR2. Many stellar streams are elongated debris of tidally disrupted globular clusters (GCs, see D. Lynden-Bell & R. M. Lynden-Bell 1995). Their morphology and kinematics preserve rich information about interactions with the Galactic environment, including the dark matter halo (S. E. Koposov et al. 2010; A. H. W. K ̈upper et al. 2015; J. Nibauer & A. Corresponding author: Yingtian Chen Bonaca 2025), the rotating bar (S. Pearson et al. 2017; A. Bonaca et al. 2020), the tilting disk (J. Nibauer et al. 2024), and encounters with subhalos, other GCs, or giant molecular clouds (R. G. Carlberg et al. 2012; W. H. W. Ngan & R. G. Carlberg 2014; D. Erkal et al. 2016, 2017; N. Banik et al. 2018). In addition, stellar streams encode key properties of their progenitor GCs, such as the mass loss rate (e.g., A. H. W. K ̈upper et al. 2015; M. Gieles et al. 2021; Y. Chen et al. 2025b), which is directly related to the stream density. Such measurements provide valuable constraints on the N-body simulations of dynamical evolution of GCs. However, most stream detections do not include quantified estimates of completeness and purity, leading to unknown systematic uncertainties in the inferred mass loss rates. Furthermore, fewer than 20 GCs have so far been confidently associated with streams, while most clusters (> 150, M. Hilker et al. 2019) still lack stream detections. Although tidally stripped stars have been widely observed around GCs (e.g., P. B. Kuzma et al. 2025), thin, extended "stream-like" features remain absent along the orbits of most GCs. Recent advances in stream formation theory provide insights to the missing-stream puzzle. N. C. Amorisco (2015) pointed out that streams may appear dynamically hot or spatially complex, depending on the progenitor's mass and orbit. Moreover, streams can deviate the progenitor's orbit in a nonspherical or time-evolving Galactic potential (J. L. Sanders & J. Binney 2013a,b; N. Panithanpaisal et al. 2025). These effects may be amplified depending on the viewing angle. As a result, traditional detection approaches based on the visual expectation that streams are thin features elongated along the progenitor's orbit tend to miss these "irregular" streams. 16 Oct 2025 2 Chen et al. The limitations of traditional methods motivated us to develop StarStream (Y. Chen et al. 2025a, hereafter C25), a physics-based method that makes no prior assumptions about stream morphology. The method employs kernel density estimation (KDE) to build a mixture model of stream and background stars, and incorporates the fast and accurate particle spray algorithm of Y. Chen et al. (2025c) to generate a realistic stream model in the spatial, velocity, and colormagnitude spaces. C25 quantified the detection performance of StarStream using a suite of validation tests on a mock dataset tailored to Gaia DR3. StarStream demonstrates purity and completeness both above ∼65% at high Galactic latitudes (|b| > 30◦), even after accounting for dust extinction. The high detection quality makes StarStream a powerful tool to uncover GC streams that may have been missed by previous methods. Its quantified performance further allows us to derive unbiased estimates of the mass loss rate for these clusters. In this work, we apply StarStream to Gaia DR3 fields around MW GCs. In §2, we provide an overview of StarStream and the Gaia DR3 dataset. We then present the discovery of new streams in §3, followed by the calculation of the mass loss rates in §4. Finally, we summarize and discuss our findings in §5. 2. Method We apply the StarStream algorithm to Gaia DR3 stars around MW GCs to identify potential stream members. In this section, we provide an overview of the StarStream algorithm, including our adjustment to the Gaia DR3 dataset. 2.1. Overview of StarStream We refer to C25 for the complete description and validation of the StarStream algorithm3. Here, we briefly recap the key concepts of StarStream. StarStream uses mixture modeling to distinguish the GC stream from the background. The probability density function is given by p(x) = fs ps(x) + (1 -fs) pbg(x) where xis an arbitrary point in the multi-dimensional space of observables. We introduce the stream fraction fs to characterize the ratio between the stream model ps(x) and background model pbg(x). Both models are represented by Gaussian KDE constructed on tracer particles in the multi-dimensional space. For Gaia DR3 specifically, we use six observables, including two sky coordinates, two corresponding proper motions, BP -RP color, and G-band magnitude. Instead of directly using the raw right ascension and declination provided 3 The StarStream algorithm is published on GitHub at https://github.com/ ybillchen/StarStream, where we also provide example Python notebooks for running the code. by Gaia, we rotate the coordinate system for each GC such that the GC is located at the origin with the velocity vector along the new longitude. This coordinate system ensures an almost identity metric tensor g ≈I around the GC, which is necessary for the KDE to yield the correct probability density. To construct the stream model, we simulate mock stream for each GC using the particle spray model by Y. Chen et al. (2025c). This method requires integrating the orbits of both the progenitor and tracer particles in a predefined Galactic potential model. We employ the MilkyWayPotential2022 model implemented within gala (A. M. Price-Whelan 2017; A. Price-Whelan et al. 2024), which has been validated against MW mass measurements out to ∼150 kpc (J. A. Hunt & E. Vasiliev 2025). We release tracer particles over the last 1 Gyr assuming a uniform ejection rate ¤Ntracer = 4 Myr-1. We then sample the mass of each particle from the P. Kroupa (2001) initial mass function, with the minimum stellar mass being set to the lowest possible mass of the closest tracer particle that remains above the detection limit. Based on the stellar mass, we calculate the color and magnitude of each tracer particle from the MESA Isochrones and Stellar Tracks (MIST, A. Dotter 2016; U. Meˇstri ́c et al. 2022) model. In C25, we have verified that these settings are sufficient to fully sample the position space, the proper motion space, and the CMD. The KDE is constructed on these tracers with no correlation between each dimension. We set the kernel bandwidth to 0.1 times the standard deviation of all tracer particles in each dimension, except that we fix the bandwidth to 0.1 for magnitudes and 0.02 for colors. Varying these values by a factor of 0.5 -2 has a negligible effect on our results. It is worth noting that we convolve the kernels with the observational uncertainties when evaluating the KDE for observed stars. This is equivalent to increasing the bandwidths of KDE kernels. For the background, we randomly select 104 stars from the real data as the tracer particles for constructing KDE. We adopt bandwidths of 0.5◦for position, 1 mas yr-1 for proper motions, and 0.1 for both color and magnitude. To speed up KDE evaluation, we employ the grid interpolation technique, where the grid spacings are set equal to the corresponding bandwidths. We have tested that the final results are not sensitive to the choice of bandwidths or grid spacing. Once we have constructed the stream model and background model, we estimate the stream fraction fs by maximizing the log-likelihood for Nstars in the observational dataset, ln L ≡ N ∑︁ i=1 ln fsps(xi) + (1 -fs)pbg(xi) in which xirepresent the multi-dimensional coordinate of the i-th star in the dataset. The best-fit stream and background StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 3 probability densities for star iare then given by fsps(xi) and (1 -fs)pbg(xi), respectively. Therefore, we can define the membership probability that star ibelongs to the stream as Ps,i≡ fsps(xi) fsps(xi) + (1 -fs)pbg(xi) . We consider stars with membership probability greater than a threshold Pth ≡0.5 to be identified as stream members. 2.2. Observational datasets We apply the same selection criteria as C25 to Gaia DR3. Specifically, we restrict to stars with valid measurements of right ascension, declination, proper motions, BP -RP color, G-band magnitude, and the corresponding uncertainties. We then select all stars with G 1, leaving 128 GCs in our final sample. We then fit the age and metallicity of individual GCs using a grid of age = 7, 9, 11, 13 Gyr and [Fe/H] varying between ±1 around the value in the W. E. Harris (1996) catalog, with 4 https://people.smp.uq.edu.au/HolgerBaumgardt/globular/ 5 Private comm. in the context of the Cluster Ages to Reconstruct the Milky Way Assembly (CARMA) project (D. Massari et al. 2023). 6 https://physics.mcmaster.ca/∼harris/mwgc.dat a uniform spacing of 0.1. We use the age and metallicity that minimize the residual sum of squares of BP-RP color around the isochrone. We select GC member stars within the halfmass radius from the M. Hilker et al. (2019) catalog, with an additional proper motion selection Δμα, Δμδ= ±1 mas yr-1. We have verified that the best-fit metallicities are consistent with the W. E. Harris (1996) values, with a standard deviation of 0.4. Although multiple metallicity values may lead to similar fits due to the age-metallicity degeneracy, it is worth noting that the specific values do not significantly affect the performance of StarStream, as long as we can reproduce the shape of the isochrone. For GCs with too few ( 0.6 or Nbg > 6×106. These values approximately correspond to the low-Galactic latitude region |b| ≲20◦, where the high dust reddening and background contamination significantly affect the detection quality. On the other hand, the method achieves acceptable completeness and purity for the low-AVand lowNbg GCs. For mock GCs with AV 0.6 or Nbg > 6 × 106, Ndetect is indistinguishable from Nnull. However, in the high-latitude regions with AV 10◦, such as Palomar 14 (Pal 14). Although the misalignment is expected (J. L. Sanders & J. Binney 2013a,b; N. Panithanpaisal et al. 2025) and is likely visually enhanced as these streams have highly eccentric orbits, previous searches of GC streams tended to focus along the GC's motion and preferentially found well-aligned streams such as Pal 5, whose misalignment angle is smaller than 5◦. The detection of these "irregular" or misaligned streams highlights the power of the physics-based modeling of GC streams by StarStream. As many GC streams can be dynamically hot or spatially complex depending on the GC's mass and orbit (N. C. Amorisco 2015), these streams are likely missed by traditional visually-based methods. In Fig. 3, we show the CMD of detected stream stars using Gaia photometry. Most stars are gathered around or below the main sequence turn off. Since StarStream takes into account the color uncertainty that typically grows with the G-magnitude, the color spread of detected stars also becomes larger near the fainter end. On the other hand, the color spread of simulated streams is only due to the distance spread and is thus almost invariant of G-magnitude. 3.3. Extended detection for long streams We find 17 streams in the high-quality sample that extend beyond the default search radius, or r90 > 10◦. To obtain a more complete detection for these streams, we rerun StarStream for these streams in the 10◦-20◦annulus around each GC. For consistency, we keep all other parameters of the method the same as those inside 10◦. In Fig. 4, we show the three streams, NGC 5272, 1851, and 5024, with more than 10 new detections outside 10◦. Note that the number of background stars approximately triples in the annulus, significantly reducing the signal-to-noise ratio in this region. Therefore, we should not expect the completeness and purity to stay unchanged in the extended area. 3.4. Comparison with previous detections R. Ibata et al. (2024) applied the STREAMFINDER algorithm to Gaia DR3 and provided the most complete catalog of GC streams prior to this work. They found 16 GCs that are associated with stellar streams. However, three of their streams, NGC 4590, 5139, and 5904, are not directly connected to the GC and thus cannot be compared with our results. For the remaining 13 streams, 8 are in our high-quality sample: NGC 288, 1261, 1851, 5466, 6341, 7089, 7099, and Pal 5, whereas 5 are in our low-quality sample: NGC 2298, 2808, 3201, 6101, and 6397. For the high-quality sample, our method always yields higher Ndetect within the 10◦radius by 20 -6000%, with a median of 300%. For the low-quality sample, our method detects more stars for NGC 2298, 2808, 6101, and fewer for the remaining two. However, since the low-quality sample has large background contamination and extinction, either method requires follow-up observations for further confirmation. For Pal 5 specifically, we detect 131 member stars while STREAMFINDER detects 109. However, since our method focuses on the stream segment released in the last 1 Gyr, our detections are all concentrated in the inner 5◦. For this region, R. Ibata et al. (2024) reported 76 stars. The 70% improvement of our method is remarkable since Pal 5 is one of the most complete streams known to date. Recently, P. B. Kuzma et al. (2025) identified potential tidal structures within 5◦around 22 MW GCs using the Pristine-Gaia synthetic catalog from Pristine Data Release 1 (E. Starkenburg et al. 2017; N. F. Martin et al. 2024). 13 of them overlap our high-quality sample: NGC 362, 1261, 1851, 1904, 5272, 6205, 6341, 6934, 6981, 7078, 7089, 7099. It is worth noting that many of their tidal structures do not show stream-like feature. This agrees with our finding that although many GC streams are "irregular", the tidal streams around GCs are more common than was previously detected. 4. Mass loss rate of globular clusters 4.1. Calculation of mass loss rate Using the stream stars identified by StarStream, we next measure the orbit-averaged mass loss rate ¤Mof the progenitor GCs. We use an updated approach from Y. Chen et al. (2025b) starting from their Eq. (3): ¤M≈ ¤Ntracer Msel ∫∫ fsel(φ1, φ2) ntracer(φ1, φ2) dφ1dφ2 (1) where Msel is the total detected mass of stream stars. Msel is equivalent to the total mass selected by the effective spatial selection function fsel(φ1, φ2) of StarStream. ¤Ntracer is the number ejection rate of tracer particles above the detection limit. The integral is over the number density of tracer particles ntracer(φ1, φ2) weighted by the selection function. Y. Chen et al. (2025b) approximates fsel as a Gaussian tube around the stream track when analyzing the R. Ibata et al. (2024) stream catalog. Here, however, we can make further simplification considering the quantified detection ratio StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 7 Table 1. Summary of GC and stream properties for the high-quality sample. The Galactic longitude l, latitude b, heliocentric distance d⊙, cluster mass MGC, and 3D half-mass radius rh are taken from the M. Hilker et al. (2019) catalog. The extinction AVis either from D. Massari et al., W. E. Harris (1996) catalog, or D. J. Schlegel et al. (1998) map, see §2.2. Nbg is the number of CMD-selected Gaia DR3 stars within the 10◦search radius. The number of detections Ndetect, tidal frequency Ωtid, and cluster mass loss rate | ¤M| are calculated in this work. A plain ASCII version is available at https://github.com/ybillchen/StarStream DR. GC l b d⊙ AV Nbg Ndetect MGC Ωtid rh | ¤M| (◦) (◦) (kpc) (105) (105 M⊙) (Gyr-1) (pc) (M⊙Myr-1) NGC 288 151.3 -89.4 9.0 0.06 2.2 494 0.962 40.2 8.6 2.7 +1.1 -0.8 NGC 362 301.5 -46.2 8.8 0.09 16.6 1030 2.520 42.2 3.4 10.3 +11.2 -5.4 NGC 1261 270.5 -52.1 16.4 0.03 3.7 560 1.720 24.5 4.9 12.7 +6.1 -4.1 NGC 1851 244.5 -35.0 11.9 0.12 5.8 1483 2.830 25.9 3.2 18.7 +12.0 -7.3 NGC 1904 227.2 -29.4 13.1 0.03 7.5 263 1.810 27.6 4.3 19.1 +15.1 -8.4 NGC 2419 180.4 25.2 88.5 0.25 8.2 138 7.830 4.4 26.4 - NGC 4147 252.8 77.2 18.5 0.06 2.4 108 0.451 20.1 3.5 2.5 +1.1 -0.8 NGC 4590 299.6 36.1 10.4 0.16 9.5 68 1.280 13.7 7.3 1.1 +1.0 -0.5 NGC 5024 333.0 79.8 18.5 0.06 2.7 706 5.020 18.2 10.0 37.0 +15.9 -11.1 NGC 5053 335.7 78.9 17.5 0.03 2.8 986 0.628 20.5 17.0 29.4 +12.9 -9.0 NGC 5272 42.2 78.7 10.2 0.03 2.7 503 4.090 27.6 5.5 6.2 +2.7 -1.9 NGC 5466 42.1 73.6 16.1 0.00 2.9 162 0.561 8.6 13.8 4.1 +1.8 -1.3 NGC 5634 342.2 49.3 26.0 0.16 7.1 62 2.470 22.5 7.7 10.8 +8.7 -4.8 NGC 5694 331.1 30.4 34.8 0.28 22.3 20 2.690 7.6 4.3 7.3 +10.1 -4.2 NGC 5824 332.6 22.1 31.7 0.40 49.2 131 7.460 10.8 6.3 62.5 +116.0 -40.6 NGC 5897 342.9 30.3 12.6 0.37 21.2 764 1.670 49.6 10.9 16.9 +20.5 -9.3 NGC 5904 3.9 46.8 7.5 0.09 7.1 176 3.920 17.7 5.6 2.4 +1.7 -1.0 NGC 6205 59.0 40.9 7.4 0.03 7.1 2209 4.840 56.4 5.2 16.7 +12.0 -7.0 NGC 6218 15.7 26.3 5.1 0.59 23.7 806 1.060 86.2 4.1 3.9 +5.0 -2.2 NGC 6229 73.6 40.3 30.1 0.03 6.5 79 2.470 17.1 4.8 25.1 +24.8 -12.5 NGC 6341 68.3 34.9 8.5 0.03 7.3 356 2.730 48.2 3.6 3.1 +2.3 -1.3 NGC 6752 336.5 -25.6 4.1 0.12 41.1 478 2.610 71.2 4.8 2.9 +4.6 -1.8 NGC 6934 52.1 -18.9 15.7 0.31 52.9 83 1.500 9.4 4.7 3.9 +6.8 -2.5 NGC 6981 35.2 -32.7 16.7 0.16 17.8 628 0.812 22.1 5.8 18.0 +20.3 -9.5 NGC 7006 63.8 -19.4 39.3 0.16 42.6 108 1.320 9.0 6.6 81.5 +132.3 -50.4 NGC 7078 65.0 -27.3 10.7 0.31 15.5 504 5.180 41.7 3.7 5.9 +6.3 -3.0 NGC 7089 53.4 -35.8 11.7 0.16 10.1 814 6.240 27.1 4.8 14.6 +12.7 -6.8 NGC 7099 27.2 -46.8 8.5 0.12 7.0 728 1.210 55.5 4.3 4.3 +3.1 -1.8 NGC 7492 53.4 -63.5 24.4 0.00 3.4 96 0.197 19.0 10.6 9.0 +5.3 -3.3 Pal 1 130.1 19.0 11.2 0.46 17.0 445 0.009 17.1 3.4 4.9 +5.5 -2.6 Pal 5 0.8 45.9 21.9 0.09 8.5 131 0.134 21.5 27.6 10.1 +8.3 -4.6 Pal 12 30.5 -47.7 18.5 0.06 7.3 228 0.062 6.7 10.5 8.7 +6.4 -3.7 Pal 14 28.7 42.2 73.6 0.12 7.9 117 0.191 4.0 37.7 - Whiting 1 161.6 -60.6 30.6 0.09 2.1 99 0.014 5.4 16.5 26.0 +21.9 -11.9 8 Chen et al. 16 18 20 G 16 18 20 G 16 18 20 G 16 18 20 G 16 18 20 G 16 18 20 G 0 0.5 1 1.5 BP RP 0 0.5 1 1.5 BP RP 16 18 20 G 0 0.5 1 1.5 BP RP 0 0.5 1 1.5 BP RP 0 0.5 1 1.5 BP RP 0.5 1.0 Probability Figure 3. Similar to Fig. 2, but for the color-magnitude space Gvs. BP -RP. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 9 5 0 5 2 (deg) 5 0 5 2 (deg) 20 10 0 10 20 1 (deg) 5 0 5 2 (deg) Figure 4. Similar to Fig. 2, but for extended streams with r90 > 10◦. Only streams with more than 10 extended detections outside the original 10◦search radius are shown here, with the extended detections shown as open symbols. The original search radius is marked as dashed circles in each panel. fdetect between the expectation of the number of detections Ndetect by StarStream and the true number Ntrue (see C25). In Appendix A, we prove that simply replacing the integral in Eq. (1) by the total number of tracer particles Ntracer times fdetect yields an unbiased estimate of the mass loss rate: ¤M≈f-1 detect ¤Ntracer Msel Ntracer (2) where fdetect ≡Ndetect/Ntrue varies with individual streams. Using the tests by C25, we find that fdetect is always centered at 0.9 for the high-quality sample, with a log-normal scatter that depends on the background density Nbg. In Appendix A, we approximate the standard deviation of the log-normal scatter σlog fas a linear function of log10 Nbg: σlog fincreases from 0.15 dex to 0.5 dex when log10 Nbg increases from 5.5 to 7, and is fixed to σlog f= 0.15 dex below log10 Nbg 0.15 dex (see Appendix A). It is thus reasonable to adopt d⊙,i≈d⊙,GC. Correspondingly, we can define mmin,i= mmin(d⊙,GC) ≡mmin,GC and wi= w(mmin,GC) ≡wGC. Eq. (3) is thus simplified to Msel ≈ Nobs ∑︁ i=1 miwGC = Nobs ∑︁ i=1 mi ! wGC ≡Mobs wGC. The final formula for mass loss rate is given by ¤M≈f-1 detect ¤Ntracer Mobs wGC Ntracer . (4) It should be noted that the correction factor wGC becomes extremely large (> 1000) and is sensitive to the parameters for the isochrone model when the GC is far away from the observer. We find that varying the age of the isochrone from 10 Chen et al. 7 -13 Gyr alters wGC by several orders of magnitudes when d⊙,GC ≳60 kpc, where the main sequence turnoff is far below Gaia's detection limit. Therefore, we exclude NGC 2419 and Pal 14 since they both have d⊙,GC > 60 kpc. Their inferred mass loss rates likely have too large uncertainties. Except for these two, we verify that the scatter of detection ratio σlog fin Eq. (2) is dominant over all other potential sources of uncertainties, including the uncertainty in wGC and the Poisson's error for small numbers. However, we still include these sources of uncertainties following Y. Chen et al. (2025b) for the subsequent analysis. 4.2. Mass loss rate vs. other cluster properties Figure 5 shows the measured mass loss rate for the highquality sample as functions of GC's mass MGC, effective tidal frequency Ωtid, and 3D half-mass radius rh. The values of MGC and rh are directly taken from the M. Hilker et al. (2019) catalog, while Ωtid is approximated by √ 2 times the orbital frequency. This definition is consistent with M. Gieles & O. Y. Gnedin (2023), who used the singular isothermal sphere profile and adopted F. Renaud et al. (2011)'s definition based on eigenvalues of the tidal tensor. The orbital frequency is computed via integrating the GC's orbit in our Galactic potential model. These values are listed in Table 1. We find that most GCs have ¤M= 1 -100 M⊙Myr-1. We do not observe any strong correlation between ¤Mand other properties of the GC. It should be noted that we do not include streams with fewer than 10 detections in the sample, which excludes streams with very low mass loss rates. For the ¤MMGC relation and the ¤M-rh relation, the exclusion of these streams biases the entire relation upward. However, the ¤M-Ωtid relation is affected more because the low-Ωtid GCs tend to reside at large radii. These GCs are more likely to be excluded by the same Ndetect > 10 criterion unless their mass loss rates are proportionally higher. Therefore, the slope of the ¤M-Ωtid relation is likely biased low. For this reason, we do not analyze the ¤M-Ωtid relation for the remainder of the work. To quantitatively study the correlation between ¤Mand the other two properties, we fit ¤Mas a multivariate power-law function, ¤Mfit = ¤Mref MGC 105 M⊙ a rh 5 pc c (5) where we choose different anchor points from Y. Chen et al. (2025b) to better describe the average of each quantity7. Using maximum likelihood estimation allowing for intrinsic 7 We skip the symbol bfor consistency with Y. Chen et al. (2025b), where bis the slope for the ¤M-Ωtid relation. scatter, we obtain the best-fit parameters: | ¤Mref| = 7.4+1.5 -1.3 M⊙Myr-1 σint = 0.28 ± 0.06 dex a= 0.15 ± 0.12 c= 0.58 ± 0.34 (6) where the uncertainties are obtained from 1000 bootstrap resampling. We find that the ¤M-MGC slope ais above but still consistent with zero within 1.3 standard deviations, and the value lies between the no black holes (BHs) model (a= 1/3) and BHs models (a= -1/3, assuming the initial mass Mi = 2 × 105 M⊙) of M. Gieles & O. Y. Gnedin (2023). Our ais lower than that in Y. Chen et al. (2025b, a= 0.66 ± 0.37) by 1.4σ. The ¤M-rh slope is above zero by 1.7σ, still being consistent with Y. Chen et al. (2025b, c= 0.12 ± 0.72). We also observe larger intrinsic scatter compared to Y. Chen et al. (2025b). This is likely because our measurement includes the uncertainty of the detection method itself, whereas Y. Chen et al. (2025b) assumed zero uncertainty for the detection method. Although trends can differ, the amplitude of ¤Min this work is consistent with M. Gieles & O. Y. Gnedin (2023) and Y. Chen et al. (2025b) within the intrinsic scatter across the full ranges of MGC, Ωtid, and rh. In particular, we measured | ¤M| ≈10 M⊙Myr-1 for Pal 5. This is higher than that in Y. Chen et al. (2025b, | ¤M| ≈ 3 M⊙Myr-1) partially because of the 70% increase of Ndetect (as mentioned in §3.4). However, this increase is insufficient to explain the 230% increase of ¤M. Since Y. Chen et al. (2025b) measured the average mass loss rate over the past ∼6 Gyr for Pal 5 while this work only measures the past 1 Gyr, the different values likely suggest an accelerating mass loss rate for Pal 5. This scenario is consistent with the prediction by M. Gieles et al. (2021), who showed that Pal 5's potential high BH abundance ( fBH ≈20%) could accelerate the mass loss rate from the initial 5-10 M⊙Myr-1 to 10-20 M⊙Myr-1 near the end of its lifetime. The positive ¤M-rh correlation is likely dominated by the low-mass but high- ¤MGCs similar to Pal 5, including NGC 7492, Pal 12, and Whiting 1. These GCs have MGC ≲ 2 × 104 M⊙but half-mass radii above average, rh ≳10 pc, making them much "fluffier" than average GCs. Their high mass loss rates place them in closer agreement with M. Gieles & O. Y. Gnedin (2023)'s BHs models (a= -1/3), while they are seemingly outlier of other models. M. Gieles et al. (2021) suggested that these GCs are also BH-rich and are reaching complete tidal dissolution. They also predicted higher-thanaverage mass loss rates for these GCs, | ¤M| ≈10 M⊙Myr-1. Our measurements agree with their prediction, providing observational support for the BH-rich scenario. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 11 103 104 105 106 MGC (M ) 1 10 102 |M| (M Myr 1) Gieles&Gnedin23 Chen+25 This work no BHs BHs 10 102 tid (Gyr 1) 2 5 10 20 rh (pc) NGC 288 NGC 5024 NGC 5904 NGC 7006 Pal 14 NGC 362 NGC 5053 NGC 6205 NGC 7078 Whiting 1 NGC 1261 NGC 5272 NGC 6218 NGC 7089 NGC 1851 NGC 5466 NGC 6229 NGC 7099 NGC 1904 NGC 5634 NGC 6341 NGC 7492 NGC 2419 NGC 5694 NGC 6752 Pal 1 NGC 4147 NGC 5824 NGC 6934 Pal 5 NGC 4590 NGC 5897 NGC 6981 Pal 12 Figure 5. Mass loss rates of 34 streams in the high-quality sample, plotted against MGC (left), Ωtid (middle), and rh (right), with uncertainties shown as errorbars. The best-fit relation for these measurements are shown as light blue shaded regions. For comparison, we also show the BHs and no BHs models from M. Gieles & O. Y. Gnedin (2023, with (a, b, c) = (±1/3, 1, 0) and | ¤Mref| = 30 -45 M⊙Myr-1) and the best-fit relations in Y. Chen et al. (2025b) as magenta and gray shaded regions, respectively. Note that Ωtid in Y. Chen et al. (2025b) is smaller by a constant √ 2, which we have accounted for in the comparison here. 5. Summary and discussion We report 87 GC streams detected by the StarStream method in Gaia DR3. Our catalog includes a high-quality sample of 34 streams with AV 10. In Fig. 6, we show fdetect as a function of the background density. We find that the fdetect is slightly below but consistent with unity in all ranges of Nbg, while the scatter increases with Nbg. This dependence can be well approximated by a constant mean of 0.9 with a log-normal scatter σlog f increasing linearly from 0.15 dex to 0.5 dex when log10 Nbg increases from 5.5 to 7. Since the test dataset lacks stream detections below log10 Nbg < 5.5, we conservatively fix σlog f= 0.15 dex for this region although the trend indicates a lower scatter. This simple parametrization is consistent with the more rigorous results by fitting a linear σlog f- Nbg relation using maximum likelihood estimation. StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 13 105 106 107 Nbg within 10 deg 10 1 100 101 fdetect Ndetect/Ntrue Figure 6. Detection ratio fdetect of StarStream from C25 as a function of the background density within 10◦. Individual streams are shown as gray circles. The solid line stands for fdetect,0 = 0.9, while the shaded ranges show the logarithmic scatter σlog fparametrized as a function of Nbg. Note that the data points here are selected with the same criteria as our high-quality sample, different from the upper panel of Fig. 3 in C25, where no additional selection criterion is employed. B. Low-quality sample We provide the GC properties and number of detections Ndetect for the low-quality sample in Table 2. References Amorisco, N. C. 2015, MNRAS, 450, 575, Banik, N., Bertone, G., Bovy, J., & Bozorgnia, N. 2018, JCAP, 2018, 061, Baumgardt, H., H ́enault-Brunet, V., Dickson, N., & Sollima, A. 2023, MNRAS, 521, 3991, Bernard, E. J., Ferguson, A. M. N., Schlafly, E. F., et al. 2014, MNRAS, 443, L84, Bonaca, A., & Price-Whelan, A. M. 2025, New Astronomy Reviews, 100, 101713, Bonaca, A., Pearson, S., Price-Whelan, A. M., et al. 2020, ApJ, 889, 70, Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245, Carlberg, R. G., Grillmair, C. J., & Hetherington, N. 2012, ApJ, 760, 75, Chen, Y., Gnedin, O. Y., & Price-Whelan, A. M. 2025a, in prep. Chen, Y., Li, H., & Gnedin, O. Y. 2025b, ApJL, 980, L18, Chen, Y., Valluri, M., Gnedin, O. Y., & Ash, N. 2025c, ApJS, 276, 32, Dotter, A. 2016, ApJS, 222, 8, Erkal, D., Belokurov, V., Bovy, J., & Sanders, J. L. 2016, MNRAS, 463, 102, Erkal, D., Koposov, S. E., & Belokurov, V. 2017, MNRAS, 470, 60, Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, A&A, 595, A2, Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1, Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, A&A, 674, A1, Gieles, M., Erkal, D., Antonini, F., Balbinot, E., & Pe ̃narrubia, J. 2021, Nat Astron, 5, 957, Gieles, M., & Gnedin, O. Y. 2023, MNRAS, 522, 5340, Green, G. M. 2018, JOSS, 3, 695, Grillmair, C. J. 2009, ApJ, 693, 1118, Hallin, A., Shih, D., Krause, C., & Buckley, M. R. 2025, , Hallin, A., Isaacson, J., Kasieczka, G., et al. 2022, Phys. Rev. D, 106, 055006, Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, Harris, W. E. 1996, AJ, 112, 1487, Hilker, M., Baumgardt, H., Sollima, A., & Bellini, A. 2019, Proc. IAU, 14, 451, 14 Chen et al. Table 2. Summary of GC and stream properties in the low-quality sample. The properties are described in Table 1. A plain ASCII version is available at https://github.com/ybillchen/StarStream DR. GC l b d⊙ AV Nbg Ndetect GC l b d⊙ AV Nbg Ndetect (◦) (◦) (kpc) (105) (◦) (◦) (kpc) (105) Arp 2 8.5 -20.8 28.7 0.31 112.4 8000 NGC 6402 21.3 14.8 9.1 1.86 77.0 151 BH 140 303.2 -4.3 4.8 2.11 324.1 20 NGC 6426 28.1 16.2 20.7 1.12 83.0 117 Djor 2 2.8 -2.5 8.8 2.91 425.5 14 NGC 6441 353.5 -5.0 12.7 1.43 461.0 14 E 3 292.3 -19.0 7.9 0.93 46.0 96 NGC 6496 348.0 -10.0 9.6 0.74 361.9 865 IC 1257 16.5 15.1 26.6 2.26 123.2 19 NGC 6535 27.2 10.4 6.4 1.30 71.7 374 IC 4499 307.4 -20.5 18.9 0.71 51.5 85 NGC 6541 349.3 -11.2 7.6 0.34 256.9 777 NGC 2298 245.6 -16.0 9.8 0.68 42.9 537 NGC 6544 5.8 -2.2 2.6 2.36 521.9 351 NGC 2808 282.2 -11.3 10.1 0.68 109.9 4438 NGC 6553 5.3 -3.0 5.3 1.95 498.1 38 NGC 3201 277.2 8.6 4.7 0.74 85.7 121 NGC 6569 0.5 -6.7 10.5 1.64 522.1 154 NGC 4372 301.0 -9.9 5.7 1.49 207.7 248 NGC 6584 342.1 -16.4 13.6 0.31 152.2 200 NGC 4833 303.6 -8.0 6.5 1.02 195.9 739 NGC 6624 2.8 -7.9 8.0 0.87 560.3 843 NGC 5139 309.1 15.0 5.4 0.37 70.2 920 NGC 6637 1.7 -10.3 8.9 0.56 494.7 242 NGC 5286 311.6 10.6 11.1 0.78 103.1 879 NGC 6638 7.9 -7.2 9.8 1.27 555.8 15 NGC 5927 326.6 4.9 8.3 1.30 236.5 11 NGC 6652 1.5 -11.4 9.5 0.28 413.7 68 NGC 6101 317.7 -15.8 14.4 0.16 107.0 128 NGC 6656 9.9 -7.6 3.3 1.12 474.7 304 NGC 6121 351.0 16.0 1.9 1.33 157.9 460 NGC 6715 5.6 -14.1 26.3 0.46 326.9 20007 NGC 6139 342.4 6.9 10.0 2.33 221.7 15 NGC 6723 0.1 -17.3 8.3 0.16 218.6 45 NGC 6144 351.9 15.7 8.2 1.36 187.2 17 NGC 6760 36.1 -3.9 8.4 2.39 176.8 108 NGC 6171 3.4 23.0 5.6 1.02 46.9 663 NGC 6779 62.7 8.3 10.4 0.74 132.7 960 NGC 6254 15.1 23.1 5.1 0.81 28.4 721 NGC 6809 8.8 -23.3 5.3 0.59 68.2 1874 NGC 6287 0.1 11.0 7.9 1.86 312.3 79 Pal 11 31.8 -15.6 14.0 1.08 131.8 19 NGC 6304 355.8 5.4 6.2 1.52 310.0 17 Pal 15 18.8 24.3 44.1 1.24 34.1 24 NGC 6355 359.6 5.4 8.7 2.39 393.4 19 Rup 106 300.9 11.7 20.7 0.62 125.2 65 NGC 6356 6.7 10.2 15.7 0.87 221.3 921 Ter 3 345.1 9.2 7.6 2.26 218.7 29 NGC 6362 325.6 -17.6 7.7 0.22 98.8 893 Ter 7 3.4 -20.1 24.3 0.22 140.5 3312 NGC 6366 18.4 16.0 3.4 2.20 86.9 731 Ter 8 5.8 -24.6 27.5 0.37 61.6 10969 NGC 6397 338.2 -12.0 2.5 0.53 251.1 202 Holm-Hansen, C., Chen, Y., & Gnedin, O. Y. 2025, , Hunt, J. A., & Vasiliev, E. 2025, New Astronomy Reviews, 100, 101721, Hunter, J. D. 2007, CSE, 9, 90, Ibata, R., Malhan, K., Tenachi, W., et al. 2024, ApJ, 967, 89, Koposov, S. E., Rix, H.-W., & Hogg, D. W. 2010, ApJ, 712, 260, Kroupa, P. 2001, MNRAS, 322, 231, Kuzma, P. B., Ishigaki, M. N., Kirihara, T., & Ogami, I. 2025, AJ, 170, 157, K ̈upper, A. H. W., Balbinot, E., Bonaca, A., et al. 2015, ApJ, 803, 80, Lynden-Bell, D., & Lynden-Bell, R. M. 1995, MNRAS, 275, 429, Malhan, K., & Ibata, R. A. 2018, MNRAS, 477, 4063, Martin, N. F., Starkenburg, E., Yuan, Z., et al. 2024, A&A, 692, A115, Massari, D., Aguado-Agelet, F., Monelli, M., et al. 2023, A&A, 680, A20, StarStream on Gaia: Stream discovery and mass loss rate of globular clusters 15 Meˇstri ́c, U., Vanzella, E., Zanella, A., et al. 2022, MNRAS, 516, 3532, Ngan, W. H. W., & Carlberg, R. G. 2014, ApJ, 788, 181, Nibauer, J., & Bonaca, A. 2025, , Nibauer, J., Bonaca, A., Lisanti, M., Erkal, D., & Hastings, Z. 2024, ApJ, 969, 55, Panithanpaisal, N., Sanderson, R. E., Rodriguez, C. L., et al. 2025, , Pearson, S., Price-Whelan, A. M., & Johnston, K. V. 2017, Nature Astronomy, 1, 633, Price-Whelan, A., Souchereau, H., Wagg, T., et al. 2024, Zenodo, Price-Whelan, A. M. 2017, JOSS, 2, 388, Renaud, F., Gieles, M., & Boily, C. M. 2011, MNRAS, 418, 759, Rockosi, C. M., Odenkirchen, M., Grebel, E. K., et al. 2002, AJ, 124, 349, Sanders, J. L., & Binney, J. 2013a, MNRAS, 433, 1813, Sanders, J. L., & Binney, J. 2013b, MNRAS, 433, 1826, Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525, Shih, D., Buckley, M. R., Necib, L., & Tamanas, J. 2021, MNRAS, 509, 5992, Shipp, N., Drlica-Wagner, A., Balbinot, E., et al. 2018, ApJ, 862, 114, Starkenburg, E., Martin, N., Youakim, K., et al. 2017, MNRAS, 471, 2587, The Astropy Collaboration, Price-Whelan, A. M., Sip ̋ocz, B. M., et al. 2018, AJ, 156, 123, The pandas development team. 2024, pandas-dev/pandas: Pandas, Zenodo, Vasiliev, E. 2019, MNRAS, 482, 1525, Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261,
2510.14922
TRI-DEP: A TRIMODAL COMPARATIVE STUDY FOR DEPRESSION DETECTION USING SPEECH, TEXT, AND EEG Annisaa Fitri Nurfidausi∗, Eleonora Mancini*, Paolo Torroni DISI, University of Bologna, Italy ABSTRACT Depression is a widespread mental health disorder, yet its au- tomatic detection remains challenging. Prior work has explored unimodal and multimodal approaches, with multimodal systems showing promise by leveraging complementary signals. However, existing studies are limited in scope, lack systematic comparisons of features, and suffer from inconsistent evaluation protocols. We address these gaps by systematically exploring feature representa- tions and modelling strategies across EEG, together with speech and text. We evaluate handcrafted features versus pre-trained embed- dings, assess the effectiveness of different neural encoders, compare unimodal, bimodal, and trimodal configurations, and analyse fusion strategies with attention to the role of EEG. Consistent subject- independent splits are applied to ensure robust, reproducible bench- marking. Our results show that (i) the combination of EEG, speech and text modalities enhances multimodal detection, (ii) pretrained embeddings outperform handcrafted features, and (iii) carefully de- signed trimodal models achieve state-of-the-art performance. Our work lays the groundwork for future research in multimodal depres- sion detection. Index Terms— Depression Detection, Deep Neural Networks, Multimodality 1. INTRODUCTION Depression is a widespread mental health condition predicted to be- come the second leading cause of disease burden by 2030 [1], with COVID-19 causing a 27.6 % rise in global cases [2]. In recent years, there has been growing interest in developing automatic depression detection systems to support clinical decision-making and enable telemedicine applications. More recently, multimodal approaches have gained particular attention, motivated by the fact that in clinical settings, such as diagnostic interviews, human expression is inher- ently multimodal, spanning speech, language, and neural activity. However, current studies often suffer from critical methodological gaps, including limited modality integration, inconsistent evaluation protocols, and potential data leakage, which hinder reproducibility and the fair assessment of model performance. Models that leverage two modalities dominate the field. Notable examples include [3], who applied DenseNet121 to EEG and speech spectrograms from the MODMA dataset, and [4], who employed Vision Transform- ers on comparable EEG–speech data from MODMA. Other bimodal studies investigated EEG–speech integration with graph convolu- tional networks [5], speech–text fusion on the E-DAIC dataset us- ing CNN-LSTM attention [6], and EEG–facial expression fusion [7]. In [8], an extensive speech–text comparative analysis with multiple fusion techniques was conducted, but EEG was entirely excluded. ∗Both authors contributed equally. Overall, state-of-the-art performances in multimodal depression de- tection span roughly 85–97%, depending on the dataset and modal- ity combinations. All the aforementioned approaches only comprise two modalities, constraining their potential by overlooking trimodal approaches. Moreover, most of them exclude text modality and lack transparent data-splitting protocols. In [9], speech, EEG, and text were integrated using GAT-CNN-MpNet architectures on MODMA, achieving about 90% balanced performance through weighted late fusion, though without comparing handcrafted and pretrained fea- tures and with only basic fusion strategies explored. Moreover, the study did not clarify whether 5-fold cross-validation was performed at the segment or subject level. Our work addresses key limitations in multimodal depression detection by systematically exploring feature representations and modeling strategies across EEG, together with speech and text. We perform a complete comparative analysis of handcrafted features and pretrained embeddings, including, for the first time, brain-pretrained models, evaluate multiple deep learning architectures, and compare unimodal, bimodal, and trimodal con- figurations. We further investigate how different fusion strategies impact detection accuracy and robustness, with particular attention to the role of EEG. Using consistent subject-independent data splits to ensure reproducible benchmarking, we demonstrate that carefully designed trimodal models achieve state-of-the-art performance. Our study lays the groundwork for the future of multimodal depression detection, guiding the development of more accurate and robust sys- tems. We make both the code and the model checkpoints available to foster transparency and reproducibility.1 2. METHODOLOGY 2.1. Data This study employs the Multi-modal Open Dataset for Mental- disorder Analysis (MODMA) [10], which provides: (1) 5-minute resting-state EEG recorded with a 128-channel HydroCel Geodesic Sensor Net at 250 Hz, and (2) audio from structured clinical inter- views. For each subject, the interview audio consists of R = 29 separate recordings (question–answer items) whose durations vary across and within subjects (the total interview time is approxi- mately 25 minutes per subject). Since MODMA does not include text transcriptions from clinical interviews, we generate automatic transcriptions using speech-to-text models. The dataset comprises individuals diagnosed with Major Depressive Disorder (MDD), recruited from Lanzhou University Second Hospital, and healthy controls (HCC) obtained via public advertising; MDD diagnoses were confirmed by licensed psychiatrists. In this study, we retain only subjects who participated in both EEG and interview record- ings, resulting in a filtered cohort of 38 subjects. Table 1 summarizes demographic information across groups and protocols. Additional details are available in [10]. 1Link will be available upon acceptance. arXiv:2510.14922v1 [cs.AI] 16 Oct 2025 Table 1. Participant demographics in the MODMA dataset. M: Male, F: Female, HC: Healthy Control, and MDD: Major Depres- sive Disorder. Modality Total MDD HC Age (M/F) (M/F) (MDD/HC) 128-ch EEG 53 24 (13/11) 29 (20/9) 16–56 / 18–55 Speech 52 23 (16/7) 29 (20/9) 16–56 / 18–55 Many studies lack clarity in data splitting [11, 3, 9], where segment-level splits can leak information by placing recordings from the same subject in both training and test sets, yielding inflated performance. To avoid this, we use stratified 5-fold subject-level cross-validation with consistent splits across experiments. We also release these splits on our companion website to ensure reproducibil- ity and fair comparison. To address the lack of transcriptions in the MODMA dataset, we employed WhisperX [12] to generate text for each subject’s 29 recordings, without further post-processing. 2.2. Experimental Pipeline Design We design a unified pipeline for multimodal depression detection with EEG, speech, and text. For EEG, we adopt two processing branches: a 29-channel, 250 Hz, 10 s segmentation setup, consistent with prior work [11, 3, 9, 13], and a 19-channel, 200 Hz, 5 s segmen- tation setup replicating the preprocessing used in CBraMod [14] for the MUMTAZ depression dataset [15] . For CBraMod, we evaluated both the original pre-trained version and the model fine-tuned on MUMTAZ, as described in the official documentation2 , and found the latter consistently superior. Therefore, throughout this work we refer to CBraMod as the MUMTAZ-fine-tuned model. Speech recordings are resampled to 16 kHz, denoised, and segmented into 5 s windows with 50% overlap, while text is used directly from raw Chinese transcriptions. Feature extraction combines handcrafted descriptors (EEG statistics, spectral power, entropy; speech MFCCs with/without prosody) with embeddings from large pre-trained models. For EEG, we employ both the Large Brain Model (LaBraM) [16], trained on ∼2,500 hours of EEG from 20 datasets, and CBraMod, a patch- based masked reconstruction model. For speech, we use XLSR- 53 [17], a multilingual wav2vec 2.0 encoder, and Chinese HuBERT Large [18], trained on 10k hours of WenetSpeech. For text, we use Chinese BERT Base [19], MacBERT [20], XLNet [21], and MP- Net Multilingual [22]. Segment-level representations are encoded with a combination of CNNs, LSTMs, and/or GRUs (with/without attention) and fused using decision-level strategies. 2.3. Data Preprocessing The preprocessing stage serves multiple objectives, including clean- ing and structuring the raw data, as well as preparing it for multi- modal analysis. One key objective is the segmentation of the in- put into smaller units that can be more effectively processed by the models. We denote with SEEG, SSPEECH, and STEXT the number of segments obtained after preprocessing for each input modality. EEG — For handcrafted features and LaBraM, we follow prior work [11, 3, 9, 13], which comprises retaining C = 29 channels3, 2https://github.com/wjq-learning/CBraMod/blob/main/ 3Full list available on our companion website. Link will be available upon acceptance. applying a 0.5–50 Hz bandpass filter with a 50 Hz notch, and aver- age re-referencing. Recordings are segmented into 10 s windows; at 250 Hz, each window contains T = 250 × 10 = 2500 samples. Thus, a recording of length L seconds produces SEEG = L/10 windows (e.g., SEEG = 30 for a 5-min recording), represented as X(1) EEG ∈RSEEG×C×T . For CBraMod, we use the version pretrained on the MUMTAZ depression dataset, thereby replicating its preprocessing. Signals are resampled to 200 Hz, bandpass filtered (0.3–75 Hz) with a 50 Hz notch, and reduced to C = 19 channels3. Recordings are segmented into 5 s windows; at 200 Hz, each window contains T = 200 × 5 = 1000 samples. A recording of length L seconds thus yields SEEG = L/5 windows (e.g., SEEG = 60 for a 5-min recording). Each window is further divided into P = 5 non-overlapping patches of Tpatch = 200 samples, resulting in X(2) EEG ∈RSEEG×C×P ×Tpatch. Speech — Audio recordings are resampled from 44 kHz to 16 kHz, converted to mono PCM, amplitude-normalized to [−1, 1], silence- trimmed, and denoised with a median filter [23]. Each signal is seg- mented into overlapping windows of length w = 5 s with hop size h = 2.5 s (50% overlap). At a sampling rate of 16 kHz, each seg- ment contains Tseg = 80,000 samples and each hop Thop = 40,000 samples. For a recording of duration L seconds (post-trimming), the num- ber of segments is SSPEECH =  (L −w)/h  + 1 for L ≥w, while recordings shorter than w are retained as a single segment. The segmented waveform is represented as XSPEECH ∈RSSPEECH×Tseg, where each row corresponds to one waveform segment. Each subject has R = 29 interview recordings; after windowing, record- ing r yields S(r) SPEECH segments X(r) SPEECH ∈RS(r) SPEECH×Tseg. The subject-level speech representation is the concatenation along the segment axis: XSPEECH =  X(1) SPEECH; . . . ; X(R) SPEECH  , with a total of SSPEECH = PR r=1 S(r) SPEECH segments. Text — Each recording has a single transcript. After tokenization, the subject-level text representation is the concatenation of all tran- script representations, XTEXT =  X(1) TEXT; . . . ; X(R) TEXT  . 2.4. Feature Extraction EEG — Handcrafted features. For each segment X(1) EEG ∈RC×T we extract F = 10 hancrafted descriptors per channel (statistical, spectral, entropy), yielding XHAND ∈RS×C×F . Pre-trained models. We further extract embeddings from LaBraM and CBraMod. LaBraM operates on X(1) EEG and maps each segment to a D = 200-dimensional embedding, producing XLaBraM ∈RS×D. CBraMod operates on X(2) EEG, where each 5 s segment is patch-encoded and then averaged across channels and patches to form D = 200-dimensional embeddings, resulting in XCBraMod ∈RS×D. Remark. After feature extraction, the raw temporal dimension (T or Tpatch) is no longer present, as each segment is reduced to a fixed-size representation of dimension F (handcrafted) or D (em- beddings). For subject-level modeling, features from all recordings of the same subject are stacked to form the final subject representa- tion. Speech — From each waveform segment, we compute two hand- crafted variant, namely MFCCs (40 coefficients) and Prosody + MFCCs (46 features: 40 MFCCs plus energy, F0, RMS energy, pause rate, phonation time, speech rate). We also extract segment embeddings with XLSR-53 and Chinese HuBERT Large. Segment- level features are stacked per recording and then concatenated across CBraMOD Attn MLP PreProc GRU PreProc CNN GRU LSTM MLP WhisperX PreProc LSTM MLP XSLR-53 MacBERT Fusion Fig. 1. Experimental Framework for Multimodal Depression Detection Table 2. Shapes of speech feature matrices per recording r and at the subject level. Feature name Per-recording shape Subject-level shape XMFCC RS(r) SPEECH×40 RSSPEECH×40 XPROSODY+MFCC RS(r) SPEECH×46 RSSPEECH×46 XXLSR RS(r) SPEECH×1024 RSSPEECH×1024 XHuBERT RS(r) SPEECH×768 RSSPEECH×768 the 29 recordings of each subject to form the subject-level represen- tation. The exact tensor shapes (per recording and subject-level) are summarized in Table 2. After feature extraction, the raw sample length is no longer present; each segment is represented by a fixed- size vector (40/46/768/1024 dimensions). Text — Each recording has a single transcript, which we encode with a pretrained language model (BERT, MacBERT, XLNet, or MPNet) to obtain one D = 768-dimensional embedding per record- ing; for a subject with R = 29 recordings, stacking these yields XBERT, XMacBERT, XXLNet, XMPNet ∈RR×D (with R = 29, D = 768). 2.5. Baselines We re-implement two multimodal baselines for depression detec- tion that use standard image architectures on EEG and speech 2D- spectrograms (Spec2D): DenseNet-121 [3] and Vision Transformer (ViT) [4]. These studies are among the few that explore EEG–speech multimodality in this task and report promising results. In our ex- periments, we retain their model architectures but apply our own subject-level cross-validation splits for consistency, making results not directly comparable to the original works. Additional imple- mentation details are provided on our companion website. 2.6. Architectures We assess several modality-tailored architectures. We also exper- iment with multimodality, combining predictions from the best- performing feature–model pair in each modality in a late fusion fashion. To keep notation light, we use F to denote the generic feature matrix per modality, either handcrafted features or embeddings from pretrained models. Concretely, FEEG (EEG), F(r) SPEECH (speech, per recording r), and FTEXT (text, subject-level). EEG — We consider two sequence encoders: CNN+LSTM and GRU+Attention. The CNN branch uses two 1D convolutions (kernel size 3, padding 1) with dropout to capture local tempo- ral patterns, followed by a 2-layer LSTM for sequence modeling. GRU+Attention uses a 2-layer GRU with an attention mechanism that weights hidden states to form a subject-level summary. Both encoders consume FEEG and produce a latent representation HEEG; an MLP head outputs yEEG. Speech — A shallow CNN extracts segment-level features from each recording. These are reduced to a single fixed-size vector F(r) SPEECH ∈Rd using one of three encoders: (i) max pooling, (ii) GRU with attention, or (iii) BiGRU with attention (the latter ex- tending the GRU+Attn design with bidirectional recurrence). The resulting R = 29 vectors are stacked into the subject-level matrix FSPEECH ∈RR×d. This sequence is then processed by an LSTM to produce the subject-level representation HSPEECH, which is fed to an MLP head to obtain the final prediction ySPEECH. Text — FTEXT denotes the subject-level text features (Sec. 2.4). A detection module (LSTM or CNN) transforms FTEXT into HTEXT, and an MLP head outputs yTEXT. Multimodal Fusion — We select, for each modality, the best- performing feature–model pair and fuse their predictions via late fusion. This design choice ensures that our multimodal architec- tures are built upon the strongest unimodal predictors, allowing us to attribute performance gains directly to the fusion strategy rather than suboptimal single-modality components. We consider three schemes: Bayesian fusion – convert modality-specific posteriors to likelihood ratios, combine them with predefined weights, and map back to a posterior; soft voting (mean) – average class probabil- ities across modalities and predict the class with highest average probability with ties resolved at 0.5; weighted averaging – compute weighted combination of modality probabilities where weights sum to one, then predict the class with highest weighted probability. 3. EXPERIMENTAL SETUP We adopt stratified 5-fold cross-validation with fixed subject splits to ensure balanced and comparable experiments, and prevent data leakage. Models are trained with cross-entropy loss and softmax output, with hyperparameters tuned manually. All implementation details are provided in our companion materials. 4. RESULTS In this section, we report the performance of all experimental cate- gories: baseline re-implementations, unimodal models, and our pro- posed multimodal architectures. F1-scores are reported as mean ± standard deviation across folds. Table 3 presents the baselines and unimodal models, including the best-performing model for each set of features per modality. Table 4 reports the performance of base- line models and multimodal fusion strategies, highlighting the best configuration within each category and the overall best-performing model. Further results are available on our companion website. Unimodal — Table 3 reports the performance of baseline and unimodal models. Among EEG features, CBraMod embeddings combined with a GRU and attention achieved the best result, con- firming the benefit of pre-training on a depression-related corpus. For speech, both XLSR-53 and HuBERT embeddings provided strong performance, with XLSR-53 coupled with a CNN+GRU slightly outperforming. Handcrafted MFCC and prosodic features yielded considerably lower scores, indicating that deep speech embeddings capture richer information. In the text modality, all transformer-based embeddings performed competitively, with Chi- nese MacBERT and XLNet reaching the top results. Overall, uni- modal experiments highlight that text provided the most informative single modality, while speech embeddings also achieved strong performance, and EEG remained less predictive in isolation. Multimodal — Table 4 compares the baselines with different fu- sion strategies. Simple baselines such as ViT and DenseNet-121 reached F1-scores around 0.56. Fusion strategies, however, substan- tially outperformed unimodal and baseline models. Weighted aver- aging already boosted performance when fusing EEG and Text, and Bayesian fusion further improved results, with Speech+Text achiev- ing the highest F1-score overall. Majority voting also proved effec- tive, with the tri-modal configuration EEG+Speech+Text reaching F1 = 0.874. These results confirm the complementarity of modal- ities: while text dominates in unimodal settings, integrating speech and EEG consistently improves robustness and yields the strongest overall performance. Experimental Framework for Multimodal Depression Detection — Building on this systematic exploration of feature extraction methods, neural architectures, and fusion strategies, we propose an experimental framework for multimodal depression detection, illus- trated in Figure 1. The framework selects the best-performing pre- dictors for each modality: XCBraMod processed with a GRU+Attn for EEG, XXLSR processed with a CNN+GRU for speech, and XMacBERT processed with an LSTM for text. These modality-specific pipelines are then combined through alternative fusion strategies. This de- sign allows us to isolate the contribution of each fusion method while keeping the strongest unimodal configurations fixed. Our best-performing architecture employs majority voting across the three modalities, achieving an accuracy of 88.6% and an F1-score of 87.4%, to the best of our knowledge, establishing the state of the art in multimodal depression detection. The framework thus serves as a reference setup for future experiments, enabling systematic evaluation of new fusion strategies or additional modalities. To the best of our knowledge, our tri-modal configuration with majority voting fusion represents the current state of the art in multimodal depression detection. Table 3. Results of baselines and unimodal models (F1-score, mean ± std across 5 folds). In bold, the best performing model–feature pair per modality. Category Features Model F1 Baselines (Speech+EEG) XSpec2D ViT 0.560 ± 0.190 XSpec2D DenseNet-121 0.586 ± 0.240 EEG XHAND CNN+LSTM 0.585 ± 0.102 XLaBraM GRU+Attn 0.508 ± 0.075 XCBraMod GRU+Attn 0.600 ± 0.173 Speech XMFCC CNN+MaxPool+LSTM 0.554 ± 0.125 XProsody+MFCC CNN+BiGRU+Attn+LSTM 0.673 ± 0.152 XHuBERT CNN+BiGRU+Attn+LSTM 0.809 ± 0.073 XXLSR CNN+GRU+LSTM 0.814 ± 0.052 Text XMPNet CNN 0.865 ± 0.085 XBERT CNN 0.839 ± 0.123 XXLNet LSTM 0.671 ± 0.099 XMacBERT LSTM 0.868 ± 0.119 Table 4. Baseline and multimodal models (F1-score, mean ± std across 5 folds). In bold, the best performing configuration per cate- gory (baselines or fusion strategy). The overall best across all mod- els and features configurations is additionally underlined. For fu- sion methods, the numbers in parentheses (e.g., 0.4, 0.6) indicate the weights assigned to each modality. Category Configuration F1-score Baselines ViT 0.560 ± 0.190 DenseNet-121 0.586 ± 0.240 Weighted Averaging EEG + Speech + Text (0.2 : 0.4 : 0.4) 0.603 ± 0.306 EEG + Speech (0.4 : 0.6) 0.510 ± 0.425 EEG + Text (0.4 : 0.6) 0.783 ± 0.203 Speech + Text (0.4 : 0.6) 0.470 ± 0.384 Bayesian Fusion EEG + Speech + Text (0.2 : 0.4 : 0.4) 0.855 ± 0.133 EEG + Speech (0.4 : 0.6) 0.676 ± 0.168 EEG + Text (0.4 : 0.6) 0.824 ± 0.178 Speech + Text (0.4 : 0.6) 0.875 ± 0.132 Majority Voting EEG + Speech 0.643 ± 0.340 EEG + Text 0.510 ± 0.425 Speech + Text 0.783 ± 0.203 EEG + Speech + Text 0.874 ± 0.067 5. CONCLUSION We addressed key limitations in multimodal depression detection by adopting subject-level stratified cross-validation and exploring EEG- based representations in combination with speech and text. Our ex- periments compared handcrafted features with deep representations from large pretrained models, consistently showing the superiority of the latter. In the unimodal setting, CNN+GRU proved effective for speech, while LSTM architectures yielded the best results for EEG and text. In the multimodal setting, late-fusion methods further im- proved performance, with Majority Voting across all three modalities achieving the strongest results, which to the best of our knowledge represents the current state of the art. Beyond the best-performing configuration, we introduce an experimental framework that fixes the optimal unimodal predictors and systematically evaluates alter- native fusion strategies. This framework serves as a reference setup for future work, and by releasing all code and preprocessing scripts in a public repository, we ensure reproducibility and support further advances in multimodal depression detection research. 6. ACKNOWLEDGMENTS This work uses data from the MODMA dataset [10] provided by the Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, China. We gratefully acknowledge the data contributors for granting access to the data. 7. COMPLIANCE WITH ETHICAL STANDARDS This study was conducted retrospectively using human subject data from the MODMA dataset [10]. Access to the dataset is granted by the data owners upon request, and we do not redistribute any data. According to the terms of use specified by the dataset providers, sep- arate ethical approval was not required for our analyses. All experi- ments were carried out in compliance with these conditions. 8. REFERENCES [1] C. D. Mathers and D. Loncar, “Projections of global mortality and burden of disease from 2002 to 2030,” PLOS Medicine, vol. 3, pp. 1–20, November 2006. [2] D. Santomauro, A. Mantilla Herrera, J. Shadid, and P. Zheng, “Global prevalence and burden of depressive and anxiety disor- ders in 204 countries and territories in 2020 due to the covid-19 pandemic,” The Lancet, vol. 398, October 2021. [3] M. Yousufi, R. Damaˇseviˇcius, and R. Maskeli¯unas, “Multi- modal fusion of eeg and audio spectrogram for major depres- sive disorder recognition using modified densenet121,” Brain Sciences, vol. 14, pp. 1018, 2024. [4] A. Qayyum, I. Razzak, and W. Mumtaz, “Hybrid deep shallow network for assessment of depression using electroencephalo- gram signals,” in International Conference on Neural Informa- tion Processing. 2020, pp. 245–257, Springer, Cham. [5] Xiaowen Jia, Jingxia Chen, Kexin Liu, Qian Wang, and Jial- ing He, “Multimodal depression detection based on an atten- tion graph convolution and transformer,” College of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, 2025. [6] M. Nykoniuk, O. Basystiuk, N. Shakhovska, and N. Mel- nykova, “Multimodal data fusion for depression detection ap- proach,” Computation, vol. 13, no. 1, pp. 9, 2025. [7] Gyanendra Tiwary, Shivani Chauhan, and K. K. Goyal, “Au- tomatic depression detection using multi-modal & late-fusion based architecture,” in 2023 7th International Conference on Computation System and Information Technology for Sustain- able Solutions (CSITSS), 2023, pp. 1–6. [8] Klara Daly and Oluwafemi Olukoya, “Depression detection in read and spontaneous speech: A multimodal approach for lesser-resourced languages,” Biomedical Signal Processing and Control, vol. 108, pp. 107959, 2025. [9] M. He, E. M. Bakker, and M. S. Lew, “Dpd (depression de- tection) net: a deep neural network for multimodal depression detection,” Health Information Science and Systems, vol. 12, no. 1, pp. 53, Nov 2024. [10] H. Cai, Y. Gao, S. Sun, N. Li, F. Tian, H. Xiao, J. Li, Z. Yang, X. Li, Q. Zhao, et al., “Modma dataset: A multi-modal open dataset for mental-disorder analysis,” arXiv preprint, 2020. [11] A. Qayyum, I. Razzak, M. Tanveer, M. Mazhar, and B. Al- haqbani, “High-density electroencephalography and speech signal-based deep framework for clinical depression diagno- sis,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 20, pp. 2587–2597, 2023. [12] Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman, “Whisperx: Time-accurate speech transcription of long-form audio,” INTERSPEECH 2023, 2023. [13] S. Khan, S. M. Umar Saeed, J. Frnda, A. Arsalan, R. Amin, R. Gantassi, and S. H. Noorani, “A machine learning based de- pression screening framework using temporal domain features of the electroencephalography signals,” PLoS ONE, vol. 19, no. 3, pp. e0299127, Mar 27 2024. [14] Jiquan Wang, Sha Zhao, Zhiling Luo, Yangxuan Zhou, Haiteng Jiang, Shijian Li, Tao Li, and Gang Pan, “Cbramod: A criss- cross brain foundation model for eeg decoding,” arXiv preprint arXiv:2412.07236, 2024, Accepted at ICLR 2025. [15] Wajid Mumtaz and Abdul Qayyum, “A deep learning frame- work for automatic diagnosis of unipolar depression,” Inter- national Journal of Medical Informatics, vol. 132, pp. 103983, 2019. [16] Wei-Bang Jiang, Li-Ming Zhao, and Bao-Liang Lu, “Large brain model for learning generic representations with tremen- dous EEG data in BCI,” in The Twelfth International Confer- ence on Learning Representations, ICLR 2024, Vienna, Aus- tria, May 7-11, 2024. 2024, OpenReview.net. [17] Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdel- rahman Mohamed, and Michael Auli, “Unsupervised cross- lingual representation learning for speech recognition,” arXiv preprint arXiv:2006.13979, 2020. [18] TencentGameMate, “Chinese HuBERT Large,” 2024. [19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “BERT: Pre-training of deep bidirectional trans- formers for language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, June 2019, pp. 4171–4186, Association for Com- putational Linguistics. [20] Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu, “Revisiting pre-trained models for Chinese natural language processing,” in Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Process- ing: Findings, Online, Nov. 2020, pp. 657–668, Association for Computational Linguistics. [21] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Rus- lan Salakhutdinov, and Quoc V. Le, XLNet: generalized au- toregressive pretraining for language understanding, Curran Associates Inc., Red Hook, NY, USA, 2019. [22] Nils Reimers and Iryna Gurevych, “Sentence-bert: Sentence embeddings using siamese bert-networks,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing. 11 2019, Association for Computational Linguistics. [23] M. Gheorghe, S. Mihalache, and D. Burileanu, “Using deep neural networks for detecting depression from speech,” in 2023 31st European Signal Processing Conference (EUSIPCO), Helsinki, Finland, 2023, pp. 411–415.
TRI-DEP: A TRIMODAL COMPARATIVE STUDY FOR DEPRESSION DETECTION USING SPEECH, TEXT, AND EEG Annisaa Fitri Nurfidausi∗, Eleonora Mancini*, Paolo Torroni DISI, - tomatic detection remains challenging. Prior work has explored unimodal and multimodal approaches, with multimodal systems showing promise by leveraging complementary signals. However, existing studies are limited in scope, lack systematic comparisons of features, and suffer from inconsistent evaluation protocols. We address these gaps by systematically exploring feature representations and modelling strategies across EEG, together with speech and text. We evaluate handcrafted features versus pre-trained embeddings, assess the effectiveness of different neural encoders, compare unimodal, bimodal, and trimodal configurations, and analyse fusion strategies with attention to the role of EEG. Consistent subjectindependent splits are applied to ensure robust, reproducible benchmarking. Our results show that (i) the combination of EEG, speech and text modalities enhances multimodal detection, (ii) pretrained embeddings outperform handcrafted features, and (iii) carefully designed trimodal models achieve state-of-the-art performance. Our work lays the groundwork for future research in multimodal depression detection. Index Terms- Depression Detection, Deep Neural Networks, Multimodality 1. INTRODUCTION Depression is a widespread mental health condition predicted to become the second leading cause of disease burden by 2030 [1], with COVID-19 causing a 27.6 % rise in global cases [2]. In recent years, there has been growing interest in developing automatic depression detection systems to support clinical decision-making and enable telemedicine applications. More recently, multimodal approaches have gained particular attention, motivated by the fact that in clinical settings, such as diagnostic interviews, human expression is inherently multimodal, spanning speech, language, and neural activity. However, current studies often suffer from critical methodological gaps, including limited modality integration, inconsistent evaluation protocols, and potential data leakage, which hinder reproducibility and the fair assessment of model performance. Models that leverage two modalities dominate the field. Notable examples include [3], who applied DenseNet121 to EEG and speech spectrograms from the MODMA dataset, and [4], who employed Vision Transformers on comparable EEG-speech data from MODMA. Other bimodal studies investigated EEG-speech integration with graph convolutional networks [5], speech-text fusion on the E-DAIC dataset using CNN-LSTM attention [6], and EEG-facial expression fusion [7]. In [8], an extensive speech-text comparative analysis with multiple fusion techniques was conducted, but EEG was entirely excluded. ∗Both authors contributed equally. Overall, state-of-the-art performances in multimodal depression detection span roughly 85-97%, depending on the dataset and modality combinations. All the aforementioned approaches only comprise two modalities, constraining their potential by overlooking trimodal approaches. Moreover, most of them exclude text modality and lack transparent data-splitting protocols. In [9], speech, EEG, and text were integrated using GAT-CNN-MpNet architectures on MODMA, achieving about 90% balanced performance through weighted late fusion, though without comparing handcrafted and pretrained features and with only basic fusion strategies explored. Moreover, the study did not clarify whether 5-fold cross-validation was performed at the segment or subject level. Our work addresses key limitations in multimodal depression detection by systematically exploring feature representations and modeling strategies across EEG, together with speech and text. We perform a complete comparative analysis of handcrafted features and pretrained embeddings, including, for the first time, brain-pretrained models, evaluate multiple deep learning architectures, and compare unimodal, bimodal, and trimodal configurations. We further investigate how different fusion strategies impact detection accuracy and robustness, with particular attention to the role of EEG. Using consistent subject-independent data splits to ensure reproducible benchmarking, we demonstrate that carefully designed trimodal models achieve state-of-the-art performance. Our study lays the groundwork for the future of multimodal depression detection, guiding the development of more accurate and robust systems. We make both the code and the model checkpoints available to foster transparency and reproducibility.1 2. METHODOLOGY 2.1. Data This study employs the Multi-modal Open Dataset for Mentaldisorder Analysis (MODMA) [10], which provides: (1) 5-minute resting-state EEG recorded with a 128-channel HydroCel Geodesic Sensor Net at 250 Hz, and (2) audio from structured clinical interviews. For each subject, the interview audio consists of R = 29 separate recordings (question-answer items) whose durations vary across and within subjects (the total interview time is approximately 25 minutes per subject). Since MODMA does not include text transcriptions from clinical interviews, we generate automatic transcriptions using speech-to-text models. The dataset comprises individuals diagnosed with Major Depressive Disorder (MDD), recruited from Lanzhou University Second Hospital, and healthy controls (HCC) obtained via public advertising; MDD diagnoses were confirmed by licensed psychiatrists. In this study, we retain only subjects who participated in both EEG and interview recordings, resulting in a filtered cohort of 38 subjects. Table 1 summarizes demographic information across groups and protocols. Additional details are available in [10]. 1Link will be available upon acceptance. 16 Oct 2025 Table 1. Participant demographics in the MODMA dataset. M: Male, F: Female, HC: Healthy Control, and MDD: Major Depressive Disorder. Modality Total MDD HC Age (M/F) (M/F) (MDD/HC) 128-ch EEG 53 24 (13/11) 29 (20/9) 16-56 / 18-55 Speech 52 23 (16/7) 29 (20/9) 16-56 / 18-55 Many studies lack clarity in data splitting [11, 3, 9], where segment-level splits can leak information by placing recordings from the same subject in both training and test sets, yielding inflated performance. To avoid this, we use stratified 5-fold subject-level cross-validation with consistent splits across experiments. We also release these splits on our companion website to ensure reproducibility and fair comparison. To address the lack of transcriptions in the MODMA dataset, we employed WhisperX [12] to generate text for each subject's 29 recordings, without further post-processing. 2.2. Experimental Pipeline Design We design a unified pipeline for multimodal depression detection with EEG, speech, and text. For EEG, we adopt two processing branches: a 29-channel, 250 Hz, 10 s segmentation setup, consistent with prior work [11, 3, 9, 13], and a 19-channel, 200 Hz, 5 s segmentation setup replicating the preprocessing used in CBraMod [14] for the MUMTAZ depression dataset [15] . For CBraMod, we evaluated both the original pre-trained version and the model fine-tuned on MUMTAZ, as described in the official documentation2 , and found the latter consistently superior. Therefore, throughout this work we refer to CBraMod as the MUMTAZ-fine-tuned model. Speech recordings are resampled to 16 kHz, denoised, and segmented into 5 s windows with 50% overlap, while text is used directly from raw Chinese transcriptions. Feature extraction combines handcrafted descriptors (EEG statistics, spectral power, entropy; speech MFCCs with/without prosody) with embeddings from large pre-trained models. For EEG, we employ both the Large Brain Model (LaBraM) [16], trained on ∼2,500 hours of EEG from 20 datasets, and CBraMod, a patchbased masked reconstruction model. For speech, we use XLSR53 [17], a multilingual wav2vec 2.0 encoder, and Chinese HuBERT Large [18], trained on 10k hours of WenetSpeech. For text, we use Chinese BERT Base [19], MacBERT [20], XLNet [21], and MPNet Multilingual [22]. Segment-level representations are encoded with a combination of CNNs, LSTMs, and/or GRUs (with/without attention) and fused using decision-level strategies. 2.3. Data Preprocessing The preprocessing stage serves multiple objectives, including cleaning and structuring the raw data, as well as preparing it for multimodal analysis. One key objective is the segmentation of the input into smaller units that can be more effectively processed by the models. We denote with SEEG, SSPEECH, and STEXT the number of segments obtained after preprocessing for each input modality. EEG - For handcrafted features and LaBraM, we follow prior work [11, 3, 9, 13], which comprises retaining C = 29 channels3, 2https://github.com/wjq-learning/CBraMod/blob/main/ 3Full list available on our companion website. Link will be available upon acceptance. applying a 0.5-50 Hz bandpass filter with a 50 Hz notch, and average re-referencing. Recordings are segmented into 10 s windows; at 250 Hz, each window contains T = 250 × 10 = 2500 samples. Thus, a recording of length L seconds produces SEEG = L/10 windows (e.g., SEEG = 30 for a 5-min recording), represented as X(1) EEG ∈RSEEG×C×T . For CBraMod, we use the version pretrained on the MUMTAZ depression dataset, thereby replicating its preprocessing. Signals are resampled to 200 Hz, bandpass filtered (0.3-75 Hz) with a 50 Hz notch, and reduced to C = 19 channels3. Recordings are segmented into 5 s windows; at 200 Hz, each window contains T = 200 × 5 = 1000 samples. A recording of length L seconds thus yields SEEG = L/5 windows (e.g., SEEG = 60 for a 5-min recording). Each window is further divided into P = 5 non-overlapping patches of Tpatch = 200 samples, resulting in X(2) EEG ∈RSEEG×C×P ×Tpatch. Speech - Audio recordings are resampled from 44 kHz to 16 kHz, converted to mono PCM, amplitude-normalized to [-1, 1], silencetrimmed, and denoised with a median filter [23]. Each signal is segmented into overlapping windows of length w = 5 s with hop size h = 2.5 s (50% overlap). At a sampling rate of 16 kHz, each segment contains Tseg = 80,000 samples and each hop Thop = 40,000 samples. For a recording of duration L seconds (post-trimming), the number of segments is SSPEECH = (L -w)/h + 1 for L ≥w, while recordings shorter than w are retained as a single segment. The segmented waveform is represented as XSPEECH ∈RSSPEECH×Tseg, where each row corresponds to one waveform segment. Each subject has R = 29 interview recordings; after windowing, recording r yields S(r) SPEECH segments X(r) SPEECH ∈RS(r) SPEECH×Tseg. The subject-level speech representation is the concatenation along the segment axis: XSPEECH = X(1) SPEECH; . . . ; X(R) SPEECH , with a total of SSPEECH = PR r=1 S(r) SPEECH segments. Text - Each recording has a single transcript. After tokenization, the subject-level text representation is the concatenation of all transcript representations, XTEXT = X(1) TEXT; . . . ; X(R) TEXT . 2.4. Feature Extraction EEG - Handcrafted features. For each segment X(1) EEG ∈RC×T we extract F = 10 hancrafted descriptors per channel (statistical, spectral, entropy), yielding XHAND ∈RS×C×F . Pre-trained models. We further extract embeddings from LaBraM and CBraMod. LaBraM operates on X(1) EEG and maps each segment to a D = 200-dimensional embedding, producing XLaBraM ∈RS×D. CBraMod operates on X(2) EEG, where each 5 s segment is patch-encoded and then averaged across channels and patches to form D = 200-dimensional embeddings, resulting in XCBraMod ∈RS×D. Remark. After feature extraction, the raw temporal dimension (T or Tpatch) is no longer present, as each segment is reduced to a fixed-size representation of dimension F (handcrafted) or D (embeddings). For subject-level modeling, features from all recordings of the same subject are stacked to form the final subject representation. Speech - From each waveform segment, we compute two handcrafted variant, namely MFCCs (40 coefficients) and Prosody + MFCCs (46 features: 40 MFCCs plus energy, F0, RMS energy, pause rate, phonation time, speech rate). We also extract segment embeddings with XLSR-53 and Chinese HuBERT Large. Segmentlevel features are stacked per recording and then concatenated across CBraMOD Attn MLP PreProc GRU PreProc CNN GRU LSTM MLP WhisperX PreProc LSTM MLP XSLR-53 MacBERT Fusion Fig. 1. Experimental Framework for Multimodal Depression Detection Table 2. Shapes of speech feature matrices per recording r and at the subject level. Feature name Per-recording shape Subject-level shape XMFCC RS(r) SPEECH×40 RSSPEECH×40 XPROSODY+MFCC RS(r) SPEECH×46 RSSPEECH×46 XXLSR RS(r) SPEECH×1024 RSSPEECH×1024 XHuBERT RS(r) SPEECH×768 RSSPEECH×768 the 29 recordings of each subject to form the subject-level representation. The exact tensor shapes (per recording and subject-level) are summarized in Table 2. After feature extraction, the raw sample length is no longer present; each segment is represented by a fixedsize vector (40/46/768/1024 dimensions). Text - Each recording has a single transcript, which we encode with a pretrained language model (BERT, MacBERT, XLNet, or MPNet) to obtain one D = 768-dimensional embedding per recording; for a subject with R = 29 recordings, stacking these yields XBERT, XMacBERT, XXLNet, XMPNet ∈RR×D (with R = 29, D = 768). 2.5. Baselines We re-implement two multimodal baselines for depression detection that use standard image architectures on EEG and speech 2Dspectrograms (Spec2D): DenseNet-121 [3] and Vision Transformer (ViT) [4]. These studies are among the few that explore EEG-speech multimodality in this task and report promising results. In our experiments, we retain their model architectures but apply our own subject-level cross-validation splits for consistency, making results not directly comparable to the original works. Additional implementation details are provided on our companion website. 2.6. Architectures We assess several modality-tailored architectures. We also experiment with multimodality, combining predictions from the bestperforming feature-model pair in each modality in a late fusion fashion. To keep notation light, we use F to denote the generic feature matrix per modality, either handcrafted features or embeddings from pretrained models. Concretely, FEEG (EEG), F(r) SPEECH (speech, per recording r), and FTEXT (text, subject-level). EEG - We consider two sequence encoders: CNN+LSTM and GRU+Attention. The CNN branch uses two 1D convolutions (kernel size 3, padding 1) with dropout to capture local temporal patterns, followed by a 2-layer LSTM for sequence modeling. GRU+Attention uses a 2-layer GRU with an attention mechanism that weights hidden states to form a subject-level summary. Both encoders consume FEEG and produce a latent representation HEEG; an MLP head outputs yEEG. Speech - A shallow CNN extracts segment-level features from each recording. These are reduced to a single fixed-size vector F(r) SPEECH ∈Rd using one of three encoders: (i) max pooling, (ii) GRU with attention, or (iii) BiGRU with attention (the latter extending the GRU+Attn design with bidirectional recurrence). The resulting R = 29 vectors are stacked into the subject-level matrix FSPEECH ∈RR×d. This sequence is then processed by an LSTM to produce the subject-level representation HSPEECH, which is fed to an MLP head to obtain the final prediction ySPEECH. Text - FTEXT denotes the subject-level text features (Sec. 2.4). A detection module (LSTM or CNN) transforms FTEXT into HTEXT, and an MLP head outputs yTEXT. Multimodal Fusion - We select, for each modality, the bestperforming feature-model pair and fuse their predictions via late fusion. This design choice ensures that our multimodal architectures are built upon the strongest unimodal predictors, allowing us to attribute performance gains directly to the fusion strategy rather than suboptimal single-modality components. We consider three schemes: Bayesian fusion - convert modality-specific posteriors to likelihood ratios, combine them with predefined weights, and map back to a posterior; soft voting (mean) - average class probabilities across modalities and predict the class with highest average probability with ties resolved at 0.5; weighted averaging - compute weighted combination of modality probabilities where weights sum to one, then predict the class with highest weighted probability. 3. EXPERIMENTAL SETUP We adopt stratified 5-fold cross-validation with fixed subject splits to ensure balanced and comparable experiments, and prevent data leakage. Models are trained with cross-entropy loss and softmax output, with hyperparameters tuned manually. All implementation details are provided in our companion materials. 4. RESULTS In this section, we report the performance of all experimental categories: baseline re-implementations, unimodal models, and our proposed multimodal architectures. F1-scores are reported as mean ± standard deviation across folds. Table 3 presents the baselines and unimodal models, including the best-performing model for each set of features per modality. Table 4 reports the performance of baseline models and multimodal fusion strategies, highlighting the best configuration within each category and the overall best-performing model. Further results are available on our companion website. Unimodal - Table 3 reports the performance of baseline and unimodal models. Among EEG features, CBraMod embeddings combined with a GRU and attention achieved the best result, confirming the benefit of pre-training on a depression-related corpus. For speech, both XLSR-53 and HuBERT embeddings provided strong performance, with XLSR-53 coupled with a CNN+GRU slightly outperforming. Handcrafted MFCC and prosodic features yielded considerably lower scores, indicating that deep speech embeddings capture richer information. In the text modality, all transformer-based embeddings performed competitively, with Chinese MacBERT and XLNet reaching the top results. Overall, unimodal experiments highlight that text provided the most informative single modality, while speech embeddings also achieved strong performance, and EEG remained less predictive in isolation. Multimodal - Table 4 compares the baselines with different fusion strategies. Simple baselines such as ViT and DenseNet-121 reached F1-scores around 0.56. Fusion strategies, however, substantially outperformed unimodal and baseline models. Weighted averaging already boosted performance when fusing EEG and Text, and Bayesian fusion further improved results, with Speech+Text achieving the highest F1-score overall. Majority voting also proved effective, with the tri-modal configuration EEG+Speech+Text reaching F1 = 0.874. These results confirm the complementarity of modalities: while text dominates in unimodal settings, integrating speech and EEG consistently improves robustness and yields the strongest overall performance. Experimental Framework for Multimodal Depression Detection - Building on this systematic exploration of feature extraction methods, neural architectures, and fusion strategies, we propose an experimental framework for multimodal depression detection, illustrated in Figure 1. The framework selects the best-performing predictors for each modality: XCBraMod processed with a GRU+Attn for EEG, XXLSR processed with a CNN+GRU for speech, and XMacBERT processed with an LSTM for text. These modality-specific pipelines are then combined through alternative fusion strategies. This design allows us to isolate the contribution of each fusion method while keeping the strongest unimodal configurations fixed. Our best-performing architecture employs majority voting across the three modalities, achieving an accuracy of 88.6% and an F1-score of 87.4%, to the best of our knowledge, establishing the state of the art in multimodal depression detection. The framework thus serves as a reference setup for future experiments, enabling systematic evaluation of new fusion strategies or additional modalities. To the best of our knowledge, our tri-modal configuration with majority voting fusion represents the current state of the art in multimodal depression detection. Table 3. Results of baselines and unimodal models (F1-score, mean ± std across 5 folds). In bold, the best performing model-feature pair per modality. Category Features Model F1 Baselines (Speech+EEG) XSpec2D ViT 0.560 ± 0.190 XSpec2D DenseNet-121 0.586 ± 0.240 EEG XHAND CNN+LSTM 0.585 ± 0.102 XLaBraM GRU+Attn 0.508 ± 0.075 XCBraMod GRU+Attn 0.600 ± 0.173 Speech XMFCC CNN+MaxPool+LSTM 0.554 ± 0.125 XProsody+MFCC CNN+BiGRU+Attn+LSTM 0.673 ± 0.152 XHuBERT CNN+BiGRU+Attn+LSTM 0.809 ± 0.073 XXLSR CNN+GRU+LSTM 0.814 ± 0.052 Text XMPNet CNN 0.865 ± 0.085 XBERT CNN 0.839 ± 0.123 XXLNet LSTM 0.671 ± 0.099 XMacBERT LSTM 0.868 ± 0.119 Table 4. Baseline and multimodal models (F1-score, mean ± std across 5 folds). In bold, the best performing configuration per category (baselines or fusion strategy). The overall best across all models and features configurations is additionally underlined. For fusion methods, the numbers in parentheses (e.g., 0.4, 0.6) indicate the weights assigned to each modality. Category Configuration F1-score Baselines ViT 0.560 ± 0.190 DenseNet-121 0.586 ± 0.240 Weighted Averaging EEG + Speech + Text (0.2 : 0.4 : 0.4) 0.603 ± 0.306 EEG + Speech (0.4 : 0.6) 0.510 ± 0.425 EEG + Text (0.4 : 0.6) 0.783 ± 0.203 Speech + Text (0.4 : 0.6) 0.470 ± 0.384 Bayesian Fusion EEG + Speech + Text (0.2 : 0.4 : 0.4) 0.855 ± 0.133 EEG + Speech (0.4 : 0.6) 0.676 ± 0.168 EEG + Text (0.4 : 0.6) 0.824 ± 0.178 Speech + Text (0.4 : 0.6) 0.875 ± 0.132 Majority Voting EEG + Speech 0.643 ± 0.340 EEG + Text 0.510 ± 0.425 Speech + Text 0.783 ± 0.203 EEG + Speech + Text 0.874 ± 0.067 5. CONCLUSION We addressed key limitations in multimodal depression detection by adopting subject-level stratified cross-validation and exploring EEGbased representations in combination with speech and text. Our experiments compared handcrafted features with deep representations from large pretrained models, consistently showing the superiority of the latter. In the unimodal setting, CNN+GRU proved effective for speech, while LSTM architectures yielded the best results for EEG and text. In the multimodal setting, late-fusion methods further improved performance, with Majority Voting across all three modalities achieving the strongest results, which to the best of our knowledge represents the current state of the art. Beyond the best-performing configuration, we introduce an experimental framework that fixes the optimal unimodal predictors and systematically evaluates alternative fusion strategies. This framework serves as a reference setup for future work, and by releasing all code and preprocessing scripts in a public repository, we ensure reproducibility and support further advances in multimodal depression detection research. 6. ACKNOWLEDGMENTS This work uses data from the MODMA dataset [10] provided by the Gansu Provincial Key Laboratory of Wearable Computing, Lanzhou University, China. We gratefully acknowledge the data contributors for granting access to the data. 7. COMPLIANCE WITH ETHICAL STANDARDS This study was conducted retrospectively using human subject data from the MODMA dataset [10]. Access to the dataset is granted by the data owners upon request, and we do not redistribute any data. According to the terms of use specified by the dataset providers, separate ethical approval was not required for our analyses. All experiments were carried out in compliance with these conditions. 8. REFERENCES [1] C. D. Mathers and D. Loncar, "Projections of global mortality and burden of disease from 2002 to 2030," PLOS Medicine, vol. 3, pp. 1-20, November 2006. [2] D. Santomauro, A. Mantilla Herrera, J. Shadid, and P. Zheng, "Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the covid-19 pandemic," The Lancet, vol. 398, October 2021. [3] M. Yousufi, R. Damaˇseviˇcius, and R. Maskeli ̄unas, "Multimodal fusion of eeg and audio spectrogram for major depressive disorder recognition using modified densenet121," Brain Sciences, vol. 14, pp. 1018, 2024. [4] A. Qayyum, I. Razzak, and W. Mumtaz, "Hybrid deep shallow network for assessment of depression using electroencephalogram signals," in International Conference on Neural Information Processing. 2020, pp. 245-257, Springer, Cham. [5] Xiaowen Jia, Jingxia Chen, Kexin Liu, Qian Wang, and Jialing He, "Multimodal depression detection based on an attention graph convolution and transformer," 2025. [6] M. Nykoniuk, O. Basystiuk, N. Shakhovska, and N. Melnykova, "Multimodal data fusion for depression detection approach," Computation, vol. 13, no. 1, pp. 9, 2025. [7] Gyanendra Tiwary, Shivani Chauhan, and K. K. Goyal, "Automatic depression detection using multi-modal & late-fusion based architecture," in 2023 7th International Conference on Computation System and Information Technology for Sustainable Solutions (CSITSS), 2023, pp. 1-6. [8] Klara Daly and Oluwafemi Olukoya, "Depression detection in read and spontaneous speech: A multimodal approach for lesser-resourced languages," Biomedical Signal Processing and Control, vol. 108, pp. 107959, 2025. [9] M. He, E. M. Bakker, and M. S. Lew, "Dpd (depression detection) net: a deep neural network for multimodal depression detection," Health Information Science and Systems, vol. 12, no. 1, pp. 53, Nov 2024. [10] H. Cai, Y. Gao, S. Sun, N. Li, F. Tian, H. Xiao, J. Li, Z. Yang, X. Li, Q. Zhao, et al., "Modma dataset: A multi-modal open dataset for mental-disorder analysis," arXiv preprint, 2020. [11] A. Qayyum, I. Razzak, M. Tanveer, M. Mazhar, and B. Alhaqbani, "High-density electroencephalography and speech signal-based deep framework for clinical depression diagnosis," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 20, pp. 2587-2597, 2023. [12] Max Bain, Jaesung Huh, Tengda Han, and Andrew Zisserman, "Whisperx: Time-accurate speech transcription of long-form audio," INTERSPEECH 2023, 2023. [13] S. Khan, S. M. Umar Saeed, J. Frnda, A. Arsalan, R. Amin, R. Gantassi, and S. H. Noorani, "A machine learning based depression screening framework using temporal domain features of the electroencephalography signals," PLoS ONE, vol. 19, no. 3, pp. e0299127, Mar 27 2024. [14] Jiquan Wang, Sha Zhao, Zhiling Luo, Yangxuan Zhou, Haiteng Jiang, Shijian Li, Tao Li, and Gang Pan, "Cbramod: A crisscross brain foundation model for eeg decoding," arXiv preprint , 2024, Accepted at ICLR 2025. [15] Wajid Mumtaz and Abdul Qayyum, "A deep learning framework for automatic diagnosis of unipolar depression," International Journal of Medical Informatics, vol. 132, pp. 103983, 2019. [16] Wei-Bang Jiang, Li-Ming Zhao, and Bao-Liang Lu, "Large brain model for learning generic representations with tremendous EEG data in BCI," in The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. 2024, OpenReview.net. [17] Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli, "Unsupervised crosslingual representation learning for speech recognition," arXiv preprint , 2020. [18] TencentGameMate, "Chinese HuBERT Large," 2024. [19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, June 2019, pp. 4171-4186, Association for Computational Linguistics. [20] Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu, "Revisiting pre-trained models for Chinese natural language processing," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, Online, Nov. 2020, pp. 657-668, Association for Computational Linguistics. [21] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le, XLNet: generalized autoregressive pretraining for language understanding, Curran Associates Inc., Red Hook, NY, USA, 2019. [22] Nils Reimers and Iryna Gurevych, "Sentence-bert: Sentence embeddings using siamese bert-networks," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. 11 2019, Association for Computational Linguistics. [23] M. Gheorghe, S. Mihalache, and D. Burileanu, "Using deep neural networks for detecting depression from speech," in 2023 31st European Signal Processing Conference (EUSIPCO), Helsinki, Finland, 2023, pp. 411-415.
2510.14923
FINITE ELEMENT METHODS FOR ELECTRONEUTRAL MULTICOMPONENT ELECTROLYTE FLOWS∗ AARON BAIER-REINIO†, PATRICK E. FARRELL‡, AND CHARLES W. MONROE§ Abstract. We present a broad family of high-order finite element algorithms for simulating the flow of electroneutral electrolytes. The governing partial differential equations that we solve are the electroneutral Navier–Stokes–Onsager–Stefan–Maxwell (NSOSM) equations, which model momen- tum transport, multicomponent diffusion and electrical effects within the electrolyte. Our algorithms can be applied in the steady and transient settings, in two and three spatial dimensions, and under a variety of boundary conditions. Moreover, we allow for the material parameters (e.g. viscosity, diffusivities, thermodynamic factors and density) to be solution-dependent and thermodynamically non-ideal. The flexibility of our approach requires us to address subtleties that arise in the gov- erning equations due to the interplay between boundary conditions and the equation of state. We demonstrate the algorithms in various physical configurations, including (i) electrolyte flow around a microfluidic rotating disk electrode and (ii) the flow in a Hull cell of a cosolvent electrolyte mixture used in lithium-ion batteries. Key words. Electrolytes, electroneutrality, multicomponent flows, cross-diffusion, Stefan– Maxwell, Navier–Stokes, finite element methods. 1. Introduction. We address the numerical simulation of liquid electrolytes, which are fluids that transport electrical charge by the motion of ions. Electrolytes are multicomponent fluids (or mixtures), which means that they consist of multiple distinct chemical species (or components) in a common thermodynamic phase [39, 85]. While a general mixture may consist entirely of uncharged species, in an electrolyte at least two of the species must be ions of opposite charge. For example, dissolv- ing table salt in water yields an electrolyte with three species: H2O, Na+ and Cl–. Prominent applications of electrolytic flows include energy storage (e.g. batteries and fuel cells), chemical processes (e.g. electrodialysis and electroplating) and biological systems (e.g. biological membranes) [7, 57, 85]. We assume that the electrolyte is electroneutral, which means that its local charge density is everywhere zero1. This is a common and accurate assumption in most elec- trochemical systems at length scales much larger than the Debye length (which is typically on the order of nanometers) [57, 85]. For example, in lithium-ion batteries, electroneutrality holds throughout the bulk of the electrolyte and is only violated in nanometer-wide double layers at the electrolyte-electrode interfaces. Consequen- tially, physics-based microscale lithium-ion battery models typically use differential equations that assume electroneutrality throughout the domain and incorporate the double layer through boundary conditions [17, 66]. In this work, electroneutrality allows us to follow [78] and employ a change of basis that transforms the governing equations into a form that is structurally similar to that of an uncharged mixture, ∗ Funding: ABR was supported by a Clarendon scholarship from the University of Oxford. PEF was supported by EPSRC grants EP/R029423/1 and EP/W026163/1, and by the Donatio Universi- tatis Carolinae Chair “Mathematical modelling of multicomponent systems”. †Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK (aaron.baier-reinio@maths. ox.ac.uk). ‡Mathematical Institute, University of Oxford, Oxford, OX2 6GG, UK and Math- ematical Institute, Faculty of Mathematics and Physics, Charles University, Czechia (patrick.farrell@maths.ox.ac.uk). §Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK and The Fara- day Institution, Harwell Campus, Didcot, OX11 ORA, UK (charles.monroe@eng.ox.ac.uk). 1Note that electroneutrality does not prevent the transport of charge within the mixture. 1 arXiv:2510.14923v2 [math.NA] 21 Oct 2025 2 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE simplifying both the model and numerics. A constitutive relation for the species mass fluxes must be chosen to model mass transport in multicomponent flows. In electrolytic flows, the Nernst–Planck model is a popular such choice, and it accounts for mass transport by convection, Fickian diffusion and electromigration [6, 28]. Several finite element methods exist for the electroneutral Nernst–Planck model [14, 30, 67], and literature for its non- electroneutral analogue, the Poisson–Nernst–Planck model, is especially abundant (see e.g. [21, 64, 86]). However, because the Nernst–Planck model uses Fick’s law of diffusion [38], it has the drawback of being unable to capture cross-diffusion, a phys- ical phenomenon that arises when different species exert diffusional forces on each other [45, 74, 79]. For dilute mixtures, i.e. mixtures with only one species present in non-trace amounts, it is usually appropriate to neglect cross-diffusion, and doing so greatly simplifies the model. However, many practical problems involve non-dilute mixtures for which Fick’s law is inadequate. For example, in lithium-ion battery electrolyte modelling, cation-anion cross-diffusion must be accounted for to match ex- perimentally observed conductivity, salt diffusivity and transference numbers [11]. For detailed discussions on the limitations of Fick’s law, we refer to [50, 51, 52] for generic mixtures and [57, Chapters 11-12] for electrolytes. Drawbacks of the Nernst–Plack model from a thermodynamical perspective are also given in [28]. To remedy these limitations of the Nernst–Planck model, we treat mass transport using the Onsager–Stefan–Maxwell2 (OSM) equations [10, 52, 85]. These equations are based on irreversible thermodynamics [22, 59, 60] and generalize the Stefan– Maxwell equations, which model cross-diffusion in ideal gases [54, 73]. In electro- chemistry, the OSM equations were popularized by Newman [57, 58] for modelling electrolytes, and the resulting framework is sometimes known as concentrated solu- tion theory. The OSM equations provide a thermodynamically rigorous model for mass transport by convection, cross-diffusion and electromigration. The equations are also compatible with electroneutrality [78] and anisothermality [77], although we do not treat the latter here. Moreover, the OSM equations can handle thermodynamic non-idealities (arising in mixtures with a nonzero excess Gibbs free energy [40]), the effects of which are important in the non-dilute regime [52]. In settings where momen- tum transport is important (e.g. fuel cells, electrodialysis), it is necessary to couple the OSM equations to momentum conservation laws such as the Stokes or Navier–Stokes equations; we call the coupled equations the SOSM or NSOSM equations. Due to its flexibility in treating complicated transport phenomena and nonideal thermodynamics, the OSM framework has received much attention in electrochem- istry as a tool for modelling electrolytic flows [57]. For example, the framework has been applied to lithium-ion batteries [66], fuel cells [83, 84], electrodialysis [48] and electrolysis [71]. Nonetheless, the OSM equations have received limited attention in the scientific computing literature, especially so for the coupling of OSM to Stokes or Navier–Stokes. Most numerics papers on the OSM equations assume a thermodynam- ically ideal gaseous mixture, such as the finite element methods in [15, 20, 46, 55, 76]. For numerics papers on the SOSM or NSOSM equations, we are aware of [32] for a finite difference scheme, [5, 9, 26, 25, 27, 61, 72] for a series of Cartesian-grid finite volume schemes under various physical assumptions in the setting of fluctuating hy- drodynamics, and [2, 3, 18, 19, 53] for finite element schemes. Among these, only [3] considers spatially high-order methods, and only [27, 61] consider charged species 2Nomenclature varies depending on the source; sometimes the name Maxwell–Stefan or even generalized Maxwell–Stefan is used instead. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 3 (although [27, 61] make the Boussinesq approximation to simplify the numerics). We are not aware of existing finite element algorithms for the electroneutral NSOSM equations, a gap the present work addresses. A major novelty of this work lies in our treatment of the electroneutrality condi- tion, which is mathematically an algebraic constraint on the species concentrations. Previous works have used this constraint to obtain an elliptic equation for the electri- cal potential, which is then coupled to the relevant equations that govern mass and momentum transport [27, 30, 67]. However, these coupled equations do not appear to structurally resemble those of an uncharged mixture, and instead new techniques are developed to solve them; temporal splitting schemes are introduced in [27, 30] and a monolithic approach is considered in [67]. By contrast, in this work we use the so- called salt-charge transformation, introduced in [78] and inspired by Newman’s work on binary electrolytes [58], to transform the electroneutral NSOSM equations into a form that structurally resembles the NSOSM equations for an uncharged mixture. We then apply the method of lines whereby we spatially discretize the transformed problem using a steady NSOSM solver, and then solve the resulting spatially discrete system of differential-algebraic equations by time-stepping. Hence, our approach has the advantage of allowing for uncharged NSOSM spatial discretizations and solvers to be employed in the electroneutral setting. The steady NSOSM discretization that we employ is a modification of the SOSM scheme introduced in [3], where we now include the nonlinear convective terms in the momentum balance. These are trivial to include as this work assumes low Reynolds number flow so that the convective terms do not require stabilization. We also apply a second modification that is straightforward but has consequences regarding the experimental parameters needed for the model. Namely, instead of discretizing the OSM equations using chemical potentials (as is done in [2, 3, 18]), we express the OSM diffusional driving forces in terms of mole fraction gradients and we only discretize the mole fractions. Because we do not discretize the chemical potentials, we do not require experimental knowledge of the mixture activity coefficients. Only the thermodynamic factors (i.e. partial derivatives of activity coefficients with respect to mole fractions) are required, which allows us to straightforwardly draw on experimental data from [80, 81] in our numerical experiments. When modelling mixtures, the interplay between the volumetric equation of state (EOS) and boundary conditions (BCs) is subtle. The EOS and BCs can both impose restrictions on the species concentrations, and these restrictions must be compatible. Investigating this interplay is mathematically challenging even for a two-species Fick- ian mixture [37]. In the present work, this challenge is exacerbated by the fact that our steady NSOSM solver requires user-chosen integral constraints to be imposed on the solution [3]. These constraints arise naturally in the steady case, because the steady mass continuity equations do not dictate how many moles of each species are present in the domain; this must be prescribed by additional integral constraints. However, in the transient case, the constraints must be chosen carefully such that they are com- patible with the choice of EOS and BCs. In this paper, we discuss several physically relevant choices of EOS and BCs along with appropriate corresponding constraints. We also discuss a seemingly physical combination of EOS and BCs that appear to yield an ill-posed problem. We are not aware of discussions on this interplay in the multicomponent numerics literature and believe these findings may be of independent interest. Incidentally, we mention that a discussion on the challenges of discretizing the multicomponent mass continuity equations, in a way that is compatible with the 4 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE EOS, is given in [26] in the case of a low-order finite volume scheme3. The remainder of this paper is organized as follows. In section 2 we introduce the electroneutral NSOSM equations, and show how the salt-charge transformation can be applied to obtain an equivalent system amenable to a steady NSOSM solver. In section 3 we first consider the transformed steady problem, which we discretize using the (modified) scheme of [3]. In section 4 we then apply the method of lines to the transformed transient problem, discretizing in space using the steady scheme from section 3 and taking special care to identify what constraints must be imposed depending on the EOS and BCs. In section 5 we demonstrate our algorithm with a variety of numerical examples, and conclusions are drawn in section 6. 2. Governing equations and salt-charge transformation. We consider an isothermal, chemically nonreacting mixture of n ≥3 species indexed by i ∈{1 : n}. Let zi denote the equivalent charge of species i and nc ≥2 the number of charged species. Note that nc = n is possible, e.g. for a molten salt. We list the species so that the first (n−nc) species are uncharged and the last two species are oppositely charged. This means that z1, . . . , zn−nc = 0 and zn−nc+1, . . . , zn ̸= 0 with zn/zn−1 < 0. We next state the governing partial differential equations, which are posed on a bounded and connected Lipschitz spatial domain Ω⊂Rd with d ∈{2, 3}. 2.1. Electroneutral NSOSM equations. The thermodynamic state variables that we consider are the temperature T, the pressure p and the species molar concen- trations ci for i ∈{1 : n}. Since the mixture is isothermal T is a fixed parameter, but p and c1, . . . , cn may vary with space and time. The total concentration is cT = Pn j=1 cj and the density is ρ = Pn j=1 mjcj where mi is the molar mass of species i. Com- position can be described using the species mole fractions xi = ci/cT . Note that by definition Pn j=1 xj = 1. The charge density is ρe = F Pn j=1 zjcj where F is Faraday’s constant. Electroneutrality requires that ρe = 0, or equivalently (2.1) n X j=1 zjcj = 0 in Ω, which restricts the space of permissible compositions. To model momentum and mass transport we employ the Cauchy momentum equation, and nonreacting species mass continuity equations, ∂(ρv) ∂t + ∇· (ρv ⊗v) + ∇p −∇· τ = ρf in Ω, (2.2) ∂ci ∂t + ∇· Ni = 0 in Ω ∀i ∈{1 : n}. (2.3) Here v is the barycentric velocity, τ is the viscous stress, f is a prescribed body force which is typically zero, and Ni is the molar flux of species i ∈{1 : n}. The barycentric velocity is related to the molar fluxes through the mass-average constraint (2.4) ρv = n X j=1 mjNj in Ω. 3The challenges encountered in [26] are different from those encountered in this work, presumably because they discretize the species partial densities whereas we discretize mole fractions, and also because they attempt to enforce the EOS pointwise whereas we do not. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 5 Since ρ = Pn j=1 mjcj one can multiply (2.3) by mi and sum over i while using (2.4) to obtain the equation for total mass continuity ∂tρ+∇·(ρv) = 0. However, we never explicitly discretize this equation since it is a consequence of (2.3) and (2.4). The transport equations (2.2) and (2.3) must be closed with constitutive laws for the viscous stress and mass fluxes. For the former we apply the Newtonian fluid law (2.5) τ = 2ηϵ(v) + (ζ −2η/d)(∇· v)I, where ϵ(v) is the symmetric gradient of v, I is the d × d identity matrix, η > 0 is the shear viscosity and ζ > 0 is the bulk viscosity. We allow the viscosities η and ζ to depend on the thermodynamic state variables T, p and x1, . . . , xn. To model the mass fluxes we employ the isothermal, non-isobaric Onsager–Stefan– Maxwell (OSM) equations [10, 52, 57, 85], which can be written using molar fluxes as (2.6) −∇µi + mi ρ ∇p = n X j=1 MijNj in Ω ∀i ∈{1 : n}, where µi is the electrochemical potential of species i ∈{1 : n} and M is the Onsager transport matrix. Two important properties of M are that it is symmetric positive- semidefinite, and its nullspace is spanned by [c1, . . . , cn]⊤. Moreover, the Gibbs– Duhem relation implies that Pn j=1 cj(−∇µj +[mj/ρ]∇p) = 0 [41]. The left-hand side of (2.6) therefore lies in the orthogonal complement of span{[c1, . . . , cn]⊤}, which is the range of M. Hence (2.6) is non-uniquely solvable for N1, . . . , Nn. Uniqueness of the fluxes comes from imposing the mass-average constraint (2.4) in addition to (2.6). Often M is written in terms of so-called Stefan–Maxwell diffusivities Dij as (2.7) Mij = ( − RT DijcT if i ̸= j, Pn k=1,k̸=i RT ck DikcT cj if i = j, with R the gas constant. The diffusivities satisfy Dij = Dji for i ̸= j [10] while Dii is undefined. We allow the Dij to depend on T, p and x1, . . . , xn (see e.g. [49]). Equations (2.1)–(2.6) comprise the electroneutral NSOSM equations. In addition to needing suitable boundary and initial conditions, the equations as written are still not closed. To obtain a closed problem one must supply a volumetric equation of state (EOS), which gives the total concentration cT (or equivalently density ρ) as a function of T, p and x1, . . . , xn. A constitutive law giving the electrochemical potentials µi as a function of T, p, x1, . . . , xn and the electrical state of the system must also be given. However, in multicomponent systems, formulating the notion of electrical state is delicate, and requires a foray into technicalities of electrochemical thermodynamics. Intuitively, the electrical state should quantify how much electrical potential en- ergy is locally available in the system. A reasonable model may be to introduce an “electrostatic potential” ϕ which acts as a Lagrange multiplier for enforcing elec- troneutrality, and to then decompose the electrochemical potentials as µi = µchem i +ϕ, where µchem i encodes the “chemical” or “non-electrical” part of µi and depends on T, p and x1, . . . , xn only. However, Guggenheim and Newman have criticized such decom- positions as being ambiguous [41, 57], with Newman noting that several inequivalent notions of “electrostatic potential” exist in concentrated mixtures (see also [12, 62]). Fortunately, we do not need to contend with this thermodynamical technicality in the present work. Indeed, owing to electroneutrality, the salt-charge transformation 6 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE will eliminate any need to model the dependence of electrochemical potentials on the electrical state [78]. In what follows, it will only be necessary to model the dependence of chemical potentials of hypothetical uncharged salts on T, p and x1, . . . , xn4. 2.2. Salt-charge transformation. It will be helpful to use boldface notation for vectors and matrices that are indexed by i ∈{1 : n}. For example, we shall write z = [z1, . . . , zn]⊤, c = [c1, . . . , cn]⊤, m = [m1, . . . , mn]⊤and so on. These should be interpreted as column vectors (or n × 1 matrices). Taking their gradient yields n × d matrices, hence for example (∇µ)ij = (∇µi)j. For the fluxes and other Rd-valued quantities we write N = [N1, . . . , Nn]⊤which is understood to be an n × d matrix, and the divergence acts on its rows so that ∇·N is a column vector (or n×1 matrix). With this notation, the governing equations in (2.1), (2.3), (2.4), and (2.6) become z⊤c = 0 in Ω, (2.8) ∂c ∂t + ∇· N = 0 in Ω, (2.9) v = ψ⊤N in Ω, (2.10) −∇µ + ψ∇p = MN in Ω, (2.11) where ψ := m/ρ and ∇p is viewed as being a 1 × d row vector. We now carry out the salt-charge transformation. While a detailed presentation is given in [78], we briefly outline it here for completeness, and because [78] assumes an isobaric mixture and hence does not keep track of pressure gradient terms in the OSM equations. The starting point is an n × n transformation matrix Z given by (2.12) Z =   ν⊤ 1 ... ν⊤ n−1 z⊤/ ∥z∥  . Here ∥z∥= √ z⊤z and ν1, . . . νn−1 are n×1 column vectors whose entries contain the stoichiometric coefficients of n −1 independent hypothetical chemical reactions that neutralize the species. An example from [78] is an n = 5 species mixture consisting of H2O, Na+, Cl–, Mg2+ and SO2– 4 . Therefore z⊤= [0, 1, −1, 2, −2], and a choice of neutralizing reactions with corresponding stoichiometric coefficient vectors is H2O −⇀ ↽−H2O −→ ν⊤ 1 = [1, 0, 0, 0, 0], Na+ + Cl−−⇀ ↽−NaCl −→ ν⊤ 2 = [0, 1, 1, 0, 0], Mg2+ + 2 Cl−−⇀ ↽−MgCl2 −→ ν⊤ 3 = [0, 0, 2, 1, 0], 2 Na+ + SO2− 4 −⇀ ↽−Na2SO4 −→ ν⊤ 4 = [0, 2, 0, 0, 1]. Note that the choice of reactions may not be unique. Following [78] we make the convention that ν1, . . . νn−1 have the following properties. First, for the uncharged species i ∈{1 : n−nc} we have (νi)j = δij, corresponding to a trivial identity reaction. Second, for i ∈{n −nc + 1 : n −1} the first n −nc entries of νi are zero, and exactly two of the remaining entries of νi are nonzero, and must be coprime positive integers such that νi is orthogonal to z. Third, the νi for i ∈{n −nc + 1 : n −1} must be 4In the non-electroneutral setting this strategy does not work. One must instead choose how to quantify the electrical state and model how the electrochemical potentials depend on it. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 7 linearly independent. The last two properties state that the non-trivial reactions form simple neutral salts (i.e. salts formed from two ions) and are independent. These three properties imply that {ν1, . . . , νn, z} is a basis for Rn and therefore Z is invertible. Through Z the electrochemical potentials transform by µZ := Zµ. The entries (µZ)i for i ∈{1 : n −1} are independent of the electrical state of the system and represent chemical potentials. Indeed, for i ∈{1 : n −nc}, (µZ)i = µi is the electro- chemical potential of species i, but since species i is uncharged the electrochemical potential does not depend on the electrical state and is just a chemical potential. For i ∈{n −nc + 1 : n −1}, (µZ)i = ν⊤ i µ does not depend on the electrical state because it is the chemical potential of the neutral salt formed in the reaction corresponding to νi [58, 78]. Following [78] we decompose µZ as (recall that F is the Faraday constant) µZ =  µν F ∥z∥ΦZ  , where µν is an (n −1) × 1 column vector of the chemical potentials and ΦZ is a scalar-valued salt-charge potential. The potential ΦZ can be related to the potential of a hypothetical probe reference electrode immersed in the mixture, and serves to quantify the electrical state. However, no material parameters in the model depend on ΦZ. The concentrations and fluxes transform by cZ := Z−⊤c and NZ := Z−⊤N. Since µZ := Zµ these transformations preserve the structure of the volumetric Gibbs free energy ˜G = c⊤µ = c⊤ ZµZ and energy dissipation function T ˙s = −tr(N ⊤∇µ) = −tr(N ⊤ Z ∇µZ). As in [78] we decompose (2.13) cZ = cν 0  , NZ =  Nν J⊤/(F ∥z∥)  . The condition (cZ)n = 0 is equivalent to the electroneutrality condition (2.8), and J = FN ⊤z is the current density. The variables cν can be thought of as the molar concentrations of the salts formed in the neutralizing reactions, and Nν can be thought of as their molar fluxes. Using the equivalent electroneutrality condition (cZ)n = 0, one verifies that the governing equations (2.9)–(2.11) transform as ∂ ∂t cν 0  + ∇· Nν ∇· J  = 0 in Ω, (2.14) v = ψ⊤ Z NZ in Ω, (2.15) −∇µZ + ψZ∇p = MZNZ in Ω, (2.16) where ψZ := Zψ and MZ := ZMZ⊤. Since the transformed Onsager transport matrix MZ is congruent to M, it retains the crucial structural properties of symmetry and positive-semidefiniteness (with a nullspace of dimension one). Equations (2.14)–(2.16) structurally resemble the uncharged OSM equations. In particular, the stationary version of (2.14)–(2.16) (removing the time derivative term in (2.14)) may be coupled to the stationary Stokes equations for v and p, and the resulting problem can be solved using the stationary SOSM solver of [3]. The Picard linearized analysis of [3] is applicable in such a setting. However, instead of working with this formulation we will instead further develop (2.16) by expressing the chemical potential gradients in terms of mole fraction gradients. Consequently we will not need a constitutive formula for the chemical potentials (or equivalently, activities) and will only need a constitutive formula for the thermodynamic factors. The mole fractions 8 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE transform as xZ := Z−⊤x and electroneutrality implies (xZ)n = 0, hence we write xZ = xν 0  , with xν an (n −1) × 1 column vector that parametrizes composition of the mixture under electroneutrality. The normalization constraint Pn i=1 xi = 1 transforms to (2.17) ν⊤ Z xν = 1 where (νZ)i = Pn j=1 Zij for i ∈{1 : n −1}. Generalizing [78, eq. (8.22)] to the non-isobaric case, we have (2.18) ∇µν = RTX0 ν∇xν + Vν∇p where the (n −1) × (n −1) matrix X0 ν encodes the thermodynamic factors under electroneutrality, and Vν is an (n −1) × 1 column vector of partial molar volumes: Vν := ∂µν ∂p  T,xν=constant. Partial molar volumes can be recovered from the volumetric EOS using, for example, Newman’s formula [57, Appendix A]. Both X0 ν and Vν are functions of (T, p, xν) only. 2.3. Augmentation strategy. As mentioned previously, the Onsager transport matrix M has a nullspace of dimension one, and only n −1 of the OSM equations in (2.6) are linearly independent. It is the mass-average constraint (2.4), in conjunc- tion with the OSM equations (2.6), that allows for the molar fluxes to be uniquely determined. To weakly enforce the mass-average constraint at the discrete level and simultaneously address the fact that M is singular, we employ the augmentation strategy introduced in [31, 43] and used in [2, 3, 76]. In the present transformed set- ting, we introduce a user-chosen augmentation parameter γ > 0 and add a multiple of the mass-average constraint (2.15) to the OSM equation (2.16) in the following way: (2.19) −∇µZ + ψZ∇p + γψZv = γψZψ⊤ Z NZ + MZNZ = M γ ZNZ in Ω, where M γ Z := γψZψ⊤ Z + MZ is an augmented transport matrix and is nonsingular. Following [2, 3], for symmetry we also incorporate the mass-average constraint in (2.2) through (2.20) ∂(ρv) ∂t + ∇· (ρv ⊗v) + ∇p + γ(v −ψ⊤ Z NZ) −∇· τ = ρf in Ω. We also only explicitly enforce the divergence of the mass-average constraint, i.e. (2.21) ∇· v = ∇· (ψ⊤ Z NZ) in Ω. We shall employ the augmented equations (2.19)–(2.21) because they yield a system with the same number of equations as unknowns, and, in an appropriate Picard linearized setting, yield a well-posed symmetric saddle point problem [2, 3]. 2.4. Full problem formulation. Our final formulation of the electroneutral NSOSM problem is comprised of the following set of equations, taken from (2.14), (2.20), (2.21) and by combining (2.18) with (2.19). The unknown functions of space FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 9 and time to be solved for are the velocity v, transformed fluxes Nν, current density J, pressure p, transformed mole fractions xν and salt-charge potential ΦZ, such that: ∂(ρv) ∂t + ∇· (ρv ⊗v) + ∇p + γ(v −ψ⊤ Z NZ) −∇· τ = ρf in Ω, (2.22a) − RTX0 ν 0 0 F ∥z∥  ∇xν ∇ΦZ  +  ψZ − Vν 0   ∇p + γψZv = M γ ZNZ in Ω, (2.22b) ∇· (v −ψ⊤ Z NZ) = 0 in Ω, (2.22c) ∂ ∂t  cν 0  +  ∇· Nν ∇· J  = 0 in Ω. (2.22d) This problem must be supplemented with suitable initial and boundary conditions, which we discuss in forthcoming sections. We recall how all quantities appearing in (2.22) depend on the unknowns. The viscous stress τ in (2.22a) is given by (2.5) and the viscosities in (2.5) are assumed to be known functions of (T, p, xν). The vector NZ was defined in (2.13) and is simply a concatenation of the fluxes Nν and scaled current density J. The constant γ > 0 is a user-chosen augmentation parameter and f is a known forcing term. All remaining quantities in (2.22) are assumed to be known Lipschitz continuous functions of (T, p, xν). The density ρ, concentrations cν and partial molar volumes Vν can be determined as functions of (T, p, xν) using the volumetric EOS. The vector ψZ = Zm/ρ depends on ρ only. The matrix X0 ν of thermodynamic factors can be expressed in terms of (n −1)(n −2)/2 independent Darken factors, while the transport matrix M γ Z can be expressed in terms of n(n −1)/2 independent Stefan– Maxwell diffusivities [78]. The dependence of these material parameters on (T, p, xν) is typically modelled by fitting experimental data. 3. Spatial discretization. In this section, we introduce finite element methods to spatially discretize the electroneutral NSOSM equations (2.22) in steady form: ∇· (ρv ⊗v) + ∇p + γ(v −ψ⊤ Z NZ) −∇· τ = ρf in Ω, (3.1a) − RTX0 ν 0 0 F ∥z∥  ∇xν ∇ΦZ  +  ψZ − Vν 0   ∇p + γψZv = M γ ZNZ in Ω, (3.1b) ∇· (v −ψ⊤ Z NZ) = 0 in Ω, (3.1c)  ∇· Nν ∇· J  = 0 in Ω. (3.1d) The steady problem (3.1) must be supplemented with suitable boundary conditions and integral constraints, which we now describe. 3.1. Boundary conditions. Let Γ := ∂Ωdenote the boundary of Ωand nΓ the unit outward normal on Γ. We consider the following flux5 boundary conditions: v = [(ψ⊤ Z NZ) · nΓ]nΓ + gv∥ on Γ, (3.2a) (Nν)i · nΓ = gi on Γ ∀i ∈{1 : n −1}, (3.2b) J · nΓ = gJ on Γ. (3.2c) 5We do not consider Dirichlet BCs in this section, but see subsection 5.2 for an example of how they can be implemented. 10 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE The functions gv∥: Γ →Rd, gi : Γ →R and gJ : Γ →R are prescribed data. We assume gv∥· nΓ = 0 on Γ, so that (3.2a) enforces the mass-average constraint (2.15) in the normal direction and v = gv∥in the tangential directions. In practical applications the prescribed normal fluxes gi and gJ may be known algebraic functions of the state variables (T, p, xν) and ΦZ. We shall allow for such dependencies. An example is that of a Butler–Volmer boundary condition [24, 57], which on an electrode interface Γe ⊂Γ relates the normal component of current density to the overpotential. The overpotential quantifies the electrical potential dif- ference across the interface. Butler–Volmer BCs generally depend on (T, p, xν) and ΦZ nonlinearly, and we consider this case in section 5. However, for small overpo- tentials and neglecting variations in composition, it is instructive to point out that Butler–Volmer BCs may be approximated by a linear relationship [57] (3.3) gJ = −i0 (αa + αc)F RT (Ve −ΦZ) on Γe. Here i0 is the exchange-current density, Ve is the electrode voltage and αa, αc are apparent transfer coefficients. These parameters are determined by experimental data. 3.2. Integral constraints. In general, the steady problem (3.1) with boundary conditions (3.2) is not uniquely solvable, and 1 ≤k ≤n + 1 additional constraints must be imposed for uniqueness. The necessity of such constraints in the uncharged, steady SOSM setting was pointed out in [3]. However, the present situation is more complicated than that of [3] because we allow for the boundary data in (3.2b)–(3.2c) to be solution-dependent. The situation will be further complicated in section 4 when we consider the transient problem, but for now we focus on the steady case. The need for constraints can be motivated as follows. Integrate the continuity equations (3.1d) over Ωand use the divergence theorem together with the BCs (3.2b) and (3.2c) to obtain n compatibility conditions on the data, (3.4) Z Γ gi dΓ = 0 ∀i ∈{1 : n −1} and Z Γ gJ dΓ = 0. For the problem (3.1) with BCs (3.2) to admit a solution, the conditions in (3.4) must be satisfiable. Assuming this is so, let l ≤n denote the number of independent constraints that (3.4) imposes on the unknowns (p, xν, ΦZ). Next, similarly integrate (3.1c) over Ωand use the BC (3.2a) to obtain an (n + 1)-th compatibility condition (3.5) Z Γ gv∥· nΓ dΓ = 0, which holds automatically since gv∥· nΓ = 0. Hence, we have shown that integrating the n+1 equations in (3.1c)–(3.1d) over Ωleads to only l ≤n independent constraints on the solution. An additional k = n + 1 −l constraints must therefore be imposed for uniqueness. The same argument will apply at the discrete level. Namely, if D denotes the number of discrete degrees of freedom, then discretization of (3.1)–(3.2) will result in a system of D equations, but only D −k of these will be independent. This motivates why k additional constraints are needed for uniqueness. We will give examples of choices of constraints in subsection 3.6. Note that these constraints do not need to be integral constraints, but usually they are in practice. 3.3. Finite element spaces. We use the standard notation for Lebesgue and Sobolev spaces L2(Ω), W 1,∞(Ω), H1(Ω), H(div; Ω) and their norms [34]. We use FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 11 (·, ·)Ωand ⟨·, ·⟩Γ to denote the L2-inner products on Ωand Γ respectively. Moreover, we write L2 0(Ω) := {q ∈L2(Ω) : (q, 1)Ω= 0} for the subspace of functions in L2(Ω) with vanishing mean. We write H1 0(Ω) := {u ∈H1(Ω) : u|Γ = 0} and H0(div; Ω) := {u ∈H(div; Ω) : (u · nΓ)|Γ = 0} for subspaces with vanishing traces. To discretize (3.1) we introduce finite dimensional finite element subspaces that may depend on a parameter h ∈(0, ∞) representing, for example, the mesh size: Vh ⊂H1(Ω)d, V0h := Vh ∩H1 0(Ω)d, (3.6a) Ph ⊂L2(Ω), P0h := Ph ∩L2 0(Ω), (3.6b) Nh ⊂H(div; Ω), N0h := Nh ∩H0(div; Ω), (3.6c) Xh ⊂L2(Ω), X0h := Xh ∩L2 0(Ω). (3.6d) Importantly, we assume Ph and Xh contain the constant functions, i.e. 1 ∈Ph ∩Xh. We shall seek the discrete velocity vh ∈Vh and pressure ph ∈Ph. We assume that (Vh, Ph) forms an inf-sup stable Stokes pair [35], in the sense that: (3.7) β ∥qh∥L2(Ω) ≤ sup uh∈V0h (qh, div uh)Ω ∥uh∥H1(Ω)d ∀qh ∈P0h, for some β > 0 independent of h. We shall seek the discrete fluxes (Nν,h)i ∈Nh for i ∈{1 : n −1} and current density Jh ∈Nh. Similarly, we seek the discrete mole fractions (xν,h)i ∈Xh for i ∈{1 : n −1} and salt-charge potential ΦZ,h ∈Xh. We assume that (Nh, Xh) forms a divergence-free and inf-sup stable mixed-Poisson pair [35], in the sense that: (3.8) div N0h ⊂X0h and β′ ∥yh∥L2(Ω) ≤ sup Kh∈N0h (yh, div Kh)Ω ∥Kh∥H(div;Ω) ∀yh ∈X0h, for some β′ > 0 independent of h. Assumptions (3.7) and (3.8) are motivated by the analysis of [3], where similar assumptions are shown to ensure well-posedness of a Picard linearized SOSM system. Following [3], in our numerical experiments we employ the degree k ≥2 Taylor– Hood pair [13, 75] for (Vh, Ph) (but c.f. [3] for a discussion on other choices such as Scott–Vogelius [70]). For (Nh, Xh) we employ either the BDMk–DGk−1 pair [16, 56] or the RTk–DGk−1 [65] pair. These spaces are standard in finite element software packages, extend to high orders, and are applicable in two or three spatial dimensions on triangular, tetrahedral, quadrilateral or hexahedral (possibly curved) meshes. 3.4. Lipschitz continuous reconstruction operators. Our discretization in subsection 3.5 will involve integration-by-parts of the gradient terms on the left-hand side of (3.1b). This requires the entries of X0 ν, ψZ and Vν to lie in W 1,∞(Ω). Dis- cretization of (3.1c) will likewise require ψZ ∈(W 1,∞(Ω))n. Recall that we assume these quantities to be known Lipschitz continuous functions of (T, p, xν). However, at the discrete level, the spaces Ph or Xh may be discontinuous. For this reason we shall evaluate thermodynamic properties such as X0 ν, ψZ, Vν using Lipschitz continuous reconstructions of ph and xν,h. To be precise, we assume that smoothing operators πPh : Ph →Ph ∩W 1,∞(Ω), πXh : Xh →Xh ∩W 1,∞(Ω), (3.9) 12 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE are available. In our numerical simulations we take πPh to be the L2(Ω)-projection of Ph into Ph ∩W 1,∞(Ω) and likewise for Xh. Nodal averaging operators could alternatively be employed [33]. Given ph and xν,h we introduce their reconstructions f ph := πPhph, ^ (xν,h)i := πXh(xν,h)i . n−1 X j=1 (νZ)jπXh(xν,h)j. Note that the reconstructed mole fractions are normalized to satisfy condition (2.17) exactly. We then write g X0ν, f ψZ, f Vν, eρ, and so on, to denote quantities evaluated with the reconstructed f ph and g xν,h instead of ph and xν,h. 3.5. Discretized problem. Our discrete variational formulation of (3.1) can be obtained by multiplying the equations with suitable test functions and integrating over Ω. The pressure and viscous terms in (3.1a) as well as the gradient terms on the left-hand side of (3.1b) are integrated-by-parts, and all boundary terms vanish owing to our BCs in (3.2). Following [3] we also add density consistency terms to (3.1c). The precise discretization we consider is as follows. We seek discrete functions vh ∈Vh, ph ∈Ph, Nν,h ∈(Nh)n−1, Jh ∈Nh, xν,h ∈(Xh)n−1 and ΦZ,h ∈Xh. Let NZ,h :=  Nν,h J⊤ h /(F ∥z∥)  ∈(Nh)n, which is analogous to NZ in (2.13). The discrete variational problem reads: ∇· (eρvh ⊗vh) + γ(vh −g ψ⊤ Z NZ,h) −eρf, uh  Ω− ph, ∇· uh  Ω + 2eηϵ(vh) + (eζ −2eη/d)(∇· vh)I, ϵ(uh)  Ω= 0 ∀uh ∈V0h, (3.10a) xν,h ΦZ,h  , ∇·  RT g X0νWh F ∥z∥K⊤ h  ! Ω − ph, ∇· ( g ψ⊤ Z Wh K⊤ h  −g V ⊤ ν Wh )! Ω + γ f ψZvh, Wh K⊤ h  ! Ω = g M γ ZNZ,h, Wh K⊤ h  ! Ω ∀ Wh K⊤ h  ∈(N0h)n, (3.10b)  ∇· (vh −g ψ⊤ Z NZ,h), qh  Ω− D nΓ · (vh −g ψ⊤ Z NZ,h), qh E Γ = 0 ∀qh ∈Ph, (3.10c) ∇· NZ,h, yh  Ω= 0 ∀yh ∈(Xh)n. (3.10d) Moreover, we strongly impose the following discrete analogue of the BCs in (3.2): vh = πVh  [(g ψ⊤ Z NZ,h) · nΓ]nΓ + gv∥  on Γ, (3.11a) (Nν,h)i · nΓ = πNh egi  on Γ ∀i ∈{1 : n −1}, (3.11b) Jh · nΓ = πNh f gJ  on Γ. (3.11c) Here πVh and πNh are L2(Γ)-projection operators onto the discrete trace spaces6 πVh : [L2(Γ)]d →{uh|Γ : uh ∈Vh} ⊂[L2(Γ)]d, (3.12a) πNh : L2(Γ) →{(Kh · nΓ)|Γ : Kh ∈Nh} ⊂L2(Γ). (3.12b) Conditions (3.10)–(3.11) define our discrete scheme. 6For (3.12b) to be well-defined, we implicitly assume (Kh ·nΓ)|Γ ∈L2(Γ) ∀Kh ∈Nh. In practice, the finite element space Nh will consist of piecewise smooth functions, so that this indeed holds. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 13 3.6. Integral constraints in the discrete setting. As already discussed in subsection 3.2, for uniqueness of a solution to the discretized problem (3.10)–(3.11), additional constraints must be supplied. To see this at the discrete level, let D := dim Vh × Ph × (Nh)n × (Xh)n and note that, once a finite element basis has been chosen, the problem (3.10)–(3.11) amounts to a nonlinear system of D equations in D unknowns. However, notice that (3.10c) holds automatically when qh is a constant, as verified from integration-by-parts (this is analogous to how (3.5) holds automatically). This property is one way of motivating density consistency terms (i.e. the boundary terms) in (3.10c) [3]. Likewise, taking the entries of yh to be constants in (3.10d), integrating-by-parts, using BCs (3.11b)–(3.11c) yields (in analogy to (3.4)) that (3.13) Z Γ πNh egi  dΓ = 0 ∀i ∈{1 : n −1} and Z Γ πNh f gJ  dΓ = 0. Let l ≤n denote the number of independent constraints that (3.13) imposes on the discrete solution. Problem (3.10)–(3.11) then consists of only D −k independent equations, where k = n + 1 −l. Hence, an additional k constraints must be imposed. Since k ≥1, at least one constraint is always required. This constraint does not encode physical information, and instead reflects our choice to solve for n −1 transformed mole fractions despite the fact that only n −2 of them are independent. Following [3] we impose the normalization condition (2.17) on average over Ω: (3.14) Z Ω (ν⊤ Z xν,h −1) dΩ= 0. Together with (3.14), the remaining k −1 constraints must be chosen on a case-by- case basis. What choices of constraints are appropriate will depend on BCs and the functional dependence of the thermodynamic properties (i.e. the properties in (3.10) with a tilde) on p and xν. We now give examples of this for concreteness. Case (i): The BCs in (3.2b) and (3.2c) do not depend on p, xν or ΦZ. In this case, the compatibility conditions (3.13) do not impose any constraints on the solution, so l = 0 and n additional constraints are required. Since no thermodynamic properties depend on ΦZ, this field is then undetermined up to an additive constant, which can be fixed through a constraint such as R ΩΦZ,h dΩ= 0. If no thermodynamic properties depend on p then it too is undetermined up to an additive constant, which can be fixed by means of R Ωph dΩ= 0. However, if any of thermodynamic properties depend on p then additive shifts of ph by a constant affect the physical content of the model. In this case, constraints of the form R Ω(ph −¯p) dΩ= 0 may be used, where ¯p ∈R is a user-prescribed mean pressure. If the volumetric EOS depends on p (equivalently if ρ depends on p) then an alternative constraint that is more harmonious with experimentally available information may be R Ωeρ dΩ= M tot, where M tot ∈R is the user-prescribed total mass of fluid in Ω. The final n −2 constraints may be chosen to express the relative abundances of the species in Ω. For example, one may use R Ωg (cν)i dΩ= ctot i for i ∈S, where S ⊂{1 : n −1} contains n −2 indices, and ctot i ∈R are user-prescribed total numbers of moles of the species in Ω. Case (ii): The BCs in (3.2b) and (3.2c) are solution-dependent. To illustrate how solution-dependent BCs fit into this framework, consider the case where gJ is given by a Butler–Volmer-type relation in (3.3) or some nonlinear analogue of this, while gi = αigJ/F for i ∈{1 : n −1} with {αi}n−1 i=1 a set of known constants. These BCs model electrode kinetics, and the αi relate to the stoichiometric coefficients of the electrode reaction. Since each gi is a multiple of gJ, the compatibility conditions 14 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE in (3.13) impose exactly one constraint on the solution, so l = 1 and n −1 additional constraints are required. Since ΦZ appears in the BC (3.3) it is no longer necessary (or physically reasonable) to fix R ΩΦZ,h dΩ= 0. Instead, the n−1 constraints should be chosen to determine the pressure and relative species abundances, as in case (i). 4. Temporal discretization. Our spatial discretization from section 3 can be applied in the transient setting using the method of lines. However, special care must be taken to address the subtle interplay between the species mass continuity equations, boundary conditions, volumetric EOS and integral constraints. 4.1. Semi-discrete problem. Discretization in space (but not time) of the transient problem (2.22) yields the following semi-discrete analogue of (3.10). We seek time-dependent discrete functions vh ∈Vh, ph ∈Ph, Nν,h ∈(Nh)n−1, Jh ∈Nh, xν,h ∈(Xh)n−1 and ΦZ,h ∈Xh such that: d dt eρvh, uh  Ω+ ∇· (eρvh ⊗vh) + γ(vh −g ψ⊤ Z NZ,h) −eρf, uh  Ω − ph, ∇· uh  Ω+ 2eηϵ(vh) + (eζ −2eη/d)(∇· vh)I, ϵ(uh)  Ω= 0 ∀uh ∈V0h, (4.1a)  xν,h ΦZ,h  , ∇·  RT g X0νWh F ∥z∥K⊤ h  ! Ω − ph, ∇· ( g ψ⊤ Z Wh K⊤ h  −g V ⊤ ν Wh )! Ω + γ f ψZvh, Wh K⊤ h  ! Ω = g M γ ZNZ,h, Wh K⊤ h  ! Ω ∀ Wh K⊤ h  ∈(N0h)n, (4.1b)  ∇· (vh −g ψ⊤ Z NZ,h), qh  Ω− D nΓ · (vh −g ψ⊤ Z NZ,h), qh E Γ = 0 ∀qh ∈Ph, (4.1c) d dt g cν,h 0  , yh ! Ω + ∇· NZ,h, yh  Ω= 0 ∀yh ∈(Xh)n. (4.1d) We supplement this problem with the same strongly enforced BCs (3.11) from the steady case, and we permit the boundary data to depend on time. 4.2. Integral constraints in the transient setting. As in the steady case, problem (4.1) with BCs (3.11) requires integral constraints for well-posedness. How- ever, the interplay between the BCs, volumetric EOS and mass continuity equations becomes especially important in the transient setting, as we now elucidate through considerations similar to those in subsection 3.6. First, note that (4.1c) holds automatically when qh = 1, as in the steady case. This suggests that at least one integral constraint will be needed for well-posedness. Next, taking the entries of yh to be constants in (4.1d) yields, similarly to (3.13), that d dt Z Ω (g cν,h)i dΩ+ Z Γ πNh egi  dΓ = 0 ∀i ∈{1 : n −1}, (4.2a) Z Γ πNh f gJ  dΓ = 0. (4.2b) If gJ depends on ΦZ by a Butler–Volmer-type BC (3.3), then (4.2b) constrains additive shifts in ΦZ,h. Otherwise, assuming gJ does not depend on p, xν or ΦZ and none of the gI depend on ΦZ, the (consequently undetermined) additive constant in ΦZ,h can be fixed by an extra constraint R ΩΦZ,h dΩ= 0. This leaves us to consider (4.2a). FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 15 To study (4.2a) we assume a confined flow, in the sense that the total normal fluxes over Γ are zero. In other words, we assume that the gi satisfy (4.3) Z Γ πNh egi  dΓ = 0 ∀i ∈{1 : n −1}, so that the compatibility conditions in (4.2a) become (4.4) d dt Z Ω (g cν,h)i dΩ= 0 ∀i ∈{1 : n −1}, which physically states that the total number of moles of all species is conserved. Note that (4.3) holds for Butler–Volmer-type BCs, since in this case gi = αigJ/F and gJ satisfies (4.2b). We now investigate how many of the n −1 conditions in (4.4) are actually independent. Note that a general volumetric EOS relates the total concentration cT to the partial molar volumes V ⊤ ν and mole fractions xν by [41] (4.5) 1 cT = V ⊤ ν xν. Hence, multiplying (4.5) by cT , we see that the entries of g cν,h are related by (4.6) 1 = g V ⊤ ν g cν,h. In light of (4.6), it is clear that the total molar conservation conditions (4.4) may not all be independent, or may even be overdetermined. This leads us to consider three pertinent cases, depending on the functional dependence of Vν on p and xν. Case (i): The partial molar volumes are constant. In liquid mixtures it is often reasonable to approximate the partial molar volumes Vν as being constant [29]. It is then clear from (4.6) that only n −2 of the conditions in (4.4) are independent. If any n −2 of these conditions hold, then the final one will hold as well. Thus, in this case, we must impose two additional constraints (recall that the other constraint comes from taking qh = 1 in (4.1c)). A reasonable choice is to enforce (3.14) and a constraint that fixes the pressure such as R Ω(ph −¯p) dΩ= 0. If the partial molar volumes are constant then X0 ν, ρ and ψZ are independent of p. If η, ζ and MZ are also modelled as being independent of p, and p does not appear in the BCs, then the user-chosen value of ¯p will not affect the physical content of the model. Case (ii): The partial molar volumes depend on xν but not p. If the partial molar volumes are non-constant (which happens only if they depend on p or xν), then the conditions in (4.4) are generally independent. One must then consider whether they can all be satisfied simultaneously. If the partial molar volumes Vν depend on xν only, then the set of admissible concentrations cν is parametrized by xν. Since only n −2 entries of xν are independent, it does not seem reasonable to expect that all n −1 conditions in (4.4) can be satisfied simultaneously, and we hypothesize that in this case the model problem (2.22) may not be well-posed. We give an explicit example of how this case can produce an ill-posed model in Appendix A. Challengingly, in real electrolytes the partial molar volumes can vary appreciably with composition but negligibly with pressure [57, 80, 81]. To overcome the challenge of an ill-posed model one can explicitly include pressure-dependence in Vν. However, if this dependence is too small, then to satisfy all n−1 conditions in (4.4) the pressure may be forced to grow unphysically large. We hypothesize that this challenge could be overcome by allowing the volume of Ωto vary in time (i.e. the total volume the 16 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE electrolyte occupies may vary). However, attempting this lies outside the scope of the present work, since it would (very challengingly) require both extending the model and numerically solving a moving boundary problem. A simpler fix may be to use BCs that do not satisfy (4.3) and instead allow some electrolyte to “leak” from Ω. Case (iii): The partial molar volumes depend on both p and xν. As alluded to case (ii), if Vν depends on both p and xν, then the set of admissible concentrations cν is parametrized by the n−1 independent quantities (p, xν). We therefore heuristically expect all n −1 conditions in (4.4) to be satisfiable. In this case, only one additional constraint is needed (from taking qh = 1 in (4.1c)), and we suggest to employ (3.14). Our analysis of cases (i)-(iii) only involves the total molar conservation conditions (4.4) and the volumetric EOS (4.5). In particular, our considerations are applicable to any transient multicomponent fluid model with a general volumetric EOS. We are not aware of these considerations being discussed elsewhere in the literature; particularly that of case (ii) where a seemingly physical choice of EOS can lead to an ill-posed problem. Although we are aware of numerical works [18, 25] that employ the EOS (4.5) with constant partial molar volumes, these works do not touch on the possibility of case (ii) and its associated challenges. Therefore, we believe that our considerations here may be of general interest to the multicomponent fluids community. 4.3. Time-stepping methods. The semi-discrete problem (4.1), together with strongly enforced BCs (3.11) and appropriate integral constraints of subsection 4.2, amounts to a nonlinear system of differential-algebraic equations (DAEs). Many time- stepping schemes exist for solving DAEs, including Runge–Kutta methods and multi- step methods [82]. In our numerical simulations we choose to employ implicit Runge- Kutta methods, because of their desirable stability properties and ability to provide high-order temporal accuracy [82]. 5. Numerical examples. In this section we present numerical examples of our methods, implemented using Firedrake [42]. We use Irksome [36, 47] for time-stepping and ngsPETSc [8, 69] for mesh generation. We solve the nonlinear systems with Newton’s method [23] and the sparse direct solver MUMPS [1]. Code can be found at https://bitbucket.org/abaierr/multicomponent electrolyte code. 5.1. Hull cell electroplating. To demonstrate our methods in a physically realizable setting, we simulate the transient electroplating of a non-ideal binary elec- trolyte in a Hull cell geometry in two and three spatial dimensions. The two-dimensional domain Ω2D is a right trapezoid with vertices (0, 0), (0, 5), (5, 5) and (10, 0). Here, a unit of length corresponds to a physical length of 1mm. We partition the boundary as ∂Ω2D = Γ2D p ∪Γ2D n ∪Γ2D w where Γ2D p is the line segment between (0, 0) and (0, 5) and denotes the positive electrode, Γ2D n is the line segment between (5, 5) and (10, 0) and denotes the negative electrode, and Γ2D w = ∂Ω2D\(Γ2D p ∪Γ2D n ) are insulating walls. The three-dimensional domain is the extrusion of the two-dimensional domain by 5 units in the z-direction, i.e. Ω3D = Ω2D × (0, 5) with Γ3D p = Γ2D p × (0, 5), Γ3D n = Γ2D n × (0, 5) and Γ3D w = ∂Ω3D \(Γ3D p ∪Γ3D n ). In what follows we may omit superscripts ·2D and ·3D The electrolyte is comprised of ethyl-methyl-carbonate (EMC) solvent and lithium hexafluorophosphate (LiPF6) salt. This mixture is used in lithion-ion batteries. In the notation of section 2, there are n = 3 species (EMC, Li+ and PF6 –) with molar masses m = [104.105, 6.935, 144.97]⊤g mol−1 and equivalent charges z = [0, 1, −1]⊤. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 17 We use the salt-charge transformation matrix (recall (2.12)) Z =   1 0 0 0 1 1 0 1 √ 2 −1 √ 2  , which corresponds to neutralizing reactions EMC −⇀ ↽−EMC and Li+ + PF6 – −⇀ ↽−LiPF6. Hence, under the salt-charge transformation, the mole fractions (xν)i, chemical po- tentials (µν)i, fluxes (Nν)i and so on, represent those of EMC for i = 1 and LiPF6 for i = 2. We accordingly write xEMC := (xν)1, µEMC := (µν)1, xLiPF6 := (xν)2, µLiPF6 := (µν)2 and so on for (Nν)i. The material properties are encoded in the functional dependence of η, ζ, ρ, X0 ν and MZ on (T, p, xν). We take an ambient temperature of T = 298.15K. For the viscosities, we let η be a function of xν as reported in [81, Table 2], and ζ = 10−6Pa s. For ρ, X0 ν and MZ we use the functional dependencies from [78, Table 1]7. This leads to ρ depending on xν only, and likewise for the (non-constant) partial molar volumes, placing us in case (ii) of subsection 4.2. The transport matrix MZ also depends on xν only, while X0 ν depends on both xν and p, with the (negligibly small) dependence on p arising due to the non-constant partial molar volumes. For boundary conditions on Γ := ∂Ωwe use (3.11a) with gv∥= 0. For the normal fluxes (3.11b) and (3.11c), we employ nonlinear Butler–Volmer BCs [24, 57]. Note that µLiPF6 can be expressed as a known function of (p, xν) only, by analytically integrating X0 ν and Vν (c.f. (2.18)). The quantity ΦZ −0.5µLiPF6/F then represents the potential of a reference electrode that reacts with the electrolyte through Li −⇀ ↽−Li+ + e– [78]. We assume that the positive and negative electrodes Γe for e ∈{p, n} undergo the same reaction. Letting Ve denote the applied potential on electrode e ∈{p, n}, and i0(xν) the exchange current density, we consider the BCs gJ = −2i0(xν) tanh ( F  Ve −(ΦZ −0.5µLiPF6/F)  RT ) on Γe for e ∈{p, n}, (5.1a) gJ = 0 on Γw, (5.1b) g2 = gJ/(2F) on Γ. (5.1c) Condition (5.1a) is a standard Butler–Volmer BC, while (5.1b) expresses the insulating property of the walls. Since N = Z⊤NZ, condition (5.1c) states that the normal flux of PF6 – is zero on Γ (since the electrodes only react with Li+). We model i0 by i0(xν) = i⊖ 0 (xLiPF6/x⊖ LiPF6)2, to resemble commonly used functional forms [57]. Moreover, we take Vp = 0.1V, Vn = 0V, i⊖ 0 = 104Am−2 and x⊖ LiPF6 = 0.075. Since we are in case (ii) of subsection 4.2 (non-constant partial molar volumes that do not depend on p), we do not expect a well-posed problem if no-flux BCs are imposed on EMC (i.e. g1 = 0 on Γ in (3.11b)). Instead we impose BCs that allow some EMC to “leak” from the positive electrode. As a simple model for this, we assume a quadratic flux profile of EMC on the positive electrode, with the magnitude of the flux an unknown to be solved for. In particular, we take g1 = 0 on Γn ∪Γw, (5.2a) g1 = λleak · qp on Γp, (5.2b) 7The formula for κ in [78, Table 1] has a typo and in the notation of that paper should read κ = (48.93y3/2 e −284.8y5/2 e + 817.8y4 e)2. One can use the formulae from [78] to construct MZ, but note that the right-hand side of [78, eq. 16.24] is incorrect by a factor of −1. 18 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE where qp : Γp →R is the unique quadratic function on Γp that vanishes at its endpoints and is one at its midpoint, and λleak ∈R is a (time-dependent) Lagrange multiplier that determines the amount of leakage. The extra degree of freedom λleak allows us to enforce (3.14) while also fixing the pressure mean, for which we take R Ωph dΩ= 0. As initial conditions we take a spatially uniform composition xLiPF6|t=0 = x⊖ LiPF6 and xEMC|t=0 = 1−2x⊖ LiPF6 (c.f. (2.17)). For compositions in this regime the Reynolds and P´eclet numbers for the problem are roughly Re = 3 × 10−5 and Pe = 9 × 10−2. We take all other unknowns v, p, Nν, J, ΦZ to be zero at t = 0. We numerically integrate (4.1) up to a final time of 172800 seconds (two days) using the RadauIIA implicit Runge–Kutta method [82] with two stages in the two-dimensional case and one stage in the three-dimensional case, and with 200 timesteps. We spatially discretize (4.1) in a non-dimensionalized form with augmentation parameter γ = 10−2. We employ a two-dimensional mesh of 5.7 × 103 triangles with maximum cell diameter h = 0.125, and a three-dimensional mesh of 6.3 × 103 tetrahedra with maximum cell diameter h = 0.5. We expect singularities in the solution at corners of Ω; in the two-dimensional case we use a finer local cell diameter of hc = 0.0125 at the four vertices of Ω2D. We employ the degree k Taylor–Hood pair [13, 75] for (Vh, Ph) and the RTk–DGk−1 [65] pair for (Nh, Xh) with k = 4 in two dimensions and k = 2 in three dimensions. The nonlinear systems at each timestep are solved using Newton’s method with an absolute tolerance on the residual of 10−10 in the Euclidean norm. These systems consist of 1.3×106 unknowns in two dimensions and 3.9×105 unknowns in three dimensions. The greatest number of Newton iterations was taken at the first timestep (10 iterations in two and three dimensions). After the 6th timestep, all Newton solves required at most 3 iterations. Fig. 1. Streamlines of the EMC flux NEMC (in black) and current density J (in white) from the two-dimensional simulation of subsection 5.1. The domain is colored by the magnitude of J. In Figure 1, we plot streamlines of the EMC flux NEMC and current density J at time t = 64800s in two dimensions. The current density expectedly flows from the positive to negative electrode and appears to become singular at the top-right cell corner. The two-dimensional plots also reveal two convective rolls forming in the EMC flux profile. These convective rolls can also be seen in Figure 2, where we plot streamlines of the EMC flux at the final time t = 172800s in three dimensions. Owing to the leakage BCs (5.2), the total number of moles of EMC (i.e. the value of R Ωc1 dΩ) varied by about 0.006% over the simulation time, in both two and three dimensions. We also report maximum L2-errors in the (nondimensionalized) FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 19 mass-average constraint and mole fraction constraint: E1 := max 0≤t≤Tfinal vh −g ψ⊤ Z NZ,h L2(Ω)d , E2 := max 0≤t≤Tfinal 1 −ν⊤ Z xν,h L2(Ω) . We obtained values of (E1, E2) = (7.0 × 10−5, 1.1 × 10−9) in two dimensions and (E1, E2) = (5.8 × 10−3, 3.0 × 10−6) in three dimensions. Fig. 2. EMC flux NEMC streamlines (colored by its magnitude) from the three-dimensional simulation of subsection 5.1. The domain walls are colored by the salt-charge potential ΦZ. 5.2. Microfluidic rotating disk electrode. We again consider the binary EMC:LiPF6 electrolyte from subsection 5.1, but in a setting that showcases the flex- ibility of our method in applying different BCs. We employ the same salt-charge transformation and material parameters η, ζ, ρ, X0 ν and MZ at ambient temper- ature T = 298.15K as in subsection 5.1. We consider a three-dimensional domain Ωrepresenting a microfluidic box containing a rotating disk electrode. In partic- ular we take Ω= Ωbox \ Ωdisk where Ωbox = (−5, 5)3 and Ωdisk = {(x, y, z) ∈ R3 : x2 + y2 ≤1, −0.05 ≤z ≤0.05}. Here, a unit of length corresponds to a physical length of 12.5µm. We decompose Γ := ∂Ωas Γ = Γp ∪Γn ∪Γw where Γp = {(x, y, z) ∈∂Ωbox : z = ±5} denotes the top and bottom walls of the box, Γw = ∂Ωbox \ Γp the four side walls of the box, and Γn = ∂Ωdisk the disk bound- ary. The surfaces Γp, Γn, Γw represent, respectively, the positive electrode, negative electrode, and insulating walls. We solve the steady problem (3.10) with the following BCs. We first impose that the disk rotates with fixed angular frequency ˙θ ∈R; this manifests in our model through the boundary condition (3.11a) on v with gv∥given by gv∥= ˙θ(−y, x, 0) for (x, y, z) ∈∂Ωdisk = Γn, (5.3a) gv∥= 0 on Γ \ ∂Ωdisk. (5.3b) We choose ˙θ = 28.79s−1, which leads to Reynolds and P´eclet numbers of roughly Re = 3 × 10−3 and Pe = 1 × 101. For the EMC flux Nh,EMC := (Nν,h)1 we impose 20 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE a zero normal flux condition (3.11b) with g1 = 0 on Γ. For the current density flux, instead of enforcing (3.11c) for some function gJ, we enforce Jh·nΓ = 2F(Nh,LiPF6·nΓ) on Γ, where Nh,LiPF6 := (Nν,h)2 (note that we implement this as a strongly enforced BC on Jh · nΓ and not Nh,LiPF6 · nΓ). As in subsection 5.1, this BC enforces that the normal flux of PF6 – is zero on Γ. For the LiPF6 flux we enforce a zero normal flux condition on the insulating walls Nh,LiPF6 · nΓ = 0 on Γw. However, instead of prescribing the value of NLiPF6 · nΓ on the electrodes Γp ∪Γn, we weakly enforce the value of xLiPF6 in these regions through (5.4) xh,LiPF6 = x⊖,e LiPF6 on Γe for e ∈{p, n}, where x⊖,p LiPF6 = 0.082 and x⊖,n LiPF6 = 0.068. We weakly enforce (5.4) by modifying (3.10b) through the addition of appropriate boundary terms, and by enlarging the space of test functions (Wh)2 to those with zero normal trace on Γw only. Specifically, instead of (3.10b), we consider: xν,h ΦZ,h  , ∇·  RT g X0νWh F ∥z∥K⊤ h  ! Ω − ph, ∇· ( g ψ⊤ Z Wh K⊤ h  −g V ⊤ ν Wh )! Ω + γ f ψZvh, Wh K⊤ h  ! Ω = g M γ ZNZ,h, Wh K⊤ h  ! Ω + RT · x⊖,p LiPF6 D ^ (X0ν)22, (Wh)2 · nΓ E Γp + RT · x⊖,n LiPF6 D ^ (X0ν)22, (Wh)2 · nΓ E Γn ∀ Wh K⊤ h  ∈(N0h × Nh × N0h) with (Wh)2 · nΓ = 0 on Γw. (5.5) With the present material data, X0 ν is a diagonal matrix. Using this property, one verifies by taking (Wh)2 to be non-zero in (5.5), that for e ∈{p, n} the following is weakly enforced: (5.6) RT · xh,LiPF6 · ^ (X0ν)22 −ph · n ( ] ψ⊤ Z )2 −^ (V ⊤ ν )2 o | {z } :=Ip = RT · x⊖,e LiPF6 · ^ (X0ν)22 on Γe. Dimensional analysis reveals that the pressure term Ip in (5.6) is smaller than the other terms by a factor of 10−8. Neglecting Ip in (5.6) and dividing by RT · ^ (X0ν)22 then leads to (5.4), as desired. We spatially discretize (3.10) with aforementioned BCs and modification (5.5) in a non-dimensionalized form with augmentation parameter γ = 10−2. As constraints we impose (3.14) along with R Ωph dΩ= 0. We employ a curved three-dimensional mesh of degree 4 with 1.7 × 104 tetrahedra and maximum local cell diameter of h = 0.1 on the disk boundary. We employ the degree k = 4 Taylor–Hood pair [13, 75] for (Vh, Ph) and the RTk–DGk−1 [65] pair for (Nh, Xh). The nonlinear system consists of 1.1×106 unknowns and was solved using Newton’s method with an absolute tolerance on the residual of 10−10 in the Euclidean norm. We first applied Newton’s method on a coarse discretization (without curving the mesh and using order k = 2 finite element spaces), where as an initial guess we set xLiPF6 = (x⊖,p LiPF6 +x⊖,n LiPF6)/2 and xEMC = 1−2xLiPF6 (c.f. (2.17)) and we set all other unknowns to be zero. Convergence was reached in 6 iterations. We then used the coarse solution as an initial guess to Newton’s method for the fine discretization (i.e. with a curved mesh and degree k = 4 finite element spaces, as above), for which convergence was reached in 3 iterations. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 21 Fig. 3. Streamlines of the LiPF6 flux NLiPF6 (colored by its magnitude) and current density J (colored in a transparent yellow) above Ωdisk for the numerical experiment of subsection 5.2. The disk is also colored by the magnitude of NLiPF6. In Figure 3 we plot streamlines of the LiPF6 flux and current density J. The rotation of the disk induces a swirling flow of LiPF6, while the current density flows smoothly from the positive electrode to the negative disk electrode. These observa- tions qualitatively reflect the expected behaviour of the solution as induced by our choice of BCs. Moreover, the L2-errors in the (nondimensionalized) mass-average constraint, mole fraction constraint, and weakly enforced BCs (5.4) are vh −g ψ⊤ Z NZ,h L2(Ω)d = 1.3 × 10−3, 1 −ν⊤ Z xν,h L2(Ω) = 3.7 × 10−7, xh,LiPF6 −x⊖,e LiPF6 L2(Γp∪Γn) = 5.0 × 10−5. In particular, the small error in the weakly enforced BC (5.4) affirms the ability of our algorithm to handle a combination of flux, Dirichlet, and tangential velocity BCs. 5.3. Cosolvent imbalances. An appealing property of our numerical method is that it accounts for the multicomponent nature of the electrolyte solvent (i.e. the neutral species in which the salts are dissolved), the effects of which may be important for battery modelling [44]. To demonstrate the ability of our methods in studying these effects, we consider an electrolyte comprised of LiPF6 salt dissolved in two solvents, namely ethyl-methyl-carbonate (EMC) and ethylene-carbonate (EC). The resulting mixture has n = 4 species (EMC, EC, Li+ and PF6 –) with molar masses m = [104.105, 88.062, 6.935, 144.97]⊤g mol−1 and equivalent charges z = [0, 0, 1, −1]⊤. We use the salt-charge transformation matrix (recall (2.12)) Z =   1 0 0 0 0 1 0 0 0 0 1 1 0 0 1 √ 2 −1 √ 2  , 22 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE which corresponds to neutralizing reactions EMC −⇀ ↽−EMC, EC −⇀ ↽−EC and Li+ + PF6 – −⇀ ↽−LiPF6. Hence, under the salt-charge transformation, the mole fractions (xν)i, chemical potentials (µν)i, fluxes (Nν)i and so on, represent those of EMC for i = 1, EC for i = 2 and LiPF6 for i = 3. Fig. 4. Streamlines of the EMC flux NEMC (in black) and EC flux NEC (in white) from the simulation of subsection 5.3. The domain is colored by the shear viscosity η. For the material properties we take an ambient temperature of T = 298.15K. We let ρ be a function of xν as reported in [68]. An expression for η as a function of xν was obtained by fitting viscosity data reported in [68] to a degree 7 bivariate polynomial in the variables p xEMC/(xEMC + xEC), p xEC/(xEMC + xEC). For the bulk viscosity we took ζ = 10−6Pa s. The thermodynamic factor matrix X0 ν was obtained by assuming an ideal mixture. Finally, we computed MZ by assuming constant Stefan–Maxwell diffusivities, the values of which were obtained from the supplementary information of [63] for 1mol L−1 of LiPF6 in vol:vol 7:3 EMC:EC solvent. We take Ωto be the same two-dimensional Hull cell domain as in subsection 5.1. For BCs we use (3.11a) with gv∥= 0. For the normal flux BCs (3.11b) and (3.11c) we use linearized Butler–Volmer BCs (c.f. (3.3)) gJ = −i0 F RT (Ve −ΦZ) on Γe for e ∈{p, n}, (5.7a) gJ = 0 on Γw, (5.7b) g3 = gJ/(2F) on Γ, (5.7c) with Vp = 0.1V, Vn = 0V and i0 = 104Am−2. In (3.11b) we further take g2 = 0 on Γ, i.e. no normal flux of EC, while for EMC we impose the leak BCs (5.2). We consider the transient problem (4.1). As initial conditions we take a spatially uniform composition xLiPF6|t=0 = 0.077, xEMC|t=0 = 0.509 and xEC|t=0 = 1−0.509− (2 · 0.077) = 0.337 (c.f. (2.17)). For compositions in this regime the Reynolds and P´eclet numbers for the problem are roughly Re = 3 × 10−5 and Pe = 9 × 10−1. We take all other unknowns v, p, Nν, J, ΦZ to be zero at t = 0. As constraints we impose (3.14) along with R Ωph dΩ= 0. We numerically solve (4.1) using the same mesh, finite element spaces (Vh, Ph) and (Nh, Xh), and time-stepping scheme as in the two-dimensional case of subsection 5.1. The nonlinear systems at each timestep consist of 1.7 × 106 unknowns and are solved using Newton’s method with an absolute tolerance on the residual of 10−10 in the FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 23 Euclidean norm. The first three timesteps required 3 Newton iterations, and all subsequent timesteps required at most 2 iterations. Note that the initial Newton solves in subsection 5.1 required more iterations due to the nonlinear BCs (5.1a). In Figure 4 we plot streamlines of the EMC and EC fluxes NEMC, NEC at time t = 64800. The flux profiles are emphatically different, underscoring the multicomponent nature of the solvent. Moreover, in Figure 4 we color the domain by the shear viscosity η = η(xν), which changes by an order of magnitude across the cell due to the spatially varying EMC:EC ratio. In particular, at time t = 64800 the mole ratio xEMC/xEC ≈ 1.4 near the positive (left) electrode and xEMC/xEC ≈1.6 near the negative (right) electrode. It is the greater EMC content near the negative electrode that results in a substantially lower viscosity in this region. 6. Conclusions. We have presented a broad family of finite element algorithms for numerically solving the electroneutral NSOSM equations. To the best of our knowl- edge, this is the first paper in the finite element literature on electroneutral NSOSM flow. The flexibility of our algorithms in handling transient and steady flow under different boundary conditions was substantiated in our numerical experiments. Our numerical experiment involving EMC-EC-LiPF6 flow also demonstrated the scientific potential of our algorithms in studying how the multicomponent nature of electrolytes may impact, for example, locally varying material properties of the mixture. Appendix A. A problematic model. Consider a binary mixture on a fixed domain Ωwith molar concentrations c1, c2, total concentration cT = c1+c2 and molar fractions xi := ci/cT with x1 + x2 = 1. Let x := x1. We assume a volumetric EOS (A.1) cT = A + Bx, with A and B non-zero constants satisfying A > 0 and A+B > 0 so that cT > 0. This seemingly benign EOS assumes the total concentration is linear in composition; an experimentalist might reasonably make such an approximation by fitting experimental data. One can verify that the (non-constant) partial molar volumes are V1 = A + B(2x −1) (A + Bx)2 , V2 = A + 2Bx (A + Bx)2 and 1 cT = x1V1 + x2V2. Assume that the total number of moles of each species is conserved, so that d dt Z Ω c1 dΩ= 0, (A.2a) d dt Z Ω c2 dΩ= 0. (A.2b) Since cT = c1 + c2, we can add (A.2a) and (A.2b) and use (A.1) to deduce that (A.3) d dt Z Ω x dΩ= 0. Moreover, since c1 = xcT we can use (A.1), (A.3), and (A.2a) to deduce (A.4) d dt Z Ω x2 dΩ= 0. Since Ωdoes not depend on t, it follows from (A.3) and (A.4) that if x is initially spatially uniform, then x must be constant over all space and time. To see this, let ¯x 24 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE denote the spatial mean of x, i.e. ¯x := R Ωx dΩ R Ω1 dΩ. Then (A.3) implies that ¯x is constant over time. This fact together with (A.4) yields d dt Z Ω (x −¯x)2 dΩ= d dt Z Ω x2 dΩ+ Z Ω −2x¯x + ¯x2 dΩ ! = d dt Z Ω x2 dΩ− Z Ω ¯x2 dΩ ! = 0. Thus R Ω(x −¯x)2 dΩ= C for some constant C. Since x = ¯x initially we deduce that C = 0. Therefore x = ¯x a.e. in Ωfor all time, and ¯x is a constant. This completes the proof that x is constant over all space and time (at least up to sets of measure zero). To conclude, any model which assumes EOS (A.1) and conservation properties (A.2), with Ωindependent of t, is seemingly either (i) ill-posed or (ii) well-posed, but incapable of making physically meaningful predictions, since a solution that initially has a spatially uniform composition retains that uniform composition over all time. REFERENCES [1] P. R. Amestoy, I. S. Duff, J.-Y. L’Excellent, and J. Koster, A fully asynchronous multi- frontal solver using distributed dynamic scheduling, SIAM J. Matrix Anal. Appl., 23 (2001), pp. 15–41. [2] F. R. Aznaran, P. E. Farrell, C. W. Monroe, and A. J. Van-Brunt, Finite element meth- ods for multicomponent convection-diffusion, IMA J. Numer. Anal., (2024), p. drae001. [3] A. Baier-Reinio and P. E. Farrell, High-order finite element methods for three-dimensional multicomponent convection-diffusion, arXiv preprint arXiv:2408.17390, (2024). [4] A. Baier-Reinio, P. E. Farrell, and C. W. Monroe, Software used in ‘Finite element methods for electroneutral multicomponent electrolyte flows’, 2025. [5] K. Balakrishnan, A. L. Garcia, A. Donev, and J. B. Bell, Fluctuating hydrodynamics of multispecies nonreactive mixtures, Phys. Rev. E, 89 (2014), p. 013017. [6] A. J. Bard and L. R. Faulkner, Electrochemical Methods: Fundamentals and Applications, John Wiley & Sons, New York, 2nd ed., 2001. [7] P. N. Bartlett, Bioelectrochemistry: Fundamentals, Experimental Techniques and Applica- tions, John Wiley & Sons, Chichester, 2008. [8] J. Betteridge, P. E. Farrell, M. Hochsteger, C. Lackner, J. Sch¨oberl, S. Zampini, and U. Zerbinati, ngsPETSc: A coupling between NETGEN/NGSolve and PETSc, J. Open Source Softw., 9 (2024), p. 7359. [9] A. K. Bhattacharjee, K. Balakrishnan, A. L. Garcia, J. B. Bell, and A. Donev, Fluc- tuating hydrodynamics of multi-species reactive mixtures, J. Chem. Phys., 142 (2015), p. 224107. [10] R. B. Bird, W. E. Stewart, and E. N. Lightfoot, Transport Phenomena, John Wiley & Sons, 2nd ed., 2002. [11] A. M. Bizeray, D. A. Howey, and C. W. Monroe, Resolving a discrepancy in diffusion potentials, with a case study for Li-ion batteries, J. Electrochem. Soc., 163 (2016), p. E223. [12] S. W. Boettcher, S. Z. Oener, M. C. Lonergan, Y. Surendranath, S. Ardo, C. Brozek, and P. A. Kempler, Potentially confusing: potentials in electrochemistry, ACS Energy Lett., 6 (2020), pp. 261–266. [13] D. Boffi, Three-dimensional finite element methods for the Stokes problem, SIAM J. Numer. Anal., 34 (1997), pp. 664–670. [14] L. Bortels, J. Deconinck, and B. Van Den Bossche, The multi-dimensional upwinding method as a new simulation tool for the analysis of multi-ion electrolytes controlled by FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 25 diffusion, convection and migration. Part 1. Steady state analysis of a parallel plane flow channel, J. Electroanal. Chem., 404 (1996), pp. 15–26. [15] M. Braukhoff, I. Perugia, and P. Stocker, An entropy structure preserving space-time formulation for cross-diffusion systems: analysis and Galerkin discretization, SIAM J. Numer. Anal., 60 (2022), pp. 364–395. [16] F. Brezzi, J. Douglas, and L. D. Marini, Two families of mixed finite elements for second order elliptic problems, Numer. Math., 47 (1985), pp. 217–235. [17] F. Brosa Planella, W. Ai, A. M. Boyce, A. Ghosh, I. Korotkin, S. Sahu, V. Sulzer, R. Timms, T. G. Tranter, M. Zyskin, et al., A continuum of physics-based lithium-ion battery models reviewed, Prog. Energy, 4 (2022), p. 042003. [18] A. Brunk, A. J¨ungel, and M. Luk´aˇcov´a-Medvid’ov´a, A structure-preserving numerical method for quasi-incompressible Navier–Stokes–Maxwell–Stefan systems, arXiv preprint arXiv:2504.11892, (2025). [19] E. Burman, A. Ern, and V. Giovangigli, Bunsen flame simulation by finite elements on adaptively refined, unstructured triangulations, Combust. Theory Model., 8 (2003), p. 65. [20] B. Carnes and G. F. Carey, Local boundary value problems for the error in FE approximation of non-linear diffusion systems, Internat. J. Numer. Methods Engrg., 73 (2008), pp. 665– 684. [21] C. I. Correa, G. N. Gatica, and R. Ruiz-Baier, New mixed finite element methods for the coupled Stokes and Poisson–Nernst–Planck equations in Banach spaces, ESAIM Math. Model. Numer. Anal., 57 (2023), pp. 1511–1551. [22] S. R. De Groot and P. Mazur, Non-Equilibrium Thermodynamics, Dover Publications, Inc., New York, 1984. [23] P. Deuflhard, Newton Methods for Nonlinear Problems, Springer Science & Business Media, Berlin, Heidelberg, 2011. [24] E. J. Dickinson and A. J. Wain, The Butler-Volmer equation in electrochemical theory: Ori- gins, value, and practical application, J. Electroanal. Chem., 872 (2020), p. 114145. [25] A. Donev, A. Nonaka, A. K. Bhattacharjee, A. L. Garcia, and J. B. Bell, Low Mach number fluctuating hydrodynamics of multispecies liquid mixtures, Phys. Fluids, 27 (2015), p. 037103. [26] A. Donev, A. Nonaka, Y. Sun, T. Fai, A. Garcia, and J. Bell, Low Mach number fluctu- ating hydrodynamics of diffusively mixing fluids, Commun. Appl. Math. Comput. Sci., 9 (2014), pp. 47–105. [27] A. Donev, A. J. Nonaka, C. Kim, A. L. Garcia, and J. B. Bell, Fluctuating hydrodynamics of electrolytes at electroneutral scales, Phys. Rev. Fluids, 4 (2019), p. 043701. [28] W. Dreyer, C. Guhlke, and R. M¨uller, Overcoming the shortcomings of the Nernst–Planck model, Phys. Chem. Chem. Phys., 15 (2013), pp. 7075–7086. [29] P.-´E. Druet, Global–in–time existence for liquid mixtures subject to a generalised incompress- ibility constraint, J. Math. Anal. Appl., 499 (2021), p. 125059. [30] A. J. Ellingsrud, P. Benedusi, and M. Kuchta, A splitting, discontinuous Galerkin solver for the cell-by-cell electroneutral Nernst–Planck framework, SIAM J. Sci. Comput., 47 (2025), pp. B477–B504. [31] A. Ern and V. Giovangigli, Multicomponent Transport Algorithms, vol. 24, Springer Berlin, Heidelberg, 1994. [32] A. Ern and V. Giovangigli, Thermal diffusion effects in hydrogen-air and methane-air flames, Combust. Theory Model., 2 (1998), p. 349. [33] A. Ern and J.-L. Guermond, Finite element quasi-interpolation and best approximation, ESAIM Math. Model. Numer. Anal., 51 (2017), pp. 1367–1385. [34] A. Ern and J.-L. Guermond, Finite Elements I: Approximation and Interpolation, Springer, Cham, Switzerland, 2021. [35] A. Ern and J.-L. Guermond, Finite Elements II: Galerkin Approximation, Elliptic and Mixed PDEs, Springer, Cham, Switzerland, 2021. [36] P. E. Farrell, R. C. Kirby, and J. Marchena-Menendez, Irksome: Automating Runge– Kutta time-stepping for finite element methods, ACM Trans. Math. Softw., 47 (2021), pp. 1–26. [37] E. Feireisl, D. Hilhorst, H. Petzeltov´a, and P. Tak´aˇc, Mathematical analysis of variable density flows in porous media, J. Evol. Equ., 16 (2016), pp. 1–19. [38] A. Fick, ¨Uber Diffusion, Annalen der Physik, 170 (1855), pp. 59–86. [39] V. Giovangigli, Multicomponent Flow Modeling, Birkh¨auser, Boston, 1999. [40] D. W. Green and R. H. Perry, Perry’s Chemical Engineers’ Handbook, McGraw Hill Pro- fessional, 8th ed., 2007. [41] E. A. Guggenheim, Thermodynamics: An Advanced Treatment for Chemists and Physicists, 26 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE North-Holland Books, Amsterdam, 5th ed., 1967. [42] D. A. Ham and 26 Others, Firedrake User Manual, 1st ed., 2023, https://doi.org/10.25561/ 104839. [43] E. Helfand, On inversion of the linear laws of irreversible thermodynamics, J. Chem. Phys., 33 (1960), pp. 319–322. [44] T. Jung, A. A. Wang, and C. W. Monroe, Overpotential from cosolvent imbalance in battery electrolytes: LiPF6 in EMC: EC, ACS omega, 8 (2023), pp. 21133–21144. [45] A. J¨ungel, The boundedness-by-entropy method for cross-diffusion systems, Nonlinearity, 28 (2015), p. 1963. [46] A. J¨ungel and O. Leingang, Convergence of an implicit Euler Galerkin scheme for Poisson– Maxwell–Stefan systems, Adv. Comput. Math., 45 (2019), pp. 1469–1498. [47] R. C. Kirby and S. P. MacLachlan, Extending Irksome: improvements in automated Runge– Kutta time stepping for finite element methods, ACM Trans. Math. Softw., (2024). [48] G. Kraaijeveld, V. Sumberova, S. Kuindersma, and H. Wesselingh, Modelling electrodial- ysis using the Maxwell–Stefan description, Chem. Eng. J., 57 (1995), pp. 163–176. [49] G. Kraaijeveld and J. A. Wesselingh, Negative Maxwell–Stefan diffusion coefficients, Ind. Eng. Chem. Res., 32 (1993), pp. 738–742. [50] R. Krishna, Uphill diffusion in multicomponent mixtures, Chem. Soc. Rev., 44 (2015), pp. 2812–2836. [51] R. Krishna, Diffusing uphill with James Clerk Maxwell and Josef Stefan, Chem. Eng. Sci., 195 (2019), pp. 851–880. [52] R. Krishna and J. A. Wesselingh, The Maxwell–Stefan approach to mass transfer, Chem. Eng. Sci., 52 (1997), pp. 861–911. [53] A. Longo, M. Barsanti, A. Cassioli, and P. Papale, A finite element Galerkin/least-squares method for computation of multicomponent compressible–incompressible flows, Comput. & Fluids, 67 (2012), pp. 57–71. [54] J. C. Maxwell, On the dynamical theory of gases, Phil. Trans. R. Soc., (1866), pp. 49–88. [55] M. McLeod and Y. Bourgault, Mixed finite element methods for addressing multi-species dif- fusion using the Maxwell–Stefan equations, Comput. Meth. Appl. Mech. Eng., 279 (2014), pp. 515–535. [56] J.-C. N´ed´elec, A new family of mixed finite elements in R3, Numer. Math., 50 (1986), pp. 57– 81. [57] J. Newman and N. P. Balsara, Electrochemical Systems, John Wiley & Sons, Hoboken, NJ, 4th ed., 2021. [58] J. Newman, D. Bennion, and C. W. Tobias, Mass transfer in concentrated binary electrolytes, Ber. Bunsenges. Phys. Chem., 69 (1965), pp. 608–612. [59] L. Onsager, Reciprocal relations in irreversible processes. I., Phys. Rev., 37 (1931), pp. 405– 426. [60] L. Onsager, Reciprocal relations in irreversible processes. II., Phys. Rev., 38 (1931), pp. 2265– 2279. [61] J.-P. P´eraud, A. Nonaka, A. Chaudhri, J. B. Bell, A. Donev, and A. L. Garcia, Low Mach number fluctuating hydrodynamics for electrolytes, Phys. Rev. Fluids, 1 (2016), p. 074103. [62] B. A. Pethica, Are electrostatic potentials between regions of different chemical composition measurable? the Gibbs–Guggenheim principle reconsidered, extended and its consequences revisited, Phys. Chem. Chem. Phys., 9 (2007), pp. 6253–6262. [63] C. Phelan, J. Swallow, and R. Weatherup, Applying the Maxwell-Stefan diffusion frame- work to multicomponent battery electrolytes, ChemRxiv preprint, (2024). [64] A. Prohl and M. Schmuck, Convergent finite element discretizationsof the Navier–Stokes– Nernst–Planck–Poisson system, ESAIM Math. Model. Numer. Anal., 44 (2010), pp. 531– 571. [65] P.-A. Raviart and J.-M. Thomas, A mixed finite element method for 2-nd order elliptic problems, in Mathematical Aspects of Finite Element Methods, vol. 606 of Lecture Notes in Math., Springer, Berlin, 1977, pp. 292–315. [66] G. W. Richardson, J. M. Foster, R. Ranom, C. P. Please, and A. M. Ramos, Charge transport modelling of Lithium-ion batteries, European J. Appl. Math., 33 (2022), pp. 983– 1031. [67] T. Roy, J. Andrej, and V. A. Beck, A scalable DG solver for the electroneutral Nernst– Planck equations, J. Comput. Phys., 475 (2023), p. 111859. [68] R. Rungta, P. Slowikowski, A. Gardner, D. Persa, and C. W. Monroe, Quantifying volumetric expansion and bulk moduli of carbonate cosolvents with lithium salts, in prepa- ration. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 27 [69] J. Sch¨oberl, NETGEN: An advancing front 2D/3D-mesh generator based on abstract rules, Computing and Visualization in Science, 1 (1997), pp. 41–52. [70] L. R. Scott and M. Vogelius, Norm estimates for a maximal right inverse of the divergence operator in spaces of piecewise polynomials, ESAIM Math. Model. Numer. Anal., 19 (1985), pp. 111–143. [71] R. Sijabat, M. De Groot, S. Moshtarikhah, and J. Van Der Schaaf, Maxwell–Stefan model of multicomponent ion transport inside a monolayer Nafion membrane for intensi- fied chlor-alkali electrolysis, J. Appl. Electrochem., 49 (2019), pp. 353–368. [72] I. Srivastava, D. R. Ladiges, A. J. Nonaka, A. L. Garcia, and J. B. Bell, Staggered scheme for the compressible fluctuating hydrodynamics of multispecies fluid mixtures, Phys. Rev. E, 107 (2023), p. 015305. [73] J. Stefan, ¨Uber das Gleichgewicht und die Bewegung, insbesondere die Diffusion von Gasge- mengen, Sitzber. Akad. Wiss. Wien., 63 (1871), pp. 63–124. [74] Z. Sun, J. A. Carrillo, and C.-W. Shu, An entropy stable high-order discontinuous Galerkin method for cross-diffusion gradient flow systems, Kinet. Relat. Models, 12 (2019), pp. 885– 908. [75] C. Taylor and P. Hood, A numerical solution of the Navier–Stokes equations using the finite element technique, Comput. & Fluids, 1 (1973), pp. 73–100. [76] A. Van-Brunt, P. E. Farrell, and C. W. Monroe, Augmented saddle-point formulation of the steady-state Stefan–Maxwell diffusion problem, IMA J. Numer. Anal., 42 (2022), pp. 3272–3305. [77] A. Van-Brunt, P. E. Farrell, and C. W. Monroe, Consolidated theory of fluid thermodif- fusion, AlChE J., 68 (2022), p. e17599. [78] A. Van-Brunt, P. E. Farrell, and C. W. Monroe, Structural electroneutrality in Onsager– Stefan–Maxwell transport with charged species, Electrochim. Acta, 441 (2023), p. 141769. [79] V. K. Vanag and I. R. Epstein, Cross-diffusion and pattern formation in reaction–diffusion systems, Phys. Chem. Chem. Phys., 11 (2009), pp. 897–912. [80] A. A. Wang, A. B. Gunnarsd´ottir, J. Fawdon, M. Pasta, C. P. Grey, and C. W. Monroe, Potentiometric MRI of a superconcentrated lithium electrolyte: testing the irreversible thermodynamics approach, ACS Energy Lett., 6 (2021), pp. 3086–3095. [81] A. A. Wang, T. Hou, M. Karanjavala, and C. W. Monroe, Shifting-reference concentra- tion cells to refine composition-dependent transport characterization of binary lithium-ion electrolytes, Electrochim. Acta, 358 (2020), p. 136688. [82] G. Wanner and E. Hairer, Solving Ordinary Differential Equations II, vol. 375, Springer Berlin Heidelberg, 1996. [83] A. Z. Weber, R. L. Borup, R. M. Darling, P. K. Das, T. J. Dursch, W. Gu, D. Harvey, A. Kusoglu, S. Litster, M. M. Mench, et al., A critical review of modeling transport phenomena in polymer-electrolyte fuel cells, J. Electrochem. Soc., 161 (2014), p. F1254. [84] A. Z. Weber and C. Delacourt, Mathematical modelling of cation contamination in a proton- exchange membrane, Fuel Cells, 8 (2008), pp. 459–465. [85] J. Wesselingh and R. Krishna, Mass Transfer in Multicomponent Mixtures, Delft University Press, Delft, Netherlands, 2000. [86] D. Xie and B. Lu, An effective finite element iterative solver for a Poisson–Nernst–Planck ion channel model with periodic boundary conditions, SIAM J. Sci. Comput., 42 (2020), pp. B1490–B1516.
FINITE ELEMENT METHODS FOR ELECTRONEUTRAL MULTICOMPONENT ELECTROLYTE FLOWS∗ AARON BAIER-REINIO†, PATRICK E. FARRELL‡, AND CHARLES W. MONROE§ Abstract. We present a broad family of high-order finite element algorithms for simulating the flow of electroneutral electrolytes. The governing partial differential equations that we solve are the electroneutral Navier-Stokes-Onsager-Stefan-Maxwell (NSOSM) equations, which model momentum transport, multicomponent diffusion and electrical effects within the electrolyte. Our algorithms can be applied in the steady and transient settings, in two and three spatial dimensions, and under a variety of boundary conditions. Moreover, we allow for the material parameters (e.g. viscosity, diffusivities, thermodynamic factors and density) to be solution-dependent and thermodynamically non-ideal. The flexibility of our approach requires us to address subtleties that arise in the governing equations due to the interplay between boundary conditions and the equation of state. We demonstrate the algorithms in various physical configurations, including (i) electrolyte flow around a microfluidic rotating disk electrode and (ii) the flow in a Hull cell of a cosolvent electrolyte mixture used in lithium-ion batteries. Key words. Electrolytes, electroneutrality, multicomponent flows, cross-diffusion, StefanMaxwell, Navier-Stokes, finite element methods. 1. Introduction. We address the numerical simulation of liquid electrolytes, which are fluids that transport electrical charge by the motion of ions. Electrolytes are multicomponent fluids (or mixtures), which means that they consist of multiple distinct chemical species (or components) in a common thermodynamic phase [39, 85]. While a general mixture may consist entirely of uncharged species, in an electrolyte at least two of the species must be ions of opposite charge. For example, dissolving table salt in water yields an electrolyte with three species: H2O, Na+ and Cl-. Prominent applications of electrolytic flows include energy storage (e.g. batteries and fuel cells), chemical processes (e.g. electrodialysis and electroplating) and biological systems (e.g. biological membranes) [7, 57, 85]. We assume that the electrolyte is electroneutral, which means that its local charge density is everywhere zero1. This is a common and accurate assumption in most electrochemical systems at length scales much larger than the Debye length (which is typically on the order of nanometers) [57, 85]. For example, in lithium-ion batteries, electroneutrality holds throughout the bulk of the electrolyte and is only violated in nanometer-wide double layers at the electrolyte-electrode interfaces. Consequentially, physics-based microscale lithium-ion battery models typically use differential equations that assume electroneutrality throughout the domain and incorporate the double layer through boundary conditions [17, 66]. In this work, electroneutrality allows us to follow [78] and employ a change of basis that transforms the governing equations into a form that is structurally similar to that of an uncharged mixture, ∗ Funding: ABR was supported by a Clarendon scholarship from the . PEF was supported by EPSRC grants EP/R029423/1 and EP/W026163/1, and by the Donatio Universitatis Carolinae Chair "Mathematical modelling of multicomponent systems". †Mathematical Institute, 2 6GG, UK (aaron.baier-reinio@maths. ox.ac.uk). ‡Mathematical Institute, 2 6GG, UK and Mathematical Institute, Faculty of Mathematics and Physics, Charles University, Czechia ( ). § 1 3PJ, UK and The Faraday Institution, Harwell Campus, Didcot, OX11 ORA, UK ( ). 1Note that electroneutrality does not prevent the transport of charge within the mixture. 1 21 Oct 2025 2 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE simplifying both the model and numerics. A constitutive relation for the species mass fluxes must be chosen to model mass transport in multicomponent flows. In electrolytic flows, the Nernst-Planck model is a popular such choice, and it accounts for mass transport by convection, Fickian diffusion and electromigration [6, 28]. Several finite element methods exist for the electroneutral Nernst-Planck model [14, 30, 67], and literature for its nonelectroneutral analogue, the Poisson-Nernst-Planck model, is especially abundant (see e.g. [21, 64, 86]). However, because the Nernst-Planck model uses Fick's law of diffusion [38], it has the drawback of being unable to capture cross-diffusion, a physical phenomenon that arises when different species exert diffusional forces on each other [45, 74, 79]. For dilute mixtures, i.e. mixtures with only one species present in non-trace amounts, it is usually appropriate to neglect cross-diffusion, and doing so greatly simplifies the model. However, many practical problems involve non-dilute mixtures for which Fick's law is inadequate. For example, in lithium-ion battery electrolyte modelling, cation-anion cross-diffusion must be accounted for to match experimentally observed conductivity, salt diffusivity and transference numbers [11]. For detailed discussions on the limitations of Fick's law, we refer to [50, 51, 52] for generic mixtures and [57, Chapters 11-12] for electrolytes. Drawbacks of the Nernst-Plack model from a thermodynamical perspective are also given in [28]. To remedy these limitations of the Nernst-Planck model, we treat mass transport using the Onsager-Stefan-Maxwell2 (OSM) equations [10, 52, 85]. These equations are based on irreversible thermodynamics [22, 59, 60] and generalize the StefanMaxwell equations, which model cross-diffusion in ideal gases [54, 73]. In electrochemistry, the OSM equations were popularized by Newman [57, 58] for modelling electrolytes, and the resulting framework is sometimes known as concentrated solution theory. The OSM equations provide a thermodynamically rigorous model for mass transport by convection, cross-diffusion and electromigration. The equations are also compatible with electroneutrality [78] and anisothermality [77], although we do not treat the latter here. Moreover, the OSM equations can handle thermodynamic non-idealities (arising in mixtures with a nonzero excess Gibbs free energy [40]), the effects of which are important in the non-dilute regime [52]. In settings where momentum transport is important (e.g. fuel cells, electrodialysis), it is necessary to couple the OSM equations to momentum conservation laws such as the Stokes or Navier-Stokes equations; we call the coupled equations the SOSM or NSOSM equations. Due to its flexibility in treating complicated transport phenomena and nonideal thermodynamics, the OSM framework has received much attention in electrochemistry as a tool for modelling electrolytic flows [57]. For example, the framework has been applied to lithium-ion batteries [66], fuel cells [83, 84], electrodialysis [48] and electrolysis [71]. Nonetheless, the OSM equations have received limited attention in the scientific computing literature, especially so for the coupling of OSM to Stokes or Navier-Stokes. Most numerics papers on the OSM equations assume a thermodynamically ideal gaseous mixture, such as the finite element methods in [15, 20, 46, 55, 76]. For numerics papers on the SOSM or NSOSM equations, we are aware of [32] for a finite difference scheme, [5, 9, 26, 25, 27, 61, 72] for a series of Cartesian-grid finite volume schemes under various physical assumptions in the setting of fluctuating hydrodynamics, and [2, 3, 18, 19, 53] for finite element schemes. Among these, only [3] considers spatially high-order methods, and only [27, 61] consider charged species 2Nomenclature varies depending on the source; sometimes the name Maxwell-Stefan or even generalized Maxwell-Stefan is used instead. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 3 (although [27, 61] make the Boussinesq approximation to simplify the numerics). We are not aware of existing finite element algorithms for the electroneutral NSOSM equations, a gap the present work addresses. A major novelty of this work lies in our treatment of the electroneutrality condition, which is mathematically an algebraic constraint on the species concentrations. Previous works have used this constraint to obtain an elliptic equation for the electrical potential, which is then coupled to the relevant equations that govern mass and momentum transport [27, 30, 67]. However, these coupled equations do not appear to structurally resemble those of an uncharged mixture, and instead new techniques are developed to solve them; temporal splitting schemes are introduced in [27, 30] and a monolithic approach is considered in [67]. By contrast, in this work we use the socalled salt-charge transformation, introduced in [78] and inspired by Newman's work on binary electrolytes [58], to transform the electroneutral NSOSM equations into a form that structurally resembles the NSOSM equations for an uncharged mixture. We then apply the method of lines whereby we spatially discretize the transformed problem using a steady NSOSM solver, and then solve the resulting spatially discrete system of differential-algebraic equations by time-stepping. Hence, our approach has the advantage of allowing for uncharged NSOSM spatial discretizations and solvers to be employed in the electroneutral setting. The steady NSOSM discretization that we employ is a modification of the SOSM scheme introduced in [3], where we now include the nonlinear convective terms in the momentum balance. These are trivial to include as this work assumes low Reynolds number flow so that the convective terms do not require stabilization. We also apply a second modification that is straightforward but has consequences regarding the experimental parameters needed for the model. Namely, instead of discretizing the OSM equations using chemical potentials (as is done in [2, 3, 18]), we express the OSM diffusional driving forces in terms of mole fraction gradients and we only discretize the mole fractions. Because we do not discretize the chemical potentials, we do not require experimental knowledge of the mixture activity coefficients. Only the thermodynamic factors (i.e. partial derivatives of activity coefficients with respect to mole fractions) are required, which allows us to straightforwardly draw on experimental data from [80, 81] in our numerical experiments. When modelling mixtures, the interplay between the volumetric equation of state (EOS) and boundary conditions (BCs) is subtle. The EOS and BCs can both impose restrictions on the species concentrations, and these restrictions must be compatible. Investigating this interplay is mathematically challenging even for a two-species Fickian mixture [37]. In the present work, this challenge is exacerbated by the fact that our steady NSOSM solver requires user-chosen integral constraints to be imposed on the solution [3]. These constraints arise naturally in the steady case, because the steady mass continuity equations do not dictate how many moles of each species are present in the domain; this must be prescribed by additional integral constraints. However, in the transient case, the constraints must be chosen carefully such that they are compatible with the choice of EOS and BCs. In this paper, we discuss several physically relevant choices of EOS and BCs along with appropriate corresponding constraints. We also discuss a seemingly physical combination of EOS and BCs that appear to yield an ill-posed problem. We are not aware of discussions on this interplay in the multicomponent numerics literature and believe these findings may be of independent interest. Incidentally, we mention that a discussion on the challenges of discretizing the multicomponent mass continuity equations, in a way that is compatible with the 4 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE EOS, is given in [26] in the case of a low-order finite volume scheme3. The remainder of this paper is organized as follows. In section 2 we introduce the electroneutral NSOSM equations, and show how the salt-charge transformation can be applied to obtain an equivalent system amenable to a steady NSOSM solver. In section 3 we first consider the transformed steady problem, which we discretize using the (modified) scheme of [3]. In section 4 we then apply the method of lines to the transformed transient problem, discretizing in space using the steady scheme from section 3 and taking special care to identify what constraints must be imposed depending on the EOS and BCs. In section 5 we demonstrate our algorithm with a variety of numerical examples, and conclusions are drawn in section 6. 2. Governing equations and salt-charge transformation. We consider an isothermal, chemically nonreacting mixture of n ≥3 species indexed by i ∈{1 : n}. Let zi denote the equivalent charge of species i and nc ≥2 the number of charged species. Note that nc = n is possible, e.g. for a molten salt. We list the species so that the first (n-nc) species are uncharged and the last two species are oppositely charged. This means that z1, . . . , zn-nc = 0 and zn-nc+1, . . . , zn ̸= 0 with zn/zn-1 0 is the shear viscosity and ζ > 0 is the bulk viscosity. We allow the viscosities η and ζ to depend on the thermodynamic state variables T, p and x1, . . . , xn. To model the mass fluxes we employ the isothermal, non-isobaric Onsager-StefanMaxwell (OSM) equations [10, 52, 57, 85], which can be written using molar fluxes as (2.6) -∇μi + mi ρ ∇p = n X j=1 MijNj in Ω ∀i ∈{1 : n}, where μi is the electrochemical potential of species i ∈{1 : n} and M is the Onsager transport matrix. Two important properties of M are that it is symmetric positivesemidefinite, and its nullspace is spanned by [c1, . . . , cn]⊤. Moreover, the GibbsDuhem relation implies that Pn j=1 cj(-∇μj +[mj/ρ]∇p) = 0 [41]. The left-hand side of (2.6) therefore lies in the orthogonal complement of span{[c1, . . . , cn]⊤}, which is the range of M. Hence (2.6) is non-uniquely solvable for N1, . . . , Nn. Uniqueness of the fluxes comes from imposing the mass-average constraint (2.4) in addition to (2.6). Often M is written in terms of so-called Stefan-Maxwell diffusivities Dij as (2.7) Mij = ( - RT DijcT if i ̸= j, Pn k=1,k̸=i RT ck DikcT cj if i = j, with R the gas constant. The diffusivities satisfy Dij = Dji for i ̸= j [10] while Dii is undefined. We allow the Dij to depend on T, p and x1, . . . , xn (see e.g. [49]). Equations (2.1)-(2.6) comprise the electroneutral NSOSM equations. In addition to needing suitable boundary and initial conditions, the equations as written are still not closed. To obtain a closed problem one must supply a volumetric equation of state (EOS), which gives the total concentration cT (or equivalently density ρ) as a function of T, p and x1, . . . , xn. A constitutive law giving the electrochemical potentials μi as a function of T, p, x1, . . . , xn and the electrical state of the system must also be given. However, in multicomponent systems, formulating the notion of electrical state is delicate, and requires a foray into technicalities of electrochemical thermodynamics. Intuitively, the electrical state should quantify how much electrical potential energy is locally available in the system. A reasonable model may be to introduce an "electrostatic potential" φ which acts as a Lagrange multiplier for enforcing electroneutrality, and to then decompose the electrochemical potentials as μi = μchem i +φ, where μchem i encodes the "chemical" or "non-electrical" part of μi and depends on T, p and x1, . . . , xn only. However, Guggenheim and Newman have criticized such decompositions as being ambiguous [41, 57], with Newman noting that several inequivalent notions of "electrostatic potential" exist in concentrated mixtures (see also [12, 62]). Fortunately, we do not need to contend with this thermodynamical technicality in the present work. Indeed, owing to electroneutrality, the salt-charge transformation 6 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE will eliminate any need to model the dependence of electrochemical potentials on the electrical state [78]. In what follows, it will only be necessary to model the dependence of chemical potentials of hypothetical uncharged salts on T, p and x1, . . . , xn4. 2.2. Salt-charge transformation. It will be helpful to use boldface notation for vectors and matrices that are indexed by i ∈{1 : n}. For example, we shall write z = [z1, . . . , zn]⊤, c = [c1, . . . , cn]⊤, m = [m1, . . . , mn]⊤and so on. These should be interpreted as column vectors (or n × 1 matrices). Taking their gradient yields n × d matrices, hence for example (∇μ)ij = (∇μi)j. For the fluxes and other Rd-valued quantities we write N = [N1, . . . , Nn]⊤which is understood to be an n × d matrix, and the divergence acts on its rows so that ∇·N is a column vector (or n×1 matrix). With this notation, the governing equations in (2.1), (2.3), (2.4), and (2.6) become z⊤c = 0 in Ω, (2.8) ∂c ∂t + ∇· N = 0 in Ω, (2.9) v = ψ⊤N in Ω, (2.10) -∇μ + ψ∇p = MN in Ω, (2.11) where ψ := m/ρ and ∇p is viewed as being a 1 × d row vector. We now carry out the salt-charge transformation. While a detailed presentation is given in [78], we briefly outline it here for completeness, and because [78] assumes an isobaric mixture and hence does not keep track of pressure gradient terms in the OSM equations. The starting point is an n × n transformation matrix Z given by (2.12) Z =   ν⊤ 1 ... ν⊤ n-1 z⊤/ ∥z∥  . Here ∥z∥= √ z⊤z and ν1, . . . νn-1 are n×1 column vectors whose entries contain the stoichiometric coefficients of n -1 independent hypothetical chemical reactions that neutralize the species. An example from [78] is an n = 5 species mixture consisting of H2O, Na+, Cl-, Mg2+ and SO24 . Therefore z⊤= [0, 1, -1, 2, -2], and a choice of neutralizing reactions with corresponding stoichiometric coefficient vectors is H2O -⇀ ↽-H2O -→ ν⊤ 1 = [1, 0, 0, 0, 0], Na+ + Cl--⇀ ↽-NaCl -→ ν⊤ 2 = [0, 1, 1, 0, 0], Mg2+ + 2 Cl--⇀ ↽-MgCl2 -→ ν⊤ 3 = [0, 0, 2, 1, 0], 2 Na+ + SO24 -⇀ ↽-Na2SO4 -→ ν⊤ 4 = [0, 2, 0, 0, 1]. Note that the choice of reactions may not be unique. Following [78] we make the convention that ν1, . . . νn-1 have the following properties. First, for the uncharged species i ∈{1 : n-nc} we have (νi)j = δij, corresponding to a trivial identity reaction. Second, for i ∈{n -nc + 1 : n -1} the first n -nc entries of νi are zero, and exactly two of the remaining entries of νi are nonzero, and must be coprime positive integers such that νi is orthogonal to z. Third, the νi for i ∈{n -nc + 1 : n -1} must be 4In the non-electroneutral setting this strategy does not work. One must instead choose how to quantify the electrical state and model how the electrochemical potentials depend on it. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 7 linearly independent. The last two properties state that the non-trivial reactions form simple neutral salts (i.e. salts formed from two ions) and are independent. These three properties imply that {ν1, . . . , νn, z} is a basis for Rn and therefore Z is invertible. Through Z the electrochemical potentials transform by μZ := Zμ. The entries (μZ)i for i ∈{1 : n -1} are independent of the electrical state of the system and represent chemical potentials. Indeed, for i ∈{1 : n -nc}, (μZ)i = μi is the electrochemical potential of species i, but since species i is uncharged the electrochemical potential does not depend on the electrical state and is just a chemical potential. For i ∈{n -nc + 1 : n -1}, (μZ)i = ν⊤ i μ does not depend on the electrical state because it is the chemical potential of the neutral salt formed in the reaction corresponding to νi [58, 78]. Following [78] we decompose μZ as (recall that F is the Faraday constant) μZ = μν F ∥z∥ΦZ , where μν is an (n -1) × 1 column vector of the chemical potentials and ΦZ is a scalar-valued salt-charge potential. The potential ΦZ can be related to the potential of a hypothetical probe reference electrode immersed in the mixture, and serves to quantify the electrical state. However, no material parameters in the model depend on ΦZ. The concentrations and fluxes transform by cZ := Z-⊤c and NZ := Z-⊤N. Since μZ := Zμ these transformations preserve the structure of the volumetric Gibbs free energy ̃G = c⊤μ = c⊤ ZμZ and energy dissipation function T ̇s = -tr(N ⊤∇μ) = -tr(N ⊤ Z ∇μZ). As in [78] we decompose (2.13) cZ = cν 0 , NZ = Nν J⊤/(F ∥z∥) . The condition (cZ)n = 0 is equivalent to the electroneutrality condition (2.8), and J = FN ⊤z is the current density. The variables cν can be thought of as the molar concentrations of the salts formed in the neutralizing reactions, and Nν can be thought of as their molar fluxes. Using the equivalent electroneutrality condition (cZ)n = 0, one verifies that the governing equations (2.9)-(2.11) transform as ∂ ∂t cν 0 + ∇· Nν ∇· J = 0 in Ω, (2.14) v = ψ⊤ Z NZ in Ω, (2.15) -∇μZ + ψZ∇p = MZNZ in Ω, (2.16) where ψZ := Zψ and MZ := ZMZ⊤. Since the transformed Onsager transport matrix MZ is congruent to M, it retains the crucial structural properties of symmetry and positive-semidefiniteness (with a nullspace of dimension one). Equations (2.14)-(2.16) structurally resemble the uncharged OSM equations. In particular, the stationary version of (2.14)-(2.16) (removing the time derivative term in (2.14)) may be coupled to the stationary Stokes equations for v and p, and the resulting problem can be solved using the stationary SOSM solver of [3]. The Picard linearized analysis of [3] is applicable in such a setting. However, instead of working with this formulation we will instead further develop (2.16) by expressing the chemical potential gradients in terms of mole fraction gradients. Consequently we will not need a constitutive formula for the chemical potentials (or equivalently, activities) and will only need a constitutive formula for the thermodynamic factors. The mole fractions 8 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE transform as xZ := Z-⊤x and electroneutrality implies (xZ)n = 0, hence we write xZ = xν 0 , with xν an (n -1) × 1 column vector that parametrizes composition of the mixture under electroneutrality. The normalization constraint Pn i=1 xi = 1 transforms to (2.17) ν⊤ Z xν = 1 where (νZ)i = Pn j=1 Zij for i ∈{1 : n -1}. Generalizing [78, eq. (8.22)] to the non-isobaric case, we have (2.18) ∇μν = RTX0 ν∇xν + Vν∇p where the (n -1) × (n -1) matrix X0 ν encodes the thermodynamic factors under electroneutrality, and Vν is an (n -1) × 1 column vector of partial molar volumes: Vν := ∂μν ∂p T,xν=constant. Partial molar volumes can be recovered from the volumetric EOS using, for example, Newman's formula [57, Appendix A]. Both X0 ν and Vν are functions of (T, p, xν) only. 2.3. Augmentation strategy. As mentioned previously, the Onsager transport matrix M has a nullspace of dimension one, and only n -1 of the OSM equations in (2.6) are linearly independent. It is the mass-average constraint (2.4), in conjunction with the OSM equations (2.6), that allows for the molar fluxes to be uniquely determined. To weakly enforce the mass-average constraint at the discrete level and simultaneously address the fact that M is singular, we employ the augmentation strategy introduced in [31, 43] and used in [2, 3, 76]. In the present transformed setting, we introduce a user-chosen augmentation parameter γ > 0 and add a multiple of the mass-average constraint (2.15) to the OSM equation (2.16) in the following way: (2.19) -∇μZ + ψZ∇p + γψZv = γψZψ⊤ Z NZ + MZNZ = M γ ZNZ in Ω, where M γ Z := γψZψ⊤ Z + MZ is an augmented transport matrix and is nonsingular. Following [2, 3], for symmetry we also incorporate the mass-average constraint in (2.2) through (2.20) ∂(ρv) ∂t + ∇· (ρv ⊗v) + ∇p + γ(v -ψ⊤ Z NZ) -∇· τ = ρf in Ω. We also only explicitly enforce the divergence of the mass-average constraint, i.e. (2.21) ∇· v = ∇· (ψ⊤ Z NZ) in Ω. We shall employ the augmented equations (2.19)-(2.21) because they yield a system with the same number of equations as unknowns, and, in an appropriate Picard linearized setting, yield a well-posed symmetric saddle point problem [2, 3]. 2.4. Full problem formulation. Our final formulation of the electroneutral NSOSM problem is comprised of the following set of equations, taken from (2.14), (2.20), (2.21) and by combining (2.18) with (2.19). The unknown functions of space FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 9 and time to be solved for are the velocity v, transformed fluxes Nν, current density J, pressure p, transformed mole fractions xν and salt-charge potential ΦZ, such that: ∂(ρv) ∂t + ∇· (ρv ⊗v) + ∇p + γ(v -ψ⊤ Z NZ) -∇· τ = ρf in Ω, (2.22a) - RTX0 ν 0 0 F ∥z∥ ∇xν ∇ΦZ + ψZ - Vν 0 ∇p + γψZv = M γ ZNZ in Ω, (2.22b) ∇· (v -ψ⊤ Z NZ) = 0 in Ω, (2.22c) ∂ ∂t cν 0 + ∇· Nν ∇· J = 0 in Ω. (2.22d) This problem must be supplemented with suitable initial and boundary conditions, which we discuss in forthcoming sections. We recall how all quantities appearing in (2.22) depend on the unknowns. The viscous stress τ in (2.22a) is given by (2.5) and the viscosities in (2.5) are assumed to be known functions of (T, p, xν). The vector NZ was defined in (2.13) and is simply a concatenation of the fluxes Nν and scaled current density J. The constant γ > 0 is a user-chosen augmentation parameter and f is a known forcing term. All remaining quantities in (2.22) are assumed to be known Lipschitz continuous functions of (T, p, xν). The density ρ, concentrations cν and partial molar volumes Vν can be determined as functions of (T, p, xν) using the volumetric EOS. The vector ψZ = Zm/ρ depends on ρ only. The matrix X0 ν of thermodynamic factors can be expressed in terms of (n -1)(n -2)/2 independent Darken factors, while the transport matrix M γ Z can be expressed in terms of n(n -1)/2 independent StefanMaxwell diffusivities [78]. The dependence of these material parameters on (T, p, xν) is typically modelled by fitting experimental data. 3. Spatial discretization. In this section, we introduce finite element methods to spatially discretize the electroneutral NSOSM equations (2.22) in steady form: ∇· (ρv ⊗v) + ∇p + γ(v -ψ⊤ Z NZ) -∇· τ = ρf in Ω, (3.1a) - RTX0 ν 0 0 F ∥z∥ ∇xν ∇ΦZ + ψZ - Vν 0 ∇p + γψZv = M γ ZNZ in Ω, (3.1b) ∇· (v -ψ⊤ Z NZ) = 0 in Ω, (3.1c) ∇· Nν ∇· J = 0 in Ω. (3.1d) The steady problem (3.1) must be supplemented with suitable boundary conditions and integral constraints, which we now describe. 3.1. Boundary conditions. Let Γ := ∂Ωdenote the boundary of Ωand nΓ the unit outward normal on Γ. We consider the following flux5 boundary conditions: v = [(ψ⊤ Z NZ) · nΓ]nΓ + gv∥ on Γ, (3.2a) (Nν)i · nΓ = gi on Γ ∀i ∈{1 : n -1}, (3.2b) J · nΓ = gJ on Γ. (3.2c) 5We do not consider Dirichlet BCs in this section, but see subsection 5.2 for an example of how they can be implemented. 10 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE The functions gv∥: Γ →Rd, gi : Γ →R and gJ : Γ →R are prescribed data. We assume gv∥· nΓ = 0 on Γ, so that (3.2a) enforces the mass-average constraint (2.15) in the normal direction and v = gv∥in the tangential directions. In practical applications the prescribed normal fluxes gi and gJ may be known algebraic functions of the state variables (T, p, xν) and ΦZ. We shall allow for such dependencies. An example is that of a Butler-Volmer boundary condition [24, 57], which on an electrode interface Γe ⊂Γ relates the normal component of current density to the overpotential. The overpotential quantifies the electrical potential difference across the interface. Butler-Volmer BCs generally depend on (T, p, xν) and ΦZ nonlinearly, and we consider this case in section 5. However, for small overpotentials and neglecting variations in composition, it is instructive to point out that Butler-Volmer BCs may be approximated by a linear relationship [57] (3.3) gJ = -i0 (αa + αc)F RT (Ve -ΦZ) on Γe. Here i0 is the exchange-current density, Ve is the electrode voltage and αa, αc are apparent transfer coefficients. These parameters are determined by experimental data. 3.2. Integral constraints. In general, the steady problem (3.1) with boundary conditions (3.2) is not uniquely solvable, and 1 ≤k ≤n + 1 additional constraints must be imposed for uniqueness. The necessity of such constraints in the uncharged, steady SOSM setting was pointed out in [3]. However, the present situation is more complicated than that of [3] because we allow for the boundary data in (3.2b)-(3.2c) to be solution-dependent. The situation will be further complicated in section 4 when we consider the transient problem, but for now we focus on the steady case. The need for constraints can be motivated as follows. Integrate the continuity equations (3.1d) over Ωand use the divergence theorem together with the BCs (3.2b) and (3.2c) to obtain n compatibility conditions on the data, (3.4) Z Γ gi dΓ = 0 ∀i ∈{1 : n -1} and Z Γ gJ dΓ = 0. For the problem (3.1) with BCs (3.2) to admit a solution, the conditions in (3.4) must be satisfiable. Assuming this is so, let l ≤n denote the number of independent constraints that (3.4) imposes on the unknowns (p, xν, ΦZ). Next, similarly integrate (3.1c) over Ωand use the BC (3.2a) to obtain an (n + 1)-th compatibility condition (3.5) Z Γ gv∥· nΓ dΓ = 0, which holds automatically since gv∥· nΓ = 0. Hence, we have shown that integrating the n+1 equations in (3.1c)-(3.1d) over Ωleads to only l ≤n independent constraints on the solution. An additional k = n + 1 -l constraints must therefore be imposed for uniqueness. The same argument will apply at the discrete level. Namely, if D denotes the number of discrete degrees of freedom, then discretization of (3.1)-(3.2) will result in a system of D equations, but only D -k of these will be independent. This motivates why k additional constraints are needed for uniqueness. We will give examples of choices of constraints in subsection 3.6. Note that these constraints do not need to be integral constraints, but usually they are in practice. 3.3. Finite element spaces. We use the standard notation for Lebesgue and Sobolev spaces L2(Ω), W 1,∞(Ω), H1(Ω), H(div; Ω) and their norms [34]. We use FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 11 (·, ·)Ωand ⟨·, ·⟩Γ to denote the L2-inner products on Ωand Γ respectively. Moreover, we write L2 0(Ω) := {q ∈L2(Ω) : (q, 1)Ω= 0} for the subspace of functions in L2(Ω) with vanishing mean. We write H1 0(Ω) := {u ∈H1(Ω) : u|Γ = 0} and H0(div; Ω) := {u ∈H(div; Ω) : (u · nΓ)|Γ = 0} for subspaces with vanishing traces. To discretize (3.1) we introduce finite dimensional finite element subspaces that may depend on a parameter h ∈(0, ∞) representing, for example, the mesh size: Vh ⊂H1(Ω)d, V0h := Vh ∩H1 0(Ω)d, (3.6a) Ph ⊂L2(Ω), P0h := Ph ∩L2 0(Ω), (3.6b) Nh ⊂H(div; Ω), N0h := Nh ∩H0(div; Ω), (3.6c) Xh ⊂L2(Ω), X0h := Xh ∩L2 0(Ω). (3.6d) Importantly, we assume Ph and Xh contain the constant functions, i.e. 1 ∈Ph ∩Xh. We shall seek the discrete velocity vh ∈Vh and pressure ph ∈Ph. We assume that (Vh, Ph) forms an inf-sup stable Stokes pair [35], in the sense that: (3.7) β ∥qh∥L2(Ω) ≤ sup uh∈V0h (qh, div uh)Ω ∥uh∥H1(Ω)d ∀qh ∈P0h, for some β > 0 independent of h. We shall seek the discrete fluxes (Nν,h)i ∈Nh for i ∈{1 : n -1} and current density Jh ∈Nh. Similarly, we seek the discrete mole fractions (xν,h)i ∈Xh for i ∈{1 : n -1} and salt-charge potential ΦZ,h ∈Xh. We assume that (Nh, Xh) forms a divergence-free and inf-sup stable mixed-Poisson pair [35], in the sense that: (3.8) div N0h ⊂X0h and β′ ∥yh∥L2(Ω) ≤ sup Kh∈N0h (yh, div Kh)Ω ∥Kh∥H(div;Ω) ∀yh ∈X0h, for some β′ > 0 independent of h. Assumptions (3.7) and (3.8) are motivated by the analysis of [3], where similar assumptions are shown to ensure well-posedness of a Picard linearized SOSM system. Following [3], in our numerical experiments we employ the degree k ≥2 TaylorHood pair [13, 75] for (Vh, Ph) (but c.f. [3] for a discussion on other choices such as Scott-Vogelius [70]). For (Nh, Xh) we employ either the BDMk-DGk-1 pair [16, 56] or the RTk-DGk-1 [65] pair. These spaces are standard in finite element software packages, extend to high orders, and are applicable in two or three spatial dimensions on triangular, tetrahedral, quadrilateral or hexahedral (possibly curved) meshes. 3.4. Lipschitz continuous reconstruction operators. Our discretization in subsection 3.5 will involve integration-by-parts of the gradient terms on the left-hand side of (3.1b). This requires the entries of X0 ν, ψZ and Vν to lie in W 1,∞(Ω). Discretization of (3.1c) will likewise require ψZ ∈(W 1,∞(Ω))n. Recall that we assume these quantities to be known Lipschitz continuous functions of (T, p, xν). However, at the discrete level, the spaces Ph or Xh may be discontinuous. For this reason we shall evaluate thermodynamic properties such as X0 ν, ψZ, Vν using Lipschitz continuous reconstructions of ph and xν,h. To be precise, we assume that smoothing operators πPh : Ph →Ph ∩W 1,∞(Ω), πXh : Xh →Xh ∩W 1,∞(Ω), (3.9) 12 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE are available. In our numerical simulations we take πPh to be the L2(Ω)-projection of Ph into Ph ∩W 1,∞(Ω) and likewise for Xh. Nodal averaging operators could alternatively be employed [33]. Given ph and xν,h we introduce their reconstructions f ph := πPhph, ^ (xν,h)i := πXh(xν,h)i . n-1 X j=1 (νZ)jπXh(xν,h)j. Note that the reconstructed mole fractions are normalized to satisfy condition (2.17) exactly. We then write g X0ν, f ψZ, f Vν, eρ, and so on, to denote quantities evaluated with the reconstructed f ph and g xν,h instead of ph and xν,h. 3.5. Discretized problem. Our discrete variational formulation of (3.1) can be obtained by multiplying the equations with suitable test functions and integrating over Ω. The pressure and viscous terms in (3.1a) as well as the gradient terms on the left-hand side of (3.1b) are integrated-by-parts, and all boundary terms vanish owing to our BCs in (3.2). Following [3] we also add density consistency terms to (3.1c). The precise discretization we consider is as follows. We seek discrete functions vh ∈Vh, ph ∈Ph, Nν,h ∈(Nh)n-1, Jh ∈Nh, xν,h ∈(Xh)n-1 and ΦZ,h ∈Xh. Let NZ,h := Nν,h J⊤ h /(F ∥z∥) ∈(Nh)n, which is analogous to NZ in (2.13). The discrete variational problem reads: ∇· (eρvh ⊗vh) + γ(vh -g ψ⊤ Z NZ,h) -eρf, uh Ω- ph, ∇· uh Ω + 2eηε(vh) + (eζ -2eη/d)(∇· vh)I, ε(uh) Ω= 0 ∀uh ∈V0h, (3.10a) xν,h ΦZ,h , ∇· RT g X0νWh F ∥z∥K⊤ h ! Ω - ph, ∇· ( g ψ⊤ Z Wh K⊤ h -g V ⊤ ν Wh )! Ω + γ f ψZvh, Wh K⊤ h ! Ω = g M γ ZNZ,h, Wh K⊤ h ! Ω ∀ Wh K⊤ h ∈(N0h)n, (3.10b) ∇· (vh -g ψ⊤ Z NZ,h), qh ΩD nΓ · (vh -g ψ⊤ Z NZ,h), qh E Γ = 0 ∀qh ∈Ph, (3.10c) ∇· NZ,h, yh Ω= 0 ∀yh ∈(Xh)n. (3.10d) Moreover, we strongly impose the following discrete analogue of the BCs in (3.2): vh = πVh [(g ψ⊤ Z NZ,h) · nΓ]nΓ + gv∥ on Γ, (3.11a) (Nν,h)i · nΓ = πNh egi on Γ ∀i ∈{1 : n -1}, (3.11b) Jh · nΓ = πNh f gJ on Γ. (3.11c) Here πVh and πNh are L2(Γ)-projection operators onto the discrete trace spaces6 πVh : [L2(Γ)]d →{uh|Γ : uh ∈Vh} ⊂[L2(Γ)]d, (3.12a) πNh : L2(Γ) →{(Kh · nΓ)|Γ : Kh ∈Nh} ⊂L2(Γ). (3.12b) Conditions (3.10)-(3.11) define our discrete scheme. 6For (3.12b) to be well-defined, we implicitly assume (Kh ·nΓ)|Γ ∈L2(Γ) ∀Kh ∈Nh. In practice, the finite element space Nh will consist of piecewise smooth functions, so that this indeed holds. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 13 3.6. Integral constraints in the discrete setting. As already discussed in subsection 3.2, for uniqueness of a solution to the discretized problem (3.10)-(3.11), additional constraints must be supplied. To see this at the discrete level, let D := dim Vh × Ph × (Nh)n × (Xh)n and note that, once a finite element basis has been chosen, the problem (3.10)-(3.11) amounts to a nonlinear system of D equations in D unknowns. However, notice that (3.10c) holds automatically when qh is a constant, as verified from integration-by-parts (this is analogous to how (3.5) holds automatically). This property is one way of motivating density consistency terms (i.e. the boundary terms) in (3.10c) [3]. Likewise, taking the entries of yh to be constants in (3.10d), integrating-by-parts, using BCs (3.11b)-(3.11c) yields (in analogy to (3.4)) that (3.13) Z Γ πNh egi dΓ = 0 ∀i ∈{1 : n -1} and Z Γ πNh f gJ dΓ = 0. Let l ≤n denote the number of independent constraints that (3.13) imposes on the discrete solution. Problem (3.10)-(3.11) then consists of only D -k independent equations, where k = n + 1 -l. Hence, an additional k constraints must be imposed. Since k ≥1, at least one constraint is always required. This constraint does not encode physical information, and instead reflects our choice to solve for n -1 transformed mole fractions despite the fact that only n -2 of them are independent. Following [3] we impose the normalization condition (2.17) on average over Ω: (3.14) Z Ω (ν⊤ Z xν,h -1) dΩ= 0. Together with (3.14), the remaining k -1 constraints must be chosen on a case-bycase basis. What choices of constraints are appropriate will depend on BCs and the functional dependence of the thermodynamic properties (i.e. the properties in (3.10) with a tilde) on p and xν. We now give examples of this for concreteness. Case (i): The BCs in (3.2b) and (3.2c) do not depend on p, xν or ΦZ. In this case, the compatibility conditions (3.13) do not impose any constraints on the solution, so l = 0 and n additional constraints are required. Since no thermodynamic properties depend on ΦZ, this field is then undetermined up to an additive constant, which can be fixed through a constraint such as R ΩΦZ,h dΩ= 0. If no thermodynamic properties depend on p then it too is undetermined up to an additive constant, which can be fixed by means of R Ωph dΩ= 0. However, if any of thermodynamic properties depend on p then additive shifts of ph by a constant affect the physical content of the model. In this case, constraints of the form R Ω(ph - ̄p) dΩ= 0 may be used, where ̄p ∈R is a user-prescribed mean pressure. If the volumetric EOS depends on p (equivalently if ρ depends on p) then an alternative constraint that is more harmonious with experimentally available information may be R Ωeρ dΩ= M tot, where M tot ∈R is the user-prescribed total mass of fluid in Ω. The final n -2 constraints may be chosen to express the relative abundances of the species in Ω. For example, one may use R Ωg (cν)i dΩ= ctot i for i ∈S, where S ⊂{1 : n -1} contains n -2 indices, and ctot i ∈R are user-prescribed total numbers of moles of the species in Ω. Case (ii): The BCs in (3.2b) and (3.2c) are solution-dependent. To illustrate how solution-dependent BCs fit into this framework, consider the case where gJ is given by a Butler-Volmer-type relation in (3.3) or some nonlinear analogue of this, while gi = αigJ/F for i ∈{1 : n -1} with {αi}n-1 i=1 a set of known constants. These BCs model electrode kinetics, and the αi relate to the stoichiometric coefficients of the electrode reaction. Since each gi is a multiple of gJ, the compatibility conditions 14 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE in (3.13) impose exactly one constraint on the solution, so l = 1 and n -1 additional constraints are required. Since ΦZ appears in the BC (3.3) it is no longer necessary (or physically reasonable) to fix R ΩΦZ,h dΩ= 0. Instead, the n-1 constraints should be chosen to determine the pressure and relative species abundances, as in case (i). 4. Temporal discretization. Our spatial discretization from section 3 can be applied in the transient setting using the method of lines. However, special care must be taken to address the subtle interplay between the species mass continuity equations, boundary conditions, volumetric EOS and integral constraints. 4.1. Semi-discrete problem. Discretization in space (but not time) of the transient problem (2.22) yields the following semi-discrete analogue of (3.10). We seek time-dependent discrete functions vh ∈Vh, ph ∈Ph, Nν,h ∈(Nh)n-1, Jh ∈Nh, xν,h ∈(Xh)n-1 and ΦZ,h ∈Xh such that: d dt eρvh, uh Ω+ ∇· (eρvh ⊗vh) + γ(vh -g ψ⊤ Z NZ,h) -eρf, uh Ω - ph, ∇· uh Ω+ 2eηε(vh) + (eζ -2eη/d)(∇· vh)I, ε(uh) Ω= 0 ∀uh ∈V0h, (4.1a) xν,h ΦZ,h , ∇· RT g X0νWh F ∥z∥K⊤ h ! Ω - ph, ∇· ( g ψ⊤ Z Wh K⊤ h -g V ⊤ ν Wh )! Ω + γ f ψZvh, Wh K⊤ h ! Ω = g M γ ZNZ,h, Wh K⊤ h ! Ω ∀ Wh K⊤ h ∈(N0h)n, (4.1b) ∇· (vh -g ψ⊤ Z NZ,h), qh ΩD nΓ · (vh -g ψ⊤ Z NZ,h), qh E Γ = 0 ∀qh ∈Ph, (4.1c) d dt g cν,h 0 , yh ! Ω + ∇· NZ,h, yh Ω= 0 ∀yh ∈(Xh)n. (4.1d) We supplement this problem with the same strongly enforced BCs (3.11) from the steady case, and we permit the boundary data to depend on time. 4.2. Integral constraints in the transient setting. As in the steady case, problem (4.1) with BCs (3.11) requires integral constraints for well-posedness. However, the interplay between the BCs, volumetric EOS and mass continuity equations becomes especially important in the transient setting, as we now elucidate through considerations similar to those in subsection 3.6. First, note that (4.1c) holds automatically when qh = 1, as in the steady case. This suggests that at least one integral constraint will be needed for well-posedness. Next, taking the entries of yh to be constants in (4.1d) yields, similarly to (3.13), that d dt Z Ω (g cν,h)i dΩ+ Z Γ πNh egi dΓ = 0 ∀i ∈{1 : n -1}, (4.2a) Z Γ πNh f gJ dΓ = 0. (4.2b) If gJ depends on ΦZ by a Butler-Volmer-type BC (3.3), then (4.2b) constrains additive shifts in ΦZ,h. Otherwise, assuming gJ does not depend on p, xν or ΦZ and none of the gI depend on ΦZ, the (consequently undetermined) additive constant in ΦZ,h can be fixed by an extra constraint R ΩΦZ,h dΩ= 0. This leaves us to consider (4.2a). FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 15 To study (4.2a) we assume a confined flow, in the sense that the total normal fluxes over Γ are zero. In other words, we assume that the gi satisfy (4.3) Z Γ πNh egi dΓ = 0 ∀i ∈{1 : n -1}, so that the compatibility conditions in (4.2a) become (4.4) d dt Z Ω (g cν,h)i dΩ= 0 ∀i ∈{1 : n -1}, which physically states that the total number of moles of all species is conserved. Note that (4.3) holds for Butler-Volmer-type BCs, since in this case gi = αigJ/F and gJ satisfies (4.2b). We now investigate how many of the n -1 conditions in (4.4) are actually independent. Note that a general volumetric EOS relates the total concentration cT to the partial molar volumes V ⊤ ν and mole fractions xν by [41] (4.5) 1 cT = V ⊤ ν xν. Hence, multiplying (4.5) by cT , we see that the entries of g cν,h are related by (4.6) 1 = g V ⊤ ν g cν,h. In light of (4.6), it is clear that the total molar conservation conditions (4.4) may not all be independent, or may even be overdetermined. This leads us to consider three pertinent cases, depending on the functional dependence of Vν on p and xν. Case (i): The partial molar volumes are constant. In liquid mixtures it is often reasonable to approximate the partial molar volumes Vν as being constant [29]. It is then clear from (4.6) that only n -2 of the conditions in (4.4) are independent. If any n -2 of these conditions hold, then the final one will hold as well. Thus, in this case, we must impose two additional constraints (recall that the other constraint comes from taking qh = 1 in (4.1c)). A reasonable choice is to enforce (3.14) and a constraint that fixes the pressure such as R Ω(ph - ̄p) dΩ= 0. If the partial molar volumes are constant then X0 ν, ρ and ψZ are independent of p. If η, ζ and MZ are also modelled as being independent of p, and p does not appear in the BCs, then the user-chosen value of ̄p will not affect the physical content of the model. Case (ii): The partial molar volumes depend on xν but not p. If the partial molar volumes are non-constant (which happens only if they depend on p or xν), then the conditions in (4.4) are generally independent. One must then consider whether they can all be satisfied simultaneously. If the partial molar volumes Vν depend on xν only, then the set of admissible concentrations cν is parametrized by xν. Since only n -2 entries of xν are independent, it does not seem reasonable to expect that all n -1 conditions in (4.4) can be satisfied simultaneously, and we hypothesize that in this case the model problem (2.22) may not be well-posed. We give an explicit example of how this case can produce an ill-posed model in Appendix A. Challengingly, in real electrolytes the partial molar volumes can vary appreciably with composition but negligibly with pressure [57, 80, 81]. To overcome the challenge of an ill-posed model one can explicitly include pressure-dependence in Vν. However, if this dependence is too small, then to satisfy all n-1 conditions in (4.4) the pressure may be forced to grow unphysically large. We hypothesize that this challenge could be overcome by allowing the volume of Ωto vary in time (i.e. the total volume the 16 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE electrolyte occupies may vary). However, attempting this lies outside the scope of the present work, since it would (very challengingly) require both extending the model and numerically solving a moving boundary problem. A simpler fix may be to use BCs that do not satisfy (4.3) and instead allow some electrolyte to "leak" from Ω. Case (iii): The partial molar volumes depend on both p and xν. As alluded to case (ii), if Vν depends on both p and xν, then the set of admissible concentrations cν is parametrized by the n-1 independent quantities (p, xν). We therefore heuristically expect all n -1 conditions in (4.4) to be satisfiable. In this case, only one additional constraint is needed (from taking qh = 1 in (4.1c)), and we suggest to employ (3.14). Our analysis of cases (i)-(iii) only involves the total molar conservation conditions (4.4) and the volumetric EOS (4.5). In particular, our considerations are applicable to any transient multicomponent fluid model with a general volumetric EOS. We are not aware of these considerations being discussed elsewhere in the literature; particularly that of case (ii) where a seemingly physical choice of EOS can lead to an ill-posed problem. Although we are aware of numerical works [18, 25] that employ the EOS (4.5) with constant partial molar volumes, these works do not touch on the possibility of case (ii) and its associated challenges. Therefore, we believe that our considerations here may be of general interest to the multicomponent fluids community. 4.3. Time-stepping methods. The semi-discrete problem (4.1), together with strongly enforced BCs (3.11) and appropriate integral constraints of subsection 4.2, amounts to a nonlinear system of differential-algebraic equations (DAEs). Many timestepping schemes exist for solving DAEs, including Runge-Kutta methods and multistep methods [82]. In our numerical simulations we choose to employ implicit RungeKutta methods, because of their desirable stability properties and ability to provide high-order temporal accuracy [82]. 5. Numerical examples. In this section we present numerical examples of our methods, implemented using Firedrake [42]. We use Irksome [36, 47] for time-stepping and ngsPETSc [8, 69] for mesh generation. We solve the nonlinear systems with Newton's method [23] and the sparse direct solver MUMPS [1]. Code can be found at https://bitbucket.org/abaierr/multicomponent electrolyte code. 5.1. Hull cell electroplating. To demonstrate our methods in a physically realizable setting, we simulate the transient electroplating of a non-ideal binary electrolyte in a Hull cell geometry in two and three spatial dimensions. The two-dimensional domain Ω2D is a right trapezoid with vertices (0, 0), (0, 5), (5, 5) and (10, 0). Here, a unit of length corresponds to a physical length of 1mm. We partition the boundary as ∂Ω2D = Γ2D p ∪Γ2D n ∪Γ2D w where Γ2D p is the line segment between (0, 0) and (0, 5) and denotes the positive electrode, Γ2D n is the line segment between (5, 5) and (10, 0) and denotes the negative electrode, and Γ2D w = ∂Ω2D\(Γ2D p ∪Γ2D n ) are insulating walls. The three-dimensional domain is the extrusion of the two-dimensional domain by 5 units in the z-direction, i.e. Ω3D = Ω2D × (0, 5) with Γ3D p = Γ2D p × (0, 5), Γ3D n = Γ2D n × (0, 5) and Γ3D w = ∂Ω3D \(Γ3D p ∪Γ3D n ). In what follows we may omit superscripts ·2D and ·3D The electrolyte is comprised of ethyl-methyl-carbonate (EMC) solvent and lithium hexafluorophosphate (LiPF6) salt. This mixture is used in lithion-ion batteries. In the notation of section 2, there are n = 3 species (EMC, Li+ and PF6 -) with molar masses m = [104.105, 6.935, 144.97]⊤g mol-1 and equivalent charges z = [0, 1, -1]⊤. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 17 We use the salt-charge transformation matrix (recall (2.12)) Z =   1 0 0 0 1 1 0 1 √ 2 -1 √ 2  , which corresponds to neutralizing reactions EMC -⇀ ↽-EMC and Li+ + PF6 - -⇀ ↽-LiPF6. Hence, under the salt-charge transformation, the mole fractions (xν)i, chemical potentials (μν)i, fluxes (Nν)i and so on, represent those of EMC for i = 1 and LiPF6 for i = 2. We accordingly write xEMC := (xν)1, μEMC := (μν)1, xLiPF6 := (xν)2, μLiPF6 := (μν)2 and so on for (Nν)i. The material properties are encoded in the functional dependence of η, ζ, ρ, X0 ν and MZ on (T, p, xν). We take an ambient temperature of T = 298.15K. For the viscosities, we let η be a function of xν as reported in [81, Table 2], and ζ = 10-6Pa s. For ρ, X0 ν and MZ we use the functional dependencies from [78, Table 1]7. This leads to ρ depending on xν only, and likewise for the (non-constant) partial molar volumes, placing us in case (ii) of subsection 4.2. The transport matrix MZ also depends on xν only, while X0 ν depends on both xν and p, with the (negligibly small) dependence on p arising due to the non-constant partial molar volumes. For boundary conditions on Γ := ∂Ωwe use (3.11a) with gv∥= 0. For the normal fluxes (3.11b) and (3.11c), we employ nonlinear Butler-Volmer BCs [24, 57]. Note that μLiPF6 can be expressed as a known function of (p, xν) only, by analytically integrating X0 ν and Vν (c.f. (2.18)). The quantity ΦZ -0.5μLiPF6/F then represents the potential of a reference electrode that reacts with the electrolyte through Li -⇀ ↽-Li+ + e- [78]. We assume that the positive and negative electrodes Γe for e ∈{p, n} undergo the same reaction. Letting Ve denote the applied potential on electrode e ∈{p, n}, and i0(xν) the exchange current density, we consider the BCs gJ = -2i0(xν) tanh ( F Ve -(ΦZ -0.5μLiPF6/F) RT ) on Γe for e ∈{p, n}, (5.1a) gJ = 0 on Γw, (5.1b) g2 = gJ/(2F) on Γ. (5.1c) Condition (5.1a) is a standard Butler-Volmer BC, while (5.1b) expresses the insulating property of the walls. Since N = Z⊤NZ, condition (5.1c) states that the normal flux of PF6 - is zero on Γ (since the electrodes only react with Li+). We model i0 by i0(xν) = i⊖ 0 (xLiPF6/x⊖ LiPF6)2, to resemble commonly used functional forms [57]. Moreover, we take Vp = 0.1V, Vn = 0V, i⊖ 0 = 104Am-2 and x⊖ LiPF6 = 0.075. Since we are in case (ii) of subsection 4.2 (non-constant partial molar volumes that do not depend on p), we do not expect a well-posed problem if no-flux BCs are imposed on EMC (i.e. g1 = 0 on Γ in (3.11b)). Instead we impose BCs that allow some EMC to "leak" from the positive electrode. As a simple model for this, we assume a quadratic flux profile of EMC on the positive electrode, with the magnitude of the flux an unknown to be solved for. In particular, we take g1 = 0 on Γn ∪Γw, (5.2a) g1 = λleak · qp on Γp, (5.2b) 7The formula for κ in [78, Table 1] has a typo and in the notation of that paper should read κ = (48.93y3/2 e -284.8y5/2 e + 817.8y4 e)2. One can use the formulae from [78] to construct MZ, but note that the right-hand side of [78, eq. 16.24] is incorrect by a factor of -1. 18 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE where qp : Γp →R is the unique quadratic function on Γp that vanishes at its endpoints and is one at its midpoint, and λleak ∈R is a (time-dependent) Lagrange multiplier that determines the amount of leakage. The extra degree of freedom λleak allows us to enforce (3.14) while also fixing the pressure mean, for which we take R Ωph dΩ= 0. As initial conditions we take a spatially uniform composition xLiPF6|t=0 = x⊖ LiPF6 and xEMC|t=0 = 1-2x⊖ LiPF6 (c.f. (2.17)). For compositions in this regime the Reynolds and P ́eclet numbers for the problem are roughly Re = 3 × 10-5 and Pe = 9 × 10-2. We take all other unknowns v, p, Nν, J, ΦZ to be zero at t = 0. We numerically integrate (4.1) up to a final time of 172800 seconds (two days) using the RadauIIA implicit Runge-Kutta method [82] with two stages in the two-dimensional case and one stage in the three-dimensional case, and with 200 timesteps. We spatially discretize (4.1) in a non-dimensionalized form with augmentation parameter γ = 10-2. We employ a two-dimensional mesh of 5.7 × 103 triangles with maximum cell diameter h = 0.125, and a three-dimensional mesh of 6.3 × 103 tetrahedra with maximum cell diameter h = 0.5. We expect singularities in the solution at corners of Ω; in the two-dimensional case we use a finer local cell diameter of hc = 0.0125 at the four vertices of Ω2D. We employ the degree k Taylor-Hood pair [13, 75] for (Vh, Ph) and the RTk-DGk-1 [65] pair for (Nh, Xh) with k = 4 in two dimensions and k = 2 in three dimensions. The nonlinear systems at each timestep are solved using Newton's method with an absolute tolerance on the residual of 10-10 in the Euclidean norm. These systems consist of 1.3×106 unknowns in two dimensions and 3.9×105 unknowns in three dimensions. The greatest number of Newton iterations was taken at the first timestep (10 iterations in two and three dimensions). After the 6th timestep, all Newton solves required at most 3 iterations. Fig. 1. Streamlines of the EMC flux NEMC (in black) and current density J (in white) from the two-dimensional simulation of subsection 5.1. The domain is colored by the magnitude of J. In Figure 1, we plot streamlines of the EMC flux NEMC and current density J at time t = 64800s in two dimensions. The current density expectedly flows from the positive to negative electrode and appears to become singular at the top-right cell corner. The two-dimensional plots also reveal two convective rolls forming in the EMC flux profile. These convective rolls can also be seen in Figure 2, where we plot streamlines of the EMC flux at the final time t = 172800s in three dimensions. Owing to the leakage BCs (5.2), the total number of moles of EMC (i.e. the value of R Ωc1 dΩ) varied by about 0.006% over the simulation time, in both two and three dimensions. We also report maximum L2-errors in the (nondimensionalized) FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 19 mass-average constraint and mole fraction constraint: E1 := max 0≤t≤Tfinal vh -g ψ⊤ Z NZ,h L2(Ω)d , E2 := max 0≤t≤Tfinal 1 -ν⊤ Z xν,h L2(Ω) . We obtained values of (E1, E2) = (7.0 × 10-5, 1.1 × 10-9) in two dimensions and (E1, E2) = (5.8 × 10-3, 3.0 × 10-6) in three dimensions. Fig. 2. EMC flux NEMC streamlines (colored by its magnitude) from the three-dimensional simulation of subsection 5.1. The domain walls are colored by the salt-charge potential ΦZ. 5.2. Microfluidic rotating disk electrode. We again consider the binary EMC:LiPF6 electrolyte from subsection 5.1, but in a setting that showcases the flexibility of our method in applying different BCs. We employ the same salt-charge transformation and material parameters η, ζ, ρ, X0 ν and MZ at ambient temperature T = 298.15K as in subsection 5.1. We consider a three-dimensional domain Ωrepresenting a microfluidic box containing a rotating disk electrode. In particular we take Ω= Ωbox \ Ωdisk where Ωbox = (-5, 5)3 and Ωdisk = {(x, y, z) ∈ R3 : x2 + y2 ≤1, -0.05 ≤z ≤0.05}. Here, a unit of length corresponds to a physical length of 12.5μm. We decompose Γ := ∂Ωas Γ = Γp ∪Γn ∪Γw where Γp = {(x, y, z) ∈∂Ωbox : z = ±5} denotes the top and bottom walls of the box, Γw = ∂Ωbox \ Γp the four side walls of the box, and Γn = ∂Ωdisk the disk boundary. The surfaces Γp, Γn, Γw represent, respectively, the positive electrode, negative electrode, and insulating walls. We solve the steady problem (3.10) with the following BCs. We first impose that the disk rotates with fixed angular frequency ̇θ ∈R; this manifests in our model through the boundary condition (3.11a) on v with gv∥given by gv∥= ̇θ(-y, x, 0) for (x, y, z) ∈∂Ωdisk = Γn, (5.3a) gv∥= 0 on Γ \ ∂Ωdisk. (5.3b) We choose ̇θ = 28.79s-1, which leads to Reynolds and P ́eclet numbers of roughly Re = 3 × 10-3 and Pe = 1 × 101. For the EMC flux Nh,EMC := (Nν,h)1 we impose 20 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE a zero normal flux condition (3.11b) with g1 = 0 on Γ. For the current density flux, instead of enforcing (3.11c) for some function gJ, we enforce Jh·nΓ = 2F(Nh,LiPF6·nΓ) on Γ, where Nh,LiPF6 := (Nν,h)2 (note that we implement this as a strongly enforced BC on Jh · nΓ and not Nh,LiPF6 · nΓ). As in subsection 5.1, this BC enforces that the normal flux of PF6 - is zero on Γ. For the LiPF6 flux we enforce a zero normal flux condition on the insulating walls Nh,LiPF6 · nΓ = 0 on Γw. However, instead of prescribing the value of NLiPF6 · nΓ on the electrodes Γp ∪Γn, we weakly enforce the value of xLiPF6 in these regions through (5.4) xh,LiPF6 = x⊖,e LiPF6 on Γe for e ∈{p, n}, where x⊖,p LiPF6 = 0.082 and x⊖,n LiPF6 = 0.068. We weakly enforce (5.4) by modifying (3.10b) through the addition of appropriate boundary terms, and by enlarging the space of test functions (Wh)2 to those with zero normal trace on Γw only. Specifically, instead of (3.10b), we consider: xν,h ΦZ,h , ∇· RT g X0νWh F ∥z∥K⊤ h ! Ω - ph, ∇· ( g ψ⊤ Z Wh K⊤ h -g V ⊤ ν Wh )! Ω + γ f ψZvh, Wh K⊤ h ! Ω = g M γ ZNZ,h, Wh K⊤ h ! Ω + RT · x⊖,p LiPF6 D ^ (X0ν)22, (Wh)2 · nΓ E Γp + RT · x⊖,n LiPF6 D ^ (X0ν)22, (Wh)2 · nΓ E Γn ∀ Wh K⊤ h ∈(N0h × Nh × N0h) with (Wh)2 · nΓ = 0 on Γw. (5.5) With the present material data, X0 ν is a diagonal matrix. Using this property, one verifies by taking (Wh)2 to be non-zero in (5.5), that for e ∈{p, n} the following is weakly enforced: (5.6) RT · xh,LiPF6 · ^ (X0ν)22 -ph · n ( ] ψ⊤ Z )2 -^ (V ⊤ ν )2 o | {z } :=Ip = RT · x⊖,e LiPF6 · ^ (X0ν)22 on Γe. Dimensional analysis reveals that the pressure term Ip in (5.6) is smaller than the other terms by a factor of 10-8. Neglecting Ip in (5.6) and dividing by RT · ^ (X0ν)22 then leads to (5.4), as desired. We spatially discretize (3.10) with aforementioned BCs and modification (5.5) in a non-dimensionalized form with augmentation parameter γ = 10-2. As constraints we impose (3.14) along with R Ωph dΩ= 0. We employ a curved three-dimensional mesh of degree 4 with 1.7 × 104 tetrahedra and maximum local cell diameter of h = 0.1 on the disk boundary. We employ the degree k = 4 Taylor-Hood pair [13, 75] for (Vh, Ph) and the RTk-DGk-1 [65] pair for (Nh, Xh). The nonlinear system consists of 1.1×106 unknowns and was solved using Newton's method with an absolute tolerance on the residual of 10-10 in the Euclidean norm. We first applied Newton's method on a coarse discretization (without curving the mesh and using order k = 2 finite element spaces), where as an initial guess we set xLiPF6 = (x⊖,p LiPF6 +x⊖,n LiPF6)/2 and xEMC = 1-2xLiPF6 (c.f. (2.17)) and we set all other unknowns to be zero. Convergence was reached in 6 iterations. We then used the coarse solution as an initial guess to Newton's method for the fine discretization (i.e. with a curved mesh and degree k = 4 finite element spaces, as above), for which convergence was reached in 3 iterations. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 21 Fig. 3. Streamlines of the LiPF6 flux NLiPF6 (colored by its magnitude) and current density J (colored in a transparent yellow) above Ωdisk for the numerical experiment of subsection 5.2. The disk is also colored by the magnitude of NLiPF6. In Figure 3 we plot streamlines of the LiPF6 flux and current density J. The rotation of the disk induces a swirling flow of LiPF6, while the current density flows smoothly from the positive electrode to the negative disk electrode. These observations qualitatively reflect the expected behaviour of the solution as induced by our choice of BCs. Moreover, the L2-errors in the (nondimensionalized) mass-average constraint, mole fraction constraint, and weakly enforced BCs (5.4) are vh -g ψ⊤ Z NZ,h L2(Ω)d = 1.3 × 10-3, 1 -ν⊤ Z xν,h L2(Ω) = 3.7 × 10-7, xh,LiPF6 -x⊖,e LiPF6 L2(Γp∪Γn) = 5.0 × 10-5. In particular, the small error in the weakly enforced BC (5.4) affirms the ability of our algorithm to handle a combination of flux, Dirichlet, and tangential velocity BCs. 5.3. Cosolvent imbalances. An appealing property of our numerical method is that it accounts for the multicomponent nature of the electrolyte solvent (i.e. the neutral species in which the salts are dissolved), the effects of which may be important for battery modelling [44]. To demonstrate the ability of our methods in studying these effects, we consider an electrolyte comprised of LiPF6 salt dissolved in two solvents, namely ethyl-methyl-carbonate (EMC) and ethylene-carbonate (EC). The resulting mixture has n = 4 species (EMC, EC, Li+ and PF6 -) with molar masses m = [104.105, 88.062, 6.935, 144.97]⊤g mol-1 and equivalent charges z = [0, 0, 1, -1]⊤. We use the salt-charge transformation matrix (recall (2.12)) Z =   1 0 0 0 0 1 0 0 0 0 1 1 0 0 1 √ 2 -1 √ 2  , 22 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE which corresponds to neutralizing reactions EMC -⇀ ↽-EMC, EC -⇀ ↽-EC and Li+ + PF6 - -⇀ ↽-LiPF6. Hence, under the salt-charge transformation, the mole fractions (xν)i, chemical potentials (μν)i, fluxes (Nν)i and so on, represent those of EMC for i = 1, EC for i = 2 and LiPF6 for i = 3. Fig. 4. Streamlines of the EMC flux NEMC (in black) and EC flux NEC (in white) from the simulation of subsection 5.3. The domain is colored by the shear viscosity η. For the material properties we take an ambient temperature of T = 298.15K. We let ρ be a function of xν as reported in [68]. An expression for η as a function of xν was obtained by fitting viscosity data reported in [68] to a degree 7 bivariate polynomial in the variables p xEMC/(xEMC + xEC), p xEC/(xEMC + xEC). For the bulk viscosity we took ζ = 10-6Pa s. The thermodynamic factor matrix X0 ν was obtained by assuming an ideal mixture. Finally, we computed MZ by assuming constant Stefan-Maxwell diffusivities, the values of which were obtained from the supplementary information of [63] for 1mol L-1 of LiPF6 in vol:vol 7:3 EMC:EC solvent. We take Ωto be the same two-dimensional Hull cell domain as in subsection 5.1. For BCs we use (3.11a) with gv∥= 0. For the normal flux BCs (3.11b) and (3.11c) we use linearized Butler-Volmer BCs (c.f. (3.3)) gJ = -i0 F RT (Ve -ΦZ) on Γe for e ∈{p, n}, (5.7a) gJ = 0 on Γw, (5.7b) g3 = gJ/(2F) on Γ, (5.7c) with Vp = 0.1V, Vn = 0V and i0 = 104Am-2. In (3.11b) we further take g2 = 0 on Γ, i.e. no normal flux of EC, while for EMC we impose the leak BCs (5.2). We consider the transient problem (4.1). As initial conditions we take a spatially uniform composition xLiPF6|t=0 = 0.077, xEMC|t=0 = 0.509 and xEC|t=0 = 1-0.509- (2 · 0.077) = 0.337 (c.f. (2.17)). For compositions in this regime the Reynolds and P ́eclet numbers for the problem are roughly Re = 3 × 10-5 and Pe = 9 × 10-1. We take all other unknowns v, p, Nν, J, ΦZ to be zero at t = 0. As constraints we impose (3.14) along with R Ωph dΩ= 0. We numerically solve (4.1) using the same mesh, finite element spaces (Vh, Ph) and (Nh, Xh), and time-stepping scheme as in the two-dimensional case of subsection 5.1. The nonlinear systems at each timestep consist of 1.7 × 106 unknowns and are solved using Newton's method with an absolute tolerance on the residual of 10-10 in the FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 23 Euclidean norm. The first three timesteps required 3 Newton iterations, and all subsequent timesteps required at most 2 iterations. Note that the initial Newton solves in subsection 5.1 required more iterations due to the nonlinear BCs (5.1a). In Figure 4 we plot streamlines of the EMC and EC fluxes NEMC, NEC at time t = 64800. The flux profiles are emphatically different, underscoring the multicomponent nature of the solvent. Moreover, in Figure 4 we color the domain by the shear viscosity η = η(xν), which changes by an order of magnitude across the cell due to the spatially varying EMC:EC ratio. In particular, at time t = 64800 the mole ratio xEMC/xEC ≈ 1.4 near the positive (left) electrode and xEMC/xEC ≈1.6 near the negative (right) electrode. It is the greater EMC content near the negative electrode that results in a substantially lower viscosity in this region. 6. Conclusions. We have presented a broad family of finite element algorithms for numerically solving the electroneutral NSOSM equations. To the best of our knowledge, this is the first paper in the finite element literature on electroneutral NSOSM flow. The flexibility of our algorithms in handling transient and steady flow under different boundary conditions was substantiated in our numerical experiments. Our numerical experiment involving EMC-EC-LiPF6 flow also demonstrated the scientific potential of our algorithms in studying how the multicomponent nature of electrolytes may impact, for example, locally varying material properties of the mixture. Appendix A. A problematic model. Consider a binary mixture on a fixed domain Ωwith molar concentrations c1, c2, total concentration cT = c1+c2 and molar fractions xi := ci/cT with x1 + x2 = 1. Let x := x1. We assume a volumetric EOS (A.1) cT = A + Bx, with A and B non-zero constants satisfying A > 0 and A+B > 0 so that cT > 0. This seemingly benign EOS assumes the total concentration is linear in composition; an experimentalist might reasonably make such an approximation by fitting experimental data. One can verify that the (non-constant) partial molar volumes are V1 = A + B(2x -1) (A + Bx)2 , V2 = A + 2Bx (A + Bx)2 and 1 cT = x1V1 + x2V2. Assume that the total number of moles of each species is conserved, so that d dt Z Ω c1 dΩ= 0, (A.2a) d dt Z Ω c2 dΩ= 0. (A.2b) Since cT = c1 + c2, we can add (A.2a) and (A.2b) and use (A.1) to deduce that (A.3) d dt Z Ω x dΩ= 0. Moreover, since c1 = xcT we can use (A.1), (A.3), and (A.2a) to deduce (A.4) d dt Z Ω x2 dΩ= 0. Since Ωdoes not depend on t, it follows from (A.3) and (A.4) that if x is initially spatially uniform, then x must be constant over all space and time. To see this, let ̄x 24 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE denote the spatial mean of x, i.e. ̄x := R Ωx dΩ R Ω1 dΩ. Then (A.3) implies that ̄x is constant over time. This fact together with (A.4) yields d dt Z Ω (x - ̄x)2 dΩ= d dt Z Ω x2 dΩ+ Z Ω -2x ̄x + ̄x2 dΩ ! = d dt Z Ω x2 dΩZ Ω ̄x2 dΩ ! = 0. Thus R Ω(x - ̄x)2 dΩ= C for some constant C. Since x = ̄x initially we deduce that C = 0. Therefore x = ̄x a.e. in Ωfor all time, and ̄x is a constant. This completes the proof that x is constant over all space and time (at least up to sets of measure zero). To conclude, any model which assumes EOS (A.1) and conservation properties (A.2), with Ωindependent of t, is seemingly either (i) ill-posed or (ii) well-posed, but incapable of making physically meaningful predictions, since a solution that initially has a spatially uniform composition retains that uniform composition over all time. REFERENCES [1] P. R. Amestoy, I. S. Duff, J.-Y. L'Excellent, and J. Koster, A fully asynchronous multifrontal solver using distributed dynamic scheduling, SIAM J. Matrix Anal. Appl., 23 (2001), pp. 15-41. [2] F. R. Aznaran, P. E. Farrell, C. W. Monroe, and A. J. Van-Brunt, Finite element methods for multicomponent convection-diffusion, IMA J. Numer. Anal., (2024), p. drae001. [3] A. Baier-Reinio and P. E. Farrell, High-order finite element methods for three-dimensional multicomponent convection-diffusion, arXiv preprint , (2024). [4] A. Baier-Reinio, P. E. Farrell, and C. W. Monroe, Software used in 'Finite element methods for electroneutral multicomponent electrolyte flows', 2025. [5] K. Balakrishnan, A. L. Garcia, A. Donev, and J. B. Bell, Fluctuating hydrodynamics of multispecies nonreactive mixtures, Phys. Rev. E, 89 (2014), p. 013017. [6] A. J. Bard and L. R. Faulkner, Electrochemical Methods: Fundamentals and Applications, John Wiley & Sons, New York, 2nd ed., 2001. [7] P. N. Bartlett, Bioelectrochemistry: Fundamentals, Experimental Techniques and Applications, John Wiley & Sons, Chichester, 2008. [8] J. Betteridge, P. E. Farrell, M. Hochsteger, C. Lackner, J. Sch ̈oberl, S. Zampini, and U. Zerbinati, ngsPETSc: A coupling between NETGEN/NGSolve and PETSc, J. Open Source Softw., 9 (2024), p. 7359. [9] A. K. Bhattacharjee, K. Balakrishnan, A. L. Garcia, J. B. Bell, and A. Donev, Fluctuating hydrodynamics of multi-species reactive mixtures, J. Chem. Phys., 142 (2015), p. 224107. [10] R. B. Bird, W. E. Stewart, and E. N. Lightfoot, Transport Phenomena, John Wiley & Sons, 2nd ed., 2002. [11] A. M. Bizeray, D. A. Howey, and C. W. Monroe, Resolving a discrepancy in diffusion potentials, with a case study for Li-ion batteries, J. Electrochem. Soc., 163 (2016), p. E223. [12] S. W. Boettcher, S. Z. Oener, M. C. Lonergan, Y. Surendranath, S. Ardo, C. Brozek, and P. A. Kempler, Potentially confusing: potentials in electrochemistry, ACS Energy Lett., 6 (2020), pp. 261-266. [13] D. Boffi, Three-dimensional finite element methods for the Stokes problem, SIAM J. Numer. Anal., 34 (1997), pp. 664-670. [14] L. Bortels, J. Deconinck, and B. Van Den Bossche, The multi-dimensional upwinding method as a new simulation tool for the analysis of multi-ion electrolytes controlled by FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 25 diffusion, convection and migration. Part 1. Steady state analysis of a parallel plane flow channel, J. Electroanal. Chem., 404 (1996), pp. 15-26. [15] M. Braukhoff, I. Perugia, and P. Stocker, An entropy structure preserving space-time formulation for cross-diffusion systems: analysis and Galerkin discretization, SIAM J. Numer. Anal., 60 (2022), pp. 364-395. [16] F. Brezzi, J. Douglas, and L. D. Marini, Two families of mixed finite elements for second order elliptic problems, Numer. Math., 47 (1985), pp. 217-235. [17] F. Brosa Planella, W. Ai, A. M. Boyce, A. Ghosh, I. Korotkin, S. Sahu, V. Sulzer, R. Timms, T. G. Tranter, M. Zyskin, et al., A continuum of physics-based lithium-ion battery models reviewed, Prog. Energy, 4 (2022), p. 042003. [18] A. Brunk, A. J ̈ungel, and M. Luk ́aˇcov ́a-Medvid'ov ́a, A structure-preserving numerical method for quasi-incompressible Navier-Stokes-Maxwell-Stefan systems, arXiv preprint , (2025). [19] E. Burman, A. Ern, and V. Giovangigli, Bunsen flame simulation by finite elements on adaptively refined, unstructured triangulations, Combust. Theory Model., 8 (2003), p. 65. [20] B. Carnes and G. F. Carey, Local boundary value problems for the error in FE approximation of non-linear diffusion systems, Internat. J. Numer. Methods Engrg., 73 (2008), pp. 665684. [21] C. I. Correa, G. N. Gatica, and R. Ruiz-Baier, New mixed finite element methods for the coupled Stokes and Poisson-Nernst-Planck equations in Banach spaces, ESAIM Math. Model. Numer. Anal., 57 (2023), pp. 1511-1551. [22] S. R. De Groot and P. Mazur, Non-Equilibrium Thermodynamics, Dover Publications, Inc., New York, 1984. [23] P. Deuflhard, Newton Methods for Nonlinear Problems, Springer Science & Business Media, Berlin, Heidelberg, 2011. [24] E. J. Dickinson and A. J. Wain, The Butler-Volmer equation in electrochemical theory: Origins, value, and practical application, J. Electroanal. Chem., 872 (2020), p. 114145. [25] A. Donev, A. Nonaka, A. K. Bhattacharjee, A. L. Garcia, and J. B. Bell, Low Mach number fluctuating hydrodynamics of multispecies liquid mixtures, Phys. Fluids, 27 (2015), p. 037103. [26] A. Donev, A. Nonaka, Y. Sun, T. Fai, A. Garcia, and J. Bell, Low Mach number fluctuating hydrodynamics of diffusively mixing fluids, Commun. Appl. Math. Comput. Sci., 9 (2014), pp. 47-105. [27] A. Donev, A. J. Nonaka, C. Kim, A. L. Garcia, and J. B. Bell, Fluctuating hydrodynamics of electrolytes at electroneutral scales, Phys. Rev. Fluids, 4 (2019), p. 043701. [28] W. Dreyer, C. Guhlke, and R. M ̈uller, Overcoming the shortcomings of the Nernst-Planck model, Phys. Chem. Chem. Phys., 15 (2013), pp. 7075-7086. [29] P.- ́E. Druet, Global-in-time existence for liquid mixtures subject to a generalised incompressibility constraint, J. Math. Anal. Appl., 499 (2021), p. 125059. [30] A. J. Ellingsrud, P. Benedusi, and M. Kuchta, A splitting, discontinuous Galerkin solver for the cell-by-cell electroneutral Nernst-Planck framework, SIAM J. Sci. Comput., 47 (2025), pp. B477-B504. [31] A. Ern and V. Giovangigli, Multicomponent Transport Algorithms, vol. 24, Springer Berlin, Heidelberg, 1994. [32] A. Ern and V. Giovangigli, Thermal diffusion effects in hydrogen-air and methane-air flames, Combust. Theory Model., 2 (1998), p. 349. [33] A. Ern and J.-L. Guermond, Finite element quasi-interpolation and best approximation, ESAIM Math. Model. Numer. Anal., 51 (2017), pp. 1367-1385. [34] A. Ern and J.-L. Guermond, Finite Elements I: Approximation and Interpolation, Springer, Cham, Switzerland, 2021. [35] A. Ern and J.-L. Guermond, Finite Elements II: Galerkin Approximation, Elliptic and Mixed PDEs, Springer, Cham, Switzerland, 2021. [36] P. E. Farrell, R. C. Kirby, and J. Marchena-Menendez, Irksome: Automating RungeKutta time-stepping for finite element methods, ACM Trans. Math. Softw., 47 (2021), pp. 1-26. [37] E. Feireisl, D. Hilhorst, H. Petzeltov ́a, and P. Tak ́aˇc, Mathematical analysis of variable density flows in porous media, J. Evol. Equ., 16 (2016), pp. 1-19. [38] A. Fick, ̈Uber Diffusion, Annalen der Physik, 170 (1855), pp. 59-86. [39] V. Giovangigli, Multicomponent Flow Modeling, Birkh ̈auser, Boston, 1999. [40] D. W. Green and R. H. Perry, Perry's Chemical Engineers' Handbook, McGraw Hill Professional, 8th ed., 2007. [41] E. A. Guggenheim, Thermodynamics: An Advanced Treatment for Chemists and Physicists, 26 AARON BAIER-REINIO, PATRICK E. FARRELL AND CHARLES W. MONROE North-Holland Books, Amsterdam, 5th ed., 1967. [42] D. A. Ham and 26 Others, Firedrake User Manual, 1st ed., 2023, https://doi.org/10.25561/ 104839. [43] E. Helfand, On inversion of the linear laws of irreversible thermodynamics, J. Chem. Phys., 33 (1960), pp. 319-322. [44] T. Jung, A. A. Wang, and C. W. Monroe, Overpotential from cosolvent imbalance in battery electrolytes: LiPF6 in EMC: EC, ACS omega, 8 (2023), pp. 21133-21144. [45] A. J ̈ungel, The boundedness-by-entropy method for cross-diffusion systems, Nonlinearity, 28 (2015), p. 1963. [46] A. J ̈ungel and O. Leingang, Convergence of an implicit Euler Galerkin scheme for PoissonMaxwell-Stefan systems, Adv. Comput. Math., 45 (2019), pp. 1469-1498. [47] R. C. Kirby and S. P. MacLachlan, Extending Irksome: improvements in automated RungeKutta time stepping for finite element methods, ACM Trans. Math. Softw., (2024). [48] G. Kraaijeveld, V. Sumberova, S. Kuindersma, and H. Wesselingh, Modelling electrodialysis using the Maxwell-Stefan description, Chem. Eng. J., 57 (1995), pp. 163-176. [49] G. Kraaijeveld and J. A. Wesselingh, Negative Maxwell-Stefan diffusion coefficients, Ind. Eng. Chem. Res., 32 (1993), pp. 738-742. [50] R. Krishna, Uphill diffusion in multicomponent mixtures, Chem. Soc. Rev., 44 (2015), pp. 2812-2836. [51] R. Krishna, Diffusing uphill with James Clerk Maxwell and Josef Stefan, Chem. Eng. Sci., 195 (2019), pp. 851-880. [52] R. Krishna and J. A. Wesselingh, The Maxwell-Stefan approach to mass transfer, Chem. Eng. Sci., 52 (1997), pp. 861-911. [53] A. Longo, M. Barsanti, A. Cassioli, and P. Papale, A finite element Galerkin/least-squares method for computation of multicomponent compressible-incompressible flows, Comput. & Fluids, 67 (2012), pp. 57-71. [54] J. C. Maxwell, On the dynamical theory of gases, Phil. Trans. R. Soc., (1866), pp. 49-88. [55] M. McLeod and Y. Bourgault, Mixed finite element methods for addressing multi-species diffusion using the Maxwell-Stefan equations, Comput. Meth. Appl. Mech. Eng., 279 (2014), pp. 515-535. [56] J.-C. N ́ed ́elec, A new family of mixed finite elements in R3, Numer. Math., 50 (1986), pp. 5781. [57] J. Newman and N. P. Balsara, Electrochemical Systems, John Wiley & Sons, Hoboken, NJ, 4th ed., 2021. [58] J. Newman, D. Bennion, and C. W. Tobias, Mass transfer in concentrated binary electrolytes, Ber. Bunsenges. Phys. Chem., 69 (1965), pp. 608-612. [59] L. Onsager, Reciprocal relations in irreversible processes. I., Phys. Rev., 37 (1931), pp. 405426. [60] L. Onsager, Reciprocal relations in irreversible processes. II., Phys. Rev., 38 (1931), pp. 22652279. [61] J.-P. P ́eraud, A. Nonaka, A. Chaudhri, J. B. Bell, A. Donev, and A. L. Garcia, Low Mach number fluctuating hydrodynamics for electrolytes, Phys. Rev. Fluids, 1 (2016), p. 074103. [62] B. A. Pethica, Are electrostatic potentials between regions of different chemical composition measurable? the Gibbs-Guggenheim principle reconsidered, extended and its consequences revisited, Phys. Chem. Chem. Phys., 9 (2007), pp. 6253-6262. [63] C. Phelan, J. Swallow, and R. Weatherup, Applying the Maxwell-Stefan diffusion framework to multicomponent battery electrolytes, ChemRxiv preprint, (2024). [64] A. Prohl and M. Schmuck, Convergent finite element discretizationsof the Navier-StokesNernst-Planck-Poisson system, ESAIM Math. Model. Numer. Anal., 44 (2010), pp. 531571. [65] P.-A. Raviart and J.-M. Thomas, A mixed finite element method for 2-nd order elliptic problems, in Mathematical Aspects of Finite Element Methods, vol. 606 of Lecture Notes in Math., Springer, Berlin, 1977, pp. 292-315. [66] G. W. Richardson, J. M. Foster, R. Ranom, C. P. Please, and A. M. Ramos, Charge transport modelling of Lithium-ion batteries, European J. Appl. Math., 33 (2022), pp. 9831031. [67] T. Roy, J. Andrej, and V. A. Beck, A scalable DG solver for the electroneutral NernstPlanck equations, J. Comput. Phys., 475 (2023), p. 111859. [68] R. Rungta, P. Slowikowski, A. Gardner, D. Persa, and C. W. Monroe, Quantifying volumetric expansion and bulk moduli of carbonate cosolvents with lithium salts, in preparation. FEM FOR ELECTRONEUTRAL MULTICOMPONENT FLOWS 27 [69] J. Sch ̈oberl, NETGEN: An advancing front 2D/3D-mesh generator based on abstract rules, Computing and Visualization in Science, 1 (1997), pp. 41-52. [70] L. R. Scott and M. Vogelius, Norm estimates for a maximal right inverse of the divergence operator in spaces of piecewise polynomials, ESAIM Math. Model. Numer. Anal., 19 (1985), pp. 111-143. [71] R. Sijabat, M. De Groot, S. Moshtarikhah, and J. Van Der Schaaf, Maxwell-Stefan model of multicomponent ion transport inside a monolayer Nafion membrane for intensified chlor-alkali electrolysis, J. Appl. Electrochem., 49 (2019), pp. 353-368. [72] I. Srivastava, D. R. Ladiges, A. J. Nonaka, A. L. Garcia, and J. B. Bell, Staggered scheme for the compressible fluctuating hydrodynamics of multispecies fluid mixtures, Phys. Rev. E, 107 (2023), p. 015305. [73] J. Stefan, ̈Uber das Gleichgewicht und die Bewegung, insbesondere die Diffusion von Gasgemengen, Sitzber. Akad. Wiss. Wien., 63 (1871), pp. 63-124. [74] Z. Sun, J. A. Carrillo, and C.-W. Shu, An entropy stable high-order discontinuous Galerkin method for cross-diffusion gradient flow systems, Kinet. Relat. Models, 12 (2019), pp. 885908. [75] C. Taylor and P. Hood, A numerical solution of the Navier-Stokes equations using the finite element technique, Comput. & Fluids, 1 (1973), pp. 73-100. [76] A. Van-Brunt, P. E. Farrell, and C. W. Monroe, Augmented saddle-point formulation of the steady-state Stefan-Maxwell diffusion problem, IMA J. Numer. Anal., 42 (2022), pp. 3272-3305. [77] A. Van-Brunt, P. E. Farrell, and C. W. Monroe, Consolidated theory of fluid thermodiffusion, AlChE J., 68 (2022), p. e17599. [78] A. Van-Brunt, P. E. Farrell, and C. W. Monroe, Structural electroneutrality in OnsagerStefan-Maxwell transport with charged species, Electrochim. Acta, 441 (2023), p. 141769. [79] V. K. Vanag and I. R. Epstein, Cross-diffusion and pattern formation in reaction-diffusion systems, Phys. Chem. Chem. Phys., 11 (2009), pp. 897-912. [80] A. A. Wang, A. B. Gunnarsd ́ottir, J. Fawdon, M. Pasta, C. P. Grey, and C. W. Monroe, Potentiometric MRI of a superconcentrated lithium electrolyte: testing the irreversible thermodynamics approach, ACS Energy Lett., 6 (2021), pp. 3086-3095. [81] A. A. Wang, T. Hou, M. Karanjavala, and C. W. Monroe, Shifting-reference concentration cells to refine composition-dependent transport characterization of binary lithium-ion electrolytes, Electrochim. Acta, 358 (2020), p. 136688. [82] G. Wanner and E. Hairer, Solving Ordinary Differential Equations II, vol. 375, Springer Berlin Heidelberg, 1996. [83] A. Z. Weber, R. L. Borup, R. M. Darling, P. K. Das, T. J. Dursch, W. Gu, D. Harvey, A. Kusoglu, S. Litster, M. M. Mench, et al., A critical review of modeling transport phenomena in polymer-electrolyte fuel cells, J. Electrochem. Soc., 161 (2014), p. F1254. [84] A. Z. Weber and C. Delacourt, Mathematical modelling of cation contamination in a protonexchange membrane, Fuel Cells, 8 (2008), pp. 459-465. [85] J. Wesselingh and R. Krishna, Mass Transfer in Multicomponent Mixtures, Delft University Press, Delft, Netherlands, 2000. [86] D. Xie and B. Lu, An effective finite element iterative solver for a Poisson-Nernst-Planck ion channel model with periodic boundary conditions, SIAM J. Sci. Comput., 42 (2020), pp. B1490-B1516.
2510.14921
Sound Masking Strategies for Interference with Mosquito Hearing Justin Faber∗ Department of Physics & Astronomy, University of California, Los Angeles, California, USA Alexandros C Alampounti and Marcos Georgiades University College London Ear Institute: London, UK Joerg T Albert University College London Ear Institute: London, UK and Cluster of Excellence Hearing4all, Sensory Physiology & Behaviour Group, Department for Neuroscience, School of Medicine and Health Sciences, Carl von Ossietzky Universit¨at Oldenburg, Oldenburg, Germany Dolores Bozovic Department of Physics & Astronomy, University of California, Los Angeles, California, USA and California NanoSystems Institute, University of California, Los Angeles, California, USA (Dated: October 17, 2025) The use of auditory masking has long been of interest in psychoacoustics and for engineering purposes, in order to cover sounds that are disruptive to humans or to species whose habitats overlap with ours. In most cases, we seek to minimize the disturbances to the communication of wildlife. However, in the case of pathogen-carrying insects, we may want to maximize these disturbances as a way to control populations. In the current work, we explore candidate masking strategies for a generic model of active auditory systems and a model of the mosquito auditory system. For both models, we find that masks with all acoustic power focused into just one or a few frequencies perform best. We propose that masks based on rapid frequency modulation are most effective for maximal disruption of information transfer and minimizing intelligibility. We hope that these results will serve to guide the avoidance or selection of possible acoustic signals for, respectively, maximizing or minimizing communication. I. INTRODUCTION When unwanted sound cannot be sufficiently atten- uated or actively canceled, masking strategies are of- ten used to obscure details and provide a more uniform acoustic environment [1, 2]. For applications aimed at minimizing distraction or improving sleep, using filtered noise as a sound mask is the typical strategy. Other applications include the masking of industrial noises in order to protect species susceptible to sound pollution [3–5]. However, in some cases, we may want to maxi- mize the disruption to animal behavior, particularly for species that are invasive or otherwise harmful. Mosquitoes pose a significant threat to tropical and subtropical regions, resulting in over 700,000 annual hu- man deaths and costing billions of dollars in healthcare expenses [6]. Though advances have been made in de- veloping pesticides and alternative strategies, the num- ber of annual deaths continues to rise [7]. Further, the rising global temperatures have led to the migration of the deadliest mosquito species to regions that were pre- viously uninhabitable to them [8]. The need for addi- tional control strategies is therefore evident. Some pos- sible intervention strategies entail disrupting the mating or blood-feeding processes of these insects [9, 10]. ∗faber@physics.ucla.edu Mosquitoes mate at dusk, when males form swarms and seek out nearby females. Successful mating is de- pendent on the ability of a male to detect the sound of a female beating her wings. The male feather-like flagella are tuned approximately to the female’s wingbeat fre- quency. Surprisingly, the active neuronal elements of the Johnston’s organ are tuned to a much lower frequency [11]. The nonlinear interaction between the female flight tone and the male’s own wingbeats produce distortion products, which fall within the bandwidth of detection of the neuronal elements [12–15]. This counterintuitive de- tection strategy has recently been modeled and proposed to be advantageous in a theoretical study [16]. In the current work, we utilize theoretical models of mosquito hearing to find the optimal class of acoustic masks that would prevent a male from successfully detecting a fe- male. Measuring the performance of a biological detector re- lies on assumptions as to which components of the signal are meaningful. We must assume that the amplitude, fre- quency, phase, or some combination of these properties carries biologically relevant information. Alternatively, it could be modulations in these values that conveys in- formation necessary for auditory detection. The goal of this study is not to determine the optimal sound mask for a specific species, but rather to find the optimal class of masks that would be generally effective in blocking active hearing. Therefore, we employ transfer entropy, an in- arXiv:2510.14921v1 [physics.bio-ph] 16 Oct 2025 2 formation theoretic measure that quantifies information imparted from one process to another [17]. The transfer entropy does not rely on any assumptions of which com- ponents of the signal are meaningful. Rather, it reflects how much information about the stimulus is carried in the response of the receiver. We note that sound detection and localization, along with sound scattering through the surrounding environ- ment are all complex processes, which certainly con- tribute to shaping the optimal acoustic mask [18–21]. However, as these are specific to each environment and application of interest, we omit them from this study and focus on a single detector capturing a scalar signal. We do not discuss directional information that can be inferred from having multiple sensory elements, or by sensing the vector-valued velocity field of sound [22, 23]. With this limitation established, we frame our problem as follows. Given a target acoustic signal of interest to an active auditory detector, what is the optimal acoustic mask that minimizes the information captured? Candi- date masking signals can take any form we choose, with the constraint of fixed power input. We use a Hopf oscillator as a generic model of audi- tory detection [24–28], and a recently proposed model specific to the mosquito auditory system [16], based on a pair of Hopf oscillators and tuning to distortion prod- ucts. We first present the performance of several mask categories: filtered noise, multi-tone signals, and stim- uli with amplitude or frequency modulation. We then present a comparison of the best masks from each cate- gory. For both numerical models, we find filtered noise to be surprisingly ineffective, despite its common use in masking distraction-inducing sounds. For both models, we instead find that masks comprised of a single tone with constant amplitude and rapidly varying frequency perform best. Though these masks would likely be too unpleasant for human applications, they may provide acoustic-based strategies for interfering with the commu- nication of pathogen carrying mosquitoes, as well as other unwanted species. II. HOPF OSCILLATOR (MODEL 1) The active auditory system of vertebrates has long been described by a system poised near a critical point, on the verge of instability [26]. In this regime, the sys- tem is highly sensitive to weak signals. This framework is elegantly captured by the normal form equation for the supercritical Hopf bifurcation [24, 25, 28]. The state of this detector is described by a complex variable z(t). The dynamics are governed by the first-order differential equation, dz dt = (µ + iω0)z −|z|2z + Ff(t) + Fmask(t), (1) where Ff(t) is the complex-valued signal of interest and Fmask(t) represents the masking signal. In the absence of external stimulus (Ff(t) = Fmask(t) = 0), this system exhibits autonomous limit-cycle oscillations for µ > 0 and quiescent behavior for µ < 0. The interface between these two regimes (µ = 0) defines the supercritical Hopf bifurcation. This system exhibits highest sensitivity near this bifurcation point. Autonomous oscillations occur at angular frequency, ω0, which coincides with the frequency of maximal sensitivity. Unless otherwise stated, we use ω0 = 2π and µ = 0.1, poising the system on the oscilla- tory side of the bifurcation. III. HOPF ⇒HOPF MOSQUITO MODEL (MODEL 2) Insect hearing has also been shown to employ active processes [29–33]. We recently proposed a numerical model for the auditory system of the male mosquito [16]. This model is comprised of two Hopf oscillators, one gov- erning the mechanical tuning of the flagellum and the other governing the electrical tuning of the active neu- ral elements. This model incorporates the evidence that these two tuning curves do not coincide. Rather, the elec- trical tuning aligns with a nonlinear distortion product produced by the simultaneous detection of the male and female wingbeats [12–15]. The model takes the form dz1 dt = (µ1 + iω1)z1 −|z1|2z1 + Ff(t) + Fm(t) + Fmask(t) dz2 dt = (µ2 + iω2)z2 −|z2|2z2 + z1(t), (2) where z1(t) and z2(t) represent the states of the me- chanical and electrical components, respectively. Fm(t) is the complex-valued stimulus from the male mosquito detecting his own wingbeats. Throughout this study, we keep both active elements in the oscillatory regime (µ1 = µ2 = 0.1). The acoustic signal of interest, Ff(t) is confined to a narrow bandwidth, centered near the characteristic fre- quencies ω0 = ω1 = 2π. For the mosquito model, we must also include the male’s own flight tone. Therefore, we use a pure tone of the form Fm(t) = ei3πt, where we fix the amplitude at unity and use a frequency approx- imately 50% higher than the female flight tones, consis- tent with experimental measurements [15]. The combi- nation of male and female flight tones produces a nonlin- ear distortion product that falls near the characteristic frequency of the second oscillation (ω2 = π). IV. TARGET SIGNAL & TRANSFER ENTROPY One limitation of our simplified framework, which fo- cuses on a single detector with one spatial degree of free- dom, is that it cannot extract directional information 3 from the signals. This simplification, however, allows us to focus on just one task, namely, minimizing the detec- tion of Ff(t). Most common measures of detection sensi- tivity rely on assumptions of what properties of the signal carry biologically relevant information. Vector strength assumes that this information lies in the phase response of the detector. The linear response function assumes that the absolute amplitude of the response at the stim- ulus frequency carries meaningful information. Ampli- tude gain assumes the change in amplitude to be the key characteristic of a detector. To avoid these assumptions, we use the transfer entropy as our measure of detection. The transfer entropy quantifies how much information about the stimulus can be inferred from looking at the response of the detector (see Appendix A). In order to use this measure appropriately, we must use a target sig- nal that constantly produces new information with time. We therefore use a sinusoid with fixed amplitude and stochastically modulated frequency, as this is the sim- plest nonstationary signal that approximates the flight tones of insects. Our target signal takes the form, Ff(t) = f0eiϕ(t) (3) ϕ(t) = Z t −∞  ω0 + η(t)  dt, (4) where ω0 = ω1 = 2π is the mean instantaneous frequency and η(t) is a stochastic variable with standard deviation 0.2 × ω0. We consider slow, smooth modulations in this variable by using pink-noise statistics. The power spec- trum of η(t) is nonzero and uniform from 0 to ωmask and zero elsewhere. In FIG. 1, we show an example of a can- didate masking signal and how it influences the response of each of the two models to the target signal. To interfere with detection, we develop a masking strategy based on our knowledge of the statistics of the target signal. We know the mean and standard devia- tion of η(t), but not the values at each point in time. If we knew the full time series, we could construct a mask- ing signal that perfectly cancels the target signal. Like- wise, for blocking communication between mosquitoes, we know only the statistics of the tones at which they communicate. With our target signal defined, and our goal focused on minimizing transfer entropy from Ff(t) to z(t) (model 1) or Ff(t) to z2(t) (model 2), we now define our normal- ization constant for the masking signal. Detection can be masked trivially by increasing the amplitude of the mask arbitrarily high. To avoid trivial solutions and to find a mask that can be effective for the greatest distance from the acoustic masking source, we impose the constraint of fixed power input, Pm = Z ∞ −∞ F ∗ mask(τ)Fmask(τ)dτ, (5) -1 1 [z(t)] mask off -1 1 [z2(t)] mask off -1 1 [Fmask(t)] -1 1 [z(t)] mask on Time -1 1 [z2(t)] mask on 0.01 0.1 1 0.01 0.1 1 0.01 0.1 1 0.01 0.1 1 2 0 m Frequency 1 PSD FIG. 1. Demonstration of the effects of a filtered-noise mask. The target signal and masking signal are shown in black and pink, respectively. The responses of model 1 (blue) and model 2 (orange) are shown in both the time and frequency domains. The top two rows show the responses in the absence of the masking signal, while the bottom two rows show the responses in the presence of the masking signal. where ∗denotes the complex conjugate. We also define the mask-to-signal ratio to be the power ratio of the mask to the target signal, Pm/f 2 0 . Unless otherwise stated, we use f0 = 0.3 and a mask-to-signal ratio of 1. We now explore several categories of possible masking strategies. V. FILTERED-NOISE MASKS A natural starting point for designing an acoustic mask is to attempt to corrupt communication using stochas- tic white noise. However, since our target signal is con- fined to a bandwidth surrounding ω0, we may want to filter this noise, so as to focus more acoustic power onto the bandwidth of communication. To produce a filtered- noise mask, we first generate Gaussian white noise. We then apply a bandpass, Gaussian-window filter in the frequency domain, where ω and σω are the mean and standard deviation of the window function. Finally, we transform back to the time domain and rescale the signal variance to be Pm. In FIG. 2, we show how the trans- fer entropy for both models depends on ω and σω. Note that in the limit of σω →∞, the mask is Gaussian white noise. However, in the limit of σω →0, the mask is a pure tone. Surprisingly, we find that the transfer entropy is not minimized when the bandwidth of the mask matches the 4 0.5 1.0 1.5 / 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, = 0 M2, = 0 M1, = 0.1 0 M2, = 0.1 0 M1, = 0.2 0 M2, = 0.2 0 0.00 0.25 0.50 0.75 1.00 / 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, = 0 M2, = 0 M1, = 1.1 0 M2, = 1.1 0 M1, = 2 M2, = 2 FIG. 2. Filtered-noise mask. The effects of the masking parameters, ω and σω, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). bandwidth of the target signal. Instead, for both mod- els, minimal transfer entropy is found when σω = 0 and ω ≈ω0, indicating that a pure tone near the center fre- quency is more effective at corrupting communication than stochastic noise. For model 2, there is an addi- tional minimum at ω ≈ω2, indicating that an effective masking strategy could entail corrupting communication at either characteristic frequency of the two oscillators. For mosquitoes, this regime corresponds to focusing the acoustic power near the characteristic frequency of either the flagellum or the neuronal elements. VI. TWO-TONE MASKS We now consider acoustic masks comprised of just two pure tones, rather than a dense bandwidth of frequencies. Since a pure tone mask was most effective in the previ- ous section, a natural extension is to test if there is an advantage to splitting the power into two frequency com- ponents. We consider masks with the functional form, Fmask(t) = A1eiΩ1t + A2eiΩ2t, (6) where A1 and A2 represent the amplitudes, and Ω1 and Ω2 the frequencies of the two tones. Without loss of generality, we let A1 ≥A2. We fix Ω1 = 1.01×ω0, setting the stronger tone near the characteristic frequency. We then vary the ratio, A2/A1, and Ω2 and find the effects on the transfer entropy. In FIG. 3, we observe that, for both models, there are shallow minima in the plots of 0.5 1.0 1.5 2/ 0 0.00 0.02 0.04 0.06 0.08 0.10 Transfer entropy (bits) M1, A2/A1 = 0.1 M2, A2/A1 = 0.1 M1, A2/A1 = 0.5 M2, A2/A1 = 0.5 M1, A2/A1 = 1 M2, A2/A1 = 1 0.00 0.25 0.50 0.75 1.00 A2/A1 0.00 0.02 0.04 0.06 0.08 0.10 Transfer entropy (bits) M1, 2 = 0.975 0 M2, 2 = 0.975 0 M1, 2 = 1.25 0 M2, 2 = 1.25 0 M1, 2 = 2 M2, 2 = 2 FIG. 3. Two-tone mask. The effects of the masking param- eters, Ω2 and A2/A1, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). transfer entropy as a function of A2/A1, with hardly any advantage gained by adding a second tone. VII. AMPLITUDE-MODULATION (AM) MASKS We now consider AM sound masks, where we sinu- soidally modulate the amplitude of a stimulus tone near the characteristic frequency. This mask takes the form, Fmask(t) = A0 + Amod sin(ωmodt)  eiΩt, (7) where Ω= 1.01 × ω0. This mask can also be regarded as a 3-tone stimulus. Using trigonometric identities, we can express Fmask(t) as a sum of three pure tones at frequen- cies Ω, Ω−ωmod, and Ω+ωmod. We vary ωmod and Amod and find the effects on the transfer entropy. Note that our constraint of fixed mask power uniquely determines A0 for every choice of Amod. We find that amplitude modu- lations do not noticeably improve the effectiveness of the mask (FIG. 4). For model 1, there is once again a very shallow minimum near Amod = 0.1. However, the mini- mal transfer entropy for model 2 is found at Amod = 0. VIII. FREQUENCY-MODULATION (FM) MASKS We now explore masks based on frequency modulation. We consider the case where all of the acoustic power is 5 0.00 0.25 0.50 0.75 1.00 mod/ 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, Amod = 0.1 M2, Amod = 0.1 M1, Amod = 0.2 M2, Amod = 0.2 M1, Amod = 2Pm M2, Amod = 2Pm 0.0 0.1 0.2 0.3 0.4 Amod 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, mod = 0.05 0 M2, mod = 0.05 0 M1, mod = 0.3 0 M2, mod = 0.3 0 M1, mod = 2 M2, mod = 2 FIG. 4. Amplitude-modulation (AM) mask. The effects of the masking parameters, ωmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). focused into one frequency, but that frequency is modu- lated with time, so as to corrupt a wider bandwidth. We use a sound mask of the form, Ff(t) = p Pmeiψ(t) (8) ψ(t) = Z t −∞  Ω+ m(t)  dt, (9) where Ω= ω0 = ω1 is the carrier frequency and m(t) is the modulator. We consider periodic modulators that follow a power-law increase, m(t) = Amod(2fmodt −1)|2fmodt −1|α−1, (10) over each modulation period, t = n fmod to t = n+1 fmod for any n. Note that m(t) spans −Amod to +Amod over each modulation period. α is a real-valued, non-negative free parameter char- acterizing the power-law growth. For large values of α, the frequency sweep spends more time near the cen- ter frequency, ω0, while for small values, the sweep spends more time time near the end points of the sweep, ω0 ± Amod. For α = 1, the frequency increases linearly from ω0 −Amod to ω0 + Amod and then repeats. This is known as sawtooth frequency modulation. For α = 0, m(t) becomes a square wave, which abruptly switches be- tween the two frequency extrema. We vary α, Amod, and fmod so as to test a wide variety of frequency modulators. 0.00 0.05 0.10 0.15 0.20 Amod/ 0 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Transfer entropy (bits) M1, = 0.5 M2, = 0.5 M1, = 1 M2, = 1 M1, = 2 M2, = 2 0 1 2 3 4 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Transfer entropy (bits) M1, Amod = 0.02 0 M2, Amod = 0.02 0 M1, Amod = 0.04 0 M2, Amod = 0.04 0 M1, Amod = 0.06 0 M2, Amod = 0.06 0 FIG. 5. Power-law frequency-modulation (FM) mask. The effects of the masking parameters, ωmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). In FIG. 5, we show the effects of varying Amod and α. The transfer entropy is minimal with small modulations in the stimulus frequency. Further, the results are rather insensitive to the value of α. In Appendix B, we show the results from FM masks with sinusoidal, sawtooth, and square-wave modulators. We further show the effects of varying fmod. IX. COMPARISON OF ALL MASK TYPES Having explored a variety of sound masks and identi- fied the optimal parameter ranges, we now compare the best masks from each of the categories. We note that all tests so far were performed with a mask power equal to that of the target signal. Therefore, our mask-to-signal ratio has so far been Pm/f 2 0 = 1. However, this ratio will differ from 1, depending on the application, the distance from the sound sources, and the acoustic power avail- able. Therefore, we vary the signal-to-mask ratio and assess the performance of the optimal mask from each of the four categories (FIG. 6). For comparison, we also show the performance of masks comprised of white noise and bandpass filtered noise. We find that white noise and filtered noise perform poorly as sound masks in terms of transfer entropy re- duction. However, the most effective masks are those with all of the acoustic power focused into just one or a few frequencies at any given time. We find FM masks to be the most effective, though the pure tone, two tone, and 6 0 1 2 3 4 5 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Transfer entropy (bits) A white noise filtered noise pure tone 2 tone AM FM 0 1 2 3 4 5 Mask-to-signal ratio 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Transfer entropy (bits) B FIG. 6. Comparison across mask strength. Transfer entropy dependence on masking strength for model 1 (A) and model 2 (B). Mask parameters in (A) are ω = ω0 and σω = ∞ (for white noise), ω = ω0 and σω = 0.3ω0 (for filtered noise), ω = ω0 and σω = 0 (for pure tone), Ω1 = 1.01ω0, Ω2 = 1.25ω0, and A2/A1 = 0.25 (for 2 tone), ωmod = 0.3ω0 and Amod = 0.07 (for AM), ωmod = 0.22ω0 and Amod = 0.08ω0 (for sinusoidal FM). Mask parameters in (B) are ω = ω0 and σω = ∞(for white noise), ω = ω0 and σω = 0.3ω0 (for filtered noise), ω = 0.975ω0 and σω = 0 (for pure tone), Ω1 = 1.01ω0, Ω2 = 0.975ω0, and A2/A1 = 0.25 (for 2 tone), ωmod = 0.5ω0 and Amod = 0.1 (for AM), ωmod = 0.5ω0 and Amod = 0.25ω0 (for sinusoidal FM). AM masks are comparable. It is also worth noting that for all mask types and for all mask-to-signal ratios, the transfer entropy to model 2 is greater than that of model 1. This highlights the high sensitivity and robustness of the detection scheme employed by male mosquitoes [16]. Despite the loss in signal power associated with using dis- tortion products instead of primary tones, the Hopf ⇒ Hopf detector captures more information from external signals than does the single Hopf oscillator, even in the presence of sound masking. X. DISCUSSION Previous studies have attempted to reduce mosquito populations using acoustic traps [10]. The strategy be- hind this approach is to lure male mosquitoes by pre- senting artificial female flight tones. These methods have shown promising results in both laboratory [34] and field [35–39] settings, with improved success when combined with carbon dioxide release and using live animals as bait. There have also been attempts to control the behavior of mosquitoes using disruptive sounds, with studies show- ing that mating and blood-feeding activity are both re- duced in the presence of loud music [40, 41]. However, the practicality of these strategies for widespread use re- mains unclear, as does the nature of the most effective sound for disrupting mosquito hearing. In the current study, we aimed to answer the question of which sound mask is theoretically most effective. We used a Hopf oscillator as a generic model of active addi- tory systems. We also used a recently proposed numerical model of the mosquito auditory system. We compared several classes of masking strategies on the two models, showing that rapid frequency sweeps are most effective in corrupting signal detection. Further, we varied the mask- ing power, so as to explore the robustness of these results for various application. We found that, in all regimes, it is favorable to have all of the acoustic energy focused into just one or a few frequencies at any given time. Further improvement can be seen with a single tone of rapidly varying frequency. This method is known in radar jamming science as bar- rage jamming [42]. This is an effective strategy that al- lows the jammer to corrupt a wide bandwidth by rapidly modulating one stimulus frequency. Consistent with these prior applications, we found it to be the most effec- tive mask, for both models of the auditory system. Inter- estingly, the song used in a previous study for disrupting mosquito behavior contains frequency sweeps [40]. An- other advantage of the frequency sweep is that the statis- tics of target signal do not need to be precisely known, as long as its bandwidth overlaps with the range of the fre- quency sweep. Mosquito flight tones and the frequency of highest sensitivity vary between species [15], and even vary with time, following a circadian cycle [43]. Hence, this insensitivity to precise masking parameters could be useful for designing acoustic masks. We therefore propose that the use of rapid acoustic frequency sweeps could pro- vide a practical solution for controlling the mating and blood-feeding behavior of mosquitoes. We note that the masks presented in this study repre- sent only a small subset of all functional forms possible. Because bandwidth noise masks proved ineffective, and the addition of a second or third pure tone did not show much improvement over a single tone, we were led to rule out masks that do not have all power focused into a single frequency for any short interval of time. This left us with just one class of mask, namely, FM masks described by Eqs. 8 and 9. However, we note that more 7 effective masks may exist. Future work entails exploring masks that have both a modulated amplitude and fre- quency. Other masks that could be explored entail FM mask in which the modulator abruptly jumps between many frequencies, possibly in a stochastic manner. Fu- ture work also entails using machine learning algorithms to speed up the exploration of the vast parameter space of masking signals, with the challenge of parameterizing a broadest range of signal classes while using the fewest pa- rameters possible. While future improvements may fur- ther enhance the ability to corrupt auditory detection by mosquitoes, the current work demonstrates that signals with rapid frequency modulation may provide a plausible approach for population control of harmful species. ACKNOWLEDGMENTS This work was supported by a grant from the Biotech- nology and Biological Sciences Research Council, UK (BBSRC, BB/V007866/1 to J.T.A.) and a grant from The Human Frontier Science Program (HFSP grant RGP0033/2021 to J.T.A. and D.B.). DATA AVAILABILITY The Python code for performing the analysis and gen- erating the figures is publicly available online: [44]. [1] G. Kidd, C. R. Mason, V. M. Richards, F. J. Gallun, and N. I. Durlach, Informational Masking, in Auditory Perception of Sound Sources, edited by W. A. Yost, A. N. Popper, and R. R. Fay (Springer US, 2008) pp. 143–189. [2] H. ˚A. Gustafsson and S. D. Arlinger, Masking of speech by amplitude-modulated noise, JASA 95, 518 (1994). [3] C. W. Clark, W. T. Ellison, B. L. Southall, L. Hatch, S. M. V. Parijs, A. Frankel, and D. Ponirakis, Acoustic masking in marine ecosystems: Intuitions, analysis, and implication, Mar. Ecol. Prog. Ser. 395, 201 (2009). [4] A. K. D. Schmidt and R. Balakrishnan, Ecology of acous- tic signalling and the problem of masking interference in insects, J. Comp. Physiol. A 201, 133 (2015). [5] E. P. Derryberry, R. M. Danner, J. E. Danner, G. E. Derryberry, J. N. Phillips, S. E. Lipshutz, K. Gentry, and D. A. Luther, Patterns of Song across Natural and An- thropogenic Soundscapes Suggest That White-Crowned Sparrows Minimize Acoustic Masking and Maximize Sig- nal Content, PLOS ONE 11, e0154456 (2016). [6] W. H. Organization, Vector-borne diseases (2020). [7] W. H. Organization, Global report on insecticide resis- tance in malaria vectors: 2010–2016 (2018). [8] J. P. Messina, O. J. Brady, D. M. Pigott, N. Golding, M. U. G. Kraemer, T. W. Scott, G. R. W. Wint, D. L. Smith, and S. I. Hay, The many projected futures of dengue, Nat. Rev. Microbiol. 13, 230 (2015). [9] M. P. Su, M. Georgiades, J. Bagi, K. Kyrou, A. Crisanti, and J. T. Albert, Assessing the acoustic behaviour of Anopheles gambiae (s.l.) dsxF mutants: implications for vector control, Parasit. Vectors 13, 507 (2020). [10] M. Andr´es, M. P. Su, J. Albert, and L. J. Cator, Buzzkill: Targeting the mosquito auditory system, Curr. Opin. In- sect Sci. 40, 11 (2020). [11] K. S. Boo and A. G. Richards, Fine structure of the scolo- pidia in the johnston’s organ of male Aedes aegypti (L.) (Diptera: Culicidae), Int. J. Insect Morphol. Embryol. 4, 549 (1975). [12] B. Warren, G. Gibson, and I. J. Russell, Sex Recognition through Midflight Mating Duets in Culex Mosquitoes Is Mediated by Acoustic Distortion, Curr. Biol. 19, 485 (2009). [13] P. M. V. Sim˜oes, R. A. Ingham, G. Gibson, and I. J. Rus- sell, A role for acoustic distortion in novel rapid frequency modulation behaviour in free-flying male mosquitoes, J. Exp. Biol. 219, 2039 (2016). [14] P. M. Sim˜oes, R. Ingham, G. Gibson, and I. J. Rus- sell, Masking of an auditory behaviour reveals how male mosquitoes use distortion to detect females, Proc. Roy. Soc. B, Biol. Sci. 285, 11 (2018). [15] M. P. Su, M. Andr´es, N. Boyd-Gibbins, J. Somers, and J. T. Albert, Sex and species specific hearing mech- anisms in mosquito flagellar ears, Nat. Commun. 9, 10.1038/s41467-018-06388-7 (2018). [16] J. Faber, A. C. Alampounti, M. Georgiades, J. T. Al- bert, and D. Bozovic, A mosquito-inspired theoretical framework for acoustic signal detection, PNAS 122, e2500938122 (2025). [17] T. Schreiber, Measuring information transfer, Phys Rev Lett 85, 461 (2000). [18] B. J. Arthur, K. S. Emr, R. A. Wyttenbach, and R. R. Hoy, Mosquito ( Aedes aegypti ) flight tones: Frequency, harmonicity, spherical spreading, and phase relation- ships, J. Acoust. Soc. Am. 135, 933 (2014). [19] J.-H. Seo, T. L. Hedrick, and R. Mittal, Mechanism and scaling of wing tone generation in mosquitoes, Bioinspir. Biomim. 15, 016008 (2019). [20] H. R¨omer, Directional hearing in insects: Biophysical, physiological and ecological challenges, J. Exp. Biol. 223, jeb203224 (2020). [21] J. Faber, A. C. Alampounti, M. Georgiades, J. T. Albert, and D. Bozovic, Antennal-Based Strategies for Sound Lo- calization by Insects (2025). [22] H. C. Bennet-Clark, Acoustics of Insect Song, Nature 234, 255 (1971). [23] H. C. Bennet-Clark, Size and Scale Effects as Constraints in Insect Sound Communication, Philos. Trans. Biol. Sci. 353, 407 (1998). [24] Y. Choe, M. O. Magnasco, and A. J. Hudspeth, A model for amplification of hair-bundle motion by cyclical bind- ing of Ca2+ to mechanoelectrical-transduction channels, PNAS 95, 15321 (1998). [25] V. M. Egu´ıluz, M. Ospeck, Y. Choe, A. J. Hudspeth, and M. O. Magnasco, Essential Nonlinearities in Hearing, 8 Phys. Rev. Lett. 84, 5232 (2000). [26] A. J. Hudspeth, Integrating the active process of hair cells with cochlear function, Nat. Rev. Neurosci. 15, 600 (2014). [27] T. Reichenbach and a. J. Hudspeth, The physics of hear- ing: Fluid mechanics and the active process of the inner ear., Rep. Prog. Phys. 77, 076601 (2014). [28] R. G. Alonso, F. Gianoli, B. Fabella, and A. J. Hudspeth, Amplification through local critical behavior in the mam- malian cochlea, PNAS 122, e2503389122 (2025). [29] M. C. G¨opfert and D. Robert, Active Processes in Insect Hearing, in Active Processes and Otoacoustic Emissions in Hearing, Springer Handbook of Auditory Research, edited by G. A. Manley, R. R. Fay, and A. N. Popper (Springer, New York, NY, 2008) pp. 191–209. [30] M. C. G¨opfert and R. M. Hennig, Hearing in Insects, Annu. Rev. Entomol. 61, 257 (2016). [31] B. Nadrowski, T. Effertz, P. R. Senthilan, and M. C. G¨opfert, Antennal hearing in insects - New findings, new questions, Hear. Res. 273, 7 (2011). [32] N. Mhatre, Active amplification in insect ears: Mechan- ics, models and molecules, J. Comp. Physiol. A 201, 19 (2015). [33] J. T. Albert and A. S. Kozlov, Comparative Aspects of Hearing in Vertebrates and Insects with Antennal Ears, Curr. Biol. 26, R1050 (2016). [34] S. S. Jakhete, S. A. Allan, and R. W. Mankin, Wingbeat Frequency-Sweep and Visual Stimuli for Trapping Male Aedes aegypti (Diptera: Culicidae), J. Med. Entomol. 54, 1415 (2017). [35] K.-i. Ogawa, Field Study on Acoustic Trapping of Man- sonia (Diptera : Culicidae) in Malaysia I. Mass-Trapping of Males by a Cylindrical Sound Trap, Appl. Entomol. Zool. 23, 265 (1988). [36] T. Ikeshoji and H. Yap, Impact of the insecticide-treated sound traps on an Aedes albopictus population, Med. Entomol. Zool. 41, 213 (1990). [37] C. M. Stone, H. C. Tuten, and S. L. Dobson, Deter- minants of Male Aedes aegypti and Aedes polynesiensis (Diptera: Culicidae) Response to Sound: Efficacy and Considerations for Use of Sound Traps in the Field, J. Med. Entomol. 50, 723 (2013). [38] B. J. Johnson and S. A. Ritchie, The Siren’s Song: Ex- ploitation of Female Flight Tones to Passively Capture Male Aedes aegypti (Diptera: Culicidae), J. Med. Ento- mol. 53, 245 (2016). [39] B. B. Rohde, K. M. Staunton, N. C. Zeak, N. Beebe, N. Snoad, A. Bondarenco, C. Liddington, J. A. Anderson, W. Xiang, R. W. Mankin, and S. A. Ritchie, Waterproof, low-cost, long-battery-life sound trap for surveillance of male Aedes aegypti for rear-and-release mosquito control programmes, Parasit. Vectors 12, 417 (2019). [40] H. Dieng, C. C. The, T. Satho, F. Miake, E. Wydiamala, N. F. A. Kassim, N. A. Hashim, R. E. Morales Vargas, and N. P. Morales, The electronic song “Scary Monsters and Nice Sprites” reduces host attack and mating success in the dengue vector Aedes aegypti, Acta Tropica 194, 93 (2019). [41] C. Y. Ni, N. F. A. Kassim, N. M. Ayub, S. A. Abuelmaali, A. M. Mashlawi, and H. Dieng, Impact of diverse musical genres on blood-feeding and mating behavior in Aedes aegypti mosquitoes, J Vector Borne Dis 62, 211 (2025). [42] M. I. Skolnik, Radar Handbook (McGraw Hill, 1970). [43] J. Somers, M. Georgiades, M. P. Su, J. Bagi, M. Andr´es, A. Alampounti, G. Mills, W. Ntabaliba, S. J. Moore, R. Spaccapelo, and J. T. Albert, Hitting the right note at the right time: Circadian control of audibility in Anophe- les mosquito mating swarms is mediated by flight tones, Sci. Adv. 8, eabl4844 (2022). [44] J. Faber, A. C. Alampounti, M. Georgiades, J. T. Albert, and D. Bozovic, Python code for all analysis and figure generation, Available on GitHub (2025). [45] T. Bossomaier, L. Barnett, M. Harr´e, and J. T. Lizier, An Introduction to Transfer Entropy (Springer Cham, 2016). APPENDIX A: TRANSFER ENTROPY The transfer entropy [17, 45] from process J to process I is defined as TJ→I = X p(in+1, i(k) n , j(l) n ) log p(in+1 | i(k) n , j(l) n ) p(in+1 | i(k) n ) , (11) where i(k) n = (in, ..., in−k+1) are the k most recent states of process I. Therefore, p(in+1 | i(k) n , j(l) n ) is the condi- tional probability of finding process I in state in+1 at time n + 1, given that the previous k states of process I were i(k) n and that the previous l states of process J were j(l) n . The summation runs over all accessible states for both processes and over all points in the time series. The transfer entropy measures how much one’s ability to predict the future of process I is improved upon learning the history of process J. It has been used extensively as an information theoretic measure of signal detection [45]. For our application, we seek to measure the transfer entropy from the target signal, Ff(t), to the response of the system (z(t) for model 1 or z2(t) for model 2). We compute the transfer entropy once using just the real parts of the signals, and again using just the imaginary parts. As these values were nearly identical in all cases, we simply report the mean of the two. The calculations were carried out by first discretizing the signals into one of two states at every point in time. We use the mean value of each signal as the delineation between the two states. The choice of k and l has little effect on the re- sults, so we select k = l = 5, and sample the 5 points such that they span one mean period of Ff(t). A trans- fer entropy of zero indicates that the two processes are completely independent and that no signal detection has occurred. Since we chose two states in the discretization, the maximum value this measure can take is 1 bit. 9 APPENDIX B: ADDITIONAL FREQUENCY-MODULATION (FM) MASKS We now test the effectiveness of FM masks with al- ternative modulators. In FIG. 7, we plot the transfer entropy for sinusoidal modulators of the form, m(t) = Amod sin(ωmodt). We also show the results for sawtooth modulators (FIG. 8) and square-wave modulators (FIG. 9). Note that these are limiting cases of Eq. 10, where α = 1 produces sawtooth modulations and α = 0 pro- duces square-wave modulations. All three cases qualita- tively show the same behavior, where effective masks can be found when using rapid frequency modulations (high fmod) with small Amod. 0.00 0.25 0.50 0.75 1.00 mod/ 0 0.000 0.025 0.050 0.075 0.100 0.125 0.150 Transfer entropy (bits) M1, Amod = 0.05 0 M2, Amod = 0.05 0 M1, Amod = 0.1 0 M2, Amod = 0.1 0 M1, Amod = 0.2 0 M2, Amod = 0.2 0 0.0 0.2 0.4 Amod/ 0 0.000 0.025 0.050 0.075 0.100 0.125 0.150 Transfer entropy (bits) M1, mod = 0.05 0 M2, mod = 0.05 0 M1, mod = 0.2 0 M2, mod = 0.2 0 M1, mod = 0.5 0 M2, mod = 0.5 0 FIG. 7. Sinusoidal FM mask. The effects of the masking parameters, Amod and ωmod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). 0.00 0.25 0.50 0.75 1.00 fmod 0.000 0.025 0.050 0.075 0.100 0.125 Transfer entropy (bits) M1, Amod = 0.04 0 M2, Amod = 0.04 0 M1, Amod = 0.07 0 M2, Amod = 0.07 0 M1, Amod = 0.34 0 M2, Amod = 0.34 0 0.0 0.2 0.4 Amod/ 0 0.000 0.025 0.050 0.075 0.100 0.125 Transfer entropy (bits) M1, fmod = 0.02 M2, fmod = 0.02 M1, fmod = 0.1 M2, fmod = 0.1 M1, fmod = 0.5 M2, fmod = 0.5 FIG. 8. Sawtooth FM mask. The effects of the masking parameters, fmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). 0.00 0.25 0.50 0.75 1.00 fmod 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, Amod = 0.05 0 M2, Amod = 0.05 0 M1, Amod = 0.1 0 M2, Amod = 0.1 0 M1, Amod = 0.2 0 M2, Amod = 0.2 0 0.0 0.2 0.4 Amod/ 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, fmod = 0.14 M2, fmod = 0.14 M1, fmod = 0.48 M2, fmod = 0.48 M1, fmod = 0.8 M2, fmod = 0.8 FIG. 9. Square-wave FM mask. The effects of the masking parameters, fmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2).
Sound Masking Strategies for Interference with Mosquito Hearing Justin Faber∗ : London, UK Joerg T Albert University College London Ear Institute: London, UK and Cluster of Excellence Hearing4all, Sensory Physiology & Behaviour Group, Department for Neuroscience, ̈at Oldenburg, Oldenburg, Germany Dolores Bozovic (Dated: October 17, 2025) The use of auditory masking has long been of interest in psychoacoustics and for engineering purposes, in order to cover sounds that are disruptive to humans or to species whose habitats overlap with ours. In most cases, we seek to minimize the disturbances to the communication of wildlife. However, in the case of pathogen-carrying insects, we may want to maximize these disturbances as a way to control populations. In the current work, we explore candidate masking strategies for a generic model of active auditory systems and a model of the mosquito auditory system. For both models, we find that masks with all acoustic power focused into just one or a few frequencies perform best. We propose that masks based on rapid frequency modulation are most effective for maximal disruption of information transfer and minimizing intelligibility. We hope that these results will serve to guide the avoidance or selection of possible acoustic signals for, respectively, maximizing or minimizing communication. I. INTRODUCTION When unwanted sound cannot be sufficiently attenuated or actively canceled, masking strategies are often used to obscure details and provide a more uniform acoustic environment [1, 2]. For applications aimed at minimizing distraction or improving sleep, using filtered noise as a sound mask is the typical strategy. Other applications include the masking of industrial noises in order to protect species susceptible to sound pollution [3-5]. However, in some cases, we may want to maximize the disruption to animal behavior, particularly for species that are invasive or otherwise harmful. Mosquitoes pose a significant threat to tropical and subtropical regions, resulting in over 700,000 annual human deaths and costing billions of dollars in healthcare expenses [6]. Though advances have been made in developing pesticides and alternative strategies, the number of annual deaths continues to rise [7]. Further, the rising global temperatures have led to the migration of the deadliest mosquito species to regions that were previously uninhabitable to them [8]. The need for additional control strategies is therefore evident. Some possible intervention strategies entail disrupting the mating or blood-feeding processes of these insects [9, 10]. ∗ Mosquitoes mate at dusk, when males form swarms and seek out nearby females. Successful mating is dependent on the ability of a male to detect the sound of a female beating her wings. The male feather-like flagella are tuned approximately to the female's wingbeat frequency. Surprisingly, the active neuronal elements of the Johnston's organ are tuned to a much lower frequency [11]. The nonlinear interaction between the female flight tone and the male's own wingbeats produce distortion products, which fall within the bandwidth of detection of the neuronal elements [12-15]. This counterintuitive detection strategy has recently been modeled and proposed to be advantageous in a theoretical study [16]. In the current work, we utilize theoretical models of mosquito hearing to find the optimal class of acoustic masks that would prevent a male from successfully detecting a female. Measuring the performance of a biological detector relies on assumptions as to which components of the signal are meaningful. We must assume that the amplitude, frequency, phase, or some combination of these properties carries biologically relevant information. Alternatively, it could be modulations in these values that conveys information necessary for auditory detection. The goal of this study is not to determine the optimal sound mask for a specific species, but rather to find the optimal class of masks that would be generally effective in blocking active hearing. Therefore, we employ transfer entropy, an in16 Oct 2025 2 formation theoretic measure that quantifies information imparted from one process to another [17]. The transfer entropy does not rely on any assumptions of which components of the signal are meaningful. Rather, it reflects how much information about the stimulus is carried in the response of the receiver. We note that sound detection and localization, along with sound scattering through the surrounding environment are all complex processes, which certainly contribute to shaping the optimal acoustic mask [18-21]. However, as these are specific to each environment and application of interest, we omit them from this study and focus on a single detector capturing a scalar signal. We do not discuss directional information that can be inferred from having multiple sensory elements, or by sensing the vector-valued velocity field of sound [22, 23]. With this limitation established, we frame our problem as follows. Given a target acoustic signal of interest to an active auditory detector, what is the optimal acoustic mask that minimizes the information captured? Candidate masking signals can take any form we choose, with the constraint of fixed power input. We use a Hopf oscillator as a generic model of auditory detection [24-28], and a recently proposed model specific to the mosquito auditory system [16], based on a pair of Hopf oscillators and tuning to distortion products. We first present the performance of several mask categories: filtered noise, multi-tone signals, and stimuli with amplitude or frequency modulation. We then present a comparison of the best masks from each category. For both numerical models, we find filtered noise to be surprisingly ineffective, despite its common use in masking distraction-inducing sounds. For both models, we instead find that masks comprised of a single tone with constant amplitude and rapidly varying frequency perform best. Though these masks would likely be too unpleasant for human applications, they may provide acoustic-based strategies for interfering with the communication of pathogen carrying mosquitoes, as well as other unwanted species. II. HOPF OSCILLATOR (MODEL 1) The active auditory system of vertebrates has long been described by a system poised near a critical point, on the verge of instability [26]. In this regime, the system is highly sensitive to weak signals. This framework is elegantly captured by the normal form equation for the supercritical Hopf bifurcation [24, 25, 28]. The state of this detector is described by a complex variable z(t). The dynamics are governed by the first-order differential equation, dz dt = (μ + iω0)z -|z|2z + Ff(t) + Fmask(t), (1) where Ff(t) is the complex-valued signal of interest and Fmask(t) represents the masking signal. In the absence of external stimulus (Ff(t) = Fmask(t) = 0), this system exhibits autonomous limit-cycle oscillations for μ > 0 and quiescent behavior for μ < 0. The interface between these two regimes (μ = 0) defines the supercritical Hopf bifurcation. This system exhibits highest sensitivity near this bifurcation point. Autonomous oscillations occur at angular frequency, ω0, which coincides with the frequency of maximal sensitivity. Unless otherwise stated, we use ω0 = 2π and μ = 0.1, poising the system on the oscillatory side of the bifurcation. III. HOPF ⇒HOPF MOSQUITO MODEL (MODEL 2) Insect hearing has also been shown to employ active processes [29-33]. We recently proposed a numerical model for the auditory system of the male mosquito [16]. This model is comprised of two Hopf oscillators, one governing the mechanical tuning of the flagellum and the other governing the electrical tuning of the active neural elements. This model incorporates the evidence that these two tuning curves do not coincide. Rather, the electrical tuning aligns with a nonlinear distortion product produced by the simultaneous detection of the male and female wingbeats [12-15]. The model takes the form dz1 dt = (μ1 + iω1)z1 -|z1|2z1 + Ff(t) + Fm(t) + Fmask(t) dz2 dt = (μ2 + iω2)z2 -|z2|2z2 + z1(t), (2) where z1(t) and z2(t) represent the states of the mechanical and electrical components, respectively. Fm(t) is the complex-valued stimulus from the male mosquito detecting his own wingbeats. Throughout this study, we keep both active elements in the oscillatory regime (μ1 = μ2 = 0.1). The acoustic signal of interest, Ff(t) is confined to a narrow bandwidth, centered near the characteristic frequencies ω0 = ω1 = 2π. For the mosquito model, we must also include the male's own flight tone. Therefore, we use a pure tone of the form Fm(t) = ei3πt, where we fix the amplitude at unity and use a frequency approximately 50% higher than the female flight tones, consistent with experimental measurements [15]. The combination of male and female flight tones produces a nonlinear distortion product that falls near the characteristic frequency of the second oscillation (ω2 = π). IV. TARGET SIGNAL & TRANSFER ENTROPY One limitation of our simplified framework, which focuses on a single detector with one spatial degree of freedom, is that it cannot extract directional information 3 from the signals. This simplification, however, allows us to focus on just one task, namely, minimizing the detection of Ff(t). Most common measures of detection sensitivity rely on assumptions of what properties of the signal carry biologically relevant information. Vector strength assumes that this information lies in the phase response of the detector. The linear response function assumes that the absolute amplitude of the response at the stimulus frequency carries meaningful information. Amplitude gain assumes the change in amplitude to be the key characteristic of a detector. To avoid these assumptions, we use the transfer entropy as our measure of detection. The transfer entropy quantifies how much information about the stimulus can be inferred from looking at the response of the detector (see Appendix A). In order to use this measure appropriately, we must use a target signal that constantly produces new information with time. We therefore use a sinusoid with fixed amplitude and stochastically modulated frequency, as this is the simplest nonstationary signal that approximates the flight tones of insects. Our target signal takes the form, Ff(t) = f0eiφ(t) (3) φ(t) = Z t -∞ ω0 + η(t) dt, (4) where ω0 = ω1 = 2π is the mean instantaneous frequency and η(t) is a stochastic variable with standard deviation 0.2 × ω0. We consider slow, smooth modulations in this variable by using pink-noise statistics. The power spectrum of η(t) is nonzero and uniform from 0 to ωmask and zero elsewhere. In FIG. 1, we show an example of a candidate masking signal and how it influences the response of each of the two models to the target signal. To interfere with detection, we develop a masking strategy based on our knowledge of the statistics of the target signal. We know the mean and standard deviation of η(t), but not the values at each point in time. If we knew the full time series, we could construct a masking signal that perfectly cancels the target signal. Likewise, for blocking communication between mosquitoes, we know only the statistics of the tones at which they communicate. With our target signal defined, and our goal focused on minimizing transfer entropy from Ff(t) to z(t) (model 1) or Ff(t) to z2(t) (model 2), we now define our normalization constant for the masking signal. Detection can be masked trivially by increasing the amplitude of the mask arbitrarily high. To avoid trivial solutions and to find a mask that can be effective for the greatest distance from the acoustic masking source, we impose the constraint of fixed power input, Pm = Z ∞ -∞ F ∗ mask(τ)Fmask(τ)dτ, (5) -1 1 [z(t)] mask off -1 1 [z2(t)] mask off -1 1 [Fmask(t)] -1 1 [z(t)] mask on Time -1 1 [z2(t)] mask on 0.01 0.1 1 0.01 0.1 1 0.01 0.1 1 0.01 0.1 1 2 0 m Frequency 1 PSD FIG. 1. Demonstration of the effects of a filtered-noise mask. The target signal and masking signal are shown in black and pink, respectively. The responses of model 1 (blue) and model 2 (orange) are shown in both the time and frequency domains. The top two rows show the responses in the absence of the masking signal, while the bottom two rows show the responses in the presence of the masking signal. where ∗denotes the complex conjugate. We also define the mask-to-signal ratio to be the power ratio of the mask to the target signal, Pm/f 2 0 . Unless otherwise stated, we use f0 = 0.3 and a mask-to-signal ratio of 1. We now explore several categories of possible masking strategies. V. FILTERED-NOISE MASKS A natural starting point for designing an acoustic mask is to attempt to corrupt communication using stochastic white noise. However, since our target signal is confined to a bandwidth surrounding ω0, we may want to filter this noise, so as to focus more acoustic power onto the bandwidth of communication. To produce a filterednoise mask, we first generate Gaussian white noise. We then apply a bandpass, Gaussian-window filter in the frequency domain, where ω and σω are the mean and standard deviation of the window function. Finally, we transform back to the time domain and rescale the signal variance to be Pm. In FIG. 2, we show how the transfer entropy for both models depends on ω and σω. Note that in the limit of σω →∞, the mask is Gaussian white noise. However, in the limit of σω →0, the mask is a pure tone. Surprisingly, we find that the transfer entropy is not minimized when the bandwidth of the mask matches the 4 0.5 1.0 1.5 / 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, = 0 M2, = 0 M1, = 0.1 0 M2, = 0.1 0 M1, = 0.2 0 M2, = 0.2 0 0.00 0.25 0.50 0.75 1.00 / 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, = 0 M2, = 0 M1, = 1.1 0 M2, = 1.1 0 M1, = 2 M2, = 2 FIG. 2. Filtered-noise mask. The effects of the masking parameters, ω and σω, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). bandwidth of the target signal. Instead, for both models, minimal transfer entropy is found when σω = 0 and ω ≈ω0, indicating that a pure tone near the center frequency is more effective at corrupting communication than stochastic noise. For model 2, there is an additional minimum at ω ≈ω2, indicating that an effective masking strategy could entail corrupting communication at either characteristic frequency of the two oscillators. For mosquitoes, this regime corresponds to focusing the acoustic power near the characteristic frequency of either the flagellum or the neuronal elements. VI. TWO-TONE MASKS We now consider acoustic masks comprised of just two pure tones, rather than a dense bandwidth of frequencies. Since a pure tone mask was most effective in the previous section, a natural extension is to test if there is an advantage to splitting the power into two frequency components. We consider masks with the functional form, Fmask(t) = A1eiΩ1t + A2eiΩ2t, (6) where A1 and A2 represent the amplitudes, and Ω1 and Ω2 the frequencies of the two tones. Without loss of generality, we let A1 ≥A2. We fix Ω1 = 1.01×ω0, setting the stronger tone near the characteristic frequency. We then vary the ratio, A2/A1, and Ω2 and find the effects on the transfer entropy. In FIG. 3, we observe that, for both models, there are shallow minima in the plots of 0.5 1.0 1.5 2/ 0 0.00 0.02 0.04 0.06 0.08 0.10 Transfer entropy (bits) M1, A2/A1 = 0.1 M2, A2/A1 = 0.1 M1, A2/A1 = 0.5 M2, A2/A1 = 0.5 M1, A2/A1 = 1 M2, A2/A1 = 1 0.00 0.25 0.50 0.75 1.00 A2/A1 0.00 0.02 0.04 0.06 0.08 0.10 Transfer entropy (bits) M1, 2 = 0.975 0 M2, 2 = 0.975 0 M1, 2 = 1.25 0 M2, 2 = 1.25 0 M1, 2 = 2 M2, 2 = 2 FIG. 3. Two-tone mask. The effects of the masking parameters, Ω2 and A2/A1, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). transfer entropy as a function of A2/A1, with hardly any advantage gained by adding a second tone. VII. AMPLITUDE-MODULATION (AM) MASKS We now consider AM sound masks, where we sinusoidally modulate the amplitude of a stimulus tone near the characteristic frequency. This mask takes the form, Fmask(t) = A0 + Amod sin(ωmodt) eiΩt, (7) where Ω= 1.01 × ω0. This mask can also be regarded as a 3-tone stimulus. Using trigonometric identities, we can express Fmask(t) as a sum of three pure tones at frequencies Ω, Ω-ωmod, and Ω+ωmod. We vary ωmod and Amod and find the effects on the transfer entropy. Note that our constraint of fixed mask power uniquely determines A0 for every choice of Amod. We find that amplitude modulations do not noticeably improve the effectiveness of the mask (FIG. 4). For model 1, there is once again a very shallow minimum near Amod = 0.1. However, the minimal transfer entropy for model 2 is found at Amod = 0. VIII. FREQUENCY-MODULATION (FM) MASKS We now explore masks based on frequency modulation. We consider the case where all of the acoustic power is 5 0.00 0.25 0.50 0.75 1.00 mod/ 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, Amod = 0.1 M2, Amod = 0.1 M1, Amod = 0.2 M2, Amod = 0.2 M1, Amod = 2Pm M2, Amod = 2Pm 0.0 0.1 0.2 0.3 0.4 Amod 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, mod = 0.05 0 M2, mod = 0.05 0 M1, mod = 0.3 0 M2, mod = 0.3 0 M1, mod = 2 M2, mod = 2 FIG. 4. Amplitude-modulation (AM) mask. The effects of the masking parameters, ωmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). focused into one frequency, but that frequency is modulated with time, so as to corrupt a wider bandwidth. We use a sound mask of the form, Ff(t) = p Pmeiψ(t) (8) ψ(t) = Z t -∞ Ω+ m(t) dt, (9) where Ω= ω0 = ω1 is the carrier frequency and m(t) is the modulator. We consider periodic modulators that follow a power-law increase, m(t) = Amod(2fmodt -1)|2fmodt -1|α-1, (10) over each modulation period, t = n fmod to t = n+1 fmod for any n. Note that m(t) spans -Amod to +Amod over each modulation period. α is a real-valued, non-negative free parameter characterizing the power-law growth. For large values of α, the frequency sweep spends more time near the center frequency, ω0, while for small values, the sweep spends more time time near the end points of the sweep, ω0 ± Amod. For α = 1, the frequency increases linearly from ω0 -Amod to ω0 + Amod and then repeats. This is known as sawtooth frequency modulation. For α = 0, m(t) becomes a square wave, which abruptly switches between the two frequency extrema. We vary α, Amod, and fmod so as to test a wide variety of frequency modulators. 0.00 0.05 0.10 0.15 0.20 Amod/ 0 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Transfer entropy (bits) M1, = 0.5 M2, = 0.5 M1, = 1 M2, = 1 M1, = 2 M2, = 2 0 1 2 3 4 0.00 0.02 0.04 0.06 0.08 0.10 0.12 Transfer entropy (bits) M1, Amod = 0.02 0 M2, Amod = 0.02 0 M1, Amod = 0.04 0 M2, Amod = 0.04 0 M1, Amod = 0.06 0 M2, Amod = 0.06 0 FIG. 5. Power-law frequency-modulation (FM) mask. The effects of the masking parameters, ωmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). In FIG. 5, we show the effects of varying Amod and α. The transfer entropy is minimal with small modulations in the stimulus frequency. Further, the results are rather insensitive to the value of α. In Appendix B, we show the results from FM masks with sinusoidal, sawtooth, and square-wave modulators. We further show the effects of varying fmod. IX. COMPARISON OF ALL MASK TYPES Having explored a variety of sound masks and identified the optimal parameter ranges, we now compare the best masks from each of the categories. We note that all tests so far were performed with a mask power equal to that of the target signal. Therefore, our mask-to-signal ratio has so far been Pm/f 2 0 = 1. However, this ratio will differ from 1, depending on the application, the distance from the sound sources, and the acoustic power available. Therefore, we vary the signal-to-mask ratio and assess the performance of the optimal mask from each of the four categories (FIG. 6). For comparison, we also show the performance of masks comprised of white noise and bandpass filtered noise. We find that white noise and filtered noise perform poorly as sound masks in terms of transfer entropy reduction. However, the most effective masks are those with all of the acoustic power focused into just one or a few frequencies at any given time. We find FM masks to be the most effective, though the pure tone, two tone, and 6 0 1 2 3 4 5 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Transfer entropy (bits) A white noise filtered noise pure tone 2 tone AM FM 0 1 2 3 4 5 Mask-to-signal ratio 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Transfer entropy (bits) B FIG. 6. Comparison across mask strength. Transfer entropy dependence on masking strength for model 1 (A) and model 2 (B). Mask parameters in (A) are ω = ω0 and σω = ∞ (for white noise), ω = ω0 and σω = 0.3ω0 (for filtered noise), ω = ω0 and σω = 0 (for pure tone), Ω1 = 1.01ω0, Ω2 = 1.25ω0, and A2/A1 = 0.25 (for 2 tone), ωmod = 0.3ω0 and Amod = 0.07 (for AM), ωmod = 0.22ω0 and Amod = 0.08ω0 (for sinusoidal FM). Mask parameters in (B) are ω = ω0 and σω = ∞(for white noise), ω = ω0 and σω = 0.3ω0 (for filtered noise), ω = 0.975ω0 and σω = 0 (for pure tone), Ω1 = 1.01ω0, Ω2 = 0.975ω0, and A2/A1 = 0.25 (for 2 tone), ωmod = 0.5ω0 and Amod = 0.1 (for AM), ωmod = 0.5ω0 and Amod = 0.25ω0 (for sinusoidal FM). AM masks are comparable. It is also worth noting that for all mask types and for all mask-to-signal ratios, the transfer entropy to model 2 is greater than that of model 1. This highlights the high sensitivity and robustness of the detection scheme employed by male mosquitoes [16]. Despite the loss in signal power associated with using distortion products instead of primary tones, the Hopf ⇒ Hopf detector captures more information from external signals than does the single Hopf oscillator, even in the presence of sound masking. X. DISCUSSION Previous studies have attempted to reduce mosquito populations using acoustic traps [10]. The strategy behind this approach is to lure male mosquitoes by presenting artificial female flight tones. These methods have shown promising results in both laboratory [34] and field [35-39] settings, with improved success when combined with carbon dioxide release and using live animals as bait. There have also been attempts to control the behavior of mosquitoes using disruptive sounds, with studies showing that mating and blood-feeding activity are both reduced in the presence of loud music [40, 41]. However, the practicality of these strategies for widespread use remains unclear, as does the nature of the most effective sound for disrupting mosquito hearing. In the current study, we aimed to answer the question of which sound mask is theoretically most effective. We used a Hopf oscillator as a generic model of active additory systems. We also used a recently proposed numerical model of the mosquito auditory system. We compared several classes of masking strategies on the two models, showing that rapid frequency sweeps are most effective in corrupting signal detection. Further, we varied the masking power, so as to explore the robustness of these results for various application. We found that, in all regimes, it is favorable to have all of the acoustic energy focused into just one or a few frequencies at any given time. Further improvement can be seen with a single tone of rapidly varying frequency. This method is known in radar jamming science as barrage jamming [42]. This is an effective strategy that allows the jammer to corrupt a wide bandwidth by rapidly modulating one stimulus frequency. Consistent with these prior applications, we found it to be the most effective mask, for both models of the auditory system. Interestingly, the song used in a previous study for disrupting mosquito behavior contains frequency sweeps [40]. Another advantage of the frequency sweep is that the statistics of target signal do not need to be precisely known, as long as its bandwidth overlaps with the range of the frequency sweep. Mosquito flight tones and the frequency of highest sensitivity vary between species [15], and even vary with time, following a circadian cycle [43]. Hence, this insensitivity to precise masking parameters could be useful for designing acoustic masks. We therefore propose that the use of rapid acoustic frequency sweeps could provide a practical solution for controlling the mating and blood-feeding behavior of mosquitoes. We note that the masks presented in this study represent only a small subset of all functional forms possible. Because bandwidth noise masks proved ineffective, and the addition of a second or third pure tone did not show much improvement over a single tone, we were led to rule out masks that do not have all power focused into a single frequency for any short interval of time. This left us with just one class of mask, namely, FM masks described by Eqs. 8 and 9. However, we note that more 7 effective masks may exist. Future work entails exploring masks that have both a modulated amplitude and frequency. Other masks that could be explored entail FM mask in which the modulator abruptly jumps between many frequencies, possibly in a stochastic manner. Future work also entails using machine learning algorithms to speed up the exploration of the vast parameter space of masking signals, with the challenge of parameterizing a broadest range of signal classes while using the fewest parameters possible. While future improvements may further enhance the ability to corrupt auditory detection by mosquitoes, the current work demonstrates that signals with rapid frequency modulation may provide a plausible approach for population control of harmful species. ACKNOWLEDGMENTS This work was supported by a grant from the Biotechnology and Biological Sciences Research Council, UK (BBSRC, BB/V007866/1 to J.T.A.) and a grant from The Human Frontier Science Program (HFSP grant RGP0033/2021 to J.T.A. and D.B.). DATA AVAILABILITY The Python code for performing the analysis and generating the figures is publicly available online: [44]. [1] G. Kidd, C. R. Mason, V. M. Richards, F. J. Gallun, and N. I. Durlach, Informational Masking, in Auditory Perception of Sound Sources, edited by W. A. Yost, A. N. Popper, and R. R. Fay (Springer US, 2008) pp. 143-189. [2] H. ̊A. Gustafsson and S. D. Arlinger, Masking of speech by amplitude-modulated noise, JASA 95, 518 (1994). [3] C. W. Clark, W. T. Ellison, B. L. Southall, L. Hatch, S. M. V. Parijs, A. Frankel, and D. Ponirakis, Acoustic masking in marine ecosystems: Intuitions, analysis, and implication, Mar. Ecol. Prog. Ser. 395, 201 (2009). [4] A. K. D. Schmidt and R. Balakrishnan, Ecology of acoustic signalling and the problem of masking interference in insects, J. Comp. Physiol. A 201, 133 (2015). [5] E. P. Derryberry, R. M. Danner, J. E. Danner, G. E. Derryberry, J. N. Phillips, S. E. Lipshutz, K. Gentry, and D. A. Luther, Patterns of Song across Natural and Anthropogenic Soundscapes Suggest That White-Crowned Sparrows Minimize Acoustic Masking and Maximize Signal Content, PLOS ONE 11, e0154456 (2016). [6] W. H. Organization, Vector-borne diseases (2020). [7] W. H. Organization, Global report on insecticide resistance in malaria vectors: 2010-2016 (2018). [8] J. P. Messina, O. J. Brady, D. M. Pigott, N. Golding, M. U. G. Kraemer, T. W. Scott, G. R. W. Wint, D. L. Smith, and S. I. Hay, The many projected futures of dengue, Nat. Rev. Microbiol. 13, 230 (2015). [9] M. P. Su, M. Georgiades, J. Bagi, K. Kyrou, A. Crisanti, and J. T. Albert, Assessing the acoustic behaviour of Anopheles gambiae (s.l.) dsxF mutants: implications for vector control, Parasit. Vectors 13, 507 (2020). [10] M. Andr ́es, M. P. Su, J. Albert, and L. J. Cator, Buzzkill: Targeting the mosquito auditory system, Curr. Opin. Insect Sci. 40, 11 (2020). [11] K. S. Boo and A. G. Richards, Fine structure of the scolopidia in the johnston's organ of male Aedes aegypti (L.) (Diptera: Culicidae), Int. J. Insect Morphol. Embryol. 4, 549 (1975). [12] B. Warren, G. Gibson, and I. J. Russell, Sex Recognition through Midflight Mating Duets in Culex Mosquitoes Is Mediated by Acoustic Distortion, Curr. Biol. 19, 485 (2009). [13] P. M. V. Sim ̃oes, R. A. Ingham, G. Gibson, and I. J. Russell, A role for acoustic distortion in novel rapid frequency modulation behaviour in free-flying male mosquitoes, J. Exp. Biol. 219, 2039 (2016). [14] P. M. Sim ̃oes, R. Ingham, G. Gibson, and I. J. Russell, Masking of an auditory behaviour reveals how male mosquitoes use distortion to detect females, Proc. Roy. Soc. B, Biol. Sci. 285, 11 (2018). [15] M. P. Su, M. Andr ́es, N. Boyd-Gibbins, J. Somers, and J. T. Albert, Sex and species specific hearing mechanisms in mosquito flagellar ears, Nat. Commun. 9, 10.1038/s41467-018-06388-7 (2018). [16] J. Faber, A. C. Alampounti, M. Georgiades, J. T. Albert, and D. Bozovic, A mosquito-inspired theoretical framework for acoustic signal detection, PNAS 122, e2500938122 (2025). [17] T. Schreiber, Measuring information transfer, Phys Rev Lett 85, 461 (2000). [18] B. J. Arthur, K. S. Emr, R. A. Wyttenbach, and R. R. Hoy, Mosquito ( Aedes aegypti ) flight tones: Frequency, harmonicity, spherical spreading, and phase relationships, J. Acoust. Soc. Am. 135, 933 (2014). [19] J.-H. Seo, T. L. Hedrick, and R. Mittal, Mechanism and scaling of wing tone generation in mosquitoes, Bioinspir. Biomim. 15, 016008 (2019). [20] H. R ̈omer, Directional hearing in insects: Biophysical, physiological and ecological challenges, J. Exp. Biol. 223, jeb203224 (2020). [21] J. Faber, A. C. Alampounti, M. Georgiades, J. T. Albert, and D. Bozovic, Antennal-Based Strategies for Sound Localization by Insects (2025). [22] H. C. Bennet-Clark, Acoustics of Insect Song, Nature 234, 255 (1971). [23] H. C. Bennet-Clark, Size and Scale Effects as Constraints in Insect Sound Communication, Philos. Trans. Biol. Sci. 353, 407 (1998). [24] Y. Choe, M. O. Magnasco, and A. J. Hudspeth, A model for amplification of hair-bundle motion by cyclical binding of Ca2+ to mechanoelectrical-transduction channels, PNAS 95, 15321 (1998). [25] V. M. Egu ́ıluz, M. Ospeck, Y. Choe, A. J. Hudspeth, and M. O. Magnasco, Essential Nonlinearities in Hearing, 8 Phys. Rev. Lett. 84, 5232 (2000). [26] A. J. Hudspeth, Integrating the active process of hair cells with cochlear function, Nat. Rev. Neurosci. 15, 600 (2014). [27] T. Reichenbach and a. J. Hudspeth, The physics of hearing: Fluid mechanics and the active process of the inner ear., Rep. Prog. Phys. 77, 076601 (2014). [28] R. G. Alonso, F. Gianoli, B. Fabella, and A. J. Hudspeth, Amplification through local critical behavior in the mammalian cochlea, PNAS 122, e2503389122 (2025). [29] M. C. G ̈opfert and D. Robert, Active Processes in Insect Hearing, in Active Processes and Otoacoustic Emissions in Hearing, Springer Handbook of Auditory Research, edited by G. A. Manley, R. R. Fay, and A. N. Popper (Springer, New York, NY, 2008) pp. 191-209. [30] M. C. G ̈opfert and R. M. Hennig, Hearing in Insects, Annu. Rev. Entomol. 61, 257 (2016). [31] B. Nadrowski, T. Effertz, P. R. Senthilan, and M. C. G ̈opfert, Antennal hearing in insects - New findings, new questions, Hear. Res. 273, 7 (2011). [32] N. Mhatre, Active amplification in insect ears: Mechanics, models and molecules, J. Comp. Physiol. A 201, 19 (2015). [33] J. T. Albert and A. S. Kozlov, Comparative Aspects of Hearing in Vertebrates and Insects with Antennal Ears, Curr. Biol. 26, R1050 (2016). [34] S. S. Jakhete, S. A. Allan, and R. W. Mankin, Wingbeat Frequency-Sweep and Visual Stimuli for Trapping Male Aedes aegypti (Diptera: Culicidae), J. Med. Entomol. 54, 1415 (2017). [35] K.-i. Ogawa, Field Study on Acoustic Trapping of Mansonia (Diptera : Culicidae) in Malaysia I. Mass-Trapping of Males by a Cylindrical Sound Trap, Appl. Entomol. Zool. 23, 265 (1988). [36] T. Ikeshoji and H. Yap, Impact of the insecticide-treated sound traps on an Aedes albopictus population, Med. Entomol. Zool. 41, 213 (1990). [37] C. M. Stone, H. C. Tuten, and S. L. Dobson, Determinants of Male Aedes aegypti and Aedes polynesiensis (Diptera: Culicidae) Response to Sound: Efficacy and Considerations for Use of Sound Traps in the Field, J. Med. Entomol. 50, 723 (2013). [38] B. J. Johnson and S. A. Ritchie, The Siren's Song: Exploitation of Female Flight Tones to Passively Capture Male Aedes aegypti (Diptera: Culicidae), J. Med. Entomol. 53, 245 (2016). [39] B. B. Rohde, K. M. Staunton, N. C. Zeak, N. Beebe, N. Snoad, A. Bondarenco, C. Liddington, J. A. Anderson, W. Xiang, R. W. Mankin, and S. A. Ritchie, Waterproof, low-cost, long-battery-life sound trap for surveillance of male Aedes aegypti for rear-and-release mosquito control programmes, Parasit. Vectors 12, 417 (2019). [40] H. Dieng, C. C. The, T. Satho, F. Miake, E. Wydiamala, N. F. A. Kassim, N. A. Hashim, R. E. Morales Vargas, and N. P. Morales, The electronic song "Scary Monsters and Nice Sprites" reduces host attack and mating success in the dengue vector Aedes aegypti, Acta Tropica 194, 93 (2019). [41] C. Y. Ni, N. F. A. Kassim, N. M. Ayub, S. A. Abuelmaali, A. M. Mashlawi, and H. Dieng, Impact of diverse musical genres on blood-feeding and mating behavior in Aedes aegypti mosquitoes, J Vector Borne Dis 62, 211 (2025). [42] M. I. Skolnik, Radar Handbook (McGraw Hill, 1970). [43] J. Somers, M. Georgiades, M. P. Su, J. Bagi, M. Andr ́es, A. Alampounti, G. Mills, W. Ntabaliba, S. J. Moore, R. Spaccapelo, and J. T. Albert, Hitting the right note at the right time: Circadian control of audibility in Anopheles mosquito mating swarms is mediated by flight tones, Sci. Adv. 8, eabl4844 (2022). [44] J. Faber, A. C. Alampounti, M. Georgiades, J. T. Albert, and D. Bozovic, Python code for all analysis and figure generation, Available on GitHub (2025). [45] T. Bossomaier, L. Barnett, M. Harr ́e, and J. T. Lizier, An Introduction to Transfer Entropy (Springer Cham, 2016). APPENDIX A: TRANSFER ENTROPY The transfer entropy [17, 45] from process J to process I is defined as TJ→I = X p(in+1, i(k) n , j(l) n ) log p(in+1 | i(k) n , j(l) n ) p(in+1 | i(k) n ) , (11) where i(k) n = (in, ..., in-k+1) are the k most recent states of process I. Therefore, p(in+1 | i(k) n , j(l) n ) is the conditional probability of finding process I in state in+1 at time n + 1, given that the previous k states of process I were i(k) n and that the previous l states of process J were j(l) n . The summation runs over all accessible states for both processes and over all points in the time series. The transfer entropy measures how much one's ability to predict the future of process I is improved upon learning the history of process J. It has been used extensively as an information theoretic measure of signal detection [45]. For our application, we seek to measure the transfer entropy from the target signal, Ff(t), to the response of the system (z(t) for model 1 or z2(t) for model 2). We compute the transfer entropy once using just the real parts of the signals, and again using just the imaginary parts. As these values were nearly identical in all cases, we simply report the mean of the two. The calculations were carried out by first discretizing the signals into one of two states at every point in time. We use the mean value of each signal as the delineation between the two states. The choice of k and l has little effect on the results, so we select k = l = 5, and sample the 5 points such that they span one mean period of Ff(t). A transfer entropy of zero indicates that the two processes are completely independent and that no signal detection has occurred. Since we chose two states in the discretization, the maximum value this measure can take is 1 bit. 9 APPENDIX B: ADDITIONAL FREQUENCY-MODULATION (FM) MASKS We now test the effectiveness of FM masks with alternative modulators. In FIG. 7, we plot the transfer entropy for sinusoidal modulators of the form, m(t) = Amod sin(ωmodt). We also show the results for sawtooth modulators (FIG. 8) and square-wave modulators (FIG. 9). Note that these are limiting cases of Eq. 10, where α = 1 produces sawtooth modulations and α = 0 produces square-wave modulations. All three cases qualitatively show the same behavior, where effective masks can be found when using rapid frequency modulations (high fmod) with small Amod. 0.00 0.25 0.50 0.75 1.00 mod/ 0 0.000 0.025 0.050 0.075 0.100 0.125 0.150 Transfer entropy (bits) M1, Amod = 0.05 0 M2, Amod = 0.05 0 M1, Amod = 0.1 0 M2, Amod = 0.1 0 M1, Amod = 0.2 0 M2, Amod = 0.2 0 0.0 0.2 0.4 Amod/ 0 0.000 0.025 0.050 0.075 0.100 0.125 0.150 Transfer entropy (bits) M1, mod = 0.05 0 M2, mod = 0.05 0 M1, mod = 0.2 0 M2, mod = 0.2 0 M1, mod = 0.5 0 M2, mod = 0.5 0 FIG. 7. Sinusoidal FM mask. The effects of the masking parameters, Amod and ωmod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). 0.00 0.25 0.50 0.75 1.00 fmod 0.000 0.025 0.050 0.075 0.100 0.125 Transfer entropy (bits) M1, Amod = 0.04 0 M2, Amod = 0.04 0 M1, Amod = 0.07 0 M2, Amod = 0.07 0 M1, Amod = 0.34 0 M2, Amod = 0.34 0 0.0 0.2 0.4 Amod/ 0 0.000 0.025 0.050 0.075 0.100 0.125 Transfer entropy (bits) M1, fmod = 0.02 M2, fmod = 0.02 M1, fmod = 0.1 M2, fmod = 0.1 M1, fmod = 0.5 M2, fmod = 0.5 FIG. 8. Sawtooth FM mask. The effects of the masking parameters, fmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2). 0.00 0.25 0.50 0.75 1.00 fmod 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, Amod = 0.05 0 M2, Amod = 0.05 0 M1, Amod = 0.1 0 M2, Amod = 0.1 0 M1, Amod = 0.2 0 M2, Amod = 0.2 0 0.0 0.2 0.4 Amod/ 0 0.00 0.05 0.10 0.15 Transfer entropy (bits) M1, fmod = 0.14 M2, fmod = 0.14 M1, fmod = 0.48 M2, fmod = 0.48 M1, fmod = 0.8 M2, fmod = 0.8 FIG. 9. Square-wave FM mask. The effects of the masking parameters, fmod and Amod, on the transfer entropy from Ff(t) to z(t) (model 1: M1), and from Ff(t) to z2(t) (model 2: M2).
2510.14920
Rank of Matrices Arising out of Singular Kernel Functions∗ Sumit Singh †4 and Sivaram Ambikasaran ‡1,2,3,4 1Wadhwani School of Data Science and Artificial Intelligence 2Robert Bosch Centre for Data Science and Artificial Intelligence 3Department of Data Science and Artificial Intelligence, IIT Madras, Chennai, India 4Department of Mathematics, IIT Madras, Chennai, India Abstract Kernel functions are frequently encountered in differential equations and machine learning applications. In this work, we study the rank of matrices arising out of the kernel function K : X × Y 7→R, where the sets X, Y ∈Rd are hypercubes that share a boundary. The main contribution of this work is the analysis of rank of such matrices where the particles (sources/targets) are arbitrarily distributed within these hypercubes. To our knowledge, this is the first work to formally investigate the rank of such matrices for arbitrary distribution of particles. We model the arbitrary distribution of particles to arise from an underlying random distribution and obtain bounds on the expected rank and variance of the rank of the kernel matrix corresponding to various neighbor interactions. These bounds are useful for understanding the performance and complexity of hierarchical matrix algorithms (especially hierarchical matrices satisfying the weak-admissibility criterion) for an arbitrary distribution of particles. We also present numerical experiments in one-, two-, and three-dimensions, showing the expected rank growth and variance of the rank for different types of interactions. The numerical results, not surprisingly, align with our theoretical predictions. Keywords: Probability distributions, Numerical rank, n-body problems, Hierarchical matrices, Low-rank ma- trix approximation, Central Limit Theorem, Normal approximation. AMS Subject Classifications: 65F55, 65D40, 65D12. 1 Introduction Recent years have seen significant strides in the development of matrices arising out of kernel functions frequently occur in areas such as partial differential equations (PDEs) [20] [34] [12] [16], integral equations [26], inverse problems [4] [24], Gaussian processes [3] [35], graph theory [31] [41] and kernel methods for addressing many complex machine learning and data analysis tasks [11] [19] [27]. Despite their wide applicability, these matrices are often large and dense, as the underlying kernel functions are not compactly supported. This makes storage and computation of matrix operations (such as matrix-vector products, solving linear systems, and matrix-factorization, etc.) computationally expensive and memory intensive. Despite these challenges, kernel matrices exhibit low-rank structure that can be exploited to overcome these issues. We can significantly reduce the storage requirements and accelerate computational processes by leveraging their low-rank approximations. The literature on exploiting rank-structuredness is extensive, and we refer interested readers to the works [13] [43] [9] [37] [23] and the references therein for an in-depth review. This approach not only reduces computational costs but also allows for more accurate solutions to complex problems in scientific computing, engineering, and data science. More specifically, exploiting rank-structuredness is very useful in fields such as machine learning and signal processing, where it helps in optimized data compression, noise reduction, and the extraction of meaningful insights from large datasets. One of the most frequently encountered rank-structured matrices arising out of n-body problems are hi- erarchical low-rank matrices. The initial work by Barnes and Hut [6], which reduced the computational complexity to perform matrix-vector product from O n2 to O (n log n). Greengard and Rokhlin in Fast Mul- tipole Method (FMM) [20] further reduced to O (n). FMM and Barnes-Hut algorithms leverage separable extension of kernel functions for far-away interactions. In terms of matrices, this corresponds to approximat- ing sub-matrices corresponding to far-away interactions by a low-rank matrix. This interpretation generalises ∗Submitted to the editors: October 17, 2025 †sumit1315singh@gmail.com, ma22d027@smail.iitm.ac.in ‡sivaambi@dsai.iitm.ac.in, sivaambi@alumni.stanford.edu 1 arXiv:2510.14920v1 [math.NA] 16 Oct 2025 to hierarchical matrices. Depending on what sub-matrices are approximated by a low-rank matrix, we have different hierarchical structures. Some of the widely used such representations are Hierarchically Off-Diagonal Low-Rank (HODLR), Hierarchically Semi-Separable (HSS), and H-matrix. For a detailed literature review on hierarchical matrices and their applications, we refer the readers to the articles [1] [29] [21]. The matrices that possess such hierarchical structures are leveraged to construct various algorithms that can reduce the storage and accelerate the matrix operations [15] [30] [33] [5] [2]. In the process of efficiently handling large and dense kernel matrices, hierarchical matrices satisfying weak admissibility criteria (from now on denoted as H∗) have become an essential tool. One of the challenges in dealing with H∗matrices is that the rank of matrices corresponding to neighboring interactions of source and target domains is dependent on the number of source and target particles. Despite the success of such hierarchical algorithms, most of the theoretical works while studying the rank due to interactions with the neighbors (HODLRdD, HSS2D, etc. [30] [22] [28] [9]), assume that the particles (or sources) are placed on a uniform grid or quasi-uniform grid. However, in most practical applications, such an assumption is not often true, as particles rarely align in such structured (uniform/quasi-uniform) patterns. Real-world data, whether coming from various physical simulations, machine learning, or data analysis tasks, tends to follow distributions that are arbitrary in nature, with no consistent pattern. This raises a concern about the applicability and robustness of hierarchical low-rank methods under the arbitrary distribution of particles. To encounter and model the arbitrary nature of particles, in this article, we consider a more realistic case where particles are randomly distributed in their respective domains according to some suitable probability distribution (see Subsection 4.1 for in-depth discussion), leading to the corresponding kernel matrix being a random matrix (see Subsection 4.2). Specifically, we study the expected growth (stated in Theorem 3.1) of the random rank R (as defined in Subsection 4.7) and analyze how much this rank R deviates from its expected value (stated in Theorem 3.2) for all possible interactions in d-dimensions. These results offer a more general understanding of algorithms in practical settings. To the best of our knowledge, this study of the rank of the kernel matrices, due to the arbitrary nature of inputs, is a novel contribution to the field. Existing Work vs. Our Approach: As mentioned earlier, the rank of kernel matrices has been extensively studied in various applications, from PDEs and inverse problems to machine learning. In this subsection, we review a few key contributions from existing works that have examined the rank of these kernel matrices, particularly those related to our work, and also highlight how our work differentiates itself from previous works. Hackbusch et al. [9] [22] were among the first to study the rank structure of kernel matrices, interpret- ing methods like Treecode [6] and FMM [20] [10] as low-rank representations of sub-matrices. Their initial works focused on kernel sub-matrices arising from interactions between well-separated domains, satisfying the standard (or strong) admissibility criterion. One key takeaway from those works is that such well-separated interactions naturally lead to low-rank structures in the corresponding kernel matrix. In the later works, Hackbusch et al. in [22] introduced the notion of weak-admissibility criterion in the context of one-dimension and studied the rank of kernel matrices due to interaction between the neighbors in one-dimension (i.e., the vertex sharing case) where they found that the rank is of O (log n), n being the number of particles in the domains. In the article [42], Xia extended the weak-admissibility criterion in two dimensions and provided a rough idea of the rank of kernel matrices growth due to the neighboring interactions. While analyzing the complexity of their algorithms, Ho, Greengard [25] and Ho, Lexing [26] provide heuristic bounds on the rank. They discussed the rank of interactions between two d-dimensional hypercubes sharing a (d −1)-dimensional hypersurface, i.e., vertex-sharing in 1D, edge-sharing in 2D, and face-sharing in 3D, and so on. Recently, Khan et al. in [30] rigorously proved the rank growth of kernel matrices for all possible neighboring interactions in any dimensions. The result is independent of the choice of kernel functions. However, the main drawback of their work is that to derive the results, they assumed that the particles in each domain are arranged on a uniform or quasi-uniform grid, which is generally not the case in practice. In this work, we relax this assumption and consider an arbitrary distribution of particles in the respective domains, and to model the arbitrary distribution of particles, we consider particle distribution to arise from an underlying random distribution. Building on these insights, our work extends previous works by analyzing the rank of kernel matrices un- der arbitrary particle distributions using a probabilistic framework. This novel perspective provides a deeper understanding of the rank growth and its variability, as outlined in the key highlights of this article. Highlights of The Article: The following points are the main highlights of this article, which showcase the unique contributions made in this domain. 2 • To study the behavior of the rank of kernel matrices due to the interactions of arbitrary particles in the neighboring clusters, we assume that the particles are randomly distributed in the respective domains. • We have introduced the notion of random rank R for the kernel matrices with randomly distributed particles in the respective domains with all possible interactions in d-dimensions, which provides a rigorous analysis of – The expected growth of R. Theorem 3.1 provides deeper insights into how the kernel matrices behave under random conditions, which has not been addressed previously. – The change in variance of R. Theorem 3.2 provides a clear understanding of how the rank deviates from its expected value. This analysis helps to explain the stability and variability of hierarchical low-rank algorithms in practical settings. • To the best of our knowledge, this is the first comprehensive study on the expected growth and variance growth of random kernel matrices for different interactions. Our findings provide a fresh perspective on the robustness and efficiency of hierarchical matrix algorithms in practical settings. Outline of the Article: The article is organized as follows. In Section 2, we introduce the basic terminologies, definitions, and founda- tional concepts that are used throughout the article. In Section 3, we state the main theorems and mention the kernel functions that are used to verify our theorems. In Section 4, we formally define the problem setup, including the choice of random particle distribution, how the random kernel matrix K is generated, and the random variable R. Section 5 presents the proof of theorems on the expected growth of the random rank R and its variance for different interactions in d-dimensions. In Section 6, we will provide numerical experiments to validate our theoretical results, focusing on one-, two-, and three-dimensional cases for all possible interac- tions. Finally, in Section 7, we summarize the key insights, discuss the implications of our results, and suggest potential directions for future research. 2 Preliminaries In this section, we are going to discuss some definitions, notations, and lemmas that we will use in the article. We also briefly discuss low-rank approximation of certain kernel matrices using polynomial interpolation and lastly, we will discuss some fundamental probability concepts in our context. 2.1 Some Notations and Definitions: The terminologies that we are going to use frequently in this article are given below. Source Domain: The compact set Y ⊂Rd is said to be the source domain if it contains the source particles in its interior. We will consider that the particles are arbitrarily distributed within the interior of Y . Target Domain: The compact set X ⊂Rd is said to be the target domain if it contains the target particles in its interior. Here also, the target particles are arbitrarily distributed within the interior of X. Further, we also consider that int(X) ∩int(Y ) = ∅, where int(X) = interior of the set X. Kernel Function: Throughout the article, we will consider K : int(X) × int(Y ) →R as the kernel function. The kernel function K encodes the strength of interaction between a particle from the source domain and a particle from the target domain. The choice of a kernel depends on the underlying physical model and the nature of the interaction between particles. Kernel Matrix: The matrix K ∈Rm×n whose (i, j)th entry is given by Kij = K(xi, yj); where xi ∈int(X), yj ∈int(Y ) (2.1) is called the kernel matrix. Definition 2.1. (Numerical ε-rank) Given any ε > 0, the ε-rank of a matrix A ∈Cm×n is denoted and defined as rε = max  k ∈{1, 2, . . . , min {m, n}} : σk σ1 ≥ε  , where σi’s are singular values of A arranged in decreasing order. 3 Definition 2.2. (Numerical max-rank) Given any ε > 0, the max-rank of a matrix A ∈Cm×n is denoted as pε where pε = min n rank  ˜K  : ˜K ∈S(∞) ε o and S(∞) ε is defined as S(∞) ε = n ˜K ∈Cm×n : K −˜K ∞∗< ε ∥K∥∞∗ o . In our theorems, we get an upper bound of pε. Here, max-norm of a matrix A ∈Cm×n is defined as ∥K∥∞∗= max i,j |K (i, j)|. Note that Definition 2.1 can also be interpreted as follows: Given any ε > 0, the ε-rank of a matrix A ∈Cm×n is denoted as rε where rε = min n rank  ˜K  : ˜K ∈S(2) ε o and S(2) ε is defined as S(2) ε = n ˜K ∈Cm×n : K −˜K 2 < ε ∥K∥2 o . Remark 2.1. Here, ε-rank and max-rank are related through equivalency between ∥·∥2 and ∥·∥∞∗(see Ap- pendix C). 2.2 Kernel Matrix Approximation: Here, we discuss how one can approximate the kernel matrices, and there are many ways to do that. But, in our case, it will be done by approximating the kernel function using Lagrange interpolation. For that purpose, we need the following lemmas. Lemma 2.1. Let a function f be analytic in [−1, 1], and it can be analytically continuable to a Bernstein ellipse for some ρ > 1 where |f(x)| ≤M for some M. Then for any n ∈N, its Chebyshev interpolants pn satisfy |f −pn| ≤4Mρ−n ρ −1 . This lemma is proved in [ [40], Theorem 8.2]. We now state a generalized version of Lemma 2.1 for higher dimensional setting, which is given below Lemma 2.2. Let f : V = [−1, 1]d ⊂Rd →R be analytic and it has an analytic extension to some generalized Bernstein ellipse B (V, ρ), where ρ = (ρ, ρ, . . . , ρ) with ρ > 1. Now, if ∥f∥∞∗= max y∈B(V,ρ) |f(y)| ≤M, then for any n ∈N, its interpolating multivariate polynomial ˜f satisfy f −˜f ∞∗≤4MVd ρ−p ρ −1, where p is a predefined constant and Vd is a constant depending on the dimension d and ρ. The Lemma 2.2 is discussed in [30]. Also, a detailed discussion on Bernstein ellipse, generalized Bernstein ellipse, and analytic continuation can be found there. The generalized version of Lemma 2.2 is stated and proved in [18]. However, the above two lemmas provide a way to approximate the kernel function K(x, y). Below, we have shown the approximation of kernel function K(x, y) along y. ˜K (x, y) = X k∈I K x, yk Lk (y) where I is the index set of the interpolating points, and Lk is the Lagrange basis. Now, using this approximated kernel function ˜K, we can get the approximated kernel matrix ˜K whose (i, j)th entry is given by ˜Kij = ˜K (xi, yj). The matrix is then factorized as ˜K = UV T , which is given as follows, where U ∈Rm×|I| and V ∈Rn×|I|. ˜K =   K(x1, y1) K(x1, y2) · · · K(x1, y|I|) K(x2, y1) K(x2, y2) · · · K(x2, y|I|) ... ... ... ... K(xm, y1) K(xm, y2) · · · K(xm, y|I|)   | {z } U × V T z }| {   L1(y1) L1(y2) · · · L1(yn) L2(y1) L2(y2) · · · L2(yn) ... ... ... ... L|I|(y1) L|I|(y2) · · · L|I|(yn)   The rank of the approximated kernel matrix ˜K is nothing but |I|. Now, the following lemma guarantees the cardinality of the set I when the source and target domains are separated by some distance, as mentioned in the following lemma. Lemma 2.3. Let X be the source and Y be the target hyper-cube in d-dimensions such that they are at least one hyper-cube away, and let K be the corresponding interaction matrix. Then for any given δ > 0, there exists an approximated ˜K of rank pδ such that ∥K−˜ K∥∞∗ ∥K∥∞∗ < δ with pδ ∈O  logd c δ  , where c is a kernel dependent constant. 4 The Lemma 2.2 is used to prove the Lemma 2.3, which one can find in the article [30], where one can also find a detailed discussion about the kernel-dependent constant c. Here, we see that the numerical rank pδ of the approximated matrix depends on the kernel and desired accuracy δ, not on the size of the matrix. 3 Main Results Theorem 3.1 and Theorem 3.2 are among the main contributions of this article. Theorem 3.1 explains the expected growth of the random rank of the kernel matrix due to the interaction of random source and target domains (where source and targets are neighbors) in all possible dimensions, and Theorem 3.2 guarantees the growth of variance of R. The proof of these theorems can be found in Section 5. Note that the theorems consider all possible positioning of the source and targets, whereas the Lemma 2.3 deals with only those domains that are separated by a distance. Another point to note is that these two theorems are applicable to an extensive range of kernels, which frequently occur in practice. We have provided numerical validation of the proofs in Section 6 for the kernels K1, K2, . . . , K7 mentioned in Table 1 at the end of this section. We now state the theorem on expected rank growth for all possible hypersurface-sharing interactions. Expected Rank Growth , Theorem 3.1. Let the compact hyper-cubes X, Y ⊂Rd be the source and target domains, respectively, that share a d′-dimensional hyper-surface, and each domain contains n i.i.d. random particles in its interior, and the underlying probability distribution is the uniform probability distribution. Let K be the corresponding random kernel matrix. Now, for any given small enough tolerance δ > 0, there exists an approximated random kernel matrix ˜K such that ∥K−˜ K∥∞∗ ∥K∥∞∗ < δ with random rank R such that (i) for d′ = 0, i.e. for vertex-sharing interaction E [R] ∈O (p log2d(n)), (ii) for d′ ̸= 0, i.e. for d′-dimensional hyper-surface sharing interaction E [R] ∈O  pnd′/d , where p is a constant depending on δ and the kernel. Now, similarly, we state the growth of variance of the random rank R of the kernel matrix K due to all possible interactions. Variance Growth of the Rank , Theorem 3.2. Let the compact hyper-cubes X, Y ⊂Rd be the source and target domains, respectively, that share a d′-dimensional hyper-surface, and each domain contains n i.i.d. random particles in its interior, and the underlying probability distribution is the uniform probability distribution. Let K be the corresponding random kernel matrix. Now, for any given small enough tolerance δ > 0, there exists an approximated random kernel matrix ˜K such that ∥K−˜ K∥∞∗ ∥K∥∞∗ < δ with random rank R such that (i) for d′ = 0, i.e. for vertex-sharing interaction Var(R) ∈O (log log log n)2 . (ii) for d′ ̸= 0, i.e. for d′-dimensional hyper-surface sharing interaction Var(R) ∈O  nd′/d log log log n 2 . We have performed numerical experiments to verify the results, and for that, we have used kernel functions given in Table 1. 4 Fundamental Framework and Problem Setup In this section, we provide a foundational discussion of the probability distribution and key concepts that are essential for understanding the results presented in this article. The distribution and the random variables defined here play a crucial role in the proofs of the theorems stated in Section 3 and provide the necessary framework for analyzing the behavior of the random variable R of our interest. For the sake of simplicity, from now onward, we are going to consider the hyper-cube Y = [0, l]d as the source domain and the hyper-cube X = [0, l]d′ × [−l, 0]d−d′ as the target domain in the d-dimensions. Here, for different values of d′, the source and target will encounter different interactions, which are given below 5 Sl. No. Kernel Functions . 1 K1 (x, y) = 1 ∥x −y∥2 2 K2 (x, y) = log (∥x −y∥2) 3 K3 (x, y) = sin (∥x −y∥2) 4 K4 (x, y) = exp (i ∥x −y∥2) ∥x −y∥2 5 K5 (x, y) = 1 p 1 + ∥x −y∥2 6 K6 (x, y) = exp (−∥x −y∥2) 7 K7 (x, y) = ∥x −y∥2 Table 1: Table of Kernel Functions where x ∈int(X) and y ∈int(Y ). Here K1 is the Green’s function for the 3D Laplace operator; K2 is the Green’s function for the 2D Laplace operator; K4 is the 3D Helmholtz kernel with wave number 1; K6 is the Mat´ern covariance kernel, and K7 is the poly-harmonic radial basis function. • For d′ = −1, the source Y and the target X are far-field1 domains, where far-field is defined as X and Y are at least one hyper-cube away, i.e. dist (X, Y ) ≥η min {diameter(X), diameter(Y )}, for some η > 0. The far-field domains in one, two, and three dimensions are shown in the following Figure 1. X Y X Y X Y (a) 1D far-field domains. (b) 2D far-field domains. (c) 3D far-field domains. Figure 1: 1D, 2D, 3D far-field domains • For d′ = 0, the source Y and the target X shares a vertex. Figure 2 illustrates the vertex-sharing domains in one, two, and three dimensions. X Y X Y X Y (a) 1D vertex-sharing domains. (b) 2D vertex-sharing domains. (c) 3D vertex-sharing domains. Figure 2: 1D, 2D, 3D vertex-sharing domains • For d′ = 1, the source Y and the target X shares an edge. This case will arise when the dimension of the domains is at least two. In two and three dimensions, this case is illustrated in Figure 3. X Y X Y (a) 2D vertex-sharing domains. (b) 3D edge-sharing domains. Figure 3: 2D, 3D edge-sharing domains 1For the sake of completeness, we use the notation d′ = −1 to represent far-field domains. There is nothing special to choose −1 here. 6 • For d′ = 2, the source Y and the target X shares a face. This case will arise when the dimension of the domains is at least three. The face-sharing domain in three dimensions is shown below in Figure 4. X Y Figure 4: 3D face-sharing domains • For d′ ≥3, the source Y and the target X shares a d′-dimensional hyper-surface. 4.1 Choice of Probability Distribution We begin this subsection by introducing the probability distribution that forms the foundation of our analysis as we model the ‘arbitrary distribution of particles’ with a proper probability distribution. In each of domains X and Y , n independent and identically distributed (i.i.d) particles are drawn from Uniform Probability distributions UX and UY respectively. The choice of Uniform Probability Distribution is driven by its suit- ability and simplicity for modeling the random matrix structure considered in this study. Apart from simplicity in deriving analytical results, several factors influenced this choice, which are summarized as follows: (i) Instead of being tied to a fixed grid, uniform probability distribution allows for flexible grid placement during the discretization of the domains. (ii) Uniform distribution ensures that every possible particle within the domain is equally likely to be chosen. (iii) The configuration that the particles sampled randomly from the uniform probability distribution can be interpreted as a “perturbed” version of a uniform grid, which aligns with the principles of Smoothed Analysis [38] in the following ways: (a) Choosing uniform probability distribution, we can implicitly introduce one kind of random perturba- tion to an ideal grid2, as each particle selected from the domain could vary slightly from an exact grid particle. This randomness in particle selection in the domains aligns with the smooth perturbations considered in the smooth analysis. (b) The use of a uniform probability distribution can provide more realistic estimates of algorithm perfor- mance, as it takes a range of possible input particles into account rather than specific, deterministic ones. (c) By choosing uniform probability distribution, we are spreading the likelihood evenly across the do- mains, making it less likely to encounter those extreme cases that result in an analysis that better reflects typical performance (which is quite similar to average-case analysis of inputs) rather than focusing on the rare, worst-case analysis of inputs particles. 4.2 Generation of Random Kernel Matrix Following the discussion on the probability distribution, we now turn to the generation of the random kernel matrix K. As particles are uniformly distributed in their respective domains and the kernel matrix reflects the interactions between these random particles through a kernel function K, forms the random entries of the matrix K as discussed in Equation 2.1. Although the kernel matrix K exhibits randomness, this randomness is structured3 and is not entirely arbitrary. This is because: i. The entries Kij of the kernel matrix K are random variables whose distribution depends not only on UX and UY but also on the kernel function K. ii. The distribution and correlation structure of the entries Kij of the kernel matrix K are strongly influenced by the geometrical configurations of the domains X and Y . iii. The distribution of Kij will often not be uniform (often highly structured). For example, Suppose we choose the kernel function K6. In that case, the entries of K will be related to the exponential of the Euclidean distance between the random particles, leading to a non-uniform distribution. 2A detailed discussion on ‘Uniform Distribution as Randomly Uniform Perturbed Grid’ can be found in Appendix E 3Structured Randomness involves randomness that is governed by some underlying structure, pattern, or correlation between the elements. This structure can result from dependencies, constraints, or specific relationships defined by a process or function. 7 4.3 Rank as a Random Variable Once the random kernel matrix K is generated, one of the key aspects of its structure is its rank. Unlike deterministic kernel matrices, where the rank is fixed based on the matrix dimensions and locations of the source and target domains, the rank of the kernel matrix derived from random source and target particles is a random variable. Several factors influence this random variable: i. The size of the kernel matrix K (i.e., the number of random source and target particles) ii. The spatial configuration of source and target domains has a substantial impact on the rank of K. The rank can be significantly different in cases when the domains are far-field or share a d′-dimensional hyper-surface. iii. The rank of the kernel matrix is not only depends on the size of K and geometry of the source and target domains but also heavily influenced by the choice of the kernel function K. For example, for the choice kernel function of K3 from Table 1, regardless of the geometry or distribution of the source and target particles in one dimension, the rank of the kernel matrix is always 2. The above-outlined points can be verified in the numerical results presented in Section 6. As determining the exact distribution of the rank is quite challenging, we introduce the random variable R, (given in Equation 4.15), which serves as an upper bound of the rank of kernel matrix under random inputs. Throughout this article, we investigate the behavior of R to understand how the rank of the kernel matrix varies and obtain bounds on the first couple of moments of the random variable. 4.4 Basic Probability Theory In this subsection, we explore fundamental probability concepts that are crucial for our analysis, and along with this, we define some required tools. 4.4.1 Distribution of particles in a domain and its sub-domains. Let n number of independent and identically distributed (i.i.d) particles x1, x2, . . . , xn fall in the hyper-cube V = [a, b]d under the uniform probability distribution4. The reason behind choosing this particular distribution is discussed in Subsection 4.1. We define the random variable N (V ′) as the number of particles that fall within a sub-hyper-cube V ′ = [c1, d1]d of V , where [c1, d1] ⊆[a, b]. Then, the probability of having exactly k particles in V ′ is given by P (N (V ′) = k) = n k  qk (1 −q)n−k ; where q = (d1 −c1)d (b −a)d . (4.1) We now define another random variable that we are going to use frequently in the article. The random variable is defined as follows. ZV ′ n = min {N (V ′) , p} , where p is a constant. (4.2) Here, ZV ′ n takes any natural number in between 0 and p and for any i ∈{0, 1, . . . , p}, we have P  ZV ′ n = i  = ( P (N (V ′) = i) if i < p P (N (V ′) ≥p) if i = p (4.3) We now demonstrate an application of the trinomial distribution in our setting as follows: Let us consider two non-intersecting sub-hyper-cubes V ′ = [c1, d1]d and V ′′ = [c2, d2]d of V . Then the probability that exactly l, m particles fall in the respective sub-hyper-cubes is given by P (N (V ′) = l, N (V ′′) = m) = n l n −l m  q1 lq2 m(1 −q1 −q2)(n−l−m); where qi = (di −ci)d (b −a)d , i = 1, 2. (4.4) Now, one more application of trinomial distribution can be given for the random variable defined in Equation 4.2 as follows: P  ZV ′ n = l, ZV ′′ n = m  =          P (N (V ′) = l, N (V ′′) = m) if l < p, m < p P (N (V ′) ≥p, N (V ′′) = m) if l = p, m < p P (N (V ′) = l, N (V ′′) ≥p) if l < p, m = p P (N (V ′) ≥p, N (V ′′) ≥p) if l = p, m = p (4.5) 4In Appendix A, the consideration of more general probability distributions is discussed. 8 4.4.2 Moments and Dependencies of Random Variables. We now discuss the expectation and the variance of the random variables that we have defined in Equation 4.1 and Equation 4.2. As, N (V ′) is nothing but a binomial distribution with parameters n, q, corresponding expectation and variance is E [N (V ′)] = nq and Var(N (V ′)) = nq(1 −q). (4.6) Now, the expectation of the random variable ZV ′ n is given below E h ZV ′ n i = p X i=0 i P  ZV ′ n = i  = p + p−1 X i=0 (i −p) P  ZV ′ n = i  = p + p−1 X i=0 (i −p) P (N (V ′) = i) (4.7) = p − p X i=0 i P (N (V ′) = i) (4.8) and the variance can be derived as follows Var  ZV ′ n  = Var  p −ZV ′ n  = p X i=0 (p −i)2 P  ZV ′ n = i  − p X i=0 (p −i) P  ZV ′ n = i !2 = p X i=0 i2 P (N (V ′) = i) − p X i=0 i P (N (V ′) = i) !2 (4.9) Now, for any non-intersecting sub-hyper-cubes V ′, V ′′ of V , the expectation of the product and covariance of the random variable defined in Equation 4.1, can be easily derived and is given below E [N (V ′) N (V ′′)] = n(n −1)q1q2 and Cov (N (V ′) , N (V ′′)) = −nq1q2 (4.10) Remark 4.1. The negative covariance indicates that if the number of particles increased in one of the sub- hyper-cubes, then the number of particles should decrease in the other sub-hyper-cube. Now, the covariance between ZV ′ n , ZV ′′ n for the non intersecting sub-hyper-cubes V ′, V ′′ of V , can be derived as follows Cov  ZV ′ n , ZV ′′ n  = Cov  p −ZV ′ n , p −ZV ′′ n  = p X l=0 p X m=0 lm (P (N (V ′) = l, N (V ′′) = m) −P (N (V ′) = l) P (N (V ′′) = m)) (4.11) Similarly, the covariance between ZV ′ n and N (V ′′) for the non intersecting sub-hyper-cubes V ′, V ′′ of V , can be derived as Cov  ZV ′ n , N V ′′ = p X l=0 n X m=0 lm P N V ′ = l, N V ′′ = m  −P N V ′ = l  P N V ′′ = m  (4.12) 4.4.3 Continuous Approximation of Discrete Distributions. In many practical scenarios of discrete random variables, particularly while dealing with large sample sizes or simplifying the computation of probabilities, we employ the method of continuous approximation. The Central Limit Theorem [36] [8] is the primary tool that ensures that we can do so. We now briefly discuss the normal approximation to the binomial distributions. Lemma 4.1. Let Sn be a binomial random variable with parameter n, q, i.e. Sn ∼Binomial(n, q), then for a, b ∈R P a ≤ Sn −nq p nq(1 −q) ≤b ! →P (a ≤Z ≤b) , as n →∞, where Z is the standard normal distribution, i.e. Z ∼N(0, 1). To ensure better approximation, we need the following assumptions 9 (i) the quantities nq and n(1 −q) should be a large value (some authors suggest that if both the values are at least 10, then we can get a good approximation, and few suggest 5 is sufficient). (ii) continuity correction is required for best approximation [36]. Now, using Berry-Esseen theorem [14], we can approximate the probability defined in Equation 4.1 using the CDF of N(0, 1), i.e. Φ (x) = 1 √ 2π Z x −∞ e−t2/2dt and the corresponding error of approximation is P (N(V ′) = k) − Z b(n) k,q a(n) k,q 1 √ 2π e−t2/2dt ≤2C(1 −2q + 2q2) p nq(1 −q) ∈O  1 √n  , where a(n) k,q = k−0.5−nq √ nq(1−q) and b(n) k,q = k+0.5−nq √ nq(1−q). Also note that C is independent of all quantities in the above expression. Now similarly, using the Berry-Esseen theorem for multivariate cases [7], we can approximate the trinomial distribution defined in Equation 4.4, using the bivariate normal distribution [39] and the corresponding error of approximation is given by P (N (V ′) = l, N (V ′′) = m) − ZZ D f(x, y)dxdy ≤4M √n , where M is some constant. where 1. l + m ≤n. 2. f(x, y) = 1 2π p 1 −ρ2 e−Q(x,y)/2(1−ρ2) is the pdf of bivariate normal distribution with mean vector µ = 0 0  and covariance matrix Σ = 1 ρ ρ 1  , where (a) Q(x, y) = x2 + y2 −2ρxy. (b) ρ = − r q1q2 (1 −q1) (1 −q2) be the correlation coefficient between N (V ′), N (V ′′). 3. D is a rectangular region in xy plane containing the point  l−nq1 √ nq1(1−q1), m−nq2 √ nq2(1−q2)  defined as D = ( (x, y) : l −0.5 −nq1 p nq1 (1 −q1) ≤x ≤l + 0.5 −nq1 p nq1 (1 −q1) ; m −0.5 −nq2 p nq2 (1 −q2) ≤y ≤m + 0.5 −nq2 p nq2 (1 −q2) ) 4.5 Hierarchical Subdivision of the Source Domain To define the random variable R, we use hierarchical subdivision on the source domain Y . This subdivision is done in a way that adapts to the relative positioning of the source and target domains. To get a clear understanding of this, we discuss the one-dimensional case first, and then we move to the general d-dimensional case. Subdivision in One Dimension The hierarchical subdivision will take place in this dimension only when the source Y and the target X share a vertex. The source domain Y is hierarchically subdivided using an adaptive binary tree as shown in Figure 5, where each level of subdivision halves the domain: • At level 1, Y is subdivided into two equal parts Y1, which shares a vertex with the X and Y1,1, which does not. • At level 2, Y1 is now subdivided again into two equal parts Y2 and Y2,1 where Y2 shares a vertex with the X, and so on. The hierarchical subdivision continues up to level κ, where κ = ⌊log2 n⌋. The process can be summarized as: Y = Y1 ∪Y1,1 | {z } Level 1 = Level 2 z }| { Y2 2[ i=1 Yi,1 = · · · = Level κ z }| { Yκ κ[ i=1 Yi,1 . 10 Y1,1 Y2,1 Yκ X Y −l 0 l Figure 5: Sub-division of the source Y = [a, b] up to the level κ. General Case: Subdivision in d Dimensions In higher dimensions, the hierarchical subdivision is carried out using an adaptive 2d-tree, where Y is subdivided into multiple parts based on shared hyper-surfaces between the source and target domains as shown in Figure 6 and Figure 7 for two- and three-dimensional interactions respectively. • At level 1, Y is subdivided into 2d equal parts Y1,l for l = 1 : 2d. • At level 2, those Y1,l’s that share a d′-dimensional hyper-surface with the target domain X will again be subdivided into 2d equal parts. This subdivision is continued up to level κ, where κ = ⌊log2d n⌋. The hierarchical process in d-dimensions can be expressed as: Y = at level 1 z }| { Y1 h1 [ l=1 Y1,l = · · · = at level i z }| { Yi i[ k=1 hk [ l=1 Yk,l = · · · = at level κ z }| { Yκ κ[ k=1 hk [ l=1 Yk,l where, hk = 2d′k  2d−d′ −1  for k = 1 : κ and Yi’s are the union those of Yi,l’s that share a d′-dimensional hyper-surface with X at level i = 1 : κ, i.e. Yi = 2d [ l=hk+1 Yi,l for i = 1 : κ. X Y1,1 Y1,2 Y1,3 Y2,1 Y2,2 Y2,3 Y3,1 Y3,2 Y3,3 X Y1,1 Y1,2 Y2,1 Y2,2 Y2,3 Y2,4 Y3,1 Y3,2 Y3,3 Y3,4 Y3,5 Y3,6 Y3,7 Y3,8 (a) Vertex-sharing interaction. (b) Edge-sharing interaction. Figure 6: Sub-division of the source Y in two dimensions. (a) Vertex-sharing interaction. (b) Edge-sharing interaction. (c) Face-sharing interaction. Figure 7: Sub-division of the source Y in three dimensions. 11 We now define the random variables associated with the number of particles in each subdomain. Let: • Nk,l (Nk,l = N (Yk,l) as defined in Equation 4.1) denotes the number of particles in the sub-domain Yk,l for k = 1 : κ and l = 1 : hk. • Mκ (Mκ = N (Yκ) as defined in Equation 4.1) represents the number of particles in the sub-domain Yκ, which is the final level subdivision that shares a d′-dimensional hyper-surface with X. The total number of particles in the source domain Y is the sum of the particles in Yκ and all the subdivisions Yk,l’s and thus we have n = Mκ + κ X k=1 hk X l=1 Nk,l. where n is the total number of particles in Y . Now, due to the uniform distribution, the probability of a particle being located in a specific sub-domain is proportional to the size of that sub-domain. More specifically: • The probability of a particle being located in the sub-domain Yk,l is given by qk = 1 2dk for each l, where d represents the dimension of the domains. • The probability of a particle being in Yκ, the sub-domain at the finest level, is qκ = 1 2(d−d′)κ , where d′ is the dimension of the shared hyper-surface between X and Y . This probabilistic structure will later allow us to estimate the rank of the random kernel matrix K. 4.6 Low-Rank Matrix Construction The hierarchical subdivision of the source domain Y allows us to generate some matrices from the kernel matrix K that correspond to the interactions between the target domain X and the subdivided regions of the source domain Y and then efficiently approximate those matrices as a low-rank matrix by leveraging the far-field approximation provided by Lemma 2.2. This approach extends naturally to higher dimensions, with a one- dimensional case shown in Figure 8 and Figure 9 for pictorial clarity. The construction of the sub-matrices proceeds as follows: • The matrix Kk,l due to the interaction between target domain X and the subdivided source domain Yk,l for k = 1 : κ and l = 1 : hk (the one-dimensional case is as shown in Figure 8), is given by (Kk,l)i,j = ( K (xi, yj) where xi ∈int(X) and yj ∈int(Yk,l). 0 elsewhere. Yk,1 X −l 0 l Kk,1 = Figure 8: The target domain X and the subdivided source domain Yk,1 at level k with the corresponding matrix Kk,1. • The matrix Kκ due to the interaction between target domain X and the subdivided source domain Yκ (the one-dimensional case is as shown in Figure 9), is given by (Kκ)i,j = ( K (xi, yj) where xi ∈int(X) and yj ∈int(Yκ). 0 elsewhere. Yκ X −l 0 l Kκ = Figure 9: The target domain X and the subdivided source domain Yκ at level κ, with corresponding matrix Kκ. 12 Hence, we can write the kernel matrix K as the following sum K = κ X k=1 hk X l=1 Kk,l + Kκ (4.13) Now, if ˜Kk,l and ˜Kκ be the approximations of Kk,l and Kκ respectively, then the approximation of the kernel matrix K can be written as follows ˜K = κ X k=1 hk X l=1 ˜Kk,l + ˜Kκ (4.14) Here, the matrix ˜K is a low-rank approximation of the kernel matrix K. 4.7 Estimating Random Rank: The Random Variable R With all the necessary components now in place, we are ready to define the random variable R, which encap- sulates the complexity of the rank behavior of the approximated matrix ˜K of he kernel matrix K due to the interaction between the source domain Y and the target domain X through the kernel function K. Here, R depends on how well the matrices Kk,l and Kκ are approximated, which in turn depends on the geometry of the domains and the accuracy of the approximation. The random variable R is formulated using the matrices Kκ and Kk,l for k = 1 : κ and l = 1 : hk, incorporating their ranks into the following expression: R = κ X k=1 hk X l=1 Zk,l n + Mκ, where Zk,l n = min {Nk,l, p} (4.15) Here’s a breakdown of the terms: • The random variable Zk,l n serves as an upper bound of the rank of the random matrix ˜Kk,l. Its value is influenced by the approximation accuracy δ and depends on both the number of particles Nk,l in Yk,l and the rank truncation parameter p, which is determined by Lemma 2.3. • The random variable Mκ represents an upper bound of the rank of the matrix ˜Kκ, capturing the interaction of X and Yκ. 4.8 Results from Hierarchical Subdivision and Random Variable Definitions With the hierarchical subdivision of the domains and the definition of the corresponding random variables Nk,l, Mκ and Zk,l n , we are now in a position to state some immediate results which provide some insight into the expected behavior, variance, and covariance of the random ranks associated with the matrices as the number of particles n grows large. Result 4.1. For any fixed positive integer k ≤κ and for all l = 1 : hk, the limit of the expected random rank Zk,l n of the corresponding matrix ˜Kk,l is p, i.e., E  Zk,l n  →p, as n →∞. Proof. From Equation 4.8, we have E  Zk,l n  = p − p X i=0 i P (Nk,l = i) (4.16) Clearly, E  Zk,l n  ≤p. Now for i = 1 : p, ∃Ξ > 0 (Ξ independent of n) such that P (Nk,l = i) ≤ b(n) i,k Z a(n) i,k 1 √ 2π e−t2/2dt + Ξ √n (4.17) Here, a(n) i,k = i−0.5−µk σk and b(n) i,k = i+0.5−µk σk , where µk = nqk, σk = p nqk (1 −qk). Now, for some c(n) i,k such that ˜a(n) i,k ≤c(n) i,k ≤˜b(n) i,k , we have E  Zk,l n  ≥p − p X i=0 i 1 √ 2π e −  c(n) i,k 2/2 + p(p + 1) 2 Ξ √n Now, as n →∞, c(n) i,k →−∞and hence, we have lim n→∞E  Zk,l n  = p. 13 Result 4.2. For any fixed positive integer k ≤κ and for all l = 1 : hk, the limit of the variance of random rank Zk,l n of the corresponding matrix ˜Kk,l is 0, i.e., Var Zk,l n  →0, as n →∞. Proof. Using Equation 4.9, we have an upper bound the variance of Zk,l n as Var Zk,l n  = p X i=0 i2P (Nk,l = i) − p X i=0 iP (Nk,l = i) !2 ≤ p X i=0 i2P (Nk,l = i) As k and p are fixed, there exists n0 ∈N such that for all n > n0, we have p < µk,l = nqk and the probability P (Nk,l = i) increases as i increases up to p and thus we have Var Zk,l n  ≤p(p + 1)(2p + 1) 6 P (Nk,l = p) (4.18) Now using Equation 4.17, we have upper bound of Var Zk,l n  as Var Zk,l n  ≤p(p + 1)(2p + 1) 6  1 √ 2π e −  c(n) p,k 2/2 + Ξ √n  , where a(n) p,k ≤c(n) p,k ≤b(n) p,k. ≤p(p + 1)(2p + 1) 6  1 √ 2π e −  c(n) p,k 2/2 + Ξ √n  (4.19) Now, a(n) i,k →−∞as n →∞and hence, we have lim n→∞Var Zk,l n  = 0. Result 4.3. For any k1, k2 and l1, l2 such that k1, k2 = 1 : κ and l1, l2 = 1 : hk, the limit of the covariance of the random ranks Zk1,l1 n and Zk2,l2 n corresponding to the matrices ˜Kk1,l1 and ˜Kk2,l2 respectively is 0, i.e., Cov Zk1,l1 n , Zk2,l2 n  →0, as n →∞. Proof. Case I: k1 ̸= k2 and for any l = 1 : hk. The result follows from the inequality Cov Zk1,l n , Zk2,l n  ≤ r Var  Zk1,l n  Var  Zk2,l n  and for fixed k1, k2; Var Zki,l n  →0, as n →∞where i = 1, 2. Case II: k1 = k2 = k (say) and l1 ̸= l2. This case will arise in all d-dimensional settings except for d = 1. Now as for some fixed k, the sub hyper-cubes Yk,l1 and Yk,l2 of Y are identical but a shift only, the distribution of Zk,l1 n and Zk,l2 n defined in Equation 4.3 and their joint distribution defined in Equation 4.5 will be the same for all l1, l2 = 1 : hk and thus we have Cov Zk,l1 n , Zk,l2 n  = Var Zk,l n  , for any l = 1 : hk. Now as Var Zk,l n  →0, as n →∞, we can conclude that Cov Zk,l1 n , Zk,l2 n  →0, as n →∞. Remark 4.2. The above result is also true if one of the random variables is replaced by Mκ, i.e., for some fixed k ̸= κ Cov Zk,l n , Mκ  →0, as n →∞. Remark 4.3. As n →∞, the random variables Zk1,l1 n and Zk2,l2 n (for some k1, k2 and l1, l2) converge to the p with no variability. Thus, in the limit, Zk,l1 n and Zk,l2 n both behave like deterministic variables that always take the value p. Now, to establish a solid foundation for the proofs (in Section 5) of our main theorems, we explore a few key lemmas that will be crucial in understanding the behavior of the random variable R. Lemma 4.2. For any large value of n, if κ = ⌊log2d n⌋then for some fixed l the sum of variances κ X k=1 Var Zk,l n  is bounded by C such that C ∈O (log log log n). Proof. Without Loss of Generality, let us take l = 1 (as Equation 4.19 is independent of l). Now, there exist ˜k such that the term P (Nk,1 = p) in Equation 4.18 is sufficiently small for all k ≤˜k. More precisely, for any small ε > 0, we have P (Nk,1 = p) < ε for all k ≤˜k, where ˜k = j log2d  1 + n+1−2p M 2+2p−1 k , M ∈O (log (1/ε)) and also κ −˜k < Ω+ log2d M 2 + 2p −1  , where Ωis some constant (independent of n). Now if we choose ε = 6c1 κp(p+1)(2p+1), then ˜k X k=1 Var Zk,1 n  ≤ ˜k X k=1 p(p + 1)(2p + 1) 6 P (Nk,1 = p) < c1 14 where c1 is a constant independent of n. Now as  κ −˜k  ∈O (log log log n) and Var Zk,1 n  is bounded by p(p + 1)(2p + 1)/6 (from Equation 4.18), we can conclude that there exist C such that κ X k=1 Var Zk,1 n  ≤C, where C ∈O (log log log n). Lemma 4.3. For any large value of n, if κ = ⌊log2d n⌋then for some fixed l the sum of covariances κ X k1=1 κ X k2=k1+1 Cov Zk1,l n , Zk2,l n  ≤C, where C ∈O (log log log n)2 . Proof. Without Loss of Generality, let us take l = 1 as for some fixed k1, k2, then the upper bound of Cov Zk1,l n , Zk2,l n  defined in Equation 4.11 can be obtained as Cov Zk1,l n , Zk2,l n  ≤ p X r=0 p X m=0 rsP (Nk1,1 = p1, Nk2,1 = p2) , for some p1, p2 such that 1 ≤p1, p2 ≤p. Now for any small enough ε > 0, we have P (Nk1,1 = pi, Nk2,1 = pj) < ε, for all k1, k2 ≤˜k. Here, ˜k will take a similar form like in Lemma 4.2 and hence in a similar approach, we can conclude that there exists C such that κ X k1=1 κ X k2=k1+1 Cov Zk1,l n , Zk2,l n  ≤C, where C ∈O (log log log n)2 . Note: A detailed version of the proofs of the Lemma 4.2 and Lemma 4.3 can be found in Appendix F. Lemma 4.4. For any value of n, if κ = ⌊log2d n⌋then for some fixed l the sum of covariances κ X k=1 Cov Zk,l n , Mκ  ∈O  nd′/d log log log n  . Proof. An upper bound of Cov Zk,l n , Mκ  (defined in Equation 4.12) can be obtained as Cov Zk,l n , Mκ  ≤p2 n X m=0 mP (Nk,1 = p1, Mκ = m) for some 0 ≤p1 ≤p. Using conditional expectation, the above expression can be rewritten as Cov Zk,l n , Mκ  ≤p2 E [Mκ|Nk,1 = p1] P (Nk,1 = p1) Now as (Mκ|Nk,1 = p1) ∼Binomial  n −p1, qκ 1 −qk  , we have E [Mκ|Nk = p1] = (n −p1) qκ 1 −qk . Hence, Cov Zk,l n , Mκ  ≤p2 n −p1 2(d−d′)κ 2dk 2dk −1P (Nk,1 = p1) Now as n −p1 2dκ ∈O (1) and 2dk 2dk −1 are bounded by 2, there exist c such that Cov Zk,l n , Mκ  ≤c nd′/d P (Nk,1 = p1) . Now similarly like Lemma 4.2, we can prove that κ X k=1 Cov Zk,l n , Mκ  ∈O  nd′/d log log log n  Remark 4.4. For the vertex-sharing case, i.e., for d′ = 0 in any dimension, there exists C such that κ X k=1 Cov Zk,l n , Mκ  ≤C, where C ∈O (log log log n). 15 5 Proof of Theorems In this section, we present the proofs of the theorems stated in Section 3. In these proofs, we will use the well-known result that for any d-dimensional case, the rank kernel matrix, due to the far-field interaction [[30] Lemma 5.1] with arbitrarily distributed particles, is bounded by a constant, which dependents only on the desired accuracy δ, as validated by the results5 shown in Figure 10. 103 104 100 101 No. of Particles (n) Expected Numerical Rank c K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kfar for various kernel functions. (a) one-dimensional case 103 104 101.2 101.3 101.4 101.5 No. of Particles (n) Expected Numerical Rank c K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kfar for various kernel functions in 2D. a (b) two-dimensional case 103 104 101.6 101.7 101.8 101.9 No. of Particles (n) Expected Numerical Rank c K1 K2 K3 K4 K5 K6 K7 (b) Expected Numerical Rank growth of Kfar for various kernel functions in 2D. b (c) three-dimensional case Figure 10: Expected Numerical Rank Growth of different Kernel matrices for one-, two- and three-dimensional far-field interac- tions. 5.1 Expected Rank Growth In this subsection, we prove the expected growth of the random rank R for any d′-dimensional hyper-surface sharing domains in d-dimensions. Proof of Theorem 3.1. The expected value of R (defined in Equation 4.15) is given by E [R] = κ X k=1 hk X l=1 E  Zk,l n  + E [Mκ] Now from Equation 4.16, the expectation of Zk,l n is given by E  Zk,l n  = p − p X i=0 i P (Nk,l = i) Thus we have E [R] = κ X k=1 hk p − p X i=0 i P (Nk,l = i) ! + E [Mκ] = κ X k=1 hkp − κ X k=1 p X i=0 hki P (Nk,l = i) + nd′/d n 2dκ We now encounter the cases that d′ being zero and non-zero separately below. 5The experiments were repeated 2000 times for matrices of size less than 8100 and 500 times for matrices of size greater than 8100 in order to obtain the sample data of Numerical Ranks with δ = 10−12 in one-, two- and three-dimensions with all possible interactions of domains. 16 (i) d′ = 0: In this case the value of hk remains constant and given by hk = 2d −1  for all k = 1 : κ and hence the expectation is given by E [R] = 2d −1  κ X k=1 p − 2d −1  κ X k=1 p X i=0 i P (Nk,l = i) + n 2dκ ≤ 2d −1  κp + n 2dκ Now, κ = ⌊log2d n⌋and n 2dκ ∈O (1). Hence, for the fixed dimension d, we can conclude that E [R] ∈O (p log2d n) . (ii) d′ ̸= 0: In this case, the value of hk does not remain constant, but with some simple algebra, we have the following expected value E [R] = nd′/d n 2dκ + 2d′ 2d′ −1  2d−d′ −1   2d′κ −1  p − κ X k=1 p X i=0 hki P (Nk,l = i) ≤nd′/d n 2dκ + 2d −2d′ 2d′ −1  nd′/d −1  p Hence, we can conclude that if d-dimensional source and targets shares a d′-dimensional hyper-surface, then E [R] ∈O  pnd′/d . □ 103 104 100 101 102 No. of Particles (n) Expected Numerical Rank c log n K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kver for various kernel functions. a (a) one-dimensional case 103 104 101.4 101.6 101.8 102 102.2 No. of Particles (n) Expected Numerical Rank c log n K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kver for various kernel functions in 2D. a (b) two-dimensional case 103 104 102 102.2 102.4 102.6 No. of Particles (n) Expected Numerical Rank c log n K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kver for various kernel functions in 3D. a (c) three-dimensional case Figure 11: Expected Numerical Rank Growth of Kernel matrix for different kernels for one-, two-, and three-dimensional vertex sharing interaction. 17 103 104 102 103 No. of Particles (n) Expected Numerical Rank c n1/2 K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kedge for various kernel functions in 2D. a (a) two-dimensional edge-sharing interaction 103 104 103 104 No. of Particles (n) Expected Numerical Rank c n1/3 K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kedge for various kernel functions in 3D. a (b) three-dimensional edge-sharing interaction 103 104 102 103 No. of Particles (n) Expected Numerical Rank c n2/3 K1 K2 K3 K4 K5 K6 K7 (a) Expected Numerical Rank growth of Kface for various kernel functions in 3D. a (c) three-dimensional face-sharing interaction Figure 12: Expected Numerical Rank Growth of Kernel matrix for different kernels for two- and three-dimensional edge sharing and three-dimensional face sharing interaction. 5.2 Growth in Variance of R In this subsection, we will prove the Theorem 3.2 on the variance growth of the random rank R incorporating all possible interactions in d-dimensions as stated in Section 3. Proof of Theorem 3.2. The variance of the random variable R (defined in Equation 4.15) is given by Var(R) = κ X k=1 hk X l=1 Var Zk,l n  + Var(Mκ) + 2 κ X k=1 hk X l=1 Cov Zk,l n , Mκ  + 2 X either k1<k2 or if k1=k2 then l1<l2 Cov Zk1,l1 n , Zk2,l2 n  (5.1) Now, after some simple algebraic manipulation, Equation 5.1 can be written as Var(R) = Var(Mκ) + 2 κ X k=1 hkCov Zk,l n , Mκ  + 2 κ X k1=1 κ X k2=k1+1 hk1hk2Cov Zk1,l n , Zk2,l n  + κ X k=1 hk (2hk −2l + 1) Var Zk,l n  ≤Var(Mκ) + 2hκ κ X k=1 Cov Zk,l n , Mκ  + 2h2 κ κ X k1=1 κ X k2=k1+1 Cov Zk1,l n , Zk2,l n  + 3h2 κ κ X k=1 Var Zk,l n  ≤2d′κ n 2dκ + 2d′κ+1  2d−d′ −1  κ X k=1 Cov Zk,l n , Mκ  + 22d′κ+1  2d−d′ −1 2 κ X k1=1 κ X k2=k1+1 Cov Zk1,l n , Zk2,l n  + 3 × 22d′κ  2d−d′ −1 2 κ X k=1 Var Zk,l n  (5.2) Now we encounter the cases where d′ is zero and non-zero separately below. (i) d′ = 0: In this case Var(R) is bounded by a constant which is of O (log log log n)2 . i.e., Var(R) ∈ O (log log log n)2 . This follows immediately from Lemma 4.2, Lemma 4.3, Remark 4.4, and along with the fact n 2dκ ∈O (1). (ii) d′ ̸= 0: Due to the increased dimensionality of hyper-surface sharing between the sources and targets, Var(R) ∈O  nd′/d log log log n 2 which follows from the Lemma 4.2, Lemma 4.3, Lemma 4.4 and the fact 2d′κ n 2dκ ∈O  nd′/d . □ 18 Remark 5.1. In any dimension, for the hyper-cubes X and Y share a vertex, i.e., for d′ = 0 in d-dimensions, E [R] ∈O (p log2d n) and Var(R) ∈O (log log log n)2 suggests that the mean may increase (logarithmically), still the spread of the rank around its mean remains stable and does not widen much as the size of matrix increases. 103 104 0 0.5 1 No. of Particles (n) Var(R) of Kernel Matrix Kver c (log log log n)2 K1 K2 K3 K4 K5 K6 K7 (a) Var(R) of Kernel Matrix Kver for various kernel functions . (a) one-dimensional case 103 104 101 101.5 No. of Particles (n) Var(R) of Kernel Matrix Kver c (log log log n)2 K1 K2 K3 K4 K5 K6 K7 (a) Var(R) of Kernel Matrix Kver for various kernel functions . (b) two-dimensional case 103 104 102 103 No. of Particles (n) Var(R) of Kernel Matrix Kver c (log log log n)2 K1 K2 K3 K4 K5 K6 K7 (a) Var(R) of Kernel Matrix Kver for various kernel functions . (c) three-dimensional case Figure 13: Growth of Variance of Numerical Rank of Kernel matrix for different kernels for two- and three-dimensional vertex sharing interaction. 103 104 102 103 104 No. of Particles (n) Var(R) of Kernel Matrix Kedge cn n(log log log n)2 K1 K2 K3 K4 K5 K6 K7 (b) Var(R) of Kernel Matrix Kedge for various kernel functions . (a) two-dimensional edge-sharing interaction 103 104 103 104 No. of Particles (n) Var(R) of Kernel Matrix Kedge cn 2 3 (n 1 3 log log log n)2 K1 K2 K3 K4 K5 K6 K7 (b) Var(R) of Kernel Matrix Kedge for various kernel functions . (b) three-dimensional edge-sharing interaction 103 104 103 104 105 106 No. of Particles (n) Var(R) of Kernel Matrix Kface cn4/3 (n 2 3 log log log n)2 K1 K2 K3 K4 K5 K6 K7 (b) Var(R) of Kernel Matrix Kface for various kernel functions . (c) three-dimensional face-sharing interaction Figure 14: Growth of Variance of Numerical Rank of Kernel matrix for different kernels for two- and three-dimensional edge sharing and three-dimensional face sharing interactions. 6 Numerical Results In this section, we present the numerical results based on the theoretical framework outlined in Section 4. Specifically, we will explore how the expectation and variance of the random variable R changes depending on how the interactions between source and target domains change in one-, two-, and three-dimensions. To obtain the means and variances, we repeated the same experiment 2,000 times to collect the samples of Numerical Ranks with δ = 10−12 for matrices of size ≤8100 and 500 times for matrices of size larger than that. The experiment was conducted in MATLAB using parallel processing to enhance efficiency. 19 Results for One-Dimensional Cases: ker fun E [R] of Kernel Matrix Kfar n = 250 n = 500 n = 1000 n = 2000 n = 4000 n = 8000 n = 16000 n = 32000 K1 7 7 7 7 7 7 7 7 K2 7 7 7 7 7 7 7 7 K3 2 2 2 2 2 2 2 2 K4 7 7 7 7 7 7 7 7 K5 6 6 6 6 6 6 6 6 K6 1 1 1 1 1 1 1 1 K7 2 2 2 2 2 2 2 2 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kfar n = 250 n = 500 n = 1000 n = 2000 n = 4000 n = 8000 n = 16000 n = 32000 K1 0 0 0 0 0 0 0 0 K2 0 0 0 0 0 0 0 0 K3 0 0 0 0 0 0 0 0 K4 0 0 0 0 0 0 0 0 K5 0 0 0 0 0 0 0 0 K6 0 0 0 0 0 0 0 0 K7 0 0 0 0 0 0 0 0 (b) Variance of Numerical Ranks of Data Samples Table 2: Random rank statistics for far-field domains in 1D. ker fun E [R] of Kernel Matrix Kver n = 250 n = 500 n = 1000 n = 2000 n = 4000 n = 8000 n = 16000 n = 32000 K1 17.82 19.80 21.77 23.77 25.75 27.72 29.68 31.70 K2 16.62 18.22 19.79 21.30 22.76 24.21 25.51 26.78 K3 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 K4 17.83 19.78 21.77 23.77 25.73 27.71 29.69 31.60 K5 7.00 7.00 7.00 7.00 7.00 7.00 7.00 7.00 K6 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 K7 2.00 2.00 2.00 2.00 2.00 2.00 2.00 2.00 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kver n = 250 n = 500 n = 1000 n = 2000 n = 4000 n = 8000 n = 16000 n = 32000 K1 0.84 0.84 0.79 0.84 0.75 0.79 0.76 0.55 K2 0.80 0.76 0.75 0.77 0.71 0.72 0.59 0.45 K3 0 0 0 0 0 0 0 0 K4 0.84 0.83 0.79 0.83 0.81 0.81 0.76 0.71 K5 0 0 0 0 0 0 0 0 K6 0 0 0 0 0 0 0 0 K7 0 0 0 0 0 0 0 0 (b) Variance of Numerical Ranks of Data Samples Table 3: Random rank statistics for vertex-sharing domains in 1D. Results for Two-Dimensional Cases: ker fun E [R] of Kernel Matrix Kfar n = 225 n = 484 n = 961 n = 1936 n = 3969 n = 8100 n = 16129 K1 25.14 26.23 26.95 27.42 27.79 28.06 28.16 K2 14.32 14.81 14.96 15.00 15.00 15.00 15.00 K3 26.64 27.33 27.72 27.93 27.99 28.00 28.00 K4 26.05 27.53 28.22 28.68 29.01 29.26 29.34 K5 21.47 22.33 22.76 22.96 23.00 23.00 23.00 K6 24.27 25.12 25.66 25.96 26.08 26.16 26.18 K7 21.83 22.62 22.91 23.00 23.00 23.00 23.00 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kfar n = 225 n = 484 n = 961 n = 1936 n = 3969 n = 8100 n = 16129 K1 1.32 0.86 0.69 0.48 0.40 0.32 0.21 K2 0.72 0.25 0.05 0.00 0.00 0.00 0.00 K3 0.82 0.32 0.21 0.06 0.01 0.00 0.00 K4 1.83 1.44 0.64 0.39 0.31 0.26 0.24 K5 0.89 0.53 0.23 0.04 0.00 0.00 0.00 K6 0.69 0.71 0.36 0.12 0.09 0.13 0.14 K7 0.89 0.37 0.09 0.00 0.00 0.00 0.00 (b) Variance of Numerical Ranks of Data Samples Table 4: Random rank statistics for far-field domains in 2D. ker fun E [R] of Kernel Matrix Kver n = 225 n = 484 n = 961 n = 1936 n = 3969 n = 8100 n = 16129 K1 52.70 62.79 71.33 79.85 88.28 96.64 104.29 K2 27.28 30.40 33.03 35.54 38.19 40.70 42.88 K3 46.16 52.72 57.99 62.97 67.24 71.33 74.48 K4 53.43 63.17 71.91 80.37 89.04 97.11 104.96 K5 42.01 48.34 53.25 57.75 62.41 66.35 69.50 K6 46.04 53.14 58.78 64.15 69.33 74.02 77.74 K7 43.09 49.23 54.16 58.83 63.36 67.28 70.73 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kver n = 225 n = 484 n = 961 n = 1936 n = 3969 n = 8100 n = 16129 K1 28.54 35.52 38.30 41.26 44.62 45.34 43.00 K2 7.02 7.39 7.58 7.32 7.44 7.25 6.85 K3 19.22 21.00 21.71 17.71 16.54 14.88 14.02 K4 28.24 36.28 38.63 41.61 43.53 44.42 43.31 K5 17.44 17.88 19.92 18.96 17.05 13.84 12.07 K6 19.97 20.27 21.08 21.79 22.03 18.03 16.99 K7 18.78 19.71 19.68 19.32 16.19 15.30 14.91 (b) Variance of Numerical Ranks of Data Samples Table 5: Random rank statistics for vertex-sharing domains in 2D. ker fun E [R] of Kernel Matrix Kedge n = 225 n = 484 n = 961 n = 1936 n = 3969 n = 8100 n = 16129 K1 80.43 114.19 154.62 210.49 292.42 404.35 545.65 K2 46.99 62.52 81.04 106.71 143.78 195.15 258.36 K3 69.63 94.42 121.97 159.32 208.72 273.59 352.50 K4 80.94 114.56 155.03 211.29 292.82 405.00 562.63 K5 65.74 89.04 115.61 151.20 198.90 261.39 337.23 K6 69.84 95.16 124.15 161.81 215.79 284.33 368.51 K7 67.60 91.14 118.67 155.42 203.42 266.21 343.58 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kedge n = 225 n = 484 n = 961 n = 1936 n = 3969 n = 8100 n = 16129 K1 111.77 259.58 542.42 1029.01 2211.46 4461.50 9999.78 K2 63.84 127.42 256.70 497.60 1038.78 2097.10 4340.36 K3 90.48 174.31 327.43 624.99 1182.06 2174.75 4698.81 K4 110.98 257.50 516.83 1046.45 2228.99 4421.87 9131.77 K5 84.96 174.31 327.43 624.99 1182.06 2174.75 4178.44 K6 89.11 167.77 381.23 719.93 1284.90 2358.76 5048.11 K7 89.79 186.05 349.42 636.91 1198.51 2239.94 4475.94 (b) Variance of Numerical Ranks of Data Samples Table 6: Random rank statistics for edge-sharing domains in 2D. 20 Results for Three-Dimensional Cases: ker fun E [R] of Kernel Matrix Kfar n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 42.71 47.47 49.67 50.50 51.33 52.48 53.77 K2 47.75 54.60 60.88 63.39 65.09 66.07 66.70 K3 60.05 68.84 74.92 77.19 78.92 80.52 81.90 K4 45.31 49.22 52.53 54.23 56.66 59.21 61.16 K5 45.63 50.91 56.28 58.83 61.38 63.50 65.05 K6 55.71 63.89 69.89 71.92 73.62 74.9 75.95 K7 44.94 49.55 53.50 55.40 57.05 59.02 60.83 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kfar n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 20.88 9.33 3.93 4.71 6.51 8.00 9.05 K2 19.30 31.62 23.35 12.88 5.05 1.59 1.02 K3 33.20 30.75 16.46 11.16 8.23 6.07 3.86 K4 18.31 6.25 16.11 20.34 21.90 17.97 8.88 K5 13.78 19.01 23.04 20.71 15.37 8.72 3.26 K6 24.97 26.95 14.15 9.07 5.29 4.10 3.37 K7 12.89 11.82 12.32 12.99 12.69 10.56 8.31 (b) Variance of Numerical Ranks of Data Samples Table 7: Random rank statistics for far-field domains in 3D. ker fun E [R] of Kernel Matrix Kver n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 80.38 102.51 127.34 140.71 155.51 171.64 188.24 K2 95.44 127.39 162.17 179.04 198.18 219.72 239.93 K3 92.43 118.24 144.71 157.15 171.73 186.32 200.58 K4 81.73 105.05 129.85 142.17 158.36 174.77 188.71 K5 84.13 109.08 135.07 147.42 160.94 175.72 189.85 K6 93.00 122.25 151.82 166.75 183.09 200.73 217.53 K7 83.78 106.68 131.00 142.40 155.01 169.00 181.81 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kver n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 114.36 193.29 276.27 322.43 368.38 391.90 410.98 K2 172.11 348.35 484.78 528.45 654.80 642.32 688.23 K3 129.73 223.50 277.14 322.66 339.84 337.85 287.35 K4 119.24 181.67 277.96 313.00 345.77 373.36 515.64 K5 130.20 214.16 279.47 298.08 321.24 335.96 327.66 K6 130.19 229.05 327.38 376.41 380.03 397.55 377.13 K7 128.02 200.02 272.62 303.72 312.34 323.02 266.42 (b) Variance of Numerical Ranks of Data Samples Table 8: Random rank statistics for vertex-sharing domains in 3D. ker fun E [R] of Kernel Matrix Kedge n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 97.14 135.99 186.18 215.26 258.26 313.25 377.63 K2 114.98 166.89 234.70 277.50 332.52 399.49 487.64 K3 107.24 148.14 198.82 228.74 265.06 311.22 363.59 K4 98.04 136.73 187.38 218.90 264.43 314.35 392.67 K5 99.58 139.77 188.84 217.89 256.25 299.12 345.54 K6 108.41 152.84 208.12 242.17 283.92 334.78 392.67 K7 100.21 137.97 186.32 212.34 248.60 290.42 345.77 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kedge n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 191.54 459.26 968.90 1341.32 2068.00 3127.74 4928.74 K2 266.14 719.88 1618.99 2219.04 3171.97 4821.65 6527.24 K3 210.26 466.79 935.47 1348.08 1723.51 2516.34 3166.02 K4 202.77 458.58 968.32 1337.33 2118.39 3212.67 3844.92 K5 200.48 480.89 929.38 1231.47 1778.18 2278.84 3075.45 K6 200.48 476.91 1017.19 1393.06 1944.73 2931.39 3844.92 K7 210.49 438.54 919.84 1235.32 1655.63 2356.37 3063.37 (b) Variance of Numerical Ranks of Data Samples Table 9: Random rank statistics for edge-sharing domains in 3D. ker fun E [R] of Kernel Matrix Kface n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 126.05 204.18 342.51 451.14 637.05 925.74 1358.29 K2 145.12 244.05 416.10 544.89 766.05 1103.68 1620.39 K3 132.68 213.20 345.32 444.86 607.73 855.64 1195.01 K4 126.29 207.91 344.50 449.07 639.83 937.05 1329.50 K5 126.33 203.83 331.93 429.36 581.88 819.84 1177.91 K6 133.89 218.05 355.14 457.75 627.96 888.22 1248.62 K7 128.53 205.55 334.70 429.56 588.33 825.41 1183.07 (a) Mean of Numerical Ranks of Data Samples ker fun Var(R) of Kernel Matrix Kface n = 216 n = 512 n = 1331 n = 2197 n = 4096 n = 8000 n = 15625 K1 335.66 1399.68 5227.05 9623.13 23369.13 55586.36 117362.16 K2 335.89 1490.79 5721.25 12238.55 25217.08 58863.42 137613.76 K3 326.53 1293.61 4152.90 8770.68 18930.69 44508.93 82545.65 K4 313.89 1288.67 4894.75 10180.30 22019.31 54670.16 117559.78 K5 318.03 1216.24 4121.56 8162.81 17599.36 36474.29 89411.12 K6 304.99 1233.65 4788.07 8644.32 19568.32 44188.66 95840.63 K7 337.33 1264.44 4505.36 8595.78 17812.45 40953.21 88895.33 (b) Variance of Numerical Ranks of Data Samples Table 10: Random rank statistics for face-sharing domains in 3D. 7 Conclusion In this article, we explored the behavior of the rank of the kernel matrices arising from the interactions of neighboring source and target domains, where particles are arbitrarily distributed, moving beyond the typical assumption of uniform grid settings, and the arbitrary distribution of particles was modeled to arise from an underlying random distribution. We have established theoretical results on the growth of the expectation and variance of the random rank R for all possible neighboring interactions in d-dimensions. The numerical results in one-, two-, and three-dimensions confirmed our theoretical predictions as stated in Section 3. Our analysis guarantees that despite the inherent arbitrariness in particle distributions, the rank structure of matrices due to all possible interactions between neighboring source and target domains remains consistent, which offers a strong guarantee for the efficiency of hierarchical matrix algorithms in real-world applications. Moreover, the approach of choosing randomly distributed particles allows us to examine algorithmic behavior in a way that mirrors the average-case analysis of algorithms, making our findings highly relevant. Interestingly, our observations suggest that the random variable R, due to vertex-sharing interaction, may follow a Gaussian distribution. This raises an exciting possibility for future research, and we leave the formal proof as an open question. By presenting a novel study of random rank R due to randomly distributed particles in the domains, this work also opens the door for more comprehensive analyses of algorithm performance in diverse practical 21 scenarios. Future research could extend these results to more complex kernels or explore additional probabilistic frameworks for further enhancing the applicability. Acknowledgments The authors would like to thank the High-Performance Computing Environment (HPCE) at IIT Madras for providing the computational resources essential to this work. Additionally, the first author acknowledges the financial support from UGC through the Junior Research Fellowship (JRF) awarded for the doctoral research and also would like to thank the IIT Madras Library for providing access to Grammarly, which greatly assisted in improving the grammar of the article. References [1] S. Ambikasaran and E. Darve, An O(N log N) Fast Direct Solver for Partial Hierarchically Semi-Separable Matrices, Journal of Scientific Computing, 57 (2013), pp. 477–501. [2] , The inverse fast multipole method, arXiv preprint arXiv:1407.1572, (2014). [3] S. Ambikasaran, D. Foreman-Mackey, L. Greengard, D. W. Hogg, and M. O’Neil, Fast direct methods for gaussian processes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38 (2015), pp. 252–265. [4] S. Ambikasaran, J. Y. Li, P. K. Kitanidis, and E. Darve, Large-scale stochastic linear inversion using hierarchical matrices, Comput. Geosci., 17 (2013), pp. 913–927. [5] S. Ambikasaran, K. R. Singh, and S. S. Sankaran, Hodlrlib: A library for hierarchical matrices, Journal of Open Source Software, 4 (2019), p. 1167. [6] J. Barnes and P. Hut, A hierarchical O(N log N) force-calculation algorithm, nature, 324 (1986), pp. 446–449. [7] V. Bentkus, A Lyapunov type bound in Rd, Teor. Veroyatn. Primen., 49 (2004), pp. 400–410. [8] R. N. Bhattacharya, Berry-Esseen bounds for the multi-dimensional central limit theorem, Bull. Amer. Math. Soc., 74 (1968), pp. 285–287. [9] S. B¨orm, L. Grasedyck, and W. Hackbusch, Introduction to Hierarchical Matrices with Applications, Engineering analysis with boundary elements, 27 (2003), pp. 405–422. [10] J. Carrier, L. Greengard, and V. Rokhlin, A fast adaptive multipole algorithm for particle simulations, SIAM journal on scientific and statistical computing, 9 (1988), pp. 669–686. [11] C. Cortes, Support-vector networks, Machine Learning, (1995). [12] J. Dick, F. Y. Kuo, and I. H. Sloan, High-dimensional integration: the quasi-monte carlo way, Acta Numerica, 22 (2013), pp. 133–288. [13] P. Drineas, M. W. Mahoney, and N. Cristianini, On the nystr¨om method for approximating a gram matrix for improved kernel-based learning., journal of machine learning research, 6 (2005). [14] W. Feller, An Introduction to Probability Theory and Its Applications, vol. 2, J. Wiley and Sons, New York, 1971. [15] W. Fong and E. Darve, The black-box fast multipole method, Journal of Computational Physics, 228 (2009), pp. 8712–8725. [16] B. Fornberg and N. Flyer, Solving pdes with radial basis functions, Acta Numerica, 24 (2015), pp. 215–258. [17] J. E. Gentle, Computational Statistics, Springer, New York, 2009, pp. 62–63. [18] K. Glau and M. Mahlstedt, Improved error bound for multivariate chebyshev polynomial interpolation, International Journal of Computer Mathematics, 96 (2019), pp. 2302–2314. [19] A. Gray and A. Moore, N-body’problems in statistical learning, Advances in neural information processing systems, 13 (2000). [20] L. Greengard and V. Rokhlin, A fast algorithm for particle simulations, Journal of computational physics, 73 (1987), pp. 325–348. [21] M. Gu, Subspace iteration randomization and singular value problems, SIAM Journal on Scientific Computing, 37 (2015), pp. A1139–A1173. [22] W. Hackbusch, B. N. Khoromskij, and R. Kriemann, Hierarchical matrices based on a weak admissibility criterion, Computing, 73 (2004), pp. 207–243. [23] N. Halko, P.-G. Martinsson, and J. A. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM review, 53 (2011), pp. 217–288. [24] J. Harlim, D. Sanz-Alonso, and R. Yang, Kernel methods for bayesian elliptic inverse problems on manifolds, SIAM/ASA Journal on Uncertainty Quantification, 8 (2020), pp. 1414–1445. [25] K. L. Ho and L. Greengard, A fast direct solver for structured linear systems by recursive skeletonization, SIAM Journal on Scientific Computing, 34 (2012), pp. A2507–A2532. [26] K. L. Ho and L. Ying, Hierarchical interpolative factorization for elliptic operators: differential equations, Communications on Pure and Applied Mathematics, 69 (2016), pp. 1415–1451. [27] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing, 70 (2006), pp. 489–501. [28] V. A. Kandappan, V. Gujjula, and S. Ambikasaran, HODLR2D: A new class of hierarchical matrices, SIAM Journal on Scientific Computing, 45 (2023), pp. A2382–A2408. 22 [29] R. Khan, V. Kandappan, and S. Ambikasaran, Numerical rank of singular kernel functions, arXiv:2209.05819, (2022). Preprint. [30] R. Khan, V. Kandappan, and S. Ambikasaran, Hodlrdd: A new black-box fast algorithm for n-body problems in d-dimensions with guaranteed error bounds: Applications to integral equations and support vector machines, Journal of Computational Physics, 501 (2024), p. 112786. [31] N. M. Kriege, F. D. Johansson, and C. Morris, A survey on graph kernels, Applied Network Science, 5 (2020), pp. 1–42. [32] G. Lebanon, Probability: The Analysis of Data, vol. 1, CreateSpace Independent Publishing Platform, 2013, p. 346. First Edition. [33] L. Lin, J. Lu, and L. Ying, Fast construction of hierarchical matrix representation from matrix-vector multiplication, Journal of Computational Physics, 230 (2011), pp. 4071–4087. [34] S. Massei, L. Robol, and D. Kressner, Hierarchical adaptive low-rank format with applications to discretized partial dif- ferential equations, Numerical Linear Algebra with Applications, 29 (2022), p. e2448. [35] W. Nowak and A. Litvinenko, Kriging and spatial design accelerated by orders of magnitude: Combining low-rank covariance approximations with FFT-techniques, Mathematical Geosciences, 45 (2013), pp. 411–435. [36] S. Ross, Probability and statistics for engineers and scientists, Elsevier, New Delhi, 16 (2009), pp. 32–33. [37] T. Sarlos, Improved approximation algorithms for large matrices via random projections, in 2006 47th annual IEEE sympo- sium on foundations of computer science (FOCS’06), IEEE, 2006, pp. 143–152. [38] D. A. Spielman and S.-H. Teng, Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice, Communications of the ACM, 52 (2009), pp. 76–84. [39] C. Teo, M. Abdollahzadeh, and N.-M. M. Cheung, On measuring fairness in generative models, Advances in Neural Information Processing Systems, 36 (2024). [40] L. N. Trefethen, Approximation Theory and Approximation Practice, Extended Edition, SIAM-Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2019. [41] S. V. N. Vishwanathan, N. N. Schraudolph, R. Kondor, and K. M. Borgwardt, Graph kernels, The Journal of Machine Learning Research, 11 (2010), pp. 1201–1242. [42] J. Xia, Multi-layer hierarchical structures, CSIAM Transactions on Applied Mathematics, 2 (2021). [43] R. Yokota, H. Ibeid, and D. Keyes, Fast multipole method as a matrix-free hierarchical low-rank approximation, in Eigen- value Problems: Algorithms, Software and Applications in Petascale Computing: EPASA 2015, Tsukuba, Japan, September 2015, Springer, 2017, pp. 267–286. A Generalization to Arbitrary Probability Distributions We have already discussed in Subsection 4.1 about choosing the uniform probability distribution in detail. Now, in this section, we give an idea of how to generalize the probability distribution rather than sticking only to the uniform probability distribution, which allows for more flexible probabilistic modeling. For that, let us consider x1, x2, . . . , xn be n i.i.d. random particles in V = [a, b]d. A.1 Identical Marginals with Coordinate-wise Independence: Let us assume that each particle xi =  x(1) i , x(2) i , . . . , x(d) i  in V is drawn from some arbitrary product distribution with i.i.d. coordinates. That is, there exists a univariate probability distribution with CDF ψ, supported on [a, b], such that x(j) i ∼ψ, for all j = 1, . . . , d. Let V ′ = Qd j=1[cj, dj] ⊆V be a sub-hyper-cube, and the random variable N(V ′) is defined as the number of particles that fall within V ′. Then the probability that a single particle lies within V ′ is qV ′ = d Y j=1 ψ(dj) −ψ(cj) ψ(b) −ψ(a)  , and the number of particles that fall within V ′ follows the Binomial distribution as given by Equation 4.1. A.2 Non-identical Marginals but Coordinate-wise Independence: More generally, let us consider that each particle xi =  x(1) i , x(2) i , . . . , x(d) i  in V is drawn from some arbitrary product distribution, but each coordinate x(j) i follows different distribution with CDF ψj, supported on [a, b], i.e., x(j) i ∼ψj, for each j = 1, . . . , d, and hence, the probability that a single particle lies within V ′ is qV ′ = d Y j=1 ψj(dj) −ψj(cj) ψj(b) −ψj(a)  . Accordingly, the number of particles that fall within V ′ follows the Binomial distribution as given by Equa- tion 4.1. 23 Remark A.1. The assumption that the particles in V are coordinate-wise independent is essential for both cases. Without it, the joint distribution of xi may not admit a product structure, and the distribution of N(V ′) may not be Binomial. B Calculation of E [N(V ′)N(V ′′)] and Cov (N(V ′), N(V ′′)) Here, V ′ and V ′′ are two non-intersecting subdomains of V , and the random variables N(V ′) and N(V ′′) count the number of particles in V ′ and V ′′, respectively. Also, q1 is the probability of having a particle in V ′, and that of V ′′ is q2. Now, we define following two indicator random variables I′ i = ( 1 if xi ∈V ′ 0 otherwise and I′′ i = ( 1 if xi ∈V ′′ 0 otherwise . As N(V ′)N(V ′′) = n X i=1 n X j=1 I′ iI′′ j , we have E [N(V ′)N(V ′′)] = n X i=1 n X j=1 E  I′ iI′′ j  . Now, we have the following cases • If i = j, then P (xi ∈V ′ and xi ∈V ′′) = 0, as the domains are non-intersecting. This gives E [I′ iI′′ i ] = 0. • If i ̸= j, then we have E  I′ iI′′ j  = E [I′ i] E  I′′ j  = q1q2, as the particles xi’s are i.i.d. in V . Thus we have E [N(V ′)N(V ′′)] = n X i=1 n X j=1 E [I′ i] E  I′′ j  = n X i=1 n X j=1 j̸=i q1q2 = n(n−1)q1q2. Now, Cov (N(V ′), N(V ′′)) = −nq1q2 follows directly from the previous expectation and the fact that N(V ′) and N(V ′′) follow binomial dis- tribution individually. C Relationship between numerical ε-rank and max-rank This is a well-known fact that ∥·∥∞∗and ∥·∥2 are equivalent norms and the corresponding equivalency relation is ∥A∥∞∗≤∥A∥2 ≤√mn ∥A∥∞, where A ∈Cm×n. (C.1) Now, to establish the relationship between numerical ε-rank and max-rank, it is sufficient to show how A −˜A ∞∗ ∥A∥∞∗ and A −˜A 2 ∥A∥2 are related, where ˜A is the approximation of A. From Equation C.1, we can easily get the fol- lowing 1 √mn A −˜A 2 ∥A∥2 ≤ A −˜A ∞∗ ∥A∥∞∗ ≤√mn A −˜A 2 ∥A∥2 Now from the above expression we have the followings: • For any given ε > 0, there exist ε′ > 0 such that A −˜A 2 ∥A∥2 < ε =⇒ A −˜A ∞∗ ∥A∥∞∗ < ε′, where ε′ = √mnε. • For any given ε′ > 0, there exist ε > 0 such that A −˜A ∞∗ ∥A∥∞∗ < ε′ =⇒ A −˜A 2 ∥A∥2 < ε, where ε = √mnε′. D Error Due to Normal Approximation In this section, we will focus on the error bounds for normal approximations in both the one-dimensional and d-dimensional cases. The Berry-Esseen theorem [14], and its extensions to higher dimensions [7] provide a framework to quantify the error in the normal approximation to the sum of independent random variables. These error bounds provide an understanding of the rate of convergence to the normal distribution, which is useful in practical applications. 24 D.1 One-dimensional case: Let X1, X2, . . . , Xn be i.i.d random variables with E [Xk] = 0, E  X2 k  = σ2 > 0 and E  X3 k  = ρ < ∞. Then the cumulative distribution function Fn of the normalized sum Sn = X1 + X2 + · · · + Xn σ√n converges to the standard normal distribution Φ with |Fn(x) −Φ(x)| ≤ 3ρ σ3√n ∀x, n. The proof can be found in [14]. Now, as the binomial distribution can be interpreted as the sum of independent Bernoulli’s and hence from the above expression, we can get an error bound in the normal approximation to the binomial distribution. Application to Binomial Distribution Suppose X1, X2, . . . , Xn are i.i.d. Bernoulli random variables with success probability p, and Sn = nP i=1 Xi is a Binomial random variable, i.e., Sn ∼Bin(n, p) [32]. Now let X ∼Bernoulli(p), where X takes the value 1 with probability p and 0 with probability 1 −p. Then, standard deviation σ = p Var(X) = p p(1 −p) and the third absolute central moment ρ = E  |X −µ|3 = p(1 −p)(1 −2p + 2p2). Now, using the Berry-Esseen theorem, the error in approximating the binomial distribution with a normal distribution is: sup x P Sn −np p np(1 −p) ≤x ! −Φ(x) ≤C(1 −2p + 2p2) p np(1 −p) . (D.1) D.2 d-dimensional case: Let X1, X2, . . . , Xn be independent and identically distributed random vectors in Rd such that for each k = 1 : n, E [Xk] = 0 and Xk has identity covariance matrix. Now let Sn = X1 + X2 + · · · + Xn and Z be the Gaussian random variable with the same mean and variance that of Sn. Now for any convex subset C in Rd, we have a Lyapunov-type bound as sup A∈C |P (Sn ∈A) −P (Z ∈A)| ≤ cd1/4E h ∥X1∥3 2 i √n , where c is a constant. The proof can be found here [7]. Similar to the univariate case, in the multivariate case, the multinomial distribution can be interpreted as the sum of independent multivariate Bernoulli’s (or simply Multinoulli) [32] and hence from the above statement, we can get the error due to the multivariate normal approximation to the multinomial distribution (i.e., for d = 2) Application to Multinomial Suppose X1, X2, . . . , Xn are i.i.d. Multivariate Bernoulli random variables with success probability p = (p1, p2, . . . , pd), and Sn = nP i=1 Xi is a Multinomial random variable, i.e., Sn ∼Mult(n, p) [32]. Now let X ∼Multinoulli(p), i.e., X is a set of d × 1 vectors having one entry equal to 1 and all other entries equal to 0 and p1, p2, . . . , pd such that 0 < pi < 1, for each i with the condition that d X i=1 pi = 1. Then, the joint probability mass function of X is given by pX(x1, x2, . . . , xd) = (Qd k=1 pxj k if (x1, x2, . . . , xd)t ∈X 0 otherwise. i.e., X takes value ei with probability pi. Now, we have E [X] = p, Σ = (Σij)d×d the covariance matrix of X, where Σij = ( pi(1 −pi) if i = j −pipj if i ̸= j and as for all i, ∥X −p∥2 2 = ∥ei −p∥2 2 = 1 −2pi + Pd k=1 p2 k, then E h ∥X −p∥3 2 i = d X i=1 pi 1 −2pi + d X k=1 p2 k !3/2 < M cd1/4 (a constant) (say). Now, to get the error-bound while approximating Sn with Z ∼N (np, nΣ), we transform Sn to Wn such that Wn = Σ−1/2 (Sn −np) and similarly Z to Z′, where Z′ ∼N (0, Id). Then we have sup A∈C |P (Sn ∈A) −P (Z ∈A)| = sup A′∈C |P (Wn ∈A′) −P (Z′ ∈A′)| ≤M √n, where A′ = Σ−1/2 (A −np) . 25 E Uniform Distribution as Randomly Uniform Perturbed Grid In computational applications, we often use uniform grids to discretize a continuous domain. However, in practice, random variations often lead to deviations from perfect uniformity. The perspective of taking Uniform Probability distribution allows us to interpret it as a ‘perturbed’ version of a perfect uniform grid. For the sake of simplicity, we will consider the domain as D = [0, 1]. Suppose that X1, X2, . . . , Xn be a random sample of size n from a uniform distribution UD with pdf f and CDF F and X(1), X(2), . . . , X(n) be the corresponding order statistics, then the pdf of the kth order statistic is given as fX(k)(x) = n! (k −1)!(n −k)!f(x)[F(x)]k−1[1 −F(x)]n−k Now, as the kth order statistic in a sample of size n from UD has a beta distribution [17] with parameters k and n −k + 1, the expected value and variance of X(k) are E  X(k)  = k n + 1 and Var X(k)  = k (n −k + 1) (n + 1)2 (n + 2) Thus, for large n, the expected value of X(k) is very close to the ideal kth grid point. In addition to the expectation, for large n, this variance becomes small, indicating that each X(k) tends to stay close to the ideal grid point. Moreover, if Gk = X(k+1) −X(k) be the gap between two consecutive shorted random points, then E [Gk] = 1 n + 1, which suggest that as n grows, sorted uniform points tend to fill up the domain D in a manner that resembles a uniform grid with smaller fluctuations. F Detailed Proofs of Lemma 4.2 and Lemma 4.3 We provide a comprehensive proof of Lemma 4.2 and Lemma 4.3, which are briefly justified in Section 4 of the main body of the article. We now restate the lemma below for clarity before the detailed proof. Lemma 4.2 (Restated): For any large value of n, if κ = ⌊log2d n⌋then for some fixed l the sum of variances κ X k=1 Var Zk,l n  is bounded by C such that C ∈O (log log log n). Proof. As Equation 4.18 is independent of l, without loss of generality, we take l = 1. Now, we have Var Zk,1 n  ≤p(p + 1)(2p + 1) 6 P (Nk,1 = p) and P (Nk,1 = p) ≤ 1 √ 2π e −  a(n) p,k 2/2 + Ξ √n, where a(n) p,k = p−0.5−µk σk with µk = n 2dk and σk = q n 2dk 1 − 1 2dk  . Now, for any small ε > 0, there exists ˜k such that P (Nk,1 = p) < ε for all k ≤˜k and ˜k can be determined as follows 1 √ 2π e −  a(n) p,k 2/2 + p(p + 1) 2 Ξ √n < ε =⇒ 1 √ 2π e −  a(n) p,k 2/2 < ε =⇒a(n) p,k > s 2 loge 1 ε √ 2π =: M(ε) (say) =⇒ p −0.5 − n 2dk q n 2dk 1 − 1 2dk  > M =⇒ M 2 + n  nx2 − 2 (p −0.5) + M 2 nx + (p −0.5)2 > 0,  where x = 1 2dk .  Now for the sake of simplicity, we take M 2 + n  x − 2 (p −0.5) + M 2 > 0 which implies k < log2d  1 + n + 1 −2p M 2 + 2p −1  Now we can take ˜k = j log2d  1 + n+1−2p M 2+2p−1 k , then for all k = 1 : ˜k (in MATLAB notation) we have P (Nk,1 = p) < ε. 26 There are only constantly many terms are there from ˜k to κ, and the constant depends on ε only which can be shown as κ −˜k ≤1 +  log2d n −log2d  n + 1 −2p M 2 + 2p −1  ≤1 + log2d M 2 + 2p −1  + log2d  n n + 1 −2p  < Ω+ log2d M 2 + 2p −1  , where Ωis independent of n. Now, we choose ε = 6c1 κp(p+1)(2p+1), where c1 is a constant such that ˜k X k=1 Var Zk,1 n  ≤ ˜k X k=1 p(p + 1)(2p + 1) 6 P (Nk,1 = p) < p(p + 1)(2p + 1)κε 6 = c1 Now, κ X k=1 Var Zk,1 n  = ˜k X k=1 Var Zk,1 n  + κ X k=˜k+1 Var Zk,1 n  < c1 + κ X k=˜k+1 Var Zk,1 n  Now, from Equation 4.18, it follows that Var Zk,1 n  ≤p(p + 1)(2p + 1)/6 and  κ −˜k  is of O (log log log n) proves that there exists some C > 0 such that κ X k=1 Var Zk,1 n  < C, where C ∈O (log log log n). Lemma 4.3 (Restated): For any large value of n, if κ = ⌊log2d n⌋then for some fixed l the sum of covariances κ X k1=1 κ X k2=k1+1 Cov Zk1,l n , Zk2,l n  ≤C, where C ∈O  (log log log n)2 . Proof. Without loss of generality, let us take l = 1. Now for any fixed i, j, an upper bound of Cov Zi,1 n , Zj,1 n  defined in Equation 4.11 can be obtained as Cov Zi,1 n , Zj,1 n  ≤ p X r=0 p X s=0 rsP (Ni,1 = pi, Nj,1 = pj) for some pi, pj with 1 ≤pi, pj ≤p such that P (Ni,1 = pi, Nj,1 = pj) ≥P (Ni,1 = l, Nj,1 = m) , ∀0 ≤r, s ≤p. Now, using bivariate normal distribution to approximate P (Ni,1 = pi, Nj,1 = pj) we have the following P (Ni,1 = pi, Nj,1 = pj) ≤ ZZ D f(x, y)dxdy + Ξ √n, for some Ξ > 0, where D = n (x, y) ∈R2 : a(n) pi,i ≤x ≤b(n) pi,i and a(n) pj,j ≤y ≤b(n) pj,j o where, a(n) r,k = r−0.5−µk σk and b(n) r,k = r+0.5−µk σk for r = pi, pj and k = i, j. Now, there exist  x(n) pi,i, x(n) pj,j  ∈D such that ZZ D f(x, y)dxdy = f  x(n) pi,i, x(n) pj,j  . Thus we have P (Ni,1 = pi, Nj,1 = pj) ≤f  x(n) pi,i, x(n) pj,j  + Ξ √n. Now as n →∞, a(n) pi,i, b(n) pi,i, a(n) pj,j, b(n) pj,j →−∞and hence f  x(n) pi,i, x(n) pj,j  can be made sufficiently small by taking large enough n. Now for any chosen ε > 0 there exist ˜k such that P (Ni,1 = pi, Nj,1 = pj) < ε, ∀i, j ≤˜k. 27 The largest such ˜k can be found by solving the inequality f  x(n) pi,i, x(n) pj,j  + Ξ √n < ε which implies f  x(n) pi,i, x(n) pj,j  < ε. This now translates to 1 2π p 1 −ρ2 e −Q  x(n) pi,i,x(n) pj ,j  /2(1−ρ2) < ε where Q (x, y) = x2 + y2 −2ρxy. Now further, we obtain Q  x(n) pi,i, x(n) pj,j  > 2 1 −ρ2 loge 1 2π p 1 −ρ2ε ! =⇒  x(n) pi,i 2 +  x(n) pj,j 2 −2ρ  x(n) pi,i   x(n) pj,j  > 2 1 −ρ2 loge 1 2π p 1 −ρ2ε ! Now, as ρ being negative, taking x(n) pi,i as minimum between x(n) pi,i and x(n) pj,j, we have the following (1 −ρ)  x(n) pi,i 2 > 1 −ρ2 loge  1 2π (1 −ρ2) ε  =⇒  x(n) pi,i 2 > (1 + ρ) loge  1 2π (1 −ρ2) ε  =⇒x(n) pi,i > s (1 + ρ) loge  1 2π (1 −ρ2) ε  =: M(ε); (say) =⇒x(n) pi,i > M Now, x(n) pi,i > M will hold also when a(n) pi,i > M is true and thus similar to the proof of previous lemma, we can prove that there exists C > 0 such that κ X i=1 κ X j=i+1 Cov Zi,1 n , Zj,1 n  ≤C, where C ∈O  (log log log n)2 . 28
Rank of Matrices Arising out of Singular Kernel Functions∗ Sumit Singh †4 and Sivaram Ambikasaran ‡1,2,3,4 1Wadhwani 2Robert Bosch Centre for Data Science and Artificial Intelligence 3 4 . In this work, we study the rank of matrices arising out of the kernel function K : X × Y 7→R, where the sets X, Y ∈Rd are hypercubes that share a boundary. The main contribution of this work is the analysis of rank of such matrices where the particles (sources/targets) are arbitrarily distributed within these hypercubes. To our knowledge, this is the first work to formally investigate the rank of such matrices for arbitrary distribution of particles. We model the arbitrary distribution of particles to arise from an underlying random distribution and obtain bounds on the expected rank and variance of the rank of the kernel matrix corresponding to various neighbor interactions. These bounds are useful for understanding the performance and complexity of hierarchical matrix algorithms (especially hierarchical matrices satisfying the weak-admissibility criterion) for an arbitrary distribution of particles. We also present numerical experiments in one-, two-, and three-dimensions, showing the expected rank growth and variance of the rank for different types of interactions. The numerical results, not surprisingly, align with our theoretical predictions. Keywords: Probability distributions, Numerical rank, n-body problems, Hierarchical matrices, Low-rank matrix approximation, Central Limit Theorem, Normal approximation. AMS Subject Classifications: 65F55, 65D40, 65D12. 1 Introduction Recent years have seen significant strides in the development of matrices arising out of kernel functions frequently occur in areas such as partial differential equations (PDEs) [20] [34] [12] [16], integral equations [26], inverse problems [4] [24], Gaussian processes [3] [35], graph theory [31] [41] and kernel methods for addressing many complex machine learning and data analysis tasks [11] [19] [27]. Despite their wide applicability, these matrices are often large and dense, as the underlying kernel functions are not compactly supported. This makes storage and computation of matrix operations (such as matrix-vector products, solving linear systems, and matrix-factorization, etc.) computationally expensive and memory intensive. Despite these challenges, kernel matrices exhibit low-rank structure that can be exploited to overcome these issues. We can significantly reduce the storage requirements and accelerate computational processes by leveraging their low-rank approximations. The literature on exploiting rank-structuredness is extensive, and we refer interested readers to the works [13] [43] [9] [37] [23] and the references therein for an in-depth review. This approach not only reduces computational costs but also allows for more accurate solutions to complex problems in scientific computing, engineering, and data science. More specifically, exploiting rank-structuredness is very useful in fields such as machine learning and signal processing, where it helps in optimized data compression, noise reduction, and the extraction of meaningful insights from large datasets. One of the most frequently encountered rank-structured matrices arising out of n-body problems are hierarchical low-rank matrices. The initial work by Barnes and Hut [6], which reduced the computational complexity to perform matrix-vector product from O n2 to O (n log n). Greengard and Rokhlin in Fast Multipole Method (FMM) [20] further reduced to O (n). FMM and Barnes-Hut algorithms leverage separable extension of kernel functions for far-away interactions. In terms of matrices, this corresponds to approximating sub-matrices corresponding to far-away interactions by a low-rank matrix. This interpretation generalises ∗Submitted to the editors: October 17, 2025 † , ‡ , 1 16 Oct 2025 to hierarchical matrices. Depending on what sub-matrices are approximated by a low-rank matrix, we have different hierarchical structures. Some of the widely used such representations are Hierarchically Off-Diagonal Low-Rank (HODLR), Hierarchically Semi-Separable (HSS), and H-matrix. For a detailed literature review on hierarchical matrices and their applications, we refer the readers to the articles [1] [29] [21]. The matrices that possess such hierarchical structures are leveraged to construct various algorithms that can reduce the storage and accelerate the matrix operations [15] [30] [33] [5] [2]. In the process of efficiently handling large and dense kernel matrices, hierarchical matrices satisfying weak admissibility criteria (from now on denoted as H∗) have become an essential tool. One of the challenges in dealing with H∗matrices is that the rank of matrices corresponding to neighboring interactions of source and target domains is dependent on the number of source and target particles. Despite the success of such hierarchical algorithms, most of the theoretical works while studying the rank due to interactions with the neighbors (HODLRdD, HSS2D, etc. [30] [22] [28] [9]), assume that the particles (or sources) are placed on a uniform grid or quasi-uniform grid. However, in most practical applications, such an assumption is not often true, as particles rarely align in such structured (uniform/quasi-uniform) patterns. Real-world data, whether coming from various physical simulations, machine learning, or data analysis tasks, tends to follow distributions that are arbitrary in nature, with no consistent pattern. This raises a concern about the applicability and robustness of hierarchical low-rank methods under the arbitrary distribution of particles. To encounter and model the arbitrary nature of particles, in this article, we consider a more realistic case where particles are randomly distributed in their respective domains according to some suitable probability distribution (see Subsection 4.1 for in-depth discussion), leading to the corresponding kernel matrix being a random matrix (see Subsection 4.2). Specifically, we study the expected growth (stated in Theorem 3.1) of the random rank R (as defined in Subsection 4.7) and analyze how much this rank R deviates from its expected value (stated in Theorem 3.2) for all possible interactions in d-dimensions. These results offer a more general understanding of algorithms in practical settings. To the best of our knowledge, this study of the rank of the kernel matrices, due to the arbitrary nature of inputs, is a novel contribution to the field. Existing Work vs. Our Approach: As mentioned earlier, the rank of kernel matrices has been extensively studied in various applications, from PDEs and inverse problems to machine learning. In this subsection, we review a few key contributions from existing works that have examined the rank of these kernel matrices, particularly those related to our work, and also highlight how our work differentiates itself from previous works. Hackbusch et al. [9] [22] were among the first to study the rank structure of kernel matrices, interpreting methods like Treecode [6] and FMM [20] [10] as low-rank representations of sub-matrices. Their initial works focused on kernel sub-matrices arising from interactions between well-separated domains, satisfying the standard (or strong) admissibility criterion. One key takeaway from those works is that such well-separated interactions naturally lead to low-rank structures in the corresponding kernel matrix. In the later works, Hackbusch et al. in [22] introduced the notion of weak-admissibility criterion in the context of one-dimension and studied the rank of kernel matrices due to interaction between the neighbors in one-dimension (i.e., the vertex sharing case) where they found that the rank is of O (log n), n being the number of particles in the domains. In the article [42], Xia extended the weak-admissibility criterion in two dimensions and provided a rough idea of the rank of kernel matrices growth due to the neighboring interactions. While analyzing the complexity of their algorithms, Ho, Greengard [25] and Ho, Lexing [26] provide heuristic bounds on the rank. They discussed the rank of interactions between two d-dimensional hypercubes sharing a (d -1)-dimensional hypersurface, i.e., vertex-sharing in 1D, edge-sharing in 2D, and face-sharing in 3D, and so on. Recently, Khan et al. in [30] rigorously proved the rank growth of kernel matrices for all possible neighboring interactions in any dimensions. The result is independent of the choice of kernel functions. However, the main drawback of their work is that to derive the results, they assumed that the particles in each domain are arranged on a uniform or quasi-uniform grid, which is generally not the case in practice. In this work, we relax this assumption and consider an arbitrary distribution of particles in the respective domains, and to model the arbitrary distribution of particles, we consider particle distribution to arise from an underlying random distribution. Building on these insights, our work extends previous works by analyzing the rank of kernel matrices under arbitrary particle distributions using a probabilistic framework. This novel perspective provides a deeper understanding of the rank growth and its variability, as outlined in the key highlights of this article. Highlights of The Article: The following points are the main highlights of this article, which showcase the unique contributions made in this domain. 2 • To study the behavior of the rank of kernel matrices due to the interactions of arbitrary particles in the neighboring clusters, we assume that the particles are randomly distributed in the respective domains. • We have introduced the notion of random rank R for the kernel matrices with randomly distributed particles in the respective domains with all possible interactions in d-dimensions, which provides a rigorous analysis of - The expected growth of R. Theorem 3.1 provides deeper insights into how the kernel matrices behave under random conditions, which has not been addressed previously. - The change in variance of R. Theorem 3.2 provides a clear understanding of how the rank deviates from its expected value. This analysis helps to explain the stability and variability of hierarchical low-rank algorithms in practical settings. • To the best of our knowledge, this is the first comprehensive study on the expected growth and variance growth of random kernel matrices for different interactions. Our findings provide a fresh perspective on the robustness and efficiency of hierarchical matrix algorithms in practical settings. Outline of the Article: The article is organized as follows. In Section 2, we introduce the basic terminologies, definitions, and foundational concepts that are used throughout the article. In Section 3, we state the main theorems and mention the kernel functions that are used to verify our theorems. In Section 4, we formally define the problem setup, including the choice of random particle distribution, how the random kernel matrix K is generated, and the random variable R. Section 5 presents the proof of theorems on the expected growth of the random rank R and its variance for different interactions in d-dimensions. In Section 6, we will provide numerical experiments to validate our theoretical results, focusing on one-, two-, and three-dimensional cases for all possible interactions. Finally, in Section 7, we summarize the key insights, discuss the implications of our results, and suggest potential directions for future research. 2 Preliminaries In this section, we are going to discuss some definitions, notations, and lemmas that we will use in the article. We also briefly discuss low-rank approximation of certain kernel matrices using polynomial interpolation and lastly, we will discuss some fundamental probability concepts in our context. 2.1 Some Notations and Definitions: The terminologies that we are going to use frequently in this article are given below. Source Domain: The compact set Y ⊂Rd is said to be the source domain if it contains the source particles in its interior. We will consider that the particles are arbitrarily distributed within the interior of Y . Target Domain: The compact set X ⊂Rd is said to be the target domain if it contains the target particles in its interior. Here also, the target particles are arbitrarily distributed within the interior of X. Further, we also consider that int(X) ∩int(Y ) = ∅, where int(X) = interior of the set X. Kernel Function: Throughout the article, we will consider K : int(X) × int(Y ) →R as the kernel function. The kernel function K encodes the strength of interaction between a particle from the source domain and a particle from the target domain. The choice of a kernel depends on the underlying physical model and the nature of the interaction between particles. Kernel Matrix: The matrix K ∈Rm×n whose (i, j)th entry is given by Kij = K(xi, yj); where xi ∈int(X), yj ∈int(Y ) (2.1) is called the kernel matrix. Definition 2.1. (Numerical ε-rank) Given any ε > 0, the ε-rank of a matrix A ∈Cm×n is denoted and defined as rε = max k ∈{1, 2, . . . , min {m, n}} : σk σ1 ≥ε , where σi's are singular values of A arranged in decreasing order. 3 Definition 2.2. (Numerical max-rank) Given any ε > 0, the max-rank of a matrix A ∈Cm×n is denoted as pε where pε = min n rank ̃K : ̃K ∈S(∞) ε o and S(∞) ε is defined as S(∞) ε = n ̃K ∈Cm×n : K - ̃K ∞∗ 0, the ε-rank of a matrix A ∈Cm×n is denoted as rε where rε = min n rank ̃K : ̃K ∈S(2) ε o and S(2) ε is defined as S(2) ε = n ̃K ∈Cm×n : K - ̃K 2 1 where |f(x)| ≤M for some M. Then for any n ∈N, its Chebyshev interpolants pn satisfy |f -pn| ≤4Mρ-n ρ -1 . This lemma is proved in [ [40], Theorem 8.2]. We now state a generalized version of Lemma 2.1 for higher dimensional setting, which is given below Lemma 2.2. Let f : V = [-1, 1]d ⊂Rd →R be analytic and it has an analytic extension to some generalized Bernstein ellipse B (V, ρ), where ρ = (ρ, ρ, . . . , ρ) with ρ > 1. Now, if ∥f∥∞∗= max y∈B(V,ρ) |f(y)| ≤M, then for any n ∈N, its interpolating multivariate polynomial ̃f satisfy f - ̃f ∞∗≤4MVd ρ-p ρ -1, where p is a predefined constant and Vd is a constant depending on the dimension d and ρ. The Lemma 2.2 is discussed in [30]. Also, a detailed discussion on Bernstein ellipse, generalized Bernstein ellipse, and analytic continuation can be found there. The generalized version of Lemma 2.2 is stated and proved in [18]. However, the above two lemmas provide a way to approximate the kernel function K(x, y). Below, we have shown the approximation of kernel function K(x, y) along y. ̃K (x, y) = X k∈I K x, yk Lk (y) where I is the index set of the interpolating points, and Lk is the Lagrange basis. Now, using this approximated kernel function ̃K, we can get the approximated kernel matrix ̃K whose (i, j)th entry is given by ̃Kij = ̃K (xi, yj). The matrix is then factorized as ̃K = UV T , which is given as follows, where U ∈Rm×|I| and V ∈Rn×|I|. ̃K =   K(x1, y1) K(x1, y2) · · · K(x1, y|I|) K(x2, y1) K(x2, y2) · · · K(x2, y|I|) ... ... ... ... K(xm, y1) K(xm, y2) · · · K(xm, y|I|)   | {z } U × V T z }| {   L1(y1) L1(y2) · · · L1(yn) L2(y1) L2(y2) · · · L2(yn) ... ... ... ... L|I|(y1) L|I|(y2) · · · L|I|(yn)   The rank of the approximated kernel matrix ̃K is nothing but |I|. Now, the following lemma guarantees the cardinality of the set I when the source and target domains are separated by some distance, as mentioned in the following lemma. Lemma 2.3. Let X be the source and Y be the target hyper-cube in d-dimensions such that they are at least one hyper-cube away, and let K be the corresponding interaction matrix. Then for any given δ > 0, there exists an approximated ̃K of rank pδ such that ∥K- ̃ K∥∞∗ ∥K∥∞∗ 0, there exists an approximated random kernel matrix ̃K such that ∥K- ̃ K∥∞∗ ∥K∥∞∗ 0, there exists an approximated random kernel matrix ̃K such that ∥K- ̃ K∥∞∗ ∥K∥∞∗ 0. The far-field domains in one, two, and three dimensions are shown in the following Figure 1. X Y X Y X Y (a) 1D far-field domains. (b) 2D far-field domains. (c) 3D far-field domains. Figure 1: 1D, 2D, 3D far-field domains • For d′ = 0, the source Y and the target X shares a vertex. Figure 2 illustrates the vertex-sharing domains in one, two, and three dimensions. X Y X Y X Y (a) 1D vertex-sharing domains. (b) 2D vertex-sharing domains. (c) 3D vertex-sharing domains. Figure 2: 1D, 2D, 3D vertex-sharing domains • For d′ = 1, the source Y and the target X shares an edge. This case will arise when the dimension of the domains is at least two. In two and three dimensions, this case is illustrated in Figure 3. X Y X Y (a) 2D vertex-sharing domains. (b) 3D edge-sharing domains. Figure 3: 2D, 3D edge-sharing domains 1For the sake of completeness, we use the notation d′ = -1 to represent far-field domains. There is nothing special to choose -1 here. 6 • For d′ = 2, the source Y and the target X shares a face. This case will arise when the dimension of the domains is at least three. The face-sharing domain in three dimensions is shown below in Figure 4. X Y Figure 4: 3D face-sharing domains • For d′ ≥3, the source Y and the target X shares a d′-dimensional hyper-surface. 4.1 Choice of Probability Distribution We begin this subsection by introducing the probability distribution that forms the foundation of our analysis as we model the 'arbitrary distribution of particles' with a proper probability distribution. In each of domains X and Y , n independent and identically distributed (i.i.d) particles are drawn from Uniform Probability distributions UX and UY respectively. The choice of Uniform Probability Distribution is driven by its suitability and simplicity for modeling the random matrix structure considered in this study. Apart from simplicity in deriving analytical results, several factors influenced this choice, which are summarized as follows: (i) Instead of being tied to a fixed grid, uniform probability distribution allows for flexible grid placement during the discretization of the domains. (ii) Uniform distribution ensures that every possible particle within the domain is equally likely to be chosen. (iii) The configuration that the particles sampled randomly from the uniform probability distribution can be interpreted as a "perturbed" version of a uniform grid, which aligns with the principles of Smoothed Analysis [38] in the following ways: (a) Choosing uniform probability distribution, we can implicitly introduce one kind of random perturbation to an ideal grid2, as each particle selected from the domain could vary slightly from an exact grid particle. This randomness in particle selection in the domains aligns with the smooth perturbations considered in the smooth analysis. (b) The use of a uniform probability distribution can provide more realistic estimates of algorithm performance, as it takes a range of possible input particles into account rather than specific, deterministic ones. (c) By choosing uniform probability distribution, we are spreading the likelihood evenly across the domains, making it less likely to encounter those extreme cases that result in an analysis that better reflects typical performance (which is quite similar to average-case analysis of inputs) rather than focusing on the rare, worst-case analysis of inputs particles. 4.2 Generation of Random Kernel Matrix Following the discussion on the probability distribution, we now turn to the generation of the random kernel matrix K. As particles are uniformly distributed in their respective domains and the kernel matrix reflects the interactions between these random particles through a kernel function K, forms the random entries of the matrix K as discussed in Equation 2.1. Although the kernel matrix K exhibits randomness, this randomness is structured3 and is not entirely arbitrary. This is because: i. The entries Kij of the kernel matrix K are random variables whose distribution depends not only on UX and UY but also on the kernel function K. ii. The distribution and correlation structure of the entries Kij of the kernel matrix K are strongly influenced by the geometrical configurations of the domains X and Y . iii. The distribution of Kij will often not be uniform (often highly structured). For example, Suppose we choose the kernel function K6. In that case, the entries of K will be related to the exponential of the Euclidean distance between the random particles, leading to a non-uniform distribution. 2A detailed discussion on 'Uniform Distribution as Randomly Uniform Perturbed Grid' can be found in Appendix E 3Structured Randomness involves randomness that is governed by some underlying structure, pattern, or correlation between the elements. This structure can result from dependencies, constraints, or specific relationships defined by a process or function. 7 4.3 Rank as a Random Variable Once the random kernel matrix K is generated, one of the key aspects of its structure is its rank. Unlike deterministic kernel matrices, where the rank is fixed based on the matrix dimensions and locations of the source and target domains, the rank of the kernel matrix derived from random source and target particles is a random variable. Several factors influence this random variable: i. The size of the kernel matrix K (i.e., the number of random source and target particles) ii. The spatial configuration of source and target domains has a substantial impact on the rank of K. The rank can be significantly different in cases when the domains are far-field or share a d′-dimensional hyper-surface. iii. The rank of the kernel matrix is not only depends on the size of K and geometry of the source and target domains but also heavily influenced by the choice of the kernel function K. For example, for the choice kernel function of K3 from Table 1, regardless of the geometry or distribution of the source and target particles in one dimension, the rank of the kernel matrix is always 2. The above-outlined points can be verified in the numerical results presented in Section 6. As determining the exact distribution of the rank is quite challenging, we introduce the random variable R, (given in Equation 4.15), which serves as an upper bound of the rank of kernel matrix under random inputs. Throughout this article, we investigate the behavior of R to understand how the rank of the kernel matrix varies and obtain bounds on the first couple of moments of the random variable. 4.4 Basic Probability Theory In this subsection, we explore fundamental probability concepts that are crucial for our analysis, and along with this, we define some required tools. 4.4.1 Distribution of particles in a domain and its sub-domains. Let n number of independent and identically distributed (i.i.d) particles x1, x2, . . . , xn fall in the hyper-cube V = [a, b]d under the uniform probability distribution4. The reason behind choosing this particular distribution is discussed in Subsection 4.1. We define the random variable N (V ′) as the number of particles that fall within a sub-hyper-cube V ′ = [c1, d1]d of V , where [c1, d1] ⊆[a, b]. Then, the probability of having exactly k particles in V ′ is given by P (N (V ′) = k) = n k qk (1 -q)n-k ; where q = (d1 -c1)d (b -a)d . (4.1) We now define another random variable that we are going to use frequently in the article. The random variable is defined as follows. ZV ′ n = min {N (V ′) , p} , where p is a constant. (4.2) Here, ZV ′ n takes any natural number in between 0 and p and for any i ∈{0, 1, . . . , p}, we have P ZV ′ n = i = ( P (N (V ′) = i) if i 0 (Ξ independent of n) such that P (Nk,l = i) ≤ b(n) i,k Z a(n) i,k 1 √ 2π e-t2/2dt + Ξ √n (4.17) Here, a(n) i,k = i-0.5-μk σk and b(n) i,k = i+0.5-μk σk , where μk = nqk, σk = p nqk (1 -qk). Now, for some c(n) i,k such that ̃a(n) i,k ≤c(n) i,k ≤ ̃b(n) i,k , we have E Zk,l n ≥p - p X i=0 i 1 √ 2π e - c(n) i,k 2/2 + p(p + 1) 2 Ξ √n Now, as n →∞, c(n) i,k →-∞and hence, we have lim n→∞E Zk,l n = p. 13 Result 4.2. For any fixed positive integer k ≤κ and for all l = 1 : hk, the limit of the variance of random rank Zk,l n of the corresponding matrix ̃Kk,l is 0, i.e., Var Zk,l n →0, as n →∞. Proof. Using Equation 4.9, we have an upper bound the variance of Zk,l n as Var Zk,l n = p X i=0 i2P (Nk,l = i) - p X i=0 iP (Nk,l = i) !2 ≤ p X i=0 i2P (Nk,l = i) As k and p are fixed, there exists n0 ∈N such that for all n > n0, we have p 0, we have P (Nk,1 = p) 0, we have P (Nk1,1 = pi, Nk2,1 = pj) 0, there exist ε′ > 0 such that A - ̃A 2 ∥A∥2 0, there exist ε > 0 such that A - ̃A ∞∗ ∥A∥∞∗ 0 and E X3 k = ρ 0, there exists ̃k such that P (Nk,1 = p) s 2 loge 1 ε √ 2π =: M(ε) (say) =⇒ p -0.5 - n 2dk q n 2dk 1 - 1 2dk > M =⇒ M 2 + n nx2 - 2 (p -0.5) + M 2 nx + (p -0.5)2 > 0, where x = 1 2dk . Now for the sake of simplicity, we take M 2 + n x - 2 (p -0.5) + M 2 > 0 which implies k 0 such that κ X k=1 Var Zk,1 n 0, where D = n (x, y) ∈R2 : a(n) pi,i ≤x ≤b(n) pi,i and a(n) pj,j ≤y ≤b(n) pj,j o where, a(n) r,k = r-0.5-μk σk and b(n) r,k = r+0.5-μk σk for r = pi, pj and k = i, j. Now, there exist x(n) pi,i, x(n) pj,j ∈D such that ZZ D f(x, y)dxdy = f x(n) pi,i, x(n) pj,j . Thus we have P (Ni,1 = pi, Nj,1 = pj) ≤f x(n) pi,i, x(n) pj,j + Ξ √n. Now as n →∞, a(n) pi,i, b(n) pi,i, a(n) pj,j, b(n) pj,j →-∞and hence f x(n) pi,i, x(n) pj,j can be made sufficiently small by taking large enough n. Now for any chosen ε > 0 there exist ̃k such that P (Ni,1 = pi, Nj,1 = pj) 2 1 -ρ2 loge 1 2π p 1 -ρ2ε ! =⇒ x(n) pi,i 2 + x(n) pj,j 2 -2ρ x(n) pi,i x(n) pj,j > 2 1 -ρ2 loge 1 2π p 1 -ρ2ε ! Now, as ρ being negative, taking x(n) pi,i as minimum between x(n) pi,i and x(n) pj,j, we have the following (1 -ρ) x(n) pi,i 2 > 1 -ρ2 loge 1 2π (1 -ρ2) ε =⇒ x(n) pi,i 2 > (1 + ρ) loge 1 2π (1 -ρ2) ε =⇒x(n) pi,i > s (1 + ρ) loge 1 2π (1 -ρ2) ε =: M(ε); (say) =⇒x(n) pi,i > M Now, x(n) pi,i > M will hold also when a(n) pi,i > M is true and thus similar to the proof of previous lemma, we can prove that there exists C > 0 such that κ X i=1 κ X j=i+1 Cov Zi,1 n , Zj,1 n ≤C, where C ∈O (log log log n)2 . 28
2510.14917
Cumulants, Moments and Selection: The Connection Between Evolution and Statistics Hasan Ahmed, Deena Goodgold, Khushali Kothari, Rustom Antia* Department of Biology, Emory University, Atlanta, GA 30322, USA. *Corresponding author. Email: rantia@emory.edu Abstract Cumulants and moments are closely related to the basic mathematics of continuous and discrete selection (respectively). These relationships generalize Fisher's fundamental theorem of natural selection and also make clear some of its limitation. The relationship between cumulants and continuous selection is especially intuitive and also provides an alternative way to understand cumulants. We show that a similarly simple relationship exists between moments and discrete selection. In more complex scenarios, we show that thinking of selection over discrete generations has significant advantages. For a simple mutation model, we find exact solutions for the equilibrium moments of the fitness distribution. These solutions are surprisingly simple and have some interesting implications including: a necessary and sufficient condition for mutation selection balance, a very simple formula for mean fitness and the fact that the shape of the equilibrium fitness distribution is determined solely by mutation (whereas the scale is determined by the starting fitness distribution). Key words: cumulants, moments, mutation, selection, distribution of fitness effects, fisher’s fundamental theorem of natural selection, heterozygote advantage, mutation selection balance 1. Introduction Cumulants and moments are key concepts in statistics. Selection is a fundamental concept in evolutionary biology, and it has applications to a variety of topics from other fields such as depletion of susceptibles in epidemiology [1], economics [2] and waning of immune memory [3]. It turns out that cumulants and moments are closely related to selection. This relationship is helpful not only for understanding selection but also for understanding cumulants. In the simple scenario where selection is the only force, a straightforward and exact relationship exists between cumulants of a fitness distribution and its evolution over time. Briefly, if fitness is measured in terms of exponential growth rate (i.e. the Malthusian parameter r), then the mean (or 1st cumulant) of fitness in the population increases at an instantaneous rate equal to the variance (or 2nd cumulant) of fitness in the population, variance changes at an instantaneous rate equal to the 3rd cumulant (i.e. unscaled skewness) of fitness, which changes at an instantaneous rate equal to the 4th cumulant (i.e. unscaled excess kurtosis), and so on. This relationship was noted by [4] and later expanded upon by [5]. Although it is known to theoretical evolutionary biologists [6] [7], it is hardly known outside that field. Hence one of the goals of this paper is to explain this relationship to a wider group of people such as epidemiologists, statisticians and theoretical biologists in other fields while also examining its limitations. In practice, however, populations do not grow continuously; they grow in discrete steps (births, deaths, cell division). So instead of the instantaneous growth rate (r), it is also possible to think about the fold expansion (R) over an interval of time (Δt). ! = #!∙∆$ (1) If variation in generation interval by genotype is not too great, in practice it can be convenient to define R as the fold expansion over a single generation (or an integer number of generations) –– see section 3. Either way, if fitness is quantified in terms of R, we show that there is a similarly straightforward and exact, albeit less intuitive, relationship between the raw moments of the fitness distribution and its evolution over time. These relationships between cumulants, moments and selection can be considered generalizations of Fisher's fundamental theorem of natural selection, and they also make clear limitations of Fisher’s theorem. According to the theorem, mean fitness increases according to the variance in fitness. We show that this holds for the instantaneous change in r but not necessarily for the discrete change in R (section 2). In addition to the selection only scenario, we also consider heterozygote advantage and deleterious mutation. In these scenarios R shows substantial advantages compared to r. In particular in the scenario with selection and deleterious mutation, we find that the moments of R have an exact and simple solution which is simple enough to be helpful for understanding evolution. In contrast the theoretical relationships that should hold for r, hold only approximately due to the discrete nature of births and deaths. 1.1. r and cumulants Here we assume that selection is the only force acting on the population and every lineage in the population has a specific growth rate. If the growth rate is measured in terms of the Malthusian parameter (r), we see an exact relationship between the cumulants of a fitness distribution and its evolution over time. The reason for this relationship is fairly transparent from the way cumulants are defined. The nth cumulant is by definition the nth derivative of a cumulant generating function. In this case the cumulant generating function is ln(population size) plus a constant. The instantaneous rate of increase in ln(population size) is, of course, the mean of r across the lineages. It follows from this that the instantaneous rate of change of the mean of r is equal to the variance of r, the rate of change of the variance of r is equal to the unscaled skewness of r, and the rate of change of unscaled skewness of r is equal to the unscaled excess kurtosis of r. (We refer to the 3rd and 4th cumulants as unscaled skewness and unscaled excess kurtosis respectively because if the variable is to be scaled to have variance of 1 then the third and fourth cumulants are equal to the skewness and excess kurtosis respectively). $%% (') $) = %%'((') (2) Here Ki(r) is the i-th cumulant of the fitness distribution when fitness is measured in terms of the Malthusian parameter (r). Although there is no necessary visual pattern, for standard probability distributions, low versus high variance, negative versus positive skewness and negative versus positive excess kurtosis have a stereotypical look as shown by the solid lines in Figure 1. We then consider how these probability distributions evolve under selection (dashed lines in Figure 1). As expected we see the higher variance distribution in the top panel showing a larger increase in mean fitness. The negative skewness distribution in the middle panel becomes noticeably more concentrated around its mode (reduced variance) whereas the positive skewness distribution shows an increase in variance. Finally, in the bottom panel, the negative excess kurtosis distribution goes from having no skew to having a stereotypical look of negative skewness. And the positive excess kurtosis distribution goes from having no skew to having a stereotypical look of positive skewness. Figure 1. Understanding cumulants and selection visually. In this figure, the top plot compares high variance (faster increase in mean fitness) with low variance (slower increase). The middle plot illustrates positive skewness (red solid line) versus negative skewness (blue solid line); positive skewness leads to increased variance, while negative skewness leads to decreased variance. (The normal distribution has skewness 0). The bottom plot compares negative excess kurtosis (blue solid line, uniform distribution having an excess kurtosis of -1.2) and positive excess kurtosis (red solid line, Laplace distribution having an excess kurtosis of 3). The dashed lines show the fitness distributions after selection over one unit of time. 2. R and moments Instead of looking at the continuous growth rate (r), we can instead consider the fold increase (R) of a lineage over some interval of time (Δt); R=er×Δt. Again, we consider a situation where selection is the only force (no genetic drift, no mutation, no recombination, constant environment). If we now quantify fitness in terms of R instead of r, there is an exact and straightforward formula for how the raw moments of the fitness distribution change over time. *%,$'((!) = *%'(,$(!) *(,$(!) (3) Mi,t+1(R) is the value of i-th moment at time = t+1. See supplement (Section 1) for derivation. 3. Fisher’s fundamental theorem of natural selection According to Fisher’s fundamental theorem of natural selection, the rate of increase in mean fitness is the variance in fitness. The relationship between cumulants, moments and selection generalizes Fisher’s fundamental theorem of natural selection, and it also illustrates some of its limitations. In particular the theorem is true for the instantaneous change in mean fitness if fitness is quantified in terms of r. However, instantaneous change is not the most natural way to think about biological evolution which inherently involves discrete generations. On the other hand, if fitness is quantified in terms of R, the discrete change in mean fitness is equal to the variance in fitness only if the fitness of the parent generation is equal to 1 (or if the variance in fitness is equal to 0). If, for example, the mean fitness of the parent generation is 1.1 and the variance in fitness is positive, then the relationship does not hold. 4. Heterozygote advantage In this section we assume that variability in generation interval by genotype is negligible and hence, we define R as the fold expansion over one (sexual) generation. In this case, R can have substantial advantages over r as illustrated by the following example. We consider fitness from the perspective of a gene with two variants (A and B). In individuals with homozygous AA, fitness in terms of R is 1. For heterozygous individuals with the AB genotype, R is 1.2, and for homozygous BB individuals, R is 0.01. We assume these values in our calculations and assume random mating. We use the variable, f, to denote the frequency of variant B. At equilibrium, using the formula from [8], f = (1.2-1)/(0.2+1.19) = 0.1439, and the R values for the two genes are equal. !* = + ∙1.2 + (1 −+) ∙(1) = 1.0288 (4A) !+ = + ∙0.01 + (1 −+) ∙(1.2) = 1.0288 (4B) However, the values for r are quite different. '* = + ∙(ln(1.2)) + (1 −+) ∙(ln(1)) = 0.0262 (5A) '+ = + ∙(ln(0.01)) + (1 −+) ∙(ln(1.2)) = −0.5066 (5B) The problem here is two fold. 1) The model assumes a constant growth rate for the 3 subpopulations (AA, BB & AB). This is unbiological unless there are many asexual generations between recombination events. 2) The r values are instantaneous and correspond to a particular stage in the life cycle, in this case immediately following recombination (since f = 0.1439 corresponds to the time of recombination). Hence the fact that the 2 variants have the same fitness over a single (sexual) generation is obscured. Defining R over 1 (sexual) generation avoids both these problems. Defining R over 2, 3, 4 etc. generations would also solve these problems and may be helpful in certain circumstances. 5. Mutation and selection Here we consider a simple model with deleterious mutations along with selection. The fitness of the child (ri) is determined probabilistically by the following equation: '% = '% ∗−9 ⋅; (6A) where ri* is the fitness of the parent, x is a random binomial variable that determines whether or not there is a deleterious mutation and y is effect size of the mutation. The equivalent equation for R is given by the following formula. !% = !% ∗ ∙ #-.⋅0 (6B) 5.1. Mutation simulation for influenza We model the fitness distribution of a population of individuals over 4000 generations with a starting population size of 1 million. Each individual in the first generation has a fitness value of 1. Mutations occur for each individual with a probability of 0.2 based on the mutation rate of influenza [9] [10]. The effects of mutations are determined by a gamma distribution for y (with α = 1 & β = 2.85) again based on influenza [11]. The probability of an individual reproducing is equal to their fitness value. For this reason, the model has some drift but is under dispersed. To maintain the population size, we double the population when it is less than 500,000 individuals. We see that the mean, variance, unscaled skewness, and unscaled excess kurtosis approach and fluctuate around an equilibrium value (Figure 2): the mean of R is approximately 0.80, the standard deviation is 0.19 (variance of 0.035), the unscaled skewness is negative (-0.0076, scaled skewness of -1.2), and the unscaled excess kurtosis is positive (0.00095, scaled excess kurtosis of 0.77). The equilibrium values appear to be the same even if the initial fitness distribution is changed to a discrete uniform over {0.1, 0.2, … 0.9, 1} even though the initial change is much more dramatic (Supplement Section 2). Figure 2. Simulation results. All individuals in the initial generation have fitness of 1. 5.2. Simulation versus theory – r Because selection increases the mean of r by its variance, and mutation decreases the mean of r by the mean of the mutation effect (-x·y) and likewise for the higher cumulants, we might think that at equilibrium: 1!"#(!) 1!(-.∙0) ≈−1 (???) (7) where Ki(r) is the i-th cumulant of the fitness distribution in terms of r and Ki(-x·y) is the i-th cumulant of -x·y. But we see that this only very roughly holds (Table 1). Table 1. Average Cumulant Values for Generations 2000 to 4000 for r. r –x · y Mean -0.262 -0.0702 Variance 0.0957 0.0443 Unscaled Skewness -0.0697 -0.0422 Unscaled Excess Kurtoses 0.0757 According to Eq 7, the values in red should be similar in magnitude but opposite in sign. Likewise for the values in blue and in green. But we see that this is not the case. The reason for this discrepancy is: in our model mutations occur at the time of birth not continuously. As expected, breaking each generation into multiple mini generations reduces the discrepancy between the simulation results and Eq 7 (Supplement Section 3). This suggests that Eq 7 may closely hold in certain situations but not under those reported for influenza [9] [10] [11] nor for E. coli (Supplement Section 4). 5.3. Simulation versus theory – R If we think of selection in terms of R (using Eq 3 for the effect of selection on R) and the mutation model given by Eq 6B, then surprisingly simple equations exist for the mean (Eq 8A) and also for the higher moments (Eq 8B) of R at equilibrium. These equations ignore stochastic effects (i.e. drift). See supplement Section 5 for derivation. *((!) = max(!) ⋅@ (8A) *%(!) = max (!)% ⋅@% ∏ *4(#-.∙0) 45%-( 45( (8B) Here, M1(R) is the mean of R, and Mi(R) is the i-th moment of R. p is the probability that there is not a deleterious mutation (here that is 80%). max(R) is the value of R for the fittest individual in the initial population (here that is 1). Mj(e-x·y) is the j-th moment of e-x·y, the multiplicative effect of mutation, as given by Eq 6B. Eq 8A and Eq 8B have several interesting implications. Equilibrium mean fitness is determined solely by max(R) and p (the probability that there is not a deleterious mutation). p>0 is absolutely essential for mutation selection balance. Otherwise, selection will not be able to offset the effects of mutation, and fitness will collapse towards 0. We see that max(R) and the mutation distribution alone determine the moments of the equilibrium fitness distribution, and in this case the moments uniquely determine the probability distribution (or more precisely its cumulative distribution function) [12]. So the shape of the equilibrium fitness distribution is determined solely by the mutation effect distribution with max(R) acting as a scale parameter. As expected, given that the effect of drift in our simulation is low, the simulation results match the theoretical predictions very closely (Table 2). Table 2. Average Cumulant Values for Generations 2000 to 4000 for R. R (simulations) R (theoretical equilibrium) Mean 0.800 0.8 Variance 0.0351 0.0351 Unscaled Skewness -0.00757 -0.00757 Unscaled Excess Kurtoses 0.000952 0.000951 The theoretical equilibrium values for the moments were taken from Eq 8A and Eq 8B. These were converted to cumulants using the standard formulas [13]. 5.3.1. Alternative form e-x·y is the fitness of the child relative to the parent. Although less general, it is also possible to formulate x·y in terms of the number of mutations and the effect per mutation: x·y = Σzj where the z's are the effects of the mutations. If the z's are independent and the number of mutations is assumed to be Poisson distributed, then: *4(#-.∙0) = #6∙(7$(8%&)-() (9) where λ is the mean number of de novo deleterious mutations per individual and Mj(e-z) is the j- th moment for the effect on R for a single deleterious mutation. 5.3.2. Recombination Under a simple model of recombination (multiple segments with an independent probability of mutation which can reassort, random mating, no epistasis), Eq 8A and Eq 8B still apply except that max(R) is now the maximum R that could exist under recombination. The full significance of this will be discussed in a follow up paper. 5.3.3. Coefficient of variation of R From equations 8A and 8B, it is straightforward to derive the coefficient of variation (standard deviation divided by mean) of fitness. BC(!) = D*( -((#-.∙0) −1 (10A) where M1(e-x·y) is the mean of e-x·y i.e. the mean fitness (on the R scale) of a child relative to its parent. Under the assumptions of section 5.3.1, the above equation can be reformulated. BC(!) = E#6∙9(-7#(8%&): −1 (10B) where λ is the mean number of de novo deleterious mutations per individual and M1(e-z) is the mean fitness (on the R scale) of a child with a single deleterious mutation relative to its parent. Eq 10B is equivalent to equation 8 in [14] even though the derivation by [14] involves mathematical approximations and starts with a somewhat different model. Table 3 illustrates the relationship between the coefficient of variation of R and other key parameters. Table 4: Coefficient of variation of R and other parameters Approximate Species λ M1(e-z) M1(e-x·y) CV(R) E. coli 0.001 0.969 0.999969 0.6% Humans 2.1 0.991 0.981 13.8% Influenza A 0.223 0.761 0.948 23.4% λ is the mean number of de novo deleterious mutations per individual. M1(e-z) is the mean multiplicative effect on R of a single deleterious mutation. M1(e-x·y) is the mean fitness of the child relative to its parent. (As might be expected, M1(e-z)λ is close to M1(e-x·y), but that relationship is not exact since the number of mutations is a random variable). CV is the coefficient of variation of R. Approximate species is a species that approximately matches the λ and M1(e-z) values in the table according to the scientific literature. For λ of 2.1 and M1(e-z) of 0.991 for humans see [15], [16]. The values for E. coli and Influenza A are based on our simulations described in supplement section 4 and main text section 5.1 respectively. The actual coefficient of variation for these species may be quite different because of factors such as short-term selection. 6. Discussion Theoretical results in biology are unlikely to hold exactly for any actual biological system. But they may still be able to contribute something to our understanding. In our view the relationship between the cumulants of r and continuous selection is helpful for understanding the selection- only scenario and also for understanding cumulants. But for more complex scenarios R shows substantial advantages. The advantage of R is at least two-fold. 1) The cumulants of r can be greatly influenced by extreme negative values since r→-∞ as R→0. In the selection-only scenario these values are quickly removed, but in more complex scenarios these extreme values may continue to be produced. For example, Eq 7 suggests that negative skew should be a pervasive feature of fitness distributions, but the extent to which this is an artefact of these extreme negative values needs to be considered. Indeed, from Eq 8A and Eq 8B it is possible to find values such that the distribution of R is not left skewed. 2) If genetic variation in generation interval can be neglected, R can be conveniently defined relative to the organism's life cycle, which is the inherent discreteness in biological growth. Given its simplicity, it would be somewhat surprising if the full solution for the moments of R under mutation selection balance (i.e. Eq 8A and Eq 8B) were truly novel. The first two moments can be derived from Haldane's load theory and from [14]. Unfortunately, Haldane's load theory has been misinterpreted as saying that 1/p is some sort of minimum fertility rate needed to maintain a population [17], [18]. [14] corrected this error but placed undue emphasis on “the fitness of the fittest individual likely to exist.” On the contrary from Eq 8A max(R)·p ≥ 1 is all that is needed for emergence/persistence, and max(R) (under the assumptions of § 5.3.2) is not the fitness of the fittest individual likely to exist but rather the fitness of the fittest individual that could possibly exist from recombination. We see very different patterns of mutation for Influenza A, E. coli and humans. For influenza p=0.8 suggests a tradeoff between replication accuracy (p) and replication speed (R) in order to maximized R·p. In contrast for E. coli, which unlike influenza bears the cost of synthesizing its own proteins, p is nearly 1 suggesting that increasing p is relatively easy. Finally in the case of complex animals, it seems like multicellularity greatly reduces p, even though the per nucleotide per cell division mutation rate in the human germ line is perhaps even less than that of E. coli [19]. There are important caveats and limitations to our work. We consider genetic fitness not the realized number of offspring per individual which will tend to show more variability. There are important factors that we do not consider. However, the extent and even the reason (e.g. advantageous mutation, short term selection) that more complex systems deviate from Eq 8A and Eq 8B is something that should be quantifiable via a mix of controlled experiments and simulation. These equations create a pairing between the probability distribution for mutation and that for fitness, but we have only derived the moments, not the exact distribution. We also have not considered the exotic cases with max(R) unbounded and p=0 which (although unbiological) may be of theoretical interest. References [1] A. Nikas, H. Ahmed, and V. I. Zarnitsyna, “Competing Heterogeneities in Vaccine Effectiveness Estimation,” Vaccines, vol. 11, no. 8, Art. no. 8, Aug. 2023, doi: 10.3390/vaccines11081312. [2] R. R. Nelson and S. G. Winter, An evolutionary theory of economic change, Digitally reprinted. Cambridge, Mass.: The Belknap Press of Harvard Univ. Press, 2004. [3] P. F. M. Teunis, J. C. H. van Eijkeren, W. F. de Graaf, A. B. Marinović, and M. E. E. Kretzschmar, “Linking the seroresponse to infection to within-host heterogeneity in antibody production,” Epidemics, vol. 16, pp. 33–39, Sept. 2016, doi: 10.1016/j.epidem.2016.04.001. [4] T. F. Hansen, “Selection in asexual populations: An extension of the fundamental theorem,” Journal of Theoretical Biology, vol. 155, no. 4, pp. 537–544, Apr. 1992, doi: 10.1016/S0022-5193(05)80634-4. [5] P. J. Gerrish and P. D. Sniegowski, “Real time forecasting of near-future evolution,” Journal of The Royal Society Interface, vol. 9, no. 74, pp. 2268–2278, Apr. 2012, doi: 10.1098/rsif.2012.0119. [6] M. Rattray and J. L. Shapiro, “Cumulant dynamics of a population under multiplicative selection, mutation, and drift,” Theor Popul Biol, vol. 60, no. 1, pp. 17–31, Aug. 2001, doi: 10.1006/tpbi.2001.1531. [7] S. Yamauchi, T. Nozoe, R. Okura, E. Kussell, and Y. Wakamoto, “A unified framework for measuring selection on cellular lineages and traits,” eLife, vol. 11, p. e72299, Dec. 2022, doi: 10.7554/eLife.72299. [8] B. Charlesworth, “Population Genetics,” in Encyclopedia of Biodiversity, S. A. Levin, Ed., New York: Elsevier, 2001, pp. 777–797. doi: 10.1016/B0-12-226865-2/00353-9. [9] “Measurement of the mutation rates of animal viruses: influenza A virus and poliovirus type 1.” Accessed: July 14, 2025. [Online]. Available: https://journals.asm.org/doi/epdf/10.1128/jvi.59.2.377-383.1986 [10] R. Sanjuán, M. R. Nebot, N. Chirico, L. M. Mansky, and R. Belshaw, “Viral Mutation Rates,” Journal of Virology, vol. 84, no. 19, pp. 9733–9748, Oct. 2010, doi: 10.1128/jvi.00694-10. [11] E. Visher, S. E. Whitefield, J. T. McCrone, W. Fitzsimmons, and A. S. Lauring, “The Mutational Robustness of Influenza A Virus,” PLOS Pathogens, vol. 12, no. 8, p. e1005856, Aug. 2016, doi: 10.1371/journal.ppat.1005856. [12] George Casella and Roger L. Berger, Statistical Inference, 2nd ed. Pacific Grove, CA: Duxbury, 2002. Accessed: June 18, 2025. [Online]. Available: https://pages.stat.wisc.edu/~shao/stat610/Casella_Berger_Statistical_Inference.pdf [13] E. W. Weisstein, “Cumulant.” Accessed: June 25, 2025. [Online]. Available: https://mathworld.wolfram.com/Cumulant.html [14] B. Galeota-Sprung, P. Sniegowski, and W. Ewens, “Mutational Load and the Functional Fraction of the Human Genome,” Genome Biol Evol, vol. 12, no. 4, pp. 273–281, Apr. 2020, doi: 10.1093/gbe/evaa040. [15] A. Uchimura et al., “Germline mutation rates and the long-term phenotypic effects of mutation accumulation in wild-type laboratory mice and mutator mice,” Genome Res, vol. 25, no. 8, pp. 1125–1134, Aug. 2015, doi: 10.1101/gr.186148.114. [16] J. Matheson, U. Hernández, J. Bertram, and J. Masel, “Human deleterious mutation rate slows adaptation and implies high fitness variance,” Mar. 25, 2025, bioRxiv. doi: 10.1101/2023.09.01.555871. [17] Y. Lesecque, P. D. Keightley, and A. Eyre-Walker, “A resolution of the mutation load paradox in humans,” Genetics, vol. 191, no. 4, pp. 1321–1330, Aug. 2012, doi: 10.1534/genetics.112.140343. [18] D. Graur, “An Upper Limit on the Functional Fraction of the Human Genome,” Genome Biol Evol, vol. 9, no. 7, pp. 1880–1885, July 2017, doi: 10.1093/gbe/evx121. [19] N. Dudko, J. W. Dobrucki, and H. Fulka, “Mechanisms underlying low mutation rates in mammalian oocytes and preimplantation embryos,” Nucleic Acids Res, vol. 53, no. 15, p. gkaf760, Aug. 2025, doi: 10.1093/nar/gkaf760. [Supplement] Cumulants, Moments and Selection: The Connection Between Evolution and Statistics Hasan Ahmed, Deena Goodgold, Khushali Kothari, Rustom Antia* Department of Biology, Emory University, Atlanta, GA 30322, USA. *Corresponding author. Email: rantia@emory.edu 1. Derivation of discrete growth formula ∆"!,# = "!,#%& − "!,# = "!%&,# "&,# − "!,# Here, DMi,t is the change in the value of i-th moment between time = t and time = t+1. It allows us to quantify the discrete growth between consecutive steps for the population moment. We know that, "! ,# = & '#()) ∙)! ,) "!,#%& = & '#%&()) ∙)! ,) where ft is the probability density function for time = t and ft+1 is the probability density function at time = t+1. Because R is the fold change over a unit of time, '#%&()) ∝'#()) ∙) → '#%&()) = / ∙'#()) ∙) where c is a normalizing constant. & / ∙'#()) ∙) ,) = 1 →/ = 1 ∫'#()) ∙) ,) Hence after substitution, "!,#%& = & '#%&()) ∙)! ,) = & / ∙'# ()) ∙) ∙)! ,) = ∫'#%&()) ∙) ∙)! ,) ∫'#()) ∙) ,) Here, we see that the numerator is equivalent to Mi+1,t and the denominator is equivalent to M1, t Therefore, "!,#%& = "!%&,# "&,# ∆"!,# = "!,#%& − "!,# = "!%&,# "&,# − "!,# 2. Simulation results with initial discrete uniform distribution of R values Figure S1 displays the cumulant values over generations 0 to 300 for a population with an initial discrete unform distribution of fitness values (R). For this population the initial mean is 0.55 with the variance being higher since the initial fitness values range from 0.1 to 1 in a uniformly distributed manner. 10% of the population exists at each fitness value in {0.1, 0.2, ... 0.9, 1}. The initial unscaled skewness is 0 since the distribution is symmetric. The unscaled excess kurtosis is initially negative. High levels of selection occur early on in the simulation reflecting high initial variance, but the cumulant values plateau at a similar level compared to the simulation in the main text. Figure S1. Simulation results. Fitness values in the initial generation were distributed uniformly over {0.1, 0.2, ... 0.9, 1}. Table S1. Average Cumulant Values for Generations 2000 to 4000 for r. r −2 ⋅ y Mean -0.261566 -0.07017544 Variance 0.09575191 0.04432133 Unscaled Skewness -0.06968853 -0.04216142 Unscaled Excess Kurtoses 0.07578221 Table S2. Average Cumulant Values for Generations 2000 to 4000 for R. R (simulations) R (theoretical equilibrium) Mean 0.8002347 0.8 Variance 0.03509336 0.03506849 Unscaled Skewness -0.007572458 -0.007565338 Unscaled Excess Kurtoses 0.0009511625 0.0009506762 These equilibrium cumulant values are essentially identical to the equilibrium cumulant values in the main text. 3. Simulation results with reduced mutation effect Here we break one generation into 10 mini generations to make the simulation more like a continuous process. Mutations accumulate at each mini generation but at one tenth of the rate i.e. a 2% probability of mutation at each mini generation. Likewise, the amount of exponential growth over one mini generation is r/10. As expected, the simulation results much more closely conform to Eq 7 of the main text. Table S3. Average Cumulant Values for Generations 200 to 400 for r. r −2 ⋅ y Mean -0.204 -0.0702 Variance 0.0727 0.0443 Unscaled Skewness -0.0515 -0.0422 Unscaled Excess Kurtoses 0.0546 4. Mutation and selection in E. coli Here, we run an analogous simulation to that in the main text except that the parameters for mutation were selected to match those reported for E. coli. Mutations occur for each individual with a probability of 0.001 based on the mutation rate of E. coli [1]. The effects of mutations are determined by a gamma distribution (with α = 3.03 & β = 194.24 ) for 96.83% of mutations or by a standard exponential for the remaining 3.17% [2]. We see that the mean, variance, unscaled skewness, and unscaled excess kurtosis approach and fluctuate around an equilibrium value (Figure S2). The mean of R is approximately 0.9990018, the variance is low (3.07108e-05), the unscaled skewness is negative (-1.084431e-05, scaled skewness of -63.71845), and the unscaled excess kurtosis is positive (7.914932e-06, scaled excess kurtosis of 8391.989). Figure S2: Simulation results. All individuals in the initial generation have fitness of 1. Table S4. Average Cumulant Values for Generations 2000 to 4000 for r. r −2 ⋅ y Mean -0.001026745 -4.680476e-05 Variance 9.120267e-05 6.37112e-05 Unscaled Skewness -0.0002023346 - 0.0001901992 Unscaled Excess Kurtoses 0.0007463787 Table S5. Average Cumulant Values for Generations 2000 to 4000 for R. R (simulations) R (theoretical equilibrium) Mean 0.9990018 0.999 Variance 3.07108e-05 3.073879e-05 Unscaled Skewness -1.084431e-05 -1.083727e-05 Unscaled Excess Kurtoses 7.914932e-06 7.898774e-06 5. Derivation of formula for equilibrium values under mutation and selection First, we consider a sub population with fitness equal to R, and we consider how the fitness of the lineage that descends from this sub population changes over time. Mi is the i-th moment of fitness (R) of this lineage, g is the generation with g = 0 corresponding to the initial sub population, and Nj is the j-th moment of the mutation effect function, specifically, the moments of e–x× y (Eq 6B in main text). "!(4)  = )!  ∙ 6 78'9 ! ( ' ( !%)*&    ∙ 6 78' *&9 & ( ' ( )*& The above formula can be proved using induction. We now consider the behavior of this formula as g goes towards infinity. For g³ i, the above formula simplifies to the following. "!(4)  = )!  ∙ 6 78'9 + ( ' ( !%+*&    ∙ 6 78' *&9 & ( ' ( !*& Nj for very large values of j approaches p where p is the probability that there is not a deleterious mutation. Hence, we get the following formula: "!(4) = )! ∙ :! ∙ 6 (8' *&) & ( ' ( !*& Because the lineage descended from the fittest population maintains its relative advantage in fitness, it dominates every other lineage as g goes to infinity. Hence, the equilibrium values for the entire population are given by the following formula where max(R) is the maximum fitness of the initial population. "!(4) = ;<2())! ∙ :! ∙ 6 (8' *&) & ( ' ( !*& References [1] H. Lee, E. Popodi, H. Tang, and P. L. Foster, “Rate and molecular spectrum of spontaneous mutations in the bacterium Escherichia coli as determined by whole-genome sequencing,” Proceedings of the National Academy of Sciences, vol. 109, no. 41, pp. E2774–E2783, Oct. 2012, doi: 10.1073/pnas.1210309109. [2] S. F. Elena, L. Ekunwe, N. Hajela, S. A. Oden, and R. E. Lenski, “Distribution of fitness effects caused by random insertion mutations in Escherichia coli,” Genetica, vol. 102, no. 0, pp. 349–358, Mar. 1998, doi: 10.1023/A:1017031008316.
Cumulants, Moments and Selection: The Connection Between Evolution and Statistics Hasan Ahmed, Deena Goodgold, Khushali Kothari, Rustom Antia* 30322, USA. *Corresponding author. Email: Abstract Cumulants and moments are closely related to the basic mathematics of continuous and discrete selection (respectively). These relationships generalize Fisher's fundamental theorem of natural selection and also make clear some of its limitation. The relationship between cumulants and continuous selection is especially intuitive and also provides an alternative way to understand cumulants. We show that a similarly simple relationship exists between moments and discrete selection. In more complex scenarios, we show that thinking of selection over discrete generations has significant advantages. For a simple mutation model, we find exact solutions for the equilibrium moments of the fitness distribution. These solutions are surprisingly simple and have some interesting implications including: a necessary and sufficient condition for mutation selection balance, a very simple formula for mean fitness and the fact that the shape of the equilibrium fitness distribution is determined solely by mutation (whereas the scale is determined by the starting fitness distribution). Key words: cumulants, moments, mutation, selection, distribution of fitness effects, fisher's fundamental theorem of natural selection, heterozygote advantage, mutation selection balance 1. Introduction Cumulants and moments are key concepts in statistics. Selection is a fundamental concept in evolutionary biology, and it has applications to a variety of topics from other fields such as depletion of susceptibles in epidemiology [1], economics [2] and waning of immune memory [3]. It turns out that cumulants and moments are closely related to selection. This relationship is helpful not only for understanding selection but also for understanding cumulants. In the simple scenario where selection is the only force, a straightforward and exact relationship exists between cumulants of a fitness distribution and its evolution over time. Briefly, if fitness is measured in terms of exponential growth rate (i.e. the Malthusian parameter r), then the mean (or 1st cumulant) of fitness in the population increases at an instantaneous rate equal to the variance (or 2nd cumulant) of fitness in the population, variance changes at an instantaneous rate equal to the 3rd cumulant (i.e. unscaled skewness) of fitness, which changes at an instantaneous rate equal to the 4th cumulant (i.e. unscaled excess kurtosis), and so on. This relationship was noted by [4] and later expanded upon by [5]. Although it is known to theoretical evolutionary biologists [6] [7], it is hardly known outside that field. Hence one of the goals of this paper is to explain this relationship to a wider group of people such as epidemiologists, statisticians and theoretical biologists in other fields while also examining its limitations. In practice, however, populations do not grow continuously; they grow in discrete steps (births, deaths, cell division). So instead of the instantaneous growth rate (r), it is also possible to think about the fold expansion (R) over an interval of time (Δt). ! = #!∙∆ %% (') '((!) = *%'(, (!) (3) Mi,t+1(R) is the value of i-th moment at time = t+1. See supplement (Section 1) for derivation. 3. Fisher's fundamental theorem of natural selection According to Fisher's fundamental theorem of natural selection, the rate of increase in mean fitness is the variance in fitness. The relationship between cumulants, moments and selection generalizes Fisher's fundamental theorem of natural selection, and it also illustrates some of its limitations. In particular the theorem is true for the instantaneous change in mean fitness if fitness is quantified in terms of r. However, instantaneous change is not the most natural way to think about biological evolution which inherently involves discrete generations. On the other hand, if fitness is quantified in terms of R, the discrete change in mean fitness is equal to the variance in fitness only if the fitness of the parent generation is equal to 1 (or if the variance in fitness is equal to 0). If, for example, the mean fitness of the parent generation is 1.1 and the variance in fitness is positive, then the relationship does not hold. 4. Heterozygote advantage In this section we assume that variability in generation interval by genotype is negligible and hence, we define R as the fold expansion over one (sexual) generation. In this case, R can have substantial advantages over r as illustrated by the following example. We consider fitness from the perspective of a gene with two variants (A and B). In individuals with homozygous AA, fitness in terms of R is 1. For heterozygous individuals with the AB genotype, R is 1.2, and for homozygous BB individuals, R is 0.01. We assume these values in our calculations and assume random mating. We use the variable, f, to denote the frequency of variant B. At equilibrium, using the formula from [8], f = (1.2-1)/(0.2+1.19) = 0.1439, and the R values for the two genes are equal. !* = + ∙1.2 + (1 -+) ∙(1) = 1.0288 (4A) !+ = + ∙0.01 + (1 -+) ∙(1.2) = 1.0288 (4B) However, the values for r are quite different. '* = + ∙(ln(1.2)) + (1 -+) ∙(ln(1)) = 0.0262 (5A) '+ = + ∙(ln(0.01)) + (1 -+) ∙(ln(1.2)) = -0.5066 (5B) The problem here is two fold. 1) The model assumes a constant growth rate for the 3 subpopulations (AA, BB & AB). This is unbiological unless there are many asexual generations between recombination events. 2) The r values are instantaneous and correspond to a particular stage in the life cycle, in this case immediately following recombination (since f = 0.1439 corresponds to the time of recombination). Hence the fact that the 2 variants have the same fitness over a single (sexual) generation is obscured. Defining R over 1 (sexual) generation avoids both these problems. Defining R over 2, 3, 4 etc. generations would also solve these problems and may be helpful in certain circumstances. 5. Mutation and selection Here we consider a simple model with deleterious mutations along with selection. The fitness of the child (ri) is determined probabilistically by the following equation: '% = '% ∗-9 ⋅; (6A) where ri* is the fitness of the parent, x is a random binomial variable that determines whether or not there is a deleterious mutation and y is effect size of the mutation. The equivalent equation for R is given by the following formula. !% = !% ∗ ∙ #-.⋅0 (6B) 5.1. Mutation simulation for influenza We model the fitness distribution of a population of individuals over 4000 generations with a starting population size of 1 million. Each individual in the first generation has a fitness value of 1. Mutations occur for each individual with a probability of 0.2 based on the mutation rate of influenza [9] [10]. The effects of mutations are determined by a gamma distribution for y (with α = 1 & β = 2.85) again based on influenza [11]. The probability of an individual reproducing is equal to their fitness value. For this reason, the model has some drift but is under dispersed. To maintain the population size, we double the population when it is less than 500,000 individuals. We see that the mean, variance, unscaled skewness, and unscaled excess kurtosis approach and fluctuate around an equilibrium value (Figure 2): the mean of R is approximately 0.80, the standard deviation is 0.19 (variance of 0.035), the unscaled skewness is negative (-0.0076, scaled skewness of -1.2), and the unscaled excess kurtosis is positive (0.00095, scaled excess kurtosis of 0.77). The equilibrium values appear to be the same even if the initial fitness distribution is changed to a discrete uniform over {0.1, 0.2, ... 0.9, 1} even though the initial change is much more dramatic (Supplement Section 2). Figure 2. Simulation results. All individuals in the initial generation have fitness of 1. 5.2. Simulation versus theory - r Because selection increases the mean of r by its variance, and mutation decreases the mean of r by the mean of the mutation effect (-x·y) and likewise for the higher cumulants, we might think that at equilibrium: 1!"#(!) 1!(-.∙0) ≈-1 (???) (7) where Ki(r) is the i-th cumulant of the fitness distribution in terms of r and Ki(-x·y) is the i-th cumulant of -x·y. But we see that this only very roughly holds (Table 1). Table 1. Average Cumulant Values for Generations 2000 to 4000 for r. r -x · y Mean -0.262 -0.0702 Variance 0.0957 0.0443 Unscaled Skewness -0.0697 -0.0422 Unscaled Excess Kurtoses 0.0757 According to Eq 7, the values in red should be similar in magnitude but opposite in sign. Likewise for the values in blue and in green. But we see that this is not the case. The reason for this discrepancy is: in our model mutations occur at the time of birth not continuously. As expected, breaking each generation into multiple mini generations reduces the discrepancy between the simulation results and Eq 7 (Supplement Section 3). This suggests that Eq 7 may closely hold in certain situations but not under those reported for influenza [9] [10] [11] nor for E. coli (Supplement Section 4). 5.3. Simulation versus theory - R If we think of selection in terms of R (using Eq 3 for the effect of selection on R) and the mutation model given by Eq 6B, then surprisingly simple equations exist for the mean (Eq 8A) and also for the higher moments (Eq 8B) of R at equilibrium. These equations ignore stochastic effects (i.e. drift). See supplement Section 5 for derivation. *((!) = max(!) ⋅@ (8A) *%(!) = max (!)% ⋅@% ∏ *4(#-.∙0) 45%-( 45( (8B) Here, M1(R) is the mean of R, and Mi(R) is the i-th moment of R. p is the probability that there is not a deleterious mutation (here that is 80%). max(R) is the value of R for the fittest individual in the initial population (here that is 1). Mj(e-x·y) is the j-th moment of e-x·y, the multiplicative effect of mutation, as given by Eq 6B. Eq 8A and Eq 8B have several interesting implications. Equilibrium mean fitness is determined solely by max(R) and p (the probability that there is not a deleterious mutation). p>0 is absolutely essential for mutation selection balance. Otherwise, selection will not be able to offset the effects of mutation, and fitness will collapse towards 0. We see that max(R) and the mutation distribution alone determine the moments of the equilibrium fitness distribution, and in this case the moments uniquely determine the probability distribution (or more precisely its cumulative distribution function) [12]. So the shape of the equilibrium fitness distribution is determined solely by the mutation effect distribution with max(R) acting as a scale parameter. As expected, given that the effect of drift in our simulation is low, the simulation results match the theoretical predictions very closely (Table 2). Table 2. Average Cumulant Values for Generations 2000 to 4000 for R. R (simulations) R (theoretical equilibrium) Mean 0.800 0.8 Variance 0.0351 0.0351 Unscaled Skewness -0.00757 -0.00757 Unscaled Excess Kurtoses 0.000952 0.000951 The theoretical equilibrium values for the moments were taken from Eq 8A and Eq 8B. These were converted to cumulants using the standard formulas [13]. 5.3.1. Alternative form e-x·y is the fitness of the child relative to the parent. Although less general, it is also possible to formulate x·y in terms of the number of mutations and the effect per mutation: x·y = Σzj where the z's are the effects of the mutations. If the z's are independent and the number of mutations is assumed to be Poisson distributed, then: *4(#-.∙0) = #6∙(7$(8%&)-() (9) where λ is the mean number of de novo deleterious mutations per individual and Mj(e-z) is the jth moment for the effect on R for a single deleterious mutation. 5.3.2. Recombination Under a simple model of recombination (multiple segments with an independent probability of mutation which can reassort, random mating, no epistasis), Eq 8A and Eq 8B still apply except that max(R) is now the maximum R that could exist under recombination. The full significance of this will be discussed in a follow up paper. 5.3.3. Coefficient of variation of R From equations 8A and 8B, it is straightforward to derive the coefficient of variation (standard deviation divided by mean) of fitness. BC(!) = D*( -((#-.∙0) -1 (10A) where M1(e-x·y) is the mean of e-x·y i.e. the mean fitness (on the R scale) of a child relative to its parent. Under the assumptions of section 5.3.1, the above equation can be reformulated. BC(!) = E#6∙9(-7#(8%&): -1 (10B) where λ is the mean number of de novo deleterious mutations per individual and M1(e-z) is the mean fitness (on the R scale) of a child with a single deleterious mutation relative to its parent. Eq 10B is equivalent to equation 8 in [14] even though the derivation by [14] involves mathematical approximations and starts with a somewhat different model. Table 3 illustrates the relationship between the coefficient of variation of R and other key parameters. Table 4: Coefficient of variation of R and other parameters Approximate Species λ M1(e-z) M1(e-x·y) CV(R) E. coli 0.001 0.969 0.999969 0.6% Humans 2.1 0.991 0.981 13.8% Influenza A 0.223 0.761 0.948 23.4% λ is the mean number of de novo deleterious mutations per individual. M1(e-z) is the mean multiplicative effect on R of a single deleterious mutation. M1(e-x·y) is the mean fitness of the child relative to its parent. (As might be expected, M1(e-z)λ is close to M1(e-x·y), but that relationship is not exact since the number of mutations is a random variable). CV is the coefficient of variation of R. Approximate species is a species that approximately matches the λ and M1(e-z) values in the table according to the scientific literature. For λ of 2.1 and M1(e-z) of 0.991 for humans see [15], [16]. The values for E. coli and Influenza A are based on our simulations described in supplement section 4 and main text section 5.1 respectively. The actual coefficient of variation for these species may be quite different because of factors such as short-term selection. 6. Discussion Theoretical results in biology are unlikely to hold exactly for any actual biological system. But they may still be able to contribute something to our understanding. In our view the relationship between the cumulants of r and continuous selection is helpful for understanding the selectiononly scenario and also for understanding cumulants. But for more complex scenarios R shows substantial advantages. The advantage of R is at least two-fold. 1) The cumulants of r can be greatly influenced by extreme negative values since r→-∞ as R→0. In the selection-only scenario these values are quickly removed, but in more complex scenarios these extreme values may continue to be produced. For example, Eq 7 suggests that negative skew should be a pervasive feature of fitness distributions, but the extent to which this is an artefact of these extreme negative values needs to be considered. Indeed, from Eq 8A and Eq 8B it is possible to find values such that the distribution of R is not left skewed. 2) If genetic variation in generation interval can be neglected, R can be conveniently defined relative to the organism's life cycle, which is the inherent discreteness in biological growth. Given its simplicity, it would be somewhat surprising if the full solution for the moments of R under mutation selection balance (i.e. Eq 8A and Eq 8B) were truly novel. The first two moments can be derived from Haldane's load theory and from [14]. Unfortunately, Haldane's load theory has been misinterpreted as saying that 1/p is some sort of minimum fertility rate needed to maintain a population [17], [18]. [14] corrected this error but placed undue emphasis on "the fitness of the fittest individual likely to exist." On the contrary from Eq 8A max(R)·p ≥ 1 is all that is needed for emergence/persistence, and max(R) (under the assumptions of § 5.3.2) is not the fitness of the fittest individual likely to exist but rather the fitness of the fittest individual that could possibly exist from recombination. We see very different patterns of mutation for Influenza A, E. coli and humans. For influenza p=0.8 suggests a tradeoff between replication accuracy (p) and replication speed (R) in order to maximized R·p. In contrast for E. coli, which unlike influenza bears the cost of synthesizing its own proteins, p is nearly 1 suggesting that increasing p is relatively easy. Finally in the case of complex animals, it seems like multicellularity greatly reduces p, even though the per nucleotide per cell division mutation rate in the human germ line is perhaps even less than that of E. coli [19]. There are important caveats and limitations to our work. We consider genetic fitness not the realized number of offspring per individual which will tend to show more variability. There are important factors that we do not consider. However, the extent and even the reason (e.g. advantageous mutation, short term selection) that more complex systems deviate from Eq 8A and Eq 8B is something that should be quantifiable via a mix of controlled experiments and simulation. These equations create a pairing between the probability distribution for mutation and that for fitness, but we have only derived the moments, not the exact distribution. We also have not considered the exotic cases with max(R) unbounded and p=0 which (although unbiological) may be of theoretical interest. References [1] A. Nikas, H. Ahmed, and V. I. Zarnitsyna, "Competing Heterogeneities in Vaccine Effectiveness Estimation," Vaccines, vol. 11, no. 8, Art. no. 8, Aug. 2023, [2] R. R. Nelson and S. G. Winter, An evolutionary theory of economic change, Digitally reprinted. Cambridge, Mass.: The Belknap Press of Harvard Univ. Press, 2004. [3] P. F. M. Teunis, J. C. H. van Eijkeren, W. F. de Graaf, A. B. Marinović, and M. E. E. Kretzschmar, "Linking the seroresponse to infection to within-host heterogeneity in antibody production," Epidemics, vol. 16, pp. 33-39, Sept. 2016, [4] T. F. Hansen, "Selection in asexual populations: An extension of the fundamental theorem," Journal of Theoretical Biology, vol. 155, no. 4, pp. 537-544, Apr. 1992, [5] P. J. Gerrish and P. D. Sniegowski, "Real time forecasting of near-future evolution," Journal of The Royal Society Interface, vol. 9, no. 74, pp. 2268-2278, Apr. 2012, [6] M. Rattray and J. L. Shapiro, "Cumulant dynamics of a population under multiplicative selection, mutation, and drift," Theor Popul Biol, vol. 60, no. 1, pp. 17-31, Aug. 2001, [7] S. Yamauchi, T. Nozoe, R. Okura, E. Kussell, and Y. Wakamoto, "A unified framework for measuring selection on cellular lineages and traits," eLife, vol. 11, p. e72299, Dec. 2022, [8] B. Charlesworth, "Population Genetics," in Encyclopedia of Biodiversity, S. A. Levin, Ed., New York: Elsevier, 2001, pp. 777-797. [9] "Measurement of the mutation rates of animal viruses: influenza A virus and poliovirus type 1." Accessed: July 14, 2025. [Online]. Available: https://journals.asm.org/doi/epdf/10.1128/jvi.59.2.377-383.1986 [10] R. Sanjuán, M. R. Nebot, N. Chirico, L. M. Mansky, and R. Belshaw, "Viral Mutation Rates," Journal of Virology, vol. 84, no. 19, pp. 9733-9748, Oct. 2010, [11] E. Visher, S. E. Whitefield, J. T. McCrone, W. Fitzsimmons, and A. S. Lauring, "The Mutational Robustness of Influenza A Virus," PLOS Pathogens, vol. 12, no. 8, p. e1005856, Aug. 2016, [12] George Casella and Roger L. Berger, Statistical Inference, 2nd ed. Pacific Grove, CA: Duxbury, 2002. Accessed: June 18, 2025. [Online]. Available: https://pages.stat.wisc.edu/~shao/stat610/Casella_Berger_Statistical_Inference.pdf [13] E. W. Weisstein, "Cumulant." Accessed: June 25, 2025. [Online]. Available: https://mathworld.wolfram.com/Cumulant.html [14] B. Galeota-Sprung, P. Sniegowski, and W. Ewens, "Mutational Load and the Functional Fraction of the Human Genome," Genome Biol Evol, vol. 12, no. 4, pp. 273-281, Apr. 2020, [15] A. Uchimura et al., "Germline mutation rates and the long-term phenotypic effects of mutation accumulation in wild-type laboratory mice and mutator mice," Genome Res, vol. 25, no. 8, pp. 1125-1134, Aug. 2015, [16] J. Matheson, U. Hernández, J. Bertram, and J. Masel, "Human deleterious mutation rate slows adaptation and implies high fitness variance," Mar. 25, 2025, bioRxiv. [17] Y. Lesecque, P. D. Keightley, and A. Eyre-Walker, "A resolution of the mutation load paradox in humans," Genetics, vol. 191, no. 4, pp. 1321-1330, Aug. 2012, [18] D. Graur, "An Upper Limit on the Functional Fraction of the Human Genome," Genome Biol Evol, vol. 9, no. 7, pp. 1880-1885, July 2017, [19] N. Dudko, J. W. Dobrucki, and H. Fulka, "Mechanisms underlying low mutation rates in mammalian oocytes and preimplantation embryos," Nucleic Acids Res, vol. 53, no. 15, p. gkaf760, Aug. 2025, [Supplement] Cumulants, Moments and Selection: The Connection Between Evolution and Statistics Hasan Ahmed, Deena Goodgold, Khushali Kothari, Rustom Antia* 30322, USA. *Corresponding author. Email: 1. Derivation of discrete growth formula ∆"!,# = "!,#%& - "!,# = "!%&,# "&,# - "!,# Here, DMi,t is the change in the value of i-th moment between time = t and time = t+1. It allows us to quantify the discrete growth between consecutive steps for the population moment. We know that, "! ,# = & '#()) ∙)! ,) "!,#%& = & '#%&()) ∙)! ,) where ft is the probability density function for time = t and ft+1 is the probability density function at time = t+1. Because R is the fold change over a unit of time, '#%&()) ∝'#()) ∙) → '#%&()) = / ∙'#()) ∙) where c is a normalizing constant. & / ∙'#()) ∙) ,) = 1 →/ = 1 ∫'#()) ∙) ,) Hence after substitution, "!,#%& = & '#%&()) ∙)! ,) = & / ∙'# ()) ∙) ∙)! ,) = ∫'#%&()) ∙) ∙)! ,) ∫'#()) ∙) ,) Here, we see that the numerator is equivalent to Mi+1,t and the denominator is equivalent to M1, t Therefore, "!,#%& = "!%&,# "&,# ∆"!,# = "!,#%& - "!,# = "!%&,# "&,# - "!,# 2. Simulation results with initial discrete uniform distribution of R values Figure S1 displays the cumulant values over generations 0 to 300 for a population with an initial discrete unform distribution of fitness values (R). For this population the initial mean is 0.55 with the variance being higher since the initial fitness values range from 0.1 to 1 in a uniformly distributed manner. 10% of the population exists at each fitness value in {0.1, 0.2, ... 0.9, 1}. The initial unscaled skewness is 0 since the distribution is symmetric. The unscaled excess kurtosis is initially negative. High levels of selection occur early on in the simulation reflecting high initial variance, but the cumulant values plateau at a similar level compared to the simulation in the main text. Figure S1. Simulation results. Fitness values in the initial generation were distributed uniformly over {0.1, 0.2, ... 0.9, 1}. Table S1. Average Cumulant Values for Generations 2000 to 4000 for r. r -2 ⋅ y Mean -0.261566 -0.07017544 Variance 0.09575191 0.04432133 Unscaled Skewness -0.06968853 -0.04216142 Unscaled Excess Kurtoses 0.07578221 Table S2. Average Cumulant Values for Generations 2000 to 4000 for R. R (simulations) R (theoretical equilibrium) Mean 0.8002347 0.8 Variance 0.03509336 0.03506849 Unscaled Skewness -0.007572458 -0.007565338 Unscaled Excess Kurtoses 0.0009511625 0.0009506762 These equilibrium cumulant values are essentially identical to the equilibrium cumulant values in the main text. 3. Simulation results with reduced mutation effect Here we break one generation into 10 mini generations to make the simulation more like a continuous process. Mutations accumulate at each mini generation but at one tenth of the rate i.e. a 2% probability of mutation at each mini generation. Likewise, the amount of exponential growth over one mini generation is r/10. As expected, the simulation results much more closely conform to Eq 7 of the main text. Table S3. Average Cumulant Values for Generations 200 to 400 for r. r -2 ⋅ y Mean -0.204 -0.0702 Variance 0.0727 0.0443 Unscaled Skewness -0.0515 -0.0422 Unscaled Excess Kurtoses 0.0546 4. Mutation and selection in E. coli Here, we run an analogous simulation to that in the main text except that the parameters for mutation were selected to match those reported for E. coli. Mutations occur for each individual with a probability of 0.001 based on the mutation rate of E. coli [1]. The effects of mutations are determined by a gamma distribution (with α = 3.03 & β = 194.24 ) for 96.83% of mutations or by a standard exponential for the remaining 3.17% [2]. We see that the mean, variance, unscaled skewness, and unscaled excess kurtosis approach and fluctuate around an equilibrium value (Figure S2). The mean of R is approximately 0.9990018, the variance is low (3.07108e-05), the unscaled skewness is negative (-1.084431e-05, scaled skewness of -63.71845), and the unscaled excess kurtosis is positive (7.914932e-06, scaled excess kurtosis of 8391.989). Figure S2: Simulation results. All individuals in the initial generation have fitness of 1. Table S4. Average Cumulant Values for Generations 2000 to 4000 for r. r -2 ⋅ y Mean -0.001026745 -4.680476e-05 Variance 9.120267e-05 6.37112e-05 Unscaled Skewness -0.0002023346 - 0.0001901992 Unscaled Excess Kurtoses 0.0007463787 Table S5. Average Cumulant Values for Generations 2000 to 4000 for R. R (simulations) R (theoretical equilibrium) Mean 0.9990018 0.999 Variance 3.07108e-05 3.073879e-05 Unscaled Skewness -1.084431e-05 -1.083727e-05 Unscaled Excess Kurtoses 7.914932e-06 7.898774e-06 5. Derivation of formula for equilibrium values under mutation and selection First, we consider a sub population with fitness equal to R, and we consider how the fitness of the lineage that descends from this sub population changes over time. Mi is the i-th moment of fitness (R) of this lineage, g is the generation with g = 0 corresponding to the initial sub population, and Nj is the j-th moment of the mutation effect function, specifically, the moments of e-x× y (Eq 6B in main text). "!(4) = )! ∙ 6 78'9 ! ( ' ( !%)*& ∙ 6 78' *&9 & ( ' ( )*& The above formula can be proved using induction. We now consider the behavior of this formula as g goes towards infinity. For g3 i, the above formula simplifies to the following. "!(4) = )! ∙ 6 78'9 + ( ' ( !%+*& ∙ 6 78' *&9 & ( ' ( !*& Nj for very large values of j approaches p where p is the probability that there is not a deleterious mutation. Hence, we get the following formula: "!(4) = )! ∙ :! ∙ 6 (8' *&) & ( ' ( !*& Because the lineage descended from the fittest population maintains its relative advantage in fitness, it dominates every other lineage as g goes to infinity. Hence, the equilibrium values for the entire population are given by the following formula where max(R) is the maximum fitness of the initial population. "!(4) = ;<2())! ∙ :! ∙ 6 (8' *&) & ( ' ( !*& References [1] H. Lee, E. Popodi, H. Tang, and P. L. Foster, "Rate and molecular spectrum of spontaneous mutations in the bacterium Escherichia coli as determined by whole-genome sequencing," Proceedings of the National Academy of Sciences, vol. 109, no. 41, pp. E2774-E2783, Oct. 2012, [2] S. F. Elena, L. Ekunwe, N. Hajela, S. A. Oden, and R. E. Lenski, "Distribution of fitness effects caused by random insertion mutations in Escherichia coli," Genetica, vol. 102, no. 0, pp. 349-358, Mar. 1998,
2510.14918
Tree-Like Shortcuttings of Trees Hung Le∗ Lazar MilenkoviㆠShay Solomon‡ Cuong Than§ Abstract Sparse shortcuttings of trees—equivalently, sparse 1-spanners for tree metrics with bounded hop-diameter—have been studied extensively (under different names and settings), since the pioneering works of [Yao82, Cha87, AS87, BTS94], initially motivated by applications to range queries, online tree product, and MST verification, to name a few. These construc- tions were also lifted from trees to other graph families using known low-distortion embed- ding results. The works of [Yao82, Cha87, AS87, BTS94] establish a tight tradeoff between hop-diameter and sparsity (or average degree) for tree shortcuttings and imply constant- hop shortcuttings for n-node trees with sparsity O(log∗n). Despite their small sparsity, all known constant-hop shortcuttings contain dense subgraphs (of sparsity Ω(log n)), which is a significant drawback for many applications. We initiate a systematic study of constant-hop tree shortcuttings that are “tree-like”. We focus on two well-studied graph parameters that measure how far a graph is from a tree: arboricity and treewidth. Our contribution is twofold. • New upper and lower bounds for tree-like shortcuttings of trees, including an optimal tradeoff between hop-diameter and treewidth for all hop-diameter up to O(log log n). We also provide a lower bound for larger values of k, which together yield hop-diameter×treewidth = Ω((log log n)2) for all values of hop-diameter, resolving an open question of [FL22b, Le23]. • Applications of these bounds, focusing on low-dimensional Euclidean and doubling metrics. A seminal work of Arya et al. [ADM+95] presented a (1 + ϵ)-spanner with constant hop-diameter and sparsity O(log∗n), but with large arboricity. We show that constant hop-diameter is sufficient to achieve arboricity O(log∗n). Furthermore, we present a (1+ϵ)-stretch routing scheme in the fixed-port model with 3 hops and a local memory of O(log2 n/ log log n) bits, resolving an open question of [KLMS22]. 1 Introduction Given a tree T = (V, E) and an integer k ≥1, a tree shortcutting of T with hop-diameter k is a graph G = (V, E′) such that for every two vertices u, v ∈V , there is a path in G consisting of at most k edges such that dG(u, v) = dT (u, v), where dG(u, v) (resp., dT (u, v)) represents the distance between u and v in G (resp., T). Besides achieving small (ideally constant) hop-diameter, the most basic parameter of a tree shortcutting is its number of edges, |E′|. The problem of constructing sparse tree shortcuttings with small hop-diameter has been ∗University of Massachusetts Amherst. Email: hungle@cs.umass.edu. †Tel Aviv University. Email: milenkovic.lazar@gmail.com. ‡Tel Aviv University. Email: solo.shay@gmail.com. §University of Massachusetts Amherst. Email: cthan@umass.edu. 1 arXiv:2510.14918v1 [cs.DS] 16 Oct 2025 studied extensively (under different names and settings), since the pioneering works of [Yao82, Cha87, AS87, BTS94]. The first tradeoff was given by Yao [Yao82], who studied the problem of computing range queries on paths. Later, Chazelle [Cha87] showed how to extend these tradeoffs from paths to arbitrary trees. Few years after, Alon and Schieber [AS87] studied the problem of computing semigroup product along paths and trees. Given a tree T whose vertices are elements of a semigroup, the goal is to preprocess it into a succinct data structure so that every subsequent semigroup product query can be answered using as few as possible operations. In the terminology of tree shortcutting, they showed that for any n-point path, one can get a shortcutting with hop-diameter k and O(nαk(n)) edges. Here, αk(n) is a very slowly-growing inverse Ackermann function defined in Section 2. In particular, their tradeoff implies that one can get hop-diameters 2, 3, and 4, with O(log n), O(log log n), and O(n log∗n) edges, respectively. For trees, their hop-diameter grows by a factor of 2: their tree shortcutting achieves hop-diameter 2k using O(nαk(n)) edges. They also showed that for a line metric the tradeoff is tight. Some of the applications of shortcuttings mentioned in [AS87] include finding the max-flow in a multiterminal network, verifying MST, and updating an MST after increasing the cost of one of its edges. The problem of problem of achieving an optimal tradeoff between hop-diameter and sparsity was settled by Bodlaender et al. [BTS94], who gave an optimal tradeoff for trees matching the lower bound of [AS87] for paths. These constructions were successfully used in various applications and were also lifted to other graph families. Most notably perhaps, a seminal work of Arya et al. [ADM+95] shows that every n-point Euclidean metric with a constant dimension admits a (1+ϵ)-spanner with hop- diameter k and Oϵ(nαk(n)) edges. Spanners are formally defined in Section 2. Roughly speaking, spanners generalize the notion of tree shortcutting: Given a tree T, a tree shortcutting of T is a stretch-1 spanner of the tree metric induced by T. Filtser and Le [FL22b] used tree shortcutting to achieve low-treewidth embedding of planar graphs. (See [CFKL20, CLPP23, CCC+25] for additional results on low-treewidth embedding of planar and minor-free graphs and metrics.) In particular, the central part of the embedding result of [FL22b] is a low-treewidth shortcutting for trees. Another example is the compact routing scheme by Kahalon et al. [KLMS22]. (See Section 2 for a definition of compact routing.) Their routing scheme operates on top a network which is a low-treewidth tree shortcutting with hop-diameter 2. Once the shortcutting has been computed, it serves as a “proxy” overlay network, on which the computation can proceed, which gives rise to huge savings in a number of quality measures, including global and local space usage, as well as in various notions of running time, which change from one setting to another. In some applications where limiting the hop-distances of paths is crucial, such as in some routing schemes, road and railway networks, and telecommunication, we might need to minimize the hop-distances; for example, imagine a railway network, where each hop in the route amounts to switching a train – how many of us would be willing to use more than, say, 4 hops? Likewise, what if each hop amounts to traversing a traffic light, wouldn’t we prefer routes that minimize the number of traffic lights? In such cases, the designer of the system, or its users, might not be content with super-constant hop-distances, or even with a large constant, and it might be of significant value to achieve as small as possible hop-distances. Motivated by such practical considerations, we are primarily interested in values of hop-diameter k that “approach” 1, mainly k = 2, 3, 4, ... as there is no practical need in considering larger values of k. The fundamental drawback of sparse tree shortcuttings is that they have dense subgraphs. In particular, all the aforementioned tree shortcutting constructions suffer from not being uniformly-sparse in the regime of constant k — they have subgraphs with average degree of Ω(log n). This lower bound on average degree is true for hop-diameter k = 2 and does not reduce with hop-diameter. Motivated by this, we initiate a systematic study of constant-hop 2 tree shortcuttings that are “tree-like”. In particular we are interested in two notions that capture uniform sparsity: treewidth and arboricity. (Formal definitions are given in Section 2.) These two graph families are prime examples of families admitting more efficient algorithms, in contrast to just sparse graphs. The key question underlying this work is the following. Question 1. Given an n-vertex tree, is there a shortcutting with constant hop diameter k with arboricity/treewidth of o(log n), and ideally close to constant? In particular, is it possible to achieve this using k = 3, 4, ...? Getting below the log n bound is impossible for k = 2 due to the sparsity lower bound for shortcuttings with hop-diameter 2. A well-known construction achieves a bound of O(log n) on sparsity, treewidth, and arboricity in this regime. Also, with Θ(log n) hops, one can get a constant bound on maximum degree and thus also on sparsity, treewidth, and arboricity [SE14]. We note, however, that the key focus of this paper is in the regime where the hop-diameter is constant. This is arguably the most important regime for all applications. A key insight of this work is that one can break the logarithmic bound, even for treewidth, by using as few as 3 hops. Building on this insight, we derive general nontrivial tradeoffs between the hop-diameter and the treewidth/arboricity, and demonstrate their applicability. The aforementioned low-treewidth embedding result of Filtser and Le [FL22b] gives a tree shortcutting with hop-diameter k = O(log log n) and treewidth t = O(log log n). Using this result, they give a low-treewidth embedding of planar graphs into low-treewidth graphs. In particular, they show that if one can construct a tree shortcutting with hop-diameter k and treewidth t, then one can embed a planar graph with diameter D into a graph with treewidth O(k·t/ϵ) with an additive distortion of O(ϵD). Improving the product of k·t would immediately improve the bound on the treewidth of the embedding. The following question is left open in their work. Question 2 ([Le23, FL22b]). Is there a tree shortcutting with treewidth t and hop-diameter k such that k · t = o((log log n)2)? Furthermore, is there such a tree shortcutting with a constant hop-diameter? The compact routing scheme of Kahalon et al. [KLMS22] achieves memory bound of O(log2 n) bits for routing on tree shortcuttings with hop-diameter 2. The reason why they achieve a bound of O(log2 n) bits is essentially due to the fact that the shortcutting has arboricity/treewidth of Θ(log n). The main obstacle in improving their memory bound lies in understanding tree shortcuttings with small treewidth/arboricity. Breaking the bound of Θ(log2 n) bits is a main open question left in their work. Quoting [KLMS22]: “Whether or not one can use a spanner1 of larger (sublogarithmic and preferably constant) hop-diameter for designing compact routing schemes with o(log2 n) bits is left here as an intriguing open question.” Question 3 ([KLMS22]). Given an n-vertex tree T, is there a compact routing scheme (oper- ating on a shortcutting of T) with stretch 1 which uses o(log2 n) bits of space for every node and achieves an o(log n), and ideally constant hop-diameter? 1.1 Our contribution Perhaps the main contribution of this work is a conceptual one—identifying the importance of Question 1. In Section 1.1.1 we present new upper and lower bounds for tree shortcuttings, which answer Question 1 as well as Question 2 in the regime of hop-diameter O(log log n). In Section 1.1.2 we present some extensions and applications of these bounds, which in particular settle Question 3. 1tree shortcutting in our terminology 3 1.1.1 Bounds for tree shortcuttings We provide a thorough investigation of Question 1. First, we show that one can break the log n barrier on treewidth already for hop-diameter 3. Theorem 1.1 provides a general upper bound tradeoff between the hop-diameter and the treewidth, for all values of hop-diameter k = O(log log n). Theorem 1.1. For every n ≥1 and every k = O(log log n), every n-vertex tree admits a shortcutting with hop-diameter k and treewidth O(k log2/k n) for even k and O(k( log n log log n)2/(k−1)) for odd k ≥3. Remark. It is impossible to extend the result of Theorem 1.1 to basic graph families, such as planar graphs or Euclidean metrics. See Section 1.2 for more details. We also prove a lower bound tradeoff between the hop-diameter and the treewidth that matches the upper bound of Theorem 1.1 for all values of k = O(log log n). Furthermore, we provide a lower bound for larger values of k, which together settle negatively Question 2, and in particular imply that the construction of [FL22b] with hop-diameter and treewidth both bounded by O(log log n) is asymptotically optimal. Theorem 1.2. For every n ≥1, every shortcutting with hop-diameter k for an n-point path must have treewidth: • Ω(k log2/k n) for even k and Ω(k( log n log log n)2/(k−1)) for odd k ≥3, whenever k ≤ 2 ln(2e) ln log n; • Ω((log log n)2/k) whenever k > 2 ln(2e) ln log n. The construction of Theorem 1.1 for k = 3 has treewidth, and thus also arboricity, bounded by O(log n/ log log n). We next show that the bound on the arboricity can further be improved. In particular one can get rid of the factor k in the tradeoff from Theorem 1.1 by introducing some slack to the exponent of log n. In particular, we prove the following theorem. Theorem 1.3. For every two integers n ≥1 and k ≥1 and every n-vertex tree T, there is a shortcutting with hop-diameter k and arboricity O(log12/(k+4) n). Moreover, when the height of the tree is h, then the arboricity is O(h6/(k+4)). Finally, we show an even better tradeoff between the hop-diameter and arboricity on paths. Theorem 1.4. For every n ≥1 and every even k ≥2, every n-point path admits a shortcutting with hop-diameter k and arboricity O(αk/2+1(n)). In particular, Theorem 1.4 shows that one can get arboricity O(log log n) with k = 4 and O(log∗n) with k = 6. The tradeoff is asymptotically tight, due to the sparsity lower bound of [AS87], as the arboricity of any graph is at most its average degree (up to a factor of 2). We note that all our shortcutting constructions can be implemented in time linear in their size. We will skip the implementation details as this is not the main focus of this work. 1.1.2 Extensions and applications We next extend Theorem 1.4, which provides a 1-spanner for line metrics with bounded hop- diameter and arboricity, to arbitrary doubling metrics, by increasing the stretch to 1 + ϵ. Theorem 1.5. Let k be an even integer and let ϵ ∈(0, 1/6) be an arbitrary parameter. Then, for every positive integer n, every n-point metric with doubling dimension d admits a (1+ϵ)-spanner with hop-diameter k and arboricity ϵ−O(d)αk/2+1(n). 4 This significantly strengthens the construction of Arya et al. [ADM+95], by providing a uniformly sparse (rather than just sparse) construction with constant hop-diameter. Specifically, for hop-diameter k, we transition from sparsity ϵ−O(d)αk(n) to arboricity ϵ−O(d)αk/2+1(n); in particular, we get arboricity O(log∗n) with an hop-diameter of 6. Using the results in tree covers [CCL+23, CCL+24], we can lift this tradeoff for planar and minor-free metrics. Theorem 1.6. Let k be an even integer and let ϵ ∈(0, 1/6) be an arbitrary parameter. Then, for every positive integer n, every n-point Kh-minor-free metric admits a (1 + ϵ)-spanner with hop-diameter k and arboricity O(ϵ−3 ·2 hO(h)/ϵ ·αk/2+1(n)). Furthermore, if the metric is planar, one can construct such a spanner with arboricity O(ϵ−3 · αk/2+1(n)). Using the results in [BFN19, MN07], we obtain the following theorem for spanners of general metrics. Theorem 1.7. Let k be an even integer, r ≥1 be an arbitrary parameter. Then, for every positive integer n, every n-point metric MT admits an O(n1/r log1−1/r n)-spanner with hop- diameter k and arboricity O(r · αk/2+1(n)). Alternatively, MT admits an O(r)-spanner with hop-diameter k and arboricity O(r · n1/r · αk/2+1(n)). In Section 5, we use the construction from Theorem 1.1 to devise a constant-hop rout- ing scheme for tree metrics with stretch 1 and O(log2 n/ log log n) bits per vertex. We note that our routing algorithm operates on top of a network with hop-diameter 3 and treewidth O(log n/ log log n). Theorem 1.8. For every n and every n-vertex tree T, there is a 3-hop routing scheme in the fixed-port model for the metric MT induced by T with stretch 1 that uses O(log2 n/ log log n) bits per vertex. This answers in the affirmative Question 3, which is the main open question from [KLMS22]. We show that the bound from Theorem 1.8 on memory per vertex is the best one could get in tree metrics, regardless of the number of hops! In particular, we prove the following theorem. Theorem 1.9. There is an infinite family of trees Tn, n > 0, such that any labeled fixed-port routing scheme with stretch 1 on a metric induced by Tn has at least one vertex with total memory of Ω(log2 n/ log log n) bits. Using known tree cover constructions, we obtain a 3-hop (1 + ϵ)-stretch routing in doubling metrics. Given a tree cover, we construct a routing scheme on top of each of the trees in the cover. The key challenge is in obtaining a mechanism for efficiently (without increasing the memory usage) determining which tree to use for routing between any two given source and target points. We focus here on doubling metrics, where such a mechanism is already given [CCL+25], and we use it as a black-box to derive our result. However, this is a general framework that can be applied to any other metric family that admits a tree cover construction, provided that one has the mechanism to efficiently determine the right tree on top of which the routing proceeds. Theorem 1.10. For every n and every n-point metric with doubling dimension d, there is a 3-hop routing scheme with stretch (1 + ϵ) that uses Oϵ,d(log2 n/ log log n) bits per vertex. This result provides the first routing scheme in Euclidean and doubling metrics, where the number of hops is o(log n), let alone as small as 3, and the labels consist of o(log2 n) bits. 5 1.2 A natural limitation of low-treewidth shortcuttings and spanners Theorem 1.1 provides an upper bound tradeoff between the hop-diameter and treewidth of shortcuttings for trees. Low-treewidth shortcuttings do not exist in most other basic graph families, such as planar graphs and Euclidean and doubling metrics, and this limitation holds even regardless of any hop-diameter bound. Indeed, the √n by √n unweighted grid is a planar graph of treewidth √n, and any shortcutting of the grid must include the entire grid. Similarly, for the doubling metric induced by the grid, as well as for the Euclidean space induced by a 2-dimensional grid point set, any shortcutting must contain the entire grid. In fact, a similar limitation applies even when considering low-treewidth spanners of stretch larger than 1 (again even regardless of their hop-diameter). In particular, it was observed in [DFG08] that for any planar graph of treewidth k, any t-spanner must have treewidth Ω(k/t), and a similar lower bound on the spanner treewidth was extended to the families of bounded genus and apex- minor-free graphs. Building on these lower bounds, fixed parameter tractability results for these graph families were given in [DFG08] (refer also to [FGvL11] for results for bounded degree graphs). Low-treewidth Euclidean spanners were studied by Buchin et al. [BRS25], who showed that any set of n points in the Euclidean plane admits a t-spanner of treewidth O(n/t), and this tradeoff between stretch and treewidth is asymptotically optimal. Corneil et al. [CDKX25] showed that for any constants w ≥2 and c ≥1 there exist graphs of treewidth w, such that no spanning subgraph of treewidth w −1 can be an additive c-spanner of such a graph. 2 Preliminaries Treewidth. Given a graph G, let V (G) and E(G) be the vertex and edge set of G. A tree decomposition of G is a pair (X, T) where T is a tree and X = {X1, X2, . . . , Xl} is a set of subsets of V (G), called bags, associated with nodes in T such that: • Every vertex in V (G) is in at least one bag in X. • For every edge e in E(G), there is at least one bag containing both endpoints of e. • For every vertex u in V (G), the bags containing u in X inducing a subtree of T. The width of (X, T) is maxXi∈X |Xi| −1. The treewidth of G is the minimum width over all the tree decomposition of G. The treewidth of a graph is closely related to the size of the complete graph minors it contains. Recall that a minor of a graph G is any graph obtained from G by a sequence of vertex deletions, edge deletions, and edge contractions. Let Kh denote the complete graph on h vertices. It is widely known that treewidth is monotone under taking minors. Then, we obtain the following fundamental fact: Fact 2.1. If a graph G contains Kh as a minor, then the treewidth of G is at least h −1. Arboricity. The arboricity of a graph G measures how sparse its subgraphs can be. Formally, the arboricity of G, denoted by arb(G), is defined as arb(G) = max H⊆G, |V (H)|≥2  |E(H)| |V (H)| −1  , where the maximum is taken over all subgraphs H of G with at least two vertices. Nash-Williams proved that the arboricity equal to the minimum number of forests that G can decomposed into [NW64]. 6 Spanner. A t-spanner of a graph G is a spanning subgraph H that approximately preserves distances between all pairs of vertices. Formally, for every u, v ∈V (G), dH(u, v) ≤t · dG(u, v). A t-spanner of a metric space (X, δ) is a t spanner of  X, X 2  , δ  . A spanner of a metric space is often known as geometric spanner. Tree cover. A tree cover of a graph G with stretch t is a collection of spanning trees T of G such that, for every u, v ∈V (G): • The distance between u and v in any tree T ∈T is at least the distance in G, i.e., dT (u, v) ≥dG(u, v). • There exists a tree T ∈T such that dT (u, v) ≤t · dG(u, v). Similar to geometric spanner, a tree cover of a metric space (X, δ) is a tree cover of the complete graph  X, X 2  , δ  . Compact routing scheme. A routing scheme is a distributed algorithm that, given a packet with a designated source and destination, specifies how the packet is forwarded along the edges of the network. Each node in the network has its own routing table, which stores local routing information, as well as a unique label. During a preprocessing phase, the network is initialized so that each node is assigned a routing table and a label. In the labeled model, labels are chosen by the designer (typically of size poly(log n)), while in the name-independent model, labels are chosen adversarially. Each packet carries a header containing the label of the destination and possibly additional auxiliary information. When forwarding a packet destined for vertex v, a node u consults its routing table together with the label of v to determine the outgoing edge (specified by a port number) along which the packet should be sent. In the designer port model, port numbers are assigned during preprocessing, whereas in the fixed port model, port numbers are assigned adversarially. This forwarding process continues until the packet reaches its destination. A routing scheme has stretch t if, for every source–destination pair, the path length taken by the scheme is at most t times the length of a shortest path in the network. In this paper, we consider the underlying network to be a metric space. The algorithm first chooses an overlay network on which routing is performed. The main objective is to minimize the size of the routing tables stored at each vertex. Ackermann functions. Following standard notions (e.g., [NS07, AS24]), we introduce the follwing Ackermann function. Definition 2.1 (Ackermann functions). For all k ≥0, the functions A(k, n) and B(k, n) are defined as follows: A(0, n) := 2n, for all n ≥0, A(k, n) := ( 1 if k ≥1 and n = 0, A(k −1, A(k, n −1)) if k ≥1 and n ≥1, B(0, n) := n2, for all n ≥0, B(k, n) := ( 2 if k ≥1 and n = 0, B(k −1, B(k, n −1)) if k ≥1 and n ≥1. Then, we have the following definition of the inverse Ackermann function: 7 Definition 2.2 (Inverse Ackermann function). For all k ≥0, the function αk(n) is defined as follows: α2k(n) := min{ s ≥0 : A(k, s) ≥n }, for all n ≥0, α2k+1(n) := min{ s ≥0 : B(k, s) ≥n }, for all n ≥0. One can easily see that α0(n) = ⌈n/2⌉, α1(n) = ⌈√n⌉, α2(n) = ⌈log n⌉, α3(n) = ⌈log log n⌉, α4(n) = log∗n, α5(n) = ⌊1 2 log∗n⌋, etc. 3 Bounded treewidth tree covers with small hop-diameter In this section, we show tight tradeoff between treewidth and hop-diameter for 1-spanners of tree metrics. In particular, the upper bound (Theorem 1.1) is proved in Section 3.1 and the matching lower bound (Theorem 1.2) is proved in Section 3.2. The following two claims are used in the proofs of both theorems. Claim 3.1. There is an absolute constant γ such that for α ∈{0, 1}, every integer k ≥4, every x > 1 where the expression is defined, it holds 2 k < x2/k − x −  k k −2 (k−2)/2 · x(k−2)/k −α !2/k < γ k (1) Proof. We rewrite the expression as follows. x2/k  1 − 1 −  k k −2 (k−2)/2 · x−2/k −αx−1 !2/k  (2) Using Maclaurin expansion, we have that (1 + y)2/k = 1 + 2 ky + 2−k k2 · (1 + ζ) 2 k −2 · y2, where ζ is a number between 0 and y. We set y = −  k k−2 (k−2)/2 · x−2/k −αx−1. 1 −  k k −2 (k−2)/2 · x−2/k −αx−1 !2/k = (3) 1 −2 k  k k −2 (k−2)/2 x−2/k + αx−1 ! −k −2 k2 (1 + ζ) 2 k −2  k k −2 (k−2)/2 · x−2/k + αx−1 !2 (4) Plugging Equation (3) into Equation (2), we obtain the following. x2/k  1 − 1 −  k k −2 (k−2)/2 · x−2/k −αx−1 !2/k = 2 k  k k −2 (k−2)/2 + αx−(k−2)/k ! + k −2 k2 (1 + ζ) 2 k −2  k k −2 (k−2)/2 · x−1/k + αx−(k−1)/k !2 The lower bound in Equation (1) holds because −1 < y < ζ < 0 and x > 1. Next we prove the upper bound. 2 k  k k −2 (k−2)/2 + αx−(k−2)/k ! + k −2 k2 (1 + ζ) 2 k −2  k k −2 (k−2)/2 · x−1/k + αx−(k−1)/k !2 < 2(e + x−(k−2)/k) + (ex−1/k + x−(k−1)/k)2 k 8 The right-hand side is decreasing with x in the whole domain and we can upper bound it by taking x = 1. 2(e + x−(k−2)/k) + (ex−1/k + x−(k−1)/k)2 k < (e + 1)(e + 3) k Letting γ = (e + 1)(e + 3), the upper bound from Equation (1) follows. Claim 3.2. For every 3 ≤k ≤ 2 ln(2e) ln log n, it holds that ( k k−2)(k−2)/2(log n)(k−2)/k ≤(log n)/2. Proof. We have that k ≤ 2 ln(2e) ln log n. Rearranging the last inequality, we have that e(log n)(k−2)/k ≤ (log n)/2. The proof is completed by observing that ( k k−2)(k−2)/2 is monotonically increasing for k ≥3 and limk→∞( k k−2)(k−2)/2 = e. Claim 3.3. For every x ≥1 and k ≥4, x √ (k−2)/k ≤x −x ln x k + x ln2 x k2 Proof. Let p := 1/k. Then, x √ (k−2)/k = x √1−2p. Using Taylor series around p = 0, we have that x √1−2p = x −px ln x + R(p). The term R(p) has the following form for 0 < ξ < p. R(p) = p2 2 · x √1−2ξ ln x  ln x 1 −2ξ −(1 −2ξ)−3/2  ≤p2 2 · x √1−2ξ ln x · ln x 1 −2ξ ≤p2x ln2 x To finish the proof, we replace p by 1/k. 3.1 Upper bound In this section we prove Theorem 1.1. Theorem 1.1. For every n ≥1 and every k = O(log log n), every n-vertex tree admits a shortcutting with hop-diameter k and treewidth O(k log2/k n) for even k and O(k( log n log log n)2/(k−1)) for odd k ≥3. The following lemma will be used in proving the theorem. Lemma 3.1 (Cf. Lemma 1 in [FL22b]). Given a parameter ℓ∈N and an n-vertex tree T, there is a set X of at most 2n ℓ+1 −1 vertices such that every connected component C of T \ X is of size at most ℓand has at most 2 outgoing edges towards X. Furthermore, if C has outgoing edges towards x, y ∈X, then necessarily x is an ancestor of y, or vice versa. 3.1.1 Hop-diameter 2 Lemma 3.2. For every tree metric MT = (T, δT ) induced by a tree T there is a 1-spanner H2 with hop-diameter 2 and treewidth O(log n). Proof. Consider the following recursive construction due to [Sol13, AS24, NS07] which produces a set of edges E2 of H2. Take a centroid vertex v of T and add an edge between v and every other vertex of T to E2. Recurse on each subtree of T \ v. Stop whenever T is a singleton. Let T1, . . . , Tg be the tree decompositions of spanners constructed for the subtrees in T \ v. Create a new bag B containing only the vertex v and add v to every bag in each T1, . . . , Tg. The tree decomposition of H2 is obtained by connecting B to the roots of each of T1, . . . , Tg. The treewidth satisfies recurrence W2(1) = 0 and W2(n) = W2(n/2) + 1, which has solution W2(n) = O(log n). Consider any two vertices u, v ∈T. Let w be the centroid vertex used in the last recursive call where u and v belonged to the same tree. By construction, H2 contains edges (w, u) and (w, v), meaning that there is a 2-hop path between u and v. Vertex w is contained on the path between u and v in T, meaning that the stretch of this path is 1. 9 3.1.2 Hop-diameter 3 Lemma 3.3. For every tree metric MT = (T, δT ) induced by a tree T there is a 1-spanner H3 with hop-diameter 3 and treewidth O(log n/ log log n). Proof. Let ℓ3 = log n/ log log n. This value will not change across different recursive levels when the subtree sizes shrink. Consider the following recursive construction for constructing the edge set of E3 of H3. Let X be a set of vertices for T as in Lemma 3.1 with parameter ℓ= n/ℓ3 so that |X| = O(ℓ3). Connect the vertices of X by a clique and add those edges to E3. Next, do the following for every subtree T ′ in T \ X. Let u and v be two vertices from X to which T ′ has outgoing edges. Connect u and v to every vertex in T ′ and add the edges to E3. Proceed recursively with T ′. The base case occurs whenever the size of the tree is at most ℓ3. In the base case, we connect the vertices by a clique. Let T1, . . . , Tg be the tree decompositions of spanners constructed for the trees in T \ X. For every Ti, let u and v be the two vertices from X adjacent to the corresponding subtree Wi in T \ X. (Lemma 3.1 guarantees that there is at most two such vertices; it is possible that u = v.) Add u and v to each bag in Ti. Construct a new bag B containing all the vertices in X and connect it to the roots of each T1, . . . , Tg. The treewidth of H3 satisfies W3(n) ≤n for n ≤ℓ3 and W3(n) ≤W3(n/ℓ3) + 2 for n > ℓ3. Recall that ℓ3 is fixed and does not change across different levels of recursion. The recurrence satisfies W3(n) = O(log n/ log log n). Consider any two vertices u, v ∈T. Let X be the set of vertices in the last recursive call when u and v were in the same tree. If u and v are considered in the same base case, then there is a direct edge between them. Otherwise, let Wu (resp., Wv) be the subtree in T \ X and let u′ (resp., v′) be the vertex in X that is incident on Wu (resp., Wv). By construction, H3 contains edges (u, u′), (v, v′), and (u′, v′). (The cases where u′ = u or v′ = v are handled similarly.) 3.1.3 Hop-diameter k ≥4 Lemma 3.4. For every tree metric MT = (T, δT ) induced by a tree T and every k ≥4 there is a 1-spanner Hk with hop-diameter k and treewidth O(k log2/k n) for even values of k and O(k( log n log log n)2/(k−1)) for odd values of k. Proof. Let ℓk be such that log ℓk = ( k k−2)(k−2)/2(log n)(k−2)/k for even values of k and log ℓk = ( k k−2)(k−2)/2(log n/ log log n)(k−2)/k for odd values. By Claim 3.2, we have that ℓk ≤√n. Consider the following recursive construction for constructing the edge set of Ek of Hk. Let X be a set of vertices for T as in Lemma 3.1 such that |X| = O(ℓk) and every component of T \ X has size at most n/ℓk. Do the following for every subtree T ′ of T \ X. Let u and v be two vertices from X to which T ′ has outgoing edges. Connect u and v to every vertex in T ′ and add the edges to Ek. Proceed recursively with T ′. The base case occurs whenever the size of the tree is at most ℓk. In the base case, we connect the vertices of the considered tree using the construction with hop-diameter k −2. To interconnect the vertices in X, we construct an auxiliary tree TX; use the recursive construction with parameter k −2 on TX and add all these edges to Hk. This concludes the description of Hk. Next, we analyze the treewidth of Hk. Let T1, . . . , Tg be the tree decompositions of spanners constructed for the trees in T \ X. For every Ti, let u and v be the two vertices from X adjacent to the corresponding subtree Wi in T \ X. (Lemma 3.1 guarantees that there is at most two such vertices; it is possible that u = v.) Add u and v to each bag in Ti. Let TX be the tree decomposition of TX, where TX is defined as in the previous paragraph. Since TX is a valid tree decomposition and TX contains an edge (u, v), then TX contains a bag Lu,v 10 with both u and v. Connect the root of T ′ to Lu,v. This concludes the description of the tree decomposition T of T. The treewidth of T satisfies the recurrence Wk(n) = Wk−2(n) for n ≤ℓk and Wk(n) ≤max(Wk−2(ℓk), Wk(n/ℓk) + 2) otherwise. We show that the recurrence satisfies Wk(n) ≤k log2/k n for even values of k. (The proof for odd values is similar.) For the base case we use n ≤ℓk, where Wk(n) ≤Wk−2(n). For the induction step, we assume that the hypothesis holds for all the values smaller than n. First, note that Wk−2(ℓk) ≤(k −2)(log ℓk)2/(k−2) = k(log n)2/k. Wk(n) ≤max(k(log n)2/k, Wk(n/ℓk) + 2) ≤max(k(log n)2/k, k(log n −log ℓk)2/k + 2) ≤max  k(log n)2/k, k log n −  k k −2 (k−2)/2 · (log n)(k−2)/k !2/k + 2   ≤max  k(log n)2/k, k(log n)2/k The last inequality follows from the left-hand side part of Equation (1) by setting x = log n and α = 0. To argue that Hk is a 1-spanner with hop-diameter k, consider the last step of the construc- tion where both u and v were in the same tree T. If u and v were considered in the same base case, then T is equipped with a recursive construction with parameter k−2. By induction, there is a 1-spanner path between them with hop-diameter k −2. Otherwise, let Wu (resp., Wv) be the subtree in T \ X and let u′ (resp., v′) be the vertex in X that is incident on Wu (resp., Wv). By construction, there is a 1-spanner path between u′ and v′ with at most k −2 hops. (The cases where u′ = u or v′ = v are handled similarly.) 3.2 Lower bound We show the treewidth lower bound for 1-spanner of the uniform line metric Ln = {1, 2, . . . , n} with hop-diameter k. Due to the inductive nature of our argument, we prove a stronger version of the statement, which considers 1-spanners that could potentially use points outside of the given line metric. Theorem 1.2. For every n ≥1, every shortcutting with hop-diameter k for an n-point path must have treewidth: • Ω(k log2/k n) for even k and Ω(k( log n log log n)2/(k−1)) for odd k ≥3, whenever k ≤ 2 ln(2e) ln log n; • Ω((log log n)2/k) whenever k > 2 ln(2e) ln log n. We do so by arguing that any 1-spanner of Ln has a large minor. It is well known that a graph with Kt as a minor has treewidth at least t −1. We prove lower bounds for even k; the lower bound for odd k can be shown using the same argument. 3.2.1 Hop-diameter 2 Lemma 3.5. For every n ≥1 and every n-vertex line metric Ln, every 1-spanner with hop- diameter 2 has K⌊log n⌋+1 as a minor. Proof. We prove the claim by complete induction over n. For the base case, we use n = 1, where the claim holds vacuously. 11 Let H be a 1-spanner for Ln with hop-diameter 2. Split Ln into two consecutive parts, L1 and L2, of sizes ⌊n/2⌋and ⌈n/2⌉, respectively. Let H1 and H2 be subgraphs of H induced on L1 and L2, respectively. From the induction hypothesis, H1 and H2 have Klog ⌊n/2⌋and Klog ⌈n/2⌉ as minors, respectively. Consider the case where every point of L1 has an edge in H to some point in L2. Then Klog ⌊n/2⌋∪{H2} induces a clique minor of size log⌊n/2⌋+ 1. Consider the complementary case where L1 has a point p that does not have an edge in H to any point in L2. Then, every point in L2 has a neighbor in L1 because H has hop-diameter 2. Thus, Klog ⌈n/2⌉∪{H1} induces a clique minor of size log⌈n/2⌉+ 1. Hence, the minor size satisfies recurrence W2(n) = W2(⌊n/2⌋) + 1, with a base case W2(1) = 1. The solution is given by W2(n) = ⌊log n⌋+ 1. 3.2.2 Hop-diameter k ≥4 We give a proof for even values of k such that 4 ≤k ≤ 2 ln(2e) ln log n in Lemma 3.6. The proof for odd values is analogous. The proof for k > 2 ln(2e) ln log n is given in Lemma 3.7. Lemma 3.6. For every n ≥1, every even 4 ≤k ≤ 2 ln(2e) ln log n, and every n-vertex line metric L, every 1-spanner with hop-diameter k has treewidth at least c1k log2/k n −1, where c1 is an absolute constant. Proof. Let ℓk be such that log ℓk = ( k k−2)(k−2)/2(log n)(k−2)/k for even values of k. (The proof for odd values is similar. There, we choose ℓk so that log ℓk = ( k k−2)(k−2)/2(log n/ log log n)(k−2)/k.) By Claim 3.2, we have that ℓk ≤√n, whenever k ≤ 2 ln(2e) ln log n. Split L into consecutive sets of points L1, L2, . . . , Lℓk of size ⌊n/ℓk⌋each and ignore the remaining points. Let H be a 1-spanner with hop-diameter k for Ln. Our goal is to show that the size of a clique minor of H can be lower bounded by the following recurrence. Wk(n) ≥min(Wk−2(ℓk), Wk(n/(2ℓk)) + 1) and Wk(1) = 1 (5) We prove the statement by complete induction over n and k. For the base case, we take n = 1, and Wk(1) = 1 > c1k log2/k n −1. We say that a point in Li is global if it has an edge to a point outside Li and non-global otherwise. We say that Li is global if all of its points are global and non-global otherwise. We consider two complementary cases as follows. Case 1: Every Li is non-global. For every Li and Lj the path between a non-global point in Li and a non-global point in Lj must have the first (resp., last) edge inside Li (resp., Lj). Let H′ be obtained from H by contracting each Li into a single vertex. (Clearly, H[Li] is connected.) Let L′ be the line metric obtained from Ln by contracting every Li into a single point. Then H′ is a (k −2)-hop spanner of L′ with stretch 1. Thus, H′ has a minor of size Wk−2(ℓk) ≥c1(k −2) log2/(k−2) ℓk −1 = c1k log2/k n −1. Case 2: Some Li is global. Let {Ll, Lr} = L \ Li, so that Ll (resp., Lr) is on the left (resp., right) of Li. (Possibly Ll = ∅or Lr = ∅.) At least |Li|/2 points in Li have edges to either Ll or Lr. Without loss of generality, we assume the former. Let L′ i ⊆Li be the subset of points that have edges to Ll and let Hi be the subgraph of H restricted to preserving distances in Li. Inductively, Hi has a clique minor of size at least Wk(n/(2ℓk)). (Since Hi is a 1-spanner, it does not include any point outside of Li.) Then Hi and Ll are vertex-disjoint (because we are considering 1-spanners) and hence their union has a clique minor of size Wk(n/(2ℓk))+1. Thus, Wk(n) satisfies eq. (5), which we lower bound next. 12 Wk(n) ≥min(c1k(log n)2/k −1, Wk(n/(2ℓk)) + 1) ≥min(c1k(log n)2/k −1, c1k(log n −log ℓk −1)2/k) ≥min  c1k(log n)2/k −1, c1k log n −  k k −2 (k−2)/2 · (log n)(k−2)/k −1 !2/k  ≥min  c1k(log n)2/k −1, c1k(log n)2/k −1  The last inequality follows from Equation (1) by replacing x = log n and α = 1. Lemma 3.7. For every n ≥1, every k > 2 ln(2e) ln log n, every 1-spanner with hop-diameter k for an n-vertex line metric has treewidth at least c2(log log n)2/k, for an absolute constant c2. Proof. We set ℓk so that log log ℓk = q k−2 k log log n. (We use log(·) := log2(·).) For the clarity of exposition, we ignore the rounding issues. We note that 1 ≤ℓk ≤n. Using the same argument as in Lemma 3.6, we have Wk(n) ≥min(Wk−2(ℓk), Wk(n/(2ℓk)) + 1). We prove the lemma by induction, where the base case is Lemma 3.6 whenever k > 2 ln(2e) ln log n. Our goal is to prove that Wk−2(ℓk) ≥c2(log log n)2/k and Wk(n/(2ℓk))+1 ≥c2(log log n)2/k. For the first inequality we distinguish two cases. If k −2 ≤ 2 ln(2e) ln log(ℓk), then by Lemma 3.6 we have Wk−2(ℓk) ≥c1(k −2) log 2 k−2 ℓk −1 ≥c1(k −2) log ln(2e) ln log ℓk ℓk −1 = 2ec1(k −2) −1 ≥ec1k −1 ≥ec1 ·  2 ln(2e) 2 (ln log ℓk)2 k −1 ≥c2 (log log ℓk)2 k −2 The penultimate inequality holds because k ≥ 2 ln(2e) ln log(n) ≥ 2 ln(2e) ln log(ℓk). The last in- equality holds for a proper choice of constant c2. If k −2 > 2 ln(2e) ln log(ℓk), we have Wk−2(ℓk) ≥ c2 (log log ℓk)2 k−2 by the induction hypothesis. Hence, in both cases, we have: Wk−2(ℓk) ≥c2 · (log log ℓk)2 k −2 = c2 · q k−2 k log log n 2 k −2 = c2 · (log log n)2 k For the second inequality, let x = log n. We have log ℓk = x √ (k−2)/k. Since n 2ℓk < n, we have that k > 2 ln(2e) ln log n 2ℓk and the induction hypothesis gives the following. Wk  n 2ℓk  + 1 ≥ c2  log log  n 2ℓk 2 k + 1 = c2 log2  x −x √ (k−2)/k −1  k + 1 To show that the right-hand side is at least c2 log2(x)/k, it suffices to show the following: log2  x −x √ (k−2)/k −1  + k c2 ≥log2 x (6) 13 From Claim 3.3 we have x √ (k−2)/k ≤x −x ln x k + x ln2 x k2 . x −x √ (k−2)/k −1 ≥x −  x −x ln x k + x ln2 x k2  −1 = x ln x k  1 −ln x k  −1 > x ln x 10k −1 The last inequality holds because k > 2 ln(2e) ln(x). We next consider two cases. If x ln x 10k < 2, then k c2 > x ln x 20c2 ≥log2 x and Equation (6) is proved. Otherwise, we proceed as follows. log2 x ln x 10k −1  + k c2 ≥log2 x ln x 20k  + k c2 =  log x + log ln x 20k 2 + k c2 = log2 x + 2(log x) log ln x 20k + log2 ln x 20k + k c2 ≥log2 x The last inequality holds for any c2 ≤1/10. 4 Bounded arboricity tree covers Throughout this section, we use the following lemma. Lemma 4.1. If every edge of a graph G = (V, E) can be oriented such that the maximum in-degree of every vertex is at most d, then the arboricity of G is at most d + 1. 4.1 Line metrics In this section, we show a construction for line metrics (Theorem 1.4). We shall use a modifica- tion of the following well-known result. Theorem 4.1 (Cf. [AS24, BTS94, Sol13]). For every n ≥2 and k ≥2, every n-point tree metric admits a 1-spanner with hop-diameter k and O(nαk(n)) edges. We next state a slightly modified version of the previous theorem. The first statement concerns hop-diameter 1. Lemma 4.2. Let n ≥2 be an arbitrary integer. Let L be a line metric induced by a set of n points on a line so that between every two points there is n Steiner points. Let S denote the set of Steiner points. Then, L admits a Steiner 1-spanner with hop-diameter 2 such that the Steiner points belong to S and every vertex in L ∪S has a constant in-degree. Proof. Interconnect the vertices in L by a clique. Consider an arbitrary clique edge (u, v) and split it into two using a Steiner point w. Orient the edges (u, w) and (w, v) into w. By using a fresh Steiner point for every clique edge, we obtain the guarantees from the statement. Next, we state the general version. 14 Lemma 4.3. Let n ≥2 and k ≥2 be two arbitrary integers. Let L be a line metric induced by a set of n points on a line so that between every two points there is αk(n) Steiner points. Let S denote the set of Steiner points. Then, L admits a Steiner 1-spanner with hop-diameter 2k such that the Steiner points belong to S and every vertex in L ∪S has a constant in-degree. Proof. We prove the lemma by induction over k. For k = 2, we take a central vertex c in L and connect it to every other point in L; orient the edges outwards from c. Proceed recursively with the two halves. This way we obtain a 1-spanner for L with hop-diameter 2. Denote by E the edge set of this spanner. The depth of the recursion in the construction is O(log n), and the size of S is nα2(n) = n log n. Every edge in E has in-degree 1 for every recursion level. We can split every such edge into two using a fresh Steiner point for each recursion level. This concludes the proof for k = 2. Consider now an arbitrary k. Divide L into intervals of size αk−2(n) using n/αk−2(n) cut vertices. Denote by C the set of cut vertices and invoke the induction hypothesis on C with parameter k −2. Let E′ be the obtained set of edges. Let EC be obtained by connecting every cut vertex to every point in the two neighboring intervals. Let this edge set be EC. Proceed recursively with parameter k on each of the intervals. To analyze the in-degree, we observe that the depth of the recursion with parameter k is O(αk(n)), which coincides with the number of Steiner vertices between every two points in L. One level of recursion contributes a constant in-degree to each vertex in the construction. This means that we can split all such vertices into two and use a fresh Steiner point at each recursion level. This concludes the proof. We restate Theorem 1.4 here for convenience. Theorem 1.4. For every n ≥1 and every even k ≥2, every n-point path admits a shortcutting with hop-diameter k and arboricity O(αk/2+1(n)). Proof. Let Ln be an arbitrary line metric. For an integer k′ ≥2, we describe a construction of a 1-spanner H for Ln with hop-diameter 4k′ −2 and arboricity O(α2k′(n)). Consider a set of n′ = n/α2k′−2(n) equally-spaced cut vertices dividing Ln into intervals of size α2k′−2(n). To construct the 1-spanner H, we connect every cut vertex to all the vertices in its two neighboring intervals. Denote the corresponding edge by EC. Let C be the set of cut vertices. Let E′ be obtained by invoking Lemma 4.3 with parameter 2k′ −2 on C using Ln as Steiner points. Proceed recursively with each of the intervals. To analyze the arboricity, we will show that the edges in EC and E′ can be oriented so that the in-degree of every vertex is constant. Orient every edge in EC so that it goes out of the corresponding cut vertex. Since every interval is adjacent to at most two cut vertices, the in-degree of every point with respect to EC is at most 2. By Lemma 4.3, the edges in E′ have a constant in-degree on Ln. In conclusion, EC and E′ contribute O(1) to the in- degree of each vertex in Ln. The number of recursion levels ℓ(n) satisfies the recurrence ℓ(n) = ℓ(α2k′−2(n)) + O(1), which has solution ℓ(n) = α2k′(n). Every recursion level contributes O(1) to the in-degree of vertices, meaning that the overall in-degree in H is O(α2k′(n)). The hop- diameter of H is 2 + 2 · (2k′ −2) = 4k′ −2. This concludes the description of a 1-spanner with hop-diameter 4k′ −2 and arboricity O(α2k′(n). Note that we have only shown how to get a tradeoff of hop-diameter 4k′ −2 and arboric- ity O(α2k′(n)). We could similarly get a construction with hop-diameter 4k′ and arboricity O(α2k′+1(n)) for a parameter k′ ≥1. Specifically, we divide Ln into intervals of size α2k′−1(n). The cut vertices are interconnected using Lemma 4.3 with hop-diameter 2k′ −1 (and Lemma 4.2 when k′ = 1). The hop-diameter is 2 + 2(2k′ −1) = 4k′ and the arboricity is O(α2k′+1(n)) using a similar argument. 15 The construction for ultrametric is a simple adaptation of the construction for line metrics, which we will show next. 4.2 Ultrametrics and doubling metrics A metric (X, dX) is an ultrametric if it satisfies a strong form of the triangle inequality: for every x, y, z, dX(x, z) ≤max{dX(x, y), dX(y, z)}. It is well known that an ultrametric can be represented as a hierarchical well-separated tree (HST). More precisely, an (s, ∆)-HST is a tree T where (i) each node v is associated with a label Γv such that Γv ≥s · Γu whenever u is a child of v and (iii) each internal node v has at most ∆children. Parameter s is called the separation while ∆is called the degree of the HST of the HST. Let L be the set of leaves of T. The labels of internal nodes in T induce a metric (L, dL) on the leaves, called leaf-metric, where for every two leaves u, v ∈L, dL(u, v) = Γlca(u,v) where lca is the lowest common ancestor of u and v. It is well-known, e.g., [BLMN04], that (L, dL) is an ultrametric, and that any ultrametric is isomorphic to the leaf-metric of an HST. Chan and Gupta [CG06] showed that any (1/ϵ, 2)-HST can be embedded into the line metric with (worst-case) distortion 1+O(ϵ). Therefore, by applying Theorem 1.4, we obtain a (1+O(ϵ))- spanner with hop diameter k and arboricity O(αk/2+1(n)) for (1/ϵ, 2)-HST. In our setting, we are interested in large-degree (k, ∆)-HST where s = 1/ϵ and ∆= poly(1/ϵ); the embedding result by Chan and Gupta [CG06] no longer holds for these HSTs. Instead, we directly apply our technique for the line metric to get a (1 + O(ϵ))-spanner with low-hop diameter. Lemma 4.4. Let ϵ ∈(0, 1), ∆> 0 be parameters, and k be an even positive integer. Then, any (1/ϵ, ∆)-HST with n leaves admits a (1 + O(ϵ))-spanner with hop-diameter k and arboricity O(αk/2+1(n)). Proof. Let T be the (1/ϵ, ∆)-HST and let MT be the metric induced by T. For an integer k′ ≥2, we describe a construction of a (1 + O(ϵ))-spanner for MT with hop-diameter 4k′ −2 and arboricity O(α2k′(n)). The construction is similar to that in Theorem 1.4. Let C be the set of internal nodes of T, called cut verices, such that the subtrees rooted at these nodes has size α2k′−2(n). The number of cut vertices is |C| ≤n/α2k′−2(n). First, connect every cut vertex to all of its descendants in T and let the corresponding set of edges be EC. Next, let E be the set of edges interconnecting the cut vertices using Theorem 4.1 with hop-diameter 2k′ −2 and O(n′α2k′−2(n′)) = O(n) edges. We construct a set E′ by subdividing every edge (u, v) ∈E into two edges using a vertex, say x, in the subtree rooted at u. The spanner H is obtained using the edges in EC and those in E′. Finally, the recursive construction is applied to each subtree rooted at a vertex in C. This concludes the description of the recursive construction of H. The same argument in Theorem 1.4 implies that the arboricity is O(α2k′(n)). The stretch is (1+O(ϵ)) since the path u →x →v between two cut vertices u and v has stretch (1+O(ϵ)). To construct a low-hop spanner with small arboricity for doubling metrics (Theorem 1.5), we will rely on the ultrametric cover by Filtser and Le [FL22a]. Following their notation, for a given metric space (X, dX), we say that a collection T of at most τ different (s, ∆)-HSTs such that (i) for every HST T ∈T , points in X are leaves in T, and (ii) for every two points x, y ∈X, dX(x, y) ≤dT (x, y) for every T ∈T , and there exists a tree Txy ∈T such that dX(x, y) ≤ρ · dTxy(x, y). Theorem 4.2 (Cf. Theorem 3.4 in [FL22a]). For every ϵ ∈(0, 1/6), every metric with doubling dimension d admits an (ϵ−O(d), 1 + O(ϵ), 1/ϵ, ϵ−O(d))-ultrametric cover. 16 Theorem 1.5. Let k be an even integer and let ϵ ∈(0, 1/6) be an arbitrary parameter. Then, for every positive integer n, every n-point metric with doubling dimension d admits a (1+ϵ)-spanner with hop-diameter k and arboricity ϵ−O(d)αk/2+1(n). Proof. Let T be the (ϵ−O(d), 1+O(ϵ), 1/ϵ, ϵ−O(d))-ultrametric cover in Theorem 4.2 for the input doubling metric. The theorem then follows by applying Lemma 4.4 to each of (1/ϵ, ϵ−O(d))-HST in T and taking the union of the resulting spanners. 4.3 General tree metrics Theorem 1.3. For every two integers n ≥1 and k ≥1 and every n-vertex tree T, there is a shortcutting with hop-diameter k and arboricity O(log12/(k+4) n). Moreover, when the height of the tree is h, then the arboricity is O(h6/(k+4)). Proof of Theorem 1.3 for height h. We first show how to prove the theorem for a tree T with height bounded by h. This construction gives the main ideas used for general trees. Let k′ be an arbitrary integer. We show a recursive construction with hop-diameter 2k′. In particular, we show how to shortcut T so that between every ancestor and descendant it is possible to go using k′ hops. The construction is the same as in Lemma 3.2: take the root of the tree, connect it to every descendant and proceed recursively with each of its children. By orienting the edges from the roots to descendants, we have that the in-degree of every vertex is bounded by h. Let A1(h) denote the obtained in-degree (and thus arboricity) of the shortcutting. We have that A1(h) = h. We next show the bound for an arbitrary k′ = 1 + 3g for an integer g ≥1. Let ℓbe a parameter to be set later. Consider the tree levels so that the root is at level 0. Designate as the cut vertices all the vertices at levels ℓ, 2ℓ, 3ℓ. . .. Denote the set of cut vertices by C. Consider the set S, consisting of all the parents of vertices in C. Let ECS be obtained by interconnecting all the vertices in C to their parents. Each such edge is oriented from a vertex in S towards the vertex in C. Next, connect every vertex in S to its first ℓ−1 ancestors until the occurrence of the first cut vertex; let the corresponding edge set be ES. Every such edge is oriented from the ancestors towards vertices in S. Let c ∈C be an arbitrary vertex at level d. Connect c to all of its descendants at levels d + 1, d + 2, . . . , d + ℓ−2, i.e., until the nest cut vertex, and orient these edges from c towards the descendants. The edge set EC is obtained by doing this for every vertex c in C. Use a recursive construction with parameter k′ −3 on the subtree of T induced by vertices in C. Finally, consider all the subtrees obtained by removing C and S from T and apply the recursive construction with parameter k′ on each of the subtrees. We next analyze the hop-diameter of the ancestor-descendant paths in this construction. Let u and v be two arbitrary vertices such that v is an ancestor of u. The path between u and v in the shortcutting is as follows. By construction, EC contains an edge between u and its ancestor cut vertex cu ∈C. Let du be the highest cut vertex that is ancestor of cu and a descendant of v. There is a path consisting of at most k′ −3 hops between cu and du. Let su ∈S be the parent of du. The edge (du, su) is in ECS. Finally, ES contains an edge (su, v). Clearly, the path consists of k′ hops. We next analyze the in-degree of vertices in T. Let Ak′(h) denote the in-degree of the construction with parameter k′. Then, Ak′(h) = ℓ−1 for the vertices in S, due to the orientation of the edges in ES. For the vertices in C, we have Ak′(h) = 1 + Ak′−3(h/ℓ), because the edges in ECS add one to in-degree of every vertex and the dominant term is due to the recursive call with parameter k′ −3. Finally, all the other vertices have Ak′(h) = 1 + Ak(ℓ−2), where the edges in EC contribute one to the in-degree and Ak(ℓ) is due to the recursive construction with 17 parameter k′. Putting everything together, we have the following recurrence . Ak′(h) = max(ℓ−1, 1 + Ak′−3(h/ℓ), 1 + Ak′(ℓ−2)) (7) We proceed to show inductively that for every k′ = 1 + 3g we have Ak′(h) ≤h1/(g+1). Since we have that Ak′(h) ≤h for every k′ ≥1, we can disregard the third term in Equation (7). Thus, we obtain the following simplified recurrence: Ak′(h) = max(ℓ, Ak′−3(h/ℓ)). Notice that we have replaced ℓ−1 by ℓin the first term since it does not affect the solution asymptotically. We proceed to solve the recurrence. Ak′(h) ≤max(ℓ, Ak′−3(h/ℓ)) ≤max(ℓ, (h/ℓ)1/g) induction hypothesis ≤max(h1/(g+1), (h1−1/(g+1))1/g) setting ℓ= h1/(g+1) ≤max(h1/(g+1), h1/(g+1)) = h1/(g+1) Thus Ak′(h) ≤h1/(g+1). In particular, we have that g = (k −1)/3 so the tradeoff is k′ versus arboricity h3/(k′+2). To get the tradeoff guaranteed in the statement, we observe that the hop-diameter is 2k′. Proof of Theorem 1.3 for general trees. We consider a heavy-light decomposition of T, constructed as follows. Start from the root r down the tree each time following the child with the largest subtree. The obtained path is called the heavy path rooted at r. Continue recursively with every child of the vertices of the heavy path. Let T ′ be obtained by contracting every heavy path in T into a single vertex. It is well-known that the height of T ′ is log n, where n is the number of vertices in T. We start by employing the shortcutting procedure for bounded height trees on T ′ with pa- rameter k′ and explain how to adapt it to T. Consider an arbitrary edge (u, v) in the shortcutting of T such that u is a descendant of v. Let Pu (resp., Pv) be the heavy path in T corresponding to u (resp., v). Add an edge between the root ru of Pu and its lowest ancestor on Pv. We do this for all the edges in the shortcutting of T ′. For every heavy path P in T, use the 4-hop construction with arboricity O(log log n) from Theorem 1.4. In addition, connect via a direct edge every vertex in p to the root of P. This concludes the description of the shortcutting for T. Next, we analyze the hop-diameter of the obtained construction. Let u and v be two arbitrary vertices in T and let Pu and Pv be the corresponding heavy paths in T. Denote by pu and pv the vertices in T ′ corresponding to Pu and Pv. Let pw be the LCA of pu and pv in T ′. From the construction we know that there is a k′-hop path between pu and pw and between pv and pw. Let (pv, pa) be the first edge on the path from pu to pw. The corresponding path in T goes from u to the parent of Pu and from the root of Pu to its lowest ancestor in Pa. We can similarly replace every edge on the path between Pu and Pw in T ′ by two edges in T. We handle analogously the path between Pv and Pw. The corresponding paths in T go from u to its lowest ancestor on Pw and from v to its lowest ancestor on Pw. Using the edges from the 4-hop construction on Pw, we join the two paths. The overall number of hops is 4k′ + 4. In particular, we achieve a hop-diameter of 4k′ + 4 with arboricity of O(log3/(k′+2) n). Letting k = 4k′ + 4, the arboricity is O(log12/(k+4) n). 18 5 Routing schemes In this section, we show the results for routing in tree metrics and doubling metrics. 5.1 Routing in tree metrics Theorem 1.8. For every n and every n-vertex tree T, there is a 3-hop routing scheme in the fixed-port model for the metric MT induced by T with stretch 1 that uses O(log2 n/ log log n) bits per vertex. Proof. Our routing scheme is constructed on top of a 1-spanner H3 of MT as described in Lemma 3.3. For a vertex u ∈T, let table(u) denote its routing table and label(u) its label. First, assign a unique identifier ID(u) ∈{1, . . . , n} to every vertex u in T and add it to table(u) and label(u). Equip the routing table and a label of every vertex u ∈T with an ancestor label anc(u) as in [AAK+06]. This adds O(log n) bits of memory per vertex. Using the ancestor labeling scheme from [AAK+06], we can determine, given two vertices u and v, whether they are in ancestor-descendant relationship, and if so, whether u is the ancestor of v or the vice-versa. Recall the recursive construction of H3 with parameter T as an input. Assign to each recursive call a unique integer rT . Let X be a set of vertices for T as in Lemma 3.1 so that |X| = log n/ log log n. (This parameter is fixed and does not change across different recursive calls.) The vertices of X are interconnected by a clique in H3. For every vertex in u ∈X, add to table(u) the information consisting of C(u) = ⟨rT , {⟨ID(v), port(u, v), anc(v)⟩| v ∈X \ {x}}⟩. The memory occupied per every vertex in X is O(log2 n/ log log n). (Note that the construction of H3 guarantees that every vertex belongs to such a clique exactly once across all the recursive calls, meaning that table(u) contains only one such C(u).) Let T ′ be a subtree in T \ X. Let u and v be two vertices from X to which T ′ has outgoing edges. For every vertex x ∈T ′, add to table(x) the following: ⟨rT , ID(u), port(x, u)⟩and ⟨rT , ID(v), port(x, v)⟩. Similarly, add to label(x) the following: ⟨rT , ID(u), port(u, x)⟩and ⟨rT , ID(v), port(v, x)⟩. This information takes (log n) bits per recursive call rT . The construction proceeds recursively with T ′; the number of recursive calls every vertex participates in is at most O(log n/ log log n). Next we describe the routing algorithm. Let u be the source and v be the destination. First, check if C(u) contains routing information leading directly to v. In this case, the algorithm outputs port(u, v) and the routing is complete. (This case happens when u and v are in the same clique during the construction.) Otherwise, go over table(u) and label(v) and find the last recursive call rT which is common to both u and v. Next, consider label(v) and the two entries consisting ⟨rT , ID(v1), port(v1, v)⟩and ⟨rT , ID(v2), port(v2, v)⟩, corresponding to rT . If v1 and v2 are in C(u), use anc(u), anc(v1), anc(v2), and anc(v) to decide whether to output port(u, v1) or port(u, v2). (This case happens when u, v1, and v2 are in X in the recursive call rT and v is not in it.) Finally, let ⟨rT , ID(u1), port(u, u1)⟩and ⟨rT , ID(u2), port(u, u2)⟩be the two entries corresponding to rT in table(u). Use anc(u) and anc(v) to decide whether to output port(u, u1) or port(u, u2). (This case happens when u is not in X.) This concludes the description of the routing algorithm. 5.2 Routing in doubling metrics We next show how to extend the routing result in tree metrics to metrics with doubling dimension d. In particular, we prove the following theorem. Theorem 1.10. For every n and every n-point metric with doubling dimension d, there is a 3-hop routing scheme with stretch (1 + ϵ) that uses Oϵ,d(log2 n/ log log n) bits per vertex. 19 Given a point set P with doubling dimension d, we first construct a tree cover, using the tree cover theorem from [CCL+25]. Theorem 5.1 ([CCL+25]). Given a point set P in a metric of constant doubling dimension d and any parameter ϵ ∈(0, 1), there exists a tree cover with stretch (1 + ϵ) and ϵ−˜O(d) trees. Furthermore, every tree in the tree cover has maximum degree bounded by ϵ−O(d). We use this specific tree cover theorem, since the authors also provide an algorithm for determining the “distance-preserving tree” given the labels of any two metric points. Lemma 5.1 ([CCL+25]). Let ϵ ∈(0, 1). Let T = {T1, . . . , Tk} be the tree cover for P constructed by Theorem 5.1, where k = ϵ−˜O(d). There is a way to assign ϵ−˜O(d) log n-bit labels to each point in P so that, given the labels of two vertices x and y, we can identify an index i such that tree Ti is a “distance-approximating tree” for u and v; that is, δTi(x, y) ≤(1+ϵ)δP (x, y). This decoding can be done in O(d · log(1/ϵ)) time. We equip each tree in the cover with the stretch-1 routing scheme from Theorem 1.8. This consumes overall ϵ−˜O(d) log2 n/ log log n bits per point in P. In addition, we add ϵ−˜O(d) log n-bit labels to each point in P as stated in Lemma 5.1. Given two points x, y ∈P, we first employ the algorithm from Lemma 5.1 to find the tree in which the routing should proceed. Then, the routing proceeds on that specific tree using the routing algorithm from Theorem 1.8. This concludes the description of the compact routing scheme for doubling metrics. 6 Lower bound for routing in tree metrics In this section, we prove the following theorem. Theorem 1.9. There is an infinite family of trees Tn, n > 0, such that any labeled fixed-port routing scheme with stretch 1 on a metric induced by Tn has at least one vertex with total memory of Ω(log2 n/ log log n) bits. Let T be an unweighted tree and MT = (V, (V × V ), dT ) be a metric induced by T = (V, E). The edges in (V ×V )\E are called Steiner edges. In this section we show that stretch-1 routing in tree metrics requires Ω(log2 n/ log log n) bits per tree vertex. Hard instances. We first describe the hard instances used in [FG02]. Let t, h, and d be positive integers and let T 0 be a complete t-ary rooted tree of height h + 1. Let T0 be a tree obtained by adding d −t −1 leaves at every internal vertex of T 0 and d −t leaves at the root. These added leaves are called dummy leaves. The number of vertices in T0 is n = (d−1)· th−1 t−1 +2. Note that T0 is uniquely defined by t, h, and d. Let T be a tree obtained from T 0 as follows. Consider an internal vertex u of T 0 at height i, where the root has height h and the leaves have height 0. Let qi = ti−1 t−1 . (The choice of qi coincides with the number of non-dummy vertices in a subtree rooted at any child of u.) Add (d −t −1) · qi dummy leaves to u if it is an internal node and (d−t)qi if it is the root. Note that both T0 and T are constructed based on T 0 and there is a correspondence between non-dummy vertices of T and the non-dummy vertices of T0. In what follows, we shall use the same letter to denote some non-dummy vertex in T and the corresponding non-dummy vertex in T0. Claim 6.1. The number of vertices in T is O(n log2 n). 20 Proof. |T| = |T 0| + qh + h X i=1 th−i · (d −t −1) · qi = th+1 −1 t −1 + th −1 t −1 + d −t −1 t −1 · h X i=1 th−i · (ti −1) = th+1 −1 t −1 + th −1 t −1 + (d −t −1)(hth+1 −hth −th + 1) (t −1)2 In [FG02], d = th and t = h = ⌊(log √n)/ log log √n⌋, so that d = th ≤√n. We proceed to upper bound |T| as follows. |T| ≤2th+1 + dhth+1 ≤2√n log n + n log2 n = O(n log2 n) Using a reduction to the lower bound instances of [FG02], we will show that the memory requirement is Ω(log2 n′/ log log n′) = Ω(log2 n/ log log n). Reduction to relaxed routing. Let MT be a metric induced by T. Similarly to [FG02], we consider a restricted problem of relaxed routing in MT , where the destination vertex is a non- dummy vertex of MT and the source vertex is its ancestor. Our lower bound argument shows that relaxed routing in MT requires total memory of Ω(log2 n/ log log n) bits per vertex. Since every routing scheme in an instance MT is also a relaxed routing scheme in the same instance, our lower bound applies to routing in MT . Port numbering. In [FG02], the authors consider a family T of instances where all the trees are isomorphic to T0 and each instance correspond to a different port numbering of T0. We consider a family of instances T ′ where every metric is isomorphic to MT and there is a one-to- one correspondence between instances in T and those in T ′. Consider an instance ˆT0 from T . We proceed to explain the port numbering in the corresponding instance ˆ MT in T ′. Let u be an internal vertex of T at height i. Define the sets of edges Ej as follows: • For 1 ≤j ≤t, let Ej be the set of qi edges leading to the non-dummy descendants of u in the subtree of MT rooted at the jth child of u. • Partition the edges leading to dummy leaves adjacent to u into groups of size qi. Let Ej be the j-th group for t + 1 ≤j ≤d. Let p1, . . . , pd be the port numbers of the d neighbors of u in ˆT0. Note that p1, . . . , pd form a permutation of numbers in {1, . . . , d}. Define Bk as the set of integers in [(k −1)qi + 1, kqi]. For 1 ≤j ≤d, assign to Ej port numbers from Bpj arbitrarily. Assign all the other port numbers arbitrarily. This concludes the description of the port numbers in ˆ MT . The following observation provides the key property of the port assignments in ˆ MT . Observation 6.1. Let wi be a child of u and let pi ∈{1, . . . , d} be the port number in ˆT0 of the edge (u, wi), as seen from u. Every port number p of u in ˆ MT leading to a subtree rooted at wi satisfies ⌈p/qi⌉= pi. 21 Routing without header rewriting. Next, we show that header rewriting cannot reduce overall memory per vertex in ancestor-descendant routing. Consider an ancestor-descendant routing scheme R′ which routes on top of an instance ˆ MT in T ′. Let u be a source vertex and v a destination. Initially, the header contains only label(v). Let w be the first vertex on the routing path from u to v. Since R′ is a valid ancestor-descendant routing scheme and w is an ancestor of v, it is possible to route from w to v with w as a source and v as a destination. In this case, the routing algorithm commences at w and the header contains only label(v). Since the routing scheme has stretch 1, vertex w will never be visited again. In other words, rewriting the header at vertex u does not help in ancestor-descendant routing. Reduction to routing in trees. Let ˆT0 be an instance in T and let ˆ MT be the correspond- ing instance in T ′. Our goal is to define a transformation of an ancestor-descendant routing scheme R′ for ˆ MT into an ancestor-descendant routing scheme R for ˆT0, which uses additional O(log n′) = O(log n) bits per vertex when restricting to query pairs that exist in ˆT0. Consider an internal vertex u at height i in ˆ MT and its descendant (non-dummy vertex) v. Let table′(u) be the routing table of u and label′(v) be the label of v in R′. Define label(v) := label′(v) and let table(u) be a concatenation of table′(u) and a binary encoding of qi. The number of bits required to store qi is O(log n′) = O(log n). Let R(table(u), label(v)) := ⌈R′(table′(u),label′(v)) qi ⌉. This concludes the description of R. We argue that R is a valid routing scheme for T. It suffices to prove that R outputs the correct port number. Let p be the port number leading to the next vertex on the routing path from u down to v. We want to prove that R(table(u), label(v)) = p. From Observation 6.1, we know that ⌈R′(table′(u),label′(v)) qi ⌉= p′, where the p′ is the port number in ˆT0 leading to the child of u which is the root of the subtree where v belongs. Hence, the routing algorithm in ˆT0 proceeds at the correct child. In conclusion, we described a way to convert a routing scheme for ˆ MT into a routing scheme in ˆT0 which uses Ω(log n) additional bits. In [FG02] it is proved that T contains an instance in which some vertex requires Ω(log2 n/ log log n) bits of memory. This means that there is an instance in T ′ which requires Ω(log2 n/ log log n) = Ω(log2 n′/ log log n′) bits of memory. References [AAK+06] Serge Abiteboul, Stephen Alstrup, Haim Kaplan, Tova Milo, and Theis Rauhe. Com- pact labeling scheme for ancestor queries. SIAM J. Comput., 35(6):1295–1309, 2006. 19 [ADM+95] S. Arya, G. Das, D. M. Mount, J. S. Salowe, and M. Smid. Euclidean spanners: Short, thin, and lanky. In Proceedings of the Twenty-seventh Annual ACM Symposium on Theory of Computing, STOC ’95, pages 489–498, 1995. 1, 2, 5 [AS87] Noga Alon and Baruch Schieber. Optimal preprocessing for answering on-line prod- uct queries. Technical Report TR 71/87, Tel Aviv University, 1987. 1, 2, 4 [AS24] Noga Alon and Baruch Schieber. Optimal preprocessing for answering on-line prod- uct queries. CoRR, abs/2406.06321, 2024. 7, 9, 14 [BFN19] Yair Bartal, Arnold Filtser, and Ofer Neiman. On notions of distortion and an almost minimum spanning tree with constant average distortion. J. Comput. Syst. Sci., 105:116–129, 2019. preliminary version published in SODA 2016. 5 22 [BLMN04] Yair Bartal, Nathan Linial, Manor Mendel, and Assaf Naor. Some low distortion metric ramsey problems. Discrete & Computational Geometry, 33(1):27–41, July 2004. 16 [BRS25] Kevin Buchin, Carolin Rehs, and Torben Scheele. Geometric spanners of bounded tree-width. In SoCG, volume 332 of LIPIcs, pages 26:1–26:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2025. 6 [BTS94] Hans L. Bodlaender, Gerard Tel, and Nicola Santoro. Trade-offs in non-reversing diameter. Nord. J. Comput., 1(1):111–134, 1994. 1, 2, 14 [CCC+25] Hsien-Chih Chang, Vincent Cohen-Addad, Jonathan Conroy, Hung Le, Marcin Pilipczuk, and Michal Pilipczuk. Embedding planar graphs into graphs of treewidth O (log3 n ). In SODA, pages 88–123. SIAM, 2025. 2 [CCL+23] H. Chang, J. Conroy, H. Le, L. Milenković, S. Solomon, and C. Than. Covering planar metrics (and beyond): O(1) trees suffice. In The 64th Annual Symposium on Foundations of Computer Science, FOCS ‘23, pages 2231–2261, 2023. 5 [CCL+24] H. Chang, J. Conroy, H. Le, L. Milenković, S. Solomon, and C. Than. Shortcut partitions in minor-free graphs: Steiner point removal, distance oracles, tree covers, and more. In The 2024 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ‘24, page 5300–5331, 2024. 5 [CCL+25] Hsien-Chih Chang, Jonathan Conroy, Hung Le, Shay Solomon, and Cuong Than. Light tree covers, routing, and path-reporting oracles via spanning tree covers in doubling graphs. In STOC, pages 2257–2268. ACM, 2025. 5, 20 [CDKX25] Derek G. Corneil, Feodor F. Dragan, Ekkehard Köhler, and Yang Xiang. Lower bounds on collective additive spanners. CoRR, abs/2504.18508, 2025. 6 [CFKL20] Vincent Cohen-Addad, Arnold Filtser, Philip N. Klein, and Hung Le. On light spanners, low-treewidth embeddings and efficient traversing in minor-free graphs. In FOCS, pages 589–600. IEEE, 2020. 2 [CG06] H. T.-H. Chan and A. Gupta. Small hop-diameter sparse spanners for doubling metrics. In Proc. of 17th SODA, pages 70–78, 2006. 16 [Cha87] Bernard Chazelle. Computing on a free tree via complexity-preserving mappings. Algorithmica, 2(1):337–361, 1987. 1, 2 [CLPP23] Vincent Cohen-Addad, Hung Le, Marcin Pilipczuk, and Michal Pilipczuk. Planar and minor-free metrics embed into metrics of polylogarithmic treewidth with ex- pected multiplicative distortion arbitrarily close to 1. In FOCS, pages 2262–2277. IEEE, 2023. 2 [DFG08] Feodor F. Dragan, Fedor V. Fomin, and Petr A. Golovach. Spanners in sparse graphs. In ICALP (1), volume 5125 of Lecture Notes in Computer Science, pages 597–608. Springer, 2008. 6 [FG02] Pierre Fraigniaud and Cyril Gavoille. A space lower bound for routing in trees. In STACS, volume 2285 of Lecture Notes in Computer Science, pages 65–75. Springer, 2002. 20, 21, 22 23 [FGvL11] Fedor V. Fomin, Petr A. Golovach, and Erik Jan van Leeuwen. Spanners of bounded degree graphs. Inf. Process. Lett., 111(3):142–144, 2011. 6 [FL22a] Arnold Filtser and Hung Le. Locality-sensitive orderings and applications to reliable spanners. In STOC, pages 1066–1079. ACM, 2022. 16 [FL22b] Arnold Filtser and Hung Le. Low treewidth embeddings of planar and minor-free metrics. In FOCS, pages 1081–1092. IEEE, 2022. 1, 2, 3, 4, 9 [KLMS22] Omri Kahalon, Hung Le, Lazar Milenkovic, and Shay Solomon. Can’t see the forest for the trees: Navigating metric spaces by bounded hop-diameter spanners. In Alessia Milani and Philipp Woelfel, editors, Proc. PODC, pages 151–162. ACM, 2022. 1, 2, 3, 5 [Le23] H. Le. Shortcutting trees, 2023. https://minorfree.github.io/ tree-shortcutting/. 1, 3 [MN07] Manor Mendel and Assaf Naor. Ramsey partitions and proximity data structures. Journal of the European Mathematical Society, 9(2):253–275, 2007. 5 [NS07] G. Narasimhan and M. Smid. Geometric Spanner Networks. Cambridge University Press, 2007. 7, 9 [NW64] Crispin St. John Alvah Nash-Williams. Decomposition of finite graphs into forests. Journal of the London Mathematical Society, 1(1):12–12, 1964. 6 [SE14] Shay Solomon and Michael Elkin. Balancing degree, diameter, and weight in eu- clidean spanners. SIAM J. Discret. Math., 28(3):1173–1198, 2014. 3 [Sol13] Shay Solomon. Sparse euclidean spanners with tiny diameter. ACM Trans. Algo- rithms, 9(3):28:1–28:33, 2013. 9, 14 [Yao82] A. C. Yao. On constructing minimum spanning trees in k-dimensional spaces and related problems. SIAM Journal on Computing, 11(4):721–736, 1982. 1, 2 24
Tree-Like Shortcuttings of Trees Hung Le∗ Lazar MilenkoviㆠShay Solomon‡ Cuong Than§ Abstract Sparse shortcuttings of trees-equivalently, sparse 1-spanners for tree metrics with bounded hop-diameter-have been studied extensively (under different names and settings), since the pioneering works of [Yao82, Cha87, AS87, BTS94], initially motivated by applications to range queries, online tree product, and MST verification, to name a few. These constructions were also lifted from trees to other graph families using known low-distortion embedding results. The works of [Yao82, Cha87, AS87, BTS94] establish a tight tradeoff between hop-diameter and sparsity (or average degree) for tree shortcuttings and imply constanthop shortcuttings for n-node trees with sparsity O(log∗n). Despite their small sparsity, all known constant-hop shortcuttings contain dense subgraphs (of sparsity Ω(log n)), which is a significant drawback for many applications. We initiate a systematic study of constant-hop tree shortcuttings that are "tree-like". We focus on two well-studied graph parameters that measure how far a graph is from a tree: arboricity and treewidth. Our contribution is twofold. • New upper and lower bounds for tree-like shortcuttings of trees, including an optimal tradeoff between hop-diameter and treewidth for all hop-diameter up to O(log log n). We also provide a lower bound for larger values of k, which together yield hop-diameter×treewidth = Ω((log log n)2) for all values of hop-diameter, resolving an open question of [FL22b, Le23]. • Applications of these bounds, focusing on low-dimensional Euclidean and doubling metrics. A seminal work of Arya et al. [ADM+95] presented a (1 + ε)-spanner with constant hop-diameter and sparsity O(log∗n), but with large arboricity. We show that constant hop-diameter is sufficient to achieve arboricity O(log∗n). Furthermore, we present a (1+ε)-stretch routing scheme in the fixed-port model with 3 hops and a local memory of O(log2 n/ log log n) bits, resolving an open question of [KLMS22]. 1 Introduction Given a tree T = (V, E) and an integer k ≥1, a tree shortcutting of T with hop-diameter k is a graph G = (V, E′) such that for every two vertices u, v ∈V , there is a path in G consisting of at most k edges such that dG(u, v) = dT (u, v), where dG(u, v) (resp., dT (u, v)) represents the distance between u and v in G (resp., T). Besides achieving small (ideally constant) hop-diameter, the most basic parameter of a tree shortcutting is its number of edges, |E′|. The problem of constructing sparse tree shortcuttings with small hop-diameter has been ∗ . Email: . †Tel Aviv University. Email: . ‡Tel Aviv University. Email: . § . Email: . 1 16 Oct 2025 studied extensively (under different names and settings), since the pioneering works of [Yao82, Cha87, AS87, BTS94]. The first tradeoff was given by Yao [Yao82], who studied the problem of computing range queries on paths. Later, Chazelle [Cha87] showed how to extend these tradeoffs from paths to arbitrary trees. Few years after, Alon and Schieber [AS87] studied the problem of computing semigroup product along paths and trees. Given a tree T whose vertices are elements of a semigroup, the goal is to preprocess it into a succinct data structure so that every subsequent semigroup product query can be answered using as few as possible operations. In the terminology of tree shortcutting, they showed that for any n-point path, one can get a shortcutting with hop-diameter k and O(nαk(n)) edges. Here, αk(n) is a very slowly-growing inverse Ackermann function defined in Section 2. In particular, their tradeoff implies that one can get hop-diameters 2, 3, and 4, with O(log n), O(log log n), and O(n log∗n) edges, respectively. For trees, their hop-diameter grows by a factor of 2: their tree shortcutting achieves hop-diameter 2k using O(nαk(n)) edges. They also showed that for a line metric the tradeoff is tight. Some of the applications of shortcuttings mentioned in [AS87] include finding the max-flow in a multiterminal network, verifying MST, and updating an MST after increasing the cost of one of its edges. The problem of problem of achieving an optimal tradeoff between hop-diameter and sparsity was settled by Bodlaender et al. [BTS94], who gave an optimal tradeoff for trees matching the lower bound of [AS87] for paths. These constructions were successfully used in various applications and were also lifted to other graph families. Most notably perhaps, a seminal work of Arya et al. [ADM+95] shows that every n-point Euclidean metric with a constant dimension admits a (1+ε)-spanner with hopdiameter k and Oε(nαk(n)) edges. Spanners are formally defined in Section 2. Roughly speaking, spanners generalize the notion of tree shortcutting: Given a tree T, a tree shortcutting of T is a stretch-1 spanner of the tree metric induced by T. Filtser and Le [FL22b] used tree shortcutting to achieve low-treewidth embedding of planar graphs. (See [CFKL20, CLPP23, CCC+25] for additional results on low-treewidth embedding of planar and minor-free graphs and metrics.) In particular, the central part of the embedding result of [FL22b] is a low-treewidth shortcutting for trees. Another example is the compact routing scheme by Kahalon et al. [KLMS22]. (See Section 2 for a definition of compact routing.) Their routing scheme operates on top a network which is a low-treewidth tree shortcutting with hop-diameter 2. Once the shortcutting has been computed, it serves as a "proxy" overlay network, on which the computation can proceed, which gives rise to huge savings in a number of quality measures, including global and local space usage, as well as in various notions of running time, which change from one setting to another. In some applications where limiting the hop-distances of paths is crucial, such as in some routing schemes, road and railway networks, and telecommunication, we might need to minimize the hop-distances; for example, imagine a railway network, where each hop in the route amounts to switching a train - how many of us would be willing to use more than, say, 4 hops? Likewise, what if each hop amounts to traversing a traffic light, wouldn't we prefer routes that minimize the number of traffic lights? In such cases, the designer of the system, or its users, might not be content with super-constant hop-distances, or even with a large constant, and it might be of significant value to achieve as small as possible hop-distances. Motivated by such practical considerations, we are primarily interested in values of hop-diameter k that "approach" 1, mainly k = 2, 3, 4, ... as there is no practical need in considering larger values of k. The fundamental drawback of sparse tree shortcuttings is that they have dense subgraphs. In particular, all the aforementioned tree shortcutting constructions suffer from not being uniformly-sparse in the regime of constant k - they have subgraphs with average degree of Ω(log n). This lower bound on average degree is true for hop-diameter k = 2 and does not reduce with hop-diameter. Motivated by this, we initiate a systematic study of constant-hop 2 tree shortcuttings that are "tree-like". In particular we are interested in two notions that capture uniform sparsity: treewidth and arboricity. (Formal definitions are given in Section 2.) These two graph families are prime examples of families admitting more efficient algorithms, in contrast to just sparse graphs. The key question underlying this work is the following. Question 1. Given an n-vertex tree, is there a shortcutting with constant hop diameter k with arboricity/treewidth of o(log n), and ideally close to constant? In particular, is it possible to achieve this using k = 3, 4, ...? Getting below the log n bound is impossible for k = 2 due to the sparsity lower bound for shortcuttings with hop-diameter 2. A well-known construction achieves a bound of O(log n) on sparsity, treewidth, and arboricity in this regime. Also, with Θ(log n) hops, one can get a constant bound on maximum degree and thus also on sparsity, treewidth, and arboricity [SE14]. We note, however, that the key focus of this paper is in the regime where the hop-diameter is constant. This is arguably the most important regime for all applications. A key insight of this work is that one can break the logarithmic bound, even for treewidth, by using as few as 3 hops. Building on this insight, we derive general nontrivial tradeoffs between the hop-diameter and the treewidth/arboricity, and demonstrate their applicability. The aforementioned low-treewidth embedding result of Filtser and Le [FL22b] gives a tree shortcutting with hop-diameter k = O(log log n) and treewidth t = O(log log n). Using this result, they give a low-treewidth embedding of planar graphs into low-treewidth graphs. In particular, they show that if one can construct a tree shortcutting with hop-diameter k and treewidth t, then one can embed a planar graph with diameter D into a graph with treewidth O(k·t/ε) with an additive distortion of O(εD). Improving the product of k·t would immediately improve the bound on the treewidth of the embedding. The following question is left open in their work. Question 2 ([Le23, FL22b]). Is there a tree shortcutting with treewidth t and hop-diameter k such that k · t = o((log log n)2)? Furthermore, is there such a tree shortcutting with a constant hop-diameter? The compact routing scheme of Kahalon et al. [KLMS22] achieves memory bound of O(log2 n) bits for routing on tree shortcuttings with hop-diameter 2. The reason why they achieve a bound of O(log2 n) bits is essentially due to the fact that the shortcutting has arboricity/treewidth of Θ(log n). The main obstacle in improving their memory bound lies in understanding tree shortcuttings with small treewidth/arboricity. Breaking the bound of Θ(log2 n) bits is a main open question left in their work. Quoting [KLMS22]: "Whether or not one can use a spanner1 of larger (sublogarithmic and preferably constant) hop-diameter for designing compact routing schemes with o(log2 n) bits is left here as an intriguing open question." Question 3 ([KLMS22]). Given an n-vertex tree T, is there a compact routing scheme (operating on a shortcutting of T) with stretch 1 which uses o(log2 n) bits of space for every node and achieves an o(log n), and ideally constant hop-diameter? 1.1 Our contribution Perhaps the main contribution of this work is a conceptual one-identifying the importance of Question 1. In Section 1.1.1 we present new upper and lower bounds for tree shortcuttings, which answer Question 1 as well as Question 2 in the regime of hop-diameter O(log log n). In Section 1.1.2 we present some extensions and applications of these bounds, which in particular settle Question 3. 1tree shortcutting in our terminology 3 1.1.1 Bounds for tree shortcuttings We provide a thorough investigation of Question 1. First, we show that one can break the log n barrier on treewidth already for hop-diameter 3. Theorem 1.1 provides a general upper bound tradeoff between the hop-diameter and the treewidth, for all values of hop-diameter k = O(log log n). Theorem 1.1. For every n ≥1 and every k = O(log log n), every n-vertex tree admits a shortcutting with hop-diameter k and treewidth O(k log2/k n) for even k and O(k( log n log log n)2/(k-1)) for odd k ≥3. Remark. It is impossible to extend the result of Theorem 1.1 to basic graph families, such as planar graphs or Euclidean metrics. See Section 1.2 for more details. We also prove a lower bound tradeoff between the hop-diameter and the treewidth that matches the upper bound of Theorem 1.1 for all values of k = O(log log n). Furthermore, we provide a lower bound for larger values of k, which together settle negatively Question 2, and in particular imply that the construction of [FL22b] with hop-diameter and treewidth both bounded by O(log log n) is asymptotically optimal. Theorem 1.2. For every n ≥1, every shortcutting with hop-diameter k for an n-point path must have treewidth: • Ω(k log2/k n) for even k and Ω(k( log n log log n)2/(k-1)) for odd k ≥3, whenever k ≤ 2 ln(2e) ln log n; • Ω((log log n)2/k) whenever k > 2 ln(2e) ln log n. The construction of Theorem 1.1 for k = 3 has treewidth, and thus also arboricity, bounded by O(log n/ log log n). We next show that the bound on the arboricity can further be improved. In particular one can get rid of the factor k in the tradeoff from Theorem 1.1 by introducing some slack to the exponent of log n. In particular, we prove the following theorem. Theorem 1.3. For every two integers n ≥1 and k ≥1 and every n-vertex tree T, there is a shortcutting with hop-diameter k and arboricity O(log12/(k+4) n). Moreover, when the height of the tree is h, then the arboricity is O(h6/(k+4)). Finally, we show an even better tradeoff between the hop-diameter and arboricity on paths. Theorem 1.4. For every n ≥1 and every even k ≥2, every n-point path admits a shortcutting with hop-diameter k and arboricity O(αk/2+1(n)). In particular, Theorem 1.4 shows that one can get arboricity O(log log n) with k = 4 and O(log∗n) with k = 6. The tradeoff is asymptotically tight, due to the sparsity lower bound of [AS87], as the arboricity of any graph is at most its average degree (up to a factor of 2). We note that all our shortcutting constructions can be implemented in time linear in their size. We will skip the implementation details as this is not the main focus of this work. 1.1.2 Extensions and applications We next extend Theorem 1.4, which provides a 1-spanner for line metrics with bounded hopdiameter and arboricity, to arbitrary doubling metrics, by increasing the stretch to 1 + ε. Theorem 1.5. Let k be an even integer and let ε ∈(0, 1/6) be an arbitrary parameter. Then, for every positive integer n, every n-point metric with doubling dimension d admits a (1+ε)-spanner with hop-diameter k and arboricity ε-O(d)αk/2+1(n). 4 This significantly strengthens the construction of Arya et al. [ADM+95], by providing a uniformly sparse (rather than just sparse) construction with constant hop-diameter. Specifically, for hop-diameter k, we transition from sparsity ε-O(d)αk(n) to arboricity ε-O(d)αk/2+1(n); in particular, we get arboricity O(log∗n) with an hop-diameter of 6. Using the results in tree covers [CCL+23, CCL+24], we can lift this tradeoff for planar and minor-free metrics. Theorem 1.6. Let k be an even integer and let ε ∈(0, 1/6) be an arbitrary parameter. Then, for every positive integer n, every n-point Kh-minor-free metric admits a (1 + ε)-spanner with hop-diameter k and arboricity O(ε-3 ·2 hO(h)/ε ·αk/2+1(n)). Furthermore, if the metric is planar, one can construct such a spanner with arboricity O(ε-3 · αk/2+1(n)). Using the results in [BFN19, MN07], we obtain the following theorem for spanners of general metrics. Theorem 1.7. Let k be an even integer, r ≥1 be an arbitrary parameter. Then, for every positive integer n, every n-point metric MT admits an O(n1/r log1-1/r n)-spanner with hopdiameter k and arboricity O(r · αk/2+1(n)). Alternatively, MT admits an O(r)-spanner with hop-diameter k and arboricity O(r · n1/r · αk/2+1(n)). In Section 5, we use the construction from Theorem 1.1 to devise a constant-hop routing scheme for tree metrics with stretch 1 and O(log2 n/ log log n) bits per vertex. We note that our routing algorithm operates on top of a network with hop-diameter 3 and treewidth O(log n/ log log n). Theorem 1.8. For every n and every n-vertex tree T, there is a 3-hop routing scheme in the fixed-port model for the metric MT induced by T with stretch 1 that uses O(log2 n/ log log n) bits per vertex. This answers in the affirmative Question 3, which is the main open question from [KLMS22]. We show that the bound from Theorem 1.8 on memory per vertex is the best one could get in tree metrics, regardless of the number of hops! In particular, we prove the following theorem. Theorem 1.9. There is an infinite family of trees Tn, n > 0, such that any labeled fixed-port routing scheme with stretch 1 on a metric induced by Tn has at least one vertex with total memory of Ω(log2 n/ log log n) bits. Using known tree cover constructions, we obtain a 3-hop (1 + ε)-stretch routing in doubling metrics. Given a tree cover, we construct a routing scheme on top of each of the trees in the cover. The key challenge is in obtaining a mechanism for efficiently (without increasing the memory usage) determining which tree to use for routing between any two given source and target points. We focus here on doubling metrics, where such a mechanism is already given [CCL+25], and we use it as a black-box to derive our result. However, this is a general framework that can be applied to any other metric family that admits a tree cover construction, provided that one has the mechanism to efficiently determine the right tree on top of which the routing proceeds. Theorem 1.10. For every n and every n-point metric with doubling dimension d, there is a 3-hop routing scheme with stretch (1 + ε) that uses Oε,d(log2 n/ log log n) bits per vertex. This result provides the first routing scheme in Euclidean and doubling metrics, where the number of hops is o(log n), let alone as small as 3, and the labels consist of o(log2 n) bits. 5 1.2 A natural limitation of low-treewidth shortcuttings and spanners Theorem 1.1 provides an upper bound tradeoff between the hop-diameter and treewidth of shortcuttings for trees. Low-treewidth shortcuttings do not exist in most other basic graph families, such as planar graphs and Euclidean and doubling metrics, and this limitation holds even regardless of any hop-diameter bound. Indeed, the √n by √n unweighted grid is a planar graph of treewidth √n, and any shortcutting of the grid must include the entire grid. Similarly, for the doubling metric induced by the grid, as well as for the Euclidean space induced by a 2-dimensional grid point set, any shortcutting must contain the entire grid. In fact, a similar limitation applies even when considering low-treewidth spanners of stretch larger than 1 (again even regardless of their hop-diameter). In particular, it was observed in [DFG08] that for any planar graph of treewidth k, any t-spanner must have treewidth Ω(k/t), and a similar lower bound on the spanner treewidth was extended to the families of bounded genus and apexminor-free graphs. Building on these lower bounds, fixed parameter tractability results for these graph families were given in [DFG08] (refer also to [FGvL11] for results for bounded degree graphs). Low-treewidth Euclidean spanners were studied by Buchin et al. [BRS25], who showed that any set of n points in the Euclidean plane admits a t-spanner of treewidth O(n/t), and this tradeoff between stretch and treewidth is asymptotically optimal. Corneil et al. [CDKX25] showed that for any constants w ≥2 and c ≥1 there exist graphs of treewidth w, such that no spanning subgraph of treewidth w -1 can be an additive c-spanner of such a graph. 2 Preliminaries Treewidth. Given a graph G, let V (G) and E(G) be the vertex and edge set of G. A tree decomposition of G is a pair (X, T) where T is a tree and X = {X1, X2, . . . , Xl} is a set of subsets of V (G), called bags, associated with nodes in T such that: • Every vertex in V (G) is in at least one bag in X. • For every edge e in E(G), there is at least one bag containing both endpoints of e. • For every vertex u in V (G), the bags containing u in X inducing a subtree of T. The width of (X, T) is maxXi∈X |Xi| -1. The treewidth of G is the minimum width over all the tree decomposition of G. The treewidth of a graph is closely related to the size of the complete graph minors it contains. Recall that a minor of a graph G is any graph obtained from G by a sequence of vertex deletions, edge deletions, and edge contractions. Let Kh denote the complete graph on h vertices. It is widely known that treewidth is monotone under taking minors. Then, we obtain the following fundamental fact: Fact 2.1. If a graph G contains Kh as a minor, then the treewidth of G is at least h -1. Arboricity. The arboricity of a graph G measures how sparse its subgraphs can be. Formally, the arboricity of G, denoted by arb(G), is defined as arb(G) = max H⊆G, |V (H)|≥2 |E(H)| |V (H)| -1 , where the maximum is taken over all subgraphs H of G with at least two vertices. Nash-Williams proved that the arboricity equal to the minimum number of forests that G can decomposed into [NW64]. 6 Spanner. A t-spanner of a graph G is a spanning subgraph H that approximately preserves distances between all pairs of vertices. Formally, for every u, v ∈V (G), dH(u, v) ≤t · dG(u, v). A t-spanner of a metric space (X, δ) is a t spanner of X, X 2 , δ . A spanner of a metric space is often known as geometric spanner. Tree cover. A tree cover of a graph G with stretch t is a collection of spanning trees T of G such that, for every u, v ∈V (G): • The distance between u and v in any tree T ∈T is at least the distance in G, i.e., dT (u, v) ≥dG(u, v). • There exists a tree T ∈T such that dT (u, v) ≤t · dG(u, v). Similar to geometric spanner, a tree cover of a metric space (X, δ) is a tree cover of the complete graph X, X 2 , δ . Compact routing scheme. A routing scheme is a distributed algorithm that, given a packet with a designated source and destination, specifies how the packet is forwarded along the edges of the network. Each node in the network has its own routing table, which stores local routing information, as well as a unique label. During a preprocessing phase, the network is initialized so that each node is assigned a routing table and a label. In the labeled model, labels are chosen by the designer (typically of size poly(log n)), while in the name-independent model, labels are chosen adversarially. Each packet carries a header containing the label of the destination and possibly additional auxiliary information. When forwarding a packet destined for vertex v, a node u consults its routing table together with the label of v to determine the outgoing edge (specified by a port number) along which the packet should be sent. In the designer port model, port numbers are assigned during preprocessing, whereas in the fixed port model, port numbers are assigned adversarially. This forwarding process continues until the packet reaches its destination. A routing scheme has stretch t if, for every source-destination pair, the path length taken by the scheme is at most t times the length of a shortest path in the network. In this paper, we consider the underlying network to be a metric space. The algorithm first chooses an overlay network on which routing is performed. The main objective is to minimize the size of the routing tables stored at each vertex. Ackermann functions. Following standard notions (e.g., [NS07, AS24]), we introduce the follwing Ackermann function. Definition 2.1 (Ackermann functions). For all k ≥0, the functions A(k, n) and B(k, n) are defined as follows: A(0, n) := 2n, for all n ≥0, A(k, n) := ( 1 if k ≥1 and n = 0, A(k -1, A(k, n -1)) if k ≥1 and n ≥1, B(0, n) := n2, for all n ≥0, B(k, n) := ( 2 if k ≥1 and n = 0, B(k -1, B(k, n -1)) if k ≥1 and n ≥1. Then, we have the following definition of the inverse Ackermann function: 7 Definition 2.2 (Inverse Ackermann function). For all k ≥0, the function αk(n) is defined as follows: α2k(n) := min{ s ≥0 : A(k, s) ≥n }, for all n ≥0, α2k+1(n) := min{ s ≥0 : B(k, s) ≥n }, for all n ≥0. One can easily see that α0(n) = ⌈n/2⌉, α1(n) = ⌈√n⌉, α2(n) = ⌈log n⌉, α3(n) = ⌈log log n⌉, α4(n) = log∗n, α5(n) = ⌊1 2 log∗n⌋, etc. 3 Bounded treewidth tree covers with small hop-diameter In this section, we show tight tradeoff between treewidth and hop-diameter for 1-spanners of tree metrics. In particular, the upper bound (Theorem 1.1) is proved in Section 3.1 and the matching lower bound (Theorem 1.2) is proved in Section 3.2. The following two claims are used in the proofs of both theorems. Claim 3.1. There is an absolute constant γ such that for α ∈{0, 1}, every integer k ≥4, every x > 1 where the expression is defined, it holds 2 k 1. Next we prove the upper bound. 2 k k k -2 (k-2)/2 + αx-(k-2)/k ! + k -2 k2 (1 + ζ) 2 k -2 k k -2 (k-2)/2 · x-1/k + αx-(k-1)/k !2 l3. Recall that l3 is fixed and does not change across different levels of recursion. The recurrence satisfies W3(n) = O(log n/ log log n). Consider any two vertices u, v ∈T. Let X be the set of vertices in the last recursive call when u and v were in the same tree. If u and v are considered in the same base case, then there is a direct edge between them. Otherwise, let Wu (resp., Wv) be the subtree in T \ X and let u′ (resp., v′) be the vertex in X that is incident on Wu (resp., Wv). By construction, H3 contains edges (u, u′), (v, v′), and (u′, v′). (The cases where u′ = u or v′ = v are handled similarly.) 3.1.3 Hop-diameter k ≥4 Lemma 3.4. For every tree metric MT = (T, δT ) induced by a tree T and every k ≥4 there is a 1-spanner Hk with hop-diameter k and treewidth O(k log2/k n) for even values of k and O(k( log n log log n)2/(k-1)) for odd values of k. Proof. Let lk be such that log lk = ( k k-2)(k-2)/2(log n)(k-2)/k for even values of k and log lk = ( k k-2)(k-2)/2(log n/ log log n)(k-2)/k for odd values. By Claim 3.2, we have that lk ≤√n. Consider the following recursive construction for constructing the edge set of Ek of Hk. Let X be a set of vertices for T as in Lemma 3.1 such that |X| = O(lk) and every component of T \ X has size at most n/lk. Do the following for every subtree T ′ of T \ X. Let u and v be two vertices from X to which T ′ has outgoing edges. Connect u and v to every vertex in T ′ and add the edges to Ek. Proceed recursively with T ′. The base case occurs whenever the size of the tree is at most lk. In the base case, we connect the vertices of the considered tree using the construction with hop-diameter k -2. To interconnect the vertices in X, we construct an auxiliary tree TX; use the recursive construction with parameter k -2 on TX and add all these edges to Hk. This concludes the description of Hk. Next, we analyze the treewidth of Hk. Let T1, . . . , Tg be the tree decompositions of spanners constructed for the trees in T \ X. For every Ti, let u and v be the two vertices from X adjacent to the corresponding subtree Wi in T \ X. (Lemma 3.1 guarantees that there is at most two such vertices; it is possible that u = v.) Add u and v to each bag in Ti. Let TX be the tree decomposition of TX, where TX is defined as in the previous paragraph. Since TX is a valid tree decomposition and TX contains an edge (u, v), then TX contains a bag Lu,v 10 with both u and v. Connect the root of T ′ to Lu,v. This concludes the description of the tree decomposition T of T. The treewidth of T satisfies the recurrence Wk(n) = Wk-2(n) for n ≤lk and Wk(n) ≤max(Wk-2(lk), Wk(n/lk) + 2) otherwise. We show that the recurrence satisfies Wk(n) ≤k log2/k n for even values of k. (The proof for odd values is similar.) For the base case we use n ≤lk, where Wk(n) ≤Wk-2(n). For the induction step, we assume that the hypothesis holds for all the values smaller than n. First, note that Wk-2(lk) ≤(k -2)(log lk)2/(k-2) = k(log n)2/k. Wk(n) ≤max(k(log n)2/k, Wk(n/lk) + 2) ≤max(k(log n)2/k, k(log n -log lk)2/k + 2) ≤max  k(log n)2/k, k log n - k k -2 (k-2)/2 · (log n)(k-2)/k !2/k + 2   ≤max k(log n)2/k, k(log n)2/k The last inequality follows from the left-hand side part of Equation (1) by setting x = log n and α = 0. To argue that Hk is a 1-spanner with hop-diameter k, consider the last step of the construction where both u and v were in the same tree T. If u and v were considered in the same base case, then T is equipped with a recursive construction with parameter k-2. By induction, there is a 1-spanner path between them with hop-diameter k -2. Otherwise, let Wu (resp., Wv) be the subtree in T \ X and let u′ (resp., v′) be the vertex in X that is incident on Wu (resp., Wv). By construction, there is a 1-spanner path between u′ and v′ with at most k -2 hops. (The cases where u′ = u or v′ = v are handled similarly.) 3.2 Lower bound We show the treewidth lower bound for 1-spanner of the uniform line metric Ln = {1, 2, . . . , n} with hop-diameter k. Due to the inductive nature of our argument, we prove a stronger version of the statement, which considers 1-spanners that could potentially use points outside of the given line metric. Theorem 1.2. For every n ≥1, every shortcutting with hop-diameter k for an n-point path must have treewidth: • Ω(k log2/k n) for even k and Ω(k( log n log log n)2/(k-1)) for odd k ≥3, whenever k ≤ 2 ln(2e) ln log n; • Ω((log log n)2/k) whenever k > 2 ln(2e) ln log n. We do so by arguing that any 1-spanner of Ln has a large minor. It is well known that a graph with Kt as a minor has treewidth at least t -1. We prove lower bounds for even k; the lower bound for odd k can be shown using the same argument. 3.2.1 Hop-diameter 2 Lemma 3.5. For every n ≥1 and every n-vertex line metric Ln, every 1-spanner with hopdiameter 2 has K⌊log n⌋+1 as a minor. Proof. We prove the claim by complete induction over n. For the base case, we use n = 1, where the claim holds vacuously. 11 Let H be a 1-spanner for Ln with hop-diameter 2. Split Ln into two consecutive parts, L1 and L2, of sizes ⌊n/2⌋and ⌈n/2⌉, respectively. Let H1 and H2 be subgraphs of H induced on L1 and L2, respectively. From the induction hypothesis, H1 and H2 have Klog ⌊n/2⌋and Klog ⌈n/2⌉ as minors, respectively. Consider the case where every point of L1 has an edge in H to some point in L2. Then Klog ⌊n/2⌋∪{H2} induces a clique minor of size log⌊n/2⌋+ 1. Consider the complementary case where L1 has a point p that does not have an edge in H to any point in L2. Then, every point in L2 has a neighbor in L1 because H has hop-diameter 2. Thus, Klog ⌈n/2⌉∪{H1} induces a clique minor of size log⌈n/2⌉+ 1. Hence, the minor size satisfies recurrence W2(n) = W2(⌊n/2⌋) + 1, with a base case W2(1) = 1. The solution is given by W2(n) = ⌊log n⌋+ 1. 3.2.2 Hop-diameter k ≥4 We give a proof for even values of k such that 4 ≤k ≤ 2 ln(2e) ln log n in Lemma 3.6. The proof for odd values is analogous. The proof for k > 2 ln(2e) ln log n is given in Lemma 3.7. Lemma 3.6. For every n ≥1, every even 4 ≤k ≤ 2 ln(2e) ln log n, and every n-vertex line metric L, every 1-spanner with hop-diameter k has treewidth at least c1k log2/k n -1, where c1 is an absolute constant. Proof. Let lk be such that log lk = ( k k-2)(k-2)/2(log n)(k-2)/k for even values of k. (The proof for odd values is similar. There, we choose lk so that log lk = ( k k-2)(k-2)/2(log n/ log log n)(k-2)/k.) By Claim 3.2, we have that lk ≤√n, whenever k ≤ 2 ln(2e) ln log n. Split L into consecutive sets of points L1, L2, . . . , Llk of size ⌊n/lk⌋each and ignore the remaining points. Let H be a 1-spanner with hop-diameter k for Ln. Our goal is to show that the size of a clique minor of H can be lower bounded by the following recurrence. Wk(n) ≥min(Wk-2(lk), Wk(n/(2lk)) + 1) and Wk(1) = 1 (5) We prove the statement by complete induction over n and k. For the base case, we take n = 1, and Wk(1) = 1 > c1k log2/k n -1. We say that a point in Li is global if it has an edge to a point outside Li and non-global otherwise. We say that Li is global if all of its points are global and non-global otherwise. We consider two complementary cases as follows. Case 1: Every Li is non-global. For every Li and Lj the path between a non-global point in Li and a non-global point in Lj must have the first (resp., last) edge inside Li (resp., Lj). Let H′ be obtained from H by contracting each Li into a single vertex. (Clearly, H[Li] is connected.) Let L′ be the line metric obtained from Ln by contracting every Li into a single point. Then H′ is a (k -2)-hop spanner of L′ with stretch 1. Thus, H′ has a minor of size Wk-2(lk) ≥c1(k -2) log2/(k-2) lk -1 = c1k log2/k n -1. Case 2: Some Li is global. Let {Ll, Lr} = L \ Li, so that Ll (resp., Lr) is on the left (resp., right) of Li. (Possibly Ll = ∅or Lr = ∅.) At least |Li|/2 points in Li have edges to either Ll or Lr. Without loss of generality, we assume the former. Let L′ i ⊆Li be the subset of points that have edges to Ll and let Hi be the subgraph of H restricted to preserving distances in Li. Inductively, Hi has a clique minor of size at least Wk(n/(2lk)). (Since Hi is a 1-spanner, it does not include any point outside of Li.) Then Hi and Ll are vertex-disjoint (because we are considering 1-spanners) and hence their union has a clique minor of size Wk(n/(2lk))+1. Thus, Wk(n) satisfies eq. (5), which we lower bound next. 12 Wk(n) ≥min(c1k(log n)2/k -1, Wk(n/(2lk)) + 1) ≥min(c1k(log n)2/k -1, c1k(log n -log lk -1)2/k) ≥min  c1k(log n)2/k -1, c1k log n - k k -2 (k-2)/2 · (log n)(k-2)/k -1 !2/k  ≥min c1k(log n)2/k -1, c1k(log n)2/k -1 The last inequality follows from Equation (1) by replacing x = log n and α = 1. Lemma 3.7. For every n ≥1, every k > 2 ln(2e) ln log n, every 1-spanner with hop-diameter k for an n-vertex line metric has treewidth at least c2(log log n)2/k, for an absolute constant c2. Proof. We set lk so that log log lk = q k-2 k log log n. (We use log(·) := log2(·).) For the clarity of exposition, we ignore the rounding issues. We note that 1 ≤lk ≤n. Using the same argument as in Lemma 3.6, we have Wk(n) ≥min(Wk-2(lk), Wk(n/(2lk)) + 1). We prove the lemma by induction, where the base case is Lemma 3.6 whenever k > 2 ln(2e) ln log n. Our goal is to prove that Wk-2(lk) ≥c2(log log n)2/k and Wk(n/(2lk))+1 ≥c2(log log n)2/k. For the first inequality we distinguish two cases. If k -2 ≤ 2 ln(2e) ln log(lk), then by Lemma 3.6 we have Wk-2(lk) ≥c1(k -2) log 2 k-2 lk -1 ≥c1(k -2) log ln(2e) ln log lk lk -1 = 2ec1(k -2) -1 ≥ec1k -1 ≥ec1 · 2 ln(2e) 2 (ln log lk)2 k -1 ≥c2 (log log lk)2 k -2 The penultimate inequality holds because k ≥ 2 ln(2e) ln log(n) ≥ 2 ln(2e) ln log(lk). The last inequality holds for a proper choice of constant c2. If k -2 > 2 ln(2e) ln log(lk), we have Wk-2(lk) ≥ c2 (log log lk)2 k-2 by the induction hypothesis. Hence, in both cases, we have: Wk-2(lk) ≥c2 · (log log lk)2 k -2 = c2 · q k-2 k log log n 2 k -2 = c2 · (log log n)2 k For the second inequality, let x = log n. We have log lk = x √ (k-2)/k. Since n 2lk 2 ln(2e) ln log n 2lk and the induction hypothesis gives the following. Wk n 2lk + 1 ≥ c2 log log n 2lk 2 k + 1 = c2 log2 x -x √ (k-2)/k -1 k + 1 To show that the right-hand side is at least c2 log2(x)/k, it suffices to show the following: log2 x -x √ (k-2)/k -1 + k c2 ≥log2 x (6) 13 From Claim 3.3 we have x √ (k-2)/k ≤x -x ln x k + x ln2 x k2 . x -x √ (k-2)/k -1 ≥x - x -x ln x k + x ln2 x k2 -1 = x ln x k 1 -ln x k -1 > x ln x 10k -1 The last inequality holds because k > 2 ln(2e) ln(x). We next consider two cases. If x ln x 10k x ln x 20c2 ≥log2 x and Equation (6) is proved. Otherwise, we proceed as follows. log2 x ln x 10k -1 + k c2 ≥log2 x ln x 20k + k c2 = log x + log ln x 20k 2 + k c2 = log2 x + 2(log x) log ln x 20k + log2 ln x 20k + k c2 ≥log2 x The last inequality holds for any c2 ≤1/10. 4 Bounded arboricity tree covers Throughout this section, we use the following lemma. Lemma 4.1. If every edge of a graph G = (V, E) can be oriented such that the maximum in-degree of every vertex is at most d, then the arboricity of G is at most d + 1. 4.1 Line metrics In this section, we show a construction for line metrics (Theorem 1.4). We shall use a modification of the following well-known result. Theorem 4.1 (Cf. [AS24, BTS94, Sol13]). For every n ≥2 and k ≥2, every n-point tree metric admits a 1-spanner with hop-diameter k and O(nαk(n)) edges. We next state a slightly modified version of the previous theorem. The first statement concerns hop-diameter 1. Lemma 4.2. Let n ≥2 be an arbitrary integer. Let L be a line metric induced by a set of n points on a line so that between every two points there is n Steiner points. Let S denote the set of Steiner points. Then, L admits a Steiner 1-spanner with hop-diameter 2 such that the Steiner points belong to S and every vertex in L ∪S has a constant in-degree. Proof. Interconnect the vertices in L by a clique. Consider an arbitrary clique edge (u, v) and split it into two using a Steiner point w. Orient the edges (u, w) and (w, v) into w. By using a fresh Steiner point for every clique edge, we obtain the guarantees from the statement. Next, we state the general version. 14 Lemma 4.3. Let n ≥2 and k ≥2 be two arbitrary integers. Let L be a line metric induced by a set of n points on a line so that between every two points there is αk(n) Steiner points. Let S denote the set of Steiner points. Then, L admits a Steiner 1-spanner with hop-diameter 2k such that the Steiner points belong to S and every vertex in L ∪S has a constant in-degree. Proof. We prove the lemma by induction over k. For k = 2, we take a central vertex c in L and connect it to every other point in L; orient the edges outwards from c. Proceed recursively with the two halves. This way we obtain a 1-spanner for L with hop-diameter 2. Denote by E the edge set of this spanner. The depth of the recursion in the construction is O(log n), and the size of S is nα2(n) = n log n. Every edge in E has in-degree 1 for every recursion level. We can split every such edge into two using a fresh Steiner point for each recursion level. This concludes the proof for k = 2. Consider now an arbitrary k. Divide L into intervals of size αk-2(n) using n/αk-2(n) cut vertices. Denote by C the set of cut vertices and invoke the induction hypothesis on C with parameter k -2. Let E′ be the obtained set of edges. Let EC be obtained by connecting every cut vertex to every point in the two neighboring intervals. Let this edge set be EC. Proceed recursively with parameter k on each of the intervals. To analyze the in-degree, we observe that the depth of the recursion with parameter k is O(αk(n)), which coincides with the number of Steiner vertices between every two points in L. One level of recursion contributes a constant in-degree to each vertex in the construction. This means that we can split all such vertices into two and use a fresh Steiner point at each recursion level. This concludes the proof. We restate Theorem 1.4 here for convenience. Theorem 1.4. For every n ≥1 and every even k ≥2, every n-point path admits a shortcutting with hop-diameter k and arboricity O(αk/2+1(n)). Proof. Let Ln be an arbitrary line metric. For an integer k′ ≥2, we describe a construction of a 1-spanner H for Ln with hop-diameter 4k′ -2 and arboricity O(α2k′(n)). Consider a set of n′ = n/α2k′-2(n) equally-spaced cut vertices dividing Ln into intervals of size α2k′-2(n). To construct the 1-spanner H, we connect every cut vertex to all the vertices in its two neighboring intervals. Denote the corresponding edge by EC. Let C be the set of cut vertices. Let E′ be obtained by invoking Lemma 4.3 with parameter 2k′ -2 on C using Ln as Steiner points. Proceed recursively with each of the intervals. To analyze the arboricity, we will show that the edges in EC and E′ can be oriented so that the in-degree of every vertex is constant. Orient every edge in EC so that it goes out of the corresponding cut vertex. Since every interval is adjacent to at most two cut vertices, the in-degree of every point with respect to EC is at most 2. By Lemma 4.3, the edges in E′ have a constant in-degree on Ln. In conclusion, EC and E′ contribute O(1) to the indegree of each vertex in Ln. The number of recursion levels l(n) satisfies the recurrence l(n) = l(α2k′-2(n)) + O(1), which has solution l(n) = α2k′(n). Every recursion level contributes O(1) to the in-degree of vertices, meaning that the overall in-degree in H is O(α2k′(n)). The hopdiameter of H is 2 + 2 · (2k′ -2) = 4k′ -2. This concludes the description of a 1-spanner with hop-diameter 4k′ -2 and arboricity O(α2k′(n). Note that we have only shown how to get a tradeoff of hop-diameter 4k′ -2 and arboricity O(α2k′(n)). We could similarly get a construction with hop-diameter 4k′ and arboricity O(α2k′+1(n)) for a parameter k′ ≥1. Specifically, we divide Ln into intervals of size α2k′-1(n). The cut vertices are interconnected using Lemma 4.3 with hop-diameter 2k′ -1 (and Lemma 4.2 when k′ = 1). The hop-diameter is 2 + 2(2k′ -1) = 4k′ and the arboricity is O(α2k′+1(n)) using a similar argument. 15 The construction for ultrametric is a simple adaptation of the construction for line metrics, which we will show next. 4.2 Ultrametrics and doubling metrics A metric (X, dX) is an ultrametric if it satisfies a strong form of the triangle inequality: for every x, y, z, dX(x, z) ≤max{dX(x, y), dX(y, z)}. It is well known that an ultrametric can be represented as a hierarchical well-separated tree (HST). More precisely, an (s, ∆)-HST is a tree T where (i) each node v is associated with a label Γv such that Γv ≥s · Γu whenever u is a child of v and (iii) each internal node v has at most ∆children. Parameter s is called the separation while ∆is called the degree of the HST of the HST. Let L be the set of leaves of T. The labels of internal nodes in T induce a metric (L, dL) on the leaves, called leaf-metric, where for every two leaves u, v ∈L, dL(u, v) = Γlca(u,v) where lca is the lowest common ancestor of u and v. It is well-known, e.g., [BLMN04], that (L, dL) is an ultrametric, and that any ultrametric is isomorphic to the leaf-metric of an HST. Chan and Gupta [CG06] showed that any (1/ε, 2)-HST can be embedded into the line metric with (worst-case) distortion 1+O(ε). Therefore, by applying Theorem 1.4, we obtain a (1+O(ε))- spanner with hop diameter k and arboricity O(αk/2+1(n)) for (1/ε, 2)-HST. In our setting, we are interested in large-degree (k, ∆)-HST where s = 1/ε and ∆= poly(1/ε); the embedding result by Chan and Gupta [CG06] no longer holds for these HSTs. Instead, we directly apply our technique for the line metric to get a (1 + O(ε))-spanner with low-hop diameter. Lemma 4.4. Let ε ∈(0, 1), ∆> 0 be parameters, and k be an even positive integer. Then, any (1/ε, ∆)-HST with n leaves admits a (1 + O(ε))-spanner with hop-diameter k and arboricity O(αk/2+1(n)). Proof. Let T be the (1/ε, ∆)-HST and let MT be the metric induced by T. For an integer k′ ≥2, we describe a construction of a (1 + O(ε))-spanner for MT with hop-diameter 4k′ -2 and arboricity O(α2k′(n)). The construction is similar to that in Theorem 1.4. Let C be the set of internal nodes of T, called cut verices, such that the subtrees rooted at these nodes has size α2k′-2(n). The number of cut vertices is |C| ≤n/α2k′-2(n). First, connect every cut vertex to all of its descendants in T and let the corresponding set of edges be EC. Next, let E be the set of edges interconnecting the cut vertices using Theorem 4.1 with hop-diameter 2k′ -2 and O(n′α2k′-2(n′)) = O(n) edges. We construct a set E′ by subdividing every edge (u, v) ∈E into two edges using a vertex, say x, in the subtree rooted at u. The spanner H is obtained using the edges in EC and those in E′. Finally, the recursive construction is applied to each subtree rooted at a vertex in C. This concludes the description of the recursive construction of H. The same argument in Theorem 1.4 implies that the arboricity is O(α2k′(n)). The stretch is (1+O(ε)) since the path u →x →v between two cut vertices u and v has stretch (1+O(ε)). To construct a low-hop spanner with small arboricity for doubling metrics (Theorem 1.5), we will rely on the ultrametric cover by Filtser and Le [FL22a]. Following their notation, for a given metric space (X, dX), we say that a collection T of at most τ different (s, ∆)-HSTs such that (i) for every HST T ∈T , points in X are leaves in T, and (ii) for every two points x, y ∈X, dX(x, y) ≤dT (x, y) for every T ∈T , and there exists a tree Txy ∈T such that dX(x, y) ≤ρ · dTxy(x, y). Theorem 4.2 (Cf. Theorem 3.4 in [FL22a]). For every ε ∈(0, 1/6), every metric with doubling dimension d admits an (ε-O(d), 1 + O(ε), 1/ε, ε-O(d))-ultrametric cover. 16 Theorem 1.5. Let k be an even integer and let ε ∈(0, 1/6) be an arbitrary parameter. Then, for every positive integer n, every n-point metric with doubling dimension d admits a (1+ε)-spanner with hop-diameter k and arboricity ε-O(d)αk/2+1(n). Proof. Let T be the (ε-O(d), 1+O(ε), 1/ε, ε-O(d))-ultrametric cover in Theorem 4.2 for the input doubling metric. The theorem then follows by applying Lemma 4.4 to each of (1/ε, ε-O(d))-HST in T and taking the union of the resulting spanners. 4.3 General tree metrics Theorem 1.3. For every two integers n ≥1 and k ≥1 and every n-vertex tree T, there is a shortcutting with hop-diameter k and arboricity O(log12/(k+4) n). Moreover, when the height of the tree is h, then the arboricity is O(h6/(k+4)). Proof of Theorem 1.3 for height h. We first show how to prove the theorem for a tree T with height bounded by h. This construction gives the main ideas used for general trees. Let k′ be an arbitrary integer. We show a recursive construction with hop-diameter 2k′. In particular, we show how to shortcut T so that between every ancestor and descendant it is possible to go using k′ hops. The construction is the same as in Lemma 3.2: take the root of the tree, connect it to every descendant and proceed recursively with each of its children. By orienting the edges from the roots to descendants, we have that the in-degree of every vertex is bounded by h. Let A1(h) denote the obtained in-degree (and thus arboricity) of the shortcutting. We have that A1(h) = h. We next show the bound for an arbitrary k′ = 1 + 3g for an integer g ≥1. Let lbe a parameter to be set later. Consider the tree levels so that the root is at level 0. Designate as the cut vertices all the vertices at levels l, 2l, 3l. . .. Denote the set of cut vertices by C. Consider the set S, consisting of all the parents of vertices in C. Let ECS be obtained by interconnecting all the vertices in C to their parents. Each such edge is oriented from a vertex in S towards the vertex in C. Next, connect every vertex in S to its first l-1 ancestors until the occurrence of the first cut vertex; let the corresponding edge set be ES. Every such edge is oriented from the ancestors towards vertices in S. Let c ∈C be an arbitrary vertex at level d. Connect c to all of its descendants at levels d + 1, d + 2, . . . , d + l-2, i.e., until the nest cut vertex, and orient these edges from c towards the descendants. The edge set EC is obtained by doing this for every vertex c in C. Use a recursive construction with parameter k′ -3 on the subtree of T induced by vertices in C. Finally, consider all the subtrees obtained by removing C and S from T and apply the recursive construction with parameter k′ on each of the subtrees. We next analyze the hop-diameter of the ancestor-descendant paths in this construction. Let u and v be two arbitrary vertices such that v is an ancestor of u. The path between u and v in the shortcutting is as follows. By construction, EC contains an edge between u and its ancestor cut vertex cu ∈C. Let du be the highest cut vertex that is ancestor of cu and a descendant of v. There is a path consisting of at most k′ -3 hops between cu and du. Let su ∈S be the parent of du. The edge (du, su) is in ECS. Finally, ES contains an edge (su, v). Clearly, the path consists of k′ hops. We next analyze the in-degree of vertices in T. Let Ak′(h) denote the in-degree of the construction with parameter k′. Then, Ak′(h) = l-1 for the vertices in S, due to the orientation of the edges in ES. For the vertices in C, we have Ak′(h) = 1 + Ak′-3(h/l), because the edges in ECS add one to in-degree of every vertex and the dominant term is due to the recursive call with parameter k′ -3. Finally, all the other vertices have Ak′(h) = 1 + Ak(l-2), where the edges in EC contribute one to the in-degree and Ak(l) is due to the recursive construction with 17 parameter k′. Putting everything together, we have the following recurrence . Ak′(h) = max(l-1, 1 + Ak′-3(h/l), 1 + Ak′(l-2)) (7) We proceed to show inductively that for every k′ = 1 + 3g we have Ak′(h) ≤h1/(g+1). Since we have that Ak′(h) ≤h for every k′ ≥1, we can disregard the third term in Equation (7). Thus, we obtain the following simplified recurrence: Ak′(h) = max(l, Ak′-3(h/l)). Notice that we have replaced l-1 by lin the first term since it does not affect the solution asymptotically. We proceed to solve the recurrence. Ak′(h) ≤max(l, Ak′-3(h/l)) ≤max(l, (h/l)1/g) induction hypothesis ≤max(h1/(g+1), (h1-1/(g+1))1/g) setting l= h1/(g+1) ≤max(h1/(g+1), h1/(g+1)) = h1/(g+1) Thus Ak′(h) ≤h1/(g+1). In particular, we have that g = (k -1)/3 so the tradeoff is k′ versus arboricity h3/(k′+2). To get the tradeoff guaranteed in the statement, we observe that the hop-diameter is 2k′. Proof of Theorem 1.3 for general trees. We consider a heavy-light decomposition of T, constructed as follows. Start from the root r down the tree each time following the child with the largest subtree. The obtained path is called the heavy path rooted at r. Continue recursively with every child of the vertices of the heavy path. Let T ′ be obtained by contracting every heavy path in T into a single vertex. It is well-known that the height of T ′ is log n, where n is the number of vertices in T. We start by employing the shortcutting procedure for bounded height trees on T ′ with parameter k′ and explain how to adapt it to T. Consider an arbitrary edge (u, v) in the shortcutting of T such that u is a descendant of v. Let Pu (resp., Pv) be the heavy path in T corresponding to u (resp., v). Add an edge between the root ru of Pu and its lowest ancestor on Pv. We do this for all the edges in the shortcutting of T ′. For every heavy path P in T, use the 4-hop construction with arboricity O(log log n) from Theorem 1.4. In addition, connect via a direct edge every vertex in p to the root of P. This concludes the description of the shortcutting for T. Next, we analyze the hop-diameter of the obtained construction. Let u and v be two arbitrary vertices in T and let Pu and Pv be the corresponding heavy paths in T. Denote by pu and pv the vertices in T ′ corresponding to Pu and Pv. Let pw be the LCA of pu and pv in T ′. From the construction we know that there is a k′-hop path between pu and pw and between pv and pw. Let (pv, pa) be the first edge on the path from pu to pw. The corresponding path in T goes from u to the parent of Pu and from the root of Pu to its lowest ancestor in Pa. We can similarly replace every edge on the path between Pu and Pw in T ′ by two edges in T. We handle analogously the path between Pv and Pw. The corresponding paths in T go from u to its lowest ancestor on Pw and from v to its lowest ancestor on Pw. Using the edges from the 4-hop construction on Pw, we join the two paths. The overall number of hops is 4k′ + 4. In particular, we achieve a hop-diameter of 4k′ + 4 with arboricity of O(log3/(k′+2) n). Letting k = 4k′ + 4, the arboricity is O(log12/(k+4) n). 18 5 Routing schemes In this section, we show the results for routing in tree metrics and doubling metrics. 5.1 Routing in tree metrics Theorem 1.8. For every n and every n-vertex tree T, there is a 3-hop routing scheme in the fixed-port model for the metric MT induced by T with stretch 1 that uses O(log2 n/ log log n) bits per vertex. Proof. Our routing scheme is constructed on top of a 1-spanner H3 of MT as described in Lemma 3.3. For a vertex u ∈T, let table(u) denote its routing table and label(u) its label. First, assign a unique identifier ID(u) ∈{1, . . . , n} to every vertex u in T and add it to table(u) and label(u). Equip the routing table and a label of every vertex u ∈T with an ancestor label anc(u) as in [AAK+06]. This adds O(log n) bits of memory per vertex. Using the ancestor labeling scheme from [AAK+06], we can determine, given two vertices u and v, whether they are in ancestor-descendant relationship, and if so, whether u is the ancestor of v or the vice-versa. Recall the recursive construction of H3 with parameter T as an input. Assign to each recursive call a unique integer rT . Let X be a set of vertices for T as in Lemma 3.1 so that |X| = log n/ log log n. (This parameter is fixed and does not change across different recursive calls.) The vertices of X are interconnected by a clique in H3. For every vertex in u ∈X, add to table(u) the information consisting of C(u) = ⟨rT , {⟨ID(v), port(u, v), anc(v)⟩| v ∈X \ {x}}⟩. The memory occupied per every vertex in X is O(log2 n/ log log n). (Note that the construction of H3 guarantees that every vertex belongs to such a clique exactly once across all the recursive calls, meaning that table(u) contains only one such C(u).) Let T ′ be a subtree in T \ X. Let u and v be two vertices from X to which T ′ has outgoing edges. For every vertex x ∈T ′, add to table(x) the following: ⟨rT , ID(u), port(x, u)⟩and ⟨rT , ID(v), port(x, v)⟩. Similarly, add to label(x) the following: ⟨rT , ID(u), port(u, x)⟩and ⟨rT , ID(v), port(v, x)⟩. This information takes (log n) bits per recursive call rT . The construction proceeds recursively with T ′; the number of recursive calls every vertex participates in is at most O(log n/ log log n). Next we describe the routing algorithm. Let u be the source and v be the destination. First, check if C(u) contains routing information leading directly to v. In this case, the algorithm outputs port(u, v) and the routing is complete. (This case happens when u and v are in the same clique during the construction.) Otherwise, go over table(u) and label(v) and find the last recursive call rT which is common to both u and v. Next, consider label(v) and the two entries consisting ⟨rT , ID(v1), port(v1, v)⟩and ⟨rT , ID(v2), port(v2, v)⟩, corresponding to rT . If v1 and v2 are in C(u), use anc(u), anc(v1), anc(v2), and anc(v) to decide whether to output port(u, v1) or port(u, v2). (This case happens when u, v1, and v2 are in X in the recursive call rT and v is not in it.) Finally, let ⟨rT , ID(u1), port(u, u1)⟩and ⟨rT , ID(u2), port(u, u2)⟩be the two entries corresponding to rT in table(u). Use anc(u) and anc(v) to decide whether to output port(u, u1) or port(u, u2). (This case happens when u is not in X.) This concludes the description of the routing algorithm. 5.2 Routing in doubling metrics We next show how to extend the routing result in tree metrics to metrics with doubling dimension d. In particular, we prove the following theorem. Theorem 1.10. For every n and every n-point metric with doubling dimension d, there is a 3-hop routing scheme with stretch (1 + ε) that uses Oε,d(log2 n/ log log n) bits per vertex. 19 Given a point set P with doubling dimension d, we first construct a tree cover, using the tree cover theorem from [CCL+25]. Theorem 5.1 ([CCL+25]). Given a point set P in a metric of constant doubling dimension d and any parameter ε ∈(0, 1), there exists a tree cover with stretch (1 + ε) and ε- ̃O(d) trees. Furthermore, every tree in the tree cover has maximum degree bounded by ε-O(d). We use this specific tree cover theorem, since the authors also provide an algorithm for determining the "distance-preserving tree" given the labels of any two metric points. Lemma 5.1 ([CCL+25]). Let ε ∈(0, 1). Let T = {T1, . . . , Tk} be the tree cover for P constructed by Theorem 5.1, where k = ε- ̃O(d). There is a way to assign ε- ̃O(d) log n-bit labels to each point in P so that, given the labels of two vertices x and y, we can identify an index i such that tree Ti is a "distance-approximating tree" for u and v; that is, δTi(x, y) ≤(1+ε)δP (x, y). This decoding can be done in O(d · log(1/ε)) time. We equip each tree in the cover with the stretch-1 routing scheme from Theorem 1.8. This consumes overall ε- ̃O(d) log2 n/ log log n bits per point in P. In addition, we add ε- ̃O(d) log n-bit labels to each point in P as stated in Lemma 5.1. Given two points x, y ∈P, we first employ the algorithm from Lemma 5.1 to find the tree in which the routing should proceed. Then, the routing proceeds on that specific tree using the routing algorithm from Theorem 1.8. This concludes the description of the compact routing scheme for doubling metrics. 6 Lower bound for routing in tree metrics In this section, we prove the following theorem. Theorem 1.9. There is an infinite family of trees Tn, n > 0, such that any labeled fixed-port routing scheme with stretch 1 on a metric induced by Tn has at least one vertex with total memory of Ω(log2 n/ log log n) bits. Let T be an unweighted tree and MT = (V, (V × V ), dT ) be a metric induced by T = (V, E). The edges in (V ×V ) are called Steiner edges. In this section we show that stretch-1 routing in tree metrics requires Ω(log2 n/ log log n) bits per tree vertex. Hard instances. We first describe the hard instances used in [FG02]. Let t, h, and d be positive integers and let T 0 be a complete t-ary rooted tree of height h + 1. Let T0 be a tree obtained by adding d -t -1 leaves at every internal vertex of T 0 and d -t leaves at the root. These added leaves are called dummy leaves. The number of vertices in T0 is n = (d-1)· th-1 t-1 +2. Note that T0 is uniquely defined by t, h, and d. Let T be a tree obtained from T 0 as follows. Consider an internal vertex u of T 0 at height i, where the root has height h and the leaves have height 0. Let qi = ti-1 t-1 . (The choice of qi coincides with the number of non-dummy vertices in a subtree rooted at any child of u.) Add (d -t -1) · qi dummy leaves to u if it is an internal node and (d-t)qi if it is the root. Note that both T0 and T are constructed based on T 0 and there is a correspondence between non-dummy vertices of T and the non-dummy vertices of T0. In what follows, we shall use the same letter to denote some non-dummy vertex in T and the corresponding non-dummy vertex in T0. Claim 6.1. The number of vertices in T is O(n log2 n). 20 Proof. |T| = |T 0| + qh + h X i=1 th-i · (d -t -1) · qi = th+1 -1 t -1 + th -1 t -1 + d -t -1 t -1 · h X i=1 th-i · (ti -1) = th+1 -1 t -1 + th -1 t -1 + (d -t -1)(hth+1 -hth -th + 1) (t -1)2 In [FG02], d = th and t = h = ⌊(log √n)/ log log √n⌋, so that d = th ≤√n. We proceed to upper bound |T| as follows. |T| ≤2th+1 + dhth+1 ≤2√n log n + n log2 n = O(n log2 n) Using a reduction to the lower bound instances of [FG02], we will show that the memory requirement is Ω(log2 n′/ log log n′) = Ω(log2 n/ log log n). Reduction to relaxed routing. Let MT be a metric induced by T. Similarly to [FG02], we consider a restricted problem of relaxed routing in MT , where the destination vertex is a nondummy vertex of MT and the source vertex is its ancestor. Our lower bound argument shows that relaxed routing in MT requires total memory of Ω(log2 n/ log log n) bits per vertex. Since every routing scheme in an instance MT is also a relaxed routing scheme in the same instance, our lower bound applies to routing in MT . Port numbering. In [FG02], the authors consider a family T of instances where all the trees are isomorphic to T0 and each instance correspond to a different port numbering of T0. We consider a family of instances T ′ where every metric is isomorphic to MT and there is a one-toone correspondence between instances in T and those in T ′. Consider an instance ˆT0 from T . We proceed to explain the port numbering in the corresponding instance ˆ MT in T ′. Let u be an internal vertex of T at height i. Define the sets of edges Ej as follows: • For 1 ≤j ≤t, let Ej be the set of qi edges leading to the non-dummy descendants of u in the subtree of MT rooted at the jth child of u. • Partition the edges leading to dummy leaves adjacent to u into groups of size qi. Let Ej be the j-th group for t + 1 ≤j ≤d. Let p1, . . . , pd be the port numbers of the d neighbors of u in ˆT0. Note that p1, . . . , pd form a permutation of numbers in {1, . . . , d}. Define Bk as the set of integers in [(k -1)qi + 1, kqi]. For 1 ≤j ≤d, assign to Ej port numbers from Bpj arbitrarily. Assign all the other port numbers arbitrarily. This concludes the description of the port numbers in ˆ MT . The following observation provides the key property of the port assignments in ˆ MT . Observation 6.1. Let wi be a child of u and let pi ∈{1, . . . , d} be the port number in ˆT0 of the edge (u, wi), as seen from u. Every port number p of u in ˆ MT leading to a subtree rooted at wi satisfies ⌈p/qi⌉= pi. 21 Routing without header rewriting. Next, we show that header rewriting cannot reduce overall memory per vertex in ancestor-descendant routing. Consider an ancestor-descendant routing scheme R′ which routes on top of an instance ˆ MT in T ′. Let u be a source vertex and v a destination. Initially, the header contains only label(v). Let w be the first vertex on the routing path from u to v. Since R′ is a valid ancestor-descendant routing scheme and w is an ancestor of v, it is possible to route from w to v with w as a source and v as a destination. In this case, the routing algorithm commences at w and the header contains only label(v). Since the routing scheme has stretch 1, vertex w will never be visited again. In other words, rewriting the header at vertex u does not help in ancestor-descendant routing. Reduction to routing in trees. Let ˆT0 be an instance in T and let ˆ MT be the corresponding instance in T ′. Our goal is to define a transformation of an ancestor-descendant routing scheme R′ for ˆ MT into an ancestor-descendant routing scheme R for ˆT0, which uses additional O(log n′) = O(log n) bits per vertex when restricting to query pairs that exist in ˆT0. Consider an internal vertex u at height i in ˆ MT and its descendant (non-dummy vertex) v. Let table′(u) be the routing table of u and label′(v) be the label of v in R′. Define label(v) := label′(v) and let table(u) be a concatenation of table′(u) and a binary encoding of qi. The number of bits required to store qi is O(log n′) = O(log n). Let R(table(u), label(v)) := ⌈R′(table′(u),label′(v)) qi ⌉. This concludes the description of R. We argue that R is a valid routing scheme for T. It suffices to prove that R outputs the correct port number. Let p be the port number leading to the next vertex on the routing path from u down to v. We want to prove that R(table(u), label(v)) = p. From Observation 6.1, we know that ⌈R′(table′(u),label′(v)) qi ⌉= p′, where the p′ is the port number in ˆT0 leading to the child of u which is the root of the subtree where v belongs. Hence, the routing algorithm in ˆT0 proceeds at the correct child. In conclusion, we described a way to convert a routing scheme for ˆ MT into a routing scheme in ˆT0 which uses Ω(log n) additional bits. In [FG02] it is proved that T contains an instance in which some vertex requires Ω(log2 n/ log log n) bits of memory. This means that there is an instance in T ′ which requires Ω(log2 n/ log log n) = Ω(log2 n′/ log log n′) bits of memory. References [AAK+06] Serge Abiteboul, Stephen Alstrup, Haim Kaplan, Tova Milo, and Theis Rauhe. Compact labeling scheme for ancestor queries. SIAM J. Comput., 35(6):1295-1309, 2006. 19 [ADM+95] S. Arya, G. Das, D. M. Mount, J. S. Salowe, and M. Smid. Euclidean spanners: Short, thin, and lanky. In Proceedings of the Twenty-seventh Annual ACM Symposium on Theory of Computing, STOC '95, pages 489-498, 1995. 1, 2, 5 [AS87] Noga Alon and Baruch Schieber. Optimal preprocessing for answering on-line product queries. Technical Report TR 71/87, Tel Aviv University, 1987. 1, 2, 4 [AS24] Noga Alon and Baruch Schieber. Optimal preprocessing for answering on-line product queries. CoRR, abs/2406.06321, 2024. 7, 9, 14 [BFN19] Yair Bartal, Arnold Filtser, and Ofer Neiman. On notions of distortion and an almost minimum spanning tree with constant average distortion. J. Comput. Syst. Sci., 105:116-129, 2019. preliminary version published in SODA 2016. 5 22 [BLMN04] Yair Bartal, Nathan Linial, Manor Mendel, and Assaf Naor. Some low distortion metric ramsey problems. Discrete & Computational Geometry, 33(1):27-41, July 2004. 16 [BRS25] Kevin Buchin, Carolin Rehs, and Torben Scheele. Geometric spanners of bounded tree-width. In SoCG, volume 332 of LIPIcs, pages 26:1-26:15. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2025. 6 [BTS94] Hans L. Bodlaender, Gerard Tel, and Nicola Santoro. Trade-offs in non-reversing diameter. Nord. J. Comput., 1(1):111-134, 1994. 1, 2, 14 [CCC+25] Hsien-Chih Chang, Vincent Cohen-Addad, Jonathan Conroy, Hung Le, Marcin Pilipczuk, and Michal Pilipczuk. Embedding planar graphs into graphs of treewidth O (log3 n ). In SODA, pages 88-123. SIAM, 2025. 2 [CCL+23] H. Chang, J. Conroy, H. Le, L. Milenković, S. Solomon, and C. Than. Covering planar metrics (and beyond): O(1) trees suffice. In The 64th Annual Symposium on Foundations of Computer Science, FOCS '23, pages 2231-2261, 2023. 5 [CCL+24] H. Chang, J. Conroy, H. Le, L. Milenković, S. Solomon, and C. Than. Shortcut partitions in minor-free graphs: Steiner point removal, distance oracles, tree covers, and more. In The 2024 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '24, page 5300-5331, 2024. 5 [CCL+25] Hsien-Chih Chang, Jonathan Conroy, Hung Le, Shay Solomon, and Cuong Than. Light tree covers, routing, and path-reporting oracles via spanning tree covers in doubling graphs. In STOC, pages 2257-2268. ACM, 2025. 5, 20 [CDKX25] Derek G. Corneil, Feodor F. Dragan, Ekkehard Köhler, and Yang Xiang. Lower bounds on collective additive spanners. CoRR, abs/2504.18508, 2025. 6 [CFKL20] Vincent Cohen-Addad, Arnold Filtser, Philip N. Klein, and Hung Le. On light spanners, low-treewidth embeddings and efficient traversing in minor-free graphs. In FOCS, pages 589-600. IEEE, 2020. 2 [CG06] H. T.-H. Chan and A. Gupta. Small hop-diameter sparse spanners for doubling metrics. In Proc. of 17th SODA, pages 70-78, 2006. 16 [Cha87] Bernard Chazelle. Computing on a free tree via complexity-preserving mappings. Algorithmica, 2(1):337-361, 1987. 1, 2 [CLPP23] Vincent Cohen-Addad, Hung Le, Marcin Pilipczuk, and Michal Pilipczuk. Planar and minor-free metrics embed into metrics of polylogarithmic treewidth with expected multiplicative distortion arbitrarily close to 1. In FOCS, pages 2262-2277. IEEE, 2023. 2 [DFG08] Feodor F. Dragan, Fedor V. Fomin, and Petr A. Golovach. Spanners in sparse graphs. In ICALP (1), volume 5125 of Lecture Notes in Computer Science, pages 597-608. Springer, 2008. 6 [FG02] Pierre Fraigniaud and Cyril Gavoille. A space lower bound for routing in trees. In STACS, volume 2285 of Lecture Notes in Computer Science, pages 65-75. Springer, 2002. 20, 21, 22 23 [FGvL11] Fedor V. Fomin, Petr A. Golovach, and Erik Jan van Leeuwen. Spanners of bounded degree graphs. Inf. Process. Lett., 111(3):142-144, 2011. 6 [FL22a] Arnold Filtser and Hung Le. Locality-sensitive orderings and applications to reliable spanners. In STOC, pages 1066-1079. ACM, 2022. 16 [FL22b] Arnold Filtser and Hung Le. Low treewidth embeddings of planar and minor-free metrics. In FOCS, pages 1081-1092. IEEE, 2022. 1, 2, 3, 4, 9 [KLMS22] Omri Kahalon, Hung Le, Lazar Milenkovic, and Shay Solomon. Can't see the forest for the trees: Navigating metric spaces by bounded hop-diameter spanners. In Alessia Milani and Philipp Woelfel, editors, Proc. PODC, pages 151-162. ACM, 2022. 1, 2, 3, 5 [Le23] H. Le. Shortcutting trees, 2023. https://minorfree.github.io/ tree-shortcutting/. 1, 3 [MN07] Manor Mendel and Assaf Naor. Ramsey partitions and proximity data structures. Journal of the European Mathematical Society, 9(2):253-275, 2007. 5 [NS07] G. Narasimhan and M. Smid. Geometric Spanner Networks. Cambridge University Press, 2007. 7, 9 [NW64] Crispin St. John Alvah Nash-Williams. Decomposition of finite graphs into forests. Journal of the London Mathematical Society, 1(1):12-12, 1964. 6 [SE14] Shay Solomon and Michael Elkin. Balancing degree, diameter, and weight in euclidean spanners. SIAM J. Discret. Math., 28(3):1173-1198, 2014. 3 [Sol13] Shay Solomon. Sparse euclidean spanners with tiny diameter. ACM Trans. Algorithms, 9(3):28:1-28:33, 2013. 9, 14 [Yao82] A. C. Yao. On constructing minimum spanning trees in k-dimensional spaces and related problems. SIAM Journal on Computing, 11(4):721-736, 1982. 1, 2 24
2510.14919
Predicting Task Performance with Context-aware Scaling Laws Kyle Montgomery1*, David Park2*, Jianhong Tu1, Michael Bendersky3, Beliz Gunel4, Dawn Song5, Chenguang Wang1† 1 UC Santa Cruz, 2 Washington University in St. Louis, 3Databricks, 4Google DeepMind, 5UC Berkeley {kylemontgomery, chenguangwang}@ucsc.edu Abstract Scaling laws have transformed our understand- ing of large language models by linking up- stream metrics like cross-entropy loss to design factors such as model size, training data, and compute. However, these conventional laws fail to capture downstream task performance, where context plays a critical role. In this work, we propose a straightforward, interpretable framework that jointly models downstream per- formance as a function of the training com- pute and the provided context. We empirically validate our framework by fitting it on the ob- served downstream performance of extended- context variants of Llama-2-7B and Llama- 2-13B across 65,500 unique instances span- ning three tasks: arithmetic reasoning, com- mon sense reasoning, and machine translation. Our results demonstrate that our framework accurately models in-distribution downstream performance, generalizes across three orders of magnitude in training compute, and reliably extrapolates performance as the amount of con- text increases. These findings offer valuable in- sights into the interplay between training com- pute and context utilization, providing guid- ance for designing more efficient long-context LLMs for diverse downstream tasks. Our code is available at https://github.com/ wang-research-lab/context-scaling. 1 Introduction Neural scaling laws (Hestness et al., 2017; Kaplan et al., 2020), which describe how model perfor- mance scales with the number of model parame- ters, the size of the training dataset, or the amount of training compute, have shaped our understand- ing of how large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023; Team et al., 2024; Grattafiori et al., 2024; OpenAI et al., 2024) im- prove with increased resources. These findings ∗Equal contribution. †Corresponding author. have guided the design and development of increas- ingly larger models, providing a blueprint to op- timally scale up performance under a fixed com- pute budget (Hoffmann et al., 2022; OpenAI et al., 2024). While upstream metrics like cross-entropy loss serve as convenient proxies during model devel- opment, in real-world applications, downstream performance often diverges from these upstream trends (Wei et al., 2022; Hu et al., 2024). Accu- rate upfront performance estimates for downstream tasks can help guide model development and iden- tify emergence or saturation on certain tasks with fewer costly experiments. Existing works on pre- dicting downstream performance often rely on overly complicated, less interpretable methods. For instance, Chen et al. (2024) utilizes a two-stage approach using upstream loss as an intermediary, while Ye et al. (2023) fits a multi-layered percep- tron to predict performance on BIG-Bench (Srivas- tava et al., 2023). In contrast, we propose a straightforward, inter- pretable framework that directly models the down- stream performance of LLMs across a number of tasks. The key is to jointly model downstream per- formance as a function of the training compute and the provided context. Specifically, we develop a functional form (see Eq. (1)) which combines two saturating power-law terms (one in the amount of training compute and another in the amount of con- text) along with a penalty term to account for cases in which the context exceeds the model’s context limit. This formulation is motivated by the intu- ition that downstream performance improves with increased training compute and longer, yet relevant, context until the benefits saturate or the context limit is exceeded. Figure 1 compares our fit to existing methods that do not consider context. We empirically validate our scaling framework by fitting it on the observed downstream perfor- mance of extended-context variants of Llama-2-7B 1 arXiv:2510.14919v1 [cs.CL] 16 Oct 2025 102 103 104 105 npmt 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 Accuracy Observed Performance Our Fit Existing Work Figure 1: Existing approaches ignore the impact of context length and predict an average performance level regardless of the number of in-context demonstrations. In comparison, our context-aware fit closely tracks the observed performance as additional context is provided. and Llama-2-13B (Touvron et al., 2023; Peng et al., 2024) across 65,500 unique instances spanning three tasks: arithmetic reasoning, common sense reasoning, and machine translation. Our results demonstrate that our framework accurately predicts downstream performance for both Llama-2-7B and Llama-2-13B (Sec. 4). Furthermore, we find that our fits generalize well on held-out models spanning 3 orders of magnitude in training compute (Sec. 4.1). Similarly, we demonstrate that our fits generalize to longer contexts, even as the context exceeds a model’s context limit (Sec. 4.2). Lastly, we show that our fits generalize across different context-extension techniques (Sec. 4.3). These findings offer valuable insights into the interplay between training compute and context utilization, providing guidance for designing more efficient long-context LLMs for diverse downstream tasks. Our main contributions are threefold: • We propose a framework that extends conven- tional neural scaling laws to downstream tasks by incorporating the context length and con- text limit, providing a more accurate model of LLM performance across varying context lengths. • We empirically fit this framework to Llama- 2 models with extended context windows across 3 tasks: arithmetic reasoning, common sense reasoning, and machine translation. We demonstrate the generality of our approach by showing that our scaling laws hold across 3 orders of magnitude in training compute, 4 orders of magnitude in context length, and across different context-extension techniques. • Our framework offers an interpretable tool for understanding the interplay between compute, context, and downstream performance, pro- viding insights that can guide the design of future long-context LLMs. 2 Background Here, we introduce relevant preliminaries, includ- ing notation conventions and the process of ex- tending the context window of the Llama-2 mod- els (Touvron et al., 2023). 2.1 Notation We adopt the following notation: • P – aggregate performance on a downstream task. Occasionally, we’ll use a subscript to de- note the specific task (e.g., PMT for machine translation). • N – the number of model parameters, exclud- ing embedding/de-embedding parameters. • D – the number of tokens in the training dataset. • C – the amount of non-embedding training compute. Following Kaplan et al. (2020), we estimate C ≈6N FLOPs per training token, or C ≈6ND FLOPs in total. • nctx – the context limit of a model in tokens, i.e., the maximum number of positional em- beddings computed for any training sequence. Often, we quote numerical values using k to denote units of 1024 tokens. For exam- ple, a context limit of “128k” corresponds to 128 × 1024 = 131072 tokens. • npmt – the length (in tokens) of a given input query or context. For simplicity, npmt does not include generated/outputted tokens. 2.2 Extending Llama-2’s Context Limit Because the complexity of the self-attention layers grows quadratically in the sequence length (Du- man Keles et al., 2023; Dao et al., 2022), LLMs are commonly pre-trained on short sequences (e.g., 4k tokens) rather than long sequences (e.g., 128k 2 Base Model Non-embedding Params (N) Context Limit (nctx) Dataset Size (D) Training Compute (C) Llama-2-7B 6,476,271,616 4k 2.0T 7.7719 × 1022 8k 2.0T + 0.210B 7.7723 × 1022 16k 2.0T + 0.419B 7.7732 × 1022 32k 2.0T + 0.836B 7.7748 × 1022 64k 2.0T + 1.678B 7.7780 × 1022 128k 2.0T + 3.355B 7.7846 × 1022 Llama-2-13B 12,688,184,320 4k 2.0T 1.5227 × 1023 8k 2.0T + 0.210B 1.5227 × 1023 16k 2.0T + 0.419B 1.5229 × 1023 32k 2.0T + 0.836B 1.5232 × 1023 64k 2.0T + 1.678B 1.5239 × 1023 128k 2.0T + 3.355B 1.5251 × 1023 Table 1: The 12 checkpoints against which we fit scaling curves. The 4k variants are the official Llama-2-7B and Llama-2-13B checkpoints. The additional training tokens and compute from extending the context limit via YaRN (Peng et al., 2024) are factored into D and C. tokens). As a result, LLMs struggle to generalize to sequences longer than those seen during pre- training. Because we plan to explore how down- stream performance varies with context length, Llama-2’s original context limit of 4k tokens will not be sufficient. Fortunately, a number of tech- niques have been proposed that can extend the con- text window of LLMs for a fraction of the pre- training compute budget (Chen et al., 2023; Peng et al., 2024; Xiong et al., 2024). YaRN (Peng et al., 2024) is our method of choice for extending Llama 2’s context limit. We selected YaRN due to its high compute efficiency and strong empirical results compared to other techniques. YaRN involves fine-tuning the pre-trained model for a limited number of steps on sequences exceed- ing the pre-trained LLM’s context limit in order to increase the effective size of the LLM’s context limit so that it may better model long sequences. We adopt the methodology from Peng et al. (2024) and fine-tune Llama-2-7B and Llama-2- 13B (Touvron et al., 2023) for 400 steps with a global batch size of 64 on sequences of length n′ ctx (where n′ ctx > nctx) from the PG-19 cor- pus (Rae et al., 2020). We use the AdamW opti- mizer (Loshchilov and Hutter, 2019) with β1 = 0.9 and β2 = 0.95, and a learning rate of 2 × 10−5. We train variants of Llama-2-7B and Llama-2-13B with nctx ∈{8k, 16k, 32k}, and source check- points for nctx ∈{64k, 128k} from Peng et al. (2024). In order to validate the effectiveness of the con- text extension training, we evaluate the perfor- mance of our 12 Llama-2 models in Table 1 on RULER (Hsieh et al., 2024), a synthetic needle- in-a-haystack benchmark developed to evaluate long-context LLMs. Specifically, we evaluate each model on 100 instances per length, for each of RULER’s 13 tasks. Results are displayed in Table 2 and suggest that context extension via YaRN (Peng et al., 2024) is somewhat effective. Interestingly, models tend to underperform when evaluated at their extended context limit, suggesting that train- ing with a context limit well beyond the target eval- uation range can lead to improved performance within that desired range. 3 Method We posit that aggregate task performance P can be modeled as the product of two saturating power laws in C and npmt, with a sigmoid penalty term for when npmt > nctx. This form provides a good fit for a range of tasks, including arithmetic reasoning, common sense reasoning, and machine translation tasks. Formally, we model P as P(C, npmt, nctx) = Saturating term in C z }| { h 1 −exp  −A  C Cc αi (1) × h 1 −exp  −B  npmt nc pmt βi | {z } Saturating term in npmt × σ (npmt −nctx) | {z } Penalty term , where A, Cc, α, B, nc pmt, and β are parameters to be optimized. We select this form because we expect that the downstream performance P is proportional to di- minishing terms in the amount of training compute C (which integrates both model size N and dataset size D) (Chen et al., 2024; Owen, 2024) and the context length (Brown et al., 2020; Caballero et al., 2023), assuming the context remains relevant as its 3 Model nctx npmt = 4k npmt = 8k npmt = 16k npmt = 32k npmt = 64k npmt = 128k Llama-2-7B 4k 0.822 0.000 0.000 0.000 0.000 0.000 8k 0.829 0.586 0.000 0.000 0.001 0.005 16k 0.795 0.58 0.378 0.000 0.000 0.002 32k 0.746 0.599 0.517 0.317 0.000 0.000 64k 0.794 0.647 0.593 0.530 0.225 0.000 128k 0.776 0.663 0.552 0.439 0.383 0.129 Llama-2-13B 4k 0.861 0.000 0.000 0.000 0.000 0.000 8k 0.870 0.625 0.000 0.000 0.000 0.000 16k 0.865 0.679 0.392 0.000 0.000 0.000 32k 0.848 0.727 0.622 0.378 0.000 0.000 64k 0.860 0.734 0.612 0.511 0.282 0.001 128k 0.819 0.684 0.586 0.484 0.447 0.163 Table 2: Accuracy of our extended Llama-2 models on RULER (Hsieh et al., 2024). length increases and npmt ≤nctx. We saturate these terms via exponentiation to ensure our predicted performance remains below the maximum theoret- ical performance of 1.0. The product form arises because compute and context are complementary, not additive; a significant lack in one dimension limits the benefit derived from the other. For ex- ample, providing more context is only beneficial to the extent that the model is capable of leveraging that additional context. We impose a sharp sigmoid penalty term because P is measured only on the generated tokens, and if npmt > nctx, then any gen- erated tokens will fall beyond the range in which the model can make reliable predictions, meaning P degrades rapidly, especially on tasks that require extended and coherent generations (e.g., reason- ing through a math word problem or translating an entire sentence). 3.1 Datasets We evaluate our 12 models in Table 1 on 65,500 instances of varying lengths that span 3 tasks: • Arithmetic reasoning We collect 3550 test- ing instances across GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), AQUA-RAT (Ling et al., 2017), and Deep- mind Math (Saxton et al., 2019). Because the instances are rather short, we pack the context with up to 511 demonstrations sampled from the training splits of each dataset. • Common sense reasoning We sample 1750 testing instances across PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019), Wino- Grande (Sakaguchi et al., 2020), ARC- Easy/Challenge (Clark et al., 2018), and Com- monSenseQA (Talmor et al., 2019), and pack the context with up to 511 demonstrations from their respective training splits. • Machine translation We sample 250 trans- lation instances from WMT-14 (Bojar et al., 2014) from each of German, French, Hindi, Czech, and Russian to English. As before, we pack the context with up to 511 demon- strations (of the same source language) and measure the BLEU-4 (Papineni et al., 2002) score of the generation against the reference translation. Additional details can be found in Appendix A. 3.2 Fitting Procedure For each task, we aggregate the results for each model by the context length, using the number of in-context demonstrations as a proxy for length. Within each group, we average over the context length and metric value for each instance. In doing so, we collect a number of records of the form (C, npmt, nctx, avg. metric value) on which we fit Eq. (1) for each of our 3 tasks. To fit the scaling curves, we use a two-stage opti- mization procedure that combines global search with local refinement. First, we use an out- of-the-box global optimizer to perform a broad Parameter Lower Bound Upper Bound A 0 100 Cc 0 1030 α 0 10 B 0 100 nc pmt 0 131,072 β 0 10 Table 3: Upper and lower bounds on A, Cc, α, B, nc pmt, and β. 4 102 103 104 105 npmt 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 Accuracy (a) Arithmetic reasoning. 102 103 104 105 npmt 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Accuracy (b) Common sense reasoning. 102 103 104 105 npmt 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 Accuracy (c) Machine translation. Figure 2: Contours of fits at C = 7.8×1022 (red) and C = 1.5×1023 (blue) for nctx = 8k on three tasks: arithmetic reasoning (left), common sense reasoning (middle) and machine translation (right). Task A Cc α B nc pmt β Arithmetic reasoning 9.96 9.7 × 1029 0.26 62.24 1.3 × 105 0.56 Common sense reasoning 99.39 1.5 × 1028 0.40 96.31 3.5 × 103 1.12 Machine translation 5.55 5.4 × 1029 0.23 31.82 3.0 × 102 2.97 Table 4: Fits for P(C, npmt, nctx) on 3 downstream tasks: arithmetic reasoning, common sense reasoning, and machine translation. search over the parameter space. Specifically, we use SciPy’s differential_evolution global optimization method, an evolutionary algorithm well suited for non-convex, non-linear optimiza- tion problems such as this (Storn and Price, 1997). We define finite upper and lower bounds for each parameter, informed by Kaplan et al. (2020) and Xiong et al. (2024). We use the same bounds across all tasks, which are listed in Table 3. Finally, we do a pass through a local optimizer (SciPy’s curve_fit), using the estimate from the global optimizer as a starting point, to achieve a precise fit. 4 Empirical Results We model the aggregate performance P on each of our 3 tasks (arithmetic reasoning, common sense reasoning, and machine translation) using Eq. (1). Unless otherwise noted, scaling laws are fit on the results of all 12 Llama-2 models in Table 1 using the procedure outlined in Section 3.2. Ta- ble 4 includes the parameter values that we found to be optimal for each task. Contours of our fits at C = 7.8 × 1022 and C = 1.5 × 1023 for nctx = 8k are provided in Figure 2. Additional contours are provided in Appendix B. We report the mean abso- lute prediction error |P −ˆP|, which is the average of residuals (in absolute value). When discussing individual residuals, we’ll often include the sign of the residual to indicate the direction (i.e., whether we’re under- or over-predicting). On the arithmetic reasoning task, we achieve an excellent fit, with an average prediction error |P −ˆP| of just 0.010. Similarly, on common sense reasoning and machine translation, we observe av- erage prediction errors of 0.037 and 0.007, respec- tively. Additionally, we model the behavior around the boundary condition at npmt = nctx surprisingly well. Our results confirm that P can be jointly deter- mined by the training compute and context length. Increasing C corresponds to an increase in P, in effect shifting up the contour by some diminishing amount in C in the region where npmt < nctx. Sim- ilarly, increasing npmt when npmt is small leads to significant gains in P, which diminish (sub-linearly for arithmetic reasoning and super-linearly for com- mon sense reasoning and machine translation) and saturate quickly. In the context of our task con- struction, this makes a lot of sense; the first few in-context task demonstrations go far in improv- ing the generated responses, but once the model has seen enough context to sufficiently capture the task structure, additional demonstrations provide little marginal benefit (Brown et al., 2020). Ad- ditionally, the optimal number of demonstrations is task-dependent; our results suggest that models make better use of additional demonstrations on arithmetic reasoning tasks than they do on common sense reasoning or machine translation tasks. The remainder of this section aims to study the extent to which our fits generalize to out-of- 5 Model C nctx PAR −ˆPAR PCSR −ˆPCSR PMT −ˆPMT Qwen-2.5-0.5B 3.8 × 1022 32k +0.057 +0.008 -0.057 Gemma-2-2B 2.4 × 1022 4k +0.066 +0.260 +0.059 Gemma-2-9B 4.0 × 1023 4k +0.069 +0.051 +0.017 Gemma-2-27B 2.0 × 1024 4k +0.024 -0.099 -0.054 Llama-2-70B 8.2 × 1023 4k -0.002 -0.031 -0.025 Table 5: Generalization of fit on test models for arithmetic reasoning (AR), common sense reasoning (CSR), and machine translation (MT). 102 103 104 105 npmt 0.06 0.08 0.10 0.12 0.14 0.16 0.18 Accuracy (a) Arithmetic reasoning. 102 103 104 105 npmt 0.30 0.35 0.40 0.45 0.50 0.55 0.60 0.65 Accuracy (b) Common sense reasoning. 102 103 104 105 npmt 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 Accuracy (c) Machine translation. Figure 3: Contours of fits at C = 7.8 × 1022 (red) and C = 1.5 × 1023 (blue) for nctx = 128k on three tasks: arithmetic reasoning (left), common sense reasoning (middle) and machine translation (right). Held-out observations are colored in purple and green for Llama-2-7b and Llama-2-13b, respectively. distribution amounts of training compute (Sec- tion 4.1), context length (Section 4.2), and context- extension method (Section 4.3). Finally, Sec- tion 4.4 analyzes the role of the sigmoid penalty term. 4.1 Generalization along C Our scaling laws are fit over a narrow range of C, specifically 7.8 × 1022 ≤C ≤1.5 × 1023. To test how well our fits generalize out- side of this range, we evaluate several testing mod- els (namely, Qwen2.5-0.5B (Yang et al., 2025), Gemma-2 (Team et al., 2024), and Llama-2- 70B (Touvron et al., 2023)) ranging between 0.5B to 70B parameters and spanning 3 orders of mag- nitude in C. We evaluate these models at their respective context limits †, and report the prediction error on each task in Table 5. We observe good generalization across these 5 testing models, with many of the prediction errors falling near or below 5 points. Interestingly, our fits generalize the worst to Gemma-2-2B, despite generalizing well to Gemma-2-9B and Gemma-2- 27B. Moreover, we achieve stronger generalization on arithmetic reasoning and machine translation †While Gemma-2 has a context limit of 8k tokens, it uses sliding window attention for every odd layer. Since this be- havior is not supported in vLLM (Kwon et al., 2023), we treat Gemma-2 as if its context limit is 4k tokens. tasks compared to common sense reasoning, which aligns with our in-distribution results. Finally, these results suggest we tend to underestimate the perfor- mance when C is small, and slightly overestimate performance when C is large. 4.2 Generalization along npmt In order to measure how well our scaling laws generalize to longer contexts, we refit our scaling curves, this time holding out observations where the context length exceeds 10,000 tokens. Figure 3 displays contours of our fits at C = 7.8 × 1022 and C = 1.5 × 1023 for nctx = 128k for each task. Again, we see strong generalization along npmt, achieving prediction errors of just 0.017, 0.067, and 0.006 across the held-out observations on arith- metic reasoning, common sense reasoning, and machine translation, respectively. These low er- ror rates across diverse tasks demonstrate that our joint scaling framework can reliably extrapolate to longer context lengths, making it particularly suitable for long-context LLM design. Interestingly, on common sense reasoning and machine translation tasks, we observe that P is inversely proportional to nctx for some fixed npmt. That is, as we extend the context, performance slightly worsens. We hypothesize that this decline is not due to an intrinsic scaling trend but rather be- cause the training mix used to extend the context is 6 misaligned with these tasks. For example, our train- ing mix is sourced from PG-19 (Rae et al., 2020), which includes predominantly English text, so it’s unsurprising that machine translation performance worsens with increased training. 4.3 Does the choice of context extension technique matter? A number of different techniques have been pro- posed for extending the context length of a model with rotary positional embeddings (Chen et al., 2023; Peng et al., 2024; Xiong et al., 2024). It’s nat- ural to wonder how sensitive our fit scaling curves are to one’s choice of context extension technique. To test this, we evaluate Together’s Llama-2-7B model (Together.ai, 2023) extended to 32k con- text via positional interpolation (Chen et al., 2023). We evaluate this model at its context limit of 32k tokens across our 3 tasks and compute the predic- tion error for each, that is, the difference between the observed performance P and the predicted per- formance ˆP. We compare against the prediction error on the Llama-2-7B checkpoint extended to nctx = 32k via YaRN (Peng et al., 2024). It’s worth noting that the training mix and quantity are different between these two models; Together’s was trained on 1.5B tokens of a diverse data mix, while we follow Peng et al. (2024) and train on just 0.836B tokens from PG-19 (Rae et al., 2020). Still, our compute estimates for both models are sufficiently similar (7.777 × 1022 vs 7.775 × 1022 FLOPs, respectively). Table 6 lists the results. In general, the prediction errors we observe on Together’s Llama-2-7B model extended via positional interpolation are similar to the prediction errors we observe on our Llama-2- 7B model extended via YaRN. These results sug- gest that the choice of context extension technique has little impact on the scaling properties of down- stream performance. 4.4 Ablation over the penalty term To quantify the impact of our sigmoid penalty for prompt lengths exceeding the model’s context limit, we fit Eq. (1) on the arithmetic reasoning task with and without the penalty term. Table 7 reports the resulting prediction errors. We observe that with- out the penalty term, the fit underestimates per- formance when npmt ≤nctx and overestimates performance when npmt > nctx, confirming the importance of the penalty term. 5 Related Work Hestness et al. (2017) and Kaplan et al. (2020) introduce scaling laws which describe the relation- ship between upstream model performance (e.g., cross-entropy loss) and model design features (e.g., the number of model parameters, the size of the training dataset, or the total amount of training compute). Henighan et al. (2020) extends this anal- ysis to other types of autoregressive models (e.g., generative image and video modeling). Hoffmann et al. (2022) and OpenAI et al. (2024) describe the use of scaling laws to train compute-optimal LLMs, and Caballero et al. (2023) introduces a form of smoothly broken neural scaling laws to better cap- ture non-monotonic scaling. Several works have focused on scaling laws for predicting downstream performance. Wei et al. (2022) and Hu et al. (2024) focus on predicting abilities that “emerge” in LLMs when trained on enough compute. Isik et al. (2024) explores scal- ing laws for transfer learning on machine trans- lation tasks, while Schaeffer et al. (2025) studies scaling laws for downstream multiple-choice tasks. Other works have employed a collaborative ap- proach and source performance data from public benchmarks to better generalize across different model families (Zhang et al., 2024; Ruan et al., 2024; Polo et al., 2025; Gadre et al., 2024). Chen et al. (2024) and Ruan et al. (2024) employ a two-stage approach, using an intermediary (e.g., upstream loss) for predicting downstream perfor- mance. Both Owen (2024) and Ye et al. (2023) aim to predict aggregate performance on bench- marks such as BIG-Bench (Srivastava et al., 2023). Comparatively, this work introduces a dependence on the context length and suggests that you can predict downstream performance and obtain strong generalization (even across model families) with a straightforward, interpretable functional form. Context refers to the information provided to a model at inference time, such as few-shot demonstrations (Brown et al., 2020), retrieved evidence (Lewis et al., 2020), or task instruc- tions (Crispino et al., 2024). Though context shapes performance by conditioning on structured or un- structured information, little scaling analysis has been conducted on the role of context. Both Ka- plan et al. (2020) and Caballero et al. (2023) briefly explore the scaling of upstream performance as it relates to context length. Xiong et al. (2024) ex- tends the context limit of Llama-2 and finds that 7 Model C npmt PAR −ˆPAR PCSR −ˆPCSR PMT −ˆPMT Llama-2-7B (PI) 7.777 × 1022 32k +0.014 +0.079 -0.005 Llama-2-7B (YaRN) 7.775 × 1022 32k +0.005 +0.014 -0.005 Table 6: Generalization of fit on test models for arithmetic reasoning (AR), common sense reasoning (CSR), and machine translation (MT) at nctx = 32k. |P −ˆP|npmt≤nctx |P −ˆP|npmt>nctx |P −ˆP| With penalty term 0.010 0.014 0.010 Without penalty term 0.019 0.104 0.029 Table 7: Prediction errors on the arithmetic reasoning task, with and without the sigmoid penalty term. validation loss scales as a power law in the context length, but stops short of exploring the relation- ship between downstream performance and context length. Caballero et al. (2023) and Brown et al. (2020) explore the diminishing returns of increas- ing the number of in-context demonstrations. To the best of our knowledge, our work is the first to explicitly focus on the scaling relationship be- tween downstream performance and context length, and the first attempt to unify the understanding of scaling with respect to both context and compute. The ability of an LLM to extrapolate to longer sequences depends heavily on its positional en- codings. While some positional encoding tech- niques (e.g., ALiBi (Press et al., 2022)) offer lim- ited length extrapolation, other common techniques (e.g., RoPE (Su et al., 2024)) don’t. As a result, a number of techniques to efficiently extend the context window of LLMs have been proposed. Some techniques offer training-free context ex- tension, typically by adjusting the attention mech- anism itself. Jin et al. (2024) leverages a bi- level attention mechanism, applying standard self- attention to adjacent tokens and grouped attention for distant tokens. InfLLM is a memory-based tech- nique that integrates sliding-window attention with block-level context memory (Xiao et al., 2024). Similarly, LM-Infinite employs a Λ-shaped atten- tion mask, effectively masking attention over to- kens in the middle, and restricts the maximum positional difference between any two tokens to the maximum sequence length seen during pre- training (Han et al., 2024). On the other hand, An et al. (2024) introduces dual-chunk attention, which decomposes the attention computation into chunk- based modules to better capture the relative posi- tional information between distant tokens. Additionally, a number of techniques have been proposed that focus on rescaling the positional en- codings. Concurrently, Chen et al. (2023) and kaio- kendev (2023) introduced position interpolation, which extends the context window by linearly in- terpolating the position indices to be within the pre- trained context limit. Xiong et al. (2024) proposes decreasing the rotational angle (base frequency) of RoPE to prevent the relative positional infor- mation from decaying. Building on this, NTK- aware interpolation (bloc97, 2023b) adjusts the scaling for each RoPE dimension based on its frequency, thereby mitigating the loss of high- frequency details. bloc97 (2023a) introduces NTK- by-parts interpolation, which selectively interpo- lates lower-frequency dimensions while preserving higher-frequency components to maintain local rel- ative positioning. YaRN (Peng et al., 2024) com- bines NTK-by-parts with a mechanism to rescale the logits in the attention softmax to further im- prove performance on long sequences. In this work, we utilize YaRN to extend the context limit of the Llama-2 models due to its high compute efficiency and strong empirical results compared to other tech- niques. 6 Conclusion In this work, we introduce a straightforward, in- terpretable framework that jointly models down- stream performance as a function of the training compute and the provided context. Extensive exper- iments on arithmetic reasoning, common-sense rea- soning, and machine translation tasks demonstrate that our framework not only fits the in-distribution performance accurately but also generalizes well across 3 orders of magnitude in the amount of non- embedding training compute C, 4 orders of mag- nitude in the amount of input context length, and even to other context-extension techniques. These 8 findings reveal that downstream performance ben- efits from increased compute and longer, relevant context, but only up to a saturation point. Our work thus provides actionable insights for design- ing more effective long-context LLMs and bridges the gap between upstream scaling metrics and real- world task performance. Acknowledgments This work was supported in part by a research gift from Google. Limitations While our proposed context-aware scaling frame- work provides an interpretable approach to mod- eling downstream performance, it does come with limitations. Specifically, our formulation relies on a set of assumptions (e.g., performance scales with training compute and context) that may not hold under extreme scaling regimes or in the pres- ence of adversarial attacks like many-shot jailbreak- ing (Anil et al., 2024). Moreover, factors such as the pre-training data mix, post-training and align- ment, and architectural choices, which can all in- fluence downstream model performance, are not explicitly accounted for. However, these factors likely affect the optimal parameters of a fit without necessarily changing the structure of Eq. (1). For example, post-training alignment (e.g., instruction tuning) might improve a model’s zero-shot perfor- mance, resulting in a higher value for the parameter A compared to a non-aligned base model. Future work could investigate how these factors and others influence the identified parameters, enhancing the framework’s predictive power while retaining its interpretable form. Lastly, our scaling curves are fit to a narrow range of training compute, and may fail to generalize well to LLMs trained on an amount of compute that extends far beyond this range. References Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong. 2024. Training-free long-context scaling of large language models. In Proceedings of the 41st Interna- tional Conference on Machine Learning, ICML’24. JMLR.org. Cem Anil, Esin DURMUS, Nina Panickssery, Mrinank Sharma, Joe Benton, Sandipan Kundu, Joshua Bat- son, Meg Tong, Jesse Mu, Daniel Ford, Francesco Mosconi, Rajashree Agrawal, Rylan Schaeffer, Naomi Bashkansky, Samuel Svenningsen, Mike Lam- bert, Ansh Radhakrishnan, Carson Denison, Evan Hubinger, Yuntao Bai, Trenton Bricken, Timothy Maxwell, Nicholas Schiefer, James Sully, Alex Tamkin, Tamera Lanham, Karina Nguyen, Tomek Ko- rbak, Jared Kaplan, Deep Ganguli, Samuel Bowman, Ethan Perez, Roger B Grosse, and David K Duve- naud. 2024. Many-shot jailbreaking. In Advances in Neural Information Processing Systems, volume 37, pages 129696–129742. Curran Associates, Inc. Yonatan Bisk, Rowan Zellers, Ronan Le bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. Proceed- ings of the AAAI Conference on Artificial Intelligence, 34(05):7432–7439. bloc97. 2023a. Add NTK-Aware interpolation "by parts" correction. bloc97. 2023b. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degra- dation. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint- Amand, Radu Soricut, Lucia Specia, and Ale s Tam- chyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA. Associa- tion for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Ad- vances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. 2023. Broken neural scaling laws. In ICLR 2023 Workshop on Mathematical and Empirical Un- derstanding of Foundation Models. Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023. Extending context window of large language models via positional interpolation. Preprint, arXiv:2306.15595. Yangyi Chen, Binxuan Huang, Yifan Gao, Zhengyang Wang, Jingfeng Yang, and Heng Ji. 2024. Scaling laws for predicting downstream performance in llms. Preprint, arXiv:2410.08527. 9 Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. Preprint, arXiv:1803.05457. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word prob- lems. Preprint, arXiv:2110.14168. Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, and Chenguang Wang. 2024. Agent in- structs large language models to be general zero-shot reasoners. In Proceedings of the 41st International Conference on Machine Learning, pages 9458–9549. Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness. In Advances in Neural Information Processing Sys- tems, volume 35, pages 16344–16359. Curran Asso- ciates, Inc. Feyza Duman Keles, Pruthuvi Mahesakya Wijewardena, and Chinmay Hegde. 2023. On the computational complexity of self-attention. In Proceedings of The 34th International Conference on Algorithmic Learn- ing Theory, volume 201 of Proceedings of Machine Learning Research, pages 597–619. PMLR. Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar, Suchin Gururangan, Mitchell Wortsman, Rulin Shao, Jean Mercat, Alex Fang, Jeffrey Li, Sedrick Keh, Rui Xin, Marianna Nezhurina, Igor Vasiljevic, Jenia Jitsev, Luca Soldaini, Alexandros G. Dimakis, Gabriel Ilharco, Pang Wei Koh, Shuran Song, Thomas Kollar, Yair Carmon, Achal Dave, Reinhard Heckel, Niklas Muennighoff, and Ludwig Schmidt. 2024. Language models scale reliably with over-training and on downstream tasks. Preprint, arXiv:2403.08540. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schel- ten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mi- tra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Ro- driguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis An- derson, Govind Thattai, Graeme Nail, Gregoire Mi- alon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Is- han Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Jun- teng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kam- badur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Niko- lay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Va- sic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ron- nie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sa- hana Chennabasappa, Sanjay Singh, Sean Bell, Seo- hyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sha- ran Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Van- denhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Syd- ney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Vir- ginie Do, Vish Vogeti, Vítor Albiero, Vladan Petro- vic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whit- ney Meers, Xavier Martinet, Xiaodong Wang, Xi- aofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xin- feng Xie, Xuchao Jia, Xuewei Wang, Yaelle Gold- schlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Sri- vastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit San- gani, Amos Teo, Anam Yunus, Andrei Lupu, An- dres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchan- dani, Annie Dong, Annie Franco, Anuj Goyal, Apara- 10 jita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yaz- dan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Han- cock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, Ching- Hsiang Chu, Chris Cai, Chris Tindal, Christoph Fe- ichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Este- ban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanaz- eri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry As- pegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jen- nifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan Mc- Phie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khan- delwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Ki- ran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrst- edt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Pa- tel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pe- dro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lind- say, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wen- wen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783. Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. LM- infinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991–4008, Mexico City, Mexico. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In Thirty- fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schul- man, Dario Amodei, and Sam McCandlish. 2020. Scaling laws for autoregressive generative modeling. Preprint, arXiv:2010.14701. Joel Hestness, Sharan Narang, Newsha Ardalani, Gre- gory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. 2017. Deep learning scaling is predictable, empiri- cally. Preprint, arXiv:1712.00409. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan 11 Damoc, Aurelia Guy, Simon Osindero, Karen Si- monyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Preprint, arXiv:2203.15556. Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shan- tanu Acharya, Dima Rekesh, Fei Jia, and Boris Gins- burg. 2024. RULER: What’s the real context size of your long-context language models? In First Confer- ence on Language Modeling. Shengding Hu, Xin Liu, Xu Han, Xinrong Zhang, Chao- qun He, Weilin Zhao, Yankai Lin, Ning Ding, Ze- bin Ou, Guoyang Zeng, Zhiyuan Liu, and Maosong Sun. 2024. Predicting emergent abilities with infinite resolution evaluation. In The Twelfth International Conference on Learning Representations. Berivan Isik, Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, and Sanmi Koyejo. 2024. Scaling laws for downstream task performance of large language models. Preprint, arXiv:2402.04177. Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, and Xia Hu. 2024. LLM maybe LongLM: SelfEx- tend LLM context window without tuning. In Pro- ceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 22099–22114. PMLR. kaiokendev. 2023. Things I’m learning while training superhot. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Effi- cient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in neu- ral information processing systems, 33:9459–9474. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun- som. 2017. Program induction by rationale genera- tion: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, Vancouver, Canada. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Confer- ence on Learning Representations. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics. OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Ale- man, Diogo Almeida, Janko Altenschmidt, Sam Alt- man, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haim- ing Bao, Mohammad Bavarian, Jeff Belgum, Ir- wan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brock- man, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Ful- ford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo- Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Hee- woo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Ka- mali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirch- ner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Kon- stantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, 12 Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambat- tista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perel- man, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Poko- rny, Michelle Pokrass, Vitchyr H. Pong, Tolly Pow- ell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ry- der, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Fe- lipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Fe- lipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Ji- ayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qim- ing Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Bar- ret Zoph. 2024. Gpt-4 technical report. Preprint, arXiv:2303.08774. David Owen. 2024. How predictable is language model benchmark performance? Preprint, arXiv:2401.04757. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, ACL ’02, page 311–318, USA. Association for Computational Linguistics. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2024. YaRN: Efficient context window ex- tension of large language models. In The Twelfth International Conference on Learning Representa- tions. Felipe Maia Polo, Seamus Somerstep, Leshem Choshen, Yuekai Sun, and Mikhail Yurochkin. 2025. Sloth: scaling laws for llm skills to predict multi- benchmark performance across families. Preprint, arXiv:2412.06540. Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Confer- ence on Learning Representations. Jack W. Rae, Anna Potapenko, Siddhant M. Jayaku- mar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations. Yangjun Ruan, Chris J. Maddison, and Tatsunori Hashimoto. 2024. Observational scaling laws and the predictability of language model performance. Preprint, arXiv:2405.10938. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhaga- vatula, and Yejin Choi. 2020. Winogrande: An ad- versarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732–8740. AAAI Press. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Com- monsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4463– 4473, Hong Kong, China. Association for Computa- tional Linguistics. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical rea- soning abilities of neural models. In International Conference on Learning Representations. Rylan Schaeffer, Hailey Schoelkopf, Brando Miranda, Gabriel Mukobi, Varun Madan, Adam Ibrahim, Her- bie Bradley, Stella Biderman, and Sanmi Koyejo. 2025. Why has predicting downstream capabilities of frontier ai models with scale remained elusive? Preprint, arXiv:2406.04391. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Par- rish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, An- drew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabas- sum, Arul Menezes, Arun Kirubarajan, Asher Mul- lokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karaka¸s, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Cather- ine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, 13 Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Free- man, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Do- han, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, El- lie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice En- gefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Ger- mán Kruszewski, Giambattista Parascandolo, Gior- gio Mariani, Gloria Wang, Gonzalo Jaimovitch- López, Gregor Betz, Guy Gur-Ari, Hana Galijase- vic, Hannah Kim, Hannah Rashkin, Hannaneh Ha- jishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jae- hoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Koco´n, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Ji- aming Song, Jillian Tang, Joan Waweru, John Bur- den, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gim- pel, Kevin Omondi, Kory Mathewson, Kristen Chi- afullo, Ksenia Shkaruta, Kumar Shridhar, Kyle Mc- Donell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras- Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem ¸Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schu- bert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Co- hen, Michael Gu, Michael Ivanitskiy, Michael Star- ritt, Michael Strube, Michał Sw˛edrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhut- dinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Moham- mad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bow- man, Samuel S. Schoenholz, Sanghyun Han, San- jeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixi- ang Shane Gu, Shubh Pachchigar, Shubham Tosh- niwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas De- haene, Stefan Divic, Stefano Ermon, Stella Bider- man, Stephanie Lin, Stephen Prasad, Steven T. Pi- antadosi, Stuart M. Shieber, Summer Misherghi, Svet- lana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Ger- stenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmaku- mar, Vivek Srikumar, William Fedus, William Saun- ders, William Zhang, Wout Vossen, Xiang Ren, Xi- aoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the imitation game: Quantifying and extrap- olating the capabilities of language models. Preprint, arXiv:2206.04615. Rainer Storn and Kenneth Price. 1997. Differential evo- lution: A simple and efficient heuristic for global op- 14 timization over continuous spaces. Journal of Global Optimization, 11:341–359. Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: En- hanced transformer with rotary position embedding. Neurocomputing, 568:127063. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics. Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupati- raju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Wal- ton, Aliaksei Severyn, Alicia Parrish, Aliya Ah- mad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, An- thony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Wein- berger, Dimple Vijaykumar, Dominika Rogozi´nska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Elty- shev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Pluci´nska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svens- son, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fer- nandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mo- hamed, Kartikeya Badola, Kat Black, Katie Mil- lican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjoesund, Lau- ren Usui, Laurent Sifre, Lena Heuermann, Leti- cia Lago, Lilly McNealus, Livio Baldini Soares, Logan Kilpatrick, Lucas Dixon, Luciano Martins, Machel Reid, Manvinder Singh, Mark Iverson, Mar- tin Görner, Mat Velloso, Mateo Wirth, Matt Davi- dow, Matt Miller, Matthew Rahtz, Matthew Watson, Meg Risdal, Mehran Kazemi, Michael Moynihan, Ming Zhang, Minsuk Kahng, Minwoo Park, Mofi Rahman, Mohit Khatwani, Natalie Dao, Nenshad Bardoliwalla, Nesh Devanathan, Neta Dumai, Nilay Chauhan, Oscar Wahltinez, Pankil Botarda, Parker Barnes, Paul Barham, Paul Michel, Pengchong Jin, Petko Georgiev, Phil Culliton, Pradeep Kup- pala, Ramona Comanescu, Ramona Merhej, Reena Jana, Reza Ardeshir Rokni, Rishabh Agarwal, Ryan Mullins, Samaneh Saadat, Sara Mc Carthy, Sarah Cogan, Sarah Perrin, Sébastien M. R. Arnold, Se- bastian Krause, Shengyang Dai, Shruti Garg, Shruti Sheth, Sue Ronstrom, Susan Chan, Timothy Jor- dan, Ting Yu, Tom Eccles, Tom Hennigan, Tomas Kocisky, Tulsee Doshi, Vihan Jain, Vikas Yadav, Vilobh Meshram, Vishal Dharmadhikari, Warren Barkley, Wei Wei, Wenming Ye, Woohyun Han, Woosuk Kwon, Xiang Xu, Zhe Shen, Zhitao Gong, Zichuan Wei, Victor Cotruta, Phoebe Kirk, Anand Rao, Minh Giang, Ludovic Peran, Tris Warkentin, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, D. Sculley, Jeanine Banks, Anca Dragan, Slav Petrov, Oriol Vinyals, Jeff Dean, Demis Hass- abis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Sebastian Borgeaud, Noah Fiedel, Ar- mand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. 2024. Gemma 2: Improving open language models at a practical size. Preprint, arXiv:2408.00118. Together.ai. 2023. LLaMA-2-7B-32K. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine- tuned chat models. Preprint, arXiv:2307.09288. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emer- gent abilities of large language models. Transactions on Machine Learning Research. Survey Certifica- tion. Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2024. Infllm: Training-free long- context extrapolation for llms with an efficient con- text memory. In Advances in Neural Information Pro- cessing Systems, volume 37, pages 119638–119661. Curran Associates, Inc. 15 Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, and Hao Ma. 2024. Effective long-context scaling of founda- tion models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies (Volume 1: Long Papers), pages 4643–4663, Mexico City, Mexico. Association for Computational Linguistics. Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115. Qinyuan Ye, Harvey Fu, Xiang Ren, and Robin Jia. 2023. How predictable are large language model capabilities? a case study on BIG-bench. In Find- ings of the Association for Computational Linguis- tics: EMNLP 2023, pages 7493–7517, Singapore. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a ma- chine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics. Qiyuan Zhang, Fuyuan Lyu, Xue Liu, and Chen Ma. 2024. Collaborative performance prediction for large language models. In Proceedings of the 2024 Con- ference on Empirical Methods in Natural Language Processing, pages 2576–2596, Miami, Florida, USA. Association for Computational Linguistics. 16 A Dataset Details GSM8K (Cobbe et al., 2021) We filter out in- stances over 256 tokens in length, and select 511 training instances and 250 testing instances at ran- dom. During inference, we allow up to 400 new tokens. The average token lengths of the train- ing and testing instances were 177.64 and 177.43 respectively. The generated responses averaged around 172.13 tokens in length. To evaluate, we extract the model’s final answer and compare it with the reference answer, checking for numerical equivalence. MATH (Hendrycks et al., 2021) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at ran- dom. During inference, we allow up to 400 new tokens. The average token lengths of the training and testing instances were 160.54 and 155.74 re- spectively. The generated responses also averaged around 184.0 tokens in length. To evaluate, we extract the model’s final answer and compare it with the reference answer, checking for numerical equivalence. AQUA-RAT (Ling et al., 2017) We filter out in- stances over 256 tokens in length, and select 511 training instances and 250 testing instances at ran- dom. We allow up to 5 new tokens during genera- tion. The average token lengths of the training and testing instances were 88.45 and 93.09 respectively. The generated responses also averaged around 3.44 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. DeepMind Math (Saxton et al., 2019) The dataset is categorized into 56 subsets. We filter out instances over 256 tokens in length, and select 511 training instances and 50 testing instances at random from each subset. We allow up to 400 new tokens during generation. The average to- ken lengths of the training and testing instances were 57.94 and 61.05 respectively. The generated responses also averaged around 85.71 tokens in length. To evaluate, we extract the model’s final answer and compare it with the reference answer, checking for numerical equivalence. PIQA (Bisk et al., 2020) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 81.16 and 81.55 respectively. The generated responses also averaged around 3.46 to- kens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. OpenBookQA (Mihaylov et al., 2018) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during gener- ation. The average token lengths of the training and testing instances were 47.74 and 49.39 respectively. The generated responses also averaged around 3.3 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. SIQA (Sap et al., 2019) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 1 new token during generation. The average token lengths of the training and testing instances were 56.68 and 56.87 respectively. The generated responses also averaged around 3.35 to- kens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. HellaSwag (Zellers et al., 2019) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at ran- dom. We allow up to 5 new tokens during gener- ation. The average token lengths of the training and testing instances were 153.06 and 156.05 re- spectively. The generated responses also averaged around 3.67 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. WinoGrande (Sakaguchi et al., 2020) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during gener- ation. The average token lengths of the training and testing instances were 53.98 and 53.87 respectively. The generated responses also averaged around 3.33 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. ARC Easy (Clark et al., 2018) We filter out in- stances over 256 tokens in length, and select 511 training instances and 250 testing instances at ran- 17 dom. We allow up to 5 new tokens during genera- tion. The average token lengths of the training and testing instances were 66.69 and 67.14 respectively. The generated responses also averaged around 3.46 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. ARC Challenge (Clark et al., 2018) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during gener- ation. The average token lengths of the training and testing instances were 75.65 and 76.83 respectively. The generated responses also averaged around 3.43 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. CommenSenseQA (Talmor et al., 2019) We fil- ter out instances over 256 tokens in length, and 250 select 511 training instances and testing in- stances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 50.42 and 49.92 respectively. The generated responses also averaged around 1.0 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. WMT14 (CS-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens dur- ing generation. The average token lengths of the training and testing instances were 95.01 and 85.25 respectively. The generated responses also aver- aged around 77.77 tokens in length. We use BLEU- 4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (DE-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens dur- ing generation. The average token lengths of the training and testing instances were 85.53 and 77.68 respectively. The generated responses also aver- aged around 77.77 tokens in length. We use BLEU- 4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (FR-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens dur- ing generation. The average token lengths of the training and testing instances were 95.94 and 84.29 respectively. The generated responses also aver- aged around 78.73 tokens in length. We use BLEU- 4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (HI-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens during generation. The average token lengths of the train- ing and testing instances were 34.01 and 147.09 respectively. The generated responses also aver- aged around 53.11 tokens in length. We use BLEU- 4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (RU-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens dur- ing generation. The average token lengths of the training and testing instances were 73.54 and 86.56 respectively. The generated responses also aver- aged around 77.24 tokens in length. We use BLEU- 4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. B Full Results In this section, we present full aggregate results in Tables 8, 9, and 10 for arithmetic reasoning, common sense reasoning, and machine translation respectively. Figures 4, 5,and 6 provide contours of our fits at C = 7.8 × 1022 and C = 1.5 × 1023. 18 k 0 shots 1 shot 3 shots 7 shots 15 shots 31 shots 63 shots 127 shots 255 shots 511 shots Llama-2-7b-hf 0.089 0.099 0.115 0.120 0.136 0.127 0.094 0.014 0.014 0.000 Yarn-Llama-2-7b-8k 0.076 0.097 0.109 0.117 0.134 0.131 0.137 0.071 0.000 0.000 Yarn-Llama-2-7b-16k 0.072 0.095 0.109 0.116 0.130 0.133 0.143 0.139 0.073 0.002 Yarn-Llama-2-7b-32k 0.069 0.092 0.104 0.113 0.127 0.127 0.135 0.134 0.143 0.076 Yarn-Llama-2-7b-64k 0.057 0.094 0.108 0.115 0.132 0.128 0.143 0.140 0.150 0.138 Yarn-Llama-2-7b-128k 0.049 0.091 0.106 0.113 0.129 0.126 0.136 0.135 0.149 0.140 Llama-2-13b-hf 0.088 0.115 0.131 0.137 0.148 0.141 0.092 0.011 0.005 0.000 Yarn-Llama-2-13b-8k 0.086 0.110 0.126 0.132 0.146 0.149 0.151 0.082 0.000 0.000 Yarn-Llama-2-13b-16k 0.081 0.110 0.135 0.146 0.153 0.163 0.172 0.145 0.077 0.010 Yarn-Llama-2-13b-32k 0.077 0.111 0.129 0.145 0.154 0.162 0.171 0.169 0.134 0.065 Yarn-Llama-2-13b-64k 0.073 0.106 0.130 0.146 0.156 0.158 0.169 0.167 0.159 0.136 Yarn-Llama-2-13b-128k 0.069 0.108 0.123 0.138 0.157 0.157 0.174 0.165 0.163 0.153 Table 8: Accuracy on arithmetic reasoning, aggregated over every instance in the task. k 0 shots 1 shot 3 shots 7 shots 15 shots 31 shots 63 shots 127 shots 255 shots 511 shots Llama-2-7b-hf 0.376 0.489 0.518 0.536 0.536 0.527 0.302 0.000 0.000 0.000 Yarn-Llama-2-7b-8k 0.356 0.476 0.518 0.530 0.530 0.523 0.491 0.278 0.000 0.000 Yarn-Llama-2-7b-16k 0.342 0.468 0.508 0.522 0.532 0.519 0.521 0.486 0.264 0.000 Yarn-Llama-2-7b-32k 0.325 0.459 0.500 0.496 0.522 0.508 0.501 0.534 0.457 0.276 Yarn-Llama-2-7b-64k 0.346 0.456 0.503 0.513 0.502 0.498 0.490 0.515 0.470 0.458 Yarn-Llama-2-7b-128k 0.338 0.450 0.496 0.490 0.504 0.486 0.490 0.507 0.465 0.486 Llama-2-13b-hf 0.453 0.604 0.649 0.660 0.659 0.600 0.344 0.000 0.000 0.000 Yarn-Llama-2-13b-8k 0.469 0.610 0.662 0.656 0.675 0.652 0.603 0.318 0.000 0.000 Yarn-Llama-2-13b-16k 0.464 0.594 0.656 0.654 0.658 0.650 0.658 0.584 0.308 0.000 Yarn-Llama-2-13b-32k 0.432 0.586 0.642 0.640 0.642 0.642 0.646 0.626 0.567 0.322 Yarn-Llama-2-13b-64k 0.481 0.589 0.642 0.636 0.638 0.634 0.645 0.620 0.614 0.582 Yarn-Llama-2-13b-128k 0.480 0.578 0.636 0.632 0.628 0.630 0.634 0.616 0.609 0.612 Table 9: Accuracy on Commonsense Reasoning tasks, aggregated over every instance in the task. k 0 shots 1 shot 3 shots 7 shots 15 shots 31 shots 63 shots 127 shots 255 shots 511 shots Llama-2-7b-hf 0.031 0.147 0.155 0.154 0.156 0.159 0.049 0.011 0.000 0.000 Yarn-Llama-2-7b-8k 0.034 0.142 0.155 0.146 0.151 0.152 0.152 0.010 0.006 0.000 Yarn-Llama-2-7b-16k 0.031 0.138 0.152 0.144 0.147 0.144 0.143 0.143 0.006 0.003 Yarn-Llama-2-7b-32k 0.026 0.138 0.147 0.141 0.143 0.142 0.141 0.140 0.146 0.005 Yarn-Llama-2-7b-64k 0.033 0.129 0.144 0.142 0.148 0.141 0.148 0.142 0.144 0.134 Yarn-Llama-2-7b-128k 0.040 0.125 0.140 0.140 0.144 0.142 0.145 0.136 0.136 0.130 Llama-2-13b-hf 0.023 0.166 0.175 0.180 0.181 0.175 0.058 0.015 0.000 0.000 Yarn-Llama-2-13b-8k 0.029 0.161 0.171 0.175 0.177 0.170 0.174 0.014 0.011 0.000 Yarn-Llama-2-13b-16k 0.025 0.157 0.170 0.171 0.176 0.166 0.173 0.166 0.010 0.005 Yarn-Llama-2-13b-32k 0.025 0.152 0.168 0.166 0.171 0.168 0.164 0.156 0.160 0.007 Yarn-Llama-2-13b-64k 0.066 0.152 0.162 0.163 0.166 0.169 0.163 0.160 0.162 0.160 Yarn-Llama-2-13b-128k 0.101 0.145 0.155 0.162 0.163 0.163 0.163 0.154 0.157 0.154 Table 10: Accuracy on Machine Translation tasks, aggregated over every instance in the task. 19 Figure 4: Contours of our fit at C = 7.8 × 1022 (left) and C = 1.5 × 1023 (right) for the arithmetic reasoning task. Figure 5: Contours of our fit at C = 7.8 × 1022 (left) and C = 1.5 × 1023 (right) for the common sense reasoning task. Figure 6: Contours of our fit at C = 7.8 × 1022 (left) and C = 1.5 × 1023 (right) for the machine translation task. 20
Predicting Task Performance with Context-aware Scaling Laws Kyle Montgomery1*, David Park2*, Jianhong Tu1, Michael Bendersky3, Beliz Gunel4, Dawn Song5, Chenguang Wang1† 1 UC Santa Cruz, 2 Washington University in St. Louis, 3Databricks, 4Google DeepMind, 5UC Berkeley {kylemontgomery, Abstract Scaling laws have transformed our understanding of large language models by linking upstream metrics like cross-entropy loss to design factors such as model size, training data, and compute. However, these conventional laws fail to capture downstream task performance, where context plays a critical role. In this work, we propose a straightforward, interpretable framework that jointly models downstream performance as a function of the training compute and the provided context. We empirically validate our framework by fitting it on the observed downstream performance of extendedcontext variants of Llama-2-7B and Llama2-13B across 65,500 unique instances spanning three tasks: arithmetic reasoning, common sense reasoning, and machine translation. Our results demonstrate that our framework accurately models in-distribution downstream performance, generalizes across three orders of magnitude in training compute, and reliably extrapolates performance as the amount of context increases. These findings offer valuable insights into the interplay between training compute and context utilization, providing guidance for designing more efficient long-context LLMs for diverse downstream tasks. Our code is available at https://github.com/ wang-research-lab/context-scaling. 1 Introduction Neural scaling laws (Hestness et al., 2017; Kaplan et al., 2020), which describe how model performance scales with the number of model parameters, the size of the training dataset, or the amount of training compute, have shaped our understanding of how large language models (LLMs) (Brown et al., 2020; Touvron et al., 2023; Team et al., 2024; Grattafiori et al., 2024; OpenAI et al., 2024) improve with increased resources. These findings ∗Equal contribution. †Corresponding author. have guided the design and development of increasingly larger models, providing a blueprint to optimally scale up performance under a fixed compute budget (Hoffmann et al., 2022; OpenAI et al., 2024). While upstream metrics like cross-entropy loss serve as convenient proxies during model development, in real-world applications, downstream performance often diverges from these upstream trends (Wei et al., 2022; Hu et al., 2024). Accurate upfront performance estimates for downstream tasks can help guide model development and identify emergence or saturation on certain tasks with fewer costly experiments. Existing works on predicting downstream performance often rely on overly complicated, less interpretable methods. For instance, Chen et al. (2024) utilizes a two-stage approach using upstream loss as an intermediary, while Ye et al. (2023) fits a multi-layered perceptron to predict performance on BIG-Bench (Srivastava et al., 2023). In contrast, we propose a straightforward, interpretable framework that directly models the downstream performance of LLMs across a number of tasks. The key is to jointly model downstream performance as a function of the training compute and the provided context. Specifically, we develop a functional form (see Eq. (1)) which combines two saturating power-law terms (one in the amount of training compute and another in the amount of context) along with a penalty term to account for cases in which the context exceeds the model's context limit. This formulation is motivated by the intuition that downstream performance improves with increased training compute and longer, yet relevant, context until the benefits saturate or the context limit is exceeded. Figure 1 compares our fit to existing methods that do not consider context. We empirically validate our scaling framework by fitting it on the observed downstream performance of extended-context variants of Llama-2-7B 1 16 Oct 2025 102 103 104 105 npmt 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 Accuracy Observed Performance Our Fit Existing Work Figure 1: Existing approaches ignore the impact of context length and predict an average performance level regardless of the number of in-context demonstrations. In comparison, our context-aware fit closely tracks the observed performance as additional context is provided. and Llama-2-13B (Touvron et al., 2023; Peng et al., 2024) across 65,500 unique instances spanning three tasks: arithmetic reasoning, common sense reasoning, and machine translation. Our results demonstrate that our framework accurately predicts downstream performance for both Llama-2-7B and Llama-2-13B (Sec. 4). Furthermore, we find that our fits generalize well on held-out models spanning 3 orders of magnitude in training compute (Sec. 4.1). Similarly, we demonstrate that our fits generalize to longer contexts, even as the context exceeds a model's context limit (Sec. 4.2). Lastly, we show that our fits generalize across different context-extension techniques (Sec. 4.3). These findings offer valuable insights into the interplay between training compute and context utilization, providing guidance for designing more efficient long-context LLMs for diverse downstream tasks. Our main contributions are threefold: • We propose a framework that extends conventional neural scaling laws to downstream tasks by incorporating the context length and context limit, providing a more accurate model of LLM performance across varying context lengths. • We empirically fit this framework to Llama2 models with extended context windows across 3 tasks: arithmetic reasoning, common sense reasoning, and machine translation. We demonstrate the generality of our approach by showing that our scaling laws hold across 3 orders of magnitude in training compute, 4 orders of magnitude in context length, and across different context-extension techniques. • Our framework offers an interpretable tool for understanding the interplay between compute, context, and downstream performance, providing insights that can guide the design of future long-context LLMs. 2 Background Here, we introduce relevant preliminaries, including notation conventions and the process of extending the context window of the Llama-2 models (Touvron et al., 2023). 2.1 Notation We adopt the following notation: • P - aggregate performance on a downstream task. Occasionally, we'll use a subscript to denote the specific task (e.g., PMT for machine translation). • N - the number of model parameters, excluding embedding/de-embedding parameters. • D - the number of tokens in the training dataset. • C - the amount of non-embedding training compute. Following Kaplan et al. (2020), we estimate C ≈6N FLOPs per training token, or C ≈6ND FLOPs in total. • nctx - the context limit of a model in tokens, i.e., the maximum number of positional embeddings computed for any training sequence. Often, we quote numerical values using k to denote units of 1024 tokens. For example, a context limit of "128k" corresponds to 128 × 1024 = 131072 tokens. • npmt - the length (in tokens) of a given input query or context. For simplicity, npmt does not include generated/outputted tokens. 2.2 Extending Llama-2's Context Limit Because the complexity of the self-attention layers grows quadratically in the sequence length (Duman Keles et al., 2023; Dao et al., 2022), LLMs are commonly pre-trained on short sequences (e.g., 4k tokens) rather than long sequences (e.g., 128k 2 Base Model Non-embedding Params (N) Context Limit (nctx) Dataset Size (D) Training Compute (C) Llama-2-7B 6,476,271,616 4k 2.0T 7.7719 × 1022 8k 2.0T + 0.210B 7.7723 × 1022 16k 2.0T + 0.419B 7.7732 × 1022 32k 2.0T + 0.836B 7.7748 × 1022 64k 2.0T + 1.678B 7.7780 × 1022 128k 2.0T + 3.355B 7.7846 × 1022 Llama-2-13B 12,688,184,320 4k 2.0T 1.5227 × 1023 8k 2.0T + 0.210B 1.5227 × 1023 16k 2.0T + 0.419B 1.5229 × 1023 32k 2.0T + 0.836B 1.5232 × 1023 64k 2.0T + 1.678B 1.5239 × 1023 128k 2.0T + 3.355B 1.5251 × 1023 Table 1: The 12 checkpoints against which we fit scaling curves. The 4k variants are the official Llama-2-7B and Llama-2-13B checkpoints. The additional training tokens and compute from extending the context limit via YaRN (Peng et al., 2024) are factored into D and C. tokens). As a result, LLMs struggle to generalize to sequences longer than those seen during pretraining. Because we plan to explore how downstream performance varies with context length, Llama-2's original context limit of 4k tokens will not be sufficient. Fortunately, a number of techniques have been proposed that can extend the context window of LLMs for a fraction of the pretraining compute budget (Chen et al., 2023; Peng et al., 2024; Xiong et al., 2024). YaRN (Peng et al., 2024) is our method of choice for extending Llama 2's context limit. We selected YaRN due to its high compute efficiency and strong empirical results compared to other techniques. YaRN involves fine-tuning the pre-trained model for a limited number of steps on sequences exceeding the pre-trained LLM's context limit in order to increase the effective size of the LLM's context limit so that it may better model long sequences. We adopt the methodology from Peng et al. (2024) and fine-tune Llama-2-7B and Llama-213B (Touvron et al., 2023) for 400 steps with a global batch size of 64 on sequences of length n′ ctx (where n′ ctx > nctx) from the PG-19 corpus (Rae et al., 2020). We use the AdamW optimizer (Loshchilov and Hutter, 2019) with β1 = 0.9 and β2 = 0.95, and a learning rate of 2 × 10-5. We train variants of Llama-2-7B and Llama-2-13B with nctx ∈{8k, 16k, 32k}, and source checkpoints for nctx ∈{64k, 128k} from Peng et al. (2024). In order to validate the effectiveness of the context extension training, we evaluate the performance of our 12 Llama-2 models in Table 1 on RULER (Hsieh et al., 2024), a synthetic needlein-a-haystack benchmark developed to evaluate long-context LLMs. Specifically, we evaluate each model on 100 instances per length, for each of RULER's 13 tasks. Results are displayed in Table 2 and suggest that context extension via YaRN (Peng et al., 2024) is somewhat effective. Interestingly, models tend to underperform when evaluated at their extended context limit, suggesting that training with a context limit well beyond the target evaluation range can lead to improved performance within that desired range. 3 Method We posit that aggregate task performance P can be modeled as the product of two saturating power laws in C and npmt, with a sigmoid penalty term for when npmt > nctx. This form provides a good fit for a range of tasks, including arithmetic reasoning, common sense reasoning, and machine translation tasks. Formally, we model P as P(C, npmt, nctx) = Saturating term in C z }| { h 1 -exp -A C Cc α i (1) × h 1 -exp -B npmt nc pmt β i | {z } Saturating term in npmt × σ (npmt -nctx) | {z } Penalty term , where A, Cc, α, B, nc pmt, and β are parameters to be optimized. We select this form because we expect that the downstream performance P is proportional to diminishing terms in the amount of training compute C (which integrates both model size N and dataset size D) (Chen et al., 2024; Owen, 2024) and the context length (Brown et al., 2020; Caballero et al., 2023), assuming the context remains relevant as its 3 Model nctx npmt = 4k npmt = 8k npmt = 16k npmt = 32k npmt = 64k npmt = 128k Llama-2-7B 4k 0.822 0.000 0.000 0.000 0.000 0.000 8k 0.829 0.586 0.000 0.000 0.001 0.005 16k 0.795 0.58 0.378 0.000 0.000 0.002 32k 0.746 0.599 0.517 0.317 0.000 0.000 64k 0.794 0.647 0.593 0.530 0.225 0.000 128k 0.776 0.663 0.552 0.439 0.383 0.129 Llama-2-13B 4k 0.861 0.000 0.000 0.000 0.000 0.000 8k 0.870 0.625 0.000 0.000 0.000 0.000 16k 0.865 0.679 0.392 0.000 0.000 0.000 32k 0.848 0.727 0.622 0.378 0.000 0.000 64k 0.860 0.734 0.612 0.511 0.282 0.001 128k 0.819 0.684 0.586 0.484 0.447 0.163 Table 2: Accuracy of our extended Llama-2 models on RULER (Hsieh et al., 2024). length increases and npmt ≤nctx. We saturate these terms via exponentiation to ensure our predicted performance remains below the maximum theoretical performance of 1.0. The product form arises because compute and context are complementary, not additive; a significant lack in one dimension limits the benefit derived from the other. For example, providing more context is only beneficial to the extent that the model is capable of leveraging that additional context. We impose a sharp sigmoid penalty term because P is measured only on the generated tokens, and if npmt > nctx, then any generated tokens will fall beyond the range in which the model can make reliable predictions, meaning P degrades rapidly, especially on tasks that require extended and coherent generations (e.g., reasoning through a math word problem or translating an entire sentence). 3.1 Datasets We evaluate our 12 models in Table 1 on 65,500 instances of varying lengths that span 3 tasks: • Arithmetic reasoning We collect 3550 testing instances across GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), AQUA-RAT (Ling et al., 2017), and Deepmind Math (Saxton et al., 2019). Because the instances are rather short, we pack the context with up to 511 demonstrations sampled from the training splits of each dataset. • Common sense reasoning We sample 1750 testing instances across PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), OpenBookQA (Mihaylov et al., 2018), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2020), ARCEasy/Challenge (Clark et al., 2018), and CommonSenseQA (Talmor et al., 2019), and pack the context with up to 511 demonstrations from their respective training splits. • Machine translation We sample 250 translation instances from WMT-14 (Bojar et al., 2014) from each of German, French, Hindi, Czech, and Russian to English. As before, we pack the context with up to 511 demonstrations (of the same source language) and measure the BLEU-4 (Papineni et al., 2002) score of the generation against the reference translation. Additional details can be found in Appendix A. 3.2 Fitting Procedure For each task, we aggregate the results for each model by the context length, using the number of in-context demonstrations as a proxy for length. Within each group, we average over the context length and metric value for each instance. In doing so, we collect a number of records of the form (C, npmt, nctx, avg. metric value) on which we fit Eq. (1) for each of our 3 tasks. To fit the scaling curves, we use a two-stage optimization procedure that combines global search with local refinement. First, we use an outof-the-box global optimizer to perform a broad Parameter Lower Bound Upper Bound A 0 100 Cc 0 1030 α 0 10 B 0 100 nc pmt 0 131,072 β 0 10 Table 3: Upper and lower bounds on A, Cc, α, B, nc pmt, and β. 4 102 103 104 105 npmt 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 Accuracy (a) Arithmetic reasoning. 102 103 104 105 npmt 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Accuracy (b) Common sense reasoning. 102 103 104 105 npmt 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 Accuracy (c) Machine translation. Figure 2: Contours of fits at C = 7.8×1022 (red) and C = 1.5×1023 (blue) for nctx = 8k on three tasks: arithmetic reasoning (left), common sense reasoning (middle) and machine translation (right). Task A Cc α B nc pmt β Arithmetic reasoning 9.96 9.7 × 1029 0.26 62.24 1.3 × 105 0.56 Common sense reasoning 99.39 1.5 × 1028 0.40 96.31 3.5 × 103 1.12 Machine translation 5.55 5.4 × 1029 0.23 31.82 3.0 × 102 2.97 Table 4: Fits for P(C, npmt, nctx) on 3 downstream tasks: arithmetic reasoning, common sense reasoning, and machine translation. search over the parameter space. Specifically, we use SciPy's differential_evolution global optimization method, an evolutionary algorithm well suited for non-convex, non-linear optimization problems such as this (Storn and Price, 1997). We define finite upper and lower bounds for each parameter, informed by Kaplan et al. (2020) and Xiong et al. (2024). We use the same bounds across all tasks, which are listed in Table 3. Finally, we do a pass through a local optimizer (SciPy's curve_fit), using the estimate from the global optimizer as a starting point, to achieve a precise fit. 4 Empirical Results We model the aggregate performance P on each of our 3 tasks (arithmetic reasoning, common sense reasoning, and machine translation) using Eq. (1). Unless otherwise noted, scaling laws are fit on the results of all 12 Llama-2 models in Table 1 using the procedure outlined in Section 3.2. Table 4 includes the parameter values that we found to be optimal for each task. Contours of our fits at C = 7.8 × 1022 and C = 1.5 × 1023 for nctx = 8k are provided in Figure 2. Additional contours are provided in Appendix B. We report the mean absolute prediction error |P -ˆP|, which is the average of residuals (in absolute value). When discussing individual residuals, we'll often include the sign of the residual to indicate the direction (i.e., whether we're under- or over-predicting). On the arithmetic reasoning task, we achieve an excellent fit, with an average prediction error |P -ˆP| of just 0.010. Similarly, on common sense reasoning and machine translation, we observe average prediction errors of 0.037 and 0.007, respectively. Additionally, we model the behavior around the boundary condition at npmt = nctx surprisingly well. Our results confirm that P can be jointly determined by the training compute and context length. Increasing C corresponds to an increase in P, in effect shifting up the contour by some diminishing amount in C in the region where npmt nctx, confirming the importance of the penalty term. 5 Related Work Hestness et al. (2017) and Kaplan et al. (2020) introduce scaling laws which describe the relationship between upstream model performance (e.g., cross-entropy loss) and model design features (e.g., the number of model parameters, the size of the training dataset, or the total amount of training compute). Henighan et al. (2020) extends this analysis to other types of autoregressive models (e.g., generative image and video modeling). Hoffmann et al. (2022) and OpenAI et al. (2024) describe the use of scaling laws to train compute-optimal LLMs, and Caballero et al. (2023) introduces a form of smoothly broken neural scaling laws to better capture non-monotonic scaling. Several works have focused on scaling laws for predicting downstream performance. Wei et al. (2022) and Hu et al. (2024) focus on predicting abilities that "emerge" in LLMs when trained on enough compute. Isik et al. (2024) explores scaling laws for transfer learning on machine translation tasks, while Schaeffer et al. (2025) studies scaling laws for downstream multiple-choice tasks. Other works have employed a collaborative approach and source performance data from public benchmarks to better generalize across different model families (Zhang et al., 2024; Ruan et al., 2024; Polo et al., 2025; Gadre et al., 2024). Chen et al. (2024) and Ruan et al. (2024) employ a two-stage approach, using an intermediary (e.g., upstream loss) for predicting downstream performance. Both Owen (2024) and Ye et al. (2023) aim to predict aggregate performance on benchmarks such as BIG-Bench (Srivastava et al., 2023). Comparatively, this work introduces a dependence on the context length and suggests that you can predict downstream performance and obtain strong generalization (even across model families) with a straightforward, interpretable functional form. Context refers to the information provided to a model at inference time, such as few-shot demonstrations (Brown et al., 2020), retrieved evidence (Lewis et al., 2020), or task instructions (Crispino et al., 2024). Though context shapes performance by conditioning on structured or unstructured information, little scaling analysis has been conducted on the role of context. Both Kaplan et al. (2020) and Caballero et al. (2023) briefly explore the scaling of upstream performance as it relates to context length. Xiong et al. (2024) extends the context limit of Llama-2 and finds that 7 Model C npmt PAR -ˆPAR PCSR -ˆPCSR PMT -ˆPMT Llama-2-7B (PI) 7.777 × 1022 32k +0.014 +0.079 -0.005 Llama-2-7B (YaRN) 7.775 × 1022 32k +0.005 +0.014 -0.005 Table 6: Generalization of fit on test models for arithmetic reasoning (AR), common sense reasoning (CSR), and machine translation (MT) at nctx = 32k. |P -ˆP|npmt≤nctx |P -ˆP|npmt>nctx |P -ˆP| With penalty term 0.010 0.014 0.010 Without penalty term 0.019 0.104 0.029 Table 7: Prediction errors on the arithmetic reasoning task, with and without the sigmoid penalty term. validation loss scales as a power law in the context length, but stops short of exploring the relationship between downstream performance and context length. Caballero et al. (2023) and Brown et al. (2020) explore the diminishing returns of increasing the number of in-context demonstrations. To the best of our knowledge, our work is the first to explicitly focus on the scaling relationship between downstream performance and context length, and the first attempt to unify the understanding of scaling with respect to both context and compute. The ability of an LLM to extrapolate to longer sequences depends heavily on its positional encodings. While some positional encoding techniques (e.g., ALiBi (Press et al., 2022)) offer limited length extrapolation, other common techniques (e.g., RoPE (Su et al., 2024)) don't. As a result, a number of techniques to efficiently extend the context window of LLMs have been proposed. Some techniques offer training-free context extension, typically by adjusting the attention mechanism itself. Jin et al. (2024) leverages a bilevel attention mechanism, applying standard selfattention to adjacent tokens and grouped attention for distant tokens. InfLLM is a memory-based technique that integrates sliding-window attention with block-level context memory (Xiao et al., 2024). Similarly, LM-Infinite employs a Λ-shaped attention mask, effectively masking attention over tokens in the middle, and restricts the maximum positional difference between any two tokens to the maximum sequence length seen during pretraining (Han et al., 2024). On the other hand, An et al. (2024) introduces dual-chunk attention, which decomposes the attention computation into chunkbased modules to better capture the relative positional information between distant tokens. Additionally, a number of techniques have been proposed that focus on rescaling the positional encodings. Concurrently, Chen et al. (2023) and kaiokendev (2023) introduced position interpolation, which extends the context window by linearly interpolating the position indices to be within the pretrained context limit. Xiong et al. (2024) proposes decreasing the rotational angle (base frequency) of RoPE to prevent the relative positional information from decaying. Building on this, NTKaware interpolation (bloc97, 2023b) adjusts the scaling for each RoPE dimension based on its frequency, thereby mitigating the loss of highfrequency details. bloc97 (2023a) introduces NTKby-parts interpolation, which selectively interpolates lower-frequency dimensions while preserving higher-frequency components to maintain local relative positioning. YaRN (Peng et al., 2024) combines NTK-by-parts with a mechanism to rescale the logits in the attention softmax to further improve performance on long sequences. In this work, we utilize YaRN to extend the context limit of the Llama-2 models due to its high compute efficiency and strong empirical results compared to other techniques. 6 Conclusion In this work, we introduce a straightforward, interpretable framework that jointly models downstream performance as a function of the training compute and the provided context. Extensive experiments on arithmetic reasoning, common-sense reasoning, and machine translation tasks demonstrate that our framework not only fits the in-distribution performance accurately but also generalizes well across 3 orders of magnitude in the amount of nonembedding training compute C, 4 orders of magnitude in the amount of input context length, and even to other context-extension techniques. These 8 findings reveal that downstream performance benefits from increased compute and longer, relevant context, but only up to a saturation point. Our work thus provides actionable insights for designing more effective long-context LLMs and bridges the gap between upstream scaling metrics and realworld task performance. Acknowledgments This work was supported in part by a research gift from Google. Limitations While our proposed context-aware scaling framework provides an interpretable approach to modeling downstream performance, it does come with limitations. Specifically, our formulation relies on a set of assumptions (e.g., performance scales with training compute and context) that may not hold under extreme scaling regimes or in the presence of adversarial attacks like many-shot jailbreaking (Anil et al., 2024). Moreover, factors such as the pre-training data mix, post-training and alignment, and architectural choices, which can all influence downstream model performance, are not explicitly accounted for. However, these factors likely affect the optimal parameters of a fit without necessarily changing the structure of Eq. (1). For example, post-training alignment (e.g., instruction tuning) might improve a model's zero-shot performance, resulting in a higher value for the parameter A compared to a non-aligned base model. Future work could investigate how these factors and others influence the identified parameters, enhancing the framework's predictive power while retaining its interpretable form. Lastly, our scaling curves are fit to a narrow range of training compute, and may fail to generalize well to LLMs trained on an amount of compute that extends far beyond this range. References Chenxin An, Fei Huang, Jun Zhang, Shansan Gong, Xipeng Qiu, Chang Zhou, and Lingpeng Kong. 2024. Training-free long-context scaling of large language models. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org. Cem Anil, Esin DURMUS, Nina Panickssery, Mrinank Sharma, Joe Benton, Sandipan Kundu, Joshua Batson, Meg Tong, Jesse Mu, Daniel Ford, Francesco Mosconi, Rajashree Agrawal, Rylan Schaeffer, Naomi Bashkansky, Samuel Svenningsen, Mike Lambert, Ansh Radhakrishnan, Carson Denison, Evan Hubinger, Yuntao Bai, Trenton Bricken, Timothy Maxwell, Nicholas Schiefer, James Sully, Alex Tamkin, Tamera Lanham, Karina Nguyen, Tomek Korbak, Jared Kaplan, Deep Ganguli, Samuel Bowman, Ethan Perez, Roger B Grosse, and David K Duvenaud. 2024. Many-shot jailbreaking. In Advances in Neural Information Processing Systems, volume 37, pages 129696-129742. Curran Associates, Inc. Yonatan Bisk, Rowan Zellers, Ronan Le bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7432-7439. bloc97. 2023a. Add NTK-Aware interpolation "by parts" correction. bloc97. 2023b. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve SaintAmand, Radu Soricut, Lucia Specia, and Ale s Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12-58, Baltimore, Maryland, USA. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc. Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. 2023. Broken neural scaling laws. In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models. Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023. Extending context window of large language models via positional interpolation. Preprint, . Yangyi Chen, Binxuan Huang, Yifan Gao, Zhengyang Wang, Jingfeng Yang, and Heng Ji. 2024. Scaling laws for predicting downstream performance in llms. Preprint, . 9 Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. Preprint, . Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Preprint, . Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, and Chenguang Wang. 2024. Agent instructs large language models to be general zero-shot reasoners. In Proceedings of the 41st International Conference on Machine Learning, pages 9458-9549. Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. Flashattention: Fast and memory-efficient exact attention with io-awareness. In Advances in Neural Information Processing Systems, volume 35, pages 16344-16359. Curran Associates, Inc. Feyza Duman Keles, Pruthuvi Mahesakya Wijewardena, and Chinmay Hegde. 2023. On the computational complexity of self-attention. In Proceedings of The 34th International Conference on Algorithmic Learning Theory, volume 201 of Proceedings of Machine Learning Research, pages 597-619. PMLR. Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar, Suchin Gururangan, Mitchell Wortsman, Rulin Shao, Jean Mercat, Alex Fang, Jeffrey Li, Sedrick Keh, Rui Xin, Marianna Nezhurina, Igor Vasiljevic, Jenia Jitsev, Luca Soldaini, Alexandros G. Dimakis, Gabriel Ilharco, Pang Wei Koh, Shuran Song, Thomas Kollar, Yair Carmon, Achal Dave, Reinhard Heckel, Niklas Muennighoff, and Ludwig Schmidt. 2024. Language models scale reliably with over-training and on downstream tasks. Preprint, . Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad AlDahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, Danny Wyatt, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Francisco Guzmán, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Govind Thattai, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel Kloumann, Ishan Misra, Ivan Evtimov, Jack Zhang, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Karthik Prasad, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, Khalid El-Arini, Krithika Iyer, Kshitiz Malik, Kuenley Chiu, Kunal Bhalla, Kushal Lakhotia, Lauren Rantala-Yeary, Laurens van der Maaten, Lawrence Chen, Liang Tan, Liz Jenkins, Louis Martin, Lovish Madaan, Lubo Malo, Lukas Blecher, Lukas Landzaat, Luke de Oliveira, Madeline Muzzi, Mahesh Pasupuleti, Mannat Singh, Manohar Paluri, Marcin Kardas, Maria Tsimpoukelli, Mathew Oldham, Mathieu Rita, Maya Pavlova, Melanie Kambadur, Mike Lewis, Min Si, Mitesh Kumar Singh, Mona Hassan, Naman Goyal, Narjes Torabi, Nikolay Bashlykov, Nikolay Bogoychev, Niladri Chatterji, Ning Zhang, Olivier Duchenne, Onur Çelebi, Patrick Alrassy, Pengchuan Zhang, Pengwei Li, Petar Vasic, Peter Weng, Prajjwal Bhargava, Pratik Dubal, Praveen Krishnan, Punit Singh Koura, Puxin Xu, Qing He, Qingxiao Dong, Ragavan Srinivasan, Raj Ganapathy, Ramon Calderer, Ricardo Silveira Cabral, Robert Stojnic, Roberta Raileanu, Rohan Maheswari, Rohit Girdhar, Rohit Patel, Romain Sauvestre, Ronnie Polidoro, Roshan Sumbaly, Ross Taylor, Ruan Silva, Rui Hou, Rui Wang, Saghar Hosseini, Sahana Chennabasappa, Sanjay Singh, Sean Bell, Seohyun Sonia Kim, Sergey Edunov, Shaoliang Nie, Sharan Narang, Sharath Raparthy, Sheng Shen, Shengye Wan, Shruti Bhosale, Shun Zhang, Simon Vandenhende, Soumya Batra, Spencer Whitman, Sten Sootla, Stephane Collot, Suchin Gururangan, Sydney Borodinsky, Tamar Herman, Tara Fowler, Tarek Sheasha, Thomas Georgiou, Thomas Scialom, Tobias Speckbacher, Todor Mihaylov, Tong Xiao, Ujjwal Karn, Vedanuj Goswami, Vibhor Gupta, Vignesh Ramanathan, Viktor Kerkez, Vincent Gonguet, Virginie Do, Vish Vogeti, Vítor Albiero, Vladan Petrovic, Weiwei Chu, Wenhan Xiong, Wenyin Fu, Whitney Meers, Xavier Martinet, Xiaodong Wang, Xiaofang Wang, Xiaoqing Ellen Tan, Xide Xia, Xinfeng Xie, Xuchao Jia, Xuewei Wang, Yaelle Goldschlag, Yashesh Gaur, Yasmine Babaei, Yi Wen, Yiwen Song, Yuchen Zhang, Yue Li, Yuning Mao, Zacharie Delpierre Coudert, Zheng Yan, Zhengxing Chen, Zoe Papakipos, Aaditya Singh, Aayushi Srivastava, Abha Jain, Adam Kelsey, Adam Shajnfeld, Adithya Gangidi, Adolfo Victoria, Ahuva Goldstand, Ajay Menon, Ajay Sharma, Alex Boesenberg, Alexei Baevski, Allie Feinstein, Amanda Kallet, Amit Sangani, Amos Teo, Anam Yunus, Andrei Lupu, Andres Alvarado, Andrew Caples, Andrew Gu, Andrew Ho, Andrew Poulton, Andrew Ryan, Ankit Ramchandani, Annie Dong, Annie Franco, Anuj Goyal, Apara10 jita Saraf, Arkabandhu Chowdhury, Ashley Gabriel, Ashwin Bharambe, Assaf Eisenman, Azadeh Yazdan, Beau James, Ben Maurer, Benjamin Leonhardi, Bernie Huang, Beth Loyd, Beto De Paola, Bhargavi Paranjape, Bing Liu, Bo Wu, Boyu Ni, Braden Hancock, Bram Wasti, Brandon Spence, Brani Stojkovic, Brian Gamido, Britt Montalvo, Carl Parker, Carly Burton, Catalina Mejia, Ce Liu, Changhan Wang, Changkyu Kim, Chao Zhou, Chester Hu, ChingHsiang Chu, Chris Cai, Chris Tindal, Christoph Feichtenhofer, Cynthia Gao, Damon Civin, Dana Beaty, Daniel Kreymer, Daniel Li, David Adkins, David Xu, Davide Testuggine, Delia David, Devi Parikh, Diana Liskovich, Didem Foss, Dingkang Wang, Duc Le, Dustin Holland, Edward Dowling, Eissa Jamil, Elaine Montgomery, Eleonora Presani, Emily Hahn, Emily Wood, Eric-Tuan Le, Erik Brinkman, Esteban Arcaute, Evan Dunbar, Evan Smothers, Fei Sun, Felix Kreuk, Feng Tian, Filippos Kokkinos, Firat Ozgenel, Francesco Caggioni, Frank Kanayet, Frank Seide, Gabriela Medina Florez, Gabriella Schwarz, Gada Badeer, Georgia Swee, Gil Halpern, Grant Herman, Grigory Sizov, Guangyi, Zhang, Guna Lakshminarayanan, Hakan Inan, Hamid Shojanazeri, Han Zou, Hannah Wang, Hanwen Zha, Haroun Habeeb, Harrison Rudolph, Helen Suk, Henry Aspegren, Hunter Goldman, Hongyuan Zhan, Ibrahim Damlaj, Igor Molybog, Igor Tufanov, Ilias Leontiadis, Irina-Elena Veliche, Itai Gat, Jake Weissman, James Geboski, James Kohli, Janice Lam, Japhet Asher, Jean-Baptiste Gaya, Jeff Marcus, Jeff Tang, Jennifer Chan, Jenny Zhen, Jeremy Reizenstein, Jeremy Teboul, Jessica Zhong, Jian Jin, Jingyi Yang, Joe Cummings, Jon Carvill, Jon Shepard, Jonathan McPhie, Jonathan Torres, Josh Ginsburg, Junjie Wang, Kai Wu, Kam Hou U, Karan Saxena, Kartikay Khandelwal, Katayoun Zand, Kathy Matosich, Kaushik Veeraraghavan, Kelly Michelena, Keqian Li, Kiran Jagadeesh, Kun Huang, Kunal Chawla, Kyle Huang, Lailin Chen, Lakshya Garg, Lavender A, Leandro Silva, Lee Bell, Lei Zhang, Liangpeng Guo, Licheng Yu, Liron Moshkovich, Luca Wehrstedt, Madian Khabsa, Manav Avalani, Manish Bhatt, Martynas Mankus, Matan Hasson, Matthew Lennie, Matthias Reso, Maxim Groshev, Maxim Naumov, Maya Lathi, Meghan Keneally, Miao Liu, Michael L. Seltzer, Michal Valko, Michelle Restrepo, Mihir Patel, Mik Vyatskov, Mikayel Samvelyan, Mike Clark, Mike Macey, Mike Wang, Miquel Jubert Hermoso, Mo Metanat, Mohammad Rastegari, Munish Bansal, Nandhini Santhanam, Natascha Parks, Natasha White, Navyata Bawa, Nayan Singhal, Nick Egebo, Nicolas Usunier, Nikhil Mehta, Nikolay Pavlovich Laptev, Ning Dong, Norman Cheng, Oleg Chernoguz, Olivia Hart, Omkar Salpekar, Ozlem Kalinli, Parkin Kent, Parth Parekh, Paul Saab, Pavan Balaji, Pedro Rittner, Philip Bontrager, Pierre Roux, Piotr Dollar, Polina Zvyagina, Prashant Ratanchandani, Pritish Yuvraj, Qian Liang, Rachad Alao, Rachel Rodriguez, Rafi Ayub, Raghotham Murthy, Raghu Nayani, Rahul Mitra, Rangaprabhu Parthasarathy, Raymond Li, Rebekkah Hogan, Robin Battey, Rocky Wang, Russ Howes, Ruty Rinott, Sachin Mehta, Sachin Siby, Sai Jayesh Bondu, Samyak Datta, Sara Chugh, Sara Hunt, Sargun Dhillon, Sasha Sidorov, Satadru Pan, Saurabh Mahajan, Saurabh Verma, Seiji Yamamoto, Sharadh Ramaswamy, Shaun Lindsay, Shaun Lindsay, Sheng Feng, Shenghao Lin, Shengxin Cindy Zha, Shishir Patil, Shiva Shankar, Shuqiang Zhang, Shuqiang Zhang, Sinong Wang, Sneha Agarwal, Soji Sajuyigbe, Soumith Chintala, Stephanie Max, Stephen Chen, Steve Kehoe, Steve Satterfield, Sudarshan Govindaprasad, Sumit Gupta, Summer Deng, Sungmin Cho, Sunny Virk, Suraj Subramanian, Sy Choudhury, Sydney Goldman, Tal Remez, Tamar Glaser, Tamara Best, Thilo Koehler, Thomas Robinson, Tianhe Li, Tianjun Zhang, Tim Matthews, Timothy Chou, Tzook Shaked, Varun Vontimitta, Victoria Ajayi, Victoria Montanez, Vijai Mohan, Vinay Satish Kumar, Vishal Mangla, Vlad Ionescu, Vlad Poenaru, Vlad Tiberiu Mihailescu, Vladimir Ivanov, Wei Li, Wenchen Wang, Wenwen Jiang, Wes Bouaziz, Will Constable, Xiaocheng Tang, Xiaojian Wu, Xiaolan Wang, Xilun Wu, Xinbo Gao, Yaniv Kleinman, Yanjun Chen, Ye Hu, Ye Jia, Ye Qi, Yenda Li, Yilin Zhang, Ying Zhang, Yossi Adi, Youngjin Nam, Yu, Wang, Yu Zhao, Yuchen Hao, Yundi Qian, Yunlu Li, Yuzi He, Zach Rait, Zachary DeVito, Zef Rosnbrick, Zhaoduo Wen, Zhenyu Yang, Zhiwei Zhao, and Zhiyu Ma. 2024. The llama 3 herd of models. Preprint, . Chi Han, Qifan Wang, Hao Peng, Wenhan Xiong, Yu Chen, Heng Ji, and Sinong Wang. 2024. LMinfinite: Zero-shot extreme length generalization for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 3991-4008, Mexico City, Mexico. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In Thirtyfifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, and Sam McCandlish. 2020. Scaling laws for autoregressive generative modeling. Preprint, . Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. 2017. Deep learning scaling is predictable, empirically. Preprint, . Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan 11 Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Preprint, . Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, and Boris Ginsburg. 2024. RULER: What's the real context size of your long-context language models? In First Conference on Language Modeling. Shengding Hu, Xin Liu, Xu Han, Xinrong Zhang, Chaoqun He, Weilin Zhao, Yankai Lin, Ning Ding, Zebin Ou, Guoyang Zeng, Zhiyuan Liu, and Maosong Sun. 2024. Predicting emergent abilities with infinite resolution evaluation. In The Twelfth International Conference on Learning Representations. Berivan Isik, Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, and Sanmi Koyejo. 2024. Scaling laws for downstream task performance of large language models. Preprint, . Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, and Xia Hu. 2024. LLM maybe LongLM: SelfExtend LLM context window without tuning. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 22099-22114. PMLR. kaiokendev. 2023. Things I'm learning while training superhot. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, . Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in neural information processing systems, 33:9459-9474. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158-167, Vancouver, Canada. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics. OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha GontijoLopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, 12 Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O'Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2024. Gpt-4 technical report. Preprint, . David Owen. 2024. How predictable is language model benchmark performance? Preprint, . Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, page 311-318, USA. Association for Computational Linguistics. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2024. YaRN: Efficient context window extension of large language models. In The Twelfth International Conference on Learning Representations. Felipe Maia Polo, Seamus Somerstep, Leshem Choshen, Yuekai Sun, and Mikhail Yurochkin. 2025. Sloth: scaling laws for llm skills to predict multibenchmark performance across families. Preprint, . Ofir Press, Noah Smith, and Mike Lewis. 2022. Train short, test long: Attention with linear biases enables input length extrapolation. In International Conference on Learning Representations. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations. Yangjun Ruan, Chris J. Maddison, and Tatsunori Hashimoto. 2024. Observational scaling laws and the predictability of language model performance. Preprint, . Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732-8740. AAAI Press. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 44634473, Hong Kong, China. Association for Computational Linguistics. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations. Rylan Schaeffer, Hailey Schoelkopf, Brando Miranda, Gabriel Mukobi, Varun Madan, Adam Ibrahim, Herbie Bradley, Stella Biderman, and Sanmi Koyejo. 2025. Why has predicting downstream capabilities of frontier ai models with scale remained elusive? Preprint, . Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karaka ̧s, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, 13 Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo JaimovitchLópez, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Koco ́n, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia ContrerasOchando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem ̧Senel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Sw ̨edrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Preprint, . Rainer Storn and Kenneth Price. 1997. Differential evolution: A simple and efficient heuristic for global op14 timization over continuous spaces. Journal of Global Optimization, 11:341-359. Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149-4158, Minneapolis, Minnesota. Association for Computational Linguistics. Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforge, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijaykumar, Dominika Rogozi ́nska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltyshev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Pluci ́nska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mohamed, Kartikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjoesund, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, Lilly McNealus, Livio Baldini Soares, Logan Kilpatrick, Lucas Dixon, Luciano Martins, Machel Reid, Manvinder Singh, Mark Iverson, Martin Görner, Mat Velloso, Mateo Wirth, Matt Davidow, Matt Miller, Matthew Rahtz, Matthew Watson, Meg Risdal, Mehran Kazemi, Michael Moynihan, Ming Zhang, Minsuk Kahng, Minwoo Park, Mofi Rahman, Mohit Khatwani, Natalie Dao, Nenshad Bardoliwalla, Nesh Devanathan, Neta Dumai, Nilay Chauhan, Oscar Wahltinez, Pankil Botarda, Parker Barnes, Paul Barham, Paul Michel, Pengchong Jin, Petko Georgiev, Phil Culliton, Pradeep Kuppala, Ramona Comanescu, Ramona Merhej, Reena Jana, Reza Ardeshir Rokni, Rishabh Agarwal, Ryan Mullins, Samaneh Saadat, Sara Mc Carthy, Sarah Cogan, Sarah Perrin, Sébastien M. R. Arnold, Sebastian Krause, Shengyang Dai, Shruti Garg, Shruti Sheth, Sue Ronstrom, Susan Chan, Timothy Jordan, Ting Yu, Tom Eccles, Tom Hennigan, Tomas Kocisky, Tulsee Doshi, Vihan Jain, Vikas Yadav, Vilobh Meshram, Vishal Dharmadhikari, Warren Barkley, Wei Wei, Wenming Ye, Woohyun Han, Woosuk Kwon, Xiang Xu, Zhe Shen, Zhitao Gong, Zichuan Wei, Victor Cotruta, Phoebe Kirk, Anand Rao, Minh Giang, Ludovic Peran, Tris Warkentin, Eli Collins, Joelle Barral, Zoubin Ghahramani, Raia Hadsell, D. Sculley, Jeanine Banks, Anca Dragan, Slav Petrov, Oriol Vinyals, Jeff Dean, Demis Hassabis, Koray Kavukcuoglu, Clement Farabet, Elena Buchatskaya, Sebastian Borgeaud, Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, and Alek Andreev. 2024. Gemma 2: Improving open language models at a practical size. Preprint, . Together.ai. 2023. LLaMA-2-7B-32K. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Preprint, . Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification. Chaojun Xiao, Pengle Zhang, Xu Han, Guangxuan Xiao, Yankai Lin, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2024. Infllm: Training-free longcontext extrapolation for llms with an efficient context memory. In Advances in Neural Information Processing Systems, volume 37, pages 119638-119661. Curran Associates, Inc. 15 Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, and Hao Ma. 2024. Effective long-context scaling of foundation models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4643-4663, Mexico City, Mexico. Association for Computational Linguistics. Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2025. Qwen2.5 technical report. Preprint, . Qinyuan Ye, Harvey Fu, Xiang Ren, and Robin Jia. 2023. How predictable are large language model capabilities? a case study on BIG-bench. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7493-7517, Singapore. Association for Computational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791-4800, Florence, Italy. Association for Computational Linguistics. Qiyuan Zhang, Fuyuan Lyu, Xue Liu, and Chen Ma. 2024. Collaborative performance prediction for large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2576-2596, Miami, Florida, USA. Association for Computational Linguistics. 16 A Dataset Details GSM8K (Cobbe et al., 2021) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. During inference, we allow up to 400 new tokens. The average token lengths of the training and testing instances were 177.64 and 177.43 respectively. The generated responses averaged around 172.13 tokens in length. To evaluate, we extract the model's final answer and compare it with the reference answer, checking for numerical equivalence. MATH (Hendrycks et al., 2021) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. During inference, we allow up to 400 new tokens. The average token lengths of the training and testing instances were 160.54 and 155.74 respectively. The generated responses also averaged around 184.0 tokens in length. To evaluate, we extract the model's final answer and compare it with the reference answer, checking for numerical equivalence. AQUA-RAT (Ling et al., 2017) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 88.45 and 93.09 respectively. The generated responses also averaged around 3.44 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. DeepMind Math (Saxton et al., 2019) The dataset is categorized into 56 subsets. We filter out instances over 256 tokens in length, and select 511 training instances and 50 testing instances at random from each subset. We allow up to 400 new tokens during generation. The average token lengths of the training and testing instances were 57.94 and 61.05 respectively. The generated responses also averaged around 85.71 tokens in length. To evaluate, we extract the model's final answer and compare it with the reference answer, checking for numerical equivalence. PIQA (Bisk et al., 2020) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 81.16 and 81.55 respectively. The generated responses also averaged around 3.46 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. OpenBookQA (Mihaylov et al., 2018) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 47.74 and 49.39 respectively. The generated responses also averaged around 3.3 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. SIQA (Sap et al., 2019) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 1 new token during generation. The average token lengths of the training and testing instances were 56.68 and 56.87 respectively. The generated responses also averaged around 3.35 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. HellaSwag (Zellers et al., 2019) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 153.06 and 156.05 respectively. The generated responses also averaged around 3.67 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. WinoGrande (Sakaguchi et al., 2020) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 53.98 and 53.87 respectively. The generated responses also averaged around 3.33 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. ARC Easy (Clark et al., 2018) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at ran17 dom. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 66.69 and 67.14 respectively. The generated responses also averaged around 3.46 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. ARC Challenge (Clark et al., 2018) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 75.65 and 76.83 respectively. The generated responses also averaged around 3.43 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. CommenSenseQA (Talmor et al., 2019) We filter out instances over 256 tokens in length, and 250 select 511 training instances and testing instances at random. We allow up to 5 new tokens during generation. The average token lengths of the training and testing instances were 50.42 and 49.92 respectively. The generated responses also averaged around 1.0 tokens in length. To evaluate, we check to see if the choice returned by our model matches the reference answer. WMT14 (CS-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens during generation. The average token lengths of the training and testing instances were 95.01 and 85.25 respectively. The generated responses also averaged around 77.77 tokens in length. We use BLEU4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (DE-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens during generation. The average token lengths of the training and testing instances were 85.53 and 77.68 respectively. The generated responses also averaged around 77.77 tokens in length. We use BLEU4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (FR-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens during generation. The average token lengths of the training and testing instances were 95.94 and 84.29 respectively. The generated responses also averaged around 78.73 tokens in length. We use BLEU4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (HI-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens during generation. The average token lengths of the training and testing instances were 34.01 and 147.09 respectively. The generated responses also averaged around 53.11 tokens in length. We use BLEU4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. WMT14 (RU-EN) (Bojar et al., 2014) We filter out instances over 256 tokens in length, and select 511 training instances and 250 testing instances at random. We allow up to 256 new tokens during generation. The average token lengths of the training and testing instances were 73.54 and 86.56 respectively. The generated responses also averaged around 77.24 tokens in length. We use BLEU4 (Papineni et al., 2002) to score the generated translations relative to the reference translations. B Full Results In this section, we present full aggregate results in Tables 8, 9, and 10 for arithmetic reasoning, common sense reasoning, and machine translation respectively. Figures 4, 5,and 6 provide contours of our fits at C = 7.8 × 1022 and C = 1.5 × 1023. 18 k 0 shots 1 shot 3 shots 7 shots 15 shots 31 shots 63 shots 127 shots 255 shots 511 shots Llama-2-7b-hf 0.089 0.099 0.115 0.120 0.136 0.127 0.094 0.014 0.014 0.000 Yarn-Llama-2-7b-8k 0.076 0.097 0.109 0.117 0.134 0.131 0.137 0.071 0.000 0.000 Yarn-Llama-2-7b-16k 0.072 0.095 0.109 0.116 0.130 0.133 0.143 0.139 0.073 0.002 Yarn-Llama-2-7b-32k 0.069 0.092 0.104 0.113 0.127 0.127 0.135 0.134 0.143 0.076 Yarn-Llama-2-7b-64k 0.057 0.094 0.108 0.115 0.132 0.128 0.143 0.140 0.150 0.138 Yarn-Llama-2-7b-128k 0.049 0.091 0.106 0.113 0.129 0.126 0.136 0.135 0.149 0.140 Llama-2-13b-hf 0.088 0.115 0.131 0.137 0.148 0.141 0.092 0.011 0.005 0.000 Yarn-Llama-2-13b-8k 0.086 0.110 0.126 0.132 0.146 0.149 0.151 0.082 0.000 0.000 Yarn-Llama-2-13b-16k 0.081 0.110 0.135 0.146 0.153 0.163 0.172 0.145 0.077 0.010 Yarn-Llama-2-13b-32k 0.077 0.111 0.129 0.145 0.154 0.162 0.171 0.169 0.134 0.065 Yarn-Llama-2-13b-64k 0.073 0.106 0.130 0.146 0.156 0.158 0.169 0.167 0.159 0.136 Yarn-Llama-2-13b-128k 0.069 0.108 0.123 0.138 0.157 0.157 0.174 0.165 0.163 0.153 Table 8: Accuracy on arithmetic reasoning, aggregated over every instance in the task. k 0 shots 1 shot 3 shots 7 shots 15 shots 31 shots 63 shots 127 shots 255 shots 511 shots Llama-2-7b-hf 0.376 0.489 0.518 0.536 0.536 0.527 0.302 0.000 0.000 0.000 Yarn-Llama-2-7b-8k 0.356 0.476 0.518 0.530 0.530 0.523 0.491 0.278 0.000 0.000 Yarn-Llama-2-7b-16k 0.342 0.468 0.508 0.522 0.532 0.519 0.521 0.486 0.264 0.000 Yarn-Llama-2-7b-32k 0.325 0.459 0.500 0.496 0.522 0.508 0.501 0.534 0.457 0.276 Yarn-Llama-2-7b-64k 0.346 0.456 0.503 0.513 0.502 0.498 0.490 0.515 0.470 0.458 Yarn-Llama-2-7b-128k 0.338 0.450 0.496 0.490 0.504 0.486 0.490 0.507 0.465 0.486 Llama-2-13b-hf 0.453 0.604 0.649 0.660 0.659 0.600 0.344 0.000 0.000 0.000 Yarn-Llama-2-13b-8k 0.469 0.610 0.662 0.656 0.675 0.652 0.603 0.318 0.000 0.000 Yarn-Llama-2-13b-16k 0.464 0.594 0.656 0.654 0.658 0.650 0.658 0.584 0.308 0.000 Yarn-Llama-2-13b-32k 0.432 0.586 0.642 0.640 0.642 0.642 0.646 0.626 0.567 0.322 Yarn-Llama-2-13b-64k 0.481 0.589 0.642 0.636 0.638 0.634 0.645 0.620 0.614 0.582 Yarn-Llama-2-13b-128k 0.480 0.578 0.636 0.632 0.628 0.630 0.634 0.616 0.609 0.612 Table 9: Accuracy on Commonsense Reasoning tasks, aggregated over every instance in the task. k 0 shots 1 shot 3 shots 7 shots 15 shots 31 shots 63 shots 127 shots 255 shots 511 shots Llama-2-7b-hf 0.031 0.147 0.155 0.154 0.156 0.159 0.049 0.011 0.000 0.000 Yarn-Llama-2-7b-8k 0.034 0.142 0.155 0.146 0.151 0.152 0.152 0.010 0.006 0.000 Yarn-Llama-2-7b-16k 0.031 0.138 0.152 0.144 0.147 0.144 0.143 0.143 0.006 0.003 Yarn-Llama-2-7b-32k 0.026 0.138 0.147 0.141 0.143 0.142 0.141 0.140 0.146 0.005 Yarn-Llama-2-7b-64k 0.033 0.129 0.144 0.142 0.148 0.141 0.148 0.142 0.144 0.134 Yarn-Llama-2-7b-128k 0.040 0.125 0.140 0.140 0.144 0.142 0.145 0.136 0.136 0.130 Llama-2-13b-hf 0.023 0.166 0.175 0.180 0.181 0.175 0.058 0.015 0.000 0.000 Yarn-Llama-2-13b-8k 0.029 0.161 0.171 0.175 0.177 0.170 0.174 0.014 0.011 0.000 Yarn-Llama-2-13b-16k 0.025 0.157 0.170 0.171 0.176 0.166 0.173 0.166 0.010 0.005 Yarn-Llama-2-13b-32k 0.025 0.152 0.168 0.166 0.171 0.168 0.164 0.156 0.160 0.007 Yarn-Llama-2-13b-64k 0.066 0.152 0.162 0.163 0.166 0.169 0.163 0.160 0.162 0.160 Yarn-Llama-2-13b-128k 0.101 0.145 0.155 0.162 0.163 0.163 0.163 0.154 0.157 0.154 Table 10: Accuracy on Machine Translation tasks, aggregated over every instance in the task. 19 Figure 4: Contours of our fit at C = 7.8 × 1022 (left) and C = 1.5 × 1023 (right) for the arithmetic reasoning task. Figure 5: Contours of our fit at C = 7.8 × 1022 (left) and C = 1.5 × 1023 (right) for the common sense reasoning task. Figure 6: Contours of our fit at C = 7.8 × 1022 (left) and C = 1.5 × 1023 (right) for the machine translation task. 20
2510.14915
Harmonizing Diverse Models: A Layer-wise Merging Strategy for Consistent Generation Xujun Peng, Anoop Kumar, Jingyu Wu, Parker Glenn, Daben Liu AI Foundations, Capital One McLean, VA, USA {xujun.peng, anoop.kumar, jingyu.wu, parker.glenn, daben.liu}@capitalone.com Abstract Retrieval-Augmented Generation (RAG) sys- tems leverage Large Language Models (LLMs) to generate accurate and reliable responses that are grounded in retrieved context. How- ever, LLMs often generate inconsistent outputs for semantically equivalent inputs, a problem compounded by the scarcity of consistency- focused training data and the limitations of cur- rent fine-tuning techniques in enhancing out- put consistency. We propose a new approach combining systematic synthetic data genera- tion, triplet loss for better embeddings, and a novel layer-wise model merging approach. Us- ing consistency-aware weights derived from intermediate layer activations, our method ef- fectively integrates knowledge from special- ized models. Experimental results how that our merged model significantly enhances output consistency, achieving a 47.5% improvement in response similarity over the baseline, thus offering a practical solution for increasing the reliability of an industrial RAG system. 1 Introduction LLMs have demonstrated remarkable capabilities in natural language understanding and generation, enabling breakthroughs across a broad spectrum of applications such as question answering and summarization. RAG has emerged as a powerful paradigm that combines the generative strength of LLMs with external knowledge retrieval to enhance factuality, reduce hallucination, and extend context beyond model limitations (Lewis et al., 2020; Wu et al., 2024). Despite their potential, RAG systems often gen- erate inconsistent responses, for minor and seman- tically insignificant variations in the input query or the prompt (Song and Zheng, 2024). This incon- sistency manifests itself in various forms, includ- ing contradictory responses, variability in factual grounding, and fluctuations in the level of detail or confidence expressed by the model. This unpre- dictability not only undermines the reliability of RAG systems but also poses challenges for their adoption in high-stakes or knowledge-sensitive do- mains such as finance, healthcare, and scientific research. As shown in Figure 1, the mere presence or absence of a question mark can dramatically al- ter the response of a RAG based QA system. In an industrial production deployment, there could sev- eral such variations in how users query the system, posing challenges in the adoption of RAG systems. RAG Vietnam national cricket team will debut at what competitions at Kinrara Oval? Vietnam national cricket team will debut at what competitions at Kinrara Oval The answer cannot be determined from the provided context. The context only mentions that the Vietnam national cricket team will debut in the cricket tournament at the 2017 Southeast Asian Games in Kuala Lumpur, Malaysia, but it does not specify the competitions at Kinrara Oval. The Vietnam national cricket team will debut in the cricket tournament at the 2017 Southeast Asian Games in Kuala Lumpur, Malaysia. Users Query LLM’s responses Figure 1: Variability in LLM Responses from Subtle Query Differences. In RAG systems, two key models work together: the retriever and the generator. The retriever is responsible for fetching relevant content based on a user query or prompt, while the generator creates a coherent and contextually appropriate answer by leveraging both the query and the retrieved content. Inconsistencies may arise during either the re- trieval or the generation process, leading to varied responses. However, our empirical observations and Zuccon et al. (2016), Abdallah et al. (2025) indicate that generators are more sensitive and less consistent than retrievers to minor variations in queries. While retrievers tend to be consistent, even in the face of minor, semantically insignif- icant variations in the input query (e.g., different phrasings of the same question), generators exhibit higher variability (Cao et al., 2025). Small changes in phrasing or query structure can lead to different answers being generated, even when the retrieved content remains the same. This difference in behav- ior highlights the challenges faced by generative arXiv:2510.14915v1 [cs.CL] 16 Oct 2025 models in maintaining consistency, especially in tasks where precise and coherent responses are re- quired. In this work, we focus on characterizing, measur- ing, and mitigating such inconsistency. Our main contributions are as follows: 1. We characterize different types of query varia- tions that lead to inconsistent responses in an industrial RAG system. 2. We identify metrics and demonstrate inconsis- tency in the question answering task. 3. We develop novel layer-wise merging ap- proach to reduce inconsistency without re- gressing on accuracy. 2 Related Work We review the literature to establish a clear un- derstanding of what constitutes consistency, how it is measured, and strategies to improve consis- tency in context of LLMs. A widely accepted con- cept, as proposed by Patwardhan et al. (2024), de- fines consistency as the semantic equivalence of two responses generated by the same LLM when prompted with identical or semantically similar queries multiple times. Evaluating and improving LLM consistency re- mains challenging due to a lack of targeted bench- marks and datasets, prompting research into spe- cialized evaluation methods. Elazar et al. (2021) contributed a valuable resource for factual con- sistency measurement in pre-trained LLMs and a novel consistency loss to enhance performance even on new data. To explore prompt sensitivity, Raj et al. (2025b) introduced PromptSET, Qiang et al. (2024) applied perturbations to highlight the difficulties in predicting how minor prompt changes affect LLMs. For evaluation, Zhao et al. (2024) propose an automated consistency assess- ment tool using a custom data set. Addressing in- consistency in natural-language explanations, Chen et al. (2025a) developed EC-finetuning, that trains on synthetic data to increase consistency. To address the need for reliable LLM output in NLG, especially under semantically equivalent inputs, Raj et al. (2023) proposed a framework to measure semantic consistency via output agree- ment, showing strong correlation with human judg- ments across domains. Complementing this, Wu et al. (2025) introduced a logit-based ensemble method aligned with human perceptions through a user study. Lee et al. (2025) examined LLM con- sistency as automated evaluators, focusing on their reliability in scoring identical items. Recent work has focused on actively improv- ing LLM consistency. Raj et al. (2025a) used multi-step prompting and synthetic data for seman- tic alignment. Sathe et al. (2025) enhanced self- consistency via multiple input tokenizations. Wang et al. (2023) improved reasoning by sampling di- verse outputs and selecting by voting. 3 Methodology In this section, we describe our methodology for improving the consistency of LLM responses, specifically within the context of a RAG system. We observe that the retriever in our RAG system provides accurate context, but minor variations in the query often lead to inconsistent response from the generator. This work focuses on improving gen- erator consistency under such variations, assuming stable retrieval quality. Addressing retrieval incon- sistency is left for future work. Our work aims to improve the RAG generator’s consistency. By analyzing human-annotated data, we constructed diverse synthetic datasets to train multiple individual generator models. To achieve higher response consistency, we developed a novel consistency-focused, layer-wise model merging ap- proach, building upon DARE-TIES (Yu et al., 2024; Yadav et al., 2023). This strategy allowed us to ef- fectively combine knowledge from individual mod- els trained on diverse synthetic data. 3.1 Synthetic Data Generation A direct approach to improving LLM consistency is to train on all possible input variations. How- ever, this is impractical due to the vast number of potential variations and limited data availability - especially in domains like healthcare, where data is fragmented and complex, and finance, where privacy regulations constrain access. Given these limitations, synthetic data provides a practical way to improve consistency when real data is scarce. While prior work has documented a broad range of general query variations - such as typos, synonyms, keyboard proximity errors, and paraphrases (Zhang et al., 2025; Wu et al., 2025) - it does not capture several nuanced variations ob- served in large-scale industrial RAG systems. Our analysis of production queries shows that a small set of key variations accounts for most input diver- Table 1: Illustrative Query Variations Leading to Different Answers. Variation Type Query Query’ Variation - How do/to how do we manage customer feedback at end of project how to manage customer feedback at end of project I vs. we can we drive to a grocery store can I drive to a grocery store Singular vs. plural delivering packages for shipment delivering package for shipment Article omissions how to add a contact to a phone book how to add contacts to phone books sity, often involving subtle rephrasings (e.g., “how to we manage an account” vs. “how to manage an account”) rather than simple surface-level errors. Table 1 lists these variation types with representa- tive examples. Characterizing and accounting for such variations is critical to improving system ro- bustness and building user trust in real-world RAG applications. Based on the analysis of our dataset, we’ve iden- tified three main types of query variations: How to/do variations: These queries often in- volve rephrasing questions about methods or ac- tions. We used regular expression rules to system- atically create additional queries of this nature. Singular/Plural/Article variations: This cate- gory covers changes in noun quantity (e.g., “apple” vs. “apples”) and the use of articles (e.g., “a”, “an”, “the”). To synthesize more of these variations, we randomly interchanged singular and plural forms and substituted or modified articles. Semantic variations: These are changes in wording that maintain the same core meaning but use different vocabulary or phrasing. For seman- tic variations, we leveraged a pretrained LLM (Llama-3.1-70B-Instruct) to paraphrase our queries (Grattafiori et al., 2024). We used these synthetic queries to run our IR system, capturing updated contexts for our RAG system. This process generated enriched training and test datasets with a wide array of input vari- ations. Rather than training/fine-tuning a single LLM with all the real-world and synthetic data, we opted to train/fine-tune multiple specialized mod- els, each focusing on a different category of input variations. This approach allows each model to ex- cel at the specific underlying tasks associated with its particular query type. 3.2 Triplet Loss Training Unlike traditional LLM fine-tuning that relies solely on cross-entropy loss, we incorporate triplet loss during our fine-tuning phase. Triplet Loss (Schroff et al., 2015a) is a widely used loss function in metric learning, used in tasks such as face recognition and semantic search, to learn embeddings that pull similar items closer while pushing dissimilar ones apart. The core idea of Triplet Loss is to train on triplets of data points: an anchor A, a positive P that is similar to the anchor, and a negative N that is dissimilar. The objective is to ensure that the distance between A and P is smaller than that between A and N. The triplet Loss function is formulated as: L(A, P, N) = max(0, d(f(A), f(P)) −d(f(A), f(N)) + α) (1) More details of triplet loss can be found in (Schroff et al., 2015b; Reimers and Gurevych, 2019). In our implementation, triplets were constructed by first choosing an anchor query (A). We then selected its corresponding positive (P) and negative (N) data points by randomly sampling from its top 10 and bottom 10 nearest neighbors, respectively, within the feature space generated by a semantic feature extractor. The final loss function employed during our training and fine-tuning process is a combination of cross-entropy loss and triplet loss, defined as: L = LCE + α · LTriplet, (2) where α is a predefined weighting factor designed to balance the contribution of triplet loss. 3.3 Model Merging With a suite of specialized models, each trained on distinct synthetic datasets, the challenge became generating a single, consistent response without sacrificing accuracy. While conventional ensemble approaches for multiple pre-trained or fine-tuned LLMs involve parallel execution and output combi- nation, they incur significant computational costs and inference latency (Chen et al., 2025b). To address these limitations, model merging of- fers a solution by consolidating knowledge from multiple pre-trained or fine-tuned models into a single consolidated model. These techniques range from simple averaging to complex algorithms that align features and selectively transfer knowledge. Here, we introduce a novel model merging ap- proach, building on the DARE-TIES merge method (Yu et al., 2024; Yadav et al., 2023), with the main goal of substantially boosting the consistency of the unified model’s responses. DARE-TIES merging is a sophisticated model merging algorithm designed to overcome the limita- tions of simple weight averaging, especially when combining fine-tuned models that originate from a common pre-trained base model but have diverged during training on different tasks or datasets. It operates on the principle of merging the ∆θk = θFk −θP that fine-tuned models apply to a common pre-trained model, rather than directly merging the absolute weights, where θP is the base model’s pa- rameters and θFk denotes the parameters of the k-th fine-tuned model. By applying sparsification, sign matching and inverse-scaling on the ∆θk, DARE- TIES yields the merged model’s parameters by: θmerged = θP + N X k=1 ∆θk. (3) To improve consistency with semantically iden- tical inputs, we analyzed the consistency of each LLM layer, then assigned dynamic weights in Equa- tion 3 for merging. To accomplish this, we first formed a develop- ment set Sdev of T diverse data points. Then, for each model k and each layer l, we extracted the activations α(l) k ∈RD×T from development set Sdev, where D represents the output feature di- mension of layer l. For sequential outputs, we used max-pooling to extract these activations. This process enabled us to compute a similarity matrix Σ(l) k ∈RT×T for the activations of each data point at every layer of model k. Ideally, a model exhibiting high consistency with semantically identical inputs should produce simi- lar activations within a single layer. Conversely, if inputs are semantically distinct, their activa- tions should diverge significantly. Therefore, a consistent model would ideally yield similar sim- ilarity matrices Σ(l) k across different layers when presented with the same set of inputs. Leveraging this intuition, we can quantify a model’s consistency by comparing the Σ(l) k from different layers. Our approach begins by using a semantic feature extractor (specifically, a sen- tence transformer) to obtain features for each query in our development set, Sdev. From these fea- tures, we computed a reference similarity ma- trix Σr. Subsequently, we quantified the discrep- ancy between each layer’s similarity matrix Σ(l) k and this reference using the absolute difference: d(l) k = |Σ(l) k −Σr|. Hence, for a specific layer l across our various LLMs, we obtain a set of dis- tance values, DM(l) = [d(l) 1 , d(l) 2 , . . . , d(l) N ]. To con- vert these distances into weights that indicate a layer’s contribution to consistency, we apply an inverted non-linear normalization approach. First, we computed the inverted distance for each layer’s distance d(l) k by subtracting it from the maximum distance observed for that layer across all models: ˜d(l) k = max(DM(l)) −d(l) k Next, these inverted distances are normalized to obtain r(l) k : r(l) k = ˜d(l) k PN j=1 ˜d(l) j Finally, we apply a sigmoid function to these nor- malized inverted distances to derive the final weight w(l) k for layer l of model k: w(l) k = σ(a · r(l) k + b) (4) Here, σ(·) denotes the sigmoid function, and a and b are predefined scaling and offset parameters. Based on the derived consistency-oriented layer weights w(l) k for each model k, we modified Equa- tion 3 to incorporate these weights into the layer- wise model merging process: θ(l) merged = θ(l) P + N X k=1 w(l) k · ∆θ(l) k . (5) The final merged LLM is constructed by apply- ing Equation 5 in a layer-wise manner. We outline the algorithm for model merging: Algorithm 1 Consistency-Aware Model Merging 1: Input: Base model θP , fine-tuned models {θFk}N k=1, dev set Sdev 2: Output: Merged model θmerged 3: Compute reference similarity matrix Σr using a sentence encoder on Sdev 4: for each model k and layer l do 5: Extract activations and compute similarity matrix Σ(l) k 6: Compute distance d(l) k = |Σ(l) k −Σr| 7: end for 8: for each layer l do 9: Normalize distances d(l) k to weights w(l) k using inverted scaling and sigmoid 10: Merge: θ(l) merged = θ(l) P + P k w(l) k · (θ(l) Fk − θ(l) P ) 11: end for 12: return θmerged 4 Experiments Our experimental setup utilized a QA engine built on a RAG architecture. For the evaluation of our consistency improvement method, the retriever component was held constant, and the generator component underwent fine-tuning. 4.1 Datasets To fine-tune and evaluate our LLM generator, we used 2,738 representative queries and their re- trieved contexts that resemble a production IR sys- tem. Domain experts annotated the expected an- swers, and the data was split into 1,421 training and 1,317 test samples. To get more varied inputs for training our model, we applied the methods detailed in Section 3.1 to create three distinct types of synthetic data. Our synthetic training dataset included 150 “how to/do” variation queries, 1,421 paraphrased queries, and 952 singular/plural/article variation queries. We submit all query variations to the IR system to retrieve their corresponding contexts, which are then used to construct the final inputs. Alongside the 1,317 test samples to measure ac- curacy, we created a test set to evaluate consistency using our data synthesis methods (Section 3.1). This produced 1,579 variations-176 “how to/do”, 912 paraphrases, and 491 singular/plural/article changes-paired with original queries and expected answers for consistency testing. 4.2 Metrics To assess the overall accuracy of the results of our RAG system, we employed the ROUGE-L (Lin, 2004) and BLEU metrics with up to 4-grams (Pap- ineni et al., 2002), comparing the LLM-generated responses against the references provided. To quantify the consistency of LLM response across input variations, we utilized three metrics: exact string match (EM), response similarity (RS) and Bert similarity (BS) measures. Given an origi- nal query Q and its variant Q′, with S and S′ rep- resenting the respective LLM responses, the exact string match is formally defined as: EM(S, S′) ⇐⇒S = S′. For Response Similarity (RS), we determine se- mantic equivalence by thresholding the ROUGE score between the LLM’s responses S and S′: RS(S, S′) ⇐⇒Rouge(S, S′) > T, where T represents an empirically determined threshold used to ascertain whether two responses are considered semantically identical. Furthermore, we define Bert Similarity (BS) be- tween two LLMs responses S and S′ to quantify the semantic similarity of them, as: BS(S, S′) ⇐⇒Bert(S, S′). 4.3 Model Training and Merging Our experimental setup involved several distinct fine-tuning stages for the Llama-3.1-8B-Instruct model (Grattafiori et al., 2024) and Gemma-3-12B- Instruct model (Team et al., 2025). We started by fine-tuning a baseline Llama-3.1- 8B-Instruct and Gemma-3-12B-Instruct models for two epochs, using the original 1,421 training sam- ples and only a cross-entropy loss function. To investigate how triplet loss could boost LLM consistency, all subsequent fine-tuning experiments combined both cross-entropy loss and triplet loss, keeping the hyperparameters consistent with our initial baseline setup. Following this, we fine-tuned five distinct Llama- 3.1-8B-Instruct LLMs. One was fine-tuned on our base training set exclusively. The other three were fine-tuned on this base set, each augmented with a specific synthetic data type: 176 “how to/do” variations, 912 paraphrased samples, or 491 singu- lar/plural/article variations (more details on this in Section 3.2). The final model was fine-tuned using Table 2: Comparison of Overall Accuracy and Consistency Metrics. Llama-3.1-8B-Instruct based LLMs Gemma-3-12B-Instruct based LLMs ROUGE BLEU EM RS BS ROUGE BLEU EM RS BS B 0.5123 0.2928 0.1051 0.2799 0.9246 0.4692 0.2338 0.0678 0.2609 0.9227 B + SFT 0.5208 0.3125 0.1482 0.3325 0.9266 0.5266 0.3297 0.2242 0.4009 0.9323 B + SFT + TL 0.5460 0.3460 0.1822 0.3530 0.9276 0.5206 0.3194 0.2331 0.4041 0.9337 B + SFT + TL + HTD 0.5493 0.3495 0.2250 0.3867 0.9264 0.5276 0.3255 0.2483 0.4364 0.9351 B + SFT + TL + SEM 0.5330 0.3339 0.2366 0.3965 0.9281 0.4966 0.3042 0.2673 0.4262 0.9314 B + SFT + TL + SPA 0.5364 0.3370 0.2111 0.3692 0.9262 0.5130 0.3170 0.2603 0.4231 0.9332 B + SFT + TL + ALL 0.5198 0.3230 0.2510 0.3986 0.9289 0.4879 0.2974 0.3382 0.4731 0.9357 Merged 0.5379 0.3380 0.2521 0.4129 0.9292 0.5356 0.3416 0.2932 0.4674 0.9373 Abbreviations: B = Baseline (Llama-3.1-8B-Instruct or Gemma-3-12B-Instruct), SFT = Supervised Fine-tuned, TL = Triplet- loss, HTD = “How to/do” variation, SEM = Semantic variation, SPA = Singular/Plural/Article variation, ALL = All training data, Merged = Merged model. all available training data combined. Finally, we merged these three individually fine-tuned LLMs using the methodology described in Section 3.3. We repeated the same fine-tuning and merging steps for the Gemma-3-12B-Instruct LLMs to en- sure consistent evaluation across model’s architec- tures. All fine-tuned models, including the Llama-3.1- 8B-Instruct and Gemma-3-12B-Instruct baselines, were comprehensively evaluated in two dedicated test sets designed to assess both accuracy and con- sistency measures, as described in Section 4.1. We present complete experimental results in Table 2. 4.4 Results In Table 1, we present four types of query vari- ations that lead to response inconsistency. Ta- ble 2 quantitatively shows that the baseline model (Llama-3.1-8B-Instruct) achieves moderate overlap with human references (ROUGE: 0.5123, BLEU: 0.2928) but demonstrates the lowest consistency (EM: 0.1051, RS: 0.2799, BS: 0.9246). This demonstrates the model often fails to generate consistent responses to semantically equivalent queries. The fine-tuned model, as shown in Table 2, demonstrates a modest improvement over the base- line, w.r.t accuracy and consistency. While it yields somewhat better text overlap and initial gains in consistency, its performance, particularly in EM, RS and BS, suggests that general fine-tuning pro- vides only limited progress towards truly consistent response for varied inputs. Incorporating triplet loss significantly boosts per- formance across all metrics, as seen by comparing the triplet-loss model to the fine-tuned model in Table 2. The triplet-loss model achieved higher ROUGE (0.5460) and BLEU (0.3460) scores, in- dicating better content and lexical alignment. The model shows improvement in consistency, the EM score dramatically improved by 73.4% to 0.1822 (from 0.1051), while RS score saw a substantial 26.1% increase to 0.3530 (from 0.2799). These results underscore the effectiveness of integrating triplet loss in fine-tuning strategies for LLMs, lead- ing to significantly more robust and consistent re- sponse generation. As shown in the Table 2, individual variation models—specifically the How to/Do,Semantic, and Singular/Plural/Article variation mod- els—consistently outperform the baseline in both accuracy and consistency. This demonstrates the effectiveness of specialized fine-tuning with synthetically generated data Surprisingly, models fine-tuned on individual synthetic datasets outperformed the combined-data model in accuracy. However, the combined model achieved higher consistency, suggesting that merg- ing diverse variation types may introduce conflict- ing signals or biases that impact accuracy. The merged model consistently delivers the most robust and balanced performance across all metrics, with notable strength in response consistency. In terms of consistency metrics, the merged model achieves the highest scores for both EM at 0.2521, RS at 0.4129 and BS at 0.9292. This performance significantly surpasses all other models. For EM, it represents an impressive 139.87% improvement over the baseline model and still leads the next best model (“B + SFT + TL + ALL” model at 0.2510) by approximately 0.44%. Similarly, its RS score is a 47.52% improvement over the baseline and approximately 3.59% higher than the second best model. This indicates that the merging strategy is highly effective in ensuring LLM responses are more reliably identical or semantically equivalent even when faced with varied inputs. Regarding accuracy-based metrics, while the merged model’s ROUGE score (0.5379) and BLEU score (0.3380) are marginally lower than the top performer (the “B + SFT + TL + HTD” with ROUGE 0.5493 and BLEU 0.3495), they are still very good and highly competitive. This demon- strates that the merging process successfully en- hances consistency without a significant trade-off in overall accuracy or fluency. The merged model effectively combines the strengths of its constituent specialized models, making it the most well- rounded and high-performing solution for both ac- curate and consistent RAG system responses. Table 2 also reports accuracy and consistency metrics for Gemma-3-12B-Instruct models. Over- all, the trend of model improvements closely mir- rors that of the Llama-3.1-8B-Instruct experiments: baseline models exhibit moderate ROUGE and BLEU scores with the lowest consistency (EM: 0.0678, RS: 0.2609, BS: 0.9227), fine-tuning im- proves both accuracy and consistency, incorporat- ing triplet loss further boosts response reliability, and models fine-tuned on individual synthetic vari- ations outperform the baseline in both accuracy and consistency. For Gemma, the “B + SFT + TL + ALL” model achieves the highest consistency metrics (EM: 0.3382, RS: 0.4731), similar to the trend observed for Llama, where combined-data models also prioritize consistency over raw accu- racy. The merging strategy consistently delivers the most robust and balanced performance across all metrics. Key differences are notable, however. The Gemma baseline shows lower initial accuracy and EM than the Llama baseline, suggesting a larger model does not automatically guarantee consistent responses. The merged Gemma model attains the highest ROUGE (0.5356), BLEU (0.3416), and BS (0.9373), slightly outperforming Llama’s merged model on accuracy and semantic similarity, though EM is slightly lower than Gemma’s “B + SFT + TL + ALL” model, indicating a minor trade-off in exact match consistency. Overall, while the pattern of improvement, base- line →fine-tuned →triplet-loss →specialized vari- ation →merged, is consistent across both model families, the larger Gemma-3-12B-Instruct bene- fits more from combined data fine-tuning, achiev- ing higher accuracy and semantic similarity, while the merging strategy ensures robust and balanced performance similar to both Llama and Gemma models. 5 Conclusion In this work, we identify four types of semantically insignificant query variations that cause inconsis- tent LLM responses. We quantify response simi- larity and show that baseline models and standard fine-tuning exhibit low consistency. To address this, we propose a novel approach combining syn- thetic data generation, Triplet Loss training, and layer-wise model merging guided by consistency- oriented weights. Our experiments show the merged model sig- nificantly outperforms baselines and specialized models, achieving superior Exact Match and Re- sponse Similarity scores, thus demonstrating en- hanced consistency while maintaining strong ac- curacy. This work presents a compelling pathway towards more trustworthy LLMs and opens avenues for future research, including adaptive merging, ex- panded consistency definitions, and the application of this method to diverse public datasets. We also plan to construct and publicly release benchmarks that mimic the identified query variations to further evaluate and demonstrate the effectiveness of our approach. Additional future directions include ad- dressing inconsistency cases arising from retrievers, which were beyond the scope of this study. 6 Limitations While the proposed work has been evaluated on in- dustrial data, there is scope to create public bench- mark and evaluate the method. We will explore creating creating benchmarks for evaluating consis- tency in responses due to variations in the query. The work is limited to variations in the query where the retrievers results don’t significantly change. This can explored as future direction for research. Finally, we experimented with 2 Large Language Models with optimal settings for fine-tuning. There is scope to explore additional hyper parameter con- figurations. References Abdelrahman Abdallah, Jamshid Mozafari, Bhawna Piryani, Mohammed Ali, and Adam Jatowt. 2025. From retrieval to generation: Comparing different approaches. arXiv preprint arXiv:2502.20245. Tianyu Cao, Neel Bhandari, Akhila Yerukola, Akari Asai, and Maarten Sap. 2025. Out of style: Rag’s fragility to linguistic variation. arXiv preprint arXiv:2504.08231. Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, He He, and Jianfeng Gao. 2025a. To- wards consistent natural-language explanations via explanation-consistency finetuning. In Proceedings of the 31st International Conference on Computa- tional Linguistics, pages 7558–7568. Zhijun Chen, Jingzheng Li, Pengpeng Chen, Zhuoran Li, Kai Sun, Yuankai Luo, Qianren Mao, Dingqi Yang, Hailong Sun, and Philip S Yu. 2025b. Harnessing multiple large language models: A survey on llm ensemble. arXiv preprint arXiv:2502.18036. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhi- lasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Transac- tions of the Association for Computational Linguis- tics, 9:1012–1031. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schel- ten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mi- tra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. Preprint, arXiv:2407.21783. Noah Lee, Jiwoo Hong, and James Thorne. 2025. Evalu- ating the consistency of LLM evaluators. In Proceed- ings of the 31st International Conference on Compu- tational Linguistics, pages 10650–10659. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, and 1 others. 2020. Retrieval-augmented gen- eration for knowledge-intensive nlp tasks. Advances in neural information processing systems, 33:9459– 9474. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311–318. Aditya Patwardhan, Vivek Vaidya, and Ashish Kundu. 2024. Automated Consistency Analysis of LLMs . In 2024 IEEE 6th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA), pages 118–127. Yao Qiang, Subhrangshu Nandi, Ninareh Mehrabi, Greg Ver Steeg, Anoop Kumar, Anna Rumshisky, and Aram Galstyan. 2024. Prompt perturbation con- sistency learning for robust language models. arXiv preprint arXiv:2402.15833. Harsh Raj, Vipul Gupta, Domenic Rosati, and Sub- habrata Majumdar. 2025a. Improving consistency in large language models through chain of guidance. Preprint, arXiv:2502.15924. Harsh Raj, Vipul Gupta, Domenic Rosati, and Sub- habrata Majumdar. 2025b. Semantic consistency for assuring reliability of large language models. Preprint, arXiv:2308.09138. Harsh Raj, Domenic Rosati, and Subhabrata Majum- dar. 2023. Measuring reliability of large language models through semantic consistency. Preprint, arXiv:2211.05853. Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982–3992. Ashutosh Sathe, Divyanshu Aggarwal, and Sunayana Sitaram. 2025. Improving consistency in LLM infer- ence using probabilistic tokenization. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4766–4778. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015a. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815–823. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015b. Facenet: A unified embedding for face recognition and clustering. In 2015 IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR), pages 815–823. Mingyang Song and Mao Zheng. 2024. A survey of query optimization in large language models. arXiv preprint arXiv:2412.17558. Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report. Preprint, arXiv:2503.19786. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. Preprint, arXiv:2203.11171. Shangyu Wu, Ying Xiong, Yufei Cui, Haolun Wu, Can Chen, Ye Yuan, Lianming Huang, Xue Liu, Tei- Wei Kuo, Nan Guan, and 1 others. 2024. Retrieval- augmented generation for natural language process- ing: A survey. arXiv preprint arXiv:2407.13193. Xiaoyuan Wu, Weiran Lin, Omer Akgul, and Lujo Bauer. 2025. Estimating llm consistency: A user baseline vs surrogate metrics. Preprint, arXiv:2505.23799. Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raf- fel, and Mohit Bansal. 2023. Ties-merging: resolving interference when merging models. In Proceedings of the 37th International Conference on Neural Infor- mation Processing Systems, pages 7093 – 7115. Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024. Language models are super mario: absorb- ing abilities from homologous models as a free lunch. In Proceedings of the 41st International Conference on Machine Learning, pages 57755 – 57775. Kepu Zhang, Zhongxiang Sun, Weijie Yu, Xiaoxue Zang, Kai Zheng, Yang Song, Han Li, and Jun Xu. 2025. QE-RAG: A robust retrieval-augmented gen- eration benchmark for query entry errors. arXiv preprint arXiv:2504.04062. Fufangchen Zhao, Guoqiang Jin, Jiaheng Huang, Rui Zhao, and Fei Tan. 2024. Consistency matters: Ex- plore llms consistency from a black-box perspective. Preprint, arXiv:2402.17411. Guido Zuccon, Joao Palotti, and Allan Hanbury. 2016. Query variations and their effect on comparing infor- mation retrieval systems. In Proceedings of the 25th ACM international on conference on information and knowledge management, pages 691–700.
Harmonizing Diverse Models: A Layer-wise Merging Strategy for Consistent Generation Xujun Peng, Anoop Kumar, Jingyu Wu, Parker Glenn, Daben Liu AI Foundations, Capital One McLean, VA, USA {xujun.peng, anoop.kumar, jingyu.wu, parker.glenn, Abstract Retrieval-Augmented Generation (RAG) systems leverage Large Language Models (LLMs) to generate accurate and reliable responses that are grounded in retrieved context. However, LLMs often generate inconsistent outputs for semantically equivalent inputs, a problem compounded by the scarcity of consistencyfocused training data and the limitations of current fine-tuning techniques in enhancing output consistency. We propose a new approach combining systematic synthetic data generation, triplet loss for better embeddings, and a novel layer-wise model merging approach. Using consistency-aware weights derived from intermediate layer activations, our method effectively integrates knowledge from specialized models. Experimental results how that our merged model significantly enhances output consistency, achieving a 47.5% improvement in response similarity over the baseline, thus offering a practical solution for increasing the reliability of an industrial RAG system. 1 Introduction LLMs have demonstrated remarkable capabilities in natural language understanding and generation, enabling breakthroughs across a broad spectrum of applications such as question answering and summarization. RAG has emerged as a powerful paradigm that combines the generative strength of LLMs with external knowledge retrieval to enhance factuality, reduce hallucination, and extend context beyond model limitations (Lewis et al., 2020; Wu et al., 2024). Despite their potential, RAG systems often generate inconsistent responses, for minor and semantically insignificant variations in the input query or the prompt (Song and Zheng, 2024). This inconsistency manifests itself in various forms, including contradictory responses, variability in factual grounding, and fluctuations in the level of detail or confidence expressed by the model. This unpredictability not only undermines the reliability of RAG systems but also poses challenges for their adoption in high-stakes or knowledge-sensitive domains such as finance, healthcare, and scientific research. As shown in Figure 1, the mere presence or absence of a question mark can dramatically alter the response of a RAG based QA system. In an industrial production deployment, there could several such variations in how users query the system, posing challenges in the adoption of RAG systems. RAG Vietnam national cricket team will debut at what competitions at Kinrara Oval? Vietnam national cricket team will debut at what competitions at Kinrara Oval The answer cannot be determined from the provided context. The context only mentions that the Vietnam national cricket team will debut in the cricket tournament at the 2017 Southeast Asian Games in Kuala Lumpur, Malaysia, but it does not specify the competitions at Kinrara Oval. The Vietnam national cricket team will debut in the cricket tournament at the 2017 Southeast Asian Games in Kuala Lumpur, Malaysia. Users Query LLM's responses Figure 1: Variability in LLM Responses from Subtle Query Differences. In RAG systems, two key models work together: the retriever and the generator. The retriever is responsible for fetching relevant content based on a user query or prompt, while the generator creates a coherent and contextually appropriate answer by leveraging both the query and the retrieved content. Inconsistencies may arise during either the retrieval or the generation process, leading to varied responses. However, our empirical observations and Zuccon et al. (2016), Abdallah et al. (2025) indicate that generators are more sensitive and less consistent than retrievers to minor variations in queries. While retrievers tend to be consistent, even in the face of minor, semantically insignificant variations in the input query (e.g., different phrasings of the same question), generators exhibit higher variability (Cao et al., 2025). Small changes in phrasing or query structure can lead to different answers being generated, even when the retrieved content remains the same. This difference in behavior highlights the challenges faced by generative 16 Oct 2025 models in maintaining consistency, especially in tasks where precise and coherent responses are required. In this work, we focus on characterizing, measuring, and mitigating such inconsistency. Our main contributions are as follows: 1. We characterize different types of query variations that lead to inconsistent responses in an industrial RAG system. 2. We identify metrics and demonstrate inconsistency in the question answering task. 3. We develop novel layer-wise merging approach to reduce inconsistency without regressing on accuracy. 2 Related Work We review the literature to establish a clear understanding of what constitutes consistency, how it is measured, and strategies to improve consistency in context of LLMs. A widely accepted concept, as proposed by Patwardhan et al. (2024), defines consistency as the semantic equivalence of two responses generated by the same LLM when prompted with identical or semantically similar queries multiple times. Evaluating and improving LLM consistency remains challenging due to a lack of targeted benchmarks and datasets, prompting research into specialized evaluation methods. Elazar et al. (2021) contributed a valuable resource for factual consistency measurement in pre-trained LLMs and a novel consistency loss to enhance performance even on new data. To explore prompt sensitivity, Raj et al. (2025b) introduced PromptSET, Qiang et al. (2024) applied perturbations to highlight the difficulties in predicting how minor prompt changes affect LLMs. For evaluation, Zhao et al. (2024) propose an automated consistency assessment tool using a custom data set. Addressing inconsistency in natural-language explanations, Chen et al. (2025a) developed EC-finetuning, that trains on synthetic data to increase consistency. To address the need for reliable LLM output in NLG, especially under semantically equivalent inputs, Raj et al. (2023) proposed a framework to measure semantic consistency via output agreement, showing strong correlation with human judgments across domains. Complementing this, Wu et al. (2025) introduced a logit-based ensemble method aligned with human perceptions through a user study. Lee et al. (2025) examined LLM consistency as automated evaluators, focusing on their reliability in scoring identical items. Recent work has focused on actively improving LLM consistency. Raj et al. (2025a) used multi-step prompting and synthetic data for semantic alignment. Sathe et al. (2025) enhanced selfconsistency via multiple input tokenizations. Wang et al. (2023) improved reasoning by sampling diverse outputs and selecting by voting. 3 Methodology In this section, we describe our methodology for improving the consistency of LLM responses, specifically within the context of a RAG system. We observe that the retriever in our RAG system provides accurate context, but minor variations in the query often lead to inconsistent response from the generator. This work focuses on improving generator consistency under such variations, assuming stable retrieval quality. Addressing retrieval inconsistency is left for future work. Our work aims to improve the RAG generator's consistency. By analyzing human-annotated data, we constructed diverse synthetic datasets to train multiple individual generator models. To achieve higher response consistency, we developed a novel consistency-focused, layer-wise model merging approach, building upon DARE-TIES (Yu et al., 2024; Yadav et al., 2023). This strategy allowed us to effectively combine knowledge from individual models trained on diverse synthetic data. 3.1 Synthetic Data Generation A direct approach to improving LLM consistency is to train on all possible input variations. However, this is impractical due to the vast number of potential variations and limited data availability - especially in domains like healthcare, where data is fragmented and complex, and finance, where privacy regulations constrain access. Given these limitations, synthetic data provides a practical way to improve consistency when real data is scarce. While prior work has documented a broad range of general query variations - such as typos, synonyms, keyboard proximity errors, and paraphrases (Zhang et al., 2025; Wu et al., 2025) - it does not capture several nuanced variations observed in large-scale industrial RAG systems. Our analysis of production queries shows that a small set of key variations accounts for most input diverTable 1: Illustrative Query Variations Leading to Different Answers. Variation Type Query Query' Variation - How do/to how do we manage customer feedback at end of project how to manage customer feedback at end of project I vs. we can we drive to a grocery store can I drive to a grocery store Singular vs. plural delivering packages for shipment delivering package for shipment Article omissions how to add a contact to a phone book how to add contacts to phone books sity, often involving subtle rephrasings (e.g., "how to we manage an account" vs. "how to manage an account") rather than simple surface-level errors. Table 1 lists these variation types with representative examples. Characterizing and accounting for such variations is critical to improving system robustness and building user trust in real-world RAG applications. Based on the analysis of our dataset, we've identified three main types of query variations: How to/do variations: These queries often involve rephrasing questions about methods or actions. We used regular expression rules to systematically create additional queries of this nature. Singular/Plural/Article variations: This category covers changes in noun quantity (e.g., "apple" vs. "apples") and the use of articles (e.g., "a", "an", "the"). To synthesize more of these variations, we randomly interchanged singular and plural forms and substituted or modified articles. Semantic variations: These are changes in wording that maintain the same core meaning but use different vocabulary or phrasing. For semantic variations, we leveraged a pretrained LLM (Llama-3.1-70B-Instruct) to paraphrase our queries (Grattafiori et al., 2024). We used these synthetic queries to run our IR system, capturing updated contexts for our RAG system. This process generated enriched training and test datasets with a wide array of input variations. Rather than training/fine-tuning a single LLM with all the real-world and synthetic data, we opted to train/fine-tune multiple specialized models, each focusing on a different category of input variations. This approach allows each model to excel at the specific underlying tasks associated with its particular query type. 3.2 Triplet Loss Training Unlike traditional LLM fine-tuning that relies solely on cross-entropy loss, we incorporate triplet loss during our fine-tuning phase. Triplet Loss (Schroff et al., 2015a) is a widely used loss function in metric learning, used in tasks such as face recognition and semantic search, to learn embeddings that pull similar items closer while pushing dissimilar ones apart. The core idea of Triplet Loss is to train on triplets of data points: an anchor A, a positive P that is similar to the anchor, and a negative N that is dissimilar. The objective is to ensure that the distance between A and P is smaller than that between A and N. The triplet Loss function is formulated as: L(A, P, N) = max(0, d(f(A), f(P)) -d(f(A), f(N)) + α) (1) More details of triplet loss can be found in (Schroff et al., 2015b; Reimers and Gurevych, 2019). In our implementation, triplets were constructed by first choosing an anchor query (A). We then selected its corresponding positive (P) and negative (N) data points by randomly sampling from its top 10 and bottom 10 nearest neighbors, respectively, within the feature space generated by a semantic feature extractor. The final loss function employed during our training and fine-tuning process is a combination of cross-entropy loss and triplet loss, defined as: L = LCE + α · LTriplet, (2) where α is a predefined weighting factor designed to balance the contribution of triplet loss. 3.3 Model Merging With a suite of specialized models, each trained on distinct synthetic datasets, the challenge became generating a single, consistent response without sacrificing accuracy. While conventional ensemble approaches for multiple pre-trained or fine-tuned LLMs involve parallel execution and output combination, they incur significant computational costs and inference latency (Chen et al., 2025b). To address these limitations, model merging offers a solution by consolidating knowledge from multiple pre-trained or fine-tuned models into a single consolidated model. These techniques range from simple averaging to complex algorithms that align features and selectively transfer knowledge. Here, we introduce a novel model merging approach, building on the DARE-TIES merge method (Yu et al., 2024; Yadav et al., 2023), with the main goal of substantially boosting the consistency of the unified model's responses. DARE-TIES merging is a sophisticated model merging algorithm designed to overcome the limitations of simple weight averaging, especially when combining fine-tuned models that originate from a common pre-trained base model but have diverged during training on different tasks or datasets. It operates on the principle of merging the ∆θk = θFk -θP that fine-tuned models apply to a common pre-trained model, rather than directly merging the absolute weights, where θP is the base model's parameters and θFk denotes the parameters of the k-th fine-tuned model. By applying sparsification, sign matching and inverse-scaling on the ∆θk, DARETIES yields the merged model's parameters by: θmerged = θP + N X k=1 ∆θk. (3) To improve consistency with semantically identical inputs, we analyzed the consistency of each LLM layer, then assigned dynamic weights in Equation 3 for merging. To accomplish this, we first formed a development set Sdev of T diverse data points. Then, for each model k and each layer l, we extracted the activations α(l) k ∈RD×T from development set Sdev, where D represents the output feature dimension of layer l. For sequential outputs, we used max-pooling to extract these activations. This process enabled us to compute a similarity matrix Σ(l) k ∈RT×T for the activations of each data point at every layer of model k. Ideally, a model exhibiting high consistency with semantically identical inputs should produce similar activations within a single layer. Conversely, if inputs are semantically distinct, their activations should diverge significantly. Therefore, a consistent model would ideally yield similar similarity matrices Σ(l) k across different layers when presented with the same set of inputs. Leveraging this intuition, we can quantify a model's consistency by comparing the Σ(l) k from different layers. Our approach begins by using a semantic feature extractor (specifically, a sentence transformer) to obtain features for each query in our development set, Sdev. From these features, we computed a reference similarity matrix Σr. Subsequently, we quantified the discrepancy between each layer's similarity matrix Σ(l) k and this reference using the absolute difference: d(l) k = |Σ(l) k -Σr|. Hence, for a specific layer l across our various LLMs, we obtain a set of distance values, DM(l) = [d(l) 1 , d(l) 2 , . . . , d(l) N ]. To convert these distances into weights that indicate a layer's contribution to consistency, we apply an inverted non-linear normalization approach. First, we computed the inverted distance for each layer's distance d(l) k by subtracting it from the maximum distance observed for that layer across all models: ̃d(l) k = max(DM(l)) -d(l) k Next, these inverted distances are normalized to obtain r(l) k : r(l) k = ̃d(l) k PN j=1 ̃d(l) j Finally, we apply a sigmoid function to these normalized inverted distances to derive the final weight w(l) k for layer l of model k: w(l) k = σ(a · r(l) k + b) (4) Here, σ(·) denotes the sigmoid function, and a and b are predefined scaling and offset parameters. Based on the derived consistency-oriented layer weights w(l) k for each model k, we modified Equation 3 to incorporate these weights into the layerwise model merging process: θ(l) merged = θ(l) P + N X k=1 w(l) k · ∆θ(l) k . (5) The final merged LLM is constructed by applying Equation 5 in a layer-wise manner. We outline the algorithm for model merging: Algorithm 1 Consistency-Aware Model Merging 1: Input: Base model θP , fine-tuned models {θFk}N k=1, dev set Sdev 2: Output: Merged model θmerged 3: Compute reference similarity matrix Σr using a sentence encoder on Sdev 4: for each model k and layer l do 5: Extract activations and compute similarity matrix Σ(l) k 6: Compute distance d(l) k = |Σ(l) k -Σr| 7: end for 8: for each layer l do 9: Normalize distances d(l) k to weights w(l) k using inverted scaling and sigmoid 10: Merge: θ(l) merged = θ(l) P + P k w(l) k · (θ(l) Fk - θ(l) P ) 11: end for 12: return θmerged 4 Experiments Our experimental setup utilized a QA engine built on a RAG architecture. For the evaluation of our consistency improvement method, the retriever component was held constant, and the generator component underwent fine-tuning. 4.1 Datasets To fine-tune and evaluate our LLM generator, we used 2,738 representative queries and their retrieved contexts that resemble a production IR system. Domain experts annotated the expected answers, and the data was split into 1,421 training and 1,317 test samples. To get more varied inputs for training our model, we applied the methods detailed in Section 3.1 to create three distinct types of synthetic data. Our synthetic training dataset included 150 "how to/do" variation queries, 1,421 paraphrased queries, and 952 singular/plural/article variation queries. We submit all query variations to the IR system to retrieve their corresponding contexts, which are then used to construct the final inputs. Alongside the 1,317 test samples to measure accuracy, we created a test set to evaluate consistency using our data synthesis methods (Section 3.1). This produced 1,579 variations-176 "how to/do", 912 paraphrases, and 491 singular/plural/article changes-paired with original queries and expected answers for consistency testing. 4.2 Metrics To assess the overall accuracy of the results of our RAG system, we employed the ROUGE-L (Lin, 2004) and BLEU metrics with up to 4-grams (Papineni et al., 2002), comparing the LLM-generated responses against the references provided. To quantify the consistency of LLM response across input variations, we utilized three metrics: exact string match (EM), response similarity (RS) and Bert similarity (BS) measures. Given an original query Q and its variant Q′, with S and S′ representing the respective LLM responses, the exact string match is formally defined as: EM(S, S′) ⇐⇒S = S′. For Response Similarity (RS), we determine semantic equivalence by thresholding the ROUGE score between the LLM's responses S and S′: RS(S, S′) ⇐⇒Rouge(S, S′) > T, where T represents an empirically determined threshold used to ascertain whether two responses are considered semantically identical. Furthermore, we define Bert Similarity (BS) between two LLMs responses S and S′ to quantify the semantic similarity of them, as: BS(S, S′) ⇐⇒Bert(S, S′). 4.3 Model Training and Merging Our experimental setup involved several distinct fine-tuning stages for the Llama-3.1-8B-Instruct model (Grattafiori et al., 2024) and Gemma-3-12BInstruct model (Team et al., 2025). We started by fine-tuning a baseline Llama-3.18B-Instruct and Gemma-3-12B-Instruct models for two epochs, using the original 1,421 training samples and only a cross-entropy loss function. To investigate how triplet loss could boost LLM consistency, all subsequent fine-tuning experiments combined both cross-entropy loss and triplet loss, keeping the hyperparameters consistent with our initial baseline setup. Following this, we fine-tuned five distinct Llama3.1-8B-Instruct LLMs. One was fine-tuned on our base training set exclusively. The other three were fine-tuned on this base set, each augmented with a specific synthetic data type: 176 "how to/do" variations, 912 paraphrased samples, or 491 singular/plural/article variations (more details on this in Section 3.2). The final model was fine-tuned using Table 2: Comparison of Overall Accuracy and Consistency Metrics. Llama-3.1-8B-Instruct based LLMs Gemma-3-12B-Instruct based LLMs ROUGE BLEU EM RS BS ROUGE BLEU EM RS BS B 0.5123 0.2928 0.1051 0.2799 0.9246 0.4692 0.2338 0.0678 0.2609 0.9227 B + SFT 0.5208 0.3125 0.1482 0.3325 0.9266 0.5266 0.3297 0.2242 0.4009 0.9323 B + SFT + TL 0.5460 0.3460 0.1822 0.3530 0.9276 0.5206 0.3194 0.2331 0.4041 0.9337 B + SFT + TL + HTD 0.5493 0.3495 0.2250 0.3867 0.9264 0.5276 0.3255 0.2483 0.4364 0.9351 B + SFT + TL + SEM 0.5330 0.3339 0.2366 0.3965 0.9281 0.4966 0.3042 0.2673 0.4262 0.9314 B + SFT + TL + SPA 0.5364 0.3370 0.2111 0.3692 0.9262 0.5130 0.3170 0.2603 0.4231 0.9332 B + SFT + TL + ALL 0.5198 0.3230 0.2510 0.3986 0.9289 0.4879 0.2974 0.3382 0.4731 0.9357 Merged 0.5379 0.3380 0.2521 0.4129 0.9292 0.5356 0.3416 0.2932 0.4674 0.9373 Abbreviations: B = Baseline (Llama-3.1-8B-Instruct or Gemma-3-12B-Instruct), SFT = Supervised Fine-tuned, TL = Tripletloss, HTD = "How to/do" variation, SEM = Semantic variation, SPA = Singular/Plural/Article variation, ALL = All training data, Merged = Merged model. all available training data combined. Finally, we merged these three individually fine-tuned LLMs using the methodology described in Section 3.3. We repeated the same fine-tuning and merging steps for the Gemma-3-12B-Instruct LLMs to ensure consistent evaluation across model's architectures. All fine-tuned models, including the Llama-3.18B-Instruct and Gemma-3-12B-Instruct baselines, were comprehensively evaluated in two dedicated test sets designed to assess both accuracy and consistency measures, as described in Section 4.1. We present complete experimental results in Table 2. 4.4 Results In Table 1, we present four types of query variations that lead to response inconsistency. Table 2 quantitatively shows that the baseline model (Llama-3.1-8B-Instruct) achieves moderate overlap with human references (ROUGE: 0.5123, BLEU: 0.2928) but demonstrates the lowest consistency (EM: 0.1051, RS: 0.2799, BS: 0.9246). This demonstrates the model often fails to generate consistent responses to semantically equivalent queries. The fine-tuned model, as shown in Table 2, demonstrates a modest improvement over the baseline, w.r.t accuracy and consistency. While it yields somewhat better text overlap and initial gains in consistency, its performance, particularly in EM, RS and BS, suggests that general fine-tuning provides only limited progress towards truly consistent response for varied inputs. Incorporating triplet loss significantly boosts performance across all metrics, as seen by comparing the triplet-loss model to the fine-tuned model in Table 2. The triplet-loss model achieved higher ROUGE (0.5460) and BLEU (0.3460) scores, indicating better content and lexical alignment. The model shows improvement in consistency, the EM score dramatically improved by 73.4% to 0.1822 (from 0.1051), while RS score saw a substantial 26.1% increase to 0.3530 (from 0.2799). These results underscore the effectiveness of integrating triplet loss in fine-tuning strategies for LLMs, leading to significantly more robust and consistent response generation. As shown in the Table 2, individual variation models-specifically the How to/Do,Semantic, and Singular/Plural/Article variation models-consistently outperform the baseline in both accuracy and consistency. This demonstrates the effectiveness of specialized fine-tuning with synthetically generated data Surprisingly, models fine-tuned on individual synthetic datasets outperformed the combined-data model in accuracy. However, the combined model achieved higher consistency, suggesting that merging diverse variation types may introduce conflicting signals or biases that impact accuracy. The merged model consistently delivers the most robust and balanced performance across all metrics, with notable strength in response consistency. In terms of consistency metrics, the merged model achieves the highest scores for both EM at 0.2521, RS at 0.4129 and BS at 0.9292. This performance significantly surpasses all other models. For EM, it represents an impressive 139.87% improvement over the baseline model and still leads the next best model ("B + SFT + TL + ALL" model at 0.2510) by approximately 0.44%. Similarly, its RS score is a 47.52% improvement over the baseline and approximately 3.59% higher than the second best model. This indicates that the merging strategy is highly effective in ensuring LLM responses are more reliably identical or semantically equivalent even when faced with varied inputs. Regarding accuracy-based metrics, while the merged model's ROUGE score (0.5379) and BLEU score (0.3380) are marginally lower than the top performer (the "B + SFT + TL + HTD" with ROUGE 0.5493 and BLEU 0.3495), they are still very good and highly competitive. This demonstrates that the merging process successfully enhances consistency without a significant trade-off in overall accuracy or fluency. The merged model effectively combines the strengths of its constituent specialized models, making it the most wellrounded and high-performing solution for both accurate and consistent RAG system responses. Table 2 also reports accuracy and consistency metrics for Gemma-3-12B-Instruct models. Overall, the trend of model improvements closely mirrors that of the Llama-3.1-8B-Instruct experiments: baseline models exhibit moderate ROUGE and BLEU scores with the lowest consistency (EM: 0.0678, RS: 0.2609, BS: 0.9227), fine-tuning improves both accuracy and consistency, incorporating triplet loss further boosts response reliability, and models fine-tuned on individual synthetic variations outperform the baseline in both accuracy and consistency. For Gemma, the "B + SFT + TL + ALL" model achieves the highest consistency metrics (EM: 0.3382, RS: 0.4731), similar to the trend observed for Llama, where combined-data models also prioritize consistency over raw accuracy. The merging strategy consistently delivers the most robust and balanced performance across all metrics. Key differences are notable, however. The Gemma baseline shows lower initial accuracy and EM than the Llama baseline, suggesting a larger model does not automatically guarantee consistent responses. The merged Gemma model attains the highest ROUGE (0.5356), BLEU (0.3416), and BS (0.9373), slightly outperforming Llama's merged model on accuracy and semantic similarity, though EM is slightly lower than Gemma's "B + SFT + TL + ALL" model, indicating a minor trade-off in exact match consistency. Overall, while the pattern of improvement, baseline →fine-tuned →triplet-loss →specialized variation →merged, is consistent across both model families, the larger Gemma-3-12B-Instruct benefits more from combined data fine-tuning, achieving higher accuracy and semantic similarity, while the merging strategy ensures robust and balanced performance similar to both Llama and Gemma models. 5 Conclusion In this work, we identify four types of semantically insignificant query variations that cause inconsistent LLM responses. We quantify response similarity and show that baseline models and standard fine-tuning exhibit low consistency. To address this, we propose a novel approach combining synthetic data generation, Triplet Loss training, and layer-wise model merging guided by consistencyoriented weights. Our experiments show the merged model significantly outperforms baselines and specialized models, achieving superior Exact Match and Response Similarity scores, thus demonstrating enhanced consistency while maintaining strong accuracy. This work presents a compelling pathway towards more trustworthy LLMs and opens avenues for future research, including adaptive merging, expanded consistency definitions, and the application of this method to diverse public datasets. We also plan to construct and publicly release benchmarks that mimic the identified query variations to further evaluate and demonstrate the effectiveness of our approach. Additional future directions include addressing inconsistency cases arising from retrievers, which were beyond the scope of this study. 6 Limitations While the proposed work has been evaluated on industrial data, there is scope to create public benchmark and evaluate the method. We will explore creating creating benchmarks for evaluating consistency in responses due to variations in the query. The work is limited to variations in the query where the retrievers results don't significantly change. This can explored as future direction for research. Finally, we experimented with 2 Large Language Models with optimal settings for fine-tuning. There is scope to explore additional hyper parameter configurations. References Abdelrahman Abdallah, Jamshid Mozafari, Bhawna Piryani, Mohammed Ali, and Adam Jatowt. 2025. From retrieval to generation: Comparing different approaches. arXiv preprint . Tianyu Cao, Neel Bhandari, Akhila Yerukola, Akari Asai, and Maarten Sap. 2025. Out of style: Rag's fragility to linguistic variation. arXiv preprint . Yanda Chen, Chandan Singh, Xiaodong Liu, Simiao Zuo, Bin Yu, He He, and Jianfeng Gao. 2025a. Towards consistent natural-language explanations via explanation-consistency finetuning. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7558-7568. Zhijun Chen, Jingzheng Li, Pengpeng Chen, Zhuoran Li, Kai Sun, Yuankai Luo, Qianren Mao, Dingqi Yang, Hailong Sun, and Philip S Yu. 2025b. Harnessing multiple large language models: A survey on llm ensemble. arXiv preprint . Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics, 9:1012-1031. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad AlDahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. Preprint, . Noah Lee, Jiwoo Hong, and James Thorne. 2025. Evaluating the consistency of LLM evaluators. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10650-10659. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, and 1 others. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in neural information processing systems, 33:94599474. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Aditya Patwardhan, Vivek Vaidya, and Ashish Kundu. 2024. Automated Consistency Analysis of LLMs . In 2024 IEEE 6th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (TPS-ISA), pages 118-127. Yao Qiang, Subhrangshu Nandi, Ninareh Mehrabi, Greg Ver Steeg, Anoop Kumar, Anna Rumshisky, and Aram Galstyan. 2024. Prompt perturbation consistency learning for robust language models. arXiv preprint . Harsh Raj, Vipul Gupta, Domenic Rosati, and Subhabrata Majumdar. 2025a. Improving consistency in large language models through chain of guidance. Preprint, . Harsh Raj, Vipul Gupta, Domenic Rosati, and Subhabrata Majumdar. 2025b. Semantic consistency for assuring reliability of large language models. Preprint, . Harsh Raj, Domenic Rosati, and Subhabrata Majumdar. 2023. Measuring reliability of large language models through semantic consistency. Preprint, . Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992. Ashutosh Sathe, Divyanshu Aggarwal, and Sunayana Sitaram. 2025. Improving consistency in LLM inference using probabilistic tokenization. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4766-4778. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015a. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015b. Facenet: A unified embedding for face recognition and clustering. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 815-823. Mingyang Song and Mao Zheng. 2024. A survey of query optimization in large language models. arXiv preprint . Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report. Preprint, . Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. Preprint, . Shangyu Wu, Ying Xiong, Yufei Cui, Haolun Wu, Can Chen, Ye Yuan, Lianming Huang, Xue Liu, TeiWei Kuo, Nan Guan, and 1 others. 2024. Retrievalaugmented generation for natural language processing: A survey. arXiv preprint . Xiaoyuan Wu, Weiran Lin, Omer Akgul, and Lujo Bauer. 2025. Estimating llm consistency: A user baseline vs surrogate metrics. Preprint, . Prateek Yadav, Derek Tam, Leshem Choshen, Colin Raffel, and Mohit Bansal. 2023. Ties-merging: resolving interference when merging models. In Proceedings of the 37th International Conference on Neural Information Processing Systems, pages 7093 - 7115. Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024. Language models are super mario: absorbing abilities from homologous models as a free lunch. In Proceedings of the 41st International Conference on Machine Learning, pages 57755 - 57775. Kepu Zhang, Zhongxiang Sun, Weijie Yu, Xiaoxue Zang, Kai Zheng, Yang Song, Han Li, and Jun Xu. 2025. QE-RAG: A robust retrieval-augmented generation benchmark for query entry errors. arXiv preprint . Fufangchen Zhao, Guoqiang Jin, Jiaheng Huang, Rui Zhao, and Fei Tan. 2024. Consistency matters: Explore llms consistency from a black-box perspective. Preprint, . Guido Zuccon, Joao Palotti, and Allan Hanbury. 2016. Query variations and their effect on comparing information retrieval systems. In Proceedings of the 25th ACM international on conference on information and knowledge management, pages 691-700.
2510.14927
Astronomy & Astrophysics manuscript no. aa55684-25corr ©ESO 2025 October 17, 2025 NIRPS and TESS reveal a peculiar system around the M dwarf TOI-756: A transiting sub-Neptune and a cold eccentric giant Léna Parc1, ∗, François Bouchy1, Neil J. Cook2, Nolan Grieves1, Étienne Artigau2, 3, Alexandrine L’Heureux2, René Doyon2, 3, Yuri S. Messias2, 4, Frédérique Baron2, 3, Susana C. C. Barros5, 6, Björn Benneke2, Xavier Bonfils7, Marta Bryan8, Bruno L. Canto Martins4, Ryan Cloutier9, Nicolas B. Cowan10, 11, Daniel Brito de Freitas12, Jose Renan De Medeiros4, Xavier Delfosse7, Elisa Delgado-Mena13, 5, Xavier Dumusque1, David Ehrenreich1, 14, Pedro Figueira1, 5, Jonay I. González Hernández15, 16, David Lafrenière2, Izan de Castro Leão4, Christophe Lovis1, Lison Malo2, 3, Claudio Melo17, Lucile Mignon1, 7, Christoph Mordasini18, Francesco Pepe1, Rafael Rebolo15, 16, 19, Jason Rowe20, Nuno C. Santos5, 6, Damien Ségransan1, Alejandro Suárez Mascareño15, 16, Stéphane Udry1, Diana Valencia8, Gregg Wade21, Manuel Abreu22, 23, José L. A. Aguiar4, Khaled Al Moulla5, 1, Guillaume Allain24, Romain Allart2, Jose Manuel Almenara7, Tomy Arial3, Hugues Auger24, Luc Bazinet2, Nicolas Blind1, David Bohlender25, Isabelle Boisse26, Anne Boucher2, Vincent Bourrier1, , Sébastien Bovay1, Pedro Branco6, 5, Christopher Broeg18, 27, Denis Brousseau24, Alexandre Cabral22, 23, Charles Cadieux2, Andres Carmona7, Yann Carteret1, Zalpha Challita2, 26, David Charbonneau28, Bruno Chazelas1, Catherine A. Clark29, João Coelho22, 23, Marion Cointepas1, 7, Karen A. Collins28, Kevin I. Collins30, Uriel Conod1, Eduardo Cristo5, 6, Ana Rita Costa Silva5, 6, 1, Antoine Darveau-Bernier2, Laurie Dauplaise2, Jean-Baptiste Delisle1, Roseane de Lima Gomes2, 4, João Faria1, 5, Dasaev O. Fontinele4, Thierry Forveille7, Yolanda G. C. Frensch1, 31, Jonathan Gagné32, 2, Frédéric Genest2, Ludovic Genolet1, João Gomes da Silva5, Félix Gracia Témich15, Nicole Gromek9, Olivier Hernandez32, Melissa J. Hobson1, H. Jens Hoeijmakers33, 1, Norbert Hubin17, Marziye Jafariyazani34, Farbod Jahandar2, Ray Jayawardhana35, Hans-Ulrich Käufl17, Dan Kerley25, Johann Kolb17, Vigneshwaran Krishnamurthy10, Benjamin Kung1, Pierrot Lamontagne2, Pierre Larue7, Henry Leath1, Olivia Lim2, Gaspare Lo Curto31, Allan M. Martins4, 1, Elisabeth C. Matthews36, Jaymie Matthews37, Jean-Sébastien Mayer3, Stan Metchev38, Lina Messamah1, Leslie Moranta2, 32, Dany Mounzer1, , Nicola Nari39, 15, 16, Louise D. Nielsen1, 17, 40, Ares Osborn9, 7, Mathieu Ouellet3, Jon Otegi1, Luca Pasquini17, Vera M. Passegger15, 16, 41, 42, Stefan Pelletier1, 2, Céline Peroux17, Caroline Piaulet-Ghorayeb2, 43, Mykhaylo Plotnykov8, Emanuela Pompei31, Anne-Sophie Poulin-Girard24, José Luis Rasilla15, Vladimir Reshetov25, Jonathan Saint-Antoine2, 3, Mirsad Sarajlic18, Ivo Saviane31, Robin Schnell1, Alex Segovia1, Julia Seidel31, 44, 1, Armin Silber31, Peter Sinclair31, Michael Sordet1, Danuta Sosnowska1, Avidaan Srivastava2, 1, Atanas K. Stefanov15, 16, Márcio A. Teixeira4, Simon Thibault24, Philippe Vallée2, 3, Thomas Vandal2, Valentina Vaulato1, Joost P. Wardenier2, Bachar Wehbe22, 23, Drew Weisserman9, Ivan Wevers25, François Wildi1, Vincent Yariv7, Gérard Zins17 (Affiliations can be found after the references) Received 27 May 2025 / Accepted 25 July 2025 ABSTRACT Context. The Near InfraRed Planet Searcher (NIRPS) joined HARPS on the 3.6-m ESO telescope at La Silla Observatory in April 2023, dedicating part of its Guaranteed Time Observations (GTO) program to the radial velocity follow-up of TESS planet candidates to confirm and characterize transiting planets around M dwarfs. Aims. We present the "Sub-Neptunes" subprogram of the NIRPS-GTO, aimed at investigating the composition and formation of sub-Neptunes orbiting M dwarfs. We report the first results of this program with the characterization of the TOI-756 system, which consists of TOI-756 b, a transiting sub-Neptune candidate detected by TESS, as well as TOI-756 c, an additional non-transiting planet discovered by NIRPS and HARPS. Methods. We analyzed TESS and ground-based photometry, high-resolution imaging, and high-precision radial velocities (RVs) from NIRPS and HARPS to characterize the two newly discovered planets orbiting TOI-756, as well as to derive the fundamental properties of the host star. A dedicated approach was employed for the NIRPS RV extraction to mitigate telluric contamination, particularly when the star’s systemic velocity was shown to overlap with the barycentric Earth radial velocity. Results. TOI-756 is a M1V-type star with an effective temperature of Teff ∼3657 K and a super-solar metallicity ([Fe/H]) of 0.20±0.03 dex. TOI-756 b is a 1.24-day period sub-Neptune with a radius of 2.81 ± 0.10 R⊕and a mass of 9.8+1.8 −1.6 M⊕. TOI-756 c is a cold eccentric (ec = 0.45 ± 0.01) giant planet orbiting with a period of 149.6 days around its star with a minimum mass of 4.05 ± 0.11 MJup. Additionally, a linear trend of 146 m s−1 yr−1 is visible in the radial velocities, hinting at a third component, possibly in the planetary or brown dwarf regime. Conclusions. We present the discovery and characterization of the transiting sub-Neptune TOI-756 b and the non-transiting eccentric cold giant TOI-756 c. This system is unique in the exoplanet landscape, standing as the first confirmed example of such a planetary architecture around an M dwarf. With a density of 2.42 ± 0.49 g cm−3, the inner planet, TOI-756 b, is a volatile-rich sub-Neptune. Assuming a pure H/He envelope, we inferred an atmospheric mass fraction of 0.023 and a core mass fraction of 0.27, which is well constrained by stellar refractory abundances derived from NIRPS spectra. It falls within the still poorly explored radius cliff and at the lower boundary of the Neptune desert, making it a prime target for a future atmospheric characterization with JWST to improve our understanding of this population. Key words. techniques: photometric – techniques: radial velocities – planets and satellites: detection – planets and satellites: composition – planets and satellites: formation – stars: low-mass Article number, page 1 arXiv:2510.14927v1 [astro-ph.EP] 16 Oct 2025 A&A proofs: manuscript no. aa55684-25corr Article number, page 2 L. Parc, et al.: The peculiar TOI-756 system 1. Introduction Since the discovery of a giant exoplanet orbiting 51 Pegasi (Mayor & Queloz 1995), nearly 5900 exoplanets have been de- tected1, showcasing an incredible variety of planetary systems and greatly enhancing our understanding of planet formation and evolution. Notably, space-based missions such as Kepler (Borucki et al. 2010) and TESS (Ricker et al. 2014) have re- vealed the prevalence of a population of exoplanets with sizes between Earth and Neptune, known as super-Earths and sub- Neptunes. This group of smaller planets (with radii between 1 and 4 R⊕) is not present in our Solar System, yet more than half of all Sun-like stars in the Galaxy are believed to host a sub- Neptune within 1 AU (e.g. Batalha et al. 2013; Petigura et al. 2013; Marcy et al. 2014). M dwarfs are the most abundant stars in our Galaxy (Henry et al. 2006; Winters et al. 2015; Reylé et al. 2021) and the search for exoplanets around these low-mass stars has gained significant attention in recent years. Indeed, they appear to have a high occurrence rate of planets, particularly of rocky planets and sub-Neptunes (Dressing & Charbonneau 2013; Bonfils et al. 2013; Dressing & Charbonneau 2015; Mulders et al. 2015; Gaidos et al. 2016; Mignon et al. 2025). In addition, exoplanets that transit M dwarfs are of particular interest, as their small size and low irradiation levels allow for easier detection of smaller and cooler planets, such as those located within the habitable zone of their star, than around larger and hotter stars. TESS was specially designed to be sensitive to these redder, cooler stars, but the relative faintness of M dwarfs in the visible spectrum has hindered a comprehensive characterization of the planetary sys- tems via ground-based follow-ups. Indeed, the empirical pop- ulation of known planets hosted by low-mass stars later than mid-K spectral type is smaller by nearly an order of magnitude than planets around Sun-like stars (Cloutier & Menou 2020). In particular, the PlanetS catalog2 (Parc et al. 2024; Otegi et al. 2020) of well-characterized planets (with precisions σ on in- ferred masses M and radius R of σM/M < 25% ; σR/R < 8%) only contains 80 planets around M dwarfs, compared to 745 around earlier-type stars. However, the recent development of high-resolution near-infrared spectrographs such as the Near In- fraRed Planet Searcher (NIRPS ; Bouchy et al. 2017, 2025) has enabled efficient radial velocity (RV) follow-up of these tran- siting exoplanets. NIRPS represents a breakthrough in this re- spect, allowing for precise measurements of RVs of M dwarfs too faint for HARPS, breaking the meter-per-second barrier in the infrared (Suárez Mascareño et al. 2025). The precise characterization of the radius and mass of these planets is crucial for deriving the bulk density. This, in turn, allows us to study the planet’s internal structure and compo- sition, offering insights into the relative mass fractions of its components, such as the iron core, mantle, atmosphere, and total mass fraction of water (Dorn et al. 2015; Brugger et al. 2017; Plotnykov & Valencia 2024). It is also necessary for at- mospheric characterization via transmission spectroscopy, as the scale height of atmospheres is inversely proportional to surface gravity (Batalha et al. 2019). Understanding the compositional differences of planets hosted by M dwarfs is crucial for compre- hending their distinct planet formation environments, as M-type stars have a longer hot protostellar phases (Baraffe et al. 1998, 2015), lower protoplanetary disk masses (Pascucci et al. 2016), and higher and longer activity at young ages compared to FGK- type stars (Ribas et al. 2005). 1 https://exoplanetarchive.ipac.caltech.edu/ 2 https://dace.unige.ch/exoplanets/ While studies show that low-mass planets appear to be more numerous around close-in M-dwarf systems than Solar-type stars, the differences among how M-dwarf environments influ- ence the composition of planets of a given radius remain un- certain. Cloutier & Menou (2020) found an increase in the fre- quency of close-in rocky planets around increasingly lower-mass stars and that the relative occurrence rate of rocky to non-rocky planets increases ∼6–30 times around mid-M dwarfs compared to mid-K dwarfs. However, they did not firmly identify the phys- ical cause of this trend. Furthermore, Kubyshkina & Vidotto (2021) modeled the evolution of a wide range of sub-Neptune- like planets orbiting stars of different masses and evolution- ary histories. They found that atmospheric escape of planets with the same equilibrium temperature ranges occurs more ef- ficiently around lower mass stars, indirectly supporting the find- ings of Cloutier & Menou (2020). A key question that remains is whether M-dwarf planets primarily form as bare rocky plan- ets, or if they form with an a envelope that they subsequently lose. Despite the evidence suggesting that M dwarfs tend to form more rocky planets, Parc et al. (2024) identified statistical evi- dence for small well-characterized sub-Neptunes (1.8 R⊕< Rp < 2.8 R⊕) being less dense around M dwarfs than around FGK dwarfs, hinting that these planets are ice-rich and would, hence, be likely migrated objects that accreted most of their solids beyond the iceline (e.g., Alibert & Benz 2017; Venturini et al. 2020, 2024; Burn et al. 2021, 2024). However, the sample of these sub-Neptunes orbiting M dwarfs is still small and more well-characterized planets are needed in this parameter space to determine whether this low-density trend is consistent. On the other hand, giant planets with masses comparable to Jupiter are very infrequent around M dwarfs. Recent simulations of planet formation suggest that their occurrence rate declines sharply with decreasing stellar mass, potentially reaching zero for the lowest-mass stars (Burn et al. 2021). Nevertheless, such planets do exist, although they appear to be significantly less common than around FGK-type stars (e.g., Bonfils et al. 2013; Bryant et al. 2023; Pass et al. 2023; Mignon et al. 2025). Unlike small exoplanets, giant planets are expected to form at larger orbital separations from their host star, where more material is available (Alexander & Pascucci 2012; Bitsch et al. 2015). As a result, this population is more affected by the observational bi- ases of the transit method, as they orbit farther from small stars, making their detection more challenging. The RV method is less sensitive to this bias given the large RV signal induced by mas- sive planets, even at longer periods, but the monitoring over long baselines to see these giant planet signals is costly and not often done. Finally, increasing this sample will provide crucial con- straints on the formation and evolution of giant planets around M dwarfs. This paper is structured as follows: in Sect. 2, we provide a description of the "Sub-Neptunes" NIRPS-GTO SP2 subpro- gram. In Sect. 3, we present the space- and ground-based obser- vations taken by TESS, LCO-CTIO, and ExTrA, as well as the NIRPS+HARPS RV observations. Sect. 4 details how we deter- mined the host star parameters from both NIRPS and HARPS high-resolution spectra and photometric observations. In Sect. 5, we present the global photometric and RV analysis and its re- sults. Finally, in Sect. 6, we discuss the system. We present our conclusions in Sect. 7. Article number, page 3 A&A proofs: manuscript no. aa55684-25corr 2. Exploring the composition and formation of sub-Neptunes orbiting M dwarfs with NIRPS NIRPS began operations in April 2023, initiating its five-year Guaranteed Time Observations (GTO) program, which spans 725 nights. The NIRPS GTO is structured into three primary scientific subprograms (SPs), each allocated 225 nights, along with smaller "Other Sciences" (OS) programs totaling 50 nights, as described by Bouchy et al. (2025). The second major work package (SP2) focuses on the characterization of the mass and bulk density of exoplanets transiting M dwarfs. SP2 aims to constrain the internal composition of these exoplanets, includ- ing their iron, rock, and water fractions, as well as to investigate how their properties vary with stellar irradiation, stellar mass, planetary architecture, and stellar composition. The objective is to provide critical insights into the formation and evolutionary pathways of exoplanet systems around M dwarfs. One particular subprogram of the SP2 of NIRPS has been dedicated to explor- ing the composition and formation of sub-Neptune sized planets orbiting M dwarfs (the "sub-Neptunes" subprogram). As discussed in Sect. 1, this program aims to increase the sample of sub-Neptunes with precise masses. This will help elucidate whether these planets are ice-rich and, hence, whether they are likely to be objects that accreted most of their solids beyond the iceline and migrated in (e.g., Alibert & Benz 2017; Venturini et al. 2020; Burn et al. 2021) or whether they are water-poor and formed inside the water ice line (e.g., Owen & Wu 2017; Lopez & Rice 2018; Cloutier & Menou 2020). The initial target list, established in 2022, focused on TESS Objects of Interest (TOIs) with radii between 2 and 3 R⊕ orbiting M dwarfs, with all but one target receiving an insolation of < 30 S⊕. We selected targets that are observable with NIRPS, installed at La Silla Observatory, meaning they have a declina- tion of less than +20 degrees. Additionally, we prioritized tar- gets with a J-band magnitude lower than 11.5, ensuring that the small semi-amplitudes required for precise mass measurements with NIRPS could be accurately detected. Over time, these selection criteria evolved, particularly fol- lowing studies such as that of Parc et al. (2024), which statis- tically confirmed the initially observed low-density trend and highlighted the need to explore the sparsely populated 3–4 R⊕ range. This same study suggests that the transition between super-Earths and sub-Neptunes is dependent on stellar type and appears significantly less pronounced around M dwarfs com- pared to FGK dwarfs, an interesting demographic feature we de- cided to explore also with NIRPS and this subprogram. Initially, 12 targets were included in this program, but some were removed due to the challenging semi-amplitudes expected around very faint stars. A few others were added with the extension of the science case and the new TESS candidates. We currently have 22 targets in our GTO protected target list for this subprogram in Period 116 (1 October 2025-30 April 2026), some of which are shared with other SP2 subprograms. This list is updated each semester based on observational results and information from other facilities involved in the mass characterization of these ob- jects. All protected targets are published on the ESO website and all initiated targets are recorded in the SG2/SG4 TESS FOllow- up Program (TFOP) spreadsheet. With V-band magnitudes rang- ing from 12.2 to 15.4, NIRPS extends the traditional RV follow- up limits imposed by HARPS and 4-meter-class telescopes, en- abling the study of fainter targets that would otherwise be chal- lenging to monitor with optical spectrographs. This study presents the first results of the NIRPS-GTO SP2 program, including the confirmation and characterization of a 386 388 390 392 394 Pixel Column Number 1280 1282 1284 1286 1288 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 36 40 42 44 46 55 67 TIC 73649615 - Sector 10 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 Flux ×102 (e /s) Fig. 1. TESS TPF of TOI-756 created with tpfplotter (Aller et al. 2020). The orange pixels define the aperture mask used for extracting the photometry. Additionally, the red circles indicate neighboring ob- jects from the Gaia DR3 catalog, with the circle size corresponding to the brightness difference compared to the target (as indicated in the leg- end). Our target is marked with a white cross. Pixel scale is 21′′/pixel. The co-moving companion of TOI-756 corresponds to the star labeled "2." TESS candidate orbiting the M1V star TOI-756, as well as the discovery of TOI-756 c, the first planet detected with NIRPS. 3. Observations 3.1. TOI-756 as part of a wide binary system TOI-756 is an M1V star with an effective temperature of ∼ 3600 K and magnitudes in the V and I of 14.6 and 11.1, respec- tively. It was discovered by Wroblewski & Torres (1991) and named WT 351 as part of a binary system with a widely sep- arated co-moving stellar companion WT 352. This companion is a M3/4V main sequence star (Teff ∼3300 K) located at a sep- aration of 11.09 arcsec, corresponding to a projected separation of about ∼955 au. By comparing their parallaxes, proper mo- tions, and radial velocities from Gaia DR3, we confirmed that the two stars share common motion and distance, consistent with a physically bound binary system, which was previously reported by Mugrauer & Michel (2020) and El-Badry et al. (2021). The main identifiers, as well as the astrometric and photometric pa- rameters of TOI-756, are listed in Table 1. 3.2. TESS photometry TOI-756 (TIC 73649615) was observed in TESS Sector 10 (March 26, 2019 to April 22, 2019), Sector 11 (April 23, 2019 to May 20, 2019), Sector 37 (April 02, 2021 to April 28, 2021) and Sector 64 (April 06, 2023 to May 04, 2023) in 2-min cadence. The target was imaged on CCD 3 of camera 2 in Sectors 10, 37, and 64 and on CCD 4 of camera 2 in Sector 11. The TESS Sci- ence Processing Operations Center (SPOC; Jenkins et al. 2016) at NASA Ames processed the TESS photometric data result- ing in the Simple Aperture Photometry (SAP; Twicken et al. 2010; Morris et al. 2020) flux and the Presearch Data Condition- ing Simple Aperture Photometry (PDCSAP; Smith et al. 2012; Stumpe et al. 2012, 2014) flux. The latter flux was corrected for dilution in the TESS aperture by known contaminating sources. Article number, page 4 L. Parc, et al.: The peculiar TOI-756 system Table 1. Stellar parameters for TOI-756. TOI-756 Source Identifiers TIC ID 73649615 TICv8 2MASS ID J12482523-4528140 2MASS Gaia ID 6129327525817451648 Gaia DR3 WT 351 WT Astrometric parameters Right ascension (J2016), α 12h 48m 25.21s Gaia DR3 Declination (J2016), δ -45◦28′ 14.15′′ Gaia DR3 Parallax (mas) 11.61 ± 0.02 Gaia DR3 Distance (pc) 86.45+1.18 −0.22 Gaia DR3 µR.A. (mas yr−1) -216.502 ± 0.016 Gaia DR3 µDec (mas yr−1) 29.197 ± 0.013 Gaia DR3 Vsyst (km s−1) 15.36 ± 1.20 Gaia DR3 ULS R -57.56 ± 0.64 This work VLS R -44.82 ± 0.97 This work WLS R 22.18 ± 0.36 This work Photometric parameters TESS (mag) 12.5554 ± 0.007 TICv8 B (mag) 16.102 ± 0.057 TICv8 V (mag) 14.607 ± 0.018 TICv8 G (mag) 13.677 ± 0.003 Gaia DR3 J (mag) 11.138 ± 0.026 2MASS H (mag) 10.517 ± 0.025 2MASS Ks (mag) 10.274 ± 0.021 2MASS Bulk parameters Spectral type M1V This work Teff (K) 3657 ± 72 This work R⋆(R⊙) 0.505 ± 0.015 This work M⋆(M⊙) 0.505 ± 0.019 This work ρ⋆(g cm−3) 5.52+0.56 −0.54 This work L⋆(L⊙) 0.041 ± 0.004 This work [Fe/H] (dex) 0.196 ± 0.029† This work [M/H] (dex) 0.17 ± 0.08 This work log g∗(cm s−2) 4.735 ± 0.031 This work Age (Gyr) 3.2+5.5 −2.3 This work Sources: 1) TICv8 (Stassun et al. 2019) 2) 2MASS (Skrutskie et al. 2006) 3) Gaia DR3 (Gaia Collaboration et al. 2018) 4) WT (Wroblewski & Torres 1991) † Value derived with a fixed Teff, so the uncertainties are likely underestimated (see Sect. 4.1). Indeed, due to its large pixel size of 21" per pixel, the TESS pho- tometry can be contaminated by nearby companions. To evalu- ate the possible contamination, we plotted the target pixel file (TPF, Fig.1) of Sector 10 along with the aperture mask used for the SAP flux using tpfplotter (Aller et al. 2020). The TPFs of Sector 11, 37, and 64 are plotted in Appendix A.1. The aper- tures used for extracting the light curves in all four sectors were mostly contaminated by TIC 73649613, the co-moving compan- ion of TOI-756 (Sect. 3.1) with a TESS magnitude of 13.75 (cor- responding to a ∆m = 1.35). On 2019 June 05, the TESS data public website3 announced the detection of a 1.24-day TOI (Guerrero et al. 2021), TOI-756.01. The SPOC detected the transit signature of TOI-756.01 in Sec- tor 10 and in Sector 11 and the signature was fitted with an ini- tial limb-darkened transit model (Li et al. 2019) and passed all 3 https://tev.mit.edu/data/ the diagnostic tests presented in the Data Validations Reports (Twicken et al. 2018). In particular, the difference image cen- troiding test for the multi-sector searches strongly rejected all TIC objects other than the target star as the transit source in each case. 3.3. Ground-based photometry The TESS pixel scale is 21" pixel−1 and photometric apertures typically extend out to roughly 1’, generally causing multiple stars to blend in the TESS aperture. To definitely exclude the presence of another star causing the signal in the TESS data and improve the transit ephemerides, we conducted photomet- ric ground-based follow-up observations in different bands with ExTrA and LCO-CTIO of the field around TOI-756 as part of the TESS Follow-up Observing Program (TFOP)4 Sub Group 1 (Collins 2019). 3.3.1. LCO-CTIO We used the Las Cumbres Observatory Global Telecope (LCOGT: Brown et al. 2013) 1.0-m network to observe two full transits of TOI-756 b in Sloan-i’ and g’ filters. The telescopes are equipped with 4096 × 4096 SINISTRO Cameras, having an image scale of 0.389" per pixel and a field of view of 26′× 26′. The raw data were calibrated by the standard LCOGT BANZAI pipeline (McCully et al. 2018) and photometric measurements were extracted using AstroImageJ (Collins et al. 2017). The two transits were observed at Cerro Tololo Interamerican Ob- servatory (CTIO), the first on 2019 June 12 UT in Sloan-i’ using 4.7" target aperture and the second on July 03, 2019 UT in Sloan- g’ using 5.1" target aperture. The data are shown in Fig. 5 and Fig. C.2. 3.3.2. ExTrA ExTrA (Bonfils et al. 2015) is a near-infrared (0.85 to 1.55 µm) multi-object spectrophotometer fed by three 60-cm telescopes located at La Silla Observatory in Chile. One full transit (on 2021 February 24) and three partial transits (on March 01, 06, and 27, 2021) of TOI-756 b were observed using two ExTrA telescopes. We used 8" aperture fibers and the low-resolution mode (R ∼20) of the spectrophotometer with an exposure time of 60 seconds. Five fiber positioners are used at the focal plane of each tele- scope to select light from the target and four comparison stars chosen with 2MASS J-band magnitude (Skrutskie et al. 2006) and Gaia effective temperatures (Gaia Collaboration et al. 2018) similar to the target. The resulting ExTrA data were analyzed us- ing a custom data reduction software to produce synthetic pho- tometry in a 0.85-1.55 micron bandpass, described in more detail in Cointepas et al. (2021). The data are shown in Fig. C.3. 3.4. High-resolution imaging As part of our standard process for validating transiting exoplan- ets to exclude false positives and to assess the possible contam- ination of bound or unbound companions on the derived plane- tary radii (Ciardi et al. 2015), we observed TOI-756 with adap- tive optics and speckle imaging at VLT, SOAR, and Gemini. 4 https://tess.mit.edu/followup/ Article number, page 5 A&A proofs: manuscript no. aa55684-25corr 3.4.1. VLT TOI-756 was imaged with the NAOS/CONICA instrument on board the Very Large Telescope (NACO/VLT) on the night of July 13, 2019 UT in NGS mode with the Ks filter centered on 2.18 µm (Lenzen et al. 2003; Rousset et al. 2003). We took nine frames with an integration time of 14 s each and dithered be- tween each frame. We performed a standard reduction using a custom IDL pipeline: we subtracted flats and constructed a sky background from the dithered science frames, aligned and co- added the images, and then injected fake companions to deter- mine a 5-σ detection threshold as a function of radius. We ob- tained a contrast of 4.65 mag at 1", and no companions were detected. The contrast curve is shown in the top left panel of Fig. B.1. 3.4.2. SOAR We observed TOI-756 with speckle imaging using the High- Resolution Camera (HRCam) imager on the 4.1 m Southern As- trophysical Research (SOAR) telescope (Tokovinin 2018) on July 14, 2019 UT, observing in Cousins I-band, a similar visi- ble band-pass as TESS. This observation was sensitive to objects fainter by 5.2 at an angular distance of 1 arcsec from the target. More details of the observations within the SOAR TESS survey are available in Ziegler et al. (2020). The 5-σ detection sensitiv- ity and speckle autocorrelation functions from the observation are shown in the top right panel Fig. B.1. No nearby stars were detected within 3" of TOI-756 in the SOAR observation. 3.4.3. Gemini TOI-756 was observed on March 12, 2020 and July 05, 2023 UT using the Zorro speckle instrument on Gemini South. Zorro provides speckle imaging in two bands (562 and 832 nm) with output data products including a reconstructed image and robust contrast limits on companion detections (Howell et al. 2011). Both observations provided similar results; TOI-756 has no close companions to within the 5-σ contrast limits obtained (4.84 to 6.1 magnitudes) at 0.5 arcsec (Fig. B.1, bottom panels). 3.5. Spectroscopic follow-up with combined NIRPS and HARPS TOI-756 was observed simultaneously from April 4, 2023, to August 23, 2024, with NIRPS (Bouchy et al. 2025) and HARPS (Mayor et al. 2003) echelle spectrographs at the ESO 3.6 m tele- scope at La Silla Observatory in Chile. NIRPS is a new echelle spectrograph designed for precision radial velocities covering the YJH bands (980–1800 nm). The instrument is equipped with a high-order adaptive optics system and two observing modes: high accuracy (HA; R ∼88 000, 0.4" fiber) and high efficiency (HE; R ∼75 200, 0.9" fiber), which can be utilized simultane- ously with HARPS. TOI-756 was observed as part of the NIRPS- GTO program, under the Follow-up of Transiting Planets sub- program (PID:111.254T.001, 112.25NS.001, 112.25NS.002; PI: Bouchy & Doyon) in HE mode with NIRPS and in EGGS mode (high efficiency mode, R ∼80 000, 1.4" fiber) with HARPS. We selected these modes to minimize the modal noise of NIRPS and to maximize the flux by taking the large fibers, especially for HARPS, since the target is relatively faint in the visible (V = 14.6). We also chose to target the sky with fiber B instead of the Fabry-Perot, due to the target’s faintness, to facilitate back- ground light correction. Over 64 individual nights, we collected three spectra of TOI-756 per night with NIRPS (3 exposures of 800 s), which we combined to obtain a median signal-to-noise ratio (S/N) of 28.7 per pixel in the middle of H band. As NIRPS operated alone for seven nights, the HARPS dataset comprise 57 spectra (a single 2400-s exposure per night) with a median S/N of 6.5 per pixel near 550 nm. We choose to take time series of three exposures on NIRPS and then combined them because the maximum recommended exposure time is 900 s on NIRPS due to detector readout noise limitations (Bouchy et al. 2025). We removed the three last HARPS spectra since they were affected by a HARPS shutter problem (24-07-24 ; 26-07-24 ; 22-08-24). For HARPS, we used the extracted spectra from the HARPS- DRS (Lovis & Pepe 2007). For NIRPS, the observations were reduced with both the NIRPS-DRS and APERO. The NIRPS- DRS is based on and adapted from the publicly available ESPRESSO pipeline (Pepe et al. 2021). Several updates have been implemented in the ESPRESSO pipeline to enable the re- duction of infrared observations, including a telluric correction following the method of Allart et al. (2022) (see Bouchy et al. 2025). The NIRPS-DRS is the nominal pipeline for NIRPS data reduction for the ESO science archive through the VLT Data Flow System (DFS). APERO (Cook et al. 2022) is the stan- dard data reduction software for the SPIRou near-infrared spec- trograph (Donati et al. 2020), and was adapted and made fully compatible with NIRPS. The RV extraction from the reduced data of HARPS and NIRPS was performed with both the cross- correlation method (CCF) and the LBL method of Artigau et al. (2022), available as an open-source package (v0.65.003; LBL5). For the CCF method, we used the CCFs from the HARPS and NIRPS DRS using an M2V and M1V mask respectively. The LBL package is compatible with both NIRPS and HARPS. The method is conceptually similar to template matching (e.g., Anglada-Escudé & Butler 2012; Astudillo-Defru et al. 2017), while being more resilient to outlying spectral features (e.g., tel- luric residuals, cosmic rays, detector defects) as the template fit- ting is performed line by line, which facilitates the identification and removal of outliers. For NIRPS, we used the template of a brighter star with a similar spectral type, GL 514 (M1V), from NIRPS-GTO observations. For HARPS, we instead employed the template of GL 699 (M4V) from public data obtained via the ESO archive (Delmotte et al. 2006). An additional telluric correction was performed for HARPS inside the LBL code by fitting a TAPAS atmospheric model (Bertaux et al. 2014). Finally, for the analysis presented in Sect. 5.2, we used the HARPS data processed using the DRS pipeline in combination with LBL, and the NIRPS data reduced with APERO, which provides a slighly better telluric absorption correction, also in conjunction with LBL. We employed nightly binned data and applied a preprocessing step to exclude points with higher un- certainties than the majority, using a 95% percentile error-based filtering on RVs and the second-order derivative D2V indicator, defined in Artigau et al. (2022). Variations in the second-order derivative can be associated with changes in the FWHM from the CCF method. All the data will be publicly available through the DACE platform6 after publication. 4. Stellar characterization 4.1. Spectroscopic parameters The derivation of spectroscopic stellar parameters was done by applying different techniques to the HARPS and NIRPS spec- 5 https://github.com/njcuk9999/lbl 6 https://dace.unige.ch/ Article number, page 6 L. Parc, et al.: The peculiar TOI-756 system Table 2. TOI-756 stellar abundances measured with NIRPS. Element [X/H]∗ # of lines Fe I 0.20 ± 0.03 15 Mg I 0.22 ± 0.03 5 Si I 0.39 ± 0.12 6 Ca I 0.12 ± 0.23 4 Ti I 0.30 ± 0.10 14 Al I 0.10 ± 0.18 1 Na I 0.11 ± 0.18 2 C I 0.29 ± 0.18 2 K I 0.38 ± 0.18 4 OH -0.43 ± 0.03 37 ∗Relative-to-solar abundances tra. For the first technique, we combined all the individual HARPS spectra with the task scombine within IRAF7 to ob- tain a high S/N spectrum. We used the machine learning tool ODUSSEAS8 (Antoniadis-Karnavas et al. 2020, 2024) to derive the effective temperature (Teff) and metallicity ([Fe/H]). This tool measures the pseudo equivalent widths (EWs) of a set of ∼4000 lines in the optical spectra. Then, it applies a machine learning model trained with the same lines measured and cal- ibrated in a reference sample of 65 M dwarfs observed with HARPS for which their [Fe/H] were obtained from photometric calibrations (Neves et al. 2012) and their Teff from interferomet- ric calibrations (Khata et al. 2021). With this method, we derived a Teff = 3620±94K and [Fe/H] = 0.14±0.11 dex. For the second technique, the combined telluric-corrected NIRPS spectrum obtained with APERO is used to determine the stellar parameters and abundances. Following the methodology of Jahandar et al. (2024, 2025), initially developed for SPIRou spectra (Donati et al. 2020), we retrieve the effective tempera- ture Teff, overall metallicity [M/H] and chemical abundances of TOI-756. We first determine Teff and [M/H] by fitting individ- ual spectral lines to a grid of PHOENIX ACES stellar mod- els (Husser et al. 2013) convolved to the resolution of NIRPS. The models are interpolated to fixed log g = 4.75 based on the value obtained in Sect. 4.2. We find Teff = 3710 ± 33 K and [M/H] = 0.17 ± 0.08 dex. While Teff is in agreement with the value derived from HARPS, we observe a significant discrep- ancy between the measurements of the different bands, obtaining 3575 ± 23 K for Y and J, and 3803 ± 17 K for H. This discrep- ancy could be due to the lack of K-band coverage in NIRPS, as this spectral range was found to be crucial in the determina- tion of Teff with SPIRou (Jahandar et al. 2024). To better reflect the bimodality of the distribution, we inflated the uncertainty on the temperature to the half-distance of the two chromatic mea- surements (Teff = 3710 ± 113 K). The temperatures obtained for NIRPS and HARPS were then combined with a weighted aver- age to give the adopted effective temperature, Teff = 3657±72 K. However, for the abundance analysis, we used the effective tem- perature obtained from HARPS to be more conservative. The abundances of chemical species are determined by fitting the PHOENIX grid to individual spectral lines (Jahandar et al. 2024, 2025) with fixed Teff of 3620 K. The stel- lar abundances measured from the NIRPS spectrum are recorded in Table 2, although it should be noted that the assumption of a 7 IRAF is distributed by National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation, USA. 8 https://github.com/AlexandrosAntoniadis/ODUSSEAS Fig. 2. SED of TOI-756, constructed using broadband photometric data from APASS (magenta), Gaia (green), 2MASS (black), and WISE (blue). The upper limit for the WISE W4 band is indicated by a red dot. The horizontal error bars represent the passband widths of the re- spective filters. Below the SED, the residuals are shown, normalized to the photometric errors. The SED was modeled using the BT Settl at- mospheric model (Allard et al. 2012) with Teff = 3600 K, [M/H] = 0 dex, and log g⋆= 4.5 cm s−2. fixed Teff likely results in an underestimation of the uncertain- ties. 4.2. Mass, radius, and age To derive the mass and radius of the star, we constructed the spectral energy distribution (SED) using the flux densities from the photometric bands GBP, G, and GRP from the GAIA mis- sion (Gaia Collaboration et al. 2018), B and V from APASS (Henden et al. 2015), J, H, and Ks from the 2MASS project (Skrutskie et al. 2006), and W1, W2, W3, and W4 from the WISE mission (Wright et al. 2010). For the SED modeling pro- cess, we employed the Virtual Observatory Spectral Analyzer (VOSA) tool (Bayo et al. 2008). Theoretical models such as BT- Settl (Allard et al. 2012), Kurucz (Kurucz 1993), and Castelli & Kurucz (Castelli & Kurucz 2003) are used to construct the syn- thetic SEDs, where the most suited model was BT Settl with Teff = 3600 K, [M/H] = 0 dex, and log(g) = 4.5 cm s−2. VOSA uses a χ2 minimization technique to achieve the best fit between the theoretical curve and the observational data, taking into account the observed flux, the theoretical flux predicted by the model, the observational error related to the flux, the number of photomet- ric points, input parameters, the object’s radius, and the distance between the observer and the object. The resulting analysis is shown in Fig.2. We then integrated the observed SED to obtain the bolo- metric luminosity: L⋆= 0.03805 ± 0.00011 L⊙. We obtained the stellar radius R⋆= 0.501 ± 0.014 R⊙using the Stefan- Boltzmann law, L⋆= 4πRσT 4 eff. Finally, the stellar mass (M⋆= 0.505 ± 0.019 M⊙) was estimated using Equation 6 from Schweitzer et al. (2019). We independently derived the mass and radius of TOI- 756 using the empirically established M-dwarf mass-luminosity and radius-luminosity relations from Mann et al. (2019) and Mann et al. (2015), respectively. To do this, we utilized the Gaia stellar parallax and the Ks magnitude from 2MASS to calculate the absolute Ks magnitude (MK). We employed a Monte Carlo method to propagate the uncertainties and incorporated the in- Article number, page 7 A&A proofs: manuscript no. aa55684-25corr Table 3. TOI-756 stellar parameters derived by different methods. SED Mann’s relations R∗(R⊙) 0.501 ± 0.014 0.508 ± 0.015 M∗(M⊙) 0.505 ± 0.019 0.505 ± 0.012 L∗(L⊙) 0.0381 ± 0.0001 0.042+0.0042 −0.0039 ρ∗(g cm−3) 5.66+0.54 −0.49 5.45+0.53 −0.47 log g∗(cm s−2) 4.742 ± 0.029 4.731+0.028 −0.027 trinsic errors of the relations, which are 2.89% for the radius and 2.2% for the mass, into our results. The results of these two independent methods are compiled in Table 3. The two methods give very consistent radius and mass values. We combined the resulting stellar masses and radii, tak- ing the larger uncertainties to be conservative, as final stellar pa- rameters. Together with the results of Sect. 4.1, we derived the associated stellar luminosity (L∗), gravity (log g∗), and density (ρ∗). These final parameters are listed in Table 1. Additionally, to estimate the age of the star, we used the code isoAR9 from Brahm et al. (2018) using the Teff and [Fe/H] de- rived in Sect. 4.1, plus Gaia photometry and parsec isochrones. The resulting age of 3.2+5.5 −2.3 Gyr is poorly constrained, which is expected for an M dwarf. Finally, space velocities ULS R, VLS R, and WLS R10, were cal- culated using positions, parallaxes, proper motions, and RVs from Gaia DR3. To relate the space velocities to the local stan- dard of rest (LSR), the Sun’s velocity components relative to the LSR (U⊙, V⊙, W⊙) = (11.10, 12.24, 7.25) km s−1 from Schönrich et al. (2010) were added. We obtained : ULS R = −57.56 ± 0.64 km s−1, VLS R = −44.82 ± 0.97 km s−1 and WLS R = 22.18 ± 0.36 km s−1. According to the probabilistic approach of Bensby et al. (2014), the galactic kinematic indicates that TOI- 756 is a thin disc population star. All the adopted astrometric and photometric stellar properties of TOI-756 are listed in Ta- ble 1. According to all these parameters and the Table 5 from Pecaut & Mamajek (2013), TOI-756 corresponds well to a M1V star. In terms of stellar activity, TOI-756 appears to be rather quiet, with no identifiable rotation period. Inspecting the TESS SAP light curves and ASAS (Pojmanski 1997) data with Lomb- Scargle periodograms reveal no significant peaks. Similarly, no notable signals are observed in the RV indicators from HARPS and NIRPS. Additionally, we used the HARPS DRS data to com- pute the logR′hk from the S-index measuring Ca H and K emis- sion. We used the relations from Suárez Mascareño et al. (2015, 2016), and found a median value of -5.16, in line with the ab- sence of activity of the star. 5. Photometric and radial velocity analysis We utilized the software package juliet (Espinoza et al. 2019) to model both the photometric and RV data. This algorithm in- tegrates several publicly available tools for modeling transits (batman; Kreidberg 2015), RVs (radvel; Fulton et al. 2018), and Gaussian processes (GPs; george; Ambikasaran et al. 2015; celerite, Foreman-Mackey et al. 2017). To compare different models, juliet efficiently calculates the Bayesian evidence (ln 9 https://github.com/rabrahm/isoAR 10 ULS R is directed radially inwards towards the Galactic center, VLS R along the direction of Galactic rotation, and WLS R vertically upwards towards the Galactic North pole. Z) using dynesty (Speagle 2020), a Python package that es- timates Bayesian posteriors and evidence through nested sam- pling methods. Unlike traditional approaches that begin with an initial parameter vector centered around a likelihood maximum found via optimization, nested sampling algorithms draw sam- ples directly from the priors. Throughout our analyses, we en- sured that we had a sufficient number of live points Nlive relative to the number of free parameters d (Nlive ≥25×d), preventing us from missing peaks in the parameter space. We conducted sev- eral analyses: starting with only the photometry, and using the resulting planet parameters as priors for a subsequent RV analy- sis and then a joint fit of the two. 5.1. Photometry analysis First, we used juliet to model the photometry. We used the TESS PDCSAP fluxes of the four sectors where our planet candidate was initially detected, the two transits from LCO- CTIO telescope in i’ and g’ bands and the four transits from ExTrA telescopes. The transit model fits the stellar density ρ⋆ along with the planetary and jitter parameters. We adopted a few parametrization modifications when dealing with the tran- sit photometry. Rather than fitting directly for the planet-to-star radius ratio (p = Rp/R⋆) and the impact parameter of the orbit (b = a/R⋆cosi), juliet uses the parametrization introduced in Espinoza (2018) and fits for the parameters r1 and r2 to guarantee full exploration of physically plausible values in the (p,b) plane. Additionally, we implemented a "power-2" limb-darkening law in juliet, as shown to be the best for fitting cold star intensity profiles (Morello et al. 2017). We derived the "power-2" stellar limb-darkening coefficients and their uncertainties for each pho- tometric filter used using the LDCU11 code (Deline et al. 2022). The LDCU code is a modified version of the Python routine im- plemented by Espinoza & Jordán (2015) that computes the limb- darkening coefficients and their corresponding uncertainties us- ing a set of stellar intensity profiles accounting for the uncer- tainties on the stellar parameters. The stellar intensity profiles are generated based on two libraries of synthetic stellar spectra: ATLAS (Kurucz 1979) and PHOENIX (Husser et al. 2013). We utilized the limb-darkening coefficients determined from LDCU as Gaussian priors for the fit. Since the TESS PDCSAP light curves are already corrected for contamination, we fixed the TESS dilution factor to one. We applied the same assumption to the ground-based photometry, as the apertures are free from contaminating sources. We added in quadrature jitter terms σi to all the photometric uncertainties, which may be underestimated due to additional systematics. To account for the remaining pho- tometric variability, we included a GP for Sectors 10 and 11 of TESS PDCSAP fluxes using a Matérn 3/2 kernel with hyper- parameters amplitude (σGP) and timescale (ρGP). Including GPs for Sectors 37 and 64 led to negligible amplitudes, so we did not apply the GP correction to these sectors. We detrended the LCO- CTIO transit in the g’ band with airmass and the ExTrA data was detrended with a GP using a Matérn 3/2 kernel as suggested by Cointepas et al. (2021). The resulting detrending can be seen in Appendix C. We first assumed a circular orbit so we fixed the eccentricity to zero and used normal priors around the ExoFOP values for the period and transit epoch. We used a normal prior for stellar density using the value derived in Sect. 4. In the first instance, we fit the TESS photometry alone to constrain these parame- ters and then used the resulting posteriors to jointly fit all the 11 https://github.com/delinea/LDCU Article number, page 8 L. Parc, et al.: The peculiar TOI-756 system Fig. 3. RV data from HARPS (blue dots) and NIRPS measurements. The NIRPS data is separated in unaffected datapoints (orange dots) and affected datapoints (red triangles) by the crossing of the Vsyst and BERV velocities during the observations. Yellow dots are NIRPS data with the correction explained in Sect. 5.2.2. The dotted red line together with the right y axis represent the total velocity of TOI-756, showing the crossing of the BERV with the Vsyst. The complete inferred model in Sect. 5.2.3, which comprises signals from the two planets along with the linear model for the acceleration (in gray), is represented by a black solid line. photometry (see Appendix C). We first used the classical (p,b) parametrization to let the planet-to-star ratio p to be different among photometric filters to check for possible false positives. We found consistent transit depths among the different photo- metric bands: pTES S = 0.050 ± 0.001, pLCO−i′ = 0.052 ± 0.003, pLCO−g′ = 0.048 ± 0.004 and pExTrA = 0.051 ± 0.005. We then fitted only one set of r1 and r2 parameters for the planet. For the joint photometry fit, we only took TESS data around the transits (± 3 hours around the transit times calculated with the resulting period and transit epoch of the TESS-only fit) in order to reduce the fit time of the joint analysis with juliet. 5.2. Radial velocity analysis The reduction and preprocessing of the RVs are explained in Sect. 3.5 and the resulting datasets are shown in Fig. 3. A sig- nificant RV variation was detected for TOI-756, suggesting the presence of an additional companion to the TESS sub-Neptune. Several possible orbital periods were initially explored, motivat- ing further observations to constrain this signal alongside that of the 1.24-day sub-Neptune. These efforts led to the confirma- tion of an eccentric companion on a ∼150-day orbit. In addition, the RVs reveal an acceleration, hinting at a third, more distant object. 5.2.1. Telluric contamination of the NIR data during the Vsyst-BERV crossing. One of the main challenges of NIRPS and near-infrared spec- troscopy is the contamination of the stellar spectrum by molecu- lar species in Earth’s atmosphere. This issue is particularly pro- nounced when observing M dwarfs, whose spectra also contain absorption features from species such as H2O and CH4. The NIRPS-DRS includes a telluric absorption correction based on Allart et al. (2022), but the observed spectra of faint M dwarfs are further dominated by strong emission lines from Earth’s at- mosphere, notably from OH. These emission lines are typically corrected during the reduction process (Srivastava et al. in prep), but their non-LTE nature makes them more challenging to cor- rect, as they cannot be modeled using standard telluric line ap- proaches. Fig. 4. NIRPS data residuals (upper panel) and LBL indicators D2V (middle panel), along with dTemp (∆T) calculated at 3500 K close to the effective temperature of TOI-756 (bottom panel). Orange dots are the indicators from unaffected NIRPS data, red triangles are the affected spectra by Vsyst-BERV crossing and yellow dots are the indicators with the correction explained in Sect. 5.2.2. However, this contamination becomes especially problem- atic when aiming to measure precise stellar radial velocities, par- ticularly when telluric lines coincide with the absorption lines of the star. This situation arises when the barycentric Earth ra- dial velocity (BERV) crosses the systemic velocity (Vsyst) of the observed star. In such cases, the correction is challenging, as Article number, page 9 A&A proofs: manuscript no. aa55684-25corr blending between the stellar and telluric lines distorts the stel- lar line profiles, leading to erroneous RV estimates. Since the telluric emission features are more numerous and stronger in H compared to the J, there will be a chromatic offset induced in the calculated RVs between the "red" wavelengths, and "blue" wavelengths. We observed this phenomenon between RJD = 60365 and 60416, when the Keplerian fit of the outer companion of TOI- 756 showed clear outliers of ∼100-200 m s−1 in the NIRPS RVs (see top panel of Fig. 4). This effect is illustrated in Fig. 3, the NIRPS data is represented as red triangles when the absolute to- tal velocity |Vtot| = |Vsyst −BERV| ≤10 km s−1, corresponding to approximately twice the typical spectra line FWHM for slow rotating M dwarfs (5 km s−1). The HARPS data are shown using blue dots and the non-affected NIRPS data (Vtot > 10 km s−1) using orange dots. In this Figure 3, we plotted the relative ve- locity to the systemic velocity of TOI-756, which was found to be Vsyst ∼15.2 km s−1. This phenomenon also induces distor- tions in the stellar line profiles, which are evident in the sys- temic variations of different LBL indicators, such as D2V and ∆T (Artigau et al. 2024), as shown in Fig. 4. 5.2.2. Correction with removing affected lines in the LBL The advantage of using the line-by-line (LBL) technique to de- rive radial velocities and other spectral indicators lies in its abil- ity to provide individual measurements for each spectral line across the entire spectrum. The observed discrepancy, where NIRPS RVs appear underestimated and then overestimated rela- tive to the RV fit, is caused by the crossing of stellar absorption lines with atmospheric OH emission lines, primarily arising from excitation of rotational-vibrational modes of the OH molecule. To mitigate this effect, we utilized the HITRAN12 database (Gordon et al. 2022) to identify OH lines present within the NIRPS spectral range. We selected the 25% most intense OH lines and removed the LBL-derived measurements in a ±20 km s−1 window around these lines, accounting for the ap- proximate width of both OH and stellar lines (∼10 km s−1 each). This process reduced the number of spectral lines used in the LBL analysis from 26,301 to 17,253, inevitably increasing the RV uncertainties. We then recomputed the final radial veloc- ity and indicator values for each epoch using the same LBL method, which robustly averages the per-line values while down- weighting outliers (Appendix B of Artigau et al. 2022). We ap- plied this correction for all the NIRPS data for consistency. The corrected data are displayed as yellow dots in Figures 3 and 4. This correction successfully brought NIRPS RVs into agree- ment with HARPS and effectively removed the systematic dis- tortions previously visible in the residuals of the keplerian fit and in the spectral indicators (Fig. 4). The corrected indicators now show consistent values across epochs, with no residual systemat- ics during the Vsys–BERV crossing. Additionally, the root mean square (RMS) of the residuals and the indicators, displayed in the same figure, demonstrates a significant decrease after correction. We applied the same technique to the mask used to derived the NIRPS DRS cross-correlation function (CCF) data, using the OH line list from the DRS telluric correction module. While this also mitigated the effect, the CCF method relies on significantly fewer spectral lines than the LBL approach, leading to much larger error bars after correction. Consequently, we opted to re- tain the LBL-derived values for our analysis. 12 https://hitran.org/ Fig. 5. Top panel: Phase-folded TESS, ExTrA, and LCO-CTIO light curves of TOI-756 b (gray points). Dark red circles are data binned to 10 min. The black lines represent the median model of each instrument from the joint fit. Bottom panel: Residuals of the data compared to the model. An arbitrary offset has been added to the ground-based photom- etry for clarity. 5.2.3. Radial velocity analysis and joint modeling with juliet juliet was also used to model these RV datasets. We used a two-planet plus a linear trend model. At first, we fixed the eccentricity of the TESS planet TOI-756 b to zero. We ac- counted for the evident eccentricity of the outer companion by fitting for the parameters √ecos(ω), √esin(ω), as implemented in juliet. This parametrization has been shown to improve the exploration of the eccentricity–argument of periastron parameter Article number, page 10 L. Parc, et al.: The peculiar TOI-756 system Table 4. Posterior planetary parameters of the different RV fits. NIRPS corrected + HARPS NIRPS unaffected + HARPS HARPS-only NIRPS corrected-only Kb m s−1 9.4+2.3 −2.5 8.4 ± 2.2 10.0+3.1 −3.2 9.2+3.9 −3.8 Pc (days) 149.66+0.28 −0.26 149.61 ± 0.20 149.36+0.38 −0.37 149.92+0.36 −0.39 T0,c (RJD) 60350.27+0.69 −0.68 60350.29+0.52 −0.53 60199.80+0.70 −0.68 60201.59+0.78 −0.81 Kc (m/s) 272.6+3.6 −3.4 273.3+2.5 −2.6 275.7+5.0 −4.9 268.9 ± 5.1 ec 0.46 ± 0.01 0.46 ± 0.01 0.44 ± 0.01 0.48+0.02 −0.01 ωc (◦) −167.9 ± 1.3 −166.6+1.2 −1.3 −166.7 ± 1.7 −169.24 ± 1.9 Acceleration m s−1 yr−1 144.6 ± 5.8 148.3+3.7 −4.0 145.7+8.0 −8.4 145.4+8.0 −8.8 space (Lucy & Sweeney 1971). We used the period and transit epoch results of TOI-756 b of the photometry fit (Sect. 5.1) as priors for the RV fits. We compared several analyses: (1) NIRPS data corrected us- ing the method described in Sect. 5.2.2 combined with HARPS data; (2) uncorrected NIRPS data with the affected points re- moved, also combined with HARPS data; (3) HARPS data only; and (4) corrected NIRPS data only. We present the posteriors of the main changing planetary parameters of the different fits in Table 4. We did not put the period and transit epoch of planet b in this table because they are very similar for these analyses since they are highly constrained by the photometry fit priors. The resulting parameters exhibit good consistency within 1σ for all our different analyses. Since the NIRPS corrected data combined with the HARPS data are fully consistent with the other fits and included all RV points, we choose this dataset as the final one. We do a joint fit of the RVs and photometry with juliet: NIRPS corrected RVs, HARPS RVs, TESS, LCO-CTIO and ExTrA. In order to prevent any potential Lucy-Sweeney bias in the eccentricity measure- ment (Lucy & Sweeney 1971; Hara et al. 2019), we fixed the or- bital eccentricity of the planet b to zero. However, to explore the possibility of non-circular orbit, we ran a separate analysis without any constraints on the eccentricity. The logarithmic ev- idence for the eccentric model is lower than for the circular one (55,557 vs. 55,580), and the fitted jitter values for both HARPS and NIRPS RVs are higher in the eccentric case, further support- ing the preference for the circular model. The fitted eccentric- ity for TOI-756 b is eb = 0.096+0.092 −0.067. Therefore, the condition e > 2.45 σe (Lucy & Sweeney 1971) is not satisfied, which sug- gests that the RV data are compatible with a circular orbit. For now, the current data do not provide sufficient precision to draw a firm conclusion regarding the orbital eccentricity. Additionally, in Table 5, we show that the 3-σ upper limit on the eccentric- ity for TOI-756 b is 0.51. The fitted and derived parameters for TOI-756 b and TOI-756 c are presented in Table 5. The priors and posteriors of the joint fit can be found in Table C.2. Fig.5 shows the phase-folded light-curves of the photometry fit. Fig.3 shows RV data together with the resulting model from the joint fit and Fig. 6 the phase-folded RV curves for the two planets. We searched for a possible transit of planet c in the TESS data by phase-folding the light curve using its orbital period and time of conjunction. Although TESS observations cover the ex- pected transit window, no transit signal is visible in the data, al- lowing us to exclude a transiting configuration. Furthermore, as- suming coplanar orbits aligned with the inclination of TOI 756 b (85.5◦), we estimated the expected impact parameter of the outer planet based on its semi-major axis and the stellar radius. We ob- tained bc = 14.3 ± 0.7, a value well above 1. 6. Discussion We present the discovery and characterization of the transiting sub-Neptune TOI-756 b and the non-transiting eccentric cold gi- ant TOI-756 c, both orbiting the M1V star TOI-756. TOI-756 b has an orbital period of 1.24 days, a radius of 2.81 ± 0.10 R⊕and a mass of 9.8 ± 1.7 M⊕. The outer companion, TOI-756 c, fol- lows an eccentric (0.45) 149-day orbit and has a minimum mass of 4.05 ± 0.11 MJup. Using the stellar parameters (Table 1), we determine the semi-major axes of TOI-756 b and TOI-756 c to be 0.0180 ± 0.0002 au and 0.439 ± 0.005 au, respectively. As- suming zero albedo and full heat redistribution, the equilibrium temperature of TOI-756 b is 934 ± 24 K, with a stellar insolation of 127 ± 13 S⊕. For TOI-756 c, we estimate an equilibrium tem- perature of 194 ± 5 K and a stellar insolation of 0.24 ± 0.02 S⊕ averaged along the eccentric orbit. In addition, the RVs present an acceleration of 145.6 ± 5.2 m s−1 yr−1 hinting at an additional component in the system. 6.1. NIRPS + HARPS performances The characterization of this system was made possible by TESS and ground-based facilities for the photometric analysis of the inner transiting planet, as well as by the combination of HARPS and NIRPS for RV follow-up. The synergy between these two spectrographs enabled us to precisely characterize an early-M with a peculiar planetary-system configuration. The benefit of this combination is evident in Table 4: using HARPS (NIRPS) alone, the semi-amplitude of TOI-756 b is determined with a pre- cision of 31% (42%), whereas combining HARPS and NIRPS improves this to 17% in the joint RV and photometry fit. All other fitted parameters also benefit from improved precision thanks to this joint analysis. This study highlights the added value of NIRPS (to HARPS) in characterizing a low-mass planet around a faint M dwarf (V = 14.6, J = 11.1), compared to typ- ical radial velocity targets. Independently, the two instruments have similar median photon noises. The median photon noise is 5.5 m s−1 for HARPS and 15.4 m s−1 for NIRPS, but the latter having increased from 8.4 m s−1 due to the removal of affected lines during the LBL computation (see Sect. 5.2.2). The fitted jitter values for both instruments are similar, around 15 m s−1, which matches the photon noise for NIRPS but is elevated com- pared to HARPS photon noise. This suggests the presence of atmospheric residuals or enhanced stellar activity in the optical range. Given the low S/N regime in which HARPS is operating, increased background sky contamination and possible interfer- ence from the Moon are to be expected. At such low S/N, the LBL method is also likely to underestimate the uncertainties as- sociated with the derived RVs. Article number, page 11 A&A proofs: manuscript no. aa55684-25corr Fig. 6. Phase-folded RVs with the resulting model and its residuals for TOI-756 b (left) and TOI-756 c (right). In red dots, binned data combining HARPS (blue dots) and NIRPS (yellow dots). The error-bars in light gray account for the fitted jitters. Table 5. Fitted and derived parameters for TOI-756 b and TOI-756 c. Parameter TOI-756 b TOI-756 c Orbital period, Porb (days) . . . . . . . . 1.2392495 ± 0.0000007 149.40 ± 0.16 Time of conjunction, T0 (RJD) . . . . 58570.65234+0.00035 −0.00037 60498.882+0.57 −0.52 Planet radius, Rp (R⊕) . . . . . . . . . . . . 2.81 ± 0.10 - Planet mass, Mp (M⊕) . . . . . . . . . . . . 9.83+1.8 −1.6 - Planet min. mass, Mp sin(i) (MJup) - 4.05 ± 0.11 Planet density, ρp (g cm−3) . . . . . . . 2.42+0.53 −0.45 - RV semi-amplitude, (m s−1) . . . . . . 9.2+1.7 −1.5 273.3 ± 2.6 Orbital inclination, i (◦) . . . . . . . . . . 85.53+0.19 −0.18 - Scaled planetary radius, RP/R∗. . . . 0.05113+0.0008 −0.0009 - Impact parameter, b . . . . . . . . . . . . . . 0.589+0.018 −0.021 - Semi-major axis, a (au). . . . . . . . . . . 0.0180 ± 0.0002 0.439 ± 0.005 Eccentricity . . . . . . . . . . . . . . . . . . . . 0 (< 0.51, 3σ) 0.445 ± 0.008 Argument of periastron, ω (deg) . . 90 (fixed) −167.77+0.99 −0.93 Insolation(a), Sp (S⊕) . . . . . . . . . . . . . 127 ± 13 0.24 ± 0.02 Equilibrium temperature(a), Teq (K) 934+23 −24 194 ± 5 TSM (b) . . . . . . . . . . . . . . . . . . . . . . . . . 63+13 −10 - Transit duration, T14 (h) . . . . . . . . . . 1.10 ± 0.01 - Notes: (a) Insolation and equilibrium temperature are calculated as in Parc et al. (2024), assuming global circulation and a Bond albedo of AB = 0. (b) Transmis- sion spectroscopy metric (TSM) calculated following Kempton et al. (2018). 6.2. TOI-756 b: Internal structure, composition, and population context 6.2.1. A volatile-rich sub-Neptune around an M dwarf The characterization of TOI-756 b adds to the currently small population of known transiting sub-Neptunes (2 R⊕< Rp < 4 R⊕) around M dwarfs, as shown in the Mass-Radius (M-R) di- agram (Fig. 7) where the red (gray) dots represent planets from the PlanetS catalog (Parc et al. 2024; Otegi et al. 2020) orbit- ing M dwarfs (FGK dwarfs). Inside this population, Parc et al. (2024) identified statistical evidence for small sub-Neptunes (1.8 R⊕< Rp < 2.8 R⊕) being less dense around M dwarfs than around FGK dwarfs with a p-value of 0.013, rejecting the null hypothesis. This means that the densities of small sub- Neptunes orbiting M and FGK dwarfs belong to different dis- tributions. We updated this analysis with the up-to-date PlanetS catalog and by including TOI-756 b, which had a density of 2.42 g cm−3. We choose to increase the upper radius limit of this sam- ple to 2.9 R⊕(2.8 R⊕having been chosen to capture all small sub-Neptunes around M dwarfs at that time). We find, with the same Mann–Whitney U test (Wilcoxon 1945; Mann & Whitney 1947), an improved p-value of 0.006 for this trend. However, these two analysis are not taking into account the uncertainties on the density measurements. We did a Mann–Whitney U test on 10000 samples using a bootstrapping method to draw den- sity values in the density distributions of each planet and ob- tained a median p-value of 0.015, a still significant value. How- ever, the sample remains small and the increase of the well- characterized planets in this parameter space is one of the objec- tives of the NIRPS GTO SP2 subprogram "sub-Neptunes" de- scribed in Sect. 2. We plot the mass and radius of TOI-756 b, alongside with the small planets of the PlanetS catalog in Fig. 7. With its density of 2.42 g cm−3 (and looking at the composition lines shown in the same figure), TOI-756 b lies above the 50% water plus Earth- composition line at 1000 K (at 2σ) from Aguichine et al. (2021) (dark blue dotted line). A pure silicate interior with a 50% steam atmosphere can explain within 1σ the radius and mass of TOI- 756 b for this model (light blue dotted line). Furthermore, it cor- responds well within 1σ to the Earth-composition with a H2/He dominated-atmosphere of 1% the mass from Zeng et al. (2019) (pink dotted line). As models from Aguichine et al. (2021) in- clude pure steam atmosphere with no solubility between the at- mosphere and the mantle+core compared to Luo et al. (2024) models (e.g., green dotted line), and are static in time (com- pared to Aguichine et al. 2025), they can be considered to over- estimate the radii of the planets. They can thus be interpreted as an upper limit of M-R composition lines for water-rich mod- els. In addition, due to its high equilibrium temperature of ap- proximately 934 K, any water present in the atmosphere of TOI- Article number, page 12 L. Parc, et al.: The peculiar TOI-756 system Fig. 7. Mass-radius diagram of small exoplanets (with radii ranging from 1–4 R⊕) with precise densities from the PlanetS catalog. The red (gray) dots correspond to exoplanets orbiting M dwarfs (FGK dwarfs). The composition lines of pure silicates (yellow dashed), Earth- like planets (red dashed), and pure iron (solid black) from Zeng et al. (2016, 2019) are displayed. The red hexagon represent TOI-756 b. Two compositions line that incorporate both water and terrestrial elements from Aguichine et al. (2021) models, matching the equilibrium tem- perature of the planet, are plotted as light and dark blue dotted lines. Two compositions of Earth with an hydrogen-rich atmospheres from Zeng et al. (2019) are represented in pink and purple dotted lines. This plot has been generated with mr-plotter (https://github.com/ castro-gzlz/mr-plotter/). 756 b is expected to be in a supercritical state. In conclusion, it is more likely that TOI-756 b needs an amount of hydro- gen/helium in its atmosphere to explain its density, in the form of pure H/He envelope or mixed supercritical H2O and H/He. This places the planet within the "miscible-envelope sub-Neptunes" category defined by Benneke et al. (2024). Atmospheric charac- terization could confirm this classification by revealing a mean molecular weight significantly higher than that of Jupiter (2.2) or Neptune (2.53–2.69). We investigate this in greater detail in the following section. Interestingly, Schlecker et al. (2021) found a difference in the bulk composition of inner small planets with and without cold Jupiters. High-density small planets point to the existence of outer giant planets in the same system. Conversely, a present cold Jupiter gives rise to rocky, volatile-depleted inner super- Earths, by obstructing inward migration of icy planets that form on distant orbits. However, TOI-756 c lies beyond the system’s ice line, and its formation may have contributed to the inward delivery of water-rich material, as proposed for the Solar System by Raymond & Izidoro (2017). This process could account for the potentially ice-rich composition of TOI-756 b. As shown by Bitsch et al. (2021), the water content of an inner sub-Neptune can provide valuable insights into the formation location and timescale of an outer giant planet relative to the water ice line, offering constraints on planet formation theories. 6.2.2. Detailed interior modeling We perform a detailed interior characterization of TOI-756 b us- ing a Bayesian inference approach, adopting the emcee affine- invariant ensemble sampler (Foreman-Mackey et al. 2013) cou- pled to a three-layer interior structure model. The planetary in- terior is assumed to be composed of an Fe-Ni metallic core and a silicate mantle (SUPEREARTH Valencia et al. 2007), while the outermost layer consists of either a hydrogen-helium enve- lope or a water vapor atmosphere modeled using the CEPAM code (Guillot & Morel 1995), with equations of state from Saumon et al. (1995) for H/He and French et al. (2009) for H2O. In all cases, we assume that the rocky interior con- tains no volatiles and follow the numerical set-up given in Plotnykov & Valencia (2020). To explore the range of possible atmospheric mass fraction (AMF) values, we consider two sub-Neptune composition sce- narios: (1) the planet has a H/He envelope (75% of H2 to 25% He) and (2) the planet has pure H2O envelope. For these sce- narios, we impose stellar-informed priors on the rocky interior based on the host star’s refractory abundances taken from Ta- ble 2, namely, Fe/Mgplanet ∼N(Fe/Mgstar, σ2 star); Fe/Siplanet ∼N(Fe/Sistar, σ2 star), where all ratios are by weight. Additionally, the mantle mineral- ogy is allowed to vary in terms of the Bridgmanite to Wustite ra- tio (MgSiO3 vs MgO, xWu). These assumptions effectively con- strain the rocky core-mass fraction (CMF = rcmf rcmf+1, where rcmf is the core to mantle mass ratio, Mcore/Mmantle) of the planet and mitigate problem of compositional degeneracy. Considering case (1) where TOI-756 b has retained its pri- mordial H/He envelope, we recover a well-constrained AMF = 0.023±0.003 (3 wt%), with a corresponding CMF = 0.27±0.03. Note that this strong constraint may partly result from un- derestimated abundance uncertainties, as the values were de- rived assuming a fixed Teff (see Sect. 4.2). However, if we im- pose no prior on the rocky composition, the envelope has an AMF = 0.03 ± 0.01 (3 wt%), while the interior has almost a uni- form distribution of CMF = 0.5 ± 0.3. These results suggesting strong evidence that this planet has a volatile envelope based on mass–radius data alone. For the case where the envelope is com- posed of pure water vapor (2), we find that AMF = 0.79 ± 0.10 and CMF = 0.27 ± 0.03. This very high value of AMF of pure water vapor is highly unlikely when linked to formation theo- ries. Our analysis suggests that the presence of H/He in the at- mosphere is more plausible, although a combination of both sce- narios (1) and (2) remains a possibility. Regardless, this confirms that TOI-756 b requires a volatile envelope to account for its den- sity, and that the abundances derived from NIRPS spectra have allowed us to better constrain both the CMF and AMF in the case of a pure H/He envelope. The resulting corner plots of this analysis are shown Fig. D.1. 6.2.3. A planet at the radius cliff and in the Neptune desert TOI-756 b is a very interesting target in this sample of small sub-Neptunes around M dwarfs. Indeed, it is a unique object close to the "radius cliff", a steep drop in planet occurrence be- tween 2.5 and 4.0 R⊕(Borucki et al. 2011; Howard et al. 2012; Fulton et al. 2017). This still poorly explored demographic fea- ture seems to vary in location with the host star’s spectral type as also seen with the radius valley (Ho et al. 2024; Parc et al. 2024; Burn et al. 2024). Kite et al. (2019) proposed that atmospheric sequestration into magma could explain this phenomenon, as Article number, page 13 A&A proofs: manuscript no. aa55684-25corr larger atmospheres reach the critical base pressure needed for H2 to dissolve into the core. Ongoing studies seek to understand variations in atmospheric observables across these features to better comprehend their underlying physics. Moreover, TOI-756 b lies at the very lower edge of the Nep- tune desert. We plotted its location together with the bound- aries of the desert as defined in Castro-González et al. (2024) in Fig. 8. The main hy pothesis for the shaping of the lower edge of the Neptune desert is through hydrodynamical atmo- spheric escape, driven by intense stellar X-ray and extreme ul- traviolet (XUV) irradiation (e.g., Yelle 2004; Tian et al. 2005; Owen & Jackson 2012; McDonald et al. 2019). This process can strip the gaseous envelopes of close-in Neptune-sized planets, leaving behind smaller, denser remnants such as sub-Neptunes or bare rocky cores (Lopez & Fortney 2013). Therefore, TOI- 756 b may have lost at least part of its gaseous envelope as a re- sult of prolonged exposure to XUV irradiation from its host star. Notably, M dwarfs remain active for significantly longer periods than Sun-like stars (Ribas et al. 2005), extending the timescale over which atmospheric escape operate. Additionally, the pos- sible non-zero eccentricity of the sub-Neptune, along with the eccentricity of TOI-756 c and the presence of a third body, may suggest dynamical activity, potentially involving high- eccentricity tidal migration (HEM). HEM, which includes mech- anisms such as planet–planet scattering (e.g., Gratia & Fabrycky 2017), Kozai-Lidov migration (e.g., Wu & Murray 2003), and secular chaos (e.g., Wu & Lithwick 2011), can occur at any stage after disk dispersal, from early evolutionary phases to sev- eral billion years later. HEM typically leads to strongly mis- aligned orbits, erasing any memory of the system’s primordial configuration. In this scenario, a distant massive companion ex- cites the eccentricity of the inner planet via gravitational per- turbations, which is then followed by tidal circularization and inward migration due to energy dissipation induced by stellar tides (e.g., Rasio & Ford 1996). This process can be investigated by measuring the spin–orbit alignment of the transiting planet using Rossiter–McLaughlin (RM) observations (Rossiter 1924; McLaughlin 1924). These dynamical processes are considered to be key factors in shaping the Neptune desert. Additionally, the boundaries of the Neptune desert may vary with the spectral type of the host star, and to date, there has been no comprehensive study of the Neptune desert around M dwarfs. Indeed, for a given orbital period, a planet orbiting an M dwarf receives intuitively less stellar flux than planets around other types of stars, which could affect the atmospheric escape kick off. Interestingly, TOI- 756 b does not show the high density commonly found in planets within the Neptune desert, such as TOI-849 b (Armstrong et al. 2020). Its ability to retain an atmosphere despite strong ir- radiation could be explained by its orbit around a metal-rich star, since metal-rich atmospheres are thought to be more resis- tant to photoevaporative mass loss (Owen & Murray-Clay 2018; Wilson et al. 2022). Atmospheric characterization of planets located within the radius cliff and Neptune desert could help test theories regarding the origins of these demographic features. With a Transmission Spectroscopic Metric (TSM) of 63 (Kempton et al. 2018), TOI- 756 b stands out as a promising target for future transmission spectroscopy studies, for instance with JWST (Gardner et al. 2006). Fig. 8. Planet radius as a function of orbital period for known exo- planets from NASA Exoplanet Archive with a radius precision below 8%. We highlighted the Neptunian desert, ridge, and savanna regions from Castro-González et al. (2024). The colorcode represents the ob- served density of planets. TOI-756 b is depicted as a dark red hexagon. Light blue symbols represent sub-Neptunes in systems hosting eccen- tric giant companions: crosses indicate systems with only giant plan- ets, while circles correspond to systems containing both small and giant planets. In contrast, dark blue circles represent sub-Neptunes in sys- tems with non-eccentric giant planets. The description of the selection is made in Sect. 6.3.1. This plot has been generated with nep-des (https://github.com/castro-gzlz/nep-des/). 6.3. TOI-756 system: A unique system in exoplanet zoology 6.3.1. Population of transiting sub-Neptunes with an outer companion The TOI-756 system with its transiting sub-Neptune, its cold gi- ant non-transiting companion, and an additional component, all orbiting an M dwarf in a wide binary system, is a very unique system in exoplanet zoology. We searched the NASA Exoplanet Archive13 for the multi-planetary systems with a transiting sub- Neptune (2 R⊕< Rp < 4 R⊕) orbiting with a period of less than 10 days and with a giant outer companion orbiting at more than 100 days detected by transit or radial velocity (or both) with Rp > 4 R⊕or Mp sin(i) > 20 M⊕. We plotted in Fig. 8 the radii and orbital periods of the sub-Neptunes of these sys- tems. We found 13 systems but none are orbiting an M dwarf. TOI-756 is currently the only confirmed system with a transit- ing sub-Neptune and a cold giant orbiting an M dwarf. This re- mains true even if we remove all constraints on the periods of the inner and outer planets. An additional but unconfirmed sys- tem with this peculiar architecture has been identified: the K2- 43 system. K2-43 c is a sub-Neptune (Rp = 2.4 R⊕, P = 2.2 d ; Hedges et al. 2019), and more recently a single transit event with a depth corresponding to a Jupiter-sized planet has been detected in the TESS data (TOI-5523.01). This system adds up to the small sample of the recent study of Bryan & Lee (2025), investigating the stellar mass and metal- licity trends for small planets with a gas giant companion. They found a higher gas giant frequency around metal-rich M dwarfs for both samples (with gas giant (GG) or with gas giant plus small planet (GG|SE)), but they find no significant difference in gas giant occurrence rate between P(GG) and P(GG|SE). While they find no significant correlation between small planets and 13 https://exoplanetarchive.ipac.caltech.edu/ Article number, page 14 L. Parc, et al.: The peculiar TOI-756 system outer gas giants around M dwarfs, previous work has found a significant positive correlation between these planet populations around more massive stars that are metal-rich: Bryan & Lee (2024) and Chachan & Lee (2023) hypothesized that this pos- itive correlation should persist and may even strengthen for lower-mass stars. This follows the well known metallicity-giant planet correlation seen for FGK stars (e.g., Sousa et al. 2021) and M dwarfs (e.g., Neves et al. 2013). We are offering an addi- tional system around a metal-rich M dwarf to address a largely underexplored parameter space, aiding studies that investigate the correlations and occurrence rates of specific populations in relation to stellar parameters. In addition to being a unique multi-planet system, TOI-756 is an M dwarf hosting a rare giant component. Planet of and above Jupiter’s mass are remarkably rare around M dwarfs. Core-accretion theory predicts that giant planets should be less common around M dwarfs than around FGK-type stars, primar- ily due to the lower surface density of solids and longer forma- tion timescales in protoplanetary disks around low-mass stars (e.g., Laughlin et al. 2004; Ida & Lin 2005). This trend is sup- ported by recent population synthesis models, which not only confirm the low occurrence rate of giant planets in such environ- ments but also suggest it may drop to nearly zero for host stars with masses between 0.1 and 0.3 M⊙(Burn et al. 2021). They generally form in the outer region of the disk beyond the ice line (Alexander & Pascucci 2012; Bitsch et al. 2015), where there is more material for them to form, but we are biased against detect- ing them with the transit method as transit probability decreases at long orbital periods. This probability is even lower around M dwarfs since they are small stars. However, RV campaigns will certainly provide more of these outer companions to small tran- siting planets, but also confirm giant TESS candidates. The RV follow-up of TESS giant planet candidates is another one of the subprograms of SP2 of the NIRPS-GTO, thanks to the unique sensitivity of NIRPS in the infrared, which allows us to charac- terize such planets around host stars with J < 12. The discovery of TOI-756 c together with the other discoveries of giants around M dwarfs with NIRPS (Frensch et al. in prep) will help to test the hypothesis of their formation and evolution. Coming back to the identified systems similar to TOI-756, an interesting thing to note is that 11 cold giants (in 9 of these 13 systems, including TOI-756) have a detected non-zero eccentric- ity (e > 0.1). We highlighted these systems in light blue in Fig. 8. Systems represented by crosses consist solely of a sub-Neptune accompanied by one or more giant planets, whereas systems shown as circles include both small and giant planets in addi- tion to the sub-Neptune. In addition, we emphasize three systems that share strong similarities with the TOI-756 system: TOI- 4010 (Kunimoto et al. 2023), TOI-969 (Lillo-Box et al. 2023), and Kepler-94 (Weiss et al. 2024). All three systems consist of a sub-Neptune located near the lower boundary of the Neptune desert, accompanied by a giant planet with an orbital period ex- ceeding 100 days and a non-zero eccentricity. Regarding this class of systems, Bitsch & Izidoro (2023) used N-body simu- lations that combine pebble and gas accretion with planetary migration. They found that systems hosting outer giant planets tend to produce more systems with predominantly a single in- ner planet and exhibit higher eccentricities for all planets, com- pared to simulations without outer giants. In addition, unstable systems (with high eccentricities) mostly host only one inner sub-Neptune (and for most systems, this inner planet is tran- siting). Additional observations of TOI-756 to precisely con- strain the eccentricity of TOI-756 b could be a good test case of these results considering the large eccentricity of the planet c. 0 1 2 3 4 5 6 7 8 9 Semi-major axis [au] 100 101 102 103 Companion Mass [Mjup] TOI-756 mass BD Star Planet TOI-756 c NIRPS+HARPS lower mass limit Gaia DR4 detection limits Excluded regions Fig. 9. Limits on the companion mass of TOI-756 system as a function of semi-major axis. The lower limit, indicated by the solid black line, is calculated based on the RV linear trend. The excluded gray areas include this constraint of the RV linear trend, the timespan of observa- tions of the RV (left rectangle), the limit from high-contrast imaging (upper right rectangle), and the absence of a double peak in the CCF (upper rectangle). We plotted the Gaia DR4 astrometric detection lim- its as a solid dark blue line for a star at 86 pc with a RUWE of 1.25 (Wallace et al. 2025). The dotted black lines are the different mass limit categories separating planetary, brown dwarf (BD), and stellar natures. Here again, Bitsch & Izidoro (2023) predicted that systems with truly single close-in planets are more likely to host outer gas giants. Conversely, Schlecker et al. (2021) predicted that plane- tary systems around stars with high metallicity frequently con- tain warm and dynamically active giant planets that can disrupt inner planetary systems and then are less likely to harbor inner small planets. The RV follow up of transiting close-in planets by the NIRPS-GTO SP2 will help to test these predictions by planetary formation models. 6.3.2. Constraints on the RV acceleration In addition to the sub-Neptune and the eccentric giant planet, NIRPS and HARPS have revealed an acceleration in the RV of TOI-756. To determine if the wide binary WT 352 could, plau- sibly, be responsible for the acceleration, we used the following equation (Torres 1999): Mcomp = 5.34 × 10−6M⊙ d pc ρ arcsec !2 ˙v m s−1 yr−1 F(i, e, ω, ϕ) (1) where d is the distance to the system, ρ is the projected sep- aration of the companion on the sky, and ˙v is the best-fit RV trend. F(i, e, ω, ϕ) is an equation that depends on the unknown orbital parameters of the companion and has a minimum value of √ 27/2, which we use in our calculations here. We convert the projected separation on the sky to a minimum semi-major axis using the Gaia DR3 distance of TOI-756. With an acceleration of 145.6 m s−1 yr−1 and the separation of 11.09 arcsec, we found a mass of 1857 M⊙. We therefore conclude that the co-moving companion (a ∼M3/4V star with M⋆∼0.3 M⊙) cannot be re- sponsible for the trend in this system. We again used Eq. 1 to draw the black curve in Fig. 9 representing the lower mass limit permitted by the RV trend as a function of the semi-major axis. Article number, page 15 A&A proofs: manuscript no. aa55684-25corr We see that we cannot exclude a planetary nature for this addi- tional companion. In addition, we can exclude the left rectan- gle of the figure corresponding to our time-span of observations of 480 days. An additional constraint on the mass of this com- panion comes from high-resolution imaging of TOI-756, which rules out companions more than 5 magnitudes fainter (down to approximately an M5.5/6V star, around 0.11 M⊙) at a separa- tion of 0.1 arcseconds (∼8.6 au) (see Sect. 3.4). Moreover, the RV measurements do not show any evidence of a blended com- panion of a mass similar to the primary with no increased con- trast, FWHM, or bisector deviations of the CCFs compared to a similar single star (e.g., GL514, the star used for the LBL tem- plate). We expect to detect a second peak in the CCF with a contrast up to 10 times smaller (∼3–5%) than that of the primary peak, corresponding to a companion with a magnitude difference of about 2.5. Given that TOI-756 is an M1 star, this sensitiv- ity would allow us to detect a companion as late as M4.5–M5 (∼0.16–0.18 M⊙). As a result, we can exclude the presence of early M-type stellar companions as the source of the observed RV trend (see Fig. 9). Moreover, in Fig. 9, we plotted the expected Gaia DR4 de- tection limits at the distance of TOI-756 and for a RUWE of 1.25 (RUWETOI-756 = 1.24) from Wallace et al. (2025). We can see that Gaia will be capable of resolving the orbit and param- eters of almost any object within the parameter space for this companion. In addition, we are still monitoring TOI-756 sys- tem, with NIRPS and HARPS to further constrain this additional component. The combination of radial velocity and astrometric data will be crucial for a precise characterization of the full sys- tem. In particular, it may enable us to resolve the orbits of both TOI-756 c and the additional companion, allowing us to mea- sure their mutual inclination and gain insight into the formation, evolution, and dynamical history of this rare system orbiting a low-mass star. 6.3.3. The binarity effect on formation and evolution The presence of the widely separated stellar companion at ∼11" raises important questions about its influence on the formation and dynamical evolution of planets within the TOI-756 system. Binarity is known to truncate the circumstellar disk, shorten its lifetime, and reduce planet occurrence rates (Cieza et al. 2009; Harris et al. 2012; Moe & Kratter 2021). While the wide physical separation between the two stars suggests that the stel- lar companion had a limited direct impact on the protoplanetary disk of TOI-756, studies have shown that binary companions can still influence planet formation and evolution even at separations up to 1000 au, potentially hindering the formation of massive planetary cores (Sullivan et al. 2023). On the other hand, recent studies (Sullivan et al. 2024) show that the radius gap in wide binaries (separation > 300 au) ap- pears to be shifted toward smaller radii. This suggests that the presence of a stellar companion can influence disk conditions and, consequently, the formation and evolution of planets. However, despite expectations that a stellar companion at this distance could have a noticeable gravitational effect, WT 352 is significantly less massive (M⋆∼0.3 M⊙) than the primary and is located at a separation that could exceed 1000 au. In this case, the companion did not inhibit the formation of giant planets or sub-stellar objects around the primary star. This suggests that the gravitational effect of a companion depends not only on its separation, but also on its mass relative to the primary star. Furthermore, while upcoming Gaia DR4 data will refine as- trometric measurements, it is unlikely to provide significant new constraints on the orbital parameters of the binary, given that its expected orbital period is on the order of ∼40,000 years — too long for measurable motion within Gaia’s observational timeline (El-Badry et al. 2024). Recently, Behmard et al. (2022); Christian et al. (2022) in- vestigated the potential alignment between the orbital planes of planetary systems and their visual binary companions. The TOI- 756 system, along with its wide binary companion WT 352, is included in their sample. Christian et al. (2022) reported a signif- icant misalignment in this system, with a mutual inclination of i = 118+34 −17 degrees (5th to 95th percentile range). While their analysis reveals an excess of aligned systems among binaries with separations less than 700 au, they find that the distribution of mutual inclinations becomes consistent with uniformity for wider binaries (a > 700 au). This could account for the observed misalignment in TOI-756, given its projected binary separation of approximately 955 au. 7. Conclusions We present the "Sub-Neptunes" subprogram of the NIRPS-GTO SP2 program, which aims to improve our understanding of the diversity in composition and internal structure of small planets around M dwarfs. By enabling the study of targets hosting TESS sub-Neptune candidates with V < 15.4, NIRPS (the red arm of HARPS) expands the reach of RV follow-up beyond the tradi- tional limits of optical spectrographs installed on 4-meter-class telescopes. We report the first results of the RV follow-up program of the NIRPS-GTO, presenting the characterization of a two-planet system orbiting the M-dwarf TOI-756, which is the primary component of a wide binary system with the star WT 352. TOI- 756 b was initially identified by TESS and subsequently con- firmed with the ground-based photometry facilities LCO-CTIO and ExTrA, as well with RV measurements obtained with NIRPS and HARPS, which enabled the determination of its mass. Ad- ditionally, NIRPS and HARPS allowed the identification of a second non-transiting planet in the system TOI-756 c, as well as an supplementary RV acceleration hinting at an extra third com- ponent in the system that could be planetary as well. TOI-756 b is a sub-Neptune with a radius of 2.81 R⊕and a mass of 9.8 M⊕ orbiting with a period of 1.24 days around its star. TOI-756 c is a cold eccentric giant planet orbiting at 149 days with a minimum mass of 4.05 MJup and an eccentricity of 0.45. TOI-756 b, with a density of 2.42 g.cm−3, is in line with the recently identified trend of low-density sub-Neptunes around M dwarfs compared to FGK dwarfs. In addition, this peculiar tar- get lies in the radius cliff of M-dwarf planets and in the Neptune desert. TOI-756 b most likely requires a certain amount of hy- drogen and helium in its atmosphere to account for its observed density, either as a pure H/He envelope or as a mixture of super- critical H2O and H/He. The TOI-756 system is particularly unique, as it is the only known confirmed system hosting both a transiting sub-Neptune and an outer giant planet around an M dwarf. This makes it a valuable case for comparison with planet formation and evolu- tion models, as well as for studying correlations between plane- tary populations and stellar parameters such as stellar mass and metallicity. In addition, TOI-756 c enhances the small population of giant planets around M dwarfs, a population whose formation mechanisms are still not fully understood. Identifying more of these planets is vital for constraining our models of their forma- tion and evolutionary processes, providing deeper insights into Article number, page 16 L. Parc, et al.: The peculiar TOI-756 system the pathways that shape such systems. The astrometric measure- ments from the Gaia DR4 release will be key to combine with the RVs to further characterize this unique system. TOI-756 b is also a promising candidate for future atmo- spheric characterization through transmission spectroscopy with JWST, which could help confirm or rule out the presence of a primordial H/He-dominated atmosphere. It also offers an oppor- tunity to test hypotheses regarding the radius cliff and the Nep- tune desert population and to constrain the formation and evo- lution models of small planets orbiting alongside an eccentric outer companion. In this study, we demonstrate the capabilities of the unique NIRPS and HARPS combination to obtain precise RVs of M dwarfs, enabling the confirmation and characterization of candi- dates detected by current photometric surveys such as TESS, as well as upcoming missions like PLATO (Rauer et al. 2014). Acknowledgements. We thank the anonymous referee for their valuable com- ments, which helped improve the manuscript. This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. NJC, ÉA, AL, RD, FBa, BB, LMa, RA, LB, AB, CC, AD-B, LD, PLam, OL, LMo, JS-A, PV, TV & JPW acknowledge the financial support of the FRQ-NT through the Centre de recherche en astrophysique du Québec as well as the sup- port from the Trottier Family Foundation and the Trottier Institute for Research on Exoplanets. ÉA, RD, FBa, LMa, TA, J-SM, MO, JS-A & PV acknowledges support from Canada Foundation for Innovation (CFI) program, the Université de Montréal and Université Laval, the Canada Economic Development (CED) program and the Ministere of Economy, Innovation and Energy (MEIE). AL acknowledges support from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies under file #349961. The Board of Observational and Instrumental Astronomy (NAOS) at the Fed- eral University of Rio Grande do Norte’s research activities are supported by continuous grants from the Brazilian funding agency CNPq. This study was par- tially funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Su- perior—Brasil (CAPES) — Finance Code 001 and the CAPES-Print program. 0 SCB, ED-M, NCS, EC, ARCS & JGd acknowledge the support from FCT - Fundação para a Ciência e a Tecnologia through national funds by these grants: UIDB/04434/2020, UIDP/04434/2020. Co-funded by the European Union (ERC, FIERCE, 101052347). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. SCB acknowledges the support from Fundação para a Ciência e Tecnolo- gia (FCT) in the form of a work contract through the Scientific Em- ployment Incentive program with reference 2023.06687.CEECIND and DOI 10.54499/2023.06687.CEECIND/CP2839/CT0002. XB, XDe, ACar, TF & VY acknowledge funding from the French ANR under contract number ANR18CE310019 (SPlaSH), and the French National Research Agency in the framework of the Investissements d’Avenir program (ANR-15- IDEX-02), through the funding of the “Origin of Life" project of the Grenoble- Alpes University. BLCM & AMM acknowledge CAPES postdoctoral fellowships. BLCM acknowledges CNPq research fellowships (Grant No. 305804/2022-7). NBC acknowledges support from an NSERC Discovery Grant, a Canada Re- search Chair, and an Arthur B. McDonald Fellowship, and thanks the Trottier Space Institute for its financial support and dynamic intellectual environment. DBF acknowledges financial support from the Brazilian agency CNPq-PQ (Grant No. 305566/2021-0). Continuous grants from the Brazilian agency CNPq support the STELLAR TEAM of the Federal University of Ceara’s research ac- tivities. JRM acknowledges CNPq research fellowships (Grant No. 308928/2019-9). ED-M further acknowledges the support from FCT through Stimulus FCT contract 2021.01294.CEECIND. ED-M acknowledges the support by the Ramón y Cajal contract RyC2022-035854-I funded by MI- CIU/AEI/10.13039/501100011033 and by ESF+. XDu acknowledges the support from the European Research Council (ERC) un- der the European Union’s Horizon 2020 research and innovation programme (grant agreement SCORE No 851555) and from the Swiss National Science Foundation under the grant SPECTRE (No 200021_215200). DE acknowledge support from the Swiss National Science Foundation for project 200021_200726. The authors acknowledge the financial support of the SNSF. JIGH, RR, ASM, FGT, NN, VMP, JLR & AKS acknowledge financial sup- port from the Spanish Ministry of Science, Innovation and Universities (MICIU) projects PID2020-117493GB-I00 and PID2023-149982NB-I00. ICL acknowledges CNPq research fellowships (Grant No. 313103/2022-4). CMo acknowledges the funding from the Swiss National Science Foundation under grant 200021_204847 “PlanetsInTime”. KAM acknowledges support from the Swiss National Science Foundation (SNSF) under the Postdoc Mobility grant P500PT_230225. RA acknowledges the Swiss National Science Foundation (SNSF) support un- der the Post-Doc Mobility grant P500PT_222212 and the support of the Institut Trottier de Recherche sur les Exoplanètes (IREx). We acknowledge funding from the European Research Council under the ERC Grant Agreement n. 337591-ExTrA. LB acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (project Spice Dune, grant agreement No 947634). This material reflects only the authors’ views and the Commission is not liable for any use that may be made of the information contained therein. ARCS acknowledges the support from Fundaçao para a Ciência e a Tecnologia (FCT) through the fellowship 2021.07856.BD. LD acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies. FG acknowledges support from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies under file #350366. H.J.H. acknowledges funding from eSSENCE (grant number eSSENCE@LU 9:3), the Swedish National Research Council (project number 2023-05307), The Crafoord foundation and the Royal Physiographic Society of Lund, through The Fund of the Walter Gyllenberg Foundation. LMo acknowledges the support of the Natural Sciences and Engineering Re- search Council of Canada (NSERC), [funding reference number 589653]. NN acknowledges financial support by Light Bridges S.L, Las Palmas de Gran Canaria. NN acknowledges funding from Light Bridges for the Doctoral Thesis "Hab- itable Earth-like planets with ESPRESSO and NIRPS", in cooperation with the Instituto de Astrofísica de Canarias, and the use of Indefeasible Computer Rights (ICR) being commissioned at the ASTRO POC project in the Island of Tenerife, Canary Islands (Spain). The ICR-ASTRONOMY used for his research was pro- vided by Light Bridges in cooperation with Hewlett Packard Enterprise (HPE). CPi acknowledges support from the NSERC Vanier scholarship, and the Trottier Family Foundation. CPi also acknowledges support from the E. Margaret Bur- bidge Prize Postdoctoral Fellowship from the Brinson Foundation. AKS acknowledges financial support from La Caixa Foundation (ID 100010434) under the grant LCF/BQ/DI23/11990071. TV acknowledges support from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies under file #320056. KaC acknowledges support from the TESS mission via subaward s3449 from MIT. Funding for the TESS mission is provided by NASA’s Science Mission Direc- torate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5) website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Adminis- tration under the Exoplanet Exploration Program. This paper includes data col- lected by the TESS mission that are publicly available from the Mikulski Archive for Space Telescopes (MAST). This work makes use of observations from the LCOGT network. Part of the LCOGT telescope time was granted by NOIRLab through the Mid-Scale Innovations Program (MSIP). MSIP is funded by NSF. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aero- nautics and Space Administration under the Exoplanet Exploration Program. References Aguichine, A., Batalha, N., Fortney, J. J., et al. 2025, ApJ, 988, 186 Aguichine, A., Mousis, O., Deleuil, M., & Marcq, E. 2021, ApJ, 914, 84 Alexander, R. D. & Pascucci, I. 2012, MNRAS, 422, L82 Alibert, Y. & Benz, W. 2017, A&A, 598, L5 Allard, F., Homeier, D., & Freytag, B. 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2765 Allart, R., Lovis, C., Faria, J., et al. 2022, A&A, 666, A196 Article number, page 17 A&A proofs: manuscript no. aa55684-25corr Aller, A., Lillo-Box, J., Jones, D., Miranda, L. F., & Barceló Forteza, S. 2020, A&A, 635, A128 Ambikasaran, S., Foreman-Mackey, D., Greengard, L., Hogg, D. W., & O’Neil, M. 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38, 252 Anglada-Escudé, G. & Butler, R. P. 2012, ApJS, 200, 15 Antoniadis-Karnavas, A., Sousa, S. G., Delgado-Mena, E., Santos, N. C., & An- dreasen, D. T. 2024, A&A, 690, A58 Antoniadis-Karnavas, A., Sousa, S. G., Delgado-Mena, E., et al. 2020, A&A, 636, A9 Armstrong, D. J., Lopez, T. A., Adibekyan, V., et al. 2020, Nature, 583, 39 Artigau, É., Cadieux, C., Cook, N. J., et al. 2024, AJ, 168, 252 Artigau, É., Cadieux, C., Cook, N. J., et al. 2022, AJ, 164, 84 Astudillo-Defru, N., Díaz, R. F., Bonfils, X., et al. 2017, A&A, 605, L11 Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, A&A, 337, 403 Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, A&A, 577, A42 Batalha, N. E., Lewis, T., Fortney, J. J., et al. 2019, ApJ, 885, L25 Batalha, N. M., Rowe, J. F., Bryson, S. T., et al. 2013, ApJS, 204, 24 Bayo, A., Rodrigo, C., Barrado Y Navascués, D., et al. 2008, A&A, 492, 277 Behmard, A., Dai, F., & Howard, A. W. 2022, AJ, 163, 160 Benneke, B., Roy, P.-A., Coulombe, L.-P., et al. 2024, arXiv e-prints, arXiv:2403.03325 Bensby, T., Feltzing, S., & Oey, M. S. 2014, A&A, 562, A71 Bertaux, J. L., Lallement, R., Ferron, S., Boonne, C., & Bodichon, R. 2014, A&A, 564, A46 Bitsch, B. & Izidoro, A. 2023, A&A, 674, A178 Bitsch, B., Lambrechts, M., & Johansen, A. 2015, A&A, 582, A112 Bitsch, B., Raymond, S. N., Buchhave, L. A., et al. 2021, A&A, 649, L5 Bonfils, X., Almenara, J. M., Jocou, L., et al. 2015, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9605, Techniques and Instrumentation for Detection of Exoplanets VII, ed. S. Shaklan, 96051L Bonfils, X., Delfosse, X., Udry, S., et al. 2013, A&A, 549, A109 Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977 Borucki, W. J., Koch, D. G., Basri, G., et al. 2011, ApJ, 736, 19 Bouchy, F., Doyon, R., Artigau, É., et al. 2017, The Messenger, 169, 21 Bouchy, F., Doyon, R., Pepe, F., Melo, C., & Artigau, É. 2025, A&A Brahm, R., Espinoza, N., Jordán, A., et al. 2018, MNRAS, 477, 2572 Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031 Brugger, B., Mousis, O., Deleuil, M., & Deschamps, F. 2017, ApJ, 850, 93 Bryan, M. L. & Lee, E. J. 2024, ApJ, 968, L25 Bryan, M. L. & Lee, E. J. 2025, ApJ, 982, L7 Bryant, E. M., Bayliss, D., & Van Eylen, V. 2023, MNRAS, 521, 3663 Burn, R., Mordasini, C., Mishra, L., et al. 2024, Nature Astronomy, 8, 463 Burn, R., Schlecker, M., Mordasini, C., et al. 2021, A&A, 656, A72 Castelli, F. & Kurucz, R. L. 2003, in IAU Symposium, Vol. 210, Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray, A20 Castro-González, A., Bourrier, V., Lillo-Box, J., et al. 2024, A&A, 689, A250 Chachan, Y. & Lee, E. J. 2023, ApJ, 952, L20 Christian, S., Vanderburg, A., Becker, J., et al. 2022, AJ, 163, 207 Ciardi, D. R., Beichman, C. A., Horch, E. P., & Howell, S. B. 2015, ApJ, 805, 16 Cieza, L. A., Padgett, D. L., Allen, L. E., et al. 2009, ApJ, 696, L84 Cloutier, R. & Menou, K. 2020, AJ, 159, 211 Cointepas, M., Almenara, J. M., Bonfils, X., et al. 2021, A&A, 650, A145 Collins, K. 2019, in American Astronomical Society Meeting Abstracts, Vol. 233, American Astronomical Society Meeting Abstracts #233, 140.05 Collins, K. A., Kielkopf, J. F., Stassun, K. G., & Hessman, F. V. 2017, AJ, 153, 77 Cook, N. J., Artigau, É., Doyon, R., et al. 2022, PASP, 134, 114509 Deline, A., Hooton, M. J., Lendl, M., et al. 2022, A&A, 659, A74 Delmotte, N., Dolensky, M., Padovani, P., et al. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel, C. Arviset, D. Ponz, & S. Enrique, 690 Donati, J. F., Kouach, D., Moutou, C., et al. 2020, MNRAS, 498, 5684 Dorn, C., Khan, A., Heng, K., et al. 2015, A&A, 577, A83 Dressing, C. D. & Charbonneau, D. 2013, ApJ, 767, 95 Dressing, C. D. & Charbonneau, D. 2015, ApJ, 807, 45 El-Badry, K., Lam, C., Holl, B., et al. 2024, The Open Journal of Astrophysics, 7, 100 El-Badry, K., Rix, H.-W., & Heintz, T. M. 2021, MNRAS, 506, 2269 Espinoza, N. 2018, Research Notes of the American Astronomical Society, 2, 209 Espinoza, N. & Jordán, A. 2015, MNRAS, 450, 1879 Espinoza, N., Kossakowski, D., & Brahm, R. 2019, MNRAS, 490, 2262 Foreman-Mackey, D., Agol, E., Ambikasaran, S., & Angus, R. 2017, AJ, 154, 220 Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306 French, M., Mattsson, T. R., Nettelmann, N., & Redmer, R. 2009, Phys. Rev. B, 79, 054107 Fulton, B. J., Petigura, E. A., Blunt, S., & Sinukoff, E. 2018, PASP, 130, 044504 Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1 Gaidos, E., Mann, A. W., Kraus, A. L., & Ireland, M. 2016, MNRAS, 457, 2877 Gardner, J. P., Mather, J. C., Clampin, M., et al. 2006, Space Sci. Rev., 123, 485 Gordon, I. E., Rothman, L. S., Hargreaves, R. J., et al. 2022, J. Quant. Spectr. Rad. Transf., 277, 107949 Gratia, P. & Fabrycky, D. 2017, MNRAS, 464, 1709 Guerrero, N. M., Seager, S., Huang, C. X., et al. 2021, ApJS, 254, 39 Guillot, T. & Morel, P. 1995, A&AS, 109, 109 Hara, N. C., Boué, G., Laskar, J., Delisle, J. B., & Unger, N. 2019, MNRAS, 489, 738 Harris, R. J., Andrews, S. M., Wilner, D. J., & Kraus, A. L. 2012, ApJ, 751, 115 Hedges, C., Saunders, N., Barentsen, G., et al. 2019, ApJ, 880, L5 Henden, A. A., Levine, S., Terrell, D., & Welch, D. L. 2015, in American Astro- nomical Society Meeting Abstracts, Vol. 225, American Astronomical Soci- ety Meeting Abstracts #225, 336.16 Henry, T. J., Jao, W.-C., Subasavage, J. P., et al. 2006, AJ, 132, 2360 Ho, C. S. K., Rogers, J. G., Van Eylen, V., Owen, J. E., & Schlichting, H. E. 2024, MNRAS, 531, 3698 Howard, A. W., Marcy, G. W., Bryson, S. T., et al. 2012, ApJS, 201, 15 Howell, S. B., Everett, M. E., Sherry, W., Horch, E., & Ciardi, D. R. 2011, AJ, 142, 19 Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, A&A, 553, A6 Ida, S. & Lin, D. N. C. 2005, ApJ, 626, 1045 Jahandar, F., Doyon, R., Artigau, É., et al. 2025, The Astrophysical Journal, 978, 154 Jahandar, F., Doyon, R., Artigau, É., et al. 2024, The Astrophysical Journal, 966, 56 Jenkins, J. M., Twicken, J. D., McCauliff, S., et al. 2016, in Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9913, Soft- ware and Cyberinfrastructure for Astronomy IV, ed. G. Chiozzi & J. C. Guz- man, 99133E Kempton, E. M. R., Bean, J. L., Louie, D. R., et al. 2018, PASP, 130, 114401 Khata, D., Mondal, S., Das, R., & Baug, T. 2021, MNRAS, 507, 1869 Kite, E. S., Fegley, Jr., B., Schaefer, L., & Ford, E. B. 2019, ApJ, 887, L33 Kreidberg, L. 2015, PASP, 127, 1161 Kubyshkina, D. & Vidotto, A. A. 2021, MNRAS, 504, 2034 Kunimoto, M., Vanderburg, A., Huang, C. X., et al. 2023, AJ, 166, 7 Kurucz, R. L. 1979, ApJS, 40, 1 Kurucz, R. L. 1993, SYNTHE spectrum synthesis programs and line data Laughlin, G., Bodenheimer, P., & Adams, F. C. 2004, ApJ, 612, L73 Lenzen, R., Hartung, M., Brandner, W., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 944–952 Li, J., Tenenbaum, P., Twicken, J. D., et al. 2019, PASP, 131, 024506 Lillo-Box, J., Gandolfi, D., Armstrong, D. J., et al. 2023, A&A, 669, A109 Lopez, E. D. & Fortney, J. J. 2013, ApJ, 776, 2 Lopez, E. D. & Rice, K. 2018, MNRAS, 479, 5303 Lovis, C. & Pepe, F. 2007, A&A, 468, 1115 Lucy, L. B. & Sweeney, M. A. 1971, AJ, 76, 544 Luo, H., Dorn, C., & Deng, J. 2024, Nature Astronomy, 8, 1399 Mann, A. W., Dupuy, T., Kraus, A. L., et al. 2019, ApJ, 871, 63 Mann, A. W., Feiden, G. A., Gaidos, E., Boyajian, T., & von Braun, K. 2015, ApJ, 804, 64 Mann, H. B. & Whitney, D. R. 1947, The Annals of Mathematical Statistics, 18, 50 Marcy, G. W., Weiss, L. M., Petigura, E. A., et al. 2014, Proceedings of the National Academy of Science, 111, 12655 Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20 Mayor, M. & Queloz, D. 1995, Nature, 378, 355 McCully, C., Volgenau, N. H., Harbeck, D.-R., et al. 2018, in Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10707, Software and Cyberinfrastructure for Astronomy V, ed. J. C. Guzman & J. Ib- sen, 107070K McDonald, G. D., Kreidberg, L., & Lopez, E. 2019, ApJ, 876, 22 McLaughlin, D. B. 1924, ApJ, 60, 22 Mignon, L., Delfosse, X., Meunier, N., et al. 2025, A&A, in press Moe, M. & Kratter, K. M. 2021, MNRAS, 507, 3593 Morello, G., Tsiaras, A., Howarth, I. D., & Homeier, D. 2017, AJ, 154, 111 Morris, R. L., Twicken, J. D., Smith, J. C., et al. 2020, Kepler Data Processing Handbook: Photometric Analysis, Kepler Science Document KSCI-19081- 003, id. 6. Edited by Jon M. Jenkins. Mugrauer, M. & Michel, K.-U. 2020, Astronomische Nachrichten, 341, 996 Mulders, G. D., Pascucci, I., & Apai, D. 2015, ApJ, 814, 130 Neves, V., Bonfils, X., Santos, N. C., et al. 2012, A&A, 538, A25 Neves, V., Bonfils, X., Santos, N. C., et al. 2013, A&A, 551, A36 Otegi, J. F., Bouchy, F., & Helled, R. 2020, A&A, 634, A43 Owen, J. E. & Jackson, A. P. 2012, MNRAS, 425, 2931 Owen, J. E. & Murray-Clay, R. 2018, MNRAS, 480, 2206 Article number, page 18 L. Parc, et al.: The peculiar TOI-756 system Owen, J. E. & Wu, Y. 2017, ApJ, 847, 29 Parc, L., Bouchy, F., Venturini, J., Dorn, C., & Helled, R. 2024, A&A, 688, A59 Pascucci, I., Testi, L., Herczeg, G. J., et al. 2016, ApJ, 831, 125 Pass, E. K., Winters, J. G., Charbonneau, D., et al. 2023, AJ, 166, 11 Pecaut, M. J. & Mamajek, E. E. 2013, ApJS, 208, 9 Pepe, F., Cristiani, S., Rebolo, R., et al. 2021, A&A, 645, A96 Petigura, E. A., Howard, A. W., & Marcy, G. W. 2013, Proceedings of the Na- tional Academy of Science, 110, 19273 Plotnykov, M. & Valencia, D. 2020, MNRAS, 499, 932 Plotnykov, M. & Valencia, D. 2024, MNRAS, 530, 3488 Pojmanski, G. 1997, Acta Astron., 47, 467 Rasio, F. A. & Ford, E. B. 1996, Science, 274, 954 Rauer, H., Catala, C., Aerts, C., et al. 2014, Experimental Astronomy, 38, 249 Raymond, S. N. & Izidoro, A. 2017, Icarus, 297, 134 Reylé, C., Jardine, K., Fouqué, P., et al. 2021, A&A, 650, A201 Ribas, I., Guinan, E. F., Güdel, M., & Audard, M. 2005, ApJ, 622, 680 Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, in Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, ed. J. M. Oschmann, Jr., M. Clampin, G. G. Fazio, & H. A. MacEwen, 914320 Rossiter, R. A. 1924, ApJ, 60, 15 Rousset, G., Lacombe, F., Puget, P., et al. 2003, in Society of Photo-Optical In- strumentation Engineers (SPIE) Conference Series, Vol. 4839, Adaptive Op- tical System Technologies II, ed. P. L. Wizinowich & D. Bonaccini, 140–149 Saumon, D., Chabrier, G., & van Horn, H. M. 1995, ApJS, 99, 713 Schlecker, M., Mordasini, C., Emsenhuber, A., et al. 2021, A&A, 656, A71 Schönrich, R., Binney, J., & Dehnen, W. 2010, MNRAS, 403, 1829 Schweitzer, A., Passegger, V. M., Cifuentes, C., et al. 2019, A&A, 625, A68 Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163 Smith, J. C., Stumpe, M. C., Van Cleve, J. E., et al. 2012, PASP, 124, 1000 Sousa, S. G., Adibekyan, V., Delgado-Mena, E., et al. 2021, A&A, 656, A53 Speagle, J. S. 2020, MNRAS, 493, 3132 Stassun, K. G., Oelkers, R. J., Paegert, M., et al. 2019, AJ, 158, 138 Stumpe, M. C., Smith, J. C., Catanzarite, J. H., et al. 2014, PASP, 126, 100 Stumpe, M. C., Smith, J. C., Van Cleve, J. E., et al. 2012, PASP, 124, 985 Suárez Mascareño, A., Artigau, É., Mignon, L., Delfosse, X., & Cook, N. J. 2025, A&A Suárez Mascareño, A., Rebolo, R., & González Hernández, J. I. 2016, A&A, 595, A12 Suárez Mascareño, A., Rebolo, R., González Hernández, J. I., & Esposito, M. 2015, MNRAS, 452, 2745 Sullivan, K., Kraus, A. L., Berger, T. A., et al. 2024, AJ, 168, 129 Sullivan, K., Kraus, A. L., Huber, D., et al. 2023, AJ, 165, 177 Tian, F., Toon, O. B., Pavlov, A. A., & De Sterck, H. 2005, ApJ, 621, 1049 Tokovinin, A. 2018, PASP, 130, 035002 Torres, G. 1999, PASP, 111, 169 Twicken, J. D., Catanzarite, J. H., Clarke, B. D., et al. 2018, PASP, 130, 064502 Twicken, J. D., Clarke, B. D., Bryson, S. T., et al. 2010, in Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7740, Software and Cyberinfrastructure for Astronomy, ed. N. M. Radziwill & A. Bridger, 774023 Valencia, D., Sasselov, D. D., & O’Connell, R. J. 2007, ApJ, 656, 545 Venturini, J., Guilera, O. M., Haldemann, J., Ronco, M. P., & Mordasini, C. 2020, A&A, 643, L1 Venturini, J., Ronco, M. P., Guilera, O. M., et al. 2024, A&A, 686, L9 Wallace, A. L., Casey, A. R., Brown, A. G. A., & Castro-Ginard, A. 2025, MN- RAS, 536, 2485 Weiss, L. M., Isaacson, H., Howard, A. W., et al. 2024, ApJS, 270, 8 Wilcoxon, F. 1945, Biometrics Bulletin, 1, 80 Wilson, T. G., Goffo, E., Alibert, Y., et al. 2022, MNRAS, 511, 1043 Winters, J. G., Henry, T. J., Lurie, J. C., et al. 2015, AJ, 149, 5 Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868 Wroblewski, H. & Torres, C. 1991, A&AS, 91, 129 Wu, Y. & Lithwick, Y. 2011, ApJ, 735, 109 Wu, Y. & Murray, N. 2003, ApJ, 589, 605 Yelle, R. V. 2004, Icarus, 170, 167 Zeng, L., Jacobsen, S. B., Sasselov, D. D., et al. 2019, Proceedings of the Na- tional Academy of Science, 116, 9723 Zeng, L., Sasselov, D. D., & Jacobsen, S. B. 2016, ApJ, 819, 127 Ziegler, C., Tokovinin, A., Briceño, C., et al. 2020, AJ, 159, 19 1Observatoire de Genève, Département d’Astronomie, Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland 2Institut Trottier de recherche sur les exoplanètes, Département de Physique, Université de Montréal, Montréal, Québec, Canada 3Observatoire du Mont-Mégantic, Québec, Canada 4Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Campus Universitário, Natal, RN, 59072-970, Brazil 5Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal 6Departamento de Física e Astronomia, Faculdade de Ciências, Uni- versidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal 7Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France 8Department of Physics, University of Toronto, Toronto, ON M5S 3H4, Canada 9Department of Physics & Astronomy, McMaster University, 1280 Main St W, Hamilton, ON, L8S 4L8, Canada 10Department of Physics, McGill University, 3600 rue University, Montréal, QC, H3A 2T8, Canada 11Department of Earth & Planetary Sciences, McGill University, 3450 rue University, Montréal, QC, H3A 0E8, Canada 12Departamento de Física, Universidade Federal do Ceará, Caixa Postal 6030, Campus do Pici, Fortaleza, Brazil 13Centro de Astrobiología (CAB), CSIC-INTA, Camino Bajo del Castillo s/n, 28692, Villanueva de la Cañada (Madrid), Spain 14Centre Vie dans l’Univers, Faculté des sciences de l’Université de Genève, Quai Ernest-Ansermet 30, 1205 Geneva, Switzerland 15Instituto de Astrofísica de Canarias (IAC), Calle Vía Láctea s/n, 38205 La Laguna, Tenerife, Spain 16Departamento de Astrofísica, Universidad de La Laguna (ULL), 38206 La Laguna, Tenerife, Spain 17European Southern Observatory (ESO), Karl-Schwarzschild-Str. 2, 85748 Garching bei München, Germany 18Space Research and Planetary Sciences, Physics Institute, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland 19Consejo Superior de Investigaciones Científicas (CSIC), E-28006 Madrid, Spain 20Bishop’s Univeristy, Dept of Physics and Astronomy, Johnson-104E, 2600 College Street, Sherbrooke, QC, Canada, J1M 1Z7 21Department of Physics and Space Science, Royal Military College of Canada, PO Box 17000, Station Forces, Kingston, ON, Canada 22Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal 23Departamento de Física da Faculdade de Ciências da Universidade de Lisboa, Edifício C8, 1749-016 Lisboa, Portugal 24Centre of Optics, Photonics and Lasers, Université Laval, Québec, Canada 25Herzberg Astronomy and Astrophysics Research Centre, National Research Council of Canada 26Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France 27Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland 28Center for astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, United States 29NASA Exoplanet Science Institute, IPAC, California Institute of Technology, Pasadena, CA 91125 USA 30George Mason University, 4400 University Drive, Fairfax, VA, 22030 USA 31European Southern Observatory (ESO), Av. Alonso de Cordova 3107, Casilla 19001, Santiago de Chile, Chile 32Planétarium de Montréal, Espace pour la Vie, 4801 av. Pierre-de Coubertin, Montréal, Québec, Canada 33Lund Observatory, Division of Astrophysics, Department of Physics, Lund University, Box 118, 221 00 Lund, Sweden 34SETI Institute, Mountain View, CA 94043, USA NASA Ames Research Center, Moffett Field, CA 94035, USA 35York University, 4700 Keele St, North York, ON M3J 1P3 36Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany Article number, page 19 A&A proofs: manuscript no. aa55684-25corr 37University of British Columbia, 2329 West Mall, Vancouver, BC, Canada, V6T 1Z4 38Western University, Department of Physics & Astronomy and Institute for Earth and Space Exploration, 1151 Richmond Street, London, ON N6A 3K7, Canada 39Light Bridges S.L., Observatorio del Teide, Carretera del Observato- rio, s/n Guimar, 38500, Tenerife, Canarias, Spain 40University Observatory, Faculty of Physics, Ludwig-Maximilians- Universität München, Scheinerstr. 1, 81679 Munich, Germany 41Hamburger Sternwarte, Gojenbergsweg 112, D-21029 Hamburg, Germany 42Subaru Telescope, National Astronomical Observatory of Japan (NAOJ), 650 N Aohoku Place, Hilo, HI 96720, USA 43Department of Astronomy & Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637, USA 44Laboratoire Lagrange, Observatoire de la Côte d’Azur, CNRS, Université Côte d’Azur, Nice, France ∗e-mail: lena.parc@unige.ch Article number, page 20 L. Parc, et al.: The peculiar TOI-756 system Appendix A: TESS pixel file plots In this appendix, we show the TESS pixel file plots for all the sectors observed of TOI-756 (except Sector 10 shown in Fig. 1). 2020 2022 2024 2026 2028 2030 Pixel Column Number 1324 1326 1328 1330 1332 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 25 27 28 29 30 31 33 35 37 40 42 45 49 58 64 65 67 82 TIC 73649615 - Sector 11 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 Flux ×102 (e /s) 1312 1314 1316 1318 1320 Pixel Column Number 1196 1198 1200 1202 1204 1206 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 35 37 39 40 45 47 49 55 66 69 TIC 73649615 - Sector 37 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 Flux ×102 (e /s) 2068 2070 2072 2074 2076 Pixel Column Number 914 916 918 920 922 924 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 35 37 39 41 46 47 52 53 54 56 58 6667 TIC 73649615 - Sector 64 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 5.00 Flux ×102 (e /s) Fig. A.1. TESS TPF of TOI-756 created with tpfplotter (Aller et al. 2020). The orange pixels define the aperture mask used for extracting the photometry. Additionally, the red circles indicate neighboring objects from the Gaia DR3 catalog, with the circle size corresponding to the brightness difference compared to the target (as indicated in the legend). Our target is marked with a white cross. Pixel scale is 21′′/pixel. The co-moving companion of TOI-756 corresponds to the star labeled "2". Article number, page 21 A&A proofs: manuscript no. aa55684-25corr Appendix B: High-contrast imaging observations 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 Angular Separation (arcsec) 0 1 2 3 4 5 m 1" NACO/CONICA AO 0.0 0.5 1.0 1.5 2.0 2.5 3.0 arcsec 0 1 2 3 4 5 magnitude (I-band) 2 0 -2 [arcsec] -2 0 2 [arcsec] SOAR Speckle ACF TIC73649615 0.0 0.2 0.4 0.6 0.8 1.0 1.2 angular separation (arcsec) 0 1 2 3 4 5 6 7 m 562 nm 832 nm TOI-756 1" 832 nm 0.0 0.2 0.4 0.6 0.8 1.0 1.2 angular separation (arcsec) 0 1 2 3 4 5 6 7 m 562 nm 832 nm TOI-756 1" 832 nm Fig. B.1. Adaptive optics and speckle imaging plots for TOI-756 showing magnitude contrast in function of angular separation. Top left: NACO/CONICA@VLT. Top right: HRCam@SOAR. Bottom: Zorro@Gemini-South (left: March 12, 2020 ; right: July 05, 2023). For VLT and Gemini, the inset image is the primary target showing no additional close-in companions. For SOAR, the inset image shows the auto-correlation function. Appendix C: Photometric and radial velocity analysis Table C.1. Median values and 68% confidence intervals of the posterior distributions of the TESS-only fit. Parameter Prior Value Stellar parameters Stellar density, ρ∗(ρ⊙). . . . . . . . . . . N(5350, 230) 5291+156 −163 Parameters for TOI-756 b Orbital period, P (days). . . . . . . . . . N(1.23926, 0.1) 1.239250 ± 0.000001 Semi-major axis, a (AU) . . . . . . . . . - 0.0182 ± 0.0003 Transit epoch, T0 (BJD) . . . . . . . . . N(2458570.65, 0.1) 58570.65187+0.00067 −0.00073 Scaled planetary radius, RP/R∗. . . U(0, 1) 0.0486 ± 0.0013 Impact parameter, b . . . . . . . . . . . . . U(0, 1) 0.541+0.037 −0.047 Inclination, i (deg) . . . . . . . . . . . . . . - 85.92+0.39 −0.31 Eccentricity, e . . . . . . . . . . . . . . . . . . Fixed 0.0 Argument of periastron, ω (deg) . . Fixed 90.0 Limb darkening parameters Limb darkening parameter, q1,TESS N(0.792, 0.029) 0.795 ± 0.029 Limb darkening parameter, q2,TESS N(0.453, 0.022) 0.455 ± 0.021 Notes: N(µ, σ2) indicates a normal distribution with mean µ and variance σ2, U(a, b) a uniform distribution between a and b. Article number, page 22 L. Parc, et al.: The peculiar TOI-756 system Fig. C.1. TESS PDCSAP flux light curves of the four different sectors with the best-fit juliet model shown as a black line (see Sect. 5.2.3 for details on the modeling). Fig. C.2. LCO-CTIO light curve from the g′-band transit with the best-fit juliet model shown as a black line and model errors as gray area. Dark red circles are data binned to 10 min (see Sect. 5.2.3 for details on the modeling). We do not present the i′-band transit here, as no detrending was applied, making it identical to the one shown in Fig. 5. Article number, page 23 A&A proofs: manuscript no. aa55684-25corr Fig. C.3. ExTrA light curves of the four different transits, observed with two telescopes (left panel: first telescope; right panel: second telescope). From top to bottom, the first transit is complete, while the remaining three show only the egress. The best-fit juliet models are shown as black lines, with 1σ model uncertainties indicated by the gray shaded regions. Dark red circles represent the data binned to 10 minutes. The dashed vertical line define the transit midpoint. An arbitrary offset has been added between the transits for clarity. See Sect. 5.2.3 for details on the modeling. Article number, page 24 L. Parc, et al.: The peculiar TOI-756 system Table C.2. Median values and 68% confidence intervals of the posterior distri- butions of the joint fit. Parameter Prior Value Stellar parameters Stellar density, ρ∗(ρ⊙) . . . . . . . . . . . . . . . . N(5350, 230) 5298+164 −169 Fitted parameters for TOI-756 b Orbital period, Pb (days) . . . . . . . . . . . . . . N(1.23925, 0.00001) 1.23924949+0.00000068 −0.00000063 Transit epoch, T0,b (BJD) . . . . . . . . . . . . . N(2458570.652, 0.001) 2458570.65234+0.00035 −0.00037 r1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.70, 0.1) 0.726+0.012 −0.014 Scaled planetary radius, RP/R∗= r2 . . . . N(0.049, 0.01) 0.05113+0.00082 −0.00089 RV semi-amplitude, Kb (m/s) . . . . . . . . . U(0, 30) 9.22+1.70 −1.49 Eccentricity, eb . . . . . . . . . . . . . . . . . . . . . . Fixed 0.0 (adopted, 3σ < 0.51) Argument of periastron, ωb (deg) . . . . . . Fixed 90.0 Fitted parameters for TOI-756 c Orbital period, P (days). . . . . . . . . . . . . . . U(10, 300) 149.40+0.16 −0.17 Transit epoch, T0 (BJD) . . . . . . . . . . . . . . U(2460000, 2460500) 2460498.82+0.57 −0.52 RV semi amplitude, Kc (m/s) . . . . . . . . . U(100, 500) 273.29+2.56 −2.60 √ecsin(ωc) . . . . . . . . . . . . . . . . . . . . . . . . . . U(−1, 1) −0.141 ± 0.011 √eccos(ωc). . . . . . . . . . . . . . . . . . . . . . . . . . U(−1, 1) −0.652 ± 0.006 Fitted parameters for the linear trend RV slope (m/s/day) . . . . . . . . . . . . . . . . . . . U(−10, 10) 0.399 ± +0.014 RV intercept (m/s). . . . . . . . . . . . . . . . . . . . U(−300, 300) −181.58+105.59 −78.22 Instrumental photometric parameters Offset relative flux, MTESSS 10 (×10−4) . . N(0, 300) −0.1 ± 1.6 Jitter, σw,TESSS 10 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 12.2+58.3 −11.2 Offset relative flux, MTESSS 11 (×10−4) . . N(0, 300) 3.7+17.6 −13.1 Jitter, σw,TESSS 11 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 17.4+77.3 −15.8 Offset relative flux, MTESSS 37 (×10−4) . . N(0, 300) −2.15+0.65 −0.59 Jitter, σw,TESSS 37 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 1.13+13.13 −0.98 Offset relative flux, MTESSS 64 (×10−4) . . N(0, 300) −15.59+0.63 −0.64 Jitter, σw,TESSS 64 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 0.73+10.62 −0.66 Offset relative flux, MExTrA1T1 . . . . . . . . . N(0, 2) 0.16+0.46 −0.18 Jitter, σw,ExTrA1T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 2146+308 −348 Offset relative flux, MExTrA2T1 . . . . . . . . . N(0, 2) 0.005+0.24 −0.004 Jitter, σw,ExTrA2T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 1.97+45.03 −1.80 Offset relative flux, MExTrA3T1 . . . . . . . . . N(0, 2) 0.05+0.20 −0.12 Jitter, σw,ExTrA3T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 67.39+1286.35 −62.43 Offset relative flux, MExTrA4T1 . . . . . . . . . N(0, 2) 0.04+0.20 −0.05 Jitter, σw,ExTrA4T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 34.90+318.35 −32.80 Offset relative flux, MExTrA1T2 . . . . . . . . . N(0, 2) 0.32+0.38 −0.30 Jitter, σw,ExTrA1T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 0.94+8.48 −0.85 Offset relative flux, MExTrA2T2 . . . . . . . . . N(0, 2) 0.028+0.20 −0.06 Jitter, σw,ExTrA2T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 3.4+39.5 −3.1 Offset relative flux, MExTrA3T2 . . . . . . . . . N(0, 2) 0.002+0.098 −0.065 Jitter, σw,ExTrA3T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 1.84+19.27 −1.65 Offset relative flux, MExTrA4T2 . . . . . . . . . N(0, 2) −0.002+0.011 −0.020 Article number, page 25 A&A proofs: manuscript no. aa55684-25corr Table C.2. continued. Parameter Prior Value Jitter, σw,ExTrA4T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 1.66+34.90 −1.53 Offset relative flux, MLCO−i’ (×10−4) . . . N(0, 2000) 39.14+0.97 −0.96 Jitter, σw,LCO−i’ (ppm) . . . . . . . . . . . . . . . . logU(0.1, 1000) 4.2+40.4 −3.6 Offset relative flux, MLCO−g’ (×10−4) . . N(0, 2000) 230.5+10.1 −9.7 Jitter, σw,LCO−g’ (ppm) . . . . . . . . . . . . . . . logU(0.1, 1000) 34.6+309.8 −31.0 Instrumental RV parameters Systemic RV, µNIRPS (m/s) . . . . . . . . . . . . U(10000, 20000) 14783.20+79.74 −103.06 Jitter, σw,NIRPS (m/s) . . . . . . . . . . . . . . . . . logU(0.001, 100) 17.64 ± 2.16 Systemic RV, µHARPS (m/s) . . . . . . . . . . . U(10000, 20000) 14688.28+80.74 −102.61 Jitter, σw,HARPS (m/s) . . . . . . . . . . . . . . . . . logU(0.001, 100) 13.52+1.25 −1.14 GP/detrending parameters ρGP,TESSS 10 (days) . . . . . . . . . . . . . . . . . . . . . logU(0.001, 50) 0.62+0.66 −0.39 σGP,TESSS 10 (10−4 relative flux) . . . . . . . . . logU(10−2, 5 × 105) 6.63+1.49 −1.24 ρGP,TESSS 11 (days) . . . . . . . . . . . . . . . . . . . . . logU(0.001, 50) 19.00+9.90 −6.68 σGP,TESSS 11 (10−4 relative flux) . . . . . . . . . logU(10−2, 5 × 105) 18.8+12.9 +7.0 ρGP,ExTrA1T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 2.05+1.68 −1.01 σGP,ExTrA1T1 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 20.2+21.6 −10.1 ρGP,ExTrA2T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 0.06+0.98 −0.04 σGP,ExTrA2T1 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 1.2+27.2 −0.9 ρGP,ExTrA3T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 1.36+0.84 −0.53 σGP,ExTrA3T1 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 18.0+16.6 −9.2 ρGP,ExTrA4T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 0.41+0.54 −0.32 σGP,ExTrA4T1 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 8.2+16.2 −6.8 ρGP,ExTrA1T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 1.19+0.89 −0.61 σGP,ExTrA1T2 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 25.7+27.5 16.3 ρGP,ExTrA2T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 5.6+2.3 −2.0 σGP,ExTrA2T2 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 7.2+10.3 −3.3 ρGP,ExTrA3T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 1.02+1.49 −0.74 σGP,ExTrA3T2 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 12.5+22.5 −10.2 ρGP,ExTrA4T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 0.40+0.82 −0.25 σGP,ExTrA4T2 (10−2 relative flux) . . . . . . . . logU(10−4, 102) 1.6+5.0 −1.1 θ0,LCO−g’ (10−4 relative flux) . . . . . . . . . . . U(-106, 106) 129.3+7.9 −7.4 Limb darkening parameters q1,TESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.792, 0.029) 0.794+0.017 −0.019 q2,TESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.453, 0.022) 0.456+0.015 −0.016 q1,ExTrA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.779, 0.074) 0.744+0.038 −0.044 q2,ExTrA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.284, 0.028) 0.287+0.016 −0.017 q1,LCO-i’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.825, 0.023) 0.814 ± 0.016 q2,LCO-i’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.506, 0.030) 0.514+0.019 −0.017 q1,LCO-g’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.888, 0.011) 0.887+0.007 −0.006 q2,LCO-g’ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.664, 0.015) 0.671 ± 0.011 Notes: N(µ, σ2) indicates a normal distribution with mean µ and variance σ2, U(a, b) a uniform distribution between a and b and logU(a, b) a log-uniform distribution between a and b. Article number, page 26 L. Parc, et al.: The peculiar TOI-756 system Appendix D: Interior modeling of TOI-756 b AMF = 0.02+0.00 0.00 0.2 0.4 0.6 0.8 CMF CMF = 0.28+0.03 0.03 4 8 12 16 Fe/Mg AMF = 1.432+0.235 0.205 0.02 0.03 0.04 0.05 AMF 4 8 12 16 Fe/Si 0.2 0.4 0.6 0.8 CMF 4 8 12 16 Fe/Mg 4 8 12 16 Fe/Si CMF = 0.3+0.0 0.0 AMF = 0.79+0.10 0.10 0.20 0.25 0.30 0.35 0.40 CMF CMF = 0.27+0.03 0.03 1.2 1.6 2.0 2.4 Fe/Mg Fe/Mg = 1.40+0.23 0.20 0.60 0.75 0.90 AMF 0.8 1.2 1.6 2.0 2.4 Fe/Si 0.20 0.25 0.30 0.35 0.40 CMF 1.2 1.6 2.0 2.4 Fe/Mg 0.8 1.2 1.6 2.0 2.4 Fe/Si Fe/Si = 1.21+0.21 0.18 Fig. D.1. Corner plots from the interior modeling of TOI-756 b (see Sect.6.2.2). The top panel shows scenario (1), assuming a H/He envelope: results with uninformative priors are shown in grey, and those using stellar-informed priors based on the host star’s refractory abundances are in red. The bottom panel corresponds to scenario (2), assuming a pure H2O envelope with stellar-informed priors. Article number, page 27
Astronomy & Astrophysics manuscript no. aa55684-25corr October 17, 2025 NIRPS and TESS reveal a peculiar system around the M dwarf TOI-756: A transiting sub-Neptune and a cold eccentric giant Léna Parc1, ∗, François Bouchy1, Neil J. Cook2, Nolan Grieves1, Étienne Artigau2, 3, Alexandrine L'Heureux2, René Doyon2, 3, Yuri S. Messias2, 4, Frédérique Baron2, 3, Susana C. C. Barros5, 6, Björn Benneke2, Xavier Bonfils7, Marta Bryan8, Bruno L. Canto Martins4, Ryan Cloutier9, Nicolas B. Cowan10, 11, Daniel Brito de Freitas12, Jose Renan De Medeiros4, Xavier Delfosse7, Elisa Delgado-Mena13, 5, Xavier Dumusque1, David Ehrenreich1, 14, Pedro Figueira1, 5, Jonay I. González Hernández15, 16, David Lafrenière2, Izan de Castro Leão4, Christophe Lovis1, Lison Malo2, 3, Claudio Melo17, Lucile Mignon1, 7, Christoph Mordasini18, Francesco Pepe1, Rafael Rebolo15, 16, 19, Jason Rowe20, Nuno C. Santos5, 6, Damien Ségransan1, Alejandro Suárez Mascareño15, 16, Stéphane Udry1, Diana Valencia8, Gregg Wade21, Manuel Abreu22, 23, José L. A. Aguiar4, Khaled Al Moulla5, 1, Guillaume Allain24, Romain Allart2, Jose Manuel Almenara7, Tomy Arial3, Hugues Auger24, Luc Bazinet2, Nicolas Blind1, David Bohlender25, Isabelle Boisse26, Anne Boucher2, Vincent Bourrier1, , Sébastien Bovay1, Pedro Branco6, 5, Christopher Broeg18, 27, Denis Brousseau24, Alexandre Cabral22, 23, Charles Cadieux2, Andres Carmona7, Yann Carteret1, Zalpha Challita2, 26, David Charbonneau28, Bruno Chazelas1, Catherine A. Clark29, João Coelho22, 23, Marion Cointepas1, 7, Karen A. Collins28, Kevin I. Collins30, Uriel Conod1, Eduardo Cristo5, 6, Ana Rita Costa Silva5, 6, 1, Antoine Darveau-Bernier2, Laurie Dauplaise2, Jean-Baptiste Delisle1, Roseane de Lima Gomes2, 4, João Faria1, 5, Dasaev O. Fontinele4, Thierry Forveille7, Yolanda G. C. Frensch1, 31, Jonathan Gagné32, 2, Frédéric Genest2, Ludovic Genolet1, João Gomes da Silva5, Félix Gracia Témich15, Nicole Gromek9, Olivier Hernandez32, Melissa J. Hobson1, H. Jens Hoeijmakers33, 1, Norbert Hubin17, Marziye Jafariyazani34, Farbod Jahandar2, Ray Jayawardhana35, Hans-Ulrich Käufl17, Dan Kerley25, Johann Kolb17, Vigneshwaran Krishnamurthy10, Benjamin Kung1, Pierrot Lamontagne2, Pierre Larue7, Henry Leath1, Olivia Lim2, Gaspare Lo Curto31, Allan M. Martins4, 1, Elisabeth C. Matthews36, Jaymie Matthews37, Jean-Sébastien Mayer3, Stan Metchev38, Lina Messamah1, Leslie Moranta2, 32, Dany Mounzer1, , Nicola Nari39, 15, 16, Louise D. Nielsen1, 17, 40, Ares Osborn9, 7, Mathieu Ouellet3, Jon Otegi1, Luca Pasquini17, Vera M. Passegger15, 16, 41, 42, Stefan Pelletier1, 2, Céline Peroux17, Caroline Piaulet-Ghorayeb2, 43, Mykhaylo Plotnykov8, Emanuela Pompei31, Anne-Sophie Poulin-Girard24, José Luis Rasilla15, Vladimir Reshetov25, Jonathan Saint-Antoine2, 3, Mirsad Sarajlic18, Ivo Saviane31, Robin Schnell1, Alex Segovia1, Julia Seidel31, 44, 1, Armin Silber31, Peter Sinclair31, Michael Sordet1, Danuta Sosnowska1, Avidaan Srivastava2, 1, Atanas K. Stefanov15, 16, Márcio A. Teixeira4, Simon Thibault24, Philippe Vallée2, 3, Thomas Vandal2, Valentina Vaulato1, Joost P. Wardenier2, Bachar Wehbe22, 23, Drew Weisserman9, Ivan Wevers25, François Wildi1, Vincent Yariv7, Gérard Zins17 (Affiliations can be found after the references) Received 27 May 2025 / Accepted 25 July 2025 ABSTRACT Context. The Near InfraRed Planet Searcher (NIRPS) joined HARPS on the 3.6-m ESO telescope at La Silla Observatory in April 2023, dedicating part of its Guaranteed Time Observations (GTO) program to the radial velocity follow-up of TESS planet candidates to confirm and characterize transiting planets around M dwarfs. Aims. We present the "Sub-Neptunes" subprogram of the NIRPS-GTO, aimed at investigating the composition and formation of sub-Neptunes orbiting M dwarfs. We report the first results of this program with the characterization of the TOI-756 system, which consists of TOI-756 b, a transiting sub-Neptune candidate detected by TESS, as well as TOI-756 c, an additional non-transiting planet discovered by NIRPS and HARPS. Methods. We analyzed TESS and ground-based photometry, high-resolution imaging, and high-precision radial velocities (RVs) from NIRPS and HARPS to characterize the two newly discovered planets orbiting TOI-756, as well as to derive the fundamental properties of the host star. A dedicated approach was employed for the NIRPS RV extraction to mitigate telluric contamination, particularly when the star's systemic velocity was shown to overlap with the barycentric Earth radial velocity. Results. TOI-756 is a M1V-type star with an effective temperature of Teff ∼3657 K and a super-solar metallicity ([Fe/H]) of 0.20±0.03 dex. TOI-756 b is a 1.24-day period sub-Neptune with a radius of 2.81 ± 0.10 R⊕and a mass of 9.8+1.8 -1.6 M⊕. TOI-756 c is a cold eccentric (ec = 0.45 ± 0.01) giant planet orbiting with a period of 149.6 days around its star with a minimum mass of 4.05 ± 0.11 MJup. Additionally, a linear trend of 146 m s-1 yr-1 is visible in the radial velocities, hinting at a third component, possibly in the planetary or brown dwarf regime. Conclusions. We present the discovery and characterization of the transiting sub-Neptune TOI-756 b and the non-transiting eccentric cold giant TOI-756 c. This system is unique in the exoplanet landscape, standing as the first confirmed example of such a planetary architecture around an M dwarf. With a density of 2.42 ± 0.49 g cm-3, the inner planet, TOI-756 b, is a volatile-rich sub-Neptune. Assuming a pure H/He envelope, we inferred an atmospheric mass fraction of 0.023 and a core mass fraction of 0.27, which is well constrained by stellar refractory abundances derived from NIRPS spectra. It falls within the still poorly explored radius cliff and at the lower boundary of the Neptune desert, making it a prime target for a future atmospheric characterization with JWST to improve our understanding of this population. Key words. techniques: photometric - techniques: radial velocities - planets and satellites: detection - planets and satellites: composition - planets and satellites: formation - stars: low-mass Article number, page 1 16 Oct 2025 A&A proofs: manuscript no. aa55684-25corr Article number, page 2 L. Parc, et al.: The peculiar TOI-756 system 1. Introduction Since the discovery of a giant exoplanet orbiting 51 Pegasi (Mayor & Queloz 1995), nearly 5900 exoplanets have been detected1, showcasing an incredible variety of planetary systems and greatly enhancing our understanding of planet formation and evolution. Notably, space-based missions such as Kepler (Borucki et al. 2010) and TESS (Ricker et al. 2014) have revealed the prevalence of a population of exoplanets with sizes between Earth and Neptune, known as super-Earths and subNeptunes. This group of smaller planets (with radii between 1 and 4 R⊕) is not present in our Solar System, yet more than half of all Sun-like stars in the Galaxy are believed to host a subNeptune within 1 AU (e.g. Batalha et al. 2013; Petigura et al. 2013; Marcy et al. 2014). M dwarfs are the most abundant stars in our Galaxy (Henry et al. 2006; Winters et al. 2015; Reylé et al. 2021) and the search for exoplanets around these low-mass stars has gained significant attention in recent years. Indeed, they appear to have a high occurrence rate of planets, particularly of rocky planets and sub-Neptunes (Dressing & Charbonneau 2013; Bonfils et al. 2013; Dressing & Charbonneau 2015; Mulders et al. 2015; Gaidos et al. 2016; Mignon et al. 2025). In addition, exoplanets that transit M dwarfs are of particular interest, as their small size and low irradiation levels allow for easier detection of smaller and cooler planets, such as those located within the habitable zone of their star, than around larger and hotter stars. TESS was specially designed to be sensitive to these redder, cooler stars, but the relative faintness of M dwarfs in the visible spectrum has hindered a comprehensive characterization of the planetary systems via ground-based follow-ups. Indeed, the empirical population of known planets hosted by low-mass stars later than mid-K spectral type is smaller by nearly an order of magnitude than planets around Sun-like stars (Cloutier & Menou 2020). In particular, the PlanetS catalog2 (Parc et al. 2024; Otegi et al. 2020) of well-characterized planets (with precisions σ on inferred masses M and radius R of σM/M 10 km s-1) using orange dots. In this Figure 3, we plotted the relative velocity to the systemic velocity of TOI-756, which was found to be Vsyst ∼15.2 km s-1. This phenomenon also induces distortions in the stellar line profiles, which are evident in the systemic variations of different LBL indicators, such as D2V and ∆T (Artigau et al. 2024), as shown in Fig. 4. 5.2.2. Correction with removing affected lines in the LBL The advantage of using the line-by-line (LBL) technique to derive radial velocities and other spectral indicators lies in its ability to provide individual measurements for each spectral line across the entire spectrum. The observed discrepancy, where NIRPS RVs appear underestimated and then overestimated relative to the RV fit, is caused by the crossing of stellar absorption lines with atmospheric OH emission lines, primarily arising from excitation of rotational-vibrational modes of the OH molecule. To mitigate this effect, we utilized the HITRAN12 database (Gordon et al. 2022) to identify OH lines present within the NIRPS spectral range. We selected the 25% most intense OH lines and removed the LBL-derived measurements in a ±20 km s-1 window around these lines, accounting for the approximate width of both OH and stellar lines (∼10 km s-1 each). This process reduced the number of spectral lines used in the LBL analysis from 26,301 to 17,253, inevitably increasing the RV uncertainties. We then recomputed the final radial velocity and indicator values for each epoch using the same LBL method, which robustly averages the per-line values while downweighting outliers (Appendix B of Artigau et al. 2022). We applied this correction for all the NIRPS data for consistency. The corrected data are displayed as yellow dots in Figures 3 and 4. This correction successfully brought NIRPS RVs into agreement with HARPS and effectively removed the systematic distortions previously visible in the residuals of the keplerian fit and in the spectral indicators (Fig. 4). The corrected indicators now show consistent values across epochs, with no residual systematics during the Vsys-BERV crossing. Additionally, the root mean square (RMS) of the residuals and the indicators, displayed in the same figure, demonstrates a significant decrease after correction. We applied the same technique to the mask used to derived the NIRPS DRS cross-correlation function (CCF) data, using the OH line list from the DRS telluric correction module. While this also mitigated the effect, the CCF method relies on significantly fewer spectral lines than the LBL approach, leading to much larger error bars after correction. Consequently, we opted to retain the LBL-derived values for our analysis. 12 https://hitran.org/ Fig. 5. Top panel: Phase-folded TESS, ExTrA, and LCO-CTIO light curves of TOI-756 b (gray points). Dark red circles are data binned to 10 min. The black lines represent the median model of each instrument from the joint fit. Bottom panel: Residuals of the data compared to the model. An arbitrary offset has been added to the ground-based photometry for clarity. 5.2.3. Radial velocity analysis and joint modeling with juliet juliet was also used to model these RV datasets. We used a two-planet plus a linear trend model. At first, we fixed the eccentricity of the TESS planet TOI-756 b to zero. We accounted for the evident eccentricity of the outer companion by fitting for the parameters √ecos(ω), √esin(ω), as implemented in juliet. This parametrization has been shown to improve the exploration of the eccentricity-argument of periastron parameter Article number, page 10 L. Parc, et al.: The peculiar TOI-756 system Table 4. Posterior planetary parameters of the different RV fits. NIRPS corrected + HARPS NIRPS unaffected + HARPS HARPS-only NIRPS corrected-only Kb m s-1 9.4+2.3 -2.5 8.4 ± 2.2 10.0+3.1 -3.2 9.2+3.9 -3.8 Pc (days) 149.66+0.28 -0.26 149.61 ± 0.20 149.36+0.38 -0.37 149.92+0.36 -0.39 T0,c (RJD) 60350.27+0.69 -0.68 60350.29+0.52 -0.53 60199.80+0.70 -0.68 60201.59+0.78 -0.81 Kc (m/s) 272.6+3.6 -3.4 273.3+2.5 -2.6 275.7+5.0 -4.9 268.9 ± 5.1 ec 0.46 ± 0.01 0.46 ± 0.01 0.44 ± 0.01 0.48+0.02 -0.01 ωc (◦) -167.9 ± 1.3 -166.6+1.2 -1.3 -166.7 ± 1.7 -169.24 ± 1.9 Acceleration m s-1 yr-1 144.6 ± 5.8 148.3+3.7 -4.0 145.7+8.0 -8.4 145.4+8.0 -8.8 space (Lucy & Sweeney 1971). We used the period and transit epoch results of TOI-756 b of the photometry fit (Sect. 5.1) as priors for the RV fits. We compared several analyses: (1) NIRPS data corrected using the method described in Sect. 5.2.2 combined with HARPS data; (2) uncorrected NIRPS data with the affected points removed, also combined with HARPS data; (3) HARPS data only; and (4) corrected NIRPS data only. We present the posteriors of the main changing planetary parameters of the different fits in Table 4. We did not put the period and transit epoch of planet b in this table because they are very similar for these analyses since they are highly constrained by the photometry fit priors. The resulting parameters exhibit good consistency within 1σ for all our different analyses. Since the NIRPS corrected data combined with the HARPS data are fully consistent with the other fits and included all RV points, we choose this dataset as the final one. We do a joint fit of the RVs and photometry with juliet: NIRPS corrected RVs, HARPS RVs, TESS, LCO-CTIO and ExTrA. In order to prevent any potential Lucy-Sweeney bias in the eccentricity measurement (Lucy & Sweeney 1971; Hara et al. 2019), we fixed the orbital eccentricity of the planet b to zero. However, to explore the possibility of non-circular orbit, we ran a separate analysis without any constraints on the eccentricity. The logarithmic evidence for the eccentric model is lower than for the circular one (55,557 vs. 55,580), and the fitted jitter values for both HARPS and NIRPS RVs are higher in the eccentric case, further supporting the preference for the circular model. The fitted eccentricity for TOI-756 b is eb = 0.096+0.092 -0.067. Therefore, the condition e > 2.45 σe (Lucy & Sweeney 1971) is not satisfied, which suggests that the RV data are compatible with a circular orbit. For now, the current data do not provide sufficient precision to draw a firm conclusion regarding the orbital eccentricity. Additionally, in Table 5, we show that the 3-σ upper limit on the eccentricity for TOI-756 b is 0.51. The fitted and derived parameters for TOI-756 b and TOI-756 c are presented in Table 5. The priors and posteriors of the joint fit can be found in Table C.2. Fig.5 shows the phase-folded light-curves of the photometry fit. Fig.3 shows RV data together with the resulting model from the joint fit and Fig. 6 the phase-folded RV curves for the two planets. We searched for a possible transit of planet c in the TESS data by phase-folding the light curve using its orbital period and time of conjunction. Although TESS observations cover the expected transit window, no transit signal is visible in the data, allowing us to exclude a transiting configuration. Furthermore, assuming coplanar orbits aligned with the inclination of TOI 756 b (85.5◦), we estimated the expected impact parameter of the outer planet based on its semi-major axis and the stellar radius. We obtained bc = 14.3 ± 0.7, a value well above 1. 6. Discussion We present the discovery and characterization of the transiting sub-Neptune TOI-756 b and the non-transiting eccentric cold giant TOI-756 c, both orbiting the M1V star TOI-756. TOI-756 b has an orbital period of 1.24 days, a radius of 2.81 ± 0.10 R⊕and a mass of 9.8 ± 1.7 M⊕. The outer companion, TOI-756 c, follows an eccentric (0.45) 149-day orbit and has a minimum mass of 4.05 ± 0.11 MJup. Using the stellar parameters (Table 1), we determine the semi-major axes of TOI-756 b and TOI-756 c to be 0.0180 ± 0.0002 au and 0.439 ± 0.005 au, respectively. Assuming zero albedo and full heat redistribution, the equilibrium temperature of TOI-756 b is 934 ± 24 K, with a stellar insolation of 127 ± 13 S⊕. For TOI-756 c, we estimate an equilibrium temperature of 194 ± 5 K and a stellar insolation of 0.24 ± 0.02 S⊕ averaged along the eccentric orbit. In addition, the RVs present an acceleration of 145.6 ± 5.2 m s-1 yr-1 hinting at an additional component in the system. 6.1. NIRPS + HARPS performances The characterization of this system was made possible by TESS and ground-based facilities for the photometric analysis of the inner transiting planet, as well as by the combination of HARPS and NIRPS for RV follow-up. The synergy between these two spectrographs enabled us to precisely characterize an early-M with a peculiar planetary-system configuration. The benefit of this combination is evident in Table 4: using HARPS (NIRPS) alone, the semi-amplitude of TOI-756 b is determined with a precision of 31% (42%), whereas combining HARPS and NIRPS improves this to 17% in the joint RV and photometry fit. All other fitted parameters also benefit from improved precision thanks to this joint analysis. This study highlights the added value of NIRPS (to HARPS) in characterizing a low-mass planet around a faint M dwarf (V = 14.6, J = 11.1), compared to typical radial velocity targets. Independently, the two instruments have similar median photon noises. The median photon noise is 5.5 m s-1 for HARPS and 15.4 m s-1 for NIRPS, but the latter having increased from 8.4 m s-1 due to the removal of affected lines during the LBL computation (see Sect. 5.2.2). The fitted jitter values for both instruments are similar, around 15 m s-1, which matches the photon noise for NIRPS but is elevated compared to HARPS photon noise. This suggests the presence of atmospheric residuals or enhanced stellar activity in the optical range. Given the low S/N regime in which HARPS is operating, increased background sky contamination and possible interference from the Moon are to be expected. At such low S/N, the LBL method is also likely to underestimate the uncertainties associated with the derived RVs. Article number, page 11 A&A proofs: manuscript no. aa55684-25corr Fig. 6. Phase-folded RVs with the resulting model and its residuals for TOI-756 b (left) and TOI-756 c (right). In red dots, binned data combining HARPS (blue dots) and NIRPS (yellow dots). The error-bars in light gray account for the fitted jitters. Table 5. Fitted and derived parameters for TOI-756 b and TOI-756 c. Parameter TOI-756 b TOI-756 c Orbital period, Porb (days) . . . . . . . . 1.2392495 ± 0.0000007 149.40 ± 0.16 Time of conjunction, T0 (RJD) . . . . 58570.65234+0.00035 -0.00037 60498.882+0.57 -0.52 Planet radius, Rp (R⊕) . . . . . . . . . . . . 2.81 ± 0.10 - Planet mass, Mp (M⊕) . . . . . . . . . . . . 9.83+1.8 -1.6 - Planet min. mass, Mp sin(i) (MJup) - 4.05 ± 0.11 Planet density, ρp (g cm-3) . . . . . . . 2.42+0.53 -0.45 - RV semi-amplitude, (m s-1) . . . . . . 9.2+1.7 -1.5 273.3 ± 2.6 Orbital inclination, i (◦) . . . . . . . . . . 85.53+0.19 -0.18 - Scaled planetary radius, RP/R∗. . . . 0.05113+0.0008 -0.0009 - Impact parameter, b . . . . . . . . . . . . . . 0.589+0.018 -0.021 - Semi-major axis, a (au). . . . . . . . . . . 0.0180 ± 0.0002 0.439 ± 0.005 Eccentricity . . . . . . . . . . . . . . . . . . . . 0 ( 4 R⊕or Mp sin(i) > 20 M⊕. We plotted in Fig. 8 the radii and orbital periods of the sub-Neptunes of these systems. We found 13 systems but none are orbiting an M dwarf. TOI-756 is currently the only confirmed system with a transiting sub-Neptune and a cold giant orbiting an M dwarf. This remains true even if we remove all constraints on the periods of the inner and outer planets. An additional but unconfirmed system with this peculiar architecture has been identified: the K243 system. K2-43 c is a sub-Neptune (Rp = 2.4 R⊕, P = 2.2 d ; Hedges et al. 2019), and more recently a single transit event with a depth corresponding to a Jupiter-sized planet has been detected in the TESS data (TOI-5523.01). This system adds up to the small sample of the recent study of Bryan & Lee (2025), investigating the stellar mass and metallicity trends for small planets with a gas giant companion. They found a higher gas giant frequency around metal-rich M dwarfs for both samples (with gas giant (GG) or with gas giant plus small planet (GG|SE)), but they find no significant difference in gas giant occurrence rate between P(GG) and P(GG|SE). While they find no significant correlation between small planets and 13 https://exoplanetarchive.ipac.caltech.edu/ Article number, page 14 L. Parc, et al.: The peculiar TOI-756 system outer gas giants around M dwarfs, previous work has found a significant positive correlation between these planet populations around more massive stars that are metal-rich: Bryan & Lee (2024) and Chachan & Lee (2023) hypothesized that this positive correlation should persist and may even strengthen for lower-mass stars. This follows the well known metallicity-giant planet correlation seen for FGK stars (e.g., Sousa et al. 2021) and M dwarfs (e.g., Neves et al. 2013). We are offering an additional system around a metal-rich M dwarf to address a largely underexplored parameter space, aiding studies that investigate the correlations and occurrence rates of specific populations in relation to stellar parameters. In addition to being a unique multi-planet system, TOI-756 is an M dwarf hosting a rare giant component. Planet of and above Jupiter's mass are remarkably rare around M dwarfs. Core-accretion theory predicts that giant planets should be less common around M dwarfs than around FGK-type stars, primarily due to the lower surface density of solids and longer formation timescales in protoplanetary disks around low-mass stars (e.g., Laughlin et al. 2004; Ida & Lin 2005). This trend is supported by recent population synthesis models, which not only confirm the low occurrence rate of giant planets in such environments but also suggest it may drop to nearly zero for host stars with masses between 0.1 and 0.3 M⊙(Burn et al. 2021). They generally form in the outer region of the disk beyond the ice line (Alexander & Pascucci 2012; Bitsch et al. 2015), where there is more material for them to form, but we are biased against detecting them with the transit method as transit probability decreases at long orbital periods. This probability is even lower around M dwarfs since they are small stars. However, RV campaigns will certainly provide more of these outer companions to small transiting planets, but also confirm giant TESS candidates. The RV follow-up of TESS giant planet candidates is another one of the subprograms of SP2 of the NIRPS-GTO, thanks to the unique sensitivity of NIRPS in the infrared, which allows us to characterize such planets around host stars with J 0.1). We highlighted these systems in light blue in Fig. 8. Systems represented by crosses consist solely of a sub-Neptune accompanied by one or more giant planets, whereas systems shown as circles include both small and giant planets in addition to the sub-Neptune. In addition, we emphasize three systems that share strong similarities with the TOI-756 system: TOI4010 (Kunimoto et al. 2023), TOI-969 (Lillo-Box et al. 2023), and Kepler-94 (Weiss et al. 2024). All three systems consist of a sub-Neptune located near the lower boundary of the Neptune desert, accompanied by a giant planet with an orbital period exceeding 100 days and a non-zero eccentricity. Regarding this class of systems, Bitsch & Izidoro (2023) used N-body simulations that combine pebble and gas accretion with planetary migration. They found that systems hosting outer giant planets tend to produce more systems with predominantly a single inner planet and exhibit higher eccentricities for all planets, compared to simulations without outer giants. In addition, unstable systems (with high eccentricities) mostly host only one inner sub-Neptune (and for most systems, this inner planet is transiting). Additional observations of TOI-756 to precisely constrain the eccentricity of TOI-756 b could be a good test case of these results considering the large eccentricity of the planet c. 0 1 2 3 4 5 6 7 8 9 Semi-major axis [au] 100 101 102 103 Companion Mass [Mjup] TOI-756 mass BD Star Planet TOI-756 c NIRPS+HARPS lower mass limit Gaia DR4 detection limits Excluded regions Fig. 9. Limits on the companion mass of TOI-756 system as a function of semi-major axis. The lower limit, indicated by the solid black line, is calculated based on the RV linear trend. The excluded gray areas include this constraint of the RV linear trend, the timespan of observations of the RV (left rectangle), the limit from high-contrast imaging (upper right rectangle), and the absence of a double peak in the CCF (upper rectangle). We plotted the Gaia DR4 astrometric detection limits as a solid dark blue line for a star at 86 pc with a RUWE of 1.25 (Wallace et al. 2025). The dotted black lines are the different mass limit categories separating planetary, brown dwarf (BD), and stellar natures. Here again, Bitsch & Izidoro (2023) predicted that systems with truly single close-in planets are more likely to host outer gas giants. Conversely, Schlecker et al. (2021) predicted that planetary systems around stars with high metallicity frequently contain warm and dynamically active giant planets that can disrupt inner planetary systems and then are less likely to harbor inner small planets. The RV follow up of transiting close-in planets by the NIRPS-GTO SP2 will help to test these predictions by planetary formation models. 6.3.2. Constraints on the RV acceleration In addition to the sub-Neptune and the eccentric giant planet, NIRPS and HARPS have revealed an acceleration in the RV of TOI-756. To determine if the wide binary WT 352 could, plausibly, be responsible for the acceleration, we used the following equation (Torres 1999): Mcomp = 5.34 × 10-6M⊙ d pc ρ arcsec !2 ̇v m s-1 yr-1 F(i, e, ω, φ) (1) where d is the distance to the system, ρ is the projected separation of the companion on the sky, and ̇v is the best-fit RV trend. F(i, e, ω, φ) is an equation that depends on the unknown orbital parameters of the companion and has a minimum value of √ 27/2, which we use in our calculations here. We convert the projected separation on the sky to a minimum semi-major axis using the Gaia DR3 distance of TOI-756. With an acceleration of 145.6 m s-1 yr-1 and the separation of 11.09 arcsec, we found a mass of 1857 M⊙. We therefore conclude that the co-moving companion (a ∼M3/4V star with M⋆∼0.3 M⊙) cannot be responsible for the trend in this system. We again used Eq. 1 to draw the black curve in Fig. 9 representing the lower mass limit permitted by the RV trend as a function of the semi-major axis. Article number, page 15 A&A proofs: manuscript no. aa55684-25corr We see that we cannot exclude a planetary nature for this additional companion. In addition, we can exclude the left rectangle of the figure corresponding to our time-span of observations of 480 days. An additional constraint on the mass of this companion comes from high-resolution imaging of TOI-756, which rules out companions more than 5 magnitudes fainter (down to approximately an M5.5/6V star, around 0.11 M⊙) at a separation of 0.1 arcseconds (∼8.6 au) (see Sect. 3.4). Moreover, the RV measurements do not show any evidence of a blended companion of a mass similar to the primary with no increased contrast, FWHM, or bisector deviations of the CCFs compared to a similar single star (e.g., GL514, the star used for the LBL template). We expect to detect a second peak in the CCF with a contrast up to 10 times smaller (∼3-5%) than that of the primary peak, corresponding to a companion with a magnitude difference of about 2.5. Given that TOI-756 is an M1 star, this sensitivity would allow us to detect a companion as late as M4.5-M5 (∼0.16-0.18 M⊙). As a result, we can exclude the presence of early M-type stellar companions as the source of the observed RV trend (see Fig. 9). Moreover, in Fig. 9, we plotted the expected Gaia DR4 detection limits at the distance of TOI-756 and for a RUWE of 1.25 (RUWETOI-756 = 1.24) from Wallace et al. (2025). We can see that Gaia will be capable of resolving the orbit and parameters of almost any object within the parameter space for this companion. In addition, we are still monitoring TOI-756 system, with NIRPS and HARPS to further constrain this additional component. The combination of radial velocity and astrometric data will be crucial for a precise characterization of the full system. In particular, it may enable us to resolve the orbits of both TOI-756 c and the additional companion, allowing us to measure their mutual inclination and gain insight into the formation, evolution, and dynamical history of this rare system orbiting a low-mass star. 6.3.3. The binarity effect on formation and evolution The presence of the widely separated stellar companion at ∼11" raises important questions about its influence on the formation and dynamical evolution of planets within the TOI-756 system. Binarity is known to truncate the circumstellar disk, shorten its lifetime, and reduce planet occurrence rates (Cieza et al. 2009; Harris et al. 2012; Moe & Kratter 2021). While the wide physical separation between the two stars suggests that the stellar companion had a limited direct impact on the protoplanetary disk of TOI-756, studies have shown that binary companions can still influence planet formation and evolution even at separations up to 1000 au, potentially hindering the formation of massive planetary cores (Sullivan et al. 2023). On the other hand, recent studies (Sullivan et al. 2024) show that the radius gap in wide binaries (separation > 300 au) appears to be shifted toward smaller radii. This suggests that the presence of a stellar companion can influence disk conditions and, consequently, the formation and evolution of planets. However, despite expectations that a stellar companion at this distance could have a noticeable gravitational effect, WT 352 is significantly less massive (M⋆∼0.3 M⊙) than the primary and is located at a separation that could exceed 1000 au. In this case, the companion did not inhibit the formation of giant planets or sub-stellar objects around the primary star. This suggests that the gravitational effect of a companion depends not only on its separation, but also on its mass relative to the primary star. Furthermore, while upcoming Gaia DR4 data will refine astrometric measurements, it is unlikely to provide significant new constraints on the orbital parameters of the binary, given that its expected orbital period is on the order of ∼40,000 years - too long for measurable motion within Gaia's observational timeline (El-Badry et al. 2024). Recently, Behmard et al. (2022); Christian et al. (2022) investigated the potential alignment between the orbital planes of planetary systems and their visual binary companions. The TOI756 system, along with its wide binary companion WT 352, is included in their sample. Christian et al. (2022) reported a significant misalignment in this system, with a mutual inclination of i = 118+34 -17 degrees (5th to 95th percentile range). While their analysis reveals an excess of aligned systems among binaries with separations less than 700 au, they find that the distribution of mutual inclinations becomes consistent with uniformity for wider binaries (a > 700 au). This could account for the observed misalignment in TOI-756, given its projected binary separation of approximately 955 au. 7. Conclusions We present the "Sub-Neptunes" subprogram of the NIRPS-GTO SP2 program, which aims to improve our understanding of the diversity in composition and internal structure of small planets around M dwarfs. By enabling the study of targets hosting TESS sub-Neptune candidates with V < 15.4, NIRPS (the red arm of HARPS) expands the reach of RV follow-up beyond the traditional limits of optical spectrographs installed on 4-meter-class telescopes. We report the first results of the RV follow-up program of the NIRPS-GTO, presenting the characterization of a two-planet system orbiting the M-dwarf TOI-756, which is the primary component of a wide binary system with the star WT 352. TOI756 b was initially identified by TESS and subsequently confirmed with the ground-based photometry facilities LCO-CTIO and ExTrA, as well with RV measurements obtained with NIRPS and HARPS, which enabled the determination of its mass. Additionally, NIRPS and HARPS allowed the identification of a second non-transiting planet in the system TOI-756 c, as well as an supplementary RV acceleration hinting at an extra third component in the system that could be planetary as well. TOI-756 b is a sub-Neptune with a radius of 2.81 R⊕and a mass of 9.8 M⊕ orbiting with a period of 1.24 days around its star. TOI-756 c is a cold eccentric giant planet orbiting at 149 days with a minimum mass of 4.05 MJup and an eccentricity of 0.45. TOI-756 b, with a density of 2.42 g.cm-3, is in line with the recently identified trend of low-density sub-Neptunes around M dwarfs compared to FGK dwarfs. In addition, this peculiar target lies in the radius cliff of M-dwarf planets and in the Neptune desert. TOI-756 b most likely requires a certain amount of hydrogen and helium in its atmosphere to account for its observed density, either as a pure H/He envelope or as a mixture of supercritical H2O and H/He. The TOI-756 system is particularly unique, as it is the only known confirmed system hosting both a transiting sub-Neptune and an outer giant planet around an M dwarf. This makes it a valuable case for comparison with planet formation and evolution models, as well as for studying correlations between planetary populations and stellar parameters such as stellar mass and metallicity. In addition, TOI-756 c enhances the small population of giant planets around M dwarfs, a population whose formation mechanisms are still not fully understood. Identifying more of these planets is vital for constraining our models of their formation and evolutionary processes, providing deeper insights into Article number, page 16 L. Parc, et al.: The peculiar TOI-756 system the pathways that shape such systems. The astrometric measurements from the Gaia DR4 release will be key to combine with the RVs to further characterize this unique system. TOI-756 b is also a promising candidate for future atmospheric characterization through transmission spectroscopy with JWST, which could help confirm or rule out the presence of a primordial H/He-dominated atmosphere. It also offers an opportunity to test hypotheses regarding the radius cliff and the Neptune desert population and to constrain the formation and evolution models of small planets orbiting alongside an eccentric outer companion. In this study, we demonstrate the capabilities of the unique NIRPS and HARPS combination to obtain precise RVs of M dwarfs, enabling the confirmation and characterization of candidates detected by current photometric surveys such as TESS, as well as upcoming missions like PLATO (Rauer et al. 2014). Acknowledgements. We thank the anonymous referee for their valuable comments, which helped improve the manuscript. This work has been carried out within the framework of the NCCR PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. NJC, ÉA, AL, RD, FBa, BB, LMa, RA, LB, AB, CC, AD-B, LD, PLam, OL, LMo, JS-A, PV, TV & JPW acknowledge the financial support of the FRQ-NT through the Centre de recherche en astrophysique du Québec as well as the support from the Trottier Family Foundation and the Trottier Institute for Research on Exoplanets. ÉA, RD, FBa, LMa, TA, J-SM, MO, JS-A & PV acknowledges support from Canada Foundation for Innovation (CFI) program, the Université de Montréal and Université Laval, the Canada Economic Development (CED) program and the Ministere of Economy, Innovation and Energy (MEIE). AL acknowledges support from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies under file #349961. The Board of Observational and Instrumental Astronomy (NAOS) at the Federal 's research activities are supported by continuous grants from the Brazilian funding agency CNPq. This study was partially funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES) - Finance Code 001 and the CAPES-Print program. 0 SCB, ED-M, NCS, EC, ARCS & JGd acknowledge the support from FCT - Fundação para a Ciência e a Tecnologia through national funds by these grants: UIDB/04434/2020, UIDP/04434/2020. Co-funded by the European Union (ERC, FIERCE, 101052347). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. SCB acknowledges the support from Fundação para a Ciência e Tecnologia (FCT) in the form of a work contract through the Scientific Employment Incentive program with reference 2023.06687.CEECIND and DOI 10.54499/2023.06687.CEECIND/CP2839/CT0002. XB, XDe, ACar, TF & VY acknowledge funding from the French ANR under contract number ANR18CE310019 (SPlaSH), and the French National Research Agency in the framework of the Investissements d'Avenir program (ANR-15IDEX-02), through the funding of the "Origin of Life" project of the GrenobleAlpes University. BLCM & AMM acknowledge CAPES postdoctoral fellowships. BLCM acknowledges CNPq research fellowships (Grant No. 305804/2022-7). NBC acknowledges support from an NSERC Discovery Grant, a Canada Research Chair, and an Arthur B. McDonald Fellowship, and thanks the Trottier Space Institute for its financial support and dynamic intellectual environment. DBF acknowledges financial support from the Brazilian agency CNPq-PQ (Grant No. 305566/2021-0). Continuous grants from the Brazilian agency CNPq support the STELLAR TEAM of the Federal 's research activities. JRM acknowledges CNPq research fellowships (Grant No. 308928/2019-9). ED-M further acknowledges the support from FCT through Stimulus FCT contract 2021.01294.CEECIND. ED-M acknowledges the support by the Ramón y Cajal contract RyC2022-035854-I funded by MICIU/AEI/10.13039/501100011033 and by ESF+. XDu acknowledges the support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement SCORE No 851555) and from the Swiss National Science Foundation under the grant SPECTRE (No 200021_215200). DE acknowledge support from the Swiss National Science Foundation for project 200021_200726. The authors acknowledge the financial support of the SNSF. JIGH, RR, ASM, FGT, NN, VMP, JLR & AKS acknowledge financial support from the Spanish Ministry of Science, Innovation and Universities (MICIU) projects PID2020-117493GB-I00 and PID2023-149982NB-I00. ICL acknowledges CNPq research fellowships (Grant No. 313103/2022-4). CMo acknowledges the funding from the Swiss National Science Foundation under grant 200021_204847 "PlanetsInTime". KAM acknowledges support from the Swiss National Science Foundation (SNSF) under the Postdoc Mobility grant P500PT_230225. RA acknowledges the Swiss National Science Foundation (SNSF) support under the Post-Doc Mobility grant P500PT_222212 and the support of the Institut Trottier de Recherche sur les Exoplanètes (IREx). We acknowledge funding from the European Research Council under the ERC Grant Agreement n. 337591-ExTrA. LB acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Spice Dune, grant agreement No 947634). This material reflects only the authors' views and the Commission is not liable for any use that may be made of the information contained therein. ARCS acknowledges the support from Fundaçao para a Ciência e a Tecnologia (FCT) through the fellowship 2021.07856.BD. LD acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies. FG acknowledges support from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies under file #350366. H.J.H. acknowledges funding from eSSENCE (grant number eSSENCE@LU 9:3), the Swedish National Research Council (project number 2023-05307), The Crafoord foundation and the Royal Physiographic Society of Lund, through The Fund of the Walter Gyllenberg Foundation. LMo acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number 589653]. NN acknowledges financial support by Light Bridges S.L, Las Palmas de Gran Canaria. NN acknowledges funding from Light Bridges for the Doctoral Thesis "Habitable Earth-like planets with ESPRESSO and NIRPS", in cooperation with the Instituto de Astrofísica de Canarias, and the use of Indefeasible Computer Rights (ICR) being commissioned at the ASTRO POC project in the Island of Tenerife, Canary Islands (Spain). The ICR-ASTRONOMY used for his research was provided by Light Bridges in cooperation with Hewlett Packard Enterprise (HPE). CPi acknowledges support from the NSERC Vanier scholarship, and the Trottier Family Foundation. CPi also acknowledges support from the E. Margaret Burbidge Prize Postdoctoral Fellowship from the Brinson Foundation. AKS acknowledges financial support from La Caixa Foundation (ID 100010434) under the grant LCF/BQ/DI23/11990071. TV acknowledges support from the Fonds de recherche du Québec (FRQ) - Secteur Nature et technologies under file #320056. KaC acknowledges support from the TESS mission via subaward s3449 from MIT. Funding for the TESS mission is provided by NASA's Science Mission Directorate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. This research has made use of the Exoplanet Follow-up Observation Program (ExoFOP; website, which is operated by the California - tration under the Exoplanet Exploration Program. This paper includes data collected by the TESS mission that are publicly available from the Mikulski Archive for Space Telescopes (MAST). This work makes use of observations from the LCOGT network. Part of the LCOGT telescope time was granted by NOIRLab through the Mid-Scale Innovations Program (MSIP). MSIP is funded by NSF. This research has made use of the NASA Exoplanet Archive, which is operated by the California - nautics and Space Administration under the Exoplanet Exploration Program. References Aguichine, A., Batalha, N., Fortney, J. J., et al. 2025, ApJ, 988, 186 Aguichine, A., Mousis, O., Deleuil, M., & Marcq, E. 2021, ApJ, 914, 84 Alexander, R. D. & Pascucci, I. 2012, MNRAS, 422, L82 Alibert, Y. & Benz, W. 2017, A&A, 598, L5 Allard, F., Homeier, D., & Freytag, B. 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2765 Allart, R., Lovis, C., Faria, J., et al. 2022, A&A, 666, A196 Article number, page 17 A&A proofs: manuscript no. aa55684-25corr Aller, A., Lillo-Box, J., Jones, D., Miranda, L. F., & Barceló Forteza, S. 2020, A&A, 635, A128 Ambikasaran, S., Foreman-Mackey, D., Greengard, L., Hogg, D. W., & O'Neil, M. 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38, 252 Anglada-Escudé, G. & Butler, R. P. 2012, ApJS, 200, 15 Antoniadis-Karnavas, A., Sousa, S. G., Delgado-Mena, E., Santos, N. C., & Andreasen, D. T. 2024, A&A, 690, A58 Antoniadis-Karnavas, A., Sousa, S. G., Delgado-Mena, E., et al. 2020, A&A, 636, A9 Armstrong, D. J., Lopez, T. A., Adibekyan, V., et al. 2020, Nature, 583, 39 Artigau, É., Cadieux, C., Cook, N. J., et al. 2024, AJ, 168, 252 Artigau, É., Cadieux, C., Cook, N. J., et al. 2022, AJ, 164, 84 Astudillo-Defru, N., Díaz, R. F., Bonfils, X., et al. 2017, A&A, 605, L11 Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, A&A, 337, 403 Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, A&A, 577, A42 Batalha, N. E., Lewis, T., Fortney, J. J., et al. 2019, ApJ, 885, L25 Batalha, N. M., Rowe, J. F., Bryson, S. T., et al. 2013, ApJS, 204, 24 Bayo, A., Rodrigo, C., Barrado Y Navascués, D., et al. 2008, A&A, 492, 277 Behmard, A., Dai, F., & Howard, A. W. 2022, AJ, 163, 160 Benneke, B., Roy, P.-A., Coulombe, L.-P., et al. 2024, arXiv e-prints, Bensby, T., Feltzing, S., & Oey, M. S. 2014, A&A, 562, A71 Bertaux, J. L., Lallement, R., Ferron, S., Boonne, C., & Bodichon, R. 2014, A&A, 564, A46 Bitsch, B. & Izidoro, A. 2023, A&A, 674, A178 Bitsch, B., Lambrechts, M., & Johansen, A. 2015, A&A, 582, A112 Bitsch, B., Raymond, S. N., Buchhave, L. A., et al. 2021, A&A, 649, L5 Bonfils, X., Almenara, J. M., Jocou, L., et al. 2015, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9605, Techniques and Instrumentation for Detection of Exoplanets VII, ed. S. Shaklan, 96051L Bonfils, X., Delfosse, X., Udry, S., et al. 2013, A&A, 549, A109 Borucki, W. J., Koch, D., Basri, G., et al. 2010, Science, 327, 977 Borucki, W. J., Koch, D. G., Basri, G., et al. 2011, ApJ, 736, 19 Bouchy, F., Doyon, R., Artigau, É., et al. 2017, The Messenger, 169, 21 Bouchy, F., Doyon, R., Pepe, F., Melo, C., & Artigau, É. 2025, A&A Brahm, R., Espinoza, N., Jordán, A., et al. 2018, MNRAS, 477, 2572 Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031 Brugger, B., Mousis, O., Deleuil, M., & Deschamps, F. 2017, ApJ, 850, 93 Bryan, M. L. & Lee, E. J. 2024, ApJ, 968, L25 Bryan, M. L. & Lee, E. J. 2025, ApJ, 982, L7 Bryant, E. M., Bayliss, D., & Van Eylen, V. 2023, MNRAS, 521, 3663 Burn, R., Mordasini, C., Mishra, L., et al. 2024, Nature Astronomy, 8, 463 Burn, R., Schlecker, M., Mordasini, C., et al. 2021, A&A, 656, A72 Castelli, F. & Kurucz, R. L. 2003, in IAU Symposium, Vol. 210, Modelling of Stellar Atmospheres, ed. N. Piskunov, W. W. Weiss, & D. F. Gray, A20 Castro-González, A., Bourrier, V., Lillo-Box, J., et al. 2024, A&A, 689, A250 Chachan, Y. & Lee, E. J. 2023, ApJ, 952, L20 Christian, S., Vanderburg, A., Becker, J., et al. 2022, AJ, 163, 207 Ciardi, D. R., Beichman, C. A., Horch, E. P., & Howell, S. B. 2015, ApJ, 805, 16 Cieza, L. A., Padgett, D. L., Allen, L. E., et al. 2009, ApJ, 696, L84 Cloutier, R. & Menou, K. 2020, AJ, 159, 211 Cointepas, M., Almenara, J. M., Bonfils, X., et al. 2021, A&A, 650, A145 Collins, K. 2019, in American Astronomical Society Meeting Abstracts, Vol. 233, American Astronomical Society Meeting Abstracts #233, 140.05 Collins, K. A., Kielkopf, J. F., Stassun, K. G., & Hessman, F. V. 2017, AJ, 153, 77 Cook, N. J., Artigau, É., Doyon, R., et al. 2022, PASP, 134, 114509 Deline, A., Hooton, M. J., Lendl, M., et al. 2022, A&A, 659, A74 Delmotte, N., Dolensky, M., Padovani, P., et al. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel, C. Arviset, D. Ponz, & S. Enrique, 690 Donati, J. F., Kouach, D., Moutou, C., et al. 2020, MNRAS, 498, 5684 Dorn, C., Khan, A., Heng, K., et al. 2015, A&A, 577, A83 Dressing, C. D. & Charbonneau, D. 2013, ApJ, 767, 95 Dressing, C. D. & Charbonneau, D. 2015, ApJ, 807, 45 El-Badry, K., Lam, C., Holl, B., et al. 2024, The Open Journal of Astrophysics, 7, 100 El-Badry, K., Rix, H.-W., & Heintz, T. M. 2021, MNRAS, 506, 2269 Espinoza, N. 2018, Research Notes of the American Astronomical Society, 2, 209 Espinoza, N. & Jordán, A. 2015, MNRAS, 450, 1879 Espinoza, N., Kossakowski, D., & Brahm, R. 2019, MNRAS, 490, 2262 Foreman-Mackey, D., Agol, E., Ambikasaran, S., & Angus, R. 2017, AJ, 154, 220 Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306 French, M., Mattsson, T. R., Nettelmann, N., & Redmer, R. 2009, Phys. Rev. B, 79, 054107 Fulton, B. J., Petigura, E. A., Blunt, S., & Sinukoff, E. 2018, PASP, 130, 044504 Fulton, B. J., Petigura, E. A., Howard, A. W., et al. 2017, AJ, 154, 109 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, A&A, 616, A1 Gaidos, E., Mann, A. W., Kraus, A. L., & Ireland, M. 2016, MNRAS, 457, 2877 Gardner, J. P., Mather, J. C., Clampin, M., et al. 2006, Space Sci. Rev., 123, 485 Gordon, I. E., Rothman, L. S., Hargreaves, R. J., et al. 2022, J. Quant. Spectr. Rad. Transf., 277, 107949 Gratia, P. & Fabrycky, D. 2017, MNRAS, 464, 1709 Guerrero, N. M., Seager, S., Huang, C. X., et al. 2021, ApJS, 254, 39 Guillot, T. & Morel, P. 1995, A&AS, 109, 109 Hara, N. C., Boué, G., Laskar, J., Delisle, J. B., & Unger, N. 2019, MNRAS, 489, 738 Harris, R. J., Andrews, S. M., Wilner, D. J., & Kraus, A. L. 2012, ApJ, 751, 115 Hedges, C., Saunders, N., Barentsen, G., et al. 2019, ApJ, 880, L5 Henden, A. A., Levine, S., Terrell, D., & Welch, D. L. 2015, in American Astronomical Society Meeting Abstracts, Vol. 225, American Astronomical Society Meeting Abstracts #225, 336.16 Henry, T. J., Jao, W.-C., Subasavage, J. P., et al. 2006, AJ, 132, 2360 Ho, C. S. K., Rogers, J. G., Van Eylen, V., Owen, J. E., & Schlichting, H. E. 2024, MNRAS, 531, 3698 Howard, A. W., Marcy, G. W., Bryson, S. T., et al. 2012, ApJS, 201, 15 Howell, S. B., Everett, M. E., Sherry, W., Horch, E., & Ciardi, D. R. 2011, AJ, 142, 19 Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, A&A, 553, A6 Ida, S. & Lin, D. N. C. 2005, ApJ, 626, 1045 Jahandar, F., Doyon, R., Artigau, É., et al. 2025, The Astrophysical Journal, 978, 154 Jahandar, F., Doyon, R., Artigau, É., et al. 2024, The Astrophysical Journal, 966, 56 Jenkins, J. M., Twicken, J. D., McCauliff, S., et al. 2016, in Society of PhotoOptical Instrumentation Engineers (SPIE) Conference Series, Vol. 9913, Software and Cyberinfrastructure for Astronomy IV, ed. G. Chiozzi & J. C. Guzman, 99133E Kempton, E. M. R., Bean, J. L., Louie, D. R., et al. 2018, PASP, 130, 114401 Khata, D., Mondal, S., Das, R., & Baug, T. 2021, MNRAS, 507, 1869 Kite, E. S., Fegley, Jr., B., Schaefer, L., & Ford, E. B. 2019, ApJ, 887, L33 Kreidberg, L. 2015, PASP, 127, 1161 Kubyshkina, D. & Vidotto, A. A. 2021, MNRAS, 504, 2034 Kunimoto, M., Vanderburg, A., Huang, C. X., et al. 2023, AJ, 166, 7 Kurucz, R. L. 1979, ApJS, 40, 1 Kurucz, R. L. 1993, SYNTHE spectrum synthesis programs and line data Laughlin, G., Bodenheimer, P., & Adams, F. C. 2004, ApJ, 612, L73 Lenzen, R., Hartung, M., Brandner, W., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 944-952 Li, J., Tenenbaum, P., Twicken, J. D., et al. 2019, PASP, 131, 024506 Lillo-Box, J., Gandolfi, D., Armstrong, D. J., et al. 2023, A&A, 669, A109 Lopez, E. D. & Fortney, J. J. 2013, ApJ, 776, 2 Lopez, E. D. & Rice, K. 2018, MNRAS, 479, 5303 Lovis, C. & Pepe, F. 2007, A&A, 468, 1115 Lucy, L. B. & Sweeney, M. A. 1971, AJ, 76, 544 Luo, H., Dorn, C., & Deng, J. 2024, Nature Astronomy, 8, 1399 Mann, A. W., Dupuy, T., Kraus, A. L., et al. 2019, ApJ, 871, 63 Mann, A. W., Feiden, G. A., Gaidos, E., Boyajian, T., & von Braun, K. 2015, ApJ, 804, 64 Mann, H. B. & Whitney, D. R. 1947, The Annals of Mathematical Statistics, 18, 50 Marcy, G. W., Weiss, L. M., Petigura, E. A., et al. 2014, Proceedings of the National Academy of Science, 111, 12655 Mayor, M., Pepe, F., Queloz, D., et al. 2003, The Messenger, 114, 20 Mayor, M. & Queloz, D. 1995, Nature, 378, 355 McCully, C., Volgenau, N. H., Harbeck, D.-R., et al. 2018, in Society of PhotoOptical Instrumentation Engineers (SPIE) Conference Series, Vol. 10707, Software and Cyberinfrastructure for Astronomy V, ed. J. C. Guzman & J. Ibsen, 107070K McDonald, G. D., Kreidberg, L., & Lopez, E. 2019, ApJ, 876, 22 McLaughlin, D. B. 1924, ApJ, 60, 22 Mignon, L., Delfosse, X., Meunier, N., et al. 2025, A&A, in press Moe, M. & Kratter, K. M. 2021, MNRAS, 507, 3593 Morello, G., Tsiaras, A., Howarth, I. D., & Homeier, D. 2017, AJ, 154, 111 Morris, R. L., Twicken, J. D., Smith, J. C., et al. 2020, Kepler Data Processing Handbook: Photometric Analysis, Kepler Science Document KSCI-19081003, id. 6. Edited by Jon M. Jenkins. Mugrauer, M. & Michel, K.-U. 2020, Astronomische Nachrichten, 341, 996 Mulders, G. D., Pascucci, I., & Apai, D. 2015, ApJ, 814, 130 Neves, V., Bonfils, X., Santos, N. C., et al. 2012, A&A, 538, A25 Neves, V., Bonfils, X., Santos, N. C., et al. 2013, A&A, 551, A36 Otegi, J. F., Bouchy, F., & Helled, R. 2020, A&A, 634, A43 Owen, J. E. & Jackson, A. P. 2012, MNRAS, 425, 2931 Owen, J. E. & Murray-Clay, R. 2018, MNRAS, 480, 2206 Article number, page 18 L. Parc, et al.: The peculiar TOI-756 system Owen, J. E. & Wu, Y. 2017, ApJ, 847, 29 Parc, L., Bouchy, F., Venturini, J., Dorn, C., & Helled, R. 2024, A&A, 688, A59 Pascucci, I., Testi, L., Herczeg, G. J., et al. 2016, ApJ, 831, 125 Pass, E. K., Winters, J. G., Charbonneau, D., et al. 2023, AJ, 166, 11 Pecaut, M. J. & Mamajek, E. E. 2013, ApJS, 208, 9 Pepe, F., Cristiani, S., Rebolo, R., et al. 2021, A&A, 645, A96 Petigura, E. A., Howard, A. W., & Marcy, G. W. 2013, Proceedings of the National Academy of Science, 110, 19273 Plotnykov, M. & Valencia, D. 2020, MNRAS, 499, 932 Plotnykov, M. & Valencia, D. 2024, MNRAS, 530, 3488 Pojmanski, G. 1997, Acta Astron., 47, 467 Rasio, F. A. & Ford, E. B. 1996, Science, 274, 954 Rauer, H., Catala, C., Aerts, C., et al. 2014, Experimental Astronomy, 38, 249 Raymond, S. N. & Izidoro, A. 2017, Icarus, 297, 134 Reylé, C., Jardine, K., Fouqué, P., et al. 2021, A&A, 650, A201 Ribas, I., Guinan, E. F., Güdel, M., & Audard, M. 2005, ApJ, 622, 680 Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, in Society of PhotoOptical Instrumentation Engineers (SPIE) Conference Series, Vol. 9143, Space Telescopes and Instrumentation 2014: Optical, Infrared, and Millimeter Wave, ed. J. M. Oschmann, Jr., M. Clampin, G. G. Fazio, & H. A. MacEwen, 914320 Rossiter, R. A. 1924, ApJ, 60, 15 Rousset, G., Lacombe, F., Puget, P., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4839, Adaptive Optical System Technologies II, ed. P. L. Wizinowich & D. Bonaccini, 140-149 Saumon, D., Chabrier, G., & van Horn, H. M. 1995, ApJS, 99, 713 Schlecker, M., Mordasini, C., Emsenhuber, A., et al. 2021, A&A, 656, A71 Schönrich, R., Binney, J., & Dehnen, W. 2010, MNRAS, 403, 1829 Schweitzer, A., Passegger, V. M., Cifuentes, C., et al. 2019, A&A, 625, A68 Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, AJ, 131, 1163 Smith, J. C., Stumpe, M. C., Van Cleve, J. E., et al. 2012, PASP, 124, 1000 Sousa, S. G., Adibekyan, V., Delgado-Mena, E., et al. 2021, A&A, 656, A53 Speagle, J. S. 2020, MNRAS, 493, 3132 Stassun, K. G., Oelkers, R. J., Paegert, M., et al. 2019, AJ, 158, 138 Stumpe, M. C., Smith, J. C., Catanzarite, J. H., et al. 2014, PASP, 126, 100 Stumpe, M. C., Smith, J. C., Van Cleve, J. E., et al. 2012, PASP, 124, 985 Suárez Mascareño, A., Artigau, É., Mignon, L., Delfosse, X., & Cook, N. J. 2025, A&A Suárez Mascareño, A., Rebolo, R., & González Hernández, J. I. 2016, A&A, 595, A12 Suárez Mascareño, A., Rebolo, R., González Hernández, J. I., & Esposito, M. 2015, MNRAS, 452, 2745 Sullivan, K., Kraus, A. L., Berger, T. A., et al. 2024, AJ, 168, 129 Sullivan, K., Kraus, A. L., Huber, D., et al. 2023, AJ, 165, 177 Tian, F., Toon, O. B., Pavlov, A. A., & De Sterck, H. 2005, ApJ, 621, 1049 Tokovinin, A. 2018, PASP, 130, 035002 Torres, G. 1999, PASP, 111, 169 Twicken, J. D., Catanzarite, J. H., Clarke, B. D., et al. 2018, PASP, 130, 064502 Twicken, J. D., Clarke, B. D., Bryson, S. T., et al. 2010, in Society of PhotoOptical Instrumentation Engineers (SPIE) Conference Series, Vol. 7740, Software and Cyberinfrastructure for Astronomy, ed. N. M. Radziwill & A. Bridger, 774023 Valencia, D., Sasselov, D. D., & O'Connell, R. J. 2007, ApJ, 656, 545 Venturini, J., Guilera, O. M., Haldemann, J., Ronco, M. P., & Mordasini, C. 2020, A&A, 643, L1 Venturini, J., Ronco, M. P., Guilera, O. M., et al. 2024, A&A, 686, L9 Wallace, A. L., Casey, A. R., Brown, A. G. A., & Castro-Ginard, A. 2025, MNRAS, 536, 2485 Weiss, L. M., Isaacson, H., Howard, A. W., et al. 2024, ApJS, 270, 8 Wilcoxon, F. 1945, Biometrics Bulletin, 1, 80 Wilson, T. G., Goffo, E., Alibert, Y., et al. 2022, MNRAS, 511, 1043 Winters, J. G., Henry, T. J., Lurie, J. C., et al. 2015, AJ, 149, 5 Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868 Wroblewski, H. & Torres, C. 1991, A&AS, 91, 129 Wu, Y. & Lithwick, Y. 2011, ApJ, 735, 109 Wu, Y. & Murray, N. 2003, ApJ, 589, 605 Yelle, R. V. 2004, Icarus, 170, 167 Zeng, L., Jacobsen, S. B., Sasselov, D. D., et al. 2019, Proceedings of the National Academy of Science, 116, 9723 Zeng, L., Sasselov, D. D., & Jacobsen, S. B. 2016, ApJ, 819, 127 Ziegler, C., Tokovinin, A., Briceño, C., et al. 2020, AJ, 159, 19 1Observatoire de Genève, Département d'Astronomie, Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland 2Institut Trottier de recherche sur les exoplanètes, Département de Physique, Université de Montréal, Montréal, Québec, Canada 3Observatoire du Mont-Mégantic, Québec, Canada 4Departamento de Física Teórica e Experimental, Universidade Federal do Rio Grande do Norte, Campus Universitário, Natal, RN, 59072-970, Brazil 5Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal 6Departamento de Física e Astronomia, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal 7Univ. Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France 8 5S 3H4, Canada 9 1280 Main St W, Hamilton, ON, L8S 4L8, Canada 10 3600 rue University, Montréal, QC, H3A 2T8, Canada 11 3450 rue University, Montréal, QC, H3A 0E8, Canada 12Departamento de Física, Universidade Federal do Ceará, Caixa Postal 6030, Campus do Pici, Fortaleza, Brazil 13Centro de Astrobiología (CAB), CSIC-INTA, Camino Bajo del Castillo s/n, 28692, Villanueva de la Cañada (Madrid), Spain 14Centre Vie dans l'Univers, Faculté des sciences de l'Université de Genève, Quai Ernest-Ansermet 30, 1205 Geneva, Switzerland 15Instituto de Astrofísica de Canarias (IAC), Calle Vía Láctea s/n, 38205 La Laguna, Tenerife, Spain 16Departamento de Astrofísica, Universidad de La Laguna (ULL), 38206 La Laguna, Tenerife, Spain 17European Southern Observatory (ESO), Karl-Schwarzschild-Str. 2, 85748 Garching bei München, Germany 18Space Research and Planetary Sciences, Physics Institute, 6, 3012 Bern, Switzerland 19Consejo Superior de Investigaciones Científicas (CSIC), E-28006 Madrid, Spain 20Bishop's Univeristy, Dept of Physics and Astronomy, Johnson-104E, 2600 College Street, Sherbrooke, QC, Canada, J1M 1Z7 21 17000, Station Forces, Kingston, ON, Canada 22Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências da Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal 23Departamento de Física da Faculdade de Ciências da Universidade de Lisboa, Edifício C8, 1749-016 Lisboa, Portugal 24Centre of Optics, Photonics and Lasers, Université Laval, Québec, Canada 25Herzberg Astronomy and Astrophysics Research Centre, National Research Council of Canada 26Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France 27Center for Space and Habitability, 6, 3012 Bern, Switzerland 28Center for astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, United States 29NASA Exoplanet Science Institute, IPAC, California 91125 USA 30George Mason University, 4400 University Drive, Fairfax, VA, 22030 USA 31European Southern Observatory (ESO), Av. Alonso de Cordova 3107, Casilla 19001, Santiago de Chile, Chile 32Planétarium de Montréal, Espace pour la Vie, 4801 av. Pierre-de Coubertin, Montréal, Québec, Canada 33Lund Observatory, Division of Astrophysics, 118, 221 00 Lund, Sweden 34SETI Institute, Mountain View, CA 94043, USA NASA Ames Research Center, Moffett Field, CA 94035, USA 35York University, 4700 Keele St, North York, ON M3J 1P3 36Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany Article number, page 19 A&A proofs: manuscript no. aa55684-25corr 37 2329 West Mall, Vancouver, BC, Canada, V6T 1Z4 38Western University, 1151 Richmond Street, London, ON N6A 3K7, Canada 39Light Bridges S.L., Observatorio del Teide, Carretera del Observatorio, s/n Guimar, 38500, Tenerife, Canarias, Spain 40University Observatory, Faculty of Physics, Ludwig-MaximiliansUniversität München, Scheinerstr. 1, 81679 Munich, Germany 41Hamburger Sternwarte, Gojenbergsweg 112, D-21029 Hamburg, Germany 42Subaru Telescope, National Astronomical Observatory of Japan (NAOJ), 650 N Aohoku Place, Hilo, HI 96720, USA 43 5640 South Ellis Avenue, Chicago, IL 60637, USA 44Laboratoire Lagrange, Observatoire de la Côte d'Azur, CNRS, Université Côte d'Azur, Nice, France ∗e-mail: Article number, page 20 L. Parc, et al.: The peculiar TOI-756 system Appendix A: TESS pixel file plots In this appendix, we show the TESS pixel file plots for all the sectors observed of TOI-756 (except Sector 10 shown in Fig. 1). 2020 2022 2024 2026 2028 2030 Pixel Column Number 1324 1326 1328 1330 1332 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 25 27 28 29 30 31 33 35 37 40 42 45 49 58 64 65 67 82 TIC 73649615 - Sector 11 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 Flux ×102 (e /s) 1312 1314 1316 1318 1320 Pixel Column Number 1196 1198 1200 1202 1204 1206 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 35 37 39 40 45 47 49 55 66 69 TIC 73649615 - Sector 37 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 Flux ×102 (e /s) 2068 2070 2072 2074 2076 Pixel Column Number 914 916 918 920 922 924 Pixel Row Number E N m = -2 m = 0 m = 2 m = 4 m = 6 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 35 37 39 41 46 47 52 53 54 56 58 6667 TIC 73649615 - Sector 64 0.02 0.05 0.10 0.20 0.30 0.50 1.00 2.00 5.00 Flux ×102 (e /s) Fig. A.1. TESS TPF of TOI-756 created with tpfplotter (Aller et al. 2020). The orange pixels define the aperture mask used for extracting the photometry. Additionally, the red circles indicate neighboring objects from the Gaia DR3 catalog, with the circle size corresponding to the brightness difference compared to the target (as indicated in the legend). Our target is marked with a white cross. Pixel scale is 21′′/pixel. The co-moving companion of TOI-756 corresponds to the star labeled "2". Article number, page 21 A&A proofs: manuscript no. aa55684-25corr Appendix B: High-contrast imaging observations 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 Angular Separation (arcsec) 0 1 2 3 4 5 m 1" NACO/CONICA AO 0.0 0.5 1.0 1.5 2.0 2.5 3.0 arcsec 0 1 2 3 4 5 magnitude (I-band) 2 0 -2 [arcsec] -2 0 2 [arcsec] SOAR Speckle ACF TIC73649615 0.0 0.2 0.4 0.6 0.8 1.0 1.2 angular separation (arcsec) 0 1 2 3 4 5 6 7 m 562 nm 832 nm TOI-756 1" 832 nm 0.0 0.2 0.4 0.6 0.8 1.0 1.2 angular separation (arcsec) 0 1 2 3 4 5 6 7 m 562 nm 832 nm TOI-756 1" 832 nm Fig. B.1. Adaptive optics and speckle imaging plots for TOI-756 showing magnitude contrast in function of angular separation. Top left: NACO/CONICA@VLT. Top right: HRCam@SOAR. Bottom: Zorro@Gemini-South (left: March 12, 2020 ; right: July 05, 2023). For VLT and Gemini, the inset image is the primary target showing no additional close-in companions. For SOAR, the inset image shows the auto-correlation function. Appendix C: Photometric and radial velocity analysis Table C.1. Median values and 68% confidence intervals of the posterior distributions of the TESS-only fit. Parameter Prior Value Stellar parameters Stellar density, ρ∗(ρ⊙). . . . . . . . . . . N(5350, 230) 5291+156 -163 Parameters for TOI-756 b Orbital period, P (days). . . . . . . . . . N(1.23926, 0.1) 1.239250 ± 0.000001 Semi-major axis, a (AU) . . . . . . . . . - 0.0182 ± 0.0003 Transit epoch, T0 (BJD) . . . . . . . . . N(2458570.65, 0.1) 58570.65187+0.00067 -0.00073 Scaled planetary radius, RP/R∗. . . U(0, 1) 0.0486 ± 0.0013 Impact parameter, b . . . . . . . . . . . . . U(0, 1) 0.541+0.037 -0.047 Inclination, i (deg) . . . . . . . . . . . . . . - 85.92+0.39 -0.31 Eccentricity, e . . . . . . . . . . . . . . . . . . Fixed 0.0 Argument of periastron, ω (deg) . . Fixed 90.0 Limb darkening parameters Limb darkening parameter, q1,TESS N(0.792, 0.029) 0.795 ± 0.029 Limb darkening parameter, q2,TESS N(0.453, 0.022) 0.455 ± 0.021 Notes: N(μ, σ2) indicates a normal distribution with mean μ and variance σ2, U(a, b) a uniform distribution between a and b. Article number, page 22 L. Parc, et al.: The peculiar TOI-756 system Fig. C.1. TESS PDCSAP flux light curves of the four different sectors with the best-fit juliet model shown as a black line (see Sect. 5.2.3 for details on the modeling). Fig. C.2. LCO-CTIO light curve from the g′-band transit with the best-fit juliet model shown as a black line and model errors as gray area. Dark red circles are data binned to 10 min (see Sect. 5.2.3 for details on the modeling). We do not present the i′-band transit here, as no detrending was applied, making it identical to the one shown in Fig. 5. Article number, page 23 A&A proofs: manuscript no. aa55684-25corr Fig. C.3. ExTrA light curves of the four different transits, observed with two telescopes (left panel: first telescope; right panel: second telescope). From top to bottom, the first transit is complete, while the remaining three show only the egress. The best-fit juliet models are shown as black lines, with 1σ model uncertainties indicated by the gray shaded regions. Dark red circles represent the data binned to 10 minutes. The dashed vertical line define the transit midpoint. An arbitrary offset has been added between the transits for clarity. See Sect. 5.2.3 for details on the modeling. Article number, page 24 L. Parc, et al.: The peculiar TOI-756 system Table C.2. Median values and 68% confidence intervals of the posterior distributions of the joint fit. Parameter Prior Value Stellar parameters Stellar density, ρ∗(ρ⊙) . . . . . . . . . . . . . . . . N(5350, 230) 5298+164 -169 Fitted parameters for TOI-756 b Orbital period, Pb (days) . . . . . . . . . . . . . . N(1.23925, 0.00001) 1.23924949+0.00000068 -0.00000063 Transit epoch, T0,b (BJD) . . . . . . . . . . . . . N(2458570.652, 0.001) 2458570.65234+0.00035 -0.00037 r1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.70, 0.1) 0.726+0.012 -0.014 Scaled planetary radius, RP/R∗= r2 . . . . N(0.049, 0.01) 0.05113+0.00082 -0.00089 RV semi-amplitude, Kb (m/s) . . . . . . . . . U(0, 30) 9.22+1.70 -1.49 Eccentricity, eb . . . . . . . . . . . . . . . . . . . . . . Fixed 0.0 (adopted, 3σ < 0.51) Argument of periastron, ωb (deg) . . . . . . Fixed 90.0 Fitted parameters for TOI-756 c Orbital period, P (days). . . . . . . . . . . . . . . U(10, 300) 149.40+0.16 -0.17 Transit epoch, T0 (BJD) . . . . . . . . . . . . . . U(2460000, 2460500) 2460498.82+0.57 -0.52 RV semi amplitude, Kc (m/s) . . . . . . . . . U(100, 500) 273.29+2.56 -2.60 √ecsin(ωc) . . . . . . . . . . . . . . . . . . . . . . . . . . U(-1, 1) -0.141 ± 0.011 √eccos(ωc). . . . . . . . . . . . . . . . . . . . . . . . . . U(-1, 1) -0.652 ± 0.006 Fitted parameters for the linear trend RV slope (m/s/day) . . . . . . . . . . . . . . . . . . . U(-10, 10) 0.399 ± +0.014 RV intercept (m/s). . . . . . . . . . . . . . . . . . . . U(-300, 300) -181.58+105.59 -78.22 Instrumental photometric parameters Offset relative flux, MTESSS 10 (×10-4) . . N(0, 300) -0.1 ± 1.6 Jitter, σw,TESSS 10 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 12.2+58.3 -11.2 Offset relative flux, MTESSS 11 (×10-4) . . N(0, 300) 3.7+17.6 -13.1 Jitter, σw,TESSS 11 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 17.4+77.3 -15.8 Offset relative flux, MTESSS 37 (×10-4) . . N(0, 300) -2.15+0.65 -0.59 Jitter, σw,TESSS 37 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 1.13+13.13 -0.98 Offset relative flux, MTESSS 64 (×10-4) . . N(0, 300) -15.59+0.63 -0.64 Jitter, σw,TESSS 64 (ppm) . . . . . . . . . . . . . . . . logU(0.01, 300) 0.73+10.62 -0.66 Offset relative flux, MExTrA1T1 . . . . . . . . . N(0, 2) 0.16+0.46 -0.18 Jitter, σw,ExTrA1T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 2146+308 -348 Offset relative flux, MExTrA2T1 . . . . . . . . . N(0, 2) 0.005+0.24 -0.004 Jitter, σw,ExTrA2T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 1.97+45.03 -1.80 Offset relative flux, MExTrA3T1 . . . . . . . . . N(0, 2) 0.05+0.20 -0.12 Jitter, σw,ExTrA3T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 67.39+1286.35 -62.43 Offset relative flux, MExTrA4T1 . . . . . . . . . N(0, 2) 0.04+0.20 -0.05 Jitter, σw,ExTrA4T1 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 34.90+318.35 -32.80 Offset relative flux, MExTrA1T2 . . . . . . . . . N(0, 2) 0.32+0.38 -0.30 Jitter, σw,ExTrA1T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 0.94+8.48 -0.85 Offset relative flux, MExTrA2T2 . . . . . . . . . N(0, 2) 0.028+0.20 -0.06 Jitter, σw,ExTrA2T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 3.4+39.5 -3.1 Offset relative flux, MExTrA3T2 . . . . . . . . . N(0, 2) 0.002+0.098 -0.065 Jitter, σw,ExTrA3T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 1.84+19.27 -1.65 Offset relative flux, MExTrA4T2 . . . . . . . . . N(0, 2) -0.002+0.011 -0.020 Article number, page 25 A&A proofs: manuscript no. aa55684-25corr Table C.2. continued. Parameter Prior Value Jitter, σw,ExTrA4T2 (ppm) . . . . . . . . . . . . . . . log U(0.01, 5000) 1.66+34.90 -1.53 Offset relative flux, MLCO-i' (×10-4) . . . N(0, 2000) 39.14+0.97 -0.96 Jitter, σw,LCO-i' (ppm) . . . . . . . . . . . . . . . . logU(0.1, 1000) 4.2+40.4 -3.6 Offset relative flux, MLCO-g' (×10-4) . . N(0, 2000) 230.5+10.1 -9.7 Jitter, σw,LCO-g' (ppm) . . . . . . . . . . . . . . . logU(0.1, 1000) 34.6+309.8 -31.0 Instrumental RV parameters Systemic RV, μNIRPS (m/s) . . . . . . . . . . . . U(10000, 20000) 14783.20+79.74 -103.06 Jitter, σw,NIRPS (m/s) . . . . . . . . . . . . . . . . . logU(0.001, 100) 17.64 ± 2.16 Systemic RV, μHARPS (m/s) . . . . . . . . . . . U(10000, 20000) 14688.28+80.74 -102.61 Jitter, σw,HARPS (m/s) . . . . . . . . . . . . . . . . . logU(0.001, 100) 13.52+1.25 -1.14 GP/detrending parameters ρGP,TESSS 10 (days) . . . . . . . . . . . . . . . . . . . . . logU(0.001, 50) 0.62+0.66 -0.39 σGP,TESSS 10 (10-4 relative flux) . . . . . . . . . logU(10-2, 5 × 105) 6.63+1.49 -1.24 ρGP,TESSS 11 (days) . . . . . . . . . . . . . . . . . . . . . logU(0.001, 50) 19.00+9.90 -6.68 σGP,TESSS 11 (10-4 relative flux) . . . . . . . . . logU(10-2, 5 × 105) 18.8+12.9 +7.0 ρGP,ExTrA1T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 2.05+1.68 -1.01 σGP,ExTrA1T1 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 20.2+21.6 -10.1 ρGP,ExTrA2T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 0.06+0.98 -0.04 σGP,ExTrA2T1 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 1.2+27.2 -0.9 ρGP,ExTrA3T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 1.36+0.84 -0.53 σGP,ExTrA3T1 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 18.0+16.6 -9.2 ρGP,ExTrA4T1 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 0.41+0.54 -0.32 σGP,ExTrA4T1 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 8.2+16.2 -6.8 ρGP,ExTrA1T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 1.19+0.89 -0.61 σGP,ExTrA1T2 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 25.7+27.5 16.3 ρGP,ExTrA2T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 5.6+2.3 -2.0 σGP,ExTrA2T2 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 7.2+10.3 -3.3 ρGP,ExTrA3T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 1.02+1.49 -0.74 σGP,ExTrA3T2 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 12.5+22.5 -10.2 ρGP,ExTrA4T2 (days) . . . . . . . . . . . . . . . . . . . . logU(0.001, 10) 0.40+0.82 -0.25 σGP,ExTrA4T2 (10-2 relative flux) . . . . . . . . logU(10-4, 102) 1.6+5.0 -1.1 θ0,LCO-g' (10-4 relative flux) . . . . . . . . . . . U(-106, 106) 129.3+7.9 -7.4 Limb darkening parameters q1,TESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.792, 0.029) 0.794+0.017 -0.019 q2,TESS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.453, 0.022) 0.456+0.015 -0.016 q1,ExTrA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.779, 0.074) 0.744+0.038 -0.044 q2,ExTrA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.284, 0.028) 0.287+0.016 -0.017 q1,LCO-i' . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.825, 0.023) 0.814 ± 0.016 q2,LCO-i' . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.506, 0.030) 0.514+0.019 -0.017 q1,LCO-g' . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.888, 0.011) 0.887+0.007 -0.006 q2,LCO-g' . . . . . . . . . . . . . . . . . . . . . . . . . . . . . N(0.664, 0.015) 0.671 ± 0.011 Notes: N(μ, σ2) indicates a normal distribution with mean μ and variance σ2, U(a, b) a uniform distribution between a and b and logU(a, b) a log-uniform distribution between a and b. Article number, page 26 L. Parc, et al.: The peculiar TOI-756 system Appendix D: Interior modeling of TOI-756 b AMF = 0.02+0.00 0.00 0.2 0.4 0.6 0.8 CMF CMF = 0.28+0.03 0.03 4 8 12 16 Fe/Mg AMF = 1.432+0.235 0.205 0.02 0.03 0.04 0.05 AMF 4 8 12 16 Fe/Si 0.2 0.4 0.6 0.8 CMF 4 8 12 16 Fe/Mg 4 8 12 16 Fe/Si CMF = 0.3+0.0 0.0 AMF = 0.79+0.10 0.10 0.20 0.25 0.30 0.35 0.40 CMF CMF = 0.27+0.03 0.03 1.2 1.6 2.0 2.4 Fe/Mg Fe/Mg = 1.40+0.23 0.20 0.60 0.75 0.90 AMF 0.8 1.2 1.6 2.0 2.4 Fe/Si 0.20 0.25 0.30 0.35 0.40 CMF 1.2 1.6 2.0 2.4 Fe/Mg 0.8 1.2 1.6 2.0 2.4 Fe/Si Fe/Si = 1.21+0.21 0.18 Fig. D.1. Corner plots from the interior modeling of TOI-756 b (see Sect.6.2.2). The top panel shows scenario (1), assuming a H/He envelope: results with uninformative priors are shown in grey, and those using stellar-informed priors based on the host star's refractory abundances are in red. The bottom panel corresponds to scenario (2), assuming a pure H2O envelope with stellar-informed priors. Article number, page 27
2510.14911
Dude, Where’s My (Autonomous) Car? Defining an Accessible Description Logic for Blind and Low Vision Travelers Using Autonomous Vehicles Paul D. S. Fink1,2; 0000-0003-2915-1331 Justin R. Brown1; 0000-0002-3359-8411 Rachel Coombs1; 0009-0004-8593-559X Emily A. Hamby1; 0009-0005-0318-1446 Kyle J. James1; 0009-0003-3205-824X Aisha Harris1; 0009-0008-0916-0912 Jacob Bond3; 0000-0003-2025-5230 Morgan E. Andrulis3; 0009-0009-8827-4885 Nicholas A. Giudice1,4*; 0000-0002-7640-0428 1VEMI Lab, The University of Maine, Orono, Maine 04469, USA. 2College of Computing, Grand Valley State University, Allendale, Michigan 49401, USA. 3Connected Vehicle Experience Research Lab, General Motors Global Research & Development, Warren, Michigan 48092, USA. 4School of Computing and Information Science, The University of Maine, Orono, Maine 04469, USA. *Corresponding Author: nicholas.giudice@maine.edu Purpose: Autonomous vehicles (AVs) are becoming a promising transportation solution for blind and low-vision (BLV) travelers, offering the potential for greater independent mobility. This paper explores the information needs of BLV users across multiple steps of the transportation journey, including finding and navigating to, entering, and exiting vehicles independently. Methods: A survey with 202 BLV respondents and interviews with 12 BLV individuals revealed the perspectives of BLV end-users and informed the sequencing of natural language information required for successful travel. Whereas the survey identified key information needs across the three trip segments, the interviews helped prioritize how that information should be presented in a sequence of accessible descriptions to travelers. Results: Taken together, the survey and interviews reveal that BLV users prioritize knowing the vehicle's make and model and how to find the correct vehicle during the navigation phase. They also emphasize the importance of confirmations about the vehicle's destination and onboard safety features upon entering the vehicle. While exiting, BLV users value information about hazards and obstacles, as well as knowing which side of the vehicle to exit. Furthermore, results highlight that BLV travelers desire using their own smartphone devices when receiving information from AVs and prefer audio- based interaction. Conclusion: The findings from this research contribute a structured framework for delivering trip-related information to BLV users, useful for designers incorporating natural language descriptions tailored to each travel segment. This work offers important contributions for sequencing transportation-related descriptions throughout the AV journey, ultimately enhancing the mobility and independence of BLV individuals. Keywords: Autonomous Vehicles, Blind and Low Vision Users, Natural Language Descriptions, Accessibility Statements and Declarations 2 Funding: This work was supported by General Motors Global Research & Development. Competing Interests: The authors have no competing interests to declare that are relevant to the content of this article. Human Subjects Approval: This research was approved by The University of Maine Institutional Review Board under application #2023-08-09. Informed consent was obtained from all participants included in this research. 1 Introduction Transportation is a challenging undertaking for the 1.3 billion people experiencing significant disabilities worldwide [15], including the approximately 295 million people who report moderate to severe visual impairment [3]. When considering ‘transportation’, a common misconception is that the travel process begins and ends in the vehicle. In reality, the complete journey is characterized by several mutually dependent travel segments that start with planning the trip. Among the remaining segments are navigating to a ride, entering, exiting, and safely navigating to the destination. Autonomous vehicles (AVs) hold the potential to revolutionize accessible mobility for blind and low-vision (BLV) travelers by providing door-to-door transportation without requiring friends, family members, or sighted guides for assistance. However, this promise of independent mobility depends on future interfaces conveying meaningful and useful information across each segment of the complete trip. While the body of work investigating accessible AVs for BLV users has prioritized in-vehicle interaction, much less is known about information needs in the equally important (but often ignored) segments that involve localizing a vehicle, entering it safely, and exiting and orienting to the destination [20, 30]. To address these problems, we sought to identify what we refer to as an accessible description logic, providing inclusively designed information for BLV travelers in a user-defined sequence of natural language descriptions across otherwise disconnected travel segments. Our description logic aims to be a progression of information categories that designers and developers can use to insert natural language audio cues that are adapted to the specific environment. Just as a developer might use a sequence of variables to ‘slot in’ specific values in that sequence, we envision the utility of this logic to be identifying the order and priority of information that can be adapted to each AV trip. We specifically study aspects of AV-enabled transportation known to be difficult for BLV people, starting first with safely navigating to and localizing a ride once it arrives. As AVs adopting rideshare models may arrive to unanticipated locations in dynamic environments [17, 22], BLV travelers will likely be required to utilize outdoor navigation skills to find and navigate to AVs, often in tandem with applications designed to support wayfinding [22], in addition to their preferred mobility tool, e.g., the long cane or guide dog. Unlike typical outdoor navigation activities, however, localizing AVs involves a host of context-specific tasks for BLV travelers, including identifying and confirming the correct vehicle, avoiding obstacles within the pick-up area (e.g., construction hazards), and finding the correct door handle [20]. While the tasks for navigating to a vehicle are well-defined in the literature, how to prioritize and order information in an accessible interface remains an open question. This is also the case for entering and orienting within the vehicle, which often includes determining how to enter safely and understanding seat locations and availability. Finally, once the AV arrives at its destination, BLV users must exit the vehicle safely and orient themselves to another environment [30]. Rather than studying each of these segments in isolation, as is common in the existing literature, we seek to provide users and designers with a comprehensive and consistent logic for completing the trip. Our goal in this paper is to identify a structured sequence of natural language descriptions for each of these travel segments, ensuring that user needs are reflected throughout by directly engaging BLV users in two studies. The first, a confidential, anonymous survey with (N = 202) BLV respondents, was motivated by the need to identify essential information to be conveyed during the trip. It sought to answer our first research question: RQ1: What information is important for BLV travelers to receive across the complete trip with AVs? The second stage of this project consisted of interviews with (N = 12) BLV individuals, which aimed to order the initial set of information preferences defined by the survey in a logical sequence for each trip segment. This goal was addressed by our second research question: RQ2: How should information be ordered and presented to BLV travelers across travel segments with AVs? 3 The interviews provided feedback not only on the order in which participants preferred to receive information, but also the accessibility aids and technologies they found most useful, and enabled a clearer understanding of their individual transportation experiences. By integrating survey results and interview insights, we developed a final description logic and accompanying guidelines that offer best practices for designing accessible transportation apps and related technologies. The key contributions of this work advance accessible computing in the transportation context by providing a structured approach to delivering critical trip-related information for BLV users, something that to our knowledge has not been previously studied. Importantly, this project addresses a critical user experience gap (e.g., BLV riders) identified by our OEM partner, while also providing a strong foundation for future studies that empirically test the logic defined here using behavioral methods. We believe that our findings help pave the way for ongoing work by our group and others addressing the question at the end of the paper (Section 6.3) regarding the efficacy of using audio-only vs. multisensory information presentation in user interfaces (UIs) supporting AV usage as part of a complete trip solution. We highlight the apparent experiential gap indicated by BLV users who self-report preferences for audio-only information when, in related work, evidence suggests performance is on-par or better with multisensory information. By identifying audio-only information in the AV context across multiple trip segments, the current paper contributes an important testbed for evaluating this potential gap between what people say they prefer (which we hypothesize is based on their existing experiences with audio- only interfaces) and how they perform with new multisensory user interfaces. The remainder of this paper is organized as follows: In Section 2, we review the current work in AV technology and its impact on independent transportation for BLV individuals, including research on complete trip accessibility. Section 3 details the methodology and results of Study 1, which identifies critical trip information. Section 4 presents the methods and results of Study 2, which refines how that information should be structured and delivered as well as reports the findings of thematic analysis on the interviews as a whole. In Section 5, we present our culminating description logic and in Section 6 we discuss our findings in relation to existing accessibility literature and AV design. Finally, Section 7 concludes with the key takeaways from this work. 2 Related Work AV technology is rapidly advancing; however, the small but growing body of literature examining accessible AVs for BLV users indicates that more work is needed to meet the needs of travelers who often rely on human assistance throughout the trip [30]. One useful strategy in pursuit of this goal is involving end-users in the design of AV systems. Indeed, multiple researchers have emphasized the importance of the disability community being involved in the development of technologies that are needed to make AVs readily accessible to consumers. For instance, in their focus group study, Brinkley et al. (2017) found that BLV users were concerned about not being adequately considered in self-driving development [11], industry white papers have elucidated the important role of advocacy among people with disabilities to be considered in AV development [13], and user experience work has emphasized inclusive, participatory design of AV system features to support BLV individuals [20]. While this direct involvement is important, Ma (2024) correctly points out that prior work is predominantly hypothetical, as AVs truly designed for BLV users do not yet exist [28]. Recognizing this limitation, the following subsections review the available body of work exploring BLV AV accessibility, as well as the importance of natural language descriptions in supporting the complete trip, which we argue is critical to consider as travel is not limited to discrete trip segments or activities. 2.1 Complete Trip Accessibility for BLV Travelers The emergence of AVs presents both opportunities and challenges for BLV travelers. While AVs hold the potential to enhance mobility, independence, and workforce participation [21], research consistently highlights the need for concerted inclusive design efforts to address accessibility concerns. For instance, in their survey-based study, Bennett et al., 2020 noted that BLV users are skeptical that AVs will be designed for them [2], which was a concern echoed by participants in a dual survey and focus group study by Brinkley and colleagues [9]. Even understanding these concerns, the previous work emphasized optimism about AVs among the BLV community, which has been demonstrated to be more prevalent among 4 BLV users than their sighted counterparts [24]. One key issue is ensuring that BLV persons are effectively supported throughout the entire travel experience, from planning a trip to reaching their destination. Studies have shown that BLV travelers rely on various cues to understand the travel environment across the trip, many of which are currently provided by human rideshare drivers [6, 8]. For instance, Brewer and Ellison (2020) found that BLV rideshare users often depend on drivers for assistance in locating the vehicle, exiting safely at accessible points, and receiving verbal descriptions of the environment [5]. The predicted transition to AV rideshare services raises concerns about whether these supportive interactions currently provided by humans can be effectively replaced by technology when human drivers are no longer in the loop [7, 18]. While past rideshare experiences based on human-driven vehicles can highlight current accessibility gaps, they do not directly translate to a one-to-one set of expectations or solutions for AVs, when the human agent is no longer part of the information-access equation. Future AV use will likely involve new interaction paradigms, different constraints, and evolving user priorities, necessitating distinct design strategies. Other research has addressed the lack of support provided to BLV users from modern navigation applications and devices that do not meet users’ requirements, such as different types of intersections all being labeled as ‘intersection’ and implementations that do not accurately address the precise location and orientation of crosswalks [16, 37]. Non-visual interfaces have been explored to bridge this gap, primarily focusing on in-vehicle experiences providing information on vehicle location via in-air haptics and/or spatialized audio [19], as well as on-the-go updates about the ride via audio cues [10]. Recent work has also explored the ideal combination of multisensory cues for BLV users during the in-vehicle portion of the route, identifying tactile and spatialized audio as the optimal combination [12]. However, less attention has been paid to other phases of the journey like pre-trip planning and vehicle localization [17, 34]. While some work has focused on designing inclusive interfaces for localizing AVs [20] and exiting AVs [30], no research to date has explored a unified and accessible framework across multiple stages of the trip for BLV travelers. Moreover, no work has sought to identify how to appropriately structure information sequentially for users as they navigate these multiple trip segments. Addressing both of these issues represent key contributions of the current work. A promising approach is to leverage users’ interest in utilizing speech and audio interaction with AVs [11, 18], as explored in the following. 2.2 Natural Language Sequences for Spatial Tasks Turn-by-turn (TBT) direction systems have become the de facto standard for audio-based navigation, but evidence suggests that TBT natural language systems can be problematic for BLV users, as the information provided is often inadequate for developing an accurate mental representation necessary for safe navigation [23]. This is primarily because of the information (or lack thereof) provided in standard TBT direction systems, leading to significant errors [1]. For BLV users to safely and accurately travel to and from AVs, information beyond the route is necessary as well, including indication of relevant landmarks, obstructions, cues about sidewalk or street parameters, and car orientation [20]. An alternative to traditional TBT direction systems for BLV travelers are systems that include spatialized information, i.e., where the distance and direction of a street, landmark, or point of interest is audibly received from its location in space [38]. While adding spatialized information to TBT instructions can improve route guidance accuracy and reduce cognitive load [26, 27], it does not eliminate the limitation that these instructions are only predominately conveying route guidance information. Therefore, to address this limitation and offer a more comprehensive navigation experience, this study explores the potential of enhancing NL navigation instructions as an alternative to traditional TBT directions. Moreover, we aim to understand how to best sequence this information to aid future navigation systems and the associated algorithms that provide users with sequential real-time information [35]. 3 Study 1: Survey of BLV Information Needs Across the Complete Trip To determine the information needs for BLV travelers across the vehicle localization, entry, and egress segments of the trip (RQ1), a survey was developed by the authors of this paper and distributed remotely to (N = 202) BLV participants by Qualtrics (https://www.qualtrics.com/). The decision to use a third-party survey service for participant recruitment was intended to maximize respondents who met demographic criteria of interest (see 3.1.1 Participants below). Prior to 5 providing responses, participants were asked to imagine three scenarios that involved finding, entering, and exiting an AV in an urban area. To increase the realism of the imagined scenarios, participants were asked to imagine that the AV would be taking them to a grocery store. The AV was noted as being able to safely and legally provide transport without being accompanied by a human driver or attendant. To reflect the possibility of widespread mobility-as-a-service prediction models for future AV deployment [14, 31], participants were also told that there might be other passengers on board and that the seats might be in nontraditional arrangements (like a bus or a shuttle). Due to the limited real-world availability of AV transportation, the use of these ‘imaginative’ scenarios were employed that aimed to unify participant understanding and encourage responses as if they were traveling in these environments. 3.1 Methods The survey consisted of 32 questions (Appendix A) divided between three parts of the trip: navigating to the vehicle, entering and orienting within it, and exiting and orienting to the surrounding environment. Survey respondents were instructed to imagine that they were going to use an autonomous rideshare vehicle in three scenarios related to each part of the trip. Individual pieces of information that were evaluated within these three phases were generated based on the prior literature on entering and exiting AVs [20, 30], as well as from personal insight and lived experience of one of the authors on this paper who is congenitally blind and a frequent traveler. For example, in the navigation phase, information included finding the correct vehicle and door handle, as well as avoiding obstacles. In the entering phase, eight questions evaluated a range of information items (e.g., seat placement, cleanliness details, information on other passengers). The exiting phase also included eight questions evaluating a range of information items (e.g., obstacle avoidance, other passenger movement, and points of interest). These information items were first rated on 5-point Likert scales to evaluate their importance (1, Strongly Disagree to 5, Strongly Agree). To gain an initial understanding of how to order information in our description logic (RQ2), participants were then asked to identify which piece of information they would like to receive first during that phase. The survey was tested internally for accessibility and confirmed to work well with screen readers prior to deployment. The research was approved by The University of Maine IRB. 3.1.1 Participants Selection criteria ensured participants were at least 18 years of age with a known and uncorrected visual impairment, utilized an accessibility device or mobility aid (e.g., screen reader, magnification, white cane, guide dog, etc.), and were non-drivers who utilize transportation, such as rideshares, public transit, or private vehicles. Demographic information such as name, age, gender, and email addresses were not able to be recorded by Qualtrics; however, geographic level of urbanization, the classification of vision status, and what accessibility aids participants utilize were all documented. The majority of participants lived in suburban areas (N = 81), classified their vision status as “Moderate Vision Impairment” (N = 93), and reported using magnification (N = 133). A complete summary of participant demographics from the pre- survey questions are detailed in Table 1. Participants were compensated by Qualtrics in accordance with their compensation protocols. Table 1 Interview Participant Self-Reported Demographic Information Geographic Area Vision Status Accessibility Aids (multi-select) Category N (%) Category N (%) Category N (%) Suburban 81 (40.1%) Moderate vision impairment 93 (46%) Magnification 133 (65.8%) Urban 72 (35.6%) Mild vision impairment 61 (30.2%) Mobile screen readers 93 (46%) Rural 49 (24.3%) Severe vision impairment (includes blindness) 48 (23.8%) PC screen readers 49 (24.3%) White cane 36 (17.8%) Guide dog 32 (15.8%) CV apps 30 (14.9%) Braille display 13 (6.4%) Other* 11 (5.4%) 6 3.1.2 Data Analysis The Likert scale questions (1, Strongly Disagree to 5, Strongly Agree) rating the importance of each item in the survey were first analyzed using basic descriptive statistics; mean (M) and standard deviation (SD). As our goal was to identify whether each item was important, rather than to compare the importance between the individual items, inferential statistics were not employed for the results of these questions. However, subsequent inferential analyses of the distribution of rankings by vision impairment status (Mild, Moderate, and Severe) were conducted to reveal potential differences or similarities in item rankings among these groups. As the rankings are non-parametric data and three visual groups are being compared, a Kruskal-Wallis test was used to determine the impact of vision status for each item ranking. Dunn’s post-hoc comparisons with Bonferroni correction were conducted on items where the Kruskal-Wallis test was found to be significant. The results and Likert distributions, separated by survey section, are reported below in sections 3.2.1, 3.2.2, and 3.2.3. 3.2 Results 3.2.1 Navigation Based on the self-reported Likert data, the majority of participants wanted information about the make, model, and year of the vehicle (M = 4.15, SD = 1.05). Participants also indicated that they found it challenging while navigating to the vehicle to avoid hazards (M = 3.53, SD = 1.20) and to find the correct vehicle (M = 3.51, SD = 1.23). Participants did not respond as strongly for or against it being challenging for them to locate the correct door and handle when navigating to the vehicle (M = 3.14, SD = 1.32). Figure 1 displays the total distribution of rankings and the distributions by reported vision status. Three of the items (Q1, Q2, Q3) were found to have significant differences in rankings between vision status groups (p < .001). Post-hoc comparisons revealed significant differences in rankings (all p’s < .01) between the Mild and Severe groups as well as the Moderate and Severe groups across the three items. These results suggest that individuals with severe vision impairment may face greater challenges in locating the correct vehicle, identifying the correct door, and avoiding hazards on the way to the vehicle compared to those with mild or moderate vision impairment. 7 Fig. 1 Diverging stacked bar plots showing the distribution of Likert responses by self-reported vision status for Study 1 Survey statements concerning Navigating to the Vehicle When considering the piece of information people want to know first (Figure 2), several information items received roughly 40 responses (~20% of participants). These included knowing which door to enter, the direction of the vehicle, position of other passengers, information about the curb, and information about the destination. Fig. 2 Bar plot of Survey 1 responses about the first information participants wanted during the Navigating to the Vehicle phase To determine if there were statistically significant differences between the items listed first, a Chi-square goodness of fit test was performed (χ2 = 63.72, p < .001), indicating that participant preferences were not equally distributed across the range of information types (i.e., the Response categories in Figure 2). That is, results suggest that there was an overall difference between the information to be delivered first during the navigation phase. This was likely due to the few responses in the Bike Lane and Other Information categories. Indeed, pairwise post-hoc tests demonstrated that Other Information and Bike Lane were chosen statistically significantly fewer times than the other information categories (all p’s < .001). This was surprising since the Other Information category was included to capture responses like the make, model, and year of the vehicle, which participants indicated they wanted strongly in the earlier Likert questions (Q4 from Figure 1). Furthermore, the post-hoc tests demonstrated no statistically significant differences between knowing which door to enter, the direction of the vehicle, position of other passengers, information about curbs, and information about the 8 destination. Taken together, these data indicate that a range of information is important to BLV users when navigating to AVs and that users are divided on which information should be presented first (which we analyze further in the Study 2 interviews, Section 4). 3.2.2 Entering and Orienting Across all Likert scale questions about entering the vehicle (Figure 3), participants responded that they want to know each piece of information queried in the survey, with all means except the location of the control interface (M = 3.93, SD = .10) scoring a 4 or above. Two of the nine items, the vehicle’s route (Q2) and interior seat layout (Q4), were found to have significant differences between vision status groups (p < .05). Post-hoc comparisons revealed significant differences between the Moderate (M = 4.05, SD = .94) and Severe (M = 4.38, SD = 0.94) groups for Q2 rankings (p = 0.04) and significant differences between Mild (M = 4.02, SD = .90) and Severe (M = 4.48, SD = .85) groups for Q4 rankings (p < .01). These results may suggest that participants with more severe vision impairment prioritize certain vehicle entry information more strongly – specifically, clarity about the vehicle’s planned route and how seats are arranged – than those with mild or moderate vision impairment. Fig. 3 Diverging stacked bar plots showing the distribution of Likert responses by self-reported vision status for Study 1 Survey statements concerning Entering and Orienting When asked what information participants wanted to know first in the entering phase (Figure 4), the most common response was information about the vehicle’s destination with 46 responses, or 22.8% of participants, closely followed by the location of safety features (N = 43, 21.3%). 9 Fig. 4 Bar plot of Survey 1 responses about the first information participants wanted during the Entering and Orienting phase A Chi-square goodness of fit test was performed (χ2 = 86.35, p < .001), indicating that the information items were not equally distributed across information types. Pairwise post-hoc tests demonstrated that the Vehicle’s Destination and Safety Features were statistically more likely than the five least common responses (Seat Placement, Route Information, Vehicle Cleanliness, Control Interface, and Forward Direction), with all p’s less than .05. This indicates that during the entry phase of the trip, BLV travelers are concerned with verifying the vehicle’s ultimate destination prior to entry and to know how to travel safely. 3.2.3 Exiting and Orienting The results of the Likert scale questions in the exiting and orienting section of the survey (Figure 5) indicate that users strongly desire access to a range of information when exiting the vehicle, with all means above 4.24, with the exception of points of interest in the immediate area (M = 3.86, SD = 1.09). Two of the eight items, where the vehicle is located when exiting (Q7) and the direction passengers are facing when exiting (Q8), were found to have significant differences between vision groups (p = .05). Post-hoc comparisons revealed significant differences between Mild (M = 4.13, SD = 1.04) and Moderate (M = 4.52, SD = .75) groups for Q7 (p = .05) and significant differences between Mild (M = 4.05, SD = .92) and Moderate (M = 4.43, SD = .70) groups for Q8 (p = .03), suggesting that individuals with moderate vision impairment may place greater importance on knowing the vehicle’s exact location and their orientation upon exiting the vehicle compared to those with mild impairment. 10 Fig. 5 Diverging stacked bar plots showing the distribution of Likert responses by self-reported vision status for Study 1 Survey statements concerning Exiting and Orienting When asked what information about exiting they wanted to know first when immediately leaving the vehicle, 79 participants, or 39.1%, chose information about what hazards and obstacles may be outside the vehicle. Other participant responses to this question can be found in Figure 6, with the next most common response being which side of the vehicle they are exiting, indicated by 40 responses, or 19.8% of participants. Fig. 6 Bar plot of Survey 1 responses about the first information participants wanted during the Entering and Orienting phase A Chi-square goodness of fit test was performed (χ2 = 165.6, p < .001), indicating that the information items were not equally distributed across information types. Pairwise post-hoc tests demonstrated that the Hazards/Obstacles item was statistically more likely than the other information types, with all p’s less than .001. Exiting Side was also demonstrated to be more likely than the six items with fewer responses, with all p’s less than .05. Taken together, these results again suggest 11 a clear preference for safety through notification of hazards and obstacles, as well as which side of the vehicle is safe to exit. 4 Study 2: Interviews Identifying Desired Order of Description Information After identifying the types of information that were important in Study 1, we sought to glean additional information on what, when, and how information should be presented to BLV travelers during an autonomous vehicle trip (RQ2). Whereas in Study 1 we queried participants on what information is important, as well as what information they would want first in three stages of AV travel, the goal in this subsequent interview study was to confirm the first piece of information (identified in the survey) and to prioritize each piece of information thereafter to inform our description logic. We engaged (N = 12) BLV participants in an interactive interview that was guided by a pre-interview worksheet that participants completed in advance of the interview (see Section 4.1.2 Procedure). 4.1 Methods 4.1.1 Participants A total of 12 BLV participants aged 31-71 (M = 47.58, SD = 15.55) who identified as legally blind and used a primary mobility aid were recruited across a range of visual impairment and etiology (Table 2). The interviews were conducted one-on-one with a researcher, with the participants being recruited through the lab’s local network in the United States and via personal contacts in the BLV community. Given the nature of recruitment, no interviewees also completed the Study 1 survey. Participants were compensated with a $50 gift card. Table 2 Interview Participant Self-Reported Demographic Information Participant Age Gender Visual Condition Visual Function Mobility / Accessibility Aids 1 71 Male Retinopathy of Prematurity No Vision White cane, Guide dog 2 63 Female Medical Mishap Movement, Light White cane, Guide dog 3 71 Male Hereditary Optic Neuropathy Shapes, Shadows White cane 4 32 Male Retinitis Pigmentosa Light, Colors White cane 5 31 Male Unknown Light White cane 6 32 Female Leber’s Congenital Amaurosis Light White cane 7 34 Female Retinopathy of Prematurity, Glaucoma Light, Colors White cane, Guide dog, Braille 8 56 Male Unknown Light Guide dog 9 44 Non-Binary Retinitis Pigmentosa Light, Shapes White cane, Guide dog 10 32 Male Condition with High Eye Pressure No Vision White cane 11 54 Female Optic Nerve Hypoplasia Light, Colors White cane, Magnification 12 51 Male Leber’s Congenital Amaurosis No Vision White cane 4.1.2 Procedure Participants completed a pre-interview worksheet (Appendix B) prior to the interview and ranked the order that they would desire for information to be presented in each of the three travel segments. The results of the Likert responses from the survey in Study 1 prompted the addition of obstacle and hazard information as an option in the ordering for the navigation phase on the pre-interview worksheet. The worksheet asked participants to think about what information would be most important for them to receive first and then to imagine how the information should flow thereafter. Again, this exercise employed three scenarios that included (1) navigating to an AV, (2) entering and orienting in the vehicle, and (3) exiting the vehicle and orienting to the surrounding environment. For each imagined scenario, participants ranked the order in which they would like to receive the information by placing a “1” next to the information they would most prefer to receive first and increasing integers for the pieces of information they would want to know thereafter. Participants were able to eliminate pieces of information they found unimportant by ranking them with a “0”. 12 After participants completed and sent the pre-interview worksheet back to the team, remote one-on-one interviews were conducted via Zoom, which were audio recorded. The interviews lasted approximately an hour each and followed a general script that was tailored to each participant based on their pre-interview worksheet answers, allowing for elaboration of choices and reasoning. To begin, participants were asked questions about their demographic information and current experiences with technology and transportation as well as questions soliciting their overall thoughts on the rollout of fully autonomous vehicles. Next, participants were asked to explain their thought process as they filled out the worksheet including ordering of information and information they eliminated. To help contextualize and explain results from the Study 1 survey, participants were also provided with the most frequent response to the “first thing I want to know is” questions from the survey for each of the three travel segments and asked to postulate why the response was indicated and under what scenarios it would be most relevant. These questions were intended to help resolve any disagreement between the first item results from the survey and the entire sequence responses collected from the interviews. Given our interest in how information should be presented (RQ2), participants were also asked if information should be presented from their phone, the vehicle, or another device, as well as the presentation modality (i.e., haptics, audio, or combinations of both). To conclude the interviews, participants were asked about their safety concerns related to AVs and what problems with their current transportation experience they think autonomous vehicles could solve. The full interview guide can be found in Appendix C. 4.1.3 Data Analysis Qualitative thematic analysis of the interview transcripts was conducted by two independent researchers using the Taguette qualitative data analysis tool [33]. Going through the interview transcripts in opposite orders to avoid fatigue effects, the researchers identified key ideas (codes) and recurring themes that participants mentioned in their responses. Afterwards, a team of five researchers reviewed all of the codes and their associated quotes to resolve disagreements. This process generated 83 unique codes that were further categorized by the team of researchers into subgroups and thematic groups [4]. The qualitative results, including four emerging themes from the thematic analysis process, are reported in Section 4.2.1. As non-parametric data, the order of information in the navigation, entry, and exiting segments was determined by calculating the mean rank followed by a Friedman’s test. Items that participants eliminated in their worksheet, were scored as last place plus 1. In the event of a tie, the information item with the fewest number of eliminations, meaning more participants chose to include that information, would receive a higher ranking. How this information should be presented (i.e., the presentation modality) was also recorded. The most common response to presentation for each information item was recorded in Tables 3–5 along with the mean order of the information items. The results of the information sequencing are reported below in Section 4.2.2. 4.2 Results 4.2.1 Qualitative Results Qualitative thematic analysis of participant interviews revealed four key themes that highlight the complex relationships that BLV individuals have with their current transportation experiences and the potential for AVs to significantly reshape their independence and access to mobility. First, safety emerged as a critical consideration, encompassing both immediate physical risks and the broader reliability of AVs as an emerging technology. Second, participants detailed the difficulties and challenges they currently face with transportation, including personal, spatial, and technological barriers. Third, participants expressed significant optimism regarding the opportunities presented by AVs, envisioning increased independence, agency, and efficiency in their future mobility. Finally, interviews revealed the adaptations BLV individuals utilize to navigate these challenges, often mentioning the need to balance several navigation apps at once, manage their time, and rely on others, all of which can be frustrating and inefficient. These four themes collectively describe the impact that transportation has on the lives of BLV individuals and underscore the potential of AV technology to transform their 13 transportation experiences. Put simply, P3 offered, “The ability to independently travel when blind is a big deal.” The following subsections are organized around the four themes identified from the interviews and offer additional insights into their role in AV transportation. Safety Participants expressed a range of safety concerns focused on environmental hazards, unexpected events, and the reliability of both current rideshares and future AVs. During the navigation phase, many emphasized that AV interfaces must respond to unpredictable environments. P1 illustrated this by noting, “You're somewhat vulnerable when there's some unusual situation, especially a hole in the ground […] Things like that can be very difficult.” Entering the vehicle raised concerns about identifying the correct car and feeling secure. P3 shared, “It’s very important that the autonomous car makes me comfortable that I’m in the right place and that it’s safe to get in.” Exiting also posed challenges, particularly when vehicles stop in unsafe locations. P8 remarked, “I definitely want to know which side of the vehicle to get out on because I don't want to walk into headlong traffic.” Concerns about AVs’ ability to handle unexpected incidents—like accidents or road obstructions—were frequent. “My safety concerns would be an autonomous car’s ability to deal with the unexpected” said P3. Worries extended to being dropped off in chaotic or unfamiliar places: “I'm worried about a drop off where there's something going on that isn't expected and the autonomous vehicle can't figure it out… and I can't figure it out.” Others, like P2, raised concerns about reaction time to “a fallen tree or a deer in the road.” Ultimately, participants stressed the fundamental importance of safety in transportation, as P10 asserted, “I want to get to my destination, and safety's a big part of that.” Overall, these concerns were important to users across each segment of the AV trip and help contextualize the importance of information identified in the sequencing tasks during the interview. For example, environmental awareness concerns during the navigation and exiting phase highlight the need for hazard and obstacle detection, whereas concerns with safety when entering speak to the need for correct vehicle confirmations and seat detection. Safety generally was connected with difficulties and challenges experienced during transportation, as provided in the following. Difficulties and Challenges Participants described a wide range of challenges in current transportation systems, including spatial confusion, unreliable technology, and negative social experiences. Spatial difficulties—particularly around pick-ups and drop-offs—were a major theme. P1 said, “It can be really difficult to find the right vehicle. In fact, I made a mistake not long ago and got into the wrong Uber.” P12 added, “Sometimes they'll pull up right in front of the business or sometimes […] across the street,” making it hard to identify the correct location. Navigation apps often complicated rather than clarified travel. P10 shared, “So often, when I am using Google Maps […] it's giving me directions as if I can see.” P3 called it “cumbersome” to juggle multiple apps, especially when one hand is already occupied. These limitations often forced participants to rely on incomplete or confusing information. Participants also spoke about emotional stress, embarrassment, and dependency on others. P6 noted, “Sometimes it can be embarrassing to open a door and someone is sitting there,” and P5 added, “Nobody wants to open the door and accidentally sit in somebody’s lap.” These moments contributed to heightened anxiety during entry. Social and systemic challenges were also frequent, especially for those traveling with guide dogs. P2 stated, “They’re not supposed to refuse me, but I get refusals every day.” P8 added, “They may not always recognize the cane or if they see the dog they’ll just keep driving and cancel the ride.” These ongoing frustrations underscored the need for AV systems that can reduce reliance on unpredictable human interactions. Taken together, challenges and difficulties expressed by participants supported information items ultimately included in our description logic (Section 5), particularly Finding the Correct Vehicle and Where other Passengers are Sitting. Opportunities Participants expressed excitement and optimism regarding the potential benefits and opportunities of AVs. They anticipated increased efficiency, availability, reliability, and independence (which were all reoccurring themes) compared to current transportation options. “It will reduce unpredictability, so I can just get where I need to go,” stated P4, and P11 14 shared “I'm looking forward to the opportunity to just get up and go whenever I want to, which [...] would be amazing!” highlighting the desire for spontaneous, unrestricted travel. These responses were closely tied to the prospect of greater independence and personal autonomy, as P1 expressed, “[AVs] will enable me to go places anytime I want to go and do it safely and independently, that's the main thing.” P4 further elaborated on this, saying, “I'm very excited for independence. And when I say independence […] it's really the independence of like, I can just get the car when I need it.” This theme of independence also extended to the hope of personal AV ownership, to which P5 stated, “Well, if I have my own autonomous vehicle, that means I can go work wherever I want, whenever I want, which would be pretty sweet.” P10 also expressed this wish by saying, “My hope is that one day I can buy a[n] [autonomous] vehicle… I've gone on dates as a blind person, and I can't tell you how many times I wish that I could go pick her up.” Ultimately, participants described AVs as a pathway to a more inclusive and empowered future, with P8 concluding, “I'm thinking big picture. It’s going to lead to more gainful employment for people, more ability to take care of your health, more ability to do leisure activity... so overall, a better quality of life.” Adaptations Participants also discussed ways in which they have needed to adapt to overcome their current transportation challenges. To improve spatial awareness P1 explained, “I like to use the apps to know where I am when I'm in a moving vehicle so that I have some awareness of when I might be arriving.” However, this reliance on technology was not without its complications, as discussed previously, by the need to multitask and manage multiple apps simultaneously. Time management emerged as a popular adaptation, with participants consistently building in buffer time to account for potential delays. P1 noted, “I always booked it [a ride] to get me to a place at least a half hour before I was supposed to be there,” illustrating a proactive approach to unpredictable ride timelines. Preemptive planning was also mentioned as a way to navigate unfamiliar environments, as P9 articulated, “…I want some information ahead of time so that I can plan how I'm going to hop out and where I'm gonna go from there. I want to have a battle plan, I'm a planner.” Shared rides presented unique social situations where BLV passengers may rely on others, as P9 acknowledged, “…if there are other passengers there, you don't want to be fumbling around… [so I] try to ask other people on the ride for assistance or what have you, which certainly happens sometimes on public transit.” Overall, these adaptive behaviors highlight the resourcefulness that is often necessary and the important role of information to overcome transportation hurdles faced by BLV travelers. 4.2.2 Sequence Results Navigation The results of navigation information ordering and presentation from the interviews are presented in Table 3. Participants consistently prioritized Finding the Correct Vehicle as the piece of information most desired (and important) to be presented first (mean order = 1.33), favoring an audio presentation with accompanying haptic/vibrational feedback from a device. Subsequent information items, including Description of Vehicle (mean order = 2.67), Avoiding Obstacles/Hazards (mean order = 2.75), and Locating the Correct Door (mean order = 3.58), were ranked after in the sequence and were predominantly preferred in an audio-only format. The most frequent preferred output source for all items was the user’s personal device (i.e., their phone). Table 3 Preferences for sequence of information presentation for the Navigating to Vehicle phase Information Item Mean Order (M ± SD) Modality (N, %) Output Source (N, %) Finding Correct Vehicle 1.33 ± 0.65 Audio + Haptics (7, 58.33) Phone (7, 58.33) Description of Vehicle 2.67 ± 1.37 Audio (9, 90) Phone (8, 80) Avoiding Obstacles / Hazards 2.75 ± 1.29 Audio (7, 70) Phone (9, 90) 15 Locating the Correct Door 3.58 ± 1.08 Audio (4, 44.45) Phone (4, 44.45) The results of the Friedman’s test revealed that the ranking scores were significantly different for the information items, χ2(3) = 15.5, p = .001 with a moderate effect size (W = .43). Post-hoc pairwise tests revealed statistically significant differences in order between information for Finding the Correct Vehicle and each of the other information types (all p’s < .05), with Finding the Correct Vehicle being preferred first. The order provided by interview participants, starting with information to help find the correct vehicle and ending with locating the correct door, somewhat contradicted the survey data (where participants indicated that they desired information on knowing the correct door first). When asked why they thought this might be, participants mentioned that knowing the correct door was situational: it would be more important to know earlier if it was a relatively full vehicle in a shared ride (N = 6 participants) or if they were worried about being embarrassed by going to the wrong door (N = 4 participants). Entering Table 4 presents the participants' sequencing of information related to entering the vehicle and their preferred presentation modalities. Participants generally considered Where Other Passengers are Sitting (mean order = 2.58) as the highest priority, followed by how to Interact with the Vehicle (mean order = 2.83) and Seat Placement (mean order = 3.08). The Vehicle Interface Location (mean order = 4.33), Safety Feature Location (mean order = 4.58), and Vehicle Destination information (mean order = 5.08) were ranked with moderate importance. Lower in importance and thus presented later in the information sequence were Vehicle Route (mean order = 6.00), Vehicle Cleanliness (mean order = 7.50), and Forward Direction information (mean order = 8.17). Across all information items, participants predominantly preferred an audio- only format for presenting the information and selected both information from the vehicle and the phone + vehicle as their preferred output source. Table 4 Preferences for sequence of information presentation for the Entering and Orienting phase Information Item Mean Order (M ± SD) Modality (N, %) Output Source (N, %) Where Other Passengers Sitting 2.58 ± 1.08 Audio (7, 63.63) Phone + Vehicle (6, 54.55) Interact with Vehicle 2.83 ± 1.70 Audio (8, 66.67) Vehicle (6, 50) Seat Placement 3.08 ± 2.43 Audio (9, 81.81) Phone + Vehicle (5, 45.45) Vehicle Interface Location 4.33 ± 2.46 Audio (9, 81.81) Phone + Vehicle (5, 45.45) Safety Feature Location 4.58 ± 2.27 Audio (10, 83.33) Vehicle (6, 50) Vehicle Destination 5.08 ± 2.35 Audio (12, 100) Phone + Vehicle (5, 41.67) Vehicle Route 6.00 ± 2.22 Audio (10, 90.9) Phone (5, 45.45) Vehicle Cleanliness 7.50 ± 3.21 Audio (5, 83.33) Vehicle (3, 50) Forward Direction 8.17 ± 2.62 Audio (3, 50) Vehicle (3, 50) Again, the ranking scores were significantly different for the information items, χ2(8) = 37.5, p < .001 with a moderate effect size (W = .39). Post-hoc pairwise tests revealed statistically significant differences in rank between Where Other Passengers are Sitting and Forward Direction (p = .04). These results suggest a high concern with seat availability and that finding the correct seat is top of mind for BLV users when entering AVs. However, when told that the prior survey results indicated a desire for knowing the vehicle’s destination first, interview participants agreed that this was important, 16 particularly to increase comfort and assuage concerns stemming from prior experiences with getting on the wrong bus or rideshare vehicle. Participants specifically mentioned that knowing where the vehicle was going was important for confirming they were in the correct vehicle (N = 7 participants), knowing what to expect and preparedness (N = 5 participants), and seeking a sense of comfort (N = 4 participants). Exiting Participants' ordering of information during the vehicle exiting phase, along with their preferred presentation of the information, reveal insights into the group’s egress priorities (Table 5). The Safe Side of Vehicle to Exit (mean order = 1.75) emerged as the most important piece of information to receive first. Following closely was Direction + Distance to Destination information (mean order = 2.58), indicating an interest to maintain spatial awareness post-exit. Information regarding Vehicle Location (mean order = 3.25) and knowledge of Hazards/Obstacles Outside the vehicle (mean order = 3.33) were also considered relatively important. Comparatively, information about the Direction and Flow of Traffic (mean order = 6.33), Points of Interest (mean order = 6.50), Direction Facing of the vehicle (mean order = 6.50), and other Passengers Exiting/Entering (mean order = 6.75) were deemed less essential to include. Consistent with the other trip segments, audio-only presentation was favored across all of the information items. As with the navigation phase, the preferred output source was most commonly the user’s phone. Table 5 Preferences for sequence of information presentation for the Exiting and Orienting phase Information Item Mean Order (M ± SD) Modality (N, %) Output Source (N, %) Safe Side of Vehicle to Exit 1.75 ± 0.75 Audio (7, 58.33) Vehicle (6, 50) Direction + Distance to Destination 2.58 ± 1.44 Audio (9, 81.82) Phone (5, 45.45) Vehicle Location 3.25 ± 2.70 Audio (9, 90) Phone (4, 40) Phone + Vehicle (4, 40) Hazards / Obstacles Outside 3.33 ± 1.44 Audio (9, 81.82) Phone + Vehicle (5, 45.45) Direction and Flow of Traffic 6.33 ± 2.35 Audio (4, 57.14) Phone (4, 57.14) Points of Interest 6.50 ± 2.28 Audio (8, 100) Phone (4, 50) Direction Facing 6.50 ± 2.81 Audio (5, 71.43) Phone + Vehicle (4, 57.14) Passengers Exiting / Entering 6.75 ± 2.22 Audio (4, 57.14) Phone (4, 57.14) The Friedman’s test revealed that ranking scores were significantly different for the information items, χ2(7) = 45.4, p < .001 with a large effect size (W = .54). Post-hoc tests demonstrated statistically significant differences in rank between Safe Side of Vehicle to Exit and Direction and Flow of Traffic (p = .001), Points of Interest (p = .002), Direction Facing (p = .02), and Passengers Exiting/Entering (p = .002). Results here were more aligned with the survey than in the previous two phases – participants prioritized information about which side to exit the vehicle in both studies. Whereas hazard identification was selected more frequently to be presented first in the survey, it was still ranked highly in the interviews and not statistically significantly different from the items rank ordered above it (Vehicle Location, Direction + Distance to Destination, and Safe Side of Vehicle to Exit). Indeed, when asked why participants from the survey may have selected hazards/obstacle avoidance first, participants mentioned it was very important, noting a general need for safety and to avoid falling risks (N = 10 participants), knowing what to expect outside the vehicle (N = 6 participants), and how this information could assist navigation to the destination (N = 4 participants). 17 4.2.3 Output Source When considering how the information sequences identified above should be presented, participants were asked if information should come from the vehicle (e.g., a speaker), their personal device (e.g., a smartphone), another device, or a combination. We report these data for each information item individually in Tables 3-5 but offer combined data in the following. For the navigation phase, personal devices were prioritized (68.30% of all responses), followed by a combination (24.39%), and the vehicle (7.32%). During the entry and orientation phase, a combination of personal device and vehicle was most preferred (38.04%), followed by just the vehicle (36.96%) and just the personal device (25.00%). Finally, during the exiting phase, personal devices were again prioritized (38.36%), followed by a combination (35.62%) and just the vehicle (26.03%). Collapsing these responses together, across all three phases, participants prioritized receiving information from their personal device (43.8% of all responses) instead of from a combination (32.55%) or the vehicle alone (23.30%). As explored more thoroughly in our guidelines section, this preference for information from personal devices speaks to participants being comfortable with their personal devices, the privacy they afford, as well as the accessibility settings to which they are already accustomed. 5 Description Logic Our goal in presenting the following description logic (Figure 7) is to provide a structured source of information that can support BLV users who, as our results indicated, (1) are excited to use AVs, (2) experience challenges with the entry, navigation, and exiting portion of the trip, and (3) are accustomed to using information as an adaptation, preferring audio interaction. Participants also highlighted how cumbersome it can be to have to learn multiple apps, each with their own information and interaction style, which is a consistent finding in the BLV AV research [20]. By presenting a unified structure for natural language audio descriptions tied to tasks known to be difficult for BLV travelers, we offer a practical approach for AV designers to adopt a consistent flow of information from one phase of travel to the next. 18 Fig. 7 Culminating description logic based on Study 1 Survey and Study 2 Interview results As noted in Section 3 and Section 4, the Study 1 Survey and Study 2 Interview illustrated notable differences in which items should be presented first to BLV users. Opposed to weighting one of these sets of results more heavily than the other, we opted to include the survey results that were statistically significantly more likely to be presented first (i.e., Vehicle’s Destination and Safety Features in the Entering phase and Hazards/Obstacle Detection in the Exiting phase) as potential first-items in the final description logic. The logic here, as indicated by our interviews, is that much of the information is situational and should be elevated based on the context of travel. For example, in a crowded pick-up location, confirming the Vehicle’s Destination can be imperative during the Entry phase to ensure that someone is not entering the wrong vehicle. Likewise, when there are hazards or obstacles to avoid in the destination environment, users want to know about these Hazards / Obstacles first. By combining the interview and survey data in this way, we offer a structured approach that also sheds light on how descriptions to support future accessible use of AVs need to be adaptive to the specific context of travel. 6 Guidelines and Discussion Findings from the studies presented here highlight the important challenges and opportunities for BLV users during AV travel, as well as the ways in which information can serve as an adaptation during multiple stages of the trip. For instance, results from both Study 1 Survey and Study 2 Interviews suggest that BLV users face spatial challenges like finding the correct vehicle and navigating pick-up and drop-off locations, as well as concerns with safety when entering and exiting. Moreover, we identify the range of information items that are important to communicate to BLV users to overcome these challenges (e.g., providing information to find the correct vehicle and which side of the vehicle to exit) combined with how these sets of information should be ordered in an accessible interface. The description logic developed as a result can be used by AV designers to develop structured sequences on which contextually relevant information is scaffolded. Although applying the description logic is not the focus of the current work, we envision it serving as a set of variables that AV-related apps can populate with information specific to each trip. Our results show that when doing so, most 19 information should be presented from the user’s personal device using audio as the primary information modality. In the following, we explore these findings in conversation with the literature and offer ideas for future work pursuing accessible AVs more generally. 6.1 Prioritizing Safety-critical Information Across the Trip Both the Study 1 survey data and Study 2 interview data indicate the critical role of using information to promote safety for BLV users in the navigating, entry, and exiting phases of AV travel. Participants prioritized safety as a theme during the interviews, as well as in the information order in our description logic, which coincides with existing BLV AV research where safety has been a dominant concern [2, 7, 9]. Results from the Study 1 survey highlight that during the navigation phase, safety concerns emerge due to challenges with avoiding obstacles on the way to the vehicle, as well as finding the correct vehicle. These results echo existing work focused on BLV navigation to AVs that uses computer vision to help users identify objects and localize vehicles and door handles [20]. We add to the existing work by identifying the order of information that should be prioritized, starting first with information to help find the correct vehicle (e.g., distance and direction information), transitioning to obstacle avoidance, and ending with information to find the correct door (e.g., the rear passenger-side door). While participants in the present work did not indicate that finding the correct door and handle was as challenging as indicated by previous work [20], this may have been due to how we combined the tasks of finding the door and finding the handle into a single information item, a potential limitation of our question structure. It is worth noting that as flush door handles become more prevalent, the task of finding the door handle may become increasingly challenging for BLV users. Indeed, designers of future AVs should be cognizant of how design choices can exacerbate current problems for BLV users and, as demonstrated by our interviews, how information can serve as an adaptation to challenges during transportation. During the entry phase of the trip, safety emerged as a critical theme with participants prioritizing knowing about the safety features available in the vehicle. Interview responses suggest that it is important for AVs to confirm to BLV users that it is safe to get in and to confirm that the user is in the right vehicle and orienting to the correct seat. Interestingly, unlike the navigation and exiting phase, there is an apparent opportunity to explore using the vehicle itself as an output source instead of the user’s personal device during the entry phase, as explored more thoroughly in the following subsection. Given these results, designers should ensure that the vehicle’s destination is communicated upon passenger entry and that safety features (e.g., hand rails and emergency exits) are appropriately communicated to BLV riders. Finally, during the exiting phase of the trip, hazard avoidance and communicating which side of the vehicle is safe to exit were prioritized by participants. These data coincide with existing work on exiting AVs by Meinhardt et al. (2025), which emphasized the role of new multisensory interfaces leveraging haptics to communicate both static and dynamic obstacle detection for BLV users [30]. However, our results suggest that users likely prefer using their existing devices primarily via audio, as opposed to new devices leveraging haptics or other multisensory interactions. We explore this finding in the following subsections, offering caveats to our results and suggesting future work. 6.2 Designing for BLV Users’ Personal Devices A key takeaway from the two studies presented here is that that BLV users generally prefer to have the majority of AV information that they receive delivered from their personal smartphone. This result coincides with existing accessibility work with AVs [20], with the principal reason being that users often have different preferences and accessibility needs for how information is presented in the UI. As current smart devices have significant universal design (UD) features built-in to the interface for both input and output operations, BLV users are empowered to customize their device with important personalized features like the speech rate of audio output [17]. These accessibility features are native to the device’s operating system and deeply embedded into the UI, meaning they work across system states, applications, and usage scenarios, which reduces the learning curve and increases user confidence. This level of access is why an estimated 90% of BLV persons use smartphones, with the vast majority preferring iOS based devices given their incorporation of so many UD design principles in the native UI [36]. The takeaway, as one of our participants aptly put it, is that “every car is going 20 to be different, but [my] phone remains the same.” Additionally, having the information coming from the smartphone allows users to “tailor the app or whatever service [they’re] using within the phone to tell [them] the information as [they] want it” (supporting customization). The variety exhibited in our results with respect to which piece of information participants would like to receive first during the navigation phase highlights this need for the user to be able to customize the presentation of the provided information. We argue that AV developers adopting this design decision will not only benefit their userbase, but also their manufacturers, as they simply need to follow well-established accessibility design conventions when creating apps. The alternative (i.e., attempting to build in this level of accessibility into the vehicle control system) will be nontrivial, expensive, and require significant usability testing to avoid conflicts and introduce the potential for broken UI elements upon every update. That said, our data do show that BLV users may prefer some information directly delivered from the vehicle, particularly during the entry phase. How to interact with the vehicle and the location of safety features were items that specifically could be delivered via in-vehicle speakers, whereas others (e.g., seat placement and where other passengers are sitting) could be delivered by both a user’s smartphone and the vehicle itself. This finding supports approaches in the literature using in-cabin audio to support BLV users during the trip [19]. We contend that in these combined instances, there may be an opportunity to explore multisensory information (e.g., haptics on the phone and audio from the vehicle) opposed to audio only, as discussed in the following. 6.3 Emphasizing Audio Based Interaction? Our results show a clear preference for BLV users receiving information from AVs using audio as the primary interaction modality. This finding coincides with existing research by Fink et al. (2023) [18] for BLV users across visual status subsamples (mild visual impairment, moderate, and legally blind). However, as recognized in this previous work, BLV users tend to have more experience with audio-based interaction than other interaction modalities (e.g., haptics), meaning self-reported preferences available in the literature may be impacted by experiential or cognitive bias. Indeed, when BLV participants have been exposed to multisensory UIs leveraging haptic interactions in a transportation context, results have been on par with or even outperformed audio-only approaches [22, 29, 30]. This apparent contradiction between self- reported survey and interview results with experimental device testing suggests that more work is needed to expose BLV users to multisensory spatial applications. We argue that as multisensory UIs proliferate, designers should consider how to integrate audio with other modalities like haptics, particularly when a user’s personal device can be used (as has been done in recent work with vibro-audio interactions on smartphones [22, 32]). 6.4 Limitations Due to the hypothetical nature of this study, the BLV participants’ answers may be skewed toward their current lived experience with human-operated rideshares (e.g., Uber and Lyft). As a result, their answers are based on their current knowledge and experiences with technology and are not necessarily representative of a hypothetical (imagined) perspective in which non-human-operated AV rideshares exist. As this technology slowly migrates from the hypothetical to the practical, further study and analysis would help to better extrapolate the needs of the BLV community regarding independent AV travel. We recommend follow-up behavioral studies to further explore the ideas and framework outlined in this paper, as these studies may add realism to the hypothetical problems of undeveloped technologies and allow for more concrete observations from both participants and researchers alike. The qualitative component of our study relied on interviews with a sample size of 12 participants. While this number is consistent with prior work in accessibility research with blind and low vision populations, we acknowledge that it is a reasonably small sample that limits the generalizability of our findings. We recognize that our analysis is more exploratory and focused on identifying key information needs rather than making broad population-level generalizations. Future research should strive to increase sample sizes to enhance the statistical power and external validity of findings within the accessibility research domain. Our study followed a sequential design, with a survey preceding qualitative interviews. We chose this approach because our goal was to identify a culminating description logic. We opted to hold interviews subsequent to the survey to help 21 contextualize and explain the results via rich qualitative data. Although an alternative approach, such as beginning with interviews, would have also been valid, we believe that the trajectory from a broad survey to more focused interviews was most useful in pursuit of our intended goals and how we used the observed outcomes. Demographic information analysis was limited in due part to the contract with Qualtrics, who was responsible for recruiting respondents to the survey. As such, we did not collect robust demographic information that could have further stratified the participant groups and analyses. This may be of value for future investigation in order to better analyze if people of certain ages, regions, and backgrounds influence the data. For example, related research has shown that people from the United States, Hong Kong, and China may interpret and react to AVs differently as a result of their cultural background [25]. Our focus, lying primarily on what information was important to BLV travelers in the hypothetical scenarios provided and when this information was best to be delivered, did not necessitate the analysis of this demographic information and as such did not take away from the intent and scope of the paper. Furthermore, we recognize that using a third-party service for participant recruitment is an emerging practice in research with specific trade-offs. While it maximized the likelihood of reaching our target population and obtaining data from a larger sample than is practical to recruit for in-lab studies, this technique did prevent us from engaging directly with participants. To ensure data integrity, we implemented screening questions and carefully reviewed all responses to filter out any that appeared inauthentic, incomplete, or suspicious. However, we acknowledge that the use of such a service introduces a risk of unverified data that is not present in direct recruitment and could be considered a drawback of our study (and any survey research using such services). 7 Conclusion This paper explored the information needs of blind and low vision (BLV) travelers throughout the autonomous vehicle (AV) travel experience (i.e., navigating to, entering, and exiting) and examined the optimal sequencing and presentation of information to support travelers in each of these trip segments. The study utilized a remote survey with 202 BLV respondents to assess the key BLV information needs and priorities when using autonomous transportation. The survey was followed by a slate of qualitative remote interviews with 12 BLV participants to understand current transportation challenges, while also predicting future challenges pertaining to AV travel, and identifying information sequences for addressing these current and future challenges. The resulting description logic from the survey and interviews emphasizes the critical importance of context-specific information delivery for AV travel; the information needs of BLV travelers are not uniform across the journey and vary significantly depending on the specific trip segment. Specifically, results highlight that information to find the correct vehicle and about the vehicle itself (e.g., make and model, the correct door to enter) are vital during the navigation phase. Upon entering the AV, the focus shifts to information that builds personal security and safety during the trip, such as knowing where the vehicle is traveling to and where safety features are located. Finally, as BLV travelers prepare to exit the AV, the priority once again centers on hazards, obstacle awareness, and knowing which side of the vehicle to exit to ensure a safe transition from the vehicle to the external environment and their destination. Across each of these stages, users identified the need for audio interaction and prioritized information from their personal device (i.e., their smartphone), while also indicating opportunities to receive information from the vehicle itself (e.g., onboard speakers) during the entry phase. Themes identified within the interviews underscore the current challenges faced by BLV travelers, impacting their independence and safety, but also expressing strong optimism for AVs to transform their mobility. Participants' current adaptations, while resourceful, highlight the limitations of existing transportation systems and provide evidence for the need of more reliable and user-friendly transportation solutions. By elucidating distinct information priorities and sequencing preferences within the context of three individual trip segments, this research contributes a foundational framework for AV technologies/interfaces that afford safer and more efficient wayfinding for BLV travelers. As we look toward the future of autonomous transportation and the opportunities that this technology can provide to users who may benefit the most, defining clear and consistent information standards for accessible use of these vehicles is essential for promoting greater independence and mobility for future BLV travelers. 22 References [1] Ahmetovic, D., Oh, U., Mascetti, S. and Asakawa, C. 2018. Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments. Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (Galway Ireland, Oct. 2018), 333–339. [2] Bennett, R., Vijaygopal, R. and Kottasz, R. 2020. Willingness of people who are blind to accept autonomous vehicles: An empirical investigation. Transportation Research Part F: Traffic Psychology and Behaviour. 69, (Feb. 2020), 13–27. https://doi.org/10.1016/j.trf.2019.12.012. [3] Bourne, R. et al. 2021. Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study. The Lancet Global Health. 9, 2 (Feb. 2021), e130–e143. https://doi.org/10.1016/S2214-109X(20)30425-3. [4] Braun, V. and and Clarke, V. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology. 3, 2 (Jan. 2006), 77–101. https://doi.org/10.1191/1478088706qp063oa. [5] Brewer, R. and Ellison, N. 2020. Supporting People with Vision Impairments in Automated Vehicles: Challenge and Opportunities. University of Michigan, Ann Arbor, Transportation Research Institute. [6] Brewer, R.N., Austin, A.M. and Ellison, N.B. 2019. Stories from the Front Seat: Supporting Accessible Transportation in the Sharing Economy. Proc. ACM Hum.-Comput. Interact. 3, CSCW (Nov. 2019). https://doi.org/10.1145/3359197. [7] Brewer, R.N. and Kameswaran, V. 2018. Understanding the Power of Control in Autonomous Vehicles for People with Vision Impairment. Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (Galway Ireland, Oct. 2018), 185–197. [8] Brewer, R.N. and Kameswaran, V. 2019. Understanding Trust, Transportation, and Accessibility through Ridesharing. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19 (Glasgow, Scotland Uk, 2019), 1–11. [9] Brinkley, J., Huff, E.W., Posadas, B., Woodward, J., Daily, S.B. and Gilbert, J.E. 2020. Exploring the Needs, Preferences, and Concerns of Persons with Visual Impairments Regarding Autonomous Vehicles. ACM Trans. Access. Comput. 13, 1 (Apr. 2020). https://doi.org/10.1145/3372280. [10] Brinkley, J., Posadas, B., Sherman, I., Daily, S.B. and Gilbert, J.E. 2019. An Open Road Evaluation of a Self-Driving Vehicle Human–Machine Interface Designed for Visually Impaired Users. International Journal of Human– Computer Interaction. 35, 11 (Jul. 2019), 1018–1032. https://doi.org/10.1080/10447318.2018.1561787. [11] Brinkley, J., Posadas, B., Woodward, J. and Gilbert, J.E. 2017. Opinions and Preferences of Blind and Low Vision Consumers Regarding Self-Driving Vehicles: Results of Focus Group Discussions. Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore Maryland USA, Oct. 2017), 290–299. [12] Bu, L., Cui, B., Pan, W., Chen, H., Xia, S. and Li, H. 2025. User-centered multimodal interaction design for autonomous vehicles: a focus on cognitive load and accessibility for users with severe visual impairment. Universal Access in the Information Society. (Jun. 2025). https://doi.org/10.1007/s10209-025-01240-4. [13] Claypool, H., Bin-Nun, A. and Gerlach, J. 2017. Self-driving cars: The impact on people with disabilities. Newton, MA: Ruderman Family Foundation. (2017). [14] Detjen, H., Schneegass, S., Geisler, S., Kun, A. and Sundar, V. 2022. An Emergent Design Framework for Accessible and Inclusive Future Mobility. Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (New York, NY, USA, 2022), 1–12. [15] Disability: https://www.who.int/news-room/fact-sheets/detail/disability-and-health. Accessed: 2025-03-03. [16] El-taher, F.E., Taha, A., Courtney, J. and Mckeever, S. 2021. A Systematic Review of Urban Navigation Systems for Visually Impaired People. Sensors (Basel, Switzerland). 21, 9 (Apr. 2021), 3103. https://doi.org/10.3390/s21093103. [17] Fink, P.D.S. 2023. Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People Who Are Blind and Visually Impaired. Doctoral Thesis #3817. The University of Maine. [18] Fink, P.D.S., Alsamsam, M., Brown, J.R., Kindler, H.D. and Giudice, N.A. 2023. Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired. Transportation Research Part F: Traffic Psychology and Behaviour. 98, (Oct. 2023), 91–103. https://doi.org/10.1016/j.trf.2023.09.004. [19] Fink, P.D.S., Dimitrov, V., Yasuda, H., Chen, T.L., Corey, R.R., Giudice, N.A. and Sumner, E.S. 2023. Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2023), 1–13. 23 [20] Fink, P.D.S., Doore, S.A., Lin, X., Maring, M., Zhao, P., Nygaard, A., Beals, G., Corey, R.R., Perry, R.J., Freund, K., Dimitrov, V. and Giudice, N.A. 2023. The Autonomous Vehicle Assistant (AVA): Emerging technology design supporting blind and visually impaired travelers in autonomous transportation. International Journal of Human- Computer Studies. 179, (Nov. 2023), 103125. https://doi.org/10.1016/j.ijhcs.2023.103125. [21] Fink, P.D.S., Holz, J.A. and Giudice, N.A. 2021. Fully Autonomous Vehicles for People with Visual Impairment: Policy, Accessibility, and Future Directions. ACM Trans. Access. Comput. 14, 3 (Aug. 2021). https://doi.org/10.1145/3471934. [22] Fink, P.D.S., Milne, H., Caccese, A., Alsamsam, M., Loranger, J., Colley, M. and Giudice, N.A. 2024. Accessible Maps for the Future of Inclusive Ridesharing. Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Stanford CA USA, Sep. 2024), 106–115. [23] Giudice, N.A. and Long, R.G. 2010. Establishing and maintaining orientation: tools, techniques, and technologies. Foundations of Orientation and Mobility, 4th ed.; APH Press: Louisville, KY, USA. 1, (2010), 45–62. [24] Kacperski, C., Kutzner, F. and Vogel, T. 2024. Comparing autonomous vehicle acceptance of German residents with and without visual impairments. Disability and Rehabilitation: Assistive Technology. 19, 8 (Nov. 2024), 2869– 2879. https://doi.org/10.1080/17483107.2024.2317930. [25] Kim, K.J. and Wang, S. 2025. Regional differences in public perceptions of autonomous vehicles facing moral dilemmas: a comparative study between the United States, Hong Kong, and China. Universal Access in the Information Society. 24, 2 (Jun. 2025), 1369–1377. https://doi.org/10.1007/s10209-024-01145-8. [26] Klatzky, R.L., Marston, J.R., Giudice, N.A., Golledge, R.G. and Loomis, J.M. 2006. Cognitive load of navigating without vision when guided by virtual sound versus spatial language. Journal of Experimental Psychology: Applied. 12, 4 (2006), 223–232. https://doi.org/10.1037/1076-898X.12.4.223. [27] Loomis, J.M., Klatzky, R.L., Philbeck, J.W. and Golledge, R.G. 1998. Assessing auditory distance perception using perceptually directed action. Perception & Psychophysics. 60, 6 (1998), 966–980. https://doi.org/10.3758/BF03211932. [28] Ma, Z., Schroeter, R. and Gomez, R. 2024. Designing HMI for BVI Users in Fully Automated Vehicles: A Participatory and In-the-field Approach. The 26th International ACM SIGACCESS Conference on Computers and Accessibility (St. John’s NL Canada, Oct. 2024), 1–5. [29] Meinhardt, L.-M., Rück, M., Zähnle, J., Elhaidary, M., Colley, M., Rietzler, M. and Rukzio, E. 2024. Hey, What’s Going On? Conveying Traffic Information to People with Visual Impairments in Highly Automated Vehicles: Introducing OnBoard. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8, 2 (May 2024), 67:1-67:24. https://doi.org/10.1145/3659618. [30] Meinhardt, L.-M., Weilke, L.M., Elhaidary, M., von Abel, J., Fink, P.D.S., Rietzler, M., Colley, M. and Rukzio, E. 2025. Light My Way: Developing and Exploring a Multimodal Interface to Assist People With Visual Impairments to Exit Highly Automated Vehicles. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2025). [31] Narayanan, S., Chaniotakis, E. and Antoniou, C. 2020. Shared autonomous vehicle services: A comprehensive review. Transportation Research Part C: Emerging Technologies. 111, (Feb. 2020), 255–293. https://doi.org/10.1016/j.trc.2019.12.008. [32] Palani, H.P., Fink, P.D. and Giudice, N.A. 2021. Comparing Map Learning Between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users. Multimodal Technologies and Interaction. 6, 1 (2021), 1. https://doi.org/10.3390/mti6010001. [33] Rampin, R. and Rampin, V. 2021. Taguette: open-source qualitative data analysis. Journal of Open Source Software. 6, 68 (Dec. 2021), 3522. https://doi.org/10.21105/joss.03522. [34] Ranjbar, P., Krishnakumari, P.K., Andersson, J. and Klingegård, M. 2022. Vibrotactile guidance for trips with autonomous vehicles for persons with blindness, deafblindness, and deafness. Transportation Research Interdisciplinary Perspectives. 15, (Sep. 2022), 100630. https://doi.org/10.1016/j.trip.2022.100630. [35] Son, S., Jeong, Y. and Lee, B. 2019. An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning. Sensors (Basel, Switzerland). 19, 22 (Nov. 2019), 5035. https://doi.org/10.3390/s19225035. [36] Survey on screen reader usage #9: 2019. https://webaim.org/projects/screenreadersurvey9/. Accessed: 2023-05- 02. [37] Wang, P. and Yi, B. 2024. Accessibility of Shared Automated Vehicles for the Visually Impaired Travelers. Social Science Research Network. [38] Microsoft Soundscape. Microsoft Research. 24 APPENDIX Appendix A Study 1 Survey Screening Criteria: (must meet all three) 1. At least 18 years old and have a known uncorrected visual impairment 2. Utilize an accessibility device or mobility aid (e.g., screen reader, magnification, white cane, guide dog, etc…) 3. Does not drive – Utilizes transportation such as rideshares, public transit, or private vehicles Pre-survey Questions: 1. How would you classify where you live or where you commute to most often? a. Urban b. Suburban c. Rural 2. What accessibility aid or aids do you use? [check all that apply] a. Screen readers such as JAWS and NVDA b. Mobile screen readers such as VoiceOver and TalkBack c. Magnification d. White cane e. Guide dog f. Braille display g. Apps such as Be My Eyes and Aira h. Other:_______ 3. How would you classify your vision status? a. Mild vision impairment b. Moderate vision impairment c. Severe vision impairment d. Blindness Instructions: Thank you for agreeing to help with our research! You will be presented with 3 scenarios that relate to finding, entering, and exiting an autonomous vehicle. For these scenarios, the autonomous vehicles are able to safely and legally drive themselves and are not accompanied by a human driver or an attendant. Please read through each scenario description carefully, imagine yourself in the scenario, and answer the accompanying questions honestly. Identifying and navigating to vehicle: Imagine ordering a ride to the grocery store in an urban setting. An autonomous vehicle (a vehicle that drives itself and does not have a human driver) pulls up close to where you are located on a busy street. You know that it's near the sidewalk and you are standing within 5 meters of it. Likert Scale: 1 - 5 (1 = Strongly Disagree, 2 = Somewhat Disagree, 3 = Neither Agree nor Disagree, 4 = Somewhat Agree, 5 = Strongly Agree) 25 1. It can be challenging to locate the correct door and door handle. 2. It can be challenging to find the correct vehicle. 3. I would like to know information about the vehicle, like the make, model, and year, before I get to it. 4. Avoiding obstacles and hazards that are in my way can be challenging. 5. I am comfortable finding my own way to the vehicle. 6. I would like to know about [blank] before I enter the vehicle. [Typed response] 7. I think that [blank] would best help me navigate to the vehicle. a. Honking the vehicle’s horn b. A wayfinding app that uses spatialized audio and natural language descriptions on my smartphone c. A wayfinding app that uses vibrations on my smartphone d. Asking someone nearby e. Other:_____________ 8. Before getting to the vehicle, the first thing I want to know is [blank]. a. The direction of the vehicle b. If there is a bike lane c. Which door I should go to d. Where the car is going e. If there is a steep curb or curb cut f. If there are other people in the car already g. Other:______________ Entering and orienting inside vehicle: It’s likely that fully autonomous vehicles are going to use a rideshare model where there may be multiple people on board and the seating arrangements may not be like traditional cars. For example, these vehicles could feel more like a bus or a train car where the seats sometimes face each other, run along the side of the vehicle, or could even be moved around. Imagine that you ordered one of these autonomous vehicles to go to the grocery store. It has arrived and you have successfully navigated to the correct door and are about to enter it. Likert Scale: 1 - 5 (1 = Strongly Disagree, 2 = Somewhat Disagree, 3 = Neither Agree nor Disagree, 4 = Somewhat Agree, 5 = Strongly Agree) 1. I want to know which direction is forward as I enter the vehicle’s interior. 2. I want to know about the location of the control interface (e.g., radio, climate, mapping, etc.) within the vehicle. 3. I want to know about the cleanliness of the vehicle’s interior. 4. I want to know what seats are available (i.e., unoccupied) within the vehicle. 5. I want to know about seat placement and the layout of the vehicle’s interior. 6. I want to know where safety features (e.g., emergency exits, handrails, etc.) are located within the vehicle. 7. I want to know the route the vehicle is going to take as I enter the vehicle’s interior. 8. I want to know about where the vehicle is going as I enter the vehicle’s interior. 9. I want to know how to interact with the vehicle (e.g., voice, haptics, gestures, etc.) 10. What else would you like to know as you enter or while you are riding in the vehicle? [Typed response] 11. When entering the vehicle, the first thing I want to know is [blank]?. a. Where the vehicle is going b. The route the vehicle is going to take c. Where safety features are located d. The seat placement 26 e. Where other passengers are sitting f. The vehicle’s cleanliness g. Where the control interface is located h. How to interact with the vehicle i. Which direction is forward j. Other:_________ Exiting and orienting to surroundings: The autonomous vehicle has safely driven you to the grocery store. It has pulled up next to the sidewalk about 20 meters from the store's entrance. Likert Scale: 1 - 5 (1 = Strongly Disagree, 2 = Somewhat Disagree, 3 = Neither Agree nor Disagree, 4 = Somewhat Agree, 5 = Strongly Agree) 1. I want to know what direction I’m facing when exiting the vehicle. 2. I want to know where the vehicle is located when I exit the vehicle. 3. I want to know about the direction and flow of traffic when exiting the vehicle. 4. I want to know if there are other passengers trying to exit or enter the vehicle. 5. I want to know what hazards or obstacles may be outside of the vehicle. 6. I want to know what points of interest are in the immediate area. 7. I want to know which side of the vehicle is safest to exit through. 8. I feel comfortable exiting the vehicle on my own without assistance. 9. What else would you like to know before or after you exit the vehicle? [Typed response] 10. When exiting the vehicle, the first thing I want to know is [blank]? a. Which side of the vehicle I’m exiting b. What points of interest are around me c. What hazards or obstacles may be outside the vehicle d. If there are passengers exiting or entering the vehicle e. What the direction and flow of traffic is f. Where the vehicle is located g. What direction I’m facing h. Other:_______ [End of survey] Appendix B Study 2 Pre-Interview Worksheet Summary: The following worksheet is designed to help engineers decide what order accessible information should be presented in future technology that assists with finding and using autonomous (driverless) vehicles. Instructions: When filling out this worksheet, think about what information would be most important for you to access first and then how information should be ordered after that depending on the scenario. Use numbers to rank items in the following scenarios in the order in which you’d would want them to be presented to you. Feel free to eliminate items that are unimportant to you by ranking them with a zero (0). For example, if you would want to know the car’s color first, you would place the number 1 next to [Rank: ] for that item. If you don’t care about the vehicle’s color, mark that item with the number 0 next to [Rank: ]. Navigating to the vehicle 27 Scenario: Imagine ordering a ride to the grocery store in an urban setting. An autonomous vehicle (a vehicle that drives itself and does not have a human driver) pulls up close to where you are located on a busy street. You know that it's near the sidewalk and you are standing within 20 feet of it. When thinking about an interface that can give you information on the way to that autonomous vehicle, the following order of information would make the most sense or be preferable to you: [Instructions: Put the following items in the order that you would like them presented to you. Eliminate items you consider unimportant by ranking them with a 0.] - Information and/or assistance for finding the correct vehicle (e.g., distance and direction information) [Rank: ] - Information and/or assistance for locating the correct door or door handle [Rank: ] - Information and/or assistance for avoiding obstacles and hazards when navigating to the vehicle [Rank: ] - Information and/or assistance about the vehicle itself (e.g., make, model, color, year) [Rank: ] Entering and Orienting Scenario: It’s likely that fully autonomous vehicles are going to use a rideshare model where there may be multiple people on board and the seating arrangements may not be like traditional cars. For example, these vehicles could feel more like a bus or a train car where the seats sometimes face each other, run along the side of the vehicle, or could even be moved around. Imagine that you ordered one of these autonomous vehicles to go to the grocery store. It has arrived and you have successfully navigated to the correct door and are about to enter it. When thinking about an interface that can give you information when entering and orienting within the autonomous vehicle, the following order of information would make the most sense or be preferable to you: [Instructions: Put the following items in the order that you would like them presented to you. Eliminate items you consider unimportant by ranking them with a 0.] - Information about where the vehicle is going [Rank: ] - Information about the route the vehicle is going to take [Rank: ] - Information about where safety features are located in the vehicle [Rank: ] - Information about the seat placement and orientation [Rank: ] - Information about where other passengers are sitting [Rank: ] - Information about the vehicle's cleanliness [Rank: ] - Information about where the vehicle control interface is located [Rank: ] - Information about how to interact with the vehicle [Rank: ] - Information about which direction is forward [Rank: ] Exiting and Orienting to the Surroundings Scenario: The autonomous vehicle has safely driven you to the grocery store. It has pulled up next to the sidewalk about 20 meters from the store's entrance. When thinking about an interface that can give you information as you exit an autonomous vehicle and orient to your surroundings, the following order of information would make the most sense or be preferable to you: [Instructions: Put the following items in the order that you would like them presented to you. Eliminate items you consider unimportant by ranking them with a 0.] - Information about which side of the vehicle is safe to exit [Rank: ] 28 - Information about points of interest that are around you (e.g., direction and distance to your destination) [Rank: ] - Information about hazards or obstacles that may be outside the vehicle [Rank: ] - Information about other passengers exiting or entering the vehicle [Rank: ] - Information about the direction and flow of traffic [Rank: ] - Information about where the vehicle is located [Rank: ] - Information about what direction you are facing [Rank: ] This concludes the worksheet, thanks for your input! Appendix C Interview Guide Hi there, I’m [name]. Thanks so much for participating in this interview being conducted by [anonymized]. The purpose of the interview is to help the design of future transportation that is more accessible and inclusive. To do so, we’ll be asking some questions related to your current experience traveling and what you expect might be helpful in future scenarios with autonomous vehicles. We’ll also be audio recording the interview to help with our analysis later on. You’ve already provided us some information on the worksheet you completed before this interview. Throughout the interview, we’ll ask several times for you to imagine traveling to, entering, and exiting a fully autonomous vehicle. By this we mean a vehicle that can safely, efficiently, and legally drive without a human “at the wheel.” What questions can I answer before we get started? Demographics and Experience To begin, we’ll go over some demographic information and then talk about your experiences with technology and transportation. 1. Can you state your name, age, and gender identity? 2. Can you explain what your current day to day transportation experience is like? 3. What challenges or difficulties do you face during transportation? 4. How does the mode of transportation (e.g., public busses vs. rideshare) impact your experience? 5. Can you describe your vision loss? This could include the extent, any metrics you're aware of like acuity or light perception, as well as the etiology, or cause, and onset if known. a. How do you use your vision, if at all, during your day to day transportation experience? 6. Do you use a mobility aid (cane or dog, magnification device, etc.) 7. Do you use navigation assistance technologies or apps? For example, Blind Square, SeeingAI, AIRA, Good Maps, etc.? 8. So here we’re going to ask you to think about fully autonomous vehicles, like we described before, assuming they can drive safely, efficiently, and legally without a human driver. You can think of it like an Uber or Lyft but without a driver at the wheel. What are your overall thoughts on the rollout of these vehicles? a. What are you excited about? b. What are you worried about? Worksheet We sent home a worksheet for you to fill out and we have a few questions about your responses. Navigation The first scenario had you imagine navigating to an autonomous vehicle without a human driver available to provide assistance. 29 1. Can you briefly talk me through your thought process as you filled out the worksheet? How did you decide on the order? Great, thanks. Now we’re going to read each item you included, in the order that you included them. For each item, we have a few questions prepared. This might feel repetitive but your input is really important for designing future applications. In the interest of time, I’ll ask that your responses are brief, just a few words. [For each item included on worksheet] The first piece of information you included was: 1. When or how far from the vehicle would you want this information? 2. Would you want that information in every situation or only in certain situations (for example at night)? What situations? 3. How do you imagine that information being best presented, for example through audio, haptics (i.e., touch/vibration), combinations of both? 4. Can you provide an example of what it might sound or feel like? 5. Where would you like the information to come from? Your phone? The vehicle? Another device? You decided not to include: Can you explain this decision? In a survey of people who are blind and low vision we conducted prior to the interviews, the majority of respondents said knowing which door of the vehicle to go to was a priority to know first. Why do you think this was indicated, and under what scenarios do you think it’s most relevant? Entry/Orientation The second scenario on the worksheet had you imagine entering and orienting within an autonomous vehicle without a human driver available to provide assistance. 1. Can you briefly talk me through your thought process as you filled out the worksheet? How did you decide on the order? Great, thanks. Now, just like before, we’re going to read each item you included, in the order that you included them. For each item, we have a few questions prepared. In the interest of time, I’ll ask that your responses are brief, just a few words [For each item included on worksheet] The first piece of information you included was: 1. When would you want this information? For example, as you’re entering, before entering, or once you’re inside? 2. Would you want that information in every situation or only in certain situations (for example at night)? What situations? 3. How do you imagine that information being presented the best, for example through audio, haptics, combinations of both? 4. Can you provide an example of what it might sound or feel like? 5. Where would you like the information to come from? Your phone? The vehicle? Another device? 30 You decided not to include: Can you explain this decision? In that survey we mentioned, the majority of respondents said knowing where the vehicle is going was a priority to know first. Why do you think this was indicated, and under what scenarios do you think it’s most relevant? Exiting/Orientation The third scenario on the worksheet had you imagine exiting an autonomous vehicle and orienting to your surroundings without a human driver available to provide assistance. 1. Can you talk me through your thought process as you filled out the worksheet? How did you decide on the order? Great, thanks. Now, again, we’re going to read each item you included, in the order that you included them. For each item, we have a few questions prepared. In the interest of time, I’ll ask that your responses are brief, just a few words [For each item included on worksheet] The first piece of information you included was: 1. When would you want this information? For example, as you’re exiting, before you exit, or once you’re outside? 2. Would you want that information in every situation or only in certain situations (for example at night)? What situations? 3. How do you imagine that information being presented the best, for example through audio, haptics, combinations of both? 4. Can you provide an example of what it might sound or feel like? 5. Where would you like the information to come from? Your phone? The vehicle? Another device? You decided not to include: Can you explain this decision? In that survey we mentioned, the majority of respondents said knowing what hazards or obstacles are outside of the vehicle was a priority to know first. Why do you think this was indicated, and under what scenarios do you think it’s most relevant? Conclusion So to wrap up, we have just a few general questions. In that survey I mentioned, respondents brought up safety as a key concern. What safety concerns do you have? - Do you think it being a new technology is a contributing factor? In the example sentences that you gave, we’re particularly interested in what we call the reference frame of directions. Do you prefer clockface positions, left-right, near-side vs far-side? Finally, what problems do you think do you think autonomous vehicles will solve with your current transportation experience? Thank you so much for participating!
Dude, Where's My (Autonomous) Car? Defining an Accessible Description Logic for Blind and Low Vision Travelers Using Autonomous Vehicles Paul D. S. Fink1,2; 0000-0003-2915-1331 Justin R. Brown1; 0000-0002-3359-8411 Rachel Coombs1; 0009-0004-8593-559X Emily A. Hamby1; 0009-0005-0318-1446 Kyle J. James1; 0009-0003-3205-824X Aisha Harris1; 0009-0008-0916-0912 Jacob Bond3; 0000-0003-2025-5230 Morgan E. Andrulis3; 0009-0009-8827-4885 Nicholas A. Giudice1,4*; 0000-0002-7640-0428 1VEMI Lab, The 04469, USA. 2 49401, USA. 3Connected Vehicle Experience Research Lab, General Motors Global Research & Development, Warren, Michigan 48092, USA. 4 04469, USA. *Corresponding Author: Purpose: Autonomous vehicles (AVs) are becoming a promising transportation solution for blind and low-vision (BLV) travelers, offering the potential for greater independent mobility. This paper explores the information needs of BLV users across multiple steps of the transportation journey, including finding and navigating to, entering, and exiting vehicles independently. Methods: A survey with 202 BLV respondents and interviews with 12 BLV individuals revealed the perspectives of BLV end-users and informed the sequencing of natural language information required for successful travel. Whereas the survey identified key information needs across the three trip segments, the interviews helped prioritize how that information should be presented in a sequence of accessible descriptions to travelers. Results: Taken together, the survey and interviews reveal that BLV users prioritize knowing the vehicle's make and model and how to find the correct vehicle during the navigation phase. They also emphasize the importance of confirmations about the vehicle's destination and onboard safety features upon entering the vehicle. While exiting, BLV users value information about hazards and obstacles, as well as knowing which side of the vehicle to exit. Furthermore, results highlight that BLV travelers desire using their own smartphone devices when receiving information from AVs and prefer audiobased interaction. Conclusion: The findings from this research contribute a structured framework for delivering trip-related information to BLV users, useful for designers incorporating natural language descriptions tailored to each travel segment. This work offers important contributions for sequencing transportation-related descriptions throughout the AV journey, ultimately enhancing the mobility and independence of BLV individuals. Keywords: Autonomous Vehicles, Blind and Low Vision Users, Natural Language Descriptions, Accessibility Statements and Declarations 2 Funding: This work was supported by General Motors Global Research & Development. Competing Interests: The authors have no competing interests to declare that are relevant to the content of this article. Human Subjects Approval: This research was approved by The #2023-08-09. Informed consent was obtained from all participants included in this research. 1 Introduction Transportation is a challenging undertaking for the 1.3 billion people experiencing significant disabilities worldwide [15], including the approximately 295 million people who report moderate to severe visual impairment [3]. When considering 'transportation', a common misconception is that the travel process begins and ends in the vehicle. In reality, the complete journey is characterized by several mutually dependent travel segments that start with planning the trip. Among the remaining segments are navigating to a ride, entering, exiting, and safely navigating to the destination. Autonomous vehicles (AVs) hold the potential to revolutionize accessible mobility for blind and low-vision (BLV) travelers by providing door-to-door transportation without requiring friends, family members, or sighted guides for assistance. However, this promise of independent mobility depends on future interfaces conveying meaningful and useful information across each segment of the complete trip. While the body of work investigating accessible AVs for BLV users has prioritized in-vehicle interaction, much less is known about information needs in the equally important (but often ignored) segments that involve localizing a vehicle, entering it safely, and exiting and orienting to the destination [20, 30]. To address these problems, we sought to identify what we refer to as an accessible description logic, providing inclusively designed information for BLV travelers in a user-defined sequence of natural language descriptions across otherwise disconnected travel segments. Our description logic aims to be a progression of information categories that designers and developers can use to insert natural language audio cues that are adapted to the specific environment. Just as a developer might use a sequence of variables to 'slot in' specific values in that sequence, we envision the utility of this logic to be identifying the order and priority of information that can be adapted to each AV trip. We specifically study aspects of AV-enabled transportation known to be difficult for BLV people, starting first with safely navigating to and localizing a ride once it arrives. As AVs adopting rideshare models may arrive to unanticipated locations in dynamic environments [17, 22], BLV travelers will likely be required to utilize outdoor navigation skills to find and navigate to AVs, often in tandem with applications designed to support wayfinding [22], in addition to their preferred mobility tool, e.g., the long cane or guide dog. Unlike typical outdoor navigation activities, however, localizing AVs involves a host of context-specific tasks for BLV travelers, including identifying and confirming the correct vehicle, avoiding obstacles within the pick-up area (e.g., construction hazards), and finding the correct door handle [20]. While the tasks for navigating to a vehicle are well-defined in the literature, how to prioritize and order information in an accessible interface remains an open question. This is also the case for entering and orienting within the vehicle, which often includes determining how to enter safely and understanding seat locations and availability. Finally, once the AV arrives at its destination, BLV users must exit the vehicle safely and orient themselves to another environment [30]. Rather than studying each of these segments in isolation, as is common in the existing literature, we seek to provide users and designers with a comprehensive and consistent logic for completing the trip. Our goal in this paper is to identify a structured sequence of natural language descriptions for each of these travel segments, ensuring that user needs are reflected throughout by directly engaging BLV users in two studies. The first, a confidential, anonymous survey with (N = 202) BLV respondents, was motivated by the need to identify essential information to be conveyed during the trip. It sought to answer our first research question: RQ1: What information is important for BLV travelers to receive across the complete trip with AVs? The second stage of this project consisted of interviews with (N = 12) BLV individuals, which aimed to order the initial set of information preferences defined by the survey in a logical sequence for each trip segment. This goal was addressed by our second research question: RQ2: How should information be ordered and presented to BLV travelers across travel segments with AVs? 3 The interviews provided feedback not only on the order in which participants preferred to receive information, but also the accessibility aids and technologies they found most useful, and enabled a clearer understanding of their individual transportation experiences. By integrating survey results and interview insights, we developed a final description logic and accompanying guidelines that offer best practices for designing accessible transportation apps and related technologies. The key contributions of this work advance accessible computing in the transportation context by providing a structured approach to delivering critical trip-related information for BLV users, something that to our knowledge has not been previously studied. Importantly, this project addresses a critical user experience gap (e.g., BLV riders) identified by our OEM partner, while also providing a strong foundation for future studies that empirically test the logic defined here using behavioral methods. We believe that our findings help pave the way for ongoing work by our group and others addressing the question at the end of the paper (Section 6.3) regarding the efficacy of using audio-only vs. multisensory information presentation in user interfaces (UIs) supporting AV usage as part of a complete trip solution. We highlight the apparent experiential gap indicated by BLV users who self-report preferences for audio-only information when, in related work, evidence suggests performance is on-par or better with multisensory information. By identifying audio-only information in the AV context across multiple trip segments, the current paper contributes an important testbed for evaluating this potential gap between what people say they prefer (which we hypothesize is based on their existing experiences with audioonly interfaces) and how they perform with new multisensory user interfaces. The remainder of this paper is organized as follows: In Section 2, we review the current work in AV technology and its impact on independent transportation for BLV individuals, including research on complete trip accessibility. Section 3 details the methodology and results of Study 1, which identifies critical trip information. Section 4 presents the methods and results of Study 2, which refines how that information should be structured and delivered as well as reports the findings of thematic analysis on the interviews as a whole. In Section 5, we present our culminating description logic and in Section 6 we discuss our findings in relation to existing accessibility literature and AV design. Finally, Section 7 concludes with the key takeaways from this work. 2 Related Work AV technology is rapidly advancing; however, the small but growing body of literature examining accessible AVs for BLV users indicates that more work is needed to meet the needs of travelers who often rely on human assistance throughout the trip [30]. One useful strategy in pursuit of this goal is involving end-users in the design of AV systems. Indeed, multiple researchers have emphasized the importance of the disability community being involved in the development of technologies that are needed to make AVs readily accessible to consumers. For instance, in their focus group study, Brinkley et al. (2017) found that BLV users were concerned about not being adequately considered in self-driving development [11], industry white papers have elucidated the important role of advocacy among people with disabilities to be considered in AV development [13], and user experience work has emphasized inclusive, participatory design of AV system features to support BLV individuals [20]. While this direct involvement is important, Ma (2024) correctly points out that prior work is predominantly hypothetical, as AVs truly designed for BLV users do not yet exist [28]. Recognizing this limitation, the following subsections review the available body of work exploring BLV AV accessibility, as well as the importance of natural language descriptions in supporting the complete trip, which we argue is critical to consider as travel is not limited to discrete trip segments or activities. 2.1 Complete Trip Accessibility for BLV Travelers The emergence of AVs presents both opportunities and challenges for BLV travelers. While AVs hold the potential to enhance mobility, independence, and workforce participation [21], research consistently highlights the need for concerted inclusive design efforts to address accessibility concerns. For instance, in their survey-based study, Bennett et al., 2020 noted that BLV users are skeptical that AVs will be designed for them [2], which was a concern echoed by participants in a dual survey and focus group study by Brinkley and colleagues [9]. Even understanding these concerns, the previous work emphasized optimism about AVs among the BLV community, which has been demonstrated to be more prevalent among 4 BLV users than their sighted counterparts [24]. One key issue is ensuring that BLV persons are effectively supported throughout the entire travel experience, from planning a trip to reaching their destination. Studies have shown that BLV travelers rely on various cues to understand the travel environment across the trip, many of which are currently provided by human rideshare drivers [6, 8]. For instance, Brewer and Ellison (2020) found that BLV rideshare users often depend on drivers for assistance in locating the vehicle, exiting safely at accessible points, and receiving verbal descriptions of the environment [5]. The predicted transition to AV rideshare services raises concerns about whether these supportive interactions currently provided by humans can be effectively replaced by technology when human drivers are no longer in the loop [7, 18]. While past rideshare experiences based on human-driven vehicles can highlight current accessibility gaps, they do not directly translate to a one-to-one set of expectations or solutions for AVs, when the human agent is no longer part of the information-access equation. Future AV use will likely involve new interaction paradigms, different constraints, and evolving user priorities, necessitating distinct design strategies. Other research has addressed the lack of support provided to BLV users from modern navigation applications and devices that do not meet users' requirements, such as different types of intersections all being labeled as 'intersection' and implementations that do not accurately address the precise location and orientation of crosswalks [16, 37]. Non-visual interfaces have been explored to bridge this gap, primarily focusing on in-vehicle experiences providing information on vehicle location via in-air haptics and/or spatialized audio [19], as well as on-the-go updates about the ride via audio cues [10]. Recent work has also explored the ideal combination of multisensory cues for BLV users during the in-vehicle portion of the route, identifying tactile and spatialized audio as the optimal combination [12]. However, less attention has been paid to other phases of the journey like pre-trip planning and vehicle localization [17, 34]. While some work has focused on designing inclusive interfaces for localizing AVs [20] and exiting AVs [30], no research to date has explored a unified and accessible framework across multiple stages of the trip for BLV travelers. Moreover, no work has sought to identify how to appropriately structure information sequentially for users as they navigate these multiple trip segments. Addressing both of these issues represent key contributions of the current work. A promising approach is to leverage users' interest in utilizing speech and audio interaction with AVs [11, 18], as explored in the following. 2.2 Natural Language Sequences for Spatial Tasks Turn-by-turn (TBT) direction systems have become the de facto standard for audio-based navigation, but evidence suggests that TBT natural language systems can be problematic for BLV users, as the information provided is often inadequate for developing an accurate mental representation necessary for safe navigation [23]. This is primarily because of the information (or lack thereof) provided in standard TBT direction systems, leading to significant errors [1]. For BLV users to safely and accurately travel to and from AVs, information beyond the route is necessary as well, including indication of relevant landmarks, obstructions, cues about sidewalk or street parameters, and car orientation [20]. An alternative to traditional TBT direction systems for BLV travelers are systems that include spatialized information, i.e., where the distance and direction of a street, landmark, or point of interest is audibly received from its location in space [38]. While adding spatialized information to TBT instructions can improve route guidance accuracy and reduce cognitive load [26, 27], it does not eliminate the limitation that these instructions are only predominately conveying route guidance information. Therefore, to address this limitation and offer a more comprehensive navigation experience, this study explores the potential of enhancing NL navigation instructions as an alternative to traditional TBT directions. Moreover, we aim to understand how to best sequence this information to aid future navigation systems and the associated algorithms that provide users with sequential real-time information [35]. 3 Study 1: Survey of BLV Information Needs Across the Complete Trip To determine the information needs for BLV travelers across the vehicle localization, entry, and egress segments of the trip (RQ1), a survey was developed by the authors of this paper and distributed remotely to (N = 202) BLV participants by Qualtrics (https://www.qualtrics.com/). The decision to use a third-party survey service for participant recruitment was intended to maximize respondents who met demographic criteria of interest (see 3.1.1 Participants below). Prior to 5 providing responses, participants were asked to imagine three scenarios that involved finding, entering, and exiting an AV in an urban area. To increase the realism of the imagined scenarios, participants were asked to imagine that the AV would be taking them to a grocery store. The AV was noted as being able to safely and legally provide transport without being accompanied by a human driver or attendant. To reflect the possibility of widespread mobility-as-a-service prediction models for future AV deployment [14, 31], participants were also told that there might be other passengers on board and that the seats might be in nontraditional arrangements (like a bus or a shuttle). Due to the limited real-world availability of AV transportation, the use of these 'imaginative' scenarios were employed that aimed to unify participant understanding and encourage responses as if they were traveling in these environments. 3.1 Methods The survey consisted of 32 questions (Appendix A) divided between three parts of the trip: navigating to the vehicle, entering and orienting within it, and exiting and orienting to the surrounding environment. Survey respondents were instructed to imagine that they were going to use an autonomous rideshare vehicle in three scenarios related to each part of the trip. Individual pieces of information that were evaluated within these three phases were generated based on the prior literature on entering and exiting AVs [20, 30], as well as from personal insight and lived experience of one of the authors on this paper who is congenitally blind and a frequent traveler. For example, in the navigation phase, information included finding the correct vehicle and door handle, as well as avoiding obstacles. In the entering phase, eight questions evaluated a range of information items (e.g., seat placement, cleanliness details, information on other passengers). The exiting phase also included eight questions evaluating a range of information items (e.g., obstacle avoidance, other passenger movement, and points of interest). These information items were first rated on 5-point Likert scales to evaluate their importance (1, Strongly Disagree to 5, Strongly Agree). To gain an initial understanding of how to order information in our description logic (RQ2), participants were then asked to identify which piece of information they would like to receive first during that phase. The survey was tested internally for accessibility and confirmed to work well with screen readers prior to deployment. The research was approved by The . 3.1.1 Participants Selection criteria ensured participants were at least 18 years of age with a known and uncorrected visual impairment, utilized an accessibility device or mobility aid (e.g., screen reader, magnification, white cane, guide dog, etc.), and were non-drivers who utilize transportation, such as rideshares, public transit, or private vehicles. Demographic information such as name, age, gender, and email addresses were not able to be recorded by Qualtrics; however, geographic level of urbanization, the classification of vision status, and what accessibility aids participants utilize were all documented. The majority of participants lived in suburban areas (N = 81), classified their vision status as "Moderate Vision Impairment" (N = 93), and reported using magnification (N = 133). A complete summary of participant demographics from the presurvey questions are detailed in Table 1. Participants were compensated by Qualtrics in accordance with their compensation protocols. Table 1 Interview Participant Self-Reported Demographic Information Geographic Area Vision Status Accessibility Aids (multi-select) Category N (%) Category N (%) Category N (%) Suburban 81 (40.1%) Moderate vision impairment 93 (46%) Magnification 133 (65.8%) Urban 72 (35.6%) Mild vision impairment 61 (30.2%) Mobile screen readers 93 (46%) Rural 49 (24.3%) Severe vision impairment (includes blindness) 48 (23.8%) PC screen readers 49 (24.3%) White cane 36 (17.8%) Guide dog 32 (15.8%) CV apps 30 (14.9%) Braille display 13 (6.4%) Other* 11 (5.4%) 6 3.1.2 Data Analysis The Likert scale questions (1, Strongly Disagree to 5, Strongly Agree) rating the importance of each item in the survey were first analyzed using basic descriptive statistics; mean (M) and standard deviation (SD). As our goal was to identify whether each item was important, rather than to compare the importance between the individual items, inferential statistics were not employed for the results of these questions. However, subsequent inferential analyses of the distribution of rankings by vision impairment status (Mild, Moderate, and Severe) were conducted to reveal potential differences or similarities in item rankings among these groups. As the rankings are non-parametric data and three visual groups are being compared, a Kruskal-Wallis test was used to determine the impact of vision status for each item ranking. Dunn's post-hoc comparisons with Bonferroni correction were conducted on items where the Kruskal-Wallis test was found to be significant. The results and Likert distributions, separated by survey section, are reported below in sections 3.2.1, 3.2.2, and 3.2.3. 3.2 Results 3.2.1 Navigation Based on the self-reported Likert data, the majority of participants wanted information about the make, model, and year of the vehicle (M = 4.15, SD = 1.05). Participants also indicated that they found it challenging while navigating to the vehicle to avoid hazards (M = 3.53, SD = 1.20) and to find the correct vehicle (M = 3.51, SD = 1.23). Participants did not respond as strongly for or against it being challenging for them to locate the correct door and handle when navigating to the vehicle (M = 3.14, SD = 1.32). Figure 1 displays the total distribution of rankings and the distributions by reported vision status. Three of the items (Q1, Q2, Q3) were found to have significant differences in rankings between vision status groups (p < .001). Post-hoc comparisons revealed significant differences in rankings (all p's < .01) between the Mild and Severe groups as well as the Moderate and Severe groups across the three items. These results suggest that individuals with severe vision impairment may face greater challenges in locating the correct vehicle, identifying the correct door, and avoiding hazards on the way to the vehicle compared to those with mild or moderate vision impairment. 7 Fig. 1 Diverging stacked bar plots showing the distribution of Likert responses by self-reported vision status for Study 1 Survey statements concerning Navigating to the Vehicle When considering the piece of information people want to know first (Figure 2), several information items received roughly 40 responses (~20% of participants). These included knowing which door to enter, the direction of the vehicle, position of other passengers, information about the curb, and information about the destination. Fig. 2 Bar plot of Survey 1 responses about the first information participants wanted during the Navigating to the Vehicle phase To determine if there were statistically significant differences between the items listed first, a Chi-square goodness of fit test was performed (χ2 = 63.72, p < .001), indicating that participant preferences were not equally distributed across the range of information types (i.e., the Response categories in Figure 2). That is, results suggest that there was an overall difference between the information to be delivered first during the navigation phase. This was likely due to the few responses in the Bike Lane and Other Information categories. Indeed, pairwise post-hoc tests demonstrated that Other Information and Bike Lane were chosen statistically significantly fewer times than the other information categories (all p's < .001). This was surprising since the Other Information category was included to capture responses like the make, model, and year of the vehicle, which participants indicated they wanted strongly in the earlier Likert questions (Q4 from Figure 1). Furthermore, the post-hoc tests demonstrated no statistically significant differences between knowing which door to enter, the direction of the vehicle, position of other passengers, information about curbs, and information about the 8 destination. Taken together, these data indicate that a range of information is important to BLV users when navigating to AVs and that users are divided on which information should be presented first (which we analyze further in the Study 2 interviews, Section 4). 3.2.2 Entering and Orienting Across all Likert scale questions about entering the vehicle (Figure 3), participants responded that they want to know each piece of information queried in the survey, with all means except the location of the control interface (M = 3.93, SD = .10) scoring a 4 or above. Two of the nine items, the vehicle's route (Q2) and interior seat layout (Q4), were found to have significant differences between vision status groups (p < .05). Post-hoc comparisons revealed significant differences between the Moderate (M = 4.05, SD = .94) and Severe (M = 4.38, SD = 0.94) groups for Q2 rankings (p = 0.04) and significant differences between Mild (M = 4.02, SD = .90) and Severe (M = 4.48, SD = .85) groups for Q4 rankings (p < .01). These results may suggest that participants with more severe vision impairment prioritize certain vehicle entry information more strongly - specifically, clarity about the vehicle's planned route and how seats are arranged - than those with mild or moderate vision impairment. Fig. 3 Diverging stacked bar plots showing the distribution of Likert responses by self-reported vision status for Study 1 Survey statements concerning Entering and Orienting When asked what information participants wanted to know first in the entering phase (Figure 4), the most common response was information about the vehicle's destination with 46 responses, or 22.8% of participants, closely followed by the location of safety features (N = 43, 21.3%). 9 Fig. 4 Bar plot of Survey 1 responses about the first information participants wanted during the Entering and Orienting phase A Chi-square goodness of fit test was performed (χ2 = 86.35, p < .001), indicating that the information items were not equally distributed across information types. Pairwise post-hoc tests demonstrated that the Vehicle's Destination and Safety Features were statistically more likely than the five least common responses (Seat Placement, Route Information, Vehicle Cleanliness, Control Interface, and Forward Direction), with all p's less than .05. This indicates that during the entry phase of the trip, BLV travelers are concerned with verifying the vehicle's ultimate destination prior to entry and to know how to travel safely. 3.2.3 Exiting and Orienting The results of the Likert scale questions in the exiting and orienting section of the survey (Figure 5) indicate that users strongly desire access to a range of information when exiting the vehicle, with all means above 4.24, with the exception of points of interest in the immediate area (M = 3.86, SD = 1.09). Two of the eight items, where the vehicle is located when exiting (Q7) and the direction passengers are facing when exiting (Q8), were found to have significant differences between vision groups (p = .05). Post-hoc comparisons revealed significant differences between Mild (M = 4.13, SD = 1.04) and Moderate (M = 4.52, SD = .75) groups for Q7 (p = .05) and significant differences between Mild (M = 4.05, SD = .92) and Moderate (M = 4.43, SD = .70) groups for Q8 (p = .03), suggesting that individuals with moderate vision impairment may place greater importance on knowing the vehicle's exact location and their orientation upon exiting the vehicle compared to those with mild impairment. 10 Fig. 5 Diverging stacked bar plots showing the distribution of Likert responses by self-reported vision status for Study 1 Survey statements concerning Exiting and Orienting When asked what information about exiting they wanted to know first when immediately leaving the vehicle, 79 participants, or 39.1%, chose information about what hazards and obstacles may be outside the vehicle. Other participant responses to this question can be found in Figure 6, with the next most common response being which side of the vehicle they are exiting, indicated by 40 responses, or 19.8% of participants. Fig. 6 Bar plot of Survey 1 responses about the first information participants wanted during the Entering and Orienting phase A Chi-square goodness of fit test was performed (χ2 = 165.6, p < .001), indicating that the information items were not equally distributed across information types. Pairwise post-hoc tests demonstrated that the Hazards/Obstacles item was statistically more likely than the other information types, with all p's less than .001. Exiting Side was also demonstrated to be more likely than the six items with fewer responses, with all p's less than .05. Taken together, these results again suggest 11 a clear preference for safety through notification of hazards and obstacles, as well as which side of the vehicle is safe to exit. 4 Study 2: Interviews Identifying Desired Order of Description Information After identifying the types of information that were important in Study 1, we sought to glean additional information on what, when, and how information should be presented to BLV travelers during an autonomous vehicle trip (RQ2). Whereas in Study 1 we queried participants on what information is important, as well as what information they would want first in three stages of AV travel, the goal in this subsequent interview study was to confirm the first piece of information (identified in the survey) and to prioritize each piece of information thereafter to inform our description logic. We engaged (N = 12) BLV participants in an interactive interview that was guided by a pre-interview worksheet that participants completed in advance of the interview (see Section 4.1.2 Procedure). 4.1 Methods 4.1.1 Participants A total of 12 BLV participants aged 31-71 (M = 47.58, SD = 15.55) who identified as legally blind and used a primary mobility aid were recruited across a range of visual impairment and etiology (Table 2). The interviews were conducted one-on-one with a researcher, with the participants being recruited through the lab's local network in the United States and via personal contacts in the BLV community. Given the nature of recruitment, no interviewees also completed the Study 1 survey. Participants were compensated with a $50 gift card. Table 2 Interview Participant Self-Reported Demographic Information Participant Age Gender Visual Condition Visual Function Mobility / Accessibility Aids 1 71 Male Retinopathy of Prematurity No Vision White cane, Guide dog 2 63 Female Medical Mishap Movement, Light White cane, Guide dog 3 71 Male Hereditary Optic Neuropathy Shapes, Shadows White cane 4 32 Male Retinitis Pigmentosa Light, Colors White cane 5 31 Male Unknown Light White cane 6 32 Female Leber's Congenital Amaurosis Light White cane 7 34 Female Retinopathy of Prematurity, Glaucoma Light, Colors White cane, Guide dog, Braille 8 56 Male Unknown Light Guide dog 9 44 Non-Binary Retinitis Pigmentosa Light, Shapes White cane, Guide dog 10 32 Male Condition with High Eye Pressure No Vision White cane 11 54 Female Optic Nerve Hypoplasia Light, Colors White cane, Magnification 12 51 Male Leber's Congenital Amaurosis No Vision White cane 4.1.2 Procedure Participants completed a pre-interview worksheet (Appendix B) prior to the interview and ranked the order that they would desire for information to be presented in each of the three travel segments. The results of the Likert responses from the survey in Study 1 prompted the addition of obstacle and hazard information as an option in the ordering for the navigation phase on the pre-interview worksheet. The worksheet asked participants to think about what information would be most important for them to receive first and then to imagine how the information should flow thereafter. Again, this exercise employed three scenarios that included (1) navigating to an AV, (2) entering and orienting in the vehicle, and (3) exiting the vehicle and orienting to the surrounding environment. For each imagined scenario, participants ranked the order in which they would like to receive the information by placing a "1" next to the information they would most prefer to receive first and increasing integers for the pieces of information they would want to know thereafter. Participants were able to eliminate pieces of information they found unimportant by ranking them with a "0". 12 After participants completed and sent the pre-interview worksheet back to the team, remote one-on-one interviews were conducted via Zoom, which were audio recorded. The interviews lasted approximately an hour each and followed a general script that was tailored to each participant based on their pre-interview worksheet answers, allowing for elaboration of choices and reasoning. To begin, participants were asked questions about their demographic information and current experiences with technology and transportation as well as questions soliciting their overall thoughts on the rollout of fully autonomous vehicles. Next, participants were asked to explain their thought process as they filled out the worksheet including ordering of information and information they eliminated. To help contextualize and explain results from the Study 1 survey, participants were also provided with the most frequent response to the "first thing I want to know is" questions from the survey for each of the three travel segments and asked to postulate why the response was indicated and under what scenarios it would be most relevant. These questions were intended to help resolve any disagreement between the first item results from the survey and the entire sequence responses collected from the interviews. Given our interest in how information should be presented (RQ2), participants were also asked if information should be presented from their phone, the vehicle, or another device, as well as the presentation modality (i.e., haptics, audio, or combinations of both). To conclude the interviews, participants were asked about their safety concerns related to AVs and what problems with their current transportation experience they think autonomous vehicles could solve. The full interview guide can be found in Appendix C. 4.1.3 Data Analysis Qualitative thematic analysis of the interview transcripts was conducted by two independent researchers using the Taguette qualitative data analysis tool [33]. Going through the interview transcripts in opposite orders to avoid fatigue effects, the researchers identified key ideas (codes) and recurring themes that participants mentioned in their responses. Afterwards, a team of five researchers reviewed all of the codes and their associated quotes to resolve disagreements. This process generated 83 unique codes that were further categorized by the team of researchers into subgroups and thematic groups [4]. The qualitative results, including four emerging themes from the thematic analysis process, are reported in Section 4.2.1. As non-parametric data, the order of information in the navigation, entry, and exiting segments was determined by calculating the mean rank followed by a Friedman's test. Items that participants eliminated in their worksheet, were scored as last place plus 1. In the event of a tie, the information item with the fewest number of eliminations, meaning more participants chose to include that information, would receive a higher ranking. How this information should be presented (i.e., the presentation modality) was also recorded. The most common response to presentation for each information item was recorded in Tables 3-5 along with the mean order of the information items. The results of the information sequencing are reported below in Section 4.2.2. 4.2 Results 4.2.1 Qualitative Results Qualitative thematic analysis of participant interviews revealed four key themes that highlight the complex relationships that BLV individuals have with their current transportation experiences and the potential for AVs to significantly reshape their independence and access to mobility. First, safety emerged as a critical consideration, encompassing both immediate physical risks and the broader reliability of AVs as an emerging technology. Second, participants detailed the difficulties and challenges they currently face with transportation, including personal, spatial, and technological barriers. Third, participants expressed significant optimism regarding the opportunities presented by AVs, envisioning increased independence, agency, and efficiency in their future mobility. Finally, interviews revealed the adaptations BLV individuals utilize to navigate these challenges, often mentioning the need to balance several navigation apps at once, manage their time, and rely on others, all of which can be frustrating and inefficient. These four themes collectively describe the impact that transportation has on the lives of BLV individuals and underscore the potential of AV technology to transform their 13 transportation experiences. Put simply, P3 offered, "The ability to independently travel when blind is a big deal." The following subsections are organized around the four themes identified from the interviews and offer additional insights into their role in AV transportation. Safety Participants expressed a range of safety concerns focused on environmental hazards, unexpected events, and the reliability of both current rideshares and future AVs. During the navigation phase, many emphasized that AV interfaces must respond to unpredictable environments. P1 illustrated this by noting, "You're somewhat vulnerable when there's some unusual situation, especially a hole in the ground [...] Things like that can be very difficult." Entering the vehicle raised concerns about identifying the correct car and feeling secure. P3 shared, "It's very important that the autonomous car makes me comfortable that I'm in the right place and that it's safe to get in." Exiting also posed challenges, particularly when vehicles stop in unsafe locations. P8 remarked, "I definitely want to know which side of the vehicle to get out on because I don't want to walk into headlong traffic." Concerns about AVs' ability to handle unexpected incidents-like accidents or road obstructions-were frequent. "My safety concerns would be an autonomous car's ability to deal with the unexpected" said P3. Worries extended to being dropped off in chaotic or unfamiliar places: "I'm worried about a drop off where there's something going on that isn't expected and the autonomous vehicle can't figure it out... and I can't figure it out." Others, like P2, raised concerns about reaction time to "a fallen tree or a deer in the road." Ultimately, participants stressed the fundamental importance of safety in transportation, as P10 asserted, "I want to get to my destination, and safety's a big part of that." Overall, these concerns were important to users across each segment of the AV trip and help contextualize the importance of information identified in the sequencing tasks during the interview. For example, environmental awareness concerns during the navigation and exiting phase highlight the need for hazard and obstacle detection, whereas concerns with safety when entering speak to the need for correct vehicle confirmations and seat detection. Safety generally was connected with difficulties and challenges experienced during transportation, as provided in the following. Difficulties and Challenges Participants described a wide range of challenges in current transportation systems, including spatial confusion, unreliable technology, and negative social experiences. Spatial difficulties-particularly around pick-ups and drop-offs-were a major theme. P1 said, "It can be really difficult to find the right vehicle. In fact, I made a mistake not long ago and got into the wrong Uber." P12 added, "Sometimes they'll pull up right in front of the business or sometimes [...] across the street," making it hard to identify the correct location. Navigation apps often complicated rather than clarified travel. P10 shared, "So often, when I am using Google Maps [...] it's giving me directions as if I can see." P3 called it "cumbersome" to juggle multiple apps, especially when one hand is already occupied. These limitations often forced participants to rely on incomplete or confusing information. Participants also spoke about emotional stress, embarrassment, and dependency on others. P6 noted, "Sometimes it can be embarrassing to open a door and someone is sitting there," and P5 added, "Nobody wants to open the door and accidentally sit in somebody's lap." These moments contributed to heightened anxiety during entry. Social and systemic challenges were also frequent, especially for those traveling with guide dogs. P2 stated, "They're not supposed to refuse me, but I get refusals every day." P8 added, "They may not always recognize the cane or if they see the dog they'll just keep driving and cancel the ride." These ongoing frustrations underscored the need for AV systems that can reduce reliance on unpredictable human interactions. Taken together, challenges and difficulties expressed by participants supported information items ultimately included in our description logic (Section 5), particularly Finding the Correct Vehicle and Where other Passengers are Sitting. Opportunities Participants expressed excitement and optimism regarding the potential benefits and opportunities of AVs. They anticipated increased efficiency, availability, reliability, and independence (which were all reoccurring themes) compared to current transportation options. "It will reduce unpredictability, so I can just get where I need to go," stated P4, and P11 14 shared "I'm looking forward to the opportunity to just get up and go whenever I want to, which [...] would be amazing!" highlighting the desire for spontaneous, unrestricted travel. These responses were closely tied to the prospect of greater independence and personal autonomy, as P1 expressed, "[AVs] will enable me to go places anytime I want to go and do it safely and independently, that's the main thing." P4 further elaborated on this, saying, "I'm very excited for independence. And when I say independence [...] it's really the independence of like, I can just get the car when I need it." This theme of independence also extended to the hope of personal AV ownership, to which P5 stated, "Well, if I have my own autonomous vehicle, that means I can go work wherever I want, whenever I want, which would be pretty sweet." P10 also expressed this wish by saying, "My hope is that one day I can buy a[n] [autonomous] vehicle... I've gone on dates as a blind person, and I can't tell you how many times I wish that I could go pick her up." Ultimately, participants described AVs as a pathway to a more inclusive and empowered future, with P8 concluding, "I'm thinking big picture. It's going to lead to more gainful employment for people, more ability to take care of your health, more ability to do leisure activity... so overall, a better quality of life." Adaptations Participants also discussed ways in which they have needed to adapt to overcome their current transportation challenges. To improve spatial awareness P1 explained, "I like to use the apps to know where I am when I'm in a moving vehicle so that I have some awareness of when I might be arriving." However, this reliance on technology was not without its complications, as discussed previously, by the need to multitask and manage multiple apps simultaneously. Time management emerged as a popular adaptation, with participants consistently building in buffer time to account for potential delays. P1 noted, "I always booked it [a ride] to get me to a place at least a half hour before I was supposed to be there," illustrating a proactive approach to unpredictable ride timelines. Preemptive planning was also mentioned as a way to navigate unfamiliar environments, as P9 articulated, "...I want some information ahead of time so that I can plan how I'm going to hop out and where I'm gonna go from there. I want to have a battle plan, I'm a planner." Shared rides presented unique social situations where BLV passengers may rely on others, as P9 acknowledged, "...if there are other passengers there, you don't want to be fumbling around... [so I] try to ask other people on the ride for assistance or what have you, which certainly happens sometimes on public transit." Overall, these adaptive behaviors highlight the resourcefulness that is often necessary and the important role of information to overcome transportation hurdles faced by BLV travelers. 4.2.2 Sequence Results Navigation The results of navigation information ordering and presentation from the interviews are presented in Table 3. Participants consistently prioritized Finding the Correct Vehicle as the piece of information most desired (and important) to be presented first (mean order = 1.33), favoring an audio presentation with accompanying haptic/vibrational feedback from a device. Subsequent information items, including Description of Vehicle (mean order = 2.67), Avoiding Obstacles/Hazards (mean order = 2.75), and Locating the Correct Door (mean order = 3.58), were ranked after in the sequence and were predominantly preferred in an audio-only format. The most frequent preferred output source for all items was the user's personal device (i.e., their phone). Table 3 Preferences for sequence of information presentation for the Navigating to Vehicle phase Information Item Mean Order (M ± SD) Modality (N, %) Output Source (N, %) Finding Correct Vehicle 1.33 ± 0.65 Audio + Haptics (7, 58.33) Phone (7, 58.33) Description of Vehicle 2.67 ± 1.37 Audio (9, 90) Phone (8, 80) Avoiding Obstacles / Hazards 2.75 ± 1.29 Audio (7, 70) Phone (9, 90) 15 Locating the Correct Door 3.58 ± 1.08 Audio (4, 44.45) Phone (4, 44.45) The results of the Friedman's test revealed that the ranking scores were significantly different for the information items, χ2(3) = 15.5, p = .001 with a moderate effect size (W = .43). Post-hoc pairwise tests revealed statistically significant differences in order between information for Finding the Correct Vehicle and each of the other information types (all p's < .05), with Finding the Correct Vehicle being preferred first. The order provided by interview participants, starting with information to help find the correct vehicle and ending with locating the correct door, somewhat contradicted the survey data (where participants indicated that they desired information on knowing the correct door first). When asked why they thought this might be, participants mentioned that knowing the correct door was situational: it would be more important to know earlier if it was a relatively full vehicle in a shared ride (N = 6 participants) or if they were worried about being embarrassed by going to the wrong door (N = 4 participants). Entering Table 4 presents the participants' sequencing of information related to entering the vehicle and their preferred presentation modalities. Participants generally considered Where Other Passengers are Sitting (mean order = 2.58) as the highest priority, followed by how to Interact with the Vehicle (mean order = 2.83) and Seat Placement (mean order = 3.08). The Vehicle Interface Location (mean order = 4.33), Safety Feature Location (mean order = 4.58), and Vehicle Destination information (mean order = 5.08) were ranked with moderate importance. Lower in importance and thus presented later in the information sequence were Vehicle Route (mean order = 6.00), Vehicle Cleanliness (mean order = 7.50), and Forward Direction information (mean order = 8.17). Across all information items, participants predominantly preferred an audioonly format for presenting the information and selected both information from the vehicle and the phone + vehicle as their preferred output source. Table 4 Preferences for sequence of information presentation for the Entering and Orienting phase Information Item Mean Order (M ± SD) Modality (N, %) Output Source (N, %) Where Other Passengers Sitting 2.58 ± 1.08 Audio (7, 63.63) Phone + Vehicle (6, 54.55) Interact with Vehicle 2.83 ± 1.70 Audio (8, 66.67) Vehicle (6, 50) Seat Placement 3.08 ± 2.43 Audio (9, 81.81) Phone + Vehicle (5, 45.45) Vehicle Interface Location 4.33 ± 2.46 Audio (9, 81.81) Phone + Vehicle (5, 45.45) Safety Feature Location 4.58 ± 2.27 Audio (10, 83.33) Vehicle (6, 50) Vehicle Destination 5.08 ± 2.35 Audio (12, 100) Phone + Vehicle (5, 41.67) Vehicle Route 6.00 ± 2.22 Audio (10, 90.9) Phone (5, 45.45) Vehicle Cleanliness 7.50 ± 3.21 Audio (5, 83.33) Vehicle (3, 50) Forward Direction 8.17 ± 2.62 Audio (3, 50) Vehicle (3, 50) Again, the ranking scores were significantly different for the information items, χ2(8) = 37.5, p < .001 with a moderate effect size (W = .39). Post-hoc pairwise tests revealed statistically significant differences in rank between Where Other Passengers are Sitting and Forward Direction (p = .04). These results suggest a high concern with seat availability and that finding the correct seat is top of mind for BLV users when entering AVs. However, when told that the prior survey results indicated a desire for knowing the vehicle's destination first, interview participants agreed that this was important, 16 particularly to increase comfort and assuage concerns stemming from prior experiences with getting on the wrong bus or rideshare vehicle. Participants specifically mentioned that knowing where the vehicle was going was important for confirming they were in the correct vehicle (N = 7 participants), knowing what to expect and preparedness (N = 5 participants), and seeking a sense of comfort (N = 4 participants). Exiting Participants' ordering of information during the vehicle exiting phase, along with their preferred presentation of the information, reveal insights into the group's egress priorities (Table 5). The Safe Side of Vehicle to Exit (mean order = 1.75) emerged as the most important piece of information to receive first. Following closely was Direction + Distance to Destination information (mean order = 2.58), indicating an interest to maintain spatial awareness post-exit. Information regarding Vehicle Location (mean order = 3.25) and knowledge of Hazards/Obstacles Outside the vehicle (mean order = 3.33) were also considered relatively important. Comparatively, information about the Direction and Flow of Traffic (mean order = 6.33), Points of Interest (mean order = 6.50), Direction Facing of the vehicle (mean order = 6.50), and other Passengers Exiting/Entering (mean order = 6.75) were deemed less essential to include. Consistent with the other trip segments, audio-only presentation was favored across all of the information items. As with the navigation phase, the preferred output source was most commonly the user's phone. Table 5 Preferences for sequence of information presentation for the Exiting and Orienting phase Information Item Mean Order (M ± SD) Modality (N, %) Output Source (N, %) Safe Side of Vehicle to Exit 1.75 ± 0.75 Audio (7, 58.33) Vehicle (6, 50) Direction + Distance to Destination 2.58 ± 1.44 Audio (9, 81.82) Phone (5, 45.45) Vehicle Location 3.25 ± 2.70 Audio (9, 90) Phone (4, 40) Phone + Vehicle (4, 40) Hazards / Obstacles Outside 3.33 ± 1.44 Audio (9, 81.82) Phone + Vehicle (5, 45.45) Direction and Flow of Traffic 6.33 ± 2.35 Audio (4, 57.14) Phone (4, 57.14) Points of Interest 6.50 ± 2.28 Audio (8, 100) Phone (4, 50) Direction Facing 6.50 ± 2.81 Audio (5, 71.43) Phone + Vehicle (4, 57.14) Passengers Exiting / Entering 6.75 ± 2.22 Audio (4, 57.14) Phone (4, 57.14) The Friedman's test revealed that ranking scores were significantly different for the information items, χ2(7) = 45.4, p < .001 with a large effect size (W = .54). Post-hoc tests demonstrated statistically significant differences in rank between Safe Side of Vehicle to Exit and Direction and Flow of Traffic (p = .001), Points of Interest (p = .002), Direction Facing (p = .02), and Passengers Exiting/Entering (p = .002). Results here were more aligned with the survey than in the previous two phases - participants prioritized information about which side to exit the vehicle in both studies. Whereas hazard identification was selected more frequently to be presented first in the survey, it was still ranked highly in the interviews and not statistically significantly different from the items rank ordered above it (Vehicle Location, Direction + Distance to Destination, and Safe Side of Vehicle to Exit). Indeed, when asked why participants from the survey may have selected hazards/obstacle avoidance first, participants mentioned it was very important, noting a general need for safety and to avoid falling risks (N = 10 participants), knowing what to expect outside the vehicle (N = 6 participants), and how this information could assist navigation to the destination (N = 4 participants). 17 4.2.3 Output Source When considering how the information sequences identified above should be presented, participants were asked if information should come from the vehicle (e.g., a speaker), their personal device (e.g., a smartphone), another device, or a combination. We report these data for each information item individually in Tables 3-5 but offer combined data in the following. For the navigation phase, personal devices were prioritized (68.30% of all responses), followed by a combination (24.39%), and the vehicle (7.32%). During the entry and orientation phase, a combination of personal device and vehicle was most preferred (38.04%), followed by just the vehicle (36.96%) and just the personal device (25.00%). Finally, during the exiting phase, personal devices were again prioritized (38.36%), followed by a combination (35.62%) and just the vehicle (26.03%). Collapsing these responses together, across all three phases, participants prioritized receiving information from their personal device (43.8% of all responses) instead of from a combination (32.55%) or the vehicle alone (23.30%). As explored more thoroughly in our guidelines section, this preference for information from personal devices speaks to participants being comfortable with their personal devices, the privacy they afford, as well as the accessibility settings to which they are already accustomed. 5 Description Logic Our goal in presenting the following description logic (Figure 7) is to provide a structured source of information that can support BLV users who, as our results indicated, (1) are excited to use AVs, (2) experience challenges with the entry, navigation, and exiting portion of the trip, and (3) are accustomed to using information as an adaptation, preferring audio interaction. Participants also highlighted how cumbersome it can be to have to learn multiple apps, each with their own information and interaction style, which is a consistent finding in the BLV AV research [20]. By presenting a unified structure for natural language audio descriptions tied to tasks known to be difficult for BLV travelers, we offer a practical approach for AV designers to adopt a consistent flow of information from one phase of travel to the next. 18 Fig. 7 Culminating description logic based on Study 1 Survey and Study 2 Interview results As noted in Section 3 and Section 4, the Study 1 Survey and Study 2 Interview illustrated notable differences in which items should be presented first to BLV users. Opposed to weighting one of these sets of results more heavily than the other, we opted to include the survey results that were statistically significantly more likely to be presented first (i.e., Vehicle's Destination and Safety Features in the Entering phase and Hazards/Obstacle Detection in the Exiting phase) as potential first-items in the final description logic. The logic here, as indicated by our interviews, is that much of the information is situational and should be elevated based on the context of travel. For example, in a crowded pick-up location, confirming the Vehicle's Destination can be imperative during the Entry phase to ensure that someone is not entering the wrong vehicle. Likewise, when there are hazards or obstacles to avoid in the destination environment, users want to know about these Hazards / Obstacles first. By combining the interview and survey data in this way, we offer a structured approach that also sheds light on how descriptions to support future accessible use of AVs need to be adaptive to the specific context of travel. 6 Guidelines and Discussion Findings from the studies presented here highlight the important challenges and opportunities for BLV users during AV travel, as well as the ways in which information can serve as an adaptation during multiple stages of the trip. For instance, results from both Study 1 Survey and Study 2 Interviews suggest that BLV users face spatial challenges like finding the correct vehicle and navigating pick-up and drop-off locations, as well as concerns with safety when entering and exiting. Moreover, we identify the range of information items that are important to communicate to BLV users to overcome these challenges (e.g., providing information to find the correct vehicle and which side of the vehicle to exit) combined with how these sets of information should be ordered in an accessible interface. The description logic developed as a result can be used by AV designers to develop structured sequences on which contextually relevant information is scaffolded. Although applying the description logic is not the focus of the current work, we envision it serving as a set of variables that AV-related apps can populate with information specific to each trip. Our results show that when doing so, most 19 information should be presented from the user's personal device using audio as the primary information modality. In the following, we explore these findings in conversation with the literature and offer ideas for future work pursuing accessible AVs more generally. 6.1 Prioritizing Safety-critical Information Across the Trip Both the Study 1 survey data and Study 2 interview data indicate the critical role of using information to promote safety for BLV users in the navigating, entry, and exiting phases of AV travel. Participants prioritized safety as a theme during the interviews, as well as in the information order in our description logic, which coincides with existing BLV AV research where safety has been a dominant concern [2, 7, 9]. Results from the Study 1 survey highlight that during the navigation phase, safety concerns emerge due to challenges with avoiding obstacles on the way to the vehicle, as well as finding the correct vehicle. These results echo existing work focused on BLV navigation to AVs that uses computer vision to help users identify objects and localize vehicles and door handles [20]. We add to the existing work by identifying the order of information that should be prioritized, starting first with information to help find the correct vehicle (e.g., distance and direction information), transitioning to obstacle avoidance, and ending with information to find the correct door (e.g., the rear passenger-side door). While participants in the present work did not indicate that finding the correct door and handle was as challenging as indicated by previous work [20], this may have been due to how we combined the tasks of finding the door and finding the handle into a single information item, a potential limitation of our question structure. It is worth noting that as flush door handles become more prevalent, the task of finding the door handle may become increasingly challenging for BLV users. Indeed, designers of future AVs should be cognizant of how design choices can exacerbate current problems for BLV users and, as demonstrated by our interviews, how information can serve as an adaptation to challenges during transportation. During the entry phase of the trip, safety emerged as a critical theme with participants prioritizing knowing about the safety features available in the vehicle. Interview responses suggest that it is important for AVs to confirm to BLV users that it is safe to get in and to confirm that the user is in the right vehicle and orienting to the correct seat. Interestingly, unlike the navigation and exiting phase, there is an apparent opportunity to explore using the vehicle itself as an output source instead of the user's personal device during the entry phase, as explored more thoroughly in the following subsection. Given these results, designers should ensure that the vehicle's destination is communicated upon passenger entry and that safety features (e.g., hand rails and emergency exits) are appropriately communicated to BLV riders. Finally, during the exiting phase of the trip, hazard avoidance and communicating which side of the vehicle is safe to exit were prioritized by participants. These data coincide with existing work on exiting AVs by Meinhardt et al. (2025), which emphasized the role of new multisensory interfaces leveraging haptics to communicate both static and dynamic obstacle detection for BLV users [30]. However, our results suggest that users likely prefer using their existing devices primarily via audio, as opposed to new devices leveraging haptics or other multisensory interactions. We explore this finding in the following subsections, offering caveats to our results and suggesting future work. 6.2 Designing for BLV Users' Personal Devices A key takeaway from the two studies presented here is that that BLV users generally prefer to have the majority of AV information that they receive delivered from their personal smartphone. This result coincides with existing accessibility work with AVs [20], with the principal reason being that users often have different preferences and accessibility needs for how information is presented in the UI. As current smart devices have significant universal design (UD) features built-in to the interface for both input and output operations, BLV users are empowered to customize their device with important personalized features like the speech rate of audio output [17]. These accessibility features are native to the device's operating system and deeply embedded into the UI, meaning they work across system states, applications, and usage scenarios, which reduces the learning curve and increases user confidence. This level of access is why an estimated 90% of BLV persons use smartphones, with the vast majority preferring iOS based devices given their incorporation of so many UD design principles in the native UI [36]. The takeaway, as one of our participants aptly put it, is that "every car is going 20 to be different, but [my] phone remains the same." Additionally, having the information coming from the smartphone allows users to "tailor the app or whatever service [they're] using within the phone to tell [them] the information as [they] want it" (supporting customization). The variety exhibited in our results with respect to which piece of information participants would like to receive first during the navigation phase highlights this need for the user to be able to customize the presentation of the provided information. We argue that AV developers adopting this design decision will not only benefit their userbase, but also their manufacturers, as they simply need to follow well-established accessibility design conventions when creating apps. The alternative (i.e., attempting to build in this level of accessibility into the vehicle control system) will be nontrivial, expensive, and require significant usability testing to avoid conflicts and introduce the potential for broken UI elements upon every update. That said, our data do show that BLV users may prefer some information directly delivered from the vehicle, particularly during the entry phase. How to interact with the vehicle and the location of safety features were items that specifically could be delivered via in-vehicle speakers, whereas others (e.g., seat placement and where other passengers are sitting) could be delivered by both a user's smartphone and the vehicle itself. This finding supports approaches in the literature using in-cabin audio to support BLV users during the trip [19]. We contend that in these combined instances, there may be an opportunity to explore multisensory information (e.g., haptics on the phone and audio from the vehicle) opposed to audio only, as discussed in the following. 6.3 Emphasizing Audio Based Interaction? Our results show a clear preference for BLV users receiving information from AVs using audio as the primary interaction modality. This finding coincides with existing research by Fink et al. (2023) [18] for BLV users across visual status subsamples (mild visual impairment, moderate, and legally blind). However, as recognized in this previous work, BLV users tend to have more experience with audio-based interaction than other interaction modalities (e.g., haptics), meaning self-reported preferences available in the literature may be impacted by experiential or cognitive bias. Indeed, when BLV participants have been exposed to multisensory UIs leveraging haptic interactions in a transportation context, results have been on par with or even outperformed audio-only approaches [22, 29, 30]. This apparent contradiction between selfreported survey and interview results with experimental device testing suggests that more work is needed to expose BLV users to multisensory spatial applications. We argue that as multisensory UIs proliferate, designers should consider how to integrate audio with other modalities like haptics, particularly when a user's personal device can be used (as has been done in recent work with vibro-audio interactions on smartphones [22, 32]). 6.4 Limitations Due to the hypothetical nature of this study, the BLV participants' answers may be skewed toward their current lived experience with human-operated rideshares (e.g., Uber and Lyft). As a result, their answers are based on their current knowledge and experiences with technology and are not necessarily representative of a hypothetical (imagined) perspective in which non-human-operated AV rideshares exist. As this technology slowly migrates from the hypothetical to the practical, further study and analysis would help to better extrapolate the needs of the BLV community regarding independent AV travel. We recommend follow-up behavioral studies to further explore the ideas and framework outlined in this paper, as these studies may add realism to the hypothetical problems of undeveloped technologies and allow for more concrete observations from both participants and researchers alike. The qualitative component of our study relied on interviews with a sample size of 12 participants. While this number is consistent with prior work in accessibility research with blind and low vision populations, we acknowledge that it is a reasonably small sample that limits the generalizability of our findings. We recognize that our analysis is more exploratory and focused on identifying key information needs rather than making broad population-level generalizations. Future research should strive to increase sample sizes to enhance the statistical power and external validity of findings within the accessibility research domain. Our study followed a sequential design, with a survey preceding qualitative interviews. We chose this approach because our goal was to identify a culminating description logic. We opted to hold interviews subsequent to the survey to help 21 contextualize and explain the results via rich qualitative data. Although an alternative approach, such as beginning with interviews, would have also been valid, we believe that the trajectory from a broad survey to more focused interviews was most useful in pursuit of our intended goals and how we used the observed outcomes. Demographic information analysis was limited in due part to the contract with Qualtrics, who was responsible for recruiting respondents to the survey. As such, we did not collect robust demographic information that could have further stratified the participant groups and analyses. This may be of value for future investigation in order to better analyze if people of certain ages, regions, and backgrounds influence the data. For example, related research has shown that people from the United States, Hong Kong, and China may interpret and react to AVs differently as a result of their cultural background [25]. Our focus, lying primarily on what information was important to BLV travelers in the hypothetical scenarios provided and when this information was best to be delivered, did not necessitate the analysis of this demographic information and as such did not take away from the intent and scope of the paper. Furthermore, we recognize that using a third-party service for participant recruitment is an emerging practice in research with specific trade-offs. While it maximized the likelihood of reaching our target population and obtaining data from a larger sample than is practical to recruit for in-lab studies, this technique did prevent us from engaging directly with participants. To ensure data integrity, we implemented screening questions and carefully reviewed all responses to filter out any that appeared inauthentic, incomplete, or suspicious. However, we acknowledge that the use of such a service introduces a risk of unverified data that is not present in direct recruitment and could be considered a drawback of our study (and any survey research using such services). 7 Conclusion This paper explored the information needs of blind and low vision (BLV) travelers throughout the autonomous vehicle (AV) travel experience (i.e., navigating to, entering, and exiting) and examined the optimal sequencing and presentation of information to support travelers in each of these trip segments. The study utilized a remote survey with 202 BLV respondents to assess the key BLV information needs and priorities when using autonomous transportation. The survey was followed by a slate of qualitative remote interviews with 12 BLV participants to understand current transportation challenges, while also predicting future challenges pertaining to AV travel, and identifying information sequences for addressing these current and future challenges. The resulting description logic from the survey and interviews emphasizes the critical importance of context-specific information delivery for AV travel; the information needs of BLV travelers are not uniform across the journey and vary significantly depending on the specific trip segment. Specifically, results highlight that information to find the correct vehicle and about the vehicle itself (e.g., make and model, the correct door to enter) are vital during the navigation phase. Upon entering the AV, the focus shifts to information that builds personal security and safety during the trip, such as knowing where the vehicle is traveling to and where safety features are located. Finally, as BLV travelers prepare to exit the AV, the priority once again centers on hazards, obstacle awareness, and knowing which side of the vehicle to exit to ensure a safe transition from the vehicle to the external environment and their destination. Across each of these stages, users identified the need for audio interaction and prioritized information from their personal device (i.e., their smartphone), while also indicating opportunities to receive information from the vehicle itself (e.g., onboard speakers) during the entry phase. Themes identified within the interviews underscore the current challenges faced by BLV travelers, impacting their independence and safety, but also expressing strong optimism for AVs to transform their mobility. Participants' current adaptations, while resourceful, highlight the limitations of existing transportation systems and provide evidence for the need of more reliable and user-friendly transportation solutions. By elucidating distinct information priorities and sequencing preferences within the context of three individual trip segments, this research contributes a foundational framework for AV technologies/interfaces that afford safer and more efficient wayfinding for BLV travelers. As we look toward the future of autonomous transportation and the opportunities that this technology can provide to users who may benefit the most, defining clear and consistent information standards for accessible use of these vehicles is essential for promoting greater independence and mobility for future BLV travelers. 22 References [1] Ahmetovic, D., Oh, U., Mascetti, S. and Asakawa, C. 2018. Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments. Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (Galway Ireland, Oct. 2018), 333-339. [2] Bennett, R., Vijaygopal, R. and Kottasz, R. 2020. Willingness of people who are blind to accept autonomous vehicles: An empirical investigation. Transportation Research Part F: Traffic Psychology and Behaviour. 69, (Feb. 2020), 13-27. https://doi.org/10.1016/j.trf.2019.12.012. [3] Bourne, R. et al. 2021. Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study. The Lancet Global Health. 9, 2 (Feb. 2021), e130-e143. https://doi.org/10.1016/S2214-109X(20)30425-3. [4] Braun, V. and and Clarke, V. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology. 3, 2 (Jan. 2006), 77-101. https://doi.org/10.1191/1478088706qp063oa. [5] Brewer, R. and Ellison, N. 2020. Supporting People with Vision Impairments in Automated Vehicles: Challenge and Opportunities. . [6] Brewer, R.N., Austin, A.M. and Ellison, N.B. 2019. Stories from the Front Seat: Supporting Accessible Transportation in the Sharing Economy. Proc. ACM Hum.-Comput. Interact. 3, CSCW (Nov. 2019). https://doi.org/10.1145/3359197. [7] Brewer, R.N. and Kameswaran, V. 2018. Understanding the Power of Control in Autonomous Vehicles for People with Vision Impairment. Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (Galway Ireland, Oct. 2018), 185-197. [8] Brewer, R.N. and Kameswaran, V. 2019. Understanding Trust, Transportation, and Accessibility through Ridesharing. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI '19 (Glasgow, Scotland Uk, 2019), 1-11. [9] Brinkley, J., Huff, E.W., Posadas, B., Woodward, J., Daily, S.B. and Gilbert, J.E. 2020. Exploring the Needs, Preferences, and Concerns of Persons with Visual Impairments Regarding Autonomous Vehicles. ACM Trans. Access. Comput. 13, 1 (Apr. 2020). https://doi.org/10.1145/3372280. [10] Brinkley, J., Posadas, B., Sherman, I., Daily, S.B. and Gilbert, J.E. 2019. An Open Road Evaluation of a Self-Driving Vehicle Human-Machine Interface Designed for Visually Impaired Users. International Journal of HumanComputer Interaction. 35, 11 (Jul. 2019), 1018-1032. https://doi.org/10.1080/10447318.2018.1561787. [11] Brinkley, J., Posadas, B., Woodward, J. and Gilbert, J.E. 2017. Opinions and Preferences of Blind and Low Vision Consumers Regarding Self-Driving Vehicles: Results of Focus Group Discussions. Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore Maryland USA, Oct. 2017), 290-299. [12] Bu, L., Cui, B., Pan, W., Chen, H., Xia, S. and Li, H. 2025. User-centered multimodal interaction design for autonomous vehicles: a focus on cognitive load and accessibility for users with severe visual impairment. Universal Access in the Information Society. (Jun. 2025). https://doi.org/10.1007/s10209-025-01240-4. [13] Claypool, H., Bin-Nun, A. and Gerlach, J. 2017. Self-driving cars: The impact on people with disabilities. Newton, MA: Ruderman Family Foundation. (2017). [14] Detjen, H., Schneegass, S., Geisler, S., Kun, A. and Sundar, V. 2022. An Emergent Design Framework for Accessible and Inclusive Future Mobility. Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (New York, NY, USA, 2022), 1-12. [15] Disability: https://www.who.int/news-room/fact-sheets/detail/disability-and-health. Accessed: 2025-03-03. [16] El-taher, F.E., Taha, A., Courtney, J. and Mckeever, S. 2021. A Systematic Review of Urban Navigation Systems for Visually Impaired People. Sensors (Basel, Switzerland). 21, 9 (Apr. 2021), 3103. https://doi.org/10.3390/s21093103. [17] Fink, P.D.S. 2023. Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People Who Are Blind and Visually Impaired. Doctoral Thesis #3817. The . [18] Fink, P.D.S., Alsamsam, M., Brown, J.R., Kindler, H.D. and Giudice, N.A. 2023. Give us something to chauffeur it: Exploring user needs in traditional and fully autonomous ridesharing for people who are blind or visually impaired. Transportation Research Part F: Traffic Psychology and Behaviour. 98, (Oct. 2023), 91-103. https://doi.org/10.1016/j.trf.2023.09.004. [19] Fink, P.D.S., Dimitrov, V., Yasuda, H., Chen, T.L., Corey, R.R., Giudice, N.A. and Sumner, E.S. 2023. Autonomous is Not Enough: Designing Multisensory Mid-Air Gestures for Vehicle Interactions Among People with Visual Impairments. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2023), 1-13. 23 [20] Fink, P.D.S., Doore, S.A., Lin, X., Maring, M., Zhao, P., Nygaard, A., Beals, G., Corey, R.R., Perry, R.J., Freund, K., Dimitrov, V. and Giudice, N.A. 2023. The Autonomous Vehicle Assistant (AVA): Emerging technology design supporting blind and visually impaired travelers in autonomous transportation. International Journal of HumanComputer Studies. 179, (Nov. 2023), 103125. https://doi.org/10.1016/j.ijhcs.2023.103125. [21] Fink, P.D.S., Holz, J.A. and Giudice, N.A. 2021. Fully Autonomous Vehicles for People with Visual Impairment: Policy, Accessibility, and Future Directions. ACM Trans. Access. Comput. 14, 3 (Aug. 2021). https://doi.org/10.1145/3471934. [22] Fink, P.D.S., Milne, H., Caccese, A., Alsamsam, M., Loranger, J., Colley, M. and Giudice, N.A. 2024. Accessible Maps for the Future of Inclusive Ridesharing. Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (Stanford CA USA, Sep. 2024), 106-115. [23] Giudice, N.A. and Long, R.G. 2010. Establishing and maintaining orientation: tools, techniques, and technologies. Foundations of Orientation and Mobility, 4th ed.; APH Press: Louisville, KY, USA. 1, (2010), 45-62. [24] Kacperski, C., Kutzner, F. and Vogel, T. 2024. Comparing autonomous vehicle acceptance of German residents with and without visual impairments. Disability and Rehabilitation: Assistive Technology. 19, 8 (Nov. 2024), 28692879. https://doi.org/10.1080/17483107.2024.2317930. [25] Kim, K.J. and Wang, S. 2025. Regional differences in public perceptions of autonomous vehicles facing moral dilemmas: a comparative study between the United States, Hong Kong, and China. Universal Access in the Information Society. 24, 2 (Jun. 2025), 1369-1377. https://doi.org/10.1007/s10209-024-01145-8. [26] Klatzky, R.L., Marston, J.R., Giudice, N.A., Golledge, R.G. and Loomis, J.M. 2006. Cognitive load of navigating without vision when guided by virtual sound versus spatial language. Journal of Experimental Psychology: Applied. 12, 4 (2006), 223-232. https://doi.org/10.1037/1076-898X.12.4.223. [27] Loomis, J.M., Klatzky, R.L., Philbeck, J.W. and Golledge, R.G. 1998. Assessing auditory distance perception using perceptually directed action. Perception & Psychophysics. 60, 6 (1998), 966-980. https://doi.org/10.3758/BF03211932. [28] Ma, Z., Schroeter, R. and Gomez, R. 2024. Designing HMI for BVI Users in Fully Automated Vehicles: A Participatory and In-the-field Approach. The 26th International ACM SIGACCESS Conference on Computers and Accessibility (St. John's NL Canada, Oct. 2024), 1-5. [29] Meinhardt, L.-M., Rück, M., Zähnle, J., Elhaidary, M., Colley, M., Rietzler, M. and Rukzio, E. 2024. Hey, What's Going On? Conveying Traffic Information to People with Visual Impairments in Highly Automated Vehicles: Introducing OnBoard. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8, 2 (May 2024), 67:1-67:24. https://doi.org/10.1145/3659618. [30] Meinhardt, L.-M., Weilke, L.M., Elhaidary, M., von Abel, J., Fink, P.D.S., Rietzler, M., Colley, M. and Rukzio, E. 2025. Light My Way: Developing and Exploring a Multimodal Interface to Assist People With Visual Impairments to Exit Highly Automated Vehicles. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2025). [31] Narayanan, S., Chaniotakis, E. and Antoniou, C. 2020. Shared autonomous vehicle services: A comprehensive review. Transportation Research Part C: Emerging Technologies. 111, (Feb. 2020), 255-293. https://doi.org/10.1016/j.trc.2019.12.008. [32] Palani, H.P., Fink, P.D. and Giudice, N.A. 2021. Comparing Map Learning Between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users. Multimodal Technologies and Interaction. 6, 1 (2021), 1. https://doi.org/10.3390/mti6010001. [33] Rampin, R. and Rampin, V. 2021. Taguette: open-source qualitative data analysis. Journal of Open Source Software. 6, 68 (Dec. 2021), 3522. https://doi.org/10.21105/joss.03522. [34] Ranjbar, P., Krishnakumari, P.K., Andersson, J. and Klingegård, M. 2022. Vibrotactile guidance for trips with autonomous vehicles for persons with blindness, deafblindness, and deafness. Transportation Research Interdisciplinary Perspectives. 15, (Sep. 2022), 100630. https://doi.org/10.1016/j.trip.2022.100630. [35] Son, S., Jeong, Y. and Lee, B. 2019. An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning. Sensors (Basel, Switzerland). 19, 22 (Nov. 2019), 5035. https://doi.org/10.3390/s19225035. [36] Survey on screen reader usage #9: 2019. https://webaim.org/projects/screenreadersurvey9/. Accessed: 2023-0502. [37] Wang, P. and Yi, B. 2024. Accessibility of Shared Automated Vehicles for the Visually Impaired Travelers. Social Science Research Network. [38] Microsoft Soundscape. Microsoft Research. 24 APPENDIX Appendix A Study 1 Survey Screening Criteria: (must meet all three) 1. At least 18 years old and have a known uncorrected visual impairment 2. Utilize an accessibility device or mobility aid (e.g., screen reader, magnification, white cane, guide dog, etc...) 3. Does not drive - Utilizes transportation such as rideshares, public transit, or private vehicles Pre-survey Questions: 1. How would you classify where you live or where you commute to most often? a. Urban b. Suburban c. Rural 2. What accessibility aid or aids do you use? [check all that apply] a. Screen readers such as JAWS and NVDA b. Mobile screen readers such as VoiceOver and TalkBack c. Magnification d. White cane e. Guide dog f. Braille display g. Apps such as Be My Eyes and Aira h. Other:_______ 3. How would you classify your vision status? a. Mild vision impairment b. Moderate vision impairment c. Severe vision impairment d. Blindness Instructions: Thank you for agreeing to help with our research! You will be presented with 3 scenarios that relate to finding, entering, and exiting an autonomous vehicle. For these scenarios, the autonomous vehicles are able to safely and legally drive themselves and are not accompanied by a human driver or an attendant. Please read through each scenario description carefully, imagine yourself in the scenario, and answer the accompanying questions honestly. Identifying and navigating to vehicle: Imagine ordering a ride to the grocery store in an urban setting. An autonomous vehicle (a vehicle that drives itself and does not have a human driver) pulls up close to where you are located on a busy street. You know that it's near the sidewalk and you are standing within 5 meters of it. Likert Scale: 1 - 5 (1 = Strongly Disagree, 2 = Somewhat Disagree, 3 = Neither Agree nor Disagree, 4 = Somewhat Agree, 5 = Strongly Agree) 25 1. It can be challenging to locate the correct door and door handle. 2. It can be challenging to find the correct vehicle. 3. I would like to know information about the vehicle, like the make, model, and year, before I get to it. 4. Avoiding obstacles and hazards that are in my way can be challenging. 5. I am comfortable finding my own way to the vehicle. 6. I would like to know about [blank] before I enter the vehicle. [Typed response] 7. I think that [blank] would best help me navigate to the vehicle. a. Honking the vehicle's horn b. A wayfinding app that uses spatialized audio and natural language descriptions on my smartphone c. A wayfinding app that uses vibrations on my smartphone d. Asking someone nearby e. Other:_____________ 8. Before getting to the vehicle, the first thing I want to know is [blank]. a. The direction of the vehicle b. If there is a bike lane c. Which door I should go to d. Where the car is going e. If there is a steep curb or curb cut f. If there are other people in the car already g. Other:______________ Entering and orienting inside vehicle: It's likely that fully autonomous vehicles are going to use a rideshare model where there may be multiple people on board and the seating arrangements may not be like traditional cars. For example, these vehicles could feel more like a bus or a train car where the seats sometimes face each other, run along the side of the vehicle, or could even be moved around. Imagine that you ordered one of these autonomous vehicles to go to the grocery store. It has arrived and you have successfully navigated to the correct door and are about to enter it. Likert Scale: 1 - 5 (1 = Strongly Disagree, 2 = Somewhat Disagree, 3 = Neither Agree nor Disagree, 4 = Somewhat Agree, 5 = Strongly Agree) 1. I want to know which direction is forward as I enter the vehicle's interior. 2. I want to know about the location of the control interface (e.g., radio, climate, mapping, etc.) within the vehicle. 3. I want to know about the cleanliness of the vehicle's interior. 4. I want to know what seats are available (i.e., unoccupied) within the vehicle. 5. I want to know about seat placement and the layout of the vehicle's interior. 6. I want to know where safety features (e.g., emergency exits, handrails, etc.) are located within the vehicle. 7. I want to know the route the vehicle is going to take as I enter the vehicle's interior. 8. I want to know about where the vehicle is going as I enter the vehicle's interior. 9. I want to know how to interact with the vehicle (e.g., voice, haptics, gestures, etc.) 10. What else would you like to know as you enter or while you are riding in the vehicle? [Typed response] 11. When entering the vehicle, the first thing I want to know is [blank]?. a. Where the vehicle is going b. The route the vehicle is going to take c. Where safety features are located d. The seat placement 26 e. Where other passengers are sitting f. The vehicle's cleanliness g. Where the control interface is located h. How to interact with the vehicle i. Which direction is forward j. Other:_________ Exiting and orienting to surroundings: The autonomous vehicle has safely driven you to the grocery store. It has pulled up next to the sidewalk about 20 meters from the store's entrance. Likert Scale: 1 - 5 (1 = Strongly Disagree, 2 = Somewhat Disagree, 3 = Neither Agree nor Disagree, 4 = Somewhat Agree, 5 = Strongly Agree) 1. I want to know what direction I'm facing when exiting the vehicle. 2. I want to know where the vehicle is located when I exit the vehicle. 3. I want to know about the direction and flow of traffic when exiting the vehicle. 4. I want to know if there are other passengers trying to exit or enter the vehicle. 5. I want to know what hazards or obstacles may be outside of the vehicle. 6. I want to know what points of interest are in the immediate area. 7. I want to know which side of the vehicle is safest to exit through. 8. I feel comfortable exiting the vehicle on my own without assistance. 9. What else would you like to know before or after you exit the vehicle? [Typed response] 10. When exiting the vehicle, the first thing I want to know is [blank]? a. Which side of the vehicle I'm exiting b. What points of interest are around me c. What hazards or obstacles may be outside the vehicle d. If there are passengers exiting or entering the vehicle e. What the direction and flow of traffic is f. Where the vehicle is located g. What direction I'm facing h. Other:_______ [End of survey] Appendix B Study 2 Pre-Interview Worksheet Summary: The following worksheet is designed to help engineers decide what order accessible information should be presented in future technology that assists with finding and using autonomous (driverless) vehicles. Instructions: When filling out this worksheet, think about what information would be most important for you to access first and then how information should be ordered after that depending on the scenario. Use numbers to rank items in the following scenarios in the order in which you'd would want them to be presented to you. Feel free to eliminate items that are unimportant to you by ranking them with a zero (0). For example, if you would want to know the car's color first, you would place the number 1 next to [Rank: ] for that item. If you don't care about the vehicle's color, mark that item with the number 0 next to [Rank: ]. Navigating to the vehicle 27 Scenario: Imagine ordering a ride to the grocery store in an urban setting. An autonomous vehicle (a vehicle that drives itself and does not have a human driver) pulls up close to where you are located on a busy street. You know that it's near the sidewalk and you are standing within 20 feet of it. When thinking about an interface that can give you information on the way to that autonomous vehicle, the following order of information would make the most sense or be preferable to you: [Instructions: Put the following items in the order that you would like them presented to you. Eliminate items you consider unimportant by ranking them with a 0.] - Information and/or assistance for finding the correct vehicle (e.g., distance and direction information) [Rank: ] - Information and/or assistance for locating the correct door or door handle [Rank: ] - Information and/or assistance for avoiding obstacles and hazards when navigating to the vehicle [Rank: ] - Information and/or assistance about the vehicle itself (e.g., make, model, color, year) [Rank: ] Entering and Orienting Scenario: It's likely that fully autonomous vehicles are going to use a rideshare model where there may be multiple people on board and the seating arrangements may not be like traditional cars. For example, these vehicles could feel more like a bus or a train car where the seats sometimes face each other, run along the side of the vehicle, or could even be moved around. Imagine that you ordered one of these autonomous vehicles to go to the grocery store. It has arrived and you have successfully navigated to the correct door and are about to enter it. When thinking about an interface that can give you information when entering and orienting within the autonomous vehicle, the following order of information would make the most sense or be preferable to you: [Instructions: Put the following items in the order that you would like them presented to you. Eliminate items you consider unimportant by ranking them with a 0.] - Information about where the vehicle is going [Rank: ] - Information about the route the vehicle is going to take [Rank: ] - Information about where safety features are located in the vehicle [Rank: ] - Information about the seat placement and orientation [Rank: ] - Information about where other passengers are sitting [Rank: ] - Information about the vehicle's cleanliness [Rank: ] - Information about where the vehicle control interface is located [Rank: ] - Information about how to interact with the vehicle [Rank: ] - Information about which direction is forward [Rank: ] Exiting and Orienting to the Surroundings Scenario: The autonomous vehicle has safely driven you to the grocery store. It has pulled up next to the sidewalk about 20 meters from the store's entrance. When thinking about an interface that can give you information as you exit an autonomous vehicle and orient to your surroundings, the following order of information would make the most sense or be preferable to you: [Instructions: Put the following items in the order that you would like them presented to you. Eliminate items you consider unimportant by ranking them with a 0.] - Information about which side of the vehicle is safe to exit [Rank: ] 28 - Information about points of interest that are around you (e.g., direction and distance to your destination) [Rank: ] - Information about hazards or obstacles that may be outside the vehicle [Rank: ] - Information about other passengers exiting or entering the vehicle [Rank: ] - Information about the direction and flow of traffic [Rank: ] - Information about where the vehicle is located [Rank: ] - Information about what direction you are facing [Rank: ] This concludes the worksheet, thanks for your input! Appendix C Interview Guide Hi there, I'm [name]. Thanks so much for participating in this interview being conducted by [anonymized]. The purpose of the interview is to help the design of future transportation that is more accessible and inclusive. To do so, we'll be asking some questions related to your current experience traveling and what you expect might be helpful in future scenarios with autonomous vehicles. We'll also be audio recording the interview to help with our analysis later on. You've already provided us some information on the worksheet you completed before this interview. Throughout the interview, we'll ask several times for you to imagine traveling to, entering, and exiting a fully autonomous vehicle. By this we mean a vehicle that can safely, efficiently, and legally drive without a human "at the wheel." What questions can I answer before we get started? Demographics and Experience To begin, we'll go over some demographic information and then talk about your experiences with technology and transportation. 1. Can you state your name, age, and gender identity? 2. Can you explain what your current day to day transportation experience is like? 3. What challenges or difficulties do you face during transportation? 4. How does the mode of transportation (e.g., public busses vs. rideshare) impact your experience? 5. Can you describe your vision loss? This could include the extent, any metrics you're aware of like acuity or light perception, as well as the etiology, or cause, and onset if known. a. How do you use your vision, if at all, during your day to day transportation experience? 6. Do you use a mobility aid (cane or dog, magnification device, etc.) 7. Do you use navigation assistance technologies or apps? For example, Blind Square, SeeingAI, AIRA, Good Maps, etc.? 8. So here we're going to ask you to think about fully autonomous vehicles, like we described before, assuming they can drive safely, efficiently, and legally without a human driver. You can think of it like an Uber or Lyft but without a driver at the wheel. What are your overall thoughts on the rollout of these vehicles? a. What are you excited about? b. What are you worried about? Worksheet We sent home a worksheet for you to fill out and we have a few questions about your responses. Navigation The first scenario had you imagine navigating to an autonomous vehicle without a human driver available to provide assistance. 29 1. Can you briefly talk me through your thought process as you filled out the worksheet? How did you decide on the order? Great, thanks. Now we're going to read each item you included, in the order that you included them. For each item, we have a few questions prepared. This might feel repetitive but your input is really important for designing future applications. In the interest of time, I'll ask that your responses are brief, just a few words. [For each item included on worksheet] The first piece of information you included was: 1. When or how far from the vehicle would you want this information? 2. Would you want that information in every situation or only in certain situations (for example at night)? What situations? 3. How do you imagine that information being best presented, for example through audio, haptics (i.e., touch/vibration), combinations of both? 4. Can you provide an example of what it might sound or feel like? 5. Where would you like the information to come from? Your phone? The vehicle? Another device? You decided not to include: Can you explain this decision? In a survey of people who are blind and low vision we conducted prior to the interviews, the majority of respondents said knowing which door of the vehicle to go to was a priority to know first. Why do you think this was indicated, and under what scenarios do you think it's most relevant? Entry/Orientation The second scenario on the worksheet had you imagine entering and orienting within an autonomous vehicle without a human driver available to provide assistance. 1. Can you briefly talk me through your thought process as you filled out the worksheet? How did you decide on the order? Great, thanks. Now, just like before, we're going to read each item you included, in the order that you included them. For each item, we have a few questions prepared. In the interest of time, I'll ask that your responses are brief, just a few words [For each item included on worksheet] The first piece of information you included was: 1. When would you want this information? For example, as you're entering, before entering, or once you're inside? 2. Would you want that information in every situation or only in certain situations (for example at night)? What situations? 3. How do you imagine that information being presented the best, for example through audio, haptics, combinations of both? 4. Can you provide an example of what it might sound or feel like? 5. Where would you like the information to come from? Your phone? The vehicle? Another device? 30 You decided not to include: Can you explain this decision? In that survey we mentioned, the majority of respondents said knowing where the vehicle is going was a priority to know first. Why do you think this was indicated, and under what scenarios do you think it's most relevant? Exiting/Orientation The third scenario on the worksheet had you imagine exiting an autonomous vehicle and orienting to your surroundings without a human driver available to provide assistance. 1. Can you talk me through your thought process as you filled out the worksheet? How did you decide on the order? Great, thanks. Now, again, we're going to read each item you included, in the order that you included them. For each item, we have a few questions prepared. In the interest of time, I'll ask that your responses are brief, just a few words [For each item included on worksheet] The first piece of information you included was: 1. When would you want this information? For example, as you're exiting, before you exit, or once you're outside? 2. Would you want that information in every situation or only in certain situations (for example at night)? What situations? 3. How do you imagine that information being presented the best, for example through audio, haptics, combinations of both? 4. Can you provide an example of what it might sound or feel like? 5. Where would you like the information to come from? Your phone? The vehicle? Another device? You decided not to include: Can you explain this decision? In that survey we mentioned, the majority of respondents said knowing what hazards or obstacles are outside of the vehicle was a priority to know first. Why do you think this was indicated, and under what scenarios do you think it's most relevant? Conclusion So to wrap up, we have just a few general questions. In that survey I mentioned, respondents brought up safety as a key concern. What safety concerns do you have? - Do you think it being a new technology is a contributing factor? In the example sentences that you gave, we're particularly interested in what we call the reference frame of directions. Do you prefer clockface positions, left-right, near-side vs far-side? Finally, what problems do you think do you think autonomous vehicles will solve with your current transportation experience? Thank you so much for participating!
2510.14912
1 Decoherence-Aware Entangling and Swapping Strategy Optimization for Entanglement Routing in Quantum Networks Shao-Min Huang∥, Cheng-Yang Cheng∥, Ming-Huang Chien∥, Jian-Jhih Kuo, and Chih-Yu Wang Abstract—Quantum teleportation enables high-security com- munications through end-to-end quantum entangled pairs. End- to-end entangled pairs are created by using swapping processes to consume short entangled pairs and generate long pairs. However, due to environmental interference, entangled pairs decohere over time, resulting in low fidelity. Thus, generating entangled pairs at the right time is crucial. Moreover, the swapping process also causes additional fidelity loss. To this end, this paper presents a short time slot protocol, where a time slot can only accommodate a process. It has a more flexible arrangement of entangling and swapping processes than the traditional long time slot protocol. It raises a new optimization problem TETRIS for finding strategies of entangling and swapping for each request to maximize the fidelity sum of all accepted requests. To solve the TETRIS, we de- sign two novel algorithms with different optimization techniques. Finally, the simulation results manifest that our algorithms can outperform the existing methods by up to 60 ∼78% in general, and by 20 ∼75% even under low entangling probabilities. Index Terms—Quantum networks, fidelity, decoherence, entan- glement routing, scheduling, resource management, optimization problem, NP-hardness, inapproximability, bi-criteria approxima- tion algorithm I. INTRODUCTION A S the frontiers of modern technology, quantum networks (QNs) have been developed and implemented to evaluate their practicability for information transmission [1]–[5]. QNs connect quantum nodes to transmit quantum bits (qubits) using end-to-end entangled pairs [5] (i.e., quantum teleportation). These networks serve as the foundation of quantum services such as quantum key distribution (QKD) [2], distributed quan- tum computing [3], and clock synchronization [4]. As illus- S.-M. Huang is with the Research Center for Information Technology Innovation, Academia Sinica, Taiwan, and also with the Department of Com- puter Science and Information Engineering, National Chung Cheng University, Taiwan (e-mail: shaominhuang2019@alum.ccu.edu.tw). C.-Y. Cheng is with the Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Taiwan, and also with the Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan (e-mail: yang890912.cs12@nycu.edu.tw). M.-H. Chien is with the Institute of Computer Science and Engineering, National Yang Ming Chiao Tung University, Taiwan, and also with the De- partment of Computer Science and Information Engineering, National Chung Cheng University, Taiwan (e-mail: qq11123334.cs12@nycu.edu.tw). J.-J. Kuo is with the Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, and also with the Advanced Institute of Manufacturing with High-tech Innovations, National Chung Cheng University, Taiwan (e-mail: lajacky@cs.ccu.edu.tw). C.-Y. Wang is with the Research Center for Information Technology Innovation, Academia Sinica, Taiwan (e-mail: cywang@citi.sinica.edu.tw). ∥denotes the equal contributions. Corresponding author: Jian-Jhih Kuo (a) Entangling and swapping from time slot 𝑡1 to 𝑡5 in a network (b) Skewed strategy tree (c) Complete strategy tree Fig. 1. Scheduling of entangling and swapping in QNs. trated in Fig. 1(a), each quantum node in the QN has a specific amount of quantum memory (blue squares) to store entangled qubits, mitigating rapid decoherence [5]–[8]. Adjacent nodes are interconnected by optical fibers (black lines) that facilitate the entanglement process. However, an end-to-end entangled pair is not directly available between two non-adjacent nodes. In such a case, their end-to-end entangled pair can be achieved via one or more entanglement swapping processes, each of which consumes two short entangled pairs to create a long one. For example, at time slot 𝑡2 in Fig. 1(a), there are two entangled pairs (𝑣1, 𝑣2) and (𝑣2, 𝑣3). After performing a swapping at the end of 𝑡2, we obtain a long entangled pair (𝑣1, 𝑣3) at time slot 𝑡3 and free two quantum memory units at 𝑣2 [8]. However, entangled pairs stored in the quantum memory decohere over time, resulting in a loss of quality, i.e., fidelity loss. Thus, finding the perfect timing for entangling is crucial; an improper strategy may cause entangled pairs to idle and arXiv:2510.14912v1 [quant-ph] 16 Oct 2025 2 unnecessary fidelity loss. Moreover, swapping also causes fi- delity loss as the entangled link generated by swapping exhibits lower fidelity than the two consumed entangled links, and the loss is exacerbated when the fidelity difference between the two pairs is significant. Thus, the entangling and swapping strategy should be properly designed to improve fidelity. Existing QN protocols, studied in the literature, often employ a standard long time slot protocol setting [9]–[22], which may result in unnecessary fidelity loss when handling multiple requests. Specifically, in traditional long time slot protocols, the entangling processes of each request (i.e., source-destination (SD) pair) take place during the entangling phase, while swapping processes occur during the swapping phase. As a result, shorter requests might have to wait for longer requests to complete because the latter involve more processes to perform. Further, traditional long time slot protocols bind resources for the entire time slot, even though a swapping process frees two quantum memory units, which originally could be used for other requests but remain blocked by the protocol. In light of the shortcomings, we propose a short time slot protocol where either entangling or swapping can occur in any given time slot. Specifically, in our short time slot protocol, a node is free to perform one process within a single time slot, and the process could be entangling, swapping, teleporting, or idling. Thus, short requests no longer need to wait for other requests to establish their entangled pairs, as they can perform processes separately, while longer requests can utilize the resource released by the nodes performing swapping processes. The adoption of a short time slot protocol enables more effi- cient entangling and swapping strategies, minimizing unneces- sary fidelity loss by precisely timing entangled pair generation for subsequent swapping. Take Figs. 1(a) and 1(b) for example, we have a request from 𝑣1 to 𝑣5.1 At time slot 𝑡1, pairs (𝑣1, 𝑣2) and (𝑣2, 𝑣3) start entangling, taking one time slot. At time slot 𝑡2, both the generated pairs (𝑣1, 𝑣2) and (𝑣2, 𝑣3) have the initial fidelity 0.98 and start decoherence while doing the swapping process. One may observe that we can choose to start the entangling of pair (𝑣3, 𝑣4) at either 𝑡1 or 𝑡2. Clearly, entangling at 𝑡2 is better than 𝑡1 since they will have less wait time for subsequent swapping processes, leading to less decoherence and better fidelity. On the other hand, swapping processes cause extra fidelity loss, and the fidelity of (𝑣1, 𝑣3) is 0.951, which is lower than the fidelity before swapping (i.e., the two links with fidelity 0.98 after decoherence for 1 time slot will have fidelity 0.975). At time slot 𝑡3, the generated pair (𝑣1, 𝑣3) with fidelity 0.951 and (𝑣3, 𝑣4) with fidelity 0.95 start decoherence while swapping, leading to pair (𝑣1, 𝑣4) with fidelity 0.88 after swapping. Meanwhile, pair (𝑣4, 𝑣5) start entangling. Finally, (𝑣1, 𝑣5) are generated at 𝑡5 and has fidelity 0.787. Unlike the skewed strategy tree in Fig. 1(b), the strategy in Fig. 1(c) requires only four slots. Intuitively, it should have better fidelity because it suffers less decoherence time. However, the result is counter-intuitive; Fig. 1(b) has better fidelity than Fig. 1(c) 1In this example, all the parameters follow the default settings in Section VII. In addition, the fidelity after decoherence over time and swapping is calculated using Eqs. (1) and (2), respectively, in Section III. because most of the swapping processes in Fig. 1(b) consume two entangled pairs with similar fidelity and thus suffer less fidelity loss due to swapping processes.2 From the perspective above, we can guess that if all the links on the path have similar initial fidelity, the complete strategy tree will perform better than others. For example, if all the generated pairs (𝑣1, 𝑣2), (𝑣2, 𝑣3), (𝑣3, 𝑣4), and (𝑣4, 𝑣5) have initial fidelity 0.98, then the final fidelity of the strategy trees in Figs. 1(b) and 1(c) will be 0.889 and 0.891, respectively. We have two observations from the above examples: 1) swapping two entangled pairs with similar fidelity performs better, and 2) the more skewed the strategy tree, the less memory is required within a time slot. Moreover, a different strategy leads to varying fidelity and distribution of resource consumption. In this paper, such a distribution is referred to as numerology, representing how the resources (i.e., quantum memory in this paper) are consumed for the request. The numerology provides a direct representation of the correspond- ing strategy to examine its feasibility. The quality of the strategy, which is the resulting fidelity of the request, can also be derived directly through the corresponding numerology. Thus, the entangling and swapping strategy formation can be transformed into the numerology selection problem. In this paper, we aim to choose a numerology for each request to maximize the fidelity sum for all accepted requests while meeting the fidelity threshold within a batch of time slots, i.e., the Optimized Decoherence-aware Strategy for Entangling and Swapping Scheduling Problem (TETRIS). The TETRIS introduces four novel challenges: 1) Entangled pairs will decohere over time. How can we choose the right time to generate entangled pairs to reduce their wait time? 2) The swapping process may cause additional fidelity loss. How can we achieve an intelligent swapping strategy to minimize this loss? 3) Numerologies determine the resulting fidelity. How can we identify the feasible numerologies for requests to meet the fidelity threshold, guaranteeing the quality of teleportation? 4) Quantum memory is limited, and its availability will be affected by simultaneous requests. How can we find efficient numerologies to mitigate resource contention and satisfy more requests while ensuring a better fidelity sum? It can be shown that the TETRIS with no fidelity threshold constraint (i.e., TETRIS-N) is already NP-hard and cannot be approximated within any factor of |𝐼|1−𝜖, where |𝐼| denotes the number of SD pairs. To conquer the above challenges, we propose two novel algorithms. 1) The first one is Fractional Numerology Packing and Rounding Algorithm (FNPR). FNPR aims to maximize the number of accepted requests. With linear programming (LP) rounding, FNPR can achieve a bi-criteria approximation ratio for the TETRIS-N as the actual duration of a time slot and the number of time slots are sufficiently small. Then, we extend FNPR to solve the TETRIS. 2) The second one is Fidelity-Load Trade-Off Algorithm (FLTO). FLTO utilizes two efficient dynamic programming (DP) algorithms to find two candidate numerologies for each request with a given 2More simulations and clarifications of the reasons behind this counter- intuitive example are provided in Appendix A of the supplementary material. 3 path. One maximizes fidelity, and the other considers resource utilization. Then, FLTO iteratively invokes them to allocate the resource for each request based on a subtly-designed index called resource efficiency index (REI), inspired by [23]. In sum, the contributions of this paper are as follows: 1) To the best of our knowledge, this is the first attempt to consider the decoherence of entangled pairs over time for entangling and swapping scheduling while applying a short time slot protocol that offers better resource allocation. 2) We prove that the TETRIS is an NP-hard problem, and even its special case, the TETRIS-N, cannot be approximated within any factor of |𝐼|1−𝜖. 3) We propose a combinatorial algorithm with a clever separation oracle to achieve a bi-criteria approximation. Meanwhile, we develop two DP algorithms to find candidate numerologies with different goals and design an index to choose numerologies adaptively. II. RELATED WORK Before discussing the related works on optimization prob- lems for entangling and swapping, we review previous works that examine feasibility and realization of QNs [1], [24]–[28]. Early foundational studies have laid the groundwork for secure communication protocols and robust architectures in quantum networks. Elliott et al. introduced QN to realize secure com- munications [1]. Meter et al. proposed a large QN architecture with layered recursive repeaters, where repeaters may not trust each other, and then designed new protocol layers to support quantum sessions to ensure robustness and interoperable com- munication [24]. Muralidharan et al. presented new quantum nodes that can execute the quantum error correction (QEC) processes and classified the theoretically feasible technologies of quantum nodes into three generations [25]. Pirandola et al. discussed the limits of repeater-less quantum communications and provided general benchmarks for repeaters [26]. Caleffi et al. designed a routing protocol to ensure a high end-to- end entanglement rate between any two nodes [27]. Zhao et al. proposed two transport layer protocols for quantum data networks that achieve high throughput and fairness [28]. Building on these foundational works, subsequent research has focused on path selection and entangling and swapping process optimization for long time slot system-based QNs [9]– [22]. Pant et al. presented a greedy algorithm to determine the paths for each request [9]. Shi et al. designed a routing method Q-CAST based on the Dijkstra algorithm to find primary paths and recovery paths to mitigate the effect of entanglement failures [10]. Zhao et al. presented an LP-based algorithm REPS to maximize throughput in SDN-based QNs [11]. Zhao et al. considered entangled links’ fidelity and ex- ploited quantum purification to enhance link fidelity [12]. Chen et al. proposed two heuristic algorithms for entangling and swapping phases separately [13]. Farahbakhsh et al. developed an add-up scheme to store and forward data qubits as much as possible [14]. Zeng et al. studied how to simultaneously maximize the number of quantum-user pairs and their expected throughput [15]. Zeng et al. utilized the properties of 𝑛-fusion to help increase the success probability of constructing a long entangled pair for quantum networks [16]. Li et al. exploited purification to meet the fidelity requirements for multiple SD pairs as many as possible [17]. Chakraborty et al. gave an LP formulation to compute the maximum total entanglement distribution rate [18]. Pouryousef et al. attempted to stockpile entangled pairs in advance when the traffic demand is low while using them otherwise [19]. Zhao et al. considered a room-size network scenario, where a longer entangled pair can be established by sending one of its photons via all- optical switching without swapping at intermediate nodes [20]. However, none of the above works considers the effect of decoherence on fidelity. To this end, Cicconetti et al. studied different policies of path selection and request scheduling and then measured the resulting link fidelity under the influence of dephasing and depolarizing noises (i.e., decoherence) [21]. Ghaderibaneh et al. further considered time decoherence when determining the swapping order for each SD pair to minimize entangled pair generation latency [22]. However, all of the above works use long time slot systems and conduct entangling and swapping sequentially, which may block short requests to wait for long requests, resulting in more memory idle time. To address the limitations of long time slot systems, some studies [29], [30] have explored short time slot systems or even asynchronous protocols to improve efficiency. Huang et al. employed a short time slot protocol while opportunistically forwarding data qubits via social nodes to ensure security. However, they neglected decoherence over time, fidelity loss due to swapping processes, and freed qubits after swapping processes [29]. Yang et al. proposed an asynchronous model to freely generate entangled pairs or conduct swapping processes without time slot limitations. It helps establish end-to-end connections more quickly while utilizing more resources, as resources can be freed immediately [30]. However, it did not consider decoherence or fidelity loss. In contrast, our short time slot system considers more practical fidelity losses due to time decoherence and swapping processes. It further leverages diverse numerologies for requests to maximize the fidelity sum, achieving efficient entangling and swapping scheduling. Although some short time slot approaches have been pro- posed to enhance efficiency, they still overlook fidelity degrada- tion over time due to decoherence and swapping processes. To this end, the following works [31]–[34] consider decoherence when constructing remote entangled pairs along a specific path. Haldar et al. employed the 𝑄-learning reinforcement-learning (RL) algorithm to discover policies that optimize both average waiting time and fidelity for a single request [31]. I˜nesta et al. proposed a method based on Markov decision processes- based method with value and policy iteration to minimize the expected needed time to achieve end-to-end entanglement for a single request [32]. Haldar et al. proposed the quasi-local multiplexing policies, following the SWAP-ASAP approach and incorporating entanglement purification, to optimize both average waiting time and fidelity for a single request in linear chain quantum networks [33]. Goodenough et al. analyzed the value of fidelity up to 25 segments over time using a generating function. The approach can be applied to find cut-off policies in 4 TABLE I COMPARISON OF RELATED WORKS Literature Time slot Fidelity decohere over time Multi- request scheduling Path selection Objective Solution strategy [9] Long No No Yes Max throughput Greedy [10] Long No No Yes Max throughput Dijkstra-based [11] Long No No Yes Max throughput LP rounding + Heuristic [12] Long No No Yes Max throughput Dijkstra-based + LP rounding [13] Long No No Yes Max throughput Heuristic + DP [14] Long No No No Min waiting time Greedy [15] Long No No Yes Max user pairs & Max throughput Dijkstra-based + LP rounding + Branch and bound algorithm [16] Long No No Yes Max throughput Dijkstra-based [17] Long No No Yes Max throughput Dijkstra-based + Greedy [18] Long No No Yes Max throughput Multicommodity flow-based algorithm [19] Long No No Yes Max throughput & Min delay LP [20] Long No No Yes Max throughput LP rounding + Heuristic [21] Long No Yes Yes Max throughput Heuristic + Dijkstra-based [22] Long Yes No No Min latency DP [29] Short No No Yes Min waiting time Greedy [30] Async No No Yes Max network efficiency Dijkstra-based + Primal-Dual-based algorithm [31] Short Yes No No Min waiting time & Max fidelity 𝑄-learning reinforcement-learning [32] Short Yes No No Min delivery time Markov decision process-based algorithm [33] Short Yes No No Min waiting time & Max fidelity SWAP-ASAP-based algorithm Ours Short Yes Yes Yes Max fidelity & Max throughput Combinatorial algorithm with DP-based separation oracle + LP rounding & Greedy + DP QKD [34]. Since the above studies focused on handling single requests, their policies will bind sufficient resources for each accepted request, enabling re-entangling and swapping until a successful end-to-end entangled link is established. Thus, they may perform well in single-request scenarios but are less effective when dealing with multiple requests. Table I summarizes the related works based on their setups (e.g., time slot length, fidelity decoherence, scheduling method, path selection) and optimization objectives (e.g., throughput, fidelity, latency, waiting time). III. BACKGROUND AND ASSUMPTIONS A. Fidelity and Decoherence of Entangled Pairs Fidelity is a classic index to qualify whether an entangled pair is good. An entangled pair with fidelity 𝐹can be written in a density matrix as 𝜌= 𝐹|Φ+⟩⟨Φ+| + 𝑏|Φ−⟩⟨Φ−| + 𝑐|Ψ+⟩⟨Ψ+| + 𝑑|Ψ−⟩⟨Ψ−|, where 𝐹+ 𝑏+ 𝑐+ 𝑑= 1 and |Φ+⟩ is our wanted state. In this paper, we consider our entangled pairs are Werner state [8], [35], [36], which has the relation 𝑏= 𝑐= 𝑑. A quantum state will interact with the environment and gradually lose its quantum properties (i.e., the fidelity decreases), called decoherence. As 𝐹= 𝑏= 𝑐= 𝑑= 0.25, it loses all its quantum properties and behaves like a classic random behavior [8]. The decoherence speed depends on the (a) Decoherence curve (b) Fidelity of swapping Fig. 2. Fidelity loss due to elapsed time and swapping. type of quantum memory. The general decoherence can be described as follows: 𝐹𝑑(𝑡) = 𝐴+ 𝐵×𝑒−(𝑡/T) 𝜅, (1) which is an empirical formula fitting to the experimental data [37], [38], as plotted in Fig. 2(a), while the unit of 𝑡is second. Note that 𝐴, 𝐵, T, and 𝜅are the constants according to the adopted technique. In addition, there is a one-to-one mapping between wait time and fidelity because 𝐹𝑑(𝑡) is invertible. B. Entanglement Swapping Entanglement swapping causes extra fidelity loss [8]. Swap- ping two Werner states with the fidelity of 𝐹1 and 𝐹2 will lead to the fidelity as follows: 𝐹𝑠(𝐹1, 𝐹2) = 𝐹1×𝐹2 + 1 3 (1 −𝐹1) × (1 −𝐹2). (2) 5 Fig. 2(b) shows how the fidelity changes after swapping. The blue curve represents the fidelity change of an entangled pair due to decoherence; on the other hand, the red curve depicts the fidelity of the entangled pair that conducts swapping at 𝑡𝑠. After swapping, there is a sudden drop of fidelity to 𝐹𝑒𝑞, and it keeps decohering after the drop. It can be viewed as a shift from the blue curve, i.e., the red curve after 𝑡𝑠behaves the same as the blue curve after 𝑡𝑒𝑞. We call 𝑡𝑒𝑞the equivalent wait time after swapping, which can be calculated by 𝑡𝑒𝑞= 𝐹−1 𝑑(𝐹𝑒𝑞). The success probability of entangling decreases exponen- tially with channel distance [10]. Specifically, the entangling probability between any adjacent nodes 𝑢and 𝑣is defined as: Pr(𝑢, 𝑣) = 1 −(1 −𝑒−𝜆·𝑙(𝑢,𝑣)) 𝜉, (3) where 𝜉= ⌊𝜏 𝔗⌋represents the number of entangling attempts, 𝜏is the length of a short time slot, 𝔗is the entangling time, 𝑙(𝑢, 𝑣) is the length of the quantum channel between 𝑢and 𝑣, and 𝜆is a constant determined by the optical fiber material [7]. Then, swapping also has a different success probability at each node 𝑣, denoted as Pr(𝑣) [10]. Thus, the success probability of a path 𝑝between the source 𝑠and the destination 𝑑can be expressed as follows: Pr(𝑝) = Ö (𝑢,𝑣)∈𝑝 Pr(𝑢, 𝑣) Ö 𝑣∈𝑝\{𝑠,𝑑} Pr(𝑣). (4) C. Assumptions For ease of presentation, this paper has the following as- sumptions: 1) Each entangled pair is described by a Werner state [8], [35], [36], a mixture of a specific pure state and white noise. The white noise is represented by the normalized identity operator in the Hilbert space corresponding to the state [39]. 2) This paper assumes that the fiber resource is always sufficient because it is relatively cheaper than quantum memory. In other words, we focus on quantum memory consumed by the entangling process while assuming all entangling processes will not be blocked by fiber availability. 3) Following [11], all nodes communicate with a central controller and share global knowledge. 4) The entangled pairs between the same pair of two nodes have identical initial fidelity. Besides, all the entangled pairs decohere as the same relation, as shown in Fig. 2(a), because of using the same quantum memory technique (i.e., constants 𝐴, 𝐵, T, and 𝜅are the same for all nodes). 5) For synchronization, the time slot length should be at least the duration required for a single swapping process (e.g., around 1.1 ms [40]). Since an entangling process generally requires less time than swapping (e.g., 0.25 ms [40]), it can repeatedly attempt entangling within a time slot to improve the entangling success probability [10], as described in Eq. (3). IV. SYSTEM MODEL & PROBLEM FORMULATION A. System Model We consider a QN with multiple quantum nodes, each with limited quantum memory. Nodes connected through a bunch of fibers are adjacent nodes. Data qubits are transmitted via end- to-end entangled pairs in QN. A one-hop entangled pair can be directly constructed via the fiber between two adjacent nodes 𝑢and 𝑣with the success probability of Pr(𝑢, 𝑣) and the initial fidelity of 𝐹(𝑢, 𝑣). Plus, a long entangled pair (𝑢, 𝑤) can be constructed by performing the swapping process at a repeater 𝑣to consume two short entangled pairs (𝑢, 𝑣) and (𝑣, 𝑤) with the success probability of Pr(𝑣). The quality of an entangled pair is measured by fidelity. A generated entangled pair will decohere over time, as defined by Eq. (1) in Section III-A. A node can perform an entanglement swapping process when it possesses two qubits, each from different entangled pairs. Subsequently, the memory occupied by these two qubits after swapping will be freed. The fidelity of the resulting longer entangled pair after swapping follows Eq. (2). We consider a short time slot protocol where a single time slot only accommodates a single process for each memory unit. Each node can freely execute one process within a time slot for each memory unit, such as entangling, swapping, or even idling. The protocol periodically performs all necessary computations in advance, producing a complete operation plan before any entangling or swapping begins. The controller then instructs all quantum nodes to synchronously execute these operations within a fixed batch of time slots. Any request that fails within a batch is deferred to the next batch for reattempt. For clarity, let 𝑇be the set of time slots in the current batch. To consider the resource distribution, we define the strategy tree, numerology, and fidelity formulas in our system. Definition 1. A strategy tree 𝛾is an activity-on-vertex tree structure, describing an entangling and swapping strategy be- tween node 𝑣1 and node 𝑣𝑛on path 𝑝= {𝑣1, 𝑣2, ..., 𝑣𝑛}. It consists of two parts; one is the entangling part (𝛾𝑒), and the other is the swapping part (𝛾𝑠), as shown by the green and blue lines in Fig. 1(b), respectively. The entangling part 𝛾𝑒includes all the external pairs. Each external pair denotes a quantum pair (𝑣𝑖, 𝑣𝑖+1) being entangled, where 𝑖∈{1, 2, ..., 𝑛−1}, as shown by the green pair in Fig. 1(b). On the other hand, the swapping part 𝛾𝑠is an edge-weighted binary tree with a root pair (𝑣1, 𝑣𝑛), where each leaf pair represents an entangled pair (𝑣𝑖, 𝑣𝑖+1) for each 𝑖∈{1, 2, ..., 𝑛−1} and each edge’s weight denotes the number of elapsed time slots for the corresponding entangling or swapping processes. Besides, each non-leaf pair (𝑣𝑖, 𝑣𝑗) has exactly two child pairs, (𝑣𝑖, 𝑣𝑘) and (𝑣𝑘, 𝑣𝑗), where 𝑖< 𝑘< 𝑗, denoting a long pair created by consuming two pairs. Last, each leaf pair (𝑣𝑖, 𝑣𝑖+1) in 𝛾𝑠is connected to a corresponding external pair (𝑣𝑖, 𝑣𝑖+1) in 𝛾𝑒by an edge with a cost to form 𝛾. For a given strategy tree 𝛾, each edge in 𝛾has a positive cost (i.e., the number of elapsed time slots for the corresponding entangling or swapping processes). Let 𝜌𝑟denote the root pair of the strategy tree 𝛾, and 𝜌𝑎denote any pair in 𝛾. We define the cost sum 𝛿(𝜌𝑟, 𝜌𝑎) as the total cost along the simple path from 𝜌𝑎to 𝜌𝑟in 𝛾, i.e., the total number of elapsed time slots from 𝜌𝑎to 𝜌𝑟. Following the definition above, for a given strategy tree 𝛾, the root pair 𝜌𝑟is assigned a designated time slot 𝑡𝑟∈𝑇, where 𝑇= {1, 2, . . . , |𝑇|} represents the set of time slots in the current batch. Therefore, the corresponding time slot for a pair 𝜌𝑎in 𝛾can be expressed as 𝑡𝑎= 𝑡𝑟−𝛿(𝜌𝑟, 𝜌𝑎). 6 (a) Three possible strategy trees for the path 𝑣1 →𝑣2 →𝑣3 →𝑣4 (b) The corresponding numerologies for the strategy trees in (a) Fig. 3. Numerologies induced by different strategy trees. A strategy tree 𝛾is feasible if satisfying the condition: The time slot 𝑡𝑎of each pair 𝜌𝑎∈𝛾is within 𝑇(i.e., 𝑡𝑎∈𝑇). This condition is essential; otherwise, 𝑡𝑎may fall outside the set 𝑇. For example, consider the right strategy tree in Fig. 3(a). Suppose the root pair 𝜌𝑟= (𝑣1, 𝑣4) is with 𝑡𝑟= 3. Now, taking the external pair (𝑣1, 𝑣2) as 𝜌𝑎, the corresponding time slot 𝑡𝑎 is calculated as 𝑡𝑎= 3 −𝛿(𝜌𝑟, 𝜌𝑎) = 3 −4 = −1, which means 𝑡𝑎∉𝑇. Thus, if 𝑡𝑟= 3, the right strategy tree is infeasible. Definition 2. A numerology 𝑚represents the resource distribu- tion of a specific strategy tree 𝛾. Specifically, let 𝜌𝑝= (𝑣𝑖, 𝑣𝑗) denote a pair at time slot 𝑡𝑝in 𝛾, with a left child pair 𝜌𝑐𝑙= (𝑣𝑖, 𝑣𝑘) at time slot 𝑡𝑐𝑙and a right child pair 𝜌𝑐𝑟= (𝑣𝑘, 𝑣𝑗) at time slot 𝑡𝑐𝑟, where 𝑡𝑝> 𝑡𝑐𝑙and 𝑡𝑝> 𝑡𝑐𝑟. Consequently, the left child pair 𝜌𝑐𝑙occupies one memory unit of 𝑣𝑖and 𝑣𝑘during time slots 𝑇(𝜌𝑐𝑙) = {𝑡∈Z | 𝑡𝑐𝑙≤𝑡< 𝑡𝑝}, while the right child pair 𝜌𝑐𝑟occupies one memory unit of 𝑣𝑘and 𝑣𝑗during time slots 𝑇(𝜌𝑐𝑟) = {𝑡∈Z | 𝑡𝑐𝑟≤𝑡< 𝑡𝑝}. Furthermore, if 𝜌𝑝has only a child pair 𝜌𝑐= (𝑣𝑖, 𝑣𝑗) (i.e., the external pair in 𝛾) at time slot 𝑡𝑐, this child pair occupies one memory unit of 𝑣𝑖 and 𝑣𝑗during time slots 𝑇(𝜌𝑐) = {𝑡∈Z | 𝑡𝑐≤𝑡< 𝑡𝑝}. Plus, 𝑇(𝜌𝑟) = {𝑡𝑟} for the root pair 𝜌𝑟at time slot 𝑡𝑟in 𝛾. Therefore, numerology 𝑚captures the resource distribution across the nodes in the strategy tree 𝛾and can be formally defined as 𝑚= {(𝜌𝑎,𝑇(𝜌𝑎)) | 𝜌𝑎∈𝛾}, where each pair 𝜌𝑎∈𝛾and its associated time slots are grouped in a tuple (𝜌𝑎,𝑇(𝜌𝑎)). The amount of memory on node 𝑣occupied by numerology 𝑚at time slot 𝑡is 𝜃𝑚(𝑡, 𝑣) = {(𝜌𝑎,𝑇(𝜌𝑎)) ∈𝑚| 𝑣∈𝜌𝑎, 𝑡∈ 𝑇(𝜌𝑎)} . Note that 𝜃𝑚(𝑡, 𝑣) ∈{0, 1, 2}. We further illustrate the example in Fig. 3. Each strategy tree in Fig. 3(a) corresponds to a distinct numerology shown in Fig. 3(b). In this figure, yellow squares indicate a node using one memory unit during that time slot, while blue squares represent a node using two memory units. For the numerology 𝑚on the right in Fig. 3(b), we observe that 𝜃𝑚(𝑡𝑖+1, 𝑣2) equals 2, as node 𝑣2 simultaneously uses one memory unit for each of the pairs (𝑣1, 𝑣2) and (𝑣2, 𝑣3) during time slot 𝑡𝑖+1. Additionally, it is worth noting that for any given path, feasible strategy trees have a one-to-one and onto mapping to feasible numerologies. Lemma 1. For any given path 𝑝, there exists a one-to-one and onto mapping 𝑓, which maps a feasible strategy tree 𝛾 to a feasible numerology 𝑚, while 𝑓−1 denotes the inverse mapping, i.e., 𝑓(𝛾) = 𝑚and 𝑓−1(𝑚) = 𝛾. Proof. Definition 2 indicates that each strategy tree has a corresponding numerology, i.e., 𝑓(𝛾) = 𝑚. Then, function 𝑓is onto since only the numerologies derived from feasible strategy trees are feasible. It suffices to show that for any two different strategy trees 𝛾1 and 𝛾2, their corresponding numerologies 𝑚1 and 𝑚2 are different, i.e., 𝑚1 = 𝑓(𝛾1) ≠𝑓(𝛾2) = 𝑚2. We prove it by contradiction. Assume that there exist two different strategy trees mapping to the same numerology for the given path 𝑝= {𝑣1, 𝑣2, 𝑣3, ..., 𝑣𝑛}, i.e., 𝑓(𝛾1) = 𝑓(𝛾2) = 𝑚. Let 𝑡𝑟 denote the time slot of root pair 𝑟in 𝛾1. Therefore, 𝑚uses one memory unit of nodes 𝑣1 and 𝑣𝑛at time 𝑡𝑟, respectively, denoted by (𝑣1, 𝑣𝑛) in the strategy tree. By Definition 1, we know that a non-leaf node (𝑣𝑖, 𝑣𝑗) in the swapping part must have exactly two children (𝑣𝑖, 𝑣𝑘) and (𝑣𝑘, 𝑣𝑗), where 𝑖< 𝑘< 𝑗. This means that if we start scanning 𝑚from time slot 𝑡𝑟in decreasing order, we can find exactly one node 𝑘 which spends two memory units at some time slot. Take Fig. 4(a) as an example. We start scanning from root pair (𝑣1, 𝑣6) at time 𝑡5. Then, when reaching 𝑡4, we can find node 𝑣4 with two occupied memory units, implying two children (𝑣1, 𝑣4) and (𝑣4, 𝑣6). Similarly, we keep scanning the remaining time slots from 𝑡4 to 𝑡3. Note that the resource distribution at 𝑡4 is the combination of resource distributions of pairs (𝑣1, 𝑣4) and (𝑣4, 𝑣6), each of which also meets Definition 1, as shown in Fig. 4(b). Thus, we can divide 𝑚into two sub-distributions from 𝑣4, find their children individually, and construct the sub- strategy tree recursively. By doing so, we can reconstruct only one possible strategy tree, implying a contradiction. Then, 𝑓 is a one-to-one and onto mapping, and the lemma follows. □ Definition 3. In the short time slot protocol, with Eq. (1), the fidelity of an entangled pair after one-time-slot idling becomes 𝐹𝜏(𝐹) = 𝐹𝑑(𝐹−1 𝑑(𝐹) + 𝜏), (5) where 𝜏is the actual duration (i.e., length) of one time slot and 𝐹is the fidelity before decay. As two short entangled pairs with fidelity 𝐹1 and 𝐹2 conduct swapping, with Eqs. (2) and (5), the fidelity of the long entangled pair becomes 𝐹𝜏 𝑠(𝐹1, 𝐹2) = 𝐹𝑠(𝐹𝜏(𝐹1), 𝐹𝜏(𝐹2)). (6) We continue to demonstrate the example in Fig. 3. The left strategy tree in Fig. 3(a) (or the left numerology in Fig. 3(b)) and its mirror strategy tree (i.e., the middle case) will have the same fidelity if all the entangled pairs have the same initial fidelity. The right case is similar to the left one but useful as 𝑣2 and 𝑣4 have no available memory at 𝑡𝑖+2 and 𝑡𝑖+1, respectively (i.e., the left and middle cases cannot be chosen). However, it costs more resources and has lower fidelity than the left case. 7 (a) Scanning numerology from time slot 𝑡5 to 𝑡4 (b) Scanning numerology from time slot 𝑡4 to 𝑡3 Fig. 4. An illustrating example of transforming a numerology into a strategy tree. Fig. 5. Overlapping numerologies for two accepted requests. Note that requests’ numerologies could overlap, as shown in Fig. 5, where two requests from 𝑣1 to 𝑣6 and from 𝑣1 to 𝑣3 are denoted by the red- and black-frame numerologies. As the numerology indicates the strategy tree in the form of quantum resource distribution, the QN strategy tree selection problem can map to a quantum resource allocation problem. B. Problem Formulation We formulate the problem based on the above scenario. Definition 4. Consider a network 𝐺= (𝑉, 𝐸), where each node 𝑣∈𝑉has a memory limit 𝑐𝑡(𝑣) ∈Z+ for each time slot 𝑡∈𝑇= {1, 2, ..., |𝑇|}. The network also includes a set of SD pairs 𝐼. Each pair 𝑖∈𝐼has a predefined path set 𝑃(𝑖) derived by an unspecified routing algorithm.3 Each path 𝑝∈𝑃(𝑖) has the success probability Pr(𝑝). Given the fidelity threshold b𝐹, the Optimized Decoherence-aware Strategy for Entangling and Swapping Scheduling Problem (TETRIS) aims to maximize the expected fidelity sum of the chosen numerology for each SD pair, subject to the following constraints. 1) At most, a numerology associated with one path from the path set 𝑃(𝑖) can be selected for each SD pair.3 2) The amount of occupied memory on each node 𝑣does not exceed its memory limit 𝑐𝑡(𝑣) at each time slot 𝑡. 3) The selected numerology’s fidelity should be no less than the threshold b𝐹to ensure high-quality teleportation. Let 𝑀𝑝(𝑖) denote the set of all numerologies that can be implemented from the path 𝑝∈𝑃(𝑖) within 𝑇(i.e., the period from time slot 1 to time slot |𝑇|) for an SD pair 𝑖∈𝐼, and let 3There are many algorithms for finding routing paths for SD pairs [9]–[11] to derive the predefined path set 𝑃(𝑖) for each pair 𝑖∈𝐼. 𝐹(𝑚) denote the fidelity of an entangled pair constructed with numerology 𝑚∈𝑀𝑝(𝑖). Following Definition 2, let 𝜃𝑚(𝑡, 𝑣) ∈ {0, 1, 2} denote the required amount of memory on node 𝑣∈𝑉 at time slot 𝑡∈𝑇when implementing a numerology 𝑚∈𝑀𝑝(𝑖) for SD pair 𝑖∈𝐼. In this way, TETRIS can be formulated as the following integer linear programming (ILP), where the binary variable 𝑥𝑖𝑝 𝑚indicates whether a numerology 𝑚∈𝑀𝑝(𝑖) is chosen (𝑥𝑖𝑝 𝑚= 1) or not (𝑥𝑖𝑝 𝑚= 0) for SD pair 𝑖∈𝐼and path 𝑝∈𝑃(𝑖). maximize ∑︁ 𝑖∈𝐼 ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) Pr(𝑝) · 𝐹(𝑚) · 𝑥𝑖𝑝 𝑚 (7a) subject to ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) 𝑥𝑖𝑝 𝑚≤1, ∀𝑖∈𝐼 (7b) ∑︁ 𝑖∈𝐼 ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) 𝜃𝑚(𝑡, 𝑣) · 𝑥𝑖𝑝 𝑚≤𝑐𝑡(𝑣), ∀𝑣∈𝑉, ∀𝑡∈𝑇 (7c) (𝐹(𝑚) −b𝐹) · 𝑥𝑖𝑝 𝑚≥0, ∀𝑖∈𝐼, ∀𝑝∈𝑃(𝑖), ∀𝑚∈𝑀𝑝(𝑖) (7d) 𝑥𝑖𝑝 𝑚∈{0, 1}, ∀𝑖∈𝐼, ∀𝑝∈𝑃(𝑖), ∀𝑚∈𝑀𝑝(𝑖) (7e) The objective (7a) maximizes the expected fidelity sum of all SD pairs, where Pr(𝑝) is the success probability of path 𝑝. Constraint (7b) ensures that at most one numerology can be chosen from a single path 𝑝within the path set 𝑃(𝑖) for each SD pair 𝑖∈𝐼. Constraint (7c) guarantees that the total amount of memory occupied on each node 𝑣∈𝑉at every time slot 𝑡∈𝑇does not exceed its memory limit. Constraint (7d) ensures that the selected numerology fidelity should be no less than the threshold b𝐹the controller sets based on its policy. C. NP-hardness and Inapproximability We prove the hardness of TETRIS by demonstrating that even without the constraint (7d), the problem, termed TETRIS- N, is very challenging. Specifically, we prove the NP-hardness and inapproximability of TETRIS-N in Theorem 1 by reduc- ing the well-known NP-hard problem, the MIS [41], to the TETRIS-N. Therefore, the TETRIS, as a general case of the TETRIS-N, must be at least as hard as TETRIS-N, thereby implying Corollary 1. Note that the MIS asks for a maximum set of vertices in a graph, no two of which are adjacent. 8 (a) MIS instance G (b) Path construction for 𝐺 (c) TETRIS-N instance 𝐺 (d) Numerology of 𝑟1 (e) Resource contention for 𝑟2 Fig. 6. An example of the reduction from the MIS to the TETRIS. Theorem 1. The TETRIS-N is NP-hard and cannot be approx- imated within any factor of |𝐼|1−𝜖unless 𝑁𝑃= 𝑃for any fixed 𝜖> 0, where |𝐼| is the number of SD pairs. Proof. The proof idea is to create an SD pair and a dedicated path and then add them to the TETRIS-N instance for each node in the MIS instance such that the corresponding SD pairs of any two neighboring nodes in the MIS instance cannot be satisfied simultaneously within 𝑇. In the following, for any given MIS instance {G = {V, E}}, we show how to construct the corresponding TETRIS-N instance in detail. For each node in V, we create a corresponding SD pair and add it to the TETRIS-N instance. Subsequently, for each created SD pair, we construct a dedicated path that consists of 2⌈log |V|⌉+ 1 nodes with 2⌈log |V|⌉edges and then add the path to 𝐺of the TETRIS-N instance. Afterward, |𝑇| is set to ⌈log |V|⌉+ 2. In addition, the amounts of memory of the source and destination on the path are set to 1, and those of the other nodes (i.e., intermediate nodes) on the path are set to 2. Then, the probability of each path is uniformly set to 1 (i.e., Pr(𝑝) = 1). Last, for each pair of neighboring nodes in G, we pick an intermediate node that has not been picked yet from each of their corresponding paths in 𝐺and then merge the two picked nodes into a single node. Note that the node induced by merging cannot be picked for merging again. The construction can be done in polynomial time. Fig. 6 illustrates an example of instance construction. The MIS instance G includes a clique with three nodes, 𝑟1, 𝑟2, and 𝑟3, which are drawn in the red, yellow, and blue frames, respectively. Then, for each node in G, we create an SD pair and construct a dedicated path that consists of 2⌈log 3⌉+ 1 = 5 nodes, as shown in Fig. 6(b). Note that the color of the edges on each path corresponds to the color of relevant node in G, and thus the path {𝑣𝑟1 1 , 𝑣𝑟1 2 , ..., 𝑣𝑟1 5 } of SD pair 𝑟1 in 𝐺represents the node 𝑟1 in G. In G, an orange edge is between 𝑟1 and 𝑟2. Therefore, 𝑣𝑟1 2 and 𝑣𝑟2 2 are selected from the two corresponding paths, respectively, and merged into an orange node denoted by 𝐶12 in 𝐺, as shown in Fig. 6(c). Similarly, 𝐶23 and 𝐶13 are derived due to the edges (𝑟2, 𝑟3) and (𝑟1, 𝑟3), respectively. We then show that the solution of the MIS instance is one-to- one mapping to that of the constructed TETRIS-N instance. In this TETRIS-N instance, every SD pair has only one path with scarce memory, and its candidate strategy trees include only the full binary tree due to the subtle setting of |𝑇|, as shown in Fig. 6(d), where |𝑇| is set to ⌈log 3⌉+ 2 = 4. Therefore, for any two neighboring nodes in G, the numerologies of the two corresponding pairs in the TETRIS-N instance cannot be selected at the same time since the merged node on the two paths in 𝐺has no sufficient memory to serve the two pairs within 𝑇. In contrast, they can be selected simultaneously if the two corresponding nodes in G are not adjacent. Figs. 6(d) and 6(e) continue to show the example. Assume that SD pair 𝑟1 is admitted to construct numerology on path {𝑣𝑟1 1 , C12, C13, 𝑣𝑟1 4 , 𝑣𝑟1 5 }. In such a case, 𝑟2 cannot construct the numerology on the path {𝑣𝑟2 1 , C12, C23, 𝑣𝑟2 4 , 𝑣𝑟2 5 } since the memory of C12 has been occupied by 𝑟1 (i.e., resource con- tention), as shown in Fig. 6(e). Thus, the solutions of any MIS instance and its corresponding TETRIS-N instance are one-to- one mapping. Thereby, TETRIS-N is NP-hard. We continue to show that the TETRIS-N does not admit any approximation algorithm within a factor of |𝐼|1−𝜖for any fixed 𝜖> 0 by contradiction. To this end, suppose there exists an |𝐼|1−𝜖-approximation algorithm 𝐴for the TETRIS-N. Follow- ing the above-mentioned instance construction, any arbitrary MIS instance has a corresponding TETRIS-N instance. Assume the optimal solution for the MIS instance has 𝑘nodes, implying the optimal solution for the TETRIS-N instance maximizes the expected fidelity sum by satisfying the corresponding 𝑘 pairs. Then, we can employ algorithm 𝐴to find a solution that satisfies at least 𝑘 |𝐼|1−𝜖pairs, and the found solution corresponds to a solution with at least 𝑘 𝑛1−𝜖nodes for the MIS instance, where 𝑛denotes the number of nodes in the corresponding MIS instance. This is because |𝐼| is set to 𝑛during the instance construction. However, unless 𝑁𝑃= 𝑃, the MIS does not admit any approximation ratio within 𝑛1−𝜖for any fixed 𝜖> 0 [41]. Thus, algorithm 𝐴must not exist; otherwise, 𝐴could be employed to derive an 𝑛1−𝜖-approximation algorithm for the MIS. The theorem follows. □ Corollary 1. The TETRIS is at least as hard as the TETRIS-N. V. THE DESIGN OF FRACTIONAL NUMEROLOGY PACKING AND ROUNDING ALGORITHM (FNPR) In this section, we first design a bi-criteria approximation algorithm FNPR for the TETRIS-N (i.e., it has an approxi- mation ratio while relaxing the memory limit by a bounded ratio), then remedy the relaxation on the memory limit, and finally extend it to support the TETRIS. In the beginning, we observe the case where 𝜏and |𝑇| are small. In this case, the impact of time slot decoherence diminishes, therefore, all feasible numerologies within 𝑇achieve similar fidelity. To better illustrate this perspective, we conduct the simulation to observe the results, as shown in Fig. 7, where the average path 9 length of each SD pair is about seven. This figure shows that as the time slot length 𝜏and the batch size of time slots 𝑇 decrease, the ratio of 𝐹max 𝐹min approaches 1, where 𝐹max denotes maximum fidelity that can be achieved among all feasible numerologies within 𝑇, while 𝐹min denotes the minimum one. Specifically, this is because: 1) A smaller time slot length 𝜏 reduces decoherence and fidelity loss within each time slot, thereby increasing 𝐹min. 2) With a smaller batch size of time slots 𝑇, the overall processing time decreases and further limits the feasible numerologies available for each request. Thus, these feasible numerologies yield comparable fidelity levels, as 𝑇restricts the strategies for entangling and swapping. Therefore, in this case, we can temporarily neglect 𝐹(𝑚) in the TETRIS-N and consider it later when deriving the bi-criteria approximation ratio. Then, the modified problem formulation has the objective as Eq. (8) and the constraints (7b), (7c), and (7e). maximize ∑︁ 𝑖∈𝐼 ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) Pr(𝑝) · 𝑥𝑖𝑝 𝑚. (8) Specifically, the FNPR employs a combinatorial algorithm with a cleverly-designed DP-based separation oracle, a ran- domized rounding technique, and two heuristics for ensuring feasibility, as detailed in Sections V-A, V-B, V-C, and V-E, respectively. Furthermore, in Section V-E, we extend the FNPR to address the constraint (7d), guaranteeing the solution feasi- bility for the TETRIS. Overall, the FNPR can approximate the optimum solution of the TETRIS-N within an approximation ratio of 𝑂( 𝐹max 𝐹min ) in the price of a bounded relaxation ratio of 𝑂(log(|𝑉| · |𝑇|)) on the memory limit (i.e., bi-criteria approx- imation). Then, with the two heuristics, FNPR’s solution can be improved to meet the fidelity threshold and approach the optimum solution as 𝜏and |𝑇| are small enough, i.e., 𝐹max 𝐹min ≈1. A. The Combinatorial Algorithm We first obtain the relaxed LP by replacing constraint (7e) with constraint (9) as follows: 𝑥𝑖𝑝 𝑚≥0, ∀𝑖∈𝐼, ∀𝑝∈𝑃(𝑖), ∀𝑚∈𝑀𝑝(𝑖). (9) However, the number of variables in our ILP grows expo- nentially with the input size since the number of feasible numerologies for each SD pair may be exponential (i.e., the number of possible binary trees). Thus, the relaxed LP (8), (7b), (7c), (9) cannot be solved by an LP solver in polynomial time. On the other hand, by [42], if an LP is in standard form and has a polynomial number of constraints (except for non- negativity constraints of variables), the number of variables in its dual LP is also polynomial, as described in Definition 5. Then, by combinatorial algorithms, we can obtain a near- optimum solution to the primal LP in polynomial time if the following properties are satisfied [43]. 1) Each coefficient on the left-hand side is not greater than the constant on the right-hand side for every inequality of the constraint in the primal LP (except for the non-negativity constraints of variables). 1.0 1.4 1.8 2.2 τ(10−3) 1 3 5 7 9 11 13 15 |T| 1.0 1.2 1.4 1.6 1.8 2.0 Fmax Fmin Fig. 7. The ratio of 𝐹max 𝐹min for different values of 𝜏and |𝑇|. 2) In the dual LP, all coefficients on the left-hand-side of inequality and the constant on the right-hand-side of in- equality are positive in each constraint (except for the non- negativity constraints of variables). 3) A separation oracle exists to tell whether there is a violated constraint for the dual LP. Definition 5. [42] The LP standard form is max{𝑐𝑇𝑥|𝐴𝑥≤ 𝑏, 𝑥≥0}, where 𝑥is an 𝑛× 1 variable vector, and 𝑐, 𝑏, and 𝐴 denote 𝑛× 1, 𝑚× 1, 𝑚× 𝑛constant vectors, respectively. The corresponding dual LP is min{𝑏𝑇𝑦|𝐴𝑇𝑦≥𝑐, 𝑦≥0}, where 𝑦 is an 𝑚× 1 variable vector. Note that 𝑥≥0 and 𝑦≥0 are the non-negativity constraints of 𝑥and 𝑦. Clearly, the relaxed primal LP (8), (7b), (7c), and (9) meets the first property. By Definition 5, we associate dual variables 𝛼𝑖and 𝛽𝑡 𝑣with each constraint in (7b) and (7c), respectively. Thus, we obtain the corresponding dual LP (10a)−(10c). It can be seen that the dual LP also satisfies the second property. minimize ∑︁ 𝑖∈𝐼 𝛼𝑖+ ∑︁ 𝑣∈𝑉 ∑︁ 𝑡∈𝑇 𝑐𝑡(𝑣) · 𝛽𝑡 𝑣 (10a) subject to 𝛼𝑖+ ∑︁ 𝑣∈𝑉 ∑︁ 𝑡∈𝑇 𝜃𝑚(𝑡, 𝑣) · 𝛽𝑡 𝑣≥Pr(𝑝), ∀𝑖∈𝐼, ∀𝑝∈𝑃(𝑖), ∀𝑚∈𝑀𝑝(𝑖) (10b) 𝛼𝑖, 𝛽𝑡 𝑣≥0, ∀𝑖∈𝐼, ∀𝑣∈𝑉, ∀𝑡∈𝑇 (10c) Then, we design a separation oracle for the dual LP and get the fractional solution for the relaxed primal LP in Section V-B. However, the fractional solution may be infeasible for the ILP (8), (7b), (7c), and (7e). To this end, we propose an LP rounding algorithm and improve the solution in Sections V-C and V-E. Last, we analyze its performance in Section V-D. B. The Separation Oracle Given an arbitrary (fractional) solution (𝛼, 𝛽) to the dual LP, the separation oracle tells whether a violated constraint exists. 10 ˆ𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) =   𝛽𝑡 𝑠+ 𝛽𝑡 𝑑+ 𝛽𝑡−1 𝑠 + 𝛽𝑡−1 𝑑 if (𝑠,𝑑)∈𝐸, 𝑡≥2, 𝑐𝑡(𝑠) and 𝑐𝑡−1 (𝑠)≥𝜎𝑠, 𝑐𝑡(𝑑) and 𝑐𝑡−1 (𝑑)≥𝜎𝑑; 𝛽𝑡 𝑠+ 𝛽𝑡 𝑑+ min  ˆ𝑔(𝑡−1, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)), min 𝑘∈I(𝑠,𝑑) ˆℎ(𝑡−1, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑))  else if 𝑡≥2, 𝑐𝑡(𝑠)≥𝜎𝑠, 𝑐𝑡(𝑑)≥𝜎𝑑; ∞ otherwise. (13) ˆℎ(𝑡, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑)) = min  ˆ𝑔(𝑡, (𝑠, 𝑘), (𝜎𝑠, 2)) + ˆ𝑔(𝑡, (𝑘, 𝑑), (1, 𝜎𝑑)), ˆ𝑔(𝑡, (𝑠, 𝑘), (𝜎𝑠, 1)) + ˆ𝑔(𝑡, (𝑘, 𝑑), (2, 𝜎𝑑))  . (14) It is easy to examine every constraint in (10c) in polynomial time, but the challenge arises in identifying whether a violated constraint exists in (10b). Note that the number of SD pairs and the size of path set for each SD pair are both polynomial. Therefore, it suffices to identify the most violated constraint in (10b) for each pair 𝑖∈𝐼and each path 𝑝∈𝑃(𝑖). This can be done by computing min 𝑚∈𝑀𝑝(𝑖) ∑︁ 𝑣∈𝑉 ∑︁ 𝑡∈𝑇 𝜃𝑚(𝑡, 𝑣) · 𝛽𝑡 𝑣. (11) To this end, we subtly design a DP algorithm to iteratively solve a larger subproblem by examining its smaller subprob- lems. Specifically, any numerology 𝑚∈𝑀𝑝(𝑖) has a one-to- one and onto mapping to a strategy tree through path 𝑝with a root represented by the SD pair (𝑠𝑖, 𝑑𝑖), as shown in Fig. 3. We introduce the function ˆ𝑓(𝑇, (𝑠, 𝑑)) to find a numerology 𝑚that can generate an entangled pair from node 𝑠to node 𝑑through the path 𝑝within 𝑇, while minimizing Í 𝑣∈𝑉 Í 𝑡∈𝑇𝜃𝑚(𝑡, 𝑣)·𝛽𝑡 𝑣. In this way, the desired numerology for a given SD pair 𝑖and path 𝑝∈𝑃(𝑖) can be found by using ˆ𝑓(𝑇, (𝑠𝑖, 𝑑𝑖)). To compute ˆ𝑓(𝑇, (𝑠, 𝑑)), we examine all local minimum numerologies, each corresponding to a strategy tree rooted at a time slot 𝑡∈𝑇. We know that any feasible numerology 𝑚requires 𝜃𝑚(𝑡, 𝑣) memory units on node 𝑣at time slot 𝑡, and a strategy tree can be built by combining two sub-strategy trees. Thus, we define a function ˆ𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) to find a numerology 𝑚with a (sub-)strategy tree rooted at (𝑠, 𝑑) at time slot 𝑡, which can minimize Í 𝑣∈𝑉 Í 𝑡′∈𝑇𝜃𝑚(𝑡′, 𝑣)·𝛽𝑡′ 𝑣while ensuring the memory limit 𝑐𝑡(𝑠) ≥𝜎𝑠and 𝑐𝑡(𝑑) ≥𝜎𝑑, where 𝜎𝑠, 𝜎𝑑∈{1, 2}. Then, Eq. (12) derives ˆ𝑓(𝑇, (𝑠, 𝑑)). ˆ𝑓(𝑇, (𝑠, 𝑑)) = min 𝑡∈𝑇  ˆ𝑔(𝑡, (𝑠, 𝑑), (1, 1)) (12) We derive ˆ𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) according to the three cases. 1) Leaves of strategy tree. If (𝑠, 𝑑) ∈𝐸and 𝑡≥2, then we reach a pair of leaves in a strategy tree. Thus, it returns the sum of values of these two leaves and their external nodes as long as the memory limits of 𝑠and 𝑑are sufficient. 2) Non-leaves of strategy tree. If (𝑠, 𝑑) ∉𝐸and 𝑡≥2, then we have two possible cases at time slot 𝑡−1. 1) Both 𝑠 and 𝑑idle. 2) 𝑠and 𝑑conduct swapping to consume two entangled links (𝑠, 𝑘) and (𝑘, 𝑑) to acquire an entangled link (𝑠, 𝑑), where 𝑘is an intermediate node between 𝑠and 𝑑on path 𝑝. It examines the values of 𝑠and 𝑑plus the value of every case and then returns the minimum if the memory limits of 𝑠and 𝑑are sufficient. 3) Wrong time or no capacity. If the time slot 𝑡≤1, it is impossible to have an entangled link (𝑠, 𝑑) at time slot 𝑡. Moreover, if the memory limits of 𝑠and 𝑑do not meet the requirements, the numerology does not exist. Thus, these cases result in an infeasible solution. Based on the above cases, to derive ˆ𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)), we additionally define the function ˆℎ(𝑡, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑)) in Eq. (14) to represent the minimum Í 𝑣∈𝑉 Í 𝑡′∈𝑇𝜃𝑚(𝑡′, 𝑣) · 𝛽𝑡′ 𝑣of the entangled pair (𝑠, 𝑑) that can be generated by limiting the amount of used memory on node 𝑘in either the left or right subproblem. Specifically, if the amount of memory on node 𝑘 is limited in the left (or right) subproblem, then we set 𝜎𝑑= 2 (or 𝜎𝑠= 2) to check whether the amount of memory of 𝑘is at least 2. Otherwise, the left (or right) subproblem has no limit, and we set 𝜎𝑑= 1 (or 𝜎𝑠= 1). Then, the recurrence relation of ˆ𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) can be expressed as Eq. (13), where I(𝑠, 𝑑) denotes the set of intermediate nodes on the path 𝑝 between 𝑠and 𝑑(i.e., the nodes on 𝑝but excluding 𝑠and 𝑑). Eqs. (12)−(14) can help find the numerology 𝑚that minimize Í 𝑣∈𝑉 Í 𝑡∈𝑇𝜃𝑚(𝑡, 𝑣) · 𝛽𝑡 𝑣for any given 𝑖∈𝐼and 𝑝∈𝑃(𝑖). With Eqs. (12)−(14), we can efficiently identify the most violated constraint in (10b), satisfying the third property. As all three properties are satisfied, we can solve the relaxed primal LP and obtain a fractional solution ˆ𝑥𝑖𝑝 𝑚using the combinatorial algorithm in [43] that incorporates our DP separation oracle. C. The Randomized Rounding Given a fractional solution ˆ𝑥𝑖𝑝 𝑚of our primal LP, we design a randomized rounding algorithm to obtain the solution for TETRIS-N. Specifically, let ˆ𝑀𝑝(𝑖) denote the set of numerolo- gies with ˆ𝑥𝑖𝑝 𝑚> 0 for each SD pair 𝑖∈𝐼and path 𝑝∈𝑃(𝑖). Then, let ˆ𝑀(𝑖) = ˆ𝑀𝑝1(𝑖) ∪ˆ𝑀𝑝2(𝑖) ∪· · · ∪ˆ𝑀𝑝𝑛(𝑖), where {𝑝1, 𝑝2, . . . , 𝑝𝑛} = 𝑃(𝑖). It is worth noting that the size of ˆ𝑀(𝑖) is polynomial since the combinatorial algorithm terminates in polynomial time [43]. We then interpret ˆ𝑥𝑖𝑝 𝑚as the probability of selecting the numerology 𝑚for each SD pair 𝑖. For example, assume that SD pair 𝑖has three numerologies 𝑚1, 𝑚2, and 𝑚3 in ˆ𝑀(𝑖), and they are ˆ𝑥𝑖𝑝1 𝑚1 = 0.1, ˆ𝑥𝑖𝑝1 𝑚2 = 0.2, and ˆ𝑥𝑖𝑝2 𝑚3 = 0.3, respectively. Thus, the probabilities of selecting 𝑚1, 𝑚2, and 𝑚3 are 0.1, 0.2, and 0.3, respectively, while the probability of not selecting any numerology for SD pair 𝑖is 0.4. D. Bi-Criteria Approximation Ratio and Time Complexity We first analyze the feasibility and the relaxation bound of Eqs. (7b) and (7c) after randomized rounding, and the time complexity in Lemmas 2, 3, and 4, respectively. We then derive the bi-criteria approximation ratio in Theorem 3. To this end, we use the Chernoff bound in Theorem 2 as follows. 11 Theorem 2 (Chernoff bound [44]). There is a set of 𝑛 independent random variables 𝑥1, ..., 𝑥𝑛, where 𝑥𝑖∈[0, 1] for each 𝑖∈[1, 𝑛]. Let 𝑋= Í𝑛 𝑖=1 𝑥𝑖and 𝜇= E[𝑋]. Then, Pr " 𝑛 ∑︁ 𝑖=1 𝑥𝑖≥(1 + 𝜖)𝜇 # ≤𝑒 −𝜖2 𝜇 2+𝜖. (15) Lemma 2. The FNPR will satisfy constraint (7b). Proof. The FNPR selects at most one numerology from the union of all sets 𝑀𝑝(𝑖) for each SD pair 𝑖using the randomized rounding in Section V-C. Thus, the lemma holds. □ Lemma 3. The probability that the FNPR relaxes constraint (7c) for any node 𝑣at any time slot 𝑡by more than a factor of (1 + 4 ln(|𝑉| · |𝑇|)) is at most 1 |𝑉|2·|𝑇|2 , which is negligible. Proof. For each node 𝑣at time slot 𝑡, we define a random variable 𝑧𝑖 𝑡,𝑣that denotes the amount of memory occupied on node 𝑣at time slot 𝑡for each SD pair 𝑖. Note that a quantum node at time slot 𝑡may require 𝜃𝑚(𝑡, 𝑣) units of memory to perform numerology 𝑚. Thus, 𝑧𝑖 𝑡,𝑣can be expressed as: 𝑧𝑖 𝑡,𝑣= ( 𝜃𝑚(𝑡, 𝑣) with probability ˆ𝑥𝑖𝑝 𝑚; 0 otherwise. Note that their sum 𝑍𝑡,𝑣= Í 𝑖∈𝐼𝑧𝑖 𝑡,𝑣is exactly the amount of memory needed on 𝑣at time slot 𝑡after rounding. Then, we derive the upper bound of the expectation of the amount of memory occupied on node 𝑣at time slot 𝑡as follows: E[𝑍𝑡,𝑣] = E "∑︁ 𝑖∈𝐼 𝑧𝑖 𝑡,𝑣 # = ∑︁ 𝑖∈𝐼 E[𝑧𝑖 𝑡,𝑣] = ∑︁ 𝑖∈𝐼  ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) 𝜃𝑚(𝑡, 𝑣) · ˆ𝑥𝑖𝑝 𝑚  ≤𝑐𝑡(𝑣). Note that the last inequality directly follows the memory limit constraint (7c). Afterward, we can prove the lemma by Chernoff bound as follows: Pr∃𝑣∈𝑉, ∃𝑡∈𝑇: 𝑍𝑡,𝑣≥(1 + 4 ln(|𝑉| · |𝑇|)) · 𝑐𝑡(𝑣) ≤ ∑︁ 𝑣∈𝑉 ∑︁ 𝑡∈𝑇 Pr  𝑍𝑡,𝑣 𝑐𝑡(𝑣) ≥1 + 4 ln(|𝑉| · |𝑇|)  ≤ ∑︁ 𝑣∈𝑉 ∑︁ 𝑡∈𝑇 Pr ∑︁ 𝑖∈𝐼 𝑧𝑖 𝑡,𝑣 𝑐𝑡(𝑣) ≥1 + 4 ln(|𝑉| · |𝑇|) ! ≤|𝑉| · |𝑇| · 𝑒 −(4 ln(|𝑉|·|𝑇|))2 2+4 ln(|𝑉|·|𝑇|) ≤|𝑉|−2 · |𝑇|−2. Note that 𝑧𝑖 𝑡,𝑣 𝑐𝑡(𝑣) ≤1 since the separation oracle adopts Eq. (13) to examine the memory limit, meeting the condition of the Chernoff bound (i.e., every variable 𝑧𝑖 𝑡,𝑣 𝑐𝑡(𝑣) ranges from 0 to 1). Besides, the last inequality holds when |𝑉| · |𝑇| ≥5. □ Lemma 4. The FNPR can be executed in polynomial time. Proof. According to [43], the combinatorial algorithm executes 𝑂(𝜔−2𝜂1 log 𝜂1) times of separation oracle, where 𝜔is a constant error bound of primal LP solution and 𝜂1 is the num- ber of constraints (except for the non-negativity constraints), i.e., 𝜂1 = (|𝑉| · |𝑇| + |𝐼|). Thus, it outputs 𝑂(𝜔−2𝜂1 log 𝜂1) numerologies for randomized rounding. In addition, each time of separation oracle takes 𝜂2 = 𝑂(|𝑉|3 · |𝑇| · |𝐼|). Then, the combinatorial algorithm takes 𝑂(𝜔−2𝜂1𝜂2 log 𝜂1), and thus the overall time complexity of FNPR is 𝑂(𝜔−2𝜂1𝜂2 log 𝜂1). □ Theorem 3. The bi-criteria approximation ratio of the FNPR for the TETRIS-N is (𝑂( 𝐹max 𝐹min ), 𝑂(log(|𝑉| · |𝑇|)). Proof. With Lemmas 2, 3, and 4, we can know that FNPR relaxes the memory limit by a ratio of at most 𝑂(log(|𝑉| · |𝑇|)) with high probability and runs in polynomial time. It suffices to focus on the gap between the optimal solution and the FNPR’s solution. For ease of presentation, let 𝑂𝑃𝑇, 𝑂𝑃𝑇𝑛𝐹, and 𝑂𝑃𝑇𝑛𝐹 𝑃𝐿be the optimum values of the TETRIS-N (i.e., the ILP (7) with no constraint (7d)), the TETRIS-N without considering fidelity (i.e., the ILP (8), (7b), (7c), (7e)), and the primal LP (8), (7b), (7c), (9), respectively. Clearly, 𝑂𝑃𝑇≤𝑂𝑃𝑇𝑛𝐹· 𝐹max because 𝑂𝑃𝑇𝑛𝐹has the maximum number of SD pairs with an allocated numerology. Besides, 𝑂𝑃𝑇𝑛𝐹≤𝑂𝑃𝑇𝑛𝐹 𝑃𝐿since the primal LP (8), (7b), (7c), (9) is a LP relaxation of the ILP (8), (7b), (7c), (7e). In addition, let ˆ𝑥𝑖𝑝 𝑚denote the solution output by the combinatorial algorithm. Then, let ¯𝑥𝑖𝑝 𝑚be the rounded solution. Since the expected value of the rounded solution is equal to the optimal solution of the primal LP, the expected fidelity sum of the rounded solution can be bound as follows: E  ∑︁ 𝑖∈𝐼 ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) Pr(𝑝) · 𝐹(𝑚) · ¯𝑥𝑖𝑝 𝑚  ≥ ∑︁ 𝑖∈𝐼 ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) E h Pr(𝑝) · 𝐹min · ¯𝑥𝑖𝑝 𝑚 i = 𝐹min · ∑︁ 𝑖∈𝐼 ∑︁ 𝑝∈𝑃(𝑖) ∑︁ 𝑚∈𝑀𝑝(𝑖) Pr(𝑝) · ˆ𝑥𝑖𝑝 𝑚 = 𝐹min · 𝐴𝑃𝑃𝑛𝐹 𝑃𝐿≥𝐹min 1 + 𝜔· 𝑂𝑃𝑇𝑛𝐹 𝑃𝐿 ≥𝐹min 1 + 𝜔· 𝑂𝑃𝑇𝑛𝐹≥𝐹min 𝐹max · 𝑂𝑃𝑇 1 + 𝜔, where 𝐴𝑃𝑃𝑛𝐹 𝑃𝐿is the near-optimal value of primal LP from the combinatorial algorithm with a user-defined constant error bound 𝜔> 0, and the second inequality holds due to [43]. □ E. Heuristic Algorithms for Solution Improvement 1) Remedy for the Memory Limit Relaxation: Since the solution acquired by the FNPR after rounding may relax the memory limit on nodes with a bounded ratio, the following steps are adopted to deal with the memory limit relaxation. Let ¯𝑥𝑖𝑝 𝑚and ¯𝑀𝑝(𝑖) denote the variables in the rounded solution and the set of numerologies with ¯𝑥𝑖𝑝 𝑚> 0 for each 𝑖∈𝐼and 𝑝∈𝑃(𝑖), respectively. The heuristic has two phases as follows. In the first phase, for each node 𝑣and each time slot 𝑡 where the memory limit is exceeded, we sort the numerolo- gies occupying 𝑣’s memory at time slot 𝑡(i.e., ¯𝑀(𝑣, 𝑡) = {𝑚| 𝜃𝑚(𝑡, 𝑣)· ¯𝑥𝑖𝑝 𝑚> 0, ∀𝑚∈¯𝑀𝑝(𝑖), ∀𝑝∈𝑃(𝑖), ∀𝑖∈𝐼}) in non- decreasing order of their expected fidelity, and then iteratively 12 remove numerologies in ¯𝑀(𝑣, 𝑡) based on this order until the memory limit of node 𝑣at time slot 𝑡is satisfied to make the solution feasible. In the second phase, we utilize the residual resources as much as possible. We iteratively allocate the numerology 𝑚for each pair 𝑖in non-increasing order of the value of those positive variables ˆ𝑥𝑖𝑝 𝑚(i.e., the fractional solution of LP (8), (7b), (7c), (9)) until no numerology can be chosen or accommodated by the residual network. The heuristic is a polynomial-time algo- rithm since the number of positive variables ˆ𝑥𝑖𝑝 𝑚is polynomial. 2) Extension to Support the TETRIS: We design the FNPR based on the observation of numerology characteristics in the context of small 𝜏and |𝑇|. However, as 𝜏and |𝑇| increase, the FNPR may not guarantee solution feasibility for the TETRIS. To address this limitation, we extend the FNPR to support the TETRIS by introducing a heuristic as follows. Following the heuristic proposed in Section V-E1, we additionally add one step to remove the numerology from ¯𝑀(𝑣, 𝑡) if its fidelity is less than the threshold b𝐹in the first phase. In the second phase, we only add the numerology 𝑚with adequate fidelity for each pair 𝑖in non-increasing order of the value of those positive variables ˆ𝑥𝑖𝑝 𝑚until no 𝑚can be chosen or accommodated. This heuristic can ensure solution feasibility, and later, the numerical results show the modified solution can approach the optimum. VI. THE DESIGN OF FIDELITY-LOAD TRADE-OFF ALGORITHM (FLTO) Although the FNPR can approximate the optimum when 𝜏 and |𝑇| are small, its performance might decrease when 𝜏or |𝑇| increases since it neglects fidelity loss. However, it is non- trivial to find the optimal solution for the TETRIS since every SD pair could have an exponential number of numerologies for selection. Fortunately, the separation oracle in the FNPR serves as inspiration for determining the optimal numerol- ogy. Following this inspiration and the intrinsic properties of TETRIS, we first design a DP algorithm that can find the optimal numerology with the maximum expected fidelity on the selected path 𝑝∈𝑃(𝑖) for any given single SD pair 𝑖. Intuitively, we can derive a solution for the TETRIS by iteratively adding a new request based on the available resource by invoking the DP algorithm until no more request can be satisfied. Nonetheless, such a greedy algorithm tends to fall into a local optimum trap since it lacks an overview of current resource utilization for load balancing unless a proper index is provided for the greedy algorithm to estimate the efficiency of the current allocation. Inspired by [23], we subtly design the resource efficiency index (REI) to evaluate a numerology. With REI, our greedy algorithm can mitigate severe network congestion and increase the fidelity sum as 𝜏or |𝑇| grows. A. Numerology with the Max. Exp. Fidelity for Single Request Similar to Section V-B, we exploit DP to iteratively solve a larger subproblem by examining the optimum solution of each smaller subproblem to derive the optimal solution for ILP (7a)−(7e) when the SD pair 𝑖is single. Hence, we introduce the recursive function 𝑓(𝑇, (𝑠, 𝑑)) to return the maximum fidelity that can be achieved between node 𝑠and node 𝑑on a specific path 𝑝within 𝑇as well as the corresponding numerology. Similar to Eq. (12), 𝑓(𝑇, (𝑠, 𝑑)) requires an additional function 𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) to derive its value, as shown in Eq. (16). 𝑓(𝑇, (𝑠, 𝑑)) = max 𝑡∈𝑇  𝑔(𝑡,(𝑠, 𝑑), (1, 1)) | 𝑔(𝑡, (𝑠, 𝑑), (1, 1)) ≥b𝐹 . (16) Note that if the derived maximum fidelity is less than the fidelity threshold b𝐹, the corresponding path will not be con- sidered. Following Eq. (13), to derive 𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)), we define the function ℎ(𝑡, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑)) in Eq. (18), which represents the maximum fidelity of the entangled pair (𝑠, 𝑑) that can be generated by limiting the amount of used memory on node 𝑘in either the left or right subproblem. The details are omitted due to the similarity, and the recurrence relation of 𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) can be expressed as Eq. (17). In this way, we can identify the numerology with the maximum fidelity on any specific path for any given single SD pair. Subsequently, it is possible to find the numerology with the maximum expected fidelity among all paths in 𝑃(𝑖) since the size of predefined path set 𝑃(𝑖) is polynomial. To this end, we calculate Pr(𝑝) · 𝐹( ˆ𝑚) of the numerology ˆ𝑚returned by Eq. (16) for each 𝑝∈𝑃(𝑖) and choose the numerology with the highest value. B. Greedy Algorithm with DP Technique We can design a greedy algorithm to solve the ILP (7a)−(7e) by utilizing the DP algorithm in Section VI-A. That is, we iteratively choose the numerology with the maximum expected fidelity by the DP algorithm among all SD pairs and then allocate the resources to the corresponding SD pair until no more numerologies can be chosen. However, such a naive algorithm may cause congestion. To remedy the side effect, we design the REI to help us choose a numerology while avoiding hot spots. Specifically, the REI is defined as follows: ℜ(𝑖, 𝑚) = Pr(𝑝) · 𝐹(𝑚) Í 𝑣∈𝑉 Í 𝑡∈𝑇 𝜃𝑚(𝑡,𝑣) 𝑐𝑡(𝑣) , (22) where 𝑝∈𝑃(𝑖) and 𝑚∈𝑀𝑝(𝑖) for SD pair 𝑖. Note that the numerator and denominator denote the expected fidelity and resource cost of numerology 𝑚, respectively. However, finding the numerology with the maximum ℜ(𝑖, 𝑚) among all possible numerologies is computationally-intensive. Thus, FLTO focuses on two types of candidate numerologies for each SD pair 𝑖as follows: 1) the numerology with the maximum expected fidelity on each path 𝑝∈𝑃(𝑖), denoted by 𝑚𝑖𝑝 1 , and 2) the numerology with the minimum resource cost on each path 𝑝∈𝑃(𝑖), denoted by 𝑚𝑖𝑝 2 . Subsequently, FLTO selects the numerology with the highest ℜ(𝑖, 𝑚) from all candidates across all SD pairs, i.e., max𝑖∈𝐼,𝑝∈𝑃(𝑖){ℜ(𝑖, 𝑚𝑖𝑝 1 ), ℜ(𝑖, 𝑚𝑖𝑝 2 )}. Specifically, FLTO invokes the DP algorithm in Section VI-A to determine the 𝑚𝑖𝑝 1 for each path 𝑝∈𝑃(𝑖). In addition, 𝑚𝑖𝑝 2 for each 𝑝can be found using a similar DP algorithm designed in Eqs. (19)−(21). These equations describe the recurrence relation to search the numerology with the minimum 13 𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) =   𝐹(𝑠, 𝑑) if (𝑠,𝑑)∈𝐸, 𝑡≥2, 𝑐𝑡(𝑠) and 𝑐𝑡−1 (𝑠)≥𝜎𝑠, 𝑐𝑡(𝑑) and 𝑐𝑡−1 (𝑑)≥𝜎𝑑; max  𝐹𝜏𝑔(𝑡−1, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)), max 𝑘∈I(𝑠,𝑑)  ℎ(𝑡−1, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑))  else if 𝑡≥2, 𝑐𝑡(𝑠)≥𝜎𝑠, 𝑐𝑡(𝑑)≥𝜎𝑑; −∞ otherwise. (17) ℎ(𝑡, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑)) = max  𝐹𝜏 𝑠 𝑔(𝑡, (𝑠, 𝑘), (𝜎𝑠, 2)), 𝑔(𝑡, (𝑘, 𝑑), (1, 𝜎𝑑)), 𝐹𝜏 𝑠 𝑔(𝑡, (𝑠, 𝑘), (𝜎𝑠, 1)), 𝑔(𝑡, (𝑘, 𝑑), (2, 𝜎𝑑)) . (18) ¯𝑓(𝑇, (𝑠, 𝑑)) = min 𝑡∈𝑇  ¯𝑔(𝑡, (𝑠, 𝑑), (1, 1)) . (19) ¯𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) =   𝑟𝑡 𝑠+ 𝑟𝑡 𝑑+ 𝑟𝑡−1 𝑠 + 𝑟𝑡−1 𝑑 if (𝑠,𝑑)∈𝐸, 𝑡≥2, 𝑐𝑡(𝑠) and 𝑐𝑡−1 (𝑠)≥𝜎𝑠, 𝑐𝑡(𝑑) and 𝑐𝑡−1 (𝑑)≥𝜎𝑑; 𝑟𝑡 𝑠+ 𝑟𝑡 𝑑+ min  ¯𝑔(𝑡−1, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)), min 𝑘∈I(𝑠,𝑑) ¯ℎ(𝑡−1, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑))  else if 𝑡≥2, 𝑐𝑡(𝑠)≥𝜎𝑠, 𝑐𝑡(𝑑)≥𝜎𝑑; ∞ otherwise. (20) ¯ℎ(𝑡, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑)) = min  ¯𝑔(𝑡, (𝑠, 𝑘), (𝜎𝑠, 2)) + ¯𝑔(𝑡, (𝑘, 𝑑), (1, 𝜎𝑑)), ¯𝑔(𝑡, (𝑠, 𝑘), (𝜎𝑠, 1)) + ¯𝑔(𝑡, (𝑘, 𝑑), (2, 𝜎𝑑))  . (21) resource cost, where 𝑟𝑡 𝑣denotes the resource cost of node 𝑣at time slot 𝑡and is set to 1 𝑐𝑡(𝑣) based on Eq. (22). In brief, ¯𝑓(𝑇, (𝑠, 𝑑)) finds the numerology that can generate an entan- gled pair from node 𝑠to node 𝑑on specific path 𝑝with the minimum resource cost within 𝑇, while ¯𝑔(𝑡, (𝑠, 𝑑), (𝜎𝑠, 𝜎𝑑)) and ¯ℎ(𝑡, (𝑠, 𝑘, 𝑑), (𝜎𝑠, 𝜎𝑑)) are additional functions similar to Eqs. (17) and (18). Note that the numerology will not be considered if the fidelity is less than the fidelity threshold b𝐹, and the DP details are omitted due to the similarity. Overall, FLTO iteratively chooses an SD pair with the nu- merology that has the maximum REI among all unsaturated SD pairs and then allocates the resources asked by that numerology to saturate the SD pair until no more SD pairs can be served. Then, we analyze the time complexity of FLTO. Both the DP algorithm Eqs. (16) and (19) take 𝑂(|𝑉|3·|𝑇|) for each SD pair 𝑖∈𝐼on specific path 𝑝∈𝑃(𝑖). Thus, it needs 𝑂(|𝑉|3 · |𝑇| · |𝑃|) for each SD pair 𝑖∈𝐼calculating for all paths, where |𝑃| denotes the maximum size of path set 𝑃(𝑖) between all SD pairs. Since FLTO chooses one SD pair among all pairs for each round and the number of chosen SD pairs is at most |𝐼|, the overall time complexity is 𝑂(|𝑉|3 · |𝑇| · |𝑃| · |𝐼|2). VII. PERFORMANCE EVALUATION A. Simulation Settings The model and default parameters are as follows. The QN topology is generated using the Waxman model [45], with |𝑉| = 100 nodes deployed over a 150 × 300 km region, and an average edge length of 30 km. For clarity, a network topology example is given in Appendix B of the supplementary material. We select |𝐼| = 50 random SD pairs within a batch of |𝑇| = 13 time slots, where each time slot length is 𝜏= 2 ms. This setup ensures that the total duration of a batch (𝜏× |𝑇|) does not exceed 40 ms, consistent with recent advances in quantum memory [38]. The choice of 𝜏= 2 ms is reasonable since a sin- gle swapping process requires approximately 1.1 ms [40]. For each edge, we set 𝜆= 0.045/km [7] and the entangling time 𝔗= 0.25 ms [40]. Thus, the average entangling probability is around 0.9, with the number of entangling attempts per time slot given by 𝜉= ⌊𝜏 𝔗⌋= 8. The initial fidelity for each edge is randomly sampled from the range [0.7, 0.98] [12]. The average memory limit of a node (i.e., 𝑐𝑡(𝑣)) is set between 6 to 14 units, aligned with [33] (10 to 16 per node). The swapping probability is set to 0.9 [10], [11]. Following [38], the parameters in Eq. (1) are set as 𝐴= 0.25, 𝐵= 0.75, T = 40 ms, and 𝜅= 2. The fidelity threshold b𝐹is set to 0.5. Then, we apply GREEDY [9], Q-CAST [10], and REPS [11] to generate the predefined path set 𝑃(𝑖) for each pair 𝑖∈𝐼. For simplicity, they are denoted by G, Q, and R. Each result is averaged over 50 trials. We compare FNPR and FLTO with the following methods. 1) Nesting [7] first conducts entangling for any two adjacent nodes until all entangled links on the path are created. After that, it greedily maximizes the number of swapping pro- cesses for each time slot until the end-to-end entangled link is built. Thus, Nesting has a near-balanced tree structure. 2) Linear [14] performs swapping operations one by one, starting from the source towards the destination, after all entangled links on the path are created. Thus, the numerol- ogy that Linear tends to use is a near-biased tree structure. 3) ASAP [33] performs swapping as soon as two adjacent entangled links are successfully generated. ASAP needs to bind the resources for re-entangling and swapping until a successful end-to-end entangled link is achieved. 4) UB is the fractional solution of LP (8), (7b), (7c), (9), derived by the combinatorial algorithm with the separation oracle in FNPR to know the upper bound of the optimum. Note that it is not necessarily feasible for TETRIS. B. Numerical Results Figs. 8−11 illustrate the expected fidelity sum and the num- ber of accepted requests under different parameters. Overall, both FNPR and FLTO consistently demonstrate superior per- formance across all parameter settings, effectively enhancing 14 10 30 50 70 90 #Requests 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) #Requests vs Exp. Fid. Sum 10 30 50 70 90 #Requests 5 10 15 20 25 30 35 40 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) #Requests vs #Acc. Requests 10 30 50 70 90 #Requests 5 10 15 20 25 Expected Fidelity Sum Q-FNPR Q-FLTO Q-UB Q-Nesting Q-Linear Q-ASAP (c) #Requests vs Exp. Fid. Sum 10 30 50 70 90 #Requests 5 10 15 20 25 30 35 40 #Accepted Requests Q-FNPR Q-FLTO Q-Nesting Q-Linear Q-ASAP (d) #Requests vs #Acc. Requests 10 30 50 70 90 #Requests 5 10 15 20 25 Expected Fidelity Sum R-FNPR R-FLTO R-UB R-Nesting R-Linear R-ASAP (e) #Requests vs Exp. Fid. Sum 10 30 50 70 90 #Requests 5 10 15 20 25 30 35 40 #Accepted Requests R-FNPR R-FLTO R-Nesting R-Linear R-ASAP (f) #Requests vs #Acc. Requests Fig. 8. Effect of different parameters and predefined paths on various metrics. fidelity and throughput while efficiently utilizing the resources of the quantum network. 1) Effect of Number of Requests: Fig. 8 shows the expected fidelity sum and number of accepted requests across varying numbers of requests. As the number of requests increases, both the expected fidelity sum and the number of accepted requests generally exhibit an upward trend since more numerologies can be considered for selection to fully utilize the network resources. No matter which routing algorithm is employed, FNPR and FLTO significantly outperform Nesting, Linear, and ASAP. The results show that FNPR (and FLTO) averagely outperforms Nesting, Linear, and ASAP on expected fidelity sum by up to 60% (60%), 78% (77%), and 63% (62%), respectively. This is because FNPR can achieve near-optimum under a moderate 𝜏and |𝑇| by the combinatorial algorithm with the separation oracle, while FLTO uses REI to choose numerologies, balancing resource utilization and fidelity. 2) Effect of Swapping Probability: In the literature, the swapping probability for simulations is typically between 0.5 and 1. To make the model more comprehensive and realistic, we compare the performance of algorithms under different swapping probabilities ranging from 0.5 to 0.9. Figs. 9(a) and 9(b) show the expected fidelity sum and the number of accepted requests under a different swapping probability. Generally, as 0.5 0.6 0.7 0.8 0.9 Swapping Probability 0 5 10 15 20 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) Swap. Prob. vs Exp. Fid. Sum 0.5 0.6 0.7 0.8 0.9 Swapping Probability 0 5 10 15 20 25 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) Swap. Prob. vs #Acc. Requests 1.0 2.5 4.0 5.5 7.0 Entangling Time (10−4 s) 0 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (c) Ent. Time vs Exp. Fid. Sum 1.0 2.5 4.0 5.5 7.0 Entangling Time (10−4 s) 0 5 10 15 20 25 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (d) Ent. Time vs #Acc. Requests 12.5 15.0 20.0 22.5 23.0 τ (10−4 s) 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (e) 𝜏vs Exp. Fid. Sum 12.5 15.0 20.0 22.5 23.0 τ (10−4 s) 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (f) 𝜏vs #Acc. Requests Fig. 9. Effect of different parameters on various metrics. the swapping probability increases, both the expected fidelity sum and the number of accepted requests tend to rise. Ad- ditionally, FNPR and FLTO consistently outperform the other algorithms across all swapping probabilities, indicating they are more robust and efficient in managing resources, leading to superior performance regardless of the swapping probability. 3) Effect of Entangling Time: The timing relationship be- tween entangling and swapping may be affected by distance in reality, causing 𝜉to vary under certain conditions. To achieve more comprehensive comparisons, we consider variations in the entangling time length ranging from 0.1 ms to 0.7 ms, with corresponding 𝜉values ranging from 8 to 2. Figs. 9(c) and 9(d) show the expected fidelity sum and the number of accepted requests for different entangling time lengths. Generally, as entangling time length increases, both the expected fidelity sum and the number of accepted requests tend to decrease. This is because a smaller 𝜉allows for fewer entangling attempts, leading to a lower entangling probability. Additionally, FNPR and FLTO outperform the other algorithms. 4) Effect of 𝜏: Figs. 9(e) and 9(f) illustrate the expected fidelity sum and the number of accepted requests under a differ- ent 𝜏. Note that varying 𝜏affects 𝜉, as the number of entangling attempts depends on the time slot length. Although a larger 𝜏 allows more entangling attempts, leading to higher entangling 15 5 9 13 17 21 |T| 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) |𝑇| vs Exp. Fid. Sum 5 9 13 17 21 |T| 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) |𝑇| vs #Acc. Requests 6 8 10 12 14 Average Memory Limit 5 10 15 20 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (c) Avg. #Mem. vs Exp. Fid. Sum 6 8 10 12 14 Average Memory Limit 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (d) Avg. #Mem. vs #Acc. Requests 0.6 0.7 0.8 0.9 0.95 Minimum Initial Fidelity 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (e) Min. Init. Fid. vs Exp. Fid. Sum 0.6 0.7 0.8 0.9 0.95 Minimum Initial Fidelity 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (f) Min. Init. Fid. vs #Acc. Requests 0.4 0.45 0.5 0.55 0.6 Fidelity Threshold 0 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (g) Fid. Thr. vs Exp. Fid. Sum 0.4 0.45 0.5 0.55 0.6 Fidelity Threshold 0 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (h) Fid. Thr. vs #Acc. Requests Fig. 10. Effect of different parameters on various metrics. probability, the results show that a larger 𝜏conversely results in a lower expected fidelity sum and fewer accepted requests. This is because the impact of decoherence over a longer time slot outweighs the benefits of increased entangling probability in our setting. Thus, some numerologies will not be admitted due to their low fidelity. Fig. 9(e) further confirms that FNPR outperforms FLTO as 𝜏is small, while FLTO is slightly better when 𝜏is large, as described in Section VI. 5) Effect of Resource Allocation: Figs. 10(a)−10(d) show the expected fidelity sum and the number of accepted requests under different |𝑇| and average memory limits. Generally, as |𝑇| or the amount of memory increases, both the fidelity sum and the number of accepted requests tend to increase, benefiting from sufficient resources. In Figs. 10(a) and 10(b), 5 7.5 10 12.5 15 Entangling Probablity (10−3) 0 5 10 15 20 Expected Fidelity Sum (10−5) G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) Ent. Prob. vs Exp. Fid. Sum 5 7.5 10 12.5 15 Entangling Probablity (10−3) 0 5 10 15 20 #Accepted Requests (10−5) G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) Ent. Prob. vs #Acc. Requests 10−4 10−3 10−2 10−1 100 Entangling Probablity 10−10 10−7 10−4 10−1 102 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (c) Ent. Prob. vs Exp. Fid. Sum 10−4 10−3 10−2 10−1 100 Entangling Probablity 10−10 10−7 10−4 10−1 102 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (d) Ent. Prob. vs #Acc. Requests Fig. 11. Effect of different entangling probabilities on various metrics. FNPR slightly outperforms FLTO as |𝑇| is small since FNPR can be close to the optimum solution as 𝐹max 𝐹min ≈1. However, in scenarios with a large |𝑇|, FLTO proves more suitable since it can avoid selecting inefficient numerologies via our DP algorithm with REI when the resources are sufficient. The above results show that each has its own merits. 6) Effect of Initial Fidelity and Threshold: Figs. 10(e)−10(f) and 10(g)−10(h) show the expected fidelity sum and the number of accepted requests at varying minimum initial fidelity levels and fidelity thresholds. Generally, as the minimum initial fidelity increases, both the expected fidelity sum and the number of accepted requests increase since most numerologies can achieve high fidelity. FNPR performs the best under higher minimum initial fidelity because it can approach the optimum solution as 𝐹max 𝐹min ≈1. Besides, as the fidelity threshold in- creases, the expected fidelity sum tends to decrease inevitably; however, FNPR and FLTO still outperform others in all cases. 7) Effect of Entangling Probability: To further observe the effect of low entangling probability, we conduct experiments by treating the entangling probability (i.e., Pr(𝑢, 𝑣)) as an abstract input parameter, as shown in Fig. 11. Figs. 11(a) and 11(b) show the expected fidelity sum and number of accepted requests across varying entangling probabilities. In this simulation, the entangling probability is set to range from 0.005 to 0.015, which covers the realistic parameter range discussed in [46]. Due to the low entangling probability, the fidelity sum and number of accepted requests are low under this setting for all simulated methods. Even under such adverse conditions, the proposed FNPR and FLTO still outperform the other algorithms by 20% (18%) to 75% (75%) on average. This is because FNPR utilizes the combinatorial algorithm with randomized rounding to achieve near-optimal solutions, while FLTO invokes the DP techniques to design an effective greedy algorithm. Both algorithms are designed to solve the TETRIS 16 suitably. To more deeply observe the performance changes under different entangling probabilities, we conducted simula- tions across a wider range of entangling probabilities (i.e., from 0.0001 to 1), as shown in Figs. 11(c) and 11(d). The results show that our proposed algorithms consistently outperform the other algorithms across all entangling probability levels, which ensures the robustness of the proposed algorithms. VIII. CONCLUSION This paper proposes a promising short time slot protocol for entangling and swapping scheduling with a novel optimization problem TETRIS. The TETRIS asks for the solution that selects the numerology (strategy tree) for each accepted request while maximizing the fidelity sum of all accepted requests. To solve the TETRIS, we design two new algorithms. FNPR is a bi-criteria approximation algorithm by solving an LP with a separation oracle and rounding the solution for the cases where the time slot length and the number of time slots in a batch are small. FLTO is a greedy algorithm with a proper index to call two DP-based algorithms adaptively for the other cases. Finally, the simulation results manifest that our algorithms can outperform the existing methods by up to 60 ∼78% in general, and by 20 ∼75% even under low entangling probabilities. REFERENCES [1] C. Elliott, “Building the quantum network,” New J. Phys., vol. 4, no. 1, p. 46, 2002. [2] Y.-A. Chen et al., “An integrated space-to-ground quantum communi- cation network over 4,600 kilometres,” Nature, vol. 589, no. 7841, pp. 214–219, 2021. [3] S. Daiss et al., “A quantum-logic gate between distant quantum-network modules,” Science, vol. 371, no. 6529, pp. 614–617, 2021. [4] P. Komar et al., “A quantum network of clocks,” Nat. Phys., vol. 10, no. 8, pp. 582–587, 2014. [5] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge university press, 2010. [6] M. Schlosshauer, “Decoherence, the measurement problem, and interpre- tations of quantum mechanics,” Rev. Mod. Phys., vol. 76, no. 4, p. 1267, 2005. [7] N. Sangouard et al., “Quantum repeaters based on atomic ensembles and linear optics,” Rev. Mod. Phys., vol. 83, no. 1, p. 33, 2011. [8] R. Van Meter, Quantum Networking. John Wiley & Sons, Ltd, 2014. [9] M. Pant et al., “Routing entanglement in the quantum internet,” npj Quantum Inf., vol. 5, no. 1, p. 25, 2019. [10] S. Shi and C. Qian, “Concurrent entanglement routing for quantum networks: Model and designs,” in ACM SIGCOMM, 2020. [11] Y. Zhao and C. Qiao, “Redundant entanglement provisioning and se- lection for throughput maximization in quantum networks,” in IEEE INFOCOM, 2021. [12] Y. Zhao et al., “E2E fidelity aware routing and purification for throughput maximization in quantum networks,” in IEEE INFOCOM, 2022. [13] L. Chen et al., “A heuristic remote entanglement distribution algorithm on memory-limited quantum paths,” IEEE Trans. Commun., vol. 70, no. 11, pp. 7491–7504, 2022. [14] A. Farahbakhsh and C. Feng, “Opportunistic routing in quantum net- works,” in IEEE INFOCOM, 2022. [15] Y. Zeng et al., “Entanglement routing design over quantum networks,” IEEE/ACM Trans. Netw., vol. 32, no. 1, pp. 352–367, 2024. [16] Z. Yiming et al., “Entanglement routing over quantum networks using greenberger-horne-zeilinger measurements,” in IEEE ICDCS, 2023. [17] J. Li et al., “Fidelity-guaranteed entanglement routing in quantum networks,” IEEE Trans. Commun., vol. 70, no. 10, pp. 6748–6763, 2022. [18] K. Chakraborty et al., “Entanglement distribution in a quantum network: A multicommodity flow-based approach,” IEEE Trans. Quantum Eng., vol. 1, pp. 1–21, 2020. [19] S. Pouryousef et al., “A quantum overlay network for efficient entangle- ment distribution,” in IEEE INFOCOM, 2023. [20] G. Zhao et al., “Segmented entanglement establishment with all-optical switching in quantum networks,” IEEE/ACM Trans. Netw., vol. 32, no. 1, pp. 268–282, 2024. [21] C. Cicconetti, M. Conti, and A. Passarella, “Request scheduling in quantum networks,” IEEE Trans. Quantum Eng., vol. 2, pp. 2–17, 2021. [22] M. Ghaderibaneh et al., “Efficient quantum network communication using optimized entanglement swapping trees,” IEEE Trans. Quantum Eng., vol. 3, pp. 1–20, 2022. [23] Y. Azar and O. Regev, “Strongly polynomial algorithms for the unsplit- table flow problem,” in IPCO, 2001. [24] R. Van Meter and J. Touch, “Designing quantum repeater networks,” IEEE Commun. Mag., vol. 51, no. 8, pp. 64–71, 2013. [25] S. Muralidharan et al., “Optimal architectures for long distance quantum communication,” Sci. Rep., vol. 6, no. 1, p. 20463, 2016. [26] S. Pirandola et al., “Fundamental limits of repeaterless quantum com- munications,” Nat. Commun., vol. 8, no. 1, p. 15043, 2017. [27] M. Caleffi, “Optimal routing for quantum networks,” IEEE Access, vol. 5, pp. 22 299–22 312, 2017. [28] Y. Zhao and C. Qiao, “Distributed transport protocols for quantum data networks,” IEEE/ACM Trans. Netw., vol. 31, no. 6, pp. 2777–2792, 2023. [29] S.-M. Huang et al., “Socially-aware opportunistic routing with path segment selection in quantum networks,” in IEEE GLOBECOM, 2023. [30] L. Yang et al., “Asynchronous entanglement provisioning and routing for distributed quantum computing,” in IEEE INFOCOM, 2023. [31] S. Haldar et al., “Fast and reliable entanglement distribution with quan- tum repeaters: Principles for improving protocols using reinforcement learning,” Phys. Rev. Appl., vol. 21, p. 024041, 2024. [32] ´A. G. I˜nesta et al., “Optimal entanglement distribution policies in homogeneous repeater chains with cutoffs,” npj Quantum Inf., vol. 9, p. 46, 2023. [33] S. Haldar et al., “Reducing classical communication costs in multiplexed quantum repeaters using hardware-aware quasi-local policies,” arXiv preprint arXiv:2401.13168, 2024. [34] K. Goodenough, T. Coopmans, and D. Towsley, “On noise in swap asap repeater chains: exact analytics, distributions and tight approximations,” arXiv preprint arXiv:2404.07146, 2024. [35] R. F. Werner, “Quantum states with einstein-podolsky-rosen correlations admitting a hidden-variable model,” Phys. Rev. A, vol. 40, no. 8, p. 4277, 1989. [36] N. K. Panigrahy et al., “On the capacity region of a quantum switch with entanglement purification,” in IEEE INFOCOM, 2023. [37] M. H. Abobeih et al., “One-second coherence for a single electron spin coupled to a multi-qubit nuclear-spin environment,” Nat. Commun., vol. 9, no. 1, p. 2552, 2018. [38] C. E. Bradley et al., “A ten-qubit solid-state spin register with quantum memory up to one minute,” Phys. Rev. X., vol. 9, no. 3, p. 031045, 2019. [39] A. Sen, U. Sen, ˇC. Brukner, V. Buˇzek, and M. ˙Zukowski, “Entanglement swapping of noisy states: A kind of superadditivity in nonclassicality,” Phys. Rev. A: At., Mol., Opt. Phys., vol. 72, no. 4, p. 042310, 2005. [40] M. Pompili et al., “Realization of a multinode quantum network of remote solid-state qubits,” Science, vol. 372, no. 6539, pp. 259–264, 2021. [41] J. Hastad, “Clique is hard to approximate within 𝑛1−𝜖,” in IEEE FOCS, 1996. [42] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Third Edition. The MIT Press, 2009. [43] N. Garg and J. K¨onemann, “Faster and simpler algorithms for multicom- modity flow and other fractional packing problems,” SIAM J. Comput., vol. 37, no. 2, pp. 630–652, 2007. [44] M. Mitzenmacher and E. Upfal, Probability and Computing: Random- ization and Probabilistic Techniques in Algorithms and Data Analysis, 2nd ed. USA: Cambridge University Press, 2017. [45] B. Waxman, “Routing of multipoint connections,” IEEE J. Sel. Areas Commun., vol. 6, no. 9, pp. 1617–1622, 1988. [46] A. J. Stolk et al., “Metropolitan-scale heralded entanglement of solid- state qubits,” Sci. Adv., vol. 10, no. 44, p. eadp6442, 2024. APPENDIX A DISCUSSION OF THE COUNTER-INTUITIVE EXAMPLE We conducted additional simulations and clarified the rea- sons behind this counter-intuitive example. There are two main 17 1.00 1.25 1.50 1.75 2.00 2.25 κ 0.760 0.765 0.770 0.775 0.780 0.785 Fidelity Skewed strategy tree Complete strategy tree Fig. 12. Comparison of fidelity between the strategy trees in Figs. 1(b) and 1(c) under different values of 𝜅. factors explaining why Fig. 1(b) has better fidelity than Fig. 1(c): 1) In our decoherence model (i.e., Eq. (1)), we set 𝜅= 2. Note that 𝜅controls the decoherence speed. For example, in Fig. 12, we compare the two strategy trees in Figs. 1(b) and 1(c) under different values of 𝜅ranging from 1 to 2.3. As 𝜅 increases, the fidelity of the skewed strategy tree in Fig. 1(b) tends to surpass that of the complete strategy tree in Fig. 1(c) because a higher 𝜅reduces decoherence rate. 2) Most of the swapping processes in Fig. 1(b) consume two entangled pairs with similar fidelity and thus suffer less fidelity loss due to swapping processes. From the perspective above, we can also conclude that if all the links along the path have similar initial fidelity, the complete strategy tree will conversely outperform the other. For example, if all the generated pairs (𝑣1, 𝑣2), (𝑣2, 𝑣3), (𝑣3, 𝑣4), and (𝑣4, 𝑣5) have initial fidelity 0.98, then the final fidelity of the strategy trees in Figs. 1(b) and 1(c) will be 0.889 and 0.891, respectively. APPENDIX B EXAMPLE OF NETWORK TOPOLOGY Fig. 13 shows a representative illustration of the network topology generated using the Waxman model. 0 20 40 60 80 100 120 140 X (km) 0 50 100 150 200 250 300 Y (km) Fig. 13. Network topology sample generated using the Waxman model.
1 Decoherence-Aware Entangling and Swapping Strategy Optimization for Entanglement Routing in Quantum Networks Shao-Min Huang∥, Cheng-Yang Cheng∥, Ming-Huang Chien∥, Jian-Jhih Kuo, and Chih-Yu Wang Abstract-Quantum teleportation enables high-security communications through end-to-end quantum entangled pairs. Endto-end entangled pairs are created by using swapping processes to consume short entangled pairs and generate long pairs. However, due to environmental interference, entangled pairs decohere over time, resulting in low fidelity. Thus, generating entangled pairs at the right time is crucial. Moreover, the swapping process also causes additional fidelity loss. To this end, this paper presents a short time slot protocol, where a time slot can only accommodate a process. It has a more flexible arrangement of entangling and swapping processes than the traditional long time slot protocol. It raises a new optimization problem TETRIS for finding strategies of entangling and swapping for each request to maximize the fidelity sum of all accepted requests. To solve the TETRIS, we design two novel algorithms with different optimization techniques. Finally, the simulation results manifest that our algorithms can outperform the existing methods by up to 60 ∼78% in general, and by 20 ∼75% even under low entangling probabilities. Index Terms-Quantum networks, fidelity, decoherence, entanglement routing, scheduling, resource management, optimization problem, NP-hardness, inapproximability, bi-criteria approximation algorithm I. INTRODUCTION A S the frontiers of modern technology, quantum networks (QNs) have been developed and implemented to evaluate their practicability for information transmission [1]-[5]. QNs connect quantum nodes to transmit quantum bits (qubits) using end-to-end entangled pairs [5] (i.e., quantum teleportation). These networks serve as the foundation of quantum services such as quantum key distribution (QKD) [2], distributed quantum computing [3], and clock synchronization [4]. As illusS.-M. Huang is with the Research Center for Information Technology Innovation, Academia Sinica, Taiwan, and also with the - puter Science and Information Engineering, National Chung Cheng University, Taiwan (e-mail: ). C.-Y. Cheng is with the (e-mail: ). M.-H. Chien is with the - partment of Computer Science and Information Engineering, National Chung Cheng University, Taiwan (e-mail: ). J.-J. Kuo is with the -tech Innovations, National Chung Cheng University, Taiwan (e-mail: ). C.-Y. Wang is with the Research Center for Information Technology Innovation, Academia Sinica, Taiwan (e-mail: ). ∥denotes the equal contributions. Corresponding author: Jian-Jhih Kuo (a) Entangling and swapping from time slot t1 to t5 in a network (b) Skewed strategy tree (c) Complete strategy tree Fig. 1. Scheduling of entangling and swapping in QNs. trated in Fig. 1(a), each quantum node in the QN has a specific amount of quantum memory (blue squares) to store entangled qubits, mitigating rapid decoherence [5]-[8]. Adjacent nodes are interconnected by optical fibers (black lines) that facilitate the entanglement process. However, an end-to-end entangled pair is not directly available between two non-adjacent nodes. In such a case, their end-to-end entangled pair can be achieved via one or more entanglement swapping processes, each of which consumes two short entangled pairs to create a long one. For example, at time slot t2 in Fig. 1(a), there are two entangled pairs (v1, v2) and (v2, v3). After performing a swapping at the end of t2, we obtain a long entangled pair (v1, v3) at time slot t3 and free two quantum memory units at v2 [8]. However, entangled pairs stored in the quantum memory decohere over time, resulting in a loss of quality, i.e., fidelity loss. Thus, finding the perfect timing for entangling is crucial; an improper strategy may cause entangled pairs to idle and 16 Oct 2025 2 unnecessary fidelity loss. Moreover, swapping also causes fidelity loss as the entangled link generated by swapping exhibits lower fidelity than the two consumed entangled links, and the loss is exacerbated when the fidelity difference between the two pairs is significant. Thus, the entangling and swapping strategy should be properly designed to improve fidelity. Existing QN protocols, studied in the literature, often employ a standard long time slot protocol setting [9]-[22], which may result in unnecessary fidelity loss when handling multiple requests. Specifically, in traditional long time slot protocols, the entangling processes of each request (i.e., source-destination (SD) pair) take place during the entangling phase, while swapping processes occur during the swapping phase. As a result, shorter requests might have to wait for longer requests to complete because the latter involve more processes to perform. Further, traditional long time slot protocols bind resources for the entire time slot, even though a swapping process frees two quantum memory units, which originally could be used for other requests but remain blocked by the protocol. In light of the shortcomings, we propose a short time slot protocol where either entangling or swapping can occur in any given time slot. Specifically, in our short time slot protocol, a node is free to perform one process within a single time slot, and the process could be entangling, swapping, teleporting, or idling. Thus, short requests no longer need to wait for other requests to establish their entangled pairs, as they can perform processes separately, while longer requests can utilize the resource released by the nodes performing swapping processes. The adoption of a short time slot protocol enables more efficient entangling and swapping strategies, minimizing unnecessary fidelity loss by precisely timing entangled pair generation for subsequent swapping. Take Figs. 1(a) and 1(b) for example, we have a request from v1 to v5.1 At time slot t1, pairs (v1, v2) and (v2, v3) start entangling, taking one time slot. At time slot t2, both the generated pairs (v1, v2) and (v2, v3) have the initial fidelity 0.98 and start decoherence while doing the swapping process. One may observe that we can choose to start the entangling of pair (v3, v4) at either t1 or t2. Clearly, entangling at t2 is better than t1 since they will have less wait time for subsequent swapping processes, leading to less decoherence and better fidelity. On the other hand, swapping processes cause extra fidelity loss, and the fidelity of (v1, v3) is 0.951, which is lower than the fidelity before swapping (i.e., the two links with fidelity 0.98 after decoherence for 1 time slot will have fidelity 0.975). At time slot t3, the generated pair (v1, v3) with fidelity 0.951 and (v3, v4) with fidelity 0.95 start decoherence while swapping, leading to pair (v1, v4) with fidelity 0.88 after swapping. Meanwhile, pair (v4, v5) start entangling. Finally, (v1, v5) are generated at t5 and has fidelity 0.787. Unlike the skewed strategy tree in Fig. 1(b), the strategy in Fig. 1(c) requires only four slots. Intuitively, it should have better fidelity because it suffers less decoherence time. However, the result is counter-intuitive; Fig. 1(b) has better fidelity than Fig. 1(c) 1In this example, all the parameters follow the default settings in Section VII. In addition, the fidelity after decoherence over time and swapping is calculated using Eqs. (1) and (2), respectively, in Section III. because most of the swapping processes in Fig. 1(b) consume two entangled pairs with similar fidelity and thus suffer less fidelity loss due to swapping processes.2 From the perspective above, we can guess that if all the links on the path have similar initial fidelity, the complete strategy tree will perform better than others. For example, if all the generated pairs (v1, v2), (v2, v3), (v3, v4), and (v4, v5) have initial fidelity 0.98, then the final fidelity of the strategy trees in Figs. 1(b) and 1(c) will be 0.889 and 0.891, respectively. We have two observations from the above examples: 1) swapping two entangled pairs with similar fidelity performs better, and 2) the more skewed the strategy tree, the less memory is required within a time slot. Moreover, a different strategy leads to varying fidelity and distribution of resource consumption. In this paper, such a distribution is referred to as numerology, representing how the resources (i.e., quantum memory in this paper) are consumed for the request. The numerology provides a direct representation of the corresponding strategy to examine its feasibility. The quality of the strategy, which is the resulting fidelity of the request, can also be derived directly through the corresponding numerology. Thus, the entangling and swapping strategy formation can be transformed into the numerology selection problem. In this paper, we aim to choose a numerology for each request to maximize the fidelity sum for all accepted requests while meeting the fidelity threshold within a batch of time slots, i.e., the Optimized Decoherence-aware Strategy for Entangling and Swapping Scheduling Problem (TETRIS). The TETRIS introduces four novel challenges: 1) Entangled pairs will decohere over time. How can we choose the right time to generate entangled pairs to reduce their wait time? 2) The swapping process may cause additional fidelity loss. How can we achieve an intelligent swapping strategy to minimize this loss? 3) Numerologies determine the resulting fidelity. How can we identify the feasible numerologies for requests to meet the fidelity threshold, guaranteeing the quality of teleportation? 4) Quantum memory is limited, and its availability will be affected by simultaneous requests. How can we find efficient numerologies to mitigate resource contention and satisfy more requests while ensuring a better fidelity sum? It can be shown that the TETRIS with no fidelity threshold constraint (i.e., TETRIS-N) is already NP-hard and cannot be approximated within any factor of |I|1-ε, where |I| denotes the number of SD pairs. To conquer the above challenges, we propose two novel algorithms. 1) The first one is Fractional Numerology Packing and Rounding Algorithm (FNPR). FNPR aims to maximize the number of accepted requests. With linear programming (LP) rounding, FNPR can achieve a bi-criteria approximation ratio for the TETRIS-N as the actual duration of a time slot and the number of time slots are sufficiently small. Then, we extend FNPR to solve the TETRIS. 2) The second one is Fidelity-Load Trade-Off Algorithm (FLTO). FLTO utilizes two efficient dynamic programming (DP) algorithms to find two candidate numerologies for each request with a given 2More simulations and clarifications of the reasons behind this counterintuitive example are provided in Appendix A of the supplementary material. 3 path. One maximizes fidelity, and the other considers resource utilization. Then, FLTO iteratively invokes them to allocate the resource for each request based on a subtly-designed index called resource efficiency index (REI), inspired by [23]. In sum, the contributions of this paper are as follows: 1) To the best of our knowledge, this is the first attempt to consider the decoherence of entangled pairs over time for entangling and swapping scheduling while applying a short time slot protocol that offers better resource allocation. 2) We prove that the TETRIS is an NP-hard problem, and even its special case, the TETRIS-N, cannot be approximated within any factor of |I|1-ε. 3) We propose a combinatorial algorithm with a clever separation oracle to achieve a bi-criteria approximation. Meanwhile, we develop two DP algorithms to find candidate numerologies with different goals and design an index to choose numerologies adaptively. II. RELATED WORK Before discussing the related works on optimization problems for entangling and swapping, we review previous works that examine feasibility and realization of QNs [1], [24]-[28]. Early foundational studies have laid the groundwork for secure communication protocols and robust architectures in quantum networks. Elliott et al. introduced QN to realize secure communications [1]. Meter et al. proposed a large QN architecture with layered recursive repeaters, where repeaters may not trust each other, and then designed new protocol layers to support quantum sessions to ensure robustness and interoperable communication [24]. Muralidharan et al. presented new quantum nodes that can execute the quantum error correction (QEC) processes and classified the theoretically feasible technologies of quantum nodes into three generations [25]. Pirandola et al. discussed the limits of repeater-less quantum communications and provided general benchmarks for repeaters [26]. Caleffi et al. designed a routing protocol to ensure a high end-toend entanglement rate between any two nodes [27]. Zhao et al. proposed two transport layer protocols for quantum data networks that achieve high throughput and fairness [28]. Building on these foundational works, subsequent research has focused on path selection and entangling and swapping process optimization for long time slot system-based QNs [9]- [22]. Pant et al. presented a greedy algorithm to determine the paths for each request [9]. Shi et al. designed a routing method Q-CAST based on the Dijkstra algorithm to find primary paths and recovery paths to mitigate the effect of entanglement failures [10]. Zhao et al. presented an LP-based algorithm REPS to maximize throughput in SDN-based QNs [11]. Zhao et al. considered entangled links' fidelity and exploited quantum purification to enhance link fidelity [12]. Chen et al. proposed two heuristic algorithms for entangling and swapping phases separately [13]. Farahbakhsh et al. developed an add-up scheme to store and forward data qubits as much as possible [14]. Zeng et al. studied how to simultaneously maximize the number of quantum-user pairs and their expected throughput [15]. Zeng et al. utilized the properties of n-fusion to help increase the success probability of constructing a long entangled pair for quantum networks [16]. Li et al. exploited purification to meet the fidelity requirements for multiple SD pairs as many as possible [17]. Chakraborty et al. gave an LP formulation to compute the maximum total entanglement distribution rate [18]. Pouryousef et al. attempted to stockpile entangled pairs in advance when the traffic demand is low while using them otherwise [19]. Zhao et al. considered a room-size network scenario, where a longer entangled pair can be established by sending one of its photons via alloptical switching without swapping at intermediate nodes [20]. However, none of the above works considers the effect of decoherence on fidelity. To this end, Cicconetti et al. studied different policies of path selection and request scheduling and then measured the resulting link fidelity under the influence of dephasing and depolarizing noises (i.e., decoherence) [21]. Ghaderibaneh et al. further considered time decoherence when determining the swapping order for each SD pair to minimize entangled pair generation latency [22]. However, all of the above works use long time slot systems and conduct entangling and swapping sequentially, which may block short requests to wait for long requests, resulting in more memory idle time. To address the limitations of long time slot systems, some studies [29], [30] have explored short time slot systems or even asynchronous protocols to improve efficiency. Huang et al. employed a short time slot protocol while opportunistically forwarding data qubits via social nodes to ensure security. However, they neglected decoherence over time, fidelity loss due to swapping processes, and freed qubits after swapping processes [29]. Yang et al. proposed an asynchronous model to freely generate entangled pairs or conduct swapping processes without time slot limitations. It helps establish end-to-end connections more quickly while utilizing more resources, as resources can be freed immediately [30]. However, it did not consider decoherence or fidelity loss. In contrast, our short time slot system considers more practical fidelity losses due to time decoherence and swapping processes. It further leverages diverse numerologies for requests to maximize the fidelity sum, achieving efficient entangling and swapping scheduling. Although some short time slot approaches have been proposed to enhance efficiency, they still overlook fidelity degradation over time due to decoherence and swapping processes. To this end, the following works [31]-[34] consider decoherence when constructing remote entangled pairs along a specific path. Haldar et al. employed the Q-learning reinforcement-learning (RL) algorithm to discover policies that optimize both average waiting time and fidelity for a single request [31]. I ̃nesta et al. proposed a method based on Markov decision processesbased method with value and policy iteration to minimize the expected needed time to achieve end-to-end entanglement for a single request [32]. Haldar et al. proposed the quasi-local multiplexing policies, following the SWAP-ASAP approach and incorporating entanglement purification, to optimize both average waiting time and fidelity for a single request in linear chain quantum networks [33]. Goodenough et al. analyzed the value of fidelity up to 25 segments over time using a generating function. The approach can be applied to find cut-off policies in 4 TABLE I COMPARISON OF RELATED WORKS Literature Time slot Fidelity decohere over time Multirequest scheduling Path selection Objective Solution strategy [9] Long No No Yes Max throughput Greedy [10] Long No No Yes Max throughput Dijkstra-based [11] Long No No Yes Max throughput LP rounding + Heuristic [12] Long No No Yes Max throughput Dijkstra-based + LP rounding [13] Long No No Yes Max throughput Heuristic + DP [14] Long No No No Min waiting time Greedy [15] Long No No Yes Max user pairs & Max throughput Dijkstra-based + LP rounding + Branch and bound algorithm [16] Long No No Yes Max throughput Dijkstra-based [17] Long No No Yes Max throughput Dijkstra-based + Greedy [18] Long No No Yes Max throughput Multicommodity flow-based algorithm [19] Long No No Yes Max throughput & Min delay LP [20] Long No No Yes Max throughput LP rounding + Heuristic [21] Long No Yes Yes Max throughput Heuristic + Dijkstra-based [22] Long Yes No No Min latency DP [29] Short No No Yes Min waiting time Greedy [30] Async No No Yes Max network efficiency Dijkstra-based + Primal-Dual-based algorithm [31] Short Yes No No Min waiting time & Max fidelity Q-learning reinforcement-learning [32] Short Yes No No Min delivery time Markov decision process-based algorithm [33] Short Yes No No Min waiting time & Max fidelity SWAP-ASAP-based algorithm Ours Short Yes Yes Yes Max fidelity & Max throughput Combinatorial algorithm with DP-based separation oracle + LP rounding & Greedy + DP QKD [34]. Since the above studies focused on handling single requests, their policies will bind sufficient resources for each accepted request, enabling re-entangling and swapping until a successful end-to-end entangled link is established. Thus, they may perform well in single-request scenarios but are less effective when dealing with multiple requests. Table I summarizes the related works based on their setups (e.g., time slot length, fidelity decoherence, scheduling method, path selection) and optimization objectives (e.g., throughput, fidelity, latency, waiting time). III. BACKGROUND AND ASSUMPTIONS A. Fidelity and Decoherence of Entangled Pairs Fidelity is a classic index to qualify whether an entangled pair is good. An entangled pair with fidelity Fcan be written in a density matrix as ρ= F|Φ+⟩⟨Φ+| + b|Φ-⟩⟨Φ-| + c|Ψ+⟩⟨Ψ+| + d|Ψ-⟩⟨Ψ-|, where F+ b+ c+ d= 1 and |Φ+⟩ is our wanted state. In this paper, we consider our entangled pairs are Werner state [8], [35], [36], which has the relation b= c= d. A quantum state will interact with the environment and gradually lose its quantum properties (i.e., the fidelity decreases), called decoherence. As F= b= c= d= 0.25, it loses all its quantum properties and behaves like a classic random behavior [8]. The decoherence speed depends on the (a) Decoherence curve (b) Fidelity of swapping Fig. 2. Fidelity loss due to elapsed time and swapping. type of quantum memory. The general decoherence can be described as follows: Fd(t) = A+ B×e-(t/T) κ, (1) which is an empirical formula fitting to the experimental data [37], [38], as plotted in Fig. 2(a), while the unit of tis second. Note that A, B, T, and κare the constants according to the adopted technique. In addition, there is a one-to-one mapping between wait time and fidelity because Fd(t) is invertible. B. Entanglement Swapping Entanglement swapping causes extra fidelity loss [8]. Swapping two Werner states with the fidelity of F1 and F2 will lead to the fidelity as follows: Fs(F1, F2) = F1×F2 + 1 3 (1 -F1) × (1 -F2). (2) 5 Fig. 2(b) shows how the fidelity changes after swapping. The blue curve represents the fidelity change of an entangled pair due to decoherence; on the other hand, the red curve depicts the fidelity of the entangled pair that conducts swapping at ts. After swapping, there is a sudden drop of fidelity to Feq, and it keeps decohering after the drop. It can be viewed as a shift from the blue curve, i.e., the red curve after tsbehaves the same as the blue curve after teq. We call teqthe equivalent wait time after swapping, which can be calculated by teq= F-1 d(Feq). The success probability of entangling decreases exponentially with channel distance [10]. Specifically, the entangling probability between any adjacent nodes uand vis defined as: Pr(u, v) = 1 -(1 -e-λ·l(u,v)) ξ, (3) where ξ= ⌊τ T⌋represents the number of entangling attempts, τis the length of a short time slot, Tis the entangling time, l(u, v) is the length of the quantum channel between uand v, and λis a constant determined by the optical fiber material [7]. Then, swapping also has a different success probability at each node v, denoted as Pr(v) [10]. Thus, the success probability of a path pbetween the source sand the destination dcan be expressed as follows: Pr(p) = Ö (u,v)∈p Pr(u, v) Ö v∈p\{s,d} Pr(v). (4) C. Assumptions For ease of presentation, this paper has the following assumptions: 1) Each entangled pair is described by a Werner state [8], [35], [36], a mixture of a specific pure state and white noise. The white noise is represented by the normalized identity operator in the Hilbert space corresponding to the state [39]. 2) This paper assumes that the fiber resource is always sufficient because it is relatively cheaper than quantum memory. In other words, we focus on quantum memory consumed by the entangling process while assuming all entangling processes will not be blocked by fiber availability. 3) Following [11], all nodes communicate with a central controller and share global knowledge. 4) The entangled pairs between the same pair of two nodes have identical initial fidelity. Besides, all the entangled pairs decohere as the same relation, as shown in Fig. 2(a), because of using the same quantum memory technique (i.e., constants A, B, T, and κare the same for all nodes). 5) For synchronization, the time slot length should be at least the duration required for a single swapping process (e.g., around 1.1 ms [40]). Since an entangling process generally requires less time than swapping (e.g., 0.25 ms [40]), it can repeatedly attempt entangling within a time slot to improve the entangling success probability [10], as described in Eq. (3). IV. SYSTEM MODEL & PROBLEM FORMULATION A. System Model We consider a QN with multiple quantum nodes, each with limited quantum memory. Nodes connected through a bunch of fibers are adjacent nodes. Data qubits are transmitted via endto-end entangled pairs in QN. A one-hop entangled pair can be directly constructed via the fiber between two adjacent nodes uand vwith the success probability of Pr(u, v) and the initial fidelity of F(u, v). Plus, a long entangled pair (u, w) can be constructed by performing the swapping process at a repeater vto consume two short entangled pairs (u, v) and (v, w) with the success probability of Pr(v). The quality of an entangled pair is measured by fidelity. A generated entangled pair will decohere over time, as defined by Eq. (1) in Section III-A. A node can perform an entanglement swapping process when it possesses two qubits, each from different entangled pairs. Subsequently, the memory occupied by these two qubits after swapping will be freed. The fidelity of the resulting longer entangled pair after swapping follows Eq. (2). We consider a short time slot protocol where a single time slot only accommodates a single process for each memory unit. Each node can freely execute one process within a time slot for each memory unit, such as entangling, swapping, or even idling. The protocol periodically performs all necessary computations in advance, producing a complete operation plan before any entangling or swapping begins. The controller then instructs all quantum nodes to synchronously execute these operations within a fixed batch of time slots. Any request that fails within a batch is deferred to the next batch for reattempt. For clarity, let Tbe the set of time slots in the current batch. To consider the resource distribution, we define the strategy tree, numerology, and fidelity formulas in our system. Definition 1. A strategy tree γis an activity-on-vertex tree structure, describing an entangling and swapping strategy between node v1 and node vnon path p= {v1, v2, ..., vn}. It consists of two parts; one is the entangling part (γe), and the other is the swapping part (γs), as shown by the green and blue lines in Fig. 1(b), respectively. The entangling part γeincludes all the external pairs. Each external pair denotes a quantum pair (vi, vi+1) being entangled, where i∈{1, 2, ..., n-1}, as shown by the green pair in Fig. 1(b). On the other hand, the swapping part γsis an edge-weighted binary tree with a root pair (v1, vn), where each leaf pair represents an entangled pair (vi, vi+1) for each i∈{1, 2, ..., n-1} and each edge's weight denotes the number of elapsed time slots for the corresponding entangling or swapping processes. Besides, each non-leaf pair (vi, vj) has exactly two child pairs, (vi, vk) and (vk, vj), where i tcland tp> tcr. Consequently, the left child pair ρcloccupies one memory unit of viand vkduring time slots T(ρcl) = {t∈Z | tcl≤t 0, where |I| is the number of SD pairs. Proof. The proof idea is to create an SD pair and a dedicated path and then add them to the TETRIS-N instance for each node in the MIS instance such that the corresponding SD pairs of any two neighboring nodes in the MIS instance cannot be satisfied simultaneously within T. In the following, for any given MIS instance {G = {V, E}}, we show how to construct the corresponding TETRIS-N instance in detail. For each node in V, we create a corresponding SD pair and add it to the TETRIS-N instance. Subsequently, for each created SD pair, we construct a dedicated path that consists of 2⌈log |V|⌉+ 1 nodes with 2⌈log |V|⌉edges and then add the path to Gof the TETRIS-N instance. Afterward, |T| is set to ⌈log |V|⌉+ 2. In addition, the amounts of memory of the source and destination on the path are set to 1, and those of the other nodes (i.e., intermediate nodes) on the path are set to 2. Then, the probability of each path is uniformly set to 1 (i.e., Pr(p) = 1). Last, for each pair of neighboring nodes in G, we pick an intermediate node that has not been picked yet from each of their corresponding paths in Gand then merge the two picked nodes into a single node. Note that the node induced by merging cannot be picked for merging again. The construction can be done in polynomial time. Fig. 6 illustrates an example of instance construction. The MIS instance G includes a clique with three nodes, r1, r2, and r3, which are drawn in the red, yellow, and blue frames, respectively. Then, for each node in G, we create an SD pair and construct a dedicated path that consists of 2⌈log 3⌉+ 1 = 5 nodes, as shown in Fig. 6(b). Note that the color of the edges on each path corresponds to the color of relevant node in G, and thus the path {vr1 1 , vr1 2 , ..., vr1 5 } of SD pair r1 in Grepresents the node r1 in G. In G, an orange edge is between r1 and r2. Therefore, vr1 2 and vr2 2 are selected from the two corresponding paths, respectively, and merged into an orange node denoted by C12 in G, as shown in Fig. 6(c). Similarly, C23 and C13 are derived due to the edges (r2, r3) and (r1, r3), respectively. We then show that the solution of the MIS instance is one-toone mapping to that of the constructed TETRIS-N instance. In this TETRIS-N instance, every SD pair has only one path with scarce memory, and its candidate strategy trees include only the full binary tree due to the subtle setting of |T|, as shown in Fig. 6(d), where |T| is set to ⌈log 3⌉+ 2 = 4. Therefore, for any two neighboring nodes in G, the numerologies of the two corresponding pairs in the TETRIS-N instance cannot be selected at the same time since the merged node on the two paths in Ghas no sufficient memory to serve the two pairs within T. In contrast, they can be selected simultaneously if the two corresponding nodes in G are not adjacent. Figs. 6(d) and 6(e) continue to show the example. Assume that SD pair r1 is admitted to construct numerology on path {vr1 1 , C12, C13, vr1 4 , vr1 5 }. In such a case, r2 cannot construct the numerology on the path {vr2 1 , C12, C23, vr2 4 , vr2 5 } since the memory of C12 has been occupied by r1 (i.e., resource contention), as shown in Fig. 6(e). Thus, the solutions of any MIS instance and its corresponding TETRIS-N instance are one-toone mapping. Thereby, TETRIS-N is NP-hard. We continue to show that the TETRIS-N does not admit any approximation algorithm within a factor of |I|1-εfor any fixed ε> 0 by contradiction. To this end, suppose there exists an |I|1-ε-approximation algorithm Afor the TETRIS-N. Following the above-mentioned instance construction, any arbitrary MIS instance has a corresponding TETRIS-N instance. Assume the optimal solution for the MIS instance has knodes, implying the optimal solution for the TETRIS-N instance maximizes the expected fidelity sum by satisfying the corresponding k pairs. Then, we can employ algorithm Ato find a solution that satisfies at least k |I|1-εpairs, and the found solution corresponds to a solution with at least k n1-εnodes for the MIS instance, where ndenotes the number of nodes in the corresponding MIS instance. This is because |I| is set to nduring the instance construction. However, unless NP= P, the MIS does not admit any approximation ratio within n1-εfor any fixed ε> 0 [41]. Thus, algorithm Amust not exist; otherwise, Acould be employed to derive an n1-ε-approximation algorithm for the MIS. The theorem follows. □ Corollary 1. The TETRIS is at least as hard as the TETRIS-N. V. THE DESIGN OF FRACTIONAL NUMEROLOGY PACKING AND ROUNDING ALGORITHM (FNPR) In this section, we first design a bi-criteria approximation algorithm FNPR for the TETRIS-N (i.e., it has an approximation ratio while relaxing the memory limit by a bounded ratio), then remedy the relaxation on the memory limit, and finally extend it to support the TETRIS. In the beginning, we observe the case where τand |T| are small. In this case, the impact of time slot decoherence diminishes, therefore, all feasible numerologies within Tachieve similar fidelity. To better illustrate this perspective, we conduct the simulation to observe the results, as shown in Fig. 7, where the average path 9 length of each SD pair is about seven. This figure shows that as the time slot length τand the batch size of time slots T decrease, the ratio of Fmax Fmin approaches 1, where Fmax denotes maximum fidelity that can be achieved among all feasible numerologies within T, while Fmin denotes the minimum one. Specifically, this is because: 1) A smaller time slot length τ reduces decoherence and fidelity loss within each time slot, thereby increasing Fmin. 2) With a smaller batch size of time slots T, the overall processing time decreases and further limits the feasible numerologies available for each request. Thus, these feasible numerologies yield comparable fidelity levels, as Trestricts the strategies for entangling and swapping. Therefore, in this case, we can temporarily neglect F(m) in the TETRIS-N and consider it later when deriving the bi-criteria approximation ratio. Then, the modified problem formulation has the objective as Eq. (8) and the constraints (7b), (7c), and (7e). maximize ∑︁ i∈I ∑︁ p∈P(i) ∑︁ m∈Mp(i) Pr(p) · xip m. (8) Specifically, the FNPR employs a combinatorial algorithm with a cleverly-designed DP-based separation oracle, a randomized rounding technique, and two heuristics for ensuring feasibility, as detailed in Sections V-A, V-B, V-C, and V-E, respectively. Furthermore, in Section V-E, we extend the FNPR to address the constraint (7d), guaranteeing the solution feasibility for the TETRIS. Overall, the FNPR can approximate the optimum solution of the TETRIS-N within an approximation ratio of O( Fmax Fmin ) in the price of a bounded relaxation ratio of O(log(|V| · |T|)) on the memory limit (i.e., bi-criteria approximation). Then, with the two heuristics, FNPR's solution can be improved to meet the fidelity threshold and approach the optimum solution as τand |T| are small enough, i.e., Fmax Fmin ≈1. A. The Combinatorial Algorithm We first obtain the relaxed LP by replacing constraint (7e) with constraint (9) as follows: xip m≥0, ∀i∈I, ∀p∈P(i), ∀m∈Mp(i). (9) However, the number of variables in our ILP grows exponentially with the input size since the number of feasible numerologies for each SD pair may be exponential (i.e., the number of possible binary trees). Thus, the relaxed LP (8), (7b), (7c), (9) cannot be solved by an LP solver in polynomial time. On the other hand, by [42], if an LP is in standard form and has a polynomial number of constraints (except for nonnegativity constraints of variables), the number of variables in its dual LP is also polynomial, as described in Definition 5. Then, by combinatorial algorithms, we can obtain a nearoptimum solution to the primal LP in polynomial time if the following properties are satisfied [43]. 1) Each coefficient on the left-hand side is not greater than the constant on the right-hand side for every inequality of the constraint in the primal LP (except for the non-negativity constraints of variables). 1.0 1.4 1.8 2.2 τ(10-3) 1 3 5 7 9 11 13 15 |T| 1.0 1.2 1.4 1.6 1.8 2.0 Fmax Fmin Fig. 7. The ratio of Fmax Fmin for different values of τand |T|. 2) In the dual LP, all coefficients on the left-hand-side of inequality and the constant on the right-hand-side of inequality are positive in each constraint (except for the nonnegativity constraints of variables). 3) A separation oracle exists to tell whether there is a violated constraint for the dual LP. Definition 5. [42] The LP standard form is max{cTx|Ax≤ b, x≥0}, where xis an n× 1 variable vector, and c, b, and A denote n× 1, m× 1, m× nconstant vectors, respectively. The corresponding dual LP is min{bTy|ATy≥c, y≥0}, where y is an m× 1 variable vector. Note that x≥0 and y≥0 are the non-negativity constraints of xand y. Clearly, the relaxed primal LP (8), (7b), (7c), and (9) meets the first property. By Definition 5, we associate dual variables αiand βt vwith each constraint in (7b) and (7c), respectively. Thus, we obtain the corresponding dual LP (10a)-(10c). It can be seen that the dual LP also satisfies the second property. minimize ∑︁ i∈I αi+ ∑︁ v∈V ∑︁ t∈T ct(v) · βt v (10a) subject to αi+ ∑︁ v∈V ∑︁ t∈T θm(t, v) · βt v≥Pr(p), ∀i∈I, ∀p∈P(i), ∀m∈Mp(i) (10b) αi, βt v≥0, ∀i∈I, ∀v∈V, ∀t∈T (10c) Then, we design a separation oracle for the dual LP and get the fractional solution for the relaxed primal LP in Section V-B. However, the fractional solution may be infeasible for the ILP (8), (7b), (7c), and (7e). To this end, we propose an LP rounding algorithm and improve the solution in Sections V-C and V-E. Last, we analyze its performance in Section V-D. B. The Separation Oracle Given an arbitrary (fractional) solution (α, β) to the dual LP, the separation oracle tells whether a violated constraint exists. 10 ˆg(t, (s, d), (σs, σd)) =   βt s+ βt d+ βt-1 s + βt-1 d if (s,d)∈E, t≥2, ct(s) and ct-1 (s)≥σs, ct(d) and ct-1 (d)≥σd; βt s+ βt d+ min ˆg(t-1, (s, d), (σs, σd)), min k∈I(s,d) ˆh(t-1, (s, k, d), (σs, σd)) else if t≥2, ct(s)≥σs, ct(d)≥σd; ∞ otherwise. (13) ˆh(t, (s, k, d), (σs, σd)) = min ˆg(t, (s, k), (σs, 2)) + ˆg(t, (k, d), (1, σd)), ˆg(t, (s, k), (σs, 1)) + ˆg(t, (k, d), (2, σd)) . (14) It is easy to examine every constraint in (10c) in polynomial time, but the challenge arises in identifying whether a violated constraint exists in (10b). Note that the number of SD pairs and the size of path set for each SD pair are both polynomial. Therefore, it suffices to identify the most violated constraint in (10b) for each pair i∈Iand each path p∈P(i). This can be done by computing min m∈Mp(i) ∑︁ v∈V ∑︁ t∈T θm(t, v) · βt v. (11) To this end, we subtly design a DP algorithm to iteratively solve a larger subproblem by examining its smaller subproblems. Specifically, any numerology m∈Mp(i) has a one-toone and onto mapping to a strategy tree through path pwith a root represented by the SD pair (si, di), as shown in Fig. 3. We introduce the function ˆf(T, (s, d)) to find a numerology mthat can generate an entangled pair from node sto node dthrough the path pwithin T, while minimizing Í v∈V Í t∈Tθm(t, v)·βt v. In this way, the desired numerology for a given SD pair iand path p∈P(i) can be found by using ˆf(T, (si, di)). To compute ˆf(T, (s, d)), we examine all local minimum numerologies, each corresponding to a strategy tree rooted at a time slot t∈T. We know that any feasible numerology mrequires θm(t, v) memory units on node vat time slot t, and a strategy tree can be built by combining two sub-strategy trees. Thus, we define a function ˆg(t, (s, d), (σs, σd)) to find a numerology mwith a (sub-)strategy tree rooted at (s, d) at time slot t, which can minimize Í v∈V Í t′∈Tθm(t′, v)·βt′ vwhile ensuring the memory limit ct(s) ≥σsand ct(d) ≥σd, where σs, σd∈{1, 2}. Then, Eq. (12) derives ˆf(T, (s, d)). ˆf(T, (s, d)) = min t∈T ˆg(t, (s, d), (1, 1)) (12) We derive ˆg(t, (s, d), (σs, σd)) according to the three cases. 1) Leaves of strategy tree. If (s, d) ∈Eand t≥2, then we reach a pair of leaves in a strategy tree. Thus, it returns the sum of values of these two leaves and their external nodes as long as the memory limits of sand dare sufficient. 2) Non-leaves of strategy tree. If (s, d) ∉Eand t≥2, then we have two possible cases at time slot t-1. 1) Both s and didle. 2) sand dconduct swapping to consume two entangled links (s, k) and (k, d) to acquire an entangled link (s, d), where kis an intermediate node between sand don path p. It examines the values of sand dplus the value of every case and then returns the minimum if the memory limits of sand dare sufficient. 3) Wrong time or no capacity. If the time slot t≤1, it is impossible to have an entangled link (s, d) at time slot t. Moreover, if the memory limits of sand ddo not meet the requirements, the numerology does not exist. Thus, these cases result in an infeasible solution. Based on the above cases, to derive ˆg(t, (s, d), (σs, σd)), we additionally define the function ˆh(t, (s, k, d), (σs, σd)) in Eq. (14) to represent the minimum Í v∈V Í t′∈Tθm(t′, v) · βt′ vof the entangled pair (s, d) that can be generated by limiting the amount of used memory on node kin either the left or right subproblem. Specifically, if the amount of memory on node k is limited in the left (or right) subproblem, then we set σd= 2 (or σs= 2) to check whether the amount of memory of kis at least 2. Otherwise, the left (or right) subproblem has no limit, and we set σd= 1 (or σs= 1). Then, the recurrence relation of ˆg(t, (s, d), (σs, σd)) can be expressed as Eq. (13), where I(s, d) denotes the set of intermediate nodes on the path p between sand d(i.e., the nodes on pbut excluding sand d). Eqs. (12)-(14) can help find the numerology mthat minimize Í v∈V Í t∈Tθm(t, v) · βt vfor any given i∈Iand p∈P(i). With Eqs. (12)-(14), we can efficiently identify the most violated constraint in (10b), satisfying the third property. As all three properties are satisfied, we can solve the relaxed primal LP and obtain a fractional solution ˆxip musing the combinatorial algorithm in [43] that incorporates our DP separation oracle. C. The Randomized Rounding Given a fractional solution ˆxip mof our primal LP, we design a randomized rounding algorithm to obtain the solution for TETRIS-N. Specifically, let ˆMp(i) denote the set of numerologies with ˆxip m> 0 for each SD pair i∈Iand path p∈P(i). Then, let ˆM(i) = ˆMp1(i) ∪ˆMp2(i) ∪· · · ∪ˆMpn(i), where {p1, p2, . . . , pn} = P(i). It is worth noting that the size of ˆM(i) is polynomial since the combinatorial algorithm terminates in polynomial time [43]. We then interpret ˆxip mas the probability of selecting the numerology mfor each SD pair i. For example, assume that SD pair ihas three numerologies m1, m2, and m3 in ˆM(i), and they are ˆxip1 m1 = 0.1, ˆxip1 m2 = 0.2, and ˆxip2 m3 = 0.3, respectively. Thus, the probabilities of selecting m1, m2, and m3 are 0.1, 0.2, and 0.3, respectively, while the probability of not selecting any numerology for SD pair iis 0.4. D. Bi-Criteria Approximation Ratio and Time Complexity We first analyze the feasibility and the relaxation bound of Eqs. (7b) and (7c) after randomized rounding, and the time complexity in Lemmas 2, 3, and 4, respectively. We then derive the bi-criteria approximation ratio in Theorem 3. To this end, we use the Chernoff bound in Theorem 2 as follows. 11 Theorem 2 (Chernoff bound [44]). There is a set of n independent random variables x1, ..., xn, where xi∈[0, 1] for each i∈[1, n]. Let X= Ín i=1 xiand μ= E[X]. Then, Pr " n ∑︁ i=1 xi≥(1 + ε)μ # ≤e -ε2 μ 2+ε. (15) Lemma 2. The FNPR will satisfy constraint (7b). Proof. The FNPR selects at most one numerology from the union of all sets Mp(i) for each SD pair iusing the randomized rounding in Section V-C. Thus, the lemma holds. □ Lemma 3. The probability that the FNPR relaxes constraint (7c) for any node vat any time slot tby more than a factor of (1 + 4 ln(|V| · |T|)) is at most 1 |V|2·|T|2 , which is negligible. Proof. For each node vat time slot t, we define a random variable zi t,vthat denotes the amount of memory occupied on node vat time slot tfor each SD pair i. Note that a quantum node at time slot tmay require θm(t, v) units of memory to perform numerology m. Thus, zi t,vcan be expressed as: zi t,v= ( θm(t, v) with probability ˆxip m; 0 otherwise. Note that their sum Zt,v= Í i∈Izi t,vis exactly the amount of memory needed on vat time slot tafter rounding. Then, we derive the upper bound of the expectation of the amount of memory occupied on node vat time slot tas follows: E[Zt,v] = E "∑︁ i∈I zi t,v # = ∑︁ i∈I E[zi t,v] = ∑︁ i∈I ∑︁ p∈P(i) ∑︁ m∈Mp(i) θm(t, v) · ˆxip m ≤ct(v). Note that the last inequality directly follows the memory limit constraint (7c). Afterward, we can prove the lemma by Chernoff bound as follows: Pr ∃v∈V, ∃t∈T: Zt,v≥(1 + 4 ln(|V| · |T|)) · ct(v) ≤ ∑︁ v∈V ∑︁ t∈T Pr Zt,v ct(v) ≥1 + 4 ln(|V| · |T|) ≤ ∑︁ v∈V ∑︁ t∈T Pr ∑︁ i∈I zi t,v ct(v) ≥1 + 4 ln(|V| · |T|) ! ≤|V| · |T| · e -(4 ln(|V|·|T|))2 2+4 ln(|V|·|T|) ≤|V|-2 · |T|-2. Note that zi t,v ct(v) ≤1 since the separation oracle adopts Eq. (13) to examine the memory limit, meeting the condition of the Chernoff bound (i.e., every variable zi t,v ct(v) ranges from 0 to 1). Besides, the last inequality holds when |V| · |T| ≥5. □ Lemma 4. The FNPR can be executed in polynomial time. Proof. According to [43], the combinatorial algorithm executes O(ω-2η1 log η1) times of separation oracle, where ωis a constant error bound of primal LP solution and η1 is the number of constraints (except for the non-negativity constraints), i.e., η1 = (|V| · |T| + |I|). Thus, it outputs O(ω-2η1 log η1) numerologies for randomized rounding. In addition, each time of separation oracle takes η2 = O(|V|3 · |T| · |I|). Then, the combinatorial algorithm takes O(ω-2η1η2 log η1), and thus the overall time complexity of FNPR is O(ω-2η1η2 log η1). □ Theorem 3. The bi-criteria approximation ratio of the FNPR for the TETRIS-N is (O( Fmax Fmin ), O(log(|V| · |T|)). Proof. With Lemmas 2, 3, and 4, we can know that FNPR relaxes the memory limit by a ratio of at most O(log(|V| · |T|)) with high probability and runs in polynomial time. It suffices to focus on the gap between the optimal solution and the FNPR's solution. For ease of presentation, let OPT, OPTnF, and OPTnF PLbe the optimum values of the TETRIS-N (i.e., the ILP (7) with no constraint (7d)), the TETRIS-N without considering fidelity (i.e., the ILP (8), (7b), (7c), (7e)), and the primal LP (8), (7b), (7c), (9), respectively. Clearly, OPT≤OPTnF· Fmax because OPTnFhas the maximum number of SD pairs with an allocated numerology. Besides, OPTnF≤OPTnF PLsince the primal LP (8), (7b), (7c), (9) is a LP relaxation of the ILP (8), (7b), (7c), (7e). In addition, let ˆxip mdenote the solution output by the combinatorial algorithm. Then, let ̄xip mbe the rounded solution. Since the expected value of the rounded solution is equal to the optimal solution of the primal LP, the expected fidelity sum of the rounded solution can be bound as follows: E  ∑︁ i∈I ∑︁ p∈P(i) ∑︁ m∈Mp(i) Pr(p) · F(m) · ̄xip m  ≥ ∑︁ i∈I ∑︁ p∈P(i) ∑︁ m∈Mp(i) E h Pr(p) · Fmin · ̄xip m i = Fmin · ∑︁ i∈I ∑︁ p∈P(i) ∑︁ m∈Mp(i) Pr(p) · ˆxip m = Fmin · APPnF PL≥Fmin 1 + ω· OPTnF PL ≥Fmin 1 + ω· OPTnF≥Fmin Fmax · OPT 1 + ω, where APPnF PLis the near-optimal value of primal LP from the combinatorial algorithm with a user-defined constant error bound ω> 0, and the second inequality holds due to [43]. □ E. Heuristic Algorithms for Solution Improvement 1) Remedy for the Memory Limit Relaxation: Since the solution acquired by the FNPR after rounding may relax the memory limit on nodes with a bounded ratio, the following steps are adopted to deal with the memory limit relaxation. Let ̄xip mand ̄Mp(i) denote the variables in the rounded solution and the set of numerologies with ̄xip m> 0 for each i∈Iand p∈P(i), respectively. The heuristic has two phases as follows. In the first phase, for each node vand each time slot t where the memory limit is exceeded, we sort the numerologies occupying v's memory at time slot t(i.e., ̄M(v, t) = {m| θm(t, v)· ̄xip m> 0, ∀m∈ ̄Mp(i), ∀p∈P(i), ∀i∈I}) in nondecreasing order of their expected fidelity, and then iteratively 12 remove numerologies in ̄M(v, t) based on this order until the memory limit of node vat time slot tis satisfied to make the solution feasible. In the second phase, we utilize the residual resources as much as possible. We iteratively allocate the numerology mfor each pair iin non-increasing order of the value of those positive variables ˆxip m(i.e., the fractional solution of LP (8), (7b), (7c), (9)) until no numerology can be chosen or accommodated by the residual network. The heuristic is a polynomial-time algorithm since the number of positive variables ˆxip mis polynomial. 2) Extension to Support the TETRIS: We design the FNPR based on the observation of numerology characteristics in the context of small τand |T|. However, as τand |T| increase, the FNPR may not guarantee solution feasibility for the TETRIS. To address this limitation, we extend the FNPR to support the TETRIS by introducing a heuristic as follows. Following the heuristic proposed in Section V-E1, we additionally add one step to remove the numerology from ̄M(v, t) if its fidelity is less than the threshold bFin the first phase. In the second phase, we only add the numerology mwith adequate fidelity for each pair iin non-increasing order of the value of those positive variables ˆxip muntil no mcan be chosen or accommodated. This heuristic can ensure solution feasibility, and later, the numerical results show the modified solution can approach the optimum. VI. THE DESIGN OF FIDELITY-LOAD TRADE-OFF ALGORITHM (FLTO) Although the FNPR can approximate the optimum when τ and |T| are small, its performance might decrease when τor |T| increases since it neglects fidelity loss. However, it is nontrivial to find the optimal solution for the TETRIS since every SD pair could have an exponential number of numerologies for selection. Fortunately, the separation oracle in the FNPR serves as inspiration for determining the optimal numerology. Following this inspiration and the intrinsic properties of TETRIS, we first design a DP algorithm that can find the optimal numerology with the maximum expected fidelity on the selected path p∈P(i) for any given single SD pair i. Intuitively, we can derive a solution for the TETRIS by iteratively adding a new request based on the available resource by invoking the DP algorithm until no more request can be satisfied. Nonetheless, such a greedy algorithm tends to fall into a local optimum trap since it lacks an overview of current resource utilization for load balancing unless a proper index is provided for the greedy algorithm to estimate the efficiency of the current allocation. Inspired by [23], we subtly design the resource efficiency index (REI) to evaluate a numerology. With REI, our greedy algorithm can mitigate severe network congestion and increase the fidelity sum as τor |T| grows. A. Numerology with the Max. Exp. Fidelity for Single Request Similar to Section V-B, we exploit DP to iteratively solve a larger subproblem by examining the optimum solution of each smaller subproblem to derive the optimal solution for ILP (7a)-(7e) when the SD pair iis single. Hence, we introduce the recursive function f(T, (s, d)) to return the maximum fidelity that can be achieved between node sand node don a specific path pwithin Tas well as the corresponding numerology. Similar to Eq. (12), f(T, (s, d)) requires an additional function g(t, (s, d), (σs, σd)) to derive its value, as shown in Eq. (16). f(T, (s, d)) = max t∈T g(t,(s, d), (1, 1)) | g(t, (s, d), (1, 1)) ≥bF . (16) Note that if the derived maximum fidelity is less than the fidelity threshold bF, the corresponding path will not be considered. Following Eq. (13), to derive g(t, (s, d), (σs, σd)), we define the function h(t, (s, k, d), (σs, σd)) in Eq. (18), which represents the maximum fidelity of the entangled pair (s, d) that can be generated by limiting the amount of used memory on node kin either the left or right subproblem. The details are omitted due to the similarity, and the recurrence relation of g(t, (s, d), (σs, σd)) can be expressed as Eq. (17). In this way, we can identify the numerology with the maximum fidelity on any specific path for any given single SD pair. Subsequently, it is possible to find the numerology with the maximum expected fidelity among all paths in P(i) since the size of predefined path set P(i) is polynomial. To this end, we calculate Pr(p) · F( ˆm) of the numerology ˆmreturned by Eq. (16) for each p∈P(i) and choose the numerology with the highest value. B. Greedy Algorithm with DP Technique We can design a greedy algorithm to solve the ILP (7a)-(7e) by utilizing the DP algorithm in Section VI-A. That is, we iteratively choose the numerology with the maximum expected fidelity by the DP algorithm among all SD pairs and then allocate the resources to the corresponding SD pair until no more numerologies can be chosen. However, such a naive algorithm may cause congestion. To remedy the side effect, we design the REI to help us choose a numerology while avoiding hot spots. Specifically, the REI is defined as follows: R(i, m) = Pr(p) · F(m) Í v∈V Í t∈T θm(t,v) ct(v) , (22) where p∈P(i) and m∈Mp(i) for SD pair i. Note that the numerator and denominator denote the expected fidelity and resource cost of numerology m, respectively. However, finding the numerology with the maximum R(i, m) among all possible numerologies is computationally-intensive. Thus, FLTO focuses on two types of candidate numerologies for each SD pair ias follows: 1) the numerology with the maximum expected fidelity on each path p∈P(i), denoted by mip 1 , and 2) the numerology with the minimum resource cost on each path p∈P(i), denoted by mip 2 . Subsequently, FLTO selects the numerology with the highest R(i, m) from all candidates across all SD pairs, i.e., maxi∈I,p∈P(i){R(i, mip 1 ), R(i, mip 2 )}. Specifically, FLTO invokes the DP algorithm in Section VI-A to determine the mip 1 for each path p∈P(i). In addition, mip 2 for each pcan be found using a similar DP algorithm designed in Eqs. (19)-(21). These equations describe the recurrence relation to search the numerology with the minimum 13 g(t, (s, d), (σs, σd)) =   F(s, d) if (s,d)∈E, t≥2, ct(s) and ct-1 (s)≥σs, ct(d) and ct-1 (d)≥σd; max Fτ g(t-1, (s, d), (σs, σd)) , max k∈I(s,d) h(t-1, (s, k, d), (σs, σd)) else if t≥2, ct(s)≥σs, ct(d)≥σd; -∞ otherwise. (17) h(t, (s, k, d), (σs, σd)) = max Fτ s g(t, (s, k), (σs, 2)), g(t, (k, d), (1, σd)) , Fτ s g(t, (s, k), (σs, 1)), g(t, (k, d), (2, σd)) . (18) ̄f(T, (s, d)) = min t∈T ̄g(t, (s, d), (1, 1)) . (19) ̄g(t, (s, d), (σs, σd)) =   rt s+ rt d+ rt-1 s + rt-1 d if (s,d)∈E, t≥2, ct(s) and ct-1 (s)≥σs, ct(d) and ct-1 (d)≥σd; rt s+ rt d+ min ̄g(t-1, (s, d), (σs, σd)), min k∈I(s,d) ̄h(t-1, (s, k, d), (σs, σd)) else if t≥2, ct(s)≥σs, ct(d)≥σd; ∞ otherwise. (20) ̄h(t, (s, k, d), (σs, σd)) = min ̄g(t, (s, k), (σs, 2)) + ̄g(t, (k, d), (1, σd)), ̄g(t, (s, k), (σs, 1)) + ̄g(t, (k, d), (2, σd)) . (21) resource cost, where rt vdenotes the resource cost of node vat time slot tand is set to 1 ct(v) based on Eq. (22). In brief, ̄f(T, (s, d)) finds the numerology that can generate an entangled pair from node sto node don specific path pwith the minimum resource cost within T, while ̄g(t, (s, d), (σs, σd)) and ̄h(t, (s, k, d), (σs, σd)) are additional functions similar to Eqs. (17) and (18). Note that the numerology will not be considered if the fidelity is less than the fidelity threshold bF, and the DP details are omitted due to the similarity. Overall, FLTO iteratively chooses an SD pair with the numerology that has the maximum REI among all unsaturated SD pairs and then allocates the resources asked by that numerology to saturate the SD pair until no more SD pairs can be served. Then, we analyze the time complexity of FLTO. Both the DP algorithm Eqs. (16) and (19) take O(|V|3·|T|) for each SD pair i∈Ion specific path p∈P(i). Thus, it needs O(|V|3 · |T| · |P|) for each SD pair i∈Icalculating for all paths, where |P| denotes the maximum size of path set P(i) between all SD pairs. Since FLTO chooses one SD pair among all pairs for each round and the number of chosen SD pairs is at most |I|, the overall time complexity is O(|V|3 · |T| · |P| · |I|2). VII. PERFORMANCE EVALUATION A. Simulation Settings The model and default parameters are as follows. The QN topology is generated using the Waxman model [45], with |V| = 100 nodes deployed over a 150 × 300 km region, and an average edge length of 30 km. For clarity, a network topology example is given in Appendix B of the supplementary material. We select |I| = 50 random SD pairs within a batch of |T| = 13 time slots, where each time slot length is τ= 2 ms. This setup ensures that the total duration of a batch (τ× |T|) does not exceed 40 ms, consistent with recent advances in quantum memory [38]. The choice of τ= 2 ms is reasonable since a single swapping process requires approximately 1.1 ms [40]. For each edge, we set λ= 0.045/km [7] and the entangling time T= 0.25 ms [40]. Thus, the average entangling probability is around 0.9, with the number of entangling attempts per time slot given by ξ= ⌊τ T⌋= 8. The initial fidelity for each edge is randomly sampled from the range [0.7, 0.98] [12]. The average memory limit of a node (i.e., ct(v)) is set between 6 to 14 units, aligned with [33] (10 to 16 per node). The swapping probability is set to 0.9 [10], [11]. Following [38], the parameters in Eq. (1) are set as A= 0.25, B= 0.75, T = 40 ms, and κ= 2. The fidelity threshold bFis set to 0.5. Then, we apply GREEDY [9], Q-CAST [10], and REPS [11] to generate the predefined path set P(i) for each pair i∈I. For simplicity, they are denoted by G, Q, and R. Each result is averaged over 50 trials. We compare FNPR and FLTO with the following methods. 1) Nesting [7] first conducts entangling for any two adjacent nodes until all entangled links on the path are created. After that, it greedily maximizes the number of swapping processes for each time slot until the end-to-end entangled link is built. Thus, Nesting has a near-balanced tree structure. 2) Linear [14] performs swapping operations one by one, starting from the source towards the destination, after all entangled links on the path are created. Thus, the numerology that Linear tends to use is a near-biased tree structure. 3) ASAP [33] performs swapping as soon as two adjacent entangled links are successfully generated. ASAP needs to bind the resources for re-entangling and swapping until a successful end-to-end entangled link is achieved. 4) UB is the fractional solution of LP (8), (7b), (7c), (9), derived by the combinatorial algorithm with the separation oracle in FNPR to know the upper bound of the optimum. Note that it is not necessarily feasible for TETRIS. B. Numerical Results Figs. 8-11 illustrate the expected fidelity sum and the number of accepted requests under different parameters. Overall, both FNPR and FLTO consistently demonstrate superior performance across all parameter settings, effectively enhancing 14 10 30 50 70 90 #Requests 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) #Requests vs Exp. Fid. Sum 10 30 50 70 90 #Requests 5 10 15 20 25 30 35 40 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) #Requests vs #Acc. Requests 10 30 50 70 90 #Requests 5 10 15 20 25 Expected Fidelity Sum Q-FNPR Q-FLTO Q-UB Q-Nesting Q-Linear Q-ASAP (c) #Requests vs Exp. Fid. Sum 10 30 50 70 90 #Requests 5 10 15 20 25 30 35 40 #Accepted Requests Q-FNPR Q-FLTO Q-Nesting Q-Linear Q-ASAP (d) #Requests vs #Acc. Requests 10 30 50 70 90 #Requests 5 10 15 20 25 Expected Fidelity Sum R-FNPR R-FLTO R-UB R-Nesting R-Linear R-ASAP (e) #Requests vs Exp. Fid. Sum 10 30 50 70 90 #Requests 5 10 15 20 25 30 35 40 #Accepted Requests R-FNPR R-FLTO R-Nesting R-Linear R-ASAP (f) #Requests vs #Acc. Requests Fig. 8. Effect of different parameters and predefined paths on various metrics. fidelity and throughput while efficiently utilizing the resources of the quantum network. 1) Effect of Number of Requests: Fig. 8 shows the expected fidelity sum and number of accepted requests across varying numbers of requests. As the number of requests increases, both the expected fidelity sum and the number of accepted requests generally exhibit an upward trend since more numerologies can be considered for selection to fully utilize the network resources. No matter which routing algorithm is employed, FNPR and FLTO significantly outperform Nesting, Linear, and ASAP. The results show that FNPR (and FLTO) averagely outperforms Nesting, Linear, and ASAP on expected fidelity sum by up to 60% (60%), 78% (77%), and 63% (62%), respectively. This is because FNPR can achieve near-optimum under a moderate τand |T| by the combinatorial algorithm with the separation oracle, while FLTO uses REI to choose numerologies, balancing resource utilization and fidelity. 2) Effect of Swapping Probability: In the literature, the swapping probability for simulations is typically between 0.5 and 1. To make the model more comprehensive and realistic, we compare the performance of algorithms under different swapping probabilities ranging from 0.5 to 0.9. Figs. 9(a) and 9(b) show the expected fidelity sum and the number of accepted requests under a different swapping probability. Generally, as 0.5 0.6 0.7 0.8 0.9 Swapping Probability 0 5 10 15 20 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) Swap. Prob. vs Exp. Fid. Sum 0.5 0.6 0.7 0.8 0.9 Swapping Probability 0 5 10 15 20 25 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) Swap. Prob. vs #Acc. Requests 1.0 2.5 4.0 5.5 7.0 Entangling Time (10-4 s) 0 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (c) Ent. Time vs Exp. Fid. Sum 1.0 2.5 4.0 5.5 7.0 Entangling Time (10-4 s) 0 5 10 15 20 25 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (d) Ent. Time vs #Acc. Requests 12.5 15.0 20.0 22.5 23.0 τ (10-4 s) 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (e) τvs Exp. Fid. Sum 12.5 15.0 20.0 22.5 23.0 τ (10-4 s) 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (f) τvs #Acc. Requests Fig. 9. Effect of different parameters on various metrics. the swapping probability increases, both the expected fidelity sum and the number of accepted requests tend to rise. Additionally, FNPR and FLTO consistently outperform the other algorithms across all swapping probabilities, indicating they are more robust and efficient in managing resources, leading to superior performance regardless of the swapping probability. 3) Effect of Entangling Time: The timing relationship between entangling and swapping may be affected by distance in reality, causing ξto vary under certain conditions. To achieve more comprehensive comparisons, we consider variations in the entangling time length ranging from 0.1 ms to 0.7 ms, with corresponding ξvalues ranging from 8 to 2. Figs. 9(c) and 9(d) show the expected fidelity sum and the number of accepted requests for different entangling time lengths. Generally, as entangling time length increases, both the expected fidelity sum and the number of accepted requests tend to decrease. This is because a smaller ξallows for fewer entangling attempts, leading to a lower entangling probability. Additionally, FNPR and FLTO outperform the other algorithms. 4) Effect of τ: Figs. 9(e) and 9(f) illustrate the expected fidelity sum and the number of accepted requests under a different τ. Note that varying τaffects ξ, as the number of entangling attempts depends on the time slot length. Although a larger τ allows more entangling attempts, leading to higher entangling 15 5 9 13 17 21 |T| 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) |T| vs Exp. Fid. Sum 5 9 13 17 21 |T| 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) |T| vs #Acc. Requests 6 8 10 12 14 Average Memory Limit 5 10 15 20 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (c) Avg. #Mem. vs Exp. Fid. Sum 6 8 10 12 14 Average Memory Limit 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (d) Avg. #Mem. vs #Acc. Requests 0.6 0.7 0.8 0.9 0.95 Minimum Initial Fidelity 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (e) Min. Init. Fid. vs Exp. Fid. Sum 0.6 0.7 0.8 0.9 0.95 Minimum Initial Fidelity 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (f) Min. Init. Fid. vs #Acc. Requests 0.4 0.45 0.5 0.55 0.6 Fidelity Threshold 0 5 10 15 20 25 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (g) Fid. Thr. vs Exp. Fid. Sum 0.4 0.45 0.5 0.55 0.6 Fidelity Threshold 0 5 10 15 20 25 30 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (h) Fid. Thr. vs #Acc. Requests Fig. 10. Effect of different parameters on various metrics. probability, the results show that a larger τconversely results in a lower expected fidelity sum and fewer accepted requests. This is because the impact of decoherence over a longer time slot outweighs the benefits of increased entangling probability in our setting. Thus, some numerologies will not be admitted due to their low fidelity. Fig. 9(e) further confirms that FNPR outperforms FLTO as τis small, while FLTO is slightly better when τis large, as described in Section VI. 5) Effect of Resource Allocation: Figs. 10(a)-10(d) show the expected fidelity sum and the number of accepted requests under different |T| and average memory limits. Generally, as |T| or the amount of memory increases, both the fidelity sum and the number of accepted requests tend to increase, benefiting from sufficient resources. In Figs. 10(a) and 10(b), 5 7.5 10 12.5 15 Entangling Probablity (10-3) 0 5 10 15 20 Expected Fidelity Sum (10-5) G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (a) Ent. Prob. vs Exp. Fid. Sum 5 7.5 10 12.5 15 Entangling Probablity (10-3) 0 5 10 15 20 #Accepted Requests (10-5) G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (b) Ent. Prob. vs #Acc. Requests 10-4 10-3 10-2 10-1 100 Entangling Probablity 10-10 10-7 10-4 10-1 102 Expected Fidelity Sum G-FNPR G-FLTO G-UB G-Nesting G-Linear G-ASAP (c) Ent. Prob. vs Exp. Fid. Sum 10-4 10-3 10-2 10-1 100 Entangling Probablity 10-10 10-7 10-4 10-1 102 #Accepted Requests G-FNPR G-FLTO G-Nesting G-Linear G-ASAP (d) Ent. Prob. vs #Acc. Requests Fig. 11. Effect of different entangling probabilities on various metrics. FNPR slightly outperforms FLTO as |T| is small since FNPR can be close to the optimum solution as Fmax Fmin ≈1. However, in scenarios with a large |T|, FLTO proves more suitable since it can avoid selecting inefficient numerologies via our DP algorithm with REI when the resources are sufficient. The above results show that each has its own merits. 6) Effect of Initial Fidelity and Threshold: Figs. 10(e)-10(f) and 10(g)-10(h) show the expected fidelity sum and the number of accepted requests at varying minimum initial fidelity levels and fidelity thresholds. Generally, as the minimum initial fidelity increases, both the expected fidelity sum and the number of accepted requests increase since most numerologies can achieve high fidelity. FNPR performs the best under higher minimum initial fidelity because it can approach the optimum solution as Fmax Fmin ≈1. Besides, as the fidelity threshold increases, the expected fidelity sum tends to decrease inevitably; however, FNPR and FLTO still outperform others in all cases. 7) Effect of Entangling Probability: To further observe the effect of low entangling probability, we conduct experiments by treating the entangling probability (i.e., Pr(u, v)) as an abstract input parameter, as shown in Fig. 11. Figs. 11(a) and 11(b) show the expected fidelity sum and number of accepted requests across varying entangling probabilities. In this simulation, the entangling probability is set to range from 0.005 to 0.015, which covers the realistic parameter range discussed in [46]. Due to the low entangling probability, the fidelity sum and number of accepted requests are low under this setting for all simulated methods. Even under such adverse conditions, the proposed FNPR and FLTO still outperform the other algorithms by 20% (18%) to 75% (75%) on average. This is because FNPR utilizes the combinatorial algorithm with randomized rounding to achieve near-optimal solutions, while FLTO invokes the DP techniques to design an effective greedy algorithm. Both algorithms are designed to solve the TETRIS 16 suitably. To more deeply observe the performance changes under different entangling probabilities, we conducted simulations across a wider range of entangling probabilities (i.e., from 0.0001 to 1), as shown in Figs. 11(c) and 11(d). The results show that our proposed algorithms consistently outperform the other algorithms across all entangling probability levels, which ensures the robustness of the proposed algorithms. VIII. CONCLUSION This paper proposes a promising short time slot protocol for entangling and swapping scheduling with a novel optimization problem TETRIS. The TETRIS asks for the solution that selects the numerology (strategy tree) for each accepted request while maximizing the fidelity sum of all accepted requests. To solve the TETRIS, we design two new algorithms. FNPR is a bi-criteria approximation algorithm by solving an LP with a separation oracle and rounding the solution for the cases where the time slot length and the number of time slots in a batch are small. FLTO is a greedy algorithm with a proper index to call two DP-based algorithms adaptively for the other cases. Finally, the simulation results manifest that our algorithms can outperform the existing methods by up to 60 ∼78% in general, and by 20 ∼75% even under low entangling probabilities. REFERENCES [1] C. Elliott, "Building the quantum network," New J. Phys., vol. 4, no. 1, p. 46, 2002. [2] Y.-A. Chen et al., "An integrated space-to-ground quantum communication network over 4,600 kilometres," Nature, vol. 589, no. 7841, pp. 214-219, 2021. [3] S. Daiss et al., "A quantum-logic gate between distant quantum-network modules," Science, vol. 371, no. 6529, pp. 614-617, 2021. [4] P. Komar et al., "A quantum network of clocks," Nat. Phys., vol. 10, no. 8, pp. 582-587, 2014. [5] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge university press, 2010. [6] M. Schlosshauer, "Decoherence, the measurement problem, and interpretations of quantum mechanics," Rev. Mod. Phys., vol. 76, no. 4, p. 1267, 2005. [7] N. Sangouard et al., "Quantum repeaters based on atomic ensembles and linear optics," Rev. Mod. Phys., vol. 83, no. 1, p. 33, 2011. [8] R. Van Meter, Quantum Networking. John Wiley & Sons, Ltd, 2014. [9] M. Pant et al., "Routing entanglement in the quantum internet," npj Quantum Inf., vol. 5, no. 1, p. 25, 2019. [10] S. Shi and C. Qian, "Concurrent entanglement routing for quantum networks: Model and designs," in ACM SIGCOMM, 2020. [11] Y. Zhao and C. Qiao, "Redundant entanglement provisioning and selection for throughput maximization in quantum networks," in IEEE INFOCOM, 2021. [12] Y. Zhao et al., "E2E fidelity aware routing and purification for throughput maximization in quantum networks," in IEEE INFOCOM, 2022. [13] L. Chen et al., "A heuristic remote entanglement distribution algorithm on memory-limited quantum paths," IEEE Trans. Commun., vol. 70, no. 11, pp. 7491-7504, 2022. [14] A. Farahbakhsh and C. Feng, "Opportunistic routing in quantum networks," in IEEE INFOCOM, 2022. [15] Y. Zeng et al., "Entanglement routing design over quantum networks," IEEE/ACM Trans. Netw., vol. 32, no. 1, pp. 352-367, 2024. [16] Z. Yiming et al., "Entanglement routing over quantum networks using greenberger-horne-zeilinger measurements," in IEEE ICDCS, 2023. [17] J. Li et al., "Fidelity-guaranteed entanglement routing in quantum networks," IEEE Trans. Commun., vol. 70, no. 10, pp. 6748-6763, 2022. [18] K. Chakraborty et al., "Entanglement distribution in a quantum network: A multicommodity flow-based approach," IEEE Trans. Quantum Eng., vol. 1, pp. 1-21, 2020. [19] S. Pouryousef et al., "A quantum overlay network for efficient entanglement distribution," in IEEE INFOCOM, 2023. [20] G. Zhao et al., "Segmented entanglement establishment with all-optical switching in quantum networks," IEEE/ACM Trans. Netw., vol. 32, no. 1, pp. 268-282, 2024. [21] C. Cicconetti, M. Conti, and A. Passarella, "Request scheduling in quantum networks," IEEE Trans. Quantum Eng., vol. 2, pp. 2-17, 2021. [22] M. Ghaderibaneh et al., "Efficient quantum network communication using optimized entanglement swapping trees," IEEE Trans. Quantum Eng., vol. 3, pp. 1-20, 2022. [23] Y. Azar and O. Regev, "Strongly polynomial algorithms for the unsplittable flow problem," in IPCO, 2001. [24] R. Van Meter and J. Touch, "Designing quantum repeater networks," IEEE Commun. Mag., vol. 51, no. 8, pp. 64-71, 2013. [25] S. Muralidharan et al., "Optimal architectures for long distance quantum communication," Sci. Rep., vol. 6, no. 1, p. 20463, 2016. [26] S. Pirandola et al., "Fundamental limits of repeaterless quantum communications," Nat. Commun., vol. 8, no. 1, p. 15043, 2017. [27] M. Caleffi, "Optimal routing for quantum networks," IEEE Access, vol. 5, pp. 22 299-22 312, 2017. [28] Y. Zhao and C. Qiao, "Distributed transport protocols for quantum data networks," IEEE/ACM Trans. Netw., vol. 31, no. 6, pp. 2777-2792, 2023. [29] S.-M. Huang et al., "Socially-aware opportunistic routing with path segment selection in quantum networks," in IEEE GLOBECOM, 2023. [30] L. Yang et al., "Asynchronous entanglement provisioning and routing for distributed quantum computing," in IEEE INFOCOM, 2023. [31] S. Haldar et al., "Fast and reliable entanglement distribution with quantum repeaters: Principles for improving protocols using reinforcement learning," Phys. Rev. Appl., vol. 21, p. 024041, 2024. [32] ́A. G. I ̃nesta et al., "Optimal entanglement distribution policies in homogeneous repeater chains with cutoffs," npj Quantum Inf., vol. 9, p. 46, 2023. [33] S. Haldar et al., "Reducing classical communication costs in multiplexed quantum repeaters using hardware-aware quasi-local policies," arXiv preprint , 2024. [34] K. Goodenough, T. Coopmans, and D. Towsley, "On noise in swap asap repeater chains: exact analytics, distributions and tight approximations," arXiv preprint , 2024. [35] R. F. Werner, "Quantum states with einstein-podolsky-rosen correlations admitting a hidden-variable model," Phys. Rev. A, vol. 40, no. 8, p. 4277, 1989. [36] N. K. Panigrahy et al., "On the capacity region of a quantum switch with entanglement purification," in IEEE INFOCOM, 2023. [37] M. H. Abobeih et al., "One-second coherence for a single electron spin coupled to a multi-qubit nuclear-spin environment," Nat. Commun., vol. 9, no. 1, p. 2552, 2018. [38] C. E. Bradley et al., "A ten-qubit solid-state spin register with quantum memory up to one minute," Phys. Rev. X., vol. 9, no. 3, p. 031045, 2019. [39] A. Sen, U. Sen, ˇC. Brukner, V. Buˇzek, and M. ̇Zukowski, "Entanglement swapping of noisy states: A kind of superadditivity in nonclassicality," Phys. Rev. A: At., Mol., Opt. Phys., vol. 72, no. 4, p. 042310, 2005. [40] M. Pompili et al., "Realization of a multinode quantum network of remote solid-state qubits," Science, vol. 372, no. 6539, pp. 259-264, 2021. [41] J. Hastad, "Clique is hard to approximate within n1-ε," in IEEE FOCS, 1996. [42] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, Third Edition. The MIT Press, 2009. [43] N. Garg and J. K ̈onemann, "Faster and simpler algorithms for multicommodity flow and other fractional packing problems," SIAM J. Comput., vol. 37, no. 2, pp. 630-652, 2007. [44] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis, 2nd ed. USA: Cambridge University Press, 2017. [45] B. Waxman, "Routing of multipoint connections," IEEE J. Sel. Areas Commun., vol. 6, no. 9, pp. 1617-1622, 1988. [46] A. J. Stolk et al., "Metropolitan-scale heralded entanglement of solidstate qubits," Sci. Adv., vol. 10, no. 44, p. eadp6442, 2024. APPENDIX A DISCUSSION OF THE COUNTER-INTUITIVE EXAMPLE We conducted additional simulations and clarified the reasons behind this counter-intuitive example. There are two main 17 1.00 1.25 1.50 1.75 2.00 2.25 κ 0.760 0.765 0.770 0.775 0.780 0.785 Fidelity Skewed strategy tree Complete strategy tree Fig. 12. Comparison of fidelity between the strategy trees in Figs. 1(b) and 1(c) under different values of κ. factors explaining why Fig. 1(b) has better fidelity than Fig. 1(c): 1) In our decoherence model (i.e., Eq. (1)), we set κ= 2. Note that κcontrols the decoherence speed. For example, in Fig. 12, we compare the two strategy trees in Figs. 1(b) and 1(c) under different values of κranging from 1 to 2.3. As κ increases, the fidelity of the skewed strategy tree in Fig. 1(b) tends to surpass that of the complete strategy tree in Fig. 1(c) because a higher κreduces decoherence rate. 2) Most of the swapping processes in Fig. 1(b) consume two entangled pairs with similar fidelity and thus suffer less fidelity loss due to swapping processes. From the perspective above, we can also conclude that if all the links along the path have similar initial fidelity, the complete strategy tree will conversely outperform the other. For example, if all the generated pairs (v1, v2), (v2, v3), (v3, v4), and (v4, v5) have initial fidelity 0.98, then the final fidelity of the strategy trees in Figs. 1(b) and 1(c) will be 0.889 and 0.891, respectively. APPENDIX B EXAMPLE OF NETWORK TOPOLOGY Fig. 13 shows a representative illustration of the network topology generated using the Waxman model. 0 20 40 60 80 100 120 140 X (km) 0 50 100 150 200 250 300 Y (km) Fig. 13. Network topology sample generated using the Waxman model.
2510.14914
1 Design of Paper Robot Building Kits Ruhan Yang ATLAS Institute, University of Colorado Boulder ruhan.yang@colorado.edu Ellen Yi-Luen Do ATLAS Institute, University of Colorado Boulder ellen.do@colorado.edu Figure 1: Design space of paper robots Building robots is an engaging activity that provides opportunities for hands-on learning. However, traditional robot-building kits are usually costly with limited functionality due to material and technology constraints. To improve the accessibility and flexibility of such kits, we take paper as the building material and extensively explore the versatility of paper-based interactions. Based on an analysis of current robot-building kits and paper-based interaction research, we propose a design space for devising paper robots. We also analyzed our building kit designs using this design space, where these kits demonstrate the potential of paper as a cost-effective material for robot building. As a starting point, our design space and building kit examples provide a guideline that inspires and informs future research and development of novel paper robot- building kits. Keywords: Design space; paper robot; robot-building kits; paper-based interaction 2 1 INTRODUCTION With the popularity of STEM education, educational robots have become a part of our lives. Today, there are robot-building kits for people of all ages and skill levels. Conventional robot-building kits are usually made with plastic as it could be shaped in various forms and colors [Freinkel, 2011]. However, those unsustainable plastic components need a specific fabrication process to mold, which makes it costly and challenging to be customized by end users. To improve accessibility, sustainability, and foster creativity, we propose designing paper robot-building kits. Building robots out of paper offers a unique solution to this issue. Since the 1990s, researchers have explored paper-based interactions. These efforts range from the integration of paper with virtual reality [e.g., Wellner, 1991] to the movement of the paper itself [e.g., Wrensch, 1998]. These studies have provided a solid foundation for the application of paper-based tangible interactions in everyday life. By applying those techniques, we can achieve more creative and expressive designs in robot building without the constraints of cost and fabrication capabilities. The development of paper robot-building kits could also increase public interest in paper-based interaction and promote research and progress in the field. This approach to robot building not only has the potential to change the way we build and design, but it also makes the world of human- computer interaction more accessible and open to all, thus opening up new possibilities for education, entertainment, and more. However, despite its potential, the design space for paper robots remains largely uncharted. To better apply paper-based interaction to robot building, we surveyed 30 robot-building kits and 30 paper-based interaction studies, identified key design elements, and synthesized them into the following design space of paper robots (see Figure 1). First, we divided all interactions into input and output categories. Input interactions can be achieved through manipulation, integrated sensors, or separated sensors, and appear as flat, folded, or 3D shapes. Output interactions can be implemented by integrated or separated actuators and appear as single or multiple pieces. This design space integrates insights from robot building kits and paper-based interaction research, laying the groundwork for further exploration and innovation in this emerging field. In this paper, we also showcase several paper robot-building kits to highlight the potential and versatility of paper robot techniques. These example kits provide insight into the design and fabrication possibilities of paper robots, and will serve as inspiration for those who wish to explore this field. We presented these designs to the public, and the positive feedback received emphasizes the feasibility of paper robots and the interest in the future development of this field. The results suggest the potential of paper robot-building kits to play a popular and influential role in the fields of robotics, engineering, crafts, and the arts. By providing the design space and example designs, we aim to motivate others to explore this exciting area and use the information provided in this paper to guide and direct their own work. In the following, we first introduce the concepts related to paper robot-building kits. We then review the state of the art in robot-building kits and paper-based interaction research, define their features, then describe the design space of the paper robots. This is followed by a demonstration of several building kit designs and an analysis of them based on the design space. Finally, we discuss future research opportunities. 2 SCOPE AND METHOD 2.1 Scope and Definitions 2.1.1 Robots and Robot-building Kits. The scope of robots discussed in this paper is smaller-sized personal robots that can perform some simple movements, including interaction with people or the environment. The building kit discussed in this paper refers to a collection of components that can be assembled to form a certain structure. When searching using "robot building" and "robot building kit" as keywords, we found that most robot-building kits are intended for education and entertainment. These robots are smaller and have more limited functionality, so they do not include industrial-grade materials or structures; users focus more on building and designing, and rarely require the robot to complete a specified task or complex movement. Rather, these robots play a role in developing hands-on skills, STEM education, and entertainment. Therefore, all the robot-related work we discuss in this paper has an educational or entertainment focus, and we do not cover any specialized robot-building kits or systems (such as Roboteq and ArduPilot). Moreover, we consider STEM building platforms that are provided with parts only such as Arduino and LittleBits as systems of parts, so they are not considered as robot-building kits in this paper. 3 2.1.2 Paper and Paper-based Interactions. Our discussion covers all types of paper and its derivatives, including paper of different weights, paper with different coatings, cardboard, etc. There has been some work discussing paper robots. Some use paper structures and retain the circuitry and electronics of traditional robots, as in [Dibia, 2017]; others have used the properties of paper to complete a functional structure that replaces traditional electronic components, such as [Ryu, 2020]. All these works present us with cases of paper interaction design use in robot building. In this paper, we consider paper-based interaction as a broader scope that includes any interaction that involves the use of paper, regardless of the technology used. We classify these interactions into three types: augmented reality-based interactions (e.g., using projections [Wellner, 1991], sound [Back, 2001], or AR markers [Zheng, 2020]); paper circuits [e.g., Coelho, 2009]; and paper movement [e.g., Saul, 2010]. 2.2 METHOD The design space of the paper robot, as well as the designs of the robot-building kits, are derived from a review of the current robot-building kits and paper-based interactions research. In order to collect a representative set of related work, we investigated the commercial market and the research and maker communities. Figure 2: Complexity distribution of robot-building kits For the robot-building kits, we first surveyed stores including Target and Walmart, as well as online e-commerce sites Amazon and eBay. We searched for "robot building", "robot kit", "robot building kit", "robot building toy", "STEM building kit", "paper robot", "craft robot", and "DIY robot". We then searched Instructables, Pinterest, Maker Faire, YouTube, TikTok, and Etsy, using the same keywords, to learn what is going on in the maker community. Finally we investigated the research community by searching the ACM Digital Library, IEEE Xplore, Springer, and ResearchGate. Based on these results, we also investigated the source of these designs by searching for the company, publisher, and author of them. We identified over 1,000 robot-building kit designs in the first round. From them, we excluded designs that were out of scope such as the more professionally-focused systems and those that did not meet the interaction requirements, such as model robots intended only for decoration. Finally, we summarized the remaining designs, organized them by the way they were physically built and digitally created, and the level of difficulty involved in building them. We finally selected 30 representative designs for detailed analysis. Figure 2 shows the distribution of these building kits in terms of complexity of the digital creation and physical build. For paper-based interactions, we searched for "paper craft", "paper interaction", "paper mechanisms", "paper computing", "paper circuits", and "origami". We collected 176 papers and grouped them by their interaction designs. For papers that addressed different versions of the same 4 project, we kept only the most recent one; and for different projects that used a similar interaction design, we kept only the earliest one. In the end, we selected 30 representative papers/projects for further review. Although we have provided an in-depth review into both robot-building kit designs and paper-based interaction research, our list of relevant works is not an exhaustive list of these two fields. Our focus is on the expansion of the design space, so we are interested in the uniqueness of these efforts in terms of how interactions are performed. We first analyzed the designs of the robot building kits and collected the interaction elements included in each kit. We then reviewed paper-based interaction studies and summarized the interaction modalities they proposed. Next, we analyzed and summarized all the interactions and identified the dimensions of the design space. Based on this design space, we analyzed the five building kits that we designed. These building kits were developed to test the feasibility of the concept of building robots out of paper. We deployed these kits in public events, where we observed people working on them. 3 INTERACTION DESIGNS FROM ROBOT-BUILDING KITS Figure 3: Input and output modalities of robot-building kits In this section, we delve into the interaction design elements present in robot-building kits. We categorize the input and output modalities of these kits, and we identify two input types: 1) manipulation, and 2) integrated sensors. For the outputs, we propose two output types: 1) integrated actuators, and 2) separated actuators. With these dimensions, we can map the interactions of robot-building kits into an initial design space (Figure 3). In Figure 4, we provide a list of these interactions, indicating the specific kit from which each interaction originates. 5 Figure 4: Interaction methods from each robot-building kit Input Type 1) Manipulation: Manipulation refers to direct interaction with the robot. This interaction is usually found in robot-building kits without electronics, or that include only simple circuits without a microcontroller. These kits are simple in structure and less expensive than those with multiple electronics and complex circuitry. The simplest kits are easy to assemble and are often colorful, such as [“Colorations”, n.d.]. These kits promote arts and crafts learning, help develop fine motor skills, and encourage curiosity and social development. However, the only "interaction" these robots provide is for the user to connect the different parts. Some robot building kits are based on opening and closing circuits [such as Johnco, n.d.], or where the circuit is based on structural connections [such as Schweikardt, 2007]. These kits also use the manipulation of connections as an input. Some robot-building kits focus on the mechanical structure through fine parts to make the whole robot more dynamic, [such as “Nintendo Labo”, n.d.]. The mechanical motion of these robots is based on pushing and rotating. Input Type 2) Integrated Sensor: Integrated Sensor refers to inputs based on electronic components embedded in the robots. Our focus is on the type of interaction, not the type of sensor, so we classify sensors by their ability to detect specific stimuli, rather than their physical design or technical specifications. For example, a phototransistor can detect light and also the motion of occluders/shadows. So even though there is no controller, such a robot can have input through integrated sensors. Some robot kits use optical encoders to measure positions and motions [such as “LEGO BOOST”, n.d.]. Similarly, sensors that detect motion and distance are often interchangeable and are common in robot building kits [such as “Zivko the Robot”, n.d.]. Sound is also a common modality, usually sensed using a microphone [such as “LEGO BOOST”, n.d.]. Some more advanced kits use cameras to detect different content, including light, color, and barcodes [such as “LEGO Super Mario”, n.d.]. Other sensors are only found in advanced robot building kits [such as “MBot | Makeblock”, n.d.], including temperature sensors, magnetic field sensors, gyroscopes, and accelerometers. Output Type 1) Integrated Actuator: Integrated Actuator refers to outputs based on electronic components embedded in the robots. Almost all robot outputs are of this type, considering that most robots are standalone devices. The most common output is the shape-changing system [such as “LEGO BOOST”, n.d.], implemented by a variety of motors. Other kits include small motors that output in the form of vibration [such as “Nintendo Labo”, n.d.]. Another common output is light, usually using LEDs or similar optical components. The fourth is the use of speakers to provide audio feedback. 6 Output Type 2) Separated Actuator: Separated actuators refer to outputs based on electronic components that are not fully embodied in the robots. They may be connected to the robot by cable or wirelessly, but are not a (structurally or functionally necessary) part of the robot itself. One (and the only) representative actuator is the screen. A few robots have a built-in screen, such as [“LEGO Super Mario”, n.d., "ClicBot", n.d.], through which they display some interactive information as well as add animation effects to the robot. For other robot-building kits, such as [“LEGO MINDSTORMS”, n.d.], the screen gives the user more feedback on the operation than on the interaction. There are also kits involving using a phone or tablet for operation [“LEGO BOOST”, n.d., “The Crafty Robot”, n.d.], in which case the screen is more a controller than an output device. Except for adding animation effects which are necessary on the robot, all these screen uses do not require the screen itself as part of the robot, so we consider the screen as a separated actuator. 4 ELEMENTS OF PAPER-BASED INTERACTIONS Figure 5: Elements of paper-based interaction designs In this section, we categorize the interaction designs proposed in paper-based interaction studies (Figure 5). Since the 1990s, many researchers have been exploring paper-based human-computer interaction, a field known as paper computing [Kaplan, 2010]. Research within this field often falls at the intersection of multiple domains, including augmented reality environments, tangible interactions, physical computing, ubiquitous computing, digital art, and more. Based on the interactive environments, we divide these studies into three groups: 1) interaction based on augmented reality, 2) paper circuits, and 3) movable paper crafts. Inspired by Zhu’s taxonomy [2013] we propose another dimension of design space: the shape of the paper interface. This dimension spans flat, folded, and 3D paper devices for the input, and single sheet and multiple sheets of paper for the output devices. 4.1 Paper with Augmented Reality (AR) Paper-based interactions based on augmented reality technologies typically use manipulation and separated sensors for input, and separated actuators for output. Early paper computing focused on the interoperable use of paper and digital documents. These studies aimed to bridge the physical and digital worlds by creating an interactive work environment. Like [Wellner, 1991], many systems used a camera to capture the contents of a paper document and the position of the user's finger, and gave feedback using a projector to overlay images directly on the same paper document. With the advancement of augmented reality technology, researchers explored designing and developing mixed reality books that use augmented reality glasses or screens to overlay virtual content onto the pages of physical books [Grasset, 2008, Rajaram, 2022]. These 7 works showed how new interactive technologies could enrich traditional paper-based work. Another series of studies, represented by [Back, 2001, Mackay, 2002, Liao, 2005], were conducted on physical books. These studies used cameras and magnetic field sensors to control the audio (sound), adding a rich soundtrack to printed graphics and text while maintaining the original appearance of the book. A more recent study uses augmented reality markers (computer vision markers) printed on paper, which turn traditional paper into an interactive interface with a combination of camera and display [Zheng, 2020, Bae, 2021, Bae, 2022]. Across all these studies, an important insight of paper-based interaction is to maintain the properties of the paper and the ways people traditionally interact with them. Users can interact with those through conventional methods of manipulation, such as touching, flipping, rotating, and sliding. Regarding the shape of the paper, the input of most systems [Wellner, 1991, Grasset, 2008, Back, 2001, Mackay, 2002, Liao, 2005] is based on flat paper except for [Zheng, 2020] which is based on folded paper and [Bae, 2022] which is based on 3D paper. Systems that use projectors output on the single sheet of paper; unlike systems that use screens. 4.2 Paper Circuit Most paper circuits use manipulation as input, and integrated actuators for outputs, with all outputs directly on the paper. Starting with pulp- based computing, researchers investigated embedded circuits. Some start from the process of making paper by hand and embedding conductive materials or electronic components into the paper “sandwich” [Coelho, 2009, Knouf, 2017]. Other studies proposed different ways of making paper circuits, including the use of circuit stickers [Hodges, 2014, “Chibitronics”, n.d.] and weaving techniques [Kato, 2019]. In these systems, common inputs include connection and bending, and common outputs include embedded LEDs (lights). The inputs of these interactions are based on 3D structures, while the outputs are based on multiple sheets of paper. The other direction of exploration focuses on the surface of the paper. By adding different coatings and pasting different electronic components, [Buechley, 2009] introduces interaction using painting as input. Further explorations of material properties include paper's thinness and softness. Researchers further explored the coating of the paper. These studies discussed the fabrication of paper-based interaction with mechanical processing equipment instead of traditional handcrafting. Using color-changing paint, researchers created paper computing artworks that would make these graphic arts pieces more interactive [Kato, 2019]. In this interaction, the color-changing was triggered by heating up and cooling down the paper. The inputs of these interactions are based on flat paper, while the outputs are based on a single sheet of paper. As the forms of paper computing became diverse, researchers began to explore the aesthetics of this field. With the Electronic Popables proposed, the combination of interactive books and electronic devices was also explored [Qi, 2010]. In this design, the input modalities include rotation, touching, pressing, pulling, and painting, with the light output through the embedded LEDs. The inputs of these interactions are based on folded paper, while the outputs are based on multiple sheets of paper. Techniques for making circuits via inkjet printers have also been widely discussed. They are commonly used to make paper-based sensors and printed circuit boards [Oh, 2018, Gong, 2014, Landers, 2022]. Paper circuits can be used to make loudspeakers (sound) without permanent magnets via electrostatic speaker technology [Kato, 2022]. The inputs presented in these systems include connection, temperature-changing, bending, pressing, touching, and soaking; the outputs include lights, color-changing, and sound. In addition to adding conductive materials to paper, researchers have also explored working on coated paper. In Sensing Kirigami, researchers enabled the paper to sense bending, pushing, and stretching, by making different cuts to carbon-coated paper [Zheng, 2019]. Along with the exploration of materials, these studies also emphasize the importance of aesthetics in paper computing, opening up more possibilities for the fabrication of paper-based interaction. The inputs of these interactions are based on 3D paper structures; while the output of [Kato, 2022] is based on a single sheet of paper and [Oh, 2018] is based on multiple sheets of paper. Most systems from this series are served as input devices only. 4.3 Moveable Paper Crafts Interactions in moveable paper crafts are more focused on outputs, including both integrated actuators and separated actuators. [Wrensch, 1998, Saul, 2010, Zhu, 2013] investigated shape memory alloy (SMA) embedded in paper objects. Multiple patterned SMA wires can be straightened or bent upon receipt of a signal, thus triggering angle-changing. These movements can occur on a whole paper structure (single sheet) [Wrensch, 1998], between two paper structures connected by SMA wires (multiple sheets) [Saul, 2010, Zhu, 2013], or on a whole sheet of 8 folded paper (single sheet) [Qi, 2012]. Due to the limitations of SMA materials in handling, other deformation materials have also been investigated. Some researchers have emphasized the importance of making deformable paper crafts at home, and suggested using microwaves to heat paper for shape-changing [Yasu, 2012]. To that end, the combination of paper and plastic was also explored. By using 3D printers to add Polylactide (PLA) to the surface of the paper, these novel and easy-to-manufacture paper actuators offer reversible deformation (bending) [Wang, 2018]. Using the swelling and shrinking (shape-changing) of paper, researchers also printed wax on paper to create a water-driven paper robot [Ryu, 2020]. The outputs of these systems are all based on a single sheet of paper. In addition to these movements with embedded materials, paper movements using separate actuators have also been explored. [Oh, 2018] presented different motor-based paper movements, including rotation and linear movement. This system demonstrated the great potential of this technology for education, machines, toys, and dynamic artwork. By adding magnets to the paper and placing it on a base with motors, researchers have also developed low-cost toolkits that allow non-technical designers to see shape-changing more intuitively [Yehoshua, 2021]. These systems are output on multiple sheets of paper. These explorations often emphasize low cost, creativity, and customization. They connect the two fields of handcraft and computing, introducing the concept of computationally-enhanced craft. 5 DESIGN SPACE OF PAPER ROBOTS Building on the relevant work of robot-building kits and paper-based interactions, this section proposes the design space of paper robots (review Figure 1 for a visual summary of the design space dimensions). For the interface shapes, we used the same dimensions of paper-based interaction design space – flat, folded, and 3D paper structures for input, and single and multiple sheets of paper for output. For each individual interaction approach, we explain it with illustrations and text in the following tables (Table 1 and Table 2). In particular, when introducing designs from the robot-building kits into this design space, we made some adjustments to their type and approach. This design space covers the interactions that appear in the paper-based interaction designs and the robot-building kits, helping us to understand the capabilities and limitations of the prior work. We first noticed that prior paper-based interaction designs did not cover all interactions we can find in the robot-building kits. Most paper-based interaction designs have focused on manipulation of the paper itself, and less on sensing with embedded electronics. For example, distance sensing with ultrasonic sensors, which is common in robot-building kits, has not yet been used in paper-based interaction design. In addition to compiling the interactions from the robot-building kits and paper-based interaction design, we propose additional interactions based on our own experience on paper crafts, including inputs such as twist, blow, stack, cut, moisture, AR glasses, and motion capture system, and outputs such as twist, connection, separation, length-changing, and shadow. The categories of these interactions are briefly illustrated in the figure below (Figure 6). Figure 6: Additional interaction methods With this design space being proposed, we can perceive these missing interactions and propose new designs. Most of the robot-building kits use plastic or metal parts, which are expensive and not easily customized. Given the current cardboard robot-building kits available, it is feasible to use paper for structural construction. And with this design space, we see the possibility of using paper instead of traditional plastic parts to complete the interaction, which can further reduce the cost of the robot-building kits, and make them easy to customize. 9 Table 1: Interaction modalities as inputs 10 Table 1 (continued): Interaction modalities as inputs 11 Table 2: Interaction modalities as outputs 12 Table 2 (continued): Interaction modalities as outputs 6 DESIGN EXAMPLES OF PAPER ROBOT-BUILDING KITS Robot-building kits with paper-based interactions provide a balance between the cost, customization, and functionality. With this vision, we designed three paper circuit building kits and two paper robot-building kits. The paper circuit building kits focus on visual feedback from the circuits, while the paper robot-building kits provide more dynamic interactions and enable end-users to customize these units in terms of both digital creation and physical fabrication. In the following, we present the designs of these building kits and evaluate them based on our proposed design space. Figure 7: Bookmark Light Kit (a: printed pattern (left) and built bookmark (right); b: the light is off when the bookmark is on the page; c: and lights up when it’s removed from the page) The Bookmark Light Kit (Figure 7) provides materials to make a bookmark that lights up. After completing the circuit as illustrated, the bookmark attaches to the pages of a book like a normal magnetic bookmark, and its circuit will automatically close when it is removed thus becoming a small flashlight. This kit uses manipulation-connection and folded paper structures for the input, and integrated actuator-light 13 with multiple sheets of paper for the output. Although it is folded from a single sheet, its circuit is divided into two parts, one on each side of the folding line. In this case we considered it as multiple sheets. Figure 8: Christmas Tree Kit (a: front side (outside) of the paper; b: back side (inside) of the paper; c: built Christmas tree (bent and standing)) The Christmas Tree Kit (Figure 8) is a Christmas card that can be mailed to others. After applying the copper tape and magnets to the cardstock, users bend it into a cone and then use the attraction of the magnets to hang LEDs on it. Its circuit is more complex than the previous kit, requiring copper tape on both sides of the paper. This kit also uses manipulation-connection as an input, but it is based on a 3D paper structure, as the completion of the circuit relies on the 3D structure after bending. It uses integrated actuator-light as its output. Although its circuits are on the front and back sides of the paper, these circuits cannot be physically separated as in the previous set, in which case we consider the output to be based on a single sheet of paper. Figure 9: Halloween Lantern Kit (a: half-built lantern(flat); b: built lantern (turned off); c: built lantern (turned on)) The third kit we designed omits the magnets and uses a paper structure to open and close the circuit. The Halloween Lantern Kit (Figure 9) provides a pattern of a box lantern with a movable panel that acts as an on/off switch. Once built, the lantern looks like a white cube; upon pressing the movable panel, the LED in the middle of the lantern will light up and the design printed inside the lantern will be displayed. Its inputs are manipulation-pressing and manipulation-pulling based on the 3D paper structure; its outputs are integrated actuator-lights and integrated actuator-shadow based on the single sheet of paper structure. Figure 10: The Escaping Takeout Box (a: flat; b: folded) The Escaping Takeout Box (Figure 10) is a stand-alone paper robot. This paper robot-building kit includes controller circuits, a battery, two vibration motors, an ultrasonic sensor, and a piece of cardstock that folds into the shape of a takeout box. After it is built, the pre-burned program in the controller enables it to detect nearby objects and “escapes” when something (such as a human hand) approaches it. End-users can customize this robot in both physical fabrication and digital creation. This building kit comes with a pre-cut cardstock piece that can be trimmed 14 to any new pattern for physically customizing the robot's structure. The ATMEGA328P microcontroller used in this robot allows for smooth programming and customization by end-users. It is compatible with the Arduino Uno platform and can read and write programs directly, without any additional equipment or chip burners. This robot uses integrated sensor-distance for input with a 3D paper structure, and the output is based on the single sheet of paper, moving by integrated actuator-vibration that changes its center of gravity. Figure 11: Paper Box Robot Kit (from left to right: power box, waving box, spinning box, speaker box, and light box) The Paper Box Robot Kit (Figure 11) [Yang, 2024] uses a modular design that was inspired by modular robots like Topobo [Raffle, 2004] and Cubelets [Schweikardt, 2007]. This robot includes five modules, which are the power box, the waving box, the spinning box, the speaker box, and the light box. Except for the power and spinning boxes, the other three boxes have independent controller circuits. Similar to the Escaping Takeout Box, these controllers offer the capability of being reprogrammed by end-users, and the use of IC sockets makes it easier for users to replace these microcontrollers. Each of these boxes could be considered as an individual paper circuit building kit, and they combine to form a paper robot-building kit. We color-coded the different boxes in order to better distinguish them as they are similar in appearance. As an entire paper robot-building kit, the Paper Box Robot Kit features inputs that are based on 3D paper structure with manipulation- connection. The power box (orange) contains a simple circuit with a 9v battery, two capacitors, and a voltage regulator. The conductive fabric tape on the outside of the box passes through the four sides of the box and then goes inside the box to connect with the two terminals of the circuit output. The waving box (blue) includes the controller circuit and a servo motor held in the middle by a paper structure, and its output is based on multiple sheets of paper with integrated actuator-rotation. The motor connects to the piece of paper on the outside of the box by magnetic coupling [“Engineering360”, n.d.], to transmit torque without touching. Magnetic couplings allow the robot to obtain more mechanical motion while being easy to assemble. The spinning box (green) uses a DC motor, and as it requires more power to operate, we added an extra battery, a diode, and a transistor to the circuit. A small opening on the box allows the motor shaft to go straight through the box to the outside, so its output is based on a single sheet of paper with the separated actuator-shape-changing system. It differs t from the waving box in that its output is the motor itself, not on the paper. If we connect the motor's shaft to another box, then its output will be based on multiple sheets of paper. Due to the low torque of the motor in the spinning box, magnetic coupling is not suitable. The speaker box (red) includes the control circuit, an additional resistor, and a buzzer. It uses a single sheet of paper with the integrated actuator-sound as output. The light box (white) includes the control circuit, three resistors, and an RGB LED. This box also comes with an outer shell, the inside of which can be printed or painted with different patterns, and the patterns become visible when the shell is set on the lighted box. In addition to using 15 manipulation-connection, the combination of the shell and the light box also use manipulation-stacking as input. The output of the light box is based on integrated actuators-lights with the single sheet structure when used alone, and integrated actuator-shadow with multiple sheets when used with a shell. 7 DISCUSSION AND CONCLUSION Based on a review of 60 relevant works, we proposed a design space for paper robots and presented five building kits that were designed using elements from this design space. Our workshops confirmed the public's interest in building robots out of paper. Our design space can help researchers and designers better understand the state of the art in paper-based interaction design, and we can address the gaps between paper- based interactions and robot-building kit designs by filling in the empty slots of the design space. The building kits presented in this paper serve as examples of how elements in the design space could be combined and applied to paper robots. The design space reveals that most current paper-based interactions consist of manipulation as inputs with integrated actuators for outputs. Based on these interactions that have already been developed, we can build paper robots with different combinations of them. With the design space, there are also opportunities to build paper robots by combining more types of inputs and outputs. However, for paper-based interactions with separate sensors and separate actuators, how to combine these electronic components with paper remains a key issue for researchers. The current design space for paper robots has certain limitations, such as the lack of analysis on controllers and other logic units. In our future work, more dimensions of this design space will be developed to better support complex interaction designs. Another question to ponder is whether there should be a limit to one input and one output in a paper robot. Although the Paper Box Robot Kit provided multiple output units, participants in the workshop preferred to connect one box at a time to the power unit; however, for the Escaping Takeout Box, participants did expect more feedback than just vibration/movement after a certain amount of time spent with it. In addition, the exploration of the appearance of paper robots should also be taken into consideration, in other words, appearance and size should also be included in the design space in the future. Another dimension that needs to be developed is the fabrication/building technique, and this is also an issue that must be addressed. In discussing the accessibility of paper robot-building kits, we would like the fabrication techniques used to be household-accessible and not requiring any special environment or equipment. Accessibility was the main reason we chose the paper structures and circuit designs that are full of straight lines. While using copper tape is an affordable option, during the workshop, we noted that some participants were unable to use copper tape proficiently, especially children and others with fine motor impairments. The use of conductive paint, on the other hand, would have been more costly, which was not in line with the original intent of building robots with paper. As the designs of paper robots become increasingly complex, it is critical that we not only consider their interaction elements, but also investigate the methods used to fabricate them. While copper tape is currently sufficient for building relatively simple circuits, we must explore alternative solutions to make the fabrication of paper circuits easier and more feasible for future designs. By bringing fabrication methods into the design space, we can ensure that paper robots are accessible and feasible. Paper robot-building kits achieve a good balance between affordability, functionality and customization options. Building robots out of paper allows for more creative expressions. People can use different colors, patterns and shapes to make the robot the way they like without worrying about the cost or recycling of the materials. Looking at the current market, the maker community, and the research community, there are already some robots made using cardboard. However, it wouldn't make the most sense to just replace the plastic material of a traditional robot with cardboard and then use plastic parts or metal screws to connect them while keeping the traditional circuitry. We need paper robots that highlight the properties of the paper itself, and by integrating paper-based interaction design, we can achieve this goal and create real paper robots. For future work, we should further explore the paper robot design space and make better use of paper-based interactions for different robot designs. In this paper, we present the design space for paper robots, which brings together insights from traditional robot-building kits and paper-based interaction studies. Our aim is to create a comprehensive framework to explore the potential of paper robots and to foster creativity in this emerging field. We hope that our work will provide guidance for future investigations and stimulate future studies of paper robots. 16 REFERENCE 4M Kidzlabs Mega Hydraulic Arm Robotic Science Kit. Walmart.com. https://www.walmart.com/ip/4M-Kidzlabs-Mega-Hydraulic-Arm-Robotic-Science- Kit/396388274 AlphaBot2 robot building kit for BBC micro:bit (no micro:bit). botnroll.com. https://www.botnroll.com/en/mounting-kits/3523-alphabot2-robot-building-kit-for-bbc- micro-bit-no-micro-bit.html Chibitronics. Chibitronics. https://chibitronics.com/ Colorations® Create Your Own Robot - Kit for 12. Discount School Supply. https://www.discountschoolsupply.com/arts-crafts/arts-crafts-kits/craft-kits- projects/colorations-create-your-own-robot---kit-for-12/p/33667 Discovery Robotics. Walmart.com. https://www.walmart.com/ip/Discovery-Robotics/341579728 Magnetic Couplings Selection Guide: Types, Features, Applications | Engineering360. https://www.globalspec.com/learnmore/motion_controls/power_transmission_mechanical/magnetic_couplings Amazon.com: Erector by Meccano Super Construction 25-In-1 Motorized Building Set, Steam Education Toy, 638 Parts, For Ages 10+ : Toys & Games. https://www.amazon.com/Meccano-Construction-Motorized-Building-Education/dp/B000GOF5S2 “High-Fivey” the Cardboard Micro:bit Robot : 18 Steps (with Pictures). Instructables. https://www.instructables.com/High-Fivey-the-Cardboard-Microbit-Robot/ Amazon.com: Klutz Lego Gear Bots Science/STEM Activity Kit : Toys & Games. https://www.amazon.com/Klutz-Lego-Gear-Bots/dp/1338603450 LEGO® BOOST | Official LEGO® Shop US. https://www.lego.com/en-us/themes/boost/about LEGO® MINDSTORMS® | Invent a Robot | Official LEGO® Shop US. https://www.lego.com/en-us/themes/mindstorms Take brick building to a new level with LEGO® Super Mario™. https://www.lego.com/en-us/themes/super-mario/about Makedo. https://www.make.do/ mBot Neo STEM Programmable Robotics Kit | Makeblock. https://store.makeblock.com/products/diy-coding-robot-kits-mbot- neo?currency=USD&variant=42817922891992&utm_medium=cpc&utm_source=google&utm_campaign=Google%20Shopping&gclid=CjwKCAiA_vKeBhA dEiwAFb_nraXwT2YLaugSs8S8sUFNKRbd7elE42jax0VgWbD_6yaldKn56vtSaBoCWzQQAvD_BwE Nintendo Labo Toy-Con 01 Variety Kit | Nintendo Switch | Nintendo. Nintendo Labo Toy-Con 01 Variety Kit | Nintendo Switch | Nintendo. https://www.nintendo.com/sg/switch/adfu/index.html Amazon.com: Thames & Kosmos SolarBots: 8-in-1 Solar Robot STEM Experiment Kit | Build 8 Cool Solar-Powered Robots in Minutes | No Batteries Required | Learn About Solar Energy & Technology | Solar Panel Included : Toys & Games. https://www.amazon.com/Thames-Kosmos-SolarBots-Experiment-Solar- Powered/dp/B085P361MQ Kids First Coding & Robotics Screen-free Coding Kit & Lessons, K to 2 – Thames & Kosmos. https://store.thamesandkosmos.com/products/coding-and-robotics The Crafty Robot. The Crafty Robot. https://thecraftyrobot.net/ Jumbo Kit > TheOffbits. TheOffbits. https://theoffbits.com/product/offbits-multi-kit-jumbo-kit VEX Robotics | HEXBUG. https://www.hexbug.com/vex Walking Robot. KiwiCo. https://www.kiwico.com/us/store/dp/walking-robot-project-kit/1986 Zivko the Robot. https://shop.elenco.com/consumers/zivko-the-robot.html 로보트리(Robotry). https://robotry.co.kr/ Maribeth Back, Jonathan Cohen, Rich Gold, Steve Harrison, and Scott Minneman. 2001. Listen reader: an electronically augmented paper-based book. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’01), 23–29. https://doi.org/10.1145/365024.365031 S. Sandra Bae, Rishi Vanukuru, Ruhan Yang, Peter Gyory, Ran Zhou, Ellen Yi-Luen Do, and Danielle Albers Szafir. 2022. Cultivating Visualization Literacy for Children Through Curiosity and Play. https://doi.org/10.48550/arXiv.2208.05015 Sandra Bae, Ruhan Yang, Peter Gyory, Julia Uhr, Danielle Albers Szafir, and Ellen Yi-Luen Do. 2021. Touching Information with DIY Paper Charts & AR Markers. 17 In Interaction Design and Children (IDC ’21), 433–438. https://doi.org/10.1145/3459990.3465191 Leah Buechley, Sue Hendrix, and Mike Eisenberg. 2009. Paints, paper, and programs: first steps toward the computational sketchbook. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI ’09), 9–12. https://doi.org/10.1145/1517664.1517670 Marcelo Coelho, Lyndl Hall, Joanna Berzowska, and Pattie Maes. 2009. Pulp-based computing: a framework for building computers out of paper. In CHI ’09 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’09), 3527–3528. https://doi.org/10.1145/1520340.1520525 Victor C. Dibia, Maryam Ashoori, Aaron Cox, and Justin D. Weisz. 2017. TJBot: An Open Source DIY Cardboard Robot for Programming Cognitive Systems. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’17), 381–384. https://doi.org/10.1145/3027063.3052965 Susan Freinkel. 2011. Plastic: A Toxic Love Story. Text Publishing Company. Nan-Wei Gong, Jürgen Steimle, Simon Olberding, Steve Hodges, Nicholas Edward Gillian, Yoshihiro Kawahara, and Joseph A. Paradiso. 2014. PrintSense: a versatile sensing technique to support multimodal flexible surface interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 1407–1410. https://doi.org/10.1145/2556288.2557173 Raphaël Grasset, Andreas Dünser, and Mark Billinghurst. 2008. Edutainment with a mixed reality book: a visually augmented illustrative childrens’ book. In Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology (ACE ’08), 292–295. https://doi.org/10.1145/1501750.1501819 Steve Hodges, Nicolas Villar, Nicholas Chen, Tushar Chugh, Jie Qi, Diana Nowacka, and Yoshihiro Kawahara. 2014. Circuit stickers: peel-and-stick construction of interactive electronic prototypes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14), 1743–1746. https://doi.org/10.1145/2556288.2557150 Johnco. 4M - KidzRobotix - Tin Can Robot. Johnco. http://www.johncoproductions.com/products/4m-kidzrobotix-tin-can-robot Johnco. 4M - Sci:Bits - Box Robot. Johnco. http://www.johncoproductions.com/products/4m-sci-bits-box-robot Johnco. 4M - Techcraft - Paper Circuit Science. Johnco. http://www.johncoproductions.com/products/4m-techcraft-sound-light-kit Fredéric Kaplan and Patrick Jermann. 2010. PaperComp 2010: first international workshop on paper computing. In Proceedings of the 12th ACM international conference adjunct papers on Ubiquitous computing - Adjunct (UbiComp ’10 Adjunct), 507–510. https://doi.org/10.1145/1864431.1864500 Kunihiro Kato, Kaori Ikematsu, Yuki Igarashi, and Yoshihiro Kawahara. 2022. Paper-Woven Circuits: Fabrication Approach for Papercraft-based Electronic Devices. In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’22), 1–11. https://doi.org/10.1145/3490149.3502253 Kunihiro Kato, Kazuya Saito, and Yoshihiro Kawahara. 2019. OrigamiSpeaker: Handcrafted Paper Speaker with Silver Nano-Particle Ink. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19), 1–6. https://doi.org/10.1145/3290607.3312872 Nicholas A. Knouf. 2017. Felted Paper Circuits Using Joomchi. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’17), 443–450. https://doi.org/10.1145/3024969.3025071 Mya Landers, Anwar Elhadad, Maryam Rezaie, and Seokheun Choi. 2022. Integrated Papertronic Techniques: Highly Customizable Resistor, Supercapacitor, and Transistor Circuitry on a Single Sheet of Paper. ACS Applied Materials & Interfaces 14, 40: 45658–45668. https://doi.org/10.1021/acsami.2c13503 Chunyuan Liao, François Guimbretière, and Ken Hinckley. 2005. PapierCraft: a command system for interactive paper. In Proceedings of the 18th annual ACM symposium on User interface software and technology (UIST ’05), 241–244. https://doi.org/10.1145/1095034.1095074 Wendy E. Mackay, Guillaume Pothier, Catherine Letondal, Kaare Bøegh, and Hans Erik Sørensen. 2002. The missing link: augmenting biology laboratory notebooks. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02), 41–50. https://doi.org/10.1145/571985.571992 Hyunjoo Oh, Sherry Hsi, Michael Eisenberg, and Mark D. Gross. 2018. Paper mechatronics: present and future. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC ’18), 389–395. https://doi.org/10.1145/3202185.3202761 Hyunjoo Oh, Tung D. Ta, Ryo Suzuki, Mark D. Gross, Yoshihiro Kawahara, and Lining Yao. 2018. PEP (3D Printed Electronic Papercrafts): An Integrated Approach for 3D Sculpting Paper-Based Electronic Devices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 1–12. https://doi.org/10.1145/3173574.3174015 Jie Qi and Leah Buechley. 2010. Electronic popables: exploring paper-based computing through an interactive pop-up book. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI ’10), 121–128. https://doi.org/10.1145/1709886.1709909 Jie Qi and Leah Buechley. 2012. Animating paper using shape memory alloys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12), 749–752. https://doi.org/10.1145/2207676.2207783 Hayes Solos Raffle, Amanda J. Parkes, and Hiroshi Ishii. 2004. Topobo: a constructive assembly system with kinetic memory. In Proceedings of the 2004 conference on Human factors in computing systems - CHI ’04, 647–654. https://doi.org/10.1145/985692.985774 18 Shwetha Rajaram and Michael Nebeling. 2022. Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences. In CHI Conference on Human Factors in Computing Systems (CHI ’22), 1–16. https://doi.org/10.1145/3491102.3517486 M. Resnick, F. Martin, R. Sargent, and B. Silverman. 1996. Programmable Bricks: Toys to think with. IBM Systems Journal 35, 3.4: 443–452. https://doi.org/10.1147/sj.353.0443 Jihyun Ryu, Maedeh Mohammadifar, Mehdi Tahernia, Ha-ill Chun, Yang Gao, and Seokheun Choi. 2020. Paper Robotics: Self-Folding, Gripping, and Locomotion. Advanced Materials Technologies 5, 4: 1901054. https://doi.org/10.1002/admt.201901054 Greg Saul, Cheng Xu, and Mark D. Gross. 2010. Interactive paper devices: end-user design & fabrication. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI ’10), 205–212. https://doi.org/10.1145/1709886.1709924 Eric Schweikardt. 2007. Modular robotics as tools for design. In Proceedings of the 6th ACM SIGCHI conference on Creativity & cognition (C&amp;C ’07), 298. https://doi.org/10.1145/1254960.1255034 Kohei Tsuji and Akira Wakita. 2011. Anabiosis: an interactive pictorial art based on polychrome paper computing. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology - ACE ’11, 1. https://doi.org/10.1145/2071423.2071521 Guanyun Wang, Tingyu Cheng, Youngwook Do, Humphrey Yang, Ye Tao, Jianzhe Gu, Byoungkwon An, and Lining Yao. 2018. Printed Paper Actuator: A Low-cost Reversible Actuation and Sensing Method for Shape Changing Interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), 1–12. https://doi.org/10.1145/3173574.3174143 Pierre Wellner. 1991. The DigitalDesk calculator: tangible manipulation on a desk top display. In Proceedings of the 4th annual ACM symposium on User interface software and technology (UIST ’91), 27–33. https://doi.org/10.1145/120782.120785 Thomas Wrensch and Michael Eisenberg. 1998. The programmable hinge: toward computationally enhanced crafts. In Proceedings of the 11th annual ACM symposium on User interface software and technology (UIST ’98), 89–96. https://doi.org/10.1145/288392.288577 Peta Wyeth and Gordon Wyeth. 2001. Electronic Blocks: Tangible Programming Elements for Preschoolers. In Proceedings of the Eighth IFIP TC13 Conference on Human-Computer Interaction (INTERACT. Ruhan Yang and Ellen Yi-Luen Do. 2024. PaBo Bot: Paper Box Robots for Everyone. In Companion of the 2024 ACM/IEEE International Conference on Human- Robot Interaction (HRI '24). Association for Computing Machinery, New York, NY, USA, 1158–1162. https://doi- org.colorado.idm.oclc.org/10.1145/3610978.3640696 Kentaro Yasu and Masahiko Inami. 2012. POPAPY: Instant Paper Craft Made Up in a Microwave Oven. In Advances in Computer Entertainment, Anton Nijholt, Teresa Romão and Dennis Reidsma (eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 406–420. https://doi.org/10.1007/978-3-642-34292-9_29 Iddo Yehoshua Wald and Oren Zuckerman. 2021. Magnetform: a Shape-change Display Toolkit for Material-oriented Designers. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, 1–14. https://doi.org/10.1145/3430524.3446066 Clement Zheng, HyunJoo Oh, Laura Devendorf, and Ellen Yi-Luen Do. 2019. Sensing Kirigami. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS ’19), 921–934. https://doi.org/10.1145/3322276.3323689 Clement Zheng, Peter Gyory, and Ellen Yi-Luen Do. 2020. Tangible Interfaces with Printed Paper Markers. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. Association for Computing Machinery, New York, NY, USA, 909–923. https://doi.org/10.1145/3357236.3395578 Kening Zhu and Shengdong Zhao. 2013. AutoGami: a low-cost rapid prototyping toolkit for automated movable paper craft. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13), 661–670. https://doi.org/10.1145/2470654.2470748
1 Design of Paper Robot Building Kits Ruhan Yang ATLAS Institute, -Luen Do ATLAS Institute, 1: Design space of paper robots Building robots is an engaging activity that provides opportunities for hands-on learning. However, traditional robot-building kits are usually costly with limited functionality due to material and technology constraints. To improve the accessibility and flexibility of such kits, we take paper as the building material and extensively explore the versatility of paper-based interactions. Based on an analysis of current robot-building kits and paper-based interaction research, we propose a design space for devising paper robots. We also analyzed our building kit designs using this design space, where these kits demonstrate the potential of paper as a cost-effective material for robot building. As a starting point, our design space and building kit examples provide a guideline that inspires and informs future research and development of novel paper robotbuilding kits. Keywords: Design space; paper robot; robot-building kits; paper-based interaction 2 1 INTRODUCTION With the popularity of STEM education, educational robots have become a part of our lives. Today, there are robot-building kits for people of all ages and skill levels. Conventional robot-building kits are usually made with plastic as it could be shaped in various forms and colors [Freinkel, 2011]. However, those unsustainable plastic components need a specific fabrication process to mold, which makes it costly and challenging to be customized by end users. To improve accessibility, sustainability, and foster creativity, we propose designing paper robot-building kits. Building robots out of paper offers a unique solution to this issue. Since the 1990s, researchers have explored paper-based interactions. These efforts range from the integration of paper with virtual reality [e.g., Wellner, 1991] to the movement of the paper itself [e.g., Wrensch, 1998]. These studies have provided a solid foundation for the application of paper-based tangible interactions in everyday life. By applying those techniques, we can achieve more creative and expressive designs in robot building without the constraints of cost and fabrication capabilities. The development of paper robot-building kits could also increase public interest in paper-based interaction and promote research and progress in the field. This approach to robot building not only has the potential to change the way we build and design, but it also makes the world of humancomputer interaction more accessible and open to all, thus opening up new possibilities for education, entertainment, and more. However, despite its potential, the design space for paper robots remains largely uncharted. To better apply paper-based interaction to robot building, we surveyed 30 robot-building kits and 30 paper-based interaction studies, identified key design elements, and synthesized them into the following design space of paper robots (see Figure 1). First, we divided all interactions into input and output categories. Input interactions can be achieved through manipulation, integrated sensors, or separated sensors, and appear as flat, folded, or 3D shapes. Output interactions can be implemented by integrated or separated actuators and appear as single or multiple pieces. This design space integrates insights from robot building kits and paper-based interaction research, laying the groundwork for further exploration and innovation in this emerging field. In this paper, we also showcase several paper robot-building kits to highlight the potential and versatility of paper robot techniques. These example kits provide insight into the design and fabrication possibilities of paper robots, and will serve as inspiration for those who wish to explore this field. We presented these designs to the public, and the positive feedback received emphasizes the feasibility of paper robots and the interest in the future development of this field. The results suggest the potential of paper robot-building kits to play a popular and influential role in the fields of robotics, engineering, crafts, and the arts. By providing the design space and example designs, we aim to motivate others to explore this exciting area and use the information provided in this paper to guide and direct their own work. In the following, we first introduce the concepts related to paper robot-building kits. We then review the state of the art in robot-building kits and paper-based interaction research, define their features, then describe the design space of the paper robots. This is followed by a demonstration of several building kit designs and an analysis of them based on the design space. Finally, we discuss future research opportunities. 2 SCOPE AND METHOD 2.1 Scope and Definitions 2.1.1 Robots and Robot-building Kits. The scope of robots discussed in this paper is smaller-sized personal robots that can perform some simple movements, including interaction with people or the environment. The building kit discussed in this paper refers to a collection of components that can be assembled to form a certain structure. When searching using "robot building" and "robot building kit" as keywords, we found that most robot-building kits are intended for education and entertainment. These robots are smaller and have more limited functionality, so they do not include industrial-grade materials or structures; users focus more on building and designing, and rarely require the robot to complete a specified task or complex movement. Rather, these robots play a role in developing hands-on skills, STEM education, and entertainment. Therefore, all the robot-related work we discuss in this paper has an educational or entertainment focus, and we do not cover any specialized robot-building kits or systems (such as Roboteq and ArduPilot). Moreover, we consider STEM building platforms that are provided with parts only such as Arduino and LittleBits as systems of parts, so they are not considered as robot-building kits in this paper. 3 2.1.2 Paper and Paper-based Interactions. Our discussion covers all types of paper and its derivatives, including paper of different weights, paper with different coatings, cardboard, etc. There has been some work discussing paper robots. Some use paper structures and retain the circuitry and electronics of traditional robots, as in [Dibia, 2017]; others have used the properties of paper to complete a functional structure that replaces traditional electronic components, such as [Ryu, 2020]. All these works present us with cases of paper interaction design use in robot building. In this paper, we consider paper-based interaction as a broader scope that includes any interaction that involves the use of paper, regardless of the technology used. We classify these interactions into three types: augmented reality-based interactions (e.g., using projections [Wellner, 1991], sound [Back, 2001], or AR markers [Zheng, 2020]); paper circuits [e.g., Coelho, 2009]; and paper movement [e.g., Saul, 2010]. 2.2 METHOD The design space of the paper robot, as well as the designs of the robot-building kits, are derived from a review of the current robot-building kits and paper-based interactions research. In order to collect a representative set of related work, we investigated the commercial market and the research and maker communities. Figure 2: Complexity distribution of robot-building kits For the robot-building kits, we first surveyed stores including Target and Walmart, as well as online e-commerce sites Amazon and eBay. We searched for "robot building", "robot kit", "robot building kit", "robot building toy", "STEM building kit", "paper robot", "craft robot", and "DIY robot". We then searched Instructables, Pinterest, Maker Faire, YouTube, TikTok, and Etsy, using the same keywords, to learn what is going on in the maker community. Finally we investigated the research community by searching the ACM Digital Library, IEEE Xplore, Springer, and ResearchGate. Based on these results, we also investigated the source of these designs by searching for the company, publisher, and author of them. We identified over 1,000 robot-building kit designs in the first round. From them, we excluded designs that were out of scope such as the more professionally-focused systems and those that did not meet the interaction requirements, such as model robots intended only for decoration. Finally, we summarized the remaining designs, organized them by the way they were physically built and digitally created, and the level of difficulty involved in building them. We finally selected 30 representative designs for detailed analysis. Figure 2 shows the distribution of these building kits in terms of complexity of the digital creation and physical build. For paper-based interactions, we searched for "paper craft", "paper interaction", "paper mechanisms", "paper computing", "paper circuits", and "origami". We collected 176 papers and grouped them by their interaction designs. For papers that addressed different versions of the same 4 project, we kept only the most recent one; and for different projects that used a similar interaction design, we kept only the earliest one. In the end, we selected 30 representative papers/projects for further review. Although we have provided an in-depth review into both robot-building kit designs and paper-based interaction research, our list of relevant works is not an exhaustive list of these two fields. Our focus is on the expansion of the design space, so we are interested in the uniqueness of these efforts in terms of how interactions are performed. We first analyzed the designs of the robot building kits and collected the interaction elements included in each kit. We then reviewed paper-based interaction studies and summarized the interaction modalities they proposed. Next, we analyzed and summarized all the interactions and identified the dimensions of the design space. Based on this design space, we analyzed the five building kits that we designed. These building kits were developed to test the feasibility of the concept of building robots out of paper. We deployed these kits in public events, where we observed people working on them. 3 INTERACTION DESIGNS FROM ROBOT-BUILDING KITS Figure 3: Input and output modalities of robot-building kits In this section, we delve into the interaction design elements present in robot-building kits. We categorize the input and output modalities of these kits, and we identify two input types: 1) manipulation, and 2) integrated sensors. For the outputs, we propose two output types: 1) integrated actuators, and 2) separated actuators. With these dimensions, we can map the interactions of robot-building kits into an initial design space (Figure 3). In Figure 4, we provide a list of these interactions, indicating the specific kit from which each interaction originates. 5 Figure 4: Interaction methods from each robot-building kit Input Type 1) Manipulation: Manipulation refers to direct interaction with the robot. This interaction is usually found in robot-building kits without electronics, or that include only simple circuits without a microcontroller. These kits are simple in structure and less expensive than those with multiple electronics and complex circuitry. The simplest kits are easy to assemble and are often colorful, such as ["Colorations", n.d.]. These kits promote arts and crafts learning, help develop fine motor skills, and encourage curiosity and social development. However, the only "interaction" these robots provide is for the user to connect the different parts. Some robot building kits are based on opening and closing circuits [such as Johnco, n.d.], or where the circuit is based on structural connections [such as Schweikardt, 2007]. These kits also use the manipulation of connections as an input. Some robot-building kits focus on the mechanical structure through fine parts to make the whole robot more dynamic, [such as "Nintendo Labo", n.d.]. The mechanical motion of these robots is based on pushing and rotating. Input Type 2) Integrated Sensor: Integrated Sensor refers to inputs based on electronic components embedded in the robots. Our focus is on the type of interaction, not the type of sensor, so we classify sensors by their ability to detect specific stimuli, rather than their physical design or technical specifications. For example, a phototransistor can detect light and also the motion of occluders/shadows. So even though there is no controller, such a robot can have input through integrated sensors. Some robot kits use optical encoders to measure positions and motions [such as "LEGO BOOST", n.d.]. Similarly, sensors that detect motion and distance are often interchangeable and are common in robot building kits [such as "Zivko the Robot", n.d.]. Sound is also a common modality, usually sensed using a microphone [such as "LEGO BOOST", n.d.]. Some more advanced kits use cameras to detect different content, including light, color, and barcodes [such as "LEGO Super Mario", n.d.]. Other sensors are only found in advanced robot building kits [such as "MBot | Makeblock", n.d.], including temperature sensors, magnetic field sensors, gyroscopes, and accelerometers. Output Type 1) Integrated Actuator: Integrated Actuator refers to outputs based on electronic components embedded in the robots. Almost all robot outputs are of this type, considering that most robots are standalone devices. The most common output is the shape-changing system [such as "LEGO BOOST", n.d.], implemented by a variety of motors. Other kits include small motors that output in the form of vibration [such as "Nintendo Labo", n.d.]. Another common output is light, usually using LEDs or similar optical components. The fourth is the use of speakers to provide audio feedback. 6 Output Type 2) Separated Actuator: Separated actuators refer to outputs based on electronic components that are not fully embodied in the robots. They may be connected to the robot by cable or wirelessly, but are not a (structurally or functionally necessary) part of the robot itself. One (and the only) representative actuator is the screen. A few robots have a built-in screen, such as ["LEGO Super Mario", n.d., "ClicBot", n.d.], through which they display some interactive information as well as add animation effects to the robot. For other robot-building kits, such as ["LEGO MINDSTORMS", n.d.], the screen gives the user more feedback on the operation than on the interaction. There are also kits involving using a phone or tablet for operation ["LEGO BOOST", n.d., "The Crafty Robot", n.d.], in which case the screen is more a controller than an output device. Except for adding animation effects which are necessary on the robot, all these screen uses do not require the screen itself as part of the robot, so we consider the screen as a separated actuator. 4 ELEMENTS OF PAPER-BASED INTERACTIONS Figure 5: Elements of paper-based interaction designs In this section, we categorize the interaction designs proposed in paper-based interaction studies (Figure 5). Since the 1990s, many researchers have been exploring paper-based human-computer interaction, a field known as paper computing [Kaplan, 2010]. Research within this field often falls at the intersection of multiple domains, including augmented reality environments, tangible interactions, physical computing, ubiquitous computing, digital art, and more. Based on the interactive environments, we divide these studies into three groups: 1) interaction based on augmented reality, 2) paper circuits, and 3) movable paper crafts. Inspired by Zhu's taxonomy [2013] we propose another dimension of design space: the shape of the paper interface. This dimension spans flat, folded, and 3D paper devices for the input, and single sheet and multiple sheets of paper for the output devices. 4.1 Paper with Augmented Reality (AR) Paper-based interactions based on augmented reality technologies typically use manipulation and separated sensors for input, and separated actuators for output. Early paper computing focused on the interoperable use of paper and digital documents. These studies aimed to bridge the physical and digital worlds by creating an interactive work environment. Like [Wellner, 1991], many systems used a camera to capture the contents of a paper document and the position of the user's finger, and gave feedback using a projector to overlay images directly on the same paper document. With the advancement of augmented reality technology, researchers explored designing and developing mixed reality books that use augmented reality glasses or screens to overlay virtual content onto the pages of physical books [Grasset, 2008, Rajaram, 2022]. These 7 works showed how new interactive technologies could enrich traditional paper-based work. Another series of studies, represented by [Back, 2001, Mackay, 2002, Liao, 2005], were conducted on physical books. These studies used cameras and magnetic field sensors to control the audio (sound), adding a rich soundtrack to printed graphics and text while maintaining the original appearance of the book. A more recent study uses augmented reality markers (computer vision markers) printed on paper, which turn traditional paper into an interactive interface with a combination of camera and display [Zheng, 2020, Bae, 2021, Bae, 2022]. Across all these studies, an important insight of paper-based interaction is to maintain the properties of the paper and the ways people traditionally interact with them. Users can interact with those through conventional methods of manipulation, such as touching, flipping, rotating, and sliding. Regarding the shape of the paper, the input of most systems [Wellner, 1991, Grasset, 2008, Back, 2001, Mackay, 2002, Liao, 2005] is based on flat paper except for [Zheng, 2020] which is based on folded paper and [Bae, 2022] which is based on 3D paper. Systems that use projectors output on the single sheet of paper; unlike systems that use screens. 4.2 Paper Circuit Most paper circuits use manipulation as input, and integrated actuators for outputs, with all outputs directly on the paper. Starting with pulpbased computing, researchers investigated embedded circuits. Some start from the process of making paper by hand and embedding conductive materials or electronic components into the paper "sandwich" [Coelho, 2009, Knouf, 2017]. Other studies proposed different ways of making paper circuits, including the use of circuit stickers [Hodges, 2014, "Chibitronics", n.d.] and weaving techniques [Kato, 2019]. In these systems, common inputs include connection and bending, and common outputs include embedded LEDs (lights). The inputs of these interactions are based on 3D structures, while the outputs are based on multiple sheets of paper. The other direction of exploration focuses on the surface of the paper. By adding different coatings and pasting different electronic components, [Buechley, 2009] introduces interaction using painting as input. Further explorations of material properties include paper's thinness and softness. Researchers further explored the coating of the paper. These studies discussed the fabrication of paper-based interaction with mechanical processing equipment instead of traditional handcrafting. Using color-changing paint, researchers created paper computing artworks that would make these graphic arts pieces more interactive [Kato, 2019]. In this interaction, the color-changing was triggered by heating up and cooling down the paper. The inputs of these interactions are based on flat paper, while the outputs are based on a single sheet of paper. As the forms of paper computing became diverse, researchers began to explore the aesthetics of this field. With the Electronic Popables proposed, the combination of interactive books and electronic devices was also explored [Qi, 2010]. In this design, the input modalities include rotation, touching, pressing, pulling, and painting, with the light output through the embedded LEDs. The inputs of these interactions are based on folded paper, while the outputs are based on multiple sheets of paper. Techniques for making circuits via inkjet printers have also been widely discussed. They are commonly used to make paper-based sensors and printed circuit boards [Oh, 2018, Gong, 2014, Landers, 2022]. Paper circuits can be used to make loudspeakers (sound) without permanent magnets via electrostatic speaker technology [Kato, 2022]. The inputs presented in these systems include connection, temperature-changing, bending, pressing, touching, and soaking; the outputs include lights, color-changing, and sound. In addition to adding conductive materials to paper, researchers have also explored working on coated paper. In Sensing Kirigami, researchers enabled the paper to sense bending, pushing, and stretching, by making different cuts to carbon-coated paper [Zheng, 2019]. Along with the exploration of materials, these studies also emphasize the importance of aesthetics in paper computing, opening up more possibilities for the fabrication of paper-based interaction. The inputs of these interactions are based on 3D paper structures; while the output of [Kato, 2022] is based on a single sheet of paper and [Oh, 2018] is based on multiple sheets of paper. Most systems from this series are served as input devices only. 4.3 Moveable Paper Crafts Interactions in moveable paper crafts are more focused on outputs, including both integrated actuators and separated actuators. [Wrensch, 1998, Saul, 2010, Zhu, 2013] investigated shape memory alloy (SMA) embedded in paper objects. Multiple patterned SMA wires can be straightened or bent upon receipt of a signal, thus triggering angle-changing. These movements can occur on a whole paper structure (single sheet) [Wrensch, 1998], between two paper structures connected by SMA wires (multiple sheets) [Saul, 2010, Zhu, 2013], or on a whole sheet of 8 folded paper (single sheet) [Qi, 2012]. Due to the limitations of SMA materials in handling, other deformation materials have also been investigated. Some researchers have emphasized the importance of making deformable paper crafts at home, and suggested using microwaves to heat paper for shape-changing [Yasu, 2012]. To that end, the combination of paper and plastic was also explored. By using 3D printers to add Polylactide (PLA) to the surface of the paper, these novel and easy-to-manufacture paper actuators offer reversible deformation (bending) [Wang, 2018]. Using the swelling and shrinking (shape-changing) of paper, researchers also printed wax on paper to create a water-driven paper robot [Ryu, 2020]. The outputs of these systems are all based on a single sheet of paper. In addition to these movements with embedded materials, paper movements using separate actuators have also been explored. [Oh, 2018] presented different motor-based paper movements, including rotation and linear movement. This system demonstrated the great potential of this technology for education, machines, toys, and dynamic artwork. By adding magnets to the paper and placing it on a base with motors, researchers have also developed low-cost toolkits that allow non-technical designers to see shape-changing more intuitively [Yehoshua, 2021]. These systems are output on multiple sheets of paper. These explorations often emphasize low cost, creativity, and customization. They connect the two fields of handcraft and computing, introducing the concept of computationally-enhanced craft. 5 DESIGN SPACE OF PAPER ROBOTS Building on the relevant work of robot-building kits and paper-based interactions, this section proposes the design space of paper robots (review Figure 1 for a visual summary of the design space dimensions). For the interface shapes, we used the same dimensions of paper-based interaction design space - flat, folded, and 3D paper structures for input, and single and multiple sheets of paper for output. For each individual interaction approach, we explain it with illustrations and text in the following tables (Table 1 and Table 2). In particular, when introducing designs from the robot-building kits into this design space, we made some adjustments to their type and approach. This design space covers the interactions that appear in the paper-based interaction designs and the robot-building kits, helping us to understand the capabilities and limitations of the prior work. We first noticed that prior paper-based interaction designs did not cover all interactions we can find in the robot-building kits. Most paper-based interaction designs have focused on manipulation of the paper itself, and less on sensing with embedded electronics. For example, distance sensing with ultrasonic sensors, which is common in robot-building kits, has not yet been used in paper-based interaction design. In addition to compiling the interactions from the robot-building kits and paper-based interaction design, we propose additional interactions based on our own experience on paper crafts, including inputs such as twist, blow, stack, cut, moisture, AR glasses, and motion capture system, and outputs such as twist, connection, separation, length-changing, and shadow. The categories of these interactions are briefly illustrated in the figure below (Figure 6). Figure 6: Additional interaction methods With this design space being proposed, we can perceive these missing interactions and propose new designs. Most of the robot-building kits use plastic or metal parts, which are expensive and not easily customized. Given the current cardboard robot-building kits available, it is feasible to use paper for structural construction. And with this design space, we see the possibility of using paper instead of traditional plastic parts to complete the interaction, which can further reduce the cost of the robot-building kits, and make them easy to customize. 9 Table 1: Interaction modalities as inputs 10 Table 1 (continued): Interaction modalities as inputs 11 Table 2: Interaction modalities as outputs 12 Table 2 (continued): Interaction modalities as outputs 6 DESIGN EXAMPLES OF PAPER ROBOT-BUILDING KITS Robot-building kits with paper-based interactions provide a balance between the cost, customization, and functionality. With this vision, we designed three paper circuit building kits and two paper robot-building kits. The paper circuit building kits focus on visual feedback from the circuits, while the paper robot-building kits provide more dynamic interactions and enable end-users to customize these units in terms of both digital creation and physical fabrication. In the following, we present the designs of these building kits and evaluate them based on our proposed design space. Figure 7: Bookmark Light Kit (a: printed pattern (left) and built bookmark (right); b: the light is off when the bookmark is on the page; c: and lights up when it's removed from the page) The Bookmark Light Kit (Figure 7) provides materials to make a bookmark that lights up. After completing the circuit as illustrated, the bookmark attaches to the pages of a book like a normal magnetic bookmark, and its circuit will automatically close when it is removed thus becoming a small flashlight. This kit uses manipulation-connection and folded paper structures for the input, and integrated actuator-light 13 with multiple sheets of paper for the output. Although it is folded from a single sheet, its circuit is divided into two parts, one on each side of the folding line. In this case we considered it as multiple sheets. Figure 8: Christmas Tree Kit (a: front side (outside) of the paper; b: back side (inside) of the paper; c: built Christmas tree (bent and standing)) The Christmas Tree Kit (Figure 8) is a Christmas card that can be mailed to others. After applying the copper tape and magnets to the cardstock, users bend it into a cone and then use the attraction of the magnets to hang LEDs on it. Its circuit is more complex than the previous kit, requiring copper tape on both sides of the paper. This kit also uses manipulation-connection as an input, but it is based on a 3D paper structure, as the completion of the circuit relies on the 3D structure after bending. It uses integrated actuator-light as its output. Although its circuits are on the front and back sides of the paper, these circuits cannot be physically separated as in the previous set, in which case we consider the output to be based on a single sheet of paper. Figure 9: Halloween Lantern Kit (a: half-built lantern(flat); b: built lantern (turned off); c: built lantern (turned on)) The third kit we designed omits the magnets and uses a paper structure to open and close the circuit. The Halloween Lantern Kit (Figure 9) provides a pattern of a box lantern with a movable panel that acts as an on/off switch. Once built, the lantern looks like a white cube; upon pressing the movable panel, the LED in the middle of the lantern will light up and the design printed inside the lantern will be displayed. Its inputs are manipulation-pressing and manipulation-pulling based on the 3D paper structure; its outputs are integrated actuator-lights and integrated actuator-shadow based on the single sheet of paper structure. Figure 10: The Escaping Takeout Box (a: flat; b: folded) The Escaping Takeout Box (Figure 10) is a stand-alone paper robot. This paper robot-building kit includes controller circuits, a battery, two vibration motors, an ultrasonic sensor, and a piece of cardstock that folds into the shape of a takeout box. After it is built, the pre-burned program in the controller enables it to detect nearby objects and "escapes" when something (such as a human hand) approaches it. End-users can customize this robot in both physical fabrication and digital creation. This building kit comes with a pre-cut cardstock piece that can be trimmed 14 to any new pattern for physically customizing the robot's structure. The ATMEGA328P microcontroller used in this robot allows for smooth programming and customization by end-users. It is compatible with the Arduino Uno platform and can read and write programs directly, without any additional equipment or chip burners. This robot uses integrated sensor-distance for input with a 3D paper structure, and the output is based on the single sheet of paper, moving by integrated actuator-vibration that changes its center of gravity. Figure 11: Paper Box Robot Kit (from left to right: power box, waving box, spinning box, speaker box, and light box) The Paper Box Robot Kit (Figure 11) [Yang, 2024] uses a modular design that was inspired by modular robots like Topobo [Raffle, 2004] and Cubelets [Schweikardt, 2007]. This robot includes five modules, which are the power box, the waving box, the spinning box, the speaker box, and the light box. Except for the power and spinning boxes, the other three boxes have independent controller circuits. Similar to the Escaping Takeout Box, these controllers offer the capability of being reprogrammed by end-users, and the use of IC sockets makes it easier for users to replace these microcontrollers. Each of these boxes could be considered as an individual paper circuit building kit, and they combine to form a paper robot-building kit. We color-coded the different boxes in order to better distinguish them as they are similar in appearance. As an entire paper robot-building kit, the Paper Box Robot Kit features inputs that are based on 3D paper structure with manipulationconnection. The power box (orange) contains a simple circuit with a 9v battery, two capacitors, and a voltage regulator. The conductive fabric tape on the outside of the box passes through the four sides of the box and then goes inside the box to connect with the two terminals of the circuit output. The waving box (blue) includes the controller circuit and a servo motor held in the middle by a paper structure, and its output is based on multiple sheets of paper with integrated actuator-rotation. The motor connects to the piece of paper on the outside of the box by magnetic coupling ["Engineering360", n.d.], to transmit torque without touching. Magnetic couplings allow the robot to obtain more mechanical motion while being easy to assemble. The spinning box (green) uses a DC motor, and as it requires more power to operate, we added an extra battery, a diode, and a transistor to the circuit. A small opening on the box allows the motor shaft to go straight through the box to the outside, so its output is based on a single sheet of paper with the separated actuator-shape-changing system. It differs t from the waving box in that its output is the motor itself, not on the paper. If we connect the motor's shaft to another box, then its output will be based on multiple sheets of paper. Due to the low torque of the motor in the spinning box, magnetic coupling is not suitable. The speaker box (red) includes the control circuit, an additional resistor, and a buzzer. It uses a single sheet of paper with the integrated actuator-sound as output. The light box (white) includes the control circuit, three resistors, and an RGB LED. This box also comes with an outer shell, the inside of which can be printed or painted with different patterns, and the patterns become visible when the shell is set on the lighted box. In addition to using 15 manipulation-connection, the combination of the shell and the light box also use manipulation-stacking as input. The output of the light box is based on integrated actuators-lights with the single sheet structure when used alone, and integrated actuator-shadow with multiple sheets when used with a shell. 7 DISCUSSION AND CONCLUSION Based on a review of 60 relevant works, we proposed a design space for paper robots and presented five building kits that were designed using elements from this design space. Our workshops confirmed the public's interest in building robots out of paper. Our design space can help researchers and designers better understand the state of the art in paper-based interaction design, and we can address the gaps between paperbased interactions and robot-building kit designs by filling in the empty slots of the design space. The building kits presented in this paper serve as examples of how elements in the design space could be combined and applied to paper robots. The design space reveals that most current paper-based interactions consist of manipulation as inputs with integrated actuators for outputs. Based on these interactions that have already been developed, we can build paper robots with different combinations of them. With the design space, there are also opportunities to build paper robots by combining more types of inputs and outputs. However, for paper-based interactions with separate sensors and separate actuators, how to combine these electronic components with paper remains a key issue for researchers. The current design space for paper robots has certain limitations, such as the lack of analysis on controllers and other logic units. In our future work, more dimensions of this design space will be developed to better support complex interaction designs. Another question to ponder is whether there should be a limit to one input and one output in a paper robot. Although the Paper Box Robot Kit provided multiple output units, participants in the workshop preferred to connect one box at a time to the power unit; however, for the Escaping Takeout Box, participants did expect more feedback than just vibration/movement after a certain amount of time spent with it. In addition, the exploration of the appearance of paper robots should also be taken into consideration, in other words, appearance and size should also be included in the design space in the future. Another dimension that needs to be developed is the fabrication/building technique, and this is also an issue that must be addressed. In discussing the accessibility of paper robot-building kits, we would like the fabrication techniques used to be household-accessible and not requiring any special environment or equipment. Accessibility was the main reason we chose the paper structures and circuit designs that are full of straight lines. While using copper tape is an affordable option, during the workshop, we noted that some participants were unable to use copper tape proficiently, especially children and others with fine motor impairments. The use of conductive paint, on the other hand, would have been more costly, which was not in line with the original intent of building robots with paper. As the designs of paper robots become increasingly complex, it is critical that we not only consider their interaction elements, but also investigate the methods used to fabricate them. While copper tape is currently sufficient for building relatively simple circuits, we must explore alternative solutions to make the fabrication of paper circuits easier and more feasible for future designs. By bringing fabrication methods into the design space, we can ensure that paper robots are accessible and feasible. Paper robot-building kits achieve a good balance between affordability, functionality and customization options. Building robots out of paper allows for more creative expressions. People can use different colors, patterns and shapes to make the robot the way they like without worrying about the cost or recycling of the materials. Looking at the current market, the maker community, and the research community, there are already some robots made using cardboard. However, it wouldn't make the most sense to just replace the plastic material of a traditional robot with cardboard and then use plastic parts or metal screws to connect them while keeping the traditional circuitry. We need paper robots that highlight the properties of the paper itself, and by integrating paper-based interaction design, we can achieve this goal and create real paper robots. For future work, we should further explore the paper robot design space and make better use of paper-based interactions for different robot designs. In this paper, we present the design space for paper robots, which brings together insights from traditional robot-building kits and paper-based interaction studies. Our aim is to create a comprehensive framework to explore the potential of paper robots and to foster creativity in this emerging field. We hope that our work will provide guidance for future investigations and stimulate future studies of paper robots. 16 REFERENCE 4M Kidzlabs Mega Hydraulic Arm Robotic Science Kit. Walmart.com. https://www.walmart.com/ip/4M-Kidzlabs-Mega-Hydraulic-Arm-Robotic-ScienceKit/396388274 AlphaBot2 robot building kit for BBC micro:bit (no micro:bit). botnroll.com. https://www.botnroll.com/en/mounting-kits/3523-alphabot2-robot-building-kit-for-bbcmicro-bit-no-micro-bit.html Chibitronics. Chibitronics. https://chibitronics.com/ Colorations® Create Your Own Robot - Kit for 12. Discount School Supply. https://www.discountschoolsupply.com/arts-crafts/arts-crafts-kits/craft-kitsprojects/colorations-create-your-own-robot---kit-for-12/p/33667 Discovery Robotics. Walmart.com. https://www.walmart.com/ip/Discovery-Robotics/341579728 Magnetic Couplings Selection Guide: Types, Features, Applications | Engineering360. https://www.globalspec.com/learnmore/motion_controls/power_transmission_mechanical/magnetic_couplings Amazon.com: Erector by Meccano Super Construction 25-In-1 Motorized Building Set, Steam Education Toy, 638 Parts, For Ages 10+ : Toys & Games. https://www.amazon.com/Meccano-Construction-Motorized-Building-Education/dp/B000GOF5S2 "High-Fivey" the Cardboard Micro:bit Robot : 18 Steps (with Pictures). Instructables. https://www.instructables.com/High-Fivey-the-Cardboard-Microbit-Robot/ Amazon.com: Klutz Lego Gear Bots Science/STEM Activity Kit : Toys & Games. https://www.amazon.com/Klutz-Lego-Gear-Bots/dp/1338603450 LEGO® BOOST | Official LEGO® Shop US. https://www.lego.com/en-us/themes/boost/about LEGO® MINDSTORMS® | Invent a Robot | Official LEGO® Shop US. https://www.lego.com/en-us/themes/mindstorms Take brick building to a new level with LEGO® Super MarioTM. https://www.lego.com/en-us/themes/super-mario/about Makedo. https://www.make.do/ mBot Neo STEM Programmable Robotics Kit | Makeblock. https://store.makeblock.com/products/diy-coding-robot-kits-mbotneo?currency=USD&variant=42817922891992&utm_medium=cpc&utm_source=google&utm_campaign=Google%20Shopping&gclid=CjwKCAiA_vKeBhA dEiwAFb_nraXwT2YLaugSs8S8sUFNKRbd7elE42jax0VgWbD_6yaldKn56vtSaBoCWzQQAvD_BwE Nintendo Labo Toy-Con 01 Variety Kit | Nintendo Switch | Nintendo. Nintendo Labo Toy-Con 01 Variety Kit | Nintendo Switch | Nintendo. https://www.nintendo.com/sg/switch/adfu/index.html Amazon.com: Thames & Kosmos SolarBots: 8-in-1 Solar Robot STEM Experiment Kit | Build 8 Cool Solar-Powered Robots in Minutes | No Batteries Required | Learn About Solar Energy & Technology | Solar Panel Included : Toys & Games. https://www.amazon.com/Thames-Kosmos-SolarBots-Experiment-SolarPowered/dp/B085P361MQ Kids First Coding & Robotics Screen-free Coding Kit & Lessons, K to 2 - Thames & Kosmos. https://store.thamesandkosmos.com/products/coding-and-robotics The Crafty Robot. The Crafty Robot. https://thecraftyrobot.net/ Jumbo Kit > TheOffbits. TheOffbits. https://theoffbits.com/product/offbits-multi-kit-jumbo-kit VEX Robotics | HEXBUG. https://www.hexbug.com/vex Walking Robot. KiwiCo. https://www.kiwico.com/us/store/dp/walking-robot-project-kit/1986 Zivko the Robot. https://shop.elenco.com/consumers/zivko-the-robot.html 로보트리(Robotry). https://robotry.co.kr/ Maribeth Back, Jonathan Cohen, Rich Gold, Steve Harrison, and Scott Minneman. 2001. Listen reader: an electronically augmented paper-based book. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '01), 23-29. https://doi.org/10.1145/365024.365031 S. Sandra Bae, Rishi Vanukuru, Ruhan Yang, Peter Gyory, Ran Zhou, Ellen Yi-Luen Do, and Danielle Albers Szafir. 2022. Cultivating Visualization Literacy for Children Through Curiosity and Play. https://doi.org/10.48550/arXiv.2208.05015 Sandra Bae, Ruhan Yang, Peter Gyory, Julia Uhr, Danielle Albers Szafir, and Ellen Yi-Luen Do. 2021. Touching Information with DIY Paper Charts & AR Markers. 17 In Interaction Design and Children (IDC '21), 433-438. https://doi.org/10.1145/3459990.3465191 Leah Buechley, Sue Hendrix, and Mike Eisenberg. 2009. Paints, paper, and programs: first steps toward the computational sketchbook. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI '09), 9-12. https://doi.org/10.1145/1517664.1517670 Marcelo Coelho, Lyndl Hall, Joanna Berzowska, and Pattie Maes. 2009. Pulp-based computing: a framework for building computers out of paper. In CHI '09 Extended Abstracts on Human Factors in Computing Systems (CHI EA '09), 3527-3528. https://doi.org/10.1145/1520340.1520525 Victor C. Dibia, Maryam Ashoori, Aaron Cox, and Justin D. Weisz. 2017. TJBot: An Open Source DIY Cardboard Robot for Programming Cognitive Systems. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '17), 381-384. https://doi.org/10.1145/3027063.3052965 Susan Freinkel. 2011. Plastic: A Toxic Love Story. Text Publishing Company. Nan-Wei Gong, Jürgen Steimle, Simon Olberding, Steve Hodges, Nicholas Edward Gillian, Yoshihiro Kawahara, and Joseph A. Paradiso. 2014. PrintSense: a versatile sensing technique to support multimodal flexible surface interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14), 1407-1410. https://doi.org/10.1145/2556288.2557173 Raphaël Grasset, Andreas Dünser, and Mark Billinghurst. 2008. Edutainment with a mixed reality book: a visually augmented illustrative childrens' book. In Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology (ACE '08), 292-295. https://doi.org/10.1145/1501750.1501819 Steve Hodges, Nicolas Villar, Nicholas Chen, Tushar Chugh, Jie Qi, Diana Nowacka, and Yoshihiro Kawahara. 2014. Circuit stickers: peel-and-stick construction of interactive electronic prototypes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14), 1743-1746. https://doi.org/10.1145/2556288.2557150 Johnco. 4M - KidzRobotix - Tin Can Robot. Johnco. http://www.johncoproductions.com/products/4m-kidzrobotix-tin-can-robot Johnco. 4M - Sci:Bits - Box Robot. Johnco. http://www.johncoproductions.com/products/4m-sci-bits-box-robot Johnco. 4M - Techcraft - Paper Circuit Science. Johnco. http://www.johncoproductions.com/products/4m-techcraft-sound-light-kit Fredéric Kaplan and Patrick Jermann. 2010. PaperComp 2010: first international workshop on paper computing. In Proceedings of the 12th ACM international conference adjunct papers on Ubiquitous computing - Adjunct (UbiComp '10 Adjunct), 507-510. https://doi.org/10.1145/1864431.1864500 Kunihiro Kato, Kaori Ikematsu, Yuki Igarashi, and Yoshihiro Kawahara. 2022. Paper-Woven Circuits: Fabrication Approach for Papercraft-based Electronic Devices. In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '22), 1-11. https://doi.org/10.1145/3490149.3502253 Kunihiro Kato, Kazuya Saito, and Yoshihiro Kawahara. 2019. OrigamiSpeaker: Handcrafted Paper Speaker with Silver Nano-Particle Ink. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA '19), 1-6. https://doi.org/10.1145/3290607.3312872 Nicholas A. Knouf. 2017. Felted Paper Circuits Using Joomchi. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (TEI '17), 443-450. https://doi.org/10.1145/3024969.3025071 Mya Landers, Anwar Elhadad, Maryam Rezaie, and Seokheun Choi. 2022. Integrated Papertronic Techniques: Highly Customizable Resistor, Supercapacitor, and Transistor Circuitry on a Single Sheet of Paper. ACS Applied Materials & Interfaces 14, 40: 45658-45668. https://doi.org/10.1021/acsami.2c13503 Chunyuan Liao, François Guimbretière, and Ken Hinckley. 2005. PapierCraft: a command system for interactive paper. In Proceedings of the 18th annual ACM symposium on User interface software and technology (UIST '05), 241-244. https://doi.org/10.1145/1095034.1095074 Wendy E. Mackay, Guillaume Pothier, Catherine Letondal, Kaare Bøegh, and Hans Erik Sørensen. 2002. The missing link: augmenting biology laboratory notebooks. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST '02), 41-50. https://doi.org/10.1145/571985.571992 Hyunjoo Oh, Sherry Hsi, Michael Eisenberg, and Mark D. Gross. 2018. Paper mechatronics: present and future. In Proceedings of the 17th ACM Conference on Interaction Design and Children (IDC '18), 389-395. https://doi.org/10.1145/3202185.3202761 Hyunjoo Oh, Tung D. Ta, Ryo Suzuki, Mark D. Gross, Yoshihiro Kawahara, and Lining Yao. 2018. PEP (3D Printed Electronic Papercrafts): An Integrated Approach for 3D Sculpting Paper-Based Electronic Devices. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 1-12. https://doi.org/10.1145/3173574.3174015 Jie Qi and Leah Buechley. 2010. Electronic popables: exploring paper-based computing through an interactive pop-up book. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI '10), 121-128. https://doi.org/10.1145/1709886.1709909 Jie Qi and Leah Buechley. 2012. Animating paper using shape memory alloys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 749-752. https://doi.org/10.1145/2207676.2207783 Hayes Solos Raffle, Amanda J. Parkes, and Hiroshi Ishii. 2004. Topobo: a constructive assembly system with kinetic memory. In Proceedings of the 2004 conference on Human factors in computing systems - CHI '04, 647-654. https://doi.org/10.1145/985692.985774 18 Shwetha Rajaram and Michael Nebeling. 2022. Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences. In CHI Conference on Human Factors in Computing Systems (CHI '22), 1-16. https://doi.org/10.1145/3491102.3517486 M. Resnick, F. Martin, R. Sargent, and B. Silverman. 1996. Programmable Bricks: Toys to think with. IBM Systems Journal 35, 3.4: 443-452. https://doi.org/10.1147/sj.353.0443 Jihyun Ryu, Maedeh Mohammadifar, Mehdi Tahernia, Ha-ill Chun, Yang Gao, and Seokheun Choi. 2020. Paper Robotics: Self-Folding, Gripping, and Locomotion. Advanced Materials Technologies 5, 4: 1901054. https://doi.org/10.1002/admt.201901054 Greg Saul, Cheng Xu, and Mark D. Gross. 2010. Interactive paper devices: end-user design & fabrication. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI '10), 205-212. https://doi.org/10.1145/1709886.1709924 Eric Schweikardt. 2007. Modular robotics as tools for design. In Proceedings of the 6th ACM SIGCHI conference on Creativity & cognition (C&C '07), 298. https://doi.org/10.1145/1254960.1255034 Kohei Tsuji and Akira Wakita. 2011. Anabiosis: an interactive pictorial art based on polychrome paper computing. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology - ACE '11, 1. https://doi.org/10.1145/2071423.2071521 Guanyun Wang, Tingyu Cheng, Youngwook Do, Humphrey Yang, Ye Tao, Jianzhe Gu, Byoungkwon An, and Lining Yao. 2018. Printed Paper Actuator: A Low-cost Reversible Actuation and Sensing Method for Shape Changing Interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18), 1-12. https://doi.org/10.1145/3173574.3174143 Pierre Wellner. 1991. The DigitalDesk calculator: tangible manipulation on a desk top display. In Proceedings of the 4th annual ACM symposium on User interface software and technology (UIST '91), 27-33. https://doi.org/10.1145/120782.120785 Thomas Wrensch and Michael Eisenberg. 1998. The programmable hinge: toward computationally enhanced crafts. In Proceedings of the 11th annual ACM symposium on User interface software and technology (UIST '98), 89-96. https://doi.org/10.1145/288392.288577 Peta Wyeth and Gordon Wyeth. 2001. Electronic Blocks: Tangible Programming Elements for Preschoolers. In Proceedings of the Eighth IFIP TC13 Conference on Human-Computer Interaction (INTERACT. Ruhan Yang and Ellen Yi-Luen Do. 2024. PaBo Bot: Paper Box Robots for Everyone. In Companion of the 2024 ACM/IEEE International Conference on HumanRobot Interaction (HRI '24). Association for Computing Machinery, New York, NY, USA, 1158-1162. https://doiorg.colorado.idm.oclc.org/10.1145/3610978.3640696 Kentaro Yasu and Masahiko Inami. 2012. POPAPY: Instant Paper Craft Made Up in a Microwave Oven. In Advances in Computer Entertainment, Anton Nijholt, Teresa Romão and Dennis Reidsma (eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 406-420. https://doi.org/10.1007/978-3-642-34292-9_29 Iddo Yehoshua Wald and Oren Zuckerman. 2021. Magnetform: a Shape-change Display Toolkit for Material-oriented Designers. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, 1-14. https://doi.org/10.1145/3430524.3446066 Clement Zheng, HyunJoo Oh, Laura Devendorf, and Ellen Yi-Luen Do. 2019. Sensing Kirigami. In Proceedings of the 2019 on Designing Interactive Systems Conference (DIS '19), 921-934. https://doi.org/10.1145/3322276.3323689 Clement Zheng, Peter Gyory, and Ellen Yi-Luen Do. 2020. Tangible Interfaces with Printed Paper Markers. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. Association for Computing Machinery, New York, NY, USA, 909-923. https://doi.org/10.1145/3357236.3395578 Kening Zhu and Shengdong Zhao. 2013. AutoGami: a low-cost rapid prototyping toolkit for automated movable paper craft. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), 661-670. https://doi.org/10.1145/2470654.2470748
2510.14916
EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN Abstract. In many applications, one seeks to approximate integration against a positive measure of interest by a positive discrete measure: a numerical quad- rature rule with positive weights. One common desired discretization property is moment preservation over a finite dimensional function space, e.g., bounded- degree polynomials. Carath´eodory’s theorem asserts that if there is any finitely supported quadrature rule with more nodes than the dimension of the given function space, one can form a smaller (and hence more efficient) positive, nested, quadrature rule that preserves the moments of the original rule. We describe an efficient streaming procedure for Carath´eodory-Steinitz pruning, a numerical procedure that implements Carath´eodory’s theorem for this measure compression. The new algorithm makes use of Givens rotations and on-demand storage of arrays to successfully prune very large rules whose storage complexity only depends on the dimension of the function space. This approach improves on a naive implementation of Carath´eodory-Steinitz prun- ing whose runtime and storage complexity are quadratic and linear, respec- tively, in the size of the original measure. We additionally prove mathematical stability properties of our method with respect to a set of admissible, total- variation perturbations of the original measure. Our method is compared to two alternate approaches with larger storage requirements: non-negative least squares and linear programming, and we demonstrate comparable runtimes, with improved stability and storage robustness. Finally, we demonstrate prac- tical usage of this algorithm to generate quadrature for discontinous Galerkin finite element simulations on cut-cell meshes. 1. Introduction Efficient and accurate quadrature (or cubature) rules that approximate integrals are fundamental ingredients in computational science, being used for numerical or statistical integration in the context of solutions of differential equations, uncer- tainty quantification, inference, and scientific machine learning. In these application scenarios one may have access to an acceptably-accurate quadrature rule with pos- itive weights; the challenge is that this quadrature rule might be too large to use in practice because it requires too many function evaluations. To ameliorate this situ- ation, one can consider using this starting quadrature rule to identify a quadrature rule with many fewer nodes that retains desirable properties, in particular retains both positivity and accuracy, where the latter is quantified by exact integration of specified moments. The core algorithm we consider, Carath´eodory-Steinitz pruning (CSP), is one strategy that identifies a quadrature rule that is nested with respect to the original (hence, is a “pruned” version because nodes are removed) [22]. The CSP algorithm has been particularly popular for its clear and easy implementation, and has seen applications in contexts requiring high-dimensional quadrature over general domains [2, 3, 7, 11, 12, 13, 19, 26, 27]. 1 arXiv:2510.14916v1 [math.NA] 16 Oct 2025 2 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN However, a primary challenge with the CSP algorithm is computational cost. If an M-point positive quadrature rule is pruned to an N-point positive quadra- ture rule subject to N moment constraints, then a naive implementation requires a cumulative O((M −N)MN 2) computational complexity and O(MN) storage complexity. In several practical use cases of interest, M ≫N, which makes both the storage and complexity demands of a naive CSP algorithm onerous. Our contributions in this paper are the following two major advances: First, we devise a compute- and storage-efficient version of CSP, which makes the per-step computational complexity independent of M and improves overall storage require- ments to O(N 2) when the algorithm is used in streaming contexts for pruning a size- M positive quadrature rule down to N nodes. Our storage-efficient, “streaming” version of CSP, the SCSP algorithm, is given in Algorithm 2. A further augmentation of this algorithm, the GSCSP procedure (“Givens SCSP”), is an efficient procedure for computing cokernel vectors. The GSCSP algorithm requires only O(N 2) complexity per iteration for a cumulative O((M −N)N 2) + O(N 3) computational complexity. This efficiency is gained by exercising Givens rotations for updating cokernel vec- tors of a matrix. The GSCSP algorithm is Algorithm 2 with the augmentation in Algorithm 3. Second, we provide a new stability guarantee for the SCSP and GSCSP algorithms: By considering any particular quadrature rule as a (discrete) measure, we show that these procedures are mathematically stable in the total variation distance on measures. When the SCSP and GSCSP algorithms are mappings that take as input positive measures with large finite support to output positive measures with smaller support, then both algorithms are locally Lipschitz (and in particular continuous) with respect to the total variation distance on both input and the output. See Theorem 4.1. In the numerical results presented in Section 5, we demonstrate that the GSCSP algorithm can successfully prune very large rules with one billion points and com- pare the computational efficiency of GSCSP to competing algorithms, in particular, a non-negative least squares (NNLS) formulation and a linear programming (LP) formulation. We also provide supporting evidence for the total variation stability guarantee for SCSP and GSCSP, and show that the stability properties of this algo- rithm are more favorable than the stability properties of the alternative NNLS and LP algorithms. We demonstrate the potential of our new algorithm by generating nontrivial quadrature on two-dimensional cut-cell finite element mesh geometries. The GSCSP and other related “pruning” algorithms are implemented in the open- source software package CaratheodoryPruning.jl. 2. Background We use the notation N := {1, 2, . . . , } and N0 := {0} ∪N. For N ∈N, we let [N] = {1, . . . , N}. Lowercase/uppercase boldface letters are vectors/matrices, respectively, e.g., x is a vector and X is a matrix. If A ∈RM×N with S ⊂[M] and T ⊂[N], then we use the notation, AS∗∈R|S|×N, A∗T ∈RM×|T |, AST ∈R|S|×|T |, to slice A producing, respectively, the S-indexed rows of A, the T-indexed columns of A, and the submatrix formed by rows indexed S and columns indexed T. Throughout, we will consider S, T, and any other subsets of indices as ordered EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES3 sets (e.g., sequences) so that, e.g., the first row of AS∗is the row of A correspond- ing to the first index in the ordered set S. Similarly, given a vector b ∈RM, we use the notation bS ∈R|S| to slice b, producing the ordered, S-indexed, elements of b. If v and w are vectors of the same size, then v ≥w means that the inequality holds component-wise. Unless otherwise noted, we will denote the two-norm of a vector as ∥w∥= ∥w∥2 = √ wT w. We denote the nonnegative reals by R+ and the positive reals by R++. Given a set of points P = {p1, p2, . . . , pM} ⊂RN, we say that a point p ∈RN lies in the conic hull of P, denoted p ⊂cone(P), if there exist {wm}m∈[M] ⊂R+ such that p = X m∈[M] wmpm. 2.1. Positive quadrature rules: Tchakaloff’s theorem. Let (X, M, µ) be a measure space, µ is a positive measure, e.g., µ a probability measure, and let V be an N-dimensional subspace of functions in L1 µ(X) spanned by basis elements vj: V := span{v1, . . . , vN}, vj : X →R. (1) Our main goal is to construct a positive quadrature rule, i.e., a set of nodes and weights, X = {xq}q∈[Q] ⊂X and {wq}q∈[Q] ⊂R++, such that, Z X v(x)dµ(x) = X q∈[Q] wqv(xq), ∀v ∈V, (2) where we assume that the basis vj is continuous at X so that v(xq) is well-defined for v ∈V . The above procedure is sometimes called measure compression because µ with possibly infinite support is reduced to a measure supported only on the finite points xq. Tchakaloff’s Theorem states that this compression is possible for polynomial integrands under very general scenarios. Theorem 2.1 (Tchakaloff’s Theorem, [1, Theorem 1]). Fix k ∈N0 and let V be the space of degree-k polynomials over the d-dimensional domain X ⊂Rd. Assume µ is positive over X with finite moments up to degree m, i.e., R X Qd j=1 |xj|αjdµ(x) < ∞ for all α = (α1, . . . , αd) ∈Nd 0 satisfying ∥α∥1 ≤m. Then, there exists a Q-point quadrature rule such that (2) holds, with Q ≤dim(V ). The general result above builds on a series of historical results [6, 7, 20, 25]. In general, the bound Q = N = dim(V ) is sharp, but Q < N is attainable in some special cases, see Appendix B for an example. The central problem statement of this paper is that we seek to computationally realize Tchakaloff measure compression for general (non-polynomial) subspaces V but where µ is a finitely-supported discrete measure. Hence our discussion and analysis moving forward will move beyond polyno- mial subspaces; we will focus on computationally realizing Theorem 2.1 with Q = dim(V ). To do so we will first assume that some quadrature rule with more than dim(V ) nodes is available that meets the accuracy requirements (2). Equivalently, we make the fairly strong assumption that the initial measure µ is a finitely- supported (discrete) measure (or can be approximated sufficiently well by a finitely- supported measure), and seek to compress this measure subject to a V -moment matching condition. 4 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN 2.2. Finitely supported measures. In many scenarios one is able to construct a positive quadrature rule with M ≫N points; another way to state this is that there is a measure µM supported on M points, i.e., µ ≈µM, dµM(x) = X m∈[M] wmδxm, M < ∞, (3) where δx is the Dirac mass centered at x and {xm}m∈[M] ⊂X and {wm}m∈[M] ⊂ R++ are the nodes and weights for µM respectively. If M ≤N, we already have a Tchakaloff-realizing quadrature rule, so we assume without loss of generality that M > N. In this case we have that µM, defined by its nodes and weights, has certain moments of V . While we cannot directly appeal to Tchakaloff’s theorem (because V may contain non-polynomial functions), we can state an essentially similar result. With a fixed N-dimensional subspace V with basis {vj}N j=1, we have, ηn := Z X vn(x)dµM(x) = X m∈[M] wmvn(xm) ∈R, n ∈[N]. (4) These mild assumptions are enough to articulate a Tchakaloff-like result. Theorem 2.2. Let (µM, V ) be as described above with finite moments as defined in (4). Then (2) holds with Q ≤N = dim(V ) where the Q quadrature nodes are a subset of supp(µM) = {xm}m∈[M]. The above result is not new and is essentially well-known. See, e.g, related statements in [19, 22, 26], although we failed to find an identical formulation in existing literature. In fact, in Theorem 2.2 and all that follows, we may take X as an arbitrary (possibly infinite-dimensional) metric space, substantially relaxing our original X ⊂Rd assumption. Theorem 2.2 and Theorem 2.1 both have uses: Theo- rem 2.2 applies to general non-polynomial subspaces V whereas Theorem 2.1 does not; Theorem 2.1 applies to measures µ with infinite support, whereas Theorem 2.2 does not. One standard proof of Theorem 2.2 reveals a popular algorithm that makes the result constructive; this proof relies on a minor variant of Carath´eodory’s theorem in convex geometry. Theorem 2.3 (Carath´eodory’s theorem, conic version [8]). Let P ⊂RN be a finite set of points in RN with |P| > N. If p ∈cone(P), then there exist a subset S ⊂P with |S| ≤N such that p ∈cone(S). Remark 2.1. The more traditional phrasing of Carath´eodory’s Theorem that con- siders the stronger notion of convex combinations yields the looser conclusion |S| ≤N + 1. To see how this applies to our situation, we provide a direct, simple proof that reveals a computational implementation. Like Theorem 2.2 itself, neither this proof nor the algorithm are new. 2.3. The Carath´eodory-Steinitz “pruning” construction. In this section we review one simple constructive proof of both Theorem 2.2 and Theorem 2.3 reveal- ing an algorithm. This algorithm has recently seen considerable use [11, 12, 19, 26]. We attribute this algorithm originally to Steinitz [22], and will hence refer to the following naive algorithm as the Carath´eodory-Steinitz pruning (CSP) algorithm. EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES5 If M ≤N, then Theorem 2.2 is trivially proven, so without loss we assume M > N. The core idea is the simple observation that the moment conditions (4) can be written through linear algebra: V T w = η, V =      v(x1)T v(x2)T ... v(xM)T     ∈RM×N, (5) where w, v(xm), and η are v(xm) := (v1(xm), . . . , vN(xm))T ∈RN, m ∈[M], (6) w := (w1, . . . , wM)T ∈RM ++, η := (η1, . . . , ηN)T ∈RN, with ηn, n ∈[N], defined in (4). If M > N, then V T ∈RN×M has a non-trivial kernel, so there is some kernel vector, say n ̸= 0, such that, V T (w −cn) = η, ∀c ∈R. This kernel vector can be used to construct a size-(M −1) quadrature rule by augmenting w. We first partition [M] into sets where n is positive, negative, and 0, S± :=  m ∈[M] ± nm > 0 , S0 :=  m ∈[M] nm = 0 , [M] = S+ ∪S−∪S0. Because n ̸= 0, it is not possible for both S+ and S−to be empty. We now define smallest-magnitude constants c that ensure w −cn has (at least) one zero component: m± = argmin m∈S± wm nm , c± = wm± nm± , (7) where when S± = ∅we assign c± = ±∞. With this construction, c−< 0 < c+, and, c ∈(c−, c+) ⇐⇒V T (w −cn) = η and w −cn > 0, c = c± ⇐⇒V T (w −cn) = η, w −cn ≥0, and (w −cn)m± = 0, where in the second line we assume that both c± are finite; at least one of them must be finite since S+ ∪S−is non-empty. Hence, choosing either c = c+ or c = c−, setting w ←w −cn, and then removing row m± (and all other zeroed rows) from both w and V , constructs an (at most) (M −1)-point rule with w ≥0 satisfying V T w = η. The procedure is visualized in Figure 1. This process can be repeated while V has a nontrivial cokernel, which is generically until V is square, corresponding to an (at most) Q = N-point rule, completing the proofs of both Theorem 2.2 and Theorem 2.3. The sign σ ∈{+, −} that identifies cσ must be algorithmically chosen. In general, this choice is given by σ =    +, S−= ∅ −, S+ = ∅ SigSelect(V , w, n), otherwise. (8) 6 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN 1 2 3 4 5 0.0 0.5 1.0 1.5 2.0 2.5 w 1 2 3 4 5 −1.0 −0.5 0.0 0.5 1.0 n 1 2 3 4 5 0.0 0.5 1.0 1.5 2.0 2.5 w −c +n 1 2 3 4 5 0.0 0.5 1.0 1.5 2.0 2.5 w −c −n Figure 1. Visual depiction of the two possible pruning choices for given weights and kernel vector. From left to right: visualizing w, n, w −c+n, and w −c−n. One example of the function SigSelect would be the simple rule, SigSelect = argmin σ∈{+,−} |cσ| =⇒ m = argmin m∈S+∪S− wm |nm|, c = wm nm , (9) which simply chooses + or −based on which choice corresponds to a minimum- norm perturbation of w. In general, this choice could depend on V , w, and n. Pseudocode for the CSP procedure is given in Algorithm 1. Algorithm 1 CSP: Carath´eodory-Steinitz pruning Input: V ∈RM×N, w ∈RM ++ Output: S with |S| ≤N, wS ∈R|S| ++ 1: S = [M] 2: while |S| > N do 3: Compute n ∈ker(V S∗ T ). ▷O(|S|N 2) 4: Identify S± and compute c± in (7) using S±, wS, n. 5: Choose σ ∈{+, −} as in (8) ▷SigSelect, e.g., as in (9) 6: Set wS ←wS −cσn, P =  s ∈S ws = 0 7: S ←S\P 8: end while Remark 2.2. One can continue the while loop in Algorithm 1 with |S| ≤N so long as V T has a non-trivial kernel. This would yield a rule with |S| < N points. However, if a positive quadrature rule of size |S| < N does exist, there is no guarantee that Algorithm 1 finds this rule, and instead it can terminate with N nodes. A direct implementation of Algorithm 1 requires O(MN) storage, largely to ac- cess the full original matrix V . The computational complexity is O M(M −N)N 2 ≲O(MN 3 + M 2N 2), since the dominant cost is identification of the kernel vector n at each step. Note that the most expensive step is when |S| = M and that the algorithm terminates in a maximum of (M −N) steps. In contrast, the main algorithmic innovation of this paper is a procedure that accomplishes the same result as the CSP algorithm but requires only O(N 2) storage and O((M −N)N 2) + O(N 3) ≲O(MN 2 + N 3) complexity. In particular, the new algorithm has a storage complexity independent of M and a computational complexity that is linear in M, which is of considerable benefit in the realistic M ≫N setting. EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES7 2.4. Alternative algorithms. We describe two alternatives to CSP that have also enjoyed popularity due to their computational convenience [11, 12, 19, 26]. The first alternative method employs non-negative least squares (NNLS), and numerically solves the quadratic programming problem, argmin v∈RM + V T v −η 2 . (10) In order to accomplish pruning, one hopes that v is N-sparse. The explicit opti- mization formulation above does not suggest why such sparse solutions should be obtained, but practical algorithms to solve this problem implicitly enforce sparsity [4, 17], and in generic situations numerically solving (10) indeed produces N-sparse solutions and achieves zero objective. A second alternative is through linear programming. First, observe that any solution v ∈RM to the linear moment constrained problem (5) with the desired non-negativity constraints is given by, W := n v ∈RM + V T v = η o (11a) =  w + Kz ∈RM + z arbitrary , (11b) where K is a(ny) matrix whose range is the cokernel of V , and w is a(ny) set of M weights that matches moments. Hence, the feasible set W of weight vectors v is in a polytope in RM of dimension dim(coker(V )) ≥M −N. The extreme points of this polytope correspond to at least M −N active constraints in RM + , i.e., at least M −N zero weights. Hence, one way to identify a vector of quadrature weights with at most N entries is to identify one point in ex(W), the set of extreme points of W, which can be accomplished through linear programming: For some c ∈RM, solving, min v cT v subject to v ∈W, (12) generically produces an N-sparse solution. “Generically” means, e.g., that if c has random components that are drawn iid from a standard normal distribution, then with probability 1 the solution to (12) has at most N non-zero entries. Note that (12) does not require K through definition (11a) of W, but having knowledge of K converts (12) from an M-dimensional optimization to a (M −N)-dimensional one on the variables z through definition (11b) of W. A similar linear programming formulation is used in the integration of parameter-dependent functions in [28]. 3. Kernel vector computations through Givens rotations We present the main algorithmic novelty of this paper in this section, which is an efficient procedure to accomplish line 3 in Algorithm 1, i.e., to compute cokernel vectors repeatedly for progressively row-pruned matrices V . One essential idea is that computing cokernel vectors is equivalent to computing vectors orthogonal to the range, and the latter is accomplished through the QR decomposition of a row rank-deficient matrix. For an M × N matrix V with M > N and rank N: V = QR = [Q1 Q2] R, Q1 ∈RM×N, Q2 ∈RM×(M−N), R ∈RM×N, where Q has orthonormal columns and R is upper triangular. The matrix Q2 above and the matrix K in (11b) have the same range; the difference is that Q2 has orthonormal columns. A(ny) nontrivial vector in the range of Q2 is a kernel 8 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN vector of V T , equivalently is a cokernel vector of V . Hence, a full QR decomposition of V , having complexity O(MN 2), accomplishes identification of a cokernel vector. 3.1. The SCSP algorithm: O(N 2) storage. One simple modification of the above approach is motivated by observing that it’s wasteful to compute M −N kernel vectors (all of Q2) when only 1 is needed. One remedy is to customize a QR decomposition routine of the full matrix V so that it terminates early by computing only a single kernel vector; such a procedure still requires storage complexity that depends on M. An alternative and more efficient approach is to compute a single cokernel vector for a slicing of V . If S ⊂[M] with |S| > N is any row subset, then consider the full QR decompositions of the S-row sketched matrix, which requires O(|S|N 2) effort: V S∗= eQ eR = h eQ1 eQ2 i eR, eQ1 ∈R|S|×N, eQ2 ∈R|S|×(|S|−N), eR ∈R|S|×N. We observe that if we start with an M vector of zeros, and insert into entries S any nontrivial |S|-vector from the range of eQ2, then this constructed vector is in the cokernel of V . Hence, we’ve constructed a cokernel vector of V requiring only O(|S|N) storage. If |S| = N + 1, this reduces the storage requirement to O(N 2). A straightforward extension of the above idea to an iterative version of a CSP algorithm would order the elements in [M] in any way, say encoded as a vector Σ ∈[M]M whose values are a permutation of the elements of [M], and for some fixed k independent of M and N (chosen small for reduced runtime and storage complexity; we later focus on k = 1), initialize S as the first N + k elements of this ordering and then repeatedly prune one index, and then add another. We denote this streaming variant of Carath´eodory-Steinitz pruning the SCSP algorithm, shown in Algorithm 2. Algorithm 2 SCSP: Streaming Carath´eodory-Steinitz pruning Input: V ∈RM×N, w ∈RM + , k ∈N, Σ ∈[M]M Output: S with |S| ≤N, wS ∈R|S| + 1: S = Σ[N+k], pop first N + k indices from Σ. 2: while Σ non-empty or |S| > N do 3: Compute n ∈ker(V S∗ T ) ▷O((N + k)N 2) 4: Identify S± and compute c± in (7) using S±, wS, n. 5: Choose σ ∈{+, −} as in (8) ▷SigSelect, e.g., as in (9) 6: Set wS ←wS −cσn, let P =  q ∈S wq = 0 . 7: S ←S\P, pop first min(|P|, |Σ|) elements of Σ and add to S. 8: end while The SCSP algorithm now requires only O((N +k)N) storage, since only N +k < M rows of the full matrix V and full initial weight vector w are stored at a time. The computational complexity of the SCSP algorithm is O((M −N)(N + k)N 2) because we expend O((N + k)N 2) effort to compute a cokernel vector of V a total of M −N times. Moving forward, we will assume fixed k in which case the storage complexity is O(N 2) and the computational complexity is O((M −N)N 3). We reduce the complexity’s polynomial order on N by 1 in the next section. EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES9 3.2. The GSCSP algorithm: O(N 2) per-iteration complexity. The computa- tional bottleneck in a straightforward implementation of Algorithm 2 is the O(N 3) complexity of line 3 that computes a cokernel vector of V . At the first iteration, this cost is necessary, but at subsequent iterations the current iteration’s matrix V differs from the previous iteration’s by simply a single row; we can therefore employ low-rank modifications of the previous iterate’s QR decomposition to generate the QR decomposition of the current iterate. More precisely, we first downdate the previous iterate’s QR decomposition by removing a row from V , and then update the QR decomposition by adding a row to V . Efficient O(N 2) implementations of these procedures through Givens rotations are described in [14]; we summarize the procedure here. Consider V ∈R(N+k)×N corresponding to the previous iterate, and that V has the full QR decomposition V = QR. We let G denote a generic Givens rotation of size N + k. To efficiently downdate the QR decomposition by removing row irem, one seeks to transform row irem of Q to the vector ±eirem by building N + k −1 Givens rotations that zero out elements of this row: V = QR = (QGN+k−1 · · · G1)(G1 T · · · GN+k−1 T R) =   | Q1 0 Q2 | – 0T – ±1 – 0T – | Q3 0 Q4 |     R1 – ± vT – R2   =   Q1R1 + Q2R2 — vT — Q3R1 + Q4R2  , (13) where column irem also equals ±eirem because the product of unitary matrices (Q and Givens rotations) is also unitary. The vector vT coincides with row irem of V , V {irem}∗. Then, letting T = [N + k]\{irem}, by removing row irem from the above expression, we have, V T ∗= eQ eR = Q1 Q2 Q3 Q4  R1 R2  = Q1R1 + Q2R2 Q3R1 + Q4R2  , where eQ is unitary and eR remains upper triangular through proper ordering of the Givens rotations. Hence, (13) uses O(N 2) complexity to remove row irem from a QR decomposition. See Appendix C for pseudocode. The second step is to now to replace v in (13) with a new row vector, say evT . Note that (13) with v replaced with ev is in a QR product form, but R is not upper triangular because the row irem is dense. Again, we exercise Givens rotations to rectify this, first by zeroing out the first irem −1 entries of ev, followed by ensuring the subdiagonal of R vanishes: (14)   V 1 – evT – V 2  = QR =   | Q1 0 Q2 | – 0T – ±1 – 0T – | Q3 0 Q4 |   GT N · · · GT 1 G1 · · · GN   R1 – evT – R2   . 10 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN The update procedure (14) requires N Givens rotations for a complexity of O(N 2). Therefore, this downupdate-update procedure for fixed k requires O(N 2) to com- pute a new kernel vector from the previous one. A formal pseudocode of the downdate-update procedure is presented in Algorithm 3. The GSCSP algorithm (“Givens SCSP”) is the augmentation of the SCSP algorithm by (i) retaining a dense (N + k) × N QR factorization throughout the process, (ii) using the downdate-update procedure described by (13) and (14) (implemented in Algorithm 3) on line 7 when updating the index set S, (iii) implementing line 3 by simply slicing one of the trailing k columns from the stored Q matrix. The GSCSP algorithm is the proposed algorithm in this paper, which accomplishes Carath´eodory-Steinitz pruning with per-iteration O(N 2) complexity and storage. A summary of complexity and storage for all three algorithms discussed, assuming fixed k, is presented in Table 1. Complexity Storage CSP: Algorithm 1 MN 2 MN SCSP: Algorithm 2 MN 2 N 2 GSCSP(∗): Algorithms 2 and 3 N 2 N 2 Table 1. Per-iteration (M, N)-asymptotic complexity of three al- gorithms presented in this paper. M is the support size of µM, N is the number of moments preserved, and k is the fixed number of per-iteration kernel vectors used in the streaming algorithms. In general each algorithm requires M −N iterations to complete. (∗): The first iteration of the GSCSP algorithm requires O(N 3) complex- ity to compute a dense QR factorization of an (N +k)×N matrix. 4. Stability under measure perturbations One numerical consideration is the effect of small perturbations of the origi- nal measure, µM, on the resulting pruned quadrature rule. Consider the polygon W identified in (11b). Small perturbations of the weights w correspond to small perturbations of W. The addition of new nodes with small weights also contin- uously changes W since this is equivalent to considering nodes with zero weights and perturbing them to slightly positive weights. The pruned quadrature rule is necessarily an extreme point of W because it corresponds to at least M −N active constraints (zero quadrature weights). Because continuous deformations of W also continuously deform ex(W), the stability of the pruned quadrature rule with re- spect to small perturbations of the input quadrature rule is conceptually expected. Of course, the algorithmic way in which an element of ex(W) is identified may have different stability properties. We demonstrate explicitly in this section that the SCSP algorithm retains conti- nuity of the pruned quadrature rule under small enough perturbations of the input quadrature rule. This result immediately applies to the GSCSP algorithm since this algorithm is simply a computationally efficient version of SCSP. Instead of speak- ing in terms of quadrature rules, we will speak in terms of (discrete) measures. EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES 11 Consider the set of finite and finitely supported discrete signed measures on X, P =   ν ν = X m∈[M] amδxm, M ∈N, am ∈R and xm ∈X ∀m ∈[M]   . (15) The set P+ denotes the subset of P that are non-negative measures: P+ =   ν = X m∈[M] amδxm ∈P am ≥0 ∀m ∈[M]   . (16) We compare two elements, α, β ∈P+, using a variant of the total variation distance: dTV(α, β) = |α −β| |α| + |β|, (17) where |γ| denotes the ℓ1 norm of the vector of weights of γ ∈P: γ = X m∈[M] dmδxm ∈P =⇒ |γ| = X m∈[M] |dm|. If α and β are both probability measures, then dTV in (17) reduces to the standard definition of total variation distance on discrete probability measures, which is 1 2 times the ℓ1 norm difference between their mass functions. 4.1. Assumptions. We require some assumptions on µM and the types of permis- sible measure perturbations. We first discuss a condition that allows us to ensure that the cokernel vectors used by the SCSP algorithm are well-behaved. Definition 4.1. The N-dimensional subspace V is a Chebyshev system with respect to µM if the Vandermonde-like matrix V in (5) satisfies det(V S∗) ̸= 0 for every S ⊂[M] with |S| = N. For example, if V is the subspace of degree-(N −1) univariate polynomials, then it’s always a Chebyshev system for any set supp(µM) with at least N distinct points. If V is a subspace of multivariate polynomials and µM has its nodes drawn randomly with respect to some Lebesgue density function, then with probability one V is a Chebyshev system with respect to µM. The Chebyshev system property will enable us to assert uniqueness of cokernel vectors for submatrices of V . The ordering of the nodes in the measure µM also plays a role in stability, and motivates the types of permissible perturbations to µM. Recall that the SCSP algorithm starts from the M × N Vandermonde-like matrix V that is given as input. We first observe that the SCSP algorithm is not stable with respect to permutations of the rows of V . This can be conceptually understood by noting that permuting the rows of V would correspond to a change in the sequence of cokernel vectors n that is selected on line 3 of Algorithm 2. Hence, it is unreasonable to expect that the same pruned quadrature rule would result compared to the unpermuted case. From this observation, we recognize that it’s not enough to discuss perturbations of a given measure µM under the total variation distance: we must also consider such perturbations subject to conditions that retain the ordering of the elements in supp(µM). In particular, we require enough assumptions so that the sequence of chosen cokernel vectors in the SCSP algorithm remains unchanged 12 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN under perturbations. We will consider an ordering of the set supp(µM) ⊂X; we denote this ordering as Σ: Σ ∈Π(supp(µM)), Π(Y ) = Sym(Y ) = {collection of permutations of Y } . Formally, we require the following assumptions on the measure µM and the hyper- parameters of the SCSP algorithm. Assumption 4.1. Suppose the SCSP algorithm is run on (V, µM, Σ), for some Σ ∈ Π(supp(µM)). We assume that: • V is a Chebyshev system with respect to µM. • With V fixed, then running the SCSP algorithm for any (µM, Σ) uses the same basis vn(·) as in (6). • k = 1, where k is the integer input to Algorithm 2. • The SigSelect function is chosen as in (9). • The minimization problem (9) has a unique solution at every iteration. Given (V, µM, Σ) that satisfies Assumption 4.1, note that running the SCSP al- gorithm generates a unique output measure ν with unique size-N support supp(ν). The uniqueness stems from the fact that Assumption 4.1 guarantees unique behav- ior of the algorithm: • Prescribing Σ implies that the full Vandermonde-like matrix V is unique. • k = 1 implies that a cokernel vector from an (N + 1) × N matrix, V S∗is computed at every step. • V being a Chebyshev system implies that rank(V S∗) = N since any N ×N submatrix must have full rank. • The above two properties imply that the cokernel vector n at each iteration is unique up to multiplicative scaling. • Choosing SigSelect as in (9), with the corresponding minimization prob- lem having a unique solution, implies that the choice of pruned quadrature node is uniquely determined at every iteration. From this observation, we let S0 denote the unique size-N subset of [M] corre- sponding to non-zero weights when the SCSP algorithm terminates (i.e., S0 is the size-N subset S output by Algorithm 2 upon termination). Finally, we introduce the following set of admissible perturbations of µM. Definition 4.2. Let (V, µM, Σ) satisfy Assumption 4.1 and fix τ > 0. Let S0 denote the size-N subset of [M] with positive weights after applying the SCSP algorithm. We define the set of admissible perturbations to µM as the set of measures and corresponding permutations on their supports, (eµ, eΣ) for eΣ ∈Π(supp(eµ)), as, Pτ(µM, Σ) := n (eµ, eΣ) eµ ∈P+, eΣ[M] = Σ, supp(eµ)\supp(µM) ⊂Xτ o , (18) where Xτ = Xτ(V ) is defined as, Xτ := ( x ∈X sup v∈V \{0} v(x) ∥v∥L1(X) ≤1 τ ) . (19) The set of valid measure perturbations therefore corresponds to measures that are positive, whose first ordered M support points match the same ordered M support points of the original measure, and whose support lies in the set Xτ ⊆ X. The introduction of τ is a technical assumption; the precise value of τ is not EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES 13 conceptually important for theory, and it may be taken as an arbitrarily small, positive number. In particular, if the basis elements vj(·) are all bounded over X, then there is a τ0 > 0 such that for any τ ∈(0, τ0], we have Xτ = X. Informally, Xτ exists to disallow nodal locations where v(x) for arbitrary v ∈V has unbounded value. 4.2. Stability results. Our result on the stability of the SCSP algorithm is the following. Theorem 4.1 (SCSP and GSCSP stability). Let (V, µM, Σ) be given that satisfy As- sumption 4.1, and let ν = SCSP(µM, V, Σ) be the output of the SCSP algorithm. For any fixed τ > 0, define Pτ(µM, Σ) as in Definition 4.2. Then the SCSP algo- rithm is locally Lipschitz (in particular continuous) with respect to the total vari- ation distance in a dTV-neighborhood of Pτ(µM, Σ) around (µM, Σ). I.e., there are positive constants δ0 = δ0(µM, Σ, τ) and C = C(µM, Σ, τ) such that for any (eµM, eΣ) ∈Pτ(µM, Σ) satisfying dTV(µM, eµM) < δ0, then dTV(ν, eν) ≤C dTV(µM, eµM), where ν = SCSP(µM, V, Σ) and eν = SCSP(eµM, V, eΣ). See Appendix A for the proof of this statement. The main message is that the SCSP (and hence also GSCSP) algorithm is robust to small enough perturbations (even perturbations that add nodes), provided those perturbations don’t change the ordering of the original nodes. A similar stability argument can be proven for the NNLS algorithm, Algorithm 4 or NNLS, defined in Appendix D. The main difference is that NNLS is agnostic to ordering of the original nodes, but is not robust to adding new nodes. In particular, fixing µM ∈P+, we define the following set of perturbations formed by simply modifying the existing weights: PNNLS(µM) :=  µ ∈P+ supp(µ) = supp(µM) . Similar to the proof of stability for the SCSP algorithm, we require the following assumptions on the measure µM and the hyperparameters of the NNLS algorithm. Assumption 4.2. Suppose the NNLS algorithm is run on (V, µM). We assume that: • With V fixed, then running the NNLS algorithm for any (µM, Σ) uses the same basis vn(·) as in (6). • The maximizations have unique solutions for all iterations (specifically lines 4 and 8 of Algorithm 4). • For all iterations, the intermediate least squares solutions have full density, i.e., for a realized index set P, the least squares solution on those indices has exactly |P| nonzero elements. • The output of the algorithm has exactly N nonzero elements. Under Assumption 4.2, the NNLS algorithm is robust to dTV perturbations in PNNLS. Theorem 4.2 (NNLS stability). Fix V ⊂L1(X), and let µM ∈P+ be a given mea- sure, with ν = NNLS(µM, V ) be the output of Algorithm 4 satisfying Assumption 4.2. Then the NNLS algorithm is locally Lipschitz (in particular continuous) with respect to the total variation distance in a dTV-neighborhood of PNNLS(µM) around µM. 14 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN I.e., there are positive constants δ0 = δ0(µM) and C = C(µM) such that for any eµM ∈PNNLS(µM) satisfying dTV(µM, eµM) < δ0, then dTV(ν, eν) ≤C dTV(µM, eµM), where eν = NNLS(eµM, V ). See Appendix D for the proof. Note that NNLS algorithm is agnostic to ordering of the input nodes, in contrast to SCSP and GSCSP. However, the set of admissible NNLS perturbations PNNLS is considerably more restrictive than the admissible SCSP perturbations Pτ, since the latter allows adding new nodes whereas the former does not. From the restriction that we disallow adding nodes in NNLS, a hypothesis is that NNLS is not robust to adding new nodes. Our numerical results confirm this. 5. Numerical Results We investigate the efficacy of the GSCSP algorithm in this section. All experi- ments are performed in Julia version 1.11.5. We compare the following algorithms: GSCSP The proposed algorithm of this manuscript: Algorithm 2 with the efficient Givens up/downdating described in Section 3.2 with k = 1 and SigSelect as in (9). Our implementation is in the package CaratheodoryPruning.jl. NNLS The non-negative least squares procedure described in Section 2.4. For im- plementation we use the package NonNegLeastSquares.jl (alg=:nnls), which makes use of a version of the Lawson-Hanson algorithm with House- holder reflections to solve intermediate least squares problems [17]. LP The linear programming approach described in Section 2.4 and in (12). Unless otherwise stated, the vector c in (12) is formed with uniform random entries between 0 and 1. The implementation we use is the package JuMP.jl [18] with the HiGHs solver. 5.1. Computational complexity. We verify computational complexity in this section, in particular that GSCSP (or even just SCSP) requires linear complexity in M. For a given (M, N), we generate V , w as having independent and identically distributed (iid) entries uniformly between 0 and 1. Figure 2 illustrates runtime comparisons of GSCSP, LP, and NNLS approaches with fixed values of N and increas- ing M. The results demonstrate that all methods have linear runtime complexity in M and that in these regimes the NNLS procedure is the fastest followed by GSCSP, and then LP. Increasing N (Figure 2, right) seems to have the effect of reducing the gap in runtime between GSCSP and LP. 5.2. Quadrature on manufactured domains. We present two sets of examples that demonstrate the flexibility of this approach in generating positive quadrature rules, and in particular in compressing very large quadrature rules. In all the ex- amples of this section, we use the GSCSP algorithm to generate pruned quadrature. We will take M = 109. Let X ⊂Rd be a d-dimensional compact domain; for visu- alization purposes we will focus on d = 2, 3. We consider the uniform probability measure µ on X, and we generate a random measure µM by sampling M iid points xm from µ (by rejection sampling), and assign uniform weights wm = 1 M . Hence, while µM that we consider is random, we expect only small perturbations due to this randomness because the large M = 109 suggests we are well within the asymptotic regime of probabilistic concentration. In this large-M regime, the alternative LP EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES 15 M 23 24 25 26 27 28 29 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 Time (s) 2 −16 2 −14 2 −12 2 −10 2 −8 2 −6 2 −4 2 −2 20 22 24 Random Matrix Runtime Comparison with N=8 GSCSP LP NNLS Slope 1 M 28 29 210 211 212 213 214 215 Time (s) 2 −4 2 −2 20 22 24 Random Matrix Runtime Comparison with N=256 GSCSP LP NNLS Slope 1 Figure 2. Runtime comparison of GSCSP pruning procedure to the LP and NNLS approaches for N = 8 (left) and N = 256 (right) varying M. Each scatter point refers to a mean over 20 trials. and NNLS algorithms are simply infeasible to use due to computational storage re- quirements. With M = 109 and N ≈70, it would take ≈560GB of memory to store the dense Vandermonde matrix in double precision. Let x = (x(1), . . . , x(d)) ∈Rd and α = (α1, . . . , αd) ∈Nd 0. We will use the following standard multi-index set definitions for r ≥0: AHC,r :=  α ∈Nd 0 ∥log(α + 1)∥1 ≤log(r + 1) , Ap,r := n α ∈Nd 0 ∥α∥p ≤r o , ATD,r := A1,r. In all our examples, a basis for V is formed as a collection of d-fold products of univariate functions, where each basis function is constructed from one index α in a multi-index set. For example, each column of the Vandermonde-like matrix V is formed by choosing one α, which defines a basis function ϕ ∈V that is evaluated on {xm}m∈[M]. 5.2.1. Two-dimensional domains, d = 2. Figure 3 illustrates three examples of pruned Monte-Carlo integration rules on irregular shapes X. The subspace V is defined as a hyperbolic cross-type of subspace defined by a multi-index set A. With Hq : R →R, Jq : R →R, and Lq : R →R the univariate Hermite polynomial of degree q, Bessel function of the first kind of order q, and the Legendre polynomial of degree q, our three examples are: X1 : Mickey mouse shape V = span n Hα1(x(1))Hα2(x(2)) α ∈AHC,20 o , X2 : Pumpkin shape V = span n Jα1(x(1))Jα2(x(2)) α ∈A1/3,25 o , X3 : Spiral shape V = span n Lα1(x(1))Lα2(x(2)) α ∈ATD,10 o . The resulting quadrature rules have 70, 70, and 66 points respectively. 5.2.2. Three-dimensional domains, d = 3. The dimension d of the problem has no significant effect on the difficulty or complexity of the GSCSP algorithm. The only change is that generating the elements of V becomes slightly more expensive when basis functions are three-dimensional. We consider a three-dimensional volume 16 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN x −1 0 1 y −1 0 1 Mickey Pruned Quadrature Rule x −1 0 1 y −1 0 1 Pumpkin Pruned Quadrature Rule x −1 0 1 y −1 0 1 Spiral Pruned Quadrature Rule Weights 10 −4 10 −3 10 −2 10 −1 Figure 3. Examples of GSCSP-pruned quadrature rules on various 2D shapes. Left, middle, and right: examples X1, X2, and X3, respectively. shown in Figure 4, which is the volume inside a torus that changes in radius and height as a function of the polar angle in the two-dimensional plane with X4 : Torus shape V = span n (x(1))α1(x(2))α2(x(3))α3 α ∈AHC,11 o , resulting in a pruned quadrature rule with dim(V ) = 74 points. x y z Torus Shape −1 0 1 −1 0 1 −1 0 1 x y Torus Pruned Quadrature Rule Weights 10 −4 10 −3 10 −2 10 −1 −1 0 1 −1 0 1 −1 0 1 Figure 4. Example X4 of a GSCSP-pruned quadrature rule on a 3D domain. 5.3. Stability of pruned quadrature rules. We provide empirical experiments to complement the stability theory provided in Section 4. In particular, we inves- tigate dTV(νM, eνM) when eµM is a small perturbation of µM. We restrict ourselves to the model described in Section 4, where dTV(µM, eµM) is small subject to the constraint that ordering of nodes in eµM is fixed and that added nodes are appended to the existing ordering of the support of µM. Figure 5 illustrates the impacts of three different types of random perturbations on a fixed discrete measure, µM with M = 104. The set X and basis are chosen to be X :  x ∈R2 ∥x∥≤1 V = span n Lα1(x(1))Lα2(x(2)) α ∈AHC,30 o , with dim(V ) = 113. We first compute νCSP, νLP, and νNNLS, by applying the GSCSP, LP, and NNLS al- gorithms to µM respectively. Then, perturbations to achieve various total variation EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES 17 dTV(𝜇M,𝜇̃M) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10− dTV(𝜈,𝜈̃) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10−2 10−1 100 101 Pruned Perturbation Errors dTV(𝜇M,𝜇̃M) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10− dTV(𝜈,𝜈̃) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10−2 10−1 100 101 Pruned Perturbation Errors dTV(𝜇M,𝜇̃M) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10−2 dTV(𝜈,𝜈̃) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10−2 10−1 100 101 Pruned Perturbation Errors GSCSP LP NNLS Figure 5. TV errors after various TV perturbations to a discrete measure µM. distances are applied to µM to form eµM. We then compute eνCSP, eνLP, and eνNNLS, by applying the GSCSP, LP, and NNLS algorithms to eµM respectively. Finally, we compute dTV(νCSP, eνCSP), dTV(νLP, eνLP), and dTV(νNNLS, eνNNLS). The scattered values are the median along with the 0.2 up to the 0.8 quantiles over 20 repetitions. To reduce randomness of the LP algorithm, the vector c used in (12) is kept the same through all simulations with ones appended for appended nodes. This choice of appending ones to c seems to be important to maintaining stability. Figure Figure 6 illustrates the same test as is performed in the bottom row of Figure 5 but with LP methods where c has ones appended versus uniform, (0, 1), random elements appended. It demonstrates that when many new nodes are added, the LP method with random elements appended to c is unstable. In the left panel of Figure 5, no new nodes are added to µM, however, we apply random, mean zero, displacements to the weights of µM (maintaining positivity) to achieve various total variation perturbations. All methods are stable to this type of perturbation up to a certain magnitude displacement. LP was found to be the most stable in this case, followed by NNLS, followed by GSCSP. In the middle panel of Figure 5, the weights of µM are maintained and 10 < M new (randomly sampled) nodes are inserted with uniform small weight to achieve various total- variation distances. In this case, the NNLS algorithm loses stability while the GSCSP algorithm seems to be slightly more stable than LP. Finally, in the right panel of Figure 5, the same type of perturbation is used as in the middle panel, instead, 104 = M nodes are appended. In this case, the NNLS algorithm is completely unstable while the GSCSP algorithm and LP algorithms remain relatively stable. In Figure 7, we visualize the instability of the LP and NNLS methods. We use the same domain and basis as in the other stability test. In this test, we first form µM with M = 105 points. We then prune the result down to ν using the NNLS algorithm. We know from former results that NNLS is not stable with respect to adding weights, so the purpose of using the NNLS algorithm is to form a sparse quadrature rule which is not biased towards either the GSCSP or LP algorithms. We then perturb the measure ν by adding 104 nodes of uniform weight to form eν such that dTV(ν, eν) = 10−9. Finally, we compute eνCSP, eνLP, and eνNNLS, by applying the GSCSP, LP, and NNLS algorithms to eν respectively. The top row of Figure 7 displays the point-wise weight relative errors between the pruned rules and ν. The bottom row of Figure 7 illustrates which nodes are retained through this process and which are not. Figure 7 clearly illustrates that the GSCSP algorithm is the most stable with respect to this type of perturbations. 18 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN dTV(𝜇M,𝜇̃M) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10 dTV(𝜈,𝜈̃) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10−2 10−1 100 101 Pruned Perturbation Errors dTV(𝜇M,𝜇̃M) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10 dTV(𝜈,𝜈̃) 10−10 10−9 10−8 10−7 10−6 10−5 10−4 10−3 10−2 10−1 100 101 Pruned Perturbation Errors LP 1 LP rand Figure 6. Replication of the middle and right subfigures of Fig- ure 5 with two choices of the LP c vector. x −1 0 1 y −1 0 1 GSCSP Pruned Relative Errors Relative Error 10 −11 10 −10 10 −9 10 −8 10 −7 10 −6 10 −5 10 −4 10 −3 10 −2 10 −1 100 x −1 0 1 y −1 0 1 GSCSP Pruned x −1 0 1 y −1 0 1 LP Relative Errors Relative Error 10 −1 100 x −1 0 1 y −1 0 1 LP Pruned x −1 0 1 y −1 0 1 NNLS Relative Errors Relative Error 10 −1 100 x −1 0 1 y −1 0 1 NNLS Pruned In both In unperturbed In perturbed Figure 7. Relative errors and point comparison for pruned per- turbed quadrature rules on 2D circle. Pruning now done by NNLS. 5.4. Application: cut-cell discontinuous Galerkin (DG) methods. We con- clude with an example of Caratheodory-Steinitz pruning applied to the generation of quadrature rules for high order cut-cell discontinuous Galerkin (DG) methods. Carath´eodory-Steinitz pruning was previously used to construct reduced quadra- ture rules for cut-cell DG methods [23] and projection-based reduced order models [21]. These reduced quadrature rules retain positivity while exactly satisfying cer- tain moment conditions related to integration by parts [5], and can be used to construct semi-discretely entropy stable discretizations for nonlinear conservation laws. Here, we utilize reduced quadrature rules constructed using Carath´eodory-Steinitz pruning as described in [23] for high order cut-cell DG formulations of a 2D linear time-dependent advection-diffusion problem: ( ∂u ∂t + ∇· (βu) −ϵ∆u = f, x = (x, y) ∈Ω, u = 0, x ∈∂Ω, EFFICIENT AND ROBUST CARATH´EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES 19 where β(x) = (−y, x)T is a spatially varying advection vector, ϵ = 10−2 is the diffusivity coefficient, f(x) = 1 is a forcing term, and the domain, Ω, is taken to be [−1, 1]2 \ Γ, where Γ is the union of two circles of radius R = 0.4 centered at 1 2(−1, 1) and 1 2(1, −1). On cut cells, the DG solution is often represented using physical frame total degree P polynomials [10, 23, 24]. Volume integrals over cut cells are typically challenging to compute due to the nature of the geometry and the physical-frame approximation space. In this work, we construct quadratures for volume integrals over cut cells which are exact for physical-frame polynomial integrands up to a certain degree. To construct exact quadratures on cut cells, we first approximate cut elements using curved isoparametric subtriangulations. Then, an exact quadrature on a cut cell can be constructed using a composite quadrature rule from simplicial quadra- tures of sufficient degree on each element of the curved subtriangulation. Because of the isoparametric representation of such a subtriangulation, one can show that a physical-frame polynomial integrand of degree K can be exactly integrated using a quadrature rule of degree KP + 2(P −1) on the reference element [23].1 In this work, we take K = 2P −1 such that the product of a degree P polynomial and its derivative are exactly integrated. Thus, the reference quadrature rules we use to construct composite quadratures over cut cells should be exact for polyno- mials of degree 2P 2 + P −2. These can be constructed using tensor products of one-dimensional Gaussian quadrature rules combined with a collapsed coordinate mapping [16]. For this problem, this construction requires M = ⌈2P 2+P −2 2 ⌉2 ∼P 4 nodes in the dense, unpruned quadrature rule. After mapping these to the physical domain, pruning is performed preserving the N = P(2P −1) ∼2P 2 moments of the total degree set of degree K = 2P −1 bivariate polynomials. For a degree P = 7 approximation, this implies the reference quadrature rule must be exact for an extraordinarily high polynomial degree of 103, which results in a reference quad- rature rule with M = 8427 nodes. After pruning, we are left with a size N = 105 point quadrature rule. Figure 8 shows the domain with pruned cut-cell quadrature points overlaid, as well as a comparison of the pruned and original un-pruned quad- rature rule with P = 7. We note that, for the linear advection-diffusion problem in this paper, a positive moment-preserving quadrature rule is not strictly necessary to guarantee stability [10, 24]. However, the use of Carath´eodory-Steinitz pruning does ensure positive-definiteness of the mass matrix (via positivity) and high order accuracy (via exact satisfaction of moment conditions). Figure 9 shows snapshots of a degree P = 7 solution of the advection-diffusion equation. The advective portion is computed using a standard DG weak formulation with an upwind flux [15], and the diffusive contribution is discretized using a BR- 1 viscous discretization [9]. Zero inflow conditions are imposed on the advective discretization, while zero Dirichlet boundary conditions are imposed everywhere for the viscous discretization. 1In this example, surface integrals are exactly computed using standard one dimensional Gauss- ian quadrature rules of sufficient degree, but could also be pruned using a similar Carath´eodory- Steinitz procedure. 20 FILIP BˇEL´IK, JESSE CHAN, AND AKIL NARAYAN Figure 8. The cut domain for the advection-diffusion problem (left) and pruned quadrature nodes compared with un-pruned quadrature nodes on a single cut cell (right). Figure 9. Snapshots of the degree P = 7 cut-cell DG solution to the advection-diffusion problem at times t ≈0.551 (left), t ≈ 1.7755 (center), and t = 3 (right) with color limits of (0, 0.5), (0, 1.5), and (0, 2.25) respectively. 6. Conclusion We have proposed a new, computationally storage- and complexity-efficient algorithm, the Givens streaming version of the Carath´eodory- Steinitz algorithm (GSCSP). We have provided mathematical stability in the total variation distance for this algorithm, and have numerically investigated the proce- dure and its theoretical stability on several test cases, including for pruning billion- point quadrature rules, and for generating non-standard quadrature rules in finite element simulations on non-trivial geometries. Compared to popular alternatives, the GSCSP algorithm is competitively stable and efficient, and requires considerably less memory. Reproducibility of computational results. The GSCSP and other related pruning algorithms were implemented in the open-source software package CaratheodoryPruning.jl developed by the authors. Additionally, code for repli- cating the figures can be found at https://github.com/fbelik/CaratheodoryFigures. Acknowledgments. FB and AN were partially supported by FA9550-23-1- 0749. JC acknowledges support from National Science Foundation under award DMS-1943186. REFERENCES 21 References [1] C. Bayer and J. Teichmann. “The proof of Tchakaloff’s Theorem”. en. In: Proceedings of the American Mathematical Society 134.10 (2006), pp. 3035– 3040. issn: 0002-9939, 1088-6826. doi: 10.1090/S0002-9939-06-08249-9. [2] L. M. M. van den Bos, B. Koren, and R. P. Dwight. “Non-intrusive uncertainty quantification using reduced cubature rules”. In: Journal of Computational Physics 332 (2017), pp. 418–445. issn: 0021-9991. doi: 10.1016/j.jcp. 2016.12.011. [3] L. van den Bos, B. Sanderse, W. Bierbooms, and G. van Bussel. “Generating Nested Quadrature Rules with Positive Weights based on Arbitrary Sam- ple Sets”. In: SIAM/ASA Journal on Uncertainty Quantification 8.1 (2020), pp. 139–169. doi: 10.1137/18M1213373. [4] R. Bro and S. De Jong. “A fast non-negativity-constrained least squares algo- rithm”. In: Journal of Chemometrics: A Journal of the Chemometrics Society 11.5 (1997), pp. 393–401. [5] J. Chan. “Skew-symmetric entropy stable modal discontinuous Galerkin for- mulations”. In: Journal of Scientific Computing 81.1 (2019), pp. 459–485. [6] R. E. Curto and L. A. Fialkow. “A duality proof of Tchakaloff’s theorem”. In: Journal of Mathematical Analysis and Applications 269.2 (2002), pp. 519– 532. issn: 0022-247X. doi: 10.1016/S0022-247X(02)00034-3. [7] P. J. Davis. “A construction of nonnegative approximate quadratures”. In: Mathematics of Computation 21.100 (1967), pp. 578–582. issn: 0025-5718, 1088-6842. doi: 10.1090/S0025-5718-1967-0222534-4. [8] F. Eisenbrand and G. Shmonin. “Carath´eodory bounds for integer cones”. In: Operations Research Letters 34.5 (2006), pp. 564–568. issn: 0167-6377. doi: https://doi.org/10.1016/j.orl.2005.09.008. [9] G. J. Gassner, A. R. Winters, F. J. Hindenlang, and D. A. Kopriva. “The BR1 scheme is stable for the compressible Navier–Stokes equations”. In: Journal of Scientific Computing 77.1 (2018), pp. 154–200. [10] A. Giuliani. “A two-dimensional stabilized discontinuous Galerkin method on curvilinear embedded boundary grids”. In: SIAM Journal on Scientific Computing 44.1 (2022), A389–A415. [11] J. Glaubitz. “Constructing Positive Interpolatory Cubature Formulas”. In: arXiv:2009.11981 [cs, math] (2020). arXiv: 2009.11981. [12] J. Glaubitz. “Stable High Order Quadrature Rules for Scattered Data and General Weight Functions”. In: SIAM Journal on Numerical Analysis 58.4 (2020), pp. 2144–2164. issn: 0036-1429. doi: 10.1137/19M1257901. [13] J. Glaubitz. “Construction and application of provable positive and exact cubature formulas”. In: IMA Journal of Numerical Analysis 43.3 (2023), pp. 1616–1652. issn: 0272-4979. doi: 10.1093/imanum/drac017. [14] G. H. Golub and C. F. V. Loan. Matrix Computations (Johns Hopkins Studies in Mathematical Sciences). 3rd. The Johns Hopkins University Press, 1996. isbn: 0-8018-5414-8. [15] J. S. Hesthaven and T. Warburton. Nodal discontinuous Galerkin methods: algorithms, analysis, and applications. Springer. [16] G. Karniadakis and S. Sherwin. Spectral/hp element methods for computa- tional fluid dynamics. Oxford University Press, USA, 2013. 22 REFERENCES [17] C. L. Lawson and R. J. Hanson. Solving Least Squares Problems. Society for Industrial and Applied Mathematics, 1995. doi: 10.1137/1.9781611971217. eprint: https://epubs.siam.org/doi/pdf/10.1137/1.9781611971217. [18] M. Lubin, O. Dowson, J. Dias Garcia, J. Huchette, B. Legat, and J. P. Vielma. “JuMP 1.0: Recent improvements to a modeling language for mathemati- cal optimization”. In: Mathematical Programming Computation 15 (2023), pp. 581–589. doi: 10.1007/s12532-023-00239-3. [19] F. Piazzon, A. Sommariva, M. Vianello, and M. Vianello. “Caratheodory- Tchakaloff Subsampling”. In: Dolomites Research Notes on Approximation 10.1 (2017). arxiv 1611.02065 [math.NA], pp. 5–14. issn: 20356803. doi: 10. 14658/pupj-drna-2017-1-2. [20] M. Putinar. “A note on Tchakaloff’s Theorem”. In: Proceedings of the Ameri- can Mathematical Society 125.8 (1997), pp. 2409–2414. issn: 0002-9939, 1088- 6826. doi: 10.1090/S0002-9939-97-03862-8. [21] R. Qu, A. Narayan, and J. Chan. “Entropy stable reduced order modeling of nonlinear conservation laws using discontinuous Galerkin methods”. In: arXiv preprint arXiv:2502.09381 (2025). [22] E. Steinitz. “Bedingt konvergente Reihen und konvexe Systeme.” de. In: Jour- nal f¨ur die reine und angewandte Mathematik 1913.143 (1913), pp. 128–176. issn: 1435-5345. doi: 10.1515/crll.1913.143.128. [23] C. G. Taylor and J. Chan. “An Entropy Stable High-Order Discontinuous Galerkin Method on Cut Meshes”. In: arXiv preprint arXiv:2412.13002 (2024). [24] C. G. Taylor, L. C. Wilcox, and J. Chan. “An energy stable high-order cut cell discontinuous Galerkin method with state redistribution for wave propa- gation”. In: Journal of Computational Physics 521 (2025), p. 113528. [25] V. Tchakaloff. “Formules de cubatures m´ecaniques `a coefficients non n´egatifs”. In: Bull. Sci. Math 81.2 (1957), pp. 123–134. [26] M. Tchernychova. “Caratheodory cubature measures”. http://purl.org/dc/dcmitype/Text. University of Oxford, 2016. [27] M. W. Wilson. “A general algorithm for nonnegative quadrature formulas”. In: Mathematics of Computation 23.106 (1969), pp. 253–258. issn: 0025-5718, 1088-6842. doi: 10.1090/S0025-5718-1969-0242374-1. [28] M. Yano and A. T. Patera. “An LP empirical quadrature procedure for re- duced basis treatment of parametrized nonlinear PDEs”. In: Computer Meth- ods in Applied Mechanics and Engineering 344 (2019), pp. 1104–1123. issn: 0045-7825. doi: https://doi.org/10.1016/j.cma.2018.02.028. Appendix A. Proof of Theorem 4.1: Stability of SCSP and GSCSP We recall that we assume the conditions of Assumption 4.1 that guarantee unique behavior of SCSP for an input µ and ordering Σ of its support. The set of valid perturbations to µ is given in Definition 4.2. Before presenting the proof details, we set up notation. The SCSP algorithm goes through a certain number of iterations to prune µM down to a measure with support size at most N. Due to the assumptions articulated in Assumption 4.1, we claim that the algorithm takes exactly M −N iterations, prunes exactly 1 node at every iteration, and that the cokernel vector n computed at every step is unique up to multiplicative constants. To see why, note that at every iteration, line 3 of Algorithm 2 computes a cokernel vector for the matrix V S∗. Assume at any iteration that |S| = N + 1, so that V S∗∈ REFERENCES 23 R(N+1)×N. Because of the assumption that V is a Chebyshev system for µM, then rank(V S∗) = N, so that dim(coker(V S∗)) = 1, and hence the cokernel vector n is unique up to multiplicative scaling. Our assumptions also guarantee that the minimization problem (9) is unique, so that w −cn has exactly one node zeroed out. Thus, the set of zero weights P in line 7 of Algorithm 2 has a single element. Hence, at the next iteration we again start with |S| = N + 1. Through finite induction, we conclude with the claim at the beginning of this paragraph. Hence, at iteration j ∈[M −N] of SCSP operating on µM, we use the following notation to identify unique objects at iteration j: • nj is the kernel vector identified in line 3 of Algorithm 2. • Sj is the size-(N + 1) set of ordered global indices in [M] corresponding to the active weights at iteration j. • mj ∈[N + 1], associated with iteration j of the SCSP algorithm, is the iteration-local index that is zeroed out. • wj ∈RN+1 is the Sj-indexed weight vector at the start of iteration j. • cj ∈R is the constant identified by (9) such that wj −cjnj zeros out one element of wj. There are analogous quantities arising from running SCSP on eµ. We denote these corresponding quantities enj, eSj, emj, ewj, and ecj, respectively. Because the SigSelect function is chosen as in (9) by assumption, then at iteration j of the SCSP algorithm, the index mj and constant cj are chosen by the formulas, mj = argmin m∈S+∪S− wm nm , cj = wmj nmj . (20) We require some quantities defined in terms of the sequence of (unique) kernel vectors: ϵj := min k∈[N+1]\{mj} wj,k − nj,k nj,mj wj,mj > 0, Nj := ∥nj∥1 |nj,mj| < ∞, where ϵj > 0 because by definition of mj, wj,mj nj,mj < wj,k |nj,k| ∀k ∈[N + 1]\{mj}, and Nj < ∞because nj,mj is non-zero. Note in particular that both ϵj and Nj are invariant under multiplicative scaling of nj, and hence are unique numbers. The number ϵj is a scaled version of the optimality gap of the minimization problem (20), and Nj measures the total mass of nj relative to the mass on the pruned index. A final quantity we’ll need is a geometrically growing sequence derived from the Nj numbers: C0 = 1, Cj = (1 + Nj)Cj−1 + 1, j > 0. (21) Proof of Theorem 4.1. We first define δ0. The measure ν = SCSP(µM, V, Σ) is unique, having support points and weights, ν = X ℓ∈[N] uℓδzℓ, {zℓ}ℓ∈[N] ⊂supp(µM), u := min ℓ∈[N] uℓ> 0, (22) where u > 0 because Assumption 4.1 guarantees that each step of the SCSP algo- rithm zeros out exactly one weight. With V ∈RM×N the Vandermonde-like matrix 24 REFERENCES defined in (5), and S0 ⊂[M] the size-N index set that SCSP(µM, V, Σ) identified to prune µM down to ν, then define U := V S0∗D−1, D = diag ∥v1∥L1(X), . . . , ∥vN∥L1(X)  , (23) and note that V S0∗is invertible by the Chebyshev system assumption. Then we define δ0 as, δ0 := min j∈[5] {δj} , δ1 = 1 3, δ2 = 1 3|µM| min j∈[M−N] ϵj Cj , δ3 = uτ 6 √ N|µM|∥U −1∥2 , δ4 = u 6|µM|CM−N , δ5 = |ν| 6|µM| h CM−N + N3/2 τ ∥U −1∥2 i. Now consider any (eµM, eΣ) ∈Pτ(µM, Σ) satisfying δ = dTV(µM, eµM) < δ0. Let f M = |supp(eµ)| ≥M denote the support size of eµ, and let w, ew ∈Rf M denote the weights on these support points. The f M-vector w is formed by padding the original weights w[M] for µM with f M −M zeros. I.e., ew = ew1, . . . , ewM, ewM+1, . . . , ewf M  , w = w1, . . . , wM, 0, . . . , 0 | {z } f M−M entries  . Note that this zero padding of w does not affect the result of ν = SCSP(µM, V, Σ) since after M −N iterations the procedure would simply prune the zero-padded weights because the Chebyshev system assumption ensures that nj,N+1 ̸= 0. With this setup, then dTV(µM, eµM) = δ δ<δ1 =⇒ ∥wT −ewT ∥1 ≤3∥w∥1δ = 3|µM|δ, for any T ⊂[f M]. In particular, f M X q=M+1 | ewq| ≤3|µM|δ, |wj −ewj| ≤3|µM|δ, j ∈[M]. (24) We now analyze eν = SCSP(eµ, V, eΣ), which we break up two parts. The first part considers the first M −N iterations, which operate on w[M] and ew[M] that involve nodes in the shared set supp(µM). The second part of the analysis considers nodes in the set supp(eµM)\supp(µM) that are supported only in eµM. For the first part of the analysis, we consider the first M −N iterations of the SCSP algorithm. At iteration 1 (j = 1) of the SCSP algorithm, we have S1 = eS1, and, ∥w1 −ew1∥1 ≤∥w −ew∥1 < 3∥w∥1δ. Now fix any j ∈[M −N]. We make the inductive hypothesis that at the start of iteration j we have, ∥wj −ewj∥1 ≤3 |µM| δCj−1, Sj = eSj. (25) Then our assumption that δ ≤δ2 implies: ∥wj −ewj∥1 ≤3 |µM| δ2Cj−1 ≤ϵj Cj−1 Cj < ϵj 1 + Nj . REFERENCES 25 I.e., we have |wj,k −ewj,k| ≤ϵj/(1 + Nj) for every k ∈[N + 1]. This implies that for any k ̸= mj, nj,k nj,mj ewj,mj −wj,mj  −( ewj,k −wj,k) ≤ϵj < wj,k − nj,k nj,mj wj,mj. Rearranging the strict inequality between the left- and right-most expressions above yields, ewj,mj nj,mj < ewj,k |nj,k|, for any k ̸= mj. Hence, the perturbed version of the minimization problem (20) identifies the same index, mj, as the unperturbed problem, so that mj = emj, i.e., the same node is chosen for removal in both algorithms. In particular, the values cj = wj,mj/|nj,mj| and ecj = ewj,mj/|nj,mj| are well-defined, and the difference between the corresponding iteration-j pruned weight vectors is, ∥(wj −cjnj) −( ewj −ecjnj)∥1 ≤∥wj −ewj∥1 + wj,mj −ewj,mj ∥nj∥1 |nj,mj| ≤(1 + Nj) ∥wj −ewj∥1 ≤3δ|µM|Cj−1(1 + Nj) In particular, this will guarantee that Sj+1 = eSj+1. This completes the steps on line 6 of Algorithm 2. We must next complete the steps on line 7 of Algorithm 2, which forms new weight vectors for the next iteration, i.e., forms wj+1 and ewj+1 by (i) filling N entries as the N non-zero entries in index locations [N +1]\{mj} of wj −cjnj and ewj −ecjnj, respectively, and (ii) appending the (N + 1)st entry as the entries from w and ew at global index ΣN+j+1. Hence, the difference between these two vectors is, ∥wj+1 −ewj+1∥1 = |wN+1 −ewN+1| + ∥(wj −cjnj) −( ewj −ecjnj)∥1 (24) ≤3|µM|δ + 3δ|µM|Cj−1(1 + Nj) = 3δ|µM|Cj, which completes the proof of (25) for iteration j + 1. By finite induction, we conclude that after M−N iterations have completed, we have pruned weight vectors wM−N+1 and ewM−N+1, which satisfy, ∥wM−N+1 −ewM−N+1∥1 ≤3δ|µM|(1 + NM−N)CM−N−1 ≤3δ|µM|CM−N. (26) For the second part of the analysis, we consider iterations j for j ∈[M −N +1, f M − N]. Through finite induction, we will show that ewj,N+1 is the pruned weight. Our inductive hypothesis for this portion of the analysis is Sj = S0 ∪{j + N} and min q∈[N] ewj,q > u 2, ewj,[N] −ewM−N+1,[N] = j−1 X ℓ=M−N+1 −ewℓ,N+1nℓ,[N] |nℓ,N+1| . (27) Note that for the first iteration, j = M −N +1, Sj = S0∪{j+N} holds because the first M −N iterations of SCSP(eµ, V, eΣ) prune the same indices as SCSP(µM, V, Σ). The second relation of Equation (27) holds because the sum is vacuous. The re- maining relation holds by using δ < δ4 in (26). As in (22), we use (z1, . . . , zN) to denote the N nodes on which ν is supported. For brevity, we use x = xj+N to denote the element of X corresponding to node index j. Note that the matrix V Sj∗∈R(N+1)×N again has a unique cokernel 26 REFERENCES vector because the square submatrix V S0∗is full rank by the Chebyshev system assumption. This unique cokernel vector nj is orthogonal to every column of V Sj∗, which is equivalent to the conditions, nj,N+1vq(x) = − X ℓ∈[N] vq(zℓ)nj,ℓ, q ∈[N]. (28) Concatenating all these equalities for every q ∈[N] yields, nj,N+1v(x) = −(V S0∗)T nj,[N], (29) With D and U as in (23), we rearrange, premultiply both sizes by D−1, and take vector ℓ∞norms to obtain: ∥nj,[N]∥∞ |nj,N+1| = ∥U −T D−1v(xj)∥∞ x∈Xτ ≤ ∥U −T ∥∞ τ ≤ √ N∥U −T ∥2 τ = √ N∥U −1∥2 τ . Hence, we have for any q ∈[N]: |nj,q| |nj,N+1| ewj,N+1 ≤ √ N∥U −1∥2 ewj,N+1 τ (24) ≤3 √ N|µM|∥U −1∥2 τ δ < u1 2 < ewj,q i.e., ewj,N+1 |nj,N+1| < ewj,q |nj,q|, q ∈[N], so that node N + 1, i.e., xj+N, is chosen for removal. Hence, at the next iteration, we have, Sj+1 = (Sj\{j + N}) ∪{j + 1 + N} = S0 ∪{j + 1 + N}. Furthermore, the first N weights at the next iteration are updated as, ewj+1,[N] = ewj,[N] −cjnj,[N] = ewj,[N] −ewj,N+1 nj,[N] |nj,N+1|. (30a) Then for q ∈[N], ewj+1,q = ewj,q −ewj,N+1 nj,q nj,N+1 ≥ewM−N+1,q − √ N∥U −1∥2 τ f M X ℓ=M+1 ewℓ (24) ≥ewM−N+1,q −3|µM|δ √ N∥U −1∥2 1 τ δ<δ3 ≥u −1 2u = 1 2u. (30b) REFERENCES 27 The relations (30) establish the inductive relations (27) for iteration j + 1. Finally, we have established that at the terminal iteration j = f M −N + 1 of SCSP(eµ, V, eΣ), ew f M−N+1,[N] −ewM−N+1,[N] 1 = f M−N X j=M−N+1 ewj,N+1 nj,[N] |nj,N+1| 1 ≤N f M−N X j=M−N+1 ewj,N+1 nj,[N] ∞ |nj,N+1| ≤N 3/2 τ ∥U −1∥2 f M X ℓ=M+1 ewℓ≤δ 3|µM|N 3/2 τ ∥U −1∥2. (31) Combining (31) and (26) with the triangle inequality yields, wM−N+1,[N] −ew f M−N+1,[N] 1 ≤3δ|µM|  CM−N + N 3/2 τ ∥U −1∥2  . Finally, when δ < δ5, then the above implies |eν| = ew f M−N+1,[N] 1 ≥1 2 wM−N+1,[N] 1 = 1 2|ν|. Therefore, dTV(ν, eν) = wM−N+1,[N] −ew f M−N+1,[N] 1 |ν| + |eν| ≤Cδ, C = 2|µM| |ν|  CM−N + N 3/2 τ ∥U −1∥2  . □ Appendix B. Tensorized quadrature attaining Q < N This section provides explicit examples of a Tchakaloff quadrature rules in The- orem 2.1 with Q < N. Hence, while in this paper we consider identifying such rules with Q = N, the example in this section demonstrates that such rules need not be minimal quadrature rules. Fix k ∈N0, d ∈N, and let µ be a product measure on X = Rd, and let V be the subspace of at-most degree-k d-variate polynomials. With µj the coordinate-j marginal measure of µ, assume µj has finite moments up to uni- variate degree k + 1, and let {x(j) q , w(j) q }q∈[p] be the p-point univariate µj-Gaussian quadrature rule, where p :=  k+1 2  . This choice of p ensures exact µj-integration of degree 2p −1 ≥k polynomials. By tensorizing these d different p-point rules, we have a Q = pd-point quadrature rule that exactly µ-integrates all d-variate polynomials of degree at most k. We can now compare Q and N = dim(V ): N =  k + d d  = (k + 1)(d) d! ∼kd d! , Q N =  k+1 2 d (k + 1)(d) d! k≫1 ∼ d! 2d , where (k + 1)(d) denotes the rising factorial/Pochhammer function, (k + 1)(d) = Qd j=1(k + j) . Hence, for large k, d, the above ratio is greater than unity, implying Q > N, so that this is not a Tchakaloff-attaining quadrature. However, direct computation of Q/N when k = 2 shows that this construction achieves Q < N when d < 4. 28 REFERENCES Appendix C. Givens rotation-based downdates and updates This section provides more detailed pseudocode, Algorithm 3, that accomplishes the procedure described in Section 3.2: An O(N 2) procedure that uses Givens rotations to update the full QR decomposition of an (N +k)×N matrix by replacing k rows from the original matrix with a new set of k rows. The GSCSP algorithm is the SCSP procedure in Algorithm 2 augmented by using Algorithm 3 as a subroutine to accomplish line 7 of Algorithm 2. Algorithm 3 Givens Row UpDowndate and Kernel Vector Input: V ∈RM×N, T ⊂[M] with |T| = N + k, Q ∈RN+k×N+k, R ∈RN+k×N, irem ∈T, inew ∈[M] \ T Output: Q ∈RN+k×N+k, R ∈RN+k×N, k ∈RN+k, T 1: jrem ←indexof(T, irem) 2: T ←(T \ {irem}) ∪{inew} 3: for i = N + k down to jrem + 1 do ▷Begin Givens downdate 4: Form Givens rotation G for indices i and i −1 such that (QGT )jrem,i = 0 ▷ O(1) 5: Q ←QGT ▷O(N + k), repeated N + k −jrem times 6: R ←GR ▷O(N), repeated N + k −jrem times 7: end for 8: for i = jrem −1 down to 1 do 9: Form Givens rotation G for indices i and jrem such that (QGT )jrem,i = 0 ▷ O(1) 10: Q ←QGT ▷O(N + k), repeated jrem −1 times 11: R ←GR ▷O(N), repeated jrem −1 times 12: end for ▷End Givens downdate 13: R{jrem}∗←V {inew}∗ ▷Begin Givens update 14: Qjrem,jrem ←+1.0 15: for i = 1 to min(N, jrem −1) do 16: Form Givens rotation G for indices i and jrem such that (GR)jrem,i = 0 ▷ O(1) 17: Q ←QGT ▷O(N + k), repeated min(N, jrem −1) times 18: R ←GR ▷O(N), repeated min(N, jrem −1) times 19: end for 20: for i = jrem to N do 21: Form Givens rotation G for indices i + 1 and i such that (GR)i+1,i = 0 ▷ O(1) 22: Q ←QGT ▷O(N + k), repeated max(0, N −jrem + 1) times 23: R ←GR ▷O(N), repeated max(0, N −jrem + 1) times 24: end for ▷End Givens update 25: k ←M∗{N+1} Appendix D. Stability of NNLS The Lawson-Hansen algorithm, without details of efficient updates and down- dates by Householder reflections, is provided in Algorithm 4 [4, 17]. The algorithm REFERENCES 29 iteratively constructs a more accurate and less-sparse solution, w, by including in- dices with the largest dual and by solving least squares problems. The dual at a given iteration is defined as the gradient of the squared ℓ2 moment error: d = V (η −V T w) = −1 2 ∇w∥V T w −η∥2 2. In the algorithm, an index set P ⊂[M] is gradually built and the final solution satisfies nonnegativity, zero error gradient for indices with positive weight, and positive error gradient for active indices (zero weights). There is an inner loop (starting on line 7 of algorithm 4) is intended to occur infrequently and plays the role of removing indices whose weights are made negative by adding new indices. Algorithm 4 NNLS: Lawson Hansen NNLS Algorithm Input: V ∈RM×N, η = V T w ∈RN Output: P ⊂[M] and w ∈RM + which is a P-sparse vector that solves minw≥0 ∥V T w −η∥2 1: P ←∅, w ←0, s ←0 2: d ←V (η −V T w) = V η 3: while max(d) > 0 do 4: m ←maxindex(d), P ←P ∪{m} 5: sP ←(V P ∗ T )†η 6: Q ←{i ∈P : si ≤0} 7: while |Q| > 0 do 8: irem ←argmini∈Q −wi si−wi 9: P ←P\{irem}, wirem ←0 10: sP ←(V P ∗ T )†η 11: Q ←{i ∈P : si ≤0} 12: end while 13: wP ←sP 14: d ←V (η −V T w) 15: end while Proof of Theorem 4.2. Having provided the rigorous details for the proof of The- orem 4.1, we omit many steps and technical notations and computations that are similar in spirit for this proof. For example, we will only seek to show that ∥u−eu∥2 is small, with u ∈RN the weights for ν and eu ∈RN the weights for eν, omitting the computations that connect these quantities to the total variation distances. We will also provide the argument for a single outer loop iteration of the algorithm instead of providing the meticulous argument that holds for every iteration. We let V ∈RM×N and w ∈RM be the Vandermonde matrix associated with (V, µM). Recall we assume that u is the output of NNLS(V , V T w), and that u ∈ RN. Let S1 ⊂P([M]) (the power set of [M]) be the collection of index sets built by the unperturbed NNLS algorithm at the start of the outer loop (the index sets P that are encountered on line 3 of algorithm 4) . Similarly, let S2 ⊂P([M]) be the set of index sets P observed by the unperturbed NNLS algorithm on line 7 at the start of the inner loop. Finally, let P0 be the final set of indices, satisfying |P0| = N by Assumption 4.2. 30 REFERENCES For any P ∈S1 ∪S2, we define s(P) to be the weight vector corresponding to that index set, given by s(P)P = (V T P ∗)†η and s(P)[M]\P = 0. We similarly define d(P) = V (η −V T P ∗s(P)P ) for any P ∈S1 to be the corresponding dual vector at the iteration corresponding to that value of P. We define wP ∈RM to be the weight vector at the beginning of the outer loop (line 3) corresponding to s(P). We also define Q = Q(P) to be the set defined on line 6. We use tilde’d quantities to correspond to weight vectors and dual vectors in the perturbed problem. Our main effort will seek to ensure that the discrete optimization problems on lines 4 and 8 have the same solutions for the unperturbed and the perturbed prob- lems. To this end, we define optimality gaps δ1(P) and δ2(P), which correspond to the difference between the extremum and the second extremum on lines 4 and 8, respectively. With all of the notation in place, we define ϵ0 = min P ∈S1 max(d(P)) ∥V (I −V P ∗ T (V P ∗ T )†)V T ∥2 , ϵ1 = min P ∈S1 δ1(P)/2 ∥V (I −V P ∗ T (V P ∗ T )†)V T ∥2 , ϵ2 = min P ∈S1∪S2 min |s(P)P | ∥(V P ∗ T )†V T ∥2 , ϵ3 = min P ∈S2 w(P)i −s(P)i 2(1 + ∥(V T P ∗)†V T ∥∞) , ϵ4 = min P ∈S2 (s(P)i −w(P)i)2δ2 (−s(P)i + w(P)i)∥(V T P ∗)†V T ∥∞ , ϵ = min{ϵ0, ϵ1, ϵ2, ϵ3, ϵ4}, C = ∥(V P0∗ T )†V T ∥2. Now assume a perturbed measure with weights ew = w + ∆w ∈RM satisfying ∥ew −w∥< ϵ. For any P ∈S1, ed(P) = V (eη −V T ew) = V (V T ew0 −V P ∗ T ewP ) = V (V T (w0 + ∆w) −V P ∗ T (wP + (V P ∗ T )†V T ∆w)) = V (η −V T w + V T (∆w) −V P ∗ T ((V P ∗ T )†V T ∆w)) = d(P) + V (I −V P ∗ T (V P ∗ T )†)V T ∆w, which implies, ∥ed(P) −d(P)∥2 = ∥V (I −V P ∗ T (V P ∗ T )†)V T ∆w∥2 < ∥V (I −V P ∗ T (V P ∗ T )†)V T ∥2ϵ ≤max(d(P)), and this last strict inequality ensures that max(ed) > 0, so that for any iteration of the unperturbed algorithm when the outer loop is triggered on line 3, the perturbed algorithm also has this condition triggered. Building on the inequality above, we have, ∥ed(P) −d(P)∥∞≤∥ed(P) −d(P)∥2 < ∥V (I −V P ∗ T (V P ∗ T )†)V T ∥2ϵ ≤δ1(P) 2 , REFERENCES 31 for all P ∈S1 ensuring that the maximization problem on line 4 has the same so- lution in both the perturbed and unperturbed algorithms. To ensure Q(P) defined on line 6 is the same in both perturbed and unperturbed algorithms, we compute, ∥es(P)P −s(P)P ∥2 = ∥(V P ∗ T )†eη −s(P)P ∥2 = ∥s(P)P + (V P ∗ T )†V T ∆w −s(P)P ∥2 = ∥(V P ∗ T )†V T ∥2ϵ ϵ≤ϵ2 ≤min |s(P)|, for all P ∈S2 ensuring that no Q(P)-elements of es(P) have differing signs than s(P). Finally, we ensure that the minimization on line 8 identifies the same index in both perturbed and unperturbed cases whenever |Q(P)| ≥2. Note that for all i ∈Q(P) we have w(P)i > 0 and s(P)i ≤0 so s(P)i −w(P)i < 0 and −w(P )i s(P )i−w(P )i ∈ (0, 1). Then, −ew(P)i es(P)i −ew(P)i = −w(P)i −∆wi ((V T P ∗)†eη)i −w(P)i −∆wi = −w(P)i −∆wi ((V T P ∗)†(η + V T ∆w))i −w(P)i −∆wi = −w(P)i −∆wi s(P)i −w(P)i + ((V T P ∗)†V T ∆w)i −∆wi . Then the discrepancy between this and the unperturbed quantity is, −ew(P)i es(P)i −ew(P)i − −w(P)i s(P)i −w(P)i = −∆wis(P)i + w(P)i((V T P ∗)†V T ∆w)i (s(P)i −w(P)i + ((V T P ∗)†V T ∆w)i −∆wi)(s(P)i −w(P)i) ≤ 1 2(s(P)i −w(P)i)2 −∆wis(P)i + w(P)i((V T P ∗)†V T ∆w)i ≤−s(P)i + w(P)i∥(V T P ∗)†V T ∥∞ 2(s(P)i −w(P)i)2 ∥∆w∥∞ < −s(P)i + w(P)i∥(V T P ∗)†V T ∥∞ 2(s(P)i −w(P)i)2 ϵ ϵ≤ϵ4 ≤ δ2 2 , which is half the optimality gap, so that the minimum index obtained in line 8 is unchanged. Finally, due to optimality of the least squares solution with |P0| = N, we have that ed(P0) ≤0, exiting the outer loop at the same time as the unperturbed problem, and the solution satisfies ∥ew −w∥2 = ∥(V P0∗ T )†V T ∆w∥2 ≤∥(V P0∗ T )†V T ∥2∥∆w∥2 = C∥∆w∥2. □
EFFICIENT AND ROBUST CARATH ́EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES FILIP BˇEL ́IK, JESSE CHAN, AND AKIL NARAYAN Abstract. In many applications, one seeks to approximate integration against a positive measure of interest by a positive discrete measure: a numerical quadrature rule with positive weights. One common desired discretization property is moment preservation over a finite dimensional function space, e.g., boundeddegree polynomials. Carath ́eodory's theorem asserts that if there is any finitely supported quadrature rule with more nodes than the dimension of the given function space, one can form a smaller (and hence more efficient) positive, nested, quadrature rule that preserves the moments of the original rule. We describe an efficient streaming procedure for Carath ́eodory-Steinitz pruning, a numerical procedure that implements Carath ́eodory's theorem for this measure compression. The new algorithm makes use of Givens rotations and on-demand storage of arrays to successfully prune very large rules whose storage complexity only depends on the dimension of the function space. This approach improves on a naive implementation of Carath ́eodory-Steinitz pruning whose runtime and storage complexity are quadratic and linear, respectively, in the size of the original measure. We additionally prove mathematical stability properties of our method with respect to a set of admissible, totalvariation perturbations of the original measure. Our method is compared to two alternate approaches with larger storage requirements: non-negative least squares and linear programming, and we demonstrate comparable runtimes, with improved stability and storage robustness. Finally, we demonstrate practical usage of this algorithm to generate quadrature for discontinous Galerkin finite element simulations on cut-cell meshes. 1. Introduction Efficient and accurate quadrature (or cubature) rules that approximate integrals are fundamental ingredients in computational science, being used for numerical or statistical integration in the context of solutions of differential equations, uncertainty quantification, inference, and scientific machine learning. In these application scenarios one may have access to an acceptably-accurate quadrature rule with positive weights; the challenge is that this quadrature rule might be too large to use in practice because it requires too many function evaluations. To ameliorate this situation, one can consider using this starting quadrature rule to identify a quadrature rule with many fewer nodes that retains desirable properties, in particular retains both positivity and accuracy, where the latter is quantified by exact integration of specified moments. The core algorithm we consider, Carath ́eodory-Steinitz pruning (CSP), is one strategy that identifies a quadrature rule that is nested with respect to the original (hence, is a "pruned" version because nodes are removed) [22]. The CSP algorithm has been particularly popular for its clear and easy implementation, and has seen applications in contexts requiring high-dimensional quadrature over general domains [2, 3, 7, 11, 12, 13, 19, 26, 27]. 1 16 Oct 2025 2 FILIP BˇEL ́IK, JESSE CHAN, AND AKIL NARAYAN However, a primary challenge with the CSP algorithm is computational cost. If an M-point positive quadrature rule is pruned to an N-point positive quadrature rule subject to N moment constraints, then a naive implementation requires a cumulative O((M -N)MN 2) computational complexity and O(MN) storage complexity. In several practical use cases of interest, M ≫N, which makes both the storage and complexity demands of a naive CSP algorithm onerous. Our contributions in this paper are the following two major advances: First, we devise a compute- and storage-efficient version of CSP, which makes the per-step computational complexity independent of M and improves overall storage requirements to O(N 2) when the algorithm is used in streaming contexts for pruning a sizeM positive quadrature rule down to N nodes. Our storage-efficient, "streaming" version of CSP, the SCSP algorithm, is given in Algorithm 2. A further augmentation of this algorithm, the GSCSP procedure ("Givens SCSP"), is an efficient procedure for computing cokernel vectors. The GSCSP algorithm requires only O(N 2) complexity per iteration for a cumulative O((M -N)N 2) + O(N 3) computational complexity. This efficiency is gained by exercising Givens rotations for updating cokernel vectors of a matrix. The GSCSP algorithm is Algorithm 2 with the augmentation in Algorithm 3. Second, we provide a new stability guarantee for the SCSP and GSCSP algorithms: By considering any particular quadrature rule as a (discrete) measure, we show that these procedures are mathematically stable in the total variation distance on measures. When the SCSP and GSCSP algorithms are mappings that take as input positive measures with large finite support to output positive measures with smaller support, then both algorithms are locally Lipschitz (and in particular continuous) with respect to the total variation distance on both input and the output. See Theorem 4.1. In the numerical results presented in Section 5, we demonstrate that the GSCSP algorithm can successfully prune very large rules with one billion points and compare the computational efficiency of GSCSP to competing algorithms, in particular, a non-negative least squares (NNLS) formulation and a linear programming (LP) formulation. We also provide supporting evidence for the total variation stability guarantee for SCSP and GSCSP, and show that the stability properties of this algorithm are more favorable than the stability properties of the alternative NNLS and LP algorithms. We demonstrate the potential of our new algorithm by generating nontrivial quadrature on two-dimensional cut-cell finite element mesh geometries. The GSCSP and other related "pruning" algorithms are implemented in the opensource software package CaratheodoryPruning.jl. 2. Background We use the notation N := {1, 2, . . . , } and N0 := {0} ∪N. For N ∈N, we let [N] = {1, . . . , N}. Lowercase/uppercase boldface letters are vectors/matrices, respectively, e.g., x is a vector and X is a matrix. If A ∈RM×N with S ⊂[M] and T ⊂[N], then we use the notation, AS∗∈R|S|×N, A∗T ∈RM×|T |, AST ∈R|S|×|T |, to slice A producing, respectively, the S-indexed rows of A, the T-indexed columns of A, and the submatrix formed by rows indexed S and columns indexed T. Throughout, we will consider S, T, and any other subsets of indices as ordered EFFICIENT AND ROBUST CARATH ́EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES3 sets (e.g., sequences) so that, e.g., the first row of AS∗is the row of A corresponding to the first index in the ordered set S. Similarly, given a vector b ∈RM, we use the notation bS ∈R|S| to slice b, producing the ordered, S-indexed, elements of b. If v and w are vectors of the same size, then v ≥w means that the inequality holds component-wise. Unless otherwise noted, we will denote the two-norm of a vector as ∥w∥= ∥w∥2 = √ wT w. We denote the nonnegative reals by R+ and the positive reals by R++. Given a set of points P = {p1, p2, . . . , pM} ⊂RN, we say that a point p ∈RN lies in the conic hull of P, denoted p ⊂cone(P), if there exist {wm}m∈[M] ⊂R+ such that p = X m∈[M] wmpm. 2.1. Positive quadrature rules: Tchakaloff's theorem. Let (X, M, μ) be a measure space, μ is a positive measure, e.g., μ a probability measure, and let V be an N-dimensional subspace of functions in L1 μ(X) spanned by basis elements vj: V := span{v1, . . . , vN}, vj : X →R. (1) Our main goal is to construct a positive quadrature rule, i.e., a set of nodes and weights, X = {xq}q∈[Q] ⊂X and {wq}q∈[Q] ⊂R++, such that, Z X v(x)dμ(x) = X q∈[Q] wqv(xq), ∀v ∈V, (2) where we assume that the basis vj is continuous at X so that v(xq) is well-defined for v ∈V . The above procedure is sometimes called measure compression because μ with possibly infinite support is reduced to a measure supported only on the finite points xq. Tchakaloff's Theorem states that this compression is possible for polynomial integrands under very general scenarios. Theorem 2.1 (Tchakaloff's Theorem, [1, Theorem 1]). Fix k ∈N0 and let V be the space of degree-k polynomials over the d-dimensional domain X ⊂Rd. Assume μ is positive over X with finite moments up to degree m, i.e., R X Qd j=1 |xj|αjdμ(x) N. In this case we have that μM, defined by its nodes and weights, has certain moments of V . While we cannot directly appeal to Tchakaloff's theorem (because V may contain non-polynomial functions), we can state an essentially similar result. With a fixed N-dimensional subspace V with basis {vj}N j=1, we have, ηn := Z X vn(x)dμM(x) = X m∈[M] wmvn(xm) ∈R, n ∈[N]. (4) These mild assumptions are enough to articulate a Tchakaloff-like result. Theorem 2.2. Let (μM, V ) be as described above with finite moments as defined in (4). Then (2) holds with Q ≤N = dim(V ) where the Q quadrature nodes are a subset of supp(μM) = {xm}m∈[M]. The above result is not new and is essentially well-known. See, e.g, related statements in [19, 22, 26], although we failed to find an identical formulation in existing literature. In fact, in Theorem 2.2 and all that follows, we may take X as an arbitrary (possibly infinite-dimensional) metric space, substantially relaxing our original X ⊂Rd assumption. Theorem 2.2 and Theorem 2.1 both have uses: Theorem 2.2 applies to general non-polynomial subspaces V whereas Theorem 2.1 does not; Theorem 2.1 applies to measures μ with infinite support, whereas Theorem 2.2 does not. One standard proof of Theorem 2.2 reveals a popular algorithm that makes the result constructive; this proof relies on a minor variant of Carath ́eodory's theorem in convex geometry. Theorem 2.3 (Carath ́eodory's theorem, conic version [8]). Let P ⊂RN be a finite set of points in RN with |P| > N. If p ∈cone(P), then there exist a subset S ⊂P with |S| ≤N such that p ∈cone(S). Remark 2.1. The more traditional phrasing of Carath ́eodory's Theorem that considers the stronger notion of convex combinations yields the looser conclusion |S| ≤N + 1. To see how this applies to our situation, we provide a direct, simple proof that reveals a computational implementation. Like Theorem 2.2 itself, neither this proof nor the algorithm are new. 2.3. The Carath ́eodory-Steinitz "pruning" construction. In this section we review one simple constructive proof of both Theorem 2.2 and Theorem 2.3 revealing an algorithm. This algorithm has recently seen considerable use [11, 12, 19, 26]. We attribute this algorithm originally to Steinitz [22], and will hence refer to the following naive algorithm as the Carath ́eodory-Steinitz pruning (CSP) algorithm. EFFICIENT AND ROBUST CARATH ́EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES5 If M ≤N, then Theorem 2.2 is trivially proven, so without loss we assume M > N. The core idea is the simple observation that the moment conditions (4) can be written through linear algebra: V T w = η, V =      v(x1)T v(x2)T ... v(xM)T     ∈RM×N, (5) where w, v(xm), and η are v(xm) := (v1(xm), . . . , vN(xm))T ∈RN, m ∈[M], (6) w := (w1, . . . , wM)T ∈RM ++, η := (η1, . . . , ηN)T ∈RN, with ηn, n ∈[N], defined in (4). If M > N, then V T ∈RN×M has a non-trivial kernel, so there is some kernel vector, say n ̸= 0, such that, V T (w -cn) = η, ∀c ∈R. This kernel vector can be used to construct a size-(M -1) quadrature rule by augmenting w. We first partition [M] into sets where n is positive, negative, and 0, S± := m ∈[M] ± nm > 0 , S0 := m ∈[M] nm = 0 , [M] = S+ ∪S-∪S0. Because n ̸= 0, it is not possible for both S+ and S-to be empty. We now define smallest-magnitude constants c that ensure w -cn has (at least) one zero component: m± = argmin m∈S± wm nm , c± = wm± nm± , (7) where when S± = ∅we assign c± = ±∞. With this construction, c- 0, c = c± ⇐⇒V T (w -cn) = η, w -cn ≥0, and (w -cn)m± = 0, where in the second line we assume that both c± are finite; at least one of them must be finite since S+ ∪S-is non-empty. Hence, choosing either c = c+ or c = c-, setting w ←w -cn, and then removing row m± (and all other zeroed rows) from both w and V , constructs an (at most) (M -1)-point rule with w ≥0 satisfying V T w = η. The procedure is visualized in Figure 1. This process can be repeated while V has a nontrivial cokernel, which is generically until V is square, corresponding to an (at most) Q = N-point rule, completing the proofs of both Theorem 2.2 and Theorem 2.3. The sign σ ∈{+, -} that identifies cσ must be algorithmically chosen. In general, this choice is given by σ =    +, S-= ∅ -, S+ = ∅ SigSelect(V , w, n), otherwise. (8) 6 FILIP BˇEL ́IK, JESSE CHAN, AND AKIL NARAYAN 1 2 3 4 5 0.0 0.5 1.0 1.5 2.0 2.5 w 1 2 3 4 5 -1.0 -0.5 0.0 0.5 1.0 n 1 2 3 4 5 0.0 0.5 1.0 1.5 2.0 2.5 w -c +n 1 2 3 4 5 0.0 0.5 1.0 1.5 2.0 2.5 w -c -n Figure 1. Visual depiction of the two possible pruning choices for given weights and kernel vector. From left to right: visualizing w, n, w -c+n, and w -c-n. One example of the function SigSelect would be the simple rule, SigSelect = argmin σ∈{+,-} |cσ| =⇒ m = argmin m∈S+∪Swm |nm|, c = wm nm , (9) which simply chooses + or -based on which choice corresponds to a minimumnorm perturbation of w. In general, this choice could depend on V , w, and n. Pseudocode for the CSP procedure is given in Algorithm 1. Algorithm 1 CSP: Carath ́eodory-Steinitz pruning Input: V ∈RM×N, w ∈RM ++ Output: S with |S| ≤N, wS ∈R|S| ++ 1: S = [M] 2: while |S| > N do 3: Compute n ∈ker(V S∗ T ). ▷O(|S|N 2) 4: Identify S± and compute c± in (7) using S±, wS, n. 5: Choose σ ∈{+, -} as in (8) ▷SigSelect, e.g., as in (9) 6: Set wS ←wS -cσn, P = s ∈S ws = 0 7: S ←S 8: end while Remark 2.2. One can continue the while loop in Algorithm 1 with |S| ≤N so long as V T has a non-trivial kernel. This would yield a rule with |S| N and rank N: V = QR = [Q1 Q2] R, Q1 ∈RM×N, Q2 ∈RM×(M-N), R ∈RM×N, where Q has orthonormal columns and R is upper triangular. The matrix Q2 above and the matrix K in (11b) have the same range; the difference is that Q2 has orthonormal columns. A(ny) nontrivial vector in the range of Q2 is a kernel 8 FILIP BˇEL ́IK, JESSE CHAN, AND AKIL NARAYAN vector of V T , equivalently is a cokernel vector of V . Hence, a full QR decomposition of V , having complexity O(MN 2), accomplishes identification of a cokernel vector. 3.1. The SCSP algorithm: O(N 2) storage. One simple modification of the above approach is motivated by observing that it's wasteful to compute M -N kernel vectors (all of Q2) when only 1 is needed. One remedy is to customize a QR decomposition routine of the full matrix V so that it terminates early by computing only a single kernel vector; such a procedure still requires storage complexity that depends on M. An alternative and more efficient approach is to compute a single cokernel vector for a slicing of V . If S ⊂[M] with |S| > N is any row subset, then consider the full QR decompositions of the S-row sketched matrix, which requires O(|S|N 2) effort: V S∗= eQ eR = h eQ1 eQ2 i eR, eQ1 ∈R|S|×N, eQ2 ∈R|S|×(|S|-N), eR ∈R|S|×N. We observe that if we start with an M vector of zeros, and insert into entries S any nontrivial |S|-vector from the range of eQ2, then this constructed vector is in the cokernel of V . Hence, we've constructed a cokernel vector of V requiring only O(|S|N) storage. If |S| = N + 1, this reduces the storage requirement to O(N 2). A straightforward extension of the above idea to an iterative version of a CSP algorithm would order the elements in [M] in any way, say encoded as a vector Σ ∈[M]M whose values are a permutation of the elements of [M], and for some fixed k independent of M and N (chosen small for reduced runtime and storage complexity; we later focus on k = 1), initialize S as the first N + k elements of this ordering and then repeatedly prune one index, and then add another. We denote this streaming variant of Carath ́eodory-Steinitz pruning the SCSP algorithm, shown in Algorithm 2. Algorithm 2 SCSP: Streaming Carath ́eodory-Steinitz pruning Input: V ∈RM×N, w ∈RM + , k ∈N, Σ ∈[M]M Output: S with |S| ≤N, wS ∈R|S| + 1: S = Σ[N+k], pop first N + k indices from Σ. 2: while Σ non-empty or |S| > N do 3: Compute n ∈ker(V S∗ T ) ▷O((N + k)N 2) 4: Identify S± and compute c± in (7) using S±, wS, n. 5: Choose σ ∈{+, -} as in (8) ▷SigSelect, e.g., as in (9) 6: Set wS ←wS -cσn, let P = q ∈S wq = 0 . 7: S ←S , pop first min(|P|, |Σ|) elements of Σ and add to S. 8: end while The SCSP algorithm now requires only O((N +k)N) storage, since only N +k 0. Let S0 denote the size-N subset of [M] with positive weights after applying the SCSP algorithm. We define the set of admissible perturbations to μM as the set of measures and corresponding permutations on their supports, (eμ, eΣ) for eΣ ∈Π(supp(eμ)), as, Pτ(μM, Σ) := n (eμ, eΣ) eμ ∈P+, eΣ[M] = Σ, supp(eμ) (μM) ⊂Xτ o , (18) where Xτ = Xτ(V ) is defined as, Xτ := ( x ∈X sup v∈V \{0} v(x) ∥v∥L1(X) ≤1 τ ) . (19) The set of valid measure perturbations therefore corresponds to measures that are positive, whose first ordered M support points match the same ordered M support points of the original measure, and whose support lies in the set Xτ ⊆ X. The introduction of τ is a technical assumption; the precise value of τ is not EFFICIENT AND ROBUST CARATH ́EODORY-STEINITZ PRUNING OF POSITIVE DISCRETE MEASURES 13 conceptually important for theory, and it may be taken as an arbitrarily small, positive number. In particular, if the basis elements vj(·) are all bounded over X, then there is a τ0 > 0 such that for any τ ∈(0, τ0], we have Xτ = X. Informally, Xτ exists to disallow nodal locations where v(x) for arbitrary v ∈V has unbounded value. 4.2. Stability results. Our result on the stability of the SCSP algorithm is the following. Theorem 4.1 (SCSP and GSCSP stability). Let (V, μM, Σ) be given that satisfy Assumption 4.1, and let ν = SCSP(μM, V, Σ) be the output of the SCSP algorithm. For any fixed τ > 0, define Pτ(μM, Σ) as in Definition 4.2. Then the SCSP algorithm is locally Lipschitz (in particular continuous) with respect to the total variation distance in a dTV-neighborhood of Pτ(μM, Σ) around (μM, Σ). I.e., there are positive constants δ0 = δ0(μM, Σ, τ) and C = C(μM, Σ, τ) such that for any (eμM, eΣ) ∈Pτ(μM, Σ) satisfying dTV(μM, eμM) 0, Nj := ∥nj∥1 |nj,mj| 0 because by definition of mj, wj,mj nj,mj 0. (21) Proof of Theorem 4.1. We first define δ0. The measure ν = SCSP(μM, V, Σ) is unique, having support points and weights, ν = X l∈[N] ulδzl, {zl}l∈[N] ⊂supp(μM), u := min l∈[N] ul> 0, (22) where u > 0 because Assumption 4.1 guarantees that each step of the SCSP algorithm zeros out exactly one weight. With V ∈RM×N the Vandermonde-like matrix 24 REFERENCES defined in (5), and S0 ⊂[M] the size-N index set that SCSP(μM, V, Σ) identified to prune μM down to ν, then define U := V S0∗D-1, D = diag ∥v1∥L1(X), . . . , ∥vN∥L1(X) , (23) and note that V S0∗is invertible by the Chebyshev system assumption. Then we define δ0 as, δ0 := min j∈[5] {δj} , δ1 = 1 3, δ2 = 1 3|μM| min j∈[M-N] εj Cj , δ3 = uτ 6 √ N|μM|∥U -1∥2 , δ4 = u 6|μM|CM-N , δ5 = |ν| 6|μM| h CM-N + N3/2 τ ∥U -1∥2 i. Now consider any (eμM, eΣ) ∈Pτ(μM, Σ) satisfying δ = dTV(μM, eμM) u 2, ewj,[N] -ewM-N+1,[N] = j-1 X l=M-N+1 -ewl,N+1nl,[N] |nl,N+1| . (27) Note that for the first iteration, j = M -N +1, Sj = S0∪{j+N} holds because the first M -N iterations of SCSP(eμ, V, eΣ) prune the same indices as SCSP(μM, V, Σ). The second relation of Equation (27) holds because the sum is vacuous. The remaining relation holds by using δ N, so that this is not a Tchakaloff-attaining quadrature. However, direct computation of Q/N when k = 2 shows that this construction achieves Q 0 do 4: m ←maxindex(d), P ←P ∪{m} 5: sP ←(V P ∗ T )†η 6: Q ←{i ∈P : si ≤0} 7: while |Q| > 0 do 8: irem ←argmini∈Q -wi si-wi 9: P ←P\{irem}, wirem ←0 10: sP ←(V P ∗ T )†η 11: Q ←{i ∈P : si ≤0} 12: end while 13: wP ←sP 14: d ←V (η -V T w) 15: end while Proof of Theorem 4.2. Having provided the rigorous details for the proof of Theorem 4.1, we omit many steps and technical notations and computations that are similar in spirit for this proof. For example, we will only seek to show that ∥u-eu∥2 is small, with u ∈RN the weights for ν and eu ∈RN the weights for eν, omitting the computations that connect these quantities to the total variation distances. We will also provide the argument for a single outer loop iteration of the algorithm instead of providing the meticulous argument that holds for every iteration. We let V ∈RM×N and w ∈RM be the Vandermonde matrix associated with (V, μM). Recall we assume that u is the output of NNLS(V , V T w), and that u ∈ RN. Let S1 ⊂P([M]) (the power set of [M]) be the collection of index sets built by the unperturbed NNLS algorithm at the start of the outer loop (the index sets P that are encountered on line 3 of algorithm 4) . Similarly, let S2 ⊂P([M]) be the set of index sets P observed by the unperturbed NNLS algorithm on line 7 at the start of the inner loop. Finally, let P0 be the final set of indices, satisfying |P0| = N by Assumption 4.2. 30 REFERENCES For any P ∈S1 ∪S2, we define s(P) to be the weight vector corresponding to that index set, given by s(P)P = (V T P ∗)†η and s(P)[M] = 0. We similarly define d(P) = V (η -V T P ∗s(P)P ) for any P ∈S1 to be the corresponding dual vector at the iteration corresponding to that value of P. We define wP ∈RM to be the weight vector at the beginning of the outer loop (line 3) corresponding to s(P). We also define Q = Q(P) to be the set defined on line 6. We use tilde'd quantities to correspond to weight vectors and dual vectors in the perturbed problem. Our main effort will seek to ensure that the discrete optimization problems on lines 4 and 8 have the same solutions for the unperturbed and the perturbed problems. To this end, we define optimality gaps δ1(P) and δ2(P), which correspond to the difference between the extremum and the second extremum on lines 4 and 8, respectively. With all of the notation in place, we define ε0 = min P ∈S1 max(d(P)) ∥V (I -V P ∗ T (V P ∗ T )†)V T ∥2 , ε1 = min P ∈S1 δ1(P)/2 ∥V (I -V P ∗ T (V P ∗ T )†)V T ∥2 , ε2 = min P ∈S1∪S2 min |s(P)P | ∥(V P ∗ T )†V T ∥2 , ε3 = min P ∈S2 w(P)i -s(P)i 2(1 + ∥(V T P ∗)†V T ∥∞) , ε4 = min P ∈S2 (s(P)i -w(P)i)2δ2 (-s(P)i + w(P)i)∥(V T P ∗)†V T ∥∞ , ε = min{ε0, ε1, ε2, ε3, ε4}, C = ∥(V P0∗ T )†V T ∥2. Now assume a perturbed measure with weights ew = w + ∆w ∈RM satisfying ∥ew -w∥ 0, so that for any iteration of the unperturbed algorithm when the outer loop is triggered on line 3, the perturbed algorithm also has this condition triggered. Building on the inequality above, we have, ∥ed(P) -d(P)∥∞≤∥ed(P) -d(P)∥2 0 and s(P)i ≤0 so s(P)i -w(P)i < 0 and -w(P )i s(P )i-w(P )i ∈ (0, 1). Then, -ew(P)i es(P)i -ew(P)i = -w(P)i -∆wi ((V T P ∗)†eη)i -w(P)i -∆wi = -w(P)i -∆wi ((V T P ∗)†(η + V T ∆w))i -w(P)i -∆wi = -w(P)i -∆wi s(P)i -w(P)i + ((V T P ∗)†V T ∆w)i -∆wi . Then the discrepancy between this and the unperturbed quantity is, -ew(P)i es(P)i -ew(P)i - -w(P)i s(P)i -w(P)i = -∆wis(P)i + w(P)i((V T P ∗)†V T ∆w)i (s(P)i -w(P)i + ((V T P ∗)†V T ∆w)i -∆wi)(s(P)i -w(P)i) ≤ 1 2(s(P)i -w(P)i)2 -∆wis(P)i + w(P)i((V T P ∗)†V T ∆w)i ≤-s(P)i + w(P)i∥(V T P ∗)†V T ∥∞ 2(s(P)i -w(P)i)2 ∥∆w∥∞ < -s(P)i + w(P)i∥(V T P ∗)†V T ∥∞ 2(s(P)i -w(P)i)2 ε ε≤ε4 ≤ δ2 2 , which is half the optimality gap, so that the minimum index obtained in line 8 is unchanged. Finally, due to optimality of the least squares solution with |P0| = N, we have that ed(P0) ≤0, exiting the outer loop at the same time as the unperturbed problem, and the solution satisfies ∥ew -w∥2 = ∥(V P0∗ T )†V T ∆w∥2 ≤∥(V P0∗ T )†V T ∥2∥∆w∥2 = C∥∆w∥2. □
2510.14909
*Corresponding Author’s Email:*annieliy@mit.edu Proceedings of the Global Public Health Conference, Vol. 8, Issue 1, 2025, pp.17-29 Copyright © 2025 Li Y ISSN 2613-8417 online DOI: https://doi.org/10.17501/26138417.2025.8102 THE IMPACT OF MEDICAID COVERAGE ON MENTAL HEALTH, WHY INSURANCE MAKES PEOPLE HAPPIER IN OHIE: BY SPENDING LESS OR BY SPENDING MORE? Yangyang Li* Department of Electrical Engineering and Computer Science and the Department of Economics, Massachusetts Institute of Technology, United States Abstract: The Oregon Health Insurance Experiment (OHIE) offers a unique opportunity to examine the causal relationship between Medicaid coverage and happiness among low-income adults, using an experimental design. This study leverages data from comprehensive surveys conducted at 0 and 12 months post-treatment. Previous studies based on OHIE have shown that individuals receiving Medicaid exhibited a significant improvement in mental health compared to those who did not receive coverage. The primary objective is to explore how Medicaid coverage impacts happiness, specifically analyzing in which direction do variations in healthcare spending significantly improve mental health: higher spending or lower spending after Medicaid. Utilizing instrumental variable (IV) regression, I conducted six separate regressions across subgroups categorized by expenditure levels and happiness ratings, and the results reveal distinct patterns. Enrolling in OHP has significantly decreased the probability of experiencing unhappiness, regardless of whether individuals had high or low medical spending. Additionally, it decreased the probability of being pretty happy and having high medical expenses, while increasing the probability among those with lower expenses. Concerning the probability of being very happy, the OHP only had a positive effect on being very happy and spending less, and its effect on those with high expenses was insignificant. These findings align with the benefit of Medicaid: alleviating financial burden, contributing to the well-being of distinct subgroups. Keyword: 2sls, IV, Medicaid, mental health, expenditure, OHIE. Introduction The advent of the Oregon Health Insurance Experiment (OHIE) in 2008 offered an unprecedented opportunity to rigorously evaluate the causal effects of Medicaid coverage on a range of outcomes through a randomized controlled design (Finkelstein et al., 2012). As the state of Oregon opened its Medicaid program to a limited number of low-income adults via a lottery system, a natural experiment unfolded from the 90,000 individuals who signed up, allowing for an objective analysis that sidesteps the perennial challenges of unobserved differences that often confound such research (Baicker & Finkelstein, 2011). This paper contributes to the body of evidence by examining the nuanced effects of Medicaid coverage on mental health and happiness within the OHIE framework. Specifically, it investigates whether increased happiness among Medicaid recipients is attributable to greater healthcare spending, which may imply improved access to necessary services, or to reduced financial strain due to lower out-of- pocket expenditures. The Oregon Medicaid lottery, by effectively randomizing access to public insurance, mitigates the selection bias inherent in previous observational studies. The resulting analysis leverages both administrative and survey data to glean insights into the complex dynamics at play between health insurance, healthcare utilization, and subjective well-being. Li Y / The Impact of Medicaid Coverage on Mental Health, ……… 18 Initial findings from the OHIE indicate that Medicaid coverage has led to statistically significant higher healthcare utilization, reduced out-of-pocket expenses, and enhanced self-reported health among the lottery-selected group as compared to the control group without such coverage (Taubman et al., 2014). This research extends these findings by disaggregating the effects of Medicaid coverage on happiness. The results in this research show an overall trend of a decrease in the probability of being pretty happy and having high medical expenses, and an increase in the probability among those with lower expenses. Concerning the probability of being very happy, the OHP only had a positive effect on being very happy and spending less, and its effect on those with high expenses was insignificant. These results align with the advantages of Medicaid, showing how it helps by reducing financial stress and improving the overall health of specific groups within the population Background The Oregon Health Insurance Experiment (OHIE) has served as a seminal source for understanding the effects of Medicaid expansion. Taubman et al. (2014) provided crucial insights, reporting that Medicaid significantly increased emergency department visits by 0.41 visits per person, contradicting the hypothesis that Medicaid expansion would decrease costly emergency department usage by improving access to primary care and overall health (Taubman et al., 2014). Finkelstein et al. (2012) expanded on these findings by demonstrating that, in the first year after Medicaid expansion, there was a significant increase in healthcare utilization including primary and preventive care, alongside reductions in financial strain due to medical expenses. Their work underscored the complexity of healthcare behaviors and hinted at potential long-term benefits not immediately evident in terms of cost reductions or health outcomes (Finkelstein et al., 2012). Kowalski (2016) advanced the analytical approach by applying Marginal Treatment Effect (MTE) methods to dissect heterogeneity within the OHIE data. Her work revealed that the treatment effect of insurance on emergency room utilization varied significantly across different subgroups, delineating a more nuanced understanding of how Medicaid impacts different population segments (Kowalski, 2016). Kowalski (2018) further contributed by comparing results from Oregon with the Massachusetts health reform, aiming to reconcile why similar expansions led to divergent outcomes in emergency department utilization. By leveraging the MTE framework, she suggested that initial health status and prior healthcare utilization patterns could explain these variations, proposing that healthier new enrollees in Massachusetts might reduce emergency usage, unlike in Oregon (Kowalski, 2018). The body of research emanating from the OHIE underscores the intricate dynamics of healthcare policy's impact on low-income populations (Baicker et al., 2014). These studies collectively emphasize the need for nuanced policy instruments that consider initial health conditions, existing healthcare infrastructure, and localized healthcare behaviors. Further research should continue to leverage randomized designs where feasible, alongside sophisticated econometric models to untangle the causal impacts of health policy changes (Kaczynski & Solnica, 2012). Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 19 Data Data source The dataset I use is the Oregon Health Insurance Experiment (OHIE) from the National Bureau of Economic Research (NBER) (National Bureau of Economic Research, n.d.). The OHIE dataset covers a period from March to September 2008, when the Medicaid lottery was conducted, with follow-up data collected primarily at one and two years post-lottery, extending the coverage of the dataset through at least 2010, which allows the analysis of short-term impacts of Medicaid coverage on the participants (Finkelstein et al., 2012). The unit of observation for the data in the OHIE is at the individual level (Finkelstein et al., 2012). The lotterfy system selected individuals to have the opportunity to apply for Medicaid, and the results were analyzed based on individual health outcomes, financial hardship, and other factors (Baicker et al., 2013). Additionally, since the opportunity to apply for coverage could extend to other family members within the household, the analysis also took into account household-level effects (Finkelstein et al., 2012). In 2008, 89,824 individuals entered the lottery, with 35,169 individuals, representing 29,664 households, selected for the chance to apply for coverage (Taubman et al., 2014). The OHIE dataset is a longitudinal panel dataset that captures cross-sectional data at multiple points in time (Hattab et al., 2024). It is primarily a cross-sectional dataset with elements of longitudinal tracking in two periods. It is not a traditional time series dataset, because it does not track variables continuously over time; instead, it captures data at specific points following the implementation of the Medicaid lottery. However, it does follow the same individuals over a period, thereby incorporating some longitudinal aspects. The data I use are composed of the descriptive data and the survey data collected after the lottery. Descriptive data were collected when individuals entered the lottery and then again during follow-up periods to assess outcomes after the lottery. The follow-up data included several waves, with each capturing data post-lottery, half-year after lottery and one year after lottery, thus forming repeated cross- sections of data for the individuals and households involved (National Bureau of Economic Research, n.d.). Applicants for the lottery provided characteristic information, while follow-up surveys collected healthcare utilization data, including emergency department visits, hospital admissions, prescription drug use, and financial measures such as credit scores and debts.1 Summary Statistics This study utilizes data from a series of surveys conducted at different intervals from 0 months and 12 months after participants received treatment. Notably, the dataset corresponding to the 6-month interval was excluded from analysis due to significant data incompleteness and inadequate temporal coverage post-treatment initiation, precluding a comprehensive assessment of Medicaid's effect during this period. Our aim is to understand how having Medicaid coverage influences happiness, focusing on whether decreased healthcare spending or increased healthcare spending contributes more significantly to this outcome. Li Y / The Impact of Medicaid Coverage on Mental Health, ……… 20 This study utilized a dataset I merged from three datasets of OHIE Public Use Data, as listed below: Descriptive Data (named as `oregonhie_descriptive_vars`): Contains descriptive variables about the participants, such as personal and household IDs and lottery selection status. First and Final Wave Survey Data (named as `oregonhie_survey0m_vars` and `oregonhie_survey12m _vars`): Include variables from surveys conducted immediately after receiving treatment and 12 months later, respectively. These datasets provide a comprehensive view of the participants' demographic information, Medicaid coverage status, healthcare spending, and reported happiness over time, etc. Figure 1: Treatment Intensity Figure 1 illustrates the distribution of current Oregon Health Plan (OHP) insurance coverage among individuals selected and not selected in the lottery. The left panel shows that individuals not selected in the lottery exhibit a near-zero density of having OHP insurance, as expected. Conversely, the right panel indicates that those selected in the lottery have a bimodal distribution, with significant proportions both possessing and not possessing OHP insurance. Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 21 Figure 2: Happiness by IV Figure 2, titled "Happiness by IV," compares the self-reported happiness levels of individuals who were not selected versus those who were selected in the lottery to receive free insurance. Both pie charts are segmented into three categories of happiness: "not too happy," "pretty happy," and "very happy." It appears that the proportion of individuals reporting higher happiness ("very happy") is slightly larger in the group selected to receive insurance compared to those not selected. This visual suggests a potential positive association between being selected for Medicaid coverage and self-reported happiness. Further analysis in this study will be done to establish causality and quantify the effect size. Data cleaning and Variable Definitions To prepare the data for analysis, several steps were taken: Participants Selection: Only those who participated in both the initial and 12-month follow-up surveys were retained, resulting in the deletion of 16,517 records. Missing Data Handling: Missing answers for gender were addressed by making one adjustment. A significant cleaning step involved rectifying 406 cases with missing treatment variables. Outlier Deletion: One identification and deletion of outliers that could potentially skew the results. Specifically, an extreme value of expenditure, recorded at 2.20e+7, was identified as an outlier and dropped, resulting in the deletion of one observation and 58,404 non missing values remaining. Treatment Variable Correction: I utilized enrollment in OHP in the first wave survey data as the treatment variable. To ensure the accuracy of the treatment variable, a correction was applied based on additional data fields: for individuals with data indicating confirmed acceptance into OHP by both preliminary and final approval processes, the treatment status was uniformly updated to one. This Li Y / The Impact of Medicaid Coverage on Mental Health, ……… 22 correction was necessary to accurately reflect the enrollment status in the OHP and reduce missing data problems, as these conditions conclusively demonstrate that the participants were accepted into the plan. Variable Generation: Age variable was generated from the year of birth. Table 1: Summary Statistics After major data cleaning process, key variables were generated and renamed for clarity: • `IV (Z_i): Winning the Lottery`: Named Winning Lottery in Table 1, indicates lottery selection, a binary variable identifying if participants were selected (1) or not (0). 58,404 non missing values, binary variable. Originally named treatment, renamed to LotterSelected. • `Treatment (D_i): Enrollment in the OHP`: The treatment variable, designated as D, was initially labeled as ins_ohp_0m in the dataset and was renamed to OHP in Table 1, identifying whether participants currently have coverage under the Oregon Health Plan (OHP). This variable is a binary indicator, where a value of 1 signifies that the participant has OHP coverage, and a value of 0 indicates the absence of such coverage. Data for the treatment variable were collected from 23,139 non-missing entries 12 months following the lottery selection. This rigorous adjustment ensures that the treatment variable reliably represents the actual health Variable N Mean Standard Deviation Min Max Winning Lottery (IV) 58404 0.5066 0.5000 0 1 OHP (treatment) 23139 0.1734 0.3786 0 1 Happiness (outcome) 23449 1.7365 0.6585 1 3 # HH Members 58404 1.2927 0.4608 1 3 Expenditure 22765 736.3079 55076.7200 0 8301400 Age 58404 39.8860 12.1225 20 63 Female 58404 0.5464 0.4978 0 1 Previous Expenditure 24028 583613.4 8.39e+07 0 1.30e+10 Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 23 coverage status of the participants, thereby facilitating a more precise analysis of the impact of OHP coverage on their reported happiness outcomes. • `Outcome (Y_i): Level of Happiness`: Named Happiness in Table 1, is a dependent variable. Represents the reported overall happiness, not too happy (1), pretty happy (2), and very happy (3). Collected 12 months after treatment with 23,449 non missing values. Originally named happiness_12m. • `Control (X_i)`: The first control variable is the number of people in a household on the lottery list with 58,404 non missing values. Originally named numhh_list, renamed to # HH Members in Table 1. Currently, other control variables of my selection include age, gender, and previous expenditure, named Age, Female, and Previous Expenditure in Table 1, all have 58,404 non missing values. • `Expenditure (E_i)`: Total out-of-pocket healthcare spending over the last 12 months with 22,765 non missing values, in dollars. Originally named cost_tot_oop_12m, renamed to Expenditure in Table 1. • `median (median)`: median of all E_i . • `Above Median (〖AM〗_i)`: dummy variable, 〖AM〗_i equals to 1 if above median of all E_i , 0 otherwise. Empirical Methodology Identification Strategy The instrument utilized is the lottery selection (Z_i), which is assumed to be random and thus uncorrelated with unobservable determinants of the outcome variable (happiness). The instrument's validity is based on its correlation with the treatment variable (D_i), which represents a self-selection process in which a lottery-selected individual can choose to enroll in OHP/Medicaid coverage or not, while being uncorrelated with the error term in the outcome equation. The empirical strategy for assessing the impact of Medicaid coverage on happiness involves a two-stage least squares (2SLS) regression, utilizing an instrumental variable (IV) approach. This method addresses potential endogeneity in the treatment assignment (Medicaid coverage). The 6 Regressions Model I perform subgroup analyses to determine if the impact of Medicaid coverage on happiness differs by levels of medical expenditure (E_i). This involves running separate regressions for subgroups where expenditures are 〖AM〗_i = 0 (below the median) and 〖AM〗_i = 1 (above the median), happiness Y_i = 1 (not too happy), Y_i = 2 (pretty happy), and Y_i = 3 (very happy). I first create six dummy variables and rename them Y ̃_(j,i), defined as: Li Y / The Impact of Medicaid Coverage on Mental Health, ……… 24 • Y ̃_(1,i)=1(Y_i=1,AM_i=1) • Y ̃_(2,i)=1(Y_i=1,AM_i=0) • Y ̃_(3,i)=1(Y_i=2,AM_i=1) • Y ̃_(4,i)=1(Y_i=2,AM_i=0) • Y ̃_(5,i)=1(Y_i=3,AM_i=1) • Y ̃_(6,i)=1(Y_i=3,AM_i=0) The regression model can be written as: Y ̃_j=β_(j,0)+β_(j,1) D+X^' β_(j,X)+ϵ,j=1,⋅⋅⋅,6 (Second Stage)D=γ_0+γ_1 Z+X^' 〖γ_X〗_ +u (First Stage) β_(j,1) captures the effect of enrolling in OHP on the probabilities, and is the coefficient I want to identify and estimate. For example, when j=1, β_1,1 can be written as: β_1,1=E[1(Y_i=1,AM_i=1)|D_i=1]-E[1(Y_i=1,AM_i=1)|D_i=0] =Pr[Y_i=1,AM_i=1|D_i=1]-Pr[Y_i=1,AM_i=1|D_i=0]. The transformation from expectation to probability is because the outcome is binary. Assumptions The following are assumptions for the 2SLS regression analysis: (1) {Y_i ,D_i ,Z_i ,X_i }_(i=1)^N i.i.d. (2) E(ϵ_i |Z_i)=0,E(u_i |Z_i)=0 (3) corr(D_i ,Z_i)>0 (4) var(ϵ_i |Z_i) <∞,var(u_i |Z_i) <∞ The interpretations of these assumptions are: (1) The observations of the outcome variable (Y_i), the treatment variable (D_i), the instrumental variable (Z_i), and the control variables (X_i)are presumed to be independent and identically distributed (i.i.d.). (2) The instrument's validity is predicated on the zero conditional mean independence assumptions. These conditions imply that, given the instrument, the error terms in the Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 25 first and second stages of the 2SLS have an expected value of zero, signaling the absence of omitted variable bias and the proper isolation of the instrument's exogenous variation. (3) The relevance of the instrument is affirmed by a positive correlation between the instrument (Z_i), and the endogenous treatment variable (D_i). This critical assumption ensures that the instrument exerts a significant influence on the treatment variable, thus excluding the presence of a weak instrument which could undermine the statistical power and the consistency of the estimator. (4) The technical feasibility of the model estimation is predicated on the variance conditions, specifying that the conditional variances are finite. This technical assumption ensures that the model does not suffer from the problems of infinite variance, which can occur with heteroskedastic errors and lead to unreliable standard errors and test statistics. Empirical Results Table 2: 6 Regressions Results t statistics in parentheses * p < 0.05, ** p < 0.01, *** p < 0.001 (1) (2) (3) (4) (5) (6) Not Happy Above Median Not Happy Below Median Pretty Happy Above Median Pretty Happy Below Median Very Happy Above Median Very Happy Below Median Treatment -0.203*** -0.0642** -0.113*** 0.173*** 0.0133 0.0662*** (-8.67) (-2.90) (-4.45) (6.59) (1.07) (4.81) #HH Members -0.0109* -0.0294*** -0.000173 0.0252*** 0.00920** 0.0142*** (-2.17) (-6.41) (-0.03) (4.33) (3.21) (4.42) Age 0.00237*** 0.00193*** 0.00151*** -0.000304 -0.000210 -0.000490*** (12.62) (11.14) (7.00) (-1.38) (-1.92) (-3.94) Li Y / The Impact of Medicaid Coverage on Mental Health, ……… 26 Female 0.0264*** -0.0359*** 0.0690*** -0.0284*** 0.0160*** -0.00520 (5.66) (-7.84) (13.92) (-5.34) (6.66) (-1.86) Initial Health Condition 8.05e- 12*** -4.09e-12** 5.80e-13 -2.55e- 11*** -2.91e-12*** -8.19e-12*** (5.39) (-2.90) (0.35) (-14.82) (-3.64) (-8.94) _cons 0.0624*** 0.104*** 0.0777*** 0.137*** 0.0187** 0.0348*** (4.99) (8.83) (5.51) (9.42) (2.62) (4.48) N 21413 21413 21413 21413 21413 21413 Table 2 shows the result for the six 2sls IV regressions, regression 1 nothappy_above is when Y_i = 1 (not too happy) and 〖AM〗_i = 1 (expenditure above median), regression 2 nothappy_below is when Y_i = 1 (not too happy) and 〖AM〗_i = 0 (expenditure below median); regression 3 pretty_above is when Y_i = 2 (pretty happy) and 〖AM〗_i = 1 (expenditure above median), regression 4 pretty_below is when Y_i = 2 (pretty happy) and 〖AM〗_i = 0 (expenditure below median); regression 5 very_above is when Y_i = 3 (very happy) and 〖AM〗_i = 1 (expenditure above median), regression 6 very_below is when Y_i = 3 (very happy) and 〖AM〗_i = 0 (expenditure below median). The treatment coefficient β_1 in regression (1) is significantly negative at -0.203 (p<0.01), showing that with the Medicaid coverage leads to a substantive decrease in the probability that someone is both in the 'not too happy' category, and has above median spending. On net, the Medicaid coverage is decreasing the number of people that are in the not too happy and above median spending category. This is consistent with the hypothesis that Medicaid coverage makes people happier without worrying about medical expenditures and without increasing their medical expenditures. Contrastingly, for the same happiness category with expenditures below the median (regression (2)), the treatment effect is smaller but still negative and significant at β_(2 )= -0.0642 (p<0.05), indicating that enrolling in OHP reduces the number of people in this category as well, although by less than for the unhappy and above median category. For those who were 'very happy', Regression (5) does not show a statistically significant correlation between higher expenditures and happiness β_(5 )= 0.0133 (p>0.05), suggesting that for this group, the level of spending under the OHP does not have a discernible impact on the fraction of people in this category. Yet, for those with lower expenditures (Regression (6)), a positive and significant treatment Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 27 effect β_(6 )= 0.0662 (p<0.01) is evident, underscoring again that reduced financial strain has increased the probability of people falling into the category of very happy and below median spending post- treatment. Regression (3) shows that treatment reduces the number of people who are pretty happy and have above- median spending, with a significant and negative coefficient β_(3 )= −0.113 (p < 0.01). However, we do not know where people go from this cell: the decrease in probability of people in the pretty happy group post-treatment could have gone to the very happy category or the not too happy category. When expenditures are below the median (regression (4)), the treatment has a positive effect β_(4 )= 0.173 (p<0.01), indicating an increase in the number of people under the pretty happy and below median spending category. This result aligns with the hypothesis that reduced financial burden, as facilitated by the OHP, can improve mental well-being. In summary, enrolling in OHP has significantly decreased the probability of experiencing unhappiness, regardless of whether individuals had high or low medical spending. Additionally, it revealed that, on net, there is a significant decrease in probability of being not too happy and having above median medical expenses post treatment, and a significant increase in probability of being very happy and having below median expenses post treatment. Concerning the overall result, enrolling in the OHP had a positive effect on being happier and spending less. These findings align with the hypothesis that alleviating financial burden as a benefit of Medicaid contributes to the well-being of individuals in distinct subgroups. Conclusion and Discussion This study's analysis, predicated on a randomized controlled design, delved into the impacts of Medicaid coverage on mental health by observing a low-income, uninsured adult population over approximately one year. The empirical findings reveal an overall trend: a decrease in the number of individuals who are 'not too happy' and have above median spending, and an increase in the number of individuals under the below-median expenditures and 'very happy' category. These findings align with the hypothesis that reduced financial strain after Medicaid contributes to mental health improvement. The implications of these findings are profound, especially when considering policy designs aimed at optimizing both the health and overall well-being of low-income populations. With the expansion of Medicaid eligibility under the Patient Protection and Affordable Care Act, understanding these effects takes on a heightened policy relevance (Begley et al., 2013). As the OHIE continues to provide a wealth of data, this paper aims to contribute to the dialogue on how Medicaid coverage affects not just health in the narrow sense, but the broader psychosocial well-being of individuals. For robustness checks, an alternative approach I thought of could have been employed wherein the outcome variable Y_i is substituted with another indicator of happiness, and the regression is re- executed. This would help ascertain the consistency of the observed effects across different measures of well-being. Li Y / The Impact of Medicaid Coverage on Mental Health, ……… 28 Future research could explore the causality in this relationship further. Additionally, including interaction terms in the regression model could offer more insights into how the relationship between Medicaid coverage and happiness may be moderated by factors such as health status or financial stress. References Baicker, K., & Finkelstein, A. (2011). The effects of Medicaid coverage—Learning from the Oregon experiment. New England Journal of Medicine, 365(8), 683–685. https://doi.org/10.1056/NEJMp1108222 Baicker, K., Finkelstein, A., Song, J., & Taubman, S. (2014). The impact of Medicaid on labor market activity and program participation: Evidence from the Oregon Health Insurance Experiment. American Economic Review, 104(5), 322–328. https://doi.org/10.1257/aer.104.5.322 Baicker, K., Taubman, S. L., Allen, H. L., Bernstein, M., Gruber, J. H., Newhouse, J. P., Schneider, E. C., Wright, B. J., Zaslavsky, A. M., Finkelstein, A. N., & Oregon Health Study Group. (2013). The Oregon experiment—Effects of Medicaid on clinical outcomes. New England Journal of Medicine, 368(18), 1713–1722. https://doi.org/10.1056/NEJMsa1212321 Begley, C. E., Deshmukh, A., Eschbach, K., Fouladi, N., Liu, J., & Reynolds, T. (2013). PHP52 - Health insurance coverage in the Houston-Galveston area under the Patient Protection and Affordable Care Act. Value in Health, 16(3), A191. https://doi.org/10.1016/j.jval.2013.03.1290 Finkelstein, A., Taubman, S., Wright, B., Bernstein, M., Gruber, J., Newhouse, J. P., Allen, H., Baicker, K., & Oregon Health Study Group. (2012). The Oregon Health Insurance Experiment: Evidence from the first year. The Quarterly Journal of Economics, 127(3), 1057–1106. https://doi.org/10.1093/qje/qjs020 Hattab, Z., Doherty, E., Ryan, A. M., & O’Neill, S. (2024). Heterogeneity within the Oregon Health Insurance Experiment: An application of causal forests. PLoS ONE, 19(1), e0297205. https://doi.org/10.1371/journal.pone.0297205 Kaczynski, L., & Solnica, B. (2012). PRM155 A pragmatic randomized clinical trials—Design and quality assessment of the source of effectiveness data. Value in Health, 15(7), A487. https://doi.org/10.1016/j.jval.2012.08.1618 Kowalski, A. E. (2016). Doing more when you’re running late: Applying marginal treatment effect methods to examine treatment effect heterogeneity in experiments (NBER Working Paper No. 22363). National Bureau of Economic Research. http://www.nber.org/papers/w22363 Kowalski, A. E. (2018). Reconciling seemingly contradictory results from the Oregon Health Insurance Experiment and the Massachusetts Health Reform (NBER Working Paper No. 24647). National Bureau of Economic Research. http://www.nber.org/papers/w24647 Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 29 National Bureau of Economic Research. (n.d.). Oregon Health Insurance Experiment (OHIE) data. Retrieved October 30, 2024, from https://www.nber.org/research/data/oregon-health- insurance-experiment-data Taubman, S. L., Allen, H. L., Wright, B. J., Baicker, K., & Finkelstein, A. N. (2014). Medicaid increases emergency-department use: Evidence from Oregon’s Health Insurance Experiment. Science, 343(6168), 263–268. https://doi.org/10.1126/science.1246183
*Corresponding Author's Proceedings of the Global Public Health Conference, Vol. 8, Issue 1, 2025, pp.17-29 Li Y ISSN 2613-8417 online THE IMPACT OF MEDICAID COVERAGE ON MENTAL HEALTH, WHY INSURANCE MAKES PEOPLE HAPPIER IN OHIE: BY SPENDING LESS OR BY SPENDING MORE? Yangyang Li* : The Oregon Health Insurance Experiment (OHIE) offers a unique opportunity to examine the causal relationship between Medicaid coverage and happiness among low-income adults, using an experimental design. This study leverages data from comprehensive surveys conducted at 0 and 12 months post-treatment. Previous studies based on OHIE have shown that individuals receiving Medicaid exhibited a significant improvement in mental health compared to those who did not receive coverage. The primary objective is to explore how Medicaid coverage impacts happiness, specifically analyzing in which direction do variations in healthcare spending significantly improve mental health: higher spending or lower spending after Medicaid. Utilizing instrumental variable (IV) regression, I conducted six separate regressions across subgroups categorized by expenditure levels and happiness ratings, and the results reveal distinct patterns. Enrolling in OHP has significantly decreased the probability of experiencing unhappiness, regardless of whether individuals had high or low medical spending. Additionally, it decreased the probability of being pretty happy and having high medical expenses, while increasing the probability among those with lower expenses. Concerning the probability of being very happy, the OHP only had a positive effect on being very happy and spending less, and its effect on those with high expenses was insignificant. These findings align with the benefit of Medicaid: alleviating financial burden, contributing to the well-being of distinct subgroups. Keyword: 2sls, IV, Medicaid, mental health, expenditure, OHIE. Introduction The advent of the Oregon Health Insurance Experiment (OHIE) in 2008 offered an unprecedented opportunity to rigorously evaluate the causal effects of Medicaid coverage on a range of outcomes through a randomized controlled design (Finkelstein et al., 2012). As the state of Oregon opened its Medicaid program to a limited number of low-income adults via a lottery system, a natural experiment unfolded from the 90,000 individuals who signed up, allowing for an objective analysis that sidesteps the perennial challenges of unobserved differences that often confound such research (Baicker & Finkelstein, 2011). This paper contributes to the body of evidence by examining the nuanced effects of Medicaid coverage on mental health and happiness within the OHIE framework. Specifically, it investigates whether increased happiness among Medicaid recipients is attributable to greater healthcare spending, which may imply improved access to necessary services, or to reduced financial strain due to lower out-ofpocket expenditures. The Oregon Medicaid lottery, by effectively randomizing access to public insurance, mitigates the selection bias inherent in previous observational studies. The resulting analysis leverages both administrative and survey data to glean insights into the complex dynamics at play between health insurance, healthcare utilization, and subjective well-being. Li Y / The Impact of Medicaid Coverage on Mental Health, ......... 18 Initial findings from the OHIE indicate that Medicaid coverage has led to statistically significant higher healthcare utilization, reduced out-of-pocket expenses, and enhanced self-reported health among the lottery-selected group as compared to the control group without such coverage (Taubman et al., 2014). This research extends these findings by disaggregating the effects of Medicaid coverage on happiness. The results in this research show an overall trend of a decrease in the probability of being pretty happy and having high medical expenses, and an increase in the probability among those with lower expenses. Concerning the probability of being very happy, the OHP only had a positive effect on being very happy and spending less, and its effect on those with high expenses was insignificant. These results align with the advantages of Medicaid, showing how it helps by reducing financial stress and improving the overall health of specific groups within the population Background The Oregon Health Insurance Experiment (OHIE) has served as a seminal source for understanding the effects of Medicaid expansion. Taubman et al. (2014) provided crucial insights, reporting that Medicaid significantly increased emergency department visits by 0.41 visits per person, contradicting the hypothesis that Medicaid expansion would decrease costly emergency department usage by improving access to primary care and overall health (Taubman et al., 2014). Finkelstein et al. (2012) expanded on these findings by demonstrating that, in the first year after Medicaid expansion, there was a significant increase in healthcare utilization including primary and preventive care, alongside reductions in financial strain due to medical expenses. Their work underscored the complexity of healthcare behaviors and hinted at potential long-term benefits not immediately evident in terms of cost reductions or health outcomes (Finkelstein et al., 2012). Kowalski (2016) advanced the analytical approach by applying Marginal Treatment Effect (MTE) methods to dissect heterogeneity within the OHIE data. Her work revealed that the treatment effect of insurance on emergency room utilization varied significantly across different subgroups, delineating a more nuanced understanding of how Medicaid impacts different population segments (Kowalski, 2016). Kowalski (2018) further contributed by comparing results from Oregon with the Massachusetts health reform, aiming to reconcile why similar expansions led to divergent outcomes in emergency department utilization. By leveraging the MTE framework, she suggested that initial health status and prior healthcare utilization patterns could explain these variations, proposing that healthier new enrollees in Massachusetts might reduce emergency usage, unlike in Oregon (Kowalski, 2018). The body of research emanating from the OHIE underscores the intricate dynamics of healthcare policy's impact on low-income populations (Baicker et al., 2014). These studies collectively emphasize the need for nuanced policy instruments that consider initial health conditions, existing healthcare infrastructure, and localized healthcare behaviors. Further research should continue to leverage randomized designs where feasible, alongside sophisticated econometric models to untangle the causal impacts of health policy changes (Kaczynski & Solnica, 2012). Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 19 Data Data source The dataset I use is the Oregon Health Insurance Experiment (OHIE) from the National Bureau of Economic Research (NBER) (National Bureau of Economic Research, n.d.). The OHIE dataset covers a period from March to September 2008, when the Medicaid lottery was conducted, with follow-up data collected primarily at one and two years post-lottery, extending the coverage of the dataset through at least 2010, which allows the analysis of short-term impacts of Medicaid coverage on the participants (Finkelstein et al., 2012). The unit of observation for the data in the OHIE is at the individual level (Finkelstein et al., 2012). The lotterfy system selected individuals to have the opportunity to apply for Medicaid, and the results were analyzed based on individual health outcomes, financial hardship, and other factors (Baicker et al., 2013). Additionally, since the opportunity to apply for coverage could extend to other family members within the household, the analysis also took into account household-level effects (Finkelstein et al., 2012). In 2008, 89,824 individuals entered the lottery, with 35,169 individuals, representing 29,664 households, selected for the chance to apply for coverage (Taubman et al., 2014). The OHIE dataset is a longitudinal panel dataset that captures cross-sectional data at multiple points in time (Hattab et al., 2024). It is primarily a cross-sectional dataset with elements of longitudinal tracking in two periods. It is not a traditional time series dataset, because it does not track variables continuously over time; instead, it captures data at specific points following the implementation of the Medicaid lottery. However, it does follow the same individuals over a period, thereby incorporating some longitudinal aspects. The data I use are composed of the descriptive data and the survey data collected after the lottery. Descriptive data were collected when individuals entered the lottery and then again during follow-up periods to assess outcomes after the lottery. The follow-up data included several waves, with each capturing data post-lottery, half-year after lottery and one year after lottery, thus forming repeated crosssections of data for the individuals and households involved (National Bureau of Economic Research, n.d.). Applicants for the lottery provided characteristic information, while follow-up surveys collected healthcare utilization data, including emergency department visits, hospital admissions, prescription drug use, and financial measures such as credit scores and debts.1 Summary Statistics This study utilizes data from a series of surveys conducted at different intervals from 0 months and 12 months after participants received treatment. Notably, the dataset corresponding to the 6-month interval was excluded from analysis due to significant data incompleteness and inadequate temporal coverage post-treatment initiation, precluding a comprehensive assessment of Medicaid's effect during this period. Our aim is to understand how having Medicaid coverage influences happiness, focusing on whether decreased healthcare spending or increased healthcare spending contributes more significantly to this outcome. Li Y / The Impact of Medicaid Coverage on Mental Health, ......... 20 This study utilized a dataset I merged from three datasets of OHIE Public Use Data, as listed below: Descriptive Data (named as `oregonhie_descriptive_vars`): Contains descriptive variables about the participants, such as personal and household IDs and lottery selection status. First and Final Wave Survey Data (named as `oregonhie_survey0m_vars` and `oregonhie_survey12m _vars`): Include variables from surveys conducted immediately after receiving treatment and 12 months later, respectively. These datasets provide a comprehensive view of the participants' demographic information, Medicaid coverage status, healthcare spending, and reported happiness over time, etc. Figure 1: Treatment Intensity Figure 1 illustrates the distribution of current Oregon Health Plan (OHP) insurance coverage among individuals selected and not selected in the lottery. The left panel shows that individuals not selected in the lottery exhibit a near-zero density of having OHP insurance, as expected. Conversely, the right panel indicates that those selected in the lottery have a bimodal distribution, with significant proportions both possessing and not possessing OHP insurance. Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 21 Figure 2: Happiness by IV Figure 2, titled "Happiness by IV," compares the self-reported happiness levels of individuals who were not selected versus those who were selected in the lottery to receive free insurance. Both pie charts are segmented into three categories of happiness: "not too happy," "pretty happy," and "very happy." It appears that the proportion of individuals reporting higher happiness ("very happy") is slightly larger in the group selected to receive insurance compared to those not selected. This visual suggests a potential positive association between being selected for Medicaid coverage and self-reported happiness. Further analysis in this study will be done to establish causality and quantify the effect size. Data cleaning and Variable Definitions To prepare the data for analysis, several steps were taken: Participants Selection: Only those who participated in both the initial and 12-month follow-up surveys were retained, resulting in the deletion of 16,517 records. Missing Data Handling: Missing answers for gender were addressed by making one adjustment. A significant cleaning step involved rectifying 406 cases with missing treatment variables. Outlier Deletion: One identification and deletion of outliers that could potentially skew the results. Specifically, an extreme value of expenditure, recorded at 2.20e+7, was identified as an outlier and dropped, resulting in the deletion of one observation and 58,404 non missing values remaining. Treatment Variable Correction: I utilized enrollment in OHP in the first wave survey data as the treatment variable. To ensure the accuracy of the treatment variable, a correction was applied based on additional data fields: for individuals with data indicating confirmed acceptance into OHP by both preliminary and final approval processes, the treatment status was uniformly updated to one. This Li Y / The Impact of Medicaid Coverage on Mental Health, ......... 22 correction was necessary to accurately reflect the enrollment status in the OHP and reduce missing data problems, as these conditions conclusively demonstrate that the participants were accepted into the plan. Variable Generation: Age variable was generated from the year of birth. Table 1: Summary Statistics After major data cleaning process, key variables were generated and renamed for clarity: • `IV (Z_i): Winning the Lottery`: Named Winning Lottery in Table 1, indicates lottery selection, a binary variable identifying if participants were selected (1) or not (0). 58,404 non missing values, binary variable. Originally named treatment, renamed to LotterSelected. • `Treatment (D_i): Enrollment in the OHP`: The treatment variable, designated as D, was initially labeled as ins_ohp_0m in the dataset and was renamed to OHP in Table 1, identifying whether participants currently have coverage under the Oregon Health Plan (OHP). This variable is a binary indicator, where a value of 1 signifies that the participant has OHP coverage, and a value of 0 indicates the absence of such coverage. Data for the treatment variable were collected from 23,139 non-missing entries 12 months following the lottery selection. This rigorous adjustment ensures that the treatment variable reliably represents the actual health Variable N Mean Standard Deviation Min Max Winning Lottery (IV) 58404 0.5066 0.5000 0 1 OHP (treatment) 23139 0.1734 0.3786 0 1 Happiness (outcome) 23449 1.7365 0.6585 1 3 # HH Members 58404 1.2927 0.4608 1 3 Expenditure 22765 736.3079 55076.7200 0 8301400 Age 58404 39.8860 12.1225 20 63 Female 58404 0.5464 0.4978 0 1 Previous Expenditure 24028 583613.4 8.39e+07 0 1.30e+10 Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 23 coverage status of the participants, thereby facilitating a more precise analysis of the impact of OHP coverage on their reported happiness outcomes. • `Outcome (Y_i): Level of Happiness`: Named Happiness in Table 1, is a dependent variable. Represents the reported overall happiness, not too happy (1), pretty happy (2), and very happy (3). Collected 12 months after treatment with 23,449 non missing values. Originally named happiness_12m. • `Control (X_i)`: The first control variable is the number of people in a household on the lottery list with 58,404 non missing values. Originally named numhh_list, renamed to # HH Members in Table 1. Currently, other control variables of my selection include age, gender, and previous expenditure, named Age, Female, and Previous Expenditure in Table 1, all have 58,404 non missing values. • `Expenditure (E_i)`: Total out-of-pocket healthcare spending over the last 12 months with 22,765 non missing values, in dollars. Originally named cost_tot_oop_12m, renamed to Expenditure in Table 1. • `median (median)`: median of all E_i . • `Above Median (〖AM〗_i)`: dummy variable, 〖AM〗_i equals to 1 if above median of all E_i , 0 otherwise. Empirical Methodology Identification Strategy The instrument utilized is the lottery selection (Z_i), which is assumed to be random and thus uncorrelated with unobservable determinants of the outcome variable (happiness). The instrument's validity is based on its correlation with the treatment variable (D_i), which represents a self-selection process in which a lottery-selected individual can choose to enroll in OHP/Medicaid coverage or not, while being uncorrelated with the error term in the outcome equation. The empirical strategy for assessing the impact of Medicaid coverage on happiness involves a two-stage least squares (2SLS) regression, utilizing an instrumental variable (IV) approach. This method addresses potential endogeneity in the treatment assignment (Medicaid coverage). The 6 Regressions Model I perform subgroup analyses to determine if the impact of Medicaid coverage on happiness differs by levels of medical expenditure (E_i). This involves running separate regressions for subgroups where expenditures are 〖AM〗_i = 0 (below the median) and 〖AM〗_i = 1 (above the median), happiness Y_i = 1 (not too happy), Y_i = 2 (pretty happy), and Y_i = 3 (very happy). I first create six dummy variables and rename them Y ̃_(j,i), defined as: Li Y / The Impact of Medicaid Coverage on Mental Health, ......... 24 • Y ̃_(1,i)=1(Y_i=1,AM_i=1) • Y ̃_(2,i)=1(Y_i=1,AM_i=0) • Y ̃_(3,i)=1(Y_i=2,AM_i=1) • Y ̃_(4,i)=1(Y_i=2,AM_i=0) • Y ̃_(5,i)=1(Y_i=3,AM_i=1) • Y ̃_(6,i)=1(Y_i=3,AM_i=0) The regression model can be written as: Y ̃_j=β_(j,0)+β_(j,1) D+X^' β_(j,X)+ε,j=1,⋅⋅⋅,6 (Second Stage)D=γ_0+γ_1 Z+X^' 〖γ_X〗_ +u (First Stage) β_(j,1) captures the effect of enrolling in OHP on the probabilities, and is the coefficient I want to identify and estimate. For example, when j=1, β_1,1 can be written as: β_1,1=E[1(Y_i=1,AM_i=1)|D_i=1]-E[1(Y_i=1,AM_i=1)|D_i=0] =Pr[Y_i=1,AM_i=1|D_i=1]-Pr[Y_i=1,AM_i=1|D_i=0]. The transformation from expectation to probability is because the outcome is binary. Assumptions The following are assumptions for the 2SLS regression analysis: (1) {Y_i ,D_i ,Z_i ,X_i }_(i=1)^N i.i.d. (2) E(ε_i |Z_i)=0,E(u_i |Z_i)=0 (3) corr(D_i ,Z_i)>0 (4) var(ε_i |Z_i) 0.05), suggesting that for this group, the level of spending under the OHP does not have a discernible impact on the fraction of people in this category. Yet, for those with lower expenditures (Regression (6)), a positive and significant treatment Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 27 effect β_(6 )= 0.0662 (p<0.01) is evident, underscoring again that reduced financial strain has increased the probability of people falling into the category of very happy and below median spending posttreatment. Regression (3) shows that treatment reduces the number of people who are pretty happy and have abovemedian spending, with a significant and negative coefficient β_(3 )= -0.113 (p < 0.01). However, we do not know where people go from this cell: the decrease in probability of people in the pretty happy group post-treatment could have gone to the very happy category or the not too happy category. When expenditures are below the median (regression (4)), the treatment has a positive effect β_(4 )= 0.173 (p<0.01), indicating an increase in the number of people under the pretty happy and below median spending category. This result aligns with the hypothesis that reduced financial burden, as facilitated by the OHP, can improve mental well-being. In summary, enrolling in OHP has significantly decreased the probability of experiencing unhappiness, regardless of whether individuals had high or low medical spending. Additionally, it revealed that, on net, there is a significant decrease in probability of being not too happy and having above median medical expenses post treatment, and a significant increase in probability of being very happy and having below median expenses post treatment. Concerning the overall result, enrolling in the OHP had a positive effect on being happier and spending less. These findings align with the hypothesis that alleviating financial burden as a benefit of Medicaid contributes to the well-being of individuals in distinct subgroups. Conclusion and Discussion This study's analysis, predicated on a randomized controlled design, delved into the impacts of Medicaid coverage on mental health by observing a low-income, uninsured adult population over approximately one year. The empirical findings reveal an overall trend: a decrease in the number of individuals who are 'not too happy' and have above median spending, and an increase in the number of individuals under the below-median expenditures and 'very happy' category. These findings align with the hypothesis that reduced financial strain after Medicaid contributes to mental health improvement. The implications of these findings are profound, especially when considering policy designs aimed at optimizing both the health and overall well-being of low-income populations. With the expansion of Medicaid eligibility under the Patient Protection and Affordable Care Act, understanding these effects takes on a heightened policy relevance (Begley et al., 2013). As the OHIE continues to provide a wealth of data, this paper aims to contribute to the dialogue on how Medicaid coverage affects not just health in the narrow sense, but the broader psychosocial well-being of individuals. For robustness checks, an alternative approach I thought of could have been employed wherein the outcome variable Y_i is substituted with another indicator of happiness, and the regression is reexecuted. This would help ascertain the consistency of the observed effects across different measures of well-being. Li Y / The Impact of Medicaid Coverage on Mental Health, ......... 28 Future research could explore the causality in this relationship further. Additionally, including interaction terms in the regression model could offer more insights into how the relationship between Medicaid coverage and happiness may be moderated by factors such as health status or financial stress. References Baicker, K., & Finkelstein, A. (2011). The effects of Medicaid coverage-Learning from the Oregon experiment. New England Journal of Medicine, 365(8), 683-685. https://doi.org/10.1056/NEJMp1108222 Baicker, K., Finkelstein, A., Song, J., & Taubman, S. (2014). The impact of Medicaid on labor market activity and program participation: Evidence from the Oregon Health Insurance Experiment. American Economic Review, 104(5), 322-328. https://doi.org/10.1257/aer.104.5.322 Baicker, K., Taubman, S. L., Allen, H. L., Bernstein, M., Gruber, J. H., Newhouse, J. P., Schneider, E. C., Wright, B. J., Zaslavsky, A. M., Finkelstein, A. N., & Oregon Health Study Group. (2013). The Oregon experiment-Effects of Medicaid on clinical outcomes. New England Journal of Medicine, 368(18), 1713-1722. https://doi.org/10.1056/NEJMsa1212321 Begley, C. E., Deshmukh, A., Eschbach, K., Fouladi, N., Liu, J., & Reynolds, T. (2013). PHP52 - Health insurance coverage in the Houston-Galveston area under the Patient Protection and Affordable Care Act. Value in Health, 16(3), A191. https://doi.org/10.1016/j.jval.2013.03.1290 Finkelstein, A., Taubman, S., Wright, B., Bernstein, M., Gruber, J., Newhouse, J. P., Allen, H., Baicker, K., & Oregon Health Study Group. (2012). The Oregon Health Insurance Experiment: Evidence from the first year. The Quarterly Journal of Economics, 127(3), 1057-1106. https://doi.org/10.1093/qje/qjs020 Hattab, Z., Doherty, E., Ryan, A. M., & O'Neill, S. (2024). Heterogeneity within the Oregon Health Insurance Experiment: An application of causal forests. PLoS ONE, 19(1), e0297205. https://doi.org/10.1371/journal.pone.0297205 Kaczynski, L., & Solnica, B. (2012). PRM155 A pragmatic randomized clinical trials-Design and quality assessment of the source of effectiveness data. Value in Health, 15(7), A487. https://doi.org/10.1016/j.jval.2012.08.1618 Kowalski, A. E. (2016). Doing more when you're running late: Applying marginal treatment effect methods to examine treatment effect heterogeneity in experiments (NBER Working Paper No. 22363). National Bureau of Economic Research. http://www.nber.org/papers/w22363 Kowalski, A. E. (2018). Reconciling seemingly contradictory results from the Oregon Health Insurance Experiment and the Massachusetts Health Reform (NBER Working Paper No. 24647). National Bureau of Economic Research. http://www.nber.org/papers/w24647 Proceedings of the Global Public Health Conference, Vol. 8, Issue. 1, 2025, pp.17-29 29 National Bureau of Economic Research. (n.d.). Oregon Health Insurance Experiment (OHIE) data. Retrieved October 30, 2024, from https://www.nber.org/research/data/oregon-healthinsurance-experiment-data Taubman, S. L., Allen, H. L., Wright, B. J., Baicker, K., & Finkelstein, A. N. (2014). Medicaid increases emergency-department use: Evidence from Oregon's Health Insurance Experiment. Science, 343(6168), 263-268. https://doi.org/10.1126/science.1246183
2510.14910
VORTEX LINES INTERACTION IN THE THREE-DIMENSIONAL MAGNETIC GINZBURG–LANDAU MODEL CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Abstract. We complete our study of the three dimensional Ginzburg–Landau functional with magnetic field, in the asymptotic regime of a small inverse Ginzburg–Landau parameter ε, and near the first critical field Hc1 for which the first vortex filaments appear in energy minimizers. Under a nondegeneracy condition, we show a next order asymptotic expansion of Hc1 as ε →0, and exhibit a sequence of transitions, with vortex lines appearing one by one as the intensity of the applied magnetic field is increased: passing Hc1 there is one vortex, then increasing Hc1 by an increment of order log | log ε| a second vortex line appears, etc. These vortex lines accumulate near a special curve Γ0, solution to an isoflux problem. We derive a next order energy that the vortex lines must minimize in the asymptotic limit, after a suitable horizontal blow-up around Γ0. This energy is the sum of terms where penalizations of the length of the lines, logarithmic repulsion between the lines and magnetic confinement near Γ0 compete. This elucidates the shape of vortex lines in superconductors. Keywords: Ginzburg–Landau, vortices, vortex filaments, first critical field, phase transitions, Abelian Higgs model, vortex interaction MSC: 35Q56, 82D55, 35J50, 49K10. 1. Introduction This work is the conclusion of our study of the emergence of vortex lines in the three- dimensional full Ginzburg–Landau model from physics, i.e. the model with gauge and with external magnetic field. The Ginzburg–Landau model is important as the simplest gauge theory, where the U(1)-gauge is Abelian (it is also known as the Abelian Higgs model) and where topological defects in the form of vortices arise. It is also one of the most famous models of condensed matter physics, the widely used and studied model for superconductivity, and also (Carlos Rom´an) Facultad de Matem´aticas e Instituto de Ingenier´ıa Matem´atica y Computa- cional, Pontificia Universidad Cat´olica de Chile, Vicu˜na Mackenna 4860, 7820436 Macul, San- tiago, Chile (Etienne Sandier) LAMA - CNRS UMR 8050, Universit´e Paris-Est Cr´eteil, 61 Avenue du G´en´eral de Gaulle, 94010 Cr´eteil, France (Sylvia Serfaty) Courant Institute of Mathematical Sciences, New York University, 251 Mer- cer St., New York, NY 10012, United States, and Sorbonne Universit´e, CNRS, Universit´e de Paris, Laboratoire Jacques-Louis Lions (LJLL), F-75005 Paris E-mail addresses: carlos.roman@uc.cl, sandier@u-pec.fr, serfaty@cims.nyu.edu. Date: October 17, 2025. 1 arXiv:2510.14910v1 [math.AP] 16 Oct 2025 2 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY very similar to models for superfluids and Bose–Einstein condensates [SSTS69, DG99, Tin96, TT90]. In the two-dimensional version of the model, vortices are point-like topological defects, arising when the applied magnetic field is large enough, as superconductivity defects in which the magnetic flux can penetrate. Vortices interact logarithmically, the magnetic field acting as an effective confinement potential. As a result of this competition between repulsion and confinement, in energy minimizers vortices form very interesting patterns, including a famous triangular lattice pattern called in physics Abrikosov lattice. The program carried out in particular by the last two authors, see [SS07], culminating with [SS12], was to mathematically analyze the formation of these vortices and derive effective interaction energies that the limiting vortex pattern must minimize in a certain asymptotic limit, thus relating the minimization of the Ginzburg–Landau energy to discrete minimization problems, some of them of number theoretic nature. Our main goal here was to accomplish the same in three dimensions, deriving effective inter- action energies for vortex lines in three dimensions in order to precisely understand and describe the vortex patterns in superconductors. This is significantly more delicate in three dimensions than in two, since vortex lines carry much more geometry than vortex points. In particular, curvature effects and regularity questions, requiring the use of fine geometric measure theoretic tools coupled with differential forms, appear. In the three dimensional situation, the energetic cost of a vortex is the balance between its length cost, the logarithmic interaction with other vortices, and the confinement effect of the magnetic field. While the length cost effect had been analyzed in a simplified setting in the mathematical literature [Riv95, LR01, San01, BBO01], the logarithmic repulsion effect analyzed, again in a simplified setting, more recently in [CJ17], our paper is the first to handle all three effects at the same time in the completely realistic physical setting of the full gauged energy. In particular, it settles questions raised since the turn of the century (for instance [Riv95, AR01]) about whether vortices in three dimensional superconductors will be asymptotically straight or curved. 1.1. Description of the model. Let us now get into the details of the Ginzburg–Landau model, whose physical background can be found in the standard texts [SSTS69,DG99,Tin96]. After nondimensionalization of the physical constants, one may reduce to studying the energy functional (1.1) GLε(u, A) := 1 2 Z Ω |∇Au|2 + 1 2ε2(1 −|u|2)2 + 1 2 Z R3 |H −Hex|2. Here Ωrepresents the material sample, we assume it to be a bounded simply connected subset of R3 with regular boundary. The function u : Ω→C is the order parameter, representing the local state of the material in this macroscopic theory (|u|2 ≤1 indicates the local density of superconducting electrons), while the vector-field A : R3 →R3 is the gauge of the magnetic field, and the magnetic field induced inside the sample and outside is H := ∇× A, as is standard in electromagnetism. The covariant derivative ∇A means ∇−iA. The vector field Hex here represents an applied magnetic field and we will assume that Hex = hexH0,ex where VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 3 H0,ex is a fixed vector field and hex is a real parameter, representing an intensity that can be tuned. Finally, the parameter ε > 0 is the inverse of the so-called Ginzburg–Landau parameter κ, a dimensionless ratio of all material constants, that depends only on the type of material. In our mathematical analysis of the model, we will study the asymptotics of ε →0, also called “London limit” in physics, which corresponds to extreme type-II superconductors (type- II superconductors are those with large κ). This is the limiting where the correlation length is much smaller than the penetration depth of the magnetic field, effectively this means that vortex cores are very small. The Ginzburg–Landau theory is an effective Landau theory, describing the local state at the mesoscale level by the order parameter u, but it can be formally derived as a limit of the microscopic quantum Bardeen–Cooper–Schrieffer theory [BCS57] near the critical temperature. This has been partially accomplished rigorously in [FHSS12]. The Ginzburg–Landau model is a U(1)-gauge theory, in which all the meaningful physical quantities are invariant under the gauge transformations u →ueiΦ, A →A + ∇Φ where Φ is any regular enough real-valued function. The Ginzburg–Landau energy and its associated free energy (1.2) Fε(u, A) := 1 2 Z Ω |∇Au|2 + 1 2ε2(1 −|u|2)2 + 1 2 Z R3 |H|2 are gauge-invariant, as well as the density of superconducting Cooper pairs |u|2, the induced magnetic field H, and the vorticity defined below. Throughout this paper, we assume that Hex ∈L2 loc(R3, R3) is such that div Hex = 0 in R3. Consequently, there exists a vector potential Aex ∈H1 loc(R3, R3) such that curl Aex = Hex and div Aex = 0 in R3. The natural space for minimizing GLε in 3D is H1(Ω, C) × [Aex + Hcurl] where Hcurl := {A ∈H1 loc(R3, R3)| curl A ∈L2(R3, R3)}; see [Rom19a]. Critical points (u, A) of GLε in this space satisfy the Ginzburg–Landau equations (1.3)          −(∇A)2u = 1 ε2u(1 −|u|2) in Ω curl(H −Hex) = (iu, ∇Au)χΩ in R3 ∇Au · ν = 0 on ∂Ω [H −Hex] × ν = 0 on ∂Ω, where χΩis the characteristic function of Ω, [ · ] denotes the jump across ∂Ω, ν is the outer unit normal to the boundary, ∇Au · ν = P3 j=1(∂ju −iAju)νj, and the covariant Laplacian (∇A)2 is defined by (∇A)2u = (div −iA·)∇Au. We also note that rotating superfluids and rotating Bose–Einstein condensates can be de- scribed through a very similar Gross–Pitaevskii model, which no longer contains the gauge A, and where the applied field Hex is replaced by a rotation vector whose intensity can be tuned. In the regime of low enough rotation these models can be treated with the same techniques as 4 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY those developed for Ginzburg–Landau, see [Ser01, AJ03, Aft06, BJOS13, TT90] and references therein. Type-II superconductors are known to exhibit several phase transitions as a function of the intensity of the applied field. The one we focus on is the onset of vortex-lines. Mathemati- cally, these are zeroes of the complex-valued order parameter function u around which u has a nontrivial winding number or degree (the rotation number of its phase). More precisely, it is established that there exists a first critical field Hc1 of order | log ε|, such that if the intensity of the applied field hex is below Hc1 then the material is superconducting and |u| is roughly constant equal to 1, while when the intensity exceeds Hc1, vortex filaments appear in the sam- ple. The order parameter u vanishes at the core of each vortex tube and has a nonzero winding number around it. Rigorously, this was first derived by Alama–Bronsard–Montero [ABM06] in the setting of a ball. Baldo–Jerrard–Orlandi–Soner [BJOS12,BJOS13] derived a mean-field model for many vortices and the main order of the first critical field in the general case, and [Rom19a] gave a more precise expansion of Hc1 in the general case, and moreover proved that global minimizers have no vortices below Hc1, while they do above this value. One may also point out the paper [JMS04] that constructs locally minimizing solutions with vortices. In these papers, the occurrence of the first vortex line(s) and its precise location in the sample is connected to what we named an isoflux problem, which we studied for its own sake in [RSS23] and which is described below. Moreover, we showed in that paper that if the intensity of the magnetic field hex does not exceed Hc1 by more than K log | log ε|, then the vorticity remains bounded independently of ε, i.e. informally the total length of the curves remains bounded, and we expect only a finite number of curves. From there, it is however quite difficult to extract the optimal number of curves or the individual curve behavior. In particular the coercivity of the energy with respect to the curve locations is quite delicate to identify, as we will see. On the other hand, as mentioned above, the two-dimensional version of the gauged Ginzburg– Landau model, in which vortices are essentially points instead of lines, was studied in details in the mathematics literature. In particular, in [Ser99a, Ser99c, SS00, SS03] (see [SS07] for a recap) the first critical field was precisely computed, and it was shown that under some nondegeneracy condition the vortices appear one by one near a distinguished point p of the bounded domain, as the magnetic field is increased: at Hc1 one vortex appears, then when the external field is increased by an order log | log ε| a second vortex appears, then when the external field is increased by an additional order log | log ε| a third vortex appears, etc. Moreover, the vortices were shown to minimize, in the limit ε →0 and after a suitable rescaling around p, an effective“renormalized” interaction energy (thus called by analogy with the renormalized energy W of [BBH94]) of the form − X i̸=j log |xi −xj| + N N X i=1 Q(xi), where Q is a positive quadratic function, which results from the logarithmic vortex repulsion and the magnetic confinement effect, see in particular [SS07, Chapter 9]. Specific mathematical VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 5 techniques for analyzing vortices in the two-dimensional model had been first developed for the simplified model without magnetic field, which is obtained by setting A ≡0 and H ≡0 in the two-dimensional version of (1.2), in particular in [BBH94,San98,Jer99,JS02] and then extended to the situation with gauge in [BR95,Ser99a,Ser99c,SS00,SS03]. As alluded to above, vortices in the context of the same simplified model without magnetic field but in three dimensions were also analyzed in the mathematical literature in [Riv95,LR99, LR01,San01,BBO01]. These works demonstrated that, in the absence of magnetic field effects, vortex lines Γ carry a leading order energy proportional to their length, π|Γ|| log ε|, while their interaction effect is a lower order effect (of order 1). Thus, in order to minimize the energy, vortices (which in that setting only occur because of an imposed Dirichlet boundary condition) should be straight lines, and their interaction becomes a negligible effect in the ε →0 limit. It was thus not clear whether magnetic effects could suffice to curve the vortices. A formal derivation was attempted in [AR01] in the context of Bose–Einstein condensates, proposing an effective energy where length effects and interaction effects compete. More recently, in [CJ17], the authors found a setting where the length and the interaction compete at next order: they study the same simplified Ginzburg–Landau model without gauge in a cylindrical domain, choose the Dirichlet boundary condition to have a degree N vortex at one point (say the North pole) of the boundary and a degree −N at another point (say the South pole). The energy minimizers must then have N vortex filaments which connect the two poles, moreover to minimize the leading order energy these should all be nearly parallel and close to vertical straight lines. Since the vortices repel each other logarithmically, these lines curve a little bit when leaving the poles, in order to separate by an optimal distance shown to be 1/ p | log ε|. When rescaling horizontally at that lengthscale, one sees well separated vortex lines with competition between the linearization of the length and the logarithmic interaction. The authors are able to extract an effective limiting energy (1.4) π Z L 0 N X i=1 1 2|u′ i(z)|2 − X i̸=j log |ui(z) −uj(z)|dz, where z : (0, L) 7→(ui(z), z) represent the rescaled curves. The critical points of this energy happen to solve a “Toda lattice” ODE system. In [DDPMR22], solutions of the Ginzburg– Landau equations (without gauge) and with vortex helices which are critical points of (1.4) were constructed. This setting is however, a little bit artificial due to this particular boundary condition. In addition, in all the problems without magnetic gauge, the number of vortex lines ends up auto- matically bounded as a result of enforcing a Dirichlet boundary condition with finite vorticity. This significantly simplifies the analysis, and as in the two-dimensional case, we need to deal with a more realistic situation where the number of vortices may a priori be unbounded, for this we rely on [RSS23], itself relying on [Rom19b]. What we do here is derive the first interaction energy where length, interaction and magnetic effects compete, in the setting of the full physical model with gauge. In addition, we do not 6 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY restrict to geometries where the filaments are almost straight. We consider general geometries and magnetic fields that can lead to the optimal location of the first vortex as hex passes Hc1 – the solution to the isoflux problem – to be a curved line, called Γ0. The next vortices ob- tained by increasing slightly hex will be nearly parallel to Γ0, hence to each other, leading to a curved version of the situation of [CJ17]. However, we have to work in coordinates aligned with Γ0, which turns out to be equivalent to studying Ginzburg–Landau functionals on a man- ifold. In addition, technical difficulties will arise near the boundary, at the endpoints of the vortex filaments, where these may diverge away from Γ0 to meet the boundary orthogonally, in contrast with the almost parallel setup of [CJ17] where all the curves meet at their (fixed) endpoints. The problem thus combines all the possible technical difficulties of vortex analysis in three dimensions, in particular, dealing with a priori unbounded numbers of vortices, dealing with estimates on manifolds, and dealing with nonflat boundaries. We will make intensive use of the toolbox assembled in [Rom19b] for energy bounds in three dimensions, as well as vari- ous methods for obtaining two-dimensional estimates [SS07,AB98], to which we are eventually able to reduce modulo appropriate slicing. We also provide a completely new upper bound construction, based on the Biot-Savart law, approximating the optimal Ginzburg–Landau en- ergy for configurations with vorticity carried by prescribed curves. This is the first upper bound construction applicable to general curves, to be compared with prior constructions in [ABO05,MSZ04,CJ17] which all involve almost straight vortex lines. We will give further detail on the proof techniques after the statement of the main theorem. Before stating the main theorem, we need to introduce various notions and notation. 1.2. Energy splitting. The Ginzburg–Landau model admits a unique state, modulo gauge transformations, that we will call “Meissner state” in reference to the Meissner effect in physics, i.e. the complete repulsion of the magnetic field by the superconductor when the superconduct- ing density saturates at |u| = 1 with no vortices. It is obtained by minimizing GLε(u, A) under the constraint |u| = 1, so that in particular it is independent of ε. In the gauge where div A = 0, this state is of the form (eihexϕ0, hexA0), where ϕ0, A0 depend only on Ωand H0,ex, and was first identified in [Rom19a]. It is not a true critical point of (1.1) (or true solution of the associated Euler–Lagrange equations (1.3), but is a good approximation of one as ε →0. The energy of this state is easily seen to be proportional to h2 ex, we write GLε(eihexϕ0, hexA0) =: h2 exJ0. Closely related is a magnetic field B0 constructed in [Rom19a], whose definition will be recalled later. The superconducting current of a pair (u, A) ∈H1(Ω, C)×H1(Ω, R3) is defined as the 1-form j(u, A) = (iu, dAu) = 3 X k=1 (iu, ∂ku −iAku)dxk VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 7 and the gauge-invariant vorticity µ(u, A) of a configuration (u, A) as µ(u, A) = dj(u, A) + dA. Thus µ(u, A) is an exact 2-form in Ω. It can also be seen as a 1-dimensional current, which is defined through its action on 1-forms by the relation µ(u, A)(ϕ) = Z Ω µ(u, A) ∧ϕ. The vector field corresponding to µ(u, A) (i.e. the J(u, A) such that µ(u, A)∧ϕ = ϕ(J(u, A)) dV where dV is the Euclidean volume form, is at the same time a gauge-invariant analogue of twice the Jacobian determinant, see for instance [JS02], and a three-dimensional analogue of the gauge-invariant vorticity of [SS07]. The vorticity µ(u, A) is concentrated in the vortices and, in the limit ε →0, it is exactly supported on the limit vortex lines. We now recall the algebraic splitting of the Ginzburg–Landau energy from [Rom19a], which allows to follow the roadmap of [SS07] in three dimensions: for any (u, A), letting u = e−ihexϕ0u and A = A −hexA0, we have GLε(u, A) = h2 exJ0 + Fε(u, A) −hex Z Ω µ(u, A) ∧B0 + o(1), This formula allows, up to a small error, to exactly separate the energy of the Meissner state h2 exJ0, the positive free energy cost Fε and the magnetic gain −hex R µ(u, A) ∧B0 which corresponds to the value of the magnetic flux of B0 through the vortex or rather the loop formed by the vortex line on the one hand, and any curve lying on ∂Ωthat allows to close it, see Figure 1. Figure 1. Vortex filament with boundary closure. 8 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY The first critical field is reached when competitors with vortices have an energy strictly less than that of the Meissner state, h2 exJ0, that is when the magnetic gain beats the free energy cost. Approximating the energy cost of a curve Γ by π|Γ|| log ε| leads to the following isoflux problem. 1.3. The isoflux problem and nondegeneracy assumption. The isoflux problem charac- terizes the curves that maximize the magnetic flux for a given length (hence the name isoflux, by analogy with isoperimetric), providing the critical value of hex. Given a domain Ω⊂R3, we let N be the space of normal 1-currents supported in Ω, with boundary supported on ∂Ω. We always denote by |·| the mass of a current. Recall that normal currents are currents with finite mass whose boundaries have finite mass as well. We also let X denote the class of currents in N which are simple oriented Lipschitz curves. An element of X must either be a loop contained in Ωor have its two endpoints on ∂Ω. Given σ ∈(0, 1], we let C0,σ T (Ω) denote the space of vector fields B ∈C0,σ(Ω) such that B × ⃗ν = 0 on ∂Ω, where hereafter ⃗ν is the outer unit normal to ∂Ω. The symbol ∗will denote its dual space. Such a B may also be interpreted as 2-form, we will not distinguish the two in our notation. For any vector field B ∈C0,1 T (Ω, R3) and any Γ ∈N, we denote by ⟨B, Γ⟩the value of Γ applied to B, which corresponds to the circulation of the vector field B on Γ when Γ is a curve. We also let (1.5) ∥Γ∥∗:= sup ∥B∥C0,1 T (Ω,R3)≤1 ⟨B, Γ⟩ be the dual norm to the norm in C0,1 T (Ω, R3). Definition 1.1 (Isoflux problem). The isoflux problem relative to Ωand a vector field B0 ∈ C0,1 T (Ω, R3), is the question of maximizing over N the ratio (1.6) R(Γ) := ⟨B0, Γ⟩ |Γ| . In [RSS23, Theorem 1], we proved that the maximum is achieved, and under the additional condition sup Cloops R < sup N R, where Cloops denotes the space of closed oriented Lipschitz curves (that is, loops) supported in Ω, then the supremum of the ratio R over N is attained by an element of X which is not a loop, and hence has two endpoints in ∂Ω. We will denote it Γ0. A vortex line is thus seen to be favorable if and only if hex ≥H0 c1 where (1.7) H0 c1 := | log ε| 2 R0 , VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 9 and (1.8) R0 := sup Γ∈X R(Γ) = R(Γ0). We refer the interested reader to the recent article [DVR25], in which a weighted variation of the isoflux problem is derived within the framework of a pinned version of the 3D magnetic Ginzburg–Landau model. To go further, we need to define tube coordinates around the optimal curve Γ0, assumed to be smooth and meeting the boundary of Ωorthogonally at its two endpoints. These coordinates, whose existence we prove in Proposition 2.2, are defined in a δ-tube around Γ0. In this system of coordinates, Γ0(z) is mapped to (0, 0, z) and, if we denote by gij the coefficients of the Euclidean metric in these coordinates, then we have g13 = g23 = 0 and g33(0, 0, z) = 1 for z ∈[0, L0], where, throughout the article, L0 denotes the length of Γ0. We will denote by ⃗u = (x, y) the two first coordinates and by z the third coordinate. Then g(⃗u, z) will denote the Euclidean metric in these coordinates, and we define g• to be the metric along the z-axis, that is, g(0, 0, z). Definition 1.2 (Strong nondegeneracy). We say that Γ0 is a nondegenerate maximizer for the ratio R if it maximizes R and if the quadratic form Q(⃗u) := −d2 dt2 |t=0 R(Γt) with Γt(z) := Γ0(z) + t⃗u(z), is positive definite over the Sobolev space H1((0, L0), R2). We then let αQ = sup ⃗u∈H1 ∥⃗u∥H1≤1 Q(⃗u). An explicit expression of Q in terms of B0, Γ0 and Ωis given in Lemma 3.2. We will show in Section 3.4 that this strong nondegeneracy condition holds at least for small enough balls. We may then define the renormalized energy of a family of curves Γ∗ i (z) = Γ0(z) + ⃗u∗ i (z), for 1 ≤i ≤N, by (1.9) WN(Γ∗ 1, . . . , Γ∗ N) = πL0N N X i=1 Q(⃗u∗ i ) −π Z L0 0 X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g•, where | · |g• denotes the norm as measured in the g• metric. 1.4. Main theorem. The next theorem shows that there exists a sequence of transitions at values of hex that we now define. Definition 1.3. Given an integer N ≥1, we define the N-th critical value by HN := 1 2 R0  | log ε| + (N −1) log | log ε| 2 R0 + kN  , 10 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY where kN = (N −1) log 1 N + N 2 −3N + 2 2 log N −1 N + 1 πL0  min WN −min WN−1+γL0+(2N −1)CΩ  , CΩis a constant depending only on Ωand H0,ex and defined in (2.8), and γ is a universal constant first introduced in [BBH94]. In particular H1 will coincide (up to o(1)) with Hc1, defined as the first critical field above which the energy of a configuration with a vortex line becomes strictly smaller than that of the Meissner state. Then H2 is the critical field above which the energy of a configuration with two vortex lines becomes strictly smaller than that with one, etc. The main theorem shows these transitions, and proves that WN is the effective interaction energy of the (finite number of) vortex curves. Theorem 1.1. Assume that the smooth simple curve Γ0 is a unique nondegenerate maximizer (in the sense of Definition 1.2) of the ratio R. There exists cε →0 as ε →0 such that the following holds. Assume that hex ∈(HN −cε, HN + cε) with N ≥1 independent of ε. Let (u, A) be a minimizer (depending on ε) of GLε in H1(Ω, C) × [Aex + Hcurl] and (u, A) = (e−ihexϕ0u, A −hexA0) as above. Then for any sequence {ε} tending to 0, there exists a subse- quence such that, for ε small enough (depending on N), letting µ∗ ε be the pull-back of µ(u, A) under the horizontal rescaling map defined in the tube coordinates described above by (x, y, z) 7→ r N hex x, r N hex y, z ! , then lim ε→0 µ∗ ε 2π − N X i=1 Γ∗ i ∗ = 0, where Γ∗ i (z) = Γ0(z) + ⃗u∗ i (z) are H1 graphs that minimize WN(Γ∗ 1, . . . , Γ∗ N) as defined in (1.9). Moreover, as ε →0, defining K to be such that (1.10) hex = H0 c1 + K log | log ε|, we have (1.11) GLε(u, A) = h2 exJ0 + π 2 L0N(N −1) log hex −2πK R0 L0N log | log ε| −π 2 L0N(N −1) log N + min WN + γNL0 + N 2CΩ+ o(1), where CΩis the constant defined in (2.8) and γ is the universal constant of [BBH94]. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 11 Remark 1.1. We really show in the course of the proof that for hex as in (1.10), the functional GLε −  h2 exJ0 + π 2 L0N(N −1) log hex −2πK R0 L0N log | log ε| −π 2 L0N(N −1) log N + γNL0 + N 2CΩ  Γ-converges to WN. This theorem shows a sequence of phase transitions for intensities of the applied magnetic field equal to H1, H2, etc, at which vortex lines appear one by one, near the optimal curve Γ0, and at a distance to Γ0 of order | log ε|−1/2. In particular, it shows the first expansion up to o(1) of the value of the first critical field: Hc1 = 1 2 R0  | log ε| + γL0 + CΩ πL0  + oε(1), refining that of [Rom19a]. The theorem also provides an asymptotic expansion up to order o(1) of the minimal energy in the regime hex ≤Hc1 + O(log | log ε|), identifying the coefficients in factor of the leading order term in h2 ex, then the subleading order terms in log | log ε|, and finally the sub-subleading order terms of order 1, which really contains the most detailed and interesting information about the vortices, showing that the curves congregate near Γ0 and that one needs to zoom in “horizontally” by a factor q hex N near Γ0 to see well separated curves, which converge to a minimizer of WN in the limit. This is all consistent with the two-dimensional picture obtained in [Ser99a,SS03,SS07]. One may in particular compare the formulas to those in [SS07, Chapter 12]. To our knowledge, no result and description so precise can be found in the physics literature. As a result, if Γ0 is straight, which is the case for instance in a rotationally symmetric geometry with a rotationally symmetric H0,ex, then the vortex curves will be nearly parallel, however they will curve at next-to-leading order, especially as they get closer to the boundary at their endpoints, where they will tend to separate in order to meet the boundary orthogonally to minimize length – in contrast with the setting of [CJ17] where the curves meet at the poles; see Figure 2. This can be compared to simulations done in rotating Bose–Einstein condensates (where the geometry is naturally rotationally symmetric), in particular the so-called “S-shaped states” and “U-shaped states”, see [AJ03, AR01], physics papers [RASV+01, RBD02, Bra02], and numerics [AD03]. If the domain and the applied field are such that Γ0 is curved, then all vortex lines will also be curved, again with next-to-leading order deviations from Γ0 which can in principle be estimated via WN. 1.5. Proof approach. As usual with Γ-convergence, the proof is based on establishing a gen- eral lower bound for the energy, and a matching upper bound obtained via a construction. The main difficulty in proving the lower bound is that in order to extract the interaction energy, which is a (next-to) next-to-leading order contribution, the energy needs to be es- timated up to a o(1) error as ε →0, while the leading order is | log ε|. In contrast, all 12 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Γ0 ≈ 1 √ | log ε| Γ0 ≈ 1 √ | log ε| Figure 2. 2D and 3D views of 4 vortex lines. the prior three-dimensional studies of vortex filaments in Ginzburg–Landau energies, that is [Riv95,LR99,LR01,San01,BBO01], had an error or a precision of o(| log ε|), with the exception of [CJ17] in the very special setup described above. To estimate the energetic cost Fε of vortex lines, we crucially rely on the approach of [Rom19b] which is the only one with the advantage of being ε-quantitative and robust in the number of curves, i.e. allowing it to possibly blow up when ε →0. It proceeds by approximating the vorticity measure µ(uε, Aε) by 2π times a polyhedral 1-dimensional integer-multiplicity current (in other words, a piecewise straight line), and bounding the energy Fε from below by the mass of the current (in other words, the length of the polyhedral curve) times | log ε|. That approach still involves an error at least O(log | log ε|), which at first seems to forbid any hope of a o(1) error only. On the other hand, one may obtain quite precise lower bounds by slicing perpendicularly to Γ0 and using two-dimensional techniques. However, integrating out the energy over slices misses a part of the energy: the length part in the directions parallel to the slices, corresponding to the energy term 1 2 R |∂zu|2 in Fε, where z is the coordinate along Γ0. To recover this energy, we use the following trick, which is similar in spirit to [CJ17] but whose implementation somewhat differs, which allows to combine the two ways of obtaining lower bounds: we define (roughly) F ⊥ ε as the part of the energy that contains only derivatives of u tangential to the slices, ˜Fε as the VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 13 energy obtained after a applying a blow-up around Γ0 of a factor 1/ℓ≫1 in the “horizontal” direction perpendicular to Γ0 and no blow-up in the z-direction parallel to Γ0. The lengthscale ℓis a chosen small enough so that the vortex lines essentially (up to small excursions) remain in an ℓ-neighborhood of Γ0. A change of variables allows to write that Fε ≥(1 −ℓ2)F ⊥ ε + ℓ2 ˜Fε. The part ˜Fε contains the missing R |∂zu|2 component. It can be bounded below by using the three-dimensional lower bounds of [Rom19b], which yields O(log | log ε|) errors. The crucial point is that, once multiplied by ℓ2 which is small, these errors become o(1). The part F ⊥ ε is bounded below by slicing along (curved) slices perpendicular to Γ0, and integrating up two-dimensional lower bounds obtained by the ball construction method of [SS07]. These lower bounds are expressed in terms of two-dimensional effective vortices (the points which are the centers of the final balls), which a priori may in some slices be quite different from the trace along the slice of the polyhedral approximation – so a difficulty in the whole proof is to reconcile the various vorticity approximations. Another difficulty is that we have to use slices that are adapted to the boundary, as one approaches the endpoints of Γ0, and we may lose information there. In contrast to the three-dimensional bounds `a la [Rom19b], these bounds already contain some part of the vortex interaction (i.e. the repulsion effect). This way all the leading and subleading energy is recovered. When combining with the energy upper bound obtained by our new precise construction, and performing a quadratic expansion of the magnetic term in the energy, we find a posteriori that the curves must be confined very near Γ0, at distance of order 1/√hex or 1/ p | log ε|. It is only at this stage, when combining all the components to the energy, that compactness for the curves (which is a subsubleading order effect as well) can be retrieved, yielding the existence of limiting curves after blow-up. At this stage, one may finally use even more refined two-dimensional lower bounds `a la [BBH94] – more precisely we will do it by following [AB98, Ser99a] and [IJ21] for the curved aspects – which contain the exact logarithmic repulsion and can be made so precise to involve only an o(1) error, provided the energy in each slice is bounded above by O(| log ε|). This is not known a priori but comes as a consequence of the analysis of [RSS23] which shows that, for the regime of hex that we consider (hex ≤H0 c1 + O(log | log ε|)), the energy Fε does remain bounded by O(| log ε|) and the vorticity remains bounded in length. Plugging into the prior estimates yields the energy lower bound up to o(1), the optimal separation from Γ0, and the fact that the limiting curves must minimize WN. The energy upper bound is interesting in itself, it involves an explicit construction relying on the Biot–Savart law, where again the difficulty lies in obtaining o(1) precision on the energy. 1.6. Plan of the paper. The paper starts in Section 2 with preliminaries on the splitting formula, the construction of a superconducting current and gauge field from the Biot–Savart law associated to a curve, energy lower bounds from [Rom19b] and needed results from [RSS23]. It then describes the tubes coordinates, the rewriting of the energy, and the horizontal rescaling. 14 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Section 3 gathers important preliminaries on the isoflux problem and its quadratic expansion. It proves the nondegeneracy condition for graphs, then coercivity, then the fact that strong nondegeneracy implies weak nondegeneracy. The section concludes by showing that the strong nondegeneracy condition holds at least for small enough balls. In Section 4, we prove two types of energy lower bounds by horizontal slicing: one by vortex balls construction, and a more precise one, recovering the constant order term, under a very strong upper bound assumption, by applying two-dimensional lower bound techniques `a la [BBH94,AB98,Ser99a]. Section 5 is the core of the proof, it assembles the prior elements to prove the main lower bound. Finally, Section 6 is devoted to the upper bound construction. Acknowledgements: C.R. was supported by ANID FONDECYT 1231593. He thanks the Courant Institute of Mathematical Sciences and Universit´e Paris-Est Cr´eteil for their support and kind hospitality during the completion of part of this work. S. S. was supported by NSF grant DMS-2000205, DMS-2247846, and by the Simons Foundation through the Simons Investigator program. This work was also supported by a Labex Bezout funded invitation to Universit´e Paris-Est Cr´eteil of the third author. 2. Preliminaries 2.1. Vector fields, forms, currents and notation. We introduce certain concepts and no- tation from the theory of currents and differential forms. In Euclidean spaces, vector fields can be identified with 1-forms. In particular, a vector field F = (F1, F2, F3) can be identified with the 1-form F1dx1 + F2dx2 + F3dx3. We use the same notation for both the vector field and the 1-form. Given α ∈(0, 1], we let C0,α T (Ω) denote the space of vector fields B ∈C0,α(Ω) such that B × ⃗ν = 0 on ∂Ω, where hereafter ⃗ν is the outer unit normal to ∂Ω. Such a B may also be interpreted as 2-form, we will not distinguish the two in our notation. It is worth recalling that the boundary of a 1-current T relative to a set Θ is a 0-current ∂T, and that ∂T = 0 relative to Θ if T(dϕ) = 0 for all 0-forms ϕ with compact support in Θ. In particular, an integration by parts shows that the 1-dimensional current µ(u, A) has zero boundary relative to Ω. We let Dk(Θ) be the space of smooth k-forms with compact support in Θ. For a k-current T in Θ, we define its mass by |T|(Θ) := sup  T(ϕ) | ϕ ∈Dk(Θ), ∥ϕ∥L∞≤1 and by (2.1) ∥T∥F(Θ) := sup  T(ϕ) | ϕ ∈Dk(Θ), max{∥ϕ∥L∞, ∥dϕ∥L∞} ≤1 its flat norm. Remark 2.1. For 0-currents, the flat and (C0,1 0 )∗norms coincide, whereas for k-currents the former is stronger than the latter. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 15 It is not completely obvious from the definitions (1.5) and (2.1) that ∥Γ∥∗≤C∥Γ∥F(Ω), since ∥Γ∥∗involves testing with vector fields that are not necessarily compactly supported in Ω. Nevertheless it is true if |Γ| is assumed to be bounded, because if ∥X∥L∞, ∥∇X∥L∞≤1, then we may consider Xn(·) = (1 −ndist(·, ∂Ω))+X(·), and we have ⟨Xn, Γ⟩→⟨X, Γ⟩as n →+∞ while ∥curl X∥L∞remains bounded independently of n if X is normal to ∂Ω. 2.2. Reference magnetic field and splitting formula. We may now recall the definition of the magnetic field B0 that we will work with, constructed in [Rom19a]. It appears in the Hodge decomposition of A0 in Ω, where (eihexϕ0, hexA0) is the Meissner state, as A0 = curl B0 + ∇ϕ0, supplemented with the conditions div B0 = 0, and B0 × ⃗ν = 0 on ∂Ω. Moreover, it is such that Z Ω (−∆B0 + B0 −H0,ex) · A = 0, for any divergence-free A ∈C∞ 0 (Ω, R3). Also, we recall that ϕ0 is supplemented with the conditions R Ωϕ0 = 0 and ∇ϕ0 · ⃗ν = A0 · ⃗ν on ∂Ω. We now recall the precise algebraic splitting of the Ginzburg–Landau energy from [Rom19a]. Proposition A. For any sufficiently integrable (u, A), letting u = e−ihexϕ0u and A = A−hexA0, where (eihexϕ0, hexA0) is the approximate Meissner state, we have (2.2) GLε(u, A) = h2 exJ0 + Fε(u, A) −hex Z Ω µ(u, A) ∧B0 + r0, where Fε(u, A) is as in (1.2) and r0 = h2 ex 2 Z Ω (|u|2 −1)| curl B0|2. In particular, |r0| ≤Cεh2 exFε(|u|, 0) 1 2. 2.3. Biot-Savart vector fields and a new constant. Definition 2.1. The Biot–Savart vector field associated to a smooth simple closed curve in R3 is (2.3) XΓ(p) = 1 2 Z t Γ(t) −p |Γ(t) −p|3 × Γ′(t) dt. It is divergence free, satisfies curl XΓ = 2πΓ, and belongs to Lp loc for any 1 ≤p < 2. Moreover, denoting pΓ the nearest point to p on Γ, and U a bounded neighborhood of Γ on which this projection is well defined, the difference (2.4) XΓ(p) −pΓ −p |pΓ −p|2 × Γ′(pΓ) is in Lq(U), for any q ≥1. The approximation (2.4) is classical (see for instance [AKO07]) and may be derived from (2.3). The difference is in fact O(log |p −pΓ|). 16 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY In the next proposition, to any nice curve Γ, we associate via XΓ a current and magnetic gauge pair. This will be in particular useful for the upper bound construction. Proposition 2.1. Assume Ω⊂R3 is a smooth and bounded domain. Assume Γ is a smooth simple closed curve in R3 which intersects ∂Ωtransversally. Then there exists a unique divergence free jΓ : Ω→R3, belonging to Lp for any p < 2, and a unique divergence free AΓ ∈H1(R3, R3) such that, curl(jΓ+AΓ) = 2πΓ in Ω, such that ν·jΓ = 0 on ∂Ω, and such that the following equation is satisfied in the sense of distributions in R3: ∆AΓ + jΓ1Ω= 0. In particular, jΓ and AΓ only depend on Γ ∩Ω. It also holds that AΓ ∈W 2, 3 2 loc (R3, R3) and that jΓ −XΓ ∈W 1,q(Ω), for any q < 4. Moreover, there exists 1 < p < 2 such that if Γ and Γ′ are two curves as above then, (2.5) Z R3 |∇AΓ −∇AΓ′|2 + Z Ω |jΓ −XΓ −(jΓ′ −XΓ′)|2 ≤C(n(Γ) + n(Γ′)) ∥XΓ −XΓ′∥Lp(∂Ω) + ∥XΓ −XΓ′∥Lp(Ω)  , where n(Γ) = ∥XΓ∥Lp(∂Ω) + ∥XΓ∥Lp(Ω). Proof. Assume Γ is as above. For (f, A) ∈H1(Ω) × H1(R3, R3) let IΓ(f, A) = 1 2 Z R3 |∇A|2 + 1 2 Z Ω |∇f −A|2 + Z ∂Ω fXΓ · ν − Z Ω XΓ · A. When minimizing IΓ (recall that XΓ is divergence-free), we have the freedom of adding a con- stant to f hence we may assume that f has zero average over Ω. Then, we use the embeddings H 1 2(∂Ω) ,→L4 and H1(R3) ,→L6 to find Z ∂Ω |fXΓ · ν| ≤C∥XΓ∥L 3 2 (∂Ω)∥∇f∥L2(Ω), Z Ω XΓ · A ≤C∥XΓ∥L 3 2 (Ω)∥∇A∥L2(R3). It is not difficult then to check that the infimum of IΓ, which is nonpositive, is achieved by some couple (fΓ, AΓ). Moreover, still assuming fΓ to have zero average over Ω, we have (2.6) Z R3 |∇AΓ|2, Z Ω |∇fΓ −AΓ|2, Z Ω |∇fΓ|2 ≤C  ∥XΓ∥L 3 2 (∂Ω) + ∥XΓ∥L 3 2 (Ω)  . We let jΓ = XΓ + ∇fΓ −AΓ. The Euler–Lagrange equations for IΓ can then be written as (2.7)      ∆AΓ + jΓ1Ω= 0 in R3 div(jΓ1Ω) = 0 in R3 curl(jΓ + AΓ) = 2πΓ in Ω. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 17 From (2.6) and the definition of jΓ we deduce a bound for ∥jΓ∥L 3 2 , which in turn implies using the equation for AΓ that AΓ ∈W 2, 3 2 loc and that ∥AΓ∥W 2, 3 2 (Ω) ≤C  ∥XΓ∥L 3 2 (∂Ω) + ∥XΓ∥L 3 2 (Ω)  . Taking the divergence of the first equation in (2.7), we also find that AΓ is divergence-free in R3. Moreover, the function fΓ is harmonic, and its normal derivative on ∂Ωis (XΓ −AΓ) · ν, which belongs to Lp(∂Ω) for any p < 2. Since L2(∂Ω) embeds into W −1 2 ,4(∂Ω), we deduce that fΓ ∈W 1,q(Ω) for any q < 4 (see for instance [Lie13]). Together with the fact that A ∈W 2, 3 2 loc , this implies that jΓ −XΓ belongs to W 1,q(Ω) for any q < 4. We also deduce along these lines the estimate that for some p < 2 ∥fΓ∥L3(∂Ω) ≤C∥XΓ∥Lp(∂Ω). To check uniqueness, assume both (j, A) and (j′, A′) satisfy (2.7) and let B = A −A′ and k = j −j′. Then curl(k + B) = 0 in Ωhence there exists g such that k + B = ∇g. From the equation −∆B = k1Ωintegrated against B we find that ∥B∥2 L2(R3) = ⟨k, B⟩L2(Ω). Since div k = 0 and k · ν = 0 on ∂Ω, it holds that ∇g and k are orthogonal in L2(Ω). Thus we have ⟨k, B⟩L2(Ω) = ⟨k, ∇g −k⟩L2(Ω) = −∥k∥2 L2(Ω). It follows that B and k are equal to 0. To prove the last assertion of the proposition, assume that (j, A) = (jΓ, AΓ) and that (j′, A′) = (jΓ′, AΓ′). Then D(IΓ)(f,A)(f −f ′, A −A′) = 0 and D(IΓ′)(f′,A′)(f −f ′, A −A′) = 0. Taking the difference of the two identities we find that Z R3 |∇(A−A′)|2+ Z Ω |∇(f −f ′)−(A−A′)|2 = Z ∂Ω (f −f ′)(XΓ−XΓ′)·ν− Z Ω (XΓ−XΓ′)·(A−A′). Then we use the estimates for ∥AΓ∥W 2, 3 2 (Ω) and ∥fΓ∥L3(∂Ω) to find that (2.5) holds, noting that j −XΓ = ∇f −A and similarly for j′ −XΓ′. □ We may now define the constant CΩwhich appears in the expansion of the energy of minimiz- ers of the Ginzburg–Landau energy. It is the equivalent of the renormalized energy of [BBH94] for the optimal curve Γ0. Definition 2.2. Assume Ωis a smooth bounded domain such that the maximum of the ratio is achieved at a smooth simple curve Γ0 which can be extended to a smooth simple closed curve in R3, still denoted Γ0. We set AΩ= AΓ0 and jΩ= jΓ0 where (AΓ0, jΓ0) are given in Proposition 2.1. We define (2.8) CΩ= 1 2 Z R3 | curl AΩ|2 + lim ρ→0 1 2 Z Ω\Tρ(Γ0) |jΩ|2 + πL0 log ρ ! , 18 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY where Tρ(Γ0) denotes the tube of radius ρ around Γ0, intersected with Ω. 2.4. ε-level lower bounds. We next recall the ε-level estimates provided in [Rom19b,RSS23]. Theorem A. For any m, n, M > 0 there exist C, ε0 > 0 depending only on m, n, M, and ∂Ω, such that, for any ε < ε0, if (uε, Aε) ∈H1(Ω, C) × H1(Ω, R3) is a configuration such that Fε(uε, Aε) ≤M| log ε|m then there exists a polyhedral 1-dimensional current νε and a measurable set Sνε such that (1) νε/2π is integer multiplicity. (2) ∂νε = 0 relative to Ω, (3) supp(νε) ⊂Sνε ⊂Ωwith |Sνε| ≤C| log ε|−q, where | · | denotes the measure of the set and q(m, n) := 3 2(m + n), (4) Z Sνε |∇Aεuε|2 + 1 2ε2(1 −|uε|2)2 ≥|νε|(Ω)  log 1 ε −C log log 1 ε  − C | log ε|n, (5) for any σ ∈(0, 1] there exists a constant Cσ depending only on σ and ∂Ω, such that ∥µ(uε, Aε) −νε∥C0,σ T (Ω)∗≤Cσ Fε(uε, Aε) + 1 | log ε|σq ; (6) and for any α ∈(0, 1), ∥µ(uε, Aε) −νε∥F(Ωε) ≤C Fε(uε, Aε) + 1 | log ε|αq , where the flat norm was defined in (2.1) and Ωε :=  x ∈Ω| dist(x, ∂Ω) > | log ε|−(q+1) . 2.4.1. Bounded vorticity. We recall that X denotes the class of oriented Lipschitz curves, seen as 1-current with multiplicity 1, which do not self intersect and which are either a loop contained in Ωor have two different endpoints on ∂Ω. We also recall that N be the space of normal 1-currents supported in Ω, with boundary supported on ∂Ω. Condition 2.1 (Weak Nondegeneracy condition). There exists a unique curve Γ0 in X such that R(Γ0) = sup Γ∈N ⟨B0, Γ⟩ |Γ| . Moreover, there exists constants c0, P > 0 depending on Ωand B0 such that (2.9) R(Γ0) −R(Γ) ≥C0 min ∥Γ −Γ0∥P ∗, 1  for every Γ ∈X. We will see in Section 3.3 that the strong nondegeneracy condition of Definition 1.2 implies this one. We now state a result adapted from [RSS23] that, under this weak nondegeneracy condition, locates the vortices of almost minimizers near Γ0. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 19 Theorem B. Assume that Condition 2.1 holds. Then, for any K > 0 and α ∈(0, 1), there exists positive constants ε0, C > 0 depending on Ω, B0, K, and α, such that the following holds. For any ε < ε0 and any hex < H0 c1 + K log | log ε|, if (u, A) is a configuration in H1(Ω, C) × [Aex + Hcurl] such that GLε(u, A) ≤h2 exJ0, then, letting (u, A) = (e−ihexϕ0u, A −hexA0), there exists “good” Lipschitz curves Γ1, . . . , ΓN0 ∈X and eΓ ∈N such that N0 ≤C and (1) for 1 ≤i ≤N0, we have R(Γ0) −R(Γi) ≤C log | log ε| | log ε| ; (2) for 1 ≤i ≤N0, we have ∥Γi −Γ0∥∗≤C log | log ε| | log ε|α  1 P and |Γi| −L0 ≤C log | log ε| | log ε|α  1 P ; (3) eΓ is a sum in the sense of currents of curves in X such that |eΓ| ≤C log | log ε| | log ε|1−α ; (4) we have µ(u, A) −2π N0 X i=1 Γi −2πeΓ (C0,1 T (Ω)) ∗ ≤C| log ε|−2 and µ(u, A) −2π N0 X i=1 Γi −2πeΓ F(Ωε) ≤C| log ε|−2, where Ωε :=  x ∈Ω| dist(x, ∂Ω) > | log ε|−2 . Remark 2.2. In [RSS23, Theorem 3] we did not state the vorticity estimate for the flat norm above. However, it appears from the proof that νε = N0 X i=1 Γi + eΓ and that the vorticity estimate stated in [RSS23, Theorem 3] directly follows from applying Theorem A. A straightforward use of this theorem then also yields the vorticity estimate for the flat norm stated above. 2.5. Tube coordinates, energy rewriting and horizontal rescaling. 20 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY 2.5.1. Tube coordinates. Proposition 2.2. Assume Γ0 : [0, L0] →Ωis a smooth curve parametrized by arclength with endpoints p = Γ0(0) and q = Γ0(L0) on ∂Ωand meeting the boundary orthogonally. Then there exists δ > 0 and coordinates defined in Tδ ∩Ω, where Tδ is the tube of radius δ around Γ0 (see remark below), such that: - In these coordinates, z 7→(0, 0, z) is an arclength parametrization of Γ0. - Denoting by g(x, y, z) the Euclidean metric in these coordinates, we have g13 = g23 = 0 and, from the previous property, g33(0, 0, z) = 1 for z ∈[0, L0]. - These coordinates map a neighborhood of p in ∂Ωinto R2 × {0} and a neighborhood of q into R2 × {L0}. We will denote by Cδ the coordinate patch corresponding to Tδ ∩Ω, and by Φ : Cδ →Tδ ∩Ωthe inverse of the coordinate chart. Remark 2.3. To define Tδ we first need to extend Γ0 a little bit outside Ω. Then the tube really refers to this extended curve, so that Tδ ∩Ωindeed contains a neighborhood in ∂Ωof each of the endpoints of Γ0. Remark 2.4. If Γ0 is a critical point of the ratio R, it is easy to check using the fact that B0 × ν = 0 on ∂Ωthat Γ0 intersects ∂Ωorthogonally. Proof of the proposition. Let s 7→Γ0(s) be a parametrization of Γ0 by arclength on the interval [0, L0]. We define f(x) = s if x = Γ0(s), let f = 0 in a neighborhood of p := Γ0(0) in ∂Ωand let f = L0 in a neighborhood of q := Γ0(L0) in ∂Ω. The function f may be extended smoothly in a neighborhood W of Γ0 in Ωin such a way that the level sets of f meet Γ0 orthogonally and, restricting the neighborhood if necessary, we may assume its gradient does not vanish there. If η > 0 is small enough, then Ση = ¯B(p, η) ∩∂Ωis included in W and diffeomorphic to the disk ¯D(0, η), and we let φ, defined in ¯D(0, η) be such a diffeomorphism. For (x, y, z) ∈C := ¯D(0, η) × [0, L0], we define Φ(x, y, z) to be equal to γ(z), where γ is the integral curve of the vector field ∇f/|∇f|2 originating at φ(x, y), so that f(γ(z))′ = 1, and hence f(Φ(x, y, z)) = z. In particular, z 7→Φ(0, 0, z) is a parametrization of Γ0 by arclength. If η is chosen small enough, then Φ is well-defined, injective and smooth, and its differential is invertible. Moreover C ∩{z = 0} and C ∩{z = L0} are mapped to the boundary of Ω, since f(Φ(x, y, z)) = z. Therefore Φ is a diffeomorphism from C to a neighborhood of Γ0 in Ω. We choose δ > 0 small enough so that Tδ ∩Ω⊂Φ(C ). Then the coordinate system defined by Φ has all the desired properties. The fact that, if g denotes the pull-back of the Euclidean metric by Φ, we have g13 = g23 = 0 follows from the fact that Φ(·, ·, z) is mapped to {f = z}, hence ∂xΦ and ∂yΦ are orthogonal to ∇f, hence to ∂zΦ. □ 2.5.2. Energy in the new coordinates. Then we express the quantities of interest in the new coordinates: g is the metric, pull-back of the Euclidean metric by Φ. Given a configuration (u, A) in Ω, the order parameter transforms as a scalar field and A as a 1-form. The field B0 transforms as a one-form. Keeping the same notation for the quantities in the new coordinates, VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 21 we may define the superconducting current j(u, A) = (iu, du −iAu) as in the original coordi- nates, it is a 1-form, and the vorticity µ(u, A) = dj(u, A) + dA, which is a two-form. The new coordinates are defined in Uδ = Tδ ∩Ω, using the notation of the previous proposition, and we let Cδ = Φ−1(Uδ). We define Fε(u, A, Uδ) = 1 2 Z Uδ eε(u, A), where eε(u, A) = 1 2  |∇Au|2 + 1 2ε2(1 −|u|2)2 + | curl A|2  . In the new coordinates, this becomes Fε(u, A, Uδ) = 1 2 Z Cδ |du −iAu|2 g + 1 2ε2(1 −|u|2)2 + |dA|2 g dvolg, where volg is the volume form relative to the metric g. Given a k-form ω, its norm relative to g is defined by the identity |ω|2 g dvolg = ω ∧∗gω. Alternatively, if e1, e2, e3 is a g-orthonormal frame, |ω|2 g = P ω(ei1, . . . , eik)2, where the sum runs over ordered k-tuples 1 ≤i1 < · · · < ik ≤3. 2.5.3. Vertical and perpendicular parts of the energy. The energy in the new coordinates splits as follows. We let g⊥= g −g33 dz2. We have |dA|2 g = |dA|2 g⊥+ X 1≤i,j≤2 (dA)i3(dA)j3gijg33. Note that, since (gij)1≤i,j≤2 is a positive definite matrix, then so is its inverse matrix and hence the sum above is a nonnegative number. Then we decompose Fε(u, A, Uδ) = F ⊥ ε (u, A) + F z ε (u, A), where, writing Σz for the intersection of Cδ with the horizontal plane at height z, F ⊥ ε (u, A) := Z L0 z=0 Z Σz e⊥ ε (u, A) dvolg⊥dz, F z ε (u, A) := Z Cδ ez ε(u, A) dvolg, and e⊥ ε (u, A) := 1 2  |du −iAu|2 g⊥+ 1 2ε2(1 −|u|2)2 + |dA|2 g⊥  √g33, (2.10) ez ε(u, A) := 1 2 g33|∂zu −iAzu|2 + X 1≤i,j≤2 (dA)i3(dA)j3gijg33 ! . 22 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY 2.5.4. Horizontal blow-up. We now perform the announced horizontal rescaling, at a scale ℓ which will be optimized later. Given ℓ> 0, we now consider in Cδ the metric ˜g defined by ˜gij = ℓ−2gij if 1 ≤i, j ≤2 and ˜gij = gij otherwise (recall that g13 = g23 = 0). The volume element for ˜g is dvol˜g = ℓ−2dvolg. We let ˜ε = ε/ℓ, and (2.11) ˜Fε(u, A) = 1 2 Z Cδ |du −iAu|2 ˜g + 1 2˜ε2(1 −|u|2)2 + |dA|2 ˜g dvol˜g. We have |du −iAu|2 ˜g = ℓ2|du −iAu|2 g⊥+ g33|∂zu −iAzu|2, |dA|2 ˜g = ℓ4|dA|2 g⊥+ ℓ2 X 1≤i,j≤2 (dA)i3(dA)j3gijg33, so that, if ℓ≤1, then (2.12) ℓ2 ˜Fε(u, A) ≤ℓ2F ⊥ ε (u, A) + F z ε (u, A). Therefore (2.13) Fε(u, A, Uδ) ≥(1 −ℓ2)F ⊥ ε (u, A) + ℓ2 ˜Fε(u, A). 3. Preliminaries on the ratio function 3.1. Non-degeneracy condition for graphs and piecewise graphs. Here we compute the second derivative of the ratio Γ 7→R(Γ) for Γ’s that are graphs over Γ0. In addition, we consider a more general class of curves, which we call piecewise graphs over Γ0, and which naturally appear in our setting since, by Theorem B, the approximation of the vorticity is essentially composed of a sum of N0 such objects. We provide a kind of nondegeneracy condition for the ratio among piecewise graphs, and then establish a relation between the second derivative of the ratio and this nondegeneracy condition. Throughout this section, B0 is the Meissner magnetic field associated to the domain Ω. It is in particular true that the restriction of B0 to a plane tangent to the boundary of Ωis zero. Without changing notation, we will work in the coordinates defined in Proposition 2.2 in a neighborhood of Γ0. In these coordinates, Γ0 is the interval [0, L0] on the z-axis. We have R(Γ) = ⟨B0, Γ⟩/|Γ| where, in the new coordinates, given a curve Γ in Cδ, |Γ| = Z |Γ′(s)|g ds, ⟨B0, Γ⟩= Z Γ B0. For a map f defined on Cδ, we denote by f • z the map f • z (x, y, z) = f(0, 0, z). The map f • z may be seen either as a function of x, y, z, or just of z. Recall from the introduction that for the metric evaluated along the axis g(0, 0, z), the variable z is omitted from the notation, and we write g• instead of g• z. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 23 We use the notation ⃗u = (x, y, 0) and d⃗u = dx dy. For any curve or sum of curves Γ, we define ⟨Γ⟩⊥:= Z Γ 1 2(dg33)• z(⃗u) dz ⟨Γ⟩∥:= Z Γ LL(z, ⃗u) dz, where the linearization of the length is defined as (3.1) LL(z, ⃗u) = 1 4(d2g33)• z(⃗u, ⃗u) −1 8(dg33)• z(⃗u)2. Remark 3.1. Note that (3.2) LL(z,⃗u) ≤C|⃗u|2. Hereafter χ is a smooth cutoff function on Cδ taking values in [0, 1] such that (3.3) χ(·) = 1 if distg(·, Γ0) < δ/2, |∇χ| ≤C/δ, χ(·) = 0 if distg(·, Γ0) > 3δ/4. For any 2-form µ = µ12 dx ∧dy + µ23 dy ∧dz + µ31 dz ∧dx, we let (3.4) ⟨µ⟩2D = Z z Z Σz χ√g33 µ12 d⃗u dz =  χ ∂ ∂z, µ  . We also define, for any curve or sum of curves Γ, ⟨Γ⟩2D := Z Γ χ √g33 dz. The following lemma motivates the notation ⟨Γ⟩2D. Lemma 3.1. Assume Γ is a curve or sum of curves, and that ∥µ −2πΓ∥∗≤η, where ∥· ∥∗is defined in (1.5). Then ⟨µ⟩2D −2π ⟨Γ⟩2D ≤Cη. Proof. We have ⟨µ⟩2D = Z z Z Σz χ √g33µ12 d⃗u dz = Z Cδ (χ √g33 dz) ∧µ. Since the restriction of χ √g33 dz to ∂Cδ vanishes, it belongs to C0,1 T (Cδ), and thus Z Cδ (χ √g33 dz) ∧µ −2π Z Γ χ √g33 dz ≤Cη. □ 24 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Figure 3. Piecewise graph: z′(t) = 1 on [0, z2] and [z2 + h, L0 + h], z′(t) = −1 on [z2, z2 + h]. We will need to expand up to second order the ratio of a curve which is more general than a graph over Γ0, what we call a piecewise graph. Such a piecewise graph is defined as a curve Γ : [0, L] →Cδ, where Γ(t) = Γ0(z(t))+⃗u(t), where z(t) : [0, L] →[0, L0] is a sawtooth function — i.e. its derivative is piecewise constant with values ±1 — and ⃗u(t) is horizontal for any t ∈[0, L]. An example is shown in Figure 3. Proposition 3.1. For any piecewise graph Γ : [0, L] →Cδ/2, where Γ(t) = Γ0(z(t)) + ⃗u(t) with z(0) = 0 and z(L) = L0, we have ⟨Γ⟩2D = L0 + ⟨Γ⟩⊥+ ⟨Γ⟩∥+ O  ∥⃗u∥3 L3([0,L])  . Proof. Since ∥⃗u∥L∞< δ/2, we have χ(Γ(t)) = 1 for any t. Moreover, since Γ′(t) = z′(t)e3+⃗u′(t), we have dz(Γ′(t)) = z′(t). Thus Z Γ χ √g33 dz = Z L 0 z′(t) p g33 (Γ0(z(t)) + ⃗u(t)) dt. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 25 We Taylor expand to find that, for every t, g33 (Γ0(z) + ⃗u) = 1 + (dg33)• z(⃗u) + 1 2(d2g33)• z(⃗u, ⃗u) + O |⃗u|3 , where z = z(t). We deduce that p g33 (Γ0(z(t)) + ⃗u(t)) = 1 + 1 2(dg33)• z(⃗u) + 1 4(d2g33)• z(⃗u, ⃗u) −1 8(dg33)• z(⃗u)2  + O |⃗u|3 . Integrating with respect to t yields, since χ(Γ) ≡1 and dz(Γ′) = z′, that ⟨Γ⟩2D = Z Γ dz + ⟨Γ⟩⊥+ ⟨Γ⟩∥+ O ∥⃗u∥3 L3  . To conclude, we note that since z(0) = 0 and z(L) = L0, we have Z Γ dz = Z L 0 z′(t) dt = L0. □ Proposition 3.2. Assume Γ : [0, L] →Cδ, where Γ(t) = Γ0(z(t)) + ⃗u(t) with z(0) = 0 and z(L) = L0. Then, ⟨B0, Γ⟩= ⟨B0, Γ0⟩+ ⟨B0, Γ⟩⊥+ ⟨B0, Γ⟩∥+ O ∥⃗u∥3 L3 + ∥⃗u∥2 L∞∥⃗u′∥L1 , where ⟨B0, Γ⟩⊥:= Z L 0 z′(t)(dB0)• z(t)(⃗u, e3) dt, ⟨B0, Γ⟩∥:= Z L 0 z′(t)LB(z(t), ⃗u(t), z′(t)⃗u′(t)) dt, LB(z,⃗u,⃗v) := 1 2 ((dB0)• z(⃗u,⃗v) + (∂⃗udB0)• z(⃗u, e3)) . (3.5) Remark 3.2. Note that (3.6) LB(z, ⃗u, z′⃗u′) ≤C |⃗u|2 + |⃗u||⃗u′|  . Proof. First note that, as currents, the curves t →Γ0(t) and t →Γ0(z(t)) are equal since for any smooth vector field X we have Z L 0 z′(t)Γ′ 0(z(t)) · X(Γ0(z(t)) dt = Z L0 0 Γ′ 0(s) · X(Γ0(s)) ds. We consider the surface Σ parametrized by (s, t) →Γ0(z(t)) + s⃗u(t), for s ∈(0, 1) and t ∈(0, L). Then the boundary of Σ, oriented by the 2-form ds ∧dt is equal to Γ −Γ0, plus two horizontal segments on the top and bottom boundaries of Cδ. On these boundaries B0 vanishes for horizontal vectors therefore, from Stokes’s formula, (3.7) Z Γ B0 − Z Γ0 B0 = Z Σ dB0. 26 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY We have Z Σ dB0 = Z 1 0 Z L 0 (dB0)Γ0(z(t))+s⃗u(t)(⃗u(t), z′(t)e3 + s⃗u′(t)) dt ds. Then we expand (dB0)Γ0(z(t))+s⃗u(t) = (dB0)• z(t) + s(∂⃗u(t)dB0)• z(t) + O(|⃗u(t)|2), to obtain, omitting the variable t for clarity, (dB0)Γ0(z)+s⃗u(⃗u, z′e3 + s⃗u′) = (dB0)• z(⃗u, z′e3) + s(dB0)• z(⃗u, ⃗u′) + s(∂⃗udB0)• z(⃗u, z′e3) + O(|⃗u|3 + |⃗u′||⃗u|2), and then Z Σ dB0 = Z L 0  (dB0)• z(⃗u, z′e3) + 1 2(dB0)• z(⃗u, ⃗u′) + 1 2(∂⃗udB0)• z(⃗u, z′e3)  dt + O |⃗u|3 + |⃗u′||⃗u|2)  . Then we replace in (3.7) to deduce that ⟨B0, Γ⟩= ⟨B0, Γ0⟩+ ⟨B0, Γ⟩⊥+ ⟨B0, Γ⟩∥+ O(∥⃗u∥3 L3 + ∥⃗u′∥L1∥⃗u∥2 L∞). □ Proposition 3.3. Assume the maximum of Γ 7→R(Γ) is achieved at Γ0(z) = (0, 0, z) (in tube coordinates). Then for any z ∈(0, L0) and for any horizontal vector ⃗u we have (3.8) (dB0)• z(⃗u, e3) = R(Γ0) 2 (dg33)• z(⃗u). In particular, for any piecewise graph Γ in Cδ we have (3.9) ⟨Γ⟩⊥= ⟨B0, Γ⟩⊥ R(Γ0) . Proof. Consider a smooth variation of Γ0 written as Γt(z) = Γ0(z) + t⃗u(z), where ⃗u(z) is horizontal for any z. We have Γ′ t(z) = e3 + t⃗u′(z) and |Γ′ t(z)|g = q g33(Γt(z)) + t2|⃗u′(z)|2 g, therefore d dt|t=0|Γ′ t(z)|g = 1 2(dg33)(0,0,z)(⃗u(z)), and d dt|t=0|Γt| = 1 2 Z L0 0 (dg33)(0,0,z)(⃗u(z)) dz. To compute the derivative of ⟨B0, Γt⟩with respect to t, we let Σt be the surface parametrized by (s, z) →(0, 0, z) + s⃗u(z), for s ∈(0, t) and z ∈(0, L0). Then the boundary of Σt, oriented by the 2-form ds ∧dz is equal to Γt −Γ0, plus two horizontal segments on the top and bottom VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 27 boundaries of Cδ. On these boundaries B0 vanishes for horizontal vectors therefore, from Stoke’s formula, Z Γt B0 − Z Γ0 B0 = Z Σt dB0, and Z Σt dB0 = Z t 0 Z L0 0 (dB0)Γ0(z)+s⃗u(z)(⃗u(z), e3 + s⃗u′(z)) dz ds = t Z L0 0 (dB0)(0,0,z)(⃗u(z), e3) dz + O t2|⃗u|2 ∞  . If Γ0 is critical for the ratio, then L0 d dt|t=0 ⟨B0, Γt⟩= ⟨B0, Γ0⟩d dt|t=0|Γt|, therefore L0 Z L0 0 (dB0)(0,0,z)(⃗u(z), e3) dz = 1 2 ⟨B0, Γ0⟩ Z L0 0 (dg33)(0,0,z)(⃗u(z)) dz. Since this is true for any variation Γt(z) = Γ0(z) + t⃗u(z), we deduce (3.8), and (3.9) follows in view of the definition of ⟨Γ⟩⊥and ⟨B0, Γ⟩⊥. □ The above computations imply the following. Lemma 3.2. Assume the maximum of Γ 7→R(Γ) is achieved at Γ0(z) = (0, 0, z) and that Γ is a graph over Γ0 included in Cδ and parametrized as Γ(z) = Γ0(z) + ⃗u(z). We define Γt(z) = Γ0(z) + t⃗u(z). Then Q(⃗u) := −d2 dt2 |t=0 R(Γt) = 2 R(Γ0) L0 Z L0 0 1 2|⃗u′|2 g• + L (z,⃗u(z), ⃗u′(z)) dz, where L (z, ⃗u, ⃗u′) = LL(z,⃗u) − 1 R(Γ0)LB(z, ⃗u,⃗u′), with LB as in (3.5) and LL as in (3.1). Proof. Let f(t) = R(Γt). Then f(t) = g(t)/h(t), with g(t) = ⟨B0, Γt⟩and h(t) = |Γt|. Then, since f ′(0) = 0, we have (3.10) f ′′(0) = g′′(0)h(0) −g(0)h′′(0) h(0)2 . From Lemmas 3.1 and 3.2 we have d2 dt2 |t=0 ⟨Γt⟩2D = 2 ⟨Γ⟩∥, d2 dt2 |t=0 ⟨B0, Γt⟩= 2 ⟨B0, Γ⟩∥. Moreover, |Γt| −⟨Γt⟩2D = Z L0 0 q g33(Γt) + t2|⃗u′|2 g(Γt) − p g33(Γt) dz = t2 2 Z L0 0 |⃗u′|2 g• dz + O(t3). 28 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY It follows that (3.11) d2 dt2 |t=0  |Γt| −⟨Γt⟩2D = Z L0 0 |⃗u′|2 g• dz, and then, in view of (3.10) and (3.11), that f ′′(0) = 2 L0 Z L0 0 LB(z, ⃗u(z), ⃗u′(z)) −R(Γ0) 1 2|⃗u′|2 g• + LL(z,⃗u(z))  dz. □ The above computation motivates Definition 1.2. 3.2. Coercivity. The quantity Qℓ(Γ) defined below contains the interesting terms in the lower bound for GLε(u, A) which we can compute using the idea of horizontal blow-up introduced in [CJ17]. Here ℓrepresents the scale of the horizontal blow-up and tends to zero with ε, and Γ represents the vortex filaments associated to (u, A). In this section we show that the quadratic form Q defined above is essentially the limit of Qℓas ℓ→0. Definition 3.1. Assume that Γ is a piecewise graph over Γ0. For ℓ> 0, define Qℓ(Γ) := ℓ2  |Γ|˜g −⟨Γ⟩2D + (⟨Γ⟩2D −L0) −⟨B0, Γ −Γ0⟩ R(Γ0) , where |Γ|˜g is the length of Γ with respect to the metric ˜g = g33dz2 + ℓ−2g⊥. Lemma 3.3. If Γ is a piecewise graph in Cδ/2, parametrized as Γ(t) = Γ0(z(t))+⃗u(t), t ∈[0, L], with z(0) = 0 and z(L) = L0, and if the maximum of Γ 7→R(Γ) is achieved at Γ0(z) = (0, 0, z), then (3.12) Qℓ(Γ) = Z L 0 n ℓ2 q g33(Γ) + ℓ−2|⃗u′|2 g(Γ) −z′p g33(Γ)  +z′  LL(z,⃗u) −LB(z,⃗u, z′⃗u′) R(Γ0)  dt + O ∥⃗u∥3 L∞+ ∥⃗u∥2 L∞∥⃗u′∥L1 . Proof. The result follows from the definition of Qℓ, ⟨Γ⟩2D and Propositions 3.1, 3.2, taking into account the fact that, because of the criticality of Γ0, we have from Proposition 3.3 that R(Γ0) ⟨Γ⟩⊥= ⟨B0, Γ⟩⊥. □ Proposition 3.4. Assume that Γ0 is a nondegenerate maximizer of the ratio R. (1) There exists a small constant c > 0 such that for any ℓ≤1 and for any piecewise graph Γ parametrized as Γ(t) = Γ0(z(t)) + ⃗u(t) with ∥⃗u∥L∞< cℓit holds that Qℓ(Γ) ≥c∥⃗u∥2 L∞, so that in particular Qℓ(Γ) ≥0. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 29 (2) Assume {ℓn}n, {αn}n and the piecewise graphs {Γn}n — parametrized as Γn(t) = Γ0(zn(t)) + ⃗un(t) for t ∈[0, Ln] — are such that, as n →+∞, 0 < αn ≪ℓn ≤1, Qℓn(Γn) = O(αn 2), ∥⃗un∥L∞= o(ℓn). Then, defining ⃗vn = ⃗un/αn, there exists a subsequence {n} such that Ln →L0 and such that {vn}n converges in the sense of distributions to some ⃗u∗which belongs to H1((0, L0), R2). Moreover, as n →+∞, (3.13) max t |⃗vn(t) −⃗u∗(zn(t))| →0, ∥Γn −Γ∗,n∥∗ αn →0, where Γ∗,n(z) = Γ0(z) + αn⃗u∗(z) is a graph over Γ0. Finally, (3.14) lim inf n→+∞ Qℓn(Γn) αn2 ≥ L0 2 R(Γ0)Q(⃗u∗). Remark 3.3. If the metric g was Euclidean, we could state the result in terms of the horizontal blow-up of the curves Γn converging to a curve Γ∗. Here, it is really the metric which is blown-up horizontally. Remark 3.4. The first assertion in (3.13) means that the distance of any point of Γn with vertical coordinate z to Γ∗,n(z) is o(αn), uniformly with respect to the point chosen and to z. Note that Γn need not be a graph, it may even have a diverging number of points of a given height z. Note that the first statement follows from the second one, since, by reasoning by absurdity, if we choose αn = ∥⃗un∥L∞, then ∥⃗u∗∥L∞= 1 and therefore Q(⃗u∗) is bounded below by a constant depending only on Q. Thus we only need to prove the second item of the proposition. Proof. Assume {ℓn}n, {αn}n, and {Γn}n are as above, and that Γn is parametrized as Γn(t) = Γ0(zn(t)) + ⃗un(t), for t ∈[0, Ln]. We define zn(t) to be the monotone increasing envelope of t →zn(t), i.e. zn(t) = maxs≤t zn(s). Note that zn is continuous and piecewise affine, with zn ′(t) equal to either 0 or 1, except at the finite number of points of discontinuity of zn ′. Let qℓn(Γn)(t) denote the integrand in (3.12). We rewrite (3.12) as (3.15) Qℓn(Γn) = Z An qℓn(Γn)(t)dt + Z Acn qℓn(Γn)(t)dt + O ∥⃗u∥3 L∞+ ∥⃗u∥2 L∞∥⃗u′∥L1 , where An := {t ∈[0, Ln] | zn = zn}. In particular we have zn′ = 1, almost everywhere in An. The second integral in (3.15) may be bounded below by noticing that the integral of z′ n over Ac n is equal to 0. Since g33(Γ0(z)) = 1 for any z, we have Z Acn z′ n p g33(Γn(t)) dt = Z Acn z′ n p g33(Γn(t)) − p g33(Γ0(zn(t)))  dt, 30 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY and since | p g33(Γn(t)) − p g33(Γ0(zn(t)))| ≤C∥⃗un∥L∞we find that Z Acn z′ n p g33(Γn(t)) dt ≤C |Ac n| ∥⃗un∥L∞. Using the bounds (3.6) and (3.2) we deduce that Z Acn qℓn(Γn)(t)dt ≥ Z Acn ℓn 2q g33(Γn) + ℓn −2|⃗u′ n|2 g(Γn) −C∥⃗un∥L∞ Z Acn ℓn 2 + |⃗un| + |⃗u′ n| dt, and therefore Z Acn qℓn(Γn)(t)dt ≥1 2 Z Acn ℓn 2 + ℓn|⃗u′ n|g(Γn) dt −C∥⃗un∥L∞ Z Acn ℓn 2 + |⃗un| + |⃗u′ n| dt. Since ∥⃗un∥L∞/ℓn →0 as n →+∞, it follows that if n is large enough, then (3.16) Z Acn qℓn(Γn)(t)dt ≥1 4 Z Acn ℓn 2 + ℓn|⃗u′ n|g(Γn) dt. Next, we analyze further the first integral in (3.15) by splitting it as the integral on a set Bn and on its complement: We choose δn > 0 such that, as n →+∞, (3.17) max(αn, ∥⃗un∥∞) ≪δn ≪ℓn, and then we set Bn = {t ∈An | |⃗u′ n(t)|g(Γn) ≤δn}. If t ∈Bn, we have |⃗u′ n|2 g(Γn) ℓn 2 ≤δn ℓn , and the right-hand side goes to 0 as n →+∞. Therefore a Taylor expansion yields, as n →+∞, ℓn 2 Z Bn q g33(Γn) + ℓn −2|⃗u′ n|2 g(Γn) − p g33(Γn) dz = 1 2 −o(1)  Z Bn |⃗u′ n|2 g(Γ0) dz, where we have also used the fact that g33(Γn) →1 and g(Γn) →g(Γ0) uniformly as n →+∞, since ∥⃗un∥L∞→0. We deduce that, writing g• = g(Γ0), Z Bn qℓn(Γn)(t)dt ≥ 1 2 −o(1)  Z Bn |⃗u′ n|2 g• dt + LL(zn(t), ⃗un) −LB(zn(t), ⃗un, ⃗u′ n) R(Γ0) dt. Using (3.2) and (3.6), we deduce in particular that (3.18) Z Bn qℓn(Γn)(t)dt ≥−C∥⃗un∥2 L∞. On the other hand, since (√a + x −√a)/√x is increasing, we have for any t in An \ Bn that r g33(Γn) + |⃗u′n|2 g(Γn) ℓn2 − p g33(Γn) |⃗u′ n|g(Γn)/ℓn ≥ q g33(Γn) + δ2n ℓn2 − p g33(Γn) δn/ℓn . VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 31 Since δn ≪ℓn and since g33(Γn) →1 and g(Γn) →g•, we deduce s g33(Γn) + δ2 n ℓn 2 − p g33(Γn) ≥ 1 2 −o(1)  δ2 n ℓn 2, and therefore ℓn 2   s g33(Γn) + |⃗u′ n|2 g(Γn) ℓn 2 − p g33(Γn)  ≥ 1 2 −o(1)  |⃗u′ n|g(Γn)δn. From (3.6) and (3.2) we know that LL(z, ⃗un) and LB(z,⃗un, ⃗u′ n) are both negligible compared to |⃗u′ n|δn if t ∈An \ Bn, therefore we deduce that (3.19) Z An\Bn qℓn(Γn)(t)dt ≥ 1 2 −o(1)  δn Z An\Bn |⃗u′ n|g• ≥ 1 2 −o(1)  δ2 n|An \ Bn|. From the hypothesis Qℓn(Γn) = O(αn2) and the estimates (3.15), (3.16), (3.18) and (3.19), we find that ℓn Z Acn ℓn + |⃗u′ n|g(Γn) dt + δn Z An\Bn δn + |⃗u′ n|g(Γn) dt ≤C α2 n + ∥⃗un∥2 L∞(1 + ∥⃗u′ n∥L1)  . Then we can decompose ∥⃗u′ n∥L1 = ∥⃗u′ n∥L1(Bcn) + ∥⃗u′ n∥L1(Bn) , use the fact that |⃗u′ n| ≤δn on Bn and use (3.17) to deduce that (3.20) ∥⃗u′ n∥L1(Bcn) = o (αn + ∥⃗un∥L∞) , |Bc n| = o(1). We now define ⃗wn : [0, L0] →R2 as follows. First we require that ⃗wn(0) = ⃗un(0). Then we note that, except for a finite set of values of z — which we denote J — there exists a unique t such that zn(t) = z and therefore a unique t ∈An such that zn(t) = z. We then require that for any t ∈An, ⃗w′ n(zn(t)) = ( ⃗u′ n(t) if t ∈Bn 0 if t ∈An \ Bn. This defines unambiguously ⃗w′ n(z) if z /∈J, thus ⃗wn is well defined. Moreover, using (3.20), we have for any t ∈An (3.21) |⃗wn(zn(t)) −⃗un(t)| ≤ Z Bcn |⃗u′ n(s)| ds = o(αn + ∥⃗un∥L∞). Since LL(z, ·) is a quadratic form, we thus have that for any t ∈An LL(zn(t), ⃗wn(zn(t))) −LL(zn(t), ⃗un(t)) dt = o(α2 n + ∥⃗un∥2 L∞). Since LB(z, ·, ·) is bilinear, we also deduce that LB(zn(t), ⃗wn(zn(t)), ⃗w′ n(zn(t))) = 0 for t ∈ An \ Bn while for t ∈Bn we have LB(zn(t), ⃗wn(zn(t)), ⃗w′ n(zn(t))) −LB(zn(t), ⃗un(t), ⃗u′ n(t)) = o((αn + ∥⃗un∥L∞)|⃗w′ n|). 32 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY This allows us to write Z Bn LL(z, ⃗un) = Z L0 0 LL(z, ⃗wn) dz + o(αn 2 + ∥⃗un∥2 L∞), Z Bn LB(z,⃗un, ⃗u′ n) − Z L0 0 LB(z, ⃗wn, ⃗w′ n) dz = o(αn + ∥⃗un∥L∞) Z L0 0 |⃗w′ n|. Note that we have use the change of variables z = zn(t), to pass from integrals over An to integrals over [0, L0]. We now use again the hypothesis Qℓn(Γn) = O(αn2) with the information we have gathered so far to obtain that Cα2 n ≥ Z Bn qℓn(Γn)(t)dt + O ∥⃗un∥3 ∞+ ∥⃗un∥2 ∞∥⃗u′ n∥L1 = Z L0 0 (1 2 −o(1))|⃗w′ n|2 g• + L (z, ⃗wn, ⃗w′ n) dz + o(αn 2 + ∥⃗un∥2 ∞) + o(αn + ∥⃗un∥∞) Z L0 0 |⃗w′ n|. (3.22) Then, from the nondegeneracy of the maximizer Γ0 as defined in Definition 1.2, that is, the positive definiteness of Q, we deduce that Z L0 0 |⃗w′ n|2 g• + L (z, ⃗wn, ⃗w′ n) dz ≥c0∥⃗wn∥2 H1, for some c0 > 0 independent of n. Using also the fact the H1 norm controls the L∞norm in one dimension, we see that the error terms in (3.22) may be absorbed by the left-hand side and the first term on the right-hand side to deduce that ⃗wn/αn subsequentially converges weakly in H1([0, L0]) to some ⃗u∗, and that lim inf n Qℓ(Γn) αn2 ≥ L0 2 R(Γ0)Q(⃗u∗). The weak H1 convergence also implies that ⃗wn/αn converges to ⃗u∗uniformly if we take a further subsequence. It then follows from (3.21) that |⃗vn(t) −⃗u∗(zn(t))| converges to 0 as n →+∞, uniformly w.r.t. t, hence proving the first assertion in (3.13). We also deduce from (3.20) that Ln →L0, since zn′ = 1 on An and the integral of zn′ on [0, Ln] is equal to L0. It also follows from (3.20) that zn(t) →t uniformly. Together with the uniform convergence of ⃗vn(t) −⃗u∗(zn(t)) to 0, this implies that vn →⃗u∗in the sense of distributions. Note that the fact that zn(t) →t tells us that the limit of the piecewise graphs Γn is in fact a graph. It remains to prove that ∥Γn −Γ∗,n∥∗= o(αn). This is a direct consequence of the above and Lemma 3.4 below, noting that the length of Γn is bounded as a consequence of (3.20), and the fact that ⃗w′ n is bounded in L2, hence in L1 also. □ VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 33 Lemma 3.4. Let Γ, eΓ ⊂Tδ ∩Ωbe two curves such that, in tube coordinates, Γ(t) = Γ0(z(t)) + ⃗u(t) is a piecewise graph defined over [0, L] with z(0) = 0 and z(L) = L0 and eΓ(z) = Γ0(z)+⃗v(z) is a graph over Γ0. Then ∥Γ −eΓ∥F(Ω), ∥Γ −eΓ∥∗≤C  max t |⃗u(t) −⃗v(z(t))|   |Γ| + |eΓ|  , where all norms are understood with respect to the metric g. Proof. Let X be any vector field defined in Ω, normal to ∂Ω, and such that ∥X∥∞and ∥curl X∥∞are less than or equal to 1. We need to show that ⟨Γ, X⟩− D eΓ, X E ≤C  max t |⃗u(t) −⃗v(z(t))|   |Γ| + |eΓ|  . For this we define Σ(s, t) = (1 −s)Γ(t) + seΓ(z(t)) for (s, t) ∈[0, 1] × [0, L]. Then, applying Stoke’s Theorem and the fact that X is normal to ∂Ω, we have ⟨Γ, X⟩− D eΓ, X E = Z Σ curl X · νΣ ≤|Σ|, and therefore ⟨Γ, X⟩− D eΓ, X E ≤ Z L 0 Z 1 0 |∂sΣ||∂tΣ| ds dt. Since ∂sΣ = eΓ(z(t)) −Γ(t) and |∂tΣ(s, t)| ≤|Γ′(t)| + |eΓ′(t)|, the result follows. □ 3.3. Strong nondegeneracy implies weak nondegeneracy. Lemma 3.5. There exists η > 0 depending on Ωsuch that the following holds. Assume Γ is Lipschitz curve with no boundary in Ωsuch that (3.23)  χη ∂z √g33 , Γ0 L0 −Γ |Γ|  ≤ℓ, where χη is a cut-off function for the cylinder Cη = B(0, η) × (0, L0), that is, χη is equal to 1 in Cη/2, equal to 0 outside Cη, takes values in [0, 1] and has gradient bounded by C/η. Then, if ℓis small enough depending on Ω, Γ is included in a neighborhood of Γ0 where tubular coordinates are defined, so that we may write Γ(t) = Γ0(z(t)) + ⃗u(t) for a horizontal vector ⃗u. Then, parametrizing Γ and Γ0 by arclength with a proper orientation, we have (3.24) Z L 0 |1 −z′(t)|dt ≤C √ ℓ, ∥⃗u∥L∞≤C √ ℓ, ||Γ| −L0| ≤C √ ℓ, where C depends on Ω. Moreover z(0) = 0 and z(L) = L0. Proof. Fix η > 0 small enough so that tubular coordinates are defined in Cη and such that √g33 ∈(1/2, 2) in Cη. The domain in Ωcorresponding to Cη is denoted Tη. We assume Γ is parametrized by arclength and whenever Γ(t) ∈Tη, we denote by (x(t), y(t), z(t)) the coordinates of Γ(t). We let X = χη ∂z √g33. 34 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Since ⟨X, Γ0/L0⟩= 1, denoting L = |Γ| we have by definition and using the fact that |Γ′|g = 1 from the arclength parametrization hypothesis, that  X, Γ |Γ|  = − Z L 0 ⟨X, Γ′⟩g , ⟨X, Γ′⟩g ≤1. It follows that 0 ≤  X, Γ0 L0 −Γ |Γ|  = − Z L 0 (1 −|X|g) +  |X|g|Γ′|g −⟨X, Γ′⟩g  ≤Cℓ, Both terms in the integral being nonnegative, they are each bounded by Cℓ. When Γ(t) ∈Cη, we may write it as Γ(t) = Γ0(z(t)) + ⃗u(t), where ⃗u(t) is horizontal. Then, replacing |X|g and ⟨X, Γ′⟩g by their value, we have (3.25) − Z L 0 (1 −χη) ≤Cℓ, − Z L 0 χη (1 −√g33z′) ≤Cℓ. We now show that L = |Γ| is bounded by a constant depending only on Ω. For this we begin by extending the coordinate function z defined in Tη to a C1 function defined in Ω, whose C1 norm is then a constant depending on Ω. Since √g33 ∈(1/2, 2) on Cη, we have − Z L 0 χη 1 2 −z′  ≤− Z L 0 χη  1 √g33 −z′  ≤2− Z L 0 χη √g33  1 √g33 −z′  ≤Cℓ, while, using the first inequality in (3.25), − Z L 0 (1 −χη) 1 2 −z′  ≤C− Z L 0 (1 −χη) ≤Cℓ. The two inequalities imply − Z L 0 1 2 −z′ ≤Cℓ. Since the integral of z′ is bounded above by 2 maxΩ|z| ≤C, we deduce that 1 2 −Cℓ≤C L , and then that L is bounded above by a constant depending only on Ω, if ℓis small enough. Next, we claim that Γ is included in the cylinder CC √ ℓfor some C > 0 depending only on Ω. First, from (3.23), there exists a ∈(0, L) such that Γ(a) ∈CCℓ. If Γ exited from CC′√ ℓ, then the first exit point b (which we assume larger than a w.l.o.g) is such that Γ(t) ∈Cη/2 for t ∈(a, b), so that χη(Γ) = 1 in (a, b), and such that (3.26) Z b a |⃗u′|g⊥≥C′√ ℓ−Cℓ. On the other hand, since |Γ′|2 g = |⃗u′|2 g⊥+ g33z′2 = 1 the latter inequality in (3.25) implies that Z b a q |⃗u′|2 g⊥+ g33z′2 −√g33z′ ≤Cℓ, VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 35 and then — using the fact that √1 + x−1 ≥c min(√x, x) for any x ≥0 and that ℓ≤ √ ℓsince ℓis assumed small — that (3.27) Z b a |⃗u′|g⊥≤C √ ℓ, which would contradict (3.26) if C′ is chosen large enough. Therefore Γ is included in CC √ ℓfor some C > 0 and therefore ∥⃗u∥L∞≤C √ ℓ, which proves the second inequality in (3.24) Now, since Γ is included in CC √ ℓ, we have that (x(t), y(t), z(t)) are defined in [0, L]. Since Γ(0) and Γ(L) belong to ∂Ω, we must have z(0), z(L) ∈{0, L0}, and then, in view of the latter inequality in (3.25), that z(0) = 0 and z(L) = L0, if ℓis chosen small enough. Then, arguing as above we find that (3.27) holds with a = 0 and b = L, so that (3.28) ∥⃗u′∥L1([0,L]) ≤C √ ℓ. Moreover, since g33 = 1 on Γ0 from the definition of g in Proposition 2.2, we have |1 −√g33| ≤ C √ ℓon CC √ ℓ, so that (3.25) implies Z L 0 |1 −z′| ≤C √ ℓ, which using z(0) = 0 and z(L) = L0 implies that |L −L0| ≤C √ ℓand then, together with (3.28) implies that ||Γ| −L0| ≤C √ ℓ. □ Proposition 3.5. Assume that the maximum of R among normal 1-currents which are diver- gence free in Ωis uniquely achieved (modulo a multiplicative constant) at a simple smooth curve Γ0 with endpoints on ∂Ωand that the second derivative of R at Γ0 is definite negative. Then there exists α > 0 such that for any curve Γ we have R(Γ0) −R(Γ) ≥α min(∥Γ −Γ0∥2 ∗, 1). Proof. The statement is equivalent to proving that if {Γn}n is a maximizing sequence of curves with no boundary in Ω, then lim inf n→+∞ R(Γ0) −R(Γn) ∥Γn −Γ0∥2 ∗ > 0. Consider such a sequence {Γn}n. Then, using the compactness of currents, Γn/|Γn| converges to Γ0/L0 in the flat metric, hence in (C0,1 T (Ω, R3))∗. It follows using Lemma 3.5 that if n is large enough, then Γn lies in Cδ and, using tubular coordinates, we may parametrize it as Γn(t) = Γ0(zn(t)) + ⃗un(t) in [0, Ln], where z′ n(t) ∈{±1} and where zn(0) = 0, zn(Ln) = L0. It is also the case that |Γn| →L0 and that ∥1 −z′ n∥L1([0,Ln]) →0, as n →+∞. Then, applying Proposition 3.4 with ℓ= 1, we deduce that (3.29) lim inf n→+∞ Q1(Γn) ∥⃗un∥2 L∞> 0. 36 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY But, since ⟨B0, Γ0⟩/ R(Γ0) = L0, we have |Γn| R(Γ0) (R(Γ0) −R(Γn)) = |Γn| −⟨B0, Γn⟩ R(Γ0) =  |Γn| −⟨Γn⟩2D +  ⟨Γn⟩2D −L0  + L0 −⟨B0, Γn⟩ R(Γ0) =  |Γn| −⟨Γn⟩2D +  ⟨Γn⟩2D −L0  −⟨B0, Γn −Γ0⟩ R(Γ0) = Q1(Γn). It follows, in view of (3.29) and since |Γn| →L0, that R(Γ0) > R(Γn) + c∥⃗un∥2 L∞. To prove the proposition, it remains to note that ∥Γn −Γ0∥∗< C∥⃗un∥L∞. This is proved for instance by choosing some vector field X in Ωsuch that ∥X∥L∞, ∥∇X∥∞≤1 and bounding | ⟨X, Γn −Γ0⟩| by C∥⃗un∥L∞: Z Ln 0 X(Γ0(zn(t)) + ⃗un(t)) · (Γ0(zn(t)) + ⃗un(t))′ dt − Z Ln 0 X(Γ0(zn(t)) · Γ0(zn(t))′ dt ≤∥⃗un∥L∞∥∇X∥L∞|Γn| + Z Ln 0 X(Γ0(zn(t))) · ⃗u′ n(t) dt ≤C∥⃗un∥L∞+ Z Ln 0 X(Γ0(zn(t))′ · ⃗un(t) dt ≤C∥⃗un∥L∞. □ 3.4. Strong nondegeneracy for small balls. In this section we show that the strong non- degeneracy condition is satisfied in the case of a ball with small enough radius ρ. We recall that it was proved in [Rom19a] (see also [ABM06]) that the diameter Γ0 = {(0, 0, z) ∈Bρ} is the unique maximizer of the ratio R among simple curves, and in [RSS23] it was proved that it satisfies the weak nondegeneracy condition (2.9). We in fact have: Proposition 3.6. The ratio R in the ball Bρ satisfies the strong nondegeneracy condition of Definition 1.2 if ρ is small enough. Proof. We may define coordinates x ∈[0, 1) and z ∈(−1, 1) on the half disk D+ ρ = {R + iZ | R2 + Z2 ≤ρ2 and R > 0} in C as follows : The point with coordinates x, z is ρ times the image of the complex number iz by the Moebius transform φx(w) = (x + w)/(1 + xw), which maps the vertical segment i[−1, 1] to the intersection of the circle centered at (1 + x2)/2x with the unit disk. We thus have R + iZ = ρ x + iz 1 + ixz. We may then extend straightforwardly this coordinate system to the ball Bρ in R3 by requiring that a point with cylindrical coordinates (R, θ, Z) in Bρ \ {(0, 0, ±ρ)} has coordinates (x, θ, z), VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 37 where x ∈[0, 1) and z ∈(−1, 1) such that R = ρx 1 + z2 1 + x2z2, Z = ρz 1 −x2 1 + x2z2. The Euclidean metric dR2 + dZ2 + R2dθ2 then becomes g(x, z, θ) = ρ2 (1 + x2z2)2 (1 + z2)2 dx2 + (1 −x2)2 dz2 + x2(1 + z2)2 dθ2 . It is straightforward to compute the second derivative of the length. Let Γt(z) = (tx(z), z, θ(z)) be a family of curves parametrized by z ∈[−1, 1] in our coordinates. Then (3.30) d2 dt2 |t=0|Γt| = ρ Z 1 −1 1 2x′2(1 + z2)2 −x2(1 + z2) + 1 2x2(1 + z2)2θ′2 dz. To compute the second derivative of ⟨B0, Γt⟩, we resort to the explicit expression for curl B0 computed in [Lon50]: curl B0 = 3ρ 2 sinh ρ  cosh r −sinh r r  sin ϕ r eθ, where (r, ϕ, θ) are spherical coordinates on Bρ, so that R = r sin ϕ and Z = r cos ϕ. It follows that curl B0 · eθ = 3 2 r2 3 sin ϕ r + O(ρ3) = R 2 + O(ρ3). We deduce that, denoting D+ ρ = {(R, Z) | R2 + Z2 ≤ρ2 and R > 0}, (3.31) ⟨B0, Γ0⟩= ZZ D+ ρ curl B0 · eθ = Z ρ R=0 R 2 × 2 p ρ2 −R2 dR + O(ρ5) = ρ3 3 + O(ρ5). We also find that (curl B0 · eθ) rdr ∧dϕ = ρ3 2 x(1 −x2)(1 + z2)2 (1 + x2z2)3 dx ∧dz + O(ρ5). This allows us to compute ⟨B0, Γ0⟩−⟨B0, Γt⟩= ZZ At curl B0 · eθ dx dz, where At = {(s, z) | −1 ≤z ≤1 and 0 ≤s ≤tx(z)}, from which we compute (3.32) d2 dt2 |t=0 ⟨B0, Γt⟩= −ρ3 4 Z 1 −1 x2(1 + z2) dz + O(ρ5). Since Γ0 maximizes the ratio R, its differential at Γ0 vanishes and then, for t = 0, d2 dt2 |t=0 R(Γt) = d2 dt2 |t=0 ⟨B0, Γt⟩L0 −⟨B0, Γ0⟩d2 dt2 |t=0|Γt| L0 2 , 38 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY so that, in view of (3.30), (3.31), (3.32) and the fact that L0 = 2ρ, we have after simplification d2 dt2 |t=0 R(Γt) = −ρ2 24 Z 1 −1 (1 + z2)2(x′2 + x2θ′2) + x2(1 + z2) dz + O(ρ3). Therefore the quadratic form d2 R(Γ0) is definite negative if ρ is small enough. □ 4. Lower bounds by slicing In this section we prove two types of lower bounds for the free energy Fε contained in a tubular neighborhood of Γ0, and obtained by integrating in the z coordinate two-dimensional lower bounds over slices. The first type is very robust and is obtained by the ball construction method (`a la [San98, Jer99]) but in a two-step growth process that allows to retain more information on the degrees at small scales. It retains an error which is larger than a constant. In this construction, the varying weight is approximated by a constant weight, leading to errors that can be absorbed into the ball construction errors. The second type of lower bound is more precise, it recovers the exact constant γ and an error only o(1). To obtain such a precise error, the varying weight can no longer be approximated by a constant, instead one needs to resort to performing the ball construction in the precise geometry we are working in, i.e. we need to grow geodesic balls. The techniques thus combine ball construction methods within the geometric framework to capture the energy on large scales, with the refined [BBH94] analysis found in of [AB98,Ser99a] to capture the precise energy at small scales while approximating the weight by a uniform one. 4.1. Lower bounds by ball growth method. First, recalling (3.3) and (3.4), which corre- spond respectively to the definitions of the smooth cutoff χ and ⟨µ⟩2D, we have the following result. Proposition 4.1. Assume that Fε(u, A) < C0| log ε|, that Γ0 is critical for the ratio, and that N0 ∈N, ϱ ∈(| log ε|−1, δ/2) are such that ∥µ(u, A) −2πN0Γ0∥F(Ω) ≤ϱ. Then, writing for short µ instead of µ(u, A), we have F ⊥ ε (u, A) ≥| log ε| 2 ⟨µ⟩2D + πL0N0(N0 −1) log 1 ϱ −Rem, where Rem ≤C (1 + ϱ log | log ε|). This result will be proven by obtaining lower bounds for the energy on the slices Σz = Cδ ∩R2 × {z} and integrating them with respect to z. Let us first state and prove the two-dimensional lower bounds on slices Σz. This is an adaptation of the vortex ball construction (here, specifically [San98, SS07]), with a two-stage growth process made to handle the varying weight. Such a two-stage growth process was already used in [SS07, Chapter 9]. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 39 Proposition 4.2. For any q > 0, C > 0 and integer N, there exist ε0, C′ > 0 such that for any z, any (u, A), and any ε < ε0, the following holds. Assume that (4.1) Z Σz e⊥ ε (u, A)dvolg⊥< C| log ε|, where e⊥ ε is defined as in (2.10), and let ϱz = max ∥µz −πNδ0∥F(Σz), | log ε|−1 , where µz = µ12 dx ∧dy. Then there exists points a1, . . . , ak ∈Σz and integers d1, . . . , dk such that, denoting n = |d1 + · · · + dk| and D = |d1| + · · · + |dk|, we have Z Σz e⊥ ε (u, A) dvolg⊥≥π k X i=1 |di| p g33(ai)  log ϱz ε −C′  + πN 2 log δ ϱz −C′ log | log ε| (ϱz + (D −n)) , where δ is as in Proposition 2.2, and (4.2) π k X i=1 diδai −µz F(Σz) ≤C′| log ε|−q, D ≤C′. Later, we will be able to show that ϱz is very small and D = n = N, which will allow to translate this lower bound into a lower bound with O(1) error. Proof of Proposition 4.2. In what follows, C′ is any constant depending on q, C, and the in- equalities involving ε are understood to hold for any ε small enough depending on C and q. We write for short p(x) = p g33(x) and will use that notation throughout the rest of the section. Step 1: lower bound by two-stage ball growth. We use the ball construction, or ball growth procedure relative to the metric g⊥on Σz, see [SS07] in the Euclidean case, the metric does not affect the construction as it corresponds to making locally an affine change of coordinates, see [SS04] for detail (constructions for a surface may be found in [IJ21, Proposition 8.2] or [DS18] for instance). Given a final radius r = | log ε|−Q, with Q a large number, we thus obtain a collection B = {Bi := B(ai, ri)}i of disjoint balls such that r = P i ri, such that (4.3) Z Bi 1 pe⊥ ε (u, A) dvolg⊥≥π|di|  log r Dε −C′ , the degree of u over ∂Bi is equal to di and (4.2) is satisfied. Moreover, D = |d1| + · · · + |dk| is bounded by C′, for otherwise by disjointness of the balls the lower bound (4.3) would contradict (4.1). Since on Bi we have p(x) ≥p(ai) −Cri and ri ≤| log ε|−Q, it follows that (4.4) Z Bi e⊥ ε (u, A) dvolg⊥≥π|di|p(ai)  log r ε −C′ . 40 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Then, since from its definition ϱz > r, using the ball growth method of [San98] (see for instance [SS07, Chapter 4]) we may grow the balls from total radius r to total radius ϱz, at which point the balls will have grown into a collection B′ = {B′ j := B(a′ j, r′ j)}j, with degrees d′ j, such that for each B′ j we have Z Bj\B e⊥ ε (u, A) dvolg⊥≥π|d′ j| p(a′ j) −Cϱz  log ϱz r −C′  . Moreover, by additivity of the degree and disjointness of the balls, we must have (4.5) d′ j = X i,ai∈B′ j di. Since log(ϱz/r) ≤C′ log | log ε| we find (4.6) Z Bj\B e⊥ ε (u, A) dvolg⊥≥π|d′ j|p(a′ j)  log ϱz r −C′ −C′ϱz log | log ε|  . Step 2: lower bound by integration over large circles. We next retrieve the degree squared contribution to the energy outside of the small balls following the method of integration along circles of [SS03]. Denote by L a minimal connection between the measure Pk i=1 diδai and Nδ0, relative to the metric g⊥, allowing connections to the boundary. Then in view of (4.2), |L| ≤C′(ϱz+| log ε|−q). The fact that ϱz is the flat distance for the Euclidean metric and not g⊥is accounted for by the constant C′. Moreover, if the circle Cs (relative to g⊥) of center 0 and radius s does not intersect L, then the winding number of u on Cs is equal to N. Since we have p(x) ≥1 −Cs on Cs, and since the length of Cs relative to g⊥is bounded above by 2πs + C′s2, we obtain for such an s that Z ∂Cs e⊥ ε dvolg⊥≥(1 −C′s)πN 2 s . But the measure of the set of s such that Cs intersects either B′ or L is bounded above by 2ϱz + |L|, since B′ has total radius ϱz. Since |L| is the flat distance of Pk i=1 diδai to Nδ0, and from (4.2) and the definition of ϱz, this measure is less than 4ϱz, if ε is small enough. Integrating the circle bound above with respect to s such that Cs intersects neither B′ nor L, we obtain Z Σ\B′ e⊥ ε dvolg⊥≥ Z δ 4ϱz(1 −C′s)πN 2 s ds ≥πN 2  log δ ϱz −C′  . Note that if 4ϱz > δ the lower bound remains true, since the left-hand side is nonnegative. Adding this to (4.6) and (4.4) we find (4.7) Z Σ e⊥ ε dvolg⊥≥π X i |di|p(ai)  log r ε −C′ + π X j |d′ j|p(a′ j) log ϱz r + πN 2 log δ ϱz −C′ −C′ϱz log | log ε|. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 41 Then we note that, for any j, in view of (4.5), we have X i,ai∈B′ j |di|p(ai) ≤|d′ j|(p(a′ j) + Cϱz) + C′    X i,ai∈B′ j |di|  −|d′ j|  . Summing with respect to j, we find that X j |d′ j|p(a′ j) ≥ X i |di|p(ai) −C′ (ϱz + D −n) , where now the sums run over all indices i and j, respectively. Inserting into (4.7), we deduce that Z Σ e⊥ ε dvolg⊥≥π X i |di|p(ai)  log ϱz ε −C′  + πN 2 log L ϱz −C′ (d + D −n) log | log ε| −C′, which together with (4.2) proves the proposition. □ For slices Σz for which we have a weaker energy bound, we use a cruder estimate. Lemma 4.1. For any q > 0 and C > 0, there exist ε0, C′ > 0 such that, for any z such that Z Σz e⊥ ε dvolg⊥< C| log ε|q, there exists points a1, . . . , ak and integers d1, . . . , dk such that, denoting µz = µ12 dx ∧dy and D = |d1| + · · · + |dk|, we have Z Σz e⊥ ε dvolg⊥≥π k X i=1 |di|| p g33(ai)  log 1 ε −C′ log | log ε|  , 2π k X i=1 diδai −µz F(Σz) ≤C′| log ε|−q, D ≤C′| log ε|q−1. Proof. We apply the ball construction (see [SS07, Chapter 4]) with final radius r = | log ε|−Q, with Q a large number, we get a collection B = {Bi := B(ai, ri)}i of balls for which Z B 1 pe⊥ ε dvolg⊥≥πD  log r Dε −C′ , where D = |d1| + · · · + |dk|, the last statement of the lemma holds and the second one holds by the Jacobian estimate. Since on Bi we have √g33(x) ≥√g33(ai) −Cri, it also follows that Z Bi e⊥ ε dvolg⊥≥π|di| p g33(ai)  log r Dε −C′ . □ 42 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Proof of Proposition 4.1. We denote throughout the proof by C, q generic large positive con- stants depending only on Ω, C0, and χ is as above. We write f(z) = Z Σz e⊥ ε (u, A) dvolg⊥, so that the desired result is a lower bound on R z f(z)dz. We define for any z such that Σz is not empty ϱz = max(∥µ12 d⃗u −2πN0δ0∥F(Σz), | log ε|−1) Then we define three sets of slices: • We denote by I the set of z’s such that f(z) ≤M| log ε|, for some M > 0 to be chosen below. • We denote by J the set of z /∈I such that f(z) ≤| log ε|q. • We denote by K the set of z /∈I ∪J. Let us first treat the case z ∈K. Since on Σz, the 2-form µ12 d⃗u is the exterior differential of the current j(u, A) + A restricted to Σz, it follows that Z Σz χ √g33 µ12 d⃗u = Z Σz d(χ √g33) ∧j(u, A) + Z Σz χ √g33 dA d⃗u ≤Cf(z)1/2, from the Cauchy-Schwarz inequality and the definition of e⊥ ε (u, A). It then follows that for any z ∈K, since f(z) > | log ε|q and q > 2, we have (4.8) f(z) ≥| log ε| 2 Z Σz χ √g33 µ12 d⃗u + πN0(N0 −1) log 1 ϱ. For z ∈I, we may apply Proposition 4.2 on Σz with N = N0 to find that there exists points a1, . . . , ak and integers d1, . . . , dk such that, denoting n = |d1 + · · · + dk| and D = |d1|+· · ·+|dk|, we have D ≤C and f(z) ≥π X i |di| p g33(ai)  log ϱz ε −C′  + πN 2 0 log δ ϱz −C′ log | log ε| (ϱz + (D −n)) . Moreover (4.9) 2π k X i=1 diδai −µ12 d⃗u F(Σz) ≤C′| log ε|−q, so that, for z ∈I, Z Σz χ √g33 µ12 d⃗u −2π X i diχ(ai) p g33(ai) ≤C| log ε|−q. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 43 It follows that f(z) ≥1 2 Z Σz χ √g33 µ12 d⃗u log ϱz ε + π  log ϱz ε  X i (|di| −diχ(ai)) p g33(ai)+ + πN0 2 log 1 ϱz −C log | log ε| (D −n + ϱz) −C. If z is such that D > n, then the error terms on the right-hand side may be absorbed in the second term, noting that π log(ϱz/ε) p g33(ai) > | log ε|/C . If not, then the error term may be written C(ϱz log | log ε| + 1). Therefore, integrating with respect to z ∈I we find (4.10) Z z∈I f(z) dz ≥ Z z∈I Z Σz 1 2 log ϱz ε χ √g33 µ12 d⃗u+ +πN0 2 log 1 ϱz −C (ϱz log | log ε| + 1)  dz. Using the fact that ϱz bounds the flat distance between µ12 d⃗u and 2πN0δ0, we have log ϱz Z Σz χ √g33 µ12 d⃗u ≥2πN0 log ϱz −Cϱz| log ϱz|, and ϱz being bounded, since D ≤n, the error term above is bounded by a constant. Replacing in (4.10), we obtain (4.11) Z z∈I f(z) dz ≥ Z z∈I | log ε| 2 Z Σz χ √g33 µ12 d⃗u +π(N0 2 −N0) log 1 ϱz −C (ϱz log | log ε| + 1)  dz. Let us finally consider z ∈J. Then f(z) ≥M| log ε|, for some large constant M. We first exclude the case where Z Σz χ √g33 µ12 < f(z) | log ε|, because in this case, noting that πN0(N0 −1) log(1/ϱ) is bounded by C log | log ε|, the lower bound (4.8) is clearly satisfied. In the opposite case, we have from the definition of ϱz and the lower bound f(z) ≥M| log ε| that M −2πN0 ≤ Z Σz χ √g33 µ12d⃗u −2πN0 ≤Cϱz. In particular, if M is chosen large enough compared to N0, we deduce that ϱz ≥N0 and then Z Σz χ √g33 µ12d⃗u ≤Cϱz. We then apply Lemma 4.1 to find that (4.12) f(z) ≥π X i |di| p g33(ai)  log 1 ε −C log | log ε|  . 44 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Using (4.9) which remains true, we may then rewrite (4.12) as f(z) ≥| log ε| 2 Z Σz χ √g33 µ12 d⃗u −Cϱz log | log ε|. It follows that f(z) ≥| log ε| 2 Z Σz χ √g33 µ12 d⃗u + π N 2 0 −N0  log 1 ϱz −C  ϱz log | log ε| + log 1 ϱz  , where we have added and substracted the middle term on the right-hand side. This adds an extra undesired error term, which may be absorbed into a constant since ϱz ≥N0 in this bad case. We deduce that (4.13) Z z∈J f(z) dz ≥ Z z∈J | log ε| 2 Z Σz χ √g33 µ12 d⃗u +π N 2 0 −N0  log 1 ϱz −C (ϱz log | log ε| + 1)}  dz. Adding (4.11), (4.13) and the integral of (4.8) with respect to z ∈K, we obtain in view of (4.9) and R z ϱz dz ≤Cϱ (see [Fed69, 4.3.1]) that (4.14) Z z f(z) dz ≥| log ε| 2 Z z Z Σz χ √g33 µ12 d⃗u dz + Z K π N 2 0 −N0  log 1 ϱ dz+ + Z z∈I∪J π N 2 0 −N0  log 1 ϱz −Cϱz log | log ε| dz −C. Finally, we use the concavity of log together with the fact that the integral of ϱz with respect to z is bounded by Cϱ — and also the fact that since f(z) ≤C| log ε| we must have |I∪J| ≥L0/2 if ε is small enough — to find that Z z∈I∪J log 1 ϱz dz ≥ Z z∈I∪J log 1 ϱ −C, which together with (4.14) yields Z z f(z) dz ≥| log ε| 2 Z z Z Σz χ √g33 µ12 d⃗u dz + πL0 N 2 0 −N0  log 1 ϱ −C (ϱ log | log ε| + 1) , which is the desired estimate. □ 4.2. Lower bound up to o(1) error. Proposition 4.3. For any q > 0, C > 0 and integer N, there exist ε0, κ, C′ > 0 such that for any z, any (u, A), and any ε < ε0, the following holds. Assume that (4.15) Z Σz e⊥ ε (u, A)dvolg⊥< π(N + m)| log ε|, VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 45 where e⊥ ε is defined as in (2.10), and m < 1 2 is a small enough constant, to be determined below. Assume that ϱz ≤C| log ε|−1/2, where ϱz = max ∥µz −2πNδ0∥F(Σz), | log ε|−1 , µz = µ12 dx ∧dy. Then there exists points a1, . . . , aN ∈Σz such that we have µz(u, A) −2π N X i=1 δai F(Σz) ≤C| log ε|−q and Z Σz e⊥ ε (u, A) dvolg⊥ ≥π N X i=1 p g33(ai) log 1 ε + πN 2 log δ −π X i̸=j log distg⊥(ai −aj) + Nγ + oε(1) + oδ(1), where δ is as in Proposition 2.2 and γ is the universal constant introduced in [BBH94]. The rest of this subsection is devoted to proving this proposition. The first step is to show we can neglect A by a good choice of gauge. Lemma 4.2. For any q, M > 0 there exists ε0, C > 0 such that if ε < ε0, then any (u, A) such that R Σz e⊥ ε (u, A)dvolg⊥≤M| log ε| is gauge-equivalent to some (v, B) such that ∥µz(u, A) −µz(v, 0)∥F(Σz) ≤C| log ε|−q and such that (4.16) Z Σz e⊥ ε (u, A)dvolg⊥≥ Z Σz e⊥ ε (v, 0)dvolg⊥−oε(1) −oδ(1). Proof. We let p = √g33. Then we chose B = ∗dξ/p where ξ solves  d∗dξ −dp · dξ = p ∗dA in Σz ξ = 0 on ∂Σz. Since dB = dA, (u, A) is gauge-equivalent to (v, B), for some v. Moreover, using the fact that p is smooth and |p −1| ≤Cδ in Σz, elliptic estimates yield bounds for ξ in H2(Σz) and imply, in particular, that ∥B∥L2 ≤Cδ∥dA∥L2, sup x,y∈Σz |ξ(x) −ξ(y) |x −y|1/2 ≤Cδ1/2∥dA∥L2. The L2 bound for B implies straightforwardly that (4.17) Z Σz e⊥ ε (v, 0)dvolg⊥≤C Z Σz e⊥ ε (v, B)dvolg⊥≤CM| log ε| and, since µz(v, 0) −µz(u, A) = µz(v, 0) −µz(v, B) = d((1 −|u|2)B), 46 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY that ∥µz(v, 0) −µz(u, A)∥F(Σz) ≤C∥B∥L2∥1 −|v|2∥L2 ≤Cδε| log ε|. It remains to prove that (4.16) holds. We write (4.18) Z Σz e⊥ ε (u, A)dvolg⊥− Z Σz e⊥ ε (v, 0)dvolg⊥≥ Z Σz p 2 |dB|2 −2⟨j(v, 0), B⟩  dvolg⊥, and then note that, writing j for the restriction of j(v, 0) to Σz and µ = dj, Z Σz p⟨j, B⟩dvolg⊥= Z Σz j ∧dξ = Z Σz ξµ. It is well-known that (4.17) implies that µ may be approximated by a sum of Dirac masses with total mass bounded by CM and error bounded by CMε1/2| log ε| in the dual of C0,1/2. Therefore Z Σz ξµ ≤CM ∥ξ∥∞+ ε1/2| log ε|∥ξ∥C0,1/2  ≤(oδ(1) + oε(1)) ∥dA∥L2. Together with (4.18) and since dA = dB, (4.16) follows. □ The proof of Proposition 4.3 makes use of the parabolic regularization method of [AB98] (itself borrowed from a preliminary version of [BBH94]) to define “essential balls” for the map u, for that we will follow a bit also the presentation in [Ser99a, Ser99c]. Let 0 < η < 1. We define uη as the minimizer of min v∈H1(Bg⊥(0,δ),C) Z Σz 1 2 √g33  |dv|2 g⊥+ 1 2ε2(1 −|v|2)2 + |u −v|2 2ε2η  dvolg⊥. The minimum is achieved by some map uη (which is not necessarily unique, but we make an arbitrary choice). The solution uη is regular and satisfies |uη| ≤1 and |duη| ≤C ε by maximum principle. Also, by obvious comparison Z Σz e⊥ ε (u, 0)dvolg⊥≥ Z Σz e⊥ ε (uη, 0)dvolg⊥. We will denote by Bg⊥the geodesic ball with respect to the metric g⊥. The next lemma provides vortex balls of small size (a power of ε) which are well separated and on which the energy is well bounded below. Lemma 4.3. Let 0 < η < β < µ < 1. Under the assumptions of Proposition 4.3, for ε small enough, there exists a set I with #I bounded by some constant independent of ε, points (ai)i∈I, ¯µ > β, and a radius ρ > 0 such that εµ ≤ρ ≤ε¯µ < εβ and such that, letting Bi = Bg⊥(ai, ρ) and di = deg (uη, ∂Bi) we have: (i) |ai −aj| ≥8ρ for all i ̸= j ∈I; (ii) dist(ai, ∂Σz) ≥εβ; VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 47 (iii) if x /∈∪i∈IBi then (4.19) |uη(x)| ≥1 2, whereas |uη| ≥1 − 2 | log ε|2 on ∪i∈I ∂Bi; (iv) for i ∈I, it holds that (4.20) Z ∂Bi e⊥ ε (uη, 0)dvolg⊥≤C(β, µ) ρ ; (v) for i ∈I, it holds that for some c > 0 independent of ε, (4.21) Z Bi e⊥ ε (uη, 0)dvolg⊥≥max  p(ai)π|di| log ρ ε + O(1), c| log ε|  ; (vi) and for any 0 < α ≤1, there exists κ > 0 such that (4.22) 2π X i∈I diδai −d(iu, du) F(Σz) ≤εκ We omit the proof which follows closely the lines of [AB98] or [Ser99a,Ser99c], except with balls replaced by metric balls, and with the weight. The relation (4.22) at the level of uη is a direct consequence of the Jacobian estimates, see for instance [SS07, Chapter 6], it is also true for u by the a priori bound on the energy minimized by uη, see [Ser99a, Lemma 4.2]. Using again the shortcut p for √g33, we now have the following result obtained by growing the balls from the prior lemma by a two-stage ball growth process. Lemma 4.4. Under the assumptions of Proposition 4.3, we have di = 1 for all i ∈I and #I = N. Also all the ai’s, i ∈I, belong to Bg⊥(0, Cϱz) ⊂Bg⊥(0, C| log ε|−1/2). Moreover, either (i) the balls Bg⊥(ai, r) with r = | log ε|−8 are such that |ai −aj| ≫r for all i ̸= j ∈I and for all i ∈I, Z Bg⊥(ai,r)\Bi e⊥ ε (uη, 0)dvolg⊥≥πp(ai) log r ρ + o(1) or (ii) there exist a family of disjoint geodesic balls ¯Bk = Bg⊥(bk, rk), containing the Bi’s, of total radius ¯r = Nrk = 1 | log ε|2 such that, letting ¯dk = deg(uη, ∂¯Bk), we have X k Z B′′ k e⊥ ε (uη, 0)dvolg⊥≥π X k (min ¯Bk p)| ¯dk| log ¯r ε + π log | log ε|. Proof. First we prove that di ≥0 for all i ∈I. Since √g33 ≥1 −o(1) in the support of χ, and choosing the constant µ close enough to 1, we get in view of (4.21) combined with (4.15) that P i∈I |di| < N + 2m < N + 1. On the other hand, as seen in the proof of Proposition 4.2, 48 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY denoting by L a minimal connection between P i diδai and Nδ0 relative to the metric g⊥and allowing connections to the boundary, in view of (4.22) we have |L| ≤C(ϱz + εκ) ≤Cϱz. This implies that there must be at least N points (counted with multiplicity) of di ≥0 connected to 0. Since P i∈I |di| ≤N, this is only possible if for each i ∈I, di ≥0, and no point is connected to the boundary. This shows that all the points ai are at distance ≤Cϱz from 0. We now turn to the lower bounds. Let us grow the geodesic balls Bi via the ball construction method with weight and metric, exactly as was done in Proposition 4.2, using as final total radius parameter r′ = 1 | log ε|4. This gives a collection of geodesic balls B′ j. We then regrow the balls into geodesic balls of final total radius r′′ = 1 | log ε|2, this gives a collection B′′ k. We have for every j, (4.23) Z B′ j\∪i∈IBi e⊥ ε (uη, 0)dvolg⊥≥π|d′ j|  min B′ j p   log r′ ε −C  and for every k, (4.24) Z B′′ k \∪jB′ j e⊥ ε (uη, 0)dvolg⊥≥π|d′′ k|  min B′′ k p   log r′′ r′ −C  . Let us now consider a final ball B′′ k and d′′ k its degree. Since all initial degrees were seen above to be nonnegative, we have |d′′ k| = P i,Bi⊂B′′ k |di|. We next add all the energy contributions inside B′′ k from (4.21), (4.23) and (4.24). If B′′ k contains an initial ball (Bi)i∈I of degree di = 0, then we use the lower bound by c| log ε| in (4.21) and we deduce that Z B′′ k e⊥ ε (uη, 0)dvolg⊥ ≥π|d′′ k|  min B′′ k p  log r′′ ε + c| log ε| −C ≥π|d′′ k|  min B′′ k p  log r′′ ε + 1 2c| log ε|, for ε small enough. Summing over all the balls and comparing with the upper bound (4.15), we conclude to a contradiction if m is taken small enough compared to c. We have thus shown that di ≥1 for all i ∈I, i.e. for all the balls Bi obtained from Lemma 4.3. Since all degrees are nonnegative, the ball growth procedure from the Bi’s to the B′ j’s yields really an energy at least π min B′ j p X i,Bi⊂B′ j |di|2  log r′ Cρ −C  . If di ≥2 for some i ∈I, this gives at least an extra energy of c′| log ε|, for some c′ > 0, than announced, which again yields a contradiction with (4.15) if m is chosen small enough. Thus, we have shown that di = 1 for all i ∈I. Next, let us first consider the case where for some k, one B′ j ⊂B′′ k contains more than one Bi. Since all the di = 1, this implies d′ j ≥2. Then, since all degrees are nonnegative, the ball VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 49 growth procedure from the B′ j’s to the B′′ k’s yields really an energy π min B′′ k p X j,B′ j⊂B′′ k |d′ j|2  log r′′ r′ −C  , so at least an extra π log | log ε| energy compared to what was announced. Summing over all balls, we may thus deduce in this case that X k Z B′′ k e⊥ ε (uη, 0)dvolg⊥≥π X k  min B′′ k p  |d′′ k| log r′′ ε + π log | log ε|. The proof is concluded via item (ii) with the ¯Bk’s equal to the B′′ k’s. There remains to consider the case where none of the B′ j’s contain more than one Bi, which means that d′ j = 1 and that there are no mergings in the growth from Bi to B′ j: each B′ j contains a unique Bi, which is simply inflated. Let us then consider new B′ i’s equal to the geodesic balls of centers equal to the ai and radii r = 1 | log ε|8, i.e. we restart the ball construction from the Bi. Since there were no mergings previously, it means that these balls B′ i are disjoint, and separated by at least | log ε| times their radii. Moreover, the energy over the annulus B′ j\Bi is bounded below by πp(ai) log r ε −o(1) (the error o(1) is due to the variation in p, and to 1 −|uη|, which is very small by (4.19)). The proof is then concluded in this case as well. □ Proof of Proposition 4.3. Now that we know that all di = 1, we may also refine the upper bound (4.15) into Z Σz e⊥ ε (u, A)dvolg⊥≤π(N + o(1))| log ε| because otherwise what we want to prove is true. Thanks to that, an examination of the proof in [AB98], [BS99, Appendix] shows that we may refine (4.20) into Z ∂Bi e⊥ ε (uη, 0)dvolg⊥≤πp(ai) + o(1) ρ and thanks to this bound we have Z ∂Bi e⊥ ε (uη, 0)dvolg⊥≥πp(ai) log ρ ε + γ + o(1). This is an adaptation from the analysis of [BBH94], with metric, and here γ is the constant of [BBH94]. Combining all these results, we have obtained either a collection of N balls Bg⊥(ai, r), r = | log ε|−8 such that |ai −aj| ≫r and for each i, Z Bg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥≥πp(ai) log r ε + γ + o(1) or a collection of balls Bg⊥(bk, rk) with rk ≤| log ε|−2 and (4.25) Z ∪kBg⊥(bk,rk) e⊥ ε (uη, 0)dvolg⊥≥π X i p(ai) log 1 | log ε|2ε + π log | log ε|. 50 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Let us call the two cases Case 1 and Case 2. Let ℓ= C| log ε|−1/2 with C ≥2 be such that all the balls Bi are in Bg⊥(0, ℓ/2). For any r ∈(ℓ, δ), in view of (4.19), the fact that di = 1 and #I = N, we have that deg(uη, ∂Bg⊥(0, r)) = N. In Case 2, we may grow the balls Bg⊥(bk, rk) further to reach a final total radius | log ε|−1/2, and still all the balls will be included in B(0, ℓ). We retrieve this way an extra energy (4.26) Z B(0,ℓ)\∪kBg⊥(bk,rk) e⊥ ε (uη, 0)dvolg⊥≥π X i∈I p(ai) log(ℓ| log ε|2) + o(log | log ε|). Integrating then over circles of radius r ∈(ℓ, δ), for instance as in Step 2 of the proof of Proposition 4.2, and using p ≥1 −o(1), we also obtain (4.27) Z B(0,δ)\B(0,ℓ) e⊥ ε (uη, 0)dvolg⊥≥π(1 −o(1))N 2 log δ ℓ. Adding (4.25), (4.26) and (4.27), we obtain that Z ∪kBg⊥(bk,rk) e⊥ ε (uη, 0)dvolg⊥≥π X i∈I p(ai) log 1 ε + πN 2 log δ + πN(N −1) log 1 ℓ+ π 2 log | log ε|, which implies the desired inequality. Let us now turn to Case 1. We consider the energy in B(0, | log ε|−1/8)\ ∪i B(ai, r). In this punctured domain, p = 1 + O(| log ε|−1/8), so we may use this as a bound from below and get Z Bg⊥(0,| log ε|−1/8)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥(1 −O(| log ε|−1/8) Z Bg⊥(0,| log ε|−1/8)\∪iBg⊥(ai,r) 1 2  |duη|2 g⊥+ 1 2ε2(1 −|uη|2)2  dvolg⊥ To bound from below the right-hand side we may use the Bethuel–Brezis–H´elein theory with metric g, for instance as written down in [IJ21, Section 2.2]. This yields Z Bg⊥(0,| log ε|−1/8)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥(1 −O(| log ε|−1/8)  πN log 1 r + Wg⊥(ai, . . . , aN) + o(1)  where Wg⊥(a1, . . . , aN) = −π X i̸=j log distg⊥(ai, aj) + π X i,j R(ai, aj) and R(x, y) = 2πG(x, y) + log distg⊥(x, y), VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 51 where G is the Green function, solution of (see for instance [Aub98, Chapter 2]) ( −∆g⊥G(x, y) = δy in Bg⊥(0, | log ε|−1/8) G(x, y) = 0 for x ∈∂Bg⊥(0, | log ε|−1/8). Since we know that all ai ∈Bg⊥(0, C| log ε|−1 2) we have that R(ai, aj) ∼R(0, 0) as ε →0. Moreover, R(0, 0) is easily computed to be log | log ε|−1/8. We thus have found Z Bg⊥(0,| log ε|−1/8)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥πN log 1 r + π X i̸=j log distg⊥(ai, aj) + πN 2 log | log ε|−1/8 + o(1). Finally, we bound from below the energy over Bg⊥(0, δ)\Bg⊥(0, | log ε|−1/8). As in [Ser99b, Lemma A.1], the co-area formula and the energy upper bound yield the existence of t ∈ h 1 − 3 | log ε|2, 1 − 2 | log ε|2 i such that H1(|uη(x)| = t) ≤Cε| log ε|3. Since |uη| ≥1 − 2 | log ε|2 on ∂Bi, this implies that the set S of r’s such that {|uη| < t} intersects ∂Bg⊥(ai, r) is of measure less than Cε| log ε|3. Using also that p(x) ≥1 −O(|x|) and deg(uη) = N, we may now bound from below Z Bg⊥(0,δ)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥1 2  1 − 3 | log ε|2  Z [| log ε|−1/8,δ]\S (1 −Cr)(2πN)2 2πr dr ≥πN 2 log δ | log ε|−1/8 −Cδ. Here we have optimized by checking that the smallest value of the integral is taken when S is at the lower end of the interval. Adding all the results we conclude that Z Bg⊥(0,δ) e⊥ ε (uη, 0)dvolg⊥≥π N X i=1 p(ai) log δ ε + Nγ −π X i̸=j log distg⊥(ai, aj) + o(1) −Cδ hence the result is proved. □ 5. Distance of filaments to Γ0 and main lower bound In this section, we complete the proof of the lower bound part of the main theorem, by proving the following. Proposition 5.1. Assume that the smooth simple curve Γ0 is a unique nondegenerate max- imizer of the ratio R. For any ε > 0, assume hex = H0 c1 + K log | log ε| with K bounded independently of ε, and let (u, A) be a minimizer of GLε and (u, A) = (e−ihexϕ0u, A −hexA0). 52 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Then for any sequence {ε} tending to 0, there exists a subsequence such that µ(u, A)/2π is well approximated by a sum of N simple curves Γ1, . . . , ΓN, where N is independent of ε in the sense that, for an arbitrarily small δ > 0 in the tube coordinates definition, (5.1) X i Γi −µ(u, A) ∗ ≤C (log | log ε|)2 | log ε|4/3 , X i Γi −µ(u, A) F(Ωε) ≤C (log | log ε|)2 | log ε|4/3 , where (5.2) Ωε =  x ∈Ω| dist(x, ∂Ω) > | log ε|−2 . Moreover, the curves Γi converge as ε →0 to Γ0 uniformly, and writing Γi(t) = Γ0(zi(t)) + ⃗ui(t) in tube coordinates as piecewise graphs over Γ0, then the rescaled curves eΓi(t) = Γ0(zi(t)) + r hex N ⃗ui(t) converge as ε →0 in ∥· ∥∗norm to Γ∗ i (z) = Γ0(z) + ⃗u∗ i (z) and (5.3) GLε(u, A) ≥h2 exJ0 + π 2 L0N(N −1) log hex −2πK R0 L0N log | log ε| −π 2 L0N(N −1) log N + WN(Γ∗ 1, . . . , Γ∗ N) + πL0Nγ + N 2CΩ+ oε(1) + Cδoε(1) + oδ(1), where WN is as in (1.9). Finally, if N > 1, then max 1≤i≤N ∥⃗u∗ i ∥L∞> 0. The proof of this proposition involves several steps, the first goal being to compute a lower bound for GLε(u, A) up to O(1) in terms of a suitable vortex filament approximation of the vorticity measure µ(u, A), which then allows to determine the typical distance from the filaments to Γ0, and then improve the lower-bound to o(1) precision. The first step is to choose the scale ℓof the horizontal blow-up in a way such that the vorticity remains concentrated near Γ0 at this scale (Step 1), which in turn implies (Step 2) that we may bound from below F ⊥ ε (u, A) in terms of the vorticity in the ℓ-tube around Γ0. In Step 3 we construct vortex filaments for the tube blown-up at scale ℓhorizontally, and show that the distance of the vortex filaments to Γ0 is in fact much smaller than ℓ, which allows to apply Proposition 3.4 to bound from below in a sufficiently precise way Fε(u, A). The final step uses the resulting lower bound of GLε(u, A) and, combining with the matching upper bound, draws the consequences stated above. Proof. Throughout the proof we write µ instead of µ(u, A). We start with a preliminary claim, that there exists C > 0 such that for any curve Γ with no boundary in Ωand any vector field X, we have (5.4) | ⟨X, Γ⟩| ≤C|Γ|2∥curl X∥L∞. Indeed, given Γ, there exists a surface Σ such that Γ = ∂Σ∩Ωand such that |Σ| ≤C|Γ|2. Then ⟨X, Γ⟩= Z Σ curl X, VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 53 from which the claim follows. Step 1. µ is ℓ-concentrated near Γ0. From the nondegeneracy hypothesis and Proposition 3.5, Condition 2.1 is satisfied with P = 2. Then, from Theorem B applied for instance with α = 3/10, we know that for any ε > 0 there exists Lipschitz curves Γ1, . . . , Γk, with k bounded independently of ε, and a normal current eΓ without boundary in Ωsuch that for 1 ≤i ≤k we have, (5.5) ∥Γi −Γ0∥∗≤ C | log ε|1/7, ||Γi| −L0| ≤ C | log ε|1/7 and such that (5.6) |eΓ| ≤ C | log ε|2/3. Moreover, (5.7) µ −2π N X i=1 Γi −2πeΓ ∗ , µ −2π N X i=1 Γi −2πeΓ F(Ωε) ≤C| log ε|−2, where, since the number k is bounded independently of ε, we have assumed it is equal to some fixed integer N by going to a subsequence, and where Ωε is defined in (5.2). (Note that the log | log ε| factor in Theorem B, (3) has been absorbed by using a different power for | log ε| to obtain (5.6)). In particular, as a consequence of (5.6) and (5.4) we have (5.8) ∥eΓ∥∗≤ C | log ε|4/3, ∥eΓ∥F(Ω) ≤ C | log ε|4/3. From now on, we let (5.9) ℓ:= 1 (log | log ε|)2. Note that the power 2 in (5.9), in (5.7), and in (5.2) is arbitrary, it could be any large number. We consider coordinates in a neighborhood of Γ0 as in Proposition 2.2, the coordinate domain being Cδ. For carrying out the horizontal blow-up procedure, we need to work in a smaller neighbourhood of Γ0. For convenience we use a cylinder in tube-coordinates. Let Cr := B(0, r) × (0, L0). We let χℓbe a cutoff function for the cylinder Cℓ: χℓis equal to 1 on Cℓ/2 and equal to 0 outside Cℓ, its gradient is bounded by C/ℓ. Then, from (5.5) and Lemma 3.5, every Γi is included in a tubular neighborhood of Γ0 with radius C| log ε|−1/14, hence χℓΓi = Γi. Thus, in view of (5.7), we find that (5.10) ∥(1 −χℓ)µ∥∗, ∥(1 −χℓ)µ∥F(Ωε) ≤C (log | log ε|)2 | log ε|4/3 54 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY and that the same bounds hold for (χδ −χℓ)µ, with a constant depending on δ. Step 2. Lower bound for F ⊥ ε (u, A). Inserting into the splitting formula (2.2) the definition (1.7), the fact that hex = H0 c1 + K log | log ε| = | log ε| 2 R(Γ0) + K log | log ε|, and the minimality of (u, A) which implies that h2 exJ0 ≥GLε(u, A), we find (5.11) oε(1) ≥Fε(u, A) −  | log ε| 2 R(Γ0) + K log | log ε|  ⟨B0, µ⟩. But, again using (5.5)–(5.7) and the definition (1.6), we have ⟨B0, µ⟩= 2πL0N R(Γ0) + O(| log ε|−1/7), which together with (5.11) implies that (5.12) Fε(u, A) ≤πL0N| log ε| + O(| log ε|6/7). It also follows directly from (5.5)–(5.7) that (5.13) ∥µ −2πNΓ0∥∗= O(| log ε|−1/7), but to apply Proposition 4.1 on Cδ, where Cδ is defined in Proposition 2.2, we need to check instead that the flat distance between µ and 2πNΓ0 tends to 0 with ε, which we can prove is true in Ωε but not in Ω. From (5.5) and Lemma 3.5 we find that each Γi is included in a tubular neighborhood of Γ0 with radius C| log ε|−1/14, and that, in tube coordinates, its endpoints have vertical coordinate 0 and L0, respectively. Then, Lemma 3.4 implies that ∥Γi −Γ0∥F(Ω) ≤C| log ε|−1 14. Hence, combining with (5.8) and (5.7), we find that (5.14) ϱ := max  ∥µ −2πNΓ0∥F(Ωε), | log ε|−1 = O(| log ε|−1 14). It follows from (5.14) and (5.12) that we may apply Proposition 4.1 in a subdomain Cδ ε of Cδ obtained by stripping layers of height | log ε|−2 at the top and bottom. We find that F ⊥ ε (u, A) ≥| log ε| 2 Z Cδε χδ √g33µ12 + πL0N(N −1) log 1 ϱ −C(1 + ϱ log | log ε|). But, integrating by parts on each slice, by definition of µ, we have Z Cδ\Cδε χδ √g33 µ12 = Z Cδ\Cδε d(χδ √g33) ∧j(u, A) + χδ √g33 dA ≤C Z Cδ\Cδε eε(u, A) 1/2 |Cδ \ Cδ ε|1/2 = oε(1), so that, using also (5.10), we conclude that (5.15) F ⊥ ε (u, A) ≥| log ε| 2 ⟨χℓµ⟩2D + πL0N(N −1) log 1 ϱ −C(1 + ϱ log | log ε|). VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 55 Step 3. Lower bound for Fε(u, A)−hex ⟨B0, µ⟩. We apply one more time the curve construction of Theorem A, this time on the cylinder Cℓequipped with the metric ˜g defined as above by ˜gij = ℓ−2gij if 1 ≤i, j ≤2 and ˜gij = gij otherwise. We find that there exists a polyhedral 1-current ν with no boundary relative to Cℓsuch that (5.16) ˜Fε(u, A) ≥1 2(| log ε| −C log | log ε|)|ν|˜g −oε(1), ∥µ −ν∥∗,ℓ≤ C | log ε|q , where the ∥·∥∗,ℓdenotes the norm in the dual space C0,1 T (Cℓ) ∗and q may be chosen arbitrarily large. Here ˜Fε(u, A) is defined in (2.11), the integral could in fact be taken over Cℓbut we will not use this fact. We also have (5.17) ∥µ −ν∥F(Cℓ,ε) ≤ C | log ε|q , where Cℓ,ε is the set of points in Cℓat distance at least | log ε|−2 from the boundary. Note that, even though we cannot directly apply Theorem A to the functional ˜Fε(u, A), since it involves a non-Euclidean metric, a straightforward modification of the proof in [Rom19b] reveals that it holds in this case as well. Indeed the proof involves summing lower-bounds on an appropriate grid of cubes of side-length an arbitrarily large negative power of | log ε|. In our case, we can approximate the metric by a constant metric in each cube, which will thus be Euclidean after a linear change of coordinate. We can then obtain the desired energy lower bound and Jacobian estimate in each cube, the errors due to the non constant metric will be an arbitrarily large negative power of | log ε|. Note also that the lower bound really involves ˜ε = ε/ℓ, but this only introduces an error of order | log ℓ| which is absorbed in the term C log | log ε|. Also, ∥· ∥∗,ℓshould be understood relative to the metric ˜g, but it differs from the Euclidean version by at most a factor Cℓ2 which does not alter the above bound considering that q is arbitrary anyway. It follows from (5.16) and (5.13), that (5.18) ∥ν −2πNΓ0∥∗,ℓ≤ C | log ε|1/7. In particular, using (5.4) we have that (5.19) |ν|˜g ≥1 C . Now we recall from (2.12) the relation ℓ2 ˜Fε(u, A) ≤ℓ2F ⊥ ε (u, A) + F z ε (u, A). 56 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Therefore, multiplying (5.15) by (1 −ℓ2), using the choice of (5.9) and adding ℓ2 times (5.16), we obtain that (5.20) Fε(u, A) ≥1 2| log ε|  (1 −ℓ2) ⟨χℓµ⟩2D + ℓ2|ν|˜g  + π(1 −ℓ2)L0N(N −1) log 1 ϱ + O 1 + ℓ2 log | log ε||ν˜g|  , where we have used the fact that ϱ log | log ε| = oε(1), in view of (5.14). Moreover, from (5.10), (5.13) we have in view of (3.4) that ⟨χℓµ⟩2D =  χℓ ∂ ∂z, 2πNΓ0  + O(| log ε|−1/7) = 2πNL0 + O(| log ε|−1/7). Inserting this into (5.20) and comparing with (5.12), we find that (5.21) |ν|˜g −2πL0N ≤O(ℓ−2| log ε|−1/7) = O(| log ε|−1/8). We then let, as in the proof of Lemma 3.5, X = χℓ ∂z √g33 . Note that ˜g33 = g33. Then from (5.18), (5.19), and (5.21), we have (5.22)  X, Γ0 L0 − ν |ν|˜g  =  X, 2πNΓ0 −ν |ν|˜g  + ⟨X, 2πNΓ0⟩  1 2πNL0 − 1 |ν|˜g  ≤O(| log ε|−1/8). Here we have used the fact that |Γ0|˜g = L0, that ⟨X, Γ0⟩= L0 and that the left-hand side is positive: indeed, since ∥X∥∞≤1 and since X restricted to Γ0 is precisely the unit tangent vector, it holds that  X, ν |ν|˜g  ≤1 =  X, Γ0 L0  . Next, we decompose ν as a sum of simple curves {νi}i∈I, so that  X, Γ0 L0 − ν |ν|˜g  = X i∈I αi  1 −  X, νi |νi|˜g  := X i∈I αi∆i, αi = |νi|˜g |ν|˜g . The ∆i’s are nonnegative. We let Igood denote those indices for which ∆i < | log ε|−1/16 and we denote {Γi}i∈Igood the corresponding curves. The rest of the indices is denoted Ibad, and the sum of corresponding curves νbad, so that (5.23) ν = X i∈Igood 2πΓi + νbad. Then (5.22) implies that the sum of the coefficients αi for i ranging over Ibad is O(| log ε|−1/16). Therefore, since the total length of ν is bounded, the total length of bad curves is O(| log ε|−1/16) as well. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 57 As for the good curves, that we denote {Γi}i∈Igood, we have if i ∈Igood that (5.24) ∆i =  X, Γ0 L0 − Γi |Γi|˜g  ≤| log ε|−1/16. Since this is much smaller than ℓ2 defined in (5.9), we deduce from Lemma 3.5 that the good curves are included in a tube of radius O(| log ε|−1/32) around Γ0 and that each of their lengths is equal to L0 + O(| log ε|−1/32). We thus have (5.25) |Γi|˜g = L0 + O(| log ε|−1/32), |νbad|˜g = O(| log ε|−1/16). In view of the estimate (5.18) we deduce that there are exactly N good curves, that we denote from now on Γ1, . . . , ΓN. We recall that from (5.23), (5.25), and and (5.21), |ν|˜g = 2πNL0 + oε(1). Going back to (5.20) we now express ⟨χℓµ⟩2D in terms of the curves Γi, using the vorticity estimate in (5.16) and (5.23). Since |ν˜g| = O(1), we find (5.26) Fε(u, A) ≥| log ε| 2 (1 −ℓ2) N X i=1 2π ⟨Γi⟩2D + νbad 2D ! + ℓ2|ν|˜g ! + π(1 −ℓ2)L0N(N −1) log 1 ϱ + O(1), where we also used the fact that ℓ2 log | log ε| = oε(1). Similarly, let us rewrite hex ⟨B0, µ⟩. First we note that, from (5.10) and (5.16), hex ⟨B0, µ⟩= hex ⟨χℓB0, µ⟩+ oε(1) = hex ⟨χℓB0, ν⟩+ oε(1). Then, from (5.18) and (5.25) that N X i=1 Γi −NΓ0 ∗ ≤C| log ε|−1/8, using again (5.4) to bound ∥νbad∥∗. From the vorticity estimate in (5.16) and since χℓB0 ∈ C0,1 T (Cℓ), using (5.10), we deduce that (5.27) hex ⟨B0, µ⟩=  | log ε| 2 R(Γ0) + K log | log ε|  χℓB0, νbad + N X i=1 2π ⟨B0, Γi⟩ ! + oε(1) = π| log ε| R(Γ0) N X i=1 ⟨B0, Γi −Γ0⟩+ πL0N| log ε| + 2πNK ⟨B0, Γ0⟩log | log ε| + O(| log ε| χℓB0, νbad ) + oε(1). Then we substract off (5.27) from (5.26), noting that ℓ2|ν|˜g = ℓ2 |νbad|˜g + 2π N X i=1 |Γi|˜g ! , 58 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY and that, since νbad 2D and B0, νbad are O(|νbad|2) — which is a negative power of | log ε| times |νbad| in view of (5.4) — the terms π| log ε|(1 −ℓ2) νbad 2D and | log ε| χℓB0, νbad may be absorbed in the term 1 2| log ε|ℓ2|νbad|˜g. Also, from (5.14) and (5.9) we have that ℓ2 log(1/ϱ) = oε(1). Notice that here, the maximization with | log ε|−1 in the definition of ρ in (5.14) plays a key role. We thus obtain (see Definition 3.1) (5.28) Fε(u, A) −hex ⟨B0, µ⟩≥πL0N(N −1) log 1 ϱ −2πK ⟨B0, Γ0⟩N log | log ε|+ + π| log ε| N X i=1 Qℓ(Γi) + cℓ2| log ε||νbad|˜g + O(1), for some c > 0. Step 4. Convergence of blown-up curves. We write Γi as a piecewise graph over Γ0 as above, letting Γi(t) = Γ0(zi(t)) + ⃗ui(t). From (5.24) and Lemma 3.5 we have that ∥⃗ui∥L∞≪ℓfor every i. Thus, Proposition 3.4 implies that (5.29) Qℓ(Γi) ≥c∥⃗ui∥2 L∞. It also follows from (5.10), (5.17), Lemma 3.4 applied to each of the curves Γ1,. . . ,ΓN with eΓ = Γ0, and (5.4) applied to estimate ∥νbad∥F(Cℓ,ε) that (5.30) ϱ ≤C X i ∥⃗ui∥L∞+ |νbad|2 ˜g ! + O((log | log ε|)2| log ε|−4/3) + | log ε|−1. On the other hand, by minimality of (u, A), we deduce from the upper bound of Theorem 6.1 and (2.2) that (5.31) Fε(u, A) −hex ⟨B0, µ⟩≤π 2 L0N(N −1) log | log ε| −2πK ⟨B0, Γ0⟩N log | log ε| + O(1). Let Y = | log ε| N X i=1 Qℓ(Γi) + ℓ2|νbad|˜g ! . From (5.29), (5.30), and the fact that we have ℓ2 ≥|νbad|3 ˜g in view of (5.25), we get (5.32) ϱ2| log ε| ≤CY + oε(1). Combining (5.28) and (5.31), in view of (5.32), we find (5.33) π 2 L0N(N −1) log 1 CY + oε(1) + cY ≤πL0N(N −1) log 1 ϱ p | log ε| + cY ≤O(1). It follows that Y = O(1), ϱ ≤C| log ε|−1/2, and then that for every 1 ≤i ≤N, in view of (5.29), (5.34) ∥⃗ui∥L∞≤ C p | log ε| , Qℓ(Γi) ≤ C | log ε|. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 59 It also follows that (5.35) |νbad|˜g ≤C (log | log ε|)4 | log ε| . If N > 1 then (5.33) implies in addition that ϱ is bounded below by c| log ε|−1/2 and thus, in view of (5.35), from (5.30) we deduce that (5.36) max i ∥⃗ui∥L∞≥ c p | log ε| . Recalling (5.23), the vorticity estimates in (5.1) follow from (5.10), the vorticity estimate in (5.16), (5.17), and (5.4) together with (5.35) and (5.23). We denote eΓi the curve Γi blown up horizontally by a factor q hex N = q | log ε| 2 R0 N + oε(1), so that eΓi(t) = Γ0(zi(t))+ q hex N ⃗ui(t). From (5.34) and Proposition 3.4, there exists a subsequence {ε} tending to zero such that eΓi converges, for any i = 1, . . . , N, as ε →0, uniformly and as currents, to an H1-graph Γ∗ i (z) = Γ0(z) + ⃗u∗ i (z). Notice that if q hex N ∥⃗ui∥∞→0, then eΓi converges to Γ0. Moreover, from (5.36), if N > 1, then max 1≤i≤N ∥⃗u∗ i ∥L∞> 0. Step 5. Improved lower bound. We return to bounding from below Fε(u, A, Uδ) using (2.13). As above we denote by ν the polyhedral 1-current obtained by applying Theorem B on the cylinder Cℓequipped with the metric ˜g. It decomposes as νbad + νgood, where νgood = 2π PN i=1 Γi. From (5.16), (5.23), and (5.35) we find (5.37) ˜Fε(u, A) ≥π N X i=1 |Γi|˜g| log ε| + O(log | log ε|). Also, we may slice ν (resp. νgood) by the coordinate function z defined on Uδ. This provides a family of 0-currents {νz}z (resp. {(νgood)z}z), where z belongs to a set of full measure in (0, L0). Both νz and (νgood)z are sums of Dirac masses with weights belonging to 2πZ for almost every z. From (5.17), we know that ∥χℓ(µ −ν)∥F(Ωε) ≤Cℓ−1| log ε|−q. Moreover χℓνgood = νgood since, from the previous step (see (5.34)), we know that each Γi is included in a tube of radius O(1/ p | log ε|) around Γ0. Thus, in view of (5.10), and using (5.4) together with (5.35) to estimate ∥νbad∥F(Ωε), we find that ∥µ −νgood∥F(Ωε) ≤C (log | log ε|)2 | log ε| 4 3 , and then that Z L0−| log ε|−2 z=| log ε|−2 ∥µz(u, A) −(νgood)z∥F(Σz)dz ≤∥µ(u, A) −νgood∥F(Ωε) ≤C (log | log ε|)2 | log ε| 4 3 . 60 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY For any ε > 0 we define Tε = n z ∈[| log ε|−2, L0 −| log ε|−2] | ∥µz −(νgood)z∥F(Σz) ≤| log ε|−7 6 o , T c ε = [0, L0] \ Tε. It follows from the above that (5.38) |T c ε | ≤C(log | log ε|)2| log ε|−1 6. Let z ∈Tε. We claim that, for any ε > 0 small enough (depending on z), we have (5.39) Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u −πN 2 log δ + πN(N −1) log r N hex ≥−π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ + oε(1) + oδ(1), if ⃗u∗ i (z) ̸= ⃗u∗ j(z) for all i ̸= j, whereas if ⃗u∗ i (z) = ⃗u∗ j(z) for some i ̸= j, then lim inf ε→0 Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u + πN(N −1) log r N hex = +∞. To prove the claim, we consider three cases: Case 1. If R Σz e⊥ ε (u, A) dvolg⊥> | log ε|3, then, by integration by parts, we have Z Σz χℓ √g33µ12d⃗u = Z Σz d(χℓ √g33) ∧j(u, A) + χℓ √g33dA ≤Cℓ Z Σz e⊥ ε (u, A) dvolg⊥  1 2 , and therefore Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u > | log ε|3 −C | log ε| 5 2 (log | log ε|)2, which implies the claim. Case 2. If π(N + m)| log ε| ≤ R Σz e⊥ ε (u, A) dvolg⊥≤| log ε|3 for some m > 0, we can apply Lemma 4.1, which provides the existence of points a1, . . . , ak and integers d1, . . . , dk such that (5.40) Z Σz e⊥ ε (u, A) dvolg⊥≥π X i |di| p g33(ai) (| log ε| −C log | log ε|) , and 2π X i diδai −µ12d⃗u F(Σz) ≤C| log ε|−3. In addition, from the definition of Tε, we deduce that 2π X i diδai −(νgood)z F(Σz) ≤C| log ε|−7 6. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 61 In particular, we have that (5.41) Z Σz χℓ √g33µ12d⃗u = Z Σz χℓ √g33(νgood)z + O(| log ε|−7 6) = 2πN + O(| log ε|−1 2), where in the last equality we used the fact that g33 is equal to 1 on the axis z = 0 and that all good curves are contained in a tubular neighborhood of radius O(| log ε|−1 2) around Γ0. Hence, combining (5.40) with (5.41), we find Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u ≥π X i p g33(ai)|di|| log ε| −πN| log ε| + O(| log ε| 1 2). If P i |di| > N, the claim immediately follows. On the other hand, if P i |di| ≤N, then by combining π(N + m)| log ε| ≤ R Σz e⊥ ε (u, A) dvolg⊥with (5.41), we find Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u ≥πm| log ε| + O(| log ε| 1 2). Once again, since m > 0, the claim follows. Case 3. If R Σz e⊥ ε (u, A) dvolg⊥< π(N + m)| log ε|, since all good curves are contained in a tubular neighborhood of radius O(1/ p | log ε|) around Γ0, we deduce that 2πNδ0 −(νgood)z F(Σz) ≤C| log ε|−1 2, (that we also used in (5.41)) which together with the definition of Tε gives ϱz = max n ∥µz −2πNδ0∥F(Σz) , | log ε|−1o ≤C| log ε|−1 2. We can therefore apply Proposition 4.3 (choosing m sufficiently small), which provides the existence of N distinct points a1, . . . , aN, such that (5.42) Z Σz e⊥ ε (u, A) dvolg⊥≥π N X i=1 p g33(ai)| log ε| + πN 2 log δ −π X i̸=j log distg⊥(ai −aj) + Nγ + oε(1) + oδ(1) and (5.43) 2π N X i=1 δai −µ12d⃗u F(Σz) ≤C| log ε|−s, 62 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY where s > 0 can be chosen arbitrarily large. In addition, from the definition of Tε, we deduce that (5.44) 2π N X i=1 δai −(νgood)z F(Σz) ≤C| log ε|−7 6. From (5.44) we deduce that, for any i = 1, . . . , N, there exists a point, that we denote Γi,ε(z), belonging to Γi,ε, such that N X i=1 distg⊥(ai, Γi,ε(z)) ≤∥µz −(νgood)z∥F(Σz) ≤C| log ε|−7 6. From this and the fact that the blown up good curves ˜Γi,ε converge uniformly to Γ∗ i , letting ˜ai denote the point ai blown up horizontally by a factor q hex N , we deduce that ˜ai converges to ˜Γ∗ i (z) as ε →0. Finally, we note that (5.45) −π X i̸=j log distg⊥(ai −aj) = −π X i̸=j log |˜ai −˜aj|g• −πN(N −1) log r N hex + oε(1), and that, when passing to the limit ε →0 and using the lower semi-continuity of −log, we have lim inf ε→0 −π X i̸=j log |˜ai −˜aj|g• = −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• if ⃗u∗ i (z) ̸= ⃗u∗ j(z) for all i ̸= j, whereas if ⃗u∗ i (z) = ⃗u∗ j(z) for some i ̸= j, then lim inf ε→0 −π X i̸=j log |˜ai −˜aj|g• = +∞. The claim (5.39) thus follows from combining this with (5.42), (5.45) and (5.43). We next integrate over z ∈[0, L0]. We claim that the contribution from T c ε is bounded below by oε(1), i.e. that (5.46) Z z∈[0,L0] Z Σz e⊥ ε (u, A) dvolg⊥−1 2 Z Σz χℓ √g33µ12d⃗u log 1 ε ≥ Z z∈Tε Z Σz e⊥ ε (u, A) dvolg⊥−1 2 Z Σz χℓ √g33µ12d⃗u log 1 ε + oε(1). Let z ∈(Tε)c. We consider two cases. First, if R Σz e⊥ ε (u, A) dvolg⊥> | log ε|3, then, arguing exactly as in Case 1., we find that (5.47) Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u > | log ε|3 −C | log ε| 5 2 (log | log ε|)2 ≥0. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 63 Second, if R Σz e⊥ ε (u, A) dvolg⊥≤| log ε|3, we can apply once again Lemma 4.1, which provides the existence of points a1, . . . , ak and integers d1, . . . , dk such that (5.48) Z Σz e⊥ ε (u, A) dvolg⊥≥π X i |di| p g33(ai) (| log ε| −C log | log ε|) and 2π X i diδai −µ12d⃗u F(Σz) ≤C| log ε|−3. In particular, we have that (5.49) Z Σz χℓ √g33µ12d⃗u = 2π X i diχℓ(ai) p g33(ai) + O(| log ε|−3). Combining (5.48) with (5.49), we find (5.50) Z Σz e⊥ ε (u, A) dvolg⊥−| log ε| 2 Z Σz χℓ √g33µ12d⃗u ≥π X i p g33(ai) (|di| −diχ(ai)) + O(log | log ε|) ≥O(log | log ε|). Hence, from (5.47), (5.50), and (5.38), we deduce that Z z∈(Tε)c Z Σz e⊥ ε (u, A) dvolg⊥−1 2 Z (Tε)c Z Σz χℓ √g33µ12d⃗u log 1 ε ≥|(Tε)c|O(log | log ε|) = oε(1), which proves (5.46). Thus, by definition (3.4), in view of (5.10), and the estimate (5.39), we have (5.51) Z z∈[0,L0] Z Σz e⊥ ε (u, A) dvolg⊥−1 2⟨µ2D⟩log 1 ε −πN 2L0 log δ + πN(N −1) log r N hex ≥ Z z∈Tε −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz + oδ(1) + oε(1) + Cδoε(1). Adding (5.51) times (1 −ℓ2) to (5.37) times ℓ2 we obtain, in view of (2.13) (5.52) Fε(u, A, Uδ) ≥π log 1 ε (1 −ℓ2) N X i=1 ⟨Γi⟩2D + νbad 2D ! + ℓ2 N X i=1 |Γi|˜g ! + πN 2L0 log δ −π(1 −ℓ2)L0N(N −1) log r N hex + Z z∈Tε −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz + Cδoε(1) + oδ(1) + oε(1), where we used again the fact that ℓ2 log | log ε| = oε(1). 64 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY On the other hand, using (5.27), (5.35) and (5.4) again, we have hex ⟨B0, µ⟩= π| log ε| R(Γ0) N X i=1 ⟨B0, Γi −Γ0⟩+ πL0N| log ε| + 2πNK⟨B0, Γ0⟩log | log ε| + oε(1). Subtracting from (5.52) we find, as in (5.28) (5.53) Fε(u, A, Uδ) −hex ⟨B0, µ⟩≥π 2 (1 −ℓ2)L0N(N −1) log hex N −2πKN ⟨B0, Γ0⟩log | log ε| + πL0N 2 log δ + π| log ε| N X i=1 Qℓ(Γi) + Z z∈Tε −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz + Cδoε(1) + oδ(1) + oε(1), Using the coercivity of Qℓfrom Proposition 3.4 and the upper bound of Theorem 6.1, we deduce that Z z∈Tε −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz ≤C with C > 0 independent of ε (but depending on δ). We may extract a subsequence {εk}k such that P k |(Tεk)c| < ∞. From the upper bound above and the monotone convergence theorem it follows that lim k→∞ Z z∈Tεk −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz = Z z∈[0,L0] −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz and the right-hand side is a convergent integral. Using Fatou’s lemma, dominated convergence theorem, and (3.14), by taking the limit in (5.53), we are led to (5.54) lim inf ε→0  Fε(u, A, Uδ) −hex ⟨B0, µ⟩−π 2 L0N(N −1) log hex N −2πKN ⟨B0, Γ0⟩log | log ε|  ≥πN 2L0 log δ + πL0N N X i=1 Q(⃗u∗ i ) + Z z∈[0,L0] −π X i̸=j log |⃗u∗ i (z) −⃗u∗ j(z)|g• + Nγ ! dz + oδ(1). There remains to bound from below Fε(u, A, Ω\ Uδ). We can assume without loss of generality that A is divergence free in R3. From (2.2), (5.54), and the upper bound of Theorem 6.1 we find Fε(u, A, Ω\ Uδ) ≤C independent of N, for every δ. Thus, taking a subsequence if necessary, VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 65 ∇Au is bounded in L2(Ω\Uδ) and A is bounded in H1(R3\Uδ). Hence, j(u, A) converges weakly to some j∗in L2(Ω\Uδ) and A converges weakly to some A∗in H1(R3\Uδ). Since (u, A) is a minimizer of GLε, it (weakly) satisfies the Ginzburg–Landau equations (1.3). In particular, curl(curl A −Hex) = (iu, ∇Au)χΩ in R3. On the other hand, since (u, A) = (e−ihexϕ0u, A −hexA0), we deduce that (u, A) is gauge equivalent to (u, A + hex curl B0) (recall that A0 = ∇ϕ0 + curl B0 in Ω), and therefore, it holds (weakly )in R3 that curl curl(A + hex curl B0) −Hex  = (iu, ∇A+hex curl B0u)χΩ = j(u, A) −hex curl B0|u|2 χΩ. But A0 (weakly) solves curl(hex curl A0 −Hex) = −curl B0χΩ in R3. Hence, −∆A = curl curl A = j(u, A) + hex curl B0(1 −|u|2)  χΩ in R3. Since hex curl B0(1 −|u|2) strongly converges to 0 in L2(Ω), by passing to the limit in the previous equation we find −∆A∗= j∗χΩ in R3. Moreover, from (5.13), we find µ(u, A) →2πNΓ0. Hence, in the sense of distributions, we have curl(j∗+ A∗) = 2πNΓ0 in Ω. Finally, from Proposition 2.1 combined with the lower semicontinuity of the energy and (2.8), we deduce that lim inf ε→0  Fε(u, A, Ω\ Uδ) + 1 2 Z R3\Uδ | curl A|2  ≥N 2CΩ+ πN 2L0 log 1 δ + oδ(1). Combining with (5.54) and (2.2), in view of the definition of R0 (1.8), we have proved (5.3). □ Proof of Theorem 1.1. Theorem 6.1 shows that for minimizers there is equality in (5.3), and therefore (1.11) holds. To conclude the proof of Theorem 1.1, there remains to optimize over N integer. From (1.11), we know that the minimal energy of a solution with N vortex filaments, assuming N is independent of ε, is given by gε(N) + o(1), where gε(N) = fε(N) + min WN + γL0N with fε(N) = h2 exJ0 + πL0N| log ε| −2πNL0 R0 hex + πL0N(N −1) log r hex N + N 2CΩ. This has exactly the same form as the minimal energy of a solution with N vortex points in 2D; see [SS07, Chapter 12]. Hence, the optimization is identical to that in [SS07, Lemma 12.1 66 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY and Theorem 12.1] and yields that if hex ∈(HN −oε(1), HN+1 + oε(1)), then N is the optimal number of curves. □ 6. Upper bound We now present our upper bound for the 3D Ginzburg–Landau functional. 6.1. Statement of the main result. Theorem 6.1. Assume Ωis a smooth bounded domain such that the maximum of the ratio is achieved at a smooth simple curve Γ0 which can be extended to a smooth simple closed curve in R3, still denoted Γ0. For any ε > 0, assume hex = H0 c1 + K log | log ε| with K bounded independently of ε, and let N be an integer independent of ε. Define tube coordinates (x, z) ∈Cδ in a neighborhood of Γ0 using Proposition 2.2. Assume that for each ε, the curves Γ1,ε, . . . , ΓN,ε are defined in these coordinates by (6.1) Γi,ε(z) = Γ0(z) + r N hex ⃗ui(z), where ⃗ui : [0, L0] →R2 is smooth and independent of ε. Then, for any ε sufficiently small, there exists a configuration (uε, Aε) such that (6.2) GLε(uε, Aε) ≤h2 exJ0 + π 2 L0N(N −1) log hex −2πK R0 L0N log | log ε| −π 2 L0N(N −1) log N + WN(Γ1, . . . , ΓN) + γL0N + N 2CΩ+ oδ(1) + Cδoε(1) + oε(1), where Γi(z) = Γ0(z) + ⃗ui(z) and WN is defined in (1.9). The upper bound is computed using the velocity field given by the Biot–Savart law (see Definition 2.1) associated to a collection of N vortex filaments nearly parallel and close to Γ0, as ε →0. Outside of a fixed but small tube around Γ0, this velocity field will coincide up to a small error (as ε →0) with the field associated to Γ0 by Proposition 2.1. However we must estimate the energy from our construction in the tube, for which we need a simple enough approximation to our velocity field. The rest of this section is devoted to the proof of Theorem 6.1. 6.2. Definition of the test configuration. We let uε = eihexϕ0uε and Aε = hexA0 + Aε, where (eihexϕ0, hexA0) is the approximate Meissner state. In order to define (uε, Aε), we proceed as follows. First, we let rε := | log ε|−3. We then define ρi,ε(x) =    1 f0 rε ε f0 dist(x, Γi,ε) ε  if dist(x, Γi,ε) ≤rε 1 otherwise. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 67 where hereafter f0 denotes the modulus of the (unique nonconstant) degree-one radial vortex solution u0, see for instance [SS07, Proposition 3.11]. It is important to recall that, as R →∞, f0(R) →1 and (6.3) 1 2 Z R 0  |f0 ′|2 + f0 2 r2 + (1 −f0 2)2 2  rdr = 1 2π (π log R + γ + o(1)) , where γ > 0 is still the fixed constant from [BBH94]. We also define ρε(x) := min i∈{1,...,N} ρi,ε(x). On the other hand, we let φε be defined by the relation (6.4) ∇φε = N X i=1 XΓi,ε + ∇fΓi,ε, where XΓi,ε and fΓi,ε are defined by applying Proposition 2.11 with Γ = Γi,ε. Let us remark that since curl PN i=1 XΓi,ε = 2π Pn i=1 Γi,ε, if σ denotes a smooth, simple, and closed curve that does not intersect any of the curves Γi,ε, then, by Stokes’ theorem, Z σ N X i=1 XΓi,ε + ∇fΓi,ε ! = 2πm, for some m ∈Z. This ensures that φε is a well-defined function modulo 2π. We finally let uε(x) := ρε(x)eiφε(x) and Aε(x) = N X i=1 AΓi,ε(x), where, once again, AΓi,ε is defined by applying Proposition 2.1 with Γ = Γi,ε. 6.3. The difference between the covariant gradient and the gradient is negligible in Tδ(Γ0). Hereafter, Tδ(Γ0) denotes the tube defined in Proposition 2.2. A straightforward computation shows that |∇Aεuε|2 −|∇uε|2 = ρ2 ε |Aε|2 −2∇φε · Aε  . Let p < 2 and q > 2 be such that 1 p + 1 q = 1. From H¨older’s inequality, we deduce that Z Tδ(Γ0) |∇Aεuε|2 −|∇uε|2 ≤∥Aε∥2 L2(Tδ(Γ0)) + C∥∇φε∥Lp(Tδ(Γ0))∥Aε∥Lq(Tδ(Γ0)). 1To be precise, fΓi,ε is defined in the proof of the proposition. 68 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Since Aε ∈W 2, 3 2(Ω), from Sobolev embedding it follows that Aε ∈Lr(Ω) for any r ≥1. Moreover, ∇φε ∈Lr(Ω) for any r < 2, which follows from Proposition 2.1. Hence, from H¨older’s inequality, we deduce that (6.5) Z Tδ(Γ0) |∇Aεuε|2 −|∇uε|2 ≤C∥Aε∥Lq(Tδ(Γ0)) ≤C|Tδ(Γ0)| 1 2q ∥Aε∥L2q(Tδ(Γ0)) ≤Cδ 1 q for any q > 2. The constant C depends on q and blows up as q →2 and as q →∞. By fixing its value, we ensure that the RHS is oδ(1). 6.4. Energy estimate in small tubes around the curves. We first smoothly extend Γi,ε to a smooth simple closed curve in R3, which we denote ˜Γi,ε. The extension is supported in the complement of Ω, without intersecting its boundary. We have considerable freedom in choosing this extension, except that ˜Γi,ε must intersect ∂Ωtransversally, so that we can apply Proposition 2.1 with Γ = ˜Γi,ε. We then consider the tube ˜Trε(Γi,ε) of radius rε around ˜Γi,ε, and its restriction to Ω, that is, Trε(Γi,ε) := {x ∈Ω: dist(x, Γi,ε) < rε}. We claim that (6.6) 1 2 Z Trε(Γi,ε) |∇uε|2 + 1 2ε2(1 −|uε|2)2 ≤π|Γi,ε| log rε ε + γ|Γi,ε| + oε(1), where γ is the constant that appears in (6.3). To prove this we proceed as follows. First, given a point in ˜Trε(Γi,ε) we denote by pΓi,ε its nearest point on Γi,ε. We then define Dz := n p ∈˜Trε(Γi,ε) | pΓi,ε = Γi(z) o . Observe that Dz is a disk in R2 centered at Γi,ε(z) with radius rε. We now appeal to (2.4), which yields that hi,ε(p) := XΓi,ε(p) −YΓi,ε(p) ∈Lq  ˜Trε(Γi,ε)  for any q ≥1, where YΓi,ε(p) := pΓi,ε −p |pΓi,ε −p|2 × τΓi,ε(pΓi,ε). Here, τΓi,ε(pΓi,ε) denotes the tangent vector to Γi,ε at pΓi,ε. Once again, from the proof of Proposition 2.1 we know that fΓi,ε ∈W 1,q(Ω) for any q < 4 and i = 1, . . . , N. Then, recalling (6.4) and using H¨older’s inequality, we deduce that (6.7) Z Trε(Γi,ε) ∇φε − N X j=1 YΓj,ε 2 = Z Trε(Γi,ε) N X j=1 hj,ε + ∇fΓj,ε 2 ≤|Trε(Γi,ε)| 1 3 N X j=1 hj,ε + ∇fΓj,ε 2 L3(Trε(Γi,ε)) ≤Cr 2 3ε . VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 69 In addition, note that for a.e. p ∈Trε(Γi,ε) and j ̸= i, we have (recall (6.1)) |Yj(p)| |Yi(p)| ≤|pΓi,ε −p| |pΓj,ε −p| ≤C rε p | log ε| . We then deduce that (6.8) Z Trε(Γi,ε) ρ2 ε N X j=1 YΓj,ε 2 = Z Trε(Γi,ε) ρ2 ε|YΓi,ε|2 1 + O rε p | log ε| !!2 = 1 + O rε p | log ε| !! Z Trε(Γi,ε) ρ2 ε|YΓi,ε|2. Using (6.7) and (6.8), we then deduce that (6.9) 1 2 Z Trε(Γi,ε)  |∇uε|2 + 1 2ε2(1 −|uε|2)2  = 1 2 Z Trε(Γi,ε)  |∇ρε|2 + ρ2 ε|∇φε|2 + 1 2ε2(1 −ρ2 ε)2  = 1 2 + O rε p | log ε| !! Z Trε(Γi,ε)  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  + O  r 2 3ε  . On the other hand, by change of coordinates, we have (6.10) Z Trε(Γi,ε)  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  = Z |Γi,ε| 0 Z Dz∩Ω  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  |Jac(ϕ)|dxdy  dz ≤ Z |Γi,ε| 0 Z Dz  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  |Jac(ϕ)|dxdy  dz, where ϕ : Dz × (0, |Γi,ε|) →R3 is such that ϕ (Γi,ε(z) + (x, y, z)) = Γi,ε(z) + xv1(z) + yv2(z) with (v1(z), v2(z)) orthonormal and such that v1(z), v2(z) are perpendicular to Γ′ i,ε and smooth with respect to z. Notice that here we used the fact that Dz = Γi,ε(z)+D(0, rε), where D(0, rε) denotes the disk in R2 centered at 0 and with radius rε. Observe that |Jac(ϕ)| = |det(Γ′ i,ε + xv′ 1(z) + yv′ 2(z), v1, v2)| = 1 + O(rε). Hence, by combining this with (6.9) and (6.10), we deduce that (6.11) 1 2 Z Trε(Γi,ε)  |∇uε|2 + 1 2ε2(1 −|uε|2)2  + O  r 2 3ε  ≤ 1 2 + O rε p | log ε| !! Z |Γi,ε| 0 Z Dz  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  dxdy  dz. 70 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY To compute the integral over Dz we use polar coordinates centered at Γi,ε(z). Letting r denote the distance to this point and θ the polar angle, we have |∇ρε(x)| = f0 ′ r ε  εf0 rε ε  |∇r| = f0 ′ r ε  εf0 rε ε  and |YΓi,ε| = |∇θ| = 1 r. It follows that Z Dz  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  dxdy = 2π Z rε 0  f0 ′ r ε 2 ε2f0 rε ε 2 + f0 r ε 2 r2f0 rε ε 2 + 1 2ε2 1 −f0 r ε 2 f0 rε ε 2 !2 rdr = 2π Z rε ε 0  f0 ′(s)2 f0 rε ε 2 + f0(s)2 s2f0 rε ε 2 + 1 2 1 −f0(s)2 f0 rε ε 2 !2 sds, where in the last equality we used the change of variables r = εs. Finally, using that limε→0 f0 rε ε  = 1, since limε→0 rε ε = +∞, from (6.3) we obtain Z Dz  |∇ρε|2 + ρ2 ε|YΓi,ε|2 + 1 2ε2(1 −ρ2 ε)2  dxdy = 2  π log rε ε + γ + oε(1)  . Inserting this in (6.11), we obtain (6.6). 6.5. Energy estimate in the perforated tube. We consider the perforated tube T δ rε := Tδ(Γ0) \ SN i=1 Trε(Γi,ε). In this region we have ρε ≡1, and therefore Eε(uε, T δ rε) := 1 2 Z T δrε  |∇uε|2 + 1 2ε2(1 −|uε|2)2  = 1 2 Z T δrε |∇φε|2. Arguing as when obtaining (6.7), we find Z T δrε ∇φε − N X j=1 YΓj,ε 2 = Z T δrε N X j=1 hj,ε + ∇fΓj,ε 2 ≤|T δ rε| 1 3 N X j=1 hj,ε + ∇fΓj,ε 2 L3(T δrε) ≤Cδ 2 3, which yields (6.12) Eε(uε, T δ rε) = 1 2 Z T δrε N X j=1 YΓj,ε 2 + oδ(1) = 1 2 Z L0 0 Z Σz∩T δrε N X j=1 YΓj,ε 2 √g33dvolg⊥+ oδ(1), where in the last equality we used the coordinates we defined in Section 2.5. In order to estimate the integral on the RHS, we proceed in several steps. Step 1. From g⊥to the Euclidean metric. Given z ∈[0, L0], we define Π : Σz →⟨Γ′ 0(z)⟩⊥be the orthogonal projection of Σz onto the perpendicular plane to Γ′ 0(z). Observe that DΠ(Γ0(z)) VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 71 is the identity map from ⟨Γ′ 0(z)⟩⊥to itself, and, therefore, Π−1 is well defined (for any sufficiently small δ) and such that DΠ−1(0) is the identity map from Σz to itself. Moreover (6.13) ∥DΠ−1(0) −DΠ−1(x)∥∞≤C|x| for any δ sufficiently small. In particular, for any x, y ∈⟨Γ′ 0(z)⟩⊥, we have (6.14) Π−1(x) −Π−1(y) = x −y + O (|x −y|(|x| + |y|)) . Let us now observe that, since g33 = 1 on Γ0, for any p ∈Σz we have g33(p) = 1 + O(dist(p, Γ0)). In particular, if p = Π−1(x), using (6.14), we obtain (6.15) g33(Π−1(x)) = 1 + O(dist(Π−1(x), Γ0)) = 1 + O(|x|). Moreover, using again (6.14), we deduce that (6.16) (Π−1)∗(g⊥)(x) = Euclidean metric + O(|x|), where (Π−1)∗(g⊥) denotes the pullback of g⊥by Π−1. Step 2. Projection on Γi,ε. Let p ∈Σz. Recall that pΓi,ε denotes the projection of p on Γi,ε. Noting that Γi,ε(z) = Γi,ε ∩Σz, we also define di = |p −pΓi,ε|, di,z = |p −Γi,ε(z)|. Notice that we can write pΓi,ε = Γi,ε(z + ηi) for some ηi ∈R, which we now proceed to estimate. From Taylor’s expansion we have (6.17) pΓi,ε −Γi,ε(z) = ηiΓ′ i,ε(z) + O(η2 i ) and (6.18) pΓi,ε −Γi,ε(z) = ηiΓ′ i,ε(z + ηi) + O(η2 i ). which, in particular, implies that (6.19) |pΓi,ε −Γi,ε(z)|2 = η2 i + O(η3 i ). On the other hand, by definition of pΓi,ε, we directly have (6.20) (p −pΓi,ε) · Γ′ i,ε(z + ηi) = 0. Moreover, we also have (6.21) (p −Γi,ε(z)) · Γ′ i,ε(z) = O di,z p | log ε| + d2 i,z ! . Indeed, this follows from the following facts. First, if we let ν denote the normal vector to Σz at Γi,ε(z) (with Σz oriented according to Γ′ i,ε(z)), we have that Γ′ i,ε(z) = Γ′ 0(z) + O 1 p | log ε| ! = ν + O 1 p | log ε| ! . 72 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Second, since Σz is smooth and p, Γi,ε(z) ∈Σz, we have (p −Γi,ε(z)) · ν = O(|p −Γi,ε(z)|2) = O(d2 i,z). Therefore (p −Γi,ε(z)) · Γ′ i,ε(z) = (p −Γi,ε(z)) · ν + O |p −Γi,ε(z)| p | log ε| ! = O di,z p | log ε| + d2 i,z ! . Observe that (6.22) d2 i = |p−Γi,ε(z)+Γi,ε(z)−pΓi,ε|2 = d2 i,z +|pΓi,ε −Γi,ε(z)|2−2(p−Γi,ε(z))·(pΓi,ε −Γi,ε(z)). and (6.23) d2 i,z = |p −pΓi,ε + pΓi,ε −Γi,ε(z)|2 = d2 i + |pΓi,ε −Γi,ε(z)|2 + 2(p −pΓi,ε) · (pΓi,ε −Γi,ε(z)). Using (6.20), we can write (p −pΓi,ε) · (pΓi,ε −Γi,ε(z)) = (p −pΓi,ε) · pΓi,ε −Γi,ε(z) −ηiΓ′ i,ε(z + ηi) + ηiΓ′ i,ε(z + ηi)  = (p −pΓi,ε) · pΓi,ε −Γi,ε(z) −ηiΓ′ i,ε(z + ηi)  , which combined with (6.18) yields (6.24) (p −pΓi,ε) · (pΓi,ε −Γi,ε(z)) = O(η2 i di). On the other hand, we can write (p −Γi,ε(z)) · (pΓi,ε −Γi,ε(z)) = (p −Γi,ε(z)) · pΓi,ε −Γi,ε(z) −ηiΓ′ i,ε(z) + ηiΓ′ i,ε(z)  = (p −Γi,ε(z)) · pΓi,ε −Γi,ε(z) −ηiΓ′ i,ε(z)  + ηi(p −Γi,ε(z)) · Γ′ i,ε(z), which combined with using (6.21) and (6.17) yields (p −Γi,ε(z)) · (pΓi,ε −Γi,ε(z)) = O(η2 i di,z) + ηiO di,z p | log ε| + d2 i,z ! . Finally, by adding up (6.22) with (6.23), and using (6.19), (6.24), and (6.21), we deduce that (6.25) ηi = O di,z p | log ε| + d2 i,z ! . Step 3. Estimating YΓi,ε on Σz ∩T δ rε. Let p ∈Σz. From (6.25), we know that pΓi,ε −Γi,ε(z) = O |p −Γi,ε(z)| p | log ε| + |p −Γi,ε(z)|2 ! and therefore (6.26) pΓi,ε −p = Γi,ε(z) −p + O |p −Γi,ε(z)| p | log ε| + |p −Γi,ε(z)|2 ! . VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 73 We now define x = Π(p) and xz i,ε = Π(Γi,ε(z)). From (6.14), since |xz i,ε| = O  1 √ | log ε|  , we have Γi,ε(z) −p = xz i,ε −x + O |x| + 1 p | log ε| ! |xz i,ε −x| ! , which combined with (6.26) yields pΓi,ε −p = xz i,ε −x + O |xz i,ε −x| p | log ε| + |xz i,ε −x|2 ! + O |xz i,ε −x| |x| + 1 p | log ε| !! = xz i,ε −x + O |xz i,ε −x|2 + |xz i,ε −x||x| + |xz i,ε −x| p | log ε| ! . (6.27) In particular, we have that (6.28) |Γi,ε(z) −p| = O(|pΓi,ε −p|) = O(|xz i,ε −x|). In addition, from (6.27) we find 1 |pΓi,ε −p|2 = 1 |xz i,ε −x|2 + O  |xz i,ε −x|3 + |xz i,ε −x|2|x| + |xz i,ε−x|2 √ | log ε|  = 1 |xz i,ε −x|2 1 1 + O  |xz i,ε −x| + |x| + 1 √ | log ε|  = 1 |xz i,ε −x|2 1 + O |xz i,ε −x| + |x| + 1 p | log ε| !! . (6.29) On the other hand, from Taylor’s expansion, we deduce that Γ′ i,ε(z + ηi) = Γ′ i,ε(z) + O(|pΓi,ε −Γi,ε(z)|) = Γ′ 0(z) + O 1 p | log ε| ! + O(|pΓi,ε −Γi,ε(z)|). Therefore, using (6.25) and (6.28), we find τΓi,ε(pΓi,ε) = Γ′ i,ε(z + ηi) = Γ′ 0(z) + O 1 p | log ε| + |xz i,ε −x|2 ! . Finally, from this, (6.27), and (6.29), we find (6.30) YΓi,ε(p) = pΓi,ε −p |pΓi,ε −p|2 × τΓi,ε(pΓi,ε) = (xz i,ε −x)⊥ |xz i,ε −x|2 + O 1 + 1 |xz i,ε −x| p | log ε| + |x| |xz i,ε −x| ! , where we recall that xz i,ε −x = Π(Γi,ε(z)) −Π(p). 74 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Step 4. Combining the previous steps. Let z ∈[0, L0]. By change of coordinates, we have Z Σz∩T δrε N X j=1 YΓj,ε 2 √g33dvolg⊥ = Z Π(Σz∩T δrε) N X j=1 YΓj,ε Π−1(x)  2 p g33 (Π−1(x))d(Π−1)∗(volg⊥)(x). From (6.16), we find d(Π−1)∗(volg⊥)(x) = (1 + O(|x|))dx. On the other hand, from (6.27), we deduce that D xz i,ε, rε(1 −oε(1))  ⊂Π(Σz ∩Trε(Γi,ε)) and Π(Σz ∩Tδ(Γ0)) ⊂D (0, δ(1 + oδ(1))) . In particular, Π Σz ∩T δ rε  ⊂Aδ rε(z) := D (0, δ(1 + oδ(1))) \ ∪N i=1D xz i,ε, rε(1 −oε(1))  . From the previous estimates and (6.15), we then deduce that (6.31) Z Σz∩T δrε N X j=1 YΓj,ε 2 √g33dvolg⊥≤ Z Aδrε(z) N X j=1 YΓj,ε Π−1(x)  2 (1 + O(|x|))dx. On the other hand, for x ∈Aδ rε(z), using (6.30), we have N X j=1 YΓj,ε Π−1(x)  2 (1 + O(|x|)) = N X j=1 (xz j,ε −x)⊥ |xz j,ε −x|2 + O 1 + 1 |xz j,ε −x| p | log ε| + |x| |xz j,ε −x| ! 2 (1 + O(|x|)) = N X j=1 (xz j,ε −x)⊥ |xz j,ε −x|2 2 + N X j=1 O 1 |xz j,ε −x| + 1 |xz j,ε −x|2p | log ε| + |x| |xz j,ε −x|2 + 1 ! . (6.32) Let us now observe that (6.33) N X j=1 Z Aδrε(z)  1 |xz j,ε −x| + 1  dx = O(δ2), which directly follows from the integrability of the integrand in compact subets of R2. VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 75 On the other hand, from a direct computation, recalling that rε = | log ε|−q for some q > 0, it follows that N X j=1 Z Aδrε(z) 1 |xz j,ε −x|2dx = O(log | log ε|), and therefore (6.34) 1 p | log ε| N X j=1 Z Aδrε(z) 1 |xz j,ε −x|2dx = oε(1). In addition, we have N X j=1 Z Aδrε(z) |x| |xz j,ε −x|2dx = N X j=1 Z Aδrε(z) 1 |xz j,ε −x|dx + Z Aδrε(z) |xz j,ε| |xz j,ε −x|2dx ! . Since |xz j,ε| = O  1 √ | log ε|  , from (6.33) and (6.34), we find (6.35) N X j=1 Z Aδrε(z) |x| |xz j,ε −x|2dx = O(δ2) + oε(1). By using (6.32), (6.33), (6.34), and (6.35), from (6.31) it follows that (6.36) Z Σz∩T δrε N X j=1 YΓj,ε 2 √g33dvolg⊥≤ Z Aδrε(z) N X j=1 (xz j,ε −x)⊥ |xz j,ε −x|2 2 dx + O(δ2) + oε(1). Step 5. Renormalized energy. We claim that 1 2 Z Aδrε(z) N X j=1 (xz j,ε −x)⊥ |xz j,ε −x|2 2 dx = −π N X i,j=1 i̸=j log |xz i,ε −xz j,ε| + πN log 1 rε + πN 2 log δ + oδ(1) + Cδoε(1) + oε(1). (6.37) To prove this, we consider Φz ε(x) = − N X i=1 log |x −xz i,ε|, which satisfies (6.38) −∆Φz ε = 2π N X i=1 δxz i,ε in R2. Notice that ∇Φz ε(x) = − N X i=1 x −xz i,ε |x −xz i,ε|2. 76 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY In addition |∇Φz ε(x)| = |∇⊥Φz ε(x)| = N X i=1 (x −xz i,ε)⊥ |x −xz i,ε|2 . Therefore 1 2 Z Aδrε(z) N X j=1 (xz j,ε −x)⊥ |xz j,ε −x|2 2 dx = 1 2 Z Aδrε(z) |∇Φz ε(x)|2dx = −1 2 Z Aδrε(z) −∆Φz εΦz εdx + 1 2 Z ∂Aδrε(z) Φz ε ∂Φz ε ∂ν dS(x) = 1 2 Z ∂D(0,δ(1+oδ(1))) Φz ε ∂Φz ε ∂ν dS(x) −1 2 N X i=1 Z ∂D(xz i,ε,rε(1−oε(1))) Φz ε ∂Φz ε ∂ν dS(x), (6.39) where we used integration by parts and (6.38). To estimate the first integral in the RHS of (6.39), we observe that, for x ∈∂D (0, δ(1 + oδ(1))) , we have Φz ε(x) = − N X i=1  log |x −xz i,ε| |x| + log |x|  = −N log |x| + O p | log ε| δ ! and ∂Φz ε(x) ∂ν = − N X i=1 x −xz i,ε |x −xz i,ε|2 · x |x| = − N X i=1  1 |x| + xz i,ε · x −|xz i,ε|2 |x||x −xz i,ε|2  = −N |x| + O p | log ε| δ2 ! . Hence (6.40) 1 2 Z ∂D(0,δ(1+oδ(1))) Φz ε ∂Φz ε ∂ν dS(x) = πN 2 log δ + oδ(1) + O log δ p | log ε| δ + | log ε| δ2 ! = πN 2 log δ + oδ(1) + Cδoε(1). We now proceed to estimate the second integral in the RHS of (6.39). Note that, for x ∈ ∂D xz i,ε, rε(1 −oε(1))  , we have Φz ε(x) = −log |x −xz i,ε| − N X j=1 j̸=i  log |x −xz j,ε| |xz i,ε −xz j,ε| + log |xz i,ε −xz j,ε|  = −log |x −xz i,ε| − N X j=1 j̸=i log |xz i,ε −xz j,ε| + O rε p | log ε| ! VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 77 and ∂Φz ε(x) ∂ν = − N X j=1 x −xz j,ε |x −xz j,ε|2 · x −xz i,ε |x −xz i,ε| = − 1 |x −xz i,ε|    1 + N X j=1 j̸=i (x −xz j,ε) · (x −xz i,ε) |x −xz j,ε|2    = − 1 |x −xz i,ε| 1 + O rε p | log ε| !! . We then obtain that 1 2 Z ∂D(xz i,ε,rε(1−oε(1))) Φz ε ∂Φz ε ∂ν dS(x) = π log rε + π N X j=1 j̸=i log |xz i,ε −xz j,ε| + O rε log rε p | log ε| ! + oε(1) and thus (6.41) −1 2 N X i=1 Z ∂D(xz i,ε,rε(1−oε(1))) Φz ε ∂Φz ε ∂ν dS(x) = −πN log rε −π N X i,j=1 i̸=j log |xz i,ε −xz j,ε| + oε(1). By inserting (6.40) and (6.41) into (6.39), we obtain the claim (6.37). On the other hand, since Γi,ε(z) = Π−1(xz i,ε) and Γj,ε(z) = Π−1(xz j,ε), for i ̸= j we have that distg⊥(Γi,ε(z), Γj,ε(z)) ≤ Z 1 0 ∥DΠ−1 xz i,ε + t(xz j,ε −xz i,ε)  (xz j,ε −xz i,ε)  ∥dt, which combined with (6.13) yields distg⊥(Γi,ε(z), Γj,ε(z)) ≤(1 + O(|xz i,ε| + |xz j,ε|))|xz j,ε −xz i,ε|. The concavity of the logarithm function then implies that −log |xz j,ε −xz i,ε| ≤−log distg⊥(Γi,ε(z), Γj,ε(z)) + O(|xz i,ε| + O |xz j,ε|)  ≤−log distg⊥(Γi,ε(z), Γj,ε(z)) + O 1 p | log ε| ! . (6.42) 78 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY Finally, by combining (6.36) with (6.37) and (6.42), we find (6.43) 1 2 Z Aδrε(z) N X j=1 (xz j,ε −x)⊥ |xz j,ε −x|2 2 dx ≤−π N X i,j=1 i̸=j log distg⊥(Γi,ε(z), Γj,ε(z)) + πN log 1 rε + πN 2 log δ + oδ(1) + Cδoε(1) + oε(1). Step 6. Conclusion. By integrating from z = 0 to z = L0 (6.36) and (6.43), and combining with (6.12), we find (6.44) Eε(uε, T δ rε) ≤−π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + πNL0 log 1 rε + πN 2L0 log δ + oδ(1) + Cδoε(1) + oε(1). 6.6. Energy estimate far from Γ0. We now work in Ω\ Tδ(Γ0). In this region ρε ≡1 and therefore, using (6.4), we find |∇Aεuε| = |∇φε −Aε| = N X i=1 XΓi,ε + ∇fΓi,ε −AΓi,ε = N X i=1 jΓi,ε , where in the last equality we used once again Proposition 2.1. Hence Iδ := 1 2 Z Ω\Tδ(Γ0) |∇Aεuε|2 + 1 2ε2(1 −ρ2 ε)2 + 1 2 Z R3 | curl Aε|2 = 1 2 Z Ω\Tδ(Γ0) N X i=1 jΓi,ε 2 + 1 2 Z R3 N X i=1 curl AΓi,ε 2 . From (2.5) and the explicit formula (2.3), which yields a control on ∥XΓi,ε −XΓ0∥L2(Ω\Tδ(Γ0)), we deduce that, for any i = 1, . . . , N, jΓi,ε −jΓ0 2 L2(Ω\Tδ(Γ0)) + AΓi,ε −AΓ0 2 H1(R3) ≤Cδoε(1), which combined with the previous equality yields Iδ = N 2 1 2 Z Ω\Tδ(Γ0) |jΓ0|2 + 1 2 Z R3 |curl AΓ0|2  + Cδoε(1). Recalling (2.8), we obtain (6.45) Iδ = N 2CΩ−πN 2L0 log δ + oδ(1) + Cδoε(1). VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 79 6.7. Vorticity estimate. Let B ∈C0,1 T (Ω). Observe that, by integration by parts, we have (6.46) Z Ω µ(uε, Aε) · B = Z Ω curl(j(uε, Aε) + Aε) · B = Z Ω (j(uε, Aε) + Aε) · curl B = Z Ω ρ2 ε∇φε · curl B + Z Ω (1 −ρ2 ε)Aε · curl B = Z Ω ∇φε · curl B + Z Ω (1 −ρ2 ε)(Aε −∇φε) · curl B. Let q < 2. Recall that Aε −∇φ ∈Lq(Ω), therefore, letting p > 2 be such that 1 p + 1 q = 1, by H¨older’s inequality we have Z Ω (1 −ρ2 ε)(Aε −∇φε) · curl B ≤∥curl B∥L∞(Ω)∥Aε −∇φε∥Lq(Ω)∥(1 −ρ2 ε)∥Lp(Ω) ≤C∥curl B∥L∞(Ω)∥(1 −ρ2 ε)∥ 2 p L2(Ω) ≤C∥curl B∥L∞(Ω)ε 2 pEε(|uε|) ≤C∥curl B∥L∞(Ω)ε 2 p| log ε|, (6.47) where we used that 1 −ρ2 ε ∈[0, 1], ρε = 1 in Ω\ ∪N i=1Trε(Γi,ε), and (6.6). On the other hand, using (6.4), we have (6.48) Z Ω ∇φε · curl B = N X i=1 Z Ω XΓi,ε + ∇fΓi,ε  · curl B = N X i=1 Z Ω curl XΓi,ε · B, where the last equality follows from integration by parts. Finally, since curl XΓi,ε = 2πΓi,ε, from (6.46), (6.47), and (6.48), we deduce that (6.49) µ(uε, Aε) −2π N X i=1 Γi,ε (C0,1 T (Ω))∗ ≤Cε 2 p| log ε|, for any p > 2, where C is a constant that depends on p and blows up as p →2. We observe that the same estimate holds for the flat norm. 6.8. Putting everything together. By putting together (6.5), (6.6), (6.44), and (6.45), we obtain Fε(uε, Aε) ≤ N X i=1  π|Γi,ε| log rε ε + γ|Γi,ε|  −π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + πNL0 log 1 rε + πN 2L0 log δ + N 2CΩ−πN 2L0 log δ + oδ(1) + Cδoε(1) + oε(1). 80 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY From the computations in the proof of Lemma 3.2, we have |Γi,ε| = L0 + O  1 | log ε|  , which yields (recall that rε = | log ε|−3) π N X i=1 |Γi,ε| log rε + πNL0 log 1 rε = oε(1) and that γ|Γi,ε| = γL0 + oε(1). Hence (6.50) Fε(uε, Aε) ≤π| log ε| N X i=1 |Γi,ε| −π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + γNL0 + N 2CΩ+ oδ(1) + Cδoε(1) + oε(1). Recall that uε = eihexϕ0uε and Aε = Aε + hexA0, where (eihexϕ0, hexA0) is the approximate Meissner state. By inserting (6.50) and (6.49) in the splitting formula (2.2), we find GLε(uε, Aε) ≤h2 exJ0 + π| log ε| N X i=1 |Γi,ε| −π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + γNL0 + N 2CΩ−2π N X i=1 hex ⟨B0, Γi,ε⟩+ oδ(1) + Cδoε(1) + oε(1). Since hex = | log ε| 2 R(Γ0) + K log | log ε|, for some K ≥0, we have that 2π N X i=1 hex ⟨B0, Γi,ε⟩= π| log ε| N X i=1  |Γi,ε|R(Γi,ε) R(Γ0) + 2πK log | log ε| ⟨B0, Γi,ε⟩  = π| log ε| N X i=1 |Γi,ε|R(Γi,ε) R(Γ0) + 2πKN log | log ε| ⟨B0, Γ0⟩, where in the last equality we used the fact that ⟨B0, Γi,ε⟩= ⟨B0, Γ0⟩+ O 1 p | log ε| ! (which follows from Proposition 3.2). Hence GLε(uε, Aε) ≤h2 exJ0 + π| log ε| N X i=1 |Γi,ε|  1 −R(Γi,ε) R(Γ0)  −π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + γNL0 + N 2CΩ−2πKN log | log ε| ⟨B0, Γ0⟩+ oδ(1) + Cδoε(1) + oε(1). VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 81 Moreover, from Lemma 3.2, for any i = 1, . . . N we have R(Γi,ε) = R(Γ0) −1 2Q r N hex ⃗ui ! + O 1 | log ε| 3 2 ! = R(Γ0) −1 2 N hex Q (⃗ui) + O 1 | log ε| 3 2 ! , which combined with the fact that |Γi,ε| = L0 + oε(1), yields π| log ε| N X i=1 |Γi,ε|  1 −R(Γi,ε) R(Γ0)  = πL0N N X i=1 Q(⃗ui) + oε(1). Finally, by observing that −π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz = −π N X i,j=1 i̸=j Z L0 0 log |⃗ui(z) −⃗uj(z)|g•dz + π 2 L0N(N −1) log hex N + oε(1), we obtain (6.2). The proof of Theorem 6.1 is thus concluded. □ References [AB98] L. Almeida and F. Bethuel, Topological methods for the Ginzburg-Landau equations, J. Math. Pures Appl. (9) 77 (1998), no. 1, 1–49. MR1617594 ↑6, 13, 14, 38, 46, 47, 49 [ABM06] S. Alama, L. Bronsard, and J. A. Montero, On the Ginzburg-Landau model of a superconducting ball in a uniform field, Ann. Inst. H. Poincar´e Anal. Non Lin´eaire 23 (2006), no. 2, 237–267. MR2201153 ↑4, 36 [ABO05] G. Alberti, S. Baldo, and G. Orlandi, Variational convergence for functionals of Ginzburg-Landau type, Indiana Univ. Math. J. 54 (2005), no. 5, 1411–1472. MR2177107 ↑6 [AD03] A. Aftalion and I. Danaila, Three-dimensional vortex configurations in a rotating bose-einstein condensate, Phys. Rev. A 68 (2003Aug), 023603. ↑11 [Aft06] A. Aftalion, Vortices in Bose-Einstein condensates, Progress in Nonlinear Differential Equations and their Applications, vol. 67, Birkh¨auser Boston, Inc., Boston, MA, 2006. MR2228356 ↑4 [AJ03] A. Aftalion and R. L. Jerrard, Properties of a single vortex solution in a rotating Bose Einstein condensate, C. R. Math. Acad. Sci. Paris 336 (2003), no. 9, 713–718. MR1988308 ↑4, 11 [AKO07] S. V. Alekseenko, P. A. Kuibin, and V. L. Okulov, Theory of concentrated vortices, Springer, Berlin, 2007. An introduction, Translated from the 2003 Russian original. MR2398034 ↑15 [AR01] A. Aftalion and T. Riviere, Vortex energy and vortex bending for a rotating Bose-Einstein con- densate, Phys. Rev. A 64 (2001), 043611. ↑2, 5, 11 [Aub98] T. Aubin, Some nonlinear problems in Riemannian geometry, Springer Monographs in Mathemat- ics, Springer-Verlag, Berlin, 1998. MR1636569 ↑51 [BBH94] F. Bethuel, H. Brezis, and F. H´elein, Ginzburg-Landau vortices, Progress in Nonlinear Differential Equations and their Applications, vol. 13, Birkh¨auser Boston, Inc., Boston, MA, 1994. MR1269538 ↑4, 5, 10, 13, 14, 17, 38, 45, 46, 49, 67 82 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY [BBO01] F. Bethuel, H. Brezis, and G. Orlandi, Asymptotics for the Ginzburg-Landau equation in arbitrary dimensions, J. Funct. Anal. 186 (2001), no. 2, 432–520. MR1864830 ↑2, 5, 12 [BCS57] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of superconductivity, Phys. Rev. 108 (1957), 1175–1204. ↑3 [BJOS12] S. Baldo, R. L. Jerrard, G. Orlandi, and H. M. Soner, Convergence of Ginzburg-Landau functionals in three-dimensional superconductivity, Arch. Ration. Mech. Anal. 205 (2012), no. 3, 699–752. MR2960031 ↑4 [BJOS13] S. Baldo, R. L. Jerrard, G. Orlandi, and H. M. Soner, Vortex density models for superconductivity and superfluidity, Comm. Math. Phys. 318 (2013), no. 1, 131–171. MR3017066 ↑4 [BR95] F. Bethuel and T. Rivi`ere, Vortices for a variational problem related to superconductivity, Ann. Inst. H. Poincar´e Anal. Non Lin´eaire 12 (1995), no. 3, 243–303. MR1340265 ↑5 [Bra02] E. H. Brandt, Vortices in superconductors, Physica C: Superconductivity 369 (2002), no. 1, 10–20. ↑11 [BS99] F. Bethuel and J.-C. Saut, Travelling waves for the Gross-Pitaevskii equation. I, Ann. Inst. H. Poincar´e Phys. Th´eor. 70 (1999), no. 2, 147–238. MR1669387 ↑49 [CJ17] A. Contreras and R. L. Jerrard, Nearly parallel vortex filaments in the 3D Ginzburg-Landau equa- tions, Geom. Funct. Anal. 27 (2017), no. 5, 1161–1230. MR3714719 ↑2, 5, 6, 11, 12, 28 [DDPMR22] J. D´avila, M. Del Pino, M. Medina, and R. Rodiac, Interacting helical vortex filaments in the three- dimensional Ginzburg–Landau equation, J. Eur. Math. Soc. (JEMS) (2022). Published online first. ↑5 [DG99] P. G. De Gennes, Superconductivity of Metals and Alloys, Advanced book classics, Perseus, Cam- bridge, MA, 1999. ↑2 [DS18] M. Duerinckx and S. Serfaty, Mean-field dynamics for Ginzburg-Landau vortices with pinning and forcing, Ann. PDE 4 (2018), no. 2, Paper No. 19, 172. MR3973142 ↑39 [DVR25] M. D´ıaz-Vera and C. Rom´an, First critical field in the pinned three dimensional Ginzburg–Landau model of superconductivity, 2025. Available at https://doi.org/10.48550/arXiv.2507.10915. ↑9 [Fed69] H. Federer, Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag New York Inc., New York, 1969. MR0257325 ↑44 [FHSS12] R. L. Frank, C. Hainzl, R. Seiringer, and J. P. Solovej, Microscopic derivation of Ginzburg-Landau theory, J. Amer. Math. Soc. 25 (2012), no. 3, 667–713. MR2904570 ↑3 [IJ21] R. Ignat and R. L. Jerrard, Renormalized energy between vortices in some Ginzburg-Landau models on 2-dimensional Riemannian manifolds, Arch. Ration. Mech. Anal. 239 (2021), no. 3, 1577–1666. MR4215198 ↑13, 39, 50 [Jer99] R. L. Jerrard, Lower bounds for generalized Ginzburg-Landau functionals, SIAM J. Math. Anal. 30 (1999), no. 4, 721–746. MR1684723 ↑5, 38 [JMS04] R. Jerrard, A. Montero, and P. Sternberg, Local minimizers of the Ginzburg-Landau energy with magnetic field in three dimensions, Comm. Math. Phys. 249 (2004), no. 3, 549–577. MR2084007 ↑4 [JS02] R. L. Jerrard and H. M. Soner, The Jacobian and the Ginzburg-Landau energy, Calc. Var. Partial Differential Equations 14 (2002), no. 2, 151–191. MR1890398 ↑5, 7 [Lie13] G. M. Lieberman, Oblique derivative problems for elliptic equations, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2013. MR3059278 ↑17 [Lon50] F. London, Superfluids, Structure of matter series, Wiley, New York, 1950. ↑37 [LR01] F.-H. Lin and T. Rivi`ere, A quantization property for static Ginzburg-Landau vortices, Comm. Pure Appl. Math. 54 (2001), no. 2, 206–228. MR1794353 ↑2, 5, 12 VORTEX LINES INTERACTION IN 3D GINZBURG–LANDAU 83 [LR99] F. Lin and T. Rivi`ere, Complex Ginzburg-Landau equations in high dimensions and codimension two area minimizing currents, J. Eur. Math. Soc. (JEMS) 1 (1999), no. 3, 237–311. MR1714735 ↑5, 12 [MSZ04] J. A. Montero, P. Sternberg, and W. P. Ziemer, Local minimizers with vortices in the Ginzburg- Landau system in three dimensions, Comm. Pure Appl. Math. 57 (2004), no. 1, 99–125. MR2007357 ↑6 [RASV+01] C. Raman, J. R. Abo-Shaeer, J. M. Vogels, K. Xu, and W. Ketterle, Vortex nucleation in a stirred bose-einstein condensate, Phys. Rev. Lett. 87 (2001Nov), 210402. ↑11 [RBD02] P. Rosenbusch, V. Bretin, and J. Dalibard, Dynamics of a single vortex line in a bose-einstein condensate, Phys. Rev. Lett. 89 (2002Oct), 200403. ↑11 [Riv95] T. Rivi`ere, Line vortices in the U(1)-Higgs model, ESAIM Contrˆole Optim. Calc. Var. 1 (1995/96), 77–167. MR1394302 ↑2, 5, 12 [Rom19a] C. Rom´an, On the first critical field in the three dimensional Ginzburg-Landau model of super- conductivity, Comm. Math. Phys. 367 (2019), no. 1, 317–349. MR3933412 ↑3, 4, 6, 7, 11, 15, 36 [Rom19b] C. Rom´an, Three dimensional vortex approximation construction and ε-level estimates for the Ginzburg-Landau functional, Arch. Ration. Mech. Anal. 231 (2019), no. 3, 1531–1614. MR3902469 ↑5, 6, 12, 13, 18, 55 [RSS23] C. Rom´an, E. Sandier, and S. Serfaty, Bounded vorticity for the 3D Ginzburg-Landau model and an isoflux problem, Proc. Lond. Math. Soc. (3) 126 (2023), no. 3, 1015–1062. MR4563866 ↑4, 5, 8, 13, 18, 19, 36 [San01] E. Sandier, Ginzburg-Landau minimizers from Rn+1 to Rn and minimal connections, Indiana Univ. Math. J. 50 (2001), no. 4, 1807–1844. MR1889083 ↑2, 5, 12 [San98] E. Sandier, Lower bounds for the energy of unit vector fields and applications, J. Funct. Anal. 152 (1998), no. 2, 379–403. MR1607928 ↑5, 38, 40 [Ser01] S. Serfaty, On a model of rotating superfluids, ESAIM Control Optim. Calc. Var. 6 (2001), 201– 238. MR1816073 ↑4 [Ser99a] S. Serfaty, Local minimizers for the Ginzburg-Landau energy near critical magnetic field. I, Com- mun. Contemp. Math. 1 (1999), no. 2, 213–254. MR1696100 ↑4, 5, 11, 13, 14, 38, 46, 47 [Ser99b] S. Serfaty, Local minimizers for the Ginzburg-Landau energy near critical magnetic field. II, Com- mun. Contemp. Math. 1 (1999), no. 3, 295–333. MR1707887 ↑51 [Ser99c] S. Serfaty, Stable configurations in superconductivity: uniqueness, multiplicity, and vortex- nucleation, Arch. Ration. Mech. Anal. 149 (1999), no. 4, 329–365. MR1731999 ↑4, 5, 46, 47 [SS00] E. Sandier and S. Serfaty, Global minimizers for the Ginzburg-Landau functional below the first critical magnetic field, Ann. Inst. H. Poincar´e Anal. Non Lin´eaire 17 (2000), no. 1, 119–145. MR1743433 ↑4, 5 [SS03] E. Sandier and S. Serfaty, Ginzburg-Landau minimizers near the first critical field have bounded vorticity, Calc. Var. Partial Differential Equations 17 (2003), no. 1, 17–28. MR1979114 ↑4, 5, 11, 40 [SS04] E. Sandier and S. Serfaty, A product-estimate for Ginzburg-Landau and corollaries, J. Funct. Anal. 211 (2004), no. 1, 219–244. MR2054623 ↑39 [SS07] E. Sandier and S. Serfaty, Vortices in the magnetic Ginzburg-Landau model, Progress in Nonlinear Differential Equations and their Applications, vol. 70, Birkh¨auser Boston, Inc., Boston, MA, 2007. MR2279839 ↑2, 4, 6, 7, 11, 13, 38, 39, 40, 41, 47, 65, 67 [SS12] E. Sandier and S. Serfaty, From the Ginzburg-Landau model to vortex lattice problems, Comm. Math. Phys. 313 (2012), no. 3, 635–743. MR2945619 ↑2 84 CARLOS ROM´AN, ETIENNE SANDIER, AND SYLVIA SERFATY [SSTS69] D. Saint-James, G. Sarma, E. J. Thomas, and P. Silverman, Type II Superconductivity, Pergamon Press Oxford, New York, 1969. ↑2 [Tin96] M. Tinkham, Introduction to superconductivity, Second, McGraw-Hill, New York, 1996. ↑2 [TT90] D. Tilley and J. Tilley, Superfluidity and superconductivity, Hilger, 1990. ↑2, 4
VORTEX LINES INTERACTION IN THE THREE-DIMENSIONAL MAGNETIC GINZBURG-LANDAU MODEL CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Abstract. We complete our study of the three dimensional Ginzburg-Landau functional with magnetic field, in the asymptotic regime of a small inverse Ginzburg-Landau parameter ε, and near the first critical field Hc1 for which the first vortex filaments appear in energy minimizers. Under a nondegeneracy condition, we show a next order asymptotic expansion of Hc1 as ε →0, and exhibit a sequence of transitions, with vortex lines appearing one by one as the intensity of the applied magnetic field is increased: passing Hc1 there is one vortex, then increasing Hc1 by an increment of order log | log ε| a second vortex line appears, etc. These vortex lines accumulate near a special curve Γ0, solution to an isoflux problem. We derive a next order energy that the vortex lines must minimize in the asymptotic limit, after a suitable horizontal blow-up around Γ0. This energy is the sum of terms where penalizations of the length of the lines, logarithmic repulsion between the lines and magnetic confinement near Γ0 compete. This elucidates the shape of vortex lines in superconductors. Keywords: Ginzburg-Landau, vortices, vortex filaments, first critical field, phase transitions, Abelian Higgs model, vortex interaction MSC: 35Q56, 82D55, 35J50, 49K10. 1. Introduction This work is the conclusion of our study of the emergence of vortex lines in the threedimensional full Ginzburg-Landau model from physics, i.e. the model with gauge and with external magnetic field. The Ginzburg-Landau model is important as the simplest gauge theory, where the U(1)-gauge is Abelian (it is also known as the Abelian Higgs model) and where topological defects in the form of vortices arise. It is also one of the most famous models of condensed matter physics, the widely used and studied model for superconductivity, and also (Carlos Rom ́an) Facultad de Matem ́aticas e Instituto de Ingenier ́ıa Matem ́atica y Computacional, Pontificia Universidad Cat ́olica de Chile, Vicu ̃na Mackenna 4860, 7820436 Macul, Santiago, Chile (Etienne Sandier) LAMA - CNRS UMR 8050, Universit ́e Paris-Est Cr ́eteil, 61 Avenue du G ́en ́eral de Gaulle, 94010 Cr ́eteil, France (Sylvia Serfaty) Courant 251 Mercer St., New York, NY 10012, United States, and Sorbonne Universit ́e, CNRS, Universit ́e de Paris, Laboratoire Jacques-Louis Lions (LJLL), F-75005 Paris E-mail addresses: , , . Date: October 17, 2025. 1 16 Oct 2025 2 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY very similar to models for superfluids and Bose-Einstein condensates [SSTS69, DG99, Tin96, TT90]. In the two-dimensional version of the model, vortices are point-like topological defects, arising when the applied magnetic field is large enough, as superconductivity defects in which the magnetic flux can penetrate. Vortices interact logarithmically, the magnetic field acting as an effective confinement potential. As a result of this competition between repulsion and confinement, in energy minimizers vortices form very interesting patterns, including a famous triangular lattice pattern called in physics Abrikosov lattice. The program carried out in particular by the last two authors, see [SS07], culminating with [SS12], was to mathematically analyze the formation of these vortices and derive effective interaction energies that the limiting vortex pattern must minimize in a certain asymptotic limit, thus relating the minimization of the Ginzburg-Landau energy to discrete minimization problems, some of them of number theoretic nature. Our main goal here was to accomplish the same in three dimensions, deriving effective interaction energies for vortex lines in three dimensions in order to precisely understand and describe the vortex patterns in superconductors. This is significantly more delicate in three dimensions than in two, since vortex lines carry much more geometry than vortex points. In particular, curvature effects and regularity questions, requiring the use of fine geometric measure theoretic tools coupled with differential forms, appear. In the three dimensional situation, the energetic cost of a vortex is the balance between its length cost, the logarithmic interaction with other vortices, and the confinement effect of the magnetic field. While the length cost effect had been analyzed in a simplified setting in the mathematical literature [Riv95, LR01, San01, BBO01], the logarithmic repulsion effect analyzed, again in a simplified setting, more recently in [CJ17], our paper is the first to handle all three effects at the same time in the completely realistic physical setting of the full gauged energy. In particular, it settles questions raised since the turn of the century (for instance [Riv95, AR01]) about whether vortices in three dimensional superconductors will be asymptotically straight or curved. 1.1. Description of the model. Let us now get into the details of the Ginzburg-Landau model, whose physical background can be found in the standard texts [SSTS69,DG99,Tin96]. After nondimensionalization of the physical constants, one may reduce to studying the energy functional (1.1) GLε(u, A) := 1 2 Z Ω |∇Au|2 + 1 2ε2(1 -|u|2)2 + 1 2 Z R3 |H -Hex|2. Here Ωrepresents the material sample, we assume it to be a bounded simply connected subset of R3 with regular boundary. The function u : Ω→C is the order parameter, representing the local state of the material in this macroscopic theory (|u|2 ≤1 indicates the local density of superconducting electrons), while the vector-field A : R3 →R3 is the gauge of the magnetic field, and the magnetic field induced inside the sample and outside is H := ∇× A, as is standard in electromagnetism. The covariant derivative ∇A means ∇-iA. The vector field Hex here represents an applied magnetic field and we will assume that Hex = hexH0,ex where VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 3 H0,ex is a fixed vector field and hex is a real parameter, representing an intensity that can be tuned. Finally, the parameter ε > 0 is the inverse of the so-called Ginzburg-Landau parameter κ, a dimensionless ratio of all material constants, that depends only on the type of material. In our mathematical analysis of the model, we will study the asymptotics of ε →0, also called "London limit" in physics, which corresponds to extreme type-II superconductors (typeII superconductors are those with large κ). This is the limiting where the correlation length is much smaller than the penetration depth of the magnetic field, effectively this means that vortex cores are very small. The Ginzburg-Landau theory is an effective Landau theory, describing the local state at the mesoscale level by the order parameter u, but it can be formally derived as a limit of the microscopic quantum Bardeen-Cooper-Schrieffer theory [BCS57] near the critical temperature. This has been partially accomplished rigorously in [FHSS12]. The Ginzburg-Landau model is a U(1)-gauge theory, in which all the meaningful physical quantities are invariant under the gauge transformations u →ueiΦ, A →A + ∇Φ where Φ is any regular enough real-valued function. The Ginzburg-Landau energy and its associated free energy (1.2) Fε(u, A) := 1 2 Z Ω |∇Au|2 + 1 2ε2(1 -|u|2)2 + 1 2 Z R3 |H|2 are gauge-invariant, as well as the density of superconducting Cooper pairs |u|2, the induced magnetic field H, and the vorticity defined below. Throughout this paper, we assume that Hex ∈L2 loc(R3, R3) is such that div Hex = 0 in R3. Consequently, there exists a vector potential Aex ∈H1 loc(R3, R3) such that curl Aex = Hex and div Aex = 0 in R3. The natural space for minimizing GLε in 3D is H1(Ω, C) × [Aex + Hcurl] where Hcurl := {A ∈H1 loc(R3, R3)| curl A ∈L2(R3, R3)}; see [Rom19a]. Critical points (u, A) of GLε in this space satisfy the Ginzburg-Landau equations (1.3)          -(∇A)2u = 1 ε2u(1 -|u|2) in Ω curl(H -Hex) = (iu, ∇Au)χΩ in R3 ∇Au · ν = 0 on ∂Ω [H -Hex] × ν = 0 on ∂Ω, where χΩis the characteristic function of Ω, [ · ] denotes the jump across ∂Ω, ν is the outer unit normal to the boundary, ∇Au · ν = P3 j=1(∂ju -iAju)νj, and the covariant Laplacian (∇A)2 is defined by (∇A)2u = (div -iA·)∇Au. We also note that rotating superfluids and rotating Bose-Einstein condensates can be described through a very similar Gross-Pitaevskii model, which no longer contains the gauge A, and where the applied field Hex is replaced by a rotation vector whose intensity can be tuned. In the regime of low enough rotation these models can be treated with the same techniques as 4 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY those developed for Ginzburg-Landau, see [Ser01, AJ03, Aft06, BJOS13, TT90] and references therein. Type-II superconductors are known to exhibit several phase transitions as a function of the intensity of the applied field. The one we focus on is the onset of vortex-lines. Mathematically, these are zeroes of the complex-valued order parameter function u around which u has a nontrivial winding number or degree (the rotation number of its phase). More precisely, it is established that there exists a first critical field Hc1 of order | log ε|, such that if the intensity of the applied field hex is below Hc1 then the material is superconducting and |u| is roughly constant equal to 1, while when the intensity exceeds Hc1, vortex filaments appear in the sample. The order parameter u vanishes at the core of each vortex tube and has a nonzero winding number around it. Rigorously, this was first derived by Alama-Bronsard-Montero [ABM06] in the setting of a ball. Baldo-Jerrard-Orlandi-Soner [BJOS12,BJOS13] derived a mean-field model for many vortices and the main order of the first critical field in the general case, and [Rom19a] gave a more precise expansion of Hc1 in the general case, and moreover proved that global minimizers have no vortices below Hc1, while they do above this value. One may also point out the paper [JMS04] that constructs locally minimizing solutions with vortices. In these papers, the occurrence of the first vortex line(s) and its precise location in the sample is connected to what we named an isoflux problem, which we studied for its own sake in [RSS23] and which is described below. Moreover, we showed in that paper that if the intensity of the magnetic field hex does not exceed Hc1 by more than K log | log ε|, then the vorticity remains bounded independently of ε, i.e. informally the total length of the curves remains bounded, and we expect only a finite number of curves. From there, it is however quite difficult to extract the optimal number of curves or the individual curve behavior. In particular the coercivity of the energy with respect to the curve locations is quite delicate to identify, as we will see. On the other hand, as mentioned above, the two-dimensional version of the gauged GinzburgLandau model, in which vortices are essentially points instead of lines, was studied in details in the mathematics literature. In particular, in [Ser99a, Ser99c, SS00, SS03] (see [SS07] for a recap) the first critical field was precisely computed, and it was shown that under some nondegeneracy condition the vortices appear one by one near a distinguished point p of the bounded domain, as the magnetic field is increased: at Hc1 one vortex appears, then when the external field is increased by an order log | log ε| a second vortex appears, then when the external field is increased by an additional order log | log ε| a third vortex appears, etc. Moreover, the vortices were shown to minimize, in the limit ε →0 and after a suitable rescaling around p, an effective"renormalized" interaction energy (thus called by analogy with the renormalized energy W of [BBH94]) of the form - X i̸=j log |xi -xj| + N N X i=1 Q(xi), where Q is a positive quadratic function, which results from the logarithmic vortex repulsion and the magnetic confinement effect, see in particular [SS07, Chapter 9]. Specific mathematical VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 5 techniques for analyzing vortices in the two-dimensional model had been first developed for the simplified model without magnetic field, which is obtained by setting A ≡0 and H ≡0 in the two-dimensional version of (1.2), in particular in [BBH94,San98,Jer99,JS02] and then extended to the situation with gauge in [BR95,Ser99a,Ser99c,SS00,SS03]. As alluded to above, vortices in the context of the same simplified model without magnetic field but in three dimensions were also analyzed in the mathematical literature in [Riv95,LR99, LR01,San01,BBO01]. These works demonstrated that, in the absence of magnetic field effects, vortex lines Γ carry a leading order energy proportional to their length, π|Γ|| log ε|, while their interaction effect is a lower order effect (of order 1). Thus, in order to minimize the energy, vortices (which in that setting only occur because of an imposed Dirichlet boundary condition) should be straight lines, and their interaction becomes a negligible effect in the ε →0 limit. It was thus not clear whether magnetic effects could suffice to curve the vortices. A formal derivation was attempted in [AR01] in the context of Bose-Einstein condensates, proposing an effective energy where length effects and interaction effects compete. More recently, in [CJ17], the authors found a setting where the length and the interaction compete at next order: they study the same simplified Ginzburg-Landau model without gauge in a cylindrical domain, choose the Dirichlet boundary condition to have a degree N vortex at one point (say the North pole) of the boundary and a degree -N at another point (say the South pole). The energy minimizers must then have N vortex filaments which connect the two poles, moreover to minimize the leading order energy these should all be nearly parallel and close to vertical straight lines. Since the vortices repel each other logarithmically, these lines curve a little bit when leaving the poles, in order to separate by an optimal distance shown to be 1/ p | log ε|. When rescaling horizontally at that lengthscale, one sees well separated vortex lines with competition between the linearization of the length and the logarithmic interaction. The authors are able to extract an effective limiting energy (1.4) π Z L 0 N X i=1 1 2|u′ i(z)|2 - X i̸=j log |ui(z) -uj(z)|dz, where z : (0, L) 7→(ui(z), z) represent the rescaled curves. The critical points of this energy happen to solve a "Toda lattice" ODE system. In [DDPMR22], solutions of the GinzburgLandau equations (without gauge) and with vortex helices which are critical points of (1.4) were constructed. This setting is however, a little bit artificial due to this particular boundary condition. In addition, in all the problems without magnetic gauge, the number of vortex lines ends up automatically bounded as a result of enforcing a Dirichlet boundary condition with finite vorticity. This significantly simplifies the analysis, and as in the two-dimensional case, we need to deal with a more realistic situation where the number of vortices may a priori be unbounded, for this we rely on [RSS23], itself relying on [Rom19b]. What we do here is derive the first interaction energy where length, interaction and magnetic effects compete, in the setting of the full physical model with gauge. In addition, we do not 6 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY restrict to geometries where the filaments are almost straight. We consider general geometries and magnetic fields that can lead to the optimal location of the first vortex as hex passes Hc1 - the solution to the isoflux problem - to be a curved line, called Γ0. The next vortices obtained by increasing slightly hex will be nearly parallel to Γ0, hence to each other, leading to a curved version of the situation of [CJ17]. However, we have to work in coordinates aligned with Γ0, which turns out to be equivalent to studying Ginzburg-Landau functionals on a manifold. In addition, technical difficulties will arise near the boundary, at the endpoints of the vortex filaments, where these may diverge away from Γ0 to meet the boundary orthogonally, in contrast with the almost parallel setup of [CJ17] where all the curves meet at their (fixed) endpoints. The problem thus combines all the possible technical difficulties of vortex analysis in three dimensions, in particular, dealing with a priori unbounded numbers of vortices, dealing with estimates on manifolds, and dealing with nonflat boundaries. We will make intensive use of the toolbox assembled in [Rom19b] for energy bounds in three dimensions, as well as various methods for obtaining two-dimensional estimates [SS07,AB98], to which we are eventually able to reduce modulo appropriate slicing. We also provide a completely new upper bound construction, based on the Biot-Savart law, approximating the optimal Ginzburg-Landau energy for configurations with vorticity carried by prescribed curves. This is the first upper bound construction applicable to general curves, to be compared with prior constructions in [ABO05,MSZ04,CJ17] which all involve almost straight vortex lines. We will give further detail on the proof techniques after the statement of the main theorem. Before stating the main theorem, we need to introduce various notions and notation. 1.2. Energy splitting. The Ginzburg-Landau model admits a unique state, modulo gauge transformations, that we will call "Meissner state" in reference to the Meissner effect in physics, i.e. the complete repulsion of the magnetic field by the superconductor when the superconducting density saturates at |u| = 1 with no vortices. It is obtained by minimizing GLε(u, A) under the constraint |u| = 1, so that in particular it is independent of ε. In the gauge where div A = 0, this state is of the form (eihexφ0, hexA0), where φ0, A0 depend only on Ωand H0,ex, and was first identified in [Rom19a]. It is not a true critical point of (1.1) (or true solution of the associated Euler-Lagrange equations (1.3), but is a good approximation of one as ε →0. The energy of this state is easily seen to be proportional to h2 ex, we write GLε(eihexφ0, hexA0) =: h2 exJ0. Closely related is a magnetic field B0 constructed in [Rom19a], whose definition will be recalled later. The superconducting current of a pair (u, A) ∈H1(Ω, C)×H1(Ω, R3) is defined as the 1-form j(u, A) = (iu, dAu) = 3 X k=1 (iu, ∂ku -iAku)dxk VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 7 and the gauge-invariant vorticity μ(u, A) of a configuration (u, A) as μ(u, A) = dj(u, A) + dA. Thus μ(u, A) is an exact 2-form in Ω. It can also be seen as a 1-dimensional current, which is defined through its action on 1-forms by the relation μ(u, A)(φ) = Z Ω μ(u, A) ∧φ. The vector field corresponding to μ(u, A) (i.e. the J(u, A) such that μ(u, A)∧φ = φ(J(u, A)) dV where dV is the Euclidean volume form, is at the same time a gauge-invariant analogue of twice the Jacobian determinant, see for instance [JS02], and a three-dimensional analogue of the gauge-invariant vorticity of [SS07]. The vorticity μ(u, A) is concentrated in the vortices and, in the limit ε →0, it is exactly supported on the limit vortex lines. We now recall the algebraic splitting of the Ginzburg-Landau energy from [Rom19a], which allows to follow the roadmap of [SS07] in three dimensions: for any (u, A), letting u = e-ihexφ0u and A = A -hexA0, we have GLε(u, A) = h2 exJ0 + Fε(u, A) -hex Z Ω μ(u, A) ∧B0 + o(1), This formula allows, up to a small error, to exactly separate the energy of the Meissner state h2 exJ0, the positive free energy cost Fε and the magnetic gain -hex R μ(u, A) ∧B0 which corresponds to the value of the magnetic flux of B0 through the vortex or rather the loop formed by the vortex line on the one hand, and any curve lying on ∂Ωthat allows to close it, see Figure 1. Figure 1. Vortex filament with boundary closure. 8 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY The first critical field is reached when competitors with vortices have an energy strictly less than that of the Meissner state, h2 exJ0, that is when the magnetic gain beats the free energy cost. Approximating the energy cost of a curve Γ by π|Γ|| log ε| leads to the following isoflux problem. 1.3. The isoflux problem and nondegeneracy assumption. The isoflux problem characterizes the curves that maximize the magnetic flux for a given length (hence the name isoflux, by analogy with isoperimetric), providing the critical value of hex. Given a domain Ω⊂R3, we let N be the space of normal 1-currents supported in Ω, with boundary supported on ∂Ω. We always denote by |·| the mass of a current. Recall that normal currents are currents with finite mass whose boundaries have finite mass as well. We also let X denote the class of currents in N which are simple oriented Lipschitz curves. An element of X must either be a loop contained in Ωor have its two endpoints on ∂Ω. Given σ ∈(0, 1], we let C0,σ T (Ω) denote the space of vector fields B ∈C0,σ(Ω) such that B × ⃗ν = 0 on ∂Ω, where hereafter ⃗ν is the outer unit normal to ∂Ω. The symbol ∗will denote its dual space. Such a B may also be interpreted as 2-form, we will not distinguish the two in our notation. For any vector field B ∈C0,1 T (Ω, R3) and any Γ ∈N, we denote by ⟨B, Γ⟩the value of Γ applied to B, which corresponds to the circulation of the vector field B on Γ when Γ is a curve. We also let (1.5) ∥Γ∥∗:= sup ∥B∥C0,1 T (Ω,R3)≤1 ⟨B, Γ⟩ be the dual norm to the norm in C0,1 T (Ω, R3). Definition 1.1 (Isoflux problem). The isoflux problem relative to Ωand a vector field B0 ∈ C0,1 T (Ω, R3), is the question of maximizing over N the ratio (1.6) R(Γ) := ⟨B0, Γ⟩ |Γ| . In [RSS23, Theorem 1], we proved that the maximum is achieved, and under the additional condition sup Cloops R 0 there exist C, ε0 > 0 depending only on m, n, M, and ∂Ω, such that, for any ε | log ε|-(q+1) . 2.4.1. Bounded vorticity. We recall that X denotes the class of oriented Lipschitz curves, seen as 1-current with multiplicity 1, which do not self intersect and which are either a loop contained in Ωor have two different endpoints on ∂Ω. We also recall that N be the space of normal 1-currents supported in Ω, with boundary supported on ∂Ω. Condition 2.1 (Weak Nondegeneracy condition). There exists a unique curve Γ0 in X such that R(Γ0) = sup Γ∈N ⟨B0, Γ⟩ |Γ| . Moreover, there exists constants c0, P > 0 depending on Ωand B0 such that (2.9) R(Γ0) -R(Γ) ≥C0 min ∥Γ -Γ0∥P ∗, 1 for every Γ ∈X. We will see in Section 3.3 that the strong nondegeneracy condition of Definition 1.2 implies this one. We now state a result adapted from [RSS23] that, under this weak nondegeneracy condition, locates the vortices of almost minimizers near Γ0. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 19 Theorem B. Assume that Condition 2.1 holds. Then, for any K > 0 and α ∈(0, 1), there exists positive constants ε0, C > 0 depending on Ω, B0, K, and α, such that the following holds. For any ε | log ε|-2 . Remark 2.2. In [RSS23, Theorem 3] we did not state the vorticity estimate for the flat norm above. However, it appears from the proof that νε = N0 X i=1 Γi + eΓ and that the vorticity estimate stated in [RSS23, Theorem 3] directly follows from applying Theorem A. A straightforward use of this theorem then also yields the vorticity estimate for the flat norm stated above. 2.5. Tube coordinates, energy rewriting and horizontal rescaling. 20 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY 2.5.1. Tube coordinates. Proposition 2.2. Assume Γ0 : [0, L0] →Ωis a smooth curve parametrized by arclength with endpoints p = Γ0(0) and q = Γ0(L0) on ∂Ωand meeting the boundary orthogonally. Then there exists δ > 0 and coordinates defined in Tδ ∩Ω, where Tδ is the tube of radius δ around Γ0 (see remark below), such that: - In these coordinates, z 7→(0, 0, z) is an arclength parametrization of Γ0. - Denoting by g(x, y, z) the Euclidean metric in these coordinates, we have g13 = g23 = 0 and, from the previous property, g33(0, 0, z) = 1 for z ∈[0, L0]. - These coordinates map a neighborhood of p in ∂Ωinto R2 × {0} and a neighborhood of q into R2 × {L0}. We will denote by Cδ the coordinate patch corresponding to Tδ ∩Ω, and by Φ : Cδ →Tδ ∩Ωthe inverse of the coordinate chart. Remark 2.3. To define Tδ we first need to extend Γ0 a little bit outside Ω. Then the tube really refers to this extended curve, so that Tδ ∩Ωindeed contains a neighborhood in ∂Ωof each of the endpoints of Γ0. Remark 2.4. If Γ0 is a critical point of the ratio R, it is easy to check using the fact that B0 × ν = 0 on ∂Ωthat Γ0 intersects ∂Ωorthogonally. Proof of the proposition. Let s 7→Γ0(s) be a parametrization of Γ0 by arclength on the interval [0, L0]. We define f(x) = s if x = Γ0(s), let f = 0 in a neighborhood of p := Γ0(0) in ∂Ωand let f = L0 in a neighborhood of q := Γ0(L0) in ∂Ω. The function f may be extended smoothly in a neighborhood W of Γ0 in Ωin such a way that the level sets of f meet Γ0 orthogonally and, restricting the neighborhood if necessary, we may assume its gradient does not vanish there. If η > 0 is small enough, then Ση = ̄B(p, η) ∩∂Ωis included in W and diffeomorphic to the disk ̄D(0, η), and we let φ, defined in ̄D(0, η) be such a diffeomorphism. For (x, y, z) ∈C := ̄D(0, η) × [0, L0], we define Φ(x, y, z) to be equal to γ(z), where γ is the integral curve of the vector field ∇f/|∇f|2 originating at φ(x, y), so that f(γ(z))′ = 1, and hence f(Φ(x, y, z)) = z. In particular, z 7→Φ(0, 0, z) is a parametrization of Γ0 by arclength. If η is chosen small enough, then Φ is well-defined, injective and smooth, and its differential is invertible. Moreover C ∩{z = 0} and C ∩{z = L0} are mapped to the boundary of Ω, since f(Φ(x, y, z)) = z. Therefore Φ is a diffeomorphism from C to a neighborhood of Γ0 in Ω. We choose δ > 0 small enough so that Tδ ∩Ω⊂Φ(C ). Then the coordinate system defined by Φ has all the desired properties. The fact that, if g denotes the pull-back of the Euclidean metric by Φ, we have g13 = g23 = 0 follows from the fact that Φ(·, ·, z) is mapped to {f = z}, hence ∂xΦ and ∂yΦ are orthogonal to ∇f, hence to ∂zΦ. □ 2.5.2. Energy in the new coordinates. Then we express the quantities of interest in the new coordinates: g is the metric, pull-back of the Euclidean metric by Φ. Given a configuration (u, A) in Ω, the order parameter transforms as a scalar field and A as a 1-form. The field B0 transforms as a one-form. Keeping the same notation for the quantities in the new coordinates, VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 21 we may define the superconducting current j(u, A) = (iu, du -iAu) as in the original coordinates, it is a 1-form, and the vorticity μ(u, A) = dj(u, A) + dA, which is a two-form. The new coordinates are defined in Uδ = Tδ ∩Ω, using the notation of the previous proposition, and we let Cδ = Φ-1(Uδ). We define Fε(u, A, Uδ) = 1 2 Z Uδ eε(u, A), where eε(u, A) = 1 2 |∇Au|2 + 1 2ε2(1 -|u|2)2 + | curl A|2 . In the new coordinates, this becomes Fε(u, A, Uδ) = 1 2 Z Cδ |du -iAu|2 g + 1 2ε2(1 -|u|2)2 + |dA|2 g dvolg, where volg is the volume form relative to the metric g. Given a k-form ω, its norm relative to g is defined by the identity |ω|2 g dvolg = ω ∧∗gω. Alternatively, if e1, e2, e3 is a g-orthonormal frame, |ω|2 g = P ω(ei1, . . . , eik)2, where the sum runs over ordered k-tuples 1 ≤i1 0, we now consider in Cδ the metric ̃g defined by ̃gij = l-2gij if 1 ≤i, j ≤2 and ̃gij = gij otherwise (recall that g13 = g23 = 0). The volume element for ̃g is dvol ̃g = l-2dvolg. We let ̃ε = ε/l, and (2.11) ̃Fε(u, A) = 1 2 Z Cδ |du -iAu|2 ̃g + 1 2 ̃ε2(1 -|u|2)2 + |dA|2 ̃g dvol ̃g. We have |du -iAu|2 ̃g = l2|du -iAu|2 g⊥+ g33|∂zu -iAzu|2, |dA|2 ̃g = l4|dA|2 g⊥+ l2 X 1≤i,j≤2 (dA)i3(dA)j3gijg33, so that, if l≤1, then (2.12) l2 ̃Fε(u, A) ≤l2F ⊥ ε (u, A) + F z ε (u, A). Therefore (2.13) Fε(u, A, Uδ) ≥(1 -l2)F ⊥ ε (u, A) + l2 ̃Fε(u, A). 3. Preliminaries on the ratio function 3.1. Non-degeneracy condition for graphs and piecewise graphs. Here we compute the second derivative of the ratio Γ 7→R(Γ) for Γ's that are graphs over Γ0. In addition, we consider a more general class of curves, which we call piecewise graphs over Γ0, and which naturally appear in our setting since, by Theorem B, the approximation of the vorticity is essentially composed of a sum of N0 such objects. We provide a kind of nondegeneracy condition for the ratio among piecewise graphs, and then establish a relation between the second derivative of the ratio and this nondegeneracy condition. Throughout this section, B0 is the Meissner magnetic field associated to the domain Ω. It is in particular true that the restriction of B0 to a plane tangent to the boundary of Ωis zero. Without changing notation, we will work in the coordinates defined in Proposition 2.2 in a neighborhood of Γ0. In these coordinates, Γ0 is the interval [0, L0] on the z-axis. We have R(Γ) = ⟨B0, Γ⟩/|Γ| where, in the new coordinates, given a curve Γ in Cδ, |Γ| = Z |Γ′(s)|g ds, ⟨B0, Γ⟩= Z Γ B0. For a map f defined on Cδ, we denote by f • z the map f • z (x, y, z) = f(0, 0, z). The map f • z may be seen either as a function of x, y, z, or just of z. Recall from the introduction that for the metric evaluated along the axis g(0, 0, z), the variable z is omitted from the notation, and we write g• instead of g• z. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 23 We use the notation ⃗u = (x, y, 0) and d⃗u = dx dy. For any curve or sum of curves Γ, we define ⟨Γ⟩⊥:= Z Γ 1 2(dg33)• z(⃗u) dz ⟨Γ⟩∥:= Z Γ LL(z, ⃗u) dz, where the linearization of the length is defined as (3.1) LL(z, ⃗u) = 1 4(d2g33)• z(⃗u, ⃗u) -1 8(dg33)• z(⃗u)2. Remark 3.1. Note that (3.2) LL(z,⃗u) ≤C|⃗u|2. Hereafter χ is a smooth cutoff function on Cδ taking values in [0, 1] such that (3.3) χ(·) = 1 if distg(·, Γ0) 3δ/4. For any 2-form μ = μ12 dx ∧dy + μ23 dy ∧dz + μ31 dz ∧dx, we let (3.4) ⟨μ⟩2D = Z z Z Σz χ√g33 μ12 d⃗u dz = χ ∂ ∂z, μ . We also define, for any curve or sum of curves Γ, ⟨Γ⟩2D := Z Γ χ √g33 dz. The following lemma motivates the notation ⟨Γ⟩2D. Lemma 3.1. Assume Γ is a curve or sum of curves, and that ∥μ -2πΓ∥∗≤η, where ∥· ∥∗is defined in (1.5). Then ⟨μ⟩2D -2π ⟨Γ⟩2D ≤Cη. Proof. We have ⟨μ⟩2D = Z z Z Σz χ √g33μ12 d⃗u dz = Z Cδ (χ √g33 dz) ∧μ. Since the restriction of χ √g33 dz to ∂Cδ vanishes, it belongs to C0,1 T (Cδ), and thus Z Cδ (χ √g33 dz) ∧μ -2π Z Γ χ √g33 dz ≤Cη. □ 24 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Figure 3. Piecewise graph: z′(t) = 1 on [0, z2] and [z2 + h, L0 + h], z′(t) = -1 on [z2, z2 + h]. We will need to expand up to second order the ratio of a curve which is more general than a graph over Γ0, what we call a piecewise graph. Such a piecewise graph is defined as a curve Γ : [0, L] →Cδ, where Γ(t) = Γ0(z(t))+⃗u(t), where z(t) : [0, L] →[0, L0] is a sawtooth function - i.e. its derivative is piecewise constant with values ±1 - and ⃗u(t) is horizontal for any t ∈[0, L]. An example is shown in Figure 3. Proposition 3.1. For any piecewise graph Γ : [0, L] →Cδ/2, where Γ(t) = Γ0(z(t)) + ⃗u(t) with z(0) = 0 and z(L) = L0, we have ⟨Γ⟩2D = L0 + ⟨Γ⟩⊥+ ⟨Γ⟩∥+ O ∥⃗u∥3 L3([0,L]) . Proof. Since ∥⃗u∥L∞ 0, define Ql(Γ) := l2 |Γ| ̃g -⟨Γ⟩2D + (⟨Γ⟩2D -L0) -⟨B0, Γ -Γ0⟩ R(Γ0) , where |Γ| ̃g is the length of Γ with respect to the metric ̃g = g33dz2 + l-2g⊥. Lemma 3.3. If Γ is a piecewise graph in Cδ/2, parametrized as Γ(t) = Γ0(z(t))+⃗u(t), t ∈[0, L], with z(0) = 0 and z(L) = L0, and if the maximum of Γ 7→R(Γ) is achieved at Γ0(z) = (0, 0, z), then (3.12) Ql(Γ) = Z L 0 n l2 q g33(Γ) + l-2|⃗u′|2 g(Γ) -z′p g33(Γ) +z′ LL(z,⃗u) -LB(z,⃗u, z′⃗u′) R(Γ0) dt + O ∥⃗u∥3 L∞+ ∥⃗u∥2 L∞∥⃗u′∥L1 . Proof. The result follows from the definition of Ql, ⟨Γ⟩2D and Propositions 3.1, 3.2, taking into account the fact that, because of the criticality of Γ0, we have from Proposition 3.3 that R(Γ0) ⟨Γ⟩⊥= ⟨B0, Γ⟩⊥. □ Proposition 3.4. Assume that Γ0 is a nondegenerate maximizer of the ratio R. (1) There exists a small constant c > 0 such that for any l≤1 and for any piecewise graph Γ parametrized as Γ(t) = Γ0(z(t)) + ⃗u(t) with ∥⃗u∥L∞ 0 such that, as n →+∞, (3.17) max(αn, ∥⃗un∥∞) ≪δn ≪ln, and then we set Bn = {t ∈An | |⃗u′ n(t)|g(Γn) ≤δn}. If t ∈Bn, we have |⃗u′ n|2 g(Γn) ln 2 ≤δn ln , and the right-hand side goes to 0 as n →+∞. Therefore a Taylor expansion yields, as n →+∞, ln 2 Z Bn q g33(Γn) + ln -2|⃗u′ n|2 g(Γn) - p g33(Γn) dz = 1 2 -o(1) Z Bn |⃗u′ n|2 g(Γ0) dz, where we have also used the fact that g33(Γn) →1 and g(Γn) →g(Γ0) uniformly as n →+∞, since ∥⃗un∥L∞→0. We deduce that, writing g• = g(Γ0), Z Bn qln(Γn)(t)dt ≥ 1 2 -o(1) Z Bn |⃗u′ n|2 g• dt + LL(zn(t), ⃗un) -LB(zn(t), ⃗un, ⃗u′ n) R(Γ0) dt. Using (3.2) and (3.6), we deduce in particular that (3.18) Z Bn qln(Γn)(t)dt ≥-C∥⃗un∥2 L∞. On the other hand, since (√a + x -√a)/√x is increasing, we have for any t in An \ Bn that r g33(Γn) + |⃗u′n|2 g(Γn) ln2 - p g33(Γn) |⃗u′ n|g(Γn)/ln ≥ q g33(Γn) + δ2n ln2 - p g33(Γn) δn/ln . VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 31 Since δn ≪ln and since g33(Γn) →1 and g(Γn) →g•, we deduce s g33(Γn) + δ2 n ln 2 - p g33(Γn) ≥ 1 2 -o(1) δ2 n ln 2, and therefore ln 2   s g33(Γn) + |⃗u′ n|2 g(Γn) ln 2 - p g33(Γn)  ≥ 1 2 -o(1) |⃗u′ n|g(Γn)δn. From (3.6) and (3.2) we know that LL(z, ⃗un) and LB(z,⃗un, ⃗u′ n) are both negligible compared to |⃗u′ n|δn if t ∈An \ Bn, therefore we deduce that (3.19) Z An qln(Γn)(t)dt ≥ 1 2 -o(1) δn Z An |⃗u′ n|g• ≥ 1 2 -o(1) δ2 n|An \ Bn|. From the hypothesis Qln(Γn) = O(αn2) and the estimates (3.15), (3.16), (3.18) and (3.19), we find that ln Z Acn ln + |⃗u′ n|g(Γn) dt + δn Z An δn + |⃗u′ n|g(Γn) dt ≤C α2 n + ∥⃗un∥2 L∞(1 + ∥⃗u′ n∥L1) . Then we can decompose ∥⃗u′ n∥L1 = ∥⃗u′ n∥L1(Bcn) + ∥⃗u′ n∥L1(Bn) , use the fact that |⃗u′ n| ≤δn on Bn and use (3.17) to deduce that (3.20) ∥⃗u′ n∥L1(Bcn) = o (αn + ∥⃗un∥L∞) , |Bc n| = o(1). We now define ⃗wn : [0, L0] →R2 as follows. First we require that ⃗wn(0) = ⃗un(0). Then we note that, except for a finite set of values of z - which we denote J - there exists a unique t such that zn(t) = z and therefore a unique t ∈An such that zn(t) = z. We then require that for any t ∈An, ⃗w′ n(zn(t)) = ( ⃗u′ n(t) if t ∈Bn 0 if t ∈An \ Bn. This defines unambiguously ⃗w′ n(z) if z /∈J, thus ⃗wn is well defined. Moreover, using (3.20), we have for any t ∈An (3.21) |⃗wn(zn(t)) -⃗un(t)| ≤ Z Bcn |⃗u′ n(s)| ds = o(αn + ∥⃗un∥L∞). Since LL(z, ·) is a quadratic form, we thus have that for any t ∈An LL(zn(t), ⃗wn(zn(t))) -LL(zn(t), ⃗un(t)) dt = o(α2 n + ∥⃗un∥2 L∞). Since LB(z, ·, ·) is bilinear, we also deduce that LB(zn(t), ⃗wn(zn(t)), ⃗w′ n(zn(t))) = 0 for t ∈ An \ Bn while for t ∈Bn we have LB(zn(t), ⃗wn(zn(t)), ⃗w′ n(zn(t))) -LB(zn(t), ⃗un(t), ⃗u′ n(t)) = o((αn + ∥⃗un∥L∞)|⃗w′ n|). 32 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY This allows us to write Z Bn LL(z, ⃗un) = Z L0 0 LL(z, ⃗wn) dz + o(αn 2 + ∥⃗un∥2 L∞), Z Bn LB(z,⃗un, ⃗u′ n) - Z L0 0 LB(z, ⃗wn, ⃗w′ n) dz = o(αn + ∥⃗un∥L∞) Z L0 0 |⃗w′ n|. Note that we have use the change of variables z = zn(t), to pass from integrals over An to integrals over [0, L0]. We now use again the hypothesis Qln(Γn) = O(αn2) with the information we have gathered so far to obtain that Cα2 n ≥ Z Bn qln(Γn)(t)dt + O ∥⃗un∥3 ∞+ ∥⃗un∥2 ∞∥⃗u′ n∥L1 = Z L0 0 (1 2 -o(1))|⃗w′ n|2 g• + L (z, ⃗wn, ⃗w′ n) dz + o(αn 2 + ∥⃗un∥2 ∞) + o(αn + ∥⃗un∥∞) Z L0 0 |⃗w′ n|. (3.22) Then, from the nondegeneracy of the maximizer Γ0 as defined in Definition 1.2, that is, the positive definiteness of Q, we deduce that Z L0 0 |⃗w′ n|2 g• + L (z, ⃗wn, ⃗w′ n) dz ≥c0∥⃗wn∥2 H1, for some c0 > 0 independent of n. Using also the fact the H1 norm controls the L∞norm in one dimension, we see that the error terms in (3.22) may be absorbed by the left-hand side and the first term on the right-hand side to deduce that ⃗wn/αn subsequentially converges weakly in H1([0, L0]) to some ⃗u∗, and that lim inf n Ql(Γn) αn2 ≥ L0 2 R(Γ0)Q(⃗u∗). The weak H1 convergence also implies that ⃗wn/αn converges to ⃗u∗uniformly if we take a further subsequence. It then follows from (3.21) that |⃗vn(t) -⃗u∗(zn(t))| converges to 0 as n →+∞, uniformly w.r.t. t, hence proving the first assertion in (3.13). We also deduce from (3.20) that Ln →L0, since zn′ = 1 on An and the integral of zn′ on [0, Ln] is equal to L0. It also follows from (3.20) that zn(t) →t uniformly. Together with the uniform convergence of ⃗vn(t) -⃗u∗(zn(t)) to 0, this implies that vn →⃗u∗in the sense of distributions. Note that the fact that zn(t) →t tells us that the limit of the piecewise graphs Γn is in fact a graph. It remains to prove that ∥Γn -Γ∗,n∥∗= o(αn). This is a direct consequence of the above and Lemma 3.4 below, noting that the length of Γn is bounded as a consequence of (3.20), and the fact that ⃗w′ n is bounded in L2, hence in L1 also. □ VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 33 Lemma 3.4. Let Γ, eΓ ⊂Tδ ∩Ωbe two curves such that, in tube coordinates, Γ(t) = Γ0(z(t)) + ⃗u(t) is a piecewise graph defined over [0, L] with z(0) = 0 and z(L) = L0 and eΓ(z) = Γ0(z)+⃗v(z) is a graph over Γ0. Then ∥Γ -eΓ∥F(Ω), ∥Γ -eΓ∥∗≤C max t |⃗u(t) -⃗v(z(t))| |Γ| + |eΓ| , where all norms are understood with respect to the metric g. Proof. Let X be any vector field defined in Ω, normal to ∂Ω, and such that ∥X∥∞and ∥curl X∥∞are less than or equal to 1. We need to show that ⟨Γ, X⟩- D eΓ, X E ≤C max t |⃗u(t) -⃗v(z(t))| |Γ| + |eΓ| . For this we define Σ(s, t) = (1 -s)Γ(t) + seΓ(z(t)) for (s, t) ∈[0, 1] × [0, L]. Then, applying Stoke's Theorem and the fact that X is normal to ∂Ω, we have ⟨Γ, X⟩- D eΓ, X E = Z Σ curl X · νΣ ≤|Σ|, and therefore ⟨Γ, X⟩- D eΓ, X E ≤ Z L 0 Z 1 0 |∂sΣ||∂tΣ| ds dt. Since ∂sΣ = eΓ(z(t)) -Γ(t) and |∂tΣ(s, t)| ≤|Γ′(t)| + |eΓ′(t)|, the result follows. □ 3.3. Strong nondegeneracy implies weak nondegeneracy. Lemma 3.5. There exists η > 0 depending on Ωsuch that the following holds. Assume Γ is Lipschitz curve with no boundary in Ωsuch that (3.23) χη ∂z √g33 , Γ0 L0 -Γ |Γ| ≤l, where χη is a cut-off function for the cylinder Cη = B(0, η) × (0, L0), that is, χη is equal to 1 in Cη/2, equal to 0 outside Cη, takes values in [0, 1] and has gradient bounded by C/η. Then, if lis small enough depending on Ω, Γ is included in a neighborhood of Γ0 where tubular coordinates are defined, so that we may write Γ(t) = Γ0(z(t)) + ⃗u(t) for a horizontal vector ⃗u. Then, parametrizing Γ and Γ0 by arclength with a proper orientation, we have (3.24) Z L 0 |1 -z′(t)|dt ≤C √ l, ∥⃗u∥L∞≤C √ l, ||Γ| -L0| ≤C √ l, where C depends on Ω. Moreover z(0) = 0 and z(L) = L0. Proof. Fix η > 0 small enough so that tubular coordinates are defined in Cη and such that √g33 ∈(1/2, 2) in Cη. The domain in Ωcorresponding to Cη is denoted Tη. We assume Γ is parametrized by arclength and whenever Γ(t) ∈Tη, we denote by (x(t), y(t), z(t)) the coordinates of Γ(t). We let X = χη ∂z √g33. 34 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Since ⟨X, Γ0/L0⟩= 1, denoting L = |Γ| we have by definition and using the fact that |Γ′|g = 1 from the arclength parametrization hypothesis, that X, Γ |Γ| = - Z L 0 ⟨X, Γ′⟩g , ⟨X, Γ′⟩g ≤1. It follows that 0 ≤ X, Γ0 L0 -Γ |Γ| = - Z L 0 (1 -|X|g) + |X|g|Γ′|g -⟨X, Γ′⟩g ≤Cl, Both terms in the integral being nonnegative, they are each bounded by Cl. When Γ(t) ∈Cη, we may write it as Γ(t) = Γ0(z(t)) + ⃗u(t), where ⃗u(t) is horizontal. Then, replacing |X|g and ⟨X, Γ′⟩g by their value, we have (3.25) - Z L 0 (1 -χη) ≤Cl, - Z L 0 χη (1 -√g33z′) ≤Cl. We now show that L = |Γ| is bounded by a constant depending only on Ω. For this we begin by extending the coordinate function z defined in Tη to a C1 function defined in Ω, whose C1 norm is then a constant depending on Ω. Since √g33 ∈(1/2, 2) on Cη, we have - Z L 0 χη 1 2 -z′ ≤- Z L 0 χη 1 √g33 -z′ ≤2Z L 0 χη √g33 1 √g33 -z′ ≤Cl, while, using the first inequality in (3.25), - Z L 0 (1 -χη) 1 2 -z′ ≤CZ L 0 (1 -χη) ≤Cl. The two inequalities imply - Z L 0 1 2 -z′ ≤Cl. Since the integral of z′ is bounded above by 2 maxΩ|z| ≤C, we deduce that 1 2 -Cl≤C L , and then that L is bounded above by a constant depending only on Ω, if lis small enough. Next, we claim that Γ is included in the cylinder CC √ lfor some C > 0 depending only on Ω. First, from (3.23), there exists a ∈(0, L) such that Γ(a) ∈CCl. If Γ exited from CC′√ l, then the first exit point b (which we assume larger than a w.l.o.g) is such that Γ(t) ∈Cη/2 for t ∈(a, b), so that χη(Γ) = 1 in (a, b), and such that (3.26) Z b a |⃗u′|g⊥≥C′√ l-Cl. On the other hand, since |Γ′|2 g = |⃗u′|2 g⊥+ g33z′2 = 1 the latter inequality in (3.25) implies that Z b a q |⃗u′|2 g⊥+ g33z′2 -√g33z′ ≤Cl, VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 35 and then - using the fact that √1 + x-1 ≥c min(√x, x) for any x ≥0 and that l≤ √ lsince lis assumed small - that (3.27) Z b a |⃗u′|g⊥≤C √ l, which would contradict (3.26) if C′ is chosen large enough. Therefore Γ is included in CC √ lfor some C > 0 and therefore ∥⃗u∥L∞≤C √ l, which proves the second inequality in (3.24) Now, since Γ is included in CC √ l, we have that (x(t), y(t), z(t)) are defined in [0, L]. Since Γ(0) and Γ(L) belong to ∂Ω, we must have z(0), z(L) ∈{0, L0}, and then, in view of the latter inequality in (3.25), that z(0) = 0 and z(L) = L0, if lis chosen small enough. Then, arguing as above we find that (3.27) holds with a = 0 and b = L, so that (3.28) ∥⃗u′∥L1([0,L]) ≤C √ l. Moreover, since g33 = 1 on Γ0 from the definition of g in Proposition 2.2, we have |1 -√g33| ≤ C √ lon CC √ l, so that (3.25) implies Z L 0 |1 -z′| ≤C √ l, which using z(0) = 0 and z(L) = L0 implies that |L -L0| ≤C √ land then, together with (3.28) implies that ||Γ| -L0| ≤C √ l. □ Proposition 3.5. Assume that the maximum of R among normal 1-currents which are divergence free in Ωis uniquely achieved (modulo a multiplicative constant) at a simple smooth curve Γ0 with endpoints on ∂Ωand that the second derivative of R at Γ0 is definite negative. Then there exists α > 0 such that for any curve Γ we have R(Γ0) -R(Γ) ≥α min(∥Γ -Γ0∥2 ∗, 1). Proof. The statement is equivalent to proving that if {Γn}n is a maximizing sequence of curves with no boundary in Ω, then lim inf n→+∞ R(Γ0) -R(Γn) ∥Γn -Γ0∥2 ∗ > 0. Consider such a sequence {Γn}n. Then, using the compactness of currents, Γn/|Γn| converges to Γ0/L0 in the flat metric, hence in (C0,1 T (Ω, R3))∗. It follows using Lemma 3.5 that if n is large enough, then Γn lies in Cδ and, using tubular coordinates, we may parametrize it as Γn(t) = Γ0(zn(t)) + ⃗un(t) in [0, Ln], where z′ n(t) ∈{±1} and where zn(0) = 0, zn(Ln) = L0. It is also the case that |Γn| →L0 and that ∥1 -z′ n∥L1([0,Ln]) →0, as n →+∞. Then, applying Proposition 3.4 with l= 1, we deduce that (3.29) lim inf n→+∞ Q1(Γn) ∥⃗un∥2 L∞> 0. 36 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY But, since ⟨B0, Γ0⟩/ R(Γ0) = L0, we have |Γn| R(Γ0) (R(Γ0) -R(Γn)) = |Γn| -⟨B0, Γn⟩ R(Γ0) = |Γn| -⟨Γn⟩2D + ⟨Γn⟩2D -L0 + L0 -⟨B0, Γn⟩ R(Γ0) = |Γn| -⟨Γn⟩2D + ⟨Γn⟩2D -L0 -⟨B0, Γn -Γ0⟩ R(Γ0) = Q1(Γn). It follows, in view of (3.29) and since |Γn| →L0, that R(Γ0) > R(Γn) + c∥⃗un∥2 L∞. To prove the proposition, it remains to note that ∥Γn -Γ0∥∗ 0} in C as follows : The point with coordinates x, z is ρ times the image of the complex number iz by the Moebius transform φx(w) = (x + w)/(1 + xw), which maps the vertical segment i[-1, 1] to the intersection of the circle centered at (1 + x2)/2x with the unit disk. We thus have R + iZ = ρ x + iz 1 + ixz. We may then extend straightforwardly this coordinate system to the ball Bρ in R3 by requiring that a point with cylindrical coordinates (R, θ, Z) in Bρ \ {(0, 0, ±ρ)} has coordinates (x, θ, z), VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 37 where x ∈[0, 1) and z ∈(-1, 1) such that R = ρx 1 + z2 1 + x2z2, Z = ρz 1 -x2 1 + x2z2. The Euclidean metric dR2 + dZ2 + R2dθ2 then becomes g(x, z, θ) = ρ2 (1 + x2z2)2 (1 + z2)2 dx2 + (1 -x2)2 dz2 + x2(1 + z2)2 dθ2 . It is straightforward to compute the second derivative of the length. Let Γt(z) = (tx(z), z, θ(z)) be a family of curves parametrized by z ∈[-1, 1] in our coordinates. Then (3.30) d2 dt2 |t=0|Γt| = ρ Z 1 -1 1 2x′2(1 + z2)2 -x2(1 + z2) + 1 2x2(1 + z2)2θ′2 dz. To compute the second derivative of ⟨B0, Γt⟩, we resort to the explicit expression for curl B0 computed in [Lon50]: curl B0 = 3ρ 2 sinh ρ cosh r -sinh r r sin φ r eθ, where (r, φ, θ) are spherical coordinates on Bρ, so that R = r sin φ and Z = r cos φ. It follows that curl B0 · eθ = 3 2 r2 3 sin φ r + O(ρ3) = R 2 + O(ρ3). We deduce that, denoting D+ ρ = {(R, Z) | R2 + Z2 ≤ρ2 and R > 0}, (3.31) ⟨B0, Γ0⟩= ZZ D+ ρ curl B0 · eθ = Z ρ R=0 R 2 × 2 p ρ2 -R2 dR + O(ρ5) = ρ3 3 + O(ρ5). We also find that (curl B0 · eθ) rdr ∧dφ = ρ3 2 x(1 -x2)(1 + z2)2 (1 + x2z2)3 dx ∧dz + O(ρ5). This allows us to compute ⟨B0, Γ0⟩-⟨B0, Γt⟩= ZZ At curl B0 · eθ dx dz, where At = {(s, z) | -1 ≤z ≤1 and 0 ≤s ≤tx(z)}, from which we compute (3.32) d2 dt2 |t=0 ⟨B0, Γt⟩= -ρ3 4 Z 1 -1 x2(1 + z2) dz + O(ρ5). Since Γ0 maximizes the ratio R, its differential at Γ0 vanishes and then, for t = 0, d2 dt2 |t=0 R(Γt) = d2 dt2 |t=0 ⟨B0, Γt⟩L0 -⟨B0, Γ0⟩d2 dt2 |t=0|Γt| L0 2 , 38 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY so that, in view of (3.30), (3.31), (3.32) and the fact that L0 = 2ρ, we have after simplification d2 dt2 |t=0 R(Γt) = -ρ2 24 Z 1 -1 (1 + z2)2(x′2 + x2θ′2) + x2(1 + z2) dz + O(ρ3). Therefore the quadratic form d2 R(Γ0) is definite negative if ρ is small enough. □ 4. Lower bounds by slicing In this section we prove two types of lower bounds for the free energy Fε contained in a tubular neighborhood of Γ0, and obtained by integrating in the z coordinate two-dimensional lower bounds over slices. The first type is very robust and is obtained by the ball construction method (`a la [San98, Jer99]) but in a two-step growth process that allows to retain more information on the degrees at small scales. It retains an error which is larger than a constant. In this construction, the varying weight is approximated by a constant weight, leading to errors that can be absorbed into the ball construction errors. The second type of lower bound is more precise, it recovers the exact constant γ and an error only o(1). To obtain such a precise error, the varying weight can no longer be approximated by a constant, instead one needs to resort to performing the ball construction in the precise geometry we are working in, i.e. we need to grow geodesic balls. The techniques thus combine ball construction methods within the geometric framework to capture the energy on large scales, with the refined [BBH94] analysis found in of [AB98,Ser99a] to capture the precise energy at small scales while approximating the weight by a uniform one. 4.1. Lower bounds by ball growth method. First, recalling (3.3) and (3.4), which correspond respectively to the definitions of the smooth cutoff χ and ⟨μ⟩2D, we have the following result. Proposition 4.1. Assume that Fε(u, A) 0, C > 0 and integer N, there exist ε0, C′ > 0 such that for any z, any (u, A), and any ε r, using the ball growth method of [San98] (see for instance [SS07, Chapter 4]) we may grow the balls from total radius r to total radius ρz, at which point the balls will have grown into a collection B′ = {B′ j := B(a′ j, r′ j)}j, with degrees d′ j, such that for each B′ j we have Z Bj e⊥ ε (u, A) dvolg⊥≥π|d′ j| p(a′ j) -Cρz log ρz r -C′ . Moreover, by additivity of the degree and disjointness of the balls, we must have (4.5) d′ j = X i,ai∈B′ j di. Since log(ρz/r) ≤C′ log | log ε| we find (4.6) Z Bj e⊥ ε (u, A) dvolg⊥≥π|d′ j|p(a′ j) log ρz r -C′ -C′ρz log | log ε| . Step 2: lower bound by integration over large circles. We next retrieve the degree squared contribution to the energy outside of the small balls following the method of integration along circles of [SS03]. Denote by L a minimal connection between the measure Pk i=1 diδai and Nδ0, relative to the metric g⊥, allowing connections to the boundary. Then in view of (4.2), |L| ≤C′(ρz+| log ε|-q). The fact that ρz is the flat distance for the Euclidean metric and not g⊥is accounted for by the constant C′. Moreover, if the circle Cs (relative to g⊥) of center 0 and radius s does not intersect L, then the winding number of u on Cs is equal to N. Since we have p(x) ≥1 -Cs on Cs, and since the length of Cs relative to g⊥is bounded above by 2πs + C′s2, we obtain for such an s that Z ∂Cs e⊥ ε dvolg⊥≥(1 -C′s)πN 2 s . But the measure of the set of s such that Cs intersects either B′ or L is bounded above by 2ρz + |L|, since B′ has total radius ρz. Since |L| is the flat distance of Pk i=1 diδai to Nδ0, and from (4.2) and the definition of ρz, this measure is less than 4ρz, if ε is small enough. Integrating the circle bound above with respect to s such that Cs intersects neither B′ nor L, we obtain Z Σ ′ e⊥ ε dvolg⊥≥ Z δ 4ρz(1 -C′s)πN 2 s ds ≥πN 2 log δ ρz -C′ . Note that if 4ρz > δ the lower bound remains true, since the left-hand side is nonnegative. Adding this to (4.6) and (4.4) we find (4.7) Z Σ e⊥ ε dvolg⊥≥π X i |di|p(ai) log r ε -C′ + π X j |d′ j|p(a′ j) log ρz r + πN 2 log δ ρz -C′ -C′ρz log | log ε|. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 41 Then we note that, for any j, in view of (4.5), we have X i,ai∈B′ j |di|p(ai) ≤|d′ j|(p(a′ j) + Cρz) + C′   X i,ai∈B′ j |di| -|d′ j|  . Summing with respect to j, we find that X j |d′ j|p(a′ j) ≥ X i |di|p(ai) -C′ (ρz + D -n) , where now the sums run over all indices i and j, respectively. Inserting into (4.7), we deduce that Z Σ e⊥ ε dvolg⊥≥π X i |di|p(ai) log ρz ε -C′ + πN 2 log L ρz -C′ (d + D -n) log | log ε| -C′, which together with (4.2) proves the proposition. □ For slices Σz for which we have a weaker energy bound, we use a cruder estimate. Lemma 4.1. For any q > 0 and C > 0, there exist ε0, C′ > 0 such that, for any z such that Z Σz e⊥ ε dvolg⊥ 0 to be chosen below. • We denote by J the set of z /∈I such that f(z) ≤| log ε|q. • We denote by K the set of z /∈I ∪J. Let us first treat the case z ∈K. Since on Σz, the 2-form μ12 d⃗u is the exterior differential of the current j(u, A) + A restricted to Σz, it follows that Z Σz χ √g33 μ12 d⃗u = Z Σz d(χ √g33) ∧j(u, A) + Z Σz χ √g33 dA d⃗u ≤Cf(z)1/2, from the Cauchy-Schwarz inequality and the definition of e⊥ ε (u, A). It then follows that for any z ∈K, since f(z) > | log ε|q and q > 2, we have (4.8) f(z) ≥| log ε| 2 Z Σz χ √g33 μ12 d⃗u + πN0(N0 -1) log 1 ρ. For z ∈I, we may apply Proposition 4.2 on Σz with N = N0 to find that there exists points a1, . . . , ak and integers d1, . . . , dk such that, denoting n = |d1 + · · · + dk| and D = |d1|+· · ·+|dk|, we have D ≤C and f(z) ≥π X i |di| p g33(ai) log ρz ε -C′ + πN 2 0 log δ ρz -C′ log | log ε| (ρz + (D -n)) . Moreover (4.9) 2π k X i=1 diδai -μ12 d⃗u F(Σz) ≤C′| log ε|-q, so that, for z ∈I, Z Σz χ √g33 μ12 d⃗u -2π X i diχ(ai) p g33(ai) ≤C| log ε|-q. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 43 It follows that f(z) ≥1 2 Z Σz χ √g33 μ12 d⃗u log ρz ε + π log ρz ε X i (|di| -diχ(ai)) p g33(ai)+ + πN0 2 log 1 ρz -C log | log ε| (D -n + ρz) -C. If z is such that D > n, then the error terms on the right-hand side may be absorbed in the second term, noting that π log(ρz/ε) p g33(ai) > | log ε|/C . If not, then the error term may be written C(ρz log | log ε| + 1). Therefore, integrating with respect to z ∈I we find (4.10) Z z∈I f(z) dz ≥ Z z∈I Z Σz 1 2 log ρz ε χ √g33 μ12 d⃗u+ +πN0 2 log 1 ρz -C (ρz log | log ε| + 1) dz. Using the fact that ρz bounds the flat distance between μ12 d⃗u and 2πN0δ0, we have log ρz Z Σz χ √g33 μ12 d⃗u ≥2πN0 log ρz -Cρz| log ρz|, and ρz being bounded, since D ≤n, the error term above is bounded by a constant. Replacing in (4.10), we obtain (4.11) Z z∈I f(z) dz ≥ Z z∈I | log ε| 2 Z Σz χ √g33 μ12 d⃗u +π(N0 2 -N0) log 1 ρz -C (ρz log | log ε| + 1) dz. Let us finally consider z ∈J. Then f(z) ≥M| log ε|, for some large constant M. We first exclude the case where Z Σz χ √g33 μ12 0, C > 0 and integer N, there exist ε0, κ, C′ > 0 such that for any z, any (u, A), and any ε 0 there exists ε0, C > 0 such that if ε β, and a radius ρ > 0 such that εμ ≤ρ ≤ε ̄μ 0 independent of ε, (4.21) Z Bi e⊥ ε (uη, 0)dvolg⊥≥max p(ai)π|di| log ρ ε + O(1), c| log ε| ; (vi) and for any 0 0 such that (4.22) 2π X i∈I diδai -d(iu, du) F(Σz) ≤εκ We omit the proof which follows closely the lines of [AB98] or [Ser99a,Ser99c], except with balls replaced by metric balls, and with the weight. The relation (4.22) at the level of uη is a direct consequence of the Jacobian estimates, see for instance [SS07, Chapter 6], it is also true for u by the a priori bound on the energy minimized by uη, see [Ser99a, Lemma 4.2]. Using again the shortcut p for √g33, we now have the following result obtained by growing the balls from the prior lemma by a two-stage ball growth process. Lemma 4.4. Under the assumptions of Proposition 4.3, we have di = 1 for all i ∈I and #I = N. Also all the ai's, i ∈I, belong to Bg⊥(0, Cρz) ⊂Bg⊥(0, C| log ε|-1/2). Moreover, either (i) the balls Bg⊥(ai, r) with r = | log ε|-8 are such that |ai -aj| ≫r for all i ̸= j ∈I and for all i ∈I, Z Bg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥≥πp(ai) log r ρ + o(1) or (ii) there exist a family of disjoint geodesic balls ̄Bk = Bg⊥(bk, rk), containing the Bi's, of total radius ̄r = Nrk = 1 | log ε|2 such that, letting ̄dk = deg(uη, ∂ ̄Bk), we have X k Z B′′ k e⊥ ε (uη, 0)dvolg⊥≥π X k (min ̄Bk p)| ̄dk| log ̄r ε + π log | log ε|. Proof. First we prove that di ≥0 for all i ∈I. Since √g33 ≥1 -o(1) in the support of χ, and choosing the constant μ close enough to 1, we get in view of (4.21) combined with (4.15) that P i∈I |di| 0, than announced, which again yields a contradiction with (4.15) if m is chosen small enough. Thus, we have shown that di = 1 for all i ∈I. Next, let us first consider the case where for some k, one B′ j ⊂B′′ k contains more than one Bi. Since all the di = 1, this implies d′ j ≥2. Then, since all degrees are nonnegative, the ball VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 49 growth procedure from the B′ j's to the B′′ k's yields really an energy π min B′′ k p X j,B′ j⊂B′′ k |d′ j|2 log r′′ r′ -C , so at least an extra π log | log ε| energy compared to what was announced. Summing over all balls, we may thus deduce in this case that X k Z B′′ k e⊥ ε (uη, 0)dvolg⊥≥π X k min B′′ k p |d′′ k| log r′′ ε + π log | log ε|. The proof is concluded via item (ii) with the ̄Bk's equal to the B′′ k's. There remains to consider the case where none of the B′ j's contain more than one Bi, which means that d′ j = 1 and that there are no mergings in the growth from Bi to B′ j: each B′ j contains a unique Bi, which is simply inflated. Let us then consider new B′ i's equal to the geodesic balls of centers equal to the ai and radii r = 1 | log ε|8, i.e. we restart the ball construction from the Bi. Since there were no mergings previously, it means that these balls B′ i are disjoint, and separated by at least | log ε| times their radii. Moreover, the energy over the annulus B′ j is bounded below by πp(ai) log r ε -o(1) (the error o(1) is due to the variation in p, and to 1 -|uη|, which is very small by (4.19)). The proof is then concluded in this case as well. □ Proof of Proposition 4.3. Now that we know that all di = 1, we may also refine the upper bound (4.15) into Z Σz e⊥ ε (u, A)dvolg⊥≤π(N + o(1))| log ε| because otherwise what we want to prove is true. Thanks to that, an examination of the proof in [AB98], [BS99, Appendix] shows that we may refine (4.20) into Z ∂Bi e⊥ ε (uη, 0)dvolg⊥≤πp(ai) + o(1) ρ and thanks to this bound we have Z ∂Bi e⊥ ε (uη, 0)dvolg⊥≥πp(ai) log ρ ε + γ + o(1). This is an adaptation from the analysis of [BBH94], with metric, and here γ is the constant of [BBH94]. Combining all these results, we have obtained either a collection of N balls Bg⊥(ai, r), r = | log ε|-8 such that |ai -aj| ≫r and for each i, Z Bg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥≥πp(ai) log r ε + γ + o(1) or a collection of balls Bg⊥(bk, rk) with rk ≤| log ε|-2 and (4.25) Z ∪kBg⊥(bk,rk) e⊥ ε (uη, 0)dvolg⊥≥π X i p(ai) log 1 | log ε|2ε + π log | log ε|. 50 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Let us call the two cases Case 1 and Case 2. Let l= C| log ε|-1/2 with C ≥2 be such that all the balls Bi are in Bg⊥(0, l/2). For any r ∈(l, δ), in view of (4.19), the fact that di = 1 and #I = N, we have that deg(uη, ∂Bg⊥(0, r)) = N. In Case 2, we may grow the balls Bg⊥(bk, rk) further to reach a final total radius | log ε|-1/2, and still all the balls will be included in B(0, l). We retrieve this way an extra energy (4.26) Z B(0,l)\∪kBg⊥(bk,rk) e⊥ ε (uη, 0)dvolg⊥≥π X i∈I p(ai) log(l| log ε|2) + o(log | log ε|). Integrating then over circles of radius r ∈(l, δ), for instance as in Step 2 of the proof of Proposition 4.2, and using p ≥1 -o(1), we also obtain (4.27) Z B(0,δ) (0,l) e⊥ ε (uη, 0)dvolg⊥≥π(1 -o(1))N 2 log δ l. Adding (4.25), (4.26) and (4.27), we obtain that Z ∪kBg⊥(bk,rk) e⊥ ε (uη, 0)dvolg⊥≥π X i∈I p(ai) log 1 ε + πN 2 log δ + πN(N -1) log 1 l+ π 2 log | log ε|, which implies the desired inequality. Let us now turn to Case 1. We consider the energy in B(0, | log ε|-1/8)\ ∪i B(ai, r). In this punctured domain, p = 1 + O(| log ε|-1/8), so we may use this as a bound from below and get Z Bg⊥(0,| log ε|-1/8)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥(1 -O(| log ε|-1/8) Z Bg⊥(0,| log ε|-1/8)\∪iBg⊥(ai,r) 1 2 |duη|2 g⊥+ 1 2ε2(1 -|uη|2)2 dvolg⊥ To bound from below the right-hand side we may use the Bethuel-Brezis-H ́elein theory with metric g, for instance as written down in [IJ21, Section 2.2]. This yields Z Bg⊥(0,| log ε|-1/8)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥(1 -O(| log ε|-1/8) πN log 1 r + Wg⊥(ai, . . . , aN) + o(1) where Wg⊥(a1, . . . , aN) = -π X i̸=j log distg⊥(ai, aj) + π X i,j R(ai, aj) and R(x, y) = 2πG(x, y) + log distg⊥(x, y), VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 51 where G is the Green function, solution of (see for instance [Aub98, Chapter 2]) ( -∆g⊥G(x, y) = δy in Bg⊥(0, | log ε|-1/8) G(x, y) = 0 for x ∈∂Bg⊥(0, | log ε|-1/8). Since we know that all ai ∈Bg⊥(0, C| log ε|-1 2) we have that R(ai, aj) ∼R(0, 0) as ε →0. Moreover, R(0, 0) is easily computed to be log | log ε|-1/8. We thus have found Z Bg⊥(0,| log ε|-1/8)\∪iBg⊥(ai,r) e⊥ ε (uη, 0)dvolg⊥ ≥πN log 1 r + π X i̸=j log distg⊥(ai, aj) + πN 2 log | log ε|-1/8 + o(1). Finally, we bound from below the energy over Bg⊥(0, δ) ⊥(0, | log ε|-1/8). As in [Ser99b, Lemma A.1], the co-area formula and the energy upper bound yield the existence of t ∈ h 1 - 3 | log ε|2, 1 - 2 | log ε|2 i such that H1(|uη(x)| = t) ≤Cε| log ε|3. Since |uη| ≥1 - 2 | log ε|2 on ∂Bi, this implies that the set S of r's such that {|uη| 0, assume hex = H0 c1 + K log | log ε| with K bounded independently of ε, and let (u, A) be a minimizer of GLε and (u, A) = (e-ihexφ0u, A -hexA0). 52 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Then for any sequence {ε} tending to 0, there exists a subsequence such that μ(u, A)/2π is well approximated by a sum of N simple curves Γ1, . . . , ΓN, where N is independent of ε in the sense that, for an arbitrarily small δ > 0 in the tube coordinates definition, (5.1) X i Γi -μ(u, A) ∗ ≤C (log | log ε|)2 | log ε|4/3 , X i Γi -μ(u, A) F(Ωε) ≤C (log | log ε|)2 | log ε|4/3 , where (5.2) Ωε = x ∈Ω| dist(x, ∂Ω) > | log ε|-2 . Moreover, the curves Γi converge as ε →0 to Γ0 uniformly, and writing Γi(t) = Γ0(zi(t)) + ⃗ui(t) in tube coordinates as piecewise graphs over Γ0, then the rescaled curves eΓi(t) = Γ0(zi(t)) + r hex N ⃗ui(t) converge as ε →0 in ∥· ∥∗norm to Γ∗ i (z) = Γ0(z) + ⃗u∗ i (z) and (5.3) GLε(u, A) ≥h2 exJ0 + π 2 L0N(N -1) log hex -2πK R0 L0N log | log ε| -π 2 L0N(N -1) log N + WN(Γ∗ 1, . . . , Γ∗ N) + πL0Nγ + N 2CΩ+ oε(1) + Cδoε(1) + oδ(1), where WN is as in (1.9). Finally, if N > 1, then max 1≤i≤N ∥⃗u∗ i ∥L∞> 0. The proof of this proposition involves several steps, the first goal being to compute a lower bound for GLε(u, A) up to O(1) in terms of a suitable vortex filament approximation of the vorticity measure μ(u, A), which then allows to determine the typical distance from the filaments to Γ0, and then improve the lower-bound to o(1) precision. The first step is to choose the scale lof the horizontal blow-up in a way such that the vorticity remains concentrated near Γ0 at this scale (Step 1), which in turn implies (Step 2) that we may bound from below F ⊥ ε (u, A) in terms of the vorticity in the l-tube around Γ0. In Step 3 we construct vortex filaments for the tube blown-up at scale lhorizontally, and show that the distance of the vortex filaments to Γ0 is in fact much smaller than l, which allows to apply Proposition 3.4 to bound from below in a sufficiently precise way Fε(u, A). The final step uses the resulting lower bound of GLε(u, A) and, combining with the matching upper bound, draws the consequences stated above. Proof. Throughout the proof we write μ instead of μ(u, A). We start with a preliminary claim, that there exists C > 0 such that for any curve Γ with no boundary in Ωand any vector field X, we have (5.4) | ⟨X, Γ⟩| ≤C|Γ|2∥curl X∥L∞. Indeed, given Γ, there exists a surface Σ such that Γ = ∂Σ∩Ωand such that |Σ| ≤C|Γ|2. Then ⟨X, Γ⟩= Z Σ curl X, VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 53 from which the claim follows. Step 1. μ is l-concentrated near Γ0. From the nondegeneracy hypothesis and Proposition 3.5, Condition 2.1 is satisfied with P = 2. Then, from Theorem B applied for instance with α = 3/10, we know that for any ε > 0 there exists Lipschitz curves Γ1, . . . , Γk, with k bounded independently of ε, and a normal current eΓ without boundary in Ωsuch that for 1 ≤i ≤k we have, (5.5) ∥Γi -Γ0∥∗≤ C | log ε|1/7, ||Γi| -L0| ≤ C | log ε|1/7 and such that (5.6) |eΓ| ≤ C | log ε|2/3. Moreover, (5.7) μ -2π N X i=1 Γi -2πeΓ ∗ , μ -2π N X i=1 Γi -2πeΓ F(Ωε) ≤C| log ε|-2, where, since the number k is bounded independently of ε, we have assumed it is equal to some fixed integer N by going to a subsequence, and where Ωε is defined in (5.2). (Note that the log | log ε| factor in Theorem B, (3) has been absorbed by using a different power for | log ε| to obtain (5.6)). In particular, as a consequence of (5.6) and (5.4) we have (5.8) ∥eΓ∥∗≤ C | log ε|4/3, ∥eΓ∥F(Ω) ≤ C | log ε|4/3. From now on, we let (5.9) l:= 1 (log | log ε|)2. Note that the power 2 in (5.9), in (5.7), and in (5.2) is arbitrary, it could be any large number. We consider coordinates in a neighborhood of Γ0 as in Proposition 2.2, the coordinate domain being Cδ. For carrying out the horizontal blow-up procedure, we need to work in a smaller neighbourhood of Γ0. For convenience we use a cylinder in tube-coordinates. Let Cr := B(0, r) × (0, L0). We let χlbe a cutoff function for the cylinder Cl: χlis equal to 1 on Cl/2 and equal to 0 outside Cl, its gradient is bounded by C/l. Then, from (5.5) and Lemma 3.5, every Γi is included in a tubular neighborhood of Γ0 with radius C| log ε|-1/14, hence χlΓi = Γi. Thus, in view of (5.7), we find that (5.10) ∥(1 -χl)μ∥∗, ∥(1 -χl)μ∥F(Ωε) ≤C (log | log ε|)2 | log ε|4/3 54 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY and that the same bounds hold for (χδ -χl)μ, with a constant depending on δ. Step 2. Lower bound for F ⊥ ε (u, A). Inserting into the splitting formula (2.2) the definition (1.7), the fact that hex = H0 c1 + K log | log ε| = | log ε| 2 R(Γ0) + K log | log ε|, and the minimality of (u, A) which implies that h2 exJ0 ≥GLε(u, A), we find (5.11) oε(1) ≥Fε(u, A) - | log ε| 2 R(Γ0) + K log | log ε| ⟨B0, μ⟩. But, again using (5.5)-(5.7) and the definition (1.6), we have ⟨B0, μ⟩= 2πL0N R(Γ0) + O(| log ε|-1/7), which together with (5.11) implies that (5.12) Fε(u, A) ≤πL0N| log ε| + O(| log ε|6/7). It also follows directly from (5.5)-(5.7) that (5.13) ∥μ -2πNΓ0∥∗= O(| log ε|-1/7), but to apply Proposition 4.1 on Cδ, where Cδ is defined in Proposition 2.2, we need to check instead that the flat distance between μ and 2πNΓ0 tends to 0 with ε, which we can prove is true in Ωε but not in Ω. From (5.5) and Lemma 3.5 we find that each Γi is included in a tubular neighborhood of Γ0 with radius C| log ε|-1/14, and that, in tube coordinates, its endpoints have vertical coordinate 0 and L0, respectively. Then, Lemma 3.4 implies that ∥Γi -Γ0∥F(Ω) ≤C| log ε|-1 14. Hence, combining with (5.8) and (5.7), we find that (5.14) ρ := max ∥μ -2πNΓ0∥F(Ωε), | log ε|-1 = O(| log ε|-1 14). It follows from (5.14) and (5.12) that we may apply Proposition 4.1 in a subdomain Cδ ε of Cδ obtained by stripping layers of height | log ε|-2 at the top and bottom. We find that F ⊥ ε (u, A) ≥| log ε| 2 Z Cδε χδ √g33μ12 + πL0N(N -1) log 1 ρ -C(1 + ρ log | log ε|). But, integrating by parts on each slice, by definition of μ, we have Z Cδ δε χδ √g33 μ12 = Z Cδ δε d(χδ √g33) ∧j(u, A) + χδ √g33 dA ≤C Z Cδ δε eε(u, A) 1/2 |Cδ \ Cδ ε|1/2 = oε(1), so that, using also (5.10), we conclude that (5.15) F ⊥ ε (u, A) ≥| log ε| 2 ⟨χlμ⟩2D + πL0N(N -1) log 1 ρ -C(1 + ρ log | log ε|). VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 55 Step 3. Lower bound for Fε(u, A)-hex ⟨B0, μ⟩. We apply one more time the curve construction of Theorem A, this time on the cylinder Clequipped with the metric ̃g defined as above by ̃gij = l-2gij if 1 ≤i, j ≤2 and ̃gij = gij otherwise. We find that there exists a polyhedral 1-current ν with no boundary relative to Clsuch that (5.16) ̃Fε(u, A) ≥1 2(| log ε| -C log | log ε|)|ν| ̃g -oε(1), ∥μ -ν∥∗,l≤ C | log ε|q , where the ∥·∥∗,ldenotes the norm in the dual space C0,1 T (Cl) ∗and q may be chosen arbitrarily large. Here ̃Fε(u, A) is defined in (2.11), the integral could in fact be taken over Clbut we will not use this fact. We also have (5.17) ∥μ -ν∥F(Cl,ε) ≤ C | log ε|q , where Cl,ε is the set of points in Clat distance at least | log ε|-2 from the boundary. Note that, even though we cannot directly apply Theorem A to the functional ̃Fε(u, A), since it involves a non-Euclidean metric, a straightforward modification of the proof in [Rom19b] reveals that it holds in this case as well. Indeed the proof involves summing lower-bounds on an appropriate grid of cubes of side-length an arbitrarily large negative power of | log ε|. In our case, we can approximate the metric by a constant metric in each cube, which will thus be Euclidean after a linear change of coordinate. We can then obtain the desired energy lower bound and Jacobian estimate in each cube, the errors due to the non constant metric will be an arbitrarily large negative power of | log ε|. Note also that the lower bound really involves ̃ε = ε/l, but this only introduces an error of order | log l| which is absorbed in the term C log | log ε|. Also, ∥· ∥∗,lshould be understood relative to the metric ̃g, but it differs from the Euclidean version by at most a factor Cl2 which does not alter the above bound considering that q is arbitrary anyway. It follows from (5.16) and (5.13), that (5.18) ∥ν -2πNΓ0∥∗,l≤ C | log ε|1/7. In particular, using (5.4) we have that (5.19) |ν| ̃g ≥1 C . Now we recall from (2.12) the relation l2 ̃Fε(u, A) ≤l2F ⊥ ε (u, A) + F z ε (u, A). 56 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Therefore, multiplying (5.15) by (1 -l2), using the choice of (5.9) and adding l2 times (5.16), we obtain that (5.20) Fε(u, A) ≥1 2| log ε| (1 -l2) ⟨χlμ⟩2D + l2|ν| ̃g + π(1 -l2)L0N(N -1) log 1 ρ + O 1 + l2 log | log ε||ν ̃g| , where we have used the fact that ρ log | log ε| = oε(1), in view of (5.14). Moreover, from (5.10), (5.13) we have in view of (3.4) that ⟨χlμ⟩2D = χl ∂ ∂z, 2πNΓ0 + O(| log ε|-1/7) = 2πNL0 + O(| log ε|-1/7). Inserting this into (5.20) and comparing with (5.12), we find that (5.21) |ν| ̃g -2πL0N ≤O(l-2| log ε|-1/7) = O(| log ε|-1/8). We then let, as in the proof of Lemma 3.5, X = χl ∂z √g33 . Note that ̃g33 = g33. Then from (5.18), (5.19), and (5.21), we have (5.22) X, Γ0 L0 - ν |ν| ̃g = X, 2πNΓ0 -ν |ν| ̃g + ⟨X, 2πNΓ0⟩ 1 2πNL0 - 1 |ν| ̃g ≤O(| log ε|-1/8). Here we have used the fact that |Γ0| ̃g = L0, that ⟨X, Γ0⟩= L0 and that the left-hand side is positive: indeed, since ∥X∥∞≤1 and since X restricted to Γ0 is precisely the unit tangent vector, it holds that X, ν |ν| ̃g ≤1 = X, Γ0 L0 . Next, we decompose ν as a sum of simple curves {νi}i∈I, so that X, Γ0 L0 - ν |ν| ̃g = X i∈I αi 1 - X, νi |νi| ̃g := X i∈I αi∆i, αi = |νi| ̃g |ν| ̃g . The ∆i's are nonnegative. We let Igood denote those indices for which ∆i 0. Step 4. Convergence of blown-up curves. We write Γi as a piecewise graph over Γ0 as above, letting Γi(t) = Γ0(zi(t)) + ⃗ui(t). From (5.24) and Lemma 3.5 we have that ∥⃗ui∥L∞≪lfor every i. Thus, Proposition 3.4 implies that (5.29) Ql(Γi) ≥c∥⃗ui∥2 L∞. It also follows from (5.10), (5.17), Lemma 3.4 applied to each of the curves Γ1,. . . ,ΓN with eΓ = Γ0, and (5.4) applied to estimate ∥νbad∥F(Cl,ε) that (5.30) ρ ≤C X i ∥⃗ui∥L∞+ |νbad|2 ̃g ! + O((log | log ε|)2| log ε|-4/3) + | log ε|-1. On the other hand, by minimality of (u, A), we deduce from the upper bound of Theorem 6.1 and (2.2) that (5.31) Fε(u, A) -hex ⟨B0, μ⟩≤π 2 L0N(N -1) log | log ε| -2πK ⟨B0, Γ0⟩N log | log ε| + O(1). Let Y = | log ε| N X i=1 Ql(Γi) + l2|νbad| ̃g ! . From (5.29), (5.30), and the fact that we have l2 ≥|νbad|3 ̃g in view of (5.25), we get (5.32) ρ2| log ε| ≤CY + oε(1). Combining (5.28) and (5.31), in view of (5.32), we find (5.33) π 2 L0N(N -1) log 1 CY + oε(1) + cY ≤πL0N(N -1) log 1 ρ p | log ε| + cY ≤O(1). It follows that Y = O(1), ρ ≤C| log ε|-1/2, and then that for every 1 ≤i ≤N, in view of (5.29), (5.34) ∥⃗ui∥L∞≤ C p | log ε| , Ql(Γi) ≤ C | log ε|. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 59 It also follows that (5.35) |νbad| ̃g ≤C (log | log ε|)4 | log ε| . If N > 1 then (5.33) implies in addition that ρ is bounded below by c| log ε|-1/2 and thus, in view of (5.35), from (5.30) we deduce that (5.36) max i ∥⃗ui∥L∞≥ c p | log ε| . Recalling (5.23), the vorticity estimates in (5.1) follow from (5.10), the vorticity estimate in (5.16), (5.17), and (5.4) together with (5.35) and (5.23). We denote eΓi the curve Γi blown up horizontally by a factor q hex N = q | log ε| 2 R0 N + oε(1), so that eΓi(t) = Γ0(zi(t))+ q hex N ⃗ui(t). From (5.34) and Proposition 3.4, there exists a subsequence {ε} tending to zero such that eΓi converges, for any i = 1, . . . , N, as ε →0, uniformly and as currents, to an H1-graph Γ∗ i (z) = Γ0(z) + ⃗u∗ i (z). Notice that if q hex N ∥⃗ui∥∞→0, then eΓi converges to Γ0. Moreover, from (5.36), if N > 1, then max 1≤i≤N ∥⃗u∗ i ∥L∞> 0. Step 5. Improved lower bound. We return to bounding from below Fε(u, A, Uδ) using (2.13). As above we denote by ν the polyhedral 1-current obtained by applying Theorem B on the cylinder Clequipped with the metric ̃g. It decomposes as νbad + νgood, where νgood = 2π PN i=1 Γi. From (5.16), (5.23), and (5.35) we find (5.37) ̃Fε(u, A) ≥π N X i=1 |Γi| ̃g| log ε| + O(log | log ε|). Also, we may slice ν (resp. νgood) by the coordinate function z defined on Uδ. This provides a family of 0-currents {νz}z (resp. {(νgood)z}z), where z belongs to a set of full measure in (0, L0). Both νz and (νgood)z are sums of Dirac masses with weights belonging to 2πZ for almost every z. From (5.17), we know that ∥χl(μ -ν)∥F(Ωε) ≤Cl-1| log ε|-q. Moreover χlνgood = νgood since, from the previous step (see (5.34)), we know that each Γi is included in a tube of radius O(1/ p | log ε|) around Γ0. Thus, in view of (5.10), and using (5.4) together with (5.35) to estimate ∥νbad∥F(Ωε), we find that ∥μ -νgood∥F(Ωε) ≤C (log | log ε|)2 | log ε| 4 3 , and then that Z L0-| log ε|-2 z=| log ε|-2 ∥μz(u, A) -(νgood)z∥F(Σz)dz ≤∥μ(u, A) -νgood∥F(Ωε) ≤C (log | log ε|)2 | log ε| 4 3 . 60 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY For any ε > 0 we define Tε = n z ∈[| log ε|-2, L0 -| log ε|-2] | ∥μz -(νgood)z∥F(Σz) ≤| log ε|-7 6 o , T c ε = [0, L0] \ Tε. It follows from the above that (5.38) |T c ε | ≤C(log | log ε|)2| log ε|-1 6. Let z ∈Tε. We claim that, for any ε > 0 small enough (depending on z), we have (5.39) Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u -πN 2 log δ + πN(N -1) log r N hex ≥-π X i̸=j log |⃗u∗ i (z) -⃗u∗ j(z)|g• + Nγ + oε(1) + oδ(1), if ⃗u∗ i (z) ̸= ⃗u∗ j(z) for all i ̸= j, whereas if ⃗u∗ i (z) = ⃗u∗ j(z) for some i ̸= j, then lim inf ε→0 Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u + πN(N -1) log r N hex = +∞. To prove the claim, we consider three cases: Case 1. If R Σz e⊥ ε (u, A) dvolg⊥> | log ε|3, then, by integration by parts, we have Z Σz χl √g33μ12d⃗u = Z Σz d(χl √g33) ∧j(u, A) + χl √g33dA ≤Cl Z Σz e⊥ ε (u, A) dvolg⊥ 1 2 , and therefore Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u > | log ε|3 -C | log ε| 5 2 (log | log ε|)2, which implies the claim. Case 2. If π(N + m)| log ε| ≤ R Σz e⊥ ε (u, A) dvolg⊥≤| log ε|3 for some m > 0, we can apply Lemma 4.1, which provides the existence of points a1, . . . , ak and integers d1, . . . , dk such that (5.40) Z Σz e⊥ ε (u, A) dvolg⊥≥π X i |di| p g33(ai) (| log ε| -C log | log ε|) , and 2π X i diδai -μ12d⃗u F(Σz) ≤C| log ε|-3. In addition, from the definition of Tε, we deduce that 2π X i diδai -(νgood)z F(Σz) ≤C| log ε|-7 6. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 61 In particular, we have that (5.41) Z Σz χl √g33μ12d⃗u = Z Σz χl √g33(νgood)z + O(| log ε|-7 6) = 2πN + O(| log ε|-1 2), where in the last equality we used the fact that g33 is equal to 1 on the axis z = 0 and that all good curves are contained in a tubular neighborhood of radius O(| log ε|-1 2) around Γ0. Hence, combining (5.40) with (5.41), we find Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u ≥π X i p g33(ai)|di|| log ε| -πN| log ε| + O(| log ε| 1 2). If P i |di| > N, the claim immediately follows. On the other hand, if P i |di| ≤N, then by combining π(N + m)| log ε| ≤ R Σz e⊥ ε (u, A) dvolg⊥with (5.41), we find Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u ≥πm| log ε| + O(| log ε| 1 2). Once again, since m > 0, the claim follows. Case 3. If R Σz e⊥ ε (u, A) dvolg⊥ 0 can be chosen arbitrarily large. In addition, from the definition of Tε, we deduce that (5.44) 2π N X i=1 δai -(νgood)z F(Σz) ≤C| log ε|-7 6. From (5.44) we deduce that, for any i = 1, . . . , N, there exists a point, that we denote Γi,ε(z), belonging to Γi,ε, such that N X i=1 distg⊥(ai, Γi,ε(z)) ≤∥μz -(νgood)z∥F(Σz) ≤C| log ε|-7 6. From this and the fact that the blown up good curves ̃Γi,ε converge uniformly to Γ∗ i , letting ̃ai denote the point ai blown up horizontally by a factor q hex N , we deduce that ̃ai converges to ̃Γ∗ i (z) as ε →0. Finally, we note that (5.45) -π X i̸=j log distg⊥(ai -aj) = -π X i̸=j log | ̃ai - ̃aj|g• -πN(N -1) log r N hex + oε(1), and that, when passing to the limit ε →0 and using the lower semi-continuity of -log, we have lim inf ε→0 -π X i̸=j log | ̃ai - ̃aj|g• = -π X i̸=j log |⃗u∗ i (z) -⃗u∗ j(z)|g• if ⃗u∗ i (z) ̸= ⃗u∗ j(z) for all i ̸= j, whereas if ⃗u∗ i (z) = ⃗u∗ j(z) for some i ̸= j, then lim inf ε→0 -π X i̸=j log | ̃ai - ̃aj|g• = +∞. The claim (5.39) thus follows from combining this with (5.42), (5.45) and (5.43). We next integrate over z ∈[0, L0]. We claim that the contribution from T c ε is bounded below by oε(1), i.e. that (5.46) Z z∈[0,L0] Z Σz e⊥ ε (u, A) dvolg⊥-1 2 Z Σz χl √g33μ12d⃗u log 1 ε ≥ Z z∈Tε Z Σz e⊥ ε (u, A) dvolg⊥-1 2 Z Σz χl √g33μ12d⃗u log 1 ε + oε(1). Let z ∈(Tε)c. We consider two cases. First, if R Σz e⊥ ε (u, A) dvolg⊥> | log ε|3, then, arguing exactly as in Case 1., we find that (5.47) Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u > | log ε|3 -C | log ε| 5 2 (log | log ε|)2 ≥0. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 63 Second, if R Σz e⊥ ε (u, A) dvolg⊥≤| log ε|3, we can apply once again Lemma 4.1, which provides the existence of points a1, . . . , ak and integers d1, . . . , dk such that (5.48) Z Σz e⊥ ε (u, A) dvolg⊥≥π X i |di| p g33(ai) (| log ε| -C log | log ε|) and 2π X i diδai -μ12d⃗u F(Σz) ≤C| log ε|-3. In particular, we have that (5.49) Z Σz χl √g33μ12d⃗u = 2π X i diχl(ai) p g33(ai) + O(| log ε|-3). Combining (5.48) with (5.49), we find (5.50) Z Σz e⊥ ε (u, A) dvolg⊥-| log ε| 2 Z Σz χl √g33μ12d⃗u ≥π X i p g33(ai) (|di| -diχ(ai)) + O(log | log ε|) ≥O(log | log ε|). Hence, from (5.47), (5.50), and (5.38), we deduce that Z z∈(Tε)c Z Σz e⊥ ε (u, A) dvolg⊥-1 2 Z (Tε)c Z Σz χl √g33μ12d⃗u log 1 ε ≥|(Tε)c|O(log | log ε|) = oε(1), which proves (5.46). Thus, by definition (3.4), in view of (5.10), and the estimate (5.39), we have (5.51) Z z∈[0,L0] Z Σz e⊥ ε (u, A) dvolg⊥-1 2⟨μ2D⟩log 1 ε -πN 2L0 log δ + πN(N -1) log r N hex ≥ Z z∈Tε -π X i̸=j log |⃗u∗ i (z) -⃗u∗ j(z)|g• + Nγ ! dz + oδ(1) + oε(1) + Cδoε(1). Adding (5.51) times (1 -l2) to (5.37) times l2 we obtain, in view of (2.13) (5.52) Fε(u, A, Uδ) ≥π log 1 ε (1 -l2) N X i=1 ⟨Γi⟩2D + νbad 2D ! + l2 N X i=1 |Γi| ̃g ! + πN 2L0 log δ -π(1 -l2)L0N(N -1) log r N hex + Z z∈Tε -π X i̸=j log |⃗u∗ i (z) -⃗u∗ j(z)|g• + Nγ ! dz + Cδoε(1) + oδ(1) + oε(1), where we used again the fact that l2 log | log ε| = oε(1). 64 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY On the other hand, using (5.27), (5.35) and (5.4) again, we have hex ⟨B0, μ⟩= π| log ε| R(Γ0) N X i=1 ⟨B0, Γi -Γ0⟩+ πL0N| log ε| + 2πNK⟨B0, Γ0⟩log | log ε| + oε(1). Subtracting from (5.52) we find, as in (5.28) (5.53) Fε(u, A, Uδ) -hex ⟨B0, μ⟩≥π 2 (1 -l2)L0N(N -1) log hex N -2πKN ⟨B0, Γ0⟩log | log ε| + πL0N 2 log δ + π| log ε| N X i=1 Ql(Γi) + Z z∈Tε -π X i̸=j log |⃗u∗ i (z) -⃗u∗ j(z)|g• + Nγ ! dz + Cδoε(1) + oδ(1) + oε(1), Using the coercivity of Qlfrom Proposition 3.4 and the upper bound of Theorem 6.1, we deduce that Z z∈Tε -π X i̸=j log |⃗u∗ i (z) -⃗u∗ j(z)|g• + Nγ ! dz ≤C with C > 0 independent of ε (but depending on δ). We may extract a subsequence {εk}k such that P k |(Tεk)c| 0, assume hex = H0 c1 + K log | log ε| with K bounded independently of ε, and let N be an integer independent of ε. Define tube coordinates (x, z) ∈Cδ in a neighborhood of Γ0 using Proposition 2.2. Assume that for each ε, the curves Γ1,ε, . . . , ΓN,ε are defined in these coordinates by (6.1) Γi,ε(z) = Γ0(z) + r N hex ⃗ui(z), where ⃗ui : [0, L0] →R2 is smooth and independent of ε. Then, for any ε sufficiently small, there exists a configuration (uε, Aε) such that (6.2) GLε(uε, Aε) ≤h2 exJ0 + π 2 L0N(N -1) log hex -2πK R0 L0N log | log ε| -π 2 L0N(N -1) log N + WN(Γ1, . . . , ΓN) + γL0N + N 2CΩ+ oδ(1) + Cδoε(1) + oε(1), where Γi(z) = Γ0(z) + ⃗ui(z) and WN is defined in (1.9). The upper bound is computed using the velocity field given by the Biot-Savart law (see Definition 2.1) associated to a collection of N vortex filaments nearly parallel and close to Γ0, as ε →0. Outside of a fixed but small tube around Γ0, this velocity field will coincide up to a small error (as ε →0) with the field associated to Γ0 by Proposition 2.1. However we must estimate the energy from our construction in the tube, for which we need a simple enough approximation to our velocity field. The rest of this section is devoted to the proof of Theorem 6.1. 6.2. Definition of the test configuration. We let uε = eihexφ0uε and Aε = hexA0 + Aε, where (eihexφ0, hexA0) is the approximate Meissner state. In order to define (uε, Aε), we proceed as follows. First, we let rε := | log ε|-3. We then define ρi,ε(x) =    1 f0 rε ε f0 dist(x, Γi,ε) ε if dist(x, Γi,ε) ≤rε 1 otherwise. VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 67 where hereafter f0 denotes the modulus of the (unique nonconstant) degree-one radial vortex solution u0, see for instance [SS07, Proposition 3.11]. It is important to recall that, as R →∞, f0(R) →1 and (6.3) 1 2 Z R 0 |f0 ′|2 + f0 2 r2 + (1 -f0 2)2 2 rdr = 1 2π (π log R + γ + o(1)) , where γ > 0 is still the fixed constant from [BBH94]. We also define ρε(x) := min i∈{1,...,N} ρi,ε(x). On the other hand, we let φε be defined by the relation (6.4) ∇φε = N X i=1 XΓi,ε + ∇fΓi,ε, where XΓi,ε and fΓi,ε are defined by applying Proposition 2.11 with Γ = Γi,ε. Let us remark that since curl PN i=1 XΓi,ε = 2π Pn i=1 Γi,ε, if σ denotes a smooth, simple, and closed curve that does not intersect any of the curves Γi,ε, then, by Stokes' theorem, Z σ N X i=1 XΓi,ε + ∇fΓi,ε ! = 2πm, for some m ∈Z. This ensures that φε is a well-defined function modulo 2π. We finally let uε(x) := ρε(x)eiφε(x) and Aε(x) = N X i=1 AΓi,ε(x), where, once again, AΓi,ε is defined by applying Proposition 2.1 with Γ = Γi,ε. 6.3. The difference between the covariant gradient and the gradient is negligible in Tδ(Γ0). Hereafter, Tδ(Γ0) denotes the tube defined in Proposition 2.2. A straightforward computation shows that |∇Aεuε|2 -|∇uε|2 = ρ2 ε |Aε|2 -2∇φε · Aε . Let p 2 be such that 1 p + 1 q = 1. From H ̈older's inequality, we deduce that Z Tδ(Γ0) |∇Aεuε|2 -|∇uε|2 ≤∥Aε∥2 L2(Tδ(Γ0)) + C∥∇φε∥Lp(Tδ(Γ0))∥Aε∥Lq(Tδ(Γ0)). 1To be precise, fΓi,ε is defined in the proof of the proposition. 68 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Since Aε ∈W 2, 3 2(Ω), from Sobolev embedding it follows that Aε ∈Lr(Ω) for any r ≥1. Moreover, ∇φε ∈Lr(Ω) for any r 2. The constant C depends on q and blows up as q →2 and as q →∞. By fixing its value, we ensure that the RHS is oδ(1). 6.4. Energy estimate in small tubes around the curves. We first smoothly extend Γi,ε to a smooth simple closed curve in R3, which we denote ̃Γi,ε. The extension is supported in the complement of Ω, without intersecting its boundary. We have considerable freedom in choosing this extension, except that ̃Γi,ε must intersect ∂Ωtransversally, so that we can apply Proposition 2.1 with Γ = ̃Γi,ε. We then consider the tube ̃Trε(Γi,ε) of radius rε around ̃Γi,ε, and its restriction to Ω, that is, Trε(Γi,ε) := {x ∈Ω: dist(x, Γi,ε) 0, it follows that N X j=1 Z Aδrε(z) 1 |xz j,ε -x|2dx = O(log | log ε|), and therefore (6.34) 1 p | log ε| N X j=1 Z Aδrε(z) 1 |xz j,ε -x|2dx = oε(1). In addition, we have N X j=1 Z Aδrε(z) |x| |xz j,ε -x|2dx = N X j=1 Z Aδrε(z) 1 |xz j,ε -x|dx + Z Aδrε(z) |xz j,ε| |xz j,ε -x|2dx ! . Since |xz j,ε| = O 1 √ | log ε| , from (6.33) and (6.34), we find (6.35) N X j=1 Z Aδrε(z) |x| |xz j,ε -x|2dx = O(δ2) + oε(1). By using (6.32), (6.33), (6.34), and (6.35), from (6.31) it follows that (6.36) Z Σz∩T δrε N X j=1 YΓj,ε 2 √g33dvolg⊥≤ Z Aδrε(z) N X j=1 (xz j,ε -x)⊥ |xz j,ε -x|2 2 dx + O(δ2) + oε(1). Step 5. Renormalized energy. We claim that 1 2 Z Aδrε(z) N X j=1 (xz j,ε -x)⊥ |xz j,ε -x|2 2 dx = -π N X i,j=1 i̸=j log |xz i,ε -xz j,ε| + πN log 1 rε + πN 2 log δ + oδ(1) + Cδoε(1) + oε(1). (6.37) To prove this, we consider Φz ε(x) = - N X i=1 log |x -xz i,ε|, which satisfies (6.38) -∆Φz ε = 2π N X i=1 δxz i,ε in R2. Notice that ∇Φz ε(x) = - N X i=1 x -xz i,ε |x -xz i,ε|2. 76 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY In addition |∇Φz ε(x)| = |∇⊥Φz ε(x)| = N X i=1 (x -xz i,ε)⊥ |x -xz i,ε|2 . Therefore 1 2 Z Aδrε(z) N X j=1 (xz j,ε -x)⊥ |xz j,ε -x|2 2 dx = 1 2 Z Aδrε(z) |∇Φz ε(x)|2dx = -1 2 Z Aδrε(z) -∆Φz εΦz εdx + 1 2 Z ∂Aδrε(z) Φz ε ∂Φz ε ∂ν dS(x) = 1 2 Z ∂D(0,δ(1+oδ(1))) Φz ε ∂Φz ε ∂ν dS(x) -1 2 N X i=1 Z ∂D(xz i,ε,rε(1-oε(1))) Φz ε ∂Φz ε ∂ν dS(x), (6.39) where we used integration by parts and (6.38). To estimate the first integral in the RHS of (6.39), we observe that, for x ∈∂D (0, δ(1 + oδ(1))) , we have Φz ε(x) = - N X i=1 log |x -xz i,ε| |x| + log |x| = -N log |x| + O p | log ε| δ ! and ∂Φz ε(x) ∂ν = - N X i=1 x -xz i,ε |x -xz i,ε|2 · x |x| = - N X i=1 1 |x| + xz i,ε · x -|xz i,ε|2 |x||x -xz i,ε|2 = -N |x| + O p | log ε| δ2 ! . Hence (6.40) 1 2 Z ∂D(0,δ(1+oδ(1))) Φz ε ∂Φz ε ∂ν dS(x) = πN 2 log δ + oδ(1) + O log δ p | log ε| δ + | log ε| δ2 ! = πN 2 log δ + oδ(1) + Cδoε(1). We now proceed to estimate the second integral in the RHS of (6.39). Note that, for x ∈ ∂D xz i,ε, rε(1 -oε(1)) , we have Φz ε(x) = -log |x -xz i,ε| - N X j=1 j̸=i log |x -xz j,ε| |xz i,ε -xz j,ε| + log |xz i,ε -xz j,ε| = -log |x -xz i,ε| - N X j=1 j̸=i log |xz i,ε -xz j,ε| + O rε p | log ε| ! VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 77 and ∂Φz ε(x) ∂ν = - N X j=1 x -xz j,ε |x -xz j,ε|2 · x -xz i,ε |x -xz i,ε| = - 1 |x -xz i,ε|    1 + N X j=1 j̸=i (x -xz j,ε) · (x -xz i,ε) |x -xz j,ε|2    = - 1 |x -xz i,ε| 1 + O rε p | log ε| !! . We then obtain that 1 2 Z ∂D(xz i,ε,rε(1-oε(1))) Φz ε ∂Φz ε ∂ν dS(x) = π log rε + π N X j=1 j̸=i log |xz i,ε -xz j,ε| + O rε log rε p | log ε| ! + oε(1) and thus (6.41) -1 2 N X i=1 Z ∂D(xz i,ε,rε(1-oε(1))) Φz ε ∂Φz ε ∂ν dS(x) = -πN log rε -π N X i,j=1 i̸=j log |xz i,ε -xz j,ε| + oε(1). By inserting (6.40) and (6.41) into (6.39), we obtain the claim (6.37). On the other hand, since Γi,ε(z) = Π-1(xz i,ε) and Γj,ε(z) = Π-1(xz j,ε), for i ̸= j we have that distg⊥(Γi,ε(z), Γj,ε(z)) ≤ Z 1 0 ∥DΠ-1 xz i,ε + t(xz j,ε -xz i,ε) (xz j,ε -xz i,ε) ∥dt, which combined with (6.13) yields distg⊥(Γi,ε(z), Γj,ε(z)) ≤(1 + O(|xz i,ε| + |xz j,ε|))|xz j,ε -xz i,ε|. The concavity of the logarithm function then implies that -log |xz j,ε -xz i,ε| ≤-log distg⊥(Γi,ε(z), Γj,ε(z)) + O(|xz i,ε| + O |xz j,ε|) ≤-log distg⊥(Γi,ε(z), Γj,ε(z)) + O 1 p | log ε| ! . (6.42) 78 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY Finally, by combining (6.36) with (6.37) and (6.42), we find (6.43) 1 2 Z Aδrε(z) N X j=1 (xz j,ε -x)⊥ |xz j,ε -x|2 2 dx ≤-π N X i,j=1 i̸=j log distg⊥(Γi,ε(z), Γj,ε(z)) + πN log 1 rε + πN 2 log δ + oδ(1) + Cδoε(1) + oε(1). Step 6. Conclusion. By integrating from z = 0 to z = L0 (6.36) and (6.43), and combining with (6.12), we find (6.44) Eε(uε, T δ rε) ≤-π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + πNL0 log 1 rε + πN 2L0 log δ + oδ(1) + Cδoε(1) + oε(1). 6.6. Energy estimate far from Γ0. We now work in Ω\ Tδ(Γ0). In this region ρε ≡1 and therefore, using (6.4), we find |∇Aεuε| = |∇φε -Aε| = N X i=1 XΓi,ε + ∇fΓi,ε -AΓi,ε = N X i=1 jΓi,ε , where in the last equality we used once again Proposition 2.1. Hence Iδ := 1 2 Z Ω δ(Γ0) |∇Aεuε|2 + 1 2ε2(1 -ρ2 ε)2 + 1 2 Z R3 | curl Aε|2 = 1 2 Z Ω δ(Γ0) N X i=1 jΓi,ε 2 + 1 2 Z R3 N X i=1 curl AΓi,ε 2 . From (2.5) and the explicit formula (2.3), which yields a control on ∥XΓi,ε -XΓ0∥L2(Ω δ(Γ0)), we deduce that, for any i = 1, . . . , N, jΓi,ε -jΓ0 2 L2(Ω δ(Γ0)) + AΓi,ε -AΓ0 2 H1(R3) ≤Cδoε(1), which combined with the previous equality yields Iδ = N 2 1 2 Z Ω δ(Γ0) |jΓ0|2 + 1 2 Z R3 |curl AΓ0|2 + Cδoε(1). Recalling (2.8), we obtain (6.45) Iδ = N 2CΩ-πN 2L0 log δ + oδ(1) + Cδoε(1). VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 79 6.7. Vorticity estimate. Let B ∈C0,1 T (Ω). Observe that, by integration by parts, we have (6.46) Z Ω μ(uε, Aε) · B = Z Ω curl(j(uε, Aε) + Aε) · B = Z Ω (j(uε, Aε) + Aε) · curl B = Z Ω ρ2 ε∇φε · curl B + Z Ω (1 -ρ2 ε)Aε · curl B = Z Ω ∇φε · curl B + Z Ω (1 -ρ2 ε)(Aε -∇φε) · curl B. Let q 2 be such that 1 p + 1 q = 1, by H ̈older's inequality we have Z Ω (1 -ρ2 ε)(Aε -∇φε) · curl B ≤∥curl B∥L∞(Ω)∥Aε -∇φε∥Lq(Ω)∥(1 -ρ2 ε)∥Lp(Ω) ≤C∥curl B∥L∞(Ω)∥(1 -ρ2 ε)∥ 2 p L2(Ω) ≤C∥curl B∥L∞(Ω)ε 2 pEε(|uε|) ≤C∥curl B∥L∞(Ω)ε 2 p| log ε|, (6.47) where we used that 1 -ρ2 ε ∈[0, 1], ρε = 1 in Ω\ ∪N i=1Trε(Γi,ε), and (6.6). On the other hand, using (6.4), we have (6.48) Z Ω ∇φε · curl B = N X i=1 Z Ω XΓi,ε + ∇fΓi,ε · curl B = N X i=1 Z Ω curl XΓi,ε · B, where the last equality follows from integration by parts. Finally, since curl XΓi,ε = 2πΓi,ε, from (6.46), (6.47), and (6.48), we deduce that (6.49) μ(uε, Aε) -2π N X i=1 Γi,ε (C0,1 T (Ω))∗ ≤Cε 2 p| log ε|, for any p > 2, where C is a constant that depends on p and blows up as p →2. We observe that the same estimate holds for the flat norm. 6.8. Putting everything together. By putting together (6.5), (6.6), (6.44), and (6.45), we obtain Fε(uε, Aε) ≤ N X i=1 π|Γi,ε| log rε ε + γ|Γi,ε| -π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + πNL0 log 1 rε + πN 2L0 log δ + N 2CΩ-πN 2L0 log δ + oδ(1) + Cδoε(1) + oε(1). 80 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY From the computations in the proof of Lemma 3.2, we have |Γi,ε| = L0 + O 1 | log ε| , which yields (recall that rε = | log ε|-3) π N X i=1 |Γi,ε| log rε + πNL0 log 1 rε = oε(1) and that γ|Γi,ε| = γL0 + oε(1). Hence (6.50) Fε(uε, Aε) ≤π| log ε| N X i=1 |Γi,ε| -π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + γNL0 + N 2CΩ+ oδ(1) + Cδoε(1) + oε(1). Recall that uε = eihexφ0uε and Aε = Aε + hexA0, where (eihexφ0, hexA0) is the approximate Meissner state. By inserting (6.50) and (6.49) in the splitting formula (2.2), we find GLε(uε, Aε) ≤h2 exJ0 + π| log ε| N X i=1 |Γi,ε| -π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + γNL0 + N 2CΩ-2π N X i=1 hex ⟨B0, Γi,ε⟩+ oδ(1) + Cδoε(1) + oε(1). Since hex = | log ε| 2 R(Γ0) + K log | log ε|, for some K ≥0, we have that 2π N X i=1 hex ⟨B0, Γi,ε⟩= π| log ε| N X i=1 |Γi,ε|R(Γi,ε) R(Γ0) + 2πK log | log ε| ⟨B0, Γi,ε⟩ = π| log ε| N X i=1 |Γi,ε|R(Γi,ε) R(Γ0) + 2πKN log | log ε| ⟨B0, Γ0⟩, where in the last equality we used the fact that ⟨B0, Γi,ε⟩= ⟨B0, Γ0⟩+ O 1 p | log ε| ! (which follows from Proposition 3.2). Hence GLε(uε, Aε) ≤h2 exJ0 + π| log ε| N X i=1 |Γi,ε| 1 -R(Γi,ε) R(Γ0) -π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz + γNL0 + N 2CΩ-2πKN log | log ε| ⟨B0, Γ0⟩+ oδ(1) + Cδoε(1) + oε(1). VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 81 Moreover, from Lemma 3.2, for any i = 1, . . . N we have R(Γi,ε) = R(Γ0) -1 2Q r N hex ⃗ui ! + O 1 | log ε| 3 2 ! = R(Γ0) -1 2 N hex Q (⃗ui) + O 1 | log ε| 3 2 ! , which combined with the fact that |Γi,ε| = L0 + oε(1), yields π| log ε| N X i=1 |Γi,ε| 1 -R(Γi,ε) R(Γ0) = πL0N N X i=1 Q(⃗ui) + oε(1). Finally, by observing that -π N X i,j=1 i̸=j Z L0 0 log distg⊥(Γi,ε(z), Γj,ε(z)) dz = -π N X i,j=1 i̸=j Z L0 0 log |⃗ui(z) -⃗uj(z)|g•dz + π 2 L0N(N -1) log hex N + oε(1), we obtain (6.2). The proof of Theorem 6.1 is thus concluded. □ References [AB98] L. Almeida and F. Bethuel, Topological methods for the Ginzburg-Landau equations, J. Math. Pures Appl. (9) 77 (1998), no. 1, 1-49. MR1617594 ↑6, 13, 14, 38, 46, 47, 49 [ABM06] S. Alama, L. Bronsard, and J. A. Montero, On the Ginzburg-Landau model of a superconducting ball in a uniform field, Ann. Inst. H. Poincar ́e Anal. Non Lin ́eaire 23 (2006), no. 2, 237-267. MR2201153 ↑4, 36 [ABO05] G. Alberti, S. Baldo, and G. Orlandi, Variational convergence for functionals of Ginzburg-Landau type, Indiana Univ. Math. J. 54 (2005), no. 5, 1411-1472. MR2177107 ↑6 [AD03] A. Aftalion and I. Danaila, Three-dimensional vortex configurations in a rotating bose-einstein condensate, Phys. Rev. A 68 (2003Aug), 023603. ↑11 [Aft06] A. Aftalion, Vortices in Bose-Einstein condensates, Progress in Nonlinear Differential Equations and their Applications, vol. 67, Birkh ̈auser Boston, Inc., Boston, MA, 2006. MR2228356 ↑4 [AJ03] A. Aftalion and R. L. Jerrard, Properties of a single vortex solution in a rotating Bose Einstein condensate, C. R. Math. Acad. Sci. Paris 336 (2003), no. 9, 713-718. MR1988308 ↑4, 11 [AKO07] S. V. Alekseenko, P. A. Kuibin, and V. L. Okulov, Theory of concentrated vortices, Springer, Berlin, 2007. An introduction, Translated from the 2003 Russian original. MR2398034 ↑15 [AR01] A. Aftalion and T. Riviere, Vortex energy and vortex bending for a rotating Bose-Einstein condensate, Phys. Rev. A 64 (2001), 043611. ↑2, 5, 11 [Aub98] T. Aubin, Some nonlinear problems in Riemannian geometry, Springer Monographs in Mathematics, Springer-Verlag, Berlin, 1998. MR1636569 ↑51 [BBH94] F. Bethuel, H. Brezis, and F. H ́elein, Ginzburg-Landau vortices, Progress in Nonlinear Differential Equations and their Applications, vol. 13, Birkh ̈auser Boston, Inc., Boston, MA, 1994. MR1269538 ↑4, 5, 10, 13, 14, 17, 38, 45, 46, 49, 67 82 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY [BBO01] F. Bethuel, H. Brezis, and G. Orlandi, Asymptotics for the Ginzburg-Landau equation in arbitrary dimensions, J. Funct. Anal. 186 (2001), no. 2, 432-520. MR1864830 ↑2, 5, 12 [BCS57] J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Theory of superconductivity, Phys. Rev. 108 (1957), 1175-1204. ↑3 [BJOS12] S. Baldo, R. L. Jerrard, G. Orlandi, and H. M. Soner, Convergence of Ginzburg-Landau functionals in three-dimensional superconductivity, Arch. Ration. Mech. Anal. 205 (2012), no. 3, 699-752. MR2960031 ↑4 [BJOS13] S. Baldo, R. L. Jerrard, G. Orlandi, and H. M. Soner, Vortex density models for superconductivity and superfluidity, Comm. Math. Phys. 318 (2013), no. 1, 131-171. MR3017066 ↑4 [BR95] F. Bethuel and T. Rivi`ere, Vortices for a variational problem related to superconductivity, Ann. Inst. H. Poincar ́e Anal. Non Lin ́eaire 12 (1995), no. 3, 243-303. MR1340265 ↑5 [Bra02] E. H. Brandt, Vortices in superconductors, Physica C: Superconductivity 369 (2002), no. 1, 10-20. ↑11 [BS99] F. Bethuel and J.-C. Saut, Travelling waves for the Gross-Pitaevskii equation. I, Ann. Inst. H. Poincar ́e Phys. Th ́eor. 70 (1999), no. 2, 147-238. MR1669387 ↑49 [CJ17] A. Contreras and R. L. Jerrard, Nearly parallel vortex filaments in the 3D Ginzburg-Landau equations, Geom. Funct. Anal. 27 (2017), no. 5, 1161-1230. MR3714719 ↑2, 5, 6, 11, 12, 28 [DDPMR22] J. D ́avila, M. Del Pino, M. Medina, and R. Rodiac, Interacting helical vortex filaments in the threedimensional Ginzburg-Landau equation, J. Eur. Math. Soc. (JEMS) (2022). Published online first. ↑5 [DG99] P. G. De Gennes, Superconductivity of Metals and Alloys, Advanced book classics, Perseus, Cambridge, MA, 1999. ↑2 [DS18] M. Duerinckx and S. Serfaty, Mean-field dynamics for Ginzburg-Landau vortices with pinning and forcing, Ann. PDE 4 (2018), no. 2, Paper No. 19, 172. MR3973142 ↑39 [DVR25] M. D ́ıaz-Vera and C. Rom ́an, First critical field in the pinned three dimensional Ginzburg-Landau model of superconductivity, 2025. Available at https://doi.org/10.48550/arXiv.2507.10915. ↑9 [Fed69] H. Federer, Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag New York Inc., New York, 1969. MR0257325 ↑44 [FHSS12] R. L. Frank, C. Hainzl, R. Seiringer, and J. P. Solovej, Microscopic derivation of Ginzburg-Landau theory, J. Amer. Math. Soc. 25 (2012), no. 3, 667-713. MR2904570 ↑3 [IJ21] R. Ignat and R. L. Jerrard, Renormalized energy between vortices in some Ginzburg-Landau models on 2-dimensional Riemannian manifolds, Arch. Ration. Mech. Anal. 239 (2021), no. 3, 1577-1666. MR4215198 ↑13, 39, 50 [Jer99] R. L. Jerrard, Lower bounds for generalized Ginzburg-Landau functionals, SIAM J. Math. Anal. 30 (1999), no. 4, 721-746. MR1684723 ↑5, 38 [JMS04] R. Jerrard, A. Montero, and P. Sternberg, Local minimizers of the Ginzburg-Landau energy with magnetic field in three dimensions, Comm. Math. Phys. 249 (2004), no. 3, 549-577. MR2084007 ↑4 [JS02] R. L. Jerrard and H. M. Soner, The Jacobian and the Ginzburg-Landau energy, Calc. Var. Partial Differential Equations 14 (2002), no. 2, 151-191. MR1890398 ↑5, 7 [Lie13] G. M. Lieberman, Oblique derivative problems for elliptic equations, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2013. MR3059278 ↑17 [Lon50] F. London, Superfluids, Structure of matter series, Wiley, New York, 1950. ↑37 [LR01] F.-H. Lin and T. Rivi`ere, A quantization property for static Ginzburg-Landau vortices, Comm. Pure Appl. Math. 54 (2001), no. 2, 206-228. MR1794353 ↑2, 5, 12 VORTEX LINES INTERACTION IN 3D GINZBURG-LANDAU 83 [LR99] F. Lin and T. Rivi`ere, Complex Ginzburg-Landau equations in high dimensions and codimension two area minimizing currents, J. Eur. Math. Soc. (JEMS) 1 (1999), no. 3, 237-311. MR1714735 ↑5, 12 [MSZ04] J. A. Montero, P. Sternberg, and W. P. Ziemer, Local minimizers with vortices in the GinzburgLandau system in three dimensions, Comm. Pure Appl. Math. 57 (2004), no. 1, 99-125. MR2007357 ↑6 [RASV+01] C. Raman, J. R. Abo-Shaeer, J. M. Vogels, K. Xu, and W. Ketterle, Vortex nucleation in a stirred bose-einstein condensate, Phys. Rev. Lett. 87 (2001Nov), 210402. ↑11 [RBD02] P. Rosenbusch, V. Bretin, and J. Dalibard, Dynamics of a single vortex line in a bose-einstein condensate, Phys. Rev. Lett. 89 (2002Oct), 200403. ↑11 [Riv95] T. Rivi`ere, Line vortices in the U(1)-Higgs model, ESAIM Contrˆole Optim. Calc. Var. 1 (1995/96), 77-167. MR1394302 ↑2, 5, 12 [Rom19a] C. Rom ́an, On the first critical field in the three dimensional Ginzburg-Landau model of superconductivity, Comm. Math. Phys. 367 (2019), no. 1, 317-349. MR3933412 ↑3, 4, 6, 7, 11, 15, 36 [Rom19b] C. Rom ́an, Three dimensional vortex approximation construction and ε-level estimates for the Ginzburg-Landau functional, Arch. Ration. Mech. Anal. 231 (2019), no. 3, 1531-1614. MR3902469 ↑5, 6, 12, 13, 18, 55 [RSS23] C. Rom ́an, E. Sandier, and S. Serfaty, Bounded vorticity for the 3D Ginzburg-Landau model and an isoflux problem, Proc. Lond. Math. Soc. (3) 126 (2023), no. 3, 1015-1062. MR4563866 ↑4, 5, 8, 13, 18, 19, 36 [San01] E. Sandier, Ginzburg-Landau minimizers from Rn+1 to Rn and minimal connections, Indiana Univ. Math. J. 50 (2001), no. 4, 1807-1844. MR1889083 ↑2, 5, 12 [San98] E. Sandier, Lower bounds for the energy of unit vector fields and applications, J. Funct. Anal. 152 (1998), no. 2, 379-403. MR1607928 ↑5, 38, 40 [Ser01] S. Serfaty, On a model of rotating superfluids, ESAIM Control Optim. Calc. Var. 6 (2001), 201238. MR1816073 ↑4 [Ser99a] S. Serfaty, Local minimizers for the Ginzburg-Landau energy near critical magnetic field. I, Commun. Contemp. Math. 1 (1999), no. 2, 213-254. MR1696100 ↑4, 5, 11, 13, 14, 38, 46, 47 [Ser99b] S. Serfaty, Local minimizers for the Ginzburg-Landau energy near critical magnetic field. II, Commun. Contemp. Math. 1 (1999), no. 3, 295-333. MR1707887 ↑51 [Ser99c] S. Serfaty, Stable configurations in superconductivity: uniqueness, multiplicity, and vortexnucleation, Arch. Ration. Mech. Anal. 149 (1999), no. 4, 329-365. MR1731999 ↑4, 5, 46, 47 [SS00] E. Sandier and S. Serfaty, Global minimizers for the Ginzburg-Landau functional below the first critical magnetic field, Ann. Inst. H. Poincar ́e Anal. Non Lin ́eaire 17 (2000), no. 1, 119-145. MR1743433 ↑4, 5 [SS03] E. Sandier and S. Serfaty, Ginzburg-Landau minimizers near the first critical field have bounded vorticity, Calc. Var. Partial Differential Equations 17 (2003), no. 1, 17-28. MR1979114 ↑4, 5, 11, 40 [SS04] E. Sandier and S. Serfaty, A product-estimate for Ginzburg-Landau and corollaries, J. Funct. Anal. 211 (2004), no. 1, 219-244. MR2054623 ↑39 [SS07] E. Sandier and S. Serfaty, Vortices in the magnetic Ginzburg-Landau model, Progress in Nonlinear Differential Equations and their Applications, vol. 70, Birkh ̈auser Boston, Inc., Boston, MA, 2007. MR2279839 ↑2, 4, 6, 7, 11, 13, 38, 39, 40, 41, 47, 65, 67 [SS12] E. Sandier and S. Serfaty, From the Ginzburg-Landau model to vortex lattice problems, Comm. Math. Phys. 313 (2012), no. 3, 635-743. MR2945619 ↑2 84 CARLOS ROM ́AN, ETIENNE SANDIER, AND SYLVIA SERFATY [SSTS69] D. Saint-James, G. Sarma, E. J. Thomas, and P. Silverman, Type II Superconductivity, Pergamon Press Oxford, New York, 1969. ↑2 [Tin96] M. Tinkham, Introduction to superconductivity, Second, McGraw-Hill, New York, 1996. ↑2 [TT90] D. Tilley and J. Tilley, Superfluidity and superconductivity, Hilger, 1990. ↑2, 4
2510.14913
Preprint BUDGET-AWARE TEST-TIME SCALING VIA DISCRIMI- NATIVE VERIFICATION Kyle Montgomery1∗, Sijun Tan2∗, Yuqi Chen1, Siyuan Zhuang2, Tianjun Zhang2, Raluca Ada Popa2, Chenguang Wang1† 1UC Santa Cruz, 2UC Berkeley {kylemontgomery, chenguangwang}@ucsc.edu, sijuntan@berkeley.edu ABSTRACT Test-time scaling is a powerful strategy for boosting the performance of large language models on complex reasoning tasks. While state-of-the-art approaches often employ generative verifiers to select the best solution from a pool of candi- dates, this method incurs prohibitive computational costs, limiting its practicality. In this work, we shift the focus to a more budget-aware paradigm: discrimina- tive verification. We conduct a thorough empirical analysis and demonstrate that while discriminative verifiers may underperform in isolation, combining them with self-consistency in a hybrid approach creates a powerful and efficient test- time scaling mechanism. Notably, under a fixed compute budget, this hybrid approach surpasses state-of-the-art generative verification by a significant margin: achieving up to 15.3% higher accuracy on AIME2025. Our findings establish that for practical, real-world applications, budget-aware scaling with discriminative verifiers is not only a "free" upgrade over self-consistency, but also a more effec- tive and efficient alternative to costly generative techniques. Code is available at https://github.com/wang-research-lab/verification. 1 INTRODUCTION The pursuit of advanced reasoning in large language models (LLMs) has been defined by the principle of scale: scaling up models, datasets, and training compute has consistently unlocked new capabilities. More recently, a new frontier has emerged in this paradigm: scaling compute not just during training, but at the point of inference. This strategy, known as test-time scaling, aims to elicit a model’s full potential by allocating additional resources to solve a single problem at inference time, leading to dramatic performance gains in complex domains like mathematics and programming (OpenAI, 2024; Snell et al., 2024). The simplest and most canonical form of test-time scaling is self-consistency (SC) (Wang et al., 2023b). Instead of trusting a single, greedily decoded answer, SC samples a diverse ensemble of solutions and selects the final answer through a simple plurality vote. This brute-force yet remarkably effective method has become a foundational baseline, demonstrating that more computation in the form of more samples often leads to better reasoning. The natural next question is whether this compute can be used more intelligently. Rather than relying on a democratic vote, could an expert "verifier" model scrutinize each solution and select the best one? This question has given rise to a new class of powerful, state-of-the-art techniques centered on generative verification. These verifiers are themselves sophisticated LLMs that produce a detailed chain-of-thought (CoT) rationale, critically evaluating a candidate solution before rendering a final verdict (Zhang et al., 2024; Mahan et al., 2024). The approach is intuitively appealing; it mimics human meta-cognition and opens up a new axis for scaling. If one verification pass is good, multiple passes should be even better, allowing for deeper scrutiny and higher confidence (Shi & Jin, 2025; Zhao et al., 2025). *Equal contribution. †Corresponding author. 1 arXiv:2510.14913v1 [cs.AI] 16 Oct 2025 Preprint SC@N BoN@N WSC@N PV@N GPV@N,M +15.3% +2.8% Figure 1: Hybrid discriminative verification techniques (e.g., weighted self-consistency (WSC) (Welleck et al., 2024) and pessimistic verification (PV) (Shi & Jin, 2025)) outperform generative pessimistic verification (GPV) under equalized compute budgets of less than 22.5 minutes (shaded region). For example, at latency budgets of 13.8 minutes and 15.7 minutes, hybrid discrim- inative verification can outperform generative verification by 15.3% and 2.8%, respectively. N is doubled at each point along the x-axis. For GPV, each solution is verified twice (M = 2). However, this expressive power comes at a staggering computational cost. Generating a detailed CoT critique for each candidate can match or even exceed the cost of generating the original solution. This immense overhead makes generative verification impractical for many real-world applications where inference budgets are constrained. Indeed, a careful analysis by Singhi et al. (2025) reveals that when verification costs are properly accounted for, these state-of-the-art verification methods require up to 8× more compute just to match the performance of simple self-consistency, and deliver only marginal gains even when granted a colossal 128× budget. These findings underscore an important limitation of scaling verification: solution correctness is fundamentally constrained by the quality of the candidates produced by the solver. If no correct solutions are sampled, no amount of verification, regardless of strength, can recover the right answer. Moreover, SC already provides a strong baseline, closely tracking pass@N on many tasks. To improve over SC, a verifier must reliably agree with the majority when it is correct, while also identifying the minority solution when the majority is wrong. These requirements make it difficult for a verifier to deliver significant gains, especially under a fixed compute budget. As a result, allocating additional compute to generating candidate solutions typically yields better returns than spending it on verification. Given these limitations, it is appealing to develop a budget-aware verification mechanisms that improve the model performance while minimizing compute costs. Discriminative verifiers present a promising alternative due to their computational efficiency. Unlike generative verifiers, which require both a costly prefilling step and sequential token generation during decoding, discriminative verifiers only perform a single forward pass (i.e., prefilling) to output a scalar score, thus avoiding the expensive sequential decoding bottleneck. However, despite their speed advantage, discriminative verifiers exhibit limited capabilities on complex reasoning tasks (Tan et al., 2025b), often underperforming SC as the pool of candidate solutions grows, which has limited their practical use. In this work, we show that hybrid approaches combining discriminative verification with self- consistency can offer the best trade-off between effectiveness and efficiency under practical compute budgets. For instance, under fixed practical inference budgets of 5 × 1015 and 1 × 1016 FLOPs, 2 Preprint hybrid discriminative verification methods (Welleck et al., 2024; Shi & Jin, 2025) outperform state- of-the-art generative verification by 6.1% and 2.5%, respectively. Moreover, although discriminative verifiers underperform SC in isolation, we show that by leveraging these hybrid methods, the resulting test-time scaling pipeline can obtain consistent improvements over SC on AIME2025 by up to 5.1%, while having only 2% compute overhead. These results highlight hybrid discriminative verification as a practical and scalable alternative, delivering strong accuracy gains with negligible overhead and outperforming more expensive generative approaches under realistic budget constraints. Our contributions are as follows: • We conduct a thorough empirical analysis of discriminative verification techniques, exploring how different selection strategies perform across scaling regimes. To our knowledge, this is the first study to systematically examine the test-time scaling properties of discriminative verification. • Building on this analysis, we present a compute-centric comparison of discriminative and generative verification, showing that discriminative methods offer a more practical and efficient alternative under realistic inference budgets. 2 EFFECTIVE DISCRIMINATIVE VERIFICATION 2.1 PRELIMINARIES Repeated sampling is a test-time scaling technique that involves generating a batch of N independent candidate solutions {si}N i=1 for a given problem Q. Each solution si is a chain of reasoning that terminates in a final answer ai = Ans(si). As N increases, the probability that at least one answer is correct also rises (i.e., Pass@N improves; see Figure 1) (Cobbe et al., 2021). However, this leaves open the central challenge of selecting a single answer a∗from among the candidates in the absence of ground truth. Self-consistency. A common approach for this selection problem is self-consistency (SC) (Wang et al., 2023b). Since correct answers tend to reoccur across independent solutions, SC groups responses by their final answer and selects the most frequent one. Formally, each distinct answer a has support size na = |{i : ai = a}|, and SC chooses a∗= arg maxa na. While this approach is robust when the correct answer is common, it can fail when the majority converges on an incorrect answer. Pseudocode for this method is provided in Algorithm 1. Best-of-N. Another strategy is best-of-N (BoN) selection (Charniak & Johnson, 2005; Cobbe et al., 2021), which uses a discriminative verifier to assign each solution a scalar score (e.g., in [0, 1]), and selects the final answer from the highest-scoring solution. Formally, each solution si receives a scalar score r(si), then BoN chooses a∗= Ans(s∗) where s∗= arg maxsi r(si). A strong verifier can identify correct but rare responses that SC might miss. However, as N increases, it can also be misled by confident yet incorrect responses, highlighting a long-tail vulnerability (see Figure 1). Pseudocode for this method is provided in Algorithm 2. 2.2 HYBRID DISCRIMINATIVE VERIFICATION To guard against the long-tail of high-scoring but incorrect responses, hybrid discriminative verifica- tion methods combine the consensus signal from SC with the verifier’s signal from BoN. We study two hybrid approaches: • Weighted self-consistency (WSC) (Welleck et al., 2024) groups solutions by their final answers and selects the answer with the largest total verifier score, i.e., a∗= arg maxa P i:ai=a r(si). The approach prioritizes answers that are not only common but also favored by the verifier. Pseudocode for this method is provided in Algorithm 3. • Pessimistic verification (PV) (Shi & Jin, 2025) groups solutions by their final answer and penalizes small answer clusters to reduce the chance of selecting low-support answers. Formally, a∗= arg maxa  1 na P i:ai=a r(si) −α ln N na+1  , where α controls the strength of the penalty. 3 Preprint 0 50 100 150 200 250 300 350 Step 0.50 0.55 0.60 0.65 0.70 0.75 Loss 0.2 0.4 0.6 0.8 1.0 Score Margin Figure 2: Blue: The loss decreases over one epoch of training. Red: The score margin, i.e., the difference in score assigned to correct solutions and incorrect solutions on average across a global batch, increases during training. Together, these indicate that the discriminative verifier learns to discriminate between correct and incorrect solutions. When α = 0, selection is based exclusively on the mean verifier score. As α →∞, the penalty dominates and the selection collapses to SC. Empirically, we find that α = 0.5 provides a good tradeoff (see Appendix C.1). Pseudocode for this method is provided in Algorithm 4. 2.3 DISCRIMINATIVE VERIFIER TRAINING This subsection outlines an approach for training a lightweight discriminative verifier, which provides the verification signal for BoN and hybrid discriminative verification techniques (WSC and PV). Dataset curation. We sample 32k math problems from NuminaMath (LI et al., 2024), which aggregates problems from Chinese K-12 exams, Orca-Math (Mitra et al., 2024), AoPS forums, and various Olympiads (e.g., IMO, APMO, BMO), among other sources. We decontaminate the training dataset by excluding any problem whose fuzzy-match similarity to an entry in our evaluation sets exceeds 80. For each question, we sample one response from each of ten LLMs: DeepSeek-R1 and its six distilled variants (DeepSeek-AI et al., 2025), DeepScaleR-1.5B-Preview (Luo et al., 2025b), and both the preview and production releases of QWQ-32B (Team, 2024; 2025). Following Shi & Jin (2025), we remove the reasoning content (i.e., the tokens between the <think> and </think> tags) from each response (see Appendix C.2 for an ablation on this choice). Each response is graded for correctness using HuggingFace’s Math-Verify toolkit (Kydlíˇcek, 2025), which parses the model’s final answer and performs symbolic equivalence checks against the reference solution. We throw out problems for which all ten solutions are either correct or incorrect, since they contain no learnable signal. Training. Following prior work (Qwen et al., 2025; Yang et al., 2024), we replace the language mod- eling head of the LLM (specifically DeepSeek-R1-Distill-Qwen-1.5B) with a two-layer scaler value head. We train our verifier using a Bradley-Terry ranking loss combined with an L2 regularization term (Ouyang et al., 2022; Kirchner et al., 2024). Concretely, our loss is L = − 1 |P| |N| X i∈P X j∈N log σ ri −rj  + λ 2 E r2 , where r = (r1, . . . , rm) are the logits assigned by the verifier to a batch of m responses, σ(x) is the logistic function, and P and N are the sets of correct and incorrect responses, respectively. The first term implements the Bradley–Terry model by maximizing the probability σ(ri −rj) that every correct response i ∈P outranks every incorrect response j ∈N (Bradley & Terry, 1952), and the second term keeps score head well-behaved and centered around zero. By computing all |P| × |N| 4 Preprint comparisons in one vectorized pass instead of sampling pairs, we gain both higher throughput and more stable gradients. We train for a single epoch on 11,420 response groups. Additional training details and hyperparameters are provided in Appendix B. 3 RESULTS We analyze the performance of our trained discriminative verifier under various discriminative verification techniques on several challenging benchmarks: AIME2024, AIME2025, LiveBench Math (White et al., 2025), and GPQA (Rein et al., 2023). For each AIME problem, we sample 128 candidate responses no longer than 16k tokens from DeepSeek-R1-Distill-Qwen-32B. On LiveBench Math and GPQA, we sample only 64 candidate responses. Similar to the construction of our training dataset, we exclude the reasoning content (i.e., the tokens between the <think> and </think> tags) during inference (see Appendix C.2). To ensure our metric estimates (e.g., Pass@N or PV@N) are precise, we report the mean over 1000 resampled draws of size N per problem and report 95% confidence intervals. Our results are provided in Table 1. Method AIME2024 AIME2025 LiveBench Math GPQA Pass@1 67.0 ± 0.5 51.9 ± 0.6 62.1 ± 0.2 56.9 ± 0.2 SC@32 83.4 ± 0.4 66.6 ± 0.5 67.0 ± 0.2 63.5 ± 0.2 BoN@32 79.1 ± 0.5 60.8 ± 0.6 64.1 ± 0.2 63.9 ± 0.2 WSC@32 85.6 ± 0.4 68.8 ± 0.5 67.5 ± 0.2 65.0 ± 0.2 PV@32 85.5 ± 0.4 69.1 ± 0.5 67.8 ± 0.2 65.6 ± 0.2 Table 1: Accuracy rates of DeepSeek-R1-Distill-Qwen-32B (N = 32) with various discriminative verification techniques (highlighted in yellow). Pass@1 and SC@32 are included for comparison. Across the board in Table 1, hybrid verification methods like WSC and PV consistently outperform competing selection methods. For example, on AIME2025, PV@32 improves over Pass@1 by 17.2%, and beats SC@32 and BoN@32 by 2.5% and 8.3%, respectively. Amazingly, even on an out-of-distribution task like GPQA, which includes questions on biology, physics, and chemistry, PV@32 can outperform SC@32 by 2.1%. 3.1 COMPARISON OF DISCRIMINATIVE AND GENERATIVE VERIFICATION Recent work has explored leveraging the generative and reasoning abilities of LLMs to verify candidate solutions (Zhang et al., 2025; Mahan et al., 2024). Generative verifiers can leverage additional test-time scaling to generate and aggregate over multiple CoT rationales to produce more accurate verdicts (Zhao et al., 2025; Shi & Jin, 2025). While this strategy can boost performance, it comes at a high cost. Generative verifiers require N (1 + M) = O(NM) long CoT generations per problem, where M is the number of times each candidate solution is verified, leading to prohibitively high inference costs as N or M is scaled. Discriminative verifiers provide a compelling alternative to generative ones: they require only a single forward pass per candidate solution, avoiding the costly decoding of long rationales. This efficiency makes them particularly attractive when compute is limited, since any budget spent on verification could otherwise be allocated to generating additional candidate solutions. In this subsection, we compare discriminative and generative verification under equalized compute budgets. Following prior work (Singhi et al., 2025), we measure the total inference compute, i.e., the compute required to generate and verify candidate solutions. Concretely, we leverage Heimdall (Shi & Jin, 2025), a state-of-the-art generative verifier trained from DeepSeek-R1-Distill-Qwen-32B. Similar to hybrid discriminative verification, Heimdall leverages pessimistic verification to incorporate the consensus signal from SC, thereby improving performance. We refer to this approach as GPV (see Algorithm 5). We focus our compute analysis on two perspectives: FLOPs and latency. FLOPs capture the theoretical compute cost of each approach, while latency reflects the real-world efficiency on modern hardware. Together, these perspectives allow us to identify the compute regimes where discriminative verifiers are most effective and where the added expense of generative verification may be justified. 5 Preprint 3.1.1 FLOPS ANALYSIS FLOPs provide a theoretical measure of the intrinsic compute required, independent of hardware and other implementation details, allowing us to study how compute requirements scale for discriminative and verification techniques. For a decoder-only transformer model with hidden size d, intermediate size m, L layers, and vocabulary size V , the FLOPs roughly decompose into three components: 1. Layer projections. Each token per layer requires 8d2 + 4dm FLOPs for Q, K, V, O projections and the MLP. 2. Attention. With KV caching, prefill compute is quadratic in Tin: each of the Tin tokens attends to all previous tokens, giving 4d · Tin(Tin+1) 2 FLOPs per layer. During decoding, cached keys/values avoid recomputation, so each of the Tout generated tokens only attends to the fixed prefix and prior outputs, costing 4d · (TinTout + Tout(Tout−1) 2 ) FLOPs per layer. 3. LM Head. Finally, output projection adds 2dV Tout FLOPs, where V is the vocabulary size. For discriminative verification, we set V = 1 and Tout = 1, corresponding to a single scalar output. Note that this formulation omits smaller terms such as normalization layers, activation functions, or positional encodings. We compare discriminative and generative verification methods on AIME2025. For each, we vary the number of candidate solutions N ∈2, 4, 8, 16, 32, 64, 128 and, for generative verification, the number of verifications per response M ∈1, 2, 4, 8, 16, 32. Results are presented in Figure 3. SC@N DV@N WSC@N DPV@N GPV@N,M 1015 1016 1017 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (a) M = 1 1015 1016 1017 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (b) M = 2 1015 1016 1017 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (c) M = 4 1015 1016 1017 1018 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (d) M = 8 1015 1016 1017 1018 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (e) M = 16 1015 1016 1017 1018 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (f) M = 32 Figure 3: Accuracy vs. FLOPs on AIME2025 under equalized compute budgets. Each subplot varies the number of verifications per candidate solution (M). Along each curve, successive points correspond to doubling the number of candidate solutions (N). The shaded region highlights the FLOPs budgets where hybrid discriminative verification techniques strictly outperform generative verification under equalized compute budgets. Repeated sampling provides a natural compute baseline: generating N candidate solutions requires O(N) long CoT traces. For example, generating 32 candidate solutions to a problem from AIME2025 with DeepSeek-R1-Distill-Qwen-32B costs 2.0 × 1016 FLOPs on average. SC selects the most com- mon answer from the candidate solutions and uses no additional compute beyond that of repeated sampling. By contrast, verification-based techniques incur additional compute cost. For example, 6 Preprint verifying 32 solutions with our discriminative verifier trained in Section 2.3 costs just 4.1 × 1014 FLOPs on average, just 2.0% of the compute used for repeated sampling. All discriminative ver- ification techniques (BoN, WSC, PV) use the same amount of verification compute. While BoN tends to underperform SC when N is large, hybrid discriminative verification methods consistently outperform the SC baseline by up to 5.1% for a negligible amount of additional compute. Conversely, generative verification techniques are significantly less efficient. For example, verifying the same 32 solutions with Heimdall (Shi & Jin, 2025) just once (M = 1) requires 3.1 × 1016 FLOPs, over 50% more FLOPs than solution generation and nearly 76× more FLOPs than discriminative verification. While generative verification can be made more effective by scaling the number of verifications per candidate solution (i.e., increasing M), the compute requirements scale linearly. Critically, under practical FLOP budgets, hybrid discriminative verification techniques outperform generative verification. This is because discriminative methods allocate nearly all of the compute budget towards sampling candidate solutions, while generative verification splits its compute budget between sampling and verifying candidates. Under realistic compute budgets, scaling the number of candidate solutions produces greater returns than scaling verifications; even an oracle-level verifier will fail to produce the correct answer if no correct solutions were sampled. With a large enough budget, however, the gain from sampling additional candidates begins to saturate, and generative verification techniques begin to dominate. The critical threshold at which generative verification becomes superior depends on M (Figure 3). For example, when M = 1, hybrid discriminative verification techniques outperform generative verification for any N ≤128. The optimal generative configuration occurs when M = 2, but even still, hybrid discriminative verification methods remain optimal for compute budgets less than 2.2 × 1016 FLOPs. 3.1.2 LATENCY ANALYSIS While FLOPs provide a useful theoretical measure of compute, they do not fully capture the practical costs of inference. In real deployments, generation is often memory- and I/O-bound, with bottlenecks introduced by KV cache size, communication overhead, and sampling inefficiencies. Wall-clock latency, therefore, provides a more realistic measure of efficiency, since compute is ultimately priced in time rather than FLOPs. We measure the average latency on AIME2025 using a single NVIDIA H100 SXM5 GPU. We leverage vLLM (Kwon et al., 2023) and its many optimizations, including dynamic batching and prefix caching, to reflect real-world usage. Similar to Section 3.1.1, we time the generation of N ∈ 2, 4, 8, 16, 32, 64, 128 candidate solutions with DeepSeek-R1-Distill-Qwen-32B and the verification of the solutions with our trained discriminative verifier and Heimdall (Shi & Jin, 2025). Latency results are reported in Table 2. N = 1 N = 2 N = 4 N = 8 N = 16 N = 32 N = 64 N = 128 Repeated Sampling 273.1 276.6 288.4 448.4 782.9 1434.0 2815.5 5514.1 Discriminative 0.05 0.10 0.21 0.42 0.83 1.66 3.32 6.65 Generative (M = 2) 552.0 558.8 656.6 992.8 1825.7 3423.7 6668.8 13160.7 Table 2: The average wall-clock time (s) for repeatedly sampling N candidate solutions, as well as the average time to verify each candidate solution using discriminative and generative verification. The latency results largely mirror the FLOP-based analysis in Section 3.1.1, but with even larger differences between discriminative and generative verification. For instance, verifying 32 solutions sampled from DeepSeek-R1-Distill-Qwen-32B with our 1.5B discriminative verifier takes only 1.66 seconds, just 0.1% of the generation time. This is an order of magnitude smaller than what FLOP estimates suggested (2.0%), reflecting the fact that discriminative verifiers avoid the decoding bottlenecks that dominate wall-clock latency. Generative verification, by contrast, becomes even less practical under a latency perspective. Just verifying 32 candidate solutions with Heimdall at M = 2 takes 3423.7 seconds, over twice the time needed for solution generation, and more than 2000× the cost of discriminative verification. These inefficiencies stem from the need to generate long CoTs for each verification, which incur memory-bandwidth and KV cache overheads not reflected in theoretical FLOP estimates. Indeed, as 7 Preprint shown in Figure 1, hybrid discriminative verification methods dominate generative verification for all inference budgets shorter than 22.5 minutes (1350s) on AIME2025 with M = 2. This threshold is dependent on a range of factors, including the number of verifications per solution (M), the specific solver, the size of the verifier, and the dataset, but it highlights a broader trend: under realistic latency constraints, discriminative verification almost always gives better performance than generative verification. In summary, while the FLOP analysis in Section 3.1.1 already showed discriminative verification to be more efficient, latency measurements make the contrast even sharper: discriminative verification achieves consistent gains for virtually the same latency as SC, whereas generative verification quickly becomes impractical as N or M grows. 3.2 SCALING MODEL SIZE FOR DISCRIMINATIVE VERIFICATION Here, we analyze how discriminative verification techniques scale with respect to the size of the solver model, which generates the candidate solutions. To do so, we generate 128 candidate solutions per question in AIME2024 and AIME2025 using DeepSeek-R1-Distill-Qwen models with 1.5B, 7B, 14B, and 32B parameters, and verify each using our trained discriminative verifier. We plot the aggregate results in Figure 4 for several values of N. Pass@N SC@N BoN@N WSC@N PV@N 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (a) N = 8 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (b) N = 16 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (c) N = 32 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (d) N = 64 Figure 4: Accuracy rates on AIME 2024/2025 for various discriminative verification methods across four solver sizes for several values of N. Pass@N and SC@N are included as baselines. We observe that increasing the solver’s size produces consistent but diminishing performance increases on AIME. Specifically, hybrid methods like WSC and PV scale similarly to SC as the size of the solver is increased, with WSC and PV maintaining a consistent edge over SC regardless of the solver’s size, across various Ns. BoN, on the other hand, exhibits poor scaling behavior: when N is small, BoN only slightly underperforms SC, but when N is large, BoN trails far behind. These results suggest that hybrid approaches can effectively mitigate BoN’s long-tail vulnerability. 3.3 INFERENCE-TIME SCALING OF DISCRIMINATIVE VERIFICATION We study how each discriminative verification method benefits from increased inference-time com- pute along two axes: the number of candidate solutions sampled from the solver and the reasoning budget allocated to the solver. First, we observe that scaling N produces consistent but diminishing improvements in performance on AIME (i.e., Pass@N increases). BoN struggles to benefit from scal- ing N, with performance quickly saturating and even falling. On the other hand, hybrid approaches like WSC and PV show consistent improvements as more solutions are sampled, maintaining a 2.2% to 5.6% edge over SC as N is scaled from 2 to 128. On AIME2024, WSC and PV boost the accuracy of DeepSeek-R1-Distill-Qwen-32B from 66.8% to 79.7% with only 4 candidate solutions, matching the performance of o3-mini (medium) or DeepSeek-R1, and outperforming SC by 3.7%. To control the reasoning budget, we use budget forcing (Muennighoff et al., 2025) and truncate the candidate solutions T ∈{0, 512, 1024, 2048, 4096, 8192, 16384} tokens after the opening think tag, manually append the closing think tag, then allow the model to continue generating its final answer. In doing so, we collect solutions under constrained reasoning budgets. We observe that even as the 8 Preprint Pass@N SC@N BoN@N WSC@N PV@N 20 21 22 23 24 25 26 27 Number of solutions sampled (N) 60 65 70 75 80 85 90 ACC on AIME2024 DeepSeekR1 o3-mini (medium) o3-mini (high) o3-mini (low) 0 29 210 211 212 213 214 Reasoning Length 20 30 40 50 60 70 80 90 ACC on AIME2024 Figure 5: Left: Unlike BoN, hybrid techniques show consistent but diminishing improvements on AIME2024 from increasing the number of candidate results N sampled from DeepSeek-R1- Distill-Qwen-32B. Right: The performance of DeepSeek-R1-Distill-Qwen-32B on AIME2024 scales logarithmically with the reasoning budget regardless of verification method. Here, N = 32. reasoning budget is scaled from 0 to 16k tokens, WSC and PV maintain an edge over SC, even while BoN falls off, showcasing the reliability of hybrid verification methods under various constraints. 4 RELATED WORK LLM-based verifiers can be broadly categorized into generative and discriminative approaches. Generative verifiers use large language models as judges that assess the correctness or quality of outputs by generating natural language rationales. A growing body of work explores this direction, employing LLMs as judges for modeling human preferences (Dubois et al., 2024; Zheng et al., 2024; Li et al., 2024; Wang et al., 2023c; Kim et al., 2023; 2024; Li et al., 2023; Zhu et al., 2023b; Mahan et al., 2024), or as verifiers for evaluating solution correctness in reasoning tasks (Zhang et al., 2024; Singhi et al., 2025; Shi & Jin, 2025; Saha et al., 2025). In contrast, discriminative verifiers, such as reward models, assign scalar scores to candidate responses based on human preference data (Christiano et al., 2017; Ziegler et al., 2019; Zhu et al., 2023a; Liu & Zeng, 2024; Wang et al., 2024; Park et al., 2024; Han et al., 2024). These models are central to reinforcement learning from human feedback and are also used to rank or select responses in BoN inference settings (Lightman et al., 2023; Wang et al., 2023a; Luo et al., 2024; Saunders et al., 2022; Uesato et al., 2022; Yu et al., 2024). Together, generative and discriminative verifiers provide complementary paradigms for evaluating, selecting, and aligning LLM outputs at inference time. A substantial body of work has investigated improving the mathematical reasoning capabilities of LLMs through prompting (Wei et al., 2022; Kojima et al., 2022; Crispino et al., 2024), training (Cobbe et al., 2021; Guan et al., 2025; Hosseini et al., 2024; Lightman et al., 2023; Pang et al., 2024; Ye et al., 2025; Luo et al., 2025a;b; Tan et al., 2025a), and test-time scaling (Snell et al., 2024; Brown et al., 2024; Setlur et al., 2024). Following the release of o1 (OpenAI, 2024), there has been a surge of interest in test-time scaling methods for LLM reasoning (Snell et al., 2024; Brown et al., 2024; Singhi et al., 2025; Zhao et al., 2025), which improve performance by sampling multiple solutions and aggregating them via majority voting or LLM-based verification. Our work builds on this line of research, demonstrating that discriminative LLM verifiers can serve as an effective and efficient verification approach for test-time scaling in complex math reasoning tasks. 9 Preprint 5 CONCLUSION We studied hybrid discriminative verification as a practical alternative to costly generative approaches. Discriminative methods achieve comparable or superior accuracy in practical compute regimes, where the high cost of CoT generation limits generative approaches. Our results highlight hybrid discriminative verification as the more efficient choice for realistic test-time scaling. REFERENCES R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3–4):324–345, December 1952. Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024. Eugene Charniak and Mark Johnson. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Kevin Knight, Hwee Tou Ng, and Kemal Oflazer (eds.), Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL‘05), pp. 173–180, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics. doi: 10.3115/1219840.12 19862. URL https://aclanthology.org/P05-1022/. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, and Chenguang Wang. Agent instructs large language models to be general zero-shot reasoners. In Proceedings of the 41st International Conference on Machine Learning, pp. 9458–9549, 2024. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y. K. Li, Y. Q. Wang, Y. X. Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, 10 Preprint Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948. Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems, 36, 2024. Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. arXiv preprint arXiv:2501.04519, 2025. Seungju Han, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, and Nouha Dziri. Wildguard: Open one-stop moderation tools for safety risks, jailbreaks, and refusals of llms, 2024. URL https://arxiv.org/abs/2406.18495. Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457, 2024. Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. Prometheus: Inducing fine-grained eval- uation capability in language models. In The Twelfth International Conference on Learning Representations, 2023. Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. Prometheus 2: An open source language model specialized in evaluating other language models. arXiv preprint arXiv:2405.01535, 2024. Jan Hendrik Kirchner, Yining Chen, Harri Edwards, Jan Leike, Nat McAleese, and Yuri Burda. Prover-verifier games improve legibility of llm outputs, 2024. URL https://arxiv.org/ab s/2407.13692. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35: 22199–22213, 2022. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. Hynek Kydlíˇcek. Math-Verify: Math Verification Library. https://github.com/hugging face/math-verify, 2025. Version 0.6.1, Apache-2.0 license. Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. Numinamath. [https://huggingface.co/AI-M O/NuminaMath-CoT](https://github.com/project-numina/aimo-progres s-prize/blob/main/report/numina_dataset.pdf), 2024. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. Generative judge for evaluating alignment. arXiv preprint arXiv:2310.05470, 2023. Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E Gonzalez, and Ion Stoica. From crowdsourced data to high-quality benchmarks: Arena-hard and benchbuilder pipeline. arXiv preprint arXiv:2406.11939, 2024. Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations, 2023. 11 Preprint Chris Yuhao Liu and Liang Zeng. Skywork reward model series. https://huggingface.co /Skywork, September 2024. URL https://huggingface.co/Skywork. Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2024. Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepcoder: A fully open-source 14b coder at o3-mini level. https://pretty-radio-b75 .notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-min i-Level-1cf81902c14680b3bee5eb349a512a51, 2025a. Notion Blog. Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepscaler: Surpassing o1-preview with a 1.5b model by scaling rl. https://pretty-radio-b75.notion.site/DeepS caleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-196 81902c1468005bed8ca303013a4e2, 2025b. Notion Blog. Dakota Mahan, Duy Van Phung, Rafael Rafailov, Chase Blagden, Nathan Lile, Louis Castricato, Jan-Philipp Fränken, Chelsea Finn, and Alon Albalak. Generative reward models. arXiv preprint arXiv:2410.12832, 2024. Justus Mattern, Sami Jaghouar, Manveer Basra, Jannik Straube, Matthew Di Ferrante, Felix Gabriel, Jack Min Ong, Vincent Weisser, and Johannes Hagemann. Synthetic-1: Two million collaboratively generated reasoning traces from deepseek-r1, 2025. URL https://www.primeintellect .ai/blog/synthetic-1-release. Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math: Unlocking the potential of slms in grade school math, 2024. Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025. OpenAI. Learning to reason with language models. https://openai.com/index/learnin g-to-reason-with-llms/, 2024. Accessed: 2025-04-25. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. URL https://arxiv.org/abs/2203.02155. Richard Yuanzhe Pang, Weizhe Yuan, He He, Kyunghyun Cho, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. Advances in Neural Information Processing Systems, 37:116617–116637, 2024. Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. Offsetbias: Leveraging debiased data for tuning evaluators, 2024. Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115. David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A Graduate-Level Google-Proof Q&A Benchmark, 2023. URL https://arxiv.org/abs/2311.12022. 12 Preprint Swarnadeep Saha, Xian Li, Marjan Ghazvininejad, Jason Weston, and Tianlu Wang. Learning to plan & reason for evaluation with thinking-llm-as-a-judge. arXiv preprint arXiv:2501.18099, 2025. William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802, 2022. Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for llm reasoning. arXiv preprint arXiv:2410.08146, 2024. Wenlei Shi and Xing Jin. Heimdall: test-time scaling on the generative verification, 2025. URL https://arxiv.org/abs/2504.10337. Nishad Singhi, Hritik Bansal, Arian Hosseini, Aditya Grover, Kai-Wei Chang, Marcus Rohrbach, and Anna Rohrbach. When to solve, when to verify: Compute-optimal problem solving and generative verification for llm reasoning, 2025. URL https://arxiv.org/abs/2504.01005. Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024. Sijun Tan, Michael Luo, Colin Cai, Tarun Venkat, Kyle Montgomery, Aaron Hao, Tianhao Wu, Arnav Balyan, Manan Roongta, Chenguang Wang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. rllm: A framework for post-training language agents. https://pretty-radio-b75.notion. site/rLLM-A-Framework-for-Post-Training-Language-Agents-21b8190 2c146819db63cd98a54ba5f31, 2025a. Notion Blog. Sijun Tan, Siyuan Zhuang, Kyle Montgomery, William Y. Tang, Alejandro Cuadron, Chenguang Wang, Raluca Ada Popa, and Ion Stoica. Judgebench: A benchmark for evaluating llm-based judges, 2025b. URL https://arxiv.org/abs/2410.12784. Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https://qwenlm.github.io/blog/qwq-32b-preview/. Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025. URL https://qwenlm.github.io/blog/qwq-32b/. Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022. Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXiv preprint arXiv:2312.08935, 2023a. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdh- ery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023b. URL https://arxiv.org/abs/2203.11171. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization. arXiv preprint arXiv:2306.05087, 2023c. Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, and Oleksii Kuchaiev. Helpsteer2: Open-source dataset for training top-performing reward models, 2024. URL https://arxiv.org/abs/2406.08673. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022. Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, and Zaid Harchaoui. From decoding to meta-generation: Inference-time algorithms for large language models. arXiv preprint arXiv:2406.16838, 2024. 13 Preprint Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid Shwartz- Ziv, Neel Jain, Khalid Saifullah, Siddartha Naidu, et al. Livebench: A challenging, contamination- free llm benchmark. arXiv preprint arXiv:2406.19314, 2024. Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid Shwartz- Ziv, Neel Jain, Khalid Saifullah, Sreemanti Dey, Shubh-Agrawal, Sandeep Singh Sandha, Siddartha Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, and Micah Goldblum. Livebench: A challenging, contamination-limited llm benchmark, 2025. URL https://arxiv. org/abs/2406.19314. An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement, 2024. URL https://arxiv.org/abs/2409.12122. Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387, 2025. Fei Yu, Anningzhe Gao, and Benyou Wang. Ovm, outcome-supervised value models for planning in mathematical reasoning. In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 858–875, 2024. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240, 2024. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction, 2025. URL https://arxiv. org/abs/2408.15240. Eric Zhao, Pranjal Awasthi, and Sreenivas Gollapudi. Sample, scrutinize and scale: Effective inference-time search by scaling verification, 2025. URL https://arxiv.org/abs/2502 .01839. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36, 2024. Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. Starling-7b: Improving llm helpfulness & harmlessness with rlaif, November 2023a. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. Judgelm: Fine-tuned large language models are scalable judges. arXiv preprint arXiv:2310.17631, 2023b. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. A ALGORITHMS Algorithm 1 Self-Consistency (SC@N) Require: problem Q, solver LM, slate size N 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 2: Group Answers 3: for each cluster Ca do 4: na ←|Ca| 5: a∗←arg maxa na ▷Stage 3: Plurality Vote 6: return a∗ 14 Preprint Algorithm 2 Best-of-N (BoN@N) Require: problem Q, solver LM, slate size N, verifier V 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: Verifications ←{ ri = V (si) }N i=1 ▷Stage 2: Verify Candidates 3: i∗←arg maxi∈{1,...,N} ri ▷Stage 3: Select Highest-Scoring Solution 4: a∗←Ans(si∗) ▷Stage 4: Extract Final Answer 5: return a∗ Algorithm 3 Weighted Self-Consistency (WSC@N) Require: problem Q, solver LM, slate size N, verifier V 1: Candidates ←{si}N i=1 ∼LM(Q) Stage 1: Generate Candidates 2: Verifications ←{ ri = V (si) }N i=1 ▷Stage 2: Verify Candidates 3: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 3: Group Answers 4: for each cluster Ca do 5: Wa ←P i∈Ca ri 6: a∗←arg maxa Wa Stage 4: Select Highest-Weight Answer 7: return a∗ Algorithm 4 Pessimistic Verification (PV@N) Require: problem Q, solver LM, slate size N, verifier V , penalty weight α 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: Verifications ←{ ri = V (si)}N i=1 ▷Stage 2: Verify Candidates 3: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 3: Group Answers 4: for each cluster Ca do 5: na ←|Ca| 6: ¯r(a) ← 1 na P i∈Ca ri 7: ψa ← ln N na+1 8: a∗←arg maxa [ ¯r(a) −α ψa ] ▷Stage 4: Select Best Answer 9: return a∗ Algorithm 5 Generative Pessimistic Verification (GPV@N, M) Require: problem Q, solver LM, slate size N, generative verifier V , # of verifications M, penalty weight α 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: for i = 1 to N do ▷Stage 2: Generative Verifications (repeat M times) 3: for m = 1 to M do 4: (CoTi,m, ri,m) ←V (si) 5: ˜ri ← 1 M PM m=1 ri,m 6: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 3: Group Answers 7: for each cluster Ca do 8: na ←|Ca| 9: ¯r(a) ← 1 na P i∈Ca ˜ri 10: ψa ←ln(NM) naM+1 11: a∗←arg maxa [ ¯r(a) −α ψa ] ▷Stage 4: Select Best Answer 12: return a∗ B ADDITIONAL TECHNICAL DETAILS Our training data is based on a subset of Numina-Math (LI et al., 2024). DeepSeek-R1 responses were collected from Mattern et al. (2025). Meanwhile, the majority of the responses from six DeepSeek- 15 Preprint R1-Distill models, DeepScaleR-1.5B-Preview, and the two QwQ models were generated on a local cluster of NVIDIA A100 GPUs, with a minority coming from 3rd party API providers. Our evaluation datasets are AIME2024, AIME2025, LiveBench-Math (White et al., 2024), and GPQA (Rein et al., 2023). Combined, they include 596 questions. We decontaminate the training dataset by excluding any problem whose fuzzy-match similarity to an entry in our evaluation sets exceeds 80. For each AIME problem, we sample 128 candidate solutions, while on LiveBench Math and GPQA, we sample only 64 candidate solutions. When rolling out solutions during training and evaluation, we follow the model’s usage recommenda- tions, namely prefilling the opening <think> token, sampling with a temperature of 0.6 and a top-p value of 0.95, and instructing the model to output its final answer within \boxed{}. Our 1.5B discriminative verifier was trained for a single epoch (11,420 response groups) on 4xA100 SXM4 GPUs using the hyperparameters listed in Table 3. Hyper-parameter Value Global batch size 32 LR 5×10−5 LR scheduler Linear with 20 warmup steps Optimizer (AdamW) β1 = 0.9, β2 = 0.999 λ 0.01 Max gradient norm 1.0 Table 3: Hyper-parameters for training discriminative verifiers. C ADDITIONAL ABLATION EXPERIMENTS In addition to our main experiments, we include two further ablations conducted on a held-out validation set. To construct this set, we removed 250 problems from the training dataset and generated 32 responses per problem with 1.5B, 7B, 14B, and 32B variants of deepseek-ai/DeepSeek-R1-Distill- Qwen. We discarded items where all sampled responses were correct or all incorrect, leaving 691 problems for validation. This setup ensures that both correct and incorrect responses are available, making it suitable for evaluating the performance of a verifier. C.1 EFFECT OF THE PESSIMISM WEIGHT α 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 55 60 65 70 75 80 ACC on Val Set N=4 N=8 N=16 N=32 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 40 45 50 55 60 65 70 75 80 ACC on Val Set 1.5B 7B 14B 32B Figure 6: Left: Validation accuracy of PV as a function of the pessimism weight α for various numbers of independent candidate solutions (N). Right: Validation accuracy of PV as a function of the pessimism weight α for various-sized solver models. We first ablate the effect of the pessimism weight α in pessimistic verification (PV). As shown in Figure 6 (left), which only includes 147 response groups generated by deepseek-ai/DeepSeek- R1-Distill-Qwen-32B, performance peaks around α ≈0.5 for N ∈4, 8, 16, 32 and slowly decays. 16 Preprint 21 22 23 24 25 26 27 Number of solutions sampled (N) 65 70 75 80 85 90 95 100 ACC on Val Set Pass@N SC@N BoN@N WSC@N PV@N BoN@N (w/ reasoning) WSC@N (w/ reasoning) PV@N (w/ reasoning) Figure 7: Validation accuracy on the held-out set when including vs. excluding reasoning content in verifier inputs for both training and inference. Figure 6 (right) demonstrates that α = 0.5 is a reasonable choice for 4 solver models of various sizes. Based on this result, we set α = 0.5 for all main experiments. Notably, in Shi & Jin (2025), the authors use an α = 0.1 for experiments with Heimdall. This makes sense: with a stronger verifier and sufficiently large M, you can reduce α and put more weight on the verifier. C.2 EFFECT OF REASONING CONTENT ON THE VERIFIER We next ablate whether to pass the reasoning content (the tokens between <think> and </think>) to the verifier during training and inference. Our main experiments exclude reasoning, i.e., the verifier observes only the final solution string. For comparison, we trained and evaluated a second verifier that retains the reasoning content. As shown in Figure 7, including reasoning consistently degrades performance across all selection methods: BoN, WSC, and PV all achieve lower accuracy when reasoning traces are present. This suggests that the additional reasoning text introduces noise rather than a useful signal, reinforcing our choice to exclude it during both training and evaluation. 17
Preprint BUDGET-AWARE TEST-TIME SCALING VIA DISCRIMINATIVE VERIFICATION Kyle Montgomery1∗, Sijun Tan2∗, Yuqi Chen1, Siyuan Zhuang2, Tianjun Zhang2, Raluca Ada Popa2, Chenguang Wang1† 1UC Santa Cruz, 2UC Berkeley {kylemontgomery, , ABSTRACT Test-time scaling is a powerful strategy for boosting the performance of large language models on complex reasoning tasks. While state-of-the-art approaches often employ generative verifiers to select the best solution from a pool of candidates, this method incurs prohibitive computational costs, limiting its practicality. In this work, we shift the focus to a more budget-aware paradigm: discriminative verification. We conduct a thorough empirical analysis and demonstrate that while discriminative verifiers may underperform in isolation, combining them with self-consistency in a hybrid approach creates a powerful and efficient testtime scaling mechanism. Notably, under a fixed compute budget, this hybrid approach surpasses state-of-the-art generative verification by a significant margin: achieving up to 15.3% higher accuracy on AIME2025. Our findings establish that for practical, real-world applications, budget-aware scaling with discriminative verifiers is not only a "free" upgrade over self-consistency, but also a more effective and efficient alternative to costly generative techniques. Code is available at https://github.com/wang-research-lab/verification. 1 INTRODUCTION The pursuit of advanced reasoning in large language models (LLMs) has been defined by the principle of scale: scaling up models, datasets, and training compute has consistently unlocked new capabilities. More recently, a new frontier has emerged in this paradigm: scaling compute not just during training, but at the point of inference. This strategy, known as test-time scaling, aims to elicit a model's full potential by allocating additional resources to solve a single problem at inference time, leading to dramatic performance gains in complex domains like mathematics and programming (OpenAI, 2024; Snell et al., 2024). The simplest and most canonical form of test-time scaling is self-consistency (SC) (Wang et al., 2023b). Instead of trusting a single, greedily decoded answer, SC samples a diverse ensemble of solutions and selects the final answer through a simple plurality vote. This brute-force yet remarkably effective method has become a foundational baseline, demonstrating that more computation in the form of more samples often leads to better reasoning. The natural next question is whether this compute can be used more intelligently. Rather than relying on a democratic vote, could an expert "verifier" model scrutinize each solution and select the best one? This question has given rise to a new class of powerful, state-of-the-art techniques centered on generative verification. These verifiers are themselves sophisticated LLMs that produce a detailed chain-of-thought (CoT) rationale, critically evaluating a candidate solution before rendering a final verdict (Zhang et al., 2024; Mahan et al., 2024). The approach is intuitively appealing; it mimics human meta-cognition and opens up a new axis for scaling. If one verification pass is good, multiple passes should be even better, allowing for deeper scrutiny and higher confidence (Shi & Jin, 2025; Zhao et al., 2025). *Equal contribution. †Corresponding author. 1 16 Oct 2025 Preprint SC@N BoN@N WSC@N PV@N GPV@N,M +15.3% +2.8% Figure 1: Hybrid discriminative verification techniques (e.g., weighted self-consistency (WSC) (Welleck et al., 2024) and pessimistic verification (PV) (Shi & Jin, 2025)) outperform generative pessimistic verification (GPV) under equalized compute budgets of less than 22.5 minutes (shaded region). For example, at latency budgets of 13.8 minutes and 15.7 minutes, hybrid discriminative verification can outperform generative verification by 15.3% and 2.8%, respectively. N is doubled at each point along the x-axis. For GPV, each solution is verified twice (M = 2). However, this expressive power comes at a staggering computational cost. Generating a detailed CoT critique for each candidate can match or even exceed the cost of generating the original solution. This immense overhead makes generative verification impractical for many real-world applications where inference budgets are constrained. Indeed, a careful analysis by Singhi et al. (2025) reveals that when verification costs are properly accounted for, these state-of-the-art verification methods require up to 8× more compute just to match the performance of simple self-consistency, and deliver only marginal gains even when granted a colossal 128× budget. These findings underscore an important limitation of scaling verification: solution correctness is fundamentally constrained by the quality of the candidates produced by the solver. If no correct solutions are sampled, no amount of verification, regardless of strength, can recover the right answer. Moreover, SC already provides a strong baseline, closely tracking pass@N on many tasks. To improve over SC, a verifier must reliably agree with the majority when it is correct, while also identifying the minority solution when the majority is wrong. These requirements make it difficult for a verifier to deliver significant gains, especially under a fixed compute budget. As a result, allocating additional compute to generating candidate solutions typically yields better returns than spending it on verification. Given these limitations, it is appealing to develop a budget-aware verification mechanisms that improve the model performance while minimizing compute costs. Discriminative verifiers present a promising alternative due to their computational efficiency. Unlike generative verifiers, which require both a costly prefilling step and sequential token generation during decoding, discriminative verifiers only perform a single forward pass (i.e., prefilling) to output a scalar score, thus avoiding the expensive sequential decoding bottleneck. However, despite their speed advantage, discriminative verifiers exhibit limited capabilities on complex reasoning tasks (Tan et al., 2025b), often underperforming SC as the pool of candidate solutions grows, which has limited their practical use. In this work, we show that hybrid approaches combining discriminative verification with selfconsistency can offer the best trade-off between effectiveness and efficiency under practical compute budgets. For instance, under fixed practical inference budgets of 5 × 1015 and 1 × 1016 FLOPs, 2 Preprint hybrid discriminative verification methods (Welleck et al., 2024; Shi & Jin, 2025) outperform stateof-the-art generative verification by 6.1% and 2.5%, respectively. Moreover, although discriminative verifiers underperform SC in isolation, we show that by leveraging these hybrid methods, the resulting test-time scaling pipeline can obtain consistent improvements over SC on AIME2025 by up to 5.1%, while having only 2% compute overhead. These results highlight hybrid discriminative verification as a practical and scalable alternative, delivering strong accuracy gains with negligible overhead and outperforming more expensive generative approaches under realistic budget constraints. Our contributions are as follows: • We conduct a thorough empirical analysis of discriminative verification techniques, exploring how different selection strategies perform across scaling regimes. To our knowledge, this is the first study to systematically examine the test-time scaling properties of discriminative verification. • Building on this analysis, we present a compute-centric comparison of discriminative and generative verification, showing that discriminative methods offer a more practical and efficient alternative under realistic inference budgets. 2 EFFECTIVE DISCRIMINATIVE VERIFICATION 2.1 PRELIMINARIES Repeated sampling is a test-time scaling technique that involves generating a batch of N independent candidate solutions {si}N i=1 for a given problem Q. Each solution si is a chain of reasoning that terminates in a final answer ai = Ans(si). As N increases, the probability that at least one answer is correct also rises (i.e., Pass@N improves; see Figure 1) (Cobbe et al., 2021). However, this leaves open the central challenge of selecting a single answer a∗from among the candidates in the absence of ground truth. Self-consistency. A common approach for this selection problem is self-consistency (SC) (Wang et al., 2023b). Since correct answers tend to reoccur across independent solutions, SC groups responses by their final answer and selects the most frequent one. Formally, each distinct answer a has support size na = |{i : ai = a}|, and SC chooses a∗= arg maxa na. While this approach is robust when the correct answer is common, it can fail when the majority converges on an incorrect answer. Pseudocode for this method is provided in Algorithm 1. Best-of-N. Another strategy is best-of-N (BoN) selection (Charniak & Johnson, 2005; Cobbe et al., 2021), which uses a discriminative verifier to assign each solution a scalar score (e.g., in [0, 1]), and selects the final answer from the highest-scoring solution. Formally, each solution si receives a scalar score r(si), then BoN chooses a∗= Ans(s∗) where s∗= arg maxsi r(si). A strong verifier can identify correct but rare responses that SC might miss. However, as N increases, it can also be misled by confident yet incorrect responses, highlighting a long-tail vulnerability (see Figure 1). Pseudocode for this method is provided in Algorithm 2. 2.2 HYBRID DISCRIMINATIVE VERIFICATION To guard against the long-tail of high-scoring but incorrect responses, hybrid discriminative verification methods combine the consensus signal from SC with the verifier's signal from BoN. We study two hybrid approaches: • Weighted self-consistency (WSC) (Welleck et al., 2024) groups solutions by their final answers and selects the answer with the largest total verifier score, i.e., a∗= arg maxa P i:ai=a r(si). The approach prioritizes answers that are not only common but also favored by the verifier. Pseudocode for this method is provided in Algorithm 3. • Pessimistic verification (PV) (Shi & Jin, 2025) groups solutions by their final answer and penalizes small answer clusters to reduce the chance of selecting low-support answers. Formally, a∗= arg maxa 1 na P i:ai=a r(si) -α ln N na+1 , where α controls the strength of the penalty. 3 Preprint 0 50 100 150 200 250 300 350 Step 0.50 0.55 0.60 0.65 0.70 0.75 Loss 0.2 0.4 0.6 0.8 1.0 Score Margin Figure 2: Blue: The loss decreases over one epoch of training. Red: The score margin, i.e., the difference in score assigned to correct solutions and incorrect solutions on average across a global batch, increases during training. Together, these indicate that the discriminative verifier learns to discriminate between correct and incorrect solutions. When α = 0, selection is based exclusively on the mean verifier score. As α →∞, the penalty dominates and the selection collapses to SC. Empirically, we find that α = 0.5 provides a good tradeoff (see Appendix C.1). Pseudocode for this method is provided in Algorithm 4. 2.3 DISCRIMINATIVE VERIFIER TRAINING This subsection outlines an approach for training a lightweight discriminative verifier, which provides the verification signal for BoN and hybrid discriminative verification techniques (WSC and PV). Dataset curation. We sample 32k math problems from NuminaMath (LI et al., 2024), which aggregates problems from Chinese K-12 exams, Orca-Math (Mitra et al., 2024), AoPS forums, and various Olympiads (e.g., IMO, APMO, BMO), among other sources. We decontaminate the training dataset by excluding any problem whose fuzzy-match similarity to an entry in our evaluation sets exceeds 80. For each question, we sample one response from each of ten LLMs: DeepSeek-R1 and its six distilled variants (DeepSeek-AI et al., 2025), DeepScaleR-1.5B-Preview (Luo et al., 2025b), and both the preview and production releases of QWQ-32B (Team, 2024; 2025). Following Shi & Jin (2025), we remove the reasoning content (i.e., the tokens between the and tags) from each response (see Appendix C.2 for an ablation on this choice). Each response is graded for correctness using HuggingFace's Math-Verify toolkit (Kydlíˇcek, 2025), which parses the model's final answer and performs symbolic equivalence checks against the reference solution. We throw out problems for which all ten solutions are either correct or incorrect, since they contain no learnable signal. Training. Following prior work (Qwen et al., 2025; Yang et al., 2024), we replace the language modeling head of the LLM (specifically DeepSeek-R1-Distill-Qwen-1.5B) with a two-layer scaler value head. We train our verifier using a Bradley-Terry ranking loss combined with an L2 regularization term (Ouyang et al., 2022; Kirchner et al., 2024). Concretely, our loss is L = - 1 |P| |N| X i∈P X j∈N log σ ri -rj + λ 2 E r2 , where r = (r1, . . . , rm) are the logits assigned by the verifier to a batch of m responses, σ(x) is the logistic function, and P and N are the sets of correct and incorrect responses, respectively. The first term implements the Bradley-Terry model by maximizing the probability σ(ri -rj) that every correct response i ∈P outranks every incorrect response j ∈N (Bradley & Terry, 1952), and the second term keeps score head well-behaved and centered around zero. By computing all |P| × |N| 4 Preprint comparisons in one vectorized pass instead of sampling pairs, we gain both higher throughput and more stable gradients. We train for a single epoch on 11,420 response groups. Additional training details and hyperparameters are provided in Appendix B. 3 RESULTS We analyze the performance of our trained discriminative verifier under various discriminative verification techniques on several challenging benchmarks: AIME2024, AIME2025, LiveBench Math (White et al., 2025), and GPQA (Rein et al., 2023). For each AIME problem, we sample 128 candidate responses no longer than 16k tokens from DeepSeek-R1-Distill-Qwen-32B. On LiveBench Math and GPQA, we sample only 64 candidate responses. Similar to the construction of our training dataset, we exclude the reasoning content (i.e., the tokens between the and tags) during inference (see Appendix C.2). To ensure our metric estimates (e.g., Pass@N or PV@N) are precise, we report the mean over 1000 resampled draws of size N per problem and report 95% confidence intervals. Our results are provided in Table 1. Method AIME2024 AIME2025 LiveBench Math GPQA Pass@1 67.0 ± 0.5 51.9 ± 0.6 62.1 ± 0.2 56.9 ± 0.2 SC@32 83.4 ± 0.4 66.6 ± 0.5 67.0 ± 0.2 63.5 ± 0.2 BoN@32 79.1 ± 0.5 60.8 ± 0.6 64.1 ± 0.2 63.9 ± 0.2 WSC@32 85.6 ± 0.4 68.8 ± 0.5 67.5 ± 0.2 65.0 ± 0.2 PV@32 85.5 ± 0.4 69.1 ± 0.5 67.8 ± 0.2 65.6 ± 0.2 Table 1: Accuracy rates of DeepSeek-R1-Distill-Qwen-32B (N = 32) with various discriminative verification techniques (highlighted in yellow). Pass@1 and SC@32 are included for comparison. Across the board in Table 1, hybrid verification methods like WSC and PV consistently outperform competing selection methods. For example, on AIME2025, PV@32 improves over Pass@1 by 17.2%, and beats SC@32 and BoN@32 by 2.5% and 8.3%, respectively. Amazingly, even on an out-of-distribution task like GPQA, which includes questions on biology, physics, and chemistry, PV@32 can outperform SC@32 by 2.1%. 3.1 COMPARISON OF DISCRIMINATIVE AND GENERATIVE VERIFICATION Recent work has explored leveraging the generative and reasoning abilities of LLMs to verify candidate solutions (Zhang et al., 2025; Mahan et al., 2024). Generative verifiers can leverage additional test-time scaling to generate and aggregate over multiple CoT rationales to produce more accurate verdicts (Zhao et al., 2025; Shi & Jin, 2025). While this strategy can boost performance, it comes at a high cost. Generative verifiers require N (1 + M) = O(NM) long CoT generations per problem, where M is the number of times each candidate solution is verified, leading to prohibitively high inference costs as N or M is scaled. Discriminative verifiers provide a compelling alternative to generative ones: they require only a single forward pass per candidate solution, avoiding the costly decoding of long rationales. This efficiency makes them particularly attractive when compute is limited, since any budget spent on verification could otherwise be allocated to generating additional candidate solutions. In this subsection, we compare discriminative and generative verification under equalized compute budgets. Following prior work (Singhi et al., 2025), we measure the total inference compute, i.e., the compute required to generate and verify candidate solutions. Concretely, we leverage Heimdall (Shi & Jin, 2025), a state-of-the-art generative verifier trained from DeepSeek-R1-Distill-Qwen-32B. Similar to hybrid discriminative verification, Heimdall leverages pessimistic verification to incorporate the consensus signal from SC, thereby improving performance. We refer to this approach as GPV (see Algorithm 5). We focus our compute analysis on two perspectives: FLOPs and latency. FLOPs capture the theoretical compute cost of each approach, while latency reflects the real-world efficiency on modern hardware. Together, these perspectives allow us to identify the compute regimes where discriminative verifiers are most effective and where the added expense of generative verification may be justified. 5 Preprint 3.1.1 FLOPS ANALYSIS FLOPs provide a theoretical measure of the intrinsic compute required, independent of hardware and other implementation details, allowing us to study how compute requirements scale for discriminative and verification techniques. For a decoder-only transformer model with hidden size d, intermediate size m, L layers, and vocabulary size V , the FLOPs roughly decompose into three components: 1. Layer projections. Each token per layer requires 8d2 + 4dm FLOPs for Q, K, V, O projections and the MLP. 2. Attention. With KV caching, prefill compute is quadratic in Tin: each of the Tin tokens attends to all previous tokens, giving 4d · Tin(Tin+1) 2 FLOPs per layer. During decoding, cached keys/values avoid recomputation, so each of the Tout generated tokens only attends to the fixed prefix and prior outputs, costing 4d · (TinTout + Tout(Tout-1) 2 ) FLOPs per layer. 3. LM Head. Finally, output projection adds 2dV Tout FLOPs, where V is the vocabulary size. For discriminative verification, we set V = 1 and Tout = 1, corresponding to a single scalar output. Note that this formulation omits smaller terms such as normalization layers, activation functions, or positional encodings. We compare discriminative and generative verification methods on AIME2025. For each, we vary the number of candidate solutions N ∈2, 4, 8, 16, 32, 64, 128 and, for generative verification, the number of verifications per response M ∈1, 2, 4, 8, 16, 32. Results are presented in Figure 3. SC@N DV@N WSC@N DPV@N GPV@N,M 1015 1016 1017 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (a) M = 1 1015 1016 1017 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (b) M = 2 1015 1016 1017 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (c) M = 4 1015 1016 1017 1018 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (d) M = 8 1015 1016 1017 1018 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (e) M = 16 1015 1016 1017 1018 FLOPs 50 55 60 65 70 75 80 85 ACC on AIME2025 (f) M = 32 Figure 3: Accuracy vs. FLOPs on AIME2025 under equalized compute budgets. Each subplot varies the number of verifications per candidate solution (M). Along each curve, successive points correspond to doubling the number of candidate solutions (N). The shaded region highlights the FLOPs budgets where hybrid discriminative verification techniques strictly outperform generative verification under equalized compute budgets. Repeated sampling provides a natural compute baseline: generating N candidate solutions requires O(N) long CoT traces. For example, generating 32 candidate solutions to a problem from AIME2025 with DeepSeek-R1-Distill-Qwen-32B costs 2.0 × 1016 FLOPs on average. SC selects the most common answer from the candidate solutions and uses no additional compute beyond that of repeated sampling. By contrast, verification-based techniques incur additional compute cost. For example, 6 Preprint verifying 32 solutions with our discriminative verifier trained in Section 2.3 costs just 4.1 × 1014 FLOPs on average, just 2.0% of the compute used for repeated sampling. All discriminative verification techniques (BoN, WSC, PV) use the same amount of verification compute. While BoN tends to underperform SC when N is large, hybrid discriminative verification methods consistently outperform the SC baseline by up to 5.1% for a negligible amount of additional compute. Conversely, generative verification techniques are significantly less efficient. For example, verifying the same 32 solutions with Heimdall (Shi & Jin, 2025) just once (M = 1) requires 3.1 × 1016 FLOPs, over 50% more FLOPs than solution generation and nearly 76× more FLOPs than discriminative verification. While generative verification can be made more effective by scaling the number of verifications per candidate solution (i.e., increasing M), the compute requirements scale linearly. Critically, under practical FLOP budgets, hybrid discriminative verification techniques outperform generative verification. This is because discriminative methods allocate nearly all of the compute budget towards sampling candidate solutions, while generative verification splits its compute budget between sampling and verifying candidates. Under realistic compute budgets, scaling the number of candidate solutions produces greater returns than scaling verifications; even an oracle-level verifier will fail to produce the correct answer if no correct solutions were sampled. With a large enough budget, however, the gain from sampling additional candidates begins to saturate, and generative verification techniques begin to dominate. The critical threshold at which generative verification becomes superior depends on M (Figure 3). For example, when M = 1, hybrid discriminative verification techniques outperform generative verification for any N ≤128. The optimal generative configuration occurs when M = 2, but even still, hybrid discriminative verification methods remain optimal for compute budgets less than 2.2 × 1016 FLOPs. 3.1.2 LATENCY ANALYSIS While FLOPs provide a useful theoretical measure of compute, they do not fully capture the practical costs of inference. In real deployments, generation is often memory- and I/O-bound, with bottlenecks introduced by KV cache size, communication overhead, and sampling inefficiencies. Wall-clock latency, therefore, provides a more realistic measure of efficiency, since compute is ultimately priced in time rather than FLOPs. We measure the average latency on AIME2025 using a single NVIDIA H100 SXM5 GPU. We leverage vLLM (Kwon et al., 2023) and its many optimizations, including dynamic batching and prefix caching, to reflect real-world usage. Similar to Section 3.1.1, we time the generation of N ∈ 2, 4, 8, 16, 32, 64, 128 candidate solutions with DeepSeek-R1-Distill-Qwen-32B and the verification of the solutions with our trained discriminative verifier and Heimdall (Shi & Jin, 2025). Latency results are reported in Table 2. N = 1 N = 2 N = 4 N = 8 N = 16 N = 32 N = 64 N = 128 Repeated Sampling 273.1 276.6 288.4 448.4 782.9 1434.0 2815.5 5514.1 Discriminative 0.05 0.10 0.21 0.42 0.83 1.66 3.32 6.65 Generative (M = 2) 552.0 558.8 656.6 992.8 1825.7 3423.7 6668.8 13160.7 Table 2: The average wall-clock time (s) for repeatedly sampling N candidate solutions, as well as the average time to verify each candidate solution using discriminative and generative verification. The latency results largely mirror the FLOP-based analysis in Section 3.1.1, but with even larger differences between discriminative and generative verification. For instance, verifying 32 solutions sampled from DeepSeek-R1-Distill-Qwen-32B with our 1.5B discriminative verifier takes only 1.66 seconds, just 0.1% of the generation time. This is an order of magnitude smaller than what FLOP estimates suggested (2.0%), reflecting the fact that discriminative verifiers avoid the decoding bottlenecks that dominate wall-clock latency. Generative verification, by contrast, becomes even less practical under a latency perspective. Just verifying 32 candidate solutions with Heimdall at M = 2 takes 3423.7 seconds, over twice the time needed for solution generation, and more than 2000× the cost of discriminative verification. These inefficiencies stem from the need to generate long CoTs for each verification, which incur memory-bandwidth and KV cache overheads not reflected in theoretical FLOP estimates. Indeed, as 7 Preprint shown in Figure 1, hybrid discriminative verification methods dominate generative verification for all inference budgets shorter than 22.5 minutes (1350s) on AIME2025 with M = 2. This threshold is dependent on a range of factors, including the number of verifications per solution (M), the specific solver, the size of the verifier, and the dataset, but it highlights a broader trend: under realistic latency constraints, discriminative verification almost always gives better performance than generative verification. In summary, while the FLOP analysis in Section 3.1.1 already showed discriminative verification to be more efficient, latency measurements make the contrast even sharper: discriminative verification achieves consistent gains for virtually the same latency as SC, whereas generative verification quickly becomes impractical as N or M grows. 3.2 SCALING MODEL SIZE FOR DISCRIMINATIVE VERIFICATION Here, we analyze how discriminative verification techniques scale with respect to the size of the solver model, which generates the candidate solutions. To do so, we generate 128 candidate solutions per question in AIME2024 and AIME2025 using DeepSeek-R1-Distill-Qwen models with 1.5B, 7B, 14B, and 32B parameters, and verify each using our trained discriminative verifier. We plot the aggregate results in Figure 4 for several values of N. Pass@N SC@N BoN@N WSC@N PV@N 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (a) N = 8 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (b) N = 16 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (c) N = 32 1.5B 7B 14B 32B Solver Size 40 50 60 70 80 90 ACC on AIME (d) N = 64 Figure 4: Accuracy rates on AIME 2024/2025 for various discriminative verification methods across four solver sizes for several values of N. Pass@N and SC@N are included as baselines. We observe that increasing the solver's size produces consistent but diminishing performance increases on AIME. Specifically, hybrid methods like WSC and PV scale similarly to SC as the size of the solver is increased, with WSC and PV maintaining a consistent edge over SC regardless of the solver's size, across various Ns. BoN, on the other hand, exhibits poor scaling behavior: when N is small, BoN only slightly underperforms SC, but when N is large, BoN trails far behind. These results suggest that hybrid approaches can effectively mitigate BoN's long-tail vulnerability. 3.3 INFERENCE-TIME SCALING OF DISCRIMINATIVE VERIFICATION We study how each discriminative verification method benefits from increased inference-time compute along two axes: the number of candidate solutions sampled from the solver and the reasoning budget allocated to the solver. First, we observe that scaling N produces consistent but diminishing improvements in performance on AIME (i.e., Pass@N increases). BoN struggles to benefit from scaling N, with performance quickly saturating and even falling. On the other hand, hybrid approaches like WSC and PV show consistent improvements as more solutions are sampled, maintaining a 2.2% to 5.6% edge over SC as N is scaled from 2 to 128. On AIME2024, WSC and PV boost the accuracy of DeepSeek-R1-Distill-Qwen-32B from 66.8% to 79.7% with only 4 candidate solutions, matching the performance of o3-mini (medium) or DeepSeek-R1, and outperforming SC by 3.7%. To control the reasoning budget, we use budget forcing (Muennighoff et al., 2025) and truncate the candidate solutions T ∈{0, 512, 1024, 2048, 4096, 8192, 16384} tokens after the opening think tag, manually append the closing think tag, then allow the model to continue generating its final answer. In doing so, we collect solutions under constrained reasoning budgets. We observe that even as the 8 Preprint Pass@N SC@N BoN@N WSC@N PV@N 20 21 22 23 24 25 26 27 Number of solutions sampled (N) 60 65 70 75 80 85 90 ACC on AIME2024 DeepSeekR1 o3-mini (medium) o3-mini (high) o3-mini (low) 0 29 210 211 212 213 214 Reasoning Length 20 30 40 50 60 70 80 90 ACC on AIME2024 Figure 5: Left: Unlike BoN, hybrid techniques show consistent but diminishing improvements on AIME2024 from increasing the number of candidate results N sampled from DeepSeek-R1Distill-Qwen-32B. Right: The performance of DeepSeek-R1-Distill-Qwen-32B on AIME2024 scales logarithmically with the reasoning budget regardless of verification method. Here, N = 32. reasoning budget is scaled from 0 to 16k tokens, WSC and PV maintain an edge over SC, even while BoN falls off, showcasing the reliability of hybrid verification methods under various constraints. 4 RELATED WORK LLM-based verifiers can be broadly categorized into generative and discriminative approaches. Generative verifiers use large language models as judges that assess the correctness or quality of outputs by generating natural language rationales. A growing body of work explores this direction, employing LLMs as judges for modeling human preferences (Dubois et al., 2024; Zheng et al., 2024; Li et al., 2024; Wang et al., 2023c; Kim et al., 2023; 2024; Li et al., 2023; Zhu et al., 2023b; Mahan et al., 2024), or as verifiers for evaluating solution correctness in reasoning tasks (Zhang et al., 2024; Singhi et al., 2025; Shi & Jin, 2025; Saha et al., 2025). In contrast, discriminative verifiers, such as reward models, assign scalar scores to candidate responses based on human preference data (Christiano et al., 2017; Ziegler et al., 2019; Zhu et al., 2023a; Liu & Zeng, 2024; Wang et al., 2024; Park et al., 2024; Han et al., 2024). These models are central to reinforcement learning from human feedback and are also used to rank or select responses in BoN inference settings (Lightman et al., 2023; Wang et al., 2023a; Luo et al., 2024; Saunders et al., 2022; Uesato et al., 2022; Yu et al., 2024). Together, generative and discriminative verifiers provide complementary paradigms for evaluating, selecting, and aligning LLM outputs at inference time. A substantial body of work has investigated improving the mathematical reasoning capabilities of LLMs through prompting (Wei et al., 2022; Kojima et al., 2022; Crispino et al., 2024), training (Cobbe et al., 2021; Guan et al., 2025; Hosseini et al., 2024; Lightman et al., 2023; Pang et al., 2024; Ye et al., 2025; Luo et al., 2025a;b; Tan et al., 2025a), and test-time scaling (Snell et al., 2024; Brown et al., 2024; Setlur et al., 2024). Following the release of o1 (OpenAI, 2024), there has been a surge of interest in test-time scaling methods for LLM reasoning (Snell et al., 2024; Brown et al., 2024; Singhi et al., 2025; Zhao et al., 2025), which improve performance by sampling multiple solutions and aggregating them via majority voting or LLM-based verification. Our work builds on this line of research, demonstrating that discriminative LLM verifiers can serve as an effective and efficient verification approach for test-time scaling in complex math reasoning tasks. 9 Preprint 5 CONCLUSION We studied hybrid discriminative verification as a practical alternative to costly generative approaches. Discriminative methods achieve comparable or superior accuracy in practical compute regimes, where the high cost of CoT generation limits generative approaches. Our results highlight hybrid discriminative verification as the more efficient choice for realistic test-time scaling. REFERENCES R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3-4):324-345, December 1952. Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint , 2024. Eugene Charniak and Mark Johnson. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Kevin Knight, Hwee Tou Ng, and Kemal Oflazer (eds.), Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pp. 173-180, Ann Arbor, Michigan, June 2005. Association for Computational Linguistics. 19862. URL https://aclanthology.org/P05-1022/. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint , 2021. Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, and Chenguang Wang. Agent instructs large language models to be general zero-shot reasoners. In Proceedings of the 41st International Conference on Machine Learning, pp. 9458-9549, 2024. DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y. K. Li, Y. Q. Wang, Y. X. Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, 10 Preprint Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948. Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems, 36, 2024. Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. arXiv preprint , 2025. Seungju Han, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, and Nouha Dziri. Wildguard: Open one-stop moderation tools for safety risks, jailbreaks, and refusals of llms, 2024. URL https://arxiv.org/abs/2406.18495. Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint , 2024. Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. Prometheus: Inducing fine-grained evaluation capability in language models. In The Twelfth International Conference on Learning Representations, 2023. Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. Prometheus 2: An open source language model specialized in evaluating other language models. arXiv preprint , 2024. Jan Hendrik Kirchner, Yining Chen, Harri Edwards, Jan Leike, Nat McAleese, and Yuri Burda. Prover-verifier games improve legibility of llm outputs, 2024. URL https://arxiv.org/ab s/2407.13692. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35: 22199-22213, 2022. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023. Hynek Kydlíˇcek. Math-Verify: Math Verification Library. https://github.com/hugging face/math-verify, 2025. Version 0.6.1, Apache-2.0 license. Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Costa Huang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong, Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. Numinamath. [https://huggingface.co/AI-M O/NuminaMath-CoT](https://github.com/project-numina/aimo-progres s-prize/blob/main/report/numina_dataset.pdf), 2024. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. Generative judge for evaluating alignment. arXiv preprint , 2023. Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E Gonzalez, and Ion Stoica. From crowdsourced data to high-quality benchmarks: Arena-hard and benchbuilder pipeline. arXiv preprint , 2024. Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023. 11 Preprint Chris Yuhao Liu and Liang Zeng. Skywork reward model series. https://huggingface.co /Skywork, September 2024. URL https://huggingface.co/Skywork. Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint , 2024. Michael Luo, Sijun Tan, Roy Huang, Ameen Patel, Alpay Ariyak, Qingyang Wu, Xiaoxiang Shi, Rachel Xin, Colin Cai, Maurice Weber, Ce Zhang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepcoder: A fully open-source 14b coder at o3-mini level. https://pretty-radio-b75 .notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-min i-Level-1cf81902c14680b3bee5eb349a512a51, 2025a. Notion Blog. Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepscaler: Surpassing o1-preview with a 1.5b model by scaling rl. https://pretty-radio-b75.notion.site/DeepS caleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-196 81902c1468005bed8ca303013a4e2, 2025b. Notion Blog. Dakota Mahan, Duy Van Phung, Rafael Rafailov, Chase Blagden, Nathan Lile, Louis Castricato, Jan-Philipp Fränken, Chelsea Finn, and Alon Albalak. Generative reward models. arXiv preprint , 2024. Justus Mattern, Sami Jaghouar, Manveer Basra, Jannik Straube, Matthew Di Ferrante, Felix Gabriel, Jack Min Ong, Vincent Weisser, and Johannes Hagemann. Synthetic-1: Two million collaboratively generated reasoning traces from deepseek-r1, 2025. URL https://www.primeintellect .ai/blog/synthetic-1-release. Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. Orca-math: Unlocking the potential of slms in grade school math, 2024. Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint , 2025. OpenAI. Learning to reason with language models. https://openai.com/index/learnin g-to-reason-with-llms/, 2024. Accessed: 2025-04-25. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback, 2022. URL https://arxiv.org/abs/2203.02155. Richard Yuanzhe Pang, Weizhe Yuan, He He, Kyunghyun Cho, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. Advances in Neural Information Processing Systems, 37:116617-116637, 2024. Junsoo Park, Seungyeon Jwa, Meiying Ren, Daeyoung Kim, and Sanghyuk Choi. Offsetbias: Leveraging debiased data for tuning evaluators, 2024. Qwen, :, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report, 2025. URL https://arxiv.org/abs/2412.15115. David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A Graduate-Level Google-Proof Q&A Benchmark, 2023. URL https://arxiv.org/abs/2311.12022. 12 Preprint Swarnadeep Saha, Xian Li, Marjan Ghazvininejad, Jason Weston, and Tianlu Wang. Learning to plan & reason for evaluation with thinking-llm-as-a-judge. arXiv preprint , 2025. William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. Self-critiquing models for assisting human evaluators. arXiv preprint , 2022. Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for llm reasoning. arXiv preprint , 2024. Wenlei Shi and Xing Jin. Heimdall: test-time scaling on the generative verification, 2025. URL https://arxiv.org/abs/2504.10337. Nishad Singhi, Hritik Bansal, Arian Hosseini, Aditya Grover, Kai-Wei Chang, Marcus Rohrbach, and Anna Rohrbach. When to solve, when to verify: Compute-optimal problem solving and generative verification for llm reasoning, 2025. URL https://arxiv.org/abs/2504.01005. Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint , 2024. Sijun Tan, Michael Luo, Colin Cai, Tarun Venkat, Kyle Montgomery, Aaron Hao, Tianhao Wu, Arnav Balyan, Manan Roongta, Chenguang Wang, Li Erran Li, Raluca Ada Popa, and Ion Stoica. rllm: A framework for post-training language agents. https://pretty-radio-b75.notion. site/rLLM-A-Framework-for-Post-Training-Language-Agents-21b8190 2c146819db63cd98a54ba5f31, 2025a. Notion Blog. Sijun Tan, Siyuan Zhuang, Kyle Montgomery, William Y. Tang, Alejandro Cuadron, Chenguang Wang, Raluca Ada Popa, and Ion Stoica. Judgebench: A benchmark for evaluating llm-based judges, 2025b. URL https://arxiv.org/abs/2410.12784. Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https://qwenlm.github.io/blog/qwq-32b-preview/. Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, March 2025. URL https://qwenlm.github.io/blog/qwq-32b/. Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process-and outcome-based feedback. arXiv preprint , 2022. Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXiv preprint , 2023a. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023b. URL https://arxiv.org/abs/2203.11171. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, et al. Pandalm: An automatic evaluation benchmark for llm instruction tuning optimization. arXiv preprint , 2023c. Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, and Oleksii Kuchaiev. Helpsteer2: Open-source dataset for training top-performing reward models, 2024. URL https://arxiv.org/abs/2406.08673. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824-24837, 2022. Sean Welleck, Amanda Bertsch, Matthew Finlayson, Hailey Schoelkopf, Alex Xie, Graham Neubig, Ilia Kulikov, and Zaid Harchaoui. From decoding to meta-generation: Inference-time algorithms for large language models. arXiv preprint , 2024. 13 Preprint Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid ShwartzZiv, Neel Jain, Khalid Saifullah, Siddartha Naidu, et al. Livebench: A challenging, contaminationfree llm benchmark. arXiv preprint , 2024. Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid ShwartzZiv, Neel Jain, Khalid Saifullah, Sreemanti Dey, Shubh-Agrawal, Sandeep Singh Sandha, Siddartha Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, and Micah Goldblum. Livebench: A challenging, contamination-limited llm benchmark, 2025. URL https://arxiv. org/abs/2406.19314. An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, and Zhenru Zhang. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement, 2024. URL https://arxiv.org/abs/2409.12122. Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint , 2025. Fei Yu, Anningzhe Gao, and Benyou Wang. Ovm, outcome-supervised value models for planning in mathematical reasoning. In Findings of the Association for Computational Linguistics: NAACL 2024, pp. 858-875, 2024. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint , 2024. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction, 2025. URL https://arxiv. org/abs/2408.15240. Eric Zhao, Pranjal Awasthi, and Sreenivas Gollapudi. Sample, scrutinize and scale: Effective inference-time search by scaling verification, 2025. URL https://arxiv.org/abs/2502 .01839. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36, 2024. Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. Starling-7b: Improving llm helpfulness & harmlessness with rlaif, November 2023a. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. Judgelm: Fine-tuned large language models are scalable judges. arXiv preprint , 2023b. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint , 2019. A ALGORITHMS Algorithm 1 Self-Consistency (SC@N) Require: problem Q, solver LM, slate size N 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 2: Group Answers 3: for each cluster Ca do 4: na ←|Ca| 5: a∗←arg maxa na ▷Stage 3: Plurality Vote 6: return a∗ 14 Preprint Algorithm 2 Best-of-N (BoN@N) Require: problem Q, solver LM, slate size N, verifier V 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: Verifications ←{ ri = V (si) }N i=1 ▷Stage 2: Verify Candidates 3: i∗←arg maxi∈{1,...,N} ri ▷Stage 3: Select Highest-Scoring Solution 4: a∗←Ans(si∗) ▷Stage 4: Extract Final Answer 5: return a∗ Algorithm 3 Weighted Self-Consistency (WSC@N) Require: problem Q, solver LM, slate size N, verifier V 1: Candidates ←{si}N i=1 ∼LM(Q) Stage 1: Generate Candidates 2: Verifications ←{ ri = V (si) }N i=1 ▷Stage 2: Verify Candidates 3: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 3: Group Answers 4: for each cluster Ca do 5: Wa ←P i∈Ca ri 6: a∗←arg maxa Wa Stage 4: Select Highest-Weight Answer 7: return a∗ Algorithm 4 Pessimistic Verification (PV@N) Require: problem Q, solver LM, slate size N, verifier V , penalty weight α 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: Verifications ←{ ri = V (si)}N i=1 ▷Stage 2: Verify Candidates 3: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 3: Group Answers 4: for each cluster Ca do 5: na ←|Ca| 6: ̄r(a) ← 1 na P i∈Ca ri 7: ψa ← ln N na+1 8: a∗←arg maxa [ ̄r(a) -α ψa ] ▷Stage 4: Select Best Answer 9: return a∗ Algorithm 5 Generative Pessimistic Verification (GPV@N, M) Require: problem Q, solver LM, slate size N, generative verifier V , # of verifications M, penalty weight α 1: Candidates ←{si}N i=1 ∼LM(Q) ▷Stage 1: Generate Candidates 2: for i = 1 to N do ▷Stage 2: Generative Verifications (repeat M times) 3: for m = 1 to M do 4: (CoTi,m, ri,m) ←V (si) 5: ̃ri ← 1 M PM m=1 ri,m 6: Extract final answers {ai}N i=1 and partition into clusters {Ca} by a Stage 3: Group Answers 7: for each cluster Ca do 8: na ←|Ca| 9: ̄r(a) ← 1 na P i∈Ca ̃ri 10: ψa ←ln(NM) naM+1 11: a∗←arg maxa [ ̄r(a) -α ψa ] ▷Stage 4: Select Best Answer 12: return a∗ B ADDITIONAL TECHNICAL DETAILS Our training data is based on a subset of Numina-Math (LI et al., 2024). DeepSeek-R1 responses were collected from Mattern et al. (2025). Meanwhile, the majority of the responses from six DeepSeek15 Preprint R1-Distill models, DeepScaleR-1.5B-Preview, and the two QwQ models were generated on a local cluster of NVIDIA A100 GPUs, with a minority coming from 3rd party API providers. Our evaluation datasets are AIME2024, AIME2025, LiveBench-Math (White et al., 2024), and GPQA (Rein et al., 2023). Combined, they include 596 questions. We decontaminate the training dataset by excluding any problem whose fuzzy-match similarity to an entry in our evaluation sets exceeds 80. For each AIME problem, we sample 128 candidate solutions, while on LiveBench Math and GPQA, we sample only 64 candidate solutions. When rolling out solutions during training and evaluation, we follow the model's usage recommendations, namely prefilling the opening token, sampling with a temperature of 0.6 and a top-p value of 0.95, and instructing the model to output its final answer within . Our 1.5B discriminative verifier was trained for a single epoch (11,420 response groups) on 4xA100 SXM4 GPUs using the hyperparameters listed in Table 3. Hyper-parameter Value Global batch size 32 LR 5×10-5 LR scheduler Linear with 20 warmup steps Optimizer (AdamW) β1 = 0.9, β2 = 0.999 λ 0.01 Max gradient norm 1.0 Table 3: Hyper-parameters for training discriminative verifiers. C ADDITIONAL ABLATION EXPERIMENTS In addition to our main experiments, we include two further ablations conducted on a held-out validation set. To construct this set, we removed 250 problems from the training dataset and generated 32 responses per problem with 1.5B, 7B, 14B, and 32B variants of deepseek-ai/DeepSeek-R1-DistillQwen. We discarded items where all sampled responses were correct or all incorrect, leaving 691 problems for validation. This setup ensures that both correct and incorrect responses are available, making it suitable for evaluating the performance of a verifier. C.1 EFFECT OF THE PESSIMISM WEIGHT α 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 55 60 65 70 75 80 ACC on Val Set N=4 N=8 N=16 N=32 0.0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 40 45 50 55 60 65 70 75 80 ACC on Val Set 1.5B 7B 14B 32B Figure 6: Left: Validation accuracy of PV as a function of the pessimism weight α for various numbers of independent candidate solutions (N). Right: Validation accuracy of PV as a function of the pessimism weight α for various-sized solver models. We first ablate the effect of the pessimism weight α in pessimistic verification (PV). As shown in Figure 6 (left), which only includes 147 response groups generated by deepseek-ai/DeepSeekR1-Distill-Qwen-32B, performance peaks around α ≈0.5 for N ∈4, 8, 16, 32 and slowly decays. 16 Preprint 21 22 23 24 25 26 27 Number of solutions sampled (N) 65 70 75 80 85 90 95 100 ACC on Val Set Pass@N SC@N BoN@N WSC@N PV@N BoN@N (w/ reasoning) WSC@N (w/ reasoning) PV@N (w/ reasoning) Figure 7: Validation accuracy on the held-out set when including vs. excluding reasoning content in verifier inputs for both training and inference. Figure 6 (right) demonstrates that α = 0.5 is a reasonable choice for 4 solver models of various sizes. Based on this result, we set α = 0.5 for all main experiments. Notably, in Shi & Jin (2025), the authors use an α = 0.1 for experiments with Heimdall. This makes sense: with a stronger verifier and sufficiently large M, you can reduce α and put more weight on the verifier. C.2 EFFECT OF REASONING CONTENT ON THE VERIFIER We next ablate whether to pass the reasoning content (the tokens between and ) to the verifier during training and inference. Our main experiments exclude reasoning, i.e., the verifier observes only the final solution string. For comparison, we trained and evaluated a second verifier that retains the reasoning content. As shown in Figure 7, including reasoning consistently degrades performance across all selection methods: BoN, WSC, and PV all achieve lower accuracy when reasoning traces are present. This suggests that the additional reasoning text introduces noise rather than a useful signal, reinforcing our choice to exclude it during both training and evaluation. 17
2510.14908
Fermi Bubbles Without AGN: γ-Ray Bubbles in MHD Galaxy Formation Simulations with Full Cosmic Ray Spectra Isabel S. Sands, Philip F. Hopkins, and Sam B. Ponnada TAPIR, California Institute of Technology, Mailcode 350-17, Pasadena, CA 91125, USA (Dated: October 17, 2025) For the first time, we show in MHD simulations with cosmological initial conditions that bi-lobed gamma-ray outflows similar to the Fermi bubbles can form from star formation and supernova feed- back, without involvement from active galactic nuclei (AGN). We use simulations run with full MHD and dynamical, on-the-fly multi-species cosmic ray transport in MeV-TeV energy bins to model gamma-ray emission in Milky Way-mass spiral galaxies from neutral pion decay, relativistic non-thermal Bremsstrahlung, and inverse Compton scattering. We find that these γ-ray outflows are present in all three Milky-Way mass simulated galaxies. The amplitude, shape, and the compo- sition of the gamma-ray spectrum of these bubbles fluctuates over time, with lepton-dominated and hadron-dominated phases. Spectra in which there is O(1) more γ-ray flux from inverse Compton scattering than neutral pion decay are a good fit to the measured Fermi-LAT spectrum. Addi- tionally, these simulations predict multi-wavelength features in soft x-rays and synchrotron radio, potentially providing new observational signatures that can connect the circumgalactic medium to cosmic ray physics and activity in the galactic center. Introduction– There is abundant observational evi- dence across multiple wavelengths of outflows from the central regions of galaxies [1]. These outflows are typi- cally thought to arise from clustered supernovae in galax- ies with active star formation, or from radiative or kinetic modes of active galactic nucleus (AGN) feedback, and have been observed in a variety of galaxies across a range of masses [2]. Because observables of these outflows span the electromagnetic spectrum, from diffuse radio emis- sion to x-ray and γ-ray bubbles, such features are ideal targets to test different feedback mechanisms that can regulate star formation in galaxies. Within the Milky Way (MW), the Fermi Large Area Telescope (LAT) has detected a pair of bi-lobed, bubble- like features in γ-ray emission above and below the plane of the MW’s galactic disk, spanning approximately 10 kpc and originating from the galactic center [3]. The γ-ray bubbles are accompanied by a pair of larger x- ray bubbles detected by ROSAT and eROSITA, approx- imately 12-14 kpc in length [4]. The origins of both the Fermi and eROSITA bubbles remain elusive: whether they are driven by AGN or star formation, a single event or multiple episodes in which energy and material are ejected from the galaxy, and whether the γ-ray emission is from hadronic or leptonic processes [5–8]. Most past studies of the origin and nature of the Fermi bubbles have relied on fitting the Fermi-LAT data to static templates generated by cosmic ray propagation codes or observed gas maps [3, 9], or idealized hydrody- namic simulations [6–8, 10, 11]. In the case of the former, fits to static templates are inherently unable to probe the time evolution of these structures. On the other hand, idealized hydrodynamic simulations can study the growth over time of outflows fueled by a specific pro- cess (e.g., stellar winds or an AGN burst) but typically impose a galactic potential, make simplifying assump- tions about magnetic field structure and the hot circum- galactic medium (CGM), and do not evolve the CRs in a cosmological setting. It has only recently become possible to simulate CRs alongside full magnetohydro- dynamics in cosmological simulations [12–21]. Most of these simulations evolve CRs as a single fluid element in a single energy bin. Several of these simulations have shown that CR pressure allows for the formation of kpc to Mpc-scale gas outflows, indicating that CR pressure may play an important role in the formation of the Fermi and eROSITA bubbles [22, 23]. More recently, new numerical methods have enabled the self-consistent evolution of individual CR species in magnetohydrodynamic galaxy formation simulations run from cosmological initial conditions, allowing for detailed studies of astro-particle phenomena and the forward- modeling of multi-wavelength observables arising from CR interactions with the interstellar medium (ISM) [24]. In this letter, we analyze the simulations introduced in [24], in which multi-species cosmic rays (electrons, positrons, protons, anti-protons, and heavier nuclei) are simulated alongside full magnetohydrodynamics. We find that, for our sample of three MW-mass galaxies, ex- tended γ-ray bubbles and halos are a ubiquitous feature that arise from the inverse Compton scattering of CR leptons injected by supernovae and fast stellar winds. The γ-ray spectrum of these bubbles is consistent with the Fermi-LAT observations, and we further find that these γ-ray features have counterparts in x-ray and syn- chrotron radio emission. This letter is structured as follows. We first provide a brief description of the simulations and the methods used to model γ-ray emission from the simulated CRs. We then show how different particle physics interactions con- tribute to the γ-ray emission, creating bi-lobed bubbles. We further demonstrate how the structure and compo- sition of these bubbles (i.e. whether the γ-ray emission is predominantly produced by hadronic or leptonic in- teractions) changes over million-year (Myr) time scales. Finally, we show that the galactic outflows that lead to arXiv:2510.14908v1 [astro-ph.HE] 16 Oct 2025 2 m12f m12i m12m 6 5 4 3 2 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] -ray Emission Maps Pion decay Bremsstrahlung Inverse Compton 5 4 3 2 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] FIG. 1. Top: total flux of 5-50 GeV γ-rays for the three simulated MW analogs in our sample. All galaxies show a distinct bi-lobed feature above and below the plane of the galactic disk; this feature is less pronounced for m12i, which has a warped disk and a strong magnetic field at its center, which leads to higher CR lepton loss rates than in m12f or m12m. Bottom: γ-ray emission in galaxy m12mfrom the three relevant CR interactions: neutral pion decay, relativistic Bremsstrahlung, and inverse Compton scattering. ICS is responsible for most of the bubble structure, with a slight hadronic contribution at lower latitudes. the γ-ray bubbles produce features in other wavelengths, including synchrotron radio and soft x-ray. This work marks the first time that Fermi bubble-like γ-ray features have been produced in simulations of MW-mass galaxies evolved from cosmological initial conditions. Methods: In this work, we analyze the same set of simulations first introduced in Hopkins et. al. 2022 [24]. These simulations are controlled re-starts of single-bin MHD-CR simulations evolved to late times, and were run with the GIZMO code and Feedback in Realistic En- vironments (FIRE) physics. There is feedback from type Ia and II supernovae, radiation (e.g., photo-electric and photo-ionization heating), and fast stellar winds from O/B and AGB star mass loss [25, 26]. We refer the reader to these works for details of numerical implementation, particularly [24] for details of the methods for CR trans- port. The simulations adopt a single power-law injection for the cosmic rays, with 10% of supernova and stellar wind shock energy going into CRs, and 2% of that into leptons. There are no black holes or AGN in these simu- lations; CRs are injected only from stellar sources. None of the parameters used in our simulations are tuned to match the MW or Fermi-LAT observations. We use lo- cal ISM observations from Voyager and AMS to set the relative normalizations of the cosmic ray spectra for in- dividual species at redshift z = 0.05. For our set of three realizations of MW-like galax- ies (m12f, m12i, and m12m), we model γ-ray emis- sion from neutral pion decay, relativistic non-thermal Bremsstrahlung, and inverse Compton scattering (ICS) in post-processing, following the procedure described in [27]. We extend the emission for relativistic Bremsstrahlung and ICS to the x-ray regime, so we cover photons emitted from 0.1 keV to 100 GeV. We neglect x-ray emission from non-relativistic thermal Bremsstrahlung, as there is little hot gas in the re- 3 gion of interest, and past analysis has demonstrated that CR pressure in the CGM can suppress thermal Bremsstrahlung emission [28]. When calculating the γ- ray spectrum for the bubbles, we mask the central ±20◦ to exclude the galactic disk, and then select a conic region within a slope of ±45◦in latitude vs. longitude. Overview of results: All three of the galaxies in our sample form bi-lobed γ-ray bubbles (Fig. 1, top); these features are present at the beginning of the controlled restart, which indicates that cosmic ray pressure acting against thermal pressure in the CGM plays an important role in inflating the bubbles by acting against thermal pressure and allowing cool gas to expand [29]. Through- out the simulation, CR pressure is comparable to or ex- ceeds thermal pressure in the inner CGM. Similar to the Fermi bubbles, these features are relatively flat in surface brightness. The round “bubble” structure arises from the inverse Compton scattering of CR leptons. γ-ray emis- sion from Π0 decay and relativistic Bremsstrahlung are brightest within the plane of the galactic disk, but can also contribute to the emission from the bubbles (Fig. 1, bottom). The bubbles in galaxies m12f and m12m are brighter and have more structure than those in m12i. The fainter bubbles in m12i are likely attributable to the morphol- ogy of the galaxy, which features a strong magnetic field at the galactic center, and a warped inner disk that is nearly perpendicular to the outer disk. The strong mag- netic field results in high CR lepton loss rates via syn- chrotron radiation, so fewer high-energy CR leptons es- cape the galactic center [27]. In contrast, m12f and m12m have more typical disk structure and weaker mag- netic fields, allowing CR leptons to stream into the halo. Both the amplitude and the shape of the γ-ray spectrum of the bubbles vary by order-of-magnitude amounts over the 460 Myr span of the simulations. The amplitude of the γ-ray spectrum is typically 10-100 times higher than the spectrum measured by Fermi-LAT, due to the comparatively higher rate of star formation (and thus supernovae) relative to the MW in the simulations. However, the shape of the γ-ray spectrum, while highly variable over time, is consistent with the Fermi-LAT measurement (Fig. 2, top). The flux from ICS is relatively constant over time, while flux from Π0 decay and relativistic Bremsstrahlung vary significantly (Fig. 2, bottom); the greater variability in the latter two processes is due to order-of-magnitude variations in gas density over time, while the cosmic ray flux and radiation background remain comparatively steady. The flux from relativistic Bremsstrahlung is typically subdominant to flux from both ICS and Π0 decay. When the γ-ray flux from ICS exceeds the flux from Π0 decay from 0.1 −100 GeV, the γ-ray spectrum is flatter, and closer in shape to the spectrum measured by Fermi-LAT. When the flux from Π0 decay exceeds the flux from ICS, then the γ-ray spectrum has a pronounced peak at Eγ ∼0.3 GeV, with a steeper tail. 10 1 100 101 E [GeV] 10 7 10 6 E2 × flux [GeV cm 2 s 1 sr 1] -ray Spectrum of Bubbles, Spread over Time m12f m12i m12m Fermi-LAT 10 1 100 101 E [GeV] 10 8 10 7 10 6 10 5 E2 × flux [GeV cm 2 s 1 sr 1] -ray Spectrum Components for m12m Bubbles Pion decay Bremsstrahlung Inverse Compton FIG. 2. Top: the spread of γ-ray spectra from all CR inter- actions across all snapshots in the simulation, spaced between 460 Myr and 0 Myr in lookback time. A global renormaliza- tion to the spectrum amplitude has been applied to compare the spectra from simulations to each other and to the Fermi- LAT measurement. While the spectra for galaxies m12f and m12m is consistent with Fermi-LAT, m12i has a steeper tail at all times, due to high loss rates for CR leptons in the galac- tic center. Bottom: the components of the γ-ray spectra for the bubbles in galaxy m12m. γ-ray emission is produced by Π0 decay, relativistic Bremsstrahlung, and inverse Compton scattering. The flux from inverse Compton is significantly less variable over time than the flux from Π0 decay and relativistic Bremsstrahlung, both of which are proportional to gas den- sity. Growth and evolution of bubbles: As there are no AGN in these simulations, the γ-ray bubbles are sourced by star formation, supernova feedback, and stel- lar winds, leading to a self-regulating “galactic fountain” [30–33]. In all three simulated galaxies, there is a rel- atively small burst of star formation between 300 and 500 Myr ago, blowing gas and CRs out of the galactic center. Over a span of ∼100 Myr, this gas falls back into the galactic disk and interacts with CR hadrons and leptons. Additional gas is supplied by supernova-driven outflows over the subsequent duration of the simulation. Figure 3 shows the γ-ray spectra for the bubbles in galaxy m12m at selected snapshots over this period of time. At 340 Myr, the spectrum is lepton-dominated and in good agreement with the Fermi-LAT measurement. At 320 4 10 1 100 101 E [GeV] 10 7 10 6 E2 × flux [GeV cm 2 s 1 sr 1] -ray spectrum for m12m bubbles over time Fermi-LAT 240 260 280 300 320 340 Lookback time [Myr] 10 1 100 101 E [GeV] 10 1 100 Fhadron/ Flepton Hadron-Dominated Lepton-Dominated Most similar to Fermi-LAT Ratio of -ray flux from CR hadrons to flux from CR leptons 240 260 280 300 320 340 Lookback time [Myr] FIG. 3. Left: The γ-ray spectrum for the bubbles in galaxy m12m at select snapshots between 340 and 240 Myr in lookback time. At 340 Myr, the shape of the spectrum is in good agreement with the measured spectrum of the Fermi bubbles. At 320 Myr, an inflow of gas causes a sharp increase in the spectrum, as well as a more peaked shape characteristic of hadronic spectra. After the gas settles, the spectra become flatter, until a cluster of supernovae go off at 240 Myr. Right: The ratio of γ-ray flux from Π0 decay to flux from ICS for the same snapshots. The snapshot where the γ-ray spectrum is most similar in shape to the measured Fermi-LAT spectrum is indicated by the thick magenta line. Spectra corresponding to gas outflows/ inflows are more hadronic, while those more consistent with the Fermi-LAT measurement are lepton-dominated. However, there are times at which the γ-ray spectrum for the bubbles are more leptonic than the measured Fermi-LAT spectrum. Myr, infalling gas interacts with the CRs, resulting in greater γ-ray flux from Π0 decay and a higher-amplitude, hadron-dominated spectrum. Eventually, the gas settles back into the disk, and the γ-ray spectrum of the bub- bles becomes more leptonic. As this occurs, both the infalling gas and the compression of gas already in the disk from previous supernovae trigger further star for- mation [34]. Finally, another cluster of supernovae goes off at 240 Myr, causing the γ-ray spectrum to be more hadronic once again. The most well-defined bubble structures form when there is little gas present from either inflows or outflows. The γ-ray spectrum calculated at such snapshots is pre- dominantly leptonic, dominated by flux from ICS (Fig. 3, right). The time the bubbles take to go through a “cy- cle” in which an ICS-dominated bubble is inflated and then deflated by CRs interacting with an influx of gas is ∼50 −100 Myr. In galaxies m12f and m12m, sev- eral of these cycles occur, with oscillations between ICS- dominated and hadron-dominated γ-ray spectra for the duration of the simulation. In contrast, the γ-ray spec- trum of the bubbles in galaxy m12i, which has a higher, relatively constant star formation rate for most of the duration of the simulation, become increasingly hadronic over time, particularly as strong magnetic fields in the galactic center deplete the CR leptons [27]. Multi-wavelength observations: x-ray and syn- chrotron radio: The Fermi bubbles have associated observable features in other wavelengths, including the eROSITA bubbles (soft x-ray) [4], the WMAP haze (mi- crowave and radio) [35–37], and Loop 1 and the South Po- lar Spur (radio) [38, 39]. We find that the bubbles in our simulations have counterparts in x-ray and synchrotron radio emission (Fig. 5). While the dominant morphol- ogy of the emission at these wavelengths is a diffuse halo, some geometry of the bubbles is present (particularly in southern hemisphere in the soft x-ray). The edge of the γ-ray bubbles coincides with the edge of the x-ray halo. Both the synchrotron radio and the x-ray emission are bright enough to be feasibly detected by current and next-generation radio and x-ray telescopes, respectively. We leave further exploration of multi-wavelength observ- ables for galactic outflows in spectral MHD-CR simula- tions to future work. Discussion: This work demonstrates that feedback from supernovae and stellar winds can give rise to Fermi bubble-like γ-ray outflows, without invoking AGN feed- back or beyond standard model physics. Because γ-ray outflows and their counterparts in other wavelengths oc- cur in all three simulated galaxies in our sample, it is plausible that these features are very common, but diffi- cult to detect in galaxies beyond the MW and the Local Group. γ-ray bubbles that form by this mechanism are in best agreement with the observed Fermi bubble γ-ray spectrum when most of the γ-ray flux is produced via ICS, with a ∼10 −20% contribution from Π0 decay and relativistic Bremsstrahlung. These structures are highly dynamical, evolving on the scale of tens of Myrs, over which their composition can change dramatically. These results do not rule out the possibility that AGN- accelerated cosmic rays contribute to the formation of such structures. Rather, by including AGN feedback in future simulations, we can leverage the Fermi bubble observations as a constraint on models of CR transport and AGN physics. There are features of the Fermi and eROSITA bubbles (e.g., the hard edge to the bubble sur- face brightness, shape of the x-ray emission) that are not reproduced in some of these simulations. If such features occur in simulations with CR injection from both AGN feedback and star formation, or from AGN alone, they 5 354 Myr 331 Myr 309 Myr 6 5 4 3 2 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] m12m -ray emission over time 354 Myr 331 Myr 309 Myr 2 0 2 4 6 log10 NH [cm 3] m12m -ray gas density over time FIG. 4. γ-ray emission (top) and gas density (bottom) for galaxy m12m over a 45 Myr span. At 354 Myr, a clear bi-lobed γ-ray bubble is visible. At 331 Myr, the bubbles become brighter and less ordered as gas previously ejected by supernova feedback falls back into the disk. By 309 Myr, the gas has re-settled in the disk, and the distinct bi-lobed shape of the bubbles is visible once more. could help discriminate between these scenarios. Addi- tionally, forward-modeling of γ-ray and x-ray observables in the simulation can provide insight into whether the hard edge of the Fermi bubbles arises from physics (e.g., a shock) or observational effects, such as geometry or modeling of foregrounds and backgrounds. Beyond the formation of the Fermi bubbles, this work has important implications for and connections to other facets of galaxy formation and astro-particle physics. If γ-ray bubbles typically form via stellar feedback-driven outflows, then the existence of the Fermi bubbles may indicate a relatively recent starburst (within last ∼100 Myr) in the MW’s galactic center. There are numer- ous observational and theoretical challenges for measur- ing the star formation history of the MW’s central regions [40]. Simulations have indicated that star formation in the galactic center of MW-like galaxies is typically bursty, with feedback-driven intervals in star formation rate of ∼10 −100 Myr [41, 42]. Historically, star formation in this region was thought to be quasi-continuous over the last 10 Gyr, but recent observations suggest that there was a burst of star formation in the MW’s nuclear stellar disk in the last 30 Myr [43]. Future observational surveys of young stars in the galactic center will help to constrain the star formation history in the galactic center, and can test stellar feedback as a formation mechanism for the Fermi bubbles. The processes that gives rise to the γ-ray bubbles in these simulations directly affect the CGM, where CR transport is both less constrained and has greater impact on galaxy formation than in the ISM. The γ-ray bubbles form through a cycle of galactic outflows and inflows that has hadron-dominated and lepton-dominated phases; the hadron-dominated phases are typically associated with galactic fountain flows. During the lepton-dominated phases, the γ-ray bubbles are expected to have coun- 6 8 7 6 5 4 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] FIG. 5. Observable features in other wavelengths related to the γ-ray bubbles in galaxy m12m. Top: Synchrotron radio emission at 20 GHz from CR leptons in galaxy m12m. The flux is roughly comparable to the observed flux of the WMAP haze, and there is structure similar to observed radio loops and spurs in the MW [35] Bottom: soft x-ray (0.1-2.4 keV) surface brightness from inverse Compton scattering of CR leptons in galaxy m12m. Contours for γ-ray flux are shown in the black lines, with each contour separated by 100.5 GeV cm−2 s−1 sr−1. The edges of the γ-ray bubbles and x-ray halo are roughly coincident. Both of these observable features arise from CR lepton interactions (for synchrotron, with magnetic fields, and for ICS, with radiation), and are bright enough to potentially be detected by current and future surveys. terparts in diffuse emission in other wavelengths, such as synchrotron radio and soft x-ray from ICS. Although detecting diffuse γ-ray emission around other galaxies is difficult due to the faintness of these features, there are better prospects for observing multi-wavelength counter- parts with next-generation observatories . Detecting the these multi-wavelength signatures of outflows in a larger sample of galaxies could determine how much time a MW-like galaxy spends in each phase and inform our understanding of the inner CGM. In addition, these re- sults support a growing body of work that indicates that, rather than all CRs free-streaming into the CGM and be- yond, a sizable fraction must inhabit a large ( ∼10 kpc) CR “scattering” halo– in these simulations, the CRs that produce γ-ray bubbles persist at radii of ∼5-10 kpc above the galactic disk over tens of Myrs. CR leptons that es- cape the smaller scattering halo can diffuse into the outer CGM, where, by inverse Compton scattering off of CMB photons, they can produce non-thermal x-ray emission at large radii (∼100 kpc) [44]. Uncovering the origins of the Fermi bubbles is a com- plex problem due to the multitude of astrophysical pro- cesses and scales involved. The ability to simulate fea- tures similar to the Fermi bubbles in full cosmological context and forward-model their emission from γ-ray to radio frequencies will enable new tests of their potential formation mechanisms. [1] K. C. Sarkar, The fermi/erosita bubbles: A look into the nuclear outflow from the milky way (2024), arXiv:2403.09824 [astro-ph.HE]. [2] S. Veilleux, R. Maiolino, A. D. Bolatto, and S. Aalto, Cool outflows in galaxies and their implications, The As- tronomy and Astrophysics Review 28, 10.1007/s00159- 019-0121-9 (2020). [3] M. Su, T. R. Slatyer, and D. P. Finkbeiner, Giant gamma-ray bubbles fromfermi-lat: Active galactic nu- cleus activity or bipolar galactic wind?, The Astrophysi- cal Journal 724, 1044–1082 (2010). [4] P. Predehl, R. A. Sunyaev, W. Becker, H. Brunner, R. Burenin, A. Bykov, A. Cherepashchuk, N. Chugai, E. Churazov, V. Doroshenko, N. Eismont, M. Freyberg, M. Gilfanov, F. Haberl, I. Khabibullin, R. Krivonos, C. Maitra, P. Medvedev, A. Merloni, K. Nandra, V. Nazarov, M. Pavlinsky, G. Ponti, J. S. Sanders, M. Sasaki, S. Sazonov, A. W. Strong, and J. Wilms, De- tection of large-scale X-ray bubbles in the Milky Way halo, Nature (London) 588, 227 (2020), arXiv:2012.05840 [astro-ph.GA]. [5] H.-Y. Yang, M. Ruszkowski, and E. Zweibel, Unveiling the origin of the fermi bubbles, Galaxies 6, 29 (2018). [6] S. Mondal, U. Keshet, K. C. Sarkar, and I. Gurwich, Fermi bubbles: the collimated outburst needed to ex- plain forward-shock edges, MNRAS 514, 2581 (2022), arXiv:2109.03834 [astro-ph.HE]. [7] H. Y. K. Yang, M. Ruszkowski, and E. G. Zweibel, Fermi and eROSITA bubbles as relics of the past activity of the Galaxy’s central black hole, Nature Astronomy 6, 584 (2022), arXiv:2203.02526 [astro-ph.HE]. [8] E. R. Owen and H. Y. K. Yang, Emission from hadronic and leptonic processes in galactic jet-driven bubbles, MN- RAS 516, 1539 (2022), arXiv:2111.01402 [astro-ph.HE]. [9] M. Ackermann et al., The Spectrum and Morphology of the Fermi Bubbles, Astrophys. J. 793, 64 (2014), arXiv:1407.7905 [astro-ph.HE]. [10] H. Y. K. Yang, M. Ruszkowski, P. M. Ricker, E. Zweibel, 7 and D. Lee, The Fermi Bubbles: Supersonic Active Galactic Nucleus Jets with Anisotropic Cosmic-Ray Dif- fusion, Astrophys. J. 761, 185 (2012), arXiv:1207.4185 [astro-ph.GA]. [11] K. C. Sarkar, B. B. Nath, and P. Sharma, Multiwave- length features of Fermi bubbles as signatures of a Galac- tic wind, MNRAS 453, 3827 (2015), arXiv:1505.03634 [astro-ph.GA]. [12] C. M. Booth, O. Agertz, A. V. Kravtsov, and N. Y. Gnedin, Simulations of disk galaxies with cosmic ray driven galactic winds, The Astrophysical Journal 777, L16 (2013). [13] M. Salem and G. L. Bryan, Cosmic ray driven outflows in global galaxy disc models, Monthly Notices of the Royal Astronomical Society 437, 3312–3330 (2013). [14] P. Girichidis, T. Naab, S. Walch, M. Hanasz, M.- M. Mac Low, J. P. Ostriker, A. Gatto, T. Peters, R. W¨unsch, S. C. O. Glover, R. S. Klessen, P. C. Clark, and C. Baczynski, Launching Cosmic-Ray-driven Out- flows from the Magnetized Interstellar Medium, The As- trophysical Journall 816, L19 (2016), arXiv:1509.07247 [astro-ph.GA]. [15] I. S. Butsky and T. R. Quinn, The role of cosmic- ray transport in shaping the simulated circumgalactic medium, The Astrophysical Journal 868, 108 (2018). [16] T. K. Chan, D. Kereˇs, P. F. Hopkins, E. Quataert, K.- Y. Su, C. C. Hayward, and C.-A. Faucher-Gigu`ere, Cos- mic ray feedback in the fire simulations: constraining cosmic ray propagation with gev gamma-ray emission, Monthly Notices of the Royal Astronomical Society 488, 3716–3744 (2019). [17] T. Buck, C. Pfrommer, R. Pakmor, R. J. J. Grand, and V. Springel, The effects of cosmic rays on the formation of milky way-mass galaxies in a cosmological context, Monthly Notices of the Royal Astronomical Society 497, 1712–1737 (2020). [18] M. Werhahn, C. Pfrommer, P. Girichidis, E. Puchwein, and R. Pakmor, Cosmic rays and non-thermal emission in simulated galaxies i. electron and proton spectra com- pared to voyager 1 data, Monthly Notices of the Royal Astronomical Society 505, 3273–3294 (2021). [19] C. Pfrommer, M. Werhahn, R. Pakmor, P. Girichidis, and C. M. Simpson, Simulating radio synchrotron emis- sion in star-forming galaxies: small-scale magnetic dy- namo and the origin of the far-infrared–radio correlation, Monthly Notices of the Royal Astronomical Society 515, 4229–4264 (2022). [20] M. Farcy, J. Rosdahl, Y. Dubois, J. Blaizot, and S. Martin-Alvarez, Radiation-magnetohydrodynamics simulations of cosmic ray feedback in disc galaxies, Monthly Notices of the Royal Astronomical Society 513, 5000–5019 (2022). [21] T. Thomas, C. Pfrommer, and R. Pakmor, Cosmic ray- driven galactic winds: transport modes of cosmic rays and alfv´en-wave dark regions (2022), arXiv:2203.12029 [astro-ph.GA]. [22] A. Pillepich, D. Sotillo-Ramos, R. Ramesh, D. Nelson, C. Engler, V. Rodriguez-Gomez, M. Fournier, M. Don- nari, V. Springel, and L. Hernquist, Milky Way and An- dromeda analogues from the TNG50 simulation, MNRAS 535, 1721 (2024), arXiv:2303.16217 [astro-ph.GA]. [23] R. Ramesh, D. Nelson, and P. Girichidis, IllustrisTNG plus cosmic rays with a simple transport model: From dwarfs to L∗galaxies, Astronomy & Astrophysics 699, A125 (2025), arXiv:2409.18238 [astro-ph.GA]. [24] P. F. Hopkins, I. S. Butsky, G. V. Panopoulou, S. Ji, E. Quataert, C.-A. Faucher-Gigu`ere, and D. Kereˇs, First predicted cosmic ray spectra, primary-to-secondary ra- tios, and ionization rates from mhd galaxy formation simulations, Monthly Notices of the Royal Astronomical Society 516, 3470–3514 (2022). [25] P. F. Hopkins et al., Fire-2 simulations: physics versus numerics in galaxy formation, Monthly Notices of the Royal Astronomical Society 480, 800–863 (2018). [26] P. F. Hopkins, A. Wetzel, C. Wheeler, R. Sanderson, M. Y. Grudi´c, O. Sameie, M. Boylan-Kolchin, M. Orr, X. Ma, C.-A. Faucher-Gigu`ere, D. Kereˇs, E. Quataert, K.-Y. Su, J. Moreno, R. Feldmann, J. S. Bullock, S. R. Loebman, D. Angl´es-Alc´azar, J. Stern, L. Necib, C. R. Choban, and C. C. Hayward, Fire-3: updated stellar evo- lution models, yields, and microphysics and fitting func- tions for applications in galaxy simulations, Monthly No- tices of the Royal Astronomical Society 519, 3154–3181 (2022). [27] I. S. Sands, P. F. Hopkins, S. B. Ponnada, D. Keres, L. Necib, and Y.-H. J. Lin, Galactic center gamma- ray emission in mhd galaxy formation simulations with full cosmic ray spectra (2025), arXiv:2509.18351 [astro- ph.HE]. [28] E. M. Silich, J. ZuHone, E. Bellomi, C. Hummels, B. Op- penheimer, P. F. Hopkins, C. Lochhaas, S. B. Ponnada, and A. Vikhlinin, X-ray emission signatures of galactic feedback in the hot circumgalactic medium: predictions from cosmological hydrodynamical simulations (2025), arXiv:2506.17440 [astro-ph.GA]. [29] S. Ji, T. K. Chan, C. B. Hummels, P. F. Hopkins, J. Stern, D. Kereˇs, E. Quataert, C.-A. Faucher-Gigu`ere, and N. Murray, Properties of the circumgalactic medium in cosmic ray-dominated galaxy haloes, Monthly No- tices of the Royal Astronomical Society 496, 4221–4238 (2020). [30] J. N. Bregman, The galactic fountain of high-velocity clouds., Astrophys. J. 236, 577 (1980). [31] F. Fraternali and J. J. Binney, Accretion of gas on to nearby spiral galaxies, MNRAS 386, 935 (2008), arXiv:0802.0496 [astro-ph]. [32] A. Marasco, V. P. Debattista, F. Fraternali, T. van der Hulst, J. Wadsley, T. Quinn, and R. Roˇskar, The effect of stellar feedback on a Milky Way-like galaxy and its gaseous halo, MNRAS 451, 4223 (2015), arXiv:1506.00652 [astro-ph.GA]. [33] F. Fraternali, Gas Accretion via Condensation and Foun- tains, in Gas Accretion onto Galaxies, Astrophysics and Space Science Library, Vol. 430, edited by A. Fox and R. Dav´e (2017) p. 323, arXiv:1612.00477 [astro-ph.GA]. [34] X. Ma, M. Y. Grudi´c, E. Quataert, P. F. Hopkins, C.- A. Faucher-Gigu`ere, M. Boylan-Kolchin, A. Wetzel, J.-h. Kim, N. Murray, and D. Kereˇs, Self-consistent proto- globular cluster formation in cosmological simulations of high-redshift galaxies, MNRAS 493, 4315 (2020), arXiv:1906.11261 [astro-ph.GA]. [35] D. P. Finkbeiner, Microwave Interstellar Medium Emis- sion Observed by the Wilkinson Microwave Anisotropy Probe, Astrophys. J. 614, 186 (2004), arXiv:astro- ph/0311547 [astro-ph]. [36] G. Dobler and D. P. Finkbeiner, Extended Anomalous Foreground Emission in the WMAP Three-Year Data, Astrophys. J. 680, 1222 (2008), arXiv:0712.1038 [astro- 8 ph]. [37] Planck Collaboration, P. A. R. Ade, et al., Planck inter- mediate results. IX. Detection of the Galactic haze with Planck, Astronomy & Astrophysics 554, A139 (2013), arXiv:1208.5483 [astro-ph.GA]. [38] M. Vidal, C. Dickinson, R. D. Davies, and J. P. Leahy, Polarized radio filaments outside the Galactic plane, MN- RAS 452, 656 (2015), arXiv:1410.4438 [astro-ph.GA]. [39] Planck Collaboration, P. A. R. Ade, et al., Planck 2015 results. XXV. Diffuse low-frequency Galactic fore- grounds, Astronomy & Astrophysics 594, A25 (2016), arXiv:1506.06660 [astro-ph.GA]. [40] J. D. Henshaw, A. T. Barnes, C. Battersby, A. Gins- burg, M. C. Sormani, and D. L. Walker, Star Formation in the Central Molecular Zone of the Milky Way, in Pro- tostars and Planets VII , Astronomical Society of the Pa- cific Conference Series, Vol. 534, edited by S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, and M. Tamura (2023) p. 83, arXiv:2203.11223 [astro-ph.GA]. [41] P. Torrey, P. F. Hopkins, C.-A. Faucher-Gigu`ere, M. Vo- gelsberger, E. Quataert, D. Kereˇs, and N. Murray, An instability of feedback-regulated star formation in galac- tic nuclei, MNRAS 467, 2301 (2017), arXiv:1601.07186 [astro-ph.GA]. [42] M. E. Orr, H. P. Hatchfield, C. Battersby, C. C. Hay- ward, P. F. Hopkins, A. Wetzel, S. M. Benincasa, S. R. Loebman, M. C. Sormani, and R. S. Klessen, Fiery Cores: Bursty and Smooth Star Formation Distributions across Galaxy Centers in Cosmological Zoom-in Simulations, ApJ Letters 908, L31 (2021), arXiv:2101.11034 [astro- ph.GA]. [43] F. Nogueras-Lara, R. Sch¨odel, A. T. Gallego-Calvente, E. Gallego-Cano, B. Shahzamanian, H. Dong, N. Neumayer, M. Hilker, F. Najarro, S. Nishiyama, A. Feldmeier-Krause, J. H. V. Girard, and S. Cassisi, Early formation and recent starburst activity in the nuclear disk of the milky way, Nature Astronomy 4, 377–381 (2019). [44] P. F. Hopkins, E. Quataert, S. B. Ponnada, and E. Silich, Cosmic rays masquerading as hot cgm gas: An inverse- compton origin for diffuse x-ray emission in the circum- galactic medium (2025), arXiv:2501.18696 [astro-ph.HE].
Fermi Bubbles Without AGN: γ-Ray Bubbles in MHD Galaxy Formation Simulations with Full Cosmic Ray Spectra Isabel S. Sands, Philip F. Hopkins, and Sam B. Ponnada TAPIR, California 350-17, Pasadena, CA 91125, USA (Dated: October 17, 2025) For the first time, we show in MHD simulations with cosmological initial conditions that bi-lobed gamma-ray outflows similar to the Fermi bubbles can form from star formation and supernova feedback, without involvement from active galactic nuclei (AGN). We use simulations run with full MHD and dynamical, on-the-fly multi-species cosmic ray transport in MeV-TeV energy bins to model gamma-ray emission in Milky Way-mass spiral galaxies from neutral pion decay, relativistic non-thermal Bremsstrahlung, and inverse Compton scattering. We find that these γ-ray outflows are present in all three Milky-Way mass simulated galaxies. The amplitude, shape, and the composition of the gamma-ray spectrum of these bubbles fluctuates over time, with lepton-dominated and hadron-dominated phases. Spectra in which there is O(1) more γ-ray flux from inverse Compton scattering than neutral pion decay are a good fit to the measured Fermi-LAT spectrum. Additionally, these simulations predict multi-wavelength features in soft x-rays and synchrotron radio, potentially providing new observational signatures that can connect the circumgalactic medium to cosmic ray physics and activity in the galactic center. Introduction- There is abundant observational evidence across multiple wavelengths of outflows from the central regions of galaxies [1]. These outflows are typically thought to arise from clustered supernovae in galaxies with active star formation, or from radiative or kinetic modes of active galactic nucleus (AGN) feedback, and have been observed in a variety of galaxies across a range of masses [2]. Because observables of these outflows span the electromagnetic spectrum, from diffuse radio emission to x-ray and γ-ray bubbles, such features are ideal targets to test different feedback mechanisms that can regulate star formation in galaxies. Within the Milky Way (MW), the Fermi Large Area Telescope (LAT) has detected a pair of bi-lobed, bubblelike features in γ-ray emission above and below the plane of the MW's galactic disk, spanning approximately 10 kpc and originating from the galactic center [3]. The γ-ray bubbles are accompanied by a pair of larger xray bubbles detected by ROSAT and eROSITA, approximately 12-14 kpc in length [4]. The origins of both the Fermi and eROSITA bubbles remain elusive: whether they are driven by AGN or star formation, a single event or multiple episodes in which energy and material are ejected from the galaxy, and whether the γ-ray emission is from hadronic or leptonic processes [5-8]. Most past studies of the origin and nature of the Fermi bubbles have relied on fitting the Fermi-LAT data to static templates generated by cosmic ray propagation codes or observed gas maps [3, 9], or idealized hydrodynamic simulations [6-8, 10, 11]. In the case of the former, fits to static templates are inherently unable to probe the time evolution of these structures. On the other hand, idealized hydrodynamic simulations can study the growth over time of outflows fueled by a specific process (e.g., stellar winds or an AGN burst) but typically impose a galactic potential, make simplifying assumptions about magnetic field structure and the hot circumgalactic medium (CGM), and do not evolve the CRs in a cosmological setting. It has only recently become possible to simulate CRs alongside full magnetohydrodynamics in cosmological simulations [12-21]. Most of these simulations evolve CRs as a single fluid element in a single energy bin. Several of these simulations have shown that CR pressure allows for the formation of kpc to Mpc-scale gas outflows, indicating that CR pressure may play an important role in the formation of the Fermi and eROSITA bubbles [22, 23]. More recently, new numerical methods have enabled the self-consistent evolution of individual CR species in magnetohydrodynamic galaxy formation simulations run from cosmological initial conditions, allowing for detailed studies of astro-particle phenomena and the forwardmodeling of multi-wavelength observables arising from CR interactions with the interstellar medium (ISM) [24]. In this letter, we analyze the simulations introduced in [24], in which multi-species cosmic rays (electrons, positrons, protons, anti-protons, and heavier nuclei) are simulated alongside full magnetohydrodynamics. We find that, for our sample of three MW-mass galaxies, extended γ-ray bubbles and halos are a ubiquitous feature that arise from the inverse Compton scattering of CR leptons injected by supernovae and fast stellar winds. The γ-ray spectrum of these bubbles is consistent with the Fermi-LAT observations, and we further find that these γ-ray features have counterparts in x-ray and synchrotron radio emission. This letter is structured as follows. We first provide a brief description of the simulations and the methods used to model γ-ray emission from the simulated CRs. We then show how different particle physics interactions contribute to the γ-ray emission, creating bi-lobed bubbles. We further demonstrate how the structure and composition of these bubbles (i.e. whether the γ-ray emission is predominantly produced by hadronic or leptonic interactions) changes over million-year (Myr) time scales. Finally, we show that the galactic outflows that lead to 16 Oct 2025 2 m12f m12i m12m 6 5 4 3 2 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] -ray Emission Maps Pion decay Bremsstrahlung Inverse Compton 5 4 3 2 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] FIG. 1. Top: total flux of 5-50 GeV γ-rays for the three simulated MW analogs in our sample. All galaxies show a distinct bi-lobed feature above and below the plane of the galactic disk; this feature is less pronounced for m12i, which has a warped disk and a strong magnetic field at its center, which leads to higher CR lepton loss rates than in m12f or m12m. Bottom: γ-ray emission in galaxy m12mfrom the three relevant CR interactions: neutral pion decay, relativistic Bremsstrahlung, and inverse Compton scattering. ICS is responsible for most of the bubble structure, with a slight hadronic contribution at lower latitudes. the γ-ray bubbles produce features in other wavelengths, including synchrotron radio and soft x-ray. This work marks the first time that Fermi bubble-like γ-ray features have been produced in simulations of MW-mass galaxies evolved from cosmological initial conditions. Methods: In this work, we analyze the same set of simulations first introduced in Hopkins et. al. 2022 [24]. These simulations are controlled re-starts of single-bin MHD-CR simulations evolved to late times, and were run with the GIZMO code and Feedback in Realistic Environments (FIRE) physics. There is feedback from type Ia and II supernovae, radiation (e.g., photo-electric and photo-ionization heating), and fast stellar winds from O/B and AGB star mass loss [25, 26]. We refer the reader to these works for details of numerical implementation, particularly [24] for details of the methods for CR transport. The simulations adopt a single power-law injection for the cosmic rays, with 10% of supernova and stellar wind shock energy going into CRs, and 2% of that into leptons. There are no black holes or AGN in these simulations; CRs are injected only from stellar sources. None of the parameters used in our simulations are tuned to match the MW or Fermi-LAT observations. We use local ISM observations from Voyager and AMS to set the relative normalizations of the cosmic ray spectra for individual species at redshift z = 0.05. For our set of three realizations of MW-like galaxies (m12f, m12i, and m12m), we model γ-ray emission from neutral pion decay, relativistic non-thermal Bremsstrahlung, and inverse Compton scattering (ICS) in post-processing, following the procedure described in [27]. We extend the emission for relativistic Bremsstrahlung and ICS to the x-ray regime, so we cover photons emitted from 0.1 keV to 100 GeV. We neglect x-ray emission from non-relativistic thermal Bremsstrahlung, as there is little hot gas in the re3 gion of interest, and past analysis has demonstrated that CR pressure in the CGM can suppress thermal Bremsstrahlung emission [28]. When calculating the γray spectrum for the bubbles, we mask the central ±20◦ to exclude the galactic disk, and then select a conic region within a slope of ±45◦in latitude vs. longitude. Overview of results: All three of the galaxies in our sample form bi-lobed γ-ray bubbles (Fig. 1, top); these features are present at the beginning of the controlled restart, which indicates that cosmic ray pressure acting against thermal pressure in the CGM plays an important role in inflating the bubbles by acting against thermal pressure and allowing cool gas to expand [29]. Throughout the simulation, CR pressure is comparable to or exceeds thermal pressure in the inner CGM. Similar to the Fermi bubbles, these features are relatively flat in surface brightness. The round "bubble" structure arises from the inverse Compton scattering of CR leptons. γ-ray emission from Π0 decay and relativistic Bremsstrahlung are brightest within the plane of the galactic disk, but can also contribute to the emission from the bubbles (Fig. 1, bottom). The bubbles in galaxies m12f and m12m are brighter and have more structure than those in m12i. The fainter bubbles in m12i are likely attributable to the morphology of the galaxy, which features a strong magnetic field at the galactic center, and a warped inner disk that is nearly perpendicular to the outer disk. The strong magnetic field results in high CR lepton loss rates via synchrotron radiation, so fewer high-energy CR leptons escape the galactic center [27]. In contrast, m12f and m12m have more typical disk structure and weaker magnetic fields, allowing CR leptons to stream into the halo. Both the amplitude and the shape of the γ-ray spectrum of the bubbles vary by order-of-magnitude amounts over the 460 Myr span of the simulations. The amplitude of the γ-ray spectrum is typically 10-100 times higher than the spectrum measured by Fermi-LAT, due to the comparatively higher rate of star formation (and thus supernovae) relative to the MW in the simulations. However, the shape of the γ-ray spectrum, while highly variable over time, is consistent with the Fermi-LAT measurement (Fig. 2, top). The flux from ICS is relatively constant over time, while flux from Π0 decay and relativistic Bremsstrahlung vary significantly (Fig. 2, bottom); the greater variability in the latter two processes is due to order-of-magnitude variations in gas density over time, while the cosmic ray flux and radiation background remain comparatively steady. The flux from relativistic Bremsstrahlung is typically subdominant to flux from both ICS and Π0 decay. When the γ-ray flux from ICS exceeds the flux from Π0 decay from 0.1 -100 GeV, the γ-ray spectrum is flatter, and closer in shape to the spectrum measured by Fermi-LAT. When the flux from Π0 decay exceeds the flux from ICS, then the γ-ray spectrum has a pronounced peak at Eγ ∼0.3 GeV, with a steeper tail. 10 1 100 101 E [GeV] 10 7 10 6 E2 × flux [GeV cm 2 s 1 sr 1] -ray Spectrum of Bubbles, Spread over Time m12f m12i m12m Fermi-LAT 10 1 100 101 E [GeV] 10 8 10 7 10 6 10 5 E2 × flux [GeV cm 2 s 1 sr 1] -ray Spectrum Components for m12m Bubbles Pion decay Bremsstrahlung Inverse Compton FIG. 2. Top: the spread of γ-ray spectra from all CR interactions across all snapshots in the simulation, spaced between 460 Myr and 0 Myr in lookback time. A global renormalization to the spectrum amplitude has been applied to compare the spectra from simulations to each other and to the FermiLAT measurement. While the spectra for galaxies m12f and m12m is consistent with Fermi-LAT, m12i has a steeper tail at all times, due to high loss rates for CR leptons in the galactic center. Bottom: the components of the γ-ray spectra for the bubbles in galaxy m12m. γ-ray emission is produced by Π0 decay, relativistic Bremsstrahlung, and inverse Compton scattering. The flux from inverse Compton is significantly less variable over time than the flux from Π0 decay and relativistic Bremsstrahlung, both of which are proportional to gas density. Growth and evolution of bubbles: As there are no AGN in these simulations, the γ-ray bubbles are sourced by star formation, supernova feedback, and stellar winds, leading to a self-regulating "galactic fountain" [30-33]. In all three simulated galaxies, there is a relatively small burst of star formation between 300 and 500 Myr ago, blowing gas and CRs out of the galactic center. Over a span of ∼100 Myr, this gas falls back into the galactic disk and interacts with CR hadrons and leptons. Additional gas is supplied by supernova-driven outflows over the subsequent duration of the simulation. Figure 3 shows the γ-ray spectra for the bubbles in galaxy m12m at selected snapshots over this period of time. At 340 Myr, the spectrum is lepton-dominated and in good agreement with the Fermi-LAT measurement. At 320 4 10 1 100 101 E [GeV] 10 7 10 6 E2 × flux [GeV cm 2 s 1 sr 1] -ray spectrum for m12m bubbles over time Fermi-LAT 240 260 280 300 320 340 Lookback time [Myr] 10 1 100 101 E [GeV] 10 1 100 Fhadron/ Flepton Hadron-Dominated Lepton-Dominated Most similar to Fermi-LAT Ratio of -ray flux from CR hadrons to flux from CR leptons 240 260 280 300 320 340 Lookback time [Myr] FIG. 3. Left: The γ-ray spectrum for the bubbles in galaxy m12m at select snapshots between 340 and 240 Myr in lookback time. At 340 Myr, the shape of the spectrum is in good agreement with the measured spectrum of the Fermi bubbles. At 320 Myr, an inflow of gas causes a sharp increase in the spectrum, as well as a more peaked shape characteristic of hadronic spectra. After the gas settles, the spectra become flatter, until a cluster of supernovae go off at 240 Myr. Right: The ratio of γ-ray flux from Π0 decay to flux from ICS for the same snapshots. The snapshot where the γ-ray spectrum is most similar in shape to the measured Fermi-LAT spectrum is indicated by the thick magenta line. Spectra corresponding to gas outflows/ inflows are more hadronic, while those more consistent with the Fermi-LAT measurement are lepton-dominated. However, there are times at which the γ-ray spectrum for the bubbles are more leptonic than the measured Fermi-LAT spectrum. Myr, infalling gas interacts with the CRs, resulting in greater γ-ray flux from Π0 decay and a higher-amplitude, hadron-dominated spectrum. Eventually, the gas settles back into the disk, and the γ-ray spectrum of the bubbles becomes more leptonic. As this occurs, both the infalling gas and the compression of gas already in the disk from previous supernovae trigger further star formation [34]. Finally, another cluster of supernovae goes off at 240 Myr, causing the γ-ray spectrum to be more hadronic once again. The most well-defined bubble structures form when there is little gas present from either inflows or outflows. The γ-ray spectrum calculated at such snapshots is predominantly leptonic, dominated by flux from ICS (Fig. 3, right). The time the bubbles take to go through a "cycle" in which an ICS-dominated bubble is inflated and then deflated by CRs interacting with an influx of gas is ∼50 -100 Myr. In galaxies m12f and m12m, several of these cycles occur, with oscillations between ICSdominated and hadron-dominated γ-ray spectra for the duration of the simulation. In contrast, the γ-ray spectrum of the bubbles in galaxy m12i, which has a higher, relatively constant star formation rate for most of the duration of the simulation, become increasingly hadronic over time, particularly as strong magnetic fields in the galactic center deplete the CR leptons [27]. Multi-wavelength observations: x-ray and synchrotron radio: The Fermi bubbles have associated observable features in other wavelengths, including the eROSITA bubbles (soft x-ray) [4], the WMAP haze (microwave and radio) [35-37], and Loop 1 and the South Polar Spur (radio) [38, 39]. We find that the bubbles in our simulations have counterparts in x-ray and synchrotron radio emission (Fig. 5). While the dominant morphology of the emission at these wavelengths is a diffuse halo, some geometry of the bubbles is present (particularly in southern hemisphere in the soft x-ray). The edge of the γ-ray bubbles coincides with the edge of the x-ray halo. Both the synchrotron radio and the x-ray emission are bright enough to be feasibly detected by current and next-generation radio and x-ray telescopes, respectively. We leave further exploration of multi-wavelength observables for galactic outflows in spectral MHD-CR simulations to future work. Discussion: This work demonstrates that feedback from supernovae and stellar winds can give rise to Fermi bubble-like γ-ray outflows, without invoking AGN feedback or beyond standard model physics. Because γ-ray outflows and their counterparts in other wavelengths occur in all three simulated galaxies in our sample, it is plausible that these features are very common, but difficult to detect in galaxies beyond the MW and the Local Group. γ-ray bubbles that form by this mechanism are in best agreement with the observed Fermi bubble γ-ray spectrum when most of the γ-ray flux is produced via ICS, with a ∼10 -20% contribution from Π0 decay and relativistic Bremsstrahlung. These structures are highly dynamical, evolving on the scale of tens of Myrs, over which their composition can change dramatically. These results do not rule out the possibility that AGNaccelerated cosmic rays contribute to the formation of such structures. Rather, by including AGN feedback in future simulations, we can leverage the Fermi bubble observations as a constraint on models of CR transport and AGN physics. There are features of the Fermi and eROSITA bubbles (e.g., the hard edge to the bubble surface brightness, shape of the x-ray emission) that are not reproduced in some of these simulations. If such features occur in simulations with CR injection from both AGN feedback and star formation, or from AGN alone, they 5 354 Myr 331 Myr 309 Myr 6 5 4 3 2 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] m12m -ray emission over time 354 Myr 331 Myr 309 Myr 2 0 2 4 6 log10 NH [cm 3] m12m -ray gas density over time FIG. 4. γ-ray emission (top) and gas density (bottom) for galaxy m12m over a 45 Myr span. At 354 Myr, a clear bi-lobed γ-ray bubble is visible. At 331 Myr, the bubbles become brighter and less ordered as gas previously ejected by supernova feedback falls back into the disk. By 309 Myr, the gas has re-settled in the disk, and the distinct bi-lobed shape of the bubbles is visible once more. could help discriminate between these scenarios. Additionally, forward-modeling of γ-ray and x-ray observables in the simulation can provide insight into whether the hard edge of the Fermi bubbles arises from physics (e.g., a shock) or observational effects, such as geometry or modeling of foregrounds and backgrounds. Beyond the formation of the Fermi bubbles, this work has important implications for and connections to other facets of galaxy formation and astro-particle physics. If γ-ray bubbles typically form via stellar feedback-driven outflows, then the existence of the Fermi bubbles may indicate a relatively recent starburst (within last ∼100 Myr) in the MW's galactic center. There are numerous observational and theoretical challenges for measuring the star formation history of the MW's central regions [40]. Simulations have indicated that star formation in the galactic center of MW-like galaxies is typically bursty, with feedback-driven intervals in star formation rate of ∼10 -100 Myr [41, 42]. Historically, star formation in this region was thought to be quasi-continuous over the last 10 Gyr, but recent observations suggest that there was a burst of star formation in the MW's nuclear stellar disk in the last 30 Myr [43]. Future observational surveys of young stars in the galactic center will help to constrain the star formation history in the galactic center, and can test stellar feedback as a formation mechanism for the Fermi bubbles. The processes that gives rise to the γ-ray bubbles in these simulations directly affect the CGM, where CR transport is both less constrained and has greater impact on galaxy formation than in the ISM. The γ-ray bubbles form through a cycle of galactic outflows and inflows that has hadron-dominated and lepton-dominated phases; the hadron-dominated phases are typically associated with galactic fountain flows. During the lepton-dominated phases, the γ-ray bubbles are expected to have coun6 8 7 6 5 4 log10 (E2 × flux) [GeV cm 2 s 1 sr 1] FIG. 5. Observable features in other wavelengths related to the γ-ray bubbles in galaxy m12m. Top: Synchrotron radio emission at 20 GHz from CR leptons in galaxy m12m. The flux is roughly comparable to the observed flux of the WMAP haze, and there is structure similar to observed radio loops and spurs in the MW [35] Bottom: soft x-ray (0.1-2.4 keV) surface brightness from inverse Compton scattering of CR leptons in galaxy m12m. Contours for γ-ray flux are shown in the black lines, with each contour separated by 100.5 GeV cm-2 s-1 sr-1. The edges of the γ-ray bubbles and x-ray halo are roughly coincident. Both of these observable features arise from CR lepton interactions (for synchrotron, with magnetic fields, and for ICS, with radiation), and are bright enough to potentially be detected by current and future surveys. terparts in diffuse emission in other wavelengths, such as synchrotron radio and soft x-ray from ICS. Although detecting diffuse γ-ray emission around other galaxies is difficult due to the faintness of these features, there are better prospects for observing multi-wavelength counterparts with next-generation observatories . Detecting the these multi-wavelength signatures of outflows in a larger sample of galaxies could determine how much time a MW-like galaxy spends in each phase and inform our understanding of the inner CGM. In addition, these results support a growing body of work that indicates that, rather than all CRs free-streaming into the CGM and beyond, a sizable fraction must inhabit a large ( ∼10 kpc) CR "scattering" halo- in these simulations, the CRs that produce γ-ray bubbles persist at radii of ∼5-10 kpc above the galactic disk over tens of Myrs. CR leptons that escape the smaller scattering halo can diffuse into the outer CGM, where, by inverse Compton scattering off of CMB photons, they can produce non-thermal x-ray emission at large radii (∼100 kpc) [44]. Uncovering the origins of the Fermi bubbles is a complex problem due to the multitude of astrophysical processes and scales involved. The ability to simulate features similar to the Fermi bubbles in full cosmological context and forward-model their emission from γ-ray to radio frequencies will enable new tests of their potential formation mechanisms. [1] K. C. Sarkar, The fermi/erosita bubbles: A look into the nuclear outflow from the milky way (2024), . [2] S. Veilleux, R. Maiolino, A. D. Bolatto, and S. Aalto, Cool outflows in galaxies and their implications, The Astronomy and Astrophysics Review 28, 10.1007/s00159019-0121-9 (2020). [3] M. Su, T. R. Slatyer, and D. P. Finkbeiner, Giant gamma-ray bubbles fromfermi-lat: Active galactic nucleus activity or bipolar galactic wind?, The Astrophysical Journal 724, 1044-1082 (2010). [4] P. Predehl, R. A. Sunyaev, W. Becker, H. Brunner, R. Burenin, A. Bykov, A. Cherepashchuk, N. Chugai, E. Churazov, V. Doroshenko, N. Eismont, M. Freyberg, M. Gilfanov, F. Haberl, I. Khabibullin, R. Krivonos, C. Maitra, P. Medvedev, A. Merloni, K. Nandra, V. Nazarov, M. Pavlinsky, G. Ponti, J. S. Sanders, M. Sasaki, S. Sazonov, A. W. Strong, and J. Wilms, Detection of large-scale X-ray bubbles in the Milky Way halo, Nature (London) 588, 227 (2020), . [5] H.-Y. Yang, M. Ruszkowski, and E. Zweibel, Unveiling the origin of the fermi bubbles, Galaxies 6, 29 (2018). [6] S. Mondal, U. Keshet, K. C. Sarkar, and I. Gurwich, Fermi bubbles: the collimated outburst needed to explain forward-shock edges, MNRAS 514, 2581 (2022), . [7] H. Y. K. Yang, M. Ruszkowski, and E. G. Zweibel, Fermi and eROSITA bubbles as relics of the past activity of the Galaxy's central black hole, Nature Astronomy 6, 584 (2022), . [8] E. R. Owen and H. Y. K. Yang, Emission from hadronic and leptonic processes in galactic jet-driven bubbles, MNRAS 516, 1539 (2022), . [9] M. Ackermann et al., The Spectrum and Morphology of the Fermi Bubbles, Astrophys. J. 793, 64 (2014), . [10] H. Y. K. Yang, M. Ruszkowski, P. M. Ricker, E. Zweibel, 7 and D. Lee, The Fermi Bubbles: Supersonic Active Galactic Nucleus Jets with Anisotropic Cosmic-Ray Diffusion, Astrophys. J. 761, 185 (2012), . [11] K. C. Sarkar, B. B. Nath, and P. Sharma, Multiwavelength features of Fermi bubbles as signatures of a Galactic wind, MNRAS 453, 3827 (2015), . [12] C. M. Booth, O. Agertz, A. V. Kravtsov, and N. Y. Gnedin, Simulations of disk galaxies with cosmic ray driven galactic winds, The Astrophysical Journal 777, L16 (2013). [13] M. Salem and G. L. Bryan, Cosmic ray driven outflows in global galaxy disc models, Monthly Notices of the Royal Astronomical Society 437, 3312-3330 (2013). [14] P. Girichidis, T. Naab, S. Walch, M. Hanasz, M.- M. Mac Low, J. P. Ostriker, A. Gatto, T. Peters, R. W ̈unsch, S. C. O. Glover, R. S. Klessen, P. C. Clark, and C. Baczynski, Launching Cosmic-Ray-driven Outflows from the Magnetized Interstellar Medium, The Astrophysical Journall 816, L19 (2016), . [15] I. S. Butsky and T. R. Quinn, The role of cosmicray transport in shaping the simulated circumgalactic medium, The Astrophysical Journal 868, 108 (2018). [16] T. K. Chan, D. Kereˇs, P. F. Hopkins, E. Quataert, K.- Y. Su, C. C. Hayward, and C.-A. Faucher-Gigu`ere, Cosmic ray feedback in the fire simulations: constraining cosmic ray propagation with gev gamma-ray emission, Monthly Notices of the Royal Astronomical Society 488, 3716-3744 (2019). [17] T. Buck, C. Pfrommer, R. Pakmor, R. J. J. Grand, and V. Springel, The effects of cosmic rays on the formation of milky way-mass galaxies in a cosmological context, Monthly Notices of the Royal Astronomical Society 497, 1712-1737 (2020). [18] M. Werhahn, C. Pfrommer, P. Girichidis, E. Puchwein, and R. Pakmor, Cosmic rays and non-thermal emission in simulated galaxies i. electron and proton spectra compared to voyager 1 data, Monthly Notices of the Royal Astronomical Society 505, 3273-3294 (2021). [19] C. Pfrommer, M. Werhahn, R. Pakmor, P. Girichidis, and C. M. Simpson, Simulating radio synchrotron emission in star-forming galaxies: small-scale magnetic dynamo and the origin of the far-infrared-radio correlation, Monthly Notices of the Royal Astronomical Society 515, 4229-4264 (2022). [20] M. Farcy, J. Rosdahl, Y. Dubois, J. Blaizot, and S. Martin-Alvarez, Radiation-magnetohydrodynamics simulations of cosmic ray feedback in disc galaxies, Monthly Notices of the Royal Astronomical Society 513, 5000-5019 (2022). [21] T. Thomas, C. Pfrommer, and R. Pakmor, Cosmic raydriven galactic winds: transport modes of cosmic rays and alfv ́en-wave dark regions (2022), . [22] A. Pillepich, D. Sotillo-Ramos, R. Ramesh, D. Nelson, C. Engler, V. Rodriguez-Gomez, M. Fournier, M. Donnari, V. Springel, and L. Hernquist, Milky Way and Andromeda analogues from the TNG50 simulation, MNRAS 535, 1721 (2024), . [23] R. Ramesh, D. Nelson, and P. Girichidis, IllustrisTNG plus cosmic rays with a simple transport model: From dwarfs to L∗galaxies, Astronomy & Astrophysics 699, A125 (2025), . [24] P. F. Hopkins, I. S. Butsky, G. V. Panopoulou, S. Ji, E. Quataert, C.-A. Faucher-Gigu`ere, and D. Kereˇs, First predicted cosmic ray spectra, primary-to-secondary ratios, and ionization rates from mhd galaxy formation simulations, Monthly Notices of the Royal Astronomical Society 516, 3470-3514 (2022). [25] P. F. Hopkins et al., Fire-2 simulations: physics versus numerics in galaxy formation, Monthly Notices of the Royal Astronomical Society 480, 800-863 (2018). [26] P. F. Hopkins, A. Wetzel, C. Wheeler, R. Sanderson, M. Y. Grudi ́c, O. Sameie, M. Boylan-Kolchin, M. Orr, X. Ma, C.-A. Faucher-Gigu`ere, D. Kereˇs, E. Quataert, K.-Y. Su, J. Moreno, R. Feldmann, J. S. Bullock, S. R. Loebman, D. Angl ́es-Alc ́azar, J. Stern, L. Necib, C. R. Choban, and C. C. Hayward, Fire-3: updated stellar evolution models, yields, and microphysics and fitting functions for applications in galaxy simulations, Monthly Notices of the Royal Astronomical Society 519, 3154-3181 (2022). [27] I. S. Sands, P. F. Hopkins, S. B. Ponnada, D. Keres, L. Necib, and Y.-H. J. Lin, Galactic center gammaray emission in mhd galaxy formation simulations with full cosmic ray spectra (2025), . [28] E. M. Silich, J. ZuHone, E. Bellomi, C. Hummels, B. Oppenheimer, P. F. Hopkins, C. Lochhaas, S. B. Ponnada, and A. Vikhlinin, X-ray emission signatures of galactic feedback in the hot circumgalactic medium: predictions from cosmological hydrodynamical simulations (2025), . [29] S. Ji, T. K. Chan, C. B. Hummels, P. F. Hopkins, J. Stern, D. Kereˇs, E. Quataert, C.-A. Faucher-Gigu`ere, and N. Murray, Properties of the circumgalactic medium in cosmic ray-dominated galaxy haloes, Monthly Notices of the Royal Astronomical Society 496, 4221-4238 (2020). [30] J. N. Bregman, The galactic fountain of high-velocity clouds., Astrophys. J. 236, 577 (1980). [31] F. Fraternali and J. J. Binney, Accretion of gas on to nearby spiral galaxies, MNRAS 386, 935 (2008), . [32] A. Marasco, V. P. Debattista, F. Fraternali, T. van der Hulst, J. Wadsley, T. Quinn, and R. Roˇskar, The effect of stellar feedback on a Milky Way-like galaxy and its gaseous halo, MNRAS 451, 4223 (2015), . [33] F. Fraternali, Gas Accretion via Condensation and Fountains, in Gas Accretion onto Galaxies, Astrophysics and Space Science Library, Vol. 430, edited by A. Fox and R. Dav ́e (2017) p. 323, . [34] X. Ma, M. Y. Grudi ́c, E. Quataert, P. F. Hopkins, C.- A. Faucher-Gigu`ere, M. Boylan-Kolchin, A. Wetzel, J.-h. Kim, N. Murray, and D. Kereˇs, Self-consistent protoglobular cluster formation in cosmological simulations of high-redshift galaxies, MNRAS 493, 4315 (2020), . [35] D. P. Finkbeiner, Microwave Interstellar Medium Emission Observed by the Wilkinson Microwave Anisotropy Probe, Astrophys. J. 614, 186 (2004), arXiv:astroph/0311547 [astro-ph]. [36] G. Dobler and D. P. Finkbeiner, Extended Anomalous Foreground Emission in the WMAP Three-Year Data, Astrophys. J. 680, 1222 (2008), . [37] Planck Collaboration, P. A. R. Ade, et al., Planck intermediate results. IX. Detection of the Galactic haze with Planck, Astronomy & Astrophysics 554, A139 (2013), . [38] M. Vidal, C. Dickinson, R. D. Davies, and J. P. Leahy, Polarized radio filaments outside the Galactic plane, MNRAS 452, 656 (2015), . [39] Planck Collaboration, P. A. R. Ade, et al., Planck 2015 results. XXV. Diffuse low-frequency Galactic foregrounds, Astronomy & Astrophysics 594, A25 (2016), . [40] J. D. Henshaw, A. T. Barnes, C. Battersby, A. Ginsburg, M. C. Sormani, and D. L. Walker, Star Formation in the Central Molecular Zone of the Milky Way, in Protostars and Planets VII , Astronomical Society of the Pacific Conference Series, Vol. 534, edited by S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, and M. Tamura (2023) p. 83, . [41] P. Torrey, P. F. Hopkins, C.-A. Faucher-Gigu`ere, M. Vogelsberger, E. Quataert, D. Kereˇs, and N. Murray, An instability of feedback-regulated star formation in galactic nuclei, MNRAS 467, 2301 (2017), . [42] M. E. Orr, H. P. Hatchfield, C. Battersby, C. C. Hayward, P. F. Hopkins, A. Wetzel, S. M. Benincasa, S. R. Loebman, M. C. Sormani, and R. S. Klessen, Fiery Cores: Bursty and Smooth Star Formation Distributions across Galaxy Centers in Cosmological Zoom-in Simulations, ApJ Letters 908, L31 (2021), . [43] F. Nogueras-Lara, R. Sch ̈odel, A. T. Gallego-Calvente, E. Gallego-Cano, B. Shahzamanian, H. Dong, N. Neumayer, M. Hilker, F. Najarro, S. Nishiyama, A. Feldmeier-Krause, J. H. V. Girard, and S. Cassisi, Early formation and recent starburst activity in the nuclear disk of the milky way, Nature Astronomy 4, 377-381 (2019). [44] P. F. Hopkins, E. Quataert, S. B. Ponnada, and E. Silich, Cosmic rays masquerading as hot cgm gas: An inversecompton origin for diffuse x-ray emission in the circumgalactic medium (2025), .
2510.14907
Learnable Mixed Nash Equilibria are Collectively Rational Geelon So Yi-An Ma University of California, San Diego Abstract We extend the study of learning in games to dynamics that exhibit non-asymptotic stability. We do so through the notion of uniform stability, which is concerned with equilibria of individually utility-seeking dynamics. Perhaps surprisingly, it turns out to be closely connected to economic properties of collective rationality. Under mild non-degeneracy conditions and up to strategic equivalence, if a mixed equilibrium is not uniformly stable, then it is not weakly Pareto optimal—there is a way for all players to improve by jointly deviating from the equilibrium. On the other hand, if it is locally uniformly stable, then the equilibrium must be weakly Pareto optimal. Moreover, we show that uniform stability determines the last-iterate convergence behavior for the family of incremental smoothed best-response dynamics, used to model individual and corporate behaviors in the markets. Unlike dynamics around strict equilibria, which can stabilize to socially-inefficient solutions, individually utility-seeking behaviors near mixed Nash equilibria lead to collective rationality. Keywords: algorithmic game theory, evolutionary dynamics, non-asymptotic stability, last-iterate convergence, smoothed best-response 1 Introduction The Nash (1951) equilibrium is a foundational solution concept in games, capturing when collective behavior or strategies may be stationary. These equilibria are meaningful to study, for once such strategies appear, they may persist for a long time. But, there is an important caveat: not all equilibria can be robustly reached by players in the game (Hart and Mas-Colell, 2003; Papadimitriou, 2007; Daskalakis et al., 2009, 2010; Milionis et al., 2023). Any equilibrium that cannot be found or sustained is unlikely to have practical relevance. This caveat leads to the question on learnability: which Nash equilibria can players eventually learn to play from repeated interactions? To grasp individual and corporate behaviors, literature in classical economics considers a model of learning where players: (i) take an evolutionary approach and incrementally update their strategies; (ii) are utility- seeking, which means that they aim to improve their own payoffs; and (iii) are uncoupled, where players are unaware of the other players’ utilities or methods to improve them (Alchian, 1950; Winter, 1971). The goal of the players in this model is not to compute any pre-determined Nash equilibrium, at least not explicitly. Rather, the equilibrium is to emerge out of their joint, but individually utility-seeking, behavior. Within this model of learning, the impossibility result of Hart and Mas-Colell (2003) shows that no uncoupled and asymptotically-stable learning dynamics can converge to all Nash equilibria. Uncoupled means that players learn in uncoordinated and decentralized ways. Asymptotic stability means that the convergence is robust to small perturbations in the dynamics. Simply put, not all equilibria admit such a strong notion of learnability, viz. dynamical stability. The ones that do, however, have been characterized for certain classes of learning dynamics in standard normal-form games: an equilibrium is ‘asymptotically learnable’ in this way if and only if it is strict, where every player has a single, deterministic strategy that is clearly locally optimal (Samuelson and Zhang, 1992; Vlatakis-Gkaragkounis et al., 2020; Giannou et al., 2021). A significant gap remains for the learnability of mixed Nash equilibria, where players may use randomized strategies. On the one hand, mixed equilibria are not strict, and as a result, they are not asymptotically stable under these learning dynamics. Observations corroborating this finding demonstrate that many dynamics are not able to generically learn mixed Nash equilibria. This has been a significant source of criticism on the Correspondence to: geelon@ucsd.edu and yianma@ucsd.edu 1 arXiv:2510.14907v1 [cs.GT] 16 Oct 2025 (a) Pareto-dominated equilibria are unstable (b) Weakly-Pareto equilibria are neutrally stable Figure 1: Uncoupled learning dynamics in two 2 × 2 normal-form games with purely-strategic utilities f = (f1, f2). The Nash equilibria are marked by stars. The streamlines visualize the trajectories of the learning dynamics. The heatmap plots the social welfare function min{f1, f2}, measuring the utility of the player worst-off. The heatmap is white where the utilities are equal to the equilibrium; the darker the red, the worse the social welfare; the darker the blue, the better. (a) The mixed Nash equilibrium in this game is not weakly Pareto optimal, and the dynamics are unstable. (b) This equilibrium is weakly Pareto optimal, and the dynamics are non-asymptotically stable. The dynamics visualized here is continuous-time mirror ascent induced by the entropy mirror map. viability of mixed equilibria as solutions in games (Shapley, 1963; Crawford, 1985; Stahl II, 1988; Jordan, 1993; Krishna and Sj¨ostr¨om, 1998; Ellison and Fudenberg, 2000; Kleinberg et al., 2011; Mertikopoulos et al., 2018; Bailey et al., 2021). On the other hand, there are cases in which mixed equilibria are approachable by simple, uncoupled dynamics, such as those in two-player zero-sum games (Robinson, 1951; Fudenberg and Kreps, 1993; Gjerstad, 1996; Hofbauer and Sigmund, 1998; Benaım and Hirsch, 1999; Hofbauer and Sorin, 2006; Daskalakis and Panageas, 2019; Wei et al., 2021; Piliouras et al., 2022). To address this conundrum, we must go beyond the criterion of asymptotic stability, which is too stringent a requirement. This work extends the study of learning in games to dynamics that exhibit non-asymptotic stability. It is a weaker notion of stability that includes neutral stability, which only requires that players starting off in a neighborhood of such an equilibrium will continue to remain close to it. The question we ask is: Under the relaxed criterion of non-asymptotic stability, which Nash equilibria are learnable by uncoupled dynamics, and what are their economic properties? There are two parts to this work. In the first part, we introduce and characterize a form of non-asymptotic stability that applies to broad classes of learning dynamics. We call it uniform stability. While it arises from dynamical considerations, we show that uniform stability is closely tied to the economic properties of the equilibrium, namely, strategic Pareto optimality. This is a game-variant of weak Pareto optimality defining a minimal form of collective rationality—it describes solutions where there is no way to strictly improve everyone’s utilities, up to strategic equivalence.1 We show that an equilibrium that is not uniformly stable must not be strategically Pareto optimal. This formalizes and tightens an observation made by prior work: non-convergence to mixed equilibria can sometimes actually be a blessing in disguise, where players would be worse off had they managed to converge to the equilibrium (Kleinberg et al., 2011; Ostrovski and van Strien, 2013; Pangallo et al., 2022; Anagnostides et al., 2022). This may be surprising as it runs counter to the more prominent phenomenon where non-cooperative, individual behavior leads to worse social outcomes, or a high price of anarchy, such as in the Prisoner’s Dilemma or the Tragedy of the Commons (Luce and Raiffa, 1957; Hardin, 1968; Nachbar, 1990; Weibull, 1994; Koutsoupias and Papadimitriou, 1999). In the second part, we focus on convergence and non-convergence of incremental smoothed best-response dynamics, a generalization of a canonical family of learning dynamics. For this class, we show that non-strict, 1As standard learning dynamics generally factor through strategic equivalence classes of games (they do not depend on the ‘externalities’ or non-strategic components of the game), this reservation is necessary. 2 uniformly-stable equilibria can be learned, though with slower convergence than strict equilibria. Around a locally uniformly-stable Nash equilibrium, these dynamics can be stabilized: they lead to asymptotically- stable dynamics that approximate the equilibrium to arbitrary accuracy. While for non-uniformly stable mixed equilibria, there are dynamics that can never be stabilized to the equilibrium. Thus we observe a dichotomy between dynamics around strict and mixed Nash equilibria under the context of ‘as if’ rationality (Friedman, 1953; Weibull, 1994). In the former, individually utility-seeking dynamics lead to the same equilibrium behaviors as if all the players have absolute, individual rationality. As in the prisoner’s dilemma, they can stabilize at collectively-irrational behaviors. Around mixed equilib- ria, uniform learnability disallows collective irrationality. As a result, individually utility-seeking dynamics robustly behave as if the players were collectively rational, reminiscent of the ‘invisible hand’ from classical economic theory (Smith, 1776; Debreu, 1959). 1.1 Main results We consider uncoupled learning dynamics for standard N-player normal-form games, where players, through repeated interactions, evolve their individual strategies in order to maximize their own utilities. For such dynamics, in Section 3, we introduce two non-asymptotic stability classes for mixed equilibria: those that have pointwise uniform stability and those that exhibit a stronger local uniform stability. These impose a second-order differential condition, either only at the equilibrium or in a neighborhood around it (note that the equilibrium condition itself is a first-order condition for mixed strategies). These conditions naturally arise from linear stability analysis, and they apply to broad classes of dynamics such as smoothed best-response, gradient ascent/adaptive dynamics, mirror ascent/positive-definite adaptive dynamics. In Section 4, we show the chain of implications for mixed Nash equilibria under mild assumptions: local uniform stability =⇒ strategic Pareto optimality =⇒ pointwise uniform stability. Moreover, all of these solution concepts are equivalent for the class of graphical or polymatrix games, which are particularly succinct normal-form games that decompose across pairwise interactions between players. In Section 5, we prove convergence results for the family of incremental, smoothed best-response learning dynamics. The dynamics are controlled by a smoothing parameter β and a learning rate η. The first governs the quality of any approximate Nash equilibria that the dynamics finds, while the second impacts the stability of its fixed points; whether the dynamics converge and at what rate. We prove that if a Nash equilibrium is locally uniformly stable, then for any choice of approximation β, the dynamics can always be stabilized to the equilibrium by taking a sufficiently fine learning rate η, whereby it converges with asymptotic stability. With T iterations, the overall convergence rate to the locally uniformly stable mixed Nash equilibria is of order T −1/2. However, if a Nash equilibrium is not pointwise uniformly stable, then there are dynamics that can never be stabilized beyond a certain approximation error, no matter what learning rate is chosen. Finally, as the first set of results are stated for fully-mixed equilibria, Section 6 extends these results to the case where the Nash equilibrium may be partially mixed. The following summarizes our main results: Local uniform stability implies strategic Pareto optimality. If a Nash equilibrium falls within a region of uniformly-stable strategies, it must be collectively rational (Theorem 1). In this case, it is robustly approximable under incremental, smoothed best-response dynamics (Theorems 3 and 4). Strategic Pareto optimality implies pointwise uniform stability. We show that strategic Pareto optimality implies strategic Pareto stationarity, which is a second-order condition for collective ratio- nality (Proposition 1). Strategic stationarity is equivalent to pointwise uniform stability (Theorem 2), and it is also necessary for convergence with non-asymptotic stability (Proposition 2). 1.2 Related work This work considers the convergence of learning dynamics in games. Two modes of convergence are often studied (i) time-averaged convergence, where players aim to achieve little or no regret in the long run, or the stronger (ii) day-to-day or last-iterate convergence, where players eventually learn to play an equilibrium of the game; see also Fudenberg and Kreps (1993) for discussion. This work falls in the latter category. 3 The learnability of strict Nash equilibria is fairly well-understood (Hart and Mas-Colell, 2003; Mer- tikopoulos and Sandholm, 2016; Vlatakis-Gkaragkounis et al., 2020; Giannou et al., 2021, and related works therein). In contrast, the learnability of mixed Nash equilibria is more complicated. Many works have pointed out that asymptotically-stable convergence to mixed Nash equilibria is generally ruled out because linearized dynamics about them have zero trace (Crawford, 1985; Jordan, 1993; Singh et al., 2000; Hart and Mas-Colell, 2003; Mertikopoulos et al., 2018). As mixed Nash equilibria are needed for the existence of Nash equilibria (Nash, 1951), their generic infeasibility is troubling for the viability of the solution concept, but has a resolution via Harsanyi’s purification theorem (Harsanyi, 1973). Philosophically, unlike Harsanyi (1973), this work does not try to save all mixed equilibria, but proceeds from the fact that uncoupled dynamics can sometimes converge to mixed Nash equilibria, and aims to further refine the solution concept (Van Damme, 1992). Much of the existing work providing positive convergence results focuses on two-player games (Robinson, 1951; Fudenberg and Kreps, 1993; Gjerstad, 1996; Hofbauer and Sigmund, 1998; Benaım and Hirsch, 1999; Hofbauer and Sorin, 2006; Wei et al., 2021), and especially for zero-sum games (Gilpin et al., 2012; Daskalakis and Panageas, 2019; de Montbrun and Renault, 2022; Piliouras et al., 2022, and related works therein). Our results apply to general N-player normal-form games. The results and techniques in this paper also relate to many areas of game theory. The connection between uniform stability and weak Pareto optimality generalizes existing characterizations of games with weakly Pareto optimal equilibria in two-player games (Moulin and Vial, 1978; Adler et al., 2009), and bridges to the classical notion of strong Nash equilibria (Aumann, 1959). Our notion of uniform stability builds on the game Jacobian (also sometimes called the game Hessian), which is also generally used to analyze smooth games (Isaacs, 1999; Candogan et al., 2011; Balduzzi et al., 2018; Letcher et al., 2019). The algorithmic component of this paper focuses on smoothed best-response dynamics, which can be viewed as regularized learning/optimization (Rockafellar, 1997; Mertikopoulos and Sandholm, 2016). In normal-form games, the canonical regularizer is the logit map, yielding quantal response equilibria (McKelvey and Palfrey, 1995; Al´os-Ferrer and Netzer, 2010; Goeree et al., 2016; Mertikopoulos and Sandholm, 2016). Our convergence result introduces the notion of stabilization to an equilibrium, which is similar to the tracing procedure discussed by McKelvey and Palfrey (1995). 2 Preliminaries An N-player game (Ω, f) with players indexed by n ∈[N] consists of a joint strategy space Ω= Ω1×· · ·×ΩN and a set of utilities f = (f1, . . . , fN), where each player would like to maximize their own utility fn : Ω→R. In a round of the game, everyone independently chooses a strategy xn ∈Ωn. These choices constitute the joint decision vector x = (x1, . . . , xN), and players receive their respective payoffs fn(x) ≡fn(xn; x−n). Here, we let x−n = (x1, . . . , xn−1, xn+1, . . . , xN) denote the joint strategy without the nth component. This work focuses on normal-form games, which we define here and expand upon in Section A. Definition 1 (Normal-form game). An N-player normal-form game (Ω, f) is one where each player n ∈[N] has a set of kn alternatives/pure strategies, the strategy space Ωn is the (kn −1)-dimensional probability simplex ∆kn−1, and each utility fn is defined by a payoff tensor T n ∈Rk1×···×kN as follows: fn(x) = X i1∈[k1] · · · X iN∈[kN] T n i1,...,iN x1,i1 · · · xN,iN and xn = (xn,1, . . . , xn,kn) ∈∆kn−1. We say that the strategy x ∈Ωis mixed if it is in the interior of Ω, which we denote by int(Ω). Normal-form games are a special case of multilinear games, which are games whose utilities are poly- nomials where the maximum degree in each variable is one. In the following, we first define multilinear polynomials of the form p : Rd1 × · · · × RdN →R, where we think of dn = kn −1. The connection between normal-form games and multilinear games is formalized by Lemma A.1. Definition 2 (Multilinear polynomial). Let R[x] contain the polynomials of the form p : Rd1×· · ·×RdN →R, with variables x = (x1, . . . , xN) and xn = (xn,1, . . . , xn,dn) for each n ∈[N]. We say that p ∈R[x] is multilinear over x if it is affine over each xn. That is, for each n ∈[N], the polynomial is affine in xn, p(x) = An(x−n)⊤xn + bn(x−n), 4 where An ∈R[x−n]dn and bn ∈R[x−n] are polynomials, and x−n includes all variables besides those in xn. Definition 3 (Multilinear game). An N-player multilinear game (Ω, f) is a game where for each n ∈[N], the strategy space Ωn is an open, convex subset of Rdn, and the utility fn : Ω→R is a multilinear polynomial. That is, each utility fn extends to a multilinear polynomial on all of Rd1 × · · · × RdN . 2.1 Individual and collective rationality: two solution concepts A fundamental aspect of non-cooperative games is that, even though players cannot directly control how others behave, the payoff that each receives nevertheless depends on their joint decision. This leads to the Nash equilibrium as a solution concept for games—joint strategies where it is not possible to improve one’s own utility without coordinating with other players. It is a notion of individual rationality: Definition 4 (Nash equilibrium). A joint decision vector x∗∈Ωis a Nash equilibrium if: ∀n ∈[N], x∗ n ∈arg max xn∈Ωn fn(xn; x∗ −n). In contrast, if as in the multi-objective optimization, players are cooperative and can coordinate, then a minimal solution concept is weak Pareto optimality, which describes joint decisions where there is no way to strictly improve everyone’s utilities. It is a notion of collective rationality: Definition 5 (Weak Pareto optimality). A joint decision vector x∗∈Ωis weakly Pareto optimal if there does not exist any x ∈Ωsuch that fn(x) > fn(x∗) for all n ∈[N]. These solution concepts are generally incomparable and independent of each other. Famous examples, including the prisoner’s dilemma or the tragedy of the commons, demonstrate that Nash equilibria and Pareto optimal strategies can even be disjoint—in the absence of mechanisms to enable or enforce cooperation, socially-optimal outcomes can be unstable under individually-rational or selfish behavior. This work refines the relationship between Nash stability and Pareto optimality from the perspective of learning in games, which is a dynamical perspective in game theory motivated by a simple observation: a solution concept has no bearing on practice if players of a game cannot find it. Eventually, we will show that the interior Nash equilibria that can be robustly found must, in fact, be strategically Pareto optimal, a notion related to Pareto optimality that is well-suited for games. We will motivate and introduce this in the next section. To do so, we now turn to how players come to discover and play a Nash equilibrium. 2.2 Uncoupled learning dynamics and strategic equivalence Consider a model of learning where players learn to play the game via trial-and-error: they simply play many rounds of the game. As players may observe the joint strategies x(t) chosen over time t, they can also continually adjust their own strategies. A learning rule/dynamics specifies how a player n ∈[N] chooses to act each round xn(t) as a function of past experiences, namely x(s) where s < t, as well as any additional information they may have about the game—for example, we assume players know their own utilities. A basic informational constraint we impose is that the players are uncoupled, which means that a player’s learning rule cannot depend on the utilities or learning rules of other players (Hart and Mas-Colell, 2003). Intuitively, this means that players do not know what others want, nor how others learn from past experiences. Even if they were so inclined, players do not know how to directly be altruistic or malicious. This constraint in a sense enforces selfish behavior, or individual rationality. To discuss the connection between the economic and the dynamic properties of games, we need to focus on the ‘active ingredients’ in the utilities that play a role in the dynamics. In existing literature, this motivates the decomposition of utilities into their strategic and non-strategic components (Candogan et al., 2011). In multilinear games, the strategic component precisely consists of the terms in the polynomial fn that depends on the nth set of variables xn, while the non-strategic terms are those that vary independently of xn. Games with the same strategic components are called strategically equivalent (Monderer and Shapley, 1996). 5 Definition 6 (Strategic and non-strategic components). Let (Ω, f) be a multilinear game. For each n ∈[N], the utility fn is a polynomial that is affine in xn, with the following decomposition given in Definition 2, fn(x) = An(x−n)⊤xn | {z } strategic + bn(x−n) | {z } non-strategic . The strategic component of fn is the term linear in xn. The non-strategic component of fn is the offset term. Two utilities are strategically equivalent if they have the same strategic component, denoted by fn ≃SE f ′ n. The target of this work is to illuminate the game-theoretic properties of equilibria found or not found by simple, uncoupled learning dynamics. As the dynamics we study, Equation (1) below, indeed turn out to be invariant to changes in the non-strategic component of the game (Lemma C.1), we expect results only up to strategic equivalence. This motivates the following notion of strategic Pareto optimality. Definition 7 (Strategic Pareto optimality). Let (Ω, f) be a multilinear game. A joint decision vector x∗∈Ω is strategically Pareto optimal if it is weakly Pareto optimal for the strategic component of the game. Akin to the classic story about Nash equilibria and weak Pareto optimality, a simple example shows that neither the condition of being a Nash equilibrium nor being strategically Pareto optimal imply each other (Example A.1). But surprisingly, despite uncoupledness, players can be guaranteed to converge to a mixed Nash equilibrium that is strategically Pareto optimal, while they are not guaranteed to converge to a mixed Nash equilibrium without strategic Pareto optimality under mild non-degeneracy assumptions. In the next section, we formally introduce the learning dynamics considered in this work. 2.3 A simple class of learning rules Let (Ω, f) be an N-player game where each Ωn is a convex domain. Let h = (h1, . . . , hN) be a collection of strictly convex regularizers hn : Ωn →R. The dynamics we study are based on smoothed best-responses, where players maximize their own utilities by solving a regularized optimization problem: Definition 8 (β-smoothed best-response). The β-smoothed best-response map Φβ = (Φβ 1, . . . , Φβ N) : Ω→Ω with respect to a set of strictly convex regularizers h for any β > 0 is given by: ∀n ∈[N], Φβ n(x) = arg max x′n∈Ωn fn(x′ n; x−n) −βhn(x′ n). Throughout this work, we assume that the regularizers are steep, which is a standard assumption from convex optimization that guarantees that Φβ map into the interior of Ω(Rockafellar, 1997, Theorem 26.1). Definition 9 (Steep regularizer). Let Ωbe a compact and convex domain, and h : Ω→R be strictly convex and smooth on int(Ω). It is steep if for any sequence xk ∈Ωconverging to a boundary point of Ω, lim k→∞ ∇h(xk) = ∞. In the case of normal-form games, a canonical choice is entropy regularization, hn(xn) = P i xn,i log xn,i, which is a steep regularizer on the probability simplex. The smoothed best-response map is also called quantal response, and the smoothed-best responses are given by the softmax function: Φβ n(x) ∝exp  1 β ∇nfn(x)  . The specific class of discrete-time dynamics that we study in this work is the following: Incremental smoothed best-response dynamics. Let η ∈(0, 1) be a learning rate and Φβ be a β-smoothed best-response map induced by steep regularizers. The (Φβ, η)-averaging dynamics is the dynamics x(t) given by the transition map: x(t) = (1 −η) x(t −1) + η Φβx(t −1)  , (1) where the joint strategy x(t) played in the round t is an weighted average of the strategy x(t−1) played in the previous round t −1 with its smoothed best response Φβ(x(t −1)). 6 The fixed points of (1) are precisely the fixed points of the smoothed best-response map Φβ, which are called smoothed equilibria. Importantly, smoothed equilibria are approximate Nash equilibria (Lemma B.1), and they converge to Nash equilibria as the smoothness parameter β goes to zero (Lemma B.2). When the dynamics converge, they achieve meaningful solutions in a game-theoretic sense. Definition 10 (β-smoothed equilibrium). Let Φβ be a β-smoothed best-response map. A joint decision vector xβ ∈Ωis a β-smoothed equilibrium if it is a fixed point xβ = Φβ(xβ). Definition 11 (ε-approximate Nash equilibrium). Let ε ≥0. We say that a joint decision vector xε ∈Ωis an ε-approximate Nash equilibrium if fn(xn, xε −n) ≤fn(xε) + ε for all x ∈Ωand n ∈[N]. Economic interpretation of game dynamics Smoothed best-response is a common model of bounded rationality, where the solution that a player finds is only an approximate best-reply to the strategies x−n chosen by the others, perhaps due to noise, or constraints on information or computation. The player approaches ‘perfect rationality’ as the parameter β goes to zero. Typically, smoothed best-response and its limiting best-response are rational from the perspective of players that are myopic, which means that they do not try to steer the long-term behavior of the dynamics. Myopia is often justified in the evolutionary game theory setting, where each ‘player’ corresponds to a large population of individuals. As each individual has an infinitesimal impact on the long-term behavior of the dynamics, the most individually-rational behavior is to optimize for immediate outcomes. Myopia is also justified if the individual’s lifespan is much shorter than the timescale over which the dynamics evolve. Under this context, the incremental or averaging aspect of the dynamics (1) also arise naturally; Fu- denberg and Levine (1998) also call these “partial” dynamics. It capture situations where only a random fraction of individual within the populations play the game at each round. As only a fraction of individ- uals update their strategies, the strategy profile only partially evolves toward the smoothed best-response direction. In short, the dynamics (1) can be considered as a model for strategic interactions across large, imperfectly-rational populations, such as at a traffic stop where individuals with bounded rationality are intermittently matched up with others to play a dangerous game of chicken. 3 Uniform stability of Nash equilibria The local behavior of a smooth, non-degenerate dynamical system is determined by linearizing the dynamics. In learning in games literature, this motivates the game Jacobian (e.g. Letcher et al. (2019)). It arises naturally from the analysis of gradient ascent dynamics, where players update their strategies along the direction of steepest ascent ∇nfn(x), and the game Jacobian is the Jacobian of the gradient ascent dynamics: Definition 12 (Jacobian of the game). Let (Ω, f) be an N-player multilinear game and let x ∈Ω. The game Jacobian J(x) is the Jacobian of the map x 7→(∇1f1(x), . . . , ∇NfN(x)), so that: ∀n, m ∈[N], Jnm(x) = ∇2 nmfn(x). In particular, the block diagonals Jnn(x) = 0 are zero matrices. We use the game Jacobian to define a notion of stability of a Nash equilibrium x∗. It depends on the eigenvalues of the matrices H−1J(x∗), where H ranges over the family of positive-definite, block-diagonal matrices. If such an eigenvalue has positive real part, the dynamics around x∗become unstable for some regularizers (Proposition 2). Due to that fact that the block diagonals of the game Jacobian are zero matrices, sum of all eigenvalues is zero: tr(H−1J(x)) = 0. As a result, instability also arises if there is an eigenvalue with negative real part—some other eigenvalue must have positive real part. And so, the only possibility for stability is when all eigenvalues of matrices H−1J(x∗) are purely imaginary. We call this uniform stability. Formally, this definition uses the following, purely linear-algebraic matrix condition (Definition 13). Notice that uniform stability is a bilinear (that is, a second-order) condition depending only on the strategic component of the game. 7 Definition 13 (Uniformly-stable block matrix). Let J ∈Rd×d be a block-matrix, where d is the sum of its block dimensions (d1, . . . , dN). We say that J is uniformly stable if the eigenvalues of H−1J are purely imaginary for all positive-definite, block-diagonal H ∈Rd×d, spec H−1J  ⊆iR, where H = diag(H1, . . . , HN) and Hn ∈Rdn×dn is positive definite. Definition 14 (Uniformly stable equilibria). Let (Ω, f) be an N-player multilinear game. We say that a Nash equilibrium x∗∈Ωis (pointwise) uniformly stable if the Jacobian J(x∗) is uniformly stable. It is locally uniformly stable if is contained in an open set U ⊂Ωsuch that J(x) is uniformly stable for all x ∈U. Section 4 develops the economic meaning of uniform stability. Section 5 shows the consequences of this spectral condition for dynamic stability. These two sections are independent of each other and can be read in either order. Before doing so, we provide some intuitive and formal motivation for uniform stability. 3.1 Motivation for uniform stability The concept of uniform stability arises when we relax the assumption that players are ‘perfectly rational’. Unlike gradient-ascent players, smoothed best-response players do not necessarily update their strategies in the direction of steepest ascent (Lemma 2). With the existence of the regularizers, players are no longer maximally efficient—due to either bounded rationality or a need for better stability in their strategies, players only aim to improve modestly. At a current joint strategy x, let’s say that the nth player makes an update along the vn direction. This leads to a local improvement whenever ∇nfn(x)⊤vn > 0. And this occurs if and only if there is a positive-definite matrix Hn ≻0 so that the update direction vn and the gradient ∇nfn(x) are related by: vn = H−1 n ∇nfn(x). This is a consequence of a simple linear-algebraic result, proved in Section D: Lemma 1. Let u, v ∈Rm \ {0}. Then, u⊤v > 0 if and only if there is a positive-definite matrix H so that: u = Hv. In optimization, this result justifies methods such as preconditioned gradient ascent and mirror ascent. In learning in games, it also directly justifies smoothed best-response dynamics as well as positive-definite adap- tive dynamics (Hopkins, 1999). In the current paper, this result leads to the notion of uniform stability by pointing us toward the preconditioned Jacobian maps x 7→ H−1 1 ∇1f1(x), . . . , H−1 N ∇NfN(x)  . As uncoupled players do not exactly know how others are learning, stability is defined uniformly over all positive-definite matrices. This is a fairly strong condition; at a high level, this is the price that comes with considering a broad class of learning dynamics, where players may have bounded rationality. A formal connection between uniform stability and smoothed best-response dynamics is obtained by computing the Jacobian of the smoothed best-response map Φβ. From Lemma 2 below, it becomes clear that the eigenvalues of ∇Φβ(x∗) are related to the spectra of matrices H−1J(x∗). This result allows us to achieve in Section 5 the convergence guarantees assuming local uniform stability, as well as non-convergence without pointwise uniform stability. The proof of Lemma 2 is given in Section E. Lemma 2 (Gradient of smoothed best-response). Let (Ω, f) be a multilinear game, where Ωn ⊂Rdn for each n ∈[N], and let h be a set of smooth and strictly convex regularizers, hn : Ωn →R. Then, the associated β-smoothed best-response map Φβ is well-defined and smooth, where the gradient of Φβ at x is given by: ∇Φβ(x) = 1 β H(x)−1J(x), where H(x)nm = ( ∇2 nhn(xn) n = m 0dn×dm n ̸= m, and H(x) ∈Rd×d is a positive-definite, block-diagonal matrix with d = d1 + · · · + dN. Conversely, Lemma L.2 shows that for any choice x and positive-definite, block diagonal matrix H−1, there exists a collection of steep regularizers such that ∇Φβ(x) is precisely given by H−1J(x), up to scaling. 8 4 The economic meaning of uniform stability We will introduce assumptions in Section 4.1, define and motivate strategic stationarity in Section 4.2, before finally providing the economic characterizations of uniform stability in Section 4.3. 4.1 Structural assumptions on the game The following assumptions will help us exclude trivial ways in which a strategy is strategically Pareto optimal. For example, if one player has constant utility, then every joint decision is strategically Pareto optimal, since it is impossible to strictly improve that player’s utility. Specifically, we assume players have bi-directional interactions, ruling out one-sided strategic interactions between any two players. For example, when one player’s utility directly depends on another, but not vice versa. Definition 15 (Bi-directional interactions). Let (Ω, f) be an N-player game with smooth utilities and let x ∈int(Ω). We say that two players n, m ∈[N] have bi-directional interactions at x if for all n, m ∈[N]: ker Jnm(x)  = ker Jmn(x)⊤ . We will also assume that the interaction graph is connected, where this is a graph that captures when a player is strategically responsive to the action of another: an edge from player n to player m denotes that the block in the game Jacobian Jnm(x) is non-zero. Definition 16 (Interaction graph). Let (Ω, f) be an N-player game and let x ∈int(Ω). The interaction graph G(x) = (V, E) at x is the directed graph whose vertices are the players V = [N], and the edges are: (n, m) ∈E ⇐⇒ ∇2 nmfn(x) ̸= 0. If the game is bi-directional at x, then G(x) = (V, E) is undirected, since (n, m) ∈E if and only if (m, n) ∈E. Mathematically speaking, this assumption is mild and generic in multilinear games. It ensures that the lower-order, bilinear terms around an equilibrium dominate each player’s utility. 4.2 Necessity and sufficiency for strategic Pareto optimality We define a second-order (bilinear) relaxation of strategic Pareto optimality, which we call strategic Pareto stationarity, or strategic stationarity, for short. It will make use of the following matrix concept: Definition 17 (λ-skew matrix). Let J ∈Rd×d be a block-matrix, where d is the sum of its block dimensions (d1, . . . , dN). Let λ = (λ1, . . . , λN) be a tuple of positive scalars. We say that J is λ-skew-symmetric if: ∀n, m ∈[N], λnJnm = −λmJ⊤ mn. In other words, the matrix ΛJ is skew-symmetric, where Λ is block-diagonal with Λnn = λnIdn×dn. Definition 18 (Strategic Pareto stationarity). Let (Ω, f) be an N-player multilinear game. A Nash equi- librium x∗is strategically Pareto stationary if there is a set of positive scalars λ so that J(x∗) is λ-skew. To understand the meaning of strategic Pareto stationarity, it is useful to first focus on polymatrix games, which are multilinear games whose utilities are polynomials of degree at most two—they have up to bilinear terms. These games are also called graphical games (Kearns, 2008; Cai et al., 2016). Formally: Definition 19 (Polymatrix game). A multilinear game is polymatrix if its utilities are bilinear polynomials. The payoffs in these games only depend on the pairwise interactions between players (higher-order k-way interactions are captured by the degree-k terms in the polynomial utilities). Furthermore, the Jacobian of a polymatrix game is a constant J(x) ≡J, so if a polymatrix game has a Nash equilibrium at x∗, then a particularly nice and strategically-equivalent form of the utility fn is given by: fn(x) ≃SE X m̸=n (xn −x∗ n)⊤Jnm(xm −x∗ m). 9 From this form of the utilities, we can see that when the Nash equilibrium of a polymatrix game is strate- gically Pareto stationary, every pair of player must be in strict competition. Classically, strict competition describes two-player games where any strategy that improves one player’s utility must worsen the other’s; that is, every strategy is Pareto optimal (Adler et al., 2009). Indeed, we have here that Jnm = −J⊤ mn, up to a change of units. Thus, strategic Pareto stationarity in the bilinear setting means that if two players n and m deviate away from the Nash equilibrium, any resulting change in the strategic components of their utilities must be opposite in sign. Generalizing this, we obtain a sufficient condition for polymatrix games: Lemma 3 (Sufficient condition for strategic Pareto optimality). Let (Ω, f) be an N-player polymatrix game with a Nash equilibrium x∗. If x∗is strategically Pareto stationary, then it is strategically Pareto optimal. This is proved in Section G. Under the assumptions of connected interaction graph and bi-directional interactions at the equilibrium—defined in Section 4.1—the converse is also true, where strategic Pareto optimality implies strategic Pareto stationarity. In fact, this holds generally for multilinear games. With these assumptions, we show that in multilinear games, strategic Pareto stationarity is necessary for optimality. The proof of this result is given in Section H. Proposition 1 (Strategic Pareto stationarity is necessary for strategic Pareto optimality). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗. Let the interaction graph G(x∗) be connected and the game be bi-directional at x∗. If the equilibrium x∗is strategically Pareto optimal, then it is strategically Pareto stationary. 4.3 The economic characterization of uniform stability The formal results concerning the economic meaning of uniform stability follow. Theorem 1 (Local uniform stability implies strategic Pareto optimality). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is locally bi-directional. Suppose the interaction graph is connected at x∗. If the game is locally uniformly stable at x∗, then the equilibrium is strategically Pareto optimal. Theorem 2 (Pointwise uniform stability is equivalent to strategic Pareto stationarity). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is bi-directional. Suppose that the interaction graph is connected at x∗. Then, x∗is uniformly stable if and only if it is strategically Pareto stationary. Theorem 2 and Proposition 1 prove that strategic Pareto optimality implies pointwise uniform stability. In a way, it is not surprising that a bilinear condition like strategic stationarity ends up being the adequate solution concept to characterizing uniformly-stable Nash equilibria. Observe that at an equilibrium x∗, the first-order or linear strategic terms ∇nfn(x∗) vanish across every player’s utility, and that uniform stability at x∗can only reveal information only about the bilinear terms ∇2 nmfn(x∗) within the strategic component of the utilities; the condition depends purely on the Jacobian of the game J(x∗) at that point (see Definition 14). 5 Convergence under uniform stability The fundamental result by Hart and Mas-Colell (2003) shows that, in general, uncoupled dynamics cannot be guaranteed to asymptotically converge to mixed Nash equilibria. In this section, we refine this result for smoothed best-response learning dynamics (1), providing conditions for when the dynamics are unstable and fail to converge, and when they are stable and, in fact, globally converge. We consider two stability classes for mixed equilibria: those that are pointwise uniformly stable, and those that are locally uniformly stable. For simplicity, we consider a Nash equilibrium x∗that is approximated by the sequence of β-smoothed equilibria xβ as β goes to zero (see Lemma B.2). The first result shows that when x∗is not uniformly stable, then the (Φβ, η)-averaging dynamics can become unstable around xβ once β becomes sufficiently small. Intuitively, this means that x∗is ‘inapproximable’ by generic smoothed best-response dynamics. In contrast, our second result shows that if x∗is contained in a region that is uniformly stable, then it can be approximated to arbitrary precision by generic, smoothed best-response dynamics. It turns out that the price of higher precision, and in turn smaller β, is a slower rate of convergence due to the requirement of a smaller learning rate η. Informally: 10 Non-convergence result. If a Nash equilibrium x∗is not uniformly stable, then there are smoothed best-responses that cannot be stabilized: there are regularizers h so that when β > 0 is sufficiently small, all (Φβ, η)-averaging dynamics become unstable about the smoothed equilibrium xβ (Proposition 2). Convergence result. If a Nash equilibrium is x∗is contained in a region that is uniformly stable, then every smoothed best-responses can be stabilized: for all Φβ and sufficiently small η > 0, the (Φβ, η)-averaging dynamics globally converges to xβ (Theorem 3). 5.1 Stability concepts in dynamical systems Before we state the main results, recall the standard notion of stability from dynamical systems: Definition 20 (Stability of fixed points). Let Ω⊂Rd be a domain and let Φ : Ω→Ωbe a transition map defining the discrete-time dynamical system x(t + 1) = Φ(x(t)). Let x∗∈Ωbe a fixed point of Φ. • The fixed point x∗is stable if for every ε > 0, there exists some δ > 0 such that: x(0) −x∗ < δ =⇒ ∀t ≥0, x(t) −x∗ < ε. • Moreover, it is asymptotically stable if there exists some δ > 0 such that: x(0) −x∗ < δ =⇒ lim t→∞ x(t) −x∗ = 0. We say that x∗is unstable if it is not stable. We will consider families of β-smoothed best-response dynamics, whose fixed points xβ converge to a Nash equilibrium x∗as β goes to zero. This motivates the following notion of stabilization to Nash equilibria, where each xβ is an asymptotically-stable fixed point of sufficiently fine (Φβ, η)-averaging dynamics. Definition 21 (Stabilizing smoothed best-response dynamics). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗. Let h be a set of smooth and strictly convex regularizers. We say that smoothed best-response dynamics with respect to h stabilizes to x∗if the following hold: (a) The β-smoothed equilibria xβ converge to x∗as β goes to zero. (b) For each β > 0, the β-smoothed equilibrium xβ is an asymptotically-stable fixed point of the (Φβ, η)- averaging dynamics (1) whenever η ∈(0, 1) is sufficiently small. 5.2 Convergence results for smoothed best-response dynamics The first result shows that if a Nash equilibrium is not uniformly stable, then not only is it not possible to stabilize generic smooth best-response dynamics to it (the dynamics to xβ are not asymptotically stable for arbitrarily small β), but in fact, the dynamics can become unstable. The following is proved in Section J. Proposition 2 (Inapproximability of unstable equilibria). Let (Ω, f) be an N-player multilinear game with an interior Nash equilibrium x∗. Suppose that x∗is not uniformly stable. Then, there is some collection of smooth and strictly convex regularizers h such that smoothed best-response dynamics does not stabilize to x∗. In fact, even if the sequence of β-smoothed equilibria xβ converges to x∗, each xβ is an unstable fixed point of the (Φβ, η)-averaging dynamics (1) for all η ∈(0, 1) and sufficiently small β > 0. Now, we present the main convergence result concerning Nash equilibria with local uniform stability. It turns out that due to additional structure in multilinear games, convergence is global: no matter where the dynamics is initialized, it will converge to the smoothed equilibrium. The following is proved in Section K. Theorem 3 (Convergence to equilibria in uniformly-stable regions). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗that is locally uniformly stable. Over arbitrary choice of smooth and strictly convex regularizer h, all smoothed best-response dynamics stabilize to x∗. In particular, there is a constant Cf > 0 depending only on the game so that when η ≤Cfβ2, the (Φβ, η)-averaging dynamics globally converge: x(t) −xβ ≤exp  −ηt + ln N 2  . Using the fact that β-smoothed equilibria are O(β)-approximate Nash equilibria, this implies that smoothed best-response dynamics with averaging achieve T −1/2-convergence rate to Nash equilibria. 11 (a) Dynamics across smoothing parameters β (b) Dynamics across learning rates η Figure 2: The trajectories of β-smoothed best-response dynamics with η-learning rate toward a uniformly-stable, mixed Nash equilibrium (star) in a two-player normal-form game. (a) The β-smoothed equilibria become better approximations of the Nash equilibrium as β shrinks, but the dynamics become less stable and exhibit more cycling. The figure on the left plots the trajectories initialized at the black dot for varying smoothing β and fixed averaging η. (b) The learning dynamics converge to β-smoothed equilibria once η becomes sufficiently small, and it does so with greater stability with smaller η. However, stability comes at the expense of slower rate of convergence. The figure on the right plots the trajectories for fixed β and varying η. 6 Uniform stability for partially-mixed equilibria The main convergence result (Theorem 3) can be extended to equilibria that are not fully mixed, and are not interior equilibria. A Nash equilibrium x∗is on the boundary of the probability simplex if there are players who never play certain actions. This can happen if these actions are strictly dominated, in which case a perfectly rational player will never play those strategies, while a player with bounded rationality will play them with decreasing frequency. Intuitively, as long as the players are not overly irrational, the stability of smoothed best-response dynamics around these equilibria should not hinge upon these suboptimal actions. Before we make this intuition formal, recall some notations for normal-form game: • The joint strategy space Ω= Ω1 × · · · × ΩN is a product of probability simplices, Ωn = ∆kn−1, where the nth player may randomize over kn pure strategies. • The vertices of the simplex Ωn correspond to pure strategies of player n. For each alternative i ∈[kn], we let en,i ∈Ωn denote the pure strategy where the nth player deterministically chooses the ith action. The support of a strategy is the set of pure strategies with positive probability of being played: Definition 22 (Support of a strategy). For any player n, the support of a strategy xn ∈Ωn is the set: supp(xn) =  i ∈[kn] : xn,i ̸= 0 . Given x ∈Ω, define the reduced strategy sets: Ωn(xn) :=  x′ n ∈Ωn : supp(x′ n) ⊂supp(xn) and Ω(x) := Y i∈[N] Ωn(xn). (2) Note that Ωn(xn) is an (k′ n −1)-dimensional simplex where k′ n = |supp(xn)|. 12 For the convergence result, we consider quasi-strict equilibria, which are Nash equilibria where each player fully mixes on the set of best responses (Harsanyi and Selten, 1988; Van Damme, 1992).2 This captures the property that if an alternative is never played in an equilibrium, then it must be strictly dominated. Definition 23 (Quasi-strict equilibrium). We say that a Nash equilibrium x∗of an N-player normal form game is fully-mixed on the best-response set or quasi-strict if for each player n ∈[N]: i ∈supp(x∗ n) ⇐⇒ i ∈arg max i∈[kn] fn(en,i; x∗ −n) Since actions that are not supported by a quasi-strong equilibrium are dominated, we expect that they are strategically irrelevant. This motivates the following refinement of the game (Harsanyi and Selten, 1988), where the irrelevant actions are removed from the game. Definition 24 (Reduced game). Let (Ω, f) be a normal-form game with a Nash equilibrium x∗. The reduced game around x∗is the game on the domain Ω(x∗) with the utilities f restricted to Ω(x∗). In the reduced game, x∗becomes an interior or fully-mixed Nash equilibrium (Corollary L.1), allowing the results in Section 5.2 to be applied. The remaining problem is about how the players learn to converge towards the support of the equilibrium and avoid playing the suboptimal strategies that have been removed from the reduced game. In order to control the rate at which these suboptimal strategies appear in the β-smoothed equilibria xβ, we introduce the following quantitative version of steepness (c.f. Definition 9): Definition 25 (Linear steepness). Let h : ∆k−1 →R be a steep regularizer. For all v ∈Rk, define the map: xβ(v) = arg max x∈∆k−1 v⊤x −βh(x). For each i ∈[i] and ε > 0, let Vi(ε) = {v ∈Rk : vi < maxj∈[k] vj −ε} be the set of vectors where the ith component is ε-suboptimal. We say that h is linearly steep if the following holds for all i ∈[k] and ε > 0: lim β→0 sup v∈Vi(ε) xβ(v)i β = 0. In words, this states that the probability mass xβ(v)i placed on any alternative i ∈[k] must shrink sub- linearly with β when the value vi is ε-suboptimal. In the case that h is the entropy map, Example L.1 shows that, in fact, this distance shrinks as exp(−Ω(1/β)) at a much faster, exponential rate. As a consequence of linear steepness, the following shows that the rate at which suboptimal alternatives are played in β-smoothed equilibria xβ also vanish at a sublinear rate o(β): Lemma 4 (Dominated strategies occur at β-sublinear rates in β-smoothed equilibria). Let (Ω, f) be an N- player normal-form game and h be a collection of linearly steep regularizers. Suppose that as β goes to zero, the sequence of of β-smoothed equilibria xβ converges to a quasi-strict equilibrium x∗. Let Π : Ω→Ω(x∗) be the orthogonal projection onto the support of x∗. Then: lim β→0 xβ −Πxβ β = 0. Before we can give the main convergence result for all quasi-strict equilibria, we need a technical condition to ensure that pseudoinverse ∇2h(x)+ extends smoothly to the boundary of the simplex: Definition 26 (Proper regularizer). A map h : ∆k−1 →R on the simplex is a proper regularizer if: • The restriction of h onto each face of the simplex is steep regularizer. • The Moore-Penrose pseudoinverse of the Hessian ∇2h(x)+ is smooth on the simplex. 2Classically, these are called quasi-strong equilibria. We prefer the term quasi-strict since strong Nash equilibria is another, unrelated refinement of the Nash equilibrium concept, which we also make use of. For discussion, see Van Damme (1992). 13 See Section L.1 for formal definition of the faces of the simplex and what it means to be smooth on it. The following convergence result shows that if x∗is a quasi-strict equilibrium, it suffices to analyze its uniform stability within the reduced game, neglecting all suboptimal alternatives. This is because whenever the regularizers are linearly steep, any potentially de-stabilizing effect of these additional strategies are overwhelmed by the stabilizing effects of the averaging dynamics. Theorem 4 (Convergence to non-interior equilibria). Let (Ω, f) be an N-player, normal-form game and let h be a collection of linearly steep, proper regularizers. Suppose that x∗is a quasi-strict Nash equilibrium and that it is the limit of the β-smoothed equilibria xβ as β goes to zero. If the reduced game around x∗is locally uniformly stable, then smoothed best-response dynamics stabilize to x∗. In particular, there exists a constant Cf > 0 such that for all sufficiently small β > 0 and η ≤Cfβ2, the (Φβ, η)-averaging dynamics is locally asymptotically stable at xβ, where the operator norm of its Jacobian at xβ is bounded by: (1 −η)I + η∇Φβ(xβ) 2 ≤exp  −η 2  . The results of this section are proved in Section L. 7 Discussion In this work, we have introduced a theory of non-asymptotic stability for learning in games. We showed a fairly tight connection between uniform stability and collective rationality. And through uniform stability, we studied the last-iterate convergence behavior for the class of incremental, smoothed best-response dynamics. We close with a few open questions that we believe would be important to pursue. Strengthening the necessity of uniform stability for convergence to mixed Nash equilibria Our main non-convergence result (Proposition 2) shows that if an equilibrium x∗is not uniformly stable, then there exist smoothed best-responses that cannot be stabilized. We believe this can be strengthened, where ‘there exist’ is replaced with ‘almost all’ (in terms of the Hessian of the regularizer at x∗). A second strengthening has to do with the economic properties of the non-converging behaviors. We have shown that players can avoid stabilizing to Pareto inefficient equilibria, which can be seen as a form of collective rationality. However, we have not analyzed if players escape inefficient equilibria in a collectively- rational way: Open Question 1. Are there uncoupled learning dynamics that not only do not stabilize to mixed equilibria that are not uniformly stable, but moreover locally escape them in collectively-rational ways. That is, whenever ∥x(t) −x∗∥< ε is sufficiently close, there exists some δ > 0 such that for some time, all players improve: ∀s ∈(t, t + δ) and n ∈[N], fn x(s)  > fn x(t)  . Non-asymptotic analyses beyond smoothed best-response While incremental, smoothed best-response dynamics is a natural family of learning dynamics. It would be of interest to consider other classes of updates that can lead to mixed Nash equilibria. Relaxation of bi-directionality assumption It would be interesting to generalize beyond the assump- tion of bi-directional interactions. In particular, the bi-directional interactions are represented by undirected graphs. One can also consider interaction structures that are represented by directed graphs, which may lead to longer-range interactions (e.g., players form a directed cycle as in the Cyclic Matching Pennies game of Kleinberg et al. (2011)). Acknowledgments We express our gratitude toward Mike Jordan, for his inspiration and suggestions. 14 References Ilan Adler, Constantinos Daskalakis, and Christos H Papadimitriou. A note on strictly competitive games. In International Workshop on Internet and Network Economics, pages 471–474. Springer, 2009. Armen A Alchian. Uncertainty, evolution, and economic theory. Journal of political economy, 58(3):211–221, 1950. Carlos Al´os-Ferrer and Nick Netzer. The logit-response dynamics. Games and Economic Behavior, 68(2): 413–427, 2010. Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. On last-iterate convergence beyond zero-sum games. In International Conference on Machine Learning, pages 536–581. PMLR, 2022. Robert J Aumann. Acceptable points in general cooperative n-person games. Contributions to the Theory of Games, 4:287, 1959. James P Bailey, Sai Ganesh Nagarajan, and Georgios Piliouras. Stochastic multiplicative weights updates in zero-sum games. arXiv preprint arXiv:2110.02134, 2021. David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In International Conference on Machine Learning, pages 354–363. PMLR, 2018. Michel Benaım and Morris W Hirsch. Mixed equilibria and dynamical systems arising from fictitious play in perturbed games. Games and Economic Behavior, 29(1-2):36–72, 1999. Nicoletta Bof, Ruggero Carli, and Luca Schenato. Lyapunov theory for discrete time systems. arXiv preprint arXiv:1809.05289, 2018. Yang Cai, Ozan Candogan, Constantinos Daskalakis, and Christos Papadimitriou. Zero-sum polymatrix games: A generalization of minmax. Mathematics of Operations Research, 41(2):648–655, 2016. Ozan Candogan, Ishai Menache, Asuman Ozdaglar, and Pablo A Parrilo. Flows and decompositions of games: Harmonic and potential games. Mathematics of Operations Research, 36(3):474–503, 2011. Vincent P Crawford. Learning behavior and mixed-strategy Nash equilibria. Journal of Economic Behavior & Organization, 6(1):69–78, 1985. Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In 10th Innovations in Theoretical Computer Science (ITCS) conference, ITCS 2019, 2019. Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a Nash equilibrium. Communications of the ACM, 52(2):89–97, 2009. Constantinos Daskalakis, Rafael Frongillo, Christos H Papadimitriou, George Pierrakos, and Gregory Valiant. On learning algorithms for Nash equilibria. In International Symposium on Algorithmic Game Theory, pages 114–125. Springer, 2010. ´Etienne de Montbrun and J´erˆome Renault. Optimistic gradient descent ascent in zero-sum and general-sum bilinear games. arXiv preprint arXiv:2208.03085, 2022. Gerard Debreu. Theory of value: An axiomatic analysis of economic equilibrium. Wiley, New York, 1959. Glenn Ellison and Drew Fudenberg. Learning purified mixed equilibria. Journal of Economic Theory, 90(1): 84–115, 2000. Ludwig Elsner. An optimal bound for the spectral variation of two matrices. Linear algebra and its appli- cations, 71:77–80, 1985. 15 Milton Friedman. The methodology of positive economics. The Essays in Positive Economics, 1953. Drew Fudenberg and David M Kreps. Learning mixed equilibria. Games and economic behavior, 5(3): 320–367, 1993. Drew Fudenberg and David K Levine. The theory of learning in games, volume 2. MIT press, 1998. Angeliki Giannou, Emmanouil Vasileios Vlatakis-Gkaragkounis, and Panayotis Mertikopoulos. Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information. In Conference on Learning Theory, pages 2147–2148. PMLR, 2021. Andrew Gilpin, Javier Pena, and Tuomas Sandholm. First-order algorithm with convergence for-equilibrium in two-person zero-sum games. Mathematical programming, 133(1):279–298, 2012. Steven Gjerstad. The rate of convergence of continuous fictitious play. Economic Theory, 7:161–177, 1996. Jacob K Goeree, Charles A Holt, and Thomas R Palfrey. Quantal response equilibrium: A stochastic theory of games. In Quantal response equilibrium. Princeton University Press, 2016. Garrett Hardin. The tragedy of the commons. Science, 162(3859):1243–1248, 1968. John C Harsanyi. Games with randomly disturbed payoffs: A new rationale for mixed-strategy equilibrium points. International journal of game theory, 2(1):1–23, 1973. John C Harsanyi and Reinhard Selten. A general theory of equilibrium selection in games. MIT Press Books, 1, 1988. Sergiu Hart and Andreu Mas-Colell. Uncoupled dynamics do not lead to Nash equilibrium. American Economic Review, 93(5):1830–1836, 2003. Josef Hofbauer and Karl Sigmund. Evolutionary games and population dynamics. Cambridge university press, 1998. Josef Hofbauer and Sylvain Sorin. Best response dynamics for continuous zero-sum games. Discrete and Continuous Dynamical Systems Series B, 6(1):215, 2006. Ed Hopkins. A note on best response dynamics. Games and Economic Behavior, 29(1-2):138–150, 1999. Roger A Horn and Charles R Johnson. Matrix analysis. Cambridge university press, 2012. Rufus Isaacs. Differential games: a mathematical theory with applications to warfare and pursuit, control and optimization. Courier Corporation, 1999. James S Jordan. Three problems in learning mixed-strategy Nash equilibria. Games and Economic Behavior, 5(3):368–386, 1993. Michael Kearns. Graphical games. In The New Palgrave Dictionary of Economics, pages 1–4. Springer, 2008. Robert D Kleinberg, Katrina Ligett, Georgios Piliouras, and ´Eva Tardos. Beyond the Nash equilibrium barrier. In ICS, volume 20, pages 125–140, 2011. Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. In Annual symposium on theoretical aspects of computer science, pages 404–413. Springer, 1999. Vijay Krishna and Tomas Sj¨ostr¨om. On the convergence of fictitious play. Mathematics of Operations Research, 23(2):479–511, 1998. Peter D Lax. Linear algebra and its applications. John Wiley & Sons, 2007. Alistair Letcher, David Balduzzi, S´ebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. Journal of Machine Learning Research, 20(84):1–40, 2019. 16 R Duncan Luce and Howard Raiffa. Games and decisions: Introduction and critical survey. John Wiley & Sons, Inc., 1957. Richard D McKelvey and Thomas R Palfrey. Quantal response equilibria for normal form games. Games and economic behavior, 10(1):6–38, 1995. Panayotis Mertikopoulos and William H Sandholm. Learning in games via reinforcement and regularization. Mathematics of Operations Research, 41(4):1297–1324, 2016. Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In Proceedings of the twenty-ninth annual ACM-SIAM symposium on discrete algorithms, pages 2703–2717. SIAM, 2018. Jason Milionis, Christos Papadimitriou, Georgios Piliouras, and Kelly Spendlove. An impossibility theorem in game dynamics. Proceedings of the National Academy of Sciences, 120(41):e2305349120, 2023. Dov Monderer and Lloyd S Shapley. Potential games. Games and economic behavior, 14(1):124–143, 1996. Herv´e Moulin and J-P Vial. Strategically zero-sum games: the class of games whose completely mixed equilibria cannot be improved upon. International Journal of Game Theory, 7(3-4):201–221, 1978. John H Nachbar. “Evolutionary” selection dynamics in games: Convergence and limit properties. Interna- tional Journal of Game Theory, 19(1):59–89, 1990. John Nash. Non-cooperative games. Annals of Mathematics, 54(2):286–295, 1951. Georg Ostrovski and Sebastian van Strien. Payoff performance of fictitious play. arXiv preprint arXiv:1308.4049, 2013. Marco Pangallo, James BT Sanders, Tobias Galla, and J Doyne Farmer. Towards a taxonomy of learning dynamics in 2 × 2 games. Games and Economic Behavior, 132:1–21, 2022. Christos H Papadimitriou. The complexity of finding Nash equilibria. Algorithmic game theory, 2:30, 2007. Georgios Piliouras, Lillian Ratliff, Ryann Sim, and Stratis Skoulakis. Fast convergence of optimistic gradient ascent in network zero-sum extensive form games. In International Symposium on Algorithmic Game Theory, pages 383–399. Springer, 2022. Julia Robinson. An iterative method of solving a game. Annals of mathematics, 54(2):296–301, 1951. R Tyrrell Rockafellar. Convex analysis, volume 28. Princeton university press, 1997. Larry Samuelson and Jianbo Zhang. Evolutionary stability in asymmetric games. Journal of economic theory, 57(2):363–391, 1992. Lloyd S Shapley. Some topics in two-person games. Advances in game theory, 1963. Satinder Singh, Michael J Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in general- sum games. In UAI, pages 541–548, 2000. Adam Smith. An inquiry into the nature and causes of the wealth of nations. London: printed for W. Strahan; and T. Cadell, 1776. Dale O Stahl II. On the instability of mixed-strategy Nash equilibria. Journal of Economic Behavior & Organization, 9(1):59–69, 1988. Eric Van Damme. Refinements of Nash equilibrium. In Advances in Economic Theory: Sixth World Congress, volume 1, pages 32–75. Cambridge University Press New York, 1992. Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Thanasis Lianeas, Panayotis Mertikopoulos, and Georgios Piliouras. No-regret learning and mixed Nash equilibria: They do not mix. Advances in Neural Information Processing Systems, 33:1380–1391, 2020. 17 Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Linear last-iterate convergence in con- strained saddle-point optimization. The Ninth International Conference on Learning Representations, 2021. J¨orgen W Weibull. The ‘as if’ approach to game theory: Three positive results and four obstacles. European Economic Review, 38(3-4):868–881, 1994. Sidney G Winter. Satisficing, selection, and the innovating remnant. The Quarterly Journal of Economics, 85(2):237–261, 1971. 18 A Normal-form games and multilinear games A.1 Background for normal-form games In a normal-form game, each player n ∈[N] chooses an action out of kn alternatives or pure strategies, indexed over [kn]. To allow for randomized strategies, let ∆kn−1 denote the probability simplex over [kn], ∆kn−1 = n xn ∈Rkn : X i∈[kn] xn,i = 1 and xn,i ≥0 o . The nth player’s strategy space is Ωn = ∆kn−1, where we say that xn ∈Ωn is a mixed strategy profile, with: ∀i ∈[kn], xn,i = probability that player n plays action i. Assuming players randomize their actions independently, each joint mixed strategy x = (x1, . . . , xN) ∈Ω defines the product distribution x1 ⊗· · · ⊗xN over the joint space of alternatives [k1] × · · · × [kN]. The utilities in a normal-form game are defined by a collection of payoff tensors T 1, . . . , T N ∈Rk1×···×kN , T n i1,...,iN = payoff for player n when the mth player chooses action im for m ∈[N]. The utilities f of a normal-form game extend these payoffs to mixed strategies in all of Ω, and they compute the expected payoffs when each player randomize independently: fn(x) = X i1∈[k1] · · · X iN∈[kN] T n i1,...,iN x1,i1 · · · xN,iN . Lemma A.1 (Normal-form games are multilinear games). Let (Ω, f) be an N-player normal-form game, where the nth player has kn alternatives. There is a multilinear game ( ˜Ω,˜f) with ˜f : Rk1−1 × · · · × RkN−1 → RN and an affine bijection ϕ : int(Ω) →˜Ωsuch that f(x) = (˜f ◦ϕ)(x) for all x ∈int(Ω). Proof of Lemma A.1. This follows from the form of the utility of a normal form game. A.2 Individual and collective rationality are generally incomparable Example A.1 (Strategic Pareto optimality does not imply Nash stability). Consider the following 2-player normal-form game, which is purely strategic, since all rows and columns sum to zero: fA B1 B2 B3 A1 4 −1 −3 A2 −10 2 8 A3 6 −1 −5 fB B1 B2 B3 A1 4 −10 6 A2 −1 2 −1 A3 −3 8 −5 Here, we show the payoff matrices for the A and B players, and bold the preferred action conditioned on the other player. In particular, the joint strategy (A2, B2) is a Nash equilibrium, but not a strategic Pareto optimum, since it can be improved to (A1, B1). And while (A1, B1) is Pareto optimal, it is not a Nash equilibrium. B Smoothed equilibria and dynamics Lemma B.1 (β-smoothed equilibria are O(β)-approximate Nash equilibria). Let h be a set of strictly convex and bounded regularizers, where max hn −min hn ≤C for n ∈[N], inducing the β-smoothed best-response map Φβ. If a joint strategy is a β-smoothed equilibrium of Φβ, then it is a Cβ-approximate Nash equilibrium. Proof of Lemma B.1. Let x be a β-smoothed equilibrium with respect to h. By the definition of Φβ, we have that the following inequality holds for any alternate strategy zn ∈Ωn; fn(zn; x−n) −βhn(zn) ≤fn(x) −βhn(xn). 19 By rearranging and applying the boundedness of the regularizer, we obtain: fn(zn; x−n) ≤fn(x) + β hn(zn) −hn(xn)  ≤fn(x) + Cβ. Thus, x is a Cβ-approximate Nash equilibrium. Lemma B.2 (β-smoothed equilibria converge to Nash equilibria). Let Ωbe a convex, joint strategy space with a metric ρ. Let (Ω, f) be an N-player game with continuous utilities, and h be a set of strictly convex and bounded regularizers. Suppose that the game has a unique Nash equilibrium x∗. Then: lim β→0 ρ(xβ, x∗) = 0. Proof of Lemma B.2. This follows from Lemma B.1 and Lemma B.3. Lemma B.3 (Convergence to Nash equilibria). Let Ωbe a joint strategy space with a metric ρ, and suppose that (Ω, f) is an N-player game with continuous utilities. Let (xεk)k∈N be any sequence of εk-Nash equilibria where εk →0. If x∗is a limit point of the sequence, then x∗is a Nash equilibrium. Proof. The result follows since: 1. Let Xε denote the set of ε-Nash equilibria. It is a closed set for all ε > 0. 2. The ε-Nash equilibria satisfies Xε ⊂Xε′ whenever ε ≤ε′. Also, the set of Nash equilibria is given by: X∗= \ ε>0 Xε. 3. For each ε, the sequence xεk eventually remains in Xε, and so x∗∈Xε for all ε. This implies that x∗is Nash. C Strategic equivalence and games in canonical form For many results of this work, we may often restrict our consideration to games in canonical form: Definition C.1 (Canonical form of a multilinear game). Let (Ω, f) be an N-player multilinear game, where the joint strategy space includes the origin 0 ∈Ω. We say that the game is in canonical form if the utilities have no non-strategic component and if the origin is a Nash equilibrium, with x∗= 0. We can do this without loss of generality because the solution concepts we are concerned with all respect strategic equivalence, namely strategic Pareto optimality (Definition 7), uniform stability (Definition 14), and strategic Pareto stationarity (Definition 18). Therefore, it is enough to prove them for games that are purely strategic (i.e. they have no non-strategic component; see Definition 6). Furthermore, in multilinear games, we can assume that the Nash equilibrium is set at the origin x∗= 0. We may do this without loss of generality since the translation of a multilinear polynomial remains a multilinear polynomial. Lemma C.1. The dynamics (1) are uncoupled and are preserved across strategic equivalence classes. Proof. The dynamics depend only on the strategic components of the game. D Proof of Lemma 1 Lemma 1. Let u, v ∈Rm \ {0}. Then, u⊤v > 0 if and only if there is a positive-definite matrix H so that: u = Hv. Proof. (⇒). Suppose that u⊤v > 0. There are two cases: 20 1. If u and v are linearly dependent, then we can let H = λI where λ = ∥u∥/∥v∥, so that Hv = u. 2. Otherwise, u and v span two dimensions. We can choose an orthonormal basis B such that: B⊤u = α1e1 + α2e2 and B⊤v = β1e1 + β2e2, α1, α2, β1, β2 > 0. This is possible because u⊤v > 0. Now, let Λ = diag(α1/β1, α2/β2, 1, . . . , 1) and set: H = BΛB⊤, so that by construction, u = Hv. (⇐). Suppose that u = Hv for some positive-definite matrix H. Then u⊤v = v⊤Hv > 0. E Proof of Lemma 2 Lemma 2 (Gradient of smoothed best-response). Let (Ω, f) be a multilinear game, where Ωn ⊂Rdn for each n ∈[N], and let h be a set of smooth and strictly convex regularizers, hn : Ωn →R. Then, the associated β-smoothed best-response map Φβ is well-defined and smooth, where the gradient of Φβ at x is given by: ∇Φβ(x) = 1 β H(x)−1J(x), where H(x)nm = ( ∇2 nhn(xn) n = m 0dn×dm n ̸= m, and H(x) ∈Rd×d is a positive-definite, block-diagonal matrix with d = d1 + · · · + dN. Proof. The map x′ n 7→fn(x′ n, x−n) −βnhn(xn) is a strictly concave function over a convex set Ωn. And so, it has a unique maximizer, implying that Φβ(x)n is well-defined. Given any x that maps to a fully mixed strategy Φβ(x) ∈int(Ω), we can apply the Lagrange multiplier theorem to express Φβ(x)n for each n ∈[N] as the stationary point to the Lagrangian: Li(x′ n, λ; x) = fn(x′ n, x−n) −βhn(x′ n) −λ1⊤x′ n, where 1 ∈Rdn is the all-ones vector. This implies that Φβ(x)n is the unique stationary point satisfying: 0 = ∇x′nLn x′ n, λ; x  = ∇x′n  fn(x′ n, x−n) −βhn(x′ n)  −λ1. Let Πn = I − 1 dn 11⊤be the linear projection and Ψn : Ωn × Ω→TΩn be the map: Ψn(x′ n, x) = ∇n  fn(x′ n, x−n) −βhn(x′ n)  . Then, x′ n = Φβ(x)n is the unique solution to Ψn x′ n, x  = 0. By the implicit function theorem, we obtain the gradient of Φβ. The blocks on the diagonals are zero because fn is multilinear, while ∇2 nhn is positive definite on Ωn since hn is strictly convex. F Proof of Theorem 1 Theorem 1 (Local uniform stability implies strategic Pareto optimality). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is locally bi-directional. Suppose the interaction graph is connected at x∗. If the game is locally uniformly stable at x∗, then the equilibrium is strategically Pareto optimal. Proof. Under these assumptions, we claim that if a Nash equilibrium x∗has local uniform stability, then the game must be bilinear. Theorem 2 then shows that x∗must be strategically Pareto stationary. As the game is bilinear, Lemma 3 shows that x∗is strategically Pareto optimal. To finish the proof, we show that the strategic component of the game is bilinear: we prove that for all triples of players n, m, k ∈[N], the higher-order derivative ∇3 nmkfn ≡0 is identically zero everywhere. 21 First, note that as the game is locally uniformly stable at x∗, the Jacobian map J is uniformly λ-skew, where λ is a set of positive scalars independent of x (Lemma F.1). By a positive rescaling of the utilities, we may assume that J(x) is skew-symmetric everywhere, and that for any triple of players n, m, k ∈[N], ∀x ∈Ω, Jnm(x) = −Jmn(x)⊤, Jmk(x) = −Jkm(x)⊤, Jkn(x) = −Jnk(x)⊤. Taking derivatives and using the fact that the partial derivatives commute over smooth functions, we have: ∇3 nmkfn + ∇3 nmkfm ≡0 ∇3 nmkfm + ∇3 nmkfk ≡0 ∇3 nmkfk + ∇3 nmkfn ≡0 This can only happen if the triples ∇3 nmkfn = ∇3 nmkfm = ∇3 nmkfk = 0 are identically zero everywhere. Lemma F.1. Let (Ω, f) be an N-player multilinear game in canonical form. Let x∗be a Nash equilibrium that is locally bi-directional and is locally uniformly stable. There exists a set of positive scalars λ∗such that J(x) is λ∗-skew for all x ∈Ω. Proof. Let the Nash equilibrium be locally bi-directional and locally uniformly stable, in particular, on a neighborhood U of x∗. Then, Theorem 2 shows that J(x) is λ(x)-skew, for some local map x 7→λ(x) on U to the positive orthant of RN. In particular, if we let λnm(x) = λm(x)/λn(x), we obtain: Jnm(x) = −λnm(x)Jmn(x)⊤. (3) We claim that λnm(x) is in fact constant in x. From this, it follows that J(x) is λ∗-skew where λ∗= λ(x∗). To prove the claim, fix some z ∈Ωand consider the ℓth player. For this proof, define the vector z(ε) by: z(ε) = (zℓ+ ε, z−ℓ), which is a perturbation only in the ℓth player’s strategy. By multilinearity, we have that for all n, m ∈[N]: ∇ℓJnm(z(ε)) = ∇ℓJnm(z), showing that ∇ℓJnm(z(ε)) is constant in ε. In particular, this follows from Jnm(z(ε)) = Jnm(z)+∇ℓJnm(z)ε. To compress notation, define the following tensors: Trst =  ∇ℓJnm(z)rs  t and T ′ rst =  ∇ℓJmn(z)sr  t. Let λ ≡λnm(z(ε)). Then, applying the product rule to Equation (3) at x = z(ε), we obtain that: Trst = −∂tλ · T ′ rsuεu −λT ′ rst = −T ′ rsu · h εu · ∂tλ + λ · δut i . The product rule applies because λnm is smooth; it is the quotient of two non-zero polynomials. We obtain: T ′ rsuεu∂tλ + Trsu + λT ′ rsu  · δut = 0. For each fixed pair (r, s), the following (u, t)-slice is either the zero matrix or a full-rank (diagonal) matrix: Trsu + λT ′ rsu  · δut. On the other hand, the following (u, t)-slice is either the zero matrix or a rank-1 matrix: T ′ rsuεu∂tλ. As the sum of these two matrices is zero, the first must have been the zero matrix, whereby we obtain: Trsu + λT ′ rsu = 0. This shows that λ ≡λnm(z(ε)) is constant in ε. This holds for all choices of joint strategy z ∈U and player ℓ, so this implies that λnm is constant everywhere on the open set U. Finally, due to multilinearity, this implies that it is constant on the whole space Ω. 22 Notation 1 (Translation operator). For any c ∈Rn, let Tc : M →M be the translation operator: (Tcf)(x) = f(x −c). It is a linear map, and under the standard polynomial basis, it has matrix representation:  Tc  S,S′ =            (−1)|S′\S| Y i∈S′\S ci S′ ⊇S 0 S′ ̸⊇S. (4) Lemma F.2. Let f, g ∈M and S ⊂[n]. Suppose that for all c ∈Rn, we have that: (Tcf)S = (Tcg)S. Then, for all S′ ⊇S, the coefficients are equal fS′ = gS′. Proof. We proceed by induction. Let P(k) denote the proposition: fS′ = gS′ for all S′ ⊇S where |S′\S| ≤k. • Base case: If S′ = S, then by assumption fS = (T0f)S = (T0g)S = gS. Thus, P(0) is true. • Inductive step: Suppose that P(k) holds. Let S′ ⊇S and |S′ \ S| = k + 1. Define c = (c1, . . . , cn): ci = ( 1 i ∈S′ 0 i /∈S′, and let Tc be the translation with offset c. By (4), the S-coefficients of Tcf and Tcg are given by: Tcf  S = X I⊆[n] αIfI and Tcg  S = X I⊆[n] αIgI, where αI = (−1)|I\S| · 1  S ⊆I ⊆S′ . By P(k), all coefficients are equal fI = gI when S ⊆I ⊊S′. By the equality (Tcf)S = (Tcg)S, we obtain that: 0 = Tcf  S − Tcg  S = X S⊆I⊊S′ αI · (fI −gI) + αS′ · fS′ −gS′ = αS′ · fS′ −gS′ , which implies that fS′ = gS′ since αS′ ̸= 0. This proves P(k + 1) holds. The result follows by induction. G Proof of Lemma 3 Lemma 3 (Sufficient condition for strategic Pareto optimality). Let (Ω, f) be an N-player polymatrix game with a Nash equilibrium x∗. If x∗is strategically Pareto stationary, then it is strategically Pareto optimal. Proof. Suppose that x∗is strategically Pareto stationary. Then, there exist a set of positive scalars λn > 0 such that the weighted strategic components of the utilities sum to zero for all x ∈Ω: X n∈[N] λnfn(x) ≃SE X n∈[N] X m̸=n λn(xn −x∗ n)⊤Jnm(xm −x∗ m) = 0. It follows that there does not exist any x ∈Ωsuch that the following holds simultaneously for all n ∈[N], fn(x) −fn(x∗) ≃SE X m̸=n (xn −x∗ n)⊤Jnm(xm −x∗ m) > 0. In particular, the Nash equilibrium is strategically Pareto optimal. 23 H Proof of Proposition 1 In this section, we show that strategic Pareto optimality implies stationarity. We break down the proof of Proposition 1 into a sequence of results of increasing generality. The core of each proof is by contradiction: if a Nash equilibrium is not strategically stationary, then it is not Pareto optimal. 1. In Section H.1, we prove the result in the two-player case. The main innovation here is Lemma H.2; it is a linear-algebraic result showing that bilinear forms (x, y) 7→x⊤My can be recovered from the related map (x, y) 7→sign(x⊤My), up to a positive scale. 2. In Section H.2, we prove the general result for N players. This result makes use of Lemma H.3, which shows under some assumptions, that a Nash equilibrium is weakly Pareto optimal if and only if it is a strong Nash equilibrium. This will give us a set of easier conditions to check, which we make use of in the proof of Lemma H.4, which is precisely Proposition 1 in the case of games in canonical form. Proposition 1 (Strategic Pareto stationarity is necessary for strategic Pareto optimality). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗. Let the interaction graph G(x∗) be connected and the game be bi-directional at x∗. If the equilibrium x∗is strategically Pareto optimal, then it is strategically Pareto stationary. Proof. This follows from Lemma H.4. H.1 Weak Pareto optimality implies stationarity: the two-player case In this section, we let (Ω, f) be a two-player polymatrix game in canonical form, with individual strategy spaces Ωn = Rdn for n ∈{1, 2}, and a Nash equilibrium x∗at the origin. As the game is bilinear, the game Jacobian is constant, which we denote by J. In particular, we may assume that the utilities are given by: f1(x) = x⊤ 1 J12x2 and f2(x) = x⊤ 2 J21x1, so that the payoffs for both players at the Nash equilibrium are f1(0) = f2(0) = 0. Recall that a Nash equilibrium x∗is strategically Pareto stationary if J is block skew-symmetrizable. In this case, this means that there exists Σ ∈Rd1×d2 and λ > 0 such that: J =  J12 J21  =  −λΣ Σ⊤  . Lemma H.1 (Strategic Pareto optimality in two-player games). Let (Ω, f) be a bi-directional, two-player polymatrix game in canonical form with a Nash equilibrium x∗and Jacobian J. Suppose x∗is strategically Pareto optimal. Then, it is strategically Pareto stationary. That is, there exists some λ > 0 such that: J12 = −λJ⊤ 21. Proof. Let x∗be strategically Pareto optimal. Note that in this case, as the utilities are purely strategic, this simply means that x∗is weakly Pareto optimal. We claim that for all x ∈Ω: sign f1(x)  = sign −f2(x)  , (5) where sign : R →{−, 0, +} is the sign function. Assuming the claim for now, we obtain the result immediately from Lemma H.2, which shows that the sign of a bilinear map characterizes it up to a positive scalar. That is, the two matrices J12 and −J⊤ 21 are related by some positive scaling factor λ > 0. Proof of claim. To show the sign condition (5), we only need to rule out the two cases: (1) one utility is strictly negative fn(x) < 0 while the other is non-positive f−n(x) ≤0, and the opposite case, (2) one utility is strictly positive fn(x) > 0 while the other is non-negative f−n(x) ≥0. We will only show how to rule out case (1) by contradiction, as the other case is completely analogous. Without loss of generality, suppose that: f1(x) = x⊤ 1 J12x2 < 0 and f2(x) = x⊤ 1 J⊤ 21x2 ≤0. 24 Since f1(x) ̸= 0, we have that x2 /∈ker(J12). By bi-directionality, we also have that x2 /∈ker(J⊤ 21). In particular, the two vectors J12x2 and J⊤ 21x2 are non-zero and lie on in same halfspace {w ∈Rd1 : x⊤ 1 w ≤0}, where at least one is contained in the interior of the halfspace. And so, there exists some w1 such that: w⊤ 1 J12x2 > 0 and w⊤ 1 J⊤ 21x2 > 0. For example, let: w1 = 1 2  J12x2 ∥J12x2∥+ J⊤ 21x2 ∥J⊤ 21x2∥  . When x′ = (w1, x2), we obtain that both f1(x′) > 0 and f2(x′) > 0. Recall that fn(x∗) = 0. Thus, the joint vector x′ strictly improves both utilities over x∗, violating the weak Pareto optimality of x∗. Lemma H.2 (Bilinear maps are characterized by their signs up to scaling). Let A, B ∈Rd1×d2 be matrices such that for all x ∈Rd1 and y ∈Rd2, sign(x⊤Ay) = sign(x⊤By), where sign : R →{−, 0, +} is the sign function, sign(z) =      + z > 0 0 z = 0 − z < 0. Then, there is some λ > 0 such that A = λB. Proof. Consider the singular value decomposition of A: A = UΣAV ⊤, where U ∈Rd1×d1 and V ∈Rd2×d2 are orthogonal and the singular values σA j = ΣA jj are non-negative, where j ∈{1, . . . , d1 ∧d2}. Now define ΣB = U ⊤BV . We now show that in fact this yields a simultaneous singular value decomposition: B = UΣBV ⊤, where all singular values σB j are non-negative, and σA j = 0 if and only if σB j = 0. This is true because the signs of ΣA and ΣB match coordinate-wise. To see this, let e1 j and e2 k be from the standard basis vectors of Rd1 and Rd2, respectively. Then: ΣA jk = (Ue1 j)⊤A(V e2 k) and ΣB jk = (Ue1 j)⊤B(V e2 k). By assumption, the entries ΣB jk and ΣA jk share same signs. Now, we show that ΣA = αΣB for some λ > 0. Let uj and vk denote the left and right singular vectors—the columns of U and V . Without loss of generality, we may assume that σA 1 , σB 1 > 0, and we let: λ = u⊤ 1 Av1 u⊤ 1 Bv2 . For each i > 1, let αi = p σA i /σA 1 , so that: (ui + αiu1)⊤A(vi −αiv1) = u⊤ i Avi −α2 i u⊤ 1 Av1  = 0, where cross-terms such as u⊤ 1 Avi = 0 because u1 and vi are unpaired singular vectors. By assumption that the signs match, this also implies that the same holds for B, where: 0 = (ui + αiu1)⊤B(vi −αiv1) = u⊤ i Bvi −α2 i u⊤ 1 Bv1  = σB i −σA i σA 1 u⊤ 1 Av1 α . And since u⊤ 1 Av1/σA 1 = 1, we obtain that λσB i = σA i . It follows that A = λB. 25 H.2 Strategic Pareto optimality implies stationarity: the N-player case We now generalize the result from Section H.1 to N-player multilinear games. We prove this result by way of the classical notion of strong Nash equilibria (Aumann, 1959). At the usual Nash equilibrium, no single player can improve their own utility without coordinating with others. The strong Nash equilibrium extends this form of stability to groups or coalitions of players. For any subset of players S ⊂[N], let xS and x−S denote the tuples (xn : n ∈S) and (xn : n /∈S), respectively. Then: Definition H.1 (Strong Nash equilibrium). Let S ⊂[N] be any subset or coalition of players. A joint decision vector x∗∈Ωis weakly S-Pareto optimal if there does not exist some x ∈Ωsuch that: ∀n ∈S, fn(xS; x∗ −S) > fn(x∗). We say that x∗is a strong Nash equilibrium if it is weakly S-Pareto optimal for all S ⊂[N]. The following lemma establishes an equivalence between strong Nash stability and weak Pareto optimality for games in canonical form under mild structural assumptions. Lemma H.3 (Strongly Nash equilibria are weakly Pareto). Let (Ω, f) be a bi-directional, N-player multi- linear game in canonical form with a Nash equilibrium x∗. Suppose the interaction graph is connected at x∗. Then, the equilibrium x∗is a strong Nash equilibrium if and only if x∗is weakly Pareto optimal. The forward direction is immediate by definition: when x∗is a strong Nash equilibrium, then applying the condition to S = [N] shows that it also weakly Pareto optimal. For the reverse direction, suppose that x∗ is not a strong Nash equilibrium. Thus, there is a coalition of players S ⊂[N] that can coordinate together to strictly improve their own utilities. We show that this group can be grown to include all N players, which then implies that x∗is not weakly Pareto optimal. In particular, the coalition can be extended to include any player that is adjacent to S in the interaction graph. Proof of Lemma H.3. (⇒). If x∗is a strong Nash equilibrium, then by definition, it is weakly S-Pareto optimal where S = [N]. Thus, it is weakly Pareto optimal. (⇐). Suppose that x∗is not a strong Nash equilibrium. Let S ⊆[N] be a maximal coalition such that x∗ is not weakly S-Pareto optimal: there is a joint decision vector of the form x = (xS, x∗ −S) where all players in S strictly improve. If S = [N], then this shows that x∗is not weakly Pareto optimal. To finish the proof, we now rule out the possibility that S is not all of [N]. Let G(x∗) = (V, E) be the interaction graph at x∗. Choose any player n /∈S that is adjacent to some player m ∈S in G(x∗); this player exists because the graph is connected, and let S′ = S ∪{n}. We will construct a joint strategy that is not S′-Pareto optimal, contradicting the maximality of S as desired. Consider joint decision vectors x′ of the form: x′ ℓ= ( xℓ+ εℓ ℓ∈S′ x∗ ℓ ℓ/∈S′. Players in the coalition S play a perturbation of the joint strategy xS, the player n plays a perturbed Nash strategy, and the remainder play the Nash equilibrium strategy x∗ −S′. By the continuity of the utilities, there exists δ > 0 so that if ∥x −x′∥< δ, then all coalition members m ∈S maintain a strict improvement: ∀m ∈S, fm(x′) −fm(x∗) > 0. Because the game is in canonical form, the nth player’s utility at x′ can be expressed in the form: fn(x′) −fn(x∗) = ε⊤ n An(εS), where An is a polynomial in εS. In particular, it has a non-zero linear term: An(εS) = X m∈S Jnm(x∗) εm + o(∥εS∥), since the player n is connected to at least one player m ∈S. As the polynomial An is not identically zero, there exist choices of x′ satisfying fn(x′)−fn(x∗) > 0 with ∥εS′∥< δ. By this choice, we have ∥x′−x∥< δ, so that all players in S′ achieve positive utilities. Thus, the equilibrium x∗is not weakly S′-Pareto optimal. 26 Lemma H.4 (Strategic Pareto optimality in N-player games). Let (Ω, f) be a bi-directional, N-player multilinear game in canonical form with a Nash equilibrium x∗. Suppose that x∗is a strategically Pareto optimal. Then, it is strategically Pareto stationary. Proof. In the following, let us denote J ≡J(x∗) for short. Let x∗be strategically Pareto optimal. Lemma H.3 shows that it is a strong Nash equilibrium. Thus, it is weakly S-Pareto optimal for all pairs S = {n, m}. The game restricted to these two players is bilinear; Lemma H.1 implies that there is some amn > 0 so that: Jnm = −amnJ⊤ mn. We prove that x∗is strategically Pareto stationary by showing that J is block skew-symmetrizable, where there is a set of positive scalars (λn)n∈[N] such that: ∀n, m ∈[N], amn = λm λn . (6) We prove the consistency condition (6) by induction over the subgraphs of the interaction graph G of the game. Without loss of generality, we may assume that G is connected, since players in different connected components are completely independent of each other. In the following, if G′ = (V ′, E′) is a subgraph of G, we let J[G′] denote the matrix: J[G′]nm = ( Jnm (n, m) ∈E′ 0 (n, m) /∈E′, where the block J[T]nm = 0 is zeroed out when (n, m) /∈E′. Base case. Let T = (VT , ET ) be a spanning tree of G. Endow T with a rooted tree structure, so that there is a root vertex n0 ∈VT and each other vertex n ∈VT has a unique parent vertex pa(v) ∈VT . Inductively define λn > 0 for n ∈VT so that: λn0 = 1 and λn = amnλm where m = pa(n). By construction, we have that J[T] is block skew-symmetrizable by the set of positive scalars (λn)n∈[N]. Inductive step. Assume that there is a subgraph G′ ⊂G such that J[G′] is block skew-symmetrizable by the positive scalars (λn)n∈[N]. Let e ∈E \ E′ be an edge not in G′. Moreover, choose e so that there is a minimal cycle graph C = (VC, EC) ⊂G containing the edge e such that all other edges in C are in G′: e ∈EC and EC \ {e} ⊂E′. In particular, there are no edges in G between any two vertices of C that are not already in C. Suppose there are r players in the cycle. Let us rename these players so that VC = [r]. Actually, for convenience, let VC = Z/rZ, where player r can also be called player 0. Naturally, order the players so that: (n, m) ∈EC ⇐⇒ n −m mod r = ±1. Let us also choose the labels so that e = (0, 1). We now prove that the consistency condition (6) holds; that is, ar1 = λr/λ1. We proceed by contradiction and construct xC such that: ∀n ∈VC, fn(xC; x∗ −C) −fn(x∗) > 0, This would contradict the assumption that x∗is weakly Pareto optimality over the coalition of players in C. Assume without loss of generality that a1r < λ1/λr (otherwise, we can use the relation ar1 = a−1 1r and reverse the order of the players). Choose any 0 < α < 1 −a1r · λr λ1 and define the sequence: ∀n ∈[r], αn = 1 −α · n −1 r −1 . 27 Now, for each n ∈[r], choose any sequence wn /∈ker(Jn−1,n). We define xC = (x1, . . . , xr) as follows: xn =        w1 n = 1 αnwn λnw⊤ n Jn,n−1xn−1 1 < n ≤r. Since the game is in canonical form, we have that x∗= 0 is centered at the origin, and that when all player outside of VC plays x∗ −C, the change in utility is: fn(xC; x∗ −C) −fn(x∗) = x⊤ n Jn,n−1xn−1 + x⊤ n Jn,n+1xn+1 + o(∥xC∥2), where o(∥xC∥2) captures terms in the polynomial that have degree at least three. To obtain the contradiction, it suffices to show that there exists xC such that the bilinear terms are positive for for all players in C; by replacing xC with εxC with sufficiently small ε > 0, the higher-order terms can be made to have vanishing contribution to the utility relative to the bilinear term. By the inductive hypothesis, J[G′] is block skew-symmetrizable by the set of scalars (λn)n∈[N], so that: ∀n ∈[r −1], fn(xC; x∗ −C) −fn(x∗) = x⊤ n Jn,n−1xn−1 −λn+1 λn · x⊤ n+1Jn+1,nxn + o(∥xC∥2) = λ−1 n · αn −αn+1  + o(∥xC∥2) = λ−1 n · α/(r −1) + o(∥xC∥2). We also have that the change of the utility of the rth player is: fr(xC; x∗ −C) −fr(x∗) = λ−1 r · αr −a1r · x1J1rxr + o(∥xC∥2) = λ−1 r  1 −α −a1r · λr λ1  + o(∥xC∥2), where in the first step, we used the equality Jr1 = −a1rJ⊤ 1r. By construction, the lowest-order, bilinear term of the utility is positive, of each player in the coalition C using this choice of xC, obtaining the desired contradiction. And so, we must have that a1r = λ1/λr, implying the consistency condition (6). I Proof of Theorem 2 Theorem 2 (Pointwise uniform stability is equivalent to strategic Pareto stationarity). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is bi-directional. Suppose that the interaction graph is connected at x∗. Then, x∗is uniformly stable if and only if it is strategically Pareto stationary. Proof of Theorem 2. Uniform stability and strategic Pareto stationarity both depend only on the bilinear terms of the strategic component of games. Thus, we may assume without loss of generality that the game is a purely-strategic, N-player polymatrix game with the Jacobian J ≡J(x∗). In fact, we may assume it is in canonical form (Definition C.1), where the Nash equilibrium is centered at the origin x∗= 0, with utilities: fn(x) = X m̸=n xnJnmxm. (⇒). For Nash equilibria in polymatrix games in canonical form, Proposition 1 shows that the strategic Pareto stationarity is equivalent to weak Pareto optimality. And so, to prove the forward direction, we may show the contrapositive: if x∗is not weakly Pareto optimal, then it is not uniformly stable. When x∗is not weakly Pareto optimal, there is a joint strategy x ∈Ωsuch that all players improve: fn(x) −fn(x∗) = x⊤ n X m̸=n Jnmxm > 0. 28 Lemma 1 shows that for each player n ∈[N], there exists Hn ≻0 such that: Hn X m̸=n Jnmxm = xn. Let H = diag(H1, . . . , HN). We obtain that HJx = x, so that HJ has a positive eigenvalue λ = 1. Thus, the game is not uniformly stable at x∗. (⇐). When x∗is strategically Pareto stationary, then by definition, there is a block-diagonal matrix Λ such that ΛJ = Σ is skew-symmetric, where each diagonal block is the diagonal matrix Λnn = λnIdn×dn. Let H be any positive-definite block-diagonal matrix. Then, H−1J is similar to a skew-symmetric matrix: H−1J ∼ H−1/2Λ1/2 Σ Λ1/2H−1/2 , where we used the fact that ΛH−1 = H−1Λ. By Lemma I.1, H−1J has purely imaginary eigenvalues. Since this holds for all possible choice of H, the game is uniformly stable at x∗. Lemma I.1 (Skew-symmetric matrices have imaginary eigenvalues, Theorem 7 in Chapter 8 of Lax (2007)). If M ∈Rm×m is skew-symmetric, then spec(M) ⊂iR. Theorem I.1 (Eigenvalues of matrix powers, Theorem 1.1.6 of Horn and Johnson (2012)). Let M ∈Cm×m be a square matrix. Let p(t) be a given polynomial of degree k. If (λ, v) is an eigenvalue–eigenvector pair of M, then (p(λ), v) is an eigenvalue–eigenvector pair of p(M). Conversely, if k ≥1 and if µ is an eigenvalue of p(M), then there is some eigenvalue λ of M such that µ = p(λ). J Proof of Proposition 2 Proposition 2 (Inapproximability of unstable equilibria). Let (Ω, f) be an N-player multilinear game with an interior Nash equilibrium x∗. Suppose that x∗is not uniformly stable. Then, there is some collection of smooth and strictly convex regularizers h such that smoothed best-response dynamics does not stabilize to x∗. In fact, even if the sequence of β-smoothed equilibria xβ converges to x∗, each xβ is an unstable fixed point of the (Φβ, η)-averaging dynamics (1) for all η ∈(0, 1) and sufficiently small β > 0. Proof. In the following, let H(x) denote the block-diagonal matrix with Hn(x) = ∇2 nhn(xn) given a set of smooth and strictly convex regularizers h. As x∗is not uniformly stable, we can choose h such that the eigenvalues of H−1(x∗)J(x∗) are not all purely imaginary (such a choice exists by Lemma L.2). In fact, it must have an eigenvalue λ∗with positive real part ℜ(λ∗) > c > 0. This is because the matrix H−1(x∗)J(x∗) has zero trace—the diagonal blocks of the matrix Hn(x∗)Jnn(x∗) in multilinear games is zero—and so, the sum of all eigenvalues must also be zero. The existence of eigenvalues with positive real parts, bounded away from zero, extends to an open region around x∗by continuity. In particular, the matrix-valued function x 7→H−1(x)J(x) is continuous, so by the continuity of eigenvalues (Theorem J.1), there is an open set U around x∗so that for all x ∈U, each matrix H−1(x)J(x) also has an eigenvalue λ(x), where its real part is bounded away from zero: ℜ λ(x)  > c/2 > 0. We now show that the dynamics do not stabilize to x∗. That is, either the sequence of β-smoothed equilibria xβ does not converge to x∗as β goes to zero, or the dynamics become unstable around xβ. If the sequence does not converge, then there is nothing to do. So, without loss of generality, we may assume that there exists some sufficiently small β0 < 2/c so that all β-smoothed equilibria remain in U when β < β0: ∀β < β0, xβ ∈U. We can now analyze the linear stability of xβ under the (Φβ, η)-averaging dynamics (1). Its Jacobian is: ∇  (1 −η) x + η Φβ(x)  = (1 −η) I + η H−1(x)J(x) β , 29 where I is the identity matrix, making use of the gradient computation Lemma 2. The eigenvalues are: (1 −η) + ηλ β , where λ ∈spec H−1(xβ)J(xβ)  . Whenever β < β0, we have that xβ ∈U, so that there is at least one λ whose real part is at least c/2. Moreover, as β0 < 2/c, it follows that the modulus is bounded below for all η ∈(0, 1): (1 −η) + ηλ β > 1 −η + η c 2β > 1. The fixed point xβ is unstable when β < β0 as the Jacobian of the dynamics has an eigenvalue whose modulus is greater than 1. This follows from Lyapunov’s indirect method (Bof et al., 2018, Theorem 3.3). Theorem J.1 (Continuity of eigenvalues, Theorem 1 of Elsner (1985)). Let A, B ∈Cm×m have spectra {λ1, . . . , λm} and {µ1, . . . , µm}, respectively. Then: max j∈[m] min i∈[m] λi −µj ≤ ∥A∥F + ∥B∥F (m−1)/m · A −B 1/m F . K Proof of Theorem 3 Theorem 3 (Convergence to equilibria in uniformly-stable regions). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗that is locally uniformly stable. Over arbitrary choice of smooth and strictly convex regularizer h, all smoothed best-response dynamics stabilize to x∗. In particular, there is a constant Cf > 0 depending only on the game so that when η ≤Cfβ2, the (Φβ, η)-averaging dynamics globally converge: x(t) −xβ ≤exp  −ηt + ln N 2  . Proof. Let Φβ be any choice of smoothed best-response map induced by a set of smooth and strictly convex regularizer h and β > 0. Recall from Lemma 2 that its Jacobian is: ∇Φβ(x) = 1 β H(x)−1J(x), where H(x) is the block-diagonal matrix with the nth block being ∇2 nhn(xn) and J(x) is the game Jacobian. Let U ⊂Ωbe any open ball containing x∗such that ∥H(x)−1J(x)∥≤L is bounded by a constant L > 0. Such a ball exists by the continuity of ∇Φβ. We now show that for sufficiently small η, the (Φβ, η)-averaging dynamics contracts toward xβ when initialized within U, implying that xβ is asymptotically stable. In particular, we show that at each iteration, the distance to xβ contracts by a factor of exp(−Ω(η)) whenever: η ≤ β2 1 + L2 . Fix any x ∈U. By the mean-value theorem, there is some ¯x ∈U, a convex combination of x and xβ, such that Φβ(x) −Φβ(xβ) = ∇Φβ(¯x)(x −xβ). We use ¯x to define Mη(x) as the matrix: Mη(x) = (1 −η)I + η∇Φβ(¯x), where I ∈Rd×d is the identity. The (Φβ, η)-averaging dynamics can be re-centered at xβ, which becomes: x(t + 1) −xβ = (1 −η) x(t) −xβ + η Φβ(x(t)) −Φβ(xβ)  = Mη x(t) x(t) −xβ . The dynamics contract to xβ exponentially quickly if the spectral norm ∥Mη(x)∥2 is bounded by 1 −Ω(1), x(t + 1) −xβ ≤ Mη x(t)  2 x(t) −xβ . 30 Indeed, we have: Mη(x) 2 ≤ 1 −η + iη L β = s 1 −2η + η2  1 + L2 β2  ≤exp  −η + 1 2η2  1 + L2 β2  ≤exp  −η 2  , where the first inequality follows from ∥∇Φ∥≤L, the second from the inequality √1 −z ≤e−z/2, and the last inequality holds whenever η ≤β2/(1 + L2). The rate then follows from Theorem 2, along with the bound on the diameter of the joint simplex space: sup x,x′∈Ω ∥x −x′∥≤ √ N. L Beyond interior Nash equilibria The convergence results Proposition 1 and Theorem 3 considered only interior Nash equilibria. This section extends these result to Nash equilibria when they are on the boundary of the simplex. The following lemma shows that if x∗is an isolated, non-interior equilibrium, then the game can be locally reduced by removing actions that are not in the support of x∗. These actions turn out to be strictly dominated, and so the game-theoretic equilibrium x∗is preserved after removing dominated strategies. Lemma L.1. Let (Ω, f) be an N-player normal-form game with a quasi-strong equilibrium x∗∈Ω. There exists a neighborhood U containing x∗such that actions not supported by x∗are strictly dominated. That is, there exists some ε > 0 such that for each player n ∈[N] and for all x ∈U and i /∈supp(x∗ n), fn(en,i; x−n) < max j∈[kn] fn(en,j; x−n) −ε. Proof. As x∗is quasi-strong, for each player n ∈[N], there is some εn > 0 such that whenever a pure strategy i ∈[kn] is not supported, meaning that i /∈supp(x∗), then it is strictly dominated: fn(en,i; x∗ −n) < max j∈[kn] fn(en,j; x∗ −n) −εn. This is because the set of pure strategies that are best-responses to x∗are precisely those in supp(x∗ n). As this is a finite set of alternatives, there must be some positive gap between those that maximize utility and those that do not. And as fn is continuous, for a sufficiently small ball Bn around x∗, the same domination condition remains true, although perhaps with a smaller gap. That is, when i /∈supp(x∗), ∀x ∈Bn, fn(en,i; x−i) < max j∈[kn] fn(en,j; x−n) −εn/2. The result follows by letting U be the intersection B1 ∩· · · ∩BN and ε = minn∈[N] εn/2. Corollary L.1 (Reduction to interior equilibria). Let x∗be a quasi-strong equilibrium of a normal-form game. Then, it is an interior Nash equilibrium of the reduced game at x∗. Proof. For each player n ∈[N] and alternative i ∈supp(x∗ n), the probability mass on i is bounded away from zero since it is positive x∗ n,i > 0. Then, either it is the unique action that is supported, in which we case we say that x∗ n is trivially contained in the interior of the reduced simplex Ωn(x∗ n). Otherwise, it must be bounded away from one x∗ n,i < 1, since there is at least one other supported action. Either way: x∗ n ∈int Ωn(x∗ n)  . Thus, x∗is an interior equilibrium of the reduced game. 31 L.1 Calculus on the probability simplex This section formalizes smoothness on the boundary of the simplex. Let ∆≡∆k−1 be the (k−1)-dimensional probability simplex. We view it as the union of open manifolds of dimensions ranging from 0 through k −1. In particular, we decompose the simplex as follows: ∆= [ S⊂[k] ∆S, where ∆S :=  x ∈∆k−1 : supp(x) = S , so that ∆S is the interior of a probability simplex with |S| −1 dimensions. For each S ⊂[k], we say that the closure of ∆S, denoted by ∆S, is a face of the simplex. To take the gradient and Hessian of map h : ∆→R at a point x ∈∆with supp(x) = S, we simply take them with respect to the simplex ∆S. Definition L.1 (Calculus on the simplex). For each S ⊂[k], let T∆S denote the tangent space of ∆S, which we may naturally identify with the following subspace of Rk: T∆S :=  v ∈Rk : supp(v) = S and 1⊤v = 0 . When a map hS : ∆S →R is smooth, let ∇ShS : ∆S →T∆S denote its gradient. Let h : ∆→R be a map whose restriction onto ∆S is smooth for all S ⊂[k]. Its gradient and Hessian at x are defined as: ∇h(x) := ∇Sh(x) and ∇2h(x) := ∇2 Sh(x), where S = supp(x). Thus, ∇h(x) ∈Rk and ∇2h(x) ∈Rk×k. Definition L.2 (Smoothness on the simplex). A map g : ∆→R is smooth on the simplex if it can be extended to a smooth map on an open set U ⊂Rk containing ∆. Any utility f : Ω→R in a normal-form game is smooth on the simplex, since it immediately extends to a multilinear map p : Rk1 ×· · ·×RkN →R, where p is formally the same polynomial as f. While their function values coincide on Ω, they are not the same function because their domains are distinct. In particular, their derivatives are not generally equal, as they map to distinct tangent spaces. Nevertheless, we will use the representation inherited from the ambient space for each ∆kn−1 ⊂Rkn specified in Definition L.1. L.2 The rate of suboptimal plays in smoothed-best responses Example L.1 (Entropic regularizer is linearly steep). Let h(x) = P xi log xi on Ω= ∆k−1. For any v ∈TΩ, the solution xβ(v) := ∇h−1(v/β) is given by: xβ(v)i = exp(vi/β) Pk j=1 exp(vj/β) ≤exp  −1 β ·  max j∈[k] vj −vi  . It follows that the entropic map is linearly steep since exp(−ε/β) decays to zero much faster than β. Lemma 4 (Dominated strategies occur at β-sublinear rates in β-smoothed equilibria). Let (Ω, f) be an N- player normal-form game and h be a collection of linearly steep regularizers. Suppose that as β goes to zero, the sequence of of β-smoothed equilibria xβ converges to a quasi-strict equilibrium x∗. Let Π : Ω→Ω(x∗) be the orthogonal projection onto the support of x∗. Then: lim β→0 xβ −Πxβ β = 0. Proof of Lemma 4. By Lemma L.1, there exists an open region U around x∗and some ε > 0 such that for each player n ∈[N] and for all x ∈U and i /∈supp(x∗), the action i is ε-suboptimal. In particular, ∇nfn(x) ∈TΩn denote the gradient of fn on the manifold Ωn. We obtain that: ∇nfn(x)i < max j∈[kn] ∇nfn(x)j −ε. 32 Since (xβ)β converges to x∗, for sufficiently small β, the β-smoothed equilibrium xβ eventually remains in U as β goes to zero. In particular, ∇fn(xβ) is contained in the set Vi(ε) where the ith alternative is ε-suboptimal, defined in Definition 25. As the regularizer hn is linearly steep, the β-smoothed equilibrium places sublinearly little mass on any action i /∈supp(x∗ n): lim β→0 xβ n,i β = 0. Thus, the projection of the smoothed equilibrium xβ onto Ω(x∗) satisfies: xβ −Πxβ β ≤ X n∈[N] X i/∈supp(x∗n) 2xβ n,i β , by the triangle inequality. Taking the limit as β goes to zero yields the result. Lemma L.2 (Hessians of linearly steep regularizers). Let ∆= ∆k−1 be the (k −1)-dimensional simplex. Let H ∈T∆⊗T∆be positive definite and x ∈int(∆). There is a linearly steep regularizer h : ∆→R where: ∇h(x) = 0 and ∇2h(x) = H. Proof. Define Sym+(T∆) to be the set of positive-definite matrices in T∆⊗T∆. Fix x ∈int(Ωi) and consider the family of steep regularizers: h(x; λ, A, w) = λ k X i=1 xi log xi + 1 2 A(x −w) 2 2, where λ > 0, A ∈Rk×k is invertible, and w ∈Rmi. We show that h is steep and that for this fixed x, Sym+(T∆) = n ∇2h(x; λ, A) : λ > 0 and A invertible o . Under the coordinate representation given by Definition L.1, the gradient on the simplex ∆is given by: ∇h(x; λ, A) =  I −1 k 11⊤  h λ log x + A⊤A(x −w) i , where log x is applied component-wise. Thus, h is steep, since log xi →−∞as xi →0. There exists w ∈Rk such that ∇h(xi) = 0, by selecting: w = xi −λ(A⊤A)−1 log xi. The Hessian on the simplex is given by: ∇2h(x; λ, A) =  I −1 k 11⊤  h λ · diag(1/x) + A⊤A i  I −1 k 11⊤  , where diag(1/x) is the diagonal matrix with diagonal entries 1/xi. For every M ∈Sym+(T∆), there is a sufficiently small λ > 0 such that the following matrix is positive-definite, so there is an invertible matrix A: A⊤A = M + 11⊤−λ · diag(1/x). It follows that ∇2h(x; λ, A) = M, showing that Sym+(T∆) ⊂{∇2 Ωihi(z; λ, A) : λ > 0 and A invertible}. The reverse inclusion holds as h is strongly convex, so its Hessian on the simplex is contained in Sym+(T∆). 33 L.3 Convergence to non-interior Nash equilibria Theorem 4 (Convergence to non-interior equilibria). Let (Ω, f) be an N-player, normal-form game and let h be a collection of linearly steep, proper regularizers. Suppose that x∗is a quasi-strict Nash equilibrium and that it is the limit of the β-smoothed equilibria xβ as β goes to zero. If the reduced game around x∗is locally uniformly stable, then smoothed best-response dynamics stabilize to x∗. In particular, there exists a constant Cf > 0 such that for all sufficiently small β > 0 and η ≤Cfβ2, the (Φβ, η)-averaging dynamics is locally asymptotically stable at xβ, where the operator norm of its Jacobian at xβ is bounded by: (1 −η)I + η∇Φβ(xβ) 2 ≤exp  −η 2  . Proof. Let Φβ be any choice of smoothed best-response map induced by a set of linearly steep and proper regularizers h and β > 0. Its Jacobian under the coordinate representation specified by Definition L.1 is: ∇Φβ(x) = 1 β H(x)+J(x), where H(x) is the block-diagonal matrix with the nth block being ∇2 nhn(xn)+ and J(x) is the game Jacobian, as computed by Lemma 2. Define G(x) := H(x)+J(x) and let Π : Ω→Ω(x∗) be the orthogonal projection to the support of x∗. Let U ⊂Ωbe any open ball containing x∗such that ∥H(x)+J(x)∥≤L is bounded by a constant L > 0. Such a ball exists because the regularizers are proper, so that ∇Φβ is smooth. Moreover, the reduced game around x∗is locally uniformly stable, so this ball can be chosen such that: spec G(Πxβ)  ⊂iR. To prove that the (Φβ, η)-averaging dynamics is a local contraction at xβ whenever η is sufficiently small, we show that the Jacobian of these dynamics have operator norm bounded by 1. The Jacobian of the (Φβ, η)-averaging dynamics can be decomposed as follows: (1 −η)I + η∇Φβ(xβ) = (1 −η)I + η β G(xβ) = (1 −η)I + η β G(Πxβ) + η β  G(xβ) −G(Πxβ)  = (1 −η)I + η β G(Πxβ) + η ∇G(¯xβ)(xβ −Πxβ) β  , where in the last step applies the mean-value theorem to find some ¯xβ that is a convex combination of xβ and Πxβ, once again making use of the smoothness of the pseudoinverse of the Hessian of the regularizers. When β is sufficiently small, then Πxβ is contained in U, so that the eigenvalue of G(Πxβ) is iλ for some real-valued scalar λ with |λ| < L. By Lemma 4, the norm of the last term can be made arbitrarily small: ∇G(¯xβ)(xβ −Πxβ) β 2 ≤C∥xβ −Πxβ∥ β ≤o(β), where we may uniformly bound the norm of ∇G over Ωsince it is a smooth map on a compact set. Let us choose β small enough so that this term is no more than 1/2. Thus, the operator norm of the Jacobian of the dynamics is bounded by: (1 −η)I + η∇Φβ(xβ) 2 ≤ 1 −η 2 + iη L β ≤exp  −η 2  , where the last inequality holds when η ≤β2/(1+4L2); see the proof of Theorem 3 for this computation. 34
Learnable Mixed Nash Equilibria are Collectively Rational Geelon So Yi-An Ma -asymptotic stability. We do so through the notion of uniform stability, which is concerned with equilibria of individually utility-seeking dynamics. Perhaps surprisingly, it turns out to be closely connected to economic properties of collective rationality. Under mild non-degeneracy conditions and up to strategic equivalence, if a mixed equilibrium is not uniformly stable, then it is not weakly Pareto optimal-there is a way for all players to improve by jointly deviating from the equilibrium. On the other hand, if it is locally uniformly stable, then the equilibrium must be weakly Pareto optimal. Moreover, we show that uniform stability determines the last-iterate convergence behavior for the family of incremental smoothed best-response dynamics, used to model individual and corporate behaviors in the markets. Unlike dynamics around strict equilibria, which can stabilize to socially-inefficient solutions, individually utility-seeking behaviors near mixed Nash equilibria lead to collective rationality. Keywords: algorithmic game theory, evolutionary dynamics, non-asymptotic stability, last-iterate convergence, smoothed best-response 1 Introduction The Nash (1951) equilibrium is a foundational solution concept in games, capturing when collective behavior or strategies may be stationary. These equilibria are meaningful to study, for once such strategies appear, they may persist for a long time. But, there is an important caveat: not all equilibria can be robustly reached by players in the game (Hart and Mas-Colell, 2003; Papadimitriou, 2007; Daskalakis et al., 2009, 2010; Milionis et al., 2023). Any equilibrium that cannot be found or sustained is unlikely to have practical relevance. This caveat leads to the question on learnability: which Nash equilibria can players eventually learn to play from repeated interactions? To grasp individual and corporate behaviors, literature in classical economics considers a model of learning where players: (i) take an evolutionary approach and incrementally update their strategies; (ii) are utilityseeking, which means that they aim to improve their own payoffs; and (iii) are uncoupled, where players are unaware of the other players' utilities or methods to improve them (Alchian, 1950; Winter, 1971). The goal of the players in this model is not to compute any pre-determined Nash equilibrium, at least not explicitly. Rather, the equilibrium is to emerge out of their joint, but individually utility-seeking, behavior. Within this model of learning, the impossibility result of Hart and Mas-Colell (2003) shows that no uncoupled and asymptotically-stable learning dynamics can converge to all Nash equilibria. Uncoupled means that players learn in uncoordinated and decentralized ways. Asymptotic stability means that the convergence is robust to small perturbations in the dynamics. Simply put, not all equilibria admit such a strong notion of learnability, viz. dynamical stability. The ones that do, however, have been characterized for certain classes of learning dynamics in standard normal-form games: an equilibrium is 'asymptotically learnable' in this way if and only if it is strict, where every player has a single, deterministic strategy that is clearly locally optimal (Samuelson and Zhang, 1992; Vlatakis-Gkaragkounis et al., 2020; Giannou et al., 2021). A significant gap remains for the learnability of mixed Nash equilibria, where players may use randomized strategies. On the one hand, mixed equilibria are not strict, and as a result, they are not asymptotically stable under these learning dynamics. Observations corroborating this finding demonstrate that many dynamics are not able to generically learn mixed Nash equilibria. This has been a significant source of criticism on the Correspondence to: and 1 16 Oct 2025 (a) Pareto-dominated equilibria are unstable (b) Weakly-Pareto equilibria are neutrally stable Figure 1: Uncoupled learning dynamics in two 2 × 2 normal-form games with purely-strategic utilities f = (f1, f2). The Nash equilibria are marked by stars. The streamlines visualize the trajectories of the learning dynamics. The heatmap plots the social welfare function min{f1, f2}, measuring the utility of the player worst-off. The heatmap is white where the utilities are equal to the equilibrium; the darker the red, the worse the social welfare; the darker the blue, the better. (a) The mixed Nash equilibrium in this game is not weakly Pareto optimal, and the dynamics are unstable. (b) This equilibrium is weakly Pareto optimal, and the dynamics are non-asymptotically stable. The dynamics visualized here is continuous-time mirror ascent induced by the entropy mirror map. viability of mixed equilibria as solutions in games (Shapley, 1963; Crawford, 1985; Stahl II, 1988; Jordan, 1993; Krishna and Sj ̈ostr ̈om, 1998; Ellison and Fudenberg, 2000; Kleinberg et al., 2011; Mertikopoulos et al., 2018; Bailey et al., 2021). On the other hand, there are cases in which mixed equilibria are approachable by simple, uncoupled dynamics, such as those in two-player zero-sum games (Robinson, 1951; Fudenberg and Kreps, 1993; Gjerstad, 1996; Hofbauer and Sigmund, 1998; Benaım and Hirsch, 1999; Hofbauer and Sorin, 2006; Daskalakis and Panageas, 2019; Wei et al., 2021; Piliouras et al., 2022). To address this conundrum, we must go beyond the criterion of asymptotic stability, which is too stringent a requirement. This work extends the study of learning in games to dynamics that exhibit non-asymptotic stability. It is a weaker notion of stability that includes neutral stability, which only requires that players starting off in a neighborhood of such an equilibrium will continue to remain close to it. The question we ask is: Under the relaxed criterion of non-asymptotic stability, which Nash equilibria are learnable by uncoupled dynamics, and what are their economic properties? There are two parts to this work. In the first part, we introduce and characterize a form of non-asymptotic stability that applies to broad classes of learning dynamics. We call it uniform stability. While it arises from dynamical considerations, we show that uniform stability is closely tied to the economic properties of the equilibrium, namely, strategic Pareto optimality. This is a game-variant of weak Pareto optimality defining a minimal form of collective rationality-it describes solutions where there is no way to strictly improve everyone's utilities, up to strategic equivalence.1 We show that an equilibrium that is not uniformly stable must not be strategically Pareto optimal. This formalizes and tightens an observation made by prior work: non-convergence to mixed equilibria can sometimes actually be a blessing in disguise, where players would be worse off had they managed to converge to the equilibrium (Kleinberg et al., 2011; Ostrovski and van Strien, 2013; Pangallo et al., 2022; Anagnostides et al., 2022). This may be surprising as it runs counter to the more prominent phenomenon where non-cooperative, individual behavior leads to worse social outcomes, or a high price of anarchy, such as in the Prisoner's Dilemma or the Tragedy of the Commons (Luce and Raiffa, 1957; Hardin, 1968; Nachbar, 1990; Weibull, 1994; Koutsoupias and Papadimitriou, 1999). In the second part, we focus on convergence and non-convergence of incremental smoothed best-response dynamics, a generalization of a canonical family of learning dynamics. For this class, we show that non-strict, 1As standard learning dynamics generally factor through strategic equivalence classes of games (they do not depend on the 'externalities' or non-strategic components of the game), this reservation is necessary. 2 uniformly-stable equilibria can be learned, though with slower convergence than strict equilibria. Around a locally uniformly-stable Nash equilibrium, these dynamics can be stabilized: they lead to asymptoticallystable dynamics that approximate the equilibrium to arbitrary accuracy. While for non-uniformly stable mixed equilibria, there are dynamics that can never be stabilized to the equilibrium. Thus we observe a dichotomy between dynamics around strict and mixed Nash equilibria under the context of 'as if' rationality (Friedman, 1953; Weibull, 1994). In the former, individually utility-seeking dynamics lead to the same equilibrium behaviors as if all the players have absolute, individual rationality. As in the prisoner's dilemma, they can stabilize at collectively-irrational behaviors. Around mixed equilibria, uniform learnability disallows collective irrationality. As a result, individually utility-seeking dynamics robustly behave as if the players were collectively rational, reminiscent of the 'invisible hand' from classical economic theory (Smith, 1776; Debreu, 1959). 1.1 Main results We consider uncoupled learning dynamics for standard N-player normal-form games, where players, through repeated interactions, evolve their individual strategies in order to maximize their own utilities. For such dynamics, in Section 3, we introduce two non-asymptotic stability classes for mixed equilibria: those that have pointwise uniform stability and those that exhibit a stronger local uniform stability. These impose a second-order differential condition, either only at the equilibrium or in a neighborhood around it (note that the equilibrium condition itself is a first-order condition for mixed strategies). These conditions naturally arise from linear stability analysis, and they apply to broad classes of dynamics such as smoothed best-response, gradient ascent/adaptive dynamics, mirror ascent/positive-definite adaptive dynamics. In Section 4, we show the chain of implications for mixed Nash equilibria under mild assumptions: local uniform stability =⇒ strategic Pareto optimality =⇒ pointwise uniform stability. Moreover, all of these solution concepts are equivalent for the class of graphical or polymatrix games, which are particularly succinct normal-form games that decompose across pairwise interactions between players. In Section 5, we prove convergence results for the family of incremental, smoothed best-response learning dynamics. The dynamics are controlled by a smoothing parameter β and a learning rate η. The first governs the quality of any approximate Nash equilibria that the dynamics finds, while the second impacts the stability of its fixed points; whether the dynamics converge and at what rate. We prove that if a Nash equilibrium is locally uniformly stable, then for any choice of approximation β, the dynamics can always be stabilized to the equilibrium by taking a sufficiently fine learning rate η, whereby it converges with asymptotic stability. With T iterations, the overall convergence rate to the locally uniformly stable mixed Nash equilibria is of order T -1/2. However, if a Nash equilibrium is not pointwise uniformly stable, then there are dynamics that can never be stabilized beyond a certain approximation error, no matter what learning rate is chosen. Finally, as the first set of results are stated for fully-mixed equilibria, Section 6 extends these results to the case where the Nash equilibrium may be partially mixed. The following summarizes our main results: Local uniform stability implies strategic Pareto optimality. If a Nash equilibrium falls within a region of uniformly-stable strategies, it must be collectively rational (Theorem 1). In this case, it is robustly approximable under incremental, smoothed best-response dynamics (Theorems 3 and 4). Strategic Pareto optimality implies pointwise uniform stability. We show that strategic Pareto optimality implies strategic Pareto stationarity, which is a second-order condition for collective rationality (Proposition 1). Strategic stationarity is equivalent to pointwise uniform stability (Theorem 2), and it is also necessary for convergence with non-asymptotic stability (Proposition 2). 1.2 Related work This work considers the convergence of learning dynamics in games. Two modes of convergence are often studied (i) time-averaged convergence, where players aim to achieve little or no regret in the long run, or the stronger (ii) day-to-day or last-iterate convergence, where players eventually learn to play an equilibrium of the game; see also Fudenberg and Kreps (1993) for discussion. This work falls in the latter category. 3 The learnability of strict Nash equilibria is fairly well-understood (Hart and Mas-Colell, 2003; Mertikopoulos and Sandholm, 2016; Vlatakis-Gkaragkounis et al., 2020; Giannou et al., 2021, and related works therein). In contrast, the learnability of mixed Nash equilibria is more complicated. Many works have pointed out that asymptotically-stable convergence to mixed Nash equilibria is generally ruled out because linearized dynamics about them have zero trace (Crawford, 1985; Jordan, 1993; Singh et al., 2000; Hart and Mas-Colell, 2003; Mertikopoulos et al., 2018). As mixed Nash equilibria are needed for the existence of Nash equilibria (Nash, 1951), their generic infeasibility is troubling for the viability of the solution concept, but has a resolution via Harsanyi's purification theorem (Harsanyi, 1973). Philosophically, unlike Harsanyi (1973), this work does not try to save all mixed equilibria, but proceeds from the fact that uncoupled dynamics can sometimes converge to mixed Nash equilibria, and aims to further refine the solution concept (Van Damme, 1992). Much of the existing work providing positive convergence results focuses on two-player games (Robinson, 1951; Fudenberg and Kreps, 1993; Gjerstad, 1996; Hofbauer and Sigmund, 1998; Benaım and Hirsch, 1999; Hofbauer and Sorin, 2006; Wei et al., 2021), and especially for zero-sum games (Gilpin et al., 2012; Daskalakis and Panageas, 2019; de Montbrun and Renault, 2022; Piliouras et al., 2022, and related works therein). Our results apply to general N-player normal-form games. The results and techniques in this paper also relate to many areas of game theory. The connection between uniform stability and weak Pareto optimality generalizes existing characterizations of games with weakly Pareto optimal equilibria in two-player games (Moulin and Vial, 1978; Adler et al., 2009), and bridges to the classical notion of strong Nash equilibria (Aumann, 1959). Our notion of uniform stability builds on the game Jacobian (also sometimes called the game Hessian), which is also generally used to analyze smooth games (Isaacs, 1999; Candogan et al., 2011; Balduzzi et al., 2018; Letcher et al., 2019). The algorithmic component of this paper focuses on smoothed best-response dynamics, which can be viewed as regularized learning/optimization (Rockafellar, 1997; Mertikopoulos and Sandholm, 2016). In normal-form games, the canonical regularizer is the logit map, yielding quantal response equilibria (McKelvey and Palfrey, 1995; Al ́os-Ferrer and Netzer, 2010; Goeree et al., 2016; Mertikopoulos and Sandholm, 2016). Our convergence result introduces the notion of stabilization to an equilibrium, which is similar to the tracing procedure discussed by McKelvey and Palfrey (1995). 2 Preliminaries An N-player game (Ω, f) with players indexed by n ∈[N] consists of a joint strategy space Ω= Ω1×· · ·×ΩN and a set of utilities f = (f1, . . . , fN), where each player would like to maximize their own utility fn : Ω→R. In a round of the game, everyone independently chooses a strategy xn ∈Ωn. These choices constitute the joint decision vector x = (x1, . . . , xN), and players receive their respective payoffs fn(x) ≡fn(xn; x-n). Here, we let x-n = (x1, . . . , xn-1, xn+1, . . . , xN) denote the joint strategy without the nth component. This work focuses on normal-form games, which we define here and expand upon in Section A. Definition 1 (Normal-form game). An N-player normal-form game (Ω, f) is one where each player n ∈[N] has a set of kn alternatives/pure strategies, the strategy space Ωn is the (kn -1)-dimensional probability simplex ∆kn-1, and each utility fn is defined by a payoff tensor T n ∈Rk1×···×kN as follows: fn(x) = X i1∈[k1] · · · X iN∈[kN] T n i1,...,iN x1,i1 · · · xN,iN and xn = (xn,1, . . . , xn,kn) ∈∆kn-1. We say that the strategy x ∈Ωis mixed if it is in the interior of Ω, which we denote by int(Ω). Normal-form games are a special case of multilinear games, which are games whose utilities are polynomials where the maximum degree in each variable is one. In the following, we first define multilinear polynomials of the form p : Rd1 × · · · × RdN →R, where we think of dn = kn -1. The connection between normal-form games and multilinear games is formalized by Lemma A.1. Definition 2 (Multilinear polynomial). Let R[x] contain the polynomials of the form p : Rd1×· · ·×RdN →R, with variables x = (x1, . . . , xN) and xn = (xn,1, . . . , xn,dn) for each n ∈[N]. We say that p ∈R[x] is multilinear over x if it is affine over each xn. That is, for each n ∈[N], the polynomial is affine in xn, p(x) = An(x-n)⊤xn + bn(x-n), 4 where An ∈R[x-n]dn and bn ∈R[x-n] are polynomials, and x-n includes all variables besides those in xn. Definition 3 (Multilinear game). An N-player multilinear game (Ω, f) is a game where for each n ∈[N], the strategy space Ωn is an open, convex subset of Rdn, and the utility fn : Ω→R is a multilinear polynomial. That is, each utility fn extends to a multilinear polynomial on all of Rd1 × · · · × RdN . 2.1 Individual and collective rationality: two solution concepts A fundamental aspect of non-cooperative games is that, even though players cannot directly control how others behave, the payoff that each receives nevertheless depends on their joint decision. This leads to the Nash equilibrium as a solution concept for games-joint strategies where it is not possible to improve one's own utility without coordinating with other players. It is a notion of individual rationality: Definition 4 (Nash equilibrium). A joint decision vector x∗∈Ωis a Nash equilibrium if: ∀n ∈[N], x∗ n ∈arg max xn∈Ωn fn(xn; x∗ -n). In contrast, if as in the multi-objective optimization, players are cooperative and can coordinate, then a minimal solution concept is weak Pareto optimality, which describes joint decisions where there is no way to strictly improve everyone's utilities. It is a notion of collective rationality: Definition 5 (Weak Pareto optimality). A joint decision vector x∗∈Ωis weakly Pareto optimal if there does not exist any x ∈Ωsuch that fn(x) > fn(x∗) for all n ∈[N]. These solution concepts are generally incomparable and independent of each other. Famous examples, including the prisoner's dilemma or the tragedy of the commons, demonstrate that Nash equilibria and Pareto optimal strategies can even be disjoint-in the absence of mechanisms to enable or enforce cooperation, socially-optimal outcomes can be unstable under individually-rational or selfish behavior. This work refines the relationship between Nash stability and Pareto optimality from the perspective of learning in games, which is a dynamical perspective in game theory motivated by a simple observation: a solution concept has no bearing on practice if players of a game cannot find it. Eventually, we will show that the interior Nash equilibria that can be robustly found must, in fact, be strategically Pareto optimal, a notion related to Pareto optimality that is well-suited for games. We will motivate and introduce this in the next section. To do so, we now turn to how players come to discover and play a Nash equilibrium. 2.2 Uncoupled learning dynamics and strategic equivalence Consider a model of learning where players learn to play the game via trial-and-error: they simply play many rounds of the game. As players may observe the joint strategies x(t) chosen over time t, they can also continually adjust their own strategies. A learning rule/dynamics specifies how a player n ∈[N] chooses to act each round xn(t) as a function of past experiences, namely x(s) where s 0 is given by: ∀n ∈[N], Φβ n(x) = arg max x′n∈Ωn fn(x′ n; x-n) -βhn(x′ n). Throughout this work, we assume that the regularizers are steep, which is a standard assumption from convex optimization that guarantees that Φβ map into the interior of Ω(Rockafellar, 1997, Theorem 26.1). Definition 9 (Steep regularizer). Let Ωbe a compact and convex domain, and h : Ω→R be strictly convex and smooth on int(Ω). It is steep if for any sequence xk ∈Ωconverging to a boundary point of Ω, lim k→∞ ∇h(xk) = ∞. In the case of normal-form games, a canonical choice is entropy regularization, hn(xn) = P i xn,i log xn,i, which is a steep regularizer on the probability simplex. The smoothed best-response map is also called quantal response, and the smoothed-best responses are given by the softmax function: Φβ n(x) ∝exp 1 β ∇nfn(x) . The specific class of discrete-time dynamics that we study in this work is the following: Incremental smoothed best-response dynamics. Let η ∈(0, 1) be a learning rate and Φβ be a β-smoothed best-response map induced by steep regularizers. The (Φβ, η)-averaging dynamics is the dynamics x(t) given by the transition map: x(t) = (1 -η) x(t -1) + η Φβ x(t -1) , (1) where the joint strategy x(t) played in the round t is an weighted average of the strategy x(t-1) played in the previous round t -1 with its smoothed best response Φβ(x(t -1)). 6 The fixed points of (1) are precisely the fixed points of the smoothed best-response map Φβ, which are called smoothed equilibria. Importantly, smoothed equilibria are approximate Nash equilibria (Lemma B.1), and they converge to Nash equilibria as the smoothness parameter β goes to zero (Lemma B.2). When the dynamics converge, they achieve meaningful solutions in a game-theoretic sense. Definition 10 (β-smoothed equilibrium). Let Φβ be a β-smoothed best-response map. A joint decision vector xβ ∈Ωis a β-smoothed equilibrium if it is a fixed point xβ = Φβ(xβ). Definition 11 (ε-approximate Nash equilibrium). Let ε ≥0. We say that a joint decision vector xε ∈Ωis an ε-approximate Nash equilibrium if fn(xn, xε -n) ≤fn(xε) + ε for all x ∈Ωand n ∈[N]. Economic interpretation of game dynamics Smoothed best-response is a common model of bounded rationality, where the solution that a player finds is only an approximate best-reply to the strategies x-n chosen by the others, perhaps due to noise, or constraints on information or computation. The player approaches 'perfect rationality' as the parameter β goes to zero. Typically, smoothed best-response and its limiting best-response are rational from the perspective of players that are myopic, which means that they do not try to steer the long-term behavior of the dynamics. Myopia is often justified in the evolutionary game theory setting, where each 'player' corresponds to a large population of individuals. As each individual has an infinitesimal impact on the long-term behavior of the dynamics, the most individually-rational behavior is to optimize for immediate outcomes. Myopia is also justified if the individual's lifespan is much shorter than the timescale over which the dynamics evolve. Under this context, the incremental or averaging aspect of the dynamics (1) also arise naturally; Fudenberg and Levine (1998) also call these "partial" dynamics. It capture situations where only a random fraction of individual within the populations play the game at each round. As only a fraction of individuals update their strategies, the strategy profile only partially evolves toward the smoothed best-response direction. In short, the dynamics (1) can be considered as a model for strategic interactions across large, imperfectly-rational populations, such as at a traffic stop where individuals with bounded rationality are intermittently matched up with others to play a dangerous game of chicken. 3 Uniform stability of Nash equilibria The local behavior of a smooth, non-degenerate dynamical system is determined by linearizing the dynamics. In learning in games literature, this motivates the game Jacobian (e.g. Letcher et al. (2019)). It arises naturally from the analysis of gradient ascent dynamics, where players update their strategies along the direction of steepest ascent ∇nfn(x), and the game Jacobian is the Jacobian of the gradient ascent dynamics: Definition 12 (Jacobian of the game). Let (Ω, f) be an N-player multilinear game and let x ∈Ω. The game Jacobian J(x) is the Jacobian of the map x 7→(∇1f1(x), . . . , ∇NfN(x)), so that: ∀n, m ∈[N], Jnm(x) = ∇2 nmfn(x). In particular, the block diagonals Jnn(x) = 0 are zero matrices. We use the game Jacobian to define a notion of stability of a Nash equilibrium x∗. It depends on the eigenvalues of the matrices H-1J(x∗), where H ranges over the family of positive-definite, block-diagonal matrices. If such an eigenvalue has positive real part, the dynamics around x∗become unstable for some regularizers (Proposition 2). Due to that fact that the block diagonals of the game Jacobian are zero matrices, sum of all eigenvalues is zero: tr(H-1J(x)) = 0. As a result, instability also arises if there is an eigenvalue with negative real part-some other eigenvalue must have positive real part. And so, the only possibility for stability is when all eigenvalues of matrices H-1J(x∗) are purely imaginary. We call this uniform stability. Formally, this definition uses the following, purely linear-algebraic matrix condition (Definition 13). Notice that uniform stability is a bilinear (that is, a second-order) condition depending only on the strategic component of the game. 7 Definition 13 (Uniformly-stable block matrix). Let J ∈Rd×d be a block-matrix, where d is the sum of its block dimensions (d1, . . . , dN). We say that J is uniformly stable if the eigenvalues of H-1J are purely imaginary for all positive-definite, block-diagonal H ∈Rd×d, spec H-1J ⊆iR, where H = diag(H1, . . . , HN) and Hn ∈Rdn×dn is positive definite. Definition 14 (Uniformly stable equilibria). Let (Ω, f) be an N-player multilinear game. We say that a Nash equilibrium x∗∈Ωis (pointwise) uniformly stable if the Jacobian J(x∗) is uniformly stable. It is locally uniformly stable if is contained in an open set U ⊂Ωsuch that J(x) is uniformly stable for all x ∈U. Section 4 develops the economic meaning of uniform stability. Section 5 shows the consequences of this spectral condition for dynamic stability. These two sections are independent of each other and can be read in either order. Before doing so, we provide some intuitive and formal motivation for uniform stability. 3.1 Motivation for uniform stability The concept of uniform stability arises when we relax the assumption that players are 'perfectly rational'. Unlike gradient-ascent players, smoothed best-response players do not necessarily update their strategies in the direction of steepest ascent (Lemma 2). With the existence of the regularizers, players are no longer maximally efficient-due to either bounded rationality or a need for better stability in their strategies, players only aim to improve modestly. At a current joint strategy x, let's say that the nth player makes an update along the vn direction. This leads to a local improvement whenever ∇nfn(x)⊤vn > 0. And this occurs if and only if there is a positive-definite matrix Hn ≻0 so that the update direction vn and the gradient ∇nfn(x) are related by: vn = H-1 n ∇nfn(x). This is a consequence of a simple linear-algebraic result, proved in Section D: Lemma 1. Let u, v ∈Rm \ {0}. Then, u⊤v > 0 if and only if there is a positive-definite matrix H so that: u = Hv. In optimization, this result justifies methods such as preconditioned gradient ascent and mirror ascent. In learning in games, it also directly justifies smoothed best-response dynamics as well as positive-definite adaptive dynamics (Hopkins, 1999). In the current paper, this result leads to the notion of uniform stability by pointing us toward the preconditioned Jacobian maps x 7→ H-1 1 ∇1f1(x), . . . , H-1 N ∇NfN(x) . As uncoupled players do not exactly know how others are learning, stability is defined uniformly over all positive-definite matrices. This is a fairly strong condition; at a high level, this is the price that comes with considering a broad class of learning dynamics, where players may have bounded rationality. A formal connection between uniform stability and smoothed best-response dynamics is obtained by computing the Jacobian of the smoothed best-response map Φβ. From Lemma 2 below, it becomes clear that the eigenvalues of ∇Φβ(x∗) are related to the spectra of matrices H-1J(x∗). This result allows us to achieve in Section 5 the convergence guarantees assuming local uniform stability, as well as non-convergence without pointwise uniform stability. The proof of Lemma 2 is given in Section E. Lemma 2 (Gradient of smoothed best-response). Let (Ω, f) be a multilinear game, where Ωn ⊂Rdn for each n ∈[N], and let h be a set of smooth and strictly convex regularizers, hn : Ωn →R. Then, the associated β-smoothed best-response map Φβ is well-defined and smooth, where the gradient of Φβ at x is given by: ∇Φβ(x) = 1 β H(x)-1J(x), where H(x)nm = ( ∇2 nhn(xn) n = m 0dn×dm n ̸= m, and H(x) ∈Rd×d is a positive-definite, block-diagonal matrix with d = d1 + · · · + dN. Conversely, Lemma L.2 shows that for any choice x and positive-definite, block diagonal matrix H-1, there exists a collection of steep regularizers such that ∇Φβ(x) is precisely given by H-1J(x), up to scaling. 8 4 The economic meaning of uniform stability We will introduce assumptions in Section 4.1, define and motivate strategic stationarity in Section 4.2, before finally providing the economic characterizations of uniform stability in Section 4.3. 4.1 Structural assumptions on the game The following assumptions will help us exclude trivial ways in which a strategy is strategically Pareto optimal. For example, if one player has constant utility, then every joint decision is strategically Pareto optimal, since it is impossible to strictly improve that player's utility. Specifically, we assume players have bi-directional interactions, ruling out one-sided strategic interactions between any two players. For example, when one player's utility directly depends on another, but not vice versa. Definition 15 (Bi-directional interactions). Let (Ω, f) be an N-player game with smooth utilities and let x ∈int(Ω). We say that two players n, m ∈[N] have bi-directional interactions at x if for all n, m ∈[N]: ker Jnm(x) = ker Jmn(x)⊤ . We will also assume that the interaction graph is connected, where this is a graph that captures when a player is strategically responsive to the action of another: an edge from player n to player m denotes that the block in the game Jacobian Jnm(x) is non-zero. Definition 16 (Interaction graph). Let (Ω, f) be an N-player game and let x ∈int(Ω). The interaction graph G(x) = (V, E) at x is the directed graph whose vertices are the players V = [N], and the edges are: (n, m) ∈E ⇐⇒ ∇2 nmfn(x) ̸= 0. If the game is bi-directional at x, then G(x) = (V, E) is undirected, since (n, m) ∈E if and only if (m, n) ∈E. Mathematically speaking, this assumption is mild and generic in multilinear games. It ensures that the lower-order, bilinear terms around an equilibrium dominate each player's utility. 4.2 Necessity and sufficiency for strategic Pareto optimality We define a second-order (bilinear) relaxation of strategic Pareto optimality, which we call strategic Pareto stationarity, or strategic stationarity, for short. It will make use of the following matrix concept: Definition 17 (λ-skew matrix). Let J ∈Rd×d be a block-matrix, where d is the sum of its block dimensions (d1, . . . , dN). Let λ = (λ1, . . . , λN) be a tuple of positive scalars. We say that J is λ-skew-symmetric if: ∀n, m ∈[N], λnJnm = -λmJ⊤ mn. In other words, the matrix ΛJ is skew-symmetric, where Λ is block-diagonal with Λnn = λnIdn×dn. Definition 18 (Strategic Pareto stationarity). Let (Ω, f) be an N-player multilinear game. A Nash equilibrium x∗is strategically Pareto stationary if there is a set of positive scalars λ so that J(x∗) is λ-skew. To understand the meaning of strategic Pareto stationarity, it is useful to first focus on polymatrix games, which are multilinear games whose utilities are polynomials of degree at most two-they have up to bilinear terms. These games are also called graphical games (Kearns, 2008; Cai et al., 2016). Formally: Definition 19 (Polymatrix game). A multilinear game is polymatrix if its utilities are bilinear polynomials. The payoffs in these games only depend on the pairwise interactions between players (higher-order k-way interactions are captured by the degree-k terms in the polynomial utilities). Furthermore, the Jacobian of a polymatrix game is a constant J(x) ≡J, so if a polymatrix game has a Nash equilibrium at x∗, then a particularly nice and strategically-equivalent form of the utility fn is given by: fn(x) ≃SE X m̸=n (xn -x∗ n)⊤Jnm(xm -x∗ m). 9 From this form of the utilities, we can see that when the Nash equilibrium of a polymatrix game is strategically Pareto stationary, every pair of player must be in strict competition. Classically, strict competition describes two-player games where any strategy that improves one player's utility must worsen the other's; that is, every strategy is Pareto optimal (Adler et al., 2009). Indeed, we have here that Jnm = -J⊤ mn, up to a change of units. Thus, strategic Pareto stationarity in the bilinear setting means that if two players n and m deviate away from the Nash equilibrium, any resulting change in the strategic components of their utilities must be opposite in sign. Generalizing this, we obtain a sufficient condition for polymatrix games: Lemma 3 (Sufficient condition for strategic Pareto optimality). Let (Ω, f) be an N-player polymatrix game with a Nash equilibrium x∗. If x∗is strategically Pareto stationary, then it is strategically Pareto optimal. This is proved in Section G. Under the assumptions of connected interaction graph and bi-directional interactions at the equilibrium-defined in Section 4.1-the converse is also true, where strategic Pareto optimality implies strategic Pareto stationarity. In fact, this holds generally for multilinear games. With these assumptions, we show that in multilinear games, strategic Pareto stationarity is necessary for optimality. The proof of this result is given in Section H. Proposition 1 (Strategic Pareto stationarity is necessary for strategic Pareto optimality). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗. Let the interaction graph G(x∗) be connected and the game be bi-directional at x∗. If the equilibrium x∗is strategically Pareto optimal, then it is strategically Pareto stationary. 4.3 The economic characterization of uniform stability The formal results concerning the economic meaning of uniform stability follow. Theorem 1 (Local uniform stability implies strategic Pareto optimality). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is locally bi-directional. Suppose the interaction graph is connected at x∗. If the game is locally uniformly stable at x∗, then the equilibrium is strategically Pareto optimal. Theorem 2 (Pointwise uniform stability is equivalent to strategic Pareto stationarity). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is bi-directional. Suppose that the interaction graph is connected at x∗. Then, x∗is uniformly stable if and only if it is strategically Pareto stationary. Theorem 2 and Proposition 1 prove that strategic Pareto optimality implies pointwise uniform stability. In a way, it is not surprising that a bilinear condition like strategic stationarity ends up being the adequate solution concept to characterizing uniformly-stable Nash equilibria. Observe that at an equilibrium x∗, the first-order or linear strategic terms ∇nfn(x∗) vanish across every player's utility, and that uniform stability at x∗can only reveal information only about the bilinear terms ∇2 nmfn(x∗) within the strategic component of the utilities; the condition depends purely on the Jacobian of the game J(x∗) at that point (see Definition 14). 5 Convergence under uniform stability The fundamental result by Hart and Mas-Colell (2003) shows that, in general, uncoupled dynamics cannot be guaranteed to asymptotically converge to mixed Nash equilibria. In this section, we refine this result for smoothed best-response learning dynamics (1), providing conditions for when the dynamics are unstable and fail to converge, and when they are stable and, in fact, globally converge. We consider two stability classes for mixed equilibria: those that are pointwise uniformly stable, and those that are locally uniformly stable. For simplicity, we consider a Nash equilibrium x∗that is approximated by the sequence of β-smoothed equilibria xβ as β goes to zero (see Lemma B.2). The first result shows that when x∗is not uniformly stable, then the (Φβ, η)-averaging dynamics can become unstable around xβ once β becomes sufficiently small. Intuitively, this means that x∗is 'inapproximable' by generic smoothed best-response dynamics. In contrast, our second result shows that if x∗is contained in a region that is uniformly stable, then it can be approximated to arbitrary precision by generic, smoothed best-response dynamics. It turns out that the price of higher precision, and in turn smaller β, is a slower rate of convergence due to the requirement of a smaller learning rate η. Informally: 10 Non-convergence result. If a Nash equilibrium x∗is not uniformly stable, then there are smoothed best-responses that cannot be stabilized: there are regularizers h so that when β > 0 is sufficiently small, all (Φβ, η)-averaging dynamics become unstable about the smoothed equilibrium xβ (Proposition 2). Convergence result. If a Nash equilibrium is x∗is contained in a region that is uniformly stable, then every smoothed best-responses can be stabilized: for all Φβ and sufficiently small η > 0, the (Φβ, η)-averaging dynamics globally converges to xβ (Theorem 3). 5.1 Stability concepts in dynamical systems Before we state the main results, recall the standard notion of stability from dynamical systems: Definition 20 (Stability of fixed points). Let Ω⊂Rd be a domain and let Φ : Ω→Ωbe a transition map defining the discrete-time dynamical system x(t + 1) = Φ(x(t)). Let x∗∈Ωbe a fixed point of Φ. • The fixed point x∗is stable if for every ε > 0, there exists some δ > 0 such that: x(0) -x∗ 0 such that: x(0) -x∗ 0, the β-smoothed equilibrium xβ is an asymptotically-stable fixed point of the (Φβ, η)- averaging dynamics (1) whenever η ∈(0, 1) is sufficiently small. 5.2 Convergence results for smoothed best-response dynamics The first result shows that if a Nash equilibrium is not uniformly stable, then not only is it not possible to stabilize generic smooth best-response dynamics to it (the dynamics to xβ are not asymptotically stable for arbitrarily small β), but in fact, the dynamics can become unstable. The following is proved in Section J. Proposition 2 (Inapproximability of unstable equilibria). Let (Ω, f) be an N-player multilinear game with an interior Nash equilibrium x∗. Suppose that x∗is not uniformly stable. Then, there is some collection of smooth and strictly convex regularizers h such that smoothed best-response dynamics does not stabilize to x∗. In fact, even if the sequence of β-smoothed equilibria xβ converges to x∗, each xβ is an unstable fixed point of the (Φβ, η)-averaging dynamics (1) for all η ∈(0, 1) and sufficiently small β > 0. Now, we present the main convergence result concerning Nash equilibria with local uniform stability. It turns out that due to additional structure in multilinear games, convergence is global: no matter where the dynamics is initialized, it will converge to the smoothed equilibrium. The following is proved in Section K. Theorem 3 (Convergence to equilibria in uniformly-stable regions). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗that is locally uniformly stable. Over arbitrary choice of smooth and strictly convex regularizer h, all smoothed best-response dynamics stabilize to x∗. In particular, there is a constant Cf > 0 depending only on the game so that when η ≤Cfβ2, the (Φβ, η)-averaging dynamics globally converge: x(t) -xβ ≤exp -ηt + ln N 2 . Using the fact that β-smoothed equilibria are O(β)-approximate Nash equilibria, this implies that smoothed best-response dynamics with averaging achieve T -1/2-convergence rate to Nash equilibria. 11 (a) Dynamics across smoothing parameters β (b) Dynamics across learning rates η Figure 2: The trajectories of β-smoothed best-response dynamics with η-learning rate toward a uniformly-stable, mixed Nash equilibrium (star) in a two-player normal-form game. (a) The β-smoothed equilibria become better approximations of the Nash equilibrium as β shrinks, but the dynamics become less stable and exhibit more cycling. The figure on the left plots the trajectories initialized at the black dot for varying smoothing β and fixed averaging η. (b) The learning dynamics converge to β-smoothed equilibria once η becomes sufficiently small, and it does so with greater stability with smaller η. However, stability comes at the expense of slower rate of convergence. The figure on the right plots the trajectories for fixed β and varying η. 6 Uniform stability for partially-mixed equilibria The main convergence result (Theorem 3) can be extended to equilibria that are not fully mixed, and are not interior equilibria. A Nash equilibrium x∗is on the boundary of the probability simplex if there are players who never play certain actions. This can happen if these actions are strictly dominated, in which case a perfectly rational player will never play those strategies, while a player with bounded rationality will play them with decreasing frequency. Intuitively, as long as the players are not overly irrational, the stability of smoothed best-response dynamics around these equilibria should not hinge upon these suboptimal actions. Before we make this intuition formal, recall some notations for normal-form game: • The joint strategy space Ω= Ω1 × · · · × ΩN is a product of probability simplices, Ωn = ∆kn-1, where the nth player may randomize over kn pure strategies. • The vertices of the simplex Ωn correspond to pure strategies of player n. For each alternative i ∈[kn], we let en,i ∈Ωn denote the pure strategy where the nth player deterministically chooses the ith action. The support of a strategy is the set of pure strategies with positive probability of being played: Definition 22 (Support of a strategy). For any player n, the support of a strategy xn ∈Ωn is the set: supp(xn) = i ∈[kn] : xn,i ̸= 0 . Given x ∈Ω, define the reduced strategy sets: Ωn(xn) := x′ n ∈Ωn : supp(x′ n) ⊂supp(xn) and Ω(x) := Y i∈[N] Ωn(xn). (2) Note that Ωn(xn) is an (k′ n -1)-dimensional simplex where k′ n = |supp(xn)|. 12 For the convergence result, we consider quasi-strict equilibria, which are Nash equilibria where each player fully mixes on the set of best responses (Harsanyi and Selten, 1988; Van Damme, 1992).2 This captures the property that if an alternative is never played in an equilibrium, then it must be strictly dominated. Definition 23 (Quasi-strict equilibrium). We say that a Nash equilibrium x∗of an N-player normal form game is fully-mixed on the best-response set or quasi-strict if for each player n ∈[N]: i ∈supp(x∗ n) ⇐⇒ i ∈arg max i∈[kn] fn(en,i; x∗ -n) Since actions that are not supported by a quasi-strong equilibrium are dominated, we expect that they are strategically irrelevant. This motivates the following refinement of the game (Harsanyi and Selten, 1988), where the irrelevant actions are removed from the game. Definition 24 (Reduced game). Let (Ω, f) be a normal-form game with a Nash equilibrium x∗. The reduced game around x∗is the game on the domain Ω(x∗) with the utilities f restricted to Ω(x∗). In the reduced game, x∗becomes an interior or fully-mixed Nash equilibrium (Corollary L.1), allowing the results in Section 5.2 to be applied. The remaining problem is about how the players learn to converge towards the support of the equilibrium and avoid playing the suboptimal strategies that have been removed from the reduced game. In order to control the rate at which these suboptimal strategies appear in the β-smoothed equilibria xβ, we introduce the following quantitative version of steepness (c.f. Definition 9): Definition 25 (Linear steepness). Let h : ∆k-1 →R be a steep regularizer. For all v ∈Rk, define the map: xβ(v) = arg max x∈∆k-1 v⊤x -βh(x). For each i ∈[i] and ε > 0, let Vi(ε) = {v ∈Rk : vi 0: lim β→0 sup v∈Vi(ε) xβ(v)i β = 0. In words, this states that the probability mass xβ(v)i placed on any alternative i ∈[k] must shrink sublinearly with β when the value vi is ε-suboptimal. In the case that h is the entropy map, Example L.1 shows that, in fact, this distance shrinks as exp(-Ω(1/β)) at a much faster, exponential rate. As a consequence of linear steepness, the following shows that the rate at which suboptimal alternatives are played in β-smoothed equilibria xβ also vanish at a sublinear rate o(β): Lemma 4 (Dominated strategies occur at β-sublinear rates in β-smoothed equilibria). Let (Ω, f) be an Nplayer normal-form game and h be a collection of linearly steep regularizers. Suppose that as β goes to zero, the sequence of of β-smoothed equilibria xβ converges to a quasi-strict equilibrium x∗. Let Π : Ω→Ω(x∗) be the orthogonal projection onto the support of x∗. Then: lim β→0 xβ -Πxβ β = 0. Before we can give the main convergence result for all quasi-strict equilibria, we need a technical condition to ensure that pseudoinverse ∇2h(x)+ extends smoothly to the boundary of the simplex: Definition 26 (Proper regularizer). A map h : ∆k-1 →R on the simplex is a proper regularizer if: • The restriction of h onto each face of the simplex is steep regularizer. • The Moore-Penrose pseudoinverse of the Hessian ∇2h(x)+ is smooth on the simplex. 2Classically, these are called quasi-strong equilibria. We prefer the term quasi-strict since strong Nash equilibria is another, unrelated refinement of the Nash equilibrium concept, which we also make use of. For discussion, see Van Damme (1992). 13 See Section L.1 for formal definition of the faces of the simplex and what it means to be smooth on it. The following convergence result shows that if x∗is a quasi-strict equilibrium, it suffices to analyze its uniform stability within the reduced game, neglecting all suboptimal alternatives. This is because whenever the regularizers are linearly steep, any potentially de-stabilizing effect of these additional strategies are overwhelmed by the stabilizing effects of the averaging dynamics. Theorem 4 (Convergence to non-interior equilibria). Let (Ω, f) be an N-player, normal-form game and let h be a collection of linearly steep, proper regularizers. Suppose that x∗is a quasi-strict Nash equilibrium and that it is the limit of the β-smoothed equilibria xβ as β goes to zero. If the reduced game around x∗is locally uniformly stable, then smoothed best-response dynamics stabilize to x∗. In particular, there exists a constant Cf > 0 such that for all sufficiently small β > 0 and η ≤Cfβ2, the (Φβ, η)-averaging dynamics is locally asymptotically stable at xβ, where the operator norm of its Jacobian at xβ is bounded by: (1 -η)I + η∇Φβ(xβ) 2 ≤exp -η 2 . The results of this section are proved in Section L. 7 Discussion In this work, we have introduced a theory of non-asymptotic stability for learning in games. We showed a fairly tight connection between uniform stability and collective rationality. And through uniform stability, we studied the last-iterate convergence behavior for the class of incremental, smoothed best-response dynamics. We close with a few open questions that we believe would be important to pursue. Strengthening the necessity of uniform stability for convergence to mixed Nash equilibria Our main non-convergence result (Proposition 2) shows that if an equilibrium x∗is not uniformly stable, then there exist smoothed best-responses that cannot be stabilized. We believe this can be strengthened, where 'there exist' is replaced with 'almost all' (in terms of the Hessian of the regularizer at x∗). A second strengthening has to do with the economic properties of the non-converging behaviors. We have shown that players can avoid stabilizing to Pareto inefficient equilibria, which can be seen as a form of collective rationality. However, we have not analyzed if players escape inefficient equilibria in a collectivelyrational way: Open Question 1. Are there uncoupled learning dynamics that not only do not stabilize to mixed equilibria that are not uniformly stable, but moreover locally escape them in collectively-rational ways. That is, whenever ∥x(t) -x∗∥ 0 such that for some time, all players improve: ∀s ∈(t, t + δ) and n ∈[N], fn x(s) > fn x(t) . Non-asymptotic analyses beyond smoothed best-response While incremental, smoothed best-response dynamics is a natural family of learning dynamics. It would be of interest to consider other classes of updates that can lead to mixed Nash equilibria. Relaxation of bi-directionality assumption It would be interesting to generalize beyond the assumption of bi-directional interactions. In particular, the bi-directional interactions are represented by undirected graphs. One can also consider interaction structures that are represented by directed graphs, which may lead to longer-range interactions (e.g., players form a directed cycle as in the Cyclic Matching Pennies game of Kleinberg et al. (2011)). Acknowledgments We express our gratitude toward Mike Jordan, for his inspiration and suggestions. 14 References Ilan Adler, Constantinos Daskalakis, and Christos H Papadimitriou. A note on strictly competitive games. In International Workshop on Internet and Network Economics, pages 471-474. Springer, 2009. Armen A Alchian. Uncertainty, evolution, and economic theory. Journal of political economy, 58(3):211-221, 1950. Carlos Al ́os-Ferrer and Nick Netzer. The logit-response dynamics. Games and Economic Behavior, 68(2): 413-427, 2010. Ioannis Anagnostides, Ioannis Panageas, Gabriele Farina, and Tuomas Sandholm. On last-iterate convergence beyond zero-sum games. In International Conference on Machine Learning, pages 536-581. PMLR, 2022. Robert J Aumann. Acceptable points in general cooperative n-person games. Contributions to the Theory of Games, 4:287, 1959. James P Bailey, Sai Ganesh Nagarajan, and Georgios Piliouras. Stochastic multiplicative weights updates in zero-sum games. arXiv preprint , 2021. David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In International Conference on Machine Learning, pages 354-363. PMLR, 2018. Michel Benaım and Morris W Hirsch. Mixed equilibria and dynamical systems arising from fictitious play in perturbed games. Games and Economic Behavior, 29(1-2):36-72, 1999. Nicoletta Bof, Ruggero Carli, and Luca Schenato. Lyapunov theory for discrete time systems. arXiv preprint , 2018. Yang Cai, Ozan Candogan, Constantinos Daskalakis, and Christos Papadimitriou. Zero-sum polymatrix games: A generalization of minmax. Mathematics of Operations Research, 41(2):648-655, 2016. Ozan Candogan, Ishai Menache, Asuman Ozdaglar, and Pablo A Parrilo. Flows and decompositions of games: Harmonic and potential games. Mathematics of Operations Research, 36(3):474-503, 2011. Vincent P Crawford. Learning behavior and mixed-strategy Nash equilibria. Journal of Economic Behavior & Organization, 6(1):69-78, 1985. Constantinos Daskalakis and Ioannis Panageas. Last-iterate convergence: Zero-sum games and constrained min-max optimization. In 10th Innovations in Theoretical Computer Science (ITCS) conference, ITCS 2019, 2019. Constantinos Daskalakis, Paul W Goldberg, and Christos H Papadimitriou. The complexity of computing a Nash equilibrium. Communications of the ACM, 52(2):89-97, 2009. Constantinos Daskalakis, Rafael Frongillo, Christos H Papadimitriou, George Pierrakos, and Gregory Valiant. On learning algorithms for Nash equilibria. In International Symposium on Algorithmic Game Theory, pages 114-125. Springer, 2010. ́Etienne de Montbrun and J ́erˆome Renault. Optimistic gradient descent ascent in zero-sum and general-sum bilinear games. arXiv preprint , 2022. Gerard Debreu. Theory of value: An axiomatic analysis of economic equilibrium. Wiley, New York, 1959. Glenn Ellison and Drew Fudenberg. Learning purified mixed equilibria. Journal of Economic Theory, 90(1): 84-115, 2000. Ludwig Elsner. An optimal bound for the spectral variation of two matrices. Linear algebra and its applications, 71:77-80, 1985. 15 Milton Friedman. The methodology of positive economics. The Essays in Positive Economics, 1953. Drew Fudenberg and David M Kreps. Learning mixed equilibria. Games and economic behavior, 5(3): 320-367, 1993. Drew Fudenberg and David K Levine. The theory of learning in games, volume 2. MIT press, 1998. Angeliki Giannou, Emmanouil Vasileios Vlatakis-Gkaragkounis, and Panayotis Mertikopoulos. Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information. In Conference on Learning Theory, pages 2147-2148. PMLR, 2021. Andrew Gilpin, Javier Pena, and Tuomas Sandholm. First-order algorithm with convergence for-equilibrium in two-person zero-sum games. Mathematical programming, 133(1):279-298, 2012. Steven Gjerstad. The rate of convergence of continuous fictitious play. Economic Theory, 7:161-177, 1996. Jacob K Goeree, Charles A Holt, and Thomas R Palfrey. Quantal response equilibrium: A stochastic theory of games. In Quantal response equilibrium. Princeton University Press, 2016. Garrett Hardin. The tragedy of the commons. Science, 162(3859):1243-1248, 1968. John C Harsanyi. Games with randomly disturbed payoffs: A new rationale for mixed-strategy equilibrium points. International journal of game theory, 2(1):1-23, 1973. John C Harsanyi and Reinhard Selten. A general theory of equilibrium selection in games. MIT Press Books, 1, 1988. Sergiu Hart and Andreu Mas-Colell. Uncoupled dynamics do not lead to Nash equilibrium. American Economic Review, 93(5):1830-1836, 2003. Josef Hofbauer and Karl Sigmund. Evolutionary games and population dynamics. Cambridge university press, 1998. Josef Hofbauer and Sylvain Sorin. Best response dynamics for continuous zero-sum games. Discrete and Continuous Dynamical Systems Series B, 6(1):215, 2006. Ed Hopkins. A note on best response dynamics. Games and Economic Behavior, 29(1-2):138-150, 1999. Roger A Horn and Charles R Johnson. Matrix analysis. Cambridge university press, 2012. Rufus Isaacs. Differential games: a mathematical theory with applications to warfare and pursuit, control and optimization. Courier Corporation, 1999. James S Jordan. Three problems in learning mixed-strategy Nash equilibria. Games and Economic Behavior, 5(3):368-386, 1993. Michael Kearns. Graphical games. In The New Palgrave Dictionary of Economics, pages 1-4. Springer, 2008. Robert D Kleinberg, Katrina Ligett, Georgios Piliouras, and ́Eva Tardos. Beyond the Nash equilibrium barrier. In ICS, volume 20, pages 125-140, 2011. Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. In Annual symposium on theoretical aspects of computer science, pages 404-413. Springer, 1999. Vijay Krishna and Tomas Sj ̈ostr ̈om. On the convergence of fictitious play. Mathematics of Operations Research, 23(2):479-511, 1998. Peter D Lax. Linear algebra and its applications. John Wiley & Sons, 2007. Alistair Letcher, David Balduzzi, S ́ebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. Journal of Machine Learning Research, 20(84):1-40, 2019. 16 R Duncan Luce and Howard Raiffa. Games and decisions: Introduction and critical survey. John Wiley & Sons, Inc., 1957. Richard D McKelvey and Thomas R Palfrey. Quantal response equilibria for normal form games. Games and economic behavior, 10(1):6-38, 1995. Panayotis Mertikopoulos and William H Sandholm. Learning in games via reinforcement and regularization. Mathematics of Operations Research, 41(4):1297-1324, 2016. Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. Cycles in adversarial regularized learning. In Proceedings of the twenty-ninth annual ACM-SIAM symposium on discrete algorithms, pages 2703-2717. SIAM, 2018. Jason Milionis, Christos Papadimitriou, Georgios Piliouras, and Kelly Spendlove. An impossibility theorem in game dynamics. Proceedings of the National Academy of Sciences, 120(41):e2305349120, 2023. Dov Monderer and Lloyd S Shapley. Potential games. Games and economic behavior, 14(1):124-143, 1996. Herv ́e Moulin and J-P Vial. Strategically zero-sum games: the class of games whose completely mixed equilibria cannot be improved upon. International Journal of Game Theory, 7(3-4):201-221, 1978. John H Nachbar. "Evolutionary" selection dynamics in games: Convergence and limit properties. International Journal of Game Theory, 19(1):59-89, 1990. John Nash. Non-cooperative games. Annals of Mathematics, 54(2):286-295, 1951. Georg Ostrovski and Sebastian van Strien. Payoff performance of fictitious play. arXiv preprint , 2013. Marco Pangallo, James BT Sanders, Tobias Galla, and J Doyne Farmer. Towards a taxonomy of learning dynamics in 2 × 2 games. Games and Economic Behavior, 132:1-21, 2022. Christos H Papadimitriou. The complexity of finding Nash equilibria. Algorithmic game theory, 2:30, 2007. Georgios Piliouras, Lillian Ratliff, Ryann Sim, and Stratis Skoulakis. Fast convergence of optimistic gradient ascent in network zero-sum extensive form games. In International Symposium on Algorithmic Game Theory, pages 383-399. Springer, 2022. Julia Robinson. An iterative method of solving a game. Annals of mathematics, 54(2):296-301, 1951. R Tyrrell Rockafellar. Convex analysis, volume 28. Princeton university press, 1997. Larry Samuelson and Jianbo Zhang. Evolutionary stability in asymmetric games. Journal of economic theory, 57(2):363-391, 1992. Lloyd S Shapley. Some topics in two-person games. Advances in game theory, 1963. Satinder Singh, Michael J Kearns, and Yishay Mansour. Nash convergence of gradient dynamics in generalsum games. In UAI, pages 541-548, 2000. Adam Smith. An inquiry into the nature and causes of the wealth of nations. London: printed for W. Strahan; and T. Cadell, 1776. Dale O Stahl II. On the instability of mixed-strategy Nash equilibria. Journal of Economic Behavior & Organization, 9(1):59-69, 1988. Eric Van Damme. Refinements of Nash equilibrium. In Advances in Economic Theory: Sixth World Congress, volume 1, pages 32-75. Cambridge University Press New York, 1992. Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Thanasis Lianeas, Panayotis Mertikopoulos, and Georgios Piliouras. No-regret learning and mixed Nash equilibria: They do not mix. Advances in Neural Information Processing Systems, 33:1380-1391, 2020. 17 Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. Linear last-iterate convergence in constrained saddle-point optimization. The Ninth International Conference on Learning Representations, 2021. J ̈orgen W Weibull. The 'as if' approach to game theory: Three positive results and four obstacles. European Economic Review, 38(3-4):868-881, 1994. Sidney G Winter. Satisficing, selection, and the innovating remnant. The Quarterly Journal of Economics, 85(2):237-261, 1971. 18 A Normal-form games and multilinear games A.1 Background for normal-form games In a normal-form game, each player n ∈[N] chooses an action out of kn alternatives or pure strategies, indexed over [kn]. To allow for randomized strategies, let ∆kn-1 denote the probability simplex over [kn], ∆kn-1 = n xn ∈Rkn : X i∈[kn] xn,i = 1 and xn,i ≥0 o . The nth player's strategy space is Ωn = ∆kn-1, where we say that xn ∈Ωn is a mixed strategy profile, with: ∀i ∈[kn], xn,i = probability that player n plays action i. Assuming players randomize their actions independently, each joint mixed strategy x = (x1, . . . , xN) ∈Ω defines the product distribution x1 ⊗· · · ⊗xN over the joint space of alternatives [k1] × · · · × [kN]. The utilities in a normal-form game are defined by a collection of payoff tensors T 1, . . . , T N ∈Rk1×···×kN , T n i1,...,iN = payoff for player n when the mth player chooses action im for m ∈[N]. The utilities f of a normal-form game extend these payoffs to mixed strategies in all of Ω, and they compute the expected payoffs when each player randomize independently: fn(x) = X i1∈[k1] · · · X iN∈[kN] T n i1,...,iN x1,i1 · · · xN,iN . Lemma A.1 (Normal-form games are multilinear games). Let (Ω, f) be an N-player normal-form game, where the nth player has kn alternatives. There is a multilinear game ( ̃Ω, ̃f) with ̃f : Rk1-1 × · · · × RkN-1 → RN and an affine bijection φ : int(Ω) → ̃Ωsuch that f(x) = ( ̃f ◦φ)(x) for all x ∈int(Ω). Proof of Lemma A.1. This follows from the form of the utility of a normal form game. A.2 Individual and collective rationality are generally incomparable Example A.1 (Strategic Pareto optimality does not imply Nash stability). Consider the following 2-player normal-form game, which is purely strategic, since all rows and columns sum to zero: fA B1 B2 B3 A1 4 -1 -3 A2 -10 2 8 A3 6 -1 -5 fB B1 B2 B3 A1 4 -10 6 A2 -1 2 -1 A3 -3 8 -5 Here, we show the payoff matrices for the A and B players, and bold the preferred action conditioned on the other player. In particular, the joint strategy (A2, B2) is a Nash equilibrium, but not a strategic Pareto optimum, since it can be improved to (A1, B1). And while (A1, B1) is Pareto optimal, it is not a Nash equilibrium. B Smoothed equilibria and dynamics Lemma B.1 (β-smoothed equilibria are O(β)-approximate Nash equilibria). Let h be a set of strictly convex and bounded regularizers, where max hn -min hn ≤C for n ∈[N], inducing the β-smoothed best-response map Φβ. If a joint strategy is a β-smoothed equilibrium of Φβ, then it is a Cβ-approximate Nash equilibrium. Proof of Lemma B.1. Let x be a β-smoothed equilibrium with respect to h. By the definition of Φβ, we have that the following inequality holds for any alternate strategy zn ∈Ωn; fn(zn; x-n) -βhn(zn) ≤fn(x) -βhn(xn). 19 By rearranging and applying the boundedness of the regularizer, we obtain: fn(zn; x-n) ≤fn(x) + β hn(zn) -hn(xn) ≤fn(x) + Cβ. Thus, x is a Cβ-approximate Nash equilibrium. Lemma B.2 (β-smoothed equilibria converge to Nash equilibria). Let Ωbe a convex, joint strategy space with a metric ρ. Let (Ω, f) be an N-player game with continuous utilities, and h be a set of strictly convex and bounded regularizers. Suppose that the game has a unique Nash equilibrium x∗. Then: lim β→0 ρ(xβ, x∗) = 0. Proof of Lemma B.2. This follows from Lemma B.1 and Lemma B.3. Lemma B.3 (Convergence to Nash equilibria). Let Ωbe a joint strategy space with a metric ρ, and suppose that (Ω, f) is an N-player game with continuous utilities. Let (xεk)k∈N be any sequence of εk-Nash equilibria where εk →0. If x∗is a limit point of the sequence, then x∗is a Nash equilibrium. Proof. The result follows since: 1. Let Xε denote the set of ε-Nash equilibria. It is a closed set for all ε > 0. 2. The ε-Nash equilibria satisfies Xε ⊂Xε′ whenever ε ≤ε′. Also, the set of Nash equilibria is given by: X∗= \ ε>0 Xε. 3. For each ε, the sequence xεk eventually remains in Xε, and so x∗∈Xε for all ε. This implies that x∗is Nash. C Strategic equivalence and games in canonical form For many results of this work, we may often restrict our consideration to games in canonical form: Definition C.1 (Canonical form of a multilinear game). Let (Ω, f) be an N-player multilinear game, where the joint strategy space includes the origin 0 ∈Ω. We say that the game is in canonical form if the utilities have no non-strategic component and if the origin is a Nash equilibrium, with x∗= 0. We can do this without loss of generality because the solution concepts we are concerned with all respect strategic equivalence, namely strategic Pareto optimality (Definition 7), uniform stability (Definition 14), and strategic Pareto stationarity (Definition 18). Therefore, it is enough to prove them for games that are purely strategic (i.e. they have no non-strategic component; see Definition 6). Furthermore, in multilinear games, we can assume that the Nash equilibrium is set at the origin x∗= 0. We may do this without loss of generality since the translation of a multilinear polynomial remains a multilinear polynomial. Lemma C.1. The dynamics (1) are uncoupled and are preserved across strategic equivalence classes. Proof. The dynamics depend only on the strategic components of the game. D Proof of Lemma 1 Lemma 1. Let u, v ∈Rm \ {0}. Then, u⊤v > 0 if and only if there is a positive-definite matrix H so that: u = Hv. Proof. (⇒). Suppose that u⊤v > 0. There are two cases: 20 1. If u and v are linearly dependent, then we can let H = λI where λ = ∥u∥/∥v∥, so that Hv = u. 2. Otherwise, u and v span two dimensions. We can choose an orthonormal basis B such that: B⊤u = α1e1 + α2e2 and B⊤v = β1e1 + β2e2, α1, α2, β1, β2 > 0. This is possible because u⊤v > 0. Now, let Λ = diag(α1/β1, α2/β2, 1, . . . , 1) and set: H = BΛB⊤, so that by construction, u = Hv. (⇐). Suppose that u = Hv for some positive-definite matrix H. Then u⊤v = v⊤Hv > 0. E Proof of Lemma 2 Lemma 2 (Gradient of smoothed best-response). Let (Ω, f) be a multilinear game, where Ωn ⊂Rdn for each n ∈[N], and let h be a set of smooth and strictly convex regularizers, hn : Ωn →R. Then, the associated β-smoothed best-response map Φβ is well-defined and smooth, where the gradient of Φβ at x is given by: ∇Φβ(x) = 1 β H(x)-1J(x), where H(x)nm = ( ∇2 nhn(xn) n = m 0dn×dm n ̸= m, and H(x) ∈Rd×d is a positive-definite, block-diagonal matrix with d = d1 + · · · + dN. Proof. The map x′ n 7→fn(x′ n, x-n) -βnhn(xn) is a strictly concave function over a convex set Ωn. And so, it has a unique maximizer, implying that Φβ(x)n is well-defined. Given any x that maps to a fully mixed strategy Φβ(x) ∈int(Ω), we can apply the Lagrange multiplier theorem to express Φβ(x)n for each n ∈[N] as the stationary point to the Lagrangian: Li(x′ n, λ; x) = fn(x′ n, x-n) -βhn(x′ n) -λ1⊤x′ n, where 1 ∈Rdn is the all-ones vector. This implies that Φβ(x)n is the unique stationary point satisfying: 0 = ∇x′nLn x′ n, λ; x = ∇x′n fn(x′ n, x-n) -βhn(x′ n) -λ1. Let Πn = I - 1 dn 11⊤be the linear projection and Ψn : Ωn × Ω→TΩn be the map: Ψn(x′ n, x) = ∇n fn(x′ n, x-n) -βhn(x′ n) . Then, x′ n = Φβ(x)n is the unique solution to Ψn x′ n, x = 0. By the implicit function theorem, we obtain the gradient of Φβ. The blocks on the diagonals are zero because fn is multilinear, while ∇2 nhn is positive definite on Ωn since hn is strictly convex. F Proof of Theorem 1 Theorem 1 (Local uniform stability implies strategic Pareto optimality). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is locally bi-directional. Suppose the interaction graph is connected at x∗. If the game is locally uniformly stable at x∗, then the equilibrium is strategically Pareto optimal. Proof. Under these assumptions, we claim that if a Nash equilibrium x∗has local uniform stability, then the game must be bilinear. Theorem 2 then shows that x∗must be strategically Pareto stationary. As the game is bilinear, Lemma 3 shows that x∗is strategically Pareto optimal. To finish the proof, we show that the strategic component of the game is bilinear: we prove that for all triples of players n, m, k ∈[N], the higher-order derivative ∇3 nmkfn ≡0 is identically zero everywhere. 21 First, note that as the game is locally uniformly stable at x∗, the Jacobian map J is uniformly λ-skew, where λ is a set of positive scalars independent of x (Lemma F.1). By a positive rescaling of the utilities, we may assume that J(x) is skew-symmetric everywhere, and that for any triple of players n, m, k ∈[N], ∀x ∈Ω, Jnm(x) = -Jmn(x)⊤, Jmk(x) = -Jkm(x)⊤, Jkn(x) = -Jnk(x)⊤. Taking derivatives and using the fact that the partial derivatives commute over smooth functions, we have: ∇3 nmkfn + ∇3 nmkfm ≡0 ∇3 nmkfm + ∇3 nmkfk ≡0 ∇3 nmkfk + ∇3 nmkfn ≡0 This can only happen if the triples ∇3 nmkfn = ∇3 nmkfm = ∇3 nmkfk = 0 are identically zero everywhere. Lemma F.1. Let (Ω, f) be an N-player multilinear game in canonical form. Let x∗be a Nash equilibrium that is locally bi-directional and is locally uniformly stable. There exists a set of positive scalars λ∗such that J(x) is λ∗-skew for all x ∈Ω. Proof. Let the Nash equilibrium be locally bi-directional and locally uniformly stable, in particular, on a neighborhood U of x∗. Then, Theorem 2 shows that J(x) is λ(x)-skew, for some local map x 7→λ(x) on U to the positive orthant of RN. In particular, if we let λnm(x) = λm(x)/λn(x), we obtain: Jnm(x) = -λnm(x)Jmn(x)⊤. (3) We claim that λnm(x) is in fact constant in x. From this, it follows that J(x) is λ∗-skew where λ∗= λ(x∗). To prove the claim, fix some z ∈Ωand consider the lth player. For this proof, define the vector z(ε) by: z(ε) = (zl+ ε, z-l), which is a perturbation only in the lth player's strategy. By multilinearity, we have that for all n, m ∈[N]: ∇lJnm(z(ε)) = ∇lJnm(z), showing that ∇lJnm(z(ε)) is constant in ε. In particular, this follows from Jnm(z(ε)) = Jnm(z)+∇lJnm(z)ε. To compress notation, define the following tensors: Trst = ∇lJnm(z)rs t and T ′ rst = ∇lJmn(z)sr t. Let λ ≡λnm(z(ε)). Then, applying the product rule to Equation (3) at x = z(ε), we obtain that: Trst = -∂tλ · T ′ rsuεu -λT ′ rst = -T ′ rsu · h εu · ∂tλ + λ · δut i . The product rule applies because λnm is smooth; it is the quotient of two non-zero polynomials. We obtain: T ′ rsuεu∂tλ + Trsu + λT ′ rsu · δut = 0. For each fixed pair (r, s), the following (u, t)-slice is either the zero matrix or a full-rank (diagonal) matrix: Trsu + λT ′ rsu · δut. On the other hand, the following (u, t)-slice is either the zero matrix or a rank-1 matrix: T ′ rsuεu∂tλ. As the sum of these two matrices is zero, the first must have been the zero matrix, whereby we obtain: Trsu + λT ′ rsu = 0. This shows that λ ≡λnm(z(ε)) is constant in ε. This holds for all choices of joint strategy z ∈U and player l, so this implies that λnm is constant everywhere on the open set U. Finally, due to multilinearity, this implies that it is constant on the whole space Ω. 22 Notation 1 (Translation operator). For any c ∈Rn, let Tc : M →M be the translation operator: (Tcf)(x) = f(x -c). It is a linear map, and under the standard polynomial basis, it has matrix representation: Tc S,S′ =            (-1)|S′ | Y i∈S′ ci S′ ⊇S 0 S′ ̸⊇S. (4) Lemma F.2. Let f, g ∈M and S ⊂[n]. Suppose that for all c ∈Rn, we have that: (Tcf)S = (Tcg)S. Then, for all S′ ⊇S, the coefficients are equal fS′ = gS′. Proof. We proceed by induction. Let P(k) denote the proposition: fS′ = gS′ for all S′ ⊇S where |S′ | ≤k. • Base case: If S′ = S, then by assumption fS = (T0f)S = (T0g)S = gS. Thus, P(0) is true. • Inductive step: Suppose that P(k) holds. Let S′ ⊇S and |S′ \ S| = k + 1. Define c = (c1, . . . , cn): ci = ( 1 i ∈S′ 0 i /∈S′, and let Tc be the translation with offset c. By (4), the S-coefficients of Tcf and Tcg are given by: Tcf S = X I⊆[n] αIfI and Tcg S = X I⊆[n] αIgI, where αI = (-1)|I | · 1 S ⊆I ⊆S′ . By P(k), all coefficients are equal fI = gI when S ⊆I ⊊S′. By the equality (Tcf)S = (Tcg)S, we obtain that: 0 = Tcf S - Tcg S = X S⊆I⊊S′ αI · (fI -gI) + αS′ · fS′ -gS′ = αS′ · fS′ -gS′ , which implies that fS′ = gS′ since αS′ ̸= 0. This proves P(k + 1) holds. The result follows by induction. G Proof of Lemma 3 Lemma 3 (Sufficient condition for strategic Pareto optimality). Let (Ω, f) be an N-player polymatrix game with a Nash equilibrium x∗. If x∗is strategically Pareto stationary, then it is strategically Pareto optimal. Proof. Suppose that x∗is strategically Pareto stationary. Then, there exist a set of positive scalars λn > 0 such that the weighted strategic components of the utilities sum to zero for all x ∈Ω: X n∈[N] λnfn(x) ≃SE X n∈[N] X m̸=n λn(xn -x∗ n)⊤Jnm(xm -x∗ m) = 0. It follows that there does not exist any x ∈Ωsuch that the following holds simultaneously for all n ∈[N], fn(x) -fn(x∗) ≃SE X m̸=n (xn -x∗ n)⊤Jnm(xm -x∗ m) > 0. In particular, the Nash equilibrium is strategically Pareto optimal. 23 H Proof of Proposition 1 In this section, we show that strategic Pareto optimality implies stationarity. We break down the proof of Proposition 1 into a sequence of results of increasing generality. The core of each proof is by contradiction: if a Nash equilibrium is not strategically stationary, then it is not Pareto optimal. 1. In Section H.1, we prove the result in the two-player case. The main innovation here is Lemma H.2; it is a linear-algebraic result showing that bilinear forms (x, y) 7→x⊤My can be recovered from the related map (x, y) 7→sign(x⊤My), up to a positive scale. 2. In Section H.2, we prove the general result for N players. This result makes use of Lemma H.3, which shows under some assumptions, that a Nash equilibrium is weakly Pareto optimal if and only if it is a strong Nash equilibrium. This will give us a set of easier conditions to check, which we make use of in the proof of Lemma H.4, which is precisely Proposition 1 in the case of games in canonical form. Proposition 1 (Strategic Pareto stationarity is necessary for strategic Pareto optimality). Let (Ω, f) be an N-player multilinear game with a Nash equilibrium x∗. Let the interaction graph G(x∗) be connected and the game be bi-directional at x∗. If the equilibrium x∗is strategically Pareto optimal, then it is strategically Pareto stationary. Proof. This follows from Lemma H.4. H.1 Weak Pareto optimality implies stationarity: the two-player case In this section, we let (Ω, f) be a two-player polymatrix game in canonical form, with individual strategy spaces Ωn = Rdn for n ∈{1, 2}, and a Nash equilibrium x∗at the origin. As the game is bilinear, the game Jacobian is constant, which we denote by J. In particular, we may assume that the utilities are given by: f1(x) = x⊤ 1 J12x2 and f2(x) = x⊤ 2 J21x1, so that the payoffs for both players at the Nash equilibrium are f1(0) = f2(0) = 0. Recall that a Nash equilibrium x∗is strategically Pareto stationary if J is block skew-symmetrizable. In this case, this means that there exists Σ ∈Rd1×d2 and λ > 0 such that: J = J12 J21 = -λΣ Σ⊤ . Lemma H.1 (Strategic Pareto optimality in two-player games). Let (Ω, f) be a bi-directional, two-player polymatrix game in canonical form with a Nash equilibrium x∗and Jacobian J. Suppose x∗is strategically Pareto optimal. Then, it is strategically Pareto stationary. That is, there exists some λ > 0 such that: J12 = -λJ⊤ 21. Proof. Let x∗be strategically Pareto optimal. Note that in this case, as the utilities are purely strategic, this simply means that x∗is weakly Pareto optimal. We claim that for all x ∈Ω: sign f1(x) = sign -f2(x) , (5) where sign : R →{-, 0, +} is the sign function. Assuming the claim for now, we obtain the result immediately from Lemma H.2, which shows that the sign of a bilinear map characterizes it up to a positive scalar. That is, the two matrices J12 and -J⊤ 21 are related by some positive scaling factor λ > 0. Proof of claim. To show the sign condition (5), we only need to rule out the two cases: (1) one utility is strictly negative fn(x) 0 while the other is non-negative f-n(x) ≥0. We will only show how to rule out case (1) by contradiction, as the other case is completely analogous. Without loss of generality, suppose that: f1(x) = x⊤ 1 J12x2 0 and w⊤ 1 J⊤ 21x2 > 0. For example, let: w1 = 1 2 J12x2 ∥J12x2∥+ J⊤ 21x2 ∥J⊤ 21x2∥ . When x′ = (w1, x2), we obtain that both f1(x′) > 0 and f2(x′) > 0. Recall that fn(x∗) = 0. Thus, the joint vector x′ strictly improves both utilities over x∗, violating the weak Pareto optimality of x∗. Lemma H.2 (Bilinear maps are characterized by their signs up to scaling). Let A, B ∈Rd1×d2 be matrices such that for all x ∈Rd1 and y ∈Rd2, sign(x⊤Ay) = sign(x⊤By), where sign : R →{-, 0, +} is the sign function, sign(z) =      + z > 0 0 z = 0 - z 0 such that A = λB. Proof. Consider the singular value decomposition of A: A = UΣAV ⊤, where U ∈Rd1×d1 and V ∈Rd2×d2 are orthogonal and the singular values σA j = ΣA jj are non-negative, where j ∈{1, . . . , d1 ∧d2}. Now define ΣB = U ⊤BV . We now show that in fact this yields a simultaneous singular value decomposition: B = UΣBV ⊤, where all singular values σB j are non-negative, and σA j = 0 if and only if σB j = 0. This is true because the signs of ΣA and ΣB match coordinate-wise. To see this, let e1 j and e2 k be from the standard basis vectors of Rd1 and Rd2, respectively. Then: ΣA jk = (Ue1 j)⊤A(V e2 k) and ΣB jk = (Ue1 j)⊤B(V e2 k). By assumption, the entries ΣB jk and ΣA jk share same signs. Now, we show that ΣA = αΣB for some λ > 0. Let uj and vk denote the left and right singular vectors-the columns of U and V . Without loss of generality, we may assume that σA 1 , σB 1 > 0, and we let: λ = u⊤ 1 Av1 u⊤ 1 Bv2 . For each i > 1, let αi = p σA i /σA 1 , so that: (ui + αiu1)⊤A(vi -αiv1) = u⊤ i Avi -α2 i u⊤ 1 Av1 = 0, where cross-terms such as u⊤ 1 Avi = 0 because u1 and vi are unpaired singular vectors. By assumption that the signs match, this also implies that the same holds for B, where: 0 = (ui + αiu1)⊤B(vi -αiv1) = u⊤ i Bvi -α2 i u⊤ 1 Bv1 = σB i -σA i σA 1 u⊤ 1 Av1 α . And since u⊤ 1 Av1/σA 1 = 1, we obtain that λσB i = σA i . It follows that A = λB. 25 H.2 Strategic Pareto optimality implies stationarity: the N-player case We now generalize the result from Section H.1 to N-player multilinear games. We prove this result by way of the classical notion of strong Nash equilibria (Aumann, 1959). At the usual Nash equilibrium, no single player can improve their own utility without coordinating with others. The strong Nash equilibrium extends this form of stability to groups or coalitions of players. For any subset of players S ⊂[N], let xS and x-S denote the tuples (xn : n ∈S) and (xn : n /∈S), respectively. Then: Definition H.1 (Strong Nash equilibrium). Let S ⊂[N] be any subset or coalition of players. A joint decision vector x∗∈Ωis weakly S-Pareto optimal if there does not exist some x ∈Ωsuch that: ∀n ∈S, fn(xS; x∗ -S) > fn(x∗). We say that x∗is a strong Nash equilibrium if it is weakly S-Pareto optimal for all S ⊂[N]. The following lemma establishes an equivalence between strong Nash stability and weak Pareto optimality for games in canonical form under mild structural assumptions. Lemma H.3 (Strongly Nash equilibria are weakly Pareto). Let (Ω, f) be a bi-directional, N-player multilinear game in canonical form with a Nash equilibrium x∗. Suppose the interaction graph is connected at x∗. Then, the equilibrium x∗is a strong Nash equilibrium if and only if x∗is weakly Pareto optimal. The forward direction is immediate by definition: when x∗is a strong Nash equilibrium, then applying the condition to S = [N] shows that it also weakly Pareto optimal. For the reverse direction, suppose that x∗ is not a strong Nash equilibrium. Thus, there is a coalition of players S ⊂[N] that can coordinate together to strictly improve their own utilities. We show that this group can be grown to include all N players, which then implies that x∗is not weakly Pareto optimal. In particular, the coalition can be extended to include any player that is adjacent to S in the interaction graph. Proof of Lemma H.3. (⇒). If x∗is a strong Nash equilibrium, then by definition, it is weakly S-Pareto optimal where S = [N]. Thus, it is weakly Pareto optimal. (⇐). Suppose that x∗is not a strong Nash equilibrium. Let S ⊆[N] be a maximal coalition such that x∗ is not weakly S-Pareto optimal: there is a joint decision vector of the form x = (xS, x∗ -S) where all players in S strictly improve. If S = [N], then this shows that x∗is not weakly Pareto optimal. To finish the proof, we now rule out the possibility that S is not all of [N]. Let G(x∗) = (V, E) be the interaction graph at x∗. Choose any player n /∈S that is adjacent to some player m ∈S in G(x∗); this player exists because the graph is connected, and let S′ = S ∪{n}. We will construct a joint strategy that is not S′-Pareto optimal, contradicting the maximality of S as desired. Consider joint decision vectors x′ of the form: x′ l= ( xl+ εl l∈S′ x∗ l l/∈S′. Players in the coalition S play a perturbation of the joint strategy xS, the player n plays a perturbed Nash strategy, and the remainder play the Nash equilibrium strategy x∗ -S′. By the continuity of the utilities, there exists δ > 0 so that if ∥x -x′∥ 0. Because the game is in canonical form, the nth player's utility at x′ can be expressed in the form: fn(x′) -fn(x∗) = ε⊤ n An(εS), where An is a polynomial in εS. In particular, it has a non-zero linear term: An(εS) = X m∈S Jnm(x∗) εm + o(∥εS∥), since the player n is connected to at least one player m ∈S. As the polynomial An is not identically zero, there exist choices of x′ satisfying fn(x′)-fn(x∗) > 0 with ∥εS′∥ 0 so that: Jnm = -amnJ⊤ mn. We prove that x∗is strategically Pareto stationary by showing that J is block skew-symmetrizable, where there is a set of positive scalars (λn)n∈[N] such that: ∀n, m ∈[N], amn = λm λn . (6) We prove the consistency condition (6) by induction over the subgraphs of the interaction graph G of the game. Without loss of generality, we may assume that G is connected, since players in different connected components are completely independent of each other. In the following, if G′ = (V ′, E′) is a subgraph of G, we let J[G′] denote the matrix: J[G′]nm = ( Jnm (n, m) ∈E′ 0 (n, m) /∈E′, where the block J[T]nm = 0 is zeroed out when (n, m) /∈E′. Base case. Let T = (VT , ET ) be a spanning tree of G. Endow T with a rooted tree structure, so that there is a root vertex n0 ∈VT and each other vertex n ∈VT has a unique parent vertex pa(v) ∈VT . Inductively define λn > 0 for n ∈VT so that: λn0 = 1 and λn = amnλm where m = pa(n). By construction, we have that J[T] is block skew-symmetrizable by the set of positive scalars (λn)n∈[N]. Inductive step. Assume that there is a subgraph G′ ⊂G such that J[G′] is block skew-symmetrizable by the positive scalars (λn)n∈[N]. Let e ∈E \ E′ be an edge not in G′. Moreover, choose e so that there is a minimal cycle graph C = (VC, EC) ⊂G containing the edge e such that all other edges in C are in G′: e ∈EC and EC \ {e} ⊂E′. In particular, there are no edges in G between any two vertices of C that are not already in C. Suppose there are r players in the cycle. Let us rename these players so that VC = [r]. Actually, for convenience, let VC = Z/rZ, where player r can also be called player 0. Naturally, order the players so that: (n, m) ∈EC ⇐⇒ n -m mod r = ±1. Let us also choose the labels so that e = (0, 1). We now prove that the consistency condition (6) holds; that is, ar1 = λr/λ1. We proceed by contradiction and construct xC such that: ∀n ∈VC, fn(xC; x∗ -C) -fn(x∗) > 0, This would contradict the assumption that x∗is weakly Pareto optimality over the coalition of players in C. Assume without loss of generality that a1r 0, the higher-order terms can be made to have vanishing contribution to the utility relative to the bilinear term. By the inductive hypothesis, J[G′] is block skew-symmetrizable by the set of scalars (λn)n∈[N], so that: ∀n ∈[r -1], fn(xC; x∗ -C) -fn(x∗) = x⊤ n Jn,n-1xn-1 -λn+1 λn · x⊤ n+1Jn+1,nxn + o(∥xC∥2) = λ-1 n · αn -αn+1 + o(∥xC∥2) = λ-1 n · α/(r -1) + o(∥xC∥2). We also have that the change of the utility of the rth player is: fr(xC; x∗ -C) -fr(x∗) = λ-1 r · αr -a1r · x1J1rxr + o(∥xC∥2) = λ-1 r 1 -α -a1r · λr λ1 + o(∥xC∥2), where in the first step, we used the equality Jr1 = -a1rJ⊤ 1r. By construction, the lowest-order, bilinear term of the utility is positive, of each player in the coalition C using this choice of xC, obtaining the desired contradiction. And so, we must have that a1r = λ1/λr, implying the consistency condition (6). I Proof of Theorem 2 Theorem 2 (Pointwise uniform stability is equivalent to strategic Pareto stationarity). Let x∗be a Nash equilibrium of an N-player multilinear game, at which it is bi-directional. Suppose that the interaction graph is connected at x∗. Then, x∗is uniformly stable if and only if it is strategically Pareto stationary. Proof of Theorem 2. Uniform stability and strategic Pareto stationarity both depend only on the bilinear terms of the strategic component of games. Thus, we may assume without loss of generality that the game is a purely-strategic, N-player polymatrix game with the Jacobian J ≡J(x∗). In fact, we may assume it is in canonical form (Definition C.1), where the Nash equilibrium is centered at the origin x∗= 0, with utilities: fn(x) = X m̸=n xnJnmxm. (⇒). For Nash equilibria in polymatrix games in canonical form, Proposition 1 shows that the strategic Pareto stationarity is equivalent to weak Pareto optimality. And so, to prove the forward direction, we may show the contrapositive: if x∗is not weakly Pareto optimal, then it is not uniformly stable. When x∗is not weakly Pareto optimal, there is a joint strategy x ∈Ωsuch that all players improve: fn(x) -fn(x∗) = x⊤ n X m̸=n Jnmxm > 0. 28 Lemma 1 shows that for each player n ∈[N], there exists Hn ≻0 such that: Hn X m̸=n Jnmxm = xn. Let H = diag(H1, . . . , HN). We obtain that HJx = x, so that HJ has a positive eigenvalue λ = 1. Thus, the game is not uniformly stable at x∗. (⇐). When x∗is strategically Pareto stationary, then by definition, there is a block-diagonal matrix Λ such that ΛJ = Σ is skew-symmetric, where each diagonal block is the diagonal matrix Λnn = λnIdn×dn. Let H be any positive-definite block-diagonal matrix. Then, H-1J is similar to a skew-symmetric matrix: H-1J ∼ H-1/2Λ1/2 Σ Λ1/2H-1/2 , where we used the fact that ΛH-1 = H-1Λ. By Lemma I.1, H-1J has purely imaginary eigenvalues. Since this holds for all possible choice of H, the game is uniformly stable at x∗. Lemma I.1 (Skew-symmetric matrices have imaginary eigenvalues, Theorem 7 in Chapter 8 of Lax (2007)). If M ∈Rm×m is skew-symmetric, then spec(M) ⊂iR. Theorem I.1 (Eigenvalues of matrix powers, Theorem 1.1.6 of Horn and Johnson (2012)). Let M ∈Cm×m be a square matrix. Let p(t) be a given polynomial of degree k. If (λ, v) is an eigenvalue-eigenvector pair of M, then (p(λ), v) is an eigenvalue-eigenvector pair of p(M). Conversely, if k ≥1 and if μ is an eigenvalue of p(M), then there is some eigenvalue λ of M such that μ = p(λ). J Proof of Proposition 2 Proposition 2 (Inapproximability of unstable equilibria). Let (Ω, f) be an N-player multilinear game with an interior Nash equilibrium x∗. Suppose that x∗is not uniformly stable. Then, there is some collection of smooth and strictly convex regularizers h such that smoothed best-response dynamics does not stabilize to x∗. In fact, even if the sequence of β-smoothed equilibria xβ converges to x∗, each xβ is an unstable fixed point of the (Φβ, η)-averaging dynamics (1) for all η ∈(0, 1) and sufficiently small β > 0. Proof. In the following, let H(x) denote the block-diagonal matrix with Hn(x) = ∇2 nhn(xn) given a set of smooth and strictly convex regularizers h. As x∗is not uniformly stable, we can choose h such that the eigenvalues of H-1(x∗)J(x∗) are not all purely imaginary (such a choice exists by Lemma L.2). In fact, it must have an eigenvalue λ∗with positive real part R(λ∗) > c > 0. This is because the matrix H-1(x∗)J(x∗) has zero trace-the diagonal blocks of the matrix Hn(x∗)Jnn(x∗) in multilinear games is zero-and so, the sum of all eigenvalues must also be zero. The existence of eigenvalues with positive real parts, bounded away from zero, extends to an open region around x∗by continuity. In particular, the matrix-valued function x 7→H-1(x)J(x) is continuous, so by the continuity of eigenvalues (Theorem J.1), there is an open set U around x∗so that for all x ∈U, each matrix H-1(x)J(x) also has an eigenvalue λ(x), where its real part is bounded away from zero: R λ(x) > c/2 > 0. We now show that the dynamics do not stabilize to x∗. That is, either the sequence of β-smoothed equilibria xβ does not converge to x∗as β goes to zero, or the dynamics become unstable around xβ. If the sequence does not converge, then there is nothing to do. So, without loss of generality, we may assume that there exists some sufficiently small β0 1 -η + η c 2β > 1. The fixed point xβ is unstable when β 0 depending only on the game so that when η ≤Cfβ2, the (Φβ, η)-averaging dynamics globally converge: x(t) -xβ ≤exp -ηt + ln N 2 . Proof. Let Φβ be any choice of smoothed best-response map induced by a set of smooth and strictly convex regularizer h and β > 0. Recall from Lemma 2 that its Jacobian is: ∇Φβ(x) = 1 β H(x)-1J(x), where H(x) is the block-diagonal matrix with the nth block being ∇2 nhn(xn) and J(x) is the game Jacobian. Let U ⊂Ωbe any open ball containing x∗such that ∥H(x)-1J(x)∥≤L is bounded by a constant L > 0. Such a ball exists by the continuity of ∇Φβ. We now show that for sufficiently small η, the (Φβ, η)-averaging dynamics contracts toward xβ when initialized within U, implying that xβ is asymptotically stable. In particular, we show that at each iteration, the distance to xβ contracts by a factor of exp(-Ω(η)) whenever: η ≤ β2 1 + L2 . Fix any x ∈U. By the mean-value theorem, there is some ̄x ∈U, a convex combination of x and xβ, such that Φβ(x) -Φβ(xβ) = ∇Φβ( ̄x)(x -xβ). We use ̄x to define Mη(x) as the matrix: Mη(x) = (1 -η)I + η∇Φβ( ̄x), where I ∈Rd×d is the identity. The (Φβ, η)-averaging dynamics can be re-centered at xβ, which becomes: x(t + 1) -xβ = (1 -η) x(t) -xβ + η Φβ(x(t)) -Φβ(xβ) = Mη x(t) x(t) -xβ . The dynamics contract to xβ exponentially quickly if the spectral norm ∥Mη(x)∥2 is bounded by 1 -Ω(1), x(t + 1) -xβ ≤ Mη x(t) 2 x(t) -xβ . 30 Indeed, we have: Mη(x) 2 ≤ 1 -η + iη L β = s 1 -2η + η2 1 + L2 β2 ≤exp -η + 1 2η2 1 + L2 β2 ≤exp -η 2 , where the first inequality follows from ∥∇Φ∥≤L, the second from the inequality √1 -z ≤e-z/2, and the last inequality holds whenever η ≤β2/(1 + L2). The rate then follows from Theorem 2, along with the bound on the diameter of the joint simplex space: sup x,x′∈Ω ∥x -x′∥≤ √ N. L Beyond interior Nash equilibria The convergence results Proposition 1 and Theorem 3 considered only interior Nash equilibria. This section extends these result to Nash equilibria when they are on the boundary of the simplex. The following lemma shows that if x∗is an isolated, non-interior equilibrium, then the game can be locally reduced by removing actions that are not in the support of x∗. These actions turn out to be strictly dominated, and so the game-theoretic equilibrium x∗is preserved after removing dominated strategies. Lemma L.1. Let (Ω, f) be an N-player normal-form game with a quasi-strong equilibrium x∗∈Ω. There exists a neighborhood U containing x∗such that actions not supported by x∗are strictly dominated. That is, there exists some ε > 0 such that for each player n ∈[N] and for all x ∈U and i /∈supp(x∗ n), fn(en,i; x-n) 0 such that whenever a pure strategy i ∈[kn] is not supported, meaning that i /∈supp(x∗), then it is strictly dominated: fn(en,i; x∗ -n) 0. Then, either it is the unique action that is supported, in which we case we say that x∗ n is trivially contained in the interior of the reduced simplex Ωn(x∗ n). Otherwise, it must be bounded away from one x∗ n,i 0 such that for each player n ∈[N] and for all x ∈U and i /∈supp(x∗), the action i is ε-suboptimal. In particular, ∇nfn(x) ∈TΩn denote the gradient of fn on the manifold Ωn. We obtain that: ∇nfn(x)i 0, A ∈Rk×k is invertible, and w ∈Rmi. We show that h is steep and that for this fixed x, Sym+(T∆) = n ∇2h(x; λ, A) : λ > 0 and A invertible o . Under the coordinate representation given by Definition L.1, the gradient on the simplex ∆is given by: ∇h(x; λ, A) = I -1 k 11⊤ h λ log x + A⊤A(x -w) i , where log x is applied component-wise. Thus, h is steep, since log xi →-∞as xi →0. There exists w ∈Rk such that ∇h(xi) = 0, by selecting: w = xi -λ(A⊤A)-1 log xi. The Hessian on the simplex is given by: ∇2h(x; λ, A) = I -1 k 11⊤ h λ · diag(1/x) + A⊤A i I -1 k 11⊤ , where diag(1/x) is the diagonal matrix with diagonal entries 1/xi. For every M ∈Sym+(T∆), there is a sufficiently small λ > 0 such that the following matrix is positive-definite, so there is an invertible matrix A: A⊤A = M + 11⊤-λ · diag(1/x). It follows that ∇2h(x; λ, A) = M, showing that Sym+(T∆) ⊂{∇2 Ωihi(z; λ, A) : λ > 0 and A invertible}. The reverse inclusion holds as h is strongly convex, so its Hessian on the simplex is contained in Sym+(T∆). 33 L.3 Convergence to non-interior Nash equilibria Theorem 4 (Convergence to non-interior equilibria). Let (Ω, f) be an N-player, normal-form game and let h be a collection of linearly steep, proper regularizers. Suppose that x∗is a quasi-strict Nash equilibrium and that it is the limit of the β-smoothed equilibria xβ as β goes to zero. If the reduced game around x∗is locally uniformly stable, then smoothed best-response dynamics stabilize to x∗. In particular, there exists a constant Cf > 0 such that for all sufficiently small β > 0 and η ≤Cfβ2, the (Φβ, η)-averaging dynamics is locally asymptotically stable at xβ, where the operator norm of its Jacobian at xβ is bounded by: (1 -η)I + η∇Φβ(xβ) 2 ≤exp -η 2 . Proof. Let Φβ be any choice of smoothed best-response map induced by a set of linearly steep and proper regularizers h and β > 0. Its Jacobian under the coordinate representation specified by Definition L.1 is: ∇Φβ(x) = 1 β H(x)+J(x), where H(x) is the block-diagonal matrix with the nth block being ∇2 nhn(xn)+ and J(x) is the game Jacobian, as computed by Lemma 2. Define G(x) := H(x)+J(x) and let Π : Ω→Ω(x∗) be the orthogonal projection to the support of x∗. Let U ⊂Ωbe any open ball containing x∗such that ∥H(x)+J(x)∥≤L is bounded by a constant L > 0. Such a ball exists because the regularizers are proper, so that ∇Φβ is smooth. Moreover, the reduced game around x∗is locally uniformly stable, so this ball can be chosen such that: spec G(Πxβ) ⊂iR. To prove that the (Φβ, η)-averaging dynamics is a local contraction at xβ whenever η is sufficiently small, we show that the Jacobian of these dynamics have operator norm bounded by 1. The Jacobian of the (Φβ, η)-averaging dynamics can be decomposed as follows: (1 -η)I + η∇Φβ(xβ) = (1 -η)I + η β G(xβ) = (1 -η)I + η β G(Πxβ) + η β G(xβ) -G(Πxβ) = (1 -η)I + η β G(Πxβ) + η ∇G( ̄xβ)(xβ -Πxβ) β , where in the last step applies the mean-value theorem to find some ̄xβ that is a convex combination of xβ and Πxβ, once again making use of the smoothness of the pseudoinverse of the Hessian of the regularizers. When β is sufficiently small, then Πxβ is contained in U, so that the eigenvalue of G(Πxβ) is iλ for some real-valued scalar λ with |λ| < L. By Lemma 4, the norm of the last term can be made arbitrarily small: ∇G( ̄xβ)(xβ -Πxβ) β 2 ≤C∥xβ -Πxβ∥ β ≤o(β), where we may uniformly bound the norm of ∇G over Ωsince it is a smooth map on a compact set. Let us choose β small enough so that this term is no more than 1/2. Thus, the operator norm of the Jacobian of the dynamics is bounded by: (1 -η)I + η∇Φβ(xβ) 2 ≤ 1 -η 2 + iη L β ≤exp -η 2 , where the last inequality holds when η ≤β2/(1+4L2); see the proof of Theorem 3 for this computation. 34
2510.14906
A Hard-Label Black-Box Evasion Attack against ML-based Malicious Traffic Detection Systems Zixuan Liu∗, Yi Zhao†B, Zhuotao Liu∗‡, Qi Li∗‡, Chuanpu Fu∗, Guangmeng Zhou∗, Ke Xu∗‡B ∗Tsinghua University, †Beijing Institute of Technology, ‡Zhongguancun Lab Abstract—Machine Learning (ML)-based malicious traffic de- tection is a promising security paradigm. It outperforms rule- based traditional detection by identifying various advanced attacks. However, the robustness of these ML models is largely unexplored, thereby allowing attackers to craft adversarial traffic examples that evade detection. Existing evasion attacks typically rely on overly restrictive conditions (e.g., encrypted protocols, Tor, or specialized setups), or require detailed prior knowledge of the target (e.g., training data and model parameters), which is impractical in realistic black-box scenarios. The feasibility of a hard-label black-box evasion attack (i.e., applicable across diverse tasks and protocols without internal target insights) thus remains an open challenge. To this end, we develop NetMasquerade, which leverages reinforcement learning (RL) to manipulate attack flows to mimic benign traffic and evade detection. Specifically, we establish a tailored pre-trained model called Traffic-BERT, utilizing a network-specialized tokenizer and an attention mechanism to extract diverse benign traffic patterns. Subsequently, we integrate Traffic-BERT into the RL framework, allowing NetMasquerade to effectively manipulate malicious packet sequences based on benign traffic patterns with minimal modifications. Experimental results demonstrate that NetMasquerade enables both brute-force and stealthy attacks to evade 6 existing detection methods under 80 attack scenarios, achieving over 96.65% attack success rate. Notably, it can evade the methods that are either empirically or certifiably robust against existing evasion attacks. Finally, Net- Masquerade achieves low-latency adversarial traffic generation, demonstrating its practicality in real-world scenarios. I. INTRODUCTION Machine learning (ML)-based malicious traffic detection systems identify attack behaviors by learning the features of traffic [28], [62], [30]. As an emerging security paradigm, it is promising for identifying multiple sophisticated attacks [51], [53], [49] and thus outperforms traditional rule-based detec- tion [98], [39], [93] in both effectiveness [20], [4] and effi- ciency [104], [7]. Currently, ML-based systems are deployed to complement the traditional systems due to their ability to detect unknown [23] and encrypted [4], [89] attack traffic. Unfortunately, as in many other ML-based domains [86], [66], [32], robustness issues are prevalent in ML-based traffic detection systems [84], [5], [47]. That is, attackers can craft adversarial traffic by adding, delaying, or otherwise modifying packets [37], [96], causing detection models to misclassify these deceptive flows as benign. The research community has put forward a range of advanced evasion methods (see Table I), yet many of these methods operate under narrowly defined conditions, such as leveraging encrypted protocols [65], [58], [82], tunnels [65], Tor networks [50], or circumventing spe- cialized third-party censorship systems [58]. The effectiveness of these protocol-related and task-related approaches drops significantly when the attack environment changes. Moreover, some solutions rely heavily on prior knowledge about the target or dataset, requiring full (white-box) [82], [65], [96] or partial (gray-box) [65], [37] access, which is impractical in a more realistic black-box setting. To bridge these gaps, we aim to design a black-box adver- sarial attack targeting widely used ML-based traffic detection systems that rely on statistical patterns [28], [62], [104], [7]. In particular, the attack must be protocol-agnostic and task- agnostic, allowing it to be seamlessly applied to any malicious traffic, regardless of whether it is encrypted, tunneled, or otherwise constrained. Moreover, the attacker can generate ad- versarial malicious traffic with minimal modifications, relying solely on whether the target system drops malicious packets (i.e., hard-label attack [94], [88]). In contrast to feature-space attacks [57], [3], i.e., impractical settings that require attackers to interfere with ML execution, our traffic modifications must preserve the effectiveness of the attacks [82], [37]. This paper presents NetMasquerade, a hard-label black- box evasion attack, which utilizes deep reinforcement learning (RL) to transform malicious traffic into adversarial examples by mimicking benign traffic patterns. At its core, we propose Traffic-BERT, a tailored pre-trained model for capturing di- verse and complex benign traffic distributions. Subsequently, we develop an RL framework that decides the location and type of packet modification step-by-step, leveraging Traffic- BERT’s embedded knowledge of benign behaviors. The only feedback required for the RL training process is the blocked- or-not signal from the targeted detection system. The detailed design ensures that NetMasquerade achieves minimal, yet ef- fective, modifications across diverse traffic types and detection models, thereby evading detection systems under black-box conditions. We address two main challenges in constructing effective adversarial traffic. First, we must capture rich benign traffic patterns in order to mimic them convincingly. To address this, we study the distribution of Internet packet patterns, then pad and chunk traffic from public datasets using an optimal setup to improve diversification. Afterwards, we pre-process the traffic with network-specific tokenizers. Finally, we extract dependencies among the tokens with a novel attention block in Traffic- BERT, providing a robust representation of benign traffic across various protocols and scenarios. Network and Distributed System Security (NDSS) Symposium 2026 23 - 27 February 2026 , San Diego, CA, USA ISBN 979-8-9919276-8-0 https://dx.doi.org/10.14722/ndss.2026.240916 www.ndss-symposium.org arXiv:2510.14906v1 [cs.CR] 16 Oct 2025 TABLE I COMPARISON OF EXISTING EVASION ATTACKS AGAINST TRAFFIC ANALYSIS SYSTEMS. Scenarios Evasion Techniques Attack Applicability Without Prior Knowledge Attack Performance Protocol-agnostic Task-agnostic Datasets Features Model Low Overhead Low Latency White-box Gradient Analysis [82], [65] ✗ ✓ ✗ ✗ ✗ ✗ ✓ Optimization [96] ✗ ✓ ✗ ✗ ✗ ✗ ✓ Gray-box Sample Transferability [65] ✗ ✗ ✗ ✗ ✓ ✗ ✓ Feature Manipulation [37] ✓ ✓ ✓ ✗ ✓ ✗ ✗ Black-box Packet reassembly [58] ✗ ✗ ✓ ✓ ✓ ✗ ✗ Traffic Mimicking (Ours) ✓ ✓ ✓ ✓ ✓ ✓ ✓ Second, the RL model for generating optimal evasion policies must maintain both low training overhead and low online inference latency. To this end, we formulate the traffic modifications as a Finite Horizon Markov Decision Process, enabling a multi-step decision strategy with explicit incentives for minimal and targeted modifications, effectively reducing inference costs. Meanwhile, we utilize a lightweight policy network and ensure rapid convergence by leveraging already- learned benign distributions in Traffic-BERT, significantly reducing training overhead. In addition, we introduce an effec- tiveness penalty, which safeguards the malicious functionality of the attack. We prototype NetMasquerade1 with Pytorch [74] and Intel DPDK [45]. Experiments demonstrate that NetMasquerade enables both high-rate and low-rate attacks to evade 6 top- performing detection systems in 80 attack scenarios, achieving over 96.65% attack success rate (ASR). Note that NetMasquer- ade can evade the methods that are either empirically [28] or certifiably [95] robust against existing evasion attacks. Com- pared with other attacks [28], [37], NetMasquerade applies minimal modifications to no more than 10 steps in all test scenarios, thus incurring little impact, e.g., a Kullback-Leibler (KL) divergence of 0.013 between the original and modified bandwidth distributions. Moreover, the evasion attacks can be constructed in time, i.e., NetMasquerade can transform 4.239K adversarial packets per second. In general, our studies reveal that the robustness issue of traffic detection remains unaddressed, which emphasizes the necessity of enhancing robustness against advanced evasion attacks. The contributions of this paper are three-fold: • We develop a hard-label black-box evasion attack that op- erates across diverse traffic types and targets widely used ML-based detection systems relying on statistical pat- terns, leveraging deep reinforcement learning to manipu- late attack packets for efficient benign-traffic mimicry. • We design a tailored pre-trained model, Traffic-BERT, to capture diverse and complex benign traffic patterns, equipping NetMasquerade with the ability to transform malicious flows into benign-like traffic across a wide range of protocols and tasks. 1Source code: https://github.com/09nat/NetMasquerade Fig. 1. Network topology. • We establish an RL-based framework that transforms original attack traffic into adversarial examples with minimal modifications, and experimentally validate that NetMasquerade can generate various attack traffic to evade multiple state-of-the-art detection systems with small overhead. II. THREAT MODEL AND ASSUMPTIONS Threat Model. We assume an external adversary who instructs botnets to deliver malicious traffic (e.g., high-speed attack and stealthy malware behaviors) toward victim hosts [89]. The attack flows originate on the public Internet and must traverse an in-line, ML-based traffic detection system deployed at the victim’s ingress link, as shown in Figure 1. The detection system operates on certain flow- or packet-level statistical patterns (e.g., sizes, delays) for traffic classification and does not inspect the payload [28], [30], [62], [7]. Such pattern- based models are increasingly popular as they are effective on encrypted traffic. Meanwhile, the system forwards benign traffic and drops or rate-limits malicious traffic. This behavior is consistent with both existing academic proposals [102] and default configurations of open-source in-line detectors [68], which prioritize network availability by choosing low-latency actions like packet dropping over more aggressive responses (e.g., blocking entire source IPs, which could be a NAT gateway serving many legitimate users) that may cause sig- nificant collateral damage. To evade detection, the attacker craft each malicious flow to construct adversarial traffic that misleads detection systems into classifying them as benign, while retaining the flow’s original intent. Assumptions. The attacker cannot obtain any details of the target detection systems, such as ML models, parameters, 2 Craft Packets Fixed Length Flow Chunking Adversarial Traffic Generation Malicious Traffic Light-Weight Policy Network Target Detection System Adversarial Feature Sequence Benign Traffic Pattern Mimicking Enormous Public Benign Traffic Fixed Length Features Training Train Traffic-BERT Encoder Block Encoder Block N × Self Attention Bi-Cross Attention Self Attention Feed&Forward Feed&Forward … &Tokenizer Discretization Padding Feature Extraction Adversarial Malicious Traffic Encoding 1 n+1 n 2n … n 1 … … … 𝑴 𝑴 Add Perturbations 1.1 1.2 Select & Mask Encoder Block 𝑴 𝑴 𝑴 𝑴 Send Probe Traffic Get Feedback (Reward 𝑟!) 𝒓= 𝒓𝑬+ 𝒓𝑫+ 𝒓𝑴 Update Policy 𝒕times 2.1 RL-guided Perturbation 2.2 Probing & Feedback Reward … Fig. 2. High-Level Design of NetMasquerade. feature extractors and training datasets. That means, the at- tacker treats target systems as a strict (i.e., hard-label) black- box, which differs from traditional white-box and gray-box attacks that either need to access ML models (e.g., obtain gradients) [96] or datasets [65] for better transferability. This black-box setting meets the real-world scenarios, as most traffic detection systems are closed-source software [12] or outsourced cloud services [16], [1], effectively preventing attackers from obtaining any information. However, the attacker can conduct a reconnaissance phase to gather pass/fail feedback from the target detection system. Specifically, the attacker sends probe traffic to remote hosts behind the target detection systems. The probe traffic should exactly mirror the malicious traffic’s intended traffic pattern (i.e., matching packet sizes and inter-packet delays) without embedding the original malicious payload (i.e., the attacker can freely embed payloads of the same size). For TCP flows, any return packet (e.g., RST, ACK, SYN-ACK) from the destination [97] signals that the probe has traversed the IDS, whereas the complete absence of a reply within the retrans- mission window indicates the blockage [24]. For UDP flows, the attacker could employ a stateful application-layer protocol (e.g., QUIC [46]) to induce a response. If the destination port is closed, the attacker would typically observe ICMP Unreach- able when the traffic successfully passes the detection [73], [80]. However, no such message will be received if the flow is blocked. By checking for a response within a fixed timeout, the attacker assigns a pass/fail label to each probe. Besides, side-channel techniques (e.g., hop-limited TTL [18], [48], IPID [26], [75]) can further reveal whether the traffic reaches the destination. Overall, this binary indicator (i.e., hard-label) enables the attacker to refine subsequent adversarial traffic generation. In addition, the attacker can access benign traffic from public datasets [99], which differ from those used to train the traffic detection model. We supplement additional discussions regarding our assumptions in § VI. III. THE OVERVIEW OF NETMASQUERADE A. Key Observation We begin by noting that modern traffic detection [1], [12], [16] operates as a strict (i.e., hard-label) black-box system from the perspective of typical malicious attackers. All model implementations and training details are concealed, which significantly degrades the performance of existing white- box [65], [82] and gray-box [37], [65] methods. In contrast, such systems often drop or throttle [102], [100], [29] malicious traffic while allowing benign traffic to pass. This behavioral asymmetry inherently yields a feedback signal, motivating us to interactively probe the detector’s decision boundary and steer malicious traffic within that boundary. We accomplish this process with reinforcement learning. However, Internet traffic spans a wide variety of protocols and use cases. A naive design can degrade RL performance (see § IX-D), while task-specific attacks fail in heterogeneous environments [58]. Fortunately, existing studies show that be- nign traffic distributions tend to be denser, whereas malicious traffic is often more sparse [76]. This density gap encourages us to morph the malicious flow so that its features migrate toward the benign manifold while preserving attack semantics. To achieve this, we develop an effective semantic model (i.e., Traffic-BERT) to capture benign traffic patterns, thereby guid- ing RL training through its learned representations. Finally, we introduce a two-stage divide-and-conquer training framework, along with several additional mechanisms, to significantly 3 reduce the overhead introduced by integrating Traffic-BERT with the RL process while preserving the effectiveness of the generated adversarial traffic. B. High-Level Architecture Figure 2 shows the two stages of NetMasquerade, a black- box evasion attack method. In the first stage, it captures the benign traffic patterns. In the second stage, it generates adversarial traffic based on the traffic patterns. Benign Traffic Pattern Mimicking. In this stage, we focus on comprehensively modeling benign traffic distributions to provide a solid foundation for subsequent adversarial flow generation. To this end, we propose Traffic-BERT, a variant of BERT [19], capable of processing multiple feature sequence inputs and outputs. Specifically, we first extract basic flow features (i.e., packet size sequence and inter-packet delay sequence). Next, we introduce dedicated feature extraction and embedding schemes to reconcile the gap between continuous, variable-length traffic data and the fixed-length, discrete input format typically required by Traffic-BERT. Building on these enriched representations, we propose a cross-feature bidirec- tional attention mechanism to simultaneously capture global dependencies within each individual feature sequence and across heterogeneous feature modalities. By training Traffic- BERT under a Mask-Fill task, we enable it to learn deep bidirectional dependencies and acquire the capability to con- textually complete fine-grained benign features. The trained Traffic-BERT can be directly used in Adversarial Traffic Generation to guide the RL optimization process. We will detail the Benign Traffic Pattern Mimicking in § IV. Adversarial Traffic Generation. In this stage, our goal is to embed the pattern of benign traffic into malicious traffic with minimal modifications while preserving attack semantics and domain constraints. We model this as a standard Markov Decision Process (MDP), and employ deep RL to address com- plex sequential decision-making. Specifically, NetMasquerade utilizes the Gated Recurrent Units (GRUs) [11], a lightweight neural network, as the policy network and the state-value networks (a.k.a. Q-Networks). This design significantly re- duces training time and inference latency while still effectively capturing temporal flow features. By learning an optimal policy to select packet-level feature positions for masking, NetMasquerade leverages Traffic-BERT to fill the masked tokens with benign traffic patterns. The resulting adversarial flow is used to probe the target system. The response provides the core feedback, which we integrate with two novel penalty terms to form a comprehensive reward signal: a dissimilarity penalty, which ensures that the final adversarial flow remains close to the original malicious flow while also reducing the required inference steps, and an effectiveness penalty, which retains the underlying attack function. This complete reward signal then guides the optimization of the policy network using the Soft Actor-Critic (SAC) [34] algorithm. We will detail the Adversarial Traffic Generation in § V. Two-Stage Framework Advantages. In many RL applica- tions, models are initialized from expert demonstrations via be- havior learning and then deployed as policy networks in down- stream tasks [41], [92]. However, this Pretraining-Finetuning framework is not suitable for the traffic domain because it in- troduces significant overhead. By contrast, our design cleanly decouples benign traffic modeling (Stage 1) from adversarial RL optimization (Stage 2). Traffic-BERT learns high-fidelity benign traffic embeddings without entangling with the RL process, avoiding repeated large-scale retraining. Meanwhile, the lightweight policy network incrementally references the embeddings to weave benign patterns into malicious flows, preserving both the efficiency and the effectiveness of the generated adversarial traffic. IV. BENIGN TRAFFIC PATTERN MIMICKING A. Feature Extraction The feature extraction module encodes network traffic into Traffic-BERT’s token sequences. Although various traffic stud- ies have explored related encoding strategies in other con- texts [29], [56], [76], [103], they are not directly applicable to Traffic-BERT for two reasons. First, as a language model, Traffic-BERT demands fixed-length inputs, while real-world flows vary widely in size and duration. It is essential to set a base input length that accommodates the majority of flows and provide a mechanism to capture extended flows without information loss. Second, Traffic-BERT requires tokens to reside in a uniformly discrete space, whereas raw network features (e.g., inter-packet delays) may be highly skewed or continuous. We overcome these issues and characterize flows based on statistical insights, as detailed below. Flow Classification. Traffic-BERT takes sequences of fixed length as input. Typically, the input length is a power of 2, and we assume it to be n, where n = 2k. Figure 3(a) shows the probability density function (PDF) and cumulative distribution (CDF) for flow lengths in the MAWI internet traffic dataset (Jun. 2023) [99]. We randomly sample over 1e7 flows to plot the figure. Clearly, the distribution of flow lengths exhibits a long-tail pattern, with short flows dominating the majority. We obtain the 99th percentile from the cumulative distribution and select the closest n as our hyperparameter for fixed length. Nonetheless, studies of flow distributions [25] indicate that long flows hold most of the packets. Figure 3(b) shows the bytes retained for different fixed truncations (i.e., n), which can be approximately considered as information entropy, as a proportion of the total bytes of the flow. We randomly sample one week and analyze the complete flow data daily, finding that the information entropy ratios for common values of n do not exceed 0.27. To address this, we apply two complementary strategies: • Short Flow Padding. If m ≤n, we append n−m special padding tokens (i.e., [PAD]) to the end of its feature sequence. • Long Flow Chunking. If m > n, we divide its feature sequence into m −n + 1 segments, with each segment’s index with a range of [i, i + n), 0 ≤i ≤m −n. Feature Representation. Having standardized flow lengths via padding and chunking, we next convert these sequences 4 99th percentile CDF PDF Probability Flow length [log 10] (a) Flow length distribution. Mon. Sun. Sat. Fri. Thu. Wed. Tues. n = 1024 n = 256 n = 512 n = 128 (b) Flow entropy ratios. Fig. 3. Flow length distribution and entropy ratios. into discrete tokens that Traffic-BERT can ingest. Specifically, we focus on two per-packet attributes: packet sizes and inter- packet delays (IPDs). To optimize this tokenization process, we study the distribution of packet sizes and IPDs in benign internet traffic. For the IPD feature, we observe that after taking the base-10 logarithm, the data exhibits a more uniform distribution across magnitudes. We randomly sample over 80 million packets for our analysis and plot these in Figure 4(a). The analysis shows that frequencies in the range of [−6, −2] (corresponding to 1e−6 to 1e−2 seconds) consistently range between 1.0e7 and 1.7e7, while the total for other magnitudes falls below 1e7. Based on this, we set several logarithmically equal-length intervals and hash the IPDs into the intervals, using the interval indices as the corresponding tokens. We adjust the interval lengths to balance the count of elements within each. The packet sizes exhibit a bimodal distribution: predominantly very short or very long (due to fragmentation), with a more uniform distribution in between, as shown in Figure 4(b). Due to its discrete nature, we directly use its value as the token. We use a standard Maximum Transmission Unit (MTU) length as the capacity for the packet size vocabulary. This is because we manipulate the traffic on a per-packet basis, making it impossible to generate a single packet that exceeds the MTU. We categorize features significantly exceeding the MTU under a single class and represent them with the [UNK] token. The token vocabularies for packet sizes and IPDs are independent. Moreover, we add special tokens [PAD] and [MASK] to the vocabulary as placeholders and for masking certain tokens, respectively. We represent each token by two embeddings: token em- bedding and position embedding. The complete embedding is constructed by summing both of them together. • Token Embedding. A high-dimensional vector represent- ing the token, which is randomly initialized and trained jointly with Traffic-BERT. • Position Embedding. A vector annotating the token’s rel- ative position in the sequence using sinusoidal functions, similar to Transformer [91]. For chunked features, apart from the first segment, the indices of other segments do not start from 0. This helps the model learn long flow representations more effectively. B. Traffic-BERT Although transformer-based architectures have been ap- plied to network traffic modeling [56], [103], most existing Inter-packet delays [log 10] Frequency 0.5 1.0 1.5 2.0 2.5 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 ×1e7 Scale [1.0, 1.7]×1e7 (a) IPD distribution. 0 1 2 3 4 5 7 6 0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 MTU+ Frequency Packet Size [×1000] log10 Scale (b) Packet size distribution.1 1 Size exceeds MTU, due to misconfigurations/specialized protocols. Fig. 4. Packet size and IPD distribution. BERT-like models are constrained to handling only a single sequence of discrete tokens [19], [59]. In contrast, benign network flows typically involve multiple feature sequences, whose interactions are crucial for capturing real-world traffic patterns. Therefore, the first challenge lies in how to effectively model these interwoven modal features without increasing computational overhead. The second challenge is how to define the Traffic-BERT’s training task that enables the model to learn representations of these features directly applicable to adversarial traffic generation, without additional training costs. To address these challenges, we propose a novel bi-cross attention mechanism for efficient fusion of multi-feature data and design a Mask-Fill task to guide Traffic-BERT in acquiring the ability to fill benign traffic packets based on flow context. Traffic-BERT Structure. The core innovation of Traffic- BERT is the introduction of a bi-cross attention layer within a stack of bidirectional encoder blocks, which is shown in Figure 5. Each block comprises three principal components: (i) self-attention, (ii) bi-cross attention, and (iii) a feed-forward network (FFN), with residual connections linking these layers. Generally, the attention mechanism [91] can be represented as: Attn.(Q, K, V ) = softmax QKT √dk  V, (1) where Q, K, and V represent the Query, Key, and Value, respectively. They are three sets of linear transformations derived from the packet size and inter-packet delay features. The parameter √dk represents the dimension of the Key. Each encoder in Traffic-BERT takes size features P and IPD features H as input, which are first processed by the self- attention layer to yield respective hidden states hP and hH. Formally: hP = P + Attn.(QP , KP , VP ), hH = H + Attn.(QH, KH, VH). (2) Self-Attention is characterized by deriving both the Query and Key from the same sequence, thereby computing the relative importance of each element with respect to all other elements in that sequence. Next, hP and hH are fed into the bi-cross attention layer to compute the interrelated attention features, that is, using one feature sequence to query another feature sequence, and vice versa. This can be formulated as: h′ P = hP + Attn.(QhP , KH, VH), h′ H = hH + Attn.(QhH, KP , VP ). (3) 5 Encoder Block Multi-Head Self Attention Multi-Head Self Attention Packet Size Feat. 𝑃 IPD Feat. 𝐻 Multi-Head Bi-Cross Attention Add & Norm Add & Norm Add & Norm Add & Norm Feed & Forward Feed & Forward Input Output Encoder Block Encoder Block Encoder Block … N × 𝑉! 𝐾! 𝑄"! 𝐾# 𝑉# 𝑄"" Add & Norm Add & Norm ℎ′! ℎ′" 𝑃′ for next stage 𝐻′ for next stage Fig. 5. Core design of Traffic-BERT. Unlike self-attention, bi-cross attention uses the hidden states hP and hH as the Query to compute similarity with the other sequence’s output from the previous block, and assigns attention weights to the other’s Value. The bi-cross attention shares the same complexity O(n2dk) with self-attention, pro- viding an efficient solution for multi-feature sequences from distinct feature spaces. It enables the model to better capture the long-term interactions and dependencies between different feature sequences in network traffic, significantly enhancing the semantic understanding of the benign flow. The outputs from the bi-cross attention layer are then passed through the feed-forward network layer, serving as the input for the next encoder block. The output from the last encoder block is then passed through a linear layer to obtain the probability distribution of the output tokens. Traffic-BERT Optimization. We train Traffic-BERT with a Mask-Fill task that generalizes RoBERTa’s Dynamic Mask- ing [59] to handle multiple correlated feature sequences. In each training step, we select 15% of token positions for masking. When a position is chosen, tokens in both sequences are masked simultaneously: 80% are replaced with [MASK], 10% with a random token, and 10% remain unchanged. This dual-sequence masking scheme not only compels Traffic- BERT to master deep bidirectional semantics within individual feature sequences, but also reinforces the cross-feature in- teractions introduced by our bi-cross attention. The trained Traffic-BERT can reconstruct realistic per-packet attributes from partial observations and thus can be directly applied to the second stage in § V. In a few high-speed traffic generation cases, we option- ally apply the Constrained Decoding mechanism [71]. This mechanism restricts the model’s output to a predefined range, ensuring that only tokens within this range are considered. However, in most cases, such explicit constraints are unneces- sary because the Mask-Fill task itself already biases Traffic- BERT toward valid, realistic traffic patterns. V. ADVERSARIAL TRAFFIC GENERATION In this section, we present the technical details of generating adversarial traffic based on deep RL, focusing on its formu- lation, optimization, and runtime inference. NetMasquerade addresses four main challenges: 1) Efficient Learning from Limited Feedback. We carefully design the state space (§ V-A), and leverage Traffic-BERT to guide RL (§ V-B), to achieve efficient exploration under black-box settings. 2) Preserving Malicious Functionality. We adopt a tailored action space and an effectiveness penalty (§ V-A) to maintain malicious behavior within domain constraints. 3) Reducing Training & Inference Overhead. We introduce a dissimilarity penalty (§ V-A) and employ a two-stage decoupled training scheme (§ V-B) to minimize overhead while ensuring stability. 4) Enabling Attacks Without Feedback. We propose an offline reward estimation mechanism (§ V-C) that supports real-world adversarial generation when direct feedback is unavailable during inference. A. MDP Formulation NetMasquerade aims to generate effective adversarial ma- licious traffic that evades black-box detection systems with minimal modifications. To achieve this, we perturb only two domain-agnostic features learned by Traffic-BERT from be- nign traffic: packet sizes and inter-packet delays. We incorpo- rate specific penalty terms to maintain malicious effectiveness and adhere to domain constraints. This leads to the following definition of our adversarial traffic generation process. Definition 1 (Adversarial Traffic Generation). Given an ML- based malicious traffic detection system f ∈F and an instance x from the malicious traffic space X, the objective of the adversarial traffic generator G(·, ·) : F × X →X can be defined as: argmax ˜x I(f(˜x) ̸= f(x)) s.t. ˜x = G(f, x), D(˜x, x) ≤τ, M(˜x, x, X) = 1, (4) where D(·, ·) : X ×X →R+ is a distance metric that measures the dissimilarity between the adversarial traffic and the original malicious traffic, ensuring that the perturbed instance ˜x is close to the original instance x within the threshold τ, and M(·, ·, X) : X × X →{0, 1} is an equivalence indicator that indicates whether the perturbed instance ˜x is equivalent in effectiveness to the original malicious instance. Note that this optimization problem is difficult to address directly because the objective and the malicious equivalence constraint M are binary and non-differentiable functions, and the distance metric D depends on the way adversarial traffic ˜x is generated. However, we can overcome these challenges by leveraging RL. To do this, we model the attack procedure as a Finite Horizon Markov Decision Process MDP = (S, A, P, R, T ). The definition of such an MDP is as follows: State Space (S): The state st ∈S at time t is represented by the tuple (Pt, Ht), where Pt = [p0,t, p1,t, . . . , pn,t] rep- resents the sequence of packet sizes at time t and Ht = 6 [h0,t, h1,t, . . . , hn,t] represents the sequence of inter-packet delays at time t. The initial state s0 = (P0, H0) represents the features extracted from the original malicious traffic. Action Space (A): The attacker is allowed to modify the features of a single packet or insert a chaff packet (i.e., an intentionally injected, non-functional packet) into the flow per step. Therefore, for each feature sequence within the state st of length n, the size of the action space is 2n + 1, where each action at ∈A represents the index of the modification or insertion. Specifically, when at is odd, it indicates the attacker’s intention to modify the element at position ⌊at/2⌋ in each sequence; when at is even, it indicates the intention to add a new element at position at/2 in each sequence. NetMasquerade does not perturb existing payloads to maintain traffic functionality and avoid introducing detectable artifacts. Transition Probabilities (P): Our environment is determin- istic. That is, for any given state st and action at, the state transition probability P can be represented as: P(st+1|st, at) = ( 1 if st+1 = Trans(st, at), 0 otherwise. (5) Reward Function (R): The reward function is a composite of three distinct components, formalized as r(st, at) = rE(st, at) + β · rD(st, at) + γ · rM(st, at), (6) where β and γ are non-negative hyperparameters. The term rE aligns with the optimization objective in (4), which can be defined as: rE(st, at) = Nevade(st+1) −Nevade(st) Ntotal , (7) where Nevade(·) : S →Z+ 0 represents the number of non-chaff packets that evade the target, and Ntotal = n represents the total number of packets in the original malicious traffic. The term rD is the dissimilarity penalty which is derived from the distance metric D(·, ·). In our scenario, we use the Edit Distance between the current state st and the previous state st−1. Note that since the attacker modifies or inserts exactly one packet at each step, we have: rD(st, at) = −1. (8) rD serves to minimize the distance between the adversarial and original malicious traffic, ensuring that NetMasquerade achieves its adversarial objectives in as few steps as possible, as each additional step incurs a non-positive reward. In gen- eral, this design preserves the stealthiness of adversarial traffic and reduces the probability that the perturbations are detected. On the other hand, NetMasquerade achieves its goals with fewer modifications, thereby accelerating inference speed. The term rM is the effectiveness penalty that depends on the specific intent of the attack. For instance, for DoS traffic, rM can be defined as the rate of the traffic flow. Conversely, for maliciousness that stems from the payload, such as phishing traffic, rM can be set to zero, as our adversarial traffic generation process does not impact the payload. Horizon (T ): The process terminates in either of two situ- ations: first, in the step t = τ, which is consistent with the constraint D(˜x, x) ≤τ in (1) as a measure of the maximum permissible distance between the adversarial and original malicious traffic; second, when the reward rE(st, at) > ξ, indicating a successful evasion ratio greater than the threshold ξ. This dual-condition criterion guarantees a bounded process. B. Policy Optimization Algorithm 1 shows the process of training NetMasquerade. Given the MDP setup as defined in § V-A, a sampled MDP trajectory will be (s0, a0, r0, s1, a1, r1, . . . , s˜t, a˜t, r˜t), where ˜t ≤τ. To handle the problem’s large discrete action space, we employ the Soft Actor-Critic (SAC) algorithm for optimiza- tion, which is well known for strong exploration capabilities. The SAC algorithm is an off-policy maximum entropy RL method, aiming to maximize the balance between expected return and entropy, where the entropy signifies the randomness of the policy. Its objective function is given by: π∗= arg max π Eπ "X t ηt (r(st, at) + αH(π(·|st))) # , (9) where π is the stochastic policy to be optimized, α is the temperature hyperparameter that controls the trade-off between exploration and exploitation, η represents the discount factor, and the entropy of the policy H(π(·|st)) is defined as the expected value of the negative log-probability of the actions taken according to the policy: H(π(·|st)) = Eat∼π(·|st)[−log π(at|st)]. (10) Given the optimization objective, we build a policy network to approximate the optimal policy π∗, for which we employ Gated Recurrent Units (GRUs) as the backbone. We choose GRUs for two reasons: on the one hand, as a classical type of Recurrent Neural Network (RNN), GRUs are capable of un- derstanding the semantics within the traffic feature sequences; on the other hand, compared to the more computationally demanding Traffic-BERT, GRUs offer a balance between complexity and performance, enhancing the training efficiency of the reinforcement learning model. In each step t, the policy network takes as input the concatenated feature sequences of packet sizes and IPDs at state st, and outputs a distribution over the action space A, as detailed in § V-A. An action at is then sampled from this distribution. Notably, when at is odd, the ⌊at/2⌋-th element of the inter-packet delay sequence of state st is replaced with a [MASK] token, indicating the attacker’s intent to modify the transmission timestamp of the packet at that position. Consequently, the state st is transformed into s′ t ≜(P ′ t, H′ t) = ([p0,t, p1,t, . . . , pn,t], [h0,t, h1,t, . . . , h⌊at/2⌋−1,t, [MASK], h⌊at/2⌋+1,t, . . . , hn,t]  . (11) In this context, the packet size sequence remains unaltered as changes to packet sizes might violate the domain constraints 7 Algorithm 1 NetMasquerade Training Process 1: Initialize policy network πϕ(a|s), Q-networks Qω1(s, a), Qω2(s, a), and experience replay buffer B 2: for each iteration do 3: Sample a malicious flow and get initial state s0 4: for each environment step t do 5: Observe state st and select action at ∼πϕ(·|st) based on the current policy 6: Modify st by inserting [MASK] or replacing features with [MASK] to produce s′ t 7: Use Traffic-BERT for Mask-Fill task to fill [MASK], obtaining st+1 8: Restore st+1 to adversarial malicious traffic 9: Send the adversarial malicious traffic and compute reward rt = rE + β · rD + γ · rM 10: Store transition tuple (st, at, rt, st+1) in B 11: if |B| exceeds minimum replay buffer size then 12: Sample mini-batch {s¯t, a¯t, r¯t, s¯t+1} from B 13: Compute target value for each Q-network: y¯t = r¯t+η mini=1,2 Q¯ωi(s¯t+1, π(·|s¯t+1))−α log π(a¯t|s¯t) 14: Update Q-networks: ωi ←ωi −λQ∇ωi P(Qωi(s¯t, a¯t) −y¯t)2 15: Update policy network: ϕ ←ϕ −λπ∇ϕ P (α log(πϕ(a¯t|s¯t)) −Qωi(s¯t, a¯t)) 16: Update target Q-networks: Q¯ωi ←λQωi + (1 −λ)Q¯ωi 17: Update entropy temperature α: α ←α −λα∇α P (−α log(π(a¯t|s¯t)) −αH0) 18: end if 19: end for 20: end for of the packet. Correspondingly, when at is even, a [MASK] token is inserted at the at/2 position of both feature sequences, indicating the attacker’s intention to insert a chaff packet at that position. In this case, the state st is transformed into s′ t ≜(P ′ t, H′ t) = [p0,t, p1,t, . . . , pat/2−1,t, [MASK], pat/2,t, . . . , pn,t], [h0,t, h1,t, . . . , hat/2−1,t, [MASK], hat/2,t, . . . , hn,t]  . (12) Considering that the fixed length n may exceed the actual length of a flow, not all actions are feasible. To address this issue, we employ an Invalid Action Masking mechanism [44], adjusting the probabilities of infeasible actions to a large nega- tive value, and then re-normalizing the probability distribution to ensure the effectiveness of the chosen actions. Once s′ t is obtained, the attacker leverages Traffic-BERT in the Mask-Fill task to embed benign traffic patterns, thereby de- riving the next state st+1. During this process, Traffic-BERT’s parameters are fixed and considered a part of the environment. This decoupling of training significantly improves training efficiency. According to § II, the attacker can conduct a reconnaissance phase to gather pass/fail feedback, i.e., by observing whether there is any response to the attack traffic. Based on this, the attacker restores the adversarial flow from st+1 and sends it to the target. The process is straightforward as NetMasquerade only modifies timestamps or inserts new chaff packets. These chaff packets share the same source/destination IPs and ports as the malicious flow. Their payloads are randomly populated to match the packet size features. Following prior work [40], [37], for example, we use incorrect sequence numbers for TCP, set a short TTL for UDP packets, or send orphan IP fragments for other protocols which are discarded after a reassembly timeout [72]. After the traffic is sent, the attacker calculates the reward r = rE + β · rD + γ · rM. Finally, the resulting transition (st, at, rt, st+1) is stored in the experience replay buffer B for further policy optimization. Given the optimization objective and the transitions, we model two state-action value functions (a.k.a., Q-networks) Qω1 and Qω2. The utilization of double Q-networks helps to mitigate the overestimation of action values. Following the Soft Bellman Equation [33], each Q-network can be updated by minimizing the following loss function: LQ(ω) = E(st,at,rt,st+1)∼B 1 2  Qω(st, at) −yt 2 , yt = rt + η( min j=1,2 Q¯ωj(st+1, π(·|st+1)) −α log π(at|st)), (13) where Q¯ω represents the target Q-network [63], which helps to smooth the learning updates. The target Q-networks are updated using Q-networks: Q¯ωi ←λQωi + (1 −λ)Q¯ωi. (14) The policy network optimization is achieved by minimizing the Kullback-Leibler Divergence from the exponential of the Q-network. This results in the following loss function: Lπ(θ) = Es∼B,a∼π  α log(π(a|s)) −min j=1,2 Q¯ωj(s, a)  . (15) Also, following [35], we employ an automated entropy adjustment mechanism for the temperature parameter α: L(α) = Est∼B,at∼π(·|st)[−α log π(at|st) −αH0]. (16) C. Runtime Inference Algorithm 2 shows the inference process of NetMasquerade. Unlike the training phase, the attacker might not be able to receive reward feedback rE from the target detection system during the inference phase, which prevents direct evaluation of the termination time for adversarial traffic generation. To address this, we approximate the expected total reward for each action using the maximum value from the two Q- networks. The termination condition for the inference phase of NetMasquerade is as follows: (t ≥τ) ∨  max i=1,2 Qωi(st, at) ≥ξ −β · rD −γ · rM  , (17) 8 Algorithm 2 NetMasquerade Inference Process 1: Initialize policy network πϕ(a|s) with trained parameters 2: Initialize Q-networks Qω1(s, a) and Qω2(s, a) with trained parameters 3: Set step t = 0 4: Transform the malicious flow into initial state s0 5: while t < τ do 6: Observe state st and select action at ∼πϕ(·|st) based on the policy network 7: Modify st by inserting [MASK] or replacing features with [MASK] to produce s′ t 8: Use Traffic-BERT for Mask-Fill task to fill [MASK], obtaining st+1 9: Calculate Q-Values q1 ←Qω1(st, at), q2 ←Qω2(st, at) 10: if maxi=1,2 qi ≥ξ′ then 11: break ▷Termination condition is met 12: end if 13: t ←t + 1 14: end while 15: Restore st to the final adversarial malicious traffic where the threshold ξ is determined empirically from the training phase. Specifically, we monitor the attack success rate (ASR) during training. Once the ASR stabilizes at a high level, indicating a successfully trained agent, the corresponding Q- value is recorded to serve as the threshold. In cases where the attacker cannot compute rM, the termination condition can be transformed into: (t ≥τ) ∨  max i=1,2 Qωi(st, at) ≥ξ′  . (18) VI. EVALUATION A. Experiment Setup Implementation. Traffic-BERT and the RL pipeline are writ- ten in Python v3.8.15 using PyTorch [74] v1.13.1. Each adversarial flow produced by NetMasquerade is delivered over a socket to an Intel DPDK [45] v24.11.1 worker that emits the actual packets. The DPDK process, written in C (compiled with GCC v9.4.0 -O3 via Meson v0.61.5 and Ninja v1.8.2), pre-allocates NUMA-aware mbuf pools, configures a single 1024-descriptor TX queue, and relies on TSC-based busy- wait pacing to preserve µs-level inter-packet spacing, thereby avoiding the NIC’s internal burst-coalescing that would other- wise distort the on-wire delay. NetMasquerade runs on a Dell server equipped with two Intel Xeon Gold 6348 CPUs (2×28 cores, 112 threads) and a single NVIDIA Tesla A100 GPU (driver v530.30.02, CUDA [67] v12.1) under Ubuntu v18.04.6 (Linux 5.4.0-150-generic). The DPDK worker interfaces with an Intel 82599SE NIC (2 × 10 Gb/s SFP+ ports). All hyper- parameters are listed in Table V in the Appendix. Datasets. We use real-world backbone network traffic traces from Samplepoint-F of the WIDE MAWI project [99], col- lected in June and August 2023, as background traffic. Follow- ing established practices [52], [78], we remove scanning traffic that attempts to connect to more than 10% of IP addresses and apply additional filtering rules [52] to eliminate flooding traffic. We then employ the resulting background traffic in two ways: (i) to train Traffic-BERT, using more than 1 million flows collected in June 2023, and (ii) to supplement the target system’s training data with flows from August 2023 when the proportion of benign traffic in the malicious dataset is insufficient (Botnet Attacks, see Table III). Notably, this choice does not compromise the black-box setting, as there is no correlation between the distributions of the datasets. To closely mirror real-world scenarios and highlight Net- Masquerade’s task-agnostic capabilities, we replay 4 groups of attacks from multiple datasets, totaling 12 attacks: (i) Recon- naissance and Scanning Attacks, including host-scanning and fuzz-scanning traffic; (ii) Denial of Service Attacks, covering SSDP and TCP SYN flood traffic; (iii) Botnet Malwares, featuring 4 common botnet strains—Mirai, Zeus, Storm, and Waledac; and (iv) Encrypted Web Attacks, encompassing webshell, XSS, CSRF, and encrypted spam traffic. The details of the datasets can be found in the Appendix IX-A. Target Systems. We deliberately select as attack target 6 existing malicious traffic detection systems that reflect diverse designs. We use 3 advanced traditional ML-based detection systems: Whisper [28], FlowLens [7], NetBeacon [104] and 3 top-performing DL-based systems: Vanilla feature + RNNs, CICFlowMeter [22], [54] + MLP, Kitsune [62]. They operate using different learning approaches like supervised classifica- tion [104] and unsupervised anomaly detection [62], [28], at both the flow-level [7] and packet-level [62], and their imple- mentations span both software [62], [28] and programmable switches [104], [7]. More information about malicious detec- tion systems can be found in Appendix IX-B. Baselines. To validate NetMasquerade, we select 2 classic and 2 state-of-the-art attack methods as baselines: • Random Mutation. Random Mutation refers to the technique of obscuring traffic by randomly adjusting IPDs. This tradi- tional method has been demonstrated to be powerful in sev- eral works [43], [85], and existing attack tools also employ this method [64], [8]. In our experiments, the randomization of time intervals follows a Gaussian distribution based on the malicious flow’s mean and variance. The number of mutated packets matches NetMasquerade’s modification steps. • Mutate-and-Inject. We combine Random Mutation and Packet Injection to create a comprehensive attack strategy, which has been used as a standard for evaluating the robustness of advanced detection systems [28]. For Random Mutation, we follow the same rules described above. For Packet Injection, we either inject chaff packets with random sizes and intervals into the malicious flow or duplicate segments of the malicious packets. The modification steps match those in NetMasquerade. • Traffic Manipulator [37]. Traffic Manipulator is the SOTA attack algorithm capable of generating practical adversarial traffic against malicious traffic detection systems in a gray- box scenario. Traffic Manipulator learns adversarial traffic 9 TABLE II ATTACK SUCCESS RATE (ASR) OF NETMASQUERADE AND BASELINES ON DETECTION SYSTEMS Target System Methods Recon.&Scan. DoS Botnet Encrypted Web Attacks Overall Scan Fuzz. SSDP SYN Mirai Zeus Storm Waledac Webshell XSS CSRF Spam Traditional ML-based systems Whisper R.M. - 1 0.0100 - 0.2552 0.2324 0.1011 0.2289 0.0585 0.0812 0.0721 0.1717 0.0927 0.1087 M.I. 0.8907 0.0756 0.1132 0.3346 0.5521 0.5719 0.4590 0.4251 0.6802 0.7010 0.7259 0.7319 0.5218 T.M. 0.9344 0.9270 0.7712 0.2790 0.6355 0.2551 0.1820 0.3664 0.5839 0.5527 0.6055 0.9072 0.5833 Amoeba 0.9999 0.9934 0.9999 0.9998 0.9167 0.9254 0.9844 0.8970 0.9999 0.9999 0.9966 0.8381 0.9626 NetM. 0.9999 0.9965 0.9999 0.9467 0.9988 0.9972 0.9999 0.9355 0.9999 0.9999 0.9999 0.9795 0.9878 FlowLens R.M. - - 0.1782 0.7660 0.6893 0.0760 0.3846 0.0434 0.0100 - 0.0150 - 0.1802 M.I. 0.9800 0.1158 0.2375 0.5950 0.9370 0.4941 0.6510 0.3114 0.6391 0.5959 0.6633 0.1313 0.5293 T.M. 0.0222 0.1525 0.9344 0.9125 0.8591 0.2670 0.8374 0.2899 0.0760 0.0736 0.0036 0.3913 0.4016 Amoeba 0.9976 0.9442 0.9999 0.9990 0.8776 0.8665 0.9252 0.8000 0.9990 0.9999 0.9295 0.9700 0.9424 NetM. 0.9999 0.9335 0.9999 0.9995 0.9537 0.9102 0.9990 0.9955 0.9795 0.9999 0.9428 0.9475 0.9717 NetBeacon R.M. - - 0.5291 0.1823 0.2864 0.0230 - 0.0790 0.6294 0.3916 0.1066 0.1030 0.1942 M.I. 0.6511 - 0.2285 0.2841 0.5544 0.3455 0.3032 - 0.8781 0.7010 0.6446 0.1134 0.3920 T.M. 0.6494 0.2435 0.8577 0.4393 0.3047 0.1992 0.4415 0.2180 0.4585 0.5645 0.5294 0.9091 0.4846 Amoeba 0.9900 0.9999 0.9987 0.9999 0.9999 0.5905 0.6916 0.9727 0.9550 0.9999 0.9894 N/A 2 0.8490 NetM. 0.9999 0.9999 0.9999 0.9999 0.9899 0.9449 0.9965 0.9999 0.9999 0.9955 0.9999 0.8448 0.9809 DL-based systems Vanilla R.M. - 0.3660 0.0455 0.5815 0.1163 - - 0.3299 0.0118 - 0.0050 0.0515 0.1256 M.I. 0.9510 - - 0.3355 0.8769 - 0.5415 0.6711 0.6085 0.5353 0.6751 0.1958 0.4492 T.M. - 0.0375 0.8600 0.6550 0.0790 0.2232 0.2595 0.1617 0.0492 0.0278 0.0264 0.8636 0.2702 Amoeba 0.9999 0.9999 0.9999 0.9999 0.8038 0.7156 0.6540 0.2682 0.9975 0.9999 0.9455 0.2538 0.8032 NetM. 0.9999 0.9985 0.9825 0.9890 0.9817 0.9894 0.9805 0.9687 0.9999 0.9999 0.9999 0.8485 0.9782 CIC. R.M. - 0.0422 0.1100 0.6398 0.5578 0.2467 0.2922 0.0301 0.0151 0.1855 0.4467 0.1031 0.2224 M.I. 0.2300 0.1367 0.9711 0.5735 0.7111 0.3956 0.5396 0.2122 0.7011 0.6185 0.6598 0.2886 0.5032 T.M. 0.1444 - 0.9822 0.6520 0.6656 0.1433 0.3026 0.1021 0.0311 0.0445 0.3381 0.6391 0.3371 Amoeba 0.9999 N/A 0.9999 0.9112 0.9980 0.9999 0.8704 0.8182 0.9800 0.9865 0.9999 N/A 0.7970 NetM. 0.9999 0.9744 0.9999 0.9959 0.9999 0.9867 0.8898 0.9810 0.9767 0.9999 0.9999 0.7475 0.9626 Kitsune R.M. - - 0.2379 0.3744 0.2949 0.0360 0.0990 0.2901 - 0.0277 0.0374 - 0.1165 M.I. 0.3514 0.4484 0.0913 0.1815 0.8109 0.0801 0.4424 0.6334 0.6159 0.4498 0.3493 0.5359 0.4159 T.M. 0.9760 0.9860 0.7848 0.5590 0.9049 0.4735 0.8318 0.7878 0.8884 0.8965 0.8406 0.6949 0.8020 Amoeba 0.9339 N/A 0.8949 0.9292 0.9915 0.9449 0.7256 0.4595 0.4355 0.7814 0.7017 N/A 0.6498 NetM. 0.9049 0.9850 0.8218 0.9333 0.9968 0.9359 0.9911 0.9291 0.9219 0.9231 0.9177 0.7522 0.9177 1 These methods cannot successfully attack the target (ASR < 0.01). 2 These methods cannot generate legitimate traffic (with illegal packet sizes). patterns with GANs and adjusts the patterns of malicious traffic with the particle swarm optimization algorithm. We use the open-source implementation of Traffic Manipula- tor [38] and retrain the model. We use the Kitsune Feature Extractor as the default for Traffic Manipulator, following the paper. This means that for Kitsune, it is a gray-box attack, whereas for other systems, it is a black-box attack. • Amoeba [58]. Amoeba is designed with a per-packet ad- justment technique to circumvent censorship in a black-box scenario. Specifically, Amoeba leverages RL to truncate and pad packets, which are then reassembled by the remote end after passing through the censorship system. We disregard the practical applicability of Amoeba’s splitting strategy under different traffic conditions and only evaluate whether it can bypass different types of detection systems. We adopt its open-source implementation and retrain the model. Metrics. We use AUC (Area Under the Curve) and F1 score to assess the effectiveness of the malicious traffic detection systems, and attack success rate (ASR) to measure the per- formance of attack methods. Specifically, ASR measures the fraction of malicious flows that are not detected when the IDS uses the threshold rendering the highest F1 score on the validation data. We also measure the bandwidth (megabits per second, Mbps) of both malicious and adversarial traffic to illustrate the impact of perturbations. Moreover, we measure throughput (packets per second, PPS) to show the efficiency. B. Attack Performance Table II presents the attack success rate against advanced detection systems. NetMasquerade achieves 0.7475 ∼0.999 ASR, with an average of 0.9878, 0.9717, 0.9809, 0.9782, 0.9626 and 0.991 against Whisper, FlowLens, NetBeacon, Vanilla, CICFlowMeter and Kitsune detection systems, show- ing an improvement of 2.61%, 3.11%, 15.53%, 21.88%, 20.78%, and 14.42% over the best performance of the base- lines. In 56 out of the 72 evaluated scenarios, NetMasquer- 10 Episodes Max Q-Value 0.0 0.4 0.6 0.8 1.0 0 200 400 600 800 1000 1200 0.2 1.2 Mirai Storm Fuzzing Zeus (a) NetBeacon. Episodes Max Q-Value 0.0 0.4 0.6 0.8 1.0 0 200 400 600 800 1000 1200 0.2 1.2 Mirai Storm (b) CICFlowMeter + MLP. Fig. 6. Max Q-Value during the training phase. The shaded region represents a standard deviation of the average evaluation over 5 trials. Curves are smoothed uniformly for visual clarity. ade achieves the highest attack success rate. By contrast, Amoeba matches or exceeds NetMasquerade’s performance in 26 scenarios, yet in certain cases its success rate plummets below 30% and even produces flows with illegal packet sizes. Moreover, Amoeba relies on truncating and padding every single packet to craft adversarial traffic, requiring cooperation from the receiving endpoint to reassemble these packets. As a result, this packet-level manipulation can fail in practical attack scenarios (e.g., spam), where such coordination is typically unavailable. Meanwhile, we observe that Traffic Manipulator’s performance drops significantly under the black-box setting (i.e., when the attacker cannot access the feature extractor), while NetMasquerade maintains its capability under all scenar- ios. Figure 6 shows the max Q-Value during the training phase. NetMasquerade achieves 90% convergence in less than 420 episodes in 6(a) and converges within 300 episodes in 6(b), demonstrating strong convergence ability. This ensures that the RL stage incurs low training overhead. We can define the Q-Value threshold ξ′ according to the training curve. NetMasquerade is a stealthy attack capable of generating effective adversarial traffic with minimal modifications. We set the step threshold τ case-by-case while ensuring it is no more than 10. Random Mutation and Mutate-and-Inject perform poorly under the same step setting, as shown in Table II. Amoeba, Traffic Manipulator and other traffic obfuscation methods [61] typically modify most of the packets, making the attack easy to detect. Figure 7 illustrates the relationship between ASR and Q-Value threshold ξ′ under different step thresholds τ. In Figure 7(a), we show the effect of attacking FlowLens on the Zeus dataset, which is a complex scenario (i.e., ASR = 0.9102). NetMasquerade achieves near-optimal attack rates within no more than 10 steps of modifications. According to Figure 7(b), we find that ASR is not always positively correlated with the number of modification steps at the same ξ, especially in the neighborhood of higher ASR values. An excessively high τ may lead the algorithm to make excessive modifications when the ξ cannot be satisfied. NetMasquerade ensures the effectiveness of adversarial traf- fic. Our focus is on two types of malicious traffic: SSDP Flood and SYN DoS, whose effectiveness is demonstrated by high rates. We define the effectiveness penalty rM as the sending rate of the flow. Additionally, by post-processing Traffic-BERT (see § IV-B for details), we can limit the range of IPD perturbations. We measure the bandwidth distributions of the original and adversarial traffic, as shown in Figure 8. Q-Value threshold !′ ASR 0.4 0.6 0.8 1.0 0.70 0.80 0.85 0.90 0.95 1.00 1.05 Ideal ! = 10 ! = 8 ! = 5 ! = 3 (a) FlowLens / Zeus. Q-Value threshold !′ ASR 0.0 0.4 0.6 0.8 1.0 0.70 0.80 0.85 0.88 0.90 0.95 1.00 0.2 Ideal 1.00 0.95 0.95 1.00 Ideal ! = 10 ! = 5 ! = 3 ! = 1 (b) Vanilla + RNN / Storm. Fig. 7. The relationship between ASR and Q-Value threshold ξ′ under different step thresholds τ. “Ideal” represents the maximum ASR when the attacker can obtain feedback. Bandwidth [Mbps] PDF 0.0 0.1 0.2 0 10 20 30 40 50 Original Adversarial (a) SSDP Flood. Bandwidth [Mbps] PDF 0.0 0.1 0 10 20 30 40 50 Original Adversarial (b) SYN DoS. Fig. 8. Bandwidth of DoS attack. The KL divergence between the adversarial and original traffic distributions for SSDP Flood and SYN DoS is 0.009 and 0.013, indicating that NetMasquerade does not significantly change the bandwidth distribution. On the other hand, the negligible KL divergence confirms that NetMasquerade does not introduce any new, delay-related artifacts that can be readily detected. C. Overhead and Efficiency Training Overhead. To be practical in real-world scenar- ios, our attack must be prepared efficiently. Our framework achieves this through the two-stage design that separates computationally intensive training from rapid online execution. • Stage 1 (Benign Traffic Pattern Mimicking) is pre-trained offline on publicly available benign traces. Since this stage can be completed well before the actual attack commences, its end-to-end training cost (about ∼75 hours on our testbed) does not affect the online generation speed. • Stage 2 (Adversarial Traffic Generation) runs online during the actual attack setup. We measure the time required for the RL loop (i.e., adversarial flow generation →DPDK emission →feedback reception →policy update) to con- verge. On our testbed, this stage is highly efficient, with the policy typically converging within just 1 hour. Overall, the two-stage design shifts most of the overhead to offline training, thereby ensuring timely attack execution. Efficiency. High inference speed is crucial for generating adversarial traffic, especially in high-throughput network en- vironments. Our measurement is end-to-end, including packet extraction and transmitting packets via our DPDK engine. Figure 9(a) compares the throughput of NetMasquerade with baseline methods. Both NetMasquerade and Traffic Manipu- lator operate at the flow level, yet NetMasquerade achieves an average efficiency improvement of ∼69.6× over Traf- fic Manipulator across eight datasets. In contrast, Amoeba performs packet-level inference, maintaining a throughput of approximately 300 ± 10 PPS under various attack scenarios. 11 PPS Fuzzing SSDP Flood OS Scan Zeus Mirai Waledac Storm Syn DoS 0 1 2 3 log10 Scale T.M. NetMasquerade 66.1× 54.5× 12.7× 12.9× 6.4× 272.2× 17.2× 115.1× Amoeba (a) Throughput comparison. PPS 0 2 6 ×1e2 Scale Fuzzing 8 4 1 2 3 5 8 10 SSDP Flood T.M. Amoeba (b) Throughput vs. steps under attack Fig. 9. Efficiency of NetMasquerade and baselines. Notably, for long-flow attacks (e.g., Waledac), NetMasquerade exhibits a clear advantage in efficiency. Figure 9(b) shows the efficiency curve under different maximum inference steps. By adjusting thresholds ξ′ and τ, we can make a trade-off between attack accuracy and inference efficiency. In our experiments, the maximum number of inference steps is generally no more than 10. D. Deep Dive Effectiveness under Limited Probes. NetMasquerade as- sumes that the attacker is able to perform a probing process to collect feedback for model training. However, the probing budget may be limited, as excessive probes could trigger alarms and lead to aggressive countermeasures. To quantify this trade-off, we evaluate the ASR of NetMasquerade given various probing budgets against two detection systems (Net- Beacon and Vanilla + RNN) across three distinct datasets each. As shown in Figure 10 (the solid lines), the ASR exhibits a steep learning curve between 200 and 1, 000 probes. For instance, against NetBeacon, NetMasquerade achieves, on average, 35.6%, 70.4%, and 88.9% of its final ASR with bud- gets of 200, 500, and 1, 000 probes, respectively. The policy typically converges between 1, 000 and 2, 000 probes, although certain complex scenarios (e.g., Vanilla + RNN / Spam) may require a larger budget. This probing load is several orders of magnitude lower than the steady-state workload of a single data-center server (∼500 flows per second) [79] or the capacity of a commercial telemetry system [15] (over 50, 000 flows per-second [14]). Crucially, this volume also sits below the thresholds that mainstream IDS products use to raise scan or anomaly alarms [13]. Furthermore, we examine how the probing budget affects the average number of perturbation steps per flow (the dashed lines in Figure 10). Since each step costs one probe, this count reflects the model’s probe utilization efficiency. In the early training phase, the agent requires more steps to explore the action space and identify effective modifications; 100 200 500 1000 2000 Probing Budgets 0.0 0.2 0.4 0.6 0.8 1.0 ASR Fuzzing Zeus XSS 0 1 5 10 Steps (a) NetBeacon. 100 200 500 1000 2000 Probing Budgets 0.0 0.2 0.4 0.6 0.8 1.0 ASR Spam Waledac Mirai 0 1 5 10 Steps (b) Vanilla + RNN. Fig. 10. ASR under different probing budgets. Solid lines denote ASR; dashed lines denote steps. (a) Vanilla + RNN. (b) Whisper. Fig. 11. ASR under different noise levels. as training progresses and feedback accumulates, the average steps per flow drop sharply. This shows the policy is learning to use the budget more efficiently, which is key to learning optimal, minimal-modification policies and contributes to the fast convergence of our framework. Robustness to Noisy Feedback. The feedback that an attacker can collect may be unreliable in real-world deployment. Such noise may be caused by various reasons, including misclassifi- cations, specific configurations, or even adversarial responses from the target designed to mislead the attacker. Figure 11 shows the final ASR achieved against the Vanilla + RNN and Whisper under noise levels of 5%, 15%, and 30%. The results demonstrate that NetMasquerade maintains high efficacy under moderate noise. For example, when the noise level is 5%, Net- Masquerade exhibits an average drop in attack success rate of only 0.063. When the reward signal becomes highly unreliable (i.e., given a 30% noise level), the ASR degrades gracefully rather than collapsing. Meanwhile, we observe that increased noise primarily affects the convergence efficiency, particularly the stability of the Q-value. With extended explorations, it is still possible to achieve higher ASR. Importance of NetMasquerade’s Components. NetMasquer- ade’s success relies on its two-stage framework design. We validate the necessity of each stage through a series of ablation studies, which show that removing or replacing either stage significantly reduces the attack success rate. Detailed ablation analysis can be found in Appendix IX-D. E. Robustness against Defenses NetMasquerade generates adversarial traffic within the traf- fic (physical) space, which translates into an unrestricted adversarial attack in the feature space. Consequently, defenses that operate in the feature space, that is, defenses designed to make a model robust to variations within a certain numerical norm of its input feature vectors, such as adversarial train- ing [32], [69] or sample randomized smoothing [17], [55], fail to counter NetMasquerade. 12 ASR Fuzzing SSDP Flood OS Scan Zeus Mirai Waledac Storm Syn DoS 0.80 0.85 0.90 0.95 Vanilla BARS 1.00 Whisper RNN Fig. 12. ASR against BARS. • BARS [95]. BARS employs a combined distribution trans- former in feature space to map smoothed noise onto arbitrary manifolds, providing certified robustness bounds for certain perturbations in that space. We use the open-source imple- mentation of BARS and deploy it on two detection methods: Whisper and Vanilla + RNN, and the results are shown in Figure 12. NetMasquerade maintains a high ASR across 8 test datasets even against BARS. The primary reason lies in BARS’s limited certified bound in the feature space, whereas NetMasquerade performs multi-step manipulations in the traffic space (e.g., inserting additional packets), leading to perturbations that exceed BARS’s bound when projected back into the feature space. Moreover, in some datasets, NetMasquerade even attains a higher ASR under BARS, likely because random noise reduces the model’s accuracy, even with training data augmentation. Guidelines for Building More Robust Detection Systems. Overall, it is difficult for the feature-space defenses to defend against NetMasquerade, because even small perturbations in traffic space can lead to large distances in feature space. A straightforward countermeasure is traffic-space adversarial training: augment the training set with packet-level perturba- tions to strengthen the model’s decision boundaries. Second, instead of certifying an Lp norm in feature space, traffic space certification (e.g., bounding the number of inserted packets) may be more effective against NetMasquerade. Fi- nally, NetMasquerade’s training process implicitly assumes a static decision boundary from the target model. Thus, the defense models can introduce randomness at inference to avoid static decision boundaries, such as altering the model architecture [21] or parameters [60]. Such techniques would force the attacker to optimize against a distribution of models rather than a single target, expanding the exploration space. We will explore these strategies in future work. VII. RELATED WORK ML-based Malicious Traffic Detection. For generic detec- tion, various methods have been developed to learn flow-level features, such as frequency domain features [28], distribution features [7], statistical features [101], and graph features [30]. In particular, existing methods utilize programmable network devices to achieve high efficiency, e.g., NetBeacon [104] in- stalled decision trees on programmable switches, N3IC imple- mented binary neural networks on SmartNICs [83]. In contrast to these flow-level detections, Kitsune [62], nPrintML [42], and CLAP [105] learned packet-level features of malicious traffic. For task-specific detection, several studies aimed to detect malware behaviors. For example, Tegeler et al. [89] detected communication traffic from botnets. Similarly, Tang et al. [87] detected malicious web traffic. Dodia et al. [20] identified malicious Tor traffic from malware. Furthermore, Sharma et al. [81] and Tekiner et al. [90] captured attack traffic targeting IoT devices. On the Robustness of Traffic Detection. Robustness is- sues are prevalent in traffic analysis systems, i.e., attackers can easily construct adversarial traffic examples to trick the systems into misclassifying traffic. First, Fu et al. [27] revealed that attackers can easily mimic benign traffic to evade the traditional methods [62], [83] by simply injecting random noise. Such observations necessitate robust detection methods [30], [28], [27]. Second, advanced evasion strategies are developed, which optimize the adversarial traffic examples according to the outputs of white-box [96], [82], [70] and grey- box detection models [37]. These methods are different from our hard-label black-box evasion. Additionally, existing studies analyzed the robustness of traffic analysis other than traffic de- tection, e.g., improving robustness for web fingerprinting [65], [6], which are orthogonal to our evasion attack. Common Issues of ML-Based Security Application. Som- mer et al. [84] analyzed why ML-based traffic detection systems suffer from low usability, and emphasized the impor- tance of considering evasion behaviors of real-world attackers. Arp et al. [5] explored the practical challenges associated with ML-based applications, highlighting issues of evasion at- tacks [62], [23]. Moreover, Alahmadi et al. [2], Vermeer et al. [93], and Fu et al. [31] further demonstrated that existing ML- based traffic detections raised massive false positive alarms. Additionally, Han et al. [36], Jacobs et al. [47], and Wei et al. [98] addressed the explainability of traffic detection systems. VIII. CONCLUSION In this paper, we introduce NetMasquerade, a hard-label black-box evasion attack method specifically devised for malicious traffic detection systems. NetMasquerade employs a two-stage framework. First, NetMasquerade establishes a tailored pre-trained model called Traffic-BERT for capturing diverse benign traffic patterns. Subsequently, NetMasquerade integrates the Traffic-BERT into an RL framework, effectively manipulating malicious packet sequences based on benign traf- fic patterns with minimal modifications. Also, NetMasquerade introduces dissimilarity and effectiveness penalties, allowing adversarial traffic to retain attack stealth and effectiveness. Extensive experiments show that NetMasquerade enables both high-rate and low-rate attacks to evade 6 top-performing de- tection systems in 80 attack scenarios, achieving over 96.65% attack success rate on average. Moreover, NetMasquerade applies minimal modifications to no more than 10 steps in all scenarios. Additionally, NetMasquerade can achieves low-latency adversarial traffic generation, demonstrating its practicality in real-world scenarios. 13 IX. ETHICAL CONSIDERATIONS We carefully assess several ethical aspects to ensure that our study adheres to ethical standards. This work is aimed solely at assessing and improving the robustness of traffic detection models, rather than facilitating malicious or unlawful activities. We strictly follow all terms of use, and no pri- vate or sensitive data is accessed or disclosed. All analysis relies exclusively on publicly available datasets—Kitsune, PeerRush, HyperVision (malicious traffic), and MAWI (benign traffic)—without intercepting or manipulating any real-world network traffic. Likewise, we do not perform any active attacks or evasions against real detection systems, ensuring no impact on actual network traffic or stability. ACKNOWLEDGMENT We thank the anonymous reviewers for their thoughtful comments. This work was supported in part by the National Science Foundation for Distinguished Young Scholars of China (No. 62425201), the National Natural Science Foun- dation of China (Grant Nos. 62472036, 62472247, 62202258, 62221003, 62132011, 61932016, and U22B2031), and Beijing Nova Program. Yi Zhao and Ke Xu are the corresponding authors. REFERENCES [1] AKamai, “Prolexic,” https://www.akamai.com/products/ prolexic-solutions, Accessed May 2024. [2] B. A. Alahmadi et al., “99% false positives: A qualitative study of SOC analysts’ perspectives on security alarms,” in Security. USENIX Association, 2022, pp. 2783–2800. [3] E. Alhajjar, P. Maxwell, and N. D. Bastian, “Adversarial machine learning in network intrusion detection systems,” Expert Syst. Appl., vol. 186, p. 115782, 2021. [4] B. Anderson and D. A. McGrew, “Identifying encrypted malware traffic with contextual flow data,” in AISec@CCS. ACM, 2016, pp. 35–46. [5] D. Arp et al., “Dos and don’ts of machine learning in computer security,” in Security. USENIX Association, 2022. [6] A. Bahramali, A. Bozorgi, and A. Houmansadr, “Realistic website fingerprinting by augmenting network traces,” in CCS. ACM, 2023, pp. 1035–1049. [7] D. Barradas, N. Santos, L. Rodrigues, S. Signorello, F. M. V. Ramos, and A. Madeira, “FlowLens: Enabling efficient flow classification for ml-based network security applications,” in NDSS. The Internet Society, 2021. [8] BeichenDream, “Godzilla,” https://github.com/BeichenDream/ Godzilla, 2021. [9] L. Bilge et al., “Disclosure: Detecting botnet command and control servers through large-scale netflow analysis,” in ACSAC. ACM, 2012, pp. 129–138. [10] T. Chen and C. Guestrin, “Xgboost: A scalable tree boosting system,” in KDD. ACM, 2016, pp. 785–794. [11] J. Chung, C¸ . G¨ulc¸ehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” CoRR, vol. abs/1412.3555, 2014. [12] Cisco Systems, “Encrypted traffic analytics white paper,” 2021, https://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/ enterprise-network-security/nb-09-encrytd-traf-anlytcs-wp-cte-en. html. [13] Cisco Systems, Inc., Cisco Security Manager 4.19 User Guide, Cisco Systems, Inc., 2018. [Online]. Available: https://www.cisco.com/c/ en/us/td/docs/security/security management/cisco security manager/ security manager/419/user/guide/CSMUserGuide.html [14] ——, Send On-Premises Flows from Cisco Telemetry Broker or Secure Network Analytics to Secure Cloud Analytics Configuration Guide, Cisco Systems, Inc., 2023, configuration Guide v7.5.0, Document Version 1.0. [Online]. Available: https://www.cisco.com/ c/dam/en/us/td/docs/security/stealthwatch/on-premises-flows/7 5 0 Send On Prem Flows to Secure Cloud Analytics DV 1 0.pdf [15] ——, “Cisco Telemetry Broker Data Sheet,” https://www.cisco.com/c/ en/us/products/collateral/security/telemetry-broker/ctb-datasheet.html, Mar. 2024. [16] Cloudflare, “Cloudflare ddos protection products,” https://developers. cloudflare.com/ddos-protection/managed-rulesets/adaptive-protection/, Accessed May 2024. [17] J. Cohen, E. Rosenfeld, and J. Z. Kolter, “Certified adversarial robust- ness via randomized smoothing,” in ICML, ser. Proceedings of Machine Learning Research, vol. 97. PMLR, 2019, pp. 1310–1320. [18] G. Detal, B. Hesmans, O. Bonaventure, Y. Vanaubel, and B. Donnet, “Revealing middlebox interference with tracebox,” in Proceedings of the 2013 Internet Measurement Conference, IMC 2013, Barcelona, Spain, October 23-25, 2013, K. Papagiannaki, P. K. Gummadi, and C. Partridge, Eds. ACM, 2013, pp. 1–8. [19] J. Devlin, M. Chang, K. Lee, and K. Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” in NAACL-HLT (1). Association for Computational Linguistics, 2019, pp. 4171–4186. [20] P. Dodia, M. AlSabah, O. Alrawi, and T. Wang, “Exposing the rat in the tunnel: Using traffic analysis for tor-based malware detection,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, November 7-11, 2022, H. Yin, A. Stavrou, C. Cremers, and E. Shi, Eds. ACM, 2022, pp. 875–889. [Online]. Available: https://doi.org/10.1145/3548606.3560604 [21] M. Dong, X. Chen, Y. Wang, and C. Xu, “Random normalization aggregation for adversarial defense,” in NeurIPS, 2022. [22] G. Draper-Gil, A. H. Lashkari, M. S. I. Mamun, and A. A. Ghorbani, “Characterization of encrypted and VPN traffic using time-related features,” in ICISSP. SciTePress, 2016, pp. 407–414. [23] M. Du, F. Li, G. Zheng, and V. Srikumar, “Deeplog: Anomaly detection and diagnosis from system logs through deep learning,” in CCS. ACM, 2017, pp. 1285–1298. [24] W. M. Eddy, “Transmission control protocol (TCP),” RFC, vol. 9293, pp. 1–98, 2022. [Online]. Available: https://doi.org/10.17487/RFC9293 [25] C. Estan and G. Varghese, “New directions in traffic measurement and accounting: Focusing on the elephants, ignoring the mice,” ACM Trans. Comput. Syst., vol. 21, no. 3, pp. 270–313, 2003. [26] X. Feng, C. Fu, Q. Li, K. Sun, and K. Xu, “Off-path TCP exploits of the mixed IPID assignment,” in CCS ’20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020, J. Ligatti, X. Ou, J. Katz, and G. Vigna, Eds. ACM, 2020, pp. 1323–1335. [Online]. Available: https://doi.org/10.1145/3372297.3417884 [27] C. Fu et al., “Frequency domain feature based robust malicious traffic detection,” IEEE/ACM Trans. Netw., vol. 31, no. 1, pp. 452–467, 2023. [28] C. Fu, Q. Li, M. Shen, and K. Xu, “Realtime robust malicious traffic detection via frequency domain analysis,” in CCS. ACM, 2021, pp. 3431–3446. [29] ——, “Detecting tunneled flooding traffic via deep semantic analysis of packet length patterns,” in CCS. ACM, 2024, pp. 3659–3673. [30] C. Fu, Q. Li, and K. Xu, “Detecting unknown encrypted malicious traffic in real time via flow interaction graph analysis,” in NDSS. The Internet Society, 2023. [31] C. Fu, Q. Li, K. Xu, and J. Wu, “Point cloud analysis for ml- based malicious traffic detection: Reducing majorities of false positive alarms,” in CCS. ACM, 2023, pp. 1005–1019. [32] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in ICLR (Poster), 2015. [33] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine, “Reinforcement learning with deep energy-based policies,” in ICML, ser. Proceedings of Machine Learning Research, vol. 70. PMLR, 2017, pp. 1352–1361. [34] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, “Soft actor-critic: Off- policy maximum entropy deep reinforcement learning with a stochastic actor,” in ICML, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 1856–1865. 14 [35] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, and S. Levine, “Soft actor- critic algorithms and applications,” CoRR, vol. abs/1812.05905, 2018. [36] D. Han, Z. Wang, W. Chen, Y. Zhong, S. Wang, H. Zhang, J. Yang, X. Shi, and X. Yin, “Deepaid: Interpreting and improving deep learning-based anomaly detection in security applications,” in CCS. ACM, 2021, pp. 3197–3217. [37] D. Han, Z. Wang, Y. Zhong, W. Chen, J. Yang, S. Lu, X. Shi, and X. Yin, “Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors,” IEEE J. Sel. Areas Commun., vol. 39, no. 8, pp. 2632–2647, 2021. [38] ——, “Trafficmanipulator,” https://github.com/dongtsi/ TrafficManipulator, 2021. [39] M. Handley, V. Paxson, and C. Kreibich, “Network intrusion detection: Evasion, traffic normalization, and end-to-end protocol semantics,” in USENIX Security Symposium. USENIX Association, 2001. [40] M. J. Hashemi, G. Cusack, and E. Keller, “Towards evaluation of nidss in adversarial setting,” in Big-DAMA@CoNEXT. ACM, 2019, pp. 14– 21. [41] T. Hester, M. Vecer´ık, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, I. Osband, G. Dulac-Arnold, J. P. Agapiou, J. Z. Leibo, and A. Gruslys, “Deep q-learning from demonstrations,” in AAAI. AAAI Press, 2018, pp. 3223–3230. [42] J. Holland, P. Schmitt, N. Feamster, and P. Mittal, “New directions in automated traffic analysis,” in CCS. ACM, 2021, pp. 3366–3383. [43] I. Homoliak, M. Teknos, M. Ochoa, D. Breitenbacher, S. Hosseini, and P. Han´acek, “Improving network intrusion detection classifiers by non-payload-based exploit-independent obfuscations: An adversarial approach,” EAI Endorsed Trans. Security Safety, vol. 5, no. 17, p. e4, 2019. [44] S. Huang and S. Onta˜n´on, “A closer look at invalid action masking in policy gradient algorithms,” in FLAIRS, 2022. [45] Intel, “Data Plane Development Kit,” https://www.dpdk.org/, 2025, accessed Apr 2025. [46] J. Iyengar and M. Thomson, “QUIC: A udp-based multiplexed and secure transport,” RFC, vol. 9000, pp. 1–151, 2021. [Online]. Available: https://doi.org/10.17487/RFC9000 [47] A. S. Jacobs et al., “AI/ML for network security: The emperor has no clothes,” in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2022, pp. 1537–1551. [48] V. Jacobson, “traceroute,” ftp://ftp.ee.lbl.gov/traceroute.tar.gz, Dec. 1988, lawrence Berkeley National Laboratory, software distribution. [49] M. Javed and V. Paxson, “Detecting stealthy, distributed SSH brute- forcing,” in CCS. ACM, 2013, pp. 85–96. [50] M. Jiang, B. Cui, J. Fu, T. Wang, L. Yao, and B. K. Bhargava, “RUDOLF: an efficient and adaptive defense approach against website fingerprinting attacks based on soft actor-critic algorithm,” IEEE Trans. Inf. Forensics Secur., vol. 19, pp. 7794–7809, 2024. [51] M. S. Kang, S. B. Lee, and V. D. Gligor, “The crossfire attack,” in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2013, pp. 127–141. [52] D. Kopp, J. Santanna, M. Wichtlhuber, O. Hohlfeld, I. Poese, and C. Dietzel, “Ddos hide & seek: On the effectiveness of a booter services takedown,” in Internet Measurement Conference. ACM, 2019, pp. 65– 72. [53] A. Kuzmanovic and E. W. Knightly, “Low-rate tcp-targeted denial of service attacks: the shrew vs. the mice and elephants,” in SIGCOMM. ACM, 2003, pp. 75–86. [54] A. H. Lashkari, G. Draper-Gil, M. S. I. Mamun, and A. A. Ghorbani, “Characterization of tor traffic using time based features,” in ICISSP. SciTePress, 2017, pp. 253–262. [55] M. L´ecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, “Certified robustness to adversarial examples with differential privacy,” in IEEE Symposium on Security and Privacy. IEEE, 2019, pp. 656–672. [56] X. Lin, G. Xiong, G. Gou, Z. Li, J. Shi, and J. Yu, “ET-BERT: A contextualized datagram representation with pre-training transformers for encrypted traffic classification,” in WWW ’22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, F. Laforest, R. Troncy, E. Simperl, D. Agarwal, A. Gionis, I. Herman, and L. M´edini, Eds. ACM, 2022, pp. 633–642. [Online]. Available: https://doi.org/10.1145/3485447.3512217 [57] Z. Lin, Y. Shi, and Z. Xue, “IDSGAN: generative adversarial networks for attack generation against intrusion detection,” in PAKDD (3), ser. Lecture Notes in Computer Science. Springer, 2022, pp. 79–91. [58] H. Liu, A. F. Diallo, and P. Patras, “Amoeba: Circumventing ml- supported network censorship via adversarial reinforcement learning,” PACMNET, vol. 1, no. CoNEXT, pp. 9:1–9:25, 2023. [59] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, “Roberta: A robustly optimized BERT pretraining approach,” CoRR, vol. abs/1907.11692, 2019. [60] Y. Ma, M. Dong, and C. Xu, “Adversarial robustness through random weight sampling,” in NeurIPS, 2023. [61] R. Meier, V. Lenders, and L. Vanbever, “ditto: WAN traffic obfuscation at line rate,” in NDSS. The Internet Society, 2022. [62] Y. Mirsky, T. Doitshman, Y. Elovici, and A. Shabtai, “Kitsune: An ensemble of autoencoders for online network intrusion detection,” in NDSS. The Internet Society, 2018. [63] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nat., vol. 518, no. 7540, pp. 529–533, 2015. [64] R. Mudge, “Cobalt strike - adversary simulation and red team opera- tions,” https://www.cobaltstrike.com/, 2020. [65] M. Nasr, A. Bahramali, and A. Houmansadr, “Defeating dnn-based traffic analysis systems in real-time with blind adversarial perturba- tions,” in USENIX Security Symposium. USENIX Association, 2021, pp. 2705–2722. [66] A. M. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in CVPR. IEEE Computer Society, 2015, pp. 427–436. [67] NVIDIA, “CUDA Toolkit: A Parallel Computing Platform on GPU,” https://developer.nvidia.com/cuda-toolkit, 2025, accessed Apr 2025. [68] Open Information Security Foundation, “Suricata – Network Intrusion Detection/Prevention and Security Monitoring Engine,” Software, Version 8.0.0, Jul. 2025, released 08 Jul 2025. [Online]. Available: https://suricata.io/ [69] N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami, “Distil- lation as a defense to adversarial perturbations against deep neural networks,” in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2016, pp. 582–597. [70] A. Piplai, S. S. L. Chukkapalli, and A. Joshi, “Nattack! adversarial attacks to bypass a GAN based classifier trained to detect network intrusion,” in BigDataSecurity/HPSC/IDS. IEEE, 2020, pp. 49–54. [71] M. Post and D. Vilar, “Fast lexically constrained decoding with dynamic beam allocation for neural machine translation,” in NAACL- HLT. Association for Computational Linguistics, 2018, pp. 1314– 1324. [72] J. Postel, “Internet Protocol,” Request for Comments 791, 1981. [Online]. Available: https://www.rfc-editor.org/rfc/rfc791 [73] ——, “Internet control message protocol,” RFC, vol. 792, pp. 1–21, 1981. [Online]. Available: https://doi.org/10.17487/RFC0792 [74] PyTorch, “A Deep Learning Framework,” https://pytorch.org/, 2025, accessed Apr 2025. [75] Z. Qian and Z. M. Mao, “Off-path TCP sequence number inference attack - how firewall middleboxes reduce security,” in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2012, pp. 347–361. [76] Y. Qing, Q. Yin, X. Deng, Y. Chen, Z. Liu, K. Sun, K. Xu, J. Zhang, and Q. Li, “Low-quality training data only? A robust framework for detecting encrypted malicious network traffic,” in NDSS. The Internet Society, 2024. [77] B. Rahbarinia, R. Perdisci, A. Lanzi, and K. Li, “PeerRush: Mining for unwanted P2P traffic,” in DIMVA, ser. Lecture Notes in Computer Science, vol. 7967. Springer, 2013, pp. 62–82. [78] P. Richter and A. W. Berger, “Scanning the scanners: Sensing the internet from a massively distributed network telescope,” in Internet Measurement Conference. ACM, 2019, pp. 144–157. [79] A. Roy, H. Zeng, J. Bagga, G. Porter, and A. C. Snoeren, “Inside the social network’s (datacenter) network,” in SIGCOMM. ACM, 2015, pp. 123–137. [80] N. N. Scanning, “The official nmap project guide to network discovery and security scanning,” Boston: Nmap Project, 2009. [81] R. A. Sharma, I. Sabane, M. Apostolaki, A. Rowe, and V. Sekar, “Lumen: A framework for developing and evaluating ml-based iot network anomaly detection,” in CoNEXT. ACM, 2022, pp. 59–71. [82] R. Sheatsley, B. Hoak, E. Pauley, Y. Beugin, M. J. Weisman, and P. D. McDaniel, “On the robustness of domain constraints,” in CCS. ACM, 2021, pp. 495–515. [83] G. Siracusano, S. Galea, D. Sanvito, M. Malekzadeh, G. Antichi, P. Costa, H. Haddadi, and R. Bifulco, “Re-architecting traffic analysis 15 with neural network interface cards,” in NSDI. USENIX Association, 2022, pp. 513–533. [84] R. Sommer and V. Paxson, “Outside the closed world: On using machine learning for network intrusion detection,” in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2010, pp. 305–316. [85] E. Stinson and J. C. Mitchell, “Towards systematic evaluation of the evadability of bot/botnet detection methods,” in WOOT. USENIX Association, 2008. [86] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in ICLR (Poster), 2014. [87] R. Tang, Z. Yang, Z. Li, W. Meng, H. Wang, Q. Li, Y. Sun, D. Pei, T. Wei, Y. Xu, and Y. Liu, “Zerowall: Detecting zero-day web attacks through encoder-decoder recurrent neural networks,” in INFOCOM. IEEE, 2020, pp. 2479–2488. [88] G. Tao, S. An, S. Cheng, G. Shen, and X. Zhang, “Hard-label black-box universal adversarial patch attack,” in USENIX Security Symposium. USENIX Association, 2023, pp. 697–714. [89] F. Tegeler, X. Fu, G. Vigna, and C. Kruegel, “Botfinder: Finding bots in network traffic without deep packet inspection,” in CoNEXT. ACM, 2012, pp. 349–360. [90] E. Tekiner, A. Acar, and A. S. Uluagac, “A lightweight iot cryptojack- ing detection mechanism in heterogeneous smart home networks,” in NDSS. The Internet Society, 2022. [91] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in NIPS, 2017, pp. 5998–6008. [92] M. Vecer´ık, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Roth¨orl, T. Lampe, and M. A. Riedmiller, “Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards,” CoRR, vol. abs/1707.08817, 2017. [93] M. Vermeer, N. Kadenko, M. van Eeten, C. Ga˜n´an, and S. Parkin, “Alert alchemy: SOC workflows and decisions in the management of NIDS rules,” in CCS. ACM, 2023, pp. 2770–2784. [94] J. Wan, J. Fu, L. Wang, and Z. Yang, “Bounceattack: A query-efficient decision-based adversarial attack by bouncing into the wild,” in IEEE Symposium on Security and Privacy. IEEE, 2024, pp. 1270–1286. [95] K. Wang, Z. Wang, D. Han, W. Chen, J. Yang, X. Shi, and X. Yin, “BARS: local robustness certification for deep learning based traffic analysis systems,” in NDSS. The Internet Society, 2023. [96] N. Wang, Y. Chen, Y. Xiao, Y. Hu, W. Lou, and Y. T. Hou, “MANDA: on adversarial example detection for network intrusion detection sys- tem,” IEEE Trans. Dependable Secur. Comput., vol. 20, no. 2, pp. 1139–1153, 2023. [97] N. Weaver, R. Sommer, and V. Paxson, “Detecting forged TCP reset packets,” in NDSS. The Internet Society, 2009. [98] F. Wei et al., “Xnids: Explaining deep learning-based network intrusion detection systems for active intrusion responses,” in Security. USENIX Association, 2023, p. to appear. [99] WIDE, “Mawi working group traffic archive,” http://mawi.wide.ad.jp/ mawi/, 2023, accessed: June 2023. [100] J. Xing et al., “Ripple: A programmable, decentralized link-flooding defense against adaptive adversaries,” in Security. USENIX, 2021, pp. 3865–3880. [101] Z. Xu, S. Ramanathan, A. M. Rush, J. Mirkovic, and M. Yu, “Xatu: Boosting existing ddos detection systems using auxiliary signals,” in CoNEXT. ACM, 2022, pp. 1–17. [102] M. Zhang, G. Li, S. Wang, C. Liu, A. Chen, H. Hu, G. Gu, Q. Li, M. Xu, and J. Wu, “Poseidon: Mitigating volumetric ddos attacks with programmable switches,” in NDSS. The Internet Society, 2020. [103] G. Zhou, X. Guo, Z. Liu, T. Li, Q. Li, and K. Xu, “Trafficformer: an efficient pre-trained model for traffic data,” in SP. IEEE Computer Society, 2024, pp. 102–102. [104] G. Zhou, Z. Liu, C. Fu, Q. Li, and K. Xu, “An efficient design of intelligent network data plane,” in USENIX Security Symposium. USENIX Association, 2023, pp. 6203–6220. [105] S. Zhu, S. Li, Z. Wang, X. Chen, Z. Qian, S. V. Krishnamurthy, K. S. Chan, and A. Swami, “You do (not) belong here: Detecting DPI evasion attacks with context learning,” in CoNEXT. ACM, 2020, pp. 183–197. APPENDIX A. Details of Datasets To closely mirror real-world scenarios, we replay 4 categories of malicious traffic, totaling 12 attack types: Reconnaissance & Scanning, Denial of Service (DoS), Botnet, and Encrypted Web attacks. The details are shown in Table III. • Reconnaissance and Scanning Attacks. These attacks iden- tify open ports and services across a wide range of servers by sending specific packets, e.g., by sending ICMP echo requests to determine if a host is active. Adversarial traffic made by NetMasquerade does not influence the effective- ness. Moreover, since these scanning attacks typically do not involve the transmission of payloads, they can bypass payload-based detection methods. Employing a pattern- based detection system is a common way of detecting such attacks. We select two distinct scanning attacks from Kitsune [62]: OS Scan and Fuzz Scan. • Denial of Service Attacks. DoS attacks incapacitate targeted services by inundating them with an overwhelming volume of requests, depleting resources and rendering the services unavailable. We selected SSDP DoS and SYN DoS traffic from Kitsune. Similar to scanning attacks, payload-based detection methods fall short against DoS attacks; however, detecting these attacks becomes feasible when focusing on the characteristics of their patterns. • Botnet Attacks. Botnets, which are large networks of com- promised machines, are controlled by attackers via com- mand and control (C&C) channels to conduct malicious ac- tivities [9], [89]. We employ the Mirai dataset from Kitsune, and analyze data from three typical botnets: Zeus, Storm, and Waledac, which were collected by PeerRush [77]. • Encrypted Web Attacks. Typically, malicious web traffic is encrypted with HTTPS, concealing its malicious behavior within the packet payload. This encryption prevents tra- ditional rule-based methods unable to detect the traffic. Meanwhile, most traditional ML-based detection systems cannot effectively detect the traffic due to their low packet rates [30]. We obtained four common types of Web attack traffic from [31], including automated vulnerability discov- ery (XSS, CSRF), Webshell, and Spam traffic. We replay traffic from diverse sources—covering high and low throughput flows, as well as encrypted and unencrypted streams—to demonstrate NetMasquerade’s general applicabil- ity across varying protocols and tasks. For the Scanning, DoS and Encrypted Web attacks, each target detection system is trained on the malicious and benign traces from the corre- sponding private datasets. For the Botnet attack, the original datasets contain virtually no benign traffic, so we supplement benign flows with traces from the WIDE MAWI project (August 2023). This choice does not compromise the black- box setting, as the Traffic-BERT model is trained on data from June 2023, ensuring that there is no correlation between the distributions of the datasets. To achieve class balance, we train the target models with thousands of malicious flows and an 16 TABLE III DETAILS OF MALICIOUS TRAFFIC DATASETS Malicious Traffic Dataset Description Source Bandwidth Enc. Ratio Mal. Ratio External Data 1 Recon. OS Scan Scanning for active hosts and operating systems. Kitsune [62] 0.96 Mbps 0.0% 0.0045 N/A Fuzz Scan Scanning for vulnerabilities in protocols. Kitsune 27.9 Mbps 0.0% 0.0089 N/A DoS SSDP DoS Amplifying SSDP traffic to flood targets. Kitsune 27.2 Mbps 0.0% 0.0321 N/A SYN DoS Flooding servers with half-open TCP connections. Kitsune 23.5 Mbps 0.0% 0.0858 N/A Botnet Mirai Infects IoT devices with the Mirai malware. Kitsune 0.12 Mbps 0.0% 0.8408 MAWI 2 Zeus Botnet infected by a Zeus trojan. PeerRush [77] 0.06 Mbps 0.0% 0.9999 MAWI Storm Peer-to-peer botnet spreading malware. PeerRush 25.3 Mbps 0.0% 0.9628 MAWI Waledac Spam botnet harvesting user data. PeerRush 13.9 Mbps 0.0% 1.0000 MAWI Enc. Web Attacks Webshell Malicious script enabling remote server control. H.V. [30] 11.2 Mbps 100.0% 0.0234 N/A XSS Injects malicious scripts into legitimate websites. H.V. 31.8 Mbps 100.0% 0.0259 N/A CSRF Fools authenticated users into unintended actions. H.V. 7.73 Mbps 100.0% 0.0236 N/A Spam Bulk messages with phishing / malware. H.V. 36.2 Mbps 100.0% 0.0238 N/A 1 We use an external benign dataset when the malicious dataset is nearly 100% malicious. 2 To ensure a strict black-box setting, we employ the real-world backbone network traces from the WIDE MAWI project’s August 2023 dataset [99] to train the additional models, keeping them entirely separate from the June 2023 traces used to train Traffic-BERT. equal number of benign flows. For the one-class detection system Kitsune, we use only benign samples during training. B. Details of Target Detection Systems Target Systems. We use three advanced traditional ML-based malicious traffic detection systems as target systems: • Whisper [28]. Whisper transforms the patterns of flows into frequency domain features, employing clustering to unsupervisedly learn these features. Similarly, the effective- ness of original traffic is assessed by the distance between frequency domain features and the cluster centers. Whisper is particularly effective at detecting DoS traffic, so we retrain the model on the DoS dataset using its default configuration. For botnet traffic, we replace clustering with a linear classifier to enhance detection capabilities. • FlowLens [7]. FlowLens samples the distribution of packet- level features on the data plane and uses random forests to learn these features in a supervised manner on the control plane, introducing a new paradigm of malicious traffic detection. We retrain the model with the default model structure and hyperparameters described in the paper. • NetBeacon [104]. NetBeacon introduces the concept of Intelligent Data Plane (IDP), performing flow-level feature extraction and feature classification with tree-based models directly in the data plane. We reconstruct the feature extrac- tion method described in the paper, select XGBoost [10] as the representative tree model for traffic classification, and adjust hyperparameters to achieve optimal accuracy. We also implement three top-performing DL-based systems: • CICFlowMeter [22], [54] + MLP. CICFlowMeter is a widely used feature processor that extracts over 80 time-related features from flow patterns. We employ a four-layer linear neural network to learn these features, with the number of neurons in the hidden layers set to three times that of the input layer. By adjusting the hyperparameters, we achieve the model’s best classification performance. • Vanilla + RNN. The native feature extractor extracts se- quences of sizes and IPDs from the flow, without additional feature processing. Given the sequential nature of the fea- tures, we use a single-layer LSTM as the model, taking the concatenation of the two feature sequences as input. • Kitsune [62]. Kitsune dynamically extracts and maintains per-packet features through specially designed feature ex- tractors and uses autoencoders and clustering to learn the features of benign traffic unsupervisedly. The detection of malicious traffic is based on the discrepancy between the autoencoder’s output and the original feature input. We retrain the model using its original feature extraction and model structure with default hyperparameters. Target System Detection Performance. Table IV summarizes the detection performance of 6 target systems on 12 types of malicious traffic. Notably, Kitsune outputs a score indicating how malicious each sample is; we perform a grid search to determine a threshold and compute the corresponding F1 score. We then apply this threshold in the attack experiments to calculate the ASR. All detection systems achieve high AUC and F1 scores, demonstrating strong performance in the absence of evasion attacks. C. Details of Hyperparameter Settings The default hyperparameters of NetMasquerade are listed in Table V. D. Deep Dive into NetMasquerade Importance of Two-Stage Framework. NetMasquerade con- sists of two stages: benign traffic pattern mimicking and adversarial traffic generation. To study the importance of each stage, we design three ablation strategies and conduct attacks 17 TABLE IV MALICIOUS TRAFFIC DETECTION SYSTEMS’ PERFORMANCE Malicious Traffic Dataset Traditional ML-based Systems DL-based Systems Whisper FlowLens NetBeacon Vanilla CIC. Kitsune AUC F1 AUC F1 AUC F1 AUC F1 AUC F1 AUC F1 Recon. OS Scan 0.9978 0.9979 0.9946 0.9947 0.9897 0.9899 0.9720 0.9728 0.9980 0.9980 0.9211 0.9780 Fuzz Scan 0.9905 0.9899 0.9972 0.9933 0.9913 0.9910 0.9662 0.9663 0.9900 0.9901 0.9952 0.9974 DoS SSDP DoS 0.9167 0.9231 0.9982 0.9981 0.9995 0.9994 0.9790 0.9790 0.9999 0.9999 0.9900 0.9996 SYN DoS 0.9879 0.9823 0.9815 0.9800 0.9903 0.9833 0.9616 0.9560 0.9852 0.9849 0.9801 0.9213 Botnet Mirai 0.9449 0.9458 0.9600 0.9463 0.9444 0.9371 0.9099 0.9156 0.9574 0.9521 0.9322 0.9762 Zeus 0.9121 0.9056 0.9250 0.8837 0.9279 0.9516 0.9118 0.9041 0.9625 0.9484 0.9246 0.9017 Storm 0.9495 0.9468 0.9395 0.9415 0.9972 0.9978 0.9233 0.9271 0.9968 0.9982 0.9302 0.9822 Waledac 0.9505 0.9484 0.9660 0.9653 0.9285 0.9467 0.9299 0.9304 0.9860 0.9862 0.8964 0.8414 Enc. Webshell 0.9980 0.9979 0.9955 0.9946 0.9943 0.9955 0.9989 0.9874 0.9965 0.9964 0.9996 0.9887 XSS 0.9975 0.9974 0.9965 0.9965 0.9967 0.9966 0.9845 0.9937 0.9937 0.9984 0.9991 0.9990 CSRF 0.9944 0.9950 0.9920 0.9920 0.9935 0.9934 0.9819 0.9822 0.9928 0.9927 0.9019 0.6625 Spam 0.9200 0.9135 0.9780 0.9756 0.9635 0.9622 0.8897 0.8847 0.9900 0.9901 0.8887 0.8690 TABLE V DETAILS OF HYPERPARAMETERS. Stage Hyperparameter Value Description 1: Traffic-BERT n 512 Fixed length of sequences. dk 128 Embedding size / total length of Q, K, V . N LAYERS 6 Number of encoder blocks. ATTN HEADS 8 Number of attention heads. D FF 512 Dimension of feed-forward layer. T SIZE 56 Size of IPD feature vocabulary. S SIZE 1606 Size of size feature vocabulary. 2: RL process β 0.01 ∼0.1 Weight of rD. γ 0 ∼0.2 Weight of rM. τ ≤10 Max step threshold. ξ′ 0.8 ∼1.05 Stop reward threshold. η 1 Discount factor. λ 0.9 Soft update weight of Q-networks. B 1e5 Size of experience replay buffer. TARGET ENTROPY −10 Desired policy entropy (related to α) on two detection systems, NetBeacon and CICFlowMeter + MLP, across all eight datasets. Table VI shows the ASR. In the first scenario, we retain stage 1 and replace stage 2 with randomly selecting positions for feature modifica- tions(denoted as S1). Clearly, under this setting, the attacker cannot find the optimal modification positions, resulting in a significant drop in attack capability. On both datasets, the attack success rate drops by an average of 56%. This result underscores that merely integrating BERT-generated traffic patterns is insufficient to evade detection; the RL step in Stage 2 is crucial for identifying the most strategic insertion points. In the second scenario, we remove stage 2 and replace it with a new Mask-Fill strategy. The first strategy is to fill in the selected positions with stochastic values between the minimum and maximum values of the same flow feature sequence (denoted as S2-S). By converting the Markov decision pro- cess from deterministic to stochastic, it becomes exceedingly difficult for the RL algorithm to converge. Consequently, we observe that the RL strategy predominantly adds chaff packets because the rewards for this type of action are relatively stable. The second strategy is filling in the selected positions with TABLE VI EFFECT OF TWO-STAGE FRAMEWORK. S1, S2-S, S2-F, NETM. STAND FOR STAGE 1 ONLY, STAGE 2 ONLY WITH STOCHASTIC VALUE, STAGE 2 ONLY WITH FIXED VALUE, AND THE OVERALL NETMASQUERADE. Target Systems NetBeacon CICFlowMeter + MLP Methods S1 S2-S S2-F NetM. S1 S2-S S2-F NetM. OS Scan 0.940 0.990 0.990 0.999 0.479 0.990 0.999 0.999 Fuzzing 0.538 0.936 0.996 0.999 0.063 0.004 0.009 0.974 SSDP Flood 0.001 0.434 0.481 0.999 0.488 0.557 0.639 0.999 SYN DoS 0.058 0.325 0.545 0.999 0.508 0.011 0.754 0.996 Mirai 0.915 0.895 0.915 0.990 0.488 0.856 0.938 0.999 Zeus 0.355 0.508 0.508 0.945 0.347 0.813 0.891 0.987 Storm 0.201 0.233 0.239 0.997 0.647 0.797 0.817 0.890 Waledac 0.750 0.727 0.924 0.999 0.123 0.783 0.880 0.981 the average value of the same flow feature (denoted as S2-F). Due to the variation in sending patterns across different flow segments, this strategy is limited in some cases. As Table 3 illustrates, the attack success rate for these two strategies decrease by an average of 37.4% and 26.8%, and neither strategy is effective against high-speed traffic. This outcome underscores how merely relying on an average-value approach or a random approach to features cannot capture dynamic and peak-driven traffic patterns—an issue that becomes even more pronounced in fine-tuning scenarios such as high-speed traffic. Traffic-BERT, on the other hand, guides the RL training process by offering stable and effective benign-pattern per- turbations. Although more complex candidate Mask-Fill rules could be considered, these rules can only be applied during stage 2, which would exponentially expand the action space and lead to an action space explosion. 18
A Hard-Label Black-Box Evasion Attack against ML-based Malicious Traffic Detection Systems Zixuan Liu∗, Yi Zhao†B, Zhuotao Liu∗‡, Qi Li∗‡, Chuanpu Fu∗, Guangmeng Zhou∗, Ke Xu∗‡B ∗Tsinghua University, †Beijing ‡Zhongguancun Lab Abstract-Machine Learning (ML)-based malicious traffic detection is a promising security paradigm. It outperforms rulebased traditional detection by identifying various advanced attacks. However, the robustness of these ML models is largely unexplored, thereby allowing attackers to craft adversarial traffic examples that evade detection. Existing evasion attacks typically rely on overly restrictive conditions (e.g., encrypted protocols, Tor, or specialized setups), or require detailed prior knowledge of the target (e.g., training data and model parameters), which is impractical in realistic black-box scenarios. The feasibility of a hard-label black-box evasion attack (i.e., applicable across diverse tasks and protocols without internal target insights) thus remains an open challenge. To this end, we develop NetMasquerade, which leverages reinforcement learning (RL) to manipulate attack flows to mimic benign traffic and evade detection. Specifically, we establish a tailored pre-trained model called Traffic-BERT, utilizing a network-specialized tokenizer and an attention mechanism to extract diverse benign traffic patterns. Subsequently, we integrate Traffic-BERT into the RL framework, allowing NetMasquerade to effectively manipulate malicious packet sequences based on benign traffic patterns with minimal modifications. Experimental results demonstrate that NetMasquerade enables both brute-force and stealthy attacks to evade 6 existing detection methods under 80 attack scenarios, achieving over 96.65% attack success rate. Notably, it can evade the methods that are either empirically or certifiably robust against existing evasion attacks. Finally, NetMasquerade achieves low-latency adversarial traffic generation, demonstrating its practicality in real-world scenarios. I. INTRODUCTION Machine learning (ML)-based malicious traffic detection systems identify attack behaviors by learning the features of traffic [28], [62], [30]. As an emerging security paradigm, it is promising for identifying multiple sophisticated attacks [51], [53], [49] and thus outperforms traditional rule-based detection [98], [39], [93] in both effectiveness [20], [4] and efficiency [104], [7]. Currently, ML-based systems are deployed to complement the traditional systems due to their ability to detect unknown [23] and encrypted [4], [89] attack traffic. Unfortunately, as in many other ML-based domains [86], [66], [32], robustness issues are prevalent in ML-based traffic detection systems [84], [5], [47]. That is, attackers can craft adversarial traffic by adding, delaying, or otherwise modifying packets [37], [96], causing detection models to misclassify these deceptive flows as benign. The research community has put forward a range of advanced evasion methods (see Table I), yet many of these methods operate under narrowly defined conditions, such as leveraging encrypted protocols [65], [58], [82], tunnels [65], Tor networks [50], or circumventing specialized third-party censorship systems [58]. The effectiveness of these protocol-related and task-related approaches drops significantly when the attack environment changes. Moreover, some solutions rely heavily on prior knowledge about the target or dataset, requiring full (white-box) [82], [65], [96] or partial (gray-box) [65], [37] access, which is impractical in a more realistic black-box setting. To bridge these gaps, we aim to design a black-box adversarial attack targeting widely used ML-based traffic detection systems that rely on statistical patterns [28], [62], [104], [7]. In particular, the attack must be protocol-agnostic and taskagnostic, allowing it to be seamlessly applied to any malicious traffic, regardless of whether it is encrypted, tunneled, or otherwise constrained. Moreover, the attacker can generate adversarial malicious traffic with minimal modifications, relying solely on whether the target system drops malicious packets (i.e., hard-label attack [94], [88]). In contrast to feature-space attacks [57], [3], i.e., impractical settings that require attackers to interfere with ML execution, our traffic modifications must preserve the effectiveness of the attacks [82], [37]. This paper presents NetMasquerade, a hard-label blackbox evasion attack, which utilizes deep reinforcement learning (RL) to transform malicious traffic into adversarial examples by mimicking benign traffic patterns. At its core, we propose Traffic-BERT, a tailored pre-trained model for capturing diverse and complex benign traffic distributions. Subsequently, we develop an RL framework that decides the location and type of packet modification step-by-step, leveraging TrafficBERT's embedded knowledge of benign behaviors. The only feedback required for the RL training process is the blockedor-not signal from the targeted detection system. The detailed design ensures that NetMasquerade achieves minimal, yet effective, modifications across diverse traffic types and detection models, thereby evading detection systems under black-box conditions. We address two main challenges in constructing effective adversarial traffic. First, we must capture rich benign traffic patterns in order to mimic them convincingly. To address this, we study the distribution of Internet packet patterns, then pad and chunk traffic from public datasets using an optimal setup to improve diversification. Afterwards, we pre-process the traffic with network-specific tokenizers. Finally, we extract dependencies among the tokens with a novel attention block in TrafficBERT, providing a robust representation of benign traffic across various protocols and scenarios. Network and Distributed System Security (NDSS) Symposium 2026 23 - 27 February 2026 , San Diego, CA, USA ISBN 979-8-9919276-8-0 https://dx.doi.org/10.14722/ndss.2026.240916 www.ndss-symposium.org 16 Oct 2025 TABLE I COMPARISON OF EXISTING EVASION ATTACKS AGAINST TRAFFIC ANALYSIS SYSTEMS. Scenarios Evasion Techniques Attack Applicability Without Prior Knowledge Attack Performance Protocol-agnostic Task-agnostic Datasets Features Model Low Overhead Low Latency White-box Gradient Analysis [82], [65] ✗ ✓ ✗ ✗ ✗ ✗ ✓ Optimization [96] ✗ ✓ ✗ ✗ ✗ ✗ ✓ Gray-box Sample Transferability [65] ✗ ✗ ✗ ✗ ✓ ✗ ✓ Feature Manipulation [37] ✓ ✓ ✓ ✗ ✓ ✗ ✗ Black-box Packet reassembly [58] ✗ ✗ ✓ ✓ ✓ ✗ ✗ Traffic Mimicking (Ours) ✓ ✓ ✓ ✓ ✓ ✓ ✓ Second, the RL model for generating optimal evasion policies must maintain both low training overhead and low online inference latency. To this end, we formulate the traffic modifications as a Finite Horizon Markov Decision Process, enabling a multi-step decision strategy with explicit incentives for minimal and targeted modifications, effectively reducing inference costs. Meanwhile, we utilize a lightweight policy network and ensure rapid convergence by leveraging alreadylearned benign distributions in Traffic-BERT, significantly reducing training overhead. In addition, we introduce an effectiveness penalty, which safeguards the malicious functionality of the attack. We prototype NetMasquerade1 with Pytorch [74] and Intel DPDK [45]. Experiments demonstrate that NetMasquerade enables both high-rate and low-rate attacks to evade 6 topperforming detection systems in 80 attack scenarios, achieving over 96.65% attack success rate (ASR). Note that NetMasquerade can evade the methods that are either empirically [28] or certifiably [95] robust against existing evasion attacks. Compared with other attacks [28], [37], NetMasquerade applies minimal modifications to no more than 10 steps in all test scenarios, thus incurring little impact, e.g., a Kullback-Leibler (KL) divergence of 0.013 between the original and modified bandwidth distributions. Moreover, the evasion attacks can be constructed in time, i.e., NetMasquerade can transform 4.239K adversarial packets per second. In general, our studies reveal that the robustness issue of traffic detection remains unaddressed, which emphasizes the necessity of enhancing robustness against advanced evasion attacks. The contributions of this paper are three-fold: • We develop a hard-label black-box evasion attack that operates across diverse traffic types and targets widely used ML-based detection systems relying on statistical patterns, leveraging deep reinforcement learning to manipulate attack packets for efficient benign-traffic mimicry. • We design a tailored pre-trained model, Traffic-BERT, to capture diverse and complex benign traffic patterns, equipping NetMasquerade with the ability to transform malicious flows into benign-like traffic across a wide range of protocols and tasks. 1Source code: https://github.com/09nat/NetMasquerade Fig. 1. Network topology. • We establish an RL-based framework that transforms original attack traffic into adversarial examples with minimal modifications, and experimentally validate that NetMasquerade can generate various attack traffic to evade multiple state-of-the-art detection systems with small overhead. II. THREAT MODEL AND ASSUMPTIONS Threat Model. We assume an external adversary who instructs botnets to deliver malicious traffic (e.g., high-speed attack and stealthy malware behaviors) toward victim hosts [89]. The attack flows originate on the public Internet and must traverse an in-line, ML-based traffic detection system deployed at the victim's ingress link, as shown in Figure 1. The detection system operates on certain flow- or packet-level statistical patterns (e.g., sizes, delays) for traffic classification and does not inspect the payload [28], [30], [62], [7]. Such patternbased models are increasingly popular as they are effective on encrypted traffic. Meanwhile, the system forwards benign traffic and drops or rate-limits malicious traffic. This behavior is consistent with both existing academic proposals [102] and default configurations of open-source in-line detectors [68], which prioritize network availability by choosing low-latency actions like packet dropping over more aggressive responses (e.g., blocking entire source IPs, which could be a NAT gateway serving many legitimate users) that may cause significant collateral damage. To evade detection, the attacker craft each malicious flow to construct adversarial traffic that misleads detection systems into classifying them as benign, while retaining the flow's original intent. Assumptions. The attacker cannot obtain any details of the target detection systems, such as ML models, parameters, 2 Craft Packets Fixed Length Flow Chunking Adversarial Traffic Generation Malicious Traffic Light-Weight Policy Network Target Detection System Adversarial Feature Sequence Benign Traffic Pattern Mimicking Enormous Public Benign Traffic Fixed Length Features Training Train Traffic-BERT Encoder Block Encoder Block N × Self Attention Bi-Cross Attention Self Attention Feed&Forward Feed&Forward ... &Tokenizer Discretization Padding Feature Extraction Adversarial Malicious Traffic Encoding 1 n+1 n 2n ... n 1 ... ... ... M M Add Perturbations 1.1 1.2 Select & Mask Encoder Block M M M M Send Probe Traffic Get Feedback (Reward r!) r= rE+ rD+ rM Update Policy ttimes 2.1 RL-guided Perturbation 2.2 Probing & Feedback Reward ... Fig. 2. High-Level Design of NetMasquerade. feature extractors and training datasets. That means, the attacker treats target systems as a strict (i.e., hard-label) blackbox, which differs from traditional white-box and gray-box attacks that either need to access ML models (e.g., obtain gradients) [96] or datasets [65] for better transferability. This black-box setting meets the real-world scenarios, as most traffic detection systems are closed-source software [12] or outsourced cloud services [16], [1], effectively preventing attackers from obtaining any information. However, the attacker can conduct a reconnaissance phase to gather pass/fail feedback from the target detection system. Specifically, the attacker sends probe traffic to remote hosts behind the target detection systems. The probe traffic should exactly mirror the malicious traffic's intended traffic pattern (i.e., matching packet sizes and inter-packet delays) without embedding the original malicious payload (i.e., the attacker can freely embed payloads of the same size). For TCP flows, any return packet (e.g., RST, ACK, SYN-ACK) from the destination [97] signals that the probe has traversed the IDS, whereas the complete absence of a reply within the retransmission window indicates the blockage [24]. For UDP flows, the attacker could employ a stateful application-layer protocol (e.g., QUIC [46]) to induce a response. If the destination port is closed, the attacker would typically observe ICMP Unreachable when the traffic successfully passes the detection [73], [80]. However, no such message will be received if the flow is blocked. By checking for a response within a fixed timeout, the attacker assigns a pass/fail label to each probe. Besides, side-channel techniques (e.g., hop-limited TTL [18], [48], IPID [26], [75]) can further reveal whether the traffic reaches the destination. Overall, this binary indicator (i.e., hard-label) enables the attacker to refine subsequent adversarial traffic generation. In addition, the attacker can access benign traffic from public datasets [99], which differ from those used to train the traffic detection model. We supplement additional discussions regarding our assumptions in § VI. III. THE OVERVIEW OF NETMASQUERADE A. Key Observation We begin by noting that modern traffic detection [1], [12], [16] operates as a strict (i.e., hard-label) black-box system from the perspective of typical malicious attackers. All model implementations and training details are concealed, which significantly degrades the performance of existing whitebox [65], [82] and gray-box [37], [65] methods. In contrast, such systems often drop or throttle [102], [100], [29] malicious traffic while allowing benign traffic to pass. This behavioral asymmetry inherently yields a feedback signal, motivating us to interactively probe the detector's decision boundary and steer malicious traffic within that boundary. We accomplish this process with reinforcement learning. However, Internet traffic spans a wide variety of protocols and use cases. A naive design can degrade RL performance (see § IX-D), while task-specific attacks fail in heterogeneous environments [58]. Fortunately, existing studies show that benign traffic distributions tend to be denser, whereas malicious traffic is often more sparse [76]. This density gap encourages us to morph the malicious flow so that its features migrate toward the benign manifold while preserving attack semantics. To achieve this, we develop an effective semantic model (i.e., Traffic-BERT) to capture benign traffic patterns, thereby guiding RL training through its learned representations. Finally, we introduce a two-stage divide-and-conquer training framework, along with several additional mechanisms, to significantly 3 reduce the overhead introduced by integrating Traffic-BERT with the RL process while preserving the effectiveness of the generated adversarial traffic. B. High-Level Architecture Figure 2 shows the two stages of NetMasquerade, a blackbox evasion attack method. In the first stage, it captures the benign traffic patterns. In the second stage, it generates adversarial traffic based on the traffic patterns. Benign Traffic Pattern Mimicking. In this stage, we focus on comprehensively modeling benign traffic distributions to provide a solid foundation for subsequent adversarial flow generation. To this end, we propose Traffic-BERT, a variant of BERT [19], capable of processing multiple feature sequence inputs and outputs. Specifically, we first extract basic flow features (i.e., packet size sequence and inter-packet delay sequence). Next, we introduce dedicated feature extraction and embedding schemes to reconcile the gap between continuous, variable-length traffic data and the fixed-length, discrete input format typically required by Traffic-BERT. Building on these enriched representations, we propose a cross-feature bidirectional attention mechanism to simultaneously capture global dependencies within each individual feature sequence and across heterogeneous feature modalities. By training TrafficBERT under a Mask-Fill task, we enable it to learn deep bidirectional dependencies and acquire the capability to contextually complete fine-grained benign features. The trained Traffic-BERT can be directly used in Adversarial Traffic Generation to guide the RL optimization process. We will detail the Benign Traffic Pattern Mimicking in § IV. Adversarial Traffic Generation. In this stage, our goal is to embed the pattern of benign traffic into malicious traffic with minimal modifications while preserving attack semantics and domain constraints. We model this as a standard Markov Decision Process (MDP), and employ deep RL to address complex sequential decision-making. Specifically, NetMasquerade utilizes the Gated Recurrent Units (GRUs) [11], a lightweight neural network, as the policy network and the state-value networks (a.k.a. Q-Networks). This design significantly reduces training time and inference latency while still effectively capturing temporal flow features. By learning an optimal policy to select packet-level feature positions for masking, NetMasquerade leverages Traffic-BERT to fill the masked tokens with benign traffic patterns. The resulting adversarial flow is used to probe the target system. The response provides the core feedback, which we integrate with two novel penalty terms to form a comprehensive reward signal: a dissimilarity penalty, which ensures that the final adversarial flow remains close to the original malicious flow while also reducing the required inference steps, and an effectiveness penalty, which retains the underlying attack function. This complete reward signal then guides the optimization of the policy network using the Soft Actor-Critic (SAC) [34] algorithm. We will detail the Adversarial Traffic Generation in § V. Two-Stage Framework Advantages. In many RL applications, models are initialized from expert demonstrations via behavior learning and then deployed as policy networks in downstream tasks [41], [92]. However, this Pretraining-Finetuning framework is not suitable for the traffic domain because it introduces significant overhead. By contrast, our design cleanly decouples benign traffic modeling (Stage 1) from adversarial RL optimization (Stage 2). Traffic-BERT learns high-fidelity benign traffic embeddings without entangling with the RL process, avoiding repeated large-scale retraining. Meanwhile, the lightweight policy network incrementally references the embeddings to weave benign patterns into malicious flows, preserving both the efficiency and the effectiveness of the generated adversarial traffic. IV. BENIGN TRAFFIC PATTERN MIMICKING A. Feature Extraction The feature extraction module encodes network traffic into Traffic-BERT's token sequences. Although various traffic studies have explored related encoding strategies in other contexts [29], [56], [76], [103], they are not directly applicable to Traffic-BERT for two reasons. First, as a language model, Traffic-BERT demands fixed-length inputs, while real-world flows vary widely in size and duration. It is essential to set a base input length that accommodates the majority of flows and provide a mechanism to capture extended flows without information loss. Second, Traffic-BERT requires tokens to reside in a uniformly discrete space, whereas raw network features (e.g., inter-packet delays) may be highly skewed or continuous. We overcome these issues and characterize flows based on statistical insights, as detailed below. Flow Classification. Traffic-BERT takes sequences of fixed length as input. Typically, the input length is a power of 2, and we assume it to be n, where n = 2k. Figure 3(a) shows the probability density function (PDF) and cumulative distribution (CDF) for flow lengths in the MAWI internet traffic dataset (Jun. 2023) [99]. We randomly sample over 1e7 flows to plot the figure. Clearly, the distribution of flow lengths exhibits a long-tail pattern, with short flows dominating the majority. We obtain the 99th percentile from the cumulative distribution and select the closest n as our hyperparameter for fixed length. Nonetheless, studies of flow distributions [25] indicate that long flows hold most of the packets. Figure 3(b) shows the bytes retained for different fixed truncations (i.e., n), which can be approximately considered as information entropy, as a proportion of the total bytes of the flow. We randomly sample one week and analyze the complete flow data daily, finding that the information entropy ratios for common values of n do not exceed 0.27. To address this, we apply two complementary strategies: • Short Flow Padding. If m ≤n, we append n-m special padding tokens (i.e., [PAD]) to the end of its feature sequence. • Long Flow Chunking. If m > n, we divide its feature sequence into m -n + 1 segments, with each segment's index with a range of [i, i + n), 0 ≤i ≤m -n. Feature Representation. Having standardized flow lengths via padding and chunking, we next convert these sequences 4 99th percentile CDF PDF Probability Flow length [log 10] (a) Flow length distribution. Mon. Sun. Sat. Fri. Thu. Wed. Tues. n = 1024 n = 256 n = 512 n = 128 (b) Flow entropy ratios. Fig. 3. Flow length distribution and entropy ratios. into discrete tokens that Traffic-BERT can ingest. Specifically, we focus on two per-packet attributes: packet sizes and interpacket delays (IPDs). To optimize this tokenization process, we study the distribution of packet sizes and IPDs in benign internet traffic. For the IPD feature, we observe that after taking the base-10 logarithm, the data exhibits a more uniform distribution across magnitudes. We randomly sample over 80 million packets for our analysis and plot these in Figure 4(a). The analysis shows that frequencies in the range of [-6, -2] (corresponding to 1e-6 to 1e-2 seconds) consistently range between 1.0e7 and 1.7e7, while the total for other magnitudes falls below 1e7. Based on this, we set several logarithmically equal-length intervals and hash the IPDs into the intervals, using the interval indices as the corresponding tokens. We adjust the interval lengths to balance the count of elements within each. The packet sizes exhibit a bimodal distribution: predominantly very short or very long (due to fragmentation), with a more uniform distribution in between, as shown in Figure 4(b). Due to its discrete nature, we directly use its value as the token. We use a standard Maximum Transmission Unit (MTU) length as the capacity for the packet size vocabulary. This is because we manipulate the traffic on a per-packet basis, making it impossible to generate a single packet that exceeds the MTU. We categorize features significantly exceeding the MTU under a single class and represent them with the [UNK] token. The token vocabularies for packet sizes and IPDs are independent. Moreover, we add special tokens [PAD] and [MASK] to the vocabulary as placeholders and for masking certain tokens, respectively. We represent each token by two embeddings: token embedding and position embedding. The complete embedding is constructed by summing both of them together. • Token Embedding. A high-dimensional vector representing the token, which is randomly initialized and trained jointly with Traffic-BERT. • Position Embedding. A vector annotating the token's relative position in the sequence using sinusoidal functions, similar to Transformer [91]. For chunked features, apart from the first segment, the indices of other segments do not start from 0. This helps the model learn long flow representations more effectively. B. Traffic-BERT Although transformer-based architectures have been applied to network traffic modeling [56], [103], most existing Inter-packet delays [log 10] Frequency 0.5 1.0 1.5 2.0 2.5 -8 -7 -6 -5 -4 -3 -2 -1 0 1 2 3 ×1e7 Scale [1.0, 1.7]×1e7 (a) IPD distribution. 0 1 2 3 4 5 7 6 0 0.2 0.4 0.6 0.8 1.0 1.2 1.4 MTU+ Frequency Packet Size [×1000] log10 Scale (b) Packet size distribution.1 1 Size exceeds MTU, due to misconfigurations/specialized protocols. Fig. 4. Packet size and IPD distribution. BERT-like models are constrained to handling only a single sequence of discrete tokens [19], [59]. In contrast, benign network flows typically involve multiple feature sequences, whose interactions are crucial for capturing real-world traffic patterns. Therefore, the first challenge lies in how to effectively model these interwoven modal features without increasing computational overhead. The second challenge is how to define the Traffic-BERT's training task that enables the model to learn representations of these features directly applicable to adversarial traffic generation, without additional training costs. To address these challenges, we propose a novel bi-cross attention mechanism for efficient fusion of multi-feature data and design a Mask-Fill task to guide Traffic-BERT in acquiring the ability to fill benign traffic packets based on flow context. Traffic-BERT Structure. The core innovation of TrafficBERT is the introduction of a bi-cross attention layer within a stack of bidirectional encoder blocks, which is shown in Figure 5. Each block comprises three principal components: (i) self-attention, (ii) bi-cross attention, and (iii) a feed-forward network (FFN), with residual connections linking these layers. Generally, the attention mechanism [91] can be represented as: Attn.(Q, K, V ) = softmax QKT √dk V, (1) where Q, K, and V represent the Query, Key, and Value, respectively. They are three sets of linear transformations derived from the packet size and inter-packet delay features. The parameter √dk represents the dimension of the Key. Each encoder in Traffic-BERT takes size features P and IPD features H as input, which are first processed by the selfattention layer to yield respective hidden states hP and hH. Formally: hP = P + Attn.(QP , KP , VP ), hH = H + Attn.(QH, KH, VH). (2) Self-Attention is characterized by deriving both the Query and Key from the same sequence, thereby computing the relative importance of each element with respect to all other elements in that sequence. Next, hP and hH are fed into the bi-cross attention layer to compute the interrelated attention features, that is, using one feature sequence to query another feature sequence, and vice versa. This can be formulated as: h′ P = hP + Attn.(QhP , KH, VH), h′ H = hH + Attn.(QhH, KP , VP ). (3) 5 Encoder Block Multi-Head Self Attention Multi-Head Self Attention Packet Size Feat. P IPD Feat. H Multi-Head Bi-Cross Attention Add & Norm Add & Norm Add & Norm Add & Norm Feed & Forward Feed & Forward Input Output Encoder Block Encoder Block Encoder Block ... N × V! K! Q"! K# V# Q"" Add & Norm Add & Norm h′! h′" P′ for next stage H′ for next stage Fig. 5. Core design of Traffic-BERT. Unlike self-attention, bi-cross attention uses the hidden states hP and hH as the Query to compute similarity with the other sequence's output from the previous block, and assigns attention weights to the other's Value. The bi-cross attention shares the same complexity O(n2dk) with self-attention, providing an efficient solution for multi-feature sequences from distinct feature spaces. It enables the model to better capture the long-term interactions and dependencies between different feature sequences in network traffic, significantly enhancing the semantic understanding of the benign flow. The outputs from the bi-cross attention layer are then passed through the feed-forward network layer, serving as the input for the next encoder block. The output from the last encoder block is then passed through a linear layer to obtain the probability distribution of the output tokens. Traffic-BERT Optimization. We train Traffic-BERT with a Mask-Fill task that generalizes RoBERTa's Dynamic Masking [59] to handle multiple correlated feature sequences. In each training step, we select 15% of token positions for masking. When a position is chosen, tokens in both sequences are masked simultaneously: 80% are replaced with [MASK], 10% with a random token, and 10% remain unchanged. This dual-sequence masking scheme not only compels TrafficBERT to master deep bidirectional semantics within individual feature sequences, but also reinforces the cross-feature interactions introduced by our bi-cross attention. The trained Traffic-BERT can reconstruct realistic per-packet attributes from partial observations and thus can be directly applied to the second stage in § V. In a few high-speed traffic generation cases, we optionally apply the Constrained Decoding mechanism [71]. This mechanism restricts the model's output to a predefined range, ensuring that only tokens within this range are considered. However, in most cases, such explicit constraints are unnecessary because the Mask-Fill task itself already biases TrafficBERT toward valid, realistic traffic patterns. V. ADVERSARIAL TRAFFIC GENERATION In this section, we present the technical details of generating adversarial traffic based on deep RL, focusing on its formulation, optimization, and runtime inference. NetMasquerade addresses four main challenges: 1) Efficient Learning from Limited Feedback. We carefully design the state space (§ V-A), and leverage Traffic-BERT to guide RL (§ V-B), to achieve efficient exploration under black-box settings. 2) Preserving Malicious Functionality. We adopt a tailored action space and an effectiveness penalty (§ V-A) to maintain malicious behavior within domain constraints. 3) Reducing Training & Inference Overhead. We introduce a dissimilarity penalty (§ V-A) and employ a two-stage decoupled training scheme (§ V-B) to minimize overhead while ensuring stability. 4) Enabling Attacks Without Feedback. We propose an offline reward estimation mechanism (§ V-C) that supports real-world adversarial generation when direct feedback is unavailable during inference. A. MDP Formulation NetMasquerade aims to generate effective adversarial malicious traffic that evades black-box detection systems with minimal modifications. To achieve this, we perturb only two domain-agnostic features learned by Traffic-BERT from benign traffic: packet sizes and inter-packet delays. We incorporate specific penalty terms to maintain malicious effectiveness and adhere to domain constraints. This leads to the following definition of our adversarial traffic generation process. Definition 1 (Adversarial Traffic Generation). Given an MLbased malicious traffic detection system f ∈F and an instance x from the malicious traffic space X, the objective of the adversarial traffic generator G(·, ·) : F × X →X can be defined as: argmax ̃x I(f( ̃x) ̸= f(x)) s.t. ̃x = G(f, x), D( ̃x, x) ≤τ, M( ̃x, x, X) = 1, (4) where D(·, ·) : X ×X →R+ is a distance metric that measures the dissimilarity between the adversarial traffic and the original malicious traffic, ensuring that the perturbed instance ̃x is close to the original instance x within the threshold τ, and M(·, ·, X) : X × X →{0, 1} is an equivalence indicator that indicates whether the perturbed instance ̃x is equivalent in effectiveness to the original malicious instance. Note that this optimization problem is difficult to address directly because the objective and the malicious equivalence constraint M are binary and non-differentiable functions, and the distance metric D depends on the way adversarial traffic ̃x is generated. However, we can overcome these challenges by leveraging RL. To do this, we model the attack procedure as a Finite Horizon Markov Decision Process MDP = (S, A, P, R, T ). The definition of such an MDP is as follows: State Space (S): The state st ∈S at time t is represented by the tuple (Pt, Ht), where Pt = [p0,t, p1,t, . . . , pn,t] represents the sequence of packet sizes at time t and Ht = 6 [h0,t, h1,t, . . . , hn,t] represents the sequence of inter-packet delays at time t. The initial state s0 = (P0, H0) represents the features extracted from the original malicious traffic. Action Space (A): The attacker is allowed to modify the features of a single packet or insert a chaff packet (i.e., an intentionally injected, non-functional packet) into the flow per step. Therefore, for each feature sequence within the state st of length n, the size of the action space is 2n + 1, where each action at ∈A represents the index of the modification or insertion. Specifically, when at is odd, it indicates the attacker's intention to modify the element at position ⌊at/2⌋ in each sequence; when at is even, it indicates the intention to add a new element at position at/2 in each sequence. NetMasquerade does not perturb existing payloads to maintain traffic functionality and avoid introducing detectable artifacts. Transition Probabilities (P): Our environment is deterministic. That is, for any given state st and action at, the state transition probability P can be represented as: P(st+1|st, at) = ( 1 if st+1 = Trans(st, at), 0 otherwise. (5) Reward Function (R): The reward function is a composite of three distinct components, formalized as r(st, at) = rE(st, at) + β · rD(st, at) + γ · rM(st, at), (6) where β and γ are non-negative hyperparameters. The term rE aligns with the optimization objective in (4), which can be defined as: rE(st, at) = Nevade(st+1) -Nevade(st) Ntotal , (7) where Nevade(·) : S →Z+ 0 represents the number of non-chaff packets that evade the target, and Ntotal = n represents the total number of packets in the original malicious traffic. The term rD is the dissimilarity penalty which is derived from the distance metric D(·, ·). In our scenario, we use the Edit Distance between the current state st and the previous state st-1. Note that since the attacker modifies or inserts exactly one packet at each step, we have: rD(st, at) = -1. (8) rD serves to minimize the distance between the adversarial and original malicious traffic, ensuring that NetMasquerade achieves its adversarial objectives in as few steps as possible, as each additional step incurs a non-positive reward. In general, this design preserves the stealthiness of adversarial traffic and reduces the probability that the perturbations are detected. On the other hand, NetMasquerade achieves its goals with fewer modifications, thereby accelerating inference speed. The term rM is the effectiveness penalty that depends on the specific intent of the attack. For instance, for DoS traffic, rM can be defined as the rate of the traffic flow. Conversely, for maliciousness that stems from the payload, such as phishing traffic, rM can be set to zero, as our adversarial traffic generation process does not impact the payload. Horizon (T ): The process terminates in either of two situations: first, in the step t = τ, which is consistent with the constraint D( ̃x, x) ≤τ in (1) as a measure of the maximum permissible distance between the adversarial and original malicious traffic; second, when the reward rE(st, at) > ξ, indicating a successful evasion ratio greater than the threshold ξ. This dual-condition criterion guarantees a bounded process. B. Policy Optimization Algorithm 1 shows the process of training NetMasquerade. Given the MDP setup as defined in § V-A, a sampled MDP trajectory will be (s0, a0, r0, s1, a1, r1, . . . , s ̃t, a ̃t, r ̃t), where ̃t ≤τ. To handle the problem's large discrete action space, we employ the Soft Actor-Critic (SAC) algorithm for optimization, which is well known for strong exploration capabilities. The SAC algorithm is an off-policy maximum entropy RL method, aiming to maximize the balance between expected return and entropy, where the entropy signifies the randomness of the policy. Its objective function is given by: π∗= arg max π Eπ "X t ηt (r(st, at) + αH(π(·|st))) # , (9) where π is the stochastic policy to be optimized, α is the temperature hyperparameter that controls the trade-off between exploration and exploitation, η represents the discount factor, and the entropy of the policy H(π(·|st)) is defined as the expected value of the negative log-probability of the actions taken according to the policy: H(π(·|st)) = Eat∼π(·|st)[-log π(at|st)]. (10) Given the optimization objective, we build a policy network to approximate the optimal policy π∗, for which we employ Gated Recurrent Units (GRUs) as the backbone. We choose GRUs for two reasons: on the one hand, as a classical type of Recurrent Neural Network (RNN), GRUs are capable of understanding the semantics within the traffic feature sequences; on the other hand, compared to the more computationally demanding Traffic-BERT, GRUs offer a balance between complexity and performance, enhancing the training efficiency of the reinforcement learning model. In each step t, the policy network takes as input the concatenated feature sequences of packet sizes and IPDs at state st, and outputs a distribution over the action space A, as detailed in § V-A. An action at is then sampled from this distribution. Notably, when at is odd, the ⌊at/2⌋-th element of the inter-packet delay sequence of state st is replaced with a [MASK] token, indicating the attacker's intent to modify the transmission timestamp of the packet at that position. Consequently, the state st is transformed into s′ t ≜(P ′ t, H′ t) = ([p0,t, p1,t, . . . , pn,t], [h0,t, h1,t, . . . , h⌊at/2⌋-1,t, [MASK], h⌊at/2⌋+1,t, . . . , hn,t] . (11) In this context, the packet size sequence remains unaltered as changes to packet sizes might violate the domain constraints 7 Algorithm 1 NetMasquerade Training Process 1: Initialize policy network πφ(a|s), Q-networks Qω1(s, a), Qω2(s, a), and experience replay buffer B 2: for each iteration do 3: Sample a malicious flow and get initial state s0 4: for each environment step t do 5: Observe state st and select action at ∼πφ(·|st) based on the current policy 6: Modify st by inserting [MASK] or replacing features with [MASK] to produce s′ t 7: Use Traffic-BERT for Mask-Fill task to fill [MASK], obtaining st+1 8: Restore st+1 to adversarial malicious traffic 9: Send the adversarial malicious traffic and compute reward rt = rE + β · rD + γ · rM 10: Store transition tuple (st, at, rt, st+1) in B 11: if |B| exceeds minimum replay buffer size then 12: Sample mini-batch {s ̄t, a ̄t, r ̄t, s ̄t+1} from B 13: Compute target value for each Q-network: y ̄t = r ̄t+η mini=1,2 Q ̄ωi(s ̄t+1, π(·|s ̄t+1))-α log π(a ̄t|s ̄t) 14: Update Q-networks: ωi ←ωi -λQ∇ωi P(Qωi(s ̄t, a ̄t) -y ̄t)2 15: Update policy network: φ ←φ -λπ∇φ P (α log(πφ(a ̄t|s ̄t)) -Qωi(s ̄t, a ̄t)) 16: Update target Q-networks: Q ̄ωi ←λQωi + (1 -λ)Q ̄ωi 17: Update entropy temperature α: α ←α -λα∇α P (-α log(π(a ̄t|s ̄t)) -αH0) 18: end if 19: end for 20: end for of the packet. Correspondingly, when at is even, a [MASK] token is inserted at the at/2 position of both feature sequences, indicating the attacker's intention to insert a chaff packet at that position. In this case, the state st is transformed into s′ t ≜(P ′ t, H′ t) = [p0,t, p1,t, . . . , pat/2-1,t, [MASK], pat/2,t, . . . , pn,t], [h0,t, h1,t, . . . , hat/2-1,t, [MASK], hat/2,t, . . . , hn,t] . (12) Considering that the fixed length n may exceed the actual length of a flow, not all actions are feasible. To address this issue, we employ an Invalid Action Masking mechanism [44], adjusting the probabilities of infeasible actions to a large negative value, and then re-normalizing the probability distribution to ensure the effectiveness of the chosen actions. Once s′ t is obtained, the attacker leverages Traffic-BERT in the Mask-Fill task to embed benign traffic patterns, thereby deriving the next state st+1. During this process, Traffic-BERT's parameters are fixed and considered a part of the environment. This decoupling of training significantly improves training efficiency. According to § II, the attacker can conduct a reconnaissance phase to gather pass/fail feedback, i.e., by observing whether there is any response to the attack traffic. Based on this, the attacker restores the adversarial flow from st+1 and sends it to the target. The process is straightforward as NetMasquerade only modifies timestamps or inserts new chaff packets. These chaff packets share the same source/destination IPs and ports as the malicious flow. Their payloads are randomly populated to match the packet size features. Following prior work [40], [37], for example, we use incorrect sequence numbers for TCP, set a short TTL for UDP packets, or send orphan IP fragments for other protocols which are discarded after a reassembly timeout [72]. After the traffic is sent, the attacker calculates the reward r = rE + β · rD + γ · rM. Finally, the resulting transition (st, at, rt, st+1) is stored in the experience replay buffer B for further policy optimization. Given the optimization objective and the transitions, we model two state-action value functions (a.k.a., Q-networks) Qω1 and Qω2. The utilization of double Q-networks helps to mitigate the overestimation of action values. Following the Soft Bellman Equation [33], each Q-network can be updated by minimizing the following loss function: LQ(ω) = E(st,at,rt,st+1)∼B 1 2 Qω(st, at) -yt 2 , yt = rt + η( min j=1,2 Q ̄ωj(st+1, π(·|st+1)) -α log π(at|st)), (13) where Q ̄ω represents the target Q-network [63], which helps to smooth the learning updates. The target Q-networks are updated using Q-networks: Q ̄ωi ←λQωi + (1 -λ)Q ̄ωi. (14) The policy network optimization is achieved by minimizing the Kullback-Leibler Divergence from the exponential of the Q-network. This results in the following loss function: Lπ(θ) = Es∼B,a∼π α log(π(a|s)) -min j=1,2 Q ̄ωj(s, a) . (15) Also, following [35], we employ an automated entropy adjustment mechanism for the temperature parameter α: L(α) = Est∼B,at∼π(·|st)[-α log π(at|st) -αH0]. (16) C. Runtime Inference Algorithm 2 shows the inference process of NetMasquerade. Unlike the training phase, the attacker might not be able to receive reward feedback rE from the target detection system during the inference phase, which prevents direct evaluation of the termination time for adversarial traffic generation. To address this, we approximate the expected total reward for each action using the maximum value from the two Qnetworks. The termination condition for the inference phase of NetMasquerade is as follows: (t ≥τ) ∨ max i=1,2 Qωi(st, at) ≥ξ -β · rD -γ · rM , (17) 8 Algorithm 2 NetMasquerade Inference Process 1: Initialize policy network πφ(a|s) with trained parameters 2: Initialize Q-networks Qω1(s, a) and Qω2(s, a) with trained parameters 3: Set step t = 0 4: Transform the malicious flow into initial state s0 5: while t < τ do 6: Observe state st and select action at ∼πφ(·|st) based on the policy network 7: Modify st by inserting [MASK] or replacing features with [MASK] to produce s′ t 8: Use Traffic-BERT for Mask-Fill task to fill [MASK], obtaining st+1 9: Calculate Q-Values q1 ←Qω1(st, at), q2 ←Qω2(st, at) 10: if maxi=1,2 qi ≥ξ′ then 11: break ▷Termination condition is met 12: end if 13: t ←t + 1 14: end while 15: Restore st to the final adversarial malicious traffic where the threshold ξ is determined empirically from the training phase. Specifically, we monitor the attack success rate (ASR) during training. Once the ASR stabilizes at a high level, indicating a successfully trained agent, the corresponding Qvalue is recorded to serve as the threshold. In cases where the attacker cannot compute rM, the termination condition can be transformed into: (t ≥τ) ∨ max i=1,2 Qωi(st, at) ≥ξ′ . (18) VI. EVALUATION A. Experiment Setup Implementation. Traffic-BERT and the RL pipeline are written in Python v3.8.15 using PyTorch [74] v1.13.1. Each adversarial flow produced by NetMasquerade is delivered over a socket to an Intel DPDK [45] v24.11.1 worker that emits the actual packets. The DPDK process, written in C (compiled with GCC v9.4.0 -O3 via Meson v0.61.5 and Ninja v1.8.2), pre-allocates NUMA-aware mbuf pools, configures a single 1024-descriptor TX queue, and relies on TSC-based busywait pacing to preserve μs-level inter-packet spacing, thereby avoiding the NIC's internal burst-coalescing that would otherwise distort the on-wire delay. NetMasquerade runs on a Dell server equipped with two Intel Xeon Gold 6348 CPUs (2×28 cores, 112 threads) and a single NVIDIA Tesla A100 GPU (driver v530.30.02, CUDA [67] v12.1) under Ubuntu v18.04.6 (Linux 5.4.0-150-generic). The DPDK worker interfaces with an Intel 82599SE NIC (2 × 10 Gb/s SFP+ ports). All hyperparameters are listed in Table V in the Appendix. Datasets. We use real-world backbone network traffic traces from Samplepoint-F of the WIDE MAWI project [99], collected in June and August 2023, as background traffic. Following established practices [52], [78], we remove scanning traffic that attempts to connect to more than 10% of IP addresses and apply additional filtering rules [52] to eliminate flooding traffic. We then employ the resulting background traffic in two ways: (i) to train Traffic-BERT, using more than 1 million flows collected in June 2023, and (ii) to supplement the target system's training data with flows from August 2023 when the proportion of benign traffic in the malicious dataset is insufficient (Botnet Attacks, see Table III). Notably, this choice does not compromise the black-box setting, as there is no correlation between the distributions of the datasets. To closely mirror real-world scenarios and highlight NetMasquerade's task-agnostic capabilities, we replay 4 groups of attacks from multiple datasets, totaling 12 attacks: (i) Reconnaissance and Scanning Attacks, including host-scanning and fuzz-scanning traffic; (ii) Denial of Service Attacks, covering SSDP and TCP SYN flood traffic; (iii) Botnet Malwares, featuring 4 common botnet strains-Mirai, Zeus, Storm, and Waledac; and (iv) Encrypted Web Attacks, encompassing webshell, XSS, CSRF, and encrypted spam traffic. The details of the datasets can be found in the Appendix IX-A. Target Systems. We deliberately select as attack target 6 existing malicious traffic detection systems that reflect diverse designs. We use 3 advanced traditional ML-based detection systems: Whisper [28], FlowLens [7], NetBeacon [104] and 3 top-performing DL-based systems: Vanilla feature + RNNs, CICFlowMeter [22], [54] + MLP, Kitsune [62]. They operate using different learning approaches like supervised classification [104] and unsupervised anomaly detection [62], [28], at both the flow-level [7] and packet-level [62], and their implementations span both software [62], [28] and programmable switches [104], [7]. More information about malicious detection systems can be found in Appendix IX-B. Baselines. To validate NetMasquerade, we select 2 classic and 2 state-of-the-art attack methods as baselines: • Random Mutation. Random Mutation refers to the technique of obscuring traffic by randomly adjusting IPDs. This traditional method has been demonstrated to be powerful in several works [43], [85], and existing attack tools also employ this method [64], [8]. In our experiments, the randomization of time intervals follows a Gaussian distribution based on the malicious flow's mean and variance. The number of mutated packets matches NetMasquerade's modification steps. • Mutate-and-Inject. We combine Random Mutation and Packet Injection to create a comprehensive attack strategy, which has been used as a standard for evaluating the robustness of advanced detection systems [28]. For Random Mutation, we follow the same rules described above. For Packet Injection, we either inject chaff packets with random sizes and intervals into the malicious flow or duplicate segments of the malicious packets. The modification steps match those in NetMasquerade. • Traffic Manipulator [37]. Traffic Manipulator is the SOTA attack algorithm capable of generating practical adversarial traffic against malicious traffic detection systems in a graybox scenario. Traffic Manipulator learns adversarial traffic 9 TABLE II ATTACK SUCCESS RATE (ASR) OF NETMASQUERADE AND BASELINES ON DETECTION SYSTEMS Target System Methods Recon.&Scan. DoS Botnet Encrypted Web Attacks Overall Scan Fuzz. SSDP SYN Mirai Zeus Storm Waledac Webshell XSS CSRF Spam Traditional ML-based systems Whisper R.M. - 1 0.0100 - 0.2552 0.2324 0.1011 0.2289 0.0585 0.0812 0.0721 0.1717 0.0927 0.1087 M.I. 0.8907 0.0756 0.1132 0.3346 0.5521 0.5719 0.4590 0.4251 0.6802 0.7010 0.7259 0.7319 0.5218 T.M. 0.9344 0.9270 0.7712 0.2790 0.6355 0.2551 0.1820 0.3664 0.5839 0.5527 0.6055 0.9072 0.5833 Amoeba 0.9999 0.9934 0.9999 0.9998 0.9167 0.9254 0.9844 0.8970 0.9999 0.9999 0.9966 0.8381 0.9626 NetM. 0.9999 0.9965 0.9999 0.9467 0.9988 0.9972 0.9999 0.9355 0.9999 0.9999 0.9999 0.9795 0.9878 FlowLens R.M. - - 0.1782 0.7660 0.6893 0.0760 0.3846 0.0434 0.0100 - 0.0150 - 0.1802 M.I. 0.9800 0.1158 0.2375 0.5950 0.9370 0.4941 0.6510 0.3114 0.6391 0.5959 0.6633 0.1313 0.5293 T.M. 0.0222 0.1525 0.9344 0.9125 0.8591 0.2670 0.8374 0.2899 0.0760 0.0736 0.0036 0.3913 0.4016 Amoeba 0.9976 0.9442 0.9999 0.9990 0.8776 0.8665 0.9252 0.8000 0.9990 0.9999 0.9295 0.9700 0.9424 NetM. 0.9999 0.9335 0.9999 0.9995 0.9537 0.9102 0.9990 0.9955 0.9795 0.9999 0.9428 0.9475 0.9717 NetBeacon R.M. - - 0.5291 0.1823 0.2864 0.0230 - 0.0790 0.6294 0.3916 0.1066 0.1030 0.1942 M.I. 0.6511 - 0.2285 0.2841 0.5544 0.3455 0.3032 - 0.8781 0.7010 0.6446 0.1134 0.3920 T.M. 0.6494 0.2435 0.8577 0.4393 0.3047 0.1992 0.4415 0.2180 0.4585 0.5645 0.5294 0.9091 0.4846 Amoeba 0.9900 0.9999 0.9987 0.9999 0.9999 0.5905 0.6916 0.9727 0.9550 0.9999 0.9894 N/A 2 0.8490 NetM. 0.9999 0.9999 0.9999 0.9999 0.9899 0.9449 0.9965 0.9999 0.9999 0.9955 0.9999 0.8448 0.9809 DL-based systems Vanilla R.M. - 0.3660 0.0455 0.5815 0.1163 - - 0.3299 0.0118 - 0.0050 0.0515 0.1256 M.I. 0.9510 - - 0.3355 0.8769 - 0.5415 0.6711 0.6085 0.5353 0.6751 0.1958 0.4492 T.M. - 0.0375 0.8600 0.6550 0.0790 0.2232 0.2595 0.1617 0.0492 0.0278 0.0264 0.8636 0.2702 Amoeba 0.9999 0.9999 0.9999 0.9999 0.8038 0.7156 0.6540 0.2682 0.9975 0.9999 0.9455 0.2538 0.8032 NetM. 0.9999 0.9985 0.9825 0.9890 0.9817 0.9894 0.9805 0.9687 0.9999 0.9999 0.9999 0.8485 0.9782 CIC. R.M. - 0.0422 0.1100 0.6398 0.5578 0.2467 0.2922 0.0301 0.0151 0.1855 0.4467 0.1031 0.2224 M.I. 0.2300 0.1367 0.9711 0.5735 0.7111 0.3956 0.5396 0.2122 0.7011 0.6185 0.6598 0.2886 0.5032 T.M. 0.1444 - 0.9822 0.6520 0.6656 0.1433 0.3026 0.1021 0.0311 0.0445 0.3381 0.6391 0.3371 Amoeba 0.9999 N/A 0.9999 0.9112 0.9980 0.9999 0.8704 0.8182 0.9800 0.9865 0.9999 N/A 0.7970 NetM. 0.9999 0.9744 0.9999 0.9959 0.9999 0.9867 0.8898 0.9810 0.9767 0.9999 0.9999 0.7475 0.9626 Kitsune R.M. - - 0.2379 0.3744 0.2949 0.0360 0.0990 0.2901 - 0.0277 0.0374 - 0.1165 M.I. 0.3514 0.4484 0.0913 0.1815 0.8109 0.0801 0.4424 0.6334 0.6159 0.4498 0.3493 0.5359 0.4159 T.M. 0.9760 0.9860 0.7848 0.5590 0.9049 0.4735 0.8318 0.7878 0.8884 0.8965 0.8406 0.6949 0.8020 Amoeba 0.9339 N/A 0.8949 0.9292 0.9915 0.9449 0.7256 0.4595 0.4355 0.7814 0.7017 N/A 0.6498 NetM. 0.9049 0.9850 0.8218 0.9333 0.9968 0.9359 0.9911 0.9291 0.9219 0.9231 0.9177 0.7522 0.9177 1 These methods cannot successfully attack the target (ASR < 0.01). 2 These methods cannot generate legitimate traffic (with illegal packet sizes). patterns with GANs and adjusts the patterns of malicious traffic with the particle swarm optimization algorithm. We use the open-source implementation of Traffic Manipulator [38] and retrain the model. We use the Kitsune Feature Extractor as the default for Traffic Manipulator, following the paper. This means that for Kitsune, it is a gray-box attack, whereas for other systems, it is a black-box attack. • Amoeba [58]. Amoeba is designed with a per-packet adjustment technique to circumvent censorship in a black-box scenario. Specifically, Amoeba leverages RL to truncate and pad packets, which are then reassembled by the remote end after passing through the censorship system. We disregard the practical applicability of Amoeba's splitting strategy under different traffic conditions and only evaluate whether it can bypass different types of detection systems. We adopt its open-source implementation and retrain the model. Metrics. We use AUC (Area Under the Curve) and F1 score to assess the effectiveness of the malicious traffic detection systems, and attack success rate (ASR) to measure the performance of attack methods. Specifically, ASR measures the fraction of malicious flows that are not detected when the IDS uses the threshold rendering the highest F1 score on the validation data. We also measure the bandwidth (megabits per second, Mbps) of both malicious and adversarial traffic to illustrate the impact of perturbations. Moreover, we measure throughput (packets per second, PPS) to show the efficiency. B. Attack Performance Table II presents the attack success rate against advanced detection systems. NetMasquerade achieves 0.7475 ∼0.999 ASR, with an average of 0.9878, 0.9717, 0.9809, 0.9782, 0.9626 and 0.991 against Whisper, FlowLens, NetBeacon, Vanilla, CICFlowMeter and Kitsune detection systems, showing an improvement of 2.61%, 3.11%, 15.53%, 21.88%, 20.78%, and 14.42% over the best performance of the baselines. In 56 out of the 72 evaluated scenarios, NetMasquer10 Episodes Max Q-Value 0.0 0.4 0.6 0.8 1.0 0 200 400 600 800 1000 1200 0.2 1.2 Mirai Storm Fuzzing Zeus (a) NetBeacon. Episodes Max Q-Value 0.0 0.4 0.6 0.8 1.0 0 200 400 600 800 1000 1200 0.2 1.2 Mirai Storm (b) CICFlowMeter + MLP. Fig. 6. Max Q-Value during the training phase. The shaded region represents a standard deviation of the average evaluation over 5 trials. Curves are smoothed uniformly for visual clarity. ade achieves the highest attack success rate. By contrast, Amoeba matches or exceeds NetMasquerade's performance in 26 scenarios, yet in certain cases its success rate plummets below 30% and even produces flows with illegal packet sizes. Moreover, Amoeba relies on truncating and padding every single packet to craft adversarial traffic, requiring cooperation from the receiving endpoint to reassemble these packets. As a result, this packet-level manipulation can fail in practical attack scenarios (e.g., spam), where such coordination is typically unavailable. Meanwhile, we observe that Traffic Manipulator's performance drops significantly under the black-box setting (i.e., when the attacker cannot access the feature extractor), while NetMasquerade maintains its capability under all scenarios. Figure 6 shows the max Q-Value during the training phase. NetMasquerade achieves 90% convergence in less than 420 episodes in 6(a) and converges within 300 episodes in 6(b), demonstrating strong convergence ability. This ensures that the RL stage incurs low training overhead. We can define the Q-Value threshold ξ′ according to the training curve. NetMasquerade is a stealthy attack capable of generating effective adversarial traffic with minimal modifications. We set the step threshold τ case-by-case while ensuring it is no more than 10. Random Mutation and Mutate-and-Inject perform poorly under the same step setting, as shown in Table II. Amoeba, Traffic Manipulator and other traffic obfuscation methods [61] typically modify most of the packets, making the attack easy to detect. Figure 7 illustrates the relationship between ASR and Q-Value threshold ξ′ under different step thresholds τ. In Figure 7(a), we show the effect of attacking FlowLens on the Zeus dataset, which is a complex scenario (i.e., ASR = 0.9102). NetMasquerade achieves near-optimal attack rates within no more than 10 steps of modifications. According to Figure 7(b), we find that ASR is not always positively correlated with the number of modification steps at the same ξ, especially in the neighborhood of higher ASR values. An excessively high τ may lead the algorithm to make excessive modifications when the ξ cannot be satisfied. NetMasquerade ensures the effectiveness of adversarial traffic. Our focus is on two types of malicious traffic: SSDP Flood and SYN DoS, whose effectiveness is demonstrated by high rates. We define the effectiveness penalty rM as the sending rate of the flow. Additionally, by post-processing Traffic-BERT (see § IV-B for details), we can limit the range of IPD perturbations. We measure the bandwidth distributions of the original and adversarial traffic, as shown in Figure 8. Q-Value threshold !′ ASR 0.4 0.6 0.8 1.0 0.70 0.80 0.85 0.90 0.95 1.00 1.05 Ideal ! = 10 ! = 8 ! = 5 ! = 3 (a) FlowLens / Zeus. Q-Value threshold !′ ASR 0.0 0.4 0.6 0.8 1.0 0.70 0.80 0.85 0.88 0.90 0.95 1.00 0.2 Ideal 1.00 0.95 0.95 1.00 Ideal ! = 10 ! = 5 ! = 3 ! = 1 (b) Vanilla + RNN / Storm. Fig. 7. The relationship between ASR and Q-Value threshold ξ′ under different step thresholds τ. "Ideal" represents the maximum ASR when the attacker can obtain feedback. Bandwidth [Mbps] PDF 0.0 0.1 0.2 0 10 20 30 40 50 Original Adversarial (a) SSDP Flood. Bandwidth [Mbps] PDF 0.0 0.1 0 10 20 30 40 50 Original Adversarial (b) SYN DoS. Fig. 8. Bandwidth of DoS attack. The KL divergence between the adversarial and original traffic distributions for SSDP Flood and SYN DoS is 0.009 and 0.013, indicating that NetMasquerade does not significantly change the bandwidth distribution. On the other hand, the negligible KL divergence confirms that NetMasquerade does not introduce any new, delay-related artifacts that can be readily detected. C. Overhead and Efficiency Training Overhead. To be practical in real-world scenarios, our attack must be prepared efficiently. Our framework achieves this through the two-stage design that separates computationally intensive training from rapid online execution. • Stage 1 (Benign Traffic Pattern Mimicking) is pre-trained offline on publicly available benign traces. Since this stage can be completed well before the actual attack commences, its end-to-end training cost (about ∼75 hours on our testbed) does not affect the online generation speed. • Stage 2 (Adversarial Traffic Generation) runs online during the actual attack setup. We measure the time required for the RL loop (i.e., adversarial flow generation →DPDK emission →feedback reception →policy update) to converge. On our testbed, this stage is highly efficient, with the policy typically converging within just 1 hour. Overall, the two-stage design shifts most of the overhead to offline training, thereby ensuring timely attack execution. Efficiency. High inference speed is crucial for generating adversarial traffic, especially in high-throughput network environments. Our measurement is end-to-end, including packet extraction and transmitting packets via our DPDK engine. Figure 9(a) compares the throughput of NetMasquerade with baseline methods. Both NetMasquerade and Traffic Manipulator operate at the flow level, yet NetMasquerade achieves an average efficiency improvement of ∼69.6× over Traffic Manipulator across eight datasets. In contrast, Amoeba performs packet-level inference, maintaining a throughput of approximately 300 ± 10 PPS under various attack scenarios. 11 PPS Fuzzing SSDP Flood OS Scan Zeus Mirai Waledac Storm Syn DoS 0 1 2 3 log10 Scale T.M. NetMasquerade 66.1× 54.5× 12.7× 12.9× 6.4× 272.2× 17.2× 115.1× Amoeba (a) Throughput comparison. PPS 0 2 6 ×1e2 Scale Fuzzing 8 4 1 2 3 5 8 10 SSDP Flood T.M. Amoeba (b) Throughput vs. steps under attack Fig. 9. Efficiency of NetMasquerade and baselines. Notably, for long-flow attacks (e.g., Waledac), NetMasquerade exhibits a clear advantage in efficiency. Figure 9(b) shows the efficiency curve under different maximum inference steps. By adjusting thresholds ξ′ and τ, we can make a trade-off between attack accuracy and inference efficiency. In our experiments, the maximum number of inference steps is generally no more than 10. D. Deep Dive Effectiveness under Limited Probes. NetMasquerade assumes that the attacker is able to perform a probing process to collect feedback for model training. However, the probing budget may be limited, as excessive probes could trigger alarms and lead to aggressive countermeasures. To quantify this trade-off, we evaluate the ASR of NetMasquerade given various probing budgets against two detection systems (NetBeacon and Vanilla + RNN) across three distinct datasets each. As shown in Figure 10 (the solid lines), the ASR exhibits a steep learning curve between 200 and 1, 000 probes. For instance, against NetBeacon, NetMasquerade achieves, on average, 35.6%, 70.4%, and 88.9% of its final ASR with budgets of 200, 500, and 1, 000 probes, respectively. The policy typically converges between 1, 000 and 2, 000 probes, although certain complex scenarios (e.g., Vanilla + RNN / Spam) may require a larger budget. This probing load is several orders of magnitude lower than the steady-state workload of a single data-center server (∼500 flows per second) [79] or the capacity of a commercial telemetry system [15] (over 50, 000 flows per-second [14]). Crucially, this volume also sits below the thresholds that mainstream IDS products use to raise scan or anomaly alarms [13]. Furthermore, we examine how the probing budget affects the average number of perturbation steps per flow (the dashed lines in Figure 10). Since each step costs one probe, this count reflects the model's probe utilization efficiency. In the early training phase, the agent requires more steps to explore the action space and identify effective modifications; 100 200 500 1000 2000 Probing Budgets 0.0 0.2 0.4 0.6 0.8 1.0 ASR Fuzzing Zeus XSS 0 1 5 10 Steps (a) NetBeacon. 100 200 500 1000 2000 Probing Budgets 0.0 0.2 0.4 0.6 0.8 1.0 ASR Spam Waledac Mirai 0 1 5 10 Steps (b) Vanilla + RNN. Fig. 10. ASR under different probing budgets. Solid lines denote ASR; dashed lines denote steps. (a) Vanilla + RNN. (b) Whisper. Fig. 11. ASR under different noise levels. as training progresses and feedback accumulates, the average steps per flow drop sharply. This shows the policy is learning to use the budget more efficiently, which is key to learning optimal, minimal-modification policies and contributes to the fast convergence of our framework. Robustness to Noisy Feedback. The feedback that an attacker can collect may be unreliable in real-world deployment. Such noise may be caused by various reasons, including misclassifications, specific configurations, or even adversarial responses from the target designed to mislead the attacker. Figure 11 shows the final ASR achieved against the Vanilla + RNN and Whisper under noise levels of 5%, 15%, and 30%. The results demonstrate that NetMasquerade maintains high efficacy under moderate noise. For example, when the noise level is 5%, NetMasquerade exhibits an average drop in attack success rate of only 0.063. When the reward signal becomes highly unreliable (i.e., given a 30% noise level), the ASR degrades gracefully rather than collapsing. Meanwhile, we observe that increased noise primarily affects the convergence efficiency, particularly the stability of the Q-value. With extended explorations, it is still possible to achieve higher ASR. Importance of NetMasquerade's Components. NetMasquerade's success relies on its two-stage framework design. We validate the necessity of each stage through a series of ablation studies, which show that removing or replacing either stage significantly reduces the attack success rate. Detailed ablation analysis can be found in Appendix IX-D. E. Robustness against Defenses NetMasquerade generates adversarial traffic within the traffic (physical) space, which translates into an unrestricted adversarial attack in the feature space. Consequently, defenses that operate in the feature space, that is, defenses designed to make a model robust to variations within a certain numerical norm of its input feature vectors, such as adversarial training [32], [69] or sample randomized smoothing [17], [55], fail to counter NetMasquerade. 12 ASR Fuzzing SSDP Flood OS Scan Zeus Mirai Waledac Storm Syn DoS 0.80 0.85 0.90 0.95 Vanilla BARS 1.00 Whisper RNN Fig. 12. ASR against BARS. • BARS [95]. BARS employs a combined distribution transformer in feature space to map smoothed noise onto arbitrary manifolds, providing certified robustness bounds for certain perturbations in that space. We use the open-source implementation of BARS and deploy it on two detection methods: Whisper and Vanilla + RNN, and the results are shown in Figure 12. NetMasquerade maintains a high ASR across 8 test datasets even against BARS. The primary reason lies in BARS's limited certified bound in the feature space, whereas NetMasquerade performs multi-step manipulations in the traffic space (e.g., inserting additional packets), leading to perturbations that exceed BARS's bound when projected back into the feature space. Moreover, in some datasets, NetMasquerade even attains a higher ASR under BARS, likely because random noise reduces the model's accuracy, even with training data augmentation. Guidelines for Building More Robust Detection Systems. Overall, it is difficult for the feature-space defenses to defend against NetMasquerade, because even small perturbations in traffic space can lead to large distances in feature space. A straightforward countermeasure is traffic-space adversarial training: augment the training set with packet-level perturbations to strengthen the model's decision boundaries. Second, instead of certifying an Lp norm in feature space, traffic space certification (e.g., bounding the number of inserted packets) may be more effective against NetMasquerade. Finally, NetMasquerade's training process implicitly assumes a static decision boundary from the target model. Thus, the defense models can introduce randomness at inference to avoid static decision boundaries, such as altering the model architecture [21] or parameters [60]. Such techniques would force the attacker to optimize against a distribution of models rather than a single target, expanding the exploration space. We will explore these strategies in future work. VII. RELATED WORK ML-based Malicious Traffic Detection. For generic detection, various methods have been developed to learn flow-level features, such as frequency domain features [28], distribution features [7], statistical features [101], and graph features [30]. In particular, existing methods utilize programmable network devices to achieve high efficiency, e.g., NetBeacon [104] installed decision trees on programmable switches, N3IC implemented binary neural networks on SmartNICs [83]. In contrast to these flow-level detections, Kitsune [62], nPrintML [42], and CLAP [105] learned packet-level features of malicious traffic. For task-specific detection, several studies aimed to detect malware behaviors. For example, Tegeler et al. [89] detected communication traffic from botnets. Similarly, Tang et al. [87] detected malicious web traffic. Dodia et al. [20] identified malicious Tor traffic from malware. Furthermore, Sharma et al. [81] and Tekiner et al. [90] captured attack traffic targeting IoT devices. On the Robustness of Traffic Detection. Robustness issues are prevalent in traffic analysis systems, i.e., attackers can easily construct adversarial traffic examples to trick the systems into misclassifying traffic. First, Fu et al. [27] revealed that attackers can easily mimic benign traffic to evade the traditional methods [62], [83] by simply injecting random noise. Such observations necessitate robust detection methods [30], [28], [27]. Second, advanced evasion strategies are developed, which optimize the adversarial traffic examples according to the outputs of white-box [96], [82], [70] and greybox detection models [37]. These methods are different from our hard-label black-box evasion. Additionally, existing studies analyzed the robustness of traffic analysis other than traffic detection, e.g., improving robustness for web fingerprinting [65], [6], which are orthogonal to our evasion attack. Common Issues of ML-Based Security Application. Sommer et al. [84] analyzed why ML-based traffic detection systems suffer from low usability, and emphasized the importance of considering evasion behaviors of real-world attackers. Arp et al. [5] explored the practical challenges associated with ML-based applications, highlighting issues of evasion attacks [62], [23]. Moreover, Alahmadi et al. [2], Vermeer et al. [93], and Fu et al. [31] further demonstrated that existing MLbased traffic detections raised massive false positive alarms. Additionally, Han et al. [36], Jacobs et al. [47], and Wei et al. [98] addressed the explainability of traffic detection systems. VIII. CONCLUSION In this paper, we introduce NetMasquerade, a hard-label black-box evasion attack method specifically devised for malicious traffic detection systems. NetMasquerade employs a two-stage framework. First, NetMasquerade establishes a tailored pre-trained model called Traffic-BERT for capturing diverse benign traffic patterns. Subsequently, NetMasquerade integrates the Traffic-BERT into an RL framework, effectively manipulating malicious packet sequences based on benign traffic patterns with minimal modifications. Also, NetMasquerade introduces dissimilarity and effectiveness penalties, allowing adversarial traffic to retain attack stealth and effectiveness. Extensive experiments show that NetMasquerade enables both high-rate and low-rate attacks to evade 6 top-performing detection systems in 80 attack scenarios, achieving over 96.65% attack success rate on average. Moreover, NetMasquerade applies minimal modifications to no more than 10 steps in all scenarios. Additionally, NetMasquerade can achieves low-latency adversarial traffic generation, demonstrating its practicality in real-world scenarios. 13 IX. ETHICAL CONSIDERATIONS We carefully assess several ethical aspects to ensure that our study adheres to ethical standards. This work is aimed solely at assessing and improving the robustness of traffic detection models, rather than facilitating malicious or unlawful activities. We strictly follow all terms of use, and no private or sensitive data is accessed or disclosed. All analysis relies exclusively on publicly available datasets-Kitsune, PeerRush, HyperVision (malicious traffic), and MAWI (benign traffic)-without intercepting or manipulating any real-world network traffic. Likewise, we do not perform any active attacks or evasions against real detection systems, ensuring no impact on actual network traffic or stability. ACKNOWLEDGMENT We thank the anonymous reviewers for their thoughtful comments. This work was supported in part by the National Science Foundation for Distinguished Young Scholars of China (No. 62425201), the National Natural Science Foundation of China (Grant Nos. 62472036, 62472247, 62202258, 62221003, 62132011, 61932016, and U22B2031), and Beijing Nova Program. Yi Zhao and Ke Xu are the corresponding authors. REFERENCES [1] AKamai, "Prolexic," https://www.akamai.com/products/ prolexic-solutions, Accessed May 2024. [2] B. A. Alahmadi et al., "99% false positives: A qualitative study of SOC analysts' perspectives on security alarms," in Security. USENIX Association, 2022, pp. 2783-2800. [3] E. Alhajjar, P. Maxwell, and N. D. Bastian, "Adversarial machine learning in network intrusion detection systems," Expert Syst. Appl., vol. 186, p. 115782, 2021. [4] B. Anderson and D. A. McGrew, "Identifying encrypted malware traffic with contextual flow data," in AISec@CCS. ACM, 2016, pp. 35-46. [5] D. Arp et al., "Dos and don'ts of machine learning in computer security," in Security. USENIX Association, 2022. [6] A. Bahramali, A. Bozorgi, and A. Houmansadr, "Realistic website fingerprinting by augmenting network traces," in CCS. ACM, 2023, pp. 1035-1049. [7] D. Barradas, N. Santos, L. Rodrigues, S. Signorello, F. M. V. Ramos, and A. Madeira, "FlowLens: Enabling efficient flow classification for ml-based network security applications," in NDSS. The Internet Society, 2021. [8] BeichenDream, "Godzilla," https://github.com/BeichenDream/ Godzilla, 2021. [9] L. Bilge et al., "Disclosure: Detecting botnet command and control servers through large-scale netflow analysis," in ACSAC. ACM, 2012, pp. 129-138. [10] T. Chen and C. Guestrin, "Xgboost: A scalable tree boosting system," in KDD. ACM, 2016, pp. 785-794. [11] J. Chung, C ̧ . G ̈ulc ̧ehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," CoRR, vol. abs/1412.3555, 2014. [12] Cisco Systems, "Encrypted traffic analytics white paper," 2021, https://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/ enterprise-network-security/nb-09-encrytd-traf-anlytcs-wp-cte-en. html. [13] Cisco Systems, Inc., Cisco Security Manager 4.19 User Guide, Cisco Systems, Inc., 2018. [Online]. Available: https://www.cisco.com/c/ en/us/td/docs/security/security management/cisco security manager/ security manager/419/user/guide/CSMUserGuide.html [14] --, Send On-Premises Flows from Cisco Telemetry Broker or Secure Network Analytics to Secure Cloud Analytics Configuration Guide, Cisco Systems, Inc., 2023, configuration Guide v7.5.0, Document Version 1.0. [Online]. Available: https://www.cisco.com/ c/dam/en/us/td/docs/security/stealthwatch/on-premises-flows/7 5 0 Send On Prem Flows to Secure Cloud Analytics DV 1 0.pdf [15] --, "Cisco Telemetry Broker Data Sheet," https://www.cisco.com/c/ en/us/products/collateral/security/telemetry-broker/ctb-datasheet.html, Mar. 2024. [16] Cloudflare, "Cloudflare ddos protection products," https://developers. cloudflare.com/ddos-protection/managed-rulesets/adaptive-protection/, Accessed May 2024. [17] J. Cohen, E. Rosenfeld, and J. Z. Kolter, "Certified adversarial robustness via randomized smoothing," in ICML, ser. Proceedings of Machine Learning Research, vol. 97. PMLR, 2019, pp. 1310-1320. [18] G. Detal, B. Hesmans, O. Bonaventure, Y. Vanaubel, and B. Donnet, "Revealing middlebox interference with tracebox," in Proceedings of the 2013 Internet Measurement Conference, IMC 2013, Barcelona, Spain, October 23-25, 2013, K. Papagiannaki, P. K. Gummadi, and C. Partridge, Eds. ACM, 2013, pp. 1-8. [19] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: pre-training of deep bidirectional transformers for language understanding," in NAACL-HLT (1). Association for Computational Linguistics, 2019, pp. 4171-4186. [20] P. Dodia, M. AlSabah, O. Alrawi, and T. Wang, "Exposing the rat in the tunnel: Using traffic analysis for tor-based malware detection," in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS 2022, Los Angeles, CA, USA, November 7-11, 2022, H. Yin, A. Stavrou, C. Cremers, and E. Shi, Eds. ACM, 2022, pp. 875-889. [Online]. Available: https://doi.org/10.1145/3548606.3560604 [21] M. Dong, X. Chen, Y. Wang, and C. Xu, "Random normalization aggregation for adversarial defense," in NeurIPS, 2022. [22] G. Draper-Gil, A. H. Lashkari, M. S. I. Mamun, and A. A. Ghorbani, "Characterization of encrypted and VPN traffic using time-related features," in ICISSP. SciTePress, 2016, pp. 407-414. [23] M. Du, F. Li, G. Zheng, and V. Srikumar, "Deeplog: Anomaly detection and diagnosis from system logs through deep learning," in CCS. ACM, 2017, pp. 1285-1298. [24] W. M. Eddy, "Transmission control protocol (TCP)," RFC, vol. 9293, pp. 1-98, 2022. [Online]. Available: https://doi.org/10.17487/RFC9293 [25] C. Estan and G. Varghese, "New directions in traffic measurement and accounting: Focusing on the elephants, ignoring the mice," ACM Trans. Comput. Syst., vol. 21, no. 3, pp. 270-313, 2003. [26] X. Feng, C. Fu, Q. Li, K. Sun, and K. Xu, "Off-path TCP exploits of the mixed IPID assignment," in CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020, J. Ligatti, X. Ou, J. Katz, and G. Vigna, Eds. ACM, 2020, pp. 1323-1335. [Online]. Available: https://doi.org/10.1145/3372297.3417884 [27] C. Fu et al., "Frequency domain feature based robust malicious traffic detection," IEEE/ACM Trans. Netw., vol. 31, no. 1, pp. 452-467, 2023. [28] C. Fu, Q. Li, M. Shen, and K. Xu, "Realtime robust malicious traffic detection via frequency domain analysis," in CCS. ACM, 2021, pp. 3431-3446. [29] --, "Detecting tunneled flooding traffic via deep semantic analysis of packet length patterns," in CCS. ACM, 2024, pp. 3659-3673. [30] C. Fu, Q. Li, and K. Xu, "Detecting unknown encrypted malicious traffic in real time via flow interaction graph analysis," in NDSS. The Internet Society, 2023. [31] C. Fu, Q. Li, K. Xu, and J. Wu, "Point cloud analysis for mlbased malicious traffic detection: Reducing majorities of false positive alarms," in CCS. ACM, 2023, pp. 1005-1019. [32] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in ICLR (Poster), 2015. [33] T. Haarnoja, H. Tang, P. Abbeel, and S. Levine, "Reinforcement learning with deep energy-based policies," in ICML, ser. Proceedings of Machine Learning Research, vol. 70. PMLR, 2017, pp. 1352-1361. [34] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor," in ICML, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 1856-1865. 14 [35] T. Haarnoja, A. Zhou, K. Hartikainen, G. Tucker, S. Ha, J. Tan, V. Kumar, H. Zhu, A. Gupta, P. Abbeel, and S. Levine, "Soft actorcritic algorithms and applications," CoRR, vol. abs/1812.05905, 2018. [36] D. Han, Z. Wang, W. Chen, Y. Zhong, S. Wang, H. Zhang, J. Yang, X. Shi, and X. Yin, "Deepaid: Interpreting and improving deep learning-based anomaly detection in security applications," in CCS. ACM, 2021, pp. 3197-3217. [37] D. Han, Z. Wang, Y. Zhong, W. Chen, J. Yang, S. Lu, X. Shi, and X. Yin, "Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors," IEEE J. Sel. Areas Commun., vol. 39, no. 8, pp. 2632-2647, 2021. [38] --, "Trafficmanipulator," https://github.com/dongtsi/ TrafficManipulator, 2021. [39] M. Handley, V. Paxson, and C. Kreibich, "Network intrusion detection: Evasion, traffic normalization, and end-to-end protocol semantics," in USENIX Security Symposium. USENIX Association, 2001. [40] M. J. Hashemi, G. Cusack, and E. Keller, "Towards evaluation of nidss in adversarial setting," in Big-DAMA@CoNEXT. ACM, 2019, pp. 1421. [41] T. Hester, M. Vecer ́ık, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, I. Osband, G. Dulac-Arnold, J. P. Agapiou, J. Z. Leibo, and A. Gruslys, "Deep q-learning from demonstrations," in AAAI. AAAI Press, 2018, pp. 3223-3230. [42] J. Holland, P. Schmitt, N. Feamster, and P. Mittal, "New directions in automated traffic analysis," in CCS. ACM, 2021, pp. 3366-3383. [43] I. Homoliak, M. Teknos, M. Ochoa, D. Breitenbacher, S. Hosseini, and P. Han ́acek, "Improving network intrusion detection classifiers by non-payload-based exploit-independent obfuscations: An adversarial approach," EAI Endorsed Trans. Security Safety, vol. 5, no. 17, p. e4, 2019. [44] S. Huang and S. Onta ̃n ́on, "A closer look at invalid action masking in policy gradient algorithms," in FLAIRS, 2022. [45] Intel, "Data Plane Development Kit," https://www.dpdk.org/, 2025, accessed Apr 2025. [46] J. Iyengar and M. Thomson, "QUIC: A udp-based multiplexed and secure transport," RFC, vol. 9000, pp. 1-151, 2021. [Online]. Available: https://doi.org/10.17487/RFC9000 [47] A. S. Jacobs et al., "AI/ML for network security: The emperor has no clothes," in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2022, pp. 1537-1551. [48] V. Jacobson, "traceroute," ftp://ftp.ee.lbl.gov/traceroute.tar.gz, Dec. 1988, lawrence Berkeley National Laboratory, software distribution. [49] M. Javed and V. Paxson, "Detecting stealthy, distributed SSH bruteforcing," in CCS. ACM, 2013, pp. 85-96. [50] M. Jiang, B. Cui, J. Fu, T. Wang, L. Yao, and B. K. Bhargava, "RUDOLF: an efficient and adaptive defense approach against website fingerprinting attacks based on soft actor-critic algorithm," IEEE Trans. Inf. Forensics Secur., vol. 19, pp. 7794-7809, 2024. [51] M. S. Kang, S. B. Lee, and V. D. Gligor, "The crossfire attack," in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2013, pp. 127-141. [52] D. Kopp, J. Santanna, M. Wichtlhuber, O. Hohlfeld, I. Poese, and C. Dietzel, "Ddos hide & seek: On the effectiveness of a booter services takedown," in Internet Measurement Conference. ACM, 2019, pp. 6572. [53] A. Kuzmanovic and E. W. Knightly, "Low-rate tcp-targeted denial of service attacks: the shrew vs. the mice and elephants," in SIGCOMM. ACM, 2003, pp. 75-86. [54] A. H. Lashkari, G. Draper-Gil, M. S. I. Mamun, and A. A. Ghorbani, "Characterization of tor traffic using time based features," in ICISSP. SciTePress, 2017, pp. 253-262. [55] M. L ́ecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, "Certified robustness to adversarial examples with differential privacy," in IEEE Symposium on Security and Privacy. IEEE, 2019, pp. 656-672. [56] X. Lin, G. Xiong, G. Gou, Z. Li, J. Shi, and J. Yu, "ET-BERT: A contextualized datagram representation with pre-training transformers for encrypted traffic classification," in WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 - 29, 2022, F. Laforest, R. Troncy, E. Simperl, D. Agarwal, A. Gionis, I. Herman, and L. M ́edini, Eds. ACM, 2022, pp. 633-642. [Online]. Available: https://doi.org/10.1145/3485447.3512217 [57] Z. Lin, Y. Shi, and Z. Xue, "IDSGAN: generative adversarial networks for attack generation against intrusion detection," in PAKDD (3), ser. Lecture Notes in Computer Science. Springer, 2022, pp. 79-91. [58] H. Liu, A. F. Diallo, and P. Patras, "Amoeba: Circumventing mlsupported network censorship via adversarial reinforcement learning," PACMNET, vol. 1, no. CoNEXT, pp. 9:1-9:25, 2023. [59] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "Roberta: A robustly optimized BERT pretraining approach," CoRR, vol. abs/1907.11692, 2019. [60] Y. Ma, M. Dong, and C. Xu, "Adversarial robustness through random weight sampling," in NeurIPS, 2023. [61] R. Meier, V. Lenders, and L. Vanbever, "ditto: WAN traffic obfuscation at line rate," in NDSS. The Internet Society, 2022. [62] Y. Mirsky, T. Doitshman, Y. Elovici, and A. Shabtai, "Kitsune: An ensemble of autoencoders for online network intrusion detection," in NDSS. The Internet Society, 2018. [63] V. Mnih et al., "Human-level control through deep reinforcement learning," Nat., vol. 518, no. 7540, pp. 529-533, 2015. [64] R. Mudge, "Cobalt strike - adversary simulation and red team operations," https://www.cobaltstrike.com/, 2020. [65] M. Nasr, A. Bahramali, and A. Houmansadr, "Defeating dnn-based traffic analysis systems in real-time with blind adversarial perturbations," in USENIX Security Symposium. USENIX Association, 2021, pp. 2705-2722. [66] A. M. Nguyen, J. Yosinski, and J. Clune, "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images," in CVPR. IEEE Computer Society, 2015, pp. 427-436. [67] NVIDIA, "CUDA Toolkit: A Parallel Computing Platform on GPU," https://developer.nvidia.com/cuda-toolkit, 2025, accessed Apr 2025. [68] Open Information Security Foundation, "Suricata - Network Intrusion Detection/Prevention and Security Monitoring Engine," Software, Version 8.0.0, Jul. 2025, released 08 Jul 2025. [Online]. Available: https://suricata.io/ [69] N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami, "Distillation as a defense to adversarial perturbations against deep neural networks," in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2016, pp. 582-597. [70] A. Piplai, S. S. L. Chukkapalli, and A. Joshi, "Nattack! adversarial attacks to bypass a GAN based classifier trained to detect network intrusion," in BigDataSecurity/HPSC/IDS. IEEE, 2020, pp. 49-54. [71] M. Post and D. Vilar, "Fast lexically constrained decoding with dynamic beam allocation for neural machine translation," in NAACLHLT. Association for Computational Linguistics, 2018, pp. 13141324. [72] J. Postel, "Internet Protocol," Request for Comments 791, 1981. [Online]. Available: https://www.rfc-editor.org/rfc/rfc791 [73] --, "Internet control message protocol," RFC, vol. 792, pp. 1-21, 1981. [Online]. Available: https://doi.org/10.17487/RFC0792 [74] PyTorch, "A Deep Learning Framework," https://pytorch.org/, 2025, accessed Apr 2025. [75] Z. Qian and Z. M. Mao, "Off-path TCP sequence number inference attack - how firewall middleboxes reduce security," in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2012, pp. 347-361. [76] Y. Qing, Q. Yin, X. Deng, Y. Chen, Z. Liu, K. Sun, K. Xu, J. Zhang, and Q. Li, "Low-quality training data only? A robust framework for detecting encrypted malicious network traffic," in NDSS. The Internet Society, 2024. [77] B. Rahbarinia, R. Perdisci, A. Lanzi, and K. Li, "PeerRush: Mining for unwanted P2P traffic," in DIMVA, ser. Lecture Notes in Computer Science, vol. 7967. Springer, 2013, pp. 62-82. [78] P. Richter and A. W. Berger, "Scanning the scanners: Sensing the internet from a massively distributed network telescope," in Internet Measurement Conference. ACM, 2019, pp. 144-157. [79] A. Roy, H. Zeng, J. Bagga, G. Porter, and A. C. Snoeren, "Inside the social network's (datacenter) network," in SIGCOMM. ACM, 2015, pp. 123-137. [80] N. N. Scanning, "The official nmap project guide to network discovery and security scanning," Boston: Nmap Project, 2009. [81] R. A. Sharma, I. Sabane, M. Apostolaki, A. Rowe, and V. Sekar, "Lumen: A framework for developing and evaluating ml-based iot network anomaly detection," in CoNEXT. ACM, 2022, pp. 59-71. [82] R. Sheatsley, B. Hoak, E. Pauley, Y. Beugin, M. J. Weisman, and P. D. McDaniel, "On the robustness of domain constraints," in CCS. ACM, 2021, pp. 495-515. [83] G. Siracusano, S. Galea, D. Sanvito, M. Malekzadeh, G. Antichi, P. Costa, H. Haddadi, and R. Bifulco, "Re-architecting traffic analysis 15 with neural network interface cards," in NSDI. USENIX Association, 2022, pp. 513-533. [84] R. Sommer and V. Paxson, "Outside the closed world: On using machine learning for network intrusion detection," in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2010, pp. 305-316. [85] E. Stinson and J. C. Mitchell, "Towards systematic evaluation of the evadability of bot/botnet detection methods," in WOOT. USENIX Association, 2008. [86] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," in ICLR (Poster), 2014. [87] R. Tang, Z. Yang, Z. Li, W. Meng, H. Wang, Q. Li, Y. Sun, D. Pei, T. Wei, Y. Xu, and Y. Liu, "Zerowall: Detecting zero-day web attacks through encoder-decoder recurrent neural networks," in INFOCOM. IEEE, 2020, pp. 2479-2488. [88] G. Tao, S. An, S. Cheng, G. Shen, and X. Zhang, "Hard-label black-box universal adversarial patch attack," in USENIX Security Symposium. USENIX Association, 2023, pp. 697-714. [89] F. Tegeler, X. Fu, G. Vigna, and C. Kruegel, "Botfinder: Finding bots in network traffic without deep packet inspection," in CoNEXT. ACM, 2012, pp. 349-360. [90] E. Tekiner, A. Acar, and A. S. Uluagac, "A lightweight iot cryptojacking detection mechanism in heterogeneous smart home networks," in NDSS. The Internet Society, 2022. [91] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in NIPS, 2017, pp. 5998-6008. [92] M. Vecer ́ık, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Roth ̈orl, T. Lampe, and M. A. Riedmiller, "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards," CoRR, vol. abs/1707.08817, 2017. [93] M. Vermeer, N. Kadenko, M. van Eeten, C. Ga ̃n ́an, and S. Parkin, "Alert alchemy: SOC workflows and decisions in the management of NIDS rules," in CCS. ACM, 2023, pp. 2770-2784. [94] J. Wan, J. Fu, L. Wang, and Z. Yang, "Bounceattack: A query-efficient decision-based adversarial attack by bouncing into the wild," in IEEE Symposium on Security and Privacy. IEEE, 2024, pp. 1270-1286. [95] K. Wang, Z. Wang, D. Han, W. Chen, J. Yang, X. Shi, and X. Yin, "BARS: local robustness certification for deep learning based traffic analysis systems," in NDSS. The Internet Society, 2023. [96] N. Wang, Y. Chen, Y. Xiao, Y. Hu, W. Lou, and Y. T. Hou, "MANDA: on adversarial example detection for network intrusion detection system," IEEE Trans. Dependable Secur. Comput., vol. 20, no. 2, pp. 1139-1153, 2023. [97] N. Weaver, R. Sommer, and V. Paxson, "Detecting forged TCP reset packets," in NDSS. The Internet Society, 2009. [98] F. Wei et al., "Xnids: Explaining deep learning-based network intrusion detection systems for active intrusion responses," in Security. USENIX Association, 2023, p. to appear. [99] WIDE, "Mawi working group traffic archive," http://mawi.wide.ad.jp/ mawi/, 2023, accessed: June 2023. [100] J. Xing et al., "Ripple: A programmable, decentralized link-flooding defense against adaptive adversaries," in Security. USENIX, 2021, pp. 3865-3880. [101] Z. Xu, S. Ramanathan, A. M. Rush, J. Mirkovic, and M. Yu, "Xatu: Boosting existing ddos detection systems using auxiliary signals," in CoNEXT. ACM, 2022, pp. 1-17. [102] M. Zhang, G. Li, S. Wang, C. Liu, A. Chen, H. Hu, G. Gu, Q. Li, M. Xu, and J. Wu, "Poseidon: Mitigating volumetric ddos attacks with programmable switches," in NDSS. The Internet Society, 2020. [103] G. Zhou, X. Guo, Z. Liu, T. Li, Q. Li, and K. Xu, "Trafficformer: an efficient pre-trained model for traffic data," in SP. IEEE Computer Society, 2024, pp. 102-102. [104] G. Zhou, Z. Liu, C. Fu, Q. Li, and K. Xu, "An efficient design of intelligent network data plane," in USENIX Security Symposium. USENIX Association, 2023, pp. 6203-6220. [105] S. Zhu, S. Li, Z. Wang, X. Chen, Z. Qian, S. V. Krishnamurthy, K. S. Chan, and A. Swami, "You do (not) belong here: Detecting DPI evasion attacks with context learning," in CoNEXT. ACM, 2020, pp. 183-197. APPENDIX A. Details of Datasets To closely mirror real-world scenarios, we replay 4 categories of malicious traffic, totaling 12 attack types: Reconnaissance & Scanning, Denial of Service (DoS), Botnet, and Encrypted Web attacks. The details are shown in Table III. • Reconnaissance and Scanning Attacks. These attacks identify open ports and services across a wide range of servers by sending specific packets, e.g., by sending ICMP echo requests to determine if a host is active. Adversarial traffic made by NetMasquerade does not influence the effectiveness. Moreover, since these scanning attacks typically do not involve the transmission of payloads, they can bypass payload-based detection methods. Employing a patternbased detection system is a common way of detecting such attacks. We select two distinct scanning attacks from Kitsune [62]: OS Scan and Fuzz Scan. • Denial of Service Attacks. DoS attacks incapacitate targeted services by inundating them with an overwhelming volume of requests, depleting resources and rendering the services unavailable. We selected SSDP DoS and SYN DoS traffic from Kitsune. Similar to scanning attacks, payload-based detection methods fall short against DoS attacks; however, detecting these attacks becomes feasible when focusing on the characteristics of their patterns. • Botnet Attacks. Botnets, which are large networks of compromised machines, are controlled by attackers via command and control (C&C) channels to conduct malicious activities [9], [89]. We employ the Mirai dataset from Kitsune, and analyze data from three typical botnets: Zeus, Storm, and Waledac, which were collected by PeerRush [77]. • Encrypted Web Attacks. Typically, malicious web traffic is encrypted with HTTPS, concealing its malicious behavior within the packet payload. This encryption prevents traditional rule-based methods unable to detect the traffic. Meanwhile, most traditional ML-based detection systems cannot effectively detect the traffic due to their low packet rates [30]. We obtained four common types of Web attack traffic from [31], including automated vulnerability discovery (XSS, CSRF), Webshell, and Spam traffic. We replay traffic from diverse sources-covering high and low throughput flows, as well as encrypted and unencrypted streams-to demonstrate NetMasquerade's general applicability across varying protocols and tasks. For the Scanning, DoS and Encrypted Web attacks, each target detection system is trained on the malicious and benign traces from the corresponding private datasets. For the Botnet attack, the original datasets contain virtually no benign traffic, so we supplement benign flows with traces from the WIDE MAWI project (August 2023). This choice does not compromise the blackbox setting, as the Traffic-BERT model is trained on data from June 2023, ensuring that there is no correlation between the distributions of the datasets. To achieve class balance, we train the target models with thousands of malicious flows and an 16 TABLE III DETAILS OF MALICIOUS TRAFFIC DATASETS Malicious Traffic Dataset Description Source Bandwidth Enc. Ratio Mal. Ratio External Data 1 Recon. OS Scan Scanning for active hosts and operating systems. Kitsune [62] 0.96 Mbps 0.0% 0.0045 N/A Fuzz Scan Scanning for vulnerabilities in protocols. Kitsune 27.9 Mbps 0.0% 0.0089 N/A DoS SSDP DoS Amplifying SSDP traffic to flood targets. Kitsune 27.2 Mbps 0.0% 0.0321 N/A SYN DoS Flooding servers with half-open TCP connections. Kitsune 23.5 Mbps 0.0% 0.0858 N/A Botnet Mirai Infects IoT devices with the Mirai malware. Kitsune 0.12 Mbps 0.0% 0.8408 MAWI 2 Zeus Botnet infected by a Zeus trojan. PeerRush [77] 0.06 Mbps 0.0% 0.9999 MAWI Storm Peer-to-peer botnet spreading malware. PeerRush 25.3 Mbps 0.0% 0.9628 MAWI Waledac Spam botnet harvesting user data. PeerRush 13.9 Mbps 0.0% 1.0000 MAWI Enc. Web Attacks Webshell Malicious script enabling remote server control. H.V. [30] 11.2 Mbps 100.0% 0.0234 N/A XSS Injects malicious scripts into legitimate websites. H.V. 31.8 Mbps 100.0% 0.0259 N/A CSRF Fools authenticated users into unintended actions. H.V. 7.73 Mbps 100.0% 0.0236 N/A Spam Bulk messages with phishing / malware. H.V. 36.2 Mbps 100.0% 0.0238 N/A 1 We use an external benign dataset when the malicious dataset is nearly 100% malicious. 2 To ensure a strict black-box setting, we employ the real-world backbone network traces from the WIDE MAWI project's August 2023 dataset [99] to train the additional models, keeping them entirely separate from the June 2023 traces used to train Traffic-BERT. equal number of benign flows. For the one-class detection system Kitsune, we use only benign samples during training. B. Details of Target Detection Systems Target Systems. We use three advanced traditional ML-based malicious traffic detection systems as target systems: • Whisper [28]. Whisper transforms the patterns of flows into frequency domain features, employing clustering to unsupervisedly learn these features. Similarly, the effectiveness of original traffic is assessed by the distance between frequency domain features and the cluster centers. Whisper is particularly effective at detecting DoS traffic, so we retrain the model on the DoS dataset using its default configuration. For botnet traffic, we replace clustering with a linear classifier to enhance detection capabilities. • FlowLens [7]. FlowLens samples the distribution of packetlevel features on the data plane and uses random forests to learn these features in a supervised manner on the control plane, introducing a new paradigm of malicious traffic detection. We retrain the model with the default model structure and hyperparameters described in the paper. • NetBeacon [104]. NetBeacon introduces the concept of Intelligent Data Plane (IDP), performing flow-level feature extraction and feature classification with tree-based models directly in the data plane. We reconstruct the feature extraction method described in the paper, select XGBoost [10] as the representative tree model for traffic classification, and adjust hyperparameters to achieve optimal accuracy. We also implement three top-performing DL-based systems: • CICFlowMeter [22], [54] + MLP. CICFlowMeter is a widely used feature processor that extracts over 80 time-related features from flow patterns. We employ a four-layer linear neural network to learn these features, with the number of neurons in the hidden layers set to three times that of the input layer. By adjusting the hyperparameters, we achieve the model's best classification performance. • Vanilla + RNN. The native feature extractor extracts sequences of sizes and IPDs from the flow, without additional feature processing. Given the sequential nature of the features, we use a single-layer LSTM as the model, taking the concatenation of the two feature sequences as input. • Kitsune [62]. Kitsune dynamically extracts and maintains per-packet features through specially designed feature extractors and uses autoencoders and clustering to learn the features of benign traffic unsupervisedly. The detection of malicious traffic is based on the discrepancy between the autoencoder's output and the original feature input. We retrain the model using its original feature extraction and model structure with default hyperparameters. Target System Detection Performance. Table IV summarizes the detection performance of 6 target systems on 12 types of malicious traffic. Notably, Kitsune outputs a score indicating how malicious each sample is; we perform a grid search to determine a threshold and compute the corresponding F1 score. We then apply this threshold in the attack experiments to calculate the ASR. All detection systems achieve high AUC and F1 scores, demonstrating strong performance in the absence of evasion attacks. C. Details of Hyperparameter Settings The default hyperparameters of NetMasquerade are listed in Table V. D. Deep Dive into NetMasquerade Importance of Two-Stage Framework. NetMasquerade consists of two stages: benign traffic pattern mimicking and adversarial traffic generation. To study the importance of each stage, we design three ablation strategies and conduct attacks 17 TABLE IV MALICIOUS TRAFFIC DETECTION SYSTEMS' PERFORMANCE Malicious Traffic Dataset Traditional ML-based Systems DL-based Systems Whisper FlowLens NetBeacon Vanilla CIC. Kitsune AUC F1 AUC F1 AUC F1 AUC F1 AUC F1 AUC F1 Recon. OS Scan 0.9978 0.9979 0.9946 0.9947 0.9897 0.9899 0.9720 0.9728 0.9980 0.9980 0.9211 0.9780 Fuzz Scan 0.9905 0.9899 0.9972 0.9933 0.9913 0.9910 0.9662 0.9663 0.9900 0.9901 0.9952 0.9974 DoS SSDP DoS 0.9167 0.9231 0.9982 0.9981 0.9995 0.9994 0.9790 0.9790 0.9999 0.9999 0.9900 0.9996 SYN DoS 0.9879 0.9823 0.9815 0.9800 0.9903 0.9833 0.9616 0.9560 0.9852 0.9849 0.9801 0.9213 Botnet Mirai 0.9449 0.9458 0.9600 0.9463 0.9444 0.9371 0.9099 0.9156 0.9574 0.9521 0.9322 0.9762 Zeus 0.9121 0.9056 0.9250 0.8837 0.9279 0.9516 0.9118 0.9041 0.9625 0.9484 0.9246 0.9017 Storm 0.9495 0.9468 0.9395 0.9415 0.9972 0.9978 0.9233 0.9271 0.9968 0.9982 0.9302 0.9822 Waledac 0.9505 0.9484 0.9660 0.9653 0.9285 0.9467 0.9299 0.9304 0.9860 0.9862 0.8964 0.8414 Enc. Webshell 0.9980 0.9979 0.9955 0.9946 0.9943 0.9955 0.9989 0.9874 0.9965 0.9964 0.9996 0.9887 XSS 0.9975 0.9974 0.9965 0.9965 0.9967 0.9966 0.9845 0.9937 0.9937 0.9984 0.9991 0.9990 CSRF 0.9944 0.9950 0.9920 0.9920 0.9935 0.9934 0.9819 0.9822 0.9928 0.9927 0.9019 0.6625 Spam 0.9200 0.9135 0.9780 0.9756 0.9635 0.9622 0.8897 0.8847 0.9900 0.9901 0.8887 0.8690 TABLE V DETAILS OF HYPERPARAMETERS. Stage Hyperparameter Value Description 1: Traffic-BERT n 512 Fixed length of sequences. dk 128 Embedding size / total length of Q, K, V . N LAYERS 6 Number of encoder blocks. ATTN HEADS 8 Number of attention heads. D FF 512 Dimension of feed-forward layer. T SIZE 56 Size of IPD feature vocabulary. S SIZE 1606 Size of size feature vocabulary. 2: RL process β 0.01 ∼0.1 Weight of rD. γ 0 ∼0.2 Weight of rM. τ ≤10 Max step threshold. ξ′ 0.8 ∼1.05 Stop reward threshold. η 1 Discount factor. λ 0.9 Soft update weight of Q-networks. B 1e5 Size of experience replay buffer. TARGET ENTROPY -10 Desired policy entropy (related to α) on two detection systems, NetBeacon and CICFlowMeter + MLP, across all eight datasets. Table VI shows the ASR. In the first scenario, we retain stage 1 and replace stage 2 with randomly selecting positions for feature modifications(denoted as S1). Clearly, under this setting, the attacker cannot find the optimal modification positions, resulting in a significant drop in attack capability. On both datasets, the attack success rate drops by an average of 56%. This result underscores that merely integrating BERT-generated traffic patterns is insufficient to evade detection; the RL step in Stage 2 is crucial for identifying the most strategic insertion points. In the second scenario, we remove stage 2 and replace it with a new Mask-Fill strategy. The first strategy is to fill in the selected positions with stochastic values between the minimum and maximum values of the same flow feature sequence (denoted as S2-S). By converting the Markov decision process from deterministic to stochastic, it becomes exceedingly difficult for the RL algorithm to converge. Consequently, we observe that the RL strategy predominantly adds chaff packets because the rewards for this type of action are relatively stable. The second strategy is filling in the selected positions with TABLE VI EFFECT OF TWO-STAGE FRAMEWORK. S1, S2-S, S2-F, NETM. STAND FOR STAGE 1 ONLY, STAGE 2 ONLY WITH STOCHASTIC VALUE, STAGE 2 ONLY WITH FIXED VALUE, AND THE OVERALL NETMASQUERADE. Target Systems NetBeacon CICFlowMeter + MLP Methods S1 S2-S S2-F NetM. S1 S2-S S2-F NetM. OS Scan 0.940 0.990 0.990 0.999 0.479 0.990 0.999 0.999 Fuzzing 0.538 0.936 0.996 0.999 0.063 0.004 0.009 0.974 SSDP Flood 0.001 0.434 0.481 0.999 0.488 0.557 0.639 0.999 SYN DoS 0.058 0.325 0.545 0.999 0.508 0.011 0.754 0.996 Mirai 0.915 0.895 0.915 0.990 0.488 0.856 0.938 0.999 Zeus 0.355 0.508 0.508 0.945 0.347 0.813 0.891 0.987 Storm 0.201 0.233 0.239 0.997 0.647 0.797 0.817 0.890 Waledac 0.750 0.727 0.924 0.999 0.123 0.783 0.880 0.981 the average value of the same flow feature (denoted as S2-F). Due to the variation in sending patterns across different flow segments, this strategy is limited in some cases. As Table 3 illustrates, the attack success rate for these two strategies decrease by an average of 37.4% and 26.8%, and neither strategy is effective against high-speed traffic. This outcome underscores how merely relying on an average-value approach or a random approach to features cannot capture dynamic and peak-driven traffic patterns-an issue that becomes even more pronounced in fine-tuning scenarios such as high-speed traffic. Traffic-BERT, on the other hand, guides the RL training process by offering stable and effective benign-pattern perturbations. Although more complex candidate Mask-Fill rules could be considered, these rules can only be applied during stage 2, which would exponentially expand the action space and lead to an action space explosion. 18
2510.14903
Skyrmion behavior in attractive-repulsive square array of pinning centers L. Basseto1, N. P. Vizarim2, J. C. Bellizotti Souza2 and P. A. Venegas1 1 Department of Physics, Schools of Sciences, S˜ao Paulo State University - UNESP, 17033-360 Bauru, SP, Brazil 2 ”Gleb Wataghin” Institute of Physics, Department of Condensed Matter Physics, University of Campinas - UNICAMP, 13083-859, Campinas, SP, Brazil E-mail: lucas.basseto@unesp.br Abstract. We investigate the driven dynamics of a single skyrmion in a square lattice of mixed pinning sites, where attractive and repulsive defects coexist using a particle-based model. The mixed landscape yields directional locking at θsk = −45◦and flow at locked angles near the intrinsic skyrmion Hall angle. By mapping defect strengths, we show that weaker attraction lowers the depinning threshold, whereas stronger repulsion stabilizes and broadens the −45◦locking plateau. Moreover, combinations of attractive and repulsive defect strengths allows control of directional lockings and their force ranges. Defect size further tunes the response, selecting among −45◦, −50◦, −55◦, and ≈−59◦. These results establish mixed pinning as a practical knob to steer skyrmion trajectories and the effective Hall response, providing design guidelines for skyrmion-based memory and logic devices. Keywords: Magnetism, skyrmions, pinning, dynamic phases, directional locking arXiv:2510.14903v1 [cond-mat.mes-hall] 16 Oct 2025 Skyrmion behavior in attractive-repulsive square array of pinning centers 2 1. Introduction Magnetic skyrmions are topological excitations stabilized in chiral magnets by the interplay between exchange and the Dzyaloshinskii–Moriya interaction (DMI) [1, 2]. Owing to their nontrivial topology, skyrmions are robust against moderate perturbations and can be treated as quasiparticles. Beyond their fundamental interest, skyrmions are promising candidates for information carriers in spintronic applications, where the bit state can be encoded in their presence or absence [3, 4, 5, 6], and they have also been proposed as building blocks for qubits and novel logic elements [7, 8]. A central challenge in such skyrmion-based devices is that skyrmion motion is strongly influenced by a Magnus force set by topology [9, 10]. In clean samples, a driven skyrmion travels at an intrinsic angle θ int sk relative to the drive, a phenomenon known as the skyrmion Hall effect (SkHE) [11, 12]. The magnitude of θ int sk increases with the ratio of the Magnus term to the dissipative term, and experimentally spans from a few degrees to very large values depending on material parameters and skyrmion size [11, 13, 12]. Large deflections are detrimental for racetrack-like devices, since skyrmions may leave the track and be annihilated at the edges [14, 15]. In realistic media, pinning strongly impacts trajectories and effectively changes the observed skyrmion Hall angle θsk [16, 17]. These pinning centers can be either attractive or repulsive, natural or nanoengineered. Remarkably, periodic pinning has been shown to quantize or lock the direction of motion as the drive increases, enabling controllable transport [18, 19, 20, 21, 22, 23]. This directional locking is analogous to phenomena in superconducting vortices [24] and colloids [25, 26, 27] driven over periodic substrates. For skyrmions, the drive direction is fixed, but the velocity-dependent θsk produces discrete reorientations of the trajectory as the drive magnitude is ramped. On a locking step, θsk remains constant over a finite drive interval, while transitions between steps coincide with characteristic features (dips or cusps) in the velocity–force curves. Such plateaus offer a practical knob to steer skyrmions via small drive adjustments. Additional strategies to control motion include ratchet effects [28, 29, 30, 31, 32, 33], interface-guided transport [34, 35], strain [36], magnetic or temperature gradient drives [37, 38, 39, 40, 41], 1D potential channels [42, 43, 44], nanotracks [45, 6, 46], and hybrid skyrmion–vortex architectures [47, 48, 49]. For square arrays of periodic obstacles, the locked directions often follow φ = arctan(p/q) with integers p and q [18, 19]. However, to the best of our knowledge, the impact of coexisting attractive and repulsive sites (mixed sites) arranged on a square lattice has not been systematically explored on the skyrmion dynamics. In this work we address this gap by studying skyrmion motion in a lattice combining attractive and repulsive defects. We demonstrate new ways for steering trajectories and tuning the effective Hall response, thereby refining control of skyrmion transport. Our simulation methodology (shown in Section 2) employs a particle-based approach for a single skyrmion [50], with dissipative and Magnus terms in the equation of motion. Defect interactions are modeled by Gaussian potentials, and an external drive FD is applied. We characterize the direction of skyrmion motion by the Hall angle θsk = arctan(v⊥/v∥), defined from velocity components perpendicular and parallel to FD. Section 3 we present our results where three dynamical regimes can be seen: (i) a pre-depinning regime where the skyrmion is trapped; (ii) a directional-locking regime with a stable trajectory at −45◦; and (iii) a high-drive Magnus regime where θsk →θ int sk . The novelty here is that the relative strengths of attractive and repulsive pinning control the extent of each regime, and that sufficiently strong attraction suppresses the locking plateau. In Section 4 we map the pinning strength effects to identify optimal operating windows. In Section 5 we investigate pinning size effects and find that enlarging repulsive sites can modify θsk at low drives, while at higher drives the motion is dominated by the intrinsic SkHE and becomes less sensitive to size. In Section 6 we discuss our results highlighting important features and addressing future prospects, and in Section 7 a summary is exhibited. Skyrmion behavior in attractive-repulsive square array of pinning centers 3 0 10 20 30 X 0 5 10 15 20 25 30 Y Figure 1. Schematic of the square array of periodic defects in our system that interact with the skyrmion. Red circles correspond to repulsive obstacles, while black circles to attractive pinning sites. 2. Simulation We simulated the dynamics of a single skyrmion in a ferromagnetic thin film that can host N´eel skyrmions at zero temperature. The skyrmion is embedded in a two-dimensional simulation box of dimensions Lx ×Ly, with periodic boundary conditions along the x and y directions. As shown in Fig. 1, the system contains a square lattice of nanoengineered defects that are either attractive or repulsive to skyrmions. The skyrmion dynamics follows a particle-based equation of motion as in Lin et al. [50]. We use a custom Fortran code based on standard molecular dynamics methods [51]. The equation of motion is: αdvi +αm ˆz×vi = Fat i +Frep i +FD (1) The first term on the left-hand side represents the damping that arises from spin precession and electron dissipation in the skyrmion core, where αd is the damping constant. The second term represents the Magnus force, acting perpendicularly to the skyrmion velocity, where αm is the Magnus constant. On the right side of the equation we have the skyrmion-defect interactions, Fat i and Frep i , that are modeled by a Gaussian potential of the type Uo = C0 e−(rio/ao)2, where C0 represents the strength of the defect potential. The force associated with this potential is Fio = −∇Uo = −2Uo a2o rio e−(rio/ao)2 ˆrio. Here, rio is the distance between skyrmion i and pinning site o, ao is the pinning radius, and ˆrio is the unit vector pointing from the pinning center to the skyrmion. As mentioned earlier, in this work two types of obstacles were considered. Attractive pinning sites are described by a negative potential strength C0 = −Cat, producing a force that pulls the skyrmion toward the defect center. While repulsive obstacles have a positive potential strength C0 = Crep resulting in a force that pushes the skyrmion away from the obstacle center. Thus, each pinning or obstacle site have different potential strengths Cat and Crep, and different radius aat and arep, respectively. The third term on the right side is the interaction between the skyrmion and the external transport force, given by FD = FD ˆd, Skyrmion behavior in attractive-repulsive square array of pinning centers 4 where throughout this work we use ˆd = ˆx. We increase FD in small steps of δFD = 0.01 and spend 2×105 simulation time steps, with a time step δt = 0.001, at each drive increment to ensure a steady state. We measure the average velocities V∥ = ⟨v·bx⟩and ⟨V⊥⟩= ⟨v·by⟩. We normalize all distances by the screening length ξ and select the damping and Magnus constants such that αm2 +αd2 = 1. Besides that, we also fix the rate (αm/αd) = 1.732, resulting in an intrinsic Hall angle of approximately θsk ≈−60◦in all calculations. 3. Skyrmion Dynamics with Attractive and Repulsive Defects We first considerCat = −0.5, Crep = 1.0, and aat = arep = 0.65. Fig. 2 shows ⟨V∥⟩, ⟨V⊥⟩and the corresponding θsk as functions of FD. The initial skyrmion position, i.e., the ground state is the skyrmion trapped in a attractive pinning site close to the center of the simulation box. It is important to note that the energy of the system is the same for the skyrmion trapped at any attractive pinning site since we consider an infinite ferromagnetic thin-film with periodic boundary conditions. As we increase the external driving, FD, the skyrmion average velocities remain with zero values for FD ≤0.68, as shown in Fig. 2 (a). This means that the skyrmion remain trapped in the attractive defect, and its equilibrium position just deviates a bit from the center of the defect due to the external driving. This is the pinned phase. For FD > 0.68, Fig. 2 (a) shows that V∥ exhibit a sharp increase and ⟨V⊥⟩a sharp decrease, indicating that the skyrmion begin to move. For the interval 0.68 < FD < 0.95 the average velocities varies smoothly with the applied drive. This regime can be seen in Fig. 2 (b) where skyrmion direction of motion is locked at θsk = 45◦. This is a clear directional locking effect for skyrmions in a periodic array of repulsive obstacles that has been observed in previous works [18, 19, 21]. However, here we show that this phase is also observed in a mixture between attractive and repulsive defects. Due to the ratio αm/αd = 1.732, the skyrmion intrinsic Hall angle is θ int sk ≈60◦. However, the interaction between the pinning landscape, the skyrmion and the external drive results in a directional locking at θsk = 45◦. Different than observed in previous works [18, 19, 21], where only repulsive obstacles were considered, here we do not observe the skyrmion moving at θsk = 0◦. In order for the skyrmion to move along θsk = 0◦it is necessary close repulsive defects to repel the skyrmion and lock its motion along the driving force. Here, due to the mixture of repulsive and attractive defects, we observed a expanded range of forces of θsk = 45◦. In Fig. 3 (a) we illustrate the skyrmion trajectories among the defects. Note that the skyrmion moves along the diagonal of the sample, being repelled by the repulsive obstacles in a zig-zag type of motion. For FD > 0.95, both skyrmion average velocities, V∥ and ⟨V⊥⟩, exhibit decrease in magnitude followed by oscillations, as shown in Fig. 2 (a), indicating the presence of a transient skyrmion motion between stable phases. This can also be seen in Fig 2 (b), where θsk is oscillating and changing locking directions in short intervals of FD. For 1.13 < FD < 1.42, the skyrmion velocities increase in magnitude more smoothly, along with a stable θsk ≈59◦. For FD > 2.50, the skyrmion direction of motion tends to align with the intrinsinc skyrmion Hall angle continuously, similar to observed in previous works [18, 19, 22]. This is a clear signature that pinning effects are being mitigated by the strong driving forces, resulting in a dynamics that resembles the clean sample case. In Fig. 3 (b) it is illustrated some skyrmion trajectories for FD = 1.60, where the system is locked along θsk ≈59◦. In this case, the skyrmion trajectory passes through some attractive pinning sites but also some repulsive sites, different than shown in Fig. 3 (a). 4. Effects of Defect Strength on the Dynamics In Fig. 4 (a), we plot θsk as a function of the driving force FD for a system with both defect radius fixed at aat = arep = 0.65. The repulsive strength was kept constant at Crep = 1.0, while the attractive strength Cat Skyrmion behavior in attractive-repulsive square array of pinning centers 5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 FD −2.5 −2.0 −1.5 −1.0 −0.5 0.0 0.5 1.0 1.5 ⟨V∥⟩,⟨V⊥⟩ (a) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 FD −58 −56 −54 −52 −50 −48 −46 θsk(◦) (b) Figure 2. Results for a system with a single skyrmion in mixture of repulsive and attractive pinning sites as shown in Fig. 1, using aat = arep = 0.65, Crep = −0.5 and Crep = 1.0. (a) ⟨V∥⟩and ⟨V⊥⟩as a function FD. (b) The corresponding Hall angle θsk as a function of FD. 0 10 20 30 X 0 5 10 15 20 25 30 Y FD = 0.70 (a) 0 10 20 30 X 0 5 10 15 20 25 30 Y FD = 1.60 (b) Figure 3. Illustration of the skyrmion trajectory (blue line) under the defect array for different values of the external driving force FD for the system shown in Figs. 1. and 2. In (a) FD = 0.70, where a directional locking regime occurs with θsk = −45◦; (b) FD = 1.60, regime dominated by a directional locking θsk = −59◦very close to the intrinsic Hall angle θ int sk ≈−60◦. was varied using −0.25, −0.50, −0.75, and −1.0. The results clearly show that the depinning threshold is sensitive to the magnitude of the attractive strength Cat. In the inset of Fig. 4 (a), it is shown a clear linear increase in magnitude of Fdepin as a function of Cat. Additionally, we observe that the regime characterized by θsk = −45◦is dependent of Cat. For example, the range of FD in which the skyrmion remains in this regime is largest for Cat = −0.25. On the other hand, this regime vanishes completely for |Cat| ≥0.75. That is, for stronger values of Cat the skyrmion becomes strongly pinned at the pinning site. When the external drive is sufficient large to depin the skyrmion, it depins with high velocity and cannot stabilize the regime at which it locks at θsk = −45◦. This θsk = −45◦locking phase requires moderate skyrmion velocities to be stable. The results for θsk vs. FD for the system with fixed aat = arep = 0.65 and Cat = −1.00, while varied Crep = 0.25, 0.50, 0.75, and 1.0 is shown in Fig. 4 (b). Different than shown in Fig. 4 (a), here the depinning thresholds are all the same, at FD = 1.34. Since the ground state correspond to a skyrmion trapped at Skyrmion behavior in attractive-repulsive square array of pinning centers 6 an attractive pinning site, the effect of Crep has no effect on the depinning forces. However, the dynamic phases strongly depends on Crep. For the case of Crep = 0.25 the skyrmion direction of motion tends to the intrinsic angle much faster than for stronger values of Crep. That is, the repulsive obstacle sites are crucial for controlling the skyrmion direction of motion, corroborating previous works [19]. For example, using Crep = 1.0, the intervals at which the dynamic phases are stable are much expanded, including the θsk ≈59◦ phase. These features can also be seen in Fig. 4 (c) and (d), where we performed a detailed analysis of the range of driving force regimes ∆FD for selected most prominent Hall angles, θsk = −45◦and −60◦. These ranges of ∆FD are determined directly from the θsk vs FD curves shown in (a) and (b), identifying the force ranges in which the θsk remains fairly constant within a tolerance of ∆θsk = ±1.5◦due to numeric oscillations. Fig. 4 (c) illustrates the intervals ∆FD for the pinned state, θsk = −45◦and −60◦as a function of the attractive force Cat, while fixed Crep = 0.5 for several values of Cat. From Fig. 4 (c), it is possible to see that the pinned phase progressively decreases as |Cat| decreases. As the attractive sites becomes weaker, it is easier for the skyrmion to depin. The θsk = −45◦regime can only be stable for |Cat| ≤0.3. Additionally, the interval of forces ∆FD where this regime is stable increases with decreasing Cat. The phase where the regime is locked at θsk = −60◦also increases as |Cat| decreases. This result is expected, as the attractive pinning sites becomes weaker, the system tends to behave more similar to a clean sample where the stable regime is flowing along the intrinsic angle. In Fig. 4 (d) is shown the intervals ∆FD as a function of the repulsive force Crep, while fixed Cat = −0.5. The pinned phase is completely independent of Crep. The θsk = −45◦regime is stable for Crep ≥0.7, while θsk = −60◦phase tends to be more robust as Crep decreases. These results show clearly that there is a balance of values of Crep and Cat where more dynamic phases can be observed. Thus, from the practical point of view, it is possible to modulate phases using a combination of attractive and repulsive defect sites. Fig. 5 illustrates selected skyrmion trajectories. In Fig. 5 (a), the skyrmion is flowing in a defect landscape using Cat = −1.0, Crep = 1.0 and FD = 1.50, where its motion is locked at θsk ≈−57◦. In this regime the skyrmion flows and interact with both attractive and repulsive sites resulting in a periodic motion displacing six lattice parameters along y and four along x, that is, a ratio of R = 6/4. The angle of motion can be calculated as θsk = arctan(R) ≈57◦. In transient Fig. 5 (b), using Cat = −0.75, Cat = 1.0 and FD = 1.60, the skyrmion motion is not locked at a specific direction. This is a transient motion between different locking directions. As can be seen, the skyrmion motion does not exhibit periodicity. For a complete description of the defect strengths, Cat and Crep, in Fig. 6 we show a phase diagram of ∆FD vs. Cat vs. Crep for the pinned phase, θsk = −45◦and θsk = −60◦. In Fig. 6 (a) it is clearly shown that the pinned phase depends solely on Cat. In Fig. 6 (b) we observe that θsk = −45◦regime can only occur for a combination of low Cat and high Crep. The white region in the plot indicates regions where this phase cannot be stabilized. Finally, θsk = −60◦is most prominent when both Crep and Cat are weak. That is, when the system exhibits weak pinning, the dynamics resembles the clean sample case, where the skyrmion flows along the intrinsic SkHE, which in our case is θ int sk ≈60◦. 5. Influence of Defect Size To investigate the effects of defect size on skyrmion dynamics, we fix Cat = −0.2 and Crep = 0.8 and then vary the size of the attractive and repulsive defect sites. These values were chosen as they exhibit the phases θsk = −45◦and ≈−60◦, allowing us to observe how only defect size effects the dynamic regimes. The radii of the defects, both attractive and repulsive, were varied from 0.05 to 1.0 in steps of 0.05. In Fig. 7 (a), using FD = 0.25, we can see that for small repulsive defects of arep < 0.4 the skyrmion Hall angle can lock at different angles θsk ≈−56◦or ≈−60◦depending on the value of aat. For small aat and arep, the skyrmion tends to lock at angles very close to the intrinsic angle, as expected. However, Skyrmion behavior in attractive-repulsive square array of pinning centers 7 0.0 0.5 1.0 1.5 2.0 2.5 3.0 FD −65 −60 −55 −50 −45 θsk(◦) Crep = 1.0 (a) −1.0 −0.5 Cat 0.5 1.0 Fdepin Cat = −0.25 Cat = −0.50 Cat = −0.75 Cat = −1.00 1.5 2.0 2.5 3.0 FD −59 −58 −57 −56 −55 θsk(◦) Cat = −1.0 (b) Crep = 0.25 Crep = 0.50 Crep = 0.75 Crep = 1.00 −1.0 −0.8 −0.6 −0.4 −0.2 Cat 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ∆FD Crep = 0.5 (c) θsk = −60◦ θsk = −45◦ Pinned 0.2 0.4 0.6 0.8 1.0 Crep 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ∆FD Cat = −0.5 (d) θsk = −60◦ θsk = −45◦ Pinned Figure 4. (a,b) Skyrmion Hall angle θsk as a function of the driving force, FD and (c,d) the range of forces ∆FD vs. the defect strengths. (a) Results for fixed Crep = 1.0 and varied Cat, and (b) for fixed Cat = −1.0 and varied Crep. (c) ∆FD vs. Cat with fixed Crep = 0.5 and (d) ∆FD vs. Crep with Cat = −0.5. Simulations were performed with the ratio αm/αd = 1.732 and aat = arep = 0.65. 0 10 20 30 X 0 5 10 15 20 25 30 Y (a) FD = 1.50 Crep = 1.00 Cat = −1.00 0 10 20 30 X 0 5 10 15 20 25 30 Y (b) FD = 1.60 Crep = 1.00 Cat = −0.75 Figure 5. Skyrmion trajectories in a periodic lattice of attractive and repulsive defects for different combinations of potential strengths and driving force, FD. (a) Cat = −1.0, Crep = 1.0 and FD = 1.50, where the skyrmion motion is locked at θsk = −57◦. (b) Cat = −0.75, Cat = 1.0 and FD = 1.60, where the skyrmion motion is in a transient state, between locked states θsk = −57◦and −59◦. Skyrmion behavior in attractive-repulsive square array of pinning centers 8 −1.0 −0.8 −0.6 −0.4 −0.2 Cat 0.2 0.4 0.6 0.8 1.0 Crep (a) Pinned 0.2 0.4 0.6 0.8 1.0 1.2 ∆FD −0.6 −0.4 −0.2 Cat 0.2 0.4 0.6 0.8 1.0 Crep (b) θsk = −45◦ 0.2 0.4 0.6 0.8 ∆FD −1.0 −0.8 −0.6 −0.4 −0.2 Cat 0.2 0.4 0.6 0.8 1.0 Crep (c) θsk = −60◦ 1.5 2.0 2.5 ∆FD Figure 6. Dynamic phases of ∆FD as a function of attractive (Cat) and repulsive (Crep) defect strengths using αm/αd = 1.732 and aat = arep = 0.65. In (a) diagram for the pinned phase, (b) θsk = −45◦; and (c) θsk = −60◦. for larger attractive sites, aat ≥0.8, the influence of the defects is strong enough to lock skyrmions along θsk ≈56◦. For larger repulsive defects of 0.40 < arep < 0.95, the skyrmion is locked at θsk ≈−45◦phase. For very large repulsive defects, arep = 1.0 the skyrmion locks at a different direction θsk ≈48◦. In Fig. 7 (b), using FD = 0.50, as we vary the size of both defects it is only possible to observe two different locking directions, θsk ≈−45◦for arep > 0.40, or ≈−60◦for arep < 0.40. In this case, as more external driving force is applied, the skyrmion has higher average velocities which helps it to lock at fewer directions. For FD = 0.75, as shown in Fig. 7 (c), the θsk ≈−45◦is much reduced, only available at 0.50 < arep < 0.65. For arep > 0.65 the skyrmion direction of motion tends to align along θsk ≈−50◦. Meanwhile, for FD = 1.00, shown in Fig. 7 (d), the skyrmion is even less influenced by the defects, changing from θsk ≈−60◦to ≈−56◦due to larger arep. Hence, defect size is a key parameter for steering skyrmion trajectories. Increasing the radius amplifies pinning or repulsive forces and expand directional locking phases, whereas smaller defects weaken these effects. Additionally, we expect that large and strong defects stabilize more locking directions. These findings chart a practical route to controlling skyrmion motion via coexisting attractive and repulsive defects. 6. Discussion Using particle-based simulations we showed that it is possible to control a single skyrmion direction of motion using a landscape composed of attractive and repulsive defects. We model the skyrmions following the model proposed by Lin et al. [50], where skyrmions are treated as point-like rigid bodies that cannot deform or change size; however, in real materials, skyrmions may deform, change size or be annihilated. These features can change the dynamics or give rise to new phenomena, which could be explored further using continuum-based simulations. Thermal fluctuations, neglected here, are likewise expected to modify the behavior, where thermal creep at low drives can shift depinning thresholds and phase boundaries, and fragile plateaus of small ∆FD values may disappear at moderate temperatures. We expect the qualitative influence of mixed pinning to be robust across other pinning geometries, while the preferred locking directions reflect lattice symmetry. That is, in the square array considered here, the principal direction is 45◦, whereas a triangular lattice should favor 30◦and 60◦. Skyrmion behavior in attractive-repulsive square array of pinning centers 9 0.2 0.4 0.6 0.8 1.0 arep −60 −55 −50 −45 −40 −35 θsk (a) FD = 0.25 aat = 0.2 aat = 0.4 aat = 0.6 aat = 0.8 aat = 1.0 0.2 0.4 0.6 0.8 1.0 arep −60 −55 −50 −45 −40 −35 θsk (b) FD = 0.50 0.2 0.4 0.6 0.8 1.0 arep −60 −55 −50 −45 −40 −35 θsk (c) FD = 0.75 0.2 0.4 0.6 0.8 1.0 arep −60 −55 −50 −45 −40 −35 θsk (d) FD = 1.00 Figure 7. Skyrmion Hall angle, θsk as a function of the repulsive defect size arep for different attractive defect sizes aat and driving force values: (a) FD = 0.25, (b) FD = 0.50, (c) FD = 0.75, and (d) FD = 1.0. 7. Summary In this work we have investigated the dynamics of a single skyrmion interacting with a square array of defects that coexists attractive and repulsive defects. The defect landscape produces directional locking at θsk = −45◦and flow near the intrinsic skyrmion Hall angle, in our case θsk ≈−60◦. Unlike repulsive-only arrays, our system does not exhibit a θsk = 0◦step, as the alternation of pin types suppresses longitudinal locking. By tuning defect strengths, we control both the depinning threshold and the extent of the −45◦ locking step, where weaker attraction lowers the depinning force, whereas stronger repulsion stabilizes directional locking and broadens its force window. Thus, the set and width of accessible locking steps can be expanded or suppressed by appropriate strength choices. We also investigated defect size effects. At low drives the dynamics is more sensitive to defect radii, enabling locking at θsk = −45◦, −50◦, −55◦, or ≈−59◦depending on parameters; at higher drives the motion converges toward θ int sk . We hope our results bring new insights in using a combined mixture of defects that are attractive and repulsive to enable new possibilities to control the skyrmion motion in future spintronic devices. Skyrmion behavior in attractive-repulsive square array of pinning centers 10 Acknowledgments This work was supported by the Brazilian agencies Coordenac¸˜ao de Aperfeic¸oamento de Pessoal de N´ıvel Superior - CAPES, Conselho Nacional de Desenvolvimento Cient´ıfico e Tecnol´ogico - CNPq, and Fundac¸˜ao de Amparo `a Pesquisa do Estado de S˜ao Paulo - FAPESP. L.B., N.P.V., and J.C.B.S acknowledge funding from FAPESP (Grants: 2024/21139-1, 2024/13248-5, and 2022/14053-8, respectively). We would like to thank FAPESP for providing the computational resources used in this work (Grant: 2024/02941-1). References [1] Bogdanov A and Hubert A 1994 Journal of Magnetism and Magnetic Materials 138 255–269 [2] Bogdanov A N and Yablonskii D A 1989 Sov. Phys. JETP 68 178–182 [3] Kiselev N S, Bogdanov A N, Sch¨afer R and R¨oßler U K 2011 J. Phys. D: Appl. Phys. 44 392001 [4] Hagemeister J, Romming N, von Bergmann K, Vedemenko E Y and Wiesendanger R 2015 Nat. Commun. 6 8455 [5] Luo S, Song M, Li X, Zhang Y, Hong J, Yang X, Zou X, Xu N and You L 2018 Nano Letters 18 1180–1184 [6] Zhang X, Ezawa M and Zhou Y 2015 Scientific Reports 5 9400 [7] Xia J, Zhang X, Liu X, Zhou Y and Ezawa M 2023 Phys. Rev. Lett. 130(10) 106701 [8] Romming N and et al 2013 Science 341 636–639 [9] Zang J, Mostovoy M, Han J H and Nagaosa N 2011 Physical Review Letters 107 136804 [10] Everschor-Sitte K and Sitte M 2014 Journal of Applied Physics 115 172602 [11] Jiang W, Zhang X, Yu G, Zhang W, Wang X, Benjamin Jungfleisch M, Pearson J E, Cheng X, Heinonen O, Wang K L, Zhou Y, Hoffmann A and te Velthuis S G E 2017 Nature Physics 13 162–169 [12] Litzius K, Lemesh I, Kr¨uger B, Bassirian P, Caretta L, Richter K, B¨uttner F, Sato K, Tretiakov O A, F¨orster J, Reeve R M, Weigand M, Bykova I, Stoll H, Sch¨utz G, Beach G S D and Kl¨aui M 2017 Nature Physics 13 170–175 [13] Zeissler K, Finizio S, Barton C, Huxtable A J, Massey J, Raabe J, Sadovnikov A V, Nikitov S A, Brearton R, Hesjedal T, Laan G, Rosamond M C, Linfield E H, Burnell G and Marrows C H 2020 Nature Communications 11 1–11 [14] Zhang Y, Luo S, Yan B, Ou-Yang J, Yang X, Chen S, Zhu B and You L 2017 Nanoscale 9 10212–10218 [15] Fert A, Reyren N and Cros V 2017 Nature Reviews Materials 2 17031 [16] Reichhardt C and Reichhardt C J O 2015 Physical Review Letters 114 [17] Reichhardt C, Reichhardt C J O and Miloˇsevi´c M V 2022 Rev. Mod. Phys. 94 035005 [18] Reichhardt C, Ray D and Reichhardt C J O 2015 Physical Review B 91 [19] Vizarim N P, Reichhardt C, Reichhardt C J O and Venegas P A 2020 New Journal of Physics 22 053025 [20] Vizarim N P, Reichhardt C J O, Venegas P A and Reichhardt C 2020 The European Physical Journal B 93 112 [21] Feilhauer J, Saha S, Tobik J, Zelent M, Heyderman L J and Mruczkiewicz M 2020 Physical Review B 102 184425 [22] Vizarim N P, Souza J C B, Reichhardt C, Reichhardt C J O and Venegas P A 2021 Journal of Physics: Condensed Matter 33 305801 [23] Stosic D, Ludermir T B and Miloˇsevi´c M V 2017 Physical Review B 96 [24] Reichhardt C and Nori F 1999 Physical Review Letters 82 414–417 [25] Bohlein T and Bechinger C 2012 Physical Review Letters 109 058301 [26] Reichhardt C and Olson Reichhardt C J 2004 Physical Review E 69 041405 [27] Gopinathan A and Grier D G 2004 Physical Review Letters 92 130602 [28] G¨obel B and Mertig I 2021 Scientific Reports 11 3020 [29] Ma X, Reichhardt C J O and Reichhardt C 2017 Physical Review B 95 104401 [30] Reichhardt C, Ray D and Reichhardt C J O 2015 New Journal of Physics 17 073034 [31] Souza J C B, Vizarim N P, Reichhardt C J O, Reichhardt C and Venegas P A 2021 Physical Review B 104 054434 [32] Vizarim N P, Reichhardt C J O, Venegas P A and Reichhardt C 2020 Journal of Physics Communications 4 085001 [33] Souza J C B, Vizarim N P, Reichhardt C J O, Reichhardt C and Venegas P A 2024 Physical Review B 109 054407 [34] Vizarim N P, Reichhardt C, Venegas P A and Reichhardt C J O 2021 Journal of Magnetism and Magnetic Materials 528 167710 [35] Zhang C L, Wang J N, Song C K, Mehmood N, Zeng Z Z, Ma Y X, Wang J B and Liu Q F 2022 Rare Metals 41 865–870 [36] Yanes R, Garcia-Sanchez F, Luis R F, Martinez E, Raposo V, Torres L and Lopez-Diaz L 2019 Applied Physics Letters 115 132401 [37] Zhang S L, Wang W W, Burn D M, Peng H, Berger H, Bauer A, Pfleiderer C, van der Laan G and Hesjedal T 2018 Nature Communications 9 2115 [38] Casiraghi A, Corte-Le´on H, Vafaee M, Garcia-Sanchez F, Durin G, Pasquale M, Jakob G, Kl¨aui M and Kazakova O 2019 Communications Physics 2 1–9 [39] Everschor K, Garst M, Binz B, Jonietz F, M¨uhlbauer S, Pfleiderer C and Rosch A 2012 Physical Review B 86 054432 Skyrmion behavior in attractive-repulsive square array of pinning centers 11 [40] Kong L and Zang J 2013 Physical Review Letters 111 067203 [41] Wang Y, Shimada T, Wang J, Kitamura T and Hirakata H 2021 Acta Materialia 221 117383 [42] Purnama I, Gan W L, Wong D W and Lew W S 2015 Scientific Reports 5 10620 [43] Juge R, Bairagi K, Rana K G, Vogel J, Sall M, Mailly D, Pham V T, Zhang Q, Sisodia N, Foerster M, Aballe L, Belmeguenai M, Roussign´e Y, Auffret S, Buda-Prejbeanu L D, Gaudin G, Ravelosona D and Boulle O 2021 Nano Letters 21 2989–2996 [44] Kern L M, Pfau B, Deinhart V, Schneider M, Klose C, Gerlinger K, Wittrock S, Engel D, Will I, G¨unther C M, Liefferink R, Mentink J H, Wintz S, Weigand M, Huang M J, Battistelli R, Metternich D, B¨uttner F, H¨oflich K and Eisebitt S 2022 Nano Letters 22 4028–4035 [45] Leliaert J, Gypens P, Miloˇsevi´c M V, Waeyenberge B V and Mulkers J 2018 J. Phys. D 52 024003 [46] Toscano D, Mendonc¸a J P A, Miranda A L S, de Araujo C I L, Sato F, Coura P Z and Leonel S A 2020 Journal of Magnetism and Magnetic Materials 504 166655 [47] Menezes R M, Neto J F S, Silva C C d S and Miloˇsevi´c M V 2019 Physical Review B 100 014431 [48] Neto J F and de Souza Silva C C 2022 Physical Review Letters 128 [49] Xie Y J, Qian A, He B, Wu Y B, Wang S, Xu B, Yu G, Han X and Qiu X G 2024 Physical Review Letters 133 [50] Lin S Z, Reichhardt C, Batista C D and Saxena A 2013 Physical Review B 87 [51] Paul W B 1993 Advanced Materials 5 223–224
Skyrmion behavior in attractive-repulsive square array of pinning centers L. Basseto1, N. P. Vizarim2, J. C. Bellizotti Souza2 and P. A. Venegas1 1 ̃ao Paulo State University - UNESP, 17033-360 Bauru, SP, Brazil 2 "Gleb Wataghin" - UNICAMP, 13083-859, Campinas, SP, Brazil E-mail: Abstract. We investigate the driven dynamics of a single skyrmion in a square lattice of mixed pinning sites, where attractive and repulsive defects coexist using a particle-based model. The mixed landscape yields directional locking at θsk = -45◦and flow at locked angles near the intrinsic skyrmion Hall angle. By mapping defect strengths, we show that weaker attraction lowers the depinning threshold, whereas stronger repulsion stabilizes and broadens the -45◦locking plateau. Moreover, combinations of attractive and repulsive defect strengths allows control of directional lockings and their force ranges. Defect size further tunes the response, selecting among -45◦, -50◦, -55◦, and ≈-59◦. These results establish mixed pinning as a practical knob to steer skyrmion trajectories and the effective Hall response, providing design guidelines for skyrmion-based memory and logic devices. Keywords: Magnetism, skyrmions, pinning, dynamic phases, directional locking 16 Oct 2025 Skyrmion behavior in attractive-repulsive square array of pinning centers 2 1. Introduction Magnetic skyrmions are topological excitations stabilized in chiral magnets by the interplay between exchange and the Dzyaloshinskii-Moriya interaction (DMI) [1, 2]. Owing to their nontrivial topology, skyrmions are robust against moderate perturbations and can be treated as quasiparticles. Beyond their fundamental interest, skyrmions are promising candidates for information carriers in spintronic applications, where the bit state can be encoded in their presence or absence [3, 4, 5, 6], and they have also been proposed as building blocks for qubits and novel logic elements [7, 8]. A central challenge in such skyrmion-based devices is that skyrmion motion is strongly influenced by a Magnus force set by topology [9, 10]. In clean samples, a driven skyrmion travels at an intrinsic angle θ int sk relative to the drive, a phenomenon known as the skyrmion Hall effect (SkHE) [11, 12]. The magnitude of θ int sk increases with the ratio of the Magnus term to the dissipative term, and experimentally spans from a few degrees to very large values depending on material parameters and skyrmion size [11, 13, 12]. Large deflections are detrimental for racetrack-like devices, since skyrmions may leave the track and be annihilated at the edges [14, 15]. In realistic media, pinning strongly impacts trajectories and effectively changes the observed skyrmion Hall angle θsk [16, 17]. These pinning centers can be either attractive or repulsive, natural or nanoengineered. Remarkably, periodic pinning has been shown to quantize or lock the direction of motion as the drive increases, enabling controllable transport [18, 19, 20, 21, 22, 23]. This directional locking is analogous to phenomena in superconducting vortices [24] and colloids [25, 26, 27] driven over periodic substrates. For skyrmions, the drive direction is fixed, but the velocity-dependent θsk produces discrete reorientations of the trajectory as the drive magnitude is ramped. On a locking step, θsk remains constant over a finite drive interval, while transitions between steps coincide with characteristic features (dips or cusps) in the velocity-force curves. Such plateaus offer a practical knob to steer skyrmions via small drive adjustments. Additional strategies to control motion include ratchet effects [28, 29, 30, 31, 32, 33], interface-guided transport [34, 35], strain [36], magnetic or temperature gradient drives [37, 38, 39, 40, 41], 1D potential channels [42, 43, 44], nanotracks [45, 6, 46], and hybrid skyrmion-vortex architectures [47, 48, 49]. For square arrays of periodic obstacles, the locked directions often follow φ = arctan(p/q) with integers p and q [18, 19]. However, to the best of our knowledge, the impact of coexisting attractive and repulsive sites (mixed sites) arranged on a square lattice has not been systematically explored on the skyrmion dynamics. In this work we address this gap by studying skyrmion motion in a lattice combining attractive and repulsive defects. We demonstrate new ways for steering trajectories and tuning the effective Hall response, thereby refining control of skyrmion transport. Our simulation methodology (shown in Section 2) employs a particle-based approach for a single skyrmion [50], with dissipative and Magnus terms in the equation of motion. Defect interactions are modeled by Gaussian potentials, and an external drive FD is applied. We characterize the direction of skyrmion motion by the Hall angle θsk = arctan(v⊥/v∥), defined from velocity components perpendicular and parallel to FD. Section 3 we present our results where three dynamical regimes can be seen: (i) a pre-depinning regime where the skyrmion is trapped; (ii) a directional-locking regime with a stable trajectory at -45◦; and (iii) a high-drive Magnus regime where θsk →θ int sk . The novelty here is that the relative strengths of attractive and repulsive pinning control the extent of each regime, and that sufficiently strong attraction suppresses the locking plateau. In Section 4 we map the pinning strength effects to identify optimal operating windows. In Section 5 we investigate pinning size effects and find that enlarging repulsive sites can modify θsk at low drives, while at higher drives the motion is dominated by the intrinsic SkHE and becomes less sensitive to size. In Section 6 we discuss our results highlighting important features and addressing future prospects, and in Section 7 a summary is exhibited. Skyrmion behavior in attractive-repulsive square array of pinning centers 3 0 10 20 30 X 0 5 10 15 20 25 30 Y Figure 1. Schematic of the square array of periodic defects in our system that interact with the skyrmion. Red circles correspond to repulsive obstacles, while black circles to attractive pinning sites. 2. Simulation We simulated the dynamics of a single skyrmion in a ferromagnetic thin film that can host N ́eel skyrmions at zero temperature. The skyrmion is embedded in a two-dimensional simulation box of dimensions Lx ×Ly, with periodic boundary conditions along the x and y directions. As shown in Fig. 1, the system contains a square lattice of nanoengineered defects that are either attractive or repulsive to skyrmions. The skyrmion dynamics follows a particle-based equation of motion as in Lin et al. [50]. We use a custom Fortran code based on standard molecular dynamics methods [51]. The equation of motion is: αdvi +αm ˆz×vi = Fat i +Frep i +FD (1) The first term on the left-hand side represents the damping that arises from spin precession and electron dissipation in the skyrmion core, where αd is the damping constant. The second term represents the Magnus force, acting perpendicularly to the skyrmion velocity, where αm is the Magnus constant. On the right side of the equation we have the skyrmion-defect interactions, Fat i and Frep i , that are modeled by a Gaussian potential of the type Uo = C0 e-(rio/ao)2, where C0 represents the strength of the defect potential. The force associated with this potential is Fio = -∇Uo = -2Uo a2o rio e-(rio/ao)2 ˆrio. Here, rio is the distance between skyrmion i and pinning site o, ao is the pinning radius, and ˆrio is the unit vector pointing from the pinning center to the skyrmion. As mentioned earlier, in this work two types of obstacles were considered. Attractive pinning sites are described by a negative potential strength C0 = -Cat, producing a force that pulls the skyrmion toward the defect center. While repulsive obstacles have a positive potential strength C0 = Crep resulting in a force that pushes the skyrmion away from the obstacle center. Thus, each pinning or obstacle site have different potential strengths Cat and Crep, and different radius aat and arep, respectively. The third term on the right side is the interaction between the skyrmion and the external transport force, given by FD = FD ˆd, Skyrmion behavior in attractive-repulsive square array of pinning centers 4 where throughout this work we use ˆd = ˆx. We increase FD in small steps of δFD = 0.01 and spend 2×105 simulation time steps, with a time step δt = 0.001, at each drive increment to ensure a steady state. We measure the average velocities V∥ = ⟨v·bx⟩and ⟨V⊥⟩= ⟨v·by⟩. We normalize all distances by the screening length ξ and select the damping and Magnus constants such that αm2 +αd2 = 1. Besides that, we also fix the rate (αm/αd) = 1.732, resulting in an intrinsic Hall angle of approximately θsk ≈-60◦in all calculations. 3. Skyrmion Dynamics with Attractive and Repulsive Defects We first considerCat = -0.5, Crep = 1.0, and aat = arep = 0.65. Fig. 2 shows ⟨V∥⟩, ⟨V⊥⟩and the corresponding θsk as functions of FD. The initial skyrmion position, i.e., the ground state is the skyrmion trapped in a attractive pinning site close to the center of the simulation box. It is important to note that the energy of the system is the same for the skyrmion trapped at any attractive pinning site since we consider an infinite ferromagnetic thin-film with periodic boundary conditions. As we increase the external driving, FD, the skyrmion average velocities remain with zero values for FD ≤0.68, as shown in Fig. 2 (a). This means that the skyrmion remain trapped in the attractive defect, and its equilibrium position just deviates a bit from the center of the defect due to the external driving. This is the pinned phase. For FD > 0.68, Fig. 2 (a) shows that V∥ exhibit a sharp increase and ⟨V⊥⟩a sharp decrease, indicating that the skyrmion begin to move. For the interval 0.68 0.95, both skyrmion average velocities, V∥ and ⟨V⊥⟩, exhibit decrease in magnitude followed by oscillations, as shown in Fig. 2 (a), indicating the presence of a transient skyrmion motion between stable phases. This can also be seen in Fig 2 (b), where θsk is oscillating and changing locking directions in short intervals of FD. For 1.13 2.50, the skyrmion direction of motion tends to align with the intrinsinc skyrmion Hall angle continuously, similar to observed in previous works [18, 19, 22]. This is a clear signature that pinning effects are being mitigated by the strong driving forces, resulting in a dynamics that resembles the clean sample case. In Fig. 3 (b) it is illustrated some skyrmion trajectories for FD = 1.60, where the system is locked along θsk ≈59◦. In this case, the skyrmion trajectory passes through some attractive pinning sites but also some repulsive sites, different than shown in Fig. 3 (a). 4. Effects of Defect Strength on the Dynamics In Fig. 4 (a), we plot θsk as a function of the driving force FD for a system with both defect radius fixed at aat = arep = 0.65. The repulsive strength was kept constant at Crep = 1.0, while the attractive strength Cat Skyrmion behavior in attractive-repulsive square array of pinning centers 5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 FD -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 ⟨V∥⟩,⟨V⊥⟩ (a) 0.0 0.5 1.0 1.5 2.0 2.5 3.0 FD -58 -56 -54 -52 -50 -48 -46 θsk(◦) (b) Figure 2. Results for a system with a single skyrmion in mixture of repulsive and attractive pinning sites as shown in Fig. 1, using aat = arep = 0.65, Crep = -0.5 and Crep = 1.0. (a) ⟨V∥⟩and ⟨V⊥⟩as a function FD. (b) The corresponding Hall angle θsk as a function of FD. 0 10 20 30 X 0 5 10 15 20 25 30 Y FD = 0.70 (a) 0 10 20 30 X 0 5 10 15 20 25 30 Y FD = 1.60 (b) Figure 3. Illustration of the skyrmion trajectory (blue line) under the defect array for different values of the external driving force FD for the system shown in Figs. 1. and 2. In (a) FD = 0.70, where a directional locking regime occurs with θsk = -45◦; (b) FD = 1.60, regime dominated by a directional locking θsk = -59◦very close to the intrinsic Hall angle θ int sk ≈-60◦. was varied using -0.25, -0.50, -0.75, and -1.0. The results clearly show that the depinning threshold is sensitive to the magnitude of the attractive strength Cat. In the inset of Fig. 4 (a), it is shown a clear linear increase in magnitude of Fdepin as a function of Cat. Additionally, we observe that the regime characterized by θsk = -45◦is dependent of Cat. For example, the range of FD in which the skyrmion remains in this regime is largest for Cat = -0.25. On the other hand, this regime vanishes completely for |Cat| ≥0.75. That is, for stronger values of Cat the skyrmion becomes strongly pinned at the pinning site. When the external drive is sufficient large to depin the skyrmion, it depins with high velocity and cannot stabilize the regime at which it locks at θsk = -45◦. This θsk = -45◦locking phase requires moderate skyrmion velocities to be stable. The results for θsk vs. FD for the system with fixed aat = arep = 0.65 and Cat = -1.00, while varied Crep = 0.25, 0.50, 0.75, and 1.0 is shown in Fig. 4 (b). Different than shown in Fig. 4 (a), here the depinning thresholds are all the same, at FD = 1.34. Since the ground state correspond to a skyrmion trapped at Skyrmion behavior in attractive-repulsive square array of pinning centers 6 an attractive pinning site, the effect of Crep has no effect on the depinning forces. However, the dynamic phases strongly depends on Crep. For the case of Crep = 0.25 the skyrmion direction of motion tends to the intrinsic angle much faster than for stronger values of Crep. That is, the repulsive obstacle sites are crucial for controlling the skyrmion direction of motion, corroborating previous works [19]. For example, using Crep = 1.0, the intervals at which the dynamic phases are stable are much expanded, including the θsk ≈59◦ phase. These features can also be seen in Fig. 4 (c) and (d), where we performed a detailed analysis of the range of driving force regimes ∆FD for selected most prominent Hall angles, θsk = -45◦and -60◦. These ranges of ∆FD are determined directly from the θsk vs FD curves shown in (a) and (b), identifying the force ranges in which the θsk remains fairly constant within a tolerance of ∆θsk = ±1.5◦due to numeric oscillations. Fig. 4 (c) illustrates the intervals ∆FD for the pinned state, θsk = -45◦and -60◦as a function of the attractive force Cat, while fixed Crep = 0.5 for several values of Cat. From Fig. 4 (c), it is possible to see that the pinned phase progressively decreases as |Cat| decreases. As the attractive sites becomes weaker, it is easier for the skyrmion to depin. The θsk = -45◦regime can only be stable for |Cat| ≤0.3. Additionally, the interval of forces ∆FD where this regime is stable increases with decreasing Cat. The phase where the regime is locked at θsk = -60◦also increases as |Cat| decreases. This result is expected, as the attractive pinning sites becomes weaker, the system tends to behave more similar to a clean sample where the stable regime is flowing along the intrinsic angle. In Fig. 4 (d) is shown the intervals ∆FD as a function of the repulsive force Crep, while fixed Cat = -0.5. The pinned phase is completely independent of Crep. The θsk = -45◦regime is stable for Crep ≥0.7, while θsk = -60◦phase tends to be more robust as Crep decreases. These results show clearly that there is a balance of values of Crep and Cat where more dynamic phases can be observed. Thus, from the practical point of view, it is possible to modulate phases using a combination of attractive and repulsive defect sites. Fig. 5 illustrates selected skyrmion trajectories. In Fig. 5 (a), the skyrmion is flowing in a defect landscape using Cat = -1.0, Crep = 1.0 and FD = 1.50, where its motion is locked at θsk ≈-57◦. In this regime the skyrmion flows and interact with both attractive and repulsive sites resulting in a periodic motion displacing six lattice parameters along y and four along x, that is, a ratio of R = 6/4. The angle of motion can be calculated as θsk = arctan(R) ≈57◦. In transient Fig. 5 (b), using Cat = -0.75, Cat = 1.0 and FD = 1.60, the skyrmion motion is not locked at a specific direction. This is a transient motion between different locking directions. As can be seen, the skyrmion motion does not exhibit periodicity. For a complete description of the defect strengths, Cat and Crep, in Fig. 6 we show a phase diagram of ∆FD vs. Cat vs. Crep for the pinned phase, θsk = -45◦and θsk = -60◦. In Fig. 6 (a) it is clearly shown that the pinned phase depends solely on Cat. In Fig. 6 (b) we observe that θsk = -45◦regime can only occur for a combination of low Cat and high Crep. The white region in the plot indicates regions where this phase cannot be stabilized. Finally, θsk = -60◦is most prominent when both Crep and Cat are weak. That is, when the system exhibits weak pinning, the dynamics resembles the clean sample case, where the skyrmion flows along the intrinsic SkHE, which in our case is θ int sk ≈60◦. 5. Influence of Defect Size To investigate the effects of defect size on skyrmion dynamics, we fix Cat = -0.2 and Crep = 0.8 and then vary the size of the attractive and repulsive defect sites. These values were chosen as they exhibit the phases θsk = -45◦and ≈-60◦, allowing us to observe how only defect size effects the dynamic regimes. The radii of the defects, both attractive and repulsive, were varied from 0.05 to 1.0 in steps of 0.05. In Fig. 7 (a), using FD = 0.25, we can see that for small repulsive defects of arep 0.40, or ≈-60◦for arep 0.65 the skyrmion direction of motion tends to align along θsk ≈-50◦. Meanwhile, for FD = 1.00, shown in Fig. 7 (d), the skyrmion is even less influenced by the defects, changing from θsk ≈-60◦to ≈-56◦due to larger arep. Hence, defect size is a key parameter for steering skyrmion trajectories. Increasing the radius amplifies pinning or repulsive forces and expand directional locking phases, whereas smaller defects weaken these effects. Additionally, we expect that large and strong defects stabilize more locking directions. These findings chart a practical route to controlling skyrmion motion via coexisting attractive and repulsive defects. 6. Discussion Using particle-based simulations we showed that it is possible to control a single skyrmion direction of motion using a landscape composed of attractive and repulsive defects. We model the skyrmions following the model proposed by Lin et al. [50], where skyrmions are treated as point-like rigid bodies that cannot deform or change size; however, in real materials, skyrmions may deform, change size or be annihilated. These features can change the dynamics or give rise to new phenomena, which could be explored further using continuum-based simulations. Thermal fluctuations, neglected here, are likewise expected to modify the behavior, where thermal creep at low drives can shift depinning thresholds and phase boundaries, and fragile plateaus of small ∆FD values may disappear at moderate temperatures. We expect the qualitative influence of mixed pinning to be robust across other pinning geometries, while the preferred locking directions reflect lattice symmetry. That is, in the square array considered here, the principal direction is 45◦, whereas a triangular lattice should favor 30◦and 60◦. Skyrmion behavior in attractive-repulsive square array of pinning centers 9 0.2 0.4 0.6 0.8 1.0 arep -60 -55 -50 -45 -40 -35 θsk (a) FD = 0.25 aat = 0.2 aat = 0.4 aat = 0.6 aat = 0.8 aat = 1.0 0.2 0.4 0.6 0.8 1.0 arep -60 -55 -50 -45 -40 -35 θsk (b) FD = 0.50 0.2 0.4 0.6 0.8 1.0 arep -60 -55 -50 -45 -40 -35 θsk (c) FD = 0.75 0.2 0.4 0.6 0.8 1.0 arep -60 -55 -50 -45 -40 -35 θsk (d) FD = 1.00 Figure 7. Skyrmion Hall angle, θsk as a function of the repulsive defect size arep for different attractive defect sizes aat and driving force values: (a) FD = 0.25, (b) FD = 0.50, (c) FD = 0.75, and (d) FD = 1.0. 7. Summary In this work we have investigated the dynamics of a single skyrmion interacting with a square array of defects that coexists attractive and repulsive defects. The defect landscape produces directional locking at θsk = -45◦and flow near the intrinsic skyrmion Hall angle, in our case θsk ≈-60◦. Unlike repulsive-only arrays, our system does not exhibit a θsk = 0◦step, as the alternation of pin types suppresses longitudinal locking. By tuning defect strengths, we control both the depinning threshold and the extent of the -45◦ locking step, where weaker attraction lowers the depinning force, whereas stronger repulsion stabilizes directional locking and broadens its force window. Thus, the set and width of accessible locking steps can be expanded or suppressed by appropriate strength choices. We also investigated defect size effects. At low drives the dynamics is more sensitive to defect radii, enabling locking at θsk = -45◦, -50◦, -55◦, or ≈-59◦depending on parameters; at higher drives the motion converges toward θ int sk . We hope our results bring new insights in using a combined mixture of defects that are attractive and repulsive to enable new possibilities to control the skyrmion motion in future spintronic devices. Skyrmion behavior in attractive-repulsive square array of pinning centers 10 Acknowledgments This work was supported by the Brazilian agencies Coordenac ̧ ̃ao de Aperfeic ̧oamento de Pessoal de N ́ıvel Superior - CAPES, Conselho Nacional de Desenvolvimento Cient ́ıfico e Tecnol ́ogico - CNPq, and Fundac ̧ ̃ao de Amparo `a Pesquisa do Estado de S ̃ao Paulo - FAPESP. L.B., N.P.V., and J.C.B.S acknowledge funding from FAPESP (Grants: 2024/21139-1, 2024/13248-5, and 2022/14053-8, respectively). We would like to thank FAPESP for providing the computational resources used in this work (Grant: 2024/02941-1). References [1] Bogdanov A and Hubert A 1994 Journal of Magnetism and Magnetic Materials 138 255-269 [2] Bogdanov A N and Yablonskii D A 1989 Sov. Phys. JETP 68 178-182 [3] Kiselev N S, Bogdanov A N, Sch ̈afer R and R ̈oßler U K 2011 J. Phys. D: Appl. Phys. 44 392001 [4] Hagemeister J, Romming N, von Bergmann K, Vedemenko E Y and Wiesendanger R 2015 Nat. Commun. 6 8455 [5] Luo S, Song M, Li X, Zhang Y, Hong J, Yang X, Zou X, Xu N and You L 2018 Nano Letters 18 1180-1184 [6] Zhang X, Ezawa M and Zhou Y 2015 Scientific Reports 5 9400 [7] Xia J, Zhang X, Liu X, Zhou Y and Ezawa M 2023 Phys. Rev. Lett. 130(10) 106701 [8] Romming N and et al 2013 Science 341 636-639 [9] Zang J, Mostovoy M, Han J H and Nagaosa N 2011 Physical Review Letters 107 136804 [10] Everschor-Sitte K and Sitte M 2014 Journal of Applied Physics 115 172602 [11] Jiang W, Zhang X, Yu G, Zhang W, Wang X, Benjamin Jungfleisch M, Pearson J E, Cheng X, Heinonen O, Wang K L, Zhou Y, Hoffmann A and te Velthuis S G E 2017 Nature Physics 13 162-169 [12] Litzius K, Lemesh I, Kr ̈uger B, Bassirian P, Caretta L, Richter K, B ̈uttner F, Sato K, Tretiakov O A, F ̈orster J, Reeve R M, Weigand M, Bykova I, Stoll H, Sch ̈utz G, Beach G S D and Kl ̈aui M 2017 Nature Physics 13 170-175 [13] Zeissler K, Finizio S, Barton C, Huxtable A J, Massey J, Raabe J, Sadovnikov A V, Nikitov S A, Brearton R, Hesjedal T, Laan G, Rosamond M C, Linfield E H, Burnell G and Marrows C H 2020 Nature Communications 11 1-11 [14] Zhang Y, Luo S, Yan B, Ou-Yang J, Yang X, Chen S, Zhu B and You L 2017 Nanoscale 9 10212-10218 [15] Fert A, Reyren N and Cros V 2017 Nature Reviews Materials 2 17031 [16] Reichhardt C and Reichhardt C J O 2015 Physical Review Letters 114 [17] Reichhardt C, Reichhardt C J O and Miloˇsevi ́c M V 2022 Rev. Mod. Phys. 94 035005 [18] Reichhardt C, Ray D and Reichhardt C J O 2015 Physical Review B 91 [19] Vizarim N P, Reichhardt C, Reichhardt C J O and Venegas P A 2020 New Journal of Physics 22 053025 [20] Vizarim N P, Reichhardt C J O, Venegas P A and Reichhardt C 2020 The European Physical Journal B 93 112 [21] Feilhauer J, Saha S, Tobik J, Zelent M, Heyderman L J and Mruczkiewicz M 2020 Physical Review B 102 184425 [22] Vizarim N P, Souza J C B, Reichhardt C, Reichhardt C J O and Venegas P A 2021 Journal of Physics: Condensed Matter 33 305801 [23] Stosic D, Ludermir T B and Miloˇsevi ́c M V 2017 Physical Review B 96 [24] Reichhardt C and Nori F 1999 Physical Review Letters 82 414-417 [25] Bohlein T and Bechinger C 2012 Physical Review Letters 109 058301 [26] Reichhardt C and Olson Reichhardt C J 2004 Physical Review E 69 041405 [27] Gopinathan A and Grier D G 2004 Physical Review Letters 92 130602 [28] G ̈obel B and Mertig I 2021 Scientific Reports 11 3020 [29] Ma X, Reichhardt C J O and Reichhardt C 2017 Physical Review B 95 104401 [30] Reichhardt C, Ray D and Reichhardt C J O 2015 New Journal of Physics 17 073034 [31] Souza J C B, Vizarim N P, Reichhardt C J O, Reichhardt C and Venegas P A 2021 Physical Review B 104 054434 [32] Vizarim N P, Reichhardt C J O, Venegas P A and Reichhardt C 2020 Journal of Physics Communications 4 085001 [33] Souza J C B, Vizarim N P, Reichhardt C J O, Reichhardt C and Venegas P A 2024 Physical Review B 109 054407 [34] Vizarim N P, Reichhardt C, Venegas P A and Reichhardt C J O 2021 Journal of Magnetism and Magnetic Materials 528 167710 [35] Zhang C L, Wang J N, Song C K, Mehmood N, Zeng Z Z, Ma Y X, Wang J B and Liu Q F 2022 Rare Metals 41 865-870 [36] Yanes R, Garcia-Sanchez F, Luis R F, Martinez E, Raposo V, Torres L and Lopez-Diaz L 2019 Applied Physics Letters 115 132401 [37] Zhang S L, Wang W W, Burn D M, Peng H, Berger H, Bauer A, Pfleiderer C, van der Laan G and Hesjedal T 2018 Nature Communications 9 2115 [38] Casiraghi A, Corte-Le ́on H, Vafaee M, Garcia-Sanchez F, Durin G, Pasquale M, Jakob G, Kl ̈aui M and Kazakova O 2019 Communications Physics 2 1-9 [39] Everschor K, Garst M, Binz B, Jonietz F, M ̈uhlbauer S, Pfleiderer C and Rosch A 2012 Physical Review B 86 054432 Skyrmion behavior in attractive-repulsive square array of pinning centers 11 [40] Kong L and Zang J 2013 Physical Review Letters 111 067203 [41] Wang Y, Shimada T, Wang J, Kitamura T and Hirakata H 2021 Acta Materialia 221 117383 [42] Purnama I, Gan W L, Wong D W and Lew W S 2015 Scientific Reports 5 10620 [43] Juge R, Bairagi K, Rana K G, Vogel J, Sall M, Mailly D, Pham V T, Zhang Q, Sisodia N, Foerster M, Aballe L, Belmeguenai M, Roussign ́e Y, Auffret S, Buda-Prejbeanu L D, Gaudin G, Ravelosona D and Boulle O 2021 Nano Letters 21 2989-2996 [44] Kern L M, Pfau B, Deinhart V, Schneider M, Klose C, Gerlinger K, Wittrock S, Engel D, Will I, G ̈unther C M, Liefferink R, Mentink J H, Wintz S, Weigand M, Huang M J, Battistelli R, Metternich D, B ̈uttner F, H ̈oflich K and Eisebitt S 2022 Nano Letters 22 4028-4035 [45] Leliaert J, Gypens P, Miloˇsevi ́c M V, Waeyenberge B V and Mulkers J 2018 J. Phys. D 52 024003 [46] Toscano D, Mendonc ̧a J P A, Miranda A L S, de Araujo C I L, Sato F, Coura P Z and Leonel S A 2020 Journal of Magnetism and Magnetic Materials 504 166655 [47] Menezes R M, Neto J F S, Silva C C d S and Miloˇsevi ́c M V 2019 Physical Review B 100 014431 [48] Neto J F and de Souza Silva C C 2022 Physical Review Letters 128 [49] Xie Y J, Qian A, He B, Wu Y B, Wang S, Xu B, Yu G, Han X and Qiu X G 2024 Physical Review Letters 133 [50] Lin S Z, Reichhardt C, Batista C D and Saxena A 2013 Physical Review B 87 [51] Paul W B 1993 Advanced Materials 5 223-224
2510.14905
Continuous-time quantum walk on a random graph using quantum circuits Sabyasachi Chakraborty ,1, ∗Rohit Sarma Sarkar ,2, † Sonjoy Majumder ,1, ‡ and Rohit Kishan Ray 3, 4, § 1Department of Physics, Indian Institute of Technology Kharagpur, Kharagpur, West Bengal 721302, India 2International Centre for Theoretical Sciences (ICTS-TIFR), Bengaluru 560089, India 3Department of Material Science and Engineering, Virginia Tech, Blacksburg, VA 24061, USA 4Center for Theoretical Physics of Complex Systems, Institute for Basic Science(IBS), Daejeon 34126, Republic of Korea Quantum walks, particularly continuous-time quantum walks (CTQW), have emerged as powerful tools for modeling quantum transport, simulating complex dynamics, and developing quantum algorithms with potential speedups over classical counterparts. In this work, we present a scalable quantum circuit formalism to simulate CTQW on random graph structures, especially focusing on Erd˝os-R´enyi random graphs. Our quantum circuit construction efficiently implements the time evolution of the graph Laplacian, using the Trotterization scheme. We investigate key dynamical properties, i.e., the localization behavior of the CTQW. Our quantum circuit implementation over random graph ensures that the circuit design can work on any graph structure, thereby laying the foundation for realizing CTQW-based quantum simulations efficiently. I. INTRODUCTION Quantum computers provide a natural frame- work for simulating quantum dynamical pro- cesses that are otherwise challenging for classical computation [1–4]. Within this context, quan- tum walks (QWs) have emerged as powerful and versatile tools [5–10]. They serve as fundamen- tal algorithmic building blocks for graph-based problems [7, 11, 12], provide a rich framework for modeling quantum transport [13], and probing complex networks [14]. A QW is the quantum generalization of a classical random walk, where quantum superposition and interference replace classical stochasticity, giving rise to significantly different transport properties [12, 15, 16]. In con- trast to classical diffusion, quantum walks show ballistic spreading [8, 15], localization [17, 18], applications in optimization, simulation [19–21], and probing physical processes from energy ∗sabyasachi.sch@gmail.com † rohit15sarkar@yahoo.com ‡ sonjoym@phy.iitkgp.ac.in § rkray@vt.edu transfer and topological phases to transport in complex networks [13, 14, 22–24]. Implementing QWs on quantum hardware, therefore, repre- sents a promising route to bridge abstract quan- tum models with realizable algorithms and ex- perimentally accessible simulations [25, 26]. Quantum walks are broadly classified into discrete-time (DTQW) [12, 27] and continuous- time (CTQW) [5]. In DTQWs, evolution pro- ceeds through repeated coin–shift operations, introducing internal degrees of freedom that en- able controllability, making them well-suited for circuit design and local graph propagation [28– 30]. In contrast, CTQWs are defined directly on graphs, with the Hamiltonian typically chosen as the adjacency matrix or the graph Lapla- cian. They also don’t require any extra de- gree of freedom, such as coin operator. This makes CTQW circuit implementations challeng- ing [31, 32], since their continuous evolution depends on the global structure of the graph rather than local connections. Simulating CTQWs on a quantum computer requires the efficient encoding of the graph Hamiltonian into quantum circuits with the uni- tary time-evolution operator U(t) = exp(−iHt). arXiv:2510.14905v1 [quant-ph] 16 Oct 2025 2 H is the graph Hamiltonian, often chosen as the adjacency matrix or the Laplacian of the graph [8, 15, 31]. Thus, it is a problem of Hamiltonian simulation, and since Hamiltonian simulation is known to be BQP-complete [33– 35], efficient classical solutions are unlikely. A widely used strategy in this context is the imple- mentation of Trotter–Suzuki [36–39] decomposi- tion (TSD) or product formulas [40]. Here, the Hamiltonian H is broken down into a sum of local Hamiltonians (not necessarily commuting with each other) Hj such as H = PL j=1 ajHj and the TSD approximates the exponential of a sum of Hamiltonian terms at each Trotter step δt i.e., e(−i PL j=1 ajHjδt) by sequentially op- erating the exponential of the individual terms exp(−iajHjδt). Here δt = t/r where r is the Trotter step number controlling the approxima- tion error. Thus, the total time-evolution opera- tor becomes e−iHt ≈   L Y j=1 e−iHjδt   r . (1) A given 2n × 2n-dimensional Hamiltonian H, acting on n qubits can be written in terms of elementary gates using n-length Pauli strings, S(n) P = {Nn i=1 σi|σi ∈SP , 1 ≤i ≤n}, where SP = { I, X, Y, Z } is the Pauli matrix set con- sisting of SU(2) generators in the Pauli ba- sis. These strings form an orthonormal ba- sis for the algebra of 2n × 2n matrices. Each exp  −iajS(n) P δt  can be implemented by O(n) elementary gates. However, the number of Pauli terms grows exponentially (reaching O(4n) in the worst case), thereby increasing the depth of the circuit. Therefore, the gate complex- ity of a CTQW simulation is governed by the structure of the underlying graph Hamiltonian and the choice of decomposition scheme. For sparse graphs, product formula methods re- main tractable and allow faithful simulation of CTQWs with polynomial gate overhead. How- ever, random graphs can have dense connectivity, which increases the number of required terms during time evolution, rendering optimized de- composition strategies and error-controlled Trot- terization especially important. In this paper, we develop a quantum circuit framework for simulating continuous-time quan- tum walks (CTQWs) on random graph struc- tures, namely the Erd˝os–R´enyi random graphs. The graph Hamiltonian (H) is expressed in terms of the Laplacian (L) of the graph, which serves as the generator of the walk in our case. To implement this evolution efficiently on quantum hardware, we introduce a graph Laplacian parti- tioning algorithm (LPA). The LPA decomposes the Laplacian L of a given graph into a col- lection of sparse Laplacians {L(j)}, such that L = P2n−1 j=1 L(j), where each L(j) corresponds to a sparse submatrix of L. A key feature of this construction is that each L(j) is permutation- similar to a block-diagonal Hamiltonian consist- ing of 2×2 nontrivial blocks. These permutation matrices have a direct representation in terms of CNOT gates. We then present the quan- tum circuit construction of the block-diagonal Hamiltonian. The full-time evolution operator exp(−iHt) is then implemented using a TSD scheme applied to partitioned submatrices. This approach reduces the worst-case decomposition size from O(4n) (as encountered in Pauli-string decompositions) to O(2n −1), thus producing a significantly more resource-efficient circuit de- sign. By combining graph-theoretic partitioning with quantum circuit synthesis, our method es- tablishes a protocol for simulating CTQWs on arbitrary random graphs, offering a practical alternative to conventional Hamiltonian simula- tion techniques, and ensuring a wide applicabil- ity of our circuit. The rest of the paper is organized as fol- lows. In Sec. II, we review the preliminary con- cepts of graphs, continuous-time quantum walks (CTQWs), and define localization. Section III introduces the graph Laplacian partitioning algo- rithm, which forms the foundation for construct- ing quantum circuits for CTQWs. Section IV is devoted to the design of quantum circuits for CTQWs, while Sec. V presents their applica- tion to CTQW implementations. In Sec. VI, we analyze the accuracy of the Trotterized circuit evolution and study localization for CTQW cir- 3 cuit simulations. Finally, Sec. VII summarizes our findings and outlines future perspectives. II. THEORETICAL PRELIMINARIES A. Graphs and Continuous-Time Quantum Walks Let G = (V, E) be an undirected graph [9, 10, 41], where V = {v1, v2, . . . , vN} denotes the set of vertices and E ⊆{{vi, vj} | i < j} is the set of undirected edges. The structure of the graph is described by the N × N adjacency matrix A [9, 10, 41], defined as, [A]ij = ( 1, if {vi, vj} ∈E, 0, otherwise. (2) For undirected graphs, A is symmetric. The degree of a vertex vi is given by di = PN j=1 Aij, and the diagonal degree matrix D is defined by [D]ii = di. The Laplacian matrix of the graph is then L = D −A [9, 10]. In contrast to undirected simple graphs with predefined edges, random graphs [41] are gen- erated by probabilistic rules and can be viewed as a collection of vertices with edges chosen at random. A widely studied class of random graphs is the Erd˝os-R´enyi random graph (ERG) G(N, p) [42, 43], where each possible edge be- tween N vertices is included independently with probability p. In this study, we use ERG for con- structing our quantum circuit algorithms for continuous-time quantum walks [5], as ERG offers a generic and statistically well-defined model and is widely studied in the literature as well [18, 44, 45]. The adjacency matrices of the ERG are symmetric, with an expected ver- tex degree ⟨d⟩= p(N −1). So, for low values of p, the ERG becomes sparse. The structural randomness of these graphs ensures that the successful performance of our quantum circuit algorithm on them generalizes to a broad class of graphs, thereby providing a robust and mean- ingful testbed for our methods. A given graph can be mapped onto a quantum system by defining a Hamiltonian that reflects the connectivity of the graph [6, 9, 31, 46]. Two common choices of graph Hamiltonians are, H = −γA (adjacency-based), (3) or, H = −γL = −γ(A −D) (Laplacian-based), (4) where γ is the uniform hopping rate, denot- ing the transition probability per unit time be- tween any two connected vertices. For regular graphs [41, 46] where each vertex has the same degree, the Hamiltonians in Eqs. (3) and (4) generate equivalent dynamics up to a global phase [31]. However, for irregular graphs, how- ever, this equivalence no longer holds. In such cases, the degree matrix D is not proportional to the identity, so the eigenvalue shifts it in- troduces cannot be factored out as a constant phase in the evolution operator exp{(−iAt)}. Instead, they modify the relative phases of the eigenstates, producing distinct oscillatory behav- ior. To maintain interpretational consistency, and since we are dealing with random graphs, we will adhere to Laplacian-based Hamiltoni- ans Eq. (4). This choice allows us to capture the structural inhomogeneity of the underlying graph more accurately during CTQW evolution. The dynamics of quantum systems evolving over graph structures are elegantly captured by the framework of continuous-time quantum walks (CTQWs) [5, 9, 10]. The system evolves in a Hilbert space H of N-dimension, with |ψ(t)⟩ representing the state at a given time t. H is spanned by the computational basis { |j⟩}N−1 j=0 , with each basis state |j⟩corresponds to vertex vj. The state of the system at time t is repre- sented by a quantum state |ψ(t)⟩in a Hilbert space of N dimensions, where the basis states |j⟩= {|0⟩, |1⟩, . . . , |N −1⟩} correspond to the vertices {v0, v1, . . . , vN−1} of the graph. The probability of finding the walker at vertex vj at time t is given by, pj(t) = |⟨j|ψ(t)⟩|2. (5) The time evolution of the state is governed by 4 the Schr¨odinger equation (ℏ= 1), i d dt |ψ(t)⟩= H |ψ(t)⟩, (6) with formal solution, |ψ(t)⟩= e−iHt |ψ(0)⟩. (7) B. Localization In quantum walks, the localization implies non-vanishing probability of finding the walker at, or near its initial position in the long-time limit [17, 18, 47]. This localization can arise not only from disorder (Anderson-type localiza- tion [48–51]) but also from structural features of the graph, such as symmetries or spectral degeneracies [17, 52, 53]. Let the walker initially occupy a vertex v0, with state |ψ⟩. To characterize the long-time behavior of the walker at a given vertex, we define the time-averaged probability at vertex vj as (using Eq. (5)), pC(j) = lim T →∞ 1 T Z T 0 pj(t), = lim T →∞ 1 T Z T 0 | ⟨j| e−iHt |ψ⟩|2 dt. (8) For a walk with uniform probability distribution (i.e., maximally mixed walk) on a graph with N vertices, the walker is found at each vertex with probability 1/N. We say that a CTQW exhibits localization at a given vertex vj if, in the long-time limit, the probability of finding the walker at vj remains strictly greater than 1/N. In other words, pC(j) > N −1. (9) III. PARTITIONING THE LAPLACIAN We now establish the foundation for con- structing scalable quantum circuits to simulate (a) (b) (c) (d) (e) (f) 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 FIG. 1. (a) Random graph G(N, p) with N = 5 and each edge is present independently with probability p = 0.4. (b–f) Decomposition of the original graph into subgraphs, each corresponding to a distinct 1- sparse Hamiltonian representation. CTQWs by introducing a graph Laplacian par- titioning algorithm (Fig. 1). In essence, the partitioning algorithm allows us to break the Laplacian of a given graph into a set of Lapla- cians which represent sparse graphs. This is an essential step in our quantum circuit design. The LPA proceeds in two key stages (i) the gen- eration and indexing of permutation matrices, and (ii) the subsequent breakdown of the Lapla- cian into a sum of sparse sub-matrices which are permutation-similar to a block-diagonal ma- trix with 2 × 2 non-trivial blocks. The primary idea behind the algorithm is to keep track of the position of non-zero elements in the Laplacian operator. We begin with the following defini- tions. Definition 1. The support set ΓM of a d × d matrix M is defined as the set of positions of its non-zero elements. For example, suppose that a matrix A of size d × d has nonzero elements at positions i, j = (1, 1), (1, d), and (d, 1). Then, the support set is given by ΓA = { (1, 1), (1, d), (d, 1) }. Definition 2. Let A be any matrix. Its support matrix, denoted ˜ A, is the binary matrix entry- 5 wise defined by [ ˜ A]ij = ( 1, Aij ̸= 0, 0, Aij = 0. (10) In other words, ˜ A encodes the pattern of zero and nonzero entries i.e., structure of A. Definition 3. We call two d×d matrices E and F structurally similar, E SS= F , if their support sets ΓE and ΓF , respectively, are equal. Consider, ΓE = { (p, q) | [E]p,q ̸= 0, and 1 ≤p, q ≤d } and ΓF = { (p, q) | [F ]p,q ̸= 0, and 1 ≤p, q ≤d }, then ΓE = ΓF implies E is structurally similar to F , i.e., E SS= F . This trivially implies, ˜ E = ˜ F . With these definitions, we describe how the Laplacian of a graph can be decomposed into sparse components. Let G = (V, E) be an undi- rected graph with N = 2n vertices (n denotes number of qubits). Using Eq. (4), G can be rep- resented by its N × N Laplacian matrix L. L is symmetric by construction. Our objective is to decompose L into a sum of structured subma- trices L(j) as, L = N−1 X j=1 L(j), (11) Each L(j) is permutation-similar to a block- diagonal matrix composed of 2 × 2 non-trivial blocks i.e., L(j) = P j nL(j) BD(P j n)T . (12) Here L(j) BD is the block diagonal matrix com- prised of 2 × 2 non-zero blocks. The permuta- tion matrix P j n of size 2n × 2n acts on the jth sub-matrix, and T denotes the transpose opera- tion. We know that any 2 × 2 complex matrix A ∈M2(C) can be represented using genera- tors of SU(2), i.e., Pauli basis using the set of Pauli matrices SP = { I, X, Y, Z }. The set of n-length Pauli strings (n times tensor product of 2×2 matrics composed of Pauli matrices and the identity matrix) i.e., S(n) P = {Nn i=1 σi|σi ∈ SP , 1 ≤i ≤n} forms a basis for M2n(C), the set of 2n × 2n complex matrices. Lemma 1. Let A = Nn i=1 Ai ∈S(n) I,X ⊂ S(n) P be an n-length Pauli string comprising of I, and X. Further, B = Nn i=1 Bi ∈ S(n) P be another n-length Pauli string such that ( Bj ∈{X, Y } if Aj = X Bj ∈{I, Z} if Aj = I2. Then A SS= B. Proof. Since, X SS= Y , and I SS= Z, their Kro- necker products are also structurally similar by definition. Hence, A SS= B. Following Lemma 1, we can write the support matrix of any 2n × 2n Hamiltonian matrix using the binary Pauli basis {I, X}⊗n, where n is the qubit number. We define a support basis in the following way. For a given Pauli string as described above, we can replace I 7→0, and X 7→1, such that we can map a string of the form { IXXI } 7→{ 0110 }. This can be further identified with a basis in the 2n dimensional computational space (n is both the string length and the number of qubits, to be identified from the context). To further clarify, consider a three qubit (n = 3) Pauli string (IXI), which we can write as (010) ≡|010⟩. Now, as discussed above, we identify this with one of the basis elements in the 23 = 8 dimensional computational space, such as |010⟩≡|2⟩. Therefore, to index all such Pauli strings that span the given n qubit description of the support of A i.e., ˜ A, we can use the index j = 0, . . . , 2n− 1. Consider the three-qubit case as before. All the possible Pauli bases chosen from the set S(3) P = n III, IIX, IXI, IXX, XII, XIX, XXI, XXX o , 7→ n 000, 001, 010, 011, 100, 101, 110, 111 o 7→{0, 1, 2, 3, 4, 5, 6, 7} (13) 6 where j encodes the indices 0, 1 · · · 7. Having established the notion of the support basis and its indexing through j, we now turn to the construction of the corresponding permu- tation matrix P j n. Our objective is to represent the permutation operator using CNOT gates. We use CNOT(p, q) to identify the positions of the control (p) and target (q) qubits of the CNOT gates (excluding j = 0, 1). For the target qubit of CNOT, we need to identify the qubit associated with the given value of the index j as discussed above (see Eq. (13)). To ease our computation load, we fix the last (nth) qubit as control (or target, depending on j, see below). We convert j to its binary equivalent i.e., jbin. Since we are fixing the nth qubit, we remove the right-most value from the binary string of jbin and call the rest of the string b. In the binary string b, we record the positions of ones as κj (from right to left), and form the index set κ = { κj }1. Each κj ∈κ indicates a target qubit for a CNOT gate with the fixed control qubit being qubit nth i.e., CNOT(n, n−κj−1). However, this sequence of operations is valid for odd values of j. For even j, we need two extra CNOT gates where the control is n −max κ −1 i.e., CNOT(n−max κ−1, n). The expression for P j n can now be written as P j n = I N n, if j = 0 or 1; =      Q j CNOT(n, n−κj−1), if j is odd; Q j CNOT(n−max κ−1, n) CNOT(n, n−κj−1) CNOT(n−max κ−1, n), if j is even. (14) This product involving CNOT gates is equiv- alent to a permutation operation; we provide the proof in Appendix A 1. For example, for n = 4 and odd j, we have, P j=5 n=4 1 := 1 2 3 4 • P j=15 n=4 := 1 2 3 4 • • • 1 As an example, consider n = 4 and j = 5. Binary equivalent of j i.e., jbin = 0101. Dropping the right most value from jbin gives— string b = 010. Thus, κ = { 1 }. and for even j, is even, then we have, P j=4 n=4 := 1 2 • • 3 4 • P j=14 n=4 := 1 • • 2 3 4 • • • Each value of j encodes the underlying sup- port basis, which uniquely identifies the struc- ture of the given matrix. Therefore, two matrices A and B having different support matrices ˜ A and ˜ B, respectively, will have distinct j values. Lemma 2. Let ˜ A, ˜ B ∈S(n) I,X \ {I2n} such that ˜ A ̸= ˜ B with corresponding support sets ΓA and ΓB. Then ΓA ∩ΓB = ∅ 7 Proof. We know from Eq. (13) that, j ∈ { 0, 1, · · · , 2n −1 } for a n qubit Pauli string. And from Theorem 2 for each j1, j2 ∈ { 0, 1, · · · 2n −1 } there exist P j1 n and P j2 n such that P j1 n ˜ AP j1 n = P j2 n ˜ BP j2 n = I⊗(n−1) ⊗X. (15) Further, from Proposition 2, no two permutation matrix P j1 n and P j2 n for j1 ̸= j2 share the same 2-cycles. Let, ΓA ∩ΓB ̸= ∅. Let’s assume there ex- ists at least one common row and column in- dex p, q such that [A]p,q, [B]p,q ̸= 0. Since P j1 n (I⊗(n−1) ⊗X)P j1 n = ˜ A and P j2 n (I⊗(n−1) ⊗ X)P j2 n = ˜ B, there exists at least one 2-cycle that is common in both P j1 n and P j2 n due to our assumption. This leads to a contradiction. Hence, the lemma is proved. Corollary 1. Let ˜ A, ˜ B ∈S(n) I,X \ {I2n} such that ˜ A ̸= ˜ B with corresponding ΓA and ΓB. Then for any two Pauli strings E ∈SP (A) and F ∈SP (B) ΓE ∩ΓF = ∅. Proof. Readily follows from the definition of structural similarity (Definition 3) and Lemma 2. It can be observed that, the elements of S(n) {I,X}j for j ∈{1, . . . , 2n −1} is permutation similar, i.e., S(n) {I,X}j = P j n(I⊗(n−1) 2 ⊗X)P j n (see Appendix A 2). Please note that the total num- ber of Pauli strings in a n-qubit system is 4n, the number of similar structural sets is 2n. Thus, we get L = 2n−1 X j=0 P j nL(j) BDP j n (16) Since P j n is symmetric and orthogonal— P j n = (P j n)T . Thus, from our discussions so far, it is evident that L(j) BD = P j nL(j)P j n is a 2-sparse (each row and column have at most 2 non-zero elements) block-diagonal matrix with 2 × 2 non- trivial blocks. Despite its apparent simplicity, generating all L(j) BD involves a sequence of consec- utive matrix multiplications, which can become Algorithm 1: Laplacian partition algorithm for 2n × 2n Hermitian matrices Input: A 2n × 2n real symmetric matrix L Output: L(j) such that L = P2n−1 j=0 L(j), where L(j) = P j nL(j) BDP j n and L(j) BD is 2-sparse block-diagonal with 2 × 2 blocks Provided: 1. A 2n × 2n real symmetric matrix M 2. ˜ H =  1 1 1 −1  3. Permutation matrices from the set P j n. Extracting block-diagonal and diagonal elements for j ←0 to 2n −1 do L(j) ←02n×2n; for k ←0 to 2n −1 do [L(0)]k,k ←[L]k,k; for k ←0 to 2n −1 step +2 do [L(1)]k,k+1 ←[L]k,k+1; [L(1)]k+1,k ←[L(1)]k,k+1; Extracting sub-matrices permutation-similar to 2 × 2 blocks for j ←2 to 2n −1 do for u ←0 to 2n−1 −1 do if j is odd then Compute αΛ j−1 2 (u) and βΛ j−1 2 (u); if α < β then [L(j)]α−1,α ←[L]α−1,β; [L(j)]β,β−1 ←[L]α,β−1; [L(j)]α,α−1 ←[L]β,α−1; [L(j)]β−1,β ←[L]β−1,α; else Compute αΛ j 2 (u) and βΛ j 2 (u); if α < β then [L(j)]α−1,α ←[L]α−1,β; [L(j)]β,β+1 ←[L]α,β+1; [L(j)]α,α−1 ←[L]β,α−1; [L(j)]β+1,β ←[L]β+1,α; computationally demanding. In order to lower 8 the complexity of the algorithm, we exploit the sparse structure of P j n. Let for some real sym- metric matrix M, we denote M ′ = P j nMP j n. When j is odd, one can observe from Proposi- tion 2 that [M ′]ακ j−1 2 (u) −1, ακ j−1 2 (u) = [M]ακ j−1 2 (u) −1, βκ j−1 2 (u) [M ′]βκ j−1 2 (u), βκ j−1 2 (u) −1 = [M]ακ j−1 2 (u), βκ j−1 2 (u) −1. (17) Subsequently, if j is even, then [M ′]ακ j 2 (u)−1,ακ j 2 (u) = [M]ακ j 2 (u)−1,βκ j−1 2 (u) [M ′]βκ j 2 (u),βκ j 2 (u)+1 = [M]ακ j 2 (u),βκ j−1 2 (u)+1, (18) . where u is an integer such that 0 ≤u ≤ 2n−1 −1 and have a corresponding binary repre- sentation u = (un−2, . . . , u0). The terms Λ, α, β are defined in Proposition 2 and also in Ref. [54]. M being an symmetric matrix one can easily observe that after performing permutation if [M]i,j →[M]i′,j′ then [M]j,i →[M]j′,i′. Thus, we can directly substitute matrix mul- tiplication with swapping the elements around by harnessing the sparsity pattern of the per- mutation matrices. We finally arrive at our decomposition algorithm 1. Theorem 1. The running time complexity for algorithm 1 is O(N 2) where N = 2n. Proof. Follows from the algorithm immediately. IV. QUANTUM CIRCUIT DECOMPOSITION For a given graph, we simulate the time evolution operator U = exp(−iHδt), where H = −γL 4. Using Eqs. (11), (12) the Hamilto- nian can be expressed as, H = −γ 2n−1 X j=1 P j n L(j) BD P j n. (19) To simulate the corresponding dynamics on a quantum circuit, we approximate the unitary evolution operator U via a first-order Trotter [36– 39] expansion. The effective unitary becomes, U = exp  iγ 2n−1 X j=1 P j n L(j) BD P j nδt  , = 2n−1 Y j=1 P j n exp  iγ L(j) BD δt  P j n , = 2n−1 Y j=1 P j n ˘U (j) BDP j n , (20) where we define ˘U (j) BD = exp  iγL(j) BDδt  as the block-diagonal unitary corresponding to the jth component. Now, for each block-diagonal unitary ˘U (j) BD we seek circuit decomposition. Since ˘U (j) BD is composed of 2 × 2 non-trivial blocks, we can write the following, ˘UBD =      ˘U1(∆1, θ1, ζ1, φ1) ˘U2(∆2, θ2, ζ2, φ2) ... ˘U2n−1(∆2n−1, θ2n−1, ζ2n−1, φ2n−1)     , (21) 9 where each 2 × 2 block is of the form ˘Ub(∆b, θb, ζb, φb) =  exp{i(∆b+θb+ζb)} cos φb exp{i(∆b+θb−ζb)} sin φb −exp{i(∆b−θb+ζb)} sin φb exp{i(∆b−θb−ζb)} cos φb  =  ei∆b 0 0 ei∆b   Ub  (22) Here ∆b, θb, ζb, and φb are real-valued parame- ters. When e2i∆b = ±1, the block ˘Ub reduces to a special unitary matrix Ub(θb, ζb, φb) ∈SU(2). Thus, ˘UBD = ei∆bI  UBD  (23) where, I is 2 × 2 identity matrix and (UBD) contains 2×2 special unitary block Ub(θb, ζb, φb). To understand the circuit-level realization of (UBD), we fix the nth qubit as the target, with all remaining qubits acting as controls. For an axis a ∈{ Y, Z } we use the standard one-qubit rotations Ry(θ) =  cos θ sin θ −sin θ cos θ  , (24) and Rz(θ) =  eiθ 0 0 e−iθ  . (25) We write CNOT(c, t) for a CNOT with control c and target t, and it can be expressed as, CNOT(c, t) = |0⟩⟨0|c ⊗It + |1⟩⟨1|c ⊗Xt. (26) Two basic conjugation identities that will be used are, X Ra(ϕ) X = Ra(−ϕ) for a ∈{ Y, Z } , Ra(α) Ra(β) = Ra(α + β). (27) Since any 2 × 2 special unitary matrix has a ZYZ decomposition, UBD has a circuit from using the multi-controlled rotation gates, which we explain below. Definition 4. For n-qubit systems, let n ≥2. The first n −1 qubits form the control regis- ter, and the n-th qubit is the target. For a list of angles Θ = {θb}1≤b≤2n−1 the n-qubit multi- controlled rotation around axis a is the block- diagonal unitary defined as [55–57], Fn(Ra; Θ) =    Ra(θ1) ... Ra(θ2n−1)    (28) The corresponding circuit is given below, 1 ... n −1 n Fn(Ra) (29) Example: The 2-qubit multi-controlled rota- tion gate circuit from circuit 29 is, 1 • • 2 Ra(θ1) Ra(θ2) (30) The 3-qubit multi-controlled rotation gate cir- cuit from circuit 29 is, 1 • • 2 • • • • 3 Rz(θ1) Rz(θ2) Rz(θ3) Rz(θ4) (31) Definition 5. A unitary U on n-qubits is 2 × 2 block diagonal with, U = M 1≤b≤2n−1 Ub, and Ub ∈SU(2) (32) 10 Every Ub admits a ZYZ factorization Ub = Rz(αb) Ry(γb) Rz(βb). (33) Consequently, Proposition 1. Any U (Eq. (32)) can be im- plemented as [55–57] U = Fn(Rz; {αb}) Fn(Ry; {γb}) Fn(Rz; {βb}) . (34) The corresponding circuit representation is given by, 1 2 ... ... ... ... n −1 n Fn(Rz(α1, . . . , α2n−1)) Fn(Ry(γ1, . . . , γ2n−1)) Fn(Rz(β1, . . . , β2n−1)) (35) Lemma 3. [55, 56] For an n-qubit cir- cuit, let k = n −1 be the number of con- trol qubits and let the last qubit be the tar- get. Consider a sequence of single-qubit rota- tions Ra(ω1), Ra(ω2), . . . , Ra(ω2n−1) on the tar- get with Ω= {ωi}1≤i≤2n−1, with a ∈{ Y, Z }. For each i, let mi ∈{ 0, 1 }k encode which con- trol lines are connected to the target immediately before Ra(ωi). Then the total unitary is block diagonal in the control basis, U = M c∈{ 0,1 }k Ra(ηc), (36) with the block angle for control string c given by ηc = 2n−1 X i=1 (−1)⟨c,mi⟩ωi, ⟨c, mi⟩=  k X j=1 cjmi,j  (mod 2) . (37) The Lemma 3 states nothing but a solution of the linear system of equations [55, 56, 58], M ⊗k      ω1 ω2 ... ω2k     =      η1 η2 ... η2k      (38) where the matrix elements [M ⊗k]ij can be determined using Lemma 3 (a detailed discus- sion is given in Appendix B). Eq. (38) is exactly a Walsh-Hadamard transform [55, 56], where 2−k/2M ⊗k corresponds to H⊗k. Thus, comput- ing the { ωi } angles is precisely multiplication by 2−kH⊗k applied to { η }i [see Eq. (B5) for details]. Therefore, the final circuit represen- tation of ˘UBD in Eq. (20) consists of the ZYZ circuit decomposition (see Eq. (35)) and the phase components [i.e., ei∆bI Eq. (23)] of the original unitary blocks ˘UBD. These are factored out from ˘UBD and collected into a final diagonal gate denoted by U (d), which captures the local phases. From the circuit structure of the multi- qubit rotation gate circuit in Eq. (29), it can be clearly understood that the circuit structure shown above represents the recursive construc- tion of a unitary operator where n-qubit circuit structure can be constructed from n −1 qubit circuit structure. 11 10 1 100 101 102 103 104 105 Time 0.800 0.825 0.850 0.875 0.900 0.925 0.950 0.975 1.000 Fidelity t = 10 3 t = 10 2 p = 0.1 p = 0.4 p = 0.7 p = 1.0 FIG. 2. Fidelity plot of the 6-qubit quantum circuit simulating continuous-time quantum walk on Erd˝os- R´enyi graphs for four different edge probabilities p = 0.1, 0.4, 0.7, 1.0. The simulation is performed using two Trotter step sizes δt = 10−2 (dashed lines) and δt = 10−3 (solid lines). Fidelity is com- puted against the exact unitary evolution operator exp(−iHt) using Eq. 39. The results demonstrate that smaller Trotter step sizes yield higher circuit fidelity over longer evolution times, with fidelity de- grading more rapidly for higher connectivity (larger p). V. CONTINUOUS-TIME QUANTUM WALK IMPLEMENTATION A. Performance of a quantum circuit The performance of the quantum circuit, which is outlined in section IV for the continuous- time quantum walks, is evaluated here. We com- pare the circuit-evolved states i.e. |ψcircuit(t)⟩ with the state generated by the exact unitary dy- namics governed by the Hamiltonian H = −γL [Eq. (4)] i.e., |ψexact(t)⟩. The exact evolution is obtained from direct exponentiation of the Lapla- cian, exp(−iHt), while the circuit dynamics are simulated using a first-order Trotter-Suzuki [36– 39] decomposition. The fidelity is defined as [59], F(t) = |⟨ψexact(t)|ψcircuit(t)⟩|2, (39) This fidelity value quantifies the accuracy of the circuit approximation. Fig. 2 presents the fidelity against time for a six-qubit system (N = 26 vertices) performing CTQWs circuit simulation on Erd˝os–R´enyi graphs with varying edge probabilities p = 0.1, 0.4, 0.7, and 1.0. The simulations are performed over logarithmically spaced time values up to t = 105. Two different Trotter step sizes, δt = 10−2 and δt = 10−3, are considered to evaluate the circuit performance. For all values of p, the fidelity degrades over time due to the accumulation of Trotter errors. The results indicate that reducing the Trotter step size improves the accuracy of the simula- tion. Smaller step sizes (δt = 10−3) show slow fidelity decay, maintaining fidelity > 0.98 up to t ∼104. From Fig. 2, the dependence of fidelity on graph connectivity is also can be observed— the edge probability p significantly affects the fidelity decay. Sparse graphs exhibit slower fi- delity decay because their Hamiltonians contain fewer non-commuting terms. As connectivity p increases, additional non-commutativity acceler- ates fidelity loss. For fixed p, the fidelity remains closer to unity for longer period of time when δt = 10−3 than when δt = 10−2. In contrast, for fixed δt, sparser graphs maintain higher fidelity over longer times. Thus, the departure of fidelity from unity is governed jointly by graph connec- tivity and the Trotter–Suzuki step size. This be- havior is consistent with general results in Hamil- tonian simulation, where the Trotter–Suzuki er- ror scales with both the Hamiltonian norm and the chosen time step [36–39]. B. Fidelity scaling To further understand the accuracy of our quantum circuit implementation, we analyze the decay of circuit fidelity as a function of system size and graph connectivity. We define the cutoff time τc as the evolution time at which the fi- delity drops to approximately 0.95. The fidelity is averaged over ten independent realizations of Erd˝os–R´enyi graphs in order to account for statistical fluctuations. The results are shown in Fig. 3, where τc is plotted against the number of qubits n for several values of the edge proba- bility p. Two Trotter step sizes are considered 12 3 4 5 6 7 8 9 10 Number of Qubits 2 0 2 4 6 8 10 ln( c) p = 0.1 p = 0.4 p = 0.7 p = 1.0 (a) δt = 10−2 3 4 5 6 7 8 9 10 Number of Qubits 4 6 8 10 12 14 16 ln( c) p = 0.1 p = 0.4 p = 0.7 p = 1.0 (b) δt = 10−3 FIG. 3. Cutoff Time (τc) at which the quantum circuit fidelity ∼0.95 plotted against the number of qubits n, for different edge probabilities p in the un- derlying Erd˝os-R´enyi graph. (a) Results for Trotter time step δt = 10−2. (b) Same for δt = 10−3. The fidelity decays more rapidly with increasing number of qubits n, and the decay is further for graphs with higher connectivity p and larger Trotter step size δt. δt = 10−3 and δt = 10−2. From Fig. 3 we observe, the cutoff time τc de- creases as the number of qubits increases, indicat- ing that the larger the qubit number, the more the number of non-commuting terms, which re- sult in a rapid increase of Trotter errors. Also, 3 4 5 6 7 8 9 10 Number of Qubits 2.5 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 ln( c) m=-1.27 m=-1.19 m=-1.23 m=-1.21 m=-1.48 m=-1.41 m=-1.42 m=-1.32 p = 0.1 p = 0.4 p = 0.7 p = 1.0 t = 10 3 t = 10 2 FIG. 4. Combined analysis of the cutoff time (τc) for fidelity decay (falls below 95%) plotted against the number of qubits n, including data from both Fig. 3a and Fig. 3b. Each curve corresponds to a different Erd˝os-R´enyi graph connectivity p and Trotter step size δt. The straight lines represent exponential fits of the form T(n) ∼emn+c, with fitted slope (m) mentioned in the legend. the graph connectivity plays an important role, sparse graphs i.e., graphs with low edge proba- bility p depict higher τc than higher p for a fixed number of qubits. Moreover, the choice of Trot- ter step size significantly affects performance. For δt = 10−3, τc is larger across all values of p than τc for δt = 10−2, which indicates that larger Trotter step size δt leads to significantly shorter evolution times before fidelity falls below 0.95. C. Trotter error check Fig. 4 presents a combined analysis of τc across different qubit numbers n, which includes data from both Fig. 3a and Fig. 3b. Each curve cor- responds to an Erd˝os–R´enyi graph with varying edge probability p, and two Trotter step sizes are considered, δt = 10−3 and δt = 10−2. The data are fitted to an exponential curve of the form T(n) ∼emn+c, with the fitted slopes m reported in the legend, n denotes the number of qubits. 13 3 4 5 6 7 8 9 10 Number of Qubits 10 8 6 4 2 0 2 4 ln( t) slope=1.39 slope=1.39 t = 10 3 t = 10 2 FIG. 5. Scaling analysis of the Trotterization error (εδt) at a single Trotter step δt as a function of qubit number n. Theoretical upper bound of Trotter error (εδt), given by δt2·ϵ·22n−1, is also fitted with straight lines, showing a slope of ∼1.39 for both δt values. The results show a clear exponential decay of τc with increasing qubit number. For δt = 10−3, the fitted slopes vary between −1.19 and −1.27, with an average value mavg ≈− p 3/2. For δt = 10−2, the slopes are slightly steeper, rang- ing from −1.32 to −1.48 with an average of mavg ≈− √ 2. To connect these observations with theoretical error estimates, we analyze the scaling of the theoretical Trotter error per step. For a Hamilto- nian decomposed into non-commuting terms, the first-order Trotter–Suzuki bound scales as [39] εδt ∼δt2 ϵ 22n−1, (40) where ϵ denotes the typical operator norm of commutators among Hamiltonian blocks. In our case, the Laplacian decomposition produces 2n −1 non-commuting blocks, giving rise to approximately 2n−1 2  ∼22n−1 commutator con- tributions, thereby explaining the exponential scaling of the Trotter error. For a total evolution time T and step size δt, the accumulated error scales as εtot ∼T · δt · ϵ · 22n−1. (41) Fig. 5 shows the scaling of the Trotter error εδt with n, together with theoretical upper bounds, considering ϵ = 1. The fitted slope of the error curves is ∼1.39, which is close in magnitude to the average negative slope of the fidelity decay (mavg ≈− p 3/2) observed in Fig. 4. This cor- respondence indicates that the observed fidelity decay is governed by the exponential growth of Trotter error with qubit number. It is worth noting that the empirical slopes are somewhat smaller than the theoretical up- per bounds. This discrepancy arises because the worst-case analysis overestimates the theoreti- cal error. The effective commutator norms ϵ are reduced by the sparsity and structure of the Laplacian blocks, and the actual error accumu- lation depends on the choice of initial state also. These findings validate the effectiveness of the proposed Trotterized circuit architecture for sim- ulating continuous-time quantum walks, while clarifying the limitations imposed by Trotter error scaling. VI. LOCALIZATION IN CTQW CIRCUIT SIMULATIONS In this section, we study localization in continuous-time quantum walks. We use local- ization as a tool for validating the accuracy of the Trotterized circuit evolution against exact simulations. Localization plays a key role in char- acterizing transport efficiency, memory retention of initial states, and spectral features of the un- derlying graph Hamiltonian. Unlike Anderson- type localization, which arises from disorder- induced destructive interference, the localization observed here emerges from spectral degenera- cies of the graph Hamiltonian [17, 18, 47, 52, 53]. Figs. 6a, 6b, 7a, and 7b present the time- averaged probability distributions pc(j) of a walker over all N = 2n vertices with n = 5 for Erd˝os–R´enyi graphs with edge probabilities p = 0.4 and p = 0.7 evaluated at 1000 steps, com- puted both from exact evolution and from the Trotterized quantum circuit. The orange bars denote the initial vertex (|ψ0⟩), selected as the node with minimum degree for Fig. 6 and maxi- mum degree for Fig. 7. The deviations from the 14 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (a) p = 0.1 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (b) p = 0.4 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (c) p = 0.1 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (d) p = 0.4 FIG. 6. Panels (a), (b) — Time-averaged probability distribution pc(j) (localization profile) of the quantum walker over all N = 2n (n = 5) vertices for different Erd˝os-R´enyi graph edge probabilities p = 0.1, 0.4. The orange bars mark the initial vertex, chosen as the node with the minimum degree. The deviation from the uniform line at 1/N indicates varying degrees of localization. Strong peaks at the initial site highlight the persistence of the walker’s probability near its origin, even for higher p. Results from exact simulation and Trotterized circuit evolution are shown to agree closely. Panels (c), (d) — Contour plots showing the temporal evolution of the CTQW probability distribution (pc(j)) for different edge probabilities p = 0.1, 0.4. Initial vertex, chosen as the node with the minimum degree. Each heatmap displays the walker’s probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. uniform baseline 1/N reveal the presence of local- ization, where we observe a high peak at the ini- tial site (|ψ0⟩), indicating a higher probability of finding the walker near |ψ0⟩. In both cases (exact evolution and the Trotterized circuit evolution), the agreement between the two methods is ex- 15 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (a) p = 0.4 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (b) p = 0.7 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (c) p = 0.4 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (d) p = 0.7 FIG. 7. Panels (a), (b) — Time-averaged probability distribution pc(j) (localization profile) of the quantum walker over all N = 2n (n = 5) vertices for different Erd˝os-R´enyi graph edge probabilities p = 0.4, 0.7 at steps 1000. The orange bars mark the initial vertex, chosen as the node with maximum degree. The deviation from the uniform line at 1/N indicates varying degrees of localization. Strong peaks at the initial site highlight the persistence of the walker’s probability near its origin, even for higher p. Results from exact simulation (green bar) and Trotterized circuit evolution (purple bar) are shown to agree closely. Panels (c), (d) — Contour plots of showing the temporal evolution of the CTQW probability distribution (pc(j)) for different edge probabilities p = 0.4, 0.7. Initial vertex, chosen as the node with the maximum degree. Each heatmap displays the walker’s probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. cellent. A key observation from our simulations is that the degree of the initial vertex strongly influences localization. For Erd˝os–R´enyi graphs with lower connectivity p ∼0.1, localization be- comes particularly pronounced when the walker begins at the vertex of minimum degree. Apart from it, an interesting observation oc- curs in the contour plots of the Fig. 6c, 6d, 7c, 16 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (a) p = 1.0 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (b) p = 1.0 FIG. 8. Contour plots of showing the temporal evolution of the CTQW probability distribution (pc(j)) for edge probabilities p = 1.0. Initial vertex, chosen as the node with maximum degree for (a) 5 qubit and (b) 6 qubit. Each heatmap displays the walker’s probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. and 7d where the temporal evolutions of pc(j) are depicted. Each heatmap illustrates the prob- ability distribution across vertices as a func- tion of time. A striking feature emerges for some graphs where vertices that are directly connected and share the same degree show os- cillatory behavior in the walker’s probability amplitude when we choose any of them as our initial starting state |ψ0⟩. In such cases, the walker dynamically redistributes its localization weight between these same degree vertices or oscillating vertex group, leading to a persistent oscillation of probability across time. Conversely, other vertices, with the same maximal degree, do not participate in this oscillation if that is not directly connected to the oscillating vertex group. For that vertex, the walker’s localization probability remains comparatively high through- out the evolution if the starting |ψ0⟩is on that vertex. This behavior describes the role of graph connectivity. In summary, when we initialize the walker at a vertex that carries the maximum degree, the walker tends to localize in that vertex (Figs. 9a– 9c) if it is not directly connected to the oscillat- ing vertex group. This effect originates from the spectral structure of the Laplacian, where high- degree vertices contribute disproportionately to degenerate (or nearly degenerate) eigenmodes. Since the initial state has a large overlap with these modes, part of the amplitude acquires only global phases during evolution, prevent- ing complete delocalization. As a result, the walker retains a significant long-time probabil- ity at the starting vertex. Even when p ≥0.9, i.e., when the underlying graph is complete or near-complete, all vertices have the same de- gree, if we initialize the walker at a single ver- tex, the time-averaged probability indicates that the walker remains localized at that vertex in- stead of spreading uniformly across the graph (Figs. 8 and 9d). This localization does not stem from disorder, as in Anderson localization, but rather from the symmetry and spectral de- generacy [17, 18, 47, 52, 53] of the complete graph Laplacian. The decomposition of the ini- tial state into a stationary uniform component and a degenerate oscillatory subspace explains the persistence of amplitude near the origin (a de- tailed account is given in the Appendix C). The complete graph, therefore, provides a striking example where strong connectivity and high sym- 17 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (a) p = 0.1 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (b) p = 0.4 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (c) p = 0.7 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (d) p = 1.0 FIG. 9. Panels (a) – (d) — Contour plots showing the temporal evolution of the CTQW probability distribution (pc(j)) for different edge probabilities p = 0.1, 0.4, 0.7, 1.0 (from top left to bottom right) for 6 qubit. Initial vertex, chosen as the node with the maximum degree. Each heatmap displays the walker’s probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. metry induce localization in CTQWs through purely spectral mechanisms. VII. CONCLUSION In this work, we have developed a scal- able quantum circuit framework for simulating continuous-time quantum walks (CTQWs) on arbitrary random graphs, with a particular focus on Erd˝os–R´enyi (ER) graphs. By representing the CTQW Hamiltonian in terms of the graph Laplacian and introducing the graph Laplacian partitioning algorithm (LPA), we demonstrated that the Laplacian L of an n-qubit graph can be decomposed into a set of sparse submatri- ces {L(j)}, each of which is permutation-similar to a block-diagonal form with 2 × 2 non-trivial blocks. This decomposition allows the efficient encoding of the graph Hamiltonian into quan- tum circuits through permutation matrices that can be realized using CNOTgates. 18 The resulting framework enables the imple- mentation of the full time-evolution operator U(t) = e−iHt using a Trotter–Suzuki product formula applied to the partitioned Hamiltonian components. Compared to standard Pauli-string decompositions that scale as O(4n), our block- diagonal approach achieves a reduced decomposi- tion complexity of O(2n −1)—substantially low- ering circuit depth and gate count. This provides a resource-efficient route for realizing CTQWs on near-term quantum devices and paves the way for the exploration of random graph dynam- ics on noisy intermediate-scale quantum (NISQ) hardware. Furthermore, we compared the Trotterized cir- cuit evolution against exact simulations by veri- fying fidelity of the Trotterized evolution against exact dynamics. The time-averaged probability distributions revealed excellent agreement be- tween exact and circuit-based dynamics, confirm- ing high fidelity of the implemented evolution. We showcase that our circuit error closely follows the theoretical Trotter error. We also tested our circuit using localization as a diagnostic tool. We found that localization in our CTQW implemen- tation arises not from disorder, as in Anderson- type localization, but from spectral degenera- cies of the Laplacian. The degree of the initial vertex strongly influences localization strength. The walkers initialized at low-degree vertices in sparse ER graphs (p ∼0.1) exhibit localiza- tion, while in dense or complete graphs (p→1) localization persists due to symmetry-induced degeneracies. In highly connected graphs, os- cillatory behavior between connected vertices of equal degree was observed, corresponding to coherent population transfer within degenerate eigen-subspaces. These results demonstrate that spectral structure and graph connectivity dictate localization behavior in CTQWs, and that the proposed circuit framework faithfully reproduces these quantum transport features. We establish a general framework for Hamilto- nian simulation using the graph Laplacian par- tition algorithm with reduced complexity com- pared to standard Pauli decomposition. How- ever, we believe that the partitioning strat- egy could be further improved to have a bet- ter fidelity response over larger Trotter steps. This work also opens up the implementation of weighted graph walks, i.e., lackadaisical quan- tum walks, quantum walks with memories, to name a few. One of the major drawbacks of our method lies in its scalability—as the number of qubits increases, the circuit depth also pro- portionately increases because of the presence of a higher number of partitions in the LPA. Therefore, optimizing our algorithm to produce fix gate-depth circuit remains a future objective. We can also implement various quantum walk algorithmic tasks, such as the traveling salesman problem [60], finding the inverse of a matrix [61]. Our work implements CTQW on quantum cir- cuits for random graphs, which is a crucial result at the age of NISQ devices. VIII. ACKNOWLEDGMENT SC acknowledge the support of the Prime Min- ister’s Research Fellowship (PMRF). RKR ac- knowledges the support by the Institute for Basic Science in Korea (IBS-R024-D1). The authors would also like to acknowledge Paramshakti Su- percomputer facility at IIT Kharagpur—a na- tional supercomputing mission of the Govern- ment of India, for providing the necessary high- performance computational resources. Appendix A: Permutation Operator and Permutation Similarity between Pauli strings 1. Permutation Operator The product involving CNOT gates (see Eq. (14)) are equivalent to permutation oper- ation. To understand this argument we follow the work Sarkar et al. [54]. The key idea here is that for a set of n-length Pauli strings S(n) P (n is the qubit number), one can find a set of permutation matrices P j n, that transform the Pauli strings into block-diagonal matrices with 2 × 2 non-trivial blocks. In Ref. [54] the authors define permutations 19 as ΠTe n,x and ΠTo n,x 2, where x is a binary string x = n−2 X κj=0 xκj2κj ≡(xn−2, . . . , x0), and xκj ∈{0, 1}. (A1) We can also define the index set κ = { κj } for each x as κx. These permutations are similar to P j n = (ΠTo n,x = ΠTo n, j 2 , if j is even, ΠTe n,x = ΠTe n, j−1 2 , otherwise. (A2) Both x = 0 and j = 0, 1 will give the Identity matrix. The notation e (o) in ΠTe n,x (ΠTo n,x) indicates that the corresponding permutation matrix is a product of permutations of disjoint 2-cycles P(α, β), where both α and β3 are even ( exactly one of α or β is odd). The notation P(α, β) denotes the matrix obtained by exchanging the αth and βth rows of the target matrix. Proposition 2. For any x, and any u = (un−2, . . . , u0); define ¯uk := uk⊕1. Consider the functions αg κx : {0, 1}n−1 →{ 0, . . . , 2n−1 −1 } and βg κx : { 0, 1}n−1 →{0, . . . , 2n−1 −1 } , g ∈ { e, o } defined as αg κx(u) = X k∈κx uk 2k+1 + X ˜k/∈κx u˜k 2 ˜k+1 + 2, βe κx(u) = X k∈κx ¯uk 2k+1 + X ˜k/∈κx u˜k 2 ˜k+1 + 2, βo κx(u) = X k∈κx ¯uk 2k+1 + X ˜k/∈κx u˜k 2 ˜k+1 + 1. 2 Here, Π denotes the product over all disjoint 2-cycle permutations P(α, β) defined by the index functions αg κx and βg κx, while T is simply a symbolic label used to distinguish the corresponding permutation type. 3 α, β are row or column index—which will be clear from the given context. Then ΠTg n,x = Y 0≤u<2n−1−1 αg κx(u)<βg κx(u) Pαg κx(u), βg κx(u) , g ∈{e, o}. It follows for x ̸= y, ΠTg n,x ̸= ΠTg n,y with αg κx(u), βg κx(u)  ̸=  αg κy(u), βg κy(u)  for all 0 ≤u ≤2n−1 −1. Example: for n = 3 and x = 1, from Eq. (A1)— x = 1 =⇒(x1, x0) = (0, 1) =⇒ κx = {0}. αe κx(u) = 2u0 + 4u1 + 2, βe κx(u) = 2(1 −u0) + 4u1 + 2, u = (u1, u0) ∈{0, 1}2. u αe κx(u) βe κx(u) α < β (0, 0) 2 4 yes (0, 1) 4 2 no (1, 0) 6 8 yes (1, 1) 8 6 no We have, ΠTe 3,1 = Y u∈{0,1}2 αe κx(u)<βe κx(u) P(αe κx(u), βe κx(u)) = P(2,4) P(6,8). Similarly for ΠTo 3,1 = Y u∈{0,1}2 αo κx(u)<βo κx(u) P(αo κx(u), βo κx(u)) = P(2,3) P(6,7). P j=2 n=3 = ΠTo 3,1 := 1 2 • • 3 • P j=3 n=3 = ΠTe 3,1 := 1 2 3 • 20 2. Permutation Similarity between Pauli strings We now state the following theorem from Ref. [54], which establishes permutation simi- larity of the elements of S(n) I,X. Theorem 2. [54] Let j ∈{1, . . . , 2n −1}. Then S(n) {I,X}j =    ΠTe n, j−1 2 (I⊗(n−1) 2 ⊗X)ΠTe n, j−1 2 , j odd, ΠTo n, j 2 (I⊗(n−1) 2 ⊗X)ΠTo n, j 2 , j even, = P j n(I⊗(n−1) 2 ⊗X)P j n. Proof. See Sarkar et al. [54] for the detailed proof. Appendix B: Lemma 3 Example Consider the circuit 31 with two controls (top wires) and one target (bottom wire). From left to right, between the single-qubit rotations on the target, the CNOTs from the controls to the target are connected. We denote the four target rotations by Rz(ω1), Rz(ω2), Rz(ω3), Rz(ω4). Here k = 2, so c ∈{00, 01, 10, 11} is the con- trol basis string. Just before each rotation, the active-control mask is m1 = 00, m2 = 01, m3 = 10, m4 = 11. (B1) Interpret each mask mi via its overlap size i.e. (number of shared 1s). The parity used in Lemma 3 is precisely this overlap size mod 2. Hence, the unitary is block diagonal, U = M c∈{0,1}2 Rz(ηc), where ηc = 4 X i=1 (−1) ⟨c·mi⟩ωi. (B2) For each control string c we list the overlap sizes ⟨c · mi⟩=: oi (with i = 1, . . . , 4), their parities, and the resulting signs: c o1 o2 o3 o4 o1(mod 2) o2(mod 2) o3(mod 2) o4(mod 2) signs ((−1)oi) 00 0 0 0 0 0 0 0 0 (+, +, +, +) 01 0 1 0 1 0 1 0 1 (+, −, +, −) 10 0 0 1 1 0 0 1 1 (+, +, −, −) 11 0 1 1 2 0 1 1 0 (+, −, −, +) (B3) Using the signs above in ηc = P i(−1)oi(c)ωi gives η00 = ω1 + ω2 + ω3 + ω4, η01 = ω1 −ω2 + ω3 −ω4, η10 = ω1 + ω2 −ω3 −ω4, η11 = ω1 −ω2 −ω3 + ω4. (B4) Equivalently, with ω = (ω1, ω2, ω3, ω4)T ,      η00 η01 η10 η11     =      +1 +1 +1 +1 +1 −1 +1 −1 +1 +1 −1 −1 +1 −1 −1 +1      | {z } 22H⊗2      ω1 ω2 ω3 ω4     . (B5) 21 Appendix C: Localization profile for all connected graphs To quantify localization, we employ the in- verse participation ratio (IPR), defined for a walker initialized at vertex j as, IPRj(t) = N X i=1 p2 ij(t), pij(t) = ⟨i| e−iHt |j⟩ 2 , (C1) which measures the spread of the probability dis- tribution in the vertex basis. For a completely delocalized state, pij(t) ≈1/N for all vertices, yielding IPRj(t) ≈1/N, which serves as a natu- ral ergodic baseline. Localization is implied at a said vertex j, whenever the value of IPR at that vertex is greater than 1/N. For a complete graph KN, each vertex is con- nected to all others with degree deg(v) = N −1 for all v ∈KN. (C2) The CTQW Hamiltonian is defined as (assum- ing γ = 1) H = −L, (C3) where L is the Laplacian of KN. The evolution operator is U(t) = exp(iLt), and its spectral decomposition governs the transport dynamics. The Laplacian spectrum of the complete graph is highly degenerate: there is one eigenvalue E1 = 0, corresponding to the uniform superposition state, and (N −1) degenerate eigenvalues equal to N, E1 = 0, Ej = N (j = 2, . . . , N). (C4) This large degeneracy underpins the persistence of localization in CTQWs on KN. The normalized eigenvector associated with the zero eigenvalue is the uniform state |s⟩= 1 √ N N X j=1 |j⟩, (C5) while the remaining eigenvectors span the sub- space orthogonal to |s⟩. An initial state localized at a vertex |v⟩can be decomposed as |v⟩= ⟨s|v⟩|s⟩+  |v⟩−⟨s|v⟩|s⟩  . (C6) The uniform component |s⟩is stationary since L |s⟩= 0, whereas the orthogonal component evolves with a global phase eiNt, owing to its eigenvalue N. The total state at time t is there- fore |ψ(t)⟩= ⟨s|v⟩|s⟩+ eiNt |v⟩−⟨s|v⟩|s⟩  . (C7) The amplitude to remain at the initial vertex is ⟨v|ψ(t)⟩= 1 N +  1 −1 N  eiNt, (C8) leading to the instantaneous probability |⟨v|ψ(t)⟩|2 = 1 N 2 +  1 −1 N 2 + 2 N  1 −1 N  cos(Nt). (C9) Averaging over time removes the oscillatory term and yields the time-averaged probability at the starting vertex, pv = 1 N 2 +  1 −1 N 2 = 1 −2 N + 2 N 2 . (C10) For large N, this approaches pv ≈1 −2 N , (C11) which is markedly higher than the uniform dis- tribution 1/N. Thus, even though the complete graph is maximally connected, the walker re- tains a strong probability of being detected at its initial position at long times. To compute the IPR, we first note that using Eq. (C5) ⟨i|ψ(t)⟩= 1 N + eiNt  δiv −1 N  . (C12) 22 Which further allows us to write from Eq. (C6) piv(t) = 1 N +  1 −1 N  eiNt 2 for i = v, = 1 −2 N + 2 N 2 + 2(N −1) N 2 cos Nt. (C13) piv(t) = 1 N (1 −eiNt) 2 for i ̸= v, = 2 N 2 (1 −cos Nt). (C14) Therefore, we can compute the IPR using Eq. (C1) as IPRv(t) = p2 iv(t) | {z } i=v +(N −1) p2 iv(t) | {z } i̸=v , = 1 −4 N + 10 N 2 −6 N 3 + 4(N −1)(N −2) N 3 cos Nt, + 2(N −1) N 3 cos 2Nt. (C15) The average over a long time results in IPRv = 1 −4 N + 10 N 2 −6 N 3 , (C16) which in the large N limit reduces to IPRv ≈1 −4 N . (C17) This suggests that for large N, on average, the IPR remains close to 1, suggesting strong local- ization. [1] R. P. Feynman, International Journal of Theo- retical Physics 21, 467 (1982). [2] S. Lloyd, Science 273, 1073 (1996). [3] I. M. Georgescu, S. Ashhab, and F. Nori, Rev. Mod. Phys. 86, 153 (2014). [4] B. Fauseweh, Nature Communications 15, 2123 (2024). [5] E. Farhi and S. Gutmann, Physical Review A 58, 915 (1998). [6] A. M. Childs, E. Farhi, and S. Gutmann, Quan- tum Information Processing 1, 35 (2002). [7] A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A. Spielman, in Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, STOC ’03 (Association for Computing Machinery, New York, NY, USA, 2003) p. 59–68. [8] K. Manouchehri and J. Wang, Physical Imple- mentation of Quantum Walks, 1st ed., Quan- tum Science and Technology (Springer Berlin, Heidelberg, 2014) pp. X, 230, springer-Verlag Berlin Heidelberg. [9] R. K. Ray, Phys. Rev. E 106, 024115 (2022). [10] R. K. Ray, R. Srikanth, and S. Majumder, Phys. Rev. E 112, 044120 (2025). [11] D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani, in Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Comput- ing, STOC ’01 (Association for Computing Ma- chinery, New York, NY, USA, 2001) p. 50–59. [12] J. Kempe, Contemporary Physics 44, 307 (2003), https://doi.org/10.1080/00107151031000110776. [13] O. M¨ulken and A. Blumen, Physics Reports 502, 37 (2011). [14] K. Mukai and N. Hatano, Phys. Rev. Res. 2, 023378 (2020). [15] R. Portugal, Quantum Walks and Search Al- gorithms, 1st ed., Quantum Science and Tech- nology (Springer, New York, NY, 2013) pp. XI, 222, 37 b/w illustrations. [16] S. Wald and L. B¨ottcher, Phys. Rev. E 103, 012122 (2021). [17] N. Inui, Y. Konishi, and N. Konno, Phys. Rev. A 69, 052323 (2004). [18] S. Chakraborty, K. Luh, and J. Roland, Physi- cal review letters 124, 050501 (2020). [19] R. Campos, P. A. M. Casares, and M. A. Martin- Delgado, Quantum Machine Intelligence 5, 28 (2023). [20] E. Lee, S. Lee, and S. Kim, Phys. Rev. D 111, 116001 (2025). 23 [21] C. H. Alderete, S. Singh, N. H. Nguyen, D. Zhu, R. Balu, C. Monroe, C. M. Chandrashekar, and N. M. Linke, Nature Communications 11, 3720 (2020). [22] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, The Journal of Chemical Physics 129, 174106 (2008). [23] T. Kitagawa, M. S. Rudner, E. Berg, and E. Demler, Phys. Rev. A 82, 033429 (2010). [24] R. Duda, M. N. Ivaki, I. Sahlberg, K. P¨oyh¨onen, and T. Ojanen, Phys. Rev. Res. 5, 023150 (2023). [25] K. Shukla and C. M. Chandrashekar, Phys. Rev. A 109, 032608 (2024). [26] H. Gao, K. Wang, D. Qu, Q. Lin, and P. Xue, New Journal of Physics 25, 053011 (2023). [27] Y. Aharonov, L. Davidovich, and N. Zagury, Phys. Rev. A 48, 1687 (1993). [28] B. L. Douglas and J. B. Wang, Phys. Rev. A 79, 052335 (2009). [29] T. Loke and J. B. Wang, Phys. Rev. A 86, 042338 (2012). [30] T. Loke and J. Wang, Annals of Physics 382, 64 (2017). [31] T. Loke and J. B. Wang, Journal of Physics A: Mathematical and Theoretical 50, 055303 (2017). [32] Z. Chen, G. Li, and L. Li, Phys. Rev. A 110, 052215 (2024). [33] R. P. Feynman, Optics News 11, 11 (1985). [34] P. Wocjan and S. Zhang, arXiv preprint quant-ph/0606179 https://doi.org/10.48550/arXiv.quant- ph/0606179 (2006). [35] A. M. Childs, Lecture notes at University of Maryland 5 (2017). [36] H. F. Trotter, Proceedings of the American Mathematical Society 10, 545 (1959). [37] M. Suzuki, Communications in Mathematical Physics 51, 183 (1976). [38] M. Suzuki, Journal of Mathematical Physics 32, 400 (1991). [39] A. M. Childs, Y. Su, M. C. Tran, N. Wiebe, and S. Zhu, Phys. Rev. X 11, 011020 (2021). [40] A. M. Childs and Y. Su, Phys. Rev. Lett. 123, 050503 (2019). [41] B. Bollob´as, Random Graphs, 2nd ed., Cam- bridge Studies in Advanced Mathematics (Cam- bridge University Press, 2001). [42] P. Erd˝os and A. R´enyi, Publicationes Mathe- maticae 6, 290 (1959). [43] P. Erd˝os and A. R´enyi, Publ. Math. Inst. Hun- gar. Acad. Sci 5, 17 (1960). [44] J. Tindall, A. Searle, A. Alhajri, and D. Jaksch, Nature Communications 13, 7445 (2022). [45] S. Dutta, Physical Review E 111, 10.1103/Phys- RevE.111.034312 (2025). [46] F. R. K. Chung, Spectral Graph Theory, CBMS Regional Conference Series in Mathematics, Vol. 92 (American Mathematical Society, 1997) chapter 1 available online. [47] A. Mandal, R. S. Sarkar, S. Chakraborty, and B. Adhikari, Physical Review A 106, 042405 (2022). [48] P. W. Anderson, Phys. Rev. 109, 1492 (1958). [49] Y. Lahini, A. Avidan, F. Pozzi, M. Sorel, R. Morandotti, D. N. Christodoulides, and Y. Silberberg, Phys. Rev. Lett. 100, 013906 (2008). [50] A. Crespi, R. Osellame, R. Ramponi, V. Gio- vannetti, R. Fazio, L. Sansoni, F. De Nicola, F. Sciarrino, and P. Mataloni, Nature Photonics 7, 322 (2013). [51] I. Vakulchyk, M. V. Fistul, P. Qin, and S. Flach, Phys. Rev. B 96, 144204 (2017). [52] R. Bueno and N. Hatano, Physical Review Re- search 2, 033185 (2020). [53] A. P. Balachandran, A. Kundalpady, P. Pad- manabhan, and A. Sinha, Phys. Rev. A 109, 012205 (2024). [54] R. S. Sarkar, S. Chakraborty, and B. Adhikari, arXiv preprint arXiv:2405.13605 (2024). [55] M. M¨ott¨onen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa, Phys. Rev. Lett. 93, 130502 (2004). [56] M. M¨ott¨onen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa, Quant. Inf. Comp. 5, 467–473 (2005). [57] R. S. Sarkar and B. Adhikari, in 2023 IEEE International Conference on Quantum Comput- ing and Engineering (QCE), Vol. 01 (2023) pp. 1078–1088. [58] A. M. Krol, A. Sarkar, I. Ashraf, Z. Al- Ars, and K. Bertels, Applied Sciences 12, 10.3390/app12020759 (2022). [59] R. Jozsa, Journal of modern optics 41, 2315 (1994). [60] S. Marsh and J. B. Wang, Phys. Rev. Res. 2, 023302 (2020). [61] A. Kay and C. Tamon, Matrix Inversion by Quantum Walk, 2508.06611.
Continuous-time quantum walk on a random graph using quantum circuits Sabyasachi Chakraborty ,1, ∗Rohit Sarma Sarkar ,2, † Sonjoy Majumder ,1, ‡ and Rohit Kishan Ray 3, 4, § 1 721302, India 2International Centre for Theoretical Sciences (ICTS-TIFR), Bengaluru 560089, India 3 24061, USA 4Center for Theoretical Physics of Complex Systems, Institute for Basic Science(IBS), Daejeon 34126, Republic of Korea Quantum walks, particularly continuous-time quantum walks (CTQW), have emerged as powerful tools for modeling quantum transport, simulating complex dynamics, and developing quantum algorithms with potential speedups over classical counterparts. In this work, we present a scalable quantum circuit formalism to simulate CTQW on random graph structures, especially focusing on Erd ̋os-R ́enyi random graphs. Our quantum circuit construction efficiently implements the time evolution of the graph Laplacian, using the Trotterization scheme. We investigate key dynamical properties, i.e., the localization behavior of the CTQW. Our quantum circuit implementation over random graph ensures that the circuit design can work on any graph structure, thereby laying the foundation for realizing CTQW-based quantum simulations efficiently. I. INTRODUCTION Quantum computers provide a natural framework for simulating quantum dynamical processes that are otherwise challenging for classical computation [1-4]. Within this context, quantum walks (QWs) have emerged as powerful and versatile tools [5-10]. They serve as fundamental algorithmic building blocks for graph-based problems [7, 11, 12], provide a rich framework for modeling quantum transport [13], and probing complex networks [14]. A QW is the quantum generalization of a classical random walk, where quantum superposition and interference replace classical stochasticity, giving rise to significantly different transport properties [12, 15, 16]. In contrast to classical diffusion, quantum walks show ballistic spreading [8, 15], localization [17, 18], applications in optimization, simulation [19-21], and probing physical processes from energy ∗ † ‡ § transfer and topological phases to transport in complex networks [13, 14, 22-24]. Implementing QWs on quantum hardware, therefore, represents a promising route to bridge abstract quantum models with realizable algorithms and experimentally accessible simulations [25, 26]. Quantum walks are broadly classified into discrete-time (DTQW) [12, 27] and continuoustime (CTQW) [5]. In DTQWs, evolution proceeds through repeated coin-shift operations, introducing internal degrees of freedom that enable controllability, making them well-suited for circuit design and local graph propagation [2830]. In contrast, CTQWs are defined directly on graphs, with the Hamiltonian typically chosen as the adjacency matrix or the graph Laplacian. They also don't require any extra degree of freedom, such as coin operator. This makes CTQW circuit implementations challenging [31, 32], since their continuous evolution depends on the global structure of the graph rather than local connections. Simulating CTQWs on a quantum computer requires the efficient encoding of the graph Hamiltonian into quantum circuits with the unitary time-evolution operator U(t) = exp(-iHt). 16 Oct 2025 2 H is the graph Hamiltonian, often chosen as the adjacency matrix or the Laplacian of the graph [8, 15, 31]. Thus, it is a problem of Hamiltonian simulation, and since Hamiltonian simulation is known to be BQP-complete [3335], efficient classical solutions are unlikely. A widely used strategy in this context is the implementation of Trotter-Suzuki [36-39] decomposition (TSD) or product formulas [40]. Here, the Hamiltonian H is broken down into a sum of local Hamiltonians (not necessarily commuting with each other) Hj such as H = PL j=1 ajHj and the TSD approximates the exponential of a sum of Hamiltonian terms at each Trotter step δt i.e., e(-i PL j=1 ajHjδt) by sequentially operating the exponential of the individual terms exp(-iajHjδt). Here δt = t/r where r is the Trotter step number controlling the approximation error. Thus, the total time-evolution operator becomes e-iHt ≈   L Y j=1 e-iHjδt   r . (1) A given 2n × 2n-dimensional Hamiltonian H, acting on n qubits can be written in terms of elementary gates using n-length Pauli strings, S(n) P = {Nn i=1 σi|σi ∈SP , 1 ≤i ≤n}, where SP = { I, X, Y, Z } is the Pauli matrix set consisting of SU(2) generators in the Pauli basis. These strings form an orthonormal basis for the algebra of 2n × 2n matrices. Each exp -iajS(n) P δt can be implemented by O(n) elementary gates. However, the number of Pauli terms grows exponentially (reaching O(4n) in the worst case), thereby increasing the depth of the circuit. Therefore, the gate complexity of a CTQW simulation is governed by the structure of the underlying graph Hamiltonian and the choice of decomposition scheme. For sparse graphs, product formula methods remain tractable and allow faithful simulation of CTQWs with polynomial gate overhead. However, random graphs can have dense connectivity, which increases the number of required terms during time evolution, rendering optimized decomposition strategies and error-controlled Trotterization especially important. In this paper, we develop a quantum circuit framework for simulating continuous-time quantum walks (CTQWs) on random graph structures, namely the Erd ̋os-R ́enyi random graphs. The graph Hamiltonian (H) is expressed in terms of the Laplacian (L) of the graph, which serves as the generator of the walk in our case. To implement this evolution efficiently on quantum hardware, we introduce a graph Laplacian partitioning algorithm (LPA). The LPA decomposes the Laplacian L of a given graph into a collection of sparse Laplacians {L(j)}, such that L = P2n-1 j=1 L(j), where each L(j) corresponds to a sparse submatrix of L. A key feature of this construction is that each L(j) is permutationsimilar to a block-diagonal Hamiltonian consisting of 2×2 nontrivial blocks. These permutation matrices have a direct representation in terms of CNOT gates. We then present the quantum circuit construction of the block-diagonal Hamiltonian. The full-time evolution operator exp(-iHt) is then implemented using a TSD scheme applied to partitioned submatrices. This approach reduces the worst-case decomposition size from O(4n) (as encountered in Pauli-string decompositions) to O(2n -1), thus producing a significantly more resource-efficient circuit design. By combining graph-theoretic partitioning with quantum circuit synthesis, our method establishes a protocol for simulating CTQWs on arbitrary random graphs, offering a practical alternative to conventional Hamiltonian simulation techniques, and ensuring a wide applicability of our circuit. The rest of the paper is organized as follows. In Sec. II, we review the preliminary concepts of graphs, continuous-time quantum walks (CTQWs), and define localization. Section III introduces the graph Laplacian partitioning algorithm, which forms the foundation for constructing quantum circuits for CTQWs. Section IV is devoted to the design of quantum circuits for CTQWs, while Sec. V presents their application to CTQW implementations. In Sec. VI, we analyze the accuracy of the Trotterized circuit evolution and study localization for CTQW cir3 cuit simulations. Finally, Sec. VII summarizes our findings and outlines future perspectives. II. THEORETICAL PRELIMINARIES A. Graphs and Continuous-Time Quantum Walks Let G = (V, E) be an undirected graph [9, 10, 41], where V = {v1, v2, . . . , vN} denotes the set of vertices and E ⊆{{vi, vj} | i N -1. (9) III. PARTITIONING THE LAPLACIAN We now establish the foundation for constructing scalable quantum circuits to simulate (a) (b) (c) (d) (e) (f) 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 FIG. 1. (a) Random graph G(N, p) with N = 5 and each edge is present independently with probability p = 0.4. (b-f) Decomposition of the original graph into subgraphs, each corresponding to a distinct 1sparse Hamiltonian representation. CTQWs by introducing a graph Laplacian partitioning algorithm (Fig. 1). In essence, the partitioning algorithm allows us to break the Laplacian of a given graph into a set of Laplacians which represent sparse graphs. This is an essential step in our quantum circuit design. The LPA proceeds in two key stages (i) the generation and indexing of permutation matrices, and (ii) the subsequent breakdown of the Laplacian into a sum of sparse sub-matrices which are permutation-similar to a block-diagonal matrix with 2 × 2 non-trivial blocks. The primary idea behind the algorithm is to keep track of the position of non-zero elements in the Laplacian operator. We begin with the following definitions. Definition 1. The support set ΓM of a d × d matrix M is defined as the set of positions of its non-zero elements. For example, suppose that a matrix A of size d × d has nonzero elements at positions i, j = (1, 1), (1, d), and (d, 1). Then, the support set is given by ΓA = { (1, 1), (1, d), (d, 1) }. Definition 2. Let A be any matrix. Its support matrix, denoted ̃ A, is the binary matrix entry5 wise defined by [ ̃ A]ij = ( 1, Aij ̸= 0, 0, Aij = 0. (10) In other words, ̃ A encodes the pattern of zero and nonzero entries i.e., structure of A. Definition 3. We call two d×d matrices E and F structurally similar, E SS= F , if their support sets ΓE and ΓF , respectively, are equal. Consider, ΓE = { (p, q) | [E]p,q ̸= 0, and 1 ≤p, q ≤d } and ΓF = { (p, q) | [F ]p,q ̸= 0, and 1 ≤p, q ≤d }, then ΓE = ΓF implies E is structurally similar to F , i.e., E SS= F . This trivially implies, ̃ E = ̃ F . With these definitions, we describe how the Laplacian of a graph can be decomposed into sparse components. Let G = (V, E) be an undirected graph with N = 2n vertices (n denotes number of qubits). Using Eq. (4), G can be represented by its N × N Laplacian matrix L. L is symmetric by construction. Our objective is to decompose L into a sum of structured submatrices L(j) as, L = N-1 X j=1 L(j), (11) Each L(j) is permutation-similar to a blockdiagonal matrix composed of 2 × 2 non-trivial blocks i.e., L(j) = P j nL(j) BD(P j n)T . (12) Here L(j) BD is the block diagonal matrix comprised of 2 × 2 non-zero blocks. The permutation matrix P j n of size 2n × 2n acts on the jth sub-matrix, and T denotes the transpose operation. We know that any 2 × 2 complex matrix A ∈M2(C) can be represented using generators of SU(2), i.e., Pauli basis using the set of Pauli matrices SP = { I, X, Y, Z }. The set of n-length Pauli strings (n times tensor product of 2×2 matrics composed of Pauli matrices and the identity matrix) i.e., S(n) P = {Nn i=1 σi|σi ∈ SP , 1 ≤i ≤n} forms a basis for M2n(C), the set of 2n × 2n complex matrices. Lemma 1. Let A = Nn i=1 Ai ∈S(n) I,X ⊂ S(n) P be an n-length Pauli string comprising of I, and X. Further, B = Nn i=1 Bi ∈ S(n) P be another n-length Pauli string such that ( Bj ∈{X, Y } if Aj = X Bj ∈{I, Z} if Aj = I2. Then A SS= B. Proof. Since, X SS= Y , and I SS= Z, their Kronecker products are also structurally similar by definition. Hence, A SS= B. Following Lemma 1, we can write the support matrix of any 2n × 2n Hamiltonian matrix using the binary Pauli basis {I, X}⊗n, where n is the qubit number. We define a support basis in the following way. For a given Pauli string as described above, we can replace I 7→0, and X 7→1, such that we can map a string of the form { IXXI } 7→{ 0110 }. This can be further identified with a basis in the 2n dimensional computational space (n is both the string length and the number of qubits, to be identified from the context). To further clarify, consider a three qubit (n = 3) Pauli string (IXI), which we can write as (010) ≡|010⟩. Now, as discussed above, we identify this with one of the basis elements in the 23 = 8 dimensional computational space, such as |010⟩≡|2⟩. Therefore, to index all such Pauli strings that span the given n qubit description of the support of A i.e., ̃ A, we can use the index j = 0, . . . , 2n1. Consider the three-qubit case as before. All the possible Pauli bases chosen from the set S(3) P = n III, IIX, IXI, IXX, XII, XIX, XXI, XXX o , 7→ n 000, 001, 010, 011, 100, 101, 110, 111 o 7→{0, 1, 2, 3, 4, 5, 6, 7} (13) 6 where j encodes the indices 0, 1 · · · 7. Having established the notion of the support basis and its indexing through j, we now turn to the construction of the corresponding permutation matrix P j n. Our objective is to represent the permutation operator using CNOT gates. We use CNOT(p, q) to identify the positions of the control (p) and target (q) qubits of the CNOT gates (excluding j = 0, 1). For the target qubit of CNOT, we need to identify the qubit associated with the given value of the index j as discussed above (see Eq. (13)). To ease our computation load, we fix the last (nth) qubit as control (or target, depending on j, see below). We convert j to its binary equivalent i.e., jbin. Since we are fixing the nth qubit, we remove the right-most value from the binary string of jbin and call the rest of the string b. In the binary string b, we record the positions of ones as κj (from right to left), and form the index set κ = { κj }1. Each κj ∈κ indicates a target qubit for a CNOT gate with the fixed control qubit being qubit nth i.e., CNOT(n, n-κj-1). However, this sequence of operations is valid for odd values of j. For even j, we need two extra CNOT gates where the control is n -max κ -1 i.e., CNOT(n-max κ-1, n). The expression for P j n can now be written as P j n = I N n, if j = 0 or 1; =      Q j CNOT(n, n-κj-1), if j is odd; Q j CNOT(n-max κ-1, n) CNOT(n, n-κj-1) CNOT(n-max κ-1, n), if j is even. (14) This product involving CNOT gates is equivalent to a permutation operation; we provide the proof in Appendix A 1. For example, for n = 4 and odd j, we have, P j=5 n=4 1 := 1 2 3 4 • P j=15 n=4 := 1 2 3 4 • • • 1 As an example, consider n = 4 and j = 5. Binary equivalent of j i.e., jbin = 0101. Dropping the right most value from jbin gives- string b = 010. Thus, κ = { 1 }. and for even j, is even, then we have, P j=4 n=4 := 1 2 • • 3 4 • P j=14 n=4 := 1 • • 2 3 4 • • • Each value of j encodes the underlying support basis, which uniquely identifies the structure of the given matrix. Therefore, two matrices A and B having different support matrices ̃ A and ̃ B, respectively, will have distinct j values. Lemma 2. Let ̃ A, ̃ B ∈S(n) I,X \ {I2n} such that ̃ A ̸= ̃ B with corresponding support sets ΓA and ΓB. Then ΓA ∩ΓB = ∅ 7 Proof. We know from Eq. (13) that, j ∈ { 0, 1, · · · , 2n -1 } for a n qubit Pauli string. And from Theorem 2 for each j1, j2 ∈ { 0, 1, · · · 2n -1 } there exist P j1 n and P j2 n such that P j1 n ̃ AP j1 n = P j2 n ̃ BP j2 n = I⊗(n-1) ⊗X. (15) Further, from Proposition 2, no two permutation matrix P j1 n and P j2 n for j1 ̸= j2 share the same 2-cycles. Let, ΓA ∩ΓB ̸= ∅. Let's assume there exists at least one common row and column index p, q such that [A]p,q, [B]p,q ̸= 0. Since P j1 n (I⊗(n-1) ⊗X)P j1 n = ̃ A and P j2 n (I⊗(n-1) ⊗ X)P j2 n = ̃ B, there exists at least one 2-cycle that is common in both P j1 n and P j2 n due to our assumption. This leads to a contradiction. Hence, the lemma is proved. Corollary 1. Let ̃ A, ̃ B ∈S(n) I,X \ {I2n} such that ̃ A ̸= ̃ B with corresponding ΓA and ΓB. Then for any two Pauli strings E ∈SP (A) and F ∈SP (B) ΓE ∩ΓF = ∅. Proof. Readily follows from the definition of structural similarity (Definition 3) and Lemma 2. It can be observed that, the elements of S(n) {I,X}j for j ∈{1, . . . , 2n -1} is permutation similar, i.e., S(n) {I,X}j = P j n(I⊗(n-1) 2 ⊗X)P j n (see Appendix A 2). Please note that the total number of Pauli strings in a n-qubit system is 4n, the number of similar structural sets is 2n. Thus, we get L = 2n-1 X j=0 P j nL(j) BDP j n (16) Since P j n is symmetric and orthogonal- P j n = (P j n)T . Thus, from our discussions so far, it is evident that L(j) BD = P j nL(j)P j n is a 2-sparse (each row and column have at most 2 non-zero elements) block-diagonal matrix with 2 × 2 nontrivial blocks. Despite its apparent simplicity, generating all L(j) BD involves a sequence of consecutive matrix multiplications, which can become Algorithm 1: Laplacian partition algorithm for 2n × 2n Hermitian matrices Input: A 2n × 2n real symmetric matrix L Output: L(j) such that L = P2n-1 j=0 L(j), where L(j) = P j nL(j) BDP j n and L(j) BD is 2-sparse block-diagonal with 2 × 2 blocks Provided: 1. A 2n × 2n real symmetric matrix M 2. ̃ H = 1 1 1 -1 3. Permutation matrices from the set P j n. Extracting block-diagonal and diagonal elements for j ←0 to 2n -1 do L(j) ←02n×2n; for k ←0 to 2n -1 do [L(0)]k,k ←[L]k,k; for k ←0 to 2n -1 step +2 do [L(1)]k,k+1 ←[L]k,k+1; [L(1)]k+1,k ←[L(1)]k,k+1; Extracting sub-matrices permutation-similar to 2 × 2 blocks for j ←2 to 2n -1 do for u ←0 to 2n-1 -1 do if j is odd then Compute αΛ j-1 2 (u) and βΛ j-1 2 (u); if α 0.98 up to t ∼104. From Fig. 2, the dependence of fidelity on graph connectivity is also can be observedthe edge probability p significantly affects the fidelity decay. Sparse graphs exhibit slower fidelity decay because their Hamiltonians contain fewer non-commuting terms. As connectivity p increases, additional non-commutativity accelerates fidelity loss. For fixed p, the fidelity remains closer to unity for longer period of time when δt = 10-3 than when δt = 10-2. In contrast, for fixed δt, sparser graphs maintain higher fidelity over longer times. Thus, the departure of fidelity from unity is governed jointly by graph connectivity and the Trotter-Suzuki step size. This behavior is consistent with general results in Hamiltonian simulation, where the Trotter-Suzuki error scales with both the Hamiltonian norm and the chosen time step [36-39]. B. Fidelity scaling To further understand the accuracy of our quantum circuit implementation, we analyze the decay of circuit fidelity as a function of system size and graph connectivity. We define the cutoff time τc as the evolution time at which the fidelity drops to approximately 0.95. The fidelity is averaged over ten independent realizations of Erd ̋os-R ́enyi graphs in order to account for statistical fluctuations. The results are shown in Fig. 3, where τc is plotted against the number of qubits n for several values of the edge probability p. Two Trotter step sizes are considered 12 3 4 5 6 7 8 9 10 Number of Qubits 2 0 2 4 6 8 10 ln( c) p = 0.1 p = 0.4 p = 0.7 p = 1.0 (a) δt = 10-2 3 4 5 6 7 8 9 10 Number of Qubits 4 6 8 10 12 14 16 ln( c) p = 0.1 p = 0.4 p = 0.7 p = 1.0 (b) δt = 10-3 FIG. 3. Cutoff Time (τc) at which the quantum circuit fidelity ∼0.95 plotted against the number of qubits n, for different edge probabilities p in the underlying Erd ̋os-R ́enyi graph. (a) Results for Trotter time step δt = 10-2. (b) Same for δt = 10-3. The fidelity decays more rapidly with increasing number of qubits n, and the decay is further for graphs with higher connectivity p and larger Trotter step size δt. δt = 10-3 and δt = 10-2. From Fig. 3 we observe, the cutoff time τc decreases as the number of qubits increases, indicating that the larger the qubit number, the more the number of non-commuting terms, which result in a rapid increase of Trotter errors. Also, 3 4 5 6 7 8 9 10 Number of Qubits 2.5 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 ln( c) m=-1.27 m=-1.19 m=-1.23 m=-1.21 m=-1.48 m=-1.41 m=-1.42 m=-1.32 p = 0.1 p = 0.4 p = 0.7 p = 1.0 t = 10 3 t = 10 2 FIG. 4. Combined analysis of the cutoff time (τc) for fidelity decay (falls below 95%) plotted against the number of qubits n, including data from both Fig. 3a and Fig. 3b. Each curve corresponds to a different Erd ̋os-R ́enyi graph connectivity p and Trotter step size δt. The straight lines represent exponential fits of the form T(n) ∼emn+c, with fitted slope (m) mentioned in the legend. the graph connectivity plays an important role, sparse graphs i.e., graphs with low edge probability p depict higher τc than higher p for a fixed number of qubits. Moreover, the choice of Trotter step size significantly affects performance. For δt = 10-3, τc is larger across all values of p than τc for δt = 10-2, which indicates that larger Trotter step size δt leads to significantly shorter evolution times before fidelity falls below 0.95. C. Trotter error check Fig. 4 presents a combined analysis of τc across different qubit numbers n, which includes data from both Fig. 3a and Fig. 3b. Each curve corresponds to an Erd ̋os-R ́enyi graph with varying edge probability p, and two Trotter step sizes are considered, δt = 10-3 and δt = 10-2. The data are fitted to an exponential curve of the form T(n) ∼emn+c, with the fitted slopes m reported in the legend, n denotes the number of qubits. 13 3 4 5 6 7 8 9 10 Number of Qubits 10 8 6 4 2 0 2 4 ln( t) slope=1.39 slope=1.39 t = 10 3 t = 10 2 FIG. 5. Scaling analysis of the Trotterization error (εδt) at a single Trotter step δt as a function of qubit number n. Theoretical upper bound of Trotter error (εδt), given by δt2·ε·22n-1, is also fitted with straight lines, showing a slope of ∼1.39 for both δt values. The results show a clear exponential decay of τc with increasing qubit number. For δt = 10-3, the fitted slopes vary between -1.19 and -1.27, with an average value mavg ≈- p 3/2. For δt = 10-2, the slopes are slightly steeper, ranging from -1.32 to -1.48 with an average of mavg ≈- √ 2. To connect these observations with theoretical error estimates, we analyze the scaling of the theoretical Trotter error per step. For a Hamiltonian decomposed into non-commuting terms, the first-order Trotter-Suzuki bound scales as [39] εδt ∼δt2 ε 22n-1, (40) where ε denotes the typical operator norm of commutators among Hamiltonian blocks. In our case, the Laplacian decomposition produces 2n -1 non-commuting blocks, giving rise to approximately 2n-1 2 ∼22n-1 commutator contributions, thereby explaining the exponential scaling of the Trotter error. For a total evolution time T and step size δt, the accumulated error scales as εtot ∼T · δt · ε · 22n-1. (41) Fig. 5 shows the scaling of the Trotter error εδt with n, together with theoretical upper bounds, considering ε = 1. The fitted slope of the error curves is ∼1.39, which is close in magnitude to the average negative slope of the fidelity decay (mavg ≈- p 3/2) observed in Fig. 4. This correspondence indicates that the observed fidelity decay is governed by the exponential growth of Trotter error with qubit number. It is worth noting that the empirical slopes are somewhat smaller than the theoretical upper bounds. This discrepancy arises because the worst-case analysis overestimates the theoretical error. The effective commutator norms ε are reduced by the sparsity and structure of the Laplacian blocks, and the actual error accumulation depends on the choice of initial state also. These findings validate the effectiveness of the proposed Trotterized circuit architecture for simulating continuous-time quantum walks, while clarifying the limitations imposed by Trotter error scaling. VI. LOCALIZATION IN CTQW CIRCUIT SIMULATIONS In this section, we study localization in continuous-time quantum walks. We use localization as a tool for validating the accuracy of the Trotterized circuit evolution against exact simulations. Localization plays a key role in characterizing transport efficiency, memory retention of initial states, and spectral features of the underlying graph Hamiltonian. Unlike Andersontype localization, which arises from disorderinduced destructive interference, the localization observed here emerges from spectral degeneracies of the graph Hamiltonian [17, 18, 47, 52, 53]. Figs. 6a, 6b, 7a, and 7b present the timeaveraged probability distributions pc(j) of a walker over all N = 2n vertices with n = 5 for Erd ̋os-R ́enyi graphs with edge probabilities p = 0.4 and p = 0.7 evaluated at 1000 steps, computed both from exact evolution and from the Trotterized quantum circuit. The orange bars denote the initial vertex (|ψ0⟩), selected as the node with minimum degree for Fig. 6 and maximum degree for Fig. 7. The deviations from the 14 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (a) p = 0.1 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (b) p = 0.4 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (c) p = 0.1 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (d) p = 0.4 FIG. 6. Panels (a), (b) - Time-averaged probability distribution pc(j) (localization profile) of the quantum walker over all N = 2n (n = 5) vertices for different Erd ̋os-R ́enyi graph edge probabilities p = 0.1, 0.4. The orange bars mark the initial vertex, chosen as the node with the minimum degree. The deviation from the uniform line at 1/N indicates varying degrees of localization. Strong peaks at the initial site highlight the persistence of the walker's probability near its origin, even for higher p. Results from exact simulation and Trotterized circuit evolution are shown to agree closely. Panels (c), (d) - Contour plots showing the temporal evolution of the CTQW probability distribution (pc(j)) for different edge probabilities p = 0.1, 0.4. Initial vertex, chosen as the node with the minimum degree. Each heatmap displays the walker's probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. uniform baseline 1/N reveal the presence of localization, where we observe a high peak at the initial site (|ψ0⟩), indicating a higher probability of finding the walker near |ψ0⟩. In both cases (exact evolution and the Trotterized circuit evolution), the agreement between the two methods is ex15 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (a) p = 0.4 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) 1/N line | 0 pC(j) Trotterized pC(j) (b) p = 0.7 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (c) p = 0.4 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (d) p = 0.7 FIG. 7. Panels (a), (b) - Time-averaged probability distribution pc(j) (localization profile) of the quantum walker over all N = 2n (n = 5) vertices for different Erd ̋os-R ́enyi graph edge probabilities p = 0.4, 0.7 at steps 1000. The orange bars mark the initial vertex, chosen as the node with maximum degree. The deviation from the uniform line at 1/N indicates varying degrees of localization. Strong peaks at the initial site highlight the persistence of the walker's probability near its origin, even for higher p. Results from exact simulation (green bar) and Trotterized circuit evolution (purple bar) are shown to agree closely. Panels (c), (d) - Contour plots of showing the temporal evolution of the CTQW probability distribution (pc(j)) for different edge probabilities p = 0.4, 0.7. Initial vertex, chosen as the node with the maximum degree. Each heatmap displays the walker's probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. cellent. A key observation from our simulations is that the degree of the initial vertex strongly influences localization. For Erd ̋os-R ́enyi graphs with lower connectivity p ∼0.1, localization becomes particularly pronounced when the walker begins at the vertex of minimum degree. Apart from it, an interesting observation occurs in the contour plots of the Fig. 6c, 6d, 7c, 16 0 200 400 600 800 1000 Steps 0 5 10 15 20 25 30 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (a) p = 1.0 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (b) p = 1.0 FIG. 8. Contour plots of showing the temporal evolution of the CTQW probability distribution (pc(j)) for edge probabilities p = 1.0. Initial vertex, chosen as the node with maximum degree for (a) 5 qubit and (b) 6 qubit. Each heatmap displays the walker's probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. and 7d where the temporal evolutions of pc(j) are depicted. Each heatmap illustrates the probability distribution across vertices as a function of time. A striking feature emerges for some graphs where vertices that are directly connected and share the same degree show oscillatory behavior in the walker's probability amplitude when we choose any of them as our initial starting state |ψ0⟩. In such cases, the walker dynamically redistributes its localization weight between these same degree vertices or oscillating vertex group, leading to a persistent oscillation of probability across time. Conversely, other vertices, with the same maximal degree, do not participate in this oscillation if that is not directly connected to the oscillating vertex group. For that vertex, the walker's localization probability remains comparatively high throughout the evolution if the starting |ψ0⟩is on that vertex. This behavior describes the role of graph connectivity. In summary, when we initialize the walker at a vertex that carries the maximum degree, the walker tends to localize in that vertex (Figs. 9a9c) if it is not directly connected to the oscillating vertex group. This effect originates from the spectral structure of the Laplacian, where highdegree vertices contribute disproportionately to degenerate (or nearly degenerate) eigenmodes. Since the initial state has a large overlap with these modes, part of the amplitude acquires only global phases during evolution, preventing complete delocalization. As a result, the walker retains a significant long-time probability at the starting vertex. Even when p ≥0.9, i.e., when the underlying graph is complete or near-complete, all vertices have the same degree, if we initialize the walker at a single vertex, the time-averaged probability indicates that the walker remains localized at that vertex instead of spreading uniformly across the graph (Figs. 8 and 9d). This localization does not stem from disorder, as in Anderson localization, but rather from the symmetry and spectral degeneracy [17, 18, 47, 52, 53] of the complete graph Laplacian. The decomposition of the initial state into a stationary uniform component and a degenerate oscillatory subspace explains the persistence of amplitude near the origin (a detailed account is given in the Appendix C). The complete graph, therefore, provides a striking example where strong connectivity and high sym17 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (a) p = 0.1 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (b) p = 0.4 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (c) p = 0.7 0 200 400 600 800 1000 Steps 0 10 20 30 40 50 60 Vertex Index 0.0 0.2 0.4 0.6 0.8 1.0 pC(j) (d) p = 1.0 FIG. 9. Panels (a) - (d) - Contour plots showing the temporal evolution of the CTQW probability distribution (pc(j)) for different edge probabilities p = 0.1, 0.4, 0.7, 1.0 (from top left to bottom right) for 6 qubit. Initial vertex, chosen as the node with the maximum degree. Each heatmap displays the walker's probability at each vertex as a function of time. The presence of persistent high-probability bands indicates localization near the initial site. These results are from the circuit-based implementation. metry induce localization in CTQWs through purely spectral mechanisms. VII. CONCLUSION In this work, we have developed a scalable quantum circuit framework for simulating continuous-time quantum walks (CTQWs) on arbitrary random graphs, with a particular focus on Erd ̋os-R ́enyi (ER) graphs. By representing the CTQW Hamiltonian in terms of the graph Laplacian and introducing the graph Laplacian partitioning algorithm (LPA), we demonstrated that the Laplacian L of an n-qubit graph can be decomposed into a set of sparse submatrices {L(j)}, each of which is permutation-similar to a block-diagonal form with 2 × 2 non-trivial blocks. This decomposition allows the efficient encoding of the graph Hamiltonian into quantum circuits through permutation matrices that can be realized using CNOTgates. 18 The resulting framework enables the implementation of the full time-evolution operator U(t) = e-iHt using a Trotter-Suzuki product formula applied to the partitioned Hamiltonian components. Compared to standard Pauli-string decompositions that scale as O(4n), our blockdiagonal approach achieves a reduced decomposition complexity of O(2n -1)-substantially lowering circuit depth and gate count. This provides a resource-efficient route for realizing CTQWs on near-term quantum devices and paves the way for the exploration of random graph dynamics on noisy intermediate-scale quantum (NISQ) hardware. Furthermore, we compared the Trotterized circuit evolution against exact simulations by verifying fidelity of the Trotterized evolution against exact dynamics. The time-averaged probability distributions revealed excellent agreement between exact and circuit-based dynamics, confirming high fidelity of the implemented evolution. We showcase that our circuit error closely follows the theoretical Trotter error. We also tested our circuit using localization as a diagnostic tool. We found that localization in our CTQW implementation arises not from disorder, as in Andersontype localization, but from spectral degeneracies of the Laplacian. The degree of the initial vertex strongly influences localization strength. The walkers initialized at low-degree vertices in sparse ER graphs (p ∼0.1) exhibit localization, while in dense or complete graphs (p→1) localization persists due to symmetry-induced degeneracies. In highly connected graphs, oscillatory behavior between connected vertices of equal degree was observed, corresponding to coherent population transfer within degenerate eigen-subspaces. These results demonstrate that spectral structure and graph connectivity dictate localization behavior in CTQWs, and that the proposed circuit framework faithfully reproduces these quantum transport features. We establish a general framework for Hamiltonian simulation using the graph Laplacian partition algorithm with reduced complexity compared to standard Pauli decomposition. However, we believe that the partitioning strategy could be further improved to have a better fidelity response over larger Trotter steps. This work also opens up the implementation of weighted graph walks, i.e., lackadaisical quantum walks, quantum walks with memories, to name a few. One of the major drawbacks of our method lies in its scalability-as the number of qubits increases, the circuit depth also proportionately increases because of the presence of a higher number of partitions in the LPA. Therefore, optimizing our algorithm to produce fix gate-depth circuit remains a future objective. We can also implement various quantum walk algorithmic tasks, such as the traveling salesman problem [60], finding the inverse of a matrix [61]. Our work implements CTQW on quantum circuits for random graphs, which is a crucial result at the age of NISQ devices. VIII. ACKNOWLEDGMENT SC acknowledge the support of the Prime Minister's Research Fellowship (PMRF). RKR acknowledges the support by the Institute for Basic Science in Korea (IBS-R024-D1). The authors would also like to acknowledge Paramshakti Supercomputer facility at IIT Kharagpur-a national supercomputing mission of the Government of India, for providing the necessary highperformance computational resources. Appendix A: Permutation Operator and Permutation Similarity between Pauli strings 1. Permutation Operator The product involving CNOT gates (see Eq. (14)) are equivalent to permutation operation. To understand this argument we follow the work Sarkar et al. [54]. The key idea here is that for a set of n-length Pauli strings S(n) P (n is the qubit number), one can find a set of permutation matrices P j n, that transform the Pauli strings into block-diagonal matrices with 2 × 2 non-trivial blocks. In Ref. [54] the authors define permutations 19 as ΠTe n,x and ΠTo n,x 2, where x is a binary string x = n-2 X κj=0 xκj2κj ≡(xn-2, . . . , x0), and xκj ∈{0, 1}. (A1) We can also define the index set κ = { κj } for each x as κx. These permutations are similar to P j n = (ΠTo n,x = ΠTo n, j 2 , if j is even, ΠTe n,x = ΠTe n, j-1 2 , otherwise. (A2) Both x = 0 and j = 0, 1 will give the Identity matrix. The notation e (o) in ΠTe n,x (ΠTo n,x) indicates that the corresponding permutation matrix is a product of permutations of disjoint 2-cycles P(α, β), where both α and β3 are even ( exactly one of α or β is odd). The notation P(α, β) denotes the matrix obtained by exchanging the αth and βth rows of the target matrix. Proposition 2. For any x, and any u = (un-2, . . . , u0); define ̄uk := uk⊕1. Consider the functions αg κx : {0, 1}n-1 →{ 0, . . . , 2n-1 -1 } and βg κx : { 0, 1}n-1 →{0, . . . , 2n-1 -1 } , g ∈ { e, o } defined as αg κx(u) = X k∈κx uk 2k+1 + X ̃k/∈κx u ̃k 2 ̃k+1 + 2, βe κx(u) = X k∈κx ̄uk 2k+1 + X ̃k/∈κx u ̃k 2 ̃k+1 + 2, βo κx(u) = X k∈κx ̄uk 2k+1 + X ̃k/∈κx u ̃k 2 ̃k+1 + 1. 2 Here, Π denotes the product over all disjoint 2-cycle permutations P(α, β) defined by the index functions αg κx and βg κx, while T is simply a symbolic label used to distinguish the corresponding permutation type. 3 α, β are row or column index-which will be clear from the given context. Then ΠTg n,x = Y 0≤u<2n-1-1 αg κx(u)<βg κx(u) P αg κx(u), βg κx(u) , g ∈{e, o}. It follows for x ̸= y, ΠTg n,x ̸= ΠTg n,y with αg κx(u), βg κx(u) ̸= αg κy(u), βg κy(u) for all 0 ≤u ≤2n-1 -1. Example: for n = 3 and x = 1, from Eq. (A1)- x = 1 =⇒(x1, x0) = (0, 1) =⇒ κx = {0}. αe κx(u) = 2u0 + 4u1 + 2, βe κx(u) = 2(1 -u0) + 4u1 + 2, u = (u1, u0) ∈{0, 1}2. u αe κx(u) βe κx(u) α < β (0, 0) 2 4 yes (0, 1) 4 2 no (1, 0) 6 8 yes (1, 1) 8 6 no We have, ΠTe 3,1 = Y u∈{0,1}2 αe κx(u)<βe κx(u) P(αe κx(u), βe κx(u)) = P(2,4) P(6,8). Similarly for ΠTo 3,1 = Y u∈{0,1}2 αo κx(u)<βo κx(u) P(αo κx(u), βo κx(u)) = P(2,3) P(6,7). P j=2 n=3 = ΠTo 3,1 := 1 2 • • 3 • P j=3 n=3 = ΠTe 3,1 := 1 2 3 • 20 2. Permutation Similarity between Pauli strings We now state the following theorem from Ref. [54], which establishes permutation similarity of the elements of S(n) I,X. Theorem 2. [54] Let j ∈{1, . . . , 2n -1}. Then S(n) {I,X}j =    ΠTe n, j-1 2 (I⊗(n-1) 2 ⊗X)ΠTe n, j-1 2 , j odd, ΠTo n, j 2 (I⊗(n-1) 2 ⊗X)ΠTo n, j 2 , j even, = P j n(I⊗(n-1) 2 ⊗X)P j n. Proof. See Sarkar et al. [54] for the detailed proof. Appendix B: Lemma 3 Example Consider the circuit 31 with two controls (top wires) and one target (bottom wire). From left to right, between the single-qubit rotations on the target, the CNOTs from the controls to the target are connected. We denote the four target rotations by Rz(ω1), Rz(ω2), Rz(ω3), Rz(ω4). Here k = 2, so c ∈{00, 01, 10, 11} is the control basis string. Just before each rotation, the active-control mask is m1 = 00, m2 = 01, m3 = 10, m4 = 11. (B1) Interpret each mask mi via its overlap size i.e. (number of shared 1s). The parity used in Lemma 3 is precisely this overlap size mod 2. Hence, the unitary is block diagonal, U = M c∈{0,1}2 Rz(ηc), where ηc = 4 X i=1 (-1) ⟨c·mi⟩ωi. (B2) For each control string c we list the overlap sizes ⟨c · mi⟩=: oi (with i = 1, . . . , 4), their parities, and the resulting signs: c o1 o2 o3 o4 o1(mod 2) o2(mod 2) o3(mod 2) o4(mod 2) signs ((-1)oi) 00 0 0 0 0 0 0 0 0 (+, +, +, +) 01 0 1 0 1 0 1 0 1 (+, -, +, -) 10 0 0 1 1 0 0 1 1 (+, +, -, -) 11 0 1 1 2 0 1 1 0 (+, -, -, +) (B3) Using the signs above in ηc = P i(-1)oi(c)ωi gives η00 = ω1 + ω2 + ω3 + ω4, η01 = ω1 -ω2 + ω3 -ω4, η10 = ω1 + ω2 -ω3 -ω4, η11 = ω1 -ω2 -ω3 + ω4. (B4) Equivalently, with ω = (ω1, ω2, ω3, ω4)T ,      η00 η01 η10 η11     =      +1 +1 +1 +1 +1 -1 +1 -1 +1 +1 -1 -1 +1 -1 -1 +1      | {z } 22H⊗2      ω1 ω2 ω3 ω4     . (B5) 21 Appendix C: Localization profile for all connected graphs To quantify localization, we employ the inverse participation ratio (IPR), defined for a walker initialized at vertex j as, IPRj(t) = N X i=1 p2 ij(t), pij(t) = ⟨i| e-iHt |j⟩ 2 , (C1) which measures the spread of the probability distribution in the vertex basis. For a completely delocalized state, pij(t) ≈1/N for all vertices, yielding IPRj(t) ≈1/N, which serves as a natural ergodic baseline. Localization is implied at a said vertex j, whenever the value of IPR at that vertex is greater than 1/N. For a complete graph KN, each vertex is connected to all others with degree deg(v) = N -1 for all v ∈KN. (C2) The CTQW Hamiltonian is defined as (assuming γ = 1) H = -L, (C3) where L is the Laplacian of KN. The evolution operator is U(t) = exp(iLt), and its spectral decomposition governs the transport dynamics. The Laplacian spectrum of the complete graph is highly degenerate: there is one eigenvalue E1 = 0, corresponding to the uniform superposition state, and (N -1) degenerate eigenvalues equal to N, E1 = 0, Ej = N (j = 2, . . . , N). (C4) This large degeneracy underpins the persistence of localization in CTQWs on KN. The normalized eigenvector associated with the zero eigenvalue is the uniform state |s⟩= 1 √ N N X j=1 |j⟩, (C5) while the remaining eigenvectors span the subspace orthogonal to |s⟩. An initial state localized at a vertex |v⟩can be decomposed as |v⟩= ⟨s|v⟩|s⟩+ |v⟩-⟨s|v⟩|s⟩ . (C6) The uniform component |s⟩is stationary since L |s⟩= 0, whereas the orthogonal component evolves with a global phase eiNt, owing to its eigenvalue N. The total state at time t is therefore |ψ(t)⟩= ⟨s|v⟩|s⟩+ eiNt |v⟩-⟨s|v⟩|s⟩ . (C7) The amplitude to remain at the initial vertex is ⟨v|ψ(t)⟩= 1 N + 1 -1 N eiNt, (C8) leading to the instantaneous probability |⟨v|ψ(t)⟩|2 = 1 N 2 + 1 -1 N 2 + 2 N 1 -1 N cos(Nt). (C9) Averaging over time removes the oscillatory term and yields the time-averaged probability at the starting vertex, pv = 1 N 2 + 1 -1 N 2 = 1 -2 N + 2 N 2 . (C10) For large N, this approaches pv ≈1 -2 N , (C11) which is markedly higher than the uniform distribution 1/N. Thus, even though the complete graph is maximally connected, the walker retains a strong probability of being detected at its initial position at long times. To compute the IPR, we first note that using Eq. (C5) ⟨i|ψ(t)⟩= 1 N + eiNt δiv -1 N . (C12) 22 Which further allows us to write from Eq. (C6) piv(t) = 1 N + 1 -1 N eiNt 2 for i = v, = 1 -2 N + 2 N 2 + 2(N -1) N 2 cos Nt. (C13) piv(t) = 1 N (1 -eiNt) 2 for i ̸= v, = 2 N 2 (1 -cos Nt). (C14) Therefore, we can compute the IPR using Eq. (C1) as IPRv(t) = p2 iv(t) | {z } i=v +(N -1) p2 iv(t) | {z } i̸=v , = 1 -4 N + 10 N 2 -6 N 3 + 4(N -1)(N -2) N 3 cos Nt, + 2(N -1) N 3 cos 2Nt. (C15) The average over a long time results in IPRv = 1 -4 N + 10 N 2 -6 N 3 , (C16) which in the large N limit reduces to IPRv ≈1 -4 N . (C17) This suggests that for large N, on average, the IPR remains close to 1, suggesting strong localization. [1] R. P. Feynman, International Journal of Theoretical Physics 21, 467 (1982). [2] S. Lloyd, Science 273, 1073 (1996). [3] I. M. Georgescu, S. Ashhab, and F. Nori, Rev. Mod. Phys. 86, 153 (2014). [4] B. Fauseweh, Nature Communications 15, 2123 (2024). [5] E. Farhi and S. Gutmann, Physical Review A 58, 915 (1998). [6] A. M. Childs, E. Farhi, and S. Gutmann, Quantum Information Processing 1, 35 (2002). [7] A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann, and D. A. Spielman, in Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, STOC '03 (Association for Computing Machinery, New York, NY, USA, 2003) p. 59-68. [8] K. Manouchehri and J. Wang, Physical Implementation of Quantum Walks, 1st ed., Quantum Science and Technology (Springer Berlin, Heidelberg, 2014) pp. X, 230, springer-Verlag Berlin Heidelberg. [9] R. K. Ray, Phys. Rev. E 106, 024115 (2022). [10] R. K. Ray, R. Srikanth, and S. Majumder, Phys. Rev. E 112, 044120 (2025). [11] D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani, in Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing, STOC '01 (Association for Computing Machinery, New York, NY, USA, 2001) p. 50-59. [12] J. Kempe, Contemporary Physics 44, 307 (2003), https://doi.org/10.1080/00107151031000110776. [13] O. M ̈ulken and A. Blumen, Physics Reports 502, 37 (2011). [14] K. Mukai and N. Hatano, Phys. Rev. Res. 2, 023378 (2020). [15] R. Portugal, Quantum Walks and Search Algorithms, 1st ed., Quantum Science and Technology (Springer, New York, NY, 2013) pp. XI, 222, 37 b/w illustrations. [16] S. Wald and L. B ̈ottcher, Phys. Rev. E 103, 012122 (2021). [17] N. Inui, Y. Konishi, and N. Konno, Phys. Rev. A 69, 052323 (2004). [18] S. Chakraborty, K. Luh, and J. Roland, Physical review letters 124, 050501 (2020). [19] R. Campos, P. A. M. Casares, and M. A. MartinDelgado, Quantum Machine Intelligence 5, 28 (2023). [20] E. Lee, S. Lee, and S. Kim, Phys. Rev. D 111, 116001 (2025). 23 [21] C. H. Alderete, S. Singh, N. H. Nguyen, D. Zhu, R. Balu, C. Monroe, C. M. Chandrashekar, and N. M. Linke, Nature Communications 11, 3720 (2020). [22] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, The Journal of Chemical Physics 129, 174106 (2008). [23] T. Kitagawa, M. S. Rudner, E. Berg, and E. Demler, Phys. Rev. A 82, 033429 (2010). [24] R. Duda, M. N. Ivaki, I. Sahlberg, K. P ̈oyh ̈onen, and T. Ojanen, Phys. Rev. Res. 5, 023150 (2023). [25] K. Shukla and C. M. Chandrashekar, Phys. Rev. A 109, 032608 (2024). [26] H. Gao, K. Wang, D. Qu, Q. Lin, and P. Xue, New Journal of Physics 25, 053011 (2023). [27] Y. Aharonov, L. Davidovich, and N. Zagury, Phys. Rev. A 48, 1687 (1993). [28] B. L. Douglas and J. B. Wang, Phys. Rev. A 79, 052335 (2009). [29] T. Loke and J. B. Wang, Phys. Rev. A 86, 042338 (2012). [30] T. Loke and J. Wang, Annals of Physics 382, 64 (2017). [31] T. Loke and J. B. Wang, Journal of Physics A: Mathematical and Theoretical 50, 055303 (2017). [32] Z. Chen, G. Li, and L. Li, Phys. Rev. A 110, 052215 (2024). [33] R. P. Feynman, Optics News 11, 11 (1985). [34] P. Wocjan and S. Zhang, arXiv preprint quant-ph/0606179 https://doi.org/10.48550/arXiv.quantph/0606179 (2006). [35] A. M. Childs, Lecture notes at 5 (2017). [36] H. F. Trotter, Proceedings of the American Mathematical Society 10, 545 (1959). [37] M. Suzuki, Communications in Mathematical Physics 51, 183 (1976). [38] M. Suzuki, Journal of Mathematical Physics 32, 400 (1991). [39] A. M. Childs, Y. Su, M. C. Tran, N. Wiebe, and S. Zhu, Phys. Rev. X 11, 011020 (2021). [40] A. M. Childs and Y. Su, Phys. Rev. Lett. 123, 050503 (2019). [41] B. Bollob ́as, Random Graphs, 2nd ed., Cambridge Studies in Advanced Mathematics (Cambridge University Press, 2001). [42] P. Erd ̋os and A. R ́enyi, Publicationes Mathematicae 6, 290 (1959). [43] P. Erd ̋os and A. R ́enyi, Publ. Math. Inst. Hungar. Acad. Sci 5, 17 (1960). [44] J. Tindall, A. Searle, A. Alhajri, and D. Jaksch, Nature Communications 13, 7445 (2022). [45] S. Dutta, Physical Review E 111, 10.1103/PhysRevE.111.034312 (2025). [46] F. R. K. Chung, Spectral Graph Theory, CBMS Regional Conference Series in Mathematics, Vol. 92 (American Mathematical Society, 1997) chapter 1 available online. [47] A. Mandal, R. S. Sarkar, S. Chakraborty, and B. Adhikari, Physical Review A 106, 042405 (2022). [48] P. W. Anderson, Phys. Rev. 109, 1492 (1958). [49] Y. Lahini, A. Avidan, F. Pozzi, M. Sorel, R. Morandotti, D. N. Christodoulides, and Y. Silberberg, Phys. Rev. Lett. 100, 013906 (2008). [50] A. Crespi, R. Osellame, R. Ramponi, V. Giovannetti, R. Fazio, L. Sansoni, F. De Nicola, F. Sciarrino, and P. Mataloni, Nature Photonics 7, 322 (2013). [51] I. Vakulchyk, M. V. Fistul, P. Qin, and S. Flach, Phys. Rev. B 96, 144204 (2017). [52] R. Bueno and N. Hatano, Physical Review Research 2, 033185 (2020). [53] A. P. Balachandran, A. Kundalpady, P. Padmanabhan, and A. Sinha, Phys. Rev. A 109, 012205 (2024). [54] R. S. Sarkar, S. Chakraborty, and B. Adhikari, arXiv preprint (2024). [55] M. M ̈ott ̈onen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa, Phys. Rev. Lett. 93, 130502 (2004). [56] M. M ̈ott ̈onen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa, Quant. Inf. Comp. 5, 467-473 (2005). [57] R. S. Sarkar and B. Adhikari, in 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), Vol. 01 (2023) pp. 1078-1088. [58] A. M. Krol, A. Sarkar, I. Ashraf, Z. AlArs, and K. Bertels, Applied Sciences 12, 10.3390/app12020759 (2022). [59] R. Jozsa, Journal of modern optics 41, 2315 (1994). [60] S. Marsh and J. B. Wang, Phys. Rev. Res. 2, 023302 (2020). [61] A. Kay and C. Tamon, Matrix Inversion by Quantum Walk, 2508.06611.
2510.14902
VLA2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation Han Zhao‡,1,2, Jiaxuan Zhang‡,2,3, Wenxuan Song4, Pengxiang Ding1,2, Donglin Wang*,2, 1Zhejiang University, China 2MiLAB, Westlake University, China 3Southern University of Science and Technology, China 4Hong Kong University of Science and Technology (Guangzhou), China Abstract— Current vision-language-action (VLA) models, pre-trained on large-scale robotic data, exhibit strong multi- task capabilities and generalize well to variations in visual and language instructions for manipulation. However, their success rate drops significantly when faced with object concepts outside the training data, such as unseen object descriptions and textures in the dataset. To address this, we propose a novel agentic framework, VLA2, which leverages OpenVLA as the execution backbone and effectively leverages external modules such as web retrieval and object detection to provide visual and textual knowledge about target objects to the VLA. This approach mitigates generalization failure when handling out-of-distribution objects. Based on the LIBERO simulation environment, we introduced novel objects and object descriptions to construct a new evaluation benchmark with three difficulty levels to test the effectiveness of our method. Our framework successfully outperformed the current state- of-the-art models on our designed hard-level generalization benchmark. Compared to the standalone OpenVLA baseline, VLA2 achieves a 44.2% improvement in the success rate in the hard-level benchmark and an average improvement of 20.2% in all customized environments without any performance degradation on in-domain tasks. Project website: https://vla- 2.github.io. I. INTRODUCTION In recent years, foundation models have profoundly in- fluenced the development of artificial intelligence research. This impact spans visual encoders [1]–[3], multi-modal large language models [4]–[6], and agent systems [7]–[9], among others. In the field of robotics, Vision-Language-Action (VLA) models [10]–[16] built upon vision-language models represent a prominent research paradigm. By fully integrating visual perception, language instruction understanding, and action execution into a unified model, VLA leverages large- scale robotic manipulation datasets for end-to-end training. This approach effectively harnesses the learning capacity of large-scale models and shows strong potential to serve as a foundational backbone for general-purpose robots perform- ing manipulation tasks in open-world environments in the future. However, although VLA models have acquired a certain degree of generalization ability, such as understanding some unseen language instructions and manipulating correspond- ing objects, they completely fail to comprehend instruc- tions involving entirely unseen concepts (as demonstrated ‡Equal Contribution *Corresponding Author Fig. 1: Evaluation result on our custom Hard-level bench- mark. In evaluation involving unseen concepts (i.e., object textures and language descriptions outside the dataset), our proposed framework surpasses other state-of-the-art models finetuned on the original LIBERO dataset. In contrast, the reproduced Agentic Robot framework [17] using our model exhibits a significantly noticeable performance degradation in this task. in OpenVLA failure cases [11]) and are unable to transfer previously learned manipulation experience to such scenar- ios. Some researchers have attempted to jointly train robotic manipulation data with web-scale multimodal data [10], [14], aiming to preserve extensive conceptual knowledge during training and thereby enhance generalization in manipulation tasks. However, such a training paradigm not only demands substantial resources but also makes iterative model updates with emerging concepts impractical. As a result, it fails to fully address the problem. To this end, we proposed Vision-Language-Action Agent (VLA2), a novel integrated system-level framework designed to increase the capabilities of VLA systems by supporting the invocation of diverse tools—including task planning, web search, object detection, and other functional mod- ules—thereby extending the executive limits of the current VLA models. Our main contributions are as follows: • We propose the VLA2 framework that integrates task planning, conversion of unseen concepts into known information via web and memory retrieval, VLA-based execution, and a verifier module to assess task comple- tion. • We fine-tune OpenVLA [11] on the augmented LIBERO arXiv:2510.14902v1 [cs.RO] 16 Oct 2025 Fig. 2: Framework overview. The proposed framework comprises three components: A. preliminary processing, B. cognition and memory, and C. Judgment and execution. During task running, preliminary processing and cognition (except video object segmentation as VOS) are invoked only once at the start of each task. [18] dataset to enable the VLA to accept masked images as input conditions for improving generalization in object manipulation. • Based on the LIBERO simulation environment, we de- signed object generalization tasks across three difficulty levels, ranging from simple color variations (Easy) and manipulation of generalized target objects (Medium) to generalization to objects with unseen concepts (Hard). II. RELATED WORKS A. Vision-Language-Action Models VLA models [10]–[16], [19] belong to a type of foun- dation model that processes visual and other modal data as observations, and follows human natural language commands to execute the corresponding robotic tasks. Through pre- training on large-scale robotic manipulation datasets [20]– [22] and minimal fine-tuning through supervised fine- tuning [23]–[25] or reinforcement learning [26]–[31] on downstream tasks. While VLA models can effectively integrate perception and decision-making in an end-to-end manner, they still face significant challenges in real-world applications that require strong generalization capabilities, such as open-vocabulary object manipulation and long-horizon task execution. In contrast to the aforementioned approaches, our work does not primarily focus on improving generalization by directly optimizing the VLA model. Instead, we introduce external modules on top of existing models to form a more com- prehensive system, which enhances the performance of the downstream VLA by leveraging external tools for improved information processing. B. Embodied Agents Inspired by the concept of agents [32] in the field of large language models, a growing body of research has begun to integrate VLA models as an execution module [17], [33]–[35] into embodied agent systems. This is achieved by incorporating additional modules that serve as external tools, effectively expanding the capability boundaries of VLA through appropriate invocation. The prior work incorporated modules such as task plan- ning, situational memory, and skill libraries. In this paper, we focus on enhancing the agent’s tool invocation capability by using web search, object detection, and other functional modules—in combination with current visual observations and task instructions—to identify target objects for manipu- lation. This approach enables the precise operation of objects beyond the cognitive scope of the single VLA model. III. METHOD We consist of three major parts, as in Fig. 2: Preliminary Information Processing, responsible for analyzing textual and visual information; Cognition and Memory, responsible for transforming all received information into knowledge accessible to the next part; and Judgment and Execution, responsible for monitoring task progress and interacting with the environment. As shown in the figure, we use LIBERO as the simulation environment. A. Preliminary Information Processing In this part, we employ a planner and a vision pre- processing module to perform the initial decomposition and processing of information. 1) Planner: The planner is responsible for decompos- ing complex natural-language instructions into a sequence of structured subtasks executable by downstream modules. To ensure reliability, the planner prompt is designed with strict constraints: each subtask must contain exactly one action verb (e.g., pick up, move, open) and must explicitly specify the relevant objects and locations, with additional syntactic and structural rules enforced so that the post- processing stage can reliably parse the output. This design transforms a complex compound action into multiple smaller subtasks, each consisting of a single action. The planner is implemented using the GLM-4.1V-9B-Thinking [36], which is locally deployed. To enable modular extraction of the task list and objects & locations from GLM’s output, we designed a three-layer post-processing module consisting of: (a) automatic linguistic extraction; (b) error detection and regeneration when extractions fail; and (c) hard-coded task-specific parsing once an error tolerance threshold is exceeded. This architecture ensures that, regardless of what GLM outputs, only valid and high-quality information is passed to the downstream modules. 2) Vision Pre-processing: In the initial processing stage of visual information, the framework employs the MM- GroundingDINO [37] model to generate a list containing the bounding boxes of the objects and locations provided to this module, as aligned on the first image. Probabilistically, some of the bboxes may be empty due to model failures in recognition or inadequate post-processing. These cases must be further addressed by subsequent cognition and memory. To better adapt to the overall framework and the task- execution environment, the MMGroundingDINO model is fine-tuned within this framework to improve the accuracy of recognizing task-relevant objects. The experimental setup of this framework is based on the LIBERO simulation environment. Accordingly, 500 randomly rendered images were collected across the LIBERO-Spatial, Goal, Object, and Long datasets [18]. Bounding boxes and object names were manually annotated, and data augmentation was applied to the images. Using the MMDetection [38] toolkit, the model was fine-tuned, resulting in a version that can reli- ably recognize the objects appearing in these four LIBERO environments. B. Cognition & Memory To enhance the out-of-distribution (OOD) performance of the underlying VLA, this project integrates an active web- based information retrieval capability into the higher-level text–image processing pipeline. The following serial sections will introduce the logic of web search enhancement for visual and linguistic information in detail. 1) Vision: Overview. In the visual processing pipeline, task–related objects and locations in the third–person robot view are overlaid with transparent, colored masks to reduce reliance on surface textures and mitigate visual overfitting. Fig. 2 summarizes this module and its interfaces to the rest of the system. And Fig. 3 displayed the detailed logical re- lationships between the small modules in the vision module. Fig. 3: Vision framework. This figure illustrates the whole structure and contents within Vision. Double judgment. For each word (object/location), the system first checks whether a valid bounding box (bbox) is available and, in parallel, whether auxiliary keywords are present. If either signal is missing, a visual search branch is triggered: bbid [39] downloads web images for the word, the images are arranged into a 2×3 collage and paired with a structured text prompt, and this input is sent to the GLM Understanding (Vision) module. The resulting keywords, images, and collage are cached in vision memory for reuse. The enriched prompt (original text + keywords) is then re- submitted to the detector; if detection still fails, an empty bbox is returned and no mask is applied for that item. GLM understanding (Vision). Given the first image, the retrieved web collage, and the current word, this module produces five concise descriptive keywords that anchor the unknown word to elemental attributes (e.g., color, shape, function, size). These keywords support robust re-detection and are stored in memory for subsequent tasks. Vision processing. MMGroundingDINO consumes the word together with its keywords to localize the word in the first image, producing a bbox when possible (the “Vision processing” block in Fig. 3). SAM: Segmentation, color encoding, and interface routing. Given validated bboxes, SAM2.1-L [40] converts each box into a pixel-accurate mask that specifies the tar- get’s location and shape in the image. The outputs (bbox, mask, and the term–color assignment) are packaged with the corresponding vision memory (e.g., keywords and web collage). This package is then routed to two consumers: (i) the Language module, which stores the vision-memory fields for the subsequent replace step (explained in the next section); and (ii) the VOS pipeline—a module separate from Vision—which uses the term–color mapping to guide Cutie [41] in generating temporally consistent, color-coded masked image flows. Objects and locations use distinct color Fig. 4: language framework. This figure illustrates the whole structure and contents within Language. palettes so that downstream components can exploit role- aware color cues when learning action–image correspon- dences. Rationale: instant learning. This pipeline converts unfa- miliar inputs into familiar representations for MMGround- ingDINO, enabling effective OOD generalization by de- composing novel concepts into elemental attributes and an- choring them to known ones. We refer to this as “instant learning”: leveraging prior knowledge to rapidly assimilate unfamiliar concepts. Prior studies indicate that accessible knowledge facilitates the comprehension and memory of new information [42], that successful knowledge construction reactivates previously learned information [43], and that adaptive memory builds on prior knowledge rather than learning tabula rasa [44]. Moreover, the explicit color–mask alignment improves visual–text overlap, consistent with find- ings that finer instance- and token-level alignment boosts performance [45] and that stronger color perception benefits color-related generalization [46]. 2) Language: Overview. A primary role of the language- processing component is to align all object-related tokens in task prompts with a controlled vocabulary derived from train- ing and fine-tuning, thereby ensuring consistent system-level cognition. The detailed structure and information content of the Language framework are shown in Fig. 4. Double judgment. A substitution mechanism handles tokens absent from this vocabulary. For each prompt, once bounding boxes are obtained from the visual pipeline, object terms are replaced at the text level; if no box is detected, substitution is still attempted but designed to return NONE when no reliable replacement is found. If the token is known on the KnownList (details are at the end of the section III- C), it is used directly; otherwise, the GLM (shared with the planner) generates a replacement. GLM understanding (Text). The GLM input message comprises: (i) the first image with cropped bounding-box regions and scores (or an empty list), (ii) a collage from web search (or NONE), (iii) the original prompt, (iv) web- derived keywords (or NONE), (v) the known-vocabulary list, and (vi) auxiliary descriptive information from external APIs. Analogous to the planner, we designed dedicated input pre- processing and output post-processing modules for the GLM Understanding (Text) component to better align with the language framework and to enable instant learning. If the replacement word generated by GLM is valid, the corresponding substitution (new corresponding to original) will be recorded in the text memory of the language module, so that when this term reappears for replacement, the system can directly utilize the stored memory. If the replacement word is invalid, no substitution is performed, and no memory is created. Text processing. Finally, within the current task, once all substitution mappings have been determined, the target terms are replaced accordingly, and the final task list is repaired to eliminate errors arising from long-chain information propa- gation. C. Judgment & Execution Judgment. We employ Qwen2.5-VL-3B-Instruct [47] as the verifier. To adapt it more effectively to the experimental scenarios and to improve judgment accuracy, we manually constructed a fine-tuning dataset using videos from the LIBERO dataset. Specifically, video segments were extracted from the original visual recordings of the simulation envi- ronment. For each segment, a text prompt was generated corresponding to the current subtask, and annotations were added to indicate whether the subtask had been completed and whether the system could proceed to the next subtask. Fine-tuning of Qwen2.5-VL-3B-Instruct was then carried out using LLaMA-Factory [48] as the training tool, resulting in a verifier better aligned with the LIBERO environments and the task decomposition rules described in the planner section. Beyond checking whether each subtask is completed, we design a recovery mechanism that uses a dynamic thresh- old to determine whether the end-effector is stuck or in an anomalous state. Once the recovery detector flags an anomaly, we forcibly set current task to “lift the gripper” and, after a fixed number of steps, resume the subtask that was active before recovery and restore its execution progress. Execution. The lower-level VLA is fine-tuned to accom- modate the structured inputs produced by the upper-level planner and visual processing modules. In particular, the visual modality of the LIBERO dataset is reformulated by replacing the original third-person RGB videos with RGB videos augmented by transparent colored masks. To construct these masked videos and the accompanying task list, we employ the same vision and language modules described above; all logic and processing remain identical to the main framework. Consequently, during dataset preparation, the vision and language memories already encode the in- distribution(ID) portion of the tasks. For subsequent evalua- Fig. 5: Comparison between origin and new environments. In this figure, we illustrate the differences between the new and original environments. We present a single rendered scene to highlight the modified objects; the novel items appearing in the other scenes share the same appearance. tion on the three OOD environments, any OOD-related mem- ories are re-initialized before each validation run to ensure strict fairness and to isolate the effect of our instant-learning mechanism. Meanwhile, the task descriptions are reformatted into temporally segmented, plan-based task prompts that explicitly reflect the distribution of subtasks over time. More- over, during fine-tuning and evaluation, the task text prompts are enhanced in the form: “now do ‘current subtask’, the whole task is ‘joint of all subtasks’”, such that the VLA both knows what it is supposed to do now and what the overall task is. Training the VLA on this modified dataset enables it to process masked visual inputs and sequential subtask prompts consistently with the planner-driven structure, which improves downstream execution performance. During OpenVLA fine-tuning, a knowledge base of known object terms is built using an NLTK-based extractor. Tokens are identified via tokenization and part-of-speech tagging, aggregated into a JSON vocabulary, and stored with the model for use at inference. This is the KnownList in the Language section. IV. EXPERIMENTS We concentrated experiments on evaluating the zero-shot OOD adaptability of the proposed VLA2 framework. To this end, a new evaluation environment was constructed to specifically test OOD generalization across novel sce- narios, in addition to adopting the LIBERO benchmark as a standardized reference. The goal is to examine whether the framework can generalize to previously unseen tasks and maintain robustness without task-specific fine-tuning, while also analyzing the contributions of its key components through ablation studies. Specifically, the experiments aim to answer the following questions: Q1. How does the testing performance of VLA2 on in-domain tasks compare to state- of-the-art VLAs? Q2. How is the generalization performance of VLA2 on out-of-distribution test tasks with high difficulty? Q3. Do the key modules we designed contribute significantly to the framework’s generalization performance? A. Experimental Setup LIBERO simulation environment. Within the original LIBERO simulation environment, we constructed three new variants—Easy, Medium, and Hard—based on the Spatial and Goal environments—comparison between the original and the new environments in Fig. 5. The modifications are limited to object appearances as follows. In Easy, the original black bowl was recolored to an orange series. In Medium, the black bowl was replaced with LIBERO’s native white bowl, the wine bottle was recolored to sky blue and renamed as the blue bottle, and the wooden cabinet was replaced with LIBERO’s native white cabinet. In Hard, the wine bottle was completely redesigned to resemble the well-known Chinese liquor Moutai, the black bowl was redesigned with blue- and-white porcelain patterns and renamed the blue white porcelain bowl, and the wooden cabinet was again replaced with the white cabinet. The original cream cheese has been replaced with butter, which looks different but has approx- imately the same collision model. No other modifications were introduced beyond these appearance changes. For the evaluation on the new environments, each task is executed 50 times, and both the overall success rate (SR) and the success rate of each individual task are reported. The same evaluation protocol is applied to the LIBERO original environments when testing the framework. Baseline. We compares the proposed VLA2 frame- work against several widely recognized, high-performance VLA baselines finetuned on the same LIBERO training dataset: OpenVLA [11], OpenVLA-OFT [23], π0 [12], π0-FAST [19], and Agentic Robot [17], a embodied agent framework. All experiments are conducted in the original four simulation suites, as well as in the three newly crafted environments specifically designed for OOD evaluation. Training details. All components of the framework were trained/fine-tuned on NVIDIA A100–80GB GPUs. For MM- GroundingDINO, we adopted the default MMDetection train- ing configuration and fine-tuned on our custom dataset using TABLE I: LIBERO simulation benchmark (Original En- vironment). FT denotes fine-tuning on task-specific demon- strations. Bold numbers mark the best within all classes. Underline numbers mark the best within Class 2. Method Spatial Object Goal Long Average Class 1 OpenVLA-OFT (FT) 97.6 98.4 97.9 94.5 97.1 π0 (FT) 96.8 98.8 95.8 85.2 94.2 π0-FAST (FT) 96.4 96.8 88.6 60.2 85.5 Class 2 Agentic Robot 85.8 89.0 81.8 61.6 79.6 OpenVLA (FT) 84.7 88.4 79.2 53.7 76.5 VLA2 (ours) 86.4 86.2 83.2 64.4 80.1 2 GPUs for 100 episodes. For Qwen2.5-VL-3B-Instruct, we used LLaMA-Factory’s default qwen2-sft recipe with our custom dataset, increased the number of episodes by a factor of five, and trained on 4 GPUs. For OpenVLA, we used the official fine-tuning script on our custom dataset with a learning rate of 3×10−4, training on 8 GPUs. Implementation. This project adopts a 20-step verification waiting period. A custom end-effector jam detection module was implemented with a 10-step recovery waiting to replace the original recovery mechanism and logic. All other model configurations and information transmission pipelines remain the same as described in the Method section. In this case, the parameters are closer to those of the original Agentic Robot [17], making the comparison more meaningful. B. Main Results Original environments (in-domain; Table I). The eval- uation shows that Class 1 systems with stronger VLA backbones obtain higher averages. In contrast, our frame- work uses OpenVLA as the VLA backbone, so the fairest in-distribution comparison is within the OpenVLA family (Class 2). VLA2 attains the highest Class 2 average SR at 80.1%, which is higher than Agentic Robot and the fine- tuned OpenVLA. On Object, the SR of our framework (86.2%) remains below these two baselines. The reason for the result degradation due to a perception bottleneck: 224×224 observations and imprecise object names make fine-grained recognition difficult; MMGroundingDINO often misses or mislocalizes boxes; web images used for grounding differ from the simulator views. These perceptual errors can leave the first subtask unresolved, preventing the verifier from advancing and depressing overall SR on affected tasks. Custom environments (out-of-distribution; Tables II and III). Across the custom environments, all methods exhibit SR declines as OOD difficulty increases, from simple color changes to semantic reinterpretations (e.g., replacing a wine bottle with Moutai) and synonym substitutions (e.g., plate →saucer). Despite this, VLA2 attains the best overall average SR at 81.5%. The advantage is most pronounced on the Hard environment, where VLA2 reaches 76.2%, exceeding π0 by 16.2% and OpenVLA-OFT by 28.8% TABLE II: LIBERO simulation benchmark (Custom Environment). SR comparison on Easy/Medium/Hard. FT denotes fine-tuning on task-specific demonstrations. Bold numbers mark the best across all methods. Method Easy Medium Hard Average Class 1 OpenVLA-OFT (FT) 98.8 95.4 47.4 80.5 π0 (FT) 97.2 86.0 60.0 81.1 π0-FAST (FT) 98.0 75.2 45.8 73.0 Class 2 Agentic Robot (RP) 83.8 48.6 26.2 52.9 OpenVLA (FT) 85.0 66.7 32.0 61.2 VLA2 (ours) 86.6 81.6 76.2 81.5 (Table II). Task-level results further highlight robustness on large semantic shifts—for example, moutai–rack (72 for VLA2 vs. 44 for π0) and bowl–saucer (88 for VLA2 vs. 16 for π0), as shown in Table III. These findings support our core premise: by explicitly reforming unfamiliar inputs into the model’s known distribution (via our knowledge- alignment pipeline), VLA2 is less perturbed by OOD shifts than competing baselines, even those with more advanced backbones. C. Ablation Study We evaluate three ablations in the custom LIBERO- Hard setup, each removing a distinct capability from our framework (Table III). w/o mask excludes the transparent instance/region overlays and color mask injects. w/o re- place disables lexical normalization, i.e., unknown or out-of- vocabulary nouns in the task text are no longer substituted with semantically related in-distribution texts. w/o web turns off all external retrieval and episodic reuse, meaning no image search, no text search, and no previously cached memory from web retrieval can be consulted during plan- ning or execution. Additionally, we designed an experiment termed Agentic Robot (RP) that removes all the afore- mentioned modules and replaces every component in the framework [17] with the other models mentioned above and additionally omits our subtask-augmentation in the execution prompts, serving as an ablation study. Ablation on mask. Disabling transparent masks reduces the average SR from 76.2 to 64.8 ( −11.4 ), with the largest drops on interaction-heavy and cluttered scenes: open-drawer −26 (78→52), bowl-cabinet −22 (86→64), moutai-rack −36 (72→36), and moutai-cabinet −12 (88→76), see Ta- ble III. These patterns indicate the mask overlay is most critical when the VLA must localize within containers/oc- clusions or discriminate among visually similar instances. Minimal effect on stove (−2) and even a slight gain on bowl-stove (+2) suggest that for simple, single-object place- ments, the raw RGB already suffices, but removing masks consistently hurts spatial reasoning and long-horizon pick- and-place chains. Ablation on replace. Turning off semantic substitution TABLE III: LIBERO-Hard tasks environment simulation results. Transposed SR comparison per task. The row names under “new items” (e.g., “stove”) are concise task abbreviations; “new items” indicates the number of zero-shot items in the task text prompt. Bold marks the best performance across all models. For the Ablation rows, values in parentheses denote the vertical difference from VLA2 (ours) in the same column, computed as Ablation−VLA2. The Agentic Robot (RP) means w/o mask, replace and web, also no subtask augmentation introduced in the Execution part. Strictly follow the original Agentic Robot pipeline [17]. Category Method 0 new item 1 new item 2 new items Average SR stove open- drawer drawer- bowl saucer- stove bowl- stove moutai- rack bowl- saucer bowl- cabinet butter- bowl moutai- cabinet Class 1 OpenVLA-OFT (FT) 100 100 92 8 88 0 0 82 0 4 47.4 π0 (FT) 98 94 66 88 92 44 16 68 0 34 60.0 π0-FAST (FT) 96 62 72 6 98 0 34 90 2 0 45.8 Class 2 OpenVLA (FT) 96 40 14 84 52 0 2 30 2 0 32.0 VLA2 (ours) 96 78 62 84 86 72 88 86 22 88 76.2 Ablation VLA2 (w/o mask) 94 (-2) 52 (-26) 58 (-4) 78 (-6) 88 (+2) 36 (-36) 84 (-4) 64 (-22) 18 (-4) 76 (-12) 64.8 (-11.4) VLA2 (w/o replace) 96 (0) 74 (-4) 26 (-36) 54 (-30) 90 (+4) 16 (-56) 16 (-72) 86 (0) 12 (-10) 42 (-46) 51.2 (-25.0) VLA2 (w/o web) 96 (0) 82 (+4) 58 (-4) 82 (-2) 92 (+6) 24 (-48) 84 (-4) 78 (-8) 20 (-2) 36 (-52) 65.2 (-11.0) Agentic Robot (RP) 96 (0) 38 (-40) 0 (-62) 0 (-84) 44 (-42) 0 (-72) 0 (-88) 64 (-22) 0 (-22) 20 (-68) 26.2 (-50.0) yields the largest average degradation, from 76.2 to 51.2 ( −25.0 ). Catastrophic failures occur when novel or compo- sitional nouns must be grounded: bowl-saucer −72 (88→16), moutai-rack −56 (72→16), moutai-cabinet −46 (88→42), drawer-bowl −36 (62→26), and saucer-stove −30 (84→54). These gaps quantify that synonym/alias replacement is the dominant lever for bridging text OOD to the model’s in- distribution vocabulary, especially when two unseen tokens co-occur (the “2 new items” block). Small neutral/positive shifts on stove (0) and bowl-stove (+4) imply replacement is unnecessary for well-known nouns, but omitting it severely limits compositional generalization elsewhere. Ablation on web. Removing web image/text search and retrieved memory lowers the average SR to 65.2 ( −11.0 ) and disproportionately harms novel-brand targets: moutai- rack −48 (72→24) and moutai-cabinet −52 (88→36). Mod- erate declines also appear in bowl-cabinet −8 (86→78). Slight gains on open-drawer (+4) and bowl-stove (+6) show that retrieval can inject noise on trivially familiar scenes, but its net benefit on unfamiliar concepts is decisive. Notably, butter-bowl remains difficult across settings (ours 22; deltas only −2 to −10): the low-resolution “butter” appears visually ambiguous and cannot be reliably disambiguated by retrieval or text substitution, so even humans struggle to verify it, explaining the uniformly low SR in this task. All three modules removed (Agentic Robot (RP)). This experiment fully adopts the framework [17], with the only modification being the replacement of all corresponding modules with the models used in our proposed method, and also omitting our subtask augmentation, average SR collapses to 26.2 ( −50.0 ). Many hard tasks drop to zero: drawer-bowl −62 (62→0), saucer-stove −84 (84→0), bowl- saucer −88 (88→0), and butter-bowl −22 (22→0); large losses persist on moutai-cabinet −68 (88→20), moutai-rack −72 (72→0), and open-drawer −40 (78→38). Beyond the ablated capabilities, we find the task-list prompt format used in Agentic Robot introduces substantially increased OOD portion after decomposition (e.g., splitting “put the blue- white porcelain bowl in the cabinet” into subgoals that diverge from training distributions). This causes the verifier to repeatedly fail the first subtask, preventing progression and yielding SR = 0 for many episodes. In contrast, our prompts condition OpenVLA on “now do current subtask, while conditioning on the full task context,” which injects stronger ID structure; combined with mask, replace, and web, this design stabilizes execution and underlies the consistent gains in Table III. V. CONCLUSIONS In this paper, we propose VLA2, a framework that inte- grates arbitrary VLAs into a comprehensive embodied agent system. By incorporating modules such as task planning, web search, scene memory, and process verification, our frame- work enhances the task performance of VLAs. Experiments demonstrate that our module design significantly improves the generalization capability of the agent in grasping objects from unseen concept categories. Although our method achieves substantial improvements over existing approaches, it still has certain limitations. Our current framework designs are still confined to relatively rigid procedural structures. Enhancing the versatility of VLA2 to achieve greater system autonomy and enable the invocation of more external tools to handle a wider range of tasks represents a promising direction for future exploration. Moreover, we have not conducted real-world experiments at this stage, and it is essential to extend our work to open- world real-world grasping evaluations in the future. ACKNOWLEDGMENT This work was supported by the National Science and Technology Innovation 2030 - Major Project (Grant No. 2022ZD0208800), and NSFC General Program (Grant No. 62176215). REFERENCES [1] S. Yang, D. Wei et al., “Contrastive language-image pre-training model based semantic communication performance optimization,” 2025. [2] M. Caron, H. Touvron, I. Misra et al., “Emerging properties in self- supervised vision transformers,” 2021. [3] X. Zhai, B. Mustafa, A. Kolesnikov et al., “Sigmoid loss for language image pre-training,” 2023. [4] H. Liu, C. Li, Q. Wu et al., “Visual instruction tuning,” 2023. [5] S. Karamcheti, S. Nair, A. Balakrishna et al., “Prismatic vlms: Inves- tigating the design space of visually-conditioned language models,” 2024. [6] H. Zhao, M. Zhang, W. Zhao et al., “Cobra: Extending mamba to multi-modal large language model for efficient inference,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 10, pp. 10 421–10 429, Apr. 2025. [7] G. Wang, Y. Xie, Y. Jiang et al., “Voyager: An open-ended embodied agent with large language models,” 2023. [8] I. Gur, H. Furuta, A. Huang et al., “A real-world webagent with planning, long context understanding, and program synthesis,” 2024. [9] S. Yao, J. Zhao, D. Yu et al., “React: Synergizing reasoning and acting in language models,” arXiv preprint arXiv:2210.03629, 2022. [10] A. Brohan, N. Brown, J. Carbajal et al., “Rt-2: Vision-language-action models transfer web knowledge to robotic control,” 2023. [11] M. J. Kim, K. Pertsch, S. Karamcheti et al., “OpenVLA: An open-source vision-language-action model,” arXiv preprint arXiv:2406.09246, 2024. [12] K. Black, N. Brown, D. Driess et al., “π0: A vision-language- action flow model for general robot control,” arXiv preprint arXiv:2410.24164, 2024. [13] P. Ding, H. Zhao, W. Zhang et al., “Quar-vla: Vision-language-action model for quadruped robots,” 2025. [14] Z. Zhou, Y. Zhu, M. Zhu et al., “Chatvla: Unified multimodal understanding and robot control with vision-language-action model,” 2025. [15] C. Cheang, S. Chen, Z. Cui et al., “Gr-3 technical report,” 2025. [16] NVIDIA, :, J. Bjorck, F. Casta˜neda, N. Cherniadev et al., “Gr00t n1: An open foundation model for generalist humanoid robots,” 2025. [17] Z. Yang, Y. Chen, X. Zhou et al., “Agentic robot: A brain-inspired framework for vision-language-action models in embodied agents,” 2025. [18] B. Liu, Y. Zhu, C. Gao et al., “Libero: Benchmarking knowledge transfer for lifelong robot learning,” arXiv preprint arXiv:2306.03310, 2023. [19] K. Pertsch, K. Stachowicz, B. Ichter, D. Driess, S. Nair, Q. Vuong, O. Mees, C. Finn, and S. Levine, “Fast: Efficient action tokenization for vision-language-action models,” 2025. [20] A. Khazatsky, K. Pertsch, S. Nair et al., “Droid: A large-scale in-the- wild robot manipulation dataset,” 2025. [21] E. Collaboration et al., “Open x-embodiment: Robotic learning datasets and rt-x models,” 2025. [22] AgiBot-World-Contributors et al., “Agibot world colosseo: A large- scale manipulation platform for scalable and intelligent embodied systems,” 2025. [23] M. J. Kim, C. Finn, and P. Liang, “Fine-tuning vision-language- action models: Optimizing speed and success,” arXiv preprint arXiv:2502.19645, 2025. [24] P. Li, Y. Wu, Z. Xi et al., “Controlvla: Few-shot object-centric adaptation for pre-trained vision-language-action models,” 2025. [25] W. Song, J. Chen, P. Ding et al., “Ceed-vla: Consistency vision- language-action model with early-exit decoding,” 2025. [26] W. Song, H. Zhao, P. Ding et al., “GeRM: A generalist robotic model with mixture-of-experts for quadruped robot,” in 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024, pp. 11 879–11 886. [27] H. Zhao, W. Song, D. Wang et al., “MoRE: Unlocking scalability in reinforcement learning for quadruped vision-language-action models,” 2025. [28] G. Lu, W. Guo, C. Zhang et al., “Vla-rl: Towards masterful and general robotic manipulation with scalable reinforcement learning,” 2025. [29] H. Zhang, Z. Zhuang, H. Zhao et al., “Reinbot: Amplifying robot visual-language manipulation with reinforcement learning,” 2025. [30] S. Tan, K. Dou, Y. Zhao et al., “Interactive post-training for vision- language-action models,” 2025. [31] Y. Chen, S. Tian, S. Liu et al., “Conrft: A reinforced fine-tuning method for vla models via consistency policy,” 2025. [32] J. Luo, W. Zhang, Y. Yuan et al., “Large language model agent: A survey on methodology, applications and challenges,” 2025. [33] H. Shi, B. Xie, Y. Liu et al., “Memoryvla: Perceptual-cognitive memory in vision-language-action models for robotic manipulation,” 2025. [34] M. Lei, H. Cai, B. Que et al., “Robomemory: A brain-inspired multi- memory agentic framework for lifelong learning in physical embodied systems,” 2025. [35] S. Zhou, X. Wang, J. Zhang et al., “P3: Toward versatile embodied agents,” 2025. [36] V. Team, W. Hong, W. Yu et al., “Glm-4.5v and glm-4.1v-thinking: Towards versatile multimodal reasoning with scalable reinforcement learning,” 2025. [37] X. Zhao, Y. Chen, S. Xu et al., “An open and comprehensive pipeline for unified object grounding and detection,” 2024. [38] K. Chen, J. Wang, J. Pang et al., “MMDetection: Open mmlab detection toolbox and benchmark,” arXiv preprint arXiv:1906.07155, 2019. [39] Ostroluck´y, “Bulk bing image downloader,” https://github.com/ost rolucky/Bulk-Bing-Image-downloader, 2014, software used for downloading images from Bing using keywords. [40] N. Ravi, V. Gabeur, Y.-T. Hu et al., “Sam 2: Segment anything in images and videos,” arXiv preprint, 2024. [41] H. K. Cheng, S. W. Oh, B. Price et al., “Putting the object back into video object segmentation,” in arXiv, 2023. [42] G. Brod et al., “The influence of prior knowledge on memory,” Journal of Cognitive Neuroscience, 2013, “if prior knowledge is available and accessible, it facilitates comprehension and memory of new incoming information”. [43] M. van Kesteren and et al., “Integrating educational knowledge: reactivation of prior knowledge during new learning enhances memory integration,” Trends in Neuroscience and Education, 2018, “Successful knowledge construction is suggested to happen through reactivation of previously learned information during new learning”. [44] O. Bein and et al., “Prior knowledge promotes hippocampal separation but cortical integration of overlapping memories,” Nature Communi- cations, 2020, “An adaptive memory system rarely learns information tabula rasa, but rather builds on prior knowledge to facilitate learning”. [45] J. Bi, D. Cheng, P. Yao et al., “Vl-match: Enhancing vision-language pretraining with token-level and instance-level matching,” in Proceed- ings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 2584–2593. [46] A. M. Samin, M. F. Ahmed, and M. M. S. Rafee, “Colorfoil: Investigating color blindness in large vision and language models,” in NAACL-SRW 2025, 2025, pp. 294–300. [47] S. Bai, K. Chen, X. Liu et al., “Qwen2.5-vl technical report,” 2025. [48] Y. Zheng, R. Zhang, J. Zhang et al., “Llamafactory: Unified efficient fine-tuning of 100+ language models,” in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). APPENDIX DETAILED DESCRIPTION OF THE PROJECT FRAMEWORK In thw Fig 6 we explain—purely from an information-processing perspective—how all OOD inputs are transformed via the framework described in the main text into ID representations usable by downstream modules; we then outline the design and content of the key prompts used to effect this conversion; finally, we present the computational runtime of each module, so as to evaluate our system’s efficiency. Fig. 6: Transformation pipeline. This figure demonstrates how external information is progressively converted into knowledge available to the VLA via the system described in the main text. As illustrated in Fig. 6, the environment-sensed information enters the system at 1 —which matches the typical input format used by VLA systems. Below, we provide a concise, information-processing view of how the content in each gray box transforms and what specific components it comprises. • 0 The environment produces a continually updated image flow. After the task query and the first image are received, only the pathway from 0 to 6 remains active for this round; all other transformation pathways are deferred until the next task is initiated. • 1 Here, the image denotes the first frame returned once the environment is ready, and the accompanying text is the task prompt for that environment. In our running example—“put the blue white porcelain bowl on the stove.”—the phrase blue white porcelain bowl denotes a newly introduced object category. • 2 In this information block, the task list is the set of decomposed sub-tasks produced by the planner. For the example in 1 , the ideal output is: “1) pick up the blue white porcelain bowl; 2) place the blue white porcelain bowl on the stove.” We also extract two structured fields: objects, which are the items that the manipulator must first contact or grasp, and locations, which define the target placement context. In this example, there is one object: “blue white porcelain bowl” and a location: “stove”. • 3 After vision pre-processing, we obtain bounding boxes from a recognition model by using the names in objects and locations together with the image as inputs. This transformation already separates “known” versus “unknown” visual categories: in our example, the stove is known because the model was fine-tuned with stove data, whereas the blue and white porcelain bowl is unknown. This known/unknown status is passed forward to the next Vision module. Fig. 7: Vision processing for unknown blue white porcelain bowl. The vision memory adopts the same format and similar content as shown in the gray dotted box, for which the equal sign denotes equivalence in structure and content. The system generated the keywords and stored the images here automatically during the evaluation in Table II. • 3 - 4 As shown in Fig. 7, the information transformation process for the unknown “blue white porcelain bowl” is illustrated. The figure explicates how web-search images plus the 3 information are fed into the GLM understanding (Vision) module to generate auxiliary enhanced data for the Vision processing module. In this diagram, we primarily display the storage format of the generated memory and example contents of that memory. • 4 After the Vision stage described in the main text, the module can also recognize some previously unknown categories. In the figure, this is reflected by an additional red bounding box indicating that the blue white porcelain bowl has become identifiable. This recognition is attributed to the cognitive, web-enhanced search phase that creates a persistent memory. Subsequently, all bounding boxes are converted to masks by a SAM-style segmentation step, and masks are color-coded into two palettes corresponding to the objects group and the locations group. The “vision mem” in this block denotes the memory produced by the cognitive search process. • 4 - 5 As shown in Fig. 8, the overall framework used by the Language module is highly analogous to that of the Vision module. The memory in Language is stored as a JSON file (the “replace map”). • 5 After the Language module, the task list is augmented with color cues and undergoes controlled lexical replacement. In this example, it becomes: “1) pick up the red-mask black bowl; 2) place the red-mask black bowl on the blue-mask stove.” All other metadata remain the same as in 4 . (Color-aligned masks and text labels are a standard way to synchronize language outputs with pixel-level regions in VLM pipelines.) • 6 This final block serves as the continuously updated input to downstream modules: the image stream is rendered as mask overlays, and the task list is strictly aligned with the visual overlays. Uncertainties at both the language and vision levels are minimized or resolved, yielding a representation that is easier to execute and evaluate. After task initiation and completion of the cognitive interpretation stage, only the transformation pathway from 0 to 6 is retained; the task list is finalized and no longer changes. In parallel, mask memory distilled from earlier frames is persisted in the VOS, enabling each subsequent frame to infer its masks directly from the new image and the stored mask memory, thereby producing a continuous mask–overlay video stream. Our VOS module is architected following the design principles of the Cutie project1. For algorithms, data structures, and training/inference pipelines, please refer to that project. 1See the Cutie repository for detailed technical specifications and implementation details: https://github.com/hkchengrex/Cutie. Fig. 8: Language processing for unknown blue white porcelain bowl. The equals sign and the gray dotted box denote the same meaning of reference as in Fig. 7. Within the Planner, Vision, and Language modules, GLM-4.1V-9B-Thinking is employed. To curb error accumulation from upstream to downstream, we adopt a two-stage failure-handling policy for GLM usage: the first failure triggers an automatic retry, while a second failure invokes a hard-coded fallback or, if necessary, aborts the operation. Consequently, even when truly novel objects cannot be reliably interpreted, the stability of the overall system is preserved. In every invocation of the GLM and Qwen models, we design prompts tailored to functionality and module interrelations. The planner prompt is shown in PLANNER PROMPT, whose core is the task decomposition prompt, while the other parts enforce module ordering and output constraints. For the verifier, we designed a detailed task-analysis input prompt, as shown in VERIFIER PROMPT. The prompt for GLM understanding (Vision) is given in GLM UNDERSTANDING (VISION) PROMPT. For GLM understanding (Text), as shown in GLM UNDERSTANDING (TEXT) PROMPT, the prompt fed into GLM is not fixed; it is dynamically adapted based on the available inputs and conditions. In all cases, the ultimate objective is to generate a correct replacement mapping from the known vocabulary, given the available context. COMPUTATIONAL EFFICIENCY ANALYSIS Using the same number of validation runs specified in the Methods (i.e., matching those used to obtain the validation data), we measured and reported the mean computation time per task and per module. TABLE IV: Average computation time. Computing time in seconds for each module and task. Module Spatial Goal Object Long Easy Medium Hard Avg Planner 20.727 19.013 17.126 25.532 21.979 19.452 20.207 20.576 Vision & Vision Pre-Processing 0.086 0.072 0.095 0.208 0.753 1.277 1.066 0.508 Language 0.022 0.016 0.046 0.038 0.263 0.582 0.778 0.249 VOS 8.908 8.698 9.016 12.075 7.945 9.112 9.194 9.278 VLA 72.951 73.104 79.783 131.353 69.706 82.759 99.019 86.825 Verifier 2.862 3.585 3.607 5.542 4.488 4.690 4.869 4.234 Total 105.556 104.488 109.673 174.748 105.134 117.872 135.133 121.658 From Table IV, we observe that compared with [17], our agentic system’s additional modules—Vision & vision pre- processing, Language, and VOS—incur only an average extra runtime of 0.508+0.249+9.278 = 10.035 seconds per task over 50 validation runs. This overhead enables the OOD-to-ID conversion pipeline while keeping latency modest. The nearly doubled computation time of the VLA model on the LIBERO-Long tasks arises because every task in that set involves two pick-and-place operations or requires fulfilling two independent actions. Therefore, such tasks demand more steps, resulting in a total runtime roughly twice that of the other three original LIBERO tasks. Because we run GLM-4.1V-9B-Thinking in “thinking” mode, a substantial portion of the Planner’s runtime is spent emitting intermediate “think tokens.” Empirically, we observe that Planner latency per task is roughly 20s across different tasks. The Vision and Language modules, which internally embed GLM models, operate under a “first cognition + memory reuse” design: after a correct initial inference, subsequent invocations can reuse stored memory and thus run extremely quickly. As a result, their first-time inference costs are comparable to the Planner (approximately 20s), but repeated usage is much faster. Moreover, in Fig. 9, the modules that execute in every step—VOS, VLA, VLM—show time curves that change in lockstep with task variation, exhibiting nearly identical trend lines. We also note that in our new environment, recognition- centric modules (Vision & vision pre-processing, Language) incur higher average times due to additional unknown object cognition demands and GLM memory generation. In contrast, Planner—used once per task—shows little runtime difference between the original Libero environment and our custom Libero environment, except for modest variations due to input complexity or error rates. Fig. 9: Modules runtime across tasks. This figure shows the average computation time of each module in the agent framework for each task. LIBERO-HARD TASK EXPLANATION In the table III we abbreviate task names; here are their full expansions based on the BDDL filenames in the LIBERO- ZERO environment: Abbreviation Full Human-Readable Task Name stove turn on the stove open-drawer open the middle drawer of the white cabinet drawer-bowl open the top drawer and put the blue white porcelain bowl inside saucer-stove push the saucer to the front of the stove bowl-stove put the blue white porcelain bowl on the stove moutai-rack put the moutai on the rack bowl-saucer put the blue white porcelain bowl on the saucer bowl-cabinet put the blue white porcelain bowl on top of the white cabinet butter-bowl put the butter in the blue white porcelain bowl moutai-cabinet put the moutai on top of the white cabinet This naming preserves the task structure from the LIBERO-LONG benchmark: each task follows the same schema or template as in the original set, and our version differs only in that we substituted the object terms (e.g., “bowl”, “moutai”) with our custom names. PLANNER PROMPT ### reading notice: "#" means the comment in python. This project is written in python, and the following content illustrates the logic and structure of the GLM model prompt. ### if sign!="success": ### "sign" is a signal for regenerating a better output, sent by the post -processing function. The unsuccessful situations were mainly caused by an unmatchable and unreadable model output. ### if sign=="no subtask found": additional_info = "PAY MORE ATTENTION TO THE SUBTASKS in your last output, no valid subtask found. You should output the subtask in the same format as the example, without any other analysis or description." elif sign=="no objects found": additional_info = "PAY MORE ATTENTION TO THE OBJECTS in your last output, no valid objects found in /(here)/. You should output the objects in the same format as the example, without any other analysis or description." else: additional_info = "PAY MORE ATTENTION TO THE SUBTASKS and OBJECTS in your last output , no valid subtask or objects found. You should output the subtask and objects in the same format as the example, without any other analysis or description." else: additional_info = "You are doing a good job, keep it up" task_decomposition_prompt =f""" You are a planning assistant for a fixed robotic arm. Your goal is to break down a high-level task into a sequence of **essential high-level commands**, suitable for a capable Vision -Language-Action (VLA) model to execute directly. Output Format: Generate a numbered list of commands. Each command should represent a significant action achieving a clear sub-goal. Stick to the allowed high-level actions. Example Plan Format (Use **exactly** this level of granularity): Plan for the robot arm: Goal: <original instruction> 1. pick up the <object_name_1> /(<object_name_1>)/ 2. place the <object_name_1> in the <target_location> /(<object_name_1>,<target_location>)/ 3. pick up the <object_name_2> /(<object_name_2>)/ 4. place the <object_name_2> in the <target_location> /(<object_name_2>,<target_location>)/ --- Example for a different task --- Goal: Put the apple in the red bowl 1. pick up the apple /(apple)/ 2. place the apple in the red bowl /(apple, red bowl)/ --- Example for another task --- Goal: Put the cup in the microwave and close it 1. pick up the cup /(cup)/ 2. place the cup in the microwave /(cup, microwave)/ 3. close the microwave /(microwave)/ --- Example for another task --- Goal: Turn on the stove and put the pot on it 1. turn on the stove /(stove)/ 2. pick up the pot /(pot)/ 3. place the pot on the stove /(pot, stove)/ --- Example for another task --- Goal: Put both books on the bookshelf 1. pick up the red book /(red book)/ 2. place the red book on the bookshelf /(red book, bookshelf)/ 3. pick up the brown book /(brown book)/ 4. place the brown book on the bookshelf /(brown book, bookshelf)/ --- Example for another task --- Goal: pick the red book near the butter and the brown book on the plate and put them on the left bookshelf 1. pick up the red book near the butter /(red book)/ 2. place the red book near the butter on the left bookshelf /(red book, bookshelf)/ 3. pick up the brown book on the plate /(brown book)/ 4. place the brown book on the plate on the left bookshelf /(brown book, bookshelf)/ --- Example for another task --- Goal: pick up the yellow and white mug next to the cookie box and place it on the plate 1. pick up the yellow and white mug next to the cookie box /(yellow and white mug)/ 2. place the yellow and white mug next to the cookie box on the plate /(yellow and white mug, plate)/ --- Example for another task --- Goal: put the black bowl in the bottom drawer of the cabinet and close it 1. pick up the black bowl /(black bowl)/ 2. place the black bowl in the bottom drawer of the cabinet /(black bowl, cabinet)/ 3. close the bottom drawer of the cabinet /(cabinet)/ Instructions: - Generate **only** high-level commands. - Your output should be in the ***ABSOLUTELY SAME format*** as the example above. Even with unseen tasks, follow the same structure. ***WITHOUT ANY OTHER ANALYSIS and DESCRIPTION ***. - **After each command**, include a comment with the object names and locations in */()/*. This is necessary for the VLA model to understand which objects are involved in each command. - DO NOT include any descriptions of position and order in */()/* (e.g., "first pot", "back of the shelf", "bottom of sth", "upper of sth"), only color and shape are permitted (e.g ., "red bowl", "cylindrical box"). But you should maintain the details of the objects and locations as described in the task to subtask, such as "red bowl near the plate", "brown book on the cabinet", "left bookshelf", "black bowl next to the cookie box", etc. - **ONLY USE */()/* to EXPRESS *OBJECTS*.** Comments, explanations, and anything else that has nothing to do with expressing objects are not allowed. - When an object or location has a qualifying modifier, such as a cabinet’s drawer, door of a microwave, or the handle of pot, what you are expected to display in the /()/ is actually the **largest specific items these expressions** refer to, which are cabinets, microwaves, and pots, not the parts or subordinate items on these items that belong to these items. Meanwhile, you should still maintain the detailed expression in the subtask as "the drawer of the cabinet", "the door of the microwave" (eg. pick up the bottle on the stove; pick up the bowl in the drawer). - **Allowed commands are strictly limited to:** - ‘pick up [object]‘ - ‘place [object] on [location]‘ - ‘place [object] in [location]‘ - ‘open [object/container/drawer/cabinet/etc.]‘ - ‘close [object/container/drawer/cabinet/etc.]‘ - ‘turn on [device]‘ - ‘turn off [device]‘ - Use the commands above **only when necessary** to achieve the goal. Most tasks will primarily use ‘pick up‘ and ‘place‘. - **Explicitly DO NOT include separate steps for:** - ‘locate‘ (Assume VLA finds the object as part of executing the command) - ‘move to‘ or ‘move towards‘ (Assume the command includes necessary travel) - ‘lift‘, ‘lower‘, ‘grasp‘, ‘release‘, ‘push‘, ‘pull‘, ‘rotate‘, ‘adjust‘ (Assume high- level commands handle these internally) - **Assume the VLA model handles all implicit actions:** - "pick up [object]" means: Find the object, navigate to it, grasp it securely, and lift it. - "place [object] in [location]" means: Transport the object to the location, position it correctly, and release the grasp. - "open/close [container]" means: Find the handle/seam, interact with it appropriately ( pull, slide, lift) to change the container’s state. - "turn on/off [device]" means: Find the correct button/switch, interact with it to change the device’s power state. - Use the descriptive names from the task description and **DO NOT make any distortions** in subtasks (e.g., if the task involves {inlist}, make sure the subtasks about them are exactly the same). - Generate the minimal sequence of these high-level commands required to fulfill the Goal. Ensure the sequence logically achieves the task (e.g., you might need to ‘open‘ a drawer before ‘placing something inside it, even if ’open’ isn’t explicitly stated in the goal). - Additional INFO:{additional_info} Task: {task_description} Output: """ VERIFIER PROMPT ### The Verifier prompt essentially depends on the input subtask main verb and differentiates each subtask into the following few situations. ### prefix = ( f"{title_prefix + ’ - ’ if title_prefix else ’’}" f"Observe the inputs (two videos or two image-flow videos). " f"The subtask robot arm is currently working on: ’{subtask}’. " ) if verb == "pick up": prompt = ( f"{prefix} Based *Only* on the provided media, has ’{object_name}’ or anything else been grasped and lifted off any surface by the end? " "Answer ’Yes’ or ’No’." ) elif verb == "place": prompt = ( f"{prefix} Based *Only* on the provided media, has ’{object_name}’ or anything else been placed ’{location_name}’ and is the gripper away? " "Answer ’Yes’ or ’No’." ) elif verb in ("turn on", "turn off", "open", "close"): target = raw_part or object_name action_text = { "turn on": "turned on (powered up)", "turn off": "turned off (powered down)", "open": "fully opened", "close": "fully closed", }[verb] prompt = ( f"{prefix} Based *Only* on the provided media, has ’{target}’ or anything else been { action_text} by the end? " "Answer ’Yes’ or ’No’." ) else: prompt = ( f"{prefix} Based *Only* on the provided media, has the instructed action been completed successfully by the end? " "Answer ’Yes’ or ’No’." ) GLM UNDERSTANDING (VISION) PROMPT ### "Query" here means the object or location aiming to be understood. ### system_prompt = rf""" You are an intelligent assistant specialized in analyzing images and extracting meaningful information. Your task is to identify a specific person or object that appears in all provided images and generate five of the most relevant keywords to describe this person or object. **Think in ten sentences.** You must follow this rule strictly. Guidelines: For the combined image: If the same person appears in all images: Focus on describing the person’s gender, skin tone, and occupation. Avoid keywords related to clothing or environment. Example keywords might include: "female", "light-skinned", "doctor", etc. If the same object appears in all images: Focus on describing the object’s physical characteristics. Example keywords might include: "round", "metallic", "small", etc. **IMPORTANT** The keywords are going to help another Model to find the same or almost like subjects or persons in the real-world image. Thus the keywords should be very specific and descriptive, not general or abstract, and can reflect the basic attributes of this task or thing. Making another VLM easily find the same or similar subjects or persons in the real-world image. For the current image: There is something suitable for the query"{query}", but the model can’t find the bbox exactly. Your mission is to base on the current image and the combined image to describe the same thing in both. Output Format: Output the keywords in JSON format. Ensure the output contains only the keywords, without additional text or explanation. The JSON structure should be a list of strings. Example JSON Output: ["female", "light-skinned", "doctor", "middle-aged", "smiling"]. Your output should be in a format that the code below can easily extract the keywords: --match = re.search(r"\[.*?\]", output_text[0]) -- if match: -- str_list = json.loads(match.group(0)) -- print(str_list) Task: Analyze the provided images and generate five keywords that best describe the identified person or object based on the guidelines above. Output the keywords in the specified JSON format. input:{query} output: """ messages = [ { "role": "system", "content": [ { "type": "text", "text": system_prompt, } ] }, { "role": "user", "content": [ { "type":"text", "text":"Here is the combined image from the web.", }, { "type": "image", "image": com_image, ##combined images from internet }, ] }, { "role": "user", "content": [ { "type":"text", "text":"This is the current image from the camera.", }, { "type": "image", "image": cur_image, ##current main view }, ] } ] GLM UNDERSTANDING (TEXT) PROMPT # Build messages for GLM inference (memory-first replace) messages: list[dict] = [] # 1) System steer (role and objective) messages.append({ "role": "system", "content": [{ "type": "text", "text": ( "You normalize open-world object mentions to a closed training vocabulary. " "Return EXACTLY ONE label copied verbatim from the allowed list below, " "or output NONE if no label applies." ) }] }) # 2) Allowed vocabulary (verbatim list shown to the model) allowed_text = "\n".join(f"- {lab}" for lab in known_list) messages.append({ "role": "user", "content": [{"type": "text", "text": "Allowed vocabulary:\n" + allowed_text}] }) # 3) The new object mentioned (query term) messages.append({ "role": "user", "content": [{"type": "text", "text": f"New object mention: {norm_prompt}"}] }) # 4) Decide available evidence has_com = (pil_com is not None) # composite reference image has_kw = bool(keywords) # keyword list has_boxes = (top_crop is not None) # highest-score crop from original image has_scores = bool(boxes_list) # detector had scores/boxes at all # 5) Case A: (no comimage, no keywords); include crop if available; else include raw image if (not has_com) and (not has_kw) and (has_boxes or (pil_image is not None)): if has_boxes: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Evidence crop (highest detector score)."}, {"type": "image", "image": top_crop}, ], }) elif pil_image is not None: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Context image."}, {"type": "image", "image": pil_image}, ], }) # 6) Case B: (no comimage, no keywords, no boxes/scores); optional raw image only if (not has_com) and (not has_kw) and (not has_boxes) and (not has_scores): if pil_image is not None: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Context image."}, {"type": "image", "image": pil_image}, ], }) # 7) Case C: (comimage + keywords + crop are all available); each as its own user turn if has_com and has_kw and has_boxes: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Composite reference image from the web."}, {"type": "image", "image": pil_com}, ], }) messages.append({ "role": "user", "content": [ {"type": "text", "text": "Top-scoring evidence crop from the original image."}, {"type": "image", "image": top_crop}, ], }) messages.append({ "role": "user", "content": [{"type": "text", "text": "Image/scene keywords: " + ", ".join(map(str, keywords))}] }) # 8) Optional: brief external snippets (web/Wikipedia), one separate turn if web: qs = [norm_prompt] + ([k.strip() for k in keywords] if keywords else []) web_brief = fetch_snippets(qs, limit=4) # function enables searching online and with a " limit" to prevent error content. # if web_brief: messages.append({ "role": "user", "content": [{"type": "text", "text": "External brief (web/Wikipedia):\n" + web_brief}] }) # 9) Final instruction with strict stability constraints messages.append({ "role": "user", "content": [{ "type": "text", "text": ( "STRICT CONSTRAINTS:\n" "- Output MUST be exactly one label copied verbatim from the allowed vocabulary above, " "or the token NONE when no label applies.\n" "- DO NOT include any analysis, explanation, reasoning, or additional text.\n" "- Format your final decision ONLY as:\n" " <answer>LABEL_OR_NONE</answer>\n" "- LABEL_OR_NONE must be one of the allowed labels or NONE." ) }] })
VLA2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation Han Zhao‡,1,2, Jiaxuan Zhang‡,2,3, Wenxuan Song4, Pengxiang Ding1,2, Donglin Wang*,2, 1Zhejiang University, China 2MiLAB, Westlake University, China 3Southern 4Hong Kong (Guangzhou), China Abstract- Current vision-language-action (VLA) models, pre-trained on large-scale robotic data, exhibit strong multitask capabilities and generalize well to variations in visual and language instructions for manipulation. However, their success rate drops significantly when faced with object concepts outside the training data, such as unseen object descriptions and textures in the dataset. To address this, we propose a novel agentic framework, VLA2, which leverages OpenVLA as the execution backbone and effectively leverages external modules such as web retrieval and object detection to provide visual and textual knowledge about target objects to the VLA. This approach mitigates generalization failure when handling out-of-distribution objects. Based on the LIBERO simulation environment, we introduced novel objects and object descriptions to construct a new evaluation benchmark with three difficulty levels to test the effectiveness of our method. Our framework successfully outperformed the current stateof-the-art models on our designed hard-level generalization benchmark. Compared to the standalone OpenVLA baseline, VLA2 achieves a 44.2% improvement in the success rate in the hard-level benchmark and an average improvement of 20.2% in all customized environments without any performance degradation on in-domain tasks. Project website: https://vla2.github.io. I. INTRODUCTION In recent years, foundation models have profoundly influenced the development of artificial intelligence research. This impact spans visual encoders [1]-[3], multi-modal large language models [4]-[6], and agent systems [7]-[9], among others. In the field of robotics, Vision-Language-Action (VLA) models [10]-[16] built upon vision-language models represent a prominent research paradigm. By fully integrating visual perception, language instruction understanding, and action execution into a unified model, VLA leverages largescale robotic manipulation datasets for end-to-end training. This approach effectively harnesses the learning capacity of large-scale models and shows strong potential to serve as a foundational backbone for general-purpose robots performing manipulation tasks in open-world environments in the future. However, although VLA models have acquired a certain degree of generalization ability, such as understanding some unseen language instructions and manipulating corresponding objects, they completely fail to comprehend instructions involving entirely unseen concepts (as demonstrated ‡Equal Contribution *Corresponding Author Fig. 1: Evaluation result on our custom Hard-level benchmark. In evaluation involving unseen concepts (i.e., object textures and language descriptions outside the dataset), our proposed framework surpasses other state-of-the-art models finetuned on the original LIBERO dataset. In contrast, the reproduced Agentic Robot framework [17] using our model exhibits a significantly noticeable performance degradation in this task. in OpenVLA failure cases [11]) and are unable to transfer previously learned manipulation experience to such scenarios. Some researchers have attempted to jointly train robotic manipulation data with web-scale multimodal data [10], [14], aiming to preserve extensive conceptual knowledge during training and thereby enhance generalization in manipulation tasks. However, such a training paradigm not only demands substantial resources but also makes iterative model updates with emerging concepts impractical. As a result, it fails to fully address the problem. To this end, we proposed Vision-Language-Action Agent (VLA2), a novel integrated system-level framework designed to increase the capabilities of VLA systems by supporting the invocation of diverse tools-including task planning, web search, object detection, and other functional modules-thereby extending the executive limits of the current VLA models. Our main contributions are as follows: • We propose the VLA2 framework that integrates task planning, conversion of unseen concepts into known information via web and memory retrieval, VLA-based execution, and a verifier module to assess task completion. • We fine-tune OpenVLA [11] on the augmented LIBERO 16 Oct 2025 Fig. 2: Framework overview. The proposed framework comprises three components: A. preliminary processing, B. cognition and memory, and C. Judgment and execution. During task running, preliminary processing and cognition (except video object segmentation as VOS) are invoked only once at the start of each task. [18] dataset to enable the VLA to accept masked images as input conditions for improving generalization in object manipulation. • Based on the LIBERO simulation environment, we designed object generalization tasks across three difficulty levels, ranging from simple color variations (Easy) and manipulation of generalized target objects (Medium) to generalization to objects with unseen concepts (Hard). II. RELATED WORKS A. Vision-Language-Action Models VLA models [10]-[16], [19] belong to a type of foundation model that processes visual and other modal data as observations, and follows human natural language commands to execute the corresponding robotic tasks. Through pretraining on large-scale robotic manipulation datasets [20]- [22] and minimal fine-tuning through supervised finetuning [23]-[25] or reinforcement learning [26]-[31] on downstream tasks. While VLA models can effectively integrate perception and decision-making in an end-to-end manner, they still face significant challenges in real-world applications that require strong generalization capabilities, such as open-vocabulary object manipulation and long-horizon task execution. In contrast to the aforementioned approaches, our work does not primarily focus on improving generalization by directly optimizing the VLA model. Instead, we introduce external modules on top of existing models to form a more comprehensive system, which enhances the performance of the downstream VLA by leveraging external tools for improved information processing. B. Embodied Agents Inspired by the concept of agents [32] in the field of large language models, a growing body of research has begun to integrate VLA models as an execution module [17], [33]-[35] into embodied agent systems. This is achieved by incorporating additional modules that serve as external tools, effectively expanding the capability boundaries of VLA through appropriate invocation. The prior work incorporated modules such as task planning, situational memory, and skill libraries. In this paper, we focus on enhancing the agent's tool invocation capability by using web search, object detection, and other functional modules-in combination with current visual observations and task instructions-to identify target objects for manipulation. This approach enables the precise operation of objects beyond the cognitive scope of the single VLA model. III. METHOD We consist of three major parts, as in Fig. 2: Preliminary Information Processing, responsible for analyzing textual and visual information; Cognition and Memory, responsible for transforming all received information into knowledge accessible to the next part; and Judgment and Execution, responsible for monitoring task progress and interacting with the environment. As shown in the figure, we use LIBERO as the simulation environment. A. Preliminary Information Processing In this part, we employ a planner and a vision preprocessing module to perform the initial decomposition and processing of information. 1) Planner: The planner is responsible for decomposing complex natural-language instructions into a sequence of structured subtasks executable by downstream modules. To ensure reliability, the planner prompt is designed with strict constraints: each subtask must contain exactly one action verb (e.g., pick up, move, open) and must explicitly specify the relevant objects and locations, with additional syntactic and structural rules enforced so that the postprocessing stage can reliably parse the output. This design transforms a complex compound action into multiple smaller subtasks, each consisting of a single action. The planner is implemented using the GLM-4.1V-9B-Thinking [36], which is locally deployed. To enable modular extraction of the task list and objects & locations from GLM's output, we designed a three-layer post-processing module consisting of: (a) automatic linguistic extraction; (b) error detection and regeneration when extractions fail; and (c) hard-coded task-specific parsing once an error tolerance threshold is exceeded. This architecture ensures that, regardless of what GLM outputs, only valid and high-quality information is passed to the downstream modules. 2) Vision Pre-processing: In the initial processing stage of visual information, the framework employs the MMGroundingDINO [37] model to generate a list containing the bounding boxes of the objects and locations provided to this module, as aligned on the first image. Probabilistically, some of the bboxes may be empty due to model failures in recognition or inadequate post-processing. These cases must be further addressed by subsequent cognition and memory. To better adapt to the overall framework and the taskexecution environment, the MMGroundingDINO model is fine-tuned within this framework to improve the accuracy of recognizing task-relevant objects. The experimental setup of this framework is based on the LIBERO simulation environment. Accordingly, 500 randomly rendered images were collected across the LIBERO-Spatial, Goal, Object, and Long datasets [18]. Bounding boxes and object names were manually annotated, and data augmentation was applied to the images. Using the MMDetection [38] toolkit, the model was fine-tuned, resulting in a version that can reliably recognize the objects appearing in these four LIBERO environments. B. Cognition & Memory To enhance the out-of-distribution (OOD) performance of the underlying VLA, this project integrates an active webbased information retrieval capability into the higher-level text-image processing pipeline. The following serial sections will introduce the logic of web search enhancement for visual and linguistic information in detail. 1) Vision: Overview. In the visual processing pipeline, task-related objects and locations in the third-person robot view are overlaid with transparent, colored masks to reduce reliance on surface textures and mitigate visual overfitting. Fig. 2 summarizes this module and its interfaces to the rest of the system. And Fig. 3 displayed the detailed logical relationships between the small modules in the vision module. Fig. 3: Vision framework. This figure illustrates the whole structure and contents within Vision. Double judgment. For each word (object/location), the system first checks whether a valid bounding box (bbox) is available and, in parallel, whether auxiliary keywords are present. If either signal is missing, a visual search branch is triggered: bbid [39] downloads web images for the word, the images are arranged into a 2×3 collage and paired with a structured text prompt, and this input is sent to the GLM Understanding (Vision) module. The resulting keywords, images, and collage are cached in vision memory for reuse. The enriched prompt (original text + keywords) is then resubmitted to the detector; if detection still fails, an empty bbox is returned and no mask is applied for that item. GLM understanding (Vision). Given the first image, the retrieved web collage, and the current word, this module produces five concise descriptive keywords that anchor the unknown word to elemental attributes (e.g., color, shape, function, size). These keywords support robust re-detection and are stored in memory for subsequent tasks. Vision processing. MMGroundingDINO consumes the word together with its keywords to localize the word in the first image, producing a bbox when possible (the "Vision processing" block in Fig. 3). SAM: Segmentation, color encoding, and interface routing. Given validated bboxes, SAM2.1-L [40] converts each box into a pixel-accurate mask that specifies the target's location and shape in the image. The outputs (bbox, mask, and the term-color assignment) are packaged with the corresponding vision memory (e.g., keywords and web collage). This package is then routed to two consumers: (i) the Language module, which stores the vision-memory fields for the subsequent replace step (explained in the next section); and (ii) the VOS pipeline-a module separate from Vision-which uses the term-color mapping to guide Cutie [41] in generating temporally consistent, color-coded masked image flows. Objects and locations use distinct color Fig. 4: language framework. This figure illustrates the whole structure and contents within Language. palettes so that downstream components can exploit roleaware color cues when learning action-image correspondences. Rationale: instant learning. This pipeline converts unfamiliar inputs into familiar representations for MMGroundingDINO, enabling effective OOD generalization by decomposing novel concepts into elemental attributes and anchoring them to known ones. We refer to this as "instant learning": leveraging prior knowledge to rapidly assimilate unfamiliar concepts. Prior studies indicate that accessible knowledge facilitates the comprehension and memory of new information [42], that successful knowledge construction reactivates previously learned information [43], and that adaptive memory builds on prior knowledge rather than learning tabula rasa [44]. Moreover, the explicit color-mask alignment improves visual-text overlap, consistent with findings that finer instance- and token-level alignment boosts performance [45] and that stronger color perception benefits color-related generalization [46]. 2) Language: Overview. A primary role of the languageprocessing component is to align all object-related tokens in task prompts with a controlled vocabulary derived from training and fine-tuning, thereby ensuring consistent system-level cognition. The detailed structure and information content of the Language framework are shown in Fig. 4. Double judgment. A substitution mechanism handles tokens absent from this vocabulary. For each prompt, once bounding boxes are obtained from the visual pipeline, object terms are replaced at the text level; if no box is detected, substitution is still attempted but designed to return NONE when no reliable replacement is found. If the token is known on the KnownList (details are at the end of the section IIIC), it is used directly; otherwise, the GLM (shared with the planner) generates a replacement. GLM understanding (Text). The GLM input message comprises: (i) the first image with cropped bounding-box regions and scores (or an empty list), (ii) a collage from web search (or NONE), (iii) the original prompt, (iv) webderived keywords (or NONE), (v) the known-vocabulary list, and (vi) auxiliary descriptive information from external APIs. Analogous to the planner, we designed dedicated input preprocessing and output post-processing modules for the GLM Understanding (Text) component to better align with the language framework and to enable instant learning. If the replacement word generated by GLM is valid, the corresponding substitution (new corresponding to original) will be recorded in the text memory of the language module, so that when this term reappears for replacement, the system can directly utilize the stored memory. If the replacement word is invalid, no substitution is performed, and no memory is created. Text processing. Finally, within the current task, once all substitution mappings have been determined, the target terms are replaced accordingly, and the final task list is repaired to eliminate errors arising from long-chain information propagation. C. Judgment & Execution Judgment. We employ Qwen2.5-VL-3B-Instruct [47] as the verifier. To adapt it more effectively to the experimental scenarios and to improve judgment accuracy, we manually constructed a fine-tuning dataset using videos from the LIBERO dataset. Specifically, video segments were extracted from the original visual recordings of the simulation environment. For each segment, a text prompt was generated corresponding to the current subtask, and annotations were added to indicate whether the subtask had been completed and whether the system could proceed to the next subtask. Fine-tuning of Qwen2.5-VL-3B-Instruct was then carried out using LLaMA-Factory [48] as the training tool, resulting in a verifier better aligned with the LIBERO environments and the task decomposition rules described in the planner section. Beyond checking whether each subtask is completed, we design a recovery mechanism that uses a dynamic threshold to determine whether the end-effector is stuck or in an anomalous state. Once the recovery detector flags an anomaly, we forcibly set current task to "lift the gripper" and, after a fixed number of steps, resume the subtask that was active before recovery and restore its execution progress. Execution. The lower-level VLA is fine-tuned to accommodate the structured inputs produced by the upper-level planner and visual processing modules. In particular, the visual modality of the LIBERO dataset is reformulated by replacing the original third-person RGB videos with RGB videos augmented by transparent colored masks. To construct these masked videos and the accompanying task list, we employ the same vision and language modules described above; all logic and processing remain identical to the main framework. Consequently, during dataset preparation, the vision and language memories already encode the indistribution(ID) portion of the tasks. For subsequent evaluaFig. 5: Comparison between origin and new environments. In this figure, we illustrate the differences between the new and original environments. We present a single rendered scene to highlight the modified objects; the novel items appearing in the other scenes share the same appearance. tion on the three OOD environments, any OOD-related memories are re-initialized before each validation run to ensure strict fairness and to isolate the effect of our instant-learning mechanism. Meanwhile, the task descriptions are reformatted into temporally segmented, plan-based task prompts that explicitly reflect the distribution of subtasks over time. Moreover, during fine-tuning and evaluation, the task text prompts are enhanced in the form: "now do 'current subtask', the whole task is 'joint of all subtasks'", such that the VLA both knows what it is supposed to do now and what the overall task is. Training the VLA on this modified dataset enables it to process masked visual inputs and sequential subtask prompts consistently with the planner-driven structure, which improves downstream execution performance. During OpenVLA fine-tuning, a knowledge base of known object terms is built using an NLTK-based extractor. Tokens are identified via tokenization and part-of-speech tagging, aggregated into a JSON vocabulary, and stored with the model for use at inference. This is the KnownList in the Language section. IV. EXPERIMENTS We concentrated experiments on evaluating the zero-shot OOD adaptability of the proposed VLA2 framework. To this end, a new evaluation environment was constructed to specifically test OOD generalization across novel scenarios, in addition to adopting the LIBERO benchmark as a standardized reference. The goal is to examine whether the framework can generalize to previously unseen tasks and maintain robustness without task-specific fine-tuning, while also analyzing the contributions of its key components through ablation studies. Specifically, the experiments aim to answer the following questions: Q1. How does the testing performance of VLA2 on in-domain tasks compare to stateof-the-art VLAs? Q2. How is the generalization performance of VLA2 on out-of-distribution test tasks with high difficulty? Q3. Do the key modules we designed contribute significantly to the framework's generalization performance? A. Experimental Setup LIBERO simulation environment. Within the original LIBERO simulation environment, we constructed three new variants-Easy, Medium, and Hard-based on the Spatial and Goal environments-comparison between the original and the new environments in Fig. 5. The modifications are limited to object appearances as follows. In Easy, the original black bowl was recolored to an orange series. In Medium, the black bowl was replaced with LIBERO's native white bowl, the wine bottle was recolored to sky blue and renamed as the blue bottle, and the wooden cabinet was replaced with LIBERO's native white cabinet. In Hard, the wine bottle was completely redesigned to resemble the well-known Chinese liquor Moutai, the black bowl was redesigned with blueand-white porcelain patterns and renamed the blue white porcelain bowl, and the wooden cabinet was again replaced with the white cabinet. The original cream cheese has been replaced with butter, which looks different but has approximately the same collision model. No other modifications were introduced beyond these appearance changes. For the evaluation on the new environments, each task is executed 50 times, and both the overall success rate (SR) and the success rate of each individual task are reported. The same evaluation protocol is applied to the LIBERO original environments when testing the framework. Baseline. We compares the proposed VLA2 framework against several widely recognized, high-performance VLA baselines finetuned on the same LIBERO training dataset: OpenVLA [11], OpenVLA-OFT [23], π0 [12], π0-FAST [19], and Agentic Robot [17], a embodied agent framework. All experiments are conducted in the original four simulation suites, as well as in the three newly crafted environments specifically designed for OOD evaluation. Training details. All components of the framework were trained/fine-tuned on NVIDIA A100-80GB GPUs. For MMGroundingDINO, we adopted the default MMDetection training configuration and fine-tuned on our custom dataset using TABLE I: LIBERO simulation benchmark (Original Environment). FT denotes fine-tuning on task-specific demonstrations. Bold numbers mark the best within all classes. Underline numbers mark the best within Class 2. Method Spatial Object Goal Long Average Class 1 OpenVLA-OFT (FT) 97.6 98.4 97.9 94.5 97.1 π0 (FT) 96.8 98.8 95.8 85.2 94.2 π0-FAST (FT) 96.4 96.8 88.6 60.2 85.5 Class 2 Agentic Robot 85.8 89.0 81.8 61.6 79.6 OpenVLA (FT) 84.7 88.4 79.2 53.7 76.5 VLA2 (ours) 86.4 86.2 83.2 64.4 80.1 2 GPUs for 100 episodes. For Qwen2.5-VL-3B-Instruct, we used LLaMA-Factory's default qwen2-sft recipe with our custom dataset, increased the number of episodes by a factor of five, and trained on 4 GPUs. For OpenVLA, we used the official fine-tuning script on our custom dataset with a learning rate of 3×10-4, training on 8 GPUs. Implementation. This project adopts a 20-step verification waiting period. A custom end-effector jam detection module was implemented with a 10-step recovery waiting to replace the original recovery mechanism and logic. All other model configurations and information transmission pipelines remain the same as described in the Method section. In this case, the parameters are closer to those of the original Agentic Robot [17], making the comparison more meaningful. B. Main Results Original environments (in-domain; Table I). The evaluation shows that Class 1 systems with stronger VLA backbones obtain higher averages. In contrast, our framework uses OpenVLA as the VLA backbone, so the fairest in-distribution comparison is within the OpenVLA family (Class 2). VLA2 attains the highest Class 2 average SR at 80.1%, which is higher than Agentic Robot and the finetuned OpenVLA. On Object, the SR of our framework (86.2%) remains below these two baselines. The reason for the result degradation due to a perception bottleneck: 224×224 observations and imprecise object names make fine-grained recognition difficult; MMGroundingDINO often misses or mislocalizes boxes; web images used for grounding differ from the simulator views. These perceptual errors can leave the first subtask unresolved, preventing the verifier from advancing and depressing overall SR on affected tasks. Custom environments (out-of-distribution; Tables II and III). Across the custom environments, all methods exhibit SR declines as OOD difficulty increases, from simple color changes to semantic reinterpretations (e.g., replacing a wine bottle with Moutai) and synonym substitutions (e.g., plate →saucer). Despite this, VLA2 attains the best overall average SR at 81.5%. The advantage is most pronounced on the Hard environment, where VLA2 reaches 76.2%, exceeding π0 by 16.2% and OpenVLA-OFT by 28.8% TABLE II: LIBERO simulation benchmark (Custom Environment). SR comparison on Easy/Medium/Hard. FT denotes fine-tuning on task-specific demonstrations. Bold numbers mark the best across all methods. Method Easy Medium Hard Average Class 1 OpenVLA-OFT (FT) 98.8 95.4 47.4 80.5 π0 (FT) 97.2 86.0 60.0 81.1 π0-FAST (FT) 98.0 75.2 45.8 73.0 Class 2 Agentic Robot (RP) 83.8 48.6 26.2 52.9 OpenVLA (FT) 85.0 66.7 32.0 61.2 VLA2 (ours) 86.6 81.6 76.2 81.5 (Table II). Task-level results further highlight robustness on large semantic shifts-for example, moutai-rack (72 for VLA2 vs. 44 for π0) and bowl-saucer (88 for VLA2 vs. 16 for π0), as shown in Table III. These findings support our core premise: by explicitly reforming unfamiliar inputs into the model's known distribution (via our knowledgealignment pipeline), VLA2 is less perturbed by OOD shifts than competing baselines, even those with more advanced backbones. C. Ablation Study We evaluate three ablations in the custom LIBEROHard setup, each removing a distinct capability from our framework (Table III). w/o mask excludes the transparent instance/region overlays and color mask injects. w/o replace disables lexical normalization, i.e., unknown or out-ofvocabulary nouns in the task text are no longer substituted with semantically related in-distribution texts. w/o web turns off all external retrieval and episodic reuse, meaning no image search, no text search, and no previously cached memory from web retrieval can be consulted during planning or execution. Additionally, we designed an experiment termed Agentic Robot (RP) that removes all the aforementioned modules and replaces every component in the framework [17] with the other models mentioned above and additionally omits our subtask-augmentation in the execution prompts, serving as an ablation study. Ablation on mask. Disabling transparent masks reduces the average SR from 76.2 to 64.8 ( -11.4 ), with the largest drops on interaction-heavy and cluttered scenes: open-drawer -26 (78→52), bowl-cabinet -22 (86→64), moutai-rack -36 (72→36), and moutai-cabinet -12 (88→76), see Table III. These patterns indicate the mask overlay is most critical when the VLA must localize within containers/occlusions or discriminate among visually similar instances. Minimal effect on stove (-2) and even a slight gain on bowl-stove (+2) suggest that for simple, single-object placements, the raw RGB already suffices, but removing masks consistently hurts spatial reasoning and long-horizon pickand-place chains. Ablation on replace. Turning off semantic substitution TABLE III: LIBERO-Hard tasks environment simulation results. Transposed SR comparison per task. The row names under "new items" (e.g., "stove") are concise task abbreviations; "new items" indicates the number of zero-shot items in the task text prompt. Bold marks the best performance across all models. For the Ablation rows, values in parentheses denote the vertical difference from VLA2 (ours) in the same column, computed as Ablation-VLA2. The Agentic Robot (RP) means w/o mask, replace and web, also no subtask augmentation introduced in the Execution part. Strictly follow the original Agentic Robot pipeline [17]. Category Method 0 new item 1 new item 2 new items Average SR stove opendrawer drawerbowl saucerstove bowlstove moutairack bowlsaucer bowlcabinet butterbowl moutaicabinet Class 1 OpenVLA-OFT (FT) 100 100 92 8 88 0 0 82 0 4 47.4 π0 (FT) 98 94 66 88 92 44 16 68 0 34 60.0 π0-FAST (FT) 96 62 72 6 98 0 34 90 2 0 45.8 Class 2 OpenVLA (FT) 96 40 14 84 52 0 2 30 2 0 32.0 VLA2 (ours) 96 78 62 84 86 72 88 86 22 88 76.2 Ablation VLA2 (w/o mask) 94 (-2) 52 (-26) 58 (-4) 78 (-6) 88 (+2) 36 (-36) 84 (-4) 64 (-22) 18 (-4) 76 (-12) 64.8 (-11.4) VLA2 (w/o replace) 96 (0) 74 (-4) 26 (-36) 54 (-30) 90 (+4) 16 (-56) 16 (-72) 86 (0) 12 (-10) 42 (-46) 51.2 (-25.0) VLA2 (w/o web) 96 (0) 82 (+4) 58 (-4) 82 (-2) 92 (+6) 24 (-48) 84 (-4) 78 (-8) 20 (-2) 36 (-52) 65.2 (-11.0) Agentic Robot (RP) 96 (0) 38 (-40) 0 (-62) 0 (-84) 44 (-42) 0 (-72) 0 (-88) 64 (-22) 0 (-22) 20 (-68) 26.2 (-50.0) yields the largest average degradation, from 76.2 to 51.2 ( -25.0 ). Catastrophic failures occur when novel or compositional nouns must be grounded: bowl-saucer -72 (88→16), moutai-rack -56 (72→16), moutai-cabinet -46 (88→42), drawer-bowl -36 (62→26), and saucer-stove -30 (84→54). These gaps quantify that synonym/alias replacement is the dominant lever for bridging text OOD to the model's indistribution vocabulary, especially when two unseen tokens co-occur (the "2 new items" block). Small neutral/positive shifts on stove (0) and bowl-stove (+4) imply replacement is unnecessary for well-known nouns, but omitting it severely limits compositional generalization elsewhere. Ablation on web. Removing web image/text search and retrieved memory lowers the average SR to 65.2 ( -11.0 ) and disproportionately harms novel-brand targets: moutairack -48 (72→24) and moutai-cabinet -52 (88→36). Moderate declines also appear in bowl-cabinet -8 (86→78). Slight gains on open-drawer (+4) and bowl-stove (+6) show that retrieval can inject noise on trivially familiar scenes, but its net benefit on unfamiliar concepts is decisive. Notably, butter-bowl remains difficult across settings (ours 22; deltas only -2 to -10): the low-resolution "butter" appears visually ambiguous and cannot be reliably disambiguated by retrieval or text substitution, so even humans struggle to verify it, explaining the uniformly low SR in this task. All three modules removed (Agentic Robot (RP)). This experiment fully adopts the framework [17], with the only modification being the replacement of all corresponding modules with the models used in our proposed method, and also omitting our subtask augmentation, average SR collapses to 26.2 ( -50.0 ). Many hard tasks drop to zero: drawer-bowl -62 (62→0), saucer-stove -84 (84→0), bowlsaucer -88 (88→0), and butter-bowl -22 (22→0); large losses persist on moutai-cabinet -68 (88→20), moutai-rack -72 (72→0), and open-drawer -40 (78→38). Beyond the ablated capabilities, we find the task-list prompt format used in Agentic Robot introduces substantially increased OOD portion after decomposition (e.g., splitting "put the bluewhite porcelain bowl in the cabinet" into subgoals that diverge from training distributions). This causes the verifier to repeatedly fail the first subtask, preventing progression and yielding SR = 0 for many episodes. In contrast, our prompts condition OpenVLA on "now do current subtask, while conditioning on the full task context," which injects stronger ID structure; combined with mask, replace, and web, this design stabilizes execution and underlies the consistent gains in Table III. V. CONCLUSIONS In this paper, we propose VLA2, a framework that integrates arbitrary VLAs into a comprehensive embodied agent system. By incorporating modules such as task planning, web search, scene memory, and process verification, our framework enhances the task performance of VLAs. Experiments demonstrate that our module design significantly improves the generalization capability of the agent in grasping objects from unseen concept categories. Although our method achieves substantial improvements over existing approaches, it still has certain limitations. Our current framework designs are still confined to relatively rigid procedural structures. Enhancing the versatility of VLA2 to achieve greater system autonomy and enable the invocation of more external tools to handle a wider range of tasks represents a promising direction for future exploration. Moreover, we have not conducted real-world experiments at this stage, and it is essential to extend our work to openworld real-world grasping evaluations in the future. ACKNOWLEDGMENT This work was supported by the National Science and Technology Innovation 2030 - Major Project (Grant No. 2022ZD0208800), and NSFC General Program (Grant No. 62176215). REFERENCES [1] S. Yang, D. Wei et al., "Contrastive language-image pre-training model based semantic communication performance optimization," 2025. [2] M. Caron, H. Touvron, I. Misra et al., "Emerging properties in selfsupervised vision transformers," 2021. [3] X. Zhai, B. Mustafa, A. Kolesnikov et al., "Sigmoid loss for language image pre-training," 2023. [4] H. Liu, C. Li, Q. Wu et al., "Visual instruction tuning," 2023. [5] S. Karamcheti, S. Nair, A. Balakrishna et al., "Prismatic vlms: Investigating the design space of visually-conditioned language models," 2024. [6] H. Zhao, M. Zhang, W. Zhao et al., "Cobra: Extending mamba to multi-modal large language model for efficient inference," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 10, pp. 10 421-10 429, Apr. 2025. [7] G. Wang, Y. Xie, Y. Jiang et al., "Voyager: An open-ended embodied agent with large language models," 2023. [8] I. Gur, H. Furuta, A. Huang et al., "A real-world webagent with planning, long context understanding, and program synthesis," 2024. [9] S. Yao, J. Zhao, D. Yu et al., "React: Synergizing reasoning and acting in language models," arXiv preprint , 2022. [10] A. Brohan, N. Brown, J. Carbajal et al., "Rt-2: Vision-language-action models transfer web knowledge to robotic control," 2023. [11] M. J. Kim, K. Pertsch, S. Karamcheti et al., "OpenVLA: An open-source vision-language-action model," arXiv preprint , 2024. [12] K. Black, N. Brown, D. Driess et al., "π0: A vision-languageaction flow model for general robot control," arXiv preprint , 2024. [13] P. Ding, H. Zhao, W. Zhang et al., "Quar-vla: Vision-language-action model for quadruped robots," 2025. [14] Z. Zhou, Y. Zhu, M. Zhu et al., "Chatvla: Unified multimodal understanding and robot control with vision-language-action model," 2025. [15] C. Cheang, S. Chen, Z. Cui et al., "Gr-3 technical report," 2025. [16] NVIDIA, :, J. Bjorck, F. Casta ̃neda, N. Cherniadev et al., "Gr00t n1: An open foundation model for generalist humanoid robots," 2025. [17] Z. Yang, Y. Chen, X. Zhou et al., "Agentic robot: A brain-inspired framework for vision-language-action models in embodied agents," 2025. [18] B. Liu, Y. Zhu, C. Gao et al., "Libero: Benchmarking knowledge transfer for lifelong robot learning," arXiv preprint , 2023. [19] K. Pertsch, K. Stachowicz, B. Ichter, D. Driess, S. Nair, Q. Vuong, O. Mees, C. Finn, and S. Levine, "Fast: Efficient action tokenization for vision-language-action models," 2025. [20] A. Khazatsky, K. Pertsch, S. Nair et al., "Droid: A large-scale in-thewild robot manipulation dataset," 2025. [21] E. Collaboration et al., "Open x-embodiment: Robotic learning datasets and rt-x models," 2025. [22] AgiBot-World-Contributors et al., "Agibot world colosseo: A largescale manipulation platform for scalable and intelligent embodied systems," 2025. [23] M. J. Kim, C. Finn, and P. Liang, "Fine-tuning vision-languageaction models: Optimizing speed and success," arXiv preprint , 2025. [24] P. Li, Y. Wu, Z. Xi et al., "Controlvla: Few-shot object-centric adaptation for pre-trained vision-language-action models," 2025. [25] W. Song, J. Chen, P. Ding et al., "Ceed-vla: Consistency visionlanguage-action model with early-exit decoding," 2025. [26] W. Song, H. Zhao, P. Ding et al., "GeRM: A generalist robotic model with mixture-of-experts for quadruped robot," in 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024, pp. 11 879-11 886. [27] H. Zhao, W. Song, D. Wang et al., "MoRE: Unlocking scalability in reinforcement learning for quadruped vision-language-action models," 2025. [28] G. Lu, W. Guo, C. Zhang et al., "Vla-rl: Towards masterful and general robotic manipulation with scalable reinforcement learning," 2025. [29] H. Zhang, Z. Zhuang, H. Zhao et al., "Reinbot: Amplifying robot visual-language manipulation with reinforcement learning," 2025. [30] S. Tan, K. Dou, Y. Zhao et al., "Interactive post-training for visionlanguage-action models," 2025. [31] Y. Chen, S. Tian, S. Liu et al., "Conrft: A reinforced fine-tuning method for vla models via consistency policy," 2025. [32] J. Luo, W. Zhang, Y. Yuan et al., "Large language model agent: A survey on methodology, applications and challenges," 2025. [33] H. Shi, B. Xie, Y. Liu et al., "Memoryvla: Perceptual-cognitive memory in vision-language-action models for robotic manipulation," 2025. [34] M. Lei, H. Cai, B. Que et al., "Robomemory: A brain-inspired multimemory agentic framework for lifelong learning in physical embodied systems," 2025. [35] S. Zhou, X. Wang, J. Zhang et al., "P3: Toward versatile embodied agents," 2025. [36] V. Team, W. Hong, W. Yu et al., "Glm-4.5v and glm-4.1v-thinking: Towards versatile multimodal reasoning with scalable reinforcement learning," 2025. [37] X. Zhao, Y. Chen, S. Xu et al., "An open and comprehensive pipeline for unified object grounding and detection," 2024. [38] K. Chen, J. Wang, J. Pang et al., "MMDetection: Open mmlab detection toolbox and benchmark," arXiv preprint , 2019. [39] Ostroluck ́y, "Bulk bing image downloader," https://github.com/ost rolucky/Bulk-Bing-Image-downloader, 2014, software used for downloading images from Bing using keywords. [40] N. Ravi, V. Gabeur, Y.-T. Hu et al., "Sam 2: Segment anything in images and videos," arXiv preprint, 2024. [41] H. K. Cheng, S. W. Oh, B. Price et al., "Putting the object back into video object segmentation," in arXiv, 2023. [42] G. Brod et al., "The influence of prior knowledge on memory," Journal of Cognitive Neuroscience, 2013, "if prior knowledge is available and accessible, it facilitates comprehension and memory of new incoming information". [43] M. van Kesteren and et al., "Integrating educational knowledge: reactivation of prior knowledge during new learning enhances memory integration," Trends in Neuroscience and Education, 2018, "Successful knowledge construction is suggested to happen through reactivation of previously learned information during new learning". [44] O. Bein and et al., "Prior knowledge promotes hippocampal separation but cortical integration of overlapping memories," Nature Communications, 2020, "An adaptive memory system rarely learns information tabula rasa, but rather builds on prior knowledge to facilitate learning". [45] J. Bi, D. Cheng, P. Yao et al., "Vl-match: Enhancing vision-language pretraining with token-level and instance-level matching," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 2584-2593. [46] A. M. Samin, M. F. Ahmed, and M. M. S. Rafee, "Colorfoil: Investigating color blindness in large vision and language models," in NAACL-SRW 2025, 2025, pp. 294-300. [47] S. Bai, K. Chen, X. Liu et al., "Qwen2.5-vl technical report," 2025. [48] Y. Zheng, R. Zhang, J. Zhang et al., "Llamafactory: Unified efficient fine-tuning of 100+ language models," in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). APPENDIX DETAILED DESCRIPTION OF THE PROJECT FRAMEWORK In thw Fig 6 we explain-purely from an information-processing perspective-how all OOD inputs are transformed via the framework described in the main text into ID representations usable by downstream modules; we then outline the design and content of the key prompts used to effect this conversion; finally, we present the computational runtime of each module, so as to evaluate our system's efficiency. Fig. 6: Transformation pipeline. This figure demonstrates how external information is progressively converted into knowledge available to the VLA via the system described in the main text. As illustrated in Fig. 6, the environment-sensed information enters the system at 1 -which matches the typical input format used by VLA systems. Below, we provide a concise, information-processing view of how the content in each gray box transforms and what specific components it comprises. • 0 The environment produces a continually updated image flow. After the task query and the first image are received, only the pathway from 0 to 6 remains active for this round; all other transformation pathways are deferred until the next task is initiated. • 1 Here, the image denotes the first frame returned once the environment is ready, and the accompanying text is the task prompt for that environment. In our running example-"put the blue white porcelain bowl on the stove."-the phrase blue white porcelain bowl denotes a newly introduced object category. • 2 In this information block, the task list is the set of decomposed sub-tasks produced by the planner. For the example in 1 , the ideal output is: "1) pick up the blue white porcelain bowl; 2) place the blue white porcelain bowl on the stove." We also extract two structured fields: objects, which are the items that the manipulator must first contact or grasp, and locations, which define the target placement context. In this example, there is one object: "blue white porcelain bowl" and a location: "stove". • 3 After vision pre-processing, we obtain bounding boxes from a recognition model by using the names in objects and locations together with the image as inputs. This transformation already separates "known" versus "unknown" visual categories: in our example, the stove is known because the model was fine-tuned with stove data, whereas the blue and white porcelain bowl is unknown. This known/unknown status is passed forward to the next Vision module. Fig. 7: Vision processing for unknown blue white porcelain bowl. The vision memory adopts the same format and similar content as shown in the gray dotted box, for which the equal sign denotes equivalence in structure and content. The system generated the keywords and stored the images here automatically during the evaluation in Table II. • 3 - 4 As shown in Fig. 7, the information transformation process for the unknown "blue white porcelain bowl" is illustrated. The figure explicates how web-search images plus the 3 information are fed into the GLM understanding (Vision) module to generate auxiliary enhanced data for the Vision processing module. In this diagram, we primarily display the storage format of the generated memory and example contents of that memory. • 4 After the Vision stage described in the main text, the module can also recognize some previously unknown categories. In the figure, this is reflected by an additional red bounding box indicating that the blue white porcelain bowl has become identifiable. This recognition is attributed to the cognitive, web-enhanced search phase that creates a persistent memory. Subsequently, all bounding boxes are converted to masks by a SAM-style segmentation step, and masks are color-coded into two palettes corresponding to the objects group and the locations group. The "vision mem" in this block denotes the memory produced by the cognitive search process. • 4 - 5 As shown in Fig. 8, the overall framework used by the Language module is highly analogous to that of the Vision module. The memory in Language is stored as a JSON file (the "replace map"). • 5 After the Language module, the task list is augmented with color cues and undergoes controlled lexical replacement. In this example, it becomes: "1) pick up the red-mask black bowl; 2) place the red-mask black bowl on the blue-mask stove." All other metadata remain the same as in 4 . (Color-aligned masks and text labels are a standard way to synchronize language outputs with pixel-level regions in VLM pipelines.) • 6 This final block serves as the continuously updated input to downstream modules: the image stream is rendered as mask overlays, and the task list is strictly aligned with the visual overlays. Uncertainties at both the language and vision levels are minimized or resolved, yielding a representation that is easier to execute and evaluate. After task initiation and completion of the cognitive interpretation stage, only the transformation pathway from 0 to 6 is retained; the task list is finalized and no longer changes. In parallel, mask memory distilled from earlier frames is persisted in the VOS, enabling each subsequent frame to infer its masks directly from the new image and the stored mask memory, thereby producing a continuous mask-overlay video stream. Our VOS module is architected following the design principles of the Cutie project1. For algorithms, data structures, and training/inference pipelines, please refer to that project. 1See the Cutie repository for detailed technical specifications and implementation details: https://github.com/hkchengrex/Cutie. Fig. 8: Language processing for unknown blue white porcelain bowl. The equals sign and the gray dotted box denote the same meaning of reference as in Fig. 7. Within the Planner, Vision, and Language modules, GLM-4.1V-9B-Thinking is employed. To curb error accumulation from upstream to downstream, we adopt a two-stage failure-handling policy for GLM usage: the first failure triggers an automatic retry, while a second failure invokes a hard-coded fallback or, if necessary, aborts the operation. Consequently, even when truly novel objects cannot be reliably interpreted, the stability of the overall system is preserved. In every invocation of the GLM and Qwen models, we design prompts tailored to functionality and module interrelations. The planner prompt is shown in PLANNER PROMPT, whose core is the task decomposition prompt, while the other parts enforce module ordering and output constraints. For the verifier, we designed a detailed task-analysis input prompt, as shown in VERIFIER PROMPT. The prompt for GLM understanding (Vision) is given in GLM UNDERSTANDING (VISION) PROMPT. For GLM understanding (Text), as shown in GLM UNDERSTANDING (TEXT) PROMPT, the prompt fed into GLM is not fixed; it is dynamically adapted based on the available inputs and conditions. In all cases, the ultimate objective is to generate a correct replacement mapping from the known vocabulary, given the available context. COMPUTATIONAL EFFICIENCY ANALYSIS Using the same number of validation runs specified in the Methods (i.e., matching those used to obtain the validation data), we measured and reported the mean computation time per task and per module. TABLE IV: Average computation time. Computing time in seconds for each module and task. Module Spatial Goal Object Long Easy Medium Hard Avg Planner 20.727 19.013 17.126 25.532 21.979 19.452 20.207 20.576 Vision & Vision Pre-Processing 0.086 0.072 0.095 0.208 0.753 1.277 1.066 0.508 Language 0.022 0.016 0.046 0.038 0.263 0.582 0.778 0.249 VOS 8.908 8.698 9.016 12.075 7.945 9.112 9.194 9.278 VLA 72.951 73.104 79.783 131.353 69.706 82.759 99.019 86.825 Verifier 2.862 3.585 3.607 5.542 4.488 4.690 4.869 4.234 Total 105.556 104.488 109.673 174.748 105.134 117.872 135.133 121.658 From Table IV, we observe that compared with [17], our agentic system's additional modules-Vision & vision preprocessing, Language, and VOS-incur only an average extra runtime of 0.508+0.249+9.278 = 10.035 seconds per task over 50 validation runs. This overhead enables the OOD-to-ID conversion pipeline while keeping latency modest. The nearly doubled computation time of the VLA model on the LIBERO-Long tasks arises because every task in that set involves two pick-and-place operations or requires fulfilling two independent actions. Therefore, such tasks demand more steps, resulting in a total runtime roughly twice that of the other three original LIBERO tasks. Because we run GLM-4.1V-9B-Thinking in "thinking" mode, a substantial portion of the Planner's runtime is spent emitting intermediate "think tokens." Empirically, we observe that Planner latency per task is roughly 20s across different tasks. The Vision and Language modules, which internally embed GLM models, operate under a "first cognition + memory reuse" design: after a correct initial inference, subsequent invocations can reuse stored memory and thus run extremely quickly. As a result, their first-time inference costs are comparable to the Planner (approximately 20s), but repeated usage is much faster. Moreover, in Fig. 9, the modules that execute in every step-VOS, VLA, VLM-show time curves that change in lockstep with task variation, exhibiting nearly identical trend lines. We also note that in our new environment, recognitioncentric modules (Vision & vision pre-processing, Language) incur higher average times due to additional unknown object cognition demands and GLM memory generation. In contrast, Planner-used once per task-shows little runtime difference between the original Libero environment and our custom Libero environment, except for modest variations due to input complexity or error rates. Fig. 9: Modules runtime across tasks. This figure shows the average computation time of each module in the agent framework for each task. LIBERO-HARD TASK EXPLANATION In the table III we abbreviate task names; here are their full expansions based on the BDDL filenames in the LIBEROZERO environment: Abbreviation Full Human-Readable Task Name stove turn on the stove open-drawer open the middle drawer of the white cabinet drawer-bowl open the top drawer and put the blue white porcelain bowl inside saucer-stove push the saucer to the front of the stove bowl-stove put the blue white porcelain bowl on the stove moutai-rack put the moutai on the rack bowl-saucer put the blue white porcelain bowl on the saucer bowl-cabinet put the blue white porcelain bowl on top of the white cabinet butter-bowl put the butter in the blue white porcelain bowl moutai-cabinet put the moutai on top of the white cabinet This naming preserves the task structure from the LIBERO-LONG benchmark: each task follows the same schema or template as in the original set, and our version differs only in that we substituted the object terms (e.g., "bowl", "moutai") with our custom names. PLANNER PROMPT ### reading notice: "#" means the comment in python. This project is written in python, and the following content illustrates the logic and structure of the GLM model prompt. ### if sign!="success": ### "sign" is a signal for regenerating a better output, sent by the post -processing function. The unsuccessful situations were mainly caused by an unmatchable and unreadable model output. ### if sign=="no subtask found": additional_info = "PAY MORE ATTENTION TO THE SUBTASKS in your last output, no valid subtask found. You should output the subtask in the same format as the example, without any other analysis or description." elif sign=="no objects found": additional_info = "PAY MORE ATTENTION TO THE OBJECTS in your last output, no valid objects found in /(here)/. You should output the objects in the same format as the example, without any other analysis or description." else: additional_info = "PAY MORE ATTENTION TO THE SUBTASKS and OBJECTS in your last output , no valid subtask or objects found. You should output the subtask and objects in the same format as the example, without any other analysis or description." else: additional_info = "You are doing a good job, keep it up" task_decomposition_prompt =f""" You are a planning assistant for a fixed robotic arm. Your goal is to break down a high-level task into a sequence of **essential high-level commands**, suitable for a capable Vision -Language-Action (VLA) model to execute directly. Output Format: Generate a numbered list of commands. Each command should represent a significant action achieving a clear sub-goal. Stick to the allowed high-level actions. Example Plan Format (Use **exactly** this level of granularity): Plan for the robot arm: Goal: 1. pick up the /( )/ 2. place the in the /( , )/ 3. pick up the /( )/ 4. place the in the /( , )/ --- Example for a different task --- Goal: Put the apple in the red bowl 1. pick up the apple /(apple)/ 2. place the apple in the red bowl /(apple, red bowl)/ --- Example for another task --- Goal: Put the cup in the microwave and close it 1. pick up the cup /(cup)/ 2. place the cup in the microwave /(cup, microwave)/ 3. close the microwave /(microwave)/ --- Example for another task --- Goal: Turn on the stove and put the pot on it 1. turn on the stove /(stove)/ 2. pick up the pot /(pot)/ 3. place the pot on the stove /(pot, stove)/ --- Example for another task --- Goal: Put both books on the bookshelf 1. pick up the red book /(red book)/ 2. place the red book on the bookshelf /(red book, bookshelf)/ 3. pick up the brown book /(brown book)/ 4. place the brown book on the bookshelf /(brown book, bookshelf)/ --- Example for another task --- Goal: pick the red book near the butter and the brown book on the plate and put them on the left bookshelf 1. pick up the red book near the butter /(red book)/ 2. place the red book near the butter on the left bookshelf /(red book, bookshelf)/ 3. pick up the brown book on the plate /(brown book)/ 4. place the brown book on the plate on the left bookshelf /(brown book, bookshelf)/ --- Example for another task --- Goal: pick up the yellow and white mug next to the cookie box and place it on the plate 1. pick up the yellow and white mug next to the cookie box /(yellow and white mug)/ 2. place the yellow and white mug next to the cookie box on the plate /(yellow and white mug, plate)/ --- Example for another task --- Goal: put the black bowl in the bottom drawer of the cabinet and close it 1. pick up the black bowl /(black bowl)/ 2. place the black bowl in the bottom drawer of the cabinet /(black bowl, cabinet)/ 3. close the bottom drawer of the cabinet /(cabinet)/ Instructions: - Generate **only** high-level commands. - Your output should be in the ***ABSOLUTELY SAME format*** as the example above. Even with unseen tasks, follow the same structure. ***WITHOUT ANY OTHER ANALYSIS and DESCRIPTION ***. - **After each command**, include a comment with the object names and locations in */()/*. This is necessary for the VLA model to understand which objects are involved in each command. - DO NOT include any descriptions of position and order in */()/* (e.g., "first pot", "back of the shelf", "bottom of sth", "upper of sth"), only color and shape are permitted (e.g ., "red bowl", "cylindrical box"). But you should maintain the details of the objects and locations as described in the task to subtask, such as "red bowl near the plate", "brown book on the cabinet", "left bookshelf", "black bowl next to the cookie box", etc. - **ONLY USE */()/* to EXPRESS *OBJECTS*.** Comments, explanations, and anything else that has nothing to do with expressing objects are not allowed. - When an object or location has a qualifying modifier, such as a cabinet's drawer, door of a microwave, or the handle of pot, what you are expected to display in the /()/ is actually the **largest specific items these expressions** refer to, which are cabinets, microwaves, and pots, not the parts or subordinate items on these items that belong to these items. Meanwhile, you should still maintain the detailed expression in the subtask as "the drawer of the cabinet", "the door of the microwave" (eg. pick up the bottle on the stove; pick up the bowl in the drawer). - **Allowed commands are strictly limited to:** - 'pick up [object]' - 'place [object] on [location]' - 'place [object] in [location]' - 'open [object/container/drawer/cabinet/etc.]' - 'close [object/container/drawer/cabinet/etc.]' - 'turn on [device]' - 'turn off [device]' - Use the commands above **only when necessary** to achieve the goal. Most tasks will primarily use 'pick up' and 'place'. - **Explicitly DO NOT include separate steps for:** - 'locate' (Assume VLA finds the object as part of executing the command) - 'move to' or 'move towards' (Assume the command includes necessary travel) - 'lift', 'lower', 'grasp', 'release', 'push', 'pull', 'rotate', 'adjust' (Assume highlevel commands handle these internally) - **Assume the VLA model handles all implicit actions:** - "pick up [object]" means: Find the object, navigate to it, grasp it securely, and lift it. - "place [object] in [location]" means: Transport the object to the location, position it correctly, and release the grasp. - "open/close [container]" means: Find the handle/seam, interact with it appropriately ( pull, slide, lift) to change the container's state. - "turn on/off [device]" means: Find the correct button/switch, interact with it to change the device's power state. - Use the descriptive names from the task description and **DO NOT make any distortions** in subtasks (e.g., if the task involves {inlist}, make sure the subtasks about them are exactly the same). - Generate the minimal sequence of these high-level commands required to fulfill the Goal. Ensure the sequence logically achieves the task (e.g., you might need to 'open' a drawer before 'placing something inside it, even if 'open' isn't explicitly stated in the goal). - Additional INFO:{additional_info} Task: {task_description} Output: """ VERIFIER PROMPT ### The Verifier prompt essentially depends on the input subtask main verb and differentiates each subtask into the following few situations. ### prefix = ( f"{title_prefix + ' - ' if title_prefix else ''}" f"Observe the inputs (two videos or two image-flow videos). " f"The subtask robot arm is currently working on: '{subtask}'. " ) if verb == "pick up": prompt = ( f"{prefix} Based *Only* on the provided media, has '{object_name}' or anything else been grasped and lifted off any surface by the end? " "Answer 'Yes' or 'No'." ) elif verb == "place": prompt = ( f"{prefix} Based *Only* on the provided media, has '{object_name}' or anything else been placed '{location_name}' and is the gripper away? " "Answer 'Yes' or 'No'." ) elif verb in ("turn on", "turn off", "open", "close"): target = raw_part or object_name action_text = { "turn on": "turned on (powered up)", "turn off": "turned off (powered down)", "open": "fully opened", "close": "fully closed", }[verb] prompt = ( f"{prefix} Based *Only* on the provided media, has '{target}' or anything else been { action_text} by the end? " "Answer 'Yes' or 'No'." ) else: prompt = ( f"{prefix} Based *Only* on the provided media, has the instructed action been completed successfully by the end? " "Answer 'Yes' or 'No'." ) GLM UNDERSTANDING (VISION) PROMPT ### "Query" here means the object or location aiming to be understood. ### system_prompt = rf""" You are an intelligent assistant specialized in analyzing images and extracting meaningful information. Your task is to identify a specific person or object that appears in all provided images and generate five of the most relevant keywords to describe this person or object. **Think in ten sentences.** You must follow this rule strictly. Guidelines: For the combined image: If the same person appears in all images: Focus on describing the person's gender, skin tone, and occupation. Avoid keywords related to clothing or environment. Example keywords might include: "female", "light-skinned", "doctor", etc. If the same object appears in all images: Focus on describing the object's physical characteristics. Example keywords might include: "round", "metallic", "small", etc. **IMPORTANT** The keywords are going to help another Model to find the same or almost like subjects or persons in the real-world image. Thus the keywords should be very specific and descriptive, not general or abstract, and can reflect the basic attributes of this task or thing. Making another VLM easily find the same or similar subjects or persons in the real-world image. For the current image: There is something suitable for the query"{query}", but the model can't find the bbox exactly. Your mission is to base on the current image and the combined image to describe the same thing in both. Output Format: Output the keywords in JSON format. Ensure the output contains only the keywords, without additional text or explanation. The JSON structure should be a list of strings. Example JSON Output: ["female", "light-skinned", "doctor", "middle-aged", "smiling"]. Your output should be in a format that the code below can easily extract the keywords: --match = re.search(r" ", output_text[0]) -- if match: -- str_list = json.loads(match.group(0)) -- print(str_list) Task: Analyze the provided images and generate five keywords that best describe the identified person or object based on the guidelines above. Output the keywords in the specified JSON format. input:{query} output: """ messages = [ { "role": "system", "content": [ { "type": "text", "text": system_prompt, } ] }, { "role": "user", "content": [ { "type":"text", "text":"Here is the combined image from the web.", }, { "type": "image", "image": com_image, ##combined images from internet }, ] }, { "role": "user", "content": [ { "type":"text", "text":"This is the current image from the camera.", }, { "type": "image", "image": cur_image, ##current main view }, ] } ] GLM UNDERSTANDING (TEXT) PROMPT # Build messages for GLM inference (memory-first replace) messages: list[dict] = [] # 1) System steer (role and objective) messages.append({ "role": "system", "content": [{ "type": "text", "text": ( "You normalize open-world object mentions to a closed training vocabulary. " "Return EXACTLY ONE label copied verbatim from the allowed list below, " "or output NONE if no label applies." ) }] }) # 2) Allowed vocabulary (verbatim list shown to the model) allowed_text = " ".join(f"- {lab}" for lab in known_list) messages.append({ "role": "user", "content": [{"type": "text", "text": "Allowed vocabulary: " + allowed_text}] }) # 3) The new object mentioned (query term) messages.append({ "role": "user", "content": [{"type": "text", "text": f"New object mention: {norm_prompt}"}] }) # 4) Decide available evidence has_com = (pil_com is not None) # composite reference image has_kw = bool(keywords) # keyword list has_boxes = (top_crop is not None) # highest-score crop from original image has_scores = bool(boxes_list) # detector had scores/boxes at all # 5) Case A: (no comimage, no keywords); include crop if available; else include raw image if (not has_com) and (not has_kw) and (has_boxes or (pil_image is not None)): if has_boxes: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Evidence crop (highest detector score)."}, {"type": "image", "image": top_crop}, ], }) elif pil_image is not None: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Context image."}, {"type": "image", "image": pil_image}, ], }) # 6) Case B: (no comimage, no keywords, no boxes/scores); optional raw image only if (not has_com) and (not has_kw) and (not has_boxes) and (not has_scores): if pil_image is not None: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Context image."}, {"type": "image", "image": pil_image}, ], }) # 7) Case C: (comimage + keywords + crop are all available); each as its own user turn if has_com and has_kw and has_boxes: messages.append({ "role": "user", "content": [ {"type": "text", "text": "Composite reference image from the web."}, {"type": "image", "image": pil_com}, ], }) messages.append({ "role": "user", "content": [ {"type": "text", "text": "Top-scoring evidence crop from the original image."}, {"type": "image", "image": top_crop}, ], }) messages.append({ "role": "user", "content": [{"type": "text", "text": "Image/scene keywords: " + ", ".join(map(str, keywords))}] }) # 8) Optional: brief external snippets (web/Wikipedia), one separate turn if web: qs = [norm_prompt] + ([k.strip() for k in keywords] if keywords else []) web_brief = fetch_snippets(qs, limit=4) # function enables searching online and with a " limit" to prevent error content. # if web_brief: messages.append({ "role": "user", "content": [{"type": "text", "text": "External brief (web/Wikipedia): " + web_brief}] }) # 9) Final instruction with strict stability constraints messages.append({ "role": "user", "content": [{ "type": "text", "text": ( "STRICT CONSTRAINTS: " "- Output MUST be exactly one label copied verbatim from the allowed vocabulary above, " "or the token NONE when no label applies. " "- DO NOT include any analysis, explanation, reasoning, or additional text. " "- Format your final decision ONLY as: " " LABEL_OR_NONE " "- LABEL_OR_NONE must be one of the allowed labels or NONE." ) }] })
2510.14904
MASKCAPTIONER: LEARNING TO JOINTLY SEGMENT AND CAPTION OBJECT TRAJECTORIES IN VIDEOS Gabriel Fiastre1∗ Antoine Yang2 Cordelia Schmid1 1Inria, École Normale Supérieure, CNRS, PSL Research University 2Google Deepmind ABSTRACT Dense Video Object Captioning (DVOC) is the task of jointly detecting, track- ing, and captioning object trajectories in a video, requiring the ability to un- derstand spatio-temporal details and describe them in natural language. Due to the complexity of the task and the high cost associated with manual an- notation, previous approaches resort to disjoint training strategies, potentially leading to suboptimal performance. To circumvent this issue, we propose to generate captions about spatio-temporally localized entities leveraging a state- of-the-art VLM. By extending the LVIS and LV-VIS datasets with our syn- thetic captions (LVISCap and LV-VISCap), we train MaskCaptioner, an end- to-end model capable of jointly detecting, segmenting, tracking and captioning object trajectories. Moreover, with pretraining on LVISCap and LV-VISCap, MaskCaptioner achieves state-of-the-art DVOC results on three existing bench- marks, VidSTG, VLN and BenSMOT. The datasets and code are available at https://www.gabriel.fiastre.fr/maskcaptioner/. 1 INTRODUCTION A fundamental aim of computer vision is to enable machines to understand videos with human- like acuity in perceiving and reasoning about the world. Recent advances have led to remarkable progress in both spatio-temporal localization (Ren et al., 2015; Redmon et al., 2016; Carion et al., 2020; He et al., 2017; Wojke et al., 2017; Zhang et al., 2022b) and vision-language understanding (Vinyals et al., 2015; Sun et al., 2019; Yu et al., 2019; Yang et al., 2021; Chen et al., 2020; Jia et al., 2021; Zellers et al., 2021; Bain et al., 2021; Lu et al., 2019; Kamath et al., 2021). However, building vision-language models that can simultaneously reason about spatially localized objects and temporal dynamics of a complex scene remains a significant challenge, motivated by many real-world applications including autonomous driving (Kim et al., 2019; Atakishiyev et al., 2024), human-computer interaction (Shridhar et al., 2020; Ahn et al., 2022), or video editing (Molad et al., 2023; Jeong and Ye, 2024). Dense Video Object Captioning (DVOC) (Zhou et al., 2025) serves as a key benchmark for this purpose, as it requires to jointly localize, track, and describe in natural language all visual entities in a video. Manual annotation for such a fine-grained task is particularly expensive, leading to a scarcity of datasets with densely annotated object-level video descriptions. To tackle DVOC, prior work resorted to alternative training approaches: Zhou et al. (2025) propose a disjoint training strategy, decomposing the problem into subtasks and training a model sequentially on datasets for each subtask. Choudhuri et al. (2024) leverage the pretraining of multiple specialized models to alleviate the need for object- level annotations. Both methods allowed to perform DVOC while circumventing the need for costly annotations, but the lack of end-to-end training with densely annotated object-level supervision may lead to suboptimal performance. We propose to address this limitation by generating synthetic object-level annotations, motivated by the recent success of LLM-generated supervision (Liu et al., 2023; Abdin et al., 2024) and the growing visual capacities of Vision Language Models (VLMs) (Alayrac et al., 2022; Li et al., 2022; 2023a; Team et al., 2023; 2024; Achiam et al., 2023; Grattafiori et al., 2024; Bai et al., 2023). To the best of our knowledge, our work is the first to generate localized, object-level captions for DVOC. To this end, we introduce a multi-modal prompting strategy leveraging a state-of-the-art VLM, and *Corresponding author: gabriel.fiastre@inria.fr 1 arXiv:2510.14904v1 [cs.CV] 16 Oct 2025 A person is holding a pencil sharpener in their hands. The black Derwent pencil sharpener is being held and rotated in a person's hand. A graphic pencil with a red end sits on a wooden surface next to a pencil sharpener. Figure 1: Examples of synthetic captions in our LV-VISCap dataset. extend two segmentation datasets, LVIS (Gupta et al., 2019) for images and LV-VIS (Wang et al., 2023) for videos, to be the first DVOC training sets with (mask, box, category, caption) annotations for all objects, dubbed LVISCap and LV-VISCap, see figure 1. Using our generated datasets, we extend the DVOC task, traditionally formulated as detection (Zhou et al., 2025), to segmentation and train MaskCaptioner, the first end-to-end model that can jointly produce (mask, caption) pairs for all object trajectories in a video. We show that (i) our generated datasets, LVISCap and LV-VISCap, largely benefit MaskCaptioner’s DVOC performance, (ii) our MaskCaptioner outperforms previous state-of-the-art models on the VidSTG, VLN and BenSMOT DVOC benchmarks and (iii) we can extend the DVOC task to segmentation. Overall, our contributions can be summarized as follows: 1. We introduce a VLM-based method to generate synthetic object captions for videos, and extend the LVIS and LV-VIS datasets to be the first unified DVOC training set with object captions, boxes, and segmentation masks: LVISCap and LV-VISCap. 2. Using our unified generated data, we train MaskCaptioner, the first end-to-end model to jointly detect, segment, track and caption objects in a video. 3. MaskCaptioner achieves state-of-the-art DVOC results on the three existing benchmarks : VidSTG, VLN and BenSMOT. 2 RELATED WORK Open-Vocabulary Video Instance Segmentation (OV-VIS). The OV-VIS task aims to segment, track, and classify objects from an open set of categories in videos (Guo et al., 2025; Wang et al., 2023), using datasets such as LV-VIS (Wang et al., 2023). State-of-the-art methods (Guo et al., 2025; Wang et al., 2023; Fang et al., 2025) commonly use query-based approaches that classify objects by matching visual features with CLIP embeddings (Radford et al., 2021). Methods like OVFormer (Fang et al., 2025) or BriVIS (Cheng et al., 2024) improve this approach by better aligning visual queries with the CLIP space. Unlike these methods focused on CLIP feature matching for classification, our work explores describing objects in natural language (Li et al., 2023a). Localized vision-language understanding. Going beyond pioneering vision-language tasks such as visual question answering (Antol et al., 2015) or image captioning (Chen et al., 2015), recent work has explored spatial understanding tasks that require localizing natural language queries in images. This includes referred expression segmentation (Kazemzadeh et al., 2014; Yu et al., 2018; Yang et al., 2022b), image grounding (Rohrbach et al., 2016; Plummer et al., 2015), reasoning segmentation (Lai et al., 2024; Wang and Ke, 2024), spatio-temporal video grounding (Zhang et al., 2020; Yang et al., 2022a) and grounded visual question answering (Zhu et al., 2016; Xiao et al., 2024; Lei et al., 2018; 2020). While these tasks typically require localizing one or a few entities, dense captioning (Johnson et al., 2016; Wu et al., 2024) aims to spatially localize and describe in natural language all salient regions in images. Our work addresses the more challenging task of predicting both object trajectories and descriptions for all objects in a video. Dense Video Object Captioning (DVOC). The DVOC task aims at jointly detecting, tracking, and describing the trajectory of all visual entities in a video. DVOC-DS (Zhou et al., 2025) tackles this task by generating frame-wise object box proposals (Zhou et al., 2019; Cai and Vasconcelos, 2018) that are tracked (Zhou et al., 2022) before feeding aggregated and cropped features to a generative image-to-text decoder (Wang et al., 2022). To cope with the lack of DVOC annotations, the model is trained disjointly on various subtasks: object detection using COCO (Lin et al., 2014), image 2 MaskCaptioner Training Bounding boxes Construct prompt Draw Synthetic captions ... ... ... ... 'The person is sitting in a deck chair inside of a workshop.' 'A light-colored wooden deck chair sits on a patterned floor, supporting a man wearing shorts.' """ Bounding Box location : (0.29,0.10,0.42,0.83),... Areas : 42%, ..., 63% Category: 'person' Other categories: {'chair','shoe','shorts'} Generate a one-sentence caption, focusing on the object of class 'person' highlighted by the bounding boxes. """ Man Chair Figure 2: Our MaskCaptioner data annotation pipeline: For each object, we extract its bounding boxes Bj from its annotated masks Mj, and draw them in the video. The video with drawn bounding boxes ˆxj, along with a prompt (ps, pu(j)) including additional information such as the label of the object to caption Cj, the labels and bounding boxes of other objects in the image, is fed to a state-of-the-art VLM (Gemini 2.0 Flash (Team et al., 2023)) to generate the object-level caption. ps denotes the static prompt with general instructions and pu(j) the dynamic prompt with annotation information. We use this pipeline to generate the LVISCap and LV-VISCap datasets, used to train MaskCaptioner. object-level captioning using Visual Genome (Krishna et al., 2017), video scene-level captioning using SMiT (Monfort et al., 2021) and video object tracking using AugCOCO (Lin et al., 2014). OW-VisCapTor (Choudhuri et al., 2024) extends the DVOC task to segmentation masks. Its archi- tecture relies on an object abstractor using a prompt encoder and transformer blocks, an inter-query contrastive loss to ensure object queries are diverse, and an object-to-text abstractor that connects these queries to a frozen LLM, generating rich, object-centric captions for each detected instance. While proposing to extend DVOC to segmentation, OW-VisCapTor is hindered by the absence of paired (mask, caption) annotations, hence segmentation and DVOC are tackled in isolation using sep- arate models. Closely related, SMOTer (Li et al., 2024) extends multi-object tracking to object-level captioning, video-level captioning and object-relation predictions, and introduce a hand-annotated dataset, BenSMOT. However, BenSMOT focuses on humans only, whereas the standard DVOC task considers all visual entities in the video. In this work, we automatically generate DVOC datasets, LVISCap and LV-VISCap, and train MaskCaptioner, a model that can end-to-end detect, segment, track and caption object trajectories in videos. Vision-language data generation. A promising approach for intricate vision-language tasks is to gen- erate visual annotations using Large Language Models (LLMs) or Vision Language Models (VLMs). LLaVA (Liu et al., 2023) leverages a LLM to generate conversations and detailed descriptions from paired image-text annotations (Chen et al., 2015). This approach has been followed to generate large- scale instruction tuning data for video understanding (Maaz et al., 2024; Li et al., 2023b). Recent research has focused on the generation of spatially grounded text-image data using LLMs/VLMs: Shikra (Chen et al., 2023) generates grounded QA pairs from Flickr30K Entities (Plummer et al., 2015), LLaVA-Grounding (Zhang et al., 2024) and GLaMM (Rasheed et al., 2024) generate grounded conversational data from datasets such as COCO (Lin et al., 2014). Recently, GROVE (Kazakos et al., 2025) extended GLaMM to generate grounded, temporally consistent video captions, extending dense captioning to the video domain. These methods predominantly operate at the scene level and do not produce localized, object-level video descriptions. In contrast, we introduce a multi-modal prompting strategy to leverage VLMs into generating fine-grained captions for individual object trajectories across time. Recently, Yuan et al. (2025) proposed a related VLM-based approach to generate object-level video captions designed for long sentences referring segmentation. Differently, we focus on the task of Dense Video Object Captioning where no text prompt is given to the model. 3 METHOD Dense Video Object Captioning (DVOC) (Zhou et al., 2025) is the task of jointly detecting, tracking and captioning objects trajectories in a video, i.e. producing a (bounding box or mask, caption) pair 3 for each object in a video at each timestamp they appear. Such densely annotated data is lacking for video, as there is currently no available training dataset including captions for all object trajectories in the videos. The spatio-temporal grounding dataset VidSTG (Zhang et al., 2020) has been repurposed to DVOC but only includes annotations for few objects per video and a limited number of frames per video. We address the lack of data by introducing a strategy for synthetic DVOC data generation, allowing unified training of a DVOC model on (trajectory, caption) pairs for each object, as presented in Section 3.1. With this strategy, we extend the LV-VIS dataset (Wang et al., 2023), which includes both boxes and masks in their (trajectory, label) annotations for all objects, to a variant, dubbed LV-VISCap, which additionally includes synthetically generated captions for each trajectory. To enable end-to-end training on this rich data using segmentation masks, we build an architecture based on a state-of-the-art Open-Vocabulary Video Instance Segmentation (OV-VIS) model OVFormer (Fang et al., 2025), as described in Section 3.2.1. As OVFormer is designed for classification, we extend it with a captioning head (Choudhuri et al., 2024), as explained in Section 3.2.2. Finally, we present in Section 3.3 the losses used to train our model. 3.1 DVOC DATA GENERATION We start from the LV-VIS dataset (Wang et al., 2023) which contains (segmentation masks, category) manual annotations for all objects and timestamps of the videos. To automatically collect DVOC data, one challenge is to generate accurate object-level captions for each trajectory. For this, we leverage a state-of-the-art VLM (Gemini 2.0 Flash (Team et al., 2023)) and feed it with videos where the object to caption is marked with drawn bounding boxes, as illustrated in Figure 2. Table 1: Impact of the prompting strategy on cap- tion quality. Scores are given by an expert human evaluator from 0 to 2 (incorrect, partially correct, or correct) on a subset from the LV-VIS validation set, and brought to 0-100 range. For the mask visual prompt experiments, we use our best prompt with either the object’s bounding boxes or center point coordinates as a localization cue in the text prompt. Visual prompt Prompting method Average rating bounding boxes single frame 26.8 + multiple frames 27.1 + detailed instructions 29.5 + category labels 80.7 + bbox coordinates 83.1 + bbox area 84.3 + few shot examples 85.1 mask boundaries center point coordinates 75.9 bbox coordinates 77.1 Visual Prompt. Formally, let x ∈ RN×H×W ×3 be a video clip of length N, with associated mask and category annotations for M objects in the video: (Mj, Catj), j ∈1, ..M. We first extract bounding boxes annotations from the ground- truth masks, Bj ∈RN×4, j ∈1, ..., M. We draw each object boxes on a separate copy of the video, and denote as a ∧b the operation of drawing box b on frame a. We obtain a visual prompt ˆxi j for each object trajectory: ˆxi j = xi ∧Bj for i ∈1, ..., N, j ∈1, ..., M Note that in practice, we subsample N to 4 uniformly sampled video frames as we found it produces representative enough visual con- tent for the captioning task. Text prompts. In detail, the prompt we feed the VLM is composed of the previously de- scribed visual prompt ˆxj, a system prompt ps and a user prompt pu(j). The system prompt is static for all objects/videos and gives gen- eral instructions (e.g. "Generate a caption about a queried object highlighted with bounding boxes."), rules (e.g. "Do not mention the bounding boxes in the caption."), format, and an example. The user prompt pu(j) however, is constructed for each annotation and enriched with informations about the specific query to help the model. The user prompt encodes textual bounding box coordinates, area, category name of the queried object, and category names from other objects in the image. Passing information through different channels (visual prompt, text prompt) helps the model focusing on the queried object and being accurate when describing the scene. Also, this complementary information can lead the model to reason about the objects (e.g. area being small for an object of category ’ele- phant’ implies the object is most likely part of the background). The different prompting cues have been ablated in Table 1. In particular, showing that including textual semantic and localization cues in the prompt helps the model to focus on the queried object and generate more accurate object-focused captions. We notice that using the segmentation masks as the visual prompt for the model results in less accurate object captions, which might result from a poorer alignment with the localization cues 4 T Video clip Instance tracking Boxes Masks Track ids ... Captions N Clip-level Object queries Video-level obj Clip-level OVFormer Scores Mask2former Box head Classifier Object head Temporal aggregation Tagg LLM decoder Figure 3: Our MaskCaptioner architecture jointly segments and captions objects in videos. For each clip of T frames, we obtain Nobj clip-level object queries through Mask2Former (Cheng et al., 2021), and yield associated score, mask and box predictions. At a video-level, we match the predicted object queries with the previously processed clips with the Hungarian bipartite matching algorithm, and perform instance tracking (Zhu et al., 2024). For each track, we sample Tagg clips uniformly across the video, and aggregate the tracked queries to obtain a single video query per object, which we can feed to the LLM captioning head (Li et al., 2023a), producing a single caption per trajectory. included in the text prompt. The full template details for the model prompt construction and a more detailed ablation are given in Appendix A.4.1 and A.2. The LVISCap and LV-VISCap datasets. For each object j in the video, we prompt Gemini 2.0 Flash (Team et al., 2023) with the visual and textual prompts (ˆxj , ps and pu(j)) to output a synthetic caption Capj. The full DVOC annotation is (Mj, Bj, Catj, Capj). We repeat this process for each object in each videos of LV-VIS, generating over 19.5k synthetic captions over 3.9k videos covering 1, 020 object classes, and with an average length of 13.9 words. We repeat this process with the LVIS dataset to support image pre-training, applying the same method as above described with each image considered a video of len N = 1, and obtain our DVOC training sets: LVISCap and LV-VISCap. 3.2 MASKCAPTIONER ARCHITECTURE To enable end-to-end training on the previously described DVOC data including segmentation masks, our architecture, illustrated in Figure 3, processes videos as Nclip clips of T consecutive frames each, and is composed with: (i) at the clip level, an instance segmentation and detection component, see Section 3.2.1 (ii) at the video-level, a tracking module and a captioning head, see Section 3.2.2. 3.2.1 INSTANCE SEGMENTATION AND DETECTION Background: OVFormer (Fang et al., 2025). Our clip-level instance segmentation component is based on OVFormer (Fang et al., 2025), which we shortly describe next. On a high level, OVFormer augments Mask2Former (Cheng et al., 2021) with a classification head to handle Open-Vocabulary Video Instance Segmentation. Mask2Former learns clip-level object queries with a transformer decoder which cross-attends to the features extracted with a visual backbone and are refined with a pixel decoder. Formally, each clip xi clip ∈RT ×3×H×W is composed of T consecutive frames for images of resolution (H,W), and processed independently by the OVFormer model which outputs clip-level object queries qi obj, associated mask predictions Mi, classification and objectness scores Si cls and Si obj: (qobj, Mi, Si cls, Si obj) = ΦOVFormer(xi) where qi obj ∈RNobj×D, Mi ∈RNobj×T ×H×W , Si cls ∈RNclip×Nobj×Ncls and Si obj ∈RNclip×Nobj. Detection head. We extend this segmentation module with detection by using a 4-layer MLP to generate boxes on top of the object queries qi obj: Bi = BoxHead(qi obj) ∈RNobj×T ×4. Confidence scores. Note that OVFormer (Fang et al., 2025) only computes class-aware query-wise confidence scores over the full video. However, for objects appearing only in a small subset of frames in the video this strategy could result in inaccurate scores. Moreover, for DVOC, we wish to avoid redundant predictions i.e. having two queries predicting a similar trajectory. Hence we additionally compute class-agnostic query-wise confidence scores for each clip Si cls∗, by taking the maximum classification score over all labels c ∈1, ..., Ncls for each query and clip: 5 Si cls∗= max c (Si cls(c)) ∈RNclip×Nobj. Finally we derive the per-clip score Si = q Si cls∗× Si obj which we use for filtering predictions below a threshold tthresh at inference-time for every time step. 3.2.2 INSTANCE TRACKING AND CAPTIONING Tracking module. To derive the output video-level trajectories from the clip-level predictions, we need to obtain a matching between the queries at time i and the queries at time i + 1. For this, we perform tracking between the clips using the top-K enhanced query-matching module from Zhu et al. (2024). For each clip, this module keeps a memory bank containing the queries from the Tmatch previous clips. Among these, it identifies the Kmatch most matched clips, and computes the optimal assignment using the Hungarian bipartite matching algorithm. Using the Kmatch most matched clips helps reducing error propagation compared to the OVFormer (Fang et al., 2025) tracking module, which maps queries from time i + 1 to time i directly. Notably, we can keep track of objects that disappear and re-appear in a video, whereas they are automatically lost using the OVFormer tracking module. This method is referred to as semi-online tracking as we represent objects at a clip-level and associate between the clips in an online fashion. This offers the advantages of being flexible (fully-online for clips of length T = 1) and to arbitrate between using multi-frame information and memory constraints for long videos. Captioning head. To caption tracked object trajectories, we adapt the captioning head from Choud- huri et al. (2024) based on BLIP-2 (Li et al., 2023a). The BLIP-2 decoder processes object queries one by one using masked-attention conditioned with the predicted masks, before projecting the resulting object query into the LLM space for caption prediction. However, for consistency and efficiency, we predict a single video-level caption per tracked object query, replacing clip-level prediction. Query aggregation. Let’s consider ΦQT BLIP2 the BLIP-2 query transformer, ΦLLM BLIP2 the LLM decoder, qi obj(j) ∈R1×D and Si obj(j) ∈R respectively the query and the detection score for object j from clip i. For each object, we aggregate the tracked queries over time after they are processed by the BLIP-2 query transformer, by sampling a set Iagg of Tagg clips uniformly across the video. We obtain a video query for each object j: q∗ obj(j) = P i∈Iagg Si(j) × ΦQT BLIP-2(qi obj(j), Mi j). We can compute the video captioning prediction for the tracked object: Cap(j) = ΦLLM BLIP2(q∗ obj(j)) for j ∈1, ...N. 3.3 MODEL TRAINING We train MaskCaptioner with a combination of clip-level and video-level losses. For each clip, we predict masks/boxes, classification scores and captions, and derive the clip-level training objective as the following combination of supervised losses: Lclip-level = Lseg + Ldet + Ls + Lcap (1) where Lseg and Ls are the VIS losses from Fang et al. (2025), i.e. Lseg = λdiceLdice + λceLce, with Ldice and Lce the dice and cross-entropy segmentation losses respectively, and Ls = λclsLcls+λobjLobj where Lcls and Lobj are the cross-entropy losses for classification and objectness. We add detection and captioning losses Ldet = λl1Ll1 + λgiouLgiou where Ll1 and Lgiou are detection losses from Yang et al. (2022a), and Lcap = λclip-lmLlm, with Llm the cross-entropy language modeling loss (Zhou et al., 2025). When including the temporal aggregation module for captioning, we train the captioning head at the video level, i.e. we predict a caption per object for the full video after the tracking is performed and each object-query has been augmented across time. Lvideo-level = λvid-lmLlm (2) MaskCaptioner can be trained in a completely end-to-end manner. However, in practice, we train the model in two-stages for most of the experiments to alleviate memory constraints: we first train the segmentation/detection and classification model, then freeze it and tune the captioning head. The captioning head is trained either at the clip-level, or at the video-level when enabling the temporal aggregation module (i.e. λclip-lm = 0 or λvid-lm = 0). For each loss when they are computed we set their respective weights to λdice, λce, λl1 = 5, λgiou, λcls, λobj = 2, and (λclip-lm = 1 or λvid-lm = 1). 6 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING Datasets. LVIS (Gupta et al., 2019) and LV-VIS(Wang et al., 2023) are large-vocabulary instance segmentation datasets, respectively for image and video. LVISCap and LV-VISCap denote our extensions of LVIS and LV-VIS (see Section 3.1), with respectively 1, 2M/244k synthetic image object captions for the training/validation of LVISCap and 16k/3.7k synthetic video object captions for LV-VISCap (average of 5.4 objects per video). Note that in the absence of annotations on the test sets of LVIS and LV-VIS, we only extend the training and validation sets with captions, and use the validation set for evaluation. Benchmarks. VidSTG(Zhang et al., 2020) is a spatio-temporal video grounding dataset containing text descriptions serving as queries, which Zhou et al. (2025) propose to use for DVOC evaluation. The repurposed training and validation sets count 5.4k/602 videos for 15.1k/1.6k object trajectories with captions respectively. Video Localized Narratives (VLN) is similarly repurposed (Zhou et al., 2025). For each of the 5.1k training and 2.4k validation videos, the dataset contains 3 sparsely annotated frames with non exhaustive captions. BenSMOT contains bounding box trajectories and associated captions focusing exclusively on humans in videos, with an average of 2.2 instances per video. It counts 2.2k videos for training and 1k for evaluation. More details about the datasets are given in Appendix A.4.2. Evaluation Metrics. Following prior work, we evaluate DVOC using the CHOTA metric introduced by Zhou et al. (2025), which extends the widely used multi-object tracking HOTA metric (Luiten et al., 2021). CHOTA decomposes the DVOC task into three components: detection accuracy (DetA)(Luiten et al., 2021), association accuracy (AssA)(Luiten et al., 2021), and captioning accuracy (CapA)(Zhou et al., 2025). These components reflect the model’s ability to (i) correctly localize objects, (ii) maintain their identity across frames, and (iii) generate accurate natural language descriptions. CHOTA is defined as the geometric mean of the three components: CHOTA = 3√DetA · AssA · CapA. Matching between predicted and ground-truth trajectories is performed using Intersection-over-Union (IoU) thresholds, similarly to standard tracking evaluations (Milan et al., 2016). To extend the metric to segmentation masks, we simply replace box-based IoU with mask-based IoU when computing CHOTA with segmentation masks. Implementation details. Following OVFormer (Fang et al., 2025) we used ResNet50 (He et al., 2016) and SwinBase (Liu et al., 2021) visual backbones. Our Mask2former (Cheng et al., 2022) transformer decoder has 11 layers, and our captioning head is based on the BLIP-2 (Li et al., 2023a) decoder with OPT-2.7B LLM (Zhang et al., 2022a). For LV-VIS experiments we tune the model end-to-end with clip-level supervision only. For other experiments, we first train the segmentation/detection model, then freeze it and tune the captioning head. For VidSTG, VLN and BenSMOT experiments we use video-level tuning for captioning with temporal aggregation. For the largest dataset (COCO + LVIS) the optimization takes 2 days on 4 H100 GPUs. More implementation details and hyper-parameters for different experiments are given in Appendix A.4.3. 4.2 RESULTS 4.2.1 BENEFITS OF TRAINING ON LVISCAP AND LV-VISCAP We first study the impact of integrating our generated datasets for DVOC training in Table 2. We train MaskCaptioner on our synthetic video set, LV-VISCap, and progressively add LVISCap image pretraining. Results are reported on the LV-VISCap validation set. First, we observe that MaskCap- tioner achieves strong DVOC results with training only on LVISCap or LV-VISCap, demonstrating the effectiveness of our architecture. Importantly, combining both LVISCap and LV-VISCap leads to best results, showing the benefit of both our generated datasets. Moreover, we observe that our MaskCaptioner is robust to the choice of the visual backbone. Note that the AssA score depends on the detections, hence a worse DetA score with lower recall but higher precision can make the tracking easier: this explains the higher AssA score from the model without LV-VIScap tuning. In Fig. 4, we show that the CapA performance is logarithmically correlated with the quantity of training captions, suggesting that generating more data with our approach might bring further improvements. 7 Table 2: Impact of training with LVISCap and LV-VISCap and of the visual backbone on LV-VISCap DVOC. Backbone LVIScap LV-VIScap CapA DetA AssA CHOTA SwinB - ✓ 37.9 48.1 89.5 54.7 ✓ - 30.0 34.3 93.2 45.8 ✓ ✓ 43.6 54.3 89.1 59.5 ResNet50 ✓ ✓ 39.0 51.1 88.5 56.1 4.2.2 COMPARISON WITH THE STATE OF THE ART 0 1 2 5 10 30 100 Percentage of LVIScap captions used for training (log scale) 36 38 40 42 44 CapA CapA vs. data percentage (log scale) ±1 around log-fit CapA Log-linear fit Figure 4: Impact of the generated data scale on the CapA metric. We train MaskCap- tioner on a varying percentage of LVISCap captions and finetune on LV-VISCap. We compare MaskCaptioner to state-of-the-art DVOC methods following the standard evaluation protocol on three existing benchmarks : VidSTG in Table 3, VLN in Table 4 and BenSMOT in Table 5. DVOC-DS (Zhou et al., 2025) reports results with- out pretraining, and with their disjoint training train- ing strategy, while OW-VISCaptor (Choudhuri et al., 2024) leverages Mask2former pretrained on COCO for instance segmentation. In Table 3, we include results with the same pretraining data as these meth- ods and show the impact of using our data instead. Pretraining MaskCaptioner on COCO yields better detection and tracking compared to OW-VISCaptor but comparable captioning performance due to the similar captioning head design. Including our data leads to an improvement on all metrics and especially captioning with a 6.7 CapA increase (15% relative improvement). When pretraining on the disjoint DVOC-DS set, we observe a substantial gain in the captioning metric due to the model design. Moreover, we show that using our pretraining sets results in a further performance improvement, while additionally allowing a unified, much faster training (2032 GPU hours (Zhou et al., 2025) vs 208 for our approach). Eventually, our proposed approach can output segmentation masks unlike other methods. Table 3: Comparison with the state of the art on VidSTG DVOC validation set. All models are finetuned on VidSTG. "temp. agg." refers to including the query temporal aggregation module. Method Pretraining set CapA DetA AssA CHOTA OW-VISCaptor (Choudhuri et al., 2024) COCO 43.9 60.1 54.0 53.0 MaskCaptioner (ours) 44.3 65.1 70.2 58.7 DVOC-DS (Zhou et al., 2025) COCO + VG + SMIT + AugCOCO 39.7 65.8 70.4 56.9 MaskCaptioner (ours) 50.1 65.0 69.2 60.9 MaskCaptioner (ours) COCO + LVIScap + LV-VIScap 51.0 66.8 71.0 62.3 + temp agg COCO + LVIScap + LV-VIScap 52.7 66.8 71.0 63.0 Table 4: Comparison with state of the art on the VLN DVOC validation set. Mask loss refers to using the segmentation masks in the detection loss. All models are finetuned on VLN. Method Mask loss CapA DetA AssA CHOTA DVOC-DS Zhou et al. (2025) - 17.7 44.3 89.5 41.3 MaskCaptioner (ours) - 21.4 48.7 89.7 45.4 MaskCaptioner (ours) ✓ 22.9 50.1 92.7 47.4 + temp agg ✓ 23.4 50.1 92.7 47.7 Table 5: Comparison on the BenSMOT validation set. CapA, and thus CHOTA are not reported on this dataset. All models are finetuned on BenSMOT. Method DetA AssA CIDEr SMOTer (Li et al., 2024) 80.8 73.7 8.7 DVOC-DS Zhou et al. (2025) 90.8 89.6 25.4 MaskCaptioner (ours) 91.6 87.5 39.9 + temp agg 91.6 87.5 42.6 We further evaluate MaskCaptioner on VLN in Table 4 and BensMOT in Table 5. On both benchmarks, our approach improves the detection while the tracking remains competitive. Most important, the captioning metrics improve by a large margin (+5.2 CapA on VLN, +14.7 CIDEr on BenSMOT 8 Table 6: Automatic vs manual annotations for evaluation on a subset from LV-VIScap validation with 50 videos and 233 objects tra- jectories ; "automatic" and "manual" annotations stands for synthetic or human annotated captions. All models are trained on LVISCap and tuned on our LV-VISCap training set. Annotation type LVIScap captions CapA DetA AssA CHOTA automatic - 33.0 48.4 89.6 52.3 ✓ 40.5 50.0 91.0 56.7 manual - 22.5 48.4 89.6 46.0 ✓ 33.2 50.0 91.0 53.1 Table 7: Impact of the in- ference clip on LV-VIScap. All models are trained on LVIScap then LV-VIScap training set. Clip length mAP 3 26.0 4 26.1 5 26.6 6 26.3 7 26.8 compared to state of the art). Additionally, our method is able to jointly segment objects, which further improves the performance as shown in Table 4. Across all benchmarks, we show that including the temporal aggregation module further improves captioning performance by effectively merging information from multiple video clips, enriching the description of temporally extended actions. We notice that this improvement is only marginal on VLN, which we attribute to the short video lengths in that dataset. Note that we maintain consistent detection models when enabling the temporal aggregation module, keeping the DetA and AssA scores unchanged. Other results employ best-clip captioning. 4.2.3 ADDITIONAL ABLATIONS Table 8: Impact of temporal aggregation on Vid- STG validation (with finetuning). All methods are pretrained on COCO + LVIScap + LV-VIScap. Multi- clip results are obtained with weighted mean tempo- ral aggregation, except for † which uses arithmetic mean. num clips clip selection CapA CHOTA 1 best score 51.0 62.3 1 middle frame 46.9 60.6 4 uniform† 49.1 61.5 4 uniform 51.6 62.6 8 uniform 52.7 63.0 16 uniform 53.8 63.4 32 uniform 55.4 64.0 Annotation bias. To evaluate the bias in- troduced by evaluating on LV-VISCap syn- thetic captions (see Table 2), we annotate a representative subset of our LV-VISCap datasets by hand and compare the impact of our LVISCap captions when evaluating MaskCaptioner on automatic vs manual data. Our subset comports 50 videos from the LV- VISCap validation set with 233 object tra- jectories annotated by hand. The results are presented in Table 6. We observe that both the evaluation on automatic and manual data show a comparable improvement on the CapA and CHOTA metric when using our LVISCap captions for the training. This re- sult confirms the importance of our synthetic captions for DVOC performance, and further shows that the bias introduced by evaluating on synthetic LV-VISCap captions is marginal. Clip length. We show in Table 7 the impact of the clip length used for the MaskCaptioner inference. A higher clip length leads to temporally richer queries, improving detection and tracking. Note that in our implementation, we follow OVFormer (Fang et al., 2025) and use a clip length of T = 5. Temporal aggregation. We show the impact of temporal aggregation on DVOC performance in Table 8. Increasing the number of clips for aggregation consistently improves the captioning scores at the expense of higher computational cost. Weighting the clips for aggregation using the detection scores yields a clear improvement over the arithmetic mean. We also observe that performing captioning based on the single clip with best score performs relatively well, which highlights the limits of the complexity of the actions observable in the current benchmarks, e.g. VidSTG.(Zhang et al., 2020). Additional results. We show additional results about the prompt ablation and the impact of the tracking module in Appendix A.2. We add MaskCaptioner qualitative examples in Appendix A.1, and discuss the failure cases and limitations of our method in Appendix A.3, as well as give further implementation details in Appendix A.4. 9 5 CONCLUSION We propose an approach to generate synthetic object-level captions using a state-of-the-art VLM and extend the LVIS and LV-VIS datasets with synthetic captions. We use the resulting LVISCap and LV-VISCap datasets to train MaskCaptioner, a DVOC model that can simultaneously detect, segment, track, and caption objects throughout a video. With finetuning, MaskCaptioner achieves state-of-the-art performance on the VidSTG, VLN and BenSMOT benchmarks, while extending the DVOC task to segmentation masks. ACKNOWLEDGEMENTS This work was granted access to the HPC resources of IDRIS under the allocation AD011014323R2 made by GENCI. It was funded in part by the French government under management of Agence Nationale de la Recherche as part of the “France 2030" program, reference ANR-23-IACL-0008 (PR[AI]RIE-PSAI project), the ANR project VideoPredict ANR-21-FAI1-0002- 01, and Paris Île-de- France Région in the frame of the DIM AI4IDF. Cordelia Schmid would like to acknowledge the support by the Körber European Science Prize. REFERENCES Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 Technical Report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774, 2023. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In NeurIPS, pages 23716–23736, 2022. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, pages 2425–2433, 2015. Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, and Randy Goebel. Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions. IEEE Access, 2024. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen Technical Report. arXiv preprint arXiv:2309.16609, 2023. Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In ICCV, 2021. Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In CVPR, pages 6154–6162, 2018. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-End Object Detection with Transformers. In ECCV, pages 213–229. Springer, 2020. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal LLM’s referential dialogue magic. arXiv preprint arXiv:2306.15195, 2023. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO Captions: Data Collection and Evaluation Server. arXiv preprint arXiv:1504.00325, 2015. 10 Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: UNiversal Image-TExt Representation learning. In ECCV, pages 104–120. Springer, 2020. Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexander Kirillov, Rohit Girdhar, and Alexander G. Schwing. Mask2Former for Video Instance Segmentation, 2021. URL https://arxiv.org/ abs/2112.10764. Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked- Attention Mask Transformer for Universal Image Segmentation. In CVPR, pages 1290–1299, June 2022. Zesen Cheng, Kehan Li, Hao Li, Peng Jin, Chang Liu, Xiawu Zheng, Rongrong Ji, and Jie Chen. Instance brownian bridge as texts for open-vocabulary video instance segmentation. arXiv preprint arXiv:2401.09732, 2024. Anwesa Choudhuri, Girish Chowdhary, and Alex Schwing. OW-VISCapTor: Abstractors for Open- World Video Instance Segmentation and Captioning. In NeurIPS, 2024. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009. doi: 10.1109/CVPR.2009.5206848. Hao Fang, Peng Wu, Yawei Li, Xinxin Zhang, and Xiankai Lu. Unified Embedding Alignment for Open-Vocabulary Video Instance Segmentation. In ECCV, pages 225–241. Springer, 2025. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. Pinxue Guo, Hao Huang, Peiyang He, Xuefeng Liu, Tianjun Xiao, and Wenqiang Zhang. OpenVIS: Open-vocabulary video instance segmentation. In AAAI, volume 39, pages 3275–3283, 2025. Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A Dataset for Large Vocabulary Instance Segmentation. In CVPR, pages 5356–5364, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, pages 2961–2969, 2017. Hyeonho Jeong and Jong Chul Ye. Ground-a-video: Zero-shot grounded video editing using text-to- image diffusion models. In ICLR, 2024. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, pages 4904–4916. PMLR, 2021. Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In CVPR, pages 4565–4574, 2016. Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. MDETR - Modulated Detection for End-to-End Multi-Modal Understanding. In ICCV, 2021. Evangelos Kazakos, Cordelia Schmid, and Josef Sivic. Large-scale Pre-training for Grounded Video Caption Generation. In ICCV, 2025. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP, pages 787–798, 2014. Jinkyu Kim, Teruhisa Misu, Yi-Ting Chen, Ashish Tawari, and John Canny. Grounding human-to- vehicle advice for self-driving vehicles. In CVPR, pages 10591–10599, 2019. 11 Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. In IJCV, volume 123, pages 32–73. Springer, 2017. Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. LISA: Reasoning segmentation via Large Language Model. In CVPR, pages 9579–9589, 2024. Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tvqa: Localized, compositional Video Question Answering. In EMNLP, 2018. Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. Tvqa+: Spatio-temporal grounding for Video Question Answering. In ACL, 2020. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pre- training for unified vision-language understanding and generation. In ICML, pages 12888–12900. PMLR, 2022. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping Language-Image Pre- training with Frozen Image Encoders and Large Language Models. In ICML, pages 19730–19742. PMLR, 2023a. KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. VideoChat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023b. Yunhao Li, Qin Li, Hao Wang, Xue Ma, Jiali Yao, Shaohua Dong, Heng Fan, and Libo Zhang. Beyond MOT: Semantic Multi-Object Tracking. In ECCV, 2024. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In ECCV, pages 740–755. Springer, 2014. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, pages 34892–34916, 2023. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In ICCV, pages 10012–10022, 2021. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining Task-Agnostic Visiolin- guistic Representations for Vision-and-Language Tasks. In NeurIPS, 2019. Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixé, and Bastian Leibe. HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking. IJCV, 129: 548–578, 2021. Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-ChatGPT: Towards detailed video understanding via large vision and language models. In ACL, 2024. Anton Milan, Laura Leal-Taixé, Ian Reid, Stefan Roth, and Konrad Schindler. MOT16: A Benchmark for Multi-Object Tracking. arXiv preprint arXiv:1603.00831, 2016. Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors. arXiv preprint arXiv:2302.01329, 2023. Mathew Monfort, SouYoung Jin, Alexander Liu, David Harwath, Rogerio Feris, James Glass, and Aude Oliva. Spoken moments: Learning joint audio-visual representations from video descriptions. In CVPR, 2021. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, pages 2641–2649, 2015. 12 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748–8763. PmLR, 2021. Hanoona Rasheed, Muhammad Maaz, Sahal Shaji, Abdelrahman Shaker, Salman Khan, Hisham Cholakkal, Rao M Anwer, Eric Xing, Ming-Hsuan Yang, and Fahad S Khan. Glamm: Pixel grounding large multimodal model. In CVPR, pages 13009–13018, 2024. Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In CVPR, June 2016. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real- Time Object Detection with Region Proposal Networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, NeurIPS, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper_files/paper/2015/ file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf. Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. Grounding of textual phrases in images by reconstruction. In ECCV, pages 817–834. Springer, 2016. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In CVPR, pages 10740–10749, 2020. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoBERT: A Joint Model for Video and Language Representation Learning. In ICCV, pages 7464–7473, 2019. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023. Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and Tell: A Neural Image Caption Generator. In CVPR, pages 3156–3164, 2015. Haochen Wang, Cilin Yan, Shuai Wang, Xiaolong Jiang, Xu Tang, Yao Hu, Weidi Xie, and Efstratios Gavves. Towards Open-Vocabulary Video Instance Segmentation. In ICCV, pages 4057–4066, October 2023. Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. GIT: A generative image-to-text transformer for vision and language. In TMLR, 2022. Junchi Wang and Lei Ke. LLM-Seg: Bridging image segmentation and large language model reasoning. In CVPR, pages 1765–1774, 2024. Nicolai Wojke, Alex Bewley, and Dietrich Paulus. Simple Online and Realtime Tracking with a Deep Association Metric. In ICIP, pages 3645–3649. IEEE, 2017. Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. In ECCV, pages 207–224. Springer, 2024. Junbin Xiao, Angela Yao, Yicong Li, and Tat-Seng Chua. Can i trust your answer? Visually grounded Video Question Answering. In CVPR, pages 13204–13214, 2024. Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just Ask: Learning to answer questions from millions of narrated videos. In ICCV, pages 1686–1697, 2021. Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. TubeDETR: Spatio- Temporal Video Grounding With Transformers. In CVPR, pages 16442–16453, June 2022a. 13 Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, and Philip HS Torr. Lavt: Language-aware vision transformer for referring image segmentation. In CVPR, pages 18155– 18165, 2022b. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. MAttNet: Modular attention network for Referring Expression Comprehension. In CVPR, pages 1307–1315, 2018. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for Visual Question Answering. In CVPR, pages 6281–6290, 2019. Haobo Yuan, Xiangtai Li, Tao Zhang, Zilong Huang, Shilin Xu, Shunping Ji, Yunhai Tong, Lu Qi, Jiashi Feng, and Ming-Hsuan Yang. Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos. arXiv pre-print, 2025. Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. MERLOT: Multimodal Neural Script Knowledge Models. In NeurIPS, 2021. Hao Zhang, Hongyang Li, Feng Li, Tianhe Ren, Xueyan Zou, Shilong Liu, Shijia Huang, Jianfeng Gao, Leizhang, Chunyuan Li, et al. LLaVa-Grounding: Grounded visual chat with large multimodal models. In ECCV, pages 19–35. Springer, 2024. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open Pre-trained Transformer Language Models. arXiv preprint arXiv:2205.01068, 2022a. Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Fucheng Weng, Zehuan Yuan, Ping Luo, Wenyu Liu, and Xinggang Wang. ByteTrack: Multi-Object Tracking by Associating Every Detection Box. In ECCV, pages 1–21. Springer, 2022b. Zhu Zhang, Zhou Zhao, Yang Zhao, Qi Wang, Huasheng Liu, and Lianli Gao. Where does it exist: Spatio-temporal video grounding for multi-form sentences. In CVPR, pages 10668–10677, 2020. Xingyi Zhou, Dequan Wang, and Philipp Krähenbühl. Objects as points. arXiv preprint arXiv:1904.07850, 2019. Xingyi Zhou, Tianwei Yin, Vladlen Koltun, and Philipp Krähenbühl. Global Tracking Transformers. In CVPR, pages 8771–8780, 2022. Xingyi Zhou, Anurag Arnab, Chen Sun, and Cordelia Schmid. Dense Video Object Captioning from Disjoint Supervision. In ICLR, 2025. Wenqi Zhu, Jiale Cao, Jin Xie, Shuangming Yang, and Yanwei Pang. CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation. IEEE Transactions on Circuits and Systems for Video Technology, 2024. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7W: Grounded Question Answering in images. In CVPR, pages 4995–5004, 2016. 14 A APPENDIX In this appendix, we show qualitative results from our method in Section A.1, present additional ablations in Section A.2, discuss failure cases and limitations in Section A.3, and share more details about the datasets, the method, and the implementation in Section A.4. A.1 QUALITATIVE RESULTS We show qualitative DVOC results from MaskCaptioner on the LV-VIS (Wang et al., 2023) dataset with (mask, caption) pairs predictions, and on the VidSTG dataset (Zhang et al., 2020) with (box, caption) pairs in Figures 5 and 6 respectively. These examples show that MaskCaptioner has learned to predict captions that focus on the localized objects while integrating high-level scene understanding. On LV-VIS (Fig. 5), MaskCaptioner is able to produce descriptive captions for each objects including related context, even in the case of a scene including a high number of objects. We note that, when tuned on VidSTG (see Fig. 6), MaskCaptioner produces less informative and less descriptive captions. This is due to the VidSTG annotation captions being designed for grounding rather than for captioning or DVOC, and thus being only little descriptive or informative, and overlooking to the global context. In contrast, when trained on LVISCap and LV-VISCap, MaskCaptioner visually generates much richer and accurate descriptions, further highlighting the value of our synthetic captions. Overall, MaskCaptioner effectively learns to jointly segment, detect, track and caption object trajecto- ries. A person is wearing a blue smartwatch with a heart rate monitor on their wrist. The light blue smartwatch displays the time on its face as it rests on a persons wrist. The two dollar bill is being held in the hand of a person. A person is featured on the two-dollar bill. A person is holding a small dollar bill. The person is using a handsaw to cut a piece of wood. The jean is being worn by a person working on a piece of wood with a handsaw. The handsaw is being used by a person to cut a piece of wood. A person is seen adjusting the focus of a microscope. A ring is worn on the finger of a person who is examining a microscope. The microscope is being adjusted with a person's hand. The glass bowl is being filled with an octopus by a person's hand. The octopus is being placed inside a glass bowl by a person. A person wearing an apron is holding an octopus in a glass bowl. A white towel is placed on the wooden table near a knife. The wooden table is supporting a glass bowl filled with an octopus being handled by a person. Figure 5: Qualitative examples, obtained with MaskCaptioner on the LV-VIS dataset. 15 There is a dog away a bicycle on the ground. A child in white is above a pink bicycle. There is a pink bicycle beneath a kid on the ground. An adult in black clothes holds a baby. There is a camera in front of an adult. A baby leans on an adult in the bathroom. There is a horse towards an adult on the grass. An adult rides the horse on the grass. There is a black car behind a child in red. A child in red is in front of a black car. Figure 6: Qualitative examples, obtained with MaskCaptioner on the VidSTG dataset. Table 9: Impact of the prompting strategy on caption quality. Scores are given by a human evaluator from 0 to 2 (incorrect, partially correct, or correct) on a subset from the LV-VIS validation set, and brought to 0-100 range. For the mask visual prompt experiments, we use our best prompt with either the object’s bounding boxes or center point coordinates as a localization cue in the text prompt. Visual prompt Prompting method Average rating Rating percentage 0 1 2 bounding boxes single frame 26.8 66.3 13.9 19.9 + multiple frames 27.1 68.7 8.4 22.9 + detailed instructions 29.5 65.0 10.8 24.10 + category labels 80.7 10.8 16.9 72.3 + bounding box coordinates 83.1 9.6 14.5 75.9 + bounding box area 84.3 7.8 15.7 76.5 + few shot examples 85.1 9.1 11.5 79.4 mask boundaries center point coordinates 75.9 17.5 13.2 69.3 bounding box coordinates 77.1 15.7 14.5 69.9 16 A.2 ADDITIONAL ABLATIONS Prompting strategy. In Table 9, we show the distribution of ratings given by the human annotator depending on the prompting strategy. Using the box visual prompt yields a better focus on the queried object and more correct object captions. Importantly, giving the category labels in the prompt helps the model to generate more accurate object captions. Overall, the rate of correct captions with the best prompt indicates good quality for our synthetic object captions. Tracking module. In Table 10, we show that the top-k enhanced tracking (Zhu et al., 2024) is important for tracking objects effectively on the challenging VidSTG dataset (Zhang et al., 2020), as seen in the AssA, CapA and CHOTA scores. We attribute this difference to the numerous objects that disappear for a significant number of frames in the long videos of VidSTG. The top-K approach uses a memory bank of tracked queries that helps keeping track of these entities, while they are lost using the i to i + 1 tracking from OVFormer (Fang et al., 2025). Table 10: Impact of the tracking module on VidSTG DVOC. Tracking method CapA DetA AssA CHOTA OVFormer module (Fang et al., 2025) 51.9 67.0 58.2 58.7 Top-K enhanced module (Zhu et al., 2024) 52.7 66.8 71.0 63.0 A.3 FAILURE CASES AND LIMITATIONS A.3.1 FAILURE CASES (ii) Inconsistent object categories (ii) Detection/segmentation error The knife is being used to cut a steak on a wooden cutting board. The steak is being cut with a knife on a wooden cutting board. A person is holding a steak with tongs on a wooden cutting board. The wooden chopping board holds a steak being cut with a knife and a pair of tongs. A person is holding a roll of tape in front of them. The yellow robe is being worn by a person who is holding a pink spinner. The towel rack is mounted on a wall in a bathroom. A white bath towel hangs on the wall. The earring is being help up by a person in front of a mirror. The bracelet is worn on the wrist of a person who is holding a pink sphygmomanometer. The gray heron is standing in the water with a fish in its beak. The knife is being held by a person next to a steak on a wooden cutting board. (i) Recognition error Figure 7: Some DVOC failure cases of MaskCaptioner observed on the LV-VIS dataset. We observe 3 main types of failure cases for our approach and illustrate them in Figure 7: (i) Recognition error: in the case of ambiguous context, blurred instance or rare categories, MaskCap- tioner might fail to recognize the object it is describing, sometimes leading to a wrong denomination 17 (e.g. here, a rare "pair of tongs" is incorrectly denominated as "knife"). (ii) Inconsistent captions : in similar situations, the captions produced by MaskCaptioner can be inconsistent when referring to the same object in different captions. (iii) Detection/segmentation error : In case of complex movement, appearance change or occlusion, MaskCaptioner sometimes fails to detect, segment, or track objects, leading to missing captions (e.g. here, the fish is not detected in the beak of the heron, and thus has no associated caption). A.3.2 LIMITATIONS Limitations. While our approach outperforms the state-of-the-art in dense video object captioning, there is still room for improving localization and captioning. Localization sometimes fails, in particular for small objects. Furthermore, the automatically generated captions are, in some cases, too generic, and can mix up two objects of the same class in the video. Future work could investigate different automatic captioning techniques for DVOC, for example based on an approach such as Ref-SAV(Yuan et al., 2025), which generates captions in multiple steps to separate appearance from motion description. Eventually, objects in the videos often perform a single or few actions, and we believe that it is important for future works to build benchmarks with more complex object interactions, for instance with multiple action segments. A.4 ADDITIONAL DETAILS A.4.1 PROMPTING STRATEGY DETAILS The full prompt template used to generate our LVISCap and LV-VISCap datasets is illustrated in Figure 8. For a video x with N objects, we prompt the VLM N times, and for each object j the prompt is composed in three parts: (i) the static system prompt ps gives general instructions for object-level caption generation, practical rules, prompting format and an example, (ii) the user prompt pu(j) depends on the example and contains textual annotation information to help the model describe objects and interactions accurately. These information notably contain target object positions, areas, category, and the categories of other objects in the scene, (iii) the visual prompt ˆxj consists of 4 sampled frames with drawn bounding boxes for object j. A.4.2 DATASET DETAILS LVIS (Gupta et al., 2019) is a large-vocabulary image instance segmentation dataset with a long-tail distribution of 1203 annotated categories, for a total of over 2.2 million annotations in 164k natural images. The dataset is split in a training set with 100k images and 1.2M annotations, a validation set with 19k images and 244k annotations, and two test sets with 19k images each. LV-VIS (Wang et al., 2023) is a recent large-vocabulary video instance segmentation (VIS) benchmark. It comprises 4, 828 videos with over 26k video segmentation masks from 1,196 object categories, with an average of over 5.4 objects per video. The data is split into a training set of 3, 076 videos and 16k video-level annotations, a validation set of 837 videos and 3.7k annotations, and a test set with 908 videos. LVISCap and LV-VISCap denote our extensions of LVIS and LV-VIS (see Section 3.1). LVISCap extends LVIS with a total of 1, 488, 354 synthetic captions, including 1, 244, 271 training annotations and 244, 083 validation annotations. LV-VISCap includes a total of 19, 717 synthetic captions for 16, 017 training and 3, 700 validation annotations. Note that in the absence of annotations on the test sets of LVIS and LV-VIS, we only extend the training and validation sets with captions, and use the validation set for evaluation. VidSTG(Zhang et al., 2020) is a spatio-temporal video grounding dataset, containing 6, 924 videos with 44, 808 exhaustive trajectories annotations over 80 categories, as well as object sentence descrip- tions (for some objects and some timestamps only), which serve as queries for grounding. Zhou et al. (2025) repurposed the dataset for DVOC by using queries as captions, and by excluding annotations without captions during evaluation. Following Zhou et al. (2025), we sample 200 frames uniformly across each video for both training and evaluation. Overall, the repurposed VidSTG training set counts 28, 275 object trajectories, with 15, 182 object captions. The validation set, used for DVOC evaluation, includes 602 videos with 1, 644 captions. 18 """ Generate a caption for a video, focusing on a queried object highlighted with green bounding boxes, and semantic class provided. It should be a rich sentence describing the object's APPEARANCE and ACTION, trajectory, or interaction with other objects in the video frames. The other objects are not highlighted and should be mentioned only if they interact with the query object or are relevant to the context. # Rules - The single query object highlighted with bounding boxes in the frames SHOULD be the subject of the sentence. ex: for category "bottle": "The bottle is being inspected by a person" - Only facts that are visible in the video should be mentioned. - You should NOT mention the fact that the query object is highlighted with green bounding boxes. - You should RETURN A CAPTION no matter what, even if the query object is not visible in any frame. - If multiple objects of the same class as the query object are visible, the caption should focus exclusively on the single highlighted object and describe it as the singular subject of the sentence. - No foreign alphabets or special characters should be used in the caption. Translate foreign words if needed. # Input Details - **Frames**: Provided sequence of 4 frames sampled from a video, in which a bounding-box highlights a query object - **Bounding Box**: Locations of the query object in the respective frames, in the format [(xmin, ymin, xmax, ymax),...] with each value ranging from 0 to 1000 representing a percentage of image dimensions. - **Area**: Area of the query object in the image, as a percentage of the total image area. This could help to determine wether the object is in the background. (big object class with small area) - **Semantic Class**: Class of the query object to be described in the caption - **Other classes**: Some classes of other objects in the video, These objects are not highlighted and should be mentioned only if relevant. # Examples - **Input**: An image showing a women dressed in black and white and a dog both running on a beach with people in the background. The woman's short is highlighted with green bounding boxes in the video. object class: "short pants". - **Output**: "A black and white short pants is being worn by a woman running with a dog across the sandy beach" """ """ Query object Bounding Box location in the respective frames: {formatted_normalized_bbox} Query object areas in percentage of the respective frames : {formatted_area_pct} Query object Class: '{cat["name"]}' Other classes: {", ".join(other_cat_names)} Generate a one-sentence caption, focusing on the object of class '{cat["name"]}' highlighted by the bounding boxes. """ System Prompt Visual prompt Sampled frames with drawn bounding boxes ... ... User Prompt Figure 8: Prompt template used to generate synthetic object captions from video segmentation annotations for the LV-VIS dataset Wang et al. (2023). The system prompt ps contains general instructions, while user prompt and visual prompt pu(j) and ˆxj are formatted with information from each annotation. Video Localized Narratives (VLN) extends existing datasets with "narrations of actors actions" in videos. We use the subset from the UVO dataset, which contains 3 sparsely annotated frames with non exhaustive captions for a total of 5, 136 training and 2, 451 validation videos. BenSMOT contains manually collected annotations of bounding box trajectories and associated captions, focusing exclusively on humans in videos. It includes an average of 2.2 instances per video, and counts 2, 284 videos for training and 1, 008 for evaluation. A.4.3 MORE IMPLEMENTATION DETAILS The visual backbone is initialized with weights pretrained on ImageNet-21K Deng et al. (2009) following OVFormer Fang et al. (2025), and the Mask2Former Cheng et al. (2022) weights are trained from scratch. The OVFormer classifier uses a frozen CLIP ViT-B/32 Radford et al. (2021) encoder. The captioning head is initialized with weights from BLIP-2 Li et al. (2023a) with frozen LLM OPT-2.7B Zhang et al. (2022a) following Chouduri et al. Choudhuri et al. (2024). For all experiments except LV-VIS tuning, we first train the segmentation/detection model, then freeze it and tune the captioning head. Respectively for LVIS/VidSTG/LV-VIS we train for 440k/40k/22k for the first stage and 5k/2k/2k for the second stage. When tuning pretrained models on Vid- STG/VLN/BenSMOT, we train the two stages for (15k, 2k)/(15k, 500)/(15k, 2k) steps, whereas for 19 LV-VIS, we end-to-end tune the model for 2k steps. Experiments are run with a batch size of 8, except when using LVIS+COCO and LV-VIS where we use a batch size of 4 and for video-level tuning of the captioning head where we use a batch-size of 1. Experiments on LV-VIS are end-to-end trainings with clip-level supervision only. For VidSTG, VLN and BenSMOT experiments we use video-level tuning for captioning with temporal aggregation, with Tagg = 8. For all experiments we train the model with a clips of size T = 2, and at inference use T = 5/1/1/1, Tmatch = 1/100/20/40, Kmatch = 1/7/5/7 for LV-VIS/VidSTG/VLN/BenSMOT experiments respectively. For the largest dataset (COCO + LVIS) the optimization takes 2 days on 4 H100 GPUs. Following OVFormer Fang et al. (2025), for all datasets we use an AdamW optimizer and the step learning rate schedule, with an initial learning rate of 0.0001 and a weight decay of 0.05, and apply a 0.1 learning rate multiplier to the backbone. We decay the learning rate at 0.9 and 0.95 fractions of the total number of training steps by a factor of 10. For respectively image/video datasets, we resize the shortest edge of the image to 800/480 for SwinB and 800/360 for ResNet for training and inference. 20
MASKCAPTIONER: LEARNING TO JOINTLY SEGMENT AND CAPTION OBJECT TRAJECTORIES IN VIDEOS Gabriel Fiastre1∗ Antoine Yang2 Cordelia Schmid1 1Inria, École Normale Supérieure, CNRS, PSL Research University 2Google Deepmind ABSTRACT Dense Video Object Captioning (DVOC) is the task of jointly detecting, tracking, and captioning object trajectories in a video, requiring the ability to understand spatio-temporal details and describe them in natural language. Due to the complexity of the task and the high cost associated with manual annotation, previous approaches resort to disjoint training strategies, potentially leading to suboptimal performance. To circumvent this issue, we propose to generate captions about spatio-temporally localized entities leveraging a stateof-the-art VLM. By extending the LVIS and LV-VIS datasets with our synthetic captions (LVISCap and LV-VISCap), we train MaskCaptioner, an endto-end model capable of jointly detecting, segmenting, tracking and captioning object trajectories. Moreover, with pretraining on LVISCap and LV-VISCap, MaskCaptioner achieves state-of-the-art DVOC results on three existing benchmarks, VidSTG, VLN and BenSMOT. The datasets and code are available at https://www.gabriel.fiastre.fr/maskcaptioner/. 1 INTRODUCTION A fundamental aim of computer vision is to enable machines to understand videos with humanlike acuity in perceiving and reasoning about the world. Recent advances have led to remarkable progress in both spatio-temporal localization (Ren et al., 2015; Redmon et al., 2016; Carion et al., 2020; He et al., 2017; Wojke et al., 2017; Zhang et al., 2022b) and vision-language understanding (Vinyals et al., 2015; Sun et al., 2019; Yu et al., 2019; Yang et al., 2021; Chen et al., 2020; Jia et al., 2021; Zellers et al., 2021; Bain et al., 2021; Lu et al., 2019; Kamath et al., 2021). However, building vision-language models that can simultaneously reason about spatially localized objects and temporal dynamics of a complex scene remains a significant challenge, motivated by many real-world applications including autonomous driving (Kim et al., 2019; Atakishiyev et al., 2024), human-computer interaction (Shridhar et al., 2020; Ahn et al., 2022), or video editing (Molad et al., 2023; Jeong and Ye, 2024). Dense Video Object Captioning (DVOC) (Zhou et al., 2025) serves as a key benchmark for this purpose, as it requires to jointly localize, track, and describe in natural language all visual entities in a video. Manual annotation for such a fine-grained task is particularly expensive, leading to a scarcity of datasets with densely annotated object-level video descriptions. To tackle DVOC, prior work resorted to alternative training approaches: Zhou et al. (2025) propose a disjoint training strategy, decomposing the problem into subtasks and training a model sequentially on datasets for each subtask. Choudhuri et al. (2024) leverage the pretraining of multiple specialized models to alleviate the need for objectlevel annotations. Both methods allowed to perform DVOC while circumventing the need for costly annotations, but the lack of end-to-end training with densely annotated object-level supervision may lead to suboptimal performance. We propose to address this limitation by generating synthetic object-level annotations, motivated by the recent success of LLM-generated supervision (Liu et al., 2023; Abdin et al., 2024) and the growing visual capacities of Vision Language Models (VLMs) (Alayrac et al., 2022; Li et al., 2022; 2023a; Team et al., 2023; 2024; Achiam et al., 2023; Grattafiori et al., 2024; Bai et al., 2023). To the best of our knowledge, our work is the first to generate localized, object-level captions for DVOC. To this end, we introduce a multi-modal prompting strategy leveraging a state-of-the-art VLM, and *Corresponding author: 1 16 Oct 2025 A person is holding a pencil sharpener in their hands. The black Derwent pencil sharpener is being held and rotated in a person's hand. A graphic pencil with a red end sits on a wooden surface next to a pencil sharpener. Figure 1: Examples of synthetic captions in our LV-VISCap dataset. extend two segmentation datasets, LVIS (Gupta et al., 2019) for images and LV-VIS (Wang et al., 2023) for videos, to be the first DVOC training sets with (mask, box, category, caption) annotations for all objects, dubbed LVISCap and LV-VISCap, see figure 1. Using our generated datasets, we extend the DVOC task, traditionally formulated as detection (Zhou et al., 2025), to segmentation and train MaskCaptioner, the first end-to-end model that can jointly produce (mask, caption) pairs for all object trajectories in a video. We show that (i) our generated datasets, LVISCap and LV-VISCap, largely benefit MaskCaptioner's DVOC performance, (ii) our MaskCaptioner outperforms previous state-of-the-art models on the VidSTG, VLN and BenSMOT DVOC benchmarks and (iii) we can extend the DVOC task to segmentation. Overall, our contributions can be summarized as follows: 1. We introduce a VLM-based method to generate synthetic object captions for videos, and extend the LVIS and LV-VIS datasets to be the first unified DVOC training set with object captions, boxes, and segmentation masks: LVISCap and LV-VISCap. 2. Using our unified generated data, we train MaskCaptioner, the first end-to-end model to jointly detect, segment, track and caption objects in a video. 3. MaskCaptioner achieves state-of-the-art DVOC results on the three existing benchmarks : VidSTG, VLN and BenSMOT. 2 RELATED WORK Open-Vocabulary Video Instance Segmentation (OV-VIS). The OV-VIS task aims to segment, track, and classify objects from an open set of categories in videos (Guo et al., 2025; Wang et al., 2023), using datasets such as LV-VIS (Wang et al., 2023). State-of-the-art methods (Guo et al., 2025; Wang et al., 2023; Fang et al., 2025) commonly use query-based approaches that classify objects by matching visual features with CLIP embeddings (Radford et al., 2021). Methods like OVFormer (Fang et al., 2025) or BriVIS (Cheng et al., 2024) improve this approach by better aligning visual queries with the CLIP space. Unlike these methods focused on CLIP feature matching for classification, our work explores describing objects in natural language (Li et al., 2023a). Localized vision-language understanding. Going beyond pioneering vision-language tasks such as visual question answering (Antol et al., 2015) or image captioning (Chen et al., 2015), recent work has explored spatial understanding tasks that require localizing natural language queries in images. This includes referred expression segmentation (Kazemzadeh et al., 2014; Yu et al., 2018; Yang et al., 2022b), image grounding (Rohrbach et al., 2016; Plummer et al., 2015), reasoning segmentation (Lai et al., 2024; Wang and Ke, 2024), spatio-temporal video grounding (Zhang et al., 2020; Yang et al., 2022a) and grounded visual question answering (Zhu et al., 2016; Xiao et al., 2024; Lei et al., 2018; 2020). While these tasks typically require localizing one or a few entities, dense captioning (Johnson et al., 2016; Wu et al., 2024) aims to spatially localize and describe in natural language all salient regions in images. Our work addresses the more challenging task of predicting both object trajectories and descriptions for all objects in a video. Dense Video Object Captioning (DVOC). The DVOC task aims at jointly detecting, tracking, and describing the trajectory of all visual entities in a video. DVOC-DS (Zhou et al., 2025) tackles this task by generating frame-wise object box proposals (Zhou et al., 2019; Cai and Vasconcelos, 2018) that are tracked (Zhou et al., 2022) before feeding aggregated and cropped features to a generative image-to-text decoder (Wang et al., 2022). To cope with the lack of DVOC annotations, the model is trained disjointly on various subtasks: object detection using COCO (Lin et al., 2014), image 2 MaskCaptioner Training Bounding boxes Construct prompt Draw Synthetic captions ... ... ... ... 'The person is sitting in a deck chair inside of a workshop.' 'A light-colored wooden deck chair sits on a patterned floor, supporting a man wearing shorts.' """ Bounding Box location : (0.29,0.10,0.42,0.83),... Areas : 42%, ..., 63% Category: 'person' Other categories: {'chair','shoe','shorts'} Generate a one-sentence caption, focusing on the object of class 'person' highlighted by the bounding boxes. """ Man Chair Figure 2: Our MaskCaptioner data annotation pipeline: For each object, we extract its bounding boxes Bj from its annotated masks Mj, and draw them in the video. The video with drawn bounding boxes ˆxj, along with a prompt (ps, pu(j)) including additional information such as the label of the object to caption Cj, the labels and bounding boxes of other objects in the image, is fed to a state-of-the-art VLM (Gemini 2.0 Flash (Team et al., 2023)) to generate the object-level caption. ps denotes the static prompt with general instructions and pu(j) the dynamic prompt with annotation information. We use this pipeline to generate the LVISCap and LV-VISCap datasets, used to train MaskCaptioner. object-level captioning using Visual Genome (Krishna et al., 2017), video scene-level captioning using SMiT (Monfort et al., 2021) and video object tracking using AugCOCO (Lin et al., 2014). OW-VisCapTor (Choudhuri et al., 2024) extends the DVOC task to segmentation masks. Its architecture relies on an object abstractor using a prompt encoder and transformer blocks, an inter-query contrastive loss to ensure object queries are diverse, and an object-to-text abstractor that connects these queries to a frozen LLM, generating rich, object-centric captions for each detected instance. While proposing to extend DVOC to segmentation, OW-VisCapTor is hindered by the absence of paired (mask, caption) annotations, hence segmentation and DVOC are tackled in isolation using separate models. Closely related, SMOTer (Li et al., 2024) extends multi-object tracking to object-level captioning, video-level captioning and object-relation predictions, and introduce a hand-annotated dataset, BenSMOT. However, BenSMOT focuses on humans only, whereas the standard DVOC task considers all visual entities in the video. In this work, we automatically generate DVOC datasets, LVISCap and LV-VISCap, and train MaskCaptioner, a model that can end-to-end detect, segment, track and caption object trajectories in videos. Vision-language data generation. A promising approach for intricate vision-language tasks is to generate visual annotations using Large Language Models (LLMs) or Vision Language Models (VLMs). LLaVA (Liu et al., 2023) leverages a LLM to generate conversations and detailed descriptions from paired image-text annotations (Chen et al., 2015). This approach has been followed to generate largescale instruction tuning data for video understanding (Maaz et al., 2024; Li et al., 2023b). Recent research has focused on the generation of spatially grounded text-image data using LLMs/VLMs: Shikra (Chen et al., 2023) generates grounded QA pairs from Flickr30K Entities (Plummer et al., 2015), LLaVA-Grounding (Zhang et al., 2024) and GLaMM (Rasheed et al., 2024) generate grounded conversational data from datasets such as COCO (Lin et al., 2014). Recently, GROVE (Kazakos et al., 2025) extended GLaMM to generate grounded, temporally consistent video captions, extending dense captioning to the video domain. These methods predominantly operate at the scene level and do not produce localized, object-level video descriptions. In contrast, we introduce a multi-modal prompting strategy to leverage VLMs into generating fine-grained captions for individual object trajectories across time. Recently, Yuan et al. (2025) proposed a related VLM-based approach to generate object-level video captions designed for long sentences referring segmentation. Differently, we focus on the task of Dense Video Object Captioning where no text prompt is given to the model. 3 METHOD Dense Video Object Captioning (DVOC) (Zhou et al., 2025) is the task of jointly detecting, tracking and captioning objects trajectories in a video, i.e. producing a (bounding box or mask, caption) pair 3 for each object in a video at each timestamp they appear. Such densely annotated data is lacking for video, as there is currently no available training dataset including captions for all object trajectories in the videos. The spatio-temporal grounding dataset VidSTG (Zhang et al., 2020) has been repurposed to DVOC but only includes annotations for few objects per video and a limited number of frames per video. We address the lack of data by introducing a strategy for synthetic DVOC data generation, allowing unified training of a DVOC model on (trajectory, caption) pairs for each object, as presented in Section 3.1. With this strategy, we extend the LV-VIS dataset (Wang et al., 2023), which includes both boxes and masks in their (trajectory, label) annotations for all objects, to a variant, dubbed LV-VISCap, which additionally includes synthetically generated captions for each trajectory. To enable end-to-end training on this rich data using segmentation masks, we build an architecture based on a state-of-the-art Open-Vocabulary Video Instance Segmentation (OV-VIS) model OVFormer (Fang et al., 2025), as described in Section 3.2.1. As OVFormer is designed for classification, we extend it with a captioning head (Choudhuri et al., 2024), as explained in Section 3.2.2. Finally, we present in Section 3.3 the losses used to train our model. 3.1 DVOC DATA GENERATION We start from the LV-VIS dataset (Wang et al., 2023) which contains (segmentation masks, category) manual annotations for all objects and timestamps of the videos. To automatically collect DVOC data, one challenge is to generate accurate object-level captions for each trajectory. For this, we leverage a state-of-the-art VLM (Gemini 2.0 Flash (Team et al., 2023)) and feed it with videos where the object to caption is marked with drawn bounding boxes, as illustrated in Figure 2. Table 1: Impact of the prompting strategy on caption quality. Scores are given by an expert human evaluator from 0 to 2 (incorrect, partially correct, or correct) on a subset from the LV-VIS validation set, and brought to 0-100 range. For the mask visual prompt experiments, we use our best prompt with either the object's bounding boxes or center point coordinates as a localization cue in the text prompt. Visual prompt Prompting method Average rating bounding boxes single frame 26.8 + multiple frames 27.1 + detailed instructions 29.5 + category labels 80.7 + bbox coordinates 83.1 + bbox area 84.3 + few shot examples 85.1 mask boundaries center point coordinates 75.9 bbox coordinates 77.1 Visual Prompt. Formally, let x ∈ RN×H×W ×3 be a video clip of length N, with associated mask and category annotations for M objects in the video: (Mj, Catj), j ∈1, ..M. We first extract bounding boxes annotations from the groundtruth masks, Bj ∈RN×4, j ∈1, ..., M. We draw each object boxes on a separate copy of the video, and denote as a ∧b the operation of drawing box b on frame a. We obtain a visual prompt ˆxi j for each object trajectory: ˆxi j = xi ∧Bj for i ∈1, ..., N, j ∈1, ..., M Note that in practice, we subsample N to 4 uniformly sampled video frames as we found it produces representative enough visual content for the captioning task. Text prompts. In detail, the prompt we feed the VLM is composed of the previously described visual prompt ˆxj, a system prompt ps and a user prompt pu(j). The system prompt is static for all objects/videos and gives general instructions (e.g. "Generate a caption about a queried object highlighted with bounding boxes."), rules (e.g. "Do not mention the bounding boxes in the caption."), format, and an example. The user prompt pu(j) however, is constructed for each annotation and enriched with informations about the specific query to help the model. The user prompt encodes textual bounding box coordinates, area, category name of the queried object, and category names from other objects in the image. Passing information through different channels (visual prompt, text prompt) helps the model focusing on the queried object and being accurate when describing the scene. Also, this complementary information can lead the model to reason about the objects (e.g. area being small for an object of category 'elephant' implies the object is most likely part of the background). The different prompting cues have been ablated in Table 1. In particular, showing that including textual semantic and localization cues in the prompt helps the model to focus on the queried object and generate more accurate object-focused captions. We notice that using the segmentation masks as the visual prompt for the model results in less accurate object captions, which might result from a poorer alignment with the localization cues 4 T Video clip Instance tracking Boxes Masks Track ids ... Captions N Clip-level Object queries Video-level obj Clip-level OVFormer Scores Mask2former Box head Classifier Object head Temporal aggregation Tagg LLM decoder Figure 3: Our MaskCaptioner architecture jointly segments and captions objects in videos. For each clip of T frames, we obtain Nobj clip-level object queries through Mask2Former (Cheng et al., 2021), and yield associated score, mask and box predictions. At a video-level, we match the predicted object queries with the previously processed clips with the Hungarian bipartite matching algorithm, and perform instance tracking (Zhu et al., 2024). For each track, we sample Tagg clips uniformly across the video, and aggregate the tracked queries to obtain a single video query per object, which we can feed to the LLM captioning head (Li et al., 2023a), producing a single caption per trajectory. included in the text prompt. The full template details for the model prompt construction and a more detailed ablation are given in Appendix A.4.1 and A.2. The LVISCap and LV-VISCap datasets. For each object j in the video, we prompt Gemini 2.0 Flash (Team et al., 2023) with the visual and textual prompts (ˆxj , ps and pu(j)) to output a synthetic caption Capj. The full DVOC annotation is (Mj, Bj, Catj, Capj). We repeat this process for each object in each videos of LV-VIS, generating over 19.5k synthetic captions over 3.9k videos covering 1, 020 object classes, and with an average length of 13.9 words. We repeat this process with the LVIS dataset to support image pre-training, applying the same method as above described with each image considered a video of len N = 1, and obtain our DVOC training sets: LVISCap and LV-VISCap. 3.2 MASKCAPTIONER ARCHITECTURE To enable end-to-end training on the previously described DVOC data including segmentation masks, our architecture, illustrated in Figure 3, processes videos as Nclip clips of T consecutive frames each, and is composed with: (i) at the clip level, an instance segmentation and detection component, see Section 3.2.1 (ii) at the video-level, a tracking module and a captioning head, see Section 3.2.2. 3.2.1 INSTANCE SEGMENTATION AND DETECTION Background: OVFormer (Fang et al., 2025). Our clip-level instance segmentation component is based on OVFormer (Fang et al., 2025), which we shortly describe next. On a high level, OVFormer augments Mask2Former (Cheng et al., 2021) with a classification head to handle Open-Vocabulary Video Instance Segmentation. Mask2Former learns clip-level object queries with a transformer decoder which cross-attends to the features extracted with a visual backbone and are refined with a pixel decoder. Formally, each clip xi clip ∈RT ×3×H×W is composed of T consecutive frames for images of resolution (H,W), and processed independently by the OVFormer model which outputs clip-level object queries qi obj, associated mask predictions Mi, classification and objectness scores Si cls and Si obj: (qobj, Mi, Si cls, Si obj) = ΦOVFormer(xi) where qi obj ∈RNobj×D, Mi ∈RNobj×T ×H×W , Si cls ∈RNclip×Nobj×Ncls and Si obj ∈RNclip×Nobj. Detection head. We extend this segmentation module with detection by using a 4-layer MLP to generate boxes on top of the object queries qi obj: Bi = BoxHead(qi obj) ∈RNobj×T ×4. Confidence scores. Note that OVFormer (Fang et al., 2025) only computes class-aware query-wise confidence scores over the full video. However, for objects appearing only in a small subset of frames in the video this strategy could result in inaccurate scores. Moreover, for DVOC, we wish to avoid redundant predictions i.e. having two queries predicting a similar trajectory. Hence we additionally compute class-agnostic query-wise confidence scores for each clip Si cls∗, by taking the maximum classification score over all labels c ∈1, ..., Ncls for each query and clip: 5 Si cls∗= max c (Si cls(c)) ∈RNclip×Nobj. Finally we derive the per-clip score Si = q Si cls∗× Si obj which we use for filtering predictions below a threshold tthresh at inference-time for every time step. 3.2.2 INSTANCE TRACKING AND CAPTIONING Tracking module. To derive the output video-level trajectories from the clip-level predictions, we need to obtain a matching between the queries at time i and the queries at time i + 1. For this, we perform tracking between the clips using the top-K enhanced query-matching module from Zhu et al. (2024). For each clip, this module keeps a memory bank containing the queries from the Tmatch previous clips. Among these, it identifies the Kmatch most matched clips, and computes the optimal assignment using the Hungarian bipartite matching algorithm. Using the Kmatch most matched clips helps reducing error propagation compared to the OVFormer (Fang et al., 2025) tracking module, which maps queries from time i + 1 to time i directly. Notably, we can keep track of objects that disappear and re-appear in a video, whereas they are automatically lost using the OVFormer tracking module. This method is referred to as semi-online tracking as we represent objects at a clip-level and associate between the clips in an online fashion. This offers the advantages of being flexible (fully-online for clips of length T = 1) and to arbitrate between using multi-frame information and memory constraints for long videos. Captioning head. To caption tracked object trajectories, we adapt the captioning head from Choudhuri et al. (2024) based on BLIP-2 (Li et al., 2023a). The BLIP-2 decoder processes object queries one by one using masked-attention conditioned with the predicted masks, before projecting the resulting object query into the LLM space for caption prediction. However, for consistency and efficiency, we predict a single video-level caption per tracked object query, replacing clip-level prediction. Query aggregation. Let's consider ΦQT BLIP2 the BLIP-2 query transformer, ΦLLM BLIP2 the LLM decoder, qi obj(j) ∈R1×D and Si obj(j) ∈R respectively the query and the detection score for object j from clip i. For each object, we aggregate the tracked queries over time after they are processed by the BLIP-2 query transformer, by sampling a set Iagg of Tagg clips uniformly across the video. We obtain a video query for each object j: q∗ obj(j) = P i∈Iagg Si(j) × ΦQT BLIP-2(qi obj(j), Mi j). We can compute the video captioning prediction for the tracked object: Cap(j) = ΦLLM BLIP2(q∗ obj(j)) for j ∈1, ...N. 3.3 MODEL TRAINING We train MaskCaptioner with a combination of clip-level and video-level losses. For each clip, we predict masks/boxes, classification scores and captions, and derive the clip-level training objective as the following combination of supervised losses: Lclip-level = Lseg + Ldet + Ls + Lcap (1) where Lseg and Ls are the VIS losses from Fang et al. (2025), i.e. Lseg = λdiceLdice + λceLce, with Ldice and Lce the dice and cross-entropy segmentation losses respectively, and Ls = λclsLcls+λobjLobj where Lcls and Lobj are the cross-entropy losses for classification and objectness. We add detection and captioning losses Ldet = λl1Ll1 + λgiouLgiou where Ll1 and Lgiou are detection losses from Yang et al. (2022a), and Lcap = λclip-lmLlm, with Llm the cross-entropy language modeling loss (Zhou et al., 2025). When including the temporal aggregation module for captioning, we train the captioning head at the video level, i.e. we predict a caption per object for the full video after the tracking is performed and each object-query has been augmented across time. Lvideo-level = λvid-lmLlm (2) MaskCaptioner can be trained in a completely end-to-end manner. However, in practice, we train the model in two-stages for most of the experiments to alleviate memory constraints: we first train the segmentation/detection and classification model, then freeze it and tune the captioning head. The captioning head is trained either at the clip-level, or at the video-level when enabling the temporal aggregation module (i.e. λclip-lm = 0 or λvid-lm = 0). For each loss when they are computed we set their respective weights to λdice, λce, λl1 = 5, λgiou, λcls, λobj = 2, and (λclip-lm = 1 or λvid-lm = 1). 6 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING Datasets. LVIS (Gupta et al., 2019) and LV-VIS(Wang et al., 2023) are large-vocabulary instance segmentation datasets, respectively for image and video. LVISCap and LV-VISCap denote our extensions of LVIS and LV-VIS (see Section 3.1), with respectively 1, 2M/244k synthetic image object captions for the training/validation of LVISCap and 16k/3.7k synthetic video object captions for LV-VISCap (average of 5.4 objects per video). Note that in the absence of annotations on the test sets of LVIS and LV-VIS, we only extend the training and validation sets with captions, and use the validation set for evaluation. Benchmarks. VidSTG(Zhang et al., 2020) is a spatio-temporal video grounding dataset containing text descriptions serving as queries, which Zhou et al. (2025) propose to use for DVOC evaluation. The repurposed training and validation sets count 5.4k/602 videos for 15.1k/1.6k object trajectories with captions respectively. Video Localized Narratives (VLN) is similarly repurposed (Zhou et al., 2025). For each of the 5.1k training and 2.4k validation videos, the dataset contains 3 sparsely annotated frames with non exhaustive captions. BenSMOT contains bounding box trajectories and associated captions focusing exclusively on humans in videos, with an average of 2.2 instances per video. It counts 2.2k videos for training and 1k for evaluation. More details about the datasets are given in Appendix A.4.2. Evaluation Metrics. Following prior work, we evaluate DVOC using the CHOTA metric introduced by Zhou et al. (2025), which extends the widely used multi-object tracking HOTA metric (Luiten et al., 2021). CHOTA decomposes the DVOC task into three components: detection accuracy (DetA)(Luiten et al., 2021), association accuracy (AssA)(Luiten et al., 2021), and captioning accuracy (CapA)(Zhou et al., 2025). These components reflect the model's ability to (i) correctly localize objects, (ii) maintain their identity across frames, and (iii) generate accurate natural language descriptions. CHOTA is defined as the geometric mean of the three components: CHOTA = 3√DetA · AssA · CapA. Matching between predicted and ground-truth trajectories is performed using Intersection-over-Union (IoU) thresholds, similarly to standard tracking evaluations (Milan et al., 2016). To extend the metric to segmentation masks, we simply replace box-based IoU with mask-based IoU when computing CHOTA with segmentation masks. Implementation details. Following OVFormer (Fang et al., 2025) we used ResNet50 (He et al., 2016) and SwinBase (Liu et al., 2021) visual backbones. Our Mask2former (Cheng et al., 2022) transformer decoder has 11 layers, and our captioning head is based on the BLIP-2 (Li et al., 2023a) decoder with OPT-2.7B LLM (Zhang et al., 2022a). For LV-VIS experiments we tune the model end-to-end with clip-level supervision only. For other experiments, we first train the segmentation/detection model, then freeze it and tune the captioning head. For VidSTG, VLN and BenSMOT experiments we use video-level tuning for captioning with temporal aggregation. For the largest dataset (COCO + LVIS) the optimization takes 2 days on 4 H100 GPUs. More implementation details and hyper-parameters for different experiments are given in Appendix A.4.3. 4.2 RESULTS 4.2.1 BENEFITS OF TRAINING ON LVISCAP AND LV-VISCAP We first study the impact of integrating our generated datasets for DVOC training in Table 2. We train MaskCaptioner on our synthetic video set, LV-VISCap, and progressively add LVISCap image pretraining. Results are reported on the LV-VISCap validation set. First, we observe that MaskCaptioner achieves strong DVOC results with training only on LVISCap or LV-VISCap, demonstrating the effectiveness of our architecture. Importantly, combining both LVISCap and LV-VISCap leads to best results, showing the benefit of both our generated datasets. Moreover, we observe that our MaskCaptioner is robust to the choice of the visual backbone. Note that the AssA score depends on the detections, hence a worse DetA score with lower recall but higher precision can make the tracking easier: this explains the higher AssA score from the model without LV-VIScap tuning. In Fig. 4, we show that the CapA performance is logarithmically correlated with the quantity of training captions, suggesting that generating more data with our approach might bring further improvements. 7 Table 2: Impact of training with LVISCap and LV-VISCap and of the visual backbone on LV-VISCap DVOC. Backbone LVIScap LV-VIScap CapA DetA AssA CHOTA SwinB - ✓ 37.9 48.1 89.5 54.7 ✓ - 30.0 34.3 93.2 45.8 ✓ ✓ 43.6 54.3 89.1 59.5 ResNet50 ✓ ✓ 39.0 51.1 88.5 56.1 4.2.2 COMPARISON WITH THE STATE OF THE ART 0 1 2 5 10 30 100 Percentage of LVIScap captions used for training (log scale) 36 38 40 42 44 CapA CapA vs. data percentage (log scale) ±1 around log-fit CapA Log-linear fit Figure 4: Impact of the generated data scale on the CapA metric. We train MaskCaptioner on a varying percentage of LVISCap captions and finetune on LV-VISCap. We compare MaskCaptioner to state-of-the-art DVOC methods following the standard evaluation protocol on three existing benchmarks : VidSTG in Table 3, VLN in Table 4 and BenSMOT in Table 5. DVOC-DS (Zhou et al., 2025) reports results without pretraining, and with their disjoint training training strategy, while OW-VISCaptor (Choudhuri et al., 2024) leverages Mask2former pretrained on COCO for instance segmentation. In Table 3, we include results with the same pretraining data as these methods and show the impact of using our data instead. Pretraining MaskCaptioner on COCO yields better detection and tracking compared to OW-VISCaptor but comparable captioning performance due to the similar captioning head design. Including our data leads to an improvement on all metrics and especially captioning with a 6.7 CapA increase (15% relative improvement). When pretraining on the disjoint DVOC-DS set, we observe a substantial gain in the captioning metric due to the model design. Moreover, we show that using our pretraining sets results in a further performance improvement, while additionally allowing a unified, much faster training (2032 GPU hours (Zhou et al., 2025) vs 208 for our approach). Eventually, our proposed approach can output segmentation masks unlike other methods. Table 3: Comparison with the state of the art on VidSTG DVOC validation set. All models are finetuned on VidSTG. "temp. agg." refers to including the query temporal aggregation module. Method Pretraining set CapA DetA AssA CHOTA OW-VISCaptor (Choudhuri et al., 2024) COCO 43.9 60.1 54.0 53.0 MaskCaptioner (ours) 44.3 65.1 70.2 58.7 DVOC-DS (Zhou et al., 2025) COCO + VG + SMIT + AugCOCO 39.7 65.8 70.4 56.9 MaskCaptioner (ours) 50.1 65.0 69.2 60.9 MaskCaptioner (ours) COCO + LVIScap + LV-VIScap 51.0 66.8 71.0 62.3 + temp agg COCO + LVIScap + LV-VIScap 52.7 66.8 71.0 63.0 Table 4: Comparison with state of the art on the VLN DVOC validation set. Mask loss refers to using the segmentation masks in the detection loss. All models are finetuned on VLN. Method Mask loss CapA DetA AssA CHOTA DVOC-DS Zhou et al. (2025) - 17.7 44.3 89.5 41.3 MaskCaptioner (ours) - 21.4 48.7 89.7 45.4 MaskCaptioner (ours) ✓ 22.9 50.1 92.7 47.4 + temp agg ✓ 23.4 50.1 92.7 47.7 Table 5: Comparison on the BenSMOT validation set. CapA, and thus CHOTA are not reported on this dataset. All models are finetuned on BenSMOT. Method DetA AssA CIDEr SMOTer (Li et al., 2024) 80.8 73.7 8.7 DVOC-DS Zhou et al. (2025) 90.8 89.6 25.4 MaskCaptioner (ours) 91.6 87.5 39.9 + temp agg 91.6 87.5 42.6 We further evaluate MaskCaptioner on VLN in Table 4 and BensMOT in Table 5. On both benchmarks, our approach improves the detection while the tracking remains competitive. Most important, the captioning metrics improve by a large margin (+5.2 CapA on VLN, +14.7 CIDEr on BenSMOT 8 Table 6: Automatic vs manual annotations for evaluation on a subset from LV-VIScap validation with 50 videos and 233 objects trajectories ; "automatic" and "manual" annotations stands for synthetic or human annotated captions. All models are trained on LVISCap and tuned on our LV-VISCap training set. Annotation type LVIScap captions CapA DetA AssA CHOTA automatic - 33.0 48.4 89.6 52.3 ✓ 40.5 50.0 91.0 56.7 manual - 22.5 48.4 89.6 46.0 ✓ 33.2 50.0 91.0 53.1 Table 7: Impact of the inference clip on LV-VIScap. All models are trained on LVIScap then LV-VIScap training set. Clip length mAP 3 26.0 4 26.1 5 26.6 6 26.3 7 26.8 compared to state of the art). Additionally, our method is able to jointly segment objects, which further improves the performance as shown in Table 4. Across all benchmarks, we show that including the temporal aggregation module further improves captioning performance by effectively merging information from multiple video clips, enriching the description of temporally extended actions. We notice that this improvement is only marginal on VLN, which we attribute to the short video lengths in that dataset. Note that we maintain consistent detection models when enabling the temporal aggregation module, keeping the DetA and AssA scores unchanged. Other results employ best-clip captioning. 4.2.3 ADDITIONAL ABLATIONS Table 8: Impact of temporal aggregation on VidSTG validation (with finetuning). All methods are pretrained on COCO + LVIScap + LV-VIScap. Multiclip results are obtained with weighted mean temporal aggregation, except for † which uses arithmetic mean. num clips clip selection CapA CHOTA 1 best score 51.0 62.3 1 middle frame 46.9 60.6 4 uniform† 49.1 61.5 4 uniform 51.6 62.6 8 uniform 52.7 63.0 16 uniform 53.8 63.4 32 uniform 55.4 64.0 Annotation bias. To evaluate the bias introduced by evaluating on LV-VISCap synthetic captions (see Table 2), we annotate a representative subset of our LV-VISCap datasets by hand and compare the impact of our LVISCap captions when evaluating MaskCaptioner on automatic vs manual data. Our subset comports 50 videos from the LVVISCap validation set with 233 object trajectories annotated by hand. The results are presented in Table 6. We observe that both the evaluation on automatic and manual data show a comparable improvement on the CapA and CHOTA metric when using our LVISCap captions for the training. This result confirms the importance of our synthetic captions for DVOC performance, and further shows that the bias introduced by evaluating on synthetic LV-VISCap captions is marginal. Clip length. We show in Table 7 the impact of the clip length used for the MaskCaptioner inference. A higher clip length leads to temporally richer queries, improving detection and tracking. Note that in our implementation, we follow OVFormer (Fang et al., 2025) and use a clip length of T = 5. Temporal aggregation. We show the impact of temporal aggregation on DVOC performance in Table 8. Increasing the number of clips for aggregation consistently improves the captioning scores at the expense of higher computational cost. Weighting the clips for aggregation using the detection scores yields a clear improvement over the arithmetic mean. We also observe that performing captioning based on the single clip with best score performs relatively well, which highlights the limits of the complexity of the actions observable in the current benchmarks, e.g. VidSTG.(Zhang et al., 2020). Additional results. We show additional results about the prompt ablation and the impact of the tracking module in Appendix A.2. We add MaskCaptioner qualitative examples in Appendix A.1, and discuss the failure cases and limitations of our method in Appendix A.3, as well as give further implementation details in Appendix A.4. 9 5 CONCLUSION We propose an approach to generate synthetic object-level captions using a state-of-the-art VLM and extend the LVIS and LV-VIS datasets with synthetic captions. We use the resulting LVISCap and LV-VISCap datasets to train MaskCaptioner, a DVOC model that can simultaneously detect, segment, track, and caption objects throughout a video. With finetuning, MaskCaptioner achieves state-of-the-art performance on the VidSTG, VLN and BenSMOT benchmarks, while extending the DVOC task to segmentation masks. ACKNOWLEDGEMENTS This work was granted access to the HPC resources of IDRIS under the allocation AD011014323R2 made by GENCI. It was funded in part by the French government under management of Agence Nationale de la Recherche as part of the "France 2030" program, reference ANR-23-IACL-0008 (PR[AI]RIE-PSAI project), the ANR project VideoPredict ANR-21-FAI1-0002- 01, and Paris Île-deFrance Région in the frame of the DIM AI4IDF. Cordelia Schmid would like to acknowledge the support by the Körber European Science Prize. REFERENCES Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 Technical Report: A highly capable language model locally on your phone. arXiv preprint , 2024. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 Technical Report. arXiv preprint , 2023. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint , 2022. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. In NeurIPS, pages 23716-23736, 2022. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, pages 2425-2433, 2015. Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, and Randy Goebel. Explainable artificial intelligence for autonomous driving: A comprehensive overview and field guide for future research directions. IEEE Access, 2024. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen Technical Report. arXiv preprint , 2023. Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In ICCV, 2021. Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In CVPR, pages 6154-6162, 2018. Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-End Object Detection with Transformers. In ECCV, pages 213-229. Springer, 2020. Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal LLM's referential dialogue magic. arXiv preprint , 2023. Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO Captions: Data Collection and Evaluation Server. arXiv preprint , 2015. 10 Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: UNiversal Image-TExt Representation learning. In ECCV, pages 104-120. Springer, 2020. Bowen Cheng, Anwesa Choudhuri, Ishan Misra, Alexander Kirillov, Rohit Girdhar, and Alexander G. Schwing. Mask2Former for Video Instance Segmentation, 2021. URL https://arxiv.org/ abs/2112.10764. Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. MaskedAttention Mask Transformer for Universal Image Segmentation. In CVPR, pages 1290-1299, June 2022. Zesen Cheng, Kehan Li, Hao Li, Peng Jin, Chang Liu, Xiawu Zheng, Rongrong Ji, and Jie Chen. Instance brownian bridge as texts for open-vocabulary video instance segmentation. arXiv preprint , 2024. Anwesa Choudhuri, Girish Chowdhary, and Alex Schwing. OW-VISCapTor: Abstractors for OpenWorld Video Instance Segmentation and Captioning. In NeurIPS, 2024. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255, 2009. Hao Fang, Peng Wu, Yawei Li, Xinxin Zhang, and Xiankai Lu. Unified Embedding Alignment for Open-Vocabulary Video Instance Segmentation. In ECCV, pages 225-241. Springer, 2025. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint , 2024. Pinxue Guo, Hao Huang, Peiyang He, Xuefeng Liu, Tianjun Xiao, and Wenqiang Zhang. OpenVIS: Open-vocabulary video instance segmentation. In AAAI, volume 39, pages 3275-3283, 2025. Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A Dataset for Large Vocabulary Instance Segmentation. In CVPR, pages 5356-5364, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, pages 2961-2969, 2017. Hyeonho Jeong and Jong Chul Ye. Ground-a-video: Zero-shot grounded video editing using text-toimage diffusion models. In ICLR, 2024. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, pages 4904-4916. PMLR, 2021. Justin Johnson, Andrej Karpathy, and Li Fei-Fei. Densecap: Fully convolutional localization networks for dense captioning. In CVPR, pages 4565-4574, 2016. Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. MDETR - Modulated Detection for End-to-End Multi-Modal Understanding. In ICCV, 2021. Evangelos Kazakos, Cordelia Schmid, and Josef Sivic. Large-scale Pre-training for Grounded Video Caption Generation. In ICCV, 2025. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP, pages 787-798, 2014. Jinkyu Kim, Teruhisa Misu, Yi-Ting Chen, Ashish Tawari, and John Canny. Grounding human-tovehicle advice for self-driving vehicles. In CVPR, pages 10591-10599, 2019. 11 Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. In IJCV, volume 123, pages 32-73. Springer, 2017. Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, and Jiaya Jia. LISA: Reasoning segmentation via Large Language Model. In CVPR, pages 9579-9589, 2024. Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tvqa: Localized, compositional Video Question Answering. In EMNLP, 2018. Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. Tvqa+: Spatio-temporal grounding for Video Question Answering. In ACL, 2020. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In ICML, pages 12888-12900. PMLR, 2022. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping Language-Image Pretraining with Frozen Image Encoders and Large Language Models. In ICML, pages 19730-19742. PMLR, 2023a. KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. VideoChat: Chat-centric video understanding. arXiv preprint , 2023b. Yunhao Li, Qin Li, Hao Wang, Xue Ma, Jiali Yao, Shaohua Dong, Heng Fan, and Libo Zhang. Beyond MOT: Semantic Multi-Object Tracking. In ECCV, 2024. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In ECCV, pages 740-755. Springer, 2014. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. In NeurIPS, pages 34892-34916, 2023. Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In ICCV, pages 10012-10022, 2021. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS, 2019. Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas Geiger, Laura Leal-Taixé, and Bastian Leibe. HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking. IJCV, 129: 548-578, 2021. Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-ChatGPT: Towards detailed video understanding via large vision and language models. In ACL, 2024. Anton Milan, Laura Leal-Taixé, Ian Reid, Stefan Roth, and Konrad Schindler. MOT16: A Benchmark for Multi-Object Tracking. arXiv preprint , 2016. Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors. arXiv preprint , 2023. Mathew Monfort, SouYoung Jin, Alexander Liu, David Harwath, Rogerio Feris, James Glass, and Aude Oliva. Spoken moments: Learning joint audio-visual representations from video descriptions. In CVPR, 2021. Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, pages 2641-2649, 2015. 12 Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763. PmLR, 2021. Hanoona Rasheed, Muhammad Maaz, Sahal Shaji, Abdelrahman Shaker, Salman Khan, Hisham Cholakkal, Rao M Anwer, Eric Xing, Ming-Hsuan Yang, and Fahad S Khan. Glamm: Pixel grounding large multimodal model. In CVPR, pages 13009-13018, 2024. Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In CVPR, June 2016. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards RealTime Object Detection with Region Proposal Networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, NeurIPS, volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper_files/paper/2015/ file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf. Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. Grounding of textual phrases in images by reconstruction. In ECCV, pages 817-834. Springer, 2016. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In CVPR, pages 10740-10749, 2020. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoBERT: A Joint Model for Video and Language Representation Learning. In ICCV, pages 7464-7473, 2019. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint , 2023. Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint , 2024. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and Tell: A Neural Image Caption Generator. In CVPR, pages 3156-3164, 2015. Haochen Wang, Cilin Yan, Shuai Wang, Xiaolong Jiang, Xu Tang, Yao Hu, Weidi Xie, and Efstratios Gavves. Towards Open-Vocabulary Video Instance Segmentation. In ICCV, pages 4057-4066, October 2023. Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. GIT: A generative image-to-text transformer for vision and language. In TMLR, 2022. Junchi Wang and Lei Ke. LLM-Seg: Bridging image segmentation and large language model reasoning. In CVPR, pages 1765-1774, 2024. Nicolai Wojke, Alex Bewley, and Dietrich Paulus. Simple Online and Realtime Tracking with a Deep Association Metric. In ICIP, pages 3645-3649. IEEE, 2017. Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. In ECCV, pages 207-224. Springer, 2024. Junbin Xiao, Angela Yao, Yicong Li, and Tat-Seng Chua. Can i trust your answer? Visually grounded Video Question Answering. In CVPR, pages 13204-13214, 2024. Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just Ask: Learning to answer questions from millions of narrated videos. In ICCV, pages 1686-1697, 2021. Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. TubeDETR: SpatioTemporal Video Grounding With Transformers. In CVPR, pages 16442-16453, June 2022a. 13 Zhao Yang, Jiaqi Wang, Yansong Tang, Kai Chen, Hengshuang Zhao, and Philip HS Torr. Lavt: Language-aware vision transformer for referring image segmentation. In CVPR, pages 1815518165, 2022b. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. MAttNet: Modular attention network for Referring Expression Comprehension. In CVPR, pages 1307-1315, 2018. Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for Visual Question Answering. In CVPR, pages 6281-6290, 2019. Haobo Yuan, Xiangtai Li, Tao Zhang, Zilong Huang, Shilin Xu, Shunping Ji, Yunhai Tong, Lu Qi, Jiashi Feng, and Ming-Hsuan Yang. Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos. arXiv pre-print, 2025. Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. MERLOT: Multimodal Neural Script Knowledge Models. In NeurIPS, 2021. Hao Zhang, Hongyang Li, Feng Li, Tianhe Ren, Xueyan Zou, Shilong Liu, Shijia Huang, Jianfeng Gao, Leizhang, Chunyuan Li, et al. LLaVa-Grounding: Grounded visual chat with large multimodal models. In ECCV, pages 19-35. Springer, 2024. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open Pre-trained Transformer Language Models. arXiv preprint , 2022a. Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Fucheng Weng, Zehuan Yuan, Ping Luo, Wenyu Liu, and Xinggang Wang. ByteTrack: Multi-Object Tracking by Associating Every Detection Box. In ECCV, pages 1-21. Springer, 2022b. Zhu Zhang, Zhou Zhao, Yang Zhao, Qi Wang, Huasheng Liu, and Lianli Gao. Where does it exist: Spatio-temporal video grounding for multi-form sentences. In CVPR, pages 10668-10677, 2020. Xingyi Zhou, Dequan Wang, and Philipp Krähenbühl. Objects as points. arXiv preprint , 2019. Xingyi Zhou, Tianwei Yin, Vladlen Koltun, and Philipp Krähenbühl. Global Tracking Transformers. In CVPR, pages 8771-8780, 2022. Xingyi Zhou, Anurag Arnab, Chen Sun, and Cordelia Schmid. Dense Video Object Captioning from Disjoint Supervision. In ICLR, 2025. Wenqi Zhu, Jiale Cao, Jin Xie, Shuangming Yang, and Yanwei Pang. CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation. IEEE Transactions on Circuits and Systems for Video Technology, 2024. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7W: Grounded Question Answering in images. In CVPR, pages 4995-5004, 2016. 14 A APPENDIX In this appendix, we show qualitative results from our method in Section A.1, present additional ablations in Section A.2, discuss failure cases and limitations in Section A.3, and share more details about the datasets, the method, and the implementation in Section A.4. A.1 QUALITATIVE RESULTS We show qualitative DVOC results from MaskCaptioner on the LV-VIS (Wang et al., 2023) dataset with (mask, caption) pairs predictions, and on the VidSTG dataset (Zhang et al., 2020) with (box, caption) pairs in Figures 5 and 6 respectively. These examples show that MaskCaptioner has learned to predict captions that focus on the localized objects while integrating high-level scene understanding. On LV-VIS (Fig. 5), MaskCaptioner is able to produce descriptive captions for each objects including related context, even in the case of a scene including a high number of objects. We note that, when tuned on VidSTG (see Fig. 6), MaskCaptioner produces less informative and less descriptive captions. This is due to the VidSTG annotation captions being designed for grounding rather than for captioning or DVOC, and thus being only little descriptive or informative, and overlooking to the global context. In contrast, when trained on LVISCap and LV-VISCap, MaskCaptioner visually generates much richer and accurate descriptions, further highlighting the value of our synthetic captions. Overall, MaskCaptioner effectively learns to jointly segment, detect, track and caption object trajectories. A person is wearing a blue smartwatch with a heart rate monitor on their wrist. The light blue smartwatch displays the time on its face as it rests on a persons wrist. The two dollar bill is being held in the hand of a person. A person is featured on the two-dollar bill. A person is holding a small dollar bill. The person is using a handsaw to cut a piece of wood. The jean is being worn by a person working on a piece of wood with a handsaw. The handsaw is being used by a person to cut a piece of wood. A person is seen adjusting the focus of a microscope. A ring is worn on the finger of a person who is examining a microscope. The microscope is being adjusted with a person's hand. The glass bowl is being filled with an octopus by a person's hand. The octopus is being placed inside a glass bowl by a person. A person wearing an apron is holding an octopus in a glass bowl. A white towel is placed on the wooden table near a knife. The wooden table is supporting a glass bowl filled with an octopus being handled by a person. Figure 5: Qualitative examples, obtained with MaskCaptioner on the LV-VIS dataset. 15 There is a dog away a bicycle on the ground. A child in white is above a pink bicycle. There is a pink bicycle beneath a kid on the ground. An adult in black clothes holds a baby. There is a camera in front of an adult. A baby leans on an adult in the bathroom. There is a horse towards an adult on the grass. An adult rides the horse on the grass. There is a black car behind a child in red. A child in red is in front of a black car. Figure 6: Qualitative examples, obtained with MaskCaptioner on the VidSTG dataset. Table 9: Impact of the prompting strategy on caption quality. Scores are given by a human evaluator from 0 to 2 (incorrect, partially correct, or correct) on a subset from the LV-VIS validation set, and brought to 0-100 range. For the mask visual prompt experiments, we use our best prompt with either the object's bounding boxes or center point coordinates as a localization cue in the text prompt. Visual prompt Prompting method Average rating Rating percentage 0 1 2 bounding boxes single frame 26.8 66.3 13.9 19.9 + multiple frames 27.1 68.7 8.4 22.9 + detailed instructions 29.5 65.0 10.8 24.10 + category labels 80.7 10.8 16.9 72.3 + bounding box coordinates 83.1 9.6 14.5 75.9 + bounding box area 84.3 7.8 15.7 76.5 + few shot examples 85.1 9.1 11.5 79.4 mask boundaries center point coordinates 75.9 17.5 13.2 69.3 bounding box coordinates 77.1 15.7 14.5 69.9 16 A.2 ADDITIONAL ABLATIONS Prompting strategy. In Table 9, we show the distribution of ratings given by the human annotator depending on the prompting strategy. Using the box visual prompt yields a better focus on the queried object and more correct object captions. Importantly, giving the category labels in the prompt helps the model to generate more accurate object captions. Overall, the rate of correct captions with the best prompt indicates good quality for our synthetic object captions. Tracking module. In Table 10, we show that the top-k enhanced tracking (Zhu et al., 2024) is important for tracking objects effectively on the challenging VidSTG dataset (Zhang et al., 2020), as seen in the AssA, CapA and CHOTA scores. We attribute this difference to the numerous objects that disappear for a significant number of frames in the long videos of VidSTG. The top-K approach uses a memory bank of tracked queries that helps keeping track of these entities, while they are lost using the i to i + 1 tracking from OVFormer (Fang et al., 2025). Table 10: Impact of the tracking module on VidSTG DVOC. Tracking method CapA DetA AssA CHOTA OVFormer module (Fang et al., 2025) 51.9 67.0 58.2 58.7 Top-K enhanced module (Zhu et al., 2024) 52.7 66.8 71.0 63.0 A.3 FAILURE CASES AND LIMITATIONS A.3.1 FAILURE CASES (ii) Inconsistent object categories (ii) Detection/segmentation error The knife is being used to cut a steak on a wooden cutting board. The steak is being cut with a knife on a wooden cutting board. A person is holding a steak with tongs on a wooden cutting board. The wooden chopping board holds a steak being cut with a knife and a pair of tongs. A person is holding a roll of tape in front of them. The yellow robe is being worn by a person who is holding a pink spinner. The towel rack is mounted on a wall in a bathroom. A white bath towel hangs on the wall. The earring is being help up by a person in front of a mirror. The bracelet is worn on the wrist of a person who is holding a pink sphygmomanometer. The gray heron is standing in the water with a fish in its beak. The knife is being held by a person next to a steak on a wooden cutting board. (i) Recognition error Figure 7: Some DVOC failure cases of MaskCaptioner observed on the LV-VIS dataset. We observe 3 main types of failure cases for our approach and illustrate them in Figure 7: (i) Recognition error: in the case of ambiguous context, blurred instance or rare categories, MaskCaptioner might fail to recognize the object it is describing, sometimes leading to a wrong denomination 17 (e.g. here, a rare "pair of tongs" is incorrectly denominated as "knife"). (ii) Inconsistent captions : in similar situations, the captions produced by MaskCaptioner can be inconsistent when referring to the same object in different captions. (iii) Detection/segmentation error : In case of complex movement, appearance change or occlusion, MaskCaptioner sometimes fails to detect, segment, or track objects, leading to missing captions (e.g. here, the fish is not detected in the beak of the heron, and thus has no associated caption). A.3.2 LIMITATIONS Limitations. While our approach outperforms the state-of-the-art in dense video object captioning, there is still room for improving localization and captioning. Localization sometimes fails, in particular for small objects. Furthermore, the automatically generated captions are, in some cases, too generic, and can mix up two objects of the same class in the video. Future work could investigate different automatic captioning techniques for DVOC, for example based on an approach such as Ref-SAV(Yuan et al., 2025), which generates captions in multiple steps to separate appearance from motion description. Eventually, objects in the videos often perform a single or few actions, and we believe that it is important for future works to build benchmarks with more complex object interactions, for instance with multiple action segments. A.4 ADDITIONAL DETAILS A.4.1 PROMPTING STRATEGY DETAILS The full prompt template used to generate our LVISCap and LV-VISCap datasets is illustrated in Figure 8. For a video x with N objects, we prompt the VLM N times, and for each object j the prompt is composed in three parts: (i) the static system prompt ps gives general instructions for object-level caption generation, practical rules, prompting format and an example, (ii) the user prompt pu(j) depends on the example and contains textual annotation information to help the model describe objects and interactions accurately. These information notably contain target object positions, areas, category, and the categories of other objects in the scene, (iii) the visual prompt ˆxj consists of 4 sampled frames with drawn bounding boxes for object j. A.4.2 DATASET DETAILS LVIS (Gupta et al., 2019) is a large-vocabulary image instance segmentation dataset with a long-tail distribution of 1203 annotated categories, for a total of over 2.2 million annotations in 164k natural images. The dataset is split in a training set with 100k images and 1.2M annotations, a validation set with 19k images and 244k annotations, and two test sets with 19k images each. LV-VIS (Wang et al., 2023) is a recent large-vocabulary video instance segmentation (VIS) benchmark. It comprises 4, 828 videos with over 26k video segmentation masks from 1,196 object categories, with an average of over 5.4 objects per video. The data is split into a training set of 3, 076 videos and 16k video-level annotations, a validation set of 837 videos and 3.7k annotations, and a test set with 908 videos. LVISCap and LV-VISCap denote our extensions of LVIS and LV-VIS (see Section 3.1). LVISCap extends LVIS with a total of 1, 488, 354 synthetic captions, including 1, 244, 271 training annotations and 244, 083 validation annotations. LV-VISCap includes a total of 19, 717 synthetic captions for 16, 017 training and 3, 700 validation annotations. Note that in the absence of annotations on the test sets of LVIS and LV-VIS, we only extend the training and validation sets with captions, and use the validation set for evaluation. VidSTG(Zhang et al., 2020) is a spatio-temporal video grounding dataset, containing 6, 924 videos with 44, 808 exhaustive trajectories annotations over 80 categories, as well as object sentence descriptions (for some objects and some timestamps only), which serve as queries for grounding. Zhou et al. (2025) repurposed the dataset for DVOC by using queries as captions, and by excluding annotations without captions during evaluation. Following Zhou et al. (2025), we sample 200 frames uniformly across each video for both training and evaluation. Overall, the repurposed VidSTG training set counts 28, 275 object trajectories, with 15, 182 object captions. The validation set, used for DVOC evaluation, includes 602 videos with 1, 644 captions. 18 """ Generate a caption for a video, focusing on a queried object highlighted with green bounding boxes, and semantic class provided. It should be a rich sentence describing the object's APPEARANCE and ACTION, trajectory, or interaction with other objects in the video frames. The other objects are not highlighted and should be mentioned only if they interact with the query object or are relevant to the context. # Rules - The single query object highlighted with bounding boxes in the frames SHOULD be the subject of the sentence. ex: for category "bottle": "The bottle is being inspected by a person" - Only facts that are visible in the video should be mentioned. - You should NOT mention the fact that the query object is highlighted with green bounding boxes. - You should RETURN A CAPTION no matter what, even if the query object is not visible in any frame. - If multiple objects of the same class as the query object are visible, the caption should focus exclusively on the single highlighted object and describe it as the singular subject of the sentence. - No foreign alphabets or special characters should be used in the caption. Translate foreign words if needed. # Input Details - **Frames**: Provided sequence of 4 frames sampled from a video, in which a bounding-box highlights a query object - **Bounding Box**: Locations of the query object in the respective frames, in the format [(xmin, ymin, xmax, ymax),...] with each value ranging from 0 to 1000 representing a percentage of image dimensions. - **Area**: Area of the query object in the image, as a percentage of the total image area. This could help to determine wether the object is in the background. (big object class with small area) - **Semantic Class**: Class of the query object to be described in the caption - **Other classes**: Some classes of other objects in the video, These objects are not highlighted and should be mentioned only if relevant. # Examples - **Input**: An image showing a women dressed in black and white and a dog both running on a beach with people in the background. The woman's short is highlighted with green bounding boxes in the video. object class: "short pants". - **Output**: "A black and white short pants is being worn by a woman running with a dog across the sandy beach" """ """ Query object Bounding Box location in the respective frames: {formatted_normalized_bbox} Query object areas in percentage of the respective frames : {formatted_area_pct} Query object Class: '{cat["name"]}' Other classes: {", ".join(other_cat_names)} Generate a one-sentence caption, focusing on the object of class '{cat["name"]}' highlighted by the bounding boxes. """ System Prompt Visual prompt Sampled frames with drawn bounding boxes ... ... User Prompt Figure 8: Prompt template used to generate synthetic object captions from video segmentation annotations for the LV-VIS dataset Wang et al. (2023). The system prompt ps contains general instructions, while user prompt and visual prompt pu(j) and ˆxj are formatted with information from each annotation. Video Localized Narratives (VLN) extends existing datasets with "narrations of actors actions" in videos. We use the subset from the UVO dataset, which contains 3 sparsely annotated frames with non exhaustive captions for a total of 5, 136 training and 2, 451 validation videos. BenSMOT contains manually collected annotations of bounding box trajectories and associated captions, focusing exclusively on humans in videos. It includes an average of 2.2 instances per video, and counts 2, 284 videos for training and 1, 008 for evaluation. A.4.3 MORE IMPLEMENTATION DETAILS The visual backbone is initialized with weights pretrained on ImageNet-21K Deng et al. (2009) following OVFormer Fang et al. (2025), and the Mask2Former Cheng et al. (2022) weights are trained from scratch. The OVFormer classifier uses a frozen CLIP ViT-B/32 Radford et al. (2021) encoder. The captioning head is initialized with weights from BLIP-2 Li et al. (2023a) with frozen LLM OPT-2.7B Zhang et al. (2022a) following Chouduri et al. Choudhuri et al. (2024). For all experiments except LV-VIS tuning, we first train the segmentation/detection model, then freeze it and tune the captioning head. Respectively for LVIS/VidSTG/LV-VIS we train for 440k/40k/22k for the first stage and 5k/2k/2k for the second stage. When tuning pretrained models on VidSTG/VLN/BenSMOT, we train the two stages for (15k, 2k)/(15k, 500)/(15k, 2k) steps, whereas for 19 LV-VIS, we end-to-end tune the model for 2k steps. Experiments are run with a batch size of 8, except when using LVIS+COCO and LV-VIS where we use a batch size of 4 and for video-level tuning of the captioning head where we use a batch-size of 1. Experiments on LV-VIS are end-to-end trainings with clip-level supervision only. For VidSTG, VLN and BenSMOT experiments we use video-level tuning for captioning with temporal aggregation, with Tagg = 8. For all experiments we train the model with a clips of size T = 2, and at inference use T = 5/1/1/1, Tmatch = 1/100/20/40, Kmatch = 1/7/5/7 for LV-VIS/VidSTG/VLN/BenSMOT experiments respectively. For the largest dataset (COCO + LVIS) the optimization takes 2 days on 4 H100 GPUs. Following OVFormer Fang et al. (2025), for all datasets we use an AdamW optimizer and the step learning rate schedule, with an initial learning rate of 0.0001 and a weight decay of 0.05, and apply a 0.1 learning rate multiplier to the backbone. We decay the learning rate at 0.9 and 0.95 fractions of the total number of training steps by a factor of 10. For respectively image/video datasets, we resize the shortest edge of the image to 800/480 for SwinB and 800/360 for ResNet for training and inference. 20
2510.14900
Mapping Smarter, Not Harder: A Test-Time Reinforcement Learning Agent That Improves Without Labels or Model Updates Wen-Kwang Tsao*, Yao-Ching Yu*, Chien-Ming Huang AI Lab, TrendMicro {spark_tsao,yaoching_yu}@trendmicro.com Abstract The Enterprise Intelligence Platform must in- tegrate logs from numerous third-party ven- dors in order to perform various downstream tasks. However, vendor documentation is of- ten unavailable at test time. It is either mis- placed, mismatched, poorly formatted, or in- complete, which makes schema mapping chal- lenging. We introduce a reinforcement learn- ing agent that can self-improve without labeled examples or model weight updates. During inference, the agent first identifies ambiguous field-mapping attempts, then generates targeted web-search queries to gather external evidence, and finally applies a confidence-based reward to iteratively refine its mappings. To demon- strate this concept, we converted Microsoft Defender for Endpoint logs into a common schema. Our method increased mapping accu- racy from 56.4% (LLM-only) to 72.73% (RAG) to 93.94% over 100 iterations using GPT-4o. At the same time, it reduced the number of low-confidence mappings requiring expert re- view by 85%. This new approach provides an evidence-driven, transparent method for solv- ing future industry problems, paving the way for more robust, accountable, scalable, efficient, flexible, adaptable, and collaborative solutions. 1 Introduction Enterprise IT environments are rapidly evolving toward proactive, agent-driven workflows. At the core of these systems lies a foundational require- ment: the ability to efficiently process and seman- tically interpret logs from a wide range of third- party sources. Such capability enables agents to remain context-aware, access real-time data, and make well-informed decisions. For cybersecurity use cases, enterprises must ingest massive volumes of logs—often terabytes per day—from heterogeneous sources such as fire- walls, servers, endpoints, cloud applications, net- *Corresponding author. work flows, policy events, file operations, and API calls, in order to enable effective security operations and real-time threat detection (Reid, 2023). The cost of Security Information and Event Management (SIEM) ingestion is substan- tial, exceeding $500k annually for just 500 GB per day (Hazel, 2021; Microsoft, 2024). The chal- lenge lies not only in scale, but also in achieving semantic consistency across dozens to hundreds of disparate event types. Failures in log correla- tion and schema normalization have contributed to catastrophic breaches—including Target (2013) and Equifax (2017)—where overlooked alerts and misaligned schemas resulted in damages exceeding $1 billion (Krishna, 2023). The emergence of large language models (LLMs) with robust natural language processing ca- pabilities has the potential to transform third-party log integration. These models could dramatically reduce the need for labor-intensive processes that require experts. The full log integration pipeline comprises four stages: processing raw logs into structured data, mapping source schemas to target schemas, generating data transformation code, and deploying to production with ongoing monitoring. Schema mapping is the critical decision point that underpins the success of the entire integration pro- cess. This paper focuses on the practical challenge of schema mapping, a topic that is often overlooked in current literature. Our focus is on a scenario in which enterprises ingest new data into their platforms. Therefore, the target schema is usually well-documented and stable. By contrast, incoming data source schemas are often poorly documented, typically due to their origin in legacy systems or outdated software versions. Unlike conventional machine learning tasks—where the challenge is extracting key fea- tures from abundant data—the difficulty here is the opposite: the source provides too little context. Be- cause these vendor schemas lack labeled training arXiv:2510.14900v1 [cs.AI] 16 Oct 2025 data, fine-tuning or other supervised methods are impractical. Although expert review remains neces- sary, it is resource-intensive and must be carefully prioritized. To address these challenges, we propose a test- time reinforcement learning (RL) agent that can self-improve without updating model weights or re- lying on pre-defined labeled data. Our approach is inspired by the TTRL framework (Zuo et al., 2025), which improves model performance on unlabeled data through test-stage rewards. In the absence of ground-truth labels, we introduce a confidence- based score as a proxy reward signal to guide the agent toward more accurate mappings. Our design explicitly targets industrially practical constraints: model updates are costly, GPU-intensive, and op- erationally complex. Instead, our method adapts the system prompt at inference time by iteratively enriching the context in a verbal RL-driven manner, thereby enabling continual self-improvement under real-world deployment conditions. The confidence-based proxy reward is both in- tuitive and interpretable. The agent identifies con- flicts and ambiguities in its prior mapping attempts and formulates targeted search queries to gather external evidence. Confidence scores then serve as reward signals to determine whether the collected evidence should be retained or discarded, guiding iterative refinement. Furthermore, the agent pro- duces transparent reasoning traces, enabling human experts to focus their review on low-confidence mappings, thereby reducing manual verification costs while preserving overall reliability. This method has several advantages over existing approaches. First, it achieves significantly higher mapping accuracy through iterative refinement, re- ducing reliance on static, one-time attempts. Sec- ond, it uses confidence-based rewards instead of ground truth labels, enabling effective learning in settings where labeled data is unavailable. Finally, it promotes transparency by revealing its reasoning and evidence collection processes. This empow- ers security experts to understand decisions and prioritize low-confidence mappings. Overall, our key contributions are: 1) identifying the challenge of schema mapping where prompting, fine-tuning, and retrieval-augmented generation all face limitations and no ground truth labels are avail- able, and 2) proposing a test-time reinforcement learning framework that enables an agent to im- prove accuracy over time. This approach opens a new research direction by using confidence as a proxy signal to guide the agent’s learning process. Log Raw CEF:0|h1|h2|001|5| LocalPort=443 Log Parsed Vendor=h1 Product=h2 DeviceVersion=001 Severity=5 LocalPort=443 Target schema Common Schema fields • vendor • pname • severity • src • spt • dst • dpt KB AI Response: the answer is … src more context - layer:network - type:array[string] - desc: the source IP address RQ1: Can we detect the ambiguity in the LocalPort mapping? RQ2: Can we improve without label data at test time? RQ3: What if we don’t have high-quality documents in our internal KB? Prompt: Which field in target schema should LocalPort map to? Context: A: Log parsing B: Field mapping C: Field retrieve for augment generation D: E2e test-time reinforcement learning agent Figure 1: Position of our test-time RL agent relative to prior work in the schema matching pipeline. In Section A, Logparser-LLM Zhong et al. (2024) han- dles raw-to-structured parsing. In section B, Schema- Matching-LLM Parciak et al. (2024) shows baseline one-shot mapping capability. In section C, ReMatch Sheetrit et al. (2024) adds retrieval when full documen- tation exists. MatchMaker Seedat and van der Schaar (2024) pre-embeds target schemas for reuse. In sec- tion D, Self-consistency Wang et al. (2022) improves chain-of-thought reasoning. Search-R1 Jin et al. (2025) enhances LLM reasoning with search-augmented RL. This example shows that the log LocalPort is ambiguous for decision making; we need more context to deter- mine whether this field should map to src or dst. Our research uniquely extends rightward beyond traditional enterprise knowledge bases to handle newly seen logs with ill-formatted or incomplete documentation. Unlike fine-tuning approaches that require labeled data and risk overfitting, our agent operates without test-time labels, conducting internet searches to gather evidence outside the enterprise KB scope. This addresses the critical gap where traditional methods fail on unseen vendor schemas with insufficient documentation. 2 Related Work The foundation of schema mapping begins with converting raw logs into structured data. Zhong et al. (2024) introduced LogParser-LLM, which addresses the initial step of our pipeline by trans- forming unstructured log messages into key-value pairs. Their hybrid approach combines an LLM- based template extractor with a prefix-tree cluster- ing method, achieving efficient parsing; the authors report ~272.5 LLM calls amortized across large log sets. This work eliminates the need for hyper- parameter tuning or labeled data, enabling rapid adaptability to new log formats. LogParser-LLM’s contribution is complemen- tary to our work: while they focus on the raw-to- structured parsing phase. In contrast, our approach begins where their process ends: once logs have already been parsed into structured records, we take these structured logs and further map them into standardized, common schemas that enable consistent downstream analysis, correlation, and integration across diverse sources. Parciak et al. (2024) conducted a comprehensive experimental study of LLM capabilities in schema matching, focusing on off-the-shelf performance using only element names and descriptions. Their work provides crucial insights into the baseline capabilities of LLMs for schema matching tasks, demonstrating that context scope significantly af- fects performance—neither too little nor too much context leads to optimal results. Their findings directly inform our approach in several ways: (1) they validate that LLMs can per- form meaningful schema matching without requir- ing data instances, which aligns with our privacy- sensitive enterprise scenarios; (2) their context opti- mization insights guide our prompt engineering; and (3) their baseline performance metrics pro- vide a foundation for measuring the improvements achieved by our reinforcement learning approach. However, their work focuses on one-shot matching capabilities, while our approach addresses the it- erative improvement challenge through test-time learning. ReMatch, introduced by Sheetrit et al. (2024), leverages retrieval-augmented prompting to reduce the target schema search space before matching. Their approach assumes the availability of compre- hensive documentation and uses embedding-based retrieval to balance recall and precision in schema matching tasks. ReMatch demonstrates strong per- formance in healthcare schema matching scenarios where complete documentation is available. Our work differs from ReMatch in a fundamental assumption: while ReMatch operates in environ- ments with well-documented schemas and compre- hensive knowledge bases, our approach is designed for practical enterprise scenarios where such docu- mentation is often incomplete or unavailable. Re- Match’s retrieval mechanism works within a closed set of known mappings, whereas our agent dynam- ically discovers and accumulates evidence from external sources to handle previously unseen ven- dor schemas. Seedat and van der Schaar (2024) introduced MatchMaker, which decomposes schema match- ing into candidate generation, refinement, and con- fidence scoring steps using LLMs and vector re- trieval. Their approach focuses on embedding tar- get schemas for better retrieval and future reuse, generating synthetic in-context examples on-the-fly for zero-shot self-improvement. MatchMaker’s compositional approach and con- fidence scoring align with our methodology, but their self-improvement mechanism differs signif- icantly from ours. While MatchMaker generates synthetic examples for in-context learning, our ap- proach accumulates real-world evidence through external search and interaction. MatchMaker’s fo- cus on target schema embedding optimization com- plements our source schema adaptation strategy, suggesting potential for hybrid approaches in fu- ture work. Two key techniques underpin our reinforce- ment learning framework: search-enhanced rea- soning and self-consistency validation. Jin et al. (2025) demonstrated how external search can en- hance LLM reasoning capabilities, providing a foundation for our evidence collection mechanism. Their Search-R1 approach shows that targeted web search can significantly improve reasoning accu- racy in complex tasks. Wang et al. (2022) established that consistency across multiple reasoning paths serves as a reli- able indicator of correctness, forming the basis of our confidence-based reward system. Their self- consistency approach demonstrates that taking the majority answer from multiple sampled reasoning paths significantly improves accuracy, which di- rectly informs our confidence score calculation. A closely related line of research is the Reflexion framework (Shinn et al., 2023), which introduces verbal reinforcement learning as a way for language agents to improve through self-generated feedback rather than model weight updates. Reflexion agents reflect on task feedback, store these reflections in episodic memory, and leverage them to guide future decisions. This approach demonstrates that linguis- tic feedback—whether scalar or free-form—can significantly enhance agent performance across se- quential decision-making, coding, and reasoning tasks, achieving state-of-the-art results on bench- marks such as HumanEval. Our work shares a similar philosophy in emphasizing memory- and feedback-driven improvement at test time; how- ever, we focus specifically on schema mapping un- der industrial constraints where ground truth labels and comprehensive documentation are unavailable. In this setting, we extend the notion of verbal rein- forcement learning by integrating external evidence collection with confidence-based rewards to refine mappings dynamically. Our work addresses the critical gap where ex- isting methods fail on newly encountered ven- dor schemas with ill-formatted or incomplete doc- umentation. Unlike prior approaches that op- erate within controlled environments with well- documented schemas, our test-time reinforcement learning agent adapts dynamically by accumulating external evidence without requiring model updates or labeled training data. 3 Methodology 3.1 Problem Formulation We formally define the schema mapping prob- lem as follows. Given a source schema S = {s1, s2, . . . , sn} and a target schema T = {t1, t2, . . . , tm}, we seek to find a mapping func- tion f : S →T ∪{∅} such that f(si) = tj repre- sents a semantic correspondence between source field si and target field tj, or f(si) = ∅indicates no corresponding field exists in the target schema. Our objective is to maximize the correctness of this mapping function: maxf Pn i=1 I[f(si) = f∗(si)] where f∗(si) represents the ground truth mapping and I[·] is the indicator function. However, in practical deployment scenarios—particularly when processing third-party vendor logs ground truth mappings f∗are unavailable. To address this challenge, we use confidence scores as a proxy for correctness. We define con- fidence C(f(si)) as the consistency of mapping predictions across multiple inferences, serving as a surrogate objective: maxf Pn i=1 C(f(si)). 3.2 Test-Time Reinforcement Learning Framework Our test-time reinforcement learning agent im- proves schema mapping policy’s accuracy by it- eratively collecting evidence and reinforcing use- ful context. The agent starts from an empty evi- dence set, identifies inconsistencies through mul- tiple mapping attempts, and formulates targeted search queries for ambiguous fields. Collected evi- dence is added to the context, and confidence-based reward signals guide which evidence to retain or discard. The complete algorithm is detailed in Al- gorithm 1. 3.3 Reinforcement Learning Formulation To formalize our approach, we define the reinforce- ment learning components as follows: State: The current state st consists of the schema mapping hypothesis from source to target fields and the collected evidence at time t. Specifically: st = {Mt, Et} where Mt is the current mapping hypothesis and Et is the set of evidence collected up to time t. Action: The action at involves selecting a field or conflict to investigate and executing a targeted search query. The action space includes all possible fields that could be investigated and the possible search queries that could be formulated. Reward: The reward rt is derived from the change in confidence score after adding new ev- idence: rt = Ct+1 −Ct where Ct is the confidence score at time t. The confidence score serves as a proxy for accuracy when ground truth labels are unavailable. Policy: The agent’s policy π(a|s) specifies the suggested mapping from source schema fields to target schema fields. While the policy can be im- plemented in various ways, one practical approach is to leverage an LLM’s reasoning process. In this setup, the LLM performs the mapping by combin- ing its prior knowledge with contextual evidence, enabling the system to detect conflicts and generate targeted search queries. We define a conflict as a disagreement among the n prompt-variant pre- dictions for the same source field within a single iteration (as opposed to cross-run drift). Learning: The goal of learning is to adapt a policy’s behavior in a beneficial direction by lever- aging rewards that reflect the situation. Instead of updating the LLM’s model weights, learning takes place through the accumulation of useful evidence in the agent’s context. Evidence that increases con- fidence is preserved, while unhelpful evidence is discarded. This process reflects a form of verbal, memory-based learning in which the agent’s knowl- edge base is continuously refined. We use verbal RL in the Reflexion sense—policy adaptation via context/memory rather than parame- ter updates; rewards are intrinsic confidence deltas, not ground-truth returns. Algorithm 1 Iterative RL Schema Mapping with Confidence Improvement Inputs: Source schema S = {s1, . . . , sn}; target schema T = {t1, . . . , tm}; iteration limit α (default: 100); conflict detection attempts n (default: 3); initial evidence context E = ∅; generative LLM F. Outputs: Mapping function f; refined evidence context E. 1: for i ←1 to α do 2: f ←F(S, E) // Generate initial mappings using current evidence 3: C ←ConflictDetection(f, n) // Detect inconsistent mappings 4: for each source field si ∈C do 5: Qsi ←QueryFormulation(si) // Formulate search queries 6: esi ←EvidenceCollection(Qsi) // Collect external evidence 7: rsi ←ConfidenceEvaluation(f(si), esi) // Evaluate new confidence 8: rprev ←Confidence(f(si)) // Retrieve previous confidence 9: if rsi > rprev then 10: E ←ContextUpdate(E, esi, rsi) // Update context if confidence improves 11: end if 12: end for 13: end for 14: return f, E 4 Experiment 4.1 Setup We evaluated our approach using two schemas: the source schema was the Microsoft Defender for End- point schema containing 195 fields, and the target was our Common Schema with 137 fields. The ground truth consisted of 66 manually verified field mappings, curated collaboratively by domain, threat intelligence, and product experts to cover a diverse range of field types and complexity levels. We employed GPT-4o as the underlying model and assessed performance using two key metrics: (1) Confidence score—measuring the consistency of predictions across multiple attempts within a single iteration, with special handling for empty outputs; and (2) Accuracy—defined as the percentage of correctly predicted mappings among the 66 verified pairs. For each field mapping iteration, the model was executed three times (n=3), and both the confidence score and accuracy were computed. While accu- racy depends on the ground truth and is used solely for evaluation, confidence scores—which can be calculated without ground truth—are leveraged to guide the reinforcement learning process and are therefore suitable for production deployment. Confidence Score Calculation: We calcu- late confidence as the consistency of predic- tions across multiple attempts using a modified frequency-based approach. For a given field, if we collect predictions P = [p1, p2, p3] across three iterations, the confidence score is: C = count(most_frequent_prediction) adjusted_total where the adjusted to- tal accounts for empty predictions with reduced weight (0.5 instead of 1.0). The design aligns with the findings of Kalai et al. (2025), who argue that current training and evaluation paradigms implic- itly reward guessing behavior in language models, leading to hallucinations. By contrast, our scoring mechanism provides a quantitative incentive for uncertainty acknowledgment, steering the model toward more trustworthy behavior. We prepare two baselines for comparison. Base- line 1: We used GPT-4o with a single-shot prompt containing only the field name and value, resulting in an accuracy of 56.36%. Baseline 2: We used GPT-4o with a single-shot prompt enhanced with additional field descriptions, data types, and sam- ple data context from our internal knowledge base, resulting in an accuracy of 72.73%. 4.2 Performance Improvements Starting from a Baseline 2: accuracy of 72.73% with GPT-4o in the first iteration, our method achieved significant improvements throughout the experiment. By the end of the 100-iteration exper- iment, the model demonstrated consistent perfor- mance with accuracy reaching 93.94%. Our analysis of the 100-iteration experiment re- veals several key performance characteristics. GPT- 4o collects 81 evidence tuples across 100 iterations, with the most significant accuracy gains occurring in the early stages—for example, iteration 1 shows Figure 2: Comprehensive performance analysis of the reinforcement learning agent over 100 iterations. Top: Accuracy improves from 72.73% to 93.94%, while confidence trends upward and approaches 1.0, highlighting a persistent overconfidence gap. Middle: Accuracy variability is high in early iterations but stabilizes over time. Bottom: Conflict field count decreases from 26 to 4(85% reduction), demonstrating reduced ambiguity as evidence accumulates. a +21.21% gain, iteration 5 achieves +9.60%, and iteration 6 yields +10.71%. The system makes de- cisions solely based on confidence scores, without referencing accuracy at any point during execution. This reflects real-world conditions in which ground truth labels are unavailable. In these sce- narios, accuracy can only be measured retrospec- tively to confirm whether the solution performed correctly. Notably, in 19 of the 100 iterations, the system rejected newly proposed evidence, demon- strating that it learned to recognize when gather- ing additional information was unlikely to improve results. This selective behavior preserves a high- quality evidence set, which is critical not only for validating outcomes but also for providing trans- parency into how the agent arrived at its decisions. By the conclusion of the 100 iterations, the agent resolved 22 conflicts, only 4 fields were flagged as low-confidence and requiring expert review—down from 26 initially—representing an 85% reduction in the verification burden on security analysts and allowing them to focus on the most ambiguous or critical cases. To evaluate the robustness of our approach, we ran the full 100-iteration process multiple times. The final accuracy consistently reached 93-94%, with minimal variance (standard deviation less than 0.01) in the later iterations. This demonstrates that the improvement is reliable and not due to chance. 4.3 Evidence Collection Analysis For this experiment, we equipped the AI agent with a tool that enables internet searches for additional facts and evidence. Specifically, we used a generic search utility powered by Bing. In each iteration, the agent analyzes conflicts identified across three prompt variants, formulates a targeted search query, and retrieves relevant results from the tool. This setup demonstrates one possible method of evi- dence collection; other approaches could include Figure 3: The relationship between confidence and accu- racy. Path 1 shows the starting point. Path 2 illustrates how confidence as a proxy reward improves accuracy. Path 3 highlights the challenge of overconfidence, where confidence saturates while accuracy lags behind. There- fore, more engineering and research efforts are needed to bring the confidence curve down. The diagonal line indicates perfect calibration, and the curves above it show overconfidence. consulting human experts, performing advanced reasoning, or validating findings against produc- tion data. We organized the collected evidence into three- element tuples, each comprising the detected con- flict, the resolution plan, and the retrieved evidence. This structured representation ensures systematic evidence collection and evaluation. Evidence is re- tained only if it demonstrably increases the LLM’s confidence in its mapping decisions. Helpful ev- idence contributes in 4 ways, (1) by enhancing awareness through the identification of ambigu- ous fields and conflicting mappings; (2) by en- abling self-correction through improved reasoning about relationships among fields; (3) by improving clarity via authoritative information on how fields are defined and used in both the common schema and vendor-specific schemas (e.g., Microsoft De- fender); and (4) by revealing context-dependent mappings, in which the correct correspondence varies by scenario and requires additional contex- tual understanding. 5 Conclusion This paper presents a test-time learning framework in which a reinforcement learning agent improves schema mapping accuracy through iterative evi- dence collection and context refinement—without requiring model weight updates or pre-collected training data. Unlike conventional approaches that rely on all labeled data before inference, our method actively gathers new evidence during test- time to refine its decisions. This design eliminates the high computational cost, GPU dependency, and potential instability associated with model retrain- ing, making it more practical for real-world de- ployment. A confidence-based score serves as a proxy reward for accuracy, providing a novel mech- anism that evaluates model consistency across mul- tiple mapping attempts within a single iteration. Through systematic evidence collection and evalu- ation, the agent resolves mapping ambiguities with transparent reasoning that facilitates expert review and validation. Applied to Microsoft Defender for Endpoint logs mapped to a common schema, our approach improves accuracy from 72.73% to 93.94%, while reducing the number of fields requir- ing expert review by 85%—from 26 to 4. A key insight from our work is the use of confi- dence scores as a proxy for ground-truth labels, as conceptually illustrated in Figure 3. While confi- dence can effectively guide accuracy improvement, our results reveal persistent overconfidence, where confidence values exceed actual accuracy. This observation underscores the need for better confi- dence definition and calibration. In our current ap- proach, confidence is measured as the consistency of predictions across three trials. Future research could extend this by (1) employing multiple mod- els to generate diverse predictions and compute ensemble-based confidence, or (2) directly prompt- ing LLMs to provide explicit self-assessed confi- dence scores alongside their outputs. Additional experiments extending the number of inference at- tempts from three to ten show encouraging signs of reducing the overconfidence gap (see Appendix B), albeit with increased computational cost. These enhancements could yield better-calibrated confi- dence measures that more accurately reflect true prediction quality. Limitations While our approach demonstrates significant im- provements, several limitations should be acknowl- edged: 1. Evidence Quality Dependency: The sys- tem’s performance is dependent on the quality and availability of external evidence sources. In domains where documentation is sparse or inconsistent, the improvement may be limited. 2. Computational Overhead: The iterative ev- idence collection process requires multiple LLM calls and external searches, which may increase computational costs compared to single-shot approaches. 3. Domain Specificity: Our evaluation focuses on cybersecurity schemas. While the ap- proach is general, validation in other domains (healthcare, finance, etc.) would strengthen the generalizability claims. 4. Scope Limitations: We focus on 1-to-1 map- pings to address the core challenge of knowl- edge scarcity. Extension to more complex mapping cardinalities (1-to-N, N-to-M) re- mains future work. Ethical Considerations This study does not involve human subjects or per- sonal data. However, the method collects web evidence at test time, which could occasionally include malicious or adversarial content. Practi- tioners reproducing this system should apply input sanitization, source filtering, and sandboxing to prevent prompt-injection or security risks. References Thomas Hazel. 2021. Harnessing the value of log data analytics. Techstrong Research. Accessed: 2025-06- 23. Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. 2025. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint arXiv:2503.09516. Adam Tauman Kalai, Ofir Nachum, Santosh S Vem- pala, and Edwin Zhang. 2025. Why language models hallucinate. arXiv preprint arXiv:2509.04664. G. Krishna. 2023. Security logging and monitoring failures: A comprehensive guide. Accessed: 2025- 06-23. Microsoft. 2024. Microsoft sentinel pricing calculator. Accessed: 2025-06-23. Marcel Parciak, Brecht Vandevoort, Frank Neven, Liesbet M Peeters, and Stijn Vansummeren. 2024. Schema matching with large language models: an ex- perimental study. arXiv preprint arXiv:2407.11852. Colin Reid. 2023. Searching for a siem solution? here are 7 things it likely needs. Gartner. Accessed: 2025-06-23. Nabeel Seedat and Mihaela van der Schaar. 2024. Matchmaker: Self-improving large language model programs for schema matching. arXiv preprint arXiv:2410.24105. Eitam Sheetrit, Menachem Brief, Moshik Mishaeli, and Oren Elisha. 2024. Rematch: Retrieval en- hanced schema matching with llms. arXiv preprint arXiv:2403.01567. Noah Shinn, Federico Cassano, Beck Labash, Ash- win Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with ver- bal reinforcement learning, 2023. URL https://arxiv. org/abs/2303.11366, 1. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Aoxiao Zhong, Dengyao Mo, Guiyang Liu, Jinbu Liu, Qingda Lu, Qi Zhou, Jiesheng Wu, Quanzheng Li, and Qingsong Wen. 2024. Logparser-llm: Advancing efficient log parsing with large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4559–4570. Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xin- wei Long, Ermo Hua, and 1 others. 2025. Ttrl: Test-time reinforcement learning. arXiv preprint arXiv:2504.16084. A Case Study: Direction-Sensitive Port Mapping (Iteration 49) To illustrate the practical difficulty of schema map- ping, we examine a representative example from iteration 49 involving the common schema field dpt (destination port). The field dpt is defined in the Trend Micro Common Schema as “the service destination port of the private application server (dstport).” In Microsoft Defender for Endpoint, however, two candidate fields exist: LocalPort (TCP port on the local device used during commu- nication) and RemotePort (TCP port on the remote device being connected to). Mapping ambiguity. Both candidates are seman- tically plausible depending on the traffic direction. For outbound connections, the local device is the source, so the destination port resides on the remote endpoint—corresponding to RemotePort. Con- versely, for inbound connections (e.g., an RDP session initiated by a remote host), the local device becomes the destination, and thus the correct map- ping is LocalPort. Without an explicit indicator of connection direction, an LLM can easily misassign the field. Why fine-tuning does not help. Simply fine- tuning model weights cannot reliably resolve this ambiguity. The correct mapping is not a mem- orized association (dpt →RemotePort) but a conditional rule that depends on runtime con- text—information that is absent from the training corpus. Updating parameters may reinforce spu- rious correlations rather than teach the model to reason over directionality, resulting in brittle behav- ior across different network scenarios. Evidence-based reasoning. During iteration 49, the agent retrieved definitional evidence clarifying that: • LocalPort refers to the port on the local de- vice. • RemotePort refers to the port on the remote device being connected to. • dpt represents the service destination port. Although this evidence did not directly yield the final mapping, it revealed that the correct cor- respondence varies by communication context and motivated the agent to infer direction from auxiliary fields such as RemoteIP, LocalIP, and ActionType. The agent’s confidence increased from 0.67 to 1.0, reflecting a clearer and more con- sistent conceptual understanding. Categorization of helpful evidence. Accord- ing to the taxonomy defined in the main paper, this instance exemplifies Category (4): revealing context-dependent mappings. The evidence was helpful because it exposed the conditional nature of the mapping—showing that correctness depends on dynamic context rather than static field align- ment—and thereby guided subsequent reasoning toward more generalizable, context-aware mapping rules. Derived practical rule. dpt = RemotePort, if Direction = Outbound; LocalPort, if Direction = Inbound; (defer/flag), if Direction is unknown. This case highlights that progress in schema align- ment arises not from parameter updates but from the integration of structured evidence and contex- tual inference. B Overconfidence Mitigation Experiments To address the overconfidence phenomenon ob- served in our main experiments (Figure 3), we conducted an additional study to examine how in- creasing the number of inference attempts affects confidence calibration. Our hypothesis was that a larger number of inference samples would im- prove statistical robustness and reduce systematic overconfidence. B.1 Experimental Setup We extended the original setup by increasing the number of inference attempts per iteration from 3 to 10. This change allows for more reliable con- fidence estimation through better sampling of the model’s output distribution. All other experimental parameters remained identical to the main study: the same source and target schemas (Microsoft De- fender for Endpoint with 195 fields and the Com- mon Schema with 137 fields), the same 66 manu- ally verified ground-truth mappings, and the same GPT-4o model configuration. The confidence score computation followed the same principle of prediction consistency but with a larger sample size (10 vs. 3). This design pro- vides a more statistically grounded measure of un- certainty and is expected to yield better-calibrated confidence estimates. B.2 Results and Analysis Table 1 presents a comparison between the 3- and 10-inference settings. Increasing the number of inference attempts from 3 to 10 effectively narrows the gap between confidence and accuracy. With 10 inferences, the mean confidence (89.3%) aligns much more closely with the observed accuracy (92.1%), whereas with 3 inferences, confidence (95.2%) noticeably ex- ceeds accuracy (93.94%), indicating mild overcon- fidence. This calibration improvement comes with a mod- est cost: while the confidence estimates become more realistic, overall accuracy decreases slightly (93.94% →92.1%). Nevertheless, the more conser- vative confidence levels offer better guidance for expert review prioritization. The reduction in low- confidence fields remains substantial under both settings (85% and 80% reductions), demonstrat- ing that the key benefit—reducing expert review effort—is preserved. B.3 Implications and Trade-offs These findings highlight a clear balance between calibration quality and computational cost. Increas- ing the number of inference attempts leads to better- calibrated confidence scores but requires more com- putation. In practical terms: • For high-precision scenarios, increasing at- tempts (e.g., to 10) provides more reliable uncertainty estimates and tighter alignment between confidence and accuracy. • For cost-sensitive deployments, even three attempts already achieve strong accuracy and substantial reduction in expert workload, mak- ing it a highly efficient operational setting. These results demonstrate improved calibra- tion—confidence more accurately tracks empir- ical accuracy as the number of inferences in- creases—without the need for additional metric reporting. C Prompt Architecture Table 2 summarizes the three prompts used in our schema mapping system. Each serves a distinct function in guiding the agent’s reasoning process across sessions and mapping iterations. System Prompt Example. The system prompt de- fines the AI agent’s expertise and reasoning frame- work. It is shown below for reference. You are a Trend Micro cybersecurity data expert spe- cializing in Trend Micro’s Common Schema across multiple layers, including endpoint, network, mes- saging, and cloud. You have extensive expertise in processing and mapping third-party product and log schemas to Trend Micro’s Common Data Schema, enabling cross-log correlation, advanced threat detec- tion, and compliance reporting. You routinely perform professional schema mapping for new third-party log sources with a focus on accu- racy. Your approach follows a layered reasoning pro- cess: (1) identify core entities such as IP addresses, filenames, and hashes (e.g., SHA1); (2) narrow down candidate fields based on data flow direction and con- text; (3) make precise mapping decisions supported by semantic consistency. For example: – src_ip vs. dst_ip de- pends on whether traffic is inbound or outbound. – SHA1 hashes may represent parent_process_sha1, launched_process_sha1, or dropped_object_sha1. If no suitable mapping exists, respond professionally with NOT_COVERED. Setting Final Accuracy Mean Confidence Low-Confidence Fields 3 Inferences 93.94% 95.2% 26 →4 (-85%) 10 Inferences 92.1% 89.3% 31 →6 (-80%) Table 1: Comparison of overconfidence mitigation between 3 and 10 inferences per iteration. The 10-inference setting improves calibration by bringing confidence values more closely aligned with actual accuracy. # Prompt Name Purpose Usage 1 System Prompt Defines the AI agent’s role as a cy- bersecurity data expert specializing in schema mapping. Establishes reasoning rules and guidelines. Loaded once per ses- sion 2 User Prompt Carries the per-request payload: (a) curated facts from prior conflicts (if any), (b) RAG-assembled source/target schema context (descriptions, sample values, types), and (c) the mapping task/question. Enforces a strict XML re- sponse (CSV decision, 1–5 confidence, reasoning) for deterministic parsing. For every mapping request 3 Search Prompt Generates targeted Internet search queries string to resolve ambiguous field mappings using prior conflict in- formation. Invoked only during conflict resolution Table 2: Overview of the three prompts used in the schema mapping system, detailing their roles and usage.
Mapping Smarter, Not Harder: A Test-Time Reinforcement Learning Agent That Improves Without Labels or Model Updates Wen-Kwang Tsao*, Yao-Ching Yu*, Chien-Ming Huang AI Lab, TrendMicro { Abstract The Enterprise Intelligence Platform must integrate logs from numerous third-party vendors in order to perform various downstream tasks. However, vendor documentation is often unavailable at test time. It is either misplaced, mismatched, poorly formatted, or incomplete, which makes schema mapping challenging. We introduce a reinforcement learning agent that can self-improve without labeled examples or model weight updates. During inference, the agent first identifies ambiguous field-mapping attempts, then generates targeted web-search queries to gather external evidence, and finally applies a confidence-based reward to iteratively refine its mappings. To demonstrate this concept, we converted Microsoft Defender for Endpoint logs into a common schema. Our method increased mapping accuracy from 56.4% (LLM-only) to 72.73% (RAG) to 93.94% over 100 iterations using GPT-4o. At the same time, it reduced the number of low-confidence mappings requiring expert review by 85%. This new approach provides an evidence-driven, transparent method for solving future industry problems, paving the way for more robust, accountable, scalable, efficient, flexible, adaptable, and collaborative solutions. 1 Introduction Enterprise IT environments are rapidly evolving toward proactive, agent-driven workflows. At the core of these systems lies a foundational requirement: the ability to efficiently process and semantically interpret logs from a wide range of thirdparty sources. Such capability enables agents to remain context-aware, access real-time data, and make well-informed decisions. For cybersecurity use cases, enterprises must ingest massive volumes of logs-often terabytes per day-from heterogeneous sources such as firewalls, servers, endpoints, cloud applications, net- *Corresponding author. work flows, policy events, file operations, and API calls, in order to enable effective security operations and real-time threat detection (Reid, 2023). The cost of Security Information and Event Management (SIEM) ingestion is substantial, exceeding 1 billion (Krishna, 2023). The emergence of large language models (LLMs) with robust natural language processing capabilities has the potential to transform third-party log integration. These models could dramatically reduce the need for labor-intensive processes that require experts. The full log integration pipeline comprises four stages: processing raw logs into structured data, mapping source schemas to target schemas, generating data transformation code, and deploying to production with ongoing monitoring. Schema mapping is the critical decision point that underpins the success of the entire integration process. This paper focuses on the practical challenge of schema mapping, a topic that is often overlooked in current literature. Our focus is on a scenario in which enterprises ingest new data into their platforms. Therefore, the target schema is usually well-documented and stable. By contrast, incoming data source schemas are often poorly documented, typically due to their origin in legacy systems or outdated software versions. Unlike conventional machine learning tasks-where the challenge is extracting key features from abundant data-the difficulty here is the opposite: the source provides too little context. Because these vendor schemas lack labeled training 16 Oct 2025 data, fine-tuning or other supervised methods are impractical. Although expert review remains necessary, it is resource-intensive and must be carefully prioritized. To address these challenges, we propose a testtime reinforcement learning (RL) agent that can self-improve without updating model weights or relying on pre-defined labeled data. Our approach is inspired by the TTRL framework (Zuo et al., 2025), which improves model performance on unlabeled data through test-stage rewards. In the absence of ground-truth labels, we introduce a confidencebased score as a proxy reward signal to guide the agent toward more accurate mappings. Our design explicitly targets industrially practical constraints: model updates are costly, GPU-intensive, and operationally complex. Instead, our method adapts the system prompt at inference time by iteratively enriching the context in a verbal RL-driven manner, thereby enabling continual self-improvement under real-world deployment conditions. The confidence-based proxy reward is both intuitive and interpretable. The agent identifies conflicts and ambiguities in its prior mapping attempts and formulates targeted search queries to gather external evidence. Confidence scores then serve as reward signals to determine whether the collected evidence should be retained or discarded, guiding iterative refinement. Furthermore, the agent produces transparent reasoning traces, enabling human experts to focus their review on low-confidence mappings, thereby reducing manual verification costs while preserving overall reliability. This method has several advantages over existing approaches. First, it achieves significantly higher mapping accuracy through iterative refinement, reducing reliance on static, one-time attempts. Second, it uses confidence-based rewards instead of ground truth labels, enabling effective learning in settings where labeled data is unavailable. Finally, it promotes transparency by revealing its reasoning and evidence collection processes. This empowers security experts to understand decisions and prioritize low-confidence mappings. Overall, our key contributions are: 1) identifying the challenge of schema mapping where prompting, fine-tuning, and retrieval-augmented generation all face limitations and no ground truth labels are available, and 2) proposing a test-time reinforcement learning framework that enables an agent to improve accuracy over time. This approach opens a new research direction by using confidence as a proxy signal to guide the agent's learning process. Log Raw CEF:0|h1|h2|001|5| LocalPort=443 Log Parsed Vendor=h1 Product=h2 DeviceVersion=001 Severity=5 LocalPort=443 Target schema Common Schema fields • vendor • pname • severity • src • spt • dst • dpt KB AI Response: the answer is ... src more context - layer:network - type:array[string] - desc: the source IP address RQ1: Can we detect the ambiguity in the LocalPort mapping? RQ2: Can we improve without label data at test time? RQ3: What if we don't have high-quality documents in our internal KB? Prompt: Which field in target schema should LocalPort map to? Context: A: Log parsing B: Field mapping C: Field retrieve for augment generation D: E2e test-time reinforcement learning agent Figure 1: Position of our test-time RL agent relative to prior work in the schema matching pipeline. In Section A, Logparser-LLM Zhong et al. (2024) handles raw-to-structured parsing. In section B, SchemaMatching-LLM Parciak et al. (2024) shows baseline one-shot mapping capability. In section C, ReMatch Sheetrit et al. (2024) adds retrieval when full documentation exists. MatchMaker Seedat and van der Schaar (2024) pre-embeds target schemas for reuse. In section D, Self-consistency Wang et al. (2022) improves chain-of-thought reasoning. Search-R1 Jin et al. (2025) enhances LLM reasoning with search-augmented RL. This example shows that the log LocalPort is ambiguous for decision making; we need more context to determine whether this field should map to src or dst. Our research uniquely extends rightward beyond traditional enterprise knowledge bases to handle newly seen logs with ill-formatted or incomplete documentation. Unlike fine-tuning approaches that require labeled data and risk overfitting, our agent operates without test-time labels, conducting internet searches to gather evidence outside the enterprise KB scope. This addresses the critical gap where traditional methods fail on unseen vendor schemas with insufficient documentation. 2 Related Work The foundation of schema mapping begins with converting raw logs into structured data. Zhong et al. (2024) introduced LogParser-LLM, which addresses the initial step of our pipeline by transforming unstructured log messages into key-value pairs. Their hybrid approach combines an LLMbased template extractor with a prefix-tree clustering method, achieving efficient parsing; the authors report ~272.5 LLM calls amortized across large log sets. This work eliminates the need for hyperparameter tuning or labeled data, enabling rapid adaptability to new log formats. LogParser-LLM's contribution is complementary to our work: while they focus on the raw-tostructured parsing phase. In contrast, our approach begins where their process ends: once logs have already been parsed into structured records, we take these structured logs and further map them into standardized, common schemas that enable consistent downstream analysis, correlation, and integration across diverse sources. Parciak et al. (2024) conducted a comprehensive experimental study of LLM capabilities in schema matching, focusing on off-the-shelf performance using only element names and descriptions. Their work provides crucial insights into the baseline capabilities of LLMs for schema matching tasks, demonstrating that context scope significantly affects performance-neither too little nor too much context leads to optimal results. Their findings directly inform our approach in several ways: (1) they validate that LLMs can perform meaningful schema matching without requiring data instances, which aligns with our privacysensitive enterprise scenarios; (2) their context optimization insights guide our prompt engineering; and (3) their baseline performance metrics provide a foundation for measuring the improvements achieved by our reinforcement learning approach. However, their work focuses on one-shot matching capabilities, while our approach addresses the iterative improvement challenge through test-time learning. ReMatch, introduced by Sheetrit et al. (2024), leverages retrieval-augmented prompting to reduce the target schema search space before matching. Their approach assumes the availability of comprehensive documentation and uses embedding-based retrieval to balance recall and precision in schema matching tasks. ReMatch demonstrates strong performance in healthcare schema matching scenarios where complete documentation is available. Our work differs from ReMatch in a fundamental assumption: while ReMatch operates in environments with well-documented schemas and comprehensive knowledge bases, our approach is designed for practical enterprise scenarios where such documentation is often incomplete or unavailable. ReMatch's retrieval mechanism works within a closed set of known mappings, whereas our agent dynamically discovers and accumulates evidence from external sources to handle previously unseen vendor schemas. Seedat and van der Schaar (2024) introduced MatchMaker, which decomposes schema matching into candidate generation, refinement, and confidence scoring steps using LLMs and vector retrieval. Their approach focuses on embedding target schemas for better retrieval and future reuse, generating synthetic in-context examples on-the-fly for zero-shot self-improvement. MatchMaker's compositional approach and confidence scoring align with our methodology, but their self-improvement mechanism differs significantly from ours. While MatchMaker generates synthetic examples for in-context learning, our approach accumulates real-world evidence through external search and interaction. MatchMaker's focus on target schema embedding optimization complements our source schema adaptation strategy, suggesting potential for hybrid approaches in future work. Two key techniques underpin our reinforcement learning framework: search-enhanced reasoning and self-consistency validation. Jin et al. (2025) demonstrated how external search can enhance LLM reasoning capabilities, providing a foundation for our evidence collection mechanism. Their Search-R1 approach shows that targeted web search can significantly improve reasoning accuracy in complex tasks. Wang et al. (2022) established that consistency across multiple reasoning paths serves as a reliable indicator of correctness, forming the basis of our confidence-based reward system. Their selfconsistency approach demonstrates that taking the majority answer from multiple sampled reasoning paths significantly improves accuracy, which directly informs our confidence score calculation. A closely related line of research is the Reflexion framework (Shinn et al., 2023), which introduces verbal reinforcement learning as a way for language agents to improve through self-generated feedback rather than model weight updates. Reflexion agents reflect on task feedback, store these reflections in episodic memory, and leverage them to guide future decisions. This approach demonstrates that linguistic feedback-whether scalar or free-form-can significantly enhance agent performance across sequential decision-making, coding, and reasoning tasks, achieving state-of-the-art results on benchmarks such as HumanEval. Our work shares a similar philosophy in emphasizing memory- and feedback-driven improvement at test time; however, we focus specifically on schema mapping under industrial constraints where ground truth labels and comprehensive documentation are unavailable. In this setting, we extend the notion of verbal reinforcement learning by integrating external evidence collection with confidence-based rewards to refine mappings dynamically. Our work addresses the critical gap where existing methods fail on newly encountered vendor schemas with ill-formatted or incomplete documentation. Unlike prior approaches that operate within controlled environments with welldocumented schemas, our test-time reinforcement learning agent adapts dynamically by accumulating external evidence without requiring model updates or labeled training data. 3 Methodology 3.1 Problem Formulation We formally define the schema mapping problem as follows. Given a source schema S = {s1, s2, . . . , sn} and a target schema T = {t1, t2, . . . , tm}, we seek to find a mapping function f : S →T ∪{∅} such that f(si) = tj represents a semantic correspondence between source field si and target field tj, or f(si) = ∅indicates no corresponding field exists in the target schema. Our objective is to maximize the correctness of this mapping function: maxf Pn i=1 I[f(si) = f∗(si)] where f∗(si) represents the ground truth mapping and I[·] is the indicator function. However, in practical deployment scenarios-particularly when processing third-party vendor logs ground truth mappings f∗are unavailable. To address this challenge, we use confidence scores as a proxy for correctness. We define confidence C(f(si)) as the consistency of mapping predictions across multiple inferences, serving as a surrogate objective: maxf Pn i=1 C(f(si)). 3.2 Test-Time Reinforcement Learning Framework Our test-time reinforcement learning agent improves schema mapping policy's accuracy by iteratively collecting evidence and reinforcing useful context. The agent starts from an empty evidence set, identifies inconsistencies through multiple mapping attempts, and formulates targeted search queries for ambiguous fields. Collected evidence is added to the context, and confidence-based reward signals guide which evidence to retain or discard. The complete algorithm is detailed in Algorithm 1. 3.3 Reinforcement Learning Formulation To formalize our approach, we define the reinforcement learning components as follows: State: The current state st consists of the schema mapping hypothesis from source to target fields and the collected evidence at time t. Specifically: st = {Mt, Et} where Mt is the current mapping hypothesis and Et is the set of evidence collected up to time t. Action: The action at involves selecting a field or conflict to investigate and executing a targeted search query. The action space includes all possible fields that could be investigated and the possible search queries that could be formulated. Reward: The reward rt is derived from the change in confidence score after adding new evidence: rt = Ct+1 -Ct where Ct is the confidence score at time t. The confidence score serves as a proxy for accuracy when ground truth labels are unavailable. Policy: The agent's policy π(a|s) specifies the suggested mapping from source schema fields to target schema fields. While the policy can be implemented in various ways, one practical approach is to leverage an LLM's reasoning process. In this setup, the LLM performs the mapping by combining its prior knowledge with contextual evidence, enabling the system to detect conflicts and generate targeted search queries. We define a conflict as a disagreement among the n prompt-variant predictions for the same source field within a single iteration (as opposed to cross-run drift). Learning: The goal of learning is to adapt a policy's behavior in a beneficial direction by leveraging rewards that reflect the situation. Instead of updating the LLM's model weights, learning takes place through the accumulation of useful evidence in the agent's context. Evidence that increases confidence is preserved, while unhelpful evidence is discarded. This process reflects a form of verbal, memory-based learning in which the agent's knowledge base is continuously refined. We use verbal RL in the Reflexion sense-policy adaptation via context/memory rather than parameter updates; rewards are intrinsic confidence deltas, not ground-truth returns. Algorithm 1 Iterative RL Schema Mapping with Confidence Improvement Inputs: Source schema S = {s1, . . . , sn}; target schema T = {t1, . . . , tm}; iteration limit α (default: 100); conflict detection attempts n (default: 3); initial evidence context E = ∅; generative LLM F. Outputs: Mapping function f; refined evidence context E. 1: for i ←1 to α do 2: f ←F(S, E) // Generate initial mappings using current evidence 3: C ←ConflictDetection(f, n) // Detect inconsistent mappings 4: for each source field si ∈C do 5: Qsi ←QueryFormulation(si) // Formulate search queries 6: esi ←EvidenceCollection(Qsi) // Collect external evidence 7: rsi ←ConfidenceEvaluation(f(si), esi) // Evaluate new confidence 8: rprev ←Confidence(f(si)) // Retrieve previous confidence 9: if rsi > rprev then 10: E ←ContextUpdate(E, esi, rsi) // Update context if confidence improves 11: end if 12: end for 13: end for 14: return f, E 4 Experiment 4.1 Setup We evaluated our approach using two schemas: the source schema was the Microsoft Defender for Endpoint schema containing 195 fields, and the target was our Common Schema with 137 fields. The ground truth consisted of 66 manually verified field mappings, curated collaboratively by domain, threat intelligence, and product experts to cover a diverse range of field types and complexity levels. We employed GPT-4o as the underlying model and assessed performance using two key metrics: (1) Confidence score-measuring the consistency of predictions across multiple attempts within a single iteration, with special handling for empty outputs; and (2) Accuracy-defined as the percentage of correctly predicted mappings among the 66 verified pairs. For each field mapping iteration, the model was executed three times (n=3), and both the confidence score and accuracy were computed. While accuracy depends on the ground truth and is used solely for evaluation, confidence scores-which can be calculated without ground truth-are leveraged to guide the reinforcement learning process and are therefore suitable for production deployment. Confidence Score Calculation: We calculate confidence as the consistency of predictions across multiple attempts using a modified frequency-based approach. For a given field, if we collect predictions P = [p1, p2, p3] across three iterations, the confidence score is: C = count(most_frequent_prediction) adjusted_total where the adjusted total accounts for empty predictions with reduced weight (0.5 instead of 1.0). The design aligns with the findings of Kalai et al. (2025), who argue that current training and evaluation paradigms implicitly reward guessing behavior in language models, leading to hallucinations. By contrast, our scoring mechanism provides a quantitative incentive for uncertainty acknowledgment, steering the model toward more trustworthy behavior. We prepare two baselines for comparison. Baseline 1: We used GPT-4o with a single-shot prompt containing only the field name and value, resulting in an accuracy of 56.36%. Baseline 2: We used GPT-4o with a single-shot prompt enhanced with additional field descriptions, data types, and sample data context from our internal knowledge base, resulting in an accuracy of 72.73%. 4.2 Performance Improvements Starting from a Baseline 2: accuracy of 72.73% with GPT-4o in the first iteration, our method achieved significant improvements throughout the experiment. By the end of the 100-iteration experiment, the model demonstrated consistent performance with accuracy reaching 93.94%. Our analysis of the 100-iteration experiment reveals several key performance characteristics. GPT4o collects 81 evidence tuples across 100 iterations, with the most significant accuracy gains occurring in the early stages-for example, iteration 1 shows Figure 2: Comprehensive performance analysis of the reinforcement learning agent over 100 iterations. Top: Accuracy improves from 72.73% to 93.94%, while confidence trends upward and approaches 1.0, highlighting a persistent overconfidence gap. Middle: Accuracy variability is high in early iterations but stabilizes over time. Bottom: Conflict field count decreases from 26 to 4(85% reduction), demonstrating reduced ambiguity as evidence accumulates. a +21.21% gain, iteration 5 achieves +9.60%, and iteration 6 yields +10.71%. The system makes decisions solely based on confidence scores, without referencing accuracy at any point during execution. This reflects real-world conditions in which ground truth labels are unavailable. In these scenarios, accuracy can only be measured retrospectively to confirm whether the solution performed correctly. Notably, in 19 of the 100 iterations, the system rejected newly proposed evidence, demonstrating that it learned to recognize when gathering additional information was unlikely to improve results. This selective behavior preserves a highquality evidence set, which is critical not only for validating outcomes but also for providing transparency into how the agent arrived at its decisions. By the conclusion of the 100 iterations, the agent resolved 22 conflicts, only 4 fields were flagged as low-confidence and requiring expert review-down from 26 initially-representing an 85% reduction in the verification burden on security analysts and allowing them to focus on the most ambiguous or critical cases. To evaluate the robustness of our approach, we ran the full 100-iteration process multiple times. The final accuracy consistently reached 93-94%, with minimal variance (standard deviation less than 0.01) in the later iterations. This demonstrates that the improvement is reliable and not due to chance. 4.3 Evidence Collection Analysis For this experiment, we equipped the AI agent with a tool that enables internet searches for additional facts and evidence. Specifically, we used a generic search utility powered by Bing. In each iteration, the agent analyzes conflicts identified across three prompt variants, formulates a targeted search query, and retrieves relevant results from the tool. This setup demonstrates one possible method of evidence collection; other approaches could include Figure 3: The relationship between confidence and accuracy. Path 1 shows the starting point. Path 2 illustrates how confidence as a proxy reward improves accuracy. Path 3 highlights the challenge of overconfidence, where confidence saturates while accuracy lags behind. Therefore, more engineering and research efforts are needed to bring the confidence curve down. The diagonal line indicates perfect calibration, and the curves above it show overconfidence. consulting human experts, performing advanced reasoning, or validating findings against production data. We organized the collected evidence into threeelement tuples, each comprising the detected conflict, the resolution plan, and the retrieved evidence. This structured representation ensures systematic evidence collection and evaluation. Evidence is retained only if it demonstrably increases the LLM's confidence in its mapping decisions. Helpful evidence contributes in 4 ways, (1) by enhancing awareness through the identification of ambiguous fields and conflicting mappings; (2) by enabling self-correction through improved reasoning about relationships among fields; (3) by improving clarity via authoritative information on how fields are defined and used in both the common schema and vendor-specific schemas (e.g., Microsoft Defender); and (4) by revealing context-dependent mappings, in which the correct correspondence varies by scenario and requires additional contextual understanding. 5 Conclusion This paper presents a test-time learning framework in which a reinforcement learning agent improves schema mapping accuracy through iterative evidence collection and context refinement-without requiring model weight updates or pre-collected training data. Unlike conventional approaches that rely on all labeled data before inference, our method actively gathers new evidence during testtime to refine its decisions. This design eliminates the high computational cost, GPU dependency, and potential instability associated with model retraining, making it more practical for real-world deployment. A confidence-based score serves as a proxy reward for accuracy, providing a novel mechanism that evaluates model consistency across multiple mapping attempts within a single iteration. Through systematic evidence collection and evaluation, the agent resolves mapping ambiguities with transparent reasoning that facilitates expert review and validation. Applied to Microsoft Defender for Endpoint logs mapped to a common schema, our approach improves accuracy from 72.73% to 93.94%, while reducing the number of fields requiring expert review by 85%-from 26 to 4. A key insight from our work is the use of confidence scores as a proxy for ground-truth labels, as conceptually illustrated in Figure 3. While confidence can effectively guide accuracy improvement, our results reveal persistent overconfidence, where confidence values exceed actual accuracy. This observation underscores the need for better confidence definition and calibration. In our current approach, confidence is measured as the consistency of predictions across three trials. Future research could extend this by (1) employing multiple models to generate diverse predictions and compute ensemble-based confidence, or (2) directly prompting LLMs to provide explicit self-assessed confidence scores alongside their outputs. Additional experiments extending the number of inference attempts from three to ten show encouraging signs of reducing the overconfidence gap (see Appendix B), albeit with increased computational cost. These enhancements could yield better-calibrated confidence measures that more accurately reflect true prediction quality. Limitations While our approach demonstrates significant improvements, several limitations should be acknowledged: 1. Evidence Quality Dependency: The system's performance is dependent on the quality and availability of external evidence sources. In domains where documentation is sparse or inconsistent, the improvement may be limited. 2. Computational Overhead: The iterative evidence collection process requires multiple LLM calls and external searches, which may increase computational costs compared to single-shot approaches. 3. Domain Specificity: Our evaluation focuses on cybersecurity schemas. While the approach is general, validation in other domains (healthcare, finance, etc.) would strengthen the generalizability claims. 4. Scope Limitations: We focus on 1-to-1 mappings to address the core challenge of knowledge scarcity. Extension to more complex mapping cardinalities (1-to-N, N-to-M) remains future work. Ethical Considerations This study does not involve human subjects or personal data. However, the method collects web evidence at test time, which could occasionally include malicious or adversarial content. Practitioners reproducing this system should apply input sanitization, source filtering, and sandboxing to prevent prompt-injection or security risks. References Thomas Hazel. 2021. Harnessing the value of log data analytics. Techstrong Research. Accessed: 2025-0623. Bowen Jin, Hansi Zeng, Zhenrui Yue, Jinsung Yoon, Sercan Arik, Dong Wang, Hamed Zamani, and Jiawei Han. 2025. Search-r1: Training llms to reason and leverage search engines with reinforcement learning. arXiv preprint . Adam Tauman Kalai, Ofir Nachum, Santosh S Vempala, and Edwin Zhang. 2025. Why language models hallucinate. arXiv preprint . G. Krishna. 2023. Security logging and monitoring failures: A comprehensive guide. Accessed: 202506-23. Microsoft. 2024. Microsoft sentinel pricing calculator. Accessed: 2025-06-23. Marcel Parciak, Brecht Vandevoort, Frank Neven, Liesbet M Peeters, and Stijn Vansummeren. 2024. Schema matching with large language models: an experimental study. arXiv preprint . Colin Reid. 2023. Searching for a siem solution? here are 7 things it likely needs. Gartner. Accessed: 2025-06-23. Nabeel Seedat and Mihaela van der Schaar. 2024. Matchmaker: Self-improving large language model programs for schema matching. arXiv preprint . Eitam Sheetrit, Menachem Brief, Moshik Mishaeli, and Oren Elisha. 2024. Rematch: Retrieval enhanced schema matching with llms. arXiv preprint . Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning, 2023. URL https://arxiv. org/abs/2303.11366, 1. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint . Aoxiao Zhong, Dengyao Mo, Guiyang Liu, Jinbu Liu, Qingda Lu, Qi Zhou, Jiesheng Wu, Quanzheng Li, and Qingsong Wen. 2024. Logparser-llm: Advancing efficient log parsing with large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4559-4570. Yuxin Zuo, Kaiyan Zhang, Li Sheng, Shang Qu, Ganqu Cui, Xuekai Zhu, Haozhan Li, Yuchen Zhang, Xinwei Long, Ermo Hua, and 1 others. 2025. Ttrl: Test-time reinforcement learning. arXiv preprint . A Case Study: Direction-Sensitive Port Mapping (Iteration 49) To illustrate the practical difficulty of schema mapping, we examine a representative example from iteration 49 involving the common schema field dpt (destination port). The field dpt is defined in the Trend Micro Common Schema as "the service destination port of the private application server (dstport)." In Microsoft Defender for Endpoint, however, two candidate fields exist: LocalPort (TCP port on the local device used during communication) and RemotePort (TCP port on the remote device being connected to). Mapping ambiguity. Both candidates are semantically plausible depending on the traffic direction. For outbound connections, the local device is the source, so the destination port resides on the remote endpoint-corresponding to RemotePort. Conversely, for inbound connections (e.g., an RDP session initiated by a remote host), the local device becomes the destination, and thus the correct mapping is LocalPort. Without an explicit indicator of connection direction, an LLM can easily misassign the field. Why fine-tuning does not help. Simply finetuning model weights cannot reliably resolve this ambiguity. The correct mapping is not a memorized association (dpt →RemotePort) but a conditional rule that depends on runtime context-information that is absent from the training corpus. Updating parameters may reinforce spurious correlations rather than teach the model to reason over directionality, resulting in brittle behavior across different network scenarios. Evidence-based reasoning. During iteration 49, the agent retrieved definitional evidence clarifying that: • LocalPort refers to the port on the local device. • RemotePort refers to the port on the remote device being connected to. • dpt represents the service destination port. Although this evidence did not directly yield the final mapping, it revealed that the correct correspondence varies by communication context and motivated the agent to infer direction from auxiliary fields such as RemoteIP, LocalIP, and ActionType. The agent's confidence increased from 0.67 to 1.0, reflecting a clearer and more consistent conceptual understanding. Categorization of helpful evidence. According to the taxonomy defined in the main paper, this instance exemplifies Category (4): revealing context-dependent mappings. The evidence was helpful because it exposed the conditional nature of the mapping-showing that correctness depends on dynamic context rather than static field alignment-and thereby guided subsequent reasoning toward more generalizable, context-aware mapping rules. Derived practical rule. dpt = RemotePort, if Direction = Outbound; LocalPort, if Direction = Inbound; (defer/flag), if Direction is unknown. This case highlights that progress in schema alignment arises not from parameter updates but from the integration of structured evidence and contextual inference. B Overconfidence Mitigation Experiments To address the overconfidence phenomenon observed in our main experiments (Figure 3), we conducted an additional study to examine how increasing the number of inference attempts affects confidence calibration. Our hypothesis was that a larger number of inference samples would improve statistical robustness and reduce systematic overconfidence. B.1 Experimental Setup We extended the original setup by increasing the number of inference attempts per iteration from 3 to 10. This change allows for more reliable confidence estimation through better sampling of the model's output distribution. All other experimental parameters remained identical to the main study: the same source and target schemas (Microsoft Defender for Endpoint with 195 fields and the Common Schema with 137 fields), the same 66 manually verified ground-truth mappings, and the same GPT-4o model configuration. The confidence score computation followed the same principle of prediction consistency but with a larger sample size (10 vs. 3). This design provides a more statistically grounded measure of uncertainty and is expected to yield better-calibrated confidence estimates. B.2 Results and Analysis Table 1 presents a comparison between the 3- and 10-inference settings. Increasing the number of inference attempts from 3 to 10 effectively narrows the gap between confidence and accuracy. With 10 inferences, the mean confidence (89.3%) aligns much more closely with the observed accuracy (92.1%), whereas with 3 inferences, confidence (95.2%) noticeably exceeds accuracy (93.94%), indicating mild overconfidence. This calibration improvement comes with a modest cost: while the confidence estimates become more realistic, overall accuracy decreases slightly (93.94% →92.1%). Nevertheless, the more conservative confidence levels offer better guidance for expert review prioritization. The reduction in lowconfidence fields remains substantial under both settings (85% and 80% reductions), demonstrating that the key benefit-reducing expert review effort-is preserved. B.3 Implications and Trade-offs These findings highlight a clear balance between calibration quality and computational cost. Increasing the number of inference attempts leads to bettercalibrated confidence scores but requires more computation. In practical terms: • For high-precision scenarios, increasing attempts (e.g., to 10) provides more reliable uncertainty estimates and tighter alignment between confidence and accuracy. • For cost-sensitive deployments, even three attempts already achieve strong accuracy and substantial reduction in expert workload, making it a highly efficient operational setting. These results demonstrate improved calibration-confidence more accurately tracks empirical accuracy as the number of inferences increases-without the need for additional metric reporting. C Prompt Architecture Table 2 summarizes the three prompts used in our schema mapping system. Each serves a distinct function in guiding the agent's reasoning process across sessions and mapping iterations. System Prompt Example. The system prompt defines the AI agent's expertise and reasoning framework. It is shown below for reference. You are a Trend Micro cybersecurity data expert specializing in Trend Micro's Common Schema across multiple layers, including endpoint, network, messaging, and cloud. You have extensive expertise in processing and mapping third-party product and log schemas to Trend Micro's Common Data Schema, enabling cross-log correlation, advanced threat detection, and compliance reporting. You routinely perform professional schema mapping for new third-party log sources with a focus on accuracy. Your approach follows a layered reasoning process: (1) identify core entities such as IP addresses, filenames, and hashes (e.g., SHA1); (2) narrow down candidate fields based on data flow direction and context; (3) make precise mapping decisions supported by semantic consistency. For example: - src_ip vs. dst_ip depends on whether traffic is inbound or outbound. - SHA1 hashes may represent parent_process_sha1, launched_process_sha1, or dropped_object_sha1. If no suitable mapping exists, respond professionally with NOT_COVERED. Setting Final Accuracy Mean Confidence Low-Confidence Fields 3 Inferences 93.94% 95.2% 26 →4 (-85%) 10 Inferences 92.1% 89.3% 31 →6 (-80%) Table 1: Comparison of overconfidence mitigation between 3 and 10 inferences per iteration. The 10-inference setting improves calibration by bringing confidence values more closely aligned with actual accuracy. # Prompt Name Purpose Usage 1 System Prompt Defines the AI agent's role as a cybersecurity data expert specializing in schema mapping. Establishes reasoning rules and guidelines. Loaded once per session 2 User Prompt Carries the per-request payload: (a) curated facts from prior conflicts (if any), (b) RAG-assembled source/target schema context (descriptions, sample values, types), and (c) the mapping task/question. Enforces a strict XML response (CSV decision, 1-5 confidence, reasoning) for deterministic parsing. For every mapping request 3 Search Prompt Generates targeted Internet search queries string to resolve ambiguous field mappings using prior conflict information. Invoked only during conflict resolution Table 2: Overview of the three prompts used in the schema mapping system, detailing their roles and usage.
2510.14897
Forecasting Quantum Observables: A Compressed Sensing Approach with Performance Guarantees Víctor Valls,1 Albert Akhriev,1 Olatz Sanz Larrarte,2 Javier Oliva del Moral,2, 3 Štěpán Šmíd,4, 5 Josu Etxezarreta Martinez,2 Sergiy Zhuk,1 and Dmytro Mishagli1, 5 1IBM Quantum, IBM Research Europe – Dublin 2Department of Basic Sciences, Tecnun - University of Navarra, 20018 San Sebastian, Spain 3Donostia International Physics Center, 20018 San Sebastián, Spain 4Department of Computing, Imperial College London, UK 5IBM Quantum, IBM Research Europe – UK (Dated: October 17, 2025) Noise in near-term quantum devices limits the timescales over which measurements can be reliably obtained. Existing data-driven extrapolation methods extend the dynamics of quantum observables from measurements, but they cannot guarantee that the recovered model reflects the true dynamics. In this paper, we introduce atomic norm minimization as a framework to certify that a model learned by any algorithm accurately captures the underlying dynamics of quantum observables. This certification is valid when the system is governed by a small number of well-separated Bohr frequencies. We validate the framework across multiple algorithms on numerical experiments with spin-chain Hamiltonians involving 8–20 qubits. Comparing with exact diagonalization, certification yields an average forecasting error below 0.1 (within the observable’s range of [−1, 1]) in 98% of cases and below 0.05 in 89–97% of cases, depending on the forecasting algorithm. Even in the presence of realistic shot noise, certified models remain robust, with success rates decreasing only to 88–95% for the 0.1 error threshold. I. INTRODUCTION Quantum spin Hamiltonians are natural testbeds for near-term quantum devices because their local interac- tion structure maps directly onto existing qubit connec- tivity [1]. Recent experiments have already probed dy- namics with more than one hundred qubits and circuit depths of several hundred two-qubit gates [2, 3]. How- ever, noise grows rapidly with circuit depth [4], limiting the timescales over which reliable quantum time evolu- tion (TE) can be observed. One way to extend this limit is to infer the dy- namics from early, low-noise measurement data [5–8]. These data-driven forecasting methods are appealing for their simplicity and scalability, in contrast to com- putationally heavier model-based approaches that de- mand explicit Hamiltonian knowledge (e.g., Krylov sub- space methods [9, 10]). The effectiveness of this data- driven paradigm—which traces its origins to classical signal recovery methods like Prony’s method [11, 12]— has been empirically demonstrated, for example, in pre- dicting the dynamics of the transverse-field Ising model (TFIM) using a truncated Dynamic Mode Decomposition (DMD) [7] or a spin-boson system using ESPRIT [5]. Despite their empirical success, a critical drawback of these data-driven forecasting methods is their funda- mental lack of reliable guarantees on the long-term fore- cast values. To address this limitation, we introduce a novel framework that uses Atomic Norm Minimization (ANM) [13, 14] to certify the correctness of candidate TE models. While ANM was originally devised as a robust technique for signal recovery in off-the-grid compressed sensing [13, 15], it has a high computation cost, making it impractical for a large number of measurements. Here, we repurpose the ANM theoretical guarantees to create a rigorous validation framework that ensures that a fore- cast, obtained by any algorithm, accurately captures the underlying quantum dynamics. The paper makes the following contributions: • We introduce a framework that leverages ANM to certify the correctness of candidate quantum TE models obtained by any forecasting algorithm (Sec. III B). The framework requires that the underlying model has a small number of well- separated Bohr frequencies relative to the number of measurements. • We validate the certification framework through ex- tensive experiments on spin-chain time evolutions (TEs) with 8–20 qubits (Sec. IV). Our results show that, for exact TEs, certifications yields an average forecasting error below 0.1 (within the observable’s range of [−1, 1]) in 98% of cases and below 0.05 in 89% of cases. For TEs with shot noise, these rates decrease to 88–95% for a 0.1 error threshold and 53–74% for a 0.05 threshold, depending on the algorithm. II. PROBLEM STATEMENT Consider a quantum system with n qubits that evolves according to a time-independent Hamiltonian H ∈ C2n×2n. Let |ψ(0)⟩∈C2n be an initial state vec- tor. The quantum TE of the state vector is given by |ψ(τk)⟩= e−iHτk|ψ(0)⟩where τk = ∆(k −1) is a discrete time, ∆> 0, and k ∈[m] := {1, . . . , m} with m the to- tal number of time steps. The k-th measurement of an arXiv:2510.14897v1 [quant-ph] 16 Oct 2025 2 -0.1 0 0.1 0.2 0.3 0.4 0.5 -0.2 -0.1 0 0.1 0.2 (a) |ψ(0)⟩= |0⟩⊗n -0.2 -0.1 0 0.1 0.2 (b) |ψ(0)⟩= |1⟩⊗|0⟩⊗n−1 O = X1 R(c(f)) Bohr frequency O = X1 Bohr frequency FIG. 1. Amplitudes R(c(f)) for TE under H = Pn i=1 σz i σz i+1 + Pn i=1 σx i with n = 8 qubits, two initial states, and periodic boundary conditions (site n+1 = 1). The imag- inary part of c(f) is zero for both cases. observable O ∈C2n×2n is given by yk := ⟨ψ(τk)|O|ψ(τk)⟩= X f∈Ω c(f)ei2πfτk, (1) where c(f) ∈C, and f ∈Ω⊂[−1 2, 1 2] is a Bohr frequency, i.e., normalized difference of the eigenvalues of H. The set Ωcontains only the frequencies with non-zero amplitude, which depend on the spectral information of H, the initial state |ψ(0)⟩, and observable O. Note that Ωis generally unknown for large systems since we cannot compute the spectral information of H. Fig. 1 shows the amplitudes R(c(f)) for all f ∈Ωfor a TFIM Hamiltonian with two initial states and the same observable. Note that the number of non-zero amplitudes varies. Specif- ically, subfigure (a) has fewer and more well-separated frequencies than (b). Also, note that the spectrum is symmetric. The TE forecasting problem is the following. For m given measurements y := (y1, . . . , ym) ∈[−1, 1]m and tolerance ǫ ≥0, find a forecast sequence xk with k ∈[m′] such that 1 m′ m′ X k=1 |xk −⟨ψ(τm+k)|O|ψ(τm+k)⟩| ≤ǫ. (2) That is, the goal is to predict m′ future values of the time series based on the m available measurements. III. CERTIFICATION VIA ANM In this section, we briefly review atomic norm mini- mization (ANM, Sec. III A) and describe how it can be used to certify the frequencies and amplitudes recovered by any spectral estimation algorithm (Sec. III B). A. Atomic norm minimization ANM aims to minimize the atomic norm: ∥y∥A := inf{t ≥0 : y ∈t · conv(A)}, where A is a collection of “atoms” (vectors) and conv(A) its convex hull. The −1 −0.5 0 0.5 1 −0.4 −0.2 0 0.2 0.4 R(c(f)) Q(f) Bohr frequency FIG. 2. Illustration of the dual polynomial Q(f) for the set- ting in Fig. 1 (a). The quantum TE is generated over 10s with a sampling rate of 5 steps per second (i.e., m = 50, ∆= 1/5). Only frequencies with amplitude magnitude larger than 0.005 are shown. The markers (small crosses) indicate where the Q(f) matches a frequency. atomic norm can be seen as the continuous analogue of the ℓ1 norm, since A may contain an uncountable set of atoms corresponding to all possible continuous-frequency sinusoidal components. For our TE forecasting problem, A contains vectors of the form a(f) := (ei2πfτ1, . . . , ei2πfτm), (3) where f ∈[−1 2, 1 2], and the atomic norm minimization finds the sparsest combination of these continuous atoms that match the measurements vector y. In some cases, this representation is unique, enabling exact recovery of the true frequencies and amplitudes underlying y. For the representation to be unique, two sufficient condi- tions must be met: the number of measurements is large enough, and the true frequencies are sufficiently sepa- rated (i.e., |fi −fj| ≥ 4 m for all fi, fj ∈Ωwith i ̸= j); see details in [15, Theorem 1.2] and [16, Theorem 1.1]. In practice, we can solve the ANM with atoms of the form in Eq. (3) via a semidefinite program (SDP): minimize T,t>0 1 2mTr(T ) + 1 2t s.t.  T y y∗t  ⪰0, (4) where T ∈Cm×m is a Hermitian Toeplitz matrix that admits a decomposition T = P f∈F w(f)a(f)a(f)∗(see [16, Lemma 2.2]), where w(f) > 0 and F ⊂[−1 2, 1 2] de- note the support set of active frequencies. In brief, the linear matrix inequality in Eq. (4) ensures that y lies in the subspace spanned by vectors a(f), while the objec- tive promotes low-rank T , thus enforcing sparsity in the frequency domain. Once the optimal T is found, the fre- quencies can be recovered from T using Prony’s method. Crucially, the correctness of these recovered frequen- cies can be certified by solving the associated dual prob- lem: maximize q∈Cm ⟨y, q⟩ s.t. sup f∈[−1 2 , 1 2 ] |Q(f)| ≤1, (5) where Q(f) := ⟨a(f), q⟩is the dual polynomial. The so- lution is certified as unique when the dual polynomial 3 reaches its maximum value |Q(f)| = 1 precisely for all f ∈Ωand |Q(f)| < 1 otherwise (see [16, Proposition 2.4]). An example of this dual certificate is shown in Fig. 2 for the setting used in Fig. 1 (a), where Q(f) is real-valued and touches ±1 precisely at the true frequen- cies (indicated with markers). Finally, we note that solving the ANM with the SDP formulation in Eq. (4) is computationally impractical for a large number of measurements (see Appendix D). In the following section, we show how ANM provides a princi- pled way to certify whether, under sparsity and frequency separation conditions, a candidate model accurately re- produces the true dynamics in Eq. (1). B. Certification pipeline The proposed certification pipeline is illustrated in Fig. 3. An algorithm (e.g., ESPRIT) takes as input a vector y with m measurements and returns candi- date frequencies Ωand amplitudes c(f) such that yk ≈ P f∈Ωc(f) e−i2πfτk, k ∈{1, . . ., m}. These candidates are then certified by solving the ANM dual problem in Eq. (5). Certificates. To certify optimality, we evaluate three key criteria: the duality gap, frequency separation, and the dual polynomial. The duality gap measures the difference between the primal and dual values (Eqs. (4) and (5)). The gap is always non-negative and, due to strong duality, is zero when both problems are solved optimally. We can ob- tain an upper bound on the optimal primal objective by constructing T = P f∈Ω|c(f)|a(f)a(f)∗, and then set- ting t = y∗T †y. This construction ensures that the con- straints in Eq. (4) are satisfied by the Schur complement condition. A lower bound on the dual value can be ob- tained by solving the dual problem approximately (as discussed later). The duality gap is considered admissi- ble if this is below a prescribed tolerance. For frequency separation, we need to check that the minimum distance between the candidate frequencies in Ωexceeds 4/m, which is required for unique recovery in ANM. Finally, the dual polynomial has to satisfy |Q(f)| ≈1 for all f ∈Ωand |Q(f)| < 1, otherwise, within numerical tolerances. Computing the dual polynomial is straight- forward once we have a solution to the dual problem. Solving the dual problem. The dual problem in Eq. (5) is infinite-dimensional and must be discretized to make it tractable. We therefore sample the frequency domain finely, transforming the problem into a second- order cone program. In addition, since the measurement vector y is real-valued (which is always the case in quan- tum TEs), the dual problem can be further simplified to max q∈Rm ⟨y, q⟩ s.t. −1 ≤⟨a(f), q⟩≤1, ∀f ∈F, (6) Find frequencies and amplitudes such that Certify solution Gap optimality certificate FIG. 3. Schematic illustration of the proposed certification pipeline. The process verifies frequency and amplitude esti- mates obtained from any algorithm using the dual formulation of ANM. where F is a finely discretized set of frequencies in [0, 1 2], since the frequency spectrum is symmetric. Finally, al- though discretization introduces a small approximation error, it remains negligible when the frequency grid is sufficiently fine. IV. EXPERIMENTS This section evaluates the proposed ANM certification methodology using several existing algorithms. The goal is to demonstrate that, when a certificate of optimality is obtained, the correct model can be recovered, enabling reliable forecasting of the TE. Note that successful cer- tification also depends on the true model being sparse and having well-separated frequencies—conditions that are typically unknown in practice. A. Setup Dataset. We consider quantum TEs generated by 15 spin-chain Hamiltonians (4 TFIM and 11 Heisenberg) with 8–20 qubits and periodic boundary conditions [17– 26] (see details in Appendix B). For all TEs, we consider five initial states—Néel, dimerized, paramagnetic, ferro- magnetic, and the zero state—and measure six observ- ables (X, Y , Z, XX, Y Y , and ZZ); see Appendix B. This results in a total of 5850 TEs. The TEs are gen- erated using Qutip [27] for 40 time units with dt = 0.2, yielding m = 200 measurements per TE. A time unit is t·J, where J is the coupling strength (interaction energy) in a Hamiltonian. Spectral estimation algorithms. We use four al- gorithms to compute the frequencies and amplitudes: Prony, OMP, FISTA, and ESPRIT. Prony [11] is a method that can provably recover the true frequencies and amplitudes under ideal conditions but is highly sen- sitive to noise. OMP [28, 29] is a widely used heuristic in signal processing for sparse signal recovery. FISTA [30] is a fast gradient method for solving Lasso problems, and ESPRIT [31, 32] is a well-known algorithm in spectrum estimation, also applied in quantum TE forecasting [5]. The algorithms are implemented with default or minimal 4 Exact TE TE with shot noise ǫ PRO OMP FIS ESP PRO OMP FIS ESP 0.01 83.4 75.1 68.7 70.8 50.2 20.2 10.3 21.3 0.05 97.4 96.4 95.8 89.5 58.3 69.9 74.6 53.2 0.10 98.7 99.2 99.7 98.8 82.2 95.1 95.4 83.1 TABLE I. SCs rate for different algorithms and forecasting tolerances (ǫ). Noisy TEs use 1000 shots. Exact TE TE with shot noise ǫ PRO OMP FIS ESP PRO OMP FIS ESP 0.01 15.8 11.2 2.9 13.8 6.7 7.1 1.3 6.6 0.05 18.4 14.3 4.1 17.5 7.7 24.5 9.0 16.4 0.10 18.6 14.7 4.3 19.3 10.9 33.4 11.6 25.6 TABLE II. Absolute SCs rate for different algorithms and forecasting tolerances (ǫ). The Absolute SC rate is the per- centage of SCs over the total number of runs. Noisy TEs use 1000 shots. tuning settings, as our focus is on evaluating the certifica- tion methodology. A brief description of each algorithm and its implementation is provided in Appendix A. Forecasting procedure. Each algorithm is applied to the first k measurements, where k ∈{10, . . ., 160}, and forecasting is evaluated on the remaining 200 −k measurements, as defined in Eq. (2). Certification. For each algorithm run, ANM certifi- cation is performed (150 times per algorithm and TE). This is done by solving Eq. (6) on a frequency grid of 1000 uniformly spaced points in [0, 1 2] [33], augmented with the candidate frequencies estimated by the algorithm— to ensure that the dual polynomial is evaluated at the exact/candidate frequencies. To account for numerical inaccuracies, empirically chosen tolerances are applied. Specifically, for exact TEs, the dual polynomial must sat- isfy |Q(f)| ≥0.98 at all candidate frequencies, and the duality gap must be smaller than 0.05. For TEs with shot noise, the thresholds are |Q(f)| ≥0.95 and a dual- ity gap smaller than 0.5. In addition, we only consider amplitudes with magnitude larger than 0.025 for exact TEs and 0.05 for TEs with shot noise. B. Results We evaluate forecasting performance using tolerances ǫ ∈{0.01, 0.05, 0.1} (see Sec. II), for both exact TEs and TEs with shot noise. We consider the output of a spectral algorithm (i.e., frequencies and amplitudes) certified if it meets the criteria in Sec. III B. A successful certification (SC) occurs when a certified run also satisfies the fore- casting error in Eq. (2) for the remaining measurements, which range from 190 (k = 10) down to 40 (k = 160). We define the SC rate as the percentage of SCs among 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0 0.05 0.1 0.15 0.2 0.25 0.3 Fraction of certified TEs Error Exact TE Prony OMP FISTA ESPRIT Error 1000-shot noise FIG. 4. Distribution of the forecasting error upon the first certification, with exact and noisy TEs. 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0 0.05 0.1 0.15 0.2 0.25 0.3 Fraction of certified TEs Error Exact TE Prony OMP FISTA ESPRIT Error 1000-shot noise FIG. 5. Distribution of the forecasting error upon the first certification for certificates issued for 30 or more measure- ments, with exact and noisy TEs. all certified runs (successful or not). Average performance. Table I reports the SC rate for each algorithm and tolerance ǫ. For exact TEs, all algorithms achieve high SC rates, exceeding 98% for ǫ ≤ 0.1. When shot noise is introduced (1000 shots), SC rates decrease, particularly for ǫ ≤0.01. OMP and FISTA generally outperform Prony and ES- PRIT on noisy TEs; however, FISTA certifies a smaller number of runs compared to the other algorithms. Ta- ble II shows the percentage of SCs over all algorithm runs (877,500 runs per algorithm: 150 runs per TE across 5850 TEs). Only a fraction of the runs yield a certificate, with Prony and ESPRIT achieving SC around 18% of all the runs for ǫ ≤0.05. Interestingly, the absolute SC rate for OMP and ES- PRIT (Table II) increases with noisy TEs. This is likely due to the looser certification tolerances applied under noise, which increase the total number of certified runs. Performance on first certification. Fig. 4 shows the distribution of forecasting errors at the first certifi- cation (the first k ∈{10, . . ., 160} that meets the crite- ria in Sec. III B). Overall, the algorithms exhibit similar behavior. For exact TEs, Prony and OMP achieve er- rors below 0.05 and 0.1 in approximately 50% and 75% of cases, respectively. For noisy TEs, the proportion of forecasts with errors below 0.05 drops sharply, and both Prony and OMP achieve errors below 0.1 in only about 50% of cases. These results contrast with the high SC rates in Ta- 5 ble I, possibly due to insufficient measurements used dur- ing spectral estimation or certification. To investigate this, Fig. 5 shows forecasting errors for first certifications issued only after 30 or more measurements. Performance improves noticeably: for exact TEs, Prony achieves er- rors below 0.05 and 0.1 in 83.4% and 93.4% of cases, respectively, while OMP achieves the same thresholds in 76% and 100% of cases. For noisy TEs, all algorithms re- duce the proportion of forecasts with errors exceeding 0.1 (cf. Fig. 4 and Fig. 5), with OMP achieving errors below 0.1 and 0.15 in 75% and 97% of cases, respectively. These results indicate that a sufficiently large number of measurements is crucial for reliable certification; while the minimum is unknown, monitoring stability, as sug- gested in [5, Sec. III-C], or setting a minimum measure- ment threshold (e.g., k ≥30) can significantly enhance performance. V. CONCLUSIONS We introduced ANM as a framework to certify the spectral models produced by data-driven forecasting al- gorithms. Our numerical experiments show that, with carefully chosen tolerance thresholds, ANM can success- fully certify both exact and noisy quantum TEs. A key limitation is that certification is guaranteed only when the underlying dynamics are sparse and the Bohr frequencies are well-separated. Nevertheless, our results indicate that ANM can still provide reliable certificates in realistic scenarios. A promising direction for future work is to characterize which time evolutions—defined by specific Hamiltonians, initial states, and observables— satisfy the framework conditions, thereby pinpointing the regimes where reliable forecasting is theoretically guaran- teed. VI. ACKNOWLEDGMENTS SZ and DM are grateful to Sergey Bravyi, Ewout van den Berg, and Kristan Temme for stimulating discus- sions. We all wish to thank the members of the Quantum Information Lab at Tecnun. This work has been supported by the BasQ strategy of the Department of Science, Universities, and Innovation of the Basque Government through the “Extrapolation of Von Neumann Dynamics beyond the reach of current Utility Scale Devices (VNDExUSD)” project. [1] B. Fauseweh, Quantum many-body simulations on digi- tal quantum computers: State-of-the-art and future chal- lenges, Nature Communications 15, 2123 (2024). [2] Y. Kim, A. Eddins, S. Anand, K. X. Wei, E. van den Berg, S. Rosenblatt, H. Nayfeh, Y. Wu, M. Zaletel, K. Temme, and A. Kandala, Evidence for the utility of quantum computing before fault tolerance, Nature 618, 500 (2023). [3] E. D. Switzer, N. Robertson, N. Keenan, Á. Rodríguez, A. D’Urbano, B. Pokharel, T. S. Rahman, O. Shtanko, S. Zhuk, and N. Lorente, Realization of two-dimensional discrete time crystals with anisotropic heisenberg cou- pling (2025). [4] K. Temme, S. Bravyi, and J. M. Gambetta, Error miti- gation for short-depth quantum circuits, Physical review letters 119, 180509 (2017). [5] A. Erpenbeck, Y. Zhu, Y. Yu, L. Zhang, R. Gerum, O. Goulko, C. Yang, G. Cohen, and E. Gull, Com- pact representation and long-time extrapolation of real-time data for quantum systems, arXiv preprint arXiv:2506.13760 (2025). [6] K. Manos, M. Weilenmann, and M. Navascues, Ex- trapolation of quantum measurement data (2025), arXiv:2507.06912. [7] R. Kaneko, M. Imada, Y. Kabashima, and T. Ohtsuki, Forecasting long-time dynamics in quantum many-body systems by dynamic mode decomposition, Physical Re- view Research 7, 013085 (2025). [8] Y. Shen, A. Buzali, H.-Y. Hu, K. Klymko, D. Camps, S. F. Yelin, and R. Van Beeumen, Efficient measurement- driven eigenenergy estimation with classical shadows, arXiv preprint arXiv:2409.13691 (2024). [9] P. Nandy, A. S. Matsoukas-Roubeas, P. Martínez- Azcona, A. Dymarsky, and A. del Campo, Quantum dynamics in krylov space: Methods and applications, Physics Reports 1125–1128, 1 (2025). [10] K. Takahashi and A. Del Campo, Krylov subspace meth- ods for quantum dynamics with time-dependent genera- tors, Physical Review Letters 134, 030401 (2025). [11] T. Sauer, Prony’s method: an old trick for new prob- lems, Snapshots of modern mathematics from Oberwol- fach (2018). [12] G. Plonka and M. Tasche, Prony methods for recov- ery of structured functions, GAMM-Mitteilungen 37, 239 (2014). [13] B. N. Bhaskar, G. Tang, and B. Recht, Atomic norm denoising with applications to line spectral estimation, IEEE Transactions on Signal Processing 61, 5987 (2013). [14] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, The convex geometry of linear inverse prob- lems, Foundations of Computational mathematics 12, 805 (2012). [15] E. J. Candès and C. Fernandez-Granda, Towards a math- ematical theory of super-resolution, Communications on pure and applied Mathematics 67, 906 (2014). [16] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, Com- pressed sensing offthe grid, IEEE transactions on infor- mation theory 59, 7465 (2013). [17] S. R. White and D. A. Huse, Numerical renormalization-group study of low-lying eigen- states of the antiferromagnetic s=1 heisenberg chain, Phys. Rev. B 48, 3844 (1993). [18] Honecker and Wessel, Magnetocaloric effect in quantum spin-s chains, Condensed Matter Physics 12, 399 (2009). 6 [19] T. Giamarchi, Quantum Physics in One Dimension (Oxford University Press, 2003). [20] H. J. Schulz, Phase diagrams and correlation exponents for quantum spin chains of arbitrary spin quantum num- ber, Phys. Rev. B 34, 6372 (1986). [21] I. Affleck, Quantum spin chains and the haldane gap, Journal of Physics: Condensed Matter 1, 3047 (1989). [22] M. den Nijs and K. Rommelse, Preroughening transitions in crystal surfaces and valence-bond phases in quantum spin chains, Phys. Rev. B 40, 4709 (1989). [23] J. Kurmann, H. Thomas, and G. Müller, Antiferromag- netic long-range order in the anisotropic quantum spin chain, Physica A: Statistical Mechanics and its Applica- tions 112, 235 (1982). [24] A. K. Bera, B. Lake, A. T. M. N. Islam, B. Klemke, E. Faulhaber, and J. M. Law, Field-induced magnetic ordering and single-ion anisotropy in the quasi-one-dimensional haldane chain com- pound srni2v2o8: A single-crystal investigation, Phys. Rev. B 87, 224423 (2013). [25] B. Nachtergaele, W. Spitzer, and S. Starr, Ferromagnetic ordering of energy levels, Journal of Statistical Physics 116, 719 (2004). [26] S. M. Giampaolo, G. Adesso, and F. Illuminati, Theory of ground state factorization in quantum cooperative sys- tems, Phys. Rev. Lett. 100, 197201 (2008). [27] J. R. Johansson, P. D. Nation, and F. Nori, Qutip: An open-source python framework for the dynamics of open quantum systems, Computer physics communica- tions 183, 1760 (2012). [28] S. Chen and D. Donoho, Basis pursuit, in Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers, Vol. 1 (IEEE, 1994) pp. 41–44. [29] F. Locatello, R. Khanna, M. Tschannen, and M. Jaggi, A unified optimization view on generalized matching pur- suit and frank-wolfe, in Proceedings of the 20th Interna- tional Conference on Artificial Intelligence and Statistics, Vol. 54, edited by A. Singh and J. Zhu (2017) pp. 860– 868. [30] A. Beck and M. Teboulle, A fast iterative shrinkage- thresholding algorithm for linear inverse problems, SIAM journal on imaging sciences 2, 183 (2009). [31] A. Paulraj, R. Roy, and T. Kailath, Estimation of signal parameters via rotational invariance techniques- esprit, in Nineteeth Asilomar Conference on Circuits, Systems and Computers, 1985. (1985) pp. 83–89. [32] R. Roy and T. Kailath, Esprit-estimation of signal parameters via rotational invariance techniques, IEEE Transactions on acoustics, speech, and signal processing 37, 984 (2002). [33] It is sufficient to consider [0, 1 2] instead of [−1 2, 1 2] since the frequency spectrum is symmetric. [34] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980 (2014). [35] M. ApS, MOSEK Optimization Software (2025), version 11.0. 7 Appendix A: Algorithms 1. Algorithm for finding frequencies and amplitudes a. Prony’s method Prony’s method is implemented as described in [11]. The method requires that the number of measurements m satisfies ≥2|Ω|, where |Ω| denotes the number of frequencies. However, |Ω| is typically unknown. To address this, we execute Prony’s method for a range of candidate values of |Ω|. Specifically, given m measurements, we apply Prony’s method using the first 2k measurements for k = 1, . . . , ⌊m/2⌋. For each k, we compute the error ∥y −x∥2, where x, y ∈Rm denote the reconstructed and measured signals, respectively. The algorithm selects the model (i.e., the set of frequencies and amplitudes defining x) that yields the smallest error. b. OMP Orthogonal Matching Pursuit (OMP) is implemented as described in [29, Algorithm 1], using atoms defined by a(f, φ) = (cos(2πfτ1 + φ), . . . , cos(2πfτm + φ)), (A1) where f ∈F := {0, 1 2L, . . . , L 2L} and Φ := {0, 2π P , . . . , 2π(P −1) P }, with L = 10000 and P = 10. We use the atoms in Eq. (A1) instead of those in Eq. (3) because y is real-valued. Consequently, its frequency spectrum is symmetric, and the corresponding amplitudes occur in complex-conjugate pairs. We make the algorithm terminate after 250 iterations or when the recovered signal, x, satisfies ∥y −x∥2 ≤ǫ for ǫ = 10−2. The algorithm returns the frequencies, phases, and amplitude magnitudes of the selected atoms. c. FISTA The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) is implemented as described in [30] to solve the optimization problem minimize x∈RLP 1 2∥Ax −y∥2 2 + λ∥x∥1, (A2) where y ∈Rm denotes the vector of measurements and A ∈Rm×LP is a matrix whose columns correspond to the atoms defined in Eq. (A1), with L = 1000 and P = 4. The algorithm outputs the frequencies, phases, and amplitude magnitudes associated with the selected atoms (i.e., the active columns of A). To account for numerical issues, we only consider an atom is selected if the associated amplitude magnitude is larger than 10−4. d. ESPRIT We have adjusted the method [31, 32] for a signal (1), generated by a Hamiltonian evolution, as follows. For N samples of a signal y = Pr k=1 ak z n k and for a window length d = m + 1 with 2 ≤d ≤N −1, we form delay Hankel matrices Y0, Y1 ∈Cd×L (L = N −d) that are shifted by one sample. Here m is a parameter that controls the size of a subspace, chosen such that N ≥m + 2. We then compute a rank-r singular value decomposition of Y0 = U Σ V ∗ and project the one-step shift matrix Y1 onto the r-dimensional signal subspace spanned by columns of U by forming the reduced operator Φ = U ∗Y1 V Σ−1 ∈Cr×r. (A3) Using Y0 = UΣV ∗and Y1 = V ΛW, we have U ∗Y1V Σ−1 ≈U ∗(V ΛW)V Σ−1, and since U spans the columns of V , Φ is similar to Λ on the signal subspace. Thus eig(Φ) = {z1, . . . , zr}. (A4) Having the poles {zk}, we form the Vandermonde matrix on the full sample range,  V  nk = z n k where n = 0, . . . , N −1, and find the amplitudes from least squares: mina∈Cr y −V a 2 2. The frequencies follow from ωk = −arg(zk). 8 Initial State Definition Zero |0⟩⊗N Néel |01⟩⊗N/2 Dimerized (|01⟩−|10⟩)⊗N/2 Paramagnetic (|0⟩+ |1⟩)⊗N Ferromagnetic |1⟩⊗N TABLE III. Initial states and their definitions (normalization constants are omitted for clarity of presentation). Additionally, we assumed unitary dynamics and, thus, project zk to a unit circle, zk 7→ei arg zk. The subspace parameter m was chosen as max(2, ⌊N 3 ⌋) providing the requirement N ≥m + 2. The number of samples N varied in simulations as N ≡k ∈{1−, . . ., 160}. When the SVD procedure failed, we considered that as a negative TE instance. 2. Algorithm to solve the dual problem in Eq. (5) We solve the problem in Eq. (5) with Adam [34]. The objective function is f(q) = ⟨y, q⟩+ λ L X j=1 max{0, |(Aq)j| −1}2, (A5) where y ∈Rm is the vector of measurements, and A ∈RL×m a matrix with rows equal to the atoms in Eq. (3). We use L = 1000 and λ = 15. We set the learning rate equal to 0.1 and run the algorithm for 500 iterations. Appendix B: Hamiltonians and Initial States We consider one–dimensional spin- 1 2 chains. For convenience, we write all Hamiltonians in the Pauli convention, i.e. with Pauli matrices X, Y , and Z acting on site k: HXYZ = Jx X k XkXk+1 + Jy X k YkYk+1 + Jz X k ZkZk+1 + hx X k Xk + hy X k Yk + hz X k Zk. When restricted to Ising–type couplings and a transverse field, we use: HTFIM = Jz X k ZkZk+1 + hx X k Xk. Here the index k runs over lattice sites (boundary conditions are specified elsewhere, mostly as “periodic”), and the couplings {Jα} and fields {hα} are given in units where energy scales are O(1). Sometimes, spin Hamiltonians are written with spin operators Sα k rather than Pauli matrices σα k . Since Sα k = 1 2 σα k , our parameters map to the S-operator convention as: J(S) α = 4 Jα, h(S) α = 2 hα, and respectively, Jα = J(S) α /4, hα = h(S) α /2. Signs are unaffected by this rescaling. Our study focuses on spin- 1 2 models in the above Pauli normalization for implementation simplicity. The parameter choices we explore were inspired by Hamiltonians used in diverse physical simulations (see Table IV). We note, however, that several of those source models were originally formulated for spin-1 or higher; when importing their coefficients into our convention one should apply the normalization map above. Spin- 1 2 and spin-1 chains can exhibit qualitatively different physics. Because our primary objective here is predictive performance rather than a faithful comparison of phase structures, we do not distinguish between spin- 1 2 and spin-1 models in the experiments. Results should therefore be interpreted with this caveat in mind. The initial states used in all simulations (Table III) are standard in the literature and admit shallow state-preparation circuits, making them both representative and practical for our benchmarks. 9 Parameters Description Jx = 0, Jy = 0, Jz = −2, hx = 1, hy = 0, hz = 0 Ising ferromagnet. Jx = 0, Jy = 0, Jz = 2, hx = 1, hy = 0, hz = 0 Ising antiferromagnet. Jx = 0, Jy = 0, Jz = 1, hx = 2, hy = 0, hz = 0 Ising paramagnet. Jx = 0, Jy = 0, Jz = 1, hx = 1, hy = 0, hz = 0 Ising critical model. Jx = Jy = Jz = 1, hx = hy = 0, hz = 0.2 Isotropic Haldane, tiny longitudinal field, [17]. Jx = Jy = Jz = 1, hx = hy = 0, hz = 0.4105 Antiferromagnetic quantum chain at the lower critical field, [18]. Jx = Jy = Jz = 1, hx = hy = 0, hz = 0.6 Field-induced Tomonaga–Luttinger liquid, [19]. Jx = 0.7, Jy = 0.7, Jz = 1.3, hx = 0, hy = 0, hz = 0.1 Easy-axis XXZ with weak field, [20]. Jx = 1.0, Jy = 1.0, Jz = 0.6, hx = 0.2, hy = 0, hz = 0 Easy-plane XXZ, transverse probe, [21], Jx = Jy = Jz = 1, hx = 0.3, hy = 0, hz = 0.3 Tilted field, symmetry breaking, [22]. Jx = 1.2, Jy = 0.8, Jz = 1, hx = 0.9, hy = 0, hz = 0 Anisotropic XYZ + transverse field, [23]. Jx = Jy = Jz = 1, hx = 0, hy = 0, hz = 1 Isotropic AFM, intermediate longitudinal field, [24]. Jx = Jy = Jz = −1, hx = 0, hy = 0, hz = 0.1 Isotropic ferromagnetic Heisenberg chain, tiny field, [25]. Jx = 1, Jy = 1, Jz = −0.7, hx = 0.15, hy = 0, hz = 0.15 Mixed-sign anisotropy, easy-plane exchange but ferromagmetic Jz, [19]. Jx = Jy = Jz = 1, hx = 1.5, hy = 0, hz = 0 Isotropic AFM with strong transverse field (near product paramagnet) Jz, [26]. TABLE IV. Hamiltonians selected for TE. 10 Appendix C: Additional experiments 1. Error growth after first certification The following figures illustrate the evolution of the forecasting error after the first certification, measured on the samples predicted beyond that point. For example, if the first certification occurs at sample 50 in one TE and at sample 100 in another, the first forecast samples correspond to 51 and 101, respectively. The average error is indicated by a thick line, and the standard deviation is shown as a shaded area. The figures display the average results for all algorithms and TEs, with the experimental setup described in Sec. IV. 0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140 No shot noise Error 250 shot noise 500 shot noise Error Forecast measurements 1000 shot noise Forecast measurements FIG. 6. Average per-sample forecasting error after the first certification for exact TEs and three shot noise levels (250, 500, and 1000 shots). The error is averaged over all algorithms. 0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140 No shot noise Error 250 shot noise 500 shot noise Error Forecast measurements 1000 shot noise Forecast measurements FIG. 7. Average per-sample forecasting error after the first certification after 30 or more measurements, for exact TEs and three shot noise levels (250, 500, and 1000 shots). The error is averaged over all algorithms. 11 2. Illustrative examples −1 −0.5 0 0.5 1 0 20 40 60 80 100 120 140 160 180 200 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Measurements Forecasting Measurement |Q(f)| f ∈Ω Bohr frequency FIG. 8. Forecasting with OMP: 18 qubits Anisotropic XYZ + transverse field Hamiltonian [23], dimerized initial state, and O = X9X10. The red vertical line marks the training interval [0 . . . 100), where we fit the sparse model; the remaining data [100 . . . 200) are reserved for out-of-sample verification. −1 −0.5 0 0.5 1 0 20 40 60 80 100 120 140 160 180 200 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Measurements Forecasting Measurement |Q(f)| f ∈Ω Bohr frequency FIG. 9. Forecasting with OMP: 20 qubit easy-plane XXZ transverse probe Hamiltonian [21], paramagnetic initial state, and O = Y10Y11. The red vertical line marks the training interval [0 . . . 100), where we fit the sparse model; the remaining data [100 . . . 200) are reserved for out-of-sample verification. 12 Appendix D: SDP Running Times with MOSEK Fig. 10 shows the time it takes MOSEK [35] (a commercial SDP solver) to solve the problem in Eq. (4). The SDP is run on an Intel i7-14700 CPU with 32 GB of memory. 0 50 100 150 200 250 0 50 100 150 200 250 300 350 400 Time (s) Number of measurements FIG. 10. Time to solve the SDP in Eq. (4) with MOSEK [35] for different number of measurements.
Forecasting Quantum Observables: A Compressed Sensing Approach with Performance Guarantees Víctor Valls,1 Albert Akhriev,1 Olatz Sanz Larrarte,2 Javier Oliva del Moral,2, 3 Štěpán Šmíd,4, 5 Josu Etxezarreta Martinez,2 Sergiy Zhuk,1 and Dmytro Mishagli1, 5 1IBM Quantum, IBM Research Europe - Dublin 2 - 20018 San Sebastian, Spain 3Donostia International Physics Center, 20018 San Sebastián, Spain 4 5IBM Quantum, IBM Research Europe - UK (Dated: October 17, 2025) Noise in near-term quantum devices limits the timescales over which measurements can be reliably obtained. Existing data-driven extrapolation methods extend the dynamics of quantum observables from measurements, but they cannot guarantee that the recovered model reflects the true dynamics. In this paper, we introduce atomic norm minimization as a framework to certify that a model learned by any algorithm accurately captures the underlying dynamics of quantum observables. This certification is valid when the system is governed by a small number of well-separated Bohr frequencies. We validate the framework across multiple algorithms on numerical experiments with spin-chain Hamiltonians involving 8-20 qubits. Comparing with exact diagonalization, certification yields an average forecasting error below 0.1 (within the observable's range of [-1, 1]) in 98% of cases and below 0.05 in 89-97% of cases, depending on the forecasting algorithm. Even in the presence of realistic shot noise, certified models remain robust, with success rates decreasing only to 88-95% for the 0.1 error threshold. I. INTRODUCTION Quantum spin Hamiltonians are natural testbeds for near-term quantum devices because their local interaction structure maps directly onto existing qubit connectivity [1]. Recent experiments have already probed dynamics with more than one hundred qubits and circuit depths of several hundred two-qubit gates [2, 3]. However, noise grows rapidly with circuit depth [4], limiting the timescales over which reliable quantum time evolution (TE) can be observed. One way to extend this limit is to infer the dynamics from early, low-noise measurement data [5-8]. These data-driven forecasting methods are appealing for their simplicity and scalability, in contrast to computationally heavier model-based approaches that demand explicit Hamiltonian knowledge (e.g., Krylov subspace methods [9, 10]). The effectiveness of this datadriven paradigm-which traces its origins to classical signal recovery methods like Prony's method [11, 12]- has been empirically demonstrated, for example, in predicting the dynamics of the transverse-field Ising model (TFIM) using a truncated Dynamic Mode Decomposition (DMD) [7] or a spin-boson system using ESPRIT [5]. Despite their empirical success, a critical drawback of these data-driven forecasting methods is their fundamental lack of reliable guarantees on the long-term forecast values. To address this limitation, we introduce a novel framework that uses Atomic Norm Minimization (ANM) [13, 14] to certify the correctness of candidate TE models. While ANM was originally devised as a robust technique for signal recovery in off-the-grid compressed sensing [13, 15], it has a high computation cost, making it impractical for a large number of measurements. Here, we repurpose the ANM theoretical guarantees to create a rigorous validation framework that ensures that a forecast, obtained by any algorithm, accurately captures the underlying quantum dynamics. The paper makes the following contributions: • We introduce a framework that leverages ANM to certify the correctness of candidate quantum TE models obtained by any forecasting algorithm (Sec. III B). The framework requires that the underlying model has a small number of wellseparated Bohr frequencies relative to the number of measurements. • We validate the certification framework through extensive experiments on spin-chain time evolutions (TEs) with 8-20 qubits (Sec. IV). Our results show that, for exact TEs, certifications yields an average forecasting error below 0.1 (within the observable's range of [-1, 1]) in 98% of cases and below 0.05 in 89% of cases. For TEs with shot noise, these rates decrease to 88-95% for a 0.1 error threshold and 53-74% for a 0.05 threshold, depending on the algorithm. II. PROBLEM STATEMENT Consider a quantum system with n qubits that evolves according to a time-independent Hamiltonian H ∈ C2n×2n. Let |ψ(0)⟩∈C2n be an initial state vector. The quantum TE of the state vector is given by |ψ(τk)⟩= e-iHτk|ψ(0)⟩where τk = ∆(k -1) is a discrete time, ∆> 0, and k ∈[m] := {1, . . . , m} with m the total number of time steps. The k-th measurement of an 16 Oct 2025 2 -0.1 0 0.1 0.2 0.3 0.4 0.5 -0.2 -0.1 0 0.1 0.2 (a) |ψ(0)⟩= |0⟩⊗n -0.2 -0.1 0 0.1 0.2 (b) |ψ(0)⟩= |1⟩⊗|0⟩⊗n-1 O = X1 R(c(f)) Bohr frequency O = X1 Bohr frequency FIG. 1. Amplitudes R(c(f)) for TE under H = Pn i=1 σz i σz i+1 + Pn i=1 σx i with n = 8 qubits, two initial states, and periodic boundary conditions (site n+1 = 1). The imaginary part of c(f) is zero for both cases. observable O ∈C2n×2n is given by yk := ⟨ψ(τk)|O|ψ(τk)⟩= X f∈Ω c(f)ei2πfτk, (1) where c(f) ∈C, and f ∈Ω⊂[-1 2, 1 2] is a Bohr frequency, i.e., normalized difference of the eigenvalues of H. The set Ωcontains only the frequencies with non-zero amplitude, which depend on the spectral information of H, the initial state |ψ(0)⟩, and observable O. Note that Ωis generally unknown for large systems since we cannot compute the spectral information of H. Fig. 1 shows the amplitudes R(c(f)) for all f ∈Ωfor a TFIM Hamiltonian with two initial states and the same observable. Note that the number of non-zero amplitudes varies. Specifically, subfigure (a) has fewer and more well-separated frequencies than (b). Also, note that the spectrum is symmetric. The TE forecasting problem is the following. For m given measurements y := (y1, . . . , ym) ∈[-1, 1]m and tolerance ǫ ≥0, find a forecast sequence xk with k ∈[m′] such that 1 m′ m′ X k=1 |xk -⟨ψ(τm+k)|O|ψ(τm+k)⟩| ≤ǫ. (2) That is, the goal is to predict m′ future values of the time series based on the m available measurements. III. CERTIFICATION VIA ANM In this section, we briefly review atomic norm minimization (ANM, Sec. III A) and describe how it can be used to certify the frequencies and amplitudes recovered by any spectral estimation algorithm (Sec. III B). A. Atomic norm minimization ANM aims to minimize the atomic norm: ∥y∥A := inf{t ≥0 : y ∈t · conv(A)}, where A is a collection of "atoms" (vectors) and conv(A) its convex hull. The -1 -0.5 0 0.5 1 -0.4 -0.2 0 0.2 0.4 R(c(f)) Q(f) Bohr frequency FIG. 2. Illustration of the dual polynomial Q(f) for the setting in Fig. 1 (a). The quantum TE is generated over 10s with a sampling rate of 5 steps per second (i.e., m = 50, ∆= 1/5). Only frequencies with amplitude magnitude larger than 0.005 are shown. The markers (small crosses) indicate where the Q(f) matches a frequency. atomic norm can be seen as the continuous analogue of the l1 norm, since A may contain an uncountable set of atoms corresponding to all possible continuous-frequency sinusoidal components. For our TE forecasting problem, A contains vectors of the form a(f) := (ei2πfτ1, . . . , ei2πfτm), (3) where f ∈[-1 2, 1 2], and the atomic norm minimization finds the sparsest combination of these continuous atoms that match the measurements vector y. In some cases, this representation is unique, enabling exact recovery of the true frequencies and amplitudes underlying y. For the representation to be unique, two sufficient conditions must be met: the number of measurements is large enough, and the true frequencies are sufficiently separated (i.e., |fi -fj| ≥ 4 m for all fi, fj ∈Ωwith i ̸= j); see details in [15, Theorem 1.2] and [16, Theorem 1.1]. In practice, we can solve the ANM with atoms of the form in Eq. (3) via a semidefinite program (SDP): minimize T,t>0 1 2mTr(T ) + 1 2t s.t. T y y∗t ⪰0, (4) where T ∈Cm×m is a Hermitian Toeplitz matrix that admits a decomposition T = P f∈F w(f)a(f)a(f)∗(see [16, Lemma 2.2]), where w(f) > 0 and F ⊂[-1 2, 1 2] denote the support set of active frequencies. In brief, the linear matrix inequality in Eq. (4) ensures that y lies in the subspace spanned by vectors a(f), while the objective promotes low-rank T , thus enforcing sparsity in the frequency domain. Once the optimal T is found, the frequencies can be recovered from T using Prony's method. Crucially, the correctness of these recovered frequencies can be certified by solving the associated dual problem: maximize q∈Cm ⟨y, q⟩ s.t. sup f∈[-1 2 , 1 2 ] |Q(f)| ≤1, (5) where Q(f) := ⟨a(f), q⟩is the dual polynomial. The solution is certified as unique when the dual polynomial 3 reaches its maximum value |Q(f)| = 1 precisely for all f ∈Ωand |Q(f)| < 1 otherwise (see [16, Proposition 2.4]). An example of this dual certificate is shown in Fig. 2 for the setting used in Fig. 1 (a), where Q(f) is real-valued and touches ±1 precisely at the true frequencies (indicated with markers). Finally, we note that solving the ANM with the SDP formulation in Eq. (4) is computationally impractical for a large number of measurements (see Appendix D). In the following section, we show how ANM provides a principled way to certify whether, under sparsity and frequency separation conditions, a candidate model accurately reproduces the true dynamics in Eq. (1). B. Certification pipeline The proposed certification pipeline is illustrated in Fig. 3. An algorithm (e.g., ESPRIT) takes as input a vector y with m measurements and returns candidate frequencies Ωand amplitudes c(f) such that yk ≈ P f∈Ωc(f) e-i2πfτk, k ∈{1, . . ., m}. These candidates are then certified by solving the ANM dual problem in Eq. (5). Certificates. To certify optimality, we evaluate three key criteria: the duality gap, frequency separation, and the dual polynomial. The duality gap measures the difference between the primal and dual values (Eqs. (4) and (5)). The gap is always non-negative and, due to strong duality, is zero when both problems are solved optimally. We can obtain an upper bound on the optimal primal objective by constructing T = P f∈Ω|c(f)|a(f)a(f)∗, and then setting t = y∗T †y. This construction ensures that the constraints in Eq. (4) are satisfied by the Schur complement condition. A lower bound on the dual value can be obtained by solving the dual problem approximately (as discussed later). The duality gap is considered admissible if this is below a prescribed tolerance. For frequency separation, we need to check that the minimum distance between the candidate frequencies in Ωexceeds 4/m, which is required for unique recovery in ANM. Finally, the dual polynomial has to satisfy |Q(f)| ≈1 for all f ∈Ωand |Q(f)| < 1, otherwise, within numerical tolerances. Computing the dual polynomial is straightforward once we have a solution to the dual problem. Solving the dual problem. The dual problem in Eq. (5) is infinite-dimensional and must be discretized to make it tractable. We therefore sample the frequency domain finely, transforming the problem into a secondorder cone program. In addition, since the measurement vector y is real-valued (which is always the case in quantum TEs), the dual problem can be further simplified to max q∈Rm ⟨y, q⟩ s.t. -1 ≤⟨a(f), q⟩≤1, ∀f ∈F, (6) Find frequencies and amplitudes such that Certify solution Gap optimality certificate FIG. 3. Schematic illustration of the proposed certification pipeline. The process verifies frequency and amplitude estimates obtained from any algorithm using the dual formulation of ANM. where F is a finely discretized set of frequencies in [0, 1 2], since the frequency spectrum is symmetric. Finally, although discretization introduces a small approximation error, it remains negligible when the frequency grid is sufficiently fine. IV. EXPERIMENTS This section evaluates the proposed ANM certification methodology using several existing algorithms. The goal is to demonstrate that, when a certificate of optimality is obtained, the correct model can be recovered, enabling reliable forecasting of the TE. Note that successful certification also depends on the true model being sparse and having well-separated frequencies-conditions that are typically unknown in practice. A. Setup Dataset. We consider quantum TEs generated by 15 spin-chain Hamiltonians (4 TFIM and 11 Heisenberg) with 8-20 qubits and periodic boundary conditions [1726] (see details in Appendix B). For all TEs, we consider five initial states-Néel, dimerized, paramagnetic, ferromagnetic, and the zero state-and measure six observables (X, Y , Z, XX, Y Y , and ZZ); see Appendix B. This results in a total of 5850 TEs. The TEs are generated using Qutip [27] for 40 time units with dt = 0.2, yielding m = 200 measurements per TE. A time unit is t·J, where J is the coupling strength (interaction energy) in a Hamiltonian. Spectral estimation algorithms. We use four algorithms to compute the frequencies and amplitudes: Prony, OMP, FISTA, and ESPRIT. Prony [11] is a method that can provably recover the true frequencies and amplitudes under ideal conditions but is highly sensitive to noise. OMP [28, 29] is a widely used heuristic in signal processing for sparse signal recovery. FISTA [30] is a fast gradient method for solving Lasso problems, and ESPRIT [31, 32] is a well-known algorithm in spectrum estimation, also applied in quantum TE forecasting [5]. The algorithms are implemented with default or minimal 4 Exact TE TE with shot noise ǫ PRO OMP FIS ESP PRO OMP FIS ESP 0.01 83.4 75.1 68.7 70.8 50.2 20.2 10.3 21.3 0.05 97.4 96.4 95.8 89.5 58.3 69.9 74.6 53.2 0.10 98.7 99.2 99.7 98.8 82.2 95.1 95.4 83.1 TABLE I. SCs rate for different algorithms and forecasting tolerances (ǫ). Noisy TEs use 1000 shots. Exact TE TE with shot noise ǫ PRO OMP FIS ESP PRO OMP FIS ESP 0.01 15.8 11.2 2.9 13.8 6.7 7.1 1.3 6.6 0.05 18.4 14.3 4.1 17.5 7.7 24.5 9.0 16.4 0.10 18.6 14.7 4.3 19.3 10.9 33.4 11.6 25.6 TABLE II. Absolute SCs rate for different algorithms and forecasting tolerances (ǫ). The Absolute SC rate is the percentage of SCs over the total number of runs. Noisy TEs use 1000 shots. tuning settings, as our focus is on evaluating the certification methodology. A brief description of each algorithm and its implementation is provided in Appendix A. Forecasting procedure. Each algorithm is applied to the first k measurements, where k ∈{10, . . ., 160}, and forecasting is evaluated on the remaining 200 -k measurements, as defined in Eq. (2). Certification. For each algorithm run, ANM certification is performed (150 times per algorithm and TE). This is done by solving Eq. (6) on a frequency grid of 1000 uniformly spaced points in [0, 1 2] [33], augmented with the candidate frequencies estimated by the algorithmto ensure that the dual polynomial is evaluated at the exact/candidate frequencies. To account for numerical inaccuracies, empirically chosen tolerances are applied. Specifically, for exact TEs, the dual polynomial must satisfy |Q(f)| ≥0.98 at all candidate frequencies, and the duality gap must be smaller than 0.05. For TEs with shot noise, the thresholds are |Q(f)| ≥0.95 and a duality gap smaller than 0.5. In addition, we only consider amplitudes with magnitude larger than 0.025 for exact TEs and 0.05 for TEs with shot noise. B. Results We evaluate forecasting performance using tolerances ǫ ∈{0.01, 0.05, 0.1} (see Sec. II), for both exact TEs and TEs with shot noise. We consider the output of a spectral algorithm (i.e., frequencies and amplitudes) certified if it meets the criteria in Sec. III B. A successful certification (SC) occurs when a certified run also satisfies the forecasting error in Eq. (2) for the remaining measurements, which range from 190 (k = 10) down to 40 (k = 160). We define the SC rate as the percentage of SCs among 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0 0.05 0.1 0.15 0.2 0.25 0.3 Fraction of certified TEs Error Exact TE Prony OMP FISTA ESPRIT Error 1000-shot noise FIG. 4. Distribution of the forecasting error upon the first certification, with exact and noisy TEs. 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0 0.05 0.1 0.15 0.2 0.25 0.3 Fraction of certified TEs Error Exact TE Prony OMP FISTA ESPRIT Error 1000-shot noise FIG. 5. Distribution of the forecasting error upon the first certification for certificates issued for 30 or more measurements, with exact and noisy TEs. all certified runs (successful or not). Average performance. Table I reports the SC rate for each algorithm and tolerance ǫ. For exact TEs, all algorithms achieve high SC rates, exceeding 98% for ǫ ≤ 0.1. When shot noise is introduced (1000 shots), SC rates decrease, particularly for ǫ ≤0.01. OMP and FISTA generally outperform Prony and ESPRIT on noisy TEs; however, FISTA certifies a smaller number of runs compared to the other algorithms. Table II shows the percentage of SCs over all algorithm runs (877,500 runs per algorithm: 150 runs per TE across 5850 TEs). Only a fraction of the runs yield a certificate, with Prony and ESPRIT achieving SC around 18% of all the runs for ǫ ≤0.05. Interestingly, the absolute SC rate for OMP and ESPRIT (Table II) increases with noisy TEs. This is likely due to the looser certification tolerances applied under noise, which increase the total number of certified runs. Performance on first certification. Fig. 4 shows the distribution of forecasting errors at the first certification (the first k ∈{10, . . ., 160} that meets the criteria in Sec. III B). Overall, the algorithms exhibit similar behavior. For exact TEs, Prony and OMP achieve errors below 0.05 and 0.1 in approximately 50% and 75% of cases, respectively. For noisy TEs, the proportion of forecasts with errors below 0.05 drops sharply, and both Prony and OMP achieve errors below 0.1 in only about 50% of cases. These results contrast with the high SC rates in Ta5 ble I, possibly due to insufficient measurements used during spectral estimation or certification. To investigate this, Fig. 5 shows forecasting errors for first certifications issued only after 30 or more measurements. Performance improves noticeably: for exact TEs, Prony achieves errors below 0.05 and 0.1 in 83.4% and 93.4% of cases, respectively, while OMP achieves the same thresholds in 76% and 100% of cases. For noisy TEs, all algorithms reduce the proportion of forecasts with errors exceeding 0.1 (cf. Fig. 4 and Fig. 5), with OMP achieving errors below 0.1 and 0.15 in 75% and 97% of cases, respectively. These results indicate that a sufficiently large number of measurements is crucial for reliable certification; while the minimum is unknown, monitoring stability, as suggested in [5, Sec. III-C], or setting a minimum measurement threshold (e.g., k ≥30) can significantly enhance performance. V. CONCLUSIONS We introduced ANM as a framework to certify the spectral models produced by data-driven forecasting algorithms. Our numerical experiments show that, with carefully chosen tolerance thresholds, ANM can successfully certify both exact and noisy quantum TEs. A key limitation is that certification is guaranteed only when the underlying dynamics are sparse and the Bohr frequencies are well-separated. Nevertheless, our results indicate that ANM can still provide reliable certificates in realistic scenarios. A promising direction for future work is to characterize which time evolutions-defined by specific Hamiltonians, initial states, and observablessatisfy the framework conditions, thereby pinpointing the regimes where reliable forecasting is theoretically guaranteed. VI. ACKNOWLEDGMENTS SZ and DM are grateful to Sergey Bravyi, Ewout van den Berg, and Kristan Temme for stimulating discussions. We all wish to thank the members of the Quantum Information Lab at Tecnun. This work has been supported by the BasQ strategy of the "Extrapolation of Von Neumann Dynamics beyond the reach of current Utility Scale Devices (VNDExUSD)" project. [1] B. Fauseweh, Quantum many-body simulations on digital quantum computers: State-of-the-art and future challenges, Nature Communications 15, 2123 (2024). [2] Y. Kim, A. Eddins, S. Anand, K. X. Wei, E. van den Berg, S. Rosenblatt, H. Nayfeh, Y. Wu, M. Zaletel, K. Temme, and A. Kandala, Evidence for the utility of quantum computing before fault tolerance, Nature 618, 500 (2023). [3] E. D. Switzer, N. Robertson, N. Keenan, Á. Rodríguez, A. D'Urbano, B. Pokharel, T. S. Rahman, O. Shtanko, S. Zhuk, and N. Lorente, Realization of two-dimensional discrete time crystals with anisotropic heisenberg coupling (2025). [4] K. Temme, S. Bravyi, and J. M. Gambetta, Error mitigation for short-depth quantum circuits, Physical review letters 119, 180509 (2017). [5] A. Erpenbeck, Y. Zhu, Y. Yu, L. Zhang, R. Gerum, O. Goulko, C. Yang, G. Cohen, and E. Gull, Compact representation and long-time extrapolation of real-time data for quantum systems, arXiv preprint (2025). [6] K. Manos, M. Weilenmann, and M. Navascues, Extrapolation of quantum measurement data (2025), . [7] R. Kaneko, M. Imada, Y. Kabashima, and T. Ohtsuki, Forecasting long-time dynamics in quantum many-body systems by dynamic mode decomposition, Physical Review Research 7, 013085 (2025). [8] Y. Shen, A. Buzali, H.-Y. Hu, K. Klymko, D. Camps, S. F. Yelin, and R. Van Beeumen, Efficient measurementdriven eigenenergy estimation with classical shadows, arXiv preprint (2024). [9] P. Nandy, A. S. Matsoukas-Roubeas, P. MartínezAzcona, A. Dymarsky, and A. del Campo, Quantum dynamics in krylov space: Methods and applications, Physics Reports 1125-1128, 1 (2025). [10] K. Takahashi and A. Del Campo, Krylov subspace methods for quantum dynamics with time-dependent generators, Physical Review Letters 134, 030401 (2025). [11] T. Sauer, Prony's method: an old trick for new problems, Snapshots of modern mathematics from Oberwolfach (2018). [12] G. Plonka and M. Tasche, Prony methods for recovery of structured functions, GAMM-Mitteilungen 37, 239 (2014). [13] B. N. Bhaskar, G. Tang, and B. Recht, Atomic norm denoising with applications to line spectral estimation, IEEE Transactions on Signal Processing 61, 5987 (2013). [14] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky, The convex geometry of linear inverse problems, Foundations of Computational mathematics 12, 805 (2012). [15] E. J. Candès and C. Fernandez-Granda, Towards a mathematical theory of super-resolution, Communications on pure and applied Mathematics 67, 906 (2014). [16] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht, Compressed sensing offthe grid, IEEE transactions on information theory 59, 7465 (2013). [17] S. R. White and D. A. Huse, Numerical renormalization-group study of low-lying eigenstates of the antiferromagnetic s=1 heisenberg chain, Phys. Rev. B 48, 3844 (1993). [18] Honecker and Wessel, Magnetocaloric effect in quantum spin-s chains, Condensed Matter Physics 12, 399 (2009). 6 [19] T. Giamarchi, Quantum Physics in One Dimension (Oxford University Press, 2003). [20] H. J. Schulz, Phase diagrams and correlation exponents for quantum spin chains of arbitrary spin quantum number, Phys. Rev. B 34, 6372 (1986). [21] I. Affleck, Quantum spin chains and the haldane gap, Journal of Physics: Condensed Matter 1, 3047 (1989). [22] M. den Nijs and K. Rommelse, Preroughening transitions in crystal surfaces and valence-bond phases in quantum spin chains, Phys. Rev. B 40, 4709 (1989). [23] J. Kurmann, H. Thomas, and G. Müller, Antiferromagnetic long-range order in the anisotropic quantum spin chain, Physica A: Statistical Mechanics and its Applications 112, 235 (1982). [24] A. K. Bera, B. Lake, A. T. M. N. Islam, B. Klemke, E. Faulhaber, and J. M. Law, Field-induced magnetic ordering and single-ion anisotropy in the quasi-one-dimensional haldane chain compound srni2v2o8: A single-crystal investigation, Phys. Rev. B 87, 224423 (2013). [25] B. Nachtergaele, W. Spitzer, and S. Starr, Ferromagnetic ordering of energy levels, Journal of Statistical Physics 116, 719 (2004). [26] S. M. Giampaolo, G. Adesso, and F. Illuminati, Theory of ground state factorization in quantum cooperative systems, Phys. Rev. Lett. 100, 197201 (2008). [27] J. R. Johansson, P. D. Nation, and F. Nori, Qutip: An open-source python framework for the dynamics of open quantum systems, Computer physics communications 183, 1760 (2012). [28] S. Chen and D. Donoho, Basis pursuit, in Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers, Vol. 1 (IEEE, 1994) pp. 41-44. [29] F. Locatello, R. Khanna, M. Tschannen, and M. Jaggi, A unified optimization view on generalized matching pursuit and frank-wolfe, in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Vol. 54, edited by A. Singh and J. Zhu (2017) pp. 860868. [30] A. Beck and M. Teboulle, A fast iterative shrinkagethresholding algorithm for linear inverse problems, SIAM journal on imaging sciences 2, 183 (2009). [31] A. Paulraj, R. Roy, and T. Kailath, Estimation of signal parameters via rotational invariance techniques- esprit, in Nineteeth Asilomar Conference on Circuits, Systems and Computers, 1985. (1985) pp. 83-89. [32] R. Roy and T. Kailath, Esprit-estimation of signal parameters via rotational invariance techniques, IEEE Transactions on acoustics, speech, and signal processing 37, 984 (2002). [33] It is sufficient to consider [0, 1 2] instead of [-1 2, 1 2] since the frequency spectrum is symmetric. [34] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, arXiv preprint (2014). [35] M. ApS, MOSEK Optimization Software (2025), version 11.0. 7 Appendix A: Algorithms 1. Algorithm for finding frequencies and amplitudes a. Prony's method Prony's method is implemented as described in [11]. The method requires that the number of measurements m satisfies ≥2|Ω|, where |Ω| denotes the number of frequencies. However, |Ω| is typically unknown. To address this, we execute Prony's method for a range of candidate values of |Ω|. Specifically, given m measurements, we apply Prony's method using the first 2k measurements for k = 1, . . . , ⌊m/2⌋. For each k, we compute the error ∥y -x∥2, where x, y ∈Rm denote the reconstructed and measured signals, respectively. The algorithm selects the model (i.e., the set of frequencies and amplitudes defining x) that yields the smallest error. b. OMP Orthogonal Matching Pursuit (OMP) is implemented as described in [29, Algorithm 1], using atoms defined by a(f, φ) = (cos(2πfτ1 + φ), . . . , cos(2πfτm + φ)), (A1) where f ∈F := {0, 1 2L, . . . , L 2L} and Φ := {0, 2π P , . . . , 2π(P -1) P }, with L = 10000 and P = 10. We use the atoms in Eq. (A1) instead of those in Eq. (3) because y is real-valued. Consequently, its frequency spectrum is symmetric, and the corresponding amplitudes occur in complex-conjugate pairs. We make the algorithm terminate after 250 iterations or when the recovered signal, x, satisfies ∥y -x∥2 ≤ǫ for ǫ = 10-2. The algorithm returns the frequencies, phases, and amplitude magnitudes of the selected atoms. c. FISTA The Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) is implemented as described in [30] to solve the optimization problem minimize x∈RLP 1 2∥Ax -y∥2 2 + λ∥x∥1, (A2) where y ∈Rm denotes the vector of measurements and A ∈Rm×LP is a matrix whose columns correspond to the atoms defined in Eq. (A1), with L = 1000 and P = 4. The algorithm outputs the frequencies, phases, and amplitude magnitudes associated with the selected atoms (i.e., the active columns of A). To account for numerical issues, we only consider an atom is selected if the associated amplitude magnitude is larger than 10-4. d. ESPRIT We have adjusted the method [31, 32] for a signal (1), generated by a Hamiltonian evolution, as follows. For N samples of a signal y = Pr k=1 ak z n k and for a window length d = m + 1 with 2 ≤d ≤N -1, we form delay Hankel matrices Y0, Y1 ∈Cd×L (L = N -d) that are shifted by one sample. Here m is a parameter that controls the size of a subspace, chosen such that N ≥m + 2. We then compute a rank-r singular value decomposition of Y0 = U Σ V ∗ and project the one-step shift matrix Y1 onto the r-dimensional signal subspace spanned by columns of U by forming the reduced operator Φ = U ∗Y1 V Σ-1 ∈Cr×r. (A3) Using Y0 = UΣV ∗and Y1 = V ΛW, we have U ∗Y1V Σ-1 ≈U ∗(V ΛW)V Σ-1, and since U spans the columns of V , Φ is similar to Λ on the signal subspace. Thus eig(Φ) = {z1, . . . , zr}. (A4) Having the poles {zk}, we form the Vandermonde matrix on the full sample range, V nk = z n k where n = 0, . . . , N -1, and find the amplitudes from least squares: mina∈Cr y -V a 2 2. The frequencies follow from ωk = -arg(zk). 8 Initial State Definition Zero |0⟩⊗N Néel |01⟩⊗N/2 Dimerized (|01⟩-|10⟩)⊗N/2 Paramagnetic (|0⟩+ |1⟩)⊗N Ferromagnetic |1⟩⊗N TABLE III. Initial states and their definitions (normalization constants are omitted for clarity of presentation). Additionally, we assumed unitary dynamics and, thus, project zk to a unit circle, zk 7→ei arg zk. The subspace parameter m was chosen as max(2, ⌊N 3 ⌋) providing the requirement N ≥m + 2. The number of samples N varied in simulations as N ≡k ∈{1-, . . ., 160}. When the SVD procedure failed, we considered that as a negative TE instance. 2. Algorithm to solve the dual problem in Eq. (5) We solve the problem in Eq. (5) with Adam [34]. The objective function is f(q) = ⟨y, q⟩+ λ L X j=1 max{0, |(Aq)j| -1}2, (A5) where y ∈Rm is the vector of measurements, and A ∈RL×m a matrix with rows equal to the atoms in Eq. (3). We use L = 1000 and λ = 15. We set the learning rate equal to 0.1 and run the algorithm for 500 iterations. Appendix B: Hamiltonians and Initial States We consider one-dimensional spin- 1 2 chains. For convenience, we write all Hamiltonians in the Pauli convention, i.e. with Pauli matrices X, Y , and Z acting on site k: HXYZ = Jx X k XkXk+1 + Jy X k YkYk+1 + Jz X k ZkZk+1 + hx X k Xk + hy X k Yk + hz X k Zk. When restricted to Ising-type couplings and a transverse field, we use: HTFIM = Jz X k ZkZk+1 + hx X k Xk. Here the index k runs over lattice sites (boundary conditions are specified elsewhere, mostly as "periodic"), and the couplings {Jα} and fields {hα} are given in units where energy scales are O(1). Sometimes, spin Hamiltonians are written with spin operators Sα k rather than Pauli matrices σα k . Since Sα k = 1 2 σα k , our parameters map to the S-operator convention as: J(S) α = 4 Jα, h(S) α = 2 hα, and respectively, Jα = J(S) α /4, hα = h(S) α /2. Signs are unaffected by this rescaling. Our study focuses on spin- 1 2 models in the above Pauli normalization for implementation simplicity. The parameter choices we explore were inspired by Hamiltonians used in diverse physical simulations (see Table IV). We note, however, that several of those source models were originally formulated for spin-1 or higher; when importing their coefficients into our convention one should apply the normalization map above. Spin- 1 2 and spin-1 chains can exhibit qualitatively different physics. Because our primary objective here is predictive performance rather than a faithful comparison of phase structures, we do not distinguish between spin- 1 2 and spin-1 models in the experiments. Results should therefore be interpreted with this caveat in mind. The initial states used in all simulations (Table III) are standard in the literature and admit shallow state-preparation circuits, making them both representative and practical for our benchmarks. 9 Parameters Description Jx = 0, Jy = 0, Jz = -2, hx = 1, hy = 0, hz = 0 Ising ferromagnet. Jx = 0, Jy = 0, Jz = 2, hx = 1, hy = 0, hz = 0 Ising antiferromagnet. Jx = 0, Jy = 0, Jz = 1, hx = 2, hy = 0, hz = 0 Ising paramagnet. Jx = 0, Jy = 0, Jz = 1, hx = 1, hy = 0, hz = 0 Ising critical model. Jx = Jy = Jz = 1, hx = hy = 0, hz = 0.2 Isotropic Haldane, tiny longitudinal field, [17]. Jx = Jy = Jz = 1, hx = hy = 0, hz = 0.4105 Antiferromagnetic quantum chain at the lower critical field, [18]. Jx = Jy = Jz = 1, hx = hy = 0, hz = 0.6 Field-induced Tomonaga-Luttinger liquid, [19]. Jx = 0.7, Jy = 0.7, Jz = 1.3, hx = 0, hy = 0, hz = 0.1 Easy-axis XXZ with weak field, [20]. Jx = 1.0, Jy = 1.0, Jz = 0.6, hx = 0.2, hy = 0, hz = 0 Easy-plane XXZ, transverse probe, [21], Jx = Jy = Jz = 1, hx = 0.3, hy = 0, hz = 0.3 Tilted field, symmetry breaking, [22]. Jx = 1.2, Jy = 0.8, Jz = 1, hx = 0.9, hy = 0, hz = 0 Anisotropic XYZ + transverse field, [23]. Jx = Jy = Jz = 1, hx = 0, hy = 0, hz = 1 Isotropic AFM, intermediate longitudinal field, [24]. Jx = Jy = Jz = -1, hx = 0, hy = 0, hz = 0.1 Isotropic ferromagnetic Heisenberg chain, tiny field, [25]. Jx = 1, Jy = 1, Jz = -0.7, hx = 0.15, hy = 0, hz = 0.15 Mixed-sign anisotropy, easy-plane exchange but ferromagmetic Jz, [19]. Jx = Jy = Jz = 1, hx = 1.5, hy = 0, hz = 0 Isotropic AFM with strong transverse field (near product paramagnet) Jz, [26]. TABLE IV. Hamiltonians selected for TE. 10 Appendix C: Additional experiments 1. Error growth after first certification The following figures illustrate the evolution of the forecasting error after the first certification, measured on the samples predicted beyond that point. For example, if the first certification occurs at sample 50 in one TE and at sample 100 in another, the first forecast samples correspond to 51 and 101, respectively. The average error is indicated by a thick line, and the standard deviation is shown as a shaded area. The figures display the average results for all algorithms and TEs, with the experimental setup described in Sec. IV. 0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140 No shot noise Error 250 shot noise 500 shot noise Error Forecast measurements 1000 shot noise Forecast measurements FIG. 6. Average per-sample forecasting error after the first certification for exact TEs and three shot noise levels (250, 500, and 1000 shots). The error is averaged over all algorithms. 0 0.1 0.2 0.3 0.4 0.5 0.6 0 0.1 0.2 0.3 0.4 0.5 0.6 0 20 40 60 80 100 120 140 0 20 40 60 80 100 120 140 No shot noise Error 250 shot noise 500 shot noise Error Forecast measurements 1000 shot noise Forecast measurements FIG. 7. Average per-sample forecasting error after the first certification after 30 or more measurements, for exact TEs and three shot noise levels (250, 500, and 1000 shots). The error is averaged over all algorithms. 11 2. Illustrative examples -1 -0.5 0 0.5 1 0 20 40 60 80 100 120 140 160 180 200 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Measurements Forecasting Measurement |Q(f)| f ∈Ω Bohr frequency FIG. 8. Forecasting with OMP: 18 qubits Anisotropic XYZ + transverse field Hamiltonian [23], dimerized initial state, and O = X9X10. The red vertical line marks the training interval [0 . . . 100), where we fit the sparse model; the remaining data [100 . . . 200) are reserved for out-of-sample verification. -1 -0.5 0 0.5 1 0 20 40 60 80 100 120 140 160 180 200 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 Measurements Forecasting Measurement |Q(f)| f ∈Ω Bohr frequency FIG. 9. Forecasting with OMP: 20 qubit easy-plane XXZ transverse probe Hamiltonian [21], paramagnetic initial state, and O = Y10Y11. The red vertical line marks the training interval [0 . . . 100), where we fit the sparse model; the remaining data [100 . . . 200) are reserved for out-of-sample verification. 12 Appendix D: SDP Running Times with MOSEK Fig. 10 shows the time it takes MOSEK [35] (a commercial SDP solver) to solve the problem in Eq. (4). The SDP is run on an Intel i7-14700 CPU with 32 GB of memory. 0 50 100 150 200 250 0 50 100 150 200 250 300 350 400 Time (s) Number of measurements FIG. 10. Time to solve the SDP in Eq. (4) with MOSEK [35] for different number of measurements.
2510.14898
CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Abstract. We prove the stability and global convergence of a coupled actor-critic gradient flow for infinite-horizon and entropy-regularised Markov decision processes (MDPs) in continuous state and action space with linear function approximation under Q-function realisability. We consider a version of the actor critic gradient flow where the critic is updated using temporal difference (TD) learning while the policy is updated using a policy mirror descent method on a separate timescale. We demonstrate stability and exponential convergence of the actor critic flow to the optimal policy. Finally, we address the interplay of the timescale separation and entropy regularisation and its effect on stability and convergence. 1. Introduction In reinforcement learning (RL) an agent aims to learn an optimal policy that maximizes the expected cumulative reward through repeated interactions with its environment. Such methods typically involve two key components: policy evaluation and policy improvement. During policy evaluation, the advantage function corresponding to a policy, or its function approximation, is updated using state, action and reward data generated under this policy. Policy improvement then uses this approximate advantage function to update the policy, most commonly through some policy gradient method. Algorithms that explicitly combine these two components are known as actor–critic (AC) methods [13], where the actor corresponds to policy improvement and the critic to policy evaluation. There are many policy gradient methods to choose from. In the last decade trust region policy optimization (TRPO) methods [20] and methods inspired by these like PPO [21] have become increasingly well-established due to their impressive empirical performance. Largely, this is because they alleviate the difficulty in choosing appropriate step sizes for the policy gradient updates: for vanilla policy gradient even a small change in the parameter may result in large change in the policy, leading to instability, but TRPO prevents this by explicitly ensuring the KL divergence between successive updates is smaller than some tolerance. Mirror descent replaces the TRPO’s hard constraint with a penalty leading to a first order method which is also ameanable to analysis. Indeed, at least for direct parametrisation, it is known to converge with sub-linear and even linear rate for entropy regularized problems (depending on exact assumptions) [9, 14, 11]. Due to the favourable analytical properties of mirror descent, in this paper we consider a version of the actor critic gradient flow where the policy is updated using a policy mirror descent method while the critic is updated using temporal difference (TD) on a separate timescale. Entropy-regularised MDPs are widely used in practice since the entropic regularizer leads to a number of desirable properties: it has a natural interpretation as something that drives exploration, it ensures that there is a unique optimal policy and it can accelerate convergence of mirror descent [11], as well as classical policy gradient [18]. However, analysing the stability and convergence of actor–critic methods in this entropy-regularized setting with general state and action spaces remains highly non-trivial due to lack of a priori bounds on the value functions. To address the actor critic methods for entropy regularised MDPs in general action spaces, a careful treatment of tools from two timescale analysis, convex analysis over both Euclidean spaces and measure spaces must be deployed. In this paper, we address precisely this challenge. We study the stability and convergence of a widely used actor–critic algorithm in which the critic is updated using Temporal Difference (TD) learning [22], and the policy is updated through Policy Mirror Descent [9]. Our analysis employs a two-timescale update scheme [3], where both the actor and critic are updated at each iteration with the critic updated on a faster timescale. Keywords: Reinforcement learning, Actor-Critic method, Entropy regularisation, Approximate gradient flow, Non- convex optimization, Global convergence, Function approximation. 1 arXiv:2510.14898v1 [math.OC] 16 Oct 2025 2 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH 1.1. Related works. We focus on the subset of RL literature that address the convergence of coupled actor-critic algorithms. In the unregularised setting, actor–critic methods have been studied extensively. The first convergence results in the two-timescale regime established asymptotic convergence in the continuous-time limit of coupled updates ([3, 13]). Most modern research employs linear function ap- proximation for the critic, where linear convergence rates have been obtained under various assumptions on the step-sizes of the actor and critic ([1, 24, 8]). Most closely related to our work is [25], which considers the same two-timescale actor–critic scheme in the continuous-time limit for unregularised MDPs, with an overparameterized neural network used for the critic. However, convergence to the optimal policy was only achieved up to a neighbourhood of a scaling factor, and a restarting mechanism was required to ensure the stability of the dynamics. In the entropy-regularised setting, [5, 6] address the convergence of a natural actor critic algorithm. However, the convergence and stability of these results rely on the finite cardinality of the action space in presence of entropy regularisation. 1.2. Our Contribution. Under linear Q-realisability assumption, we address the following question: “Are actor–critic methods for entropy-regularized MDPs in general action spaces stable and convergent, and if so, at what rate?” Our main contributions are as follows: • We study a common variant of actor–critic where the critic is updated using temporal difference (TD) learning and the policy is updated using mirror descent. Similarly to [13, 25], we analyse the coupled updates in the continuous-time limit, resulting in a dynamical system where the critic flow is captured by a semi-gradient flow and the actor flow corresponds to an approximate Fisher–Rao gradient flow over the space of probability kernels. • By combining convex analysis over the space of probability measures with classical Euclidean convex analysis, we develop a Lyapunov-based stability framework that captures the interplay between entropy regularisation and timescale separation, and establish stability of the resulting dynamics. • We prove convergence of the actor–critic dynamics for entropy-regularized MDPs with infinite action spaces. 1.3. Notation. Let (E, d) denote a Polish space (i.e., a complete separable metric space). We always equip a Polish space with its Borel sigma-field B(E). Denote by Bb(E) the space of bounded measurable functions f : E →R endowed with the supremum norm |f|Bb(E) = supx∈E |f(x)|. Denote by M(E) the Banach space of finite signed measures µ on E endowed with the total variation norm |µ|M(E) = |µ|(E), where |µ| is the total variation measure. Recall that if µ = f dρ, where ρ ∈M+(E) is a nonnegative measure and f ∈L1(E, ρ), then |µ|M(E) = |f|L1(E,ρ). Denote by P(E) ⊂M(E) the set of probability measures on E. Moreover, we denote the Euclidean norm on RN by | · | with inner product ⟨·, ·⟩. Given some A, B ∈RN×N, we denote by λmin(A) the minimum eigenvalue of A and denote A ⪰B if and only if A −B is positive semidefinite. Moreover, we denote by |A|op the operator norm of A induced by the Euclidean norm, |A|op := sup|x|̸=0 |Ax| |x| . 1.4. Entropy Regularised Markov Decision Processes. Consider an infinite horizon Markov De- cision Process (S, A, P, c, γ), where the state space S and action space A are Polish, P ∈P(S|S × A) is the state transition probability kernel, c is a bounded cost function and γ ∈(0, 1) is a discount factor. Let µ ∈P(A) denote a reference probability measure and τ > 0 denote a regularisation parameter. To ease notation, for each π ∈P(A|S) we define the kernels Pπ(ds′|s) := R A P(ds′|s, a)π(da|s) and P π(ds′, da′|s, a) := P(ds′|s, a)π(da′|s′). Denoting Eπ s = Eπ δs where δs ∈P(S) denotes the Dirac measure at s ∈S, for each stochastic policy π ∈P(A|S) and s ∈S, define the regularised value function by (1) V π τ (s) = Eπ s " ∞ X n=0 γn c(sn, an) + τ KL(π(·|sn)|µ) # ∈R ∪{∞} , where KL(π(·|s)|µ) is the Kullback-Leibler (KL) divergence of π(·|s) with respect to µ, define as KL(π(·|s)|µ) := R A ln dπ dµ(a|s)π(da|s) if π(·|s) is absolutely continuous with respect to µ, and infinity otherwise. For each π ∈P(A|S), we define the state-action value function Qπ τ ∈Bb(S × A) by (2) Qπ τ (s, a) = c(s, a) + γ Z S V π τ (s′)P(ds′|s, a) . CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 3 By the Dynamic Programming Principle, Qπ τ : S × A →R is the unique fixed point of the Bellman operator Tπ : Bb(S × A) →Bb(S × A), which for any f ∈Bb(S × A) is defined as (3) Tπf(s, a) = c(s, a) + γ Z S×A f(s′, a′)P π(ds′, da′|s, a) + τγ Z S KL(π(·|s′)|µ)P(ds′|s, a). The state-occupancy kernel dπ ∈P(S|S) is defined by (4) dπ(ds′|s) = (1 −γ) ∞ X n=0 γnP n π (ds′|s) , where P n π is the n-times product of the kernel Pπ with P 0 π(ds′|s) := δs(ds′). Moreover, for each π ∈ P(A|S) and (s, a) ∈S × A, we define the state-action occupancy kernel as (5) dπ(ds, da|s, a) = (1 −γ) ∞ X n=0 γn(P π)n(ds, da|s, a) where (P π)n is the n-times product of the kernel P π with (P π)0(ds′, da′|s, a) := δ(s,a)(ds′, da′). Given some initial state-action distribution β ∈P(S × A) with initial state distribution given by ρ(ds) = R A β(da, ds), we define the state-occupancy and state-action occupancy measures as (6) dπ ρ(ds) = Z S dπ(ds|s′)ρ(ds′), dπ β(ds, da) = Z S×A dπ(ds, da|s′, a′)β(da′, ds′). Note that for all E ∈B(S × A), by defining the linear operator Jπ : P(S × A) →P(S × A) as (7) Jπβ(E) = Z S×A P π(E|s′, a′)β(ds′, da′), it directly holds that (8) dπ β(da, ds) = (1 −γ) ∞ X n=0 γnJn π β(da, ds), with Jn π the n-fold product of the operator Jπ with J0 π = I, the identity operator on P(S × A). By choosing β = ρ ⊗π, we retrieve the classical state-action occupancy measure dπ β = dπ ρπ. For a given initial distribution ρ ∈P(S), the optimal value function is defined as (9) V ∗ τ (ρ) = min π∈P(A|S) V π τ (ρ), with V π τ (ρ) := Z S V π τ (s)ρ(ds) and we refer to π∗∈P(A|S) as the optimal policy if V ∗ τ (ρ) = V π∗ τ (ρ). Due to [11, Theorem B.1, Lemma B.2] we have the following dynamical programming principle for entropy regularised MDPs. Theorem 1.1 (Dynamical Programming Principle). Let τ > 0. The optimal value function V ∗ τ is the unique bounded solution of the following Bellman equation: V ∗ τ (s) = −τ ln Z A exp  −1 τ Q∗ τ(s, a)  µ(da), where Q∗ τ ∈Bb(S × A) is defined by Q∗ τ(s, a) = c(s, a) + γ Z S V ∗ τ (s′)P(ds′|s, a) , ∀(s, a) ∈S × A . Moreover, there is an optimal policy π∗ τ ∈P(A|S) given by π∗ τ(da|s) = exp  −1 τ (Q∗ τ(s, a) −V ∗ τ (s))  µ(da) , ∀s ∈S. Finally, the value function V π τ is the unique bounded solution of the following Bellman equation for all s ∈S V π τ (s) = Z A  Qπ τ (s, a) + τ ln dπ dµ(a, s)  π(da|s) . Theorem 1.1 suggests that, without loss of generality, it suffices to minimise (9) over the class of policies that are equivalent to the reference measure µ. Definition 1.1 (Admissible Policies). Let Πµ denote the class of policies for which there exists f ∈ Bb(S × A) with π(da|s) = exp(f(s, a)) R A exp(f(s, a))µ(da)µ(da). 4 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH The performance difference lemma, first introduced for tabular unregularised MDPs, has become fundamental in the analysis of MDPs as it acts a substitute for the strong convexity of the π 7→V π τ if the state-occupancy measure dπ ρ is ignored (e.g [10], [25], [9]). By virtue of [11], we have the following performance difference for entropy regularised MDPs in Polish state and action spaces. Lemma 1.1 (Performance difference). For all ρ ∈P(S) and π, π′ ∈Πµ, V π τ (ρ) −V π′ τ (ρ) = 1 1 −γ Z S  Z A  Qπ′ τ (s, a) + τ ln dπ′ dµ (a, s)  (π −π′)(da|s) + τ KL(π(·|s)|π′(·|s))  dπ ρ(ds) . 2. Mirror-Descent and the Fisher-Rao Gradient flow Defining the soft advantage function as Aπ τ (s, a) := Qπ τ (s, a) + τ ln dπ dµ(s, a) −V π τ (s), then for some λ > 0 and π0 ∈Πµ, the Policy Mirror Descent update rule reads as πn+1(·|s) = arg min m∈P(A) Z A Aπn τ (s, a)(m(da) −πn(da|s)) + 1 λ KL(m|πn(·|s))  (10) = arg min m∈P(A) Z A  Qπn τ (s, a) + τ ln dπn dµ (s, a)  (m(da) −πn(da|s)) + 1 λ KL(m|πn(·|s))  . (11) [7] shows that the pointwise optimisation is achieved by (12) dπn+1 dπn (a, s) = exp −λAπn τ (s, a)  R A exp (−λAπn τ (s, a)) πn(da|s). Observe that for any π ∈P(A|S), it holds that R A Aπ τ (s, a)π(da|s) = 0. Hence taking the logarithm of (12) we have ln dπn+1 dµ (s, a) −ln dπn dµ (s, a) = −λAπn τ (s, a) −ln Z A e−λAπn τ (s,a)πn(da|s). Interpolating in the time variable and letting λ →0 we retrieve the Fisher-Rao gradient flow for the policies (13) ∂t ln dπt dµ (s, a) = −  Aπt τ (s, a) − Z A Aπt τ (s, a)πt(da|s)  = −Aπt τ (s, a). Note that the soft advantage formally corresponds to the functional derivative of the value function with respect to the policy πn and thus (13) can be seen as a gradient flow of the value function over the space of kernels P(A|S) (see [11] for a detailed description of the functional derivative). In the case where the advantage function is fully accessible for all t ≥0, [11][Theorem 2.8] shows that the entropy regularisation in the value function induces an exponential convergence to the optimal policy. In the following section we define the approximate Fisher Rao dynamics arising from approximating the advantage for all t ≥0. 3. Actor Critic Methods Given some feature mapping ϕ : S × A →RN, we parametrise the state-action value function as Q(s, a; θ) := ⟨θ, ϕ(s, a)⟩. Moreover, we denote the approximate soft Advantage function as (14) A(s, a; θ) = Q(s, a; θ) + τ ln dπ dµ(s, a) − Z A  Q(s, a; θ) + τ ln dπ dµ(s, a)  π(da|s). The Mean Squared Bellman Error (MSBE) is defined as (15) MSBE(θ, π) = 1 2 Z S×A (Q(s, a; θ) −TπQ(s, a; θ))2dπ β(da, ds) where dπ β ∈P(S × A) is the state-action occupancy measure defined in (6). Given that β ∈P(S × A) has full support, by (3) it holds that MSBE(θ, π) = 0 if and only if Q(s, a; θ) = Qπ τ (s, a) for all s ∈S and a ∈A. Hence one approach to implementing the Policy Mirror Descent updates is to calculate the optimal parameters for Q(s, a; θ) by minimising the MSBE at each policy mirror descent iteration (16) θn+1 = arg min θ∈RN MSBE(θ, πn), CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 5 (17) dπn+1 dπn (a, s) = exp −λA(s, a; θn+1)  R A exp (−λA(s, a; θn+1)) πn(da|s). To avoid fully solving the optimisation in (16) for each policy update, one can update the critic using a semi-gradient descent on a different timescale to the policy update. Let hn, λn > 0 be the step-sizes of the critic and actor respectively at iteration n ≥0. Let the semi-gradient g : RN × P(A|S) →RN of the MSBE with respect to θ be (18) g(θ, π) := Z S×A (Q(s, a; θ) −TπQ(s, a; θ))ϕ(s, a)dπ β(da, ds). The two-timescale actor-critic Mirror Descent scheme reads as (19) θn+1 = θn −hng(θn, πn), dπn+1 dπn (a, s) = exp −λnA(s, a; θn+1)  R A exp (−λnA(s, a; θn+1)) πn(da|s). (20) where timescale separation ηn := hn λn > 1 ensures that the critic is updated on a much faster timescale than the policy to improve the local estimation of the policy updates. As pointed out in [25], even with the KL penalty in (10) the critic may still be far away from the true state-action value function, resulting in unstable updates. 4. Dynamics In this paper, we study the stability and convergence of the two-timescale actor-critic Mirror Descent scheme in the continuous-time limit. Let Qt(s, a) := Q(s, a; θt) and At(s, a) := A(s, a; θt). Let η : [0, ∞) →[1, ∞) be a non-decreasing function representing the timescale separation, then for some θ0 ∈ RN and π0 ∈Πµ we have the following coupled dynamics (21) dθt dt = −ηtg(θt, πt) ∂tπt(da|s) = −At(s, a)πt(da|s) (22) where g : RN × P(A|S) is the semi-gradient of the MSBE defined in (18). We refer to (22) as the Approximate Fisher Rao Gradient flow. We perform our analysis under the following assumptions. Assumption 4.1 (Qπ τ -realisability). For all π ∈Πµ and (s, a) ∈S × A, there exists θπ ∈RN such that Qπ(s, a) = ⟨θπ, ϕ(s, a)⟩. A simple example of when this holds is in the tabular case, where one can choose ϕ to be a one-hot encoding of the state-action space. Moreover, all linear MDPs are Qπ-realisable. In a linear MDP there exists exists ϕ : S × A →RN, w ∈RN and a sequence {ψi}N i=1 with ψi ∈M(S) such that for all (s, a) ∈S × A, c(s, a) = ⟨w, ϕ(s, a)⟩, P(ds′ | s, a) = N X i=1 ϕi(s, a)ψi(ds′). In this case it holds that (θπ)i = wi + R S V π(s′)ψi(ds′). Assumption 4.1 can be seen as a convention to omit function approximation errors in the final convergence results. This assumption, or the presence of approximation errors in convergence results, are widely present in the actor-critic literature ([5], [24], [23], [6], [8], [19]). More recently, [16] derives some weaker ordering conditions in the bandit case (empty state space) which guarantee the convergence of soft-max policy gradient in the tabular setting beyond realisability. However as of now it is unclear how this applies to MDPs and also fundamentally depends on the finite cardinality of the action space. By [4], Assumption 4.1 holds in the limit N →∞when ϕi are the basis functions of L2(ρ ⊗µ) for some ρ ⊗µ ∈P(S × A). However, analysis in such a Hilbert space becomes more involved and intricate and is the result of ongoing work. Assumption 4.2. For all (s, a) ∈S × A it holds that |ϕ(s, a)| ≤1. Assumption 4.2 is purely for convention and is without loss of generality in the finite-dimensional case. 6 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Assumption 4.3. Let β ∈P(S × A) be fixed. Then λβ := λmin Z S×A ϕ(s, a)ϕ(s, a)⊤β(ds da)  > 0. Note that unlike the analogous assumptions imposed in [8], Assumption 4.3 is independent of the policy. This property allows us to remove any dependence on the continuity of eigenvalues. Definition 4.1. For all π ∈Πµ and ζ ∈P(S × A), the squared loss with respect to ζ is defined as (23) L(θ, π; ζ) = 1 2 Z S×A (⟨θ, ϕ(s, a)⟩−Qπ(s, a))2ζ(da, ds) where Qπ τ is defined in (2). A straightforward calculation given in Lemma A.4 shows that due to Lemma 5.1 and Assumption 4.3, for any π ∈Πµ it holds that L(·, π; dπ β) is (1 −γ)λβ-strongly convex. The following result then connects the geometry of the semi-gradient of the MSBE and the gradient of L(θ, π; β), which can be seen as an extension of Lemma 3 of [2] to the current entropy regularised setting and where the measure of integration in the MSBE is not necessarily stationary. Lemma 4.1. Let Assumption 4.1 hold. Then for all θ ∈RN and π ∈Πµ it holds that (24) −⟨g(θ, π), θ −θπ⟩≤−(1 −√γ)(1 −γ) ⟨∇θL(θ, π; β), θ −θπ⟩ with ∇θL(θ, π; β) = Z S×A (⟨θ, ϕ(s, a)⟩−Qπ τ (s, a))ϕ(s, a)β(da, ds). See Appendix A.1 for a proof. 5. Stability In this section we analyse the stability of the coupled Actor Critic flow. Throughout this section, to ease notation we let Γ := λβ(1 −γ)(1 −√γ), Kt := sup s∈S KL(πt(·|s)|µ), with λβ > 0 the constant from Assumption 4.3. The following lemma establishes properties of the state-action occupancy measure defined in (6) and which are useful in the proofs. Lemma 5.1. For all π ∈P(A|S), β ∈P(S × A) and E ∈B(S × A) it holds that (25) dπ Jπβ(E) = Jπdπ β(E). Moreover, for all γ ∈(0, 1) we have (26) dπ β(E) −γdπ Jπβ(E) = (1 −γ)β(E). See Appendix A.2 for a proof. Lemma 5.2 then establishes the effect of the coupling and timescale separation in the actor-critic flow and its effect on the stability of the critic parameters. Lemma 5.2. Let Assumptions 4.2 and 4.3 hold. Then for all t ≥0 it holds that (27) 1 2ηt d dt|θt|2 ≤−Γ 2 |θt|2 + τ 2γ2K2 t Γ + |c|2 Bb(S×A) Γ See Appendix B.1 for a proof. By connecting the result from Lemma 5.2 with the approximate Fisher Rao gradient flow, we are able to establish a Gronwall-type inequality for the KL divergence of the policies with respect to the reference measure. Theorem 5.1. Let Assumptions 4.2 and 4.3 hold. Let η0 > τ Γ. Then there exists constants a1 = a1 τ, η0, γ, λβ, |c|Bb(S×A), dπ0 dµ Bb(S×A) ! > 0 and a2 = a2(τ, η0, γ, λβ) > 0 such that for all γ ∈(0, 1) and t ≥0 it holds that (28) K2 t ≤a1 + a2 Z t 0 e−τ(t−r)K2 r dr. CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 7 See Appendix B.2 for a proof. Through applications of Gronwall’s Lemma (Lemma A.1), two direct corollaries of Theorem 5.1 show that the KL divergence of the policies with respect to the reference measure and the critic parameters do not blow up in finite time. Corollary 5.1 (Stability). Under the same assumptions as Theorem 5.1, for all γ ∈(0, 1), s ∈S and t ≥0 it holds that (29) KL(πt(·|s)|µ)2 ≤a1ea2t. Corollary 5.2. Under the same assumptions as Theorem 5.1, suppose that there exists α > 0 such that d dtηt ≤αηt. Then for all γ ∈(0, 1) there exists r1, r2 > 0 such that for all t ≥0 it holds that (30) |θt| ≤r1er2t. See Appendix B.3 and B.4 for the proofs. Corollaries 5.3 and 5.4 then show that if the MDP is sufficiently regularised through a sufficiently small discounting factor, the KL divergence of the policies with respect to the reference measure remains uniformly bounded along the flow. Corollary 5.3 (Uniform boundedness). Under the same assumptions as Theorem 5.1, for γ ∈(0, 1) such that 64γ2 Γ2−Γτ η0 < 1 it holds that a2 < τ and for all t ≥0 it holds that (31) KL(πt(·|s)|µ)2 ≤ a1τ τ −a2 Corollary 5.4. Under the conditions of Corollary 5.3, there exists R > 0 such that for all t ≥0 it holds that (32) |θt| ≤R See Appendix B.5 and B.6 for the proofs. 6. Convergence In this section we will present three convergence results of the coupled actor-critic flow. Firstly, we characterise the time derivative of the state-action value function along the approximate gradient flow for the policies. Lemma 6.1. For all t ≥0 and (s, a) ∈S × A, it holds that (33) d dtQπt τ (s, a) = γ 1 −γ Z S Z S×A Aπt τ (s′′, a′′)∂tπt(da′′|s′′)dπt(ds′′|s′)  P(ds′|s, a) See Appendix C.1 for a proof. Observe that in the exact setting, (13), we obtain the dissipative property of {Qπt τ }t≥0 along the flow d dtQπt τ (s, a) = −γ 1 −γ Z S Z S×A Aπt τ (s′′, a′′)2dπt(ds′′|s′)  P(ds′|s, a) ≤0. Furthermore, Theorem 6.1 shows that the actor-critic flow maintains the exponential convergence to the optimal policy induced by the τ-regularisation up to a error term arising from not solving the critic to full accuracy. Theorem 6.1. Let {πt, θt}t≥0 be the trajectories of the actor critic flow. Let Assumptions 4.1 and 4.2 hold. Then for all t > 0 it holds that min r∈[0,t] V πr τ (ρ) −V π∗ τ (ρ) ≤ τ 2(1 −γ)(1 −e−τ 2 t) e−τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (34) + 1 2τ Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ! (35) See Appendix C.2 for a proof. Note that by Theorem 1.1, it holds that KL(π∗(·|s)|π0(·|s)) < ∞. For Polish state and action spaces, in the unregularised case (τ = 0), KL(π∗(·|s)|π0(·|s)) will typically be infinite (see [15][Theorem 2.1]). Theorem 6.1 shows that the exponentially weighted error term determines the rate of convergence of the actor-critic dynamics. On this note, Theorem 6.2 shows that this error term decays exponentially up to an integral which now depends on the rate of change of the true state-action value function and the timescale separation. 8 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Theorem 6.2. Let Assumptions 4.1, 4.2 and 4.3 hold. Let η0 > 1 Γ and 0 < τ < 1. Then for all t ≥0 there exists constants b1, b2 > 0 such that Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤b1e−τ 2 t + b2 Z t 0 e−τ 2 (t−r) 1 ηr dθπr dt 2 dr. (36) See Appendix C.3 for a proof. The following two results then connects Corollaries 5.1 and 5.2, Lemma 6.1 and Theorem 6.1 to demonstrate an exponential convergence to the optimal policy for all γ ∈(0, 1) when the critic is updated sufficiently fast. Theorem 6.3. Under the same assumptions as Theorem 6.2, there exists k1 > 0 with ηt = η0ek1t and k2 > 0 such that for all γ ∈(0, 1) and t > 0 it holds that min r∈[0,t] V πr τ (ρ) −V π∗ τ (ρ) ≤ τe−τ 2 t 2(1 −γ)(1 −e−τ 2 t) Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) + k2 2τ ! (37) See Appendix C.4 for a proof. A direct consequence arising from the proof of Theorem 6.3 shows that if the MDP is sufficiently regularised through the same small discounting factor condition as in Corollary 5.3, one can arrive at convergence for a much more general class of functions ηt. Corollary 6.1. Under the same assumptions as Theorem 6.2, for γ ∈(0, 1) such that 2 √ 2γ q Γ2−Γτ η0 < 1 there exists d1 > 0 such that for all t ≥0, min r∈[0,t] V πr τ (ρ) −V π∗ τ (ρ) ≤ τ 2(1 −γ)(1 −e−τ 2 t) e−τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (38) + d1 Z t 0 e−τ 2 (t−r) 1 ηr dr ! . (39) See Appendix C.3 for a proof. For example, suppose the small discounting factor condition is satisfied, choosing ηt = t 1 2 + η0 with η0 > 1 Γ and τ = 0.5, it can be shown that asymptotically (40) min r∈[0,t] V πr τ (ρ) −V π∗ τ (ρ) ∼1 √ t. 7. Limitations In this work, we only study the continuous-time dynamics of the actor-critic algorithm. Although this formulation gives insights into the discrete counterpart, a rigorous treatment of the discrete-time setting is more realistic for practical purposes and is left for future research. Moreover, for the purposes of analysis our critic approximation is linear while in practice non-linear neural networks are used to approximate the critic. Finally, our work assumes all integrals are evaluated exactly, in particular the semi-gradient (18). In practice these would need to be estimated from samples leading to additional Monte-Carlo errors. To fully analyse this is left for future work. 8. Acknowledgements DZ was supported by the EPSRC Centre for Doctoral Training in Mathematical Modelling, Analysis and Computation (MAC-MIGS) funded by the UK Engineering and Physical Sciences Research Coun- cil (grant EP/S023291/1), Heriot-Watt University and the University of Edinburgh. We acknowledge funding from the UKRI Prosperity Partnerships grant APP43592: AI2 – Assurance and Insurance for Artificial Intelligence, which supported this work. CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 9 References [1] Anas Barakat, Pascal Bianchi, and Julien Lehmann, Analysis of a target-based actor-critic algorithm with linear function approximation, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics (Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, eds.), Proceedings of Machine Learning Research, vol. 151, PMLR, 28–30 Mar 2022, pp. 991–1040. [2] Jalaj Bhandari, Daniel Russo, and Raghav Singal, A finite time analysis of temporal difference learning with linear function approximation, Operations Research 69 (2021), no. 3, 950–973. [3] V. S. Borkar and V. R. Konda, The actor-critic algorithm as multi-time-scale stochastic approximation, Sadhana 22 (1997), no. 5, 525–543. [4] Haim Brezis, Functional analysis, sobolev spaces and partial differential equations, 1 ed., Universitext, Springer, New York, NY, 2011, Published: 02 November 2010, eBook ISBN: 978-0-387-70914-7, Series ISSN: 0172-5939, Series E-ISSN: 2191-6675. [5] Semih Cayci, Niao He, and R. Srikant, Convergence of entropy-regularized natural policy gradient with linear function approximation, SIAM Journal on Optimization 34 (2024), no. 3, 2729–2755. [6] Semih Cayci, Niao He, and R. Srikant, Finite-time analysis of entropy-regularized neural natural actor-critic algorithm, Transactions on Machine Learning Research (2024). [7] Paul Dupuis and Richard S. Ellis, A weak convergence approach to the theory of large deviations, Wiley Series in Probability and Statistics, John Wiley & Sons, Inc., 1997. [8] Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang, A two-timescale stochastic algorithm framework for bilevel optimization: Complexity analysis and application to actor-critic, SIAM Journal on Optimization 33 (2023), no. 1, 147–180. [9] Caleb Ju and Guanghui Lan, Policy optimization over general state and action spaces, 2024. [10] Sham M. Kakade and John Langford, Approximately optimal approximate reinforcement learning, International Con- ference on Machine Learning, 2002. [11] Bekzhan Kerimkulov, James-Michael Leahy, David Siska, Lukasz Szpruch, and Yufei Zhang, A fisher-rao gradient flow for entropy-regularised markov decision processes in polish spaces, 2024. [12] Bekzhan Kerimkulov, David ˇSiˇska, Lukasz Szpruch, and Yufei Zhang, Mirror descent for stochastic control problems with measure-valued controls, 2024. [13] Vijay Konda and John Tsitsiklis, Actor-critic algorithms, Advances in Neural Information Processing Systems (S. Solla, T. Leen, and K. M¨uller, eds.), vol. 12, MIT Press, 1999. [14] Guanghui Lan, Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes, Mathematical programming 198 (2023), no. 1, 1059–1106. [15] James-Michael Leahy, Bekzhan Kerimkulov, David Siska, and Lukasz Szpruch, Convergence of policy gradient for entropy regularized MDPs with neural network approximation in the mean-field regime, Proceedings of the 39th International Conference on Machine Learning (Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, eds.), Proceedings of Machine Learning Research, vol. 162, PMLR, 17–23 Jul 2022, pp. 12222–12252. [16] Max Qiushi Lin, Jincheng Mei, Matin Aghaei, Michael Lu, Bo Dai, Alekh Agarwal, Dale Schuurmans, Csaba Szepesvari, and Sharan Vaswani, Rethinking the global convergence of softmax policy gradient with linear function approximation, 2025. [17] L. Liu, M. B. Majka, and L. Szpruch, Polyak– Lojasiewicz inequality on the space of measures and convergence of mean-field birth-death processes, Applied Mathematics and Optimization 87 (2023), 48. [18] Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans, On the global convergence rates of softmax policy gradient methods, International conference on machine learning, PMLR, 2020, pp. 6820–6829. [19] Shuang Qiu, Zhaoran Yang, Jianfeng Ye, and Zhuoran Wang, On finite-time convergence of actor-critic algorithm, IEEE Journal on Selected Areas in Information Theory 2 (2021), no. 2, 652–664. [20] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz, Trust region policy optimization, International conference on machine learning, PMLR, 2015, pp. 1889–1897. [21] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov, Proximal policy optimization algo- rithms, arXiv preprint arXiv:1707.06347 (2017). [22] Richard S. Sutton, Learning to predict by the methods of temporal differences, Machine Learning 3 (1988), no. 1, 9–44. [23] Andrea Zanette, Martin J. Wainwright, and Emma Brunskill, Provable benefits of actor-critic methods for offline reinforcement learning, Proceedings of the 35th International Conference on Neural Information Processing Systems (Red Hook, NY, USA), NIPS ’21, Curran Associates Inc., 2021. [24] Shangtong Zhang, Bo Liu, Hengshuai Yao, and Shimon Whiteson, Provably convergent two-timescale off-policy actor- critic with function approximation, Proceedings of the 37th International Conference on Machine Learning (Hal Daum´e III and Aarti Singh, eds.), Proceedings of Machine Learning Research, vol. 119, PMLR, 13–18 Jul 2020, pp. 11204– 11213. [25] Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael Jordan, and Zhaoran Wang, Wasserstein flow meets replicator dynamics: A mean-field analysis of representation learning in actor-critic, Advances in Neural Information Processing Systems (A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, eds.), 2021. 10 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Appendix A. Technical details In this section, we present some calculations that will be used in the proofs of the main results. Lemma A.1 (Gronwall). Let λ(s) ≥0, a = a(s), b = b(s) and y = y(s) be locally integrable, real-valued functions defined on [0, T] such that y is also locally integrable and for almost all s ∈[0, T], y(s) + a(s) ≤b(s) + Z s 0 λ(t)y(t)dt. Then y(s) + a(s) ≤b(s) + Z s 0 λ(t) Z t 0 λ(r)(b(r) −a(r))dr  dt, ∀s ∈[0, T]. Furthermore, if b is monotone increasing and a is non-negative, then y(s) + a(s) ≤b(s)e R s 0 λ(r)dr, ∀s ∈[0, T]. Lemma A.2. For some β ∈P(S × A), let dπ β ∈P(S × A) be the state-action occupancy measure. Moreover let κ(ds, da, ds′, da′) := P π(ds′, da′|s, a)dπ β(ds, da). Then for any π ∈Πµ and any integrable f : S × A →R, it holds that (41) Z S×A×S×A f(s, a)f(s′, a′)κ(ds, da, ds′, da′) ≤ 1 √γ Z S×A f(s, a)2dπ β(ds, da) Proof. By H¨older’s inequality, it holds that Z S×A×S×A f(s, a)f(s′, a′)κ(ds, da, ds′, da′) (42) ≤ Z S×A×S×A f(s, a)2κ(ds, da, ds′, da′)  1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′)  1 2 . (43) Moreover, observe that Z S×A×S×A f(s, a)2κ(ds, da, ds′, da′) = Z S×A Z S×A P π(ds′, da′|s, a)  f(s, a)2dπ β(ds, da) (44) = Z S×A f(s, a)2dπ β(ds, da), (45) hence (42) becomes Z S×A×S×A f(s, a)2κ(ds, da, ds′, da′)  1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′)  1 2 (46) ≤ Z S×A f(s, a)2dπ β(ds, da)  1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′)  1 2 . (47) Now by the first part of Lemma 5.1, it holds that Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′) = Z S×A×S×A f(s′, a′)2P π(ds′, da′|s, a)dπ β(ds, da) (48) = Z S×A f(s, a)2dπ Jπβ(ds, da), (49) where Jπ : P(S × A) →P(S × A) is defined in (7). Then by the second part of Lemma 5.1 we have Z S×A f(s, a)2dπ β(ds, da)  1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′)  1 2 (50) ≤ Z S×A f(s, a)2dπ β(ds, da)  1 2 Z S×A f(s, a)2dπ Jπβ(ds, da)  1 2 (51) ≤ 1 √γ Z S×A f(s, a)2dπ β(ds, da), (52) which concludes the proof. □ CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 11 Lemma A.3. For some θ0 ∈RN and π0 ∈Πµ, let {πt, θt}t≥0 be the trajectory of coupled actor-critic flow. Moreover let Kt = sups∈S KL(πt(·|s)|µ). There exists C1 > 0 such that for all t ≥0 it holds that sup s∈S |∂tπt(·|s)|M(A) ≤|At|Bb(S×A) , (53) |At|Bb(S×A) ≤2 |Qt|Bb(S×A) + 2τ ln dπt dµ Bb(S×A) , (54) |Qπt τ |Bb(S×A) ≤ 1 1 −γ  |c|Bb(S×A) + τγKt  , (55) ln dπt dµ Bb(S×A) ≤C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr. (56) Proof. The first claim sups∈S |∂tπt(·|s)|M(A) ≤|Aπt τ |Bb(S×A) follows trivially from the definition of the approximate Fisher Rao gradient flow defined in (22). Moreover, it holds that |At|Bb(S×A) = Qt + τ ln dπt dµ − Z A  Qt(·, a) + τ ln dπt dµ (·, a)  πt(da|·) Bb(S×A) (57) ≤2 Qt + τ ln dπt dµ Bb(S×A) (58) ≤2 |Qt|Bb(S×A) + 2τ ln dπt dµ Bb(S×A) (59) where we used the triangle inequality in the final inequality. Moreover, the state-action value function Qπt τ is a fixed point of the Bellman operator defined in (3). Hence, for all (s, a) ∈S × A, we have Qπt τ (s, a) = c(s, a) + γ Z S×A Qπt τ (s′, a′) P πt(ds′, da′|s, a) + τγ Z S KL(πt(·|s′)∥µ) P(ds′|s, a). (60) Taking absolute values and using the triangle inequality we have |Qπt τ (s, a)| ≤|c|Bb(S×A) + γ |Qπt τ |Bb(S×A) + τγ sup s′∈S KL(πt(·|s′)∥µ) (61) = |c|Bb(S×A) + γ |Qπt τ |Bb(S×A) + τγKt. (62) Taking the supremum over (s, a) ∈S × A on the left-hand side yields (63) |Qπt τ |Bb(S×A) ≤|c|Bb(S×A) + γ |Qπt τ |Bb(S×A) + τγKt. Rearranging gives (64) (1 −γ) |Qπt τ |Bb(S×A) ≤|c|Bb(S×A) + τγKt, which is the desired bound. Recall the approximate Fisher-Rao gradient flow for the policies {πt}t≥0, which for all t ≥0 and for all (s, a) ∈S × A is given by (65) ∂t ln dπt dµ (s, a) = −  Qt(s, a) + τ ln dπt dµ (a, s) − Z A  Qt(s, a′) + τ ln dπt dµ (a′, s)  πt(da′|s)  . Duhamel’s principle yields for all t ≥0 that ln dπt dµ (s, a) = e−τt ln dπ0 dµ (a, s) + Z t 0 e−τ(t−r) Z A Qr(s, a′)πr(da′|s) −Qr(s, a)  dr (66) + τ Z t 0 e−τ(t−r) KL(πr(·|s)|µ)dr. (67) 12 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Since π0 ∈Πµ, there exists C1 ≥1 such that ln dπ0 dµ Bb(S×A) ≤C1. Then by Assumption 4.2 we have that for all t ≥0, ln dπt dµ (s, a) ≤C1 + Z t 0 e−τ(t−r) Z A Qr(s, a′)πr(da′|s) −Qr(s, a) dr (68) + τ Z t 0 e−τ(t−r) KL(πr(·|s)∥µ) dr (69) ≤C1 + 2 Z t 0 e−τ(t−r) |θr| dr + τ Z t 0 e−τ(t−r)Kr dr (70) ≤C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr, (71) where in the last inequality we used R t 0 e−τ(t−r)dr ≤1 τ . Taking the supremum over (s, a) ∈S × A yields (72) ln dπt dµ Bb(S×A) ≤C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr, which is the desired bound. □ Lemma A.4. Let Assumption 4.3 hold. Then for all π ∈Πµ, it holds that L(·, π; dπ β) is λβ(1−γ)-strongly convex. Proof. For any ξ ∈P(S × A), let Σξ := R S×A ϕ(s, a)ϕ(s, a)⊤ξ(ds, da) ∈RN×N. Then by Lemma 5.1 and Assumption 4.3 it holds that Σdπ β ⪰(1 −γ)Σβ ⪰(1 −γ)λβI and thus L(·, π; dπ β) is λβ(1 −γ)-strongly convex. □ A.1. Proof of Lemma 4.1. Proof. Recall that Q(s, a) = ⟨θ, ϕ(s, a)⟩for some θ ∈RN and that for all π ∈Πµ, there exists θπ ∈RN such that Qπ(s, a) = ⟨θπ, ϕ(s, a)⟩by Assumption 4.1. Then by definition of the semi-gradient of the MSBE g : RN × P(A|S) →RN in (18), it holds that ⟨g(θ, π), θ −θπ⟩= Z S×A (Q(s, a) −TπQ(s, a)) ϕ(s, a)dπ β(da, ds), θ −θπ  (73) = Z S×A (Q(s, a) −Qπ τ (s, a))ϕ(s, a)dπ β(da, ds), θ −θπ  (74) + Z S×A (Qπ τ (s, a) −TπQ(s, a)ϕ(s, a)dπ β(da, ds), θ −θπ  (75) = Z S×A (Q(s, a) −Qπ τ (s, a))ϕ(s, a)dπ β(da, ds), θ −θπ  (76) −γ Z S×A×S×A (Q(s′, a′) −Qπ τ (s′, a′))ϕ(s, a)P π(ds′, da′|s, a)dπ β(ds, da), θ −θπ  , (77) where we added and subtracted the true state-action value function Qπ τ ∈Bb(S × A) in the second equality and used the fact that it is a fixed point of the Bellman operator defined in (3). To ease notation, let ε(s, a) := Q(s, a) −Qπ τ (s, a). Multiplying both sides by −1 and using the associativity of CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 13 the inner product, we have −⟨g(θ, π), θ −θπ⟩ (78) = − Z S×A ε(s, a)ϕ(s, a)dπ β(da, ds), θ −θπ  (79) + γ Z S×A ε(s′, a′)ϕ(s, a)P π(ds′, da′|s, a)dπ β(ds, da), θ −θπ  (80) = − Z S×A ε(s, a) ⟨ϕ(s, a), θ −θπ⟩dπ β(da, ds) (81) + γ Z S×A ε(s′, a′) ⟨ϕ(s, a), θ −θπ⟩P π(ds′, da′|s, a)dπ β(ds, da) (82) = − Z S×A ε(s, a)2dπ β(da, ds) (83) + γ Z S×A×S×A ε(s, a)ε(s′, a′)P π(ds′, da′|s, a)dπ β(ds, da) (84) = I(1) + γI(2). (85) Now applying Lemma A.2 to I(2) we have I(2) := Z S×A×S×A ε(s, a)ε(s′, a′)P π(ds′, da′|s, a)dπ β(ds, da) (86) ≤ 1 √γ Z S×A ε(s, a)2dπ β(ds, da). (87) Thus it holds that −⟨g(θ, π), θ −θπ⟩≤I(1) + γI(2) (88) ≤−(1 −√γ) Z S×A ϵ(s, a)2dπ β(da, ds) (89) = −(1 −√γ) Z S×A (Q(s, a) −Qπ τ (s, a))2dπ β(da, ds) (90) = −(1 −√γ) ∇θL(θ, π; dπ β), θ −θπ , (91) where the last inequality follows from the Assumption 4.1 and the definition of Q(s, a) = ⟨θ, ϕ(s, a)⟩. □ A.2. Proof of Lemma 5.1. Proof. For any β ∈P(S × A), π ∈P(A|S) and E ∈B(S × A), it holds that dπ Jπβ(E) = (1 −γ) ∞ X n=0 γn(Jn π Jπβ)(E) (92) = Jπdπ β(E) (93) where we just used the associativity of the operator Jπ. Furthermore by letting m = n + 1 it holds that dπ Jπβ(E) = (1 −γ) ∞ X n=0 γnJn+1 π β(E) (94) = (1 −γ) ∞ X m=1 γm−1Jm π β(E) (95) = 1 −γ γ ∞ X m=1 γmJm π β(E) (96) = 1 γ (dπ β(E) −(1 −γ)β(E)). (97) Rearranging concludes the proof. □ 14 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Appendix B. Proof of Regularity Results B.1. Proof of Lemma 5.2. Proof. Consider the following equation 1 2ηt d dt |θt|2 = 1 ηt  d dtθt, θt  (98) = −⟨g(θt, πt), θt⟩ (99) = − Z S×A (Qt(s, a) −T πtQt(s, a)) ϕ(s, a) dπt β (da, ds), θt  (100) = − Z S×A Qt(s, a)ϕ(s, a) dπt β (da, ds), θt  (101) + Z S×A T πtQt(s, a)ϕ(s, a) dπt β (da, ds), θt  (102) := −J(1) t + J(2) t (103) where we used the θt dynamics from (21) in the second equality and the definition of the semi-gradient in the third equality. For any π ∈Πµ, let Σπ ∈RN×N be (104) Σπ = Z S×A ϕ(s, a)ϕ(s, a)⊤dπ β(da, ds). Then by definition we have that Qt(s, a) = ⟨θt, ϕ(s, a)⟩, hence for J(1) t we have J(1) t = Z S×A Qt(s, a)ϕ(s, a) dπt β (da, ds), θt  (105) = Z S×A ⟨θt, ϕ(s, a)⟩ϕ(s, a)dπt β (da, ds), θt  (106) =  θt, Z S×A ϕ(s, a)ϕ(s, a)⊤dπt β (da, ds)  θt  (107) = ⟨θt, Σπtθt⟩ (108) Now dealing with J(1) t , expanding the Bellman operator defined in (3) we have J(2) t = Z S×A TπtQt(s, a)ϕ(s, a) dπt β (da, ds), θt  (109) = Z S×A c(s, a)ϕ(s, a)dπt β (da, ds), θt  (110) + γ Z S×A ⟨θt, ϕ(s′, a′)⟩ϕ(s, a)P πt(ds′, da′|s, a)dπt β (da, ds), θt  (111) + τγ Z S×A Z S KL(πt(·|s′), µ)P(ds′|s, a)ϕ(s, a)dπt β (da, ds)  , θt  (112) ≤|c|Bb(S×A)|θt| + γI(1) t + τγI(2) t (113) where we defined I(1) t = Z S×A ⟨θt, ϕ(s′, a′)⟩ϕ(s, a)P πt(ds′, da′|s, a)dπt β (da, ds), θt  , I(2) t = Z S×A Z S KL(πt(·|s′), µ)P(ds′|s, a)ϕ(s, a)dπ β(da, ds)  , θt  . Moreover, to ease notation let Kt := sup s∈S KL(πt(·|s)|µ) CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 15 and temporarily let κt(ds, da, ds′, da′) := P πt(ds′, da′|s, a)dπt β (da, ds). Now focusing on I(1) t , it holds that I(1) t = Z S×A×S×A ⟨θt, ϕ(s′, a′)⟩ϕ(s, a)κt(da′, ds′, da, ds), θt  (114) = Z S×A×S×A ⟨θt, ϕ(s, a)⟩⟨θt, ϕ(s′, a′)⟩κt(ds′, da′, ds, da). (115) Now using Lemma A.2 with f = ⟨θ, ϕ(·, ·)⟩we have I(1) t ≤ 1 √γ Z S×A ⟨θt, ϕ(s, a)⟩2 dπt β (ds, da)  1 2 Z S×A ⟨θt, ϕ(s, a)⟩2 dπt β (ds, da)  1 2 (116) = 1 √γ Z S×A ⟨θt, ϕ(s, a)⟩2 dπt β (ds, da) (117) = 1 √γ ⟨θt, Σπtθt⟩. (118) Thus all together it holds that (119) γI(1) t ≤√γ ⟨θt, Σπtθt⟩. Now focusing on I(2) t , we have I(2) t = Z S×A Z S KL(πt(·|s′), µ)P(ds′|s, a)  ϕ(s, a)dπt β (da, ds), θt  (120) ≤Kt Z S×A ϕ(s, a)dπt β (ds, da) |θt| (121) ≤Kt|θt| (122) where we used Assumption 4.2 in the final inequality. Hence along with (108), (98) becomes 1 2ηt d dt|θt|2 ≤−J(1) t + J(2) t (123) ≤−⟨θt, Σπtθt⟩+ |c|Bb(S×A)|θt| + γI(1) t + τγI(2) t (124) ≤−⟨θt, Σπtθt⟩+ √γ ⟨θt, Σπtθt⟩+ |c|Bb(S×A)|θt| + τγKt|θt| (125) = −(1 −√γ) ⟨θt, Σπtθt⟩+ |c|Bb(S×A) + τγKt  |θt|. (126) Observe that by (26) and Assumption 4.2, Σπ ∈RN×N is positive definite for all π ∈P(A|S), hence it holds that (127) ⟨θt, Σπtθt⟩≥(1 −γ)λβ |θt|2 . Therefore (123) becomes (128) 1 2ηt d dt|θt|2 ≤−(1 −√γ)(1 −γ)λβ |θt|2 + (|c|Bb(S×A) + τγKt)|θt| Let Γ := λβ(1 −γ)(1 −√γ). By Young’s inequality, there exists ϵ > 0 such that 1 2ηt d dt|θt|2 ≤−Γ|θt|2 + ϵ 2|θt|2 + (|c|Bb(S×A) + τγKt)2 2ϵ (129) ≤−Γ|θt|2 + ϵ 2|θt|2 + |c|2 Bb(S×A) + τ 2γ2K2 t ϵ , (130) where we used the identity (a + b)2 ≤2a2 + 2b2. Choosing ϵ = Γ we arrive at (131) 1 2ηt d dt|θt|2 ≤−Γ 2 |θt|2 + τ 2γ2K2 t Γ + |c|2 Bb(S×A) Γ which concludes the proof. □ 16 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH B.2. Proof of Theorem 5.1. Proof. By Lemma 5.2, we have that for all r ≥0 (132) 1 2ηr d dr|θr|2 ≤−Γ 2 |θr|2 + τ 2γ2K2 r Γ + |c|2 Bb(S×A) Γ . Rearranging, it holds that for all t ≥0 |θr|2 ≤−1 Γηr d dr|θr|2 + 2|c|2 Bb(S×A) + 2τ 2γ2K2 r Γ2 . (133) Multiplying both sides by e−τ(t−r) and integrating over r from 0 to t we have that for all t ≥0 Z t 0 e−τ(t−r)|θr|2dr ≤−1 Γ Z t 0 e−τ(t−r) 1 ηr d dr|θr|2dr + 2|c|2 Bb(S×A) Γ2 Z t 0 e−τ(t−r)dr (134) + 2τ 2γ2 Γ2 Z t 0 e−τ(t−r)K2 rdr (135) ≤−1 Γ Z t 0 e−τ(t−r) 1 ηr d dr|θr|2dr + 2|c|2 Bb(S×A) Γ2τ + 2τ 2γ2 Γ2 Z t 0 e−τ(t−r)K2 rdr, (136) where we used that R t 0 e−τ(t−r)dr ≤1 τ . Integrating the first term by parts, we have − Z t 0 e−τ(t−r) 1 ηr d dr|θr|2dr = −|θt|2 ηt + e−τt |θ0|2 η0 + τ Z t 0 |θr|2 e−τ(t−r) ηr dr (137) − Z t 0 |θr|2 e−τ(t−r) d drηr η2r dr. (138) Since by definition we have that for all t ≥0, ηt ≥1 and d dtηt ≥0 it holds that (139) Z t 0 |θr|2 e−τ(t−r) d drηr η2r dr ≥0. Hence dropping the negative terms on the right hand side of (137) and using that ηt ≥η0 for all t ≥0, we have −1 Γ Z t 0 e−τ(t−r) 1 ηr d dr|θr|2dr ≤e−τt |θ0|2 Γη0 + τ Γη0 Z t 0 e−τ(t−r)|θr|2dr. (140) Substituting this back into (134), for all t ≥0 we have that Z t 0 e−τ(t−r)|θr|2dr ≤e−τt |θ0|2 Γη0 + τ Γη0 Z t 0 e−τ(t−r)|θr|2dr (141) + 2|c|2 Bb(S×A) Γ2τ + 2τ 2γ2 Γ2 Z t 0 e−τ(t−r)K2 rdr. (142) Grouping like terms we have  1 − τ Γη0  Z t 0 e−τ(t−r)|θr|2dr ≤e−τt |θ0|2 Γη0 + 2|c|2 Bb(S×A) Γ2τ + 2τ 2γ2 Γ2 Z t 0 e−τ(t−r)K2 rdr. (143) Recall that we have η0 > τ Γ to ensure that 1 − τ Γη0 > 0. Dividing through by 1 − τ Γη0 gives for all t ≥0 that Z t 0 e−τ(t−r)|θr|2dr ≤σ1 + σ2 Z t 0 e−τ(t−r)K2 rdr (144) where we’ve set σ1 := |θ0|2 Γη0  1 − τ Γη0  + 2|c|2 Bb(S×A) Γ2τ  1 − τ Γη0 , σ2 := 2τ 2γ2 Γ2  1 − τ Γη0 . Recall the approximate Fisher Rao gradient flow for the policies {πt}t≥0, which for all t ≥0 and for all s ∈S, a ∈A is (145) ∂t ln dπt dµ (s, a) = −  Qt(s, a) + τ ln dπt dµ (a, s) − Z A  Qt(s, a) + τ ln dπt dµ (a, s)  πt(da|s)  CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 17 Duhamel’s principle yields for all t ≥0 that ln dπt dµ (s, a) = e−τt ln dπ0 dµ (a, s) + Z t 0 e−τ(t−r) Z A Qr(s, a)πr(da|s) −Qr(s, a)  dr (146) + τ Z t 0 e−τ(t−r) KL(πr(·|s)|µ)dr (147) Observe that since π0 ∈Πµ, there exists C1 ≥1 such that ln dπt dµ Bb(S×A) ≤C1. Assumption 4.2 gives that for all t ≥0 ln dπt dµ (s, a) ≤C1 + 2 Z t 0 e−τ(t−r)|θr|dr + τ Z t 0 e−τ(t−r) KL(πr(·|s)|µ)dr (148) ≤C1 + 2 Z t 0 e−τ(t−r)|θr|dr + τ Z t 0 e−τ(t−r)Krdr (149) Integrating over the actions with respect to πt(·|s) ∈P(A) gives for all t ≥0 that (150) KL(πt(·|s)|µ) ≤C1 + 2 Z t 0 e−τ(t−r)|θr|dr + τ Z t 0 e−τ(t−r)Krdr where we again use that Kr = sups∈S KL(πr(·|s)|µ). Following from the techniques in [17], observe that from (66) and Assumption 4.2 we similarly get for all t ≥0 that (151) ln dµ dπt (a, s) = −ln dπt dµ (s, a) ≤C1 + 2 Z t 0 e−τ(t−r)|θr|dr −τ Z t 0 e−τ(t−r)Krdr. Now integrating over the actions with respect to the reference measure µ ∈P(A) we have (152) KL(µ|πt(·|s)) ≤C1 + 2 Z t 0 e−τ(t−r)|θr|dr −τ Z t 0 e−τ(t−r)Krdr Moreover, using the non-negativity of the KL divergence, it holds for all t ≥0 that (153) KL(πt(·|s)|µ) ≤KL(πt(·|s)|µ) + KL(µ|πt(·|s)) ≤2C1 + 4 Z t 0 e−τ(t−r)|θr|dr Since this holds for any s ∈S, it holds for all t ≥0 that (154) Kt ≤2C1 + 4 Z t 0 e−τ(t−r)|θr|dr Now squaring both sides and using the H¨older’s inequality, we have K2 t ≤  2C1 + 4 Z t 0 e−τ(t−r)|θr|dr 2 (155) ≤8(C1)2 + 32 Z t 0 e−τ(t−r)|θr|dr 2 (156) = 8(C1)2 + 32 Z t 0 e−τ 2 (t−r)e−τ 2 (t−r)|θr|dr 2 (157) ≤8(C1)2 + 32 Z t 0 e−τ(t−r)dr  Z t 0 e−τ(t−r)|θr|2dr  (158) ≤8(C1)2 + 32 τ Z t 0 e−τ(t−r)|θr|2dr, (159) where we again used R t 0 e−τ(t−r)dr ≤1 τ . We can now substitute (144) into (159) to arrive at K2 t ≤8(C1)2 + 32 τ σ1 + 32 τ σ2 Z t 0 e−τ(t−r)K2 rdr (160) := a1 + a2 Z t 0 e−τ(t−r)K2 rdr (161) with a1 = 8(C1)2 + 32 τ σ1 and a2 = 32σ2 τ . □ 18 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH B.3. Proof of Corollary 5.1. Proof. By Theorem 5.1 it holds that (162) K2 t ≤a1 + a2 Z t 0 e−τ(t−r)K2 rdr. Observe that by multiplying through by eτt, we can rewrite this as (163) eτtK2 t ≤eτta1 + a2 Z t 0 eτrK2 rdr. Hence after defining g(t) = eτtK2 t and applying Gronwall’s inequality (Lemma A.1), for all γ ∈(0, 1) it holds for all t ≥0 that (164) K2 t ≤a1ea2t. □ B.4. Proof of Corollary 5.2. Proof. By Corollary 5.1 and Lemma 5.2, for all γ ∈(0, 1) it holds that 1 2 d dt |θt|2 ≤−Γ 2 ηt|θt|2 + btηt (165) such that (166) bt = 2|c|2 Bb(S×A) + 2τ 2γ2a1ea2t Γ2 ! . Recall that there exists α > 0 such that d dtηt ≤αηt, another application of Gronwall’s Lemma then concludes the proof. □ B.5. Proof of Corollary 5.3. Proof. By Theorem 5.1 we have that (167) K2 t ≤a1 + a2 Z t 0 e−τ(t−r)K2 rdr. Taking the supremum over [0, t] on the right hand side, we have (168) K2 t ≤a1 + a2 τ sup r∈[0,t] K2 r. Since this holds for all t ≥0, we have (169) sup r∈[0,t] K2 r ≤a1 + a2 τ sup r∈[0,t] K2 r. Now forcing 1 −a2 τ > 0, which is equivalent to the condition 64γ2 Γ2 −Γτ η0 < 1. Hence after rearranging we have (170) K2 t ≤sup r∈[0,t] K2 r ≤ a1τ τ −a2 □ B.6. Proof of Corollary 5.4. Proof. By Corollary 5.3, for sufficiently small γ > 0 it holds that for all t ≥0, K2 t ≤ a1τ τ −a2 . Hence by Lemma 5.2 we have 1 2 d dt |θt|2 ≤−ηt Γ 2 |θt|2 + ηt   2|c|2 Bb(S×A) + 2τ 2γ2  a1τ τ−a2  Γ2  . (171) The uniform boundedness in time of |θt| then follows by Gronwall’s Lemma (Lemma A.1). □ CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 19 Appendix C. Proof of Convergence Results C.1. Proof of Lemma 6.1. Proof. By the definition of the state-action value function (2) it holds that d dtQπt τ (s, a) = lim h→0 Qπt+h τ (s, a) −Qπt τ (s, a) h (172) = γ Z S d dtV πt τ (s′)P(ds′|s, a). (173) Now observe that by [11][Proof of Proposition 2.6], we have (174) d dtV πt τ (s) = 1 1 −γ Z S×A Aπt τ (s, a)∂tπt(da|s′)dπt(ds′|s). Thus we have d dtQπt τ (s, a) = γ 1 −γ Z S Z S×A Aπt τ (s′′, a′′)∂tπt(da′′|s′′)dπt(ds′′|s′)  P(ds′|s, a). (175) □ C.2. Proof of Theorem 6.1. Proof. Recall the performance difference Lemma (Lemma 1.1): for all ρ ∈P(S) and π, π′ ∈Πµ, V π τ (ρ) −V π′ τ (ρ) (176) = 1 1 −γ Z S  Z A  Qπ′ τ (s, a) + τ ln dπ′ dµ (a, s)  (π −π′)(da|s) + τ KL(π(·|s)|π′(·|s))  dπ ρ(ds) . (177) Now let π = π∗and π′ = πt and multiply both sides by −1 we have V πt τ (ρ) −V π∗ τ (ρ) = −1 1 −γ Z S Z A  Qπt(s, a) + τ ln dπt dµ (a, s)  (π∗−πt)(da|s) (178) + τ KL(π∗(·|s)|πt(·|s)) ! dπ∗ ρ (ds). (179) Recall the approximate Fisher Rao dynamics, which we write as (180) ∂t ln dπt dµ (s, a) +  Qt(s, a) + τ ln dπt dµ (a, s) − Z A  Qt(s, a′) + τ ln dπt dµ (a′, s)  πt(da′|s)  = 0. Observe that since the normalisation constant (enforcing the conservation of mass along the flow) R A  Qt(s, a) + τ ln dπt dµ (a, s)  πt(da|s) is independent of a ∈A, it holds that Z A Z A  Qt(s, a′) + τ ln dπt dµ (a′, s)  πt(da′|s)  (π∗−πt)(da|s) = 0. Hence adding 0 in the form of (180) into (178) it holds that for all t ≥0 V πt τ (ρ) −V π∗ τ (ρ) = 1 1 −γ Z S×A ∂t ln dπt dµ (a, s)(π∗−πt)(da|s)dπ∗ ρ (ds) (181) + Z S×A (Qt(s, a) −Qπt τ (s, a))(π∗−πt)(da|s)dπ∗ ρ (ds) −τ Z S KL(π∗(·|s)|πt(·|s)dπ∗ ρ (ds) ! . (182) By [12, Lemma 3.8] and Corollary 5.1, for any fixed ν ∈Πµ, the map t →KL(ν|πt) is differentiable. Hence we have Z A ∂t ln dπt dµ (s, a)(π∗−πt)(da|s) = Z A ∂t ln dπt dµ (s, a)π∗(da|s) − Z A ∂t ln dπt dµ (s, a)πt(da|s) (183) = Z A ∂t ln dπt dµ (s, a)π∗(da|s) (184) = −d dt KL(π∗(·|s)|πt(·|s)), (185) 20 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH where we used the conservation of mass of the policy dynamics in the second equality. Substituting this into (181) we have V πt τ (ρ) −V π∗ τ (ρ) = 1 1 −γ −d dt Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) (186) + Z S×A (Qt(s, a) −Qπt τ (s, a))(π∗−πt)(da|s)dπ∗ ρ (ds) −τ Z S KL(π∗(·|s)|πt(·|s)dπ∗ ρ (ds) ! . (187) Focusing on the second term, we have Z S×A (Qt(s, a) −Qπt τ (s, a))(π∗−πt)(da|s)dπ∗ ρ (ds) (188) ≤|Qt(s, a) −Qπt τ (s, a)|Bb(S×A) Z S TV(π∗(·|s), πt(·|s))dπ∗ ρ (ds) (189) ≤ 1 √ 2|θt −θπt| Z S KL(π∗(·|s)|πt(·|s)) 1 2 dπ∗ ρ (ds) (190) ≤ 1 √ 2|θt −θπt| Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds)  1 2 , (191) where we used Pinsker’s Inequality in the second inequality and H¨older’s inequality in the final inequality. Now applying Young’s inequality, there exists ϵ > 0 such that (192) |θt −θπt| Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds)  1 2 ≤1 2ϵ|θt −θπt|2 + ϵ 2 Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds). Substituting this back into (186) and choosing ϵ = √ 2τ we have V πt τ (ρ) −V π∗ τ (ρ) = 1 1 −γ −d dt Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) (193) −τ 2 Z S KL(π∗(·|s)|πt(·|s)dπ∗ ρ (ds) + 1 4τ |θt −θπt|2 ! . (194) Rearranging, we arrive at d dt Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) ≤−τ 2 Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) (195) −(1 −γ)  V πt τ (ρ) −V π∗ τ (ρ)  + 1 4τ |θt −θπt|2. (196) Applying Duhamel’s principle yields Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) ≤e−τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (197) −(1 −γ) Z t 0 e−τ 2 (t−r)(V πr τ (ρ) −V π∗ τ (ρ))dr + 1 2τ Z t 0 e−τ 2 (t−r)|θr −θπr|2dr. (198) Now using that R t 0 e−τ 2 (t−r)dr = 2(1−e−τ 2 ) τ , we have Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) ≤e−τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (199) −2(1 −γ)(1 −e−τ 2 ) τ min r∈[0,t]  V πr τ (ρ) −V π∗ τ (ρ)  + 1 2τ Z t 0 e−τ 2 (t−r)|θr −θπr|2dr. (200) Rearranging, we have min r∈[0,t] V πr τ (ρ) −V π∗ τ (ρ) ≤ τ 2(1 −γ)(1 −e−τ 2 ) e−τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (201) + 1 2τ Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ! . (202) which concludes the proof. □ CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 21 C.3. Proof of Theorem 6.2. Proof. Using the chain rule and the critic dynamics in (21), we have that for all r ≥0 1 2ηr d dr|θr −θπr|2 = 1 ηr dθr dr , θr −θπr  − dθπr dr , θr −θπr  (203) = −⟨g(θr, πr), θr −θπr⟩−1 ηr dθπr dr , θr −θπr  (204) Let Γ = λβ(1 −γ)(1 −√γ). Using Lemma 4.1 and the λβ-strong convexity of L(·, π; β) and recalling that L(θπr, πr) = 0 for all r ≥0, it holds for all r ≥0 that 1 2ηt d dt|θt −θπt|2 = −⟨g(θt, πt), θt −θπt⟩−1 ηt dθπt dt , θt −θπt  (205) ≤−(1 −γ)(1 −√γ) ⟨∇θL(θt, πt; β), θt −θπt⟩−1 ηt dθπt dt , θt −θπt  (206) ≤−(1 −γ)(1 −√γ)L(θt, πt; β) −Γ 2 |θt −θπt|2 −1 ηt dθπt dt , θt −θπt  (207) ≤−(1 −γ)(1 −√γ)L(θt, πt; β) −Γ 2 |θt −θπt|2 + 1 2ηt dθπt dt 2 + |θt −θπt|2 ! (208) = −(1 −γ)(1 −√γ)L(θt, πt; β) − Γ 2 −1 2ηt  |θt −θπt|2 + 1 2ηt dθπt dt 2 , (209) where we used H¨older’s and Young’s inequalities in (208). Since η0 > 1 Γ and ηt is a non-decreasing function, it holds that ηt > 1 Γ for all t ≥0. Hence Γ 2 − 1 2ηt > 0 and thus we can drop the second term. Moreover the λβ-strong convexity of L(·, π; β) along with L(θπ, π; β) = 0 and ∇θL(θπ, π) = 0 for all π ∈Πµ gives that |θt −θπt|2 ≤2 λβ L(θt, πt; β). Hence for all r ≥0 we arrive at 1 2ηr d dr|θr −θπr|2 ≤−Γ 2 |θr −θπr|2 + 1 2ηr dθπr dr 2 . (210) Rearranging, multiplying by e−τ(t−r) and integrating over r from 0 to t, it holds for all t ≥0 that Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤−1 Γ Z t 0 e−τ 2 (t−r) 1 ηr d dr|θr −θπr|2dr + 1 Γ Z t 0 e−τ 2 (t−r) 1 ηr dθπr dt 2 dr. (211) Integrating the first term by parts (identically to (137) from the proof of Theorem 5.1), we have Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤1 Γ −|θt −θπt|2 ηt + e−τ 2 t |θ0 −θπ0|2 η0 (212) + τ 2 Z t 0 e−τ 2 (t−r) 1 ηr |θr −θπr|2dr − Z t 0 |θr −θπr|2 e−τ 2 (t−r) d drηr η2r dr (213) + Z t 0 e−τ 2 (t−r) 1 ηr dθπr dr 2 dr ! . (214) Since for all t ≥0 it holds that ηt ≥1 and d dtηt ≥0, we have that Z t 0 |θr −θπr|2 e−τ 2 (t−r) d drηr η2r dr ≥0. Thus after dropping all negative terms and using that ηt ≥η0 for all t ≥0, we have (215)  1 − τ 2Γη0  Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤e−τ 2 |θ0 −θπ0|2 Γη0 + Z t 0 e−τ 2 (t−r) 1 ηr dθπr dr 2 dr. Since η0 > 1 2Γ and τ < 1, it holds that 1 − τ 2Γη0 > 0 and hence it holds that Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤e−τ 2 |θ0 −θπ0|2 Γη0  1 − τ 2Γη0  + 1  1 − τ 2Γη0  Z t 0 e−τ 2 (t−r) 1 ηr dθπr dr 2 dr, (216) 22 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH which concludes the proof. □ C.4. Proof of Theorem 6.3. Proof. By Theorem 6.2, we have Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤e−τ 2 |θ0 −θπ0|2 Γη0  1 − τ 2Γη0  + 1  1 − τ 2Γη0  Z t 0 e−τ 2 (t−r) 1 ηr dθπr dr 2 dr. (217) Hence it remains to characterise the growth of the final integral. Observe that for all π ∈P(A|S), θπ ∈RN satisfies the least-squares optimality condition given by (218) θπ = arg min θ L(θ, π; β) = Z S×A ϕ(s, a)ϕ(s, a)⊤β(da, ds) −1 Z S×A ϕ(s, a)Qπ τ (s, a) β(ds, da)  . Setting π = πt and differentiating time we arrive at (219) dθπt dt = Z S×A ϕ(s, a)ϕ(s, a)⊤β(da, ds) −1 Z S×A ϕ(s, a) d dtQπt(s, a) β(ds, da)  . Hence by Lemma 6.1, Assumption 4.2 and Assumption 4.3, for all t ≥0 it holds that dθπt dt = Z S×A ϕ(s, a)ϕ(s, a)⊤β(da, ds) −1 Z S×A ϕ(s, a) d dtQπt τ (s, a) β(ds, da)  (220) ≤ Z S×A ϕ(s, a)ϕ(s, a)⊤β(da, ds) −1 op d dtQπt τ Bb(S×A) (221) = 1 λβ d dtQπt Bb(S×A) (222) = γ λβ(1 −γ) Z S Z S×A Aπt τ (s′′, a′′)∂tπt(da′′|s′′) dπt(ds′′|s′)  P(ds′|·, ·) Bb(S×A) (223) ≤ γ λβ(1 −γ) |Aπt τ |Bb(S×A) sup s∈S |∂tπt(·|s)|M(A) . (224) Now using Lemma A.3, it holds that |Aπt τ |Bb(S×A) sup s∈S |∂tπt(·|s)|M(A) ≤|Aπt τ |Bb(S×A) |At|Bb(S×A) (225) ≤ 2 |Qπt τ |Bb(S×A) + 2τ ln dπt dµ Bb(S×A) ! 2 |Qt|Bb(S×A) + 2τ ln dπt dµ Bb(S×A) ! . (226) Hence by Corollaries 5.1 and 5.2 and Lemma A.3, there exists α1, α2 > 0 such that dθπt dt 2 ≤α1eα2t. Thus Theorem 6.2 becomes Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤e−τ 2 |θ0 −θπ0|2 Γη0  1 − τ 2Γη0  + α1  1 − τ 2Γη0  Z t 0 e−τ 2 (t−r) eα2r ηr dr. (227) Let ηt = η0ek1t for any k1 > τ 2 + α2. Then observe that Z t 0 e−τ 2 (t−r) eα2r ηr dr = 1 η0 e−τ 2 t Z t 0 e( τ 2 +α2−k1)rdr (228) ≤1 η0 e−τ 2 t e( τ 2 +α2−k1)t −1 τ 2 + α2 −k1 ! (229) ≤ e−τ 2 t η0 τ 2 + α2 −k1 , (230) hence all together it holds that Z t 0 e−τ 2 (t−r)|θr −θπr|2dr ≤e−τ 2 |θ0 −θπ0|2 Γη0  1 − τ 2Γη0  + e−τ 2 t α1 η0 − τ 2Γ  τ 2 + α2 −k1 . (231) CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 23 Substituting this into the result from Theorem 6.2 concludes the proof. □ C.5. Proof of Theorem 6.3. Following completely identically to the proof of Theorem 6.3, we have dθπt dt ≤ γ λβ(1 −γ) |Aπt τ |Bb(S×A) sup s∈S |∂tπt(·|s)|M(A) (232) ≤ 4 (1 −γ)2  |c|Bb(S×A) + Kt 2 + 4τ C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr !2 . (233) Then by Corollaries 5.3 and 5.4, there exists b2 > 0 such that dθπt dt 2 ≤d1. Hence by Theorem 6.2 we have min r∈[0,t] V πr τ (ρ) −V π∗ τ (ρ) ≤ τ 2(1 −γ)(1 −e−τ 2 ) e−τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (234) + d1 Z t 0 e−τ 2 (t−r) 1 ηr dr ! . (235) School of Mathematics, University of Edinburgh, UK Email address: ezorba@ed.ac.uk School of Mathematics, University of Edinburgh, UK Email address: d.siska@ed.ac.uk School of Mathematics, University of Edinburgh, UK, The Alan Turing Institute, UK, and Simtopia, UK Email address: l.szpruch@ed.ac.uk
CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Abstract. We prove the stability and global convergence of a coupled actor-critic gradient flow for infinite-horizon and entropy-regularised Markov decision processes (MDPs) in continuous state and action space with linear function approximation under Q-function realisability. We consider a version of the actor critic gradient flow where the critic is updated using temporal difference (TD) learning while the policy is updated using a policy mirror descent method on a separate timescale. We demonstrate stability and exponential convergence of the actor critic flow to the optimal policy. Finally, we address the interplay of the timescale separation and entropy regularisation and its effect on stability and convergence. 1. Introduction In reinforcement learning (RL) an agent aims to learn an optimal policy that maximizes the expected cumulative reward through repeated interactions with its environment. Such methods typically involve two key components: policy evaluation and policy improvement. During policy evaluation, the advantage function corresponding to a policy, or its function approximation, is updated using state, action and reward data generated under this policy. Policy improvement then uses this approximate advantage function to update the policy, most commonly through some policy gradient method. Algorithms that explicitly combine these two components are known as actor-critic (AC) methods [13], where the actor corresponds to policy improvement and the critic to policy evaluation. There are many policy gradient methods to choose from. In the last decade trust region policy optimization (TRPO) methods [20] and methods inspired by these like PPO [21] have become increasingly well-established due to their impressive empirical performance. Largely, this is because they alleviate the difficulty in choosing appropriate step sizes for the policy gradient updates: for vanilla policy gradient even a small change in the parameter may result in large change in the policy, leading to instability, but TRPO prevents this by explicitly ensuring the KL divergence between successive updates is smaller than some tolerance. Mirror descent replaces the TRPO's hard constraint with a penalty leading to a first order method which is also ameanable to analysis. Indeed, at least for direct parametrisation, it is known to converge with sub-linear and even linear rate for entropy regularized problems (depending on exact assumptions) [9, 14, 11]. Due to the favourable analytical properties of mirror descent, in this paper we consider a version of the actor critic gradient flow where the policy is updated using a policy mirror descent method while the critic is updated using temporal difference (TD) on a separate timescale. Entropy-regularised MDPs are widely used in practice since the entropic regularizer leads to a number of desirable properties: it has a natural interpretation as something that drives exploration, it ensures that there is a unique optimal policy and it can accelerate convergence of mirror descent [11], as well as classical policy gradient [18]. However, analysing the stability and convergence of actor-critic methods in this entropy-regularized setting with general state and action spaces remains highly non-trivial due to lack of a priori bounds on the value functions. To address the actor critic methods for entropy regularised MDPs in general action spaces, a careful treatment of tools from two timescale analysis, convex analysis over both Euclidean spaces and measure spaces must be deployed. In this paper, we address precisely this challenge. We study the stability and convergence of a widely used actor-critic algorithm in which the critic is updated using Temporal Difference (TD) learning [22], and the policy is updated through Policy Mirror Descent [9]. Our analysis employs a two-timescale update scheme [3], where both the actor and critic are updated at each iteration with the critic updated on a faster timescale. Keywords: Reinforcement learning, Actor-Critic method, Entropy regularisation, Approximate gradient flow, Nonconvex optimization, Global convergence, Function approximation. 1 16 Oct 2025 2 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH 1.1. Related works. We focus on the subset of RL literature that address the convergence of coupled actor-critic algorithms. In the unregularised setting, actor-critic methods have been studied extensively. The first convergence results in the two-timescale regime established asymptotic convergence in the continuous-time limit of coupled updates ([3, 13]). Most modern research employs linear function approximation for the critic, where linear convergence rates have been obtained under various assumptions on the step-sizes of the actor and critic ([1, 24, 8]). Most closely related to our work is [25], which considers the same two-timescale actor-critic scheme in the continuous-time limit for unregularised MDPs, with an overparameterized neural network used for the critic. However, convergence to the optimal policy was only achieved up to a neighbourhood of a scaling factor, and a restarting mechanism was required to ensure the stability of the dynamics. In the entropy-regularised setting, [5, 6] address the convergence of a natural actor critic algorithm. However, the convergence and stability of these results rely on the finite cardinality of the action space in presence of entropy regularisation. 1.2. Our Contribution. Under linear Q-realisability assumption, we address the following question: "Are actor-critic methods for entropy-regularized MDPs in general action spaces stable and convergent, and if so, at what rate?" Our main contributions are as follows: • We study a common variant of actor-critic where the critic is updated using temporal difference (TD) learning and the policy is updated using mirror descent. Similarly to [13, 25], we analyse the coupled updates in the continuous-time limit, resulting in a dynamical system where the critic flow is captured by a semi-gradient flow and the actor flow corresponds to an approximate Fisher-Rao gradient flow over the space of probability kernels. • By combining convex analysis over the space of probability measures with classical Euclidean convex analysis, we develop a Lyapunov-based stability framework that captures the interplay between entropy regularisation and timescale separation, and establish stability of the resulting dynamics. • We prove convergence of the actor-critic dynamics for entropy-regularized MDPs with infinite action spaces. 1.3. Notation. Let (E, d) denote a Polish space (i.e., a complete separable metric space). We always equip a Polish space with its Borel sigma-field B(E). Denote by Bb(E) the space of bounded measurable functions f : E →R endowed with the supremum norm |f|Bb(E) = supx∈E |f(x)|. Denote by M(E) the Banach space of finite signed measures μ on E endowed with the total variation norm |μ|M(E) = |μ|(E), where |μ| is the total variation measure. Recall that if μ = f dρ, where ρ ∈M+(E) is a nonnegative measure and f ∈L1(E, ρ), then |μ|M(E) = |f|L1(E,ρ). Denote by P(E) ⊂M(E) the set of probability measures on E. Moreover, we denote the Euclidean norm on RN by | · | with inner product ⟨·, ·⟩. Given some A, B ∈RN×N, we denote by λmin(A) the minimum eigenvalue of A and denote A ⪰B if and only if A -B is positive semidefinite. Moreover, we denote by |A|op the operator norm of A induced by the Euclidean norm, |A|op := sup|x|̸=0 |Ax| |x| . 1.4. Entropy Regularised Markov Decision Processes. Consider an infinite horizon Markov Decision Process (S, A, P, c, γ), where the state space S and action space A are Polish, P ∈P(S|S × A) is the state transition probability kernel, c is a bounded cost function and γ ∈(0, 1) is a discount factor. Let μ ∈P(A) denote a reference probability measure and τ > 0 denote a regularisation parameter. To ease notation, for each π ∈P(A|S) we define the kernels Pπ(ds′|s) := R A P(ds′|s, a)π(da|s) and P π(ds′, da′|s, a) := P(ds′|s, a)π(da′|s′). Denoting Eπ s = Eπ δs where δs ∈P(S) denotes the Dirac measure at s ∈S, for each stochastic policy π ∈P(A|S) and s ∈S, define the regularised value function by (1) V π τ (s) = Eπ s " ∞ X n=0 γn c(sn, an) + τ KL(π(·|sn)|μ) # ∈R ∪{∞} , where KL(π(·|s)|μ) is the Kullback-Leibler (KL) divergence of π(·|s) with respect to μ, define as KL(π(·|s)|μ) := R A ln dπ dμ(a|s)π(da|s) if π(·|s) is absolutely continuous with respect to μ, and infinity otherwise. For each π ∈P(A|S), we define the state-action value function Qπ τ ∈Bb(S × A) by (2) Qπ τ (s, a) = c(s, a) + γ Z S V π τ (s′)P(ds′|s, a) . CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 3 By the Dynamic Programming Principle, Qπ τ : S × A →R is the unique fixed point of the Bellman operator Tπ : Bb(S × A) →Bb(S × A), which for any f ∈Bb(S × A) is defined as (3) Tπf(s, a) = c(s, a) + γ Z S×A f(s′, a′)P π(ds′, da′|s, a) + τγ Z S KL(π(·|s′)|μ)P(ds′|s, a). The state-occupancy kernel dπ ∈P(S|S) is defined by (4) dπ(ds′|s) = (1 -γ) ∞ X n=0 γnP n π (ds′|s) , where P n π is the n-times product of the kernel Pπ with P 0 π(ds′|s) := δs(ds′). Moreover, for each π ∈ P(A|S) and (s, a) ∈S × A, we define the state-action occupancy kernel as (5) dπ(ds, da|s, a) = (1 -γ) ∞ X n=0 γn(P π)n(ds, da|s, a) where (P π)n is the n-times product of the kernel P π with (P π)0(ds′, da′|s, a) := δ(s,a)(ds′, da′). Given some initial state-action distribution β ∈P(S × A) with initial state distribution given by ρ(ds) = R A β(da, ds), we define the state-occupancy and state-action occupancy measures as (6) dπ ρ(ds) = Z S dπ(ds|s′)ρ(ds′), dπ β(ds, da) = Z S×A dπ(ds, da|s′, a′)β(da′, ds′). Note that for all E ∈B(S × A), by defining the linear operator Jπ : P(S × A) →P(S × A) as (7) Jπβ(E) = Z S×A P π(E|s′, a′)β(ds′, da′), it directly holds that (8) dπ β(da, ds) = (1 -γ) ∞ X n=0 γnJn π β(da, ds), with Jn π the n-fold product of the operator Jπ with J0 π = I, the identity operator on P(S × A). By choosing β = ρ ⊗π, we retrieve the classical state-action occupancy measure dπ β = dπ ρπ. For a given initial distribution ρ ∈P(S), the optimal value function is defined as (9) V ∗ τ (ρ) = min π∈P(A|S) V π τ (ρ), with V π τ (ρ) := Z S V π τ (s)ρ(ds) and we refer to π∗∈P(A|S) as the optimal policy if V ∗ τ (ρ) = V π∗ τ (ρ). Due to [11, Theorem B.1, Lemma B.2] we have the following dynamical programming principle for entropy regularised MDPs. Theorem 1.1 (Dynamical Programming Principle). Let τ > 0. The optimal value function V ∗ τ is the unique bounded solution of the following Bellman equation: V ∗ τ (s) = -τ ln Z A exp -1 τ Q∗ τ(s, a) μ(da), where Q∗ τ ∈Bb(S × A) is defined by Q∗ τ(s, a) = c(s, a) + γ Z S V ∗ τ (s′)P(ds′|s, a) , ∀(s, a) ∈S × A . Moreover, there is an optimal policy π∗ τ ∈P(A|S) given by π∗ τ(da|s) = exp -1 τ (Q∗ τ(s, a) -V ∗ τ (s)) μ(da) , ∀s ∈S. Finally, the value function V π τ is the unique bounded solution of the following Bellman equation for all s ∈S V π τ (s) = Z A Qπ τ (s, a) + τ ln dπ dμ(a, s) π(da|s) . Theorem 1.1 suggests that, without loss of generality, it suffices to minimise (9) over the class of policies that are equivalent to the reference measure μ. Definition 1.1 (Admissible Policies). Let Πμ denote the class of policies for which there exists f ∈ Bb(S × A) with π(da|s) = exp(f(s, a)) R A exp(f(s, a))μ(da)μ(da). 4 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH The performance difference lemma, first introduced for tabular unregularised MDPs, has become fundamental in the analysis of MDPs as it acts a substitute for the strong convexity of the π 7→V π τ if the state-occupancy measure dπ ρ is ignored (e.g [10], [25], [9]). By virtue of [11], we have the following performance difference for entropy regularised MDPs in Polish state and action spaces. Lemma 1.1 (Performance difference). For all ρ ∈P(S) and π, π′ ∈Πμ, V π τ (ρ) -V π′ τ (ρ) = 1 1 -γ Z S Z A Qπ′ τ (s, a) + τ ln dπ′ dμ (a, s) (π -π′)(da|s) + τ KL(π(·|s)|π′(·|s)) dπ ρ(ds) . 2. Mirror-Descent and the Fisher-Rao Gradient flow Defining the soft advantage function as Aπ τ (s, a) := Qπ τ (s, a) + τ ln dπ dμ(s, a) -V π τ (s), then for some λ > 0 and π0 ∈Πμ, the Policy Mirror Descent update rule reads as πn+1(·|s) = arg min m∈P(A) Z A Aπn τ (s, a)(m(da) -πn(da|s)) + 1 λ KL(m|πn(·|s)) (10) = arg min m∈P(A) Z A Qπn τ (s, a) + τ ln dπn dμ (s, a) (m(da) -πn(da|s)) + 1 λ KL(m|πn(·|s)) . (11) [7] shows that the pointwise optimisation is achieved by (12) dπn+1 dπn (a, s) = exp -λAπn τ (s, a) R A exp (-λAπn τ (s, a)) πn(da|s). Observe that for any π ∈P(A|S), it holds that R A Aπ τ (s, a)π(da|s) = 0. Hence taking the logarithm of (12) we have ln dπn+1 dμ (s, a) -ln dπn dμ (s, a) = -λAπn τ (s, a) -ln Z A e-λAπn τ (s,a)πn(da|s). Interpolating in the time variable and letting λ →0 we retrieve the Fisher-Rao gradient flow for the policies (13) ∂t ln dπt dμ (s, a) = - Aπt τ (s, a) - Z A Aπt τ (s, a)πt(da|s) = -Aπt τ (s, a). Note that the soft advantage formally corresponds to the functional derivative of the value function with respect to the policy πn and thus (13) can be seen as a gradient flow of the value function over the space of kernels P(A|S) (see [11] for a detailed description of the functional derivative). In the case where the advantage function is fully accessible for all t ≥0, [11][Theorem 2.8] shows that the entropy regularisation in the value function induces an exponential convergence to the optimal policy. In the following section we define the approximate Fisher Rao dynamics arising from approximating the advantage for all t ≥0. 3. Actor Critic Methods Given some feature mapping φ : S × A →RN, we parametrise the state-action value function as Q(s, a; θ) := ⟨θ, φ(s, a)⟩. Moreover, we denote the approximate soft Advantage function as (14) A(s, a; θ) = Q(s, a; θ) + τ ln dπ dμ(s, a) - Z A Q(s, a; θ) + τ ln dπ dμ(s, a) π(da|s). The Mean Squared Bellman Error (MSBE) is defined as (15) MSBE(θ, π) = 1 2 Z S×A (Q(s, a; θ) -TπQ(s, a; θ))2dπ β(da, ds) where dπ β ∈P(S × A) is the state-action occupancy measure defined in (6). Given that β ∈P(S × A) has full support, by (3) it holds that MSBE(θ, π) = 0 if and only if Q(s, a; θ) = Qπ τ (s, a) for all s ∈S and a ∈A. Hence one approach to implementing the Policy Mirror Descent updates is to calculate the optimal parameters for Q(s, a; θ) by minimising the MSBE at each policy mirror descent iteration (16) θn+1 = arg min θ∈RN MSBE(θ, πn), CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 5 (17) dπn+1 dπn (a, s) = exp -λA(s, a; θn+1) R A exp (-λA(s, a; θn+1)) πn(da|s). To avoid fully solving the optimisation in (16) for each policy update, one can update the critic using a semi-gradient descent on a different timescale to the policy update. Let hn, λn > 0 be the step-sizes of the critic and actor respectively at iteration n ≥0. Let the semi-gradient g : RN × P(A|S) →RN of the MSBE with respect to θ be (18) g(θ, π) := Z S×A (Q(s, a; θ) -TπQ(s, a; θ))φ(s, a)dπ β(da, ds). The two-timescale actor-critic Mirror Descent scheme reads as (19) θn+1 = θn -hng(θn, πn), dπn+1 dπn (a, s) = exp -λnA(s, a; θn+1) R A exp (-λnA(s, a; θn+1)) πn(da|s). (20) where timescale separation ηn := hn λn > 1 ensures that the critic is updated on a much faster timescale than the policy to improve the local estimation of the policy updates. As pointed out in [25], even with the KL penalty in (10) the critic may still be far away from the true state-action value function, resulting in unstable updates. 4. Dynamics In this paper, we study the stability and convergence of the two-timescale actor-critic Mirror Descent scheme in the continuous-time limit. Let Qt(s, a) := Q(s, a; θt) and At(s, a) := A(s, a; θt). Let η : [0, ∞) →[1, ∞) be a non-decreasing function representing the timescale separation, then for some θ0 ∈ RN and π0 ∈Πμ we have the following coupled dynamics (21) dθt dt = -ηtg(θt, πt) ∂tπt(da|s) = -At(s, a)πt(da|s) (22) where g : RN × P(A|S) is the semi-gradient of the MSBE defined in (18). We refer to (22) as the Approximate Fisher Rao Gradient flow. We perform our analysis under the following assumptions. Assumption 4.1 (Qπ τ -realisability). For all π ∈Πμ and (s, a) ∈S × A, there exists θπ ∈RN such that Qπ(s, a) = ⟨θπ, φ(s, a)⟩. A simple example of when this holds is in the tabular case, where one can choose φ to be a one-hot encoding of the state-action space. Moreover, all linear MDPs are Qπ-realisable. In a linear MDP there exists exists φ : S × A →RN, w ∈RN and a sequence {ψi}N i=1 with ψi ∈M(S) such that for all (s, a) ∈S × A, c(s, a) = ⟨w, φ(s, a)⟩, P(ds′ | s, a) = N X i=1 φi(s, a)ψi(ds′). In this case it holds that (θπ)i = wi + R S V π(s′)ψi(ds′). Assumption 4.1 can be seen as a convention to omit function approximation errors in the final convergence results. This assumption, or the presence of approximation errors in convergence results, are widely present in the actor-critic literature ([5], [24], [23], [6], [8], [19]). More recently, [16] derives some weaker ordering conditions in the bandit case (empty state space) which guarantee the convergence of soft-max policy gradient in the tabular setting beyond realisability. However as of now it is unclear how this applies to MDPs and also fundamentally depends on the finite cardinality of the action space. By [4], Assumption 4.1 holds in the limit N →∞when φi are the basis functions of L2(ρ ⊗μ) for some ρ ⊗μ ∈P(S × A). However, analysis in such a Hilbert space becomes more involved and intricate and is the result of ongoing work. Assumption 4.2. For all (s, a) ∈S × A it holds that |φ(s, a)| ≤1. Assumption 4.2 is purely for convention and is without loss of generality in the finite-dimensional case. 6 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Assumption 4.3. Let β ∈P(S × A) be fixed. Then λβ := λmin Z S×A φ(s, a)φ(s, a)⊤β(ds da) > 0. Note that unlike the analogous assumptions imposed in [8], Assumption 4.3 is independent of the policy. This property allows us to remove any dependence on the continuity of eigenvalues. Definition 4.1. For all π ∈Πμ and ζ ∈P(S × A), the squared loss with respect to ζ is defined as (23) L(θ, π; ζ) = 1 2 Z S×A (⟨θ, φ(s, a)⟩-Qπ(s, a))2ζ(da, ds) where Qπ τ is defined in (2). A straightforward calculation given in Lemma A.4 shows that due to Lemma 5.1 and Assumption 4.3, for any π ∈Πμ it holds that L(·, π; dπ β) is (1 -γ)λβ-strongly convex. The following result then connects the geometry of the semi-gradient of the MSBE and the gradient of L(θ, π; β), which can be seen as an extension of Lemma 3 of [2] to the current entropy regularised setting and where the measure of integration in the MSBE is not necessarily stationary. Lemma 4.1. Let Assumption 4.1 hold. Then for all θ ∈RN and π ∈Πμ it holds that (24) -⟨g(θ, π), θ -θπ⟩≤-(1 -√γ)(1 -γ) ⟨∇θL(θ, π; β), θ -θπ⟩ with ∇θL(θ, π; β) = Z S×A (⟨θ, φ(s, a)⟩-Qπ τ (s, a))φ(s, a)β(da, ds). See Appendix A.1 for a proof. 5. Stability In this section we analyse the stability of the coupled Actor Critic flow. Throughout this section, to ease notation we let Γ := λβ(1 -γ)(1 -√γ), Kt := sup s∈S KL(πt(·|s)|μ), with λβ > 0 the constant from Assumption 4.3. The following lemma establishes properties of the state-action occupancy measure defined in (6) and which are useful in the proofs. Lemma 5.1. For all π ∈P(A|S), β ∈P(S × A) and E ∈B(S × A) it holds that (25) dπ Jπβ(E) = Jπdπ β(E). Moreover, for all γ ∈(0, 1) we have (26) dπ β(E) -γdπ Jπβ(E) = (1 -γ)β(E). See Appendix A.2 for a proof. Lemma 5.2 then establishes the effect of the coupling and timescale separation in the actor-critic flow and its effect on the stability of the critic parameters. Lemma 5.2. Let Assumptions 4.2 and 4.3 hold. Then for all t ≥0 it holds that (27) 1 2ηt d dt|θt|2 ≤-Γ 2 |θt|2 + τ 2γ2K2 t Γ + |c|2 Bb(S×A) Γ See Appendix B.1 for a proof. By connecting the result from Lemma 5.2 with the approximate Fisher Rao gradient flow, we are able to establish a Gronwall-type inequality for the KL divergence of the policies with respect to the reference measure. Theorem 5.1. Let Assumptions 4.2 and 4.3 hold. Let η0 > τ Γ. Then there exists constants a1 = a1 τ, η0, γ, λβ, |c|Bb(S×A), dπ0 dμ Bb(S×A) ! > 0 and a2 = a2(τ, η0, γ, λβ) > 0 such that for all γ ∈(0, 1) and t ≥0 it holds that (28) K2 t ≤a1 + a2 Z t 0 e-τ(t-r)K2 r dr. CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 7 See Appendix B.2 for a proof. Through applications of Gronwall's Lemma (Lemma A.1), two direct corollaries of Theorem 5.1 show that the KL divergence of the policies with respect to the reference measure and the critic parameters do not blow up in finite time. Corollary 5.1 (Stability). Under the same assumptions as Theorem 5.1, for all γ ∈(0, 1), s ∈S and t ≥0 it holds that (29) KL(πt(·|s)|μ)2 ≤a1ea2t. Corollary 5.2. Under the same assumptions as Theorem 5.1, suppose that there exists α > 0 such that d dtηt ≤αηt. Then for all γ ∈(0, 1) there exists r1, r2 > 0 such that for all t ≥0 it holds that (30) |θt| ≤r1er2t. See Appendix B.3 and B.4 for the proofs. Corollaries 5.3 and 5.4 then show that if the MDP is sufficiently regularised through a sufficiently small discounting factor, the KL divergence of the policies with respect to the reference measure remains uniformly bounded along the flow. Corollary 5.3 (Uniform boundedness). Under the same assumptions as Theorem 5.1, for γ ∈(0, 1) such that 64γ2 Γ2-Γτ η0 0 such that for all t ≥0 it holds that (32) |θt| ≤R See Appendix B.5 and B.6 for the proofs. 6. Convergence In this section we will present three convergence results of the coupled actor-critic flow. Firstly, we characterise the time derivative of the state-action value function along the approximate gradient flow for the policies. Lemma 6.1. For all t ≥0 and (s, a) ∈S × A, it holds that (33) d dtQπt τ (s, a) = γ 1 -γ Z S Z S×A Aπt τ (s′′, a′′)∂tπt(da′′|s′′)dπt(ds′′|s′) P(ds′|s, a) See Appendix C.1 for a proof. Observe that in the exact setting, (13), we obtain the dissipative property of {Qπt τ }t≥0 along the flow d dtQπt τ (s, a) = -γ 1 -γ Z S Z S×A Aπt τ (s′′, a′′)2dπt(ds′′|s′) P(ds′|s, a) ≤0. Furthermore, Theorem 6.1 shows that the actor-critic flow maintains the exponential convergence to the optimal policy induced by the τ-regularisation up to a error term arising from not solving the critic to full accuracy. Theorem 6.1. Let {πt, θt}t≥0 be the trajectories of the actor critic flow. Let Assumptions 4.1 and 4.2 hold. Then for all t > 0 it holds that min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) ≤ τ 2(1 -γ)(1 -e-τ 2 t) e-τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (34) + 1 2τ Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ! (35) See Appendix C.2 for a proof. Note that by Theorem 1.1, it holds that KL(π∗(·|s)|π0(·|s)) 1 Γ and 0 0 such that Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤b1e-τ 2 t + b2 Z t 0 e-τ 2 (t-r) 1 ηr dθπr dt 2 dr. (36) See Appendix C.3 for a proof. The following two results then connects Corollaries 5.1 and 5.2, Lemma 6.1 and Theorem 6.1 to demonstrate an exponential convergence to the optimal policy for all γ ∈(0, 1) when the critic is updated sufficiently fast. Theorem 6.3. Under the same assumptions as Theorem 6.2, there exists k1 > 0 with ηt = η0ek1t and k2 > 0 such that for all γ ∈(0, 1) and t > 0 it holds that min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) ≤ τe-τ 2 t 2(1 -γ)(1 -e-τ 2 t) Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) + k2 2τ ! (37) See Appendix C.4 for a proof. A direct consequence arising from the proof of Theorem 6.3 shows that if the MDP is sufficiently regularised through the same small discounting factor condition as in Corollary 5.3, one can arrive at convergence for a much more general class of functions ηt. Corollary 6.1. Under the same assumptions as Theorem 6.2, for γ ∈(0, 1) such that 2 √ 2γ q Γ2-Γτ η0 0 such that for all t ≥0, min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) ≤ τ 2(1 -γ)(1 -e-τ 2 t) e-τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (38) + d1 Z t 0 e-τ 2 (t-r) 1 ηr dr ! . (39) See Appendix C.3 for a proof. For example, suppose the small discounting factor condition is satisfied, choosing ηt = t 1 2 + η0 with η0 > 1 Γ and τ = 0.5, it can be shown that asymptotically (40) min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) ∼1 √ t. 7. Limitations In this work, we only study the continuous-time dynamics of the actor-critic algorithm. Although this formulation gives insights into the discrete counterpart, a rigorous treatment of the discrete-time setting is more realistic for practical purposes and is left for future research. Moreover, for the purposes of analysis our critic approximation is linear while in practice non-linear neural networks are used to approximate the critic. Finally, our work assumes all integrals are evaluated exactly, in particular the semi-gradient (18). In practice these would need to be estimated from samples leading to additional Monte-Carlo errors. To fully analyse this is left for future work. 8. Acknowledgements DZ was supported by the EPSRC Centre for Doctoral Training in Mathematical Modelling, Analysis and Computation (MAC-MIGS) funded by the UK Engineering and Physical Sciences Research Council (grant EP/S023291/1), Heriot-Watt University and the . We acknowledge funding from the UKRI Prosperity Partnerships grant APP43592: AI2 - Assurance and Insurance for Artificial Intelligence, which supported this work. CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 9 References [1] Anas Barakat, Pascal Bianchi, and Julien Lehmann, Analysis of a target-based actor-critic algorithm with linear function approximation, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics (Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, eds.), Proceedings of Machine Learning Research, vol. 151, PMLR, 28-30 Mar 2022, pp. 991-1040. [2] Jalaj Bhandari, Daniel Russo, and Raghav Singal, A finite time analysis of temporal difference learning with linear function approximation, Operations Research 69 (2021), no. 3, 950-973. [3] V. S. Borkar and V. R. Konda, The actor-critic algorithm as multi-time-scale stochastic approximation, Sadhana 22 (1997), no. 5, 525-543. [4] Haim Brezis, Functional analysis, sobolev spaces and partial differential equations, 1 ed., Universitext, Springer, New York, NY, 2011, Published: 02 November 2010, eBook ISBN: 978-0-387-70914-7, Series ISSN: 0172-5939, Series E-ISSN: 2191-6675. [5] Semih Cayci, Niao He, and R. Srikant, Convergence of entropy-regularized natural policy gradient with linear function approximation, SIAM Journal on Optimization 34 (2024), no. 3, 2729-2755. [6] Semih Cayci, Niao He, and R. Srikant, Finite-time analysis of entropy-regularized neural natural actor-critic algorithm, Transactions on Machine Learning Research (2024). [7] Paul Dupuis and Richard S. Ellis, A weak convergence approach to the theory of large deviations, Wiley Series in Probability and Statistics, John Wiley & Sons, Inc., 1997. [8] Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang, A two-timescale stochastic algorithm framework for bilevel optimization: Complexity analysis and application to actor-critic, SIAM Journal on Optimization 33 (2023), no. 1, 147-180. [9] Caleb Ju and Guanghui Lan, Policy optimization over general state and action spaces, 2024. [10] Sham M. Kakade and John Langford, Approximately optimal approximate reinforcement learning, International Conference on Machine Learning, 2002. [11] Bekzhan Kerimkulov, James-Michael Leahy, David Siska, Lukasz Szpruch, and Yufei Zhang, A fisher-rao gradient flow for entropy-regularised markov decision processes in polish spaces, 2024. [12] Bekzhan Kerimkulov, David ˇSiˇska, Lukasz Szpruch, and Yufei Zhang, Mirror descent for stochastic control problems with measure-valued controls, 2024. [13] Vijay Konda and John Tsitsiklis, Actor-critic algorithms, Advances in Neural Information Processing Systems (S. Solla, T. Leen, and K. M ̈uller, eds.), vol. 12, MIT Press, 1999. [14] Guanghui Lan, Policy mirror descent for reinforcement learning: Linear convergence, new sampling complexity, and generalized problem classes, Mathematical programming 198 (2023), no. 1, 1059-1106. [15] James-Michael Leahy, Bekzhan Kerimkulov, David Siska, and Lukasz Szpruch, Convergence of policy gradient for entropy regularized MDPs with neural network approximation in the mean-field regime, Proceedings of the 39th International Conference on Machine Learning (Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, eds.), Proceedings of Machine Learning Research, vol. 162, PMLR, 17-23 Jul 2022, pp. 12222-12252. [16] Max Qiushi Lin, Jincheng Mei, Matin Aghaei, Michael Lu, Bo Dai, Alekh Agarwal, Dale Schuurmans, Csaba Szepesvari, and Sharan Vaswani, Rethinking the global convergence of softmax policy gradient with linear function approximation, 2025. [17] L. Liu, M. B. Majka, and L. Szpruch, Polyak- Lojasiewicz inequality on the space of measures and convergence of mean-field birth-death processes, Applied Mathematics and Optimization 87 (2023), 48. [18] Jincheng Mei, Chenjun Xiao, Csaba Szepesvari, and Dale Schuurmans, On the global convergence rates of softmax policy gradient methods, International conference on machine learning, PMLR, 2020, pp. 6820-6829. [19] Shuang Qiu, Zhaoran Yang, Jianfeng Ye, and Zhuoran Wang, On finite-time convergence of actor-critic algorithm, IEEE Journal on Selected Areas in Information Theory 2 (2021), no. 2, 652-664. [20] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz, Trust region policy optimization, International conference on machine learning, PMLR, 2015, pp. 1889-1897. [21] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov, Proximal policy optimization algorithms, arXiv preprint (2017). [22] Richard S. Sutton, Learning to predict by the methods of temporal differences, Machine Learning 3 (1988), no. 1, 9-44. [23] Andrea Zanette, Martin J. Wainwright, and Emma Brunskill, Provable benefits of actor-critic methods for offline reinforcement learning, Proceedings of the 35th International Conference on Neural Information Processing Systems (Red Hook, NY, USA), NIPS '21, Curran Associates Inc., 2021. [24] Shangtong Zhang, Bo Liu, Hengshuai Yao, and Shimon Whiteson, Provably convergent two-timescale off-policy actorcritic with function approximation, Proceedings of the 37th International Conference on Machine Learning (Hal Daum ́e III and Aarti Singh, eds.), Proceedings of Machine Learning Research, vol. 119, PMLR, 13-18 Jul 2020, pp. 1120411213. [25] Yufeng Zhang, Siyu Chen, Zhuoran Yang, Michael Jordan, and Zhaoran Wang, Wasserstein flow meets replicator dynamics: A mean-field analysis of representation learning in actor-critic, Advances in Neural Information Processing Systems (A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, eds.), 2021. 10 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Appendix A. Technical details In this section, we present some calculations that will be used in the proofs of the main results. Lemma A.1 (Gronwall). Let λ(s) ≥0, a = a(s), b = b(s) and y = y(s) be locally integrable, real-valued functions defined on [0, T] such that y is also locally integrable and for almost all s ∈[0, T], y(s) + a(s) ≤b(s) + Z s 0 λ(t)y(t)dt. Then y(s) + a(s) ≤b(s) + Z s 0 λ(t) Z t 0 λ(r)(b(r) -a(r))dr dt, ∀s ∈[0, T]. Furthermore, if b is monotone increasing and a is non-negative, then y(s) + a(s) ≤b(s)e R s 0 λ(r)dr, ∀s ∈[0, T]. Lemma A.2. For some β ∈P(S × A), let dπ β ∈P(S × A) be the state-action occupancy measure. Moreover let κ(ds, da, ds′, da′) := P π(ds′, da′|s, a)dπ β(ds, da). Then for any π ∈Πμ and any integrable f : S × A →R, it holds that (41) Z S×A×S×A f(s, a)f(s′, a′)κ(ds, da, ds′, da′) ≤ 1 √γ Z S×A f(s, a)2dπ β(ds, da) Proof. By H ̈older's inequality, it holds that Z S×A×S×A f(s, a)f(s′, a′)κ(ds, da, ds′, da′) (42) ≤ Z S×A×S×A f(s, a)2κ(ds, da, ds′, da′) 1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′) 1 2 . (43) Moreover, observe that Z S×A×S×A f(s, a)2κ(ds, da, ds′, da′) = Z S×A Z S×A P π(ds′, da′|s, a) f(s, a)2dπ β(ds, da) (44) = Z S×A f(s, a)2dπ β(ds, da), (45) hence (42) becomes Z S×A×S×A f(s, a)2κ(ds, da, ds′, da′) 1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′) 1 2 (46) ≤ Z S×A f(s, a)2dπ β(ds, da) 1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′) 1 2 . (47) Now by the first part of Lemma 5.1, it holds that Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′) = Z S×A×S×A f(s′, a′)2P π(ds′, da′|s, a)dπ β(ds, da) (48) = Z S×A f(s, a)2dπ Jπβ(ds, da), (49) where Jπ : P(S × A) →P(S × A) is defined in (7). Then by the second part of Lemma 5.1 we have Z S×A f(s, a)2dπ β(ds, da) 1 2 Z S×A×S×A f(s′, a′)2κ(ds, da, ds′, da′) 1 2 (50) ≤ Z S×A f(s, a)2dπ β(ds, da) 1 2 Z S×A f(s, a)2dπ Jπβ(ds, da) 1 2 (51) ≤ 1 √γ Z S×A f(s, a)2dπ β(ds, da), (52) which concludes the proof. □ CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 11 Lemma A.3. For some θ0 ∈RN and π0 ∈Πμ, let {πt, θt}t≥0 be the trajectory of coupled actor-critic flow. Moreover let Kt = sups∈S KL(πt(·|s)|μ). There exists C1 > 0 such that for all t ≥0 it holds that sup s∈S |∂tπt(·|s)|M(A) ≤|At|Bb(S×A) , (53) |At|Bb(S×A) ≤2 |Qt|Bb(S×A) + 2τ ln dπt dμ Bb(S×A) , (54) |Qπt τ |Bb(S×A) ≤ 1 1 -γ |c|Bb(S×A) + τγKt , (55) ln dπt dμ Bb(S×A) ≤C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr. (56) Proof. The first claim sups∈S |∂tπt(·|s)|M(A) ≤|Aπt τ |Bb(S×A) follows trivially from the definition of the approximate Fisher Rao gradient flow defined in (22). Moreover, it holds that |At|Bb(S×A) = Qt + τ ln dπt dμ - Z A Qt(·, a) + τ ln dπt dμ (·, a) πt(da|·) Bb(S×A) (57) ≤2 Qt + τ ln dπt dμ Bb(S×A) (58) ≤2 |Qt|Bb(S×A) + 2τ ln dπt dμ Bb(S×A) (59) where we used the triangle inequality in the final inequality. Moreover, the state-action value function Qπt τ is a fixed point of the Bellman operator defined in (3). Hence, for all (s, a) ∈S × A, we have Qπt τ (s, a) = c(s, a) + γ Z S×A Qπt τ (s′, a′) P πt(ds′, da′|s, a) + τγ Z S KL(πt(·|s′)∥μ) P(ds′|s, a). (60) Taking absolute values and using the triangle inequality we have |Qπt τ (s, a)| ≤|c|Bb(S×A) + γ |Qπt τ |Bb(S×A) + τγ sup s′∈S KL(πt(·|s′)∥μ) (61) = |c|Bb(S×A) + γ |Qπt τ |Bb(S×A) + τγKt. (62) Taking the supremum over (s, a) ∈S × A on the left-hand side yields (63) |Qπt τ |Bb(S×A) ≤|c|Bb(S×A) + γ |Qπt τ |Bb(S×A) + τγKt. Rearranging gives (64) (1 -γ) |Qπt τ |Bb(S×A) ≤|c|Bb(S×A) + τγKt, which is the desired bound. Recall the approximate Fisher-Rao gradient flow for the policies {πt}t≥0, which for all t ≥0 and for all (s, a) ∈S × A is given by (65) ∂t ln dπt dμ (s, a) = - Qt(s, a) + τ ln dπt dμ (a, s) - Z A Qt(s, a′) + τ ln dπt dμ (a′, s) πt(da′|s) . Duhamel's principle yields for all t ≥0 that ln dπt dμ (s, a) = e-τt ln dπ0 dμ (a, s) + Z t 0 e-τ(t-r) Z A Qr(s, a′)πr(da′|s) -Qr(s, a) dr (66) + τ Z t 0 e-τ(t-r) KL(πr(·|s)|μ)dr. (67) 12 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Since π0 ∈Πμ, there exists C1 ≥1 such that ln dπ0 dμ Bb(S×A) ≤C1. Then by Assumption 4.2 we have that for all t ≥0, ln dπt dμ (s, a) ≤C1 + Z t 0 e-τ(t-r) Z A Qr(s, a′)πr(da′|s) -Qr(s, a) dr (68) + τ Z t 0 e-τ(t-r) KL(πr(·|s)∥μ) dr (69) ≤C1 + 2 Z t 0 e-τ(t-r) |θr| dr + τ Z t 0 e-τ(t-r)Kr dr (70) ≤C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr, (71) where in the last inequality we used R t 0 e-τ(t-r)dr ≤1 τ . Taking the supremum over (s, a) ∈S × A yields (72) ln dπt dμ Bb(S×A) ≤C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr, which is the desired bound. □ Lemma A.4. Let Assumption 4.3 hold. Then for all π ∈Πμ, it holds that L(·, π; dπ β) is λβ(1-γ)-strongly convex. Proof. For any ξ ∈P(S × A), let Σξ := R S×A φ(s, a)φ(s, a)⊤ξ(ds, da) ∈RN×N. Then by Lemma 5.1 and Assumption 4.3 it holds that Σdπ β ⪰(1 -γ)Σβ ⪰(1 -γ)λβI and thus L(·, π; dπ β) is λβ(1 -γ)-strongly convex. □ A.1. Proof of Lemma 4.1. Proof. Recall that Q(s, a) = ⟨θ, φ(s, a)⟩for some θ ∈RN and that for all π ∈Πμ, there exists θπ ∈RN such that Qπ(s, a) = ⟨θπ, φ(s, a)⟩by Assumption 4.1. Then by definition of the semi-gradient of the MSBE g : RN × P(A|S) →RN in (18), it holds that ⟨g(θ, π), θ -θπ⟩= Z S×A (Q(s, a) -TπQ(s, a)) φ(s, a)dπ β(da, ds), θ -θπ (73) = Z S×A (Q(s, a) -Qπ τ (s, a))φ(s, a)dπ β(da, ds), θ -θπ (74) + Z S×A (Qπ τ (s, a) -TπQ(s, a)φ(s, a)dπ β(da, ds), θ -θπ (75) = Z S×A (Q(s, a) -Qπ τ (s, a))φ(s, a)dπ β(da, ds), θ -θπ (76) -γ Z S×A×S×A (Q(s′, a′) -Qπ τ (s′, a′))φ(s, a)P π(ds′, da′|s, a)dπ β(ds, da), θ -θπ , (77) where we added and subtracted the true state-action value function Qπ τ ∈Bb(S × A) in the second equality and used the fact that it is a fixed point of the Bellman operator defined in (3). To ease notation, let ε(s, a) := Q(s, a) -Qπ τ (s, a). Multiplying both sides by -1 and using the associativity of CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 13 the inner product, we have -⟨g(θ, π), θ -θπ⟩ (78) = - Z S×A ε(s, a)φ(s, a)dπ β(da, ds), θ -θπ (79) + γ Z S×A ε(s′, a′)φ(s, a)P π(ds′, da′|s, a)dπ β(ds, da), θ -θπ (80) = - Z S×A ε(s, a) ⟨φ(s, a), θ -θπ⟩dπ β(da, ds) (81) + γ Z S×A ε(s′, a′) ⟨φ(s, a), θ -θπ⟩P π(ds′, da′|s, a)dπ β(ds, da) (82) = - Z S×A ε(s, a)2dπ β(da, ds) (83) + γ Z S×A×S×A ε(s, a)ε(s′, a′)P π(ds′, da′|s, a)dπ β(ds, da) (84) = I(1) + γI(2). (85) Now applying Lemma A.2 to I(2) we have I(2) := Z S×A×S×A ε(s, a)ε(s′, a′)P π(ds′, da′|s, a)dπ β(ds, da) (86) ≤ 1 √γ Z S×A ε(s, a)2dπ β(ds, da). (87) Thus it holds that -⟨g(θ, π), θ -θπ⟩≤I(1) + γI(2) (88) ≤-(1 -√γ) Z S×A ε(s, a)2dπ β(da, ds) (89) = -(1 -√γ) Z S×A (Q(s, a) -Qπ τ (s, a))2dπ β(da, ds) (90) = -(1 -√γ) ∇θL(θ, π; dπ β), θ -θπ , (91) where the last inequality follows from the Assumption 4.1 and the definition of Q(s, a) = ⟨θ, φ(s, a)⟩. □ A.2. Proof of Lemma 5.1. Proof. For any β ∈P(S × A), π ∈P(A|S) and E ∈B(S × A), it holds that dπ Jπβ(E) = (1 -γ) ∞ X n=0 γn(Jn π Jπβ)(E) (92) = Jπdπ β(E) (93) where we just used the associativity of the operator Jπ. Furthermore by letting m = n + 1 it holds that dπ Jπβ(E) = (1 -γ) ∞ X n=0 γnJn+1 π β(E) (94) = (1 -γ) ∞ X m=1 γm-1Jm π β(E) (95) = 1 -γ γ ∞ X m=1 γmJm π β(E) (96) = 1 γ (dπ β(E) -(1 -γ)β(E)). (97) Rearranging concludes the proof. □ 14 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH Appendix B. Proof of Regularity Results B.1. Proof of Lemma 5.2. Proof. Consider the following equation 1 2ηt d dt |θt|2 = 1 ηt d dtθt, θt (98) = -⟨g(θt, πt), θt⟩ (99) = - Z S×A (Qt(s, a) -T πtQt(s, a)) φ(s, a) dπt β (da, ds), θt (100) = - Z S×A Qt(s, a)φ(s, a) dπt β (da, ds), θt (101) + Z S×A T πtQt(s, a)φ(s, a) dπt β (da, ds), θt (102) := -J(1) t + J(2) t (103) where we used the θt dynamics from (21) in the second equality and the definition of the semi-gradient in the third equality. For any π ∈Πμ, let Σπ ∈RN×N be (104) Σπ = Z S×A φ(s, a)φ(s, a)⊤dπ β(da, ds). Then by definition we have that Qt(s, a) = ⟨θt, φ(s, a)⟩, hence for J(1) t we have J(1) t = Z S×A Qt(s, a)φ(s, a) dπt β (da, ds), θt (105) = Z S×A ⟨θt, φ(s, a)⟩φ(s, a)dπt β (da, ds), θt (106) = θt, Z S×A φ(s, a)φ(s, a)⊤dπt β (da, ds) θt (107) = ⟨θt, Σπtθt⟩ (108) Now dealing with J(1) t , expanding the Bellman operator defined in (3) we have J(2) t = Z S×A TπtQt(s, a)φ(s, a) dπt β (da, ds), θt (109) = Z S×A c(s, a)φ(s, a)dπt β (da, ds), θt (110) + γ Z S×A ⟨θt, φ(s′, a′)⟩φ(s, a)P πt(ds′, da′|s, a)dπt β (da, ds), θt (111) + τγ Z S×A Z S KL(πt(·|s′), μ)P(ds′|s, a)φ(s, a)dπt β (da, ds) , θt (112) ≤|c|Bb(S×A)|θt| + γI(1) t + τγI(2) t (113) where we defined I(1) t = Z S×A ⟨θt, φ(s′, a′)⟩φ(s, a)P πt(ds′, da′|s, a)dπt β (da, ds), θt , I(2) t = Z S×A Z S KL(πt(·|s′), μ)P(ds′|s, a)φ(s, a)dπ β(da, ds) , θt . Moreover, to ease notation let Kt := sup s∈S KL(πt(·|s)|μ) CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 15 and temporarily let κt(ds, da, ds′, da′) := P πt(ds′, da′|s, a)dπt β (da, ds). Now focusing on I(1) t , it holds that I(1) t = Z S×A×S×A ⟨θt, φ(s′, a′)⟩φ(s, a)κt(da′, ds′, da, ds), θt (114) = Z S×A×S×A ⟨θt, φ(s, a)⟩⟨θt, φ(s′, a′)⟩κt(ds′, da′, ds, da). (115) Now using Lemma A.2 with f = ⟨θ, φ(·, ·)⟩we have I(1) t ≤ 1 √γ Z S×A ⟨θt, φ(s, a)⟩2 dπt β (ds, da) 1 2 Z S×A ⟨θt, φ(s, a)⟩2 dπt β (ds, da) 1 2 (116) = 1 √γ Z S×A ⟨θt, φ(s, a)⟩2 dπt β (ds, da) (117) = 1 √γ ⟨θt, Σπtθt⟩. (118) Thus all together it holds that (119) γI(1) t ≤√γ ⟨θt, Σπtθt⟩. Now focusing on I(2) t , we have I(2) t = Z S×A Z S KL(πt(·|s′), μ)P(ds′|s, a) φ(s, a)dπt β (da, ds), θt (120) ≤Kt Z S×A φ(s, a)dπt β (ds, da) |θt| (121) ≤Kt|θt| (122) where we used Assumption 4.2 in the final inequality. Hence along with (108), (98) becomes 1 2ηt d dt|θt|2 ≤-J(1) t + J(2) t (123) ≤-⟨θt, Σπtθt⟩+ |c|Bb(S×A)|θt| + γI(1) t + τγI(2) t (124) ≤-⟨θt, Σπtθt⟩+ √γ ⟨θt, Σπtθt⟩+ |c|Bb(S×A)|θt| + τγKt|θt| (125) = -(1 -√γ) ⟨θt, Σπtθt⟩+ |c|Bb(S×A) + τγKt |θt|. (126) Observe that by (26) and Assumption 4.2, Σπ ∈RN×N is positive definite for all π ∈P(A|S), hence it holds that (127) ⟨θt, Σπtθt⟩≥(1 -γ)λβ |θt|2 . Therefore (123) becomes (128) 1 2ηt d dt|θt|2 ≤-(1 -√γ)(1 -γ)λβ |θt|2 + (|c|Bb(S×A) + τγKt)|θt| Let Γ := λβ(1 -γ)(1 -√γ). By Young's inequality, there exists ε > 0 such that 1 2ηt d dt|θt|2 ≤-Γ|θt|2 + ε 2|θt|2 + (|c|Bb(S×A) + τγKt)2 2ε (129) ≤-Γ|θt|2 + ε 2|θt|2 + |c|2 Bb(S×A) + τ 2γ2K2 t ε , (130) where we used the identity (a + b)2 ≤2a2 + 2b2. Choosing ε = Γ we arrive at (131) 1 2ηt d dt|θt|2 ≤-Γ 2 |θt|2 + τ 2γ2K2 t Γ + |c|2 Bb(S×A) Γ which concludes the proof. □ 16 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH B.2. Proof of Theorem 5.1. Proof. By Lemma 5.2, we have that for all r ≥0 (132) 1 2ηr d dr|θr|2 ≤-Γ 2 |θr|2 + τ 2γ2K2 r Γ + |c|2 Bb(S×A) Γ . Rearranging, it holds that for all t ≥0 |θr|2 ≤-1 Γηr d dr|θr|2 + 2|c|2 Bb(S×A) + 2τ 2γ2K2 r Γ2 . (133) Multiplying both sides by e-τ(t-r) and integrating over r from 0 to t we have that for all t ≥0 Z t 0 e-τ(t-r)|θr|2dr ≤-1 Γ Z t 0 e-τ(t-r) 1 ηr d dr|θr|2dr + 2|c|2 Bb(S×A) Γ2 Z t 0 e-τ(t-r)dr (134) + 2τ 2γ2 Γ2 Z t 0 e-τ(t-r)K2 rdr (135) ≤-1 Γ Z t 0 e-τ(t-r) 1 ηr d dr|θr|2dr + 2|c|2 Bb(S×A) Γ2τ + 2τ 2γ2 Γ2 Z t 0 e-τ(t-r)K2 rdr, (136) where we used that R t 0 e-τ(t-r)dr ≤1 τ . Integrating the first term by parts, we have - Z t 0 e-τ(t-r) 1 ηr d dr|θr|2dr = -|θt|2 ηt + e-τt |θ0|2 η0 + τ Z t 0 |θr|2 e-τ(t-r) ηr dr (137) - Z t 0 |θr|2 e-τ(t-r) d drηr η2r dr. (138) Since by definition we have that for all t ≥0, ηt ≥1 and d dtηt ≥0 it holds that (139) Z t 0 |θr|2 e-τ(t-r) d drηr η2r dr ≥0. Hence dropping the negative terms on the right hand side of (137) and using that ηt ≥η0 for all t ≥0, we have -1 Γ Z t 0 e-τ(t-r) 1 ηr d dr|θr|2dr ≤e-τt |θ0|2 Γη0 + τ Γη0 Z t 0 e-τ(t-r)|θr|2dr. (140) Substituting this back into (134), for all t ≥0 we have that Z t 0 e-τ(t-r)|θr|2dr ≤e-τt |θ0|2 Γη0 + τ Γη0 Z t 0 e-τ(t-r)|θr|2dr (141) + 2|c|2 Bb(S×A) Γ2τ + 2τ 2γ2 Γ2 Z t 0 e-τ(t-r)K2 rdr. (142) Grouping like terms we have 1 - τ Γη0 Z t 0 e-τ(t-r)|θr|2dr ≤e-τt |θ0|2 Γη0 + 2|c|2 Bb(S×A) Γ2τ + 2τ 2γ2 Γ2 Z t 0 e-τ(t-r)K2 rdr. (143) Recall that we have η0 > τ Γ to ensure that 1 - τ Γη0 > 0. Dividing through by 1 - τ Γη0 gives for all t ≥0 that Z t 0 e-τ(t-r)|θr|2dr ≤σ1 + σ2 Z t 0 e-τ(t-r)K2 rdr (144) where we've set σ1 := |θ0|2 Γη0 1 - τ Γη0 + 2|c|2 Bb(S×A) Γ2τ 1 - τ Γη0 , σ2 := 2τ 2γ2 Γ2 1 - τ Γη0 . Recall the approximate Fisher Rao gradient flow for the policies {πt}t≥0, which for all t ≥0 and for all s ∈S, a ∈A is (145) ∂t ln dπt dμ (s, a) = - Qt(s, a) + τ ln dπt dμ (a, s) - Z A Qt(s, a) + τ ln dπt dμ (a, s) πt(da|s) CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 17 Duhamel's principle yields for all t ≥0 that ln dπt dμ (s, a) = e-τt ln dπ0 dμ (a, s) + Z t 0 e-τ(t-r) Z A Qr(s, a)πr(da|s) -Qr(s, a) dr (146) + τ Z t 0 e-τ(t-r) KL(πr(·|s)|μ)dr (147) Observe that since π0 ∈Πμ, there exists C1 ≥1 such that ln dπt dμ Bb(S×A) ≤C1. Assumption 4.2 gives that for all t ≥0 ln dπt dμ (s, a) ≤C1 + 2 Z t 0 e-τ(t-r)|θr|dr + τ Z t 0 e-τ(t-r) KL(πr(·|s)|μ)dr (148) ≤C1 + 2 Z t 0 e-τ(t-r)|θr|dr + τ Z t 0 e-τ(t-r)Krdr (149) Integrating over the actions with respect to πt(·|s) ∈P(A) gives for all t ≥0 that (150) KL(πt(·|s)|μ) ≤C1 + 2 Z t 0 e-τ(t-r)|θr|dr + τ Z t 0 e-τ(t-r)Krdr where we again use that Kr = sups∈S KL(πr(·|s)|μ). Following from the techniques in [17], observe that from (66) and Assumption 4.2 we similarly get for all t ≥0 that (151) ln dμ dπt (a, s) = -ln dπt dμ (s, a) ≤C1 + 2 Z t 0 e-τ(t-r)|θr|dr -τ Z t 0 e-τ(t-r)Krdr. Now integrating over the actions with respect to the reference measure μ ∈P(A) we have (152) KL(μ|πt(·|s)) ≤C1 + 2 Z t 0 e-τ(t-r)|θr|dr -τ Z t 0 e-τ(t-r)Krdr Moreover, using the non-negativity of the KL divergence, it holds for all t ≥0 that (153) KL(πt(·|s)|μ) ≤KL(πt(·|s)|μ) + KL(μ|πt(·|s)) ≤2C1 + 4 Z t 0 e-τ(t-r)|θr|dr Since this holds for any s ∈S, it holds for all t ≥0 that (154) Kt ≤2C1 + 4 Z t 0 e-τ(t-r)|θr|dr Now squaring both sides and using the H ̈older's inequality, we have K2 t ≤ 2C1 + 4 Z t 0 e-τ(t-r)|θr|dr 2 (155) ≤8(C1)2 + 32 Z t 0 e-τ(t-r)|θr|dr 2 (156) = 8(C1)2 + 32 Z t 0 e-τ 2 (t-r)e-τ 2 (t-r)|θr|dr 2 (157) ≤8(C1)2 + 32 Z t 0 e-τ(t-r)dr Z t 0 e-τ(t-r)|θr|2dr (158) ≤8(C1)2 + 32 τ Z t 0 e-τ(t-r)|θr|2dr, (159) where we again used R t 0 e-τ(t-r)dr ≤1 τ . We can now substitute (144) into (159) to arrive at K2 t ≤8(C1)2 + 32 τ σ1 + 32 τ σ2 Z t 0 e-τ(t-r)K2 rdr (160) := a1 + a2 Z t 0 e-τ(t-r)K2 rdr (161) with a1 = 8(C1)2 + 32 τ σ1 and a2 = 32σ2 τ . □ 18 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH B.3. Proof of Corollary 5.1. Proof. By Theorem 5.1 it holds that (162) K2 t ≤a1 + a2 Z t 0 e-τ(t-r)K2 rdr. Observe that by multiplying through by eτt, we can rewrite this as (163) eτtK2 t ≤eτta1 + a2 Z t 0 eτrK2 rdr. Hence after defining g(t) = eτtK2 t and applying Gronwall's inequality (Lemma A.1), for all γ ∈(0, 1) it holds for all t ≥0 that (164) K2 t ≤a1ea2t. □ B.4. Proof of Corollary 5.2. Proof. By Corollary 5.1 and Lemma 5.2, for all γ ∈(0, 1) it holds that 1 2 d dt |θt|2 ≤-Γ 2 ηt|θt|2 + btηt (165) such that (166) bt = 2|c|2 Bb(S×A) + 2τ 2γ2a1ea2t Γ2 ! . Recall that there exists α > 0 such that d dtηt ≤αηt, another application of Gronwall's Lemma then concludes the proof. □ B.5. Proof of Corollary 5.3. Proof. By Theorem 5.1 we have that (167) K2 t ≤a1 + a2 Z t 0 e-τ(t-r)K2 rdr. Taking the supremum over [0, t] on the right hand side, we have (168) K2 t ≤a1 + a2 τ sup r∈[0,t] K2 r. Since this holds for all t ≥0, we have (169) sup r∈[0,t] K2 r ≤a1 + a2 τ sup r∈[0,t] K2 r. Now forcing 1 -a2 τ > 0, which is equivalent to the condition 64γ2 Γ2 -Γτ η0 0 it holds that for all t ≥0, K2 t ≤ a1τ τ -a2 . Hence by Lemma 5.2 we have 1 2 d dt |θt|2 ≤-ηt Γ 2 |θt|2 + ηt   2|c|2 Bb(S×A) + 2τ 2γ2 a1τ τ-a2 Γ2  . (171) The uniform boundedness in time of |θt| then follows by Gronwall's Lemma (Lemma A.1). □ CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 19 Appendix C. Proof of Convergence Results C.1. Proof of Lemma 6.1. Proof. By the definition of the state-action value function (2) it holds that d dtQπt τ (s, a) = lim h→0 Qπt+h τ (s, a) -Qπt τ (s, a) h (172) = γ Z S d dtV πt τ (s′)P(ds′|s, a). (173) Now observe that by [11][Proof of Proposition 2.6], we have (174) d dtV πt τ (s) = 1 1 -γ Z S×A Aπt τ (s, a)∂tπt(da|s′)dπt(ds′|s). Thus we have d dtQπt τ (s, a) = γ 1 -γ Z S Z S×A Aπt τ (s′′, a′′)∂tπt(da′′|s′′)dπt(ds′′|s′) P(ds′|s, a). (175) □ C.2. Proof of Theorem 6.1. Proof. Recall the performance difference Lemma (Lemma 1.1): for all ρ ∈P(S) and π, π′ ∈Πμ, V π τ (ρ) -V π′ τ (ρ) (176) = 1 1 -γ Z S Z A Qπ′ τ (s, a) + τ ln dπ′ dμ (a, s) (π -π′)(da|s) + τ KL(π(·|s)|π′(·|s)) dπ ρ(ds) . (177) Now let π = π∗and π′ = πt and multiply both sides by -1 we have V πt τ (ρ) -V π∗ τ (ρ) = -1 1 -γ Z S Z A Qπt(s, a) + τ ln dπt dμ (a, s) (π∗-πt)(da|s) (178) + τ KL(π∗(·|s)|πt(·|s)) ! dπ∗ ρ (ds). (179) Recall the approximate Fisher Rao dynamics, which we write as (180) ∂t ln dπt dμ (s, a) + Qt(s, a) + τ ln dπt dμ (a, s) - Z A Qt(s, a′) + τ ln dπt dμ (a′, s) πt(da′|s) = 0. Observe that since the normalisation constant (enforcing the conservation of mass along the flow) R A Qt(s, a) + τ ln dπt dμ (a, s) πt(da|s) is independent of a ∈A, it holds that Z A Z A Qt(s, a′) + τ ln dπt dμ (a′, s) πt(da′|s) (π∗-πt)(da|s) = 0. Hence adding 0 in the form of (180) into (178) it holds that for all t ≥0 V πt τ (ρ) -V π∗ τ (ρ) = 1 1 -γ Z S×A ∂t ln dπt dμ (a, s)(π∗-πt)(da|s)dπ∗ ρ (ds) (181) + Z S×A (Qt(s, a) -Qπt τ (s, a))(π∗-πt)(da|s)dπ∗ ρ (ds) -τ Z S KL(π∗(·|s)|πt(·|s)dπ∗ ρ (ds) ! . (182) By [12, Lemma 3.8] and Corollary 5.1, for any fixed ν ∈Πμ, the map t →KL(ν|πt) is differentiable. Hence we have Z A ∂t ln dπt dμ (s, a)(π∗-πt)(da|s) = Z A ∂t ln dπt dμ (s, a)π∗(da|s) - Z A ∂t ln dπt dμ (s, a)πt(da|s) (183) = Z A ∂t ln dπt dμ (s, a)π∗(da|s) (184) = -d dt KL(π∗(·|s)|πt(·|s)), (185) 20 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH where we used the conservation of mass of the policy dynamics in the second equality. Substituting this into (181) we have V πt τ (ρ) -V π∗ τ (ρ) = 1 1 -γ -d dt Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) (186) + Z S×A (Qt(s, a) -Qπt τ (s, a))(π∗-πt)(da|s)dπ∗ ρ (ds) -τ Z S KL(π∗(·|s)|πt(·|s)dπ∗ ρ (ds) ! . (187) Focusing on the second term, we have Z S×A (Qt(s, a) -Qπt τ (s, a))(π∗-πt)(da|s)dπ∗ ρ (ds) (188) ≤|Qt(s, a) -Qπt τ (s, a)|Bb(S×A) Z S TV(π∗(·|s), πt(·|s))dπ∗ ρ (ds) (189) ≤ 1 √ 2|θt -θπt| Z S KL(π∗(·|s)|πt(·|s)) 1 2 dπ∗ ρ (ds) (190) ≤ 1 √ 2|θt -θπt| Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) 1 2 , (191) where we used Pinsker's Inequality in the second inequality and H ̈older's inequality in the final inequality. Now applying Young's inequality, there exists ε > 0 such that (192) |θt -θπt| Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) 1 2 ≤1 2ε|θt -θπt|2 + ε 2 Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds). Substituting this back into (186) and choosing ε = √ 2τ we have V πt τ (ρ) -V π∗ τ (ρ) = 1 1 -γ -d dt Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) (193) -τ 2 Z S KL(π∗(·|s)|πt(·|s)dπ∗ ρ (ds) + 1 4τ |θt -θπt|2 ! . (194) Rearranging, we arrive at d dt Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) ≤-τ 2 Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) (195) -(1 -γ) V πt τ (ρ) -V π∗ τ (ρ) + 1 4τ |θt -θπt|2. (196) Applying Duhamel's principle yields Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) ≤e-τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (197) -(1 -γ) Z t 0 e-τ 2 (t-r)(V πr τ (ρ) -V π∗ τ (ρ))dr + 1 2τ Z t 0 e-τ 2 (t-r)|θr -θπr|2dr. (198) Now using that R t 0 e-τ 2 (t-r)dr = 2(1-e-τ 2 ) τ , we have Z S KL(π∗(·|s)|πt(·|s))dπ∗ ρ (ds) ≤e-τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (199) -2(1 -γ)(1 -e-τ 2 ) τ min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) + 1 2τ Z t 0 e-τ 2 (t-r)|θr -θπr|2dr. (200) Rearranging, we have min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) ≤ τ 2(1 -γ)(1 -e-τ 2 ) e-τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (201) + 1 2τ Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ! . (202) which concludes the proof. □ CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 21 C.3. Proof of Theorem 6.2. Proof. Using the chain rule and the critic dynamics in (21), we have that for all r ≥0 1 2ηr d dr|θr -θπr|2 = 1 ηr dθr dr , θr -θπr - dθπr dr , θr -θπr (203) = -⟨g(θr, πr), θr -θπr⟩-1 ηr dθπr dr , θr -θπr (204) Let Γ = λβ(1 -γ)(1 -√γ). Using Lemma 4.1 and the λβ-strong convexity of L(·, π; β) and recalling that L(θπr, πr) = 0 for all r ≥0, it holds for all r ≥0 that 1 2ηt d dt|θt -θπt|2 = -⟨g(θt, πt), θt -θπt⟩-1 ηt dθπt dt , θt -θπt (205) ≤-(1 -γ)(1 -√γ) ⟨∇θL(θt, πt; β), θt -θπt⟩-1 ηt dθπt dt , θt -θπt (206) ≤-(1 -γ)(1 -√γ)L(θt, πt; β) -Γ 2 |θt -θπt|2 -1 ηt dθπt dt , θt -θπt (207) ≤-(1 -γ)(1 -√γ)L(θt, πt; β) -Γ 2 |θt -θπt|2 + 1 2ηt dθπt dt 2 + |θt -θπt|2 ! (208) = -(1 -γ)(1 -√γ)L(θt, πt; β) - Γ 2 -1 2ηt |θt -θπt|2 + 1 2ηt dθπt dt 2 , (209) where we used H ̈older's and Young's inequalities in (208). Since η0 > 1 Γ and ηt is a non-decreasing function, it holds that ηt > 1 Γ for all t ≥0. Hence Γ 2 - 1 2ηt > 0 and thus we can drop the second term. Moreover the λβ-strong convexity of L(·, π; β) along with L(θπ, π; β) = 0 and ∇θL(θπ, π) = 0 for all π ∈Πμ gives that |θt -θπt|2 ≤2 λβ L(θt, πt; β). Hence for all r ≥0 we arrive at 1 2ηr d dr|θr -θπr|2 ≤-Γ 2 |θr -θπr|2 + 1 2ηr dθπr dr 2 . (210) Rearranging, multiplying by e-τ(t-r) and integrating over r from 0 to t, it holds for all t ≥0 that Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤-1 Γ Z t 0 e-τ 2 (t-r) 1 ηr d dr|θr -θπr|2dr + 1 Γ Z t 0 e-τ 2 (t-r) 1 ηr dθπr dt 2 dr. (211) Integrating the first term by parts (identically to (137) from the proof of Theorem 5.1), we have Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤1 Γ -|θt -θπt|2 ηt + e-τ 2 t |θ0 -θπ0|2 η0 (212) + τ 2 Z t 0 e-τ 2 (t-r) 1 ηr |θr -θπr|2dr - Z t 0 |θr -θπr|2 e-τ 2 (t-r) d drηr η2r dr (213) + Z t 0 e-τ 2 (t-r) 1 ηr dθπr dr 2 dr ! . (214) Since for all t ≥0 it holds that ηt ≥1 and d dtηt ≥0, we have that Z t 0 |θr -θπr|2 e-τ 2 (t-r) d drηr η2r dr ≥0. Thus after dropping all negative terms and using that ηt ≥η0 for all t ≥0, we have (215) 1 - τ 2Γη0 Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤e-τ 2 |θ0 -θπ0|2 Γη0 + Z t 0 e-τ 2 (t-r) 1 ηr dθπr dr 2 dr. Since η0 > 1 2Γ and τ 0 and hence it holds that Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤e-τ 2 |θ0 -θπ0|2 Γη0 1 - τ 2Γη0 + 1 1 - τ 2Γη0 Z t 0 e-τ 2 (t-r) 1 ηr dθπr dr 2 dr, (216) 22 DENIS ZORBA, DAVID ˇSIˇSKA, AND LUKASZ SZPRUCH which concludes the proof. □ C.4. Proof of Theorem 6.3. Proof. By Theorem 6.2, we have Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤e-τ 2 |θ0 -θπ0|2 Γη0 1 - τ 2Γη0 + 1 1 - τ 2Γη0 Z t 0 e-τ 2 (t-r) 1 ηr dθπr dr 2 dr. (217) Hence it remains to characterise the growth of the final integral. Observe that for all π ∈P(A|S), θπ ∈RN satisfies the least-squares optimality condition given by (218) θπ = arg min θ L(θ, π; β) = Z S×A φ(s, a)φ(s, a)⊤β(da, ds) -1 Z S×A φ(s, a)Qπ τ (s, a) β(ds, da) . Setting π = πt and differentiating time we arrive at (219) dθπt dt = Z S×A φ(s, a)φ(s, a)⊤β(da, ds) -1 Z S×A φ(s, a) d dtQπt(s, a) β(ds, da) . Hence by Lemma 6.1, Assumption 4.2 and Assumption 4.3, for all t ≥0 it holds that dθπt dt = Z S×A φ(s, a)φ(s, a)⊤β(da, ds) -1 Z S×A φ(s, a) d dtQπt τ (s, a) β(ds, da) (220) ≤ Z S×A φ(s, a)φ(s, a)⊤β(da, ds) -1 op d dtQπt τ Bb(S×A) (221) = 1 λβ d dtQπt Bb(S×A) (222) = γ λβ(1 -γ) Z S Z S×A Aπt τ (s′′, a′′)∂tπt(da′′|s′′) dπt(ds′′|s′) P(ds′|·, ·) Bb(S×A) (223) ≤ γ λβ(1 -γ) |Aπt τ |Bb(S×A) sup s∈S |∂tπt(·|s)|M(A) . (224) Now using Lemma A.3, it holds that |Aπt τ |Bb(S×A) sup s∈S |∂tπt(·|s)|M(A) ≤|Aπt τ |Bb(S×A) |At|Bb(S×A) (225) ≤ 2 |Qπt τ |Bb(S×A) + 2τ ln dπt dμ Bb(S×A) ! 2 |Qt|Bb(S×A) + 2τ ln dπt dμ Bb(S×A) ! . (226) Hence by Corollaries 5.1 and 5.2 and Lemma A.3, there exists α1, α2 > 0 such that dθπt dt 2 ≤α1eα2t. Thus Theorem 6.2 becomes Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤e-τ 2 |θ0 -θπ0|2 Γη0 1 - τ 2Γη0 + α1 1 - τ 2Γη0 Z t 0 e-τ 2 (t-r) eα2r ηr dr. (227) Let ηt = η0ek1t for any k1 > τ 2 + α2. Then observe that Z t 0 e-τ 2 (t-r) eα2r ηr dr = 1 η0 e-τ 2 t Z t 0 e( τ 2 +α2-k1)rdr (228) ≤1 η0 e-τ 2 t e( τ 2 +α2-k1)t -1 τ 2 + α2 -k1 ! (229) ≤ e-τ 2 t η0 τ 2 + α2 -k1 , (230) hence all together it holds that Z t 0 e-τ 2 (t-r)|θr -θπr|2dr ≤e-τ 2 |θ0 -θπ0|2 Γη0 1 - τ 2Γη0 + e-τ 2 t α1 η0 - τ 2Γ τ 2 + α2 -k1 . (231) CONVERGENCE OF ACTOR-CRITIC FOR ENTROPY REGULARISED MDPS IN GENERAL ACTION SPACES 23 Substituting this into the result from Theorem 6.2 concludes the proof. □ C.5. Proof of Theorem 6.3. Following completely identically to the proof of Theorem 6.3, we have dθπt dt ≤ γ λβ(1 -γ) |Aπt τ |Bb(S×A) sup s∈S |∂tπt(·|s)|M(A) (232) ≤ 4 (1 -γ)2 |c|Bb(S×A) + Kt 2 + 4τ C1 + 2 τ sup r∈[0,t] |θr| + sup r∈[0,t] Kr !2 . (233) Then by Corollaries 5.3 and 5.4, there exists b2 > 0 such that dθπt dt 2 ≤d1. Hence by Theorem 6.2 we have min r∈[0,t] V πr τ (ρ) -V π∗ τ (ρ) ≤ τ 2(1 -γ)(1 -e-τ 2 ) e-τ 2 t Z S KL(π∗(·|s)|π0(·|s))dπ∗ ρ (ds) (234) + d1 Z t 0 e-τ 2 (t-r) 1 ηr dr ! . (235) : : :
2510.14901
Reasoning with Sampling: Your Base Model is Smarter Than You Think Aayush Karan1, Yilun Du1 1Harvard University Œ Website ‡ Code Abstract Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by posttraining large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to disentangling truly novel be- haviors that emerge during RL but are not present in the base models. In our work, we approach this question from a different angle, instead asking whether compa- rable reasoning capabilites can be elicited from base models at inference time by pure sampling, without any additional training. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we pro- pose a simple iterative sampling algorithm leveraging the base models’ own like- lihoods. Over different base models, we show that our algorithm offers substantial boosts in reasoning that nearly match and even outperform those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Moreover, our sampler avoids the collapse in diversity over multiple samples that is characteristic of RL-posttraining. Crucially, our method does not require train- ing, curated datasets, or a verifier, suggesting broad applicability beyond easily verifiable domains. 1 Introduction Reinforcement learning (RL) has become the dominant paradigm for enhancing the reasoning ca- pabilities of large language models (LLMs) [Guo et al., 2025, Hu et al., 2025]. Equipped with a reward signal that is typically automatically verifiable, popular RL techniques have been success- fully applied to posttrain frontier models, leading to sizeable performance gains in domains like math, coding, and science [Hendrycks et al., 2021, Li et al., 2022, Rein et al., 2024]. Despite the widespread empirical success of RL for LLMs, a large body of literature has centered around the following question: are the capabilities that emerge during RL-posttraining fundamen- tally novel behaviors that are not present in the base models? This is the question of distribution sharpening [He et al., 2025, Shao et al., 2025, Yue et al., 2025]: that is, whether the posttrained distribution is simply a “sharper” version of the base model distribution, instead of placing mass on reasoning traces the base model is unlikely to generate. Several works point towards the difficulty in learning new capabilities with RL-posttraining. He et al. [2025], Song et al. [2025] compare the pass@k (multi-shot) scores of base models with post- trained models, finding that for large k, base models actually outperform while the latter suffer from degraded generation diversity. In such cases, RL appears to redistribute pass@k performance to single-shot performance at the expense of multi-shot reasoning. Yue et al. [2025] also notes that the reasoning traces post-RL are tightly concentrated at high likelihoods/confidences under the base model, seemingly drawing from existing high-likelihood capabilities. We illustrate this point in our own experiments in Figure 4. Regardless, the advantage of RL-posttraining for single-shot reasoning has remained, as of yet, undeniable. Preprint. Under review. arXiv:2510.14901v1 [cs.LG] 16 Oct 2025 MATH500 HumanEval GPQA 10 20 30 40 50 60 70 80 Accuracy (%) 49.6 32.9 27.8 78.5 53.7 39.9 74.8 57.3 38.9 Reasoning AlpacaEval2.0 0.5 1.0 1.5 2.0 2.5 3.0 Score 1.61 2.38 2.88 General Base (Qwen2.5-Math-7B) GRPO (RL) Ours (Training-free) Figure 1: Our sampling algorithm can match and outperform RL-posttraining. Left: we compare our sam- pling algorithm (ours) against the base model (base) and RL-posttraining (GRPO) on three verifiable reasoning tasks (MATH500, HumanEval, GPQA). Right: we compare them on an unverifiable general task (AlpacaE- val2.0). Our algorithm achieves comparable performance to GRPO within the posttraining domain (MATH500) but can outperform on out-of-domain tasks such as HumanEval and AlpacaEval. In this paper, we present a surprising result: sampling directly from the base model can achieve single-shot reasoning capabilites on par with those from RL. We propose a sampling algorithm for base models that leverages additional compute at inference time, achieving single-shot performance that nearly matches RL-posttraining on in-domain reason- ing tasks and can even outperform on out-of-domain reasoning tasks. Furthermore, we observe that generation diversity does not degrade with our sampler; in fact, our pass@k (multi-shot) perfor- mance strongly outperforms RL. We benchmark specifically against Group Relative Policy Opti- mization (GRPO), which is the standard RL algorithm for enhancing LLM reasoning [Shao et al., 2024]. Crucially, our algorithm is training-free, dataset-free, and verifier-free, avoiding some of the inher- ent weaknesses of RL methods including extensive hyperparameter sweeps to avoid training insta- bilities, the need to curate a diverse and expansive posttraining dataset, and the lack of guaranteed access to a ground truth verifier/reward signal [Prabhudesai et al., 2025]. Our contributions can be summarized as follows: i) We introduce the power distribution as a useful sampling target for reasoning tasks. Since it can be explicitly specified with a base LLM, no additional training is required. ii) We further introduce an approximate sampling algorithm for the power distribution using a Markov chain Monte Carlo (MCMC) algorithm that iteratively resamples token subsequences according to their base model likelihoods. iii) We empirically demonstrate the effectiveness of our algorithm over a range of models (Qwen2.5- Math-7B, Qwen2.5-7B, Phi-3.5-mini-instruct) and reasoning tasks (MATH500, HumanEval, GPQA, AlpacaEval 2.0). Our results show that sampling directly from the base model can achieve results on par with GRPO. In fact, for some out-of-domain tasks, our algorithm consistently out- performs the RL baseline. Moreover, over multiple samples, we avoid the collapse in diversity afflicting RL-posttraining, achieving the best of both worlds in terms of single-to-few-shot rea- soning capabilities as well as sample diversity. Our results collectively illustrate that existing base models are much more capable at single-shot reasoning than current sampling methods reveal. 2 2 Related Works Reinforcement learning for LLMs. RL has been instrumental in posttraining LLMs. Early on, RL with human feedback (RLHF) [Ouyang et al., 2022] was developed as a technique to align LLMs with human preferences using a trained reward model. Recently, RL with verifiable rewards (RLVR) has emerged as a powerful new posttraining technique, where many works [Guo et al., 2025, Lam- bert et al., 2024, Hu et al., 2025, Zeng et al., 2025] discovered that a simple, end-of-generation reward given by an automated verifier could substantially enhance performance on difficult reason- ing tasks in mathematics and coding. The Group Relative Policy Optimization (GRPO) algorithm was at the center of these advances [Shao et al., 2024]. Building off of this success, many subsequent works have examined using reward signals derived from internal signals such as self-entropy [Zhao et al., 2025], confidence [Prabhudesai et al., 2025], and even random rewards [Shao et al., 2025]. Similar to these works, our paper examines base model likelihoods as a mechanism for improving reasoning performance, but crucially, our technique is training-free. Autoregressive MCMC sampling with LLMs. Prior works have explored integrating classic MCMC techniques with autoregressive sampling. Many settings including red-teaming, prompt- engineering, and personalized generation can be framed as targeting sampling from the base LLM distribution but tilted towards an external reward function. Zhao et al. [2024] proposes learning intermediate value functions that are used in a Sequential Monte Carlo (SMC) framework [Chopin, 2004], where multiple candidate sequences are maintained and updated according to their expected future reward. Similarly, Faria et al. [2024] proposes a Metropolis-Hastings (MH) algorithm, which instead of maintaining multiple candidates performs iterative resampling, again updating according to expected reward. Methodologically, our sampling algorithm is most similar to this latter work, but the crucial difference is that our target sampling distribution is completely specified by the base LLM, avoiding the need for an external reward. Annealed sampling for diffusion. In the statistical physics and Monte Carlo literature, sampling from pα is known as sampling from an annealed, or tempered, distribution [Neal, 1998] and has inspired a new wave of interest within the diffusion community. Indeed, in traditional MCMC sampling, annealing is used as a way to avoid mode-collapse during sampling and more accurately sample from complex multimodal distributions [Łatuszy´nski et al., 2025]. This has re-emerged as inference-time sampling methods for diffusion that aim to steer a pretrained model towards “tilted distributions” [Du et al., 2023, Kim et al., 2025, Karan et al., 2025, Wang et al., 2025, Kong et al., 2025, Zhang et al., 2025]. Where traditional RL techniques exhibit mode collapse, applications in the physical sciences [Sambridge, 2014] require multimodal sampling. To this end, works such as Du et al. [2023], Wang et al. [2025], Kim et al. [2025] construct sequences of annealed distributions to ease the transition from base diffusion distribution to tilted distribution. Other works [Skreta et al., 2025, Xu et al., 2025] intentionally target sampling from pα for α > 1 as a means of generating higher quality samples from the base diffusion model, which is particularly popular for generating more designable proteins [Geffner et al., 2025]. 3 Preliminaries Let X be a finite vocabulary of tokens, and let X T denote the set of finite sequences of tokens x0:T = (x0, x1, . . . , xT ), where xi ∈X for all i and T ∈Z≥0 is some nonnegative integer. For convenience, for a given t, let x<t = (x0, . . . , xt−1) and x>t = (xt+1, . . . , xT ), with similar definitions for x≤t and x≥t. In general, x refers to a token sequence x0:T , where T is implicitly given. Then an LLM defines a distribution p over token sequences X T by autoregressively learning the conditional token distributions p(xt|x<t) for all t, giving the joint distribution via the identity p(x0:T ) = T Y t=0 p(xt|x<t). (1) To sample a sequence from p, we simply sample from the LLM token by token using the conditional distributions, which by (1) directly samples from the joint distribution. 3 4 MCMC Sampling for Power Distributions p pα (α = 4.0) Figure 2: A toy example of distribution sharpening. Here p is a mixture of Gaus- sians, which we plot against pα (α = 4.0). In this section, we introduce our sampling algorithm for base models. Our core intuition is derived from the notion of distribution sharpening posed in Section 1. Sharpen- ing a reference distribution refers to reweighting the dis- tribution so that high likelihood regions are further up- weighted while low likelihood regions are downweighted, biasing samples heavily towards higher likelihoods under the reference. Then if RL posttrained models really are just sharpened versions of the base model, we should be able to explicitly specify a target sampling distribution that achieves the same effect. We organize this section as follows. Section 4.1 presents this target sharpened distribution and provides some math- ematical motivation for why its samples are amenable for reasoning tasks. Section 4.2 introduces a general class of Markov chain Monte Carlo (MCMC) al- gorithms aimed at actually sampling from this target distribution, and finally, Section 4.3 details our specific implementation for LLMs. 4.1 Reasoning with Power Distributions One natural way to sharpen a distribution p is to sample from the power distribution pα. Since p(x) > p(x′) =⇒ p(x)α p(x′)α > p(x) p(x′) (α ∈[1, ∞]), (2) it follows that exponentiating p increases the relative weight on higher likelihood sequences (x) while decreasing the relative weight on lower likelihood ones (x′) (see Figure 2 for a visualization). A related but well-known sharpening strategy is low-temperature sampling [Wang et al., 2020], which exponentiates the conditional next-token distributions at each step: ptemp(xt|x0 . . . xt−1) = p(xt|xt−1 . . . x0)α P x′ t∈X p(x′ t|xt−1 . . . x0)α , (3) where the temperature is τ = 1/α. A common misconception is that sampling with (3) over T tokens is equivalent to sampling from pα; however, this is false in a subtle yet crucial way, as we illuminate in the following. Proposition 1. Low-temperature sampling does not sample from the power distribution pα. Proof. We show that the associated conditional next-token distributions are distinct at each timestep t. The conditional distribution on xt for pα is given by ppow(xt|x0 . . . xt−1) = P x>t p(x0, . . . , xt, . . . , xT )α P x≥t p(x0, . . . , xt, . . . , xT )α . (4) Using Bayes rule p(xt|xt−1 . . . x0) = p(x0, . . . , xt) p(x0, . . . , xt−1) = P x>t p(x0, . . . , xt, . . . , xT ) P x≥t p(x0, . . . , xt, . . . , xT ), (5) we can rewrite the low-temperature marginal (3) as ptemp(xt|x0 . . . xt−1) = P x>t p(x0, . . . , xt, . . . , xT ) α P x′ t P x>t p(x0, . . . , xt, . . . , xT ) α . (6) Ignoring normalizations for clarity, the relative weight on token xt for sampling from pα is given by a sum of exponents ppow(xt|x<t) ∝ X x>t p(x0, . . . , xt, . . . , xT )α. (7) 4 Meanwhile, the relative weight for low-temperature sampling is given by an exponent of sums ptemp(xt|x<t) ∝ X x>t p(x0, . . . , xt, . . . , xT ) !α . (8) Since the relative weights of next-token prediction are distinct for each sampling strategy, it fol- lows that the joint distribution over seqeunces must also be distinct for each sampler. Hence, the distribution on sequences given by low-temperature sampling is not the same as the one given by pα. One intuitive way to understand this difference is that low-temperature sampling does not account for how exponentiation sharpens the likelihoods of “future paths” at time step t, instead “greedily” averaging all these future likelihoods (exponent of sums (8)). On the other hand, sampling from pα inherently accounts for future completions as it exponentiates all future paths (sum of exponents (7)) before computing the weights for next-token prediction. This has the following consequence: Observation 1. The power distribution upweights tokens with few but high likelihood future paths, while low-temperature sampling upweights tokens with several but low likelihood completions. Example 1. We can observe this phenomenon with a simple example. Let us consider the token vocabulary X = {a, b} and restrict our attention to two-token sequences (x0, x1): aa, ab, ba, bb. Let p(aa) = 0.00, p(ab) = 0.40, p(ba) = 0.25, p(bb) = 0.25, so that p(x0 = a) = 0.40, p(x0 = b) = 0.50. Let α = 2.0. Under pα, we have ppow(x0 = a) ∝0.002 + 0.402 = 0.160, ppow(x0 = b) ∝0.252 + 0.252 = 0.125, so pα prefers sampling a over b. Under low-temperature sampling, ptemp(x0 = a) ∝(0.00 + 0.40)2 = 0.160, ptemp(x0 = b) ∝(0.25 + 0.25)2 = 0.250, preferring sampling b over a. If pα samples x0 = a, there is only one future path with likelihood 0.40. If ptemp samples x0 = b, there are two future paths ba, bb, but either choice has likelihood 0.25. In other words, even though a has lower conditional likelihood under both p and ptemp, pα upweights a and samples the highest likelihood two-token sequence. b has many future paths contributing to a higher likelihood under p and ptemp, but leads to low likelihood sequences. We provide a stronger formalization of this phenomenon in Appendix A.1. Thus, sampling from pα encourages sampling tokens which have fewer but higher likelihood “future paths”, as opposed to tokens with several lower likelihood completions. This type of behavior is immensely valuable for reasoning tasks. For example, choosing “wrong” tokens that have high average likelihoods but trap outputs in low likelihood individual futures are examples of critical windows or pivotal tokens [Li et al., 2025, Abdin et al., 2024], a phenomenon where a few tokens are highly influential in the correctness of language model outputs. In fact, sharp critical windows have been shown to correlate strongly with reasoning failures [Li et al., 2025]. Instead, embedded in sampling from the power distribution is an implicit bias towards planning for future high likelihood tokens. 4.2 The Metropolis-Hastings Algorithm Now that we have seen how sampling from pα can in theory assist the underlying LLM’s ability to reason, our aim now turns towards proposing an algorithm to accurately sample from it. Given an LLM p, we have access to the values pα over any sequence length; however, these values are unnormalized. Direct sampling from the true probabilities requires normalizing over all sequences (x0, . . . , xT ) ∈X T , which is computationally intractable. To get around this, we invoke a Markov Chain Monte Carlo (MCMC) algorithm known as Metropolis-Hastings (MH) [Metropolis et al., 1953], which targets exactly what we want: approx- imate sampling from an unnormalized probability distribution. The MH algorithm constructs a 5 Figure 3: Illustrating Metropolis-Hastings with random resampling. A random index t is selected and a new candidate is generated by resampling. Based on the relative likelihoods, the candidate is accepted or rejected, and the process repeats. Markov chain of sample sequences (x0, x1, . . . , xn) using an arbitrary proposal distribution q(x|xi) to select the next candidate xi+1. With probability A(x, xi) = min  1, pα(x) · q(xi|x) pα(xi) · q(x|xi)  , (9) candidate x is accepted as xi+1; otherwise, MH sets xi+1 = xi. This algorithm is especially convenient as it only requires the relative weights given by pα (as the normalization weights in A cancel) and works with any generic but tractable sampler q with minimal restrictions. Remarkably, for large enough n, this process converges to sampling from the target distribution pα under the following (quite minimal) conditions on the proposal distribution [Neal, 1993]: Definition 1. The proposal distribution q is irreducible if for any set X with nonzero mass under the target distribution pα, q has nonzero probability of eventually sampling from X. The proposal is aperiodic if the induced chain of samples does not return to the same sample after a fixed interval number of steps. Thus, we must simply ensure that our proposal distribution satisfies irreducibility and aperiodicity, and Metropolis-Hastings takes care of the rest. On a practical level, we would also like both q(x|xi) and its reverse q(xi|x) to be easily computable. Consider the following family of random resampling proposal distributions (see Figure 3). Let pprop be a proposal LLM. With uniform probability 1 T , select a random t ∈[1, T] and resample the se- quence starting at index t using pprop. Then the transition likelihood q(x|xi) is simply the likelihood of the resampling. Note that at each candidate selection step, we have a nonzero probability of transitioning between any two sequences x, x′ ∈X, since with some probability we can always re- sample as early as the beginning of x. This ensures our proposal distribution is both irreducible and aperiodic. Moreover, q(xi|x) is easy to calculate by symmetry, since we can treat xi as a resampled version of x. With the flexibility endowed by Metropolis-Hastings, we can choose the proposal LLM pprop to be any LLM with any sampling strategy (e.g., low-temperature sampling). 4.3 Power Sampling with Autoregressive MCMC A direct implementation of Metropolis-Hastings for LLMs would involve initializing with a sampled token sequence of length T, subsequently generating new candidates of length T with (9) over many, many iterations. This process is computationally expensive, however, due to the repeated, full sequence inference calls to the LLM. In fact, the main downside to MCMC algorithms in practice is the potential for an exponential mixing time [Gheissari et al., 2017], where a poor choice of initialization or proposal distribution can result in an exponentially large number of samples required before convergence to the target distribution. This problem is exacerbated if the sample space has high dimensionality [Bandeira et al., 2022, Schmidler and Woodard, 2013], which is precisely exhibited by the sequence space of tokens X T , especially for long sequences/large values of T. To remedy this, we propose an algorithm that leverages the sequential structure of autoregressive sampling. We define a series of intermediate distributions which we progressively sample from, until converging to the target distribution pα. In particular, samples from one intermediate distribution initiate a Metropolis-Hastings process for the next, helping avoid pathological initializations. 6 Algorithm 1: Power Sampling for Autoregressive Models Input : base p; proposal pprop; power α; length T Hyperparams: block size B; MCMC steps NMCMC Output : (x0, . . . , xT ) ∼pα 1 Notation: Define the unnormalized intermediate target πk(x0:kB) ∝p(x0:kB)α. 2 for k ←0 to ⌈T B ⌉−1 do 3 Given prefix x0:kB, we wish to sample from πk+1. Construct initialization x0 by extending autoregressively with pprop: x(0) t ∼pprop xt | x<t  , for kB + 1 ≤t ≤(k + 1)B. Set the current state x ←x0. 4 for n ←1 to NMCMC do 5 Sample an index m ∈{1, . . . , (k + 1)B} uniformly. 6 Construct proposal sequence x′ with prefix x0:m−1 and resampled completion: x′ t ∼pprop xt | x<t  , for m ≤t ≤(k + 1)B. 7 Compute acceptance ratio (9) A(x′, x) ←min ( 1, πk(x′) πk(x) · pprop(x | x′) pprop(x′ | x) ) . Draw u ∼Uniform(0, 1); 8 if u ≤A(x′, x) then accept and set x ←x′ 9 end 10 Set x0:(k+1)B ←x to fix the new prefix sequence for the next stage. 11 end 12 return x0:T Fix block size B and proposal LLM pprop, and consider the sequence of (unnormalized) distributions ∅−→p(x0, . . . , xB)α −→p(x0, . . . , x2B)α −→· · · −→p(x0, . . . , xT )α, (10) where p(x0, . . . , xkB) denotes the joint distribution over token sequences of length kB, for any k. For convenience, let πk denote the distribution given by πk(x0:kB) ∝p(x0:kB)α. (11) Suppose we have a sample from πk. To obtain a sample from πk+1, we initialize a Metropolis- Hastings process by sampling the next B tokens xkB+1:(k+1)B with pprop. We subsequently run the MCMC sampling procedure for NMCMC steps, using the random resampling proposal distribution q from the previous section. The full details are presented in Algorithm 1. Note that Algorithm 1 is single-shot: even though multiple inference calls are made, the decision to accept vs. reject new tokens is made purely by base model likelihoods to simulate sampling a single sequence from pα. We can interpret this as a new axis for inference-time scaling, as we expend additional compute during sampling to obtain a higher quality/likelihood sample. To quantify the scaling, we can estimate the average number of tokens generated by Algorithm 1. Note that each candidate generation step when sampling from πk(x0:kB resamples an average of kB 2 tokens, NMCMC times. Summing over all k, the expected number of tokens generated is Etokens = NMCMC ⌈T/B⌉ X k=1 kB 2 ≈NMCMCT 2 4B . (12) The key tradeoff here is between the block size B and number of MCMC steps NMCMC. A larger B requires larger “jumps” between intermediate distributions, requiring a larger NMCMC to adequately 7 transition. In Section 5, we empirically find a value for B that makes Algorithm 1 performant for relatively small values of NMCMC. 5 Experiments 5.1 Experimental Setup Evaluation. We use a standard suite of reasoning benchmarks ranging across mathematics, coding, and STEM (MATH500, HumanEval, GPQA), along with a non-verifiable benchmark (AlpacaEval 2.0) evaluating general helpfulness. We evaluate all of our methods and baselines single-shot; i.e., on one final response string. • MATH500: The MATH dataset [Lightman et al., 2024] consists of competition math problems spanning seven categories including geometry, number theory, and precalculus. There are 12500 problems total, with 7500 training problems and 5000 test problems. MATH500 is a specific randomly chosen subset of the test set standardized by OpenAI. • HumanEval: HumanEval is a set of 164 handwritten programming problems covering algorihtms, reasoning, mathematics, and language comprehension [Chen et al., 2021]. Each problem has an average of 7.7 associated unit tests, where solving the problem corresponds to passing all unit tests. • GPQA: GPQA [Rein et al., 2024] is a dataset of multiple-choice science questions (physics, chem- istry, and biology) which require advanced reasoning skills to solve. We use subset GPQA Dia- mond for evaluation, which consists of 198 questions which represent the highest quality subset of the GPQA dataset. • AlpacaEval 2.0: The AlpacaEval dataset is a collection of 805 prompts [Dubois et al., 2024] that gauge general helpfulness with questions asking e.g., for movie reviews, recommendations, and reading emails. The model responses are graded by an automated LLM judge (GPT-4-turbo), which determines a preference for the model responses over those from a baseline (also GPT-4- turbo). The resulting score is a win rate of model responses normalized for the length of the model response. Models. To demonstrate the efficacy of our sampling algorithm, we use the base models Qwen2.5- Math-7B, Qwen2.5-7B, and Phi-3.5-mini-instruct. For our RL baselines, we use the implementation of GRPO in Shao et al. [2025], which posttrains these models on the MATH training split. For both the Qwen2.5 models, we use the default hyperparameters used to benchmark their performance in Shao et al. [2025]. For the Phi-3.5 model, we use a set of hyperparameters selected from Abdin et al. [2024] that avoids training instabilities and converges to improvement over the base model over a large number of epochs. Sampling Algorithm. For our implementation of power sampling (Algorithm 1), we set the max- imum T to be Tmax = 3072 (termination can happen earlier with an EOS token) and block size B = 3072/16 = 192. Empirically, we find α = 4.0 coupled with a proposal LLM pprop chosen as the base model with sampling temperature 1/α to be most performant for reasoning tasks. For Al- pacaEval 2.0, we find that having a proposal distribution of higher temperature (τ = 0.5) improves performance. 5.2 Results Main results. We display our main results in Table 1. Across base models of different families, our sampling algorithm achieves massive, near-universal boosts in single-shot accuracies and scores over different reasoning and evaluation tasks that reach, e.g., up to +51.9% on HumanEval with Phi-3.5-mini and +25.2% on MATH500 with Qwen2.5-Math. In particular, on MATH500, which is in-domain for RL-posttraining, power sampling achieves accuracies that are on par with those obtained by GRPO. Furthermore, on out-of-domain reasoning, our algorithm again matches GRPO on GPQA and actually outperforms on HumanEval by up to +59.8%. Similarly, power sampling consistently outperforms on the non-verifiable AlpacaEval 2.0, suggesting a generalizability of our boosts to domains beyond verifiability. The surprising success of this fundamentally simple yet training-free sampling algorithm under- scores the latent reasoning capabilities of existing base models. 8 MATH500 HumanEval GPQA AlpacaEval2.0 Qwen2.5-Math-7B Base 0.496 0.329 0.278 1.61 Low-temperature 0.690 0.512 0.353 2.09 Power Sampling (ours) 0.748 0.573 0.389 2.88 GRPO (MATH) 0.785 0.537 0.399 2.38 Qwen2.5-7B Base 0.498 0.329 0.278 7.05 Low-temperature 0.628 0.524 0.303 5.29 Power Sampling (ours) 0.706 0.622 0.318 8.59 GRPO (MATH) 0.740 0.561 0.354 7.62 Phi-3.5-mini-instruct Base 0.400 0.213 0.273 14.82 Low-temperature 0.478 0.585 0.293 18.15 Power Sampling (ours) 0.508 0.732 0.364 17.65 GRPO (MATH) 0.406 0.134 0.359 16.74 Table 1: Power sampling (ours) matches and even outperforms GRPO across model families and tasks. We benchmark the performance of our sampling algorithm on MATH500, HumanEval, GPQA, and AlpacaEval 2.0. We bold the scores of both our method and GRPO, and underline whenever our method outperforms GRPO. Across models, we see that power sampling is comparable to GRPO on in-domain reasoning (MATH500), and can outperform GRPO on out-of-domain tasks. 5.3 Analysis We analyze how the reasoning characteristics of power sampling relate to those of GRPO. We present an example in Table 2, with further examples in Appendix A.3. Reasoning trace likelihoods and confidences. By design, power sampling targets sampling higher likelihood sequences from the base model. In Figure 4, the left graph plots a histogram of the out- put sequence log-likelihoods (averaged by length) of the base model, power sampling, and GRPO responses on MATH500, where likelihoods are taken relative to the Qwen2.5-Math-7B base model. Our method samples from higher likelihood regions of the base model, as intended, but still main- tains noticeable spread. Meanwhile, GRPO samples are heavily concentrated at the highest likeli- hood peak. Figure 4: Base model (Qwen2.5-Math-7B) likelihoods and confidences for MATH500 responses. Left: We plot the log-likelihoods (relative to the base model) of original, power sampling, and GRPO responses over MATH500. Right: We do the same but for confidences relative to the base model. We observe that GRPO samples from the highest likelihood and confidence regions with power sampling close behind, which correlates with higher empirical accuracy. 9 Filter an input list of strings only for ones that start with a given prefix. (Phi-3.5-mini-instruct: HumanEval) Method Response Passed Ours return [s for s in strings if s.startswith (prefix)] true GRPO return [string for string in strings if string. startswith (f’{prefix}’*2)] false Table 2: Sample responses on HumanEval: Phi-3.5-mini-instruct. We present an example where our method solves a simple coding question, but GRPO does not. We also plot the base model confidence of MATH500 responses, defined to be the average negative entropy (uncertainty) of the next-token distributions [Prabhudesai et al., 2025]: Conf(x0:T ) = 1 T + 1 T X t=0 X x∈X p(x|x<t) log p(x|x<t). (13) The right plot of Figure 4 demonstrates that our method’s and GRPO responses sample from sim- ilarly high confidence regions from the base model, which again correspond to regions of higher likelihood and correct reasoning. Reasoning trace lengths. Another defining characteristic of RL-posttraining is long-form reasoning [Guo et al., 2025], where samples tend to exhibit longer responses. On MATH500, Qwen2.5-Math- 7B averages a response length of 600 tokens, while GRPO averages 671 tokens. Surprisingly, power sampling achieves a similar average length of 679 tokens, without explicitly being encouraged to favor longer generations. This emerges naturally from the sampling procedure. Diversity and pass@k performance. Again, notice the peaked and highly concentrated likelihood- s/confidences of GRPO relative to the distributional spread of power sampling in Figure 4. This suggests GRPO exhibits a collapse in diversity while our sampler does not, aligning with the obser- vation that RL-posttraining strongly sharpens the base model distribution at the expense of diversity [Song et al., 2025]. To quantify the comparative diversity of power sampling relative to GRPO, we can plot the pass@k accuracy rate, where a question is solved if at least one of k samples is accu- rate. Figure 5 shows exactly this: unlike GRPO, whose pass@k performance tapers off for large k, power sampling strongly outperforms for k > 1. Moreover, our performance curve supersedes that of the base model until finally converging in performance. In particular, we are able to achieve GRPO-level single-shot performance without compromising multi-shot performance (see Appendix A.2 for other domains), addressing a long-standing downside to RL-posttraining. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Pass@k Accuracy MATH500 Ours Base GRPO Figure 5: Pass@k performance on MATH500. We plot the pass@k accuracy (correct if at least one of k samples is accurate) of power sampling (ours) and RL (GRPO) relative to the base model (Qwen2.5-Math- 7B). Our performance curve is strictly better than both GRPO and the base model, and our pass rate at high k matches the base model, demonstrating sustained generation diversity. The effect of power distributions. The two most important hyperparameters for power sampling are the choice of α and the number of MCMC (resampling) steps during sequence generation 10 Figure 6: Effect of hyperparameters on power sampling. Left: We plot MATH500 accuracy across model families for various values of α. Right: We plot the increase in accuracy of power sampling on Qwen models as the number of MCMC steps increases. NMCMC. At the extremes, choosing α = 1.0 samples from the base model directly, while taking α →∞has the effect of deterministically accepting any resampled sequence that strictly increases the likelihood. Of course, even though higher base model likelihoods correlate with better reasoning (Figure 4), directly optimizing for likelihood is not necessarily optimal for reasoning, suggesting an ideal intermediate value of α. In Figure 6, we display MATH500 accuracies across various values of α and find that an intermediate α = 4.0 outperforms other values, as expected. Noticeably, the accuracies of power sampling remain relatively stable beyond α ≥2.0, suggesting that power sampling in practice is relatively robust to the choice of α. Test-time scaling with MCMC steps. On the other hand, NMCMC toggles the inference-time com- pute expended by our algorithm, providing a natural axis for test-time scaling. In Section 4.3 we raised the notion of a mixing time, or the number of MCMC steps required before adequately sam- pling from the target distribution. In our case, we expect that the fewer MCMC steps we take, the further our algorithm samples from the target pα. We plot performance dependence on NMCMC in Figure 6 and notice a steady increase in accuracy until NMCMC = 10, beyond which accuracy remains roughly stable (not plotted). The accuracy difference from using fewer MCMC steps is noticeable but no more than 3-4% between NMCMC = 2 and NMCMC = 10. However, the jump in accuracy by using at least two steps as opposed to none is substantial (3-4%). We can even compute the total amount of tokens generated by our method relative to running GRPO. From (12), our sampler generates 1 4B · NMCMCT times as many tokens as standard inference to gen- erate a sequence of length T. Plugging in our experimental parameters NMCMC = 10, T = 679 (our average output length for MATH500) and B = 192, running inference with power sampling incurs a multiplier of 8.84× the number of tokens as running standard inference. Since GRPO generates multiple rollouts per example during training, our method incurs roughly the same inference cost as one epoch of GRPO training, assuming 8 rollouts per sample with identical dataset sizes. Typically though, one GRPO epoch is still more expensive as it uses 16 rollouts and a training set that is larger than MATH500. 6 Conclusion In this work, we present an algorithm that samples directly from a base model without any additional training or access to an external signal, achieving a single-shot reasoning performance that is on par with, and sometimes even better than, that of a state-of-the-art RL-posttraining algorithm. We use the discussion of RL distribution sharpening to motivate defining the power distribution as a valuable target distribution for reasoning. Although exact power distribution sampling is intractable, we employ classic MCMC techniques alongside the sequential structure of autoregressive generation to define our power sampling algorithm, which demonstrates strong empirical performance. 11 Our results suggest that base model capabilities are underutilized at sampling time and point towards a close relationship between high likelihood regions of the base model and strong reasoning capabil- ities. Employing additional compute at sampling-time with a stronger understanding of base model capabilites offers a promising direction for expanding the scope of reasoning beyond verifiability. 7 Acknowledgments A.K. would like to thank the Paul and Daisy Soros Foundation, NDSEG Fellowship, and Kempner Institute for their support. References Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. 1, 3, 10 Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2503.24290, 2025. 1, 3 Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021. 1 Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R´emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Mankowitz, Esme Sutherland Robson, Push- meet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with AlphaCode. arXiv preprint arXiv:2203.07814, 2022. 1 David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024. 1, 8 Andre He, Daniel Fried, and Sean Welleck. Rewarding the unlikely: Lifting GRPO beyond distri- bution sharpening. arXiv preprint arXiv:2506.02355, 2025. 1 Rulin Shao, Shuyue Stella Li, Rui Xin, Scott Geng, Yiping Wang, Sewoong Oh, Simon Shaolei Du, Nathan Lambert, Sewon Min, Ranjay Krishna, Yulia Tsvetkov, Hannaneh Hajishirzi, Pang Wei Koh, and Luke Zettlemoyer. Spurious rewards: Rethinking training signals in RLVR. arXiv preprint arXiv:2506.10947, 2025. 1, 3, 8 Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does re- inforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint arXiv:2504.13837, 2025. URL https://arxiv.org/abs/2504.13837. 1 Yuda Song, Julia Kempe, and R´emi Munos. Outcome-based exploration for LLM reasoning. arXiv preprint arXiv:2509.06941, 2025. URL https://arxiv.org/abs/2509.06941. 1, 10 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseek-math: Advancing mathematical reasoning through step-by-step exploration. arXiv preprint arXiv:2404.01140, 2024. 2, 3 Mihir Prabhudesai, Lili Chen, Alex Ippoliti, Katerina Fragkiadaki, Hao Liu, and Deepak Pathak. Maximizing confidence alone improves reasoning. arXiv preprint arXiv:2505.22660, 2025. URL https://arxiv.org/abs/2505.22660. 2, 3, 10 Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. In NeurIPS, volume 35, pages 27730–27744, 2022. 3 12 Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brah- man, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. T¨ulu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124, 2024. 3 Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Sim- plerlzoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025. URL https://arxiv.org/abs/2503.18892. 3 Xuandong Zhao, Zhewei Kang, Aosong Feng, Sergey Levine, and Dawn Song. Learning to reason without external rewards. arXiv preprint arXiv:2505.19590, 2025. URL https://arxiv.org/ abs/2505.19590. 3 Stephen Zhao, Rob Brekelmans, Alireza Makhzani, and Roger Grosse. Probabilistic inference in language models via twisted sequential monte carlo. arXiv preprint arXiv:2404.17546, 2024. URL https://arxiv.org/abs/2404.17546. 3 Nicolas Chopin. Central limit theorem for sequential monte carlo methods and its application to bayesian inference. The Annals of Statistics, 32(6): 2385–2411, 2004. doi: 10.1214/009053604000000615. URL https:// projecteuclid.org/journals/annals-of-statistics/volume-32/issue-6/ Central-limit-theorem-for-sequential-Monte-Carlo-methods-and-its/10. 1214/009053604000000698.full. 3 Gonc¸alo R. A. Faria, Sweta Agrawal, Ant´onio Farinhas, Ricardo Rei, Jos´e G. C. de Souza, and Andr´e F. T. Martins. Quest: Quality-aware metropolis-hastings sampling for machine translation. In NeurIPS, 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/ file/a221d22ff6a33599142c8299c7ed06bb-Paper-Conference.pdf. 3 Radford M. Neal. Annealed importance sampling. arXiv preprint physics/9803008, 1998. URL https://arxiv.org/abs/physics/9803008. 3 Krzysztof Łatuszy´nski, Matthew T. Moores, and Timoth´ee Stumpf-F´etizon. Mcmc for multi-modal distributions. arXiv preprint arXiv:2501.05908, 2025. URL https://arxiv.org/abs/2501. 05908v1. 3 Yilun Du, Conor Durkan, Robin Strudel, Joshua B Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Sussman Grathwohl. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. In International con- ference on machine learning, pages 8489–8510. PMLR, 2023. 3 Sunwoo Kim, Minkyu Kim, and Dongmin Park. Test-time alignment of diffusion models without reward over-optimization. arXiv preprint arXiv:2501.05803, 2025. URL https://arxiv.org/ abs/2501.05803. 3 Aayush Karan, Kulin Shah, and Sitan Chen. Reguidance: A simple diffusion wrapper for boosting sample quality on hard inverse problems. arXiv preprint arXiv:2506.10955, 2025. URL https: //arxiv.org/abs/2506.10955. 3 Yanwei Wang, Lirui Wang, Yilun Du, Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia P´erez-D’Arpino, Dieter Fox, and Julie Shah. Inference-time policy steering through human interactions. In 2025 IEEE International Conference on Robotics and Automation (ICRA), pages 15626–15633. IEEE, 2025. 3 Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Dongxia Wu, Haorui Wang, Aaron M. Ferber, Yian Ma, Carla P. Gomes, and Chao Zhang. Diffusion models as con- strained samplers for optimization with unknown constraints. In Yingzhen Li, Stephan Mandt, Shipra Agrawal, and Emtiyaz Khan, editors, Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, volume 258 of Proceedings of Machine Learning Research, pages 4582–4590. PMLR, 2025. URL https://proceedings.mlr.press/v258/kong25b. html. 3 Xiangcheng Zhang, Haowei Lin, Haotian Ye, James Zou, Jianzhu Ma, Yitao Liang, and Yilun Du. Inference-time scaling of diffusion models through classical search. arXiv preprint arXiv:2505.23614, 2025. 3 13 Malcolm Sambridge. A parallel tempering algorithm for probabilistic sampling and optimiza- tion. Geophysical Journal International, 196(1):357–374, 2014. doi: 10.1093/gji/ggt374. URL https://academic.oup.com/gji/article/196/1/357/585739. 3 Marta Skreta, Tara Akhound-Sadegh, Viktor Ohanesian, Roberto Bondesan, Al´an Aspuru-Guzik, Arnaud Doucet, Rob Brekelmans, Alexander Tong, and Kirill Neklyudov. Feynman-kac correc- tors in diffusion: Annealing, guidance, and product of experts. arXiv preprint arXiv:2503.02819, 2025. URL https://arxiv.org/abs/2503.02819. 3 Yanbo Xu, Yu Wu, Sungjae Park, Zhizhuo Zhou, and Shubham Tulsiani. Temporal score rescaling for temperature sampling in diffusion and flow models. arXiv preprint arXiv:2510.01184, 2025. URL https://arxiv.org/abs/2510.01184. 3 Tomas Geffner, Kieran Didi, Zuobai Zhang, Danny Reidenbach, Zhonglin Cao, Jason Yim, Mario Geiger, Christian Dallago, Emine Kucukbenli, and Arash Vahdat. Proteina: Scaling flow-based protein structure generative models. arXiv preprint arXiv:2503.00710, 2025. URL https:// arxiv.org/abs/2503.00710. 3 Pei-Hsin Wang, Sheng-Iou Hsieh, Shih-Chieh Chang, Yu-Ting Chen, Jia-Yu Pan, Wei Wei, and Da- Chang Juan. Contextual temperature for language modeling. arXiv preprint arXiv:2012.13575, 2020. URL https://arxiv.org/abs/2012.13575. 4 Marvin Li, Aayush Karan, and Sitan Chen. Blink of an eye: A simple theory for feature localization in generative models. arXiv preprint arXiv:2502.00921, 2025. URL https://arxiv.org/abs/ 2502.00921. 5, 16 Marah Abdin, Jyoti Aneja, Harkirat Behl, S´ebastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Xin Wang, Rachel Ward, Yue Wu, Dingli Yu, Cyril Zhang, and Yi Zhang. Phi-4 technical report. arXiv preprint arXiv:2412.08905, 2024. URL https://arxiv.org/abs/2412.08905. 5, 8, 16 Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Ed- ward Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6):1087–1092, 1953. doi: 10.1063/1.1699114. 5 Radford M Neal. Probabilistic inference using markov chain monte carlo methods. Department of Computer Science, University of Toronto (review paper / technical report), 1993. 6 Reza Gheissari, Eyal Lubetzky, and Yuval Peres. Exponentially slow mixing in the mean-field swendsen–wang dynamics. arXiv preprint arXiv:1702.05797, 2017. 6 Afonso S. Bandeira, Antoine Maillard, Richard Nickl, and Sven Wang. On free energy barriers in gaussian priors and failure of cold start mcmc for high-dimensional unimodal distributions. arXiv preprint arXiv:2209.02001, 2022. URL https://arxiv.org/abs/2209.02001. 6 Scott C. Schmidler and Dawn B. Woodard. Lower bounds on the convergence rates of adaptive mcmc methods. Technical report, Duke University / Cornell University, 2013. URL https: //www2.stat.duke.edu/~scs/Papers/AdaptiveLowerBounds_AS.pdf. 6 Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview. net/forum?id=v8L0pN6EOi. 8 M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and 14 W. Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374. 8 Yann Dubois, Bal´azs Galambosi, Percy Liang, and Tatsunori B. Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024. URL https://arxiv.org/abs/2404.04475. 8 15 A Appendix A.1 Additional Theoretical Discussion In this section, we provide a stronger formalization of the phenomenon that power sampling down- weights tokens that trap outputs in low-likelihood futures while low-temperature sampling does not. Proposition 2 (Informal). Power sampling upweights tokens with small support but high likelihood completions, while low-temperature sampling upweights tokens with large support but low likeli- hood completions. Definition 2. For the rest of this section, fix a prefix x0:t−1. We say that xt has marginal weight ε under the conditional next-token distribution if P x>t p(x0, . . . , xt, . . . xT ) = ε. We consider a simplified model of the “critical window” or “pivotal token” phenomenon [Li et al., 2025, Abdin et al., 2024], which refers to intermediate tokens that strongly influence the quality of the final generation. We differentiate between pivotal tokens that lead to high-likelihood futures vs. low-likelihood ones. Definition 3. At one extreme, a pivotal token maximally induces a high-likelihood completion if it places its entire marginal weight ε on one future (singular support); i.e., for only one choice of x > t is p(x0, . . . , xt, . . . , xT ) nonzero. We call such a token a positive pivotal token. Definition 4. At the other extreme, a pivotal token minimizes the likelihood of any future if its entire marginal weight ε is uniformly distributed across N future completions. In other words, there exist N completions x > t such that p(x0, . . . , xt, . . . , xT ) are all nonzero with likelihood ε N . We call such a token a negative pivotal token. Our simplified model of high and low-likelihood futures examines when positive pivotal tokens are favored over negative pivotal tokens under a given sampling distribution. In particular, we show that power sampling can upweight a positive pivotal token over a negative one even if the latter has a higher marginal weight, whereas low-temperature sampling always upweights the negative pivotal token in such a scenario. Of course, whenever a positive pivotal token has higher marginal weight, both power sampling and low-temperature sampling will upweight it. Proposition 3. Let xt be a positive pivotal token with marginal weight ε, and let x′ t be a negative pivotal token with marginal weight ε′ and support N. Then if ε′ N 1−1/α < ε < ε′, (14) the future likelihood of xt is higher than any future likelihood of x′ t. Moreover, power sampling upweights xt over x′ t while low-temperature sampling upweights x′ t over xt. Proof. Since α ≥1, it follows that ε′ N 1−1/α > ε′ N (15) and thus ε > ε′ N , establishing that the future completion likelihood of xt is greater than that of x′ t (i.e. the assignment of positive and negative pivotal tokens is consistent). Now, if ε < ε′, then under the low-temperature distribution, the relative marginal weights on xt and x′ t are εα and ε′α, so the probability of choosing xt is downweighted relative to x′ t. However, for the power distribution, the relative marginal weights are ppow(xt|x<t) = εα and ppow(x′ t|x<t) = ε′α N α−1 . Then, as long as εα > ε′α N α−1 ⇐⇒ε > ε′ N 1−1/α , token xt will be upweighted relative to token x′ t. In other words, the marginal weight on xt can be less than the mass on x′ t under p, but if the completion for xt has higher likelihood than any individual completion for x′ t, power sampling favors xt over x′ t. A.2 Pass@k Accuracies over Multiple Domains In this section, we plot the pass@k performance of power sampling, GRPO, and the base model (Qwen2.5-Math-7B) over MATH500, GPQA, and HumanEval to demonstrate that our sampling 16 algorithm is highly performant at both single-shot and multi-shot reasoning while maintaining re- sponse diversity. Power sampling is plotted with α = 4.0 for MATH500 and GPQA and α = 1.67 for HumanEval (this temperature exhibits slightly better results at earlier k). In all cases, both in- domain and out-of-domain for GRPO, power sampling has near universally better performance than both GRPO and the base model in pass@k for k > 1 and matches, if not exceeds, the base model upper bound at large k. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Pass@k Accuracy MATH500 Ours Base GRPO Figure 7: Pass@k performance on MATH500 (Qwen2.5-Math-7B). 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.3 0.4 0.5 0.6 0.7 0.8 Pass@k Accuracy HumanEval Ours Base GRPO Figure 8: Pass@k performance on HumanEval (Qwen2.5-Math-7B). One thing to note about these plots is that the loss in diversity varies noticeably from benchmark to benchmark. MATH500 and GPQA clearly show that GRPO has a significantly lower pass@k performance and diversity even for smaller k, while on HumanEval GRPO exhibits better pass@k 17 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Pass@k Accuracy GPQA Ours Base GRPO Figure 9: Pass@k performance on GPQA (Qwen2.5-Math-7B). than the base model until k = 16. We speculate this might be due to the fact that while MATH500 and GPQA are graded on a “singular” answer, HumanEval is based on completing code where multiple solutions might be accepted, indicating a reduction in diversity may emerge but at much higher k. A.3 More Qualitative Examples Can you solve the following math problem? Write 3 20 as a decimal. Please reason step by step, and put your final answer within \boxed{{}}. Correct answer: 0.15 System Answer Ours To convert the fraction 3 20 to a decimal, we can follow these steps: 1. Identify the numerator and the denominator of the fraction. In this case, the numerator is 3 and the denominator is 20. 2. Divide the numerator by the denominator. In this case, we divide 3 by 20. 3. Perform the division. The result is 0.15. Therefore, the fraction 3 20 as a decimal is 0.15 . GRPO To convert the fraction 3 20 to a decimal, we can follow these steps: 1. Understand the fraction: The fraction 3 20 means 3 divided by 20. 2. Perform the division: We need to divide 3 by 20. Since 3 is less than 20, we will add a decimal point and zeros to 3 to make the division possible. 3. Set up the division: Write 3 as 3.000 (adding three zeros for precision). 4. Divide: - 20 goes into 30 once (since 20 × 1 = 20). - Subtract 20 from 30 to get a remainder of 10. - Bring down the next 0 to make it 100. - 20 goes into 100 five times (since 20 × 5 = 100). - Subtract 100 from 100 to get a remainder of 0. 5. Combine the results: The quotient from the division is 0.15. Therefore, the decimal representation of 3 20 is 0.15 . Table 3: Comparison on MATH500: Qwen2.5-Math-7B. We present an example where our method and GRPO are correct. 18 The Fib4 number sequence is a sequence similar to the Fibonacci sequence defined as: fib4(0)=0, fib4(1)=0, fib4(2)=2, fib4(3)=0, and for n ≥4, fib4(n) = fib4(n-1) + fib4(n-2) + fib4(n-3) + fib4(n-4). Write a function to efficiently compute the n-th element without recursion. System Completion Passed Ours # Base cases if n == 0: return 0 elif n == 1: return 0 elif n == 2: return 2 elif n == 3: return 0 # Initialize the first four elements fib4_values = [0, 0, 2, 0] # Compute the rest of the sequence iteratively for i in range (4, n+1): next_value = ( fib4_values [i -1] + fib4_values [ i -2] + fib4_values [i -3] + fib4_values [ i -4]) fib4_values .append(next_value ) # Return the n-th element return fib4_values [n] true GRPO a, b, c, d = 0, 0, 2, 0 for _ in range(n): a, b, c, d = b, c, d, a + b + c + d return d false Table 4: HumanEval comparison on Phi-3.5-mini-instruct. 19 Can you solve the following math problem? What integer n satisfies 0 ≤n < 18 and n ≡−11213141 (mod 18) ? Please reason step by step, and put your final answer within \boxed{{}}. Correct answer: 13 Method Answer Ours First, we perform the division of 11213141 by 18: 11213141 ÷ 18 ≈622952.277777 . . . The integer part of the quotient is 622952. Now, we multiply 622952 by 18 to find the product: 622952 × 18 = 11213136 Next, we subtract this product from 11213141 to find the remainder: 11213141 −11213136 = 5 So, the remainder when 11213141 is divided by 18 is 5. This means: 11213141 ≡5 (mod 18) Therefore: −11213141 ≡−5 (mod 18) Since we need a non-negative remainder, we convert −5 to a positive equivalent by adding 18: −5 + 18 = 13 Thus: −11213141 ≡13 (mod 18) The integer n that satisfies 0 ≤n < 18 and n ≡−11213141 (mod 18) is: 13 . GRPO Step 1: Find the remainder when 11213141 is divided by 18. We can use the property of modular arithmetic that states a ≡b (mod m) if and only if a mod m = b mod m. So, we need to compute 11213141 mod 18. To simplify this, we can use the fact that the remainder of a number when divided by 18 is the same as the remainder of the sum of its digits when divided by 18 (although this is not always the most efficient method, it can be used here for simplicity). First, let’s sum the digits of 11213141: 1 + 1 + 2 + 1 + 3 + 1 + 4 + 1 = 14. Now, we need to find the remainder when 14 is divided by 18. Since 14 is less than 18, the remainder is simply 14. Therefore, 11213141 ≡14 (mod 18). Step 2: Find the remainder when −11213141 is divided by 18. Since 11213141 ≡14 (mod 18), it follows that −11213141 ≡−14 (mod 18). We need to convert −14 to a positive remainder between 0 and 17. To do this, we add 18 to −14: −14 + 18 = 4. Therefore, −11213141 ≡4 (mod 18). The integer n that satisfies 0 ≤n < 18 and n ≡−11213141 (mod 18) is 4 . Table 5: MATH500 comparison between our sampling algorithm and GRPO for Qwen2.5-Math-7B. Here is an example where GRPO gets an incorrect answer, while our sampling algorithm succeeds. Our sample answer uses a distinct method altogether. 20
Reasoning with Sampling: Your Base Model is Smarter Than You Think Aayush Karan1, Yilun Du1 1Harvard University Œ Website ‡ Code Abstract Frontier reasoning models have exhibited incredible capabilities across a wide array of disciplines, driven by posttraining large language models (LLMs) with reinforcement learning (RL). However, despite the widespread success of this paradigm, much of the literature has been devoted to disentangling truly novel behaviors that emerge during RL but are not present in the base models. In our work, we approach this question from a different angle, instead asking whether comparable reasoning capabilites can be elicited from base models at inference time by pure sampling, without any additional training. Inspired by Markov chain Monte Carlo (MCMC) techniques for sampling from sharpened distributions, we propose a simple iterative sampling algorithm leveraging the base models' own likelihoods. Over different base models, we show that our algorithm offers substantial boosts in reasoning that nearly match and even outperform those from RL on a wide variety of single-shot tasks, including MATH500, HumanEval, and GPQA. Moreover, our sampler avoids the collapse in diversity over multiple samples that is characteristic of RL-posttraining. Crucially, our method does not require training, curated datasets, or a verifier, suggesting broad applicability beyond easily verifiable domains. 1 Introduction Reinforcement learning (RL) has become the dominant paradigm for enhancing the reasoning capabilities of large language models (LLMs) [Guo et al., 2025, Hu et al., 2025]. Equipped with a reward signal that is typically automatically verifiable, popular RL techniques have been successfully applied to posttrain frontier models, leading to sizeable performance gains in domains like math, coding, and science [Hendrycks et al., 2021, Li et al., 2022, Rein et al., 2024]. Despite the widespread empirical success of RL for LLMs, a large body of literature has centered around the following question: are the capabilities that emerge during RL-posttraining fundamentally novel behaviors that are not present in the base models? This is the question of distribution sharpening [He et al., 2025, Shao et al., 2025, Yue et al., 2025]: that is, whether the posttrained distribution is simply a "sharper" version of the base model distribution, instead of placing mass on reasoning traces the base model is unlikely to generate. Several works point towards the difficulty in learning new capabilities with RL-posttraining. He et al. [2025], Song et al. [2025] compare the pass@k (multi-shot) scores of base models with posttrained models, finding that for large k, base models actually outperform while the latter suffer from degraded generation diversity. In such cases, RL appears to redistribute pass@k performance to single-shot performance at the expense of multi-shot reasoning. Yue et al. [2025] also notes that the reasoning traces post-RL are tightly concentrated at high likelihoods/confidences under the base model, seemingly drawing from existing high-likelihood capabilities. We illustrate this point in our own experiments in Figure 4. Regardless, the advantage of RL-posttraining for single-shot reasoning has remained, as of yet, undeniable. Preprint. Under review. 16 Oct 2025 MATH500 HumanEval GPQA 10 20 30 40 50 60 70 80 Accuracy (%) 49.6 32.9 27.8 78.5 53.7 39.9 74.8 57.3 38.9 Reasoning AlpacaEval2.0 0.5 1.0 1.5 2.0 2.5 3.0 Score 1.61 2.38 2.88 General Base (Qwen2.5-Math-7B) GRPO (RL) Ours (Training-free) Figure 1: Our sampling algorithm can match and outperform RL-posttraining. Left: we compare our sampling algorithm (ours) against the base model (base) and RL-posttraining (GRPO) on three verifiable reasoning tasks (MATH500, HumanEval, GPQA). Right: we compare them on an unverifiable general task (AlpacaEval2.0). Our algorithm achieves comparable performance to GRPO within the posttraining domain (MATH500) but can outperform on out-of-domain tasks such as HumanEval and AlpacaEval. In this paper, we present a surprising result: sampling directly from the base model can achieve single-shot reasoning capabilites on par with those from RL. We propose a sampling algorithm for base models that leverages additional compute at inference time, achieving single-shot performance that nearly matches RL-posttraining on in-domain reasoning tasks and can even outperform on out-of-domain reasoning tasks. Furthermore, we observe that generation diversity does not degrade with our sampler; in fact, our pass@k (multi-shot) performance strongly outperforms RL. We benchmark specifically against Group Relative Policy Optimization (GRPO), which is the standard RL algorithm for enhancing LLM reasoning [Shao et al., 2024]. Crucially, our algorithm is training-free, dataset-free, and verifier-free, avoiding some of the inherent weaknesses of RL methods including extensive hyperparameter sweeps to avoid training instabilities, the need to curate a diverse and expansive posttraining dataset, and the lack of guaranteed access to a ground truth verifier/reward signal [Prabhudesai et al., 2025]. Our contributions can be summarized as follows: i) We introduce the power distribution as a useful sampling target for reasoning tasks. Since it can be explicitly specified with a base LLM, no additional training is required. ii) We further introduce an approximate sampling algorithm for the power distribution using a Markov chain Monte Carlo (MCMC) algorithm that iteratively resamples token subsequences according to their base model likelihoods. iii) We empirically demonstrate the effectiveness of our algorithm over a range of models (Qwen2.5Math-7B, Qwen2.5-7B, Phi-3.5-mini-instruct) and reasoning tasks (MATH500, HumanEval, GPQA, AlpacaEval 2.0). Our results show that sampling directly from the base model can achieve results on par with GRPO. In fact, for some out-of-domain tasks, our algorithm consistently outperforms the RL baseline. Moreover, over multiple samples, we avoid the collapse in diversity afflicting RL-posttraining, achieving the best of both worlds in terms of single-to-few-shot reasoning capabilities as well as sample diversity. Our results collectively illustrate that existing base models are much more capable at single-shot reasoning than current sampling methods reveal. 2 2 Related Works Reinforcement learning for LLMs. RL has been instrumental in posttraining LLMs. Early on, RL with human feedback (RLHF) [Ouyang et al., 2022] was developed as a technique to align LLMs with human preferences using a trained reward model. Recently, RL with verifiable rewards (RLVR) has emerged as a powerful new posttraining technique, where many works [Guo et al., 2025, Lambert et al., 2024, Hu et al., 2025, Zeng et al., 2025] discovered that a simple, end-of-generation reward given by an automated verifier could substantially enhance performance on difficult reasoning tasks in mathematics and coding. The Group Relative Policy Optimization (GRPO) algorithm was at the center of these advances [Shao et al., 2024]. Building off of this success, many subsequent works have examined using reward signals derived from internal signals such as self-entropy [Zhao et al., 2025], confidence [Prabhudesai et al., 2025], and even random rewards [Shao et al., 2025]. Similar to these works, our paper examines base model likelihoods as a mechanism for improving reasoning performance, but crucially, our technique is training-free. Autoregressive MCMC sampling with LLMs. Prior works have explored integrating classic MCMC techniques with autoregressive sampling. Many settings including red-teaming, promptengineering, and personalized generation can be framed as targeting sampling from the base LLM distribution but tilted towards an external reward function. Zhao et al. [2024] proposes learning intermediate value functions that are used in a Sequential Monte Carlo (SMC) framework [Chopin, 2004], where multiple candidate sequences are maintained and updated according to their expected future reward. Similarly, Faria et al. [2024] proposes a Metropolis-Hastings (MH) algorithm, which instead of maintaining multiple candidates performs iterative resampling, again updating according to expected reward. Methodologically, our sampling algorithm is most similar to this latter work, but the crucial difference is that our target sampling distribution is completely specified by the base LLM, avoiding the need for an external reward. Annealed sampling for diffusion. In the statistical physics and Monte Carlo literature, sampling from pα is known as sampling from an annealed, or tempered, distribution [Neal, 1998] and has inspired a new wave of interest within the diffusion community. Indeed, in traditional MCMC sampling, annealing is used as a way to avoid mode-collapse during sampling and more accurately sample from complex multimodal distributions [Łatuszy ́nski et al., 2025]. This has re-emerged as inference-time sampling methods for diffusion that aim to steer a pretrained model towards "tilted distributions" [Du et al., 2023, Kim et al., 2025, Karan et al., 2025, Wang et al., 2025, Kong et al., 2025, Zhang et al., 2025]. Where traditional RL techniques exhibit mode collapse, applications in the physical sciences [Sambridge, 2014] require multimodal sampling. To this end, works such as Du et al. [2023], Wang et al. [2025], Kim et al. [2025] construct sequences of annealed distributions to ease the transition from base diffusion distribution to tilted distribution. Other works [Skreta et al., 2025, Xu et al., 2025] intentionally target sampling from pα for α > 1 as a means of generating higher quality samples from the base diffusion model, which is particularly popular for generating more designable proteins [Geffner et al., 2025]. 3 Preliminaries Let X be a finite vocabulary of tokens, and let X T denote the set of finite sequences of tokens x0:T = (x0, x1, . . . , xT ), where xi ∈X for all i and T ∈Z≥0 is some nonnegative integer. For convenience, for a given t, let x t = (xt+1, . . . , xT ), with similar definitions for x≤t and x≥t. In general, x refers to a token sequence x0:T , where T is implicitly given. Then an LLM defines a distribution p over token sequences X T by autoregressively learning the conditional token distributions p(xt|x p(x′) =⇒ p(x)α p(x′)α > p(x) p(x′) (α ∈[1, ∞]), (2) it follows that exponentiating p increases the relative weight on higher likelihood sequences (x) while decreasing the relative weight on lower likelihood ones (x′) (see Figure 2 for a visualization). A related but well-known sharpening strategy is low-temperature sampling [Wang et al., 2020], which exponentiates the conditional next-token distributions at each step: ptemp(xt|x0 . . . xt-1) = p(xt|xt-1 . . . x0)α P x′ t∈X p(x′ t|xt-1 . . . x0)α , (3) where the temperature is τ = 1/α. A common misconception is that sampling with (3) over T tokens is equivalent to sampling from pα; however, this is false in a subtle yet crucial way, as we illuminate in the following. Proposition 1. Low-temperature sampling does not sample from the power distribution pα. Proof. We show that the associated conditional next-token distributions are distinct at each timestep t. The conditional distribution on xt for pα is given by ppow(xt|x0 . . . xt-1) = P x>t p(x0, . . . , xt, . . . , xT )α P x≥t p(x0, . . . , xt, . . . , xT )α . (4) Using Bayes rule p(xt|xt-1 . . . x0) = p(x0, . . . , xt) p(x0, . . . , xt-1) = P x>t p(x0, . . . , xt, . . . , xT ) P x≥t p(x0, . . . , xt, . . . , xT ), (5) we can rewrite the low-temperature marginal (3) as ptemp(xt|x0 . . . xt-1) = P x>t p(x0, . . . , xt, . . . , xT ) α P x′ t P x>t p(x0, . . . , xt, . . . , xT ) α . (6) Ignoring normalizations for clarity, the relative weight on token xt for sampling from pα is given by a sum of exponents ppow(xt|x t p(x0, . . . , xt, . . . , xT )α. (7) 4 Meanwhile, the relative weight for low-temperature sampling is given by an exponent of sums ptemp(xt|x t p(x0, . . . , xt, . . . , xT ) !α . (8) Since the relative weights of next-token prediction are distinct for each sampling strategy, it follows that the joint distribution over seqeunces must also be distinct for each sampler. Hence, the distribution on sequences given by low-temperature sampling is not the same as the one given by pα. One intuitive way to understand this difference is that low-temperature sampling does not account for how exponentiation sharpens the likelihoods of "future paths" at time step t, instead "greedily" averaging all these future likelihoods (exponent of sums (8)). On the other hand, sampling from pα inherently accounts for future completions as it exponentiates all future paths (sum of exponents (7)) before computing the weights for next-token prediction. This has the following consequence: Observation 1. The power distribution upweights tokens with few but high likelihood future paths, while low-temperature sampling upweights tokens with several but low likelihood completions. Example 1. We can observe this phenomenon with a simple example. Let us consider the token vocabulary X = {a, b} and restrict our attention to two-token sequences (x0, x1): aa, ab, ba, bb. Let p(aa) = 0.00, p(ab) = 0.40, p(ba) = 0.25, p(bb) = 0.25, so that p(x0 = a) = 0.40, p(x0 = b) = 0.50. Let α = 2.0. Under pα, we have ppow(x0 = a) ∝0.002 + 0.402 = 0.160, ppow(x0 = b) ∝0.252 + 0.252 = 0.125, so pα prefers sampling a over b. Under low-temperature sampling, ptemp(x0 = a) ∝(0.00 + 0.40)2 = 0.160, ptemp(x0 = b) ∝(0.25 + 0.25)2 = 0.250, preferring sampling b over a. If pα samples x0 = a, there is only one future path with likelihood 0.40. If ptemp samples x0 = b, there are two future paths ba, bb, but either choice has likelihood 0.25. In other words, even though a has lower conditional likelihood under both p and ptemp, pα upweights a and samples the highest likelihood two-token sequence. b has many future paths contributing to a higher likelihood under p and ptemp, but leads to low likelihood sequences. We provide a stronger formalization of this phenomenon in Appendix A.1. Thus, sampling from pα encourages sampling tokens which have fewer but higher likelihood "future paths", as opposed to tokens with several lower likelihood completions. This type of behavior is immensely valuable for reasoning tasks. For example, choosing "wrong" tokens that have high average likelihoods but trap outputs in low likelihood individual futures are examples of critical windows or pivotal tokens [Li et al., 2025, Abdin et al., 2024], a phenomenon where a few tokens are highly influential in the correctness of language model outputs. In fact, sharp critical windows have been shown to correlate strongly with reasoning failures [Li et al., 2025]. Instead, embedded in sampling from the power distribution is an implicit bias towards planning for future high likelihood tokens. 4.2 The Metropolis-Hastings Algorithm Now that we have seen how sampling from pα can in theory assist the underlying LLM's ability to reason, our aim now turns towards proposing an algorithm to accurately sample from it. Given an LLM p, we have access to the values pα over any sequence length; however, these values are unnormalized. Direct sampling from the true probabilities requires normalizing over all sequences (x0, . . . , xT ) ∈X T , which is computationally intractable. To get around this, we invoke a Markov Chain Monte Carlo (MCMC) algorithm known as Metropolis-Hastings (MH) [Metropolis et al., 1953], which targets exactly what we want: approximate sampling from an unnormalized probability distribution. The MH algorithm constructs a 5 Figure 3: Illustrating Metropolis-Hastings with random resampling. A random index t is selected and a new candidate is generated by resampling. Based on the relative likelihoods, the candidate is accepted or rejected, and the process repeats. Markov chain of sample sequences (x0, x1, . . . , xn) using an arbitrary proposal distribution q(x|xi) to select the next candidate xi+1. With probability A(x, xi) = min 1, pα(x) · q(xi|x) pα(xi) · q(x|xi) , (9) candidate x is accepted as xi+1; otherwise, MH sets xi+1 = xi. This algorithm is especially convenient as it only requires the relative weights given by pα (as the normalization weights in A cancel) and works with any generic but tractable sampler q with minimal restrictions. Remarkably, for large enough n, this process converges to sampling from the target distribution pα under the following (quite minimal) conditions on the proposal distribution [Neal, 1993]: Definition 1. The proposal distribution q is irreducible if for any set X with nonzero mass under the target distribution pα, q has nonzero probability of eventually sampling from X. The proposal is aperiodic if the induced chain of samples does not return to the same sample after a fixed interval number of steps. Thus, we must simply ensure that our proposal distribution satisfies irreducibility and aperiodicity, and Metropolis-Hastings takes care of the rest. On a practical level, we would also like both q(x|xi) and its reverse q(xi|x) to be easily computable. Consider the following family of random resampling proposal distributions (see Figure 3). Let pprop be a proposal LLM. With uniform probability 1 T , select a random t ∈[1, T] and resample the sequence starting at index t using pprop. Then the transition likelihood q(x|xi) is simply the likelihood of the resampling. Note that at each candidate selection step, we have a nonzero probability of transitioning between any two sequences x, x′ ∈X, since with some probability we can always resample as early as the beginning of x. This ensures our proposal distribution is both irreducible and aperiodic. Moreover, q(xi|x) is easy to calculate by symmetry, since we can treat xi as a resampled version of x. With the flexibility endowed by Metropolis-Hastings, we can choose the proposal LLM pprop to be any LLM with any sampling strategy (e.g., low-temperature sampling). 4.3 Power Sampling with Autoregressive MCMC A direct implementation of Metropolis-Hastings for LLMs would involve initializing with a sampled token sequence of length T, subsequently generating new candidates of length T with (9) over many, many iterations. This process is computationally expensive, however, due to the repeated, full sequence inference calls to the LLM. In fact, the main downside to MCMC algorithms in practice is the potential for an exponential mixing time [Gheissari et al., 2017], where a poor choice of initialization or proposal distribution can result in an exponentially large number of samples required before convergence to the target distribution. This problem is exacerbated if the sample space has high dimensionality [Bandeira et al., 2022, Schmidler and Woodard, 2013], which is precisely exhibited by the sequence space of tokens X T , especially for long sequences/large values of T. To remedy this, we propose an algorithm that leverages the sequential structure of autoregressive sampling. We define a series of intermediate distributions which we progressively sample from, until converging to the target distribution pα. In particular, samples from one intermediate distribution initiate a Metropolis-Hastings process for the next, helping avoid pathological initializations. 6 Algorithm 1: Power Sampling for Autoregressive Models Input : base p; proposal pprop; power α; length T Hyperparams: block size B; MCMC steps NMCMC Output : (x0, . . . , xT ) ∼pα 1 Notation: Define the unnormalized intermediate target πk(x0:kB) ∝p(x0:kB)α. 2 for k ←0 to ⌈T B ⌉-1 do 3 Given prefix x0:kB, we wish to sample from πk+1. Construct initialization x0 by extending autoregressively with pprop: x(0) t ∼pprop xt | x 1. Moreover, our performance curve supersedes that of the base model until finally converging in performance. In particular, we are able to achieve GRPO-level single-shot performance without compromising multi-shot performance (see Appendix A.2 for other domains), addressing a long-standing downside to RL-posttraining. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Pass@k Accuracy MATH500 Ours Base GRPO Figure 5: Pass@k performance on MATH500. We plot the pass@k accuracy (correct if at least one of k samples is accurate) of power sampling (ours) and RL (GRPO) relative to the base model (Qwen2.5-Math7B). Our performance curve is strictly better than both GRPO and the base model, and our pass rate at high k matches the base model, demonstrating sustained generation diversity. The effect of power distributions. The two most important hyperparameters for power sampling are the choice of α and the number of MCMC (resampling) steps during sequence generation 10 Figure 6: Effect of hyperparameters on power sampling. Left: We plot MATH500 accuracy across model families for various values of α. Right: We plot the increase in accuracy of power sampling on Qwen models as the number of MCMC steps increases. NMCMC. At the extremes, choosing α = 1.0 samples from the base model directly, while taking α →∞has the effect of deterministically accepting any resampled sequence that strictly increases the likelihood. Of course, even though higher base model likelihoods correlate with better reasoning (Figure 4), directly optimizing for likelihood is not necessarily optimal for reasoning, suggesting an ideal intermediate value of α. In Figure 6, we display MATH500 accuracies across various values of α and find that an intermediate α = 4.0 outperforms other values, as expected. Noticeably, the accuracies of power sampling remain relatively stable beyond α ≥2.0, suggesting that power sampling in practice is relatively robust to the choice of α. Test-time scaling with MCMC steps. On the other hand, NMCMC toggles the inference-time compute expended by our algorithm, providing a natural axis for test-time scaling. In Section 4.3 we raised the notion of a mixing time, or the number of MCMC steps required before adequately sampling from the target distribution. In our case, we expect that the fewer MCMC steps we take, the further our algorithm samples from the target pα. We plot performance dependence on NMCMC in Figure 6 and notice a steady increase in accuracy until NMCMC = 10, beyond which accuracy remains roughly stable (not plotted). The accuracy difference from using fewer MCMC steps is noticeable but no more than 3-4% between NMCMC = 2 and NMCMC = 10. However, the jump in accuracy by using at least two steps as opposed to none is substantial (3-4%). We can even compute the total amount of tokens generated by our method relative to running GRPO. From (12), our sampler generates 1 4B · NMCMCT times as many tokens as standard inference to generate a sequence of length T. Plugging in our experimental parameters NMCMC = 10, T = 679 (our average output length for MATH500) and B = 192, running inference with power sampling incurs a multiplier of 8.84× the number of tokens as running standard inference. Since GRPO generates multiple rollouts per example during training, our method incurs roughly the same inference cost as one epoch of GRPO training, assuming 8 rollouts per sample with identical dataset sizes. Typically though, one GRPO epoch is still more expensive as it uses 16 rollouts and a training set that is larger than MATH500. 6 Conclusion In this work, we present an algorithm that samples directly from a base model without any additional training or access to an external signal, achieving a single-shot reasoning performance that is on par with, and sometimes even better than, that of a state-of-the-art RL-posttraining algorithm. We use the discussion of RL distribution sharpening to motivate defining the power distribution as a valuable target distribution for reasoning. Although exact power distribution sampling is intractable, we employ classic MCMC techniques alongside the sequential structure of autoregressive generation to define our power sampling algorithm, which demonstrates strong empirical performance. 11 Our results suggest that base model capabilities are underutilized at sampling time and point towards a close relationship between high likelihood regions of the base model and strong reasoning capabilities. Employing additional compute at sampling-time with a stronger understanding of base model capabilites offers a promising direction for expanding the scope of reasoning beyond verifiability. 7 Acknowledgments A.K. would like to thank the Paul and Daisy Soros Foundation, NDSEG Fellowship, and Kempner Institute for their support. References Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint , 2025. 1, 3, 10 Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model. arXiv preprint , 2025. 1, 3 Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint , 2021. 1 Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, R ́emi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with AlphaCode. arXiv preprint , 2022. 1 David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. GPQA: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024. 1, 8 Andre He, Daniel Fried, and Sean Welleck. Rewarding the unlikely: Lifting GRPO beyond distribution sharpening. arXiv preprint , 2025. 1 Rulin Shao, Shuyue Stella Li, Rui Xin, Scott Geng, Yiping Wang, Sewoong Oh, Simon Shaolei Du, Nathan Lambert, Sewon Min, Ranjay Krishna, Yulia Tsvetkov, Hannaneh Hajishirzi, Pang Wei Koh, and Luke Zettlemoyer. Spurious rewards: Rethinking training signals in RLVR. arXiv preprint , 2025. 1, 3, 8 Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model? arXiv preprint , 2025. URL https://arxiv.org/abs/2504.13837. 1 Yuda Song, Julia Kempe, and R ́emi Munos. Outcome-based exploration for LLM reasoning. arXiv preprint , 2025. URL https://arxiv.org/abs/2509.06941. 1, 10 Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseek-math: Advancing mathematical reasoning through step-by-step exploration. arXiv preprint , 2024. 2, 3 Mihir Prabhudesai, Lili Chen, Alex Ippoliti, Katerina Fragkiadaki, Hao Liu, and Deepak Pathak. Maximizing confidence alone improves reasoning. arXiv preprint , 2025. URL https://arxiv.org/abs/2505.22660. 2, 3, 10 Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. In NeurIPS, volume 35, pages 27730-27744, 2022. 3 12 Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. T ̈ulu 3: Pushing frontiers in open language model post-training. arXiv preprint , 2024. 3 Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerlzoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint , 2025. URL https://arxiv.org/abs/2503.18892. 3 Xuandong Zhao, Zhewei Kang, Aosong Feng, Sergey Levine, and Dawn Song. Learning to reason without external rewards. arXiv preprint , 2025. URL https://arxiv.org/ abs/2505.19590. 3 Stephen Zhao, Rob Brekelmans, Alireza Makhzani, and Roger Grosse. Probabilistic inference in language models via twisted sequential monte carlo. arXiv preprint , 2024. URL https://arxiv.org/abs/2404.17546. 3 Nicolas Chopin. Central limit theorem for sequential monte carlo methods and its application to bayesian inference. The Annals of Statistics, 32(6): 2385-2411, 2004. URL https:// projecteuclid.org/journals/annals-of-statistics/volume-32/issue-6/ Central-limit-theorem-for-sequential-Monte-Carlo-methods-and-its/10. 1214/009053604000000698.full. 3 Gonc ̧alo R. A. Faria, Sweta Agrawal, Ant ́onio Farinhas, Ricardo Rei, Jos ́e G. C. de Souza, and Andr ́e F. T. Martins. Quest: Quality-aware metropolis-hastings sampling for machine translation. In NeurIPS, 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/ file/a221d22ff6a33599142c8299c7ed06bb-Paper-Conference.pdf. 3 Radford M. Neal. Annealed importance sampling. arXiv preprint physics/9803008, 1998. URL https://arxiv.org/abs/physics/9803008. 3 Krzysztof Łatuszy ́nski, Matthew T. Moores, and Timoth ́ee Stumpf-F ́etizon. Mcmc for multi-modal distributions. arXiv preprint , 2025. URL https://arxiv.org/abs/2501. 05908v1. 3 Yilun Du, Conor Durkan, Robin Strudel, Joshua B Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, and Will Sussman Grathwohl. Reduce, reuse, recycle: Compositional generation with energy-based diffusion models and mcmc. In International conference on machine learning, pages 8489-8510. PMLR, 2023. 3 Sunwoo Kim, Minkyu Kim, and Dongmin Park. Test-time alignment of diffusion models without reward over-optimization. arXiv preprint , 2025. URL https://arxiv.org/ abs/2501.05803. 3 Aayush Karan, Kulin Shah, and Sitan Chen. Reguidance: A simple diffusion wrapper for boosting sample quality on hard inverse problems. arXiv preprint , 2025. URL https: //arxiv.org/abs/2506.10955. 3 Yanwei Wang, Lirui Wang, Yilun Du, Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia P ́erez-D'Arpino, Dieter Fox, and Julie Shah. Inference-time policy steering through human interactions. In 2025 IEEE International Conference on Robotics and Automation (ICRA), pages 15626-15633. IEEE, 2025. 3 Lingkai Kong, Yuanqi Du, Wenhao Mu, Kirill Neklyudov, Valentin De Bortoli, Dongxia Wu, Haorui Wang, Aaron M. Ferber, Yian Ma, Carla P. Gomes, and Chao Zhang. Diffusion models as constrained samplers for optimization with unknown constraints. In Yingzhen Li, Stephan Mandt, Shipra Agrawal, and Emtiyaz Khan, editors, Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, volume 258 of Proceedings of Machine Learning Research, pages 4582-4590. PMLR, 2025. URL https://proceedings.mlr.press/v258/kong25b. html. 3 Xiangcheng Zhang, Haowei Lin, Haotian Ye, James Zou, Jianzhu Ma, Yitao Liang, and Yilun Du. Inference-time scaling of diffusion models through classical search. arXiv preprint , 2025. 3 13 Malcolm Sambridge. A parallel tempering algorithm for probabilistic sampling and optimization. Geophysical Journal International, 196(1):357-374, 2014. URL https://academic.oup.com/gji/article/196/1/357/585739. 3 Marta Skreta, Tara Akhound-Sadegh, Viktor Ohanesian, Roberto Bondesan, Al ́an Aspuru-Guzik, Arnaud Doucet, Rob Brekelmans, Alexander Tong, and Kirill Neklyudov. Feynman-kac correctors in diffusion: Annealing, guidance, and product of experts. arXiv preprint , 2025. URL https://arxiv.org/abs/2503.02819. 3 Yanbo Xu, Yu Wu, Sungjae Park, Zhizhuo Zhou, and Shubham Tulsiani. Temporal score rescaling for temperature sampling in diffusion and flow models. arXiv preprint , 2025. URL https://arxiv.org/abs/2510.01184. 3 Tomas Geffner, Kieran Didi, Zuobai Zhang, Danny Reidenbach, Zhonglin Cao, Jason Yim, Mario Geiger, Christian Dallago, Emine Kucukbenli, and Arash Vahdat. Proteina: Scaling flow-based protein structure generative models. arXiv preprint , 2025. URL https:// arxiv.org/abs/2503.00710. 3 Pei-Hsin Wang, Sheng-Iou Hsieh, Shih-Chieh Chang, Yu-Ting Chen, Jia-Yu Pan, Wei Wei, and DaChang Juan. Contextual temperature for language modeling. arXiv preprint , 2020. URL https://arxiv.org/abs/2012.13575. 4 Marvin Li, Aayush Karan, and Sitan Chen. Blink of an eye: A simple theory for feature localization in generative models. arXiv preprint , 2025. URL https://arxiv.org/abs/ 2502.00921. 5, 16 Marah Abdin, Jyoti Aneja, Harkirat Behl, S ́ebastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Xin Wang, Rachel Ward, Yue Wu, Dingli Yu, Cyril Zhang, and Yi Zhang. Phi-4 technical report. arXiv preprint , 2024. URL https://arxiv.org/abs/2412.08905. 5, 8, 16 Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21(6):1087-1092, 1953. 5 Radford M Neal. Probabilistic inference using markov chain monte carlo methods. (review paper / technical report), 1993. 6 Reza Gheissari, Eyal Lubetzky, and Yuval Peres. Exponentially slow mixing in the mean-field swendsen-wang dynamics. arXiv preprint , 2017. 6 Afonso S. Bandeira, Antoine Maillard, Richard Nickl, and Sven Wang. On free energy barriers in gaussian priors and failure of cold start mcmc for high-dimensional unimodal distributions. arXiv preprint , 2022. URL https://arxiv.org/abs/2209.02001. 6 Scott C. Schmidler and Dawn B. Woodard. Lower bounds on the convergence rates of adaptive mcmc methods. Technical report, Duke University / Cornell University, 2013. URL https: //www2.stat.duke.edu/~scs/Papers/AdaptiveLowerBounds_AS.pdf. 6 Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview. net/forum?id=v8L0pN6EOi. 8 M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and 14 W. Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374. 8 Yann Dubois, Bal ́azs Galambosi, Percy Liang, and Tatsunori B. Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint , 2024. URL https://arxiv.org/abs/2404.04475. 8 15 A Appendix A.1 Additional Theoretical Discussion In this section, we provide a stronger formalization of the phenomenon that power sampling downweights tokens that trap outputs in low-likelihood futures while low-temperature sampling does not. Proposition 2 (Informal). Power sampling upweights tokens with small support but high likelihood completions, while low-temperature sampling upweights tokens with large support but low likelihood completions. Definition 2. For the rest of this section, fix a prefix x0:t-1. We say that xt has marginal weight ε under the conditional next-token distribution if P x>t p(x0, . . . , xt, . . . xT ) = ε. We consider a simplified model of the "critical window" or "pivotal token" phenomenon [Li et al., 2025, Abdin et al., 2024], which refers to intermediate tokens that strongly influence the quality of the final generation. We differentiate between pivotal tokens that lead to high-likelihood futures vs. low-likelihood ones. Definition 3. At one extreme, a pivotal token maximally induces a high-likelihood completion if it places its entire marginal weight ε on one future (singular support); i.e., for only one choice of x > t is p(x0, . . . , xt, . . . , xT ) nonzero. We call such a token a positive pivotal token. Definition 4. At the other extreme, a pivotal token minimizes the likelihood of any future if its entire marginal weight ε is uniformly distributed across N future completions. In other words, there exist N completions x > t such that p(x0, . . . , xt, . . . , xT ) are all nonzero with likelihood ε N . We call such a token a negative pivotal token. Our simplified model of high and low-likelihood futures examines when positive pivotal tokens are favored over negative pivotal tokens under a given sampling distribution. In particular, we show that power sampling can upweight a positive pivotal token over a negative one even if the latter has a higher marginal weight, whereas low-temperature sampling always upweights the negative pivotal token in such a scenario. Of course, whenever a positive pivotal token has higher marginal weight, both power sampling and low-temperature sampling will upweight it. Proposition 3. Let xt be a positive pivotal token with marginal weight ε, and let x′ t be a negative pivotal token with marginal weight ε′ and support N. Then if ε′ N 1-1/α ε′ N (15) and thus ε > ε′ N , establishing that the future completion likelihood of xt is greater than that of x′ t (i.e. the assignment of positive and negative pivotal tokens is consistent). Now, if ε ε′α N α-1 ⇐⇒ε > ε′ N 1-1/α , token xt will be upweighted relative to token x′ t. In other words, the marginal weight on xt can be less than the mass on x′ t under p, but if the completion for xt has higher likelihood than any individual completion for x′ t, power sampling favors xt over x′ t. A.2 Pass@k Accuracies over Multiple Domains In this section, we plot the pass@k performance of power sampling, GRPO, and the base model (Qwen2.5-Math-7B) over MATH500, GPQA, and HumanEval to demonstrate that our sampling 16 algorithm is highly performant at both single-shot and multi-shot reasoning while maintaining response diversity. Power sampling is plotted with α = 4.0 for MATH500 and GPQA and α = 1.67 for HumanEval (this temperature exhibits slightly better results at earlier k). In all cases, both indomain and out-of-domain for GRPO, power sampling has near universally better performance than both GRPO and the base model in pass@k for k > 1 and matches, if not exceeds, the base model upper bound at large k. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Pass@k Accuracy MATH500 Ours Base GRPO Figure 7: Pass@k performance on MATH500 (Qwen2.5-Math-7B). 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.3 0.4 0.5 0.6 0.7 0.8 Pass@k Accuracy HumanEval Ours Base GRPO Figure 8: Pass@k performance on HumanEval (Qwen2.5-Math-7B). One thing to note about these plots is that the loss in diversity varies noticeably from benchmark to benchmark. MATH500 and GPQA clearly show that GRPO has a significantly lower pass@k performance and diversity even for smaller k, while on HumanEval GRPO exhibits better pass@k 17 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 k 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Pass@k Accuracy GPQA Ours Base GRPO Figure 9: Pass@k performance on GPQA (Qwen2.5-Math-7B). than the base model until k = 16. We speculate this might be due to the fact that while MATH500 and GPQA are graded on a "singular" answer, HumanEval is based on completing code where multiple solutions might be accepted, indicating a reduction in diversity may emerge but at much higher k. A.3 More Qualitative Examples Can you solve the following math problem? Write 3 20 as a decimal. Please reason step by step, and put your final answer within }. Correct answer: 0.15 System Answer Ours To convert the fraction 3 20 to a decimal, we can follow these steps: 1. Identify the numerator and the denominator of the fraction. In this case, the numerator is 3 and the denominator is 20. 2. Divide the numerator by the denominator. In this case, we divide 3 by 20. 3. Perform the division. The result is 0.15. Therefore, the fraction 3 20 as a decimal is 0.15 . GRPO To convert the fraction 3 20 to a decimal, we can follow these steps: 1. Understand the fraction: The fraction 3 20 means 3 divided by 20. 2. Perform the division: We need to divide 3 by 20. Since 3 is less than 20, we will add a decimal point and zeros to 3 to make the division possible. 3. Set up the division: Write 3 as 3.000 (adding three zeros for precision). 4. Divide: - 20 goes into 30 once (since 20 × 1 = 20). - Subtract 20 from 30 to get a remainder of 10. - Bring down the next 0 to make it 100. - 20 goes into 100 five times (since 20 × 5 = 100). - Subtract 100 from 100 to get a remainder of 0. 5. Combine the results: The quotient from the division is 0.15. Therefore, the decimal representation of 3 20 is 0.15 . Table 3: Comparison on MATH500: Qwen2.5-Math-7B. We present an example where our method and GRPO are correct. 18 The Fib4 number sequence is a sequence similar to the Fibonacci sequence defined as: fib4(0)=0, fib4(1)=0, fib4(2)=2, fib4(3)=0, and for n ≥4, fib4(n) = fib4(n-1) + fib4(n-2) + fib4(n-3) + fib4(n-4). Write a function to efficiently compute the n-th element without recursion. System Completion Passed Ours # Base cases if n == 0: return 0 elif n == 1: return 0 elif n == 2: return 2 elif n == 3: return 0 # Initialize the first four elements fib4_values = [0, 0, 2, 0] # Compute the rest of the sequence iteratively for i in range (4, n+1): next_value = ( fib4_values [i -1] + fib4_values [ i -2] + fib4_values [i -3] + fib4_values [ i -4]) fib4_values .append(next_value ) # Return the n-th element return fib4_values [n] true GRPO a, b, c, d = 0, 0, 2, 0 for _ in range(n): a, b, c, d = b, c, d, a + b + c + d return d false Table 4: HumanEval comparison on Phi-3.5-mini-instruct. 19 Can you solve the following math problem? What integer n satisfies 0 ≤n < 18 and n ≡-11213141 (mod 18) ? Please reason step by step, and put your final answer within }. Correct answer: 13 Method Answer Ours First, we perform the division of 11213141 by 18: 11213141 ÷ 18 ≈622952.277777 . . . The integer part of the quotient is 622952. Now, we multiply 622952 by 18 to find the product: 622952 × 18 = 11213136 Next, we subtract this product from 11213141 to find the remainder: 11213141 -11213136 = 5 So, the remainder when 11213141 is divided by 18 is 5. This means: 11213141 ≡5 (mod 18) Therefore: -11213141 ≡-5 (mod 18) Since we need a non-negative remainder, we convert -5 to a positive equivalent by adding 18: -5 + 18 = 13 Thus: -11213141 ≡13 (mod 18) The integer n that satisfies 0 ≤n < 18 and n ≡-11213141 (mod 18) is: 13 . GRPO Step 1: Find the remainder when 11213141 is divided by 18. We can use the property of modular arithmetic that states a ≡b (mod m) if and only if a mod m = b mod m. So, we need to compute 11213141 mod 18. To simplify this, we can use the fact that the remainder of a number when divided by 18 is the same as the remainder of the sum of its digits when divided by 18 (although this is not always the most efficient method, it can be used here for simplicity). First, let's sum the digits of 11213141: 1 + 1 + 2 + 1 + 3 + 1 + 4 + 1 = 14. Now, we need to find the remainder when 14 is divided by 18. Since 14 is less than 18, the remainder is simply 14. Therefore, 11213141 ≡14 (mod 18). Step 2: Find the remainder when -11213141 is divided by 18. Since 11213141 ≡14 (mod 18), it follows that -11213141 ≡-14 (mod 18). We need to convert -14 to a positive remainder between 0 and 17. To do this, we add 18 to -14: -14 + 18 = 4. Therefore, -11213141 ≡4 (mod 18). The integer n that satisfies 0 ≤n < 18 and n ≡-11213141 (mod 18) is 4 . Table 5: MATH500 comparison between our sampling algorithm and GRPO for Qwen2.5-Math-7B. Here is an example where GRPO gets an incorrect answer, while our sampling algorithm succeeds. Our sample answer uses a distinct method altogether. 20
2510.14896
Leveraging Multimodal LLM Descriptions of Activity for Explainable Semi-Supervised Video Anomaly Detection Furkan Mumcu1,† Michael J. Jones2,∗ Anoop Cherian3,∗ Yasin Yilmaz4,† †University of South Florida ∗Mitsubishi Electric Research Laboratories (MERL) 1furkan@usf.edu 2mjones@merl.com 3cherian@merl.com 4yasiny@usf.edu Abstract Existing semi-supervised video anomaly detection (VAD) methods often struggle with detecting complex anomalies involving object interactions and generally lack explainability. To overcome these limitations, we propose a novel VAD framework leveraging Multi- modal Large Language Models (MLLMs). Unlike previ- ous MLLM-based approaches that make direct anomaly judgments at the frame level, our method focuses on extracting and interpreting object activity and interac- tions over time. By querying an MLLM with visual inputs of object pairs at different moments, we gener- ate textual descriptions of the activity and interactions from nominal videos. These textual descriptions serve as a high-level representation of the activity and inter- actions of objects in a video. They are used to detect anomalies during test time by comparing them to tex- tual descriptions found in nominal training videos. Our approach inherently provides explainability and can be combined with many traditional VAD methods to further enhance their interpretability. Extensive experiments on benchmark datasets demonstrate that our method not only detects complex interaction-based anomalies effec- tively but also achieves state-of-the-art performance on datasets without interaction anomalies. 1. Introduction Video anomaly detection (VAD) has emerged as an ac- tive research field due to its significant applications in security and public safety. The rapid growth of surveil- lance video data makes it impractical for humans to monitor thoroughly. Consequently, VAD algorithms play a vital role in identifying unusual events in video footage, enabling human operators to focus their atten- tion on reviewing these flagged incidents. In this paper, we focus on the formulation in which nominal videos (also called training videos) containing only normal ac- tivities in a particular scene are provided for learning a model. The goal is to identify when and where anoma- lous activities happen within test videos of the same scene. Along with other papers [3, 44], we refer to this version of the problem as semi-supervised VAD because of the availability of normal, but not anomalous, training video. In Table 1, we summarize the properties of some recent papers in this area to highlight the differences be- tween our work and other recent approaches. Existing semi-supervised video anomaly detection methods suffer from several weaknesses. Recent studies struggle to provide explainability for detected anoma- lies. While some existing work [36, 37] explored more explainable approaches by interpreting the input fea- tures, they do not offer direct, textual explanations. Furthermore, the ComplexVAD dataset [28] introduced interaction-based anomalies and demonstrated that de- tecting these anomalies is even more challenging than identifying the simpler anomalies presented in earlier datasets. Hence, we propose a new method to address the drawbacks present in existing works, including the lack of focus on complex anomalies and explainabil- ity. By utilizing a Large Language Model-based detec- tion approach, we introduce an effective video anomaly detection framework that excels at identifying complex anomalies and features built-in explainability. Large Language Models, especially in their multi- modal forms (MLLMs), have demonstrated strong ca- pabilities in interpreting and accurately describing com- plex situations from both text and images. GPT-4 with vision [1], for example, can analyze visual inputs and produce detailed, contextually relevant descriptions. Ex- isting work has explored the use of MLLMs for video anomaly detection. Most of these methods are designed for a multi-scene, weakly supervised setting. While such approaches are effective for detecting a common set of anomalies across multiple scenes, they fall short in iden- tifying scene-specific anomalies, which are central to the semi-supervised, single-scene VAD task. To address this limitation, we propose a novel MLLM-based frame- work that: (1) considers object interactions in addition to individual object activities, (2) leverages an MLLM 1 arXiv:2510.14896v1 [cs.CV] 16 Oct 2025 Method Year Focus Features Task Objective MLLM Interaction Explain- Semi Scene Usage Anomalies ability Supervised Specific VadCLIP [47] 2023 Using CLIP for multi-scene weakly supervised VAD G# # # # # Vera [50] 2025 Optimizing prompts for MLLM-based weakly supervised VAD # # # Holmes-VAU [53] 2025 MLLM finetuning for anomaly understanding across multi-scene # # # AnomalyRuler [46] 2024 Scene-specific rule generation and frame-level descriptions via MLLMs # G# VLAVAD [16] 2024 Using frame level MLLM descriptions for semi-supervised VAD # # EVAL [36] 2023 Using object-level features for explainable VAD # # G# T-EVAL [37] 2024 Object-level, tracklet-based explainable VAD # # G# Scene-Graph [28] 2025 Exemplar and scene graph for interaction-based VAD # G# Ours 2026 Object-level MLLM descriptions for interaction-based VAD Table 1. Summary of properties and differences of some recent work in VAD. Our work is the only one designed for semi- supervised, scene-specific VAD that handles interaction anomalies and provides textual explanations of anomalies. to generate high-level descriptions used in an exemplar- based model that can be easily extended without retrain- ing deep networks for each new scene, and (3) enables both spatial and temporal localization of anomalies. In our method, we adopt an object-level feature ex- traction strategy similar to [28], using an object detector and tracker. We then employ an MLLM agent to de- scribe object activities and interactions by querying it with pairs of frames, separated in time by a fixed in- terval, to capture temporal information. A key novelty is using these MLLM-generated textual descriptions di- rectly as features for modeling normal activity. By re- moving redundant descriptions (Section 3.2), we con- struct a final set of textual exemplars representing nom- inal videos. This exemplar set is then used to detect anomalies: descriptions from anomalous interactions are expected to deviate from nominal ones. In summary, our contributions are: • We introduce the first MLLM-based approach for video anomaly detection specifically designed to iden- tify complex anomalies caused by object interactions. • We propose a novel use of MLLMs for VAD: instead of directly deciding the presence of an anomaly, as done in prior work, our method first derives a repre- sentation (using MLLM descriptions) of what is nor- mal in a scene and then identifies anomalies based on deviations from this normal representation. • We demonstrate that our method offers built-in ex- plainability. Furthermore, we show that it can be in- tegrated with many existing VAD approaches to en- hance their interpretability. • We evaluate our method on multiple benchmark datasets and show that it achieves state-of-the-art per- formance. 2. Related Work Many different formulations of the video anomaly detec- tion (VAD) problem have been studied in recent years: semi-supervised [13, 15, 19, 42], weakly supervised [38], unsupervised [51], training-free [52], single-scene [32] and multi-scene [22]. Each of these formulations has applicability in different scenarios. This paper fo- cuses on the single-scene, semi-supervised version of video anomaly detection because it corresponds to the most common use case that we have observed in the real- world; namely, that of a surveillance camera at a partic- ular location in the world in which video of normal ac- tivity is abundant but video containing anomalies is rare and expensive to collect. Furthermore, what is anoma- lous in one scene may be normal in another. This along with the fact that location-dependent anomalies (e.g. a pedestrian may be anomalous in some locations but not others) are common in the single-scene version of VAD but do not occur in the multi-scene version of VAD im- plies that the multi-scene version of VAD is not a gen- eralization of single-scene VAD [32]. Solutions for one version generally do not work well on the other version. Early work on the single-scene, semi-supervised ver- sion of VAD includes many papers based on frame re- construction [13, 15] or frame prediction [19, 42]. One big disadvantage of such approaches is that they require training a deep network on video from each new scene. To avoid this disadvantage, others [7, 8, 28, 30, 31, 36, 37] used an exemplar-based model that extracts a set of representative features from normal video but does not require any deep network training for each scene. We also use this idea in our work. 2.1. MLLMs in Video Anomaly Detection Recently, researchers have explored ways of taking ad- vantage of large language models or vision language models for VAD. These approaches, however, vary sig- nificantly depending on the VAD problem formulation. Weakly-Supervised and Training-Free VAD. Many approaches to VAD that use an MLLM try to use its built-in knowledge of what is normal instead of learning normality from normal training video [23, 50, 52]. A disadvantage of such approaches is that they cannot handle scenarios such as a boxing gym in which activity that is typically rare and anomalous (fighting) is actually very common. Another important difference between our work and most other MLLM- 2 based approaches to VAD is that others have focused on the weakly supervised, multi-scene version of VAD [17, 23, 45, 50, 52]. Semi-Supervised VAD. Compared to the weakly- supervised setting, the capabilities of MLLMs are un- derstudied in the semi-supervised setting. Wu et al. [46] addresses semi-supervised, single-scene VAD, but with a fundamentally different methodology. Their approach uses an MLLM to first generate a set of textual rules from normal video and then applies these rules during inference. In contrast, our method directly compares MLLM-generated text descriptions between normal and test videos without an intermediate rule-generation step. Furthermore, their model processes full video frames, whereas we employ an object-centric approach that an- alyzes regions around objects and their pairs. This key design choice allows our method to model object inter- actions and perform both spatial and temporal localiza- tion, while their approach is limited to temporal local- ization only. Li and Jiang [16] also focuses on semi-supervised single scene anomaly detection and proposes an MLLM- based method. Like our method, they first detect and track objects and then query an MLLM to yield high- level descriptions of objects and their activity. However, they do not model object interactions, and do not use an exemplar-based model. In real-world scenarios, many anomalies are charac- terized by an unusual interaction between two objects. Therefore, one of the important aspects of our method is its modeling of object interactions in the scene to iden- tify anomalous interactions. There have been a few other methods that also modeled interactions [5, 9, 28, 39, 40]. While there are many differences among these meth- ods, one of the main differences compared to our current work is that none of them take advantage of MLLMs. In our approach, the textual descriptions of object activity and interactions provided by an MLLM yield powerful cues for a high-level understanding of a scene. 3. Our Approach We propose a novel method to detect anomalies in video. The pipeline of our method (which we call MLLM- EVAD, for MLLM-based Explainable Video Anomaly Detection) is depicted in Figure 1. The main idea is to use a set of textual descriptions of the activity seen in normal training video of a scene as the model of normality. These textual descriptions are obtained us- ing an MLLM. Anomalies in testing video are found by comparing textual descriptions of the activity in testing frames with the set of normal descriptions. A test de- scription that is dissimilar from all normal descriptions indicates an anomaly. The initial step of our method is detecting and tracking objects followed by identify- ing pairs of objects that are in close spatial proximity and therefore likely to be interacting. Then, for each frame and a future frame offset by a fixed time interval, we generate crops around pairs of interacting objects as well as single objects that are not close to other objects. The crops are given to an MLLM to obtain natural lan- guage descriptions of objects’ activities. These descrip- tions are embedded into a vector space using a sentence embedding model. By applying an exemplar selection algorithm to all of the sentence embeddings found in nominal training video, we construct representative sets of embeddings for both object pairs and single objects. Anomaly detection is performed by comparing sentence embeddings extracted from testing frames to the exem- plar sets, and assigning anomaly scores based on their distance to the most similar exemplar. 3.1. Object Detection and Tracking In our proposed method, we want to build a model of what objects do and how they interact in video contain- ing normal activity. This requires detecting objects and tracking them across frames in training videos. We also find pairs of objects that are likely to be interacting ac- cording to their spatial proximity. A video V is a collection of M frames {Fi}M i=1, such that V = [F1, F2, ..., FM]. Each frame Fi is sent to the object detector O, which returns X number of detected objects. The object detector provides, for each object o, the location l = (xo, yo) which is the x and y coor- dinates of the center of the object, b = (wo, ho) which is the width and height of the bounding box for the ob- ject, and class id c. The output of the object detector is then O(F) = [o1, o2, ..., oX], where each object oi is represented by oi = [b, c, l]. After detecting objects in a frame, they are then tracked using an object tracker. Each detected object o is sent to the object tracker, which returns x and y coordinates for that object in the subse- quent frames. In our method, we track objects for 30 frames. Therefore, for every object, we acquire the tra- jectory θ = {(x1, y1), (x2, y2), ..., (x30, y30)}. Next, we pair objects according to their spatial prox- imity in the frame. To determine which objects should be paired, we need to calculate the 3D distances between each pair of objects. To calculate the 3D distance we need to derive the 3D coordinates of the object locations by estimating a pseudo-depth since we do not have ac- cess to actual depth measurements. Given two objects o1 and o2, we have 2D coordinates l1 = (x1, y1) and l2 = (x2, y2). Then we define a relative depth, z, be- tween two objects by taking the absolute difference of y values such that z = |y1 −y2|. This estimate of pseudo- depth assumes that objects are resting on the ground plane and the ground plane is farther from the camera 3 Object Detection and Tracking MLLM Visual Input: Crop of two objects enclosed by bounding boxes, from two frames at time t and t+30. . . . Text Encoder Answer sentence r: Two people are walking side by side on the crosswalk without interacting. X Embedding of r Exemplar set of embeddings Training mode: If score is greater than the threshold add embedding of r to exemplar set. Inference mode: Use score for anomaly detection. . . . Prompt: Briefly describe what the people in the enclosed regions of these images are doing. The two images were taken one second apart. Score calculation: Eq. (1) & Eq. (2) Video Frames Figure 1. The pipeline of our method (MLLM-EVAD) for frame to textual description generation with the help of an ob- ject detector, object tracker and MLLM. Please see the text for more explanation. the closer it is to the top of the image. The 3D distance d can then be calculated by taking the Euclidean distance between 3D coordinates (x1, y1, z) and (x2, y2, 0). Any objects that have a 3D distance d smaller than a prede- termined threshold h are paired. Due to applying the threshold, not every single object is necessarily con- nected to another object, which leads to having single objects in addition to object pairs. 3.2. Generating Textual Descriptions At the end of the object detection and tracking stage, for each frame Ft, we obtain a set of object pairs and a set of single objects. For each of these, we generate two cropped regions that capture the relevant spatial area for both the current frame Ft and a future frame Ft+∆(e.g., ∆= 30), assuming a static camera (as depicted in Fig- ure 1). This ensures that both crops contain the same background and object regions, allowing for consistent comparisons across time. We provide the details of the cropping process in the Appendix. Red rectangles are drawn around each object in the cropped images using the bounding boxes detected by the object detector. The rectangles indicate to the MLLM which objects are be- ing referred to in the prompt. We then obtain a sentence r that describes the inter- action between objects o1 and o2 by querying an MLLM agent with the corresponding image crops Ct and Ct+∆, along with the user prompt: “Briefly describe what the <object name>s in the enclosed regions of these im- ages are doing. The two images were taken one second apart.” For single-object cases, the prompt is adjusted to: “Briefly describe what the <object name> in the enclosed region of these images is doing. The two im- ages were taken one second apart.” In both prompts, the <object name> placeholder is replaced by the class la- bel c obtained during feature selection. An additional “system” prompt is also used with the MLLM: “You will be provided with two frames from a video and asked to describe what objects that are indi- cated by bounding boxes in the video are doing. Your task is to answer the query in a simple sentence. If there is any interaction between the indicated objects, a de- scription of the interaction should be given.” The MLLM-generated responses are then used in both the model building and anomaly detection stages. 3.3. Model Building We follow a similar exemplar selection strategy as intro- duced in recent works [28, 30, 36, 37]. For all frames in a video, we collect all textual descriptions generated from object pairs into one set and all textual descriptions generated from single objects into another set. Then for each of the sets we run an exemplar selection algorithm which selects a subset of the elements of the set such that no two members of the subset are near each other according to a distance function. The intuition behind exemplar selection is to simply remove redundant (or nearly redundant) elements from the set leaving behind a compact, representative subset of exemplars. Vari- ous distance functions between sentences can be used to compare textual descriptions. In most of our experi- ments we use Sentence-BERT [33], but we also test us- ing BLEU [29] and METEOR [2]. Given a set S, the exemplar selection algorithm pro- ceeds as follows: (1) Initialize the exemplar set to NULL. (2) Add the first element of S to the exemplar set. (3) For each subsequent element of S, find its dis- tance to the nearest instance in the exemplar set. If this distance is above a threshold, th, then add the element to the exemplar set. Since we obtain a textual description r for each ob- ject pair or single object, we transform these descrip- tions into sentence embeddings using a Natural Lan- guage Processing (NLP) model N, such that Ir = N(r). The distance D between two textual descriptions, r1 and r2, is then computed as: D(r1, r2) = 1 −cos (Ir1, Ir2) (1) where cos(·, ·) denotes the cosine similarity between the corresponding embeddings. Using the distance function D and the described exemplar selection algorithm, we 4 construct two separate exemplar sets: Epair and Esingle, containing sentence embeddings of textual descriptions for object pairs and single objects, respectively. 3.4. Anomaly Detection in Test Video After constructing the exemplar sets from nominal videos, the next step is to detect anomalies in a test video from the same scene. Similar to the model-building phase, we extract textual descriptions and encode them into sentence embeddings Ir. The anomaly score for a given object pair (or single object) is then computed based on its similarity to the corresponding exemplar set. Specifically, the score is defined as: score(r) = 1 −max (cos (Ir, e)) , e ∈Epair or Esingle (2) depending on whether the input is an object pair or a sin- gle object. A higher score indicates a greater deviation from the nominal descriptions and is thus more likely to be anomalous. 3.5. Combination with Previous Methods While the textual descriptions generated by an MLLM provide powerful high-level features for modeling the normal activity in a scene and for detecting anomalies in the scene, some anomalies require more detailed, fine- grained features for detection. For this reason, it can be advantageous to combine our method with previous methods that use more detailed features such as object sizes and trajectories. Combining with another method is especially easy to do if the other method is also object- based and uses an exemplar-based model. In our experiments, we show results when combining our method separately with two other methods; namely, the scene-graph method of Mumcu et al. [28] and the tracklet method of Singh et al [37]. Our method of modeling interactions was inspired by the scene-graph method of Mumcu et al. [28]. Their method builds a scene graph for each frame where each object is a node and objects that are nearby and, there- fore possibly interacting, are connected by an edge. The attributes stored at each node are the object’s location, size, object class, trajectory and skeletal pose. Then an exemplar-based model is created for every connected pair of nodes across all scene graphs in all nominal frames. A separate exemplar-based model is also created for all singleton nodes across all scene graphs in all nom- inal frames. To combine our method with theirs, we can simply add the MLLM-generated textual descriptions to each connected pair of nodes and to each singleton node. The distance function used to compare nodes takes the maximum over all attribute distances including the dis- tance between textual descriptions. In the case of the tracklet method of Singh et al. [37], objects in each frame are also detected and tracked. Sim- ilar to Mumcu et al., for each object, a set of attributes are stored including the location, size, object class and trajectory. An exemplar-based model is then learned from all of the objects found in the nominal training videos using a distance function for comparing objects that takes the maximum over each attribute distance. To combine our method with theirs, we again only need to add our MLLM-generated textual descriptions as an- other attribute associated with each object. In this case, object interactions are not modeled. We use this combi- nation to test on datasets that do not contain anomalous object interactions. 4. Experiments In our experiments, we use the following configurations: Datasets: We evaluate MLLM-EVAD on three video anomaly detection benchmark datasets which are all publicly available under a CC-BY-SA-4.0 license. Com- plexVAD [28] is a large-scale dataset with a total of over 3.6 million frames, with 2,069,941 frames for train- ing and 1,611,497 for testing. It features interaction- based anomalies and serves as our main focus, since our method leverages descriptions of object activity and in- teractions for video anomaly detection. The scene fea- tures a crosswalk, pedestrian sidewalks, and a two-lane street, which are heavily used. It demonstrates interac- tion anomalies such as a person leaving an object on the ground, a person sitting on a car, or a dog walking with- out a person or leash, where the individual objects are normal, but their interactions are anomalous. In addition, we evaluate performance on the Avenue and Street Scene datasets. Avenue [21] contains short surveillance videos of a campus walkway, with anoma- lies such as running, throwing objects, or walking in the wrong direction. Street Scene [30] includes longer videos of a city street environment, where anomalies involve vehicles or pedestrians behaving unusually, in- cluding cars driving on the wrong side of the road, pedestrians crossing at non-designated areas, or bikers riding on the sidewalk. Avenue contains 30,652 frames, split nearly evenly between 15,328 training frames and 15,324 testing frames. Street Scene is a larger dataset with a total of 203,257 frames, of which 56,847 are used for training and 146,410 for testing. Implementation Details: We use Detectron2 [48] and ByteTrack [54] as the object detector and object tracker in our method. Sentence-BERT [33] (using the pretrained weights, ”all-MiniLM-L6-v2”) is used as the text embedding model to encode activity descrip- tions. Initially, GPT-4o [14] was used as the multi- modal MLLM agent. However, with the recent introduc- tion of Gemma 3 [41], we re-conducted the experiments 5 Method RBDC TBDC Frame MemAE [12] 0.05 0.0 58.0 EVAL [36] 10.0 62.0 54.0 AnomalyRuler [46] N/A N/A 56.0 Scene-Graph[28] 19.0 64.0 60.0 MLLM-EVAD 24.0 68.0 61.0 Scene-Graph + MLLM-EVAD 25.0 70.0 63.0 Table 2. Results on the ComplexVAD dataset. Each entry is the area under the curve (AUC) as a percentage for the three benchmark methods using the RBDC, TBDC and Frame-Level evaluation criteria. Bold face indicates the best score and blue indicates second best. Method RBDC TBDC Frame Auto-encoder [13] 0.3 2.0 61.0 Dictionary method [21] 1.6 10.0 48.0 Flow baseline [30] 11.0 52.0 51.0 FG Baseline [30] 21.0 53.0 61.0 EVAL [36] 24.3 64.5 N/A Contextual GMM [49] 34.0 62.5 67.0 T-EVAL [37] 30.9 72.9 66.0 T-EVAL+ MLLM-EVAD 31.1 73.5 66.8 Table 3. Results on the Street Scene dataset using the area under the curve (AUC) (as a percentage) for the RBDC and TBDC evaluation criteria for different methods. Bold face in- dicates the best score and blue indicates second best. on ComplexVAD using Gemma 3. Hence, the results on ComplexVAD are reported with both Gemma 3 and GPT-4o, while the results on the other datasets are re- ported with GPT-4o. We choose a threshold th = 0.65 for exemplar selection that resulted in a modest num- ber of total exemplars selected. From past work that used exemplar-based models, this threshold mainly ef- fects model size and has a small effect on test accuracy [36, 37]. We provide additional implementation details in the Appendix. Evaluation Criteria: We use the Region-Based De- tection Criterion (RBDC) and the Track-Based Detec- tion Criterion (TBDC) as proposed in [30] as our pri- mary evaluation criteria and report the area under the curve (AUC) for false positive rates per frame from 0 to 1. We also report frame-level AUC [24] scores. As high- lighted in previous works [30] frame-level AUC only evaluates temporal accuracy and disregards spatial lo- calization of anomalies. RBDC and TBDC measure a method’s capacity to accurately identify anomalous spatio-temporal regions within a given video sequence. Method RBDC TBDC Frame Siamese[31] 41.2 78.6 87.2 Hybrid[20] 41.1 86.2 91.1 BG agnostic [11] 65.1 66.9 92.3 Hybrid[20] + PCAB[34] 62.3 89.3 90.9 BG agnostic[11] + PCAB[34] 66.0 64.9 92.9 SSMTL++v1 [4] 40.9 82.1 93.7 SSMTL++v2 [4] 47.8 85.2 91.6 AnomalyRuler [46] N/A N/A 89.7 EVAL[36] 68.2 87.6 86.0 T-EVAL[37] 67.5 89.7 88.0 T-EVAL + MLLM-EVAD 68.9 90.1 88.4 Table 4. Results on the CUHK Avenue dataset using the area under the curve (AUC) as a percentage for the RBDC and TBDC evaluation criteria for different methods. Bold face in- dicates the best score and blue indicates second best. 4.1. Quantitative Results The results of our method on ComplexVAD, as well as those of EVAL [36], MemAE [12], AnomalyRuler [46], and Scene-Graph [28], using the three evalua- tion criteria described above, are reported in Table 2. We also provide the result for the combination of our method and Scene-Graph on the ComplexVAD dataset. Our MLLM-EVAD method outperforms the next best method (Scene-Graph) by 5, 4 and 1 percentage points in RBDC, TBDC, and frame-level evaluations, respec- tively. We achieve the best result by combining the Scene-Graph method with ours, reaching 25%, 70%, and 63% in RBDC, TBDC, and frame-level evaluations, respectively. AnomalyRuler, a recent MLLM-based method [46], achieves 54% frame-level AUC. However, we were not able to obtain results for the RBDC and TBDC criteria, as this method does not support spatial localization of anomalies. The MemAE method does very poorly for the two criteria that measure spatial lo- calization. This implies that the regions of an image that MemAE predicts as anomalous are usually normal. For the experiments on Avenue and Street Scene, we combine our method with the best-performing ex- isting exemplar-based method, namely Tracklet EVAL [37]. These datasets include many anomalies that need more fine-grained attributes (such as the direction and speed of travel for people and cars), so the high-level textual descriptions used by MLLM-EVAD are not suf- ficient on their own to detect all of the anomalies. We show that our textual descriptions do add an important new attribute, however, since, when combined with the more fine-grained attributes of Tracklet EVAL, the re- sults with the MLLM textual descriptions improve ac- curacy for both datasets across all three evaluation cri- 6 The person is crouching down on the ground. The person is walking across the grass. The person is holding a phone up and appearing to take a picture or video while being pushed in a large box by another person. The person is walking along the sidewalk. Anomaly Closest Exemplar Anomaly Closest Exemplar The two people are walking across the crosswalk, with the person on the right appearing to stumble or jump. Two people are walking across the crosswalk on the street. The person on the bicycle is being pushed forward by another person who is riding a scooter. The person is walking while the cyclist is riding the bicycle alongside them. Figure 2. Examples of anomalies in the ComplexVAD dataset that are detected by our method. In each box, the top row shows two frames of the anomalous video clip followed by the textual description generated by Gemma 3. The second row shows two frames from the closest matching exemplar to the anomaly. The textual description for the closest exemplar is also shown. By contrasting the anomalous description with the closest normal description, it is clear why the testing video clip was found to be anomalous. teria. As shown in Table 4, on the Avenue dataset, the Tracklet-EVAL + MLLM-EVAD method improves the state-of-the-art for both the RBDC and TBDC criteria. On Street Scene (results shown in Table 3), the state-of- the-art is surpassed for the TBDC criterion. Since the frame-level criterion does not measure spatial localiza- tion accuracy which is the focus of these datasets, we would argue that the slightly-below SOTA frame-level numbers are not representative of the true accuracy of these methods. We should note that we were not able to exactly reproduce the results reported for Tracklet EVAL [37] on the Avenue dataset, most likely due to different parameters, so we report the results we could reproduce for that method and use that version in com- bination with MLLM-EVAD for the results on the com- bined method. 4.2. Qualitative Results: Explainability In addition to the quantitative results, we present qualita- tive results to demonstrate the explainability capabilities of our method. The combination of using an MLLM agent and an exemplar selection strategy enables intu- itive explanations for detected anomalies by providing descriptions of both the anomaly and the most simi- lar event from the nominal training video. In Figure 2, we demonstrate four detected anomalies from Com- plexVAD along with their textual descriptions and the textual descriptions of the closest exemplars, as gener- ated by our method. For example, one detected anomaly is described as ”The person is crouching down on the ground.” The closest matching exemplar from the nom- inal set is ”The person is walking across the grass.” The action of crouching deviates from the typical behav- ior observed in the training data. This difference high- lights how our method leverages semantic understanding to identify uncommon activity, offering an interpretable explanation for the anomaly. Another example involves an anomaly described as ”The person is holding a phone up and appearing to take a picture or video while being pushed in a large box by another person.” The closest matching exemplar from the nominal set is ”The person is walking along the sidewalk.”, which is, in fact, the most common ac- tivity involving a person in this dataset. The anomalous event, a person being pushed in a box, does not occur in the nominal video and represents an unusual and com- plex behavior. This example further demonstrates our method’s ability to detect rare interaction patterns that deviate from typical activities in the scene. In addi- tion, these examples illustrate how our method also pro- vides reasonable explanations for detected anomalies. By leveraging descriptive comparisons with nominal ex- emplars, our method enables enhanced and explainable video anomaly detection. 4.3. Ablation Study We conduct two ablation studies by modifying the de- fault settings of our method. In the first ablation study, instead of using sentence embeddings and cosine dis- similarity as the distance metric, we use the raw sen- tences and compute distances using BLEU [29] and ME- TEOR [2]. In Table 5, we compare the results of cosine distance over Sentence-BERT embeddings with those obtained using BLEU and METEOR, based on the de- 7 Method RBDC TBDC Frame Sentence-BERT 19.0 67.0 59.0 BLEU 17.0 69.0 58.0 METEOR 16.0 69.0 57.0 Table 5. The table reports results on the ComplexVAD datasets using the area under the curve (AUC) as a percentage for the RBDC, TBDC and Frame-Level evaluation criteria for the our method using 3 different sentence distance functions. MLLM RBDC TBDC Frame GPT-4o 19.0 67.0 59.0 Gemma 3 24.0 68.0 61.0 Table 6. The table reports the performance of our proposed method on the ComplexVAD datasets using the area under the curve (AUC) as a percentage for the RBDC, TBDC and Frame- Level evaluation criteria with different MLLM agents GPT-4o and Gemma 3. scriptions generated by our method using GPT-4o as the MLLM agent. The results show that Sentence-BERT embeddings yield slightly higher accuracy in the RBDC and frame-level criteria while BLEU and METEOR per- form slightly better on the TBDC criterion. Sentence- BERT is the preferred similarity function considering its greater efficiency due to embeddings which can be saved and only require a dot product for comparison. In the second ablation study, we compare our method’s performance on ComplexVAD using different MLLM agents, namely Gemma 3 and GPT-4o. In Table 6, we show that Gemma 3 achieves better performance, with 24%, 68%, and 61% in the RBDC, TBDC, and frame-level criteria, respectively, while GPT-4o yields 19%, 67%, and 59%. We attribute this difference to the fact that Gemma 3 generates more detailed and descrip- tive sentences compared to GPT-4o. This level of detail is particularly important for detecting interaction-based anomalies, where subtle contextual cues are crucial, and likely contributed to the observed performance gap. We provide a comparison of the descriptions generated by Gemma 3 and GPT-4o in the Appendix. 5. Limitations and Future Directions While our framework shows promising results, we iden- tify several limitations that pave the way for future re- search. First, our implementation relies on powerful but computationally expensive multimodal models like Gemma 3. Their high inference latency poses a chal- lenge for real-time applications. Recent studies have shown that smaller, task-specific models can perform ex- ceptionally well on specialized tasks [25, 26, 35, 43]. We will explore fine-tuning such models for single scenes, specifically for describing object activities and interactions. Second, a primary contribution of our method is its semantic explainability, but quantitatively evaluating this feature is difficult due to a lack of suitable bench- marks. Current datasets with textual ground truth are designed for multi-scene or weakly-supervised tasks and do not fit the semi-supervised, single-scene VAD prob- lem [10, 53, 55]. Therefore, a key future direction is the development of new datasets with ground-truth textual descriptions for scene-specific anomalies. Finally, our framework currently leverages a robust traditional object detector, which is well-suited for sce- narios with known object classes. To further general- ize our approach, a promising future direction is the in- tegration of open-vocabulary object detection methods [6, 18, 27]. By enabling the recognition of an open set of object categories, this advancement would significantly broaden our system’s applicability, allowing it to ana- lyze and describe a more diverse range of anomalies in unconstrained, real-world environments. 6. Conclusions In this work, we presented a novel video anomaly de- tection framework that leverages multimodal Large Lan- guage Models to detect and explain complex anomalies. By extracting object-level activity and interactions and converting them into textual descriptions, our method moves beyond traditional pixel level modeling and in- troduces a language-based, interpretable layer for video anomaly detection. Unlike many prior MLLM-based methods that rely on direct frame-level judgments, our approach builds a set of nominal textual descriptions and identifies anomalies through language-based com- parison, offering both accuracy and explainability. We demonstrate that our method outperforms exist- ing approaches on ComplexVAD which is a challeng- ing benchmark specifically designed to test interaction- based anomalies. In addition our method also improves of the state-of-the-art results on standard datasets like Avenue and Street Scene when combined with the fine- grained method Tracklet EVAL [37]. Through abla- tion studies, we showed the importance of using sen- tence embeddings and detailed MLLM-generated de- scriptions. Notably, we found that Gemma 3 provided more informative descriptions than GPT-4o, contribut- ing to higher detection accuracy. Beyond strong quantitative results, our method of- fers clear and interpretable explanations for detected anomalies, highlighting its potential for real world de- ployment in safety critical surveillance applications. We believe this framework opens new directions for inte- grating multimodal large language models into video understanding tasks and sets the stage for future re- search in explainable and high level language-based video anomaly detection. 8 References [1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 1 [2] Satanjeev Banerjee and Alon Lavie. Meteor: An auto- matic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Work- shop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 2005. 4, 7 [3] Mohammad Baradaran and Robert Bergevin. A critical study on the recent deep learning based semi-supervised video anomaly detection methods. Multimedia Tools and Applications, 83, 2024. 1 [4] Antonio Barbalau, Radu Tudor Ionescu, Mariana-Iuliana Georgescu, Jacob Dueholm, Bharathkumar Ramachan- dra, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B Moeslund, and Mubarak Shah. Ssmtl++: Revisiting self- supervised multi-task learning for video anomaly detec- tion. Computer Vision and Image Understanding, 229: 103656, 2023. 6 [5] Nicholas F.Y. Chen, Zhiyuan Du, and Khin Hua Ng. Scene graphs for interpretable video anomaly classifica- tion. In NIPS, 2018. 3 [6] Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, and Ying Shan. Yolo-world: Real-time open-vocabulary object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16901–16911, 2024. 8 [7] Keval Doshi and Yasin Yilmaz. Any-shot sequential anomaly detection in surveillance videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 934–935, 2020. 2 [8] Keval Doshi and Yasin Yilmaz. Continual learning for anomaly detection in surveillance videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 254–255, 2020. 2 [9] Keval Doshi and Yasin Yilmaz. Towards interpretable video anomaly detection. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2023. 3 [10] Hang Du, Sicheng Zhang, Binzhu Xie, Guoshun Nan, Jiayang Zhang, Junrui Xu, Hangyu Liu, Sicong Leng, Jiangming Liu, Hehe Fan, et al. Uncovering what why and how: A comprehensive benchmark for causation un- derstanding of video anomaly. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18793–18803, 2024. 8 [11] Mariana Iuliana Georgescu, Radu Ionescu, Fahad Shah- baz Khan, Marius Popescu, and Mubarak Shah. A background-agnostic framework with adversarial train- ing for abnormal event detection in video. IEEE Trans- actions on Pattern Analysis and Machine Intelligence, 2021. 6 [12] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and An- ton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for un- supervised anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, 2019. 6 [13] Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K Roy-Chowdhury, and Larry S Davis. Learning temporal regularity in video sequences. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 733–742, 2016. 2, 6 [14] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Ak- ila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. 5 [15] Radu Tudor Ionescu, Fahad Shahbaz Khan, Mariana- Iuliana Georgescu, and Ling Shao. Object-centric auto- encoders and dummy anomalies for abnormal event de- tection in video. In Proceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition, pages 7842–7851, 2019. 2 [16] Changkang Li and Yalong Jiang. Vlavad: Vision- language models assisted unsupervised video anomaly detection. In Proceedings of the British Machine Vision Conference, 2024. 2, 3 [17] Fei Li, Wenxuan Liu, Jingjing Chen, Ruixu Zhang, Yuran Wang, Xian Zhong, and Zheng Wnag. Anomize: Better open vocabulary video anomaly detection. In Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3 [18] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European Conference on Computer Vision, pages 38– 55. Springer, 2024. 8 [19] Wen Liu, Weixin Luo, Dongze Lian, and Shenghua Gao. Future frame prediction for anomaly detection–a new baseline. In Proceedings of the IEEE conference on com- puter vision and pattern recognition, pages 6536–6545, 2018. 2 [20] Zhian Liu, Yongwei Nie, Chengjiang Long, Qing Zhang, and Guiqing Li. A hybrid video anomaly detection framework via memory-augmented flow reconstruction and flow-guided frame prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vi- sion, pages 13588–13597, 2021. 6 [21] Cewu Lu, Jianping Shi, and Jiaya Jia. Abnormal event detection at 150 fps in matlab. In Proceedings of the IEEE International Conference on Computer Vision, pages 2720–2727, 2013. 5, 6 [22] Weixin Luo, Wen Liu, and Shenghua Gao. A revisit of sparse coding based anomaly detection in stacked rnn framework. In Proceedings of the IEEE international conference on computer vision, pages 341–349, 2017. 2 [23] Hui Lv and Qianru Sun. Video anomaly detection and explanation via large language models. arXiv preprint arXiv:2401.05702, 2024. 2, 3 [24] Vijay Mahadevan, Weixin Li, Viral Bhalodia, and Nuno Vasconcelos. Anomaly detection in crowded scenes. In 9 2010 IEEE computer society conference on computer vi- sion and pattern recognition, pages 1975–1981. IEEE, 2010. 6 [25] Andr´es Marafioti, Orr Zohar, Miquel Farr´e, Merve Noyan, Elie Bakouch, Pedro Cuenca, Cyril Zakka, Loubna Ben Allal, Anton Lozhkov, Nouamane Tazi, et al. Smolvlm: Redefining small and efficient multi- modal models. arXiv preprint arXiv:2504.05299, 2025. 8 [26] Furkan Mumcu and Yasin Yilmaz. Fast and lightweight vision-language model for adversarial traffic sign detec- tion. Electronics, 13(11):2172, 2024. 8 [27] Furkan Mumcu, Michael J Jones, Anoop Cherian, and Yasin Yilmaz. Llm-guided agentic object de- tection for open-world understanding. arXiv preprint arXiv:2507.10844, 2025. 8 [28] Furkan Mumcu, Michael J. Jones, Yasin Yilmaz, and Anoop Cherian. Complexvad: Detecting interaction anomalies in video. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2025. 1, 2, 3, 4, 5, 6 [29] Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguis- tics, 2002. 4, 7 [30] Bharathkumar Ramachandra and Michael Jones. Street scene: A new dataset and evaluation protocol for video anomaly detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2569–2578, 2020. 2, 4, 5, 6 [31] Bharathkumar Ramachandra, Michael Jones, and Ranga Vatsavai. Learning a distance function with a siamese network to localize anomalies in videos. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2598–2607, 2020. 2, 6 [32] Bharathkumar Ramachandra, Michael Jones, and Ranga Raju Vatsavai. A survey of single-scene video anomaly detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2 [33] Nils Reimers and Iryna Gurevych. Sentence-bert: Sen- tence embeddings using siamese bert-networks. In Pro- ceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Process- ing, 2019. 4, 5 [34] Nicolae-Catalin Ristea, Neelu Madan, Radu Tudor Ionescu, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B Moeslund, and Mubarak Shah. Self- supervised predictive convolutional attentive block for anomaly detection. arXiv preprint arXiv:2111.09099, 2021. 6 [35] Timo Schick and Hinrich Sch¨utze. It’s not just size that matters: Small language models are also few-shot learn- ers. arXiv preprint arXiv:2009.07118, 2020. 8 [36] Ashish Singh, Michael J. Jones, and Erik Learned-Miller. Eval: Explainable video anomaly localization. In Pro- ceedings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, 2023. 1, 2, 4, 6 [37] Ashish Singh, Michael J. Jones, and Erik Learned-Miller. Tracklet-based explainable video anomaly localization. In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition Workshops, 2024. 1, 2, 4, 5, 6, 7, 8 [38] Waqas Sultani, Chen Chen, and Mubarak Shah. Real- world anomaly detection in surveillance videos. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition, pages 6479–6488, 2018. 2 [39] Che Sun, Yunde Jia, Yao Hu, and Yuwei Wu. Scene- aware context reasoning for unsupervised abnormal event detection in videos. In ACMMM, 2020. 3 [40] Stanislaw Szymanowicz, James Charles, and Roberto Cipolla. X-man: Explaining multiple sources of anoma- lies in video. In CVPR, 2021. 3 [41] Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Per- rin, Tatiana Matejovicova, Alexandre Ram´e, Morgane Rivi`ere, et al. Gemma 3 technical report. arXiv preprint arXiv:2503.19786, 2025. 5 [42] Chenxu Wang, Yanxin Yao, and Han Yao. Video anomaly detection method based on future frame pre- diction and attention mechanism. In IEEE 11th Annual Computing and Communication Workshop and Confer- ence (CCWC). IEEE, 2021. 2 [43] Jianfeng Wang, Xiaowei Hu, Pengchuan Zhang, Xiujun Li, Lijuan Wang, Lei Zhang, Jianfeng Gao, and Zicheng Liu. Minivlm: A smaller and faster vision-language model. arXiv preprint arXiv:2012.06946, 2020. 8 [44] Peng Wu, Chengyu Pan, Yuting Yan, Guansong Pang, Peng Wang, and Yanning Zhang. Deep learning for video anomaly detection: A review. arXiv preprint arXiv:2409.05383, 2024. 1 [45] Peng Wu, Xuerong Zhou, Guansong Pang, Yujia Sun, Jing Liu, Peng Wang, and Yanning Zhang. Open- vocabulary video anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, 2024. 3 [46] Peng Wu, Xuerong Zhou, Guansong Pang, Yujia Sun, Jing Liu, Peng Wang, and Yanning Zhang. Follow the rules: Reasoning for video anomaly detection with large language models. In Proceedings of the European Con- ference on Computer Vision, 2024. 2, 3, 6 [47] Peng Wu, Xuerong Zhou, Guansong Pang, Lingru Zhou, Qingsen Yan, Peng Wang, and Yanning Zhang. Vadclip: adapting vision-language models for weakly supervised video anomaly detection. In AAAI Conference on Artifi- cial Intelligence, pages 6074–6082, 2024. 2 [48] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/ detectron2, 2019. 5 [49] Zhengye Yang and Richard J. Radke. Detecting contex- tual anomalies by discovering consistent spatial regions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2025. 6 [50] Muchao Ye, Weiyang Liu, and Pan He. Vera: Ex- plainable video anomaly detection via verbalized learn- ing of vision-language models. In Proceedings of the 10 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 2, 3 [51] M. Zaigham Zaheer, Arif Mahmood, M. Haris Khan, Mattia Segu, Fisher Yu, and Seung-Ik Lee. Generative cooperative learning for unsupervised video anomaly de- tection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2 [52] Luca Zanella, Willi Menapace, Massimiliano Mancini, Yiming Wang, and Elisa Ricci. Harnessing large lan- guage models for training-free video anomaly detection. In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR), 2024. 2, 3 [53] Huaxin Zhang, Xiaohao Xu, Xiang Wang, Jialong Zuo, Xiaonan Huang, Changxin Gao, Shanjun Zhang, Li Yu, and Nong Sang. Holmes-vau: Towards long-term video anomaly understanding at any granularity. In Proceed- ings of the Computer Vision and Pattern Recognition Conference, pages 13843–13853, 2025. 2, 8 [54] Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Fucheng Weng, Zehuan Yuan, Ping Luo, Wenyu Liu, and Xing- gang Wang. Bytetrack: Multi-object tracking by asso- ciating every detection box. In Proceedings of the Eu- ropean Conference on Computer Vision (ECCV), 2022. 5 [55] Liyun Zhu, Lei Wang, Arjun Raj, Tom Gedeon, and Chen Chen. Advancing video anomaly detection: A concise review and a new dataset. Advances in Neural Informa- tion Processing Systems, 37:89943–89977, 2024. 8 11 Leveraging Multimodal LLM Descriptions of Activity for Explainable Semi-Supervised Video Anomaly Detection Supplementary Material 1. Compute Resources All of our experiments were run on a cluster of eight 80 GB NVIDIA A100 GPUs. A single 80 GB GPU would be sufficient and multiple GPUs were used only to reduce the total compute time by processing multiple video sequences in parallel. Our experiments that used GPT-4o accessed it through an API via the cloud. 2. Cropping Frames For each selected object pair o1 and o2, we first com- pute a single bounding box that tightly encloses both ob- jects in a given frame. Let the individual bounding boxes for the two objects in frame Ft be denoted as bboxa = (x1a, y1a, x2a, y2a) and bboxb = (x1b, y1b, x2b, y2b), where (x1, y1) represents the top-left corner and (x2, y2) the bottom-right corner of a bounding box. These are merged into a single bounding box bbox1 by taking the element-wise minimum for the top-left corner and max- imum for the bottom-right corner: bbox1 = min(x1a, x1b), min(y1a, y1b), max(x2a, x2b), max(y2a, y2b)  (S1) We apply the same procedure to obtain bbox2 in the future frame Ft+∆, using the tracked locations of the same object pair. Let bbox1 = (x1,1, y1,1, x2,1, y2,1) denote the merged bounding box of the selected object pair in frame Ft, and let bbox2 = (x1,2, y1,2, x2,2, y2,2) denote the merged bounding box of the same objects tracked in the future frame Ft+∆. A unified bounding box that covers both time steps is computed as: bboxmerged = min(x1,1, x1,2), min(y1,1, y1,2), max(x2,1, x2,2), max(y2,1, y2,2)  (S2) Let (x1, y1) and (x2, y2) represent the top-left and bottom-right corners of this merged bounding box. Let wmin and hmin denote half of the minimum desired crop width and height, respectively. In our case, we set wmin = 240 and hmin = 135, corresponding to a min- imum crop size of 480 × 270 pixels. This size was cho- sen as a practical minimum that ensures sufficient con- text around objects while keeping computational over- head low. In our experiments, we found that a resolution of 480 × 270 served as a reliable lower bound, provid- ing adequate spatial detail for capturing object activities without reducing the quality of the LLM generated de- scriptions. While larger crops may be used when avail- able, we observed this setting to be a consistent and ef- fective baseline across different datasets. We then compute the crop dimensions as w = max  |x2−x1| 2 , wmin  , h = max  |y2−y1| 2 , hmin  (S3) The crop boundaries are determined as: xmin = max(x1 −w, 0), ymin = max(y1 −h, 0), xmax = min(x2 + w, W), ymax = min(y2 + h, H) (S4) where W and H denotes the width and height of the full frame. The final cropped region Ct is extracted from the current frame Ft as: Ct = Ft[ymin : ymax, xmin : xmax] To ensure both temporal frames share the same spa- tial context, we apply the exact same crop coordinates to the future frame Ft+∆: Ct+∆= Ft+∆[ymin : ymax, xmin : xmax] This guarantees that both cropped frames contain the same background and object regions, allowing for con- sistent comparisons across time. In the case of a single object, the same procedure is applied with one excep- tion: instead of merging two object bounding boxes in each frame, we directly use the bounding box of the sin- gle object in frame Ft and its tracked location in Ft+∆. The rest of the cropping process, including the computa- tion of the unified bounding box across time, minimum size enforcement, and cropping, is performed exactly as described above. 3. Comparison of Gemma 3 and GPT-4o Ac- tivity Descriptions In Figure S1, we share activity descriptions generated by Gemma 3 and GPT-4o for given images and the promt: ”Briefly describe what the <object name>s in the en- closed regions of these images are doing. The two im- ages were taken one second apart.” 12 Gemma 3 GPT-4o The person is crouching down on the ground. The person in the enclosed region is bending over and appears to remain in the same position between the two images The person is holding a phone up and appearing to take a picture or video while being pushed in a large box by another person. In the images, the person standing seems to be pushing or interacting with a cart or box that the seated person appears to be inside or using. The standing person maintains their position or posture relative to the cart between the images. The person in the blue shirt and the person on the bike are interacting, as the person in blue appears to be preventing the cyclist from moving forward. The person on the bicycle is being pushed forward by another person who is riding a scooter. The person is walking across the crosswalk while looking at their phone, and a cyclist is passing by them on the right. The person in the enclosed region is stopping to allow a cyclist to pass by. In the first image, the person in the black shorts and dark hair appears to be jumping or falling next to the bald person. In the second image, the person in the black shorts and dark hair is standing up straight, while the bald person remains standing. There is no apparent interaction between them. The two people are walking across the crosswalk, with the person on the right appearing to stumble or jump. Figure S1. The activity descriptions generated by Gemma 3 and GPT-4o with given images. We have observed that Gemma 3 often gives more concise descriptions with fewer unnecessary details such as the color of a person’s clothes. Moreover, Gemma 3 sometimes includes important details that GPT-4o omits, such as the fact that a person is using his or her phone. 13
Leveraging Multimodal LLM Descriptions of Activity for Explainable Semi-Supervised Video Anomaly Detection Furkan Mumcu1,† Michael J. Jones2,∗ Anoop Cherian3,∗ Yasin Yilmaz4,† † ∗Mitsubishi Electric Research Laboratories (MERL) Abstract Existing semi-supervised video anomaly detection (VAD) methods often struggle with detecting complex anomalies involving object interactions and generally lack explainability. To overcome these limitations, we propose a novel VAD framework leveraging Multimodal Large Language Models (MLLMs). Unlike previous MLLM-based approaches that make direct anomaly judgments at the frame level, our method focuses on extracting and interpreting object activity and interactions over time. By querying an MLLM with visual inputs of object pairs at different moments, we generate textual descriptions of the activity and interactions from nominal videos. These textual descriptions serve as a high-level representation of the activity and interactions of objects in a video. They are used to detect anomalies during test time by comparing them to textual descriptions found in nominal training videos. Our approach inherently provides explainability and can be combined with many traditional VAD methods to further enhance their interpretability. Extensive experiments on benchmark datasets demonstrate that our method not only detects complex interaction-based anomalies effectively but also achieves state-of-the-art performance on datasets without interaction anomalies. 1. Introduction Video anomaly detection (VAD) has emerged as an active research field due to its significant applications in security and public safety. The rapid growth of surveillance video data makes it impractical for humans to monitor thoroughly. Consequently, VAD algorithms play a vital role in identifying unusual events in video footage, enabling human operators to focus their attention on reviewing these flagged incidents. In this paper, we focus on the formulation in which nominal videos (also called training videos) containing only normal activities in a particular scene are provided for learning a model. The goal is to identify when and where anomalous activities happen within test videos of the same scene. Along with other papers [3, 44], we refer to this version of the problem as semi-supervised VAD because of the availability of normal, but not anomalous, training video. In Table 1, we summarize the properties of some recent papers in this area to highlight the differences between our work and other recent approaches. Existing semi-supervised video anomaly detection methods suffer from several weaknesses. Recent studies struggle to provide explainability for detected anomalies. While some existing work [36, 37] explored more explainable approaches by interpreting the input features, they do not offer direct, textual explanations. Furthermore, the ComplexVAD dataset [28] introduced interaction-based anomalies and demonstrated that detecting these anomalies is even more challenging than identifying the simpler anomalies presented in earlier datasets. Hence, we propose a new method to address the drawbacks present in existing works, including the lack of focus on complex anomalies and explainability. By utilizing a Large Language Model-based detection approach, we introduce an effective video anomaly detection framework that excels at identifying complex anomalies and features built-in explainability. Large Language Models, especially in their multimodal forms (MLLMs), have demonstrated strong capabilities in interpreting and accurately describing complex situations from both text and images. GPT-4 with vision [1], for example, can analyze visual inputs and produce detailed, contextually relevant descriptions. Existing work has explored the use of MLLMs for video anomaly detection. Most of these methods are designed for a multi-scene, weakly supervised setting. While such approaches are effective for detecting a common set of anomalies across multiple scenes, they fall short in identifying scene-specific anomalies, which are central to the semi-supervised, single-scene VAD task. To address this limitation, we propose a novel MLLM-based framework that: (1) considers object interactions in addition to individual object activities, (2) leverages an MLLM 1 16 Oct 2025 Method Year Focus Features Task Objective MLLM Interaction ExplainSemi Scene Usage Anomalies ability Supervised Specific VadCLIP [47] 2023 Using CLIP for multi-scene weakly supervised VAD G# # # # # Vera [50] 2025 Optimizing prompts for MLLM-based weakly supervised VAD # # # Holmes-VAU [53] 2025 MLLM finetuning for anomaly understanding across multi-scene # # # AnomalyRuler [46] 2024 Scene-specific rule generation and frame-level descriptions via MLLMs # G# VLAVAD [16] 2024 Using frame level MLLM descriptions for semi-supervised VAD # # EVAL [36] 2023 Using object-level features for explainable VAD # # G# T-EVAL [37] 2024 Object-level, tracklet-based explainable VAD # # G# Scene-Graph [28] 2025 Exemplar and scene graph for interaction-based VAD # G# Ours 2026 Object-level MLLM descriptions for interaction-based VAD Table 1. Summary of properties and differences of some recent work in VAD. Our work is the only one designed for semisupervised, scene-specific VAD that handles interaction anomalies and provides textual explanations of anomalies. to generate high-level descriptions used in an exemplarbased model that can be easily extended without retraining deep networks for each new scene, and (3) enables both spatial and temporal localization of anomalies. In our method, we adopt an object-level feature extraction strategy similar to [28], using an object detector and tracker. We then employ an MLLM agent to describe object activities and interactions by querying it with pairs of frames, separated in time by a fixed interval, to capture temporal information. A key novelty is using these MLLM-generated textual descriptions directly as features for modeling normal activity. By removing redundant descriptions (Section 3.2), we construct a final set of textual exemplars representing nominal videos. This exemplar set is then used to detect anomalies: descriptions from anomalous interactions are expected to deviate from nominal ones. In summary, our contributions are: • We introduce the first MLLM-based approach for video anomaly detection specifically designed to identify complex anomalies caused by object interactions. • We propose a novel use of MLLMs for VAD: instead of directly deciding the presence of an anomaly, as done in prior work, our method first derives a representation (using MLLM descriptions) of what is normal in a scene and then identifies anomalies based on deviations from this normal representation. • We demonstrate that our method offers built-in explainability. Furthermore, we show that it can be integrated with many existing VAD approaches to enhance their interpretability. • We evaluate our method on multiple benchmark datasets and show that it achieves state-of-the-art performance. 2. Related Work Many different formulations of the video anomaly detection (VAD) problem have been studied in recent years: semi-supervised [13, 15, 19, 42], weakly supervised [38], unsupervised [51], training-free [52], single-scene [32] and multi-scene [22]. Each of these formulations has applicability in different scenarios. This paper focuses on the single-scene, semi-supervised version of video anomaly detection because it corresponds to the most common use case that we have observed in the realworld; namely, that of a surveillance camera at a particular location in the world in which video of normal activity is abundant but video containing anomalies is rare and expensive to collect. Furthermore, what is anomalous in one scene may be normal in another. This along with the fact that location-dependent anomalies (e.g. a pedestrian may be anomalous in some locations but not others) are common in the single-scene version of VAD but do not occur in the multi-scene version of VAD implies that the multi-scene version of VAD is not a generalization of single-scene VAD [32]. Solutions for one version generally do not work well on the other version. Early work on the single-scene, semi-supervised version of VAD includes many papers based on frame reconstruction [13, 15] or frame prediction [19, 42]. One big disadvantage of such approaches is that they require training a deep network on video from each new scene. To avoid this disadvantage, others [7, 8, 28, 30, 31, 36, 37] used an exemplar-based model that extracts a set of representative features from normal video but does not require any deep network training for each scene. We also use this idea in our work. 2.1. MLLMs in Video Anomaly Detection Recently, researchers have explored ways of taking advantage of large language models or vision language models for VAD. These approaches, however, vary significantly depending on the VAD problem formulation. Weakly-Supervised and Training-Free VAD. Many approaches to VAD that use an MLLM try to use its built-in knowledge of what is normal instead of learning normality from normal training video [23, 50, 52]. A disadvantage of such approaches is that they cannot handle scenarios such as a boxing gym in which activity that is typically rare and anomalous (fighting) is actually very common. Another important difference between our work and most other MLLM2 based approaches to VAD is that others have focused on the weakly supervised, multi-scene version of VAD [17, 23, 45, 50, 52]. Semi-Supervised VAD. Compared to the weaklysupervised setting, the capabilities of MLLMs are understudied in the semi-supervised setting. Wu et al. [46] addresses semi-supervised, single-scene VAD, but with a fundamentally different methodology. Their approach uses an MLLM to first generate a set of textual rules from normal video and then applies these rules during inference. In contrast, our method directly compares MLLM-generated text descriptions between normal and test videos without an intermediate rule-generation step. Furthermore, their model processes full video frames, whereas we employ an object-centric approach that analyzes regions around objects and their pairs. This key design choice allows our method to model object interactions and perform both spatial and temporal localization, while their approach is limited to temporal localization only. Li and Jiang [16] also focuses on semi-supervised single scene anomaly detection and proposes an MLLMbased method. Like our method, they first detect and track objects and then query an MLLM to yield highlevel descriptions of objects and their activity. However, they do not model object interactions, and do not use an exemplar-based model. In real-world scenarios, many anomalies are characterized by an unusual interaction between two objects. Therefore, one of the important aspects of our method is its modeling of object interactions in the scene to identify anomalous interactions. There have been a few other methods that also modeled interactions [5, 9, 28, 39, 40]. While there are many differences among these methods, one of the main differences compared to our current work is that none of them take advantage of MLLMs. In our approach, the textual descriptions of object activity and interactions provided by an MLLM yield powerful cues for a high-level understanding of a scene. 3. Our Approach We propose a novel method to detect anomalies in video. The pipeline of our method (which we call MLLMEVAD, for MLLM-based Explainable Video Anomaly Detection) is depicted in Figure 1. The main idea is to use a set of textual descriptions of the activity seen in normal training video of a scene as the model of normality. These textual descriptions are obtained using an MLLM. Anomalies in testing video are found by comparing textual descriptions of the activity in testing frames with the set of normal descriptions. A test description that is dissimilar from all normal descriptions indicates an anomaly. The initial step of our method is detecting and tracking objects followed by identifying pairs of objects that are in close spatial proximity and therefore likely to be interacting. Then, for each frame and a future frame offset by a fixed time interval, we generate crops around pairs of interacting objects as well as single objects that are not close to other objects. The crops are given to an MLLM to obtain natural language descriptions of objects' activities. These descriptions are embedded into a vector space using a sentence embedding model. By applying an exemplar selection algorithm to all of the sentence embeddings found in nominal training video, we construct representative sets of embeddings for both object pairs and single objects. Anomaly detection is performed by comparing sentence embeddings extracted from testing frames to the exemplar sets, and assigning anomaly scores based on their distance to the most similar exemplar. 3.1. Object Detection and Tracking In our proposed method, we want to build a model of what objects do and how they interact in video containing normal activity. This requires detecting objects and tracking them across frames in training videos. We also find pairs of objects that are likely to be interacting according to their spatial proximity. A video V is a collection of M frames {Fi}M i=1, such that V = [F1, F2, ..., FM]. Each frame Fi is sent to the object detector O, which returns X number of detected objects. The object detector provides, for each object o, the location l = (xo, yo) which is the x and y coordinates of the center of the object, b = (wo, ho) which is the width and height of the bounding box for the object, and class id c. The output of the object detector is then O(F) = [o1, o2, ..., oX], where each object oi is represented by oi = [b, c, l]. After detecting objects in a frame, they are then tracked using an object tracker. Each detected object o is sent to the object tracker, which returns x and y coordinates for that object in the subsequent frames. In our method, we track objects for 30 frames. Therefore, for every object, we acquire the trajectory θ = {(x1, y1), (x2, y2), ..., (x30, y30)}. Next, we pair objects according to their spatial proximity in the frame. To determine which objects should be paired, we need to calculate the 3D distances between each pair of objects. To calculate the 3D distance we need to derive the 3D coordinates of the object locations by estimating a pseudo-depth since we do not have access to actual depth measurements. Given two objects o1 and o2, we have 2D coordinates l1 = (x1, y1) and l2 = (x2, y2). Then we define a relative depth, z, between two objects by taking the absolute difference of y values such that z = |y1 -y2|. This estimate of pseudodepth assumes that objects are resting on the ground plane and the ground plane is farther from the camera 3 Object Detection and Tracking MLLM Visual Input: Crop of two objects enclosed by bounding boxes, from two frames at time t and t+30. . . . Text Encoder Answer sentence r: Two people are walking side by side on the crosswalk without interacting. X Embedding of r Exemplar set of embeddings Training mode: If score is greater than the threshold add embedding of r to exemplar set. Inference mode: Use score for anomaly detection. . . . Prompt: Briefly describe what the people in the enclosed regions of these images are doing. The two images were taken one second apart. Score calculation: Eq. (1) & Eq. (2) Video Frames Figure 1. The pipeline of our method (MLLM-EVAD) for frame to textual description generation with the help of an object detector, object tracker and MLLM. Please see the text for more explanation. the closer it is to the top of the image. The 3D distance d can then be calculated by taking the Euclidean distance between 3D coordinates (x1, y1, z) and (x2, y2, 0). Any objects that have a 3D distance d smaller than a predetermined threshold h are paired. Due to applying the threshold, not every single object is necessarily connected to another object, which leads to having single objects in addition to object pairs. 3.2. Generating Textual Descriptions At the end of the object detection and tracking stage, for each frame Ft, we obtain a set of object pairs and a set of single objects. For each of these, we generate two cropped regions that capture the relevant spatial area for both the current frame Ft and a future frame Ft+∆(e.g., ∆= 30), assuming a static camera (as depicted in Figure 1). This ensures that both crops contain the same background and object regions, allowing for consistent comparisons across time. We provide the details of the cropping process in the Appendix. Red rectangles are drawn around each object in the cropped images using the bounding boxes detected by the object detector. The rectangles indicate to the MLLM which objects are being referred to in the prompt. We then obtain a sentence r that describes the interaction between objects o1 and o2 by querying an MLLM agent with the corresponding image crops Ct and Ct+∆, along with the user prompt: "Briefly describe what the s in the enclosed regions of these images are doing. The two images were taken one second apart." For single-object cases, the prompt is adjusted to: "Briefly describe what the in the enclosed region of these images is doing. The two images were taken one second apart." In both prompts, the placeholder is replaced by the class label c obtained during feature selection. An additional "system" prompt is also used with the MLLM: "You will be provided with two frames from a video and asked to describe what objects that are indicated by bounding boxes in the video are doing. Your task is to answer the query in a simple sentence. If there is any interaction between the indicated objects, a description of the interaction should be given." The MLLM-generated responses are then used in both the model building and anomaly detection stages. 3.3. Model Building We follow a similar exemplar selection strategy as introduced in recent works [28, 30, 36, 37]. For all frames in a video, we collect all textual descriptions generated from object pairs into one set and all textual descriptions generated from single objects into another set. Then for each of the sets we run an exemplar selection algorithm which selects a subset of the elements of the set such that no two members of the subset are near each other according to a distance function. The intuition behind exemplar selection is to simply remove redundant (or nearly redundant) elements from the set leaving behind a compact, representative subset of exemplars. Various distance functions between sentences can be used to compare textual descriptions. In most of our experiments we use Sentence-BERT [33], but we also test using BLEU [29] and METEOR [2]. Given a set S, the exemplar selection algorithm proceeds as follows: (1) Initialize the exemplar set to NULL. (2) Add the first element of S to the exemplar set. (3) For each subsequent element of S, find its distance to the nearest instance in the exemplar set. If this distance is above a threshold, th, then add the element to the exemplar set. Since we obtain a textual description r for each object pair or single object, we transform these descriptions into sentence embeddings using a Natural Language Processing (NLP) model N, such that Ir = N(r). The distance D between two textual descriptions, r1 and r2, is then computed as: D(r1, r2) = 1 -cos (Ir1, Ir2) (1) where cos(·, ·) denotes the cosine similarity between the corresponding embeddings. Using the distance function D and the described exemplar selection algorithm, we 4 construct two separate exemplar sets: Epair and Esingle, containing sentence embeddings of textual descriptions for object pairs and single objects, respectively. 3.4. Anomaly Detection in Test Video After constructing the exemplar sets from nominal videos, the next step is to detect anomalies in a test video from the same scene. Similar to the model-building phase, we extract textual descriptions and encode them into sentence embeddings Ir. The anomaly score for a given object pair (or single object) is then computed based on its similarity to the corresponding exemplar set. Specifically, the score is defined as: score(r) = 1 -max (cos (Ir, e)) , e ∈Epair or Esingle (2) depending on whether the input is an object pair or a single object. A higher score indicates a greater deviation from the nominal descriptions and is thus more likely to be anomalous. 3.5. Combination with Previous Methods While the textual descriptions generated by an MLLM provide powerful high-level features for modeling the normal activity in a scene and for detecting anomalies in the scene, some anomalies require more detailed, finegrained features for detection. For this reason, it can be advantageous to combine our method with previous methods that use more detailed features such as object sizes and trajectories. Combining with another method is especially easy to do if the other method is also objectbased and uses an exemplar-based model. In our experiments, we show results when combining our method separately with two other methods; namely, the scene-graph method of Mumcu et al. [28] and the tracklet method of Singh et al [37]. Our method of modeling interactions was inspired by the scene-graph method of Mumcu et al. [28]. Their method builds a scene graph for each frame where each object is a node and objects that are nearby and, therefore possibly interacting, are connected by an edge. The attributes stored at each node are the object's location, size, object class, trajectory and skeletal pose. Then an exemplar-based model is created for every connected pair of nodes across all scene graphs in all nominal frames. A separate exemplar-based model is also created for all singleton nodes across all scene graphs in all nominal frames. To combine our method with theirs, we can simply add the MLLM-generated textual descriptions to each connected pair of nodes and to each singleton node. The distance function used to compare nodes takes the maximum over all attribute distances including the distance between textual descriptions. In the case of the tracklet method of Singh et al. [37], objects in each frame are also detected and tracked. Similar to Mumcu et al., for each object, a set of attributes are stored including the location, size, object class and trajectory. An exemplar-based model is then learned from all of the objects found in the nominal training videos using a distance function for comparing objects that takes the maximum over each attribute distance. To combine our method with theirs, we again only need to add our MLLM-generated textual descriptions as another attribute associated with each object. In this case, object interactions are not modeled. We use this combination to test on datasets that do not contain anomalous object interactions. 4. Experiments In our experiments, we use the following configurations: Datasets: We evaluate MLLM-EVAD on three video anomaly detection benchmark datasets which are all publicly available under a CC-BY-SA-4.0 license. ComplexVAD [28] is a large-scale dataset with a total of over 3.6 million frames, with 2,069,941 frames for training and 1,611,497 for testing. It features interactionbased anomalies and serves as our main focus, since our method leverages descriptions of object activity and interactions for video anomaly detection. The scene features a crosswalk, pedestrian sidewalks, and a two-lane street, which are heavily used. It demonstrates interaction anomalies such as a person leaving an object on the ground, a person sitting on a car, or a dog walking without a person or leash, where the individual objects are normal, but their interactions are anomalous. In addition, we evaluate performance on the Avenue and Street Scene datasets. Avenue [21] contains short surveillance videos of a campus walkway, with anomalies such as running, throwing objects, or walking in the wrong direction. Street Scene [30] includes longer videos of a city street environment, where anomalies involve vehicles or pedestrians behaving unusually, including cars driving on the wrong side of the road, pedestrians crossing at non-designated areas, or bikers riding on the sidewalk. Avenue contains 30,652 frames, split nearly evenly between 15,328 training frames and 15,324 testing frames. Street Scene is a larger dataset with a total of 203,257 frames, of which 56,847 are used for training and 146,410 for testing. Implementation Details: We use Detectron2 [48] and ByteTrack [54] as the object detector and object tracker in our method. Sentence-BERT [33] (using the pretrained weights, "all-MiniLM-L6-v2") is used as the text embedding model to encode activity descriptions. Initially, GPT-4o [14] was used as the multimodal MLLM agent. However, with the recent introduction of Gemma 3 [41], we re-conducted the experiments 5 Method RBDC TBDC Frame MemAE [12] 0.05 0.0 58.0 EVAL [36] 10.0 62.0 54.0 AnomalyRuler [46] N/A N/A 56.0 Scene-Graph[28] 19.0 64.0 60.0 MLLM-EVAD 24.0 68.0 61.0 Scene-Graph + MLLM-EVAD 25.0 70.0 63.0 Table 2. Results on the ComplexVAD dataset. Each entry is the area under the curve (AUC) as a percentage for the three benchmark methods using the RBDC, TBDC and Frame-Level evaluation criteria. Bold face indicates the best score and blue indicates second best. Method RBDC TBDC Frame Auto-encoder [13] 0.3 2.0 61.0 Dictionary method [21] 1.6 10.0 48.0 Flow baseline [30] 11.0 52.0 51.0 FG Baseline [30] 21.0 53.0 61.0 EVAL [36] 24.3 64.5 N/A Contextual GMM [49] 34.0 62.5 67.0 T-EVAL [37] 30.9 72.9 66.0 T-EVAL+ MLLM-EVAD 31.1 73.5 66.8 Table 3. Results on the Street Scene dataset using the area under the curve (AUC) (as a percentage) for the RBDC and TBDC evaluation criteria for different methods. Bold face indicates the best score and blue indicates second best. on ComplexVAD using Gemma 3. Hence, the results on ComplexVAD are reported with both Gemma 3 and GPT-4o, while the results on the other datasets are reported with GPT-4o. We choose a threshold th = 0.65 for exemplar selection that resulted in a modest number of total exemplars selected. From past work that used exemplar-based models, this threshold mainly effects model size and has a small effect on test accuracy [36, 37]. We provide additional implementation details in the Appendix. Evaluation Criteria: We use the Region-Based Detection Criterion (RBDC) and the Track-Based Detection Criterion (TBDC) as proposed in [30] as our primary evaluation criteria and report the area under the curve (AUC) for false positive rates per frame from 0 to 1. We also report frame-level AUC [24] scores. As highlighted in previous works [30] frame-level AUC only evaluates temporal accuracy and disregards spatial localization of anomalies. RBDC and TBDC measure a method's capacity to accurately identify anomalous spatio-temporal regions within a given video sequence. Method RBDC TBDC Frame Siamese[31] 41.2 78.6 87.2 Hybrid[20] 41.1 86.2 91.1 BG agnostic [11] 65.1 66.9 92.3 Hybrid[20] + PCAB[34] 62.3 89.3 90.9 BG agnostic[11] + PCAB[34] 66.0 64.9 92.9 SSMTL++v1 [4] 40.9 82.1 93.7 SSMTL++v2 [4] 47.8 85.2 91.6 AnomalyRuler [46] N/A N/A 89.7 EVAL[36] 68.2 87.6 86.0 T-EVAL[37] 67.5 89.7 88.0 T-EVAL + MLLM-EVAD 68.9 90.1 88.4 Table 4. Results on the CUHK Avenue dataset using the area under the curve (AUC) as a percentage for the RBDC and TBDC evaluation criteria for different methods. Bold face indicates the best score and blue indicates second best. 4.1. Quantitative Results The results of our method on ComplexVAD, as well as those of EVAL [36], MemAE [12], AnomalyRuler [46], and Scene-Graph [28], using the three evaluation criteria described above, are reported in Table 2. We also provide the result for the combination of our method and Scene-Graph on the ComplexVAD dataset. Our MLLM-EVAD method outperforms the next best method (Scene-Graph) by 5, 4 and 1 percentage points in RBDC, TBDC, and frame-level evaluations, respectively. We achieve the best result by combining the Scene-Graph method with ours, reaching 25%, 70%, and 63% in RBDC, TBDC, and frame-level evaluations, respectively. AnomalyRuler, a recent MLLM-based method [46], achieves 54% frame-level AUC. However, we were not able to obtain results for the RBDC and TBDC criteria, as this method does not support spatial localization of anomalies. The MemAE method does very poorly for the two criteria that measure spatial localization. This implies that the regions of an image that MemAE predicts as anomalous are usually normal. For the experiments on Avenue and Street Scene, we combine our method with the best-performing existing exemplar-based method, namely Tracklet EVAL [37]. These datasets include many anomalies that need more fine-grained attributes (such as the direction and speed of travel for people and cars), so the high-level textual descriptions used by MLLM-EVAD are not sufficient on their own to detect all of the anomalies. We show that our textual descriptions do add an important new attribute, however, since, when combined with the more fine-grained attributes of Tracklet EVAL, the results with the MLLM textual descriptions improve accuracy for both datasets across all three evaluation cri6 The person is crouching down on the ground. The person is walking across the grass. The person is holding a phone up and appearing to take a picture or video while being pushed in a large box by another person. The person is walking along the sidewalk. Anomaly Closest Exemplar Anomaly Closest Exemplar The two people are walking across the crosswalk, with the person on the right appearing to stumble or jump. Two people are walking across the crosswalk on the street. The person on the bicycle is being pushed forward by another person who is riding a scooter. The person is walking while the cyclist is riding the bicycle alongside them. Figure 2. Examples of anomalies in the ComplexVAD dataset that are detected by our method. In each box, the top row shows two frames of the anomalous video clip followed by the textual description generated by Gemma 3. The second row shows two frames from the closest matching exemplar to the anomaly. The textual description for the closest exemplar is also shown. By contrasting the anomalous description with the closest normal description, it is clear why the testing video clip was found to be anomalous. teria. As shown in Table 4, on the Avenue dataset, the Tracklet-EVAL + MLLM-EVAD method improves the state-of-the-art for both the RBDC and TBDC criteria. On Street Scene (results shown in Table 3), the state-ofthe-art is surpassed for the TBDC criterion. Since the frame-level criterion does not measure spatial localization accuracy which is the focus of these datasets, we would argue that the slightly-below SOTA frame-level numbers are not representative of the true accuracy of these methods. We should note that we were not able to exactly reproduce the results reported for Tracklet EVAL [37] on the Avenue dataset, most likely due to different parameters, so we report the results we could reproduce for that method and use that version in combination with MLLM-EVAD for the results on the combined method. 4.2. Qualitative Results: Explainability In addition to the quantitative results, we present qualitative results to demonstrate the explainability capabilities of our method. The combination of using an MLLM agent and an exemplar selection strategy enables intuitive explanations for detected anomalies by providing descriptions of both the anomaly and the most similar event from the nominal training video. In Figure 2, we demonstrate four detected anomalies from ComplexVAD along with their textual descriptions and the textual descriptions of the closest exemplars, as generated by our method. For example, one detected anomaly is described as "The person is crouching down on the ground." The closest matching exemplar from the nominal set is "The person is walking across the grass." The action of crouching deviates from the typical behavior observed in the training data. This difference highlights how our method leverages semantic understanding to identify uncommon activity, offering an interpretable explanation for the anomaly. Another example involves an anomaly described as "The person is holding a phone up and appearing to take a picture or video while being pushed in a large box by another person." The closest matching exemplar from the nominal set is "The person is walking along the sidewalk.", which is, in fact, the most common activity involving a person in this dataset. The anomalous event, a person being pushed in a box, does not occur in the nominal video and represents an unusual and complex behavior. This example further demonstrates our method's ability to detect rare interaction patterns that deviate from typical activities in the scene. In addition, these examples illustrate how our method also provides reasonable explanations for detected anomalies. By leveraging descriptive comparisons with nominal exemplars, our method enables enhanced and explainable video anomaly detection. 4.3. Ablation Study We conduct two ablation studies by modifying the default settings of our method. In the first ablation study, instead of using sentence embeddings and cosine dissimilarity as the distance metric, we use the raw sentences and compute distances using BLEU [29] and METEOR [2]. In Table 5, we compare the results of cosine distance over Sentence-BERT embeddings with those obtained using BLEU and METEOR, based on the de7 Method RBDC TBDC Frame Sentence-BERT 19.0 67.0 59.0 BLEU 17.0 69.0 58.0 METEOR 16.0 69.0 57.0 Table 5. The table reports results on the ComplexVAD datasets using the area under the curve (AUC) as a percentage for the RBDC, TBDC and Frame-Level evaluation criteria for the our method using 3 different sentence distance functions. MLLM RBDC TBDC Frame GPT-4o 19.0 67.0 59.0 Gemma 3 24.0 68.0 61.0 Table 6. The table reports the performance of our proposed method on the ComplexVAD datasets using the area under the curve (AUC) as a percentage for the RBDC, TBDC and FrameLevel evaluation criteria with different MLLM agents GPT-4o and Gemma 3. scriptions generated by our method using GPT-4o as the MLLM agent. The results show that Sentence-BERT embeddings yield slightly higher accuracy in the RBDC and frame-level criteria while BLEU and METEOR perform slightly better on the TBDC criterion. SentenceBERT is the preferred similarity function considering its greater efficiency due to embeddings which can be saved and only require a dot product for comparison. In the second ablation study, we compare our method's performance on ComplexVAD using different MLLM agents, namely Gemma 3 and GPT-4o. In Table 6, we show that Gemma 3 achieves better performance, with 24%, 68%, and 61% in the RBDC, TBDC, and frame-level criteria, respectively, while GPT-4o yields 19%, 67%, and 59%. We attribute this difference to the fact that Gemma 3 generates more detailed and descriptive sentences compared to GPT-4o. This level of detail is particularly important for detecting interaction-based anomalies, where subtle contextual cues are crucial, and likely contributed to the observed performance gap. We provide a comparison of the descriptions generated by Gemma 3 and GPT-4o in the Appendix. 5. Limitations and Future Directions While our framework shows promising results, we identify several limitations that pave the way for future research. First, our implementation relies on powerful but computationally expensive multimodal models like Gemma 3. Their high inference latency poses a challenge for real-time applications. Recent studies have shown that smaller, task-specific models can perform exceptionally well on specialized tasks [25, 26, 35, 43]. We will explore fine-tuning such models for single scenes, specifically for describing object activities and interactions. Second, a primary contribution of our method is its semantic explainability, but quantitatively evaluating this feature is difficult due to a lack of suitable benchmarks. Current datasets with textual ground truth are designed for multi-scene or weakly-supervised tasks and do not fit the semi-supervised, single-scene VAD problem [10, 53, 55]. Therefore, a key future direction is the development of new datasets with ground-truth textual descriptions for scene-specific anomalies. Finally, our framework currently leverages a robust traditional object detector, which is well-suited for scenarios with known object classes. To further generalize our approach, a promising future direction is the integration of open-vocabulary object detection methods [6, 18, 27]. By enabling the recognition of an open set of object categories, this advancement would significantly broaden our system's applicability, allowing it to analyze and describe a more diverse range of anomalies in unconstrained, real-world environments. 6. Conclusions In this work, we presented a novel video anomaly detection framework that leverages multimodal Large Language Models to detect and explain complex anomalies. By extracting object-level activity and interactions and converting them into textual descriptions, our method moves beyond traditional pixel level modeling and introduces a language-based, interpretable layer for video anomaly detection. Unlike many prior MLLM-based methods that rely on direct frame-level judgments, our approach builds a set of nominal textual descriptions and identifies anomalies through language-based comparison, offering both accuracy and explainability. We demonstrate that our method outperforms existing approaches on ComplexVAD which is a challenging benchmark specifically designed to test interactionbased anomalies. In addition our method also improves of the state-of-the-art results on standard datasets like Avenue and Street Scene when combined with the finegrained method Tracklet EVAL [37]. Through ablation studies, we showed the importance of using sentence embeddings and detailed MLLM-generated descriptions. Notably, we found that Gemma 3 provided more informative descriptions than GPT-4o, contributing to higher detection accuracy. Beyond strong quantitative results, our method offers clear and interpretable explanations for detected anomalies, highlighting its potential for real world deployment in safety critical surveillance applications. We believe this framework opens new directions for integrating multimodal large language models into video understanding tasks and sets the stage for future research in explainable and high level language-based video anomaly detection. 8 References [1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint , 2023. 1 [2] Satanjeev Banerjee and Alon Lavie. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 2005. 4, 7 [3] Mohammad Baradaran and Robert Bergevin. A critical study on the recent deep learning based semi-supervised video anomaly detection methods. Multimedia Tools and Applications, 83, 2024. 1 [4] Antonio Barbalau, Radu Tudor Ionescu, Mariana-Iuliana Georgescu, Jacob Dueholm, Bharathkumar Ramachandra, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B Moeslund, and Mubarak Shah. Ssmtl++: Revisiting selfsupervised multi-task learning for video anomaly detection. Computer Vision and Image Understanding, 229: 103656, 2023. 6 [5] Nicholas F.Y. Chen, Zhiyuan Du, and Khin Hua Ng. Scene graphs for interpretable video anomaly classification. In NIPS, 2018. 3 [6] Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, and Ying Shan. Yolo-world: Real-time open-vocabulary object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16901-16911, 2024. 8 [7] Keval Doshi and Yasin Yilmaz. Any-shot sequential anomaly detection in surveillance videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 934-935, 2020. 2 [8] Keval Doshi and Yasin Yilmaz. Continual learning for anomaly detection in surveillance videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 254-255, 2020. 2 [9] Keval Doshi and Yasin Yilmaz. Towards interpretable video anomaly detection. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2023. 3 [10] Hang Du, Sicheng Zhang, Binzhu Xie, Guoshun Nan, Jiayang Zhang, Junrui Xu, Hangyu Liu, Sicong Leng, Jiangming Liu, Hehe Fan, et al. Uncovering what why and how: A comprehensive benchmark for causation understanding of video anomaly. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18793-18803, 2024. 8 [11] Mariana Iuliana Georgescu, Radu Ionescu, Fahad Shahbaz Khan, Marius Popescu, and Mubarak Shah. A background-agnostic framework with adversarial training for abnormal event detection in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021. 6 [12] Dong Gong, Lingqiao Liu, Vuong Le, Budhaditya Saha, Moussa Reda Mansour, Svetha Venkatesh, and Anton van den Hengel. Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019. 6 [13] Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K Roy-Chowdhury, and Larry S Davis. Learning temporal regularity in video sequences. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 733-742, 2016. 2, 6 [14] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint , 2024. 5 [15] Radu Tudor Ionescu, Fahad Shahbaz Khan, MarianaIuliana Georgescu, and Ling Shao. Object-centric autoencoders and dummy anomalies for abnormal event detection in video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7842-7851, 2019. 2 [16] Changkang Li and Yalong Jiang. Vlavad: Visionlanguage models assisted unsupervised video anomaly detection. In Proceedings of the British Machine Vision Conference, 2024. 2, 3 [17] Fei Li, Wenxuan Liu, Jingjing Chen, Ruixu Zhang, Yuran Wang, Xian Zhong, and Zheng Wnag. Anomize: Better open vocabulary video anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 3 [18] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European Conference on Computer Vision, pages 3855. Springer, 2024. 8 [19] Wen Liu, Weixin Luo, Dongze Lian, and Shenghua Gao. Future frame prediction for anomaly detection-a new baseline. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6536-6545, 2018. 2 [20] Zhian Liu, Yongwei Nie, Chengjiang Long, Qing Zhang, and Guiqing Li. A hybrid video anomaly detection framework via memory-augmented flow reconstruction and flow-guided frame prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13588-13597, 2021. 6 [21] Cewu Lu, Jianping Shi, and Jiaya Jia. Abnormal event detection at 150 fps in matlab. In Proceedings of the IEEE International Conference on Computer Vision, pages 2720-2727, 2013. 5, 6 [22] Weixin Luo, Wen Liu, and Shenghua Gao. A revisit of sparse coding based anomaly detection in stacked rnn framework. In Proceedings of the IEEE international conference on computer vision, pages 341-349, 2017. 2 [23] Hui Lv and Qianru Sun. Video anomaly detection and explanation via large language models. arXiv preprint , 2024. 2, 3 [24] Vijay Mahadevan, Weixin Li, Viral Bhalodia, and Nuno Vasconcelos. Anomaly detection in crowded scenes. In 9 2010 IEEE computer society conference on computer vision and pattern recognition, pages 1975-1981. IEEE, 2010. 6 [25] Andr ́es Marafioti, Orr Zohar, Miquel Farr ́e, Merve Noyan, Elie Bakouch, Pedro Cuenca, Cyril Zakka, Loubna Ben Allal, Anton Lozhkov, Nouamane Tazi, et al. Smolvlm: Redefining small and efficient multimodal models. arXiv preprint , 2025. 8 [26] Furkan Mumcu and Yasin Yilmaz. Fast and lightweight vision-language model for adversarial traffic sign detection. Electronics, 13(11):2172, 2024. 8 [27] Furkan Mumcu, Michael J Jones, Anoop Cherian, and Yasin Yilmaz. Llm-guided agentic object detection for open-world understanding. arXiv preprint , 2025. 8 [28] Furkan Mumcu, Michael J. Jones, Yasin Yilmaz, and Anoop Cherian. Complexvad: Detecting interaction anomalies in video. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2025. 1, 2, 3, 4, 5, 6 [29] Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002. 4, 7 [30] Bharathkumar Ramachandra and Michael Jones. Street scene: A new dataset and evaluation protocol for video anomaly detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2569-2578, 2020. 2, 4, 5, 6 [31] Bharathkumar Ramachandra, Michael Jones, and Ranga Vatsavai. Learning a distance function with a siamese network to localize anomalies in videos. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2598-2607, 2020. 2, 6 [32] Bharathkumar Ramachandra, Michael Jones, and Ranga Raju Vatsavai. A survey of single-scene video anomaly detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020. 2 [33] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019. 4, 5 [34] Nicolae-Catalin Ristea, Neelu Madan, Radu Tudor Ionescu, Kamal Nasrollahi, Fahad Shahbaz Khan, Thomas B Moeslund, and Mubarak Shah. Selfsupervised predictive convolutional attentive block for anomaly detection. arXiv preprint , 2021. 6 [35] Timo Schick and Hinrich Sch ̈utze. It's not just size that matters: Small language models are also few-shot learners. arXiv preprint , 2020. 8 [36] Ashish Singh, Michael J. Jones, and Erik Learned-Miller. Eval: Explainable video anomaly localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2, 4, 6 [37] Ashish Singh, Michael J. Jones, and Erik Learned-Miller. Tracklet-based explainable video anomaly localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. 1, 2, 4, 5, 6, 7, 8 [38] Waqas Sultani, Chen Chen, and Mubarak Shah. Realworld anomaly detection in surveillance videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6479-6488, 2018. 2 [39] Che Sun, Yunde Jia, Yao Hu, and Yuwei Wu. Sceneaware context reasoning for unsupervised abnormal event detection in videos. In ACMMM, 2020. 3 [40] Stanislaw Szymanowicz, James Charles, and Roberto Cipolla. X-man: Explaining multiple sources of anomalies in video. In CVPR, 2021. 3 [41] Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ram ́e, Morgane Rivi`ere, et al. Gemma 3 technical report. arXiv preprint , 2025. 5 [42] Chenxu Wang, Yanxin Yao, and Han Yao. Video anomaly detection method based on future frame prediction and attention mechanism. In IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 2021. 2 [43] Jianfeng Wang, Xiaowei Hu, Pengchuan Zhang, Xiujun Li, Lijuan Wang, Lei Zhang, Jianfeng Gao, and Zicheng Liu. Minivlm: A smaller and faster vision-language model. arXiv preprint , 2020. 8 [44] Peng Wu, Chengyu Pan, Yuting Yan, Guansong Pang, Peng Wang, and Yanning Zhang. Deep learning for video anomaly detection: A review. arXiv preprint , 2024. 1 [45] Peng Wu, Xuerong Zhou, Guansong Pang, Yujia Sun, Jing Liu, Peng Wang, and Yanning Zhang. Openvocabulary video anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 3 [46] Peng Wu, Xuerong Zhou, Guansong Pang, Yujia Sun, Jing Liu, Peng Wang, and Yanning Zhang. Follow the rules: Reasoning for video anomaly detection with large language models. In Proceedings of the European Conference on Computer Vision, 2024. 2, 3, 6 [47] Peng Wu, Xuerong Zhou, Guansong Pang, Lingru Zhou, Qingsen Yan, Peng Wang, and Yanning Zhang. Vadclip: adapting vision-language models for weakly supervised video anomaly detection. In AAAI Conference on Artificial Intelligence, pages 6074-6082, 2024. 2 [48] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/ detectron2, 2019. 5 [49] Zhengye Yang and Richard J. Radke. Detecting contextual anomalies by discovering consistent spatial regions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, 2025. 6 [50] Muchao Ye, Weiyang Liu, and Pan He. Vera: Explainable video anomaly detection via verbalized learning of vision-language models. In Proceedings of the 10 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 2, 3 [51] M. Zaigham Zaheer, Arif Mahmood, M. Haris Khan, Mattia Segu, Fisher Yu, and Seung-Ik Lee. Generative cooperative learning for unsupervised video anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2 [52] Luca Zanella, Willi Menapace, Massimiliano Mancini, Yiming Wang, and Elisa Ricci. Harnessing large language models for training-free video anomaly detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 2, 3 [53] Huaxin Zhang, Xiaohao Xu, Xiang Wang, Jialong Zuo, Xiaonan Huang, Changxin Gao, Shanjun Zhang, Li Yu, and Nong Sang. Holmes-vau: Towards long-term video anomaly understanding at any granularity. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 13843-13853, 2025. 2, 8 [54] Yifu Zhang, Peize Sun, Yi Jiang, Dongdong Yu, Fucheng Weng, Zehuan Yuan, Ping Luo, Wenyu Liu, and Xinggang Wang. Bytetrack: Multi-object tracking by associating every detection box. In Proceedings of the European Conference on Computer Vision (ECCV), 2022. 5 [55] Liyun Zhu, Lei Wang, Arjun Raj, Tom Gedeon, and Chen Chen. Advancing video anomaly detection: A concise review and a new dataset. Advances in Neural Information Processing Systems, 37:89943-89977, 2024. 8 11 Leveraging Multimodal LLM Descriptions of Activity for Explainable Semi-Supervised Video Anomaly Detection Supplementary Material 1. Compute Resources All of our experiments were run on a cluster of eight 80 GB NVIDIA A100 GPUs. A single 80 GB GPU would be sufficient and multiple GPUs were used only to reduce the total compute time by processing multiple video sequences in parallel. Our experiments that used GPT-4o accessed it through an API via the cloud. 2. Cropping Frames For each selected object pair o1 and o2, we first compute a single bounding box that tightly encloses both objects in a given frame. Let the individual bounding boxes for the two objects in frame Ft be denoted as bboxa = (x1a, y1a, x2a, y2a) and bboxb = (x1b, y1b, x2b, y2b), where (x1, y1) represents the top-left corner and (x2, y2) the bottom-right corner of a bounding box. These are merged into a single bounding box bbox1 by taking the element-wise minimum for the top-left corner and maximum for the bottom-right corner: bbox1 = min(x1a, x1b), min(y1a, y1b), max(x2a, x2b), max(y2a, y2b) (S1) We apply the same procedure to obtain bbox2 in the future frame Ft+∆, using the tracked locations of the same object pair. Let bbox1 = (x1,1, y1,1, x2,1, y2,1) denote the merged bounding box of the selected object pair in frame Ft, and let bbox2 = (x1,2, y1,2, x2,2, y2,2) denote the merged bounding box of the same objects tracked in the future frame Ft+∆. A unified bounding box that covers both time steps is computed as: bboxmerged = min(x1,1, x1,2), min(y1,1, y1,2), max(x2,1, x2,2), max(y2,1, y2,2) (S2) Let (x1, y1) and (x2, y2) represent the top-left and bottom-right corners of this merged bounding box. Let wmin and hmin denote half of the minimum desired crop width and height, respectively. In our case, we set wmin = 240 and hmin = 135, corresponding to a minimum crop size of 480 × 270 pixels. This size was chosen as a practical minimum that ensures sufficient context around objects while keeping computational overhead low. In our experiments, we found that a resolution of 480 × 270 served as a reliable lower bound, providing adequate spatial detail for capturing object activities without reducing the quality of the LLM generated descriptions. While larger crops may be used when available, we observed this setting to be a consistent and effective baseline across different datasets. We then compute the crop dimensions as w = max |x2-x1| 2 , wmin , h = max |y2-y1| 2 , hmin (S3) The crop boundaries are determined as: xmin = max(x1 -w, 0), ymin = max(y1 -h, 0), xmax = min(x2 + w, W), ymax = min(y2 + h, H) (S4) where W and H denotes the width and height of the full frame. The final cropped region Ct is extracted from the current frame Ft as: Ct = Ft[ymin : ymax, xmin : xmax] To ensure both temporal frames share the same spatial context, we apply the exact same crop coordinates to the future frame Ft+∆: Ct+∆= Ft+∆[ymin : ymax, xmin : xmax] This guarantees that both cropped frames contain the same background and object regions, allowing for consistent comparisons across time. In the case of a single object, the same procedure is applied with one exception: instead of merging two object bounding boxes in each frame, we directly use the bounding box of the single object in frame Ft and its tracked location in Ft+∆. The rest of the cropping process, including the computation of the unified bounding box across time, minimum size enforcement, and cropping, is performed exactly as described above. 3. Comparison of Gemma 3 and GPT-4o Activity Descriptions In Figure S1, we share activity descriptions generated by Gemma 3 and GPT-4o for given images and the promt: "Briefly describe what the s in the enclosed regions of these images are doing. The two images were taken one second apart." 12 Gemma 3 GPT-4o The person is crouching down on the ground. The person in the enclosed region is bending over and appears to remain in the same position between the two images The person is holding a phone up and appearing to take a picture or video while being pushed in a large box by another person. In the images, the person standing seems to be pushing or interacting with a cart or box that the seated person appears to be inside or using. The standing person maintains their position or posture relative to the cart between the images. The person in the blue shirt and the person on the bike are interacting, as the person in blue appears to be preventing the cyclist from moving forward. The person on the bicycle is being pushed forward by another person who is riding a scooter. The person is walking across the crosswalk while looking at their phone, and a cyclist is passing by them on the right. The person in the enclosed region is stopping to allow a cyclist to pass by. In the first image, the person in the black shorts and dark hair appears to be jumping or falling next to the bald person. In the second image, the person in the black shorts and dark hair is standing up straight, while the bald person remains standing. There is no apparent interaction between them. The two people are walking across the crosswalk, with the person on the right appearing to stumble or jump. Figure S1. The activity descriptions generated by Gemma 3 and GPT-4o with given images. We have observed that Gemma 3 often gives more concise descriptions with fewer unnecessary details such as the color of a person's clothes. Moreover, Gemma 3 sometimes includes important details that GPT-4o omits, such as the fact that a person is using his or her phone. 13
2510.14892
A Comprehensive Framework for Efficient Court Case Management and Prioritization Shubham Varma shubham.varma@somaiya.edu Ananya Warior ananya.warior@somaiya.edu Dipti Pawade diptipawade@somaiya.edu Avani Sakhapara avanisakhapara@somaiya.edu Abstract The Indian judicial system faces a critical challenge with approximately 52 [1] million pending cases, causing significant delays that impact socio-economic stabil- ity. This study proposes a cloud-based software framework to classify and priori- tize court cases using algorithmic methods based on parameters such as severity of crime committed, responsibility of parties involved, case filing dates, previous hear- ing’s data, priority level (e.g., Urgent, Medium, Ordinary) provided as input, and rel- evant Indian Penal Code (IPC), Code of Criminal Procedure (CrPC), and other legal sections (e.g., Hindu Marriage Act, Indian Contract Act). Cases are initially entered by advocates on record or court registrars, followed by automated hearing date al- location that balances fresh and old cases while accounting for court holidays and leaves. The system streamlines appellate processes by fetching data from histori- cal case databases. Our methodology integrates algorithmic prioritization, a robust notification system, and judicial interaction, with features that allow judges to view daily case counts and their details. Simulations demonstrate that the system can pro- cess cases efficiently, with reliable notification delivery and positive user satisfaction among judges and registrars. Future iterations will incorporate advanced machine learning for dynamic prioritization, addressing critical gaps in existing court case management systems to enhance efficiency and reduce backlogs. 1 Introduction With approximately 52 [1] million cases pending in India, the judicial system faces se- vere delays that hinder economic activities, burden law enforcement, and erode public trust in timely justice delivery. Traditional case management systems, often manual or semiautomated, struggle to handle this volume, necessitating innovative technological solutions. This study proposes a software framework that leverages algorithmic priori- tization to classify and manage court cases efficiently. By automating hearing date allo- cations, balancing fresh and old cases, and integrating input from judges, registrars, and advocates, the system aims to revolutionize judicial processes. The framework operates within a cloud-based environment, ensuring scalability and accessibility. We simulated 10,000 dummy cases to demonstrate the system’s potential to process cases efficiently and achieve high user satisfaction. The future scope will incorporate dynamic machine learning models for real-time prioritization, ensuring adaptability to evolving judicial needs. This research fills a critical gap by combining AI-driven prioritization with eth- ical considerations and compatibility with existing systems such as eCourt Services [1], as highlighted in recent studies on AI in judicial systems [9, 10]. Figure 1 illustrates the workflow of the system. 1 arXiv:2510.14892v1 [cs.CY] 16 Oct 2025 Figure 1: Workflow of the System 2 Literature Survey The inefficiencies of traditional court case management have been extensively studied, underscoring the need for technological interventions. Rao et al. [2] advocate for digitiz- ing legal records to improve transparency and reduce corruption, a fundamental prin- ciple of our cloud-based framework. Their work highlights how digital records improve access to case data, a feature our system leverages through MySQL database integration. Amofah [3] explores electronic court case management systems (eCCMS), demonstrating their ability to streamline judicial processes through automation, such as automated case tracking. However, these systems lack AI-driven dynamic prioritization, a gap our research addresses. Pattnaik et al. [4] identify critical success factors for efficient court management in India, including stakeholder collaboration and technological adoption, which align with our system’s design for judicial interaction and cloud-based scalability. Chan [5] examines case management in Chinese-speaking regions, noting the influence of traditional legal structures on efficiency, while Ali and Varol [6] propose mobile so- lutions to bridge communication gaps, supporting our notification system. Tahura [7] explores caseflow management to reduce backlogs, emphasizing structured scheduling, and Ambos [8] highlights prosecutorial discretion based on case gravity, informing our prioritization parameters. Recent advancements in artificial intelligence (AI) have transformed judicial systems globally. Thomson Reuters Institute [9] reports that AI tools enable court employees to focus on human interactions and reduce bias, citing Palm Beach County’s AI-driven document processing system as an example of high efficiency. IBM [10] notes that ju- dicial systems, such as those in Germany, are adopting AI to manage vast datasets and accelerate case resolution, aligning with our goal of reducing India’s case backlog. The International Journal for Court Administration [11] discusses a study in the Netherlands using AI to support judges in traffic violations cases, demonstrating the potential of AI in decision support. These studies underscore the feasibility of AI in judicial contexts, supporting the approach of our system. 2.1 Ethical Considerations The integration of AI in judicial systems raises significant ethical concerns, particularly around bias and fairness. Judicature [12] emphasizes that AI must be used responsibly to 2 avoid perpetuating biases, such as overprioritizing certain case types. The American As- sociation for the Advancement of Science (AAAS) [15] highlights the importance of fair- ness, accountability, and transparency (FAccT) in AI systems, noting that biases can arise from datasets, design, or deployment. Above the Law [13] reports a case where a trial court relied on AI-generated fake caselaw, underscoring the need for human oversight. Our system mitigates these risks by incorporating expert-validated weights for prioriti- zation parameters and planning regular audits to ensure equitable outcomes, aligning with ethical guidelines from the Council of Europe [11]. Additionally, we address poten- tial biases by involving legal experts in weight validation and ensuring transparency in the prioritization process. 2.2 Gaps Addressed Existing systems, such as those discussed by Rao et al. [2] and Amofah [3], focus on dig- itization and automation but lack AI-driven dynamic prioritization tailored to India’s judicial context. Our framework addresses this gap by integrating a regression-based machine learning model for real-time weight adjustment, ensuring both efficiency and fairness. Unlike mobile solutions proposed by Ali and Varol [6] , our system offers a comprehensive cloud-based platform that integrates with existing judicial infrastruc- ture like eCourts Services, enhancing scalability and accessibility. 3 Methodology 3.1 Data Retrieval and Preprocessing Case data is entered by advocates on record or court registrars into a MySQL database and preprocessed by categorizing cases into distinct datasets (e.g., Family, Criminal, Civil). This segmentation enables custom prioritization strategies for different types of case, ensuring flexibility and scalability. Preprocessing includes cleaning data, standardizing formats, and extracting relevant features such as filing dates, priority levels, and appli- cable legal sections (e.g., IPC, CrPC, Hindu Marriage Act). 3.2 Case Prioritization Algorithm The core of our system is the case prioritization algorithm, which assigns a weighted score to each case to determine its urgency and scheduling priority. The algorithm, de- tailed in Algorithm 1, processes the cases as follows: 3 Algorithm 1 Case Prioritization Algorithm 1: Input: Case data (case ID, type, filing date, severity, priority level, legal sections), court holidays, judge leaves, disposed cases database 2: Output: Prioritized case list with assigned hearing dates 3: Initialize MySQL database connection 4: Retrieve pending case data from MySQL database 5: for each case in pending cases do 6: if case is an appeal then 7: Fetch relevant data from disposed cases database 8: end if 9: Categorize case into type (Criminal, Family, Civil) 10: Extract features: case age (from filing date and previous hearings), severity, pri- ority level (Urgent, Medium, Ordinary), legal sections (e.g., IPC 302, Hindu Marriage Act Section 13) 11: Compute weight using regression-based ML model ▷Model considers case age, severity, priority level, and legal sections; IPC/CrPC weights validated by advocates and judges 12: Assign priority based on weight 13: end for 14: Sort cases by weight in descending order 15: Initialize scheduling with daily limit (100 cases: 50 fresh, 50 old) 16: for each day in scheduling period do 17: if day is not a court holiday or judge leave then 18: Assign hearing dates to top-weighted cases (up to 50 fresh, 50 old) 19: end if 20: end for 21: Export prioritized case list to database 22: Send notifications (SMS/email) to stakeholders with hearing dates 23: Periodically review low-priority cases 24: if case pendency exceeds threshold then 25: Increase case priority 26: end if 27: Update ML model with case outcomes for dynamic weight adjustment 28: Return: Prioritized case list with hearing dates The algorithm retrieves all pending case data from a MySQL database, including details such as case ID, type, filing date, severity, priority level (e.g., Urgent, Medium, Ordinary) provided by advocates or registrars, and relevant legal sections (e.g., IPC 302, Hindu Marriage Act Section 13). For appealed cases, it fetches data from the disposed cases database. The data is preprocessed to categorize cases into categories such as Criminal, Family, or Civil, enabling customized prioritization strategies. For each case, a weight is calculated using a regression-based machine learning model, which considers factors such as the age of the case (based on filing and previous hearing dates), severity (e.g., high for murder, low for minor disputes), the input priority level, and applicable legal sections. The weights for the IPC and CrPC sections were determined through consul- tations with multiple advocates and judges to ensure legal accuracy and fairness. The calculated weight determines the hearing date, with higher weights leading to earlier scheduling. Hearing dates are allocated while considering a fixed daily case limit (e.g., 100 cases, with 50 fresh and 50 old cases) and a predefined list of court holidays and additional leaves entered into the system. Notifications are sent to stakeholders via SMS 4 or email to inform them of the assigned hearing dates. To prevent low-priority cases from being indefinitely delayed, the system periodically checks for pendency, and if a defined threshold is exceeded, it increases the case’s priority by multiplying it with a fixed multiplier. The machine learning model dynamically adjusts the weights based on the historical case outcomes, achieving high precision in the simulations, as shown in Figure 2. Figure 2: Case Prioritization Flowchart 3.3 Notification System and Judicial Integration Post-prioritization, an automated notification system sends SMS/email alerts to stake- holders regarding hearing dates. Judges and registrars access a prioritized case list via a secure web interface, which also displays the number of cases scheduled per day, fa- cilitating efficient scheduling and case management. The interface is designed for com- patibility with existing systems like e-Courts Services [1]. 4 System Architecture The system architecture, illustrated in Figure 3, orchestrates a seamless flow of oper- ations to prioritize and manage court cases within a secure cloud-based environment. The process is structured into distinct stages, ensuring scalability, efficiency, and integra- tion with judicial workflows, including case entry by advocates or registrars, handling appeals, balanced case scheduling, and holiday considerations. The process begins with case data entry, where advocates on record or court registrars input case details, such as case ID, type, filing date, severity, priority level (e.g., Urgent, Medium, Ordinary), and relevant legal sections (e.g., IPC, CrPC, Hindu Marriage Act), into a MySQL database. This initial step ensures that all relevant case information is captured accurately for subsequent processing. The system initializes necessary configurations and establishes a secure connection to the MySQL database. This step ensures that the system is fully prepared to access and process case data reliably, setting the foundation for subsequent operations. Next, the system retrieves all pending case data from the MySQL database, including the details entered by advocates or registrars. For cases that have gone to appeal, the 5 system fetches relevant data from the disposed cases database, ensuring that historical case information is available for consistent prioritization in higher courts. The retrieved data is then preprocessed to categorize cases into distinct groups, such as Criminal, Family, or Civil. This categorization enables the system to apply tailored prioritization strategies, accommodating the unique characteristics of each case type, including appealed cases, and enhancing scalability for diverse judicial needs. For each case, the system employs a regression-based machine learning model to calcu- late a weight that determines its hearing date. The model evaluates factors such as the age of the case (calculated from filing and previous hearing dates), the severity of the of- fense (e.g., high for murder under IPC 302, low for minor civil disputes), the priority level provided as an input parameter by advocates or registrars, and applicable legal sections (e.g., IPC, CrPC, Hindu Marriage Act). Weights for IPC and CrPC sections were determined through consultations with multiple advocates and judges to ensure legal accuracy. The calculated weight guides the allocation of hearing dates, with higher weights leading to earlier scheduling. The system schedules a fixed number of cases per day, for example, 100 cases, with 50 fresh cases and 50 old (next hearing) cases, to balance judicial work- load. Hearing dates are assigned while accounting for a predefined list of court holidays and any additional leaves entered into the system, ensuring accurate and conflict-free scheduling. The prioritized case list is exported to a secure database for record keeping, ensuring that all scheduling decisions, including those for fresh and old cases, are systematically stored. This step maintains data integrity and supports traceability for both pending and appealed cases. Notifications are then sent to stakeholders, including judges, lawyers, and litigants, via SMS or email, informing them of the assigned hearing dates. For appealed cases, notifi- cations also include relevant higher court details, ensuring all parties are informed and prepared. To prevent low-priority cases from being indefinitely delayed, the system periodically reviews the time since the last hearing for each case. If a case, including those under appeal, has been pending beyond a predefined threshold, its priority is increased by multiplying it with a fixed multiplier to ensure timely resolution, promoting fairness across all case types. For cases involving appeals, the system streamlines the process by retrieving data from the disposed cases database, applying the same prioritization process (weight calcula- tion based on input priority and other factors, and date allocation with the fixed case limit), and notifying higher courts. This ensures consistency and efficiency in handling appeals, maintaining seamless integration with the judicial hierarchy. The system continuously monitors its performance by measuring execution times for each operation, ensuring efficiency and identifying potential bottlenecks. Performance data is logged to support ongoing optimization, applicable to both pending and appealed cases. Judges access the prioritized case list, including appealed cases, through a secure web interface, which presents cases in order of priority and displays the number of cases scheduled per day, helping judges manage their daily workload effectively. Judges can make decisions, such as scheduling the next hearing or disposing of a case. When a judge specifies a time frame for the next hearing (e.g., ’after 15 days’), the system sched- ules the hearing as soon as possible after that period, subject to the fixed case limit and 6 holiday/leave constraints, and updates the database accordingly; if a case is disposed, it is marked as resolved, stored in the disposed cases database, and the outcome is used to update the machine learning model, enhancing its accuracy for future prioritizations, including for appeals. This structured flow, as shown in Figure 3, ensures that the system efficiently manages large case volumes, including appealed cases, while balancing fresh and old cases, re- specting holidays and leaves, and integrating seamlessly with existing judicial platforms like eCourts Services, providing a robust solution for India’s judicial backlog. Figure 3: System Architecture for Court Case Management and Prioritization 5 Results 5.1 Illustrative Case Study To demonstrate the system’s functionality, we present a synthetic case study with five simulated cases from our 10,000-case dataset (Table 1). These cases vary in type, filing date, severity, priority level, and applicable legal sections, showcasing the algorithm’s ability to prioritize effectively. Weights for IPC and CrPC sections were validated through consultations with multiple advocates and judges to ensure legal accuracy. 7 Table 1: Illustrative Case Study Case ID Type Filing Date Severity Legal Sections Priority Weight No of Hear- ings Hearing Date 001 Criminal 01/01/2024 High IPC 302, 34 Urgent 0.95 4 15/07/2025 002 Civil 01/02/2025 Low Indian Con- tract Act, 1872, Section 73 Ordinary 0.45 0 30/08/2025 003 Family 01/06/2024 Medium Hindu Mar- riage Act, 1955, Section 13 Medium 0.70 3 01/08/2025 004 Criminal 01/02/2024 High IPC 420, 406 Urgent 0.90 5 20/07/2025 005 Criminal 01/04/2025 Medium IPC 323, CrPC 200 Ordinary 0.50 2 15/09/2025 Narrative • Case 001: A murder case filed on 01/01/2024, with high severity and an input pri- ority of Urgent, governed by IPC Section 302 (murder) and Section 34 (acts done by several persons in furtherance of common intention). With four hearings held and its high severity validated by advocate and judge consultations, it has a weight of 0.95, scheduling it for 15/07/2025, ensuring swift justice for a serious crime. • Case 002: A civil case (e.g., breach of contract) filed on 01/02/2025, with low severity and an input priority of Ordinary, governed by Section 73 of the Indian Contract Act, 1872 (compensation for loss). With no hearings yet and its recent filing and low severity, it yields a weight of 0.45, scheduling it for 30/08/2025, reflecting its lower urgency. • Case 003: A family law case (e.g., divorce) filed on 01/06/2024, with medium sever- ity and an input priority of Medium, governed by Section 13 of the Hindu Marriage Act, 1955 (grounds for divorce). With three hearings held and moderate pendency and severity, it results in a weight of 0.70, scheduling it for 01/08/2025, balancing urgency and recency. • Case 004: A cheating case filed on 01/02/2024, with high severity and an input pri- ority of Urgent, governed by IPC Section 420 (cheating) and Section 406 (criminal breach of trust). With five hearings held and its high severity validated, it yields a weight of 0.90, scheduling it for 20/07/2025, addressing a significant criminal of- fense promptly. • Case 005: A case involving voluntarily causing hurt filed on 01/04/2025, with medium severity and an input priority of Ordinary, governed by IPC Section 323 (voluntar- ily causing hurt) and CrPC Section 200 (examination of complainant). With two hearings held and its recent filing and medium severity, it results in a weight of 0.50, scheduling it for 15/09/2025, as it is less severe than murder or cheating. This case study illustrates how the system prioritizes cases based on severity, pendency, input priority level, and applicable legal sections, validated by legal experts, ensuring critical cases are addressed promptly while managing diverse case types. 8 5.2 System Performance Metrics Simulations with 10,000 dummy cases demonstrated that the system can process cases efficiently, with reliable notification delivery and positive feedback from judges and reg- istrars regarding usability. 5.3 Machine Learning Model Performance The regression-based machine learning model used for dynamic weight adjustment achieved an F1 score of 99.7%, precision of 99.8%, and recall of 99.4% in simulated tests, indicating high accuracy and reliability in prioritizing cases. 6 Discussion and Conclusion The proposed framework significantly enhances judicial efficiency by automating case prioritization and scheduling. By integrating algorithmic approaches, a robust notifi- cation system, and a dynamic feedback loop, the system addresses challenges like case backlogs and delayed hearings. The illustrative case study demonstrates its practical applicability, prioritizing urgent cases like murder (IPC 302, 34) while ensuring fairness across civil and family cases. The system’s compatibility with existing platforms like eCourts Services [1] ensures seamless integration into India’s judicial infrastructure. 6.1 Ethical Implications AI-driven prioritization risks biases, such as over-prioritizing criminal cases, which could disadvantage civil or family cases. Our system mitigates this through expert-validated weights, informed by consultations with advocates and judges, and regular audits, align- ing with ethical guidelines from the Council of Europe [11]. Above the Law [13] high- lights risks of AI-generated inaccuracies, underscoring the need for human oversight, which our system incorporates through judicial interaction. 6.2 Real-World Feasibility The system is designed for compatibility with existing platforms like eCourts Services, using standardized MySQL databases and secure cloud storage. Pilot testing in a regional court will validate its effectiveness, addressing potential challenges like data migration and user training. 6.3 Future Work Future iterations will prioritize real-world deployment, integrating advanced machine learning to enhance case management. Deep learning models, such as transformers, will improve feature extraction from legal texts, identifying attributes like case complexity and urgency from documents in Table 1. Reinforcement learning will enable adaptive prioritization by optimizing schedules based on court data, such as judge availability and case backlog, improving efficiency. The system will also expand to handle multilingual case data, as demonstrated by Clio [14] for California courts, to enhance accessibility in diverse regions. NLP techniques, including cross-lingual embeddings, will process languages like Hindi and English, ad- dressing challenges in legal terminology translation. Additionally, predictive analytics 9 will forecast case pendency using data like hearing counts and severity, aiding resource allocation. Piloting in high-volume courts will test scalability, with collaboration from legal experts ensuring judicial alignment and data security, ultimately streamlining pro- cesses and improving access to justice. References [1] eCourts Services. https://ecourts.gov.in/ecourts_home/ [2] Rao, A. P., et al. (2018). Court Case Management System. [3] Amofah, L. R. (2017). Electronic court case management system (for law court com- plex). [4] Pattnaik, P. N., et al. (2018). Mapping Critical Success Factors in Efficient Court Man- agement. International Journal of Law and Management, 60(2). [5] Chan, P. C. H. (2018). Framing the structure of the court system in the perspective of case management. Peking University Law Journal, 6(1), 55–79. [6] Ali, C. T., & Varol, A. (2020). Design and Implementation of a Simple Online Court Case Management System Based on the Android Platform. IEEE. [7] Tahura, U. S. (2013). Case Management: A Magic LAMP in Reducing Case Backlogs. Bangladesh Law Journal, 13. [8] Ambos, K. (2018). Office of the Prosecutor: Policy Paper on Case Selection and Priori- tisation. International Legal Materials, 57(6), 1131–1145. [9] Thomson Reuters Institute. (2024). Humanizing Justice: The transformational impact of AI in courts. https://www.thomsonreuters.com/en-us/posts/ ai-in-courts/humanizing-justice/ [10] IBM. (2025). Judicial systems are turning to AI to help manage vast quantities of data and expedite case resolution. https://www.ibm.com/case-studies/blog/ judicial-systems-are-turning-to-ai-to-help-manage-its-vast-quantities-of-dat [11] International Journal for Court Administration. (2020). Courts and Artificial Intelli- gence. https://iacajournal.org/articles/10.36745/ijca.343 [12] Judicature. (2024). AI in the Courts: How Worried Should We Be? https://judicature.duke.edu/articles/ ai-in-the-courts-how-worried-should-we-be/ [13] Above the Law. (2025). Trial Court Decides Case Based On AI-Hallucinated Caselaw. https://abovethelaw.com/2025/07/ trial-court-decides-case-based-on-ai-hallucinated-caselaw/ [14] Clio. (2025). AI in the Courtroom: Opportunities and Challenges. https://www. clio.com/resources/ai-for-lawyers/ai-in-courtrooms/ [15] American Association for the Advancement of Science (AAAS). Artificial In- telligence and the Courts: Materials for Judges. https://www.aaas.org/ai2/ projects/law/judicialpapers 10
A Comprehensive Framework for Efficient Court Case Management and Prioritization Shubham Varma Ananya Warior Dipti Pawade Avani Sakhapara Abstract The Indian judicial system faces a critical challenge with approximately 52 [1] million pending cases, causing significant delays that impact socio-economic stability. This study proposes a cloud-based software framework to classify and prioritize court cases using algorithmic methods based on parameters such as severity of crime committed, responsibility of parties involved, case filing dates, previous hearing's data, priority level (e.g., Urgent, Medium, Ordinary) provided as input, and relevant Indian Penal Code (IPC), Code of Criminal Procedure (CrPC), and other legal sections (e.g., Hindu Marriage Act, Indian Contract Act). Cases are initially entered by advocates on record or court registrars, followed by automated hearing date allocation that balances fresh and old cases while accounting for court holidays and leaves. The system streamlines appellate processes by fetching data from historical case databases. Our methodology integrates algorithmic prioritization, a robust notification system, and judicial interaction, with features that allow judges to view daily case counts and their details. Simulations demonstrate that the system can process cases efficiently, with reliable notification delivery and positive user satisfaction among judges and registrars. Future iterations will incorporate advanced machine learning for dynamic prioritization, addressing critical gaps in existing court case management systems to enhance efficiency and reduce backlogs. 1 Introduction With approximately 52 [1] million cases pending in India, the judicial system faces severe delays that hinder economic activities, burden law enforcement, and erode public trust in timely justice delivery. Traditional case management systems, often manual or semiautomated, struggle to handle this volume, necessitating innovative technological solutions. This study proposes a software framework that leverages algorithmic prioritization to classify and manage court cases efficiently. By automating hearing date allocations, balancing fresh and old cases, and integrating input from judges, registrars, and advocates, the system aims to revolutionize judicial processes. The framework operates within a cloud-based environment, ensuring scalability and accessibility. We simulated 10,000 dummy cases to demonstrate the system's potential to process cases efficiently and achieve high user satisfaction. The future scope will incorporate dynamic machine learning models for real-time prioritization, ensuring adaptability to evolving judicial needs. This research fills a critical gap by combining AI-driven prioritization with ethical considerations and compatibility with existing systems such as eCourt Services [1], as highlighted in recent studies on AI in judicial systems [9, 10]. Figure 1 illustrates the workflow of the system. 1 16 Oct 2025 Figure 1: Workflow of the System 2 Literature Survey The inefficiencies of traditional court case management have been extensively studied, underscoring the need for technological interventions. Rao et al. [2] advocate for digitizing legal records to improve transparency and reduce corruption, a fundamental principle of our cloud-based framework. Their work highlights how digital records improve access to case data, a feature our system leverages through MySQL database integration. Amofah [3] explores electronic court case management systems (eCCMS), demonstrating their ability to streamline judicial processes through automation, such as automated case tracking. However, these systems lack AI-driven dynamic prioritization, a gap our research addresses. Pattnaik et al. [4] identify critical success factors for efficient court management in India, including stakeholder collaboration and technological adoption, which align with our system's design for judicial interaction and cloud-based scalability. Chan [5] examines case management in Chinese-speaking regions, noting the influence of traditional legal structures on efficiency, while Ali and Varol [6] propose mobile solutions to bridge communication gaps, supporting our notification system. Tahura [7] explores caseflow management to reduce backlogs, emphasizing structured scheduling, and Ambos [8] highlights prosecutorial discretion based on case gravity, informing our prioritization parameters. Recent advancements in artificial intelligence (AI) have transformed judicial systems globally. Thomson Reuters Institute [9] reports that AI tools enable court employees to focus on human interactions and reduce bias, citing Palm Beach County's AI-driven document processing system as an example of high efficiency. IBM [10] notes that judicial systems, such as those in Germany, are adopting AI to manage vast datasets and accelerate case resolution, aligning with our goal of reducing India's case backlog. The International Journal for Court Administration [11] discusses a study in the Netherlands using AI to support judges in traffic violations cases, demonstrating the potential of AI in decision support. These studies underscore the feasibility of AI in judicial contexts, supporting the approach of our system. 2.1 Ethical Considerations The integration of AI in judicial systems raises significant ethical concerns, particularly around bias and fairness. Judicature [12] emphasizes that AI must be used responsibly to 2 avoid perpetuating biases, such as overprioritizing certain case types. The American Association for the Advancement of Science (AAAS) [15] highlights the importance of fairness, accountability, and transparency (FAccT) in AI systems, noting that biases can arise from datasets, design, or deployment. Above the Law [13] reports a case where a trial court relied on AI-generated fake caselaw, underscoring the need for human oversight. Our system mitigates these risks by incorporating expert-validated weights for prioritization parameters and planning regular audits to ensure equitable outcomes, aligning with ethical guidelines from the Council of Europe [11]. Additionally, we address potential biases by involving legal experts in weight validation and ensuring transparency in the prioritization process. 2.2 Gaps Addressed Existing systems, such as those discussed by Rao et al. [2] and Amofah [3], focus on digitization and automation but lack AI-driven dynamic prioritization tailored to India's judicial context. Our framework addresses this gap by integrating a regression-based machine learning model for real-time weight adjustment, ensuring both efficiency and fairness. Unlike mobile solutions proposed by Ali and Varol [6] , our system offers a comprehensive cloud-based platform that integrates with existing judicial infrastructure like eCourts Services, enhancing scalability and accessibility. 3 Methodology 3.1 Data Retrieval and Preprocessing Case data is entered by advocates on record or court registrars into a MySQL database and preprocessed by categorizing cases into distinct datasets (e.g., Family, Criminal, Civil). This segmentation enables custom prioritization strategies for different types of case, ensuring flexibility and scalability. Preprocessing includes cleaning data, standardizing formats, and extracting relevant features such as filing dates, priority levels, and applicable legal sections (e.g., IPC, CrPC, Hindu Marriage Act). 3.2 Case Prioritization Algorithm The core of our system is the case prioritization algorithm, which assigns a weighted score to each case to determine its urgency and scheduling priority. The algorithm, detailed in Algorithm 1, processes the cases as follows: 3 Algorithm 1 Case Prioritization Algorithm 1: Input: Case data (case ID, type, filing date, severity, priority level, legal sections), court holidays, judge leaves, disposed cases database 2: Output: Prioritized case list with assigned hearing dates 3: Initialize MySQL database connection 4: Retrieve pending case data from MySQL database 5: for each case in pending cases do 6: if case is an appeal then 7: Fetch relevant data from disposed cases database 8: end if 9: Categorize case into type (Criminal, Family, Civil) 10: Extract features: case age (from filing date and previous hearings), severity, priority level (Urgent, Medium, Ordinary), legal sections (e.g., IPC 302, Hindu Marriage Act Section 13) 11: Compute weight using regression-based ML model ▷Model considers case age, severity, priority level, and legal sections; IPC/CrPC weights validated by advocates and judges 12: Assign priority based on weight 13: end for 14: Sort cases by weight in descending order 15: Initialize scheduling with daily limit (100 cases: 50 fresh, 50 old) 16: for each day in scheduling period do 17: if day is not a court holiday or judge leave then 18: Assign hearing dates to top-weighted cases (up to 50 fresh, 50 old) 19: end if 20: end for 21: Export prioritized case list to database 22: Send notifications (SMS/email) to stakeholders with hearing dates 23: Periodically review low-priority cases 24: if case pendency exceeds threshold then 25: Increase case priority 26: end if 27: Update ML model with case outcomes for dynamic weight adjustment 28: Return: Prioritized case list with hearing dates The algorithm retrieves all pending case data from a MySQL database, including details such as case ID, type, filing date, severity, priority level (e.g., Urgent, Medium, Ordinary) provided by advocates or registrars, and relevant legal sections (e.g., IPC 302, Hindu Marriage Act Section 13). For appealed cases, it fetches data from the disposed cases database. The data is preprocessed to categorize cases into categories such as Criminal, Family, or Civil, enabling customized prioritization strategies. For each case, a weight is calculated using a regression-based machine learning model, which considers factors such as the age of the case (based on filing and previous hearing dates), severity (e.g., high for murder, low for minor disputes), the input priority level, and applicable legal sections. The weights for the IPC and CrPC sections were determined through consultations with multiple advocates and judges to ensure legal accuracy and fairness. The calculated weight determines the hearing date, with higher weights leading to earlier scheduling. Hearing dates are allocated while considering a fixed daily case limit (e.g., 100 cases, with 50 fresh and 50 old cases) and a predefined list of court holidays and additional leaves entered into the system. Notifications are sent to stakeholders via SMS 4 or email to inform them of the assigned hearing dates. To prevent low-priority cases from being indefinitely delayed, the system periodically checks for pendency, and if a defined threshold is exceeded, it increases the case's priority by multiplying it with a fixed multiplier. The machine learning model dynamically adjusts the weights based on the historical case outcomes, achieving high precision in the simulations, as shown in Figure 2. Figure 2: Case Prioritization Flowchart 3.3 Notification System and Judicial Integration Post-prioritization, an automated notification system sends SMS/email alerts to stakeholders regarding hearing dates. Judges and registrars access a prioritized case list via a secure web interface, which also displays the number of cases scheduled per day, facilitating efficient scheduling and case management. The interface is designed for compatibility with existing systems like e-Courts Services [1]. 4 System Architecture The system architecture, illustrated in Figure 3, orchestrates a seamless flow of operations to prioritize and manage court cases within a secure cloud-based environment. The process is structured into distinct stages, ensuring scalability, efficiency, and integration with judicial workflows, including case entry by advocates or registrars, handling appeals, balanced case scheduling, and holiday considerations. The process begins with case data entry, where advocates on record or court registrars input case details, such as case ID, type, filing date, severity, priority level (e.g., Urgent, Medium, Ordinary), and relevant legal sections (e.g., IPC, CrPC, Hindu Marriage Act), into a MySQL database. This initial step ensures that all relevant case information is captured accurately for subsequent processing. The system initializes necessary configurations and establishes a secure connection to the MySQL database. This step ensures that the system is fully prepared to access and process case data reliably, setting the foundation for subsequent operations. Next, the system retrieves all pending case data from the MySQL database, including the details entered by advocates or registrars. For cases that have gone to appeal, the 5 system fetches relevant data from the disposed cases database, ensuring that historical case information is available for consistent prioritization in higher courts. The retrieved data is then preprocessed to categorize cases into distinct groups, such as Criminal, Family, or Civil. This categorization enables the system to apply tailored prioritization strategies, accommodating the unique characteristics of each case type, including appealed cases, and enhancing scalability for diverse judicial needs. For each case, the system employs a regression-based machine learning model to calculate a weight that determines its hearing date. The model evaluates factors such as the age of the case (calculated from filing and previous hearing dates), the severity of the offense (e.g., high for murder under IPC 302, low for minor civil disputes), the priority level provided as an input parameter by advocates or registrars, and applicable legal sections (e.g., IPC, CrPC, Hindu Marriage Act). Weights for IPC and CrPC sections were determined through consultations with multiple advocates and judges to ensure legal accuracy. The calculated weight guides the allocation of hearing dates, with higher weights leading to earlier scheduling. The system schedules a fixed number of cases per day, for example, 100 cases, with 50 fresh cases and 50 old (next hearing) cases, to balance judicial workload. Hearing dates are assigned while accounting for a predefined list of court holidays and any additional leaves entered into the system, ensuring accurate and conflict-free scheduling. The prioritized case list is exported to a secure database for record keeping, ensuring that all scheduling decisions, including those for fresh and old cases, are systematically stored. This step maintains data integrity and supports traceability for both pending and appealed cases. Notifications are then sent to stakeholders, including judges, lawyers, and litigants, via SMS or email, informing them of the assigned hearing dates. For appealed cases, notifications also include relevant higher court details, ensuring all parties are informed and prepared. To prevent low-priority cases from being indefinitely delayed, the system periodically reviews the time since the last hearing for each case. If a case, including those under appeal, has been pending beyond a predefined threshold, its priority is increased by multiplying it with a fixed multiplier to ensure timely resolution, promoting fairness across all case types. For cases involving appeals, the system streamlines the process by retrieving data from the disposed cases database, applying the same prioritization process (weight calculation based on input priority and other factors, and date allocation with the fixed case limit), and notifying higher courts. This ensures consistency and efficiency in handling appeals, maintaining seamless integration with the judicial hierarchy. The system continuously monitors its performance by measuring execution times for each operation, ensuring efficiency and identifying potential bottlenecks. Performance data is logged to support ongoing optimization, applicable to both pending and appealed cases. Judges access the prioritized case list, including appealed cases, through a secure web interface, which presents cases in order of priority and displays the number of cases scheduled per day, helping judges manage their daily workload effectively. Judges can make decisions, such as scheduling the next hearing or disposing of a case. When a judge specifies a time frame for the next hearing (e.g., 'after 15 days'), the system schedules the hearing as soon as possible after that period, subject to the fixed case limit and 6 holiday/leave constraints, and updates the database accordingly; if a case is disposed, it is marked as resolved, stored in the disposed cases database, and the outcome is used to update the machine learning model, enhancing its accuracy for future prioritizations, including for appeals. This structured flow, as shown in Figure 3, ensures that the system efficiently manages large case volumes, including appealed cases, while balancing fresh and old cases, respecting holidays and leaves, and integrating seamlessly with existing judicial platforms like eCourts Services, providing a robust solution for India's judicial backlog. Figure 3: System Architecture for Court Case Management and Prioritization 5 Results 5.1 Illustrative Case Study To demonstrate the system's functionality, we present a synthetic case study with five simulated cases from our 10,000-case dataset (Table 1). These cases vary in type, filing date, severity, priority level, and applicable legal sections, showcasing the algorithm's ability to prioritize effectively. Weights for IPC and CrPC sections were validated through consultations with multiple advocates and judges to ensure legal accuracy. 7 Table 1: Illustrative Case Study Case ID Type Filing Date Severity Legal Sections Priority Weight No of Hearings Hearing Date 001 Criminal 01/01/2024 High IPC 302, 34 Urgent 0.95 4 15/07/2025 002 Civil 01/02/2025 Low Indian Contract Act, 1872, Section 73 Ordinary 0.45 0 30/08/2025 003 Family 01/06/2024 Medium Hindu Marriage Act, 1955, Section 13 Medium 0.70 3 01/08/2025 004 Criminal 01/02/2024 High IPC 420, 406 Urgent 0.90 5 20/07/2025 005 Criminal 01/04/2025 Medium IPC 323, CrPC 200 Ordinary 0.50 2 15/09/2025 Narrative • Case 001: A murder case filed on 01/01/2024, with high severity and an input priority of Urgent, governed by IPC Section 302 (murder) and Section 34 (acts done by several persons in furtherance of common intention). With four hearings held and its high severity validated by advocate and judge consultations, it has a weight of 0.95, scheduling it for 15/07/2025, ensuring swift justice for a serious crime. • Case 002: A civil case (e.g., breach of contract) filed on 01/02/2025, with low severity and an input priority of Ordinary, governed by Section 73 of the Indian Contract Act, 1872 (compensation for loss). With no hearings yet and its recent filing and low severity, it yields a weight of 0.45, scheduling it for 30/08/2025, reflecting its lower urgency. • Case 003: A family law case (e.g., divorce) filed on 01/06/2024, with medium severity and an input priority of Medium, governed by Section 13 of the Hindu Marriage Act, 1955 (grounds for divorce). With three hearings held and moderate pendency and severity, it results in a weight of 0.70, scheduling it for 01/08/2025, balancing urgency and recency. • Case 004: A cheating case filed on 01/02/2024, with high severity and an input priority of Urgent, governed by IPC Section 420 (cheating) and Section 406 (criminal breach of trust). With five hearings held and its high severity validated, it yields a weight of 0.90, scheduling it for 20/07/2025, addressing a significant criminal offense promptly. • Case 005: A case involving voluntarily causing hurt filed on 01/04/2025, with medium severity and an input priority of Ordinary, governed by IPC Section 323 (voluntarily causing hurt) and CrPC Section 200 (examination of complainant). With two hearings held and its recent filing and medium severity, it results in a weight of 0.50, scheduling it for 15/09/2025, as it is less severe than murder or cheating. This case study illustrates how the system prioritizes cases based on severity, pendency, input priority level, and applicable legal sections, validated by legal experts, ensuring critical cases are addressed promptly while managing diverse case types. 8 5.2 System Performance Metrics Simulations with 10,000 dummy cases demonstrated that the system can process cases efficiently, with reliable notification delivery and positive feedback from judges and registrars regarding usability. 5.3 Machine Learning Model Performance The regression-based machine learning model used for dynamic weight adjustment achieved an F1 score of 99.7%, precision of 99.8%, and recall of 99.4% in simulated tests, indicating high accuracy and reliability in prioritizing cases. 6 Discussion and Conclusion The proposed framework significantly enhances judicial efficiency by automating case prioritization and scheduling. By integrating algorithmic approaches, a robust notification system, and a dynamic feedback loop, the system addresses challenges like case backlogs and delayed hearings. The illustrative case study demonstrates its practical applicability, prioritizing urgent cases like murder (IPC 302, 34) while ensuring fairness across civil and family cases. The system's compatibility with existing platforms like eCourts Services [1] ensures seamless integration into India's judicial infrastructure. 6.1 Ethical Implications AI-driven prioritization risks biases, such as over-prioritizing criminal cases, which could disadvantage civil or family cases. Our system mitigates this through expert-validated weights, informed by consultations with advocates and judges, and regular audits, aligning with ethical guidelines from the Council of Europe [11]. Above the Law [13] highlights risks of AI-generated inaccuracies, underscoring the need for human oversight, which our system incorporates through judicial interaction. 6.2 Real-World Feasibility The system is designed for compatibility with existing platforms like eCourts Services, using standardized MySQL databases and secure cloud storage. Pilot testing in a regional court will validate its effectiveness, addressing potential challenges like data migration and user training. 6.3 Future Work Future iterations will prioritize real-world deployment, integrating advanced machine learning to enhance case management. Deep learning models, such as transformers, will improve feature extraction from legal texts, identifying attributes like case complexity and urgency from documents in Table 1. Reinforcement learning will enable adaptive prioritization by optimizing schedules based on court data, such as judge availability and case backlog, improving efficiency. The system will also expand to handle multilingual case data, as demonstrated by Clio [14] for California courts, to enhance accessibility in diverse regions. NLP techniques, including cross-lingual embeddings, will process languages like Hindi and English, addressing challenges in legal terminology translation. Additionally, predictive analytics 9 will forecast case pendency using data like hearing counts and severity, aiding resource allocation. Piloting in high-volume courts will test scalability, with collaboration from legal experts ensuring judicial alignment and data security, ultimately streamlining processes and improving access to justice. References [1] eCourts Services. https://ecourts.gov.in/ecourts_home/ [2] Rao, A. P., et al. (2018). Court Case Management System. [3] Amofah, L. R. (2017). Electronic court case management system (for law court complex). [4] Pattnaik, P. N., et al. (2018). Mapping Critical Success Factors in Efficient Court Management. International Journal of Law and Management, 60(2). [5] Chan, P. C. H. (2018). Framing the structure of the court system in the perspective of case management. Peking University Law Journal, 6(1), 55-79. [6] Ali, C. T., & Varol, A. (2020). Design and Implementation of a Simple Online Court Case Management System Based on the Android Platform. IEEE. [7] Tahura, U. S. (2013). Case Management: A Magic LAMP in Reducing Case Backlogs. Bangladesh Law Journal, 13. [8] Ambos, K. (2018). Office of the Prosecutor: Policy Paper on Case Selection and Prioritisation. International Legal Materials, 57(6), 1131-1145. [9] Thomson Reuters Institute. (2024). Humanizing Justice: The transformational impact of AI in courts. https://www.thomsonreuters.com/en-us/posts/ ai-in-courts/humanizing-justice/ [10] IBM. (2025). Judicial systems are turning to AI to help manage vast quantities of data and expedite case resolution. https://www.ibm.com/case-studies/blog/ judicial-systems-are-turning-to-ai-to-help-manage-its-vast-quantities-of-dat [11] International Journal for Court Administration. (2020). Courts and Artificial Intelligence. https://iacajournal.org/articles/10.36745/ijca.343 [12] Judicature. (2024). AI in the Courts: How Worried Should We Be? https://judicature.duke.edu/articles/ ai-in-the-courts-how-worried-should-we-be/ [13] Above the Law. (2025). Trial Court Decides Case Based On AI-Hallucinated Caselaw. https://abovethelaw.com/2025/07/ trial-court-decides-case-based-on-ai-hallucinated-caselaw/ [14] Clio. (2025). AI in the Courtroom: Opportunities and Challenges. https://www. clio.com/resources/ai-for-lawyers/ai-in-courtrooms/ [15] American Association for the Advancement of Science (AAAS). Artificial Intelligence and the Courts: Materials for Judges. https://www.aaas.org/ai2/ projects/law/judicialpapers 10
2510.14894
Secure Sparse Matrix Multiplications and their Applications to Privacy-Preserving Machine Learning Marc Damie m.f.d.damie@utwente.nl University of Twente, The Netherlands Inria, France Florian Hahn University of Twente, The Netherlands Andreas Peter University of Oldenburg, Germany Jan Ramon Inria, France Abstract To preserve privacy, multi-party computation (MPC) enables executing Machine Learning (ML) algorithms on secret-shared or encrypted data. However, existing MPC frameworks are not optimized for sparse data. This makes them unsuitable for ML applications involving sparse data, e.g., recommender systems or genomics. Even in plaintext, such applications involve high-dimensional sparse data, that cannot be processed without sparsity-related optimizations due to prohibitively large memory requirements. Since matrix multiplication is central in ML algorithms, we propose MPC algorithms to multiply secret sparse matrices. On the one hand, our algorithms avoid the memory issues of the “dense” data representation of classic secure matrix multiplication algorithms. On the other hand, our algorithms can significantly reduce communication costs (some experiments show a factor 1000) for realistic problem sizes. We validate our algorithms in two ML applications in which existing protocols are impractical. An important question when developing MPC algorithms is what assumptions can be made. In our case, if the number of non-zeros in a row is a sensitive piece of information then a short runtime may reveal that the number of non-zeros is small. Existing approaches make relatively simple assumptions, e.g., that there is a universal upper bound to the number of non-zeros in a row. This often doesn’t align with statistical reality, in a lot of sparse datasets the amount of data per instance satisfies a power law. We propose an approach which allows adopting a safe upper bound on the distribution of non-zeros in rows/columns of sparse matrices. 1 Introduction MPC protocols are cryptographic primitives allowing to compute functions on secret inputs. Over the years, they have been used in many application domains such as privacy-preserving ML (PPML) Mohassel & Zhang (2017) or telemetry Asharov et al. (2022) to address confidentiality and privacy concerns. However, existing protocols are impractical for many real-world applications, especially when the applications involve high-dimensional sparse data, i.e., data with a large majority of zero values. Storing large sparse datasets in a dense format (one value per cell) demands excessive memory. On real-world sparse data, this “dense storage” often becomes prohibitive; making computation infeasible due to memory 1 arXiv:2510.14894v1 [cs.CR] 16 Oct 2025 constraints. Moreover, dense representations lead to inefficient linear algebra operations, as most of the computational effort is wasted processing zeros. Over the last 50 years, algorithms to process plaintext sparse data efficiently have been proposed Buluç et al. (2011); Duff (1977). Since 2002, dedicated software libraries Duff et al. (2002) popularized them. Sparse data is present in many ML applications but also in other fields, including graph algorithms. Our paper focuses on PPML applications, but our results apply to any application involving private sparse data. Several MPC frameworks enable training ML models on secret-shared data Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Mohassel & Zhang (2017); Patra & Suresh (2020); Wagh et al. (2019; 2021). These algorithms make use of existing MPC frameworks such as MP-SPDZ Keller (2020). However, none of these frameworks provide algorithms optimized for sparse data. The scalability limitations (especially memory issues) of dense plaintext algorithms transfer naturally to dense MPC algorithms. Thus, MPC frameworks need dedicated sparse algorithms for ML applications like recommender systems. These frameworks are not fundamentally incompatible with sparse data; there is simply no suitable secure sparse algorithm in the literature. Related works There exist protocols to sum sparse vectors securely, including sorting-based Asharov et al. (2022); Bell et al. (2022), private histograms Bell et al. (2022), or private-union-based Beguier et al. (2021). A few papers Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) studied secure sparse multiplica- tions in a two-party setting. However, these protocols require the data owners to actively participate because one of the computation parties must know the plaintext sparse data. Hence, they do not support more than two data owners simultaneously, which is too limiting, as modern ML applications involve thousands of data owners. Typical ML applications require a more generic MPC setup: outsourced training. In this setting (see Figure 1), the data owners share their secret data with a group of computation servers and disconnect. This distinction between input parties and computation parties enables supporting an arbitrary number of data owners, contrary to existing works Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019). Appendix B details why the adaptation of these works either results in an inefficient construction or is not straightforward. The secure multiplication of sparse matrices in the outsourced setting is then a significant open problem. Our work will focus on this outsourced setting; providing sparse multiplications compatible with any MPC framework. Some information-theory works Egger et al. (2023); Xhemrishi et al. (2022) studied the multiplication of a secret-shared matrix with a public matrix. Due to the public input, their setting differs from classic MPC where all inputs are secret. Finally, Nayak et al. Nayak et al. (2015) presented secure graph computations (a problem closely related to sparse operations due to the sparse adjacency matrices), but they do not present multiplication protocols. Our contributions We present two secure algorithms to compute matrix multiplications on secret- shared sparse data. Our algorithms avoid the memory issues of dense matrix multiplications on sparse data (e.g., in an experiment from Section 5: 19TB using dense multiplications vs. 60GB using ours). Moreover, our algorithms reduce the communication costs up to a factor 1000, and we implement two ML applications that are impractical with existing secure algorithms. Third, as we highlight that any efficient MPC protocol (i.e., not only ours) on sparse data requires public knowledge about the sparsity, we leverage the properties of real-world sparse data to minimize this mandatory public knowledge and obtain it in a privacy-preserving manner. Notations Let nnz(x) (resp. nnz(X)) refer to the number of non-zero values in a vector x (resp. matrix X). Let [[a]] refer to a share of a secret-shared value a. In our algorithms, any operation involving secret- shared values corresponds to a secure variant; e.g., [[a]] + [[b]] is the secure addition of a and b and [[sort(l)]] is the oblivious sort of l. Secure algorithms are executed jointly by all shareholders. 2 Figure 1: Outsourced MPC setting 2 Preliminaries 2.1 Threat model Threat modeling describes the expected behavior of the agents involved in an MPC protocol: an honest agent follows the protocol, an honest-but-curious agent follows the protocol and attempts to infer private informa- tion passively, and a malicious agent does not necessarily follow the protocol and can actively disrupt the protocol to obtain private information. MPC works distinguish security against a majority of honest agents from security against a majority of dishonest agents. “Dishonest agent” is an umbrella term encompassing malicious and honest-but-curious agents (depending on the security setup considered). We analyze our algorithms under honest and dishonest majority. We focus on honest-but-curious security, but our algorithms rely on oblivious shuffling and sorting with known maliciously secure variants Hamada et al. (2014); Laur et al. (2011). We leave for future work the extension to malicious security and the subsequent security proofs. In line with related works, our threat model excludes poisoning attacks where malicious data owners forge input data so the protocol reveal private information. We assume the data owners provide well-formed secret-shared inputs. 2.2 Outsourced setting Our contribution focuses on an “outsourced” computation setup (popular in recent PPML works Hegde et al. (2021); Mohassel & Zhang (2017); Wagh et al. (2019; 2021)): each data owner shares their secret data with a group of computation servers. These servers build a secret-shared matrix [[X]] containing all data owners’ inputs (e.g., one row per data owner1) and perform secure computations on this matrix. For example, the servers can execute a protocol to train an ML model [[θ]] on the dataset X, as in Figure 1. In this setting, the data owners share their secret data and disconnect. Outsourced protocols supports an arbitrary number of data owners because it separates input parties (i.e., data owners) from computation parties. Contrary to many PPML works that support a two-party Mohassel & Zhang (2017); Hegde et al. (2021) or three-party Wagh et al. (2019; 2021) outsourced setups, our work support an unbounded number of computation parties. Nevertheless, secret-sharing protocols induce a quadratic communication cost in the number of parties. This quadratic cost makes outsourcing necessary to support thousands of data owners like in ML applications. All existing secure sparse multiplications Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) support a non-outsourced setting. They presented two-party protocols assuming the computation parties are 1This is sometimes referred to as “horizontal” data partitioning in PPML literature. 3 also data owners (⇒two data owners maximum) because they require one of the computation parties to know the plaintext sparse data. Such a setting is too limiting for ML applications as they typically involve many data owners. Adapting existing protocols to avoid designing new ones? The adaptation is not easy because a plaintext knowledge is often essential to these protocols. Appendix B details why the adaptation of Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) to the outsourced setting either results in an inefficient construction or is not straightforward. 2.3 MPC primitives Secret sharing Shamir (1979) enables sharing a secret value x between multiple parties. Each shareholder j holds a share [[x]]j, revealing nothing about x. The shareholders can reconstruct the secret value by gathering their shares. Several secret sharing schemes exist, e.g., additive shares are finite field elements drawn randomly such that: P j[[x]]j = x. Many protocols Evans et al. (2018) enable processing secret-shared values; from arithmetic operations to complex operations (e.g., sorting). Like related works in private data analysis Araki et al. (2021); Asharov et al. (2022), our algorithms rely heavily on oblivious shuffling and sorting. “Oblivious sorting” designates MPC protocols for sort- ing secret-shared values without revealing information about the values. Sorting networks Hamada et al. (2014) are straightforward oblivious sorting solutions, as their execution is data-independent. Their cost is O(m log2 m) for a list of size m. The oblivious radix sort Hamada et al. (2014) is the state-of-the-art honest-majority protocol, with its optimal complexity of O(m log m) (and low constant factors). Recent papers optimized this protocol in a three-party setting Araki et al. (2021); Asharov et al. (2022). “Oblivious shuffling” protocols randomly permute lists of secret values without revealing the permutation. Laur et al. Laur et al. (2011) proposed an honest-majority protocol with linear costs, used in many recent protocols Araki et al. (2021); Asharov et al. (2022); Hamada et al. (2014). A recent work Lu & Kate (2023) also described dishonest-majority protocols, but with supra-linear costs. 2.4 Matrix multiplication Let Z be the result of the multiplication between the n × m matrix X and the m × p matrix Y . Z is a n × p matrix defined as follows: Zij = m X k=1 xik · ykj, ∀i ∈{1 . . . n}, j ∈{1 . . . p} (1) This formula holds for all matrices. However, the implementation may differ depending on the type of matrices (e.g., vectors, diagonal matrices, or triangular matrices). Our work focuses on optimized algorithms for sparse matrices. Sparse algorithms avoid “useless” multiplications and only multiply the non-zero values. Indeed, if the matrix multiplication is implemented naively (i.e., triple nested loop), the algorithm would compute many scalar multiplications with a null result. Hence, sparse algorithms leverage the matrix sparsity to have a cost inverse proportional to the sparsity. Software libraries usually implement dense matrix multiplication via a triple nested loop, resulting in an O(nmp) cost. While this cost has long been considered the lower bound for dense matrix multiplication, several works described algorithms with a sub-cubic cost Coppersmith & Winograd (1987). Dumas et al. Dumas et al. (2019) described an MPC protocol to execute a matrix multiplication with sub-cubic cost. These algorithms have constant factors limiting their practical interest (even in plaintext), so software libraries favor algorithms with a cubic cost. 2.5 Dense matrix multiplication on secret-shared data The cubic-cost matrix multiplication (i.e., the naive implementation of Equation (1)) can easily be imported to MPC. However, recent works Chen et al. (2020); Mohassel & Rindal (2018); Mohassel & Zhang (2017); Mono & Güneysu (2023); Patra & Suresh (2020); Wagh et al. (2021) described more efficient protocols under 4 honest or dishonest majority assumptions. These works reduce the communication costs, but the computation cost remains asymptotically cubic. Moreover, the storage costs remain asymptotically equivalent to the plaintext algorithm, so they cannot support larger matrices than the plaintext dense algorithm. Dishonest majority Beaver’s multiplication triplets are a popular building block to speed up secure multiplications via preprocessing steps Evans et al. (2018). Mohassel and Zhang Mohassel & Zhang (2017) generalized this concept to matrices: matrix multiplication triplets. These triplets make the communication costs proportional to the input and output size. For example, the multiplication of an n × m matrix with an m × p matrix requires O(nm + mp + np) communications (instead of O(nmp) with the naive algorithm). This approach reduces the asymptotic cost for matrix-matrix multiplications, but the communication costs of the vector-vector and matrix-vector multiplications remains asymptotically equal to the naive costs. Matrix multiplication triplets with malicious security were later described Chen et al. (2020); Mono & Güneysu (2023). Honest majority Using the honest majority assumption, several recent works Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Patra & Suresh (2020); Wagh et al. (2021) designed protocols with a communication cost proportional to the output size. Hence, a square matrix multiplication only requires O(n2) communications, and a vector multiplication O(1) communications. Before these recent protocols, De Hoogh Hoogh (2012) had already described a matrix multiplication with equivalent costs based on Shamir’s secret sharing (SSS). The recent protocols Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) are more complex to understand than the SSS-based algorithm of Hoogh (2012) because they rely on share conversion protocols. Moreover, these protocols only support 3 parties, while the SSS- based solution supports an arbitrary number of parties. Nevertheless, the SSS-based solution only has honest-but-curious security, while Koti et al. (2021); Lu et al. (2023); Patra & Suresh (2020); Wagh et al. (2021) support malicious security. To understand the gain provided by the honest-majority assumption, let us detail the optimized vector mul- tiplication under SSS Hoogh (2012). In SSS, each secret-shared value is represented using a polynomial of degree t. To multiply two secret-shared scalars in SSS, the parties locally multiply their local shares (obtain- ing a polynomial of degree 2t) and then execute a degree reduction protocol (via a “resharing” protocol). For dense vector multiplication, we can delay the degree reduction and add all local multiplication results together. The addition is possible because all local multiplication results are shares of equal polynomial degree (i.e., 2t). Then, a single degree reduction is done on the sum result. 3 Sparse data in PPML 3.1 Sparse data representation The term “sparse data” refers to data containing a large proportion of zero values. If stored using the default matrix format, the zeros occupy a significant memory space. When the data contains a sufficient number of zeros, sparse data representations can significantly reduce the storage space. Several sparse data representations exist Buluç et al. (2011), each with specific algorithms to perform oper- ations on the data. Depending on the data representations, some operations are more efficient than others. In other words, there is no perfect sparse data representation. We use the tuple representation Buluç et al. (2011): each sparse vector is a list of non-zero tuples: t = (i, vi) with i the coordinate in the vector and vi the non-zero value. Let coord(t) = i denote the coordinate of the non-zero and val(t) = vi its value. On the one hand, we refer to algorithms supporting sparse data as “sparse algorithm”, a common abuse of terminology. On the other hand, we use the term “dense algorithms” to refer to classic algorithms. 5 3.2 Larger datasets are sparser Optimizing dense algorithms, a dead end To process sparse data, one may consider to use dense matrix multiplication, with more memory. Unfortunately, it would require tremendous amount of memory. Secure dense algorithms have the same asymptotic memory footprint and hence run into the same memory issue. There is a limit to the scalability of dense algorithms. Even if extreme amounts of memory would be available, dense secure multiplications would still fail to scale because their communication cost is typically at least linear in the size of the dense input. A workaround to avoid this memory issue could be to partition the matrix in sub-matrices and perform dense multiplications on these sub-matrices. This approach is effective in classic linear algebra to handle large dense datasets and could (to some extent) reduce memory consumption. However, it would still require storing the product as a dense matrix; the memory benefit will not be significant. Moreover, it adds significant costs for repeated sparse-to-dense conversions. Finally, this partitioning approach would not reduce the communication costs of MPC protocols that would remain correlated to the full matrix size. To sum up, sparse algorithms are popular in plaintext because there is no other efficient solutions to process high-dimensional sparse datasets. In plaintext, dense matrix multiplications usually have a sparse equivalent (e.g., NumPy for dense and SciPy for sparse in Python). To cover all possible ML applications, MPC frameworks should similarly have sparse algorithms. The first motivation of sparse algorithms is to avoid the memory issues caused by dense algorithms. Furthermore, they often provide significant performance gains. Sparsity-size correlation Real-world sparse datasets show an interesting phenomenon: the sparsity is correlated to the matrix size. Indeed, larger sparse datasets have in average a larger sparsity. Thus, the larger datasets are, the more beneficial the sparse algorithms become. To understand this phenomenon, let us take the example of recommender systems on a marketplace (e.g., Amazon): the matrix quantifies the interaction of each consumer with each product. The dataset is sparse because each consumer only interacts with a small subset of all possible products. If the marketplace increases its number of products by a factor 1000, the consumer will not consume 1000 times more products. Even if the consumption increases a bit, existing datasets show that it does not follow the matrix growth; i.e., the sparsity increases. The real-world datasets that we arbitrarily selected from Li et al. (2017) satisfy the same trend: MovieLens 1M has 95% sparsity for 1.7K columns, Yahoo Movie has 99.7% sparsity for 12K columns, Bookcrossing has 99.99% sparsity for 340K columns, and flicker has 99.999% sparsity for 500K columns. Theoretical point of view The correlation between sparsity and matrix size is observable analytically. Sparse datasets can often be represented as graphs: e.g., a recommendation dataset is a bipartite graph linking n customers to m products. Also, note that graphs can be represented as sparse dataset via their adjacency matrix. The literature Aiello et al. (2001) showed that, in many application domains, graph data follows a “power law” distribution. In particular, if we take a random vertex v ∈V (G) of a graph G, then the probability distribution of the degree d(v) of v is given by: f(x) = P(d(v) = x) = x−γ Z(γ) (2) for some γ > 1 where Z(γ) = P∞ i=1 i−γ is a normalization constant. Typically, we have 2 ≤γ ≤3 Adamic et al. (2001), but application domains exist where other values of γ are adequate. Practical applications may deviate a bit from (2), especially for small degrees, as this equation is mainly intended to describe asymptotic behavior. In a dataset with n rows (i.e, vertices) where every vertex has degree at most m, one can use a corrected normalization constant Z(γ, m) = Pm i=1 i−γ. Remember that, in our case, the degree of a vertex corresponds 6 to the number of non-zeros in a row. The expected degree of a vertex is: Ev[d(v)] = m X i=1 i1−γ Z(γ, m) We can bound Z(γ, m) ≥1+ R ∞ 1 x−γdx = 1+1/1(1−γ) and Pm i=1 i1−γ < R ∞ 1 x−γdx = (1−m2−γ)/(γ−2) < 1/(2 −γ). Therefore, we can bound the average degree by a constant K not depending on n or m. We can rewrite the number of edges in graph G as |E| = nK = O(n). Hence, the sparsity of the adjacency matrix of the graph G decreases with n: |E| nm = O(m−1) (3) This equation is the theoretical illustration of the phenomenon: larger (sparse) datasets are sparser. 3.3 Public knowledge in secure sparse multiplications Other secure sparse algorithms Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) assume some public knowledge. For example, Schoppmann et al. Schoppmann et al. (2019) assumes “that the sparsity metric of the input is public.” The definition of this “sparsity metric” varies in each of their algorithm: the total number of non-zeros, or the number of non-zero rows or columns. In our paper, this public “sparsity metric” is the number of non-zeros per row. Public knowledge could be interpreted as a limiting assumption, but public knowledge is mandatory to design efficient and secure sparse algorithms. In sparse algorithms, the performance gains (in memory, communication, and computation) are correlated to the sparsity. Thus, it is necessary to know the sparsity to expect any performance gain. Anyway, sparsity knowledge is often necessary to ML practitioners. ML applications can be seen as a pipeline: exploratory data analysis (to choose an appropriate ML model), ML training, and model deployment. Data analysis on sparse dataset necessarily includes a sparsity estimation; often essential to choose an appropriate model. Hence, the exploratory data analysis would provide this public knowledge, before the sparse matrix multiplications. Finally, Section 6 proposes some protocols to minimize the public knowledge and Section 7 proposes protocols to obtain the minimized knowledge in a privacy-preserving manner. 4 Secure sparse multiplications This section presents two secure sparse matrix multiplications: matrix-vector and matrix-matrix. Existing secure sparse multiplications are incompatible with the outsourced setting, so we can only compare to the dense baselines. We assume the bit length and number of computation parties are O(1), so our asymptotic complexities are only in function of the matrix size. Our algorithms rely on secure additions, multiplications, comparisons, oblivious shuffling, and oblivious sorting Hamada et al. (2014). On the one hand, sorting-network-based oblivious sorting only requires secure additions, multiplications, and comparisons. On the other hand, we can implement oblivious shuffling using oblivious sorting: generate a random secret-shared value for each element in the input list and obliviously sort the list based on these random values. Our algorithms are then compatible with all secret-sharing schemes supporting basic arithmetic and comparison operations. However, for optimal performances, one should implement our algorithms using honest-majority sorting/shuffling Hamada et al. (2014); Laur et al. (2011). Appendix A sketches a security proof for our algorithms. 7 Algorithm 1 Secure sparse vector multiplication Input: [[x]] and [[y]] are two secret-shared sparse vectors Public knowledge: Number of non-zero elements per vector 1: function SparseVectorMult([[x]], [[y]]) 2: [[z]] ←Concatenate([[x]], [[y]]) 3: [[z]] ←[[sort(z)]] ▷Asc. sort on the non-zero coordinate. 4: [[s]] ←0 5: for i ←1 to nnz(x) + nnz(y) −1 do 6: [[obliv. if]] [[coord(zi)]] = [[coord(zi+1)]] 7: [[s]] ←[[s]] + [[val(zi)]] × [[val(zi+1)]] 8: return [[s]] ▷a scalar 4.1 Toy protocol: vector-vector We propose to start with a simple vector multiplication. Since dense vector multiplications have an O(1) communication cost (under honest majority), we know in advance that it is unbeatable. Modern ML appli- cations essentially involve matrix-vector and matrix-matrix multiplications, so vector multiplication is not a relevant focus. However, this toy protocol provides simple intuitions to understand how to build sorting-based sparse multiplications. We want to multiply two n-dimensional sparse vectors x and y, represented as tuple lists. Alg. 1 presents our solution. First, we concatenate the two tuple lists. Note that this concatenation is trivial (and) secure because the sparse vectors x and y are lists of shares. Thus, the concatenation requires no communication; the shareholders simply concatenate locally their list of shares. Then, we sort (obliviously) the resulting list by coordinate. If two consecutive tuples have equal coordinates, we multiply their values and sum all the multiplication results to build the vector multiplication value. We use the notation [[obliv. if]] to describe conditional structures on secret-shared values. The presence of conditional branches may seem contrary to the obliviousness goal. Indeed, revealing the executed branch would leak information about the secret-shared value. However, conditional structures on secret-shared values are implemented so that the executed branch remains secret. All the [[obliv. if]] instructions are implemented via a combination of arithmetic and comparison operations. For example, we implement the lines 6 to 7 of Alg. 1 using the following expression: [[s]] ←[[s]]+([[coord(zi)]] = [[coord(zi+1)]])×[[val(zi)]]×[[val(zi+1)]]; with the secure comparison [[coord(zi)]] = [[coord(zi+1)]] outputting [[1]] or [[0]]. Such instructions are less readable, so we keep conditional structures in our algorithms with a dedicated notation. Such conditional statements are recurrent in MPC literature: e.g., Asharov et al. Asharov et al. (2022) used conditional statements in their heavy hitters protocol. Alg. 1 costs O((nnz(x)+nnz(y)) log(nnz(x)+nnz(y))) communications and computations in O(log(nnz(x)+ nnz(y))) rounds. The sorting step induces the superlinear complexity. The multiplication loop is paralleliz- able, so it only costs O(nnz(x) + nnz(y)) in O(1) rounds. In comparison, the dense multiplication has O(1) communication costs under honest majority, and an O(n log n) cost under dishonest majority. Moreover, our sparse algorithm significantly reduces the memory footprint under both threat models. 4.2 Matrix-vector We want to multiply an n × m matrix X to an m-dimensional vector y. Our work assumes the public knowledge motivated in Section 3.3: per-row sparsity. Thus, we can adapt the sparse matrix representation and group the non-zero elements by rows: a sparse matrix would be a list of sparse vectors, one per row. An intuitive solution is to compute a vector-vector multiplication on each matrix row. This approach is adequate for dense multiplications, but inefficient for sparse matrices. Indeed, this naive extension induces a linear dependency on the number of rows: O((nnz(X)+n·nnz(y)) log(nnz(X)+n·nnz(y))) communications and computations. The term n · nnz(y) comes from the “replication” of y for each row of X. The vector y 8 Algorithm 2 Secure sparse matrix-vector multiplication Input: [[X]] a secret-shared sparse matrix, [[y]] a secret-shared sparse vector Public knowledge: Number of non-zeros per matrix/vector Notation: the function RowCoord() (resp. ColCoord()) returns the first/row (resp. second/column) coor- dinate of a non-zero tuple. 1: function SparseMatVectMult([[X]], [[y]]) 2: [[z]] ←X 3: for ([[j]], [[v]]) in [[y]] do 4: Append tuple (coord : ([[⊥]], [[j]]), val : [[v]]) to [[z]] 5: [[z]] ←[[sort(z)]] ▷Asc. sort on the 2nd and then 1st coordinates (with ⊥< 1) 6: prevY ←(coord : ([[⊥]], [[⊥]]), val : [[⊥]]) 7: for k ←1 to (nnz(X) + nnz(y)) do 8: [[obliv. if]] [[ColCoord(prevY )]] = [[ColCoord(zk)]] 9: [[val(zk)]] ←[[val(zk)]] × [[val(prevY )]] 10: else 11: [[val(zk)]] ←[[0]] 12: [[obliv. if]] [[RowCoord(zk)]] = [[−1]] 13: [[prevY ]] ←[[zk]] 14: [[val(zk)]] ←[[⊥]] 15: Drop the second coordinate from all tuples in [[z]] 16: [[z]] ←[[sort(z)]] ▷Asc. sort on the remaining coord. 17: AggEqualCoord([[z]]) 18: PlaceholderRemoval([[z]]) 19: return [[z]] ▷sparse vector stored as a list of non-zeros Algorithm 3 Aggregation of tuples with equal coordinates Input: [[Z]], a sorted list of secret shared tuples 1: procedure AggEqualCoord([[Z]]) 2: for k ←1 to length([[Z]]) −1 do 3: [[obliv. if]] [[coord(Zk)]] = [[coord(Zk+1)]] 4: [[coord(Zk)]] ←[[⊥]] 5: [[val(Zk+1)]] ←[[val(Zk+1)]] + [[val(Zk)]] is sorted independently for each row, which induces this inefficiency. To sum up, we need a dedicated sparse matrix-vector multiplication because the naive extension is inefficient. Alg. 2 presents a more optimized solution avoiding the linear dependency on n. The intuition behind this optimization is to build a tuple list sorted as follows [yj, xi1,j, xi2,j, . . . yj′, xi3,j′ . . .]. In this list, we group the elements related to the same column, starting with the element from vector y. We then fix yj, iterate over all x∗,j, and multiply it with yj until we reach the next yj′. Once we finish the vector component multiplications, we sort the multiplication results according to the first coordinate to aggregate all the results with equal coordinates and build the result. Every time two values are aggregated, the first value is replaced by a placeholder. Alg. 3 details this aggregation step. After the aggregation (Line 17 of Alg. 2), we remove the placeholders using Alg. 4. To avoid revealing information about the non-zero coordinates, Alg. 4 uses the “shuffle-and-reveal” trick frequently used in secure sparse sum Asharov et al. (2022); Bell et al. (2022). Alg. 2 costs O((nnz(X) + nnz(y)) log(nnz(X) + nnz(y))) communications and computations, but it has a linear round complexity due to the multiplication loop (lines 7 to 14 of Alg. 2) and the aggregation step 9 Algorithm 4 Placeholder removal Input: [[Z]], a list of secret shared tuples 1: procedure PlaceholderRemoval([[Z]]) 2: [[Z]] ←[[shuffle(Z)]] 3: for k ←1 to length([[Z]]) do 4: is_placeholder ←reveal([[coord(Zk)]] = ⊥) 5: if is_placeholder 6: Remove the k-th tuple from [[Z]] Majority Algorithm Comm. Comp. Storage Dishonest Dense [Naive alg.], Chen et al. (2020); Mohassel & Zhang (2017); Mono & Güneysu (2023) O(nm) O(nm) Sparse (ours: Alg. 2) O(nnz∗· log(nnz∗)) O(nnz∗) Honest Dense Hoogh (2012); Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) O(n) O(nm) O(nm) Sparse (ours: Alg. 2) O(nnz∗· log(nnz∗)) O(nnz∗) Table 1: Complexities of the sparse and dense matrix-vector multiplications. Notations: nnz∗= nnz(X) + nnz(y), nnz(X) ≪n × m, nnz(y) ≪m. (Alg. 3). Appendix C presents recursive algorithms taking inspiration from the logarithmic-round secure maximum (notably implemented in MPyC Schoenmakers (2018)) in order to build a multiplication loop and an aggregation step with logarithmic round complexity. Table 1 compares the complexities of our sparse algorithm to the dense baselines. On the one hand, honest- majority dense multiplication is no longer constant in communications, so our algorithm is advantageous for highly sparse matrices. On the other hand, the memory footprint is more critical in this operation. Several real-world sparse matrices do not fit in memory using a dense format (see example application in Section 5), even in plaintext! Our sparse algorithm is then mandatory when memory overflowing prevents from using dense algorithms. 4.3 Matrix-matrix We want to multiply two sparse matrices. Such multiplications are used in ML algorithms for specific compu- tations, especially correlation matrices (i.e., X⊤X). Other ML algorithms also require sparse-dense matrix multiplication (e.g., to multiply a sparse input matrix with a dense parameter matrix). Even in plaintext, sparse-dense multiplications require specific algorithms differing from those of sparse-sparse multiplications (i.e., the focus of our paper). Thus, we leave sparse-dense multiplications for future works. This subsection then emphasizes the computation X⊤X, which is a key sparse-sparse operation in ML. To compute X⊤X, we multiply an m × n sparse matrix to an n × m matrix. With our public knowledge, we know the per-column sparsity of the first matrix (because X is transposed), and per-row sparsity of the second. Alg. 5 generalizes this problem: multiplying an n × m sparse matrix X to an m × p sparse matrix Y , with per-column sparsity knowledge for X and a per-row knowledge for Y . We use the notation X(k) (resp. Y {k}) to refer to the k-th column (resp. row) of X (resp. Y ) stored as a sparse vector. When implementing a matrix-matrix multiplication naively, it is common to design a triple nested loop multiplying: each row i of X is multiplied to each column j of Y . The third loop is the vector multiplication on each k-th element of the vectors. Linear algebra libraries usually leverage a simple optimization to speed 10 Algorithm 5 Secure sparse matrix multiplication Input: [[X]] and [[Y ]] are two secret-shared sparse matrices Public knowledge: Number of non-zeros per column in X (with X(k) a column), number of non-zeros per row in Y (with Y {k} a row) 1: function SparseMatMult([[X]], [[Y ]]) 2: [[Z]] ←∅ 3: for k ←1 to m do 4: for i ←1 to length([[X(k)]]) do 5: for j ←1 to length([[Y {k}]]) do 6: [[ri]] ←[[coord(X(k) i )]] ▷Row coordinate 7: [[rj]] ←[[coord(Y {k} j )]] ▷Column coordinate 8: [[rv]] ←[[val(X(k) i )]] × [[val(Y {k} j )]] 9: Append a tuple (([[ri]], [[rj]]), [[rv]]) to [[Z]] 10: // Non-zero tuple: coordinates then value 11: [[Z]] ←[[sort(Z)]] ▷Asc. sort on the coordinates. 12: AggEqualCoord([[Z]]) 13: PlaceholderRemoval([[Z]]) 14: return [[Z]] ▷sparse matrix stored as a list of non-zeros Majority Algorithm Comm. Comp. Storage Honest Dense Chen et al. (2020); Mohassel & Zhang (2017); Mono & Güneysu (2023) O(nm + mp + np) O(nmp) O(nm + mp + np) Sparse (ours: Alg. 5) O(MinMult · log(MinMult)) O(MinMult) Dishonest Dense Hoogh (2012); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) O(np) O(nmp) O(nm + mp + np) Sparse (ours: Alg. 5) O(MinMult · log(MinMult)) O(MinMult) Table 2: Complexities of the sparse and dense matrix-matrix multiplications. Notations: MinMult = Pm k=0 nnz(X(k)) · nnz(Y {k}), nnz(X) ≪n × m, nnz(Y ) ≪m × p. up the dense matrix multiplication: swapping the loops on i and k. This optimization significantly speeds up the implementation by reducing the memory access costs Leiserson (2018). Our sparse algorithm reuses a similar intuition to design a fast algorithm and avoid performing a series of vector multiplications. Our algorithms have two main steps: (1) compute all the individual scalar multipli- cations, (2) aggregate them using oblivious sorting. In matrix-matrix multiplications, all values in the k-th column of X must be multiplied by all values in the k-th row of Y . Our public knowledge provides direct access to the k-th column of X and the k-th row of Y , so we can compute the individual scalar multiplications by iterating on the non-zeros of X(k) and Y {k}. Alg. 5 reuses Alg. 3 and 4 for aggregation and placeholder removal. Table 2 compares our sparse algorithm to the dense algorithms. The notation MinMult = P k nnz(X(k)) · nnz(Y {k}) denotes the number of non-zero scalar multiplications required by the matrix multiplication. It represents an intuitive lower bound: the best plaintext algorithm is O(MinMult), with no optimality proof. Our algorithm only adds a logarithmic factor compared to the plaintext sparse algorithm Buluç et al. (2011). 11 5 Experimental analysis This section compares the dense algorithms to our sparse algorithms under three sparsity levels: 99%, 99.9%, 99.99% of zero values. These thresholds are recurrent in sparse data applications (see real-world datasets mentioned in Section 3.2). The dense algorithm is, by definition, insensitive to the sparsity level, so we provide one curve for the dense algorithm and one per sparsity level for our sparse algorithm. Our comparison focuses on communication costs and memory usage. The communication costs corresponds to the total communication exchanged between the servers during the protocol. The memory usage is represented thanks to the memory overflows: we scale up our experiments until the algorithms cannot be executed due to a lack of memory. We use red crosses to represent these memory overflows on the plots. For each multiplication type (i.e., matrix-vector and matrix-matrix), we also implement an example use case built upon it. Each example use case relies on a real-world dataset to demonstrate the impracticality of dense algorithms on simple ML applications involving sparse data. 5.1 Experimental setup We execute our experiments on a server with 188GB of RAM and an Intel Xeon CPU. We use the MPyC framework Schoenmakers (2018) to simulate 3-party protocols. We perform our experiments under an honest-majority assumption because this setting provides the best dense baselines (see Tables 1 and 2). More specifically, we chose the SSS-based matrix multiplication Hoogh (2012) as dense baseline instead of the other honest-majority protocols Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) for two reasons. First, the SSS-based matrix multiplication has smaller constant factors; e.g., it only requires the communication of a single ring element, while Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) require multiple communication rounds. Second, the SSS-based protocol supports an arbitrary number of parties (like our sparse multiplica- tions), while Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) are 3-party protocols. Implementation details We use 64-bit (secret-shared) fixed-point numbers, with a 32-bit fractional part. Floating-point arithmetic is still poorly scalable in MPC protocols. MPC works usually rely on fixed-point arithmetic Kelkar et al. (2022) to avoid performance bottlenecks. The minor precision loss due to fixed-point arithmetic is often acceptable for ML applications Kelkar et al. (2022). We use the protocol by Laur et al. Laur et al. (2011) for oblivious shuffling and the oblivious radix sort Hamada et al. (2014) for sorting. The first step of the oblivious radix sort is a secure bit decomposition. As this operation is expensive, we accelerate the process using pre-computations. Our algorithms assume the data owners pre-compute and share a bit decomposition of their non-zero coordinates. Hence, they share a bit array instead of a single integer per non-zero coordinate. 5.2 Matrix-vector Figure 2a compares the dense and sparse matrix-vector multiplications for a square matrix. Note that our algorithm with 99.99% sparsity has a negligible cost when the number of rows is small because the high sparsity leads to empty sparse vectors. Overall, the dense algorithm is less expensive than our algorithm. The main takeaway is the memory issue. Indeed, while the dense algorithm is less expensive, our algorithm supports larger matrices (i.e., the first motivation of sparse algorithms). For example, the dense algorithm (contrary to our sparse algorithm with 99.99% sparsity) triggers memory overflows for matrices with more than 10K columns. As developed in Section 3.2, there is no (dense) solution to avoid memory issues (19TB of memory would be needed). One may argue that our algorithm also has some memory overflows. To analyze these results, we must remind the results from Section 3.2: larger datasets become sparser. Understanding this phenomenon is 12 (a) Matrix-vector (with a square matrix) (b) Matrix-matrix (operation: X⊤X). Figure 2: Comparison of the secure matrix multiplications under honest majority: dense algorithm vs. our sparse algorithm with three sparsity levels. essential to interpret Figure 2a: the memory overflows illustrated in this figure should not impact most real-world datasets because of their correlation between sparsity and matrix sizes. Our figures represent the costs for fixed sparsity rates, but large matrices are unlikely to have a low sparsity rate. For example, real-world datasets with 10K columns Li et al. (2017) can have much more than 99% sparsity; e.g., with 10K columns, Yahoo Movie dataset already has a 99.7% sparsity! Thus, using real-world sparse data, our algorithm should not be affected by such memory issues, because larger datasets are sparser. More generally, any algorithm (even sparse) has memory issues when the data is large enough. Sparse algorithms simply support larger sparse matrices than dense algorithms. In practice, the correlation between sparsity and matrix size even amplifies the gains provided by sparse algorithms. Example use case: Recommender system Matrix-vector is a core component of some recommender systems. This example use case builds a simple recommender system in which a client sends a (secret-shared) item identifier and receives a list of similar items. We use the Bookcrossing dataset: contains ratings from 279K users on 340K books, resulting in 99.998% zeros in the dataset (1K users as training dataset). Each row corresponds to a user, and each column to a book. Our recommender system relies on a nearest neighbors algorithm Aggarwal (2016). This model has no training step: the inference is computed on the training dataset, and requires two matrix-vector multiplica- tions. This algorithm computes the similarity between the book the client chose and all the other books in the database. Then, the system returns the identifiers of the k most similar books. The dense algorithm triggers a memory overflow because it cannot store the training data using a dense matrix. The sparse algorithm takes, in average, 48 minutes. The dense algorithm is impractical while our sparse algorithms support this application. Our runtime could be further reduced using MPC optimizations (e.g., precomputed comparisons Sun & Makri (2025)). 5.3 Matrix-matrix Figure 2b shows the communication costs for the multiplication X⊤X. This experiment considers a varying number of columns and a fixed number of rows (100 rows). The sparse algorithm brings a significant communication reductions compared to the dense multiplication: ×100 gain for 99.9% sparsity and a ×1000 gain for 99.99% sparsity. The cost reduction is much higher for matrix-matrix multiplication than for matrix-vector because the matrix-matrix multiplication complexity has a quadratic dependency on sparsity. 13 Besides the performance gain, our algorithm supports much larger matrices: the dense algorithm triggers a memory overflow for matrices with more than 1K, while our sparse algorithm scales up to 1M columns. Figure 2b shows no memory overflow for our algorithm because we stopped experiments exceeding 10 hours. Example use case: Access control This multiplication can be used to build an ML-based access control system. This example use case trains an ML model deciding whether an access request is suspicious. Access data can be particularly privacy-sensitive, e.g., the access log of medical databases can indirectly leak patient information. As medical access logs are sensitive, there is no publicly available dataset, so we use use the Amazon access control dataset2; that should have similar properties as more sensitive datasets. The Amazon dataset contains 32.7K samples, each containing the user’s properties, the service properties, and the access decision (granted or not). Each property is a categorical value that we transform into binary arrays using the one-hot encoding. This encoded dataset has 15K features with 99.95% sparsity. Our access control relies on Linear Discriminant Analysis. The training requires computing: (1) the proportion of each class (i.e., access decision), (2) the mean vector of each class, and (3) the covariance matrix of the whole dataset. The most expensive operation is, by far, the covariance matrix estimation. Note that the covariance matrix is necessarily dense. As Cov(X) = 1 nX⊤X −¯x⊤¯x, we store the covariance in two parts: the sparse product X⊤X and the dense mean vector ¯x. This trick provides an efficient storage of the covariance matrix. This covariance matrix is used at inference time via a matrix-vector multiplication (e.g., Cov(X) · y = 1 nX⊤X · y −¯x⊤¯x · y). Thus, this compact representation does not change the model inference algorithm and avoids a potential memory overflow due to the high-dimensional dense matrix. We implement this application using the dense and sparse matrix multiplications. We use the first 10K samples as training dataset. As expected from our previous benchmark, the dense algorithm triggers a memory overflow during the covariance computation. The sparse algorithm completes the training algorithm in 5 hours. Hence, our sparse algorithm provides a satisfying solution to this problem, while the dense algorithm is impractical. 6 Minimizing the public knowledge Section 3.3 highlighted the need for sparsity-related public knowledge in secure sparse algorithms. While necessary, this information disclosure might be problematic in some specific use cases in which revealing the exact per-row sparsity is too sensitive. For such cases, we propose three techniques to minimize the public knowledge necessary to our algorithms. 6.1 Row anonymization By default, our algorithms assume the per-row sparsity to be public. This information means that the number of non-zeros from each data owner is public (but the position and value of the non-zeros remain secret). To enhance their privacy, the data owner can submit their shares via an anonymization layer (e.g., Tor network). Machine learning algorithms typically assume that training instances are independently and identically drawn and don’t extract information from the order of the instances (rows). Permuting the rows of the input matrix hence does not substantially change the input. Thanks to the anonymization layer, the servers would be unable to link a data owner to a number of non- zeros. This privacy enhancement also reduces the public knowledge: instead of the per-row sparsity, the distribution of the per-row sparsity is public. This difference is subtle, but it is important for privacy: instead of individual information, a “collective” information is public. Indeed, this public knowledge provides information about the whole population, but not about an individual. 2https://github.com/pyduan/amazonaccess/tree/master/data 14 Figure 3: Per-row sparsity distribution in Bookcrossing Ziegler et al. (2005) ML practitioners usually need this sparsity distribution information, before performing any ML training. Sparsity is essential information to choose an appropriate ML model. Thus, it is realistic to assume that the distribution of the per-row sparsity is public. 6.2 Max-row padding As motivated above, the per-row sparsity distribution is acceptable in practical ML applications. However, if needed, we can further reduce the public knowledge and only reveal an upper bound on the per-row sparsity. To only assume this upper bound as public knowledge, the input sparse matrix must be padded to this maximum per-row sparstity. To pad a row, the data owner represents some of the zero elements as non-zeros anyway (i.e., creating dummy non-zeros), such that all rows have the pre-specified number of non-zeros. Computing the maximum per-row sparsity over all rows in MPC requires O(n log n) communications (in O(log n) rounds) for a matrix of n rows. 6.3 Matrix templating The main drawback of max-row padding is its sensitivity to the maximum number of non-zeros per row. Sparse datasets usually contain only a few rows with many non-zeros. Most rows contain much fewer non- zeros than the maximum. For example, Figure 3 shows the per-row sparsity distribution on the Bookcrossing dataset. While the maximum number of non-zeros per row is nearly 104, 99% of the rows have less than 100 non-zeros and 90% of the rows have less than 10! Thus, padding the rows until a global maximum number of non-zeros would add many unnecessary dummy non-zeros to the dataset. Up until now, all our techniques and algorithms were oblivious to the number of rows owned by each data owner: they support all possible settings from data owners owning a single row to settings with data owners owning a sub-matrix. Note that the input size (i.e., number of rows) of each data owner is usually public in MPC. If each data owner owns a sub-matrix, we can propose a more subtle padding technique avoiding the shortcomings of max-row padding. Instead of padding each row to a fixed maximum, we propose to build a “matrix template”, a safe upper bound for the distribution of number of non-zeros per row, for which we are sure the real data matrix will fit in it. Let us call ˆF(d) = P∞ i=d ˆf(i) the number of rows in the real dataset having d non-zeros or more. Let us describe what we call a template for a matrix Y : the template divides Y (horizontally) into K sub-matrices: Y1, . . . , YK, with respectively n1, . . . , nK rows. Each row in Yk must have nnzk non-zero elements, and we 15 have nnz1 ≤nnz2 ≤. . . nnzK. A matrix template is then defined by T = {(n1, nnz1), . . . , (nK, nnzK)}. A real dataset will fit in the template (after a permutation of the rows, see Section 6.2) if ∀i ∈[K] : ˆF(nnzi + 1) ≤ PK j=i+1 nj. To set the thresholds {nnz1, . . . , nnzK}, we propose to consider the quantiles: 0.25, 0.5, 0.75, 0.9, 0.99, 1, because they capture nicely the power-law distribution of sparse data. In other words, ˆF(nnz1) = 0.25, ˆF(nnz2) = 0.5, . . . , ˆF(nnzK) = 1. A data owner P owning a dataset Y can fit Y into a template T by padding Y according to T. First, they sort (increasingly) the rows of Y based on their number of non-zeros. Remember that the order of the rows does not matter in ML applications. Second, they pad the first n1 rows of Y to have nnz1 non-zeros. They repeat this padding process for the next n2 rows with nnz2 non-zeros, etc. until the last nK rows with nnzK non-zeros. Thanks to the sub-matrices, only YK is padded to have the maximum number of non-zeros, the rest of the rows can have much fewer non-zeros. Note that matrix templating with K = 1 has the same output as max-row padding. Similarly, the higher K is, the smaller the overhead because the matrix template becomes more accurate. We envision four privacy-preserving approaches to obtain such matrix templates. First, one may know the population distribution, i.e., the probability f(d) of drawing a row with d non-zeros in the process generating the data. Second, without such domain knowledge one may have seen a public dataset D from which one could draw statistics and approximate the population distribution by inferring ˆFD(d) ≥F(d), before one has to process the sensitive data. This typically leads to quite accurate results. Third, one could start from the real data provided as input and use differential privacy to release safe upper bounds ˆFDP (d) ≥ˆF(d). This strategy leading to looser bounds to protect privacy. Fourth, we can compute it using MPC-based quantile estimation. Section 7 details these private estimations of the matrix template. 6.4 Overhead comparison Row padding and matrix templating induce some overhead due to the dummy non-zeros they add. Figure 4 compares the memory footprint of several alternatives: dense multiplication, no knowledge minimization, row anonymization, max-row padding, and matrix templating (using quantiles as in Section 6.3). The memory footprint is a simple proxy to understand the impact of knowledge minimization techniques on our algorithms’ related communication and computation costs (see also our tables in Section 4). We base the comparison on four real-world datasets: SMS Spam Gómez Hidalgo et al. (2006), Amazon Access Control, Bookcrossing Ziegler et al. (2005), and MovieLens Harper & Konstan (2016). For the matrix templating, our experiments simulate the MPC-based estimation with 20 data owners. On the one hand, (as expected) row anonymization has no effect on the memory footprint. On the other hand, padding rows to a global maximum induces a small overhead on the Spam and the Access control datasets, while it induces significant overhead on the Bookcrossing and MovieLens datasets. The overhead on Bookcrossing and MovieLens makes the sparse matrix nearly as large as the dense matrix... in such case, the sparse algorithms would have no interest. Our matrix templating significantly reduces this overhead. For example, on MovieLens, adding dummy non-zeros via matrix templating only doubles the memory footprint (vs. nearly ×100 for max-row padding). As mentioned earlier, the overhead could be even further reduced if we reveal more than 5 quantiles. Finally, on the Access Control dataset, all knowledge minimization techniques have an equal memory foot- print. In this dataset, all rows have by design the same number of non-zeros (i.e., naturally padded), so max-row padding and matrix templating does not need to add any dummy non-zero. 7 Private estimation of the public knowledge As explained in Section 6, there are several ways to construct a matrix template, i.e., to find upper bounds on the number of non-zeros in the rows in certain quantiles of the data. While this matrix template limits 16 Figure 4: Matrix storage costs for different techniques of public knowledge minimization on four real-world datasets the prior knowledge necessary, we can further enhance privacy by estimating it using privacy-preserving techniques. This section details three techniques to estimate privately matrix templates: based on MPC, based on Differential Privacy, and on the population distribution. 7.1 MPC-based First, each data owner i shares a value [[nnzj]] for each row j they own. Then, the servers compute securely the quantiles on the list [[[nnz1]], . . . , [[nnzn]]]. To compute them, the servers simply need to sort obliviously the list and to obtain the elements on indices: ⌊0.25n⌋, ⌊0.5n⌋, ⌊0.75n⌋, ⌊0.9n⌋, ⌊0.99n⌋, n. These quantiles provide an approximate template: some submatrices may not fit perfectly into it because each data owner has a slightly different row sparsity. For example, data owner i may have 50% of rows with less than 13 non-zeros while data owner j has 50% with less than 17 non-zeros. If the MPC protocol outputs 15 as quantile 0.5, data owner i can pad its rows, but data owner j has to remove some non-zeros to fit the template. To finalize our template, we compute a scaling factor ensuring that all sub-matrices fit in it. Let {[ nnz1 . . . [ nnzK} refer to the parameters from the previous approximate template. Based on the imper- fect template, each data owner i computes a scaling factor αi ≥1 such that: for any row Xj owned by data owner i, nnzj ≤αi[ nnzk ([ nnzk being the padding parameter corresponding to Xj in the template). Then, the data owners share their respective [[αi]] and the servers compute α using a secure maximum protocol on these secret-shared values. We can now deduce the final templates: nnzk = α[ nnzk, for k ∈{1 . . . K}. Based the values {nnz1, . . . , nnzK} and ni number of rows by data owner i (usually public in MPC), we can infer the dimensions of their padded matrix: nnz1 non-zeros in their first ⌊0.25ni⌋rows, nnz2 in their next ⌊(0.5 −0.25)ni⌋rows, ..., and nnz6 non-zeros in their last ⌊(1 −0.99)ni⌋rows. Using matrix templating, the public knowledge is then reduced to a few quantiles from the per-row sparsity distribution. 17 7.2 Based on Differential Privacy Let [n] be the set of first n integers. Let X = {xi}n i=1 be a set of numbers identically and independently drawn from an unknown but fixed probability distribution DX. Let ˆf(X, t) = |{i ∈[n] | xi = t}| be the empirical probability distribution (histogram) of X and ˆF(X, t) = |{i ∈[n] | xi ≥t}| be the empirical cumulative distribution function. Consider the PPML setting where each data owner has exactly one subset X(j), with X = ∪k j=1X(j). Then, the data owners can compute a secret share of ˆF(X, t) by doing computations linear in n to first locally compute the partial counts ˆF(X(j), t) and then summing these partial counts by MPC. If the numbers in X are sensitive, revealing ˆF(X, t) for some values of t may be undesirable. A classic approach is to use differential privacy. In particular, two sets X and X′ are adjacent if the sets differ in only one element. Next, a mechanism M is ϵ-differentially private (DP) if for all pairs of adjacent X and X′, and for all possible subsets Y of the output space of M there holds that log P(M(X) ⊆Y ) P(P(X′) ⊆Y )  ≤ϵ In contrast to classic DP mechanisms which add zero-mean noise, in this section we are interested in finding a DP upper bound U ˆ F of ˆF. Let us first consider finding a DP upper bound to a single number y = ˆF(X, t). We define M+ ϵ,δ(y) = y −1 ϵ log(2δ) + Lap 1 ϵ  , where Lap(b)(y) = exp(−|y|/b)/2b is the Laplace distribution. Lemma 1. M+ ϵ,δ is ϵ-DP and with probability 1 −δ there holds y ≤M+ ϵ,δ(y). Proof. For λ > 0, the probability that Lap(1/ϵ) ≤−λ is Z −λ −∞ exp(−tϵ)ϵ/2 dt = 1 2 exp(−λϵ) Set λ = −log(2δ)/ϵ, then the probability that Lap(1/ϵ) ≤−λ is δ. Hence, the probability that M+ ϵ,δ(y) = y + λ + Lap(1/ϵ) < y is δ. Next, let us consider the problem of finding a DP vector (ui)l i=1 such that ∀i ∈[l] : ˆF(X, ti) ≤ui with high probability. A well-known result on differential privacy describing the relation between the privacy cost ϵ and the needed amount of noise for histograms and similar data structures was first introduced by Dwork Dwork et al. (2010). The main idea is that for two adjacent datasets X and X′, the sequence ( ˆF(X, ti)−ˆF(X′, ti))l i=1 starts with 0 or more zeros, followed by a series of 0 or more +1 or −1 values, followed by 0 or more zeros. This is a consequence of the assumption that X and X′ differ in only one element, say xT and x′ T , and hence this causes the sequence to only be different at the thresholds ti between the smallest and the largest of xT and x′ T . As a consequence, the series ( ˆF(X, ti))l i=1 and ( ˆF(X′, ti))l i=1 correlate strongly and a strategy adding correlated noise gives favorable results (low variance and low ϵ). In particular, let us assume l = 2L is a power of 2 (else, we can just add a few thresholds ti and benefit from determining DP upper bounds to ˆF(X, ti) for these thresholds). For i ∈[l] and j + 1 ∈[L], let bi,j give the bit decomposition of i −1 in the sense that i −1 = PL−1 j=0 2jbi,j. Let T = (ti)l i=1 be a set of thresholds. For i ∈[L] and j + 1 ∈[2i−1], let ηi,j = Lap((L + 1)/ϵ) be Laplace random variable. Then, for j + 1 ∈[L] and k ∈[2j] we define M+l ϵ,δ(X, ti) = ˆF(X, ti) + L(L + 1) ϵ log L(L + 1) 2δ  + L−1 X j=0 ηj+1,⌊(i−1)/2j⌋ Lemma 2. Let δ = λLe−λ L! The complete sequence  M+l ϵ,δ(X, ti) l i=1 is ϵ-DP and with probability at least 1 −δ there holds ∀i ∈[l] : ˆF(X, ti) ≤M+l ϵ,δ(X, ti). 18 Figure 5: Distribution of row non-zero counts, and DP safe upper bounds for variable template block sizes Proof. That the sequence  M+l ϵ,δ(X, ti) l i=1 is ϵ-DP follows directly from a result by Dwork (Dwork et al., 2010, thm 4.1). Similarly to Lemma 1. For the second part of the statement, we need to analyze the probability that L−1 X j=0 ηj+1,⌊(i−1)/2j⌋≥−L(L + 1) ϵ log L(L + 1) 2δ  The lefthand side is a sum of L independent Laplace distributions Lap((L + 1)/ϵ), for each the probability that Lap L+1 ϵ  ≤−L+1 ϵ log  L(L+1) 2δ  is not larger than δ L. Hence, the probability that this equality holds for all L terms is δ. Figure 5 plots for the Bookcrossing dataset the real cumulative distribution of non-zeros per row, in particular the function value for x gives how many books were rated by x or more users. As can be seen, a few books were rated by a large number of users and most books were only rated by a few users. The figure also shows several curves where DP statistics of the empirical cumulative distribution function were used to publish upper bounds for the number of non-zero slots needed per row. The several curves use a different number of rows per block in the template matrix (the quantiles discussed in Section 6). We use equal-sized blocks, i.e., the uninformed prior. In reality it is likely for most datasets (and also this one) we could perform better by taking smaller blocks for small x and larger blocks for larger x. We used 0.01-differential privacy in Figure 5, and the upper bounds are guaranteed to give sufficient space in the template for the data with probability at least 1 −106. In case the data owners would in the end send rows not fitting in the template, the servers would need to abort the protocol and restart. In practice, machine learning algorithms usually use values for ϵ larger than 0.01. As one can see, revealing in a DP way more quantiles induces a higher privacy cost and requires adding more noise, which in turns results in a higher upper bound. Too many template partitions (and hence quantiles) will attempt to unnecessarily closely model the data and suffer from the differential privacy protection, while too few quantiles will insufficiently take into account the decrease of the function, e.g., here on the left of the plot one can see that the first block always needs to provide sufficient non-zeros to fit the most rated book. In contrast to Figure 3 for the same dataset, Figure 5 uses a linear scale, making the memory footprint more visual. In particular, the memory footprint is the area under the curve of a certain strategy. It seems that in this case blocks of 64 rows is the best among the three options shown. 19 (a) Linear scale (b) Logarithmic scale Figure 6: Distribution of row non-zero counts, and upper bounds based on knowledge of the population distribution 7.3 Based on the population distribution This subsection considers the situation where either the population distribution of the non-zero counts of rows is known publicly, or one has a public sample from which one can compute statistics. Assume that we know the population distribution f where f(d) is the probability that a row has d non-zeros, and F(d) is the probability that a row has d or more non-zeros. In that case, drawing rows identically and independently (as is needed for a sound statistical analysis) implies that we expect in a sample on average a fraction F(d) of rows to require d or more non-zeros, and the standard deviation on this estimate is p F(d)(1 −F(d))/n where n is the total number of rows. We can from this compute the probability δ(λ) that the real fraction of rows with more than d non-zeros is larger than F(d) + λ for λ > 0. In case one doesn’t know the population distribution but has a sample from which one can estimate these statistics, a similar approach can be followed applying the appropriate formulas, e.g., the standard deviation of a sample of size s is p p(1 −p)/(s −1) with p the measured probability. Figure 6a plots the number of non-zeros in the template matrix for every row in a way similar to Figure 5 for the Bookcrossing dataset. As there is no differential privacy cost, we can take the size of a template block to be a single row. The several curves use 5, 10 and 20 standard deviations, i.e., going up to a high level of confidence that the real data will fit in the template matrix if the data is drawn according to the population distribution (or the same distributon as the data from which statistics were taken). In the plot, all curves coincide because much less margin is needed compared to the differential privacy based strategy. To see the difference better, the same plot is shown in Figure 6b in logarithmic scale. 8 Conclusion Our paper introduced algorithms based on oblivious sorting to multiply secret-shared sparse matrices. Our algorithms are compatible with a more generic MPC setup (i.e., outsourced computations) than existing works on secure sparse matrix multiplications. Our algorithms avoid memory issues present in dense secure multiplications by leveraging the data sparsity. Our experiments show communication cost reductions up to ×1000 compared to dense matrix multiplications. We also implemented two real-world ML applications highlighting that our sparse algorithms support applications impractical with existing protocols. Besides matrix multiplications, our work also introduces various (sparse) data analysis algorithms commonly used by ML practitioners. Finally, we proposed methods to minimize the public knowledge (mandatory in any secure sparse algorithm) and obtain it in a privacy-preserving manner. 20 Our implementation is publicly available and open-source, so our algorithms can be transferred into MPC frameworks: https://github.com/MarcT0K/Secure-sparse-multiplications Acknowledgments This work was supported by the Netherlands Organization for Scientific Research (NWO) in the context of the SHARE project [CS.011], the ANR project ANR-20-CE23-0013 PMR, and by the Horizon Europe project HE TRUMPET References Lada A. Adamic, Rajan M. Lukose, Amit R. Puniyani, and Bernardo A. Huberman. Search in power-law networks. Physical Review E, 64(4), 2001. doi: 10.1103/PhysRevE.64.046135. Charu C. Aggarwal. Recommender Systems. Springer International Publishing, 2016. ISBN 978-3-319-29657- 9 978-3-319-29659-3. doi: 10.1007/978-3-319-29659-3. William Aiello, Fan Chung, and Linyuan Lu. A Random Graph Model for Power Law Graphs. Experimental Mathematics, 10(1), 2001. ISSN 1058-6458. doi: 10.1080/10586458.2001.10504428. Publisher: Taylor & Francis _eprint: https://doi.org/10.1080/10586458.2001.10504428. Toshinori Araki, Jun Furukawa, Kazuma Ohara, Benny Pinkas, Hanan Rosemarin, and Hikaru Tsuchida. Secure Graph Analysis at Scale. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2021. ISBN 978-1-4503-8454-4. doi: 10.1145/3460120.3484560. Gilad Asharov, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Ariel Nof, Benny Pinkas, Katsumi Takahashi, and Junichi Tomida. Efficient Secure Three-Party Sorting with Applications to Data Analysis and Heavy Hitters. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022. ISBN 978-1-4503-9450-5. doi: 10.1145/3548606.3560691. Constance Beguier, Mathieu Andreux, and Eric W. Tramel. Efficient Sparse Secure Aggregation for Federated Learning, 2021. James Bell, Adrià Gascón, Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Mariana Raykova, and Phillipp Schoppmann. Distributed, Private, Sparse Histograms in the Two-Server Model. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022. ISBN 978-1-4503-9450-5. doi: 10.1145/3548606.3559383. Aydın Buluç, John Gilbert, and Viral B. Shah. Implementing Sparse Matrices for Graph Algorithms. In Graph Algorithms in the Language of Linear Algebra. Society for Industrial and Applied Mathematics, 2011. ISBN 978-0-89871-990-1 978-0-89871-991-8. doi: 10.1137/1.9780898719918.ch13. Chaochao Chen, Jun Zhou, Li Wang, Xibin Wu, Wenjing Fang, Jin Tan, Lei Wang, Alex X. Liu, Hao Wang, and Cheng Hong. When Homomorphic Encryption Marries Secret Sharing: Secure Large-Scale Sparse Logistic Regression and Applications in Risk Control. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 2652–2662, 2021. ISBN 978-1-4503-8332-5. doi: 10.1145/3447548.3467210. Hao Chen, Miran Kim, Ilya Razenshteyn, Dragos Rotaru, Yongsoo Song, and Sameer Wagh. Maliciously Secure Matrix Multiplication with Applications to Private Deep Learning. In Advances in Cryptology – ASIACRYPT 2020. Springer International Publishing, 2020. ISBN 978-3-030-64840-4. doi: 10.1007/ 978-3-030-64840-4_2. D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. In Proceedings of the nineteenth annual ACM symposium on Theory of computing, 1987. ISBN 978-0-89791-221-1. doi: 10.1145/28395.28396. Jinming Cui, Chaochao Chen, Lingjuan Lyu, Carl Yang, and Wang Li. Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation. Advances in Neural Information Processing Systems, 34, 2021. 21 Iain S. Duff, Michael A. Heroux, and Roldan Pozo. An overview of the sparse basic linear algebra subpro- grams: The new standard from the BLAS technical forum. ACM Transactions on Mathematical Software, 28(2):239–267, 2002. ISSN 0098-3500. doi: 10.1145/567806.567810. I.S. Duff. A survey of sparse matrix research. Proceedings of the IEEE, 65(4):500–535, 1977. ISSN 1558-2256. doi: 10.1109/PROC.1977.10514. Jean-Guillaume Dumas, Pascal Lafourcade, Julio Lopez Fenner, David Lucas, Jean-Baptiste Orfila, Clé- ment Pernet, and Maxime Puys. Secure Multiparty Matrix Multiplication Based on Strassen-Winograd Algorithm. In Advances in Information and Computer Security, 2019. ISBN 978-3-030-26834-3. doi: 10.1007/978-3-030-26834-3_5. Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the 42nd ACM symposium on Theory of computing - STOC ’10, pp. 715. ACM Press, 2010. ISBN 978-1-4503-0050-6. doi: 10.1145/1806689.1806787. Maximilian Egger, Marvin Xhemrishi, Antonia Wachter-Zeh, and Rawad Bitar. Sparse and Private Dis- tributed Matrix Multiplication with Straggler Tolerance, 2023. David Evans, Vladimir Kolesnikov, and Mike Rosulek. A Pragmatic Introduction to Secure Multi-Party Computation. Foundations and Trends® in Privacy and Security, 2(2-3):70–246, 2018. ISSN 2474-1558, 2474-1566. doi: 10.1561/3300000019. Oded Goldreich. Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press, 2009. ISBN 978-1-107-39397-4. José María Gómez Hidalgo, Guillermo Cajigas Bringas, Enrique Puertas Sánz, and Francisco Carrero García. Content based SMS spam filtering. In Proceedings of the 2006 ACM symposium on Document engineering, 2006. ISBN 978-1-59593-515-1. doi: 10.1145/1166160.1166191. Koki Hamada, Dai Ikarashi, Koji Chida, and Katsumi Takahashi. Oblivious Radix Sort: An Efficient Sorting Algorithm for Practical Secure Multi-party Computation, 2014. F. Maxwell Harper and Joseph A. Konstan. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems, 5(4):1–19, 2016. ISSN 2160-6455, 2160-6463. doi: 10.1145/2827872. Aditya Hegde, Helen Möllering, Thomas Schneider, and Hossein Yalame. SoK: Efficient Privacy-preserving Clustering. Proceedings on Privacy Enhancing Technologies, 2021(4):225–248, 2021. ISSN 2299-0984. doi: 10.2478/popets-2021-0068. de Hoogh, S.J.A. Design of large scale applications of secure multiparty computation : secure linear pro- gramming. PhD Thesis, Technische Universiteit Eindhoven, 2012. ISBN: 9789038632032. Mahimna Kelkar, Mariana Raykova, and Karn Seth. Secure Poisson Regression. In 31st USENIX Security Symposium (USENIX Security 22), 2022. Marcel Keller. MP-SPDZ: A Versatile Framework for Multi-Party Computation. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 1575–1590, 2020. ISBN 978-1-4503-7089-9. doi: 10.1145/3372297.3417872. Nishat Koti, Mahak Pancholi, Arpita Patra, and Ajith Suresh. SWIFT: Super-fast and Robust Privacy- Preserving Machine Learning. pp. 2651–2668, 2021. ISBN 978-1-939133-24-3. Sven Laur, Jan Willemson, and Bingsheng Zhang. Round-Efficient Oblivious Database Manipulation. In Information Security. Springer, 2011. ISBN 978-3-642-24861-0. doi: 10.1007/978-3-642-24861-0_18. Charles E. Leiserson. Performance Engineering of Software Systems. Lecture 1: Introduction and Matrix Multiplication Leiserson, 2018. 22 Xiang Li, Charles X. Ling, and Huaimin Wang. The Convergence Behavior of Naive Bayes on Large Sparse Datasets. ACM Transactions on Knowledge Discovery from Data, 11(1):1–24, 2017. ISSN 1556-4681, 1556-472X. doi: 10.1145/2948068. Donghang Lu and Aniket Kate. RPM: Robust Anonymity at Scale. Proceedings on Privacy Enhancing Technologies, 2023(2), 2023. ISSN 2299-0984. doi: 10.56553/popets-2023-0057. Tianpei Lu, Bingsheng Zhang, Lichun Li, and Kui Ren. Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries, 2023. Payman Mohassel and Peter Rindal. ABY3: A Mixed Protocol Framework for Machine Learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 35– 52, 2018. ISBN 978-1-4503-5693-0. doi: 10.1145/3243734.3243760. Payman Mohassel and Yupeng Zhang. SecureML: A System for Scalable Privacy-Preserving Machine Learn- ing. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38, 2017. doi: 10.1109/SP.2017.12. Johannes Mono and Tim Güneysu. Implementing and Optimizing Matrix Triples with Homomorphic En- cryption. In Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, 2023. ISBN 9798400700989. doi: 10.1145/3579856.3590344. Kartik Nayak, Xiao Shaun Wang, Stratis Ioannidis, Udi Weinsberg, Nina Taft, and Elaine Shi. GraphSC: Parallel Secure Computation Made Easy. In 2015 IEEE Symposium on Security and Privacy, pp. 377–394, 2015. doi: 10.1109/SP.2015.30. Arpita Patra and Ajith Suresh. BLAZE: Blazing Fast Privacy-Preserving Machine Learning. In Proceedings 2020 Network and Distributed System Security Symposium. Internet Society, 2020. ISBN 978-1-891562- 61-7. doi: 10.14722/ndss.2020.24202. Berry Schoenmakers. MPyC—Python package for secure multiparty computation. In Workshop on the Theory and Practice of MPC, 2018. Phillipp Schoppmann, Adrià Gascón, Mariana Raykova, and Benny Pinkas. Make Some ROOM for the Zeros: Data Sparsity in Secure Distributed Machine Learning. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 1335–1350, 2019. ISBN 978-1-4503-6747-9. doi: 10.1145/3319535.3339816. Adi Shamir. How to share a secret. Communications of the ACM, 22(11):612–613, 1979. ISSN 0001-0782. doi: 10.1145/359168.359176. Shuang Sun and Eleftheria Makri. SoK: Multiparty Computation in the Preprocessing Model, 2025. URL https://eprint.iacr.org/2025/060. Publication info: Preprint. Tomas Toft. Primitives and applications for multi-party computation. PhD thesis, University of Aarhus, 2007. Publisher: Citeseer. Sameer Wagh, Divya Gupta, and Nishanth Chandran. SecureNN: 3-Party Secure Computation for Neural Network Training. Proc. Priv. Enhancing Technol., 2019(3):26–49, 2019. Sameer Wagh, Shruti Tople, Fabrice Benhamouda, Eyal Kushilevitz, Prateek Mittal, and Tal Rabin. Fal- con: Honest-Majority Maliciously Secure Framework for Private Deep Learning. Proceedings on Pri- vacy Enhancing Technologies, 2021. ISSN 2299-0984. URL https://petsymposium.org/popets/2021/ popets-2021-0011.php. Marvin Xhemrishi, Rawad Bitar, and Antonia Wachter-Zeh. Distributed Matrix-Vector Multiplication with Sparsity and Privacy Guarantees. In 2022 IEEE International Symposium on Information Theory (ISIT), 2022. doi: 10.1109/ISIT50566.2022.9834805. Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, and Georg Lausen. Improving recommendation lists through topic diversification. In Proceedings of the 14th international conference on World Wide Web, 2005. ISBN 978-1-59593-046-0. doi: 10.1145/1060745.1060754. 23 A Security proof sketch This appendix sketches the security proof of our algorithms. We provide proof sketches in the honest-but- curious model using the real-ideal paradigm Evans et al. (2018). We can define the ideal functionality for the sparse vector multiplication Fη VectMult as follows: • Input: [[xa]] = {([[i1,a]], [[v1,a]]) , . . . ([[iη,a]], [[vη,a]])} and [[xb]] (defined similarly) • Output: a secret-shared scalar [[s]] such that: s = P j,k∈{1...η} vj,a · vk,b · δ(ij,a, ik,b) (with δ the Kroenecker delta). This functionality has a property η. This property represents the protocol public knowledge in our functional- ity. As motivated in Section 3.3, public knowledge is necessary to have efficient secure sparse multiplications. We sketch a proof showing that Alg. 1 securely realizes the ideal functionality Fη VectMult. Our proof relies essentially on the composition theorem for the honest-but-curious model (Theorem 7.3.3 of Goldreich (2009)). Certain subprotocols we use have been proven to be secure and correct in the honest-but-curious model. By using the composition theorem, we can rely on their security and only prove the security of our overall protocol in which these subprotocols are embedded. The correctness of Alg. 1 can be verified by an easy arithmetic exercise. Security of the sub-protocols The first statement in Alg. 1 is a concatenation that is performed locally with no communication and hence does not need to be simulated. The second statement is an oblivious sort. The proof depends on the sub-protocol chosen for instantiation. This choice is highly influenced by the threat model, so we consider the Batcher’s sort for dishonest majority and the oblivious radix sort for honest majority (i.e., the two threat models studied in our paper). On the one hand, the Batcher’s sort only requires comparisons and arithmetic operations. It is then secure because the arithmetic and comparison operations on secret-shared values are trivially secure in the honest-but-curious model Hoogh (2012). On the other hand, the oblivious radix sort is proven secure in Hamada et al. (2014). After the sorting statement, we have a loop. This loop statement reveals no information because it uses η, a property of Fη VectMult (i.e., a public value). Finally, the loop contains a conditional statement. As highlighted in Section 4, we use the notation [[obliv. if]] to maximize the readability. This conditional statement would be implemented as follows: [[s]] ←[[s]] + ([[coord(zi)]] = [[coord(zi+1)]]) × [[val(zi)]] × [[val(zi+1)]], with [[coord(zi)]] = [[coord(zi+1)]] outputting [[1]] or [[0]]. The arithmetic operations are trivially secure in the honest-but-curious model. For the secure equality, we can use the secure protocol proposed by Toft Toft (2007). Hence, this oblivious conditional statement is secure because it relies on secure comparisons and arithmetic operations. Note that such oblivious conditional statements are recurrent in MPC protocols (e.g., in private heavy hitters Asharov et al. (2022)). Security of the overall protocol Since all the sub-protocols are individually secure, according to the Composition Theorem, Alg. 1 preserves the input secrecy in the honest-but-curious model. Matrix-vector and matrix-matrix We do not develop the proof sketch for our sparse matrix-vector (Alg. 2) and matrix-matrix (Alg. 5) multiplications, but the proof is very similar. As our sparse vector multiplication, these two algorithms use oblivious sorting, comparisons, arithmetic operations, and oblivious conditional structures. For all these operations, the security would be proven as sketched for the vector mul- tiplication. However, they also use two additional sub-protocols: AggEqualCoord and PlaceholderRemoval. The AggEqualCoord relies on an oblivious conditional structure and arithmetic operations. The proof is then trivial, because we can express the conditional structure using a combination of comparisons and arithmetic operators. The PlaceholderRemoval uses a “shuffle-and-reveal” trick: shuffle a list of secret-shared values and reveal which of them are placeholders. Other security papers Hamada et al. (2014) used similar “shuffle-and-reveal” tricks in protocols with proven security. In our algorithms, revealing the number of placeholders indirectly 24 reveals the number of non-zero elements in the output matrix. However, this information is public by definition because it is contained in the output. Extension to malicious security Our security proofs hold in the honest-but-curious model. Malicious security would require a verifiable secret-sharing scheme and dedicated proofs for the security in the malicious model. We remind that our main sub-protocols have known maliciously secure variant (especially oblivious sorting Asharov et al. (2022); Hamada et al. (2014) and oblivious shuffling Asharov et al. (2022); Laur et al. (2011)) that would be the basis of a malicious security proof. We leave this extension for future work. B Adapting existing secure sparse multiplication to the outsourced setting Our work introduces new protocols to perform secure sparse multiplications in an outsourced setting. While we introduce the first outsourced secure sparse multiplications, existing works Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) had already described secure protocols, but in a non-outsourced setting. Instead of introducing new protocols, one may wonder whether the existing protocols can be adapted to the outsourced setting. This appendix details why such an adaptation of Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) is not straightforward and can result in inefficient constructions. Adapting Chen et al. (2021); Cui et al. (2021) Cui et al. and Chen et al. described sparse-dense matrix multiplications where one party P1 knows the plaintext sparse matrix X and the other party P2 knows the plaintext dense matrix Y . In both protocols, the dense matrix Y is homomorphically encrypted and the first party P1 performs a dense multiplication between their plaintext sparse matrix X and the encrypted dense matrix Y it receives from P2. In Chen et al. Chen et al. (2021), the encrypted matrix Y is entirely transmitted to the first party. In Cui et al. Cui et al. (2021), the parties use a private information retrieval to avoid transmitting the whole matrix. Despite their differences, these two protocols cannot be adapted to the outsourced setting for the same reasons. First, the homomorphic encryption step is complex in an outsourced setting because there is no evident key dealer (contrary to the two-data-owner setup). Second, these protocols are possible because P1 knows the position of all non-zero values in X. Indeed, P1 performs a multiplication between the plaintext sparse matrix and the encrypted dense matrix. This operation is possible because homomorphically encrypted values can be trivially multiplied by plaintext constants. If these non-zero positions are hidden, this simple multiplication is no longer possible. There is then no straightforward solution to adapt the approach of Chen et al. (2021); Cui et al. (2021) to the outsourced setting. Third, these protocols only support sparse-dense multiplications: the matrix Y must be dense, or at least be converted into a dense representation before being homomorphically encrypted. Otherwise, P1 may learn which values of Y are zero. As a result, the memory needed to store Y could blow up with several orders of magnitude, and so could the communication and computation costs. Adapting Schoppmann et al. Schoppmann et al. (2019) They adopted a different approach than Chen et al. (2021); Cui et al. (2021): they introduce a low-level protocol called “ROOM” (Read-Only Oblivious Map) upon which they built their matrix multiplications. The ROOM protocol performs a secure look-up in a secret key-value store. This protocol guarantees that neither the key-value store, nor the look-up query are revealed. They introduce three instanciations of the ROOM protocol: BasicROOM, CircuitROOM, and PolyROOM They presented all their functionality in a two-data-owner setup with the query issued by the first party and the key-value store owned by the second. However, we can easily write these functionalities for an outsourced setup. Out of the three ROOM instanciations, only CircuitROOM is easily adaptabled to the outsourced setting because it relies on oblivious sorting and oblivious shuffling. Since our protocols also rely on sorting and shuffling, this observation may lead the reader to two questions: (1) What are the differences between the CircuitROOM-based sparse multiplications and our protocols? (2) What are the implications of these differences on efficiency? 25 They use the ROOM protocol to extract a dense sub-matrix from a sparse matrix. In contrast, we use sorting and shuffling for the overall sparse matrix multiplication. Besides this key algorithmic difference, efficient ROOM-based protocols in the outsourced setting is not straightforward. The following paragraphs study each matrix multiplication types to identify the algorithmic and efficiency differences. To multiply two sparse vectors x and y, Schoppmann et al. Schoppmann et al. (2019) first use CircuitROOM to extract a dense vector of size nnz(x) from y (i.e., corresponding to the non-zero values of x). This operation costs O((nnz(x) + nnz(y)) log(nnz(x) + nnz(y))) communications. Their protocol then performs a dense vector multiplication between the non-zero values of x and the extracted dense vector from y. This overall communication cost is O((nnz(x)+nnz(y)) log(nnz(x)+nnz(y))), as in our sparse vector multiplication (Alg. 1). However, their protocol has larger constant factors: while our protocol only requires an oblivious sort, CircuitROOM performs both an oblivious sort and oblivious shuffle. Their protocol even requires a dense vector multiplication after CircuitROOM. To multiply a sparse matrix X with a sparse vector y, Schoppmann et al. Schoppmann et al. (2019) call the ROOM protocol once for each data owner. In their protocol, each data owner shares a dense sub-matrix containing its non-zero columns (i.e., a column with at least one non-zero value). Hence, their protocol call the matrix-vector multiplication (and the underlying ROOM protocol) for each matrix of non-zero columns; in other words, for each data owner. On the contrary, our protocol can group the non-zeros from all data owners and process them all at the same time. In their paper, this design choice is not an issue because there is only two data owners. However, in an outsourced setup, each matrix row can be owned by a different data owner. Hence, the matrix multiplication would perform one vector multiplication per matrix row. Their communication cost is then O((nnz(X) + n · nnz(y)) log(nnz(X) + nnz(y))), which is worse than our cost, by a linear factor n. Indeed, Section 4.2 presents a non-trivial matrix-vector multiplication avoiding this linear factor (i.e., Alg. 2). Since ML datasets can contain thousands or even millions of rows (e.g., 340K in the Bookcrossing dataset), the CircuitROOM- based matrix-vector multiplication would be inefficient compared to our Alg. 2 (and even compared to the dense multiplication). To compute a correlation matrix X⊤X, Schoppmann et al. Schoppmann et al. (2019) require, in addition to ROOM, another sub-protocol called "Scatter". Unfortunately, their protocol described in the non-outsourced setting cannot be adapted easily to an outsourced setting because it relies on oblivious transfer, a purely two-party protocol with a party having access to the plaintext secret. Hence, we cannot adapt their sparse matrix-matrix multiplication to the outsourced setting. To sum up, the adaptation of the CircuitROOM-based sparse multiplication is either inefficient (for vector- vector and matrix-vector multiplications) or not straightforward (for matrix-matrix multiplications). C Avoiding the linear number of rounds This appendix presents optimized algorithms to avoid the linear round complexity in Alg. 2 and 5. In Alg. 2, this round complexity has two sources: the aggregation step (Alg. 3) and the multiplication loop: lines 7 to 14. In Alg. 5, the round complexity is only caused by the aggregation step. To avoid this round complexity, we take inspiration from the recursive secure maximum implemented in many MPC libraries (including MPyC). If implemented naively (i.e., scanning the list elements one by one), the round complexity of the maximum function would be linear. MPC libraries rely on a recursive algorithm to reach a logarithmic complexity. Alg. 6 details this algorithm. The intuition behind it is to represent the list using a tree. Then, each sub-tree recursively computes its maximum and “propagate” this value toward the root. We reuse the same recursive structure to build our optimized algorithms with logarithmic round complexity. Alg. 7 presents the optimized aggregation algorithm. Alg. 10 presents the optimized multiplication loop. Due to their recursive structure, these algorithms are more convoluted than their naive alternatives, but these optimizations are necessary to respect the round complexities presented in the main text for the sparse 26 Algorithm 6 Secure maximum via a recursive function Input: [[V ]], an unsorted list of values. 1: function RecursiveMax([[V ]]) 2: n ←length([[V]]) 3: if n = 1 4: return [[V1]] 5: Initialize [[Roots]], a list of ⌈n/2⌉secret-shared values 6: for k ←1 to ⌈n/2⌉do 7: [[obliv. if]] [[V2k]] > [[V2k+1]] 8: [[Rootsk]] ←[[V2k]] 9: else 10: [[Rootsk]] ←[[V2k+1]] 11: return RecursiveMax(Roots) Algorithm 7 Optimized aggregation of tuples with equal coordinates Input: [[Z]], a sorted list of n secret-shared tuples. Public knowledge: None 1: function OptimizedAggEqualCoord([[Z]]) 2: // Pre-processing: O(1) rounds 3: Initialize list [[Children]] with ⌈n/4⌉empty elements 4: for k ←1 to ⌈n/4⌉do 5: AggIfEqual(Z4k, Z4k+1) 6: AggIfEqual(Z4k+1, Z4k+2) 7: AggIfEqual(Z4k+2, Z4k+3) 8: [[Childrenk]] ←([[Z4k]], [[Z4k+1]], [[Z4k+2]], [[Z4k+3]]) 9: // Online phase: O(log n) rounds 10: [[Z]] ←RecProp([[Children]]) 11: // Offline post-processing 12: for k ←1 to ⌈n/4⌉do 13: ([[Z4k]], [[Z4k+1]], [[Z4k+2]], [[Z4k+3]]) ←[[Childrenk]] 14: for k ←1 to n do 15: [[obliv. if]] [[val(Zk)]] ̸= [[0]] 16: [[val(Zk)]] ←[[⊥]] 17: return [[Z]] matrix-vector and matrix-matrix multiplications. These two algorithms are similar and reuse the same intuitions; only a few minor operations differ. Hence, this appendix only discusses Alg. 7 in detail. Our algorithm builds a binary tree from the non-zero tuples. Each leaf contains four non-zero tuples, and the internal nodes represent some sub-lists of non-zero tuples. Each internal node stores four non-zero tuples: “minimum tuple” of the left (child) sub-tree, “maximum tuple” of the left sub-tree, “minimum tuple” of the right sub-tree, and “maximum tuple” of the right sub-tree. We compare the tuples based on their coordinate, so the maximum tuple is the tuple with the highest coordinates. To understand the intuition behind Alg. 7, it is necessary to understand the role of the recursion. Let us assume we have two sub-lists on which the aggregation is already completed, and let us understand the necessary steps to obtain a whole list with the aggregation completed. To know whether a sub-list must be updated, we only need to compare the “maximum” tuple of the left sub-list to the “minimum” tuple 27 Algorithm 8 Sub-functions used in Alg. 7 1: procedure AggIfEqual([[tup1]], [[tup2]]) 2: [[obliv. if]] [[coord(tup1)]] = [[coord(tup2)]] 3: [[val(tup2)]] ←[[val(tup2)]] + [[val(tup1)]] 4: [[val(tup1)]] ←[[0]] 5: function UpProp([[Child1]], [[Child2]]) 6: [[Root]] ←(MinLeft([[Child1]]), MaxRight([[Child1]]), MinLeft([[Child2]]), MaxRight([[Child2]])) 7: AggIfEqual(MinLeft([[Root]]), MaxLeft([[Root]])) 8: AggIfEqual(MaxLeft([[Root]]), MinRight([[Root]])) 9: AggIfEqual(MinRight([[Root]]), MaxRight([[Root]])) 10: return [[Root]] 11: procedure DownProp([[Child]], [[NewMin]], [[NewMax]]) 12: MinLeft([[Child]]) ←[[NewMin]] 13: AggIfEqual(MinLeft([[Root]]), MaxLeft([[Root]])) 14: AggIfEqual(MaxLeft([[Root]]), MinRight([[Root]])) 15: MaxRight([[Child]]) ←[[NewMax]] Algorithm 9 Generic recursive propagation used in Alg. 7 and 10 Input: [[Children]], a list of 4-element tuples. UpProp and DownProp implementations vary depending on the algorithm using this recursive approach. 1: function RecProp([[Children]]) 2: n ←length([[Children]]) 3: if n = 1 4: return [[Children]] 5: for k ←1 to ⌈n/2⌉do 6: [[Rootsk]] ←UpProp([[Children2k]], [[Children2k+1]]) 7: [[Roots]] ←RecProp([[Roots]]) 8: for k ←1 to ⌈n/2⌉do 9: DownProp([[Children2k]], MinLeft([[Rootsk]]), MaxLeft([[Rootsk]])) 10: DownProp([[Children2k+1]], MinRight([[Rootsk]]), MaxRight([[Rootsk]])) 11: return [[Children]] of the right sub-list. This claim holds because our aggregation algorithm takes as input a sorted list of non-zero tuples. If these two tuples share the same coordinate, the “maximum” tuple of the left list must be aggregated to the “minimum” tuple in the right list. Otherwise, no operation is necessary. Contrary to the secure maximum algorithm, Alg. 7 passes through the tree twice: from leaves to root, then from root to leaves. The first phase identifies which consecutive sub-trees share minimum-maximum with equal coordinates. The second phase propagates the aggregated values to the leaves. Alg. 9 describes this recursive propagation. The implementation of the “upward” and “downward” propagations of Alg. 7 are given in Alg. 8. As each node stores four non-zero tuples, Alg. 7 implicitly assumes that the input list is a multiple of 4 in length. If not, we can pad the list with placeholders without impacting the rest of the algorithm. Since we rely on a binary-tree approach, the complexity of this algorithm is O(m log m) with m the number of leaves. This algorithm reduces the round complexity to O(log m) because we can parallelize the operations at each tree level. 28 Algorithm 10 Optimized multiplication loop for Alg. 2 Input: [[Z]], a sorted list of n secret-shared tuples. Public knowledge: None 1: function OptimizedMultLoop([[Z]]) 2: // Pre-processing O(1) rounds 3: V1 ←[[val(Z1)]] 4: for k ←2 to n do 5: [[obliv. if]] [[coord(Zk)]] = [[coord(Zk−1)]] 6: Vk ←[[val(Zk)]] 7: else 8: Vk ←[[⊥]] 9: Initialize list [[Children]] with ⌈n/4⌉empty elements 10: for k ←1 to ⌈n/4⌉do 11: ReplaceIfNull(V4k, V4k+1) 12: ReplaceIfNull(V4k+1, V4k+2) 13: ReplaceIfNull(V4k+2, V4k+3) 14: [[Childrenk]] ←([[V4k]], [[V4k+1]], [[V4k+2]], [[V4k+3]]) 15: // Online phase: O(log n) rounds 16: [[Children]] ←RecProp([[Children]]) 17: // Post-processing: O(1) rounds 18: for k ←1 to ⌈n/4⌉do 19: ([[V4k]], [[V4k+1]], [[V4k+2]], [[V4k+3]]) ←[[Childrenk]] 20: for k ←1 to n do 21: [[obliv. if]] [[Vk]] ̸= [[⊥]] 22: [[val(Zk)]] ←[[val(Zk)]] × [[Vk]] 23: return [[Z]] 24: procedure ReplaceIfNull([[vold]], [[vnew]]) 25: [[obliv. if]] [[vold]] = [[⊥]] 26: [[vold]] ←[[vnew]] 27: function UpProp([[Child1]], [[Child2]]) 28: [[Root]] ←(MinLeft([[Child1]]), MaxRight([[Child1]]), MinLeft([[Child2]]), MaxRight([[Child2]])) 29: ReplaceIfNull(MinLeft([[Root]]), MaxLeft([[Root]])) 30: ReplaceIfNull(MaxLeft([[Root]]), MinRight([[Root]])) 31: ReplaceIfNull(MinRight([[Root]]), MaxRight([[Root]])) 32: return [[Root]] 33: procedure DownProp([[Child]], [[NewMin]], [[NewMax]]) 34: ReplaceIfNull(MinLeft([[Child]]), [[NewMin]]) 35: ReplaceIfNull(MaxLeft([[Child]]), [[NewMin]]) 36: ReplaceIfNull(MinRight([[Child]]), [[NewMin]]) 37: MaxRight([[Child]]) ←[[NewMax]] The optimized multiplication loop (Alg. 10) reuses the same intuition as the optimized aggregation. The main difference is that we do not want to aggregate some values but replicate some values. We want to build a vector containing one multiplicative value for each non-zero tuple. The algorithm starts by creating a vector of placeholders with a few multiplicative values. Then, we use our recursive propagation to replace each placeholder with the closest (non-placeholder) value on the left. Once this vector is built, we multiply our non-zero elements by their corresponding multiplicative value. This element-wise vector multiplication requires one communication round. 29 Contrary to Alg. 7, the recursive part of Alg. 10 work on a vector of scalar instead of working on a vector of tuples. This difference slightly simplifies the value propagation but requires more pre- and post-processing than Alg. 7. Hence, Alg. 10 also uses the recursive Alg. 9 but with different “upward” and “downward” propagations described in Alg. 10. As Alg. 7, this optimized multiplication loop has a logarithmic round complexity. 30
Secure Sparse Matrix Multiplications and their Applications to Privacy-Preserving Machine Learning Marc Damie -party computation (MPC) enables executing Machine Learning (ML) algorithms on secret-shared or encrypted data. However, existing MPC frameworks are not optimized for sparse data. This makes them unsuitable for ML applications involving sparse data, e.g., recommender systems or genomics. Even in plaintext, such applications involve high-dimensional sparse data, that cannot be processed without sparsity-related optimizations due to prohibitively large memory requirements. Since matrix multiplication is central in ML algorithms, we propose MPC algorithms to multiply secret sparse matrices. On the one hand, our algorithms avoid the memory issues of the "dense" data representation of classic secure matrix multiplication algorithms. On the other hand, our algorithms can significantly reduce communication costs (some experiments show a factor 1000) for realistic problem sizes. We validate our algorithms in two ML applications in which existing protocols are impractical. An important question when developing MPC algorithms is what assumptions can be made. In our case, if the number of non-zeros in a row is a sensitive piece of information then a short runtime may reveal that the number of non-zeros is small. Existing approaches make relatively simple assumptions, e.g., that there is a universal upper bound to the number of non-zeros in a row. This often doesn't align with statistical reality, in a lot of sparse datasets the amount of data per instance satisfies a power law. We propose an approach which allows adopting a safe upper bound on the distribution of non-zeros in rows/columns of sparse matrices. 1 Introduction MPC protocols are cryptographic primitives allowing to compute functions on secret inputs. Over the years, they have been used in many application domains such as privacy-preserving ML (PPML) Mohassel & Zhang (2017) or telemetry Asharov et al. (2022) to address confidentiality and privacy concerns. However, existing protocols are impractical for many real-world applications, especially when the applications involve high-dimensional sparse data, i.e., data with a large majority of zero values. Storing large sparse datasets in a dense format (one value per cell) demands excessive memory. On real-world sparse data, this "dense storage" often becomes prohibitive; making computation infeasible due to memory 1 16 Oct 2025 constraints. Moreover, dense representations lead to inefficient linear algebra operations, as most of the computational effort is wasted processing zeros. Over the last 50 years, algorithms to process plaintext sparse data efficiently have been proposed Buluç et al. (2011); Duff (1977). Since 2002, dedicated software libraries Duff et al. (2002) popularized them. Sparse data is present in many ML applications but also in other fields, including graph algorithms. Our paper focuses on PPML applications, but our results apply to any application involving private sparse data. Several MPC frameworks enable training ML models on secret-shared data Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Mohassel & Zhang (2017); Patra & Suresh (2020); Wagh et al. (2019; 2021). These algorithms make use of existing MPC frameworks such as MP-SPDZ Keller (2020). However, none of these frameworks provide algorithms optimized for sparse data. The scalability limitations (especially memory issues) of dense plaintext algorithms transfer naturally to dense MPC algorithms. Thus, MPC frameworks need dedicated sparse algorithms for ML applications like recommender systems. These frameworks are not fundamentally incompatible with sparse data; there is simply no suitable secure sparse algorithm in the literature. Related works There exist protocols to sum sparse vectors securely, including sorting-based Asharov et al. (2022); Bell et al. (2022), private histograms Bell et al. (2022), or private-union-based Beguier et al. (2021). A few papers Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) studied secure sparse multiplications in a two-party setting. However, these protocols require the data owners to actively participate because one of the computation parties must know the plaintext sparse data. Hence, they do not support more than two data owners simultaneously, which is too limiting, as modern ML applications involve thousands of data owners. Typical ML applications require a more generic MPC setup: outsourced training. In this setting (see Figure 1), the data owners share their secret data with a group of computation servers and disconnect. This distinction between input parties and computation parties enables supporting an arbitrary number of data owners, contrary to existing works Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019). Appendix B details why the adaptation of these works either results in an inefficient construction or is not straightforward. The secure multiplication of sparse matrices in the outsourced setting is then a significant open problem. Our work will focus on this outsourced setting; providing sparse multiplications compatible with any MPC framework. Some information-theory works Egger et al. (2023); Xhemrishi et al. (2022) studied the multiplication of a secret-shared matrix with a public matrix. Due to the public input, their setting differs from classic MPC where all inputs are secret. Finally, Nayak et al. Nayak et al. (2015) presented secure graph computations (a problem closely related to sparse operations due to the sparse adjacency matrices), but they do not present multiplication protocols. Our contributions We present two secure algorithms to compute matrix multiplications on secretshared sparse data. Our algorithms avoid the memory issues of dense matrix multiplications on sparse data (e.g., in an experiment from Section 5: 19TB using dense multiplications vs. 60GB using ours). Moreover, our algorithms reduce the communication costs up to a factor 1000, and we implement two ML applications that are impractical with existing secure algorithms. Third, as we highlight that any efficient MPC protocol (i.e., not only ours) on sparse data requires public knowledge about the sparsity, we leverage the properties of real-world sparse data to minimize this mandatory public knowledge and obtain it in a privacy-preserving manner. Notations Let nnz(x) (resp. nnz(X)) refer to the number of non-zero values in a vector x (resp. matrix X). Let [[a]] refer to a share of a secret-shared value a. In our algorithms, any operation involving secretshared values corresponds to a secure variant; e.g., [[a]] + [[b]] is the secure addition of a and b and [[sort(l)]] is the oblivious sort of l. Secure algorithms are executed jointly by all shareholders. 2 Figure 1: Outsourced MPC setting 2 Preliminaries 2.1 Threat model Threat modeling describes the expected behavior of the agents involved in an MPC protocol: an honest agent follows the protocol, an honest-but-curious agent follows the protocol and attempts to infer private information passively, and a malicious agent does not necessarily follow the protocol and can actively disrupt the protocol to obtain private information. MPC works distinguish security against a majority of honest agents from security against a majority of dishonest agents. "Dishonest agent" is an umbrella term encompassing malicious and honest-but-curious agents (depending on the security setup considered). We analyze our algorithms under honest and dishonest majority. We focus on honest-but-curious security, but our algorithms rely on oblivious shuffling and sorting with known maliciously secure variants Hamada et al. (2014); Laur et al. (2011). We leave for future work the extension to malicious security and the subsequent security proofs. In line with related works, our threat model excludes poisoning attacks where malicious data owners forge input data so the protocol reveal private information. We assume the data owners provide well-formed secret-shared inputs. 2.2 Outsourced setting Our contribution focuses on an "outsourced" computation setup (popular in recent PPML works Hegde et al. (2021); Mohassel & Zhang (2017); Wagh et al. (2019; 2021)): each data owner shares their secret data with a group of computation servers. These servers build a secret-shared matrix [[X]] containing all data owners' inputs (e.g., one row per data owner1) and perform secure computations on this matrix. For example, the servers can execute a protocol to train an ML model [[θ]] on the dataset X, as in Figure 1. In this setting, the data owners share their secret data and disconnect. Outsourced protocols supports an arbitrary number of data owners because it separates input parties (i.e., data owners) from computation parties. Contrary to many PPML works that support a two-party Mohassel & Zhang (2017); Hegde et al. (2021) or three-party Wagh et al. (2019; 2021) outsourced setups, our work support an unbounded number of computation parties. Nevertheless, secret-sharing protocols induce a quadratic communication cost in the number of parties. This quadratic cost makes outsourcing necessary to support thousands of data owners like in ML applications. All existing secure sparse multiplications Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) support a non-outsourced setting. They presented two-party protocols assuming the computation parties are 1This is sometimes referred to as "horizontal" data partitioning in PPML literature. 3 also data owners (⇒two data owners maximum) because they require one of the computation parties to know the plaintext sparse data. Such a setting is too limiting for ML applications as they typically involve many data owners. Adapting existing protocols to avoid designing new ones? The adaptation is not easy because a plaintext knowledge is often essential to these protocols. Appendix B details why the adaptation of Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) to the outsourced setting either results in an inefficient construction or is not straightforward. 2.3 MPC primitives Secret sharing Shamir (1979) enables sharing a secret value x between multiple parties. Each shareholder j holds a share [[x]]j, revealing nothing about x. The shareholders can reconstruct the secret value by gathering their shares. Several secret sharing schemes exist, e.g., additive shares are finite field elements drawn randomly such that: P j[[x]]j = x. Many protocols Evans et al. (2018) enable processing secret-shared values; from arithmetic operations to complex operations (e.g., sorting). Like related works in private data analysis Araki et al. (2021); Asharov et al. (2022), our algorithms rely heavily on oblivious shuffling and sorting. "Oblivious sorting" designates MPC protocols for sorting secret-shared values without revealing information about the values. Sorting networks Hamada et al. (2014) are straightforward oblivious sorting solutions, as their execution is data-independent. Their cost is O(m log2 m) for a list of size m. The oblivious radix sort Hamada et al. (2014) is the state-of-the-art honest-majority protocol, with its optimal complexity of O(m log m) (and low constant factors). Recent papers optimized this protocol in a three-party setting Araki et al. (2021); Asharov et al. (2022). "Oblivious shuffling" protocols randomly permute lists of secret values without revealing the permutation. Laur et al. Laur et al. (2011) proposed an honest-majority protocol with linear costs, used in many recent protocols Araki et al. (2021); Asharov et al. (2022); Hamada et al. (2014). A recent work Lu & Kate (2023) also described dishonest-majority protocols, but with supra-linear costs. 2.4 Matrix multiplication Let Z be the result of the multiplication between the n × m matrix X and the m × p matrix Y . Z is a n × p matrix defined as follows: Zij = m X k=1 xik · ykj, ∀i ∈{1 . . . n}, j ∈{1 . . . p} (1) This formula holds for all matrices. However, the implementation may differ depending on the type of matrices (e.g., vectors, diagonal matrices, or triangular matrices). Our work focuses on optimized algorithms for sparse matrices. Sparse algorithms avoid "useless" multiplications and only multiply the non-zero values. Indeed, if the matrix multiplication is implemented naively (i.e., triple nested loop), the algorithm would compute many scalar multiplications with a null result. Hence, sparse algorithms leverage the matrix sparsity to have a cost inverse proportional to the sparsity. Software libraries usually implement dense matrix multiplication via a triple nested loop, resulting in an O(nmp) cost. While this cost has long been considered the lower bound for dense matrix multiplication, several works described algorithms with a sub-cubic cost Coppersmith & Winograd (1987). Dumas et al. Dumas et al. (2019) described an MPC protocol to execute a matrix multiplication with sub-cubic cost. These algorithms have constant factors limiting their practical interest (even in plaintext), so software libraries favor algorithms with a cubic cost. 2.5 Dense matrix multiplication on secret-shared data The cubic-cost matrix multiplication (i.e., the naive implementation of Equation (1)) can easily be imported to MPC. However, recent works Chen et al. (2020); Mohassel & Rindal (2018); Mohassel & Zhang (2017); Mono & Güneysu (2023); Patra & Suresh (2020); Wagh et al. (2021) described more efficient protocols under 4 honest or dishonest majority assumptions. These works reduce the communication costs, but the computation cost remains asymptotically cubic. Moreover, the storage costs remain asymptotically equivalent to the plaintext algorithm, so they cannot support larger matrices than the plaintext dense algorithm. Dishonest majority Beaver's multiplication triplets are a popular building block to speed up secure multiplications via preprocessing steps Evans et al. (2018). Mohassel and Zhang Mohassel & Zhang (2017) generalized this concept to matrices: matrix multiplication triplets. These triplets make the communication costs proportional to the input and output size. For example, the multiplication of an n × m matrix with an m × p matrix requires O(nm + mp + np) communications (instead of O(nmp) with the naive algorithm). This approach reduces the asymptotic cost for matrix-matrix multiplications, but the communication costs of the vector-vector and matrix-vector multiplications remains asymptotically equal to the naive costs. Matrix multiplication triplets with malicious security were later described Chen et al. (2020); Mono & Güneysu (2023). Honest majority Using the honest majority assumption, several recent works Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Patra & Suresh (2020); Wagh et al. (2021) designed protocols with a communication cost proportional to the output size. Hence, a square matrix multiplication only requires O(n2) communications, and a vector multiplication O(1) communications. Before these recent protocols, De Hoogh Hoogh (2012) had already described a matrix multiplication with equivalent costs based on Shamir's secret sharing (SSS). The recent protocols Koti et al. (2021); Lu et al. (2023); Mohassel & Rindal (2018); Wagh et al. (2021) are more complex to understand than the SSS-based algorithm of Hoogh (2012) because they rely on share conversion protocols. Moreover, these protocols only support 3 parties, while the SSSbased solution supports an arbitrary number of parties. Nevertheless, the SSS-based solution only has honest-but-curious security, while Koti et al. (2021); Lu et al. (2023); Patra & Suresh (2020); Wagh et al. (2021) support malicious security. To understand the gain provided by the honest-majority assumption, let us detail the optimized vector multiplication under SSS Hoogh (2012). In SSS, each secret-shared value is represented using a polynomial of degree t. To multiply two secret-shared scalars in SSS, the parties locally multiply their local shares (obtaining a polynomial of degree 2t) and then execute a degree reduction protocol (via a "resharing" protocol). For dense vector multiplication, we can delay the degree reduction and add all local multiplication results together. The addition is possible because all local multiplication results are shares of equal polynomial degree (i.e., 2t). Then, a single degree reduction is done on the sum result. 3 Sparse data in PPML 3.1 Sparse data representation The term "sparse data" refers to data containing a large proportion of zero values. If stored using the default matrix format, the zeros occupy a significant memory space. When the data contains a sufficient number of zeros, sparse data representations can significantly reduce the storage space. Several sparse data representations exist Buluç et al. (2011), each with specific algorithms to perform operations on the data. Depending on the data representations, some operations are more efficient than others. In other words, there is no perfect sparse data representation. We use the tuple representation Buluç et al. (2011): each sparse vector is a list of non-zero tuples: t = (i, vi) with i the coordinate in the vector and vi the non-zero value. Let coord(t) = i denote the coordinate of the non-zero and val(t) = vi its value. On the one hand, we refer to algorithms supporting sparse data as "sparse algorithm", a common abuse of terminology. On the other hand, we use the term "dense algorithms" to refer to classic algorithms. 5 3.2 Larger datasets are sparser Optimizing dense algorithms, a dead end To process sparse data, one may consider to use dense matrix multiplication, with more memory. Unfortunately, it would require tremendous amount of memory. Secure dense algorithms have the same asymptotic memory footprint and hence run into the same memory issue. There is a limit to the scalability of dense algorithms. Even if extreme amounts of memory would be available, dense secure multiplications would still fail to scale because their communication cost is typically at least linear in the size of the dense input. A workaround to avoid this memory issue could be to partition the matrix in sub-matrices and perform dense multiplications on these sub-matrices. This approach is effective in classic linear algebra to handle large dense datasets and could (to some extent) reduce memory consumption. However, it would still require storing the product as a dense matrix; the memory benefit will not be significant. Moreover, it adds significant costs for repeated sparse-to-dense conversions. Finally, this partitioning approach would not reduce the communication costs of MPC protocols that would remain correlated to the full matrix size. To sum up, sparse algorithms are popular in plaintext because there is no other efficient solutions to process high-dimensional sparse datasets. In plaintext, dense matrix multiplications usually have a sparse equivalent (e.g., NumPy for dense and SciPy for sparse in Python). To cover all possible ML applications, MPC frameworks should similarly have sparse algorithms. The first motivation of sparse algorithms is to avoid the memory issues caused by dense algorithms. Furthermore, they often provide significant performance gains. Sparsity-size correlation Real-world sparse datasets show an interesting phenomenon: the sparsity is correlated to the matrix size. Indeed, larger sparse datasets have in average a larger sparsity. Thus, the larger datasets are, the more beneficial the sparse algorithms become. To understand this phenomenon, let us take the example of recommender systems on a marketplace (e.g., Amazon): the matrix quantifies the interaction of each consumer with each product. The dataset is sparse because each consumer only interacts with a small subset of all possible products. If the marketplace increases its number of products by a factor 1000, the consumer will not consume 1000 times more products. Even if the consumption increases a bit, existing datasets show that it does not follow the matrix growth; i.e., the sparsity increases. The real-world datasets that we arbitrarily selected from Li et al. (2017) satisfy the same trend: MovieLens 1M has 95% sparsity for 1.7K columns, Yahoo Movie has 99.7% sparsity for 12K columns, Bookcrossing has 99.99% sparsity for 340K columns, and flicker has 99.999% sparsity for 500K columns. Theoretical point of view The correlation between sparsity and matrix size is observable analytically. Sparse datasets can often be represented as graphs: e.g., a recommendation dataset is a bipartite graph linking n customers to m products. Also, note that graphs can be represented as sparse dataset via their adjacency matrix. The literature Aiello et al. (2001) showed that, in many application domains, graph data follows a "power law" distribution. In particular, if we take a random vertex v ∈V (G) of a graph G, then the probability distribution of the degree d(v) of v is given by: f(x) = P(d(v) = x) = x-γ Z(γ) (2) for some γ > 1 where Z(γ) = P∞ i=1 i-γ is a normalization constant. Typically, we have 2 ≤γ ≤3 Adamic et al. (2001), but application domains exist where other values of γ are adequate. Practical applications may deviate a bit from (2), especially for small degrees, as this equation is mainly intended to describe asymptotic behavior. In a dataset with n rows (i.e, vertices) where every vertex has degree at most m, one can use a corrected normalization constant Z(γ, m) = Pm i=1 i-γ. Remember that, in our case, the degree of a vertex corresponds 6 to the number of non-zeros in a row. The expected degree of a vertex is: Ev[d(v)] = m X i=1 i1-γ Z(γ, m) We can bound Z(γ, m) ≥1+ R ∞ 1 x-γdx = 1+1/1(1-γ) and Pm i=1 i1-γ 0, the probability that Lap(1/ε) ≤-λ is Z -λ -∞ exp(-tε)ε/2 dt = 1 2 exp(-λε) Set λ = -log(2δ)/ε, then the probability that Lap(1/ε) ≤-λ is δ. Hence, the probability that M+ ε,δ(y) = y + λ + Lap(1/ε) 0. In case one doesn't know the population distribution but has a sample from which one can estimate these statistics, a similar approach can be followed applying the appropriate formulas, e.g., the standard deviation of a sample of size s is p p(1 -p)/(s -1) with p the measured probability. Figure 6a plots the number of non-zeros in the template matrix for every row in a way similar to Figure 5 for the Bookcrossing dataset. As there is no differential privacy cost, we can take the size of a template block to be a single row. The several curves use 5, 10 and 20 standard deviations, i.e., going up to a high level of confidence that the real data will fit in the template matrix if the data is drawn according to the population distribution (or the same distributon as the data from which statistics were taken). In the plot, all curves coincide because much less margin is needed compared to the differential privacy based strategy. To see the difference better, the same plot is shown in Figure 6b in logarithmic scale. 8 Conclusion Our paper introduced algorithms based on oblivious sorting to multiply secret-shared sparse matrices. Our algorithms are compatible with a more generic MPC setup (i.e., outsourced computations) than existing works on secure sparse matrix multiplications. Our algorithms avoid memory issues present in dense secure multiplications by leveraging the data sparsity. Our experiments show communication cost reductions up to ×1000 compared to dense matrix multiplications. We also implemented two real-world ML applications highlighting that our sparse algorithms support applications impractical with existing protocols. Besides matrix multiplications, our work also introduces various (sparse) data analysis algorithms commonly used by ML practitioners. Finally, we proposed methods to minimize the public knowledge (mandatory in any secure sparse algorithm) and obtain it in a privacy-preserving manner. 20 Our implementation is publicly available and open-source, so our algorithms can be transferred into MPC frameworks: https://github.com/MarcT0K/Secure-sparse-multiplications Acknowledgments This work was supported by the Netherlands Organization for Scientific Research (NWO) in the context of the SHARE project [CS.011], the ANR project ANR-20-CE23-0013 PMR, and by the Horizon Europe project HE TRUMPET References Lada A. Adamic, Rajan M. Lukose, Amit R. Puniyani, and Bernardo A. Huberman. Search in power-law networks. Physical Review E, 64(4), 2001. Charu C. Aggarwal. Recommender Systems. Springer International Publishing, 2016. ISBN 978-3-319-296579 978-3-319-29659-3. William Aiello, Fan Chung, and Linyuan Lu. A Random Graph Model for Power Law Graphs. Experimental Mathematics, 10(1), 2001. ISSN 1058-6458. Publisher: Taylor & Francis _eprint: https://doi.org/10.1080/10586458.2001.10504428. Toshinori Araki, Jun Furukawa, Kazuma Ohara, Benny Pinkas, Hanan Rosemarin, and Hikaru Tsuchida. Secure Graph Analysis at Scale. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2021. ISBN 978-1-4503-8454-4. Gilad Asharov, Koki Hamada, Dai Ikarashi, Ryo Kikuchi, Ariel Nof, Benny Pinkas, Katsumi Takahashi, and Junichi Tomida. Efficient Secure Three-Party Sorting with Applications to Data Analysis and Heavy Hitters. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022. ISBN 978-1-4503-9450-5. Constance Beguier, Mathieu Andreux, and Eric W. Tramel. Efficient Sparse Secure Aggregation for Federated Learning, 2021. James Bell, Adrià Gascón, Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Mariana Raykova, and Phillipp Schoppmann. Distributed, Private, Sparse Histograms in the Two-Server Model. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022. ISBN 978-1-4503-9450-5. Aydın Buluç, John Gilbert, and Viral B. Shah. Implementing Sparse Matrices for Graph Algorithms. In Graph Algorithms in the Language of Linear Algebra. Society for Industrial and Applied Mathematics, 2011. ISBN 978-0-89871-990-1 978-0-89871-991-8. Chaochao Chen, Jun Zhou, Li Wang, Xibin Wu, Wenjing Fang, Jin Tan, Lei Wang, Alex X. Liu, Hao Wang, and Cheng Hong. When Homomorphic Encryption Marries Secret Sharing: Secure Large-Scale Sparse Logistic Regression and Applications in Risk Control. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 2652-2662, 2021. ISBN 978-1-4503-8332-5. Hao Chen, Miran Kim, Ilya Razenshteyn, Dragos Rotaru, Yongsoo Song, and Sameer Wagh. Maliciously Secure Matrix Multiplication with Applications to Private Deep Learning. In Advances in Cryptology - ASIACRYPT 2020. Springer International Publishing, 2020. ISBN 978-3-030-64840-4. 978-3-030-64840-4_2. D. Coppersmith and S. Winograd. Matrix multiplication via arithmetic progressions. In Proceedings of the nineteenth annual ACM symposium on Theory of computing, 1987. ISBN 978-0-89791-221-1. Jinming Cui, Chaochao Chen, Lingjuan Lyu, Carl Yang, and Wang Li. Exploiting Data Sparsity in Secure Cross-Platform Social Recommendation. Advances in Neural Information Processing Systems, 34, 2021. 21 Iain S. Duff, Michael A. Heroux, and Roldan Pozo. An overview of the sparse basic linear algebra subprograms: The new standard from the BLAS technical forum. ACM Transactions on Mathematical Software, 28(2):239-267, 2002. ISSN 0098-3500. I.S. Duff. A survey of sparse matrix research. Proceedings of the IEEE, 65(4):500-535, 1977. ISSN 1558-2256. Jean-Guillaume Dumas, Pascal Lafourcade, Julio Lopez Fenner, David Lucas, Jean-Baptiste Orfila, Clément Pernet, and Maxime Puys. Secure Multiparty Matrix Multiplication Based on Strassen-Winograd Algorithm. In Advances in Information and Computer Security, 2019. ISBN 978-3-030-26834-3. Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the 42nd ACM symposium on Theory of computing - STOC '10, pp. 715. ACM Press, 2010. ISBN 978-1-4503-0050-6. Maximilian Egger, Marvin Xhemrishi, Antonia Wachter-Zeh, and Rawad Bitar. Sparse and Private Distributed Matrix Multiplication with Straggler Tolerance, 2023. David Evans, Vladimir Kolesnikov, and Mike Rosulek. A Pragmatic Introduction to Secure Multi-Party Computation. Foundations and Trends® in Privacy and Security, 2(2-3):70-246, 2018. ISSN 2474-1558, 2474-1566. Oded Goldreich. Foundations of Cryptography: Volume 2, Basic Applications. Cambridge University Press, 2009. ISBN 978-1-107-39397-4. José María Gómez Hidalgo, Guillermo Cajigas Bringas, Enrique Puertas Sánz, and Francisco Carrero García. Content based SMS spam filtering. In Proceedings of the 2006 ACM symposium on Document engineering, 2006. ISBN 978-1-59593-515-1. Koki Hamada, Dai Ikarashi, Koji Chida, and Katsumi Takahashi. Oblivious Radix Sort: An Efficient Sorting Algorithm for Practical Secure Multi-party Computation, 2014. F. Maxwell Harper and Joseph A. Konstan. The MovieLens Datasets: History and Context. ACM Transactions on Interactive Intelligent Systems, 5(4):1-19, 2016. ISSN 2160-6455, 2160-6463. Aditya Hegde, Helen Möllering, Thomas Schneider, and Hossein Yalame. SoK: Efficient Privacy-preserving Clustering. Proceedings on Privacy Enhancing Technologies, 2021(4):225-248, 2021. ISSN 2299-0984. de Hoogh, S.J.A. Design of large scale applications of secure multiparty computation : secure linear programming. PhD Thesis, Technische Universiteit Eindhoven, 2012. ISBN: 9789038632032. Mahimna Kelkar, Mariana Raykova, and Karn Seth. Secure Poisson Regression. In 31st USENIX Security Symposium (USENIX Security 22), 2022. Marcel Keller. MP-SPDZ: A Versatile Framework for Multi-Party Computation. In Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, pp. 1575-1590, 2020. ISBN 978-1-4503-7089-9. Nishat Koti, Mahak Pancholi, Arpita Patra, and Ajith Suresh. SWIFT: Super-fast and Robust PrivacyPreserving Machine Learning. pp. 2651-2668, 2021. ISBN 978-1-939133-24-3. Sven Laur, Jan Willemson, and Bingsheng Zhang. Round-Efficient Oblivious Database Manipulation. In Information Security. Springer, 2011. ISBN 978-3-642-24861-0. Charles E. Leiserson. Performance Engineering of Software Systems. Lecture 1: Introduction and Matrix Multiplication Leiserson, 2018. 22 Xiang Li, Charles X. Ling, and Huaimin Wang. The Convergence Behavior of Naive Bayes on Large Sparse Datasets. ACM Transactions on Knowledge Discovery from Data, 11(1):1-24, 2017. ISSN 1556-4681, 1556-472X. Donghang Lu and Aniket Kate. RPM: Robust Anonymity at Scale. Proceedings on Privacy Enhancing Technologies, 2023(2), 2023. ISSN 2299-0984. Tianpei Lu, Bingsheng Zhang, Lichun Li, and Kui Ren. Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries, 2023. Payman Mohassel and Peter Rindal. ABY3: A Mixed Protocol Framework for Machine Learning. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 3552, 2018. ISBN 978-1-4503-5693-0. Payman Mohassel and Yupeng Zhang. SecureML: A System for Scalable Privacy-Preserving Machine Learning. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 19-38, 2017. Johannes Mono and Tim Güneysu. Implementing and Optimizing Matrix Triples with Homomorphic Encryption. In Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, 2023. ISBN 9798400700989. Kartik Nayak, Xiao Shaun Wang, Stratis Ioannidis, Udi Weinsberg, Nina Taft, and Elaine Shi. GraphSC: Parallel Secure Computation Made Easy. In 2015 IEEE Symposium on Security and Privacy, pp. 377-394, 2015. Arpita Patra and Ajith Suresh. BLAZE: Blazing Fast Privacy-Preserving Machine Learning. In Proceedings 2020 Network and Distributed System Security Symposium. Internet Society, 2020. ISBN 978-1-89156261-7. Berry Schoenmakers. MPyC-Python package for secure multiparty computation. In Workshop on the Theory and Practice of MPC, 2018. Phillipp Schoppmann, Adrià Gascón, Mariana Raykova, and Benny Pinkas. Make Some ROOM for the Zeros: Data Sparsity in Secure Distributed Machine Learning. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 1335-1350, 2019. ISBN 978-1-4503-6747-9. Adi Shamir. How to share a secret. Communications of the ACM, 22(11):612-613, 1979. ISSN 0001-0782. Shuang Sun and Eleftheria Makri. SoK: Multiparty Computation in the Preprocessing Model, 2025. URL https://eprint.iacr.org/2025/060. Publication info: Preprint. Tomas Toft. Primitives and applications for multi-party computation. PhD thesis, 2007. Publisher: Citeseer. Sameer Wagh, Divya Gupta, and Nishanth Chandran. SecureNN: 3-Party Secure Computation for Neural Network Training. Proc. Priv. Enhancing Technol., 2019(3):26-49, 2019. Sameer Wagh, Shruti Tople, Fabrice Benhamouda, Eyal Kushilevitz, Prateek Mittal, and Tal Rabin. Falcon: Honest-Majority Maliciously Secure Framework for Private Deep Learning. Proceedings on Privacy Enhancing Technologies, 2021. ISSN 2299-0984. URL https://petsymposium.org/popets/2021/ popets-2021-0011.php. Marvin Xhemrishi, Rawad Bitar, and Antonia Wachter-Zeh. Distributed Matrix-Vector Multiplication with Sparsity and Privacy Guarantees. In 2022 IEEE International Symposium on Information Theory (ISIT), 2022. Cai-Nicolas Ziegler, Sean M. McNee, Joseph A. Konstan, and Georg Lausen. Improving recommendation lists through topic diversification. In Proceedings of the 14th international conference on World Wide Web, 2005. ISBN 978-1-59593-046-0. 23 A Security proof sketch This appendix sketches the security proof of our algorithms. We provide proof sketches in the honest-butcurious model using the real-ideal paradigm Evans et al. (2018). We can define the ideal functionality for the sparse vector multiplication Fη VectMult as follows: • Input: [[xa]] = {([[i1,a]], [[v1,a]]) , . . . ([[iη,a]], [[vη,a]])} and [[xb]] (defined similarly) • Output: a secret-shared scalar [[s]] such that: s = P j,k∈{1...η} vj,a · vk,b · δ(ij,a, ik,b) (with δ the Kroenecker delta). This functionality has a property η. This property represents the protocol public knowledge in our functionality. As motivated in Section 3.3, public knowledge is necessary to have efficient secure sparse multiplications. We sketch a proof showing that Alg. 1 securely realizes the ideal functionality Fη VectMult. Our proof relies essentially on the composition theorem for the honest-but-curious model (Theorem 7.3.3 of Goldreich (2009)). Certain subprotocols we use have been proven to be secure and correct in the honest-but-curious model. By using the composition theorem, we can rely on their security and only prove the security of our overall protocol in which these subprotocols are embedded. The correctness of Alg. 1 can be verified by an easy arithmetic exercise. Security of the sub-protocols The first statement in Alg. 1 is a concatenation that is performed locally with no communication and hence does not need to be simulated. The second statement is an oblivious sort. The proof depends on the sub-protocol chosen for instantiation. This choice is highly influenced by the threat model, so we consider the Batcher's sort for dishonest majority and the oblivious radix sort for honest majority (i.e., the two threat models studied in our paper). On the one hand, the Batcher's sort only requires comparisons and arithmetic operations. It is then secure because the arithmetic and comparison operations on secret-shared values are trivially secure in the honest-but-curious model Hoogh (2012). On the other hand, the oblivious radix sort is proven secure in Hamada et al. (2014). After the sorting statement, we have a loop. This loop statement reveals no information because it uses η, a property of Fη VectMult (i.e., a public value). Finally, the loop contains a conditional statement. As highlighted in Section 4, we use the notation [[obliv. if]] to maximize the readability. This conditional statement would be implemented as follows: [[s]] ←[[s]] + ([[coord(zi)]] = [[coord(zi+1)]]) × [[val(zi)]] × [[val(zi+1)]], with [[coord(zi)]] = [[coord(zi+1)]] outputting [[1]] or [[0]]. The arithmetic operations are trivially secure in the honest-but-curious model. For the secure equality, we can use the secure protocol proposed by Toft Toft (2007). Hence, this oblivious conditional statement is secure because it relies on secure comparisons and arithmetic operations. Note that such oblivious conditional statements are recurrent in MPC protocols (e.g., in private heavy hitters Asharov et al. (2022)). Security of the overall protocol Since all the sub-protocols are individually secure, according to the Composition Theorem, Alg. 1 preserves the input secrecy in the honest-but-curious model. Matrix-vector and matrix-matrix We do not develop the proof sketch for our sparse matrix-vector (Alg. 2) and matrix-matrix (Alg. 5) multiplications, but the proof is very similar. As our sparse vector multiplication, these two algorithms use oblivious sorting, comparisons, arithmetic operations, and oblivious conditional structures. For all these operations, the security would be proven as sketched for the vector multiplication. However, they also use two additional sub-protocols: AggEqualCoord and PlaceholderRemoval. The AggEqualCoord relies on an oblivious conditional structure and arithmetic operations. The proof is then trivial, because we can express the conditional structure using a combination of comparisons and arithmetic operators. The PlaceholderRemoval uses a "shuffle-and-reveal" trick: shuffle a list of secret-shared values and reveal which of them are placeholders. Other security papers Hamada et al. (2014) used similar "shuffle-and-reveal" tricks in protocols with proven security. In our algorithms, revealing the number of placeholders indirectly 24 reveals the number of non-zero elements in the output matrix. However, this information is public by definition because it is contained in the output. Extension to malicious security Our security proofs hold in the honest-but-curious model. Malicious security would require a verifiable secret-sharing scheme and dedicated proofs for the security in the malicious model. We remind that our main sub-protocols have known maliciously secure variant (especially oblivious sorting Asharov et al. (2022); Hamada et al. (2014) and oblivious shuffling Asharov et al. (2022); Laur et al. (2011)) that would be the basis of a malicious security proof. We leave this extension for future work. B Adapting existing secure sparse multiplication to the outsourced setting Our work introduces new protocols to perform secure sparse multiplications in an outsourced setting. While we introduce the first outsourced secure sparse multiplications, existing works Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) had already described secure protocols, but in a non-outsourced setting. Instead of introducing new protocols, one may wonder whether the existing protocols can be adapted to the outsourced setting. This appendix details why such an adaptation of Chen et al. (2021); Cui et al. (2021); Schoppmann et al. (2019) is not straightforward and can result in inefficient constructions. Adapting Chen et al. (2021); Cui et al. (2021) Cui et al. and Chen et al. described sparse-dense matrix multiplications where one party P1 knows the plaintext sparse matrix X and the other party P2 knows the plaintext dense matrix Y . In both protocols, the dense matrix Y is homomorphically encrypted and the first party P1 performs a dense multiplication between their plaintext sparse matrix X and the encrypted dense matrix Y it receives from P2. In Chen et al. Chen et al. (2021), the encrypted matrix Y is entirely transmitted to the first party. In Cui et al. Cui et al. (2021), the parties use a private information retrieval to avoid transmitting the whole matrix. Despite their differences, these two protocols cannot be adapted to the outsourced setting for the same reasons. First, the homomorphic encryption step is complex in an outsourced setting because there is no evident key dealer (contrary to the two-data-owner setup). Second, these protocols are possible because P1 knows the position of all non-zero values in X. Indeed, P1 performs a multiplication between the plaintext sparse matrix and the encrypted dense matrix. This operation is possible because homomorphically encrypted values can be trivially multiplied by plaintext constants. If these non-zero positions are hidden, this simple multiplication is no longer possible. There is then no straightforward solution to adapt the approach of Chen et al. (2021); Cui et al. (2021) to the outsourced setting. Third, these protocols only support sparse-dense multiplications: the matrix Y must be dense, or at least be converted into a dense representation before being homomorphically encrypted. Otherwise, P1 may learn which values of Y are zero. As a result, the memory needed to store Y could blow up with several orders of magnitude, and so could the communication and computation costs. Adapting Schoppmann et al. Schoppmann et al. (2019) They adopted a different approach than Chen et al. (2021); Cui et al. (2021): they introduce a low-level protocol called "ROOM" (Read-Only Oblivious Map) upon which they built their matrix multiplications. The ROOM protocol performs a secure look-up in a secret key-value store. This protocol guarantees that neither the key-value store, nor the look-up query are revealed. They introduce three instanciations of the ROOM protocol: BasicROOM, CircuitROOM, and PolyROOM They presented all their functionality in a two-data-owner setup with the query issued by the first party and the key-value store owned by the second. However, we can easily write these functionalities for an outsourced setup. Out of the three ROOM instanciations, only CircuitROOM is easily adaptabled to the outsourced setting because it relies on oblivious sorting and oblivious shuffling. Since our protocols also rely on sorting and shuffling, this observation may lead the reader to two questions: (1) What are the differences between the CircuitROOM-based sparse multiplications and our protocols? (2) What are the implications of these differences on efficiency? 25 They use the ROOM protocol to extract a dense sub-matrix from a sparse matrix. In contrast, we use sorting and shuffling for the overall sparse matrix multiplication. Besides this key algorithmic difference, efficient ROOM-based protocols in the outsourced setting is not straightforward. The following paragraphs study each matrix multiplication types to identify the algorithmic and efficiency differences. To multiply two sparse vectors x and y, Schoppmann et al. Schoppmann et al. (2019) first use CircuitROOM to extract a dense vector of size nnz(x) from y (i.e., corresponding to the non-zero values of x). This operation costs O((nnz(x) + nnz(y)) log(nnz(x) + nnz(y))) communications. Their protocol then performs a dense vector multiplication between the non-zero values of x and the extracted dense vector from y. This overall communication cost is O((nnz(x)+nnz(y)) log(nnz(x)+nnz(y))), as in our sparse vector multiplication (Alg. 1). However, their protocol has larger constant factors: while our protocol only requires an oblivious sort, CircuitROOM performs both an oblivious sort and oblivious shuffle. Their protocol even requires a dense vector multiplication after CircuitROOM. To multiply a sparse matrix X with a sparse vector y, Schoppmann et al. Schoppmann et al. (2019) call the ROOM protocol once for each data owner. In their protocol, each data owner shares a dense sub-matrix containing its non-zero columns (i.e., a column with at least one non-zero value). Hence, their protocol call the matrix-vector multiplication (and the underlying ROOM protocol) for each matrix of non-zero columns; in other words, for each data owner. On the contrary, our protocol can group the non-zeros from all data owners and process them all at the same time. In their paper, this design choice is not an issue because there is only two data owners. However, in an outsourced setup, each matrix row can be owned by a different data owner. Hence, the matrix multiplication would perform one vector multiplication per matrix row. Their communication cost is then O((nnz(X) + n · nnz(y)) log(nnz(X) + nnz(y))), which is worse than our cost, by a linear factor n. Indeed, Section 4.2 presents a non-trivial matrix-vector multiplication avoiding this linear factor (i.e., Alg. 2). Since ML datasets can contain thousands or even millions of rows (e.g., 340K in the Bookcrossing dataset), the CircuitROOMbased matrix-vector multiplication would be inefficient compared to our Alg. 2 (and even compared to the dense multiplication). To compute a correlation matrix X⊤X, Schoppmann et al. Schoppmann et al. (2019) require, in addition to ROOM, another sub-protocol called "Scatter". Unfortunately, their protocol described in the non-outsourced setting cannot be adapted easily to an outsourced setting because it relies on oblivious transfer, a purely two-party protocol with a party having access to the plaintext secret. Hence, we cannot adapt their sparse matrix-matrix multiplication to the outsourced setting. To sum up, the adaptation of the CircuitROOM-based sparse multiplication is either inefficient (for vectorvector and matrix-vector multiplications) or not straightforward (for matrix-matrix multiplications). C Avoiding the linear number of rounds This appendix presents optimized algorithms to avoid the linear round complexity in Alg. 2 and 5. In Alg. 2, this round complexity has two sources: the aggregation step (Alg. 3) and the multiplication loop: lines 7 to 14. In Alg. 5, the round complexity is only caused by the aggregation step. To avoid this round complexity, we take inspiration from the recursive secure maximum implemented in many MPC libraries (including MPyC). If implemented naively (i.e., scanning the list elements one by one), the round complexity of the maximum function would be linear. MPC libraries rely on a recursive algorithm to reach a logarithmic complexity. Alg. 6 details this algorithm. The intuition behind it is to represent the list using a tree. Then, each sub-tree recursively computes its maximum and "propagate" this value toward the root. We reuse the same recursive structure to build our optimized algorithms with logarithmic round complexity. Alg. 7 presents the optimized aggregation algorithm. Alg. 10 presents the optimized multiplication loop. Due to their recursive structure, these algorithms are more convoluted than their naive alternatives, but these optimizations are necessary to respect the round complexities presented in the main text for the sparse 26 Algorithm 6 Secure maximum via a recursive function Input: [[V ]], an unsorted list of values. 1: function RecursiveMax([[V ]]) 2: n ←length([[V]]) 3: if n = 1 4: return [[V1]] 5: Initialize [[Roots]], a list of ⌈n/2⌉secret-shared values 6: for k ←1 to ⌈n/2⌉do 7: [[obliv. if]] [[V2k]] > [[V2k+1]] 8: [[Rootsk]] ←[[V2k]] 9: else 10: [[Rootsk]] ←[[V2k+1]] 11: return RecursiveMax(Roots) Algorithm 7 Optimized aggregation of tuples with equal coordinates Input: [[Z]], a sorted list of n secret-shared tuples. Public knowledge: None 1: function OptimizedAggEqualCoord([[Z]]) 2: // Pre-processing: O(1) rounds 3: Initialize list [[Children]] with ⌈n/4⌉empty elements 4: for k ←1 to ⌈n/4⌉do 5: AggIfEqual(Z4k, Z4k+1) 6: AggIfEqual(Z4k+1, Z4k+2) 7: AggIfEqual(Z4k+2, Z4k+3) 8: [[Childrenk]] ←([[Z4k]], [[Z4k+1]], [[Z4k+2]], [[Z4k+3]]) 9: // Online phase: O(log n) rounds 10: [[Z]] ←RecProp([[Children]]) 11: // Offline post-processing 12: for k ←1 to ⌈n/4⌉do 13: ([[Z4k]], [[Z4k+1]], [[Z4k+2]], [[Z4k+3]]) ←[[Childrenk]] 14: for k ←1 to n do 15: [[obliv. if]] [[val(Zk)]] ̸= [[0]] 16: [[val(Zk)]] ←[[⊥]] 17: return [[Z]] matrix-vector and matrix-matrix multiplications. These two algorithms are similar and reuse the same intuitions; only a few minor operations differ. Hence, this appendix only discusses Alg. 7 in detail. Our algorithm builds a binary tree from the non-zero tuples. Each leaf contains four non-zero tuples, and the internal nodes represent some sub-lists of non-zero tuples. Each internal node stores four non-zero tuples: "minimum tuple" of the left (child) sub-tree, "maximum tuple" of the left sub-tree, "minimum tuple" of the right sub-tree, and "maximum tuple" of the right sub-tree. We compare the tuples based on their coordinate, so the maximum tuple is the tuple with the highest coordinates. To understand the intuition behind Alg. 7, it is necessary to understand the role of the recursion. Let us assume we have two sub-lists on which the aggregation is already completed, and let us understand the necessary steps to obtain a whole list with the aggregation completed. To know whether a sub-list must be updated, we only need to compare the "maximum" tuple of the left sub-list to the "minimum" tuple 27 Algorithm 8 Sub-functions used in Alg. 7 1: procedure AggIfEqual([[tup1]], [[tup2]]) 2: [[obliv. if]] [[coord(tup1)]] = [[coord(tup2)]] 3: [[val(tup2)]] ←[[val(tup2)]] + [[val(tup1)]] 4: [[val(tup1)]] ←[[0]] 5: function UpProp([[Child1]], [[Child2]]) 6: [[Root]] ←(MinLeft([[Child1]]), MaxRight([[Child1]]), MinLeft([[Child2]]), MaxRight([[Child2]])) 7: AggIfEqual(MinLeft([[Root]]), MaxLeft([[Root]])) 8: AggIfEqual(MaxLeft([[Root]]), MinRight([[Root]])) 9: AggIfEqual(MinRight([[Root]]), MaxRight([[Root]])) 10: return [[Root]] 11: procedure DownProp([[Child]], [[NewMin]], [[NewMax]]) 12: MinLeft([[Child]]) ←[[NewMin]] 13: AggIfEqual(MinLeft([[Root]]), MaxLeft([[Root]])) 14: AggIfEqual(MaxLeft([[Root]]), MinRight([[Root]])) 15: MaxRight([[Child]]) ←[[NewMax]] Algorithm 9 Generic recursive propagation used in Alg. 7 and 10 Input: [[Children]], a list of 4-element tuples. UpProp and DownProp implementations vary depending on the algorithm using this recursive approach. 1: function RecProp([[Children]]) 2: n ←length([[Children]]) 3: if n = 1 4: return [[Children]] 5: for k ←1 to ⌈n/2⌉do 6: [[Rootsk]] ←UpProp([[Children2k]], [[Children2k+1]]) 7: [[Roots]] ←RecProp([[Roots]]) 8: for k ←1 to ⌈n/2⌉do 9: DownProp([[Children2k]], MinLeft([[Rootsk]]), MaxLeft([[Rootsk]])) 10: DownProp([[Children2k+1]], MinRight([[Rootsk]]), MaxRight([[Rootsk]])) 11: return [[Children]] of the right sub-list. This claim holds because our aggregation algorithm takes as input a sorted list of non-zero tuples. If these two tuples share the same coordinate, the "maximum" tuple of the left list must be aggregated to the "minimum" tuple in the right list. Otherwise, no operation is necessary. Contrary to the secure maximum algorithm, Alg. 7 passes through the tree twice: from leaves to root, then from root to leaves. The first phase identifies which consecutive sub-trees share minimum-maximum with equal coordinates. The second phase propagates the aggregated values to the leaves. Alg. 9 describes this recursive propagation. The implementation of the "upward" and "downward" propagations of Alg. 7 are given in Alg. 8. As each node stores four non-zero tuples, Alg. 7 implicitly assumes that the input list is a multiple of 4 in length. If not, we can pad the list with placeholders without impacting the rest of the algorithm. Since we rely on a binary-tree approach, the complexity of this algorithm is O(m log m) with m the number of leaves. This algorithm reduces the round complexity to O(log m) because we can parallelize the operations at each tree level. 28 Algorithm 10 Optimized multiplication loop for Alg. 2 Input: [[Z]], a sorted list of n secret-shared tuples. Public knowledge: None 1: function OptimizedMultLoop([[Z]]) 2: // Pre-processing O(1) rounds 3: V1 ←[[val(Z1)]] 4: for k ←2 to n do 5: [[obliv. if]] [[coord(Zk)]] = [[coord(Zk-1)]] 6: Vk ←[[val(Zk)]] 7: else 8: Vk ←[[⊥]] 9: Initialize list [[Children]] with ⌈n/4⌉empty elements 10: for k ←1 to ⌈n/4⌉do 11: ReplaceIfNull(V4k, V4k+1) 12: ReplaceIfNull(V4k+1, V4k+2) 13: ReplaceIfNull(V4k+2, V4k+3) 14: [[Childrenk]] ←([[V4k]], [[V4k+1]], [[V4k+2]], [[V4k+3]]) 15: // Online phase: O(log n) rounds 16: [[Children]] ←RecProp([[Children]]) 17: // Post-processing: O(1) rounds 18: for k ←1 to ⌈n/4⌉do 19: ([[V4k]], [[V4k+1]], [[V4k+2]], [[V4k+3]]) ←[[Childrenk]] 20: for k ←1 to n do 21: [[obliv. if]] [[Vk]] ̸= [[⊥]] 22: [[val(Zk)]] ←[[val(Zk)]] × [[Vk]] 23: return [[Z]] 24: procedure ReplaceIfNull([[vold]], [[vnew]]) 25: [[obliv. if]] [[vold]] = [[⊥]] 26: [[vold]] ←[[vnew]] 27: function UpProp([[Child1]], [[Child2]]) 28: [[Root]] ←(MinLeft([[Child1]]), MaxRight([[Child1]]), MinLeft([[Child2]]), MaxRight([[Child2]])) 29: ReplaceIfNull(MinLeft([[Root]]), MaxLeft([[Root]])) 30: ReplaceIfNull(MaxLeft([[Root]]), MinRight([[Root]])) 31: ReplaceIfNull(MinRight([[Root]]), MaxRight([[Root]])) 32: return [[Root]] 33: procedure DownProp([[Child]], [[NewMin]], [[NewMax]]) 34: ReplaceIfNull(MinLeft([[Child]]), [[NewMin]]) 35: ReplaceIfNull(MaxLeft([[Child]]), [[NewMin]]) 36: ReplaceIfNull(MinRight([[Child]]), [[NewMin]]) 37: MaxRight([[Child]]) ←[[NewMax]] The optimized multiplication loop (Alg. 10) reuses the same intuition as the optimized aggregation. The main difference is that we do not want to aggregate some values but replicate some values. We want to build a vector containing one multiplicative value for each non-zero tuple. The algorithm starts by creating a vector of placeholders with a few multiplicative values. Then, we use our recursive propagation to replace each placeholder with the closest (non-placeholder) value on the left. Once this vector is built, we multiply our non-zero elements by their corresponding multiplicative value. This element-wise vector multiplication requires one communication round. 29 Contrary to Alg. 7, the recursive part of Alg. 10 work on a vector of scalar instead of working on a vector of tuples. This difference slightly simplifies the value propagation but requires more pre- and post-processing than Alg. 7. Hence, Alg. 10 also uses the recursive Alg. 9 but with different "upward" and "downward" propagations described in Alg. 10. As Alg. 7, this optimized multiplication loop has a logarithmic round complexity. 30
2510.14895
Fast and fault-tolerant logical measurements: Auxiliary hypergraphs and transversal surgery Alexander Cowtan1, Zhiyang He2, Dominic J. Williamson3,4, and Theodore J. Yoder5 1Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK 2Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA 3School of Physics, The University of Sydney, NSW 2006, Australia 4IBM Quantum, IBM Almaden Research Center, San Jose, CA 95120, USA 5IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA Quantum code surgery is a promising technique to perform fault-tolerant computation on quantum low-density parity-check codes. Recent developments have significantly reduced the space overhead of surgery. However, generic surgery operations still require O(d) rounds of repeated syndrome extraction to be made fault-tolerant. In this work, we focus on reducing the time overhead of surgery. We first present a general set of conditions that ensure fault-tolerant surgery operations can be performed with constant time overhead. This fast surgery necessarily makes use of an auxiliary complex described by a hyper- graph rather than a graph. We then introduce a concrete scheme called block reading, which performs transversal surgery across multiple code blocks. We further investigate surgery operations with intermediate time overhead, be- tween O(1) and O(d), which apply to quantum locally testable codes. Finally, we establish a circuit equivalence between homomorphic measurement and hy- pergraph surgery and derive bounds on the time overhead of generic logical measurement schemes. Overall, our results demonstrate that reducing the time cost of code surgery is not reliant on the quantum memory being single-shot. Instead it is chiefly the connectivity between a code and its measurement an- cilla system that determines the achievable measurement time overhead. 1 Introduction Quantum error correction [1–5] is an essential building block in any fault-tolerant quan- tum computer that is capable of running large-scale applications. The surface code and other topological codes [6–10] have been extensively studied due to their rich mathemat- ical structure and strong practical performance. Despite their many desirable properties, topological codes incur an appreciable space overhead [11] when performing fault-tolerant quantum computation (FTQC). This motivates the study of quantum low-density parity check (QLDPC) codes, which can achieve better parameters, notably constant encoding rate, in both the asymptotic and near-term regimes [12]. QLDPC codes possess the prac- tically desirable LDPC property – each check (or detector [13]) involves a constant number Alexander Cowtan: akcowtan@gmail.com 1 arXiv:2510.14895v1 [quant-ph] 16 Oct 2025 of qubits and each qubit is in a constant number of checks. Following exciting advance- ments in both theoretical code design [14–18] and experimental qubit connectivity [19– 21], QLDPC codes are now competitive candidates to serve as low-overhead fault-tolerant quantum memories [22, 23]. To realize FTQC in low space overhead with QLDPC codes, we need to design schemes for performing logical computation on their encoded qubits. This is a flourishing area of research with many recent works. Popular techniques include constant-depth unitary circuits [23–36], resource state preparation and teleportation [37–46], and code switch- ing [47–69]. Here, we focus on the technique of QLDPC code surgery, which has seen rapid recent developments [52–67] due to its general applicability. On a high level, code surgery is a technique for performing Pauli measurement on logical qubits encoded in a stabiliser code by introducing an ancilla system, consisting of new qubits and stabiliser generators, to deform the code in such a way that the chosen logical operators are de- composed into products of (newly introduced) stabilisers. The deformation procedure can be viewed as gauge fixing or weight reduction [70–74]. Code surgery can be made fault- tolerant by repeatedly measuring all new stabiliser generators for a sufficient number of rounds (typically taken to be the code distance d) to prevent timelike errors. After ob- taining the logical measurement outcome, the ancillary qubits are individually measured out to return to the original code. Unfortunately, measuring all stabiliser generators for d rounds incurs a substantial time overhead when compared to protocols that use constant-depth gates. For some computa- tions this cost can be substantially ameliorated by using Pauli Based Computation [75] to reduce the number of gates that must be implemented [62, 76]. However, in the worst case a large time overhead remains. While a growing number of code families are known to be single-shot decodable [77–83], only high-dimensional topological codes are known to admit single-shot surgery [60]. Overall, prior to this work we lacked a rigorous understanding of the time overhead required for the fault-tolerance of surgery operations, except that d rounds is typically sufficient. In this work, we develop a theory of fast quantum code surgery, formulating the con- ditions needed for surgery operations to be fault-tolerant with only a constant number of rounds of error correction. Notably, the base memory code does not need to be single- shot decodable. As our primary example, we present a protocol called block reading which enables transversal surgery across identical CSS code blocks in constant space and time overhead. We discuss our main results in Section 1.2, and discuss how our work relates to prior work in the literature, including independent works which appeared shortly before ours [66–69], in Section 1.3. 1.1 Setting In this work, we aim to preserve the LDPC property of the codes and detectors throughout, and highlight where this is not guaranteed. However, we ignore any further specific connec- tivity requirements of potential physical implementations. We assume a phenomenological Pauli and measurement noise model in our notion of fault tolerance. We focus primarily on asymptotics rather than specific constant factors. We do not concern ourselves with decoders for block reading, which depend greatly on the initial CSS codeblocks. We leave this important question to future work. 2 1.2 Summary of main results To discuss our main results, we briefly review the usual setup of generalized code surgery, and refer readers to Section 2 for a more thorough introduction. One starts with, in general, a collection of memory code blocks with encoded logical qubits, on which the aim is to perform fault-tolerant logical Pauli measurements. One constructs an ancilla system of qubits and stabilisers, which can be specified by a hypergraph [56, 58]. One then connects this ancilla system to the memory blocks, in the process modifying the stabilisers of the ancilla system and the memory blocks according to the connections. The modified set of stabilisers is an abelian group, which specifies a stabiliser code (the deformed code). Certain logical operators from the memory blocks become products of stabilisers in this deformed code, which means they are measured when we measure these new stabilisers. By measuring the deformed code stabilisers for a sufficient number of rounds (typically, in prior work, d rounds, where d is the distance of the memory codes), one can deduce the logical measurement results fault-tolerantly. Finally, one measures out the ancilla system to return to the original code. The operators measured depend largely on the ancilla system (equivalently the hypergraph) and its connections to the memory. This formulation is general enough to capture all prior works on code surgery, see Section 3.2 of [62] for a brief review. In this work, we augment the above formulation by associating a 4-term chain complex with the hypergraph H = (V, E). Specifically, let C be a basis of cycles of H, where a cycle is a collections of hyperedges which include every vertex in V an even number of times. Let W be a set of O(1)-sized components of H, where a component is a collection of vertices which intersect every hyperedge on an even number of vertices.1 Then H• = FW 2 FV 2 FE 2 FC 2 δ2 δ1 δ0 is a 4-term cochain complex, where the coboundary maps δi are specified by the inclusion relations of W, V, E, C. Note that when H is a simple connected graph (i.e., every hyper- edge is a regular edge), the only component is the set of all vertices V . Typically when treating cell complexes the above coboundary maps would instead be boundary maps: we flip this convention to suit our conventions in the main body. In prior works on surgery the focus was on the 3-term complex without W, which fully specifies the ancilla system. By including W, we can now consider the 1-cosystolic distance of H•, which is d• 1 = min{|u| : u ∈ker(δ2)\im(δ1)}. Our main result states that t ≥d hypergraph surgery operations, all satisfying certain distance conditions, can be performed sequentially with each operation taking a constant time, such that the overall protocol performs t distinct logical operations fault-tolerantly in O(t) time. Theorem 1.1 (Fast hypergraph surgery, informal statement of Theorem 5.3). Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks, each with distance at least d. Let t ≥d and H• 1, · · · , H• t be sparse chain complexes from hypergraphs which each define a Z-type surgery operation on the memory blocks, such that • All H• i have 1-cosystolic distances at least d, and 1Note here that W is a set of our choice and is often not a basis of all components of H. 3 • The compacted code, which is a CSS code defined by the t surgery operations, has distance at least d. Then the t surgery operations can be performed sequentially with O(1) time each. The phenomenological fault-distance of the protocol is at least d and the LDPC property is preserved throughout. We elaborate on the definition of the compacted code as well as other subtle conditions in the main text, but note here that there exist simpler sufficient conditions to require of each surgery operation individually or of the original code that together ensure the distance of the compacted code is large. Importantly, we consider the fault-tolerance of performing t ≥d surgery operations in time O(t), instead of performing one surgery operation in time O(1). This is because, strictly speaking, performing one surgery operation in time O(1) is not fault-tolerant. We did not assume that the memory code blocks are single-shot decodable, and the deformed codes are not necessarily single-shot decodable. Therefore, the syndrome information from O(1) rounds of noisy measurements is insufficient for fault-tolerance. In contrast, performing t operations in O(t) time generates enough syndrome informa- tion to ensure the whole protocol has phenomenological fault-distance at least d. In this work, we refer to this as amortised constant time. This theorem formulates general conditions on surgery operations which enable constant- time-overhead logical gates. Hypergraph surgeries that measure stabilisers and logical operators of a code are forced to incorporate its local structure, up to changing the choice of generators. In the remaining parts of this work, we study a concrete fast surgery op- eration which we call block reading. In its simplest form, given two identical code blocks of a CSS QLDPC code Q, block reading acts transversally on all pairs of data and check qubits of the two blocks and measure all pairs of logical qubits in the Z ⊗Z (or similarly X ⊗X) basis. It does so by taking the auxiliary complex (or hypergraph) to be the code Q itself, which by assumption has 1-cosystolic distance at least d. This example can be easily generalized to act on more than two code blocks and to measure more sophisticated Pauli product operators, including the checks of any CSS QLDPC code, transversally. We term this protocol full block reading. Theorem 1.2 (Full block reading, informal statement of Theorem 3.12). Let Q, Q′, Q′′... be a set of c identical Jn, k, dK CSS LDPC codeblocks. Then a set of sparse logical Z Pauli operators across Q, Q′, Q′′... can be measured in parallel using O(cn) additional qubits and amortised O(1) time overhead. Every measurement acts transversally on all k logical qubits in a given codeblock. The phenomenological fault-distance of the protocol is at least d, and the LDPC property is preserved throughout. Two important remarks are in order. First and foremost, parallels can be drawn between block reading and Steane-style error correction, or more generally logical mea- surements performed via ancillary code blocks and transversal CNOT gates. In terms of logical action, full block reading has the same effect as measurements via transversal gates, except with a code-deformation procedure in contrast to a unitary circuit. Notably, however, the latter protocol usually assumes the ancillary code block is fault-tolerantly prepared, which in general requires O(d) time overhead. The more subtle result of algo- rithmic fault-tolerance [84] showed that this need not be the case, and an ancillary code block prepared in O(1) time can still be used for measurements via transversal CNOT, such that the whole computation is fault-tolerant even though this individual operation is not. Our result on full block reading is similar to algorithmic fault-tolerance, in that 4 we prove d surgery operations can be performed fault-tolerantly in O(d) time despite the fact that a single operation performed in constant time is not necessarily fault-tolerant. Notably we introduce new surgery gadgets and develop a very different proof to Ref. [84]. This establishes a curious connection between transversal surgery and transversal gates. Secondly, it is instructive to consider the simplest case of full block reading, where there is a single memory block and we measure all logical qubits in the Z basis. This case mirrors Steane error correction, and showcases why a single fast hypergraph surgery operation is not necessarily fault-tolerant: if it were, we would have constant space-time overhead state preparation of arbitrary CSS QLDPC codes (see Remark 3.10)! Of course, full block reading with constant syndrome rounds applied to a non-single-shot code generates a frame error akin to that of Steane error correction with an ancillary code block (under-)prepared with constant syndrome rounds. This frame error is only eliminated in the subsequent O(d) time steps. Therefore, this is one of the reasons that performing either scheme in constant time is not strictly speaking fault-tolerant. With full block reading as a motivating example, we study more flexible fast hyper- graph surgery operations by taking subcodes of the memory code and thickening them appropriately. We call this partial block reading, which acts transversally on all physical and logical qubits contained in the subcodes. Theorem 1.3 (Partial block reading, informal statement of Theorem 4.5 and Corol- lary 7.13). Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d. Let A be a subcode of each of the codeblocks Q, Q′, Q′′... with length nA and distance at least d. Then a suitable hypergraph H can be constructed using O(nAd) additional qubits. When A has soundness ρA this bound can be reduced to O( nA ρA ), assuming the existence of a sparse cycle basis. The precise notions of subcode, soundness and sparse cycle basis are defined in the main text. As usual in the study of code surgery, the factor O(d) space overhead is a worst case upper bound. When the subcode has suitable expansion properties, this space overhead can be significantly reduced, all the way to O(1) in some cases. We expect the space overhead of partial block reading to be highly optimizable in practice, a problem we leave to future work. Beyond constant time overhead surgery, we further study the case where the conditions of Theorem 1.1 are not satisfied, yet surgery can still be performed in lower than O(d) time. We show that when the memory codes have soundness properties, then partial block reading can be performed with less than O(d) syndrome rounds (in amortisation) when the subcode A has distance lower than d. Theorem 1.4 (Intermediate time hypergraph surgery, informal statement of Theorem 6.5). Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and Let t ≥d and H• 1, · · · , H• t be sparse chain complexes from hypergraphs which each define a surgery operation on the memory blocks, such that • Each auxiliary complex H• 1, · · · , H• t has 1-cosystolic distance di ≥ 1 αi d, • The deformed code each has distance at least d, and • The original codes have constant soundness. Then the t surgery operations can be performed sequentially with ⌈αi⌉time each. The phenomenological fault-distance of the protocol is at least d and the LDPC property is preserved throughout. 5 Our hypergraph surgery constructions rely on homomorphic chain maps between the auxiliary complex and the memory. This fact draws strong parallel to the technique of homomorphic measurement [42], which is a logical measurement scheme that employs transversal gates and generalizes Steane error correction. We formalize this connection by proving a general circuit equivalence between surgery and homomorphic measurement. Theorem 1.5 (Informal statement of results in Section 9). Let f define a homomorphic measurement protocol. Then the circuit for f can be rewritten via ZX calculus to an equivalent hypergraph surgery protocol and vice versa. Nevertheless, there are several subtleties, and this equivalence does not generally pre- serve the fault distance of the measurement protocols. At a high level, the data (check) qubits in an ancillary block for homomorphic measurement correspond to new check (data) qubits in the corresponding surgery protocol respectively. As a consequence, timelike er- rors in surgery protocols can be resolved by measuring stabilisers for multiple rounds. However, the corresponding spacelike errors on a homomorphic measurement ancilla code are more difficult to resolve in a similar manner. This is why surgery protocols tend to be more flexible in addressing logical qubits: one can create a highly targeted ancilla system with low measurement fault distance, and then boost that measurement fault distance by measuring for multiple rounds. Finally, we complement our constructions with a structural upper bound on the fault- distance of any logical measurement scheme, including surgery, homomorphic measurement and other similar protocols. The idea for this bound is based on quantifying the smallest undetectable fault that flips the logical representatives being measured while avoiding any ancilla system, see Proposition 9.3. This logical fault corresponds to a spacetime stabiliser in the fault-tolerant memory on the code without any logical measurement, that becomes a nontrivial logical fault when the code is measured. Here, we present a simplified statement of this general bound. Theorem 1.6. Any logical measurement protocol performed in o(d) rounds on a quan- tum LDPC code by an auxiliary system requires connections to more than one logical representative to maintain phenomenological fault distance Ω(d), unless the correct logical measurement outcome is known in advance. In other words, reducing the time cost of code surgery is not reliant on the quantum memory being single-shot, nor does fast logical measurement require transversal gates between codeblocks. Instead it is chiefly the degree of connectivity between a code and its measurement ancilla system that determines the achievable measurement time overhead. 1.3 Related work Code surgery is an active topic of research, and our work has many connections to the literature. Notably, Ref. [67] appeared on arXiv shortly before this paper. Our works were developed independently, but share important results in common. We briefly discuss the areas of overlap and difference. Most importantly, Theorem 5 of Ref. [67] states that a surgery operation defined by a sparse chain complex and a sparse chain homomorphism can be performed in constant time overhead fault-tolerantly if the auxiliary complex has suitable expansion properties. This is similar to our Theorem 1.1 (formally Theorem 5.3), with a few points of distinction: 1. We study the fault-tolerance of performing t ≥d suitable surgery operations in O(t) time, and claim that the time overhead of each logical measurement is con- stant in amortisation. In contrast, Ref. [67] study the fault-tolerance of performing 6 one suitable surgery operation in constant time. As we argued, this operation is not fault-tolerant by itself, and Ref. [67] remedies this by assuming that the syn- drome information on the memory is noiseless before and after the constant time surgery operation (see their Section IIIC). This is a strict assumption in the context of single-shot operations – for generic quantum codes, producing reliable syndrome information requires O(d) rounds of syndrome measurement. In this work, we con- sider a protocol where we start and end with O(d) rounds of syndrome measurement on the memory, and perform t ≥d hypergraph surgery operations in between. We demonstrate that each hypergraph surgery operation generates a round of syndrome information on the memory blocks, and accumulating these rounds guarantees fault- tolerance of the full protocol. 2. Ref. [67] asks for the auxiliary complex to have suitable expansion properties, which is a sufficient but not necessary condition for the deformed code to have high distance. From this perspective our requirements are less stringent. Notably, full block reading for generic codes does not satisfy the expansion conditions in Ref. [67]. On the other hand, for partial block reading we prove that a similar expansion condition can satisfy the second condition of Theorem 1.1. Both Ref. [67] and our work show that the desired expansion can be achieved by taking a tensor product with a repetition code. The idea of using expansion to improve deformed code distance, and the idea of boosting expansion with a repetition code are both standard in the study of code surgery and weight reduction [52, 55, 56, 58, 72]. Our works are largely distinct and complimentary otherwise. As our primary examples we study full and partial block reading (Theorems 1.2 and 1.3), which perform transversal measurement of all logical operators contained in a (sub)code. Ref. [67] provides construc- tive, addressable surgery operations, capable of measuring one or more logical operators, using techniques including brute-force branching and auxiliary graph surgery [59, 61]. Ref. [67] considered concrete examples on abelian multi-cycle codes and demonstrated promising numerical simulation results. We further study the case of hypergraph surgery with intermediate time overhead (Theorem 1.4), where the conditions of Theorem 1.1 are not satisfied yet soundness of the memory codes enables fault-tolerant surgery in less than O(d) time.2 Our results on circuit-equivalence between surgery and homomorphic mea- surement, Theorem 1.5, and on the connectivity requirements for fast logical measurement, Theorem 1.6, have no counterparts in Ref. [67]. There are several other recent independent works which share some overlap with our results. Ref. [65] also considered the use of hypergraphs for code surgery. They explored randomized constructions demonstrating promising practical performance. While they did not explore the time overhead of these surgery operations, they stated a similar condition to boost the distance of the deformed code with expansion properties of the hypergraph (see their Lemma 2). In parallel, Ref. [68] and [69] developed single-shot dimension jumping techniques3 for three-dimensional homological product codes. Relatedly, Ref. [66] developed single-shot code switching techniques for more general homological product codes. In its general form, full block reading can be seen as measuring, for O(1) rounds, the stabilisers of a 2Ref. [67] has a brief remark with a similar observation. 3Dimension jump is a special form of code switching, where one switch from (multiple copies of) lower dimensional codes to (fewer copies) of higher dimensional codes and vice versa. 7 homological product code: that of the memory code and the pattern matrix.4 Our results show that this protocol can be combined with further hypergraph surgery protocols to be made fault-tolerant. In comparison, Refs. [66, 68, 69] all proved stronger fault-tolerance guarantees in their specific settings. They also developed techniques to switch from the resultant homological product code to other codes, which enable a range of interesting logical computation gadgets. We remark that the aforementioned works [65–69] all have other contributions not mentioned in our brief discussion above. Connecting to prior literature, while conventional quantum LDPC code surgery takes as input the support of a logical operator to be measured [52–56, 58], hypergraph surgery must incorporate the local structure of the code on the region being measured. We focus on the simplest choice of hypergraph corresponding to the subcode itself, namely block reading. This is similar to the CSS code surgery described in Ref. [64], as well as the finely- devised sticking in Ref. [59]. Our work is also related to Ref. [60], in which single-shot surgery with 3D and 4D toric codes was studied using the formalism of fault complexes. Our proofs in Appendix A use this formalism, and extend it to include deformations on spacetime volumes. We are aware of upcoming work in which the time overhead of lattice surgery with surface codes is reduced by using the complementary gap to estimate confidence in a surgery protocol and stop early if the confidence is high [85]. Our work is grounded in a different setting, concerned with phenomenological fault-distance and general CSS LDPC codes, and focuses on reducing the time overhead asymptotically. 1.4 Organization This work is lengthy and our results involve many subtleties. For the sake of the reader, we have deferred the majority of technical proofs to the appendices and aimed to build up complicated results from simpler ones. In Section 2, we review important background and definitions. In Section 3, we introduce full block reading as the motivating example, and demonstrate fault-tolerance guarantees with amortised constant time overhead surgery operations. In Section 4, we study partial block reading. In Section 5, we generalize this to hypergraph surgery. For readers familiar with the techniques of code surgery, we recommend skimming Sections 3, 4 and 5 on an initial reading. In Section 6, we study hypergraph surgery operations with intermediate time over- heads, between O(1) and O(d). This type of surgery is enabled by quantum locally testable codes with constant soundness. In Section 7, we define a novel generalisation of the notions of both soundness and relative expansion from Ref. [57] which we call modular expansion. We show that hypergraphs with modular expansion can be used to perform hypergraph surgery in lower space overhead. In Section 8 we demonstrate our block read- ing schemes on a variety of examples. We show that our formalism captures fast surgery with topological codes such as 4D toric codes, and we describe block reading with 2D topological codes and bivariate bicycle codes [86]. Finally, in Section 9 we prove the cir- cuit equivalence between homomorphic measurement and surgery, and prove an upper bound on the phenomenological fault distance for generic logical measurement protocol on quantum LDPC codes. 4More generally the pattern complex, if we use full block reading for both X and Z basis transversal logical measurements. 8 2 Preliminaries 2.1 Stabiliser codes and Tanner graphs Definition 2.1. Let Pn be the Pauli group over n qubits. A qubit stabiliser code Q is specified by an Abelian subgroup −1 /∈S ⊂Pn such that the codespace H is the mutual +1 eigenspace of S, i.e. U |ψ⟩= |ψ⟩ ∀U ∈S, |ψ⟩∈H. We say S is the stabiliser group for Q. A qubit stabiliser code Q can be specified by a stabiliser check matrix H = [HX|HZ] ∈Fr×2n 2 , such that H 0 1 1 0 ! H⊺= 0, where a row [u|v] corresponds to a generator of the stabiliser group, and therefore check on Q, iu·vX(u)Z(v), for u, v ∈Fn 2. Definition 2.2. A qubit CSS code Q is a qubit stabiliser code where the generators of S can be split into two sets SX and SZ. SX contains Pauli products with terms drawn from {X, I} and SZ terms drawn from {Z, I}. Thus there is a stabiliser check matrix H for Q such that H = HX 0 0 HZ ! , HXH⊺ Z = 0. Definition 2.3. A CSS low-density parity check (LDPC) code family is a family of CSS codes such that the column and row weights of HX and HZ are bounded above by some constant σ. Logical operators in a stabiliser code are Pauli products that commute with all the checks of the code. As the stabiliser group is Abelian, all stabilisers are equivalent to the trivial logical operator. The code distance d of Q is the minimum Pauli weight of all the nontrivial logical operator representatives. Definition 2.4. A Tanner graph is a bipartite graph with a vertex for each qubit and each check. A qubit is connected to each check in which it participates with an edge labeled [1|0], [0|1], or [1|1] depending on whether the check acts on the qubit as X, Z, or Y . In our convention, data qubits are black circles and checks are boxes. If a check only measures Z or X terms then we omit the edge label, and instead label the box with a Z or X. Therefore for a CSS code we have no edge labels, and all boxes contain either Z or X labels. In this case, the condition that all stabilisers commute is the same as saying that the number of paths between every Z and X check is even. Depicting large Tanner graphs directly becomes unwieldy, so we use scalable notation5 to make them more compact. Qubits are gathered into disjoint named sets Q0, Q1, Q2, ..., and the same for checks C0, C1, C2, ... . An edge between Qi and Cj is then labeled with the stabiliser check matrix of Q restricted to Qi and lifted to Cj, so we have [CX|CZ] ∈F|Cj|×2|Qi| 2 . 5Terminology borrowed from Ref. [87]. 9 [HX|HZ] (a) A generic stabiliser code, X Z HX HZ (b) A CSS code, X Z X L ∂1 ∂T 1 ∂1 I I I I (c) A hypergraph product code. Figure 1: Examples of scalable Tanner graphs. If that check matrix is all-zeros then omit the edge entirely. If the checks are all of the same type then we omit the all-zeros half of the check matrix; for example if Cj contains only Z checks then we label the check box Z and the edge [CZ] ∈F|Cj|×|Qi| 2 . For example, in Figure 1 we show some basic scalable Tanner graphs with qubit sets drawn as circles and check sets drawn as boxes. We have (a) a generic stabiliser code, (b) a CSS code, and (c) the hypergraph product [14] of a classical code C, with bits L and check matrix ∂1, by a repetition code R with blocksize 2 and check matrix  1 1  . We use scalable Tanner graphs to describe our procedures. 2.2 CSS codes and chain complexes The well-known CSS code-homology correspondence relates quantum CSS codes to chain complexes, allowing for the study of various properties using the techniques of homological algebra [17, 29, 53, 58, 64, 88, 89]. As we employ this formalism to study block reading, we give a brief summary of this correspondence here. A based chain complex over F2, C• = · · · Cl Cl−1 Cl−2 · · · ∂l+1 ∂l ∂l−1 ∂l−2 is a Z-graded vector space L i∈Z Ci over F2 equipped with a basis at each degree i, and boundary operators ∂i : Ci →Ci−1 such that ∂i−1 ◦∂i = 0, ∀i ∈Z. Equivalently, im∂i+1 ⊆ker ∂i. The homology space at degree i is Hi(C•) = ker ∂i/im∂i+1. A CSS code must satisfy the property that HXH⊺ Z = 0, and therefore we can view an Jn, k, dK CSS code as a chain complex with 3 nonzero terms: C• = C2 C1 C0 ∂2 ∂1 where explicitly at each degree we have C• = FmZ 2 Fn 2 FmX 2 H⊺ Z HX An undetectable Z operator must have support on a vector v ∈ker HX, and so every such operator is of the form Z(v). Z stabilisers are elements of imH⊺ Z. Thus equivalence classes 10 of Z logical operators are elements of H1(C•) = ker HX/imH⊺ Z, the homology space of C• at degree 1. Hence k = dim H1(C•), as the number of logical qubits must be equal to the number of independent equivalence classes of Z logical operators. The Z-distance dZ of a CSS code is the weight of the smallest nontrivial Z operator, so dZ = min v∈ker HX\imH⊺ Z |v|. Dual to the notion of chain complex is a cochain complex, C• = · · · Cl Cl+1 Cl+2 · · · δl−1 δl δl+1 δl+2 which is defined similarly to a chain complex but with boundary operators transposed to become coboundary operators δi, over F2. The cohomology space is Hi(C•) = ker δi/imδi−1, and enjoys the isomorphism Hi(C•) ∼= Hi(C•). As HXH⊺ Z = 0, we also have HZH⊺ X = 0, hence we can equally view a CSS code as a cochain complex with 3 terms: C• = FmX 2 Fn 2 FmZ 2 H⊺ X HZ This view prioritises X operators, rather than Z operators. We then have k = dim H1(C•) and dX = min u∈ker HZ\imH⊺ X |u|. Furthermore, d = min(dX, dZ) as the error types are detected independently. A chain map, or homomorphism between chain complexes, is a linear map: f• : C• →D• which has a component fi : Ci →Di at each degree i, such that the following diagram commutes, · · · Cl+1 Cl Cl−1 · · · · · · Dl+1 Dl Dl−1 · · · ∂C l+1 fl+1 ∂C l fl fl−1 ∂D l+1 ∂D l (1) i.e. ∂D i+1fi+1 = fi∂C i+1 ∀i ∈Z. A dual definition applies to a cochain map f•, which can be found by taking the transpose of each map in the diagram. Definition 2.5 (Subcode). Let a subcode of the CSS code defined by a chain complex C• be a code defined by the chain complex A• with an injective chain map f• : A• →C•. We stipulate further that f• must take each data qubit in A• to a single data qubit in C•, and the same for any Z or X checks in A•.6 The additional stipulation is made to maintain control over weights, as this is essential for our analysis; if C• is LDPC, so is A•. Each matrix in the chain map f• is therefore a permutation matrix, up to some all-zero rows. We can equally take the subcode in a dual manner, viewing the original CSS code via its cochain complex C• and stipulating that there is a similar cochain map g• : A• →C•. In this work, it is always made clear from context whether the subcode is taken in the chain (Z) or cochain (X) sense. 6In Ref. [53] this property of f• is called basis-preservation. 11 2.2.1 Tensor products The tensor product of (co)chain complexes is inherited from the tensor product of Z-graded vector spaces. Namely, Definition 2.6. [90, Sec. 2.7] Let C•, D• be chain complexes over F2. Define (C ⊗D)• with components (C ⊗D)l = M i+j=l Ci ⊗Dj where the latter tensor product is the normal tensor product of vector spaces. Differentials between components are given for ∂(C ⊗D) l by matrices ∂C i ⊗I I ⊗∂D j ! for a given i, j, then stacked horizontally for each term i, j | i + j = l, and vertically for each i′, j′ | i′ + j′ = l −1. One can check that ∂(C ⊗D) l−1 ◦∂(C ⊗D) l = 0 (mod 2), as desired. Lemma 2.7 (K¨unneth formula). [90, Thm 3.6.3] Hl(C ⊗D) ∼= M i+j=l Hi(C) ⊗Hj(D) That is, the homology spaces factor through the tensor product conveniently. Both the tensor product and K¨unneth formula can be applied dually to cochain complexes. By virtue of the fact that we are working with finite-dimensional vector spaces with explicit bases, we can strengthen the K¨unneth formula: Lemma 2.8. [91, Prop. 1.13] Hl(C ⊗D) = M i+j=l Hi(C) ⊗Hj(D)7 Proof. We first observe that any basis element of Hi(C) ⊗Hj(D) for i + j = l, which is an equivalence class of elements in Ci ⊗Dj, is an element of Hl(C ⊗D), and that linearly independent elements remain independent. Then by a counting argument, Hl(C ⊗D) = L i+j=l Hi(C) ⊗Hj(D). As CSS codes can be viewed as (co)chain complexes, the tensor product can be used to construct new codes [14, 91]. 2.2.2 Mapping cones Given a chain map f• : A• →C• over F2, the mapping cone of f• is defined to be the chain complex with spaces cone(f)i = Ci ⊕Ai−1 and maps ∂cone(f) i : cone(f)i →cone(f)i−1 given by Ai−1 Ai−2 Ci Ci−1 fi−1 ∂C i ∂A i−1 ⊕ ⊕ 7Strictly speaking this is still an isomorphism, but the isomorphism is canonical. 12 ∂cone(f) i = ∂C i fi−1 0 ∂A i−1 ! . The homology of mapping cones can be studied using exact sequences and the Snake Lemma [90, Lem. 1.3.2], see App. A. Using chain maps and mapping cones, one can construct new quantum codes. In particular, Z logical measurement of a CSS code C• can be seen as constructing a particular auxiliary system A• and then using the chain map f• to build the deformed code cone(f•) [58]; X logical measurements are then mapping cones from cochain maps. This perspective originates from the quantum weight reduction of Hastings [71, 72]. 2.3 Metachecks and soundness Typically, quantum codes require O(d) rounds of error correction for every logical timestep in order to maintain fault-tolerance. This is because in addition to errors occurring on data qubits, measurement errors can occur on checks. As checks are not typically protected against measurement errors, one must measure repeatedly in order to diagnose such errors successfully. This can be seen as constructing a spacetime code whose timelike component is a repetition code [60]. There are, however, quantum codes that do have protection against measurement errors [77–82]. Although there are multiple forms of such protection, here we focus on the approach that uses metachecks [78]. A metacheck is a set of stabilisers that has a product of outcomes known to be +1 in advance, even in the presence of faults on data qubits. Hence a metacheck is a detector [92] on the spacetime code, which detects solely measurement errors. It is possible to construct codes with sufficient metacheck structures, such that the minimum number of faulty measurements in a single error-correction round required to cause a logical fault is the same as the code distance d. Such codes can be single-shot codes, which in principle require only a single error-correction round against adversarial noise [78]. Single-shot LDPC codes with metachecks require redundancy in their stabiliser generators and so can be expensive to construct, as having enough redundant checks to yield sufficient detectors for protection against up to d measurement errors can entail a larger space overhead. Given a scalable Tanner graph for a CSS code, we can incorporate metachecks with new sets of vertices and edges: X Z HX HZ MZ MX where blue (red) diamonds represent metachecks on Z (X) checks respectively, and MZ (MX) is the F2-linear map taking a syndrome outcome to its corresponding set of metachecks. Equivalently, metachecks can be viewed as additional terms in the (co)chain complex of a code: C• = C3 C2 C1 C0 C−1 ∂3 ∂2 ∂1 ∂0 C• = C3 C2 C1 C0 C−1 δ2 δ1 δ0 δ−1 where C3 = C3 represents metachecks on Z checks and C−1 = C−1 represents metachecks on X checks. 13 In scalable Tanner graph notation, maps extend outwards from data qubits, to checks and then metachecks. In (co)chain complexes, maps begin at the lowest or highest degree and point towards the higher or lower degree respectively. Definition 2.9. [78] Let Q be a CSS code. The Z-single shot distance is dss Z = min v∈ker MZ\imHZ |v| = min v∈ker δ2\imδ1 |v| and is the weight of the smallest Z-measurement fault that is undetectable by Z- metachecks and is not equivalent to an X fault on data qubits. Similarly, dss X = min u∈ker MX\imHX |u| = min u∈ker ∂0\im∂1 |u| The single-shot distance of Q is therefore dss = min(dss X, dss Z ), as the Z and X checks are independent. If ker ∂0\im∂1 = ∅then by convention we say that dss X = ∞, and similar for dss Z . Like logical operators, measurement faults can be partitioned into equivalence classes, in H2(C) and H0(C) for Z measurement errors and X measurement errors respectively. Even if the single-shot distance is at least the distance of the code, i.e. dss ≥d, it is still possible for a low-weight measurement fault to prevent fault-tolerant single-shot error correction. This is because there could be a low-weight measurement fault which is equivalent to a fault on data qubits, but that fault is very large. Hence the measurement fault may be interpreted by the decoder as a large fault on data qubits, and a large, erroneous recovery operator applied. As a consequence, the next logical cycle would require only a small fault on data qubits to cause a logical error. This motivates the study of LDPC codes with good soundness [78, 93, 94], where every valid small set of syndrome outcomes has a sufficiently small set of data qubit errors which produces them; thus a minimum distance decoder will not misinterpret a small measurement fault as a large data qubit fault. There are several different definitions of soundness for both classical and quantum codes. For classical codes, we adopt a simple combinatorial definition of local-testability from Ref. [95, Def. 11]. Definition 2.10 (Local testability). A binary linear code C is (ω, ρ)-locally testable if it has a parity-check matrix H : Fn 2 →Fm 2 with rows of weight at most ω such that for any vector v ∈Fn 2, 1 m|Hv| ≥ρ nd(v, C) where d(v, C) = minx∈C(|v + x|) and | · | is the Hamming weight. The values ω and ρ are the locality and soundness of the code respectively. For quantum stabiliser codes, we follow Ref. [96]. For an n-qubit quantum code Q, let Qt = Span{(A1 ⊗· · · ⊗An) |ψ⟩: |ψ⟩∈Q, #{i ∈[n], Ai ̸= I} ≤t} be the t-fattening of Q, which is the set of states with distance at most t from the codespace. Then define DQ = X t≥1 t(ΠQt −ΠQt−1). where ΠQt is the projector onto the space Qt. 14 Definition 2.11. A qubit stabiliser code Q with generators S1, · · · , Sm is locally testable with locality ω and soundness ρ if all generators have weight at most ω and 1 m m X i=1 1 2(I −Si) ⪰ρ nDQ where A ⪰B means the operator A −B is positive semidefinite. Lemma 2.12. [96, Fact 17] A quantum CSS code CSS(HX, HZ) with parity-check matri- ces HX, HZ is quantum locally testable with soundness ρ if ker HX, ker HZ are classical locally testable codes with soundness ρ. Conversely, if CSS(HX, HZ) has soundness ρ then ker HX, ker HZ are classical locally testable codes with soundness at least ρ/2. Remark 2.13. Note that Def. 2.11 is subtly different from the version of soundness given in Ref. [78], and is closer to the definition of confinement in Ref. [82]. Intuitively, the version in Ref. [78] stipulates that every valid small set of syndrome outcomes has a sufficiently small set of data qubit errors which produces them. Def. 2.11 and confinement instead stipulate that every small set of data qubit errors produces a sufficiently large set of syndrome outcomes. See Ref. [82] for more discussion about the distinction between these definitions. Def. 2.11 aligns more closely with the definition of soundness for classical codes, so is preferable for our purposes. It is well-known that a quantum code with both sufficiently high single-shot distance and sufficient soundness, of a certain type, can be decoded in a single-shot manner against adversarial noise using a minimum-weight decoder [78], and similar results apply too to confinement [82]. We remark that the development of quantum locally testable codes with good parameters is an ongoing line of research [97–100]. In summary, a metachecked code being single-shot is reliant on the decoder and noise model, in addition to other code properties than just the single-shot distance. For phe- nomenological adversarial noise, a minimum weight decoder is sufficient when the codes have sufficient properties related to soundness [78, 82]. In this work we do not require the quantum memory to be single-shot, or even have high single-shot distance, but we make use of the general framework of metachecks, soundness and their relation to homology. 2.4 Quantum code surgery Generalised lattice surgery, or quantum code surgery, is a method of performing compu- tation by code deformation, see Section 3.2 of Ref. [62] for a brief review. In this work, we presume that the codes are all stabiliser codes, although one can perform surgery with non-Abelian codes [101–105]. We also focus on logical measurements by the introduction of an auxiliary system, while surgery can be performed in other ways to implement a larger class of operations [53, 64, 106, 107]. Those exceptions aside, all prior works in quantum LDPC code surgery are known to be unified by the construction of a measurement hypergraph [56, 58, 62]. In brief, for a given initial quantum memory Q and logical measurement to be performed, we construct a hypergraph H(V, E), consisting of vertices V, edges E and cycles.8 Edges are associated to new data qubits, and there are two types of new stabiliser checks: vertex checks, whose measurement outcomes are used to infer the logical measurement outcome, and cycle checks, which are present to gauge-fix and thus prevent any new logical qubits 8A cycle in a hypergraph is a collection of edges that contains every vertex an even number of times. 15 from appearing in the deformed code.9 The measurement hypergraph is connected to the initial code Q. Specifically, a subset of vertices (new checks) are connected to qubits in Q in order to perform the logical measurement, while a subset of edges (new data qubits) are connected to checks in Q such that the deformed code commutes as a stabiliser code. Definition 2.14. Let H be a hypergraph with edge-vertex incidence matrix G : F2E →F2V. A cycle basis is a full-rank matrix N : F2F →F2E such that imN = ker G. F is identified as a set of faces, or cycles, in H. We say that the cycle basis is sparse if N is a sparse matrix. Given a cycle basis of a measurement hypergraph, each cycle in the basis can be associated to a new check, which fixes the gauge of the logical which would otherwise have support on the cycle. If the cycle basis is sparse then this can be done while maintaining the LDPC property of the deformed code. There are several methods of constructing such measurement hypergraphs [52–59, 61, 62], which are described in more detail in Ref. [62, Sec. 3]. The most asymptotically efficient method currently known to measure a generic set of Pauli product operators on disjoint logical qubits is by brute-force branching and gauging logical measurements with universal adapters [61]. For an arbitrary collection of logically disjoint Pauli prod- uct measurements supported on t logical qubits, this scheme uses O tω(log t + log3 ω)  ancilla qubits, where ω ≥d is the maximum weight of the single logical Pauli representa- tives involved in the measurements, and d is the code distance. This is all done in time O(d) independent of t, while preserving the phenomenological fault distance and LDPC property. This is a vast improvement to the original work on surgery with quantum LDPC codes [52], which required O(dω) ancilla qubits to measure a single weight ω logical opera- tor representative10 and could not generally perform parallel measurements without losing the LDPC property. However, despite these advances, all contemporary methods require O(d) rounds of error correction per logical cycle for generic codes in order to maintain fault-tolerance11. It also means that the study of LDPC code surgery does not, prior to this work, cover the well-known cases of single-shot surgery with 3D and 4D topological codes [60]. The primary reason for this stark contrast is that, in LDPC code surgery, the measure- ment outcomes of new checks on vertices in the measurement hypergraph H are used to infer logical measurement outcomes; and in prior works on LDPC code surgery these new checks did not have any protection against measurement errors. When measuring a single logical operator using gauging logical measurements or homological measurements [56, 58], for example, the product of vertex check outcomes is interpreted as the logical measure- ment outcome. Hence, were they measured for only one round, just a single measurement error would flip the product, and constitute a logical error. As such, the checks must be measured for O(d) rounds in order to infer the correct logical measurement outcome. In this work, we focus on reducing the number of rounds required by introducing metachecks which protect against measurement errors in H. All of our protocols yield CSS-type surgery. 9These new logical qubits could have low weight logical operators, and lower the dressed weights of other operators acting on logical qubits in the code [52], which would lower the fault-distance. 10The representative was also required to be irreducible, meaning that it did not contain any other logical representatives, a problem which was fully resolved in Refs [56, 58]. 11This assumes a single representative is measured; Ref. [56] described a method to decrease the time overhead to O( d m) via parallel surgery measurement of 2m −1 equivalent logical operators. 16 Definition 2.15. A CSS-type code surgery is a surgery protocol which begins with a CSS code memory, then performs code deformation such that the code remains CSS throughout. CSS-type code surgery is a limited subset of more general code surgery, as one cannot measure e.g. Y terms or products of the form X ⊗Z while preserving the CSS property. Thus all product measurements of CSS-type each contain terms drawn from either {Z, I} or {X, I}. We remark that CSS surgery is easily extended to measure more general logical Pauli operators on codes that are self-dual. Lemma 2.16. Given any CSS code Q, measurement of Z logicals by a measurement hypergraph results in a deformed code with X-distance at least as high as the X-distance of Q, so long as all cycles are gauge-fixed or otherwise stabilisers. This is an immediate corollary of Ref. [58, Thm. 1]. A dual result follows for the Z-distance when performing measurement of X logicals. 3 Full block reading To understand and reduce the time overhead of surgery operations, we first consider full block reading, which is a surgery operation that acts transversally on all the physical qubits, and thereby logical qubits, of identical code blocks. The fact that one can perform block reading with surface codes is a widely known folklore result, as these are in a sense ‘transversal’ measurements between patches. Our results extend substantially beyond this simple case. We start with the simple example of two blocks. 3.1 Full block reading on two blocks Given two identical Jn, k, dK quantum memories, which are CSS LDPC codeblocks labelled Q and Q′, X Z HX HZ Q X Z HX HZ Q′ we can initialise a hypergraph H(V, E) between the two X Z HX HZ Q X Z HX HZ Q′ Z I I I I G E V (2) 17 such that new data qubits are edges, new Z checks are vertices and H has the edge-vertex incidence matrix G ∈FV×E 2 . H has 1-to-1 maps from edges to X checks in Q and to X checks in Q′, and similar for vertices. Thus in order for the checks of the deformed code to commute, we have G = H⊺ X. Let D be this deformed code. Lemma 3.1. Measuring the checks of D performs a Z ⊗Z measurement on each pair of logical qubits (qi, q′ i) for i ∈[k], where qi is a logical qubit in Q and q′ i the same logical qubit in Q′. Proof. Consider an arbitrary Z logical operator representative acting on qi, and call it Zi = Z(vi). Let Z′ i be the identical copy of Zi in Q′. By definition, vi ∈ker(HX). Multiplying Zi by the bijective Z checks P in V does not leave any support on E, as Pvi ∈ker(G⊺) = ker(HX), but does clean Zi into Q′. The operator which it is cleaned to is precisely Z′ i, hence the two logicals are now stabiliser equivalent and so we have performed a measurement of Zi ⊗Z′ i. Now because ker(G⊺) = ker(HX), the only operators which can be cleaned are of this form and so we have performed a Z⊗Z measurement on each pair of logical qubits between the two codes. Specifically, each set of Z checks in V in bijection with a logical Z operator in Q and Q′ measures the product of those operators. The logical measurement outcome is inferred as the product of the syndrome outcomes in that set. Lemma 3.2. D has no additional logical qubits. Proof. New logical qubits can only appear when performing measurement hypergraph surgery if there are cycles in the hypergraph [56, 58], which do not have corresponding faces, i.e. checks, to gauge-fix. In general if ker(G) ̸= 0 then there can exist a new X logical operator with support on u ∈ker(G). This hypergraph may have cycles, but all logical operators on these cycles are also products of X stabilisers from Q (or, alternatively and symmetrically, from Q′). In par- ticular, given any element u ∈ker(G), multiply X(u) by the bijective X checks S in Q. Because u ∈ker(G), Su ∈ker(H⊺ X) and so the cleaned operator maps to 0 on data qubits in Q, hence is a stabiliser. We shall presently prove that the code distance is preserved by the deformation, but in order to do so it is easier to use the language of cohomology. Lemma 3.3. Let C• be the initial codeblock Q viewed as a cochain complex. Then the deformed code D has the cochain complex D• = (C ⊗R)• where R• = R0 R1 P , R0 = F2 2, R1 = F2, P =  1 1  . Proof. (C⊗R)• = C0 ⊗F2 2 C1 ⊗F2 2 ⊕C0 ⊗F2 C2 ⊗F2 2 ⊕C1 ⊗F2 C2 ⊗F2 In words, 18 • at the level of X checks we have two copies of C0, the X checks of Q, • at the level of data qubits we have two copies of C1 the data qubits of Q, and an additional set of data qubits in bijection with the X checks of Q, • at the level of Z checks we have two copies of C2 and an additional set of Z checks in bijection with the data qubits of Q, • there is another term, which we ignore for now and consider only the first 3 terms. We now check that the coboundary maps are correct. δ0 C⊗R = H⊺ X ⊗I I ⊗P ! , δ1 C⊗R = HZ ⊗I 0 I ⊗P H⊺ X ⊗I ! which we recognise as the check matrices, with appropriate transposes, from Eq. 2. Using the perspective of cochains it is easy to verify the logical dimension using the K¨unneth formula. Moreover, by Lemma 2.8 we have the equation: H1(C ⊗R) = H1(C) ⊗H0(R) ⊕H0(C) ⊗H1(R) hence the set of X logical operators in D is given by representatives of H1(C) ⊗H0(R) ⊕H0(C) ⊗H1(R) which explicitly is ker HZ/imH⊺ X ⊗ker P ⊕C2/imHZ ⊗0 = ker HZ/imH⊺ X ⊗ker P so every X logical operator belongs to a stabiliser equivalence class [ui, u′ i] of the identical logical operators ui, u′ i in Q and Q′, or a nontrivial sum thereof. Lemma 3.4. The distance of D is at least d. Proof. As there are no new logical qubits, to prove the distance bound it is sufficient to ensure that cleaning Z logicals cannot reduce their weight below d. The X-distance is guaranteed by Lemma 2.16. For Z logicals, observe that multiplying any Z logical in Q by Z checks in V adds support to the corresponding data qubits in Q′ by a 1-to-1 map, and potentially adds support on E as well. Any Z logical which has support on both Q and Q′, say v and v′, can in the worst case be cleaned to have weight |v + v′| on Q and Q′, considering both v and v′ to be in Fn 2, and v + v′ is a valid logical of Q so must have weight at least d. As a consequence of Lemmas 3.2 and 3.4 it is guaranteed by the arguments in Ref. [56] that the block reading can be performed with fault distance d in d rounds of error cor- rection, as the procedure can be viewed as a parallel gauging logical measurement. All new data qubits are initialised in |+⟩, and all checks are measured for d rounds, before measuring out the new data qubits in the X basis. We reduce this from d to O(1) in Section 3.3. 19 3.2 Full block reading of Z type We can extend block reading straightforwardly to acting on many codeblocks simultane- ously. We start with a set of c identical Jn, k, dK LDPC codeblocks. Definition 3.5 (Pattern Matrix). Let P : Fc 2 →Fϕ 2 be a full-rank matrix, which defines the pattern of logical measurements to be performed between codeblocks by deforming to a new code D. We call P the pattern matrix. Each row r of P corresponds to a set of parallel measurements to be performed. Each entry of 1 in r denotes that a codeblock is involved in that set of measurements, so if rj = 1 for column j then codeblock j is involved, and if rj = 0 then it is not. Then r specifies a measurement of P 1 i ⊗P 2 i ⊗P 3 i · · · for every logical qubit i ∈[k] in codeblocks Q1, Q2, Q3... in the basis P j ∈{Z, I} depending on whether rj = 1 or 0. If P is not full rank, and has some linearly dependent rows, then those rows correspond to logical measurements which are already performed by other rows and so are redundant. Additionally, if any row has weight 1, with the 1 in column j, that measurement is equiv- alent to measuring out codeblock j in the Z basis. Hence for simplicity we assume that P is full rank, with no linearly dependent rows, and that there are no rows with weight 1. Definition 3.6 (Z-type full block reading). A full block reading of Z type takes as input c CSS LDPC codes, each with cochain complex C•, and a pattern matrix P : Fc 2 →Fϕ 2. The cochain complex for the deformed code D of the full block reading denoted by P is given by (C ⊗R)• where R• = Fc 2 Fϕ 2 P , First we check that the definition above is consistent with the definition of a pattern matrix. We have (C⊗R)• = C0 ⊗Fc 2 C1 ⊗Fc 2 ⊕C0 ⊗Fϕ 2 C2 ⊗Fc 2 ⊕C1 ⊗Fϕ 2 C2 ⊗Fϕ 2 where we currently ignore the last C2 ⊗Fϕ 2 term. We can pick out C0 ⊗Fc 2 →C1 ⊗Fc 2 →C2 ⊗Fc 2 as the c original codeblocks. Then, δ0 C⊗R = H⊺ X ⊗I I ⊗P ! , δ1 C⊗R = HZ ⊗I 0 I ⊗P H⊺ X ⊗I ! as for the example in Eq. 2 but where P is now a more general matrix. The Z logical operators in D are then H1(C⊗R) = H1(C)⊗H0(R)⊕H0(C)⊗H1(R) = ker HX/imH⊺ Z⊗Fc 2/imP⊺⊕C0/imHX⊗ker P⊺ where ker P⊺= 0 as P has no surplus rows, so H1(C ⊗R) = ker HX/imH⊺ Z ⊗Fc 2/imP⊺. Given that our initial code had Z logical operators given by ker HX/imH⊺ Z ⊗Fc 2, we see that those in imP⊺are now stabilisers, and so block reading has measured precisely the rows of P on each logical qubit. Furthermore, no new equivalence classes are present, so there are no new logical qubits. 20 Remark 3.7. Let P be LDPC and each identical codeblock be LDPC. Then the deformed code (C ⊗R)• during measurement is LDPC, and the procedure uses O(cn) ancilla qubits. This is immediate from Definition 3.6. Lemma 3.8. The deformed code has code distance at least d. Proof. The X-distance follows from Lemma 2.16. The Z logicals are given by H1(C ⊗R) = ker HX/imH⊺ Z ⊗Fc 2/imP⊺. We start with a Z logical ΛZ composed of different Z logicals v, v′, v′′, ... in each codeblock, at least one of which must have weight at least d and be a nontrivial Z logical. Applying a new Z check in V only cleans support from the codeblocks if the incident data qubits all have support in ΛZ; otherwise the check applies at least 1 new Z to a data qubit in a codeblock, and the weight of ΛZ cannot be reduced below d in this way. If we apply new Z checks in V to clean support from ccl codeblocks, in the worst case we have support |v| + |v′| + |v′′| + · · · −ccl|v ∩v′ ∩v′′ ∩· · · | left in the codeblocks. But |v| + |v′| + |v′′| + · · · −ccl|v ∩v′ ∩v′′ ∩· · · | = |v ∪v′ ∪v′′ ∪· · · | −|v ∩v′ ∩v′′ ∩· · · | ≥|v∆v′∆v′′∆· · · | = |v + v′ + v′′ + · · · | ≥d where ∆is the symmetric difference of sets. Should the row weights of P be too high, meaning that the weights of these logical measurements are high and so the degree of the Tanner graph is high, then one can intro- duce ancilla blocks to break those measurements up, making P sparse. This corresponds to adding new bits, i.e., columns to the check matrix P, such that a row can be split up into multiple rows. The ancilla codeblocks used to reduce its weight can be initialised at the same time as the code deformation procedure begins, and can be considered part of the measurement hypergraph, where data qubits in ancilla blocks are edges, Z checks are vertices and X checks are cycles, as normal. Consequently, the data qubits are initialised in |+⟩. This can be thought of as thickening the hypergraph, see Figure 4 later on. 3.3 Time overhead of full block reading We now return to our original example, Eq. 2 from Section 3.1. When computing the cochain complex in Lemma 3.3 we had an additional term (C ⊗R)3 = C2 ⊗F2 = C2, which corresponds to a collection of metachecks on the Z-checks of the deformed code. Thus the full scalable Tanner graph for the logical measurement is X Z HX HZ Q X Z HX HZ Q′ Z I I I I G E I I G V 21 where G = HZ and MZ = δ2 C⊗R =  I ⊗P HZ ⊗I  =  I ⊗P HZ  . Consider first the case where measurement faults can occur only on checks in V. As G = HZ, if a measurement fault v /∈ker(HZ) then it is detected by the metachecks. If v ∈ker(HZ) then either (a) it is equivalent to an X data qubit error on E or (b) it has weight on checks at least d. (b) follows from the fact that G = H⊺ X, and the lowest weight vector in ker HZ\H⊺ X has by definition weight at least d. For (a), as all the data qubits in E are initialised in the |+⟩basis, X errors leave the state invariant so the logical measurement outcome is unaffected. Therefore to cause an incorrect logical measurement outcome, the measurement fault weight on V must be at least d, in the case where measurement faults cannot happen elsewhere. Note further that any vector v ∈ker(G) representing an undetectable measurement fault on E, regardless of weight, is equivalent to an undetectable X data qubit fault on Q (or Q′), as multiplying by X faults on bijective qubits leaves 0 on the Z checks in Q (or Q′). To study the more general case it is fruitful to use the cohomology of (C ⊗R)•. H2(C ⊗R) = H2(C) ⊗H0(R) ⊕H1(C) ⊗H1(R), which is explicitly C2/imHZ ⊗ker P ⊕ker HZ/imH⊺ X ⊗0 = C2/imHZ ⊗ker P. Thus any Z measurement fault which is neither detected by metachecks nor equivalent to a data qubit fault must be a linear combination of pairs of identical Z check faults in Q and Q′, which are not equivalent to X data qubit faults, and Z check faults which are equivalent to X data qubit faults. Note that the Z single shot distance dss Z of (C ⊗R)• can be very low even in the more general case, as C2/imHZ can have low weight representatives. Nevertheless, any measurement fault which includes Z checks in V must be able to be cleaned, by applying some subset of X errors to Q and Q′, so that it has support only on the Z checks of Q and Q′. Only if the set of X errors which it is cleaned to is large does the measurement fault cause a logical measurement fault. In our analysis of the time overhead, we have first a weak result: Proposition 3.9. Let Q, Q′, Q′′,... be a set of identical CSS LDPC codeblocks with distance d, and let P be a pattern matrix determining a full block reading. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the full block reading procedure. Then full block reading performs logical measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. See Appendix A. The intuition is that in the timestep where the full block reading is performed, any measurement fault on the check qubits in V with weight < d which is undetectable by metachecks is either (a) in imG and so does not affect the logical measurement outcome or (b) has a paired measurement fault on Z checks outside of V, in the original code, such that the measurement faults on the original code must extend to future and previous timesteps to avoid detection. In order to maintain the fault-distance of the entire procedure we must assume d rounds of syndrome measurements on the original code before and after block reading. This is because there may be low weight faults on data qubits, Z checks and X checks outside of H which are not detected within 1 timestep, and so could lead to logical errors by connecting to timelike errors extending from the top or bottom boundaries. This is equivalent to d measurement rounds of ‘padding’ before and after block reading [108]. 22 Remark 3.10 (State preparation). Proposition 3.9 does not allow one to generically use full block reading on a single codeblock to prepare a logical |+⟩k or |0⟩k state in constant time, because of the padding required to provably preserve fault-distance. Remark 3.11 (Frame change). At the end of the block reading protocol, the ancilla data qubits are measured out in the X-basis, and the X checks in the original code are un- deformed back to their original incidence. These measurements do not commute with the preceding Z checks, and so have random outcomes, although products of these measurement outcomes which correspond to cycles in the hypergraph must be +1 in the absence of errors, as these are precisely the sets of X measurements which do commute with the preceding Z checks. Because each new data qubit is in 1-to-1 correspondence with an X check in each incident codeblock, the anticipated outcomes of X-checks in the original codeblocks are updated after the block reading protocol by taking a product with the outcomes of those data qubit X measurements, inducing a frame change. When the X measurements on ancilla data qubits undergo faults, the frame is updated incorrectly. These measurement errors are therefore in bijection with X check errors on the original code, detected by preceding and subsequent rounds of measurement. The reason why preceding rounds of measurement can diagnose such frame errors is because in the measurement round where ancilla data qubits are measured out the product of such a measurement and its corresponding X-check in an original codeblock commutes with the Z checks in the deformed code, and so must yield a +1 outcome in the absence of errors. Thus having a consistent history of checks before the frame change can detect frame errors. Proposition 3.9 is at first not particularly helpful, because the overall time cost of the procedure is still O(d). However, the proof extends to performing full block read- ings sequentially without padding between logical measurement rounds, with each logical measurement taking O(1) rounds. Theorem 3.12. Let Ξ be a set of η full block reading procedures of Z-type such that no block reading procedure in Ξ is a product of any others. Then all η logical measurement rounds of full block reading can be performed sequen- tially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. See Appendix A. Remark 3.13. In the above statement, the assumption that the pattern matrix is full rank is for technical reasons related to our proof technique. We expect that this requirement can be relaxed. 3.4 Full block reading of CSS-type Full block reading can be generalised to measure sets of commuting operators of the form Z ⊗Z ⊗· · · ⊗Z and X ⊗X ⊗· · · ⊗X simultaneously. This allows for fast code switching into a transversally concatenated code, similar to a procedure in Ref. [66] and the basic operation in the protocol of Ref. [109]. To see this, let PZ be the pattern matrix for Z-type logical measurements. Recall that the rows of PZ each specify a set of parallel Z-type measurements as per Definition 3.5. In order for us to perform X-type block reading at the same time, the X-type logical operators to measure must commute with the Z-type logical operators. Thus each row s 23 of a PX pattern matrix, specifying a set of parallel X-type logical measurements, must satisfy r · s for each row r of PZ, where · is the usual dot product on vectors over F2. Therefore, PZP⊺ X = 0. Our pattern matrix is now upgraded to a pattern complex. Definition 3.14 (CSS-type full block reading). A full block reading of CSS-type takes as input c CSS LDPC codes, each with cochain complex C•, and a pattern cochain complex R•. The cochain complex for the deformed code of the full block reading denoted by P is given by (C ⊗R)• where R• = Fθ 2 Fc 2 Fϕ 2 P⊺ X PZ with R−1 = Fθ 2, R0 = Fc 2 and R1 = Fϕ 2, and we assume that both PZ and PX are full rank with no redundant rows, and that all rows have weight greater than 1. As the tensor product of cochain complexes is also a cochain complex, the checks in the deformed code commute. We have (C ⊗R)−1 = C0 ⊗R−1 (C ⊗R)0 = C1 ⊗R−1 ⊕C0 ⊗R0 (C ⊗R)1 = C2 ⊗R−1 ⊕C1 ⊗R0 ⊕C0 ⊗R1 (C ⊗R)2 = C2 ⊗R0 ⊕C1 ⊗R1 (C ⊗R)3 = C2 ⊗R1 and H1(C ⊗R) = C2/imHZ ⊗ker P⊺ X ⊕ker HZ/imH⊺ X ⊗ker PZ/imP⊺ X ⊕C0 ⊗R1/imPZ which given that ker P⊺ X = 0 and R1/imPZ = 0 reduces to H1(C ⊗R) = ker HZ/imH⊺ X ⊗ker PZ/imP⊺ X. A similar calculation gives H1(C ⊗R) = ker HX/imH⊺ Z ⊗ker PX/imP⊺ Z, so we see that logical Z operators in imP⊺ Z, i.e. the row space of PZ, are now stabilisers, and the same for logical X operators in the row space of PX, for every logical qubit in the codeblocks. That the distance of the CSS-type block reading deformed code is at least d is imme- diate by applying the arguments of Lemma 3.8 twice, first for the Z-type auxiliary system and then for the X-type. In the CSS-type full block reading measurement protocol, all new data qubits included in the Z-type (X-type) measurements are initialised in |+⟩(|0⟩) states respectively. All syndromes are measured for O(1) rounds, and then all new data qubits are measured out in the X (Z) bases respectively. Remark 3.15. Let R• be LDPC and each identical codeblock Q, Q′, Q”... be LDPC. Then the deformed code (C ⊗R)• is LDPC. 24 Lemma 3.16. Let Ξ be a set of η full block reading procedures of CSS-type such that no block reading procedure in Ξ is a product of any others, and each of the measurements commute. Then all η logical measurement rounds of full block reading can be performed sequen- tially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. See Appendix A. In Appendix B we also show that full block reading can be extended to perform non- CSS measurements, namely those with X ⊗Z and Y terms, on CSS codes, but that our constructions for these require specific structure on the codes, such as transversal S gates, so they add no extra computational power to full block reading with CSS-type measurements. Full block reading gives no addressability on logical qubits within a codeblock, as the measurements are on all logical qubits, so the utility of full block reading is limited in the same manner as 1-to-1 transversal CNOT gates between CSS codes. The logical computational power of a full block reading is identical to initialising an ancilla block, then applying transversal CNOTs, then measuring out the ancilla block. Consequently, it is unclear how useful full block reading would be for practical fault- tolerant computation. Compared to measurement using transversal gates and ancilla codeblocks, full block reading requires fewer qubits overall, but full block reading also requires more connections between the initial codeblocks and the auxiliary system. Nevertheless, our analysis of full block reading sets the stage for the more sophisticated operations in the next sections. We first consider the case of reading a subcode, then present our general result on fast hypergraph surgery. 4 Partial block reading Returning again to the example in Eq. 2, consider a different generalisation. We start with two codeblocks Q and Q′, which may be different, X Z HX HZ Q X Z H′ X H′ Z Q′ 25 and construct the measurement hypergraph H to connect only to a subcode of each, but where the subcodes are identical. X Z HX HZ Q X Z H′ X H′ Z Q′ Z Ml Mr Fl Fr G E V That is, up to permutation and the addition of all-zero rows, the matrices satisfy Ml = Mr; Fl = Fr. Given an appropriate choice of subcode and measurement hypergraph, the measurements are only between those logical qubits whose logical representatives have connections to the hypergraph, and so are contained in the subcode. This is called a partial block reading. The construction no longer yields a tensor product of cochain complexes, so we in- stead use mapping cones from Section 2.2.2 to describe partial block reading. As the constructions become quite complicated, we also refrain from studying multiple partial block readings occurring simultaneously, unlike in the full block reading case. All par- tial block readings are assumed to be performed sequentially, albeit with only a constant number of syndrome rounds between them. Definition 4.1. Let A be a subcode of Q, Q′, Q′′..., each of which have chain complexes C•, C′ •, C′′ • ..., such that there is a collection of injective chain maps f•, f′ •, f′′ • ... of the forms f• : A• →C•; f′ • : A• →C′ •; f′′ • : A• →C′′ • ... Then the partial block reading of Z type defined by {f•, f′ •, f′′ • ...} is D• = cone(h•); h• = f• + f′ • + f′′ • + ... Explicitly, D• is the chain complex A1 A0 0 C2 C1 C0 C′ 2 C′ 1 C′ 0 C′′ 2 C′′ 1 C′′ 0 ... ... ... ∂A 1 0 ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ 26 where labels for the differentials in each codeblock, and labels for the chain maps, have been suppressed for visibility. Note that also A• generally possesses a nonzero A2 component. This component corresponds to metachecks, so we ignore that term for now. It is trivial to compute, using the Snake Lemma [90, Lem. 1.3.2], that the measured logical qubits are those in each code which have Z logical representatives contained in the subcode A•. As a Tanner graph, we have X Z HX HZ Q X Z H′ X H′ Z Q′ Z f0 f⊺ 1 G X Z H′′ X H′′ Z Q′′ f′ 0 f′ 1 ⊺ f′′ 0 f′′ 1 ⊺ E V (3) for a partial block reading between three codeblocks. Observe that the chain maps f•, f′ •, f′′ • determine the connectivity between the blocks and the auxiliary hypergraph, and that E = A0, V = A1. Unlike when performing full block reading, the code distance of the deformed code D• in partial block reading is not generically preserved, and can drop below d. Elementary examples of this phenomenon include surface codes with defects and toric codes, which we detail in Section 8. This drop in distance can come from two causes. First, it is possible to multiply existing Z logicals in the codeblocks by new Z checks in order to lower the weights below d. Second, viewing the ancilla system defined by the mapping cone as a hypergraph H, there may be cycles in H which are nontrivial X logicals, meaning that they do not correspond to stabilisers of the deformed code. When unmeasured, the cycles have symplectically paired Z logicals, which when multiplied by other logicals in the original code can result in weights lower than d [52]. In this section, we focus on partial block reading where the subcode has distance d, and the measurements are assumed to be in the Z-basis. Dualising appropriately yields the same results for the X-basis. For the first cause of the deformed code distance dropping, we can thicken the hypergraph. This is equivalent to taking a tensor product of A• and the dual of a repetition code P• before defining the mapping cone, as shown in Figure 2. We can define chain maps from any copy of A• in the tensor product to Q, and the same for Q′, Q′′..., and this choice does not need to be the same for each. An example is shown 27 X A1 (a) A subcode A•, X X · · · X (b) P•, the dual of a repetition code, X X Z X Z · · · · · · A1 A1 A1 A⊺ 1 A⊺ 1 I I I I I I I I (c) The tensor product (P ⊗A)•, Z G E V Z G E V X Z G E V X · · · · · · (d) The thickened hypergraph H, which is (P ⊗A)• with degrees shifted by 1. Figure 2: Constructing a thickened hypergraph. Note that (b) is a conventional Tanner graph, not a scalable Tanner graph. in Figure 3 Lemma 4.2. Let (P ⊗A)• be the thickened subcode, and let f•, f′ •, f′′ • ... be chain maps from any copies of A• in (P ⊗A)• into Q, Q′, Q′′..., with h• = f• + f′ • + f′′ • · · · . Then if (P ⊗A)• has been thickened at least d times the deformed code D• has distance at least d. Proof. Immediate from Corollary 7.10, Lemma 7.12 and Remark 7.11 further down, so we defer the proof until then. This is similar to devised sticking [59] and the original CKBB scheme [52] in that the auxiliary system is thickened d times, but in this case we are measuring many representa- tives of the same equivalence classes of operators simultaneously. As in the CKBB scheme, thickening d times is generally overkill [54], and distance preservation commonly requires a smaller ancilla system than this upper bound. Indeed, in Section 6 we demonstrate that soundness properties of the subcode can be used to reduce the required thickening while provably preserving the distance, and even that gives a sufficient but not gener- ally necessary amount of thickening. If thickening d times were truly necessary then our time overhead of d would essentially be replaced by a space overhead factor of d, at least 28 Z G E V Z G E V X Z G E V X · · · · · · X Z HX HZ Q X Z H′ X H′ Z Q′ X Z H′′ X H′′ Z Q′′ Figure 3: Example of partial block reading with a thickened hypergraph. The logical measurement is the same as in Eq. 3, but the hypergraph has been thickened to prevent the weight of Z operators dropping below d. when A• is large, which is interesting but less useful than achieving a genuine reduction in spacetime overhead. For the second cause of the deformed code distance dropping, that there can be cycles in H which correspond to X logicals, we use the following fact. As the subcode has distance d, the logical measurement protocol can also be performed in 1 round of error correction, similar to full block reading, by introducing redundant Z checks between each level of the hypergraph as shown in Figure 4. This fact helps us to gauge-fix the cycles, because each data qubit in each cycle is initialised in |+⟩immediately before and measured out in the X basis immediately after the single round of error correction. The results of those X measurements can be used to infer the outcome of X stabiliser measurements which would fix the gauges of the cycles, and so we do not need to explicitly add checks for those cycles [56]. However, if the set of cycles does not have a basis of low weight cycles, this can result in detectors with high weight in the space direction, which impinge on the fault-tolerance against stochastic noise. Proposition 4.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance d, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code, which has code distance at least d. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. Then partial block reading performs logical 29 Z G E V Z G E V X Z G E V X · · · · · · X Z HX HZ Q X Z H′ X H′ Z Q′ X Z H′′ X H′′ Z Q′′ Z Z Figure 4: The same partial block reading as in Figure 3 but with additional redundant checks and metachecks. measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. See Appendix A. The key here is that when thickening, each Z check error in level a is equivalent to a Z check error in level a −1 along with an X error on the data qubit between them. In this way, we form a chain of metachecks up the thickened hypergraph. These allow all stabilisers in the subcode to be matched to sets of checks in each level of the hypergraph. Evidently, if the subcode A• has nA qubits, and the amount of thickening required is L, then the space cost of a partial block reading is O(nAL). The above proposition considers performing one partial block reading with d rounds of padding before and after the procedure. Once again, we would like to generalize this to performing t ≥d operations in O(t) time, thereby achieving constant time overhead in amortisation. This introduces an extra complication. Instead of the deformed code, we consider the compacted code, defined as follows. Definition 4.4 (Compacted code). Let C• be a complex representing the memory CSS code blocks, and let A1,•, · · · , At,• be chain complexes each with a chain map fi,• : Ai,• →C•. The compacted code is the code taken by applying all the mapping cones to the original code, i.e., CC• = cone( X i (fi,•). While this code is most likely not LDPC and potentially massive, it is just a proof construct and is never explicitly used in our surgery operations. 30 There is an intuitive reason behind defining the compacted code. Ideally, we would like to construct partial block readings assuming just that every deformed code has distance d. This assumption, however, may not be sufficient: if we perform two partial block readings one after another with only O(1) syndrome rounds between them, we must carefully consider how the weight of an unmeasured logical may be reduced by multiplication with the new deformed code checks. While, by assumption, both the first and second partial block readings would individually guarantee any weight-reduced operator remains weight d, there would be no such guarantee when both weight reductions occur together. This may seem irrelevant – after all, we are not performing the partial block readings simultaneously – but it happens that a sequence of check measurement errors in the O(1) syndrome rounds between the block readings can effectively reproduce this simultaneous scenario using potentially just O(1) additional faults (since we are not assuming the original code is sound).12 Therefore, to ensure the overall surgery protocol is fault-tolerant, we impose the condition that the compacted code, which can be seen as the memory blocks deformed by all the surgery operations applied simultaneously, has distance at least d. Theorem 4.5. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let Ξ be a set of Z-type partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance d and the compacted code has distance d. Then all η logical measurement rounds of partial block reading can be performed sequen- tially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomenological fault-distance d, for any η ∈N. Proof. See Appendix A. While it may seem daunting to study the compacted code and satisfy the distance con- dition, we show in Section 7 that standard techniques in code surgery, notably thickening and expansion, can be applied to satisfy this condition. We further remark that in the case of full block reading with a full rank pattern matrix P, the resultant compacted code is a homological product code (C ⊗R)•, which has distance at least d. Remark 4.6. In Section 9 we study the relationship between surgery and homomorphic measurement [42]. For now, observe that given a subcode of distance d performing homo- morphic measurement is straightforward, as one initialises an ancilla system that is a copy of that subcode and performs transversal CNOTs and single-qubit measurements. This ini- tialisation can take up to O(d) rounds to preserve fault-tolerance. Note that the method of algorithmic fault-tolerance [84] does not apply to generic homomorphic measurements. Performing partial block reading with a distance d subcode requires additional conditions on the distance of the deformed code to maintain fault-tolerance, which can be resolved at some additional space cost by the thickening described above. 5 Fast hypergraph surgery In this section we generalise our partial block reading results to the case where A• is not a subcode but some other complex with a suitable chain map into the original code. 12We note that this simultaneous weight reduction issue, a scenario not considered in detail by prior code surgery work, is also a concern when doing auxiliary graph surgeries [56, 58] separated by O(1) syndrome rounds if one does not assume expansion of the auxiliary graphs and instead just assumes large deformed code distances. In practical constructions, e.g. Ref. [55], where one indeed tends not to guarantee expansion but just deformed distance, this is an important issue to be aware of. 31 A2 A1 A0 A−1 C2 C1 C0 0 f2 f1 f0 0 0 Definition 5.1. We say that the chain map f• : A• →C• is non-overlapping on vertices if each row in the matrix f1 : A1 →C1 contains at most a single non-zero entry. The degree 1 of A• is shifted by the mapping cone construction to be in cone(f)2, i.e. the set of vertices in the hypergraph is the set of basis elements of A1. We can directly transport Definition 4.1 to this more general setting. Definition 5.2. Let A• be an auxiliary chain complex and let Q, Q′, Q′′... be CSS LDPC codes, each of which have chain complexes C•, C′ •, C′′ • ..., such that there is a collection of sparse chain maps f•, f′ •, f′′ • ... of the forms f• : A• →C•; f′ • : A• →C′ •; f′′ • : A• →C′′ • ... which are non-overlapping on vertices. Then the hypergraph surgery of Z type defined by {f•, f′ •, f′′ • ...} is D• = cone(h•); h• = f• + f′ • + f′′ • + ... As with partial block reading it is trivial to compute which logical operators are mea- sured using the Snake Lemma [90, Lem. 1.3.2]. Theorem 5.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of Z-type hypergraph surgeries {Hyper1, Hyper2, Hyper3, ..., Hyperη} such that: • Each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance d, with sparse cycle bases. • The compacted code has distance at least d. Then all η logical measurement rounds of generalised hypergraph reading can be per- formed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomeno- logical fault-distance d, for any η ∈N. Proof. See Appendix A. In the mapping cone that describes the surgery operation, the 1-cosystolic distance captures the metacheck distance. To satisfy this distance condition, it is often easier to start with a homomorphic, high distance quantum code, as we did in full block reading. Returning to the second condition, in Section 7 we define a notion of hypergraph expansion called modular expansion and show that it is a sufficient condition for the compacted code to have distance d. We then show that a generic hypergraph can always be thickened (by up to d layers) to a graph with the desired expansion properties. This is in analogy with prior surgery works studying graphs, where it is known that thickening can be used to boost a property called relative expansion [57]. We remark that for both our work and prior works, expansion is a sufficient but not necessary condition, in practice one can often find good constructions heuristically, such as adding random hyperedges. 32 Remark 5.4. In a generic hypergraph surgery, constant-weight local operators are mea- sured alongside high-weight global operators. If we require that the original codespace is restored after the hypergraph surgery, the constant-weight local operators must all be in the stabiliser group, and the high-weight operators must be logical operators. For a Z- type measurement on a single code block, with an injective f1 function, this requires the group generated by F2-linear combinations of the hyperedges to include the truncation of the X-type stabiliser group to the qubits addressed by the measurement. In the previous sections we have chosen the X-type checks of the subcode, but more generally we can pick a different, potentially overcomplete, generating set. 6 Intermediate time overhead with locally testable codes We can consider (a) block reading with a distance d subcode and (b) conventional LDPC code surgery, which uses a distance 1 subcode, to each sit at opposite ends of an axis determining the time cost of logical measurements. In this section we consider interpolat- ing along this axis, to find surgeries which use between O(1) and O(d) time each, while remaining fault-tolerant. 6.1 Partial block reading with a low distance subcode Proposition 6.1. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance dA = 1 αd, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code with code distance d and let H have a sparse cycle basis. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. First, ⌈α⌉rounds of syndrome measurements are necessary (but not sufficient) to perform partial block reading with fault-distance d. Then if either: (a) α ≤2 or (b) the initial codeblocks are each locally testable codes with soundness ρ ≥2n m , where n is the codeblock length and m the number of checks, then partial block reading performs logical measurement in ⌈α⌉rounds of syndrome measurements with fault-distance d, maintaining the LDPC property throughout. Proof. See Appendix A. Remark 6.2 (Cycle checks). When the block reading takes longer than O(1) round we genuinely require a sparse cycle basis for H to maintain the LDPC property of the de- formed code, as the gauge checks at each timestep cannot easily be inferred from the state preparation and measurement of data qubits in the auxiliary system. In general, it is not guaranteed that H possesses a sparse cycle basis. In practice we expect to find sufficiently sparse cycle bases numerically for small codes. Additionally, if a subcode has low weight X metachecks which have support on checks in the subcode, after degree shifting these metachecks become low weight cycle checks for a Z basis partial block reading. Proposition 6.1 is interesting primarily because there are codes that are not expected to have distance d subcodes addressing targeted subsets of logical qubits. Indeed only topological codes and certain product codes are known to allow for addressing any logical qubit in this manner [42, 43]. Instead, one can find subcodes with distance between 1 and d which still permit fast measurement, but not necessarily in constant time. The distance of A• is the minimum number of check errors on the auxiliary system which are required to cause an undetectable logical measurement error in a single round; this is not the same 33 as the single-shot distance of the entire deformed code, because there can be checks in the bulk which have far less protection against measurement errors. However, the errors which can occur in the bulk do not prevent us from performing many of these partial block readings sequentially without d rounds of padding in between, implying that interpolated partial block reading can indeed speed up computation sub- stantially. Theorem 6.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance di ≥ 1 αi d for i ∈[η] and the deformed codes each have code distance d, with sparse cycle bases. Suppose the original codes have soundness ρ ≥2n m . Then all η logical measurement rounds of partial block reading can be performed sequen- tially using ⌈αi⌉rounds of syndrome measurement each, and using d rounds of padding before and after the logical measurements. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. See Appendix A. We remark that when the base codes are locally testable codes with constant soundness, we only require the weaker condition that each deformed code has distance d, instead of the stronger condition that the compacted code has distance d. Intuitively, this is because when considering the potential weight reduction of a logical fault that is deformed into two different surgery ancilla systems, the support of that logical fault is necessarily shifted onto timelike check errors, which are proportional to the reduced space support as the codes have constant soundness. Remark 6.4 (Interpolated surgery with lower soundness). Proposition 6.1 and Theo- rem 6.3 demand soundness ρ ≥2n m , which is asymptotically constant as m ∈Θ(n) for LDPC codes. If the soundness is lower than constant, which one may expect for practical codes, it is sufficient to use asymptotically more rounds of syndrome measurement per logical measurement, proportional to 1/ρ, to maintain phenomenological fault-distance d. Furthermore, one cannot guarantee that the surgeries can be performed within O(1) time of each other, without considering the compacted code (see Def. 4.4). Scaling the time between logical measurements to O(1/ρ) is then sufficient to preserve fault-distance. If the soundness is vanishing then there is still a reduction in time overhead from measuring multiple representatives, but the reduction is additive rather than multiplicative, see Proposition 9.3, and one is forced to consider the compacted code to ensure that the logical measurements can be performed close together in time. 6.2 Intermediate time hypergraph surgery Similar to the previous section on partial block reading, we can generalize Theorem 6.3 to the case of hypergraph surgeries. Theorem 6.5. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of hypergraph surgeries {Hyper1, Hyper2, Hyper3, ..., Hyperη} such that: • Each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance di ≥ 1 αi d for i ∈[η], with sparse cycle bases. • Each deformed code has distance at least d. 34 • The original codes have soundness ρ ≥2n m . Then all η logical measurement rounds of hypergraph surgery can be performed sequen- tially using ⌈αi⌉rounds of syndrome measurement each, and using d rounds of padding before and after the logical measurements. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. This follows by combining the proofs of Theorem 5.3 and Theorem 6.3. Gauging logical measurement [56] can be seen as an instance of hypergraph surgery where the subcode A• has distance 1, and therefore requires O(d) rounds of syndrome measurement to maintain fault-distance d. 7 Modular expansion In previous sections, we studied hypergraph surgery operations where the compacted code or the deformed codes has distance d. This is critical for fault-tolerance. In this section, we discuss sufficient expansion properties on the hypergraph for these conditions to hold. In context of past surgery works, when A• contains only one logical operator (so α = d and dA = 1), partial block reading recovers the CKBB scheme [52]. It is known that there are substantially more efficient schemes for measuring individual logical operators in O(d) time by surgery [56, 58], using expander graphs. This section is essentially an extension of these results to hypergraphs, which can measure multiple logical representatives simultaneously and quickly. Definition 7.1. Let H(V, E) be a hypergraph, with edge-vertex incidence F2-matrix G. Let I be an index set of basis elements vi of a subspace W ⊂F2V. Then we can index element in W as κλ for λ ∈2I. We conflate κλ as a vector and as a set of vertices in V. To each basis element vi we associate its set of vertices Vi = supp(vi), and define the vertex set U = S i Vi. Definition 7.2 (Modular expansion). Let H(V, E) be a hypergraph with incidence matrix G : F2E →F2V and specified subspace W ⊂F2V containing elements κλ. Let t ∈R+ be a positive real number. The modular expansion Mt of H is the largest real number such that, for all v ⊆V, |G⊺v| ≥Mt min(t, |κλ| −|v ∩κλ| + |v ∩U\κλ| : κλ ∈W) As this definition is quite complicated, we give some orienting remarks. Modular expansion is a generalisation of both relative expansion from Ref. [57] and the soundness of a classical locally testable code, and is therefore also as generalisation of the edge- expansion of a graph. It is ‘modular’ in the sense that it contains distinguished subsets κλ such that, when they all exist in the same hypergraph, the notion of expansion for each of them combines to give a complete notion of expansion for the whole hypergraph. In detail, recall that edge-expansion (henceforth called global expansion) can be defined for hypergraphs as follows. Definition 7.3 (Global expansion). Let H(V, E) be a hypergraph with incidence matrix G : F2E →F2V. The global expansion of H is the largest real number β such that |G⊺v| ≥β min(|v|, |V\v|). 35 When G is a graph this reduces to traditional edge-expansion and β is the Cheeger constant. Ref. [57] defined relative expansion, a generalisation of global expansion. Definition 7.4 (Relative expansion). Let H(V, E) be a hypergraph with incidence matrix G : F2E →F2V, distinguished subset U ⊂V and chosen parameter t ∈R+. The relative expansion of H is the largest real number βt such that |G⊺v| ≥βt min(t, |v ∩U|, |U| −|v ∩U|). When t ≥|V| and U = V this reduces to the definition of global expansion. Similarly, when |I| = 1 modular expansion reduces to relative expansion, observing that there are only two κλ vectors, one being the empty set and the other U. A different generalisation of global expansion is to the soundness of a locally testable code. Recall Definition 2.10, with variables replaced to be suggestive of our hypergraph setting. Definition 7.5. A binary linear code C ⊂F2V is (ω, ρ)-locally testable if it has a parity- check matrix G⊺with rows of weight at most ω such that for any vector v ∈F2V, 1 m|G⊺v| ≥ρ nd(v, C) where m = |E|, n = |V| and d(v, C) = minx∈C(|v + x|). The values ω and ρ are the locality and soundness of the code respectively. In particular, if we rearrange the equation defining soundness ρ then we get |G⊺v| ≥ βd(v, C) for β = ρm n . We observe that in the case where the only codewords are 0 and V, β coincides with the definition of global expansion. For any sparse hypergraph H, if it has global expansion it is also a locally testable code, albeit a rather trivial one. In this way the soundness of a code can be thought of as a generalisation of expansion to the case where a hypergraph has ker G⊺with dimension greater than 1. As dim G⊺is equal to the number of logical representatives which are measured simultaneously by the auxiliary hypergraph, we require a more general definition than relative expansion in order to perform block reading. Modular expansion coincides with soundness when U = V and t ≥|V| under the rearrangement Mt = ρm n , and when W = ker G⊺, so each element κλ is an element of the kernel. We thus have the informal commuting diagram, Modular expansion Relative expansion k = 1 Soundness of an LTC t ≥n Global expansion k = 1 U = V t ≥n U = V where k = dim G⊺, n = |V|, and each κλ is assumed for simplicity of the diagram to correspond to an element of ker G⊺. 36 Remark 7.6. For a simply connected graph, the edge-expansion can be related to spec- tral properties of the adjacency matrix or Laplacian of the graph. We do not know of a similar relation between relative expansion, soundness or modular expansion and spectral properties of the Laplacian of the hypergraph. To explain our results on hypergraphs with modular expansion we use the terminology of port functions, similar to Ref. [57]. In short, given a CSS code Q and a hypergraph H(V, E), a port function fp is a map from a subset of data qubits in Q to a set of vertices in V. The deformed code is then defined as Q ↔H, with connections introduced between Q and H given by the port function fp. This definition of the deformed code is similar to the definition of the mapping cone by a chain map, although the port function does not specify the connectivity of deformed checks into the auxiliary hypergraph, and the port function is in the opposite direction to f1 in a chain map. Moreover, the port function definition can be applied to codes which are not CSS. Theorem 7.7 (Distance preservation). Let {Zi}i∈I be a set of logical representatives (trivial or nontrivial) on a distance d CSS code Q with supports ξi = supp(Zi). Let H(V, E) be a hypergraph with vertices in V being Z checks, hyperedges in E data qubits and X gauge checks forming a cycle basis, such that H measures all the logical representatives {Zi}i∈I. Let fp : S i ξi →V be an injective port function such that U = imfp ⊂V, and fp(ξi) = Vi ⊂V, ∀i ∈I. Let Md(H) ≥1, where W is generated by fp(ξi). Then the distance of the deformed code Q ↔H is at least d. Proof. See Appendix C. As the above proof demands the port function fp from the original code to the hy- pergraph be injective, it does not capture the most general form of hypergraph surgeries. In that case, scaling the modular expansion proportional to the check incidence to the original code is sufficient to preserve the distance, see Remark 7.16. Theorem 7.8 (Thickening). Let H(V, E) be a hypergraph with modular expansion Mt(H). Let J L be the path graph with length L ≥ 1 Mt(H), i.e. L vertices and L −1 edges. Let HL := H□J L be the hypergraph of H thickened L times. Then HL has modular expansion Mt(HL) ≥1 for Uℓ= S i V ℓ i at any level ℓ∈{1, 2, · · · , L}, where V ℓ i is the copy of Vi in the ℓth level of the thickened hypergraph. Proof. See Appendix C. Lemma 7.9. Let H(V, E) be a hypergraph such that each element in ker(G⊺) has the supporting set κλ for some λ ∈2I. Then Mt(H) ≥1 t . Proof. Consider an arbitrary vertex set v. We consider two cases: first, where |G⊺v| = 0, and then where |G⊺v| ≥1. In the first case, we know that v ∈ker(G⊺) and therefore v = κm for some m ∈2I. Therefore |κm| −|v ∩κm| + |v ∩U\κm| = 0, and so |G⊺v| ≥Mt(H) min(t, |κλ| −|v ∩κλ| + |v ∩U\κλ| : λ ∈2I) for any Mt(H). In the second case, |G⊺v| ≥1. We now consider the two values of the minimum. If min(·) = t then |G⊺v| ≥1 t t. If min(·) = |κλ| −|v ∩κλ| + |v ∩U\κλ| for some λ ∈2I then t ≥|κλ| −|v ∩κλ| + |v ∩U\κλ|, and so |G⊺v| ≥1 ≥1 t (|κλ| −|v ∩κλ| + |v ∩U\κλ|). Corollary 7.10. Let H(V, E) be any hypergraph such that each element in ker(G⊺) has the supporting set κλ for some λ ∈2I. Let Ht := H□J t be the hypergraph of H thickened t times. Then Ht has modular expansion at least Mt(Ht) ≥1 for any level ℓ. 37 Proof. As Mt(H) ≥1 t and J L in Theorem 7.8 has L = t then L ≥ 1 Mt(H). Corollary 7.10 post-hoc justifies devised sticking from Ref. [59] by setting t = d, as there the hypergraph is thickened d times to preserve fault-tolerance, and the CKBB scheme [52] is a special case of this. Remark 7.11. The expansion-based argument does not account for the cycles in the hy- pergraph, however. For these see Ref. [52, Thm. 1], which shows that when the hypergraph is thickened d times the cycles can be considered gauge logicals and the deformed code still has distance at least d as a subsystem code. Theorem 7.8 shows that it is not necessary to thicken d times, if the modular expansion before thickening is greater than 1 d. This has previously been proved for the simpler case of measuring only one logical operator [55, 57]. Lemma 7.12 (Cross-block measurements). Let H be a hypergraph with modular expansion Md(H) ≥1 L. Let H correspond to the degree-shifted subcode A• with distance at least d. Vertices in H are Z checks and edges are data qubits. Let f•, f′ •, f′′ • ... be the inclusion chain maps of A• into C•, C′ •, C′′ • ..., each of which has distance at least d. Let HL be the the hypergraph thickened L times, with modular expansion Md(HL) ≥1 at each level ℓ∈{1, 2, · · · , L}. Then the deformed code D• given by modifying each chain map f•, f′ •, f′′ • ... to have their preimages on any arbitrary levels of the hypergraph and taking the mapping cone has distance d if all cycles are gauge-fixed. Proof. As all cycles are gauge-fixed, there are no new logical qubits, and no X logical can have weight lower than d. We must therefore ensure that no Z logical which is unmeasured can have its weight reduced lower than d by cleaning. We prove the lemma by contradiction. If any Z logical extending over all initial codeblocks C•, C′ •, C′′ • ..., but not the hypergraph, could have its weight reduced below d by cleaning into the hypergraph, then a single logical in one of C•, C′ •, C′′ • ... must also have its weight reduced below d by cleaning. This fact is because removing the chain maps to all other codeblocks would leave the same set of Z checks which could clean ΛZ to the same data qubits σ, with the exception of data qubits in the other codeblocks. This set of data qubits must satisfy |σ| < d for this lemma to be false. However, we know that Md(HL) ≥1 at each level of the hypergraph, therefore |σ| ≥d and the lemma holds. By a combination of Corollary 7.10 and Lemma 7.12, observe that the earlier Lemma 4.2 from Section 4 holds. To summarise this section, if a subcode A• for partial block reading has sufficient soundness then it must always also have modular expansion, which can then be boosted by thickening an appropriate amount to guarantee the deformed code’s distance is preserved. Recall the definition of soundness for quantum codes in Def. 2.11. Corollary 7.13. Let A• be the subcode used in a partial block reading. Let A• have soundness ρA. Then, viewed as a hypergraph H, thickening 2nA ρAmA times preserves the distance of the deformed code when all cycles are gauge-fixed. Proof. By Lemma 2.12, H has soundness ρAmA 2nA as a classical code with parity-check matrix G⊺. Setting W = ker G⊺, H also has modular expansion Md ≥ρAmA 2nA . By Lemma 7.12, set L = 2nA ρAmA and the thickened hypergraph HL has modular expansion Md(HL) ≥1. Remark 7.14. As mA 2nA ∈O(1) for LDPC codes, the asymptotic space cost of partial block reading when all cycles are gauge-fixed can be reduced to O(nA ρA ). 38 Assuming the modular expansion Md(H) before thickening is greater than 1 d, which is its minimum by Lemma 7.9, this thickening requires fewer than the d layers required by the CKBB scheme and similar [52, 59]. Note that as the soundness of a subcode is not generally inherited from the soundness of the full code, finding subcodes with high soundness is generally difficult. Regardless, having hypergraphs equipped with modular expansion is extremely useful to perform fast surgery. Lemma 7.15. If each hypergraph Hi associated to a partial block reading Partiali for i ∈[n] has modular expansion Md(Hi) ≥1 then the compacted code CC• has distance d. Proof. By Theorem 7.7, attaching H1 to the original code via the chain map (h•)1, to yield cone((h•)1), must always preserve the distance of the code. Applying the other mapping cones iteratively will then each preserve the distance, so CC• has distance d. As a consequence, the conditions of Theorem 4.5 can be satisfied constructively, by choosing subcodes of distance d and then thickening such that each hypergraph has mod- ular expansion. Remark 7.16. A similar approach applies to Theorem 5.3, where instead it is sufficient for each hypergraph to have modular expansion Md(Hi) ≥wi, where wi is the maximal column weight of (f1)i, (f′ 1)i, (f′′ 1 )i, · · · . 8 Examples In this section we illustrate block reading with a series of examples. We focus first on 2D surface codes, upon which traditional lattice surgery is extremely well-understood [106, 110, 111]. 8.1 Topological codes 8.1.1 2D surface codes We demonstrate on unrotated surface codes, as the connection to block reading, and LDPC code surgery in general, is more apparent in this case. We start with two square surface code patches with distance d, with n = d2 + (d −1)2 data qubits each, 39 In these diagrams qubits are edges, X and Z checks are vertices and faces respectively. The deformed code for the full block reading performing a Z ⊗Z is then given by where the new edges are dashed grey lines. There are new vertically-oriented faces where there are new squares, and at the boundaries. Evidently the connectivity requirements are close to the requirements for transversal CNOTs. 40 The only distance d subcode (as defined in Def. 2.5) of the square surface code is itself, and so the only partial block reading with a distance d subcode is full block reading. A partial block reading using a subcode A• with distance dA less than d can be constructed by choosing a strip of the surface codes and then connecting these together. In the simple case of surface code patches as above the distance is always preserved by partial block reading, because the only logicals which can have reduced weight are those on Z ⊗Z or X ⊗X, which are now stabilisers. If the surface code patches also have defects, or are toric codes instead, partial block reading does not generically preserve code distance and increasing modular expansion in the auxiliary hypergraph is necessary, by thickening or otherwise. Recall that surface codes have asymptotically vanishing soundness by [78, Cor. 1] so using a partial block reading with dA < d on the surface code is not particularly useful for reducing the time overhead, unless dA ≥d 2 by Proposition 6.1. In the case where dA = 1 and the chosen subcode is along the boundary of the patches, we recover traditional lattice surgery with unrotated surface codes: 41 8.1.2 2D toric codes Partial block reading with toric codes is illustrative, as it showcases how the distance of the deformed code can drop without thickening. First, consider the following d = 5 toric codeblocks, shown by tessellating their fun- damental polygons. Blue and red edge pairs of the same lattice are identified to form loops. 42 A partial block reading between them can be performed with a subcode corresponding to a cylinder around the torus. Measuring only one representative from each torus recovers traditional surgery with 2D toric codes, 43 which measures Z ⊗Z on one logical qubit from each codeblock, but not the other. Growing the width of the cylinder such that the distance of A• is 4, we acquire the deformed code: 44 Depending on the chosen subcode, the distance may drop in the deformed code, as there can be a stringlike Z operator which extends from one torus to the other and then back, skipping the part of the torus with the new connections and making a short noncontractible closed curve. In our example the deformed code distance is still 5, as these curves take the form shown by green edges, 45 which have length at least 6. However, were we to start with d = 7 toric codes, with the width of the cylinder being 6, the distance would drop as a consequence of these same curves. Increasing the length of this closed curve is then possible by thickening the subcode, or boosting the modular expansion in some other manner. If we were to perform a full block reading, these closed curves would become contractible due to the additional stabilisers, so the deformed code distance would be preserved regardless. 8.1.3 4D toric codes 4D toric codes are known to be decodable in a single-shot manner, as they have some soundness and metachecks of both X and Z-type [78, 82, 83, 107]. A 4D toric code can be constructed by a 4-fold tensor product of the chain complex R• = R1 R0 ∂1 where R1 = R0 = Fm 2 and the differential ∂1 is the incidence matrix of the cycle graph with m vertices and edges, so dim ker(∂1) = 1 and dim Fm 2 /im(∂1) = 1. The 4-fold tensor product yields a chain complex with 5 nonzero components. Let the 4D toric code be C• = (R ⊗R ⊗R ⊗R)• = C4 C3 C2 C1 C0 46 and the qubit component is fixed to be C2.13 More explicitly we have C2 = M i+j+k+l=2 Ri ⊗Rj ⊗Rk ⊗Rl, and the entire chain complex is given by R1R1R0R0 R1R1R1R0 R1R0R1R0 R1R0R0R0 R1R1R0R1 R1R0R0R1 R0R1R0R0 R1R1R1R1 R0R0R0R0 R1R0R1R1 R0R1R1R0 R0R0R1R0 R0R1R1R1 R0R1R0R1 R0R0R0R1 R0R0R1R1 where tensor products are suppressed for visibility and columns of terms are taken in a direct sum. The K¨unneth formula implies that dim H2(C) = 6. The distance of C• is d = m2, which can be computed using Lemma 2.8. The intuition is that all nontrivial logical operators in a 4D toric code form membranes through the complex, as opposed to strings in 2D toric codes. For a given summand in C2, say R1 ⊗R1 ⊗R0 ⊗R0, one choice of corresponding nontrivial Z logical operator representative is then 1 ⊗1 ⊗(1, 0, · · · , 0) ⊗(1, 0, · · · , 0), where 1 = (1, 1, · · · , 1). To give a treatment of partial block reading with 4D toric codes we specify a subcode A•. There are evidently many such choices. One choice is to measure just a single logical representative, which by Thm. 1.6 must take Θ(d) syndrome measurement rounds to maintain phenomenological fault distance. Another choice of A• is to specify a set of Z-checks in C3, and to take the summands in C2, in the image of that set of Z-checks and so on. This is trivially guaranteed to be a subcode. For example, set A• to be R1R1R0F2 R1R0R0F2 R1R1R1F2 R1R0R1F2 R0R1R0F2 R0R0R0F2 R0R1R1F2 R0R0R1F2 13This shift in qubit degree compared to convention is for convenience, so that there are no negative indices and basis elements in C2 can be identified with faces in a cell complex. 47 where the chain map component f3 maps R1R1R1F2 in A3 into R1R1R1R0 in C3 and so on, choosing a consistent basis element of R0 to map F2 into. This choice measures 3 Z logical operators, and many representatives of each. This is the choice taken in Ref. [107], and can be seen as setting A• to be a hyperplane of the lattice. While each Z logical representative contained in A• must have weight at least m2, the X-distance of A• is only m, as this is a 3D toric code by identifying that R1R1R1F2 ∼= R1R1R1 etc, with the qubit component at A2 rather than the customary A1. In Ref. [60] it was observed that while one can perform single-shot lattice surgery using a subcode given by a hyperplane with 4D surface and toric codes, the performance was reduced unless O( √ d) = O(m) syndrome rounds were used. By Proposition 6.1 we in fact see that Ω( √ d) rounds are required to maintain fault-distance, as α = √ d in this case. Thus one can perform single-shot lattice surgery, or lattice surgery using a constant number of syndrome rounds, with 4D toric codes, but with only O( √ d) fault-distance, given that choice of subcode. Remark 8.1. Proposition 6.1 is not quite sufficient to justify Θ( √ d) rounds of syndrome measurement to maintain fault-distance, only Ω( √ d), as 4D toric codes do not have con- stant soundness [78, 82], see Remark 6.4. Remark 8.2. Lattice surgery with ‘geometrically enhanced’ 4D toric codes was studied in Ref. [107, Sec. V]. There, the X-distance of A• was called the ‘boundary distance’, and similar arguments about the number of rounds required for fault tolerance were employed. The lattice surgery there is performed in an ‘ancilla-free’ manner, where no new data qubits are used; only new checks. This is a quotient rather than a mapping cone, see Refs. [53, 54, 64]. 8.2 Bivariate bicycle codes Bivariate bicycle (BB) codes [86] are a class of Abelian 2-block group-algebra (2BGA) codes [18, 112], and as such are lifted product and balanced product codes [16, 17, 113]. They are of substantial practical interest, as some bivariate bicycle codes are known to exhibit high pseudo-thresholds and low logical error rates under circuit noise and suitable decoders, while having a higher rate than surface codes [63, 86], albeit requiring more connectivity. BB codes can also be seen as toric codes boosted to have limited non-local connectivity [114]. It is immediate that full block reading can be used to perform parallel Z ⊗Z or X ⊗X measurements between two BB codes using n + mZ = n + mX = 3 2n new qubits, including check qubits, while maintaining the distance and LDPC property. For instance, the J144, 12, 12K gross code, which uses 288 total qubits as a quantum memory when including check qubits, requires 216 additional qubits in total for a full block reading. This is lower than for the same set of parallel measurements in Ref. [54, Sec. 3.3], which used 360 total new qubits (216 new data qubits and 144 new check qubits), increased the check weight of the deformed code to 12, and required O(d) rounds of syndrome measurement. Thus even a naive application of block reading can reduce the spacetime overhead of logical measurements by an order of magnitude compared to conventional surgery, where the connectivity allows for it. We suspect that one can perform partial block readings using subcodes of the gross code, and similar BB codes, in order to address desired logical qubits and reduce space overheads while maintaining fast and parallel logical measurement at high fault distance, and retaining a degree of addressability. As this requires substantial numerics we defer this to future work. 48 9 Equivalence between surgery and homomorphic measurement The abstract relationship between (a) surgery by an auxiliary system and (b) homomorphic measurement [42] was folklore prior to this work, and indeed is implied by Ref. [58]. That is, given a chain map one can always construct a mapping cone and vice versa. The relationship we show in this section is more concrete: given the circuit for homomorphic measurement on a CSS code, it can always be rewritten into a logical measurement by surgery using a mapping cone (called homological measurement in Ref. [58]) and vice versa. Theorem 9.1. Let C• be a CSS code with a homomorphic measurement procotol specified by a chain map f• : A• →C•. Then the circuit corresponding to the homomorphic measurement can be rewritten to a surgery specified by cone(f•). Proof. See Appendix D. Corollary 9.2. Let C• be a CSS code with a surgery protocol specified by cone(f•) for a chain map f• : A• →C•. Then the circuit corresponding to the surgery protocol can be rewritten to a homomomorphic measurement specified by f•. Proof. See Appendix D. These proofs are performed using the ZX-calculus [115], a formal calculus for rewriting tensor networks on qubits. This is a convenient formalism, as the rewriting requires converting checks and data qubits in the homomorphic measurements into data qubits and checks respectively in the surgery procedure. In essence, the time and space axes of the ancilla circuit are inverted. However, the rewriting does not generally preserve fault-distance, unlike the rewrites in Ref. [116, 117]. The reason why converting a surgery to a homomorphic measurement does not generally preserve fault-distance is because having a low timelike distance in a surgery protocol can be resolved by measuring for multiple rounds. However, the equiv- alent homomorphic measurement has low space distance, which is fatal. We expect that one can prepare many of those ancilla states and measure repeatedly, in the style of a Shor measurement [5]. However, this is no longer a homomorphic measurement in the sense defined in Ref. [42]. Conversely, we do not know whether rewriting any homomorphic measurement to a hypergraph surgery preserves fault-distance. Our last results focus on general bounds on fast measurement by an auxiliary system. The results are insensitive to the nature of the auxiliary system: whether it is surgery, homomorphic measurement or any other scheme. Proposition 9.3. Let Q be a quantum LDPC code. Let S be a set of logical operator representatives measured at the same time by one or more auxiliary system for T syndrome rounds. Assume that the correct logical measurement outcomes of S are not known in advance. Let f be the lowest weight data qubit fault in Q such that: • f flips at least one logical representative in S. • If f flips a logical representative in S then it flips all other representatives in the same equivalence class in S. Let w be the weight of f, and s be the syndrome weight of f in Q. Then the phenomeno- logical fault-distance of the measurement protocol is upper bounded by 2w + Ts. 49 Proof. See Appendix E. When Q is CSS and the set S forms a Z-type subcode A•, w is the X-distance of A•. Theorem 1.6. Any logical measurement protocol performed in o(d) rounds on a quan- tum LDPC code by an auxiliary system requires connections to more than one logical representative to maintain phenomenological fault distance Ω(d), unless the correct logical measurement outcome is known in advance. Proof. See Appendix E. Importantly, for fast surgery it is not enough for the deformed code in the idling operation to be a single-shot code. Because the correct outcomes of the new checks are not generally known in advance, many connections between representatives of the original code and the auxiliary hypergraph are essential to correct for measurement errors on these new checks. Indeed, our earlier results on full and partial block reading with distance d subcodes imply that this is the only thing that is required, assuming that the auxiliary hypergraph is sensibly constructed and that sufficent rounds of padding are used. 10 Discussion In this work we have introduced a variety of new, general LDPC code surgeries which reduce the time overhead of logical measurement below O(d) rounds and allow the mea- surement of logical operators in parallel. We generalised prior work [55–58], which used expansion properties of hypergraphs to efficiently measure individual logical operators, to use soundness instead, enabling measurement of many logical operators in parallel. We elucidated the connection between homomorphic measurement and surgery, and gave general bounds on the fault distance for any logical measurement by an auxiliary system. To prove fault distances for fast surgery in Appendix A we developed an extension of the fault complexes formalism [60] by taking mapping cones on idling spacetime volumes, which may be of independent interest. Our work raises many further questions. First, when is it possible to efficiently de- vise a sparse auxiliary hypergraph H with high modular expansion even if the original subcodes have low soundness? In the case of individual logical operator representatives this is straightforward by constructing an expander graph, but for more complicated par- allel measurements involving many representatives the numerical construction of suitable hypergraphs is challenging for large codes, and brute-force branching techniques [59, 61] have formidable constant factor overheads. Similarly, when H is a graph one can appeal to the Decongestion Lemma [118, App. A]. When H is a small hypergraph one can typically devise a sparse cycle basis for H by numerical brute-force [55], but for large codes such a method becomes impractical. Are there families of LDPC codes for which such bases are efficient to construct, or unnecessary entirely? We expect this to be true of any code that is geometrically local in a fixed spatial dimension. In fact, constant-time surgery operations raise the intriguing possibility of avoiding the need for a sparse cycle basis at all. This is because cycle checks do not need to be explicitly measured as they can be inferred from the initialization and readout steps, forming detectors that have a constant extent in the time direction. These detectors must provide fault-tolerance, but need not be strictly local. For instance, see the nonlocal hierarchical fault-tolerant detector complexes derived from local measurements in Ref. [109]. 50 An issue that arises when using hypergraphs is that the method of universal adapters [57] to connect heterogeneous subcodes between codeblocks does not generally apply. This is because the SkipTree algorithm assumes that the auxiliary system is a graph, rather than a hypergraph. We do not know whether generalisations of the algorithm can be invented to connect heterogeneous hypergraphs for product measurements, either in full generality or for specific cases. Here, we have focused on hypergraph measurements that only measure local checks and global logicals, i.e. they do not locally deform the code after the measurement. It is possible to relax these requirements, which leads to a setting of dynamical local codes. Could this lead to more addressable fast surgery without sacrificing performance? Our proofs concerning interpolated partial block reading in Sec. 6 used a strict defini- tion of soundness, see Def. 2.11, which we believe can be loosened to a ‘small set soundness’, with a meaning similar to small set expansion or modular expansion. Our proofs regarding fault-distance require d rounds of syndrome measurement (‘padding’) before and after the surgery protocols in order to provably protect against timelike faults extending from one timelike boundary to the other. We expect that these are typically unnecessary given sen- sible fault-tolerant operations performed before and after the surgery protocol. However, we have not fully characterized when these rounds of padding can be reduced in number or removed. It is known that high-distance subcodes which address specific logical qubits exist for topological and tensor product codes [42, 43], but for other families of LDPC codes little is known. It would be interesting to consider algebraic methods to find high-distance subcodes for lifted or balanced product codes. All our surgery methods used auxiliary hypergraphs, and there are other approaches to surgery which are ‘ancilla-free’, meaning that they use no new data qubits [83, 107]. Such approaches fall under the umbrella of quotients of codes, see Refs. [53, 64], rather than mapping cones [58]. The arguments concerning time overhead appear to translate between the two approaches, but discerning a rigorous connection is left to future work. We suggest that one could take a pushout or similar quotient directly on the spacetime volumes described by fault complexes [60] in order to study such surgeries. We do not know whether it is possible to take an arbitrary quantum LDPC code and provably measure an arbitrary set of commuting logical Pauli products in constant time overhead and linear (up to polylog) space overhead while maintaining fault-distance and the LDPC property. Such a proof is possible for the case of O(d) time overhead [61], but the difficulties of finding appropriate subcodes and cycle bases require new methods. Conversely, it may be possible to show that such a generic protocol is impossible, see Ref. [119] for a related result in the classical setting. We have developed a formalism for studying the phenomenological faults in generic CSS LDPC code surgery operations using fault complexes. We expect it is possible to prove thresholds of surgery on spacetime volumes in this formalism using combinatorial methods on the relevant fault complexes [41]. Block reading entails metachecks, which means the addition of many more detectors to the decoding graph for a single round of measurement. We do not know whether there exist decoders which can efficiently decode such graphs quickly enough to be suitable for quantum hardware. In Ref. [83] a powerset decoder was used to decode the a geometrically enhanced 4D toric code memory, which also has metachecks. However, this was far too slow for realistic hardware, and we do not know how well the powerset decoder would handle surgery operations. More realistic decoders for high-dimensional topological and tensor product codes were studied in Refs. [60, 120]. 51 Finally, to bring the theory of block reading to application it will be necessary to perform many simulations and stability experiments [121], and numerically determine pseudo-thresholds and logical error rates of the surgery operations. 11 Acknowledgements We are grateful to Katie Chang and Louis Golowich for insightful discussions and for spot- ting an error in an earlier version of this paper. A.C. is employed by Xanadu and would like to thank Ben Ide, Eric Sabo, Timo Hillmann, Michael Vasmer and Guillaume Dauphinais for helpful discussions. Z.H. is supported by MIT Department of Mathematics and thanks Harry Zhou, Tomas Jochym-O’Connor and Guanyu Zhu for helpful discussions. D.J.W. is supported by the Australian Research Council Discovery Early Career Research Award (DE220100625). T.J.Y. thanks Qian Xu for sharing his perspective that homomorphic measurements and code surgery should be in some sense equivalent. We used TikZit [122] to generate the Tanner graph figures in this paper. References [1] Peter W Shor. Scheme for reducing decoherence in quantum computer memory. Physical review A, 52(4):R2493, 1995. [2] Daniel Gottesman. Stabilizer codes and quantum error correction. PhD thesis, California Institute of Technology, 1997. URL https://thesis.library.caltech. edu/2900/. [3] A Yu Kitaev. Quantum computations: algorithms and error correction. Russian Mathematical Surveys, 52(6):1191–1249, Dec 1997. ISSN 1468-4829. DOI: 10.1070/rm1997v052n06abeh002155. URL http://dx.doi.org/10.1070/ RM1997v052n06ABEH002155. [4] Dorit Aharonov and Michael Ben-Or. Fault-tolerant quantum computation with con- stant error. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pages 176–188, 1997. [5] P.W. Shor. Fault-tolerant quantum computation. In Proceedings of 37th Conference on Foundations of Computer Science, pages 56–65, 1996. DOI: 10.1109/SFCS.1996.548464. URL https://ieeexplore.ieee.org/document/ 548464. [6] S. B. Bravyi and A. Yu. Kitaev. Quantum codes on a lattice with boundary. arXiv:quant-ph/9811052, 1998. [7] A.Yu. Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, Jan 2003. ISSN 0003-4916. DOI: 10.1016/s0003-4916(02)00018-0. URL http://dx.doi.org/10.1016/S0003-4916(02)00018-0. [8] Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. Topological quantum memory. Journal of Mathematical Physics, 43(9):4452–4505, Sep 2002. ISSN 1089-7658. DOI: 10.1063/1.1499754. URL http://dx.doi.org/10.1063/1. 1499754. [9] H´ector Bomb´ın. Gauge color codes: optimal transversal gates and gauge fixing in topological stabilizer codes. New Journal of Physics, 17(8):083002, 2015. [10] Topological code. In Victor V. Albert and Philippe Faist, editors, The Error Cor- rection Zoo. 2022. URL https://errorcorrectionzoo.org/c/topological. [11] Sergey Bravyi, David Poulin, and Barbara Terhal. Tradeoffs for Reliable Quan- tum Information Storage in 2D Systems. Phys. Rev. Lett., 104:050503, Feb 52 2010. DOI: 10.1103/PhysRevLett.104.050503. URL https://link.aps.org/doi/ 10.1103/PhysRevLett.104.050503. [12] Quantum ldpc (qldpc) code. In Victor V. Albert and Philippe Faist, editors, The Error Correction Zoo. 2025. URL https://errorcorrectionzoo.org/c/qldpc. [13] C. Gidney. Stim: a fast stabilizer circuit simulator. Quantum, 5:497, Jul 2021. ISSN 2521-327X. DOI: 10.22331/q-2021-07-06-497. URL https://doi.org/10.22331/ q-2021-07-06-497. [14] Jean-Pierre Tillich and Gilles Z´emor. Quantum LDPC codes with positive rate and minimum distance proportional to the square root of the blocklength. IEEE Transactions on Information Theory, 60(2):1193–1202, 2013. [15] Alexey A Kovalev and Leonid P Pryadko. Quantum kronecker sum-product low- density parity-check codes with finite rate. Physical Review A—Atomic, Molecular, and Optical Physics, 88(1):012311, 2013. [16] Pavel Panteleev and Gleb Kalachev. Degenerate Quantum LDPC Codes With Good Finite Length Performance. Quantum, 5:585, Nov 2021. ISSN 2521- 327X. DOI: 10.22331/q-2021-11-22-585. URL https://doi.org/10.22331/ q-2021-11-22-585. [17] Nikolas P. Breuckmann and Jens N. Eberhardt. Balanced Product Quantum Codes. IEEE Transactions on Information Theory, 67(10):6653–6674, 2021. DOI: 10.1109/TIT.2021.3097347. [18] Hsiang-Ku Lin and Leonid P Pryadko. Quantum two-block group algebra codes. Physical Review A, 109(2):022407, 2024. [19] Sergey Bravyi, Oliver Dial, Jay M. Gambetta, Dar´ıo Gil, and Zaira Nazario. The future of quantum computing with superconducting qubits. Journal of Applied Physics, 132(16), Oct 2022. ISSN 1089-7550. DOI: 10.1063/5.0082975. URL http://dx.doi.org/10.1063/5.0082975. [20] Dolev Bluvstein, Simon J Evered, Alexandra A Geim, Sophie H Li, Hengyun Zhou, Tom Manovitz, Sepehr Ebadi, Madelyn Cain, Marcin Kalinowski, Dominik Hangleiter, et al. Logical quantum processor based on reconfigurable atom ar- rays. Nature, 626(7997):58–65, 2024. URL https://www.nature.com/articles/ s41586-023-06927-3. [21] PsiQuantum Team. A manufacturable platform for photonic quantum computing. Nature, pages 1–3, 2025. [22] Qian Xu, J. Pablo Bonilla Ataides, Christopher A. Pattison, Nithin Raveendran, Dolev Bluvstein, Jonathan Wurtz, Bane Vasi´c, Mikhail D. Lukin, Liang Jiang, and Hengyun Zhou. Constant-overhead fault-tolerant quantum computation with reconfigurable atom arrays. Nature Physics, 20(7):1084–1090, Apr 2024. ISSN 1745-2481. DOI: 10.1038/s41567-024-02479-z. URL http://dx.doi.org/10.1038/ s41567-024-02479-z. [23] S. Bravyi, A. W. Cross, J. M. Gambetta, D. Maslov, P. Rall, and T. J. Yoder. High- threshold and low-overhead fault-tolerant quantum memory. Nature, 627(8005): 778–782, Mar 2024. ISSN 1476-4687. DOI: 10.1038/s41586-024-07107-7. URL http: //dx.doi.org/10.1038/s41586-024-07107-7. [24] Nikolas P. Breuckmann and Simon Burton. Fold-Transversal Clifford Gates for Quantum Codes. Quantum, 8:1372, Jun 2024. ISSN 2521-327X. DOI: 10.22331/q- 2024-06-13-1372. URL https://doi.org/10.22331/q-2024-06-13-1372. [25] A. O. Quintavalle, P. Webster, and M. Vasmer. Partitioning qubits in hypergraph product codes to implement logical gates. Quantum, 7(1153), 2023. 53 [26] J. N. Eberhardt and V. Steffan. Logical Operators and Fold-Transversal Gates of Bivariate Bicycle Codes. arXiv:2407.03973v1, 2024. [27] G. Zhu, S. Sikander, E. Portnoy, A. W. Cross, and B. J. Brown. Non-Clifford and parallelizable fault-tolerant logical gates on constant and almost-constant rate homological quantum LDPC codes via higher symmetries. arXiv:2310.16982, 2023. [28] Thomas R Scruby, Arthur Pesah, and Mark Webster. Quantum rainbow codes. arXiv preprint arXiv:2408.13130, 2024. [29] Nikolas P Breuckmann, Margarita Davydova, Jens N Eberhardt, and Nathanan Tantivasadakarn. Cups and Gates I: Cohomology invariants and logical quantum operations. arXiv preprint arXiv:2410.16250, 2024. [30] Po-Shen Hsin, Ryohei Kobayashi, and Guanyu Zhu. Classifying Logical Gates in Quantum Codes via Cohomology Operations and Symmetry. arXiv preprint arXiv:2411.15848, 2024. [31] Ting-Chun Lin. Transversal non-Clifford gates for quantum LDPC codes on sheaves. arXiv preprint arXiv:2410.14631, 2024. [32] Louis Golowich and Ting-Chun Lin. Quantum LDPC Codes with Transversal Non- Clifford Gates via Products of Algebraic Codes. arXiv preprint arXiv:2410.14662, 2024. [33] Alexander J Malcolm, Andrew N Glaudell, Patricio Fuentes, Daryus Chandra, Alexis Schotte, Colby DeLisle, Rafael Haenel, Amir Ebrahimi, Joschka Roffe, Ar- manda O Quintavalle, et al. Computing Efficiently in QLDPC Codes. arXiv preprint arXiv:2502.07150, 2025. [34] Christophe Vuillot and Nikolas P Breuckmann. Quantum pin codes. IEEE Trans- actions on Information Theory, 68(9):5955–5974, 2022. [35] Hasan Sayginel, Stergios Koutsioumpas, Mark Webster, Abhishek Rajput, and Dan E Browne. Fault-Tolerant Logical Clifford Gates from Code Automorphisms. arXiv preprint arXiv:2409.18175, 2024. DOI: 10.48550/arxiv.2409.18175. URL https://arxiv.org/abs/2409.18175. [36] Noah Berthusen, Michael J Gullans, Yifan Hong, Maryam Mudassar, and Shi Jie Samuel Tan. Automorphism gadgets in homological product codes. arXiv preprint arXiv:2508.04794, 2025. [37] Daniel Gottesman. Fault-tolerant quantum computation with constant overhead. arXiv preprint arXiv:1310.2984, 2013. [38] Omar Fawzi, Antoine Grospellier, and Anthony Leverrier. Constant Overhead Quantum Fault-Tolerance with Quantum Expander Codes. In 2018 IEEE 59th Annu. Symp. Found. Comput. Scie. (FOCS), volume 64, pages 106–114. ACM New York, NY, USA, Oct 2018. DOI: 10.1109/focs.2018.00076. URL https: //doi.org/10.1109%2Ffocs.2018.00076. [39] Shiro Tamiya, Masato Koashi, and Hayata Yamasaki. Polylog-time-and constant- space-overhead fault-tolerant quantum computation with quantum low-density parity-check codes. arXiv preprint arXiv:2411.03683, 2024. [40] Quynh T Nguyen and Christopher A Pattison. Quantum fault tolerance with constant-space and logarithmic-time overheads. arXiv preprint arXiv:2411.03632, 2024. [41] Zhiyang He, Quynh T. Nguyen, and Christopher A. Pattison. Composable quantum fault-tolerance, 2025. URL https://arxiv.org/abs/2508.08246. [42] Shilin Huang, Tomas Jochym-O’Connor, and Theodore J. Yoder. Homomorphic Log- ical Measurements. PRX Quantum, 4(3):030301, Jul 2023. DOI: 10.1103/PRXQuan- tum.4.030301. URL https://link.aps.org/doi/10.1103/PRXQuantum.4.030301. 54 [43] Qian Xu, Hengyun Zhou, Guo Zheng, Dolev Bluvstein, J Ataides, Mikhail D Lukin, and Liang Jiang. Fast and Parallelizable Logical Computation with Homological Product Codes. arXiv preprint arXiv:2407.18490, 2024. [44] Thiago Bergamaschi and Yunchao Liu. On fault tolerant single-shot logical state preparation and robust long-range entanglement. arXiv preprint arXiv:2411.04405, 2024. DOI: 10.48550/arxiv.2411.04405. URL https://arxiv.org/abs/2411. 04405. [45] Yifan Hong. Single-shot preparation of hypergraph product codes via dimension jump. Quantum, 9:1879, October 2025. ISSN 2521-327X. DOI: 10.22331/q-2025-10- 07-1879. URL https://doi.org/10.22331/q-2025-10-07-1879. [46] Christine Li, John Preskill, and Qian Xu. Transversal dimension jump for product qLDPC codes. arXiv:2510.07269, oct 2025. URL https://arxiv.org/pdf/2510. 07269. [47] H´ector Bomb´ın and Miguel Angel Martin-Delgado. Quantum measurements and gates by code deformation. Journal of Physics A: Mathematical and Theoretical, 42 (9):095302, 2009. [48] Nikolas P Breuckmann, Christophe Vuillot, Earl Campbell, Anirudh Krishna, and Barbara M Terhal. Hyperbolic and semi-hyperbolic surface codes for quantum stor- age. Quantum Science and Technology, 2(3):035007, Aug 2017. ISSN 2058-9565. DOI: 10.1088/2058-9565/aa7d3b. URL http://dx.doi.org/10.1088/2058-9565/ aa7d3b. [49] Ali Lavasani and Maissam Barkeshli. Low overhead Clifford gates from joint measurements in surface, color, and hyperbolic codes. Physical Review A, 98 (5), Nov 2018. ISSN 2469-9934. DOI: 10.1103/physreva.98.052319. URL http: //dx.doi.org/10.1103/PhysRevA.98.052319. [50] Tomas Jochym-O’Connor. Fault-tolerant gates via homological product codes. Quantum, 3:120, Feb 2019. ISSN 2521-327X. DOI: 10.22331/q-2019-02-04-120. URL https://doi.org/10.22331/q-2019-02-04-120. [51] Anirudh Krishna and David Poulin. Fault-tolerant gates on hypergraph product codes. Physical Review X, 11(1):011023, 2021. [52] Lawrence Z. Cohen, Isaac H. Kim, Stephen D. Bartlett, and Benjamin J. Brown. Low-overhead fault-tolerant quantum computing using long-range connectivity. Sci- ence Advances, 8(20):eabn1717, May 2022. ISSN 2375-2548. DOI: 10.1126/sci- adv.abn1717. URL http://dx.doi.org/10.1126/sciadv.abn1717. [53] Alexander Cowtan and Simon Burton. CSS code surgery as a universal construction. Quantum, 8:1344, 2024. [54] A. Cowtan. SSIP: automated surgery with quantum LDPC codes. arXiv:2407.09423, 2024. [55] Andrew Cross, Zhiyang He, Patrick Rall, and Theodore Yoder. Improved QLDPC Surgery: Logical Measurements and Bridging Codes. arXiv preprint arXiv:2407.18393, 2024. [56] Dominic J Williamson and Theodore J Yoder. Low-overhead fault-tolerant quantum computation by gauging logical operators. arXiv preprint arXiv:2410.02213, 2024. [57] Esha Swaroop, Tomas Jochym-O’Connor, and Theodore J Yoder. Universal adapters between quantum LDPC codes. arXiv preprint arXiv:2410.03628, 2024. [58] Benjamin Ide, Manoj G Gowda, Priya J Nadkarni, and Guillaume Dauphinais. Fault-tolerant logical measurements via homological measurement. arXiv preprint arXiv:2410.02753, 2024. 55 [59] Guo Zhang and Ying Li. Time-efficient logical operations on quantum LDPC codes. arXiv preprint arXiv:2408.01339, 2024. [60] Timo Hillmann, Guillaume Dauphinais, Ilan Tzitrin, and Michael Vasmer. Single- shot and measurement-based quantum error correction via fault complexes. arXiv preprint arXiv:2410.12963, 2024. [61] Alexander Cowtan, Zhiyang He, Dominic J. Williamson, and Theodore J. Yo- der. Parallel Logical Measurements via Quantum Code Surgery. arXiv preprint arXiv:2503.05003, 2025. [62] Zhiyang He, Alexander Cowtan, Dominic J. Williamson, and Theodore J. Yoder. Ex- tractors: QLDPC architectures for efficient pauli-based computation. arXiv preprint arXiv:2503.10390, 2025. [63] Theodore J. Yoder, Eddie Schoute, Patrick Rall, Emily Pritchett, Jay M. Gambetta, Andrew W. Cross, Malcolm Carroll, and Michael E. Beverland. Tour de gross: A modular quantum computer based on bivariate bicycle codes, 2025. URL https: //arxiv.org/abs/2506.03094. [64] Cl´ement Poirson, Joschka Roffe, and Robert I. Booth. Engineering CSS surgery: compiling any CNOT in any code. arXiv preprint arXiv:2505.01370, 2025. [65] Guo Zheng, Liang Jiang, and Qian Xu. High-rate surgery: towards constant- overhead logical operations. arXiv preprint arXiv:2510.08523, 2025. [66] Qian Xu, Hengyun Zhou, Dolev Bluvstein, Madelyn Cain, Marcin Kalinowski, John Preskill, Mikhail D. Lukin, and Nishad Maskara. Batched high-rate logical oper- ations for quantum LDPC codes. oct 2025. URL http://arxiv.org/abs/2510. 06159. [67] Nou´edyn Baspin, Lucas Berent, and Lawrence Z. Cohen. Fast surgery for quantum ldpc codes, 2025. URL https://arxiv.org/abs/2510.04521. [68] Shi Jie Samuel Tan, Yifan Hong, Ting-Chun Lin, Michael J. Gullans, and Min- Hsiu Hsieh. Single-Shot Universality in Quantum LDPC Codes via Code-Switching. arXiv:2510.08552, oct 2025. URL https://arxiv.org/pdf/2510.08552. [69] Louis Golowich, Kathleen Chang, and Guanyu Zhu. Constant-overhead addressable gates via single-shot code switching. arXiv preprint arXiv:2510.06760, 2025. [70] Christophe Vuillot, Lingling Lao, Ben Criger, Carmen Garc´ıa Almud´ever, Koen Bertels, and Barbara M Terhal. Code deformation and lattice surgery are gauge fixing. New Journal of Physics, 21(3):033028, 2019. [71] Mathew B. Hastings. Weight reduction for quantum codes. Quant. Inf. Comput., 17(15-16):1307–1334, 2017. DOI: 10.26421/QIC17.15-16-4. [72] M. B. Hastings. On Quantum Weight Reduction. arXiv:2102.10030, 2021. [73] Eric Sabo, Lane G. Gunderman, Benjamin Ide, Michael Vasmer, and Guillaume Dauphinais. Weight-Reduced Stabilizer Codes with Lower Overhead. PRX Quan- tum, 5:040302, Oct 2024. DOI: 10.1103/PRXQuantum.5.040302. URL https: //link.aps.org/doi/10.1103/PRXQuantum.5.040302. [74] Andrew C. Yuan. Unified Framework for Quantum Code Embedding. jul 2025. URL https://arxiv.org/pdf/2507.05361. [75] S. Bravyi, G. Smith, and J. A. Smolin. Trading Classical and Quantum Com- putational Resources. Physical Review X, 6(2), Jun 2016. ISSN 2160-3308. DOI: 10.1103/physrevx.6.021043. URL http://dx.doi.org/10.1103/PhysRevX. 6.021043. [76] D. Litinski. A Game of Surface Codes: Large-Scale Quantum Computing with Lattice Surgery. Quantum, 3:128, Mar 2019. ISSN 2521-327X. DOI: 10.22331/q- 2019-03-05-128. URL http://dx.doi.org/10.22331/q-2019-03-05-128. 56 [77] H´ector Bomb´ın. Single-Shot Fault-Tolerant Quantum Error Correction. Physical Review X, 5(3), Sep 2015. ISSN 2160-3308. DOI: 10.1103/physrevx.5.031043. URL http://dx.doi.org/10.1103/PhysRevX.5.031043. [78] Earl T Campbell. A theory of single-shot error correction for adversarial noise. Quan- tum Science and Technology, 4(2):025006, Feb 2019. DOI: 10.1088/2058-9565/aafc8f. URL https://dx.doi.org/10.1088/2058-9565/aafc8f. [79] Shouzhen Gu, Eugene Tang, Libor Caha, Shin Ho Choe, Zhiyang He, and Aleksander Kubica. Single-shot decoding of good quantum LDPC codes. Communications in Mathematical Physics, 405(3):85, 2024. [80] Aleksander Kubica and Michael Vasmer. Single-shot quantum error correction with the three-dimensional subsystem toric code. Nature Communications, 13(1), Oct 2022. ISSN 2041-1723. DOI: 10.1038/s41467-022-33923-4. URL http://dx.doi. org/10.1038/s41467-022-33923-4. [81] Thomas R. Scruby, Timo Hillmann, and Joschka Roffe. High-threshold, low- overhead and single-shot decodable fault-tolerant quantum memory, 2024. URL https://arxiv.org/abs/2406.14445. [82] Armanda O. Quintavalle, Michael Vasmer, Joschka Roffe, and Earl T. Campbell. Single-shot error correction of three-dimensional homological product codes. PRX Quantum, 2:020340, Jun 2021. DOI: 10.1103/PRXQuantum.2.020340. URL https: //link.aps.org/doi/10.1103/PRXQuantum.2.020340. [83] David Aasen, Matthew B. Hastings, Vadym Kliuchnikov, Juan M. Bello-Rivas, Adam Paetznick, Rui Chao, Ben W. Reichardt, Matt Zanner, Marcus P. da Silva, Zhenghan Wang, and Krysta M. Svore. A topologically fault-tolerant quantum com- puter with four dimensional geometric codes, 2025. URL https://arxiv.org/abs/ 2506.15130. [84] Hengyun Zhou, Chen Zhao, Madelyn Cain, Dolev Bluvstein, Casey Duckering, Hong- Ye Hu, Sheng-Tao Wang, Aleksander Kubica, and Mikhail D. Lukin. Algorithmic Fault Tolerance for Fast Quantum Computing, 2024. URL https://arxiv.org/ abs/2406.17653. [85] Noah Shutty, Craig Gidney, and Oscar Higgott. Early-stop lattice surgery. To appear. URL https://yale.hosted.panopto.com/Panopto/Pages/Viewer.aspx? id=dbfe6994-f408-46e5-8227-b33001046e13. [86] Sergey Bravyi, Andrew W Cross, Jay M Gambetta, Dmitri Maslov, Patrick Rall, and Theodore J Yoder. High-threshold and low-overhead fault-tolerant quantum memory. Nature, 627(8005):778–782, 2024. [87] Titouan Carette, Dominic Horsman, and Simon Perdrix. SZX-Calculus: Scal- able Graphical Quantum Reasoning. In Peter Rossmanith, Pinar Heggernes, and Joost-Pieter Katoen, editors, 44th International Symposium on Mathemati- cal Foundations of Computer Science (MFCS 2019), volume 138 of Leibniz In- ternational Proceedings in Informatics (LIPIcs), pages 55:1–55:15, Dagstuhl, Ger- many, 2019. Schloss Dagstuhl – Leibniz-Zentrum f¨ur Informatik. ISBN 978-3-95977- 117-7. DOI: 10.4230/LIPIcs.MFCS.2019.55. URL https://drops.dagstuhl.de/ entities/document/10.4230/LIPIcs.MFCS.2019.55. [88] Nikolas P. Breuckmann and Jens Niklas Eberhardt. Quantum Low-Density Parity- Check Codes. PRX Quantum, 2:040101, Oct 2021. DOI: 10.1103/PRXQuan- tum.2.040101. URL https://link.aps.org/doi/10.1103/PRXQuantum.2.040101. [89] Pavel Panteleev and Gleb Kalachev. Asymptotically good Quantum and locally testable classical LDPC codes. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, page 375–388, New York, NY, 57 USA, 2022. Association for Computing Machinery. ISBN 9781450392648. DOI: 10.1145/3519935.3520017. URL https://doi.org/10.1145/3519935.3520017. [90] C. A. Weibel. An Introduction to Homological Algebra. Cambridge Studies in Advanced Mathematics, Cambridge University Press, 1994. doi:10.1017/CBO9781139644136. [91] Benjamin Audoux and Alain Couvreur. On tensor products of CSS codes. Ann. Inst. Henri Poincar´e Comb. Phys. Interact., 6:239–287, 2019. DOI: 10.4171/AIHPD/71. [92] Matt McEwen, Dave Bacon, and Craig Gidney. Relaxing Hardware Requirements for Surface Code Circuits using Time-dynamics. Quantum, 7:1172, November 2023. ISSN 2521-327X. DOI: 10.22331/q-2023-11-07-1172. URL https://doi.org/10. 22331/q-2023-11-07-1172. [93] Dorit Aharonov and Lior Eldar. Quantum Locally Testable Codes. SIAM Journal on Computing, 44(5):1230–1262, 2015. DOI: 10.1137/140975498. [94] Pavel Panteleev and Gleb Kalachev. Asymptotically good Quantum and locally testable classical LDPC codes. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, page 375–388, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450392648. DOI: 10.1145/3519935.3520017. URL https://doi.org/10.1145/3519935.3520017. [95] Anthony Leverrier, Vivien Londe, and Gilles Z´emor. Towards local testability for quantum coding. Quantum, 6:661, Feb 2022. ISSN 2521-327X. DOI: 10.22331/q- 2022-02-24-661. URL https://doi.org/10.22331/q-2022-02-24-661. [96] L. Eldar and A. W. Harrow. Local Hamiltonians Whose Ground States Are Hard to Approximate. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 427–438, Los Alamitos, CA, USA, Oct 2017. IEEE Computer Society. DOI: 10.1109/FOCS.2017.46. URL https://doi.ieeecomputersociety. org/10.1109/FOCS.2017.46. [97] Andrew Cross, Zhiyang He, Anand Natarajan, Mario Szegedy, and Guanyu Zhu. Quantum Locally Testable Code with Constant Soundness. Quantum, 8:1501, Oct 2024. ISSN 2521-327X. DOI: 10.22331/q-2024-10-18-1501. URL https://doi.org/ 10.22331/q-2024-10-18-1501. [98] Adam Wills, Ting-Chun Lin, and Min-Hsiu Hsieh. Tradeoff constructions for quan- tum locally testable codes. IEEE Transactions on Information Theory, 2024. [99] Irit Dinur, Ting-Chun Lin, and Thomas Vidick. Expansion of high-dimensional cubical complexes: with application to quantum locally testable codes. In 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS), pages 379–385. IEEE, 2024. [100] Gleb Kalachev and Pavel Panteleev. Maximally extendable product codes are good coboundary expanders. arXiv preprint arXiv:2501.01411, 2025. [101] Alexander Cowtan and Shahn Majid. Algebraic aspects of boundaries in the kitaev quantum double model. Journal of Mathematical Physics, 64(10):102203, 10 2023. ISSN 0022-2488. DOI: 10.1063/5.0127285. URL https://doi.org/10.1063/5. 0127285. [102] Sergey Bravyi, Isaac Kim, Alexander Kliesch, and Robert Koenig. Adaptive constant-depth circuits for manipulating non-abelian anyons. may 2022. URL http://arxiv.org/abs/2205.01933. [103] Anasuya Lyons, Chiu Fan Bowen Lo, Nathanan Tantivasadakarn, Ashvin Vish- wanath, and Ruben Verresen. Protocols for Creating Anyons and Defects via Gaug- ing. nov 2024. URL http://arxiv.org/abs/2411.04181. 58 [104] Yuanjie Ren, Nathanan Tantivasadakarn, and Dominic J. Williamson. Efficient Preparation of Solvable Anyons with Adaptive Quantum Circuits. Physical Review X, 15(3), aug 2025. DOI: 10.1103/b9hf-gx4f. URL http://arxiv.org/abs/2411. 04985http://dx.doi.org/10.1103/b9hf-gx4f. [105] Margarita Davydova, Andreas Bauer, Julio C. Magdalena de la Fuente, Mark Webster, Dominic J. Williamson, and Benjamin J. Brown. Universal fault tol- erant quantum computation in 2d without getting tied in knots. arXiv preprint arxiv:2503.15751, 2025. [106] Niel de Beaudrap and Dominic Horsman. The ZX calculus is a language for surface code lattice surgery. Quantum, 4:218, January 2020. ISSN 2521- 327X. DOI: 10.22331/q-2020-01-09-218. URL https://doi.org/10.22331/ q-2020-01-09-218. [107] David Aasen, Jeongwan Haah, Matthew B. Hastings, and Zhenghan Wang. Geo- metrically enhanced topological quantum codes, 2025. URL https://arxiv.org/ abs/2505.10403. [108] M. E. Beverland, S. Huang, and V. Kliuchnikov. Fault tolerance of stabilizer chan- nels. arXiv preprint arXiv:2401.12017, 2024. [109] Daniel Litinski. Blocklet concatenation: Low-overhead fault-tolerant protocols for fusion-based quantum computation. jun 2025. URL http://arxiv.org/abs/2506. 13619. [110] Dominic Horsman, Austin G Fowler, Simon Devitt, and Rodney Van Meter. Surface code quantum computing by lattice surgery. New Journal of Physics, 14(12):123011, Dec 2012. ISSN 1367-2630. DOI: 10.1088/1367-2630/14/12/123011. URL http: //dx.doi.org/10.1088/1367-2630/14/12/123011. [111] Daniel Litinski and Felix von Oppen. Lattice Surgery with a Twist: Simplifying Clifford Gates of Surface Codes. Quantum, 2:62, May 2018. ISSN 2521-327X. DOI: 10.22331/q-2018-05-04-62. URL https://doi.org/10.22331/q-2018-05-04-62. [112] Renyu Wang, Hsiang-Ku Lin, and Leonid P Pryadko. Abelian and non-Abelian quantum two-block codes. In 2023 12th International Symposium on Topics in Coding (ISTC), pages 1–5. IEEE, 2023. [113] Pavel Panteleev and Gleb Kalachev. Quantum LDPC Codes With Almost Lin- ear Minimum Distance. IEEE Transactions on Information Theory, 68(1):213–229, 2022. DOI: 10.1109/TIT.2021.3119384. [114] Zijian Liang, Ke Liu, Hao Song, and Yu-An Chen. Generalized toric codes on twisted tori for quantum error correction, 2025. URL https://arxiv.org/abs/ 2503.03827. [115] Bob Coecke and Ross Duncan. Interacting quantum observables: categori- cal algebra and diagrammatics. New Journal of Physics, 13(4):043016, apr 2011. DOI: 10.1088/1367-2630/13/4/043016. URL https://dx.doi.org/10.1088/ 1367-2630/13/4/043016. [116] Benjamin Rodatz, Boldizs´ar Po´or, and Aleks Kissinger. Fault tolerance by construc- tion, 2025. URL https://arxiv.org/abs/2506.17181. [117] Maximilian R¨usch, Benjamin Rodatz, and Aleks Kissinger. Completeness for Fault Equivalence of Clifford ZX Diagrams. oct 2025. URL https://arxiv.org/pdf/ 2510.08477. [118] Michael Freedman and Matthew Hastings. Building manifolds from quantum codes. Geometric and Functional Analysis, 31(4):855–894, 2021. [119] Anirudh Krishna and Gilles Z´emor. Tradeoffs on the volume of fault-tolerant circuits. arXiv:2510.03057, oct 2025. URL https://arxiv.org/abs/2510.03057v1. 59 [120] Oscar Higgott and Nikolas P. Breuckmann. Improved single-shot decoding of higher-dimensional hypergraph-product codes. PRX Quantum, 4:020332, May 2023. DOI: 10.1103/PRXQuantum.4.020332. URL https://link.aps.org/doi/ 10.1103/PRXQuantum.4.020332. [121] Craig Gidney. Stability Experiments: The Overlooked Dual of Memory Experiments. Quantum, 6:786, August 2022. ISSN 2521-327X. DOI: 10.22331/q-2022-08-24-786. URL https://doi.org/10.22331/q-2022-08-24-786. [122] TikZit. https://tikzit.github.io/index.html. [123] Peter-Jan H. S. Derks, Alex Townsend-Teague, Ansgar G. Burchards, and Jens Eisert. Designing fault-tolerant circuits using detector error models, 2024. URL https://arxiv.org/abs/2407.13826. [124] Andrew J. Landahl, Jonas T. Anderson, and Patrick R. Rice. Fault-tolerant quan- tum computing with color codes, 2011. URL https://arxiv.org/abs/1108.5738. https://doi.org/10.48550/arXiv.1108.5738. [125] John van de Wetering. Zx-calculus for the working quantum computer scientist, 2020. URL https://arxiv.org/abs/2012.13966. A Proofs concerning fault-distance In this appendix we provide detailed proofs of the phenomenological fault-distance lower bounds described in the main text. The standard approach to proving phenomenological fault-distance is to construct a measurement circuit, where Pauli error faults can occur at integer timesteps and check faults can occur at half-integer timesteps, with detectors [92, 123], and then to show that any fault that does not trigger any detectors and is not a product of spacetime stabilisers must have at least a given weight d. It is difficult to prove the fault-distances which we require in that setting, and so we use an alternative, almost equivalent one: that of fault complexes [60]. This enables us to use homology to classify the logical faults and bound their corresponding weights. In this setting, Pauli error faults of Z type and Z check errors occur on even timesteps, while Pauli error faults of X type and X check errors occur on odd timesteps. A sketch of the proofs in this section works as follows: we construct fault complexes for the idling operation, and then modify them using mapping cones to obtain fault complexes for various surgery protocols. These result in spacetime volumes where the code is initially in the idling operation for d rounds, then undergoes a series of deformations to perform surgery with no padding between surgery rounds, and finally all auxiliary systems are measured out and the code returns to the idling operation for d rounds. Because the spacetime volumes are defined in terms of chain complexes, and the deformations are described in terms of mapping cones, we can construct long exact sequences which relate the classes of logical faults in the spacetime volumes for surgery to the classes of logical faults in the idling spacetime volume. We then organise these classes of logical faults and show that they each must have weight at least d. Our techniques may be of independent interest to study other protocols in circuit-based or measurement-based quantum computing. Before we move to discussing fault complexes, we give some elementary homological background which is required. The Snake Lemma [90, Lem. 1.3.2] and rank-nullity together imply that for a mapping cone cone(f•) defined by a chain map f• : A• →C•, we have |Hi(cone(f))| = |Hi(C)| + |Hi−1(A)| −|imHi(f)| −|imHi−1(f)|, where |V | is shorthand for dim V when V is a vector space. 60 In more detail, the Snake Lemma dictates that for the mapping cone cone(f•) we have a long exact sequence 14: · · · Hi(A) Hi(C) Hi(cone(f)) Hi−1(A) Hi−1(C) · · · Hi(f) ι∗ π∗ Hi−1(f) In this sequence the Hi(f) are the maps between homology spaces Hi(A) →Hi(C), which are well-defined by the commutative diagram in Eq. 1. ι∗is a map Hi(C) →Hi(cone(f)) generated by the Snake Lemma. In homological measurement [58], this map at i = 1 takes logical operators in the original code to the logical operators in the deformed code when performing surgery. In that case, the map ι∗ will be a surjection, as some of the logical qubits from the original code are measured out and no new logical qubits are added when the auxiliary hypergraph’s cycles are gauge- fixed. π∗is a map Hi(cone(f)) →Hi−1(A). In homological measurement [58], this map at i = 1 picks out the new contributions to the logical space of the deformed code from Hi−1(A); exactness at Hi(cone(f)) shows that the only new logical qubits which can arise are from cycles in the hypergraph. In our case we will take mapping cones on spacetime volumes, rather than just codes, but a similar intuition applies. For any linear map T : V →W rank-nullity implies that |V | = | ker T| + |imT|, so we can compute |Hi(cone(f))| = | ker π∗| + |imπ∗| = |imι∗| + |imπ∗| = |Hi(C)| −| ker ι∗| + | ker Hi−1(f)| = |Hi(C)| −|imHi(f)| + |Hi−1(A)| −|imHi−1(f)|. (4) The premise of all of our proofs is that in a CSS-type code surgery, where the initial code and deformed codes are all CSS codes, all logical faults must be either of Z type in spacetime, composed of Z Pauli errors and X check errors, or X type, composed of X Pauli errors and Z check errors. As any logical fault composed of a product of these is at least as high weight as the corresponding logical fault of either Z or X type, we need only show that the weights of all Z and X logical faults are at least d to prove that the fault-distance of a protocol is at least d. We will frequently make use of the fact that when Hi(f) is injective |imHi(f)| = |Hi(A)|, by definition. A.1 Fault complexes A fault complex is defined as a chain complex with 4 terms, F• = F3 F2 F1 F0 where by convention the F3 and F0 components correspond to Z and X detectors respec- tively. In fault-tolerant measurement-based quantum computing (MBQC), F2 corresponds to a set of X fault locations, i.e. locations in spacetime which can experience only X- type faults, while F1 corresponds to a set of Z-type fault locations. The fault complex is therefore in bijection with a graph state. When the graph state corresponds to a foliated CSS code, this can be reinterpreted in the circuit model, undergoing phenomenological noise. Now, F2 corresponds to a combined 14This means that the image of each map coincides with the kernel of the next map. 61 set of spacetime locations at which data qubits can experience X Pauli faults and also Z checks, which can experience measurement errors. F1 is the same but for Z Pauli faults and X checks. In this case, we can expand out a fault complex to have the following diagram: F1,1 F1,0 F1,2 F0,0 F0,2 F0,1 where components in the overall fault complex are given by taking direct sums vertically in the diagram, so we have F1,2 F1,1 ⊕F0,2 F1,0 ⊕F0,1 F0,0 Throughout this section every diagram drawn in the above form is assumed to have direct sums vertically in the diagram. 15 In the phenomenological circuit model: • F0,0 corresponds to X detectors. • F0,1 corresponds to locations for Z Pauli faults on data qubits. • F1,0 corresponds to locations for X check faults. • F0,2 corresponds to locations for Z check faults. • F1,1 corresponds to locations for X Pauli faults on data qubits. • F1,2 corresponds to Z detectors. Expressed as a commutative diagram the fault complex then has the structure: X Pauli faults X check faults Z detectors X detectors Z check faults Z Pauli faults F1,1 (X Pauli faults) F1,0 (X check faults) F1,2 (Z detectors) F0,0 (X detectors) F0,2 (Z check faults) F0,1 (Z Pauli faults) 15This is essentially passing to the “total complex” [90]. 62 Note that as a fault complex describes a graph state, it does not have the ability to describe morphisms, i.e. MBQC-based or circuit-based protocols with input or output qubits. The standard way to foliate a CSS code C• from the fault complex perspective is to take (R⊗C)•, where R• : R1 →R0 has the differential R being either the full-rank parity- check matrix for the repetition code or its dual. In the former, the code is initialised in the |0⟩state and measured out in the Z basis; in the latter, the code is initialised in the |+⟩state and measured out in the X basis. Hence the code ‘idles’ for l rounds, where l is the blocklength of the repetition code. If we do this, the fault complex has the explicit form: R1 ⊗C1 R1 ⊗C0 R1 ⊗C2 R0 ⊗C0 R0 ⊗C2 R0 ⊗C1 (5) In the circuit model we have, in order, • R0 ⊗C0 corresponds to X detectors. • R0 ⊗C1 corresponds to locations for Z Pauli error faults. • R1 ⊗C0 corresponds to locations for X check error faults. • R0 ⊗C2 corresponds to locations for Z check error faults. • R1 ⊗C1 corresponds to locations for X Pauli error faults. • R1 ⊗C2 corresponds to Z detectors. Note that R1 ⊗C1 and R0 ⊗C1 correspond to the same data qubits but at different points in time; in the circuit model this is purely convention, and any given data qubit can still undergo either type of error, Z or X. It is easy to compute the homologies using the K¨unneth formula. Then elements of H1(F) correspond to equivalence classes of logical Z faults in spacetime, composed of Z Pauli errors and X check errors, and elements of H2(F) to logical X faults, composed of X Pauli errors and Z check errors.16 For completeness, we have, • H1(R ⊗C) = H1(R) ⊗H0(C) ⊕H0(R) ⊗H1(C). • H2(R ⊗C) = H1(R) ⊗H1(C) ⊕H0(R) ⊗H2(C). When R• is the repetition code, we have H1(R ⊗C) = H1(R) ⊗H0(C) = {0, 1} ⊗C0/imHX H2(R ⊗C) = H1(R) ⊗H1(C) = {0, (1, 0, 0, · · · , 0)} ⊗H1(C), 16There are also the homology groups H1(F) and H4(F), corresponding to classes of detector triggers which are impossible. 63 Figure 5: A fault complex for a unit cell of the 2D surface code, starting and terminating at X-type boundaries. Time flows from bottom to top. where 0 and 1 are the all-zero and all-one vectors, and (1, 0, 0, · · · , 0) is a weight 1 vector representing the nonzero equivalence class in H1(R); any vector with odd weight is in the same class. When R• is dual to the repetition code we instead have H1(R ⊗C) = H0(R) ⊗H1(C) = {0, (1, 0, 0, · · · , 0)} ⊗H1(C) H2(R ⊗C) = H0(R) ⊗H2(C) = {0, 1} ⊗H2(C). In Fig. 5 we show the fault complex for a small surface code as a Tanner graph, where circles are fault locations and squares are detectors, and red, teal, yellow and blue indicate basis elements of F0, F1, F2, F3 respectively. Fault complexes are a useful algebraic framework for reasoning about fault distance of CSS codes in the phenomenological model: each location is a point in spacetime which can experience a fault, which is also a basis vector in a space (either F1 or F 2) and so weights of faults can be related directly to weights of vectors in the fault complex. We do not want to presume that the code is initialised and measured out in a given basis, because we want to prove that a given protocol has sufficient fault-distance no matter what other (assumed fault-tolerant) operations the logical circuit contains. One approach to this is to replace the parity-check matrix of R• with the parity-check matrix of a different repetition code, that of the cycle graph. This corresponds to ‘taking the trace’ of the circuit operation being performed, and ensures that errors cannot vanish on the initial and final boundaries. However, this leads to other, serious issues: a timelike fault which runs from the start to the finish of the spacetime volume is now a spacetime stabiliser, and there are other logical faults which we encounter that are equivalent to this fault, and so they will be unaccounted for. Instead, we modify the spacetime volume by adding checks to the start and end, which constitute entry points for timelike errors to enter and exit the volume: depending on whether R• is a full-rank repetition code or its dual, corresponding to initialising in the |0⟩or |+⟩state respectively, the new checks are Z checks or X checks. We accomplish this using mapping cones. Explicitly, when R• is the dual of the repetition code, so R• = Fl−1 2 →Fl 2 for some l determining the number of rounds in the 64 spacetime volume, the code is initialised in |+⟩and measured out in the X-basis; we then introduce new checks for timelike fault entrypoints by taking the mapping cone of the chain map g•, 0 0 0 C0 ⊕C0 F3 F2 F1 F0 g0 where g0 maps the first copy of C0 into the top layer of X-detectors in the fault complex, and the second copy of C0 into the bottom layer of X-detectors. Call the resultant fault complex F ′ = cone(g•). The unit cell from Fig. 5, once modified in this manner, is shown in Fig. 6. The consequent classes of logical faults in cone(g•) can be computed using the Snake Lemma, see Eq. 4. We have |H1(F ′)| = |H1(cone(g))| = |H1(F)| + |C0 ⊕C0| −|imH0(g)| = |H1(F)| + 2|C0| −|imH0(g)|. By the K¨unneth formula, H0(F) = H0(R) ⊗H0(C) = {0, (1, 0, 0, · · · , 0)} ⊗H0(C), so |H0(F)| = |H0(C)|. By inspection, every element of one layer of X-detectors at the bottom of the fault complex is in the image of g•; those at the top are in the same equivalence class. Hence |imH0(g)| = |H0(F)| = |H0(C)|, so |H1(F ′)| = |H1(cone(g))| = |H1(F)| + 2|C0| −|H0(C)|. We therefore add 2mX −rX new basis elements to H1(F), where mX = |C0| is the number of X checks in C• and rX = |H0(C)| is the number of redundant X checks in C•. All new elements of H1(F ′) correspond to logical faults which enter from a timelike boundary on the top or bottom; mX −rX of these can be immediately annihilated by spacelike errors on the boundary, or by spacelike errors on further layers. We illustrate this in Fig. 7, where incoming X check errors turn on X detectors, which can then be immediately switched off again by Z data qubit Pauli errors. The remaining mX are strings of X check failures running from the bottom to the top boundary, illustrated in Fig. 8. In both cases, all new logical faults must have timelike support on the new checks at the boundary, and these cannot be cleaned from the boundary into the bulk. We can similarly calculate that |H0(F ′)| = |H0(F)| + 0 −|imH0(g)| −0 = |H0(F)| −|H0(F)| = 0. For the dual version, where R• = Fl 2 →Fl−1 2 , we instead perform the mapping cone on the cochain complex, and add 2mZ −rZ new elements to H2(F). Additionally, H3(F) becomes 0. In summary, when the initialisation and measurement is in the X basis the spacetime volume for the idling operation now contains: • All Z timelike logical faults, extending through the volume. • Those X timelike logical faults with check errors in H2(C) which are not equivalent to spacelike faults. • All Z spacelike logical faults. 65 Figure 6: A fault complex for a unit cell of the 2D surface code, starting and terminating at X-type boundaries, but with new X checks to allow Z timelike errors to enter and exit the volume. 66 Figure 7: The fault complex from Fig. 6, where a Z timelike fault enters the fault complex and is immediately annihilated by a spacelike fault. The highlighted fault locations are shown in green. 67 Figure 8: The fault complex from Fig. 6, where a Z timelike fault extends from one timelike boundary to the other. The highlighted fault locations are shown in green. 68 • No X spacelike logical faults. and when it is in the Z basis the volume is the same but with the X and Z logical fault characterisation inverted. Therefore if we use both of these fault complexes then, combined, we account for all possible logical faults, as the code (and fault complex) is CSS. We wish to describe complicated surgery protocols, rather than the idling state. Now, a certain fault complex for the idling state is sufficient to study certain stability exper- iments, but insufficient for the description of surgery protocols involving many logical measurements. For this the fault complex must describe the actual code deformation tak- ing place. This can also be accomplished using mapping cones, as we shall demonstrate presently. A.2 Full block reading We are now ready to prove the fault-distance of full block reading. In this case, we start with taking E• = Fc 2 ⊗C• as our initial CSS code in the idling state, so the initial fault complex is F• = (R ⊗E)•. We then consider the two cases, one where the idling volume starts and ends with Z basis measurements, and the other with X basis measurements, including the entrypoints for timelike errors, so that we acquire two different fault com- plexes. For convenience we label them both F ′ •, and the difference will be made clear by dividing into the case for X-type boundaries and Z-type boundaries. In each case, we are applying mapping cones to the idling volume. To perform a full block reading in the Z basis, measuring Z logical operators, the mapping cone is on the chain complex, and in the X basis the mapping cone is on the cochain complex. In the Z basis, the mapping cone on the fault complex is taken from the chain map f• with components: 0 C2 C1 C0 F ′ 3 F ′ 2 F ′ 1 F ′ 0 f3 f2 f1 f0 where the components map the auxiliary code C• to the codeblocks in E• to be measured, at a primal layer. In other words, in the case where we start and finish in the X basis we 69 now have the fault complex E0 (Bottom X check faults) E0 (Top X check faults) R1 ⊗E1 (Old X Pauli faults) R1 ⊗E0 (Old X check faults) R1 ⊗E2 (Old Z detectors) R0 ⊗E0 (Old X detectors) R0 ⊗E2 (Old Z check faults) R0 ⊗E1 (Old Z Pauli faults) C2 (New Z detectors) C1 (New Z check faults) C0 (New Z Pauli faults) where the copies of E0 at the top of the diagram are to add timelike entrypoints to the spacetime volume, and the C• at the bottom performs the full block reading. We can now flip the boundary conditions, so that R• = Fl 2 →Fl−1 2 , and flip the additional timelike entrypoints accordingly, but still perform the same full block reading, 70 to acquire the fault complex E2 (Bottom Z check faults) E2 (Top Z check faults) R1 ⊗E1 (Old X Pauli faults) R1 ⊗E0 (Old X check faults) R1 ⊗E2 (Old Z detectors) R0 ⊗E0 (Old X detectors) R0 ⊗E2 (Old Z check faults) R0 ⊗E1 (Old Z Pauli faults) C2 (New Z detectors) C1 (New Z check faults) C0 (New Z Pauli faults) Due to the change in boundary conditions, each of these fault complexes has different logical faults, and we must account for them both. There are some important consequences of using the fault complex formalism rather than the phenomenological circuit-based model. It dictates that each fault location in the fault complex is a qubit initialised in the |+⟩state and then measured out in the X basis, with the entangling CZ gates performing space- and time-like Hadamards to exchange error types between layers. As a consequence, unlike in the circuit-based model, there are not separate timesteps on which the new ancilla data qubits are initialised in |+⟩and measured out in the X basis, as this is included in the fault complex layers. So, in the circuit model, one would initialise the new data qubits in |+⟩, then measure all checks for 1 round, then measure out the new data qubits in the X basis, which would generally be considered 3 timesteps: one each for the ‘merge’, measure, and ‘split’ steps. In the fault complex picture, this all happens on 1 timestep. This is why there is only one ‘copy’ of C• used for the auxiliary system in the mapping cone. Additionally, the layers of Z and X checks are alternating, rather than occurring on the same timestep as in the circuit-based model. When a surgery protocol only uses 1 round of Z measurements, as in full block reading, the fault complex does not introduce X checks connections to the new data qubits at all. There are also no new fault locations for X Pauli data qubit errors on the auxiliary system, as they would either act on data qubits in |+⟩or before the qubits are measured out in the X basis. 71 Remark A.1 (Multiple measurement rounds). In Section A.4 below we see cases in which the number of measurement rounds required is greater than 1, in which case there are more than one ‘copy’ of C• employed, and there are deformed X checks and new fault locations for X Pauli data qubit errors. Remark A.2 (Cycle detectors). For all the full block reading procedures we do not require any cycle checks, or detectors related to those checks. In Thm. 3.12 we have imposed a condition that no measured logical Paulis should be products of any others, i.e. the proof requires that the set of measured logical Paulis is minimal for the group it generates. This condition can be removed if one uses detectors for the cycles in the hypergraph: these are not cycle checks, merely detectors inferred from the initial state preparation and final measurement of the new data qubits, so the code will remain LDPC, but the detector matrix may not unless the cycle basis is sparse. Proposition 3.9. Let Q, Q′, Q′′,... be a set of identical CSS LDPC codeblocks with distance d, and let P be a pattern matrix determining a full block reading. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the full block reading procedure. Then full block reading performs logical measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. We treat the case of X-type boundary conditions first. X-type boundary conditions. By the Snake Lemma, we can compute the following. |H1(cone(f))| = |H1(F ′)| −|imH1(f)| + |H0(C)| −|imH0(f)| = |H1(F ′)| −|H1(C)| + |H0(C)| −0 = |H1(F ′)| −|H1(C)| + |H0(C)|. Applying the same argument to H2(cone(f)), we have: • |H1(cone(f))| = |H1(F ′)| −|H1(C)| + |H0(C)|. • |H2(cone(f))| = |H2(F ′)| −|H2(C)|. We first discuss the negative contributions to the classes of logical faults. The negative contribution to Z logical faults (faults that commute with all X detectors) appears because |H1(C)| logical measurements have occurred in the Z basis, and so there are |H1(C)| spacelike Z errors which no longer affect the logical state. The negative contribution to X logical faults (faults that commute with all Z detectors) appears because the full block reading introduces Z detectors between codeblocks, which require timelike Z check errors to now be in ker HZ of the auxiliary code in order to be undetected. As the space distance of the code and deformed code are each at least d, and there are no new equivalence classes of logical faults, we cannot clean any logical fault with these boundary conditions to have weight below d. Z-type boundary conditions. In this case the equivalence classes of logical faults can be computed for H1(cone(f)) as |H1(cone(f))| = |H1(F ′)| −|imH1(f)| + |H0(C)| −|imH0(f)| = |H1(F ′)| −0 + |H0(C)| −0 = |H1(F ′)| + |H0(C)|. Applying the same argument to H2(cone(f)) we have: 72 • |H1(cone(f))| = |H1(F ′)| + |H0(C)|. • |H2(cone(f))| = |H2(F ′)| + |H1(C)| −|H2(C)|. where H1(F ′) and H2(F ′) are equivalence classes already present before the full block reading, and H0(C) and H1(C) are new. Again we have a negative contribution of |H2(C)| to the classes of Z check faults. Note that some of the contributions to equivalence classes appear in both boundary conditions. For the new X logical faults, it is straightforward that H1(C) are equivalence classes of undetectable Z check failures on the auxiliary code, which will flip the logical measurement outcomes of the full block reading. For the new Z logical faults, the equivalence classes given by H0(C) are spacetime logical faults which begin as Z Pauli errors on the auxiliary code, and then move into the original spacetime volume and become timelike errors to then exit from the start or end boundaries. Because the Z Pauli errors are not in im(HX), they cannot be cleaned into the original spacetime volume and are genuinely new classes of logical faults. They are also not products of space and time logical faults. To prove the fault distance of a single full block reading, where there are at least d rounds of measurement before and after the block reading but only 1 round used for the block reading itself, it is sufficient to show that every element of these equivalence classes, and their combinations, must have weight at least d. Any timelike logical fault in H1(F ′) or H2(F ′) must originate at the boundaries, and therefore must have at least weight d to reach, and therefore affect, the protocol. They cannot be cleaned away from the boundary by spacetime stabilisers. Timelike errors which originate at the boundary must extend for d rounds to affect the computation. Logical faults from the positive contribution of H0(C) must exit through one of the boundaries of the spacetime volume, and hence must have weights at least d. Spacelike logical faults in H1(F ′) and H2(F ′) must have weight at least d because the original code and the deformed code during block reading both have distance at least d. Adding spacetime stabilisers to these spacelike logical faults can only increase the weights, because whatever qubit faults are removed by cleaning are merely moved to another round in the volume due to the structure of the repetition code R•. Lastly, logical faults in H1(C), which are new check errors on the auxiliary system, must have weight at least d as timelike errors. Applying spacetime stabilisers to clean these into the original code is possible, but the corresponding spacetime fault must act on at least d data qubits due to the 1-to-1 structure of the full block reading, corresponding to a spacelike logical fault, followed by some timelike errors, and then the same spacelike logical fault, and so all such logical faults must have weight at least d. Evidently a dualised proof would apply were the full block reading to be applied in the X basis instead. Remark A.3. As full block reading is only applied for 1 round of measurements, these new classes of spacetime faults corresponding to H0(C) can be eliminated by adding an extra C−1 term to C•, which adds detectors reconstructed from the initialisation and mea- surements of the ancilla system in the X-basis. Adding an extra C−1 term cannot make the deformed code non-LDPC, but can make the detectors high weight. As we shall presently show, these new classes of spacetime faults are not problematic anyway. 73 We can then consider the case where multiple full block readings are performed in a row, but with only O(1) rounds between them, and still leaving O(d) rounds of measurement before and after the procedure. In this case all the previous calculations apply, by applying the mapping cone to the chain or cochain complex versions of the fault complex. However, the analysis of logical faults requires two more ingredients. First, in the case where only one block reading is performed in the spacetime volume, the elements of H0(C) must exit through the boundaries of the volume. When there are multiple block readings performed in close proximity, however, these elements can connect and form noncontractible closed curves within the spacetime volume, with low fault weight. These noncontractible closed curves appear only when there is a full block reading whose logical measurements are a product of others performed in the same spacetime volume. Second, we must consider the compacted code of the spacetime volume, see Defini- tion 4.4. Lemma A.4. Let Ξ be a set of η full block reading procedures of Z-type such that no block reading procedure in Ξ is a product of any others. Then the compacted code CC• has distance d. Proof. This is identical to the proof of code distance of the code (C ⊗R)•, where R• is a full-rank matrix, see Lemma 3.8. Theorem 3.12. Let Ξ be a set of η full block reading procedures of Z-type such that no block reading procedure in Ξ is a product of any others. Then all η logical measurement rounds of full block reading can be performed sequen- tially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. Say that the η block readings being performed are each defined by a bitstring of length c, and the set of bitstrings is divided into sets of size ηZ and ηX for Z and X type block readings. Each bitstring contains a 1 in a location if that codeblock is involved in the block reading specified by the bitstring, and a 0 otherwise. Whether the block readings are performed simultaneously or spread across multiple rounds, we can describe the block readings being performed by a pair of pattern matrices PZ and PX over F2, with c columns and ηZ and ηX rows respectively. Note that, unlike when performing them simultaneously, these matrices do not need to form a pattern complex. Assume that there is no block reading performed in the spacetime volume whose logical measurements are products of others. Then PZ and PX are each full-rank matrices. Any logical fault composed of Z Pauli errors in H0(C) on a Z-type block reading must extend to timelike X check faults on all blocks in that row of PZ. To prevent extending all the way to the boundaries of the volume, such timelike faults must reach Z Pauli errors in H0(C) on a different row; these must subsequently extend onto all blocks in that row of PZ. Hence in order to form a noncontractible closed curve inside the volume the pattern matrix PZ must not be full rank, and so there are some block readings which are products of previous ones. Dualising the argument gives the same result for X Pauli errors in H2(C) on an X-type block reading. All other timelike logical faults follow the same arguments as before: they must either extend to a boundary or have distance d because of the new metachecks if they are timelike. 74 For spacelike errors, we must be more careful, as spacelike errors in the different deformed codes during the spacetime volume can be cleaned to have overlapping spacelike support. In particular, one can take products of spacelike logical faults in each deformed code, or the undeformed code. Before applying spacetime stabilisers, such a product is a set of disjoint spacelike logical faults separated by some number of timesteps, which in the worst case is constant. Upon multiplying by spacetime stabilisers, such spacelike logical faults can have sup- port that is cleaned to the same timestep and hence cancel, introducing some check errors between the timesteps. Say that we have a set of ϑ spacelike Z logical faults Λ1, Λ2, · · · , Λϑ at different logical timesteps, acting on different logical qubits. These could each be sep- arated by a constant number of timesteps, and induce a constant number of check errors when partially cleaned by a spacetime stabiliser S. Thus, if the spacelike component of the product SΛ1Λ2 · · · Λϑ has weight below d the fault-distance may fall below d. We now consider the compacted code CC• of the spacetime volume. Each logical operator Λi in each deformed code is guaranteed to be a logical operator of the CC•. Let Λ′ i be the logical operator Λi viewed in CC•. We can hence consider the product | Q i Λ′ i|, which is the minimal weight of the spacelike component of SΛ1Λ2 · · · Λϑ. As CC• has distance d, | Q i Λ′ i| ≥d for any choice of logical operators Λi in any deformed codes. Then, | Q i SΛi| ≥d. For spacelike X logical operators, the argument is similar. The X-distance of the compacted code CC• is at least d, and so cleaning X logical operators to have overlapping spacelike components can never reduce the fault weight below d. For full block reading of CSS-type, we require an extension of the previous lemma concerning the compacted code. Lemma A.5. If the hypergraph surgeries are CSS-type full block readings such that no logical measurement is the product of any other then the compacted code CC• has distance d. Proof. This is identical to the proof of code distance of the code (C ⊗R)•, where each pattern matrix in the complex R• is a full-rank matrix, see Lemma 3.8. Lemma 3.16. Let Ξ be a set of η full block reading procedures of CSS-type such that no block reading procedure in Ξ is a product of any others, and each of the measurements commute. Then all η logical measurement rounds of full block reading can be performed sequen- tially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. In this case we again take iterative mapping cones in the same way. In the worst case, where there is short timelike length between full block readings, the cones are applied to only two layers in the fault complex: Z-type measurements to a layer with X detectors (corresponding to a basis element of R0) and X-type measurements to a close layer with Z detectors (a basis element of R1). By virtue of the mapping cone construction, the fault complex commutes as a chain complex. We then use identical arguments to the proof of Z-type full block reading, where this time the constant number of layers between measurements is reduced to 0. As the proof of fault-distance is independent of this number of rounds, and the compacted code has 75 distance d, there are no spacelike logical faults which can be cleaned to have fault weight with spacelike component lower than d. A.3 Partial block reading with high-distance subcodes Proposition 4.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance d, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code, which has code distance at least d. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. Then partial block reading performs logical measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. We can rerun the arguments for Prop. 3.9 but where A• ̸= C•. A• is a subcode of C• or can be a subcode thickened up to d times (in the space direction). The thickening does not affect the homology of A•, or the maps on homology Hi(f) into C•. In partial block reading with high-distance subcodes, unlike full block reading, we include additional X-detectors given by the initialisation and measurement of the new qubits in the X-basis. These give us detectors corresponding to a basis of cycles in the hypergraph; however, there is no guarantee that this basis is sparse. This does not affect the LDPC property of the code, but does affect the size of the detectors when they are used for decoding. In other words, the deformed code is guaranteed to be LDPC, but the fault complex is not. Now, for a Z-type partial block reading the fault complex is given by cone(f•), where f• has components 0 A2 A1 A0 A−1 F ′ 3 F ′ 2 F ′ 1 F ′ 0 0 0 f2 f1 f0 0 with im(f•) having support on a single primal layer, across potentially multiple codeblocks, and A−1 is a vector space with differential such that H0(A) = 0 to fix the cycles. For the X-type boundary conditions we then have |H1(cone(f))| = |H1(F ′)| −|imH1(f)| + |H0(A)| −|imH0(f)| = |H1(F ′)| −|H1(A)| + 0 −0 = |H1(F ′)| −|H1(A)|. And |H2(cone(f))| = |H2(F ′)| −|imH2(f)| + |H1(A)| −|imH1(f)| = |H2(F ′)| −|H2(A)| + |H1(A)| −|H1(A)| = |H2(F ′)| −|H2(A)| = |H2(cone(f)| = |H2(F ′)| −|H2(A)|. • |H1(cone(f))| = |H1(F ′)| −|H1(A)|. • |H2(cone(f))| = |H2(F ′)| −|H2(A)|. For the Z-type boundary conditions, the calculation is the same except |imH1(f)| = 0, so: 76 • |H1(cone(f))| = |H1(F ′)|, • |H2(cone(f))| = |H2(F ′)| + |H1(A)| −|H2(A)|. Hence we can categorise the logical faults in an identical way to previously and find that they must all either extend from a boundary of the spacetime volume, or have spacelike weight at least d, or have timelike weight at least d, and that spacetime stabilisers cannot clean these faults to have weight lower than d by virtue of the repetition code structure of R•. Because we added the X-detectors, which are present to allow the space distance of the deformed code to be at least d, we also eliminate the spacetime faults which were present for full-block reading and corresponded to contributions of H0(C). Now, H0(A) = 0 so there are no such contributions, so we need not be concerned about noncontractible loops in the spacetime volume. Theorem 4.5. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let Ξ be a set of Z-type partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance d and the compacted code has distance d. Then all η logical measurement rounds of partial block reading can be performed sequen- tially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomenological fault-distance d, for any η ∈N. Proof. All logical faults must either extend from a boundary or have distance d because of the new metachecks if they are timelike. For the spacelike faults, we can apply the same argument as for full block reading. Each logical operator Λi in each deformed code is guaranteed to be a logical operator of the compacted code CC•. Let Λ′ i be the logical operator Λi viewed in CC•. We can hence consider the product | Q i Λ′ i|, which is the minimal weight of the spacelike component of SΛ1Λ2 · · · Λϑ. As CC• has distance d, | Q i Λ′ i| ≥d for any choice of logical operators Λi in any deformed codes. Then, | Q i SΛi| ≥d. For spacelike X logical operators, the argument is similar but more straightforward. The X-distance of the compacted code CC• must always be at least d by Lemma 2.16, and so cleaning X logical operators to have overlapping spacelike components can never reduce the fault weight below d. A.4 Partial block reading with low-distance subcodes Consider a partial block reading, where the subcode A• for the partial block reading has distance less than d. For all our descriptions here we assume the block reading is measuring in the Z basis, and then one can dualise to acquire the corresponding construction to measure in the X basis. We also presume that the auxiliary hypergraph for a Z-type partial block reading has X-checks to gauge-fix cycles, and the same for X-type partial block readings with Z checks. These are genuine checks, and not just detectors inferred from the initial and final measurements of the new qubits. In terms of the initial auxiliary system, this still appears as adding an A−1 term to A• such that H0(A) = 0, A• = A2 A1 A0 A−1 but this is not yet the complex which we will perform the mapping cone with. 77 As before, we ignore the thickening of A• in space, as one can compute that this leaves the homology and categorisation of logical fault equivalence classes unaffected, and is necessary only to preserve the spatial distance of the deformed code. However, as the distance of A• is now less than d, we must thicken A• in time to make a fault complex K• corresponding to running the measurement procedure for more than 1 round. Let dA be the distance of A• and let dA = 1 αd for d the code distance of C•. The fault complex K• corresponds to taking (P(α) ⊗A)•, where P(α)• = P(α)1 P(α)0 = Fα−1 2 Fα 2 is dual to the full-rank repetition code with length α, and has the differential matrix being the incidence matrix of the path graph with α vertices. Explicitly K• is the chain complex: P(α)1 ⊗A1 P(α)1 ⊗A0 P(α)1 ⊗A−1 P(α)1 ⊗A2 P(α)0 ⊗A−1 P(α)0 ⊗A2 P(α)0 ⊗A1 P(α)0 ⊗A0 Observe that A• = K• when α = 1, recovering the case where we measure for only one round. Now, for a Z-type partial block reading the fault complex is given by cone(f•), where f• has the form: P(α)1 ⊗A1 P(α)1 ⊗A0 P(α)1 ⊗A−1 P(α)1 ⊗A2 P(α)0 ⊗A−1 P(α)0 ⊗A2 P(α)0 ⊗A1 P(α)0 ⊗A0 F ′ 3 F ′ 2 F ′ 1 F ′ 0 0 0 0 f2 f1 f0 There are also maps from the top terms in the diagram – P(α)1 ⊗A1, P(α)1 ⊗A0 and P(α)1 ⊗A−1 – but these are zero maps, so we abuse notation slightly by labelling the maps from the bottom layer f2, f1, f0. As before, the only nonzero maps in f• are into primal layers. The mapping cone then yields the fault complex cone(f•) for the measurement, K2 K1 K0 K−1 F ′ 3 F ′ 2 F ′ 1 F ′ 0 where the K3 term, equal to P(α)1 ⊗A2, is omitted as it has no physical meaning, being a “detector on detectors”. Before continuing further, let us make some orienting remarks regarding the terms and maps in this mapping cone. Below we expand out to the full fault complex cone(f•) for 78 the case where the spacetime volume is initialised and measured out in the X basis. As before, the P(α)1 ⊗A2 term is omitted. The diagram clearly commutes, by inheriting the coherent monic of the subcode A• into the original codeblocks E•. P(α)1 ⊗A1 (New Z detectors) P(α)1 ⊗A0 (New X Pauli faults) P(α)1 ⊗A−1 (New X check faults) P(α)0 ⊗A−1 (New X detectors) P(α)0 ⊗A2 (New Z detectors) P(α)0 ⊗A1 (New Z check faults) P(α)0 ⊗A0 (New Z Pauli faults) R0 ⊗E2 (Old Z check faults) R0 ⊗E1 (Old Z Pauli faults) R1 ⊗E2 (Old Z detectors) R0 ⊗E0 (Old X detectors) R1 ⊗E1 (Old X Pauli faults) R1 ⊗E0 (Old X check faults) E0 (Bottom X checks) E0 (Top X checks) f2 f1 f0 • All terms in (R ⊗E)• are those from the fault complex of the idling operation. • The new E0 terms at the bottom are to add timelike entrypoints to the volume. • P(α)0 ⊗A−1 is a set of X detectors which arise from cycle checks throughout the surgery procedure. • P(α)0 ⊗A0 is a set of fault locations for new qubits which can undergo Z Pauli errors. 79 • P(α)1 ⊗A−1 is a set of new X checks corresponding to cycles in the hypergraph, which can undergo measurement errors. • P(α)0 ⊗A1 is a set of new Z checks, corresponding to vertices in the hypergraph, which can undergo measurement errors. • P(α)1 ⊗A0 is a set of fault locations for new qubits which can undergo X Pauli errors. Note that when the block reading was performed for only 1 round these fault locations were not present because of the immediately adjacent initialisations in |+⟩ and measurements in the X-basis. • P(α)0 ⊗A2 is a set of Z detectors which are metachecks due to the block reading. • P(α)1 ⊗A1 is a set of Z detectors due to the vertex checks throughout the surgery procedure. To acquire the opposite spacetime volume, where it is initialised and measured in the Z basis, the R• is dualised, and F ′ has extra E2 components, rather than E0. This is identical to the full block reading case. We can hence rerun a similar proof as for the case of full block-reading, and acquire the homology spaces from the Snake Lemma. We can then prove the following. Proposition 6.1. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance dA = 1 αd, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code with code distance d and let H have a sparse cycle basis. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. First, ⌈α⌉rounds of syndrome measurements are necessary (but not sufficient) to perform partial block reading with fault-distance d. Then if either: (a) α ≤2 or (b) the initial codeblocks are each locally testable codes with soundness ρ ≥2n m , where n is the codeblock length and m the number of checks, then partial block reading performs logical measurement in ⌈α⌉rounds of syndrome measurements with fault-distance d, maintaining the LDPC property throughout. Proof. X-type boundary conditions. By the Snake Lemma, we can compute the following. • |H1(cone(f))| = |H1(F ′)| −|H1(A)|. • |H2(cone(f))| = |H2(F ′)| −|H2(A)|. In detail, using the K¨unneth formula and the fact that |H1(P(α)) = 0| and |H0(P(α))| = 1, |H1(cone(f))| = |H1(F ′)| + |H0(K)| −|imH1(f)| −|imH0(f)| = |H1(F ′)| + |H0(A)| −|imH1(f)| −|imH0(f)| = |H1(F ′)| + 0 −|H1(A)| −0 = |H1(F ′)| −|H1(A)|. The other computation works similarly. Z-type boundary conditions. In this case the equivalence classes of logical faults can be computed as, • |H1(cone(f))| = |H1(F ′)|. 80 • |H2(cone(f))| = |H2(F ′)| + |H1(A)| −|H2(A)|. We thus have a similar categorisation of all the equivalence classes of logical faults to those in previous proofs: there are spacelike errors which must always have weight at least d by virtue of the deformed code having distance at least d, and there are timelike errors that extend from a boundary and so have weight at least d. However, there are also the timelike errors which correspond to the |H1(A)| term. As check errors in the auxiliary code, these timelike logical faults must always have weight at least d if they extend for ⌈α⌉rounds, because they must have weight at least 1 αd in each round. Should fewer than ⌈α⌉rounds be used then there always exists a logical fault with weight lower than d, proving the first part of the proposition. For the second part, we must be careful about the cleaning with spacetime stabilisers. Given such a timelike logical fault with weight at least d in the auxiliary code, at least dA on each layer, cleaning it entirely into the bulk can leave only 2dA qubit Pauli X errors, one set of size dA just before the surgery and one set of size dA just after, and a set of timelike errors connecting them. We must therefore ensure that this set of timelike errors has sufficent size to yield a high fault-distance. It is distributed over α layers, so we ensure that each layer of check errors contains at least ν = (d−2v) α faults, where v is the size of the set of spacelike errors at the bottom (or equivalently the top), and v ≥dA. When v ≥d/2, ν < 0 so we are done; this is always the case if α ≤2, i.e. dA ≥d/2. The rest of the proof therefore assumes dA ≤v < d/2. Note that ν < d α = dA. Because the minimal size of the bottom set of Pauli errors is dA, there are at least dA check errors in the next round if |HZv| ≥dA, where v is the qubit fault vector for the bottom set of Pauli errors. Now, assume that the HZ parity matrix of the original code has soundness ρC ≥n m as a classical code. This is guaranteed if the original code is quantum locally testable with soundness ρ ≥2n m by Lemma 2.12. We now show that d(v, C) ≥dA, where C = ker HZ. First, if |v + x| was lower than dA for any x ∈imH⊺ X then the distance of the subcode would be lower than dA, as the subcode has connections to all X checks incident to v, so multiplying v by x restricted to A1 would result in a nontrivial logical operator in A• with weight lower than dA. So we must now consider |v+x| for any x ∈ker HZ\imH⊺ X. As v < d/2, the maximal overlap of v with any other X logical in the original code is v. Hence |v + x| ≥|v| ∀x ∈ker HZ\imH⊺ X, and v ≥dA. As a consequence of the soundness being ρC ≥n m, |HZv| ≥m n ρCd(v, C) ≥d(v, C) ≥dA, so each layer between the top and the bottom Pauli X errors must have at least dA check errors. By the same argument, applying cleaning to only a portion of the checks in the auxiliary system will distribute the faults such that the portion in the original code and the portion on checks in the auxiliary system at each timestep must sum to dA, and so the fault distance is at least d. When performing many partial block readings close together in the same spacetime volume, we strengthen the condition to require soundness ρ ≥2n m regardless of whether α is above or below 2. Theorem 6.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance di ≥ 1 αi d for i ∈[η] and the deformed codes each have code distance d, with sparse cycle bases. Suppose the original codes have soundness ρ ≥2n m . Then all η logical measurement rounds of partial block reading can be performed sequen- tially using ⌈αi⌉rounds of syndrome measurement each, and using d rounds of padding 81 before and after the logical measurements. The procedure has phenomenological fault- distance d, for any η ∈N. Proof. By taking sequential mapping cones on the initial fault complex for the idling operation, one finds that the equivalence classes of logical faults are all either: • Timelike faults extending between boundaries. • Spacelike faults. • Timelike faults which act on the auxiliary systems, and cannot be cleaned to have weight below d in the original code. Timelike logical faults in the auxiliary system and between boundaries must still have weight at least d. Cleaning spacelike logical faults Λ1, Λ2, · · · , Λϑ to have some shared spacelike support must induce at least a proportional weight in check faults, as the initial code has sufficient soundness, so the product SΛ1Λ2 · · · Λϑ cannot fall below weight d. Note that the condition on the distance of the compacted code CC• is unnecessary in this case. Therefore the fault-distance of the procedure is at least d. A.5 Fast hypergraph surgery Theorem 5.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of Z-type hypergraph surgeries {Hyper1, Hyper2, Hyper3, ..., Hyperη} such that: • Each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance d, with sparse cycle bases. • The compacted code has distance at least d. Then all η logical measurement rounds of generalised hypergraph reading can be per- formed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomeno- logical fault-distance d, for any η ∈N. Proof. This proof proceeds in a largely similar manner to those of Appendix A.3. For a single hypergraph surgery with an ancillary system A• we construct the same fault complex 0 A2 A1 A0 A−1 F ′ 3 F ′ 2 F ′ 1 F ′ 0 0 0 f2 f1 f0 0 but where A• is now an arbitrary code such that f• is non-overlapping on vertices and H0(A) = 0. X-type boundary conditions. |H1(cone(f))| = |H1(F ′)| −|imH1(f)| + |H0(A)| −|imH0(f)| = |H1(F ′)| −|imH1(f)|, 82 and |H2(cone(f))| = |H2(F ′)| −|imH2(f)| + |H1(A)| −|imH1(f)| = |H2(F ′)| −|imH2(f)| + |H1(A)| −|H1(A)| = |H2(F ′)| −|imH2(f)|. In summary we have: • |H1(cone(f))| = |H1(F ′)| −|imH1(f)|. • |H2(cone(f))| = |H2(F ′)| −|imH2(f)|. Z-type boundary conditions. Similar calculations apply but |imH2(f)| = |imH1(f)| = 0. • |H1(cone(f))| = |H1(F ′)|, • |H2(cone(f))| = |H2(F ′)| + |H1(A)| −|imH2(f)|. We can then apply such mapping cones sequentially, and so all logical faults are cate- gorised into equivalence classes with the following representatives: • Logical faults on checks in the auxiliary system. • Logical faults on data qubits in the original or deformed codes. • Logical faults on checks extending through the spacetime volume between bound- aries. In order, • Because each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic dis- tance d, the weight of a logical fault on checks in the auxiliary system must be at least d. • Because the maps between the auxiliary systems and original codes are all non- overlapping on vertices, the weight of such a logical fault when cleaned into the original code must be at least d to flip d checks in the auxiliary system. • Because the compacted code has distance at least d, any spacelike fault must have weight at least d. This accounts for the fact that spacelike logical operators can cleaned by spacetime stabilisers to cancel their shared spacelike support. • All other timelike faults must extend from a timelike boundary, so to affect the protocol must have weight at least d. Remark A.6. If the chain maps are overlapping on vertices (i.e. they do not satisfy Definition 5.1) then the argument breaks down, as a timelike logical fault in the auxiliary system could be cleaned to a smaller logical fault in the original code, and so additional syndrome rounds are required to increase the weights of these faults. In the extreme case, where multiple auxiliary systems measure a single logical representative, this coincides with Theorem 1.6, which shows that such a measurement procedure will always require Ω(d) rounds to preserve the fault distance. 83 B Non-CSS measurements by full block reading In this appendix we provide a more detailed explanation of how block reading extends to more general logical Pauli measurements on codes that possess transversal gates. B.1 Y measurements We can attach an ancillary system of the following form to any CSS code C: X Y Z Q HZ HX I I I H⊺ Z H⊺ X where the data qubits Q are in the original code, and the r.h.s. scalable Tanner graph is the ancillary system. The checks must always commute by inspection of the diagram. However, the properties of this combined code are complicated. Evidently, none of the Z or X logicals are measured out, as there are no new Z or X checks. Definition B.1. Let C have a basis of representatives such that supp(Y i) = supp(Xi) = supp(Zi), ∀i ∈[k]. Then we say C has matching representatives. Let C be self-dual, i.e. HZ = HX. If C has matching representatives then Y i is measured out. As Y i ∈ker HZ = ker HX, cleaning by the new Y checks PY in bijection with supp(Y i) gives PY Y i = 0 on both the new sets of data qubits. If the code does not have matching representatives then the cleaning argument does not work, and so the ancillary system may not measure the logical qubits. Note that to have supp(Xi) = supp(Zi), |supp(Y i)| = 1 mod 2, i.e. it must have odd support, in order for Xi to anticommute with Zi. If C has matching representatives, all logical qubits will be measured out in the Y basis. There are no new Z logicals, as any Z logical on the top right new data qubits must be in ker(H⊺ Z) so be products of old Z checks, and any Z operators on the bottom right new data qubits won’t commute with the old X checks. The inverse is true for new X checks. Example B.2. 2D colour codes are self-dual and have Y logicals composed of X and Z logicals on the same support [124]. Remark B.3. If C is self-dual but there exists a product of Pauli Y operators on data qubits, which is a composition of Zi and Xj with supp(Zi) = supp(Xj), but i ̸= j so the Y operator is actually a representative of Zi ⊗Xj, then the Y ancillary system will still send this Y operator to zero when cleaned, but the measurement is a Zi ⊗Xj measurement instead. For the measurement protocol, assume C is self-dual with matching representatives. First initialise new data qubits in the top(bottom) right set in the |0⟩(|+⟩) states respec- tively. Then measure all stabilisers. To commute with all Y logicals being measured, each 84 new data qubit must be connected to an even number of new checks in bijection with each Y logical, and so X(Z) errors on top right(bottom right) qubits will not affect the logical measurement outcome. As the code is self-dual, products of Z and X checks with the same support are mapped to a subset of new Y checks, such that any nontrivial Y measurement error must be in ker(HZ)\im(H⊺ X) = ker(HX)\im(H⊺ Z), and hence the logical measurement has some protection against measurement faults. Lemma B.4. Let C be a self-dual CSS code with matching representatives. Then C admits a transversal S gate, acting as S⊗k on the logical space up to a Pauli correction. Proof. From [34, Prop. 9], C admits a transversal S gate up to a Pauli correction if: • u · v = 0 mod 2 for all X stabilisers u, v. • For any X logical operator u and X stabiliser v, u · v = 0 mod 2. As C is self-dual the first condition is satisfied. From the matching representatives property, any Xi logical operator u has the same support as its matching Zi operator, and Zi must commute with all stabilisers hence u · v = 0 mod 2. Therefore the procedure is not very useful, because the codeblock can have its mea- surement basis permuted by transversal S and H and so Y measurements are unnecessary. The same argument applies to performing block reading of multiple codeblocks in products containing Y terms. We do not know if partial block reading for non-CSS measurements, where the subcode is a stabiliser code but not CSS, can allow for fast measurements which cannot be performed using transversal gates to permute bases. B.2 X ⊗Z measurements Given a CSS code C, let D be the dual of that code, i.e. HD X = HC Z and HD Z = HC X. Introduce the following ancillary system: X Z HX HZ [0, H⊺ X] I [0, I] I HZ X [0, HX] [HZ, 0] [I, 0] [I, 0] I where C is on the l.h.s. and D on the r.h.s. The top right and middle checks are now of mixed-type, checking stabilisers with both X and Z terms. One can check by symplectic multiplication that the above Tanner graph commutes. We now also have meta-checks which are of ‘mixed-type’, using both X and Z checks, indicated by the pink diamond. This is a Z⊗Z measurement by full block reading, but conjugated by single-qubit Cliffords. However, because the codeblocks C and D are dual, this could equally be performed by using a modified version of homomorphic measurements but with transversal CX and CZ gates [24]. 85 Figure 9: Venn diagram indicating the overlapping supports of a logical operator ΛZ and sets of checks κλ (magenta) and v ∩U (dashed). C Proofs concerning modular expansion In this appendix we provide technical proofs related to properties of modular expansion used in the main text. Theorem 7.7 (Distance preservation). Let {Zi}i∈I be a set of logical representatives (trivial or nontrivial) on a distance d CSS code Q with supports ξi = supp(Zi). Let H(V, E) be a hypergraph with vertices in V being Z checks, hyperedges in E data qubits and X gauge checks forming a cycle basis, such that H measures all the logical representatives {Zi}i∈I. Let fp : S i ξi →V be an injective port function such that U = imfp ⊂V, and fp(ξi) = Vi ⊂V, ∀i ∈I. Let Md(H) ≥1, where W is generated by fp(ξi). Then the distance of the deformed code Q ↔H is at least d. Proof. We show that for every nontrivial ΛZ logical operator in Q ↔H with some support in S i Vi, the weight of ΛZ cannot drop below d by multiplying with new Z checks (vertices) in H. Let v be an arbitrary subset of V, and define the stabiliser Z(G⊺v) to be the product of all checks in v. We must show that |ΛZ ◦Z(G⊺v)| ≥d. First, G⊺v ≥min(d, |κλ| −|v ∩κλ| + |v ∩U\κλ| : λ ∈2I) by the definition of modular expansion. If min(d, |κλ| −|v ∩κλ| + |v ∩U\κλ| : λ ∈2I) = d then we are immediately done, as no matter which data qubits in Q are turned off by Z(G⊺v), there are at least d new data qubits with a Z Pauli in the ancillary hypergraph. The other case is more difficult. If |G⊺v| ≥|κλ|−|v∩κλ|+|v∩U\κλ| for some λ ∈2I then we can construct the diagram in Fig. 9. 86 The set in the dashed region is v ∩U, the top right circle is U, the bottom left circle is ΛZ and the magenta circle is κλ. We compute by inspection of the diagram that |ΛZ ◦Z(G⊺v)| ≥|τλ| + |σλ| + |Bλ| + |B′ λ| + |Rλ| + |κλ| −|v ∩κλ| + |v ∩U\κλ|. Next, |κλ| −|v ∩κλ| + |v ∩U\κλ| = |φλ| + |σλ| + |B′ λ| + |γλ| and |ΛZ ◦Zλ| = |Rλ| + |τλ| + |γλ| + |Bλ| + |φλ| ≥d where Zλ = Z(f−1κλ) acts as Z on the preimage of κλ in the original code. Zλ is a logical operator, but not necessarily a nontrivial one. As ΛZ is a nontrivial logical operator of the deformed code, |ΛZ ◦Zλ| ≥d. Plugging these together we find that |ΛZ ◦Z(G⊺v)| ≥2(|σλ| + |B′ λ|) + d ≥d. As the cycles are gauge-fixed and no Z logical can be reduced below weight d, the distance of Q ↔H is at least d. Theorem 7.8 (Thickening). Let H(V, E) be a hypergraph with modular expansion Mt(H). Let J L be the path graph with length L ≥ 1 Mt(H), i.e. L vertices and L −1 edges. Let HL := H□J L be the hypergraph of H thickened L times. Then HL has modular expansion Mt(HL) ≥1 for Uℓ= S i V ℓ i at any level ℓ∈{1, 2, · · · , L}, where V ℓ i is the copy of Vi in the ℓth level of the thickened hypergraph. Proof. Let G : F2E →F2V be the incidence matrix of H, RL the incidence matrix of J L, and GL the incidence matrix of HL. Then GL =  G ⊗IL In ⊗RL  where n = |V|. We divide up the vertices of HL into sets Vℓfor ℓ∈[L], corresponding to each level of the thickened graph. Similarly, for a given level ℓwe have V ℓ i and Uℓ= S i V ℓ i . Letting vℓ∈Vℓbe a set of vertices in level ℓ, we have the vector v = (v1 v2 · · · vL)⊺∈ S ℓVℓseen as a set of vertices in HL. Associated to this set we also have uℓ= vℓ∩Uℓ= v ∩Uℓ. Now, let r = arg minj min(t, |κj λ| −|uj ∩κj λ| + |uj\κj λ| : λ ∈2I). Using the triangle inequality and the fact that LMt(H) ≥1, |G⊺ Lv| = |G⊺ L(v1 v2 · · · vL)⊺| = L X j=2 |vj−1 + vj| + L X j=1 |G⊺vj| ≥|vr + vℓ| + LMt(H) min(t, |κr λ| −|ur ∩κr λ| + |ur\κr λ| : λ ∈2I) ≥|ur + uℓ| + min(t, |κr λ| −|ur ∩κr λ| + |ur\κr λ| : λ ∈2I). Now consider the different cases. If min(·) = t then |ur + uℓ| + t ≥t and we are done. 87 If min(·) = |κr λ| −|ur ∩κr λ| + |ur\κr λ| for some λ ∈2I then |ur + uℓ| + |κr λ| −|ur ∩κr λ| + |ur\κr λ| = |ur ∩κr λ + uℓ∩κr λ| + |ur\κr λ + uℓ\κℓ λ| + |κℓ λ| −|ur ∩κr λ| + |ur\κr λ| ≥|κℓ λ| −|uℓ∩κℓ λ| + |uℓ\κℓ λ|, making use of the triangle inequality for the last line. Therefore |G⊺ Lv| ≥min(t, |κℓ λ| −|v ∩κℓ λ| + |v ∩Uℓ\κℓ λ| : λ ∈2I) for any set of vertices v in HL, so HL has modular expansion Mt(HL) ≥1. D Circuit-equivalence of surgery and homomorphic measurement In this appendix we derive a circuit equivalence between generalized surgery measurement and homomorphic measurement. The following proofs use the so-called “scalable” ZX- calculus from Ref. [87]. This calculus has previously been used to describe LDPC surgeries in Ref. [64]. We assume familiarity with the ZX-calculus for this section; see Ref. [125] for a comprehensive introduction. In short, the scalable ZX-calculus (SZX-calculus) has thick wires indexed by a positive integer n, denoting a register of n qubits. Green and red spiders then act on the n qubits, with spider phases now represented by a vector v over C of length n, containing phases acting on each qubit. As in the conventional ZX-calculus, an all-zeros vector is omitted and the green or red spider left unlabelled. We also omit the label denoting the size of the qubit register. The SZX-calculus has other generators than green and red spiders. For example, we also have the matrix arrows A and B (6) where A and B are matrices over F2. These generators can be defined by their action on the Hilbert space. Suppose A ∈Fn×m 2 and B ∈Fp×q 2 . For a qubit state in the computational basis |x⟩with x ∈Fm 2 , the arrow pointing to the right in Eq. 6 corresponds to the linear map RA : |x⟩7→|Ax⟩. Meanwhile, the arrow pointing to the left corresponds to the linear map H⊗pRBT H⊗q = H⊗p  |x⟩7→|BT x⟩  H⊗q. Stabiliser measurement on CSS codes can be neatly described using the SZX-calculus. Given a CSS code with n qubits and the parity-check matrices HX, HZ, the measurements can be described by H⊺ X HZ sZπ sXπ where sZπ is the vector of phases representing syndrome outcomes sZ = {0, 1}mZ, and the same for sXπ. In particular, when the input state is in the codespace, sZ = sX = 0 and the phase vectors can be omitted. The same applies for the case where the the initial state is not in the codespace but the measurements project it into the codespace. Commutation of stabilisers implies that H⊺ X HZ = H⊺ X HZ 88 and furthermore, HZ HZ = HZ with a similar rewrite for X-checks. Obviously such rewrites do not generally preserve fault-tolerance of the circuit: we are taking multiple rounds of syndrome measurement and collapsing them into one round, which will generally reduce the fault-distance of the spacetime code. Along with the merge and copy rules of the ZX-calculus we can now prove that surgery by an auxiliary system and homomorphic measurement using transversal gates are equiv- alent as circuits. Theorem 9.1. Let C• be a CSS code with a homomorphic measurement procotol specified by a chain map f• : A• →C•. Then the circuit corresponding to the homomorphic measurement can be rewritten to a surgery specified by cone(f•). Proof. We describe an Z-basis logical measurement; the same result for X measurements can be easily acquired similarly. The rewrites required are as follows. f⊺ 1 απ H′ Z (H′ X)⊺ H⊺ X HZ = f⊺ 1 απ (H′ X)⊺ H⊺ X HZ = f⊺ 1 απ (H′ X)⊺ H⊺ X HZ = f⊺ 1 απ (H′ X)⊺ H⊺ X HZ f⊺ 2 N⊺ The first SZX-diagram on the top left is depicting a homomorphic measurement. The original code is shown on the bottom wire, with parity-check matrices HZ, HX. An ancilla code is initialised in |0⟩⊗n′ then its stabiliser checks are measured. Then a set of transversal CNOTs defined by a chain map f• are applied, with the controls on the initial code and the targets on the ancilla code. Then the ancilla code is measured out in the Z basis, with the vector απ where α ∈Fn′ 2 determines the logical measurement outcome. Next, the |0⟩initialisations are commuted through the Z-check measurements, and then the merge rule is used to extend the check qubits of the X-check measurements to instead become data qubits of the auxiliary system, initialised in |+⟩mX and measured in the X-basis. We now see that the data qubits of the homomorphic measurement have become check outcomes of Z measurements, which include both the new data qubits of 89 the auxiliary system and data qubits of the original code. The outcomes απ of these Z measurements determine the logical measurement outcome. Last, we add X checks which gauge-fix the cycles (there are cases, such as in full-block reading and the CKBB scheme [52], where these are not necessary), with the check matrix N. We also deform the X checks of the original code to include qubits of the auxiliary system, such that these checks commute with the Z checks that determine the logical measurement outcome. The deformation of these checks is determined by the chain map f•. We can then do the opposite rewriting to convert surgeries to homomorphic measure- ments. Corollary 9.2. Let C• be a CSS code with a surgery protocol specified by cone(f•) for a chain map f• : A• →C•. Then the circuit corresponding to the surgery protocol can be rewritten to a homomomorphic measurement specified by f•. Proof. F ⊺ απ G⊺ H⊺ X HZ M⊺ = V E F ⊺ απ G⊺ H⊺ X HZ M⊺ V E = F ⊺ απ G⊺ H⊺ X HZ = F ⊺ απ G⊺ H⊺ X HZ H′ Z N⊺ V V E E Thus we see that the vertices V of the hypergraph in surgery, which are Z-checks, are rewritten to the data qubits in homomorphic measurement; the edges E of the hypergraph, which are data qubits, are rewritten to be X-check outcomes. The cycles vanish from the circuit entirely, which is because they become X-type meta-checks on the ancilla code. These relabellings correspond precisely to the degree-shifting of a mapping cone to recover the original chain map f•. 90 E Fast measurement requires multiple representatives In this appendix we provide a proof of our results lower bounding the spacetime volume of a general procedure to measure logical representatives in a quantum code. Proposition 9.3. Let Q be a quantum LDPC code. Let S be a set of logical operator representatives measured at the same time by one or more auxiliary system for T syndrome rounds. Assume that the correct logical measurement outcomes of S are not known in advance. Let f be the lowest weight data qubit fault in Q such that: • f flips at least one logical representative in S. • If f flips a logical representative in S then it flips all other representatives in the same equivalence class in S. Let w be the weight of f, and s be the syndrome weight of f in Q. Then the phenomeno- logical fault-distance of the measurement protocol is upper bounded by 2w + Ts. Proof. The measurement scheme begins at time ti and ends at time to = ti + T, and measures all the logical operator representatives contained in S. Data qubit errors occur at integer timesteps and measurement errors occur at half-integer timesteps. To measure this operator by an auxiliary system, entangling gates are applied from each qubit in S to some other system Aux of checks and data qubits, such that the measurement outcomes of checks in Aux infer the logical measurement outcomes. The entangling gates are applied while measuring checks in Aux for T rounds before ceasing the application of entangling gates. In order to receive an incorrect logical measurement outcome without detecting an error it is sufficient to flip any of those logical operators for the duration of the logi- cal measurement scheme, so long as there are no logical measurement outcomes between representatives which are ‘mismatched’, meaning that measurements of different represen- tatives of the same logical operator yield different outcomes. Thus if any representative in S is flipped, all representatives in the same equivalence class in S must be flipped. Ob- serve that as none of the correct logical measurement outcomes of S are known in advance (which would occur if Q was initialised in a particular known state with deterministic logical measurement outcomes) there are now no detectors which can detect the logical error. To flip some operators in the round at time ti, apply a Pauli operator fault f to Q with weight w satisfying the conditions specified in the Proposition. This set of faults is detectable by checks in the original code at timesteps ti + 1 2, ti + 3 2, · · · , so in order to conceal the fault we add s check errors at each timestep for the duration of the logical measurement, where s is the number of checks which f flips. In total there are Ts of these check faults. Lastly, apply an identical Pauli operator fault f after the logical measurement proce- dure concludes, at time to + 1. This is a logical fault of the measurement procedure with weight 2w + Ts. Were the code Q left in the idling operation instead, the above logical fault would be a spacetime stabiliser of the code, and so this logical fault weight can be vastly lower than the distance of Q depending on S, the time T and the syndrome weights s of operators. Note also that this is just an upper bound on the fault-distance: if the auxiliary system is poorly constructed, for example by connecting every qubit in S to a single new check 91 qubit, the fault-distance may be substantially lower due to check errors on the auxiliary system. Theorem 1.6. Any logical measurement protocol performed in o(d) rounds on a quan- tum LDPC code by an auxiliary system requires connections to more than one logical representative to maintain phenomenological fault distance Ω(d), unless the correct logical measurement outcome is known in advance. We prove this by demonstrating that otherwise there exists a weight |f| logical fault, where |f| ∈o(d). Proof. Assume w.l.o.g. that the measurement being performed is on a Z logical operator, as any Pauli measurement is locally equivalent to this measurement by conjugation of single-qubit Cliffords. If the logical operator representative has support v, the measurement being performed is of the operator N i∈v Zi. To measure this operator by an ancillary system, entangling gates are applied from each qubit i to some other system Aux of checks and data qubits, such that the measurement outcomes of checks in Aux infer the logical measurement outcome of N i∈v Zi. The measurement scheme starts at time ti and concludes at time to, with o(d) rounds between. Data qubit errors occur at integer timesteps and measurement errors occur at half-integer timesteps. Assume that the entangling gates are applied while measuring checks in Aux for o(d) rounds before ceasing the application of entangling gates. Regard- less of the system of checks and other structure of Aux, there exists a low weight fault which flips the logical measurement outcome. Consider an arbitrary qubit j ∈v. At any time t ∈[ti, to] the application of a Pauli error Xt j, followed by measurement errors on all incident Y and Z checks at time t+ 1 2, and then a Pauli error Xt+1 j , is a local spacetime stabiliser of the original code, and therefore does not violate any detectors. However, at time t + 1 2 the outcome of measuring N i∈v Zi is flipped, as Xj anticommutes with Zj. This is then corrected by the subsequent Pauli error Xt+1 j , and so at time t + 3 2 the measurement of N i∈v Zi would return the correct outcome again. Hence a logical fault f on the logical measurement can be constructed by applying a Pauli error Xti j and measurement errors on all incident X and Y checks on qubit j from time ti+ 1 2 to time to+ 1 2, and a final Pauli error Xto+1 j . This is a spacetime stabiliser of the original code, as it is composed of local spacetime stabilisers at times t = ti, ti + 1, · · · , to, hence the fault does not violate any detectors. Each round of measurement of N i∈v Zi will be flipped by the original Xti j error, and the outcome of N i∈v Zi is then flipped back after the measurement procedure concludes. The fault f is then Xto+1 j Xti j S ti≤t≤to M t+ 1 2 j , where M t+ 1 2 j is a measurement fault on each Y or Z check incident to qubit j at time t + 1 2. Because the code is LDPC, the fault at each round has weight bounded above by a con- stant, where the precise weight depends on the number of Y and Z checks incident to qubit j, but this cannot increase the weight asymptotically. Because the logical measurement is performed for o(d) rounds, the weight |f| ∈o(d). Note that if the correct logical measurement outcome is known in advance, there is an additional detector at each round of N i∈v Zi and so the above argument does not 92 hold. Similarly, the reason the proof no longer applies when measuring multiple logical operator representatives is because there are additional detectors which appear because the measurements of each representative of the same logical operator must agree with each other to be correct; equivalently, these additional detectors correspond to sets of measurements whose correct outcomes are known in advance, as they are stabilisers of the original code. Both of the above proofs are insensitive to the structure of Aux, which could be a measurement hypergraph in surgery, an ancilla block in homomomorphic measurement or any other logical measurement gadget. 93
Fast and fault-tolerant logical measurements: Auxiliary hypergraphs and transversal surgery Alexander Cowtan1, Zhiyang He2, Dominic J. Williamson3,4, and Theodore J. Yoder5 1 1 3QD, UK 2 02139, USA 3 2006, Australia 4IBM Quantum, IBM Almaden Research Center, San Jose, CA 95120, USA 5IBM Quantum, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA Quantum code surgery is a promising technique to perform fault-tolerant computation on quantum low-density parity-check codes. Recent developments have significantly reduced the space overhead of surgery. However, generic surgery operations still require O(d) rounds of repeated syndrome extraction to be made fault-tolerant. In this work, we focus on reducing the time overhead of surgery. We first present a general set of conditions that ensure fault-tolerant surgery operations can be performed with constant time overhead. This fast surgery necessarily makes use of an auxiliary complex described by a hypergraph rather than a graph. We then introduce a concrete scheme called block reading, which performs transversal surgery across multiple code blocks. We further investigate surgery operations with intermediate time overhead, between O(1) and O(d), which apply to quantum locally testable codes. Finally, we establish a circuit equivalence between homomorphic measurement and hypergraph surgery and derive bounds on the time overhead of generic logical measurement schemes. Overall, our results demonstrate that reducing the time cost of code surgery is not reliant on the quantum memory being single-shot. Instead it is chiefly the connectivity between a code and its measurement ancilla system that determines the achievable measurement time overhead. 1 Introduction Quantum error correction [1-5] is an essential building block in any fault-tolerant quantum computer that is capable of running large-scale applications. The surface code and other topological codes [6-10] have been extensively studied due to their rich mathematical structure and strong practical performance. Despite their many desirable properties, topological codes incur an appreciable space overhead [11] when performing fault-tolerant quantum computation (FTQC). This motivates the study of quantum low-density parity check (QLDPC) codes, which can achieve better parameters, notably constant encoding rate, in both the asymptotic and near-term regimes [12]. QLDPC codes possess the practically desirable LDPC property - each check (or detector [13]) involves a constant number Alexander Cowtan: 1 16 Oct 2025 of qubits and each qubit is in a constant number of checks. Following exciting advancements in both theoretical code design [14-18] and experimental qubit connectivity [1921], QLDPC codes are now competitive candidates to serve as low-overhead fault-tolerant quantum memories [22, 23]. To realize FTQC in low space overhead with QLDPC codes, we need to design schemes for performing logical computation on their encoded qubits. This is a flourishing area of research with many recent works. Popular techniques include constant-depth unitary circuits [23-36], resource state preparation and teleportation [37-46], and code switching [47-69]. Here, we focus on the technique of QLDPC code surgery, which has seen rapid recent developments [52-67] due to its general applicability. On a high level, code surgery is a technique for performing Pauli measurement on logical qubits encoded in a stabiliser code by introducing an ancilla system, consisting of new qubits and stabiliser generators, to deform the code in such a way that the chosen logical operators are decomposed into products of (newly introduced) stabilisers. The deformation procedure can be viewed as gauge fixing or weight reduction [70-74]. Code surgery can be made faulttolerant by repeatedly measuring all new stabiliser generators for a sufficient number of rounds (typically taken to be the code distance d) to prevent timelike errors. After obtaining the logical measurement outcome, the ancillary qubits are individually measured out to return to the original code. Unfortunately, measuring all stabiliser generators for d rounds incurs a substantial time overhead when compared to protocols that use constant-depth gates. For some computations this cost can be substantially ameliorated by using Pauli Based Computation [75] to reduce the number of gates that must be implemented [62, 76]. However, in the worst case a large time overhead remains. While a growing number of code families are known to be single-shot decodable [77-83], only high-dimensional topological codes are known to admit single-shot surgery [60]. Overall, prior to this work we lacked a rigorous understanding of the time overhead required for the fault-tolerance of surgery operations, except that d rounds is typically sufficient. In this work, we develop a theory of fast quantum code surgery, formulating the conditions needed for surgery operations to be fault-tolerant with only a constant number of rounds of error correction. Notably, the base memory code does not need to be singleshot decodable. As our primary example, we present a protocol called block reading which enables transversal surgery across identical CSS code blocks in constant space and time overhead. We discuss our main results in Section 1.2, and discuss how our work relates to prior work in the literature, including independent works which appeared shortly before ours [66-69], in Section 1.3. 1.1 Setting In this work, we aim to preserve the LDPC property of the codes and detectors throughout, and highlight where this is not guaranteed. However, we ignore any further specific connectivity requirements of potential physical implementations. We assume a phenomenological Pauli and measurement noise model in our notion of fault tolerance. We focus primarily on asymptotics rather than specific constant factors. We do not concern ourselves with decoders for block reading, which depend greatly on the initial CSS codeblocks. We leave this important question to future work. 2 1.2 Summary of main results To discuss our main results, we briefly review the usual setup of generalized code surgery, and refer readers to Section 2 for a more thorough introduction. One starts with, in general, a collection of memory code blocks with encoded logical qubits, on which the aim is to perform fault-tolerant logical Pauli measurements. One constructs an ancilla system of qubits and stabilisers, which can be specified by a hypergraph [56, 58]. One then connects this ancilla system to the memory blocks, in the process modifying the stabilisers of the ancilla system and the memory blocks according to the connections. The modified set of stabilisers is an abelian group, which specifies a stabiliser code (the deformed code). Certain logical operators from the memory blocks become products of stabilisers in this deformed code, which means they are measured when we measure these new stabilisers. By measuring the deformed code stabilisers for a sufficient number of rounds (typically, in prior work, d rounds, where d is the distance of the memory codes), one can deduce the logical measurement results fault-tolerantly. Finally, one measures out the ancilla system to return to the original code. The operators measured depend largely on the ancilla system (equivalently the hypergraph) and its connections to the memory. This formulation is general enough to capture all prior works on code surgery, see Section 3.2 of [62] for a brief review. In this work, we augment the above formulation by associating a 4-term chain complex with the hypergraph H = (V, E). Specifically, let C be a basis of cycles of H, where a cycle is a collections of hyperedges which include every vertex in V an even number of times. Let W be a set of O(1)-sized components of H, where a component is a collection of vertices which intersect every hyperedge on an even number of vertices.1 Then H• = FW 2 FV 2 FE 2 FC 2 δ2 δ1 δ0 is a 4-term cochain complex, where the coboundary maps δi are specified by the inclusion relations of W, V, E, C. Note that when H is a simple connected graph (i.e., every hyperedge is a regular edge), the only component is the set of all vertices V . Typically when treating cell complexes the above coboundary maps would instead be boundary maps: we flip this convention to suit our conventions in the main body. In prior works on surgery the focus was on the 3-term complex without W, which fully specifies the ancilla system. By including W, we can now consider the 1-cosystolic distance of H•, which is d• 1 = min{|u| : u ∈ker(δ2) (δ1)}. Our main result states that t ≥d hypergraph surgery operations, all satisfying certain distance conditions, can be performed sequentially with each operation taking a constant time, such that the overall protocol performs t distinct logical operations fault-tolerantly in O(t) time. Theorem 1.1 (Fast hypergraph surgery, informal statement of Theorem 5.3). Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks, each with distance at least d. Let t ≥d and H• 1, · · · , H• t be sparse chain complexes from hypergraphs which each define a Z-type surgery operation on the memory blocks, such that • All H• i have 1-cosystolic distances at least d, and 1Note here that W is a set of our choice and is often not a basis of all components of H. 3 • The compacted code, which is a CSS code defined by the t surgery operations, has distance at least d. Then the t surgery operations can be performed sequentially with O(1) time each. The phenomenological fault-distance of the protocol is at least d and the LDPC property is preserved throughout. We elaborate on the definition of the compacted code as well as other subtle conditions in the main text, but note here that there exist simpler sufficient conditions to require of each surgery operation individually or of the original code that together ensure the distance of the compacted code is large. Importantly, we consider the fault-tolerance of performing t ≥d surgery operations in time O(t), instead of performing one surgery operation in time O(1). This is because, strictly speaking, performing one surgery operation in time O(1) is not fault-tolerant. We did not assume that the memory code blocks are single-shot decodable, and the deformed codes are not necessarily single-shot decodable. Therefore, the syndrome information from O(1) rounds of noisy measurements is insufficient for fault-tolerance. In contrast, performing t operations in O(t) time generates enough syndrome information to ensure the whole protocol has phenomenological fault-distance at least d. In this work, we refer to this as amortised constant time. This theorem formulates general conditions on surgery operations which enable constanttime-overhead logical gates. Hypergraph surgeries that measure stabilisers and logical operators of a code are forced to incorporate its local structure, up to changing the choice of generators. In the remaining parts of this work, we study a concrete fast surgery operation which we call block reading. In its simplest form, given two identical code blocks of a CSS QLDPC code Q, block reading acts transversally on all pairs of data and check qubits of the two blocks and measure all pairs of logical qubits in the Z ⊗Z (or similarly X ⊗X) basis. It does so by taking the auxiliary complex (or hypergraph) to be the code Q itself, which by assumption has 1-cosystolic distance at least d. This example can be easily generalized to act on more than two code blocks and to measure more sophisticated Pauli product operators, including the checks of any CSS QLDPC code, transversally. We term this protocol full block reading. Theorem 1.2 (Full block reading, informal statement of Theorem 3.12). Let Q, Q′, Q′′... be a set of c identical Jn, k, dK CSS LDPC codeblocks. Then a set of sparse logical Z Pauli operators across Q, Q′, Q′′... can be measured in parallel using O(cn) additional qubits and amortised O(1) time overhead. Every measurement acts transversally on all k logical qubits in a given codeblock. The phenomenological fault-distance of the protocol is at least d, and the LDPC property is preserved throughout. Two important remarks are in order. First and foremost, parallels can be drawn between block reading and Steane-style error correction, or more generally logical measurements performed via ancillary code blocks and transversal CNOT gates. In terms of logical action, full block reading has the same effect as measurements via transversal gates, except with a code-deformation procedure in contrast to a unitary circuit. Notably, however, the latter protocol usually assumes the ancillary code block is fault-tolerantly prepared, which in general requires O(d) time overhead. The more subtle result of algorithmic fault-tolerance [84] showed that this need not be the case, and an ancillary code block prepared in O(1) time can still be used for measurements via transversal CNOT, such that the whole computation is fault-tolerant even though this individual operation is not. Our result on full block reading is similar to algorithmic fault-tolerance, in that 4 we prove d surgery operations can be performed fault-tolerantly in O(d) time despite the fact that a single operation performed in constant time is not necessarily fault-tolerant. Notably we introduce new surgery gadgets and develop a very different proof to Ref. [84]. This establishes a curious connection between transversal surgery and transversal gates. Secondly, it is instructive to consider the simplest case of full block reading, where there is a single memory block and we measure all logical qubits in the Z basis. This case mirrors Steane error correction, and showcases why a single fast hypergraph surgery operation is not necessarily fault-tolerant: if it were, we would have constant space-time overhead state preparation of arbitrary CSS QLDPC codes (see Remark 3.10)! Of course, full block reading with constant syndrome rounds applied to a non-single-shot code generates a frame error akin to that of Steane error correction with an ancillary code block (under-)prepared with constant syndrome rounds. This frame error is only eliminated in the subsequent O(d) time steps. Therefore, this is one of the reasons that performing either scheme in constant time is not strictly speaking fault-tolerant. With full block reading as a motivating example, we study more flexible fast hypergraph surgery operations by taking subcodes of the memory code and thickening them appropriately. We call this partial block reading, which acts transversally on all physical and logical qubits contained in the subcodes. Theorem 1.3 (Partial block reading, informal statement of Theorem 4.5 and Corollary 7.13). Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d. Let A be a subcode of each of the codeblocks Q, Q′, Q′′... with length nA and distance at least d. Then a suitable hypergraph H can be constructed using O(nAd) additional qubits. When A has soundness ρA this bound can be reduced to O( nA ρA ), assuming the existence of a sparse cycle basis. The precise notions of subcode, soundness and sparse cycle basis are defined in the main text. As usual in the study of code surgery, the factor O(d) space overhead is a worst case upper bound. When the subcode has suitable expansion properties, this space overhead can be significantly reduced, all the way to O(1) in some cases. We expect the space overhead of partial block reading to be highly optimizable in practice, a problem we leave to future work. Beyond constant time overhead surgery, we further study the case where the conditions of Theorem 1.1 are not satisfied, yet surgery can still be performed in lower than O(d) time. We show that when the memory codes have soundness properties, then partial block reading can be performed with less than O(d) syndrome rounds (in amortisation) when the subcode A has distance lower than d. Theorem 1.4 (Intermediate time hypergraph surgery, informal statement of Theorem 6.5). Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and Let t ≥d and H• 1, · · · , H• t be sparse chain complexes from hypergraphs which each define a surgery operation on the memory blocks, such that • Each auxiliary complex H• 1, · · · , H• t has 1-cosystolic distance di ≥ 1 αi d, • The deformed code each has distance at least d, and • The original codes have constant soundness. Then the t surgery operations can be performed sequentially with ⌈αi⌉time each. The phenomenological fault-distance of the protocol is at least d and the LDPC property is preserved throughout. 5 Our hypergraph surgery constructions rely on homomorphic chain maps between the auxiliary complex and the memory. This fact draws strong parallel to the technique of homomorphic measurement [42], which is a logical measurement scheme that employs transversal gates and generalizes Steane error correction. We formalize this connection by proving a general circuit equivalence between surgery and homomorphic measurement. Theorem 1.5 (Informal statement of results in Section 9). Let f define a homomorphic measurement protocol. Then the circuit for f can be rewritten via ZX calculus to an equivalent hypergraph surgery protocol and vice versa. Nevertheless, there are several subtleties, and this equivalence does not generally preserve the fault distance of the measurement protocols. At a high level, the data (check) qubits in an ancillary block for homomorphic measurement correspond to new check (data) qubits in the corresponding surgery protocol respectively. As a consequence, timelike errors in surgery protocols can be resolved by measuring stabilisers for multiple rounds. However, the corresponding spacelike errors on a homomorphic measurement ancilla code are more difficult to resolve in a similar manner. This is why surgery protocols tend to be more flexible in addressing logical qubits: one can create a highly targeted ancilla system with low measurement fault distance, and then boost that measurement fault distance by measuring for multiple rounds. Finally, we complement our constructions with a structural upper bound on the faultdistance of any logical measurement scheme, including surgery, homomorphic measurement and other similar protocols. The idea for this bound is based on quantifying the smallest undetectable fault that flips the logical representatives being measured while avoiding any ancilla system, see Proposition 9.3. This logical fault corresponds to a spacetime stabiliser in the fault-tolerant memory on the code without any logical measurement, that becomes a nontrivial logical fault when the code is measured. Here, we present a simplified statement of this general bound. Theorem 1.6. Any logical measurement protocol performed in o(d) rounds on a quantum LDPC code by an auxiliary system requires connections to more than one logical representative to maintain phenomenological fault distance Ω(d), unless the correct logical measurement outcome is known in advance. In other words, reducing the time cost of code surgery is not reliant on the quantum memory being single-shot, nor does fast logical measurement require transversal gates between codeblocks. Instead it is chiefly the degree of connectivity between a code and its measurement ancilla system that determines the achievable measurement time overhead. 1.3 Related work Code surgery is an active topic of research, and our work has many connections to the literature. Notably, Ref. [67] appeared on arXiv shortly before this paper. Our works were developed independently, but share important results in common. We briefly discuss the areas of overlap and difference. Most importantly, Theorem 5 of Ref. [67] states that a surgery operation defined by a sparse chain complex and a sparse chain homomorphism can be performed in constant time overhead fault-tolerantly if the auxiliary complex has suitable expansion properties. This is similar to our Theorem 1.1 (formally Theorem 5.3), with a few points of distinction: 1. We study the fault-tolerance of performing t ≥d suitable surgery operations in O(t) time, and claim that the time overhead of each logical measurement is constant in amortisation. In contrast, Ref. [67] study the fault-tolerance of performing 6 one suitable surgery operation in constant time. As we argued, this operation is not fault-tolerant by itself, and Ref. [67] remedies this by assuming that the syndrome information on the memory is noiseless before and after the constant time surgery operation (see their Section IIIC). This is a strict assumption in the context of single-shot operations - for generic quantum codes, producing reliable syndrome information requires O(d) rounds of syndrome measurement. In this work, we consider a protocol where we start and end with O(d) rounds of syndrome measurement on the memory, and perform t ≥d hypergraph surgery operations in between. We demonstrate that each hypergraph surgery operation generates a round of syndrome information on the memory blocks, and accumulating these rounds guarantees faulttolerance of the full protocol. 2. Ref. [67] asks for the auxiliary complex to have suitable expansion properties, which is a sufficient but not necessary condition for the deformed code to have high distance. From this perspective our requirements are less stringent. Notably, full block reading for generic codes does not satisfy the expansion conditions in Ref. [67]. On the other hand, for partial block reading we prove that a similar expansion condition can satisfy the second condition of Theorem 1.1. Both Ref. [67] and our work show that the desired expansion can be achieved by taking a tensor product with a repetition code. The idea of using expansion to improve deformed code distance, and the idea of boosting expansion with a repetition code are both standard in the study of code surgery and weight reduction [52, 55, 56, 58, 72]. Our works are largely distinct and complimentary otherwise. As our primary examples we study full and partial block reading (Theorems 1.2 and 1.3), which perform transversal measurement of all logical operators contained in a (sub)code. Ref. [67] provides constructive, addressable surgery operations, capable of measuring one or more logical operators, using techniques including brute-force branching and auxiliary graph surgery [59, 61]. Ref. [67] considered concrete examples on abelian multi-cycle codes and demonstrated promising numerical simulation results. We further study the case of hypergraph surgery with intermediate time overhead (Theorem 1.4), where the conditions of Theorem 1.1 are not satisfied yet soundness of the memory codes enables fault-tolerant surgery in less than O(d) time.2 Our results on circuit-equivalence between surgery and homomorphic measurement, Theorem 1.5, and on the connectivity requirements for fast logical measurement, Theorem 1.6, have no counterparts in Ref. [67]. There are several other recent independent works which share some overlap with our results. Ref. [65] also considered the use of hypergraphs for code surgery. They explored randomized constructions demonstrating promising practical performance. While they did not explore the time overhead of these surgery operations, they stated a similar condition to boost the distance of the deformed code with expansion properties of the hypergraph (see their Lemma 2). In parallel, Ref. [68] and [69] developed single-shot dimension jumping techniques3 for three-dimensional homological product codes. Relatedly, Ref. [66] developed single-shot code switching techniques for more general homological product codes. In its general form, full block reading can be seen as measuring, for O(1) rounds, the stabilisers of a 2Ref. [67] has a brief remark with a similar observation. 3Dimension jump is a special form of code switching, where one switch from (multiple copies of) lower dimensional codes to (fewer copies) of higher dimensional codes and vice versa. 7 homological product code: that of the memory code and the pattern matrix.4 Our results show that this protocol can be combined with further hypergraph surgery protocols to be made fault-tolerant. In comparison, Refs. [66, 68, 69] all proved stronger fault-tolerance guarantees in their specific settings. They also developed techniques to switch from the resultant homological product code to other codes, which enable a range of interesting logical computation gadgets. We remark that the aforementioned works [65-69] all have other contributions not mentioned in our brief discussion above. Connecting to prior literature, while conventional quantum LDPC code surgery takes as input the support of a logical operator to be measured [52-56, 58], hypergraph surgery must incorporate the local structure of the code on the region being measured. We focus on the simplest choice of hypergraph corresponding to the subcode itself, namely block reading. This is similar to the CSS code surgery described in Ref. [64], as well as the finelydevised sticking in Ref. [59]. Our work is also related to Ref. [60], in which single-shot surgery with 3D and 4D toric codes was studied using the formalism of fault complexes. Our proofs in Appendix A use this formalism, and extend it to include deformations on spacetime volumes. We are aware of upcoming work in which the time overhead of lattice surgery with surface codes is reduced by using the complementary gap to estimate confidence in a surgery protocol and stop early if the confidence is high [85]. Our work is grounded in a different setting, concerned with phenomenological fault-distance and general CSS LDPC codes, and focuses on reducing the time overhead asymptotically. 1.4 Organization This work is lengthy and our results involve many subtleties. For the sake of the reader, we have deferred the majority of technical proofs to the appendices and aimed to build up complicated results from simpler ones. In Section 2, we review important background and definitions. In Section 3, we introduce full block reading as the motivating example, and demonstrate fault-tolerance guarantees with amortised constant time overhead surgery operations. In Section 4, we study partial block reading. In Section 5, we generalize this to hypergraph surgery. For readers familiar with the techniques of code surgery, we recommend skimming Sections 3, 4 and 5 on an initial reading. In Section 6, we study hypergraph surgery operations with intermediate time overheads, between O(1) and O(d). This type of surgery is enabled by quantum locally testable codes with constant soundness. In Section 7, we define a novel generalisation of the notions of both soundness and relative expansion from Ref. [57] which we call modular expansion. We show that hypergraphs with modular expansion can be used to perform hypergraph surgery in lower space overhead. In Section 8 we demonstrate our block reading schemes on a variety of examples. We show that our formalism captures fast surgery with topological codes such as 4D toric codes, and we describe block reading with 2D topological codes and bivariate bicycle codes [86]. Finally, in Section 9 we prove the circuit equivalence between homomorphic measurement and surgery, and prove an upper bound on the phenomenological fault distance for generic logical measurement protocol on quantum LDPC codes. 4More generally the pattern complex, if we use full block reading for both X and Z basis transversal logical measurements. 8 2 Preliminaries 2.1 Stabiliser codes and Tanner graphs Definition 2.1. Let Pn be the Pauli group over n qubits. A qubit stabiliser code Q is specified by an Abelian subgroup -1 /∈S ⊂Pn such that the codespace H is the mutual +1 eigenspace of S, i.e. U |ψ⟩= |ψ⟩ ∀U ∈S, |ψ⟩∈H. We say S is the stabiliser group for Q. A qubit stabiliser code Q can be specified by a stabiliser check matrix H = [HX|HZ] ∈Fr×2n 2 , such that H 0 1 1 0 ! H⊺= 0, where a row [u|v] corresponds to a generator of the stabiliser group, and therefore check on Q, iu·vX(u)Z(v), for u, v ∈Fn 2. Definition 2.2. A qubit CSS code Q is a qubit stabiliser code where the generators of S can be split into two sets SX and SZ. SX contains Pauli products with terms drawn from {X, I} and SZ terms drawn from {Z, I}. Thus there is a stabiliser check matrix H for Q such that H = HX 0 0 HZ ! , HXH⊺ Z = 0. Definition 2.3. A CSS low-density parity check (LDPC) code family is a family of CSS codes such that the column and row weights of HX and HZ are bounded above by some constant σ. Logical operators in a stabiliser code are Pauli products that commute with all the checks of the code. As the stabiliser group is Abelian, all stabilisers are equivalent to the trivial logical operator. The code distance d of Q is the minimum Pauli weight of all the nontrivial logical operator representatives. Definition 2.4. A Tanner graph is a bipartite graph with a vertex for each qubit and each check. A qubit is connected to each check in which it participates with an edge labeled [1|0], [0|1], or [1|1] depending on whether the check acts on the qubit as X, Z, or Y . In our convention, data qubits are black circles and checks are boxes. If a check only measures Z or X terms then we omit the edge label, and instead label the box with a Z or X. Therefore for a CSS code we have no edge labels, and all boxes contain either Z or X labels. In this case, the condition that all stabilisers commute is the same as saying that the number of paths between every Z and X check is even. Depicting large Tanner graphs directly becomes unwieldy, so we use scalable notation5 to make them more compact. Qubits are gathered into disjoint named sets Q0, Q1, Q2, ..., and the same for checks C0, C1, C2, ... . An edge between Qi and Cj is then labeled with the stabiliser check matrix of Q restricted to Qi and lifted to Cj, so we have [CX|CZ] ∈F|Cj|×2|Qi| 2 . 5Terminology borrowed from Ref. [87]. 9 [HX|HZ] (a) A generic stabiliser code, X Z HX HZ (b) A CSS code, X Z X L ∂1 ∂T 1 ∂1 I I I I (c) A hypergraph product code. Figure 1: Examples of scalable Tanner graphs. If that check matrix is all-zeros then omit the edge entirely. If the checks are all of the same type then we omit the all-zeros half of the check matrix; for example if Cj contains only Z checks then we label the check box Z and the edge [CZ] ∈F|Cj|×|Qi| 2 . For example, in Figure 1 we show some basic scalable Tanner graphs with qubit sets drawn as circles and check sets drawn as boxes. We have (a) a generic stabiliser code, (b) a CSS code, and (c) the hypergraph product [14] of a classical code C, with bits L and check matrix ∂1, by a repetition code R with blocksize 2 and check matrix 1 1 . We use scalable Tanner graphs to describe our procedures. 2.2 CSS codes and chain complexes The well-known CSS code-homology correspondence relates quantum CSS codes to chain complexes, allowing for the study of various properties using the techniques of homological algebra [17, 29, 53, 58, 64, 88, 89]. As we employ this formalism to study block reading, we give a brief summary of this correspondence here. A based chain complex over F2, C• = · · · Cl Cl-1 Cl-2 · · · ∂l+1 ∂l ∂l-1 ∂l-2 is a Z-graded vector space L i∈Z Ci over F2 equipped with a basis at each degree i, and boundary operators ∂i : Ci →Ci-1 such that ∂i-1 ◦∂i = 0, ∀i ∈Z. Equivalently, im∂i+1 ⊆ker ∂i. The homology space at degree i is Hi(C•) = ker ∂i/im∂i+1. A CSS code must satisfy the property that HXH⊺ Z = 0, and therefore we can view an Jn, k, dK CSS code as a chain complex with 3 nonzero terms: C• = C2 C1 C0 ∂2 ∂1 where explicitly at each degree we have C• = FmZ 2 Fn 2 FmX 2 H⊺ Z HX An undetectable Z operator must have support on a vector v ∈ker HX, and so every such operator is of the form Z(v). Z stabilisers are elements of imH⊺ Z. Thus equivalence classes 10 of Z logical operators are elements of H1(C•) = ker HX/imH⊺ Z, the homology space of C• at degree 1. Hence k = dim H1(C•), as the number of logical qubits must be equal to the number of independent equivalence classes of Z logical operators. The Z-distance dZ of a CSS code is the weight of the smallest nontrivial Z operator, so dZ = min v∈ker HX ⊺ Z |v|. Dual to the notion of chain complex is a cochain complex, C• = · · · Cl Cl+1 Cl+2 · · · δl-1 δl δl+1 δl+2 which is defined similarly to a chain complex but with boundary operators transposed to become coboundary operators δi, over F2. The cohomology space is Hi(C•) = ker δi/imδi-1, and enjoys the isomorphism Hi(C•) ∼= Hi(C•). As HXH⊺ Z = 0, we also have HZH⊺ X = 0, hence we can equally view a CSS code as a cochain complex with 3 terms: C• = FmX 2 Fn 2 FmZ 2 H⊺ X HZ This view prioritises X operators, rather than Z operators. We then have k = dim H1(C•) and dX = min u∈ker HZ ⊺ X |u|. Furthermore, d = min(dX, dZ) as the error types are detected independently. A chain map, or homomorphism between chain complexes, is a linear map: f• : C• →D• which has a component fi : Ci →Di at each degree i, such that the following diagram commutes, · · · Cl+1 Cl Cl-1 · · · · · · Dl+1 Dl Dl-1 · · · ∂C l+1 fl+1 ∂C l fl fl-1 ∂D l+1 ∂D l (1) i.e. ∂D i+1fi+1 = fi∂C i+1 ∀i ∈Z. A dual definition applies to a cochain map f•, which can be found by taking the transpose of each map in the diagram. Definition 2.5 (Subcode). Let a subcode of the CSS code defined by a chain complex C• be a code defined by the chain complex A• with an injective chain map f• : A• →C•. We stipulate further that f• must take each data qubit in A• to a single data qubit in C•, and the same for any Z or X checks in A•.6 The additional stipulation is made to maintain control over weights, as this is essential for our analysis; if C• is LDPC, so is A•. Each matrix in the chain map f• is therefore a permutation matrix, up to some all-zero rows. We can equally take the subcode in a dual manner, viewing the original CSS code via its cochain complex C• and stipulating that there is a similar cochain map g• : A• →C•. In this work, it is always made clear from context whether the subcode is taken in the chain (Z) or cochain (X) sense. 6In Ref. [53] this property of f• is called basis-preservation. 11 2.2.1 Tensor products The tensor product of (co)chain complexes is inherited from the tensor product of Z-graded vector spaces. Namely, Definition 2.6. [90, Sec. 2.7] Let C•, D• be chain complexes over F2. Define (C ⊗D)• with components (C ⊗D)l = M i+j=l Ci ⊗Dj where the latter tensor product is the normal tensor product of vector spaces. Differentials between components are given for ∂(C ⊗D) l by matrices ∂C i ⊗I I ⊗∂D j ! for a given i, j, then stacked horizontally for each term i, j | i + j = l, and vertically for each i′, j′ | i′ + j′ = l -1. One can check that ∂(C ⊗D) l-1 ◦∂(C ⊗D) l = 0 (mod 2), as desired. Lemma 2.7 (K ̈unneth formula). [90, Thm 3.6.3] Hl(C ⊗D) ∼= M i+j=l Hi(C) ⊗Hj(D) That is, the homology spaces factor through the tensor product conveniently. Both the tensor product and K ̈unneth formula can be applied dually to cochain complexes. By virtue of the fact that we are working with finite-dimensional vector spaces with explicit bases, we can strengthen the K ̈unneth formula: Lemma 2.8. [91, Prop. 1.13] Hl(C ⊗D) = M i+j=l Hi(C) ⊗Hj(D)7 Proof. We first observe that any basis element of Hi(C) ⊗Hj(D) for i + j = l, which is an equivalence class of elements in Ci ⊗Dj, is an element of Hl(C ⊗D), and that linearly independent elements remain independent. Then by a counting argument, Hl(C ⊗D) = L i+j=l Hi(C) ⊗Hj(D). As CSS codes can be viewed as (co)chain complexes, the tensor product can be used to construct new codes [14, 91]. 2.2.2 Mapping cones Given a chain map f• : A• →C• over F2, the mapping cone of f• is defined to be the chain complex with spaces cone(f)i = Ci ⊕Ai-1 and maps ∂cone(f) i : cone(f)i →cone(f)i-1 given by Ai-1 Ai-2 Ci Ci-1 fi-1 ∂C i ∂A i-1 ⊕ ⊕ 7Strictly speaking this is still an isomorphism, but the isomorphism is canonical. 12 ∂cone(f) i = ∂C i fi-1 0 ∂A i-1 ! . The homology of mapping cones can be studied using exact sequences and the Snake Lemma [90, Lem. 1.3.2], see App. A. Using chain maps and mapping cones, one can construct new quantum codes. In particular, Z logical measurement of a CSS code C• can be seen as constructing a particular auxiliary system A• and then using the chain map f• to build the deformed code cone(f•) [58]; X logical measurements are then mapping cones from cochain maps. This perspective originates from the quantum weight reduction of Hastings [71, 72]. 2.3 Metachecks and soundness Typically, quantum codes require O(d) rounds of error correction for every logical timestep in order to maintain fault-tolerance. This is because in addition to errors occurring on data qubits, measurement errors can occur on checks. As checks are not typically protected against measurement errors, one must measure repeatedly in order to diagnose such errors successfully. This can be seen as constructing a spacetime code whose timelike component is a repetition code [60]. There are, however, quantum codes that do have protection against measurement errors [77-82]. Although there are multiple forms of such protection, here we focus on the approach that uses metachecks [78]. A metacheck is a set of stabilisers that has a product of outcomes known to be +1 in advance, even in the presence of faults on data qubits. Hence a metacheck is a detector [92] on the spacetime code, which detects solely measurement errors. It is possible to construct codes with sufficient metacheck structures, such that the minimum number of faulty measurements in a single error-correction round required to cause a logical fault is the same as the code distance d. Such codes can be single-shot codes, which in principle require only a single error-correction round against adversarial noise [78]. Single-shot LDPC codes with metachecks require redundancy in their stabiliser generators and so can be expensive to construct, as having enough redundant checks to yield sufficient detectors for protection against up to d measurement errors can entail a larger space overhead. Given a scalable Tanner graph for a CSS code, we can incorporate metachecks with new sets of vertices and edges: X Z HX HZ MZ MX where blue (red) diamonds represent metachecks on Z (X) checks respectively, and MZ (MX) is the F2-linear map taking a syndrome outcome to its corresponding set of metachecks. Equivalently, metachecks can be viewed as additional terms in the (co)chain complex of a code: C• = C3 C2 C1 C0 C-1 ∂3 ∂2 ∂1 ∂0 C• = C3 C2 C1 C0 C-1 δ2 δ1 δ0 δ-1 where C3 = C3 represents metachecks on Z checks and C-1 = C-1 represents metachecks on X checks. 13 In scalable Tanner graph notation, maps extend outwards from data qubits, to checks and then metachecks. In (co)chain complexes, maps begin at the lowest or highest degree and point towards the higher or lower degree respectively. Definition 2.9. [78] Let Q be a CSS code. The Z-single shot distance is dss Z = min v∈ker MZ |v| = min v∈ker δ2 δ1 |v| and is the weight of the smallest Z-measurement fault that is undetectable by Zmetachecks and is not equivalent to an X fault on data qubits. Similarly, dss X = min u∈ker MX |u| = min u∈ker ∂0 ∂1 |u| The single-shot distance of Q is therefore dss = min(dss X, dss Z ), as the Z and X checks are independent. If ker ∂0 ∂1 = ∅then by convention we say that dss X = ∞, and similar for dss Z . Like logical operators, measurement faults can be partitioned into equivalence classes, in H2(C) and H0(C) for Z measurement errors and X measurement errors respectively. Even if the single-shot distance is at least the distance of the code, i.e. dss ≥d, it is still possible for a low-weight measurement fault to prevent fault-tolerant single-shot error correction. This is because there could be a low-weight measurement fault which is equivalent to a fault on data qubits, but that fault is very large. Hence the measurement fault may be interpreted by the decoder as a large fault on data qubits, and a large, erroneous recovery operator applied. As a consequence, the next logical cycle would require only a small fault on data qubits to cause a logical error. This motivates the study of LDPC codes with good soundness [78, 93, 94], where every valid small set of syndrome outcomes has a sufficiently small set of data qubit errors which produces them; thus a minimum distance decoder will not misinterpret a small measurement fault as a large data qubit fault. There are several different definitions of soundness for both classical and quantum codes. For classical codes, we adopt a simple combinatorial definition of local-testability from Ref. [95, Def. 11]. Definition 2.10 (Local testability). A binary linear code C is (ω, ρ)-locally testable if it has a parity-check matrix H : Fn 2 →Fm 2 with rows of weight at most ω such that for any vector v ∈Fn 2, 1 m|Hv| ≥ρ nd(v, C) where d(v, C) = minx∈C(|v + x|) and | · | is the Hamming weight. The values ω and ρ are the locality and soundness of the code respectively. For quantum stabiliser codes, we follow Ref. [96]. For an n-qubit quantum code Q, let Qt = Span{(A1 ⊗· · · ⊗An) |ψ⟩: |ψ⟩∈Q, #{i ∈[n], Ai ̸= I} ≤t} be the t-fattening of Q, which is the set of states with distance at most t from the codespace. Then define DQ = X t≥1 t(ΠQt -ΠQt-1). where ΠQt is the projector onto the space Qt. 14 Definition 2.11. A qubit stabiliser code Q with generators S1, · · · , Sm is locally testable with locality ω and soundness ρ if all generators have weight at most ω and 1 m m X i=1 1 2(I -Si) ⪰ρ nDQ where A ⪰B means the operator A -B is positive semidefinite. Lemma 2.12. [96, Fact 17] A quantum CSS code CSS(HX, HZ) with parity-check matrices HX, HZ is quantum locally testable with soundness ρ if ker HX, ker HZ are classical locally testable codes with soundness ρ. Conversely, if CSS(HX, HZ) has soundness ρ then ker HX, ker HZ are classical locally testable codes with soundness at least ρ/2. Remark 2.13. Note that Def. 2.11 is subtly different from the version of soundness given in Ref. [78], and is closer to the definition of confinement in Ref. [82]. Intuitively, the version in Ref. [78] stipulates that every valid small set of syndrome outcomes has a sufficiently small set of data qubit errors which produces them. Def. 2.11 and confinement instead stipulate that every small set of data qubit errors produces a sufficiently large set of syndrome outcomes. See Ref. [82] for more discussion about the distinction between these definitions. Def. 2.11 aligns more closely with the definition of soundness for classical codes, so is preferable for our purposes. It is well-known that a quantum code with both sufficiently high single-shot distance and sufficient soundness, of a certain type, can be decoded in a single-shot manner against adversarial noise using a minimum-weight decoder [78], and similar results apply too to confinement [82]. We remark that the development of quantum locally testable codes with good parameters is an ongoing line of research [97-100]. In summary, a metachecked code being single-shot is reliant on the decoder and noise model, in addition to other code properties than just the single-shot distance. For phenomenological adversarial noise, a minimum weight decoder is sufficient when the codes have sufficient properties related to soundness [78, 82]. In this work we do not require the quantum memory to be single-shot, or even have high single-shot distance, but we make use of the general framework of metachecks, soundness and their relation to homology. 2.4 Quantum code surgery Generalised lattice surgery, or quantum code surgery, is a method of performing computation by code deformation, see Section 3.2 of Ref. [62] for a brief review. In this work, we presume that the codes are all stabiliser codes, although one can perform surgery with non-Abelian codes [101-105]. We also focus on logical measurements by the introduction of an auxiliary system, while surgery can be performed in other ways to implement a larger class of operations [53, 64, 106, 107]. Those exceptions aside, all prior works in quantum LDPC code surgery are known to be unified by the construction of a measurement hypergraph [56, 58, 62]. In brief, for a given initial quantum memory Q and logical measurement to be performed, we construct a hypergraph H(V, E), consisting of vertices V, edges E and cycles.8 Edges are associated to new data qubits, and there are two types of new stabiliser checks: vertex checks, whose measurement outcomes are used to infer the logical measurement outcome, and cycle checks, which are present to gauge-fix and thus prevent any new logical qubits 8A cycle in a hypergraph is a collection of edges that contains every vertex an even number of times. 15 from appearing in the deformed code.9 The measurement hypergraph is connected to the initial code Q. Specifically, a subset of vertices (new checks) are connected to qubits in Q in order to perform the logical measurement, while a subset of edges (new data qubits) are connected to checks in Q such that the deformed code commutes as a stabiliser code. Definition 2.14. Let H be a hypergraph with edge-vertex incidence matrix G : F2E →F2V. A cycle basis is a full-rank matrix N : F2F →F2E such that imN = ker G. F is identified as a set of faces, or cycles, in H. We say that the cycle basis is sparse if N is a sparse matrix. Given a cycle basis of a measurement hypergraph, each cycle in the basis can be associated to a new check, which fixes the gauge of the logical which would otherwise have support on the cycle. If the cycle basis is sparse then this can be done while maintaining the LDPC property of the deformed code. There are several methods of constructing such measurement hypergraphs [52-59, 61, 62], which are described in more detail in Ref. [62, Sec. 3]. The most asymptotically efficient method currently known to measure a generic set of Pauli product operators on disjoint logical qubits is by brute-force branching and gauging logical measurements with universal adapters [61]. For an arbitrary collection of logically disjoint Pauli product measurements supported on t logical qubits, this scheme uses O tω(log t + log3 ω) ancilla qubits, where ω ≥d is the maximum weight of the single logical Pauli representatives involved in the measurements, and d is the code distance. This is all done in time O(d) independent of t, while preserving the phenomenological fault distance and LDPC property. This is a vast improvement to the original work on surgery with quantum LDPC codes [52], which required O(dω) ancilla qubits to measure a single weight ω logical operator representative10 and could not generally perform parallel measurements without losing the LDPC property. However, despite these advances, all contemporary methods require O(d) rounds of error correction per logical cycle for generic codes in order to maintain fault-tolerance11. It also means that the study of LDPC code surgery does not, prior to this work, cover the well-known cases of single-shot surgery with 3D and 4D topological codes [60]. The primary reason for this stark contrast is that, in LDPC code surgery, the measurement outcomes of new checks on vertices in the measurement hypergraph H are used to infer logical measurement outcomes; and in prior works on LDPC code surgery these new checks did not have any protection against measurement errors. When measuring a single logical operator using gauging logical measurements or homological measurements [56, 58], for example, the product of vertex check outcomes is interpreted as the logical measurement outcome. Hence, were they measured for only one round, just a single measurement error would flip the product, and constitute a logical error. As such, the checks must be measured for O(d) rounds in order to infer the correct logical measurement outcome. In this work, we focus on reducing the number of rounds required by introducing metachecks which protect against measurement errors in H. All of our protocols yield CSS-type surgery. 9These new logical qubits could have low weight logical operators, and lower the dressed weights of other operators acting on logical qubits in the code [52], which would lower the fault-distance. 10The representative was also required to be irreducible, meaning that it did not contain any other logical representatives, a problem which was fully resolved in Refs [56, 58]. 11This assumes a single representative is measured; Ref. [56] described a method to decrease the time overhead to O( d m) via parallel surgery measurement of 2m -1 equivalent logical operators. 16 Definition 2.15. A CSS-type code surgery is a surgery protocol which begins with a CSS code memory, then performs code deformation such that the code remains CSS throughout. CSS-type code surgery is a limited subset of more general code surgery, as one cannot measure e.g. Y terms or products of the form X ⊗Z while preserving the CSS property. Thus all product measurements of CSS-type each contain terms drawn from either {Z, I} or {X, I}. We remark that CSS surgery is easily extended to measure more general logical Pauli operators on codes that are self-dual. Lemma 2.16. Given any CSS code Q, measurement of Z logicals by a measurement hypergraph results in a deformed code with X-distance at least as high as the X-distance of Q, so long as all cycles are gauge-fixed or otherwise stabilisers. This is an immediate corollary of Ref. [58, Thm. 1]. A dual result follows for the Z-distance when performing measurement of X logicals. 3 Full block reading To understand and reduce the time overhead of surgery operations, we first consider full block reading, which is a surgery operation that acts transversally on all the physical qubits, and thereby logical qubits, of identical code blocks. The fact that one can perform block reading with surface codes is a widely known folklore result, as these are in a sense 'transversal' measurements between patches. Our results extend substantially beyond this simple case. We start with the simple example of two blocks. 3.1 Full block reading on two blocks Given two identical Jn, k, dK quantum memories, which are CSS LDPC codeblocks labelled Q and Q′, X Z HX HZ Q X Z HX HZ Q′ we can initialise a hypergraph H(V, E) between the two X Z HX HZ Q X Z HX HZ Q′ Z I I I I G E V (2) 17 such that new data qubits are edges, new Z checks are vertices and H has the edge-vertex incidence matrix G ∈FV×E 2 . H has 1-to-1 maps from edges to X checks in Q and to X checks in Q′, and similar for vertices. Thus in order for the checks of the deformed code to commute, we have G = H⊺ X. Let D be this deformed code. Lemma 3.1. Measuring the checks of D performs a Z ⊗Z measurement on each pair of logical qubits (qi, q′ i) for i ∈[k], where qi is a logical qubit in Q and q′ i the same logical qubit in Q′. Proof. Consider an arbitrary Z logical operator representative acting on qi, and call it Zi = Z(vi). Let Z′ i be the identical copy of Zi in Q′. By definition, vi ∈ker(HX). Multiplying Zi by the bijective Z checks P in V does not leave any support on E, as Pvi ∈ker(G⊺) = ker(HX), but does clean Zi into Q′. The operator which it is cleaned to is precisely Z′ i, hence the two logicals are now stabiliser equivalent and so we have performed a measurement of Zi ⊗Z′ i. Now because ker(G⊺) = ker(HX), the only operators which can be cleaned are of this form and so we have performed a Z⊗Z measurement on each pair of logical qubits between the two codes. Specifically, each set of Z checks in V in bijection with a logical Z operator in Q and Q′ measures the product of those operators. The logical measurement outcome is inferred as the product of the syndrome outcomes in that set. Lemma 3.2. D has no additional logical qubits. Proof. New logical qubits can only appear when performing measurement hypergraph surgery if there are cycles in the hypergraph [56, 58], which do not have corresponding faces, i.e. checks, to gauge-fix. In general if ker(G) ̸= 0 then there can exist a new X logical operator with support on u ∈ker(G). This hypergraph may have cycles, but all logical operators on these cycles are also products of X stabilisers from Q (or, alternatively and symmetrically, from Q′). In particular, given any element u ∈ker(G), multiply X(u) by the bijective X checks S in Q. Because u ∈ker(G), Su ∈ker(H⊺ X) and so the cleaned operator maps to 0 on data qubits in Q, hence is a stabiliser. We shall presently prove that the code distance is preserved by the deformation, but in order to do so it is easier to use the language of cohomology. Lemma 3.3. Let C• be the initial codeblock Q viewed as a cochain complex. Then the deformed code D has the cochain complex D• = (C ⊗R)• where R• = R0 R1 P , R0 = F2 2, R1 = F2, P = 1 1 . Proof. (C⊗R)• = C0 ⊗F2 2 C1 ⊗F2 2 ⊕C0 ⊗F2 C2 ⊗F2 2 ⊕C1 ⊗F2 C2 ⊗F2 In words, 18 • at the level of X checks we have two copies of C0, the X checks of Q, • at the level of data qubits we have two copies of C1 the data qubits of Q, and an additional set of data qubits in bijection with the X checks of Q, • at the level of Z checks we have two copies of C2 and an additional set of Z checks in bijection with the data qubits of Q, • there is another term, which we ignore for now and consider only the first 3 terms. We now check that the coboundary maps are correct. δ0 C⊗R = H⊺ X ⊗I I ⊗P ! , δ1 C⊗R = HZ ⊗I 0 I ⊗P H⊺ X ⊗I ! which we recognise as the check matrices, with appropriate transposes, from Eq. 2. Using the perspective of cochains it is easy to verify the logical dimension using the K ̈unneth formula. Moreover, by Lemma 2.8 we have the equation: H1(C ⊗R) = H1(C) ⊗H0(R) ⊕H0(C) ⊗H1(R) hence the set of X logical operators in D is given by representatives of H1(C) ⊗H0(R) ⊕H0(C) ⊗H1(R) which explicitly is ker HZ/imH⊺ X ⊗ker P ⊕C2/imHZ ⊗0 = ker HZ/imH⊺ X ⊗ker P so every X logical operator belongs to a stabiliser equivalence class [ui, u′ i] of the identical logical operators ui, u′ i in Q and Q′, or a nontrivial sum thereof. Lemma 3.4. The distance of D is at least d. Proof. As there are no new logical qubits, to prove the distance bound it is sufficient to ensure that cleaning Z logicals cannot reduce their weight below d. The X-distance is guaranteed by Lemma 2.16. For Z logicals, observe that multiplying any Z logical in Q by Z checks in V adds support to the corresponding data qubits in Q′ by a 1-to-1 map, and potentially adds support on E as well. Any Z logical which has support on both Q and Q′, say v and v′, can in the worst case be cleaned to have weight |v + v′| on Q and Q′, considering both v and v′ to be in Fn 2, and v + v′ is a valid logical of Q so must have weight at least d. As a consequence of Lemmas 3.2 and 3.4 it is guaranteed by the arguments in Ref. [56] that the block reading can be performed with fault distance d in d rounds of error correction, as the procedure can be viewed as a parallel gauging logical measurement. All new data qubits are initialised in |+⟩, and all checks are measured for d rounds, before measuring out the new data qubits in the X basis. We reduce this from d to O(1) in Section 3.3. 19 3.2 Full block reading of Z type We can extend block reading straightforwardly to acting on many codeblocks simultaneously. We start with a set of c identical Jn, k, dK LDPC codeblocks. Definition 3.5 (Pattern Matrix). Let P : Fc 2 →Fφ 2 be a full-rank matrix, which defines the pattern of logical measurements to be performed between codeblocks by deforming to a new code D. We call P the pattern matrix. Each row r of P corresponds to a set of parallel measurements to be performed. Each entry of 1 in r denotes that a codeblock is involved in that set of measurements, so if rj = 1 for column j then codeblock j is involved, and if rj = 0 then it is not. Then r specifies a measurement of P 1 i ⊗P 2 i ⊗P 3 i · · · for every logical qubit i ∈[k] in codeblocks Q1, Q2, Q3... in the basis P j ∈{Z, I} depending on whether rj = 1 or 0. If P is not full rank, and has some linearly dependent rows, then those rows correspond to logical measurements which are already performed by other rows and so are redundant. Additionally, if any row has weight 1, with the 1 in column j, that measurement is equivalent to measuring out codeblock j in the Z basis. Hence for simplicity we assume that P is full rank, with no linearly dependent rows, and that there are no rows with weight 1. Definition 3.6 (Z-type full block reading). A full block reading of Z type takes as input c CSS LDPC codes, each with cochain complex C•, and a pattern matrix P : Fc 2 →Fφ 2. The cochain complex for the deformed code D of the full block reading denoted by P is given by (C ⊗R)• where R• = Fc 2 Fφ 2 P , First we check that the definition above is consistent with the definition of a pattern matrix. We have (C⊗R)• = C0 ⊗Fc 2 C1 ⊗Fc 2 ⊕C0 ⊗Fφ 2 C2 ⊗Fc 2 ⊕C1 ⊗Fφ 2 C2 ⊗Fφ 2 where we currently ignore the last C2 ⊗Fφ 2 term. We can pick out C0 ⊗Fc 2 →C1 ⊗Fc 2 →C2 ⊗Fc 2 as the c original codeblocks. Then, δ0 C⊗R = H⊺ X ⊗I I ⊗P ! , δ1 C⊗R = HZ ⊗I 0 I ⊗P H⊺ X ⊗I ! as for the example in Eq. 2 but where P is now a more general matrix. The Z logical operators in D are then H1(C⊗R) = H1(C)⊗H0(R)⊕H0(C)⊗H1(R) = ker HX/imH⊺ Z⊗Fc 2/imP⊺⊕C0/imHX⊗ker P⊺ where ker P⊺= 0 as P has no surplus rows, so H1(C ⊗R) = ker HX/imH⊺ Z ⊗Fc 2/imP⊺. Given that our initial code had Z logical operators given by ker HX/imH⊺ Z ⊗Fc 2, we see that those in imP⊺are now stabilisers, and so block reading has measured precisely the rows of P on each logical qubit. Furthermore, no new equivalence classes are present, so there are no new logical qubits. 20 Remark 3.7. Let P be LDPC and each identical codeblock be LDPC. Then the deformed code (C ⊗R)• during measurement is LDPC, and the procedure uses O(cn) ancilla qubits. This is immediate from Definition 3.6. Lemma 3.8. The deformed code has code distance at least d. Proof. The X-distance follows from Lemma 2.16. The Z logicals are given by H1(C ⊗R) = ker HX/imH⊺ Z ⊗Fc 2/imP⊺. We start with a Z logical ΛZ composed of different Z logicals v, v′, v′′, ... in each codeblock, at least one of which must have weight at least d and be a nontrivial Z logical. Applying a new Z check in V only cleans support from the codeblocks if the incident data qubits all have support in ΛZ; otherwise the check applies at least 1 new Z to a data qubit in a codeblock, and the weight of ΛZ cannot be reduced below d in this way. If we apply new Z checks in V to clean support from ccl codeblocks, in the worst case we have support |v| + |v′| + |v′′| + · · · -ccl|v ∩v′ ∩v′′ ∩· · · | left in the codeblocks. But |v| + |v′| + |v′′| + · · · -ccl|v ∩v′ ∩v′′ ∩· · · | = |v ∪v′ ∪v′′ ∪· · · | -|v ∩v′ ∩v′′ ∩· · · | ≥|v∆v′∆v′′∆· · · | = |v + v′ + v′′ + · · · | ≥d where ∆is the symmetric difference of sets. Should the row weights of P be too high, meaning that the weights of these logical measurements are high and so the degree of the Tanner graph is high, then one can introduce ancilla blocks to break those measurements up, making P sparse. This corresponds to adding new bits, i.e., columns to the check matrix P, such that a row can be split up into multiple rows. The ancilla codeblocks used to reduce its weight can be initialised at the same time as the code deformation procedure begins, and can be considered part of the measurement hypergraph, where data qubits in ancilla blocks are edges, Z checks are vertices and X checks are cycles, as normal. Consequently, the data qubits are initialised in |+⟩. This can be thought of as thickening the hypergraph, see Figure 4 later on. 3.3 Time overhead of full block reading We now return to our original example, Eq. 2 from Section 3.1. When computing the cochain complex in Lemma 3.3 we had an additional term (C ⊗R)3 = C2 ⊗F2 = C2, which corresponds to a collection of metachecks on the Z-checks of the deformed code. Thus the full scalable Tanner graph for the logical measurement is X Z HX HZ Q X Z HX HZ Q′ Z I I I I G E I I G V 21 where G = HZ and MZ = δ2 C⊗R = I ⊗P HZ ⊗I = I ⊗P HZ . Consider first the case where measurement faults can occur only on checks in V. As G = HZ, if a measurement fault v /∈ker(HZ) then it is detected by the metachecks. If v ∈ker(HZ) then either (a) it is equivalent to an X data qubit error on E or (b) it has weight on checks at least d. (b) follows from the fact that G = H⊺ X, and the lowest weight vector in ker HZ ⊺ X has by definition weight at least d. For (a), as all the data qubits in E are initialised in the |+⟩basis, X errors leave the state invariant so the logical measurement outcome is unaffected. Therefore to cause an incorrect logical measurement outcome, the measurement fault weight on V must be at least d, in the case where measurement faults cannot happen elsewhere. Note further that any vector v ∈ker(G) representing an undetectable measurement fault on E, regardless of weight, is equivalent to an undetectable X data qubit fault on Q (or Q′), as multiplying by X faults on bijective qubits leaves 0 on the Z checks in Q (or Q′). To study the more general case it is fruitful to use the cohomology of (C ⊗R)•. H2(C ⊗R) = H2(C) ⊗H0(R) ⊕H1(C) ⊗H1(R), which is explicitly C2/imHZ ⊗ker P ⊕ker HZ/imH⊺ X ⊗0 = C2/imHZ ⊗ker P. Thus any Z measurement fault which is neither detected by metachecks nor equivalent to a data qubit fault must be a linear combination of pairs of identical Z check faults in Q and Q′, which are not equivalent to X data qubit faults, and Z check faults which are equivalent to X data qubit faults. Note that the Z single shot distance dss Z of (C ⊗R)• can be very low even in the more general case, as C2/imHZ can have low weight representatives. Nevertheless, any measurement fault which includes Z checks in V must be able to be cleaned, by applying some subset of X errors to Q and Q′, so that it has support only on the Z checks of Q and Q′. Only if the set of X errors which it is cleaned to is large does the measurement fault cause a logical measurement fault. In our analysis of the time overhead, we have first a weak result: Proposition 3.9. Let Q, Q′, Q′′,... be a set of identical CSS LDPC codeblocks with distance d, and let P be a pattern matrix determining a full block reading. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the full block reading procedure. Then full block reading performs logical measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. See Appendix A. The intuition is that in the timestep where the full block reading is performed, any measurement fault on the check qubits in V with weight < d which is undetectable by metachecks is either (a) in imG and so does not affect the logical measurement outcome or (b) has a paired measurement fault on Z checks outside of V, in the original code, such that the measurement faults on the original code must extend to future and previous timesteps to avoid detection. In order to maintain the fault-distance of the entire procedure we must assume d rounds of syndrome measurements on the original code before and after block reading. This is because there may be low weight faults on data qubits, Z checks and X checks outside of H which are not detected within 1 timestep, and so could lead to logical errors by connecting to timelike errors extending from the top or bottom boundaries. This is equivalent to d measurement rounds of 'padding' before and after block reading [108]. 22 Remark 3.10 (State preparation). Proposition 3.9 does not allow one to generically use full block reading on a single codeblock to prepare a logical |+⟩k or |0⟩k state in constant time, because of the padding required to provably preserve fault-distance. Remark 3.11 (Frame change). At the end of the block reading protocol, the ancilla data qubits are measured out in the X-basis, and the X checks in the original code are undeformed back to their original incidence. These measurements do not commute with the preceding Z checks, and so have random outcomes, although products of these measurement outcomes which correspond to cycles in the hypergraph must be +1 in the absence of errors, as these are precisely the sets of X measurements which do commute with the preceding Z checks. Because each new data qubit is in 1-to-1 correspondence with an X check in each incident codeblock, the anticipated outcomes of X-checks in the original codeblocks are updated after the block reading protocol by taking a product with the outcomes of those data qubit X measurements, inducing a frame change. When the X measurements on ancilla data qubits undergo faults, the frame is updated incorrectly. These measurement errors are therefore in bijection with X check errors on the original code, detected by preceding and subsequent rounds of measurement. The reason why preceding rounds of measurement can diagnose such frame errors is because in the measurement round where ancilla data qubits are measured out the product of such a measurement and its corresponding X-check in an original codeblock commutes with the Z checks in the deformed code, and so must yield a +1 outcome in the absence of errors. Thus having a consistent history of checks before the frame change can detect frame errors. Proposition 3.9 is at first not particularly helpful, because the overall time cost of the procedure is still O(d). However, the proof extends to performing full block readings sequentially without padding between logical measurement rounds, with each logical measurement taking O(1) rounds. Theorem 3.12. Let Ξ be a set of η full block reading procedures of Z-type such that no block reading procedure in Ξ is a product of any others. Then all η logical measurement rounds of full block reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. See Appendix A. Remark 3.13. In the above statement, the assumption that the pattern matrix is full rank is for technical reasons related to our proof technique. We expect that this requirement can be relaxed. 3.4 Full block reading of CSS-type Full block reading can be generalised to measure sets of commuting operators of the form Z ⊗Z ⊗· · · ⊗Z and X ⊗X ⊗· · · ⊗X simultaneously. This allows for fast code switching into a transversally concatenated code, similar to a procedure in Ref. [66] and the basic operation in the protocol of Ref. [109]. To see this, let PZ be the pattern matrix for Z-type logical measurements. Recall that the rows of PZ each specify a set of parallel Z-type measurements as per Definition 3.5. In order for us to perform X-type block reading at the same time, the X-type logical operators to measure must commute with the Z-type logical operators. Thus each row s 23 of a PX pattern matrix, specifying a set of parallel X-type logical measurements, must satisfy r · s for each row r of PZ, where · is the usual dot product on vectors over F2. Therefore, PZP⊺ X = 0. Our pattern matrix is now upgraded to a pattern complex. Definition 3.14 (CSS-type full block reading). A full block reading of CSS-type takes as input c CSS LDPC codes, each with cochain complex C•, and a pattern cochain complex R•. The cochain complex for the deformed code of the full block reading denoted by P is given by (C ⊗R)• where R• = Fθ 2 Fc 2 Fφ 2 P⊺ X PZ with R-1 = Fθ 2, R0 = Fc 2 and R1 = Fφ 2, and we assume that both PZ and PX are full rank with no redundant rows, and that all rows have weight greater than 1. As the tensor product of cochain complexes is also a cochain complex, the checks in the deformed code commute. We have (C ⊗R)-1 = C0 ⊗R-1 (C ⊗R)0 = C1 ⊗R-1 ⊕C0 ⊗R0 (C ⊗R)1 = C2 ⊗R-1 ⊕C1 ⊗R0 ⊕C0 ⊗R1 (C ⊗R)2 = C2 ⊗R0 ⊕C1 ⊗R1 (C ⊗R)3 = C2 ⊗R1 and H1(C ⊗R) = C2/imHZ ⊗ker P⊺ X ⊕ker HZ/imH⊺ X ⊗ker PZ/imP⊺ X ⊕C0 ⊗R1/imPZ which given that ker P⊺ X = 0 and R1/imPZ = 0 reduces to H1(C ⊗R) = ker HZ/imH⊺ X ⊗ker PZ/imP⊺ X. A similar calculation gives H1(C ⊗R) = ker HX/imH⊺ Z ⊗ker PX/imP⊺ Z, so we see that logical Z operators in imP⊺ Z, i.e. the row space of PZ, are now stabilisers, and the same for logical X operators in the row space of PX, for every logical qubit in the codeblocks. That the distance of the CSS-type block reading deformed code is at least d is immediate by applying the arguments of Lemma 3.8 twice, first for the Z-type auxiliary system and then for the X-type. In the CSS-type full block reading measurement protocol, all new data qubits included in the Z-type (X-type) measurements are initialised in |+⟩(|0⟩) states respectively. All syndromes are measured for O(1) rounds, and then all new data qubits are measured out in the X (Z) bases respectively. Remark 3.15. Let R• be LDPC and each identical codeblock Q, Q′, Q"... be LDPC. Then the deformed code (C ⊗R)• is LDPC. 24 Lemma 3.16. Let Ξ be a set of η full block reading procedures of CSS-type such that no block reading procedure in Ξ is a product of any others, and each of the measurements commute. Then all η logical measurement rounds of full block reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. See Appendix A. In Appendix B we also show that full block reading can be extended to perform nonCSS measurements, namely those with X ⊗Z and Y terms, on CSS codes, but that our constructions for these require specific structure on the codes, such as transversal S gates, so they add no extra computational power to full block reading with CSS-type measurements. Full block reading gives no addressability on logical qubits within a codeblock, as the measurements are on all logical qubits, so the utility of full block reading is limited in the same manner as 1-to-1 transversal CNOT gates between CSS codes. The logical computational power of a full block reading is identical to initialising an ancilla block, then applying transversal CNOTs, then measuring out the ancilla block. Consequently, it is unclear how useful full block reading would be for practical faulttolerant computation. Compared to measurement using transversal gates and ancilla codeblocks, full block reading requires fewer qubits overall, but full block reading also requires more connections between the initial codeblocks and the auxiliary system. Nevertheless, our analysis of full block reading sets the stage for the more sophisticated operations in the next sections. We first consider the case of reading a subcode, then present our general result on fast hypergraph surgery. 4 Partial block reading Returning again to the example in Eq. 2, consider a different generalisation. We start with two codeblocks Q and Q′, which may be different, X Z HX HZ Q X Z H′ X H′ Z Q′ 25 and construct the measurement hypergraph H to connect only to a subcode of each, but where the subcodes are identical. X Z HX HZ Q X Z H′ X H′ Z Q′ Z Ml Mr Fl Fr G E V That is, up to permutation and the addition of all-zero rows, the matrices satisfy Ml = Mr; Fl = Fr. Given an appropriate choice of subcode and measurement hypergraph, the measurements are only between those logical qubits whose logical representatives have connections to the hypergraph, and so are contained in the subcode. This is called a partial block reading. The construction no longer yields a tensor product of cochain complexes, so we instead use mapping cones from Section 2.2.2 to describe partial block reading. As the constructions become quite complicated, we also refrain from studying multiple partial block readings occurring simultaneously, unlike in the full block reading case. All partial block readings are assumed to be performed sequentially, albeit with only a constant number of syndrome rounds between them. Definition 4.1. Let A be a subcode of Q, Q′, Q′′..., each of which have chain complexes C•, C′ •, C′′ • ..., such that there is a collection of injective chain maps f•, f′ •, f′′ • ... of the forms f• : A• →C•; f′ • : A• →C′ •; f′′ • : A• →C′′ • ... Then the partial block reading of Z type defined by {f•, f′ •, f′′ • ...} is D• = cone(h•); h• = f• + f′ • + f′′ • + ... Explicitly, D• is the chain complex A1 A0 0 C2 C1 C0 C′ 2 C′ 1 C′ 0 C′′ 2 C′′ 1 C′′ 0 ... ... ... ∂A 1 0 ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ ⊕ 26 where labels for the differentials in each codeblock, and labels for the chain maps, have been suppressed for visibility. Note that also A• generally possesses a nonzero A2 component. This component corresponds to metachecks, so we ignore that term for now. It is trivial to compute, using the Snake Lemma [90, Lem. 1.3.2], that the measured logical qubits are those in each code which have Z logical representatives contained in the subcode A•. As a Tanner graph, we have X Z HX HZ Q X Z H′ X H′ Z Q′ Z f0 f⊺ 1 G X Z H′′ X H′′ Z Q′′ f′ 0 f′ 1 ⊺ f′′ 0 f′′ 1 ⊺ E V (3) for a partial block reading between three codeblocks. Observe that the chain maps f•, f′ •, f′′ • determine the connectivity between the blocks and the auxiliary hypergraph, and that E = A0, V = A1. Unlike when performing full block reading, the code distance of the deformed code D• in partial block reading is not generically preserved, and can drop below d. Elementary examples of this phenomenon include surface codes with defects and toric codes, which we detail in Section 8. This drop in distance can come from two causes. First, it is possible to multiply existing Z logicals in the codeblocks by new Z checks in order to lower the weights below d. Second, viewing the ancilla system defined by the mapping cone as a hypergraph H, there may be cycles in H which are nontrivial X logicals, meaning that they do not correspond to stabilisers of the deformed code. When unmeasured, the cycles have symplectically paired Z logicals, which when multiplied by other logicals in the original code can result in weights lower than d [52]. In this section, we focus on partial block reading where the subcode has distance d, and the measurements are assumed to be in the Z-basis. Dualising appropriately yields the same results for the X-basis. For the first cause of the deformed code distance dropping, we can thicken the hypergraph. This is equivalent to taking a tensor product of A• and the dual of a repetition code P• before defining the mapping cone, as shown in Figure 2. We can define chain maps from any copy of A• in the tensor product to Q, and the same for Q′, Q′′..., and this choice does not need to be the same for each. An example is shown 27 X A1 (a) A subcode A•, X X · · · X (b) P•, the dual of a repetition code, X X Z X Z · · · · · · A1 A1 A1 A⊺ 1 A⊺ 1 I I I I I I I I (c) The tensor product (P ⊗A)•, Z G E V Z G E V X Z G E V X · · · · · · (d) The thickened hypergraph H, which is (P ⊗A)• with degrees shifted by 1. Figure 2: Constructing a thickened hypergraph. Note that (b) is a conventional Tanner graph, not a scalable Tanner graph. in Figure 3 Lemma 4.2. Let (P ⊗A)• be the thickened subcode, and let f•, f′ •, f′′ • ... be chain maps from any copies of A• in (P ⊗A)• into Q, Q′, Q′′..., with h• = f• + f′ • + f′′ • · · · . Then if (P ⊗A)• has been thickened at least d times the deformed code D• has distance at least d. Proof. Immediate from Corollary 7.10, Lemma 7.12 and Remark 7.11 further down, so we defer the proof until then. This is similar to devised sticking [59] and the original CKBB scheme [52] in that the auxiliary system is thickened d times, but in this case we are measuring many representatives of the same equivalence classes of operators simultaneously. As in the CKBB scheme, thickening d times is generally overkill [54], and distance preservation commonly requires a smaller ancilla system than this upper bound. Indeed, in Section 6 we demonstrate that soundness properties of the subcode can be used to reduce the required thickening while provably preserving the distance, and even that gives a sufficient but not generally necessary amount of thickening. If thickening d times were truly necessary then our time overhead of d would essentially be replaced by a space overhead factor of d, at least 28 Z G E V Z G E V X Z G E V X · · · · · · X Z HX HZ Q X Z H′ X H′ Z Q′ X Z H′′ X H′′ Z Q′′ Figure 3: Example of partial block reading with a thickened hypergraph. The logical measurement is the same as in Eq. 3, but the hypergraph has been thickened to prevent the weight of Z operators dropping below d. when A• is large, which is interesting but less useful than achieving a genuine reduction in spacetime overhead. For the second cause of the deformed code distance dropping, that there can be cycles in H which correspond to X logicals, we use the following fact. As the subcode has distance d, the logical measurement protocol can also be performed in 1 round of error correction, similar to full block reading, by introducing redundant Z checks between each level of the hypergraph as shown in Figure 4. This fact helps us to gauge-fix the cycles, because each data qubit in each cycle is initialised in |+⟩immediately before and measured out in the X basis immediately after the single round of error correction. The results of those X measurements can be used to infer the outcome of X stabiliser measurements which would fix the gauges of the cycles, and so we do not need to explicitly add checks for those cycles [56]. However, if the set of cycles does not have a basis of low weight cycles, this can result in detectors with high weight in the space direction, which impinge on the fault-tolerance against stochastic noise. Proposition 4.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance d, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code, which has code distance at least d. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. Then partial block reading performs logical 29 Z G E V Z G E V X Z G E V X · · · · · · X Z HX HZ Q X Z H′ X H′ Z Q′ X Z H′′ X H′′ Z Q′′ Z Z Figure 4: The same partial block reading as in Figure 3 but with additional redundant checks and metachecks. measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. See Appendix A. The key here is that when thickening, each Z check error in level a is equivalent to a Z check error in level a -1 along with an X error on the data qubit between them. In this way, we form a chain of metachecks up the thickened hypergraph. These allow all stabilisers in the subcode to be matched to sets of checks in each level of the hypergraph. Evidently, if the subcode A• has nA qubits, and the amount of thickening required is L, then the space cost of a partial block reading is O(nAL). The above proposition considers performing one partial block reading with d rounds of padding before and after the procedure. Once again, we would like to generalize this to performing t ≥d operations in O(t) time, thereby achieving constant time overhead in amortisation. This introduces an extra complication. Instead of the deformed code, we consider the compacted code, defined as follows. Definition 4.4 (Compacted code). Let C• be a complex representing the memory CSS code blocks, and let A1,•, · · · , At,• be chain complexes each with a chain map fi,• : Ai,• →C•. The compacted code is the code taken by applying all the mapping cones to the original code, i.e., CC• = cone( X i (fi,•). While this code is most likely not LDPC and potentially massive, it is just a proof construct and is never explicitly used in our surgery operations. 30 There is an intuitive reason behind defining the compacted code. Ideally, we would like to construct partial block readings assuming just that every deformed code has distance d. This assumption, however, may not be sufficient: if we perform two partial block readings one after another with only O(1) syndrome rounds between them, we must carefully consider how the weight of an unmeasured logical may be reduced by multiplication with the new deformed code checks. While, by assumption, both the first and second partial block readings would individually guarantee any weight-reduced operator remains weight d, there would be no such guarantee when both weight reductions occur together. This may seem irrelevant - after all, we are not performing the partial block readings simultaneously - but it happens that a sequence of check measurement errors in the O(1) syndrome rounds between the block readings can effectively reproduce this simultaneous scenario using potentially just O(1) additional faults (since we are not assuming the original code is sound).12 Therefore, to ensure the overall surgery protocol is fault-tolerant, we impose the condition that the compacted code, which can be seen as the memory blocks deformed by all the surgery operations applied simultaneously, has distance at least d. Theorem 4.5. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let Ξ be a set of Z-type partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance d and the compacted code has distance d. Then all η logical measurement rounds of partial block reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomenological fault-distance d, for any η ∈N. Proof. See Appendix A. While it may seem daunting to study the compacted code and satisfy the distance condition, we show in Section 7 that standard techniques in code surgery, notably thickening and expansion, can be applied to satisfy this condition. We further remark that in the case of full block reading with a full rank pattern matrix P, the resultant compacted code is a homological product code (C ⊗R)•, which has distance at least d. Remark 4.6. In Section 9 we study the relationship between surgery and homomorphic measurement [42]. For now, observe that given a subcode of distance d performing homomorphic measurement is straightforward, as one initialises an ancilla system that is a copy of that subcode and performs transversal CNOTs and single-qubit measurements. This initialisation can take up to O(d) rounds to preserve fault-tolerance. Note that the method of algorithmic fault-tolerance [84] does not apply to generic homomorphic measurements. Performing partial block reading with a distance d subcode requires additional conditions on the distance of the deformed code to maintain fault-tolerance, which can be resolved at some additional space cost by the thickening described above. 5 Fast hypergraph surgery In this section we generalise our partial block reading results to the case where A• is not a subcode but some other complex with a suitable chain map into the original code. 12We note that this simultaneous weight reduction issue, a scenario not considered in detail by prior code surgery work, is also a concern when doing auxiliary graph surgeries [56, 58] separated by O(1) syndrome rounds if one does not assume expansion of the auxiliary graphs and instead just assumes large deformed code distances. In practical constructions, e.g. Ref. [55], where one indeed tends not to guarantee expansion but just deformed distance, this is an important issue to be aware of. 31 A2 A1 A0 A-1 C2 C1 C0 0 f2 f1 f0 0 0 Definition 5.1. We say that the chain map f• : A• →C• is non-overlapping on vertices if each row in the matrix f1 : A1 →C1 contains at most a single non-zero entry. The degree 1 of A• is shifted by the mapping cone construction to be in cone(f)2, i.e. the set of vertices in the hypergraph is the set of basis elements of A1. We can directly transport Definition 4.1 to this more general setting. Definition 5.2. Let A• be an auxiliary chain complex and let Q, Q′, Q′′... be CSS LDPC codes, each of which have chain complexes C•, C′ •, C′′ • ..., such that there is a collection of sparse chain maps f•, f′ •, f′′ • ... of the forms f• : A• →C•; f′ • : A• →C′ •; f′′ • : A• →C′′ • ... which are non-overlapping on vertices. Then the hypergraph surgery of Z type defined by {f•, f′ •, f′′ • ...} is D• = cone(h•); h• = f• + f′ • + f′′ • + ... As with partial block reading it is trivial to compute which logical operators are measured using the Snake Lemma [90, Lem. 1.3.2]. Theorem 5.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of Z-type hypergraph surgeries {Hyper1, Hyper2, Hyper3, ..., Hyperη} such that: • Each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance d, with sparse cycle bases. • The compacted code has distance at least d. Then all η logical measurement rounds of generalised hypergraph reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomenological fault-distance d, for any η ∈N. Proof. See Appendix A. In the mapping cone that describes the surgery operation, the 1-cosystolic distance captures the metacheck distance. To satisfy this distance condition, it is often easier to start with a homomorphic, high distance quantum code, as we did in full block reading. Returning to the second condition, in Section 7 we define a notion of hypergraph expansion called modular expansion and show that it is a sufficient condition for the compacted code to have distance d. We then show that a generic hypergraph can always be thickened (by up to d layers) to a graph with the desired expansion properties. This is in analogy with prior surgery works studying graphs, where it is known that thickening can be used to boost a property called relative expansion [57]. We remark that for both our work and prior works, expansion is a sufficient but not necessary condition, in practice one can often find good constructions heuristically, such as adding random hyperedges. 32 Remark 5.4. In a generic hypergraph surgery, constant-weight local operators are measured alongside high-weight global operators. If we require that the original codespace is restored after the hypergraph surgery, the constant-weight local operators must all be in the stabiliser group, and the high-weight operators must be logical operators. For a Ztype measurement on a single code block, with an injective f1 function, this requires the group generated by F2-linear combinations of the hyperedges to include the truncation of the X-type stabiliser group to the qubits addressed by the measurement. In the previous sections we have chosen the X-type checks of the subcode, but more generally we can pick a different, potentially overcomplete, generating set. 6 Intermediate time overhead with locally testable codes We can consider (a) block reading with a distance d subcode and (b) conventional LDPC code surgery, which uses a distance 1 subcode, to each sit at opposite ends of an axis determining the time cost of logical measurements. In this section we consider interpolating along this axis, to find surgeries which use between O(1) and O(d) time each, while remaining fault-tolerant. 6.1 Partial block reading with a low distance subcode Proposition 6.1. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance dA = 1 αd, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code with code distance d and let H have a sparse cycle basis. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. First, ⌈α⌉rounds of syndrome measurements are necessary (but not sufficient) to perform partial block reading with fault-distance d. Then if either: (a) α ≤2 or (b) the initial codeblocks are each locally testable codes with soundness ρ ≥2n m , where n is the codeblock length and m the number of checks, then partial block reading performs logical measurement in ⌈α⌉rounds of syndrome measurements with fault-distance d, maintaining the LDPC property throughout. Proof. See Appendix A. Remark 6.2 (Cycle checks). When the block reading takes longer than O(1) round we genuinely require a sparse cycle basis for H to maintain the LDPC property of the deformed code, as the gauge checks at each timestep cannot easily be inferred from the state preparation and measurement of data qubits in the auxiliary system. In general, it is not guaranteed that H possesses a sparse cycle basis. In practice we expect to find sufficiently sparse cycle bases numerically for small codes. Additionally, if a subcode has low weight X metachecks which have support on checks in the subcode, after degree shifting these metachecks become low weight cycle checks for a Z basis partial block reading. Proposition 6.1 is interesting primarily because there are codes that are not expected to have distance d subcodes addressing targeted subsets of logical qubits. Indeed only topological codes and certain product codes are known to allow for addressing any logical qubit in this manner [42, 43]. Instead, one can find subcodes with distance between 1 and d which still permit fast measurement, but not necessarily in constant time. The distance of A• is the minimum number of check errors on the auxiliary system which are required to cause an undetectable logical measurement error in a single round; this is not the same 33 as the single-shot distance of the entire deformed code, because there can be checks in the bulk which have far less protection against measurement errors. However, the errors which can occur in the bulk do not prevent us from performing many of these partial block readings sequentially without d rounds of padding in between, implying that interpolated partial block reading can indeed speed up computation substantially. Theorem 6.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance di ≥ 1 αi d for i ∈[η] and the deformed codes each have code distance d, with sparse cycle bases. Suppose the original codes have soundness ρ ≥2n m . Then all η logical measurement rounds of partial block reading can be performed sequentially using ⌈αi⌉rounds of syndrome measurement each, and using d rounds of padding before and after the logical measurements. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. See Appendix A. We remark that when the base codes are locally testable codes with constant soundness, we only require the weaker condition that each deformed code has distance d, instead of the stronger condition that the compacted code has distance d. Intuitively, this is because when considering the potential weight reduction of a logical fault that is deformed into two different surgery ancilla systems, the support of that logical fault is necessarily shifted onto timelike check errors, which are proportional to the reduced space support as the codes have constant soundness. Remark 6.4 (Interpolated surgery with lower soundness). Proposition 6.1 and Theorem 6.3 demand soundness ρ ≥2n m , which is asymptotically constant as m ∈Θ(n) for LDPC codes. If the soundness is lower than constant, which one may expect for practical codes, it is sufficient to use asymptotically more rounds of syndrome measurement per logical measurement, proportional to 1/ρ, to maintain phenomenological fault-distance d. Furthermore, one cannot guarantee that the surgeries can be performed within O(1) time of each other, without considering the compacted code (see Def. 4.4). Scaling the time between logical measurements to O(1/ρ) is then sufficient to preserve fault-distance. If the soundness is vanishing then there is still a reduction in time overhead from measuring multiple representatives, but the reduction is additive rather than multiplicative, see Proposition 9.3, and one is forced to consider the compacted code to ensure that the logical measurements can be performed close together in time. 6.2 Intermediate time hypergraph surgery Similar to the previous section on partial block reading, we can generalize Theorem 6.3 to the case of hypergraph surgeries. Theorem 6.5. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of hypergraph surgeries {Hyper1, Hyper2, Hyper3, ..., Hyperη} such that: • Each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance di ≥ 1 αi d for i ∈[η], with sparse cycle bases. • Each deformed code has distance at least d. 34 • The original codes have soundness ρ ≥2n m . Then all η logical measurement rounds of hypergraph surgery can be performed sequentially using ⌈αi⌉rounds of syndrome measurement each, and using d rounds of padding before and after the logical measurements. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. This follows by combining the proofs of Theorem 5.3 and Theorem 6.3. Gauging logical measurement [56] can be seen as an instance of hypergraph surgery where the subcode A• has distance 1, and therefore requires O(d) rounds of syndrome measurement to maintain fault-distance d. 7 Modular expansion In previous sections, we studied hypergraph surgery operations where the compacted code or the deformed codes has distance d. This is critical for fault-tolerance. In this section, we discuss sufficient expansion properties on the hypergraph for these conditions to hold. In context of past surgery works, when A• contains only one logical operator (so α = d and dA = 1), partial block reading recovers the CKBB scheme [52]. It is known that there are substantially more efficient schemes for measuring individual logical operators in O(d) time by surgery [56, 58], using expander graphs. This section is essentially an extension of these results to hypergraphs, which can measure multiple logical representatives simultaneously and quickly. Definition 7.1. Let H(V, E) be a hypergraph, with edge-vertex incidence F2-matrix G. Let I be an index set of basis elements vi of a subspace W ⊂F2V. Then we can index element in W as κλ for λ ∈2I. We conflate κλ as a vector and as a set of vertices in V. To each basis element vi we associate its set of vertices Vi = supp(vi), and define the vertex set U = S i Vi. Definition 7.2 (Modular expansion). Let H(V, E) be a hypergraph with incidence matrix G : F2E →F2V and specified subspace W ⊂F2V containing elements κλ. Let t ∈R+ be a positive real number. The modular expansion Mt of H is the largest real number such that, for all v ⊆V, |G⊺v| ≥Mt min(t, |κλ| -|v ∩κλ| + |v ∩U\κλ| : κλ ∈W) As this definition is quite complicated, we give some orienting remarks. Modular expansion is a generalisation of both relative expansion from Ref. [57] and the soundness of a classical locally testable code, and is therefore also as generalisation of the edgeexpansion of a graph. It is 'modular' in the sense that it contains distinguished subsets κλ such that, when they all exist in the same hypergraph, the notion of expansion for each of them combines to give a complete notion of expansion for the whole hypergraph. In detail, recall that edge-expansion (henceforth called global expansion) can be defined for hypergraphs as follows. Definition 7.3 (Global expansion). Let H(V, E) be a hypergraph with incidence matrix G : F2E →F2V. The global expansion of H is the largest real number β such that |G⊺v| ≥β min(|v|, |V |). 35 When G is a graph this reduces to traditional edge-expansion and β is the Cheeger constant. Ref. [57] defined relative expansion, a generalisation of global expansion. Definition 7.4 (Relative expansion). Let H(V, E) be a hypergraph with incidence matrix G : F2E →F2V, distinguished subset U ⊂V and chosen parameter t ∈R+. The relative expansion of H is the largest real number βt such that |G⊺v| ≥βt min(t, |v ∩U|, |U| -|v ∩U|). When t ≥|V| and U = V this reduces to the definition of global expansion. Similarly, when |I| = 1 modular expansion reduces to relative expansion, observing that there are only two κλ vectors, one being the empty set and the other U. A different generalisation of global expansion is to the soundness of a locally testable code. Recall Definition 2.10, with variables replaced to be suggestive of our hypergraph setting. Definition 7.5. A binary linear code C ⊂F2V is (ω, ρ)-locally testable if it has a paritycheck matrix G⊺with rows of weight at most ω such that for any vector v ∈F2V, 1 m|G⊺v| ≥ρ nd(v, C) where m = |E|, n = |V| and d(v, C) = minx∈C(|v + x|). The values ω and ρ are the locality and soundness of the code respectively. In particular, if we rearrange the equation defining soundness ρ then we get |G⊺v| ≥ βd(v, C) for β = ρm n . We observe that in the case where the only codewords are 0 and V, β coincides with the definition of global expansion. For any sparse hypergraph H, if it has global expansion it is also a locally testable code, albeit a rather trivial one. In this way the soundness of a code can be thought of as a generalisation of expansion to the case where a hypergraph has ker G⊺with dimension greater than 1. As dim G⊺is equal to the number of logical representatives which are measured simultaneously by the auxiliary hypergraph, we require a more general definition than relative expansion in order to perform block reading. Modular expansion coincides with soundness when U = V and t ≥|V| under the rearrangement Mt = ρm n , and when W = ker G⊺, so each element κλ is an element of the kernel. We thus have the informal commuting diagram, Modular expansion Relative expansion k = 1 Soundness of an LTC t ≥n Global expansion k = 1 U = V t ≥n U = V where k = dim G⊺, n = |V|, and each κλ is assumed for simplicity of the diagram to correspond to an element of ker G⊺. 36 Remark 7.6. For a simply connected graph, the edge-expansion can be related to spectral properties of the adjacency matrix or Laplacian of the graph. We do not know of a similar relation between relative expansion, soundness or modular expansion and spectral properties of the Laplacian of the hypergraph. To explain our results on hypergraphs with modular expansion we use the terminology of port functions, similar to Ref. [57]. In short, given a CSS code Q and a hypergraph H(V, E), a port function fp is a map from a subset of data qubits in Q to a set of vertices in V. The deformed code is then defined as Q ↔H, with connections introduced between Q and H given by the port function fp. This definition of the deformed code is similar to the definition of the mapping cone by a chain map, although the port function does not specify the connectivity of deformed checks into the auxiliary hypergraph, and the port function is in the opposite direction to f1 in a chain map. Moreover, the port function definition can be applied to codes which are not CSS. Theorem 7.7 (Distance preservation). Let {Zi}i∈I be a set of logical representatives (trivial or nontrivial) on a distance d CSS code Q with supports ξi = supp(Zi). Let H(V, E) be a hypergraph with vertices in V being Z checks, hyperedges in E data qubits and X gauge checks forming a cycle basis, such that H measures all the logical representatives {Zi}i∈I. Let fp : S i ξi →V be an injective port function such that U = imfp ⊂V, and fp(ξi) = Vi ⊂V, ∀i ∈I. Let Md(H) ≥1, where W is generated by fp(ξi). Then the distance of the deformed code Q ↔H is at least d. Proof. See Appendix C. As the above proof demands the port function fp from the original code to the hypergraph be injective, it does not capture the most general form of hypergraph surgeries. In that case, scaling the modular expansion proportional to the check incidence to the original code is sufficient to preserve the distance, see Remark 7.16. Theorem 7.8 (Thickening). Let H(V, E) be a hypergraph with modular expansion Mt(H). Let J L be the path graph with length L ≥ 1 Mt(H), i.e. L vertices and L -1 edges. Let HL := H□J L be the hypergraph of H thickened L times. Then HL has modular expansion Mt(HL) ≥1 for Ul= S i V l i at any level l∈{1, 2, · · · , L}, where V l i is the copy of Vi in the lth level of the thickened hypergraph. Proof. See Appendix C. Lemma 7.9. Let H(V, E) be a hypergraph such that each element in ker(G⊺) has the supporting set κλ for some λ ∈2I. Then Mt(H) ≥1 t . Proof. Consider an arbitrary vertex set v. We consider two cases: first, where |G⊺v| = 0, and then where |G⊺v| ≥1. In the first case, we know that v ∈ker(G⊺) and therefore v = κm for some m ∈2I. Therefore |κm| -|v ∩κm| + |v ∩U\κm| = 0, and so |G⊺v| ≥Mt(H) min(t, |κλ| -|v ∩κλ| + |v ∩U\κλ| : λ ∈2I) for any Mt(H). In the second case, |G⊺v| ≥1. We now consider the two values of the minimum. If min(·) = t then |G⊺v| ≥1 t t. If min(·) = |κλ| -|v ∩κλ| + |v ∩U\κλ| for some λ ∈2I then t ≥|κλ| -|v ∩κλ| + |v ∩U\κλ|, and so |G⊺v| ≥1 ≥1 t (|κλ| -|v ∩κλ| + |v ∩U\κλ|). Corollary 7.10. Let H(V, E) be any hypergraph such that each element in ker(G⊺) has the supporting set κλ for some λ ∈2I. Let Ht := H□J t be the hypergraph of H thickened t times. Then Ht has modular expansion at least Mt(Ht) ≥1 for any level l. 37 Proof. As Mt(H) ≥1 t and J L in Theorem 7.8 has L = t then L ≥ 1 Mt(H). Corollary 7.10 post-hoc justifies devised sticking from Ref. [59] by setting t = d, as there the hypergraph is thickened d times to preserve fault-tolerance, and the CKBB scheme [52] is a special case of this. Remark 7.11. The expansion-based argument does not account for the cycles in the hypergraph, however. For these see Ref. [52, Thm. 1], which shows that when the hypergraph is thickened d times the cycles can be considered gauge logicals and the deformed code still has distance at least d as a subsystem code. Theorem 7.8 shows that it is not necessary to thicken d times, if the modular expansion before thickening is greater than 1 d. This has previously been proved for the simpler case of measuring only one logical operator [55, 57]. Lemma 7.12 (Cross-block measurements). Let H be a hypergraph with modular expansion Md(H) ≥1 L. Let H correspond to the degree-shifted subcode A• with distance at least d. Vertices in H are Z checks and edges are data qubits. Let f•, f′ •, f′′ • ... be the inclusion chain maps of A• into C•, C′ •, C′′ • ..., each of which has distance at least d. Let HL be the the hypergraph thickened L times, with modular expansion Md(HL) ≥1 at each level l∈{1, 2, · · · , L}. Then the deformed code D• given by modifying each chain map f•, f′ •, f′′ • ... to have their preimages on any arbitrary levels of the hypergraph and taking the mapping cone has distance d if all cycles are gauge-fixed. Proof. As all cycles are gauge-fixed, there are no new logical qubits, and no X logical can have weight lower than d. We must therefore ensure that no Z logical which is unmeasured can have its weight reduced lower than d by cleaning. We prove the lemma by contradiction. If any Z logical extending over all initial codeblocks C•, C′ •, C′′ • ..., but not the hypergraph, could have its weight reduced below d by cleaning into the hypergraph, then a single logical in one of C•, C′ •, C′′ • ... must also have its weight reduced below d by cleaning. This fact is because removing the chain maps to all other codeblocks would leave the same set of Z checks which could clean ΛZ to the same data qubits σ, with the exception of data qubits in the other codeblocks. This set of data qubits must satisfy |σ| < d for this lemma to be false. However, we know that Md(HL) ≥1 at each level of the hypergraph, therefore |σ| ≥d and the lemma holds. By a combination of Corollary 7.10 and Lemma 7.12, observe that the earlier Lemma 4.2 from Section 4 holds. To summarise this section, if a subcode A• for partial block reading has sufficient soundness then it must always also have modular expansion, which can then be boosted by thickening an appropriate amount to guarantee the deformed code's distance is preserved. Recall the definition of soundness for quantum codes in Def. 2.11. Corollary 7.13. Let A• be the subcode used in a partial block reading. Let A• have soundness ρA. Then, viewed as a hypergraph H, thickening 2nA ρAmA times preserves the distance of the deformed code when all cycles are gauge-fixed. Proof. By Lemma 2.12, H has soundness ρAmA 2nA as a classical code with parity-check matrix G⊺. Setting W = ker G⊺, H also has modular expansion Md ≥ρAmA 2nA . By Lemma 7.12, set L = 2nA ρAmA and the thickened hypergraph HL has modular expansion Md(HL) ≥1. Remark 7.14. As mA 2nA ∈O(1) for LDPC codes, the asymptotic space cost of partial block reading when all cycles are gauge-fixed can be reduced to O(nA ρA ). 38 Assuming the modular expansion Md(H) before thickening is greater than 1 d, which is its minimum by Lemma 7.9, this thickening requires fewer than the d layers required by the CKBB scheme and similar [52, 59]. Note that as the soundness of a subcode is not generally inherited from the soundness of the full code, finding subcodes with high soundness is generally difficult. Regardless, having hypergraphs equipped with modular expansion is extremely useful to perform fast surgery. Lemma 7.15. If each hypergraph Hi associated to a partial block reading Partiali for i ∈[n] has modular expansion Md(Hi) ≥1 then the compacted code CC• has distance d. Proof. By Theorem 7.7, attaching H1 to the original code via the chain map (h•)1, to yield cone((h•)1), must always preserve the distance of the code. Applying the other mapping cones iteratively will then each preserve the distance, so CC• has distance d. As a consequence, the conditions of Theorem 4.5 can be satisfied constructively, by choosing subcodes of distance d and then thickening such that each hypergraph has modular expansion. Remark 7.16. A similar approach applies to Theorem 5.3, where instead it is sufficient for each hypergraph to have modular expansion Md(Hi) ≥wi, where wi is the maximal column weight of (f1)i, (f′ 1)i, (f′′ 1 )i, · · · . 8 Examples In this section we illustrate block reading with a series of examples. We focus first on 2D surface codes, upon which traditional lattice surgery is extremely well-understood [106, 110, 111]. 8.1 Topological codes 8.1.1 2D surface codes We demonstrate on unrotated surface codes, as the connection to block reading, and LDPC code surgery in general, is more apparent in this case. We start with two square surface code patches with distance d, with n = d2 + (d -1)2 data qubits each, 39 In these diagrams qubits are edges, X and Z checks are vertices and faces respectively. The deformed code for the full block reading performing a Z ⊗Z is then given by where the new edges are dashed grey lines. There are new vertically-oriented faces where there are new squares, and at the boundaries. Evidently the connectivity requirements are close to the requirements for transversal CNOTs. 40 The only distance d subcode (as defined in Def. 2.5) of the square surface code is itself, and so the only partial block reading with a distance d subcode is full block reading. A partial block reading using a subcode A• with distance dA less than d can be constructed by choosing a strip of the surface codes and then connecting these together. In the simple case of surface code patches as above the distance is always preserved by partial block reading, because the only logicals which can have reduced weight are those on Z ⊗Z or X ⊗X, which are now stabilisers. If the surface code patches also have defects, or are toric codes instead, partial block reading does not generically preserve code distance and increasing modular expansion in the auxiliary hypergraph is necessary, by thickening or otherwise. Recall that surface codes have asymptotically vanishing soundness by [78, Cor. 1] so using a partial block reading with dA < d on the surface code is not particularly useful for reducing the time overhead, unless dA ≥d 2 by Proposition 6.1. In the case where dA = 1 and the chosen subcode is along the boundary of the patches, we recover traditional lattice surgery with unrotated surface codes: 41 8.1.2 2D toric codes Partial block reading with toric codes is illustrative, as it showcases how the distance of the deformed code can drop without thickening. First, consider the following d = 5 toric codeblocks, shown by tessellating their fundamental polygons. Blue and red edge pairs of the same lattice are identified to form loops. 42 A partial block reading between them can be performed with a subcode corresponding to a cylinder around the torus. Measuring only one representative from each torus recovers traditional surgery with 2D toric codes, 43 which measures Z ⊗Z on one logical qubit from each codeblock, but not the other. Growing the width of the cylinder such that the distance of A• is 4, we acquire the deformed code: 44 Depending on the chosen subcode, the distance may drop in the deformed code, as there can be a stringlike Z operator which extends from one torus to the other and then back, skipping the part of the torus with the new connections and making a short noncontractible closed curve. In our example the deformed code distance is still 5, as these curves take the form shown by green edges, 45 which have length at least 6. However, were we to start with d = 7 toric codes, with the width of the cylinder being 6, the distance would drop as a consequence of these same curves. Increasing the length of this closed curve is then possible by thickening the subcode, or boosting the modular expansion in some other manner. If we were to perform a full block reading, these closed curves would become contractible due to the additional stabilisers, so the deformed code distance would be preserved regardless. 8.1.3 4D toric codes 4D toric codes are known to be decodable in a single-shot manner, as they have some soundness and metachecks of both X and Z-type [78, 82, 83, 107]. A 4D toric code can be constructed by a 4-fold tensor product of the chain complex R• = R1 R0 ∂1 where R1 = R0 = Fm 2 and the differential ∂1 is the incidence matrix of the cycle graph with m vertices and edges, so dim ker(∂1) = 1 and dim Fm 2 /im(∂1) = 1. The 4-fold tensor product yields a chain complex with 5 nonzero components. Let the 4D toric code be C• = (R ⊗R ⊗R ⊗R)• = C4 C3 C2 C1 C0 46 and the qubit component is fixed to be C2.13 More explicitly we have C2 = M i+j+k+l=2 Ri ⊗Rj ⊗Rk ⊗Rl, and the entire chain complex is given by R1R1R0R0 R1R1R1R0 R1R0R1R0 R1R0R0R0 R1R1R0R1 R1R0R0R1 R0R1R0R0 R1R1R1R1 R0R0R0R0 R1R0R1R1 R0R1R1R0 R0R0R1R0 R0R1R1R1 R0R1R0R1 R0R0R0R1 R0R0R1R1 where tensor products are suppressed for visibility and columns of terms are taken in a direct sum. The K ̈unneth formula implies that dim H2(C) = 6. The distance of C• is d = m2, which can be computed using Lemma 2.8. The intuition is that all nontrivial logical operators in a 4D toric code form membranes through the complex, as opposed to strings in 2D toric codes. For a given summand in C2, say R1 ⊗R1 ⊗R0 ⊗R0, one choice of corresponding nontrivial Z logical operator representative is then 1 ⊗1 ⊗(1, 0, · · · , 0) ⊗(1, 0, · · · , 0), where 1 = (1, 1, · · · , 1). To give a treatment of partial block reading with 4D toric codes we specify a subcode A•. There are evidently many such choices. One choice is to measure just a single logical representative, which by Thm. 1.6 must take Θ(d) syndrome measurement rounds to maintain phenomenological fault distance. Another choice of A• is to specify a set of Z-checks in C3, and to take the summands in C2, in the image of that set of Z-checks and so on. This is trivially guaranteed to be a subcode. For example, set A• to be R1R1R0F2 R1R0R0F2 R1R1R1F2 R1R0R1F2 R0R1R0F2 R0R0R0F2 R0R1R1F2 R0R0R1F2 13This shift in qubit degree compared to convention is for convenience, so that there are no negative indices and basis elements in C2 can be identified with faces in a cell complex. 47 where the chain map component f3 maps R1R1R1F2 in A3 into R1R1R1R0 in C3 and so on, choosing a consistent basis element of R0 to map F2 into. This choice measures 3 Z logical operators, and many representatives of each. This is the choice taken in Ref. [107], and can be seen as setting A• to be a hyperplane of the lattice. While each Z logical representative contained in A• must have weight at least m2, the X-distance of A• is only m, as this is a 3D toric code by identifying that R1R1R1F2 ∼= R1R1R1 etc, with the qubit component at A2 rather than the customary A1. In Ref. [60] it was observed that while one can perform single-shot lattice surgery using a subcode given by a hyperplane with 4D surface and toric codes, the performance was reduced unless O( √ d) = O(m) syndrome rounds were used. By Proposition 6.1 we in fact see that Ω( √ d) rounds are required to maintain fault-distance, as α = √ d in this case. Thus one can perform single-shot lattice surgery, or lattice surgery using a constant number of syndrome rounds, with 4D toric codes, but with only O( √ d) fault-distance, given that choice of subcode. Remark 8.1. Proposition 6.1 is not quite sufficient to justify Θ( √ d) rounds of syndrome measurement to maintain fault-distance, only Ω( √ d), as 4D toric codes do not have constant soundness [78, 82], see Remark 6.4. Remark 8.2. Lattice surgery with 'geometrically enhanced' 4D toric codes was studied in Ref. [107, Sec. V]. There, the X-distance of A• was called the 'boundary distance', and similar arguments about the number of rounds required for fault tolerance were employed. The lattice surgery there is performed in an 'ancilla-free' manner, where no new data qubits are used; only new checks. This is a quotient rather than a mapping cone, see Refs. [53, 54, 64]. 8.2 Bivariate bicycle codes Bivariate bicycle (BB) codes [86] are a class of Abelian 2-block group-algebra (2BGA) codes [18, 112], and as such are lifted product and balanced product codes [16, 17, 113]. They are of substantial practical interest, as some bivariate bicycle codes are known to exhibit high pseudo-thresholds and low logical error rates under circuit noise and suitable decoders, while having a higher rate than surface codes [63, 86], albeit requiring more connectivity. BB codes can also be seen as toric codes boosted to have limited non-local connectivity [114]. It is immediate that full block reading can be used to perform parallel Z ⊗Z or X ⊗X measurements between two BB codes using n + mZ = n + mX = 3 2n new qubits, including check qubits, while maintaining the distance and LDPC property. For instance, the J144, 12, 12K gross code, which uses 288 total qubits as a quantum memory when including check qubits, requires 216 additional qubits in total for a full block reading. This is lower than for the same set of parallel measurements in Ref. [54, Sec. 3.3], which used 360 total new qubits (216 new data qubits and 144 new check qubits), increased the check weight of the deformed code to 12, and required O(d) rounds of syndrome measurement. Thus even a naive application of block reading can reduce the spacetime overhead of logical measurements by an order of magnitude compared to conventional surgery, where the connectivity allows for it. We suspect that one can perform partial block readings using subcodes of the gross code, and similar BB codes, in order to address desired logical qubits and reduce space overheads while maintaining fast and parallel logical measurement at high fault distance, and retaining a degree of addressability. As this requires substantial numerics we defer this to future work. 48 9 Equivalence between surgery and homomorphic measurement The abstract relationship between (a) surgery by an auxiliary system and (b) homomorphic measurement [42] was folklore prior to this work, and indeed is implied by Ref. [58]. That is, given a chain map one can always construct a mapping cone and vice versa. The relationship we show in this section is more concrete: given the circuit for homomorphic measurement on a CSS code, it can always be rewritten into a logical measurement by surgery using a mapping cone (called homological measurement in Ref. [58]) and vice versa. Theorem 9.1. Let C• be a CSS code with a homomorphic measurement procotol specified by a chain map f• : A• →C•. Then the circuit corresponding to the homomorphic measurement can be rewritten to a surgery specified by cone(f•). Proof. See Appendix D. Corollary 9.2. Let C• be a CSS code with a surgery protocol specified by cone(f•) for a chain map f• : A• →C•. Then the circuit corresponding to the surgery protocol can be rewritten to a homomomorphic measurement specified by f•. Proof. See Appendix D. These proofs are performed using the ZX-calculus [115], a formal calculus for rewriting tensor networks on qubits. This is a convenient formalism, as the rewriting requires converting checks and data qubits in the homomorphic measurements into data qubits and checks respectively in the surgery procedure. In essence, the time and space axes of the ancilla circuit are inverted. However, the rewriting does not generally preserve fault-distance, unlike the rewrites in Ref. [116, 117]. The reason why converting a surgery to a homomorphic measurement does not generally preserve fault-distance is because having a low timelike distance in a surgery protocol can be resolved by measuring for multiple rounds. However, the equivalent homomorphic measurement has low space distance, which is fatal. We expect that one can prepare many of those ancilla states and measure repeatedly, in the style of a Shor measurement [5]. However, this is no longer a homomorphic measurement in the sense defined in Ref. [42]. Conversely, we do not know whether rewriting any homomorphic measurement to a hypergraph surgery preserves fault-distance. Our last results focus on general bounds on fast measurement by an auxiliary system. The results are insensitive to the nature of the auxiliary system: whether it is surgery, homomorphic measurement or any other scheme. Proposition 9.3. Let Q be a quantum LDPC code. Let S be a set of logical operator representatives measured at the same time by one or more auxiliary system for T syndrome rounds. Assume that the correct logical measurement outcomes of S are not known in advance. Let f be the lowest weight data qubit fault in Q such that: • f flips at least one logical representative in S. • If f flips a logical representative in S then it flips all other representatives in the same equivalence class in S. Let w be the weight of f, and s be the syndrome weight of f in Q. Then the phenomenological fault-distance of the measurement protocol is upper bounded by 2w + Ts. 49 Proof. See Appendix E. When Q is CSS and the set S forms a Z-type subcode A•, w is the X-distance of A•. Theorem 1.6. Any logical measurement protocol performed in o(d) rounds on a quantum LDPC code by an auxiliary system requires connections to more than one logical representative to maintain phenomenological fault distance Ω(d), unless the correct logical measurement outcome is known in advance. Proof. See Appendix E. Importantly, for fast surgery it is not enough for the deformed code in the idling operation to be a single-shot code. Because the correct outcomes of the new checks are not generally known in advance, many connections between representatives of the original code and the auxiliary hypergraph are essential to correct for measurement errors on these new checks. Indeed, our earlier results on full and partial block reading with distance d subcodes imply that this is the only thing that is required, assuming that the auxiliary hypergraph is sensibly constructed and that sufficent rounds of padding are used. 10 Discussion In this work we have introduced a variety of new, general LDPC code surgeries which reduce the time overhead of logical measurement below O(d) rounds and allow the measurement of logical operators in parallel. We generalised prior work [55-58], which used expansion properties of hypergraphs to efficiently measure individual logical operators, to use soundness instead, enabling measurement of many logical operators in parallel. We elucidated the connection between homomorphic measurement and surgery, and gave general bounds on the fault distance for any logical measurement by an auxiliary system. To prove fault distances for fast surgery in Appendix A we developed an extension of the fault complexes formalism [60] by taking mapping cones on idling spacetime volumes, which may be of independent interest. Our work raises many further questions. First, when is it possible to efficiently devise a sparse auxiliary hypergraph H with high modular expansion even if the original subcodes have low soundness? In the case of individual logical operator representatives this is straightforward by constructing an expander graph, but for more complicated parallel measurements involving many representatives the numerical construction of suitable hypergraphs is challenging for large codes, and brute-force branching techniques [59, 61] have formidable constant factor overheads. Similarly, when H is a graph one can appeal to the Decongestion Lemma [118, App. A]. When H is a small hypergraph one can typically devise a sparse cycle basis for H by numerical brute-force [55], but for large codes such a method becomes impractical. Are there families of LDPC codes for which such bases are efficient to construct, or unnecessary entirely? We expect this to be true of any code that is geometrically local in a fixed spatial dimension. In fact, constant-time surgery operations raise the intriguing possibility of avoiding the need for a sparse cycle basis at all. This is because cycle checks do not need to be explicitly measured as they can be inferred from the initialization and readout steps, forming detectors that have a constant extent in the time direction. These detectors must provide fault-tolerance, but need not be strictly local. For instance, see the nonlocal hierarchical fault-tolerant detector complexes derived from local measurements in Ref. [109]. 50 An issue that arises when using hypergraphs is that the method of universal adapters [57] to connect heterogeneous subcodes between codeblocks does not generally apply. This is because the SkipTree algorithm assumes that the auxiliary system is a graph, rather than a hypergraph. We do not know whether generalisations of the algorithm can be invented to connect heterogeneous hypergraphs for product measurements, either in full generality or for specific cases. Here, we have focused on hypergraph measurements that only measure local checks and global logicals, i.e. they do not locally deform the code after the measurement. It is possible to relax these requirements, which leads to a setting of dynamical local codes. Could this lead to more addressable fast surgery without sacrificing performance? Our proofs concerning interpolated partial block reading in Sec. 6 used a strict definition of soundness, see Def. 2.11, which we believe can be loosened to a 'small set soundness', with a meaning similar to small set expansion or modular expansion. Our proofs regarding fault-distance require d rounds of syndrome measurement ('padding') before and after the surgery protocols in order to provably protect against timelike faults extending from one timelike boundary to the other. We expect that these are typically unnecessary given sensible fault-tolerant operations performed before and after the surgery protocol. However, we have not fully characterized when these rounds of padding can be reduced in number or removed. It is known that high-distance subcodes which address specific logical qubits exist for topological and tensor product codes [42, 43], but for other families of LDPC codes little is known. It would be interesting to consider algebraic methods to find high-distance subcodes for lifted or balanced product codes. All our surgery methods used auxiliary hypergraphs, and there are other approaches to surgery which are 'ancilla-free', meaning that they use no new data qubits [83, 107]. Such approaches fall under the umbrella of quotients of codes, see Refs. [53, 64], rather than mapping cones [58]. The arguments concerning time overhead appear to translate between the two approaches, but discerning a rigorous connection is left to future work. We suggest that one could take a pushout or similar quotient directly on the spacetime volumes described by fault complexes [60] in order to study such surgeries. We do not know whether it is possible to take an arbitrary quantum LDPC code and provably measure an arbitrary set of commuting logical Pauli products in constant time overhead and linear (up to polylog) space overhead while maintaining fault-distance and the LDPC property. Such a proof is possible for the case of O(d) time overhead [61], but the difficulties of finding appropriate subcodes and cycle bases require new methods. Conversely, it may be possible to show that such a generic protocol is impossible, see Ref. [119] for a related result in the classical setting. We have developed a formalism for studying the phenomenological faults in generic CSS LDPC code surgery operations using fault complexes. We expect it is possible to prove thresholds of surgery on spacetime volumes in this formalism using combinatorial methods on the relevant fault complexes [41]. Block reading entails metachecks, which means the addition of many more detectors to the decoding graph for a single round of measurement. We do not know whether there exist decoders which can efficiently decode such graphs quickly enough to be suitable for quantum hardware. In Ref. [83] a powerset decoder was used to decode the a geometrically enhanced 4D toric code memory, which also has metachecks. However, this was far too slow for realistic hardware, and we do not know how well the powerset decoder would handle surgery operations. More realistic decoders for high-dimensional topological and tensor product codes were studied in Refs. [60, 120]. 51 Finally, to bring the theory of block reading to application it will be necessary to perform many simulations and stability experiments [121], and numerically determine pseudo-thresholds and logical error rates of the surgery operations. 11 Acknowledgements We are grateful to Katie Chang and Louis Golowich for insightful discussions and for spotting an error in an earlier version of this paper. A.C. is employed by Xanadu and would like to thank Ben Ide, Eric Sabo, Timo Hillmann, Michael Vasmer and Guillaume Dauphinais for helpful discussions. Z.H. is supported by MIT -O'Connor and Guanyu Zhu for helpful discussions. D.J.W. is supported by the Australian Research Council Discovery Early Career Research Award (DE220100625). T.J.Y. thanks Qian Xu for sharing his perspective that homomorphic measurements and code surgery should be in some sense equivalent. We used TikZit [122] to generate the Tanner graph figures in this paper. References [1] Peter W Shor. Scheme for reducing decoherence in quantum computer memory. Physical review A, 52(4):R2493, 1995. [2] Daniel Gottesman. Stabilizer codes and quantum error correction. PhD thesis, California 1997. URL https://thesis.library.caltech. edu/2900/. [3] A Yu Kitaev. Quantum computations: algorithms and error correction. Russian Mathematical Surveys, 52(6):1191-1249, Dec 1997. ISSN 1468-4829. URL http://dx.doi.org/10.1070/ RM1997v052n06ABEH002155. [4] Dorit Aharonov and Michael Ben-Or. Fault-tolerant quantum computation with constant error. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing, pages 176-188, 1997. [5] P.W. Shor. Fault-tolerant quantum computation. In Proceedings of 37th Conference on Foundations of Computer Science, pages 56-65, 1996. URL https://ieeexplore.ieee.org/document/ 548464. [6] S. B. Bravyi and A. Yu. Kitaev. Quantum codes on a lattice with boundary. arXiv:quant-ph/9811052, 1998. [7] A.Yu. Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2-30, Jan 2003. ISSN 0003-4916. URL http://dx.doi.org/10.1016/S0003-4916(02)00018-0. [8] Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. Topological quantum memory. Journal of Mathematical Physics, 43(9):4452-4505, Sep 2002. ISSN 1089-7658. URL http://dx.doi.org/10.1063/1. 1499754. [9] H ́ector Bomb ́ın. Gauge color codes: optimal transversal gates and gauge fixing in topological stabilizer codes. New Journal of Physics, 17(8):083002, 2015. [10] Topological code. In Victor V. Albert and Philippe Faist, editors, The Error Correction Zoo. 2022. URL https://errorcorrectionzoo.org/c/topological. [11] Sergey Bravyi, David Poulin, and Barbara Terhal. Tradeoffs for Reliable Quantum Information Storage in 2D Systems. Phys. Rev. Lett., 104:050503, Feb 52 2010. URL https://link.aps.org/doi/ 10.1103/PhysRevLett.104.050503. [12] Quantum ldpc (qldpc) code. In Victor V. Albert and Philippe Faist, editors, The Error Correction Zoo. 2025. URL https://errorcorrectionzoo.org/c/qldpc. [13] C. Gidney. Stim: a fast stabilizer circuit simulator. Quantum, 5:497, Jul 2021. ISSN 2521-327X. URL https://doi.org/10.22331/ q-2021-07-06-497. [14] Jean-Pierre Tillich and Gilles Z ́emor. Quantum LDPC codes with positive rate and minimum distance proportional to the square root of the blocklength. IEEE Transactions on Information Theory, 60(2):1193-1202, 2013. [15] Alexey A Kovalev and Leonid P Pryadko. Quantum kronecker sum-product lowdensity parity-check codes with finite rate. Physical Review A-Atomic, Molecular, and Optical Physics, 88(1):012311, 2013. [16] Pavel Panteleev and Gleb Kalachev. Degenerate Quantum LDPC Codes With Good Finite Length Performance. Quantum, 5:585, Nov 2021. ISSN 2521327X. URL https://doi.org/10.22331/ q-2021-11-22-585. [17] Nikolas P. Breuckmann and Jens N. Eberhardt. Balanced Product Quantum Codes. IEEE Transactions on Information Theory, 67(10):6653-6674, 2021. [18] Hsiang-Ku Lin and Leonid P Pryadko. Quantum two-block group algebra codes. Physical Review A, 109(2):022407, 2024. [19] Sergey Bravyi, Oliver Dial, Jay M. Gambetta, Dar ́ıo Gil, and Zaira Nazario. The future of quantum computing with superconducting qubits. Journal of Applied Physics, 132(16), Oct 2022. ISSN 1089-7550. URL http://dx.doi.org/10.1063/5.0082975. [20] Dolev Bluvstein, Simon J Evered, Alexandra A Geim, Sophie H Li, Hengyun Zhou, Tom Manovitz, Sepehr Ebadi, Madelyn Cain, Marcin Kalinowski, Dominik Hangleiter, et al. Logical quantum processor based on reconfigurable atom arrays. Nature, 626(7997):58-65, 2024. URL https://www.nature.com/articles/ s41586-023-06927-3. [21] PsiQuantum Team. A manufacturable platform for photonic quantum computing. Nature, pages 1-3, 2025. [22] Qian Xu, J. Pablo Bonilla Ataides, Christopher A. Pattison, Nithin Raveendran, Dolev Bluvstein, Jonathan Wurtz, Bane Vasi ́c, Mikhail D. Lukin, Liang Jiang, and Hengyun Zhou. Constant-overhead fault-tolerant quantum computation with reconfigurable atom arrays. Nature Physics, 20(7):1084-1090, Apr 2024. ISSN 1745-2481. URL http://dx.doi.org/10.1038/ s41567-024-02479-z. [23] S. Bravyi, A. W. Cross, J. M. Gambetta, D. Maslov, P. Rall, and T. J. Yoder. Highthreshold and low-overhead fault-tolerant quantum memory. Nature, 627(8005): 778-782, Mar 2024. ISSN 1476-4687. URL http: //dx.doi.org/10.1038/s41586-024-07107-7. [24] Nikolas P. Breuckmann and Simon Burton. Fold-Transversal Clifford Gates for Quantum Codes. Quantum, 8:1372, Jun 2024. ISSN 2521-327X. 2024-06-13-1372. URL https://doi.org/10.22331/q-2024-06-13-1372. [25] A. O. Quintavalle, P. Webster, and M. Vasmer. Partitioning qubits in hypergraph product codes to implement logical gates. Quantum, 7(1153), 2023. 53 [26] J. N. Eberhardt and V. Steffan. Logical Operators and Fold-Transversal Gates of Bivariate Bicycle Codes. , 2024. [27] G. Zhu, S. Sikander, E. Portnoy, A. W. Cross, and B. J. Brown. Non-Clifford and parallelizable fault-tolerant logical gates on constant and almost-constant rate homological quantum LDPC codes via higher symmetries. , 2023. [28] Thomas R Scruby, Arthur Pesah, and Mark Webster. Quantum rainbow codes. arXiv preprint , 2024. [29] Nikolas P Breuckmann, Margarita Davydova, Jens N Eberhardt, and Nathanan Tantivasadakarn. Cups and Gates I: Cohomology invariants and logical quantum operations. arXiv preprint , 2024. [30] Po-Shen Hsin, Ryohei Kobayashi, and Guanyu Zhu. Classifying Logical Gates in Quantum Codes via Cohomology Operations and Symmetry. arXiv preprint , 2024. [31] Ting-Chun Lin. Transversal non-Clifford gates for quantum LDPC codes on sheaves. arXiv preprint , 2024. [32] Louis Golowich and Ting-Chun Lin. Quantum LDPC Codes with Transversal NonClifford Gates via Products of Algebraic Codes. arXiv preprint , 2024. [33] Alexander J Malcolm, Andrew N Glaudell, Patricio Fuentes, Daryus Chandra, Alexis Schotte, Colby DeLisle, Rafael Haenel, Amir Ebrahimi, Joschka Roffe, Armanda O Quintavalle, et al. Computing Efficiently in QLDPC Codes. arXiv preprint , 2025. [34] Christophe Vuillot and Nikolas P Breuckmann. Quantum pin codes. IEEE Transactions on Information Theory, 68(9):5955-5974, 2022. [35] Hasan Sayginel, Stergios Koutsioumpas, Mark Webster, Abhishek Rajput, and Dan E Browne. Fault-Tolerant Logical Clifford Gates from Code Automorphisms. arXiv preprint , 2024. URL https://arxiv.org/abs/2409.18175. [36] Noah Berthusen, Michael J Gullans, Yifan Hong, Maryam Mudassar, and Shi Jie Samuel Tan. Automorphism gadgets in homological product codes. arXiv preprint , 2025. [37] Daniel Gottesman. Fault-tolerant quantum computation with constant overhead. arXiv preprint , 2013. [38] Omar Fawzi, Antoine Grospellier, and Anthony Leverrier. Constant Overhead Quantum Fault-Tolerance with Quantum Expander Codes. In 2018 IEEE 59th Annu. Symp. Found. Comput. Scie. (FOCS), volume 64, pages 106-114. ACM New York, NY, USA, Oct 2018. URL https: //doi.org/10.1109%2Ffocs.2018.00076. [39] Shiro Tamiya, Masato Koashi, and Hayata Yamasaki. Polylog-time-and constantspace-overhead fault-tolerant quantum computation with quantum low-density parity-check codes. arXiv preprint , 2024. [40] Quynh T Nguyen and Christopher A Pattison. Quantum fault tolerance with constant-space and logarithmic-time overheads. arXiv preprint , 2024. [41] Zhiyang He, Quynh T. Nguyen, and Christopher A. Pattison. Composable quantum fault-tolerance, 2025. URL https://arxiv.org/abs/2508.08246. [42] Shilin Huang, Tomas Jochym-O'Connor, and Theodore J. Yoder. Homomorphic Logical Measurements. PRX Quantum, 4(3):030301, Jul 2023. tum.4.030301. URL https://link.aps.org/doi/10.1103/PRXQuantum.4.030301. 54 [43] Qian Xu, Hengyun Zhou, Guo Zheng, Dolev Bluvstein, J Ataides, Mikhail D Lukin, and Liang Jiang. Fast and Parallelizable Logical Computation with Homological Product Codes. arXiv preprint , 2024. [44] Thiago Bergamaschi and Yunchao Liu. On fault tolerant single-shot logical state preparation and robust long-range entanglement. arXiv preprint , 2024. URL https://arxiv.org/abs/2411. 04405. [45] Yifan Hong. Single-shot preparation of hypergraph product codes via dimension jump. Quantum, 9:1879, October 2025. ISSN 2521-327X. 07-1879. URL https://doi.org/10.22331/q-2025-10-07-1879. [46] Christine Li, John Preskill, and Qian Xu. Transversal dimension jump for product qLDPC codes. , oct 2025. URL https://arxiv.org/pdf/2510. 07269. [47] H ́ector Bomb ́ın and Miguel Angel Martin-Delgado. Quantum measurements and gates by code deformation. Journal of Physics A: Mathematical and Theoretical, 42 (9):095302, 2009. [48] Nikolas P Breuckmann, Christophe Vuillot, Earl Campbell, Anirudh Krishna, and Barbara M Terhal. Hyperbolic and semi-hyperbolic surface codes for quantum storage. Quantum Science and Technology, 2(3):035007, Aug 2017. ISSN 2058-9565. URL http://dx.doi.org/10.1088/2058-9565/ aa7d3b. [49] Ali Lavasani and Maissam Barkeshli. Low overhead Clifford gates from joint measurements in surface, color, and hyperbolic codes. Physical Review A, 98 (5), Nov 2018. ISSN 2469-9934. URL http: //dx.doi.org/10.1103/PhysRevA.98.052319. [50] Tomas Jochym-O'Connor. Fault-tolerant gates via homological product codes. Quantum, 3:120, Feb 2019. ISSN 2521-327X. URL https://doi.org/10.22331/q-2019-02-04-120. [51] Anirudh Krishna and David Poulin. Fault-tolerant gates on hypergraph product codes. Physical Review X, 11(1):011023, 2021. [52] Lawrence Z. Cohen, Isaac H. Kim, Stephen D. Bartlett, and Benjamin J. Brown. Low-overhead fault-tolerant quantum computing using long-range connectivity. Science Advances, 8(20):eabn1717, May 2022. ISSN 2375-2548. adv.abn1717. URL http://dx.doi.org/10.1126/sciadv.abn1717. [53] Alexander Cowtan and Simon Burton. CSS code surgery as a universal construction. Quantum, 8:1344, 2024. [54] A. Cowtan. SSIP: automated surgery with quantum LDPC codes. , 2024. [55] Andrew Cross, Zhiyang He, Patrick Rall, and Theodore Yoder. Improved QLDPC Surgery: Logical Measurements and Bridging Codes. arXiv preprint , 2024. [56] Dominic J Williamson and Theodore J Yoder. Low-overhead fault-tolerant quantum computation by gauging logical operators. arXiv preprint , 2024. [57] Esha Swaroop, Tomas Jochym-O'Connor, and Theodore J Yoder. Universal adapters between quantum LDPC codes. arXiv preprint , 2024. [58] Benjamin Ide, Manoj G Gowda, Priya J Nadkarni, and Guillaume Dauphinais. Fault-tolerant logical measurements via homological measurement. arXiv preprint , 2024. 55 [59] Guo Zhang and Ying Li. Time-efficient logical operations on quantum LDPC codes. arXiv preprint , 2024. [60] Timo Hillmann, Guillaume Dauphinais, Ilan Tzitrin, and Michael Vasmer. Singleshot and measurement-based quantum error correction via fault complexes. arXiv preprint , 2024. [61] Alexander Cowtan, Zhiyang He, Dominic J. Williamson, and Theodore J. Yoder. Parallel Logical Measurements via Quantum Code Surgery. arXiv preprint , 2025. [62] Zhiyang He, Alexander Cowtan, Dominic J. Williamson, and Theodore J. Yoder. Extractors: QLDPC architectures for efficient pauli-based computation. arXiv preprint , 2025. [63] Theodore J. Yoder, Eddie Schoute, Patrick Rall, Emily Pritchett, Jay M. Gambetta, Andrew W. Cross, Malcolm Carroll, and Michael E. Beverland. Tour de gross: A modular quantum computer based on bivariate bicycle codes, 2025. URL https: //arxiv.org/abs/2506.03094. [64] Cl ́ement Poirson, Joschka Roffe, and Robert I. Booth. Engineering CSS surgery: compiling any CNOT in any code. arXiv preprint , 2025. [65] Guo Zheng, Liang Jiang, and Qian Xu. High-rate surgery: towards constantoverhead logical operations. arXiv preprint , 2025. [66] Qian Xu, Hengyun Zhou, Dolev Bluvstein, Madelyn Cain, Marcin Kalinowski, John Preskill, Mikhail D. Lukin, and Nishad Maskara. Batched high-rate logical operations for quantum LDPC codes. oct 2025. URL http://arxiv.org/abs/2510. 06159. [67] Nou ́edyn Baspin, Lucas Berent, and Lawrence Z. Cohen. Fast surgery for quantum ldpc codes, 2025. URL https://arxiv.org/abs/2510.04521. [68] Shi Jie Samuel Tan, Yifan Hong, Ting-Chun Lin, Michael J. Gullans, and MinHsiu Hsieh. Single-Shot Universality in Quantum LDPC Codes via Code-Switching. , oct 2025. URL https://arxiv.org/pdf/2510.08552. [69] Louis Golowich, Kathleen Chang, and Guanyu Zhu. Constant-overhead addressable gates via single-shot code switching. arXiv preprint , 2025. [70] Christophe Vuillot, Lingling Lao, Ben Criger, Carmen Garc ́ıa Almud ́ever, Koen Bertels, and Barbara M Terhal. Code deformation and lattice surgery are gauge fixing. New Journal of Physics, 21(3):033028, 2019. [71] Mathew B. Hastings. Weight reduction for quantum codes. Quant. Inf. Comput., 17(15-16):1307-1334, 2017. [72] M. B. Hastings. On Quantum Weight Reduction. , 2021. [73] Eric Sabo, Lane G. Gunderman, Benjamin Ide, Michael Vasmer, and Guillaume Dauphinais. Weight-Reduced Stabilizer Codes with Lower Overhead. PRX Quantum, 5:040302, Oct 2024. URL https: //link.aps.org/doi/10.1103/PRXQuantum.5.040302. [74] Andrew C. Yuan. Unified Framework for Quantum Code Embedding. jul 2025. URL https://arxiv.org/pdf/2507.05361. [75] S. Bravyi, G. Smith, and J. A. Smolin. Trading Classical and Quantum Computational Resources. Physical Review X, 6(2), Jun 2016. ISSN 2160-3308. URL http://dx.doi.org/10.1103/PhysRevX. 6.021043. [76] D. Litinski. A Game of Surface Codes: Large-Scale Quantum Computing with Lattice Surgery. Quantum, 3:128, Mar 2019. ISSN 2521-327X. 2019-03-05-128. URL http://dx.doi.org/10.22331/q-2019-03-05-128. 56 [77] H ́ector Bomb ́ın. Single-Shot Fault-Tolerant Quantum Error Correction. Physical Review X, 5(3), Sep 2015. ISSN 2160-3308. URL http://dx.doi.org/10.1103/PhysRevX.5.031043. [78] Earl T Campbell. A theory of single-shot error correction for adversarial noise. Quantum Science and Technology, 4(2):025006, Feb 2019. URL https://dx.doi.org/10.1088/2058-9565/aafc8f. [79] Shouzhen Gu, Eugene Tang, Libor Caha, Shin Ho Choe, Zhiyang He, and Aleksander Kubica. Single-shot decoding of good quantum LDPC codes. Communications in Mathematical Physics, 405(3):85, 2024. [80] Aleksander Kubica and Michael Vasmer. Single-shot quantum error correction with the three-dimensional subsystem toric code. Nature Communications, 13(1), Oct 2022. ISSN 2041-1723. URL http://dx.doi. org/10.1038/s41467-022-33923-4. [81] Thomas R. Scruby, Timo Hillmann, and Joschka Roffe. High-threshold, lowoverhead and single-shot decodable fault-tolerant quantum memory, 2024. URL https://arxiv.org/abs/2406.14445. [82] Armanda O. Quintavalle, Michael Vasmer, Joschka Roffe, and Earl T. Campbell. Single-shot error correction of three-dimensional homological product codes. PRX Quantum, 2:020340, Jun 2021. URL https: //link.aps.org/doi/10.1103/PRXQuantum.2.020340. [83] David Aasen, Matthew B. Hastings, Vadym Kliuchnikov, Juan M. Bello-Rivas, Adam Paetznick, Rui Chao, Ben W. Reichardt, Matt Zanner, Marcus P. da Silva, Zhenghan Wang, and Krysta M. Svore. A topologically fault-tolerant quantum computer with four dimensional geometric codes, 2025. URL https://arxiv.org/abs/ 2506.15130. [84] Hengyun Zhou, Chen Zhao, Madelyn Cain, Dolev Bluvstein, Casey Duckering, HongYe Hu, Sheng-Tao Wang, Aleksander Kubica, and Mikhail D. Lukin. Algorithmic Fault Tolerance for Fast Quantum Computing, 2024. URL https://arxiv.org/ abs/2406.17653. [85] Noah Shutty, Craig Gidney, and Oscar Higgott. Early-stop lattice surgery. To appear. URL https://yale.hosted.panopto.com/Panopto/Pages/Viewer.aspx? id=dbfe6994-f408-46e5-8227-b33001046e13. [86] Sergey Bravyi, Andrew W Cross, Jay M Gambetta, Dmitri Maslov, Patrick Rall, and Theodore J Yoder. High-threshold and low-overhead fault-tolerant quantum memory. Nature, 627(8005):778-782, 2024. [87] Titouan Carette, Dominic Horsman, and Simon Perdrix. SZX-Calculus: Scalable Graphical Quantum Reasoning. In Peter Rossmanith, Pinar Heggernes, and Joost-Pieter Katoen, editors, 44th International Symposium on Mathematical Foundations of Computer Science (MFCS 2019), volume 138 of Leibniz International Proceedings in Informatics (LIPIcs), pages 55:1-55:15, Dagstuhl, Germany, 2019. Schloss Dagstuhl - Leibniz-Zentrum f ̈ur Informatik. ISBN 978-3-95977117-7. URL https://drops.dagstuhl.de/ entities/document/10.4230/LIPIcs.MFCS.2019.55. [88] Nikolas P. Breuckmann and Jens Niklas Eberhardt. Quantum Low-Density ParityCheck Codes. PRX Quantum, 2:040101, Oct 2021. tum.2.040101. URL https://link.aps.org/doi/10.1103/PRXQuantum.2.040101. [89] Pavel Panteleev and Gleb Kalachev. Asymptotically good Quantum and locally testable classical LDPC codes. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, page 375-388, New York, NY, 57 USA, 2022. Association for Computing Machinery. ISBN 9781450392648. URL https://doi.org/10.1145/3519935.3520017. [90] C. A. Weibel. An Introduction to Homological Algebra. Cambridge Studies in Advanced Mathematics, Cambridge University Press, 1994. [91] Benjamin Audoux and Alain Couvreur. On tensor products of CSS codes. Ann. Inst. Henri Poincar ́e Comb. Phys. Interact., 6:239-287, 2019. [92] Matt McEwen, Dave Bacon, and Craig Gidney. Relaxing Hardware Requirements for Surface Code Circuits using Time-dynamics. Quantum, 7:1172, November 2023. ISSN 2521-327X. URL https://doi.org/10. 22331/q-2023-11-07-1172. [93] Dorit Aharonov and Lior Eldar. Quantum Locally Testable Codes. SIAM Journal on Computing, 44(5):1230-1262, 2015. [94] Pavel Panteleev and Gleb Kalachev. Asymptotically good Quantum and locally testable classical LDPC codes. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, page 375-388, New York, NY, USA, 2022. Association for Computing Machinery. ISBN 9781450392648. URL https://doi.org/10.1145/3519935.3520017. [95] Anthony Leverrier, Vivien Londe, and Gilles Z ́emor. Towards local testability for quantum coding. Quantum, 6:661, Feb 2022. ISSN 2521-327X. 2022-02-24-661. URL https://doi.org/10.22331/q-2022-02-24-661. [96] L. Eldar and A. W. Harrow. Local Hamiltonians Whose Ground States Are Hard to Approximate. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 427-438, Los Alamitos, CA, USA, Oct 2017. IEEE Computer Society. URL https://doi.ieeecomputersociety. org/10.1109/FOCS.2017.46. [97] Andrew Cross, Zhiyang He, Anand Natarajan, Mario Szegedy, and Guanyu Zhu. Quantum Locally Testable Code with Constant Soundness. Quantum, 8:1501, Oct 2024. ISSN 2521-327X. URL https://doi.org/ 10.22331/q-2024-10-18-1501. [98] Adam Wills, Ting-Chun Lin, and Min-Hsiu Hsieh. Tradeoff constructions for quantum locally testable codes. IEEE Transactions on Information Theory, 2024. [99] Irit Dinur, Ting-Chun Lin, and Thomas Vidick. Expansion of high-dimensional cubical complexes: with application to quantum locally testable codes. In 2024 IEEE 65th Annual Symposium on Foundations of Computer Science (FOCS), pages 379-385. IEEE, 2024. [100] Gleb Kalachev and Pavel Panteleev. Maximally extendable product codes are good coboundary expanders. arXiv preprint , 2025. [101] Alexander Cowtan and Shahn Majid. Algebraic aspects of boundaries in the kitaev quantum double model. Journal of Mathematical Physics, 64(10):102203, 10 2023. ISSN 0022-2488. URL https://doi.org/10.1063/5. 0127285. [102] Sergey Bravyi, Isaac Kim, Alexander Kliesch, and Robert Koenig. Adaptive constant-depth circuits for manipulating non-abelian anyons. may 2022. URL http://arxiv.org/abs/2205.01933. [103] Anasuya Lyons, Chiu Fan Bowen Lo, Nathanan Tantivasadakarn, Ashvin Vishwanath, and Ruben Verresen. Protocols for Creating Anyons and Defects via Gauging. nov 2024. URL http://arxiv.org/abs/2411.04181. 58 [104] Yuanjie Ren, Nathanan Tantivasadakarn, and Dominic J. Williamson. Efficient Preparation of Solvable Anyons with Adaptive Quantum Circuits. Physical Review X, 15(3), aug 2025. URL http://arxiv.org/abs/2411. 04985http://dx.doi.org/10.1103/b9hf-gx4f. [105] Margarita Davydova, Andreas Bauer, Julio C. Magdalena de la Fuente, Mark Webster, Dominic J. Williamson, and Benjamin J. Brown. Universal fault tolerant quantum computation in 2d without getting tied in knots. arXiv preprint arxiv:2503.15751, 2025. [106] Niel de Beaudrap and Dominic Horsman. The ZX calculus is a language for surface code lattice surgery. Quantum, 4:218, January 2020. ISSN 2521327X. URL https://doi.org/10.22331/ q-2020-01-09-218. [107] David Aasen, Jeongwan Haah, Matthew B. Hastings, and Zhenghan Wang. Geometrically enhanced topological quantum codes, 2025. URL https://arxiv.org/ abs/2505.10403. [108] M. E. Beverland, S. Huang, and V. Kliuchnikov. Fault tolerance of stabilizer channels. arXiv preprint , 2024. [109] Daniel Litinski. Blocklet concatenation: Low-overhead fault-tolerant protocols for fusion-based quantum computation. jun 2025. URL http://arxiv.org/abs/2506. 13619. [110] Dominic Horsman, Austin G Fowler, Simon Devitt, and Rodney Van Meter. Surface code quantum computing by lattice surgery. New Journal of Physics, 14(12):123011, Dec 2012. ISSN 1367-2630. URL http: //dx.doi.org/10.1088/1367-2630/14/12/123011. [111] Daniel Litinski and Felix von Oppen. Lattice Surgery with a Twist: Simplifying Clifford Gates of Surface Codes. Quantum, 2:62, May 2018. ISSN 2521-327X. URL https://doi.org/10.22331/q-2018-05-04-62. [112] Renyu Wang, Hsiang-Ku Lin, and Leonid P Pryadko. Abelian and non-Abelian quantum two-block codes. In 2023 12th International Symposium on Topics in Coding (ISTC), pages 1-5. IEEE, 2023. [113] Pavel Panteleev and Gleb Kalachev. Quantum LDPC Codes With Almost Linear Minimum Distance. IEEE Transactions on Information Theory, 68(1):213-229, 2022. [114] Zijian Liang, Ke Liu, Hao Song, and Yu-An Chen. Generalized toric codes on twisted tori for quantum error correction, 2025. URL https://arxiv.org/abs/ 2503.03827. [115] Bob Coecke and Ross Duncan. Interacting quantum observables: categorical algebra and diagrammatics. New Journal of Physics, 13(4):043016, apr 2011. URL https://dx.doi.org/10.1088/ 1367-2630/13/4/043016. [116] Benjamin Rodatz, Boldizs ́ar Po ́or, and Aleks Kissinger. Fault tolerance by construction, 2025. URL https://arxiv.org/abs/2506.17181. [117] Maximilian R ̈usch, Benjamin Rodatz, and Aleks Kissinger. Completeness for Fault Equivalence of Clifford ZX Diagrams. oct 2025. URL https://arxiv.org/pdf/ 2510.08477. [118] Michael Freedman and Matthew Hastings. Building manifolds from quantum codes. Geometric and Functional Analysis, 31(4):855-894, 2021. [119] Anirudh Krishna and Gilles Z ́emor. Tradeoffs on the volume of fault-tolerant circuits. , oct 2025. URL https://arxiv.org/abs/2510.03057v1. 59 [120] Oscar Higgott and Nikolas P. Breuckmann. Improved single-shot decoding of higher-dimensional hypergraph-product codes. PRX Quantum, 4:020332, May 2023. URL https://link.aps.org/doi/ 10.1103/PRXQuantum.4.020332. [121] Craig Gidney. Stability Experiments: The Overlooked Dual of Memory Experiments. Quantum, 6:786, August 2022. ISSN 2521-327X. URL https://doi.org/10.22331/q-2022-08-24-786. [122] TikZit. https://tikzit.github.io/index.html. [123] Peter-Jan H. S. Derks, Alex Townsend-Teague, Ansgar G. Burchards, and Jens Eisert. Designing fault-tolerant circuits using detector error models, 2024. URL https://arxiv.org/abs/2407.13826. [124] Andrew J. Landahl, Jonas T. Anderson, and Patrick R. Rice. Fault-tolerant quantum computing with color codes, 2011. URL https://arxiv.org/abs/1108.5738. https://doi.org/10.48550/arXiv.1108.5738. [125] John van de Wetering. Zx-calculus for the working quantum computer scientist, 2020. URL https://arxiv.org/abs/2012.13966. A Proofs concerning fault-distance In this appendix we provide detailed proofs of the phenomenological fault-distance lower bounds described in the main text. The standard approach to proving phenomenological fault-distance is to construct a measurement circuit, where Pauli error faults can occur at integer timesteps and check faults can occur at half-integer timesteps, with detectors [92, 123], and then to show that any fault that does not trigger any detectors and is not a product of spacetime stabilisers must have at least a given weight d. It is difficult to prove the fault-distances which we require in that setting, and so we use an alternative, almost equivalent one: that of fault complexes [60]. This enables us to use homology to classify the logical faults and bound their corresponding weights. In this setting, Pauli error faults of Z type and Z check errors occur on even timesteps, while Pauli error faults of X type and X check errors occur on odd timesteps. A sketch of the proofs in this section works as follows: we construct fault complexes for the idling operation, and then modify them using mapping cones to obtain fault complexes for various surgery protocols. These result in spacetime volumes where the code is initially in the idling operation for d rounds, then undergoes a series of deformations to perform surgery with no padding between surgery rounds, and finally all auxiliary systems are measured out and the code returns to the idling operation for d rounds. Because the spacetime volumes are defined in terms of chain complexes, and the deformations are described in terms of mapping cones, we can construct long exact sequences which relate the classes of logical faults in the spacetime volumes for surgery to the classes of logical faults in the idling spacetime volume. We then organise these classes of logical faults and show that they each must have weight at least d. Our techniques may be of independent interest to study other protocols in circuit-based or measurement-based quantum computing. Before we move to discussing fault complexes, we give some elementary homological background which is required. The Snake Lemma [90, Lem. 1.3.2] and rank-nullity together imply that for a mapping cone cone(f•) defined by a chain map f• : A• →C•, we have |Hi(cone(f))| = |Hi(C)| + |Hi-1(A)| -|imHi(f)| -|imHi-1(f)|, where |V | is shorthand for dim V when V is a vector space. 60 In more detail, the Snake Lemma dictates that for the mapping cone cone(f•) we have a long exact sequence 14: · · · Hi(A) Hi(C) Hi(cone(f)) Hi-1(A) Hi-1(C) · · · Hi(f) ι∗ π∗ Hi-1(f) In this sequence the Hi(f) are the maps between homology spaces Hi(A) →Hi(C), which are well-defined by the commutative diagram in Eq. 1. ι∗is a map Hi(C) →Hi(cone(f)) generated by the Snake Lemma. In homological measurement [58], this map at i = 1 takes logical operators in the original code to the logical operators in the deformed code when performing surgery. In that case, the map ι∗ will be a surjection, as some of the logical qubits from the original code are measured out and no new logical qubits are added when the auxiliary hypergraph's cycles are gaugefixed. π∗is a map Hi(cone(f)) →Hi-1(A). In homological measurement [58], this map at i = 1 picks out the new contributions to the logical space of the deformed code from Hi-1(A); exactness at Hi(cone(f)) shows that the only new logical qubits which can arise are from cycles in the hypergraph. In our case we will take mapping cones on spacetime volumes, rather than just codes, but a similar intuition applies. For any linear map T : V →W rank-nullity implies that |V | = | ker T| + |imT|, so we can compute |Hi(cone(f))| = | ker π∗| + |imπ∗| = |imι∗| + |imπ∗| = |Hi(C)| -| ker ι∗| + | ker Hi-1(f)| = |Hi(C)| -|imHi(f)| + |Hi-1(A)| -|imHi-1(f)|. (4) The premise of all of our proofs is that in a CSS-type code surgery, where the initial code and deformed codes are all CSS codes, all logical faults must be either of Z type in spacetime, composed of Z Pauli errors and X check errors, or X type, composed of X Pauli errors and Z check errors. As any logical fault composed of a product of these is at least as high weight as the corresponding logical fault of either Z or X type, we need only show that the weights of all Z and X logical faults are at least d to prove that the fault-distance of a protocol is at least d. We will frequently make use of the fact that when Hi(f) is injective |imHi(f)| = |Hi(A)|, by definition. A.1 Fault complexes A fault complex is defined as a chain complex with 4 terms, F• = F3 F2 F1 F0 where by convention the F3 and F0 components correspond to Z and X detectors respectively. In fault-tolerant measurement-based quantum computing (MBQC), F2 corresponds to a set of X fault locations, i.e. locations in spacetime which can experience only Xtype faults, while F1 corresponds to a set of Z-type fault locations. The fault complex is therefore in bijection with a graph state. When the graph state corresponds to a foliated CSS code, this can be reinterpreted in the circuit model, undergoing phenomenological noise. Now, F2 corresponds to a combined 14This means that the image of each map coincides with the kernel of the next map. 61 set of spacetime locations at which data qubits can experience X Pauli faults and also Z checks, which can experience measurement errors. F1 is the same but for Z Pauli faults and X checks. In this case, we can expand out a fault complex to have the following diagram: F1,1 F1,0 F1,2 F0,0 F0,2 F0,1 where components in the overall fault complex are given by taking direct sums vertically in the diagram, so we have F1,2 F1,1 ⊕F0,2 F1,0 ⊕F0,1 F0,0 Throughout this section every diagram drawn in the above form is assumed to have direct sums vertically in the diagram. 15 In the phenomenological circuit model: • F0,0 corresponds to X detectors. • F0,1 corresponds to locations for Z Pauli faults on data qubits. • F1,0 corresponds to locations for X check faults. • F0,2 corresponds to locations for Z check faults. • F1,1 corresponds to locations for X Pauli faults on data qubits. • F1,2 corresponds to Z detectors. Expressed as a commutative diagram the fault complex then has the structure: X Pauli faults X check faults Z detectors X detectors Z check faults Z Pauli faults F1,1 (X Pauli faults) F1,0 (X check faults) F1,2 (Z detectors) F0,0 (X detectors) F0,2 (Z check faults) F0,1 (Z Pauli faults) 15This is essentially passing to the "total complex" [90]. 62 Note that as a fault complex describes a graph state, it does not have the ability to describe morphisms, i.e. MBQC-based or circuit-based protocols with input or output qubits. The standard way to foliate a CSS code C• from the fault complex perspective is to take (R⊗C)•, where R• : R1 →R0 has the differential R being either the full-rank paritycheck matrix for the repetition code or its dual. In the former, the code is initialised in the |0⟩state and measured out in the Z basis; in the latter, the code is initialised in the |+⟩state and measured out in the X basis. Hence the code 'idles' for l rounds, where l is the blocklength of the repetition code. If we do this, the fault complex has the explicit form: R1 ⊗C1 R1 ⊗C0 R1 ⊗C2 R0 ⊗C0 R0 ⊗C2 R0 ⊗C1 (5) In the circuit model we have, in order, • R0 ⊗C0 corresponds to X detectors. • R0 ⊗C1 corresponds to locations for Z Pauli error faults. • R1 ⊗C0 corresponds to locations for X check error faults. • R0 ⊗C2 corresponds to locations for Z check error faults. • R1 ⊗C1 corresponds to locations for X Pauli error faults. • R1 ⊗C2 corresponds to Z detectors. Note that R1 ⊗C1 and R0 ⊗C1 correspond to the same data qubits but at different points in time; in the circuit model this is purely convention, and any given data qubit can still undergo either type of error, Z or X. It is easy to compute the homologies using the K ̈unneth formula. Then elements of H1(F) correspond to equivalence classes of logical Z faults in spacetime, composed of Z Pauli errors and X check errors, and elements of H2(F) to logical X faults, composed of X Pauli errors and Z check errors.16 For completeness, we have, • H1(R ⊗C) = H1(R) ⊗H0(C) ⊕H0(R) ⊗H1(C). • H2(R ⊗C) = H1(R) ⊗H1(C) ⊕H0(R) ⊗H2(C). When R• is the repetition code, we have H1(R ⊗C) = H1(R) ⊗H0(C) = {0, 1} ⊗C0/imHX H2(R ⊗C) = H1(R) ⊗H1(C) = {0, (1, 0, 0, · · · , 0)} ⊗H1(C), 16There are also the homology groups H1(F) and H4(F), corresponding to classes of detector triggers which are impossible. 63 Figure 5: A fault complex for a unit cell of the 2D surface code, starting and terminating at X-type boundaries. Time flows from bottom to top. where 0 and 1 are the all-zero and all-one vectors, and (1, 0, 0, · · · , 0) is a weight 1 vector representing the nonzero equivalence class in H1(R); any vector with odd weight is in the same class. When R• is dual to the repetition code we instead have H1(R ⊗C) = H0(R) ⊗H1(C) = {0, (1, 0, 0, · · · , 0)} ⊗H1(C) H2(R ⊗C) = H0(R) ⊗H2(C) = {0, 1} ⊗H2(C). In Fig. 5 we show the fault complex for a small surface code as a Tanner graph, where circles are fault locations and squares are detectors, and red, teal, yellow and blue indicate basis elements of F0, F1, F2, F3 respectively. Fault complexes are a useful algebraic framework for reasoning about fault distance of CSS codes in the phenomenological model: each location is a point in spacetime which can experience a fault, which is also a basis vector in a space (either F1 or F 2) and so weights of faults can be related directly to weights of vectors in the fault complex. We do not want to presume that the code is initialised and measured out in a given basis, because we want to prove that a given protocol has sufficient fault-distance no matter what other (assumed fault-tolerant) operations the logical circuit contains. One approach to this is to replace the parity-check matrix of R• with the parity-check matrix of a different repetition code, that of the cycle graph. This corresponds to 'taking the trace' of the circuit operation being performed, and ensures that errors cannot vanish on the initial and final boundaries. However, this leads to other, serious issues: a timelike fault which runs from the start to the finish of the spacetime volume is now a spacetime stabiliser, and there are other logical faults which we encounter that are equivalent to this fault, and so they will be unaccounted for. Instead, we modify the spacetime volume by adding checks to the start and end, which constitute entry points for timelike errors to enter and exit the volume: depending on whether R• is a full-rank repetition code or its dual, corresponding to initialising in the |0⟩or |+⟩state respectively, the new checks are Z checks or X checks. We accomplish this using mapping cones. Explicitly, when R• is the dual of the repetition code, so R• = Fl-1 2 →Fl 2 for some l determining the number of rounds in the 64 spacetime volume, the code is initialised in |+⟩and measured out in the X-basis; we then introduce new checks for timelike fault entrypoints by taking the mapping cone of the chain map g•, 0 0 0 C0 ⊕C0 F3 F2 F1 F0 g0 where g0 maps the first copy of C0 into the top layer of X-detectors in the fault complex, and the second copy of C0 into the bottom layer of X-detectors. Call the resultant fault complex F ′ = cone(g•). The unit cell from Fig. 5, once modified in this manner, is shown in Fig. 6. The consequent classes of logical faults in cone(g•) can be computed using the Snake Lemma, see Eq. 4. We have |H1(F ′)| = |H1(cone(g))| = |H1(F)| + |C0 ⊕C0| -|imH0(g)| = |H1(F)| + 2|C0| -|imH0(g)|. By the K ̈unneth formula, H0(F) = H0(R) ⊗H0(C) = {0, (1, 0, 0, · · · , 0)} ⊗H0(C), so |H0(F)| = |H0(C)|. By inspection, every element of one layer of X-detectors at the bottom of the fault complex is in the image of g•; those at the top are in the same equivalence class. Hence |imH0(g)| = |H0(F)| = |H0(C)|, so |H1(F ′)| = |H1(cone(g))| = |H1(F)| + 2|C0| -|H0(C)|. We therefore add 2mX -rX new basis elements to H1(F), where mX = |C0| is the number of X checks in C• and rX = |H0(C)| is the number of redundant X checks in C•. All new elements of H1(F ′) correspond to logical faults which enter from a timelike boundary on the top or bottom; mX -rX of these can be immediately annihilated by spacelike errors on the boundary, or by spacelike errors on further layers. We illustrate this in Fig. 7, where incoming X check errors turn on X detectors, which can then be immediately switched off again by Z data qubit Pauli errors. The remaining mX are strings of X check failures running from the bottom to the top boundary, illustrated in Fig. 8. In both cases, all new logical faults must have timelike support on the new checks at the boundary, and these cannot be cleaned from the boundary into the bulk. We can similarly calculate that |H0(F ′)| = |H0(F)| + 0 -|imH0(g)| -0 = |H0(F)| -|H0(F)| = 0. For the dual version, where R• = Fl 2 →Fl-1 2 , we instead perform the mapping cone on the cochain complex, and add 2mZ -rZ new elements to H2(F). Additionally, H3(F) becomes 0. In summary, when the initialisation and measurement is in the X basis the spacetime volume for the idling operation now contains: • All Z timelike logical faults, extending through the volume. • Those X timelike logical faults with check errors in H2(C) which are not equivalent to spacelike faults. • All Z spacelike logical faults. 65 Figure 6: A fault complex for a unit cell of the 2D surface code, starting and terminating at X-type boundaries, but with new X checks to allow Z timelike errors to enter and exit the volume. 66 Figure 7: The fault complex from Fig. 6, where a Z timelike fault enters the fault complex and is immediately annihilated by a spacelike fault. The highlighted fault locations are shown in green. 67 Figure 8: The fault complex from Fig. 6, where a Z timelike fault extends from one timelike boundary to the other. The highlighted fault locations are shown in green. 68 • No X spacelike logical faults. and when it is in the Z basis the volume is the same but with the X and Z logical fault characterisation inverted. Therefore if we use both of these fault complexes then, combined, we account for all possible logical faults, as the code (and fault complex) is CSS. We wish to describe complicated surgery protocols, rather than the idling state. Now, a certain fault complex for the idling state is sufficient to study certain stability experiments, but insufficient for the description of surgery protocols involving many logical measurements. For this the fault complex must describe the actual code deformation taking place. This can also be accomplished using mapping cones, as we shall demonstrate presently. A.2 Full block reading We are now ready to prove the fault-distance of full block reading. In this case, we start with taking E• = Fc 2 ⊗C• as our initial CSS code in the idling state, so the initial fault complex is F• = (R ⊗E)•. We then consider the two cases, one where the idling volume starts and ends with Z basis measurements, and the other with X basis measurements, including the entrypoints for timelike errors, so that we acquire two different fault complexes. For convenience we label them both F ′ •, and the difference will be made clear by dividing into the case for X-type boundaries and Z-type boundaries. In each case, we are applying mapping cones to the idling volume. To perform a full block reading in the Z basis, measuring Z logical operators, the mapping cone is on the chain complex, and in the X basis the mapping cone is on the cochain complex. In the Z basis, the mapping cone on the fault complex is taken from the chain map f• with components: 0 C2 C1 C0 F ′ 3 F ′ 2 F ′ 1 F ′ 0 f3 f2 f1 f0 where the components map the auxiliary code C• to the codeblocks in E• to be measured, at a primal layer. In other words, in the case where we start and finish in the X basis we 69 now have the fault complex E0 (Bottom X check faults) E0 (Top X check faults) R1 ⊗E1 (Old X Pauli faults) R1 ⊗E0 (Old X check faults) R1 ⊗E2 (Old Z detectors) R0 ⊗E0 (Old X detectors) R0 ⊗E2 (Old Z check faults) R0 ⊗E1 (Old Z Pauli faults) C2 (New Z detectors) C1 (New Z check faults) C0 (New Z Pauli faults) where the copies of E0 at the top of the diagram are to add timelike entrypoints to the spacetime volume, and the C• at the bottom performs the full block reading. We can now flip the boundary conditions, so that R• = Fl 2 →Fl-1 2 , and flip the additional timelike entrypoints accordingly, but still perform the same full block reading, 70 to acquire the fault complex E2 (Bottom Z check faults) E2 (Top Z check faults) R1 ⊗E1 (Old X Pauli faults) R1 ⊗E0 (Old X check faults) R1 ⊗E2 (Old Z detectors) R0 ⊗E0 (Old X detectors) R0 ⊗E2 (Old Z check faults) R0 ⊗E1 (Old Z Pauli faults) C2 (New Z detectors) C1 (New Z check faults) C0 (New Z Pauli faults) Due to the change in boundary conditions, each of these fault complexes has different logical faults, and we must account for them both. There are some important consequences of using the fault complex formalism rather than the phenomenological circuit-based model. It dictates that each fault location in the fault complex is a qubit initialised in the |+⟩state and then measured out in the X basis, with the entangling CZ gates performing space- and time-like Hadamards to exchange error types between layers. As a consequence, unlike in the circuit-based model, there are not separate timesteps on which the new ancilla data qubits are initialised in |+⟩and measured out in the X basis, as this is included in the fault complex layers. So, in the circuit model, one would initialise the new data qubits in |+⟩, then measure all checks for 1 round, then measure out the new data qubits in the X basis, which would generally be considered 3 timesteps: one each for the 'merge', measure, and 'split' steps. In the fault complex picture, this all happens on 1 timestep. This is why there is only one 'copy' of C• used for the auxiliary system in the mapping cone. Additionally, the layers of Z and X checks are alternating, rather than occurring on the same timestep as in the circuit-based model. When a surgery protocol only uses 1 round of Z measurements, as in full block reading, the fault complex does not introduce X checks connections to the new data qubits at all. There are also no new fault locations for X Pauli data qubit errors on the auxiliary system, as they would either act on data qubits in |+⟩or before the qubits are measured out in the X basis. 71 Remark A.1 (Multiple measurement rounds). In Section A.4 below we see cases in which the number of measurement rounds required is greater than 1, in which case there are more than one 'copy' of C• employed, and there are deformed X checks and new fault locations for X Pauli data qubit errors. Remark A.2 (Cycle detectors). For all the full block reading procedures we do not require any cycle checks, or detectors related to those checks. In Thm. 3.12 we have imposed a condition that no measured logical Paulis should be products of any others, i.e. the proof requires that the set of measured logical Paulis is minimal for the group it generates. This condition can be removed if one uses detectors for the cycles in the hypergraph: these are not cycle checks, merely detectors inferred from the initial state preparation and final measurement of the new data qubits, so the code will remain LDPC, but the detector matrix may not unless the cycle basis is sparse. Proposition 3.9. Let Q, Q′, Q′′,... be a set of identical CSS LDPC codeblocks with distance d, and let P be a pattern matrix determining a full block reading. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the full block reading procedure. Then full block reading performs logical measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. We treat the case of X-type boundary conditions first. X-type boundary conditions. By the Snake Lemma, we can compute the following. |H1(cone(f))| = |H1(F ′)| -|imH1(f)| + |H0(C)| -|imH0(f)| = |H1(F ′)| -|H1(C)| + |H0(C)| -0 = |H1(F ′)| -|H1(C)| + |H0(C)|. Applying the same argument to H2(cone(f)), we have: • |H1(cone(f))| = |H1(F ′)| -|H1(C)| + |H0(C)|. • |H2(cone(f))| = |H2(F ′)| -|H2(C)|. We first discuss the negative contributions to the classes of logical faults. The negative contribution to Z logical faults (faults that commute with all X detectors) appears because |H1(C)| logical measurements have occurred in the Z basis, and so there are |H1(C)| spacelike Z errors which no longer affect the logical state. The negative contribution to X logical faults (faults that commute with all Z detectors) appears because the full block reading introduces Z detectors between codeblocks, which require timelike Z check errors to now be in ker HZ of the auxiliary code in order to be undetected. As the space distance of the code and deformed code are each at least d, and there are no new equivalence classes of logical faults, we cannot clean any logical fault with these boundary conditions to have weight below d. Z-type boundary conditions. In this case the equivalence classes of logical faults can be computed for H1(cone(f)) as |H1(cone(f))| = |H1(F ′)| -|imH1(f)| + |H0(C)| -|imH0(f)| = |H1(F ′)| -0 + |H0(C)| -0 = |H1(F ′)| + |H0(C)|. Applying the same argument to H2(cone(f)) we have: 72 • |H1(cone(f))| = |H1(F ′)| + |H0(C)|. • |H2(cone(f))| = |H2(F ′)| + |H1(C)| -|H2(C)|. where H1(F ′) and H2(F ′) are equivalence classes already present before the full block reading, and H0(C) and H1(C) are new. Again we have a negative contribution of |H2(C)| to the classes of Z check faults. Note that some of the contributions to equivalence classes appear in both boundary conditions. For the new X logical faults, it is straightforward that H1(C) are equivalence classes of undetectable Z check failures on the auxiliary code, which will flip the logical measurement outcomes of the full block reading. For the new Z logical faults, the equivalence classes given by H0(C) are spacetime logical faults which begin as Z Pauli errors on the auxiliary code, and then move into the original spacetime volume and become timelike errors to then exit from the start or end boundaries. Because the Z Pauli errors are not in im(HX), they cannot be cleaned into the original spacetime volume and are genuinely new classes of logical faults. They are also not products of space and time logical faults. To prove the fault distance of a single full block reading, where there are at least d rounds of measurement before and after the block reading but only 1 round used for the block reading itself, it is sufficient to show that every element of these equivalence classes, and their combinations, must have weight at least d. Any timelike logical fault in H1(F ′) or H2(F ′) must originate at the boundaries, and therefore must have at least weight d to reach, and therefore affect, the protocol. They cannot be cleaned away from the boundary by spacetime stabilisers. Timelike errors which originate at the boundary must extend for d rounds to affect the computation. Logical faults from the positive contribution of H0(C) must exit through one of the boundaries of the spacetime volume, and hence must have weights at least d. Spacelike logical faults in H1(F ′) and H2(F ′) must have weight at least d because the original code and the deformed code during block reading both have distance at least d. Adding spacetime stabilisers to these spacelike logical faults can only increase the weights, because whatever qubit faults are removed by cleaning are merely moved to another round in the volume due to the structure of the repetition code R•. Lastly, logical faults in H1(C), which are new check errors on the auxiliary system, must have weight at least d as timelike errors. Applying spacetime stabilisers to clean these into the original code is possible, but the corresponding spacetime fault must act on at least d data qubits due to the 1-to-1 structure of the full block reading, corresponding to a spacelike logical fault, followed by some timelike errors, and then the same spacelike logical fault, and so all such logical faults must have weight at least d. Evidently a dualised proof would apply were the full block reading to be applied in the X basis instead. Remark A.3. As full block reading is only applied for 1 round of measurements, these new classes of spacetime faults corresponding to H0(C) can be eliminated by adding an extra C-1 term to C•, which adds detectors reconstructed from the initialisation and measurements of the ancilla system in the X-basis. Adding an extra C-1 term cannot make the deformed code non-LDPC, but can make the detectors high weight. As we shall presently show, these new classes of spacetime faults are not problematic anyway. 73 We can then consider the case where multiple full block readings are performed in a row, but with only O(1) rounds between them, and still leaving O(d) rounds of measurement before and after the procedure. In this case all the previous calculations apply, by applying the mapping cone to the chain or cochain complex versions of the fault complex. However, the analysis of logical faults requires two more ingredients. First, in the case where only one block reading is performed in the spacetime volume, the elements of H0(C) must exit through the boundaries of the volume. When there are multiple block readings performed in close proximity, however, these elements can connect and form noncontractible closed curves within the spacetime volume, with low fault weight. These noncontractible closed curves appear only when there is a full block reading whose logical measurements are a product of others performed in the same spacetime volume. Second, we must consider the compacted code of the spacetime volume, see Definition 4.4. Lemma A.4. Let Ξ be a set of η full block reading procedures of Z-type such that no block reading procedure in Ξ is a product of any others. Then the compacted code CC• has distance d. Proof. This is identical to the proof of code distance of the code (C ⊗R)•, where R• is a full-rank matrix, see Lemma 3.8. Theorem 3.12. Let Ξ be a set of η full block reading procedures of Z-type such that no block reading procedure in Ξ is a product of any others. Then all η logical measurement rounds of full block reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. Say that the η block readings being performed are each defined by a bitstring of length c, and the set of bitstrings is divided into sets of size ηZ and ηX for Z and X type block readings. Each bitstring contains a 1 in a location if that codeblock is involved in the block reading specified by the bitstring, and a 0 otherwise. Whether the block readings are performed simultaneously or spread across multiple rounds, we can describe the block readings being performed by a pair of pattern matrices PZ and PX over F2, with c columns and ηZ and ηX rows respectively. Note that, unlike when performing them simultaneously, these matrices do not need to form a pattern complex. Assume that there is no block reading performed in the spacetime volume whose logical measurements are products of others. Then PZ and PX are each full-rank matrices. Any logical fault composed of Z Pauli errors in H0(C) on a Z-type block reading must extend to timelike X check faults on all blocks in that row of PZ. To prevent extending all the way to the boundaries of the volume, such timelike faults must reach Z Pauli errors in H0(C) on a different row; these must subsequently extend onto all blocks in that row of PZ. Hence in order to form a noncontractible closed curve inside the volume the pattern matrix PZ must not be full rank, and so there are some block readings which are products of previous ones. Dualising the argument gives the same result for X Pauli errors in H2(C) on an X-type block reading. All other timelike logical faults follow the same arguments as before: they must either extend to a boundary or have distance d because of the new metachecks if they are timelike. 74 For spacelike errors, we must be more careful, as spacelike errors in the different deformed codes during the spacetime volume can be cleaned to have overlapping spacelike support. In particular, one can take products of spacelike logical faults in each deformed code, or the undeformed code. Before applying spacetime stabilisers, such a product is a set of disjoint spacelike logical faults separated by some number of timesteps, which in the worst case is constant. Upon multiplying by spacetime stabilisers, such spacelike logical faults can have support that is cleaned to the same timestep and hence cancel, introducing some check errors between the timesteps. Say that we have a set of θ spacelike Z logical faults Λ1, Λ2, · · · , Λθ at different logical timesteps, acting on different logical qubits. These could each be separated by a constant number of timesteps, and induce a constant number of check errors when partially cleaned by a spacetime stabiliser S. Thus, if the spacelike component of the product SΛ1Λ2 · · · Λθ has weight below d the fault-distance may fall below d. We now consider the compacted code CC• of the spacetime volume. Each logical operator Λi in each deformed code is guaranteed to be a logical operator of the CC•. Let Λ′ i be the logical operator Λi viewed in CC•. We can hence consider the product | Q i Λ′ i|, which is the minimal weight of the spacelike component of SΛ1Λ2 · · · Λθ. As CC• has distance d, | Q i Λ′ i| ≥d for any choice of logical operators Λi in any deformed codes. Then, | Q i SΛi| ≥d. For spacelike X logical operators, the argument is similar. The X-distance of the compacted code CC• is at least d, and so cleaning X logical operators to have overlapping spacelike components can never reduce the fault weight below d. For full block reading of CSS-type, we require an extension of the previous lemma concerning the compacted code. Lemma A.5. If the hypergraph surgeries are CSS-type full block readings such that no logical measurement is the product of any other then the compacted code CC• has distance d. Proof. This is identical to the proof of code distance of the code (C ⊗R)•, where each pattern matrix in the complex R• is a full-rank matrix, see Lemma 3.8. Lemma 3.16. Let Ξ be a set of η full block reading procedures of CSS-type such that no block reading procedure in Ξ is a product of any others, and each of the measurements commute. Then all η logical measurement rounds of full block reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the full set of procedures. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. In this case we again take iterative mapping cones in the same way. In the worst case, where there is short timelike length between full block readings, the cones are applied to only two layers in the fault complex: Z-type measurements to a layer with X detectors (corresponding to a basis element of R0) and X-type measurements to a close layer with Z detectors (a basis element of R1). By virtue of the mapping cone construction, the fault complex commutes as a chain complex. We then use identical arguments to the proof of Z-type full block reading, where this time the constant number of layers between measurements is reduced to 0. As the proof of fault-distance is independent of this number of rounds, and the compacted code has 75 distance d, there are no spacelike logical faults which can be cleaned to have fault weight with spacelike component lower than d. A.3 Partial block reading with high-distance subcodes Proposition 4.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance d, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code, which has code distance at least d. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. Then partial block reading performs logical measurement in O(1) rounds of syndrome measurements with fault-distance d. Proof. We can rerun the arguments for Prop. 3.9 but where A• ̸= C•. A• is a subcode of C• or can be a subcode thickened up to d times (in the space direction). The thickening does not affect the homology of A•, or the maps on homology Hi(f) into C•. In partial block reading with high-distance subcodes, unlike full block reading, we include additional X-detectors given by the initialisation and measurement of the new qubits in the X-basis. These give us detectors corresponding to a basis of cycles in the hypergraph; however, there is no guarantee that this basis is sparse. This does not affect the LDPC property of the code, but does affect the size of the detectors when they are used for decoding. In other words, the deformed code is guaranteed to be LDPC, but the fault complex is not. Now, for a Z-type partial block reading the fault complex is given by cone(f•), where f• has components 0 A2 A1 A0 A-1 F ′ 3 F ′ 2 F ′ 1 F ′ 0 0 0 f2 f1 f0 0 with im(f•) having support on a single primal layer, across potentially multiple codeblocks, and A-1 is a vector space with differential such that H0(A) = 0 to fix the cycles. For the X-type boundary conditions we then have |H1(cone(f))| = |H1(F ′)| -|imH1(f)| + |H0(A)| -|imH0(f)| = |H1(F ′)| -|H1(A)| + 0 -0 = |H1(F ′)| -|H1(A)|. And |H2(cone(f))| = |H2(F ′)| -|imH2(f)| + |H1(A)| -|imH1(f)| = |H2(F ′)| -|H2(A)| + |H1(A)| -|H1(A)| = |H2(F ′)| -|H2(A)| = |H2(cone(f)| = |H2(F ′)| -|H2(A)|. • |H1(cone(f))| = |H1(F ′)| -|H1(A)|. • |H2(cone(f))| = |H2(F ′)| -|H2(A)|. For the Z-type boundary conditions, the calculation is the same except |imH1(f)| = 0, so: 76 • |H1(cone(f))| = |H1(F ′)|, • |H2(cone(f))| = |H2(F ′)| + |H1(A)| -|H2(A)|. Hence we can categorise the logical faults in an identical way to previously and find that they must all either extend from a boundary of the spacetime volume, or have spacelike weight at least d, or have timelike weight at least d, and that spacetime stabilisers cannot clean these faults to have weight lower than d by virtue of the repetition code structure of R•. Because we added the X-detectors, which are present to allow the space distance of the deformed code to be at least d, we also eliminate the spacetime faults which were present for full-block reading and corresponded to contributions of H0(C). Now, H0(A) = 0 so there are no such contributions, so we need not be concerned about noncontractible loops in the spacetime volume. Theorem 4.5. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let Ξ be a set of Z-type partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance d and the compacted code has distance d. Then all η logical measurement rounds of partial block reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomenological fault-distance d, for any η ∈N. Proof. All logical faults must either extend from a boundary or have distance d because of the new metachecks if they are timelike. For the spacelike faults, we can apply the same argument as for full block reading. Each logical operator Λi in each deformed code is guaranteed to be a logical operator of the compacted code CC•. Let Λ′ i be the logical operator Λi viewed in CC•. We can hence consider the product | Q i Λ′ i|, which is the minimal weight of the spacelike component of SΛ1Λ2 · · · Λθ. As CC• has distance d, | Q i Λ′ i| ≥d for any choice of logical operators Λi in any deformed codes. Then, | Q i SΛi| ≥d. For spacelike X logical operators, the argument is similar but more straightforward. The X-distance of the compacted code CC• must always be at least d by Lemma 2.16, and so cleaning X logical operators to have overlapping spacelike components can never reduce the fault weight below d. A.4 Partial block reading with low-distance subcodes Consider a partial block reading, where the subcode A• for the partial block reading has distance less than d. For all our descriptions here we assume the block reading is measuring in the Z basis, and then one can dualise to acquire the corresponding construction to measure in the X basis. We also presume that the auxiliary hypergraph for a Z-type partial block reading has X-checks to gauge-fix cycles, and the same for X-type partial block readings with Z checks. These are genuine checks, and not just detectors inferred from the initial and final measurements of the new qubits. In terms of the initial auxiliary system, this still appears as adding an A-1 term to A• such that H0(A) = 0, A• = A2 A1 A0 A-1 but this is not yet the complex which we will perform the mapping cone with. 77 As before, we ignore the thickening of A• in space, as one can compute that this leaves the homology and categorisation of logical fault equivalence classes unaffected, and is necessary only to preserve the spatial distance of the deformed code. However, as the distance of A• is now less than d, we must thicken A• in time to make a fault complex K• corresponding to running the measurement procedure for more than 1 round. Let dA be the distance of A• and let dA = 1 αd for d the code distance of C•. The fault complex K• corresponds to taking (P(α) ⊗A)•, where P(α)• = P(α)1 P(α)0 = Fα-1 2 Fα 2 is dual to the full-rank repetition code with length α, and has the differential matrix being the incidence matrix of the path graph with α vertices. Explicitly K• is the chain complex: P(α)1 ⊗A1 P(α)1 ⊗A0 P(α)1 ⊗A-1 P(α)1 ⊗A2 P(α)0 ⊗A-1 P(α)0 ⊗A2 P(α)0 ⊗A1 P(α)0 ⊗A0 Observe that A• = K• when α = 1, recovering the case where we measure for only one round. Now, for a Z-type partial block reading the fault complex is given by cone(f•), where f• has the form: P(α)1 ⊗A1 P(α)1 ⊗A0 P(α)1 ⊗A-1 P(α)1 ⊗A2 P(α)0 ⊗A-1 P(α)0 ⊗A2 P(α)0 ⊗A1 P(α)0 ⊗A0 F ′ 3 F ′ 2 F ′ 1 F ′ 0 0 0 0 f2 f1 f0 There are also maps from the top terms in the diagram - P(α)1 ⊗A1, P(α)1 ⊗A0 and P(α)1 ⊗A-1 - but these are zero maps, so we abuse notation slightly by labelling the maps from the bottom layer f2, f1, f0. As before, the only nonzero maps in f• are into primal layers. The mapping cone then yields the fault complex cone(f•) for the measurement, K2 K1 K0 K-1 F ′ 3 F ′ 2 F ′ 1 F ′ 0 where the K3 term, equal to P(α)1 ⊗A2, is omitted as it has no physical meaning, being a "detector on detectors". Before continuing further, let us make some orienting remarks regarding the terms and maps in this mapping cone. Below we expand out to the full fault complex cone(f•) for 78 the case where the spacetime volume is initialised and measured out in the X basis. As before, the P(α)1 ⊗A2 term is omitted. The diagram clearly commutes, by inheriting the coherent monic of the subcode A• into the original codeblocks E•. P(α)1 ⊗A1 (New Z detectors) P(α)1 ⊗A0 (New X Pauli faults) P(α)1 ⊗A-1 (New X check faults) P(α)0 ⊗A-1 (New X detectors) P(α)0 ⊗A2 (New Z detectors) P(α)0 ⊗A1 (New Z check faults) P(α)0 ⊗A0 (New Z Pauli faults) R0 ⊗E2 (Old Z check faults) R0 ⊗E1 (Old Z Pauli faults) R1 ⊗E2 (Old Z detectors) R0 ⊗E0 (Old X detectors) R1 ⊗E1 (Old X Pauli faults) R1 ⊗E0 (Old X check faults) E0 (Bottom X checks) E0 (Top X checks) f2 f1 f0 • All terms in (R ⊗E)• are those from the fault complex of the idling operation. • The new E0 terms at the bottom are to add timelike entrypoints to the volume. • P(α)0 ⊗A-1 is a set of X detectors which arise from cycle checks throughout the surgery procedure. • P(α)0 ⊗A0 is a set of fault locations for new qubits which can undergo Z Pauli errors. 79 • P(α)1 ⊗A-1 is a set of new X checks corresponding to cycles in the hypergraph, which can undergo measurement errors. • P(α)0 ⊗A1 is a set of new Z checks, corresponding to vertices in the hypergraph, which can undergo measurement errors. • P(α)1 ⊗A0 is a set of fault locations for new qubits which can undergo X Pauli errors. Note that when the block reading was performed for only 1 round these fault locations were not present because of the immediately adjacent initialisations in |+⟩ and measurements in the X-basis. • P(α)0 ⊗A2 is a set of Z detectors which are metachecks due to the block reading. • P(α)1 ⊗A1 is a set of Z detectors due to the vertex checks throughout the surgery procedure. To acquire the opposite spacetime volume, where it is initialised and measured in the Z basis, the R• is dualised, and F ′ has extra E2 components, rather than E0. This is identical to the full block reading case. We can hence rerun a similar proof as for the case of full block-reading, and acquire the homology spaces from the Snake Lemma. We can then prove the following. Proposition 6.1. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance d, and let A• be a subcode with distance dA = 1 αd, with chain maps from A•, or a thickened version thereof, to Q, Q′, Q′′.... Let D• = cone(h•) be the deformed code with code distance d and let H have a sparse cycle basis. Assume that there are d rounds of syndrome measurements on Q, Q′, Q′′... before and after the partial block reading procedure. First, ⌈α⌉rounds of syndrome measurements are necessary (but not sufficient) to perform partial block reading with fault-distance d. Then if either: (a) α ≤2 or (b) the initial codeblocks are each locally testable codes with soundness ρ ≥2n m , where n is the codeblock length and m the number of checks, then partial block reading performs logical measurement in ⌈α⌉rounds of syndrome measurements with fault-distance d, maintaining the LDPC property throughout. Proof. X-type boundary conditions. By the Snake Lemma, we can compute the following. • |H1(cone(f))| = |H1(F ′)| -|H1(A)|. • |H2(cone(f))| = |H2(F ′)| -|H2(A)|. In detail, using the K ̈unneth formula and the fact that |H1(P(α)) = 0| and |H0(P(α))| = 1, |H1(cone(f))| = |H1(F ′)| + |H0(K)| -|imH1(f)| -|imH0(f)| = |H1(F ′)| + |H0(A)| -|imH1(f)| -|imH0(f)| = |H1(F ′)| + 0 -|H1(A)| -0 = |H1(F ′)| -|H1(A)|. The other computation works similarly. Z-type boundary conditions. In this case the equivalence classes of logical faults can be computed as, • |H1(cone(f))| = |H1(F ′)|. 80 • |H2(cone(f))| = |H2(F ′)| + |H1(A)| -|H2(A)|. We thus have a similar categorisation of all the equivalence classes of logical faults to those in previous proofs: there are spacelike errors which must always have weight at least d by virtue of the deformed code having distance at least d, and there are timelike errors that extend from a boundary and so have weight at least d. However, there are also the timelike errors which correspond to the |H1(A)| term. As check errors in the auxiliary code, these timelike logical faults must always have weight at least d if they extend for ⌈α⌉rounds, because they must have weight at least 1 αd in each round. Should fewer than ⌈α⌉rounds be used then there always exists a logical fault with weight lower than d, proving the first part of the proposition. For the second part, we must be careful about the cleaning with spacetime stabilisers. Given such a timelike logical fault with weight at least d in the auxiliary code, at least dA on each layer, cleaning it entirely into the bulk can leave only 2dA qubit Pauli X errors, one set of size dA just before the surgery and one set of size dA just after, and a set of timelike errors connecting them. We must therefore ensure that this set of timelike errors has sufficent size to yield a high fault-distance. It is distributed over α layers, so we ensure that each layer of check errors contains at least ν = (d-2v) α faults, where v is the size of the set of spacelike errors at the bottom (or equivalently the top), and v ≥dA. When v ≥d/2, ν < 0 so we are done; this is always the case if α ≤2, i.e. dA ≥d/2. The rest of the proof therefore assumes dA ≤v < d/2. Note that ν < d α = dA. Because the minimal size of the bottom set of Pauli errors is dA, there are at least dA check errors in the next round if |HZv| ≥dA, where v is the qubit fault vector for the bottom set of Pauli errors. Now, assume that the HZ parity matrix of the original code has soundness ρC ≥n m as a classical code. This is guaranteed if the original code is quantum locally testable with soundness ρ ≥2n m by Lemma 2.12. We now show that d(v, C) ≥dA, where C = ker HZ. First, if |v + x| was lower than dA for any x ∈imH⊺ X then the distance of the subcode would be lower than dA, as the subcode has connections to all X checks incident to v, so multiplying v by x restricted to A1 would result in a nontrivial logical operator in A• with weight lower than dA. So we must now consider |v+x| for any x ∈ker HZ ⊺ X. As v < d/2, the maximal overlap of v with any other X logical in the original code is v. Hence |v + x| ≥|v| ∀x ∈ker HZ ⊺ X, and v ≥dA. As a consequence of the soundness being ρC ≥n m, |HZv| ≥m n ρCd(v, C) ≥d(v, C) ≥dA, so each layer between the top and the bottom Pauli X errors must have at least dA check errors. By the same argument, applying cleaning to only a portion of the checks in the auxiliary system will distribute the faults such that the portion in the original code and the portion on checks in the auxiliary system at each timestep must sum to dA, and so the fault distance is at least d. When performing many partial block readings close together in the same spacetime volume, we strengthen the condition to require soundness ρ ≥2n m regardless of whether α is above or below 2. Theorem 6.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of partial block readings {Partial1, Partial2, Partial3, ..., Partialη} where each subcode has distance di ≥ 1 αi d for i ∈[η] and the deformed codes each have code distance d, with sparse cycle bases. Suppose the original codes have soundness ρ ≥2n m . Then all η logical measurement rounds of partial block reading can be performed sequentially using ⌈αi⌉rounds of syndrome measurement each, and using d rounds of padding 81 before and after the logical measurements. The procedure has phenomenological faultdistance d, for any η ∈N. Proof. By taking sequential mapping cones on the initial fault complex for the idling operation, one finds that the equivalence classes of logical faults are all either: • Timelike faults extending between boundaries. • Spacelike faults. • Timelike faults which act on the auxiliary systems, and cannot be cleaned to have weight below d in the original code. Timelike logical faults in the auxiliary system and between boundaries must still have weight at least d. Cleaning spacelike logical faults Λ1, Λ2, · · · , Λθ to have some shared spacelike support must induce at least a proportional weight in check faults, as the initial code has sufficient soundness, so the product SΛ1Λ2 · · · Λθ cannot fall below weight d. Note that the condition on the distance of the compacted code CC• is unnecessary in this case. Therefore the fault-distance of the procedure is at least d. A.5 Fast hypergraph surgery Theorem 5.3. Let Q, Q′, Q′′... be a set of CSS LDPC codeblocks with distance at least d, and let Ξ be a set of Z-type hypergraph surgeries {Hyper1, Hyper2, Hyper3, ..., Hyperη} such that: • Each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance d, with sparse cycle bases. • The compacted code has distance at least d. Then all η logical measurement rounds of generalised hypergraph reading can be performed sequentially using O(1) rounds of syndrome measurement each, and using d rounds of padding before and after the set of logical measurements. The procedure has phenomenological fault-distance d, for any η ∈N. Proof. This proof proceeds in a largely similar manner to those of Appendix A.3. For a single hypergraph surgery with an ancillary system A• we construct the same fault complex 0 A2 A1 A0 A-1 F ′ 3 F ′ 2 F ′ 1 F ′ 0 0 0 f2 f1 f0 0 but where A• is now an arbitrary code such that f• is non-overlapping on vertices and H0(A) = 0. X-type boundary conditions. |H1(cone(f))| = |H1(F ′)| -|imH1(f)| + |H0(A)| -|imH0(f)| = |H1(F ′)| -|imH1(f)|, 82 and |H2(cone(f))| = |H2(F ′)| -|imH2(f)| + |H1(A)| -|imH1(f)| = |H2(F ′)| -|imH2(f)| + |H1(A)| -|H1(A)| = |H2(F ′)| -|imH2(f)|. In summary we have: • |H1(cone(f))| = |H1(F ′)| -|imH1(f)|. • |H2(cone(f))| = |H2(F ′)| -|imH2(f)|. Z-type boundary conditions. Similar calculations apply but |imH2(f)| = |imH1(f)| = 0. • |H1(cone(f))| = |H1(F ′)|, • |H2(cone(f))| = |H2(F ′)| + |H1(A)| -|imH2(f)|. We can then apply such mapping cones sequentially, and so all logical faults are categorised into equivalence classes with the following representatives: • Logical faults on checks in the auxiliary system. • Logical faults on data qubits in the original or deformed codes. • Logical faults on checks extending through the spacetime volume between boundaries. In order, • Because each auxiliary complex (A•)1, (A•)2, (A•)3, ..., (A•)η has 1-cosystolic distance d, the weight of a logical fault on checks in the auxiliary system must be at least d. • Because the maps between the auxiliary systems and original codes are all nonoverlapping on vertices, the weight of such a logical fault when cleaned into the original code must be at least d to flip d checks in the auxiliary system. • Because the compacted code has distance at least d, any spacelike fault must have weight at least d. This accounts for the fact that spacelike logical operators can cleaned by spacetime stabilisers to cancel their shared spacelike support. • All other timelike faults must extend from a timelike boundary, so to affect the protocol must have weight at least d. Remark A.6. If the chain maps are overlapping on vertices (i.e. they do not satisfy Definition 5.1) then the argument breaks down, as a timelike logical fault in the auxiliary system could be cleaned to a smaller logical fault in the original code, and so additional syndrome rounds are required to increase the weights of these faults. In the extreme case, where multiple auxiliary systems measure a single logical representative, this coincides with Theorem 1.6, which shows that such a measurement procedure will always require Ω(d) rounds to preserve the fault distance. 83 B Non-CSS measurements by full block reading In this appendix we provide a more detailed explanation of how block reading extends to more general logical Pauli measurements on codes that possess transversal gates. B.1 Y measurements We can attach an ancillary system of the following form to any CSS code C: X Y Z Q HZ HX I I I H⊺ Z H⊺ X where the data qubits Q are in the original code, and the r.h.s. scalable Tanner graph is the ancillary system. The checks must always commute by inspection of the diagram. However, the properties of this combined code are complicated. Evidently, none of the Z or X logicals are measured out, as there are no new Z or X checks. Definition B.1. Let C have a basis of representatives such that supp(Y i) = supp(Xi) = supp(Zi), ∀i ∈[k]. Then we say C has matching representatives. Let C be self-dual, i.e. HZ = HX. If C has matching representatives then Y i is measured out. As Y i ∈ker HZ = ker HX, cleaning by the new Y checks PY in bijection with supp(Y i) gives PY Y i = 0 on both the new sets of data qubits. If the code does not have matching representatives then the cleaning argument does not work, and so the ancillary system may not measure the logical qubits. Note that to have supp(Xi) = supp(Zi), |supp(Y i)| = 1 mod 2, i.e. it must have odd support, in order for Xi to anticommute with Zi. If C has matching representatives, all logical qubits will be measured out in the Y basis. There are no new Z logicals, as any Z logical on the top right new data qubits must be in ker(H⊺ Z) so be products of old Z checks, and any Z operators on the bottom right new data qubits won't commute with the old X checks. The inverse is true for new X checks. Example B.2. 2D colour codes are self-dual and have Y logicals composed of X and Z logicals on the same support [124]. Remark B.3. If C is self-dual but there exists a product of Pauli Y operators on data qubits, which is a composition of Zi and Xj with supp(Zi) = supp(Xj), but i ̸= j so the Y operator is actually a representative of Zi ⊗Xj, then the Y ancillary system will still send this Y operator to zero when cleaned, but the measurement is a Zi ⊗Xj measurement instead. For the measurement protocol, assume C is self-dual with matching representatives. First initialise new data qubits in the top(bottom) right set in the |0⟩(|+⟩) states respectively. Then measure all stabilisers. To commute with all Y logicals being measured, each 84 new data qubit must be connected to an even number of new checks in bijection with each Y logical, and so X(Z) errors on top right(bottom right) qubits will not affect the logical measurement outcome. As the code is self-dual, products of Z and X checks with the same support are mapped to a subset of new Y checks, such that any nontrivial Y measurement error must be in ker(HZ) (H⊺ X) = ker(HX) (H⊺ Z), and hence the logical measurement has some protection against measurement faults. Lemma B.4. Let C be a self-dual CSS code with matching representatives. Then C admits a transversal S gate, acting as S⊗k on the logical space up to a Pauli correction. Proof. From [34, Prop. 9], C admits a transversal S gate up to a Pauli correction if: • u · v = 0 mod 2 for all X stabilisers u, v. • For any X logical operator u and X stabiliser v, u · v = 0 mod 2. As C is self-dual the first condition is satisfied. From the matching representatives property, any Xi logical operator u has the same support as its matching Zi operator, and Zi must commute with all stabilisers hence u · v = 0 mod 2. Therefore the procedure is not very useful, because the codeblock can have its measurement basis permuted by transversal S and H and so Y measurements are unnecessary. The same argument applies to performing block reading of multiple codeblocks in products containing Y terms. We do not know if partial block reading for non-CSS measurements, where the subcode is a stabiliser code but not CSS, can allow for fast measurements which cannot be performed using transversal gates to permute bases. B.2 X ⊗Z measurements Given a CSS code C, let D be the dual of that code, i.e. HD X = HC Z and HD Z = HC X. Introduce the following ancillary system: X Z HX HZ [0, H⊺ X] I [0, I] I HZ X [0, HX] [HZ, 0] [I, 0] [I, 0] I where C is on the l.h.s. and D on the r.h.s. The top right and middle checks are now of mixed-type, checking stabilisers with both X and Z terms. One can check by symplectic multiplication that the above Tanner graph commutes. We now also have meta-checks which are of 'mixed-type', using both X and Z checks, indicated by the pink diamond. This is a Z⊗Z measurement by full block reading, but conjugated by single-qubit Cliffords. However, because the codeblocks C and D are dual, this could equally be performed by using a modified version of homomorphic measurements but with transversal CX and CZ gates [24]. 85 Figure 9: Venn diagram indicating the overlapping supports of a logical operator ΛZ and sets of checks κλ (magenta) and v ∩U (dashed). C Proofs concerning modular expansion In this appendix we provide technical proofs related to properties of modular expansion used in the main text. Theorem 7.7 (Distance preservation). Let {Zi}i∈I be a set of logical representatives (trivial or nontrivial) on a distance d CSS code Q with supports ξi = supp(Zi). Let H(V, E) be a hypergraph with vertices in V being Z checks, hyperedges in E data qubits and X gauge checks forming a cycle basis, such that H measures all the logical representatives {Zi}i∈I. Let fp : S i ξi →V be an injective port function such that U = imfp ⊂V, and fp(ξi) = Vi ⊂V, ∀i ∈I. Let Md(H) ≥1, where W is generated by fp(ξi). Then the distance of the deformed code Q ↔H is at least d. Proof. We show that for every nontrivial ΛZ logical operator in Q ↔H with some support in S i Vi, the weight of ΛZ cannot drop below d by multiplying with new Z checks (vertices) in H. Let v be an arbitrary subset of V, and define the stabiliser Z(G⊺v) to be the product of all checks in v. We must show that |ΛZ ◦Z(G⊺v)| ≥d. First, G⊺v ≥min(d, |κλ| -|v ∩κλ| + |v ∩U\κλ| : λ ∈2I) by the definition of modular expansion. If min(d, |κλ| -|v ∩κλ| + |v ∩U\κλ| : λ ∈2I) = d then we are immediately done, as no matter which data qubits in Q are turned off by Z(G⊺v), there are at least d new data qubits with a Z Pauli in the ancillary hypergraph. The other case is more difficult. If |G⊺v| ≥|κλ|-|v∩κλ|+|v∩U\κλ| for some λ ∈2I then we can construct the diagram in Fig. 9. 86 The set in the dashed region is v ∩U, the top right circle is U, the bottom left circle is ΛZ and the magenta circle is κλ. We compute by inspection of the diagram that |ΛZ ◦Z(G⊺v)| ≥|τλ| + |σλ| + |Bλ| + |B′ λ| + |Rλ| + |κλ| -|v ∩κλ| + |v ∩U\κλ|. Next, |κλ| -|v ∩κλ| + |v ∩U\κλ| = |φλ| + |σλ| + |B′ λ| + |γλ| and |ΛZ ◦Zλ| = |Rλ| + |τλ| + |γλ| + |Bλ| + |φλ| ≥d where Zλ = Z(f-1κλ) acts as Z on the preimage of κλ in the original code. Zλ is a logical operator, but not necessarily a nontrivial one. As ΛZ is a nontrivial logical operator of the deformed code, |ΛZ ◦Zλ| ≥d. Plugging these together we find that |ΛZ ◦Z(G⊺v)| ≥2(|σλ| + |B′ λ|) + d ≥d. As the cycles are gauge-fixed and no Z logical can be reduced below weight d, the distance of Q ↔H is at least d. Theorem 7.8 (Thickening). Let H(V, E) be a hypergraph with modular expansion Mt(H). Let J L be the path graph with length L ≥ 1 Mt(H), i.e. L vertices and L -1 edges. Let HL := H□J L be the hypergraph of H thickened L times. Then HL has modular expansion Mt(HL) ≥1 for Ul= S i V l i at any level l∈{1, 2, · · · , L}, where V l i is the copy of Vi in the lth level of the thickened hypergraph. Proof. Let G : F2E →F2V be the incidence matrix of H, RL the incidence matrix of J L, and GL the incidence matrix of HL. Then GL = G ⊗IL In ⊗RL where n = |V|. We divide up the vertices of HL into sets Vlfor l∈[L], corresponding to each level of the thickened graph. Similarly, for a given level lwe have V l i and Ul= S i V l i . Letting vl∈Vlbe a set of vertices in level l, we have the vector v = (v1 v2 · · · vL)⊺∈ S lVlseen as a set of vertices in HL. Associated to this set we also have ul= vl∩Ul= v ∩Ul. Now, let r = arg minj min(t, |κj λ| -|uj ∩κj λ| + |uj\κj λ| : λ ∈2I). Using the triangle inequality and the fact that LMt(H) ≥1, |G⊺ Lv| = |G⊺ L(v1 v2 · · · vL)⊺| = L X j=2 |vj-1 + vj| + L X j=1 |G⊺vj| ≥|vr + vl| + LMt(H) min(t, |κr λ| -|ur ∩κr λ| + |ur\κr λ| : λ ∈2I) ≥|ur + ul| + min(t, |κr λ| -|ur ∩κr λ| + |ur\κr λ| : λ ∈2I). Now consider the different cases. If min(·) = t then |ur + ul| + t ≥t and we are done. 87 If min(·) = |κr λ| -|ur ∩κr λ| + |ur\κr λ| for some λ ∈2I then |ur + ul| + |κr λ| -|ur ∩κr λ| + |ur\κr λ| = |ur ∩κr λ + ul∩κr λ| + |ur\κr λ + ul\κl λ| + |κl λ| -|ur ∩κr λ| + |ur\κr λ| ≥|κl λ| -|ul∩κl λ| + |ul\κl λ|, making use of the triangle inequality for the last line. Therefore |G⊺ Lv| ≥min(t, |κl λ| -|v ∩κl λ| + |v ∩Ul\κl λ| : λ ∈2I) for any set of vertices v in HL, so HL has modular expansion Mt(HL) ≥1. D Circuit-equivalence of surgery and homomorphic measurement In this appendix we derive a circuit equivalence between generalized surgery measurement and homomorphic measurement. The following proofs use the so-called "scalable" ZXcalculus from Ref. [87]. This calculus has previously been used to describe LDPC surgeries in Ref. [64]. We assume familiarity with the ZX-calculus for this section; see Ref. [125] for a comprehensive introduction. In short, the scalable ZX-calculus (SZX-calculus) has thick wires indexed by a positive integer n, denoting a register of n qubits. Green and red spiders then act on the n qubits, with spider phases now represented by a vector v over C of length n, containing phases acting on each qubit. As in the conventional ZX-calculus, an all-zeros vector is omitted and the green or red spider left unlabelled. We also omit the label denoting the size of the qubit register. The SZX-calculus has other generators than green and red spiders. For example, we also have the matrix arrows A and B (6) where A and B are matrices over F2. These generators can be defined by their action on the Hilbert space. Suppose A ∈Fn×m 2 and B ∈Fp×q 2 . For a qubit state in the computational basis |x⟩with x ∈Fm 2 , the arrow pointing to the right in Eq. 6 corresponds to the linear map RA : |x⟩7→|Ax⟩. Meanwhile, the arrow pointing to the left corresponds to the linear map H⊗pRBT H⊗q = H⊗p |x⟩7→|BT x⟩ H⊗q. Stabiliser measurement on CSS codes can be neatly described using the SZX-calculus. Given a CSS code with n qubits and the parity-check matrices HX, HZ, the measurements can be described by H⊺ X HZ sZπ sXπ where sZπ is the vector of phases representing syndrome outcomes sZ = {0, 1}mZ, and the same for sXπ. In particular, when the input state is in the codespace, sZ = sX = 0 and the phase vectors can be omitted. The same applies for the case where the the initial state is not in the codespace but the measurements project it into the codespace. Commutation of stabilisers implies that H⊺ X HZ = H⊺ X HZ 88 and furthermore, HZ HZ = HZ with a similar rewrite for X-checks. Obviously such rewrites do not generally preserve fault-tolerance of the circuit: we are taking multiple rounds of syndrome measurement and collapsing them into one round, which will generally reduce the fault-distance of the spacetime code. Along with the merge and copy rules of the ZX-calculus we can now prove that surgery by an auxiliary system and homomorphic measurement using transversal gates are equivalent as circuits. Theorem 9.1. Let C• be a CSS code with a homomorphic measurement procotol specified by a chain map f• : A• →C•. Then the circuit corresponding to the homomorphic measurement can be rewritten to a surgery specified by cone(f•). Proof. We describe an Z-basis logical measurement; the same result for X measurements can be easily acquired similarly. The rewrites required are as follows. f⊺ 1 απ H′ Z (H′ X)⊺ H⊺ X HZ = f⊺ 1 απ (H′ X)⊺ H⊺ X HZ = f⊺ 1 απ (H′ X)⊺ H⊺ X HZ = f⊺ 1 απ (H′ X)⊺ H⊺ X HZ f⊺ 2 N⊺ The first SZX-diagram on the top left is depicting a homomorphic measurement. The original code is shown on the bottom wire, with parity-check matrices HZ, HX. An ancilla code is initialised in |0⟩⊗n′ then its stabiliser checks are measured. Then a set of transversal CNOTs defined by a chain map f• are applied, with the controls on the initial code and the targets on the ancilla code. Then the ancilla code is measured out in the Z basis, with the vector απ where α ∈Fn′ 2 determines the logical measurement outcome. Next, the |0⟩initialisations are commuted through the Z-check measurements, and then the merge rule is used to extend the check qubits of the X-check measurements to instead become data qubits of the auxiliary system, initialised in |+⟩mX and measured in the X-basis. We now see that the data qubits of the homomorphic measurement have become check outcomes of Z measurements, which include both the new data qubits of 89 the auxiliary system and data qubits of the original code. The outcomes απ of these Z measurements determine the logical measurement outcome. Last, we add X checks which gauge-fix the cycles (there are cases, such as in full-block reading and the CKBB scheme [52], where these are not necessary), with the check matrix N. We also deform the X checks of the original code to include qubits of the auxiliary system, such that these checks commute with the Z checks that determine the logical measurement outcome. The deformation of these checks is determined by the chain map f•. We can then do the opposite rewriting to convert surgeries to homomorphic measurements. Corollary 9.2. Let C• be a CSS code with a surgery protocol specified by cone(f•) for a chain map f• : A• →C•. Then the circuit corresponding to the surgery protocol can be rewritten to a homomomorphic measurement specified by f•. Proof. F ⊺ απ G⊺ H⊺ X HZ M⊺ = V E F ⊺ απ G⊺ H⊺ X HZ M⊺ V E = F ⊺ απ G⊺ H⊺ X HZ = F ⊺ απ G⊺ H⊺ X HZ H′ Z N⊺ V V E E Thus we see that the vertices V of the hypergraph in surgery, which are Z-checks, are rewritten to the data qubits in homomorphic measurement; the edges E of the hypergraph, which are data qubits, are rewritten to be X-check outcomes. The cycles vanish from the circuit entirely, which is because they become X-type meta-checks on the ancilla code. These relabellings correspond precisely to the degree-shifting of a mapping cone to recover the original chain map f•. 90 E Fast measurement requires multiple representatives In this appendix we provide a proof of our results lower bounding the spacetime volume of a general procedure to measure logical representatives in a quantum code. Proposition 9.3. Let Q be a quantum LDPC code. Let S be a set of logical operator representatives measured at the same time by one or more auxiliary system for T syndrome rounds. Assume that the correct logical measurement outcomes of S are not known in advance. Let f be the lowest weight data qubit fault in Q such that: • f flips at least one logical representative in S. • If f flips a logical representative in S then it flips all other representatives in the same equivalence class in S. Let w be the weight of f, and s be the syndrome weight of f in Q. Then the phenomenological fault-distance of the measurement protocol is upper bounded by 2w + Ts. Proof. The measurement scheme begins at time ti and ends at time to = ti + T, and measures all the logical operator representatives contained in S. Data qubit errors occur at integer timesteps and measurement errors occur at half-integer timesteps. To measure this operator by an auxiliary system, entangling gates are applied from each qubit in S to some other system Aux of checks and data qubits, such that the measurement outcomes of checks in Aux infer the logical measurement outcomes. The entangling gates are applied while measuring checks in Aux for T rounds before ceasing the application of entangling gates. In order to receive an incorrect logical measurement outcome without detecting an error it is sufficient to flip any of those logical operators for the duration of the logical measurement scheme, so long as there are no logical measurement outcomes between representatives which are 'mismatched', meaning that measurements of different representatives of the same logical operator yield different outcomes. Thus if any representative in S is flipped, all representatives in the same equivalence class in S must be flipped. Observe that as none of the correct logical measurement outcomes of S are known in advance (which would occur if Q was initialised in a particular known state with deterministic logical measurement outcomes) there are now no detectors which can detect the logical error. To flip some operators in the round at time ti, apply a Pauli operator fault f to Q with weight w satisfying the conditions specified in the Proposition. This set of faults is detectable by checks in the original code at timesteps ti + 1 2, ti + 3 2, · · · , so in order to conceal the fault we add s check errors at each timestep for the duration of the logical measurement, where s is the number of checks which f flips. In total there are Ts of these check faults. Lastly, apply an identical Pauli operator fault f after the logical measurement procedure concludes, at time to + 1. This is a logical fault of the measurement procedure with weight 2w + Ts. Were the code Q left in the idling operation instead, the above logical fault would be a spacetime stabiliser of the code, and so this logical fault weight can be vastly lower than the distance of Q depending on S, the time T and the syndrome weights s of operators. Note also that this is just an upper bound on the fault-distance: if the auxiliary system is poorly constructed, for example by connecting every qubit in S to a single new check 91 qubit, the fault-distance may be substantially lower due to check errors on the auxiliary system. Theorem 1.6. Any logical measurement protocol performed in o(d) rounds on a quantum LDPC code by an auxiliary system requires connections to more than one logical representative to maintain phenomenological fault distance Ω(d), unless the correct logical measurement outcome is known in advance. We prove this by demonstrating that otherwise there exists a weight |f| logical fault, where |f| ∈o(d). Proof. Assume w.l.o.g. that the measurement being performed is on a Z logical operator, as any Pauli measurement is locally equivalent to this measurement by conjugation of single-qubit Cliffords. If the logical operator representative has support v, the measurement being performed is of the operator N i∈v Zi. To measure this operator by an ancillary system, entangling gates are applied from each qubit i to some other system Aux of checks and data qubits, such that the measurement outcomes of checks in Aux infer the logical measurement outcome of N i∈v Zi. The measurement scheme starts at time ti and concludes at time to, with o(d) rounds between. Data qubit errors occur at integer timesteps and measurement errors occur at half-integer timesteps. Assume that the entangling gates are applied while measuring checks in Aux for o(d) rounds before ceasing the application of entangling gates. Regardless of the system of checks and other structure of Aux, there exists a low weight fault which flips the logical measurement outcome. Consider an arbitrary qubit j ∈v. At any time t ∈[ti, to] the application of a Pauli error Xt j, followed by measurement errors on all incident Y and Z checks at time t+ 1 2, and then a Pauli error Xt+1 j , is a local spacetime stabiliser of the original code, and therefore does not violate any detectors. However, at time t + 1 2 the outcome of measuring N i∈v Zi is flipped, as Xj anticommutes with Zj. This is then corrected by the subsequent Pauli error Xt+1 j , and so at time t + 3 2 the measurement of N i∈v Zi would return the correct outcome again. Hence a logical fault f on the logical measurement can be constructed by applying a Pauli error Xti j and measurement errors on all incident X and Y checks on qubit j from time ti+ 1 2 to time to+ 1 2, and a final Pauli error Xto+1 j . This is a spacetime stabiliser of the original code, as it is composed of local spacetime stabilisers at times t = ti, ti + 1, · · · , to, hence the fault does not violate any detectors. Each round of measurement of N i∈v Zi will be flipped by the original Xti j error, and the outcome of N i∈v Zi is then flipped back after the measurement procedure concludes. The fault f is then Xto+1 j Xti j S ti≤t≤to M t+ 1 2 j , where M t+ 1 2 j is a measurement fault on each Y or Z check incident to qubit j at time t + 1 2. Because the code is LDPC, the fault at each round has weight bounded above by a constant, where the precise weight depends on the number of Y and Z checks incident to qubit j, but this cannot increase the weight asymptotically. Because the logical measurement is performed for o(d) rounds, the weight |f| ∈o(d). Note that if the correct logical measurement outcome is known in advance, there is an additional detector at each round of N i∈v Zi and so the above argument does not 92 hold. Similarly, the reason the proof no longer applies when measuring multiple logical operator representatives is because there are additional detectors which appear because the measurements of each representative of the same logical operator must agree with each other to be correct; equivalently, these additional detectors correspond to sets of measurements whose correct outcomes are known in advance, as they are stabilisers of the original code. Both of the above proofs are insensitive to the structure of Aux, which could be a measurement hypergraph in surgery, an ancilla block in homomomorphic measurement or any other logical measurement gadget. 93
2510.14891
A Performance Portable Matrix Free Dense MTTKRP in GenTen* Gabriel Kosmacher1, Eric T. Phipps2, Sivasankaran Rajamanickam2 Abstract—We extend the GenTen tensor decomposi- tion package by introducing an accelerated dense ma- tricized tensor times Khatri-Rao product (MTTKRP), the workhorse kernel for canonical polyadic (CP) tensor decompositions, that is portable and performant on modern CPU and GPU architectures. In contrast to the state-of-the-art matrix multiply based MTTKRP kernels used by Tensor Toolbox, TensorLy, etc., that ex- plicitly form Khatri-Rao matrices, we develop a matrix- free element-wise parallelization approach whose mem- ory cost grows with the rank R like the sum of the tensor shape O(R(n+m+k)), compared to matrix-based methods whose memory cost grows like the product of the tensor shape O(R(mnk)). For the largest problem we study, a rank 2000 MTTKRP, the smaller growth rate yields a matrix-free memory cost of just 2% of the matrix-based methods, a 50x improvement. In practice, the reduced memory impact means our matrix-free MTTKRP can compute a rank 2000 tensor decompo- sition on a single NVIDIA H100 instead of six H100s using a matrix-based MTTKRP. We also compare our optimized matrix-free MTTKRP to baseline matrix- free implementations on different devices, showing a 3x single-device speedup on an Intel 8480+ CPU and an 11x speedup on a H100 GPU. In addition to numerical results, we provide fine grained performance models for an ideal multi-level cache machine, compare analyt- ical performance predictions to empirical results, and provide a motivated heuristic selection for selecting an algorithmic hyperparameter. I. Introduction Dense tensors are relevant in the fields of numerical simulations [1], medical imaging [2], signal processing [3], and color perception [4], among others. Low-rank approx- imations of dense tensors are a powerful tool used to compress such datasets and reveal relationships between modes. A popular decomposition for computing such ap- proximations is the CANDECOMP/PARAFAC (CP), also called canonical polyadic, decomposition [5], taking a user *Sandia National Laboratories is a multimission laboratory man- aged and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell Interna- tional Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. SAND2025-13211O. 1The University of Texas at Austin 2Sandia National Laboratories Corresponding author: Gabriel Kosmacher Email: gkosmacher@utexas.edu supplied rank R and outputting a weighted sum of R rank- 1 tensors. The bottleneck kernel for CP algorithms [6] is the matrix action of a mode-k unfolding of a tensor with the Khatri-Rao product of factor matrices and is called the matricized tensor times tensor Khatri-Rao product (MTTKRP). Most open-source tensor decomposition packages (see Section I-A) compute the MTTKRP by explicitly forming the Khatri-Rao product matrix, allowing for invocations to BLAS-3 algorithms but compromising on memory usage. In contrast, we seek to compute the MTTKRP without explicitly forming the Khatri-Rao matrix and instead take advantage of the problem structure to develop matrix-free, also called element-based, algorithms. Our work extends the open-source GenTen1 tensor decomposition package with performance portable MTTKRPs and, as far as we are aware, introduces the first GPU algorithm for a matrix- free MTTKRP with dense tensors. Contributions: Our work offers the following method- ological contributions and experimental results: • The design and open-source implementation of per- formance portable, matrix-free algorithms for dense MTTKRPs (see Section III). For a tensor Y of shape I1, . . . , Id and CP-rank R, our matrix-free algorithms for a mode-k MTTKRP have O(R P n In) memory usage while standard matrix-based approaches have a O(R Q n̸=k In) memory cost. • Multiple analytic memory and computational models for our algorithmic variants based on an ideal multi- level cache machine with different caching assump- tions (see Section IV-B). • A heuristic model for an algorithmic hyper-parameter that aligns with measured times on a CPU and GPU (see Section IV-C1). • CPU and GPU algorithmic evaluations (see Sec- tions IV-D1 and IV-D2). In particular, our matrix-free MTTKRP can perform a tensor decomposition on 1 GPU that requires 6 GPUs for a matrix-based MT- TKRP. We also compare our optimized, matrix-free MTTKRP to baseline matrix-free implementations, showing a 3x speedup on an Intel 8480+ CPU and an 11x speedup on an NVIDIA H100 GPU. 1https://github.com/sandialabs/GenTen 1 arXiv:2510.14891v1 [cs.MS] 16 Oct 2025 A. Related work There are a variety of open-source packages offering dense CP decompositions. N-Way Toolbox [7], TensorLy [8], and rTensor [9] compute MTTKRPs by explicitly unfolding the tensor and forming the Khatri-Rao product. The MATLAB Tensor Toolbox [6], [10], PyTTB [11] and GenTen [12] use and algorithm introduced by Phan et. al. [13], described in Section III-A, that avoids expensive tensor unfolding and reduces memory impact by forming partial Khatri-Rao matrices. Vannieuwenhoven et. al. [14] build upon [13] by constructing an algorithm, implemented using the Eigen3 C++ library [15], that sequentially stores blocks of the Khatri-Rao matrix in constant memory, hence achieving the desired O(R P n In) storage complex- ity. However, the blocking algorithm is only applicable to the special case where all k-mode MTTKRPs are computed at once, and can not generalize to compute individual mode-k MTTKRPs needed by, e.g., the CP- ALS algorithm. Table I gives an overview of features for different dense mode-K MTTKRP algorithms and codebases. There has been considerable more work for optimizing MTTKRPs in the sparse tensor case. DeFacTo [16] stores tensors as collections of sparse matrices and relies on opti- mized sparse matrix-vector multiply routines to compute the MTTKRP. SPLATT [17] offers a variety of compressed tensor formats and a cache-blocking scheme to speedup sparse MTTKRPs by parallelizing over rows of the output matrix. HiCOO [18] proposes a new storage format that partitions the sparse tensor into sparse blocks to increase memory bandwidth and develops an algorithm that paral- lelizes over groups of sparse blocks. Both SPLATT and HiCOO were developed primarily for low-thread-count CPUs and do not consider portability to high-thread GPU architectures. Laukemann et. al. [19] introduce another sparse tensor format, ALTO, that linearizes the tensors multi-index set such that tensor entries that are close in space are close in memory. Tensor entries are then decomposed into line segments that encode subspaces, and a parallel algorithm is introduced that assigns threads to chunks of line segments. Though developed for CPUs, increasing the number of chunks per line segment should allow for straightforward portability to GPUs. GenTen [20] offers an performance portable approach that parallelizes over tensor nonzeros and permutes the nonzeros to avoid atomic contention. This algorithm is similar to ours (in the dense case) as it avoids forming the Khatri-Rao prod- uct explicitly. However, permuting the tensor nonzeros requires an additional storage cost of O(dN), where d is the dimension of the tensor and N is the number of nonzeros. As we show in IV-B, explicitly permuting tensor entries in such a fashion necessary for an efficient dense MTTKRP. II. Background A. Notation A tensor X ∈RI1×···×Id is d-way array of size N = I1 × · · · × Id typically stored in a flattened row-major or column-major format. It is standard practice [5] to denote tensors as bold uppercase letters in Euler calligraphic font (e.g., X), matrices by bold uppercase letters (e.g., A), vectors by bold lowercase letters (e.g., a), and scalars by lowercase letters (e.g., a). We adopt Matlab notation for array indexing2. The Khatri-Rao product ⊙of two matrices A ∈Rm×p and B ∈Rn×p is the column-wise Kronecker product of A and B defined by A ⊙B = [a1 ⊗b1| . . . |ap ⊗bp] ∈Rmn×p, (1) where ⊗is the Kronecker product and a(k) j ≡Ak(:, j) is a column vector. Tensors are indexed via multi-indexed arrays i = (i1, . . . , id) where in ∈[In]. The set notation [a] is shortand for {z : z ∈Z, 1 ≤z ≤a}. We call i ∈[N] the linear index of a tensor and (i1, . . . , id) the multi-index, we assume a bijective mapping between the two index sets, and we refer to a tensor element as xi ≡X(i) ≡X(i1, . . . , id). The forward map i 7→(i1, . . . , id) is given by the operator ind2sub(X, i) and is defined like the Matlab function of the same name3, while the backward map (i1, . . . id) 7→i is given by the operator sub2ind(X, (i1, . . . , id)), again defined like the corresponding Matlab function4. The nth mode-k slice of a tensor S(k) n is a d −1 subtensor obtained by fixing the kth value of the multi-index to n and allowing the other values to range, i.e., S(1) n = X(n, :, . . . , :). The mode-k matricization of a tensor X rearranges the tensor elements into a matrix X(k) ∈RIk×N/Ik such that element i maps to (ik, i′ k) by i′ k = 1 + k−1 X ℓ=1 (iℓ−1)Iℓ+ d X ℓ=k+1 (iℓ−1)Iℓ/Ik. (2) The reshape operator reshape(X, [s1, . . . , sm]) does not permute the underlying tensor elements (unlike tensor matricization) and is defined like the Matlab function of the same name5. The rank-R canonical polyadic (CP) decomposition [21], [22] of a tensor X is a rank-R tensor M of the form X ≈M = R X j=1 λja(1) j ◦· · · ◦a(d) j , (3) 2https://www.mathworks.com/help/matlab/math/array- indexing.html 3https://www.mathworks.com/help/matlab/ref/ind2sub.html 4https://www.mathworks.com/help/matlab/ref/sub2ind.html 5https://www.mathworks.com/help/matlab/ref/double.reshape. html 2 TABLE I Methods and codebases for dense mode-k MTTKRPs. Given a tensor Y of shape I1, . . . , Id with factor matrices A1, . . . , Ad, let the Khatri-Rao matrix be Z = Ad ⊙· · · ⊙Ak+1 ⊙Ak−1 ⊙· · · ⊙A1 (see Section II-B for details). We list memory usage and open-source codes for each method, and we list the parallelization method on CPUs and GPUs for each code base. All methods have the same computational cost of O(Rd(Q n In)). methods memory code CPU GPU full Z [6] O  R Q n̸=k In  N-way Toolbox [7] BLAS-3 TensorLy [8] BLAS-3 BLAS-3 rTensor [9] BLAS-3 partial Z [13] O  R k−1 Q n=1 In + dQ k+1 In  TTB [6] BLAS-3 PyTTB [11] BLAS-3 GenTen [12] BLAS-3 BLAS-3 no Z (our method) O  R dP n=1 In  GenTen SIMD SIMT where λ ∈RR is a weight vector and ◦is a d-way outer product. We call Ak = [a(1) 1 , . . . , a(k) d ] ∈RIk×R the mode- k factor matrix of a tensor X. The form M = JA1, . . . , AdK is referred to as a Kruskal tensor. B. The MTTKRP kernel Algorithms to compute CP decompositions generally fall into two categories: all-at-once [23] or alternating [22], [24]. In both cases, the workhorse kernel is the mode-k matricized tensor times Khatri-Rao product (MTTKRP) defined as G(k) = Y(k) diag(λ)ZT (4) where Z = Ad ⊙· · · ⊙Ak+1 ⊙Ak−1 ⊙· · · ⊙A1 for some a Kruskal tensor M = JA1, . . . , AdK, d-way tensor Y, and weight vector λ ∈RR +. We call algorithms that explicitly construct Z matrix-based MTTKRPs. Matrix-based MTTKRPs have two distinct downsides: the explicit formation the Khatri-Rao matrix Z and ma- tricization Y(k). For a mode-k MTTKRP with CP-rank R, the Khatri-Rao matrix Z is size Q n̸=k In × R, which can dwarf the cost R P n̸=k In of simply storing the fac- tor matrices An individually, leading to O(R Q n̸=k In) memory usage. As for the matricization Y(k), tensors are typically stored in memory as a flat array, and in such cases, the mode-k (for 1 < k < d) matricization requires reordering the non-contiguous tensor entries into contiguous chunks. These reorderings have a memory- bound cost of O((d −1)N) and prove significant costs to an otherwise computationally bound kernel. These drawbacks motivate a matrix-free, or element- wise, definition of the mode-k MTTKRP (common in the sparse case [20]), requiring only the explicit storage of Y, λ and {Am}d m=1, given by G(k)(n, j) = λj N X i=1 ik=n Y(i) d Y m=1 m̸=k Am(im, j), (5) for n ∈[Ik] and j ∈[R]. As the CP-rank R grows, the dominant memory cost of this approach becomes the storage of the factor and output matrices, yielding an overall O(R P n In) memory burden. C. Kokkos programming framework Kokkos [25] is a C++ framework providing abstractions to write one-off functions for single-instruction multiple- data (SIMD) parallelism on CPUs and single-instruction multiple-threads (SIMT) parallelism on GPUs. It is the Kokkos convention to discuss levels of parallelism mirror- ing that of the OpenMP 4.0 specification [26], i.e., leagues of teams comprising team-threads which may execute vector-thread instructions in parallel. Leagues are virtual and correspond to the highest level of parallel space (e.g., a Cuda grid or a collection of OpenMP threads), while a team within a league corresponds to hyperthreads on a CPU and y-dimension threads within a block on a GPU and vector parallelism corresponds to vector instructions on CPUs and x-dimension threads (i.e., threads within a warp) on GPUs. We use LeagueRange, TeamRange, and VectorRange to denote league, team, and vector parallel ranges, respectively. We use bx to denote team size and by to denote vector size, and we use px do denote threads within a team and py to denote vector-threads within a thread. 1) SIMD arrays: Compile-time polymorphic SIMD ar- rays were introduced in [20] as an extension of Kokkos that allow for the allocation of small arrays as register arrays on GPUs or thread-private stack arrays on CPUs. We utilize these arrays to allow for a single vector-thread py to calculate many (e.g., 4) updates to the output matrix G(k) per vector loop, increasing memory-bandwidth efficiency. We denote the SIMD vector size as F and SIMD vectors as greek leters with the vector arrow, e.g., ⃗π, ⃗φ. SIMD vectors are in-effect for loops that the compiler is forced to unroll and vectorize. III. Parallel MTTKRP algorithms The state of the art dense MTTKRP was introduced in [13] and takes the viewpoint of Eq. (4). The algorithm, called MTTKRP-GEMM, uses smart partitioning of the 3 Khatri-Rao product matrix ZT to avoid costly reorders of Y, utilizes temporary memory allocations for a (some- times) lower storage cost, and gets its name from its use of the BLAS-3 general matrix-matrix multiplies (GEMMs) for highly-optimized parallelization. We introduce element based algorithms all taking the viewpoint of Eq. (5): MTTKRP-ELEM, MTTKRP- SLICE, and MTTKRP-TILE. Each of our algorithms utilize the same vector parallelization over columns of {Ak} and differ in how they assign tensor elements Y(i) (or groups of Y(i)’s) to Kokkos leagues and teams. The vector parallelism scheme is the same as used in [20] to introduce parallelism over the columns of the factor matrices {Ak} by assigning multiple column indices j in Eq. (5) to a thread vector py. Along with the enforcement of a row-wise layout, this allows for coalesced reads of the factor matrices, allowing the compute device to achieve a higher percentage of memory bandwidth. For simplicity, we assume that the CP-rank R divides the product of the vector size and the SIMD array length (by × F) for the remainder of this discussion. A. MTTKRP-GEMM The key insight of [13] is to avoid the costly matriciza- tions Y(k) by using the reshape operator and breaking up the full Khatri-Rao product ZT into left and right components ZT = [ZT R ∈RR×IR|ZT L ∈RR×IL], where IL = Qk−1 m=1 Im and IR = Qd m=k+1 Im. We briefly summarize the algorithm given in [13]. For more detail, please refer to the paper. For mode-1, re- place Y(k) with reshape(Y, [I1, IR]) in Eq. (4) to avoid permutations. Similarly, for mode d, replace Y(k) with reshape(Y, [IL, Id])T in Eq. (4) to avoid permutations. The steps for modes 1 < k < d are given in Algorithm 1. Note that although MTTKRP-GEMM has a storage Algorithm 1 MTTKRP-GEMM Input: Y, {A(m)}d m=1, R, k Output: G(k) 1: ZT R = diag(λ)(Ad ⊙· · · ⊙Ak+1)T 2: C = reshape(Y, [IL · Ik, IR])ZR // GEMM 3: C = reshape(C, [IL, Ik, R]) // tensorize the matrix 4: ZT L = diag(λ)(Ak−1 ⊙· · · ⊙A1)T 5: G(k)(ℓ, j) = PIL q=1 C(q, ℓ, j)ZL(q, j) // parallelize with einsum, GEMM, etc. complexity across modes-k of O(R(IL + IR)) as the left and right factor matrices can be stored in the same temporary memory space, the storage cost for modes-1, d is O(R(Qd m=1 In/ min{I1, Id})), the same cost as forming ZT explicitly in 4. B. MTTKRP-ELEM The simplest attempt to parallelize Eq. (5) is to assign every thread within a team px to a tensor element Y(i) without enforcing any particular league-wise structure. This algorithm, MTTKRP-ELEM, is described in line 2. Tensor indices i are given by the league offset plus team rank in Algorithm 2, hence the tensor reads in line 4 are packed on CPUs and coalesced on GPUs. Reading {Am}d m=1 and writing to G(k) require a conversion of the linear index i to a multi-index (i1, . . . , id) in line 5 with the function ind2sub(Y, i) which uses costly integer divisions. Lines 6-20 are the vector parallelism over factor matrices {Am}d m=1 columns. Line 8 ensures that each vector-thread py process at least F columns, and the while loop on lines 9-19 increases the number of columns processed by each thread to R/(by×F). Line 18 ensures that the column strides are packed/coalesced for a fixed px, but this is negated by line 14 as the multi-index entry im is not coalesced for neighboring px. These pseudo-random reads of the factor matrices have a significant negative effect on memory bandwidth as R(d−1) factor matrix columns are read per tensor element Y(i). Finally, G(k) is atomically updated for each N tensor element R times on line 17. Algorithm 2 MTTKRP-ELEM Input: Y, {Am}d m=1, R, k, F Output: G(k) 1: LeagueRange l ∈[N/bx] 2: TeamRange px ∈bx 3: i ←l × N/bx + px 4: yi ←Y[i] // packed/coalesced reads 5: (i1, . . . , id) ←ind2sub(Y, i) 6: VecRange py ∈[by] 7: jj ←0 8: ⃗φ ←0 9: while jj < R do // read column chunks 10: j ←jj + F × by + py 11: ⃗φ ←xi × λ[j : j + F] 12: for m = 1, . . . , d do 13: if m ̸= k then 14: ⃗φ ←⃗φ × Am[im, j : j + F] // pseudo-random reads 15: end if 16: end for 17: AtomicAdd(V[ik, j : j + F], ⃗φ) // N × R atomic updates 18: jj ←jj + F × by 19: end while 20: End VecRange 21: End TeamRange 22: End LeagueRange C. MTTKRP-SLICE We would like our element-based parallelization scheme to avoid the atomic updates and pseudo-random read- s/writes of the MTTKRP-ELEM algorithm. To avoid atomic operations, Eq. (5) informs us to processes each mode-k slice (see Fig. 1) of the tensor S(k) n serially. 4 Fig. 1. Slicing of a 3-way tensor Y ∈RI1×I2×I3 along each mode. We thus construct an algorithm MTTKRP-SLICE that assigns league and team parallelism to each slice S(k) n of Y. The MTTKRP-SLICE algorithm is described by Algorithm 3 with Nt = N/Ik (i.e., the number of elements in a given slice S(k) n ) and modifying the atomic add on line 25 to be a conflict-free global update. The algorithm begins by assigning each team to a slice S(k) n in line 4. Line 6 then computes an anchor multi-index a for the slice, in this case given by am = 0 for m ̸= k and ak = n. Lines 8-28 are the vector parallelism over factor matrices {Am}d m=1 columns and differs from the vector parallelism in Algorithm 2 as each vector-thread py now processes Nt tensor elements yi (lines 12-24). Line 13 computes the tensor multi-index (i1, . . . id) as the sum of the anchor multi-index (a1, . . . , ad) and the ind2sub(S(k) n , ii). Note that the resulting multi-index i is length d (i.e., the same length of a multi-index for Y), and the kth entry of the multi-index ik = n. For different slices S(k) n and S(k) m , the multi-indices differ only in their kth element, hence in practice we implement a special ind2sub(S(k) n , ii) function that uses the same precomputed offsets for each slice, avoiding the need to explicitly form the slice and compute costly inner loop integer divisions used in the classic implementation. Line 14 takes the resulting multi- index and calls a function sub2ind(Y, (i1, . . . , id)) built on cheap integer additions and multiplications to get the tensor linear index i. The inner loop over tensor elements in a slice allows the factor matrices columns to be cached and reused (line 20) in the computation of the element Hadamard product ⃗φ = yi Q m̸=k Am(im, j). The element Hadamard product ⃗φ is then added to the slice Hadamard product ⃗π on line 23. After the inner slice loop terminates on line 28, the slice Hadamard products ⃗π are written to G(k)(ik, j : j + F) without atomics. D. MTTKRP-TILE The MTTKRP-SLICE algorithm has the advantage over MTTKRP-ELEM in that it avoids atomic conflicts and has beneficial cache blocking for columns of the fac- tor matrices {Am}d m=1. However, MTTKRP-SLICE has Ns× more serial operations, which can lead to a significant loss of parallelization for MTTKRPs on modes with large slices. We attempt to remedy this loss of parallelization by assigning leagues to slices and teams to tiles within a slice (see Fig. 2). Given a slice S(n) k , a tile partition is defined by a tile volume Nt that divides the slice volume Ns = N/Ik such that S(k) n = {S(k) n ((t−1)NT : tNT )}NS/NT t=1 and whose set elements S(k) n ((t −1)NT : tNT ) are called tiles. We call the resulting algorithm that parallelizes over tiles MTTKRP-TILE and describe it in Algorithm 3. The algorithm is controlled by the tile volume hyperpa- rameter NT , which controls both the length of the serial accumulation of the tile Hadamard product in lines 12-24 and the number of atomic operations in line 25, which is given by In(NS/NT ) × R. Note that setting a tile size of NT = NS and removing the atomic stipulation on line 25 reverts the algorithm to MTTKRP-SLICE, while setting a tile size of NT = 1 reverts the algorithm to MTTKRP- ELEM, albeit with more structure in the league and team assignments. We refer to the discussion in Section III-C for a detailed discussion of the pseudo-code in Algorithm 3. Algorithm 3 MTTKRP-TILE Input: Y, {Am}d m=1, R, k, F Output: G(k) 1: NS ←N/Ik // mode-n slice volume 2: LeagueRange l ∈[N/(NS/NT )] // league & team parallelism over the tiles 3: TeamRange px ∈[bx] 4: n ←intdiv(l × bx + px, NS/NT ) 5: t ←intmod(l × bx + px, NS/NT ) 6: (a1, . . . , am) ←ind2sub(S(k) n [(t −1)NT ], 0) 7: jj ←0 8: VecRange py ∈[by] 9: ⃗π ←0 // init. tile Hadamard product 10: ⃗φ ←0 // init. element Hadamard product 11: while jj < R do 12: for ii ∈[NT ] do 13: (i1, . . . , id) ← (a1, . . . , ad) + ind2sub(S(k) n [(t −1)NT ], ii) 14: i ←sub2ind(Y, (i1, . . . , id)) 15: yi ←Y[i] // re-read tensor element 16: j ←jj + F × by + py 17: ⃗φ ←yi × λ[j : j + F] 18: for m = 1, . . . , d do 19: if m ̸= k then 20: ⃗φ ←⃗φ × Am[im, j : j + F] // columns are reused 21: end if 22: end for 23: ⃗π ←⃗π + ⃗φ // update tile product 24: end for 25: AtomicAdd(G(k)[n, j : j + F],⃗π) // In(NS/NT ) × R atomic updates 26: jj ←jj + F × by 27: end while 28: End VecRange 29: End TeamRange 30: End LeagueRange 5 Fig. 2. A tensor Y ∈RI1×I2×I3 decomposed into mode-2 slices S(2) n . Each slice in then assigned to a Kokkos league of two teams p1, p2, where each team computes a tile Hadamard product and atomically reduces their local contributions to G(k)(n, j). In this instructive example we have one league per slice, but in practice we allow for more than one league per slice. However, the map between teams and tiles is always bijective. IV. Numerical results We study the performance and portability of the dif- ferent MTTKRP algorithms described in Section III. Our numerical experiments are performed on two tensors with randomly generated values based of relevant scientific simulations [1]: the 2D compressible tearing mode tensor [27], [28] A ∈R401×201×12×501, about 3.7 GB in dou- ble precision, and the 3D island coalescence tensor [29] B ∈R129×129×129×12×39, about 7.5 GB. Our experiments use CP-ranks R = 10, 500, 1000, 2000. A. Experimental setup Each algorithmic variant is implemented in the pub- licly available GenTen library6 built atop Kokkos for performance portability. Our numerical experiments are performed on cluster nodes the Hops machine at Sandia National Labs. Each node has two 3.8 GHz Intel Xeon Platinum 8480+ CPUs and 4 Nvidia H100 80GB 700W SXM5 GPUs. For the CPUs, we set bx = by = 1 (i.e., we only use league parallelism), use GenTen heuristics (see [20] Tables 3,4) for selecting F based on R, and compile with the Intel MKL LAPACK and BLAS and architec- ture specific vectorization flags set by Kokkos . We also set the environment variables OMP NUM THREADS = 56, OMP PROC BIND = close, and OMP PLACES = threads. On GPUs, we set bx = 128/by and choose by and F based on R using the aforementioned GenTen heuristics. B. Performance analysis We use a model f = NRd to measure the flop count of the MTTKRP as for each N tensor elements, we do R(d− 1) multiplications and R additions. Memory bandwidth is measured for each element-based algorithmic variant using a 0-cache model m0, an infinite-cache model m∞, and a 0, LM-cache model, where LM is the middle level of cache on a device, i.e., L1 cache on a GPU and L2 cache on a CPU. The infinite-cache model is given by m∞= sf(N + R P n In), were sf is the size in bytes of the floating-point type, N is the size of the tensor, R P n̸=k In is the size of reading all but the kth factor matrix, and RIk is the size 6https://github.com/sandialabs/GenTen of the output matrix. The 0-cache model is given by m0 = sf(N+NR(d−1)+(NS/NT )IkR) as the whole tensor must be read in, every tensor element N must read each factor matrix column R for d −1 factor matrices, and each R columns of the output matrix are updated NS/NT times for each Ik slice. The 0, LM model assumes that the tensor and output matrices cannot be cached as they are not read repeatedly, but the factor matrices are cached, yielding the model m0,LM = sf(N + NS/NT (IkR)) + 1 l sf(NR(d −1)), where l is the ratio between LM bandwidth and device memory bandwidth. We use these models to predict the total time T0 = f/τf + m0τm, T∞= f/τf + m∞/τm, and T0,LM = f/τf + m0,LM/τm, where τf is the peak machine measured in flops and τm is the peak machine bandwidth measured in read/write operations per second. We define arithmetic intensity to be I = f/m∞and we say that a model is compute bound if I > τf/τm. Given a wall-time t, the throughput of the model is measured as gflops = f/t/10243 and the memory bandwidths for each model are given by mops0 = m0/t and mops∞= m∞/t. Peak throughput τf, bandwidth τm, and LM bandwidth ratio l for each device is reported inTable II. For the H100, throughput and bandwidth are taken from NVIDIA’s whitepaper7 and we calculate the peak L1 cache bandwidth as #SM’s x L1 transfer bytes/cycle x clock rate, which is 123 × 128.98/103 ≈33TB/s, where the L1 transfer bytes per clock cycle is taken from NVIDIAs hopper tuning guide8. Hence, for the H100, we set l = 10. Throughput for the 8480+ is taken from Intel’s APP metrics sheet9 while memory bandwidth and L2 cache bandwidth are measured with the STREAM triad benchmark [30], where we use a 2GB array to measure memory bandwidth and a 0.6MB array to measure the L2 cache bandwidth. We analyze the efficacy of our performance models in 7https://www.nvidia.com/en-us/data-center/h100/ 8https://docs.nvidia.com/cuda/hopper-tuning-guide/index.html 9https://www.intel.com/content/www/us/en/support/articles/ 000005755/processors.html 6 TABLE II Peak compute throughput τf , memory bandwidth τm, and cache bandwidth ratio l for each device. Device 1012τf 10244τm l 8480+ 2.28 0.12 8 H100 34 3.35 10 Table III. Our results show that the cache effects play a significant role for each algorithmic variant: the T∞ prediction is much too optimistic for each variant. The MTTKRP-ELEM algorithm fails to surpass even the 0- cache predicted time T0 for either device. As discussed in Section III-B, this can be attributed to the uncoalesced repeated reads of the factor-matrices. The MTTKRP- SLICE algorithm also fails to surpass T0 on the GPU, though it does so on the CPU. We hypothesize that this discrepancy is due to the much larger cache size of the CPU compared to the GPU. For MTTKRP-SLICE on the CPU and MTTKRP-TILE on both devices, the 0- cache model is far to pessimistic while the ∞-cache model is too optimistic, indicating that the algorithms achieve (to some extent) the desired caching effects on the repeated reads of the factor matrices, Our empirical results for these methods align most with the 0, LM-cache model, achieving at least 90% of the predicted time on the CPU and 40% of the predicted time on the GPU. We surmise that the higher accuracy of our models on the CPU is due to the fact that we use empirically measured peak bandwidth, while on the GPU we use the vendor reported ideal world numbers. C. Experiments 1) Selecting the tile size: Our first goal is to select a value for the heuristic tile volume parameter NT in the MTTKRP-TILE algorithm. Table IV presents the nu- merical performance of a sweep over tile-widths d−1√NT ∈ {2, 4, 6, 8, 10, 12, 14} for the tearing-mode and island coa- lescence tensors for a fixed R = 32. On the GPU, we find that for the 4-way tensor A, a tile size of 216 is optimal, while for the 5-way tensor B, a tile size of 256 is optimal. On an H100, our maximum occupancy with our stan- dard block configuration of (4, 32) is 64 warps per SM. NSIGHT-compute reports our achieved occupancy to be 60%, but even if we assume a more conservative occupancy of 50%, we have 32 tiles per SM, with each tile rereading 8 × 256 bytes of factor matrices entries, or 64KB of information per SM, 25% of the total L1 cache, a good target given that the L1 cache handles read, write, and atomic instructions on a GPU. On the 8480+, we find that a tile width of 12, i.e., the size of the minimum dimension, is optimal. Our parallel CPU implementation assigns one tile per core, and given that each core has 2MB of L2 cache, this means that the factor matrices take up at most ∼0.5% of the L1 cache. However, our implementation assumes that the tile size is regular, i.e., that d−1√NT ∈Z. As such, performance decreases when d−1√NT > minn=1,...,d In. To impose tile regularity and cache awareness, we use the simple heuristic NT = min    d−1 s sLM/4 sf(c/2) !d−1 , min n=1,...,d{In}d−1   , (6) where sLM is the size in bytes of the middle level of cache (i.e., L1 cache (per SM) on a GPU, L2 cache (per core) on a CPU), and c is the maximum number of tiles per cache unit (i.e., SM/core). We find that this heuristic is valid for most R. D. Memory impact of MTTKRP variants As discussed in Section III-A, forming the left and right Khatri-Rao matrices ZT L and ZT R can incur a large storage cost. This cost dominates the total memory required for the mode-k MTTKRP-GEMM algorithm, which is given by mGEMM ∞ = N + R(IL + IR + Ik). It is important to note the storage cost of the MTTKRP-GEMM algorithm relies heavily on the initial shape of a given tensor. For example, the maximum memory usage over all modes k for the island coalescence tensor B for a R = 2000 is 391GB, but if the initial tensor shape was permuted to be 129 × 12 × 129 × 39 × 129 (best case), the memory usage would be 123GB, while if the tensor was permuted to be 129 × 129 × 129 × 39 × 12 (worst case), the memory usage would be 1255GB. However, especially in the context in situ decomposition of scientific simulation data [1], [31], reshaping tensors can eliminate multidimensional relation- ships between tensor entries and can be expensive and im- practical. Fig. 3 shows the maximum memory usage over all modes k for the MTTKRP-GEMM and MTTKRP- TILE algorithms for the tearing mode tensor A and the island coalescence tensor B. The y axis of the plot denotes the minimum number of 80GB H100 GPUs needed for storage, however, the distribution protocol in IV-D2 does not evenly assign memory to MPI ranks, hence in practice more GPUs may be required for a given problem size. 1) Single node performance: The linear-in-rank memory scaling of the MTTKRP-GEMM algorithm described in Section IV-D motivates the use of the tearing-mode tensor A to study single-node performance of the different MTTKRP variants. Fig. 4 reports the average MTTKRP performance, measured in gflops, over all modes, for the OpenMP and Cuda execution spaces. We compare the SLICE, TILE, and GEMM variants on the OpenMP space and the ELEM, TILE, and GEMM variants on the Cuda space, where the tile size NT is chosen by Eq. (6). We omit the ELEM variant for OpenMP and the SLICE variant for Cuda given their poor performance alluded to in Table III. For R > 10, MTTKRP-TILE achieves at least 20% of MTTKRP-GEMM’s performance, an efficient result given MTTKRP-TILE’s memory footprint. Additionally, for R > 10, MTTKRP-TILE performs at least 2× faster 7 TABLE III Execution time T in seconds measured against analytical performance model times T0, T∞for the tensors A, B on the OpenMP and Cuda execution spaces. The tile size NT for the MTTKRP-TILE algorithm is selected via the heuristic in Eq. (6). The CP-rank is fixed to R = 32. OpenMP Cuda T T0 T0,LM T∞ T T0 T0,LM T∞ A ELEM 10.394 3.817 1.349 0.056 1.823 0.138 0.047 0.003 SLICE 0.543 2.877 0.409 0.056 8.528 0.104 0.013 0.003 TILE 0.526 2.955 0.487 0.056 0.043 0.110 0.019 0.003 B ELEM 25.167 9.876 3.054 0.130 3.924 0.356 0.105 0.007 SLICE 1.137 7.927 1.105 0.130 23.032 0.286 0.035 0.007 TILE 1.517 8.414 1.592 0.130 0.120 0.304 0.052 0.007 TABLE IV Sweep over tile widths d−1√NT for the MTTKRP-TILE algorithmic variant with R = 32 for each tensor A, B on both execution spaces, with performance in gflops (higher is better) averaged over all modes. These results motivate a heuristic to choose NT given by Eq. (6). MTTKRP-TILE gflops d−1√NT Tearing mode tensor A Island coalescence tensor B OpenMP Cuda OpenMP Cuda 2 32.9 253.3 51.4 585.1 4 89.9 1313.6 99.0 1232.2 6 98.9 1328.3 106.8 1173.4 8 98.1 1103.7 104.9 944.1 10 96.5 947.3 103.3 884.4 12 99.6 1230.0 110.8 948.8 14 97.4 1159.2 101.4 843.9 10 500 1000 2000 CP-ranks (R) 0 100 200 300 400 Memory Usage (GB) 1 H100 2 H100s 3 H100s 4 H100s 5 H100sTearing mode GEMM TILE 10 500 1000 2000 CP-ranks (R) 1 H100 2 H100s 3 H100s 4 H100s 5 H100s Island coalescence GEMM TILE Fig. 3. Worst case memory usage for the GEMM and TILE MTTKRP variants for the tearing mode tensor A and the island coalescence tensor B for different CP-ranks. The plot is annotated with lines showing the minimum number of NVIDIA H100 GPUs required incur the storage costs. than the SLICE variant on the OpenMP space and at least 3× faster than ELEM on the Cuda space, with a peak speedup of 11× for R = 500. These speedups reinforce the necessity for data-reuse and reduced atomic contention for element based parallelization approaches. 10 500 1000 2000 CP-ranks (R) 0 2000 4000 6000 gflops 29 87 88 101 31 192 204 213 220 630 807 923 OpenMP SLICE TILE GEMM 10 500 1000 2000 CP-ranks (R) 12 122 238 445 522 1385 1386 1401 665 6888 6830 7035 Cuda ELEM TILE GEMM Fig. 4. Tearing mode tensor A average (over all modes) MTTKRP performance (higher is better) for different algorithmic variants on different execution spaces. 2) Comparison against distributed MTTKRP-GEMM: The disparities of memory costs between the MTTKRP- TILE and MTTKRP-GEMM algorithms for the island coalescence tensor B (see Fig. 3) motivate us to compare running the MTTKRP-TILE algorithm on a single GPU against running MTTKRP-GEMM algorithm on (the required) multiple GPUs. Distributed MTTKRPs use the commutation pattern introduced in [32] whose communi- cation overhead consists of an initial tensor redistribution and all-reduce communication to combine contributions across processors. Fig. 5 compares the execution time of the CP-ALS algorithmon B with tolerance 10−4 over ranks R = 10, 500, 1000, 2000 on a single device when using the MTTKRP-TILE algorithm and on multiple devices when using the MTTKRP-GEMM algorithm. While tensor and factor matrices fit snugly into memory for the TILE variant, MTTKRP-GEMM requires at least 6 H100s for rank 2000, 3 H100s for rank 1000, and 2 H100s for rank 500 (findings in line with our lower bound memory model). Note that the hops machine requires two nodes to distribute across 6 GPUs, with the multi-node communication bandwidth drastically increasing tensor redistribution and MTTKRP communication time. When R = 2000, MTTKRP-TILE based CP-ALS on a sin- gle GPU is able to achieve 67% of the performance of MTTKRP-GEMM based CP-ALS run on the minumum 8 6 H100s. For R = 1000, 4 H100s are required to beat the MTTKRP-TILE based CP-ALS for and at least 5 are required to beat the MTTKRP-TILE based CP-ALs for R = 500. For all tested ranks, tensor redistribution is the most expensive overhead incurred in the distributed CP- ALS, dwarfing the all-reduce communication time. 10 500 1000 2000 CP-ranks (R) 0 25 50 75 100 125 150 175 Time (s) TILE GEMM-nGPUs=6 GEMM-nGPUs=4 GEMM-nGPUs=3 GEMM-nGPUs=2 other CP redist comm Fig. 5. Time (lower is better) for CP-ALS on the island coalecence tensor B with MTTKRP-TILE algorithm on a single H100 compared with MTTKRP-GEMM on one H100 per process. MTTKRP time is denoted with hatches, other CP-ALS operations (i.e., inner-product, Cholesky solve, etc.) by a grey fill, tensor redistribution time with a purple fill, and MTTKRP communication time with an orange fill. V. Conclusion We introduce fast and memory efficient algorithms for matrix free dense MTTKRPs together with detailed per- formance analysis and evaluations on CPUs and GPUs. We extend the open-source GenTen package with our methods and demonstrate that state-of-the-art matrix based MTTKRPs need many GPUs to compute the same tensor decomposition that, when replaced with our MT- TKRP, can be computed on a single device. Our methods have limitations. Fig. 4 shows that we have yet to achieve the efficiency of the matrix based MTTKRP-GEMM algo- rithm when the Khatri-Rao product matrix fits in device memory. Table III motivates us to further improve our caching strategies to achieve a higher percentage of the predicted infinite-cache time T∞. Such strategies include GEMM-like tilings of the output matrix and a Hilbert curve or ALTO-like [19] ordering of the input tensor to increase cache locality. References [1] D. M. Dunlavy, E. T. Phipps, H. Kolla, J. N. Shadid, and E. Phillips, “Goal-oriented low-rank tensor decompositions for numerical simulation data,” 2025. [2] D. Van Essen, K. Ugurbil, E. Auerbach, D. Barch, T. Behrens, R. Bucholz, A. Chang, L. Chen, M. Corbetta, S. Curtiss, S. Della Penna, D. Feinberg, M. Glasser, N. Harel, A. Heath, L. Larson- Prior, D. Marcus, G. Michalareas, S. Moeller, R. Oostenveld, S. Petersen, F. Prior, B. Schlaggar, S. Smith, A. Snyder, J. Xu, and E. Yacoub, “The human connectome project: A data acqui- sition perspective,” NeuroImage, vol. 62, no. 4, pp. 2222–2231, 2012. Connectivity. [3] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, “Tensor decomposition for signal processing and machine learning,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551–3582, 2017. [4] S. M. Nascimento, K. Amano, and D. H. Foster, “Spatial dis- tributions of local illumination color in natural scenes,” Vision Research, vol. 120, pp. 39–44, 2016. Vision and the Statistics of the Natural Environment. [5] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009. [6] B. W. Bader and T. G. Kolda, “Efficient matlab computations with sparse and factored tensors,” SIAM Journal on Scientific Computing, vol. 30, no. 1, pp. 205–231, 2008. [7] C. A. Andersson and R. Bro, “The n-way toolbox for mat- lab,” Chemometrics and Intelligent Laboratory Systems, vol. 52, no. 1, pp. 1–4, 2000. [8] J. Kossaifi, Y. Panagakis, A. Anandkumar, and M. Pantic, “Tensorly: Tensor learning in python,” Journal of Machine Learning Research, vol. 20, no. 26, pp. 1–6, 2019. [9] J. Li, J. Bien, and M. T. Wells, “rTensor: An R package for multidimensional array (tensor) unfolding, multiplication, and decomposition,” Journal of Statistical Software, vol. 87, no. 10, pp. 1–31, 2018. [10] B. W. Bader and T. G. Kolda, “Algorithm 862: Matlab tensor classes for fast algorithm prototyping,” ACM Trans. Math. Softw., vol. 32, p. 635–653, Dec. 2006. [11] D. M. Dunlavy, N. T. Johnson, et al., “pyttb: Python Tensor Toolbox, v1.8.3,” Aug. 2025. [12] E. Phipps, T. Kolda, D. Dunlavy, G. Ballard, and T. Plantenga, “Genten: Software for generalized tensor decompositions v. 1.0.0,” 06 2017. [13] A.-H. Phan, P. Tichavsk´y, and A. Cichocki, “Fast alternating ls algorithms for high order candecomp/parafac tensor factoriza- tions,” IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4834–4846, 2013. [14] N. Vannieuwenhoven, K. Meerbergen, and R. Vandebril, “Com- puting the gradient in optimization algorithms for the cp decom- position in constant memory through tensor blocking,” SIAM Journal on Scientific Computing, vol. 37, no. 3, pp. C415–C438, 2015. [15] G. Guennebaud, B. Jacob, et al., “Eigen v3.” http://eigen.tuxfamily.org, 2010. [16] J. H. Choi and S. V. N. Vishwanathan, “Dfacto: distributed factorization of tensors,” in Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’14, (Cambridge, MA, USA), p. 1296–1304, MIT Press, 2014. [17] S. Smith, N. Ravindran, N. D. Sidiropoulos, and G. Karypis, “Splatt: Efficient and parallel sparse tensor-matrix multipli- cation,” in 2015 IEEE International Parallel and Distributed Processing Symposium, pp. 61–70, 2015. [18] J. Li, J. Sun, and R. Vuduc, “Hicoo: Hierarchical storage of sparse tensors,” in SC18: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 238–252, 2018. [19] J. Laukemann, A. E. Helal, S. I. G. Anderson, F. Checconi, Y. Soh, J. J. Tithi, T. Ranadive, B. J. Gravelle, F. Petrini, and J. Choi, “Accelerating sparse tensor decomposition using adap- tive linearized representation,” IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 5, pp. 1025–1041, 2025. [20] E. T. Phipps and T. G. Kolda, “Software for sparse tensor decomposition on emerging computing architectures,” SIAM Journal on Scientific Computing, vol. 41, no. 3, pp. C269–C290, 2019. [21] J. D. Carroll and J. J. Chang, “Analysis of individual dif- ferences in multidimensional scaling via an N-way generaliza- tion of ”Eckart-Young” decomposition,” Psychometrika, vol. 35, pp. 283–319, 1970. [22] R. A. Harshman, “Foundations of the parafac procedure: Models and conditions for an ”explanatory” multi-model factor analy- sis,” in Working Papers in Phonetics, 1970. [23] E. Acar, D. M. Dunlavy, and T. G. Kolda, “A scalable opti- mization approach for fitting canonical tensor decompositions,” J. Chemom., vol. 25, pp. 67–86, Feb. 2011. 9 [24] J. Douglas Carroll, S. Pruzansky, and J. B. Kruskal, “Candelinc: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters,” Psychometrika, vol. 45, pp. 3–24, Mar 1980. [25] C. R. Trott, D. Lebrun-Grandi´e, D. Arndt, J. Ciesko, V. Dang, N. Ellingwood, R. Gayatri, E. Harvey, D. S. Hollman, D. Ibanez, N. Liber, J. Madsen, J. Miles, D. Poliakoff, A. Powell, S. Ra- jamanickam, M. Simberg, D. Sunderland, B. Turcksin, and J. Wilke, “Kokkos 3: Programming model extensions for the exascale era,” IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 4, pp. 805–817, 2022. [26] OpenMP Architecture Review Board, “OpenMP application program interface version 4.0,” 2013. [27] J. Bonilla, J. Shadid, X.-Z. Tang, M. Crockatt, P. Ohm, E. Phillips, R. Pawlowski, S. Conde, and O. Beznosov, “On a fully-implicit vms-stabilized fe formulation for low mach number compressible resistive mhd with application to mcf,” Com- puter Methods in Applied Mechanics and Engineering, vol. 417, p. 116359, 2023. A Special Issue in Honor of the Lifetime Achievements of T. J. R. Hughes. [28] L. Chac´on, “An optimal, parallel, fully implicit newton–krylov solver for three-dimensional viscoresistive magnetohydrody- namicsa),” Physics of Plasmas, vol. 15, p. 056103, 02 2008. [29] J. Shadid, R. Pawlowski, E. Cyr, R. Tuminaro, L. Chac´on, and P. Weber, “Scalable implicit incompressible resistive mhd with stabilized fe and fully-coupled newton–krylov-amg,” Computer Methods in Applied Mechanics and Engineering, vol. 304, pp. 1– 25, 2016. [30] J. D. McCalpin, “Memory bandwidth and machine balance in current high performance computers,” IEEE Computer Soci- ety Technical Committee on Computer Architecture (TCCA) Newsletter, pp. 19–25, Dec. 1995. [31] G. Ballard, K. Hayashi, and K. Ramakrishnan, “Parallel non- negative cp decomposition of dense tensors,” in 2018 IEEE 25th International Conference on High Performance Computing (HiPC), pp. 22–31, 2018. [32] C. Lewis and E. Phipps, “Low-communication asynchronous distributed generalized canonical polyadic tensor decomposi- tion,” in 2021 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1–5, 2021. 10
A Performance Portable Matrix Free Dense MTTKRP in GenTen* Gabriel Kosmacher1, Eric T. Phipps2, Sivasankaran Rajamanickam2 Abstract-We extend the GenTen tensor decomposition package by introducing an accelerated dense matricized tensor times Khatri-Rao product (MTTKRP), the workhorse kernel for canonical polyadic (CP) tensor decompositions, that is portable and performant on modern CPU and GPU architectures. In contrast to the state-of-the-art matrix multiply based MTTKRP kernels used by Tensor Toolbox, TensorLy, etc., that explicitly form Khatri-Rao matrices, we develop a matrixfree element-wise parallelization approach whose memory cost grows with the rank R like the sum of the tensor shape O(R(n+m+k)), compared to matrix-based methods whose memory cost grows like the product of the tensor shape O(R(mnk)). For the largest problem we study, a rank 2000 MTTKRP, the smaller growth rate yields a matrix-free memory cost of just 2% of the matrix-based methods, a 50x improvement. In practice, the reduced memory impact means our matrix-free MTTKRP can compute a rank 2000 tensor decomposition on a single NVIDIA H100 instead of six H100s using a matrix-based MTTKRP. We also compare our optimized matrix-free MTTKRP to baseline matrixfree implementations on different devices, showing a 3x single-device speedup on an Intel 8480+ CPU and an 11x speedup on a H100 GPU. In addition to numerical results, we provide fine grained performance models for an ideal multi-level cache machine, compare analytical performance predictions to empirical results, and provide a motivated heuristic selection for selecting an algorithmic hyperparameter. I. Introduction Dense tensors are relevant in the fields of numerical simulations [1], medical imaging [2], signal processing [3], and color perception [4], among others. Low-rank approximations of dense tensors are a powerful tool used to compress such datasets and reveal relationships between modes. A popular decomposition for computing such approximations is the CANDECOMP/PARAFAC (CP), also called canonical polyadic, decomposition [5], taking a user *Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. 's National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. . SAND2025-13211O. 1The 2Sandia National Laboratories Corresponding author: Gabriel Kosmacher Email: supplied rank R and outputting a weighted sum of R rank1 tensors. The bottleneck kernel for CP algorithms [6] is the matrix action of a mode-k unfolding of a tensor with the Khatri-Rao product of factor matrices and is called the matricized tensor times tensor Khatri-Rao product (MTTKRP). Most open-source tensor decomposition packages (see Section I-A) compute the MTTKRP by explicitly forming the Khatri-Rao product matrix, allowing for invocations to BLAS-3 algorithms but compromising on memory usage. In contrast, we seek to compute the MTTKRP without explicitly forming the Khatri-Rao matrix and instead take advantage of the problem structure to develop matrix-free, also called element-based, algorithms. Our work extends the open-source GenTen1 tensor decomposition package with performance portable MTTKRPs and, as far as we are aware, introduces the first GPU algorithm for a matrixfree MTTKRP with dense tensors. Contributions: Our work offers the following methodological contributions and experimental results: • The design and open-source implementation of performance portable, matrix-free algorithms for dense MTTKRPs (see Section III). For a tensor Y of shape I1, . . . , Id and CP-rank R, our matrix-free algorithms for a mode-k MTTKRP have O(R P n In) memory usage while standard matrix-based approaches have a O(R Q n̸=k In) memory cost. • Multiple analytic memory and computational models for our algorithmic variants based on an ideal multilevel cache machine with different caching assumptions (see Section IV-B). • A heuristic model for an algorithmic hyper-parameter that aligns with measured times on a CPU and GPU (see Section IV-C1). • CPU and GPU algorithmic evaluations (see Sections IV-D1 and IV-D2). In particular, our matrix-free MTTKRP can perform a tensor decomposition on 1 GPU that requires 6 GPUs for a matrix-based MTTKRP. We also compare our optimized, matrix-free MTTKRP to baseline matrix-free implementations, showing a 3x speedup on an Intel 8480+ CPU and an 11x speedup on an NVIDIA H100 GPU. 1https://github.com/sandialabs/GenTen 1 16 Oct 2025 A. Related work There are a variety of open-source packages offering dense CP decompositions. N-Way Toolbox [7], TensorLy [8], and rTensor [9] compute MTTKRPs by explicitly unfolding the tensor and forming the Khatri-Rao product. The MATLAB Tensor Toolbox [6], [10], PyTTB [11] and GenTen [12] use and algorithm introduced by Phan et. al. [13], described in Section III-A, that avoids expensive tensor unfolding and reduces memory impact by forming partial Khatri-Rao matrices. Vannieuwenhoven et. al. [14] build upon [13] by constructing an algorithm, implemented using the Eigen3 C++ library [15], that sequentially stores blocks of the Khatri-Rao matrix in constant memory, hence achieving the desired O(R P n In) storage complexity. However, the blocking algorithm is only applicable to the special case where all k-mode MTTKRPs are computed at once, and can not generalize to compute individual mode-k MTTKRPs needed by, e.g., the CPALS algorithm. Table I gives an overview of features for different dense mode-K MTTKRP algorithms and codebases. There has been considerable more work for optimizing MTTKRPs in the sparse tensor case. DeFacTo [16] stores tensors as collections of sparse matrices and relies on optimized sparse matrix-vector multiply routines to compute the MTTKRP. SPLATT [17] offers a variety of compressed tensor formats and a cache-blocking scheme to speedup sparse MTTKRPs by parallelizing over rows of the output matrix. HiCOO [18] proposes a new storage format that partitions the sparse tensor into sparse blocks to increase memory bandwidth and develops an algorithm that parallelizes over groups of sparse blocks. Both SPLATT and HiCOO were developed primarily for low-thread-count CPUs and do not consider portability to high-thread GPU architectures. Laukemann et. al. [19] introduce another sparse tensor format, ALTO, that linearizes the tensors multi-index set such that tensor entries that are close in space are close in memory. Tensor entries are then decomposed into line segments that encode subspaces, and a parallel algorithm is introduced that assigns threads to chunks of line segments. Though developed for CPUs, increasing the number of chunks per line segment should allow for straightforward portability to GPUs. GenTen [20] offers an performance portable approach that parallelizes over tensor nonzeros and permutes the nonzeros to avoid atomic contention. This algorithm is similar to ours (in the dense case) as it avoids forming the Khatri-Rao product explicitly. However, permuting the tensor nonzeros requires an additional storage cost of O(dN), where d is the dimension of the tensor and N is the number of nonzeros. As we show in IV-B, explicitly permuting tensor entries in such a fashion necessary for an efficient dense MTTKRP. II. Background A. Notation A tensor X ∈RI1×···×Id is d-way array of size N = I1 × · · · × Id typically stored in a flattened row-major or column-major format. It is standard practice [5] to denote tensors as bold uppercase letters in Euler calligraphic font (e.g., X), matrices by bold uppercase letters (e.g., A), vectors by bold lowercase letters (e.g., a), and scalars by lowercase letters (e.g., a). We adopt Matlab notation for array indexing2. The Khatri-Rao product ⊙of two matrices A ∈Rm×p and B ∈Rn×p is the column-wise Kronecker product of A and B defined by A ⊙B = [a1 ⊗b1| . . . |ap ⊗bp] ∈Rmn×p, (1) where ⊗is the Kronecker product and a(k) j ≡Ak(:, j) is a column vector. Tensors are indexed via multi-indexed arrays i = (i1, . . . , id) where in ∈[In]. The set notation [a] is shortand for {z : z ∈Z, 1 ≤z ≤a}. We call i ∈[N] the linear index of a tensor and (i1, . . . , id) the multi-index, we assume a bijective mapping between the two index sets, and we refer to a tensor element as xi ≡X(i) ≡X(i1, . . . , id). The forward map i 7→(i1, . . . , id) is given by the operator ind2sub(X, i) and is defined like the Matlab function of the same name3, while the backward map (i1, . . . id) 7→i is given by the operator sub2ind(X, (i1, . . . , id)), again defined like the corresponding Matlab function4. The nth mode-k slice of a tensor S(k) n is a d -1 subtensor obtained by fixing the kth value of the multi-index to n and allowing the other values to range, i.e., S(1) n = X(n, :, . . . , :). The mode-k matricization of a tensor X rearranges the tensor elements into a matrix X(k) ∈RIk×N/Ik such that element i maps to (ik, i′ k) by i′ k = 1 + k-1 X l=1 (il-1)Il+ d X l=k+1 (il-1)Il/Ik. (2) The reshape operator reshape(X, [s1, . . . , sm]) does not permute the underlying tensor elements (unlike tensor matricization) and is defined like the Matlab function of the same name5. The rank-R canonical polyadic (CP) decomposition [21], [22] of a tensor X is a rank-R tensor M of the form X ≈M = R X j=1 λja(1) j ◦· · · ◦a(d) j , (3) 2https://www.mathworks.com/help/matlab/math/arrayindexing.html 3https://www.mathworks.com/help/matlab/ref/ind2sub.html 4https://www.mathworks.com/help/matlab/ref/sub2ind.html 5https://www.mathworks.com/help/matlab/ref/double.reshape. html 2 TABLE I Methods and codebases for dense mode-k MTTKRPs. Given a tensor Y of shape I1, . . . , Id with factor matrices A1, . . . , Ad, let the Khatri-Rao matrix be Z = Ad ⊙· · · ⊙Ak+1 ⊙Ak-1 ⊙· · · ⊙A1 (see Section II-B for details). We list memory usage and open-source codes for each method, and we list the parallelization method on CPUs and GPUs for each code base. All methods have the same computational cost of O(Rd(Q n In)). methods memory code CPU GPU full Z [6] O R Q n̸=k In N-way Toolbox [7] BLAS-3 TensorLy [8] BLAS-3 BLAS-3 rTensor [9] BLAS-3 partial Z [13] O R k-1 Q n=1 In + dQ k+1 In TTB [6] BLAS-3 PyTTB [11] BLAS-3 GenTen [12] BLAS-3 BLAS-3 no Z (our method) O R dP n=1 In GenTen SIMD SIMT where λ ∈RR is a weight vector and ◦is a d-way outer product. We call Ak = [a(1) 1 , . . . , a(k) d ] ∈RIk×R the modek factor matrix of a tensor X. The form M = JA1, . . . , AdK is referred to as a Kruskal tensor. B. The MTTKRP kernel Algorithms to compute CP decompositions generally fall into two categories: all-at-once [23] or alternating [22], [24]. In both cases, the workhorse kernel is the mode-k matricized tensor times Khatri-Rao product (MTTKRP) defined as G(k) = Y(k) diag(λ)ZT (4) where Z = Ad ⊙· · · ⊙Ak+1 ⊙Ak-1 ⊙· · · ⊙A1 for some a Kruskal tensor M = JA1, . . . , AdK, d-way tensor Y, and weight vector λ ∈RR +. We call algorithms that explicitly construct Z matrix-based MTTKRPs. Matrix-based MTTKRPs have two distinct downsides: the explicit formation the Khatri-Rao matrix Z and matricization Y(k). For a mode-k MTTKRP with CP-rank R, the Khatri-Rao matrix Z is size Q n̸=k In × R, which can dwarf the cost R P n̸=k In of simply storing the factor matrices An individually, leading to O(R Q n̸=k In) memory usage. As for the matricization Y(k), tensors are typically stored in memory as a flat array, and in such cases, the mode-k (for 1 τf/τm. Given a wall-time t, the throughput of the model is measured as gflops = f/t/10243 and the memory bandwidths for each model are given by mops0 = m0/t and mops∞= m∞/t. Peak throughput τf, bandwidth τm, and LM bandwidth ratio l for each device is reported inTable II. For the H100, throughput and bandwidth are taken from NVIDIA's whitepaper7 and we calculate the peak L1 cache bandwidth as #SM's x L1 transfer bytes/cycle x clock rate, which is 123 × 128.98/103 ≈33TB/s, where the L1 transfer bytes per clock cycle is taken from NVIDIAs hopper tuning guide8. Hence, for the H100, we set l = 10. Throughput for the 8480+ is taken from Intel's APP metrics sheet9 while memory bandwidth and L2 cache bandwidth are measured with the STREAM triad benchmark [30], where we use a 2GB array to measure memory bandwidth and a 0.6MB array to measure the L2 cache bandwidth. We analyze the efficacy of our performance models in 7https://www.nvidia.com/en-us/data-center/h100/ 8https://docs.nvidia.com/cuda/hopper-tuning-guide/index.html 9https://www.intel.com/content/www/us/en/support/articles/ 000005755/processors.html 6 TABLE II Peak compute throughput τf , memory bandwidth τm, and cache bandwidth ratio l for each device. Device 1012τf 10244τm l 8480+ 2.28 0.12 8 H100 34 3.35 10 Table III. Our results show that the cache effects play a significant role for each algorithmic variant: the T∞ prediction is much too optimistic for each variant. The MTTKRP-ELEM algorithm fails to surpass even the 0cache predicted time T0 for either device. As discussed in Section III-B, this can be attributed to the uncoalesced repeated reads of the factor-matrices. The MTTKRPSLICE algorithm also fails to surpass T0 on the GPU, though it does so on the CPU. We hypothesize that this discrepancy is due to the much larger cache size of the CPU compared to the GPU. For MTTKRP-SLICE on the CPU and MTTKRP-TILE on both devices, the 0cache model is far to pessimistic while the ∞-cache model is too optimistic, indicating that the algorithms achieve (to some extent) the desired caching effects on the repeated reads of the factor matrices, Our empirical results for these methods align most with the 0, LM-cache model, achieving at least 90% of the predicted time on the CPU and 40% of the predicted time on the GPU. We surmise that the higher accuracy of our models on the CPU is due to the fact that we use empirically measured peak bandwidth, while on the GPU we use the vendor reported ideal world numbers. C. Experiments 1) Selecting the tile size: Our first goal is to select a value for the heuristic tile volume parameter NT in the MTTKRP-TILE algorithm. Table IV presents the numerical performance of a sweep over tile-widths d-1√NT ∈ {2, 4, 6, 8, 10, 12, 14} for the tearing-mode and island coalescence tensors for a fixed R = 32. On the GPU, we find that for the 4-way tensor A, a tile size of 216 is optimal, while for the 5-way tensor B, a tile size of 256 is optimal. On an H100, our maximum occupancy with our standard block configuration of (4, 32) is 64 warps per SM. NSIGHT-compute reports our achieved occupancy to be 60%, but even if we assume a more conservative occupancy of 50%, we have 32 tiles per SM, with each tile rereading 8 × 256 bytes of factor matrices entries, or 64KB of information per SM, 25% of the total L1 cache, a good target given that the L1 cache handles read, write, and atomic instructions on a GPU. On the 8480+, we find that a tile width of 12, i.e., the size of the minimum dimension, is optimal. Our parallel CPU implementation assigns one tile per core, and given that each core has 2MB of L2 cache, this means that the factor matrices take up at most ∼0.5% of the L1 cache. However, our implementation assumes that the tile size is regular, i.e., that d-1√NT ∈Z. As such, performance decreases when d-1√NT > minn=1,...,d In. To impose tile regularity and cache awareness, we use the simple heuristic NT = min    d-1 s sLM/4 sf(c/2) !d-1 , min n=1,...,d{In}d-1   , (6) where sLM is the size in bytes of the middle level of cache (i.e., L1 cache (per SM) on a GPU, L2 cache (per core) on a CPU), and c is the maximum number of tiles per cache unit (i.e., SM/core). We find that this heuristic is valid for most R. D. Memory impact of MTTKRP variants As discussed in Section III-A, forming the left and right Khatri-Rao matrices ZT L and ZT R can incur a large storage cost. This cost dominates the total memory required for the mode-k MTTKRP-GEMM algorithm, which is given by mGEMM ∞ = N + R(IL + IR + Ik). It is important to note the storage cost of the MTTKRP-GEMM algorithm relies heavily on the initial shape of a given tensor. For example, the maximum memory usage over all modes k for the island coalescence tensor B for a R = 2000 is 391GB, but if the initial tensor shape was permuted to be 129 × 12 × 129 × 39 × 129 (best case), the memory usage would be 123GB, while if the tensor was permuted to be 129 × 129 × 129 × 39 × 12 (worst case), the memory usage would be 1255GB. However, especially in the context in situ decomposition of scientific simulation data [1], [31], reshaping tensors can eliminate multidimensional relationships between tensor entries and can be expensive and impractical. Fig. 3 shows the maximum memory usage over all modes k for the MTTKRP-GEMM and MTTKRPTILE algorithms for the tearing mode tensor A and the island coalescence tensor B. The y axis of the plot denotes the minimum number of 80GB H100 GPUs needed for storage, however, the distribution protocol in IV-D2 does not evenly assign memory to MPI ranks, hence in practice more GPUs may be required for a given problem size. 1) Single node performance: The linear-in-rank memory scaling of the MTTKRP-GEMM algorithm described in Section IV-D motivates the use of the tearing-mode tensor A to study single-node performance of the different MTTKRP variants. Fig. 4 reports the average MTTKRP performance, measured in gflops, over all modes, for the OpenMP and Cuda execution spaces. We compare the SLICE, TILE, and GEMM variants on the OpenMP space and the ELEM, TILE, and GEMM variants on the Cuda space, where the tile size NT is chosen by Eq. (6). We omit the ELEM variant for OpenMP and the SLICE variant for Cuda given their poor performance alluded to in Table III. For R > 10, MTTKRP-TILE achieves at least 20% of MTTKRP-GEMM's performance, an efficient result given MTTKRP-TILE's memory footprint. Additionally, for R > 10, MTTKRP-TILE performs at least 2× faster 7 TABLE III Execution time T in seconds measured against analytical performance model times T0, T∞for the tensors A, B on the OpenMP and Cuda execution spaces. The tile size NT for the MTTKRP-TILE algorithm is selected via the heuristic in Eq. (6). The CP-rank is fixed to R = 32. OpenMP Cuda T T0 T0,LM T∞ T T0 T0,LM T∞ A ELEM 10.394 3.817 1.349 0.056 1.823 0.138 0.047 0.003 SLICE 0.543 2.877 0.409 0.056 8.528 0.104 0.013 0.003 TILE 0.526 2.955 0.487 0.056 0.043 0.110 0.019 0.003 B ELEM 25.167 9.876 3.054 0.130 3.924 0.356 0.105 0.007 SLICE 1.137 7.927 1.105 0.130 23.032 0.286 0.035 0.007 TILE 1.517 8.414 1.592 0.130 0.120 0.304 0.052 0.007 TABLE IV Sweep over tile widths d-1√NT for the MTTKRP-TILE algorithmic variant with R = 32 for each tensor A, B on both execution spaces, with performance in gflops (higher is better) averaged over all modes. These results motivate a heuristic to choose NT given by Eq. (6). MTTKRP-TILE gflops d-1√NT Tearing mode tensor A Island coalescence tensor B OpenMP Cuda OpenMP Cuda 2 32.9 253.3 51.4 585.1 4 89.9 1313.6 99.0 1232.2 6 98.9 1328.3 106.8 1173.4 8 98.1 1103.7 104.9 944.1 10 96.5 947.3 103.3 884.4 12 99.6 1230.0 110.8 948.8 14 97.4 1159.2 101.4 843.9 10 500 1000 2000 CP-ranks (R) 0 100 200 300 400 Memory Usage (GB) 1 H100 2 H100s 3 H100s 4 H100s 5 H100sTearing mode GEMM TILE 10 500 1000 2000 CP-ranks (R) 1 H100 2 H100s 3 H100s 4 H100s 5 H100s Island coalescence GEMM TILE Fig. 3. Worst case memory usage for the GEMM and TILE MTTKRP variants for the tearing mode tensor A and the island coalescence tensor B for different CP-ranks. The plot is annotated with lines showing the minimum number of NVIDIA H100 GPUs required incur the storage costs. than the SLICE variant on the OpenMP space and at least 3× faster than ELEM on the Cuda space, with a peak speedup of 11× for R = 500. These speedups reinforce the necessity for data-reuse and reduced atomic contention for element based parallelization approaches. 10 500 1000 2000 CP-ranks (R) 0 2000 4000 6000 gflops 29 87 88 101 31 192 204 213 220 630 807 923 OpenMP SLICE TILE GEMM 10 500 1000 2000 CP-ranks (R) 12 122 238 445 522 1385 1386 1401 665 6888 6830 7035 Cuda ELEM TILE GEMM Fig. 4. Tearing mode tensor A average (over all modes) MTTKRP performance (higher is better) for different algorithmic variants on different execution spaces. 2) Comparison against distributed MTTKRP-GEMM: The disparities of memory costs between the MTTKRPTILE and MTTKRP-GEMM algorithms for the island coalescence tensor B (see Fig. 3) motivate us to compare running the MTTKRP-TILE algorithm on a single GPU against running MTTKRP-GEMM algorithm on (the required) multiple GPUs. Distributed MTTKRPs use the commutation pattern introduced in [32] whose communication overhead consists of an initial tensor redistribution and all-reduce communication to combine contributions across processors. Fig. 5 compares the execution time of the CP-ALS algorithmon B with tolerance 10-4 over ranks R = 10, 500, 1000, 2000 on a single device when using the MTTKRP-TILE algorithm and on multiple devices when using the MTTKRP-GEMM algorithm. While tensor and factor matrices fit snugly into memory for the TILE variant, MTTKRP-GEMM requires at least 6 H100s for rank 2000, 3 H100s for rank 1000, and 2 H100s for rank 500 (findings in line with our lower bound memory model). Note that the hops machine requires two nodes to distribute across 6 GPUs, with the multi-node communication bandwidth drastically increasing tensor redistribution and MTTKRP communication time. When R = 2000, MTTKRP-TILE based CP-ALS on a single GPU is able to achieve 67% of the performance of MTTKRP-GEMM based CP-ALS run on the minumum 8 6 H100s. For R = 1000, 4 H100s are required to beat the MTTKRP-TILE based CP-ALS for and at least 5 are required to beat the MTTKRP-TILE based CP-ALs for R = 500. For all tested ranks, tensor redistribution is the most expensive overhead incurred in the distributed CPALS, dwarfing the all-reduce communication time. 10 500 1000 2000 CP-ranks (R) 0 25 50 75 100 125 150 175 Time (s) TILE GEMM-nGPUs=6 GEMM-nGPUs=4 GEMM-nGPUs=3 GEMM-nGPUs=2 other CP redist comm Fig. 5. Time (lower is better) for CP-ALS on the island coalecence tensor B with MTTKRP-TILE algorithm on a single H100 compared with MTTKRP-GEMM on one H100 per process. MTTKRP time is denoted with hatches, other CP-ALS operations (i.e., inner-product, Cholesky solve, etc.) by a grey fill, tensor redistribution time with a purple fill, and MTTKRP communication time with an orange fill. V. Conclusion We introduce fast and memory efficient algorithms for matrix free dense MTTKRPs together with detailed performance analysis and evaluations on CPUs and GPUs. We extend the open-source GenTen package with our methods and demonstrate that state-of-the-art matrix based MTTKRPs need many GPUs to compute the same tensor decomposition that, when replaced with our MTTKRP, can be computed on a single device. Our methods have limitations. Fig. 4 shows that we have yet to achieve the efficiency of the matrix based MTTKRP-GEMM algorithm when the Khatri-Rao product matrix fits in device memory. Table III motivates us to further improve our caching strategies to achieve a higher percentage of the predicted infinite-cache time T∞. Such strategies include GEMM-like tilings of the output matrix and a Hilbert curve or ALTO-like [19] ordering of the input tensor to increase cache locality. References [1] D. M. Dunlavy, E. T. Phipps, H. Kolla, J. N. Shadid, and E. Phillips, "Goal-oriented low-rank tensor decompositions for numerical simulation data," 2025. [2] D. Van Essen, K. Ugurbil, E. Auerbach, D. Barch, T. Behrens, R. Bucholz, A. Chang, L. Chen, M. Corbetta, S. Curtiss, S. Della Penna, D. Feinberg, M. Glasser, N. Harel, A. Heath, L. LarsonPrior, D. Marcus, G. Michalareas, S. Moeller, R. Oostenveld, S. Petersen, F. Prior, B. Schlaggar, S. Smith, A. Snyder, J. Xu, and E. Yacoub, "The human connectome project: A data acquisition perspective," NeuroImage, vol. 62, no. 4, pp. 2222-2231, 2012. Connectivity. [3] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis, and C. Faloutsos, "Tensor decomposition for signal processing and machine learning," IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551-3582, 2017. [4] S. M. Nascimento, K. Amano, and D. H. Foster, "Spatial distributions of local illumination color in natural scenes," Vision Research, vol. 120, pp. 39-44, 2016. Vision and the Statistics of the Natural Environment. [5] T. G. Kolda and B. W. Bader, "Tensor decompositions and applications," SIAM Review, vol. 51, no. 3, pp. 455-500, 2009. [6] B. W. Bader and T. G. Kolda, "Efficient matlab computations with sparse and factored tensors," SIAM Journal on Scientific Computing, vol. 30, no. 1, pp. 205-231, 2008. [7] C. A. Andersson and R. Bro, "The n-way toolbox for matlab," Chemometrics and Intelligent Laboratory Systems, vol. 52, no. 1, pp. 1-4, 2000. [8] J. Kossaifi, Y. Panagakis, A. Anandkumar, and M. Pantic, "Tensorly: Tensor learning in python," Journal of Machine Learning Research, vol. 20, no. 26, pp. 1-6, 2019. [9] J. Li, J. Bien, and M. T. Wells, "rTensor: An R package for multidimensional array (tensor) unfolding, multiplication, and decomposition," Journal of Statistical Software, vol. 87, no. 10, pp. 1-31, 2018. [10] B. W. Bader and T. G. Kolda, "Algorithm 862: Matlab tensor classes for fast algorithm prototyping," ACM Trans. Math. Softw., vol. 32, p. 635-653, Dec. 2006. [11] D. M. Dunlavy, N. T. Johnson, et al., "pyttb: Python Tensor Toolbox, v1.8.3," Aug. 2025. [12] E. Phipps, T. Kolda, D. Dunlavy, G. Ballard, and T. Plantenga, "Genten: Software for generalized tensor decompositions v. 1.0.0," 06 2017. [13] A.-H. Phan, P. Tichavsk ́y, and A. Cichocki, "Fast alternating ls algorithms for high order candecomp/parafac tensor factorizations," IEEE Transactions on Signal Processing, vol. 61, no. 19, pp. 4834-4846, 2013. [14] N. Vannieuwenhoven, K. Meerbergen, and R. Vandebril, "Computing the gradient in optimization algorithms for the cp decomposition in constant memory through tensor blocking," SIAM Journal on Scientific Computing, vol. 37, no. 3, pp. C415-C438, 2015. [15] G. Guennebaud, B. Jacob, et al., "Eigen v3." http://eigen.tuxfamily.org, 2010. [16] J. H. Choi and S. V. N. Vishwanathan, "Dfacto: distributed factorization of tensors," in Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS'14, (Cambridge, MA, USA), p. 1296-1304, MIT Press, 2014. [17] S. Smith, N. Ravindran, N. D. Sidiropoulos, and G. Karypis, "Splatt: Efficient and parallel sparse tensor-matrix multiplication," in 2015 IEEE International Parallel and Distributed Processing Symposium, pp. 61-70, 2015. [18] J. Li, J. Sun, and R. Vuduc, "Hicoo: Hierarchical storage of sparse tensors," in SC18: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 238-252, 2018. [19] J. Laukemann, A. E. Helal, S. I. G. Anderson, F. Checconi, Y. Soh, J. J. Tithi, T. Ranadive, B. J. Gravelle, F. Petrini, and J. Choi, "Accelerating sparse tensor decomposition using adaptive linearized representation," IEEE Transactions on Parallel and Distributed Systems, vol. 36, no. 5, pp. 1025-1041, 2025. [20] E. T. Phipps and T. G. Kolda, "Software for sparse tensor decomposition on emerging computing architectures," SIAM Journal on Scientific Computing, vol. 41, no. 3, pp. C269-C290, 2019. [21] J. D. Carroll and J. J. Chang, "Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition," Psychometrika, vol. 35, pp. 283-319, 1970. [22] R. A. Harshman, "Foundations of the parafac procedure: Models and conditions for an "explanatory" multi-model factor analysis," in Working Papers in Phonetics, 1970. [23] E. Acar, D. M. Dunlavy, and T. G. Kolda, "A scalable optimization approach for fitting canonical tensor decompositions," J. Chemom., vol. 25, pp. 67-86, Feb. 2011. 9 [24] J. Douglas Carroll, S. Pruzansky, and J. B. Kruskal, "Candelinc: A general approach to multidimensional analysis of many-way arrays with linear constraints on parameters," Psychometrika, vol. 45, pp. 3-24, Mar 1980. [25] C. R. Trott, D. Lebrun-Grandi ́e, D. Arndt, J. Ciesko, V. Dang, N. Ellingwood, R. Gayatri, E. Harvey, D. S. Hollman, D. Ibanez, N. Liber, J. Madsen, J. Miles, D. Poliakoff, A. Powell, S. Rajamanickam, M. Simberg, D. Sunderland, B. Turcksin, and J. Wilke, "Kokkos 3: Programming model extensions for the exascale era," IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 4, pp. 805-817, 2022. [26] OpenMP Architecture Review Board, "OpenMP application program interface version 4.0," 2013. [27] J. Bonilla, J. Shadid, X.-Z. Tang, M. Crockatt, P. Ohm, E. Phillips, R. Pawlowski, S. Conde, and O. Beznosov, "On a fully-implicit vms-stabilized fe formulation for low mach number compressible resistive mhd with application to mcf," Computer Methods in Applied Mechanics and Engineering, vol. 417, p. 116359, 2023. A Special Issue in Honor of the Lifetime Achievements of T. J. R. Hughes. [28] L. Chac ́on, "An optimal, parallel, fully implicit newton-krylov solver for three-dimensional viscoresistive magnetohydrodynamicsa)," Physics of Plasmas, vol. 15, p. 056103, 02 2008. [29] J. Shadid, R. Pawlowski, E. Cyr, R. Tuminaro, L. Chac ́on, and P. Weber, "Scalable implicit incompressible resistive mhd with stabilized fe and fully-coupled newton-krylov-amg," Computer Methods in Applied Mechanics and Engineering, vol. 304, pp. 125, 2016. [30] J. D. McCalpin, "Memory bandwidth and machine balance in current high performance computers," IEEE Computer Society Technical Committee on Computer Architecture (TCCA) Newsletter, pp. 19-25, Dec. 1995. [31] G. Ballard, K. Hayashi, and K. Ramakrishnan, "Parallel nonnegative cp decomposition of dense tensors," in 2018 IEEE 25th International Conference on High Performance Computing (HiPC), pp. 22-31, 2018. [32] C. Lewis and E. Phipps, "Low-communication asynchronous distributed generalized canonical polyadic tensor decomposition," in 2021 IEEE High Performance Extreme Computing Conference (HPEC), pp. 1-5, 2021. 10
2510.14899
Electric field controlled second-order anomalous Hall effect in altermagnets Arnob Mukherjee, Biplab Sanyal, Annica M. Black-Schaffer, and Ankita Bhattacharya∗ Department of Physics and Astronomy, Uppsala University, Box-516, S-75120 Uppsala, Sweden (Dated: October 17, 2025) Altermagnets are a recently discovered class of compensated magnets with momentum-dependent spin splittings and unusual transport properties, even without a net magnetization. In the presence of combined four-fold rotation and time-reversal (C4T ) symmetry, linear and also second-order, driven by a Berry curvature dipole, anomalous Hall responses are forbidden in any pure d-wave altermagnet. Nevertheless, here we find that the nontrivial quantum metric of the occupied Bloch states allows for an electric field induced Berry curvature dipole, which generates a strong and tunable second-order Hall current, enabling it to be switched on or off by simply adjusting the relative orientation between the symmetry-reducing dc field and the ac probe field. Specifically, we investigate the electric field induced second-order anomalous Hall response in a two-dimensional Rashba-coupled hybrid altermagnet that interpolates between dx2−y2 (B1g) and dxy (B2g) altermag- net symmetry, motivated by recent proposals for mixed-symmetry states. Crucially, the nonlinear signal is highly sensitive to the underlying symmetry of the altermagnetic order at specific doping levels, offering a purely electrical method to distinguish distinct altermagnetic orders. Our results position hybrid altermagnets as a promising platform for controllable nonlinear transport and spin- tronic applications. The discovery of altermagnets (AMs) has revealed a new class of magnetic order that bridges ferromagnets (FMs) and antiferromagnets (AFMs), exhibiting momentum- dependent spin splitting but zero net magnetization [1– 7]. Unlike conventional (N´eel) AFMs that preserve time- reversal symmetry (T ) through spin-compensated sublat- tices related by translation or inversion (P), AM breaks global T while their sublattices are instead connected by crystalline point-group operations, such as rotations (Cn) or mirrors (M) [7, 8]. Consequently, AMs have been shown to host exotic electronic features, such as symmetry-protected nodal lines, Weyl planes [9], and momentum-locked spin textures [10], all while suppress- ing stray magnetic fields thus, making AMs promising candidates for advanced spintronic and topological de- vices [11, 12]. In AMs, the Berry curvature (BC) exhibits quadrupo- lar or higher-order multipolar symmetry in momentum space, resulting in a vanishing momentum-space aver- age and, consequently, the absence of any BC-driven linear anomalous Hall response [13, 14], despite broken T . Consequently, the leading anomalous Hall response may only emerge at second order. Recently, such second- order anomalous Hall effect (SAHE) has attracted con- siderable interest due to its fundamental connection to the quantum geometry of Bloch states and its poten- tial for promising technological applications [15–22]. The primary driver for this SAHE is the dipole moment of the asymmetric distribution of the BC over the occupied state, namely the Berry curvature dipole (BCD) [15, 22]. In addition to requiring broken P, SAHE is subject to strict crystallographic symmetry constraints [15]. In d- wave AMs, both T and four-fold rotational (C4) symme- tries are broken individually, but the combined C4T is ∗ankita.bhattacharya@physics.uu.se preserved. As a consequence, recent works have shown that the second-order anomalous Hall response is forbid- den in AMs and subsequently, only nonlinear Hall re- sponses at the third order have so far been considered [13, 23–26]. Through a parallel development, recent theoretical [27] and experimental [28, 29] works have demonstrated that the SAHE can also occur in systems where the inherent BCD is symmetry forbidden, if instead mediated by the Berry connection polarizability (BCP) or, equivalently, the band-normalized quantum metric (QM), i.e., the real part of the quantum geometric tensor. [18, 30, 31]. An external dc electric field can induce a finite field-induced BCD through the nontrivial QM of the occupied Bloch bands, making the SAHE present even in systems where the inherent BCD-induced SAHE is absent and, impor- tantly, also provide greater external tunability. Notably, this electric-field driven SAHE is distinct from the QM- dipole-induced intrinsic (independent of relaxation time) SAHE, reported in PT -symmetric AFM [18]. In this work we consider two-dimensional (2D) hybrid AMs. While prototypical AMs such as RuO2 [32–34] and MnTe [35–37] realize distinct dx2−y2 (B1g) and dxy (B2g) spin-splitting, respectively [7, 32, 38], theoretical classi- fications have emphasized that these two order param- eters, belonging to different irreducible representations of the tetragonal point group D4h, can in principle co- exist once the crystal symmetry is reduced to one of its lower-symmetry subgroups [39, 40]. Moreover, lowering the crystal symmetry, through strain [41–44] or surface termination [40, 45], may induce mixing between other- wise pure altermagnetic orders. Experimental techniques such as nanoscale domain engineering [36, 37] also al- low for controlled tuning between these symmetry limits. Motivated by these developments, we consider a hybrid altermagnetic order that continuously interpolates be- tween the B1g and B2g irreducible representations, cap- arXiv:2510.14899v1 [cond-mat.mes-hall] 16 Oct 2025 2 turing the possibility of mixed-symmetry states in real- istic materials. We also consider AMs that inherently host Rashba spin-orbit coupling (RSOC) due to common structural asymmetry [46, 47], which ensures the broken P required for any finite SAHE [15]. Using hybrid AMs with RSOC as realistic platform for AMs, we theoretically establish the existence of an elec- tric field induced BCD-assisted SAHE in AMs. In par- ticular, this exploits the BCP or equivalently, the QM, to overcome the C4T constraints that otherwise prevents an inherent BCD-induced SAHE. Quite remarkably, we also find that the electric field induced SAHE varies substan- tially in magnitude between the dx2−y2 and dxy limits in certain doping regimes, thus enabling a purely electrical route to distinguish the two altermagnetic orders. Our findings establish AMs as a powerful platform to realize highly tunable nonlinear Hall transport already at second order, driven by quantum geometric effects. Model hybrid altermagnet (AM).— To investigate the SAHE in hybrid AMs, we introduce a minimal two-band Hamiltonian defined on a 2D square lattice. In this model, we phenomenologically mix the B1g and B2g com- ponents of the altermagnetic orderings to capture the es- sential physics of a hybrid AM. The tight-binding Hamil- tonian is given by [48–50] H(k) = ϵ(k) σ0 + h(k) · σ, (1) where σν denote the Pauli matrices for ν = 1, 2, 3 and the 2 × 2 identity matrix for ν = 0 acting in the spin- sublattice subspace. The components of the Hamiltonian are ϵ(k) = −2t (cos kx + cos ky) −µ, (2) h(k) = {−λ sin ky, λ sin kx, tam α (cos kx −cos ky) + tam (1 −α) sin kx sin ky} (3) where t denotes the nearest-neighbor hopping amplitude, and µ is the chemical potential, λ denotes the RSOC strength, and tam is the strength of the altermagnetic spin splitting . We parameterize the relative weight of the two altermagnetic orders by 0 ≤α ≤1. In two dimensions, AMs inherently exhibit RSOC due to struc- tural asymmetry [46, 47]. The case α = 1 corresponds to a pure dx2−y2 AM order arising from nearest-neighbor spin-dependent hopping, while α = 0 yields pure dxy AM order governed by next-nearest-neighbor spin-dependent hopping. Intermediate values of α thus describe a hybrid AM that seamlessly interpolates between the two irre- ducible representations B1g and B2g of the point group D4h. Such hybridization significantly enriches the spin splitting . The square lattice Hamiltonian in Eq. (1) may be rele- vant for tetragonal AMs such as RuO2 [32–34] and MnTe [35–37], and is directly relevant to quasi-2D layered com- pounds like Ca3Ru2O7 that host both altermagnetism and RSOC [41]. Additionally, it naturally accommodates the dx2−y2 (B1g) and dxy (B2g) [2, 3] orders whose hy- bridization we study. Second-order anomalous Hall effect (SAHE).— Al- though the Hamiltonian in Eq. (1) breaks T , it exhibits no anomalous Hall response at linear order in an ap- plied ac electric field Eω with frequency ω, owing to the presence of combined C4T symmetry. Consequently, the leading anomalous Hall transport may at most arise at second order in Eω. Within semi-classical Boltzmann theory, the second-order anomalous Hall current can be expressed as jAH i = χAH ijk Eω j Eω k , with the second-order transverse conductivity tensor χAH ijk [15] χAH ijk = ϵilk e3τ 2ℏ2(1 + iωτ) X n Z ddk (2π)d f0 (∂kjΩn l ), (4) where ϵilk is the Levi-Civita symbol and f0 is the equilib- rium Fermi-Dirac distribution function and Ωn l is the l-th component of BC of the nth band. Here, τ is the relax- ation time that makes the SAHE an extrinsic effect. The integral is the first moment of the BC over the occupied states, known as the Berry curvature dipole (BCD) [15] Djl = X n Z ddk (2π)d f0 (∂kjΩn l ). (5) For the BCD, or equivalently χAH ijk , to be nonzero, in- version symmetry P must be broken, otherwise, the mo- mentum derivative ∂kjΩn l (k) transforms as an odd func- tion of momentum, causing its Brillouin zone integral to vanish. In Eq. (1), the Rashba term λ explicitly breaks P. Additionally, the BCD obeys a severe symmetry con- straint, namely the presence of two or more mirror lines in a crystal forces the SAHE to still vanish, while a sin- gle mirror line forces it to be orthogonal to that mirror plane [15, 51, 52]. Thus, mirror symmetries Mx,y, when present, constrain BCD and thus, the SAHE. Pure dx2−y2 altermagnetic order, i.e., when α = 1 in Eq. (1), breaks both Mx and My mirror symmetries, while purely dxy, i.e., α = 0, preserves both Mx,y. The presence of both Mx,y thus directly prohibits any BCD- induced SAHE for α = 0. In contrast, the absence of both mirror planes for α = 1, in principle, should allow SAHE. However, the presence of combined C4T symmetry in AMs has been shown to completely forbid BCD-driven SAHE for any values of α, thus forcing the leading order Hall transport to be third order [13, 14, 53, 54]. Next we will show that the nontrivial QM of the Bloch states actually still enables an extrinsic second-order transverse response in the presence of a dc electric field in AMs. Electric field induced SAHE.— In the presence of a symmetry-reducing dc electromagnetic field Edc, the BC of the Bloch electrons is modified due to the po- sitional shift of Bloch electrons arising from interband mixing [17]. This results in a field-induced Berry con- nection that is expressed as AE = ↔ G Edc, where ↔ G is a second-rank tensor, called the BCP. The component Gn ab of the BCP tensor is defined as [17, 55, 56] Gn ab(k) = 2 Re X m̸=n Anm a (k) Amn b (k) ϵn(k) −ϵm(k) . (6) 3 −π 2 0 π 2 ky −π 2 0 π 2 kx −π 2 0 π 2 ky Edc −π 2 0 π 2 kx Edc −π 2 0 π 2 kx Edc −20 0 20 −50 0 50 FIG. 1. Berry connection polarizability (BCP) tensor com- ponents for one of the band: (a) G1 xx, (b) G1 xy, and (c) G1 yy in the first Brillouin zone. Field-induced Berry curvature for the same band , ΩE(k), for dc-field orientations (d) θ = 0, (e) π/4, and (f) π/2, with range arrows indicating electric field directions. Calculations performed with t = 1.0, tam = 0.5t, α = 0.5, µ = 0.3t, and λ = 0.08t. Here, the interband Berry connection is Anm a (k) = ⟨un|i∂a|um⟩, where ∂a ≡∂ka and |un⟩is the periodic part of the n-th unperturbed Bloch state, with ϵn the corre- sponding band energy. Notably, Gn ab(k) is closely linked with the QM, gn ab = 2 P m̸=n Re[Anm a (k) Amn b (k)], the real part of the quantum geometric tensor [30]. A non- trivial BCP, or equivalently QM, generates in turn a field- induced BC ΩE n = ∇k×AE n , which, similar to the original BC of the Bloch band, acts like a pseudo-magnetic field in k-space which yields a SAHE, just as the original BC in Eq. (4). The field-induced BC is especially important in those systems where either the intrinsic BC vanishes or the original BCD-induced SAHE is forbidden due to the symmetry constraints, as discussed above for AMs. In typical Hall transport measurements, the applied electric field and the measured current are both restricted to the x−y plane. Then, an external electric field applied at an angle θ relative to the x-axis, such that Edc = Edc(cos θ, sin θ), induces a field-induced BC of the form [27, 55, 57], ΩE nz = Edc[(∂kxGn yx −∂kyGn xx) cos θ (7) + (∂kxGn yy −∂kyGn xy) sin θ], which is dependent on both the magnitude and direction of Edc. This field-induced BC can lead to a finite field- induced BCD following Eq. (5) with an angular depen- dence DE(θ) = DE xz(θ), DE yz(θ)  . Due to the lowered symmetry in the presence of Edc, a field-induced BCD then gives rise to the SAHE. Finally, plugging the angular dependence of the BCD into Eq. (4), and recasting it through χAH = j2ω/(Eω)2, we arrive at the second-harmonic current j2ω [27, 28] j2ω = − e3τ 2(1 + iωτ)ℏ2 (ˆz × Eω) [DE(θ) · Eω], (8) where Eω is the in-plane ac Hall probe field that makes an angle ϕ with the x-axis and satisfies the condition Eω ≪ Edc. As seen from Eq. (8), the maximum Hall response is obtained when Eω ∥DE(θ), while it vanishes for Eω ⊥ DE(θ). Overall, the angular dependence j2ω is intricate as it not only varies with θ through DE(θ) but also with ϕ, thus providing a high degree of external tunability of the SAHE. In 2D, only the two components χyxx and χxyy of the tensor χAH ijk are finite and independent. Thus χAH simplifies to [18, 27] χAH(θ, ϕ) = χyxx(θ) cos ϕ −χxyy(θ) sin ϕ. (9) SAHE in altermagnets.— To illustrate how the electric field induced SAHE emerges in AMs described by Eq. (1), we start by showcasing the k-resolved distribution of the components of the BCP tensor, or equivalently, band- normalized QM for one of the two bands, setting α = 0.5 in Fig. 1 (a-c). For the other band, the contributions are exactly equal in magnitude but opposite in sign. For pure altermagnetic orders, i.e., for α = 0 or α = 1, the com- ponents of BCP tensors are shown in the Supplementary Material (SM) [58]. The corresponding band structures for these three α values are shown in the SM [58]. Consis- tent with expectations, the BCP values exhibit maxima at the band-touching points, due to the denominator in Eq. (6). The presence of a finite BCP leads to a finite BC, in accordance with Eq. (7), when an external dc electric field Edc is applied. For Edc oriented along the −5.0 −2.5 0.0 2.5 5.0 −π 2 0 π 2 ky (a) (b) −π 2 0 π 2 kx −π 2 0 π 2 ky (c) −π 2 0 π 2 kx (d) FIG. 2. Momentum distribution of the Fermi-function- weighted derivatives of the field-induced Berry curvature in log-scale for the two bands, (a) f0 ∂kxΩE 1 , (b) f0 ∂kyΩE 1 , and for band 2, (c) f0 ∂kxΩE 2 , (d) f0 ∂kyΩE 2 , evaluated at θ = π/4. Parameters are the same as in Fig. 1. 4 0 π 2 π 3π 2 2π φ −0.02 0.00 0.02 χAH θ = 0 θ = π 6 θ = π 4 θ = π 2 θ = 3π 4 FIG. 3. Second-order Hall conductivity χAH as a function of angle ϕ, set by the ac driving field Eω, for several different angles θ, set by the symmetry-breaking static field Edc, with the x-axis. χAH is scaled by the prefactor A = e3τ 2(1+iωτ)ℏ2 and the parameters are the same as in Fig. 1. x (θ = 0), x = y (θ = π/4), and y(θ = π/2), the field- induced BC ΩE are shown in Fig. 1 (d)-(f), respectively. The k-resolved distribution of ΩE shows a clear dipolar structure. Not only the magnitude, but also the direc- tion of the ΩE and, thus, also the resulting BCD, can be externally tuned by changing the direction of Edc [27]. Once a finite and field-tunable BCD is confirmed, it is straightforward to express χAH in terms of Dxz and Dyz following Eq. (4-5). Figure 2 shows the momentum- resolved components of the Fermi-weighted Berry curva- ture dipole, i.e., the integrand of Eq. (5) for both bands again for α = 0.5, with Edc oriented along the x = y direction. These are then integrated over the Brillouin zone and summed over bands to yield the finite field- induced BCD. Finally, following Eq. (4), with it angular dependence in Eq. (9), we find an overall nonzero second- order Hall response. The complex angular variation of the resulting second-order Hall conductivity χAH(θ, ϕ) in a hybrid AM is depicted in Fig. 3. It offers significant experimental control over the second-order Hall current, enabling it to be switched on or off simply by adjusting the relative orientation of Edc and Eω. These behaviors are similarly observed for the pure altermagnetic order, i.e., for α = 1 and α = 0, see the SM [58]. Remarkably, we find that the electric field induced SAHE exhibits a strong sensitivity in magnitude between different altermagnetic orders, at least for a range of dop- ing levels, set by µ. The response is maximized at α = 0, corresponding to a pure dxy-type altermagnetic order when Edc ⊥Eω, but as α increases, the response χAH decreases and becomes negligible for α →1, i.e., pure dx2−y2 order, see Fig. 4(a). In contrast, when Edc ∥Eω, χAH is found to be vanishingly small for both α = 0 and α = 1 as shown in Fig. 4(b). Thus, simply tuning the direction of either Edc or Eω, provides a straight- forward way to conclusively distinguish between pure or- ders dxy and dx2−y2. For the parameter choices used in the presented results, this doping range is found to be 0.0 0.5 1.0 α 0.0 0.5 1.0 1.5 χAH (a) θ = 0, φ = π 2 λ = 0.08 λ = 0.1 λ = 0.15 0.0 0.5 1.0 α −0.8 −0.6 −0.4 −0.2 0.0 (b) θ = 0, φ = π λ = 0.08 λ = 0.1 λ = 0.15 FIG. 4. Second-order Hall conductivity χAH as a function of α for different RSOC λ values and field orientations: (a) θ = 0, ϕ = π/2 and (b) θ = 0, ϕ = π with t = 1, tam = 0.5t, and µ = 0.3t (same as Fig. 1). −t < µ < t. We have also tested our results for smaller and larger tam, which shifts the corresponding doping range, although it is still clearly present. In this doping regime, the system exhibits two hole-like Fermi pockets for α = 0, whereas for α = 0.5 and α = 1, it features one electron and one hole Fermi pocket, see SM [58]. When the Fermi surfaces (FSs) have the opposite curva- ture, their contributions to Hall transport tend to cancel each other, resulting in a nearly vanishing response. In contrast, when both FSs have the same curvature, their contributions add constructively, leading to a finite over- all response. For example, in the doping regime with two electron pockets (regardless of α), the field-induced SAHE remains nonzero for both pure and hybrid alter- magnetic symmetries, see the SM [58]. In this scenario, pure altermagnetic orders cannot be distinguished based on the second-order response, as all yield a finite signal. These findings unambiguously establish the presence of a nonlinear Hall response in AM at second order and also offer an all-electrical approach to resolve pure alter- magnetic orderings. As shown in Fig. 4, the electric field induced SAHE increases when the strength of RSOC λ decreases. Still, we recall that no SAHE is possible for λ = 0 due to preserved P. At first glance, this may seem non-intuitive. However, we note that for finite λ, RSOC opens gaps at the band-touching points in altermagnetic metals. When λ is small, these gaps are narrow, leading to an enhanced BCP (see Eq. (6)), which in turn am- plifies the field-induced SAHE. The evolution of the two spin-split FSs with λ in various doping limits are shown in the SM [58]. In summary, we demonstrate that Rashba-coupled hy- brid AMs provide a viable platform for electric field in- duced SAHE that arises due to the nontrivial QM of the Bloch bands. Although combined C4T symmetry forbids both the linear anomalous Hall effect and the usual BCD- driven second-order signal, an external dc field lowers the symmetry and generates a finite field-induced BCD via the nontrivial QM of the occupied Bloch states. The intricate angular variation of the resulting second-order Hall conductivity χAH(θ, ϕ) provides substantial exper- imental control over the second-order Hall current, al- lowing it to be toggled on or off by merely adjusting the 5 relative orientation between the external dc field Edc and ac probe field Eω. In certain doping regimes, its magni- tude can even probe the underlying altermagnetic form factor, peaking in the dxy (B2g) limit and decreasing to- ward dx2−y2 (B1g) configuration for Edc ⊥Eω, which then provides a purely electrical means to distinguish the two. These predictions are directly testable in candidate material platforms such as epitaxial RuO2 [32–34] and MnTe [35–37] thin films, where strain or surface termina- tion can realize hybrid B1g–B2g textures and provide the required inversion-symmetry breaking at interfaces that generate interfacial RSOC. To observe the SAHE, an ex- perimental setup similar to the one used in Refs. [28, 29] could be employed. More broadly, our results demon- strate that quantum geometry can enable nonlinear re- sponses in systems where symmetry otherwise suppresses intrinsic BC effects, enabling functionalities such as on- chip frequency doubling and rectification in zero-moment magnets. A.B and A.M.B.-S. acknowledge financial support from the Swedish Research Council (Vetenskapsr˚adet) grant no. 2022-03963 and the European Union through the European Research Council (ERC) under the European Union’s Horizon 2020 research and inno- vation programme (ERC-2022-CoG, Grant agreement no. 101087096). Views and opinions expressed are how- ever those of the author(s) only and do not necessar- ily reflect those of the European Union or the Eu- ropean Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. B.S. acknowledges fi- nancial support from Swedish Research Council (grant no. 2022-04309), STINT Mobility Grant for Internation- alization (grant no. MG2022-9386) and DST-SPARC, In- dia (Ref. No. SPARC/2019-2020/P1879/SL). A.M. ac- knowledges the computational resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at NSC and PDC (NAISS 2024/3- 40) partially funded by the Swedish Research Council through grant agreement no. 2022-06725 and at UPP- MAX (NAISS 2025/2-203). B.S. also acknowledges the allocation of supercomputing hours granted by the Eu- roHPC JU Development Access call in LUMI-C super- computer (grant no. EHPC-DEV-2024D04-071) in Fin- land. [1] J. Krempask`y, L. ˇSmejkal, S. D’souza, M. Hajlaoui, G. Springholz, K. Uhl´ıˇrov´a, F. Alarab, P. Constantinou, V. Strocov, D. Usanov, et al., Nature 626, 517 (2024). [2] L. ˇSmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 040501 (2022). [3] L. ˇSmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 031042 (2022). [4] R. Tamang, S. Gurung, D. Rai, S. Brahimi, and S. Lou- nis, arXiv preprint arXiv:2412.05377 (2024). [5] C. Song, H. Bai, Z. Zhou, L. Han, H. Reichlova, J. H. Dil, J. Liu, X. Chen, and F. Pan, Nature Reviews Materials , 1 (2025). [6] O. Gomonay, V. Kravchuk, R. Jaeschke-Ubiergo, K. Yer- shov, T. Jungwirth, L. ˇSmejkal, J. v. d. Brink, and J. Sinova, npj Spintronics 2, 35 (2024). [7] P. A. McClarty and J. G. Rau, Phys. Rev. Lett. 132, 176702 (2024). [8] Y. Yu, H. G. Suh, M. Roig, and D. F. Agterberg, Nature Communications 16, 2950 (2025). [9] T. Jungwirth, R. M. Fernandes, J. Sinova, and L. Sme- jkal, arXiv preprint arXiv:2409.10034 (2024). [10] M. Naka, Y. Motome, and H. Seo, npj Spintronics 3, 1 (2025). [11] A. Bose, S. Vadnais, and A. Paramekanti, Phys. Rev. B 110, 205120 (2024). [12] T. Jungwirth, J. Sinova, P. Wadley, D. Kriegner, H. Re- ichlova, F. Krizek, H. Ohno, and L. Smejkal, arXiv preprint arXiv:2508.09748 (2025). [13] Y. Fang, J. Cano, and S. A. A. Ghorashi, Phys. Rev. Lett. 133, 106701 (2024). [14] K. Takahashi, C. R. W. Steward, M. Ogata, R. M. Fer- nandes, and J. Schmalian, Phys. Rev. B 111, 184408 (2025). [15] I. Sodemann and L. Fu, Phys. Rev. Lett. 115, 216806 (2015). [16] Q. Ma, S.-Y. Xu, H. Shen, D. MacNeill, V. Fatemi, T.-R. Chang, A. M. Mier Valdivia, S. Wu, Z. Du, C.-H. Hsu, et al., Nature 565, 337 (2019). [17] Y. Gao, S. A. Yang, and Q. Niu, Phys. Rev. Lett. 112, 166601 (2014). [18] H. Liu, J. Zhao, Y.-X. Huang, W. Wu, X.-L. Sheng, C. Xiao, and S. A. Yang, Phys. Rev. Lett. 127, 277202 (2021). [19] C. Wang, Y. Gao, and D. Xiao, Phys. Rev. Lett. 127, 277201 (2021). [20] A. Gao, Y.-F. Liu, J.-X. Qiu, B. Ghosh, T. V. Trevisan, Y. Onishi, C. Hu, T. Qian, H.-J. Tien, S.-W. Chen, et al., Science 381, 181 (2023). [21] N. Wang, D. Kaplan, Z. Zhang, T. Holder, N. Cao, A. Wang, X. Zhou, F. Zhou, Z. Jiang, C. Zhang, et al., Nature 621, 487 (2023). [22] D. Kaplan, T. Holder, and B. Yan, Phys. Rev. Lett. 132, 026301 (2024). [23] D. S. Antonenko, R. M. Fernandes, and J. W. F. Vender- bos, Phys. Rev. Lett. 134, 096703 (2025). [24] P. Rao, A. Mook, and J. Knolle, Phys. Rev. B 110, 024425 (2024). [25] S. Sorn and A. S. Patri, Phys. Rev. B 110, 125127 (2024). [26] T. Farajollahpour, R. Ganesh, and K. Samokhin, npj Quantum Materials 10, 29 (2025). [27] A. Bhattacharya and A. M. Black-Schaffer, Phys. Rev. B 111, L041202 (2025). [28] X.-G. Ye, H. Liu, P.-F. Zhu, W.-Z. Xu, S. A. Yang, N. Shang, K. Liu, and Z.-M. Liao, Phys. Rev. Lett. 130, 016301 (2023). [29] J. Yang, L. Wei, Y. Li, L. Chen, W. Niu, J. Chen, J. Du, and Y. Pu, arXiv:2506.10657 (2025). [30] P. T¨orm¨a, Phys. Rev. Lett. 131, 240001 (2023). 6 [31] G. Sala, M. T. Mercaldo, K. Domi, S. Gariglio, M. Cuoco, C. Ortix, and A. D. Caviglia, arXiv preprint arXiv:2407.06659 (2024). [32] O. Fedchenko, J. Min´ar, A. Akashdeep, S. W. D’Souza, D. Vasilyev, O. Tkach, L. Odenbreit, Q. Nguyen, D. Kut- nyakhov, N. Wind, et al., Science advances 10, eadj4883 (2024). [33] D. T. Plouff, L. Scheuer, S. Shrestha, W. Wu, N. J. Parvez, S. Bhatt, X. Wang, L. Gundlach, M. B. Jungfleisch, and J. Q. Xiao, npj Spintronics 3, 17 (2025). [34] Y. Guo, J. Zhang, Z. Zhu, Y.-y. Jiang, L. Jiang, C. Wu, J. Dong, X. Xu, W. He, B. He, et al., Advanced Science 11, 2400967 (2024). [35] R. Yamamoto, L. A. Turnbull, M. Schmidt, J. C. Cor- saletti Filho, H. J. Binger, M. Di Pietro Mart´ınez, M. Weigand, S. Finizio, Y. Prots, G. M. Ferguson, U. Vool, S. Wintz, and C. Donnelly, Phys. Rev. Appl. 24, 034037 (2025). [36] O. Amin, A. Dal Din, E. Golias, Y. Niu, A. Zakharov, S. Fromage, C. Fields, S. Heywood, R. Cousins, F. Mac- cherozzi, et al., Nature 636, 348 (2024). [37] A. Hariki, A. Dal Din, O. J. Amin, T. Yamaguchi, A. Badura, D. Kriegner, K. W. Edmonds, R. P. Cam- pion, P. Wadley, D. Backes, L. S. I. Veiga, S. S. Dhesi, G. Springholz, L. ˇSmejkal, K. V´yborn´y, T. Jungwirth, and J. Kuneˇs, Phys. Rev. Lett. 132, 176701 (2024). [38] L. ˇSmejkal, R. Gonz´alez-Hern´andez, T. Jungwirth, and J. Sinova, Science advances 6, eaaz8809 (2020). [39] R. M. Fernandes, V. S. de Carvalho, T. Birol, and R. G. Pereira, Phys. Rev. B 109, 024404 (2024). [40] A. Ramires, arXiv preprint arXiv:2502.19162 (2025). [41] A. Le´on, C. Autieri, T. Brumme, and J. W. Gonz´alez, npj Quantum Materials 10, 98 (2025). [42] B. Karetta, X. H. Verbeek, R. Jaeschke-Ubiergo, L. ˇSmejkal, and J. Sinova, Phys. Rev. B 112, 094454 (2025). [43] M. Khodas, S. Mu, I. Mazin, and K. Belashchenko, arXiv preprint arXiv:2506.06257 (2025). [44] S. Li, Y. Zhang, A. Bahri, X. Zhang, and C. Jia, npj Quantum Materials 10, 83 (2025). [45] R. Tamang, S. Gurung, D. P. Rai, S. Brahimi, and S. Lou- nis, Magnetism 5, 10.3390/magnetism5030017 (2025). [46] P. Rao, A. Mook, and J. Knolle, Phys. Rev. B 110, 024425 (2024). [47] M. Amundsen, A. Brataas, and J. Linder, Phys. Rev. B 110, 054427 (2024). [48] T. Farajollahpour, R. Ganesh, and K. Samokhin, npj Quantum Materials 10, 77 (2025). [49] B. Lu, K. Maeda, H. Ito, K. Yada, and Y. Tanaka, Phys. Rev. Lett. 133, 226002 (2024). [50] S. A. A. Ghorashi, T. L. Hughes, and J. Cano, Phys. Rev. Lett. 133, 106601 (2024). [51] C. Ortix, Advanced Quantum Technologies 4, 2100056 (2021). [52] S. Nandy and I. Sodemann, Phys. Rev. B 100, 195117 (2019). [53] S. Sankar, R. Liu, C.-P. Zhang, Q.-F. Li, C. Chen, X.-J. Gao, J. Zheng, Y.-H. Lin, K. Qian, R.-P. Yu, X. Zhang, Z. Y. Meng, K. T. Law, Q. Shao, and B. J¨ack, Phys. Rev. X 14, 021046 (2024). [54] L. Xiang, C. Zhang, L. Wang, and J. Wang, Phys. Rev. B 107, 075411 (2023). [55] S. Lai, H. Liu, Z. Zhang, J. Zhao, X. Feng, N. Wang, C. Tang, Y. Liu, K. Novoselov, S. A. Yang, et al., Nature Nanotechnology 16, 869 (2021). [56] H. Liu, J. Zhao, Y.-X. Huang, X. Feng, C. Xiao, W. Wu, S. Lai, W.-b. Gao, and S. A. Yang, Phys. Rev. B 105, 045118 (2022). [57] O. Pal and T. K. Ghosh, Phys. Rev. B 109, 035202 (2024). [58] See supplemental material for further details (2025). 7 Supplemental Material for “Electric-field-controlled second-order anomalous Hall effect in altermagnets” In this supplementary material, we present results for the pure altermagnetic orders. We also examine a doping regime in which both pure and hybrid altermagnetic orders feature Fermi pockets with similar curvature, allowing for a direct comparison with the main text, where the results correspond to a doping range in which the altermagnetic orders exhibit either Fermi surfaces with the same curvature or with opposite curvature. I. FIELD-INDUCED SECOND-ORDER HALL RESPONSE FOR PURE ALTERMAGNETIC ORDERS In this section we show the electric field induced second-order Hall response for pure altermagnetic orders. For our model Hamiltonian Eq. (1) in the main text we consider hybrid altermagnetic order, i.e., a combination of dx2−y2 and dxy symmetries by keeping α = 0.5. Here we instead consider the results for pure dx2−y2 and dxy altermagnetic order. In Fig. S1 and Fig. S2, we show the components of the Berry connection polarizability tensor (BCP) for one of the two bands and the field-induced Berry curvature for three different orientations of the symmetry-breaking field Edc for α = 1, i.e., for pure dx2−y2 symmetry and for α = 0, i.e., for pure dxy symmetry, respectively. These figures are equivalent to Fig. 1 in the main text. −π 2 0 π 2 ky −π 2 0 π 2 kx −π 2 0 π 2 ky Edc −π 2 0 π 2 kx Edc −π 2 0 π 2 kx Edc −20 0 20 −50 0 50 FIG. S1. Berry connection polarizability (BCP) tensor components for one of the band : (a) G1 xx, (b) G1 xy, and (c) G1 yy in the first Brillouin zone. Field-induced Berry curvature for the same band , ΩE(k), for dc-field orientations (d) θ = 0, (e) π/4, and (f) π/2, with orange arrows indicating electric field directions. Calculations performed with t = 1.0, tam = 0.5t, α = 1.0, µ = 0.3t, and λ = 0.08t. The k-resolved distribution of ΩE shows a clear dipolar structure, which corresponds directly to a finite field-induced BCD. Not only the magnitude but also the direction of ΩE and, thus, the resulting BCD can be externally tuned by changing the direction of Edc. The corresponding momentum-resolved components of the Fermi-weighted dipole of BC for the two bands for α = 1 and α = 0, with Edc oriented along the x = y direction, are shown in Fig. S4 and Fig. S5, respectively, similar to Fig. 2 in the main text. For α = 1, the system possesses one electron pocket and one hole pocket, similar to α = 0.5, while for α = 0, it has two hole pockets. Figure S6 shows the resulting second-order conductivity χAH(θ, ϕ) as a function of ϕ for various values of θ for the two pure orders. The second-order response χAH(θ, ϕ) for α = 0 is more than an order of magnitude higher than that for α = 1. When the Fermi surfaces (FSs) are of opposite types, as in the case for α = 0.5 or 1, their contributions tend to cancel each other, resulting in a small response. In contrast, when both FSs have same types of curvature as in the case for α = 1, their contributions add constructively, leading to a higher overall response. 8 −π 2 0 π 2 ky −π 2 0 π 2 kx −π 2 0 π 2 ky Edc −π 2 0 π 2 kx Edc −π 2 0 π 2 kx Edc −20 0 20 −50 0 50 FIG. S2. Berry connection polarizability (BCP) tensor components for one of the band : (a) G1 xx, (b) G1 xy, and (c) G1 yy in the first Brillouin zone. Field-induced Berry curvature for the same band , ΩE(k), for dc-field orientations (d) θ = 0, (e) π/4, and (f) π/2, with orange arrows indicating electric field directions. Calculations performed with t = 1.0, tam = 0.5t, α = 0.0, µ = 0.3t, and λ = 0.08t. Γ (0, 0) Y (0, π) S2 (−π, π) Γ (0, 0) S1 (π, π) X (π, 0) Γ (0, 0) −4 −2 0 2 4 E (a) Band 1 Band 2 Γ (0, 0) Y (0, π) S2 (−π, π) Γ (0, 0) S1 (π, π) X (π, 0) Γ (0, 0) (b) Band 1 Band 2 Γ (0, 0) Y (0, π) S2 (−π, π) Γ (0, 0) S1 (π, π) X (π, 0) Γ (0, 0) (c) Band 1 Band 2 FIG. S3. Band structure along high-symmetry paths for (a) α=0 (pure dxy), (b) α=0.5 (hybrid), and (c) α=1 (pure dx2−y2). Blue and red lines represent the two spin-split bands band 1 and band 2, respectively. The dashed line indicates the Fermi level. Parameters: t = 1.0, µ = 0.3t, λ = 0.08t, tam = 0.5t. II. TOPOGRAPHY OF FERMI SURFACES In Fig. S7 and Fig. S8, we present the spin-split Fermi surfaces for various altermagnetic orders across different doping regimes. In the absence of Rashba spin–orbit coupling (RSOC) λ, band-touching points are present, but a gap opens at these points as soon as λ is tuned to a finite value. The second-order Hall response increases with decreasing RSOC λ, as shown in Fig. S9. This is because for a smaller value of λ, the energy gaps between bands are reduced, leading to an enhanced quantum metric or Berry curvature polarization (BCP), which in turn results in a stronger field-induced second-order response. In Fig. S9(a), we show the variation of χAH with α for µ = −3t keeping Edc ⊥Eω. In contrast to the results in Fig. 4 in the main text, we find that χAH is finite similar in magnitude for all α. In this doping limit, there are also two electron pockets for all α in contrast to the doping limit considered in the main text. In this case, the contribution of the two similar Fermi surfaces add constructively to give an overall finite response for any α. In Fig. S9(b), the same plot is shown for Edc ∥Eω, with an overall small χAH, showing that also in this case there is large tunability with field-directions. 9 −5.0 −2.5 0.0 2.5 5.0 −π 2 0 π 2 ky (a) (b) −π 2 0 π 2 kx −π 2 0 π 2 ky (c) −π 2 0 π 2 kx (d) FIG. S4. Momentum distribution of the Fermi-function-weighted derivatives of the field-induced Berry curvature in log-scale for the two bands, (a) f0 ∂kxΩE 1 , (b) f0 ∂kyΩE 1 , and for band 2, (c) f0 ∂kxΩE 2 , (d) f0 ∂kyΩE 2 , evaluated at θ = π/4. Parameters are the same as in Fig. S1. 10 −5.0 −2.5 0.0 2.5 5.0 −π 2 0 π 2 ky (a) (b) −π 2 0 π 2 kx −π 2 0 π 2 ky (c) −π 2 0 π 2 kx (d) FIG. S5. Momentum distribution of the Fermi-function-weighted derivatives of the field-induced Berry curvature in log-scale for the two bands, (a) f0 ∂kxΩE 1 , (b) f0 ∂kyΩE 1 , and for band 2, (c) f0 ∂kxΩE 2 , (d) f0 ∂kyΩE 2 , evaluated at θ = π/4. Parameters are the same as in Fig. S2. 0 π 2 π 3π 2 2π φ −1 0 1 χAH (a) α = 0.0 θ = 0 θ = π 6 θ = π 4 θ = π 2 θ = 3π 4 0 π 2 π 3π 2 2π φ −0.04 −0.02 0.00 0.02 0.04 (b) α = 1.0 FIG. S6. Second-order Hall conductivity χAH as a function of angle ϕ, set by the ac driving field Eω, for several different angles θ, set by the symmetry-breaking static field Edc, with the x-axis for (a) α = 0, (b) α = 1. χAH is scaled by the prefactor A = e3τ 2(1+iωτ)ℏ2 and the parameters are the same as in Fig. S3. 11 −π/2 0 π/2 ky (a) α = 0.0, λ = 0.00 (b) α = 0.5, λ = 0.00 (c) α = 1.0, λ = 0.00 −π/2 0 π/2 −π/2 0 π/2 ky (d) α = 0.0, λ = 0.10 −π/2 0 π/2 kx (e) α = 0.5, λ = 0.10 −π/2 0 π/2 (f) α = 1.0, λ = 0.10 −1.0 −0.5 0.0 0.5 1.0 FIG. S7. Spin-split Fermi surfaces (a,d) α = 0; (b,e) α = 0.5 and (c,f) α = 1.0. Upper row is for λ = 0 and lower row for λ = 0.1t. Chemical potential is set to µ = 0.3t, same as the main text. 12 −π/2 0 π/2 ky (a) α = 0.0, λ = 0.00 (b) α = 0.5, λ = 0.00 (c) α = 1.0, λ = 0.00 −π/2 0 π/2 −π/2 0 π/2 ky (d) α = 0.0, λ = 0.10 −π/2 0 π/2 kx (e) α = 0.5, λ = 0.10 −π/2 0 π/2 (f) α = 1.0, λ = 0.10 −1.0 −0.5 0.0 0.5 1.0 FIG. S8. Spin-split Fermi surfaces (a,d) α = 0; (b,e) α = 0.5 and (c,f) α = 1.0. Upper row is for λ = 0 and lower row for λ = 0.1t. Chemical potential is set to µ = −3.0t. 0.0 0.5 1.0 α 0.15 0.20 0.25 0.30 χAH (a) θ = 0, φ = π 2 λ = 0.08 λ = 0.1 λ = 0.15 0.0 0.5 1.0 α −0.02 −0.01 0.00 0.01 0.02 (b) θ = 0, φ = π λ = 0.08 λ = 0.1 λ = 0.15 FIG. S9. Second-order Hall conductivity χAH as a function of α for different RSOC λ values and field orientations: (a) θ = 0, ϕ = π/2 and (b) θ = 0, ϕ = π with t = 1, tam = 0.5t, and µ = −3t (same as Fig. S3).
Electric field controlled second-order anomalous Hall effect in altermagnets Arnob Mukherjee, Biplab Sanyal, Annica M. Black-Schaffer, and Ankita Bhattacharya∗ -516, S-75120 Uppsala, Sweden (Dated: October 17, 2025) Altermagnets are a recently discovered class of compensated magnets with momentum-dependent spin splittings and unusual transport properties, even without a net magnetization. In the presence of combined four-fold rotation and time-reversal (C4T ) symmetry, linear and also second-order, driven by a Berry curvature dipole, anomalous Hall responses are forbidden in any pure d-wave altermagnet. Nevertheless, here we find that the nontrivial quantum metric of the occupied Bloch states allows for an electric field induced Berry curvature dipole, which generates a strong and tunable second-order Hall current, enabling it to be switched on or off by simply adjusting the relative orientation between the symmetry-reducing dc field and the ac probe field. Specifically, we investigate the electric field induced second-order anomalous Hall response in a two-dimensional Rashba-coupled hybrid altermagnet that interpolates between dx2-y2 (B1g) and dxy (B2g) altermagnet symmetry, motivated by recent proposals for mixed-symmetry states. Crucially, the nonlinear signal is highly sensitive to the underlying symmetry of the altermagnetic order at specific doping levels, offering a purely electrical method to distinguish distinct altermagnetic orders. Our results position hybrid altermagnets as a promising platform for controllable nonlinear transport and spintronic applications. The discovery of altermagnets (AMs) has revealed a new class of magnetic order that bridges ferromagnets (FMs) and antiferromagnets (AFMs), exhibiting momentumdependent spin splitting but zero net magnetization [17]. Unlike conventional (N ́eel) AFMs that preserve timereversal symmetry (T ) through spin-compensated sublattices related by translation or inversion (P), AM breaks global T while their sublattices are instead connected by crystalline point-group operations, such as rotations (Cn) or mirrors (M) [7, 8]. Consequently, AMs have been shown to host exotic electronic features, such as symmetry-protected nodal lines, Weyl planes [9], and momentum-locked spin textures [10], all while suppressing stray magnetic fields thus, making AMs promising candidates for advanced spintronic and topological devices [11, 12]. In AMs, the Berry curvature (BC) exhibits quadrupolar or higher-order multipolar symmetry in momentum space, resulting in a vanishing momentum-space average and, consequently, the absence of any BC-driven linear anomalous Hall response [13, 14], despite broken T . Consequently, the leading anomalous Hall response may only emerge at second order. Recently, such secondorder anomalous Hall effect (SAHE) has attracted considerable interest due to its fundamental connection to the quantum geometry of Bloch states and its potential for promising technological applications [15-22]. The primary driver for this SAHE is the dipole moment of the asymmetric distribution of the BC over the occupied state, namely the Berry curvature dipole (BCD) [15, 22]. In addition to requiring broken P, SAHE is subject to strict crystallographic symmetry constraints [15]. In dwave AMs, both T and four-fold rotational (C4) symmetries are broken individually, but the combined C4T is ∗ preserved. As a consequence, recent works have shown that the second-order anomalous Hall response is forbidden in AMs and subsequently, only nonlinear Hall responses at the third order have so far been considered [13, 23-26]. Through a parallel development, recent theoretical [27] and experimental [28, 29] works have demonstrated that the SAHE can also occur in systems where the inherent BCD is symmetry forbidden, if instead mediated by the Berry connection polarizability (BCP) or, equivalently, the band-normalized quantum metric (QM), i.e., the real part of the quantum geometric tensor. [18, 30, 31]. An external dc electric field can induce a finite field-induced BCD through the nontrivial QM of the occupied Bloch bands, making the SAHE present even in systems where the inherent BCD-induced SAHE is absent and, importantly, also provide greater external tunability. Notably, this electric-field driven SAHE is distinct from the QMdipole-induced intrinsic (independent of relaxation time) SAHE, reported in PT -symmetric AFM [18]. In this work we consider two-dimensional (2D) hybrid AMs. While prototypical AMs such as RuO2 [32-34] and MnTe [35-37] realize distinct dx2-y2 (B1g) and dxy (B2g) spin-splitting, respectively [7, 32, 38], theoretical classifications have emphasized that these two order parameters, belonging to different irreducible representations of the tetragonal point group D4h, can in principle coexist once the crystal symmetry is reduced to one of its lower-symmetry subgroups [39, 40]. Moreover, lowering the crystal symmetry, through strain [41-44] or surface termination [40, 45], may induce mixing between otherwise pure altermagnetic orders. Experimental techniques such as nanoscale domain engineering [36, 37] also allow for controlled tuning between these symmetry limits. Motivated by these developments, we consider a hybrid altermagnetic order that continuously interpolates between the B1g and B2g irreducible representations, cap16 Oct 2025 2 turing the possibility of mixed-symmetry states in realistic materials. We also consider AMs that inherently host Rashba spin-orbit coupling (RSOC) due to common structural asymmetry [46, 47], which ensures the broken P required for any finite SAHE [15]. Using hybrid AMs with RSOC as realistic platform for AMs, we theoretically establish the existence of an electric field induced BCD-assisted SAHE in AMs. In particular, this exploits the BCP or equivalently, the QM, to overcome the C4T constraints that otherwise prevents an inherent BCD-induced SAHE. Quite remarkably, we also find that the electric field induced SAHE varies substantially in magnitude between the dx2-y2 and dxy limits in certain doping regimes, thus enabling a purely electrical route to distinguish the two altermagnetic orders. Our findings establish AMs as a powerful platform to realize highly tunable nonlinear Hall transport already at second order, driven by quantum geometric effects. Model hybrid altermagnet (AM).- To investigate the SAHE in hybrid AMs, we introduce a minimal two-band Hamiltonian defined on a 2D square lattice. In this model, we phenomenologically mix the B1g and B2g components of the altermagnetic orderings to capture the essential physics of a hybrid AM. The tight-binding Hamiltonian is given by [48-50] H(k) = ε(k) σ0 + h(k) · σ, (1) where σν denote the Pauli matrices for ν = 1, 2, 3 and the 2 × 2 identity matrix for ν = 0 acting in the spinsublattice subspace. The components of the Hamiltonian are ε(k) = -2t (cos kx + cos ky) -μ, (2) h(k) = {-λ sin ky, λ sin kx, tam α (cos kx -cos ky) + tam (1 -α) sin kx sin ky} (3) where t denotes the nearest-neighbor hopping amplitude, and μ is the chemical potential, λ denotes the RSOC strength, and tam is the strength of the altermagnetic spin splitting . We parameterize the relative weight of the two altermagnetic orders by 0 ≤α ≤1. In two dimensions, AMs inherently exhibit RSOC due to structural asymmetry [46, 47]. The case α = 1 corresponds to a pure dx2-y2 AM order arising from nearest-neighbor spin-dependent hopping, while α = 0 yields pure dxy AM order governed by next-nearest-neighbor spin-dependent hopping. Intermediate values of α thus describe a hybrid AM that seamlessly interpolates between the two irreducible representations B1g and B2g of the point group D4h. Such hybridization significantly enriches the spin splitting . The square lattice Hamiltonian in Eq. (1) may be relevant for tetragonal AMs such as RuO2 [32-34] and MnTe [35-37], and is directly relevant to quasi-2D layered compounds like Ca3Ru2O7 that host both altermagnetism and RSOC [41]. Additionally, it naturally accommodates the dx2-y2 (B1g) and dxy (B2g) [2, 3] orders whose hybridization we study. Second-order anomalous Hall effect (SAHE).- Although the Hamiltonian in Eq. (1) breaks T , it exhibits no anomalous Hall response at linear order in an applied ac electric field Eω with frequency ω, owing to the presence of combined C4T symmetry. Consequently, the leading anomalous Hall transport may at most arise at second order in Eω. Within semi-classical Boltzmann theory, the second-order anomalous Hall current can be expressed as jAH i = χAH ijk Eω j Eω k , with the second-order transverse conductivity tensor χAH ijk [15] χAH ijk = εilk e3τ 2ħ2(1 + iωτ) X n Z ddk (2π)d f0 (∂kjΩn l ), (4) where εilk is the Levi-Civita symbol and f0 is the equilibrium Fermi-Dirac distribution function and Ωn l is the l-th component of BC of the nth band. Here, τ is the relaxation time that makes the SAHE an extrinsic effect. The integral is the first moment of the BC over the occupied states, known as the Berry curvature dipole (BCD) [15] Djl = X n Z ddk (2π)d f0 (∂kjΩn l ). (5) For the BCD, or equivalently χAH ijk , to be nonzero, inversion symmetry P must be broken, otherwise, the momentum derivative ∂kjΩn l (k) transforms as an odd function of momentum, causing its Brillouin zone integral to vanish. In Eq. (1), the Rashba term λ explicitly breaks P. Additionally, the BCD obeys a severe symmetry constraint, namely the presence of two or more mirror lines in a crystal forces the SAHE to still vanish, while a single mirror line forces it to be orthogonal to that mirror plane [15, 51, 52]. Thus, mirror symmetries Mx,y, when present, constrain BCD and thus, the SAHE. Pure dx2-y2 altermagnetic order, i.e., when α = 1 in Eq. (1), breaks both Mx and My mirror symmetries, while purely dxy, i.e., α = 0, preserves both Mx,y. The presence of both Mx,y thus directly prohibits any BCDinduced SAHE for α = 0. In contrast, the absence of both mirror planes for α = 1, in principle, should allow SAHE. However, the presence of combined C4T symmetry in AMs has been shown to completely forbid BCD-driven SAHE for any values of α, thus forcing the leading order Hall transport to be third order [13, 14, 53, 54]. Next we will show that the nontrivial QM of the Bloch states actually still enables an extrinsic second-order transverse response in the presence of a dc electric field in AMs. Electric field induced SAHE.- In the presence of a symmetry-reducing dc electromagnetic field Edc, the BC of the Bloch electrons is modified due to the positional shift of Bloch electrons arising from interband mixing [17]. This results in a field-induced Berry connection that is expressed as AE = ↔ G Edc, where ↔ G is a second-rank tensor, called the BCP. The component Gn ab of the BCP tensor is defined as [17, 55, 56] Gn ab(k) = 2 Re X m̸=n Anm a (k) Amn b (k) εn(k) -εm(k) . (6) 3 -π 2 0 π 2 ky -π 2 0 π 2 kx -π 2 0 π 2 ky Edc -π 2 0 π 2 kx Edc -π 2 0 π 2 kx Edc -20 0 20 -50 0 50 FIG. 1. Berry connection polarizability (BCP) tensor components for one of the band: (a) G1 xx, (b) G1 xy, and (c) G1 yy in the first Brillouin zone. Field-induced Berry curvature for the same band , ΩE(k), for dc-field orientations (d) θ = 0, (e) π/4, and (f) π/2, with range arrows indicating electric field directions. Calculations performed with t = 1.0, tam = 0.5t, α = 0.5, μ = 0.3t, and λ = 0.08t. Here, the interband Berry connection is Anm a (k) = ⟨un|i∂a|um⟩, where ∂a ≡∂ka and |un⟩is the periodic part of the n-th unperturbed Bloch state, with εn the corresponding band energy. Notably, Gn ab(k) is closely linked with the QM, gn ab = 2 P m̸=n Re[Anm a (k) Amn b (k)], the real part of the quantum geometric tensor [30]. A nontrivial BCP, or equivalently QM, generates in turn a fieldinduced BC ΩE n = ∇k×AE n , which, similar to the original BC of the Bloch band, acts like a pseudo-magnetic field in k-space which yields a SAHE, just as the original BC in Eq. (4). The field-induced BC is especially important in those systems where either the intrinsic BC vanishes or the original BCD-induced SAHE is forbidden due to the symmetry constraints, as discussed above for AMs. In typical Hall transport measurements, the applied electric field and the measured current are both restricted to the x-y plane. Then, an external electric field applied at an angle θ relative to the x-axis, such that Edc = Edc(cos θ, sin θ), induces a field-induced BC of the form [27, 55, 57], ΩE nz = Edc[(∂kxGn yx -∂kyGn xx) cos θ (7) + (∂kxGn yy -∂kyGn xy) sin θ], which is dependent on both the magnitude and direction of Edc. This field-induced BC can lead to a finite fieldinduced BCD following Eq. (5) with an angular dependence DE(θ) = DE xz(θ), DE yz(θ) . Due to the lowered symmetry in the presence of Edc, a field-induced BCD then gives rise to the SAHE. Finally, plugging the angular dependence of the BCD into Eq. (4), and recasting it through χAH = j2ω/(Eω)2, we arrive at the second-harmonic current j2ω [27, 28] j2ω = - e3τ 2(1 + iωτ)ħ2 (ˆz × Eω) [DE(θ) · Eω], (8) where Eω is the in-plane ac Hall probe field that makes an angle φ with the x-axis and satisfies the condition Eω ≪ Edc. As seen from Eq. (8), the maximum Hall response is obtained when Eω ∥DE(θ), while it vanishes for Eω ⊥ DE(θ). Overall, the angular dependence j2ω is intricate as it not only varies with θ through DE(θ) but also with φ, thus providing a high degree of external tunability of the SAHE. In 2D, only the two components χyxx and χxyy of the tensor χAH ijk are finite and independent. Thus χAH simplifies to [18, 27] χAH(θ, φ) = χyxx(θ) cos φ -χxyy(θ) sin φ. (9) SAHE in altermagnets.- To illustrate how the electric field induced SAHE emerges in AMs described by Eq. (1), we start by showcasing the k-resolved distribution of the components of the BCP tensor, or equivalently, bandnormalized QM for one of the two bands, setting α = 0.5 in Fig. 1 (a-c). For the other band, the contributions are exactly equal in magnitude but opposite in sign. For pure altermagnetic orders, i.e., for α = 0 or α = 1, the components of BCP tensors are shown in the Supplementary Material (SM) [58]. The corresponding band structures for these three α values are shown in the SM [58]. Consistent with expectations, the BCP values exhibit maxima at the band-touching points, due to the denominator in Eq. (6). The presence of a finite BCP leads to a finite BC, in accordance with Eq. (7), when an external dc electric field Edc is applied. For Edc oriented along the -5.0 -2.5 0.0 2.5 5.0 -π 2 0 π 2 ky (a) (b) -π 2 0 π 2 kx -π 2 0 π 2 ky (c) -π 2 0 π 2 kx (d) FIG. 2. Momentum distribution of the Fermi-functionweighted derivatives of the field-induced Berry curvature in log-scale for the two bands, (a) f0 ∂kxΩE 1 , (b) f0 ∂kyΩE 1 , and for band 2, (c) f0 ∂kxΩE 2 , (d) f0 ∂kyΩE 2 , evaluated at θ = π/4. Parameters are the same as in Fig. 1. 4 0 π 2 π 3π 2 2π φ -0.02 0.00 0.02 χAH θ = 0 θ = π 6 θ = π 4 θ = π 2 θ = 3π 4 FIG. 3. Second-order Hall conductivity χAH as a function of angle φ, set by the ac driving field Eω, for several different angles θ, set by the symmetry-breaking static field Edc, with the x-axis. χAH is scaled by the prefactor A = e3τ 2(1+iωτ)ħ2 and the parameters are the same as in Fig. 1. x (θ = 0), x = y (θ = π/4), and y(θ = π/2), the fieldinduced BC ΩE are shown in Fig. 1 (d)-(f), respectively. The k-resolved distribution of ΩE shows a clear dipolar structure. Not only the magnitude, but also the direction of the ΩE and, thus, also the resulting BCD, can be externally tuned by changing the direction of Edc [27]. Once a finite and field-tunable BCD is confirmed, it is straightforward to express χAH in terms of Dxz and Dyz following Eq. (4-5). Figure 2 shows the momentumresolved components of the Fermi-weighted Berry curvature dipole, i.e., the integrand of Eq. (5) for both bands again for α = 0.5, with Edc oriented along the x = y direction. These are then integrated over the Brillouin zone and summed over bands to yield the finite fieldinduced BCD. Finally, following Eq. (4), with it angular dependence in Eq. (9), we find an overall nonzero secondorder Hall response. The complex angular variation of the resulting second-order Hall conductivity χAH(θ, φ) in a hybrid AM is depicted in Fig. 3. It offers significant experimental control over the second-order Hall current, enabling it to be switched on or off simply by adjusting the relative orientation of Edc and Eω. These behaviors are similarly observed for the pure altermagnetic order, i.e., for α = 1 and α = 0, see the SM [58]. Remarkably, we find that the electric field induced SAHE exhibits a strong sensitivity in magnitude between different altermagnetic orders, at least for a range of doping levels, set by μ. The response is maximized at α = 0, corresponding to a pure dxy-type altermagnetic order when Edc ⊥Eω, but as α increases, the response χAH decreases and becomes negligible for α →1, i.e., pure dx2-y2 order, see Fig. 4(a). In contrast, when Edc ∥Eω, χAH is found to be vanishingly small for both α = 0 and α = 1 as shown in Fig. 4(b). Thus, simply tuning the direction of either Edc or Eω, provides a straightforward way to conclusively distinguish between pure orders dxy and dx2-y2. For the parameter choices used in the presented results, this doping range is found to be 0.0 0.5 1.0 α 0.0 0.5 1.0 1.5 χAH (a) θ = 0, φ = π 2 λ = 0.08 λ = 0.1 λ = 0.15 0.0 0.5 1.0 α -0.8 -0.6 -0.4 -0.2 0.0 (b) θ = 0, φ = π λ = 0.08 λ = 0.1 λ = 0.15 FIG. 4. Second-order Hall conductivity χAH as a function of α for different RSOC λ values and field orientations: (a) θ = 0, φ = π/2 and (b) θ = 0, φ = π with t = 1, tam = 0.5t, and μ = 0.3t (same as Fig. 1). -t < μ < t. We have also tested our results for smaller and larger tam, which shifts the corresponding doping range, although it is still clearly present. In this doping regime, the system exhibits two hole-like Fermi pockets for α = 0, whereas for α = 0.5 and α = 1, it features one electron and one hole Fermi pocket, see SM [58]. When the Fermi surfaces (FSs) have the opposite curvature, their contributions to Hall transport tend to cancel each other, resulting in a nearly vanishing response. In contrast, when both FSs have the same curvature, their contributions add constructively, leading to a finite overall response. For example, in the doping regime with two electron pockets (regardless of α), the field-induced SAHE remains nonzero for both pure and hybrid altermagnetic symmetries, see the SM [58]. In this scenario, pure altermagnetic orders cannot be distinguished based on the second-order response, as all yield a finite signal. These findings unambiguously establish the presence of a nonlinear Hall response in AM at second order and also offer an all-electrical approach to resolve pure altermagnetic orderings. As shown in Fig. 4, the electric field induced SAHE increases when the strength of RSOC λ decreases. Still, we recall that no SAHE is possible for λ = 0 due to preserved P. At first glance, this may seem non-intuitive. However, we note that for finite λ, RSOC opens gaps at the band-touching points in altermagnetic metals. When λ is small, these gaps are narrow, leading to an enhanced BCP (see Eq. (6)), which in turn amplifies the field-induced SAHE. The evolution of the two spin-split FSs with λ in various doping limits are shown in the SM [58]. In summary, we demonstrate that Rashba-coupled hybrid AMs provide a viable platform for electric field induced SAHE that arises due to the nontrivial QM of the Bloch bands. Although combined C4T symmetry forbids both the linear anomalous Hall effect and the usual BCDdriven second-order signal, an external dc field lowers the symmetry and generates a finite field-induced BCD via the nontrivial QM of the occupied Bloch states. The intricate angular variation of the resulting second-order Hall conductivity χAH(θ, φ) provides substantial experimental control over the second-order Hall current, allowing it to be toggled on or off by merely adjusting the 5 relative orientation between the external dc field Edc and ac probe field Eω. In certain doping regimes, its magnitude can even probe the underlying altermagnetic form factor, peaking in the dxy (B2g) limit and decreasing toward dx2-y2 (B1g) configuration for Edc ⊥Eω, which then provides a purely electrical means to distinguish the two. These predictions are directly testable in candidate material platforms such as epitaxial RuO2 [32-34] and MnTe [35-37] thin films, where strain or surface termination can realize hybrid B1g-B2g textures and provide the required inversion-symmetry breaking at interfaces that generate interfacial RSOC. To observe the SAHE, an experimental setup similar to the one used in Refs. [28, 29] could be employed. More broadly, our results demonstrate that quantum geometry can enable nonlinear responses in systems where symmetry otherwise suppresses intrinsic BC effects, enabling functionalities such as onchip frequency doubling and rectification in zero-moment magnets. A.B and A.M.B.-S. acknowledge financial support from the Swedish Research Council (Vetenskapsr ̊adet) grant no. 2022-03963 and the European Union through the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC-2022-CoG, Grant agreement no. 101087096). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. B.S. acknowledges financial support from Swedish Research Council (grant no. 2022-04309), STINT Mobility Grant for Internationalization (grant no. MG2022-9386) and DST-SPARC, India (Ref. No. SPARC/2019-2020/P1879/SL). A.M. acknowledges the computational resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at NSC and PDC (NAISS 2024/340) partially funded by the Swedish Research Council through grant agreement no. 2022-06725 and at UPPMAX (NAISS 2025/2-203). B.S. also acknowledges the allocation of supercomputing hours granted by the EuroHPC JU Development Access call in LUMI-C supercomputer (grant no. EHPC-DEV-2024D04-071) in Finland. [1] J. Krempask`y, L. ˇSmejkal, S. D'souza, M. Hajlaoui, G. Springholz, K. Uhl ́ıˇrov ́a, F. Alarab, P. Constantinou, V. Strocov, D. Usanov, et al., Nature 626, 517 (2024). [2] L. ˇSmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 040501 (2022). [3] L. ˇSmejkal, J. Sinova, and T. Jungwirth, Phys. Rev. X 12, 031042 (2022). [4] R. Tamang, S. Gurung, D. Rai, S. Brahimi, and S. Lounis, arXiv preprint (2024). [5] C. Song, H. Bai, Z. Zhou, L. Han, H. Reichlova, J. H. Dil, J. Liu, X. Chen, and F. Pan, Nature Reviews Materials , 1 (2025). [6] O. Gomonay, V. Kravchuk, R. Jaeschke-Ubiergo, K. Yershov, T. Jungwirth, L. ˇSmejkal, J. v. d. Brink, and J. Sinova, npj Spintronics 2, 35 (2024). [7] P. A. McClarty and J. G. Rau, Phys. Rev. Lett. 132, 176702 (2024). [8] Y. Yu, H. G. Suh, M. Roig, and D. F. Agterberg, Nature Communications 16, 2950 (2025). [9] T. Jungwirth, R. M. Fernandes, J. Sinova, and L. Smejkal, arXiv preprint (2024). [10] M. Naka, Y. Motome, and H. Seo, npj Spintronics 3, 1 (2025). [11] A. Bose, S. Vadnais, and A. Paramekanti, Phys. Rev. B 110, 205120 (2024). [12] T. Jungwirth, J. Sinova, P. Wadley, D. Kriegner, H. Reichlova, F. Krizek, H. Ohno, and L. Smejkal, arXiv preprint (2025). [13] Y. Fang, J. Cano, and S. A. A. Ghorashi, Phys. Rev. Lett. 133, 106701 (2024). [14] K. Takahashi, C. R. W. Steward, M. Ogata, R. M. Fernandes, and J. Schmalian, Phys. Rev. B 111, 184408 (2025). [15] I. Sodemann and L. Fu, Phys. Rev. Lett. 115, 216806 (2015). [16] Q. Ma, S.-Y. Xu, H. Shen, D. MacNeill, V. Fatemi, T.-R. Chang, A. M. Mier Valdivia, S. Wu, Z. Du, C.-H. Hsu, et al., Nature 565, 337 (2019). [17] Y. Gao, S. A. Yang, and Q. Niu, Phys. Rev. Lett. 112, 166601 (2014). [18] H. Liu, J. Zhao, Y.-X. Huang, W. Wu, X.-L. Sheng, C. Xiao, and S. A. Yang, Phys. Rev. Lett. 127, 277202 (2021). [19] C. Wang, Y. Gao, and D. Xiao, Phys. Rev. Lett. 127, 277201 (2021). [20] A. Gao, Y.-F. Liu, J.-X. Qiu, B. Ghosh, T. V. Trevisan, Y. Onishi, C. Hu, T. Qian, H.-J. Tien, S.-W. Chen, et al., Science 381, 181 (2023). [21] N. Wang, D. Kaplan, Z. Zhang, T. Holder, N. Cao, A. Wang, X. Zhou, F. Zhou, Z. Jiang, C. Zhang, et al., Nature 621, 487 (2023). [22] D. Kaplan, T. Holder, and B. Yan, Phys. Rev. Lett. 132, 026301 (2024). [23] D. S. Antonenko, R. M. Fernandes, and J. W. F. Venderbos, Phys. Rev. Lett. 134, 096703 (2025). [24] P. Rao, A. Mook, and J. Knolle, Phys. Rev. B 110, 024425 (2024). [25] S. Sorn and A. S. Patri, Phys. Rev. B 110, 125127 (2024). [26] T. Farajollahpour, R. Ganesh, and K. Samokhin, npj Quantum Materials 10, 29 (2025). [27] A. Bhattacharya and A. M. Black-Schaffer, Phys. Rev. B 111, L041202 (2025). [28] X.-G. Ye, H. Liu, P.-F. Zhu, W.-Z. Xu, S. A. Yang, N. Shang, K. Liu, and Z.-M. Liao, Phys. Rev. Lett. 130, 016301 (2023). [29] J. Yang, L. Wei, Y. Li, L. Chen, W. Niu, J. Chen, J. Du, and Y. Pu, (2025). [30] P. T ̈orm ̈a, Phys. Rev. Lett. 131, 240001 (2023). 6 [31] G. Sala, M. T. Mercaldo, K. Domi, S. Gariglio, M. Cuoco, C. Ortix, and A. D. Caviglia, arXiv preprint (2024). [32] O. Fedchenko, J. Min ́ar, A. Akashdeep, S. W. D'Souza, D. Vasilyev, O. Tkach, L. Odenbreit, Q. Nguyen, D. Kutnyakhov, N. Wind, et al., Science advances 10, eadj4883 (2024). [33] D. T. Plouff, L. Scheuer, S. Shrestha, W. Wu, N. J. Parvez, S. Bhatt, X. Wang, L. Gundlach, M. B. Jungfleisch, and J. Q. Xiao, npj Spintronics 3, 17 (2025). [34] Y. Guo, J. Zhang, Z. Zhu, Y.-y. Jiang, L. Jiang, C. Wu, J. Dong, X. Xu, W. He, B. He, et al., Advanced Science 11, 2400967 (2024). [35] R. Yamamoto, L. A. Turnbull, M. Schmidt, J. C. Corsaletti Filho, H. J. Binger, M. Di Pietro Mart ́ınez, M. Weigand, S. Finizio, Y. Prots, G. M. Ferguson, U. Vool, S. Wintz, and C. Donnelly, Phys. Rev. Appl. 24, 034037 (2025). [36] O. Amin, A. Dal Din, E. Golias, Y. Niu, A. Zakharov, S. Fromage, C. Fields, S. Heywood, R. Cousins, F. Maccherozzi, et al., Nature 636, 348 (2024). [37] A. Hariki, A. Dal Din, O. J. Amin, T. Yamaguchi, A. Badura, D. Kriegner, K. W. Edmonds, R. P. Campion, P. Wadley, D. Backes, L. S. I. Veiga, S. S. Dhesi, G. Springholz, L. ˇSmejkal, K. V ́yborn ́y, T. Jungwirth, and J. Kuneˇs, Phys. Rev. Lett. 132, 176701 (2024). [38] L. ˇSmejkal, R. Gonz ́alez-Hern ́andez, T. Jungwirth, and J. Sinova, Science advances 6, eaaz8809 (2020). [39] R. M. Fernandes, V. S. de Carvalho, T. Birol, and R. G. Pereira, Phys. Rev. B 109, 024404 (2024). [40] A. Ramires, arXiv preprint (2025). [41] A. Le ́on, C. Autieri, T. Brumme, and J. W. Gonz ́alez, npj Quantum Materials 10, 98 (2025). [42] B. Karetta, X. H. Verbeek, R. Jaeschke-Ubiergo, L. ˇSmejkal, and J. Sinova, Phys. Rev. B 112, 094454 (2025). [43] M. Khodas, S. Mu, I. Mazin, and K. Belashchenko, arXiv preprint (2025). [44] S. Li, Y. Zhang, A. Bahri, X. Zhang, and C. Jia, npj Quantum Materials 10, 83 (2025). [45] R. Tamang, S. Gurung, D. P. Rai, S. Brahimi, and S. Lounis, Magnetism 5, 10.3390/magnetism5030017 (2025). [46] P. Rao, A. Mook, and J. Knolle, Phys. Rev. B 110, 024425 (2024). [47] M. Amundsen, A. Brataas, and J. Linder, Phys. Rev. B 110, 054427 (2024). [48] T. Farajollahpour, R. Ganesh, and K. Samokhin, npj Quantum Materials 10, 77 (2025). [49] B. Lu, K. Maeda, H. Ito, K. Yada, and Y. Tanaka, Phys. Rev. Lett. 133, 226002 (2024). [50] S. A. A. Ghorashi, T. L. Hughes, and J. Cano, Phys. Rev. Lett. 133, 106601 (2024). [51] C. Ortix, Advanced Quantum Technologies 4, 2100056 (2021). [52] S. Nandy and I. Sodemann, Phys. Rev. B 100, 195117 (2019). [53] S. Sankar, R. Liu, C.-P. Zhang, Q.-F. Li, C. Chen, X.-J. Gao, J. Zheng, Y.-H. Lin, K. Qian, R.-P. Yu, X. Zhang, Z. Y. Meng, K. T. Law, Q. Shao, and B. J ̈ack, Phys. Rev. X 14, 021046 (2024). [54] L. Xiang, C. Zhang, L. Wang, and J. Wang, Phys. Rev. B 107, 075411 (2023). [55] S. Lai, H. Liu, Z. Zhang, J. Zhao, X. Feng, N. Wang, C. Tang, Y. Liu, K. Novoselov, S. A. Yang, et al., Nature Nanotechnology 16, 869 (2021). [56] H. Liu, J. Zhao, Y.-X. Huang, X. Feng, C. Xiao, W. Wu, S. Lai, W.-b. Gao, and S. A. Yang, Phys. Rev. B 105, 045118 (2022). [57] O. Pal and T. K. Ghosh, Phys. Rev. B 109, 035202 (2024). [58] See supplemental material for further details (2025). 7 Supplemental Material for "Electric-field-controlled second-order anomalous Hall effect in altermagnets" In this supplementary material, we present results for the pure altermagnetic orders. We also examine a doping regime in which both pure and hybrid altermagnetic orders feature Fermi pockets with similar curvature, allowing for a direct comparison with the main text, where the results correspond to a doping range in which the altermagnetic orders exhibit either Fermi surfaces with the same curvature or with opposite curvature. I. FIELD-INDUCED SECOND-ORDER HALL RESPONSE FOR PURE ALTERMAGNETIC ORDERS In this section we show the electric field induced second-order Hall response for pure altermagnetic orders. For our model Hamiltonian Eq. (1) in the main text we consider hybrid altermagnetic order, i.e., a combination of dx2-y2 and dxy symmetries by keeping α = 0.5. Here we instead consider the results for pure dx2-y2 and dxy altermagnetic order. In Fig. S1 and Fig. S2, we show the components of the Berry connection polarizability tensor (BCP) for one of the two bands and the field-induced Berry curvature for three different orientations of the symmetry-breaking field Edc for α = 1, i.e., for pure dx2-y2 symmetry and for α = 0, i.e., for pure dxy symmetry, respectively. These figures are equivalent to Fig. 1 in the main text. -π 2 0 π 2 ky -π 2 0 π 2 kx -π 2 0 π 2 ky Edc -π 2 0 π 2 kx Edc -π 2 0 π 2 kx Edc -20 0 20 -50 0 50 FIG. S1. Berry connection polarizability (BCP) tensor components for one of the band : (a) G1 xx, (b) G1 xy, and (c) G1 yy in the first Brillouin zone. Field-induced Berry curvature for the same band , ΩE(k), for dc-field orientations (d) θ = 0, (e) π/4, and (f) π/2, with orange arrows indicating electric field directions. Calculations performed with t = 1.0, tam = 0.5t, α = 1.0, μ = 0.3t, and λ = 0.08t. The k-resolved distribution of ΩE shows a clear dipolar structure, which corresponds directly to a finite field-induced BCD. Not only the magnitude but also the direction of ΩE and, thus, the resulting BCD can be externally tuned by changing the direction of Edc. The corresponding momentum-resolved components of the Fermi-weighted dipole of BC for the two bands for α = 1 and α = 0, with Edc oriented along the x = y direction, are shown in Fig. S4 and Fig. S5, respectively, similar to Fig. 2 in the main text. For α = 1, the system possesses one electron pocket and one hole pocket, similar to α = 0.5, while for α = 0, it has two hole pockets. Figure S6 shows the resulting second-order conductivity χAH(θ, φ) as a function of φ for various values of θ for the two pure orders. The second-order response χAH(θ, φ) for α = 0 is more than an order of magnitude higher than that for α = 1. When the Fermi surfaces (FSs) are of opposite types, as in the case for α = 0.5 or 1, their contributions tend to cancel each other, resulting in a small response. In contrast, when both FSs have same types of curvature as in the case for α = 1, their contributions add constructively, leading to a higher overall response. 8 -π 2 0 π 2 ky -π 2 0 π 2 kx -π 2 0 π 2 ky Edc -π 2 0 π 2 kx Edc -π 2 0 π 2 kx Edc -20 0 20 -50 0 50 FIG. S2. Berry connection polarizability (BCP) tensor components for one of the band : (a) G1 xx, (b) G1 xy, and (c) G1 yy in the first Brillouin zone. Field-induced Berry curvature for the same band , ΩE(k), for dc-field orientations (d) θ = 0, (e) π/4, and (f) π/2, with orange arrows indicating electric field directions. Calculations performed with t = 1.0, tam = 0.5t, α = 0.0, μ = 0.3t, and λ = 0.08t. Γ (0, 0) Y (0, π) S2 (-π, π) Γ (0, 0) S1 (π, π) X (π, 0) Γ (0, 0) -4 -2 0 2 4 E (a) Band 1 Band 2 Γ (0, 0) Y (0, π) S2 (-π, π) Γ (0, 0) S1 (π, π) X (π, 0) Γ (0, 0) (b) Band 1 Band 2 Γ (0, 0) Y (0, π) S2 (-π, π) Γ (0, 0) S1 (π, π) X (π, 0) Γ (0, 0) (c) Band 1 Band 2 FIG. S3. Band structure along high-symmetry paths for (a) α=0 (pure dxy), (b) α=0.5 (hybrid), and (c) α=1 (pure dx2-y2). Blue and red lines represent the two spin-split bands band 1 and band 2, respectively. The dashed line indicates the Fermi level. Parameters: t = 1.0, μ = 0.3t, λ = 0.08t, tam = 0.5t. II. TOPOGRAPHY OF FERMI SURFACES In Fig. S7 and Fig. S8, we present the spin-split Fermi surfaces for various altermagnetic orders across different doping regimes. In the absence of Rashba spin-orbit coupling (RSOC) λ, band-touching points are present, but a gap opens at these points as soon as λ is tuned to a finite value. The second-order Hall response increases with decreasing RSOC λ, as shown in Fig. S9. This is because for a smaller value of λ, the energy gaps between bands are reduced, leading to an enhanced quantum metric or Berry curvature polarization (BCP), which in turn results in a stronger field-induced second-order response. In Fig. S9(a), we show the variation of χAH with α for μ = -3t keeping Edc ⊥Eω. In contrast to the results in Fig. 4 in the main text, we find that χAH is finite similar in magnitude for all α. In this doping limit, there are also two electron pockets for all α in contrast to the doping limit considered in the main text. In this case, the contribution of the two similar Fermi surfaces add constructively to give an overall finite response for any α. In Fig. S9(b), the same plot is shown for Edc ∥Eω, with an overall small χAH, showing that also in this case there is large tunability with field-directions. 9 -5.0 -2.5 0.0 2.5 5.0 -π 2 0 π 2 ky (a) (b) -π 2 0 π 2 kx -π 2 0 π 2 ky (c) -π 2 0 π 2 kx (d) FIG. S4. Momentum distribution of the Fermi-function-weighted derivatives of the field-induced Berry curvature in log-scale for the two bands, (a) f0 ∂kxΩE 1 , (b) f0 ∂kyΩE 1 , and for band 2, (c) f0 ∂kxΩE 2 , (d) f0 ∂kyΩE 2 , evaluated at θ = π/4. Parameters are the same as in Fig. S1. 10 -5.0 -2.5 0.0 2.5 5.0 -π 2 0 π 2 ky (a) (b) -π 2 0 π 2 kx -π 2 0 π 2 ky (c) -π 2 0 π 2 kx (d) FIG. S5. Momentum distribution of the Fermi-function-weighted derivatives of the field-induced Berry curvature in log-scale for the two bands, (a) f0 ∂kxΩE 1 , (b) f0 ∂kyΩE 1 , and for band 2, (c) f0 ∂kxΩE 2 , (d) f0 ∂kyΩE 2 , evaluated at θ = π/4. Parameters are the same as in Fig. S2. 0 π 2 π 3π 2 2π φ -1 0 1 χAH (a) α = 0.0 θ = 0 θ = π 6 θ = π 4 θ = π 2 θ = 3π 4 0 π 2 π 3π 2 2π φ -0.04 -0.02 0.00 0.02 0.04 (b) α = 1.0 FIG. S6. Second-order Hall conductivity χAH as a function of angle φ, set by the ac driving field Eω, for several different angles θ, set by the symmetry-breaking static field Edc, with the x-axis for (a) α = 0, (b) α = 1. χAH is scaled by the prefactor A = e3τ 2(1+iωτ)ħ2 and the parameters are the same as in Fig. S3. 11 -π/2 0 π/2 ky (a) α = 0.0, λ = 0.00 (b) α = 0.5, λ = 0.00 (c) α = 1.0, λ = 0.00 -π/2 0 π/2 -π/2 0 π/2 ky (d) α = 0.0, λ = 0.10 -π/2 0 π/2 kx (e) α = 0.5, λ = 0.10 -π/2 0 π/2 (f) α = 1.0, λ = 0.10 -1.0 -0.5 0.0 0.5 1.0 FIG. S7. Spin-split Fermi surfaces (a,d) α = 0; (b,e) α = 0.5 and (c,f) α = 1.0. Upper row is for λ = 0 and lower row for λ = 0.1t. Chemical potential is set to μ = 0.3t, same as the main text. 12 -π/2 0 π/2 ky (a) α = 0.0, λ = 0.00 (b) α = 0.5, λ = 0.00 (c) α = 1.0, λ = 0.00 -π/2 0 π/2 -π/2 0 π/2 ky (d) α = 0.0, λ = 0.10 -π/2 0 π/2 kx (e) α = 0.5, λ = 0.10 -π/2 0 π/2 (f) α = 1.0, λ = 0.10 -1.0 -0.5 0.0 0.5 1.0 FIG. S8. Spin-split Fermi surfaces (a,d) α = 0; (b,e) α = 0.5 and (c,f) α = 1.0. Upper row is for λ = 0 and lower row for λ = 0.1t. Chemical potential is set to μ = -3.0t. 0.0 0.5 1.0 α 0.15 0.20 0.25 0.30 χAH (a) θ = 0, φ = π 2 λ = 0.08 λ = 0.1 λ = 0.15 0.0 0.5 1.0 α -0.02 -0.01 0.00 0.01 0.02 (b) θ = 0, φ = π λ = 0.08 λ = 0.1 λ = 0.15 FIG. S9. Second-order Hall conductivity χAH as a function of α for different RSOC λ values and field orientations: (a) θ = 0, φ = π/2 and (b) θ = 0, φ = π with t = 1, tam = 0.5t, and μ = -3t (same as Fig. S3).
2510.14889
Detecting Early and Implicit Suicidal Ideation via Longitudinal and Information Environment Signals on Social Media Soorya Ram Shimgekar1, Ruining Zhao1, Agam Goyal1, Violeta J. Rodriguez1, Paul A. Bloom2, Hari Sundaram1, Koustuv Saha1 1University of Illinois Urbana-Champaign, {sooryas2, ruining, agamg2, vjrodrig, hs1, ksaha2}@illinois.edu 2Columbia University, New York State Psychiatric Institute, paul.bloom@nyspi.columbia.edu Abstract On social media, many individuals experienc- ing suicidal ideation (SI) do not disclose their distress explicitly. Instead, signs may surface indirectly through everyday posts or peer inter- actions. Detecting such implicit signals early is critical but remains challenging. We frame early and implicit SI as a forward-looking pre- diction task and develop a computational frame- work that models a user’s information envi- ronment, consisting of both their longitudinal posting histories as well as the discourse of their socially proximal peers. We adopted a composite network centrality measure to iden- tify top neighbors of a user, and temporally aligned the user’s and neighbors’ interactions— integrating the multi-layered signals in a fine- tuned DeBERTa-v3 model. In a Reddit study of 1,000 (500 Case and 500 Control) users, our ap- proach improves early and implicit SI detection by 15% over individual-only baselines. These findings highlight that peer interactions offer valuable predictive signals and carry broader implications for designing early detection sys- tems that capture indirect as well as masked expressions of risk in online environments. 1 Introduction More than 703,000 individuals die by suicide each year worldwide, making it a major global public health concern (WHO, 2025). Suicidal ideation (SI) refers to a spectrum of cognitive and behav- ioral manifestations related to suicide, ranging from passive thoughts of death to active planning and engagement in self-injurious actions with intent to die (Joiner, 2005). While early detection of these signs is critical, psychiatry and mental health ser- vices have long struggled to detect risk before indi- viduals disclose it explicitly (Calear and Batterham, 2019; Hom et al., 2017). A recent meta-analysis revealed that the estimated prevalence of SI dis- closure is only 46.3%, inferring that the majority of people who experience suicidal thoughts do not disclose them, making early detection critical for timely intervention (Hallford et al., 2023). Online platforms have become vital spaces where signs of SI may surface. People use on- line mental health communities to share distress, seek help, and connect with peers (De Choud- hury and De, 2014). This has created opportuni- ties for both researchers and clinicians to under- stand suicide risk through language, social inter- actions, and online behavioral cues. Prior work has shown the value of linguistic cues for identify- ing mental health risks (Coppersmith et al., 2014; De Choudhury et al., 2013; Guntuku et al., 2017). In particular, within the context of SI, prior work has computationally modeled the language of SI on social media (Burnap et al., 2015; Saha et al., 2019; Alghazzawi et al., 2025; Zhang et al., 2025; Naseem et al., 2025). More recent computational approaches leveraged longitudinal and multimodal data, as well as social network analyses, to antici- pate suicide-related behaviors (Shen et al., 2020). Yet, critical limitations persist. Prior research has primarily examined posts where SI is explicitly con- veyed, such as direct or indirect references to self- harm or suicidal thoughts within suicide-related fo- rums or discussions (Ji, 2022a; Bloom et al., 2025). These approaches presuppose that individuals ar- ticulate their distress, yet, many at-risk individuals neither disclose SI nor exhibit overt warning signs, but instead mask distress within seemingly ordi- nary discourse (McGillivray et al., 2022; Mérelle et al., 2018; Podlogar et al., 2022). Our work targets the detection of these undisclosed SI, characterized by subtle, contextually obscured, or socially dis- tributed indicators, to potentially enable earlier and more effective intervention. To address the above gap, our work is guided by the research question (RQ): How can early signals of SI be detected in social media activity, par- ticularly in the absence of explicit disclosures? To address the RQ, we conceptualize early and im- 1 arXiv:2510.14889v1 [cs.SI] 16 Oct 2025 plicit SI as a forward-looking prediction problem that requires modeling both an individual’s longi- tudinal behavior and the broader social context in which that behavior unfolds. We develop a frame- work that jointly captures temporal dynamics in a user’s posting history and the conversations of socially proximal peers, enabling a richer repre- sentation of early warning signals. This idea of using socially proximal peers to understand an in- dividual’s mental state is derived from many prior acclaimed psychology works (La Greca and Har- rison, 2005; Copeland, 2021; Victor et al., 2019). Within this framework, we pursue three aims: Aim 1: Examine whether users’ longitudinal post- ing patterns, in conjunction with commentary on those posts, reveal early signals of implicit SI. Aim 2: Examine how the surrounding informa- tion environment, including peer interactions and exposure to content, contributes as an additional predictive feature for implicit SI. Aim 3: Identify the linguistic markers that underlie interaction types, in terms of how specific language cues are associated with these interactions. We conducted our study on Reddit, focusing on a sample of 1,000 users divided into 500 Case and 500 Control users (who never participated in men- tal health conversations). Our predictive framework jointly modeled each user’s full posting timeline along with the discourse of their most socially prox- imal neighbors, identified through network central- ity, and fine-tuned a DeBERTa-v3 model (He et al., 2021) to embed both individual and peer interac- tion signals into a unified representation. Our findings show that incorporating peer inter- actions within the information environment, along- side users’ longitudinal posting histories, signifi- cantly enhances detection of early and implicit SI, improving performance by 15% compared to mod- els based only on longitudinal user data. Overall, this paper contributes: 1) a framework for detecting implicit SI; 2) a method for systematically inte- grating environment information through neighbor interactions; and 3) empirical evidence that this integration improves early detection. 2 Related Work 2.1 Suicidal Ideation and Social Context Psychological theory views SI as the product of intertwined social and cognitive factors. The Inter- personal Theory of Suicide (Joiner, 2005) posits that suicidal desire arises from perceived burden- someness and thwarted belongingness, while ca- pability develops through repeated exposure to pain or fear. Early computational studies of de- pression and SI focused on overt signals such as self-disclosure or negative sentiment (Coppersmith et al., 2014; De Choudhury et al., 2013), but later work showed that many at-risk individuals express distress through subtle cues, motivating methods that model temporal and semantic dynamics of be- havior (Guntuku et al., 2017; Benton et al., 2017; Saha and De Choudhury, 2017). A range of methods have been proposed to detect mental health risk signals beyond surface cues. For instance, (Fatima et al., 2021) introduced DASenti- mental, a semi-supervised model combining bag- of-words and semantic networks, while (Trotzek et al., 2018) showed that convolutional networks with linguistic metadata enable earlier detection. More recent work leverages large language mod- els, with GPT-3.5/4 using chain-of-thought prompt- ing on diaries (Shin et al., 2024) and reasoning- guided LLMs improving interpretability (Teng et al., 2025). Temporal dynamics remain critical, from emotional “phases” in user timelines (Sawh- ney et al., 2021a) to transformer-based models en- riched with temporal signals (Sawhney et al., 2020). Social context is equally important: hyperbolic em- beddings of user histories and peer interactions enhance prediction (Sawhney et al., 2021b), peer networks and conversational responses influence trajectories (Wyman et al., 2019; De Choudhury and Kiciman, 2017), and longitudinal patterns re- veal precursors of SI (De Choudhury et al., 2016). Complementary directions include clinical-domain datasets such as ScAN (Rawat et al., 2022), auto- mated counseling support with PsyGUARD (Qiu et al., 2024), and calls to model underlying intent rather than surface disclosure (Ji, 2022b). Together, this work affirms that SI arises from psychological distress, temporal dynamics, and so- cial context, demanding models that go beyond sur- face cues. Yet most approaches still rely on explicit disclosures or static timelines, overlooking how evolving language interacts with peer responses. Our framework addresses this by jointly modeling users’ longitudinal histories and post-level com- mentary, enabling early detection of implicit SI. 2.2 Social Media and Mental Health Online platforms have become key venues for mental-health self-disclosure (De Choudhury and 2 De, 2014), with communities like Reddit fostering targeted support and a sense of belonging (Saha and Sharma, 2020; De Choudhury, 2015; Shimgekar et al., 2025; Kim et al., 2023). Moderated peer- support spaces reduce isolation and help people discuss stigmatized experiences (Johnson et al., 2022). Social support, especially emotional and informational, has been shown to improve well- being both offline and online (Cutrona and Trout- man, 1986; De Choudhury and Kiciman, 2017; Saha and Sharma, 2020). Language plays a central role: psycholinguistic research links specific lin- guistic markers to mental-health outcomes (Chung and Pennebaker, 2007; Pennebaker et al., 2001), and computational studies have used these cues to detect distress and model support dynamics (Chan- cellor et al., 2016; Guntuku et al., 2017; Chancellor and De Choudhury, 2020). Prior work has also established the construct validity of these measure- ments (Saha et al., 2022). Recent NLP work has focused on interpretable and fine-grained modeling of mental health on so- cial media. Symptom-based approaches such as PSYSYM (Zhang et al., 2022; Chen et al., 2023) link online language to clinically meaningful cate- gories of disorders. Depression severity has been quantified through semantic similarity to symptom descriptors (Pérez et al., 2022), while large lan- guage models now enable explainable detection with interpretable rationales (Wang et al., 2024). Analyses of pre and post diagnosis language shifts, highlight the temporal dynamics of distress ex- pression (Alhamed et al., 2024). Supportive lan- guage marked by adaptability, immediacy, and emo- tionality predicts better outcomes (Althoff et al., 2016; Saha and Sharma, 2020), and automatic empathy-detection models scale such insights to peer-support settings (Sharma et al., 2020). For SI, machine-learning approaches have identified risk signals in social media language alongside emo- tional patterns that precede suicide attempts (Cop- persmith et al., 2016; De Choudhury et al., 2016). While online data shows promise for early risk detection, most methods isolate either individual language or specific interactions. Our approach in- stead models full posting timelines alongside peer influences, capturing risk even without explicit SI disclosures. Our methodological framework identi- fies early indicators without requiring references to self-harm or participation in such spaces. Figure 1: Distribution of SI probability for the same set of users before their last non suicidal post and their first post in r/SuicideWatch. 3 Data We used data from r/SuicideWatch, a semi- anonymous Reddit community focused on SI, alongside posts and comments from other sub- reddits. From the PushShift archive (Baumgart- ner et al., 2020) of April 2019 (18.3M posts, 138.5M comments), the data includes 10,037 posts and 38,130 comments from r/SuicideWatch. Prior work has leveraged Reddit for SI (De Choudhury et al., 2016; De Choudhury and Kiciman, 2017; Shimgekar et al., 2025) and broader mental health studies (Sharma and De Choudhury, 2018; Saha et al., 2020; De Choudhury and De, 2014). Constructing Case and Control datasets. We iden- tified two user cohorts. The first cohort comprises 500 Case individuals, defined as individuals who have made at least one post on r/SuicideWatch. The second cohort consists of 500 Control individuals, who never participated in any subreddit related to mental health. We identified the Control users by referencing the taxonomy of mental health-related subreddits from prior work (Sharma and De Choud- hury, 2018). For the Case group, all posts and comments before each user’s first r/SuicideWatch disclosure were labeled positive (1). On average, users made 10.07 posts before disclosure, transitioning within 1.5 days of their last non-r/SuicideWatch post. To construct a balanced Control group, we sampled the first 10 posts from each user (matching Case averages), labeling them negative (0). The dataset was then split 75:25 by users into train and test sets, ensuring no overlap. A zero-shot NLI model (Laurer et al., 2023) re- vealed significantly higher SI probabilities in users’ first r/SuicideWatch posts, indicating that entry marks a key turning point (Figure 1). The model, 3 Post 1 (r/random) Comment 2 (r/random2) Post 3 (r/random3) Post 4 (r/SuicideWatch) Neighbors' Posts R1, R2 Top  Neighbors U User "U" User-User Graph of "U" Others' Replies R1, R2 Neighbors' Posts R1, R2 Others' Replies R1, R2 Neighbors' Posts R1, R2 Others' Replies R1, R2 Neighbor Interaction Immediate  Interaction Input Data Formalization Finetuned Model Early and Implicit Suicide Detection [yes/no] Figure 2: Illustration of user interactions (Immediate/Neighbor): Immediate interactions include users’ self-posts, self-comments, and received replies, while neighbor interactions represent top neighbor posts and self-posts. Top neighbors identified via NeighborScore based on DeBERTa-v3-base, was trained on 1.3M hypothesis–premise pairs from eight NLI datasets (e.g., MultiNLI, FEVER-NLI, LingNLI, DocNLI) to capture long-range reasoning. Using zero-shot labels (SI, non-SI), it assigned SI probabilities to posts, showing that users’ first r/SuicideWatch posts exhibit stronger suicidal self-disclosure than their prior posts. While these probabilities may partly reflect the act of joining r/SuicideWatch, the linguis- tic patterns align more with distress and self-harm intent, supporting their use as an operational proxy for heightened suicidal self-disclosure. 4 Methods To address our RQ on detecting early signals of SI in social media activity, particularly when ex- plicit disclosures are absent, we conceptualized implicit SI as a forward-looking prediction prob- lem of the likelihood that a user would eventu- ally make a SI disclosure, proxied by their first post on r/SuicideWatch. Our framework models a user’s longitudinal activity and social context via two interaction types: (1) immediate inter- actions—self-posts, self-comments, and received comments, and (2) neighbor interactions—self- posts and top neighbors’ posts. This captures both active engagement and passive exposure, reflect- ing how users interact with and are influenced by content they relate to (De Choudhury and Kiciman, 2017). Our methodological framework consists of five components—1) Timeline construction, 2) Neigh- boring user detection, 3) Input data formalization, and 4) implicit SI signal classification with mod- eling, 5) linguistic analysis of interaction types, which we describe below: 4.1 Timeline Construction For each user U, we analyzed their full post and comment history for SI signals, ordered chronolog- ically from earliest to latest. 4.2 Neighboring user detection For a user U, we found its top neighbors by the following four steps: Step 1: Initial User Collection We first collected all users who interacted with the user U, defined as either U commenting on their posts/comments or them commenting on U’s posts/comments. This neighbor-identification procedure was then applied recursively to a maximum depth d=3, ensuring that both direct and indirect neighbors of U were cap- tured. Step 2: User-User Graph Based on all the initial collected users, we constructed a user-user graph where each Node represents an individual user in the network, and an undirected Edge connects two users if either of the users has commented on the other’s post. The weight of the edge was quantified as the total number of comment-based interactions, with higher weights indicating stronger ties. Step 3: Top Neighbor Detection From the user-user graph, we identified the top- n (n=10) neighbors for U. We ranked the neighbors using a NeighborScore, a com- bined centrality score S(n), defined as: S(n) = Cin-degree(n) + Cout-degree(n) + Ccloseness(n) + Ceigenvector(n) + Cbetweenness(n) + CPageRank(n) Cin-degree and Cout-degree capture normalized con- nectivity, Cbetweenness measures shortest-path cen- trality, Ccloseness denotes proximity, Ceigenvector 4 Table 1: Model performance metrics on different data combinations for Models M1–M4 (epochs=20). Data Used Acc. F1 Prec. Rec. M1 Self-posts 0.86 0.85 0.88 0.84 M2 Self-posts + Self-comments 0.81 0.83 0.78 0.91 M3 Self-posts + Self-comments + Others’ comments 0.80 0.79 0.79 0.80 M4 Self-posts + Top neighbor posts 0.95 0.96 0.93 0.99 Table 2: Model performance on showing the importance of choosing the best neighbors (epochs=20). Data Used Acc. F1 Prec. Rec. M4 Self-posts + top neighbor posts 0.95 0.96 0.93 0.99 M5 Self-posts + worst neighbor posts 0.75 0.76 0.78 0.75 M6 Self-posts + non-neighbor posts 0.68 0.71 0.67 0.78 reflects influence via important neighbors, and CPageRank estimates probabilistic importance. The aggregated NeighborScore identifies peers with strong direct ties and broader network influence around U. As some centralities are correlated, S(n) approximates overall neighbor prominence rather than precise influence. Performance variation by neighbor depth and count is shown in Appendix Ta- ble A9. 4.3 Input Data Formalization We structured the input to language models to cap- ture U’s timeline, others’ replies, and peer interac- tions. The overall data formalization, distinguish- ing immediate and neighbor interactions, is shown in Figure 2. Immediate Interaction We aggregated content from a user’s timeline, along with others’ replies, and then used all of these textual content for fine- tuning the language model. To train our models, we used different combinations of features—1) U’s self-posts, 2) U’s self-posts and self-comments, and 3) U’s self-posts, self-comments, and others’ replies to U’s posts or comments. Neighbor Interaction Neighbor Interaction cap- tures signals from proximate peers. For this pur- pose, we temporally aligned the timelines of U and their top-n neighbors. Then, at each timestamp i, we selected ten posts by the top neighbors closest in time to U’s post. We aggregated the neighbors’ posts with U’s posts and embedded them into a dense vector representation. This approach led to the fourth type of model, where we included fea- tures from both immediate interactions above, as well as the top neighbors’ posts. Table 3: Model performance excluding neighbors with r/SuicideWatch posts. Results remain comparable to using all neighbors, indicating that broader neighbor interactions capture key signals for implicit SI detection (20 epochs). Data Used Acc. F1 Prec. Rec. M4 Self-posts + top neighbor posts 0.95 0.96 0.93 0.99 M7 Self-posts + filtered neighbor posts 0.89 0.90 0.87 0.93 4.4 Implicit SI signal classification with modeling For our study, we leveraged Microsoft’s Deberta- v3-large model (He et al., 2021)—a 418M param- eter transformer with 24 layers that processes se- quences up to 512 tokens. We fine-tuned this model using CLS for global representation and SEP for segment boundaries on an Nvidia A100 GPU. We framed our problem of detecting implicit SI as a binary classification task—labeling each input xi as yi ∈{0, 1} for absence or presence of risk. We tokenized text into subword embeddings with at- tention masks, padded or truncated to 512 tokens, and encoded them to obtain a pooled [CLS] vector (1024-dim). A linear layer then mapped this vector to logits, followed by softmax for class probabil- ities. We optimized the model with cross-entropy loss using AdamW (learning rate 2 × 10−5, weight decay 0.01) for 20 epochs with batch size 8, and evaluated after each epoch on a held-out validation set using accuracy, precision, recall, and F1-score. Results from fine-tuning alternative base models are provided in Appendix Table A2. 4.5 Linguistic Analysis of Interaction Types We analyzed lexical, topical, and psycholinguistic patterns across users’ self-posts, self-comments, replies, and top-neighbor posts. For lexical analy- sis, we used the Sparse Additive Generative Model (SAGE) (Eisenstein et al., 2011) to identify dis- criminative unigrams and bigrams distinguishing immediate and neighbor interactions. SAGE uses multinomial models with adaptive regularization to balance frequent and rare terms. For topical analy- sis, we employed BERTopic (Grootendorst, 2022) on Case and Control data, varying topic numbers (k=2–15) and achieving the highest coherence at k=12. After removing one outlier, eleven topics remained (Table 4) whose topic names were as- signed by our clinical psychologist coauthor. Their normalized proportions captured topical variation across interaction types Table 5. For psycholinguis- tic analysis, we examined the occurrences of the af- 5 20 40 60 80 user posts(t) 0.90 0.91 0.92 0.93 0.94 eval_accuracy t=88, 0.943 (a) Accuracy 20 40 60 80 user posts(t) 0.91 0.92 0.93 0.94 0.95 eval_f1 t=88, 0.948 (b) F1 Score 20 40 60 80 user posts(t) 0.925 0.930 0.935 0.940 0.945 eval_precision t=73, 0.948 (c) Precision 20 40 60 80 user posts(t) 0.89 0.90 0.91 0.92 0.93 0.94 0.95 eval_recall t=44, 0.956 (d) Recall Figure 3: Performance metrics on varying the number of input user posts (t). The plots show that the performance peaks at t=88. 2 4 6 8 10 12 14 neighbor post(n) 0.88 0.90 0.92 0.94 0.96 eval_accuracy n=7, 0.966 (a) Accuracy 2 4 6 8 10 12 14 neighbor post(n) 0.88 0.90 0.92 0.94 0.96 eval_f1 n=7, 0.970 (b) F1 Score 2 4 6 8 101214 neighbor post(n) 0.84 0.86 0.88 0.90 0.92 0.94 0.96 0.98 eval_precision n=10, 0.978 (c) Precision 2 4 6 8 10 12 14 neighbor post(n) 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 eval_recall n=7, 0.990 (d) Recall Figure 4: Performance metrics on varying the number of neighbor posts (n). The plots show that the performance peaks at around n=7. fect category of keywords as per the well-validated Linguistic Inquiry and Word Count (LIWC) lexi- con (Tausczik and Pennebaker, 2010). 5 Results 5.1 Model Performance (Immediate Interaction vs. Neighbor Interaction): Table 1 summarizes the performance comparison of models with combinations of immediate inter- actions and neighbor interactions feature set. The baseline model using only a target user’s posts per- forms strongly (Accuracy = 0.86, F1 = 0.85), showing that self-authored text alone captures salient linguistic markers of future SI, consistent with prior temporal analyses on Reddit and Face- book (Coppersmith et al., 2018; De Choudhury et al., 2016). Adding the user’s self-comments boosts recall (0.84 →0.91) but lowers precision (0.88 →0.78), suggesting broader coverage but added noise in terms of precision. For example, a comment from a Case user reads, “Are they allowed to hit me? [..] I need to stay strong,” including both themes of physical abuse as well as optimism. Addi- tionally, including replies to user’s posts/comments further reduces accuracy (0.81 →0.80) and F1 (0.83 →0.79). Such replies often convey empa- thy or advice that mask the user’s true mindset (Gkotsis et al., 2017), as in one response: “I think talking to a professional would help, because he can understand you and give you advice.” We note that the top-neighbor interaction approach achieves the highest performance (Accuracy=0.95, F1=0.96, Precision=0.93, Recall=0.99), with high recall re- ducing false negatives, which is critical for SI risk detection (Franklin et al., 2017). This suggests that features reflecting social exposure and information environment context provide valuable predictive signals for implicit SI. 5.2 Determining Optimal Post Counts for Robust Prediction We evaluated how many posts from users (Case and Control) and their top neighbors optimize implicit SI detection. First, varying the number of neigh- bor posts (n) showed an improvement in accuracy from 0.87 to 0.97, with near-perfect recall (0.99) at n = 7, after which the gains plateaued (Figure 4). With n = 7 fixed, varying self-posts (t) revealed unstable performance at low t (e.g., recall 0.87 at t = 1), stabilizing for t ≥40; the best results oc- curred at t = 88 (accuracy 0.94, F1 0.95, precision 0.95, recall 0.96) (Figure 3). These findings indi- cate that combining a user’s full history with a few 6 Table 4: Clinically informed topics identified from posts and comments via topic modeling, with corresponding explanations and keywords. Topic Theme Explanation Keywords Self-Harm Tools Mentions of knives, blades, or cutting instruments indicating self-injury ideation knife, blade, cut, sharpen Diet / Body Image Discussions of calories, weight, dieting, or eating habits reflect- ing body concerns calori, weight, diet, eat Physical Pain / Health Mentions of pain, medication, or suffering over time pain, aspirin, suffer, longer War / Military Discussions related to military, soldiers, or historical conflicts soldier, german, armi, soviet Academic / Mental Stress Mentions of depression, PhD, or mental strain depress, phd, ggoddamn, eurozon Cringe / Embarrassment Mentions of awkwardness, social discomfort, or cringing situa- tions cri, cringi, crusad, crimin Burn / Injury Mentions of burns, cuts, or physical injury burn, cut, hurt, degre Self-Harm / Coping Strategies Mentions of harming oneself or suicide coping mechanisms harm, self, suicidebyword, cope Violent Ideation Mentions of killing or violent intent in extreme contexts kill, killin, zaibatsu, meeee Confusion / Uncertainty Expressions of perplexity or inability to understand situations confus, percent, stranger, 12ish Cute / Affection Positive, playful, or affectionate language cute, soo, aww, omgggg Table 5: Normalized topic proportions across different content sources. ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Topic User’s Self- Posts User’s Self- Comments Other’s Com- ments Top Neigh- bor’s Posts Kruskal- Wallis H-stat. Self-Harm Tools 0.002 0.002 0.002 0.006 44.08*** Diet & Body Image 0.005 0.003 0.003 0.003 46.36*** Physical Pain & Health 0.003 0.002 0.002 0.003 51.55*** War & Military 0.013 0.003 0.001 0.002 7.45* Academic Stress 0.001 0.001 0.002 0.002 43.81*** Cringe & Embarrass- ment 0.000 0.001 0.001 0.002 39.49*** Burn & Injury 0.000 0.000 0.000 0.001 1.84 Coping & Self-Harm 0.001 0.001 0.001 0.002 22.81*** Violent Ideation 0.001 0.001 0.000 0.002 26.13*** Confusion & Uncer- tainty 0.001 0.001 0.001 0.000 12.62** Cute / Affection 0.000 0.001 0.001 0.002 22.93*** neighbor posts yields a robust SI predictor. 5.3 Role of Top Neighbor Posts in Capturing SI Signals To understand the superior performance of top- neighbor-based modeling, we analyzed lexical, top- ical, and psycholinguistic patterns in users’ self- posts, self-comments, replies, and posts of their top neighbors. Others’ replies to a user’s posts or comments provide a limited discriminative sig- nal: common bigrams such as “feel like”, “year old”, and “every day” appear across both Case and Control data, reflecting supportive or neutral discourse (Zirikly et al., 2019). Topic modeling shows that clinically relevant themes (e.g., Self- Harm Tools, Coping/Self-Harm, Violent Ideation) are rare in others’ replies, while users’ own posts exhibit only moderate and inconsistent signals (Ta- ble 4, Table 5). In contrast, top neighbor posts con- sistently reflect mental-health-related themes that are highly predictive of SI (Kirtley et al., 2021). For Case users, neighbors frequently post about Self- Harm Tools, Coping / Self-Harm, Violent Ideation, Physical Pain / Health, and Academic / Mental Stress, with bigrams such as “mental health”, “sui- cide prevention”, and “emotional support” appear- ing frequently. Neighbors of Control users, by com- parison, focus on neutral or unrelated topics (Ta- ble 4, Table 5). Table 6 shows that differences in affective categories (e.g., negative affect, anger, and sadness) are statistically significant, indicating that including neighbor posts enhances the distinction between Case and Control users’ content. There- fore, top-neighbor posts provide richer social con- text than replies or isolated user history. Combining them with user history yields a more robust model for early detection of early and implicit SI. 5.4 Robustness Tests: To ensure that our findings are not artifacts of specific modeling or sampling choices, we con- ducted additional robustness tests by varying the type of neighbors included—for instance, incor- porating non-top neighbors into our models. For this purpose, we built additional models M5 with the lowest-ranked neighbors and M6 with non- neighbors—a random selection of other users—of a target user. Table 2 compares the performance—we note both M4 and M5 perform significantly poorly. Therefore, the performance boost by including the top neighbor posts is not by chance, and rather an important element in detecting implicit SI (Cero et al., 2024). Moreover, Table 3 demonstrates that the model retains strong predictive performance even after filtering out all top neighbors who have posted in r/SuicideWatch. This finding indicates that successful detection does not rely solely on neighbors’ direct SI-related language, but rather on broader contextual cues captured from high-quality interactions. Appendix Table A4 shows linguistic analyses aligned with Table 2, Table A3 highlights “Affect” differences linked to Table 3, and addi- 7 Table 6: Comparison of LIWC Affect categories across M1–M4. Columns show normalized category counts for Case and Control groups and their differences. ***p<0.001, *p<0.01, p<0.05. Category M1 (Immediate) M2 (Immediate) M3 (Immediate) M4 (Neighbor) Case Control Diff. H Case Control Diff. H Case Control Diff. H Case Control Diff. H Pos. Affect 0.049 0.053 -0.004 38.80*** 0.080 0.074 0.006 416.94*** 0.060 0.057 0.003 45.86*** 0.048 0.053 -0.005 23.57*** Neg. Affect 0.029 0.025 0.004 74.947*** 0.035 0.030 0.005 651.17*** 0.029 0.026 0.003 100.07*** 0.029 0.021 0.007 169.87*** Anxiety 0.004 0.003 0.001 125.379*** 0.003 0.002 0.001 358.217*** 0.002 0.002 0.000 176.606*** 0.003 0.002 0.000 62.663*** Anger 0.009 0.006 0.003 50.654*** 0.012 0.010 0.002 322.855*** 0.010 0.008 0.002 77.809*** 0.011 0.006 0.005 130.857*** Sadness 0.007 0.007 0.000 71.131*** 0.008 0.007 0.001 376.545*** 0.006 0.005 0.001 136.379*** 0.009 0.005 0.004 181.551*** tional model comparisons with LIWC results are in Appendix Table A1 (Tables A5–A8). 6 Discussion and Implications A key takeaway of our study is that, early and im- plicit SI is best detected by modeling both an indi- vidual’s longitudinal online activity and their sur- rounding information environment, capturing the inherently relational nature of SI expressions (Am- merman and Jacobucci, 2023). Neighbor-based modeling is particularly effective, as top neigh- bors, weighted via a NeighborScore, capture direct behavior and network-level influences, consistent with epidemiological evidence and classic effects like Werther and Papageno (Wyman et al., 2019; Phillips, 1974; Niederkrotenthaler et al., 2010; Yuan et al., 2023). Neighbor posts provide com- plementary signals beyond the user’s self-content, showing that exposure to neighbors’ expressions of distress or coping is associated with detection. Neighbor-informed models frame suicidality within a social-ecological perspective (Bronfen- brenner, 1979), reflecting social contagion pro- cesses where suicidal behaviors and ideation can spread through networks, especially when peers disclose distress (Wyman et al., 2019; Gould et al., 2003). Online environments may amplify these effects, as individuals in high-risk networks can show subtle precursors before explicit disclosure. Network theories suggest that peers model behav- ior and influence interpretations of distress (Cero et al., 2024; Bearman and Moody, 2004; Christakis and Fowler, 2025), explaining why neighbor in- teractions reflect relational signals of implicit SI. A user’s longitudinal posting history remains es- sential, tracing gradual shifts in tone, sentiment, and discourse (De Choudhury et al., 2016). How- ever, interactional content must be selective: in- cluding all replies adds noise and weakens risk sig- nals (Schmidt et al., 2024). As shown in Table 6, the difference between the LIWC “Affect” categories is consistently higher in M4 where neighbors’ posts are added. This shows that including posts from the neighbors add a clear distinction signal between the Case and Control data, making it better for the model to predict (Lu and Tu, 2024; Guo et al., 2012). These findings suggest avenues for theoreti- cal and methodological extensions. Beyond the In- terpersonal Theory of Suicide (Joiner, 2005), mod- els such as the Integrated Motivational Volitional Model (O’Connor and Kirtley, 2018) and the Three- Step Theory (Klonsky and May, 2015) may help identify subtle motivational or volitional cues in language and interactions. The strong influence of social context further supports contagion and network theories, indicating that implicit suicidal ideation emerges from both individual cognition and broader interpersonal environments. Overall, this work enhances detection accuracy while fram- ing suicide risk as a relational process in digital social environments. By showing that peer-network interactions substantially improve predictive power, it bridges computational modeling with network- informed prevention strategies. 7 Conclusion Our study demonstrates that early and implicit SI is best detected when a user’s longitudinal activ- ity is interpreted within their social context. Subtle linguistic and interactional cues often emerge well before explicit disclosures, enabling identification of risk trajectories at an early stage. By incorporat- ing posts from socially proximal peers, our frame- work operationalizes theories of suicide contagion and social influence, yielding substantial gains in predictive performance. Notably, while a user’s own posts and comments provide moderate insight, even a small set of strategically selected neighbor posts carries a significantly strong predictive sig- nal. These findings highlight that suicide risk is inherently relational: effective detection requires selectively integrating self-content and socially dis- tributed cues. Beyond improving accuracy, this re- lational framing supports scalable, context-aware, and ethically responsible early-warning systems that align with both clinical theory and the dynam- ics of online communities. 8 8 Limitations Our study has limitations that also show interesting future directions. Methodologically, our models exclusively rely on textual content, overlooking multimodal signals such as images, videos, emojis, GIFs, or external links that often convey emotional states, coping strategies, or distress. Incorporating these modalities could improve sensitivity to subtle risk signals, enhance interpretability, and provide a more holistic understanding of online behaviors associated with suicidal ideation. Similarly, our current approach treats peer interactions homoge- neously, though peers can exert positive, neutral, or negative influences. Modeling these distinctions could capture the functional impact of social con- text on risk trajectories and guide ethically respon- sible interventions. Finally, robustness and generalizability can be improved through replication across platforms (e.g., Twitter, TikTok, Discord), multilingual and cross- cultural settings, and naturalistic populations. Tem- poral weighting, trajectory-based risk modeling, and interpretability techniques such as attention visualization, SHAP, or counterfactuals could clar- ify key linguistic and network features while en- hancing actionable insights. Future work should integrate multimodal inputs, refine social context modeling, and maintain rigorous ethical oversight to ensure predictive models support vulnerable in- dividuals responsibly and effectively. 9 Ethical Considerations Reflexivity This paper used publicly accessible social media discussions on Reddit and did not re- quire direct interactions with individuals, thereby not requiring ethics board approval. However, we are committed to the ethics of the research and we followed practices to secure the privacy of individ- uals in our dataset. Our research team comprises researchers holding diverse gender, racial, and cul- tural backgrounds, including people of color and immigrants, and hold interdisciplinary research ex- pertise. This team consists of computer scientists with expertise in social computing, NLP, and HCI, and psychologists with expertise in clinical psy- chology, adolescent depression and suicide, and digital health interventions. One psychologist coau- thor specializes in suicide etiology, suicide preven- tion, and crisis intervention, and another psychol- ogist coauthor is a clinical psychologist with over 16 years of experience spanning adult and adoles- cent inpatient care and crisis suicide helplines. To ensure validity and prevent misrepresentation, our findings were reviewed and corroborated by our psychologist coauthors. However, our work is not intended to replace the clinical evaluation of an individual undergoing suicidal thoughts and should not be taken out of context to conduct mental health assessments. Risk of Misinterpretation and Harm with Au- tomated Detection Our study leverages compu- tational methods to identify early and implicit sui- cidal ideation (SI) in online communities. While these approaches provide opportunities for early intervention, they also present serious ethical chal- lenges. Automated systems may misinterpret sub- tle linguistic signals or social cues, leading to false positives or false negatives. Misclassification can result in unwarranted labeling of individuals as at risk, potentially causing stigma, anxiety, or social consequences. Conversely, failing to detect gen- uine risk may deny individuals timely support. We emphasize that suicidal ideation manifests hetero- geneously across individuals, and even carefully designed models cannot fully capture the nuanced context of personal distress. Privacy, Consent, and Data Sensitivity Given the sensitive nature of suicidal thoughts, the pri- vacy and confidentiality of users are paramount. Our approach relies on publicly available text, but the inclusion of social network information, even aggregated, raises concerns about inadvertent expo- sure of personal interactions. Ethical deployment requires anonymization, secure storage, and strict access controls. Users should be informed about potential data usage and retention, and explicit con- sent should be prioritized wherever feasible. Social Context and Potential Misuse By mod- eling social context and peer interactions, our work operationalizes theories of social influence in sui- cidal ideation. However, this introduces additional ethical considerations. Insights derived from peer interactions could be misused if exploited for non- clinical purposes, such as targeted advertising or profiling of vulnerable individuals. Careful gover- nance and ethical oversight are necessary to ensure that social context is leveraged solely for support- ive, preventive interventions rather than for com- mercial or punitive applications. Transparency, Interpretability, and Responsi- ble Use Ethical deployment also requires trans- 9 parency and interpretability. Systems identifying SI must clearly communicate their scope, limitations, and the fact that outputs are probabilistic, not diag- nostic. Stakeholders, including clinicians, platform moderators, and potentially affected users, should understand how predictions are generated, espe- cially when interventions are triggered. Responsi- ble use also involves integrating human-in-the-loop frameworks, where trained professionals augment computational predictions to avoid over-reliance on automated systems. Future Directions for Ethical AI Future re- search should explore multimodal detection meth- ods while carefully balancing privacy and ethical constraints. Integrating images, URLs, or videos may improve sensitivity, but must be approached with strict ethical safeguards. Additionally, system- atically categorizing peer influence, positive, neu- tral, or negative, could enhance the interpretability and safety of predictions, ensuring interventions are informed by context rather than raw social activ- ity. Overall, ethical design in computational mental health requires prioritizing user welfare, minimiz- ing harm, and embedding accountability at every stage of model development and deployment. 10 AI Involvement Disclosure Certain sections of the manuscript were refined us- ing AI-assisted writing tools (e.g., ChatGPT, Gram- marly). All analyses, scientific content, and experi- ments were written solely by the authors. References Daniyal Alghazzawi, Hayat Ullah, Naila Tabassum, Sahar K Badri, and Muhammad Zubair Asghar. 2025. Explainable ai-based suicidal and non-suicidal ideations detection from social media text with en- hanced ensemble technique. Scientific Reports, 15(1):1111. Falwah Alhamed, Julia Ive, and Lucia Specia. 2024. Classifying social media users before and after de- pression diagnosis via their language usage: A dataset and study. In Proceedings of the 2024 Joint Interna- tional Conference on Computational Linguistics, Lan- guage Resources and Evaluation (LREC-COLING 2024), pages 3250–3260. Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computa- tional Linguistics, 4:463–476. Brooke A Ammerman and Ross Jacobucci. 2023. The impact of social connection on near-term suicidal ideation. Psychiatry research, 326:115338. Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. In Proceedings of the inter- national AAAI conference on web and social media, volume 14, pages 830–839. Peter S Bearman and James Moody. 2004. Suicide and friendships among american adolescents. American journal of public health, 94(1):89–95. Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multi-task learning for mental health using social media text. arXiv preprint arXiv:1712.03538. Paul Bloom, Isaac Treves, David Pagliaccio, Isabella Nadel, Emma Wool, Hayley Quinones, Julia Green- blatt, Natalia Parjane, Katherine Durham, Saman- tha Salem, and 1 others. 2025. Identifying suicide- related language in smartphone keyboard entries among high-risk adolescents. Urie Bronfenbrenner. 1979. The ecology of human de- velopment: Experiments by nature and design. Har- vard university press. Pete Burnap, Walter Colombo, and Jonathan Scourfield. 2015. Machine classification and analysis of suicide- related communication on twitter. In Proc. ACM conference on hypertext & social media. Alison L Calear and Philip J Batterham. 2019. Suicidal ideation disclosure: Patterns, correlates and outcome. Psychiatry research, 278:1–6. Ian Cero, Munmun De Choudhury, and Peter A Wyman. 2024. Social network structure as a suicide preven- tion target. Social psychiatry and psychiatric epi- demiology, 59(3):555–564. Stevie Chancellor and Munmun De Choudhury. 2020. Methods in predictive techniques for mental health status on social media: a critical review. NPJ digital medicine, 3(1):1–11. Stevie Chancellor, Zhiyuan Lin, Erica L Goodman, Stephanie Zerwas, and Munmun De Choudhury. 2016. Quantifying and predicting mental illness severity in online pro-eating disorder communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pages 1171–1184. ACM. Siyuan Chen, Zhiling Zhang, Mengyue Wu, and Kenny Zhu. 2023. Detection of multiple mental disorders from social media with two-stream psychiatric ex- perts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9071–9084. Nicholas A Christakis and James H Fowler. 2025. Con- nected: The surprising power of our social networks and how they shape our lives. Hachette+ ORM. 10 Cindy Chung and James W Pennebaker. 2007. The psychological functions of function words. Social communication, pages 343–359. Molly Copeland. 2021. The long shadow of peers: ado- lescent networks and young adult mental health. So- cial Sciences, 10(6):231. Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying mental health signals in twitter. In Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, pages 51–60. Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural language processing of so- cial media as screening for suicide risk. Biomedical informatics insights, 10:1178222618792860. Glen Coppersmith, Kim Ngo, Ryan Leary, and Anthony Wood. 2016. Exploratory analysis of social media prior to a suicide attempt. In Proceedings of the third workshop on computational linguistics and clinical psychology, pages 106–117. Carolyn E Cutrona and Beth R Troutman. 1986. So- cial support, infant temperament, and parenting self- efficacy: A mediational model of postpartum depres- sion. Child development, pages 1507–1518. Munmun De Choudhury. 2015. Social media for men- tal illness risk assessment, prevention and support. In Proceedings of the 1st ACM Workshop on Social Media World Sensors, pages 1–1. ACM. Munmun De Choudhury and Sushovan De. 2014. Men- tal health discourse on reddit: Self-disclosure, social support, and anonymity. In Proceedings of the inter- national AAAI conference on web and social media, volume 8, pages 71–80. Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. In Proceedings of the international AAAI conference on web and social media, volume 7, pages 128–137. Munmun De Choudhury and Emre Kiciman. 2017. The language of social support in social media and its effect on suicidal ideation risk. In Proceedings of the international AAAI conference on web and social media, volume 11, pages 32–41. Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discov- ering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI conference on human factors in computing sys- tems, pages 2098–2110. Jacob Eisenstein, Amr Ahmed, and Eric P Xing. 2011. Sparse additive generative models of text. In Pro- ceedings of the 28th international conference on ma- chine learning (ICML-11), pages 1041–1048. Asra Fatima, Ying Li, Thomas Trenholm Hills, and Massimo Stella. 2021. Dasentimental: Detecting de- pression, anxiety, and stress in texts via emotional recall, cognitive networks, and machine learning. Big Data and Cognitive Computing, 5(4):77. Joseph C Franklin, Jessica D Ribeiro, Kathryn R Fox, Kate H Bentley, Evan M Kleiman, Xieyining Huang, Katherine M Musacchio, Adam C Jaroszewski, Bernard P Chang, and Matthew K Nock. 2017. Risk factors for suicidal thoughts and behaviors: A meta- analysis of 50 years of research. Psychological bul- letin, 143(2):187. George Gkotsis, Anika Oellrich, Sumithra Velupillai, Maria Liakata, Tim JP Hubbard, Richard JB Dob- son, and Rina Dutta. 2017. Characterisation of men- tal health conditions in social media using informed deep learning. Scientific reports, 7(1):1–11. Madelyn Gould, Patrick Jamieson, and Daniel Romer. 2003. Media contagion and suicide among the young. American Behavioral Scientist, 46(9):1269–1284. Maarten Grootendorst. 2022. Bertopic: Neural topic modeling with a class-based tf-idf procedure. arXiv preprint arXiv:2203.05794. Sharath Chandra Guntuku, David B Yaden, Margaret L Kern, Lyle H Ungar, and Johannes C Eichstaedt. 2017. Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences, 18:43–49. Guibing Guo, Jie Zhang, and Daniel Thalmann. 2012. A simple but effective method to incorporate trusted neighbors in recommender systems. In International conference on user modeling, adaptation, and per- sonalization, pages 114–125. Springer. David John Hallford, Danielle Rusanov, B Winestone, R Kaplan, Matthew Fuller-Tyszkiewicz, and Glenn Melvin. 2023. Disclosure of suicidal ideation and behaviours: A systematic review and meta-analysis of prevalence. Clinical psychology review, 101:102272. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. Debertav3: Improving deberta using electra-style pre- training with gradient-disentangled embedding shar- ing. arXiv preprint arXiv:2111.09543. Melanie A Hom, Ian H Stanley, Matthew C Podlogar, and Thomas E Joiner Jr. 2017. “are you having thoughts of suicide?” examining experiences with disclosing and denying suicidal ideation. Journal of Clinical Psychology, 73(10):1382–1392. Shaoxiong Ji. 2022a. Towards intention understanding in suicidal risk assessment with natural language pro- cessing. In Findings of the Association for Computa- tional Linguistics: EMNLP 2022, pages 4057–4067. The Association for Computational Linguistics. Shaoxiong Ji. 2022b. Towards intention understanding in suicidal risk assessment with natural language pro- cessing. In Findings of the Association for Computa- tional Linguistics: EMNLP 2022, pages 4057–4067. The Association for Computational Linguistics. 11 Jazette Johnson, Vitica Arnold, Anne Marie Piper, and Gillian R Hayes. 2022. " it’s a lonely disease": Culti- vating online spaces for social support among people living with dementia and dementia caregivers. Pro- ceedings of the ACM on Human-Computer Interac- tion, 6(CSCW2):1–27. Thomas E. Joiner. 2005. Why People Die by Suicide. Harvard University Press. Meeyun Kim, Koustuv Saha, Munmun De Choudhury, and Daejin Choi. 2023. Supporters first: understand- ing online social support on mental health from a supporter perspective. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1):1–28. Olivia J Kirtley, Ian Hussey, and Lisa Marzano. 2021. Exposure to and experience of self-harm and self- harm related content: An exploratory network analy- sis. Psychiatry Research, 295:113572. E David Klonsky and Alexis M May. 2015. The three- step theory (3st): A new theory of suicide rooted in the “ideation-to-action” framework. International Journal of Cognitive Therapy, 8(2):114–129. Annette M La Greca and Hannah Moore Harrison. 2005. Adolescent peer relations, friendships, and romantic relationships: Do they predict social anxiety and de- pression? Journal of clinical child and adolescent psychology, 34(1):49–61. Moritz Laurer, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2023. Less annotating, more classifying: Addressing the data scarcity issue of supervised machine learning with deep transfer learning and bert-nli. Political Analysis, pages 1–33. Fangcao Lu and Caixie Tu. 2024. The impact of com- ment slant and comment tone on digital health com- munication among polarized publics: A web-based survey experiment. Journal of medical Internet re- search, 26:e57967. Lauren McGillivray, Demee Rheinberger, Jessica Wang, Alexander Burnett, and Michelle Torok. 2022. Non- disclosing youth: a cross sectional study to under- stand why young people do not disclose suicidal thoughts to their mental health professional. BMC psychiatry, 22(1):3. Saskia Mérelle, Elise Foppen, Renske Gilissen, Jan Mokkenstorm, Resi Cluitmans, and Wouter Van Bal- legooijen. 2018. Characteristics associated with non- disclosure of suicidal ideation in adults. Interna- tional journal of environmental research and public health, 15(5):943. Usman Naseem, Liang Hu, Qi Zhang, Shoujin Wang, and Shoaib Jameel. 2025. Digri: Distorted greedy approach for human-assisted online suicide ideation detection. In Proceedings of the ACM on Web Con- ference 2025, pages 5192–5201. Thomas Niederkrotenthaler, Martin Voracek, Arno Her- berth, Benedikt Till, Markus Strauss, Elmar Etzers- dorfer, Brigitte Eisenwort, and Gernot Sonneck. 2010. Role of media reports in completed and prevented suicide: Werther v. papageno effects. The British Journal of Psychiatry, 197(3):234–243. Rory C O’Connor and Olivia J Kirtley. 2018. The in- tegrated motivational–volitional model of suicidal behaviour. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1754):20170268. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001. Anxo Pérez, Neha Warikoo, Kexin Wang, Javier Parapar, and Iryna Gurevych. 2022. Semantic similarity mod- els for depression severity estimation. arXiv preprint arXiv:2211.07624. David P Phillips. 1974. The influence of suggestion on suicide: Substantive and theoretical implications of the werther effect. American sociological review, pages 340–354. Matthew C Podlogar, Peter M Gutierrez, and Thomas E Joiner. 2022. Past levels of mental health intervention and current nondisclosure of suicide risk among men older than age 50. Assessment, 29(8):1611–1621. Huachuan Qiu, Lizhi Ma, and Zhenzhong Lan. 2024. Psyguard: An automated system for suicide detec- tion and risk assessment in psychological counseling. arXiv preprint arXiv:2409.20243. Bhanu Pratap Singh Rawat, Samuel Kovaly, Wilfred R Pigeon, and Hong Yu. 2022. Scan: suicide attempt and ideation events dataset. In Proceedings of the conference. Association for Computational Linguis- tics. North American Chapter. Meeting, volume 2022, page 1029. Koustuv Saha and Munmun De Choudhury. 2017. Mod- eling stress with social media around incidents of gun violence on college campuses. PACM Human- Computer Interaction, (CSCW). Koustuv Saha, Sindhu Kiranmai Ernala, Sarmistha Dutta, Eva Sharma, and Munmun De Choudhury. 2020. Understanding moderation in online mental health communities. In HCII. Springer. Koustuv Saha and Amit Sharma. 2020. Causal factors of effective psychosocial outcomes in online mental health communities. In Proceedings of the interna- tional AAAI conference on web and social media, volume 14, pages 590–601. Koustuv Saha, Benjamin Sugar, John Torous, Bruno Abrahao, Emre Kıcıman, and Munmun De Choud- hury. 2019. A social media study on the effects of psychiatric medication use. In Proceedings of the International AAAI Conference on Web and Social Media. 12 Koustuv Saha, Asra Yousuf, Ryan L Boyd, James W Pennebaker, and Munmun De Choudhury. 2022. So- cial media discussions predict mental health con- sultations on college campuses. Scientific reports, 12(1):123. Ramit Sawhney, Harshit Joshi, Lucie Flek, and Ra- jiv Shah. 2021a. Phase: Learning emotional phase- aware representations for suicide ideation detection on social media. In Proceedings of the 16th con- ference of the European Chapter of the Association for Computational Linguistics: main volume, pages 2415–2428. Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 7685–7697. Ramit Sawhney, Harshit Joshi, Rajiv Shah, and Lucie Flek. 2021b. Suicide ideation detection via social and temporal user representations using hyperbolic learning. In Proceedings of the 2021 conference of the North American Chapter of the Association for Computational Linguistics: human language tech- nologies, pages 2176–2190. Henk G Schmidt, Geoffrey R Norman, Silvia Mamede, and Mohi Magzoub. 2024. The influence of con- text on diagnostic reasoning: A narrative synthesis of experimental findings. Journal of Evaluation in Clinical Practice, 30(6):1091–1101. Ashish Sharma, Adam Miner, David Atkins, and Tim Al- thoff. 2020. A computational approach to understand- ing empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263–5276. Eva Sharma and Munmun De Choudhury. 2018. Men- tal health support and its relationship to linguistic accommodation in online communities. In Proceed- ings of the 2018 CHI conference on human factors in computing systems, pages 1–13. Chen Shen and 1 others. 2020. Suicide risk prediction using social media data. In Proceedings of AAAI. Soorya Ram Shimgekar, Violeta J Rodriguez, Paul A Bloom, Dong Whi Yoo, and Koustuv Saha. 2025. Interpersonal theory of suicide as a lens to examine suicidal ideation in online spaces. arXiv preprint arXiv:2504.13277. Daun Shin, Hyoseung Kim, Seunghwan Lee, Younhee Cho, and Whanbo Jung. 2024. Using large language models to detect depression from user-generated di- ary text data as a novel approach in digital mental health screening: instrument validation study. Jour- nal of Medical Internet Research, 26:e54617. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and comput- erized text analysis methods. Journal of language and social psychology, 29(1):24–54. Shiyu Teng, Jiaqing Liu, Rahul Kumar Jain, Shurong Chai, Ruibo Hou, Tomoko Tateyama, Lanfen Lin, and Yen-wei Chen. 2025. Enhancing depression de- tection with chain-of-thought prompting: From emo- tion to reasoning using large language models. arXiv preprint arXiv:2502.05879. Marcel Trotzek, Sven Koitka, and Christoph M Friedrich. 2018. Utilizing neural networks and lin- guistic metadata for early detection of depression indications in text sequences. IEEE Transactions on Knowledge and Data Engineering, 32(3):588–601. Sarah E Victor, Alison E Hipwell, Stephanie D Stepp, and Lori N Scott. 2019. Parent and peer relationships as longitudinal predictors of adolescent non-suicidal self-injury onset. Child and adolescent psychiatry and mental health, 13(1):1. Yuxi Wang, Diana Inkpen, and Prasadith Kirinde Gamaarachchige. 2024. Explainable depression de- tection using large language models on social me- dia data. In Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024), pages 108–126. WHO. 2025. Suicide worldwide in 2021: global health estimates. World Health Organization. Peter A Wyman, Trevor A Pickering, Anthony R Pisani, Kelly Rulison, Karen Schmeelk-Cone, Chelsey Hart- ley, Madelyn Gould, Eric D Caine, Mark LoMurray, Charles Hendricks Brown, and 1 others. 2019. Peer- adult network structure and suicide attempts in 38 high schools: Implications for network-informed sui- cide prevention. Journal of Child Psychology and Psychiatry, 60(10):1065–1075. Yunhao Yuan, Koustuv Saha, Barbara Keller, Erkki Tapio Isometsä, and Talayeh Aledavood. 2023. Mental health coping stories on social media: A causal-inference study of papageno effect. In Proceedings of the ACM Web Conference 2023, pages 2677–2685. Dongsong Zhang, Lina Zhou, Jie Tao, Tingshao Zhu, and Guodong Gao. 2025. Ketch: A knowledge- enhanced transformer-based approach to suicidal ideation detection from social media content. In- formation Systems Research, 36(1):572–599. Zhiling Zhang, Siyuan Chen, Mengyue Wu, and Kenny Q Zhu. 2022. Symptom identification for interpretable detection of multiple mental disorders. arXiv preprint arXiv:2205.11308. Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Pre- dicting the degree of suicide risk in reddit posts. In Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 24–33. 13 A Appendix Table A1: Model Performance on Remaining Data Combinations (epochs=20). Model Data Used Accuracy F1 Precision Recall M8 Top neighbor’s posts only 0.83 0.82 0.92 0.74 M9 Target user posts + comments + top neighbor posts 0.91 0.92 0.91 0.92 M10 Target user posts + comments + others’ comments + top neighbor posts 0.71 0.64 0.75 0.56 M11 Filtered neighbor posts only (no post/comments in r/SuicideWatch) 0.78 0.77 0.73 0.82 Table A2: Performance comparison of transformer models on the classification task. Model Accuracy F1 Precision Recall microsoft/deberta-v3-large 0.955 0.946 0.945 0.945 roberta-large 0.944 0.931 0.944 0.919 bert-base-uncased 0.933 0.921 0.897 0.946 google/electra-large-discriminator 0.944 0.935 0.900 0.973 Table A3: Comparison of LIWC affect categories between M4 and M7 showing how the LIWC Affect category varies when considering all top neighbors vs. filtered top neighbors. Columns show mean proportions for users showing suicidal ideation (SI) and not, their difference, and the Kruskal–Wallis H-statistic. Significance codes: ***p<0.001, **p<0.01, *p<0.05. Category M4 M7 Case Control Diff. H Case Control Diff. H posemo 0.048 0.053 -0.005 23.568*** 0.050 0.053 -0.003 17.482*** negemo 0.029 0.021 0.007 169.871*** 0.032 0.025 0.007 72.531*** anx 0.003 0.002 0.000 62.663*** 0.003 0.002 0.001 63.084*** anger 0.011 0.006 0.005 130.857*** 0.010 0.006 0.004 51.436*** sad 0.009 0.005 0.004 181.551*** 0.008 0.007 0.001 71.131*** 14 Table A4: Comparison of LIWC affect categories across M4, M5, and M6 showing how the Affect LIWC category varies across various neighbors (top, worst, random). Columns show mean proportions for users showing suicidal ideation (SI) and not, their difference, and the Kruskal–Wallis H-statistic. Significance codes: ***p<0.001, **p<0.01, *p<0.05. Category M4 M5 M6 Case Control Diff. H Case Control Diff. H Case Control Diff. H posemo 0.048 0.053 -0.005 23.568*** 0.047 0.049 -0.003 38.611*** 0.048 0.047 0.001 42.425*** negemo 0.029 0.021 0.007 169.871*** 0.029 0.025 0.004 74.947*** 0.030 0.024 0.006 59.078*** anx 0.003 0.002 0.000 62.663*** 0.004 0.002 0.002 53.518*** 0.003 0.002 0.001 125.397*** anger 0.011 0.006 0.005 130.857*** 0.009 0.006 0.003 40.154*** 0.009 0.006 0.002 138.704*** sad 0.009 0.005 0.004 181.551*** 0.007 0.007 0.000 71.131*** 0.007 0.008 -0.001 36.640*** Table A5: Comparison of LIWC affect categories for M8. Columns show mean proportions for suicidal and non- suicidal users, their difference, and the Kruskal–Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M8 Case Control Diff H posemo 0.053 0.053 0.000 0.409 negemo 0.026 0.025 0.001 4.805* anx 0.001 0.002 -0.001 3.323 anger 0.009 0.006 0.003 9.523** sad 0.005 0.007 -0.001 3.638 Table A6: Comparison of LIWC affect categories for M9. Columns show mean proportions for suicidal and non- suicidal users, their difference, and the Kruskal–Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M9 Case Control Diff H posemo 0.050 0.053 -0.003 17.482*** negemo 0.032 0.025 0.007 72.531*** anx 0.003 0.002 0.001 63.084*** anger 0.010 0.006 0.004 51.436*** sad 0.008 0.007 0.001 71.131*** Table A7: Comparison of LIWC affect categories for M10. Columns show mean proportions for suicidal and non- suicidal users, their difference, and the Kruskal–Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M10 Case Control Diff H posemo 0.070 0.067 0.003 349.268*** negemo 0.031 0.028 0.003 538.074*** anx 0.003 0.002 0.000 403.691*** anger 0.010 0.009 0.001 281.206*** sad 0.007 0.006 0.001 361.992*** Table A8: Comparison of LIWC affect categories for M11. Columns show mean proportions for suicidal and non- suicidal users, their difference, and the Kruskal–Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M11 Case Control Diff H posemo 0.053 0.053 0.000 0.409 negemo 0.026 0.025 0.001 4.805* anx 0.001 0.002 -0.001 3.323 anger 0.009 0.006 0.003 9.523** sad 0.005 0.007 -0.001 3.638 15 Table A9: Performance metrics for different depth and neighbor configurations. Depth refers to the number of recursive depth considered, and Neighbors refers to the number of top neighbor posts included. Depth Neighbors Accuracy F1 Precision Recall 1 1 0.98 0.97 0.96 0.98 1 2 0.97 0.96 0.95 0.97 1 3 0.96 0.95 0.95 0.96 1 4 0.96 0.95 0.94 0.96 1 5 0.95 0.94 0.94 0.95 2 1 0.97 0.96 0.95 0.97 2 2 0.96 0.95 0.94 0.96 2 3 0.95 0.94 0.93 0.95 2 4 0.95 0.94 0.93 0.95 2 5 0.94 0.93 0.92 0.94 3 1 0.96 0.95 0.94 0.96 3 2 0.95 0.94 0.93 0.95 3 3 0.94 0.93 0.92 0.94 3 4 0.94 0.93 0.92 0.94 3 5 0.95 0.94 0.93 0.95 4 1 0.95 0.94 0.93 0.95 4 2 0.94 0.93 0.92 0.94 4 3 0.93 0.92 0.91 0.93 4 4 0.93 0.92 0.91 0.93 4 5 0.92 0.91 0.90 0.92 5 1 0.94 0.93 0.92 0.94 5 2 0.93 0.92 0.91 0.93 5 3 0.92 0.91 0.90 0.92 5 4 0.92 0.91 0.90 0.92 5 5 0.91 0.90 0.89 0.91 16
Detecting Early and Implicit Suicidal Ideation via Longitudinal and Information Environment Signals on Social Media Soorya Ram Shimgekar1, Ruining Zhao1, Agam Goyal1, Violeta J. Rodriguez1, Paul A. Bloom2, Hari Sundaram1, Koustuv Saha1 1 -Champaign, {sooryas2, ruining, agamg2, vjrodrig, hs1, 2Columbia University, New York State Psychiatric Institute, Abstract On social media, many individuals experiencing suicidal ideation (SI) do not disclose their distress explicitly. Instead, signs may surface indirectly through everyday posts or peer interactions. Detecting such implicit signals early is critical but remains challenging. We frame early and implicit SI as a forward-looking prediction task and develop a computational framework that models a user's information environment, consisting of both their longitudinal posting histories as well as the discourse of their socially proximal peers. We adopted a composite network centrality measure to identify top neighbors of a user, and temporally aligned the user's and neighbors' interactionsintegrating the multi-layered signals in a finetuned DeBERTa-v3 model. In a Reddit study of 1,000 (500 Case and 500 Control) users, our approach improves early and implicit SI detection by 15% over individual-only baselines. These findings highlight that peer interactions offer valuable predictive signals and carry broader implications for designing early detection systems that capture indirect as well as masked expressions of risk in online environments. 1 Introduction More than 703,000 individuals die by suicide each year worldwide, making it a major global public health concern (WHO, 2025). Suicidal ideation (SI) refers to a spectrum of cognitive and behavioral manifestations related to suicide, ranging from passive thoughts of death to active planning and engagement in self-injurious actions with intent to die (Joiner, 2005). While early detection of these signs is critical, psychiatry and mental health services have long struggled to detect risk before individuals disclose it explicitly (Calear and Batterham, 2019; Hom et al., 2017). A recent meta-analysis revealed that the estimated prevalence of SI disclosure is only 46.3%, inferring that the majority of people who experience suicidal thoughts do not disclose them, making early detection critical for timely intervention (Hallford et al., 2023). Online platforms have become vital spaces where signs of SI may surface. People use online mental health communities to share distress, seek help, and connect with peers (De Choudhury and De, 2014). This has created opportunities for both researchers and clinicians to understand suicide risk through language, social interactions, and online behavioral cues. Prior work has shown the value of linguistic cues for identifying mental health risks (Coppersmith et al., 2014; De Choudhury et al., 2013; Guntuku et al., 2017). In particular, within the context of SI, prior work has computationally modeled the language of SI on social media (Burnap et al., 2015; Saha et al., 2019; Alghazzawi et al., 2025; Zhang et al., 2025; Naseem et al., 2025). More recent computational approaches leveraged longitudinal and multimodal data, as well as social network analyses, to anticipate suicide-related behaviors (Shen et al., 2020). Yet, critical limitations persist. Prior research has primarily examined posts where SI is explicitly conveyed, such as direct or indirect references to selfharm or suicidal thoughts within suicide-related forums or discussions (Ji, 2022a; Bloom et al., 2025). These approaches presuppose that individuals articulate their distress, yet, many at-risk individuals neither disclose SI nor exhibit overt warning signs, but instead mask distress within seemingly ordinary discourse (McGillivray et al., 2022; Mérelle et al., 2018; Podlogar et al., 2022). Our work targets the detection of these undisclosed SI, characterized by subtle, contextually obscured, or socially distributed indicators, to potentially enable earlier and more effective intervention. To address the above gap, our work is guided by the research question (RQ): How can early signals of SI be detected in social media activity, particularly in the absence of explicit disclosures? To address the RQ, we conceptualize early and im1 16 Oct 2025 plicit SI as a forward-looking prediction problem that requires modeling both an individual's longitudinal behavior and the broader social context in which that behavior unfolds. We develop a framework that jointly captures temporal dynamics in a user's posting history and the conversations of socially proximal peers, enabling a richer representation of early warning signals. This idea of using socially proximal peers to understand an individual's mental state is derived from many prior acclaimed psychology works (La Greca and Harrison, 2005; Copeland, 2021; Victor et al., 2019). Within this framework, we pursue three aims: Aim 1: Examine whether users' longitudinal posting patterns, in conjunction with commentary on those posts, reveal early signals of implicit SI. Aim 2: Examine how the surrounding information environment, including peer interactions and exposure to content, contributes as an additional predictive feature for implicit SI. Aim 3: Identify the linguistic markers that underlie interaction types, in terms of how specific language cues are associated with these interactions. We conducted our study on Reddit, focusing on a sample of 1,000 users divided into 500 Case and 500 Control users (who never participated in mental health conversations). Our predictive framework jointly modeled each user's full posting timeline along with the discourse of their most socially proximal neighbors, identified through network centrality, and fine-tuned a DeBERTa-v3 model (He et al., 2021) to embed both individual and peer interaction signals into a unified representation. Our findings show that incorporating peer interactions within the information environment, alongside users' longitudinal posting histories, significantly enhances detection of early and implicit SI, improving performance by 15% compared to models based only on longitudinal user data. Overall, this paper contributes: 1) a framework for detecting implicit SI; 2) a method for systematically integrating environment information through neighbor interactions; and 3) empirical evidence that this integration improves early detection. 2 Related Work 2.1 Suicidal Ideation and Social Context Psychological theory views SI as the product of intertwined social and cognitive factors. The Interpersonal Theory of Suicide (Joiner, 2005) posits that suicidal desire arises from perceived burdensomeness and thwarted belongingness, while capability develops through repeated exposure to pain or fear. Early computational studies of depression and SI focused on overt signals such as self-disclosure or negative sentiment (Coppersmith et al., 2014; De Choudhury et al., 2013), but later work showed that many at-risk individuals express distress through subtle cues, motivating methods that model temporal and semantic dynamics of behavior (Guntuku et al., 2017; Benton et al., 2017; Saha and De Choudhury, 2017). A range of methods have been proposed to detect mental health risk signals beyond surface cues. For instance, (Fatima et al., 2021) introduced DASentimental, a semi-supervised model combining bagof-words and semantic networks, while (Trotzek et al., 2018) showed that convolutional networks with linguistic metadata enable earlier detection. More recent work leverages large language models, with GPT-3.5/4 using chain-of-thought prompting on diaries (Shin et al., 2024) and reasoningguided LLMs improving interpretability (Teng et al., 2025). Temporal dynamics remain critical, from emotional "phases" in user timelines (Sawhney et al., 2021a) to transformer-based models enriched with temporal signals (Sawhney et al., 2020). Social context is equally important: hyperbolic embeddings of user histories and peer interactions enhance prediction (Sawhney et al., 2021b), peer networks and conversational responses influence trajectories (Wyman et al., 2019; De Choudhury and Kiciman, 2017), and longitudinal patterns reveal precursors of SI (De Choudhury et al., 2016). Complementary directions include clinical-domain datasets such as ScAN (Rawat et al., 2022), automated counseling support with PsyGUARD (Qiu et al., 2024), and calls to model underlying intent rather than surface disclosure (Ji, 2022b). Together, this work affirms that SI arises from psychological distress, temporal dynamics, and social context, demanding models that go beyond surface cues. Yet most approaches still rely on explicit disclosures or static timelines, overlooking how evolving language interacts with peer responses. Our framework addresses this by jointly modeling users' longitudinal histories and post-level commentary, enabling early detection of implicit SI. 2.2 Social Media and Mental Health Online platforms have become key venues for mental-health self-disclosure (De Choudhury and 2 De, 2014), with communities like Reddit fostering targeted support and a sense of belonging (Saha and Sharma, 2020; De Choudhury, 2015; Shimgekar et al., 2025; Kim et al., 2023). Moderated peersupport spaces reduce isolation and help people discuss stigmatized experiences (Johnson et al., 2022). Social support, especially emotional and informational, has been shown to improve wellbeing both offline and online (Cutrona and Troutman, 1986; De Choudhury and Kiciman, 2017; Saha and Sharma, 2020). Language plays a central role: psycholinguistic research links specific linguistic markers to mental-health outcomes (Chung and Pennebaker, 2007; Pennebaker et al., 2001), and computational studies have used these cues to detect distress and model support dynamics (Chancellor et al., 2016; Guntuku et al., 2017; Chancellor and De Choudhury, 2020). Prior work has also established the construct validity of these measurements (Saha et al., 2022). Recent NLP work has focused on interpretable and fine-grained modeling of mental health on social media. Symptom-based approaches such as PSYSYM (Zhang et al., 2022; Chen et al., 2023) link online language to clinically meaningful categories of disorders. Depression severity has been quantified through semantic similarity to symptom descriptors (Pérez et al., 2022), while large language models now enable explainable detection with interpretable rationales (Wang et al., 2024). Analyses of pre and post diagnosis language shifts, highlight the temporal dynamics of distress expression (Alhamed et al., 2024). Supportive language marked by adaptability, immediacy, and emotionality predicts better outcomes (Althoff et al., 2016; Saha and Sharma, 2020), and automatic empathy-detection models scale such insights to peer-support settings (Sharma et al., 2020). For SI, machine-learning approaches have identified risk signals in social media language alongside emotional patterns that precede suicide attempts (Coppersmith et al., 2016; De Choudhury et al., 2016). While online data shows promise for early risk detection, most methods isolate either individual language or specific interactions. Our approach instead models full posting timelines alongside peer influences, capturing risk even without explicit SI disclosures. Our methodological framework identifies early indicators without requiring references to self-harm or participation in such spaces. Figure 1: Distribution of SI probability for the same set of users before their last non suicidal post and their first post in r/SuicideWatch. 3 Data We used data from r/SuicideWatch, a semianonymous Reddit community focused on SI, alongside posts and comments from other subreddits. From the PushShift archive (Baumgartner et al., 2020) of April 2019 (18.3M posts, 138.5M comments), the data includes 10,037 posts and 38,130 comments from r/SuicideWatch. Prior work has leveraged Reddit for SI (De Choudhury et al., 2016; De Choudhury and Kiciman, 2017; Shimgekar et al., 2025) and broader mental health studies (Sharma and De Choudhury, 2018; Saha et al., 2020; De Choudhury and De, 2014). Constructing Case and Control datasets. We identified two user cohorts. The first cohort comprises 500 Case individuals, defined as individuals who have made at least one post on r/SuicideWatch. The second cohort consists of 500 Control individuals, who never participated in any subreddit related to mental health. We identified the Control users by referencing the taxonomy of mental health-related subreddits from prior work (Sharma and De Choudhury, 2018). For the Case group, all posts and comments before each user's first r/SuicideWatch disclosure were labeled positive (1). On average, users made 10.07 posts before disclosure, transitioning within 1.5 days of their last non-r/SuicideWatch post. To construct a balanced Control group, we sampled the first 10 posts from each user (matching Case averages), labeling them negative (0). The dataset was then split 75:25 by users into train and test sets, ensuring no overlap. A zero-shot NLI model (Laurer et al., 2023) revealed significantly higher SI probabilities in users' first r/SuicideWatch posts, indicating that entry marks a key turning point (Figure 1). The model, 3 Post 1 (r/random) Comment 2 (r/random2) Post 3 (r/random3) Post 4 (r/SuicideWatch) Neighbors' Posts R1, R2 Top Neighbors U User "U" User-User Graph of "U" Others' Replies R1, R2 Neighbors' Posts R1, R2 Others' Replies R1, R2 Neighbors' Posts R1, R2 Others' Replies R1, R2 Neighbor Interaction Immediate Interaction Input Data Formalization Finetuned Model Early and Implicit Suicide Detection [yes/no] Figure 2: Illustration of user interactions (Immediate/Neighbor): Immediate interactions include users' self-posts, self-comments, and received replies, while neighbor interactions represent top neighbor posts and self-posts. Top neighbors identified via NeighborScore based on DeBERTa-v3-base, was trained on 1.3M hypothesis-premise pairs from eight NLI datasets (e.g., MultiNLI, FEVER-NLI, LingNLI, DocNLI) to capture long-range reasoning. Using zero-shot labels (SI, non-SI), it assigned SI probabilities to posts, showing that users' first r/SuicideWatch posts exhibit stronger suicidal self-disclosure than their prior posts. While these probabilities may partly reflect the act of joining r/SuicideWatch, the linguistic patterns align more with distress and self-harm intent, supporting their use as an operational proxy for heightened suicidal self-disclosure. 4 Methods To address our RQ on detecting early signals of SI in social media activity, particularly when explicit disclosures are absent, we conceptualized implicit SI as a forward-looking prediction problem of the likelihood that a user would eventually make a SI disclosure, proxied by their first post on r/SuicideWatch. Our framework models a user's longitudinal activity and social context via two interaction types: (1) immediate interactions-self-posts, self-comments, and received comments, and (2) neighbor interactions-selfposts and top neighbors' posts. This captures both active engagement and passive exposure, reflecting how users interact with and are influenced by content they relate to (De Choudhury and Kiciman, 2017). Our methodological framework consists of five components-1) Timeline construction, 2) Neighboring user detection, 3) Input data formalization, and 4) implicit SI signal classification with modeling, 5) linguistic analysis of interaction types, which we describe below: 4.1 Timeline Construction For each user U, we analyzed their full post and comment history for SI signals, ordered chronologically from earliest to latest. 4.2 Neighboring user detection For a user U, we found its top neighbors by the following four steps: Step 1: Initial User Collection We first collected all users who interacted with the user U, defined as either U commenting on their posts/comments or them commenting on U's posts/comments. This neighbor-identification procedure was then applied recursively to a maximum depth d=3, ensuring that both direct and indirect neighbors of U were captured. Step 2: User-User Graph Based on all the initial collected users, we constructed a user-user graph where each Node represents an individual user in the network, and an undirected Edge connects two users if either of the users has commented on the other's post. The weight of the edge was quantified as the total number of comment-based interactions, with higher weights indicating stronger ties. Step 3: Top Neighbor Detection From the user-user graph, we identified the topn (n=10) neighbors for U. We ranked the neighbors using a NeighborScore, a combined centrality score S(n), defined as: S(n) = Cin-degree(n) + Cout-degree(n) + Ccloseness(n) + Ceigenvector(n) + Cbetweenness(n) + CPageRank(n) Cin-degree and Cout-degree capture normalized connectivity, Cbetweenness measures shortest-path centrality, Ccloseness denotes proximity, Ceigenvector 4 Table 1: Model performance metrics on different data combinations for Models M1-M4 (epochs=20). Data Used Acc. F1 Prec. Rec. M1 Self-posts 0.86 0.85 0.88 0.84 M2 Self-posts + Self-comments 0.81 0.83 0.78 0.91 M3 Self-posts + Self-comments + Others' comments 0.80 0.79 0.79 0.80 M4 Self-posts + Top neighbor posts 0.95 0.96 0.93 0.99 Table 2: Model performance on showing the importance of choosing the best neighbors (epochs=20). Data Used Acc. F1 Prec. Rec. M4 Self-posts + top neighbor posts 0.95 0.96 0.93 0.99 M5 Self-posts + worst neighbor posts 0.75 0.76 0.78 0.75 M6 Self-posts + non-neighbor posts 0.68 0.71 0.67 0.78 reflects influence via important neighbors, and CPageRank estimates probabilistic importance. The aggregated NeighborScore identifies peers with strong direct ties and broader network influence around U. As some centralities are correlated, S(n) approximates overall neighbor prominence rather than precise influence. Performance variation by neighbor depth and count is shown in Appendix Table A9. 4.3 Input Data Formalization We structured the input to language models to capture U's timeline, others' replies, and peer interactions. The overall data formalization, distinguishing immediate and neighbor interactions, is shown in Figure 2. Immediate Interaction We aggregated content from a user's timeline, along with others' replies, and then used all of these textual content for finetuning the language model. To train our models, we used different combinations of features-1) U's self-posts, 2) U's self-posts and self-comments, and 3) U's self-posts, self-comments, and others' replies to U's posts or comments. Neighbor Interaction Neighbor Interaction captures signals from proximate peers. For this purpose, we temporally aligned the timelines of U and their top-n neighbors. Then, at each timestamp i, we selected ten posts by the top neighbors closest in time to U's post. We aggregated the neighbors' posts with U's posts and embedded them into a dense vector representation. This approach led to the fourth type of model, where we included features from both immediate interactions above, as well as the top neighbors' posts. Table 3: Model performance excluding neighbors with r/SuicideWatch posts. Results remain comparable to using all neighbors, indicating that broader neighbor interactions capture key signals for implicit SI detection (20 epochs). Data Used Acc. F1 Prec. Rec. M4 Self-posts + top neighbor posts 0.95 0.96 0.93 0.99 M7 Self-posts + filtered neighbor posts 0.89 0.90 0.87 0.93 4.4 Implicit SI signal classification with modeling For our study, we leveraged Microsoft's Debertav3-large model (He et al., 2021)-a 418M parameter transformer with 24 layers that processes sequences up to 512 tokens. We fine-tuned this model using CLS for global representation and SEP for segment boundaries on an Nvidia A100 GPU. We framed our problem of detecting implicit SI as a binary classification task-labeling each input xi as yi ∈{0, 1} for absence or presence of risk. We tokenized text into subword embeddings with attention masks, padded or truncated to 512 tokens, and encoded them to obtain a pooled [CLS] vector (1024-dim). A linear layer then mapped this vector to logits, followed by softmax for class probabilities. We optimized the model with cross-entropy loss using AdamW (learning rate 2 × 10-5, weight decay 0.01) for 20 epochs with batch size 8, and evaluated after each epoch on a held-out validation set using accuracy, precision, recall, and F1-score. Results from fine-tuning alternative base models are provided in Appendix Table A2. 4.5 Linguistic Analysis of Interaction Types We analyzed lexical, topical, and psycholinguistic patterns across users' self-posts, self-comments, replies, and top-neighbor posts. For lexical analysis, we used the Sparse Additive Generative Model (SAGE) (Eisenstein et al., 2011) to identify discriminative unigrams and bigrams distinguishing immediate and neighbor interactions. SAGE uses multinomial models with adaptive regularization to balance frequent and rare terms. For topical analysis, we employed BERTopic (Grootendorst, 2022) on Case and Control data, varying topic numbers (k=2-15) and achieving the highest coherence at k=12. After removing one outlier, eleven topics remained (Table 4) whose topic names were assigned by our clinical psychologist coauthor. Their normalized proportions captured topical variation across interaction types Table 5. For psycholinguistic analysis, we examined the occurrences of the af5 20 40 60 80 user posts(t) 0.90 0.91 0.92 0.93 0.94 eval_accuracy t=88, 0.943 (a) Accuracy 20 40 60 80 user posts(t) 0.91 0.92 0.93 0.94 0.95 eval_f1 t=88, 0.948 (b) F1 Score 20 40 60 80 user posts(t) 0.925 0.930 0.935 0.940 0.945 eval_precision t=73, 0.948 (c) Precision 20 40 60 80 user posts(t) 0.89 0.90 0.91 0.92 0.93 0.94 0.95 eval_recall t=44, 0.956 (d) Recall Figure 3: Performance metrics on varying the number of input user posts (t). The plots show that the performance peaks at t=88. 2 4 6 8 10 12 14 neighbor post(n) 0.88 0.90 0.92 0.94 0.96 eval_accuracy n=7, 0.966 (a) Accuracy 2 4 6 8 10 12 14 neighbor post(n) 0.88 0.90 0.92 0.94 0.96 eval_f1 n=7, 0.970 (b) F1 Score 2 4 6 8 101214 neighbor post(n) 0.84 0.86 0.88 0.90 0.92 0.94 0.96 0.98 eval_precision n=10, 0.978 (c) Precision 2 4 6 8 10 12 14 neighbor post(n) 0.92 0.93 0.94 0.95 0.96 0.97 0.98 0.99 eval_recall n=7, 0.990 (d) Recall Figure 4: Performance metrics on varying the number of neighbor posts (n). The plots show that the performance peaks at around n=7. fect category of keywords as per the well-validated Linguistic Inquiry and Word Count (LIWC) lexicon (Tausczik and Pennebaker, 2010). 5 Results 5.1 Model Performance (Immediate Interaction vs. Neighbor Interaction): Table 1 summarizes the performance comparison of models with combinations of immediate interactions and neighbor interactions feature set. The baseline model using only a target user's posts performs strongly (Accuracy = 0.86, F1 = 0.85), showing that self-authored text alone captures salient linguistic markers of future SI, consistent with prior temporal analyses on Reddit and Facebook (Coppersmith et al., 2018; De Choudhury et al., 2016). Adding the user's self-comments boosts recall (0.84 →0.91) but lowers precision (0.88 →0.78), suggesting broader coverage but added noise in terms of precision. For example, a comment from a Case user reads, "Are they allowed to hit me? [..] I need to stay strong," including both themes of physical abuse as well as optimism. Additionally, including replies to user's posts/comments further reduces accuracy (0.81 →0.80) and F1 (0.83 →0.79). Such replies often convey empathy or advice that mask the user's true mindset (Gkotsis et al., 2017), as in one response: "I think talking to a professional would help, because he can understand you and give you advice." We note that the top-neighbor interaction approach achieves the highest performance (Accuracy=0.95, F1=0.96, Precision=0.93, Recall=0.99), with high recall reducing false negatives, which is critical for SI risk detection (Franklin et al., 2017). This suggests that features reflecting social exposure and information environment context provide valuable predictive signals for implicit SI. 5.2 Determining Optimal Post Counts for Robust Prediction We evaluated how many posts from users (Case and Control) and their top neighbors optimize implicit SI detection. First, varying the number of neighbor posts (n) showed an improvement in accuracy from 0.87 to 0.97, with near-perfect recall (0.99) at n = 7, after which the gains plateaued (Figure 4). With n = 7 fixed, varying self-posts (t) revealed unstable performance at low t (e.g., recall 0.87 at t = 1), stabilizing for t ≥40; the best results occurred at t = 88 (accuracy 0.94, F1 0.95, precision 0.95, recall 0.96) (Figure 3). These findings indicate that combining a user's full history with a few 6 Table 4: Clinically informed topics identified from posts and comments via topic modeling, with corresponding explanations and keywords. Topic Theme Explanation Keywords Self-Harm Tools Mentions of knives, blades, or cutting instruments indicating self-injury ideation knife, blade, cut, sharpen Diet / Body Image Discussions of calories, weight, dieting, or eating habits reflecting body concerns calori, weight, diet, eat Physical Pain / Health Mentions of pain, medication, or suffering over time pain, aspirin, suffer, longer War / Military Discussions related to military, soldiers, or historical conflicts soldier, german, armi, soviet Academic / Mental Stress Mentions of depression, PhD, or mental strain depress, phd, ggoddamn, eurozon Cringe / Embarrassment Mentions of awkwardness, social discomfort, or cringing situations cri, cringi, crusad, crimin Burn / Injury Mentions of burns, cuts, or physical injury burn, cut, hurt, degre Self-Harm / Coping Strategies Mentions of harming oneself or suicide coping mechanisms harm, self, suicidebyword, cope Violent Ideation Mentions of killing or violent intent in extreme contexts kill, killin, zaibatsu, meeee Confusion / Uncertainty Expressions of perplexity or inability to understand situations confus, percent, stranger, 12ish Cute / Affection Positive, playful, or affectionate language cute, soo, aww, omgggg Table 5: Normalized topic proportions across different content sources. ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Topic User's SelfPosts User's SelfComments Other's Comments Top Neighbor's Posts KruskalWallis H-stat. Self-Harm Tools 0.002 0.002 0.002 0.006 44.08*** Diet & Body Image 0.005 0.003 0.003 0.003 46.36*** Physical Pain & Health 0.003 0.002 0.002 0.003 51.55*** War & Military 0.013 0.003 0.001 0.002 7.45* Academic Stress 0.001 0.001 0.002 0.002 43.81*** Cringe & Embarrassment 0.000 0.001 0.001 0.002 39.49*** Burn & Injury 0.000 0.000 0.000 0.001 1.84 Coping & Self-Harm 0.001 0.001 0.001 0.002 22.81*** Violent Ideation 0.001 0.001 0.000 0.002 26.13*** Confusion & Uncertainty 0.001 0.001 0.001 0.000 12.62** Cute / Affection 0.000 0.001 0.001 0.002 22.93*** neighbor posts yields a robust SI predictor. 5.3 Role of Top Neighbor Posts in Capturing SI Signals To understand the superior performance of topneighbor-based modeling, we analyzed lexical, topical, and psycholinguistic patterns in users' selfposts, self-comments, replies, and posts of their top neighbors. Others' replies to a user's posts or comments provide a limited discriminative signal: common bigrams such as "feel like", "year old", and "every day" appear across both Case and Control data, reflecting supportive or neutral discourse (Zirikly et al., 2019). Topic modeling shows that clinically relevant themes (e.g., SelfHarm Tools, Coping/Self-Harm, Violent Ideation) are rare in others' replies, while users' own posts exhibit only moderate and inconsistent signals (Table 4, Table 5). In contrast, top neighbor posts consistently reflect mental-health-related themes that are highly predictive of SI (Kirtley et al., 2021). For Case users, neighbors frequently post about SelfHarm Tools, Coping / Self-Harm, Violent Ideation, Physical Pain / Health, and Academic / Mental Stress, with bigrams such as "mental health", "suicide prevention", and "emotional support" appearing frequently. Neighbors of Control users, by comparison, focus on neutral or unrelated topics (Table 4, Table 5). Table 6 shows that differences in affective categories (e.g., negative affect, anger, and sadness) are statistically significant, indicating that including neighbor posts enhances the distinction between Case and Control users' content. Therefore, top-neighbor posts provide richer social context than replies or isolated user history. Combining them with user history yields a more robust model for early detection of early and implicit SI. 5.4 Robustness Tests: To ensure that our findings are not artifacts of specific modeling or sampling choices, we conducted additional robustness tests by varying the type of neighbors included-for instance, incorporating non-top neighbors into our models. For this purpose, we built additional models M5 with the lowest-ranked neighbors and M6 with nonneighbors-a random selection of other users-of a target user. Table 2 compares the performance-we note both M4 and M5 perform significantly poorly. Therefore, the performance boost by including the top neighbor posts is not by chance, and rather an important element in detecting implicit SI (Cero et al., 2024). Moreover, Table 3 demonstrates that the model retains strong predictive performance even after filtering out all top neighbors who have posted in r/SuicideWatch. This finding indicates that successful detection does not rely solely on neighbors' direct SI-related language, but rather on broader contextual cues captured from high-quality interactions. Appendix Table A4 shows linguistic analyses aligned with Table 2, Table A3 highlights "Affect" differences linked to Table 3, and addi7 Table 6: Comparison of LIWC Affect categories across M1-M4. Columns show normalized category counts for Case and Control groups and their differences. ***p<0.001, *p<0.01, p<0.05. Category M1 (Immediate) M2 (Immediate) M3 (Immediate) M4 (Neighbor) Case Control Diff. H Case Control Diff. H Case Control Diff. H Case Control Diff. H Pos. Affect 0.049 0.053 -0.004 38.80*** 0.080 0.074 0.006 416.94*** 0.060 0.057 0.003 45.86*** 0.048 0.053 -0.005 23.57*** Neg. Affect 0.029 0.025 0.004 74.947*** 0.035 0.030 0.005 651.17*** 0.029 0.026 0.003 100.07*** 0.029 0.021 0.007 169.87*** Anxiety 0.004 0.003 0.001 125.379*** 0.003 0.002 0.001 358.217*** 0.002 0.002 0.000 176.606*** 0.003 0.002 0.000 62.663*** Anger 0.009 0.006 0.003 50.654*** 0.012 0.010 0.002 322.855*** 0.010 0.008 0.002 77.809*** 0.011 0.006 0.005 130.857*** Sadness 0.007 0.007 0.000 71.131*** 0.008 0.007 0.001 376.545*** 0.006 0.005 0.001 136.379*** 0.009 0.005 0.004 181.551*** tional model comparisons with LIWC results are in Appendix Table A1 (Tables A5-A8). 6 Discussion and Implications A key takeaway of our study is that, early and implicit SI is best detected by modeling both an individual's longitudinal online activity and their surrounding information environment, capturing the inherently relational nature of SI expressions (Ammerman and Jacobucci, 2023). Neighbor-based modeling is particularly effective, as top neighbors, weighted via a NeighborScore, capture direct behavior and network-level influences, consistent with epidemiological evidence and classic effects like Werther and Papageno (Wyman et al., 2019; Phillips, 1974; Niederkrotenthaler et al., 2010; Yuan et al., 2023). Neighbor posts provide complementary signals beyond the user's self-content, showing that exposure to neighbors' expressions of distress or coping is associated with detection. Neighbor-informed models frame suicidality within a social-ecological perspective (Bronfenbrenner, 1979), reflecting social contagion processes where suicidal behaviors and ideation can spread through networks, especially when peers disclose distress (Wyman et al., 2019; Gould et al., 2003). Online environments may amplify these effects, as individuals in high-risk networks can show subtle precursors before explicit disclosure. Network theories suggest that peers model behavior and influence interpretations of distress (Cero et al., 2024; Bearman and Moody, 2004; Christakis and Fowler, 2025), explaining why neighbor interactions reflect relational signals of implicit SI. A user's longitudinal posting history remains essential, tracing gradual shifts in tone, sentiment, and discourse (De Choudhury et al., 2016). However, interactional content must be selective: including all replies adds noise and weakens risk signals (Schmidt et al., 2024). As shown in Table 6, the difference between the LIWC "Affect" categories is consistently higher in M4 where neighbors' posts are added. This shows that including posts from the neighbors add a clear distinction signal between the Case and Control data, making it better for the model to predict (Lu and Tu, 2024; Guo et al., 2012). These findings suggest avenues for theoretical and methodological extensions. Beyond the Interpersonal Theory of Suicide (Joiner, 2005), models such as the Integrated Motivational Volitional Model (O'Connor and Kirtley, 2018) and the ThreeStep Theory (Klonsky and May, 2015) may help identify subtle motivational or volitional cues in language and interactions. The strong influence of social context further supports contagion and network theories, indicating that implicit suicidal ideation emerges from both individual cognition and broader interpersonal environments. Overall, this work enhances detection accuracy while framing suicide risk as a relational process in digital social environments. By showing that peer-network interactions substantially improve predictive power, it bridges computational modeling with networkinformed prevention strategies. 7 Conclusion Our study demonstrates that early and implicit SI is best detected when a user's longitudinal activity is interpreted within their social context. Subtle linguistic and interactional cues often emerge well before explicit disclosures, enabling identification of risk trajectories at an early stage. By incorporating posts from socially proximal peers, our framework operationalizes theories of suicide contagion and social influence, yielding substantial gains in predictive performance. Notably, while a user's own posts and comments provide moderate insight, even a small set of strategically selected neighbor posts carries a significantly strong predictive signal. These findings highlight that suicide risk is inherently relational: effective detection requires selectively integrating self-content and socially distributed cues. Beyond improving accuracy, this relational framing supports scalable, context-aware, and ethically responsible early-warning systems that align with both clinical theory and the dynamics of online communities. 8 8 Limitations Our study has limitations that also show interesting future directions. Methodologically, our models exclusively rely on textual content, overlooking multimodal signals such as images, videos, emojis, GIFs, or external links that often convey emotional states, coping strategies, or distress. Incorporating these modalities could improve sensitivity to subtle risk signals, enhance interpretability, and provide a more holistic understanding of online behaviors associated with suicidal ideation. Similarly, our current approach treats peer interactions homogeneously, though peers can exert positive, neutral, or negative influences. Modeling these distinctions could capture the functional impact of social context on risk trajectories and guide ethically responsible interventions. Finally, robustness and generalizability can be improved through replication across platforms (e.g., Twitter, TikTok, Discord), multilingual and crosscultural settings, and naturalistic populations. Temporal weighting, trajectory-based risk modeling, and interpretability techniques such as attention visualization, SHAP, or counterfactuals could clarify key linguistic and network features while enhancing actionable insights. Future work should integrate multimodal inputs, refine social context modeling, and maintain rigorous ethical oversight to ensure predictive models support vulnerable individuals responsibly and effectively. 9 Ethical Considerations Reflexivity This paper used publicly accessible social media discussions on Reddit and did not require direct interactions with individuals, thereby not requiring ethics board approval. However, we are committed to the ethics of the research and we followed practices to secure the privacy of individuals in our dataset. Our research team comprises researchers holding diverse gender, racial, and cultural backgrounds, including people of color and immigrants, and hold interdisciplinary research expertise. This team consists of computer scientists with expertise in social computing, NLP, and HCI, and psychologists with expertise in clinical psychology, adolescent depression and suicide, and digital health interventions. One psychologist coauthor specializes in suicide etiology, suicide prevention, and crisis intervention, and another psychologist coauthor is a clinical psychologist with over 16 years of experience spanning adult and adolescent inpatient care and crisis suicide helplines. To ensure validity and prevent misrepresentation, our findings were reviewed and corroborated by our psychologist coauthors. However, our work is not intended to replace the clinical evaluation of an individual undergoing suicidal thoughts and should not be taken out of context to conduct mental health assessments. Risk of Misinterpretation and Harm with Automated Detection Our study leverages computational methods to identify early and implicit suicidal ideation (SI) in online communities. While these approaches provide opportunities for early intervention, they also present serious ethical challenges. Automated systems may misinterpret subtle linguistic signals or social cues, leading to false positives or false negatives. Misclassification can result in unwarranted labeling of individuals as at risk, potentially causing stigma, anxiety, or social consequences. Conversely, failing to detect genuine risk may deny individuals timely support. We emphasize that suicidal ideation manifests heterogeneously across individuals, and even carefully designed models cannot fully capture the nuanced context of personal distress. Privacy, Consent, and Data Sensitivity Given the sensitive nature of suicidal thoughts, the privacy and confidentiality of users are paramount. Our approach relies on publicly available text, but the inclusion of social network information, even aggregated, raises concerns about inadvertent exposure of personal interactions. Ethical deployment requires anonymization, secure storage, and strict access controls. Users should be informed about potential data usage and retention, and explicit consent should be prioritized wherever feasible. Social Context and Potential Misuse By modeling social context and peer interactions, our work operationalizes theories of social influence in suicidal ideation. However, this introduces additional ethical considerations. Insights derived from peer interactions could be misused if exploited for nonclinical purposes, such as targeted advertising or profiling of vulnerable individuals. Careful governance and ethical oversight are necessary to ensure that social context is leveraged solely for supportive, preventive interventions rather than for commercial or punitive applications. Transparency, Interpretability, and Responsible Use Ethical deployment also requires trans9 parency and interpretability. Systems identifying SI must clearly communicate their scope, limitations, and the fact that outputs are probabilistic, not diagnostic. Stakeholders, including clinicians, platform moderators, and potentially affected users, should understand how predictions are generated, especially when interventions are triggered. Responsible use also involves integrating human-in-the-loop frameworks, where trained professionals augment computational predictions to avoid over-reliance on automated systems. Future Directions for Ethical AI Future research should explore multimodal detection methods while carefully balancing privacy and ethical constraints. Integrating images, URLs, or videos may improve sensitivity, but must be approached with strict ethical safeguards. Additionally, systematically categorizing peer influence, positive, neutral, or negative, could enhance the interpretability and safety of predictions, ensuring interventions are informed by context rather than raw social activity. Overall, ethical design in computational mental health requires prioritizing user welfare, minimizing harm, and embedding accountability at every stage of model development and deployment. 10 AI Involvement Disclosure Certain sections of the manuscript were refined using AI-assisted writing tools (e.g., ChatGPT, Grammarly). All analyses, scientific content, and experiments were written solely by the authors. References Daniyal Alghazzawi, Hayat Ullah, Naila Tabassum, Sahar K Badri, and Muhammad Zubair Asghar. 2025. Explainable ai-based suicidal and non-suicidal ideations detection from social media text with enhanced ensemble technique. Scientific Reports, 15(1):1111. Falwah Alhamed, Julia Ive, and Lucia Specia. 2024. Classifying social media users before and after depression diagnosis via their language usage: A dataset and study. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 3250-3260. Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computational Linguistics, 4:463-476. Brooke A Ammerman and Ross Jacobucci. 2023. The impact of social connection on near-term suicidal ideation. Psychiatry research, 326:115338. Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. The pushshift reddit dataset. In Proceedings of the international AAAI conference on web and social media, volume 14, pages 830-839. Peter S Bearman and James Moody. 2004. Suicide and friendships among american adolescents. American journal of public health, 94(1):89-95. Adrian Benton, Margaret Mitchell, and Dirk Hovy. 2017. Multi-task learning for mental health using social media text. arXiv preprint . Paul Bloom, Isaac Treves, David Pagliaccio, Isabella Nadel, Emma Wool, Hayley Quinones, Julia Greenblatt, Natalia Parjane, Katherine Durham, Samantha Salem, and 1 others. 2025. Identifying suiciderelated language in smartphone keyboard entries among high-risk adolescents. Urie Bronfenbrenner. 1979. The ecology of human development: Experiments by nature and design. Harvard university press. Pete Burnap, Walter Colombo, and Jonathan Scourfield. 2015. Machine classification and analysis of suiciderelated communication on twitter. In Proc. ACM conference on hypertext & social media. Alison L Calear and Philip J Batterham. 2019. Suicidal ideation disclosure: Patterns, correlates and outcome. Psychiatry research, 278:1-6. Ian Cero, Munmun De Choudhury, and Peter A Wyman. 2024. Social network structure as a suicide prevention target. Social psychiatry and psychiatric epidemiology, 59(3):555-564. Stevie Chancellor and Munmun De Choudhury. 2020. Methods in predictive techniques for mental health status on social media: a critical review. NPJ digital medicine, 3(1):1-11. Stevie Chancellor, Zhiyuan Lin, Erica L Goodman, Stephanie Zerwas, and Munmun De Choudhury. 2016. Quantifying and predicting mental illness severity in online pro-eating disorder communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pages 1171-1184. ACM. Siyuan Chen, Zhiling Zhang, Mengyue Wu, and Kenny Zhu. 2023. Detection of multiple mental disorders from social media with two-stream psychiatric experts. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9071-9084. Nicholas A Christakis and James H Fowler. 2025. Connected: The surprising power of our social networks and how they shape our lives. Hachette+ ORM. 10 Cindy Chung and James W Pennebaker. 2007. The psychological functions of function words. Social communication, pages 343-359. Molly Copeland. 2021. The long shadow of peers: adolescent networks and young adult mental health. Social Sciences, 10(6):231. Glen Coppersmith, Mark Dredze, and Craig Harman. 2014. Quantifying mental health signals in twitter. In Proceedings of the workshop on computational linguistics and clinical psychology: From linguistic signal to clinical reality, pages 51-60. Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural language processing of social media as screening for suicide risk. Biomedical informatics insights, 10:1178222618792860. Glen Coppersmith, Kim Ngo, Ryan Leary, and Anthony Wood. 2016. Exploratory analysis of social media prior to a suicide attempt. In Proceedings of the third workshop on computational linguistics and clinical psychology, pages 106-117. Carolyn E Cutrona and Beth R Troutman. 1986. Social support, infant temperament, and parenting selfefficacy: A mediational model of postpartum depression. Child development, pages 1507-1518. Munmun De Choudhury. 2015. Social media for mental illness risk assessment, prevention and support. In Proceedings of the 1st ACM Workshop on Social Media World Sensors, pages 1-1. ACM. Munmun De Choudhury and Sushovan De. 2014. Mental health discourse on reddit: Self-disclosure, social support, and anonymity. In Proceedings of the international AAAI conference on web and social media, volume 8, pages 71-80. Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. In Proceedings of the international AAAI conference on web and social media, volume 7, pages 128-137. Munmun De Choudhury and Emre Kiciman. 2017. The language of social support in social media and its effect on suicidal ideation risk. In Proceedings of the international AAAI conference on web and social media, volume 11, pages 32-41. Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI conference on human factors in computing systems, pages 2098-2110. Jacob Eisenstein, Amr Ahmed, and Eric P Xing. 2011. Sparse additive generative models of text. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 1041-1048. Asra Fatima, Ying Li, Thomas Trenholm Hills, and Massimo Stella. 2021. Dasentimental: Detecting depression, anxiety, and stress in texts via emotional recall, cognitive networks, and machine learning. Big Data and Cognitive Computing, 5(4):77. Joseph C Franklin, Jessica D Ribeiro, Kathryn R Fox, Kate H Bentley, Evan M Kleiman, Xieyining Huang, Katherine M Musacchio, Adam C Jaroszewski, Bernard P Chang, and Matthew K Nock. 2017. Risk factors for suicidal thoughts and behaviors: A metaanalysis of 50 years of research. Psychological bulletin, 143(2):187. George Gkotsis, Anika Oellrich, Sumithra Velupillai, Maria Liakata, Tim JP Hubbard, Richard JB Dobson, and Rina Dutta. 2017. Characterisation of mental health conditions in social media using informed deep learning. Scientific reports, 7(1):1-11. Madelyn Gould, Patrick Jamieson, and Daniel Romer. 2003. Media contagion and suicide among the young. American Behavioral Scientist, 46(9):1269-1284. Maarten Grootendorst. 2022. Bertopic: Neural topic modeling with a class-based tf-idf procedure. arXiv preprint . Sharath Chandra Guntuku, David B Yaden, Margaret L Kern, Lyle H Ungar, and Johannes C Eichstaedt. 2017. Detecting depression and mental illness on social media: an integrative review. Current Opinion in Behavioral Sciences, 18:43-49. Guibing Guo, Jie Zhang, and Daniel Thalmann. 2012. A simple but effective method to incorporate trusted neighbors in recommender systems. In International conference on user modeling, adaptation, and personalization, pages 114-125. Springer. David John Hallford, Danielle Rusanov, B Winestone, R Kaplan, Matthew Fuller-Tyszkiewicz, and Glenn Melvin. 2023. Disclosure of suicidal ideation and behaviours: A systematic review and meta-analysis of prevalence. Clinical psychology review, 101:102272. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing. arXiv preprint . Melanie A Hom, Ian H Stanley, Matthew C Podlogar, and Thomas E Joiner Jr. 2017. "are you having thoughts of suicide?" examining experiences with disclosing and denying suicidal ideation. Journal of Clinical Psychology, 73(10):1382-1392. Shaoxiong Ji. 2022a. Towards intention understanding in suicidal risk assessment with natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4057-4067. The Association for Computational Linguistics. Shaoxiong Ji. 2022b. Towards intention understanding in suicidal risk assessment with natural language processing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4057-4067. The Association for Computational Linguistics. 11 Jazette Johnson, Vitica Arnold, Anne Marie Piper, and Gillian R Hayes. 2022. " it's a lonely disease": Cultivating online spaces for social support among people living with dementia and dementia caregivers. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2):1-27. Thomas E. Joiner. 2005. Why People Die by Suicide. Harvard University Press. Meeyun Kim, Koustuv Saha, Munmun De Choudhury, and Daejin Choi. 2023. Supporters first: understanding online social support on mental health from a supporter perspective. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1):1-28. Olivia J Kirtley, Ian Hussey, and Lisa Marzano. 2021. Exposure to and experience of self-harm and selfharm related content: An exploratory network analysis. Psychiatry Research, 295:113572. E David Klonsky and Alexis M May. 2015. The threestep theory (3st): A new theory of suicide rooted in the "ideation-to-action" framework. International Journal of Cognitive Therapy, 8(2):114-129. Annette M La Greca and Hannah Moore Harrison. 2005. Adolescent peer relations, friendships, and romantic relationships: Do they predict social anxiety and depression? Journal of clinical child and adolescent psychology, 34(1):49-61. Moritz Laurer, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2023. Less annotating, more classifying: Addressing the data scarcity issue of supervised machine learning with deep transfer learning and bert-nli. Political Analysis, pages 1-33. Fangcao Lu and Caixie Tu. 2024. The impact of comment slant and comment tone on digital health communication among polarized publics: A web-based survey experiment. Journal of medical Internet research, 26:e57967. Lauren McGillivray, Demee Rheinberger, Jessica Wang, Alexander Burnett, and Michelle Torok. 2022. Nondisclosing youth: a cross sectional study to understand why young people do not disclose suicidal thoughts to their mental health professional. BMC psychiatry, 22(1):3. Saskia Mérelle, Elise Foppen, Renske Gilissen, Jan Mokkenstorm, Resi Cluitmans, and Wouter Van Ballegooijen. 2018. Characteristics associated with nondisclosure of suicidal ideation in adults. International journal of environmental research and public health, 15(5):943. Usman Naseem, Liang Hu, Qi Zhang, Shoujin Wang, and Shoaib Jameel. 2025. Digri: Distorted greedy approach for human-assisted online suicide ideation detection. In Proceedings of the ACM on Web Conference 2025, pages 5192-5201. Thomas Niederkrotenthaler, Martin Voracek, Arno Herberth, Benedikt Till, Markus Strauss, Elmar Etzersdorfer, Brigitte Eisenwort, and Gernot Sonneck. 2010. Role of media reports in completed and prevented suicide: Werther v. papageno effects. The British Journal of Psychiatry, 197(3):234-243. Rory C O'Connor and Olivia J Kirtley. 2018. The integrated motivational-volitional model of suicidal behaviour. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1754):20170268. James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: Liwc 2001. Mahway: Lawrence Erlbaum Associates, 71(2001):2001. Anxo Pérez, Neha Warikoo, Kexin Wang, Javier Parapar, and Iryna Gurevych. 2022. Semantic similarity models for depression severity estimation. arXiv preprint . David P Phillips. 1974. The influence of suggestion on suicide: Substantive and theoretical implications of the werther effect. American sociological review, pages 340-354. Matthew C Podlogar, Peter M Gutierrez, and Thomas E Joiner. 2022. Past levels of mental health intervention and current nondisclosure of suicide risk among men older than age 50. Assessment, 29(8):1611-1621. Huachuan Qiu, Lizhi Ma, and Zhenzhong Lan. 2024. Psyguard: An automated system for suicide detection and risk assessment in psychological counseling. arXiv preprint . Bhanu Pratap Singh Rawat, Samuel Kovaly, Wilfred R Pigeon, and Hong Yu. 2022. Scan: suicide attempt and ideation events dataset. In Proceedings of the conference. Association for Computational Linguistics. North American Chapter. Meeting, volume 2022, page 1029. Koustuv Saha and Munmun De Choudhury. 2017. Modeling stress with social media around incidents of gun violence on college campuses. PACM HumanComputer Interaction, (CSCW). Koustuv Saha, Sindhu Kiranmai Ernala, Sarmistha Dutta, Eva Sharma, and Munmun De Choudhury. 2020. Understanding moderation in online mental health communities. In HCII. Springer. Koustuv Saha and Amit Sharma. 2020. Causal factors of effective psychosocial outcomes in online mental health communities. In Proceedings of the international AAAI conference on web and social media, volume 14, pages 590-601. Koustuv Saha, Benjamin Sugar, John Torous, Bruno Abrahao, Emre Kıcıman, and Munmun De Choudhury. 2019. A social media study on the effects of psychiatric medication use. In Proceedings of the International AAAI Conference on Web and Social Media. 12 Koustuv Saha, Asra Yousuf, Ryan L Boyd, James W Pennebaker, and Munmun De Choudhury. 2022. Social media discussions predict mental health consultations on college campuses. Scientific reports, 12(1):123. Ramit Sawhney, Harshit Joshi, Lucie Flek, and Rajiv Shah. 2021a. Phase: Learning emotional phaseaware representations for suicide ideation detection on social media. In Proceedings of the 16th conference of the European Chapter of the Association for Computational Linguistics: main volume, pages 2415-2428. Ramit Sawhney, Harshit Joshi, Saumya Gandhi, and Rajiv Shah. 2020. A time-aware transformer based model for suicide ideation detection on social media. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 7685-7697. Ramit Sawhney, Harshit Joshi, Rajiv Shah, and Lucie Flek. 2021b. Suicide ideation detection via social and temporal user representations using hyperbolic learning. In Proceedings of the 2021 conference of the North American Chapter of the Association for Computational Linguistics: human language technologies, pages 2176-2190. Henk G Schmidt, Geoffrey R Norman, Silvia Mamede, and Mohi Magzoub. 2024. The influence of context on diagnostic reasoning: A narrative synthesis of experimental findings. Journal of Evaluation in Clinical Practice, 30(6):1091-1101. Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5263-5276. Eva Sharma and Munmun De Choudhury. 2018. Mental health support and its relationship to linguistic accommodation in online communities. In Proceedings of the 2018 CHI conference on human factors in computing systems, pages 1-13. Chen Shen and 1 others. 2020. Suicide risk prediction using social media data. In Proceedings of AAAI. Soorya Ram Shimgekar, Violeta J Rodriguez, Paul A Bloom, Dong Whi Yoo, and Koustuv Saha. 2025. Interpersonal theory of suicide as a lens to examine suicidal ideation in online spaces. arXiv preprint . Daun Shin, Hyoseung Kim, Seunghwan Lee, Younhee Cho, and Whanbo Jung. 2024. Using large language models to detect depression from user-generated diary text data as a novel approach in digital mental health screening: instrument validation study. Journal of Medical Internet Research, 26:e54617. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24-54. Shiyu Teng, Jiaqing Liu, Rahul Kumar Jain, Shurong Chai, Ruibo Hou, Tomoko Tateyama, Lanfen Lin, and Yen-wei Chen. 2025. Enhancing depression detection with chain-of-thought prompting: From emotion to reasoning using large language models. arXiv preprint . Marcel Trotzek, Sven Koitka, and Christoph M Friedrich. 2018. Utilizing neural networks and linguistic metadata for early detection of depression indications in text sequences. IEEE Transactions on Knowledge and Data Engineering, 32(3):588-601. Sarah E Victor, Alison E Hipwell, Stephanie D Stepp, and Lori N Scott. 2019. Parent and peer relationships as longitudinal predictors of adolescent non-suicidal self-injury onset. Child and adolescent psychiatry and mental health, 13(1):1. Yuxi Wang, Diana Inkpen, and Prasadith Kirinde Gamaarachchige. 2024. Explainable depression detection using large language models on social media data. In Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024), pages 108-126. WHO. 2025. Suicide worldwide in 2021: global health estimates. World Health Organization. Peter A Wyman, Trevor A Pickering, Anthony R Pisani, Kelly Rulison, Karen Schmeelk-Cone, Chelsey Hartley, Madelyn Gould, Eric D Caine, Mark LoMurray, Charles Hendricks Brown, and 1 others. 2019. Peeradult network structure and suicide attempts in 38 high schools: Implications for network-informed suicide prevention. Journal of Child Psychology and Psychiatry, 60(10):1065-1075. Yunhao Yuan, Koustuv Saha, Barbara Keller, Erkki Tapio Isometsä, and Talayeh Aledavood. 2023. Mental health coping stories on social media: A causal-inference study of papageno effect. In Proceedings of the ACM Web Conference 2023, pages 2677-2685. Dongsong Zhang, Lina Zhou, Jie Tao, Tingshao Zhu, and Guodong Gao. 2025. Ketch: A knowledgeenhanced transformer-based approach to suicidal ideation detection from social media content. Information Systems Research, 36(1):572-599. Zhiling Zhang, Siyuan Chen, Mengyue Wu, and Kenny Q Zhu. 2022. Symptom identification for interpretable detection of multiple mental disorders. arXiv preprint . Ayah Zirikly, Philip Resnik, Ozlem Uzuner, and Kristy Hollingshead. 2019. Clpsych 2019 shared task: Predicting the degree of suicide risk in reddit posts. In Proceedings of the sixth workshop on computational linguistics and clinical psychology, pages 24-33. 13 A Appendix Table A1: Model Performance on Remaining Data Combinations (epochs=20). Model Data Used Accuracy F1 Precision Recall M8 Top neighbor's posts only 0.83 0.82 0.92 0.74 M9 Target user posts + comments + top neighbor posts 0.91 0.92 0.91 0.92 M10 Target user posts + comments + others' comments + top neighbor posts 0.71 0.64 0.75 0.56 M11 Filtered neighbor posts only (no post/comments in r/SuicideWatch) 0.78 0.77 0.73 0.82 Table A2: Performance comparison of transformer models on the classification task. Model Accuracy F1 Precision Recall microsoft/deberta-v3-large 0.955 0.946 0.945 0.945 roberta-large 0.944 0.931 0.944 0.919 bert-base-uncased 0.933 0.921 0.897 0.946 google/electra-large-discriminator 0.944 0.935 0.900 0.973 Table A3: Comparison of LIWC affect categories between M4 and M7 showing how the LIWC Affect category varies when considering all top neighbors vs. filtered top neighbors. Columns show mean proportions for users showing suicidal ideation (SI) and not, their difference, and the Kruskal-Wallis H-statistic. Significance codes: ***p<0.001, **p<0.01, *p<0.05. Category M4 M7 Case Control Diff. H Case Control Diff. H posemo 0.048 0.053 -0.005 23.568*** 0.050 0.053 -0.003 17.482*** negemo 0.029 0.021 0.007 169.871*** 0.032 0.025 0.007 72.531*** anx 0.003 0.002 0.000 62.663*** 0.003 0.002 0.001 63.084*** anger 0.011 0.006 0.005 130.857*** 0.010 0.006 0.004 51.436*** sad 0.009 0.005 0.004 181.551*** 0.008 0.007 0.001 71.131*** 14 Table A4: Comparison of LIWC affect categories across M4, M5, and M6 showing how the Affect LIWC category varies across various neighbors (top, worst, random). Columns show mean proportions for users showing suicidal ideation (SI) and not, their difference, and the Kruskal-Wallis H-statistic. Significance codes: ***p<0.001, **p<0.01, *p<0.05. Category M4 M5 M6 Case Control Diff. H Case Control Diff. H Case Control Diff. H posemo 0.048 0.053 -0.005 23.568*** 0.047 0.049 -0.003 38.611*** 0.048 0.047 0.001 42.425*** negemo 0.029 0.021 0.007 169.871*** 0.029 0.025 0.004 74.947*** 0.030 0.024 0.006 59.078*** anx 0.003 0.002 0.000 62.663*** 0.004 0.002 0.002 53.518*** 0.003 0.002 0.001 125.397*** anger 0.011 0.006 0.005 130.857*** 0.009 0.006 0.003 40.154*** 0.009 0.006 0.002 138.704*** sad 0.009 0.005 0.004 181.551*** 0.007 0.007 0.000 71.131*** 0.007 0.008 -0.001 36.640*** Table A5: Comparison of LIWC affect categories for M8. Columns show mean proportions for suicidal and nonsuicidal users, their difference, and the Kruskal-Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M8 Case Control Diff H posemo 0.053 0.053 0.000 0.409 negemo 0.026 0.025 0.001 4.805* anx 0.001 0.002 -0.001 3.323 anger 0.009 0.006 0.003 9.523** sad 0.005 0.007 -0.001 3.638 Table A6: Comparison of LIWC affect categories for M9. Columns show mean proportions for suicidal and nonsuicidal users, their difference, and the Kruskal-Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M9 Case Control Diff H posemo 0.050 0.053 -0.003 17.482*** negemo 0.032 0.025 0.007 72.531*** anx 0.003 0.002 0.001 63.084*** anger 0.010 0.006 0.004 51.436*** sad 0.008 0.007 0.001 71.131*** Table A7: Comparison of LIWC affect categories for M10. Columns show mean proportions for suicidal and nonsuicidal users, their difference, and the Kruskal-Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M10 Case Control Diff H posemo 0.070 0.067 0.003 349.268*** negemo 0.031 0.028 0.003 538.074*** anx 0.003 0.002 0.000 403.691*** anger 0.010 0.009 0.001 281.206*** sad 0.007 0.006 0.001 361.992*** Table A8: Comparison of LIWC affect categories for M11. Columns show mean proportions for suicidal and nonsuicidal users, their difference, and the Kruskal-Wallis H statistic. Significance codes: ∗∗∗p < 0.001, ∗∗p < 0.01, ∗p < 0.05. Category M11 Case Control Diff H posemo 0.053 0.053 0.000 0.409 negemo 0.026 0.025 0.001 4.805* anx 0.001 0.002 -0.001 3.323 anger 0.009 0.006 0.003 9.523** sad 0.005 0.007 -0.001 3.638 15 Table A9: Performance metrics for different depth and neighbor configurations. Depth refers to the number of recursive depth considered, and Neighbors refers to the number of top neighbor posts included. Depth Neighbors Accuracy F1 Precision Recall 1 1 0.98 0.97 0.96 0.98 1 2 0.97 0.96 0.95 0.97 1 3 0.96 0.95 0.95 0.96 1 4 0.96 0.95 0.94 0.96 1 5 0.95 0.94 0.94 0.95 2 1 0.97 0.96 0.95 0.97 2 2 0.96 0.95 0.94 0.96 2 3 0.95 0.94 0.93 0.95 2 4 0.95 0.94 0.93 0.95 2 5 0.94 0.93 0.92 0.94 3 1 0.96 0.95 0.94 0.96 3 2 0.95 0.94 0.93 0.95 3 3 0.94 0.93 0.92 0.94 3 4 0.94 0.93 0.92 0.94 3 5 0.95 0.94 0.93 0.95 4 1 0.95 0.94 0.93 0.95 4 2 0.94 0.93 0.92 0.94 4 3 0.93 0.92 0.91 0.93 4 4 0.93 0.92 0.91 0.93 4 5 0.92 0.91 0.90 0.92 5 1 0.94 0.93 0.92 0.94 5 2 0.93 0.92 0.91 0.93 5 3 0.92 0.91 0.90 0.92 5 4 0.92 0.91 0.90 0.92 5 5 0.91 0.90 0.89 0.91 16
2510.14886
Ruelle-Pollicott Decay of Out-of-Time-Order Correlators in Many-Body Systems Jer´onimo Duarte,1 Ignacio Garc´ıa-Mata,1 and Diego A. Wisniacki2 1Instituto de Investigaciones F´ısicas de Mar del Plata (IFIMAR), Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata & CONICET, Funes 3350 (7600) Mar del Plata, Argentina 2Departamento de F´ısica “J. J. Giambiagi” and IFIBA, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires, Argentina (Dated: October 17, 2025) The out-of-time-order correlator (OTOC) quantifies information scrambling in quantum systems and serves as a key diagnostic of quantum chaos. In one-body systems with a classical counterpart, the relaxation of the OTOC is governed by Ruelle-Pollicott resonances. For many-body systems lacking a semiclassical limit, recent studies have identified an analogous role played by the Liouvillian spectrum of weakly open extensions of the dynamics, where the slowest decay rate?the Liouvillian gap?encodes relaxation. Here we study the kicked Ising spin chain and show that the long-time exponential decay of the OTOC in the isolated system occurs at a rate equal to twice this intrinsic gap. This correspondence persists even in crossover regimes between integrability and chaos, demon- strating that the Liouvillian spectrum provides a unified framework for understanding relaxation and irreversibility in closed many-body quantum systems. I. INTRODUCTION Understanding relaxation dynamics and the emergence of effective irreversibility in quantum many-body systems is a central problem at the interface of quantum chaos, statistical mechanics, and open-system theory. In clas- sical chaotic dynamics, the intermediate time decay of correlations is governed by the Ruelle-Pollicott (RP) res- onances [1–3] – isolated singularities of the analytically continued spectrum of the Frobenius?Perron operator – which generate effective irreversibility despite the under- lying deterministic and reversible motion. In quantum one-body systems with a classical limit, an analogous correspondence is well established through spectral and operator-truncation approaches [4–6] or through weakly dissipative formulations [7–9]. In contrast, identifying the quantum analogue of RP resonances in generic many-body systems remains an open question. Several works [10–12] have introduced coarse-grained formulations of the operator space, re- vealing isolated eigenvalues inside the unit disk that can be interpreted as quantum RP resonances. While con- ceptually appealing, this approach becomes impracti- cal for large systems due to the exponential growth of the operator space and the difficulty of defining an ap- propriate coarse-graining. A more recent development exploits translational invariance and quasi-momentum decomposition [13], enabling the identification of reso- nances within each symmetry sector. An alternative line of research focuses on the Liou- villian spectrum of weakly open quantum systems [14]. When a system is coupled to an environment, its relax- ation dynamics are governed by the eigenvalues of the corresponding Lindblad superoperator, with the slowest nonzero decay rate defining the Liouvillian gap. In ex- actly solvable models [15–17], this gap has been shown to coincide with the leading RP resonance of the corre- sponding isolated dynamics. The magnitude of the leading RP resonance determines the approach to equilibrium of dynamical observables, including the out-of-time-order correlator (OTOC). The OTOC measures the spread of local perturbations and serves as a sensitive probe of information scrambling in quantum systems. Although the butterfly effect in quan- tum chaos is not universal [18], the relaxation of the OTOC at late times provides a robust signature of under- lying dynamics. In one-body systems, the crossover from integrability to chaos can be characterized through the OTOC decay rate [19], a result that extends to systems with mixed classical dynamics [20]. Here we extend this connection to many-body systems. We study the kicked Ising spin chain, a paradigmatic model that exhibits both integrable and chaotic regimes depending on its parameters. Specifically, we analyze three complementary quantities: (i) the Liouvillian gap ¯g extracted from a weakly dissipative version of the model, (ii) standard chaos indicators from spectral statistics and eigenstate delocalization, and (iii) the intermediate time exponential decay rate α of the OTOC in the isolated chain. We show that α ≈2¯g even in transitional regimes between integrability and chaos, establishing the Liou- villian gap as a qualitative (and quantitative) measure of integrability in quantum many-body systems. II. MANY-BODY SYSTEM: KICKED ISING SPIN CHAIN To investigate the interplay between relaxation dy- namics, quantum chaos, and dissipation, we consider the kicked Ising spin chain [21, 22], a paradigmatic model of many-body Floquet dynamics. This system is partic- ularly well suited to our study because its parameters allow for a continuous interpolation between integrable and chaotic behavior. arXiv:2510.14886v1 [quant-ph] 16 Oct 2025 2 The model is defined by the time-periodic Hamiltonian ˆH(t) = ˆHfree + ˆHkick τ ∞ X n=−∞ δ(t −nτ), (1) where the “free” and “kick” components take the form          ˆHfree = −J L−2 P i=0 ˆσz i ˆσz i+1 −hz L−1 P i=0 ˆσz i , ˆHkick = −hx L−1 P i=0 ˆσx i . (2) Here, ˆσα i (α = x, z) denote Pauli operators acting on site i, J is the nearest-neighbor coupling, and hx, hz are transverse and longitudinal magnetic fields, respec- tively. The dynamics is stroboscopic with period τ, and the evolution over one period is governed by the Floquet operator ˆUF = ˆUfree ˆUkick, (3) where    ˆUfree = exp(−i ˆHfreeτ), ˆUkick = exp(−i ˆHkickτ). (4) This Floquet representation conveniently describes both unitary and, when extended, weakly dissipative evolu- tions. Throughout this work we set J = hz = τ = 1 and vary the transverse field hx, which controls the degree of chaoticity. These values ensure nontrivial dynamics while reducing the parameter space to a single effective control variable. All simulations are performed for system sizes up to L = 12 spins. To understand the structure of the spectrum, it is es- sential to discuss the symmetries present in the model. Since we work with a Floquet operator, the relevant spec- trum consists of quasienergies. Throughout this study we impose open boundary conditions (OBC). The sys- tem possesses an external reflection symmetry ˆR, which acts as ˆR |m1, m2, . . . , mL−2, mL−1⟩= |mL−1, mL−2, . . . , m2, m1⟩, where |m1, m2, . . . , mL−1⟩denotes a computational-basis state. Because [ ˆUF , ˆR] = 0, the Hilbert space decomposes into two invariant subspaces corresponding to the eigen- values +1 and −1 of ˆR, which we refer to as the even and odd parity sectors. This symmetry decomposition allows us to analyze each sector independently, avoiding spectral mixing and enabling a cleaner identification of chaotic behavior and Liouvillian relaxation modes. III. QUANTUM CHAOS DIAGNOSTICS To characterize the degree of quantum chaos in the kicked Ising spin chain, we employ two complementary diagnostics: spectral statistics and eigenstate delocal- ization. These indicators enable us to distinguish be- tween integrable, chaotic, and intermediate regimes, pro- viding essential context for interpreting the behavior of the OTOC and its connection to the Liouvillian gap. A well-established signature of quantum chaos is the statistical distribution of level spacings [23]. Chaotic systems exhibit Wigner-Dyson distributions predicted by random matrix theory (RMT), with the specific ensemble—in our case, the Circular Orthogonal Ensem- ble (COE)—determined by the symmetries of the sys- tem [24]. In contrast, integrable systems display Poisson statistics reflecting the presence of extensive conserved quantities [25]. Rather than computing the full level-spacing distribu- tion, we employ the ratio of adjacent spacings [26], which provides a practical and robust diagnostic. For the Flo- quet operator satisfying ˆUF |ψn⟩= eiφn |ψn⟩, n = 1, 2, . . . , D, (5) where φn are the quasienergies, |ψn⟩are the eigenstates, and D is the Hilbert space dimension, we compute rn = min(δn, δn−1) max(δn, δn−1), δn = φn+1 −φn. (6) The mean value ⟨r⟩exhibits two distinct limits: ⟨r⟩COE ≈ 0.5307 for COE statistics and ⟨r⟩P ≈0.3863 for Poisson statistics [27, 28]. To interpolate between these limits, we define the normalized parameter η = ⟨r⟩−⟨r⟩P ⟨r⟩COE −⟨r⟩P , (7) which provides a continuous measure ranging from η →0 in the integrable regime to η →1 in the fully chaotic regime. A complementary diagnostic is provided by eigenstate delocalization, quantified through the participation ra- tio (PR) [29]. Consider an eigenstate |ψi⟩expanded in a reference basis {|ϕj⟩}D−1 j=0 as |ψi⟩= P j aij |ϕj⟩. The participation ratio is defined as ξE(i) =   D−1 X j=0 |aij|4   −1 . (8) This quantity measures how extended an eigenstate is in the chosen basis. Small values indicate localization, while high values signal delocalization. For chaotic systems consistent with RMT, the coefficients |aij|2 behave as independent random variables, leading to typical values ξdeloc E ≈D/3, where D is the Hilbert space dimension [30, 31]. In our numerical analysis, we compute the average of ξE(i) over all eigenstates. These spectral and eigenstate-based chaos indicators provide a quantitative map of the dynamical regime of the system. We analyze their behavior as a function 3 of the transverse field strength hx, which controls the kick strength in the Floquet dynamics. To avoid spec- tral mixing between symmetry sectors, all analyses are performed within the even-parity subspace, ensuring a clearer characterization of the underlying dynamical be- havior. This characterization serves as a reference for studying the Liouvillian gap and exploring potential con- nections between unitary and dissipative indicators of quantum chaos. IV. OUT-OF-TIME-ORDER CORRELATOR The out-of-time-order correlator (OTOC) quantifies the scrambling of quantum information and is defined as the thermal expectation value of the squared commu- tator between two operators evaluated at different times: C(t) = h ˆW(t), ˆV i h ˆW(t), ˆV i† , (9) where ˆW(t) is the Heisenberg picture evolution of ˆW. Ex- pectation values are taken over the infinite-temperature ensemble, ⟨·⟩= Tr(·)/D, with D the Hilbert-space di- mension. This ensemble is appropriate for Floquet sys- tems or Hamiltonians with bounded spectra, where mi- crocanonical and infinite-temperature averages coincide in the thermodynamic limit. Originally introduced in studies of superconductiv- ity [32], the OTOC has become a central tool for char- acterizing quantum chaos and information scrambling in many-body systems [33]. The OTOC gained attention when upper bound for the exponential growth was found [34], setting the bar of fast scramblers like black holes[35] and strongly correlated fermionic systems[36–38]. In sys- tems with a classical limit, the growth rate coincides with the classical Lyapunov exponent [19, 39–41]. At later times, quantum interference leads to saturation around a constant plateau with residual fluctuations that depend on the underlying dynamics [42]. We focus on local Hermitian and unitary operators ˆW and ˆV acting on distinct sites of the spin chain. A conve- nient choice is given by Pauli operators ˆσµ i (µ = x, y, z) acting on site i. At infinite temperature, the OTOC takes the explicit form Cµν(l, t) = 1 2 D [ˆσµ 0 (t), ˆσν l ]2E = 1 −1 D Re {Tr [ˆσµ 0 (t)ˆσν l ˆσµ 0 (t)ˆσν l ]} . (10) We define Oµν 1 (l, t) = 1 D Re {Tr [ˆσµ 0 (t)ˆσν l ˆσµ 0 (t)ˆσν l ]} , (11) so that Cµν(l, t) = 1−Oµν 1 (l, t). In what follows we focus on the case µ = ν = z, denoted Czz(l, t), which probes the scrambling and relaxation of local ˆσz operators. At intermediate times, the approach to saturation of C(t), at (what we will call intermediate times) is reflected by an exponential decay of O1(t), which can be present even in systems which are not completely chaotic [20]. For one-body, completely chaotic systems, the decay is given by the classical RP resonances[19]. Figure 1 illustrates the time evolution of |Ozz 1 (l, t)| for l = 1 and a transverse field hx deep in the chaotic regime. The data, obtained for L = 12, display a clear exponen- tial decay at intermediate times . An exponential fit of the form |Ozz 1 (1, t)| ∝e−αt yields the decay rate α, which we compare below with twice the Liouvillian relaxation rate extracted from the weakly dissipative extension of the model. 0 10 20 30 40 50 60 t 10−4 10−3 10−2 10−1 100 |Ozz 1 | FIG. 1. The absolute value of Ozz 1 (1, t) for a transverse field hx in the chaotic regime. The data corresponds to a spin chain of size L = 12 and hx = 0.8168. The blue line with circles represents Ozz 1 (1, t), while the dashed line is the exponential fit to the intermediate time behavior of |Ozz 1 (1, t)|, with a decay exponent α = 0.2888. V. LIOUVILLIAN GAP To investigate the relation between the decay rate dis- cussed in the previous section and the spectral properties of the system, we follow Ref. [14]. In this approach, the system is extended to a weakly open setting described by a Liouvillian superoperator L = −i[H, · ] + γD, (12) where D is a dissipator and γ the dissipation strength. Building on the kicked Ising model defined above, we introduce bulk dephasing and study the corresponding relaxation dynamics through the time-periodic Lindblad master equation dρ dt = −i [ ˆH(t), ρ] + γ L−1 X i=0  σz i ρ σz i − 1 2{σz i σz i , ρ}  ≡L(t) ρ, (13) 4 where γ > 0 controls the dephasing strength and ˆH(t) = ˆH(t + τ) is the Floquet Hamiltonian. The Liouvillian superoperator L(t) has a complex spec- trum λα with negative real parts. For time-independent generators (L) there is one zero mode (λ0 = 0) corre- sponding to the steady state, plus decaying modes with Re λα < 0. The Liouvillian gap g = −max α̸=0 Re λα (14) characterizes the slowest non-trivial relaxation rate. In periodically driven systems it is convenient to define the Floquet map UF = T e R τ 0 L(t) dt (15) where T denotes time ordering. The eigenvalues of UF are eλατ, allowing one to extract g equivalently from the Floquet-Liouvillian spectrum. 0.00 0.05 0.10 0.15 0.20 γ 0.0 0.2 0.4 0.6 g ¯g FIG. 2. Liouvillian gap g(γ) as a function of the dissipation strength γ for system sizes L = 6 (blue circles), L = 8 (orange squares), and L = 10 (green triangles) and hx = 0.8168. The dashed line indicate quadratic fit. For L = 10, the extrapo- lated value of ¯g is 0.1429 In chaotic many-body systems, it has been shown that taking the limits L →∞first and γ →0, the gap con- verges to a nonzero limit ¯g = lim γ→0 lim L→∞g, (16) which coincides with the real part of the leading Ruelle- Pollicott resonance of the corresponding isolated chain [14]. To extract ¯g in the weak dissipation regime, we com- pute g(γ) for several small values of γ and perform a quadratic fit in γ. Following Eq. (16), the fit is applied to the largest available system size. The eigenvalues of the Floquet-Liouvillian spectrum, from which g(γ) is ob- tained, are computed using the Arnoldi-Lindblad method described in Appendix A. This procedure provides the ex- trapolated value ¯g ≡limγ→0 limL→∞g(γ), characterizing the intrinsic relaxation of the isolated chain. Figure 2 shows the Liouvillian gap g(γ) as a function of the dissipation strength for system sizes L = 6, 8, 10 at hx = 0.8168. Dashed lines indicate quadratic fits, and for L = 10 the extrapolated value ¯g = 0.1429. Remark- ably, the corresponding OTOC decay rate obtained in the isolated chain satisfies α ≃2¯g, demonstrating that the Liouvillian relaxation rate of the weakly open system predicts the intermediate time OTOC decay in the closed dynamics. Parity-resolved analysis – We analyze the Liouvillian spectrum further within each parity subspace. Across the integrable-to-chaotic transition, the gaps of the two subspaces intersect when plotted as a function of the dissipation strength. As a result, the smallest nonzero eigenvalue – and hence the global Liouvillian gap – orig- inates from the even subspace in certain parameter re- gions, while in others it comes from the odd subspace. This leads to an effective switching between sectors, so that the extracted global Liouvillian gap combines contri- butions from both subspaces depending on which hosts the slowest decaying mode. Importantly, this does not imply that the Arnoldi-Lindblad method is superior to the approach in Ref. [14]; rather, it allows us to probe parameter regimes?particularly the crossover from inte- grable to chaotic dynamics?where the behavior of the Li- ouvillian gaps can be resolved in detail. To carry out this analysis, we explicitly constructed both the Lindbladian superoperator and the spatial in- version superoperator. By diagonalizing the latter, we obtained its eigenvalues and eigenvectors, which allowed us to separate the Hilbert space into parity-even and parity-odd subspaces. Since the Lindbladian commutes with the spatial inversion operator, it can be represented in this eigenbasis as a block-diagonal matrix, with one block corresponding to each parity sector. Restricting the dynamics to each block separately, we extracted the eigenvalues of the Lindbladian within each subspace, and thereby determined the corresponding parity-resolved Li- ouvillian gaps. Figure 3 compares the Liouvillian gaps computed within each parity subspace?parity-even (blue empty cir- cles) and parity-odd (orange empty squares)?with the global gap obtained via the Arnoldi-Lindblad method (green crosses). Solid lines serve as guides to the eye for the subspace gaps. As the dissipation parameter is varied, the dominant eigenvalue can switch between sub- spaces: in certain regions the even sector exhibits the smallest eigenvalue, while in others the odd sector be- comes dominant. The Arnoldi method naturally captures this transition, since it always selects the eigenvalue with the smallest real part across all subspaces. Moreover, as the system size increases, the spectra of the two par- ity sectors become increasingly separated, making such crossings less probable. This behavior highlights that the Arnoldi approach consistently identifies the slowest 5 0.00 0.05 0.10 0.0 0.2 0.4 g (a) 0.00 0.02 0.04 0.06 γ 0.00 0.01 0.02 g (b) FIG. 3. Comparison of Liouvillian gaps for two values of the transverse field hx. (a) hx = 0.7854. The parity-even sub- space is shown in blue empty circles, the parity-odd subspace in orange empty squares, and the global gap obtained via the Arnoldi-Lindblad method in green crosses. (b) hx = 0.2749. The parity- even subspace is shown in blue empty circles, the parity- odd subspace in orange empty squares, and the global gap obtained via the Arnoldi-Lindblad method in green crosses. decaying mode of the global dynamics, regardless of the underlying sectoral structure. The case analyzed in Fig. 3(a) corresponds to the fully chaotic regime, where the Liouvillian gap curves for the two parity subspaces are clearly separated and no crossings occur, reflecting a well-defined hierarchy of decay rates between the sectors. In contrast, Fig. 3(b) shows an intermediate regime between integrability and chaos, where the curves partially overlap and crossings emerge, indicating that the slowest decaying mode can originate from either subspace depending on the dissi- pation strength. Both regimes are further illustrated in Fig. 4, highlighting how the Arnoldi-Lindblad method ef- fectively captures the dominant relaxation dynamics and smoothly interpolates between different contributions as the system transitions from integrable to chaotic behav- ior. VI. RELATION BETWEEN OTOC DECAY, THE LIOUVILLIAN GAP AND CHAOS PARAMETERS In this section, we bring together the chaos indicators, OTOC decay exponents, and the Liouvillian gap?into a single comparative framework. Previously, we have shown how to compute the normalized level?spacing ra- tio η and participation ratio ¯ξE across different transverse fields hx, how to extract the intermediate time exponen- tial decay exponent α of the local correlator Ozz 1 (l = 1, t), and how to obtain the Liouvillian gap ¯g via quadratic ex- trapolation in the weak?dissipation limit for L = 10. 0.0 0.5 1.0 η, ¯ξE (a) 0.5 1.0 hx 0.0 0.1 0.2 0.3 α, 2¯g (b) FIG. 4. (a) Chaos indicators η and ¯ξE as functions of the transverse field hx, computed from the Floquet spectrum in the parity-even subspace for system size L = 12. Blue upward triangles denote η and cyan diamonds denote ¯ξE. (b) Compar- ison between the OTOC decay exponent α (red crosses) and twice the Liouvillian gap 2¯g (orange empty circles) as func- tions of hx. The Liouvillian gap ¯g is extrapolated for L = 10, while α is obtained from the isolated chain with L = 12. The two panels together illustrate how both dynamical and spec- tral quantities capture the transition between integrable and chaotic regimes. Figure 4(a) displays the chaos indicators η (blue up- ward triangles) and ¯ξE (cyan diamonds) as functions of the transverse field hx. Both quantities are computed from the unitary Floquet spectrum at system size L = 12, and?owing to the symmetries of the model?are analyzed within the parity-even subspace defined with respect to the external reflection operator. These indicators clearly capture the crossover from integrability to chaos. Figure 4(b) shows the comparison between the OTOC 6 decay exponent α (red crosses) and twice the Liouvillian gap 2¯g (orange empty circles) as functions of hx. For each value of hx, α is obtained by fitting |Ozz 1 (1, t)| to an exponential decay at intermediate times in the isolated chain of size L = 12, while ¯g is extrapolated from g(γ) for L = 10. The two quantities closely track each other across both fully chaotic and transitional regimes. Overall, this figure highlights that the OTOC decay exponent remains approximately equal to twice the Li- ouvillian gap for all hx values where scrambling-induced relaxation persists. For hx > 1.00, the correspondence becomes less pronounced, likely due to finite-size ef- fects?larger system sizes would be required to achieve quantitative convergence of both α and ¯g in this regime. Nevertheless, even at these modest sizes, a clear qualita- tive and quantitative agreement is observed, underscor- ing the robustness of the relationship between the OTOC decay rate and the Liouvillian gap. It is worth noting that the decay exponent α and the Liouvillian gap ¯g qualitatively track the transition be- tween dynamical regimes. A similar qualitative (and quantitative) behavior of the OTOC decay was previ- ously reported for quantum maps in Ref. [20]. VII. CONCLUSIONS We have investigated the connection between the decay of the out-of-time-order correlator (OTOC), the Liouvil- lian gap, and spectral indicators of chaos in many-body quantum systems, focusing on the kicked Ising spin chain. Our central finding is that the intermediate time expo- nential decay rate of the OTOC for generic, symmetry- unresolved operators equals twice the Liouvillian gap of the corresponding weakly open extension of the dynam- ics. This correspondence, previously established for sys- tems with a semiclassical limit, is shown here to per- sist across the entire crossover from integrable to chaotic regimes, demonstrating that the Liouvillian spectrum provides a unifying description of relaxation and irre- versibility in quantum many-body dynamics. A detailed parity-resolved analysis confirms that the intrinsic Liouvillian gap extracted from the Arnoldi?Lindblad method reproduces the global relax- ation rate of the system, even when the dominant decay mode shifts between symmetry sectors. This robustness highlights that both the OTOC and the Liouvillian gap respond coherently to changes in the underlying dynam- ical regime, providing consistent signatures of the tran- sition from integrability to chaos. The practicality of the Arnoldi?Lindblad approach lies in its ability to access the smallest Liouvillian eigenval- ues without constructing the full superoperator, enabling efficient computation of the Liouvillian gap and its ex- trapolation to the weak-dissipation limit. In contrast to the momentum-resolved analysis of Ref. [13] for systems with periodic boundary conditions, our framework cap- tures the global relaxation properties of the chain under open boundaries, providing a complementary perspective that links spectral relaxation modes directly to informa- tion scrambling. Overall, our results establish a quantitative bridge be- tween unitary and dissipative descriptions of quantum relaxation, extending the Liouvillian?OTOC correspon- dence beyond the fully chaotic regime. This connection suggests that Liouvillian spectroscopy may serve as a general diagnostic tool for intermediate time dynamics and operator spreading in isolated quantum systems. ACKNOWLEDGMENTS We acknowledge support from Argentinian Agen- cia I+D+i (Grants No. PICT-2020-SERIEA-00740 and PICT-2020-SERIEA-01082). I.G-M received sup- port from the French-Argentinian International Research Project Complex Quantum Systems (COQSYS), funded by CNRS. D.A.W. received support from CONICET (Grant No. PIP 11220200100568CO), UBACyT (Grant No. 20020220300049BA) and PREI-UNAM. Appendix A: Arnoldi-Lindblad time evolution To compute the Liouvillian spectrum efficiently, we employ the Arnoldi–Lindblad approach of Ref. [43]. This method allows one to access the smallest-magnitude eigenvalues of the Floquet-Lindbladian without con- structing the full superoperator, whose matrix represen- tation has dimensions D2×D2, with D being the Hilbert- space dimension. As a result, explicitly building the Li- ouvillian quickly becomes computationally prohibitive as the system size increases. We consider a periodically driven open quantum sys- tem whose stroboscopic dynamics is governed by the Lindblad master equation of period T, ∂tˆρ(t) = L(t) ˆρ(t), L(t) = L(t + T). (A1) The evolution over one period is formally given by the time-ordered exponential ˆρ(T) = T " exp Z T 0 L(t′) dt′ !# ˆρ(0) ≡F(T, 0) ˆρ(0), (A2) where F(T, 0) denotes the Floquet map associated with one full period of evolution. By periodicity, F(t, 0) = [F(T, 0)]n F(t −nT, 0), (A3) for integer n. Defining the Floquet Liouvillian LF through F(T, 0) = exp(LF T), (A4) 7 one obtains F ˆρF j = φF j ˆρF j = eλF j T ˆρF j ⇐⇒ LF ˆρF j = λF j ˆρF j . (A5) The eigenvalues λF j form the Floquet-Liouvillian spectrum, from which the Liouvillian gap g = −maxα̸=0 Re λF α is extracted. In practice, we generate the Krylov subspace Kn = {ˆρ(0), ˆρ(T), ˆρ(2T), . . . , ˆρ(nT)} = {ˆρ(0), F ˆρ(0), F2ˆρ(0), . . . , Fnˆρ(0)}. (A6) directly from time evolution under the Lindblad equa- tion, without explicitly constructing L or F. Applying the Arnoldi iteration to this subspace yields an effective Hessenberg matrix whose eigenvalues approximate those of F, providing the leading relaxation rates of the Li- ouvillian spectrum. The key advantage of this method is that the resulting Hessenberg matrix has dimensions n × n, with n ≪D2, thus drastically reducing the com- putational cost compared to handling the full Liouvillian superoperator. We validated the implementation by explicitly con- structing L for small system sizes (L = 4, 5, 6) and com- paring the resulting eigenvalues with those obtained from the Arnoldi–Lindblad method. The agreement was ex- cellent, confirming the accuracy and efficiency of the ap- proach. [1] D. Ruelle, Locating resonances for axiom a dynamical systems, J. Stat. Phys. 44, 281 (1986). [2] D. Ruelle, Resonances of chaotic dynamical systems, Phys. Rev. Lett. 56, 405 (1986). [3] M. Pollicott, On the rate of mixing of axiom a flows, Inventiones mathematicae 81, 413 (1985). [4] C. Manderfeld, J. Weber, and F. Haake, Classical versus quantum time evolution of (quasi-) probability densities at limited phase-space resolution, J. Phys. A: Math. Gen. 34, 9893 (2001). [5] M. Khodas and S. Fishman, Relaxation and diffusion for the kicked rotor, Phys. Rev. Lett. 84, 2837 (2000). [6] S. Fishman and S. Rahav, Relaxation and noise in chaotic systems, in Dynamics of Dissipation, edited by P. Gar- baczewski and R. Olkiewicz (Springer Berlin Heidelberg, Berlin, Heidelberg, 2002) pp. 165–192. [7] S. Nonnenmacher, Spectral properties of noisy classical and quantum propagators, Nonlinearity 16, 1685 (2003). [8] I. Garc´ıa-Mata and M. Saraceno, Spectral properties and classical decays in quantum open systems, Physical Re- view E 69, 056211 (2004). [9] I. Garc´ıa-Mata and M. Saraceno, Spectral approach to chaos and quantum-classical correspondence in quantum maps, Mod. Phys. Lett. B 19, 341 (2005). [10] T. Prosen, Ruelle resonances in quantum many-body dy- namics, J. Phys. A: Math. & Gen 35, L737 (2002). [11] T. Prosen, Ruelle resonances in kicked quantum spin chain, Physica D 187, 244 (2004). [12] T. Prosen, Chaos and complexity of quantum motion, J. Phys. A: Math. & Gen. 40, 7881 (2007). [13] M. ˇZnidariˇc, Momentum-dependent quantum ruelle- pollicott resonances in translationally invariant many- body systems, Phys. Rev. E 110, 054204 (2024) (2024). [14] T. Mori, Liouvillian-gap analysis of open quantum many- body systems in the weak dissipation limit, Physical Re- view B 109, 064311 (2024). [15] J. A. Jacoby, D. A. Huse, and S. Gopalakrishnan, Spectral gaps of local quantum channels in the weak- dissipation limit, Phys. Rev. B 111, 104303 (2025). [16] C. Zhang, L. Nie, and C. von Keyserlingk, Thermal- ization rates and quantum ruelle-pollicott resonances: insights from operator hydrodynamics, arXiv preprint arXiv:2409.17251 (2024). [17] T. Yoshimura and L. S´a, Theory of irreversibility in quantum many-body systems, Phys. Rev. E 111, 064135 (2025). [18] I. Garc´ıa-Mata, R. A. Jalabert, and D. A. Wisni- acki, Out-of-time-order correlations and quantum chaos, Scholarpedia 18, 55237 (2023). [19] I. Garc´ıa-Mata, M. Saraceno, R. A. Jalabert, A. J. Roncaglia, and D. A. Wisniacki, Chaos signatures in the short and long time behavior of the out-of-time ordered correlator, Phys. Rev. Lett. 121, 210601 (2018). [20] T. Notenson, I. Garc´ıa-Mata, A. J. Roncaglia, and D. A. Wisniacki, Classical approach to equilibrium of out-of- time ordered correlators in mixed systems, Phys. Rev. E 107, 064207 (2023). [21] T. Prosen, Exact time-correlation functions of quantum ising chain in a kicking transversal magnetic field: Spec- tral analysis of the adjoint propagator in heisenberg pic- ture, Prog. Theor. Phys. Suppl. 139, 191 (2000). [22] T. Prosen, General relation between quantum ergodic- ity and fidelity of quantum dynamics, Phys. Rev. E 65, 036208 (2002). [23] O. Bohigas, M. J. Giannoni, and C. Schmit, Character- ization of chaotic quantum spectra and universality of level fluctuation laws, Phys. Rev. Lett. 52, 1 (1984). [24] M. L. Mehta, Random matrices, Vol. 142 (Elsevier, 2004). [25] M. V. Berry and M. Tabor, Level clustering in the regular spectrum, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 356, 375 (1977). [26] V. Oganesyan and D. A. Huse, Localization of interacting fermions at high temperature, Phys. Rev. B 75, 155111 (2007). [27] Y. Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Distribution of the ratio of consecutive level spacings in random matrix ensembles, Phys. Rev. Lett. 110, 084101 (2013). [28] P. A. Lee and T. V. Ramakrishnan, Disordered electronic systems, Rev. Mod. Phys. 57, 287 (1985). [29] F. Wegner, Inverse participation ratio in 2+ ε dimen- sions, Zeitschrift f¨ur Physik B Condensed Matter 36, 209 (1980). [30] A. Gubin and L. F. Santos, Quantum chaos: An intro- duction via chains of interacting spins 1/2, Am. J. Phys. 80, 246 (2012). 8 [31] V. Zelevinsky, B. A. Brown, N. Frazier, and M. Horoi, The nuclear shell model as a testing ground for many- body quantum chaos, Physics Reports 276, 85 (1996). [32] A. Larkin and Y. N. Ovchinnikov, Quasiclassical method in the theory of superconductivity, Sov Phys JETP 28, 1200 (1969). [33] S. Xu and B. Swingle, Scrambling dynamics and out-of- time-ordered correlators in quantum many-body systems, PRX Quantum 5, 010201 (2024). [34] J. Maldacena, S. H. Shenker, and D. Stanford, A bound on chaos, Journal of High Energy Physics 2016, 1 (2016). [35] S. H. Shenker and D. Stanford, Black holes and the but- terfly effect, Journal of High Energy Physics 2014 (2013). [36] S. Sachdev and J. Ye, Gapless spin-fluid ground state in a random quantum heisenberg magnet, Phys. Rev. Lett. 70, 3339 (1993). [37] A. Kitaev, A simple model of quantum holography, talk given at KITP Program: Entanglement in Strongly- Correlated Quantum Matter, Vol. 7 (USA April, 2015). [38] J. Maldacena and D. Stanford, Remarks on the Sachdev- Ye-Kitaev model, Phys. Rev. D 94, 106002 (2016). [39] E. B. Rozenbaum, S. Ganeshan, and V. Galitski, Lya- punov exponent and out-of-time-ordered correlator’s growth rate in a chaotic system, Phys. Rev. Lett. 118, 086801 (2017). [40] R. A. Jalabert, I. Garc´ıa-Mata, and D. A. Wisniacki, Semiclassical theory of out-of-time-order correlators for low-dimensional classically chaotic systems, Phys. Rev. E 98, 062218 (2018). [41] X. Chen and T. Zhou, Operator scrambling and quantum chaos (2018). [42] E. M. Fortes, I. Garc´ıa-Mata, R. A. Jalabert, and D. A. Wisniacki, Gauging classical and quantum integrability through out-of-time-ordered correlators, Phys. Rev. E 100, 042201 (2019). [43] F. Minganti and D. Huybrechts, Arnoldi-Lindblad time evolution: Faster-than-the-clock algorithm for the spec- trum of time-independent and Floquet open quantum systems, Quantum 6, 649 (2022).
Ruelle-Pollicott Decay of Out-of-Time-Order Correlators in Many-Body Systems Jer ́onimo Duarte,1 Ignacio Garc ́ıa-Mata,1 and Diego A. Wisniacki2 1Instituto de Investigaciones F ́ısicas de Mar del Plata (IFIMAR), Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata & CONICET, Funes 3350 (7600) Mar del Plata, Argentina 2Departamento de F ́ısica "J. J. Giambiagi" and IFIBA, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires, Argentina (Dated: October 17, 2025) The out-of-time-order correlator (OTOC) quantifies information scrambling in quantum systems and serves as a key diagnostic of quantum chaos. In one-body systems with a classical counterpart, the relaxation of the OTOC is governed by Ruelle-Pollicott resonances. For many-body systems lacking a semiclassical limit, recent studies have identified an analogous role played by the Liouvillian spectrum of weakly open extensions of the dynamics, where the slowest decay rate?the Liouvillian gap?encodes relaxation. Here we study the kicked Ising spin chain and show that the long-time exponential decay of the OTOC in the isolated system occurs at a rate equal to twice this intrinsic gap. This correspondence persists even in crossover regimes between integrability and chaos, demonstrating that the Liouvillian spectrum provides a unified framework for understanding relaxation and irreversibility in closed many-body quantum systems. I. INTRODUCTION Understanding relaxation dynamics and the emergence of effective irreversibility in quantum many-body systems is a central problem at the interface of quantum chaos, statistical mechanics, and open-system theory. In classical chaotic dynamics, the intermediate time decay of correlations is governed by the Ruelle-Pollicott (RP) resonances [1-3] - isolated singularities of the analytically continued spectrum of the Frobenius?Perron operator - which generate effective irreversibility despite the underlying deterministic and reversible motion. In quantum one-body systems with a classical limit, an analogous correspondence is well established through spectral and operator-truncation approaches [4-6] or through weakly dissipative formulations [7-9]. In contrast, identifying the quantum analogue of RP resonances in generic many-body systems remains an open question. Several works [10-12] have introduced coarse-grained formulations of the operator space, revealing isolated eigenvalues inside the unit disk that can be interpreted as quantum RP resonances. While conceptually appealing, this approach becomes impractical for large systems due to the exponential growth of the operator space and the difficulty of defining an appropriate coarse-graining. A more recent development exploits translational invariance and quasi-momentum decomposition [13], enabling the identification of resonances within each symmetry sector. An alternative line of research focuses on the Liouvillian spectrum of weakly open quantum systems [14]. When a system is coupled to an environment, its relaxation dynamics are governed by the eigenvalues of the corresponding Lindblad superoperator, with the slowest nonzero decay rate defining the Liouvillian gap. In exactly solvable models [15-17], this gap has been shown to coincide with the leading RP resonance of the corresponding isolated dynamics. The magnitude of the leading RP resonance determines the approach to equilibrium of dynamical observables, including the out-of-time-order correlator (OTOC). The OTOC measures the spread of local perturbations and serves as a sensitive probe of information scrambling in quantum systems. Although the butterfly effect in quantum chaos is not universal [18], the relaxation of the OTOC at late times provides a robust signature of underlying dynamics. In one-body systems, the crossover from integrability to chaos can be characterized through the OTOC decay rate [19], a result that extends to systems with mixed classical dynamics [20]. Here we extend this connection to many-body systems. We study the kicked Ising spin chain, a paradigmatic model that exhibits both integrable and chaotic regimes depending on its parameters. Specifically, we analyze three complementary quantities: (i) the Liouvillian gap ̄g extracted from a weakly dissipative version of the model, (ii) standard chaos indicators from spectral statistics and eigenstate delocalization, and (iii) the intermediate time exponential decay rate α of the OTOC in the isolated chain. We show that α ≈2 ̄g even in transitional regimes between integrability and chaos, establishing the Liouvillian gap as a qualitative (and quantitative) measure of integrability in quantum many-body systems. II. MANY-BODY SYSTEM: KICKED ISING SPIN CHAIN To investigate the interplay between relaxation dynamics, quantum chaos, and dissipation, we consider the kicked Ising spin chain [21, 22], a paradigmatic model of many-body Floquet dynamics. This system is particularly well suited to our study because its parameters allow for a continuous interpolation between integrable and chaotic behavior. 16 Oct 2025 2 The model is defined by the time-periodic Hamiltonian ˆH(t) = ˆHfree + ˆHkick τ ∞ X n=-∞ δ(t -nτ), (1) where the "free" and "kick" components take the form          ˆHfree = -J L-2 P i=0 ˆσz i ˆσz i+1 -hz L-1 P i=0 ˆσz i , ˆHkick = -hx L-1 P i=0 ˆσx i . (2) Here, ˆσα i (α = x, z) denote Pauli operators acting on site i, J is the nearest-neighbor coupling, and hx, hz are transverse and longitudinal magnetic fields, respectively. The dynamics is stroboscopic with period τ, and the evolution over one period is governed by the Floquet operator ˆUF = ˆUfree ˆUkick, (3) where    ˆUfree = exp(-i ˆHfreeτ), ˆUkick = exp(-i ˆHkickτ). (4) This Floquet representation conveniently describes both unitary and, when extended, weakly dissipative evolutions. Throughout this work we set J = hz = τ = 1 and vary the transverse field hx, which controls the degree of chaoticity. These values ensure nontrivial dynamics while reducing the parameter space to a single effective control variable. All simulations are performed for system sizes up to L = 12 spins. To understand the structure of the spectrum, it is essential to discuss the symmetries present in the model. Since we work with a Floquet operator, the relevant spectrum consists of quasienergies. Throughout this study we impose open boundary conditions (OBC). The system possesses an external reflection symmetry ˆR, which acts as ˆR |m1, m2, . . . , mL-2, mL-1⟩= |mL-1, mL-2, . . . , m2, m1⟩, where |m1, m2, . . . , mL-1⟩denotes a computational-basis state. Because [ ˆUF , ˆR] = 0, the Hilbert space decomposes into two invariant subspaces corresponding to the eigenvalues +1 and -1 of ˆR, which we refer to as the even and odd parity sectors. This symmetry decomposition allows us to analyze each sector independently, avoiding spectral mixing and enabling a cleaner identification of chaotic behavior and Liouvillian relaxation modes. III. QUANTUM CHAOS DIAGNOSTICS To characterize the degree of quantum chaos in the kicked Ising spin chain, we employ two complementary diagnostics: spectral statistics and eigenstate delocalization. These indicators enable us to distinguish between integrable, chaotic, and intermediate regimes, providing essential context for interpreting the behavior of the OTOC and its connection to the Liouvillian gap. A well-established signature of quantum chaos is the statistical distribution of level spacings [23]. Chaotic systems exhibit Wigner-Dyson distributions predicted by random matrix theory (RMT), with the specific ensemble-in our case, the Circular Orthogonal Ensemble (COE)-determined by the symmetries of the system [24]. In contrast, integrable systems display Poisson statistics reflecting the presence of extensive conserved quantities [25]. Rather than computing the full level-spacing distribution, we employ the ratio of adjacent spacings [26], which provides a practical and robust diagnostic. For the Floquet operator satisfying ˆUF |ψn⟩= eiφn |ψn⟩, n = 1, 2, . . . , D, (5) where φn are the quasienergies, |ψn⟩are the eigenstates, and D is the Hilbert space dimension, we compute rn = min(δn, δn-1) max(δn, δn-1), δn = φn+1 -φn. (6) The mean value ⟨r⟩exhibits two distinct limits: ⟨r⟩COE ≈ 0.5307 for COE statistics and ⟨r⟩P ≈0.3863 for Poisson statistics [27, 28]. To interpolate between these limits, we define the normalized parameter η = ⟨r⟩-⟨r⟩P ⟨r⟩COE -⟨r⟩P , (7) which provides a continuous measure ranging from η →0 in the integrable regime to η →1 in the fully chaotic regime. A complementary diagnostic is provided by eigenstate delocalization, quantified through the participation ratio (PR) [29]. Consider an eigenstate |ψi⟩expanded in a reference basis {|φj⟩}D-1 j=0 as |ψi⟩= P j aij |φj⟩. The participation ratio is defined as ξE(i) =   D-1 X j=0 |aij|4   -1 . (8) This quantity measures how extended an eigenstate is in the chosen basis. Small values indicate localization, while high values signal delocalization. For chaotic systems consistent with RMT, the coefficients |aij|2 behave as independent random variables, leading to typical values ξdeloc E ≈D/3, where D is the Hilbert space dimension [30, 31]. In our numerical analysis, we compute the average of ξE(i) over all eigenstates. These spectral and eigenstate-based chaos indicators provide a quantitative map of the dynamical regime of the system. We analyze their behavior as a function 3 of the transverse field strength hx, which controls the kick strength in the Floquet dynamics. To avoid spectral mixing between symmetry sectors, all analyses are performed within the even-parity subspace, ensuring a clearer characterization of the underlying dynamical behavior. This characterization serves as a reference for studying the Liouvillian gap and exploring potential connections between unitary and dissipative indicators of quantum chaos. IV. OUT-OF-TIME-ORDER CORRELATOR The out-of-time-order correlator (OTOC) quantifies the scrambling of quantum information and is defined as the thermal expectation value of the squared commutator between two operators evaluated at different times: C(t) = h ˆW(t), ˆV i h ˆW(t), ˆV i† , (9) where ˆW(t) is the Heisenberg picture evolution of ˆW. Expectation values are taken over the infinite-temperature ensemble, ⟨·⟩= Tr(·)/D, with D the Hilbert-space dimension. This ensemble is appropriate for Floquet systems or Hamiltonians with bounded spectra, where microcanonical and infinite-temperature averages coincide in the thermodynamic limit. Originally introduced in studies of superconductivity [32], the OTOC has become a central tool for characterizing quantum chaos and information scrambling in many-body systems [33]. The OTOC gained attention when upper bound for the exponential growth was found [34], setting the bar of fast scramblers like black holes[35] and strongly correlated fermionic systems[36-38]. In systems with a classical limit, the growth rate coincides with the classical Lyapunov exponent [19, 39-41]. At later times, quantum interference leads to saturation around a constant plateau with residual fluctuations that depend on the underlying dynamics [42]. We focus on local Hermitian and unitary operators ˆW and ˆV acting on distinct sites of the spin chain. A convenient choice is given by Pauli operators ˆσμ i (μ = x, y, z) acting on site i. At infinite temperature, the OTOC takes the explicit form Cμν(l, t) = 1 2 D [ˆσμ 0 (t), ˆσν l ]2E = 1 -1 D Re {Tr [ˆσμ 0 (t)ˆσν l ˆσμ 0 (t)ˆσν l ]} . (10) We define Oμν 1 (l, t) = 1 D Re {Tr [ˆσμ 0 (t)ˆσν l ˆσμ 0 (t)ˆσν l ]} , (11) so that Cμν(l, t) = 1-Oμν 1 (l, t). In what follows we focus on the case μ = ν = z, denoted Czz(l, t), which probes the scrambling and relaxation of local ˆσz operators. At intermediate times, the approach to saturation of C(t), at (what we will call intermediate times) is reflected by an exponential decay of O1(t), which can be present even in systems which are not completely chaotic [20]. For one-body, completely chaotic systems, the decay is given by the classical RP resonances[19]. Figure 1 illustrates the time evolution of |Ozz 1 (l, t)| for l = 1 and a transverse field hx deep in the chaotic regime. The data, obtained for L = 12, display a clear exponential decay at intermediate times . An exponential fit of the form |Ozz 1 (1, t)| ∝e-αt yields the decay rate α, which we compare below with twice the Liouvillian relaxation rate extracted from the weakly dissipative extension of the model. 0 10 20 30 40 50 60 t 10-4 10-3 10-2 10-1 100 |Ozz 1 | FIG. 1. The absolute value of Ozz 1 (1, t) for a transverse field hx in the chaotic regime. The data corresponds to a spin chain of size L = 12 and hx = 0.8168. The blue line with circles represents Ozz 1 (1, t), while the dashed line is the exponential fit to the intermediate time behavior of |Ozz 1 (1, t)|, with a decay exponent α = 0.2888. V. LIOUVILLIAN GAP To investigate the relation between the decay rate discussed in the previous section and the spectral properties of the system, we follow Ref. [14]. In this approach, the system is extended to a weakly open setting described by a Liouvillian superoperator L = -i[H, · ] + γD, (12) where D is a dissipator and γ the dissipation strength. Building on the kicked Ising model defined above, we introduce bulk dephasing and study the corresponding relaxation dynamics through the time-periodic Lindblad master equation dρ dt = -i [ ˆH(t), ρ] + γ L-1 X i=0 σz i ρ σz i - 1 2{σz i σz i , ρ} ≡L(t) ρ, (13) 4 where γ > 0 controls the dephasing strength and ˆH(t) = ˆH(t + τ) is the Floquet Hamiltonian. The Liouvillian superoperator L(t) has a complex spectrum λα with negative real parts. For time-independent generators (L) there is one zero mode (λ0 = 0) corresponding to the steady state, plus decaying modes with Re λα 1.00, the correspondence becomes less pronounced, likely due to finite-size effects?larger system sizes would be required to achieve quantitative convergence of both α and ̄g in this regime. Nevertheless, even at these modest sizes, a clear qualitative and quantitative agreement is observed, underscoring the robustness of the relationship between the OTOC decay rate and the Liouvillian gap. It is worth noting that the decay exponent α and the Liouvillian gap ̄g qualitatively track the transition between dynamical regimes. A similar qualitative (and quantitative) behavior of the OTOC decay was previously reported for quantum maps in Ref. [20]. VII. CONCLUSIONS We have investigated the connection between the decay of the out-of-time-order correlator (OTOC), the Liouvillian gap, and spectral indicators of chaos in many-body quantum systems, focusing on the kicked Ising spin chain. Our central finding is that the intermediate time exponential decay rate of the OTOC for generic, symmetryunresolved operators equals twice the Liouvillian gap of the corresponding weakly open extension of the dynamics. This correspondence, previously established for systems with a semiclassical limit, is shown here to persist across the entire crossover from integrable to chaotic regimes, demonstrating that the Liouvillian spectrum provides a unifying description of relaxation and irreversibility in quantum many-body dynamics. A detailed parity-resolved analysis confirms that the intrinsic Liouvillian gap extracted from the Arnoldi?Lindblad method reproduces the global relaxation rate of the system, even when the dominant decay mode shifts between symmetry sectors. This robustness highlights that both the OTOC and the Liouvillian gap respond coherently to changes in the underlying dynamical regime, providing consistent signatures of the transition from integrability to chaos. The practicality of the Arnoldi?Lindblad approach lies in its ability to access the smallest Liouvillian eigenvalues without constructing the full superoperator, enabling efficient computation of the Liouvillian gap and its extrapolation to the weak-dissipation limit. In contrast to the momentum-resolved analysis of Ref. [13] for systems with periodic boundary conditions, our framework captures the global relaxation properties of the chain under open boundaries, providing a complementary perspective that links spectral relaxation modes directly to information scrambling. Overall, our results establish a quantitative bridge between unitary and dissipative descriptions of quantum relaxation, extending the Liouvillian?OTOC correspondence beyond the fully chaotic regime. This connection suggests that Liouvillian spectroscopy may serve as a general diagnostic tool for intermediate time dynamics and operator spreading in isolated quantum systems. ACKNOWLEDGMENTS We acknowledge support from Argentinian Agencia I+D+i (Grants No. PICT-2020-SERIEA-00740 and PICT-2020-SERIEA-01082). I.G-M received support from the French-Argentinian International Research Project Complex Quantum Systems (COQSYS), funded by CNRS. D.A.W. received support from CONICET (Grant No. PIP 11220200100568CO), UBACyT (Grant No. 20020220300049BA) and PREI-UNAM. Appendix A: Arnoldi-Lindblad time evolution To compute the Liouvillian spectrum efficiently, we employ the Arnoldi-Lindblad approach of Ref. [43]. This method allows one to access the smallest-magnitude eigenvalues of the Floquet-Lindbladian without constructing the full superoperator, whose matrix representation has dimensions D2×D2, with D being the Hilbertspace dimension. As a result, explicitly building the Liouvillian quickly becomes computationally prohibitive as the system size increases. We consider a periodically driven open quantum system whose stroboscopic dynamics is governed by the Lindblad master equation of period T, ∂tˆρ(t) = L(t) ˆρ(t), L(t) = L(t + T). (A1) The evolution over one period is formally given by the time-ordered exponential ˆρ(T) = T " exp Z T 0 L(t′) dt′ !# ˆρ(0) ≡F(T, 0) ˆρ(0), (A2) where F(T, 0) denotes the Floquet map associated with one full period of evolution. By periodicity, F(t, 0) = [F(T, 0)]n F(t -nT, 0), (A3) for integer n. Defining the Floquet Liouvillian LF through F(T, 0) = exp(LF T), (A4) 7 one obtains F ˆρF j = φF j ˆρF j = eλF j T ˆρF j ⇐⇒ LF ˆρF j = λF j ˆρF j . (A5) The eigenvalues λF j form the Floquet-Liouvillian spectrum, from which the Liouvillian gap g = -maxα̸=0 Re λF α is extracted. In practice, we generate the Krylov subspace Kn = {ˆρ(0), ˆρ(T), ˆρ(2T), . . . , ˆρ(nT)} = {ˆρ(0), F ˆρ(0), F2ˆρ(0), . . . , Fnˆρ(0)}. (A6) directly from time evolution under the Lindblad equation, without explicitly constructing L or F. Applying the Arnoldi iteration to this subspace yields an effective Hessenberg matrix whose eigenvalues approximate those of F, providing the leading relaxation rates of the Liouvillian spectrum. The key advantage of this method is that the resulting Hessenberg matrix has dimensions n × n, with n ≪D2, thus drastically reducing the computational cost compared to handling the full Liouvillian superoperator. We validated the implementation by explicitly constructing L for small system sizes (L = 4, 5, 6) and comparing the resulting eigenvalues with those obtained from the Arnoldi-Lindblad method. The agreement was excellent, confirming the accuracy and efficiency of the approach. [1] D. Ruelle, Locating resonances for axiom a dynamical systems, J. Stat. Phys. 44, 281 (1986). [2] D. Ruelle, Resonances of chaotic dynamical systems, Phys. Rev. Lett. 56, 405 (1986). [3] M. Pollicott, On the rate of mixing of axiom a flows, Inventiones mathematicae 81, 413 (1985). [4] C. Manderfeld, J. Weber, and F. Haake, Classical versus quantum time evolution of (quasi-) probability densities at limited phase-space resolution, J. Phys. A: Math. Gen. 34, 9893 (2001). [5] M. Khodas and S. Fishman, Relaxation and diffusion for the kicked rotor, Phys. Rev. Lett. 84, 2837 (2000). [6] S. Fishman and S. Rahav, Relaxation and noise in chaotic systems, in Dynamics of Dissipation, edited by P. Garbaczewski and R. Olkiewicz (Springer Berlin Heidelberg, Berlin, Heidelberg, 2002) pp. 165-192. [7] S. Nonnenmacher, Spectral properties of noisy classical and quantum propagators, Nonlinearity 16, 1685 (2003). [8] I. Garc ́ıa-Mata and M. Saraceno, Spectral properties and classical decays in quantum open systems, Physical Review E 69, 056211 (2004). [9] I. Garc ́ıa-Mata and M. Saraceno, Spectral approach to chaos and quantum-classical correspondence in quantum maps, Mod. Phys. Lett. B 19, 341 (2005). [10] T. Prosen, Ruelle resonances in quantum many-body dynamics, J. Phys. A: Math. & Gen 35, L737 (2002). [11] T. Prosen, Ruelle resonances in kicked quantum spin chain, Physica D 187, 244 (2004). [12] T. Prosen, Chaos and complexity of quantum motion, J. Phys. A: Math. & Gen. 40, 7881 (2007). [13] M. ˇZnidariˇc, Momentum-dependent quantum ruellepollicott resonances in translationally invariant manybody systems, Phys. Rev. E 110, 054204 (2024) (2024). [14] T. Mori, Liouvillian-gap analysis of open quantum manybody systems in the weak dissipation limit, Physical Review B 109, 064311 (2024). [15] J. A. Jacoby, D. A. Huse, and S. Gopalakrishnan, Spectral gaps of local quantum channels in the weakdissipation limit, Phys. Rev. B 111, 104303 (2025). [16] C. Zhang, L. Nie, and C. von Keyserlingk, Thermalization rates and quantum ruelle-pollicott resonances: insights from operator hydrodynamics, arXiv preprint (2024). [17] T. Yoshimura and L. S ́a, Theory of irreversibility in quantum many-body systems, Phys. Rev. E 111, 064135 (2025). [18] I. Garc ́ıa-Mata, R. A. Jalabert, and D. A. Wisniacki, Out-of-time-order correlations and quantum chaos, Scholarpedia 18, 55237 (2023). [19] I. Garc ́ıa-Mata, M. Saraceno, R. A. Jalabert, A. J. Roncaglia, and D. A. Wisniacki, Chaos signatures in the short and long time behavior of the out-of-time ordered correlator, Phys. Rev. Lett. 121, 210601 (2018). [20] T. Notenson, I. Garc ́ıa-Mata, A. J. Roncaglia, and D. A. Wisniacki, Classical approach to equilibrium of out-oftime ordered correlators in mixed systems, Phys. Rev. E 107, 064207 (2023). [21] T. Prosen, Exact time-correlation functions of quantum ising chain in a kicking transversal magnetic field: Spectral analysis of the adjoint propagator in heisenberg picture, Prog. Theor. Phys. Suppl. 139, 191 (2000). [22] T. Prosen, General relation between quantum ergodicity and fidelity of quantum dynamics, Phys. Rev. E 65, 036208 (2002). [23] O. Bohigas, M. J. Giannoni, and C. Schmit, Characterization of chaotic quantum spectra and universality of level fluctuation laws, Phys. Rev. Lett. 52, 1 (1984). [24] M. L. Mehta, Random matrices, Vol. 142 (Elsevier, 2004). [25] M. V. Berry and M. Tabor, Level clustering in the regular spectrum, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 356, 375 (1977). [26] V. Oganesyan and D. A. Huse, Localization of interacting fermions at high temperature, Phys. Rev. B 75, 155111 (2007). [27] Y. Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Distribution of the ratio of consecutive level spacings in random matrix ensembles, Phys. Rev. Lett. 110, 084101 (2013). [28] P. A. Lee and T. V. Ramakrishnan, Disordered electronic systems, Rev. Mod. Phys. 57, 287 (1985). [29] F. Wegner, Inverse participation ratio in 2+ ε dimensions, Zeitschrift f ̈ur Physik B Condensed Matter 36, 209 (1980). [30] A. Gubin and L. F. Santos, Quantum chaos: An introduction via chains of interacting spins 1/2, Am. J. Phys. 80, 246 (2012). 8 [31] V. Zelevinsky, B. A. Brown, N. Frazier, and M. Horoi, The nuclear shell model as a testing ground for manybody quantum chaos, Physics Reports 276, 85 (1996). [32] A. Larkin and Y. N. Ovchinnikov, Quasiclassical method in the theory of superconductivity, Sov Phys JETP 28, 1200 (1969). [33] S. Xu and B. Swingle, Scrambling dynamics and out-oftime-ordered correlators in quantum many-body systems, PRX Quantum 5, 010201 (2024). [34] J. Maldacena, S. H. Shenker, and D. Stanford, A bound on chaos, Journal of High Energy Physics 2016, 1 (2016). [35] S. H. Shenker and D. Stanford, Black holes and the butterfly effect, Journal of High Energy Physics 2014 (2013). [36] S. Sachdev and J. Ye, Gapless spin-fluid ground state in a random quantum heisenberg magnet, Phys. Rev. Lett. 70, 3339 (1993). [37] A. Kitaev, A simple model of quantum holography, talk given at KITP Program: Entanglement in StronglyCorrelated Quantum Matter, Vol. 7 (USA April, 2015). [38] J. Maldacena and D. Stanford, Remarks on the SachdevYe-Kitaev model, Phys. Rev. D 94, 106002 (2016). [39] E. B. Rozenbaum, S. Ganeshan, and V. Galitski, Lyapunov exponent and out-of-time-ordered correlator's growth rate in a chaotic system, Phys. Rev. Lett. 118, 086801 (2017). [40] R. A. Jalabert, I. Garc ́ıa-Mata, and D. A. Wisniacki, Semiclassical theory of out-of-time-order correlators for low-dimensional classically chaotic systems, Phys. Rev. E 98, 062218 (2018). [41] X. Chen and T. Zhou, Operator scrambling and quantum chaos (2018). [42] E. M. Fortes, I. Garc ́ıa-Mata, R. A. Jalabert, and D. A. Wisniacki, Gauging classical and quantum integrability through out-of-time-ordered correlators, Phys. Rev. E 100, 042201 (2019). [43] F. Minganti and D. Huybrechts, Arnoldi-Lindblad time evolution: Faster-than-the-clock algorithm for the spectrum of time-independent and Floquet open quantum systems, Quantum 6, 649 (2022).
2510.14893
STITCHER: Constrained Trajectory Planning in Complex Environments with Real-Time Motion Primitive Search Helene J. Levy and Brett T. Lopez Abstract—Autonomous high-speed navigation through large, complex environments requires real-time generation of agile trajectories that are dynamically feasible, collision-free, and satisfy state or actuator constraints. Modern trajectory plan- ning techniques primarily use numerical optimization, as they enable the systematic computation of high-quality, expressive trajectories that satisfy various constraints. However, stringent requirements on computation time and the risk of numerical instability can limit the use of optimization-based planners in safety-critical scenarios. This work presents an optimization- free planning framework called STITCHER that stitches short trajectory segments together with graph search to compute long- range, expressive, and near-optimal trajectories in real-time. STITCHER outperforms modern optimization-based planners through our innovative planning architecture and several al- gorithmic developments that make real-time planning possible. Extensive simulation testing is performed to analyze the al- gorithmic components that make up STITCHER, along with a thorough comparison with two state-of-the-art optimization planners. Simulation tests show that safe trajectories can be created within a few milliseconds for paths that span the entirety of two 50 m x 50 m environments. Hardware tests with a custom quadrotor verify that STITCHER can produce trackable paths in real-time while respecting nonconvex constraints, such as limits on tilt angle and motor forces, which are otherwise hard to include in optimization-based planners. Index Terms—Trajectory planning, aerial systems, motion primitives, graph search, collision avoidance. I. INTRODUCTION P LANNING collision-free, dynamically feasible trajecto- ries in real-time through complex environments is crucial for many autonomous systems. As a result, trajectory planning has garnered significant interest from the research community, but meeting the reliability requirements for safety-critical real-world applications remains challenging. Specifically, few methods have guarantees regarding trajectory optimality and time/memory complexity without sacrificing trajectory length, computation time, or expressiveness. Our approach addresses this gap by combining optimal control with graph search to generate near-optimal trajectories over long distances in real- time, resulting in a framework that provides strong guarantees on path quality and algorithm complexity. Optimization-based trajectory planning has emerged as the primary framework for autonomous systems that must navigate complex environments. This is because constraints and perfor- mance objectives can be naturally stated in the optimization problem. Most approaches can be broadly classified by their use of continuous or integer variables. Continuous variable methods employ gradient descent to jointly optimize over Authors are with the VECTR Laboratory, University of California, Los Angeles, Los Angeles, CA, USA. {hjlevy, btlopez}@ucla.edu Fig. 1: A trajectory (colored based on speed) generated by our proposed algorithm called STITCHER through a perlin noise environment. STITCHER searches over candidates motion primitives (white) to find a safe trajectory in real-time with time and memory complexity guarantees. Indoor flight experiments were performed to verify dynamic feasibility of trajectory plans. the coefficients of basis functions (e.g., polynomials) and waypoint arrival times while imposing obstacle and state constraints [1]–[6]. Integer variable methods require that the free space of the environment be represented as the union of convex sets (continuous variable methods have also used this representation, e.g., [5], [6]) and solve a mixed-integer pro- gram for a collision-free trajectory [7]–[10]. Despite continued innovations, these methods lack a priori time complexity bounds and often scale very poorly with trajectory length; this is especially true for integer programming approaches. A computationally efficient alternative to optimization- based trajectory planning is the use of so-called motion primitives: a library of short length or duration trajectories that can be efficiently computed and evaluated [11]–[14]. To effectively use motion primitives, a planner must operate in a receding horizon fashion, i.e., continuously replan, because motion primitives are inherently near-sighted with their short length or duration. This can introduce several performance issues, e.g., myopic behavior or unexpressive trajectories, that are exacerbated in large, complex environments. Subsequent work has attempted to pose the problem as a graph search with nodes and edges being desired states (position, velocity, etc.) and motion primitives, respectively [15]–[18]. While this allows for the creation of long-range trajectories, search times can be extremely high (seconds) because of the graph size. An admissible search heuristic can be used to reduce the number of node expansions required to find a solution, also known as search effort, while preserving optimality of the graph [19]. However, designing such a search heuristic is non-trivial. We propose a new trajectory planning algorithm called STITCHER that can perform real-time motion primitive arXiv:2510.14893v2 [cs.RO] 20 Oct 2025 searches across long distances in complex environments. STITCHER utilizes an innovative three-stage planning ar- chitecture to generate smooth and expressive trajectories by stitching motion primitives together through graph search. The first two stages are designed to expedite the motion primitive search in the final stage by constructing a compact but expressive search graph and search heuristic. Specifically, given a set of waypoints computed in the first stage, we create a velocity graph by representing nodes as sampled velocities at each waypoint, and edges as quick-to-generate minimum-time trajectories. We employ dynamic programming on this graph, using the Bellman equation to compute the cost-to-go for each node. Critically, the cost-to-go is then used as an admissible heuristic to efficiently guide the motion primitive search in the third stage. We also leverage a greedy graph pre-processing step to form a compact motion primitive graph. We prove all graphs are finite and that the proposed heuristic is admissible. These technical results are critical as they guarantee i) a priori time and memory complexity bounds and ii) trajectory optimality with respect to the graph discretization. To further reduce computation time, we improve the collision checking procedure from [13], by leveraging the known free space from previous nearest-neighbor queries, bypassing the rigidity and computational complexity of free space decomposition. Additionally, we show that employing a simple sampling procedure in the final search stage is effective at pruning candidate trajectories that violate complex state or actuator constraints. STITCHER was extensively tested in two simulation environments to evaluate algorithmic innova- tions and assess its performance against two state-of-the-art optimization-based planners [5], [9]. STITCHER is shown to generate high-quality, dynamically feasible trajectories over long distances (over 50 m) with computation times in the milliseconds, and consistently generates trajectories faster than the state-of-the-art with comparable execution times. This paper substantially builds upon previous work [20] by providing a new in-depth analysis of the algorithmic inno- vations used to achieve real-time planning times, along with hardware experiments to demonstrate real-world feasibility. We include a parameter sensitivity analysis to evaluate the performance of STITCHER with different hyperparameters, such as using different state discretization sets. A new study on the proposed search heuristic is conducted to examine the ef- fects of varied edge costs and acceleration constraints on both admissibility and search effort (node expansions). The impact of the greedy graph pre-processing step on solution quality and computation time is evaluated through comparisons with an exhaustive, i.e., non-greedy, graph creation. We also showed STITCHER can adhere to highly nonconvex constraints, such as individual motor constraints for a quadrotor, with no noticeable increase computation time. Comparison with state- of-the-art algorithms in simulation was also expanded, incor- porating new metrics to better evaluate performance regarding optimized waypoints and the sampled set of states. Finally, we flew the trajectories designed by STITCHER in hardware on a custom quadrotor to show that the trajectories were dynamically feasible, adhered to physical constraints, and could be tracked via standard geometric cascaded control. II. RELATED WORKS A. Optimization-based Planning Designing high quality trajectories using online optimiza- tion has become a popular planning strategy as a performance index and constraints can be systematically incorporated into an optimization problem. Optimization-based trajectory plan- ners can be categorized using several criteria, but the clearest delineation is whether the method uses continuous or integer variables. For methods that use only continuous variables, the work by [2] reformulated the quadratic program in [1] to jointly optimize over polynomial endpoint derivatives and arrival times for a trajectory passing through waypoints. Col- lisions were handled by adding intermediate waypoints and redoing the trajectory optimization if the original trajectory was in collision. Oleynikova et al. [3] represented obstacles using an Euclidean Signed Distance Field (ESDF) which was incorporated into a nonconvex solver as a soft constraint. Zhou et al. [4] used a similar penalty-based method but introduced a topological path search to escape local minima. An alternative approach is to decompose the occupied space or free space into convex polyhedra [7], [21], [22] which can be easily incorporated as constraints in an optimization. The methods in [5], [6] treat these constraints as soft while efficiently optimizing over polynomial trajectory segments that must pass near waypoints. One can also use the free-space polyhedra to formulate a mixed-integer program [8]–[10], [23] to bypass the nonconvexity introduced by having unknown waypoint arrival times, but at the expense of poor scalability with trajectory length and number of polyhedra. Marcucci et al. [24] addresses scalability concerns of [10] by solving a sequence of convex problems instead of one large-scale optimization but requires an offline process for generating collision-free convex sets. B. Motion Primitives Motion primitive planners have been proposed as an alterna- tive to optimization-based planners to address computational complexity and numerical instability concerns. The underlying idea of motion primitive planners is to select trajectories online from a precomputed, finite library of trajectories. Initial work on motion primitives for quadrotors leveraged differen- tial flatness and known solutions to specific optimal control problems to efficiently compute point-to-point trajectories in real-time [11], [25]. Later work employed motion primitives for receding horizon collision avoidance where primitives were efficiently generated online by sampling desired final states, and selected at each planning step based on safety and trajectory cost [12]–[14], [26]–[29]. Howard et al. [26] first introduced this idea of searching over feasible trajectories of a car with a model predictive control framework. Subsequent works extended this methodology to quadcopters using depth images [12], [14], point clouds [13], [27], or ESDFs [28] for motion primitive evaluations and collision avoidance. While computationally efficient, the behavior of these planners can be myopic, leading to suboptimal behavior in complex environ- ments which limit their use for planning long-term trajectories. C. Motion Primitive Search One way to address nearsightedness is to perform a graph search over motion primitives, i.e., stitch motion primitives together. This can be achieved by extending traditional graph search algorithms [30]–[32], which typically use coarse dis- crete action sets, to using a lattice of motion primitives [15]– [17], [33]–[35]. Graph search algorithms are an attractive method for planning due to inherent guarantees of complete- ness, optimality1, and bounded time and memory complexity [36]. The works by Liu et al. [15], [16] were some of the first works to successfully showcase a search-based algorithm using a lattice of motion primitives for use on quadcopters. However, these methods can be computationally expensive as generating high-quality trajectories relies on generating a large number of motion primitives for sufficient density. Jarin et al. [37] addresses computation concerns by improving upon the sampling of different motion primitives, inspired by a mini- mum dispersion sampling method [38]. Another way to narrow the search space is by utilizing a geometric path as a prior and constraining motion primitives to pass through waypoints from the path. Recently, [18], [39]–[41] proposed an efficient motion primitive search in velocity space using minimum-time input-constrained trajectories from a double integrator model restricted to pass through a set of waypoints. The search can be done in real-time but the resulting bang-bang acceleration profile is dynamically infeasible for quadrotors, leading to poor tracking performance. An additional smoothing step, e.g., model predictive contouring control, is required to achieve sufficient trajectory smoothness [40]–[42]. D. Search Heuristics Fast graph search speed while retaining optimality guar- antees can be achieved by employing an admissible search heuristic [36] to guide the search to the goal state. Constructing an informative and admissible heuristic, however, is non- trivial. Much of the previous work in motion primitive search overlooks the importance of the heuristic by generating a weak approximation to the goal [17], using an inadmissible heuristic which forfeits optimality guarantees [16], or pro- ceeding without a heuristic [18]. As a result, motion primitive search algorithms to date scale poorly in large environments and for large planning horizons, making them unsuitable for systems with limited onboard computational resources. Paden et al. [43] proposed a method to systematically construct admissible heuristics for use in kinodynamic planning using sum-of-squares (SOS) programming. However, the resulting size of the SOS program requires heuristic calculations to be performed offline. Other strategies involve learning a search heuristic or cost-to-go [44]–[47]. Kim et al. [44] uses a neural network to approximate graph node distances and provides a sub-optimality bound on the solution. Reinforcement and imitation learning have also been proposed for learning search heuristics [45]–[47], but these works focus on minimizing node expansions rather than ensuring admissibility, sacrificing the optimality guarantees of graph search. 1In the context of graph search, optimality refers to resolution optimality, i.e., optimality with respect to the discretized state space. III. PROBLEM FORMULATION This work is concerned with solving the following trajectory planning problem: min u ∈U J = r(T) + Z T 0 q(x, u) dt s.t. ˙x = Ax + Bu x ∈Xs, x /∈Xobst, u ∈U x(0) = x0, x(T) = xf, (1) where x ∈Rn is the state that must satisfy state Xs and obstacle (collision) Xobst constraints, u ∈Rm is the control input that must satisfy actuator constraints U, A ∈Rn×n and B ∈Rn×m govern the system’s dynamics and are assumed to take a multi-axis chain of integrators form, and r : R+ →R+ and q : Rn × Rm →R+ are the terminal and stage cost, respectively. The goal is to find an optimal final time T ∗and feasible optimal state trajectory x∗(t) with a corresponding control input sequence u∗(t) for t ∈[0 T ∗] that steers the system from an initial state x0 to a desired final state xf that minimizes the cost functional J. While the dynamics are linear in (1), many nonlinear systems can be placed into the linear control affine form if they are differentially flat, e.g., VTOL vehicles like quadrotors, capturing a large class of systems of interest. In many cases, the state vector can be x = (r, v, a, . . . , r(p−1))⊤and the control input can be u = r(p) where r = (x, y, z)⊤is the position of the vehicle in some reference frame. A. Background: Motion Primitives We define motion primitives to be closed-form solutions to specific optimal control problems. In this work, we will restrict our attention to the following two optimal control problems: the input-constrained minimum-time problem for a double integrator and the linear quadratic minimum time problem for a p-th order integrator. We will briefly review each optimal control problem and the structure of its solution. The formulations will be presented for a single axis, but can be repeated for all three position coordinates. Minimum-Time Double Integrator: Given an initial state (s0, v0) ∈R2 and desired final state (sf, vf) ∈R2, the minimum-time double integrator optimal control problem is min u J = T (2) s.t. ¨s = u, |u| ≤umax s(0) = s0, v(0) = v0 s(T) = sf, v(T) = vf, where the final time T is free. From Pontryagin’s minimum principle, the optimal control solution is known to have a bang-bang control profile. The switching times can be efficiently computed by solving a quadratic equation. It is required that each coordinate axis trajectory all have the same execution time, so the limiting minimum-time horizon is T ∗= max{Tx, Ty, Tz}. The limiting minimum-time horizon T ∗, is then applied as a known variable for the non-limiting Fig. 2: System architecture describing the three planning stages. Stage 1: A sparse geometric path is found via an A* search through the voxelized environment. Stage 2: A velocity state is then introduced at each waypoint and dynamic programming is used to recursively solve for the cost-to-go at each node. Stage 3: A full motion primitive search informed by the previous stages is performed, and checks for collisions are completed to yield the final trajectory. axes. This allows one to then solve a quadratic equation for a new bound ¯u ≤umax on the control input. Linear Quadratic Minimum-Time p-th Order Integrator: Smooth trajectories can be generated by solving the linear quadratic minimum-time (LQMT) optimal control problem, min T, u J = ρ T + Z T 0 u2 dt (3) s.t. s(p) = u s(0) = s0, v(0) = v0, . . . , s(p−1)(0) = s(p−1) 0 s(T) = sf, v(T) = vf, s(k−1)(T) free for 3 ≤k ≤p where ρ > 1 penalizes the final time. The final time T and all terminal states except position and velocity are free. The optimal trajectory is a polynomial in time so the cost functional can be expressed analytically in terms of T and the known boundary conditions. The final time can be found efficiently using a root-finding algorithm such as QR algorithm [48]. State constraints are omitted from (3) as it is more efficient to prune many candidate trajectories once the final time is known, as discussed in Section IV-D. IV. METHODOLOGY STITCHER, detailed in Algorithm 1, generates a full- state trajectory by stitching collision-free, dynamically feasible trajectory segments together through graph search. At its core, STITCHER searches over closed-form solutions, i.e., motion primitives, to optimal control problems of the form discussed above. These solutions serve as a basis set for the solution space to (1). To achieve real-time performance, STITCHER utilizes a three stage planning process (see Fig. 2). In Stage 1 (left), A* algorithm is used to produce a sparse geometric path, i.e., waypoints, in the free space of the environment (line 3); this is standard in many planning frameworks. In Stage 2 (middle), nodes representing sampled velocities at the waypoints are formed into a velocity graph where dynamic programming is used to compute the minimum time path from Algorithm 1: STITCHER Trajectory Planner 1 input: P ←point cloud, ns ←start, ng ←goal; 2 output: s∗(t); // extract waypoints and path features 3 w, q, H ←getGeometricPath(P, ns, ng); 4 G ←buildVelocityGraph(w, q, H); // get heuristic from recursive cost-to-go 5 h(n) ←dynamicProgramming(G, ns, ng); 6 Gmp ←buildFullStateGraph(G); 7 function planPath(P, Gmp, ns, ng): 8 ncurr = ns; 9 while ncurr ̸= ng do // get node with lowest cost g(n)+h(n) 10 ncurr = OPEN.pop(); 11 CLOSED.insert(ncurr); 12 if ncurr = ng then 13 break; 14 end 15 Encurr ←getSuccessors(ncurr, Gmp); 16 for e in Encurr do // collision and state constraint check 17 pruneMotionPrimitive(e, P); 18 OPEN.insert(ϕ(n, e)); 19 end 20 end 21 s∗(t) ←getFinalMotionPrimitives(ncurr, CLOSED); each node to the desired final state using a control-constrained double integrator model (lines 4-5). This step is critical for constructing an admissible heuristic to guide the full motion primitive search, and is one of the key innovations that enables real-time performance. Note that the optimal “path” in velocity space is never used; computing the cost-to-go is the primary objective as it serves as an admissible heuristic for motion primitive search as shown in Section V-B. In Stage 3 (right), an A* search is performed using the heuristic from Stage 2 over motion primitives of a p-th order integrator where p ≥2 . A greedy pre-processing step is used to construct a compact motion primitive graph (line 6), ensuring the search remains real-time. At this stage, position and all higher-order derivatives are considered, yielding a full state trajectory that can be tracked by the system (lines 7-21). The remainder of this section expands upon each component of STITCHER. A. Stage 1: Forward Geometric Path Search STITCHER requires a sequence of waypoints that essen- tially guides the motion primitive search by limiting the size of the search space. This can be done by generating a collision- free geometric path (see Fig. 2 left) through the environment with A* search or any other discrete graph search algorithm where the environment is represented as a 3D voxel grid where each grid cell contains occupancy information. Let the collision-free, geometric path generated by a discrete graph search algorithm be composed of points O = {o1, o2, ..., oH} where oi ∈R3. The set of points O is further pruned to create a sparse set of waypoints W = {w1, w2, ..., wN} where N ≤H and wi ∈R3. Sparsification is done by finding the minimal set of points in O that can be connected with collision-free line segments. The geometric path search is used in line 3 of Algorithm 1. B. State 2: Backward Velocity Search The ordered waypoint set W found in Stage 1 only provides a collision-free geometric path through the environment. In other words, the velocity, acceleration, and higher-order states necessary for tracking control are not specified. We propose creating a velocity graph (see Fig. 2 middle) where each node in the graph is defined by a position and velocity. The positions are restricted to waypoint locations and M velocities are sampled at each waypoint. More explicitly, for each waypoint wi ∈W, we sample a set of velocities V = {v1, ..., vM}, where V is composed of candidate velocity magnitudes Vm and directions Vd. The choice of Vm and Vd can impact the STITCHER’s performance in terms of path optimality and computational complexity; this point will be discussed in more detail in Section VI. With the ordered waypoint W and sampled velocity V sets, we create a velocity graph G = (N, E), where node n ∈N is a given position and sampled velocity, i.e., n = (wi, vj) with wi ∈W and vj ∈V, and edge e ∈E is the double integrator control-constrained minimum-time motion primitive r(t) from (2) that connects neighboring nodes. Recall that the solution to (2) is fully determined by having an initial and final position and velocity pair which is precisely how each node in N is defined. At this stage, collision and state constraints are not enforced to prevent candidate trajectories from being eliminated prematurely. We recursively compute and store the ranked list of cost- to-go’s Vd : N × E →R+ for each node n ∈N and all connecting edges e ∈En of n where Vd(n, e) = ℓ(n, e) + V ∗ d ϕ(n, e)  ∀e ∈En, (4) with the optimal cost-to-go V ∗ d (n) = mine∈En Vd(n, e), the cost of taking edge e from node n being ℓ(n, e), and the node reached by taking edge e being ϕ(n, e). The cost of taking an edge is given by ℓ(n, e) = T ∗ d (n, e), where T ∗ d (n, e) is the minimum-time of trajectory r(t) connecting the states of node n to the states of ϕ(n, e). Minimizing (4) is the well-known Bellman equation, which is guaranteed to return the optimal Fig. 3: The achievable mass-normalized thrust (nonconvex) of a aerial VTOL vehicle with limits on minimum thrust fmin, maximum thrust fmax, and maximum tilt θmax. cost-to-go. In Section V-B, we prove V ∗ d (n) for each node in a graph G is an admissible heuristic for an A* search over a broad class of motion primitives. Building and searching the velocity graph are shown in lines 4-5 of Algorithm 1. C. Stage 3: Forward Motion Primitive Search The cost-to-go’s computed in Stage 2 for the sampled velocities at each waypoint serve as an admissible heuristic (formally defined later in Definition 1) that guides an efficient A* search over motion primitives. The motion primitives can be generated using any chain of integrators model of order at least two so long as i) the initial and final position and veloc- ities match those used to construct the velocity graph G and ii) the allowable acceleration is maximally bounded by umax given in (2). It is important to note that the bound on allowable acceleration can be easily satisfied with user defined umax or simply by applying the box constraint ∥a∥∞≤umax. The mo- tion primitive search graph is denoted as Gmp = (Nmp, Emp) where Nmp is the set of nodes, each corresponding to a state vector, and Emp is the set of edges, each corresponding to a motion primitive that connecting neighboring nodes. A* search is used to meet real-time constraints where the search minimizes the cost f(n) = g(n) + h(n) where n ∈Nmp is the current node, g : Nmp →R+ is the cost from the start node ns to node n, and h : Nmp →R+ is the estimated cost from the current node n to the goal node ng. In the context of optimal control, g is the cost accrued, i.e., the cost functional J, for a path from ns to n whereas h is the estimated cost- to-go, i.e., the estimated value function V ∗, from n to ng. In this stage, collision and state constraints are checked for each candidate motion primitive to ensure safety. The methodology for both is discussed in Section IV-D. Each step of the motion primitive A* search is shown in lines 6-21 of Algorithm 1. D. Pruning Infeasible & In-Collision Motion Primitives STITCHER guarantees safety by pruning motion primitives from the final search that violate constraints or are in colli- sion. For state and actuator constraints, many optimization- based planning approaches approximate the true physical constraints of the system with simple convex constraints, e.g., ∥v∥∞≤vmax, ∥a∥∞≤amax, etc., to reduce computational complexity. When polynomials are used to represent the optimal trajectory, imposing a convex hull constraint on the polynomial is one method for enforcing such state constraints Fig. 4: Removing redundant collision checks. (a): Motion primitive r0(t) checks for collisions using [13]. (b): Sampled points of r1(t) are checked to lie within obstacle-free regions derived from r0(t) calculations. [9], [17]. However, many of these approximations are made only to simplify the resulting optimization problem and do not accurately reflect the actual physical constraint, which can lead to conservatism. STITCHER has the freedom to use a variety of methods to enforce state and actuator constraints. For the examples shown in this work, we uniformly sample candidate trajectories in time to check for constraint violations as it was found to be effective and efficient. Sampling avoids mistakenly eliminating safe trajectories, and the observed computation time under the current discretization was better than using convex hulls. Critically, sampling allows for the inclusion of more complex constraints, such as those that couple multiple axes. Examples are Thrust Magnitude: 0 < fmin ≤∥f∥2 ≤fmax Thrust Tilt Angle: ∥f∥2 cos(θmax) ≤fz Linear Velocity: ∥v∥2 ≤vmax Angular Velocity: ∥ω∥2 ≤ωmax, (5) where f is the mass-normalized thrust, θ is the thrust tilt angle, and ω is the angular velocity. Note that differential flatness can be leveraged to express the angular velocity constraint in terms of derivatives of position. Figure 3 depicts the achievable mass-normalized thrust of a VTOL vehicle given thrust and tilt constraints in (5). The constraints are nonconvex making it difficult to include in real-time optimization-based planners without some form of relaxation, e.g., as in [49] for a double integrator, which is tight, or a more conservative relaxation. Even more direct system constraints which limit the maximum force exerted by individual motors are highly nonconvex functions of the flat variables and their derivatives. While the majority of this paper showcases results that en- force constraints (5), we include a case study of STITCHER constraining individual thrusters in Section VI-D. An efficient collision checking strategy was devised by constructing a safe set of spheres resulting from a sampling- based collision checking approach developed in [13] (see Fig. 4). The core idea from [13] is that a trajectory can be intelligently sampled for collisions by estimating the next possible “time-of-collision” along the trajectory by combin- ing obstacle proximity and the vehicle’s maximum speed. Leveraging this idea, further computation time savings can be achieved by storing and reusing nearest neighbor queries. Algorithm 2 details our strategy which takes in a k-d tree data structure filled with points from a point cloud P of Algorithm 2: Collision Check 1 input: T ←k-d tree, r(t) ←motion primitive, 2 output: bool collision 3 if S = ∅then // initial collision check using k-d tree 4 collision, S ←collisionCheckMap(r(t), T ); 5 return collision 6 end 7 τ ←0; 8 dmin ←∞; 9 while τ ≤T do 10 d ←calcDistToSphereCenters(c, r(τ)) 11 for i = 1 to |S| do 12 if di < Ri & di ≤dmin then 13 dmin ←di; 14 k ←i; 15 end 16 end 17 if dmin < ∞then // update sample time 18 τ ←τ + (Rk −dk)/vmax; 19 else // point outside spheres, use k-d tree 20 collision, S ←collisionCheckMap(r(t), T ); 21 end 22 end 23 return collision the environment. For the first candidate motion primitive connecting two successive waypoints, we use the strategy from [13] to intelligently sample for collisions while also storing the resulting set of safe, obstacle-free spheres S, defined by center and radius vectors, c and R (line 4). For subsequent motion primitives between the same waypoint pair, a nearest neighbor query is only done if the primitive is expected to leave the initial set of obstacle-free spheres. For a point found to be within a certain sphere (lines 12-15), the next possible “time-of-collision” is when the trajectory intersects the edge of the sphere, which can estimated by assuming the trajectory is emanating radially from the center of the sphere at maximum velocity (line 18). The process is repeated until the final time horizon T is reached. Unlike spherical safety corridors, our safe set is only used as a means to avoid repeated calculation, and allows for on-the-fly addition of collision- free spheres. In other words, our approach does not restrict solutions to remain within convex sets centered along the geometric path. STITCHER thus has the flexibility to create and check candidate trajectories without being restricted to pre-defined safety spheres. E. Motion Primitive Search Graph with Triple Integrator In many applications, a triple integrator model for gen- erating motion primitives is sufficient because the resulting trajectory is smooth enough for most aerial vehicles to track as discontinuities in jerk typically do not severely degrade tracking. This was verified through hardware experiments discussed in Section VII. Motion primitives in our formulation (3) are derived imposing a free terminal acceleration. Constructing a motion primitive search graph where nodes are a waypoint-velocity-acceleration tuple drastically increases both computation and memory consumption as the graph Fig. 5: Difference in motion primitive graphs with free terminal accelerations employing an (a): exhaustive search, (b): greedy search, and (c): greedy graph pre-processing step. Green edges constitute the optimal path, blue edges are the least cost parent saved in memory, grey edges are those evaluated but not saved and red dashed edges are connections that no longer exist. Greedy pre-processing (c) leads to some disconnections for graphs with over 3 waypoints, but maintains more connections than a greedy search (b), and does not suffer from exponential edge growth like the graph of an exhaustive search (a). size depends on both the number of sampled velocities and accelerations. If the acceleration at each node, i.e., the fi- nal acceleration, af, is free, the number of edges grows exponentially with respect to the number of waypoints (See Fig. 5a). Our formulation employs a greedy pre-processing step in which the motion primitive search graph Gmp is identical in size to the velocity graph G (graph size detailed in Section V-A). This formulation offers an advantage in terms of computational efficiency, as a full state trajectory is generated while the graph size is restricted by only the number of sampled velocities. Excluding acceleration information from graph creation assumes that the optimal stitched trajectory is only weakly dependent on acceleration at each waypoint. It is important to note the difference between a greedy algorithm Fig. 5b and our greedy graph pre-processing step Fig. 5c. While the pre-processing step does limit edges that could be generated in the exhaustive graph, it maintains more than a greedy algorithm. Further comparisons of the relative solution cost and computation speed using STITCHER’s greedy pre- processing step versus the exhaustive search were conducted in Section VI-B. V. THEORETICAL ANALYSIS In this section we prove STITCHER has bounded time and memory complexity by showing the velocity and motion primitive graphs are finite. We also show STITCHER is complete and optimal by proving the heuristic used in the motion primitive search is admissible. A. Velocity Graph Complexity The following proposition proves the size of the velocity graph G is finite and solely depends on the number of waypoints and sampled velocities; a property that also holds for the motion primitive graph Gmp by extension. This result is critical as a finite graph yields known time complexity for the motion primitive search. In other words, an upper bound can be placed on the computation time of the planner given known quantities. This is in contrast to optimization-based methods where the time complexity depends on the number of iterations required to converge—which cannot be known a priori—so the time to compute a trajectory via optimization does not have an a priori bound. Proposition 1. For N waypoints and M sampled velocities, the number of nodes |N| and edges |E| in graph G is |N| = (N −2)M + 2, (6) |E| = (N −3)M 2 + 2M for N > 2. (7) Proof. Using Fig. 2 (middle), the start and goal nodes con- tribute 2 nodes to the graph G. For intermediate waypoints, given M sampled velocities, there are M nodes per waypoint. As a result, |N| = (N −2)M +2 which is (6). For each edge, we consider the transition to successive waypoints. Ignoring the trivial N = 2 case where |E| = 1, there are M connections between the start node and next waypoint, which also has M nodes. The same applies for connecting waypoint wN−1 to the goal node, resulting in a total of 2M edges. For all other intermediate waypoint pairs, M nodes connect to M nodes at the next waypoint so there are M 2 edges. The total number of edges is then (7). Corollary 1. The size of the motion primitive graph Gmp using Linear Quadratic Minimum Time (LQMT) motion primitives with free terminal acceleration for a triple integrator is identical to the velocity graph G. Proof. The proof is immediate since the terminal acceleration is free so N and M are identical for both graphs. Remark 1. Corollary 1 can be generalized to any motion primitive search graph where the primitives are solutions to an optimal control problem with the dynamics being a chain of integrators and all terminal state derivatives of second order or higher are free. Note that this assumes the greedy graph pre-processing step still yields adequate performance. B. Admissible Heuristic for Motion Primitive Search It is well known that heuristics can be used to expedite searching through a graph by incentivizing the search to prioritize exploring promising nodes. For example, in A* search, the next node explored is selected based on minimizing the cost f(n) = g(n) + h(n), where g is the stage cost to get from the start node ns to node n, and h is a heuristic estimate of the remaining cost to reach the goal node ng. A* search is guaranteed to find an optimal solution so long as the heuristic function h is admissible (see Definition 1) [36]. Below, we prove the cost-to-go V ∗for each node in the velocity graph G calculated in Stage 2, i.e., the minimum-time to goal for a double integrator, is an admissible heuristic for an A* search over motion primitives of any higher-order chain of integrators. Definition 1 ([36]). A function h : N →R is an admissible heuristic if for all n ∈N then h(n) ≤h∗(n), where h∗is the optimal cost from n to the goal node ng. Proposition 2. Consider the optimal control problem min T, u J = ρ T + Z T 0 q(r, v, . . . , u) dt (8) s.t. r(p) = u, c(a) ≤0 r(0) = r0, v(0) = v0, . . . , r(p−1)(0) = r(p−1) 0 r(T) = rf, v(T) = vf, r(k−1)(T) free for 3 ≤k ≤p where q is a positive definite function, the system is at least second order (p ≥2), and the position and velocities boundary conditions are identical to those of (2), with all other boundary constraints free to specify. If umax in (2) is the maximum possible acceleration achievable in a given axis imposed by c(a) ≤0, then the optimal cost-to-go V ∗from the initial conditions for (8) satisfies V ∗≥ρ T ∗ d where T ∗ d is the optimal final time for (2). Proof. First, consider the case when p = 2. For a given axis, if umax is chosen so that it exceeds the allowable acceleration imposed by c(a) ≤0, e.g., ux,max ≥maxax c(a) (see Fig. 3), then the optimal final time T ∗for (8) will always be greater than that of (2) even when q = 0. Specifically, when q = 0, one can show the optimal final time for (2) increases as umax decreases. Moreover, T ∗ d for (2) is guaranteed to exist and be unique [50]. Hence, by appropriate selection of umax, we can ensure T ∗≥T ∗ d always, where equality holds when q = 0 and c(a) is a box constraint. If q ̸= 0, then it immediately follows that T ∗> T ∗ d because q is positive definite by construction. Now consider the case when p > 2. We can deduce V ∗> ρ T ∗ d by contradiction. Specifically, assume T ∗= T ∗ d for p > 2. This would require a to be discontinuous in order to match the bang-bang acceleration profile of (2). However, (8) is a continuous-time linear system that will not exhibit discrete behaviors, e.g., jumps, so it is mathematically impossible to generate an optimal control sequence where the acceleration profiles for Eqs. (2) and (8) will be identical. It can then be concluded V ∗> ρ T ∗ d for p > 2. Therefore, V ∗≥ρ T ∗ d for p ≥2, as desired. Remark 2. Proposition 2 also holds when inequality state or actuator constraints in (8) are present, and when the terminal desired states are specified rather than free. The main result of this section can now be stated. Theorem 1. The optimal cost-to-go for the minimum-time input-constrained double integrator optimal control problem (2) is an admissible heuristic for motion primitive search Fig. 6: Simulation test environments. (a): Willow Garage environment. (b): Perlin Noise environment. where the primitives are solutions to the optimal control problem of the form (8). Proof. Let G = (N, E) be a graph with nodes being sampled velocities at waypoints and edges being the time-optimal trajectories using an input-constrained double integrator. Fur- ther, let Gmp = (Nmp, Emp) be a graph with nodes being sampled velocities, accelerations, etc. at waypoints and edges being trajectories that are solutions to (8). Using the Bellman equation, the optimal cost-to-go V ∗ mp(n) for any n ∈Nmp can be computed recursively. Using Proposition 2, V ∗ mp(n) ≥ V ∗ d (n′) by induction where V ∗ d is the optimal cost-to-go for the minimum-time input-constrained double integrator with n′ ∈N. Recognizing N ⊆Nmp, V ∗ d (n′) can be rewritten as V ∗ d (n). Setting h∗(n) = V ∗ mp(n) and h(n) = V ∗ d (n), it can be concluded h(n) ≤h∗(n). Therefore, by Definition 1, the optimal cost-to-go computed for G is an admissible heuristic for the motion primitive search over Gmp. The importance of Theorem 1 follows from the well-known result that searching a graph with an admissible heuristic is guaranteed to return the optimal path through the graph [36], and can significantly improve search efficiency because not every node in the graph has to be explored. The effectiveness of the proposed heuristic both in terms of path quality and search times is analyzed in Section VI. VI. SIMULATION RESULTS This section contains an analysis of STITCHER. First, we conduct a parameter sensitivity study to determine a suitable sampled velocity set (direction and speed) using a modified version of Dijkstra’s algorithm that has access to a dense set of velocities and no constraints on computation time. Second, STITCHER is compared to a non-greedy algorithm variant to characterize the impact of our greedy graph pre-processing step on solution cost and computation. Third, we investigate the effectiveness of the heuristic proposed in Section V-B in reducing the number of edges generated by STITCHER. Fourth, a study of STITCHER constraining individual mo- tor forces is presented to demonstrate the flexibility of our method in adhering to complex actuator constraints. Fifth, we characterize the average computation time of the different components that make up STITCHER. Lastly, STITCHER is compared to two optimization-based modern planners [5], [9] capable of running in real-time. Simulation experiments were run in a Perlin Noise environ- ment and the Willow Garage environment, both with a volume Fig. 7: Velocity directions sampled at waypoint wi, where ˆai is normal to the separating hyperplane Hi and ˆqi is the heading toward wi+1. TABLE I: Frequency of Velocity Directions in Final Trajectory. Zenith Angle 70◦ 80◦ 90◦ 100◦ 110◦ Frequency 0% 13% 79% 8% 0% Azimuth Angle -50◦ -20◦ -10◦ 0◦ 10◦ 20◦ Frequency 4% 4% 17% 42% 29% 4% of approximately 50 × 50 × 5 m (see Fig. 6). Geometric paths with N = 4, 6, 8 waypoints were found for different start and end locations in each environment. For all experiments, we imposed the following constraints assuming agile drone flight and reasonable thrust limits: fmin = 0.85 m/s2, fmax = 18.75 m/s2, θmax = 60◦, ωmax = 6 rad/s, vmax = 10 m/s, and a time penalty ρ = 1000. All reported times are from tests conducted on an 11th generation Intel i7 CPU. A. Parameter Sensitivity Analysis: Sampled Velocity Set STITCHER requires a discrete velocity set V composed of a set of magnitudes and directions. As shown in Section IV-B, the size of the velocity graph scales quadratically with the number of sampled velocities, so it is desirable to choose V to be small without degrading path quality. Hence, this section focuses on constructing a velocity sample set V that optimizes the trade-off between computation time and path quality as measured by execution time. We conducted two studies to determine i) the predominate sampled velocity directions used by an offline version of Dijkstra’s algorithm and ii) the trade- off between path cost and the number of sampled speeds. In the following comparison, Dijkstra’s algorithm was modified to search over a dense set of velocities with 3611 terminal velocities sampled per waypoint; the method is referred to as Dense Dijkstra. Dijkstra’s algorithm is a complete and optimal search algorithm with respect to the set of actions [36], making it a suitable benchmark. 1) Sampled Direction: Dense Dijkstra was used to statisti- cally identify velocity directions set Vd commonly employed in a variety of test cases. Using Fig. 7, we define angles with respect to the hyperplane H with normal vector ˆa at the plane of symmetry between path segments connecting two waypoints. The search was given velocity direction zenith and azimuth angles in the range [0◦, 180◦] and [-90◦, 90◦] sampled in 10◦increments. Table I shows the frequency of velocity directions chosen by Dense Dijkstra across all motion primitives for six different path length trials in the Perlin Noise and Willow Garage environment. The velocities chosen by Dense Dijkstra align with the normal vector ˆa of the hyperplane 80% of the time. Fig. 8: Analysis of speed discretization on execution and planning time. As the discretization of our method is increased, we converge to the Dense Dijkstra solution. We use Vm = 5 for our experiments as it achieves significant computational advantages while retaining suitable performance. Fig. 9: Speed profiles of STITCHER using varied speed discretizations. As the number of speed samples increases, STITCHER converges to the optimal solution approximated by Dense Dijkstra. From these results, sampling at the center and boundaries of a 20◦cone centered around a given ˆa will yield a suitable set of velocity directions. 2) Sampled Speed: Using the velocity direction set Vd identified in the previous test, the performance of Dense Dijkstra was compared to STITCHER with different sampled speed sets Vm. Each set Vm consists of k discrete speeds sampled from the interval [0, vmax], such that |Vm| = k. In order to ensure a fair comparison, these sets must be subsets of those used by Dense Dijkstra. In other words, we ensure k ≤K, where K denotes the number of speeds sampled in Dense Dijkstra, and our sampled sets are as evenly distributed as possible within the interval. Figure 8 shows the relative execution time (left) and plan- ning times (right) for different sizes of Vm compared to Dense Dijkstra. As expected, the execution time increases as the sampled speed set becomes sparser because the planner has fewer speed options available. However, the observed increase in execution time is at most 8% even with the sparsest speed set tested (|Vm| = 5). Critically, the minimal increase in execution time is accompanied by a significant reduction in planning time: a speed-up of four orders of magnitude with the sparsest sampled speed set tested. Al- though a sample speed set size of |Vm| = 11 yields nearly identical execution times to the dense set while offering three orders of magnitude improvement in planning time, adequate performance is achieved when |Vm| = 5. Hence, we use the set Vm = {0, 0.25 vmax, 0.5 vmax, 0.75 vmax, vmax} for the remainder of our analysis. A representative speed profile for different sized speed sets is shown in Fig. 9. Fig. 10: Evaluation of the effect of a greedy pre-processing step on solution quality and computation speed. Results are from a monte carlo simulation of 50 random realizations for trajectories with between 2 and 10 waypoints. Fig. 11: The final speed profile given by a motion primitive search with (a) an exhaustive exponentially growing graph and (b) STITCHER. B. Greediness Benchmarking STITCHER employs a greedy graph pre-processing step in order to generate a compact final search graph which enables real-time trajectory computation. It is important to note that this greedy pre-processing step results in a graph that does not exhaustively account for all possible acceleration states at a given waypoint (see Fig. 5). Hence, we evaluated the extent to which this greedy pre-processing step leads to sub- optimal solutions by performing a Monte Carlo simulation comparing the path cost of STITCHER to that of an exhaus- tive exponentially-growing graph. The experiment includes 50 realizations of randomly generated initial and final positions, with the number of waypoints ranging between 2 and 10. Figure 10 shows the relative solution cost (left) and planning times (right) resulting from searching over the exponentially-growing graph and running STITCHER. On average, STITCHER achieved a 0.5% difference in solution cost, with the maximum difference being 7%. Additionally, STITCHER achieves approximately 66% speed-up in compu- tation time as a result of the reduced graph size. Excluding solution cases with 2 or 3 waypoints where the exponential growth of the graph including terminal acceleration states cannot be observed, STITCHER achieves an average of 82% speed-up. Figure 11 shows the speed profile of a case in which STITCHER achieved a solution cost that was 2% different from that of the exponential graph. Qualitatively, the trajectories are very similar with only a minor deviation at the end of the trajectory. Therefore, the greedy graph pre- processing step results in a negligible increase in solution cost, but a substantial improvement in computation time. Fig. 12: Final trajectories (red) through a six waypoint path in (a): Perlin Noise and (b): Willow Garage environment. The trajectory options (white) which inform the heuristic are more likely to be in collision in the Willow Garage environment due to tight corridors. C. Heuristic Benchmarking The quality of the heuristic used to guide the motion prim- itive search phase of STITCHER can be quantified by com- paring the number of edges, i.e., motion primitives, generated by STITCHER with that of Dijkstra’s algorithm. The velocity magnitude and direction sets were kept constant across both planners with |Vm| = 11 and |Vd| = 3. The number of edges created is a better evaluation metric than the nodes explored because the main source of computation time comes from generating motion primitives and checking them for constraint violations. The effectiveness of the search heuristic depends on two main factors: how accurately it approximates the true cost-to-go, and whether the heuristic is admissible under the specified constraints. The edge cost used in the final search affects the tightness of the heuristic approximation, while the acceleration constraint imposed to prune motion primitives can influence admissibility. Therefore, we assess the effectiveness of STITCHER’s heuristic with various graph edge costs and acceleration constraints to provide a general understanding of how search performance is impacted. 1) Varying Edge Cost: We evaluate the quality of the heuristic with two different edge costs: (1) equivalent to the LQMT performance index and (2) execution time. Table II shows the percent reduction in the number of edges created for STITCHER compared to Dijkstra’s algorithm. Using the LQMT edge cost in the final search, STITCHER creates 7% fewer edges on average compared to Dijkstra’s algorithm in the Perlin Noise environment and 5% in the Willow Garage environment. Greater edge reduction can be achieved by defining the edge cost to be the trajectory execution time. Recall that the search heuristic is the minimum time required to reach the goal using a double integrator model. Therefore, by using an edge cost soley defined by trajectory duration, the heuristic now more closely approximates the cost-to-go. This tighter approximation is reflected in the results of Table II, where STITCHER generates an average of 20% fewer edges in Perlin Noise and 13% fewer in Willow Garage. The reduced effectiveness of the heuristic in Willow Garage was attributed to having more tight corridors, so more motion primitives were in collision (see Fig. 12). Therefore, while environment dependent, using a search heuristic that closely approximates the remaining cost to the goal greatly reduces search effort. 2) Varying Acceleration Constraint: Varying acceleration constraints may also lead to differing performance of the TABLE II: Heuristic Evaluation with Varied Edge Costs and Acceleration Constraints via Percent Reduction in Generated Edges Compared to Dijkstra’s. Edge Cost Acceleration Constraint Map Perlin Noise Willow Garage N = 4 N = 6 N = 8 N = 4 N = 6 N = 8 % Red. % Red. % Red. % Red. % Red. % Red. LQMT Admissible Truncated Cone 8 8 7 0 3 13 Time Admissible Truncated Cone 22 21 15 0.3 13 26 Time Admissible Box 27 34 28 2 20 24 Time Inadmissible Truncated Cone 45 44 22 9 38 39 Fig. 13: Different acceleration constraints used for heuristic study: (a) admis- sible truncated cone, (b) admissible box, and (c) inadmissible truncated cone. Green dashed outlines are Stage 2 constraints used for heuristic creation while solid black outlines denote constraints imposed for the final trajectory. heuristic as this constraint directly affects admissibility. We tested three different acceleration constraints: (1) admissible truncated cone, (2) admissible box, and (3) inadmissible trun- cated cone (See Fig. 13). The truncated cone constraint comes from the resulting achievable mass-normalized thrust volume with constraints on the magnitude and tilt. This constraint c(a) is admissible when it is maximally bounded by the maximum control input umax imposed during heuristic generation in Stage 2 (see Fig. 13a), while the inadmissible variant applies a smaller umax, e.g., ux,max ≤maxax c(a) (see Fig. 13c). An admissible box constraint is an acceleration constraint that perfectly matches that of Stage 2 (see Fig. 13b). Table II shows that improved performance can be achieved by applying an admissible box constraint on acceleration compared to the admissible truncated cone. In this case, STITCHER achieves approximately a 30% reduction in edges in the Perlin Noise environment and a 15% reduction in the Willow Garage compared to Dijkstra’s algorithm. Because the box constraint on acceleration is equivalent to that used in the heuristic generation, the STITCHER algorithm guides the search away from branches that are likely to exceed acceleration con- straints. Further improvement in search speed may be observed while using an inadmissible truncated cone constraint, and is reflected in Table II, where this algorithm variant achieves the highest percent reduction among all STITCHER variations. By allowing the final motion primitive search to have a larger allowance in achievable acceleration, the true edge cost may be smaller than that estimated by the heuristic. The use of inadmissible heuristics in graph search foregos desirable optimality guarantees, but can more rapidly motivate the search toward the goal as a larger weighting is placed on the heuristic cost. 3) Summary: This study showed that the proposed heuristic is effective, but its performance depends on the environment, graph edge cost, and the chosen acceleration constraint. For the remaining experiments, we used an LQMT edge cost to ensure fair comparisons to state-of-the-art algorithms, and apply an admissible truncated cone acceleration constraint to retain graph search optimality guarantees and accurately reflect the true physical limits of quadrotor systems. D. Constrained Motor Forces We present a case study demonstrating STITCHER’s ability to generate trajectories that satisfy individual thruster limits for a quadrotor, which also extends to lander vehicles. We assume the vehicle dynamics takes the form m¨r = R T −mg J ˙ω = τ −ω × Jω, where r is the vehicle’s position vector, R is the rotation matrix that represents the vehicle orientation with respect to an inertial frame, T = T ˆez is the thrust vector directed along the body z-axis, g is the gravity vector, ω is the angular velocity vector, J is the inertia tensor, and τ is the body torque vector. For the case study, we assume a quadrotor with a mass m = 0.5 kg, moment of inertia J = diag(0.01, 0.01, 0.01) kg m2, arm length l = 12.5 cm, motor drag coefficient c = 0.2 m, minimum single-motor thrust Fmin = 0.15 N, and maximum single-motor thrust Fmax = 3 N. Each motor thrust Fi is constrained by 0 < Fmin ≤Fi ≤Fmax, ∀i ∈{1, 2, 3, 4}. At any given time, individual motor thrusts Fi are found by solving a linear system that depends on motor placement and physical vehicle properties, i.e., solving   T τx τy τz  = M   F1 F2 F3 F4  , for some invertible matrix M ∈R4×4 with T denoting the total thrust T = m∥f∥2. The torque τ = [τx, τy, τz]⊤along the trajectory is τ = J ˙ω + ω × Jω and can be found via differential flatness, with components of ω and ˙ω given by   ωy −ωx 0  = R⊤˙ˆ f,   ˙ωy −˙ωx 0  = R⊤¨ˆ f −ω × R⊤˙ˆ f, where ˆ f is the normalized thrust vector. We conducted two experiments in the Willow Garage envi- ronment, where the tight corridors may make individual motor constraint satisfaction more difficult. As shown in Fig. 14, the trajectories achieve high speeds while strictly obeying the per- motor limits on thrust. These results showcase the versatility (a) Individual Motor Constraint Case Study 1. (b) Individual Motor Constraint Case Study 2. Fig. 14: Case studies in the Willow Garage environment of trajectory plans constraining individual motors by the maximum thrust limits given by manufacturer specifications. Left: Trajectory colored by speed. Right: The corresponding thrust profile of individual motors depicting constraint satisfaction. of our framework in adhering to highly nonconvex constraints that are directly limited by hardware. E. Planning Time Composition Analysis Figure 15 shows the computation time of the different components of STITCHER averaged over six tested trials. The average time to perform the velocity graph search is 1.8 ms, compared to 2.2 ms for the motion primitive search. Although both searches have the same graph size, it is important to note that the motion primitives from (3) are more time-consuming to generate than the minimum time trajectories used in the velocity graph. Therefore, the similar computation times arise from the search heuristic reducing the number of edges generated. The low computation time of the velocity search further indicates its effectiveness in computing an informative admissible heuristic for the motion primitive search. Finally, constraint-checking with uniform samples at every 0.1 s averaged only 0.3 ms, showing the efficacy of the relatively simple approach. F. Comparison with State-of-the-Art We compared STITCHER with two state-of-the-art algo- rithms: GCOPTER [5] and FASTER [9]. GCOPTER performs an online optimization by incorporating state constraints into Fig. 15: The average contribution of different path planning components. the cost functional and running a quasi-newton method, while FASTER solves a mixed integer quadratic program online. Both algorithms rely on a sparse geometric path to form safe flight corridors, but do not enforce final trajectories to pass through waypoints. We evaluate each planner by time (planning time versus execution time) and failure (constraint violation or incomplete/no path found). For the Perlin Noise environment, the path lengths were 12.5 m, 30 m, and 55 m while the path lengths for Willow Garage were 20 m, 25 m, and 30 m with N = 4, 6, 8 waypoints for both environments. 1) Time Analysis: Table III compares the planning times and trajectory execution times of each planner given the six different geometric paths. STITCHER’s planning times are TABLE III: State-of-the-Art Comparison Time Analysis. Map N Planning time (ms) Execution time (s) [9] [5] Ours [9] [5] Ours Perlin Noise 4 109 21.1 3.82 2.98 3.51 3.67 6 207 59.4 10.4 4.42 4.42 5.75 8 1020 116 16.9 7.62 7.34 9.62 Willow Garage 4 240 38.2 8.27 4.40 4.37 4.53 6 5240 53.2 20.9 8.25 5.97 6.30 8 10400 84.4 27.1 10.5 FAILED 7.95 TABLE IV: Effect of Different Waypoints and Sampling in Perlin Noise. N Planning time (ms) Execution time (s) STITCHER STITCHER [5] S W SW [5] S W SW 4 27.2 14.8 3.01 10.5 3.51 3.44 3.61 3.56 6 72.0 44.7 13.8 42.4 4.42 5.43 5.38 4.90 8 130 64.1 24.6 86.9 7.34 9.13 8.99 9.26 Abbreviations: S – speed sample set expanded to |Vm| = 9; W – optimized waypoints from [5] used; SW – combination of S and W. TABLE V: State-of-the-Art Comparison Failure Analysis. Map No Path Found (%) State Violation (%) Collisions (%) [9] [5] Ours [9] [5] Ours [9] [5] Ours Perlin Noise 0 0 0 6 0 0 4 4 0 Willow Garage 18 2 2 8 0 0 0 24 0 faster than those measured for FASTER and GCOPTER for every test, and are up to 7x faster than GCOPTER and 400x faster than FASTER. It is important to note the computation times for the motion primitive search planner [16] were omit- ted from Table III because planning times exceeded several seconds. In some cases GCOPTER and FASTER achieved lower execution times. This can be attributed to their ability to treat waypoints as soft constraints, i.e., the trajectory is only required to pass nearby a waypoint rather than through it, as well as the chosen resolution of state samples in STITCHER. The difference is most noticeable in the Perlin Noise en- vironment where more free space is available for waypoint adjustment. Table IV compares the planning and execution times of GCOPTER [5] to different algorithm variants of STITCHER evaluating the effect of sampling and waypoint selection in the Perlin Noise environment. Case S increases the sampled speed set to |Vm| = 9, Case W utilizes the optimized waypoints from GCOPTER, and Case SW is a combination of both strategies. All the case variants reduce STITCHER’s baseline execution time, motivating future work in incorporating waypoint flexibility. 2) Failure Analysis: A Monte Carlo simulation composed of 50 realizations was conducted to evaluate the different modes of failure experienced by each planner. Table V com- pares the rate at which each planner does not find a path, generates a trajectory violating state constraints or gener- ates a trajectory in collision. The “No Path Found” metric includes a numerical solver not returning a solution, or if the solution does not reach the goal. Using either FASTER or GCOPTER, this can occur due to numerical instability or when a feasible solution is not within the calculated set of safe corridors. Conversely, in STITCHER’s graph search framework, a graph disconnection occurs when a feasible Fig. 16: Example of mass-normalized thrust profile strictly satisfying con- straints in Willow Garage environment. Fig. 17: Control and estimation architecture for custom quadrotor platform used in hardware experiments. The vehicle is equipped with an OrangePi 3.0 computer, Teensy 4.0 microcontroller, and an InvenSense ICM-20948 IMU. An external motion capture system provides pose measurements at 100Hz. solution is not within the discrete set of sampled states. Across all test cases, STITCHER’s motion primitive graph disconnects only once, achieving the lowest rate of failure among the state-of-the-art planners. In the Willow Garage environment, where narrow corridors make collisions more likey, the number of failed solutions by FASTER and collisions by GCOPTER significantly increases. In contrast, STITCHER never violates constraints (state, control, or obstacles) because all constraints are strictly enforced. As an example, Fig. 16 is a representative mass-normalized thrust profile generated by STITCHER which remains within the valid limits. VII. HARDWARE RESULTS The custom quadrotor used for hardware experiments is shown in Figure 17. Onboard hardware includes an Orange Pi 3.0 computer, Teensy 4.0 microcontroller, and an InvenSense ICM-20948 IMU. The system was flown indoors in a 4m x 4m x 4m drone cage equipped with an OptiTrack motion capture system that accurately measures the pose of the vehicle. A. Control and Estimation Architecture The control and state estimation architecture used in the hardware experiments is shown in Fig. 17. We employed the standard cascaded geometric controller [51], [52] to track the trajectory generated by STITCHER with the desired acceler- ation and jerk as feedforward. The outer loop PID position tracking controller ran on the Orange Pi flight computer, and the inner loop quaternion-based attitude tracking controller from [53] ran on the Teensy microcontroller; the two pro- cessors communicate via serial. The outer loop control rate was 100Hz while the inner loop control rate was 500Hz. For estimation, position and orientation measurements from an Fig. 18: Motion trail of quadrotor executing trajectory from Experiment 1 in the virtual Random Forest 1 environment through 4x4x4 meter drone cage. TABLE VI: Trajectory Tracking Error. RMSE Experiment Random Forest 1 Random Forest 2 Obstacle Course Windowed Wall Pos. (cm) 9.86 13.3 10.9 8.59 Tilt (deg) 4.98 6.01 6.28 6.41 external motion capture system were sent wirelessly at 100Hz to the onboard Orange Pi computer, and fused with an IMU to estimate position, velocity, orientation, angular velocity, and IMU biases with a nonlinear geometric observer [54]. Estimates were generated by the observer at 100Hz and used by the outer loop position controller. B. Flight Experiments Indoor flight experiments were conducted to evaluate the dy- namic feasibility of the trajectories planned with STITCHER. The trajectories were first generated with virtual obstacles and then flown using our custom quadrotor (see Fig. 18). In Fig. 19, planned trajectories generated by STITCHER are displayed alongside the actual trajectories flown by the quadrotor in four experiments. The environments are named: Random Forest 1 (Fig. 19a), Random Forest 2 (Fig. 19b), Obstacle Course (Fig. 19c), and Windowed Wall (Fig. 19d). The desired path lengths for each test in their respective environment were 9.6 m, 13.1 m, 8.7 m, and 8.5 m. Table VI reports the root mean square errors (RMSE) of position and tilt for each experiment. Across all experiments, the maximum RMSE is 6.41 degrees in tilt and 13.3 cm in position. Further, the average ratio between position RMSE to the total path length was only 1.07% over all the tests. These flight experiments show that STITCHER generates trajectories that satisfy constraints and can be tracked with low error using a standard cascaded geometric controller. Additionally, while STITCHER may use any p−th order integrator with p ≥2 to model system dynamics, the results show that using a triple integrator model is sufficient for quadrotor tracking control despite discontinuities in jerk at waypoints. Figure 20 compares the actual and desired profiles of the mass-normalized thrust (top), position (middle), and attitude (bottom) through Random Forest 1. The quadrotor remains within the physical limits dictated by the thrust magnitude and tilt constraints, and the motor commands executed by the drone closely match the desired mass-normalized thrust. The position and attitude error profiles further show that for the duration of the flight, the maximum deviations in position and tilt are less than 20 cm and 20 degrees, respectively. These results demonstrate that STITCHER generates trajectories that safely and accurately express complex system-level constraints. VIII. CONCLUSIONS In this work, we presented STITCHER, a motion primitive search planning algorithm that utilizes a novel three-stage planning architecture to design constrained trajectories in real- time over long distances. We proved the search graph is finite, and the proposed search heuristic is admissible, so STITCHER is guaranteed to i) have a priori bounded time and memory complexity and ii) generate optimal trajectories with respect to the sampled set of states. Real-time search speeds were achieved through our novel heuristic crafting technique, greedy graph pre-processing method, and non-conservative constraint and collision checking procedure. Our extensive simulation study showed the trade-off in terms of path quality and compu- tation time for different sampled velocity sets, the effectiveness of the proposed heuristic with varied edge costs and state constraints, a case study imposing individual thruster limits, and the average computation times of the components that make up STITCHER. We also found that our greedy motion primitive graph pre-processing step has a negligible effect on solution cost compared to the observed computation speed up owing to the reduced graph size. Importantly, STITCHER was shown to consistently generate trajectories faster than two state-of-the-art optimization-based planners while never violating constraints. Hardware experiments further proved that our planner is effective in generating trajectories suitable for position and attitude tracking control while remaining within set physical limits. Future work includes developing a receding horizon implementation for navigating through un- known environments, the use of imitation learning to improve search efficiency, and learning motion primitives for more general optimal control problems. Acknowledgments The authors would like to thank lab mem- bers Grace Kwak, Ryu Adams, James Row, and Qiyuan Wu for implementation and hardware support. REFERENCES [1] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in IEEE International Conference on Robotics and Automation, 2011, pp. 2520–2525. [2] C. Richter, A. Bry, and N. Roy, “Polynomial trajectory planning for ag- gressive quadrotor flight in dense indoor environments,” in International Symposium of Robotics Research, 2016, pp. 649–666. [3] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Gal- ceran, “Continuous-time trajectory optimization for online uav replan- ning,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 5332–5339. [4] B. Zhou, J. Pan, F. Gao, and S. Shen, “Raptor: Robust and perception- aware trajectory replanning for quadrotor fast flight,” IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1992–2009, Dec. 2021. [5] Z. Wang, X. Zhou, C. Xu, and F. Gao, “Geometrically constrained tra- jectory optimization for multicopters,” IEEE Transactions on Robotics, vol. 38, no. 5, pp. 3259–3278, May 2022. [6] Y. Ren, F. Zhu, W. Liu, Z. Wang, Y. Lin, F. Gao, and F. Zhang, “Bubble planner: Planning high-speed smooth quadrotor trajectories using receding corridors,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022, pp. 6332–6339. (a) Experiment 1: Flight through Random Forest 1. (b) Experiment 2: Flight through Random Forest 2. (c) Experiment 3: Flight through Obstacle Course. (d) Experiment 4: Flight through Windowed Wall. Fig. 19: Hardware experiments through four different virtual environments: (a) Random Forest 1, (b) Random Forest 2, (c) Obstacle Course, and (d) Windowed Wall. The desired trajectory planned by STITCHER is shown in red and the actual trajectory flown by the quadcopter is shown in green. Two views of the environment are provided in each subfigure. Left: Top-down view. Right: Perspective view. [7] R. Deits and R. Tedrake, “Computing large convex regions of obstacle- free space through semidefinite programming,” in Algorithmic Founda- tions of Robotics. Springer, 2015, pp. 109–124. [8] ——, “Efficient mixed-integer planning for uavs in cluttered environ- ments,” in IEEE International Conference on Robotics and Automation, 2015, pp. 42–49. [9] J. Tordesillas, B. T. Lopez, M. Everett, and J. P. How, “FASTER: Fast and safe trajectory planner for navigation in unknown environments,” IEEE Transactions on Robotics, vol. 38, no. 2, pp. 922–938, Apr. 2022. [10] T. Marcucci, M. Petersen, D. von Wrangel, and R. Tedrake, “Motion planning around obstacles with convex optimization,” Science Robotics, vol. 8, no. 84, Nov. 2023. [11] M. W. Mueller, M. Hehn, and R. D’Andrea, “A computationally ef- ficient motion primitive for quadrocopter trajectory generation,” IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1294–1310, Dec. 2015. [12] P. Florence, J. Carter, and R. Tedrake, “Integrated perception and control at high speed: Evaluating collision avoidance maneuvers without maps,” in Algorithmic Foundations of Robotics. Springer, 2016, pp. 304–319. [13] B. T. Lopez and J. P. How, “Aggressive 3-d collision avoidance for high- speed navigation,” in IEEE International Conference on Robotics and Automation, 2017, pp. 5759–5765. [14] M. Ryll, J. Ware, J. Carter, and N. Roy, “Efficient trajectory planning for high speed flight in unknown environments,” in IEEE International Conference on Robotics and Automation, 2019, pp. 732–738. [15] S. Liu, N. Atanasov, K. Mohta, and V. Kumar, “Search-based motion planning for quadrotors using linear quadratic minimum time control,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 2872–2879. [16] S. Liu, K. Mohta, N. Atanasov, and V. Kumar, “Search-based motion planning for aggressive flight in SE(3),” IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2439–2446, July 2018. [17] B. Zhou, F. Gao, L. Wang, C. Liu, and S. Shen, “Robust and effi- cient quadrotor trajectory generation for fast autonomous flight,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3529–3536, Oct. 2019. [18] P. Foehn, D. Brescianini, E. Kaufmann, T. Cieslewski, M. Gehrig, M. Muglikar, and D. Scaramuzza, “Alphapilot: Autonomous drone racing,” Autonomous Robots, vol. 46, no. 1, pp. 307–320, Oct. 2021. [19] J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley Longman Publishing Co., Inc., 1984. [20] H. J. Levy and B. T. Lopez, “STITCHER: Real-time trajectory planning with motion primitive search,” arXiv:2412.21180, 2024. [21] D. Mellinger, A. Kushleyev, and V. Kumar, “Mixed-integer quadratic program trajectory generation for heterogeneous quadrotor teams,” in IEEE International Conference on Robotics and Automation, 2012, pp. 477–483. [22] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, “Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-d complex environments,” IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1688–1695, July 2017. [23] B. Landry, R. Deits, P. R. Florence, and R. Tedrake, “Aggressive quadrotor flight through cluttered environments using mixed integer programming,” in IEEE International Conference on Robotics and Automation, 2016, pp. 1469–1475. [24] T. Marcucci, P. Nobel, R. Tedrake, and S. Boyd, “Fast path planning through large collections of safe boxes,” IEEE Transactions on Robotics, vol. 40, July 2024. [25] M. Hehn and R. D’Andrea, “Quadrocopter trajectory generation and control,” IFAC Proceedings Volumes, vol. 44, no. 1, pp. 1485–1491, Jan. 2011. [26] T. M. Howard, C. J. Green, A. Kelly, and D. Ferguson, “State space sampling of feasible motions for high-performance mobile robot nav- igation in complex environments,” Journal of Field Robotics, vol. 25, no. 6-7, pp. 325–345, June 2008. [27] B. T. Lopez and J. P. How, “Aggressive collision avoidance with limited field-of-view sensing,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 1358–1365. [28] M. Dharmadhikari, T. Dang, L. Solanka, J. Loje, H. Nguyen, N. Khedekar, and K. Alexis, “Motion primitives-based path planning for fast and agile exploration using aerial robots,” in IEEE International Conference on Robotics and Automation, 2020, pp. 179–185. [29] J. Hou, X. Zhou, N. Pan, A. Li, Y. Guan, C. Xu, Z. Gan, and F. Gao, “Primitive-swarm: An ultra-lightweight and scalable planner for large- scale aerial swarms,” IEEE Transactions on Robotics, vol. 41, pp. 3629– 3648, May 2025. [30] E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numerische Mathematik, vol. 1, 1959. [31] P. Hart, N. Nilsson, and B. Raphael, “A formal basis for the heuristic determination of minimum cost paths,” IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100–107, July 1968. [32] D. Harabor and A. Grastien, “Online graph pruning for pathfinding on grid maps,” in National Conference on Artificial Intelligence, 2011, pp. 1114–1119. [33] D. Dolgov, S. Thrun, and M. Montemerlo, “Path planning for au- tonomous vehicles in unknown semi-structured environments,” Interna- tional Journal of Robotics Research, vol. 29, no. 5, pp. 485–501, Jan. 2010. [34] M. Pivtoraiko and A. Kelly, “Kinodynamic motion planning with state Fig. 20: Experiment 1: State profiles comparing the actual flown trajectory and STITCHER’s planned trajectory. Top: The actual (blue) and desired (orange) mass-normalized thrust profiles remaining within permissible regions, outside of invalid regions (grey). Middle: Comparison of the actual and desired position in the three coordinate axes along with the total error (red). Bottom: The actual and desired roll ϕ, pitch θ, and yaw ψ as well as the tilt error (red) of the trajectory over the flight duration. lattice motion primitives,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 2172–2179. [35] O. Andersson, O. Ljungqvist, M. Tiger, D. Axehill, and F. Heintz, “Receding-horizon lattice-based motion planning with dynamic obstacle avoidance,” in IEEE Conference on Decision and Control, 2018, pp. 4467–4474. [36] S. J. Russell and P. Norvig, Artificial intelligence: a modern approach. Pearson, 2016. [37] L. Jarin-Lipschitz, J. Paulos, R. Bjorkman, and V. Kumar, “Dispersion- minimizing motion primitives for search-based motion planning,” in IEEE International Conference on Robotics and Automation, 2021, pp. 12 625–12 631. [38] L. Palmieri, L. Bruns, M. Meurer, and K. O. Arras, “Dispertio: Optimal sampling for safe deterministic motion planning,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 362–368, Apr. 2020. [39] R. Penicka and D. Scaramuzza, “Minimum-time quadrotor waypoint flight in cluttered environments,” IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 5719–5726, Apr. 2022. [40] A. Romero, S. Sun, P. Foehn, and D. Scaramuzza, “Model predictive contouring control for time-optimal quadrotor flight,” IEEE Transactions on Robotics, vol. 38, no. 6, pp. 3340–3356, Dec. 2022. [41] A. Romero, R. Penicka, and D. Scaramuzza, “Time-optimal online replanning for agile quadrotor flight,” IEEE Robotics and Automation Letters, vol. 7, July 2022. [42] M. Krinner, A. Romero, L. Bauersfeld, M. Zeilinger, A. Carron, and D. Scaramuzza, “MPCC++: Model predictive contouring control for time-optimal flight with safety constraints,” in Robotics: Science and Systems, 2024. [43] B. Paden, V. Varricchio, and E. Frazzoli, “Verification and synthesis of admissible heuristics for kinodynamic motion planning,” IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 648–655, Apr. 2017. [44] S. Kim and B. An, “Learning heuristic a: Efficient graph search using neural network,” in IEEE International Conference on Robotics and Automation, 2020, pp. 9542–9547. [45] J. Thayer, A. Dionne, and W. Ruml, “Learning inadmissible heuristics during search,” in International Conference on Automated Planning and Scheduling, 2011, pp. 250–257. [46] M. Bhardwaj, S. Choudhury, and S. Scherer, “Learning heuristic search via imitation,” in Conference on Robot Learning, 2017, pp. 271–280. [47] M. P´andy, W. Qiu, G. Corso, P. Veliˇckovi´c, Z. Ying, J. Leskovec, and P. Li`o, “Learning graph search heuristics,” in Learning on Graphs Conference, 2022, pp. 10–1. [48] J. W. Demmel, Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 1997. [49] B. Acikmese and S. R. Ploen, “Convex programming approach to powered descent guidance for mars landing,” Journal of Guidance, Control, and Dynamics, vol. 30, no. 5, pp. 1353–1366, Sept. 2007. [50] D. E. Kirk, Optimal control theory: an introduction. Courier Corpora- tion, 2004. [51] E. Frazzoli, M. A. Dahleh, and E. Feron, “Trajectory tracking control design for autonomous helicopters using a backstepping algorithm,” in IEEE American Control Conference, 2000, pp. 4102–4107. [52] T. Lee, M. Leok, and N. H. McClamroch, “Geometric tracking control of a quadrotor uav on se (3),” in IEEE Conference on Decision and Control, 2010, pp. 5420–5425. [53] B. T. Lopez and J.-J. Slotine, “Sliding on manifolds: Geometric attitude control with quaternions,” in IEEE International Conference on Robotics and Automation, 2021, pp. 11 140–11 146. [54] B. T. Lopez, “A contracting hierarchical observer for pose-inertial fusion,” arXiv preprint arXiv:2303.02777, 2023.
STITCHER: Constrained Trajectory Planning in Complex Environments with Real-Time Motion Primitive Search Helene J. Levy and Brett T. Lopez Abstract-Autonomous high-speed navigation through large, complex environments requires real-time generation of agile trajectories that are dynamically feasible, collision-free, and satisfy state or actuator constraints. Modern trajectory planning techniques primarily use numerical optimization, as they enable the systematic computation of high-quality, expressive trajectories that satisfy various constraints. However, stringent requirements on computation time and the risk of numerical instability can limit the use of optimization-based planners in safety-critical scenarios. This work presents an optimizationfree planning framework called STITCHER that stitches short trajectory segments together with graph search to compute longrange, expressive, and near-optimal trajectories in real-time. STITCHER outperforms modern optimization-based planners through our innovative planning architecture and several algorithmic developments that make real-time planning possible. Extensive simulation testing is performed to analyze the algorithmic components that make up STITCHER, along with a thorough comparison with two state-of-the-art optimization planners. Simulation tests show that safe trajectories can be created within a few milliseconds for paths that span the entirety of two 50 m x 50 m environments. Hardware tests with a custom quadrotor verify that STITCHER can produce trackable paths in real-time while respecting nonconvex constraints, such as limits on tilt angle and motor forces, which are otherwise hard to include in optimization-based planners. Index Terms-Trajectory planning, aerial systems, motion primitives, graph search, collision avoidance. I. INTRODUCTION P LANNING collision-free, dynamically feasible trajectories in real-time through complex environments is crucial for many autonomous systems. As a result, trajectory planning has garnered significant interest from the research community, but meeting the reliability requirements for safety-critical real-world applications remains challenging. Specifically, few methods have guarantees regarding trajectory optimality and time/memory complexity without sacrificing trajectory length, computation time, or expressiveness. Our approach addresses this gap by combining optimal control with graph search to generate near-optimal trajectories over long distances in realtime, resulting in a framework that provides strong guarantees on path quality and algorithm complexity. Optimization-based trajectory planning has emerged as the primary framework for autonomous systems that must navigate complex environments. This is because constraints and performance objectives can be naturally stated in the optimization problem. Most approaches can be broadly classified by their use of continuous or integer variables. Continuous variable methods employ gradient descent to jointly optimize over Authors are with the VECTR Laboratory, . {hjlevy, Fig. 1: A trajectory (colored based on speed) generated by our proposed algorithm called STITCHER through a perlin noise environment. STITCHER searches over candidates motion primitives (white) to find a safe trajectory in real-time with time and memory complexity guarantees. Indoor flight experiments were performed to verify dynamic feasibility of trajectory plans. the coefficients of basis functions (e.g., polynomials) and waypoint arrival times while imposing obstacle and state constraints [1]-[6]. Integer variable methods require that the free space of the environment be represented as the union of convex sets (continuous variable methods have also used this representation, e.g., [5], [6]) and solve a mixed-integer program for a collision-free trajectory [7]-[10]. Despite continued innovations, these methods lack a priori time complexity bounds and often scale very poorly with trajectory length; this is especially true for integer programming approaches. A computationally efficient alternative to optimizationbased trajectory planning is the use of so-called motion primitives: a library of short length or duration trajectories that can be efficiently computed and evaluated [11]-[14]. To effectively use motion primitives, a planner must operate in a receding horizon fashion, i.e., continuously replan, because motion primitives are inherently near-sighted with their short length or duration. This can introduce several performance issues, e.g., myopic behavior or unexpressive trajectories, that are exacerbated in large, complex environments. Subsequent work has attempted to pose the problem as a graph search with nodes and edges being desired states (position, velocity, etc.) and motion primitives, respectively [15]-[18]. While this allows for the creation of long-range trajectories, search times can be extremely high (seconds) because of the graph size. An admissible search heuristic can be used to reduce the number of node expansions required to find a solution, also known as search effort, while preserving optimality of the graph [19]. However, designing such a search heuristic is non-trivial. We propose a new trajectory planning algorithm called STITCHER that can perform real-time motion primitive 20 Oct 2025 searches across long distances in complex environments. STITCHER utilizes an innovative three-stage planning architecture to generate smooth and expressive trajectories by stitching motion primitives together through graph search. The first two stages are designed to expedite the motion primitive search in the final stage by constructing a compact but expressive search graph and search heuristic. Specifically, given a set of waypoints computed in the first stage, we create a velocity graph by representing nodes as sampled velocities at each waypoint, and edges as quick-to-generate minimum-time trajectories. We employ dynamic programming on this graph, using the Bellman equation to compute the cost-to-go for each node. Critically, the cost-to-go is then used as an admissible heuristic to efficiently guide the motion primitive search in the third stage. We also leverage a greedy graph pre-processing step to form a compact motion primitive graph. We prove all graphs are finite and that the proposed heuristic is admissible. These technical results are critical as they guarantee i) a priori time and memory complexity bounds and ii) trajectory optimality with respect to the graph discretization. To further reduce computation time, we improve the collision checking procedure from [13], by leveraging the known free space from previous nearest-neighbor queries, bypassing the rigidity and computational complexity of free space decomposition. Additionally, we show that employing a simple sampling procedure in the final search stage is effective at pruning candidate trajectories that violate complex state or actuator constraints. STITCHER was extensively tested in two simulation environments to evaluate algorithmic innovations and assess its performance against two state-of-the-art optimization-based planners [5], [9]. STITCHER is shown to generate high-quality, dynamically feasible trajectories over long distances (over 50 m) with computation times in the milliseconds, and consistently generates trajectories faster than the state-of-the-art with comparable execution times. This paper substantially builds upon previous work [20] by providing a new in-depth analysis of the algorithmic innovations used to achieve real-time planning times, along with hardware experiments to demonstrate real-world feasibility. We include a parameter sensitivity analysis to evaluate the performance of STITCHER with different hyperparameters, such as using different state discretization sets. A new study on the proposed search heuristic is conducted to examine the effects of varied edge costs and acceleration constraints on both admissibility and search effort (node expansions). The impact of the greedy graph pre-processing step on solution quality and computation time is evaluated through comparisons with an exhaustive, i.e., non-greedy, graph creation. We also showed STITCHER can adhere to highly nonconvex constraints, such as individual motor constraints for a quadrotor, with no noticeable increase computation time. Comparison with stateof-the-art algorithms in simulation was also expanded, incorporating new metrics to better evaluate performance regarding optimized waypoints and the sampled set of states. Finally, we flew the trajectories designed by STITCHER in hardware on a custom quadrotor to show that the trajectories were dynamically feasible, adhered to physical constraints, and could be tracked via standard geometric cascaded control. II. RELATED WORKS A. Optimization-based Planning Designing high quality trajectories using online optimization has become a popular planning strategy as a performance index and constraints can be systematically incorporated into an optimization problem. Optimization-based trajectory planners can be categorized using several criteria, but the clearest delineation is whether the method uses continuous or integer variables. For methods that use only continuous variables, the work by [2] reformulated the quadratic program in [1] to jointly optimize over polynomial endpoint derivatives and arrival times for a trajectory passing through waypoints. Collisions were handled by adding intermediate waypoints and redoing the trajectory optimization if the original trajectory was in collision. Oleynikova et al. [3] represented obstacles using an Euclidean Signed Distance Field (ESDF) which was incorporated into a nonconvex solver as a soft constraint. Zhou et al. [4] used a similar penalty-based method but introduced a topological path search to escape local minima. An alternative approach is to decompose the occupied space or free space into convex polyhedra [7], [21], [22] which can be easily incorporated as constraints in an optimization. The methods in [5], [6] treat these constraints as soft while efficiently optimizing over polynomial trajectory segments that must pass near waypoints. One can also use the free-space polyhedra to formulate a mixed-integer program [8]-[10], [23] to bypass the nonconvexity introduced by having unknown waypoint arrival times, but at the expense of poor scalability with trajectory length and number of polyhedra. Marcucci et al. [24] addresses scalability concerns of [10] by solving a sequence of convex problems instead of one large-scale optimization but requires an offline process for generating collision-free convex sets. B. Motion Primitives Motion primitive planners have been proposed as an alternative to optimization-based planners to address computational complexity and numerical instability concerns. The underlying idea of motion primitive planners is to select trajectories online from a precomputed, finite library of trajectories. Initial work on motion primitives for quadrotors leveraged differential flatness and known solutions to specific optimal control problems to efficiently compute point-to-point trajectories in real-time [11], [25]. Later work employed motion primitives for receding horizon collision avoidance where primitives were efficiently generated online by sampling desired final states, and selected at each planning step based on safety and trajectory cost [12]-[14], [26]-[29]. Howard et al. [26] first introduced this idea of searching over feasible trajectories of a car with a model predictive control framework. Subsequent works extended this methodology to quadcopters using depth images [12], [14], point clouds [13], [27], or ESDFs [28] for motion primitive evaluations and collision avoidance. While computationally efficient, the behavior of these planners can be myopic, leading to suboptimal behavior in complex environments which limit their use for planning long-term trajectories. C. Motion Primitive Search One way to address nearsightedness is to perform a graph search over motion primitives, i.e., stitch motion primitives together. This can be achieved by extending traditional graph search algorithms [30]-[32], which typically use coarse discrete action sets, to using a lattice of motion primitives [15]- [17], [33]-[35]. Graph search algorithms are an attractive method for planning due to inherent guarantees of completeness, optimality1, and bounded time and memory complexity [36]. The works by Liu et al. [15], [16] were some of the first works to successfully showcase a search-based algorithm using a lattice of motion primitives for use on quadcopters. However, these methods can be computationally expensive as generating high-quality trajectories relies on generating a large number of motion primitives for sufficient density. Jarin et al. [37] addresses computation concerns by improving upon the sampling of different motion primitives, inspired by a minimum dispersion sampling method [38]. Another way to narrow the search space is by utilizing a geometric path as a prior and constraining motion primitives to pass through waypoints from the path. Recently, [18], [39]-[41] proposed an efficient motion primitive search in velocity space using minimum-time input-constrained trajectories from a double integrator model restricted to pass through a set of waypoints. The search can be done in real-time but the resulting bang-bang acceleration profile is dynamically infeasible for quadrotors, leading to poor tracking performance. An additional smoothing step, e.g., model predictive contouring control, is required to achieve sufficient trajectory smoothness [40]-[42]. D. Search Heuristics Fast graph search speed while retaining optimality guarantees can be achieved by employing an admissible search heuristic [36] to guide the search to the goal state. Constructing an informative and admissible heuristic, however, is nontrivial. Much of the previous work in motion primitive search overlooks the importance of the heuristic by generating a weak approximation to the goal [17], using an inadmissible heuristic which forfeits optimality guarantees [16], or proceeding without a heuristic [18]. As a result, motion primitive search algorithms to date scale poorly in large environments and for large planning horizons, making them unsuitable for systems with limited onboard computational resources. Paden et al. [43] proposed a method to systematically construct admissible heuristics for use in kinodynamic planning using sum-of-squares (SOS) programming. However, the resulting size of the SOS program requires heuristic calculations to be performed offline. Other strategies involve learning a search heuristic or cost-to-go [44]-[47]. Kim et al. [44] uses a neural network to approximate graph node distances and provides a sub-optimality bound on the solution. Reinforcement and imitation learning have also been proposed for learning search heuristics [45]-[47], but these works focus on minimizing node expansions rather than ensuring admissibility, sacrificing the optimality guarantees of graph search. 1In the context of graph search, optimality refers to resolution optimality, i.e., optimality with respect to the discretized state space. III. PROBLEM FORMULATION This work is concerned with solving the following trajectory planning problem: min u ∈U J = r(T) + Z T 0 q(x, u) dt s.t. ̇x = Ax + Bu x ∈Xs, x /∈Xobst, u ∈U x(0) = x0, x(T) = xf, (1) where x ∈Rn is the state that must satisfy state Xs and obstacle (collision) Xobst constraints, u ∈Rm is the control input that must satisfy actuator constraints U, A ∈Rn×n and B ∈Rn×m govern the system's dynamics and are assumed to take a multi-axis chain of integrators form, and r : R+ →R+ and q : Rn × Rm →R+ are the terminal and stage cost, respectively. The goal is to find an optimal final time T ∗and feasible optimal state trajectory x∗(t) with a corresponding control input sequence u∗(t) for t ∈[0 T ∗] that steers the system from an initial state x0 to a desired final state xf that minimizes the cost functional J. While the dynamics are linear in (1), many nonlinear systems can be placed into the linear control affine form if they are differentially flat, e.g., VTOL vehicles like quadrotors, capturing a large class of systems of interest. In many cases, the state vector can be x = (r, v, a, . . . , r(p-1))⊤and the control input can be u = r(p) where r = (x, y, z)⊤is the position of the vehicle in some reference frame. A. Background: Motion Primitives We define motion primitives to be closed-form solutions to specific optimal control problems. In this work, we will restrict our attention to the following two optimal control problems: the input-constrained minimum-time problem for a double integrator and the linear quadratic minimum time problem for a p-th order integrator. We will briefly review each optimal control problem and the structure of its solution. The formulations will be presented for a single axis, but can be repeated for all three position coordinates. Minimum-Time Double Integrator: Given an initial state (s0, v0) ∈R2 and desired final state (sf, vf) ∈R2, the minimum-time double integrator optimal control problem is min u J = T (2) s.t. ̈s = u, |u| ≤umax s(0) = s0, v(0) = v0 s(T) = sf, v(T) = vf, where the final time T is free. From Pontryagin's minimum principle, the optimal control solution is known to have a bang-bang control profile. The switching times can be efficiently computed by solving a quadratic equation. It is required that each coordinate axis trajectory all have the same execution time, so the limiting minimum-time horizon is T ∗= max{Tx, Ty, Tz}. The limiting minimum-time horizon T ∗, is then applied as a known variable for the non-limiting Fig. 2: System architecture describing the three planning stages. Stage 1: A sparse geometric path is found via an A* search through the voxelized environment. Stage 2: A velocity state is then introduced at each waypoint and dynamic programming is used to recursively solve for the cost-to-go at each node. Stage 3: A full motion primitive search informed by the previous stages is performed, and checks for collisions are completed to yield the final trajectory. axes. This allows one to then solve a quadratic equation for a new bound ̄u ≤umax on the control input. Linear Quadratic Minimum-Time p-th Order Integrator: Smooth trajectories can be generated by solving the linear quadratic minimum-time (LQMT) optimal control problem, min T, u J = ρ T + Z T 0 u2 dt (3) s.t. s(p) = u s(0) = s0, v(0) = v0, . . . , s(p-1)(0) = s(p-1) 0 s(T) = sf, v(T) = vf, s(k-1)(T) free for 3 ≤k ≤p where ρ > 1 penalizes the final time. The final time T and all terminal states except position and velocity are free. The optimal trajectory is a polynomial in time so the cost functional can be expressed analytically in terms of T and the known boundary conditions. The final time can be found efficiently using a root-finding algorithm such as QR algorithm [48]. State constraints are omitted from (3) as it is more efficient to prune many candidate trajectories once the final time is known, as discussed in Section IV-D. IV. METHODOLOGY STITCHER, detailed in Algorithm 1, generates a fullstate trajectory by stitching collision-free, dynamically feasible trajectory segments together through graph search. At its core, STITCHER searches over closed-form solutions, i.e., motion primitives, to optimal control problems of the form discussed above. These solutions serve as a basis set for the solution space to (1). To achieve real-time performance, STITCHER utilizes a three stage planning process (see Fig. 2). In Stage 1 (left), A* algorithm is used to produce a sparse geometric path, i.e., waypoints, in the free space of the environment (line 3); this is standard in many planning frameworks. In Stage 2 (middle), nodes representing sampled velocities at the waypoints are formed into a velocity graph where dynamic programming is used to compute the minimum time path from Algorithm 1: STITCHER Trajectory Planner 1 input: P ←point cloud, ns ←start, ng ←goal; 2 output: s∗(t); // extract waypoints and path features 3 w, q, H ←getGeometricPath(P, ns, ng); 4 G ←buildVelocityGraph(w, q, H); // get heuristic from recursive cost-to-go 5 h(n) ←dynamicProgramming(G, ns, ng); 6 Gmp ←buildFullStateGraph(G); 7 function planPath(P, Gmp, ns, ng): 8 ncurr = ns; 9 while ncurr ̸= ng do // get node with lowest cost g(n)+h(n) 10 ncurr = OPEN.pop(); 11 CLOSED.insert(ncurr); 12 if ncurr = ng then 13 break; 14 end 15 Encurr ←getSuccessors(ncurr, Gmp); 16 for e in Encurr do // collision and state constraint check 17 pruneMotionPrimitive(e, P); 18 OPEN.insert(φ(n, e)); 19 end 20 end 21 s∗(t) ←getFinalMotionPrimitives(ncurr, CLOSED); each node to the desired final state using a control-constrained double integrator model (lines 4-5). This step is critical for constructing an admissible heuristic to guide the full motion primitive search, and is one of the key innovations that enables real-time performance. Note that the optimal "path" in velocity space is never used; computing the cost-to-go is the primary objective as it serves as an admissible heuristic for motion primitive search as shown in Section V-B. In Stage 3 (right), an A* search is performed using the heuristic from Stage 2 over motion primitives of a p-th order integrator where p ≥2 . A greedy pre-processing step is used to construct a compact motion primitive graph (line 6), ensuring the search remains real-time. At this stage, position and all higher-order derivatives are considered, yielding a full state trajectory that can be tracked by the system (lines 7-21). The remainder of this section expands upon each component of STITCHER. A. Stage 1: Forward Geometric Path Search STITCHER requires a sequence of waypoints that essentially guides the motion primitive search by limiting the size of the search space. This can be done by generating a collisionfree geometric path (see Fig. 2 left) through the environment with A* search or any other discrete graph search algorithm where the environment is represented as a 3D voxel grid where each grid cell contains occupancy information. Let the collision-free, geometric path generated by a discrete graph search algorithm be composed of points O = {o1, o2, ..., oH} where oi ∈R3. The set of points O is further pruned to create a sparse set of waypoints W = {w1, w2, ..., wN} where N ≤H and wi ∈R3. Sparsification is done by finding the minimal set of points in O that can be connected with collision-free line segments. The geometric path search is used in line 3 of Algorithm 1. B. State 2: Backward Velocity Search The ordered waypoint set W found in Stage 1 only provides a collision-free geometric path through the environment. In other words, the velocity, acceleration, and higher-order states necessary for tracking control are not specified. We propose creating a velocity graph (see Fig. 2 middle) where each node in the graph is defined by a position and velocity. The positions are restricted to waypoint locations and M velocities are sampled at each waypoint. More explicitly, for each waypoint wi ∈W, we sample a set of velocities V = {v1, ..., vM}, where V is composed of candidate velocity magnitudes Vm and directions Vd. The choice of Vm and Vd can impact the STITCHER's performance in terms of path optimality and computational complexity; this point will be discussed in more detail in Section VI. With the ordered waypoint W and sampled velocity V sets, we create a velocity graph G = (N, E), where node n ∈N is a given position and sampled velocity, i.e., n = (wi, vj) with wi ∈W and vj ∈V, and edge e ∈E is the double integrator control-constrained minimum-time motion primitive r(t) from (2) that connects neighboring nodes. Recall that the solution to (2) is fully determined by having an initial and final position and velocity pair which is precisely how each node in N is defined. At this stage, collision and state constraints are not enforced to prevent candidate trajectories from being eliminated prematurely. We recursively compute and store the ranked list of costto-go's Vd : N × E →R+ for each node n ∈N and all connecting edges e ∈En of n where Vd(n, e) = l(n, e) + V ∗ d φ(n, e) ∀e ∈En, (4) with the optimal cost-to-go V ∗ d (n) = mine∈En Vd(n, e), the cost of taking edge e from node n being l(n, e), and the node reached by taking edge e being φ(n, e). The cost of taking an edge is given by l(n, e) = T ∗ d (n, e), where T ∗ d (n, e) is the minimum-time of trajectory r(t) connecting the states of node n to the states of φ(n, e). Minimizing (4) is the well-known Bellman equation, which is guaranteed to return the optimal Fig. 3: The achievable mass-normalized thrust (nonconvex) of a aerial VTOL vehicle with limits on minimum thrust fmin, maximum thrust fmax, and maximum tilt θmax. cost-to-go. In Section V-B, we prove V ∗ d (n) for each node in a graph G is an admissible heuristic for an A* search over a broad class of motion primitives. Building and searching the velocity graph are shown in lines 4-5 of Algorithm 1. C. Stage 3: Forward Motion Primitive Search The cost-to-go's computed in Stage 2 for the sampled velocities at each waypoint serve as an admissible heuristic (formally defined later in Definition 1) that guides an efficient A* search over motion primitives. The motion primitives can be generated using any chain of integrators model of order at least two so long as i) the initial and final position and velocities match those used to construct the velocity graph G and ii) the allowable acceleration is maximally bounded by umax given in (2). It is important to note that the bound on allowable acceleration can be easily satisfied with user defined umax or simply by applying the box constraint ∥a∥∞≤umax. The motion primitive search graph is denoted as Gmp = (Nmp, Emp) where Nmp is the set of nodes, each corresponding to a state vector, and Emp is the set of edges, each corresponding to a motion primitive that connecting neighboring nodes. A* search is used to meet real-time constraints where the search minimizes the cost f(n) = g(n) + h(n) where n ∈Nmp is the current node, g : Nmp →R+ is the cost from the start node ns to node n, and h : Nmp →R+ is the estimated cost from the current node n to the goal node ng. In the context of optimal control, g is the cost accrued, i.e., the cost functional J, for a path from ns to n whereas h is the estimated costto-go, i.e., the estimated value function V ∗, from n to ng. In this stage, collision and state constraints are checked for each candidate motion primitive to ensure safety. The methodology for both is discussed in Section IV-D. Each step of the motion primitive A* search is shown in lines 6-21 of Algorithm 1. D. Pruning Infeasible & In-Collision Motion Primitives STITCHER guarantees safety by pruning motion primitives from the final search that violate constraints or are in collision. For state and actuator constraints, many optimizationbased planning approaches approximate the true physical constraints of the system with simple convex constraints, e.g., ∥v∥∞≤vmax, ∥a∥∞≤amax, etc., to reduce computational complexity. When polynomials are used to represent the optimal trajectory, imposing a convex hull constraint on the polynomial is one method for enforcing such state constraints Fig. 4: Removing redundant collision checks. (a): Motion primitive r0(t) checks for collisions using [13]. (b): Sampled points of r1(t) are checked to lie within obstacle-free regions derived from r0(t) calculations. [9], [17]. However, many of these approximations are made only to simplify the resulting optimization problem and do not accurately reflect the actual physical constraint, which can lead to conservatism. STITCHER has the freedom to use a variety of methods to enforce state and actuator constraints. For the examples shown in this work, we uniformly sample candidate trajectories in time to check for constraint violations as it was found to be effective and efficient. Sampling avoids mistakenly eliminating safe trajectories, and the observed computation time under the current discretization was better than using convex hulls. Critically, sampling allows for the inclusion of more complex constraints, such as those that couple multiple axes. Examples are Thrust Magnitude: 0 2. (7) Proof. Using Fig. 2 (middle), the start and goal nodes contribute 2 nodes to the graph G. For intermediate waypoints, given M sampled velocities, there are M nodes per waypoint. As a result, |N| = (N -2)M +2 which is (6). For each edge, we consider the transition to successive waypoints. Ignoring the trivial N = 2 case where |E| = 1, there are M connections between the start node and next waypoint, which also has M nodes. The same applies for connecting waypoint wN-1 to the goal node, resulting in a total of 2M edges. For all other intermediate waypoint pairs, M nodes connect to M nodes at the next waypoint so there are M 2 edges. The total number of edges is then (7). Corollary 1. The size of the motion primitive graph Gmp using Linear Quadratic Minimum Time (LQMT) motion primitives with free terminal acceleration for a triple integrator is identical to the velocity graph G. Proof. The proof is immediate since the terminal acceleration is free so N and M are identical for both graphs. Remark 1. Corollary 1 can be generalized to any motion primitive search graph where the primitives are solutions to an optimal control problem with the dynamics being a chain of integrators and all terminal state derivatives of second order or higher are free. Note that this assumes the greedy graph pre-processing step still yields adequate performance. B. Admissible Heuristic for Motion Primitive Search It is well known that heuristics can be used to expedite searching through a graph by incentivizing the search to prioritize exploring promising nodes. For example, in A* search, the next node explored is selected based on minimizing the cost f(n) = g(n) + h(n), where g is the stage cost to get from the start node ns to node n, and h is a heuristic estimate of the remaining cost to reach the goal node ng. A* search is guaranteed to find an optimal solution so long as the heuristic function h is admissible (see Definition 1) [36]. Below, we prove the cost-to-go V ∗for each node in the velocity graph G calculated in Stage 2, i.e., the minimum-time to goal for a double integrator, is an admissible heuristic for an A* search over motion primitives of any higher-order chain of integrators. Definition 1 ([36]). A function h : N →R is an admissible heuristic if for all n ∈N then h(n) ≤h∗(n), where h∗is the optimal cost from n to the goal node ng. Proposition 2. Consider the optimal control problem min T, u J = ρ T + Z T 0 q(r, v, . . . , u) dt (8) s.t. r(p) = u, c(a) ≤0 r(0) = r0, v(0) = v0, . . . , r(p-1)(0) = r(p-1) 0 r(T) = rf, v(T) = vf, r(k-1)(T) free for 3 ≤k ≤p where q is a positive definite function, the system is at least second order (p ≥2), and the position and velocities boundary conditions are identical to those of (2), with all other boundary constraints free to specify. If umax in (2) is the maximum possible acceleration achievable in a given axis imposed by c(a) ≤0, then the optimal cost-to-go V ∗from the initial conditions for (8) satisfies V ∗≥ρ T ∗ d where T ∗ d is the optimal final time for (2). Proof. First, consider the case when p = 2. For a given axis, if umax is chosen so that it exceeds the allowable acceleration imposed by c(a) ≤0, e.g., ux,max ≥maxax c(a) (see Fig. 3), then the optimal final time T ∗for (8) will always be greater than that of (2) even when q = 0. Specifically, when q = 0, one can show the optimal final time for (2) increases as umax decreases. Moreover, T ∗ d for (2) is guaranteed to exist and be unique [50]. Hence, by appropriate selection of umax, we can ensure T ∗≥T ∗ d always, where equality holds when q = 0 and c(a) is a box constraint. If q ̸= 0, then it immediately follows that T ∗> T ∗ d because q is positive definite by construction. Now consider the case when p > 2. We can deduce V ∗> ρ T ∗ d by contradiction. Specifically, assume T ∗= T ∗ d for p > 2. This would require a to be discontinuous in order to match the bang-bang acceleration profile of (2). However, (8) is a continuous-time linear system that will not exhibit discrete behaviors, e.g., jumps, so it is mathematically impossible to generate an optimal control sequence where the acceleration profiles for Eqs. (2) and (8) will be identical. It can then be concluded V ∗> ρ T ∗ d for p > 2. Therefore, V ∗≥ρ T ∗ d for p ≥2, as desired. Remark 2. Proposition 2 also holds when inequality state or actuator constraints in (8) are present, and when the terminal desired states are specified rather than free. The main result of this section can now be stated. Theorem 1. The optimal cost-to-go for the minimum-time input-constrained double integrator optimal control problem (2) is an admissible heuristic for motion primitive search Fig. 6: Simulation test environments. (a): Willow Garage environment. (b): Perlin Noise environment. where the primitives are solutions to the optimal control problem of the form (8). Proof. Let G = (N, E) be a graph with nodes being sampled velocities at waypoints and edges being the time-optimal trajectories using an input-constrained double integrator. Further, let Gmp = (Nmp, Emp) be a graph with nodes being sampled velocities, accelerations, etc. at waypoints and edges being trajectories that are solutions to (8). Using the Bellman equation, the optimal cost-to-go V ∗ mp(n) for any n ∈Nmp can be computed recursively. Using Proposition 2, V ∗ mp(n) ≥ V ∗ d (n′) by induction where V ∗ d is the optimal cost-to-go for the minimum-time input-constrained double integrator with n′ ∈N. Recognizing N ⊆Nmp, V ∗ d (n′) can be rewritten as V ∗ d (n). Setting h∗(n) = V ∗ mp(n) and h(n) = V ∗ d (n), it can be concluded h(n) ≤h∗(n). Therefore, by Definition 1, the optimal cost-to-go computed for G is an admissible heuristic for the motion primitive search over Gmp. The importance of Theorem 1 follows from the well-known result that searching a graph with an admissible heuristic is guaranteed to return the optimal path through the graph [36], and can significantly improve search efficiency because not every node in the graph has to be explored. The effectiveness of the proposed heuristic both in terms of path quality and search times is analyzed in Section VI. VI. SIMULATION RESULTS This section contains an analysis of STITCHER. First, we conduct a parameter sensitivity study to determine a suitable sampled velocity set (direction and speed) using a modified version of Dijkstra's algorithm that has access to a dense set of velocities and no constraints on computation time. Second, STITCHER is compared to a non-greedy algorithm variant to characterize the impact of our greedy graph pre-processing step on solution cost and computation. Third, we investigate the effectiveness of the heuristic proposed in Section V-B in reducing the number of edges generated by STITCHER. Fourth, a study of STITCHER constraining individual motor forces is presented to demonstrate the flexibility of our method in adhering to complex actuator constraints. Fifth, we characterize the average computation time of the different components that make up STITCHER. Lastly, STITCHER is compared to two optimization-based modern planners [5], [9] capable of running in real-time. Simulation experiments were run in a Perlin Noise environment and the Willow Garage environment, both with a volume Fig. 7: Velocity directions sampled at waypoint wi, where ˆai is normal to the separating hyperplane Hi and ˆqi is the heading toward wi+1. TABLE I: Frequency of Velocity Directions in Final Trajectory. Zenith Angle 70◦ 80◦ 90◦ 100◦ 110◦ Frequency 0% 13% 79% 8% 0% Azimuth Angle -50◦ -20◦ -10◦ 0◦ 10◦ 20◦ Frequency 4% 4% 17% 42% 29% 4% of approximately 50 × 50 × 5 m (see Fig. 6). Geometric paths with N = 4, 6, 8 waypoints were found for different start and end locations in each environment. For all experiments, we imposed the following constraints assuming agile drone flight and reasonable thrust limits: fmin = 0.85 m/s2, fmax = 18.75 m/s2, θmax = 60◦, ωmax = 6 rad/s, vmax = 10 m/s, and a time penalty ρ = 1000. All reported times are from tests conducted on an 11th generation Intel i7 CPU. A. Parameter Sensitivity Analysis: Sampled Velocity Set STITCHER requires a discrete velocity set V composed of a set of magnitudes and directions. As shown in Section IV-B, the size of the velocity graph scales quadratically with the number of sampled velocities, so it is desirable to choose V to be small without degrading path quality. Hence, this section focuses on constructing a velocity sample set V that optimizes the trade-off between computation time and path quality as measured by execution time. We conducted two studies to determine i) the predominate sampled velocity directions used by an offline version of Dijkstra's algorithm and ii) the tradeoff between path cost and the number of sampled speeds. In the following comparison, Dijkstra's algorithm was modified to search over a dense set of velocities with 3611 terminal velocities sampled per waypoint; the method is referred to as Dense Dijkstra. Dijkstra's algorithm is a complete and optimal search algorithm with respect to the set of actions [36], making it a suitable benchmark. 1) Sampled Direction: Dense Dijkstra was used to statistically identify velocity directions set Vd commonly employed in a variety of test cases. Using Fig. 7, we define angles with respect to the hyperplane H with normal vector ˆa at the plane of symmetry between path segments connecting two waypoints. The search was given velocity direction zenith and azimuth angles in the range [0◦, 180◦] and [-90◦, 90◦] sampled in 10◦increments. Table I shows the frequency of velocity directions chosen by Dense Dijkstra across all motion primitives for six different path length trials in the Perlin Noise and Willow Garage environment. The velocities chosen by Dense Dijkstra align with the normal vector ˆa of the hyperplane 80% of the time. Fig. 8: Analysis of speed discretization on execution and planning time. As the discretization of our method is increased, we converge to the Dense Dijkstra solution. We use Vm = 5 for our experiments as it achieves significant computational advantages while retaining suitable performance. Fig. 9: Speed profiles of STITCHER using varied speed discretizations. As the number of speed samples increases, STITCHER converges to the optimal solution approximated by Dense Dijkstra. From these results, sampling at the center and boundaries of a 20◦cone centered around a given ˆa will yield a suitable set of velocity directions. 2) Sampled Speed: Using the velocity direction set Vd identified in the previous test, the performance of Dense Dijkstra was compared to STITCHER with different sampled speed sets Vm. Each set Vm consists of k discrete speeds sampled from the interval [0, vmax], such that |Vm| = k. In order to ensure a fair comparison, these sets must be subsets of those used by Dense Dijkstra. In other words, we ensure k ≤K, where K denotes the number of speeds sampled in Dense Dijkstra, and our sampled sets are as evenly distributed as possible within the interval. Figure 8 shows the relative execution time (left) and planning times (right) for different sizes of Vm compared to Dense Dijkstra. As expected, the execution time increases as the sampled speed set becomes sparser because the planner has fewer speed options available. However, the observed increase in execution time is at most 8% even with the sparsest speed set tested (|Vm| = 5). Critically, the minimal increase in execution time is accompanied by a significant reduction in planning time: a speed-up of four orders of magnitude with the sparsest sampled speed set tested. Although a sample speed set size of |Vm| = 11 yields nearly identical execution times to the dense set while offering three orders of magnitude improvement in planning time, adequate performance is achieved when |Vm| = 5. Hence, we use the set Vm = {0, 0.25 vmax, 0.5 vmax, 0.75 vmax, vmax} for the remainder of our analysis. A representative speed profile for different sized speed sets is shown in Fig. 9. Fig. 10: Evaluation of the effect of a greedy pre-processing step on solution quality and computation speed. Results are from a monte carlo simulation of 50 random realizations for trajectories with between 2 and 10 waypoints. Fig. 11: The final speed profile given by a motion primitive search with (a) an exhaustive exponentially growing graph and (b) STITCHER. B. Greediness Benchmarking STITCHER employs a greedy graph pre-processing step in order to generate a compact final search graph which enables real-time trajectory computation. It is important to note that this greedy pre-processing step results in a graph that does not exhaustively account for all possible acceleration states at a given waypoint (see Fig. 5). Hence, we evaluated the extent to which this greedy pre-processing step leads to suboptimal solutions by performing a Monte Carlo simulation comparing the path cost of STITCHER to that of an exhaustive exponentially-growing graph. The experiment includes 50 realizations of randomly generated initial and final positions, with the number of waypoints ranging between 2 and 10. Figure 10 shows the relative solution cost (left) and planning times (right) resulting from searching over the exponentially-growing graph and running STITCHER. On average, STITCHER achieved a 0.5% difference in solution cost, with the maximum difference being 7%. Additionally, STITCHER achieves approximately 66% speed-up in computation time as a result of the reduced graph size. Excluding solution cases with 2 or 3 waypoints where the exponential growth of the graph including terminal acceleration states cannot be observed, STITCHER achieves an average of 82% speed-up. Figure 11 shows the speed profile of a case in which STITCHER achieved a solution cost that was 2% different from that of the exponential graph. Qualitatively, the trajectories are very similar with only a minor deviation at the end of the trajectory. Therefore, the greedy graph preprocessing step results in a negligible increase in solution cost, but a substantial improvement in computation time. Fig. 12: Final trajectories (red) through a six waypoint path in (a): Perlin Noise and (b): Willow Garage environment. The trajectory options (white) which inform the heuristic are more likely to be in collision in the Willow Garage environment due to tight corridors. C. Heuristic Benchmarking The quality of the heuristic used to guide the motion primitive search phase of STITCHER can be quantified by comparing the number of edges, i.e., motion primitives, generated by STITCHER with that of Dijkstra's algorithm. The velocity magnitude and direction sets were kept constant across both planners with |Vm| = 11 and |Vd| = 3. The number of edges created is a better evaluation metric than the nodes explored because the main source of computation time comes from generating motion primitives and checking them for constraint violations. The effectiveness of the search heuristic depends on two main factors: how accurately it approximates the true cost-to-go, and whether the heuristic is admissible under the specified constraints. The edge cost used in the final search affects the tightness of the heuristic approximation, while the acceleration constraint imposed to prune motion primitives can influence admissibility. Therefore, we assess the effectiveness of STITCHER's heuristic with various graph edge costs and acceleration constraints to provide a general understanding of how search performance is impacted. 1) Varying Edge Cost: We evaluate the quality of the heuristic with two different edge costs: (1) equivalent to the LQMT performance index and (2) execution time. Table II shows the percent reduction in the number of edges created for STITCHER compared to Dijkstra's algorithm. Using the LQMT edge cost in the final search, STITCHER creates 7% fewer edges on average compared to Dijkstra's algorithm in the Perlin Noise environment and 5% in the Willow Garage environment. Greater edge reduction can be achieved by defining the edge cost to be the trajectory execution time. Recall that the search heuristic is the minimum time required to reach the goal using a double integrator model. Therefore, by using an edge cost soley defined by trajectory duration, the heuristic now more closely approximates the cost-to-go. This tighter approximation is reflected in the results of Table II, where STITCHER generates an average of 20% fewer edges in Perlin Noise and 13% fewer in Willow Garage. The reduced effectiveness of the heuristic in Willow Garage was attributed to having more tight corridors, so more motion primitives were in collision (see Fig. 12). Therefore, while environment dependent, using a search heuristic that closely approximates the remaining cost to the goal greatly reduces search effort. 2) Varying Acceleration Constraint: Varying acceleration constraints may also lead to differing performance of the TABLE II: Heuristic Evaluation with Varied Edge Costs and Acceleration Constraints via Percent Reduction in Generated Edges Compared to Dijkstra's. Edge Cost Acceleration Constraint Map Perlin Noise Willow Garage N = 4 N = 6 N = 8 N = 4 N = 6 N = 8 % Red. % Red. % Red. % Red. % Red. % Red. LQMT Admissible Truncated Cone 8 8 7 0 3 13 Time Admissible Truncated Cone 22 21 15 0.3 13 26 Time Admissible Box 27 34 28 2 20 24 Time Inadmissible Truncated Cone 45 44 22 9 38 39 Fig. 13: Different acceleration constraints used for heuristic study: (a) admissible truncated cone, (b) admissible box, and (c) inadmissible truncated cone. Green dashed outlines are Stage 2 constraints used for heuristic creation while solid black outlines denote constraints imposed for the final trajectory. heuristic as this constraint directly affects admissibility. We tested three different acceleration constraints: (1) admissible truncated cone, (2) admissible box, and (3) inadmissible truncated cone (See Fig. 13). The truncated cone constraint comes from the resulting achievable mass-normalized thrust volume with constraints on the magnitude and tilt. This constraint c(a) is admissible when it is maximally bounded by the maximum control input umax imposed during heuristic generation in Stage 2 (see Fig. 13a), while the inadmissible variant applies a smaller umax, e.g., ux,max ≤maxax c(a) (see Fig. 13c). An admissible box constraint is an acceleration constraint that perfectly matches that of Stage 2 (see Fig. 13b). Table II shows that improved performance can be achieved by applying an admissible box constraint on acceleration compared to the admissible truncated cone. In this case, STITCHER achieves approximately a 30% reduction in edges in the Perlin Noise environment and a 15% reduction in the Willow Garage compared to Dijkstra's algorithm. Because the box constraint on acceleration is equivalent to that used in the heuristic generation, the STITCHER algorithm guides the search away from branches that are likely to exceed acceleration constraints. Further improvement in search speed may be observed while using an inadmissible truncated cone constraint, and is reflected in Table II, where this algorithm variant achieves the highest percent reduction among all STITCHER variations. By allowing the final motion primitive search to have a larger allowance in achievable acceleration, the true edge cost may be smaller than that estimated by the heuristic. The use of inadmissible heuristics in graph search foregos desirable optimality guarantees, but can more rapidly motivate the search toward the goal as a larger weighting is placed on the heuristic cost. 3) Summary: This study showed that the proposed heuristic is effective, but its performance depends on the environment, graph edge cost, and the chosen acceleration constraint. For the remaining experiments, we used an LQMT edge cost to ensure fair comparisons to state-of-the-art algorithms, and apply an admissible truncated cone acceleration constraint to retain graph search optimality guarantees and accurately reflect the true physical limits of quadrotor systems. D. Constrained Motor Forces We present a case study demonstrating STITCHER's ability to generate trajectories that satisfy individual thruster limits for a quadrotor, which also extends to lander vehicles. We assume the vehicle dynamics takes the form m ̈r = R T -mg J ̇ω = τ -ω × Jω, where r is the vehicle's position vector, R is the rotation matrix that represents the vehicle orientation with respect to an inertial frame, T = T ˆez is the thrust vector directed along the body z-axis, g is the gravity vector, ω is the angular velocity vector, J is the inertia tensor, and τ is the body torque vector. For the case study, we assume a quadrotor with a mass m = 0.5 kg, moment of inertia J = diag(0.01, 0.01, 0.01) kg m2, arm length l = 12.5 cm, motor drag coefficient c = 0.2 m, minimum single-motor thrust Fmin = 0.15 N, and maximum single-motor thrust Fmax = 3 N. Each motor thrust Fi is constrained by 0 < Fmin ≤Fi ≤Fmax, ∀i ∈{1, 2, 3, 4}. At any given time, individual motor thrusts Fi are found by solving a linear system that depends on motor placement and physical vehicle properties, i.e., solving   T τx τy τz  = M   F1 F2 F3 F4  , for some invertible matrix M ∈R4×4 with T denoting the total thrust T = m∥f∥2. The torque τ = [τx, τy, τz]⊤along the trajectory is τ = J ̇ω + ω × Jω and can be found via differential flatness, with components of ω and ̇ω given by   ωy -ωx 0  = R⊤ ̇ˆ f,   ̇ωy - ̇ωx 0  = R⊤ ̈ˆ f -ω × R⊤ ̇ˆ f, where ˆ f is the normalized thrust vector. We conducted two experiments in the Willow Garage environment, where the tight corridors may make individual motor constraint satisfaction more difficult. As shown in Fig. 14, the trajectories achieve high speeds while strictly obeying the permotor limits on thrust. These results showcase the versatility (a) Individual Motor Constraint Case Study 1. (b) Individual Motor Constraint Case Study 2. Fig. 14: Case studies in the Willow Garage environment of trajectory plans constraining individual motors by the maximum thrust limits given by manufacturer specifications. Left: Trajectory colored by speed. Right: The corresponding thrust profile of individual motors depicting constraint satisfaction. of our framework in adhering to highly nonconvex constraints that are directly limited by hardware. E. Planning Time Composition Analysis Figure 15 shows the computation time of the different components of STITCHER averaged over six tested trials. The average time to perform the velocity graph search is 1.8 ms, compared to 2.2 ms for the motion primitive search. Although both searches have the same graph size, it is important to note that the motion primitives from (3) are more time-consuming to generate than the minimum time trajectories used in the velocity graph. Therefore, the similar computation times arise from the search heuristic reducing the number of edges generated. The low computation time of the velocity search further indicates its effectiveness in computing an informative admissible heuristic for the motion primitive search. Finally, constraint-checking with uniform samples at every 0.1 s averaged only 0.3 ms, showing the efficacy of the relatively simple approach. F. Comparison with State-of-the-Art We compared STITCHER with two state-of-the-art algorithms: GCOPTER [5] and FASTER [9]. GCOPTER performs an online optimization by incorporating state constraints into Fig. 15: The average contribution of different path planning components. the cost functional and running a quasi-newton method, while FASTER solves a mixed integer quadratic program online. Both algorithms rely on a sparse geometric path to form safe flight corridors, but do not enforce final trajectories to pass through waypoints. We evaluate each planner by time (planning time versus execution time) and failure (constraint violation or incomplete/no path found). For the Perlin Noise environment, the path lengths were 12.5 m, 30 m, and 55 m while the path lengths for Willow Garage were 20 m, 25 m, and 30 m with N = 4, 6, 8 waypoints for both environments. 1) Time Analysis: Table III compares the planning times and trajectory execution times of each planner given the six different geometric paths. STITCHER's planning times are TABLE III: State-of-the-Art Comparison Time Analysis. Map N Planning time (ms) Execution time (s) [9] [5] Ours [9] [5] Ours Perlin Noise 4 109 21.1 3.82 2.98 3.51 3.67 6 207 59.4 10.4 4.42 4.42 5.75 8 1020 116 16.9 7.62 7.34 9.62 Willow Garage 4 240 38.2 8.27 4.40 4.37 4.53 6 5240 53.2 20.9 8.25 5.97 6.30 8 10400 84.4 27.1 10.5 FAILED 7.95 TABLE IV: Effect of Different Waypoints and Sampling in Perlin Noise. N Planning time (ms) Execution time (s) STITCHER STITCHER [5] S W SW [5] S W SW 4 27.2 14.8 3.01 10.5 3.51 3.44 3.61 3.56 6 72.0 44.7 13.8 42.4 4.42 5.43 5.38 4.90 8 130 64.1 24.6 86.9 7.34 9.13 8.99 9.26 Abbreviations: S - speed sample set expanded to |Vm| = 9; W - optimized waypoints from [5] used; SW - combination of S and W. TABLE V: State-of-the-Art Comparison Failure Analysis. Map No Path Found (%) State Violation (%) Collisions (%) [9] [5] Ours [9] [5] Ours [9] [5] Ours Perlin Noise 0 0 0 6 0 0 4 4 0 Willow Garage 18 2 2 8 0 0 0 24 0 faster than those measured for FASTER and GCOPTER for every test, and are up to 7x faster than GCOPTER and 400x faster than FASTER. It is important to note the computation times for the motion primitive search planner [16] were omitted from Table III because planning times exceeded several seconds. In some cases GCOPTER and FASTER achieved lower execution times. This can be attributed to their ability to treat waypoints as soft constraints, i.e., the trajectory is only required to pass nearby a waypoint rather than through it, as well as the chosen resolution of state samples in STITCHER. The difference is most noticeable in the Perlin Noise environment where more free space is available for waypoint adjustment. Table IV compares the planning and execution times of GCOPTER [5] to different algorithm variants of STITCHER evaluating the effect of sampling and waypoint selection in the Perlin Noise environment. Case S increases the sampled speed set to |Vm| = 9, Case W utilizes the optimized waypoints from GCOPTER, and Case SW is a combination of both strategies. All the case variants reduce STITCHER's baseline execution time, motivating future work in incorporating waypoint flexibility. 2) Failure Analysis: A Monte Carlo simulation composed of 50 realizations was conducted to evaluate the different modes of failure experienced by each planner. Table V compares the rate at which each planner does not find a path, generates a trajectory violating state constraints or generates a trajectory in collision. The "No Path Found" metric includes a numerical solver not returning a solution, or if the solution does not reach the goal. Using either FASTER or GCOPTER, this can occur due to numerical instability or when a feasible solution is not within the calculated set of safe corridors. Conversely, in STITCHER's graph search framework, a graph disconnection occurs when a feasible Fig. 16: Example of mass-normalized thrust profile strictly satisfying constraints in Willow Garage environment. Fig. 17: Control and estimation architecture for custom quadrotor platform used in hardware experiments. The vehicle is equipped with an OrangePi 3.0 computer, Teensy 4.0 microcontroller, and an InvenSense ICM-20948 IMU. An external motion capture system provides pose measurements at 100Hz. solution is not within the discrete set of sampled states. Across all test cases, STITCHER's motion primitive graph disconnects only once, achieving the lowest rate of failure among the state-of-the-art planners. In the Willow Garage environment, where narrow corridors make collisions more likey, the number of failed solutions by FASTER and collisions by GCOPTER significantly increases. In contrast, STITCHER never violates constraints (state, control, or obstacles) because all constraints are strictly enforced. As an example, Fig. 16 is a representative mass-normalized thrust profile generated by STITCHER which remains within the valid limits. VII. HARDWARE RESULTS The custom quadrotor used for hardware experiments is shown in Figure 17. Onboard hardware includes an Orange Pi 3.0 computer, Teensy 4.0 microcontroller, and an InvenSense ICM-20948 IMU. The system was flown indoors in a 4m x 4m x 4m drone cage equipped with an OptiTrack motion capture system that accurately measures the pose of the vehicle. A. Control and Estimation Architecture The control and state estimation architecture used in the hardware experiments is shown in Fig. 17. We employed the standard cascaded geometric controller [51], [52] to track the trajectory generated by STITCHER with the desired acceleration and jerk as feedforward. The outer loop PID position tracking controller ran on the Orange Pi flight computer, and the inner loop quaternion-based attitude tracking controller from [53] ran on the Teensy microcontroller; the two processors communicate via serial. The outer loop control rate was 100Hz while the inner loop control rate was 500Hz. For estimation, position and orientation measurements from an Fig. 18: Motion trail of quadrotor executing trajectory from Experiment 1 in the virtual Random Forest 1 environment through 4x4x4 meter drone cage. TABLE VI: Trajectory Tracking Error. RMSE Experiment Random Forest 1 Random Forest 2 Obstacle Course Windowed Wall Pos. (cm) 9.86 13.3 10.9 8.59 Tilt (deg) 4.98 6.01 6.28 6.41 external motion capture system were sent wirelessly at 100Hz to the onboard Orange Pi computer, and fused with an IMU to estimate position, velocity, orientation, angular velocity, and IMU biases with a nonlinear geometric observer [54]. Estimates were generated by the observer at 100Hz and used by the outer loop position controller. B. Flight Experiments Indoor flight experiments were conducted to evaluate the dynamic feasibility of the trajectories planned with STITCHER. The trajectories were first generated with virtual obstacles and then flown using our custom quadrotor (see Fig. 18). In Fig. 19, planned trajectories generated by STITCHER are displayed alongside the actual trajectories flown by the quadrotor in four experiments. The environments are named: Random Forest 1 (Fig. 19a), Random Forest 2 (Fig. 19b), Obstacle Course (Fig. 19c), and Windowed Wall (Fig. 19d). The desired path lengths for each test in their respective environment were 9.6 m, 13.1 m, 8.7 m, and 8.5 m. Table VI reports the root mean square errors (RMSE) of position and tilt for each experiment. Across all experiments, the maximum RMSE is 6.41 degrees in tilt and 13.3 cm in position. Further, the average ratio between position RMSE to the total path length was only 1.07% over all the tests. These flight experiments show that STITCHER generates trajectories that satisfy constraints and can be tracked with low error using a standard cascaded geometric controller. Additionally, while STITCHER may use any p-th order integrator with p ≥2 to model system dynamics, the results show that using a triple integrator model is sufficient for quadrotor tracking control despite discontinuities in jerk at waypoints. Figure 20 compares the actual and desired profiles of the mass-normalized thrust (top), position (middle), and attitude (bottom) through Random Forest 1. The quadrotor remains within the physical limits dictated by the thrust magnitude and tilt constraints, and the motor commands executed by the drone closely match the desired mass-normalized thrust. The position and attitude error profiles further show that for the duration of the flight, the maximum deviations in position and tilt are less than 20 cm and 20 degrees, respectively. These results demonstrate that STITCHER generates trajectories that safely and accurately express complex system-level constraints. VIII. CONCLUSIONS In this work, we presented STITCHER, a motion primitive search planning algorithm that utilizes a novel three-stage planning architecture to design constrained trajectories in realtime over long distances. We proved the search graph is finite, and the proposed search heuristic is admissible, so STITCHER is guaranteed to i) have a priori bounded time and memory complexity and ii) generate optimal trajectories with respect to the sampled set of states. Real-time search speeds were achieved through our novel heuristic crafting technique, greedy graph pre-processing method, and non-conservative constraint and collision checking procedure. Our extensive simulation study showed the trade-off in terms of path quality and computation time for different sampled velocity sets, the effectiveness of the proposed heuristic with varied edge costs and state constraints, a case study imposing individual thruster limits, and the average computation times of the components that make up STITCHER. We also found that our greedy motion primitive graph pre-processing step has a negligible effect on solution cost compared to the observed computation speed up owing to the reduced graph size. Importantly, STITCHER was shown to consistently generate trajectories faster than two state-of-the-art optimization-based planners while never violating constraints. Hardware experiments further proved that our planner is effective in generating trajectories suitable for position and attitude tracking control while remaining within set physical limits. Future work includes developing a receding horizon implementation for navigating through unknown environments, the use of imitation learning to improve search efficiency, and learning motion primitives for more general optimal control problems. Acknowledgments The authors would like to thank lab members Grace Kwak, Ryu Adams, James Row, and Qiyuan Wu for implementation and hardware support. REFERENCES [1] D. Mellinger and V. Kumar, "Minimum snap trajectory generation and control for quadrotors," in IEEE International Conference on Robotics and Automation, 2011, pp. 2520-2525. [2] C. Richter, A. Bry, and N. Roy, "Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments," in International Symposium of Robotics Research, 2016, pp. 649-666. [3] H. Oleynikova, M. Burri, Z. Taylor, J. Nieto, R. Siegwart, and E. Galceran, "Continuous-time trajectory optimization for online uav replanning," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2016, pp. 5332-5339. [4] B. Zhou, J. Pan, F. Gao, and S. Shen, "Raptor: Robust and perceptionaware trajectory replanning for quadrotor fast flight," IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1992-2009, Dec. 2021. [5] Z. Wang, X. Zhou, C. Xu, and F. Gao, "Geometrically constrained trajectory optimization for multicopters," IEEE Transactions on Robotics, vol. 38, no. 5, pp. 3259-3278, May 2022. [6] Y. Ren, F. Zhu, W. Liu, Z. Wang, Y. Lin, F. Gao, and F. Zhang, "Bubble planner: Planning high-speed smooth quadrotor trajectories using receding corridors," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022, pp. 6332-6339. (a) Experiment 1: Flight through Random Forest 1. (b) Experiment 2: Flight through Random Forest 2. (c) Experiment 3: Flight through Obstacle Course. (d) Experiment 4: Flight through Windowed Wall. Fig. 19: Hardware experiments through four different virtual environments: (a) Random Forest 1, (b) Random Forest 2, (c) Obstacle Course, and (d) Windowed Wall. The desired trajectory planned by STITCHER is shown in red and the actual trajectory flown by the quadcopter is shown in green. Two views of the environment are provided in each subfigure. Left: Top-down view. Right: Perspective view. [7] R. Deits and R. Tedrake, "Computing large convex regions of obstaclefree space through semidefinite programming," in Algorithmic Foundations of Robotics. Springer, 2015, pp. 109-124. [8] --, "Efficient mixed-integer planning for uavs in cluttered environments," in IEEE International Conference on Robotics and Automation, 2015, pp. 42-49. [9] J. Tordesillas, B. T. Lopez, M. Everett, and J. P. How, "FASTER: Fast and safe trajectory planner for navigation in unknown environments," IEEE Transactions on Robotics, vol. 38, no. 2, pp. 922-938, Apr. 2022. [10] T. Marcucci, M. Petersen, D. von Wrangel, and R. Tedrake, "Motion planning around obstacles with convex optimization," Science Robotics, vol. 8, no. 84, Nov. 2023. [11] M. W. Mueller, M. Hehn, and R. D'Andrea, "A computationally efficient motion primitive for quadrocopter trajectory generation," IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1294-1310, Dec. 2015. [12] P. Florence, J. Carter, and R. Tedrake, "Integrated perception and control at high speed: Evaluating collision avoidance maneuvers without maps," in Algorithmic Foundations of Robotics. Springer, 2016, pp. 304-319. [13] B. T. Lopez and J. P. How, "Aggressive 3-d collision avoidance for highspeed navigation," in IEEE International Conference on Robotics and Automation, 2017, pp. 5759-5765. [14] M. Ryll, J. Ware, J. Carter, and N. Roy, "Efficient trajectory planning for high speed flight in unknown environments," in IEEE International Conference on Robotics and Automation, 2019, pp. 732-738. [15] S. Liu, N. Atanasov, K. Mohta, and V. Kumar, "Search-based motion planning for quadrotors using linear quadratic minimum time control," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 2872-2879. [16] S. Liu, K. Mohta, N. Atanasov, and V. Kumar, "Search-based motion planning for aggressive flight in SE(3)," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 2439-2446, July 2018. [17] B. Zhou, F. Gao, L. Wang, C. Liu, and S. Shen, "Robust and efficient quadrotor trajectory generation for fast autonomous flight," IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 3529-3536, Oct. 2019. [18] P. Foehn, D. Brescianini, E. Kaufmann, T. Cieslewski, M. Gehrig, M. Muglikar, and D. Scaramuzza, "Alphapilot: Autonomous drone racing," Autonomous Robots, vol. 46, no. 1, pp. 307-320, Oct. 2021. [19] J. Pearl, Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley Longman Publishing Co., Inc., 1984. [20] H. J. Levy and B. T. Lopez, "STITCHER: Real-time trajectory planning with motion primitive search," , 2024. [21] D. Mellinger, A. Kushleyev, and V. Kumar, "Mixed-integer quadratic program trajectory generation for heterogeneous quadrotor teams," in IEEE International Conference on Robotics and Automation, 2012, pp. 477-483. [22] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, "Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-d complex environments," IEEE Robotics and Automation Letters, vol. 2, no. 3, pp. 1688-1695, July 2017. [23] B. Landry, R. Deits, P. R. Florence, and R. Tedrake, "Aggressive quadrotor flight through cluttered environments using mixed integer programming," in IEEE International Conference on Robotics and Automation, 2016, pp. 1469-1475. [24] T. Marcucci, P. Nobel, R. Tedrake, and S. Boyd, "Fast path planning through large collections of safe boxes," IEEE Transactions on Robotics, vol. 40, July 2024. [25] M. Hehn and R. D'Andrea, "Quadrocopter trajectory generation and control," IFAC Proceedings Volumes, vol. 44, no. 1, pp. 1485-1491, Jan. 2011. [26] T. M. Howard, C. J. Green, A. Kelly, and D. Ferguson, "State space sampling of feasible motions for high-performance mobile robot navigation in complex environments," Journal of Field Robotics, vol. 25, no. 6-7, pp. 325-345, June 2008. [27] B. T. Lopez and J. P. How, "Aggressive collision avoidance with limited field-of-view sensing," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017, pp. 1358-1365. [28] M. Dharmadhikari, T. Dang, L. Solanka, J. Loje, H. Nguyen, N. Khedekar, and K. Alexis, "Motion primitives-based path planning for fast and agile exploration using aerial robots," in IEEE International Conference on Robotics and Automation, 2020, pp. 179-185. [29] J. Hou, X. Zhou, N. Pan, A. Li, Y. Guan, C. Xu, Z. Gan, and F. Gao, "Primitive-swarm: An ultra-lightweight and scalable planner for largescale aerial swarms," IEEE Transactions on Robotics, vol. 41, pp. 36293648, May 2025. [30] E. W. Dijkstra, "A note on two problems in connexion with graphs," Numerische Mathematik, vol. 1, 1959. [31] P. Hart, N. Nilsson, and B. Raphael, "A formal basis for the heuristic determination of minimum cost paths," IEEE Transactions on Systems Science and Cybernetics, vol. 4, no. 2, pp. 100-107, July 1968. [32] D. Harabor and A. Grastien, "Online graph pruning for pathfinding on grid maps," in National Conference on Artificial Intelligence, 2011, pp. 1114-1119. [33] D. Dolgov, S. Thrun, and M. Montemerlo, "Path planning for autonomous vehicles in unknown semi-structured environments," International Journal of Robotics Research, vol. 29, no. 5, pp. 485-501, Jan. 2010. [34] M. Pivtoraiko and A. Kelly, "Kinodynamic motion planning with state Fig. 20: Experiment 1: State profiles comparing the actual flown trajectory and STITCHER's planned trajectory. Top: The actual (blue) and desired (orange) mass-normalized thrust profiles remaining within permissible regions, outside of invalid regions (grey). Middle: Comparison of the actual and desired position in the three coordinate axes along with the total error (red). Bottom: The actual and desired roll φ, pitch θ, and yaw ψ as well as the tilt error (red) of the trajectory over the flight duration. lattice motion primitives," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 2172-2179. [35] O. Andersson, O. Ljungqvist, M. Tiger, D. Axehill, and F. Heintz, "Receding-horizon lattice-based motion planning with dynamic obstacle avoidance," in IEEE Conference on Decision and Control, 2018, pp. 4467-4474. [36] S. J. Russell and P. Norvig, Artificial intelligence: a modern approach. Pearson, 2016. [37] L. Jarin-Lipschitz, J. Paulos, R. Bjorkman, and V. Kumar, "Dispersionminimizing motion primitives for search-based motion planning," in IEEE International Conference on Robotics and Automation, 2021, pp. 12 625-12 631. [38] L. Palmieri, L. Bruns, M. Meurer, and K. O. Arras, "Dispertio: Optimal sampling for safe deterministic motion planning," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 362-368, Apr. 2020. [39] R. Penicka and D. Scaramuzza, "Minimum-time quadrotor waypoint flight in cluttered environments," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 5719-5726, Apr. 2022. [40] A. Romero, S. Sun, P. Foehn, and D. Scaramuzza, "Model predictive contouring control for time-optimal quadrotor flight," IEEE Transactions on Robotics, vol. 38, no. 6, pp. 3340-3356, Dec. 2022. [41] A. Romero, R. Penicka, and D. Scaramuzza, "Time-optimal online replanning for agile quadrotor flight," IEEE Robotics and Automation Letters, vol. 7, July 2022. [42] M. Krinner, A. Romero, L. Bauersfeld, M. Zeilinger, A. Carron, and D. Scaramuzza, "MPCC++: Model predictive contouring control for time-optimal flight with safety constraints," in Robotics: Science and Systems, 2024. [43] B. Paden, V. Varricchio, and E. Frazzoli, "Verification and synthesis of admissible heuristics for kinodynamic motion planning," IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 648-655, Apr. 2017. [44] S. Kim and B. An, "Learning heuristic a: Efficient graph search using neural network," in IEEE International Conference on Robotics and Automation, 2020, pp. 9542-9547. [45] J. Thayer, A. Dionne, and W. Ruml, "Learning inadmissible heuristics during search," in International Conference on Automated Planning and Scheduling, 2011, pp. 250-257. [46] M. Bhardwaj, S. Choudhury, and S. Scherer, "Learning heuristic search via imitation," in Conference on Robot Learning, 2017, pp. 271-280. [47] M. P ́andy, W. Qiu, G. Corso, P. Veliˇckovi ́c, Z. Ying, J. Leskovec, and P. Li`o, "Learning graph search heuristics," in Learning on Graphs Conference, 2022, pp. 10-1. [48] J. W. Demmel, Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 1997. [49] B. Acikmese and S. R. Ploen, "Convex programming approach to powered descent guidance for mars landing," Journal of Guidance, Control, and Dynamics, vol. 30, no. 5, pp. 1353-1366, Sept. 2007. [50] D. E. Kirk, Optimal control theory: an introduction. Courier Corporation, 2004. [51] E. Frazzoli, M. A. Dahleh, and E. Feron, "Trajectory tracking control design for autonomous helicopters using a backstepping algorithm," in IEEE American Control Conference, 2000, pp. 4102-4107. [52] T. Lee, M. Leok, and N. H. McClamroch, "Geometric tracking control of a quadrotor uav on se (3)," in IEEE Conference on Decision and Control, 2010, pp. 5420-5425. [53] B. T. Lopez and J.-J. Slotine, "Sliding on manifolds: Geometric attitude control with quaternions," in IEEE International Conference on Robotics and Automation, 2021, pp. 11 140-11 146. [54] B. T. Lopez, "A contracting hierarchical observer for pose-inertial fusion," arXiv preprint , 2023.
2510.14890
EM Approaches to Nonparametric Estimation for Mixture of Linear Regressions Andrew Welbaum Department of Statistics, George Mason University and Wanli Qiao Department of Statistics, George Mason University October 17, 2025 Abstract In a mixture of linear regression model, the regression coefficients are treated as random vectors that may follow either a continuous or discrete distribution. We propose two Expectation-Maximization (EM) algorithms to estimate this prior dis- tribution. The first algorithm solves a kernelized version of the nonparametric max- imum likelihood estimation (NPMLE). This method not only recovers continuous prior distributions but also accurately estimates the number of clusters when the prior is discrete. The second algorithm, designed to approximate the NPMLE, tar- gets prior distributions with a density. It also performs well for discrete priors when combined with a post-processing step. We study the convergence properties of both algorithms and demonstrate their effectiveness through simulations and applications to real datasets. Keywords: Mixture of linear regressions, EM algorithm, nonparametric maximum likeli- hood estimation, gradient ascent, mixture models, clustering 1 arXiv:2510.14890v1 [stat.ME] 16 Oct 2025 1 Introduction The mixture of linear regression model extends the multiple linear regression by allow- ing the coefficient vector to have a continuous or discrete prior distribution, and has a wide number of applications when the response and covariates can have individualized or clustered linear relation, including market segmentation (Wedel and Kamakura, 2000), medical studies (Schlattmann, 2009), educational research (Ding, 2006), and various indus- try and economic studies, such as housing construction (Quandt, 1972), wage prediction (Quandt and Ramsey, 1978), and climate-economic growth relationships (Yao and Song, 2015). The model we consider in this article is as follows: There are n independent obser- vations (x1,y1),⋯,(xn,yn) ∈Rd × R generated from the model yi = x⊺ i βi + σzi, (1) where β1,⋯,βn,z1,⋯,zn are independent of β1,⋯,βn iid∼G∗and z1,⋯,zn iid∼N(0,1), and σ > 0. Here, G∗is a probability distribution on Rd, which may be discrete or continuous. We will assume that σ is known, but G∗, the true distribution, is not known and needs to be estimated. In practice, an unknown σ can be estimated using cross-validation. The estimation of G∗can be further used to make statistical inference for individual βi through a plug-in approach. The posterior distribution of βi given (xi,yi) with respect to a prior distribution G (which is G∗if the prior is correctly identified) is P(βi ∈A ∣xi,yi,G) = ∫A ϕσ (yi −x⊺ i β)dG(β) ∫Rd ϕσ (yi −x⊺ i β)dG(β), (2) for any measurable set A ⊂Rd, where ϕ is the standard normal density, and ϕσ(⋅) = (1/σ)ϕ(⋅/σ). The posterior mean of βi given (xi,yi) is E(βi ∣xi,yi,G) = ∫Rd ϕσ (yi −x⊺ i β)βdG(β) ∫Rd ϕσ (yi −x⊺ i β)dG(β) . (3) We can then obtain a point estimate of βi if we replace G by an estimator of G∗in E(βi ∣xi,yi,G). 2 When G∗is discrete, an additional interesting task is to cluster (xi,yi)’s based on this estimation. Suppose G∗(β) = ∑K j=1 πjδβj(β), where K is unknown, πj represents the weight of the component j such that ∑K k=1 πk = 1 and πk > 0, and δβj is the Dirac delta with point mass at βj. The posterior distribution in (2) when G = G∗takes the form of K ∑ j=1 [ πjϕσ (yi −x⊺ i βj) ∑K k=1 πkϕσ (yi −x⊺ i βk)]δβj(β). (4) In an oracle setting, if we knew the true prior distribution G∗= G, we could use this to identify cluster membership of the points (xi,yi)’s by assigning (xi,yi) to the component j with the highest posterior probability, i.e., arg max j∈{1,...,K} πjϕσ (yi −x⊺ i βj) ∑K k=1 πkϕσ (yi −x⊺ i βk). (5) If the estimator of G∗is in the form of ∑ ˆ K j=1 ˆπjδˆβj, the above clustering rule is updated by replacing {(πk,βk) ∶k = 1,...,K} with {(ˆπk, ˆβk) ∶k = 1,..., ˆK}. A nonparametric approach to estimating G∗is nonparametric maximum likelihood es- timation (NPMLE), which does not assume a parametric form of G∗. The NPMLE for mixture models was first described by Kiefer and Wolfowitz (1956). Some recent NPMLE- related works include Jiang and Zhang (2009); Koenker and Mizera (2014); Dicker and Zhao (2016); Jagabathula et al. (2020); Polyanskiy and Wu (2020); Saha and Guntuboyina (2020); Deb et al. (2021); Gu and Koenker (2022); and Jiang and Guntuboyina (2025). Of note, Jiang and Zhang (2009) introduce a general maximum likelihood empirical Bayesian method for estimatng a vector of means in a mixture of Gaussians problem, assuming in- dependent and identically distributed errors. Their method aims to find the MLE of the distribution of the unknown parameters with a discrete set of support points using an EM algorithm. Koenker and Mizera (2014) introduce an interior point method that estimates the nonparametric MLE of the Gaussian mixture problem. Also, Jiang and Guntuboyina (2025) describe an NPMLE approach to the mixture of linear regression problem using a method called Conditional Gradient Method (CGM). 3 We now describe the NPMLE to estimate G∗. The log-likelihood function for model (1) with a prior distribution G given (x1,y1),⋯,(xn,yn) is G ↦L(G) ∶= n ∑ i=1 log f G xi(yi), (6) where f G xi(yi) = ∫ϕσ (yi −x⊺ i β)dG(β), i = 1,⋯,n. (7) We seek a maximizer of L over the set G of probability distributions G supported on X ⊂Rd. That is, the NPMLE problem can be stated as ˆG ∈arg max G∈G L(G). (8) If X is compact or satisfies a regularity condition met by Rd, (Jiang and Guntuboyina, 2025) establishes the existence of an NPMLE ˆG that is a probability measure supported on at most n points in X and is a consistent estimator of G∗in terms of the L´evy-Prokhorov metric. If G is the class of mixture of K probability distributions where K is known, that is, G = GK ∶= {G = K ∑ j=1 πjδβj ∶ K ∑ j=1 πj = 1, πj > 0, βj ∈Rd}, then the model becomes a finite mixture of linear regressions, and this problem can be solved by the regular EM algorithm for a known number of components (Leisch, 2004). The modern EM algorithm was formalized in Dempster et al. (1977). DeSarbo and Cron (1988) introduced an EM algorithm for the cluster of linear regression problem with K components, which extended previous work on estimating two-component models such as in Quandt (1972); Hosmer (1974); and Quandt and Ramsey (1978). In the classic EM algorithm, the number of components, if it is unknown, must be estimated prior to initiating the algorithm. Some approaches to estimating this number include methods based on the incomplete and complete log-likelihoods, Fischer informa- tion matrix, Bayesian criteria, and information theory (e.g., AIC) (Hawkins et al., 2001; 4 Melnykov and Melnykov, 2012). On the other hand, methods based on NPMLE such as the approach in Jiang and Guntuboyina (2025) do not appear to be able to accurately detect the true number of components in the mixture model. Although the consistency for NPMLE has been established in Jiang and Guntuboyina (2025), in practice, a true cluster may correspond to a few small clusters detected by NPMLE, where the sum of the estimated weights approximate the true weight of that single cluster (see Figure 1). Figure 1: Estimated regression lines for a mixture of linear regression model with three true components: y = 3 −x, y = 1 + 1.5x, and y = −1 + .5x with true weights (.3, .3, and .4), respectively, and noise with σ = .5. Gray points are (xi,yi)’s of size n = 5000. Left panel: NPMLE solution estimated using the CGM method in Jiang and Guntuboyina (2025). There are 10 estimated regression lines with weights ranging from 0.012 to 0.343, although three clusters appear to be formed with aggregated weights 0.310, 0.296, and 0.393. Right panel: NPKMLE solution estimated by an EM algorithm developed in this paper. There are exactly 3 estimated regression lines with weights 0.314, 0.299 and 0.386, respectively. In our approach, instead of maximizing (8) without imposing a form restriction on G, we restrict the set G to Gkde, which is the set of kernel density estimators with a given kernel function and a bandwidth based on n points in Rd and solve ˆGkde ∈arg max G∈Gkde L(G). (9) 5 We call this the nonparametric kernel maximum likelihood estimator (NPKMLE). The optimization is reduced to finding the n points that define the KDE solution since the kernel function and the bandwidth are given. We develop an EM algorithm that aims to find the solution to the above optimization problem. The algorithm is able to estimate a mixture of linear regression model with a true prior distribution that is discrete or continuous, as opposed to the classical EM-algorithm’s restriction to estimating the parameters of a mixture of regression model with a finite number of components. In the case of a discrete prior distribution, our algorithm does not require a prior estimate of the number of components, and is able to determine the true number of components automatically (see Figure 1). Hence the output of our algorithm can be used for clustering in the finite mixture of linear regression model. If the prior distribution G∗is known to have a density, we also develop another EM algorithm that converges to NPMLE. This EM algorithm falls short when the density does not exist. However, specific post-processing steps tailored for the underlying structure of G∗can remediate the issue, for example, when G∗is supported on a finite set or has a low dimensional manifold as it support. This article is structured as follows: In Section 2, we present the mixture of linear regression problem and discuss our approaches to solving it. We introduce an EM algorithm that can be used when the true distribution G∗has a density, as well as a different EM algorithm when we drop this assumption. We also discuss the algorithms’ properties. In Section 3, we demonstrate the performance of our algorithms using simulations and show their utility in real-world applications. All the proofs are provided in the Appendix. 2 Methods In the discussion below, we distinguish two cases between G∗has a density and G∗does not necessarily have a density (e.g., when G∗is discrete with an unknown number of 6 components), and describe an EM algorithm for each case. Our starting point is the NPMLE. Note that L(G) in (6) is the “incomplete-data” log-likelihood function (meaning that the information of βi for each pair (xi,yi) is not available). If G has a density g, then, abusing the notation L, which was used for L(G), the log-likelihood can be written as g ↦L(g) ∶= n ∑ i=1 log [∫ϕσ (yi −x⊺ i β)g(β)dβ]. (10) The corresponding “complete-data” log-likelihood function is L(G∣x,y,β) = n ∑ i=1 log [ϕσ (yi −x⊺ i βi)g(βi)], (11) where x = (x1,⋯,xn), y = (y1,⋯,yn), and β = {β1,⋯,βn}. Here g should be understood as the density of G if it exists; If G is discrete such that G = ∑K j=1 πjδβj, then g(β) = ∑K j=1 πj1βj(β), where 1βj is the indicator function at βj. 2.1 EM-NPMLE algorithm: when G∗has a density function We first assume G∗is a continuous distribution for which a density function g∗exists. In this case, the posterior density of βi given (xi,yi) with respect to g∗exists and is fi(β ∣xi,yi,g∗) = ϕσ (yi −x⊺ i β)g∗(β) ∫Rd ϕσ (yi −x⊺ i β)g∗(β)dβ . (12) We provide an iterative algorithm to approximate g∗in what follows. With an initial estimate of g∗, we can obtain the posterior density of βi given (xi,yi) with respect to this initial estimate. Then it is a natural idea to use the average of the posterior densities across all βi to update the estimate of g∗, and the process is continued iteratively until convergence. More formally, with an initialization density g(0) for βi’s, for t = 0,1,2,⋯, the algorithm iterates between f (t+1) i (β) ≡fi(β ∣xi,yi,g(t)) = ϕσ (yi −x⊺ i β)g(t)(β) ∫Rd ϕσ (yi −x⊺ i β)g(t)(β)dβ , i = 1,⋯,n, (13) 7 and g(t+1) = 1 n n ∑ i=1 f (t+1) i . (14) It turns out that this simple but elegant algorithm can be interpreted as an EM al- gorithm, and it is inspired by a similar algorithm in the setting of mixture of distribu- tions (Vardi et al., 1985; Laird and Louis, 1991; Vardi and Lee, 1993; Chung and Lindsay, 2015; Chae et al., 2018). We will call this an EM-NPMLE algorithm. To see this is an EM algorithm, taking the expectation of L(g∣x,y,β) (we again abuse notation L) with respect to the posterior density of β given x,y,g(t), we get Eβ∣x,y,g(t)L(g∣x,y,β) = n ∑ i=1 ∫Rd log [ϕσ (yi −x⊺ i β)g(β)]ϕσ (yi −x⊺ i β)g(t)(β)dβ ∫Rd ϕσ (yi −x⊺ i β)g(t)(β)dβ . (15) This is the E-step in an EM algorithm. Let Gden be the space of distributions on Rd for which their density functions exist. We would like to find a maximizer of (15) in Gden. Note that ∫Rd g(u)du = 1 for G ∈Gden such that g is the density function of G, which is a restriction in the optimization. Define the Lagrangian function for the optimization as F(g;g(t)) ∶= Eβ∣x,y,g(t)L(g∣x,y,β) −n[∫Rd g(u)du −1], (16) where n is the Lagrange multiplier. Notice that arg max G∈Gden F(g;g(t)) = arg max G∈Gden Eβ∣x,y,g(t)L(g∣x,y,β). (17) To find the solution to (17), we can first take the derivative of F(g;g(t)) with respect to g, which is a functional derivative. In general, the functional derivative of a functional D(f) is defined as δD/δf such that lim ϵ→0 [D[f + ϵη] −D[f] ϵ ] = ∫ δD δf (x)η(x)dx, (18) where η is an arbitrary function and ϵ is a scalar value (Parr and Weitao, 1994). 8 Taking the functional derivative of F(g;g(t)) with respect to g, we obtain δF(g;g(t)) δg = n ∑ i=1 ϕσ (yi −x⊺ i β)g(t)(β) g(β)∫Rd ϕσ (yi −x⊺ i β)g(t)(β)dβ −n. (19) By letting δF(g;g(t)) δg = 0 and setting the solution as g(t+1), we recover the algorithm in (14): g(t+1)(β) = 1 n n ∑ i=1 ϕσ (yi −x⊺ i β)g(t)(β) ∫Rd ϕσ (yi −x⊺ i β)g(t)(β)dβ . (20) The following proposition states that (14) provides the solution to the M-step. Proposition 1. In the above setting, g(t+1) = arg max G∈Gden Eβ∣x,y,g(t)L(g∣x,y,β), t = 0,1,2,⋯, (21) that is, g(t+1) is the unique maximizer of the expectation of the complete log-likelihood given g(t). The output of this algorithm converges to NPMLE under mild assumptions, and the convergence limit g(∞) can then be used as a final estimate for g∗. The following theorem is similar to Proposition 2 in Chae et al. (2018) and its proof is omitted. Recall that X is the support of G∗. Theorem 2.1. Suppose for every i = 1,⋯,n, that β →ϕσ (yi −x⊺ i β) is a continuous and strictly positive map on X. In addition, assume that for each ϵ > 0 there exists a compact X0 ⊂X where sup β∈X ∁ 0 ϕσ (yi −x⊺ i β) < ϵ for all i = 1,⋯,n. If a unique NPMLE ˆG exists, then as t →∞, G(t) converges weakly to ˆG, where G(t) is the probability distribution function of g(t). 2.2 EM-NPKMLE algorithm: when G∗possibly does not have a density The success of the EM-NPMLE method in Section 2.1 requires G∗to have a density, but may not perform well when this is not true. For example, it is not expected to give a 9 consistent estimate of the number of components when G∗has a finite number of support points, because g(t+1) from (14) will always be a density, although it converges to the NPMLE, which has at most n support points (see Theorem 1 in Jiang and Guntuboyina (2025)). Figure 2 illustrates an example of the estimated regression lines with coefficients sampled from the output of the EM-NPMLE algorithm in Section 2.1. The algorithm developed in Jiang and Guntuboyina (2025) aiming to approximate the NPMLE also tends to overestimate the number of components in the mixture of linear regression model. Figure 2: The regression lines of different colors representing 100 β coefficients sampled from an estimated density generated from the output of the EM-NPMLE algorithm for the same simulation as used in Figure 1. To address this issue when G∗does not have a density function, for example, when it is only supported on K points in Rd, where we emphasize that we do not assume that K is known, we propose to restrict the maximization of the likelihood function in (8) to a subset Gkde, which is the set of the kernel density estimates (KDEs) with a bandwidth h > 0 and a 10 kernel function V based on n points. The optimization problem becomes NPKMLE in (9). The kernel function V is a probability density function on Rd and we specifically require that it is differentiable and spherically symmetric such that V (x) = v(∥x∥2) for some v, where v ∶R≥0 →R≥0 is called the profile of V (Comaniciu and Meer, 2002). An example of V is the density function of the standard normal distribution on Rd. We can write Gkde ≡Gkde(v,h,n) = { 1 nhd n ∑ ℓ=1 v(∥⋅−˜βℓ∥2 h2 ) ∶˜β1,..., ˜βn ∈Rd}. (22) Here we fix v and h so that each element in Gkde is determined by ˜β = {˜β1,..., ˜βn} ⊂Rd, and the corresponding distribution is denoted Gβ. The optimization in (9) is reduced to finding the ˜β associated with the NPKMLE, with the solution denoted by ˆβ. The empirical distribution based on ˆβ (which is discrete) is used to estimate G∗and cluster all (xi,yi)’s using a plug-in approach as described in Section 1. The above approach works even when the true distribution is discrete, because we can always get a kernel density estimate using a sample, even from a discrete distribution. To solve NPKMLE, we will again apply the EM idea. Recall that given a current estimate G(t) of G∗with density g(t), the expectation of “complete-data” log-likelihood in (11) with respect to the posterior distribution is given in (15). Since we require g(t) to be a KDE in Gkde, G(t) is characterized by a set of n points in Rd denoted by β(t) = {β(t) ℓ ∈ Rd ∶ℓ= 1,⋯,n}, such that we can write G(t) = Gβ(t). This is the E-step of our algorithm. For the M-step, we seek a g(t+1) such that g(t+1) ∈arg max g∈Gkde Q(G;G(t)), (23) where Q(G;G(t)) = Eβ∣x,y,G(t)L(G∣x,y,β). (24) In other words, we would like to maximize the function in (15) with respect to g but require g to take the form of 11 g(β) = g(t+1)(β) = 1 nhd n ∑ ℓ=1 v(∥β −β(t+1) ℓ ∥2 h2 ), (25) where {β(t+1) ℓ ∈Rd ∶ℓ= 1,⋯,n} =∶β(t+1) is a new set of points to be determined, which are viewed as an update from β(t). In what follows we also use β(t) as a vector by stacking all its elements together. Since any g ∈Gkde is only determined by n (unspecified) points in Rd, the optimization problem in (23) is reduced to finding these n points. There is no closed form solution to this optimization problem in general, and therefore a gradient-ascent type algorithm is pro- posed, which features adaptive step sizes that do not need to be chosen. The gradient-ascent algorithm has no guarantee to converge to a global maximum, and hence the optimization in (23) is understood to find a local maximum. This leads to an EM algorithm involving two loops, as given in Algorithm 1 below, which we call EM-NPKMLE. Algorithm 1 EM-NPKMLE algorithm Input: x,y, β(0) = {β(0) 1 ,⋯,β(0) n } For t = 0,1,2,... ν(0) = β(t) For r = 0,1,2,... ν(r+1) ←Ð ξ(ν(r);β(t),x,y), where ξ is given in (32) End β(t+1) ←Ð ν(∞) End Output: empirical distribution of {β(∞) j ∶j = 1,⋯,n} The key step in the algorithm is the update from ν(r) to ν(r+1), which can be indeed understood as a gradient ascent method because it can be expressed in the following form: 12 ν(r+1) = ξ(ν(r);β(t),x,y) ∶= ν(r) + 1 C(ν(r),β(t),x,y)∇Q(Gν(r);G(t)), (26) where ν(r) is the current update approaching to β(t+1), ∇Q is the gradient of Q(Gν;G(t)) with respect to ν, and 1/C(ν(r),β(t),x,y) is an adaptive step size—see (31) below for the definition of C. We now derive the result in (26). Denote vh(∥x∥2) ∶= v(∥x∥2/h2) and v′ h(∥x∥2) ∶= v′(∥x∥2/h2). Also, let w = −v′ and wh = −v′ h. Plugging (25) into (15), we get Q(Gν;G(t)) =Eβ∣x,y,G(t)L(Gν∣x,y,β) = n ∑ i=1 ∫Rd log [ϕσ (yi −x⊺ i β) 1 nhd ∑n ℓ=1 vh(∥β −νℓ∥2)]Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ , (27) where Si(β,β(t),x,y) = ϕσ (yi −x⊺ i β) n ∑ ℓ=1 vh(∥β −β(t) ℓ∥2). (28) Taking the derivative of Q with respect to νℓ, ℓ= 1,⋯,n, where νℓis the ℓth entry of ν, we have ζ(νℓ;β(t),x,y) ∶= ∂Q(Gν;G(t)) ∂νℓ = n ∑ i=1 ∫Rd wh(∥β−νℓ∥2)(β−νℓ) ∑n m=1 vh(∥β−νm∥2) Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ , ℓ= 1,⋯,n. (29) Setting ζ(νℓ;β(t),x,y) = 0 we can write n ∑ i=1 ∫Rd wh(∥β−νℓ∥2)β ∑n m=1 vh(∥β−νm∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ =A(νℓ;β(t),x,y) (30) =νℓ n ∑ i=1 ∫Rd wh(∥β−νℓ∥2) ∑n m=1 vh(∥β−νm∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ , ℓ= 1,⋯,n ´¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¸¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¹¶ =C(νℓ,β(t),x,y) . (31) 13 Or equivalently, ξ(νℓ;β(t),x,y) ∶= A(νℓ;β(t),x,y) C(νℓ,β(t),x,y) = νℓ, ℓ= 1,⋯,n. (32) Hence, the gradient of Q(Gν;G(t)) with respect to ν is ∇Q(Gν;G(t)) (33) =( ζ(ν1;β(t),x,y),⋯,ζ(νn;β(t),x,y) ) ⊺ =( C(ν1,β(t),x,y)[ξ(ν1;β(t),x,y) −ν1],⋯,C(νn,β(t),x,y)[ξ(νn;β(t),x,y) −νn] ) ⊺ , which justifies (26). We now show that Algorithm 1 generates an increasing sequence of (27) as a function of ν(r). Here we require that the profile v is monotonically decreasing and function log(∑n i=1 v(xi)) is convex, which is satisfied when V is the Gaussian kernel since the LogSumExp function is convex.1 Theorem 2.2. In the current setting, the expectation of the complete log-likelihood function Q in (27) as a function of ν(r) is monotonically non-decreasing, i.e., for r = 0,1,⋯, Q(Gν(r+1);G(t)) ≥Q(Gν(r);G(t)), and Q(Gν(r);G(t)) converges as r →∞, where Gν(r+1), Gν(r), and G(t) are the distributions of the kernel density estimations of G using ν(r+1), ν(r), and β(t), respectively. The convergence of ν(r) is not implied by the above result but will be assumed in what follows. Now we show the monotonicity of the incomplete likelihood function in (6) as a function of the sequence G(t). Theorem 2.3. The incomplete likelihood function L(G) in (6) is bounded from above, and is a monotonically non-decreasing function of the sequence G(t), that is, L(G(t+1)) ≥ L(G(t)), and hence L(G(t)) converges as t →∞. 1See https://www.math.uwaterloo.ca/~hwolkowi/henry/teaching/w10/367.w10/367miscfiles/ pages48to60.pdf. 14 In the EM-NPKMLE algorithm, we take the sequence νr in our Algorithm to conver- gence, which can be viewed as a complete M-step to locally maximize (23), and can be com- putationally expensive sometimes. As a lightweight variant, we may just run the gradient ascent step in the inner loop of the algorithm one time instead of taking it to convergence, which is a generalized (or gradient) EM (GEM) algorithm, and is called GEM-NPKMLE. Similar to the result in Theorem 2.3, the incomplete likelihood function L evaluated at the distribution sequence G produced by GEM-NPKMLE algorithm is also non-decreasing and convergent. 2.3 Post-processing EM-NPMLE: when G∗possibly does not have a density Although the EM-NPMLE algorithm presented in Section 2.1 to approximate NPMLE is not directly suitable for the case where the true prior distribution is discrete, if a clustering algorithm is applied as a post-processing step to a random sample generated from the density g(t) for t sufficiently large produced in (14), we can still obtain a good result to estimate the components in the finite mixture of the linear regression model. The key observation is that as t grows to infinity, g(t) gets closer to the NPMLE and the true prior distribution, formed by peaks near the true support points (see Figure 2). The clustering algorithm we choose to use in this postprocessing step is the mean shift algorithm, which does not require the specification of the number of clusters, and naturally uses local modes (peaks in density) as the central idea of clustering. The mean shift algorithm is a density- based clustering algorithm that iteratively calculates a local weighted mean starting at a sample point. A kernel function acts as a weight function for this local weighted mean, and a bandwidth determines the level of smoothing. A sequence of local weighted means is iteratively calculated until convergence, which can be shown to approximate the gradient flow of the kernel density estimator using the given sample. All sample points that converge 15 to the same local mode are considered part of the same cluster (Comaniciu et al., 2003). The post-processing idea can be applied not only when we know the true prior distribu- tion is discrete, but can also be extended to some other settings when we have some specific information about the distribution. For example, an interesting scenario occurs when the true prior distribution is supported on a low-dimensional submanifold in Rd. In this case, the distribution does not have a density with respect to the Lebesgue measure, and hence does not satisfy the assumptions when we apply the EM-NPMLE algorithm in Section 2.1. However, we can use a random sample from the output of this EM-NPMLE algorithm as initialization and apply the subspace constrained mean shift (SCMS) algorithm (Ozertem and Erdogmus, 2011; Qiao and Polonik, 2021), which extends the mean shift algorithm to extract low-dimensional features called ridges from data—in fact, we can view a set of local modes as a 0-dimensional submanifold for which the mean shift algorithm is applicable. Like the mean shift algorithm, the SCMS algorithm also requires a kernel (weight) func- tion and a smoothing bandwidth. However, the SCMS projects the gradient of the kernel density estimator onto a subspace spanned by a subset of the eigenvectors of the Hessian matrix using the kernel density estimator. We demonstrate this approach in a simulation study in Section 3.1.2. We emphasize that the specific post-processing procedure depends on the further knowl- edge or assumption about the true prior distribution. Jiang and Guntuboyina (2025) de- velop an algorithm to approximate NPMLE, and in order to improve the estimation of the clusters when the prior distribution is discrete, they add a BIC trimming step, which has a similar purpose as the mean shift post-processing step using the output of EM-NPMLE algorithm in Section 2.1. In practice, it can be difficult to decide whether the prior distribution is continuous or discrete (see the CO2 vs GDP real data example in Section 3.2), which can create challenges of determining the best post-processing steps to use. As a comparison, the EM-NPKMLE 16 algorithm we present in Section 2.2 can be used to handle both discrete and continuous prior distributions without requiring a post-processing step. 3 Numerical Results For the EM-NPMLE algorithm introduced in Section 2.1, it requires an initial density g(0), which can be chosen as a (non-informative) uniform distribution over a bounded subset in Rd. For the EM-NPKMLE algorithm developed in Section 2.2, we need a set of initial points β(0), which can be chosen by sampling from the same uniform distribution. Alternatively, one can also use a random sample from the estimated density after running the EM-NPMLE algorithm. This creates initial data points that reflect the underlying distribution G∗more accurately than assuming no prior knowledge of the distribution G∗and using samples from an arbitrary distribution (e.g., a uniform distribution) as the starting values. Our simulation results below indicate that the choice of the initial points β(0) does not have a strong influence on the results, as the performance of the algorithm is good for either choice described above. We used the Gaussian kernel as the kernel function V for EM-NPKMLE in all the sim- ulations below. We propose using a bandwidth based on the maximal smoothing principle as described in Terrell (1990). Based on this principle, the bandwidth is chosen based on the maximum smoothing consistent with the density’s estimated dispersion. We find this approach advantageous because by using such a bandwidth, we guard against choosing a smaller bandwidth that may introduce spurious features, such as additional local modes in the estimated distribution that do not exist in G∗. The bandwidth is defined as follows: hOS = U × [(d + 8) d+6 2 πd/2R(V ) 16nΓ(d+8 2 )d(d + 2) ] 1 d+4 , (34) where R(V ) = ∫V 2, which is equal to 1/(4π) when V is a Gaussian kernel (Terrell, 1990; Davies, 2013). For U, we calculate the Gaussian-scaled interquartile range (IQR) (i.e., 17 IQR/1.34, a robust estimator of the standard deviation of a normal density) of the values sampled from the estimated density in each dimension in our data and then take the mean, as proposed in Davies and Hazelton (2010). We emphasize that this bandwidth selection strategy is used only when the initialization is based on sampling from the EM-NPMLE output. Otherwise an adjustment may be needed for a different initilization. The numerical study includes simulations where the true distribution is discrete and continuous. For the simulation using a discrete distribution, when we examined the per- formance of the EM-NPKMLE, we tested two different initialization procedures. We also ran the GEM-NPKMLE with a single iteration in the inner loop, as discussed at the end of Section 2.2. We compared the EM-NPKMLE performance with the mean-shift algorithm as a post-processing step after sampling from the output density based on the EM-NPMLE algorithm, and with the CGM developed in Jiang and Guntuboyina (2025). In addition, we included a study verifying that cross-validation can effectively estimate σ that was as- sumed to be known. For the continuous distribution case, we generated one simulation and compared the results visually using the EM–NPMKLE, the EM-NPMLE with and without a post-processing step, and the CGM. We also implemented our EM algorithms in two real applications. We used R version 4.1.2 (R Core Team, 2023) for all the numerical results except for the CGM method in Jiang and Guntuboyina (2025), where we used the Python code referenced in their paper. 3.1 Simulations 3.1.1 Simulation 1 We used the following model to test the algorithm, which has also been used in Jiang and Guntuboyina (2025). The mixture of linear regression model has three components with 18 prior weights .3, .3, and .4, respectively: y = 3 −x + ε, y = 1 + 1.5x + ε, (35) y = −1 + .5x + ε, where the noise ε has a N(0,σ2) distribution with σ = .5. The x values were generated from a uniform distribution on [−1,3]. We ran our algorithm on 200 simulated datasets (see Figure 3 for an example). We assume that σ is known until we present the result using cross-validation. Figure 3: An example of a simulated dataset from the model in (35) for σ = .5 and n=500 (left panel) and the corresponding true components in the mixture of linear regression model represented by different colors (right panel). In our simulation experiment, we used the uniform distribution over [−4,4]2 as the initialization for the EM-NPMLE algorithm, to iteratively calculate g(t+1) using the ex- pression in (14) until the L2 distance between g(t) and g(t+1) fell below a predefined small threshold. The final estimated density was used to sample n points as the starting points for the EM-NPKMLE algorithm. We stepped through sample sizes starting from 500 until 10,000. The EM-NPKMLE algorithm produced a set of points that defines a KDE solution 19 to (23). Suppose its empirical distribution can be represented by ˆG = ∑ ˆ K j=1 ˆπjδˆβj where the ˆβj’s are all distinct, that is, we aggregate the solution points according to their unique values. For a mixture of linear regression model that has K components with K unknown, the true prior distribution G∗= ∑K j=1 πjδβj is estimated by ˆG, where each βj is estimated by the closest ˆβj, and each πj by the associated ˆπj. To cluster a pair of points (xi,yi), it is assigned to a class j for which the posterior probability is maximized, i.e., arg max j ˆπjϕσ (yi −x⊺ i ˆβj) ∑ ˆ K k=1 ˆπkϕσ (yi −x⊺ i ˆβk) . (36) We report the average adjusted Rand index (Hubert and Arabie, 1985), the proportion of samples that were correctly identified as having three components, and the average Wasserstein-2 distance (Nguyen, 2011) between the probability measures ˆG produced by the algorithm in question and G∗, i.e., W2(G∗, ˆG) ∶= ⎡⎢⎢⎢⎢⎣ min γ∈Γ(π,ˆπ) K ∑ i=1 ˆ K ∑ j=1 ∣∣βi −ˆβj∣∣2γij ⎤⎥⎥⎥⎥⎦ 1/2 , (37) where γ = (γij)1≤i≤K,1≤j≤ˆ K ∈[0,1]K× ˆ K, Γ(π, ˆπ) is the set of all matrices γ satisfying ∑K i=1 γij = ˆπj, ∑ ˆ K j=1 γij = πi, where π = {π1,⋯,πK} are the true weights, and ˆπ = {ˆπ1,⋯, ˆπ ˆ K} are the estimated weights; and βi’s are the true coefficients and ˆβj’s are the estimated coefficients (see Table 1). Additionally, note that some points at the intersections of the linear regression com- ponents will unavoidably be mislabeled using (36) by any estimators of G∗, which results in an adjusted Rand index less than 1. As an oracle reference for our algorithm, we cal- culated the average adjusted Rand index when G was specified as G∗, and the posterior probabilities were directly calculated. We also report the bias and standard deviation of the estimated coefficients and weights when three components, the true number of components, were estimated. EM-NPKMLE: For the results of the EM-NPKMLE algorithm, see Table 1 for infor- mation on the proportion of simulations in which the true number of components were 20 detected, and Table 2 for bias and standard deviation of the errors. Standard deviations are also reported for all values (in parentheses), where applicable. Table 1: Results of Simulation 1 using EM-NPKMLE algorithm. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Distance 500 0.415 (0.119) 0.665 (0.033) 0.540 0.818 (0.137) 1000 0.567 (0.034) 0.663 (0.023) 0.990 0.562 (0.097) 5000 0.645 (0.011) 0.664 (0.010) 0.995 0.343 (0.080) 10000 0.651 (0.008) 0.663 (0.008) 0.995 0.288 (0.081) Table 2: Results of Simulation 1 using EM-NPKMLE algorithm. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.360, 0.227)±(0.053, 0.074) (-0.565, -0.363)±(0.234, 0.138) (0.460, 0.230)±(0.119, 0.077) π: bias ± SD -0.005±0.024 -0.026±0.059 0.031±0.056 1000 β: bias ± SD (-0.291, 0.157)±(0.037, 0.042) (-0.397, -0.164)±(0.112, 0.062) (0.297, 0.143)±(0.048, 0.046) π: bias ± SD 0.002±0.019 -0.003±0.017 0.001±0.019 5000 β: bias ± SD (-0.155, 0.033)±(0.014, 0.016) (-0.157, 0.003)±(0.032, 0.018) (0.149, 0.038)±(0.019, 0.014) π: bias ± SD -0.001±0.008 0±0.007 0.001±0.009 10000 β: bias ± SD (-0.112, 0.004)±(0.011, 0.010) (-0.115, 0.013)±(0.018, 0.012) (0.113, 0.027)±(0.013, 0.010) π: bias ± SD -0.001±0.006 0±0.006 0.001±0.006 The results suggest that our algorithm produces consistent estimates of the β compo- nents and weights, as well as a consistent estimator of the number of components. As n increases, the values of these estimates approach the true parameter values and true number of components. As n gets larger, we observe that all of the points that are mis- labeled are at the intersections of the components, and the slightly worse performance of the estimated G from our algorithm than using G∗is due to the bias of the β estimates creating intersection areas that are slightly larger than when G is set to G∗. Since we use the bandwidth based on the oversmoothing principle, we accept slightly larger bias in exchange for avoiding detecting spurious modes (components) in the data. In what follows we compare the above results using the EM-NPKMLE algorithm with those using the the EM-NPMLE algorithm with mean shift as a post-processing step as 21 described in Section 2.3 and the conditional gradient method (CGM) as described in Jiang and Guntuboyina (2025). We stepped through the same sequence of sample sizes as in Tables 1 and 2. EM-NPMLE and mean-shift post-processing: As shown in Tables 3 and 4, the mean shift algorithm as a post-processing step after EM-NPMLE produced results that suggest consistent estimation of the true β values and weights, as well as the true number of components. When using the mean shift for post-processing, we also used the bandwidth based on the maximal smoothing principle as described previously. CGM algorithm in Jiang and Guntuboyina (2025): As shown in Tables 5 and 6, the results of the CGM algorithm do not suggest that this algorithm can consistently estimate the number of modes, though the decreasing average W-2 distance with increasing sample size supports the idea of convergence in distribution between the true and estimated distributions. Table 3: Results of Simulation 1 using EM-NPMLE and mean-shift post-processing. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Distance 500 0.620 (0.094) 0.665 (0.033) 0.915 0.612 (0.238) 1000 0.657 (0.024) 0.663 (0.023) 0.995 0.465 (0.141) 5000 0.663 (0.010) 0.664 (0.010) 0.990 0.311 (0.092) 10000 0.662 (0.008) 0.663 (0.008) 1 0.265 (0.089) EM-NPKMLE with uniform initialization: We also ran our EM-NPKMLE algo- rithm on the same model as in (35) using initial points drawn from a uniform distribution over [−4,4]2 to demonstrate that our algorithm does not depend on initial points sampled from the estimated density g derived from the EM-NPKMLE algorithm. We currently do not have a bandwidth selection strategy for the uniform initialization, and we manually tuned the bandwidth by using the maximal smoothing principle multiplied by a constant 22 Table 4: Results of Simulation 1 using EM-NPMLE and mean-shift post-processing. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.012, 0.034)±(0.062, 0.040) (-0.173, -0.080)±(0.108, 0.045) (0.088, 0.031)±(0.062, 0.042) π: bias ± SD -0.001±0.025 -0.002±0.027 0.030±0.028 1000 β: bias ± SD (-0.002, 0)±(0.049, 0.031) (-0.077, -0.021)±(0.061, 0.029) (0.041, 0.013)±(0.044, 0.026) π: bias ± SD 0.002±0.019 -0.002±0.018 0±0.019 5000 β: bias ± SD (0, 0)±(0.021, 0.015) (-0.009, 0.005)±(0.023, 0.012) (0.013, -0.003)±(0.018, 0.013) π: bias ± SD -0.001±0.008 0±0.008 0.001±0.009 10000 β: bias ± SD (0, 0)±(0.014, 0.010) (-0.007, 0.004)±(0.017, 0.011) (0.010, -0.002)±(0.012, 0.008) π: bias ± SD -0.001±0.006 0±0.006 0±0.006 Table 5: Results of Simulation 1 using CGM. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp 500 0.590 (0.066) 0.665 (0.033) 0 1000 0.581 (0.068) 0.663 (0.023) 0 5000 0.602 (0.053) 0.664 (0.010) 0 10000 0.596 (0.057) 0.663 (0.008) 0 Table 6: Results of Simulation 1 using CGM. Sample Size Avg W-2 Distance Mean No. of Comp 500 0.439 (0.109) 9.295 (2.095) 1000 0.373 (0.093) 8.550 (1.744) 5000 0.260 (0.069) 7.800 (1.504) 10000 0.229 (0.064) 7.570 (1.274) c = 1.15. As shown in Tables 7 and 8, the results are comparable to running our EM- NPKMLE algorithm using the output of EM-NPMLE for the initialization. Table 7: Results of Simulation 1 using EM-NPKMLE Algorithm with uniform initialization. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Dist 500 0.146 (0.043) 0.665 (0.033) 0.030 1.130 (0.059) 1000 0.283 (0.140) 0.663 (0.023) 0.515 1.020 (0.114) 5000 0.624 (0.012) 0.664 (0.010) 1 0.422 (0.060) 10000 0.642 (0.008) 0.663 (0.008) 1 0.320 (0.052) 23 Table 8: Results of Simulation 1 using EM-NPKMLE Algorithm with uniform initialization. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.453, 0.273)±(0.035, 0.072) (-0.888, -0.504)±(0.259, 0.149) (0.643, 0.325)±(0.136, 0.070) π: bias ± SD 0.029±0.017 -0.039±0.211 0.010±0.218 1000 β: bias ± SD (-0.430, 0.333)±(0.039, 0.048) (-0.717, -0.364)±(0.211, 0.094) (0.556, 0.299)±(0.094, 0.075) π: bias ± SD 0.049±0.018 -0.119±0.066 0.070±0.063 5000 β: bias ± SD (-0.224, 0.101)±(0.013, 0.018) (-0.234, -0.029)±(0.036, 0.020) (0.222, 0.073)±(0.020, 0.016) π: bias ± SD 0.009±0.007 -0.018±0.006 0.009±0.007 10000 β: bias ± SD (-0.164, 0.047)±(0.010, 0.011) (-0.164, 0.001)±(0.019, 0.013) (0.164, 0.043)±(0.014, 0.011) π: bias ± SD 0±0.005 -0.008±0.004 0.008±0.005 GEM-NPKMLE: As discussed at the end of Section 2.2, GEM-NPKMLE is a variant of EM-NPKMLE, where we run a single iteration in the inner loop instead of running it to convergence. This approach saves computation time. For example, for the simulation with n=5000 and the EM-NPMLE initialization, the EM-NPKLME took an average of 115.888 (40.759) minutes, while the GEM-NPKMLE took an average of 15.448 (2.194) minutes. It also produced similar results compared to Tables 1, 2, 3, and 4 for larger sample sizes, as can be seen in Tables 9 and 10. This suggests that the GEM-NPKMLE approach is viable when saving computation time is important and there is a sufficient sample size. The initial points we used for GEM-NPKMLE were sampled from the output of EM-NPMLE, which was the same strategy that was used in the first part of our simulations. However, in this case, we multiplied the initial bandwidth by a constant c = 1.2 to improve the algorithm’s performance. Table 9: Results of Simulation 1 using GEM-NPKMLE algorithm. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Dist 500 0.133 (0.034) 0.665 (0.033) 0.235 1.070 (0.062) 1000 0.213 (0.106) 0.663 (0.023) 0.195 0.993 (0.059) 5000 0.614 (0.012) 0.664 (0.010) 0.840 0.401 (0.066) 10000 0.637 (0.008) 0.663 (0.008) 0.930 0.323 (0.070) 24 Table 10: Results of Simulation 1 using GEM-NPKMLE algorithm. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.527,0.360)±(0.067, 0.083) (-0.983,-0.642)±(0.135,0.071) (0.839,0.387)±(0.071,0.067) π: bias ± SD 0.005±(0.027) -0.298±(0.002) 0.292±(0.027) 1000 β: bias ± SD (-0.414,0.308)±(0.052,0.059) (-1.052,-0.551)±(0.175,0.079) (0.757,0.402)±(0.092,0.075) π: bias ± SD 0.007±0.019 -0.291±0.048 0.284±0.042 5000 β: bias ± SD (-0.241,0.104)±(0.014,0.018) (-0.282,-0.069)±(0.041,0.023) (0.231,0.088)±(0.020,0.016) π: bias ± SD -0.001±0.008 0±0.007 0.001±0.009 10000 β: bias ± SD (-0.184,0.056)±(0.010,0.011) (-0.196,-0.009)±(0.022,0.013) (0.172,0.053)±(0.014,0.011) π: bias ± SD -0.001±(0.006) 0±(0.006) 0.001±(0.006) Cross-validation for unknown σ: If σ is unknown, we can implement a cross-validation (CV) approach found in Jiang and Guntuboyina (2025) to estimate the parameter σ. First we divide the dataset T = {(xi,yi) ∶ i = 1,...,n} into C folds: T1,...,TC. For each c ∈{1,2,...C}, one fold Tc is left out as test data and the remaining folds are used to obtain an estimator ˆG−c for G∗. Let f ˆG−c xi (yi) = ∫ϕσ (yi −x⊺ i β)d ˆG−c(β), i = 1,⋯,n. The σ value that minimizes the following objective function is selected and denoted by ˆσ: CV (σ) = − C ∑ c=1 ∑ (xi,yi)∈Tc log(f ˆG−c xi (yi)). In our simulation, we used a random sample from the output of the EM-NPMLE al- gorithm as the initialization for our GEM-NPKMLE algorithm (with the oversmoothing bandwidth) to calculate ˆG−c. We tested this method on the model in (35) with C = 5 to get ˆσ, which was further plugged into the EM-NPKMLE algorithm. The results are provided in Tables 11 and 12, which are comparable to those in Tables 1 and 2. 3.1.2 Simulation 2 Unlike the EM-NPMLE algorithm, our EM-NPKMLE algorithm does not assume the knowledge of a particular structure in the true distribution, such as a low-dimensional 25 Table 11: Results of Simulation 1 using EM-NPKMLE algorithm and cross-validation to estimate σ. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Dist 500 0.302 (0.204) 0.665 (0.033) 0.390 0.891 (0.192) 1000 0.574 (0.045) 0.663 (0.023) 0.975 0.557 (0.118) 5000 0.645 (0.011) 0.664 (0.010) 0.995 0.343 (0.080) 10000 0.651 (0.008) 0.663 (0.008) 0.995 0.288 (0.081) Table 12: Results of Simulation 1 using EM-NPKMLE algorithm and cross-validation to estimate σ. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.328,0.201)±(0.080,0.090) (-0.425,-0.281)±(0.236,0.177) (0.375,0.193)±(0.162,0.091) π: bias ± SD -0.010±0.025 -0.028±0.071 0.039±0.066 1000 β: bias ± SD (-0.273, 0.143)±(0.048, 0.047) (-0.355,-0.151)±(0.137,0.068) (0.271,0.136)±(0.073,0.047) π: bias ± SD -0.003±0.018 -0.002±0.018 0.005±0.019 5000 β: bias ± SD (-0.155, 0.033)±(0.0143, 0.016) (-0.157,0.003)±(0.032,0.018) (0.149,0.038)±(0.019,0.014) π: bias ± SD -0.001±0.008 0±0.007 0.001±0.009 10000 β: bias ± SD (-0.112, 0.004)±(0.011, 0.010) (-0.115, 0.013)±(0.018, 0.012) (0.113, 0.027)±(0.013, 0.010) π: bias ± SD -0.001±0.006 0±0.006 0.001±0.006 manifold as the support. To further demonstrate this, we use a continuous distribution as the model, which consists of a bivariate distribution represented by a mixture of uniform distributions over two concentric circles, each with center at the origin (see Figure 4). The model is given in (1), where σ = .2 is assumed to be known, and the true distribution G∗is 1 2 × Uniform{B(1)} + 1 2 × Uniform{B(2)}, where B(r) = {β ∈R2 ∶∣∣β∣∣= r}. This model has also been used in (Jiang and Guntuboyina, 2025). We uniformly sampled 10000 design points xi’s from [−1,3]. For our EM-NPKMLE algorithm, we used the output of the EM-NPMLE algorithm as the initial β(0) points and used the oversmoothing bandwidth. We got similar results when we used points sampled from a uniform distribution on [−4,4]2 as the initial β(0) and applied a constant to the 26 Figure 4: Left panel: the support of G∗in Simulation 2; Right panel: the sample used in Simulation 2. oversmoothing bandwidth. In this simulation we also tested the EM-NPMLE algorithm with and without SCMS as a post-processing step, as described in Section 2.3. The EM-NPMLE algorithm was initialized by a uniform density over [−4,4]2. Then the SCMS algorithm was applied to 10000 β points sampled from the estimated density produced by the EM-NPMLE algorithm. The bandwidth used for the SCMS algorithm was selected by the biased cross-validation method aiming to minimize an asymptotic mean integrated squared error (AMISE) estimate between the estimated and true density gradients (Sain et al., 1994). This was implemented by the ‘Hbcv.diag’ function in the R package ks (Duong, 2024), and we used the mean of the diagonal entries in the selected diagonal bandwidth matrix. As shown in Figure 5 for a single random sample, the points produced by the EM- NPKMLE algorithm are roughly evenly distributed near the support of G∗, while the support does not seem to be recovered well by the CGM method. The distribution produced by the EM-NPMLE algorithm has a visible concentration near the support of G∗, and 27 applying the SCMS algorithm as a post-processing step further extracted the 1-dimensional structure. In Table 13 we show the results of applying the Wasserstein-2 distance between the probability measures G∗and ˆG from the applicable algorithms. Using SCMS algorithm for post-processing has the best performance, followed by the EM-NPMLE algorithm without post-processing. As two algorithms without knowing the prior distribution is continuous or requiring a post-processing step, the EM-NPKMLE is an improvement over the CGM. Table 13: W-2 Distance between G∗and ˆG for the simulation depicted in Figure 5 Method W-2 Distance EM-NPKMLE 0.328 CGM 0.394 EM-NPMLE 0.190 EM-NPMLE + SCMS 0.157 28 (a) (b) (c) (d) Figure 5: Comparison of the true mixture distribution and the estimated distributions of β for different algorithms in Simulation 2, where the red circles are the support of G∗, and point sizes are proportional to weights. (a) EM-NPKMLE algorithm: 52 support points, weights ranging from 0.0002 to 0.085. (b) CGM method: 34 support points, weights ranging from 0.011 to 0.054. (c) 104 points sampled from the output of the EM-NPMLE algorithm, also used as the input for the SCMS algorithm. (d) post-processing result after using the SCMS algorithm, where all points have weight= 10−4. 29 3.2 Applications Application: CO2-GDP We tested our EM-NPKMLE algorithm on data of 159 countries’ CO2 emissions and GDP per capita in 2015 taken from Bolt et al. (2018); Friedlingstein et al. (2020); Ritchie et al. (2020); and (UN, 2022). This dataset was also analyzed in Jiang and Guntuboyina (2025). Identifying distinct components among the relationship between CO2 emissions and economic output could reveal countries that have high economic output and low CO2 emissions, potentially revealing favorable policies to pursue. Other papers have also looked into this relationship from a mixture of regression approach, such as Huang and Yao (2012) and Hurn et al. (2003), though these papers use different data sources. As in Jiang and Guntuboyina (2025), we used data on CO2 emission in per capita in 10-ton increments and GDP per capita in 10 thousand U.S. dollars. We used a 10-fold cross-validation method as described in Section 3.1 for the CO2-GDP dataset, and used the same bandwidth selection method as described at the beginning of Section 3. Our estimate of σ from this CV method is 0.160. The results of our algorithm with this estimated σ suggest that there are two growth paths with larger estimated prior weights, with possible deviations as evidenced by the other components with smaller weights (see Table 14 and Figure 6). This result differs from Jiang and Guntuboyina (2025), who identified ten components before applying the BIC trimming, and five components after BIC trimming. We note that the two components with largest weights detected from our algorithm ((0.022, 0.179) and (-0.070, 0.343)) are similar to the two components with the largest weights from Jiang and Guntuboyina (2025) before and after their BIC trimming method was applied, that is, (0.030, 0.170) and (-0.010, 0.230), with pre-BIC trimming weights of 0.290 and 0.180, and post-BIC trimming weights of 0.270 and 0.280, respectively. Application: Music Tone Perception We also tested our EM-NPKMLE algorithm on another dataset analyzed in Jiang and 30 Figure 6: CO2-GDP Relationship: The points represent CO2 emissions in per capita in 10-ton increments and GDP per capita in 10 thousand U.S. dollars for individual countries. The lines represent the estimated component regression lines, whose estimated coefficients can be found in Table 14. The thicker lines are the components with larger weights. Table 14: Results from CO2/GDP Application 1st Comp 2nd Comp 3rd Comp 4th Comp 5th Comp β Estimates (0.022, 0.179) (-0.070, 0.343) (-0.101, 0.628) (0.105, 0.120) (0.839, -0.051) Estimated Weights 0.484 0.358 0.088 0.057 0.013 Guntuboyina (2025). The data was initially collected by Cohen (1980) in an experiment examining how trained musicians perceive tone. See Cohen (1980) for further information on the experiment. This dataset, which can be found in the R packages mixtools and fpc, 31 contains one musician’s tuning that was observed 150 times (Benaglia et al., 2009; Hennig, 2024). See Figure 7, where the y-axis displays the musician’s assessed tone ratio and the x- axis depicts the true tone ratio (Viele and Tong, 2002; Yao and Song, 2015). Other studies of this dataset were carried out in De Veaux (1989); Viele and Tong (2002); and Yao and Song (2015); but in these studies the number of components was pre-specified at two. We applied our EM algorithm that does not require prespecifying the number of components. We used the same cross-validation method as in the CO2-GDP application to estimate σ, and our estimate is 0.210. We also used the same bandwidth selection method as in the CO2-GDP application, that is, maximal smoothing bandwidth. Our results revealed two main components (see Figure 7 and Table 15). In terms of the number of components, the results are consistent with the music perception theory when the experiment took place, with one component as y = x and another as y = 2 (Jiang and Guntuboyina, 2025). Our results (see Table 15) are comparable to the two largest components computed using the CGM algorithm (both pre- and post BIC trimming) in Jiang and Guntuboyina (2025), that is, (1.898, 0.054), (0, 0.989), with weights 0.540 and 0.260 (pre-BIC trimming), and 0.700 and 0.300 (post-BIC trimming), respectively. However, the CGM method detected six components before the BIC trimming method was applied. Table 15: Results from Music Tone Application 1st Comp 2nd Comp β Estimates (0.012, 0.967) (1.292, 0.361) Estimated Weights 0.427 0.573 4 Discussion In this paper we first introduce an EM algorithm to approximate NPMLE in the setting of mixture of linear regressions when the prior distribution for the coefficients has a density. 32 Figure 7: Music Tone Perception: The blue points represent a musician’s assessed tone ratio (y) and the true tone ratio (x) pairs. The estimated regression lines (see Table 15) are also depicted for the two components. The thicker line is the component with the larger weight. We then develop another EM algorithm to approximate NPKMLE that can automatically detect the number of components for a discrete prior distribution, and is able to uncover a continuous prior distribution. We provide some theoretical support for our algorithms as well as demonstrate their performance in simulations and applications. In particular, our simulation results suggest that our EM-NPKMLE algorithm can consistently estimate the true number of components and the β coefficients and their associated weights when the prior distribution is discrete. While our EM-NPKMLE algorithm does have useful 33 qualities, it requires the selection of a bandwidth, which can be challenging. We selected the oversmoothing bandwidth based on its ability to avoid the appearance of spurious features that may not belong to the true prior distribution, but further investigation into the bandwidth selection could yield better methods, e.g., adaptive bandwidths for different iterations of the algorithm, or introducing unique bandwidths for each dimension of the data. While we demonstrate the algorithm’s ability to uncover both discrete and continuous distributions, investigating its theoretical properties is an area of further research. It is clear that our methods can be extended to the setting of mixture of distributions, and we will investigate this in a separate article. A Appendix A.1 Proof of Proposition 1 Proof. Let the Kullback-Leibler (KL) divergence between distributions P and Q with den- sities p and q be DKL(P∥Q) = ∫p(x)log p(x) q(x)dx. (38) We can write Eβ∣x,y,G(t)L(g∣x,y,β) (39) = − n ∑ i=1 ∫f (t+1) i (β)log f (t+1) i (β) g(β) dβ + n ∑ i=1 ∫log [ϕσ (yi −x⊺ i β)f (t+1) i (β)]f (t+1) i (β)dβ (40) = − n ∑ i=1 DKL(F (t+1) i ∥G) + n ∑ i=1 ∫log [ϕσ (yi −x⊺ i β)f (t+1) i (β)]f (t+1) i (β)dβ, (41) where F (t+1) i is the distribution associated with f (t+1) i . Hence, maximizing Eβ∣x,y,G(t)L(g∣x,y,β) over g is equivalent to minimizing 1 n ∑n i=1 DKL(F (t+1) i ∥G) over G ∈Gden. Let g(t+1) = 1 n ∑n i=1 f (t+1) i . Notice that 1 n n ∑ i=1 δDKL(F (t+1) i ∥G) δg ∣ g=g(t+1) = −1 n n ∑ i=1 f (t+1) i g(t+1) = −1. (42) 34 Using Theorem 2 in (Nishiyama, 2020), we conclude that g(t+1) is the unique minimizer of 1 n ∑n i=1 DKL(F (t+1) i ∥G), and hence the unique maximizer of Eβ∣x,y,G(t)L(g∣x,y,β,G(t)). A.2 Proof of Theorem 2.2 Proof. Using the subgradient inequality for differentiable convex functions, we have log[ n ∑ ℓ=1 v(yℓ)] −log[ n ∑ ℓ=1 v(xℓ)] ≥∑n ℓ=1 v′(xℓ)(yℓ−xℓ) ∑n ℓ=1 v(xℓ) , ∀xℓ,yℓ∈[0,∞). Also, recall that w = −v′. Then Q(Gν(r+1);G(t)) −Q(Gν(r);G(t)) (43) = n ∑ i=1 ∫Rd log [ϕσ (yi −x⊺ i β) 1 nhd ∑n ℓ=1 vh(∥β −ν(r+1) ℓ ∥2)]Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (44) − n ∑ i=1 ∫Rd log [ϕσ (yi −x⊺ i β) 1 nhd ∑n ℓ=1 vh(∥β −ν(r) ℓ∥2)]Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (45) ≥ n ∑ i=1 ∫Rd ∑n ℓ=1 wh(∥β−ν(r) ℓ ∥2)[∥β−ν(r) ℓ ∥2−∥β−ν(r+1) ℓ ∥2] ∑n j=1 vh(∥β−ν(r) j ∥2) Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (46) = n ∑ i=1 ∫Rd ∑n ℓ=1 wh(∥β−ν(r) ℓ ∥2)[2ν(r+1)⊺ ℓ β−2ν(r)⊺ ℓ β−∥ν(r+1) ℓ ∥2+∥ν(r) ℓ ∥2] ∑n j=1 vh(∥β−ν(r+1) j ∥2) Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (47) = n ∑ ℓ=1 2ν(r+1)⊺ ℓ n ∑ i=1 ∫Rd wh(∥β−ν(r) ℓ ∥2)β ∑n j=1 vh(∥β−ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (48) − n ∑ ℓ=1 2ν(r)⊺ ℓ n ∑ i=1 ∫Rd wh(∥β−ν(r) ℓ ∥2)β ∑n j=1 vh(∥β−ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (49) − n ∑ ℓ=1 ∥ν(r+1) ℓ ∥2 n ∑ i=1 ∫Rd wh(∥β−ν(r) ℓ ∥2) ∑n j=1 vh(∥β−ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (50) + n ∑ ℓ=1 ∥ν(r) ℓ∥2 n ∑ i=1 ∫Rd wh(∥β−ν(r) ℓ ∥2) ∑n j=1 vh(∥β−ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (51) = n ∑ ℓ=1 [2ν(r+1)⊺ ℓ A(ν(r) ℓ;β(t),x,y) −2ν(r)⊺ ℓ A(ν(r) ℓ;β(t),x,y) (52) −∥ν(r+1) ℓ ∥2C(ν(r) ℓ,β(t),x,y) + ∥ν(r) ℓ∥2C(ν(r) ℓ,β(t),x,y)]. (53) 35 It then follows from (30),(31), and (32) that Q(Gν(r+1);G(t)) −Q(Gν(r);G(t)) (54) ≥ n ∑ ℓ=1 −∥ν(r+1) ℓ ∥2C(ν(r) ℓ,β(t),x,y) + 2∥ν(r+1) ℓ ∥2C(ν(r) ℓ;β(t),x,y) (55) −2ν(r)⊺ ℓ ν(r+1) ℓ C(ν(r) ℓ;β(t),x,y) + ∥ν(r) ℓ∥2C(ν(r) ℓ,β(t),x,y)] (56) = n ∑ ℓ=1 C(ν(r) ℓ,β(t),x,y)∥ν(r+1) ℓ −ν(r) ℓ∥2 (57) ≥0, (58) where the last inequality holds because C(ν(r) ℓ,β(t),x,y) > 0 by noticing that w is always positive. Next we show that Q(Gν;G(t)) has an upper bound. Notice that sup β ∣ϕσ (yi −x⊺ i β) 1 nhd n ∑ ℓ=1 vh(∥β −νℓ∥2)∣≤ ∥v∥∞ √ 2πσhd =∶B, (59) where ∥v∥∞= supt ∣v(t)∣. It then follows from (27) that sup ν Q(Gν;G(t)) ≤nlog B. (60) Hence the sequence Q(Gν(r);G(t)) converges. A.3 Proof of Theorem 2.3 Proof. For a distribution G on Rd with density function g, let P(β,G) = n ∑ i=1 log(fi(βi ∣xi,yi,g)), (61) where fi(βi ∣xi,yi,g) is defined in (12). We can write P(β,G) = n ∑ i=1 log [ ϕσ (yi −x⊺ i βi)g(βi) ∫Rd ϕσ (yi −x⊺ i β)g(β)dβ ] = n ∑ i=1 log [ϕσ (yi −x⊺ i βi)g(βi)] − n ∑ i=1 log [∫Rd ϕσ (yi −x⊺ i β)g(β)dβ]. 36 Using the definitions in (6) and (11), L(G) = L(G∣x,y,β) −P(β,G). Taking the expectation of each side with respect to β∣x,y,G(t), we have L(G) = Eβ∣x,y,G(t)(L(G)) = Eβ∣x,y,G(t)(L(G∣x,y,β)) −Eβ∣x,y,G(t)(P(β,G)) =∶Q(G;G(t)) −H(G;G(t)). Notice that L(G(t+1)) −L(G(t)) = [Q(G(t+1);G(t)) −Q(G(t);G(t))] −[H(G(t+1);G(t)) −H(G(t);G(t))]. We have already shown in Theorem 2.2 that Q(G(t+1);G(t)) ≥Q(G(t);G(t)). To show L(G(t+1)) −L(G(t)) ≥0, it remains to prove that H(G(t+1);G(t)) −H(G(t);G(t)) ≤0. This is true because for any arbitrary distribution G with density function g, H(G;G(t)) −H(G(t),G(t)) =Eβ∣x,y,G(t) [ n ∑ i=1 log(fi(βi ∣xi,yi,g))] −Eβ∣x,y,G(t) [ n ∑ i=1 log(fi(βi ∣xi,yi,g(t)))] =Eβ∣x,y,G(t) { n ∑ i=1 log [ fi(βi ∣xi,yi,g) fi(βi ∣xi,yi,g(t))]} = n ∑ i=1 Eβ∣x,y,G(t) {log [ fi(βi ∣xi,yi,g) fi(βi ∣xi,yi,g(t))]} ≤ n ∑ i=1 log {Eβ∣x,y,G(t) [ fi(βi ∣xi,yi,g) fi(βi ∣xi,yi,g(t))]} (by Jensen’s inequality and the concavity of the log function) = n ∑ i=1 log [∫ fi(β ∣xi,yi,g) fi(β ∣xi,yi,g(t))fi(β ∣xi,yi,g(t))dβ] = n ∑ i=1 log [∫fi(β ∣xi,yi,g)dβ] =0. 37 We can also conclude from Theorem 1 in (Jiang and Guntuboyina, 2025) that the incom- plete log likelihood L(G) is bounded from above because of the existence of a maximizer ˆG. By the monotone convergence theorem, L(G(t)) converges. Acknowledgments This work was partially supported by resources provided by the Office of Research Com- puting at George Mason University (URL: https://orc.gmu.edu). Data Availability Statement The music tone data that supports the findings of this study is openly available in the R packages mixtools (https://cran.r-project.org/web/packages/mixtools/index.html), fpc (https://cran.r-project.org/web//packages/fpc/index.html). The CO2-GDP data can be found at https://github.com/hanshengjiang/npmle git. References Benaglia, T., Chauveau, D., Hunter, D. R., and Young, D. (2009). mixtools: An R package for analyzing finite mixture models. Journal of Statistical Software, 32(6):1–29. Bolt, J., Inklaar, R., de Jong, H., and Van Zanden, J. L. (2018). Rebasing ‘maddison’: new income comparisons and the shape of long-run economic development. Maddison Project Working paper 10, University of Groningen, Groningen, The Netherlands. Maddison Project, version 2018; accessed via Our World in Data. Chae, M., Martin, R., and Walker, S. G. (2018). Convergence of an iterative algorithm to the nonparametric mle of a mixing distribution. Statistics & Probability Letters, 140:142– 146. 38 Chung, Y. and Lindsay, B. G. (2015). Convergence of the em algorithm for continuous mixing distributions. Statistics & Probability Letters, 96:190–195. Cohen, E. (1980). Inharmonic tone perception. Unpublished Ph. D. Dissertation, Stanford University. Comaniciu, D. and Meer, P. (2002). Mean shift: A robust approach toward feature space analysis. IEEE Transactions on pattern analysis and machine intelligence, 24(5):603– 619. Comaniciu, D., Ramesh, V., and Meer, P. (2003). Kernel-based object tracking. IEEE Transactions on pattern analysis and machine intelligence, 25(5):564–577. Davies, T. M. (2013). Scaling oversmoothing factors for kernel estimation of spatial relative risk. Epidemiologic Methods, 2(1):67–83. Davies, T. M. and Hazelton, M. L. (2010). Adaptive kernel estimation of spatial relative risk. Statistics in Medicine, 29(23):2423–2437. De Veaux, R. D. (1989). Mixtures of linear regressions. Computational Statistics & Data Analysis, 8(3):227–245. Deb, N., Saha, S., Guntuboyina, A., and Sen, B. (2021). Two-component mixture model in the presence of covariates. Journal of the American Statistical Association, pages 1–15. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from in- complete data via the em algorithm. Journal of the royal statistical society: series B (methodological), 39(1):1–22. DeSarbo, W. S. and Cron, W. (1988). A maximum likelihood methodology for clusterwise linear regression. Journal of classification, 5:249–282. 39 Dicker, L. H. and Zhao, S. D. (2016). High-dimensional classification via nonparametric empirical bayes and maximum likelihood inference. Biometrika, 103(1):21–34. Ding, C. S. (2006). Using regression mixture analysis in educational research. Practical Assessment, Research, and Evaluation, 11(1). Duong, T. (2024). ks: Kernel Smoothing. R package version 1.14.2. Friedlingstein, P., O’sullivan, M., Jones, M. W., Andrew, R. M., Hauck, J., Olsen, A., Peters, G. P., Peters, W., Pongratz, J., Sitch, S., et al. (2020). Global carbon budget 2020. Earth System Science Data Discussions, 2020:1–3. Gu, J. and Koenker, R. (2022). Nonparametric maximum likelihood methods for binary re- sponse models with random coefficients. Journal of the American Statistical Association, 117(538):732–751. Hawkins, D. S., Allen, D. M., and Stromberg, A. J. (2001). Determining the number of components in mixtures of linear models. Computational Statistics & Data Analysis, 38(1):15–48. Hennig, C. (2024). fpc: Flexible Procedures for Clustering. R package version 2.2-13. Hosmer, D. W. (1974). Maximum likelihood estimates of the parameters of a mixture of two regression lines. Communications in Statistics-Theory and Methods, 3(10):995–1006. Huang, M. and Yao, W. (2012). Mixture of regression models with varying mixing pro- portions: a semiparametric approach. Journal of the American Statistical Association, 107(498):711–724. Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of classification, 2(1):193– 218. 40 Hurn, M., Justel, A., and Robert, C. P. (2003). Estimating mixtures of regressions. Journal of computational and graphical statistics, 12(1):55–79. Jagabathula, S., Subramanian, L., and Venkataraman, A. (2020). A conditional gradient approach for nonparametric estimation of mixing distributions. Management Science, 66(8):3635–3656. Jiang, H. and Guntuboyina, A. (2025). A nonparametric maximum likelihood approach to mixture of regression. arXiv preprint arXiv:2108.09816v2. Jiang, W. and Zhang, C.-H. (2009). General maximum likelihood empirical Bayes estima- tion of normal means. The Annals of Statistics, 37(4):1647 – 1684. Kiefer, J. and Wolfowitz, J. (1956). Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. The Annals of Mathematical Statistics, pages 887–906. Koenker, R. and Mizera, I. (2014). Convex optimization, shape constraints, compound decisions, and empirical bayes rules. Journal of the American Statistical Association, 109(506):674–685. Laird, N. M. and Louis, T. A. (1991). Smoothing the non-parametric estimate of a prior distribution by roughening: A computational study. Computational statistics & data analysis, 12(1):27–37. Leisch, F. (2004). Flexmix: A general framework for finite mixture models and latent class regression in r. Journal of Statistical Software, 11(8):1–18. Melnykov, V. and Melnykov, I. (2012). Initializing the em algorithm in gaussian mix- ture models with an unknown number of components. Computational Statistics & Data Analysis, 56(6):1381–1395. 41 Nguyen, X. (2011). Wasserstein distances for discrete measures and convergence in non- parametric mixture models. Citeseer. Forschungsbericht. Nishiyama, T. (2020). Minimization problems on strictly convex divergences. arXiv preprint arXiv:2001.01079. Ozertem, U. and Erdogmus, D. (2011). Locally defined principal curves and surfaces. The Journal of Machine Learning Research, 12:1249–1286. Parr, R. G. and Weitao, Y. Y. (1994). Density-Functional Theory of Atoms and Molecules. Oxford University Press. Polyanskiy, Y. and Wu, Y. (2020). Self-regularizing property of nonparametric maximum likelihood estimator in mixture models. arXiv preprint arXiv:2008.08244. Qiao, W. and Polonik, W. (2021). Algorithms for ridge estimation with convergence guar- antees. arXiv preprint arXiv:2104.12314. Quandt, R. E. (1972). A new approach to estimating switching regressions. Journal of the American statistical association, 67(338):306–310. Quandt, R. E. and Ramsey, J. B. (1978). Estimating mixtures of normal distributions and switching regressions. Journal of the American statistical Association, 73(364):730–738. R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Ritchie, H., Roser, M., and Rosado, P. (2020). Co2 and greenhouse gas emissions. Our World in Data. https://ourworldindata.org/co2-and-greenhouse-gas-emissions. Saha, S. and Guntuboyina, A. (2020). On the nonparametric maximum likelihood estima- tor for gaussian location mixture densities with application to gaussian denoising. The Annals of Statistics, 48(2):738–762. 42 Sain, S. R., Baggerly, K. A., and Scott, D. W. (1994). Cross-validation of multivariate densities. Journal of the American Statistical Association, 89(427):807–817. Schlattmann, P. (2009). Medical applications of finite mixture models, volume 1. Springer. Terrell, G. R. (1990). The maximal smoothing principle in density estimation. Journal of the American statistical association, 85(410):470–477. UN (2022). United nations world population prospects 2022. Accessed via Our World in Data. Vardi, Y. and Lee, D. (1993). From image deblurring to optimal investments: Maximum likelihood solutions for positive linear inverse problems. Journal of the Royal Statistical Society Series B: Statistical Methodology, 55(3):569–598. Vardi, Y., Shepp, L. A., and Kaufman, L. (1985). A statistical model for positron emission tomography. Journal of the American statistical Association, 80(389):8–20. Viele, K. and Tong, B. (2002). Modeling with mixtures of linear regressions. Statistics and Computing, 12:315–330. Wedel, M. and Kamakura, W. A. (2000). Market segmentation: Conceptual and method- ological foundations. Springer Science & Business Media. Yao, W. and Song, W. (2015). Mixtures of linear regression with measurement errors. Communications in Statistics-Theory and Methods, 44(8):1602–1614. 43
EM Approaches to Nonparametric Estimation for Mixture of Linear Regressions Andrew Welbaum 17, 2025 Abstract In a mixture of linear regression model, the regression coefficients are treated as random vectors that may follow either a continuous or discrete distribution. We propose two Expectation-Maximization (EM) algorithms to estimate this prior distribution. The first algorithm solves a kernelized version of the nonparametric maximum likelihood estimation (NPMLE). This method not only recovers continuous prior distributions but also accurately estimates the number of clusters when the prior is discrete. The second algorithm, designed to approximate the NPMLE, targets prior distributions with a density. It also performs well for discrete priors when combined with a post-processing step. We study the convergence properties of both algorithms and demonstrate their effectiveness through simulations and applications to real datasets. Keywords: Mixture of linear regressions, EM algorithm, nonparametric maximum likelihood estimation, gradient ascent, mixture models, clustering 1 16 Oct 2025 1 Introduction The mixture of linear regression model extends the multiple linear regression by allowing the coefficient vector to have a continuous or discrete prior distribution, and has a wide number of applications when the response and covariates can have individualized or clustered linear relation, including market segmentation (Wedel and Kamakura, 2000), medical studies (Schlattmann, 2009), educational research (Ding, 2006), and various industry and economic studies, such as housing construction (Quandt, 1972), wage prediction (Quandt and Ramsey, 1978), and climate-economic growth relationships (Yao and Song, 2015). The model we consider in this article is as follows: There are n independent observations (x1,y1),⋯,(xn,yn) ∈Rd × R generated from the model yi = x⊺ i βi + σzi, (1) where β1,⋯,βn,z1,⋯,zn are independent of β1,⋯,βn iid∼G∗and z1,⋯,zn iid∼N(0,1), and σ > 0. Here, G∗is a probability distribution on Rd, which may be discrete or continuous. We will assume that σ is known, but G∗, the true distribution, is not known and needs to be estimated. In practice, an unknown σ can be estimated using cross-validation. The estimation of G∗can be further used to make statistical inference for individual βi through a plug-in approach. The posterior distribution of βi given (xi,yi) with respect to a prior distribution G (which is G∗if the prior is correctly identified) is P(βi ∈A ∣xi,yi,G) = ∫A φσ (yi -x⊺ i β)dG(β) ∫Rd φσ (yi -x⊺ i β)dG(β), (2) for any measurable set A ⊂Rd, where φ is the standard normal density, and φσ(⋅) = (1/σ)φ(⋅/σ). The posterior mean of βi given (xi,yi) is E(βi ∣xi,yi,G) = ∫Rd φσ (yi -x⊺ i β)βdG(β) ∫Rd φσ (yi -x⊺ i β)dG(β) . (3) We can then obtain a point estimate of βi if we replace G by an estimator of G∗in E(βi ∣xi,yi,G). 2 When G∗is discrete, an additional interesting task is to cluster (xi,yi)'s based on this estimation. Suppose G∗(β) = ∑K j=1 πjδβj(β), where K is unknown, πj represents the weight of the component j such that ∑K k=1 πk = 1 and πk > 0, and δβj is the Dirac delta with point mass at βj. The posterior distribution in (2) when G = G∗takes the form of K ∑ j=1 [ πjφσ (yi -x⊺ i βj) ∑K k=1 πkφσ (yi -x⊺ i βk)]δβj(β). (4) In an oracle setting, if we knew the true prior distribution G∗= G, we could use this to identify cluster membership of the points (xi,yi)'s by assigning (xi,yi) to the component j with the highest posterior probability, i.e., arg max j∈{1,...,K} πjφσ (yi -x⊺ i βj) ∑K k=1 πkφσ (yi -x⊺ i βk). (5) If the estimator of G∗is in the form of ∑ ˆ K j=1 ˆπjδˆβj, the above clustering rule is updated by replacing {(πk,βk) ∶k = 1,...,K} with {(ˆπk, ˆβk) ∶k = 1,..., ˆK}. A nonparametric approach to estimating G∗is nonparametric maximum likelihood estimation (NPMLE), which does not assume a parametric form of G∗. The NPMLE for mixture models was first described by Kiefer and Wolfowitz (1956). Some recent NPMLErelated works include Jiang and Zhang (2009); Koenker and Mizera (2014); Dicker and Zhao (2016); Jagabathula et al. (2020); Polyanskiy and Wu (2020); Saha and Guntuboyina (2020); Deb et al. (2021); Gu and Koenker (2022); and Jiang and Guntuboyina (2025). Of note, Jiang and Zhang (2009) introduce a general maximum likelihood empirical Bayesian method for estimatng a vector of means in a mixture of Gaussians problem, assuming independent and identically distributed errors. Their method aims to find the MLE of the distribution of the unknown parameters with a discrete set of support points using an EM algorithm. Koenker and Mizera (2014) introduce an interior point method that estimates the nonparametric MLE of the Gaussian mixture problem. Also, Jiang and Guntuboyina (2025) describe an NPMLE approach to the mixture of linear regression problem using a method called Conditional Gradient Method (CGM). 3 We now describe the NPMLE to estimate G∗. The log-likelihood function for model (1) with a prior distribution G given (x1,y1),⋯,(xn,yn) is G ↦L(G) ∶= n ∑ i=1 log f G xi(yi), (6) where f G xi(yi) = ∫φσ (yi -x⊺ i β)dG(β), i = 1,⋯,n. (7) We seek a maximizer of L over the set G of probability distributions G supported on X ⊂Rd. That is, the NPMLE problem can be stated as ˆG ∈arg max G∈G L(G). (8) If X is compact or satisfies a regularity condition met by Rd, (Jiang and Guntuboyina, 2025) establishes the existence of an NPMLE ˆG that is a probability measure supported on at most n points in X and is a consistent estimator of G∗in terms of the L ́evy-Prokhorov metric. If G is the class of mixture of K probability distributions where K is known, that is, G = GK ∶= {G = K ∑ j=1 πjδβj ∶ K ∑ j=1 πj = 1, πj > 0, βj ∈Rd}, then the model becomes a finite mixture of linear regressions, and this problem can be solved by the regular EM algorithm for a known number of components (Leisch, 2004). The modern EM algorithm was formalized in Dempster et al. (1977). DeSarbo and Cron (1988) introduced an EM algorithm for the cluster of linear regression problem with K components, which extended previous work on estimating two-component models such as in Quandt (1972); Hosmer (1974); and Quandt and Ramsey (1978). In the classic EM algorithm, the number of components, if it is unknown, must be estimated prior to initiating the algorithm. Some approaches to estimating this number include methods based on the incomplete and complete log-likelihoods, Fischer information matrix, Bayesian criteria, and information theory (e.g., AIC) (Hawkins et al., 2001; 4 Melnykov and Melnykov, 2012). On the other hand, methods based on NPMLE such as the approach in Jiang and Guntuboyina (2025) do not appear to be able to accurately detect the true number of components in the mixture model. Although the consistency for NPMLE has been established in Jiang and Guntuboyina (2025), in practice, a true cluster may correspond to a few small clusters detected by NPMLE, where the sum of the estimated weights approximate the true weight of that single cluster (see Figure 1). Figure 1: Estimated regression lines for a mixture of linear regression model with three true components: y = 3 -x, y = 1 + 1.5x, and y = -1 + .5x with true weights (.3, .3, and .4), respectively, and noise with σ = .5. Gray points are (xi,yi)'s of size n = 5000. Left panel: NPMLE solution estimated using the CGM method in Jiang and Guntuboyina (2025). There are 10 estimated regression lines with weights ranging from 0.012 to 0.343, although three clusters appear to be formed with aggregated weights 0.310, 0.296, and 0.393. Right panel: NPKMLE solution estimated by an EM algorithm developed in this paper. There are exactly 3 estimated regression lines with weights 0.314, 0.299 and 0.386, respectively. In our approach, instead of maximizing (8) without imposing a form restriction on G, we restrict the set G to Gkde, which is the set of kernel density estimators with a given kernel function and a bandwidth based on n points in Rd and solve ˆGkde ∈arg max G∈Gkde L(G). (9) 5 We call this the nonparametric kernel maximum likelihood estimator (NPKMLE). The optimization is reduced to finding the n points that define the KDE solution since the kernel function and the bandwidth are given. We develop an EM algorithm that aims to find the solution to the above optimization problem. The algorithm is able to estimate a mixture of linear regression model with a true prior distribution that is discrete or continuous, as opposed to the classical EM-algorithm's restriction to estimating the parameters of a mixture of regression model with a finite number of components. In the case of a discrete prior distribution, our algorithm does not require a prior estimate of the number of components, and is able to determine the true number of components automatically (see Figure 1). Hence the output of our algorithm can be used for clustering in the finite mixture of linear regression model. If the prior distribution G∗is known to have a density, we also develop another EM algorithm that converges to NPMLE. This EM algorithm falls short when the density does not exist. However, specific post-processing steps tailored for the underlying structure of G∗can remediate the issue, for example, when G∗is supported on a finite set or has a low dimensional manifold as it support. This article is structured as follows: In Section 2, we present the mixture of linear regression problem and discuss our approaches to solving it. We introduce an EM algorithm that can be used when the true distribution G∗has a density, as well as a different EM algorithm when we drop this assumption. We also discuss the algorithms' properties. In Section 3, we demonstrate the performance of our algorithms using simulations and show their utility in real-world applications. All the proofs are provided in the Appendix. 2 Methods In the discussion below, we distinguish two cases between G∗has a density and G∗does not necessarily have a density (e.g., when G∗is discrete with an unknown number of 6 components), and describe an EM algorithm for each case. Our starting point is the NPMLE. Note that L(G) in (6) is the "incomplete-data" log-likelihood function (meaning that the information of βi for each pair (xi,yi) is not available). If G has a density g, then, abusing the notation L, which was used for L(G), the log-likelihood can be written as g ↦L(g) ∶= n ∑ i=1 log [∫φσ (yi -x⊺ i β)g(β)dβ]. (10) The corresponding "complete-data" log-likelihood function is L(G∣x,y,β) = n ∑ i=1 log [φσ (yi -x⊺ i βi)g(βi)], (11) where x = (x1,⋯,xn), y = (y1,⋯,yn), and β = {β1,⋯,βn}. Here g should be understood as the density of G if it exists; If G is discrete such that G = ∑K j=1 πjδβj, then g(β) = ∑K j=1 πj1βj(β), where 1βj is the indicator function at βj. 2.1 EM-NPMLE algorithm: when G∗has a density function We first assume G∗is a continuous distribution for which a density function g∗exists. In this case, the posterior density of βi given (xi,yi) with respect to g∗exists and is fi(β ∣xi,yi,g∗) = φσ (yi -x⊺ i β)g∗(β) ∫Rd φσ (yi -x⊺ i β)g∗(β)dβ . (12) We provide an iterative algorithm to approximate g∗in what follows. With an initial estimate of g∗, we can obtain the posterior density of βi given (xi,yi) with respect to this initial estimate. Then it is a natural idea to use the average of the posterior densities across all βi to update the estimate of g∗, and the process is continued iteratively until convergence. More formally, with an initialization density g(0) for βi's, for t = 0,1,2,⋯, the algorithm iterates between f (t+1) i (β) ≡fi(β ∣xi,yi,g(t)) = φσ (yi -x⊺ i β)g(t)(β) ∫Rd φσ (yi -x⊺ i β)g(t)(β)dβ , i = 1,⋯,n, (13) 7 and g(t+1) = 1 n n ∑ i=1 f (t+1) i . (14) It turns out that this simple but elegant algorithm can be interpreted as an EM algorithm, and it is inspired by a similar algorithm in the setting of mixture of distributions (Vardi et al., 1985; Laird and Louis, 1991; Vardi and Lee, 1993; Chung and Lindsay, 2015; Chae et al., 2018). We will call this an EM-NPMLE algorithm. To see this is an EM algorithm, taking the expectation of L(g∣x,y,β) (we again abuse notation L) with respect to the posterior density of β given x,y,g(t), we get Eβ∣x,y,g(t)L(g∣x,y,β) = n ∑ i=1 ∫Rd log [φσ (yi -x⊺ i β)g(β)]φσ (yi -x⊺ i β)g(t)(β)dβ ∫Rd φσ (yi -x⊺ i β)g(t)(β)dβ . (15) This is the E-step in an EM algorithm. Let Gden be the space of distributions on Rd for which their density functions exist. We would like to find a maximizer of (15) in Gden. Note that ∫Rd g(u)du = 1 for G ∈Gden such that g is the density function of G, which is a restriction in the optimization. Define the Lagrangian function for the optimization as F(g;g(t)) ∶= Eβ∣x,y,g(t)L(g∣x,y,β) -n[∫Rd g(u)du -1], (16) where n is the Lagrange multiplier. Notice that arg max G∈Gden F(g;g(t)) = arg max G∈Gden Eβ∣x,y,g(t)L(g∣x,y,β). (17) To find the solution to (17), we can first take the derivative of F(g;g(t)) with respect to g, which is a functional derivative. In general, the functional derivative of a functional D(f) is defined as δD/δf such that lim ε→0 [D[f + εη] -D[f] ε ] = ∫ δD δf (x)η(x)dx, (18) where η is an arbitrary function and ε is a scalar value (Parr and Weitao, 1994). 8 Taking the functional derivative of F(g;g(t)) with respect to g, we obtain δF(g;g(t)) δg = n ∑ i=1 φσ (yi -x⊺ i β)g(t)(β) g(β)∫Rd φσ (yi -x⊺ i β)g(t)(β)dβ -n. (19) By letting δF(g;g(t)) δg = 0 and setting the solution as g(t+1), we recover the algorithm in (14): g(t+1)(β) = 1 n n ∑ i=1 φσ (yi -x⊺ i β)g(t)(β) ∫Rd φσ (yi -x⊺ i β)g(t)(β)dβ . (20) The following proposition states that (14) provides the solution to the M-step. Proposition 1. In the above setting, g(t+1) = arg max G∈Gden Eβ∣x,y,g(t)L(g∣x,y,β), t = 0,1,2,⋯, (21) that is, g(t+1) is the unique maximizer of the expectation of the complete log-likelihood given g(t). The output of this algorithm converges to NPMLE under mild assumptions, and the convergence limit g(∞) can then be used as a final estimate for g∗. The following theorem is similar to Proposition 2 in Chae et al. (2018) and its proof is omitted. Recall that X is the support of G∗. Theorem 2.1. Suppose for every i = 1,⋯,n, that β →φσ (yi -x⊺ i β) is a continuous and strictly positive map on X. In addition, assume that for each ε > 0 there exists a compact X0 ⊂X where sup β∈X ∁ 0 φσ (yi -x⊺ i β) 0 and a 10 kernel function V based on n points. The optimization problem becomes NPKMLE in (9). The kernel function V is a probability density function on Rd and we specifically require that it is differentiable and spherically symmetric such that V (x) = v(∥x∥2) for some v, where v ∶R≥0 →R≥0 is called the profile of V (Comaniciu and Meer, 2002). An example of V is the density function of the standard normal distribution on Rd. We can write Gkde ≡Gkde(v,h,n) = { 1 nhd n ∑ l=1 v(∥⋅- ̃βl∥2 h2 ) ∶ ̃β1,..., ̃βn ∈Rd}. (22) Here we fix v and h so that each element in Gkde is determined by ̃β = { ̃β1,..., ̃βn} ⊂Rd, and the corresponding distribution is denoted Gβ. The optimization in (9) is reduced to finding the ̃β associated with the NPKMLE, with the solution denoted by ˆβ. The empirical distribution based on ˆβ (which is discrete) is used to estimate G∗and cluster all (xi,yi)'s using a plug-in approach as described in Section 1. The above approach works even when the true distribution is discrete, because we can always get a kernel density estimate using a sample, even from a discrete distribution. To solve NPKMLE, we will again apply the EM idea. Recall that given a current estimate G(t) of G∗with density g(t), the expectation of "complete-data" log-likelihood in (11) with respect to the posterior distribution is given in (15). Since we require g(t) to be a KDE in Gkde, G(t) is characterized by a set of n points in Rd denoted by β(t) = {β(t) l ∈ Rd ∶l= 1,⋯,n}, such that we can write G(t) = Gβ(t). This is the E-step of our algorithm. For the M-step, we seek a g(t+1) such that g(t+1) ∈arg max g∈Gkde Q(G;G(t)), (23) where Q(G;G(t)) = Eβ∣x,y,G(t)L(G∣x,y,β). (24) In other words, we would like to maximize the function in (15) with respect to g but require g to take the form of 11 g(β) = g(t+1)(β) = 1 nhd n ∑ l=1 v(∥β -β(t+1) l ∥2 h2 ), (25) where {β(t+1) l ∈Rd ∶l= 1,⋯,n} =∶β(t+1) is a new set of points to be determined, which are viewed as an update from β(t). In what follows we also use β(t) as a vector by stacking all its elements together. Since any g ∈Gkde is only determined by n (unspecified) points in Rd, the optimization problem in (23) is reduced to finding these n points. There is no closed form solution to this optimization problem in general, and therefore a gradient-ascent type algorithm is proposed, which features adaptive step sizes that do not need to be chosen. The gradient-ascent algorithm has no guarantee to converge to a global maximum, and hence the optimization in (23) is understood to find a local maximum. This leads to an EM algorithm involving two loops, as given in Algorithm 1 below, which we call EM-NPKMLE. Algorithm 1 EM-NPKMLE algorithm Input: x,y, β(0) = {β(0) 1 ,⋯,β(0) n } For t = 0,1,2,... ν(0) = β(t) For r = 0,1,2,... ν(r+1) ←Ð ξ(ν(r);β(t),x,y), where ξ is given in (32) End β(t+1) ←Ð ν(∞) End Output: empirical distribution of {β(∞) j ∶j = 1,⋯,n} The key step in the algorithm is the update from ν(r) to ν(r+1), which can be indeed understood as a gradient ascent method because it can be expressed in the following form: 12 ν(r+1) = ξ(ν(r);β(t),x,y) ∶= ν(r) + 1 C(ν(r),β(t),x,y)∇Q(Gν(r);G(t)), (26) where ν(r) is the current update approaching to β(t+1), ∇Q is the gradient of Q(Gν;G(t)) with respect to ν, and 1/C(ν(r),β(t),x,y) is an adaptive step size-see (31) below for the definition of C. We now derive the result in (26). Denote vh(∥x∥2) ∶= v(∥x∥2/h2) and v′ h(∥x∥2) ∶= v′(∥x∥2/h2). Also, let w = -v′ and wh = -v′ h. Plugging (25) into (15), we get Q(Gν;G(t)) =Eβ∣x,y,G(t)L(Gν∣x,y,β) = n ∑ i=1 ∫Rd log [φσ (yi -x⊺ i β) 1 nhd ∑n l=1 vh(∥β -νl∥2)]Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ , (27) where Si(β,β(t),x,y) = φσ (yi -x⊺ i β) n ∑ l=1 vh(∥β -β(t) l∥2). (28) Taking the derivative of Q with respect to νl, l= 1,⋯,n, where νlis the lth entry of ν, we have ζ(νl;β(t),x,y) ∶= ∂Q(Gν;G(t)) ∂νl = n ∑ i=1 ∫Rd wh(∥β-νl∥2)(β-νl) ∑n m=1 vh(∥β-νm∥2) Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ , l= 1,⋯,n. (29) Setting ζ(νl;β(t),x,y) = 0 we can write n ∑ i=1 ∫Rd wh(∥β-νl∥2)β ∑n m=1 vh(∥β-νm∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ ́11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 ̧11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111¶ =A(νl;β(t),x,y) (30) =νl n ∑ i=1 ∫Rd wh(∥β-νl∥2) ∑n m=1 vh(∥β-νm∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ , l= 1,⋯,n ́111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111 ̧1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111¶ =C(νl,β(t),x,y) . (31) 13 Or equivalently, ξ(νl;β(t),x,y) ∶= A(νl;β(t),x,y) C(νl,β(t),x,y) = νl, l= 1,⋯,n. (32) Hence, the gradient of Q(Gν;G(t)) with respect to ν is ∇Q(Gν;G(t)) (33) =( ζ(ν1;β(t),x,y),⋯,ζ(νn;β(t),x,y) ) ⊺ =( C(ν1,β(t),x,y)[ξ(ν1;β(t),x,y) -ν1],⋯,C(νn,β(t),x,y)[ξ(νn;β(t),x,y) -νn] ) ⊺ , which justifies (26). We now show that Algorithm 1 generates an increasing sequence of (27) as a function of ν(r). Here we require that the profile v is monotonically decreasing and function log(∑n i=1 v(xi)) is convex, which is satisfied when V is the Gaussian kernel since the LogSumExp function is convex.1 Theorem 2.2. In the current setting, the expectation of the complete log-likelihood function Q in (27) as a function of ν(r) is monotonically non-decreasing, i.e., for r = 0,1,⋯, Q(Gν(r+1);G(t)) ≥Q(Gν(r);G(t)), and Q(Gν(r);G(t)) converges as r →∞, where Gν(r+1), Gν(r), and G(t) are the distributions of the kernel density estimations of G using ν(r+1), ν(r), and β(t), respectively. The convergence of ν(r) is not implied by the above result but will be assumed in what follows. Now we show the monotonicity of the incomplete likelihood function in (6) as a function of the sequence G(t). Theorem 2.3. The incomplete likelihood function L(G) in (6) is bounded from above, and is a monotonically non-decreasing function of the sequence G(t), that is, L(G(t+1)) ≥ L(G(t)), and hence L(G(t)) converges as t →∞. 1See https://www.math.uwaterloo.ca/~hwolkowi/henry/teaching/w10/367.w10/367miscfiles/ pages48to60.pdf. 14 In the EM-NPKMLE algorithm, we take the sequence νr in our Algorithm to convergence, which can be viewed as a complete M-step to locally maximize (23), and can be computationally expensive sometimes. As a lightweight variant, we may just run the gradient ascent step in the inner loop of the algorithm one time instead of taking it to convergence, which is a generalized (or gradient) EM (GEM) algorithm, and is called GEM-NPKMLE. Similar to the result in Theorem 2.3, the incomplete likelihood function L evaluated at the distribution sequence G produced by GEM-NPKMLE algorithm is also non-decreasing and convergent. 2.3 Post-processing EM-NPMLE: when G∗possibly does not have a density Although the EM-NPMLE algorithm presented in Section 2.1 to approximate NPMLE is not directly suitable for the case where the true prior distribution is discrete, if a clustering algorithm is applied as a post-processing step to a random sample generated from the density g(t) for t sufficiently large produced in (14), we can still obtain a good result to estimate the components in the finite mixture of the linear regression model. The key observation is that as t grows to infinity, g(t) gets closer to the NPMLE and the true prior distribution, formed by peaks near the true support points (see Figure 2). The clustering algorithm we choose to use in this postprocessing step is the mean shift algorithm, which does not require the specification of the number of clusters, and naturally uses local modes (peaks in density) as the central idea of clustering. The mean shift algorithm is a densitybased clustering algorithm that iteratively calculates a local weighted mean starting at a sample point. A kernel function acts as a weight function for this local weighted mean, and a bandwidth determines the level of smoothing. A sequence of local weighted means is iteratively calculated until convergence, which can be shown to approximate the gradient flow of the kernel density estimator using the given sample. All sample points that converge 15 to the same local mode are considered part of the same cluster (Comaniciu et al., 2003). The post-processing idea can be applied not only when we know the true prior distribution is discrete, but can also be extended to some other settings when we have some specific information about the distribution. For example, an interesting scenario occurs when the true prior distribution is supported on a low-dimensional submanifold in Rd. In this case, the distribution does not have a density with respect to the Lebesgue measure, and hence does not satisfy the assumptions when we apply the EM-NPMLE algorithm in Section 2.1. However, we can use a random sample from the output of this EM-NPMLE algorithm as initialization and apply the subspace constrained mean shift (SCMS) algorithm (Ozertem and Erdogmus, 2011; Qiao and Polonik, 2021), which extends the mean shift algorithm to extract low-dimensional features called ridges from data-in fact, we can view a set of local modes as a 0-dimensional submanifold for which the mean shift algorithm is applicable. Like the mean shift algorithm, the SCMS algorithm also requires a kernel (weight) function and a smoothing bandwidth. However, the SCMS projects the gradient of the kernel density estimator onto a subspace spanned by a subset of the eigenvectors of the Hessian matrix using the kernel density estimator. We demonstrate this approach in a simulation study in Section 3.1.2. We emphasize that the specific post-processing procedure depends on the further knowledge or assumption about the true prior distribution. Jiang and Guntuboyina (2025) develop an algorithm to approximate NPMLE, and in order to improve the estimation of the clusters when the prior distribution is discrete, they add a BIC trimming step, which has a similar purpose as the mean shift post-processing step using the output of EM-NPMLE algorithm in Section 2.1. In practice, it can be difficult to decide whether the prior distribution is continuous or discrete (see the CO2 vs GDP real data example in Section 3.2), which can create challenges of determining the best post-processing steps to use. As a comparison, the EM-NPKMLE 16 algorithm we present in Section 2.2 can be used to handle both discrete and continuous prior distributions without requiring a post-processing step. 3 Numerical Results For the EM-NPMLE algorithm introduced in Section 2.1, it requires an initial density g(0), which can be chosen as a (non-informative) uniform distribution over a bounded subset in Rd. For the EM-NPKMLE algorithm developed in Section 2.2, we need a set of initial points β(0), which can be chosen by sampling from the same uniform distribution. Alternatively, one can also use a random sample from the estimated density after running the EM-NPMLE algorithm. This creates initial data points that reflect the underlying distribution G∗more accurately than assuming no prior knowledge of the distribution G∗and using samples from an arbitrary distribution (e.g., a uniform distribution) as the starting values. Our simulation results below indicate that the choice of the initial points β(0) does not have a strong influence on the results, as the performance of the algorithm is good for either choice described above. We used the Gaussian kernel as the kernel function V for EM-NPKMLE in all the simulations below. We propose using a bandwidth based on the maximal smoothing principle as described in Terrell (1990). Based on this principle, the bandwidth is chosen based on the maximum smoothing consistent with the density's estimated dispersion. We find this approach advantageous because by using such a bandwidth, we guard against choosing a smaller bandwidth that may introduce spurious features, such as additional local modes in the estimated distribution that do not exist in G∗. The bandwidth is defined as follows: hOS = U × [(d + 8) d+6 2 πd/2R(V ) 16nΓ(d+8 2 )d(d + 2) ] 1 d+4 , (34) where R(V ) = ∫V 2, which is equal to 1/(4π) when V is a Gaussian kernel (Terrell, 1990; Davies, 2013). For U, we calculate the Gaussian-scaled interquartile range (IQR) (i.e., 17 IQR/1.34, a robust estimator of the standard deviation of a normal density) of the values sampled from the estimated density in each dimension in our data and then take the mean, as proposed in Davies and Hazelton (2010). We emphasize that this bandwidth selection strategy is used only when the initialization is based on sampling from the EM-NPMLE output. Otherwise an adjustment may be needed for a different initilization. The numerical study includes simulations where the true distribution is discrete and continuous. For the simulation using a discrete distribution, when we examined the performance of the EM-NPKMLE, we tested two different initialization procedures. We also ran the GEM-NPKMLE with a single iteration in the inner loop, as discussed at the end of Section 2.2. We compared the EM-NPKMLE performance with the mean-shift algorithm as a post-processing step after sampling from the output density based on the EM-NPMLE algorithm, and with the CGM developed in Jiang and Guntuboyina (2025). In addition, we included a study verifying that cross-validation can effectively estimate σ that was assumed to be known. For the continuous distribution case, we generated one simulation and compared the results visually using the EM-NPMKLE, the EM-NPMLE with and without a post-processing step, and the CGM. We also implemented our EM algorithms in two real applications. We used R version 4.1.2 (R Core Team, 2023) for all the numerical results except for the CGM method in Jiang and Guntuboyina (2025), where we used the Python code referenced in their paper. 3.1 Simulations 3.1.1 Simulation 1 We used the following model to test the algorithm, which has also been used in Jiang and Guntuboyina (2025). The mixture of linear regression model has three components with 18 prior weights .3, .3, and .4, respectively: y = 3 -x + ε, y = 1 + 1.5x + ε, (35) y = -1 + .5x + ε, where the noise ε has a N(0,σ2) distribution with σ = .5. The x values were generated from a uniform distribution on [-1,3]. We ran our algorithm on 200 simulated datasets (see Figure 3 for an example). We assume that σ is known until we present the result using cross-validation. Figure 3: An example of a simulated dataset from the model in (35) for σ = .5 and n=500 (left panel) and the corresponding true components in the mixture of linear regression model represented by different colors (right panel). In our simulation experiment, we used the uniform distribution over [-4,4]2 as the initialization for the EM-NPMLE algorithm, to iteratively calculate g(t+1) using the expression in (14) until the L2 distance between g(t) and g(t+1) fell below a predefined small threshold. The final estimated density was used to sample n points as the starting points for the EM-NPKMLE algorithm. We stepped through sample sizes starting from 500 until 10,000. The EM-NPKMLE algorithm produced a set of points that defines a KDE solution 19 to (23). Suppose its empirical distribution can be represented by ˆG = ∑ ˆ K j=1 ˆπjδˆβj where the ˆβj's are all distinct, that is, we aggregate the solution points according to their unique values. For a mixture of linear regression model that has K components with K unknown, the true prior distribution G∗= ∑K j=1 πjδβj is estimated by ˆG, where each βj is estimated by the closest ˆβj, and each πj by the associated ˆπj. To cluster a pair of points (xi,yi), it is assigned to a class j for which the posterior probability is maximized, i.e., arg max j ˆπjφσ (yi -x⊺ i ˆβj) ∑ ˆ K k=1 ˆπkφσ (yi -x⊺ i ˆβk) . (36) We report the average adjusted Rand index (Hubert and Arabie, 1985), the proportion of samples that were correctly identified as having three components, and the average Wasserstein-2 distance (Nguyen, 2011) between the probability measures ˆG produced by the algorithm in question and G∗, i.e., W2(G∗, ˆG) ∶= ⎡⎢⎢⎢⎢⎣ min γ∈Γ(π,ˆπ) K ∑ i=1 ˆ K ∑ j=1 ∣∣βi -ˆβj∣∣2γij ⎤⎥⎥⎥⎥⎦ 1/2 , (37) where γ = (γij)1≤i≤K,1≤j≤ˆ K ∈[0,1]K× ˆ K, Γ(π, ˆπ) is the set of all matrices γ satisfying ∑K i=1 γij = ˆπj, ∑ ˆ K j=1 γij = πi, where π = {π1,⋯,πK} are the true weights, and ˆπ = {ˆπ1,⋯, ˆπ ˆ K} are the estimated weights; and βi's are the true coefficients and ˆβj's are the estimated coefficients (see Table 1). Additionally, note that some points at the intersections of the linear regression components will unavoidably be mislabeled using (36) by any estimators of G∗, which results in an adjusted Rand index less than 1. As an oracle reference for our algorithm, we calculated the average adjusted Rand index when G was specified as G∗, and the posterior probabilities were directly calculated. We also report the bias and standard deviation of the estimated coefficients and weights when three components, the true number of components, were estimated. EM-NPKMLE: For the results of the EM-NPKMLE algorithm, see Table 1 for information on the proportion of simulations in which the true number of components were 20 detected, and Table 2 for bias and standard deviation of the errors. Standard deviations are also reported for all values (in parentheses), where applicable. Table 1: Results of Simulation 1 using EM-NPKMLE algorithm. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Distance 500 0.415 (0.119) 0.665 (0.033) 0.540 0.818 (0.137) 1000 0.567 (0.034) 0.663 (0.023) 0.990 0.562 (0.097) 5000 0.645 (0.011) 0.664 (0.010) 0.995 0.343 (0.080) 10000 0.651 (0.008) 0.663 (0.008) 0.995 0.288 (0.081) Table 2: Results of Simulation 1 using EM-NPKMLE algorithm. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.360, 0.227)±(0.053, 0.074) (-0.565, -0.363)±(0.234, 0.138) (0.460, 0.230)±(0.119, 0.077) π: bias ± SD -0.005±0.024 -0.026±0.059 0.031±0.056 1000 β: bias ± SD (-0.291, 0.157)±(0.037, 0.042) (-0.397, -0.164)±(0.112, 0.062) (0.297, 0.143)±(0.048, 0.046) π: bias ± SD 0.002±0.019 -0.003±0.017 0.001±0.019 5000 β: bias ± SD (-0.155, 0.033)±(0.014, 0.016) (-0.157, 0.003)±(0.032, 0.018) (0.149, 0.038)±(0.019, 0.014) π: bias ± SD -0.001±0.008 0±0.007 0.001±0.009 10000 β: bias ± SD (-0.112, 0.004)±(0.011, 0.010) (-0.115, 0.013)±(0.018, 0.012) (0.113, 0.027)±(0.013, 0.010) π: bias ± SD -0.001±0.006 0±0.006 0.001±0.006 The results suggest that our algorithm produces consistent estimates of the β components and weights, as well as a consistent estimator of the number of components. As n increases, the values of these estimates approach the true parameter values and true number of components. As n gets larger, we observe that all of the points that are mislabeled are at the intersections of the components, and the slightly worse performance of the estimated G from our algorithm than using G∗is due to the bias of the β estimates creating intersection areas that are slightly larger than when G is set to G∗. Since we use the bandwidth based on the oversmoothing principle, we accept slightly larger bias in exchange for avoiding detecting spurious modes (components) in the data. In what follows we compare the above results using the EM-NPKMLE algorithm with those using the the EM-NPMLE algorithm with mean shift as a post-processing step as 21 described in Section 2.3 and the conditional gradient method (CGM) as described in Jiang and Guntuboyina (2025). We stepped through the same sequence of sample sizes as in Tables 1 and 2. EM-NPMLE and mean-shift post-processing: As shown in Tables 3 and 4, the mean shift algorithm as a post-processing step after EM-NPMLE produced results that suggest consistent estimation of the true β values and weights, as well as the true number of components. When using the mean shift for post-processing, we also used the bandwidth based on the maximal smoothing principle as described previously. CGM algorithm in Jiang and Guntuboyina (2025): As shown in Tables 5 and 6, the results of the CGM algorithm do not suggest that this algorithm can consistently estimate the number of modes, though the decreasing average W-2 distance with increasing sample size supports the idea of convergence in distribution between the true and estimated distributions. Table 3: Results of Simulation 1 using EM-NPMLE and mean-shift post-processing. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Distance 500 0.620 (0.094) 0.665 (0.033) 0.915 0.612 (0.238) 1000 0.657 (0.024) 0.663 (0.023) 0.995 0.465 (0.141) 5000 0.663 (0.010) 0.664 (0.010) 0.990 0.311 (0.092) 10000 0.662 (0.008) 0.663 (0.008) 1 0.265 (0.089) EM-NPKMLE with uniform initialization: We also ran our EM-NPKMLE algorithm on the same model as in (35) using initial points drawn from a uniform distribution over [-4,4]2 to demonstrate that our algorithm does not depend on initial points sampled from the estimated density g derived from the EM-NPKMLE algorithm. We currently do not have a bandwidth selection strategy for the uniform initialization, and we manually tuned the bandwidth by using the maximal smoothing principle multiplied by a constant 22 Table 4: Results of Simulation 1 using EM-NPMLE and mean-shift post-processing. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.012, 0.034)±(0.062, 0.040) (-0.173, -0.080)±(0.108, 0.045) (0.088, 0.031)±(0.062, 0.042) π: bias ± SD -0.001±0.025 -0.002±0.027 0.030±0.028 1000 β: bias ± SD (-0.002, 0)±(0.049, 0.031) (-0.077, -0.021)±(0.061, 0.029) (0.041, 0.013)±(0.044, 0.026) π: bias ± SD 0.002±0.019 -0.002±0.018 0±0.019 5000 β: bias ± SD (0, 0)±(0.021, 0.015) (-0.009, 0.005)±(0.023, 0.012) (0.013, -0.003)±(0.018, 0.013) π: bias ± SD -0.001±0.008 0±0.008 0.001±0.009 10000 β: bias ± SD (0, 0)±(0.014, 0.010) (-0.007, 0.004)±(0.017, 0.011) (0.010, -0.002)±(0.012, 0.008) π: bias ± SD -0.001±0.006 0±0.006 0±0.006 Table 5: Results of Simulation 1 using CGM. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp 500 0.590 (0.066) 0.665 (0.033) 0 1000 0.581 (0.068) 0.663 (0.023) 0 5000 0.602 (0.053) 0.664 (0.010) 0 10000 0.596 (0.057) 0.663 (0.008) 0 Table 6: Results of Simulation 1 using CGM. Sample Size Avg W-2 Distance Mean No. of Comp 500 0.439 (0.109) 9.295 (2.095) 1000 0.373 (0.093) 8.550 (1.744) 5000 0.260 (0.069) 7.800 (1.504) 10000 0.229 (0.064) 7.570 (1.274) c = 1.15. As shown in Tables 7 and 8, the results are comparable to running our EMNPKMLE algorithm using the output of EM-NPMLE for the initialization. Table 7: Results of Simulation 1 using EM-NPKMLE Algorithm with uniform initialization. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Dist 500 0.146 (0.043) 0.665 (0.033) 0.030 1.130 (0.059) 1000 0.283 (0.140) 0.663 (0.023) 0.515 1.020 (0.114) 5000 0.624 (0.012) 0.664 (0.010) 1 0.422 (0.060) 10000 0.642 (0.008) 0.663 (0.008) 1 0.320 (0.052) 23 Table 8: Results of Simulation 1 using EM-NPKMLE Algorithm with uniform initialization. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.453, 0.273)±(0.035, 0.072) (-0.888, -0.504)±(0.259, 0.149) (0.643, 0.325)±(0.136, 0.070) π: bias ± SD 0.029±0.017 -0.039±0.211 0.010±0.218 1000 β: bias ± SD (-0.430, 0.333)±(0.039, 0.048) (-0.717, -0.364)±(0.211, 0.094) (0.556, 0.299)±(0.094, 0.075) π: bias ± SD 0.049±0.018 -0.119±0.066 0.070±0.063 5000 β: bias ± SD (-0.224, 0.101)±(0.013, 0.018) (-0.234, -0.029)±(0.036, 0.020) (0.222, 0.073)±(0.020, 0.016) π: bias ± SD 0.009±0.007 -0.018±0.006 0.009±0.007 10000 β: bias ± SD (-0.164, 0.047)±(0.010, 0.011) (-0.164, 0.001)±(0.019, 0.013) (0.164, 0.043)±(0.014, 0.011) π: bias ± SD 0±0.005 -0.008±0.004 0.008±0.005 GEM-NPKMLE: As discussed at the end of Section 2.2, GEM-NPKMLE is a variant of EM-NPKMLE, where we run a single iteration in the inner loop instead of running it to convergence. This approach saves computation time. For example, for the simulation with n=5000 and the EM-NPMLE initialization, the EM-NPKLME took an average of 115.888 (40.759) minutes, while the GEM-NPKMLE took an average of 15.448 (2.194) minutes. It also produced similar results compared to Tables 1, 2, 3, and 4 for larger sample sizes, as can be seen in Tables 9 and 10. This suggests that the GEM-NPKMLE approach is viable when saving computation time is important and there is a sufficient sample size. The initial points we used for GEM-NPKMLE were sampled from the output of EM-NPMLE, which was the same strategy that was used in the first part of our simulations. However, in this case, we multiplied the initial bandwidth by a constant c = 1.2 to improve the algorithm's performance. Table 9: Results of Simulation 1 using GEM-NPKMLE algorithm. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Dist 500 0.133 (0.034) 0.665 (0.033) 0.235 1.070 (0.062) 1000 0.213 (0.106) 0.663 (0.023) 0.195 0.993 (0.059) 5000 0.614 (0.012) 0.664 (0.010) 0.840 0.401 (0.066) 10000 0.637 (0.008) 0.663 (0.008) 0.930 0.323 (0.070) 24 Table 10: Results of Simulation 1 using GEM-NPKMLE algorithm. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.527,0.360)±(0.067, 0.083) (-0.983,-0.642)±(0.135,0.071) (0.839,0.387)±(0.071,0.067) π: bias ± SD 0.005±(0.027) -0.298±(0.002) 0.292±(0.027) 1000 β: bias ± SD (-0.414,0.308)±(0.052,0.059) (-1.052,-0.551)±(0.175,0.079) (0.757,0.402)±(0.092,0.075) π: bias ± SD 0.007±0.019 -0.291±0.048 0.284±0.042 5000 β: bias ± SD (-0.241,0.104)±(0.014,0.018) (-0.282,-0.069)±(0.041,0.023) (0.231,0.088)±(0.020,0.016) π: bias ± SD -0.001±0.008 0±0.007 0.001±0.009 10000 β: bias ± SD (-0.184,0.056)±(0.010,0.011) (-0.196,-0.009)±(0.022,0.013) (0.172,0.053)±(0.014,0.011) π: bias ± SD -0.001±(0.006) 0±(0.006) 0.001±(0.006) Cross-validation for unknown σ: If σ is unknown, we can implement a cross-validation (CV) approach found in Jiang and Guntuboyina (2025) to estimate the parameter σ. First we divide the dataset T = {(xi,yi) ∶ i = 1,...,n} into C folds: T1,...,TC. For each c ∈{1,2,...C}, one fold Tc is left out as test data and the remaining folds are used to obtain an estimator ˆG-c for G∗. Let f ˆG-c xi (yi) = ∫φσ (yi -x⊺ i β)d ˆG-c(β), i = 1,⋯,n. The σ value that minimizes the following objective function is selected and denoted by ˆσ: CV (σ) = - C ∑ c=1 ∑ (xi,yi)∈Tc log(f ˆG-c xi (yi)). In our simulation, we used a random sample from the output of the EM-NPMLE algorithm as the initialization for our GEM-NPKMLE algorithm (with the oversmoothing bandwidth) to calculate ˆG-c. We tested this method on the model in (35) with C = 5 to get ˆσ, which was further plugged into the EM-NPKMLE algorithm. The results are provided in Tables 11 and 12, which are comparable to those in Tables 1 and 2. 3.1.2 Simulation 2 Unlike the EM-NPMLE algorithm, our EM-NPKMLE algorithm does not assume the knowledge of a particular structure in the true distribution, such as a low-dimensional 25 Table 11: Results of Simulation 1 using EM-NPKMLE algorithm and cross-validation to estimate σ. Sample Size Avg Adj RI Avg Adj RI G* Prop of Sim with 3 Comp Avg W-2 Dist 500 0.302 (0.204) 0.665 (0.033) 0.390 0.891 (0.192) 1000 0.574 (0.045) 0.663 (0.023) 0.975 0.557 (0.118) 5000 0.645 (0.011) 0.664 (0.010) 0.995 0.343 (0.080) 10000 0.651 (0.008) 0.663 (0.008) 0.995 0.288 (0.081) Table 12: Results of Simulation 1 using EM-NPKMLE algorithm and cross-validation to estimate σ. Sample Size Estimate 1st Comp 2nd Comp 3rd Comp 500 β: bias ± SD (-0.328,0.201)±(0.080,0.090) (-0.425,-0.281)±(0.236,0.177) (0.375,0.193)±(0.162,0.091) π: bias ± SD -0.010±0.025 -0.028±0.071 0.039±0.066 1000 β: bias ± SD (-0.273, 0.143)±(0.048, 0.047) (-0.355,-0.151)±(0.137,0.068) (0.271,0.136)±(0.073,0.047) π: bias ± SD -0.003±0.018 -0.002±0.018 0.005±0.019 5000 β: bias ± SD (-0.155, 0.033)±(0.0143, 0.016) (-0.157,0.003)±(0.032,0.018) (0.149,0.038)±(0.019,0.014) π: bias ± SD -0.001±0.008 0±0.007 0.001±0.009 10000 β: bias ± SD (-0.112, 0.004)±(0.011, 0.010) (-0.115, 0.013)±(0.018, 0.012) (0.113, 0.027)±(0.013, 0.010) π: bias ± SD -0.001±0.006 0±0.006 0.001±0.006 manifold as the support. To further demonstrate this, we use a continuous distribution as the model, which consists of a bivariate distribution represented by a mixture of uniform distributions over two concentric circles, each with center at the origin (see Figure 4). The model is given in (1), where σ = .2 is assumed to be known, and the true distribution G∗is 1 2 × Uniform{B(1)} + 1 2 × Uniform{B(2)}, where B(r) = {β ∈R2 ∶∣∣β∣∣= r}. This model has also been used in (Jiang and Guntuboyina, 2025). We uniformly sampled 10000 design points xi's from [-1,3]. For our EM-NPKMLE algorithm, we used the output of the EM-NPMLE algorithm as the initial β(0) points and used the oversmoothing bandwidth. We got similar results when we used points sampled from a uniform distribution on [-4,4]2 as the initial β(0) and applied a constant to the 26 Figure 4: Left panel: the support of G∗in Simulation 2; Right panel: the sample used in Simulation 2. oversmoothing bandwidth. In this simulation we also tested the EM-NPMLE algorithm with and without SCMS as a post-processing step, as described in Section 2.3. The EM-NPMLE algorithm was initialized by a uniform density over [-4,4]2. Then the SCMS algorithm was applied to 10000 β points sampled from the estimated density produced by the EM-NPMLE algorithm. The bandwidth used for the SCMS algorithm was selected by the biased cross-validation method aiming to minimize an asymptotic mean integrated squared error (AMISE) estimate between the estimated and true density gradients (Sain et al., 1994). This was implemented by the 'Hbcv.diag' function in the R package ks (Duong, 2024), and we used the mean of the diagonal entries in the selected diagonal bandwidth matrix. As shown in Figure 5 for a single random sample, the points produced by the EMNPKMLE algorithm are roughly evenly distributed near the support of G∗, while the support does not seem to be recovered well by the CGM method. The distribution produced by the EM-NPMLE algorithm has a visible concentration near the support of G∗, and 27 applying the SCMS algorithm as a post-processing step further extracted the 1-dimensional structure. In Table 13 we show the results of applying the Wasserstein-2 distance between the probability measures G∗and ˆG from the applicable algorithms. Using SCMS algorithm for post-processing has the best performance, followed by the EM-NPMLE algorithm without post-processing. As two algorithms without knowing the prior distribution is continuous or requiring a post-processing step, the EM-NPKMLE is an improvement over the CGM. Table 13: W-2 Distance between G∗and ˆG for the simulation depicted in Figure 5 Method W-2 Distance EM-NPKMLE 0.328 CGM 0.394 EM-NPMLE 0.190 EM-NPMLE + SCMS 0.157 28 (a) (b) (c) (d) Figure 5: Comparison of the true mixture distribution and the estimated distributions of β for different algorithms in Simulation 2, where the red circles are the support of G∗, and point sizes are proportional to weights. (a) EM-NPKMLE algorithm: 52 support points, weights ranging from 0.0002 to 0.085. (b) CGM method: 34 support points, weights ranging from 0.011 to 0.054. (c) 104 points sampled from the output of the EM-NPMLE algorithm, also used as the input for the SCMS algorithm. (d) post-processing result after using the SCMS algorithm, where all points have weight= 10-4. 29 3.2 Applications Application: CO2-GDP We tested our EM-NPKMLE algorithm on data of 159 countries' CO2 emissions and GDP per capita in 2015 taken from Bolt et al. (2018); Friedlingstein et al. (2020); Ritchie et al. (2020); and (UN, 2022). This dataset was also analyzed in Jiang and Guntuboyina (2025). Identifying distinct components among the relationship between CO2 emissions and economic output could reveal countries that have high economic output and low CO2 emissions, potentially revealing favorable policies to pursue. Other papers have also looked into this relationship from a mixture of regression approach, such as Huang and Yao (2012) and Hurn et al. (2003), though these papers use different data sources. As in Jiang and Guntuboyina (2025), we used data on CO2 emission in per capita in 10-ton increments and GDP per capita in 10 thousand U.S. dollars. We used a 10-fold cross-validation method as described in Section 3.1 for the CO2-GDP dataset, and used the same bandwidth selection method as described at the beginning of Section 3. Our estimate of σ from this CV method is 0.160. The results of our algorithm with this estimated σ suggest that there are two growth paths with larger estimated prior weights, with possible deviations as evidenced by the other components with smaller weights (see Table 14 and Figure 6). This result differs from Jiang and Guntuboyina (2025), who identified ten components before applying the BIC trimming, and five components after BIC trimming. We note that the two components with largest weights detected from our algorithm ((0.022, 0.179) and (-0.070, 0.343)) are similar to the two components with the largest weights from Jiang and Guntuboyina (2025) before and after their BIC trimming method was applied, that is, (0.030, 0.170) and (-0.010, 0.230), with pre-BIC trimming weights of 0.290 and 0.180, and post-BIC trimming weights of 0.270 and 0.280, respectively. Application: Music Tone Perception We also tested our EM-NPKMLE algorithm on another dataset analyzed in Jiang and 30 Figure 6: CO2-GDP Relationship: The points represent CO2 emissions in per capita in 10-ton increments and GDP per capita in 10 thousand U.S. dollars for individual countries. The lines represent the estimated component regression lines, whose estimated coefficients can be found in Table 14. The thicker lines are the components with larger weights. Table 14: Results from CO2/GDP Application 1st Comp 2nd Comp 3rd Comp 4th Comp 5th Comp β Estimates (0.022, 0.179) (-0.070, 0.343) (-0.101, 0.628) (0.105, 0.120) (0.839, -0.051) Estimated Weights 0.484 0.358 0.088 0.057 0.013 Guntuboyina (2025). The data was initially collected by Cohen (1980) in an experiment examining how trained musicians perceive tone. See Cohen (1980) for further information on the experiment. This dataset, which can be found in the R packages mixtools and fpc, 31 contains one musician's tuning that was observed 150 times (Benaglia et al., 2009; Hennig, 2024). See Figure 7, where the y-axis displays the musician's assessed tone ratio and the xaxis depicts the true tone ratio (Viele and Tong, 2002; Yao and Song, 2015). Other studies of this dataset were carried out in De Veaux (1989); Viele and Tong (2002); and Yao and Song (2015); but in these studies the number of components was pre-specified at two. We applied our EM algorithm that does not require prespecifying the number of components. We used the same cross-validation method as in the CO2-GDP application to estimate σ, and our estimate is 0.210. We also used the same bandwidth selection method as in the CO2-GDP application, that is, maximal smoothing bandwidth. Our results revealed two main components (see Figure 7 and Table 15). In terms of the number of components, the results are consistent with the music perception theory when the experiment took place, with one component as y = x and another as y = 2 (Jiang and Guntuboyina, 2025). Our results (see Table 15) are comparable to the two largest components computed using the CGM algorithm (both pre- and post BIC trimming) in Jiang and Guntuboyina (2025), that is, (1.898, 0.054), (0, 0.989), with weights 0.540 and 0.260 (pre-BIC trimming), and 0.700 and 0.300 (post-BIC trimming), respectively. However, the CGM method detected six components before the BIC trimming method was applied. Table 15: Results from Music Tone Application 1st Comp 2nd Comp β Estimates (0.012, 0.967) (1.292, 0.361) Estimated Weights 0.427 0.573 4 Discussion In this paper we first introduce an EM algorithm to approximate NPMLE in the setting of mixture of linear regressions when the prior distribution for the coefficients has a density. 32 Figure 7: Music Tone Perception: The blue points represent a musician's assessed tone ratio (y) and the true tone ratio (x) pairs. The estimated regression lines (see Table 15) are also depicted for the two components. The thicker line is the component with the larger weight. We then develop another EM algorithm to approximate NPKMLE that can automatically detect the number of components for a discrete prior distribution, and is able to uncover a continuous prior distribution. We provide some theoretical support for our algorithms as well as demonstrate their performance in simulations and applications. In particular, our simulation results suggest that our EM-NPKMLE algorithm can consistently estimate the true number of components and the β coefficients and their associated weights when the prior distribution is discrete. While our EM-NPKMLE algorithm does have useful 33 qualities, it requires the selection of a bandwidth, which can be challenging. We selected the oversmoothing bandwidth based on its ability to avoid the appearance of spurious features that may not belong to the true prior distribution, but further investigation into the bandwidth selection could yield better methods, e.g., adaptive bandwidths for different iterations of the algorithm, or introducing unique bandwidths for each dimension of the data. While we demonstrate the algorithm's ability to uncover both discrete and continuous distributions, investigating its theoretical properties is an area of further research. It is clear that our methods can be extended to the setting of mixture of distributions, and we will investigate this in a separate article. A Appendix A.1 Proof of Proposition 1 Proof. Let the Kullback-Leibler (KL) divergence between distributions P and Q with densities p and q be DKL(P∥Q) = ∫p(x)log p(x) q(x)dx. (38) We can write Eβ∣x,y,G(t)L(g∣x,y,β) (39) = - n ∑ i=1 ∫f (t+1) i (β)log f (t+1) i (β) g(β) dβ + n ∑ i=1 ∫log [φσ (yi -x⊺ i β)f (t+1) i (β)]f (t+1) i (β)dβ (40) = - n ∑ i=1 DKL(F (t+1) i ∥G) + n ∑ i=1 ∫log [φσ (yi -x⊺ i β)f (t+1) i (β)]f (t+1) i (β)dβ, (41) where F (t+1) i is the distribution associated with f (t+1) i . Hence, maximizing Eβ∣x,y,G(t)L(g∣x,y,β) over g is equivalent to minimizing 1 n ∑n i=1 DKL(F (t+1) i ∥G) over G ∈Gden. Let g(t+1) = 1 n ∑n i=1 f (t+1) i . Notice that 1 n n ∑ i=1 δDKL(F (t+1) i ∥G) δg ∣ g=g(t+1) = -1 n n ∑ i=1 f (t+1) i g(t+1) = -1. (42) 34 Using Theorem 2 in (Nishiyama, 2020), we conclude that g(t+1) is the unique minimizer of 1 n ∑n i=1 DKL(F (t+1) i ∥G), and hence the unique maximizer of Eβ∣x,y,G(t)L(g∣x,y,β,G(t)). A.2 Proof of Theorem 2.2 Proof. Using the subgradient inequality for differentiable convex functions, we have log[ n ∑ l=1 v(yl)] -log[ n ∑ l=1 v(xl)] ≥∑n l=1 v′(xl)(yl-xl) ∑n l=1 v(xl) , ∀xl,yl∈[0,∞). Also, recall that w = -v′. Then Q(Gν(r+1);G(t)) -Q(Gν(r);G(t)) (43) = n ∑ i=1 ∫Rd log [φσ (yi -x⊺ i β) 1 nhd ∑n l=1 vh(∥β -ν(r+1) l ∥2)]Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (44) - n ∑ i=1 ∫Rd log [φσ (yi -x⊺ i β) 1 nhd ∑n l=1 vh(∥β -ν(r) l∥2)]Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (45) ≥ n ∑ i=1 ∫Rd ∑n l=1 wh(∥β-ν(r) l ∥2)[∥β-ν(r) l ∥2-∥β-ν(r+1) l ∥2] ∑n j=1 vh(∥β-ν(r) j ∥2) Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (46) = n ∑ i=1 ∫Rd ∑n l=1 wh(∥β-ν(r) l ∥2)[2ν(r+1)⊺ l β-2ν(r)⊺ l β-∥ν(r+1) l ∥2+∥ν(r) l ∥2] ∑n j=1 vh(∥β-ν(r+1) j ∥2) Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (47) = n ∑ l=1 2ν(r+1)⊺ l n ∑ i=1 ∫Rd wh(∥β-ν(r) l ∥2)β ∑n j=1 vh(∥β-ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (48) - n ∑ l=1 2ν(r)⊺ l n ∑ i=1 ∫Rd wh(∥β-ν(r) l ∥2)β ∑n j=1 vh(∥β-ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (49) - n ∑ l=1 ∥ν(r+1) l ∥2 n ∑ i=1 ∫Rd wh(∥β-ν(r) l ∥2) ∑n j=1 vh(∥β-ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (50) + n ∑ l=1 ∥ν(r) l∥2 n ∑ i=1 ∫Rd wh(∥β-ν(r) l ∥2) ∑n j=1 vh(∥β-ν(r) j ∥2)Si(β,β(t),x,y)dβ ∫Rd Si(β,β(t),x,y)dβ (51) = n ∑ l=1 [2ν(r+1)⊺ l A(ν(r) l;β(t),x,y) -2ν(r)⊺ l A(ν(r) l;β(t),x,y) (52) -∥ν(r+1) l ∥2C(ν(r) l,β(t),x,y) + ∥ν(r) l∥2C(ν(r) l,β(t),x,y)]. (53) 35 It then follows from (30),(31), and (32) that Q(Gν(r+1);G(t)) -Q(Gν(r);G(t)) (54) ≥ n ∑ l=1 -∥ν(r+1) l ∥2C(ν(r) l,β(t),x,y) + 2∥ν(r+1) l ∥2C(ν(r) l;β(t),x,y) (55) -2ν(r)⊺ l ν(r+1) l C(ν(r) l;β(t),x,y) + ∥ν(r) l∥2C(ν(r) l,β(t),x,y)] (56) = n ∑ l=1 C(ν(r) l,β(t),x,y)∥ν(r+1) l -ν(r) l∥2 (57) ≥0, (58) where the last inequality holds because C(ν(r) l,β(t),x,y) > 0 by noticing that w is always positive. Next we show that Q(Gν;G(t)) has an upper bound. Notice that sup β ∣φσ (yi -x⊺ i β) 1 nhd n ∑ l=1 vh(∥β -νl∥2)∣≤ ∥v∥∞ √ 2πσhd =∶B, (59) where ∥v∥∞= supt ∣v(t)∣. It then follows from (27) that sup ν Q(Gν;G(t)) ≤nlog B. (60) Hence the sequence Q(Gν(r);G(t)) converges. A.3 Proof of Theorem 2.3 Proof. For a distribution G on Rd with density function g, let P(β,G) = n ∑ i=1 log(fi(βi ∣xi,yi,g)), (61) where fi(βi ∣xi,yi,g) is defined in (12). We can write P(β,G) = n ∑ i=1 log [ φσ (yi -x⊺ i βi)g(βi) ∫Rd φσ (yi -x⊺ i β)g(β)dβ ] = n ∑ i=1 log [φσ (yi -x⊺ i βi)g(βi)] - n ∑ i=1 log [∫Rd φσ (yi -x⊺ i β)g(β)dβ]. 36 Using the definitions in (6) and (11), L(G) = L(G∣x,y,β) -P(β,G). Taking the expectation of each side with respect to β∣x,y,G(t), we have L(G) = Eβ∣x,y,G(t)(L(G)) = Eβ∣x,y,G(t)(L(G∣x,y,β)) -Eβ∣x,y,G(t)(P(β,G)) =∶Q(G;G(t)) -H(G;G(t)). Notice that L(G(t+1)) -L(G(t)) = [Q(G(t+1);G(t)) -Q(G(t);G(t))] -[H(G(t+1);G(t)) -H(G(t);G(t))]. We have already shown in Theorem 2.2 that Q(G(t+1);G(t)) ≥Q(G(t);G(t)). To show L(G(t+1)) -L(G(t)) ≥0, it remains to prove that H(G(t+1);G(t)) -H(G(t);G(t)) ≤0. This is true because for any arbitrary distribution G with density function g, H(G;G(t)) -H(G(t),G(t)) =Eβ∣x,y,G(t) [ n ∑ i=1 log(fi(βi ∣xi,yi,g))] -Eβ∣x,y,G(t) [ n ∑ i=1 log(fi(βi ∣xi,yi,g(t)))] =Eβ∣x,y,G(t) { n ∑ i=1 log [ fi(βi ∣xi,yi,g) fi(βi ∣xi,yi,g(t))]} = n ∑ i=1 Eβ∣x,y,G(t) {log [ fi(βi ∣xi,yi,g) fi(βi ∣xi,yi,g(t))]} ≤ n ∑ i=1 log {Eβ∣x,y,G(t) [ fi(βi ∣xi,yi,g) fi(βi ∣xi,yi,g(t))]} (by Jensen's inequality and the concavity of the log function) = n ∑ i=1 log [∫ fi(β ∣xi,yi,g) fi(β ∣xi,yi,g(t))fi(β ∣xi,yi,g(t))dβ] = n ∑ i=1 log [∫fi(β ∣xi,yi,g)dβ] =0. 37 We can also conclude from Theorem 1 in (Jiang and Guntuboyina, 2025) that the incomplete log likelihood L(G) is bounded from above because of the existence of a maximizer ˆG. By the monotone convergence theorem, L(G(t)) converges. Acknowledgments This work was partially supported by resources provided by the Office of Research Computing at George Mason University (URL: https://orc.gmu.edu). Data Availability Statement The music tone data that supports the findings of this study is openly available in the R packages mixtools (https://cran.r-project.org/web/packages/mixtools/index.html), fpc (https://cran.r-project.org/web//packages/fpc/index.html). The CO2-GDP data can be found at https://github.com/hanshengjiang/npmle git. References Benaglia, T., Chauveau, D., Hunter, D. R., and Young, D. (2009). mixtools: An R package for analyzing finite mixture models. Journal of Statistical Software, 32(6):1-29. Bolt, J., Inklaar, R., de Jong, H., and Van Zanden, J. L. (2018). Rebasing 'maddison': new income comparisons and the shape of long-run economic development. Maddison Project Working paper 10, . Maddison Project, version 2018; accessed via Our World in Data. Chae, M., Martin, R., and Walker, S. G. (2018). Convergence of an iterative algorithm to the nonparametric mle of a mixing distribution. Statistics & Probability Letters, 140:142146. 38 Chung, Y. and Lindsay, B. G. (2015). Convergence of the em algorithm for continuous mixing distributions. Statistics & Probability Letters, 96:190-195. Cohen, E. (1980). Inharmonic tone perception. Unpublished Ph. D. Dissertation, Stanford University. Comaniciu, D. and Meer, P. (2002). Mean shift: A robust approach toward feature space analysis. IEEE Transactions on pattern analysis and machine intelligence, 24(5):603619. Comaniciu, D., Ramesh, V., and Meer, P. (2003). Kernel-based object tracking. IEEE Transactions on pattern analysis and machine intelligence, 25(5):564-577. Davies, T. M. (2013). Scaling oversmoothing factors for kernel estimation of spatial relative risk. Epidemiologic Methods, 2(1):67-83. Davies, T. M. and Hazelton, M. L. (2010). Adaptive kernel estimation of spatial relative risk. Statistics in Medicine, 29(23):2423-2437. De Veaux, R. D. (1989). Mixtures of linear regressions. Computational Statistics & Data Analysis, 8(3):227-245. Deb, N., Saha, S., Guntuboyina, A., and Sen, B. (2021). Two-component mixture model in the presence of covariates. Journal of the American Statistical Association, pages 1-15. Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the em algorithm. Journal of the royal statistical society: series B (methodological), 39(1):1-22. DeSarbo, W. S. and Cron, W. (1988). A maximum likelihood methodology for clusterwise linear regression. Journal of classification, 5:249-282. 39 Dicker, L. H. and Zhao, S. D. (2016). High-dimensional classification via nonparametric empirical bayes and maximum likelihood inference. Biometrika, 103(1):21-34. Ding, C. S. (2006). Using regression mixture analysis in educational research. Practical Assessment, Research, and Evaluation, 11(1). Duong, T. (2024). ks: Kernel Smoothing. R package version 1.14.2. Friedlingstein, P., O'sullivan, M., Jones, M. W., Andrew, R. M., Hauck, J., Olsen, A., Peters, G. P., Peters, W., Pongratz, J., Sitch, S., et al. (2020). Global carbon budget 2020. Earth System Science Data Discussions, 2020:1-3. Gu, J. and Koenker, R. (2022). Nonparametric maximum likelihood methods for binary response models with random coefficients. Journal of the American Statistical Association, 117(538):732-751. Hawkins, D. S., Allen, D. M., and Stromberg, A. J. (2001). Determining the number of components in mixtures of linear models. Computational Statistics & Data Analysis, 38(1):15-48. Hennig, C. (2024). fpc: Flexible Procedures for Clustering. R package version 2.2-13. Hosmer, D. W. (1974). Maximum likelihood estimates of the parameters of a mixture of two regression lines. Communications in Statistics-Theory and Methods, 3(10):995-1006. Huang, M. and Yao, W. (2012). Mixture of regression models with varying mixing proportions: a semiparametric approach. Journal of the American Statistical Association, 107(498):711-724. Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of classification, 2(1):193218. 40 Hurn, M., Justel, A., and Robert, C. P. (2003). Estimating mixtures of regressions. Journal of computational and graphical statistics, 12(1):55-79. Jagabathula, S., Subramanian, L., and Venkataraman, A. (2020). A conditional gradient approach for nonparametric estimation of mixing distributions. Management Science, 66(8):3635-3656. Jiang, H. and Guntuboyina, A. (2025). A nonparametric maximum likelihood approach to mixture of regression. arXiv preprint . Jiang, W. and Zhang, C.-H. (2009). General maximum likelihood empirical Bayes estimation of normal means. The Annals of Statistics, 37(4):1647 - 1684. Kiefer, J. and Wolfowitz, J. (1956). Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. The Annals of Mathematical Statistics, pages 887-906. Koenker, R. and Mizera, I. (2014). Convex optimization, shape constraints, compound decisions, and empirical bayes rules. Journal of the American Statistical Association, 109(506):674-685. Laird, N. M. and Louis, T. A. (1991). Smoothing the non-parametric estimate of a prior distribution by roughening: A computational study. Computational statistics & data analysis, 12(1):27-37. Leisch, F. (2004). Flexmix: A general framework for finite mixture models and latent class regression in r. Journal of Statistical Software, 11(8):1-18. Melnykov, V. and Melnykov, I. (2012). Initializing the em algorithm in gaussian mixture models with an unknown number of components. Computational Statistics & Data Analysis, 56(6):1381-1395. 41 Nguyen, X. (2011). Wasserstein distances for discrete measures and convergence in nonparametric mixture models. Citeseer. Forschungsbericht. Nishiyama, T. (2020). Minimization problems on strictly convex divergences. arXiv preprint . Ozertem, U. and Erdogmus, D. (2011). Locally defined principal curves and surfaces. The Journal of Machine Learning Research, 12:1249-1286. Parr, R. G. and Weitao, Y. Y. (1994). Density-Functional Theory of Atoms and Molecules. Oxford University Press. Polyanskiy, Y. and Wu, Y. (2020). Self-regularizing property of nonparametric maximum likelihood estimator in mixture models. arXiv preprint . Qiao, W. and Polonik, W. (2021). Algorithms for ridge estimation with convergence guarantees. arXiv preprint . Quandt, R. E. (1972). A new approach to estimating switching regressions. Journal of the American statistical association, 67(338):306-310. Quandt, R. E. and Ramsey, J. B. (1978). Estimating mixtures of normal distributions and switching regressions. Journal of the American statistical Association, 73(364):730-738. R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. Ritchie, H., Roser, M., and Rosado, P. (2020). Co2 and greenhouse gas emissions. Our World in Data. https://ourworldindata.org/co2-and-greenhouse-gas-emissions. Saha, S. and Guntuboyina, A. (2020). On the nonparametric maximum likelihood estimator for gaussian location mixture densities with application to gaussian denoising. The Annals of Statistics, 48(2):738-762. 42 Sain, S. R., Baggerly, K. A., and Scott, D. W. (1994). Cross-validation of multivariate densities. Journal of the American Statistical Association, 89(427):807-817. Schlattmann, P. (2009). Medical applications of finite mixture models, volume 1. Springer. Terrell, G. R. (1990). The maximal smoothing principle in density estimation. Journal of the American statistical association, 85(410):470-477. UN (2022). United nations world population prospects 2022. Accessed via Our World in Data. Vardi, Y. and Lee, D. (1993). From image deblurring to optimal investments: Maximum likelihood solutions for positive linear inverse problems. Journal of the Royal Statistical Society Series B: Statistical Methodology, 55(3):569-598. Vardi, Y., Shepp, L. A., and Kaufman, L. (1985). A statistical model for positron emission tomography. Journal of the American statistical Association, 80(389):8-20. Viele, K. and Tong, B. (2002). Modeling with mixtures of linear regressions. Statistics and Computing, 12:315-330. Wedel, M. and Kamakura, W. A. (2000). Market segmentation: Conceptual and methodological foundations. Springer Science & Business Media. Yao, W. and Song, W. (2015). Mixtures of linear regression with measurement errors. Communications in Statistics-Theory and Methods, 44(8):1602-1614. 43
2510.14887
Prediction-Specific Design of Learning-Augmented Algorithms Sizhe Li*1, Nicolas Christianson†2, and Tongxin Li‡1 1School of Data Science, The Chinese University of Hong Kong, Shenzhen 2Department of Management Science and Engineering, Stanford University Abstract Algorithms with predictions has emerged as a powerful framework to combine the robustness of traditional online algorithms with the data-driven performance benefits of machine-learned (ML) predictions. However, most existing approaches in this paradigm are overly conservative, as they do not leverage problem structure to optimize performance in a prediction-specific manner. In this paper, we show that such prediction-specific performance criteria can enable significant performance improvements over the coarser notions of consistency and robustness considered in prior work. Specifically, we propose a notion of strongly-optimal algorithms with predictions, which obtain Pareto optimality not just in the worst-case tradeoff between robustness and consistency, but also in the prediction-specific tradeoff between these metrics. We develop a general bi-level optimization framework that enables systematically designing strongly-optimal algorithms in a wide variety of problem settings, and we propose explicit strongly-optimal algorithms for several classic online problems: deterministic and randomized ski rental, and one-max search. Our analysis reveals new structural insights into how predictions can be optimally integrated into online algorithms by leveraging a prediction-specific design. To validate the benefits of our proposed framework, we empirically evaluate our algorithms in case studies on problems including dynamic power management and volatility-based index trading. Our results demonstrate that prediction-specific, strongly-optimal algorithms can significantly improve performance across a variety of online decision-making settings. 1 Introduction Online algorithms operate in environments where decisions must be made sequentially without full knowl- edge of future inputs. Traditionally, these algorithms are designed to guarantee robust performance on adversarial problem instances, providing competitive ratio bounds that hold under worst-case inputs [1]. While theoretically robust, this adversarial perspective often yields overly pessimistic strategies that can underperform in the real world, where worst-case instances are uncommon. Recent work on algorithms with predictions addresses this lim- itation by integrating machine-learned (ML) predictions into classical online decision-making frameworks [2, 3]. This learning-augmented algorithm design paradigm has been applied to a variety of online problems including ski rental, caching, and metrical task systems [2–4] and applications including GPU power management [5], battery energy storage system control [6], carbon-aware workload management [7, 8], and electric vehicle (EV) charging [9, 10]—with algorithms that achieve near-optimal performance when predictions are accurate, while maintaining robust worst-case guarantees when they are not. Existing approaches to learning-augmented algorithm design typically evaluate performance through a so-called consistency-robustness tradeoff. In particular, consistency measures worst-case algorithm performance when the prediction is perfectly accurate, while robustness measures the worst-case competitive ratio over all possible predictions and instances. *Email: lisizhe@link.cuhk.edu.cn †Email: christianson@stanford.edu ‡Email: litongxin@cuhk.edu.cn 1 arXiv:2510.14887v1 [cs.DS] 16 Oct 2025 Table 1: Comparison of our technical results on algorithm (weak and strong) optimality. General DSR (LARGE b) RSR (LARGE b) OMS ϵ-OMS WEAK Algorithm 1 KD Kumar et al. KR Kumar et al. Sun et al. \ STRONG Algorithm 1 PDSR PRSR PST ϵ-Tolerant PST Proposition 1 Theorem 4.2 Theorem 5.3 Theorem 6.2 Theorem 7.3 Note: Algorithm 1, PDSR, PRSR, PST, and ϵ-Tolerant PST are results from this work. DSR/RSR=Deterministic/Randomized Ski Rental, OMS=One-Max Search, ϵ-OMS=Error Tolerant OMS 0 50 100 150 200 Prediction y 1.0 1.5 2.0 2.5 3.0 Consistency y KD ( = 0.5) y KD ( = 0.5) y PDSR (Ours) y PDSR (Ours) y 1.0 1.5 2.0 2.5 3.0 Robustness y 0 50 100 150 200 Prediction y 1.0 1.5 2.0 2.5 Consistency y KR ( = 0.5) y KR ( = 0.5) y PRSR (Ours) y PRSR (Ours) y 1.0 1.5 2.0 2.5 Robustness y 10 12 14 16 18 20 Prediction y 1.50 1.75 2.00 y × y Sun et al. ( = 0.5) PST (Ours) Figure 1: Prediction-specific consistency βy and robustness γy under different predictions y for DSR with b = 100 (LEFT), RSR with b = 100 (MIDDLE), and OMS with L = 10, U = 20 (RIGHT). Importantly, both metrics are inherently worst-case in nature: consistency reflects the algorithm’s perfor- mance under the least favorable instance with the least favorable, yet accurate prediction, and robustness measures performance under the least favorable (inaccurate) prediction and instance. Because they are worst-case, neither of these metrics measures whether algorithms can achieve better performance for specific predictions. As such, while much prior work has sought to design algorithms that obtain the optimal tradeoff between consistency and robustness (e.g., [2, 11]), and some existing algorithms have sought to improve performance by optimizing decisions in a prediction-specific manner (e.g., [8, 12]), no prior work has considered the question of how to design online algorithms with prediction-specific guarantees on optimality. Motivated by this gap and the potential to improve the performance of algorithms with predictions, our work explores the following question: How can we design algorithms that achieve Pareto-optimal tradeoffs between consistency and robustness that are tailored to specific prediction values? To this end, we introduce a prediction-specific framework for algorithm design, enabling the development of explicit and tractable online algorithms that adapt to each prediction’s characteristics to ensure Pareto-optimal robustness and consistency for each individual prediction value. Instantiating this framework, we design explicit prediction-specific algorithms for the problems of deterministic and randomized ski rental and one-max search, two classic online problems with connections to real-world applications including TCP acknowledgment [13], cloud cost management [14], dynamic power management [5], and energy market operations [15, 16]. 1.1 Contributions The main contributions of this paper are as follows. Framework and Theoretical Results. We introduce a novel prediction-specific framework for the design and analysis of learning-augmented algorithms. Specifically, Definition 2 extends the classic notions of consistency and robustness by defining the prediction-specific consistency βy and robustness γy for each possible prediction value y. Here, βy evaluates the algorithm’s performance on the worst-case instance which is consistent with the prediction y, while γy measures its worst-case performance under prediction y. Furthermore, Definition 4 introduces the notion of strong optimality: an algorithm is strongly optimal if it is Pareto-optimal not only 2 DSR RSR OMS Existing Results KD ([2], Algorithm 2) KR ([2], Algorithm 3) Sun et al. [11] This Work PDSR (Algorithm 2) PRSR (Algorithm 9) PST (Algorithm 4) Improvement Ratio 1 + 1 λ  /2 ( e−1 e ) · (1 −e−λ)−1 √ (1−λ)2+4λθ−(1−λ) 2λ √ θ Table 2: Comparison of proposed algorithms in this work with existing results. The Improvement Ratio quantifies the maximum ratio between the consistency-robustness score (βy · γy) of our algorithm over the best existing score for all y. in the classic sense (referred to as weak optimality, see Definition 3), but also in the (βy, γy) plane for every prediction value y in the prediction space F. This reframes the evaluation of algorithm performance from a single worst-case trade-off to a richer, per-prediction perspective, thereby enabling a more fine-grained analysis of algorithm behavior across different prediction values. Under this new framework, we show that the existing weakly-optimal algorithms for several canonical online problems—deterministic ski rental, randomized ski rental, and one-max search (see Table 3)—are not strongly-optimal (see Theorems 4.1, 5.1, and 6.1). As such, we propose new algorithms for these problems (Algorithms 2, 9, and 4) and prove their strong optimality (see Theorems 4.2, 5.3, and 6.2). Notably, our strongly- optimal algorithms can obtain significant performance improvements over prior weakly-optimal algorithms for certain prediction values; Table 2 summarizes the best improvement ratio of the consistency–robustness score (βy · γy) across all y for each problem, reflecting the relative benefit of our algorithms over existing ones under the most favorable y. The parameter λ represents some confidence governing the tradeoff between robustness and consistency used in [2, 11]. Note that for both deterministic and randomized ski rental, the maximum improvement ratio is obtained as λ ↓0, where it diverges to infinity with asymptotic order O(λ−1). In contrast, for one-max search, the maximum improvement ratio is √ θ, which is also attained as λ ↓0. Consequently, our methods can yield substantial performance improvements for certain predictions and choices of the parameter λ. We also show that there exist problems for which existing weakly-optimal algorithms in the literature are already strongly-optimal: in particular, we establish prediction-specific bounds in Theorem 2.1 for the weakly-optimal algorithm of Wei and Zhang [17] for non-clairvoyant job scheduling when the job size n = 2, and prove its strong optimality in this case. Novel Techniques and Methodology. We propose a bi-level optimization methodology (see Problems 1 and 2 in Section 3.1) for systematically deriving strongly-optimal algorithms. Given an initial robustness target γ, Problem 1 finds the best prediction-specific consistency βy, and Problem 2 then finds a decision with the best prediction-specific robustness γy given the fixed βy. The bi-level optimization pipeline naturally forms a meta-algorithm (Algorithm 1), which we prove in Proposition 1 yields a strategy on the prediction-specific Pareto front for each y, guaranteeing strong optimality. This approach offers two key benefits. (1) Generality – the bi-level optimization framework is broadly applicable: it can be used to design algorithms for online problems beyond those described in Table 3. (2) Flexibility – this approach exhibits two kinds of flexibility. First, the consistency–robustness trade-off in the meta-algorithm (Algorithm 1) can be tuned by adjusting the robustness target γ. Second, the bi-level optimization problem can be adapted to different kinds of predictions and performance objectives. For example, the objectives and constraints in Problems 1 (P1) and 2 (P2) can be extended to incorporate error tolerance to enable the design of algorithms which perform well even if predictions are erroneous, thus alleviating, to some extent, the issues of brittleness and smooth performance degradation which have been considered in a number of recent works [18–20]. More specifically, in Section 7, we introduce the notion of ϵ-consistency to formalize the goal of preserving consistency (and avoiding algorithm brittleness) when faced with small prediction error ϵ, and we show that it is possible to obtain both classic Pareto optimality between ϵ-consistency and robustness, as well as a corresponding version of strong optimality in this case (see Theorem 7.3). In addition, our analysis of strong optimality in the randomized ski rental problem (see Appendix E) employs 3 a novel prediction-specific primal–dual technique. Specifically, we first use a perturbation-based analysis to derive structural properties of the optimal randomized distribution, and then formulate and analyze a primal–dual optimization problem for each prediction. This provides a structured and potentially generalizable approach to establish Pareto optimality with respect to a given prediction. Experimental Results and Real-World Implications. We evaluate our algorithms through both synthetic simulations and real-world case studies, spanning a range of online decision-making problems. Overall, our methods consistently outperform state-of-the-art learning-augmented and classical baseline algorithms, confirming their theoretical soundness and practical value. More specifically, we apply the deterministic and randomized ski rental algorithms to Dynamic Power Management (DPM) traces [5], an important benchmark for energy-efficient computing, and we apply the one-max search algorithms to VIX trading, a representative financial market task marked by high volatility and uncertainty. In both domains, our methods deliver significant improvements across most prediction regimes, illustrating how prediction-specific design can translate improved theoretical guarantees into tangible real-world gains. 1.2 Related Work Algorithms with Predictions. Although the study of online algorithms with ML predictions is still relatively new [2, 3], significant research progress has been made in recent years on various problems such as online optimization [6], control [21, 22], and reinforcement learning [23, 24]. Similar frameworks have also been adopted to design and analyze algorithms for a number of other online problems, such as caching [3, 25–27], online knapsack [28–30], secretary and bipartite matching [31], metrical task systems [4, 32, 33], and convex body/function chasing [32, 34]. Most of these works make no assumptions about prediction quality, seeking to balance consistency (competitive ratio when predictions are accurate) and robustness (worst-case competitive ratio regardless of prediction quality), though the precise, formal definitions of these metrics vary slightly across works. Beyond their theoretical appeal, these frameworks have begun to influence practical systems domains, enabling advances in areas such as data-center scheduling [35], energy-aware computing [7, 15], and networked control systems [6, 22], where ML-driven forecasts are increasingly available but inherently imperfect. Note also that some recent works depart from the standard paradigm of robustness and consistency and consider alternative prediction models and performance measures. Sun et al. [36] proposed online algorithms with uncertainty-quantified (UQ) predictions, which leverage UQ to assess prediction quality. They introduced the distributionally robust competitive ratio (DRCR), which weighs both the setting where the UQ prediction accurately describes the instance, and the worst-case adversarial setting, and applied this metric to the problems of ski rental and online search. Mahdian et al. [37] proposed a general framework for online optimization under uncertain inputs, where the algorithm has access to an optimistic strategy that performs well when the future unfolds favorably. They developed a meta-algorithm that balances between this optimistic policy and a robust fallback, achieving a trade-off between worst-case guarantees and performance under accurate predictions without relying on a formal error model. Ski Rental and Scheduling with ML Predictions. Regarding online problems that are closely related to those specific examples considered in this work, Kumar et al. [2] studied ski rental and non-clairvoyant scheduling with ML predictions. Their framework introduces a tunable trade-off between consistency and robustness through a user-specified hyperparameter. Wei and Zhang [17] subsequently gave general lower bounds on the trade-off obtainable in these problems, thus proving that the deterministic and randomized algorithms of Kumar et al. [2] for ski rental achieve the Pareto-optimal trade-off between consistency and robustness. Furthermore, they demonstrated that the meta-algorithm proposed by Kumar et al. [2] for non- clairvoyant scheduling does not achieve the tight trade-off, and introduced a novel two-stage scheduling strategy that is provably tight for the case of n = 2. One-Max Search with ML Predictions. In the learning-augmented setting of one-max search, where algorithms receive a prediction of the maximum element value, Sun et al. [11] established a fundamental lower 4 bound on the trade-off between consistency and robustness. They also proposed a threshold-based algorithm and showed that it achieves this lower bound, making it Pareto-optimal in terms of these two performance measures. Algorithm Smoothness. A recent line of work has shown that some existing learning-augmented algorithms are brittle, suffering sharp performance drops under small prediction errors. Elenter et al. [18] addressed this by designing algorithms that follow user-specified error-performance profiles, ensuring controlled degradation in performance for the one-way trading problem. Benomar et al. [19] further proposed smooth one-max search algorithms that are Pareto-optimal and exhibit gradual performance degradation with increasing prediction error, achieving a “triple Pareto optimality” among consistency, robustness, and smoothness. Unlike prior work that focuses on structural smoothness of the Pareto frontier, our formulation provides a principled relaxation of consistency itself, leading to algorithms that are both robust and tolerant to small predictive errors. 2 Problem Formulation We consider online cost minimization problems over a set of instances I. For each instance I ∈I, let ALG(A, I) and OPT(I) denote the cost incurred by an online algorithm A and the cost of the offline optimum, respectively. We assume that the algorithms have no prior knowledge of the instance I. Under the competitive analysis framework [1], the goal is to find an online algorithm A that minimizes the worst-case competitive ratio α(A), which is defined as1 α(A) = max I∈I ALG(A, I) OPT(I) . This worst-case focus, however, can be overly pessimistic in practical settings. To move beyond the limitations of worst-case algorithms, machine-learned predictions can be integrated into algorithm design (see [2, 3]). In this setting, an online algorithm Aω, potentially parameterized by ω ∈Ω, receives not only the instance I ∈I online but also an ML prediction y ∈F concerning some relevant but unknown feature x(I) ∈F of I, where F denotes a prediction space. The feature x(I) encapsulates useful information about I (e.g., the maximum element value for online one-max search) or may even fully specify I in some problems (e.g., the total number of skiing days for discrete-time ski rental). Let ALG(Aω, I, y) denote the (expected) cost incurred by algorithm Aω on instance I given prediction y. Classic Consistency and Robustness. Since the ML prediction y ∈F may be erroneous, a number of algorithms have been designed (see those summarized in Sections 1.2 and 2.3) to (1) achieve near-optimal performance when the prediction y is accurate (consistency), while (2) simultaneously maintaining a bounded performance guarantee even when the prediction is arbitrarily wrong (robustness). To formalize this, let Iy ⊆I represent the set of instances for which prediction y is considered “accurate” or “consistent”; the precise definition depends on the specific problem, the form of prediction, and the prediction quality measure. The classic consistency and robustness metrics are defined as follows. Definition 1 (Classic Metrics). Given an online algorithm Aω that takes predictions, the consistency β(Aω) and robustness γ(Aω) are defined as: β(Aω) := sup y∈F sup I∈Iy ALG(Aω, I, y) OPT(I) , γ(Aω) := sup y∈F sup I∈I ALG(Aω, I, y) OPT(I) . (1) If an algorithm Aω achieves β(Aω) consistency and γ(Aω) robustness, it is called β(Aω)-consistent and γ(Aω)-robust, respectively. A typical choice of Iy is Iy := {I ∈I : x(I) = y} (following [2, 3]). The equality constraint in Iy can be generalized in a number of ways—for instance, to a ball centered at x(I), which will emphasize small error tolerance of the algorithm (see Section 7 for more details). Here, the consistency β(Aω) captures the 1In online profit maximization problems, the competitive ratio is defined as the worst-case ratio between the optimal offline profit and that obtained by the online algorithm, which is the inverse of the minimization setting. 5 algorithm’s performance guarantee assuming the prediction is correct, taking the worst case over all possible correct predictions, and the robustness γ(Aω) represents the worst-case guarantee under all possible instances and any prediction y, regardless of accuracy. An ideal algorithm minimizes both β(Aω) (aiming for near 1) and γ(Aω). 2.1 Prediction-Specific Consistency and Robustness The standard metrics in (1) evaluate performance by taking the worst case over both instances I and predictions F. While adopting a worst-case perspective for instances I is a standard and reasonable approach in competitive analysis due to future uncertainty, applying this same perspective to the entire prediction space F is unnecessarily conservative, as better prediction-specific performance may be possible. This motivates a more nuanced framework to evaluate performance conditioned on the specific prediction value y ∈F that the algorithm receives. We thus introduce the notions of prediction-specific consistency and prediction-specific robustness. Definition 2 (Prediction-Specific Metrics). Given an online algorithm Aω and a specific prediction value y ∈F, the prediction-specific consistency βy(Aω) and prediction-specific robustness γy(Aω) relative to y are defined as: βy(Aω) := sup I∈Iy ALG(Aω, I, y) OPT(I) , γy(Aω) := sup I∈I ALG(Aω, I, y) OPT(I) . (2) If an algorithm Aω achieves βy(Aω) and γy(Aω) for a given prediction value y, we say Aω is βy(Aω)-consistent and γy(Aω)-robust under y. The prediction-specific metrics in Definition 2 enable a finer-grained analysis for algorithms with predictions. Instead of considering only the worst-case consistency or robustness over all predictions, we may instead tailor an algorithm’s strategy based on the characteristics of each observed prediction value y. This adaptiveness can enable better performance compared to standard algorithms, which only optimize for worst-case consistency and robustness (1). 2.2 WEAK and STRONG Optimality While these prediction-specific metrics provide valuable insight into algorithm performance conditioned on the received prediction, the standard metrics of consistency and robustness still remain valuable to characterize an algorithm’s overall worst-case guarantees across all instances and predictions. To formally capture the notion of algorithms that perform well both overall and for specific predictions, we introduce two notions of optimality based on these different settings. Our goal is to distinguish algorithms which both achieve the optimal tradeoff between robustness and consistency for the worst-case prediction, as well as on a per-prediction basis. This leads to the following definitions of weakly-optimal and strongly-optimal algorithms. We begin by defining weak optimality, which characterizes algorithms with the optimal tradeoff between the standard notions of consistency and robustness. Definition 3 (WEAK Optimality). For a fixed online problem, consider an online algorithm A that achieves β consistency and γ robustness. A is WEAKLY-optimal if there does not exist another online algorithm with consistency β′ and robustness γ′ such that β′ ≤β and γ′ ≤γ, with at least one inequality being strict. Building upon this, we define the stricter notion of strong optimality incorporating prediction-specific performance. Definition 4 (STRONG Optimality). For a fixed online problem, consider an online algorithm A that achieves βy prediction-specific consistency and γy prediction-specific robustness under prediction y ∈F. A is STRONGLY- optimal if 6 1. A is weakly-optimal; 2. for any prediction y ∈F, there does not exist another online algorithm with prediction-specific consistency β′ y and robustness γ′ y such that β′ y ≤βy and γ′ y ≤γy, with at least one inequality being strict. The notion of strong optimality in Definition 4 extends classic Pareto optimality to the prediction-specific setting. This definition requires Pareto optimality in the two-dimensional consistency-robustness plane (βy, γy) for each specific prediction y ∈F, in addition to the baseline weak optimality. This stricter criterion captures whether an algorithm is unimprovable in its fundamental performance trade-off across the entire prediction space F. 2.3 Online Problems Studied We now briefly introduce each of the problems studied in this work; these problems are summarized in Table 3. Table 3: Instantiation of the general problem components I, x, and y for problems analyzed. Discrete-Time Ski Rental One-Max Search Scheduling Instance I Number of days n values in [L, U] Set of n jobs Feature x Number of days Maximum value Processing times (x1, . . . , xn) Prediction y Predicted number of days Predicted maximum value Predicted times (y1, . . . , yn) Discrete-Time Ski Rental. The discrete-time ski rental problem is a classic online decision-making problem, wherein a decision-maker decides each day whether to rent skis for 1 unit of cost per day or purchase them outright for a fixed cost b ∈N+, without prior knowledge of the total number of skiing days x. The optimal cost is given by min{b, x}. In the standard competitive analysis framework, where no predictions are available, the optimal deterministic algorithm employs a simple rent-or-buy strategy, purchasing the skis on day M = b; this strategy achieves a competitive ratio of 2 −1/b. The optimal randomized algorithm is known as Karlin’s algorithm [38], which strategically balances renting and buying through a carefully-designed probability distribution, achieving a competitive ratio of approximately e/(e −1) ≈1.582. For the ski rental problem, Kumar et al. [2] proposed algorithms that trade off consistency (performance under accurate predictions) and robustness (worst-case performance). They presented a deterministic algorithm achieving (1 + λ) consistency and (1 + 1/λ) robustness (for λ ∈(0, 1)), and a randomized algorithm with ( λ 1−e−λ ) consistency and ( 1+1/b 1−e−(λ−1/b) ) robustness (for λ ∈(1/b, 1)). Subsequently, Wei and Zhang [17] established fundamental lower bounds on this trade-off, showing that as b →∞, for deterministic algorithms, any (1+λ)-consistent algorithm must have a robustness of at least (1+1/λ) for λ ∈(0, 1), while for randomized algorithms, any γ-robust algorithm must have a consistency of at least γ log(1 + 1/(γ −1)). These results show that the algorithms in [2] achieve weak optimality in the limit b →∞(see Definition 3). In this work, we provide a deeper analysis of the algorithms proposed by Kumar et al. [2], examining their prediction-specific consistency and robustness. To maintain consistency and comparability with [2, 17], we also consider the asymptotic regime b →∞. Our findings indicate that neither the deterministic nor randomized algorithms in [2] are strongly-optimal within this framework (see Theorems 4.1 and 5.1). Consequently, we propose novel algorithms (see Algorithms 2 and 9) that achieve strong optimality (see Theorems 4.2 and 5.3), thereby improving upon existing approaches for this problem setting with ML predictions. One-Max Search. The one-max search problem considers a sequence of n elements with values in the range [L, U], where L and U are known positive constants. At each step, an element is observed, and the algorithm must decide whether to accept it immediately or irrevocably discard it. The objective is to select the element with the maximum value. The instance’s difficulty is characterized by the ratio θ = U/L. In classical competitive analysis, the optimal competitive ratio is √ θ, achieved by an algorithm using the fixed threshold √ LU [39]. Note that this is a reward maximization problem, rather than a loss minimization; thus, for this problem, we will 7 consider definitions of classic (1) and prediction-specific (2) consistency and robustness with the ratio between ALG and OPT inverted. For learning-augmented one-max search, where algorithms receive a prediction of the maximum element value, Sun et al. [11] established a fundamental lower bound on the consistency-robustness trade-off: any γ-robust algorithm must have a consistency of at least θ/γ. They further proposed a threshold-based algorithm that achieves this lower bound, implying its weak optimality. In this work, we provide a deeper analysis of the algorithm proposed by Sun et al. [11] within this prediction- specific framework. Our analysis reveals that their algorithm is not strongly-optimal (see Theorem 6.1). Consequently, we propose a novel algorithm (see Algorithm 4) that is strongly-optimal (see Theorem 6.2) and offers improved performance. Non-Clairvoyant Scheduling. The non-clairvoyant scheduling problem concerns scheduling n jobs on a single machine without prior knowledge of their processing times. We focus on the preemptive setting, where jobs can be interrupted and resumed without cost. In the standard competitive analysis framework [1], Round-Robin (RR) achieves an optimal competitive ratio of 2n n+1 [40]. Extending this, Wei and Zhang [17] established a fundamental lower bound on the consistency-robustness trade-off, showing that any algorithm must have robustness of at least n+n(n−1)λ 1+λ(n+1)(n+2)/2 if it is (1 + λ)-consistent. They also propose a two-stage schedule algorithm (see Algorithm 6 in Section A.1) and show it achieves the tight tradeoff for n = 2 jobs in this learning-augmented setting. In the next section, as a warm-up, we provide a deeper analysis of this two-stage algorithm’s prediction-specific consistency and robustness, establishing that it is strongly-optimal. 2.4 Warm-Up: An Existing Algorithm that is Strongly-Optimal While our proposed notion of strong optimality is much stricter than weak optimality, some existing algorithms known only to be weakly-optimal can be shown to satisfy strong optimality. In this section, we consider the problem of non-clairvoyant scheduling with n = 2 jobs (with length predictions y = (y1, y2) with y1 ≤y2). It is well known that the two-stage scheduling algorithm proposed by Wei and Zhang [17] (see Algorithm 6 in Section A.1) achieves the optimal trade-off between classic consistency and robustness and is thus weakly-optimal. Notably, we can show that this algorithm also satisfies prediction-specific Pareto optimality: Theorem 2.1. The two-stage algorithm in [17] with n = 2 is 1 + min{y1/(2y1 + y2), λ}-consistent and 1 + max{1/3, y1/(y1 + 2λ(2y1 + y2))}-robust under y = (y1, y2). Moreover, it is strongly-optimal. We prove this theorem in Section A.2. Despite the fact that this algorithm is strongly optimal for non- clairvoyant scheduling, for many other canonical online problems, including the ski rental and one-max search problems, we shall soon see that existing weakly-optimal algorithms (such as those in [2, 11]) fail to achieve strong optimality (see Theorems 4.1, 5.1, and 6.1). As such, the rest of this paper will consider the development of new algorithms that can achieve strong optimality in these settings. 3 Optimization-Based Meta-Algorithm In this section, we introduce a general optimization-based approach to systematically identify strongly- optimal algorithms for online problems. Our goal is to obtain a general meta-algorithm that, given a prediction y ∈F and a target robustness upper bound γ, returns an algorithm Aω which is strongly-optimal. 3.1 Bi-Level Optimization Formulation We begin by considering the following question: given some target upper bound γ on robustness and a prediction y ∈F, how can we obtain an algorithm that both satisfies the target robustness bound, and obtains a prediction-specific optimal tradeoff between consistency and robustness? Here, γ ∈Λγ := [α∗, +∞) can be any possible robustness level, where α∗denotes the optimal competitive ratio achievable for the problem. 8 Algorithm 1 BI-LEVEL OPTIMIZATION-BASED META-ALGORITHM 1: Input: Desired robustness upper bound γ ∈Λγ := [α∗, +∞) 2: Receive a prediction y; 3: Compute {β∗ y, ω} by solving P1(γ, y) (Problem 1); 4: Obtain {γ∗ y, ω∗} by solving P2(β∗ y, y) (Problem 2) ; 5: Deploy the online decision rule induced by Aω∗; To achieve this goal, we specify a bi-level optimization framework comprising Problems 1 and 2. Recall that a prediction y determines a consistent subset of instances Iy ⊆I, as described in Section 2, and that ALG(Aω, I, y) and OPT(I) denote, respectively, the costs of the algorithm Aω augmented by the prediction y and optimal offline algorithm under instance I ∈I. Our first optimization problem (Problem 1), referred to as P1(γ, y), determines the best achievable prediction-specific consistency of any γ-robust algorithm under the prediction y ∈F. Let {β∗ y, ω} denote its optimal solution. We then solve a second optimization problem (Problem 2), referred to as P2(β∗ y, y), to find the most robust algorithm among those achieving β∗ y consistency under the prediction y. Let {γ∗ y, ω∗} denote its optimal solution. The resulting algorithm Aω∗with the optimal solution ω∗is then implemented for the prediction y. Problem 1 (P1): Minimize Consistency P1(γ, y) := min βy,ω∈Ω βy s.t. (3a) ALG(Aω, I, y) ≤γ OPT(I), ∀I ∈I, (3b) ALG(Aω, I, y) ≤βyOPT(I), ∀I ∈Iy. (3c) Constraint (3b) enforces the initial robustness level γ. Problem 2 (P2): Minimize Robustness P2(β∗ y, y) := min γy,ω∈Ω γy s.t. (4a) ALG(Aω, I, y) ≤γyOPT(I), ∀I ∈I, (4b) ALG(Aω, I, y) ≤β∗ yOPT(I), ∀I ∈Iy. (4c) Constraint (4c) enforces the optimal prediction- specific consistency β∗ y found in Problem 1. 3.2 Meta-Algorithm Formulation The preceding approach (Problems 1 and 2) focuses on deriving an instance-specific solution given a fixed prediction y. To operationalize this idea in the online setting, where predictions may vary, we introduce a meta- algorithm (Algorithm 1) that invokes this two-stage optimization procedure for any realized prediction. This enables prediction-specific robust and consistent decision-making across varying inputs, with a user-specified robustness bound γ. In the following proposition, which we prove in Section B, we show that this meta-algorithm yields decisions on the prediction-specific Pareto frontier and satisfying the desired robustness bound. Moreover, the condition that γ is on the (non-prediction-specific) Pareto frontier guarantees the weak optimality of Algorithm 1; this holds if tight Pareto fronts are available off-the-shelf for the problem at hand, such as the tight tradeoffs between classical consistency and robustness available for ski rental and one-max search [2, 11, 17]. Proposition 1. Suppose there exists a weakly-optimal algorithm with robustness γ. With ω∗being an optimal solution of Problem 2 in line 4 of Algorithm 1, Aω∗is β∗ y-consistent and γ∗ y-robust with respect to the prediction y, with γ∗ y ≤γ. Furthermore, Algorithm 1 is strongly-optimal. Notably, the meta-algorithm (Algorithm 1) offers a systematic pipeline to achieve Pareto optimality for each prediction y ∈F while also guaranteeing weak optimality. While the tractability of solving the constituent bi-level optimization problems P1(γ, y) and P2(β∗ y, y) depends on the structure of the specific online problem, this framework provides a foundation for deriving explicit and tractable strongly-optimal algorithms for the problems we discuss in the remainder of this paper. 9 3.3 Generalizations of this framework We briefly note that the bi-level optimization framework we have proposed is quite flexible, and could be generalized in a number of ways to enable its application to different problem settings or objectives. For example, Problems 1 and 2 could be augmented to include practical considerations such as risk- or uncertainty-sensitive constraints and objectives [36, 41, 42] or tolerance to erroneous predictions. In particular, in this work, we develop an error-tolerant variant of this framework, motivated by the continuous nature of certain problems (e.g., prices in the one-max search problem) and the fact that prediction errors are often unavoidable in practice. Instead of analyzing the tradeoff between consistency and robustness under perfectly accurate predictions, we can instead study the tradeoff between ϵ-consistency and robustness, where ϵ-consistency denotes the worst-case guarantee when the prediction error is bounded by a chosen constant ϵ. We further describe this generalized framework, and how to design algorithms in this setting, in Section 7. While our approach could, in principle, be extended to more general and complex dynamic multi-round problems, we anticipate that such an extension would likely require nontrivial technical developments. In particular, such an extension would require a number of additional structural assumptions, e.g., that the overall prediction space F remains fixed and independent of the algorithm’s actions, and that the algorithm’s actions exert negligible influence on future prediction values. Moreover, depending on the problem, the resulting optimization problems may be considerably harder to analyze. For complex problems like metrical task systems and non-clairvoyant scheduling with n jobs, achieving even weak optimality remains an open question [17, 33]; as such, designing strongly-optimal algorithms is inherently challenging. Thus, while our methodology in its current form may not be directly applicable to these settings, we posit that it may still serve as a useful conceptual framework for designing prediction-specific algorithms for these problems. In the rest of this paper, we will consider problem settings where this methodology can be tractably instantiated. 4 Deterministic Ski Rental We begin our investigation of strongly-optimal algorithm design in specific problem settings by considering deterministic algorithms for the discrete-time ski rental problem described in Section 2.3. If a deterministic decision-maker buys skis at the start of day M ∈N (that may depend on the predicted day y), then the induced cost is ALGDSR(M, x, y) := x · 1M(y)>x + (b + M(y) −1) · 1M(y)≤x, where b ∈N denotes the price of the skis. Then, for this problem, the general definitions of the prediction-specific consistency and robustness in Equation (2) are instantiated as: βDSR y (AM(y)) := ALGDSR(M, y, y) min{b, y} , γDSR y (AM(y)) := sup x∈N ALGDSR(M, x, y) min{b, x} . (5) 4.1 The Deterministic Algorithm of Kumar et al. is Not Strongly-Optimal We first provide a deeper analysis of the deterministic algorithm proposed by Kumar et al. [2], examining its prediction-specific consistency and robustness. The deterministic algorithm of Kumar et al. [2, Algorithm 2], which we denote by KD, purchases at the beginning of day ⌈λb⌉if y ≥b, and at the beginning of day ⌈b/λ⌉otherwise. It achieves (1+λ)-consistency and (1 + 1/λ)-robustness, where λ ∈(0, 1) is a tunable hyper-parameter. By the lower bound of Wei and Zhang [17], KD is weakly-optimal as the price b →+∞. However, as we show in the following theorem (which is proved in Section C.1), the KD algorithm is not strongly-optimal. Theorem 4.1. KD is 1-consistent and (1+1/λ)-robust when y < b, and (1+λ)-consistent and (1+1/λ)-robust when y ≥b. Furthermore, KD is not strongly-optimal even for b →+∞. 10 4.2 A Strongly-Optimal Algorithm for Deterministic Ski Rental We now turn to the design of an algorithm that is strongly-optimal in the asymptotic regime b →∞. Suppose the decision maker buys at the beginning of day M. The objective is to determine M for each y ∈F such that the resulting prediction-specific consistency-robustness pair (βy(M), γy(M)) is not dominated by that of any alternative decision M′ ̸= M, while remaining within the universal consistency–robustness bound (1 + λ, 1 + 1/λ). We consider the following two cases. Case I: y < b. Given the decision M, the prediction-specific consistency and robustness are βy(M) = ( M−1+b y if M ≤y, 1 if M > y. γy(M) = ( M−1+b M if M ≤b, M−1+b b if M > b. It’s clear that the decision M = b simultaneously achieves the optimal consistency 1 and the optimal robustness 2 −1 b with respect to y, thus dominating any other decisions. Case II: y ≥b. Given the decision M, the prediction-specific consistency and robustness are βy(M) = ( M−1+b b if M ≤y, y b if M > y. γy(M) = ( M−1+b M if M ≤b, M−1+b b if M > b. As shown in Figure 2 and 3, we observe that choosing M = y + 1 dominates all options with M > y + 1, as it offers better robustness without compromising consistency, making M = y + 1 a strong choice. Note that βy(y + 1) = y/b and γy(y + 1) = (y + b)/b. Therefore, two critical decision boundaries within (0, b) are M1 = y + 1 −b and M2 = (b2 −b)/y.2 Specifically, when M1 < M2, M = y + 1 dominates any choice within (M1, M2). Our strategy is to set M = ⌈λb⌉as the primary decision, and to take M = y + 1 whenever βy(M) 1 b O y M βy(M) = M −1 + b b βy(M) = y b y + 1 −b y + 1 y/b Figure 2: Prediction-specific consistency βy(M) ver- sus decision M given y ≥b. O γy(M) = M −1 + b M b y + 1 γy(M) = M −1 + b b (y + b)/b (b2 −b)/y M γy(M) Figure 3: Prediction-specific robustness γy(M) versus decision M given y ≥b. the primary choice is dominated. Given λ ∈(0, 1), the condition that y + 1 dominates ⌈λb⌉in the asymptotic regime b →∞requires that y simultaneously satisfies the following constraints: (i) λb ≥y + 1 −b, (ii) λb ≤(b2 −b)/y, which together imply y ∈[b, min {b(λ + 1) −1, (b −1)/λ}]. Therefore, when y ∈[b, min {b(λ + 1) −1, (b −1)/λ}], ⌈λb⌉is dominated by y + 1, thereby making y + 1 a better choice. On the other hand, when y > min {b(λ + 1) −1, (b −1)/λ}, ⌈λb⌉remains on the Pareto front. All together, these cases motivate the design of our deterministic algorithm, PDSR (Algorithm 2). In the following theorem, we characterize the prediction-specific consistency and robustness of PDSR, establishing its strong optimality in the asymptotic regime b →+∞; the full proof is deferred to Section C.2. 2Note that b2−b y may not be an integer. 11 Algorithm 2 PDSR: PREDICTION-SPECIFIC DETERMINISTIC SKI RENTAL 1: Input: λ ∈(0, 1) 2: If y < b then determine M = b; 3: Else if y ∈[b, min{b(λ + 1) −1, b−1 λ }] then determine M = y + 1; 4: Else if y > min{b(λ + 1) −1, b−1 λ } then determine M = ⌈λb⌉; 5: Buy skis on day M. Theorem 4.2. PDSR presented in Algorithm 2 is strongly-optimal when b →+∞, and is        1-consistent and (2 −1 b)-robust if y < b; y b-consistent and 1 + y b  -robust if y ∈  b, min  b(λ + 1) −1, b−1 λ  ; (1 + λ)-consistent and (1 + 1 λ)-robust if y > min  b(λ + 1) −1, b−1 λ . 5 Randomized Ski Rental We now turn to considering the design of randomized algorithms for the discrete-time ski rental problem. Consider a randomized algorithm π = (πi)i∈N+ (that may depend on the predicted day y) that chooses to buy skis at the start of day i ∈N+ with some probability πi. The algorithm’s average cost is ALGRSR(π, x, y) := +∞ X i=1 πi(y) · [x · 1i>x + (b + i −1) · 1i≤x], where b ∈N denotes the price of the skis. For this problem, the prediction-specific consistency and robustness in Equation (2) are instantiated as: βRSR y (Aπ(y)) := ALGRSR(π, y, y) min{b, y} , γRSR y (Aπ(y)) := sup x∈N ALGRSR(π, x, y) min{b, x} . (6) 5.1 The Randomized Algorithm of Kumar et al. is Not Strongly-Optimal Kumar et al. [2] propose a randomized ski rental algorithm, which we denote by KR, which is a variant of Karlin’s classical randomized strategy [38]. Given a prediction y and a hyper-parameter λ ∈(1/b, 1), KR chooses the purchase day i according to the distribution πi =    b−1 b m−i · 1 b  1−  1−1 b m, if i ≤m, 0, if i > m, (7) where m := ( ⌈b/λ⌉, if y < b, ⌊λb⌋, if y ≥b. Under this strategy, KR achieves consistency λ/(1 −e−λ) and robustness 1/(1 −e−(λ−1/b)) [2]. It is known that KR is weakly-optimal as b →+∞[17]; however, as we show in the following theorem, which is proved in Section E.3, KR is not strongly-optimal. Theorem 5.1. KR is not strongly-optimal, even for b →+∞. 12 Algorithm 3 Variant of Algorithm 1 for randomized ski rental 1: Input: γ ∈Λγ := [eb/(eb −1), ∞); 2: Compute {β∗ y, π} by solving PRSR 1 (γ, y) (Problem 8); 3: Obtain {γ∗ y, π∗} by solving PRSR 2 (β∗ y, y) (Problem 9); 4: Choose i randomly according to the distribution π∗; 5: Buy the skis at the start of day i. 5.2 A Strongly-Optimal Algorithms for Randomized Ski Rental In the analysis that follows, we focus on designing a strongly-optimal algorithm for randomized ski rental. We use R(π, x) to denote the expected ratio between the cost achieved by a randomized algorithm that uses a distribution π over purchase days and the offline solution when the actual ski season lasts x days, i.e., R(π, x) := ALGRSR(π, x, y)/min{b, x}. Let βy(π) and γy(π) denote the prediction-specific consistency and robustness of π; thus, βy(π) = R(π, y) γy(π) = max x∈N+ R(π, x). An Optimization-Based Algorithm. While the bi-level optimization-based meta-algorithm (Algorithm 1) provides a general method for computing strongly-optimal strategies, solving the underlying optimization problems can be complicated in general—in particular, since they require optimizing over all possible algorithms. Fortunately, for randomized ski rental, it is possible to restrict the support of the randomized strategy to a finite set U(b, y) := [b]∪{y +1} without loss of optimality, thus enabling a computationally tractable solution to these problems (see Lemma 1 in Section E.1). Given a target robustness level γ ∈Λγ := [eb/(eb −1), +∞), where eb := (1 + 1 b−1)b, this structural result allows us to specialize the bi-level optimization framework introduced in Section 3.1 as follows: Problem 1 (PRSR 1 ): Minimize Consistency min π,βy βy (8a) s.t. R(π, x) ≤γ, ∀x ∈U(b, y) (8b) R(π, y) ≤βy (8c) Let {π, β∗} denote the optimal solution. Problem 2 (PRSR 2 ): Minimize Robustness min π,γy γy (9a) s.t. R(π, x) ≤γy, ∀x ∈U(b, y) (9b) R(π, y) ≤β∗ y (9c) Let {π∗, γ∗ y} denote the optimal solution. Note that both of these problems are linear programs, and thus the corresponding meta-algorithm (Algo- rithm 3) is tractable to implement in this case. As such, we obtain as a consequence of Proposition 1 the following strong optimality result (see Section E.4 for a full proof). Theorem 5.2. Algorithm 3 is strongly-optimal when b →∞. Explicit Algorithm Design. While the optimization-based algorithm described in Algorithm 3 is strongly- optimal, it does not provide much insight into the analytic structure of the resulting probability distribution over purchase days. Complementing this result, we can in fact leverage the problem structure to derive a novel and explicit strongly-optimal randomized algorithm PRSR (Algorithm 9), whose construction we detail in Appendix D. The algorithm builds on two transformation procedures, OPERATION A (Algorithm 7) and OPERATION B (Algorithm 8), which systematically adjust the “equalizing distributions” (Theorem D.1, Equation 18) by reallocating probability mass to trace the prediction-specific Pareto frontier for each y. As we show in the following theorem, PRSR, like the optimization-based approach, achieves strong optimality. Theorem 5.3. The algorithm PRSR (Algorithm 9) is strongly-optimal when b →+∞. 13 Algorithm 4 PST: PREDICTION-SPECIFIC THRESHOLDING 1: Input: λ ∈[0, 1] 2: Determine M = λL + (1 −λ) √ LU; 3: If y ∈[L, M] then set Φ = √ LU; 4: Else if y ∈(M, √ LU] then set Φ = y; 5: Else if y ∈( √ LU, U] then determine µ = (1−λ) √ θ (1−λ) √ θ+λ and set Φ = µ √ LU + (1 −µ)y; 6: Perform OTA with threshold Φ. The proof of Theorem 5.3 is detailed in Section E.5. We begin with a perturbation-based analysis that char- acterizes the structure of the optimal randomized distribution, and then establish prediction-specific optimality via a primal–dual formulation. 6 One-Max Search In this section, we shift our focus to the one-max search problem described in Section 2.3 with predicted maximum price. Let x denote the true highest price and let y denote the predicted highest price. If the decision- maker decides to set their purchase threshold to Φ ∈[L, U]—i.e., they purchase at the first price exceeding Φ, which may depend on the prediction y—then their reward is ALGOMS(Φ, x, y) := Φ(y) · 1Φ(y)≤x + L · 1Φ(y)>x. For this problem, the prediction-specific consistency and robustness are: βOMS y (AΦ(y)) := y ALGOMS(Φ, y, y), γOMS y (AΦ(y)) := sup x∈[L,U] x ALGOMS(Φ, x, y). (10) 6.1 The Algorithm of Sun et al. is Not Strongly-Optimal Sun et al. [11] proposed a β-consistent and γ-robust Online Threshold-based Algorithm (OTA) using the threshold Φ =      Lβ, if y ∈[L, Lβ); λLγ + (1 −λ)y/β if y ∈[Lβ, Lγ); Lγ if y ∈[Lγ, U], where β = 2λθ/[ p (1 −λ)2 + 4λθ −(1 −λ)], γ = θ/β, and λ ∈[0, 1]. Furthermore, they show that any γ-robust deterministic algorithm for one-max search must have consistency β ≥θ/γ, implying their algorithm is weakly-optimal. However, as we establish in the following theorem (which is proved in Section F.1), their algorithm is not strongly-optimal. Theorem 6.1. Sun’s algorithm is not strongly-optimal. 6.2 A Strongly-Optimal Algorithm for One-Max Search We propose in Algorithm 4 a new approach, PST, which, by more carefully selecting the purchase threshold, achieves strong optimality. In particular, it achieves the prediction-specific consistency and robustness values established in the following theorem. Theorem 6.2. PST (Algorithm 4) is strongly-optimal, and is        (y/L)-consistent and √ θ-robust if y ∈[L, λL + (1 −λ) √ LU); 1-consistent and (U/y)-robust if y ∈[λL + (1 −λ) √ LU, √ LU];  (1−λ) √ θy+λy (1−λ)U+λy  -consistent and  (1−λ)U+λy (1−λ) √ LU+λL  -robust if y ∈( √ LU, U]. 14 Algorithm 5 ϵ-Tolerant PST 1: Input: λ ∈[0, 1], ϵ > 0 2: Determine M = λ(L + 3ϵ) + (1 −λ)( √ LU −ϵ); 3: If y ∈[L, M −2ϵ] then set Φ = √ LU; 4: Else if y ∈(M −2ϵ, M) set Φ = M −ϵ; 5: Else if y ∈[M, √ LU + ϵ] then set Φ = y −ϵ; 6: Else if y ∈( √ LU + ϵ, U −ϵ) then set µ = (U−2ϵ)−LU/(M−ϵ) (U−2ϵ)− √ LU , Φ = µ √ LU + (1 −µ)(y −ϵ); 7: Else if y ∈[U −ϵ, U] then set Φ = LU/(M −ϵ); 8: Perform OTA with threshold Φ. We prove Theorem 6.2 in Section F.2; the proof identifies prediction-specific optimal thresholds (Φ) by partitioning the prediction space and deriving each threshold (which is a convex combination involving µ) that ensures Pareto-optimality for each segment of predictions. 7 Error-Tolerant Algorithms Pareto-optimal algorithms can exhibit brittleness, a vulnerability noted by [18, 19], where the competitive ratio degrades sharply toward the worst-case robustness bound even with small prediction errors. This issue stems from the standard definition of consistency (see Definition 1), which assumes strictly perfect predictions (x(I) = y) and thus fails to consider performance under erroneous predictions. To address this, we use the one-max search problem as an example to demonstrate how explicit and tractable Pareto-optimal algorithms incorporating a “generalized consistency” can be constructed in our prediction-specific framework, offering good performance tradeoffs when faced with minor prediction errors. It is worth emphasizing that the notion of “error tolerance” considered in this section differs from the concept of “smoothness” discussed in [18–20]. The former concerns the tradeoff between generalized consistency (allowing small prediction errors) and robustness (with arbitrarily large errors), whereas the latter requires that the algorithm’s performance degrade smoothly as the prediction error increases. Nevertheless, we view the two notions as related, both contributing to alleviating "brittleness". To account for small prediction errors, we specify a desired error tolerance ϵ and define a relaxed consistent set Iϵ y := {I ∈I : L(x(I), y) ≤ϵ} , where L : F × F →R≥0 is a chosen loss function that measures prediction error. When substituting Iy with Iϵ y, we refer to the corresponding consistency metrics in Equation (1) and Equation (2) as ϵ-consistency and prediction-specific ϵ-consistency, denoted by βϵ and βϵ y, respectively. Substituting β, βy and Iy with βϵ, βϵ y and Iϵ y in Section 3, we can generalize our meta-algorithm to incorporate error tolerance. For the one-max search problem, its inherent continuity renders prediction errors unavoidable in practice, underscoring the necessity of incorporating error tolerance. We fix L(x, y) = |x −y| and assume that ϵ is small relative to the scale of the problem. We propose an error-tolerant algorithm for this problem, ϵ-tolerant PST, in Algorithm 5. To conclude the section, we present three theorems that characterize the performance of this algorithm. In particular, Theorem 7.1 establishes the tradeoffs between ϵ-consistency and robustness achieved by ϵ-tolerant PST, together with their prediction-specific counterparts, while Theorem 7.2 provides a lower bound on the global ϵ-consistency–robustness tradeoff. Finally, Theorem 7.3 shows that the ϵ-tolerant PST algorithm is strongly optimal. The proofs of all three results are given in Section G. Theorem 7.1. ϵ-Tolerant PST achieves [(M −ϵ)/L] ϵ-consistency and U/(M −ϵ) robustness. Specifically, 15 ϵ-Tolerant PST achieves                        [(y + ϵ)/L] ϵ-consistency and √ θ robustness if y ∈[L, M −2ϵ]; (M −ϵ)/L ϵ-consistency and U/(M −ϵ) robustness if y ∈(M −2ϵ, M); (y + ϵ)/(y −ϵ) ϵ-consistency and U/(y −ϵ) robustness if y ∈[M, √ LU + ϵ]; (y+ϵ) µ √ LU+(1−µ)(y−ϵ) ϵ-consistency and µ √ LU+(1−µ)(y−ϵ) L robustness if y ∈( √ LU + ϵ, U −ϵ); (M −ϵ)/L ϵ-consistency and U/(M −ϵ) robustness if y ∈[U −ϵ, U]; where M = λ(L + 3ϵ) + (1 −λ)( √ LU −ϵ). Theorem 7.2. Any γ-robust algorithm has at least (θ/γ) ϵ-consistency, and any algorithm that achieves βϵ ϵ-consistency must be at least (θ/βϵ)-robust. Theorem 7.3. Assume ϵ ≤( √ LU −L)/4. The ϵ-consistency and robustness of ϵ-Tolerant PST (Algorithm 5) are jointly Pareto optimal. Moreover, for every prediction y ∈F = [L, U], ϵ-Tolerant PST achieves prediction- specific ϵ-consistency βϵ y and robustness γy that are jointly Pareto optimal. 8 Numerical Experiments In this section, we evaluate the performance of our proposed algorithms in three different case studies spanning synthetic and real-world settings.3 8.1 Case Study 1: Synthetic Data Experiments for Ski Rental We begin by testing the performance of our algorithms for the ski rental problem via simulations on synthetic instances. We let the actual number of skiing days x be a uniformly random integer drawn from [1, 10b], where b = 100 is buying costs of skis. The prediction y is generated with accuracy p: with probability p, the prediction is accurate (i.e., y = x), and with probability (1 −p), the prediction y ∼N(x, σ2), where σ = 500 and the output is rounded and made positive. For the deterministic setting, we compare PDSR (Algorithm 2) with KD ([2], Algorithm 2) and the classic competitive algorithm (which always buys on day b), using the same parameter λ = 0.5 for both PDSR and KD. For the randomized setting, we compare PRSR (Algorithm 9) with KR ([2], Algorithm 3) and Karlin’s algorithm [38], using γ = 3 for PRSR and λ = ln(3/2) for KR, ensuring PRSR and KR have the same robustness 3. Each setup is evaluated over 10000 independent trials. Figures 4 and 5 present the empirical results of average competitive ratio versus accuracy p. We observe that our proposed algorithms, PDSR and PRSR, consistently outperform both classic online algorithms and existing learning-augmented algorithms across both settings. 8.2 Case Study 2: Ski Rental on Dynamic Power Management We next evaluate our ski rental algorithms on real-world traces for a Dynamic Power Management (DPM) problem, where we control the idle and active periods of a computing device. Modern processors typically support multiple power states: deeper states disable more components, leading to lower operating cost/energy but higher wake-up penalties/overhead. During each idle interval, a DPM controller must decide whether to stay active or transition into a deeper sleep state without knowing the remaining idle duration. 3Our code is publicly available at https://github.com/Bill-SizheLi/Prediction_Specific_Design_of_ Learning-Augmented_Algorithms 16 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy of ML Predictor 1.5 1.6 1.7 1.8 1.9 Empirical Competitive Ratio PDSR (Ours) KD Competitive Figure 4: Empirical competitive ratios versus accuracy p in deterministic setting. 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy of ML Predictor 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 Empirical Competitive Ratio PRSR (Ours) KR Competitive Figure 5: Empirical competitive ratios versus accuracy p in randomized setting. log126_copyDtoH log176_Booting log225_MusicFaceBook log225_MusicTwitter log245_MusicFaceBook log126_copyDtoH log176_Booting log225_MusicFaceBook log225_MusicTwitter log245_MusicFaceBook 1.0 1.5 2.0 2.5 3.0 3.5 Empirical ratio Good predictions Bad predictions ( e e 1)-competitive Blindly Trust KR PRSR(Ours) Figure 6: Empirical competitive ratios on DPM traces for good and bad predictions. The two-state DPM system (one active and one sleep state with zero operating cost) is equivalent to the ski rental problem, where remaining active corresponds to renting and transitioning to the sleep state corresponds to buying. Moreover, Antoniadis et al. [5] demonstrated that randomized ski rental algorithms can be converted to multi-state DPM algorithms. Setup. We consider a DPM problem with 4 states. Specifically, we use the same problem setting as Antoniadis et al. [5], employing I/O traces4 collected from a Nexus 5 smartphone [43], from which idle intervals between consecutive requests are extracted. We adopt the IBM mobile hard-drive power states reported in [44], consistent with the setup in [5]. The idle periods are scaled in the same way as in [5]. We use the five largest traces for evaluation. Since the main goal of this section is to probe the algorithms’ performance under the two extremes of very good and very bad predictions, we consider the following method for generating predictions: "good predictions" and "bad predictions" are obtained by perturbing the ground truth with N(0, σ2 good) and N(0, σ2 bad) noises, respectively. In this experiment, we set σgood = 0.02 and σbad = 20. We compare four algorithms: the classic ( e e−1)-competitive algorithm, Blindly Trust (which treats the prediction as if it is correct and optimizes accordingly), the randomized algorithm of Kumar et al. (KR), and our randomized algorithm (PRSR). For the learning-augmented algorithms, we use the same parameter values for λ and γ as in Case Study 1. Results. Figure 6 reports the empirical competitive ratios on the real DPM traces. We observe that our strongly-optimal algorithm PRSR consistently achieves the lowest competitive ratios, except for when 4The traces are available at http://iotta.snia.org/traces/block-io. 17 the prediction quality is very good (in which case, the non-robust Blindly Trust algorithm outperforms it. This validates our algorithm’s ability to exploit specific predictions to enable good performance regardless of prediction value or quality. 8.3 Case Study 3: One-Max Search on VIX Trading Figure 7: VIX Closing Price from January 2020 to December 2024. The VIX index soared from a pre-pandemic level of 12-15 to 83 in March 2020. The VIX, often referred to as the fear index, exhibits sharp volatility spikes that make it a natural benchmark for evaluating online search algorithms. Its uncertainty and heavy-tailed dynamics closely mirror the one-max search setting, where the core challenge lies in optimally timing a single exit. This characterization is vividly illustrated by the extreme volatility in early 2020: within just a few weeks at the onset of the COVID-19 market shock, the VIX surged from below $15 to over $80 (shown in Figure 7). Setup. We evaluate our one-max search algorithms in a case study using the daily closing prices of the VIX index from January 2020 to December 2024 (shown in Fig- ure 7), which consist of publicly available values obtained from the Cboe Options Exchange. We assume that at the beginning of each month, an agent holds one unit of VIX and must choose a single day within that month to sell it. Over the course of five years, there are 60 trading rounds (one per month), each offering approximately 20 to 21 trading opportunities, as the VIX is only traded on weekdays. We set L and U as the historical minimum and maximum prices over the entire 5 years.5 Baselines. We compare our proposed methods, PST(Algorithm 4) and ϵ-Tolerant PST(Algorithm 5), to three baseline algorithms: (i) blindly trusting the prediction, (ii) the classical online algorithm of El-Yaniv [39], and (iii) prior learning-augmented algorithms (Sun’s [11] and Benomar’s [19]). Experiment 1. In this experiment, we consider a naive prediction strategy that simply uses the highest observed VIX price from the previous month. As the evaluation metric, we use the empirical ratio6, defined as the cumulative online outcome up to the current round divided by the cumulative offline optimum. This metric reflects how well an algorithm performs in practice relative to the hindsight-optimal strategy, averaged over time. We run the algorithms over the 60-month horizon using historical VIX data, and report the empirical ratios at each round to visualize both long-term trends and the stability of performance across different market periods in Figure 8. For our proposed algorithms, we fix the trade-off parameter λ = 0.3 in PST, and use λ = 0.3 and ϵ = 1.8 in ϵ-Tolerant PST. For baseline algorithms with tunable parameters, including those from Sun et al. [11] and Benomar et al. [19], we find that setting λ = 1.0 yields the best cumulative empirical ratio over the full 60-month horizon. However, to better illustrate performance variation across different regimes, we also include their results under λ = 0.3 and λ = 0.6. Experiment 2. In practical settings, machine-learned predictions are often more accurate than the naive predictor used in Experiment 1, though they remain imperfect due to model limitations and data noise. The degree of prediction accuracy varies with the capability and training of the underlying ML model. To systematically evaluate algorithmic performance under varying prediction quality, we introduce the notion of an error level – a scalar value between 0 and 1 that quantifies the deviation from perfect information. For each trading round, the prediction is constructed via linear interpolation between the previous month’s maximum (naive prediction) 5The focus of this paper is not on the impact of L and U; therefore, we simply set them to historical values. In practical trading scenarios, L and U can be viewed as predetermined parameters representing the stop-loss and take-profit thresholds in the exit strategy of the trading process. 6Note that the empirical ratio here is the inverse of that used in the theoretical analysis, so as to better reflect the proportion of the hindsight optimum that the online or learning-augmented algorithm can achieve (or recover). 18 0 10 20 30 40 50 60 Round 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 Empirical Ratio Blindly Trust El-Yaniv's (Sun's ( =1) & Benomar's( =1)) Sun's ( =0.6) Sun's ( =0.3) Benomar's ( =0.6) Benomar's ( =0.3) PST (Ours) -Tolerant PST (Ours) Figure 8: Empirical competitive ratios over 60 trading rounds. 0.0 0.2 0.4 0.6 0.8 1.0 Error Level 0.800 0.825 0.850 0.875 0.900 0.925 0.950 0.975 1.000 Cumulative Empirical Ratio Blindly Trust El-Yaniv's Sun's Benomar's PST (Ours) -Tolerant PST (Ours) Figure 9: Cumulative empirical competitive ratio with varying prediction error levels. and the current month’s actual maximum (perfect prediction), where the error level determines the interpolation weight. An error level of 1.0 corresponds to the naive prediction, while 0.0 yields the perfect prediction. We assess the cumulative empirical ratio of each algorithm over all 60 trading rounds under varying error levels from 0 to 1, and report the result in Figure 9. To ensure a fair comparison across prediction regimes, we fix the trade-off parameter λ = 0.5 for all tunable methods, including those of Sun, Benomar, PST, and ϵ-Tolerant PST. For ϵ-Tolerant PST, we additionally set ϵ = 0.5 to account for moderate tolerance to prediction error. Results. The results for Experiment 1 are shown in Figure 8. Our methods consistently outperform all baselines across the decision horizon, with final empirical competitive ratios of 87.2% (ϵ-Tolerant PST) and 85.7% (PST), compared to 82.4%–84.4% for all baselines. The results for Experiment 2 are shown in Figure 9; ϵ-Tolerant PST remains consistently superior across nearly all error levels. 9 Concluding Remarks In this work, we introduce a prediction-specific analysis framework and a finer-grained notion of strong optimality for online algorithms with predictions. We further provide a systematic approach to designing Pareto-optimal online algorithms with better prediction-specific performance than prior algorithms, and we show how this methodology can yield significant performance improvements for the problems of ski rental (deterministic and randomized) and one-max search. Future Directions. In contrast to the ski rental and one-max search settings, the existing weakly-optimal non-clairvoyant scheduling algorithm of Wei and Zhang [17] is strongly-optimal when n = 2. Thus, designing a strongly-optimal algorithm for n-job non-clairvoyant scheduling remains an open question. Similarly, as we did for one-max search, developing explicit error-tolerant strongly-optimal algorithms for both deterministic and randomized ski rental is also an interesting future direction. In addition, exploring whether the bi-level optimization in the meta-algorithm (Algorithm 1) in Section 3 can be tractably solved for more complex, multi-stage problems represents a challenging but potentially impactful direction for future study. References [1] Allan Borodin and Ran El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 2005. ISBN 9780521562458. https://books.google.com/books/about/Online_ Computation_and_Competitive_Analy.html?id=v3faI8pER6IC. [2] Ravi Kumar, Manish Purohit, and Zoya Svitkina. Improving Online Algorithms via ML Predictions. In Advances in Neural Information Processing Systems 19 31 (NeurIPS 2018), pages 1–10, 2018. https://papers.nips.cc/paper/ 8174-improving-online-algorithms-via-ml-predictions. [3] Thodoris Lykouris and Sergei Vassilvitskii. Competitive Caching with Machine Learned Advice. Journal of the ACM, 68(4):24:1–24:25, 2021. doi: 10.1145/3447579. https://dl.acm.org/doi/10.1145/ 3447579. [4] Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, and Berthold Simon. Online Metric Algorithms with Untrusted Predictions. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 345–355. PMLR, 2020. https://proceedings.mlr.press/v119/ antoniadis20a.html. [5] Antonios Antoniadis, Christian Coester, Marek Eliáš, Adam Polak, and Bertrand Simon. Learning- augmented dynamic power management with multiple states via new ski rental bounds. In Advances in Neural Information Processing Systems (NeurIPS 2021), 2021. https://proceedings.neurips. cc/paper/2021/hash/8b8388180314a337c9aa3c5aa8e2f37a-Abstract.html. [6] Pengfei Li, Jianyi Yang, Adam Wierman, and Shaolei Ren. Learning-Augmented Decentralized Online Convex Optimization in Networks. In Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), page 163–165, 2025. doi: 10.1145/3726854.3727293. https://dl.acm.org/doi/10.1145/3726854.3727293. [7] Adam Lechowicz, Nicolas Christianson, Jinhang Zuo, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. The Online Pause and Resume Problem: Optimal Algorithms and an Application to Carbon-Aware Load Shifting. In Proceedings of the 14th ACM International Conference on Future Energy Systems (e-Energy ’23). ACM, 2023. doi: 10.1145/3626776. https://dl.acm.org/ doi/10.1145/3626776. [8] Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints. Proc. ACM Meas. Anal. Comput. Syst., 9(1):8:1–8:49, March 2025. doi: 10.1145/3711701. https://dl.acm.org/doi/10.1145/3711701. [9] Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. Online Conversion with Switching Costs: Robust and Learning-Augmented Algo- rithms. In Proceedings of the 2024 ACM SIGMETRICS / IFIP Performance Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS/PERFORMANCE ’24), pages 1–48. ACM, 2024. doi: 10.1145/3652963.3655074. https://dl.acm.org/doi/10.1145/3652963. 3655074. [10] Bo Sun, Ali Zeynali, Tongxin Li, Mohammad Hajiesmaili, Adam Wierman, and Danny H. K. Tsang. Competitive Algorithms for the Online Multiple Knapsack Problem with Application to Electric Vehicle Charging. In Proceedings of the ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS ’21). ACM, 2021. doi: 10.1145/3543516.3456271. https: //dl.acm.org/doi/10.1145/3543516.3456271. [11] Bo Sun, Russell Lee, Mohammad Hajiesmaili, Adam Wierman, and Danny H.K. Tsang. Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Prob- lems. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), page 55a988dfb00a914717b3000a3374694c, 2021. https://proceedings.neurips.cc/paper/ 2021/file/55a988dfb00a914717b3000a3374694c-Paper.pdf. [12] Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. Chasing Convex Functions with Long-term Constraints. In Proceedings of the 20 41st International Conference on Machine Learning, volume 235, pages 26259–26289. PMLR, 2024. https://proceedings.mlr.press/v235/lechowicz24a.html. [13] Anna R. Karlin, Claire Kenyon, and Dana Randall. Dynamic TCP acknowledgement and other stories about e/(e-1). In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (STOC), pages 502–509. ACM, 2001. doi: 10.1145/380752.380845. https://dl.acm.org/doi/10.1145/ 380752.380845. [14] Lingqing Ai, Xian Wu, Lingxiao Huang, Longbo Huang, Pingzhong Tang, and Jian Li. The multi-shop ski rental problem. In Proceedings of the 2014 ACM SIGMETRICS/International Conference on Measurement and Modeling of Computer Systems, pages 463–475. ACM, 2014. doi: 10.1145/2591971.2591984. https://dl.acm.org/doi/10.1145/2591971.2591984. [15] Russell Lee, Jessica Maghakian, Mohammad Hajiesmaili, Jian Li, Ramesh Sitaraman, and Zhenhua Liu. Online Peak-Aware Energy Scheduling with Untrusted Advice. In Proceedings of the 12th ACM International Conference on Future Energy Systems (e-Energy ’21). ACM, 2021. doi: 10.1145/3447555. 3464860. https://dl.acm.org/doi/10.1145/3447555.3464860. [16] Russell Lee, Bo Sun, Mohammad Hajiesmaili, and John C. S. Lui. Online Search with Predictions: Pareto- optimal Algorithm and its Applications in Energy Markets. In Proceedings of the 15th ACM International Conference on Future Energy Systems (e-Energy ’24). ACM, 2024. doi: 10.1145/3632775.3639590. https://dl.acm.org/doi/10.1145/3632775.3639590. [17] Alexander Wei and Fred Zhang. Optimal Robustness-Consistency Trade-offs for Learning- Augmented Online Algorithms. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pages 21219–21229, 2020. https://proceedings.neurips.cc/paper/2020/hash/ 5bd844f11fa520d54fa5edec06ea2507-Abstract.html. [18] Alex Elenter, Spyros Angelopoulos, Christoph Dürr, and Yanni Lefki. Overcoming Brittleness in Pareto- Optimal Learning Augmented Algorithms. In Advances in Neural Information Processing Systems (NeurIPS 2024), 2024. https://proceedings.neurips.cc/paper_files/paper/2024/ hash/11c6625b0481a7d5625831369f6b7c82-Abstract-Conference.html. [19] Ziyad Benomar, Lorenzo Croissant, Vianney Perchet, and Spyros Angelopoulos. Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search. In Proceedings of the 2025 International Conference on Machine Learning (ICML 2025), 2025. https://icml.cc/virtual/ 2025/poster/44853. [20] Ziyad Benomar and Vianney Perchet. On Tradeoffs in Learning-Augmented Algorithms. In Proceedings of the 28th International Conference on Artificial Intelligence and Statistics, pages 802–810. PMLR, 2025. https://proceedings.mlr.press/v258/benomar25a.html. [21] Tongxin Li, Ruixiao Yang, Guannan Qu, Guanya Shi, Chenkai Yu, Adam Wierman, and Steven Low. Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 6(1):1–35, 2022. doi: 10.1145/3508038. https://dl.acm.org/doi/10.1145/3508038. [22] Tongxin Li, Hao Liu, and Yisong Yue. Disentangling Linear Quadratic Control with Untrusted ML Predic- tions. In Proceedings of the 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2024), pages 86860–86898, 2024. https://proceedings.neurips.cc/paper_files/ paper/2024/hash/9dff3b83d463fab213941bfee23341ba-Abstract-Conference. html. 21 [23] Noah Golowich and Ankur Moitra. Can Q-learning be Improved with Advice? In Proceedings of the Thirty-Fifth Conference on Learning Theory, volume 178, pages 4548–4619. PMLR, 2022. https: //proceedings.mlr.press/v178/golowich22a.html. [24] Tongxin Li, Yiheng Lin, Shaolei Ren, and Adam Wierman. Beyond Black-Box Advice: Learning- Augmented Algorithms for MDPs with Q-Value Predictions. In Proceedings of the 37th Conference on Neu- ral Information Processing Systems (NeurIPS 2023), 2023. https://proceedings.neurips.cc/ paper/2023/file/8e806d3c56ed5f1dab85d601e13cbe38-Paper-Conference.pdf. [25] Dhruv Rohatgi. Near-Optimal Bounds for Online Caching with Machine Learned Advice. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA 2020), pages 1834–1845, 2020. doi: 10.5555/3381089.3381201. https://dl.acm.org/doi/10.5555/3381089.3381201. [26] Sungjin Im, Ravi Kumar, Aditya Petety, and Manish Purohit. Parsimonious Learning-Augmented Caching. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 9588–9601. PMLR, 2022. https://proceedings.mlr.press/v162/im22a.html. [27] Alexander Wei. Better and Simpler Learning-Augmented Online Caching. In Approximation, Ran- domization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020), volume 176 of Leibniz International Proceedings in Informatics (LIPIcs), pages 60:1–60:17. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2020. doi: 10.4230/LIPIcs.APPROX/RANDOM.2020.60. https://drops.dagstuhl.de/opus/volltexte/2020/12673. [28] Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, and Manish Purohit. Online Knapsack with Fre- quency Predictions. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), page 161c5c5ad51fcc884157890511b3c8b0, 2021. https://proceedings.neurips.cc/paper/ 2021/hash/161c5c5ad51fcc884157890511b3c8b0-Abstract.html. [29] Mohammadreza Daneshvaramoli, Helia Karisani, Adam Lechowicz, Bo Sun, Cameron Musco, and Mohammad Hajiesmaili. Competitive Algorithms for Online Knapsack with Succinct Predictions. arXiv preprint arXiv:2406.18752, 2024. https://arxiv.org/abs/2406.18752. [30] Adam Lechowicz, Rik Sengupta, Bo Sun, Shahin Kamali, and Mohammad Hajiesmaili. Time Fairness in Online Knapsack Problems. In Proceedings of the International Conference on Learning Representations (ICLR 2024), 2024. https://openreview.net/forum?id=9kG7TwgLYu. [31] Antonios Antoniadis, Themis Gouleakis, Pieter Kleer, and Pavel Kolev. Secretary and Online Match- ing Problems with Machine Learned Advice. In Advances in Neural Information Processing Sys- tems 33 (NeurIPS 2020), 2020. https://proceedings.neurips.cc/paper/2020/hash/ 5a378f8490c8d6af8647a753812f6e31-Abstract.html. [32] Sébastien Bubeck, Yassine Engel, Yin Tat Lee, Yifeng Li, and Aleksandar Nikolov. Online Multiserver Convex Chasing and Optimization. In Proceedings of the 2021 Symposium on Discrete Algorithms (SODA), pages 1–40, 2021. https://dl.acm.org/doi/10.5555/3458064.3458189. [33] Nicolas Christianson, Junxuan Shen, and Adam Wierman. Optimal Robustness-Consistency Tradeoffs for Learning-Augmented Metrical Task Systems. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 206, pages 4223–4254. PMLR, 2023. https: //proceedings.mlr.press/v206/christianson23a.html. [34] Nicolas Christianson, Tinashe Handina, and Adam Wierman. Chasing Convex Bodies and Functions with Black-Box Advice. In Proceedings of the Conference on Learning Theory, volume 178, pages 867–908. PMLR, 2022. https://proceedings.mlr.press/v178/christianson22a.html. 22 [35] Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan, Zili Meng, and Mohammad Alizadeh. Learning Scheduling Algorithms for Data Processing Clusters. In Proceedings of the ACM SIGCOMM 2019 Conference. ACM, 2019. https://dl.acm.org/doi/10.1145/3341302.3342080. [36] Bo Sun, Jerry Huang, Nicolas Christianson, Mohammad Hajiesmaili, Adam Wierman, and Raouf Boutaba. Online Algorithms with Uncertainty-Quantified Predictions. In Proceedings of the 41st International Conference on Machine Learning (ICML 2024), volume 235, pages 47056–47077. PMLR, 2024. https: //proceedings.mlr.press/v235/sun24f.html. [37] Mohammad Mahdian, Hamid Nazerzadeh, and Amin Saberi. Online Optimization with Uncertain Information. ACM Transactions on Algorithms, 8(1):1–29, 2012. doi: 10.1145/2071379.2071381. https://dl.acm.org/doi/10.1145/2071379.2071381. [38] Anna R. Karlin, Mark S. Manasse, Larry Rudolph, and Daniel D. Sleator. Competitive Snoopy Caching. Algorithmica, 3(1):77–119, 1988. doi: 10.1007/BF01762111. https://dl.acm.org/doi/10. 1007/BF01762111. [39] Ran El-Yaniv, Amos Fiat, Richard M. Karp, and Gabriel Turpin. Optimal Search and One-Way Trading Online Algorithms. Algorithmica, 30(1):101–139, 2001. doi: 10.1007/s00453-001-0003-0. https: //link.springer.com/article/10.1007/s00453-001-0003-0. [40] Rajeev Motwani, Steven Phillips, and Eric Torng. Nonclairvoyant Scheduling. Theoretical Com- puter Science, 130(1):17–47, 1994. https://dl.acm.org/doi/10.1016/0304-3975%2894% 2990151-1. [41] Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Controlling Tail Risk in Online Ski-Rental. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 4247–4263. SIAM, 2024. doi: 10.1137/1.9781611977912.147. https: //epubs.siam.org/doi/10.1137/1.9781611977912.147. [42] Nicolas Christianson, Bo Sun, Steven Low, and Adam Wierman. Risk-Sensitive Online Algorithms. In The Thirty Seventh Annual Conference on Learning Theory, pages 1140–1141. PMLR, 2024. [43] D. Zhou, W. Pan, W. Wang, and T. Xie. I/O Characteristics of Smartphone Applications and Their Implications for eMMC Design. In Proceedings of the 2015 IEEE International Symposium on Workload Characterization (IISWC), pages 12–21. IEEE Computer Society, 2015. doi: 10.1109/IISWC.2015.8. https://doi.org/10.1109/IISWC.2015.8. [44] S. Irani, S. Shukla, and R. Gupta. Online strategies for dynamic power management in systems with multiple power-saving states. ACM Transactions on Embedded Computing Systems, 2(3):325–346, 2003. doi: 10.1145/860176.860180. https://dl.acm.org/doi/10.1145/860176.860180. 23 A Additional Details and Proofs for Section 2.4 This section supplements the analysis of the non-clairvoyant scheduling problem for the special case n = 2, as introduced in Section 2.3. We focus on the two-stage scheduling algorithm proposed by Wei and Zhang [17], which is known to achieve the optimal classic trade-off and is weakly-optimal-optimal under Definition 3. As detailed in Algorithm 6 in Section A.1, the algorithm proceeds in two phases based on predicted job lengths y = (y1, y2), where y1 ≤y2. In Theorem 2.1, we show that this algorithm also satisfies prediction-specific Pareto optimality under n = 2. A.1 Wei et al.’s Two-Stage Schedule Wei and Zhang [17] propose an algorithm called the two-stage schedule (see Algorithm 6) that achieves a consistency of (1 + λ) and a robustness of (1 + 1 1+6λ) under n = 2. By [17] and Definition 3, this two-stage schedule algorithm is weakly-optimal for n = 2. Algorithm 6 Two-Stage Schedule 1: At any point, if a job finishes with processing time less or more than its prediction, use round robin forever. 2: Stage 1: Round robin for at most λn · OPTy/ n 2  units of time. 3: Stage 2: Process jobs in predicted order (starting from the unfinished job with the least predicted time). A.2 Proof of Theorem 2.1 In this subsection, we further analyze the prediction-specific consistency and robustness and prove that two-stage schedule is strongly-optimal under n = 2. Proof of Theorem 2.1. Note that the jobs are ranked based on their predicted lengths; thus we have y1 ≤y2. We first prove the prediction-specific consistency and robustness under a specific set of predictions y = (y1, y2). We first consider λ ≤ y1 2y1+y2 , i.e. λ(2y1 + y2) ≤y1. Regarding the consistency, assume that x1 = y1, x2 = y2. In stage 1, the algorithm runs round-robin for 2λ · (2y1 + y2) time. Since λ · (2y1 + y2) ≤y1, job 1 cannot finish in stage 1. Therefore, the completion time of job 1 is 2λ(2y1 + y2) + y1 −λ(2y1 + y2) = y1 + λ(2y1 + y2), and that of job 2 is y1 + λ(2y1 + y2) + y2 −λ(2y1 + y2) = y1 + y2, thus, ALG = 2y1 + y2 + λ(2y1 + y2), and OPT = 2y1 + y2, yeilding a consistency of 1 + λ. Regarding the robustness, we consider an adversarial attack x1, x2. Let δ denote an infinitesimal quantity. Case I: If x1 ≤λ(2y1 + y2) or x2 ≤λ(2y1 + y2) , i.e. some incorrect prediction is found in stage 1. In this case, the algorithm runs round-robin from beginning to end, resulting in a robustness of at most 4/3. Case II: If λ(2y1 + y2) < x1 ≤y1 , i.e. job 1 finishes no later than its prediction. In this case, the algorithm runs round-robin for 2λ(2y1 + y2) time and processes job 1 until completion, and then turns to job 2. Thus, ALG = 2  2λ(2y1 + y2) + x1 −λ(2y1 + y2)  + (x2 −λ(2y1 + y2)) = 2x1 + x2 + λ(2y1 + y2). Case II(a): λ(2y1 + y2) < x2 ≤x1 In this case, OPT = 2x2 + x1. This yields a robustness of 1 + y1 y1+2λ(2y1+y2) ≥4/3, which is attained when x1 = y1 and x2 = λ(2y1 + y2) + δ. 24 Case II(b): x2 > x1 In this case, OPT = 2x1 + x2. This results in a robustness of 1 + λ(2y1+y2) 3y1 ≤4/3, which is achieved when x1 = y1 and x2 = y1 + δ. Case III: If x1 > y1 , i.e. job 1 finishes later than its prediction. In this case, the algorithm first runs round-robin for 2λ · (2y1 + y2) time, then processes job 1 for y1 −λ(2y1 + y2) time, and finally runs round-robin till the end. Case III(a) λ(2y1 + y2) < x2 < x1 + λ(2y1 + y2) −y1 , i.e. job 1 finishes later than job 2. In this case, ALG = x1 + 3x2 −λ(2y1 + y2) + y1 and OPT = 2x2 + x1. This yields a robustness of 1 + y1 y1+2λ(2y1+y2) ≥4/3, which is achieved when x1 = y1 + δ and x2 = λ(2y1 + y2) + δ. Case III(b) x2 ≥x1 + λ(2y1 + y2) −y1 , i.e. job 1 finishes no later than job 2. Note that in this case, we have ALG = 3x1 + x2 + λ(2y1 + y2) −y1. First, if x1 < x2, OPT = 2x1 + x2. This generates a robustness of 1 + λ(2y1+y2) 3y1 ≤4/3, which is achieved when x1 = y1 + δ and x2 = y1 + 2δ. Otherwise, if x1 + λ(2y1 + y2) −y1 < x2 < x1, OPT = 2x2 + x1. This yields a robustness of 1 + y1 y1+2λ(2y1+y2) ≥4/3, which is achieved when x1 = y1 + δ and x2 = λ(2y1 + y2) + 2δ. To sum up, if λ ≤y1/(2y1 + y2), the algorithm is (1 + λ)-consistent and (1 + y1 y1+2λ(2y1+y2))-robust. Otherwise, if λ > y1/(2y1 + y2), the algorithm runs round-robin forever, and is (1 + y1 2y1+y2 )-consistent and (4/3)-robust. In conclusion, given prediction y = (y1, y2), the prediction-specific consistency and robustness are βy = 1 + min{ y1 2y1 + y2 , λ}, γy = 1 + max{1/3, y1 y1 + 2λ(2y1 + y2)}. Since the two-stage schedule is already proven to be weakly-optimal, we then prove that it is strongly-optimal by demonstrating the Pareto optimality of their prediction-specific consistency and robustness. When λ ≤y1/(2y1 + y2), the algorithm is (1 + λ)-consistent and  1 + y1 y1+2λ(2y1+y2)  -robust. Consider algorithm Aω that completes r > λ(2y1 + y2) amount of work for job 2 when it finishes job 1 in the case where predictions are accurate. Then, it follows that ALG = 2 · (y1 + r) + y2 −r = 2y1 + y2 + r OPT = 2y1 + y2, which yields a competitive ratio of 1 + r/(2y1 + y2) > 1 + λ. Therefore, any (1 + λ)-consistent algorithm processes at most λ(2y1 +y2) amount of work for job 2 when it finishes job 1 or finds any incorrect prediction of job 1. Then, we consider an incorrect prediction x1 = y1, x2 = λ(2y1 + y2) + δ. Consider a (1 + λ)-consistent algorithm B that completes r ≤λ(2y1 + y2) amount of work when it finishes job 1. Then, upon x1 = y1 and x2 = r + δ, ALG = 2(y1 + r) + (r + δ −r) = 2y1 + 2r + δ and OPT = 2r + y1. This leads to a robustness no better than min r≤λ(2y1+y2) 2y1 + 2r y1 + 2r = 1 + y1 y1 + 2λ(2y1 + y2) for all r ≤λ(2y1 + y2). When λ > y1/(2y1 + y2), the algorithm, equivalent to round-robin (RR), is (1 + y1/(2y1 + y2))-consistent and 4/3-robust. Note that RR is the only algorithm that achieves 4/3 competitive ratio. Therefore, under prediction y = (y1, y2), no other algorithm achieves a robustness equal to or less than 4/3. By Definition 4, the two-stage schedule is strongly-optimal. 25 B Additional Details for Section 3 In this section, we provide a proof of Proposition 1 and discuss the scenario in which a weakly-optimal algorithm with robustness γ is unavailable. B.1 Proof of Proposition 1 Proof of Proposition 1. Let {β∗ y, ω} and {γ∗ y, ω∗} denote the optimal solution to Problem 1 and Problem 2, respectively. Since γ ≥α∗, {γ, ω} is always a feasible solution to Problem 2. Thus, γ∗ y ≤γ, where γ∗ y is the optimal objective value to Problem 2. Since {γ∗ y, ω∗} is a feasible solution to Problem 2, we have ALG(Aω∗, I, y) ≤γ∗ y · OPT(I), ∀I ∈I, ALG(Aω∗, I, y) ≤β∗ y · OPT(I), ∀I ∈Iy. By Definition 2, Aω∗is β∗ y-consistent and γ∗ y-robust with respect to y. Consider ω′ ∈Ω. Let β′ y and γ′ y denote the prediction-specific consistency and robustness of Aω′ with respect to y. We consider two cases. Case I: β′ y > β∗ y If β′ y > β∗ y, then (β′ y, γ′ y) produces no Pareto improvement over (β∗ y, γ∗ y). Case II: β′ y ≤β∗ y . If β′ y ≤β∗ y, since Aω′ is β′ y-consistent and γ′ y-robust with respect to y, by Definition 2, ALG(Aω′, I, y) ≤β′ y · OPT(I), ∀I ∈Iy, (11a) ALG(Aω′, I, y) ≤γ′ y · OPT(I), ∀I ∈I. (11b) Since β′ y ≤β∗ y, by Inequality (11a), we further have ALG(Aω′, I, y) ≤β∗ y · OPT(I), ∀I ∈Iy. (12) By Inequalities (11b) and (12), {γ′ y, ω′} is a feasible solution to Problem 2, thus we have γ∗ y ≤γ′ y, where γ∗ y is the optimal objective value to Problem 2. Case II (a): γ′ y > γ∗ y If γ′ y > γ∗ y, then (β′ y, γ′ y) produces no Pareto improvement over (β∗ y, γ∗ y). Case II (b): γ′ y = γ∗ y If γ′ y = γ∗ y, since γ∗ y ≤γ, we have γ′ y ≤γ. By Inequality (11b), we further have ALG(Aω′, I, y) ≤γ · OPT(I), ∀I ∈I. (13) By Inequalities (11a) and (13), {β′ y, ω′} is a feasible solution to Problem 1, thus we have β∗ y ≤β′ y, where β∗ y is the optimal objective value to problem 1. Because β′ y ≤β∗ y and β∗ y ≤β′ y, we have β′ y = β∗ y. Since γ′ y = γ∗ y and β′ y = β∗ y, (β′ y, γ′ y) produces no Pareto improvement over (β∗ y, γ∗ y). Consequently, (β∗ y, γ∗ y) are jointly Pareto optimal in terms of the prediction-specific consistency and robust- ness with respect to y. Since γ∗ y ≤γ for all y ∈F, we have supy∈F γ∗ y ≤γ. By Equation (1) and Equation (2), Algorithm 1 is γ-robust. Now, there exists a weakly-optimal algorithm Aω with robustness γ, we assume the consistency of Aω is β and the prediction-specific consistency of Aω under prediction y is βy. Since Aω is βy-consistent with respect to y, by Definition 2, ALG(Aω, I, y) ≤βy · OPT(I), ∀I ∈Iy. (14) Since Aω is γ-robust, it is also γ-robust with respect to y, thus we have ALG(Aω, I, y) ≤γ · OPT(I), ∀I ∈Iy. (15) 26 By Inequalities (14) and (15), {βy, ω} is a feasible solution to Problem 1. Therefore, β∗ y ≤βy, where β∗ y is the optimal objective value to Problem 1. Consequently, sup y∈F β∗ y ≤sup y∈F βy = β. By Equation (1) and Equation (2), Algorithm 1 is β-consistent. By weak optimality of Aω, any γ-robust algorithm is at least β-consistent, and any β-consistent algorithm is at least γ-robust. Since Algorithm 1 is β-consistent and γ-robust, by Definition 3, it is weakly-optimal. Moreover, ∀y ∈F, (β∗ y, γ∗ y) are jointly Pareto optimal in terms of the prediction-specific consistency and robustness with respect to y. Thus, Algorithm 1 is strongly-optimal by Definition 4. B.2 Addressing the Absence of a Weakly-Optimal Algorithm with γ Robustness. In general, even if (β′, γ) is not on the Pareto front for any β′ ≥1, a process that first determines a tight consistency bound β = supy′∈F P1(γ, y′), then determines a tight robustness bound γ = supy′∈F P2(β, y′) can generate a tight Pareto-optimal consistency-robustness tradeoff (β, γ) so that γ becomes a valid input of Algorithm 1. Let β, γ denote supy′∈F P1(γ, y′), supy′∈F P2(β, y′), respectively. Define y1 := arg maxy′∈F P1(γ, y′), y2 := arg maxy′∈F P2(β, y′). Since β = supy′∈F P1(γ, y′), we have ∀y′ ∈F, P1(γ, y′) ≤β. Therefore, ∀y′ ∈F, ∃ω′ ∈Ω, s.t. {γ, ω′} is a feasible solution to P2(β, y′), i.e., ∀y′ ∈F, γ ≥P2(β, y′), where P2(β, y′) is the optimal objective value. Consequently, γ ≥sup y′∈F P2(β, y′) = γ. (16) We can use similar techniques to prove β ≥sup y′∈F P1(γ, y′). Since β = supy′∈F P1(γ, y′), any γ-robust algorithm is at least β-consistent. We prove this by contradiction, assuming there exists a γ-robust algorithm A that has consistency βA < β. Then, under the prediction y1, A achieves a prediction-specific consistency βA y1 ≤βA < β = P1(γ, y1). Moreover, A is γ-robust under y1, thus satisfying Constraint 3b. Therefore, P1(γ, y1) is not the optimal objective value, since βA y1 < P1(γ, y1), yielding a contradiction. Note that by Inequality (16), γ ≥γ; thus we can further conclude that any γ-robust algorithm is at least β-consistent. Similarly, since γ = supy′∈F P2(β, y′), any β-consistent algorithm is at least γ-robust. Therefore, (β, γ) is a Pareto-optimal consistency-robustness tradeoff. 27 C Proofs for Section 4 We provide the proofs of Theorem 4.1 and 4.2 in the following. C.1 Proof of Theorem 4.1 Proof of Theorem 4.1. We begin by analyzing the prediction-specific consistency and robustness of KD, consid- ering two distinct cases: y < b and y ≥b. Case I: If y < b , then the algorithm buys on day ⌈b/λ⌉. To obtain the prediction-specific consistency, we assume that the prediction is accurate, i.e. x = y. Since the algorithm postpones the purchase beyond the predicted day, ALG = OPT = y, thus βy = 1. To analyze the prediction-specific robustness, we consider incorrect predictions. Since the worst-case attack arises when x = ⌈b/λ⌉, we have ALG = ⌈b/λ⌉−1 + b and OPT = b. Thus, γy = ALG/OPT < (b/λ + b)/b = 1 + 1/λ. Case II: If y ≥b , then the algorithm buys on day ⌈λb⌉. To obtain the prediction-specific consistency, we assume x = y. ALG = ⌈λb⌉−1 + b and OPT = b, yielding βy = ALG/OPT < 1 + λ. To obtain the prediction-specific robustness, we consider incorrect predictions. Observe that the worst-case attack occurs when x = ⌈λb⌉, we have ALG = ⌈λb⌉−1 + b, and OPT = ⌈λb⌉. Hence, γy = ALG/OPT < (λb + b)/⌈λb⌉≤(λb + b)/(λb) = 1 + 1/λ. We now prove that KD is not strongly-optimal. Consider a simple algorithm that always buys on day b. It is straightforward to verify that buy-to-rent algorithm is 1-consistent and (2 −1/b)-robust under y < b. Recall that under y < b, KD’s prediction-specific consistency and robustness are 1 and ⌈b/λ⌉−1+b b , respectively. Since ⌈b/λ⌉−1 + b b > b −1 + b b = 2 −1/b for all λ ∈(0, 1), by Definition 4, KD is not strong-optimal. C.2 Proof of Theorem 4.2 Proof of Theorem 4.2. Denote the prediction-specific consistency and robustness of Algorithm 2 with respect to y as βy and γy. We start by analyzing the prediction-specific consistency and robustness of Algorithm 2 by considering the following three different cases. Case I: y < b . In this case, Algorithm 2 purchases on day b. It is straightforward that the algorithm is 1-consistent and (2 −1/b)-robust with respect to y. Case II: y ∈[b, min{b(λ + 1) −1, (b −1)/λ}] . In this case, Algorithm 2 purchases on day y+1. To prove the prediction-specific consistency, we assume x = y. ALG = y and OPT = b, yielding βy = ALG/OPT = y/b. To prove the prediction-specific robustness, we consider inaccurate predictions. Observe that x = y + 1 brings the best attack to the algorithm, in which case ALG = y +b, OPT = b, leading to γy = ALG/OPT = 1+y/b. Case III: y > min{b(λ + 1) −1, (b −1)/λ} . In this case, Algorithm 2 buys on day ⌈λb⌉. To obtain the prediction-specific consistency, we assume x = y. ALG = ⌈λb⌉−1 + b, OPT = b, βy = ALG/OPT < 1 + λ. To get the prediction-specific robustness, we consider incorrect predictions. Observe that the worst-case attack occurs at x = ⌈λb⌉, in which case ALG = ⌈λb⌉−1 + b, OPT = ⌈λb⌉. Thus, γy = ALG/OPT < 1 + 1/λ. Now, we prove that Algorithm 2 is strongly-optimal. By considering the worst-case prediction y over the βy and γy in Theorem 4.2, Algorithm 2 is (1 + λ)- consistent and (1+1/λ)-robust, where the worst-case prediction occurs when y > min{b(λ+1)−1, (b−1)/λ}. Based on the lower bound by Wei and Zhang [17], Algorithm 2 is weakly-optimal, as b →∞. 28 We consider the following three cases to prove the Pareto optimality of (βy, γy) when b →∞. Case I: y < b . Algorithm 2 is 1-consistent and (2 −1/b)-robust, which is already optimal. This is because consistency can be no less than 1, and (2−1/b) is the best competitive ratio α achievable in competitive analysis. Case II: y ∈[b, min{b(λ + 1) −1, (b −1)/λ}] . Algorithm 2, which purchases on day y + 1, is (y/b)- consistent and (1 + y/b)-robust with respect to y. Since b ≤y ≤min{b(λ + 1) −1, (b −1)/λ}, we have (i) λb ≥y + 1 −b, (ii) λb ≤b2−b y , which together implies (b2 −b)/y ≥y + 1 −b. (17) Consider another algorithm P that buys on day p (̸= y + 1) with prediction-specific consistency and robustness βP y and γP y . Case II (a): p < (b2 −b)/y . In this case, it holds that γP y = p −1 + b p > [(b2 −b)/y] −1 + b [(b2 −b)/y] = 1 + y/b = γy. Case II (b) : p = (b2 −b)/y (This case applies only if b2−b y ∈N+). In this case, γP y = p−1+b p = 1+y/b = γy. By Inequality (17), we have the following holds: βP y = p −1 + b b = [(b2 −b)/y] + (b −1) b ≥[y + 1 −b] + (b −1) b = y/b = βy. Case II (c) : (b2 −b)/y < p < y + 1 . In this case, by Inequality (17), it follows that βP y = p −1 + b b > [(b2 −b)/y] −1 + b b ≥[y + 1 −b] −1 + b b = y/b = βy. Case II (d) : p > y + 1 . In this case, we see that γP y = p −1 + b b > (y + 1) −1 + b b = 1 + y/b = γy. In either case, (βP y , γP y ) provides no Pareto improvement over (βy, γy). Case III: y > min{b(λ + 1) −1, (b −1)/λ} . Algorithm 2, purchasing on day ⌈λb⌉, achieves a consistency of ⌈λb⌉−1+b b and a robustness of ⌈λb⌉−1+b ⌈λb⌉ . Consider another algorithm Q that buys on day q (̸= ⌈λb⌉) with prediction-specific consistency and robustness βQ y and γQ y . Case III (a): q < ⌈λb⌉. In this case, γQ y = q−1+b q > ⌈λb⌉−1+b ⌈λb⌉ = γy. Case III (b): ⌈λb⌉< q ≤y . In this case, βQ y = q−1+b b > ⌈λb⌉−1+b b = βy. Case III (c): q > y . If y > b(λ+1)−1, we have λb < y +1−b. Note that limb→+∞ ⌈λb⌉ b = λ. Therefore, βy = ⌈λb⌉−1 + b b < (y + 1 −b) −1 + b b = y b = βQ y as b →∞. If y > (b −1)/λ, we have ⌈λb⌉≥λb > b2−b y . Therefore, γy = ⌈λb⌉−1 + b ⌈λb⌉ < [(b2 −b)/y] −1 + b [(b2 −b)/y] = y + b b ≤γQ y . In either case, (βP y , γP y ) yields no Pareto improvement over (βy, γy). By Definition 4, Algorithm 2 is strongly-optimal as b →∞. 29 D Additional Details for Section 5 In this section, we provide some supplementary details for the construction of explicit algorithms for randomized ski rental. The design of the explicit strongly-optimal algorithm is largely inspired by an equalizing property, which plays a central role in the classic randomized ski rental problem [38]. D.1 Equalizing Distributions We formally define equalizing distributions as follows. Definition 5. Given integers m and n with 1 ≤m ≤n ≤b, a distribution πeq[m, n] is an equalizing distribution in [m, n] if it is only supported on {m, m + 1, . . . , n}, and R(πeq[m, n], x) = α(πeq[m, n]) for all x ∈{m, m + 1, . . . , n}, where α(π) is the competitive ratio of distribution π. Furthermore, the following theorem gives the explicit form of the equalizing distribution. Theorem D.1. Let πeq[m, n] be an equalizing distribution in [m, n]. Then it satisfies the following recursive formula: πeq i [m, n] =           1 + m+b−1 m ·  ( b b−1)n−m −1 −1 for i = m; πeq m[m, n] · m+b−1 m(b−1) ·  b b−1 i−m−1 , for i ∈{m + 1, . . . , n}; 0, otherwise. (18) Note that πeq[1, b] is exactly the same as Karlin’s distribution [38], while πeq[1, ⌊λb⌋] coincides with the distribution used in Kumar’s randomized algorithm [2] when y ≥b. Theorem D.2. For any y < b, the equalizing distribution on [y + 1, b], denoted πeq[y + 1, b], is 1-consistent and minimizes robustness among all 1-consistent distributions under y. The proof of Theorem D.1 and D.2 is detailed in Appendix E.2. D.2 OPERATION A and OPERATION B We now propose two types of operations (OPERATION A and OPERATION B) that transform an equalizing distribution into a distribution with the desired robustness level on the Pareto frontier. OPERATION A: Consistency Boosting. OPERATION A (see Algorithm 7) is employed when y ≥b. It starts with an equalizing distribution πeq[1, n], where the precise choice of n depends on the desired robustness level γ. The core idea of OPERATION A is to enhance the prediction-specific consistency βy of the distribution while not sacrificing the prediction-specific robustness γy. Remark 1. Reaching r = 1 in Step 0 only occurs when y = b; otherwise, the condition r ≤y + 1 −b will be satisfied earlier. In the special case where y = b, any two-point distribution over {1, b + 1} is trivially 1-consistent. In this case, OPERATION A returns the distribution among them that achieves the lowest possible robustness. Remark 2. When y ≥b, the prediction-specific consistency is βy = Py i=1 πi·(b+i−1) b + P∞ i=y+1 πi·y b . As a result, shifting probability mass from πr to πy+1 can potentially improve prediction-specific consistency only in cases where b + r −1 > y. This explains why, when r ≤y + 1 −b, we immediately terminate the shifting process in Step 0. 30 Algorithm 7 OPERATION A: CONSISTENCY BOOSTING 1: Input: Initial distribution πeq[1, n] and prediction y ≥b; 2: Initialization: π ←πeq[1, n], iterative index r ←n, initial robustness γ ←γy(π); 3: Step 0: 4: // Check whether the current distribution is a two-point distribution. 5: If r = 1, then update π1 ←1/b and πb+1 ←(b −1)/b, go to Step 4; 6: // Check if further shifting enhances consistency. 7: If r ≤y + 1 −b, then go to Step 4; 8: Step 1: 9: // Shifting probability mass from πr to πy+1. 10: Update πy+1 ←πy+1 + πr and πr ←0; 11: Step 2: 12: // Determine if more shifting is necessary. 13: If R(π, y + 1) < γ, then set r ←r −1, go to Step 0; 14: Step 3: 15: //Determine the maximum amount of probability mass that can be shifted. 16: Set π′ ←(π1, . . . , πr−1, π′ r, 0, . . . , 0, π′ y+1); 17: Solve π′ r, π′ y+1 through R(π′, y + 1) = γ and π′ r + π′ y+1 = 1 −Pr−1 i=1 πi; 18: Update πr, πy+1 ←π′ r, π′ y+1; 19: Step 4: Return π. OPERATION B: Robustness Seeking. OPERATION B (see Algorithm 8) is employed when y < b. OPERATION B (see Algorithm 8) initializes from the most robust 1-consistent distribution πeq[y + 1, b] given the prediction y, and incrementally sacrifices consistency to gain robustness. This process continues until a distribution is obtained that achieves Pareto-optimality under the consistency-robustness trade-off, subject to a desired robustness level. Algorithm 8 OPERATION B: ROBUSTNESS SEEKING 1: Input: Initial distribution πeq[y + 1, b](y < b) and desired robustness γ; 2: Initialization: π ←πeq[y + 1, b], iterative index r ←1; 3: Step 1: 4: // Establish an equalized distribution over {1, . . . , r, y + 1, . . . , b}. 5: Set π′ ←(π′ 1, . . . , π′ r, 0, . . . , 0, π′ y+1, . . . , π′ b); 6: Solve (π′, γ′) through Pb i=1 π′ i = 1 and R(π′, x) = γ′, x ∈{1, . . . , r, y + 1, . . . , b}; 7: Step 2: 8: // Check if the robustness of the newly constructed distribution meets the desired level. 9: If γ′ > γ, then set r ←r + 1, and go to Step 1; 10: Step 3: 11: // After determining the appropriate value of r , 12: // Prioritize assigning probability mass to {1, . . . , r −1, y + 1, . . . , b}. 13: Solve π′ through Pb i=1 π′ i = 1 and R(π′, x) = γ, x ∈{1, . . . , r −1, y + 1, . . . , b}; 14: Update π ←π′; 15: Step 4: Return π. 31 D.3 PRSR: Prediction-specific Randomized Ski Rental With these foundations in place, we proceed to introduce an algorithm for randomized ski rental, referred as PRSR(see Algorithm 9). Algorithm 9 PRSR: PREDICTION-SPECIFIC RANDOMIZED SKI RENTAL 1: Input: γ ∈[eb/(eb −1), b −2) 2: // Determine the smallest n such that πeq[1, n] is γ-robust. 3: Determine n ←⌈logb/(b−1)(1 + 1/(γ −1))⌉; 4: // Determine the adjusted robustness γ′, defined as that of πeq[1, n]. 5: Determine γ′ ←[( b b−1)n −1]−1 + 1; 6: If y ≥b then 7: π ←OPERATION A(πeq[1, n], y); (see Algorithm 7) 8: Else if y < b then 9: // Ensure that the adjusted robustness does not exceed the upper bound γν := γ(πeq[y + 1, b]). 10: Determine γν ←γ(πeq[y + 1, b]) and determine γ′′ ←min{γν, γ′}; 11: π ←OPERATION B(πeq[y + 1, b], γ′′); (see Algorithm 8) 12: Choose i randomly according to the distribution π; 13: Buy the skis at the start of day i. Remark 3. Consider the equalizing distribution πeq[1, n] with n ≤b. Let ˆγ denote the robustness of πeq[1, n]. Note that ˆγ = R(πeq[1, n], 1) = (b −1) · πeq 1 [1, n] + 1. (19) Moreover, by Theorem D.1, πeq 1 [1, n] · 1 −( b b−1)n 1 −( b b−1) ! = 1. (20) By Equation (19) and Equation (20), n = logb/(b−1)(1 + 1/(ˆγ −1)). Therefore, choosing n = ⌈logb/(b−1)(1 + 1/(1 −γ))⌉ensures that πeq[1, n] is γ-robust. Remark 4. The requirement that γ < b −2 is primarily enforced to guarantee n > 1. In Section E.5, we formally verify the prediction-specific Pareto optimality and strong optimality of PRSR. 32 E Proofs for Section 5 and Appendix D This section provides proofs for Section 5 and Appendix D. E.1 Proofs of Useful Lemmas Lemma 1. Consider the ski rental randomized problem with b > 1, for any distribution π supported on N+, there exists another distribution π′ supported on a finite set [b] such that α(π′) ≤α(π). Specifically, when the best attack for π only occurs at x > b, we have α(π′) < α(π). Furthermore, in the learning-augmented setting with prediction y, for any distribution π supported on N+, there exists another distribution π′ supported on the finite set [b] ∪{y + 1} such that βy(π′) ≤βy(π) and γy(π′) ≤γy(π). Proof. We first consider the traditional competitive analysis setting. Let π be a distribution with P∞ i=b+1 πi ̸= 0. Consider another distribution π′ with π′ i = πi for all i ∈[b −1], and π′ b = P∞ i=b πi. Let r be the maximal index on which π has probability mass, i.e. r = max{i ∈N | πi ̸= 0}. Let α(π), α(π′) denote the competitive ratio of a randomized algorithm that uses π and π′, respectively. Case I: x < b . By the definition of R(π, x) in Section 5, R(π, x) = R(π′, x). Case II: x ≥b . In this range, the worst-case attack against π occurs at x = r, while the worst-case attack against π′ occurs at x = b. According to the definition of R(π, x), we have R(π, r) > R(π′, b). Note that α(π) = supx∈N+ R(π, x) and α(π′) = supx∈N+ R(π′, x). We have α(π) ≥α(π′). Specifically, when the best attack for π occurs at x > b, we have α(π) > α(π′). In the learning-augmented setting, we divide our discussion into two parts: y < b and y ≥b. Case I: y < b . In this case, [b] ∪{y + 1} = [b]. Let π′ be the distribution after transferring all probability mass beyond b of π to b (i.e., π′ i = πi, ∀i ∈[b −1] and π′ b = P∞ i=b πi). By Lemma 1, γy(π′) ≤γy(π). By Equation (6), we have βy(π′) = βy(π). Case II: y ≥b . Construct another distribution π′ by transferring all the probability mass of π on the {b + 1, . . . , y} to b, and all the mass beyond y + 1 to y + 1 (i.e., π′ i = πi, ∀i ∈[b −1]; π′ b = Py i=b πi and π′ y+1 = P∞ i=y+1 πi). By a simple verification, we obtain that βy(π′) ≤βy(π) and γ(π′) ≤γ(π). Lemma 2. Consider a randomized ski rental problem with b > 1. Consider a probability distribution π over N+. If R(π, x) = α(π) for all x ∈{m, m + 1, . . . , n}, then we have πi+1 = b b −1 · πi, ∀i ∈{m + 1, . . . , n −1}. Proof. By the definition of R(π, x) in Section 5, when x ≤b: R(π, x) = Px i=1 πi(x −1 + b) + P∞ i=x+1 πi · x x . Consider the following equations: R(π, i) = α(π), m ≤i ≤m + 2 (21) Subtracting both sides of (m + 1)×Equation (21) with i = m + 1 by m×Equation (21) with i = m, b · πm+1 + ∞ X i=m+2 πi = α(π). (22) 33 Subtracting both sides of (m + 2)× Equation (21) with i = m + 2 by (m + 1)× Equation (21) with i = m + 1, b · πm+2 + ∞ X i=m+3 πi = α(π). (23) Finally, subtracting both sides of Equation (23) by Equation (22) yields πm+2 = b b−1πm+1. Similarly, we can show that πi+1 = b b −1πi, ∀i ∈{m + 2, . . . , n −1}. E.2 Proof of Theorem D.1 and D.2 Proof of Theorem D.1. By Definition 5, πeq[m, n] only has positive support over {m, . . . , n}. Consider x = m and x = m + 1, (m −1 + b) · πeq m[m, n] m + (1 −πeq m[m, n]) = α(π). (24) (m −1 + b)πeq m[m, n] + (m + b)πeq m+1[m, n]] (m + 1) + (1 −πeq m[m, n] −πeq m+1[m, n]) = α(π). (25) Equating the left-hand sides of Equation (24) and Equation (25) yields: πeq m+1[m, n] = m(b −1) m + b −1πeq m[m, n]. (26) By Lemma 2, we have πeq i+1[m, n] = b b −1 · πeq i [m, n], ∀i ∈{m + 1, . . . , n −1}. (27) Equation (26), Equation (27), together with the constraint Pn i=m πi = 1, imply that πeq i [m, n] =           1 + m+(b−1) m ·  ( b b−1)n−m −1 −1 for i = m; πeq m[m, n] · m+(b−1) m(b−1) ·  b b−1 i−m−1 , for i ∈{m + 1, . . . , n}; 0, otherwise. Proof of Theorem D.2. Applying Lemma 1, we first reduce the support under consideration to [b]. Furthermore, the condition of 1 consistency requires πi = 0 for all i ∈[y]. We consider the following optimization problem. min πy+1,...,πb,γy γy (Primal Problem) s.t. i X j=y+1 πj · (b + j −1) + b X j=i+1 πj · i ≤γy · i, ∀i ∈{y + 1, . . . , b}, b X i=y+1 πi = 1, πi ≥0, ∀i ∈{y + 1, . . . , b}. γy ≥0 34 Let π∗ y+1, . . . , π∗ b, γ∗ y denote the optimal solution to Primal Problem. It’s clear that γ∗ y = min π∗ y+1,...,π∗ b max i∈{y+1,...,b} R(π∗, i). (28) We claim that π∗ y+1 ̸= 0. We prove this by contradiction, assuming π∗ y+1 = 0. Let r be the minimal index with non-zero probability mass, i.e. r := min{i | π∗ i ̸= 0}. Consider another distribution π′ with π′ y+1 = ϵ, π′ r = π∗ r −ϵ and π′ i = π∗ i for all i ∈Z+ \ {y + 1, r}, where we denote ϵ := min  π∗ r, y + 1 b −1 · (γ∗ y −1 2 )  . Note that R(π′, y + 1) = (b + y) · π′ y+1 + (y + 1) · (1 −πy+1) y + 1 ≤γ∗ y −1 2 + 1 = γ∗ y + 1 2 < γ∗ y. Moreover, it follows that R(π′, i) < R(π∗, i), ∀i ∈{y + 2, . . . , b}. Therefore, max i∈{y+1,...,b} R(π′, i) < max i∈{y+1,...,b} R(π∗, i) = γ∗ y, which contradicts with Equation (28). We then claim that π∗ b ̸= 0. We prove this by contradiction, assuming π∗ b = 0. Consider another distribution π′ with π′ y+1 = π∗ y+1 −ϵ, π′ b = ϵ, and π′ i = π∗ i for all i ∈Z+ \ {y + 1, r}, where ϵ = min{π∗ y+1, γ∗ y/(2b −1)}. We verify that R(π′, i) < R(π∗, i), ∀i ∈{y + 1, . . . , b −1}. Furthermore, R(π′, b) < Pb−1 i=y+1(i −1 + b) · π∗ i + (2b −1) · ϵ b ≤(b −1) · γ∗ y + (2b −1) · ϵ b ≤γ∗ y. Consequently, max i∈{y+1,...,b} R(π′, i) < max i∈{y+1,...,b} R(π∗, i) = γ∗ y, which contradicts with Equation (28). We further claim that π∗ i ̸= 0, for all i ∈{y + 2, . . . , b −1}. We prove this by contradiction, assuming that there exists some q ∈{y + 2, . . . , b −1}, such that π∗ q = 0. Let r = min{i > q | πi ̸= 0}. Since π∗ b ̸= 0, such r is guaranteed to exist. Consider another distribution π′ with π′ y+1 = π∗ y+1 −ϵ1, π′ q = ϵ1 + ϵ2, πr = π∗ r −ϵ2, and π′ i = π∗ i for all i ∈Z+ \ {y + 1, q, r}, where we denote ϵ1 := min  π∗ y+1, r −q 2(q −y −1) · ϵ2  , ϵ2 := min  π∗ r, 2(q −y −1) [(r −q) + 2(q −y −1)](q −1 + b) · γ∗ y  . Similarly, we verify that R(π′, i) < R(π∗, i), ∀i ∈{y + 1, . . . , q −1}. Note that ϵ1 + ϵ2 ≤ γ∗ y q−1+b. We have R(π′, q) < (q −1) · γ∗ y + (q −1 + b) · (ϵ1 + ϵ2) q ≤γ∗ y. 35 Since (q −y −1)ϵ1 < 2(q −y −1)ϵ1 ≤(r −q)ϵ2, we have R(π′, i) < R(π∗, i), ∀i ∈{q + 1, . . . , b}. Therefore, max i∈{y+1,...,b} R(π′, i) < max i∈{y+1,...,b} R(π∗, i) = γ∗ y, which contradicts with Equation (28). As a result, we conclude that π∗ i ̸= 0, ∀i ∈{y + 1, . . . , b}. Now, let the dual variables be λy+1, . . . , λb, λ. The dual problem can be formulated as following: max λy+1,...,λb,λ λ (Dual Problem) s.t. x−1 X i=y+1 i · λi + (b + x −1) · b X i=x λi + λ ≤0, ∀x ∈{y + 1, . . . , b}, b X i=y+1 −i · λi ≤1, λi ≤0, ∀i ∈{y + 1, . . . , b}, λ is free. Let λ∗ y+1, . . . , λ∗ b, λ∗denote the optimal solution to Dual Problem. By the complementary slackness, π∗ x ·   x−1 X i=y+1 i · λ∗ i + (b + x −1) · b X i=x λ∗ i + λ∗  = 0, ∀x ∈{y + 1, . . . , b}; γ∗ y ·  1 + b X i=y+1 i · λ∗ i  = 0. Since π∗ i ̸= 0, ∀i ∈{y + 1, . . . , b}, we have x−1 X i=y+1 i · λ∗ i + (b + x −1) · b X i=x λ∗ i + λ∗= 0, ∀x ∈{y + 1, . . . , b}. Because γ∗ y ̸= 0, we have Pb i=y+1 −i · λ∗ i = 1. Let Λi := −i · λi for all i ∈{y + 1, . . . , b}. Then, the dual problem can be transformed into: max Λy+1,...,Λb,λ λ s.t. x−1 X i=y+1 Λi + b X i=x (b + x −1) i · Λi = λ, ∀x ∈{y + 1, . . . , b}, b X i=y+1 Λi = 1, Λi ≥0, ∀i ∈{y + 1, . . . , b}, λ is free. 36 Let Λ∗ y+1, . . . , Λ∗ b, λ∗denote the optimization problem. We claim that Λ∗ y+1 ̸= 0. Consider x = y + 1 and x = y + 2, the optimal solution should satisfy that ( b+y y+1 · Λ∗ y+1 + b+y y+2 · Λ∗ y+2 + · · · + b+y b · Λ∗ b = λ∗ Λ∗ y+1 + b+y+1 y+2 · Λ∗ y+2 + · · · + b+y+1 b · Λ∗ b = λ∗ If Λ∗ y+1 = 0, since b + y y + 2 < b + y + 1 y + 2 , . . . , b + y b < b + y + 1 b , then we have Λ∗ y+2 = · · · = Λ∗ b = 0, which contradicts with Pb i=y+1 Λ∗ i = 1. We then claim that Λ∗ i ̸= 0, for all i ∈{y + 2, . . . , b −1}. Consider x = i and x = i + 1, the optimal solution should satisfy that ( Λ∗ y+1 + · · · + Λ∗ i−1 + b+i−1 i · Λ∗ i + b+i−1 i+1 · Λ∗ i+1 + · · · + b+i−1 b · Λ∗ b = λ∗ Λ∗ y+1 + · · · + Λ∗ i−1 + Λ∗ i + b+i i+1 · Λ∗ i+1 + · · · + b+i b · Λ∗ b = λ∗ Assume Λ∗ i = 0. Since b + i −1 i + 1 < b + i i + 1, . . . , b + i −1 b < b + i b , we have Λ∗ i+1 = Λ∗ i+2 = · · · = Λ∗ b = 0. Similarly, consider x = i −1 and x = i, we have ( Λ∗ y+1 + · · · + Λ∗ i−2 + b+i−2 i−1 · Λ∗ i−1 + b+i−2 i · Λ∗ i + · · · + b+i−2 b · Λ∗ b = λ∗ Λ∗ y+1 + · · · + Λ∗ i−2 + Λ∗ i−1 + b+i−1 i · Λ∗ i + · · · + b+i−1 b · Λ∗ b = λ∗ We further have Λ∗ i−1 = 0. Repeating this process, we obtain Λ∗ y+1 = 0, which leads to a contradiction. We finally claim that Λ∗ b ̸= 0. Consider x = b −1 and x = b, ( Λ∗ y+1 + · · · + Λ∗ b−2 + 2b−2 b−1 · Λ∗ b−1 + 2b−2 b · Λ∗ b = λ∗ Λ∗ y+1 + · · · + Λ∗ b−2 + Λ∗ b−1 + 2b−1 b · Λ∗ b = λ∗ If Λ∗ b = 0, we can deduce that Λ∗ b−1 = 0, which contradicts our previous results. Therefore, we have Λ∗ i ̸= 0, ∀i ∈{y + 1, . . . , b}, which is equivalent to λ∗ i ̸= 0, ∀i ∈{y + 1, . . . , b}. Furthermore, by complementary slackness, λ∗ i ·   i X j=y+1 πj · (b + j −1) + b X j=i+1 πj · i −γy · i  = 0, ∀i ∈{y + 1, . . . , b} It follows that i X j=y+1 πj · (b + j −1) + b X j=i+1 πj · i = γy · i, ∀i ∈{y + 1, . . . , b} By Definition 5, we conclude that πeq[y + 1, b] is the most robust 1-consistent distribution. 37 E.3 Proof of Theorem 5.1 Proof of Theorem 5.1. Consider y < b. Kumar’s algorithm uses distribution in (7), where m = ⌈b/λ⌉> b. We denote this distribution as π, and consider another distribution π′ that transfer all probability masses beyond b to b (i.e. π′ i = πi, ∀i ∈[b −1] and π′ b = P∞ i=b πi). Recall that βy(π), γy(π), βy(π′), γy(π′) are the consistency and robustness of π and π′ under prediction y, respectively. Since y < b, by Equation (6), we have βy(π′) = βy(π). Observe that the worst-case attack against π only occurs at x > b, by Lemma 1, we have γy(π′) < γy(π). By Definition 4, KR is not strongly-optimal. E.4 Proof of Theorem 5.2 Proof of Theorem 5.2. We start by finding a weakly-optimal algorithm with robustness γ for any given γ ∈Λγ. If γ = eb/(eb −1), Karlin’s algorithm [38] has exactly γ robustness. Since it is the only eb/(eb −1)- competitive algorithm, it is weakly-optimal. As b →∞, for any γ ∈(eb/(eb −1), ∞), there exists λ ∈(1/b, 1), such that KR’s robustness is exactly γ. Furthermore, KR is weakly-optimal as b →∞. Therefore, we find a weakly-optimal algorithm with robustness γ for any γ ∈Λγ. We now conclude with the strong optimality of Algorithm 3: by Lemma 1, the bi-level optimization problems can be reduced to Problem 8 and Problem 9. By immediate consequence of Proposition 1, Algorithm 3 is strongly-optimal. E.5 Proof of Theorem 5.3 Before proving Theorem 5.3, we first prove the Pareto optimality of PRSR(see Algorithm 9)’s prediction- specific consistency and robustness. Our analysis is divided into two cases: y < b and y ≥b. For the case where y < b, we begin by considering the following optimization problem, referred to as Problem A, which plays a key role in the subsequent analysis. min π1,π2,...,πb Py i=1 πi · (b + i −1) + Pb i=y+1 πi · y y (Problem A) s.t. Px i=1 πi · (b + i −1) + Pb i=x+1 πi · x x ≤γy, ∀x ∈{1, 2, . . . , b}, b X i=1 πi = 1, πi ≥0, ∀i ∈{1, 2, . . . , b}. Let πA = (πA 1 , πA 2 , . . . , πA b ) denote the optimal solution to Problem A. Lemma 3. Assume that γy ∈[γξ, γν]. For Problem A, πA i ̸= 0, for all i ∈{y + 1, y + 2, . . . , b}. Proof. For any γy ∈[γξ, γν], πeq[1, b] is always a feasible solution to Problem A. This guarantees the existence of a optimal solution πA. We prove by contradiction. Suppose there exists r ∈{y + 1, y + 2, . . . , b} such that πA r = 0. We analyze the following cases. Case I: Py i=1 πA i = 0 . In this case, πA 1 = · · · = πA y = πA r = 0. When γy = γν, πeq[y + 1, r] is the only solution to Problem A if we requires πA 1 = · · · = πA y = 0. If we further require πA r = 0 for some r ∈{y + 1, y + 2, . . . , b}, there is no feasible solution to Problem A for any γy ∈[γξ, γν]. 38 Case II(a): Py i=1 πA i ̸= 0 and Pb i=r+1 πA i ̸= 0 . Let p be the largest index in [y] that has non-zero proba- bility mass, i.e. p := max{i ∈[y] | πA i ̸= 0}. Let q be the smallest index in [b]\[r] that has non-zero probability mass, i.e. q := min  i ∈[b] \ [r] | πA i ̸= 0 . Since Py i=1 πA i ̸= 0 and Pb i=r+1 πA i ̸= 0, such p and q are guaranteed to exist. Based on this, we can always construct another solution π′ = πA 1 , πA 2 , . . . , πA p −ϵ1, . . . , πr−1, ϵ1 + ϵ2, . . . , πA q −ϵ2, . . . , πA b  , where ϵ2 = min n (Pr i=1 πA i )b (q−r+b−1)(r−1), πA q o > 0, ϵ1 = min n ( q−r r−p)ϵ2, πA p o > 0. We first prove that π′ is still a feasible solution to Problem A. Recall that when x ≤b (see the definition of R(π, x) in Section 5) R(π, x) = Px i=1 πi(i −1 + b) + Pb i=x+1 πi  x x . (1) Consider x < p. R(π′, x) = R(πA, x) ≤γy. (2) Consider p ≤x < r. Since π′ i = πA i for all i ∈[x] \ {p}, π′ p = πA p −ϵ1, and Pb i=x+1 π′ i  = Pb i=x+1 πA i  + ϵ1, we have xR(π′, x) = xR(πA, x) −ϵ1(p −1 + b) + ϵ1x < xR(πA, x). Therefore, we have R(π′, x) < R(πA, x) ≤γy. (3) Consider x = r. Note that rR(π′, r) = r X i=1 π′ i(b + i −1) + 1 − r X i=1 π′ i ! r, (29) rR(πA, r) = r X i=1 πA i (b + i −1) + 1 − r X i=1 πA i ! r. (30) Since π′ i = πA i for all i ∈[r] \ {p, r}, π′ p = πA p −ϵ1, π′ r = πA r + (ϵ1 + ϵ2), and (Pb i=r+1 π′ i) = (Pb i=r+1 πA i ) −ϵ2, we have rR(π′, r) = rR(πA, r) −ϵ1(b + p −1) + (ϵ1 + ϵ2)(b + r −1) −ϵ2r. By arranging terms, we have rR(π′, r) = rR(πA, r) + ϵ1(r −p) + ϵ2(b −1). (31) Note that r −p > 0, b −1 > 0, and ϵ1 ≤( q−r r−p)ϵ2, ϵ2 ≤ (Pr i=1 πA i )b (q−r+b−1)(r−1). We have ϵ1(r −p) + ϵ2(b −1) ≤(Pr i=1 πA i )b r −1 ≤ Pr i=1 πA i (b + i −1) r −1 . (32) 39 By (30), (31) and (32), rR(π′, r) ≤r Pr i=1 πA i (b + i −1) + r(r −1)(1 −Pr i=1 πA i ) r −1 . (33) Since πA r = 0, (33) implies rR(π′, r) ≤r Pr−1 i=1 πA i (b + i −1) + r(r −1)(1 −Pr−1 i=1 πA i ) r −1 . Therefore, R(π′, r) ≤ Pr−1 i=1 πA i (b + i −1) + (1 −Pr−1 i=1 πA i )(r −1) r −1 = R(πA, r −1) ≤γy. (34) (4) Consider r < x < q. Note that q = min{i ∈[b] \ [r] | πA i ̸= 0}. Thus πA r+1 = · · · = πA q−1 = 0. Hence, π′ r+1 = · · · = π′ q−1 = 0. It follows that xR(π′, x) = rR(π′, r). (35) By (34) and (35), we have R(π′, x) = r x · R(π′, r) < R(π′, r) ≤γy. (5) Consider q ≤x ≤b. Since π′ i = πA i for all i ∈[b] \ {p, r, q}, π′ p = πA p −ϵ1, π′ r = πA r + (ϵ1 + ϵ2), and π′ q = πA q −ϵ2, we have xR(π′, x) = xR(πA, x) −ϵ1(b + p −1) + (ϵ1 + ϵ2)(b + r −1) −ϵ2(b + q −1). By arrange terms, we have xR(π′, x) = xR(πA, x) + ϵ1(r −p) −ϵ2(q −r). (36) Note that ϵ1 ≤( q−r r−p)ϵ2, and r −p > 0, q −r > 0. (36) further indicates that xR(π′, x) ≤xR(πA, x). Therefore, R(π′, x) ≤R(πA, x) ≤γy. Consequently, R(π′, x) ≤γy for all x ∈[b]. Note that Pb i=1 π′ i = 1 and π′ i ≥0 for all i ∈[b]. Therefore, π′ is a feasible solution to Problem A. We then compare the objective value of Problem A at π′ and πA. Note that p ≤y < r. Consequently, as established in Case II(a)(2), yR(π′, y) = yR(πA, y) −ϵ1(b + p −1) + ϵ1y < yR(πA, y). This leads to R(π′, y) < R(πA, y). This makes πA impossible to be the optimal solution to Problem A under the assumptions of Case II (a), since there is always another feasible solution π′ that achieves a smaller objective value. Case II(b): Py i=1 πA i ̸= 0 and Pb i=r+1 πA i = 0 . 40 Let p be the largest index in [y] that has non-zero probability mass, i.e. p := max{i ∈[y] | πA i ̸= 0}. We can always construct another solution π′ = (πA 1 , . . . , πA p−1, πA p −ϵ, . . . , πA r−1, ϵ, . . . , πA b ), where ϵ = min{πA p , b (r−1)·(r−p)} > 0. Similarly we first investigate the feasibility of π′. (1) Consider x < p. It is clear that R(π′, x) = R(πA, x) ≤γy. (2) Consider p ≤x < r. Note that xR(π′, x) = xR(πA, x) −ϵ(b + p −1) + ϵx < xR(πA, x). Therefore, R(π′, x) < R(πA, x) ≤γy. (3) Consider x = r. Since Pb i=r+1 πA i = 0, we have πA i = 0, ∀i ∈{r + 1, . . . , b}. Note that rR(π′, r) = rR(πA, r) −ϵ(b + p −1) + ϵ(b + r −1). By arranging terms, rR(π′, r) = rR(πA, r) + ϵ(r −p). (37) Given that ϵ ≤ b (r−1)(r−p), rR(πA, r) + ϵ(r −p) ≤rR(πA, r) + b r −1. (38) Since Pb i=r+1 πA i = 0, we have Pr i=1 πA i = 1. Therefore, rR(πA, r) + b r −1 ≤rR(πA, r) + Pr i=1 πA i (b + i −1) r −1 . (39) By definition, R(πA, r) = Pr i=1 πA i (b + i −1) + Pb i=r+1 πA i r r . Note that Pb i=r+1 πA i = 0. We further conclude rR(πA, r) + Pr i=1 πA i (b + i −1) r −1 = r Pr i=1 πA i (b + i −1) r −1 . (40) By (37), (38), (39) and (40), rR(π′, r) ≤r Pr i=1 πA i (b + i −1) r −1 . Note that πA r = 0. We have R(π′, r) ≤ Pr−1 i=1 πA i (b + i −1) r −1 = R(πA, r −1) ≤γy. (4) Consider r < x ≤b. Consider r < x ≤b. Since π′ i = πA i = 0 for all i ≥r + 1, it is clear that R(π′, x) < R(π′, r) ≤γy. 41 Consequently, R(π′, x) ≤γy for all x ∈[b]. This implies that π′ is also a feasible solution to Problem A. However, yR(π′, y) = yR(πA, y) −ϵ(b + p −1) + ϵy < yR(πA, y). This implies R(π′, y) < R(πA, y), which contradicts with the optimality of πA. In conclusion, πA i ̸= 0, ∀i ∈{y + 1, y + 2, . . . , b}. Lemma 4. For any γy ∈[γξ, γν] in Problem A, at the optimal solution πA, the (y+1)-th through b-th constraints are binding, i.e., they are satisfied as equalities. Proof. The dual problem to Problem A (referred to as Problem B) can be formulated as max λ1,λ2,...,λb+1 γy · b X i=1 λi ! + λb+1 (Problem B) s.t. x−1 X i=1 λi + b X i=x b + x −1 i · λi + λb+1 ≤b + x −1 y , ∀x ∈{1, 2, . . . , y}, x−1 X i=1 λi + b X i=x b + x −1 i · λi + λb+1 ≤1, ∀x ∈{y + 1, . . . , b}, λi ≤0, ∀i ∈{1, 2, . . . , b}, λb+1 is free. Let λB = (λB 1 , λB 2 , . . . , λB b+1) be the optimal solution to Problem B. By the complementary slackness, "x−1 X i=1 λB i + b X i=x b + x −1 i · λB i + λB b+1 −1 # · πA x = 0, ∀x ∈{y + 1, . . . , b}. By Lemma 3, πA x ̸= 0 for all x ∈{y + 1, . . . , b}, we have x−1 X i=1 λB i + b X i=x b + x −1 i · λB i + λB b+1 = 1, ∀x ∈{y + 1, . . . . , b}, (41) where we later refer to them as Eq.(y + 1) to Eq.(b). We consider the following three cases. Case I: γy = γν . Note that πeq[y + 1, b] is a feasible solution to Problem A when γy = γν. Moreover, by Theorem D.2, it’s the only distribution that achieves R(π, y) = 1. Therefore, πeq[y +1, b] is the optimal solution to Problem A when γy = γν. By Definition 5, (y + 1)-th to b-th constraints in Problem A are binding at πA = πeq[y + 1, b]. Case II: γy ∈(γξ, γν) . Let p∗denote the optimal objective value in Problem A (the primal problem) and let d∗ denote the optimal value in Problem B (the dual problem). It is clear that p∗> 1 when γy ∈(γξ, γν). Since Problem A is a convex optimization problem, and πeq[1, b] (i.e. Karlin’s distribution [38]) is always an interior point, by the Slater’s condition, d∗= p∗> 1. We then show that in Problem B, λB i ̸= 0 for all i ∈{y+1, . . . , b}. We prove this by contradiction, assuming that there exists r ∈{y + 1, . . . , b} such that λB r = 0. 42 Case II(a): r = y + 1 . Consider Eq.(y + 1) and Eq.(y + 2): ( λ1 + · · · + λy + b+y y+1 · λy+1 + b+y y+2 · λy+2 + · · · + b+y b · λb + λb+1 = 1 Eq.(y + 1) λ1 + · · · + λy + λy+1 + b+y+1 y+2 · λy+2 + · · · + b+y+1 b · λb + λb+1 = 1 Eq.(y + 2) Note that b+y y+2 < b+y+1 y+2 , . . . , b+y b < b+y+1 b . If λB y+1 = 0, then (Py i=1 λB i ) + λB b+1 = 1. Case II(b): y + 1 < r < b . Consider Eq.(r) and Eq.(r + 1): ( λ1 + · · · + λr−1 + b+r−1 r · λr + b+r−1 r+1 · λr+1 + · · · + b+r−1 b · λb + λb+1 = 1 Eq.(r) λ1 + · · · + λr−1 + λr + b+r r+1 · λr+1 + · · · + b+r b · λb + λb+1 = 1 Eq.(r + 1) Note that b+r−1 r+1 < b+r r+1, . . . , b+r−1 b < b+r b . If λB r = 0, then Pr−1 i=1 λB i  + λB b+1 = 1. Based on this, consider Eq.(r −1) and Eq.(r): ( λ1 + · · · + λr−2 + b+r−2 r−1 · λr−1 + b+r−2 r · λr + · · · + b+r−2 b · λb + λb+1 = 1 Eq.(r −1) λ1 + · · · + λr−2 + λr−1 + b+r−1 r · λr + · · · + b+r−1 b · λb + λb+1 = 1 Eq.(r) We can deduce λB r−1 = 0. Thus, Pr−2 i=1 λB i  + λB b+1 = 1. Repeating this process, we obtain Py i=1 λB i  + λB b+1 = 1. Case II(c): r = b . Consider Eq.(b −1) and Eq.(b): ( λ1 + · · · + λb−2 + 2b−2 b−1 · λb−1 + 2b−2 b · λb = 1 Eq.(b −1) λ1 + · · · + λb−2 + λb−1 + 2b−1 b · λb = 1 Eq.(b) If λB b = 0, then λB b−1 = 0. This implies that there exists r′ ∈(y + 1, b) such that λB r′ = 0. Based on the analysis in (b), we have (Py i=1 λB i ) + λB b+1 = 1. Note that in Problem Problem B, we have λi ≤0 for all i ∈[b], λb+1 is a free variable, and γy > 1. Therefore, the optimal solution to Problem B is λB i = 0, ∀i ∈[b] and λB b+1 = 1, achieving an objective value of d∗= 1. This leads to a contradiction. It thus follows that λB i ̸= 0 for all i ∈{y + 1, . . . , b}. Now, by the complementary slackness, λB x · "Px i=1 πA i (b + i −1) + Pb i=y+1 πA i x x −γy # = 0, ∀x ∈{y + 1, . . . , b}. Since λB i ̸= 0 for all i ∈{y + 1, . . . , b}, we have Px i=1 πA i (b + i −1) + Pb i=y+1 πA i x x = γy, ∀x ∈{y + 1, . . . , b}. Therefore, the (y + 1)-th to b-th constraints in Problem A are binding at πA. Case III: γy = γξ . Note that πeq[1, b] (i.e. Karlin’s distribution [38]) is the only probability distribution that is γξ-competitive. We can verify that the (y + 1)-th to b-th constraints in Problem A are binding at πA = πeq[1, b]. Lemma 5. Consider y < b. Let γ ∈[γξ, γν], where γξ = γ(πeq[1, b]) = eb eb−1 and γν = γ(πeq[y + 1, b]). Let the consistency and robustness of OPERATION B(πeq[y + 1, b], γ) (see Algorithm 8) under prediction y be βy and γy, respectively. Then, any γy-robust algorithm’s consistency under prediction y is at least βy. 43 Proof. Let ∆denote the set of probability distributions on N+, i.e. ∆:= {π | P∞ i=1 πi = 1}. Let ∆b be the set of probability distributions on [b], i.e. ∆b := {π | Pb i=1 πi = 1}. Let ∆′ b be the set of probability distributions on [b] that achieve equalizing ratios on {y + 1, . . . , b}, i.e. ∆′ b := ( π | b X i=1 πi = 1; R(π, x) = α(π), ∀x ∈{y + 1, . . . , b} ) . It is straightforward to see that ∆′ b ⊆∆b ⊆∆. Let ∆γy denote the set of probability distributions on N+ that is γy-robust under prediction y, i.e. ∆γy := ( π | ∞ X i=1 πi = 1; γy(π) = γy ) . It is straightforward to see that ∆γy ⊆∆. To prove that any γy-robust algorithm’s consistency under prediction y is at least βy, it suffices to show min π∈∆γy βy(π) = βy. Consider π ∈∆\ ∆b. Let π′ be the distribution after transferring all probability mass at i > b to b. It is clear that βy(π′) = βy(π), ∀y < b. By Lemma 1, γy(π′) ≤γy(π). Therefore, min π∈∆γy βy(π) = min π∈∆γy ∩∆b βy(π). (42) Note that minπ∈∆γy ∩∆b βy(π) is the optimal objective value of Problem A. By Lemma 4, Problem A reduces to the following problem (referred to as Problem C): min π1,π2,...,πb Py i=1 πi · (b + i −1) + Pb i=y+1 πi · y y (Problem C) s.t. Px i=1 πi · (b + i −1) + Pb i=x+1 πi · x x ≤γy, ∀x ∈{1, 2, . . . , y}, Px i=1 πi · (b + i −1) + Pb i=x+1 πi · x x = γy, ∀x ∈{y + 1, . . . , b}, b X i=1 πi = 1, πi ≥0, ∀i ∈{1, 2, . . . , b}. In other words, we have min π∈∆γy ∩∆b βy(π) = min π∈∆γy ∩∆′ b βy(π). (43) Let π∗denote OPERATION B(πeq[y+1, b], γ). Suppose π∗has positive support over {1, . . . , k, y+1, . . . , b}. From the construction of OPERATION B (see Algorithm 8), it follows that R(π∗, x) = γ, ∀x ∈{1, . . . , k −1, y + 1, . . . , b} (44a) R(π∗, x) ≤γ, ∀x ∈{k, . . . , y} (44b) Equation (44a) and (44b) together imply γ = γy(π∗) = γy. Since Problem C requires R(π, x) = γy for all x ∈{y + 1, . . . , b}, by Lemma 2, a necessary condition for the optimal solution is πi+1 = b b −1 · πi, ∀i ∈{y + 2, . . . , b −1}. (45) 44 Consider π ∈(∆γy ∩∆′ b) \ {π∗}. Let r denote the minimal index such that π∗ r ̸= πr, i.e. r = min{i | π∗ i ̸= πi}. Obviously, r < b. We consider the following cases. Case I: y < r < b . In this case, πi = π∗ i , for all i ∈[y]. Since πy+1 ̸= π∗ y+1 leads to R(π, y + 1) ̸= R(π∗, y + 1) = γy, we have r ̸= y + 1. Thus, πi = π∗ i , for all i ∈[y + 1]. Based on this, we have Pb i=y+2 πi = Pb i=y+2 π∗ i . If there exists r ∈{y + 2, . . . , b}, such that πr ̸= π∗ r, then π violates Condition 45. Therefore, π is not the optimal solution to Problem C. Case II: r ≤y . Case II(a): πr > π∗ r . Note that R(π∗, x) = γy for all x ∈[k −1]. This makes r > k −1. Therefore, we have πi = π∗ i , ∀i ∈[k −1] πk ≥π∗ k and πi ≥π∗ i = 0, ∀i ∈{k + 1, . . . , y} with some r ∈{k, . . . , y} such that πr > π∗ r. Thus, Pb i=y+1 πi < Pb i=y+1 π∗ i . Combing them together, βy(π) = R(π, y) > R(π∗, y) = βy(π∗). Therefore, π cannot be an optimal solution to Problem C. Case II(b): πr < π∗ r . Note that π∗ i = 0 for all i ∈{k + 1, . . . , y}. Therefore, r ≤k. Since πi = π∗ i for all i ∈[r −1] and πr < π∗ r, we have Py i=r+1 πi ̸= 0; otherwise γy(π) > γy(π∗) = γy. Let r′ denote the minimal index in {r + 1, . . . , y} such that πr′ ̸= 0, i.e. r′ = min {i ∈{r + 1, . . . , y} | πi ̸= 0}. Consider π′ = (π1, . . . , πr + ϵ, . . . , πr′ −ϵ, . . .), where ϵ = min{π∗ r −πr, πr′}. It is straightforward to verify that γy(π′) = γy(π), βy(π′) < βy(π). (46) Note that βy(π′) ≥min π∈∆γy βy(π) ≥ min π∈∆γy ∩∆′ b βy(π). If βy(π) = minπ∈∆γy ∩∆′ b βy(π), then we have βy(π′) ≥βy(π). (47) However, (47) Contradict with (46). Therefore, βy(π) ̸= minπ∈∆γy ∩∆′ b, which makes π suboptimal to Problem C. In conclusion, considering (a) and (b), ∀π ∈(∆γy ∩∆′ b) \ {π∗}, π is not an optimal solution to Problem C. In other words, π∗is the optimal solution to Problem C and thus βy = βy(π∗) = min π∈∆γy ∩∆′ b βy(π). (48) By Equation (42), Equation (43) and Equation (48), we have min π∈∆γy βy(π) = βy. Therefore, any γy-robust algorithm’s consistency under prediction y is at least βy. Lemma 6. Consider y < b. Let γ ∈[γξ, γν], where γξ = γ(πeq[1, b]) = eb eb−1 and γν = γ(πeq[y + 1, b]). Let βy(γ) and γy(γ) denote the consistency and robustness of OPERATION B(πeq[y + 1, b], γ) (see Algorithm 8) with respect to prediction y, respectively. βy(γ) and γy(γ) are jointly Pareto optimal. 45 Proof. Consider OPERATION B (πeq[y + 1, b], γ) and let βy(γ) and γy(γ) denote its prediction-specific consis- tency and robustness. Note that βy(γ) is a strictly decreasing function of γ, and γy(γ) = γ. By Lemma 5, for any γ ∈[γξ, γν], any γy(γ)-robust algorithm is at least βy(γ)-consistent under prediction y. Then, the only way to disprove the Pareto optimality of (βy(γ), γy(γ)) is to find another βy(γ)-consistent and γQ y -robust algorithm Q, where γQ y < γy(γ). Assume such algorithm Q exists. Since γQ y < γy(γ) and βy(γ) is a strictly decreasing function of γ, we have βy(γQ y ) > βy(γy(γ)) = βy(γ). (49) Next, we invoke Lemma 6 once again, any γQ y -robust algorithm is at least βy(γQ y )-consistent. Note that γQ y -robust algorithm Q is βy(γ)-consistent. Thus βy(γQ y ) ≤βy(γ). (50) Equation (49) and Equation (50) lead to a Contradiction. Therefore, for any γ ∈[γξ, γν], βy(γ) and γy(γ) are jointly Pareto optimal. Consider OPERATION A(πeq[1, n], y) and denote its consistency and robustness under prediction y by βy and γy, respectively. For the case when y ≥b, the following result holds. Lemma 7. Suppose y ≥b and 1 < n ≤b. The consistency and robustness of OPERATION A(πeq[1, n], y) (see Algorithm 7) under prediction y, denoted by βy and γy are jointly Pareto optimal. Proof. Let π∗denote OPERATION A(πeq[1, n], y). Our proof is divided into two cases. Case I: π∗ 2 = 0 . Note that n > 1, from the construction of Algorithm 7, π∗ 2 = 0 could only happen when y = b. In this case, we have π∗ 1 = 1/b, π∗ b+1 = (b −1)/b, and π∗ i = 0 for all i ∈N+ \ {1, b + 1}. Note that its prediction-specific consistency and robustness are 1 and 2 −(1/b), respectively, i.e. βy = βy(π∗) = 1 and γy = γy(π∗) = 2 −(1/b). Note that all 1-consistent algorithm under y = b can only have probability mass on {1, b + 1}. Let π1 and 1 −π1 represent the probability mass on 1 and b + 1, respectively. then γy(π) = max π1∈[0,1]{bπ1 + (1 −π1), π1 + 2(1 −π1)} ≥2 −(1/b). Therefore, βy and γy are jointly Pareto optimal. Case II: π∗ 2 ̸= 0 . Let k := max{i ≤b | π∗ i ̸= 0}. Since π∗ 2 ̸= 0, we have k ≥2. According to the structure of Algorithm 7, we have that at least one of the following holds: R(π∗, y + 1) = γy, (51) y −b + 1 = k. (52) Consider π ̸= π∗. Let r be the minimal index such that πi ̸= π∗ i , i.e. r := min{i | πi ̸= π∗ i }. Case II(a): πr > π∗ r . In this case, we must have r < y + 1. If r ≤k −1, based on the design of πeq[1, n] and OPERATION A (see Algorithm 7), we have R(π∗, x) = γy for all x ∈[k −1], therefore, πr > π∗ r will lead to γy(π) > γy(π∗) = γy. If k ≤r < y + 1, then we must have π∗ y+1 ̸= 0, and thus k −1 + b ≥y. It’s clear that βy(π∗) ≤βy(π). Note that γy(π) ≥R(π, 1) = R(π∗, 1) = γy(π∗) = γy. Therefore, π cannot achieve strictly better consistency or robustness with respect to y. 46 Case II(b): πr < π∗ r . A necessary condition for π to dominate π∗is that π must be γy-robust. In the subsequent analysis, we proceed under this assumption. Note that it suffices to restrict the support of the distribution to [y + 1], since assigning probability mass beyond y+1 yields no improvement in consistency while potentially worsening robustness. Then, we have r < y + 1. Since π∗ i = 0 for all k < i < y + 1, we further have r ≤k. Let v := min{i > r | πi ̸= 0}. Consider π′ = {π1, . . . , πr−1, πr + ϵ, . . . , πv −ϵ, . . .}, where ϵ = {πv, π∗ r −πr}. It is straightforward to verify γy(π′) ≤γy. If b+k−1 < y, by construction of OPERATION A (see Algorithm 7), this can only happen when b+n−1 < y and the operation terminates upon encountering the first occurrence of Step 0. Since r < v ≤k < y, it’s clear that βy(π′) < βy(π). If b + k −1 = y, since r ≤k, we have r −1 + b ≤y, thus bβy(π′) = bβy(π) + ϵ · [1{v ≤y}(r −v) + 1{v > y}(r −1 + b −y)] ≤bβy(π). Equivalently, βy(π′) ≤βy(π). Attaining equality requires r −1 + b = y and r = k. If b + k −1 > y, then Condition (52) does not hold; thus, Condition (51) must hold, i.e., we have R(π∗, y + 1) = γy. If v = y + 1, then πi = π∗ i for all i ∈[r −1], πr < π∗ r, πi = 0 for all r < i < y + 1. It’s clear that γ(π) > γy. This makes v < y + 1. Therefore, bβy(π′) = bβy(π) + ϵ(r −v) < bβy(π). Equivalently, βy(π′) < βy(π). Therefore, for all π ̸= π∗, π is not the most consistent γy-robust distribution except that r −1 + b = y and r = k hold simultaneously. When r = k ≥2 and r −1 + b = y hold simultaneously, we have π1 = π∗ 1. Thus, γy(π) ≥R(π, 1) = R(π∗, 1) = γy(π∗) = γy. Since π is γy-robust, γy(π) ≤γy. Thus, γy(π) = γy. Note that βy(π) = R(π, y) = Py i=1(i −1 + b)πi + πy+1y b , and i −1 + b ≥r −1 + b = y for all k ≤i ≤y. This makes βy(π) ≥βy(π∗). Therefore, any π ̸= π∗makes no Pareto improvement over π∗. In conclusion, βy and γy are jointly Pareto optimal. With those foundational lemmas, we prove Theorem 5.3. Proof of Theorem 5.3. Assume that the user-specified parameter is γ. Let n = ⌈logb/(b−1) 1 + 1/(γ + 1)⌉. Let γ′ = γ(πeq[1, n]) = 1 + [( b b−1)n −1]−1. Let π denote the final distribution obtained after applying either OPERATION A or OPERATION B. Case I: y ≥b . Let πA denote OPERATION A(πeq[1, n], y). Case I(a): πA is a two-point distribution . We have βy(πA) = 1 and γy(πA) = 2b−1 b . By design, OPERATION A never increases the robustness of the distribution at any point in time. Therefore, the a condition for forming a two-point should be γ′ ≥γy(πA). In this case, it’s easy to verify that 47 βy(πA) ≤γ′ log  1 + 1 γ′ −1  . (53) Case I(b): πA is not a two-point distribution . In this case, we have πA 1 = πeq 1 [1, n] = [(b −1)(( b b −1)n −1)]−1. Thus, we have γy(πA) = R(πA, 1) = πA 1 · b + (1 −πA 1 ) = 1 + [( b b −1)n −1]−1 = γ′. Note that as b →∞, Kumar’s algorithm is λ 1−e−λ -consistent and 1 1−e−λ -robust. (see [2]) Take λ = −log(1 −γ′−1). In this case, Kumar’s algorithm is γ′ log(1 + 1 γ′−1)-consistent and γ′-robust. Since γ < b −2 and n = ⌈logb/(b−1) 1 + 1/(γ + 1)⌉, we have n > 1. By Lemma 7, (βy(πA), γy(πA)) is Pareto optimal. Therefore, we have βy(πA) ≤γy(πA) log  1 + 1 γy(πA) −1  . Since γ′ = γy(πA), we have for any y ≥b, βy(πA) ≤γ′ log  1 + 1 γ′ −1  . (54) Case II: y < b . Let γ′′ = min{γν, γ′}. Recall that γν = γ(πeq[y+1, b]). Let πB denote OPERATION B(πeq[y+ 1, b], γ′′). Note that γν = γ(πeq[y + 1, b]) > γ(πeq[1, b]) = γξ, and 1 + [( b b −1)n −1]−1 = γ(πeq[1, n]) ≥γ(πeq[1, b]) = γξ. Therefore, γ′′ = min{γν, γ′} ≥γξ, and γ′′ = min{γν, γ′} ≤γν. Thus, we have γξ ≤γ′′ ≤γν. Therefore, γy(πB) = γ′′. We consider the following two cases. Case II(a): γ′′ = γ′ . In this case, we consider λ = −log(1−γ−1 y ), Kumar’s algorithm is γy log(1+ 1 γy−1)- consistent and γy-robust. Since γξ ≤γ′′ ≤γν, by Lemma 5, βy(πB) ≤γy(πB) log  1 + 1 γy(πB) −1  . Since γ′ = γ′′ = γy(πB), we have for any y < b, βy(πB) ≤γ′ log  1 + 1 γ′ −1  . (55) Case II(b): γ′′ = γν < γ′ . In this case, πB = πeq[y + 1, b], and βy(πB) = 1. We can verify that βy(πB) ≤γ′ log  1 + 1 γ′ −1  . (56) 48 By (53), (54), (55) and (56), we conclude that for any y ∈N+, βy(πB) ≤γ′ log  1 + 1 γ′ −1  . Therefore, by Definition 1 and Definition 2, β(π) = sup y∈N+ βy(π) = max ( sup y≥b βy(πA), sup y<b (βy(πB)) ) ≤γ′ log  1 + 1 γ′ −1  . Across all of the cases discussed above, it holds that γy(π) ≤γ′. Similarly, by Definition 1 and Definition 2, γ(π) = supy∈N+ γy(π) ≤γ′. Since F(x) := x log(1 + 1 x−1) is decreasing on x ∈(1, +∞), we further have β(π) ≤γ(π) log  1 + 1 γ(π) −1  . Wei’s lower bound [17] and Definition 3, PRSRis weakly-optimal. By Lemma 6 and Lemma 7, PRSR’s prediction-specific consistency and robustness are Pareto optimal. According to Definition 4, PRSR is strongly-optimal. 49 F Proofs for Section 6 In this section, we prove Theorem 6.1 and 6.2. F.1 Proof of Theorem 6.1 Proof of Theorem 6.1. Let βS y and γS y denote the prediction-specific consistency and robustness of Sun’s algo- rithm with respect to prediction y. Let βS and γS denote the consistency and robustness of Sun’s algorithm. Consider y ∈[L, LβS). In this case, Sun’s algorithm sets thresholds to Φ = LβS. To analyze the prediction- specific consistency, we assume that the maximum price is exactly y. Since y < LβS, we have ALG = L and OPT = y. By Definition 2, βS y = y/L. To obtain the robustness under prediction y, we consider incorrect predictions. Observe that in the worst case, ALG = LβS, OPT = U. Therefore, γS y = U/(LβS) = θ/βS. Consider the canonical competitive algorithm that adopts a fixed threshold policy with Φ = √ LU, which has prediction-specific consistency βC y = y/L and robustness γC y = √ θ. Note that ∀λ ∈[0, 1), γS > √ θ, which makes βS y = βC y , γS y > γC y . By Definition 4, Sun’s algorithm is not strongly-optimal. F.2 Proof of Theorem 6.2 Proof of Theorem 6.2. We first prove the prediction-specific consistency and robustness of PST. We consider the following three cases. Case I: y ∈[L, λL + (1 −λ) √ LU) . In this case, PST sets the threshold at Φ = √ LU. Since λL + (1 − λ) √ LU ≤ √ LU, we have y < √ LU, therefore βy = y/L. Since Φ = √ LU, γy = max{Φ/L, U/Φ} = √ θ. Case II: y ∈[λL + (1 −λ) √ LU, √ LU] . In this case, PST sets the threshold at Φ = y. Obviously, βy = 1. Since y ≤ √ LU, we have γy = max{Φ/L, U/Φ} = U/y. Case III: y ∈( √ LU, U] . In this case, PST sets the threshold at Φ = µ √ LU + (1 −µ)y, where µ = (1 −λ) √ θ (1 −λ) √ θ + λ . Since √ LU < y, Φ = µ √ LU + (1 −µ)y ≤y. Under worst-case construction, we conclude βy = y Φ = y µ √ LU + (1 −µ)y = (1 −λ) √ θy + λy (1 −λ)U + λy , γy = max{Φ/L, U/Φ} = Φ L = (1 −λ)U + λy (1 −λ) √ LU + λL . We then prove the STRONG optimality of PST. As in worst-case instances any deterministic algorithm performs equivalently to a threshold algorithm, it is not restrictive to only consider OTAs. By considering the worst-case predictions for both prediction-specific consistency and robustness, we can conclude that PST is λ + (1 −λ) √ θ-consistent and θ λ+(1−λ) √ θ-robust. Building on the lower bound established by Sun et al. [11], PST is weakly-optimal. Consider Φ′ ̸= Φ. Let (β′ y, γ′ y) denote the consistency and robustness of OTA that uses threshold Φ′ with respect to y. 50 Assume y ∈[L, λL + (1 −λ) √ LU). Since Φ′ ̸= Φ = √ LU, γ′ y = max{Φ′/L, U/Φ′} > √ LU = γy. Therefore, (βy, γy) is Pareto optimal. Assume y ∈[λL+(1−λ) √ LU, √ LU]. Since Φ′ ̸= Φ = y, βy > 1 = βy. Thus, (βy, γy) is Pareto optimal. Assume y ∈( √ LU, U]. If Φ′ < Φ, then β′ y = y/Φ′ > y/Φ = βy. If Φ′ > Φ, since Φ′ > Φ ≥ √ LU, we have γy = Φ/ √ LU and γ′ y = Φ′/ √ LU, thus γ′ y > γy. Therefore, (βy, γy) is Pareto optimal. By Definition 4, PST is strongly-optimal. 51 G Proofs for Section 7 In this section, we prove Theorem 7.1, 7.2 and 7.3. G.1 Proof of Theorem 7.1 Proof of Theorem 7.1. We consider the following five cases to investigate the prediction-specific ϵ-consistency and robustness. Case I: y ∈[L, M −2ϵ] . In this case, ϵ-Tolerant PST decides to set the threshold to Φ = √ LU and the price range relevant to ϵ-consistency is restricted to [max{L, y −ϵ}, y +ϵ]. To determine the ϵ-consistency, we assume that the highest price x ∈[max{L, y −ϵ}, y + ϵ]. Note that y + ϵ ≤(M −2ϵ) + ϵ < √ LU = Φ. We have βϵ y = (y + ϵ)/L. To obtain the robustness, we consider incorrect predictions. γy = max{Φ/L, U/Φ} = √ θ. Case II: y ∈(M −2ϵ, M) . In this case, ϵ-Tolerant PST decides to set the threshold to Φ = M −ϵ and the price range relevant to ϵ-consistency is restricted to [y −ϵ, y + ϵ]. For ϵ-consistency, we observe the best attack can potentially occur at x = M −ϵ −δ or x = y + ϵ, where δ is the infinitesimal quantity. The corresponding ratios are r1(M) := (M −ϵ)/L and r2(M) := (M + ϵ)/(M −ϵ). Note that r1(M) is an increasing function of M and r2(M) is a decreasing function of M. Since r1(L + 3ϵ) = L + 2ϵ L ≥L + 4ϵ L + 2ϵ = r2(L + 3ϵ), we can conclude r1(M) ≥r2(M), for all M ∈[L + 3ϵ, √ LU −ϵ]. thus βϵ y = (M −ϵ)/L. To obtain the robustness, we consider incorrect predictions. Since M −ϵ < √ LU, γy = U/(M −ϵ). Case III: y ∈[M, √ LU + ϵ] . In this case, ϵ-Tolerant PST decides to set the threshold to Φ = y −ϵ and the price range relevant to ϵ-consistency is restricted to [y −ϵ, y + ϵ]. To determine the ϵ-consistency, we assume that x ∈[y −ϵ, y + ϵ]. Observe that the worst case occurs at x = y + ϵ, we have βϵ y = (y + ϵ)/(y −ϵ). To obtain the robustness, we consider incorrect predictions. Note that y −ϵ ≤( √ LU + ϵ) −ϵ = √ LU. We have γy = U/Φ = U/(y −ϵ). Case IV: y ∈( √ LU + ϵ, U −ϵ) . In this case, ϵ-Tolerant PST decides to set the threshold to Φ = µ √ LU+(1− µ)(y −ϵ) and the price range relevant to ϵ-consistency is restricted to [y −ϵ, y +ϵ]. Since µ = (U−2ϵ)−LU/(M−ϵ) (U−2ϵ)− √ LU , by noting that (U −2ϵ) − LU (L + 3ϵ −ϵ) > 0, (U −2ϵ) − LU ( √ LU −ϵ −ϵ) < (U −2ϵ) − √ LU, we have µ ∈(0, 1). To determine the ϵ-consistency, we assume that x ∈[y −ϵ, y + ϵ]. Note that Φ ≤y −ϵ. We have βϵ y = y + ϵ Φ = y + ϵ µ √ LU + (1 −µ)(y −ϵ) . To obtain the robustness, we consider incorrect predictions. Note that Φ ≥ √ LU. We have γy = Φ/L = [µ √ LU + (1 −µ)(y −ϵ)]/L. 52 Case V: y ∈[U −ϵ, U] . In this case, ϵ-Tolerant PST decides to set the threshold to Φ = LU/(M −ϵ) and the price range relevant to ϵ-consistency is restricted to [y −ϵ, U]. Since ϵ ≤( √ LU −L)/4 < (U −L)/2, we have Φ = LU M −ϵ ≤ LU L + 2ϵ ≤U −2ϵ ≤y −ϵ, thus, βϵ y = U Φ = M−ϵ L . Furthermore, γy = Φ L = U M−ϵ. Then, we focus on the ϵ-consistency and robustness of ϵ-Tolerant PST. In Case III, we have βϵ y = (y + ϵ) (y −ϵ) < L + 2ϵ L ≤M −ϵ L . In Case IV, since µ = (U−2ϵ)−LU/(M−ϵ) (U−2ϵ)− √ LU , we get βϵ y ≤ U µ √ LU + (1 −µ)(U −2ϵ) = M −ϵ L . Similarly, we obtain γy = µ √ LU + (1 −µ)(y −ϵ) L ≤µ √ LU + (1 −µ)(U −2ϵ) L = U M −ϵ. By considering the worst-case y over F = [L, U], we conclude that ϵ-Tolerant PST’s ϵ-consistency and robustness is βϵ = (M −ϵ)/L and γ = U/(M −ϵ), respectively. G.2 Proof of Theorem 7.2 Proof of Theorem 7.2. By the lower bound provided by Sun et al. [11], any γ-robust algorithm must be at least (θ/γ)-consistent. By Definition 1 and the definition of ϵ-consistency, we have βϵ ≥β for any ϵ > 0. Therefore, any γ-robust algorithm has at least (θ/γ) ϵ-consistency. Similarly, given ϵ > 0, any algorithm that achieves βϵ ϵ-consistency is βϵ-consistent. Based on the lower bound by Sun et al. [11], it must be at least (θ/βϵ)-robust. G.3 Proof of Theorem 7.3 Proof of Theorem 7.3. By Theorem 7.1 and Theorem 7.2, we conclude that ϵ-Tolerant PST’s ϵ-consistency and robustness are jointly Pareto optimal. To prove the Pareto optimality of prediction-specific ϵ-consistency and robustness, we consider Φ′ ̸= Φ, which achieves ϵ-consistency βϵ y ′ and robustness γy′ with respect to y. Case I: y ∈[L, M −2ϵ] . In this case, γy = √ θ. Note that Φ = √ LU is the only threshold that achieves robustness √ θ. We can conclude that (βϵ y, γy) is Pareto optimal. Case II: y ∈(M −2ϵ, M) . In this case, ϵ-Tolerant PST sets the threshold to Φ = M −ϵ, achieving βϵ y = (M −ϵ)/L and γy = U/(M −ϵ). If Φ′ < Φ, then γ′ y > γy. If Φ′ > Φ, then consider the attack x = min{Φ′ −δ, y + ϵ} for threshold Φ′, where δ is the infinitesimal quantity. This makes βϵ y ′ ≥min{Φ′, y + ϵ}/L > (M −ϵ)/L = βϵ y. Therefore, (βϵ y, γy) is Pareto optimal. 53 Case III: y ∈[M, √ LU + ϵ] . In this case, ϵ-Tolerant PST sets the threshold to Φ = y −ϵ, achieving βϵ y = (y + ϵ)/(y −ϵ) and γy = U/(y −ϵ). Note that Φ = y −ϵ < √ LU. If Φ′ < Φ, then γ′ y > γy. If Φ′ > Φ, consider x = min{Φ′ −δ, y + ϵ} for threshold Φ′, where δ is the infinitesimal quantity. This guarantees the following inequality: βϵ y ′ ≥min{Φ′, y + ϵ}/L > (y −ϵ)/L. Note that r′(y) := (y −ϵ)/L is an increasing function of y, and r(y) := (y + ϵ)/(y −ϵ) is a decreasing function of y. Since y ≥M ≥L + 3ϵ and r(L + 3ϵ) = (L + 4ϵ)/(L + 2ϵ) < (L + 2ϵ)/L = r′(L + 3ϵ), we have r(y) < r′(y), ∀y ∈[M, √ LU + ϵ]. This gives βϵ y < βϵ y ′. Therefore, (βϵ y, γy) is Pareto optimal. Case IV: y ∈( √ LU + ϵ, U −ϵ) . In this case, ϵ-Tolerant PST sets the threshold to Φ = µ √ LU+(1−µ)(y−ϵ), achieving βϵ y = (y + ϵ)/Φ and γy = Φ/L. If Φ′ < Φ, it follows that βϵ y ′ ≥(y + ϵ)/Φ′ > (y + ϵ)/Φ = βϵ y. If Φ′ > Φ, since Φ > √ LU + ϵ > √ LU, γ′ y > γy. Therefore, (βϵ y, γy) is Pareto optimal. Case V: y ∈[U −ϵ, U] . In this case, ϵ-Tolerant PST sets the threshold to Φ = LU/(M −ϵ), achieving βϵ y = (M −ϵ)/L and γy = U/(M −ϵ). If Φ′ > Φ, then γ′ y > γy. If Φ′ < Φ, consider highest price x = U with |x −y| ≤ϵ, βϵ y ′ ≥U/Φ′ > U/Φ = βϵ y. Therefore, (βϵ y, γy) is Pareto optimal. 54
Prediction-Specific Design of Learning-Augmented Algorithms Sizhe Li*1, Nicolas Christianson†2, and Tongxin Li‡1 1 2 -driven performance benefits of machine-learned (ML) predictions. However, most existing approaches in this paradigm are overly conservative, as they do not leverage problem structure to optimize performance in a prediction-specific manner. In this paper, we show that such prediction-specific performance criteria can enable significant performance improvements over the coarser notions of consistency and robustness considered in prior work. Specifically, we propose a notion of strongly-optimal algorithms with predictions, which obtain Pareto optimality not just in the worst-case tradeoff between robustness and consistency, but also in the prediction-specific tradeoff between these metrics. We develop a general bi-level optimization framework that enables systematically designing strongly-optimal algorithms in a wide variety of problem settings, and we propose explicit strongly-optimal algorithms for several classic online problems: deterministic and randomized ski rental, and one-max search. Our analysis reveals new structural insights into how predictions can be optimally integrated into online algorithms by leveraging a prediction-specific design. To validate the benefits of our proposed framework, we empirically evaluate our algorithms in case studies on problems including dynamic power management and volatility-based index trading. Our results demonstrate that prediction-specific, strongly-optimal algorithms can significantly improve performance across a variety of online decision-making settings. 1 Introduction Online algorithms operate in environments where decisions must be made sequentially without full knowledge of future inputs. Traditionally, these algorithms are designed to guarantee robust performance on adversarial problem instances, providing competitive ratio bounds that hold under worst-case inputs [1]. While theoretically robust, this adversarial perspective often yields overly pessimistic strategies that can underperform in the real world, where worst-case instances are uncommon. Recent work on algorithms with predictions addresses this limitation by integrating machine-learned (ML) predictions into classical online decision-making frameworks [2, 3]. This learning-augmented algorithm design paradigm has been applied to a variety of online problems including ski rental, caching, and metrical task systems [2-4] and applications including GPU power management [5], battery energy storage system control [6], carbon-aware workload management [7, 8], and electric vehicle (EV) charging [9, 10]-with algorithms that achieve near-optimal performance when predictions are accurate, while maintaining robust worst-case guarantees when they are not. Existing approaches to learning-augmented algorithm design typically evaluate performance through a so-called consistency-robustness tradeoff. In particular, consistency measures worst-case algorithm performance when the prediction is perfectly accurate, while robustness measures the worst-case competitive ratio over all possible predictions and instances. *Email: †Email: ‡Email: 1 16 Oct 2025 Table 1: Comparison of our technical results on algorithm (weak and strong) optimality. General DSR (LARGE b) RSR (LARGE b) OMS ε-OMS WEAK Algorithm 1 KD Kumar et al. KR Kumar et al. Sun et al. \ STRONG Algorithm 1 PDSR PRSR PST ε-Tolerant PST Proposition 1 Theorem 4.2 Theorem 5.3 Theorem 6.2 Theorem 7.3 Note: Algorithm 1, PDSR, PRSR, PST, and ε-Tolerant PST are results from this work. DSR/RSR=Deterministic/Randomized Ski Rental, OMS=One-Max Search, ε-OMS=Error Tolerant OMS 0 50 100 150 200 Prediction y 1.0 1.5 2.0 2.5 3.0 Consistency y KD ( = 0.5) y KD ( = 0.5) y PDSR (Ours) y PDSR (Ours) y 1.0 1.5 2.0 2.5 3.0 Robustness y 0 50 100 150 200 Prediction y 1.0 1.5 2.0 2.5 Consistency y KR ( = 0.5) y KR ( = 0.5) y PRSR (Ours) y PRSR (Ours) y 1.0 1.5 2.0 2.5 Robustness y 10 12 14 16 18 20 Prediction y 1.50 1.75 2.00 y × y Sun et al. ( = 0.5) PST (Ours) Figure 1: Prediction-specific consistency βy and robustness γy under different predictions y for DSR with b = 100 (LEFT), RSR with b = 100 (MIDDLE), and OMS with L = 10, U = 20 (RIGHT). Importantly, both metrics are inherently worst-case in nature: consistency reflects the algorithm's performance under the least favorable instance with the least favorable, yet accurate prediction, and robustness measures performance under the least favorable (inaccurate) prediction and instance. Because they are worst-case, neither of these metrics measures whether algorithms can achieve better performance for specific predictions. As such, while much prior work has sought to design algorithms that obtain the optimal tradeoff between consistency and robustness (e.g., [2, 11]), and some existing algorithms have sought to improve performance by optimizing decisions in a prediction-specific manner (e.g., [8, 12]), no prior work has considered the question of how to design online algorithms with prediction-specific guarantees on optimality. Motivated by this gap and the potential to improve the performance of algorithms with predictions, our work explores the following question: How can we design algorithms that achieve Pareto-optimal tradeoffs between consistency and robustness that are tailored to specific prediction values? To this end, we introduce a prediction-specific framework for algorithm design, enabling the development of explicit and tractable online algorithms that adapt to each prediction's characteristics to ensure Pareto-optimal robustness and consistency for each individual prediction value. Instantiating this framework, we design explicit prediction-specific algorithms for the problems of deterministic and randomized ski rental and one-max search, two classic online problems with connections to real-world applications including TCP acknowledgment [13], cloud cost management [14], dynamic power management [5], and energy market operations [15, 16]. 1.1 Contributions The main contributions of this paper are as follows. Framework and Theoretical Results. We introduce a novel prediction-specific framework for the design and analysis of learning-augmented algorithms. Specifically, Definition 2 extends the classic notions of consistency and robustness by defining the prediction-specific consistency βy and robustness γy for each possible prediction value y. Here, βy evaluates the algorithm's performance on the worst-case instance which is consistent with the prediction y, while γy measures its worst-case performance under prediction y. Furthermore, Definition 4 introduces the notion of strong optimality: an algorithm is strongly optimal if it is Pareto-optimal not only 2 DSR RSR OMS Existing Results KD ([2], Algorithm 2) KR ([2], Algorithm 3) Sun et al. [11] This Work PDSR (Algorithm 2) PRSR (Algorithm 9) PST (Algorithm 4) Improvement Ratio 1 + 1 λ /2 ( e-1 e ) · (1 -e-λ)-1 √ (1-λ)2+4λθ-(1-λ) 2λ √ θ Table 2: Comparison of proposed algorithms in this work with existing results. The Improvement Ratio quantifies the maximum ratio between the consistency-robustness score (βy · γy) of our algorithm over the best existing score for all y. in the classic sense (referred to as weak optimality, see Definition 3), but also in the (βy, γy) plane for every prediction value y in the prediction space F. This reframes the evaluation of algorithm performance from a single worst-case trade-off to a richer, per-prediction perspective, thereby enabling a more fine-grained analysis of algorithm behavior across different prediction values. Under this new framework, we show that the existing weakly-optimal algorithms for several canonical online problems-deterministic ski rental, randomized ski rental, and one-max search (see Table 3)-are not strongly-optimal (see Theorems 4.1, 5.1, and 6.1). As such, we propose new algorithms for these problems (Algorithms 2, 9, and 4) and prove their strong optimality (see Theorems 4.2, 5.3, and 6.2). Notably, our stronglyoptimal algorithms can obtain significant performance improvements over prior weakly-optimal algorithms for certain prediction values; Table 2 summarizes the best improvement ratio of the consistency-robustness score (βy · γy) across all y for each problem, reflecting the relative benefit of our algorithms over existing ones under the most favorable y. The parameter λ represents some confidence governing the tradeoff between robustness and consistency used in [2, 11]. Note that for both deterministic and randomized ski rental, the maximum improvement ratio is obtained as λ ↓0, where it diverges to infinity with asymptotic order O(λ-1). In contrast, for one-max search, the maximum improvement ratio is √ θ, which is also attained as λ ↓0. Consequently, our methods can yield substantial performance improvements for certain predictions and choices of the parameter λ. We also show that there exist problems for which existing weakly-optimal algorithms in the literature are already strongly-optimal: in particular, we establish prediction-specific bounds in Theorem 2.1 for the weakly-optimal algorithm of Wei and Zhang [17] for non-clairvoyant job scheduling when the job size n = 2, and prove its strong optimality in this case. Novel Techniques and Methodology. We propose a bi-level optimization methodology (see Problems 1 and 2 in Section 3.1) for systematically deriving strongly-optimal algorithms. Given an initial robustness target γ, Problem 1 finds the best prediction-specific consistency βy, and Problem 2 then finds a decision with the best prediction-specific robustness γy given the fixed βy. The bi-level optimization pipeline naturally forms a meta-algorithm (Algorithm 1), which we prove in Proposition 1 yields a strategy on the prediction-specific Pareto front for each y, guaranteeing strong optimality. This approach offers two key benefits. (1) Generality - the bi-level optimization framework is broadly applicable: it can be used to design algorithms for online problems beyond those described in Table 3. (2) Flexibility - this approach exhibits two kinds of flexibility. First, the consistency-robustness trade-off in the meta-algorithm (Algorithm 1) can be tuned by adjusting the robustness target γ. Second, the bi-level optimization problem can be adapted to different kinds of predictions and performance objectives. For example, the objectives and constraints in Problems 1 (P1) and 2 (P2) can be extended to incorporate error tolerance to enable the design of algorithms which perform well even if predictions are erroneous, thus alleviating, to some extent, the issues of brittleness and smooth performance degradation which have been considered in a number of recent works [18-20]. More specifically, in Section 7, we introduce the notion of ε-consistency to formalize the goal of preserving consistency (and avoiding algorithm brittleness) when faced with small prediction error ε, and we show that it is possible to obtain both classic Pareto optimality between ε-consistency and robustness, as well as a corresponding version of strong optimality in this case (see Theorem 7.3). In addition, our analysis of strong optimality in the randomized ski rental problem (see Appendix E) employs 3 a novel prediction-specific primal-dual technique. Specifically, we first use a perturbation-based analysis to derive structural properties of the optimal randomized distribution, and then formulate and analyze a primal-dual optimization problem for each prediction. This provides a structured and potentially generalizable approach to establish Pareto optimality with respect to a given prediction. Experimental Results and Real-World Implications. We evaluate our algorithms through both synthetic simulations and real-world case studies, spanning a range of online decision-making problems. Overall, our methods consistently outperform state-of-the-art learning-augmented and classical baseline algorithms, confirming their theoretical soundness and practical value. More specifically, we apply the deterministic and randomized ski rental algorithms to Dynamic Power Management (DPM) traces [5], an important benchmark for energy-efficient computing, and we apply the one-max search algorithms to VIX trading, a representative financial market task marked by high volatility and uncertainty. In both domains, our methods deliver significant improvements across most prediction regimes, illustrating how prediction-specific design can translate improved theoretical guarantees into tangible real-world gains. 1.2 Related Work Algorithms with Predictions. Although the study of online algorithms with ML predictions is still relatively new [2, 3], significant research progress has been made in recent years on various problems such as online optimization [6], control [21, 22], and reinforcement learning [23, 24]. Similar frameworks have also been adopted to design and analyze algorithms for a number of other online problems, such as caching [3, 25-27], online knapsack [28-30], secretary and bipartite matching [31], metrical task systems [4, 32, 33], and convex body/function chasing [32, 34]. Most of these works make no assumptions about prediction quality, seeking to balance consistency (competitive ratio when predictions are accurate) and robustness (worst-case competitive ratio regardless of prediction quality), though the precise, formal definitions of these metrics vary slightly across works. Beyond their theoretical appeal, these frameworks have begun to influence practical systems domains, enabling advances in areas such as data-center scheduling [35], energy-aware computing [7, 15], and networked control systems [6, 22], where ML-driven forecasts are increasingly available but inherently imperfect. Note also that some recent works depart from the standard paradigm of robustness and consistency and consider alternative prediction models and performance measures. Sun et al. [36] proposed online algorithms with uncertainty-quantified (UQ) predictions, which leverage UQ to assess prediction quality. They introduced the distributionally robust competitive ratio (DRCR), which weighs both the setting where the UQ prediction accurately describes the instance, and the worst-case adversarial setting, and applied this metric to the problems of ski rental and online search. Mahdian et al. [37] proposed a general framework for online optimization under uncertain inputs, where the algorithm has access to an optimistic strategy that performs well when the future unfolds favorably. They developed a meta-algorithm that balances between this optimistic policy and a robust fallback, achieving a trade-off between worst-case guarantees and performance under accurate predictions without relying on a formal error model. Ski Rental and Scheduling with ML Predictions. Regarding online problems that are closely related to those specific examples considered in this work, Kumar et al. [2] studied ski rental and non-clairvoyant scheduling with ML predictions. Their framework introduces a tunable trade-off between consistency and robustness through a user-specified hyperparameter. Wei and Zhang [17] subsequently gave general lower bounds on the trade-off obtainable in these problems, thus proving that the deterministic and randomized algorithms of Kumar et al. [2] for ski rental achieve the Pareto-optimal trade-off between consistency and robustness. Furthermore, they demonstrated that the meta-algorithm proposed by Kumar et al. [2] for nonclairvoyant scheduling does not achieve the tight trade-off, and introduced a novel two-stage scheduling strategy that is provably tight for the case of n = 2. One-Max Search with ML Predictions. In the learning-augmented setting of one-max search, where algorithms receive a prediction of the maximum element value, Sun et al. [11] established a fundamental lower 4 bound on the trade-off between consistency and robustness. They also proposed a threshold-based algorithm and showed that it achieves this lower bound, making it Pareto-optimal in terms of these two performance measures. Algorithm Smoothness. A recent line of work has shown that some existing learning-augmented algorithms are brittle, suffering sharp performance drops under small prediction errors. Elenter et al. [18] addressed this by designing algorithms that follow user-specified error-performance profiles, ensuring controlled degradation in performance for the one-way trading problem. Benomar et al. [19] further proposed smooth one-max search algorithms that are Pareto-optimal and exhibit gradual performance degradation with increasing prediction error, achieving a "triple Pareto optimality" among consistency, robustness, and smoothness. Unlike prior work that focuses on structural smoothness of the Pareto frontier, our formulation provides a principled relaxation of consistency itself, leading to algorithms that are both robust and tolerant to small predictive errors. 2 Problem Formulation We consider online cost minimization problems over a set of instances I. For each instance I ∈I, let ALG(A, I) and OPT(I) denote the cost incurred by an online algorithm A and the cost of the offline optimum, respectively. We assume that the algorithms have no prior knowledge of the instance I. Under the competitive analysis framework [1], the goal is to find an online algorithm A that minimizes the worst-case competitive ratio α(A), which is defined as1 α(A) = max I∈I ALG(A, I) OPT(I) . This worst-case focus, however, can be overly pessimistic in practical settings. To move beyond the limitations of worst-case algorithms, machine-learned predictions can be integrated into algorithm design (see [2, 3]). In this setting, an online algorithm Aω, potentially parameterized by ω ∈Ω, receives not only the instance I ∈I online but also an ML prediction y ∈F concerning some relevant but unknown feature x(I) ∈F of I, where F denotes a prediction space. The feature x(I) encapsulates useful information about I (e.g., the maximum element value for online one-max search) or may even fully specify I in some problems (e.g., the total number of skiing days for discrete-time ski rental). Let ALG(Aω, I, y) denote the (expected) cost incurred by algorithm Aω on instance I given prediction y. Classic Consistency and Robustness. Since the ML prediction y ∈F may be erroneous, a number of algorithms have been designed (see those summarized in Sections 1.2 and 2.3) to (1) achieve near-optimal performance when the prediction y is accurate (consistency), while (2) simultaneously maintaining a bounded performance guarantee even when the prediction is arbitrarily wrong (robustness). To formalize this, let Iy ⊆I represent the set of instances for which prediction y is considered "accurate" or "consistent"; the precise definition depends on the specific problem, the form of prediction, and the prediction quality measure. The classic consistency and robustness metrics are defined as follows. Definition 1 (Classic Metrics). Given an online algorithm Aω that takes predictions, the consistency β(Aω) and robustness γ(Aω) are defined as: β(Aω) := sup y∈F sup I∈Iy ALG(Aω, I, y) OPT(I) , γ(Aω) := sup y∈F sup I∈I ALG(Aω, I, y) OPT(I) . (1) If an algorithm Aω achieves β(Aω) consistency and γ(Aω) robustness, it is called β(Aω)-consistent and γ(Aω)-robust, respectively. A typical choice of Iy is Iy := {I ∈I : x(I) = y} (following [2, 3]). The equality constraint in Iy can be generalized in a number of ways-for instance, to a ball centered at x(I), which will emphasize small error tolerance of the algorithm (see Section 7 for more details). Here, the consistency β(Aω) captures the 1In online profit maximization problems, the competitive ratio is defined as the worst-case ratio between the optimal offline profit and that obtained by the online algorithm, which is the inverse of the minimization setting. 5 algorithm's performance guarantee assuming the prediction is correct, taking the worst case over all possible correct predictions, and the robustness γ(Aω) represents the worst-case guarantee under all possible instances and any prediction y, regardless of accuracy. An ideal algorithm minimizes both β(Aω) (aiming for near 1) and γ(Aω). 2.1 Prediction-Specific Consistency and Robustness The standard metrics in (1) evaluate performance by taking the worst case over both instances I and predictions F. While adopting a worst-case perspective for instances I is a standard and reasonable approach in competitive analysis due to future uncertainty, applying this same perspective to the entire prediction space F is unnecessarily conservative, as better prediction-specific performance may be possible. This motivates a more nuanced framework to evaluate performance conditioned on the specific prediction value y ∈F that the algorithm receives. We thus introduce the notions of prediction-specific consistency and prediction-specific robustness. Definition 2 (Prediction-Specific Metrics). Given an online algorithm Aω and a specific prediction value y ∈F, the prediction-specific consistency βy(Aω) and prediction-specific robustness γy(Aω) relative to y are defined as: βy(Aω) := sup I∈Iy ALG(Aω, I, y) OPT(I) , γy(Aω) := sup I∈I ALG(Aω, I, y) OPT(I) . (2) If an algorithm Aω achieves βy(Aω) and γy(Aω) for a given prediction value y, we say Aω is βy(Aω)-consistent and γy(Aω)-robust under y. The prediction-specific metrics in Definition 2 enable a finer-grained analysis for algorithms with predictions. Instead of considering only the worst-case consistency or robustness over all predictions, we may instead tailor an algorithm's strategy based on the characteristics of each observed prediction value y. This adaptiveness can enable better performance compared to standard algorithms, which only optimize for worst-case consistency and robustness (1). 2.2 WEAK and STRONG Optimality While these prediction-specific metrics provide valuable insight into algorithm performance conditioned on the received prediction, the standard metrics of consistency and robustness still remain valuable to characterize an algorithm's overall worst-case guarantees across all instances and predictions. To formally capture the notion of algorithms that perform well both overall and for specific predictions, we introduce two notions of optimality based on these different settings. Our goal is to distinguish algorithms which both achieve the optimal tradeoff between robustness and consistency for the worst-case prediction, as well as on a per-prediction basis. This leads to the following definitions of weakly-optimal and strongly-optimal algorithms. We begin by defining weak optimality, which characterizes algorithms with the optimal tradeoff between the standard notions of consistency and robustness. Definition 3 (WEAK Optimality). For a fixed online problem, consider an online algorithm A that achieves β consistency and γ robustness. A is WEAKLY-optimal if there does not exist another online algorithm with consistency β′ and robustness γ′ such that β′ ≤β and γ′ ≤γ, with at least one inequality being strict. Building upon this, we define the stricter notion of strong optimality incorporating prediction-specific performance. Definition 4 (STRONG Optimality). For a fixed online problem, consider an online algorithm A that achieves βy prediction-specific consistency and γy prediction-specific robustness under prediction y ∈F. A is STRONGLYoptimal if 6 1. A is weakly-optimal; 2. for any prediction y ∈F, there does not exist another online algorithm with prediction-specific consistency β′ y and robustness γ′ y such that β′ y ≤βy and γ′ y ≤γy, with at least one inequality being strict. The notion of strong optimality in Definition 4 extends classic Pareto optimality to the prediction-specific setting. This definition requires Pareto optimality in the two-dimensional consistency-robustness plane (βy, γy) for each specific prediction y ∈F, in addition to the baseline weak optimality. This stricter criterion captures whether an algorithm is unimprovable in its fundamental performance trade-off across the entire prediction space F. 2.3 Online Problems Studied We now briefly introduce each of the problems studied in this work; these problems are summarized in Table 3. Table 3: Instantiation of the general problem components I, x, and y for problems analyzed. Discrete-Time Ski Rental One-Max Search Scheduling Instance I Number of days n values in [L, U] Set of n jobs Feature x Number of days Maximum value Processing times (x1, . . . , xn) Prediction y Predicted number of days Predicted maximum value Predicted times (y1, . . . , yn) Discrete-Time Ski Rental. The discrete-time ski rental problem is a classic online decision-making problem, wherein a decision-maker decides each day whether to rent skis for 1 unit of cost per day or purchase them outright for a fixed cost b ∈N+, without prior knowledge of the total number of skiing days x. The optimal cost is given by min{b, x}. In the standard competitive analysis framework, where no predictions are available, the optimal deterministic algorithm employs a simple rent-or-buy strategy, purchasing the skis on day M = b; this strategy achieves a competitive ratio of 2 -1/b. The optimal randomized algorithm is known as Karlin's algorithm [38], which strategically balances renting and buying through a carefully-designed probability distribution, achieving a competitive ratio of approximately e/(e -1) ≈1.582. For the ski rental problem, Kumar et al. [2] proposed algorithms that trade off consistency (performance under accurate predictions) and robustness (worst-case performance). They presented a deterministic algorithm achieving (1 + λ) consistency and (1 + 1/λ) robustness (for λ ∈(0, 1)), and a randomized algorithm with ( λ 1-e-λ ) consistency and ( 1+1/b 1-e-(λ-1/b) ) robustness (for λ ∈(1/b, 1)). Subsequently, Wei and Zhang [17] established fundamental lower bounds on this trade-off, showing that as b →∞, for deterministic algorithms, any (1+λ)-consistent algorithm must have a robustness of at least (1+1/λ) for λ ∈(0, 1), while for randomized algorithms, any γ-robust algorithm must have a consistency of at least γ log(1 + 1/(γ -1)). These results show that the algorithms in [2] achieve weak optimality in the limit b →∞(see Definition 3). In this work, we provide a deeper analysis of the algorithms proposed by Kumar et al. [2], examining their prediction-specific consistency and robustness. To maintain consistency and comparability with [2, 17], we also consider the asymptotic regime b →∞. Our findings indicate that neither the deterministic nor randomized algorithms in [2] are strongly-optimal within this framework (see Theorems 4.1 and 5.1). Consequently, we propose novel algorithms (see Algorithms 2 and 9) that achieve strong optimality (see Theorems 4.2 and 5.3), thereby improving upon existing approaches for this problem setting with ML predictions. One-Max Search. The one-max search problem considers a sequence of n elements with values in the range [L, U], where L and U are known positive constants. At each step, an element is observed, and the algorithm must decide whether to accept it immediately or irrevocably discard it. The objective is to select the element with the maximum value. The instance's difficulty is characterized by the ratio θ = U/L. In classical competitive analysis, the optimal competitive ratio is √ θ, achieved by an algorithm using the fixed threshold √ LU [39]. Note that this is a reward maximization problem, rather than a loss minimization; thus, for this problem, we will 7 consider definitions of classic (1) and prediction-specific (2) consistency and robustness with the ratio between ALG and OPT inverted. For learning-augmented one-max search, where algorithms receive a prediction of the maximum element value, Sun et al. [11] established a fundamental lower bound on the consistency-robustness trade-off: any γ-robust algorithm must have a consistency of at least θ/γ. They further proposed a threshold-based algorithm that achieves this lower bound, implying its weak optimality. In this work, we provide a deeper analysis of the algorithm proposed by Sun et al. [11] within this predictionspecific framework. Our analysis reveals that their algorithm is not strongly-optimal (see Theorem 6.1). Consequently, we propose a novel algorithm (see Algorithm 4) that is strongly-optimal (see Theorem 6.2) and offers improved performance. Non-Clairvoyant Scheduling. The non-clairvoyant scheduling problem concerns scheduling n jobs on a single machine without prior knowledge of their processing times. We focus on the preemptive setting, where jobs can be interrupted and resumed without cost. In the standard competitive analysis framework [1], Round-Robin (RR) achieves an optimal competitive ratio of 2n n+1 [40]. Extending this, Wei and Zhang [17] established a fundamental lower bound on the consistency-robustness trade-off, showing that any algorithm must have robustness of at least n+n(n-1)λ 1+λ(n+1)(n+2)/2 if it is (1 + λ)-consistent. They also propose a two-stage schedule algorithm (see Algorithm 6 in Section A.1) and show it achieves the tight tradeoff for n = 2 jobs in this learning-augmented setting. In the next section, as a warm-up, we provide a deeper analysis of this two-stage algorithm's prediction-specific consistency and robustness, establishing that it is strongly-optimal. 2.4 Warm-Up: An Existing Algorithm that is Strongly-Optimal While our proposed notion of strong optimality is much stricter than weak optimality, some existing algorithms known only to be weakly-optimal can be shown to satisfy strong optimality. In this section, we consider the problem of non-clairvoyant scheduling with n = 2 jobs (with length predictions y = (y1, y2) with y1 ≤y2). It is well known that the two-stage scheduling algorithm proposed by Wei and Zhang [17] (see Algorithm 6 in Section A.1) achieves the optimal trade-off between classic consistency and robustness and is thus weakly-optimal. Notably, we can show that this algorithm also satisfies prediction-specific Pareto optimality: Theorem 2.1. The two-stage algorithm in [17] with n = 2 is 1 + min{y1/(2y1 + y2), λ}-consistent and 1 + max{1/3, y1/(y1 + 2λ(2y1 + y2))}-robust under y = (y1, y2). Moreover, it is strongly-optimal. We prove this theorem in Section A.2. Despite the fact that this algorithm is strongly optimal for nonclairvoyant scheduling, for many other canonical online problems, including the ski rental and one-max search problems, we shall soon see that existing weakly-optimal algorithms (such as those in [2, 11]) fail to achieve strong optimality (see Theorems 4.1, 5.1, and 6.1). As such, the rest of this paper will consider the development of new algorithms that can achieve strong optimality in these settings. 3 Optimization-Based Meta-Algorithm In this section, we introduce a general optimization-based approach to systematically identify stronglyoptimal algorithms for online problems. Our goal is to obtain a general meta-algorithm that, given a prediction y ∈F and a target robustness upper bound γ, returns an algorithm Aω which is strongly-optimal. 3.1 Bi-Level Optimization Formulation We begin by considering the following question: given some target upper bound γ on robustness and a prediction y ∈F, how can we obtain an algorithm that both satisfies the target robustness bound, and obtains a prediction-specific optimal tradeoff between consistency and robustness? Here, γ ∈Λγ := [α∗, +∞) can be any possible robustness level, where α∗denotes the optimal competitive ratio achievable for the problem. 8 Algorithm 1 BI-LEVEL OPTIMIZATION-BASED META-ALGORITHM 1: Input: Desired robustness upper bound γ ∈Λγ := [α∗, +∞) 2: Receive a prediction y; 3: Compute {β∗ y, ω} by solving P1(γ, y) (Problem 1); 4: Obtain {γ∗ y, ω∗} by solving P2(β∗ y, y) (Problem 2) ; 5: Deploy the online decision rule induced by Aω∗; To achieve this goal, we specify a bi-level optimization framework comprising Problems 1 and 2. Recall that a prediction y determines a consistent subset of instances Iy ⊆I, as described in Section 2, and that ALG(Aω, I, y) and OPT(I) denote, respectively, the costs of the algorithm Aω augmented by the prediction y and optimal offline algorithm under instance I ∈I. Our first optimization problem (Problem 1), referred to as P1(γ, y), determines the best achievable prediction-specific consistency of any γ-robust algorithm under the prediction y ∈F. Let {β∗ y, ω} denote its optimal solution. We then solve a second optimization problem (Problem 2), referred to as P2(β∗ y, y), to find the most robust algorithm among those achieving β∗ y consistency under the prediction y. Let {γ∗ y, ω∗} denote its optimal solution. The resulting algorithm Aω∗with the optimal solution ω∗is then implemented for the prediction y. Problem 1 (P1): Minimize Consistency P1(γ, y) := min βy,ω∈Ω βy s.t. (3a) ALG(Aω, I, y) ≤γ OPT(I), ∀I ∈I, (3b) ALG(Aω, I, y) ≤βyOPT(I), ∀I ∈Iy. (3c) Constraint (3b) enforces the initial robustness level γ. Problem 2 (P2): Minimize Robustness P2(β∗ y, y) := min γy,ω∈Ω γy s.t. (4a) ALG(Aω, I, y) ≤γyOPT(I), ∀I ∈I, (4b) ALG(Aω, I, y) ≤β∗ yOPT(I), ∀I ∈Iy. (4c) Constraint (4c) enforces the optimal predictionspecific consistency β∗ y found in Problem 1. 3.2 Meta-Algorithm Formulation The preceding approach (Problems 1 and 2) focuses on deriving an instance-specific solution given a fixed prediction y. To operationalize this idea in the online setting, where predictions may vary, we introduce a metaalgorithm (Algorithm 1) that invokes this two-stage optimization procedure for any realized prediction. This enables prediction-specific robust and consistent decision-making across varying inputs, with a user-specified robustness bound γ. In the following proposition, which we prove in Section B, we show that this meta-algorithm yields decisions on the prediction-specific Pareto frontier and satisfying the desired robustness bound. Moreover, the condition that γ is on the (non-prediction-specific) Pareto frontier guarantees the weak optimality of Algorithm 1; this holds if tight Pareto fronts are available off-the-shelf for the problem at hand, such as the tight tradeoffs between classical consistency and robustness available for ski rental and one-max search [2, 11, 17]. Proposition 1. Suppose there exists a weakly-optimal algorithm with robustness γ. With ω∗being an optimal solution of Problem 2 in line 4 of Algorithm 1, Aω∗is β∗ y-consistent and γ∗ y-robust with respect to the prediction y, with γ∗ y ≤γ. Furthermore, Algorithm 1 is strongly-optimal. Notably, the meta-algorithm (Algorithm 1) offers a systematic pipeline to achieve Pareto optimality for each prediction y ∈F while also guaranteeing weak optimality. While the tractability of solving the constituent bi-level optimization problems P1(γ, y) and P2(β∗ y, y) depends on the structure of the specific online problem, this framework provides a foundation for deriving explicit and tractable strongly-optimal algorithms for the problems we discuss in the remainder of this paper. 9 3.3 Generalizations of this framework We briefly note that the bi-level optimization framework we have proposed is quite flexible, and could be generalized in a number of ways to enable its application to different problem settings or objectives. For example, Problems 1 and 2 could be augmented to include practical considerations such as risk- or uncertainty-sensitive constraints and objectives [36, 41, 42] or tolerance to erroneous predictions. In particular, in this work, we develop an error-tolerant variant of this framework, motivated by the continuous nature of certain problems (e.g., prices in the one-max search problem) and the fact that prediction errors are often unavoidable in practice. Instead of analyzing the tradeoff between consistency and robustness under perfectly accurate predictions, we can instead study the tradeoff between ε-consistency and robustness, where ε-consistency denotes the worst-case guarantee when the prediction error is bounded by a chosen constant ε. We further describe this generalized framework, and how to design algorithms in this setting, in Section 7. While our approach could, in principle, be extended to more general and complex dynamic multi-round problems, we anticipate that such an extension would likely require nontrivial technical developments. In particular, such an extension would require a number of additional structural assumptions, e.g., that the overall prediction space F remains fixed and independent of the algorithm's actions, and that the algorithm's actions exert negligible influence on future prediction values. Moreover, depending on the problem, the resulting optimization problems may be considerably harder to analyze. For complex problems like metrical task systems and non-clairvoyant scheduling with n jobs, achieving even weak optimality remains an open question [17, 33]; as such, designing strongly-optimal algorithms is inherently challenging. Thus, while our methodology in its current form may not be directly applicable to these settings, we posit that it may still serve as a useful conceptual framework for designing prediction-specific algorithms for these problems. In the rest of this paper, we will consider problem settings where this methodology can be tractably instantiated. 4 Deterministic Ski Rental We begin our investigation of strongly-optimal algorithm design in specific problem settings by considering deterministic algorithms for the discrete-time ski rental problem described in Section 2.3. If a deterministic decision-maker buys skis at the start of day M ∈N (that may depend on the predicted day y), then the induced cost is ALGDSR(M, x, y) := x · 1M(y)>x + (b + M(y) -1) · 1M(y)≤x, where b ∈N denotes the price of the skis. Then, for this problem, the general definitions of the prediction-specific consistency and robustness in Equation (2) are instantiated as: βDSR y (AM(y)) := ALGDSR(M, y, y) min{b, y} , γDSR y (AM(y)) := sup x∈N ALGDSR(M, x, y) min{b, x} . (5) 4.1 The Deterministic Algorithm of Kumar et al. is Not Strongly-Optimal We first provide a deeper analysis of the deterministic algorithm proposed by Kumar et al. [2], examining its prediction-specific consistency and robustness. The deterministic algorithm of Kumar et al. [2, Algorithm 2], which we denote by KD, purchases at the beginning of day ⌈λb⌉if y ≥b, and at the beginning of day ⌈b/λ⌉otherwise. It achieves (1+λ)-consistency and (1 + 1/λ)-robustness, where λ ∈(0, 1) is a tunable hyper-parameter. By the lower bound of Wei and Zhang [17], KD is weakly-optimal as the price b →+∞. However, as we show in the following theorem (which is proved in Section C.1), the KD algorithm is not strongly-optimal. Theorem 4.1. KD is 1-consistent and (1+1/λ)-robust when y y. γy(M) = ( M-1+b M if M ≤b, M-1+b b if M > b. It's clear that the decision M = b simultaneously achieves the optimal consistency 1 and the optimal robustness 2 -1 b with respect to y, thus dominating any other decisions. Case II: y ≥b. Given the decision M, the prediction-specific consistency and robustness are βy(M) = ( M-1+b b if M ≤y, y b if M > y. γy(M) = ( M-1+b M if M ≤b, M-1+b b if M > b. As shown in Figure 2 and 3, we observe that choosing M = y + 1 dominates all options with M > y + 1, as it offers better robustness without compromising consistency, making M = y + 1 a strong choice. Note that βy(y + 1) = y/b and γy(y + 1) = (y + b)/b. Therefore, two critical decision boundaries within (0, b) are M1 = y + 1 -b and M2 = (b2 -b)/y.2 Specifically, when M1 min {b(λ + 1) -1, (b -1)/λ}, ⌈λb⌉remains on the Pareto front. All together, these cases motivate the design of our deterministic algorithm, PDSR (Algorithm 2). In the following theorem, we characterize the prediction-specific consistency and robustness of PDSR, establishing its strong optimality in the asymptotic regime b →+∞; the full proof is deferred to Section C.2. 2Note that b2-b y may not be an integer. 11 Algorithm 2 PDSR: PREDICTION-SPECIFIC DETERMINISTIC SKI RENTAL 1: Input: λ ∈(0, 1) 2: If y min{b(λ + 1) -1, b-1 λ } then determine M = ⌈λb⌉; 5: Buy skis on day M. Theorem 4.2. PDSR presented in Algorithm 2 is strongly-optimal when b →+∞, and is        1-consistent and (2 -1 b)-robust if y min b(λ + 1) -1, b-1 λ . 5 Randomized Ski Rental We now turn to considering the design of randomized algorithms for the discrete-time ski rental problem. Consider a randomized algorithm π = (πi)i∈N+ (that may depend on the predicted day y) that chooses to buy skis at the start of day i ∈N+ with some probability πi. The algorithm's average cost is ALGRSR(π, x, y) := +∞ X i=1 πi(y) · [x · 1i>x + (b + i -1) · 1i≤x], where b ∈N denotes the price of the skis. For this problem, the prediction-specific consistency and robustness in Equation (2) are instantiated as: βRSR y (Aπ(y)) := ALGRSR(π, y, y) min{b, y} , γRSR y (Aπ(y)) := sup x∈N ALGRSR(π, x, y) min{b, x} . (6) 5.1 The Randomized Algorithm of Kumar et al. is Not Strongly-Optimal Kumar et al. [2] propose a randomized ski rental algorithm, which we denote by KR, which is a variant of Karlin's classical randomized strategy [38]. Given a prediction y and a hyper-parameter λ ∈(1/b, 1), KR chooses the purchase day i according to the distribution πi =    b-1 b m-i · 1 b 1- 1-1 b m , if i ≤m, 0, if i > m, (7) where m := ( ⌈b/λ⌉, if y x. For this problem, the prediction-specific consistency and robustness are: βOMS y (AΦ(y)) := y ALGOMS(Φ, y, y), γOMS y (AΦ(y)) := sup x∈[L,U] x ALGOMS(Φ, x, y). (10) 6.1 The Algorithm of Sun et al. is Not Strongly-Optimal Sun et al. [11] proposed a β-consistent and γ-robust Online Threshold-based Algorithm (OTA) using the threshold Φ =      Lβ, if y ∈[L, Lβ); λLγ + (1 -λ)y/β if y ∈[Lβ, Lγ); Lγ if y ∈[Lγ, U], where β = 2λθ/[ p (1 -λ)2 + 4λθ -(1 -λ)], γ = θ/β, and λ ∈[0, 1]. Furthermore, they show that any γ-robust deterministic algorithm for one-max search must have consistency β ≥θ/γ, implying their algorithm is weakly-optimal. However, as we establish in the following theorem (which is proved in Section F.1), their algorithm is not strongly-optimal. Theorem 6.1. Sun's algorithm is not strongly-optimal. 6.2 A Strongly-Optimal Algorithm for One-Max Search We propose in Algorithm 4 a new approach, PST, which, by more carefully selecting the purchase threshold, achieves strong optimality. In particular, it achieves the prediction-specific consistency and robustness values established in the following theorem. Theorem 6.2. PST (Algorithm 4) is strongly-optimal, and is        (y/L)-consistent and √ θ-robust if y ∈[L, λL + (1 -λ) √ LU); 1-consistent and (U/y)-robust if y ∈[λL + (1 -λ) √ LU, √ LU]; (1-λ) √ θy+λy (1-λ)U+λy -consistent and (1-λ)U+λy (1-λ) √ LU+λL -robust if y ∈( √ LU, U]. 14 Algorithm 5 ε-Tolerant PST 1: Input: λ ∈[0, 1], ε > 0 2: Determine M = λ(L + 3ε) + (1 -λ)( √ LU -ε); 3: If y ∈[L, M -2ε] then set Φ = √ LU; 4: Else if y ∈(M -2ε, M) set Φ = M -ε; 5: Else if y ∈[M, √ LU + ε] then set Φ = y -ε; 6: Else if y ∈( √ LU + ε, U -ε) then set μ = (U-2ε)-LU/(M-ε) (U-2ε)- √ LU , Φ = μ √ LU + (1 -μ)(y -ε); 7: Else if y ∈[U -ε, U] then set Φ = LU/(M -ε); 8: Perform OTA with threshold Φ. We prove Theorem 6.2 in Section F.2; the proof identifies prediction-specific optimal thresholds (Φ) by partitioning the prediction space and deriving each threshold (which is a convex combination involving μ) that ensures Pareto-optimality for each segment of predictions. 7 Error-Tolerant Algorithms Pareto-optimal algorithms can exhibit brittleness, a vulnerability noted by [18, 19], where the competitive ratio degrades sharply toward the worst-case robustness bound even with small prediction errors. This issue stems from the standard definition of consistency (see Definition 1), which assumes strictly perfect predictions (x(I) = y) and thus fails to consider performance under erroneous predictions. To address this, we use the one-max search problem as an example to demonstrate how explicit and tractable Pareto-optimal algorithms incorporating a "generalized consistency" can be constructed in our prediction-specific framework, offering good performance tradeoffs when faced with minor prediction errors. It is worth emphasizing that the notion of "error tolerance" considered in this section differs from the concept of "smoothness" discussed in [18-20]. The former concerns the tradeoff between generalized consistency (allowing small prediction errors) and robustness (with arbitrarily large errors), whereas the latter requires that the algorithm's performance degrade smoothly as the prediction error increases. Nevertheless, we view the two notions as related, both contributing to alleviating "brittleness". To account for small prediction errors, we specify a desired error tolerance ε and define a relaxed consistent set Iε y := {I ∈I : L(x(I), y) ≤ε} , where L : F × F →R≥0 is a chosen loss function that measures prediction error. When substituting Iy with Iε y, we refer to the corresponding consistency metrics in Equation (1) and Equation (2) as ε-consistency and prediction-specific ε-consistency, denoted by βε and βε y, respectively. Substituting β, βy and Iy with βε, βε y and Iε y in Section 3, we can generalize our meta-algorithm to incorporate error tolerance. For the one-max search problem, its inherent continuity renders prediction errors unavoidable in practice, underscoring the necessity of incorporating error tolerance. We fix L(x, y) = |x -y| and assume that ε is small relative to the scale of the problem. We propose an error-tolerant algorithm for this problem, ε-tolerant PST, in Algorithm 5. To conclude the section, we present three theorems that characterize the performance of this algorithm. In particular, Theorem 7.1 establishes the tradeoffs between ε-consistency and robustness achieved by ε-tolerant PST, together with their prediction-specific counterparts, while Theorem 7.2 provides a lower bound on the global ε-consistency-robustness tradeoff. Finally, Theorem 7.3 shows that the ε-tolerant PST algorithm is strongly optimal. The proofs of all three results are given in Section G. Theorem 7.1. ε-Tolerant PST achieves [(M -ε)/L] ε-consistency and U/(M -ε) robustness. Specifically, 15 ε-Tolerant PST achieves                        [(y + ε)/L] ε-consistency and √ θ robustness if y ∈[L, M -2ε]; (M -ε)/L ε-consistency and U/(M -ε) robustness if y ∈(M -2ε, M); (y + ε)/(y -ε) ε-consistency and U/(y -ε) robustness if y ∈[M, √ LU + ε]; (y+ε) μ √ LU+(1-μ)(y-ε) ε-consistency and μ √ LU+(1-μ)(y-ε) L robustness if y ∈( √ LU + ε, U -ε); (M -ε)/L ε-consistency and U/(M -ε) robustness if y ∈[U -ε, U]; where M = λ(L + 3ε) + (1 -λ)( √ LU -ε). Theorem 7.2. Any γ-robust algorithm has at least (θ/γ) ε-consistency, and any algorithm that achieves βε ε-consistency must be at least (θ/βε)-robust. Theorem 7.3. Assume ε ≤( √ LU -L)/4. The ε-consistency and robustness of ε-Tolerant PST (Algorithm 5) are jointly Pareto optimal. Moreover, for every prediction y ∈F = [L, U], ε-Tolerant PST achieves predictionspecific ε-consistency βε y and robustness γy that are jointly Pareto optimal. 8 Numerical Experiments In this section, we evaluate the performance of our proposed algorithms in three different case studies spanning synthetic and real-world settings.3 8.1 Case Study 1: Synthetic Data Experiments for Ski Rental We begin by testing the performance of our algorithms for the ski rental problem via simulations on synthetic instances. We let the actual number of skiing days x be a uniformly random integer drawn from [1, 10b], where b = 100 is buying costs of skis. The prediction y is generated with accuracy p: with probability p, the prediction is accurate (i.e., y = x), and with probability (1 -p), the prediction y ∼N(x, σ2), where σ = 500 and the output is rounded and made positive. For the deterministic setting, we compare PDSR (Algorithm 2) with KD ([2], Algorithm 2) and the classic competitive algorithm (which always buys on day b), using the same parameter λ = 0.5 for both PDSR and KD. For the randomized setting, we compare PRSR (Algorithm 9) with KR ([2], Algorithm 3) and Karlin's algorithm [38], using γ = 3 for PRSR and λ = ln(3/2) for KR, ensuring PRSR and KR have the same robustness 3. Each setup is evaluated over 10000 independent trials. Figures 4 and 5 present the empirical results of average competitive ratio versus accuracy p. We observe that our proposed algorithms, PDSR and PRSR, consistently outperform both classic online algorithms and existing learning-augmented algorithms across both settings. 8.2 Case Study 2: Ski Rental on Dynamic Power Management We next evaluate our ski rental algorithms on real-world traces for a Dynamic Power Management (DPM) problem, where we control the idle and active periods of a computing device. Modern processors typically support multiple power states: deeper states disable more components, leading to lower operating cost/energy but higher wake-up penalties/overhead. During each idle interval, a DPM controller must decide whether to stay active or transition into a deeper sleep state without knowing the remaining idle duration. 3Our code is publicly available at https://github.com/Bill-SizheLi/Prediction_Specific_Design_of_ Learning-Augmented_Algorithms 16 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy of ML Predictor 1.5 1.6 1.7 1.8 1.9 Empirical Competitive Ratio PDSR (Ours) KD Competitive Figure 4: Empirical competitive ratios versus accuracy p in deterministic setting. 0.0 0.2 0.4 0.6 0.8 1.0 Accuracy of ML Predictor 1.20 1.25 1.30 1.35 1.40 1.45 1.50 1.55 Empirical Competitive Ratio PRSR (Ours) KR Competitive Figure 5: Empirical competitive ratios versus accuracy p in randomized setting. log126_copyDtoH log176_Booting log225_MusicFaceBook log225_MusicTwitter log245_MusicFaceBook log126_copyDtoH log176_Booting log225_MusicFaceBook log225_MusicTwitter log245_MusicFaceBook 1.0 1.5 2.0 2.5 3.0 3.5 Empirical ratio Good predictions Bad predictions ( e e 1)-competitive Blindly Trust KR PRSR(Ours) Figure 6: Empirical competitive ratios on DPM traces for good and bad predictions. The two-state DPM system (one active and one sleep state with zero operating cost) is equivalent to the ski rental problem, where remaining active corresponds to renting and transitioning to the sleep state corresponds to buying. Moreover, Antoniadis et al. [5] demonstrated that randomized ski rental algorithms can be converted to multi-state DPM algorithms. Setup. We consider a DPM problem with 4 states. Specifically, we use the same problem setting as Antoniadis et al. [5], employing I/O traces4 collected from a Nexus 5 smartphone [43], from which idle intervals between consecutive requests are extracted. We adopt the IBM mobile hard-drive power states reported in [44], consistent with the setup in [5]. The idle periods are scaled in the same way as in [5]. We use the five largest traces for evaluation. Since the main goal of this section is to probe the algorithms' performance under the two extremes of very good and very bad predictions, we consider the following method for generating predictions: "good predictions" and "bad predictions" are obtained by perturbing the ground truth with N(0, σ2 good) and N(0, σ2 bad) noises, respectively. In this experiment, we set σgood = 0.02 and σbad = 20. We compare four algorithms: the classic ( e e-1)-competitive algorithm, Blindly Trust (which treats the prediction as if it is correct and optimizes accordingly), the randomized algorithm of Kumar et al. (KR), and our randomized algorithm (PRSR). For the learning-augmented algorithms, we use the same parameter values for λ and γ as in Case Study 1. Results. Figure 6 reports the empirical competitive ratios on the real DPM traces. We observe that our strongly-optimal algorithm PRSR consistently achieves the lowest competitive ratios, except for when 4The traces are available at http://iotta.snia.org/traces/block-io. 17 the prediction quality is very good (in which case, the non-robust Blindly Trust algorithm outperforms it. This validates our algorithm's ability to exploit specific predictions to enable good performance regardless of prediction value or quality. 8.3 Case Study 3: One-Max Search on VIX Trading Figure 7: VIX Closing Price from January 2020 to December 2024. The VIX index soared from a pre-pandemic level of 12-15 to 83 in March 2020. The VIX, often referred to as the fear index, exhibits sharp volatility spikes that make it a natural benchmark for evaluating online search algorithms. Its uncertainty and heavy-tailed dynamics closely mirror the one-max search setting, where the core challenge lies in optimally timing a single exit. This characterization is vividly illustrated by the extreme volatility in early 2020: within just a few weeks at the onset of the COVID-19 market shock, the VIX surged from below 80 (shown in Figure 7). Setup. We evaluate our one-max search algorithms in a case study using the daily closing prices of the VIX index from January 2020 to December 2024 (shown in Figure 7), which consist of publicly available values obtained from the Cboe Options Exchange. We assume that at the beginning of each month, an agent holds one unit of VIX and must choose a single day within that month to sell it. Over the course of five years, there are 60 trading rounds (one per month), each offering approximately 20 to 21 trading opportunities, as the VIX is only traded on weekdays. We set L and U as the historical minimum and maximum prices over the entire 5 years.5 Baselines. We compare our proposed methods, PST(Algorithm 4) and ε-Tolerant PST(Algorithm 5), to three baseline algorithms: (i) blindly trusting the prediction, (ii) the classical online algorithm of El-Yaniv [39], and (iii) prior learning-augmented algorithms (Sun's [11] and Benomar's [19]). Experiment 1. In this experiment, we consider a naive prediction strategy that simply uses the highest observed VIX price from the previous month. As the evaluation metric, we use the empirical ratio6, defined as the cumulative online outcome up to the current round divided by the cumulative offline optimum. This metric reflects how well an algorithm performs in practice relative to the hindsight-optimal strategy, averaged over time. We run the algorithms over the 60-month horizon using historical VIX data, and report the empirical ratios at each round to visualize both long-term trends and the stability of performance across different market periods in Figure 8. For our proposed algorithms, we fix the trade-off parameter λ = 0.3 in PST, and use λ = 0.3 and ε = 1.8 in ε-Tolerant PST. For baseline algorithms with tunable parameters, including those from Sun et al. [11] and Benomar et al. [19], we find that setting λ = 1.0 yields the best cumulative empirical ratio over the full 60-month horizon. However, to better illustrate performance variation across different regimes, we also include their results under λ = 0.3 and λ = 0.6. Experiment 2. In practical settings, machine-learned predictions are often more accurate than the naive predictor used in Experiment 1, though they remain imperfect due to model limitations and data noise. The degree of prediction accuracy varies with the capability and training of the underlying ML model. To systematically evaluate algorithmic performance under varying prediction quality, we introduce the notion of an error level - a scalar value between 0 and 1 that quantifies the deviation from perfect information. For each trading round, the prediction is constructed via linear interpolation between the previous month's maximum (naive prediction) 5The focus of this paper is not on the impact of L and U; therefore, we simply set them to historical values. In practical trading scenarios, L and U can be viewed as predetermined parameters representing the stop-loss and take-profit thresholds in the exit strategy of the trading process. 6Note that the empirical ratio here is the inverse of that used in the theoretical analysis, so as to better reflect the proportion of the hindsight optimum that the online or learning-augmented algorithm can achieve (or recover). 18 0 10 20 30 40 50 60 Round 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 Empirical Ratio Blindly Trust El-Yaniv's (Sun's ( =1) & Benomar's( =1)) Sun's ( =0.6) Sun's ( =0.3) Benomar's ( =0.6) Benomar's ( =0.3) PST (Ours) -Tolerant PST (Ours) Figure 8: Empirical competitive ratios over 60 trading rounds. 0.0 0.2 0.4 0.6 0.8 1.0 Error Level 0.800 0.825 0.850 0.875 0.900 0.925 0.950 0.975 1.000 Cumulative Empirical Ratio Blindly Trust El-Yaniv's Sun's Benomar's PST (Ours) -Tolerant PST (Ours) Figure 9: Cumulative empirical competitive ratio with varying prediction error levels. and the current month's actual maximum (perfect prediction), where the error level determines the interpolation weight. An error level of 1.0 corresponds to the naive prediction, while 0.0 yields the perfect prediction. We assess the cumulative empirical ratio of each algorithm over all 60 trading rounds under varying error levels from 0 to 1, and report the result in Figure 9. To ensure a fair comparison across prediction regimes, we fix the trade-off parameter λ = 0.5 for all tunable methods, including those of Sun, Benomar, PST, and ε-Tolerant PST. For ε-Tolerant PST, we additionally set ε = 0.5 to account for moderate tolerance to prediction error. Results. The results for Experiment 1 are shown in Figure 8. Our methods consistently outperform all baselines across the decision horizon, with final empirical competitive ratios of 87.2% (ε-Tolerant PST) and 85.7% (PST), compared to 82.4%-84.4% for all baselines. The results for Experiment 2 are shown in Figure 9; ε-Tolerant PST remains consistently superior across nearly all error levels. 9 Concluding Remarks In this work, we introduce a prediction-specific analysis framework and a finer-grained notion of strong optimality for online algorithms with predictions. We further provide a systematic approach to designing Pareto-optimal online algorithms with better prediction-specific performance than prior algorithms, and we show how this methodology can yield significant performance improvements for the problems of ski rental (deterministic and randomized) and one-max search. Future Directions. In contrast to the ski rental and one-max search settings, the existing weakly-optimal non-clairvoyant scheduling algorithm of Wei and Zhang [17] is strongly-optimal when n = 2. Thus, designing a strongly-optimal algorithm for n-job non-clairvoyant scheduling remains an open question. Similarly, as we did for one-max search, developing explicit error-tolerant strongly-optimal algorithms for both deterministic and randomized ski rental is also an interesting future direction. In addition, exploring whether the bi-level optimization in the meta-algorithm (Algorithm 1) in Section 3 can be tractably solved for more complex, multi-stage problems represents a challenging but potentially impactful direction for future study. References [1] Allan Borodin and Ran El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 2005. ISBN 9780521562458. https://books.google.com/books/about/Online_ Computation_and_Competitive_Analy.html?id=v3faI8pER6IC. [2] Ravi Kumar, Manish Purohit, and Zoya Svitkina. Improving Online Algorithms via ML Predictions. In Advances in Neural Information Processing Systems 19 31 (NeurIPS 2018), pages 1-10, 2018. https://papers.nips.cc/paper/ 8174-improving-online-algorithms-via-ml-predictions. [3] Thodoris Lykouris and Sergei Vassilvitskii. Competitive Caching with Machine Learned Advice. Journal of the ACM, 68(4):24:1-24:25, 2021. https://dl.acm.org/doi/10.1145/ 3447579. [4] Antonios Antoniadis, Christian Coester, Marek Elias, Adam Polak, and Berthold Simon. Online Metric Algorithms with Untrusted Predictions. In Proceedings of the 37th International Conference on Machine Learning (ICML), pages 345-355. PMLR, 2020. https://proceedings.mlr.press/v119/ antoniadis20a.html. [5] Antonios Antoniadis, Christian Coester, Marek Eliáš, Adam Polak, and Bertrand Simon. Learningaugmented dynamic power management with multiple states via new ski rental bounds. In Advances in Neural Information Processing Systems (NeurIPS 2021), 2021. https://proceedings.neurips. cc/paper/2021/hash/8b8388180314a337c9aa3c5aa8e2f37a-Abstract.html. [6] Pengfei Li, Jianyi Yang, Adam Wierman, and Shaolei Ren. Learning-Augmented Decentralized Online Convex Optimization in Networks. In Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), page 163-165, 2025. https://dl.acm.org/doi/10.1145/3726854.3727293. [7] Adam Lechowicz, Nicolas Christianson, Jinhang Zuo, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. The Online Pause and Resume Problem: Optimal Algorithms and an Application to Carbon-Aware Load Shifting. In Proceedings of the 14th ACM International Conference on Future Energy Systems (e-Energy '23). ACM, 2023. https://dl.acm.org/ doi/10.1145/3626776. [8] Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints. Proc. ACM Meas. Anal. Comput. Syst., 9(1):8:1-8:49, March 2025. https://dl.acm.org/doi/10.1145/3711701. [9] Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. Online Conversion with Switching Costs: Robust and Learning-Augmented Algorithms. In Proceedings of the 2024 ACM SIGMETRICS / IFIP Performance Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS/PERFORMANCE '24), pages 1-48. ACM, 2024. https://dl.acm.org/doi/10.1145/3652963. 3655074. [10] Bo Sun, Ali Zeynali, Tongxin Li, Mohammad Hajiesmaili, Adam Wierman, and Danny H. K. Tsang. Competitive Algorithms for the Online Multiple Knapsack Problem with Application to Electric Vehicle Charging. In Proceedings of the ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS '21). ACM, 2021. https: //dl.acm.org/doi/10.1145/3543516.3456271. [11] Bo Sun, Russell Lee, Mohammad Hajiesmaili, Adam Wierman, and Danny H.K. Tsang. Pareto-Optimal Learning-Augmented Algorithms for Online Conversion Problems. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), page 55a988dfb00a914717b3000a3374694c, 2021. https://proceedings.neurips.cc/paper/ 2021/file/55a988dfb00a914717b3000a3374694c-Paper.pdf. [12] Adam Lechowicz, Nicolas Christianson, Bo Sun, Noman Bashir, Mohammad Hajiesmaili, Adam Wierman, and Prashant Shenoy. Chasing Convex Functions with Long-term Constraints. In Proceedings of the 20 41st International Conference on Machine Learning, volume 235, pages 26259-26289. PMLR, 2024. https://proceedings.mlr.press/v235/lechowicz24a.html. [13] Anna R. Karlin, Claire Kenyon, and Dana Randall. Dynamic TCP acknowledgement and other stories about e/(e-1). In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (STOC), pages 502-509. ACM, 2001. https://dl.acm.org/doi/10.1145/ 380752.380845. [14] Lingqing Ai, Xian Wu, Lingxiao Huang, Longbo Huang, Pingzhong Tang, and Jian Li. The multi-shop ski rental problem. In Proceedings of the 2014 ACM SIGMETRICS/International Conference on Measurement and Modeling of Computer Systems, pages 463-475. ACM, 2014. https://dl.acm.org/doi/10.1145/2591971.2591984. [15] Russell Lee, Jessica Maghakian, Mohammad Hajiesmaili, Jian Li, Ramesh Sitaraman, and Zhenhua Liu. Online Peak-Aware Energy Scheduling with Untrusted Advice. In Proceedings of the 12th ACM International Conference on Future Energy Systems (e-Energy '21). ACM, 2021. 3464860. https://dl.acm.org/doi/10.1145/3447555.3464860. [16] Russell Lee, Bo Sun, Mohammad Hajiesmaili, and John C. S. Lui. Online Search with Predictions: Paretooptimal Algorithm and its Applications in Energy Markets. In Proceedings of the 15th ACM International Conference on Future Energy Systems (e-Energy '24). ACM, 2024. https://dl.acm.org/doi/10.1145/3632775.3639590. [17] Alexander Wei and Fred Zhang. Optimal Robustness-Consistency Trade-offs for LearningAugmented Online Algorithms. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pages 21219-21229, 2020. https://proceedings.neurips.cc/paper/2020/hash/ 5bd844f11fa520d54fa5edec06ea2507-Abstract.html. [18] Alex Elenter, Spyros Angelopoulos, Christoph Dürr, and Yanni Lefki. Overcoming Brittleness in ParetoOptimal Learning Augmented Algorithms. In Advances in Neural Information Processing Systems (NeurIPS 2024), 2024. https://proceedings.neurips.cc/paper_files/paper/2024/ hash/11c6625b0481a7d5625831369f6b7c82-Abstract-Conference.html. [19] Ziyad Benomar, Lorenzo Croissant, Vianney Perchet, and Spyros Angelopoulos. Pareto-Optimality, Smoothness, and Stochasticity in Learning-Augmented One-Max-Search. In Proceedings of the 2025 International Conference on Machine Learning (ICML 2025), 2025. https://icml.cc/virtual/ 2025/poster/44853. [20] Ziyad Benomar and Vianney Perchet. On Tradeoffs in Learning-Augmented Algorithms. In Proceedings of the 28th International Conference on Artificial Intelligence and Statistics, pages 802-810. PMLR, 2025. https://proceedings.mlr.press/v258/benomar25a.html. [21] Tongxin Li, Ruixiao Yang, Guannan Qu, Guanya Shi, Chenkai Yu, Adam Wierman, and Steven Low. Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 6(1):1-35, 2022. https://dl.acm.org/doi/10.1145/3508038. [22] Tongxin Li, Hao Liu, and Yisong Yue. Disentangling Linear Quadratic Control with Untrusted ML Predictions. In Proceedings of the 37th Annual Conference on Neural Information Processing Systems (NeurIPS 2024), pages 86860-86898, 2024. https://proceedings.neurips.cc/paper_files/ paper/2024/hash/9dff3b83d463fab213941bfee23341ba-Abstract-Conference. html. 21 [23] Noah Golowich and Ankur Moitra. Can Q-learning be Improved with Advice? In Proceedings of the Thirty-Fifth Conference on Learning Theory, volume 178, pages 4548-4619. PMLR, 2022. https: //proceedings.mlr.press/v178/golowich22a.html. [24] Tongxin Li, Yiheng Lin, Shaolei Ren, and Adam Wierman. Beyond Black-Box Advice: LearningAugmented Algorithms for MDPs with Q-Value Predictions. In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), 2023. https://proceedings.neurips.cc/ paper/2023/file/8e806d3c56ed5f1dab85d601e13cbe38-Paper-Conference.pdf. [25] Dhruv Rohatgi. Near-Optimal Bounds for Online Caching with Machine Learned Advice. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA 2020), pages 1834-1845, 2020. https://dl.acm.org/doi/10.5555/3381089.3381201. [26] Sungjin Im, Ravi Kumar, Aditya Petety, and Manish Purohit. Parsimonious Learning-Augmented Caching. In Proceedings of the 39th International Conference on Machine Learning, volume 162, pages 9588-9601. PMLR, 2022. https://proceedings.mlr.press/v162/im22a.html. [27] Alexander Wei. Better and Simpler Learning-Augmented Online Caching. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020), volume 176 of Leibniz International Proceedings in Informatics (LIPIcs), pages 60:1-60:17. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2020. https://drops.dagstuhl.de/opus/volltexte/2020/12673. [28] Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, and Manish Purohit. Online Knapsack with Frequency Predictions. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), page 161c5c5ad51fcc884157890511b3c8b0, 2021. https://proceedings.neurips.cc/paper/ 2021/hash/161c5c5ad51fcc884157890511b3c8b0-Abstract.html. [29] Mohammadreza Daneshvaramoli, Helia Karisani, Adam Lechowicz, Bo Sun, Cameron Musco, and Mohammad Hajiesmaili. Competitive Algorithms for Online Knapsack with Succinct Predictions. arXiv preprint , 2024. https://arxiv.org/abs/2406.18752. [30] Adam Lechowicz, Rik Sengupta, Bo Sun, Shahin Kamali, and Mohammad Hajiesmaili. Time Fairness in Online Knapsack Problems. In Proceedings of the International Conference on Learning Representations (ICLR 2024), 2024. https://openreview.net/forum?id=9kG7TwgLYu. [31] Antonios Antoniadis, Themis Gouleakis, Pieter Kleer, and Pavel Kolev. Secretary and Online Matching Problems with Machine Learned Advice. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020. https://proceedings.neurips.cc/paper/2020/hash/ 5a378f8490c8d6af8647a753812f6e31-Abstract.html. [32] Sébastien Bubeck, Yassine Engel, Yin Tat Lee, Yifeng Li, and Aleksandar Nikolov. Online Multiserver Convex Chasing and Optimization. In Proceedings of the 2021 Symposium on Discrete Algorithms (SODA), pages 1-40, 2021. https://dl.acm.org/doi/10.5555/3458064.3458189. [33] Nicolas Christianson, Junxuan Shen, and Adam Wierman. Optimal Robustness-Consistency Tradeoffs for Learning-Augmented Metrical Task Systems. In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 206, pages 4223-4254. PMLR, 2023. https: //proceedings.mlr.press/v206/christianson23a.html. [34] Nicolas Christianson, Tinashe Handina, and Adam Wierman. Chasing Convex Bodies and Functions with Black-Box Advice. In Proceedings of the Conference on Learning Theory, volume 178, pages 867-908. PMLR, 2022. https://proceedings.mlr.press/v178/christianson22a.html. 22 [35] Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan, Zili Meng, and Mohammad Alizadeh. Learning Scheduling Algorithms for Data Processing Clusters. In Proceedings of the ACM SIGCOMM 2019 Conference. ACM, 2019. https://dl.acm.org/doi/10.1145/3341302.3342080. [36] Bo Sun, Jerry Huang, Nicolas Christianson, Mohammad Hajiesmaili, Adam Wierman, and Raouf Boutaba. Online Algorithms with Uncertainty-Quantified Predictions. In Proceedings of the 41st International Conference on Machine Learning (ICML 2024), volume 235, pages 47056-47077. PMLR, 2024. https: //proceedings.mlr.press/v235/sun24f.html. [37] Mohammad Mahdian, Hamid Nazerzadeh, and Amin Saberi. Online Optimization with Uncertain Information. ACM Transactions on Algorithms, 8(1):1-29, 2012. https://dl.acm.org/doi/10.1145/2071379.2071381. [38] Anna R. Karlin, Mark S. Manasse, Larry Rudolph, and Daniel D. Sleator. Competitive Snoopy Caching. Algorithmica, 3(1):77-119, 1988. https://dl.acm.org/doi/10. 1007/BF01762111. [39] Ran El-Yaniv, Amos Fiat, Richard M. Karp, and Gabriel Turpin. Optimal Search and One-Way Trading Online Algorithms. Algorithmica, 30(1):101-139, 2001. https: //link.springer.com/article/10.1007/s00453-001-0003-0. [40] Rajeev Motwani, Steven Phillips, and Eric Torng. Nonclairvoyant Scheduling. Theoretical Computer Science, 130(1):17-47, 1994. https://dl.acm.org/doi/10.1016/0304-3975%2894% 2990151-1. [41] Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Controlling Tail Risk in Online Ski-Rental. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 4247-4263. SIAM, 2024. https: //epubs.siam.org/doi/10.1137/1.9781611977912.147. [42] Nicolas Christianson, Bo Sun, Steven Low, and Adam Wierman. Risk-Sensitive Online Algorithms. In The Thirty Seventh Annual Conference on Learning Theory, pages 1140-1141. PMLR, 2024. [43] D. Zhou, W. Pan, W. Wang, and T. Xie. I/O Characteristics of Smartphone Applications and Their Implications for eMMC Design. In Proceedings of the 2015 IEEE International Symposium on Workload Characterization (IISWC), pages 12-21. IEEE Computer Society, 2015. https://doi.org/10.1109/IISWC.2015.8. [44] S. Irani, S. Shukla, and R. Gupta. Online strategies for dynamic power management in systems with multiple power-saving states. ACM Transactions on Embedded Computing Systems, 2(3):325-346, 2003. https://dl.acm.org/doi/10.1145/860176.860180. 23 A Additional Details and Proofs for Section 2.4 This section supplements the analysis of the non-clairvoyant scheduling problem for the special case n = 2, as introduced in Section 2.3. We focus on the two-stage scheduling algorithm proposed by Wei and Zhang [17], which is known to achieve the optimal classic trade-off and is weakly-optimal-optimal under Definition 3. As detailed in Algorithm 6 in Section A.1, the algorithm proceeds in two phases based on predicted job lengths y = (y1, y2), where y1 ≤y2. In Theorem 2.1, we show that this algorithm also satisfies prediction-specific Pareto optimality under n = 2. A.1 Wei et al.'s Two-Stage Schedule Wei and Zhang [17] propose an algorithm called the two-stage schedule (see Algorithm 6) that achieves a consistency of (1 + λ) and a robustness of (1 + 1 1+6λ) under n = 2. By [17] and Definition 3, this two-stage schedule algorithm is weakly-optimal for n = 2. Algorithm 6 Two-Stage Schedule 1: At any point, if a job finishes with processing time less or more than its prediction, use round robin forever. 2: Stage 1: Round robin for at most λn · OPTy/ n 2 units of time. 3: Stage 2: Process jobs in predicted order (starting from the unfinished job with the least predicted time). A.2 Proof of Theorem 2.1 In this subsection, we further analyze the prediction-specific consistency and robustness and prove that two-stage schedule is strongly-optimal under n = 2. Proof of Theorem 2.1. Note that the jobs are ranked based on their predicted lengths; thus we have y1 ≤y2. We first prove the prediction-specific consistency and robustness under a specific set of predictions y = (y1, y2). We first consider λ ≤ y1 2y1+y2 , i.e. λ(2y1 + y2) ≤y1. Regarding the consistency, assume that x1 = y1, x2 = y2. In stage 1, the algorithm runs round-robin for 2λ · (2y1 + y2) time. Since λ · (2y1 + y2) ≤y1, job 1 cannot finish in stage 1. Therefore, the completion time of job 1 is 2λ(2y1 + y2) + y1 -λ(2y1 + y2) = y1 + λ(2y1 + y2), and that of job 2 is y1 + λ(2y1 + y2) + y2 -λ(2y1 + y2) = y1 + y2, thus, ALG = 2y1 + y2 + λ(2y1 + y2), and OPT = 2y1 + y2, yeilding a consistency of 1 + λ. Regarding the robustness, we consider an adversarial attack x1, x2. Let δ denote an infinitesimal quantity. Case I: If x1 ≤λ(2y1 + y2) or x2 ≤λ(2y1 + y2) , i.e. some incorrect prediction is found in stage 1. In this case, the algorithm runs round-robin from beginning to end, resulting in a robustness of at most 4/3. Case II: If λ(2y1 + y2) x1 In this case, OPT = 2x1 + x2. This results in a robustness of 1 + λ(2y1+y2) 3y1 ≤4/3, which is achieved when x1 = y1 and x2 = y1 + δ. Case III: If x1 > y1 , i.e. job 1 finishes later than its prediction. In this case, the algorithm first runs round-robin for 2λ · (2y1 + y2) time, then processes job 1 for y1 -λ(2y1 + y2) time, and finally runs round-robin till the end. Case III(a) λ(2y1 + y2) y1/(2y1 + y2), the algorithm runs round-robin forever, and is (1 + y1 2y1+y2 )-consistent and (4/3)-robust. In conclusion, given prediction y = (y1, y2), the prediction-specific consistency and robustness are βy = 1 + min{ y1 2y1 + y2 , λ}, γy = 1 + max{1/3, y1 y1 + 2λ(2y1 + y2)}. Since the two-stage schedule is already proven to be weakly-optimal, we then prove that it is strongly-optimal by demonstrating the Pareto optimality of their prediction-specific consistency and robustness. When λ ≤y1/(2y1 + y2), the algorithm is (1 + λ)-consistent and 1 + y1 y1+2λ(2y1+y2) -robust. Consider algorithm Aω that completes r > λ(2y1 + y2) amount of work for job 2 when it finishes job 1 in the case where predictions are accurate. Then, it follows that ALG = 2 · (y1 + r) + y2 -r = 2y1 + y2 + r OPT = 2y1 + y2, which yields a competitive ratio of 1 + r/(2y1 + y2) > 1 + λ. Therefore, any (1 + λ)-consistent algorithm processes at most λ(2y1 +y2) amount of work for job 2 when it finishes job 1 or finds any incorrect prediction of job 1. Then, we consider an incorrect prediction x1 = y1, x2 = λ(2y1 + y2) + δ. Consider a (1 + λ)-consistent algorithm B that completes r ≤λ(2y1 + y2) amount of work when it finishes job 1. Then, upon x1 = y1 and x2 = r + δ, ALG = 2(y1 + r) + (r + δ -r) = 2y1 + 2r + δ and OPT = 2r + y1. This leads to a robustness no better than min r≤λ(2y1+y2) 2y1 + 2r y1 + 2r = 1 + y1 y1 + 2λ(2y1 + y2) for all r ≤λ(2y1 + y2). When λ > y1/(2y1 + y2), the algorithm, equivalent to round-robin (RR), is (1 + y1/(2y1 + y2))-consistent and 4/3-robust. Note that RR is the only algorithm that achieves 4/3 competitive ratio. Therefore, under prediction y = (y1, y2), no other algorithm achieves a robustness equal to or less than 4/3. By Definition 4, the two-stage schedule is strongly-optimal. 25 B Additional Details for Section 3 In this section, we provide a proof of Proposition 1 and discuss the scenario in which a weakly-optimal algorithm with robustness γ is unavailable. B.1 Proof of Proposition 1 Proof of Proposition 1. Let {β∗ y, ω} and {γ∗ y, ω∗} denote the optimal solution to Problem 1 and Problem 2, respectively. Since γ ≥α∗, {γ, ω} is always a feasible solution to Problem 2. Thus, γ∗ y ≤γ, where γ∗ y is the optimal objective value to Problem 2. Since {γ∗ y, ω∗} is a feasible solution to Problem 2, we have ALG(Aω∗, I, y) ≤γ∗ y · OPT(I), ∀I ∈I, ALG(Aω∗, I, y) ≤β∗ y · OPT(I), ∀I ∈Iy. By Definition 2, Aω∗is β∗ y-consistent and γ∗ y-robust with respect to y. Consider ω′ ∈Ω. Let β′ y and γ′ y denote the prediction-specific consistency and robustness of Aω′ with respect to y. We consider two cases. Case I: β′ y > β∗ y If β′ y > β∗ y, then (β′ y, γ′ y) produces no Pareto improvement over (β∗ y, γ∗ y). Case II: β′ y ≤β∗ y . If β′ y ≤β∗ y, since Aω′ is β′ y-consistent and γ′ y-robust with respect to y, by Definition 2, ALG(Aω′, I, y) ≤β′ y · OPT(I), ∀I ∈Iy, (11a) ALG(Aω′, I, y) ≤γ′ y · OPT(I), ∀I ∈I. (11b) Since β′ y ≤β∗ y, by Inequality (11a), we further have ALG(Aω′, I, y) ≤β∗ y · OPT(I), ∀I ∈Iy. (12) By Inequalities (11b) and (12), {γ′ y, ω′} is a feasible solution to Problem 2, thus we have γ∗ y ≤γ′ y, where γ∗ y is the optimal objective value to Problem 2. Case II (a): γ′ y > γ∗ y If γ′ y > γ∗ y, then (β′ y, γ′ y) produces no Pareto improvement over (β∗ y, γ∗ y). Case II (b): γ′ y = γ∗ y If γ′ y = γ∗ y, since γ∗ y ≤γ, we have γ′ y ≤γ. By Inequality (11b), we further have ALG(Aω′, I, y) ≤γ · OPT(I), ∀I ∈I. (13) By Inequalities (11a) and (13), {β′ y, ω′} is a feasible solution to Problem 1, thus we have β∗ y ≤β′ y, where β∗ y is the optimal objective value to problem 1. Because β′ y ≤β∗ y and β∗ y ≤β′ y, we have β′ y = β∗ y. Since γ′ y = γ∗ y and β′ y = β∗ y, (β′ y, γ′ y) produces no Pareto improvement over (β∗ y, γ∗ y). Consequently, (β∗ y, γ∗ y) are jointly Pareto optimal in terms of the prediction-specific consistency and robustness with respect to y. Since γ∗ y ≤γ for all y ∈F, we have supy∈F γ∗ y ≤γ. By Equation (1) and Equation (2), Algorithm 1 is γ-robust. Now, there exists a weakly-optimal algorithm Aω with robustness γ, we assume the consistency of Aω is β and the prediction-specific consistency of Aω under prediction y is βy. Since Aω is βy-consistent with respect to y, by Definition 2, ALG(Aω, I, y) ≤βy · OPT(I), ∀I ∈Iy. (14) Since Aω is γ-robust, it is also γ-robust with respect to y, thus we have ALG(Aω, I, y) ≤γ · OPT(I), ∀I ∈Iy. (15) 26 By Inequalities (14) and (15), {βy, ω} is a feasible solution to Problem 1. Therefore, β∗ y ≤βy, where β∗ y is the optimal objective value to Problem 1. Consequently, sup y∈F β∗ y ≤sup y∈F βy = β. By Equation (1) and Equation (2), Algorithm 1 is β-consistent. By weak optimality of Aω, any γ-robust algorithm is at least β-consistent, and any β-consistent algorithm is at least γ-robust. Since Algorithm 1 is β-consistent and γ-robust, by Definition 3, it is weakly-optimal. Moreover, ∀y ∈F, (β∗ y, γ∗ y) are jointly Pareto optimal in terms of the prediction-specific consistency and robustness with respect to y. Thus, Algorithm 1 is strongly-optimal by Definition 4. B.2 Addressing the Absence of a Weakly-Optimal Algorithm with γ Robustness. In general, even if (β′, γ) is not on the Pareto front for any β′ ≥1, a process that first determines a tight consistency bound β = supy′∈F P1(γ, y′), then determines a tight robustness bound γ = supy′∈F P2(β, y′) can generate a tight Pareto-optimal consistency-robustness tradeoff (β, γ) so that γ becomes a valid input of Algorithm 1. Let β, γ denote supy′∈F P1(γ, y′), supy′∈F P2(β, y′), respectively. Define y1 := arg maxy′∈F P1(γ, y′), y2 := arg maxy′∈F P2(β, y′). Since β = supy′∈F P1(γ, y′), we have ∀y′ ∈F, P1(γ, y′) ≤β. Therefore, ∀y′ ∈F, ∃ω′ ∈Ω, s.t. {γ, ω′} is a feasible solution to P2(β, y′), i.e., ∀y′ ∈F, γ ≥P2(β, y′), where P2(β, y′) is the optimal objective value. Consequently, γ ≥sup y′∈F P2(β, y′) = γ. (16) We can use similar techniques to prove β ≥sup y′∈F P1(γ, y′). Since β = supy′∈F P1(γ, y′), any γ-robust algorithm is at least β-consistent. We prove this by contradiction, assuming there exists a γ-robust algorithm A that has consistency βA b -1 + b b = 2 -1/b for all λ ∈(0, 1), by Definition 4, KD is not strong-optimal. C.2 Proof of Theorem 4.2 Proof of Theorem 4.2. Denote the prediction-specific consistency and robustness of Algorithm 2 with respect to y as βy and γy. We start by analyzing the prediction-specific consistency and robustness of Algorithm 2 by considering the following three different cases. Case I: y min{b(λ + 1) -1, (b -1)/λ} . In this case, Algorithm 2 buys on day ⌈λb⌉. To obtain the prediction-specific consistency, we assume x = y. ALG = ⌈λb⌉-1 + b, OPT = b, βy = ALG/OPT min{b(λ+1)-1, (b-1)/λ}. Based on the lower bound by Wei and Zhang [17], Algorithm 2 is weakly-optimal, as b →∞. 28 We consider the following three cases to prove the Pareto optimality of (βy, γy) when b →∞. Case I: y [(b2 -b)/y] -1 + b [(b2 -b)/y] = 1 + y/b = γy. Case II (b) : p = (b2 -b)/y (This case applies only if b2-b y ∈N+). In this case, γP y = p-1+b p = 1+y/b = γy. By Inequality (17), we have the following holds: βP y = p -1 + b b = [(b2 -b)/y] + (b -1) b ≥[y + 1 -b] + (b -1) b = y/b = βy. Case II (c) : (b2 -b)/y [(b2 -b)/y] -1 + b b ≥[y + 1 -b] -1 + b b = y/b = βy. Case II (d) : p > y + 1 . In this case, we see that γP y = p -1 + b b > (y + 1) -1 + b b = 1 + y/b = γy. In either case, (βP y , γP y ) provides no Pareto improvement over (βy, γy). Case III: y > min{b(λ + 1) -1, (b -1)/λ} . Algorithm 2, purchasing on day ⌈λb⌉, achieves a consistency of ⌈λb⌉-1+b b and a robustness of ⌈λb⌉-1+b ⌈λb⌉ . Consider another algorithm Q that buys on day q (̸= ⌈λb⌉) with prediction-specific consistency and robustness βQ y and γQ y . Case III (a): q ⌈λb⌉-1+b ⌈λb⌉ = γy. Case III (b): ⌈λb⌉ ⌈λb⌉-1+b b = βy. Case III (c): q > y . If y > b(λ+1)-1, we have λb (b -1)/λ, we have ⌈λb⌉≥λb > b2-b y . Therefore, γy = ⌈λb⌉-1 + b ⌈λb⌉ y. This explains why, when r ≤y + 1 -b, we immediately terminate the shifting process in Step 0. 30 Algorithm 7 OPERATION A: CONSISTENCY BOOSTING 1: Input: Initial distribution πeq[1, n] and prediction y ≥b; 2: Initialization: π ←πeq[1, n], iterative index r ←n, initial robustness γ ←γy(π); 3: Step 0: 4: // Check whether the current distribution is a two-point distribution. 5: If r = 1, then update π1 ←1/b and πb+1 ←(b -1)/b, go to Step 4; 6: // Check if further shifting enhances consistency. 7: If r ≤y + 1 -b, then go to Step 4; 8: Step 1: 9: // Shifting probability mass from πr to πy+1. 10: Update πy+1 ←πy+1 + πr and πr ←0; 11: Step 2: 12: // Determine if more shifting is necessary. 13: If R(π, y + 1) γ, then set r ←r + 1, and go to Step 1; 10: Step 3: 11: // After determining the appropriate value of r , 12: // Prioritize assigning probability mass to {1, . . . , r -1, y + 1, . . . , b}. 13: Solve π′ through Pb i=1 π′ i = 1 and R(π′, x) = γ, x ∈{1, . . . , r -1, y + 1, . . . , b}; 14: Update π ←π′; 15: Step 4: Return π. 31 D.3 PRSR: Prediction-specific Randomized Ski Rental With these foundations in place, we proceed to introduce an algorithm for randomized ski rental, referred as PRSR(see Algorithm 9). Algorithm 9 PRSR: PREDICTION-SPECIFIC RANDOMIZED SKI RENTAL 1: Input: γ ∈[eb/(eb -1), b -2) 2: // Determine the smallest n such that πeq[1, n] is γ-robust. 3: Determine n ←⌈logb/(b-1)(1 + 1/(γ -1))⌉; 4: // Determine the adjusted robustness γ′, defined as that of πeq[1, n]. 5: Determine γ′ ←[( b b-1)n -1]-1 + 1; 6: If y ≥b then 7: π ←OPERATION A(πeq[1, n], y); (see Algorithm 7) 8: Else if y 1. In Section E.5, we formally verify the prediction-specific Pareto optimality and strong optimality of PRSR. 32 E Proofs for Section 5 and Appendix D This section provides proofs for Section 5 and Appendix D. E.1 Proofs of Useful Lemmas Lemma 1. Consider the ski rental randomized problem with b > 1, for any distribution π supported on N+, there exists another distribution π′ supported on a finite set [b] such that α(π′) ≤α(π). Specifically, when the best attack for π only occurs at x > b, we have α(π′) R(π′, b). Note that α(π) = supx∈N+ R(π, x) and α(π′) = supx∈N+ R(π′, x). We have α(π) ≥α(π′). Specifically, when the best attack for π occurs at x > b, we have α(π) > α(π′). In the learning-augmented setting, we divide our discussion into two parts: y 1. Consider a probability distribution π over N+. If R(π, x) = α(π) for all x ∈{m, m + 1, . . . , n}, then we have πi+1 = b b -1 · πi, ∀i ∈{m + 1, . . . , n -1}. Proof. By the definition of R(π, x) in Section 5, when x ≤b: R(π, x) = Px i=1 πi(x -1 + b) + P∞ i=x+1 πi · x x . Consider the following equations: R(π, i) = α(π), m ≤i ≤m + 2 (21) Subtracting both sides of (m + 1)×Equation (21) with i = m + 1 by m×Equation (21) with i = m, b · πm+1 + ∞ X i=m+2 πi = α(π). (22) 33 Subtracting both sides of (m + 2)× Equation (21) with i = m + 2 by (m + 1)× Equation (21) with i = m + 1, b · πm+2 + ∞ X i=m+3 πi = α(π). (23) Finally, subtracting both sides of Equation (23) by Equation (22) yields πm+2 = b b-1πm+1. Similarly, we can show that πi+1 = b b -1πi, ∀i ∈{m + 2, . . . , n -1}. E.2 Proof of Theorem D.1 and D.2 Proof of Theorem D.1. By Definition 5, πeq[m, n] only has positive support over {m, . . . , n}. Consider x = m and x = m + 1, (m -1 + b) · πeq m[m, n] m + (1 -πeq m[m, n]) = α(π). (24) (m -1 + b)πeq m[m, n] + (m + b)πeq m+1[m, n]] (m + 1) + (1 -πeq m[m, n] -πeq m+1[m, n]) = α(π). (25) Equating the left-hand sides of Equation (24) and Equation (25) yields: πeq m+1[m, n] = m(b -1) m + b -1πeq m[m, n]. (26) By Lemma 2, we have πeq i+1[m, n] = b b -1 · πeq i [m, n], ∀i ∈{m + 1, . . . , n -1}. (27) Equation (26), Equation (27), together with the constraint Pn i=m πi = 1, imply that πeq i [m, n] =          1 + m+(b-1) m · ( b b-1)n-m -1 -1 for i = m; πeq m[m, n] · m+(b-1) m(b-1) · b b-1 i-m-1 , for i ∈{m + 1, . . . , n}; 0, otherwise. Proof of Theorem D.2. Applying Lemma 1, we first reduce the support under consideration to [b]. Furthermore, the condition of 1 consistency requires πi = 0 for all i ∈[y]. We consider the following optimization problem. min πy+1,...,πb,γy γy (Primal Problem) s.t. i X j=y+1 πj · (b + j -1) + b X j=i+1 πj · i ≤γy · i, ∀i ∈{y + 1, . . . , b}, b X i=y+1 πi = 1, πi ≥0, ∀i ∈{y + 1, . . . , b}. γy ≥0 34 Let π∗ y+1, . . . , π∗ b, γ∗ y denote the optimal solution to Primal Problem. It's clear that γ∗ y = min π∗ y+1,...,π∗ b max i∈{y+1,...,b} R(π∗, i). (28) We claim that π∗ y+1 ̸= 0. We prove this by contradiction, assuming π∗ y+1 = 0. Let r be the minimal index with non-zero probability mass, i.e. r := min{i | π∗ i ̸= 0}. Consider another distribution π′ with π′ y+1 = ε, π′ r = π∗ r -ε and π′ i = π∗ i for all i ∈Z+ \ {y + 1, r}, where we denote ε := min π∗ r, y + 1 b -1 · (γ∗ y -1 2 ) . Note that R(π′, y + 1) = (b + y) · π′ y+1 + (y + 1) · (1 -πy+1) y + 1 ≤γ∗ y -1 2 + 1 = γ∗ y + 1 2 q | πi ̸= 0}. Since π∗ b ̸= 0, such r is guaranteed to exist. Consider another distribution π′ with π′ y+1 = π∗ y+1 -ε1, π′ q = ε1 + ε2, πr = π∗ r -ε2, and π′ i = π∗ i for all i ∈Z+ \ {y + 1, q, r}, where we denote ε1 := min π∗ y+1, r -q 2(q -y -1) · ε2 , ε2 := min π∗ r, 2(q -y -1) [(r -q) + 2(q -y -1)](q -1 + b) · γ∗ y . Similarly, we verify that R(π′, i) b. We denote this distribution as π, and consider another distribution π′ that transfer all probability masses beyond b to b (i.e. π′ i = πi, ∀i ∈[b -1] and π′ b = P∞ i=b πi). Recall that βy(π), γy(π), βy(π′), γy(π′) are the consistency and robustness of π and π′ under prediction y, respectively. Since y b, by Lemma 1, we have γy(π′) 0, ε1 = min n ( q-r r-p)ε2, πA p o > 0. We first prove that π′ is still a feasible solution to Problem A. Recall that when x ≤b (see the definition of R(π, x) in Section 5) R(π, x) = Px i=1 πi(i -1 + b) + Pb i=x+1 πi x x . (1) Consider x 0, b -1 > 0, and ε1 ≤( q-r r-p)ε2, ε2 ≤ (Pr i=1 πA i )b (q-r+b-1)(r-1). We have ε1(r -p) + ε2(b -1) ≤(Pr i=1 πA i )b r -1 ≤ Pr i=1 πA i (b + i -1) r -1 . (32) 39 By (30), (31) and (32), rR(π′, r) ≤r Pr i=1 πA i (b + i -1) + r(r -1)(1 -Pr i=1 πA i ) r -1 . (33) Since πA r = 0, (33) implies rR(π′, r) ≤r Pr-1 i=1 πA i (b + i -1) + r(r -1)(1 -Pr-1 i=1 πA i ) r -1 . Therefore, R(π′, r) ≤ Pr-1 i=1 πA i (b + i -1) + (1 -Pr-1 i=1 πA i )(r -1) r -1 = R(πA, r -1) ≤γy. (34) (4) Consider r 0, q -r > 0. (36) further indicates that xR(π′, x) ≤xR(πA, x). Therefore, R(π′, x) ≤R(πA, x) ≤γy. Consequently, R(π′, x) ≤γy for all x ∈[b]. Note that Pb i=1 π′ i = 1 and π′ i ≥0 for all i ∈[b]. Therefore, π′ is a feasible solution to Problem A. We then compare the objective value of Problem A at π′ and πA. Note that p ≤y 0. Similarly we first investigate the feasibility of π′. (1) Consider x 1 when γy ∈(γξ, γν). Since Problem A is a convex optimization problem, and πeq[1, b] (i.e. Karlin's distribution [38]) is always an interior point, by the Slater's condition, d∗= p∗> 1. We then show that in Problem B, λB i ̸= 0 for all i ∈{y+1, . . . , b}. We prove this by contradiction, assuming that there exists r ∈{y + 1, . . . , b} such that λB r = 0. 42 Case II(a): r = y + 1 . Consider Eq.(y + 1) and Eq.(y + 2): ( λ1 + · · · + λy + b+y y+1 · λy+1 + b+y y+2 · λy+2 + · · · + b+y b · λb + λb+1 = 1 Eq.(y + 1) λ1 + · · · + λy + λy+1 + b+y+1 y+2 · λy+2 + · · · + b+y+1 b · λb + λb+1 = 1 Eq.(y + 2) Note that b+y y+2 1. Therefore, the optimal solution to Problem B is λB i = 0, ∀i ∈[b] and λB b+1 = 1, achieving an objective value of d∗= 1. This leads to a contradiction. It thus follows that λB i ̸= 0 for all i ∈{y + 1, . . . , b}. Now, by the complementary slackness, λB x · "Px i=1 πA i (b + i -1) + Pb i=y+1 πA i x x -γy # = 0, ∀x ∈{y + 1, . . . , b}. Since λB i ̸= 0 for all i ∈{y + 1, . . . , b}, we have Px i=1 πA i (b + i -1) + Pb i=y+1 πA i x x = γy, ∀x ∈{y + 1, . . . , b}. Therefore, the (y + 1)-th to b-th constraints in Problem A are binding at πA. Case III: γy = γξ . Note that πeq[1, b] (i.e. Karlin's distribution [38]) is the only probability distribution that is γξ-competitive. We can verify that the (y + 1)-th to b-th constraints in Problem A are binding at πA = πeq[1, b]. Lemma 5. Consider y b to b. It is clear that βy(π′) = βy(π), ∀y π∗ r . Note that R(π∗, x) = γy for all x ∈[k -1]. This makes r > k -1. Therefore, we have πi = π∗ i , ∀i ∈[k -1] πk ≥π∗ k and πi ≥π∗ i = 0, ∀i ∈{k + 1, . . . , y} with some r ∈{k, . . . , y} such that πr > π∗ r. Thus, Pb i=y+1 πi R(π∗, y) = βy(π∗). Therefore, π cannot be an optimal solution to Problem C. Case II(b): πr γy(π∗) = γy. Let r′ denote the minimal index in {r + 1, . . . , y} such that πr′ ̸= 0, i.e. r′ = min {i ∈{r + 1, . . . , y} | πi ̸= 0}. Consider π′ = (π1, . . . , πr + ε, . . . , πr′ -ε, . . .), where ε = min{π∗ r -πr, πr′}. It is straightforward to verify that γy(π′) = γy(π), βy(π′) βy(γy(γ)) = βy(γ). (49) Next, we invoke Lemma 6 once again, any γQ y -robust algorithm is at least βy(γQ y )-consistent. Note that γQ y -robust algorithm Q is βy(γ)-consistent. Thus βy(γQ y ) ≤βy(γ). (50) Equation (49) and Equation (50) lead to a Contradiction. Therefore, for any γ ∈[γξ, γν], βy(γ) and γy(γ) are jointly Pareto optimal. Consider OPERATION A(πeq[1, n], y) and denote its consistency and robustness under prediction y by βy and γy, respectively. For the case when y ≥b, the following result holds. Lemma 7. Suppose y ≥b and 1 1, from the construction of Algorithm 7, π∗ 2 = 0 could only happen when y = b. In this case, we have π∗ 1 = 1/b, π∗ b+1 = (b -1)/b, and π∗ i = 0 for all i ∈N+ \ {1, b + 1}. Note that its prediction-specific consistency and robustness are 1 and 2 -(1/b), respectively, i.e. βy = βy(π∗) = 1 and γy = γy(π∗) = 2 -(1/b). Note that all 1-consistent algorithm under y = b can only have probability mass on {1, b + 1}. Let π1 and 1 -π1 represent the probability mass on 1 and b + 1, respectively. then γy(π) = max π1∈[0,1]{bπ1 + (1 -π1), π1 + 2(1 -π1)} ≥2 -(1/b). Therefore, βy and γy are jointly Pareto optimal. Case II: π∗ 2 ̸= 0 . Let k := max{i ≤b | π∗ i ̸= 0}. Since π∗ 2 ̸= 0, we have k ≥2. According to the structure of Algorithm 7, we have that at least one of the following holds: R(π∗, y + 1) = γy, (51) y -b + 1 = k. (52) Consider π ̸= π∗. Let r be the minimal index such that πi ̸= π∗ i , i.e. r := min{i | πi ̸= π∗ i }. Case II(a): πr > π∗ r . In this case, we must have r π∗ r will lead to γy(π) > γy(π∗) = γy. If k ≤r r | πi ̸= 0}. Consider π′ = {π1, . . . , πr-1, πr + ε, . . . , πv -ε, . . .}, where ε = {πv, π∗ r -πr}. It is straightforward to verify γy(π′) ≤γy. If b+k-1 y}(r -1 + b -y)] ≤bβy(π). Equivalently, βy(π′) ≤βy(π). Attaining equality requires r -1 + b = y and r = k. If b + k -1 > y, then Condition (52) does not hold; thus, Condition (51) must hold, i.e., we have R(π∗, y + 1) = γy. If v = y + 1, then πi = π∗ i for all i ∈[r -1], πr γy. This makes v 1. By Lemma 7, (βy(πA), γy(πA)) is Pareto optimal. Therefore, we have βy(πA) ≤γy(πA) log 1 + 1 γy(πA) -1 . Since γ′ = γy(πA), we have for any y ≥b, βy(πA) ≤γ′ log 1 + 1 γ′ -1 . (54) Case II: y γ(πeq[1, b]) = γξ, and 1 + [( b b -1)n -1]-1 = γ(πeq[1, n]) ≥γ(πeq[1, b]) = γξ. Therefore, γ′′ = min{γν, γ′} ≥γξ, and γ′′ = min{γν, γ′} ≤γν. Thus, we have γξ ≤γ′′ ≤γν. Therefore, γy(πB) = γ′′. We consider the following two cases. Case II(a): γ′′ = γ′ . In this case, we consider λ = -log(1-γ-1 y ), Kumar's algorithm is γy log(1+ 1 γy-1)- consistent and γy-robust. Since γξ ≤γ′′ ≤γν, by Lemma 5, βy(πB) ≤γy(πB) log 1 + 1 γy(πB) -1 . Since γ′ = γ′′ = γy(πB), we have for any y √ θ, which makes βS y = βC y , γS y > γC y . By Definition 4, Sun's algorithm is not strongly-optimal. F.2 Proof of Theorem 6.2 Proof of Theorem 6.2. We first prove the prediction-specific consistency and robustness of PST. We consider the following three cases. Case I: y ∈[L, λL + (1 -λ) √ LU) . In this case, PST sets the threshold at Φ = √ LU. Since λL + (1 - λ) √ LU ≤ √ LU, we have y √ LU = γy. Therefore, (βy, γy) is Pareto optimal. Assume y ∈[λL+(1-λ) √ LU, √ LU]. Since Φ′ ̸= Φ = y, βy > 1 = βy. Thus, (βy, γy) is Pareto optimal. Assume y ∈( √ LU, U]. If Φ′ y/Φ = βy. If Φ′ > Φ, since Φ′ > Φ ≥ √ LU, we have γy = Φ/ √ LU and γ′ y = Φ′/ √ LU, thus γ′ y > γy. Therefore, (βy, γy) is Pareto optimal. By Definition 4, PST is strongly-optimal. 51 G Proofs for Section 7 In this section, we prove Theorem 7.1, 7.2 and 7.3. G.1 Proof of Theorem 7.1 Proof of Theorem 7.1. We consider the following five cases to investigate the prediction-specific ε-consistency and robustness. Case I: y ∈[L, M -2ε] . In this case, ε-Tolerant PST decides to set the threshold to Φ = √ LU and the price range relevant to ε-consistency is restricted to [max{L, y -ε}, y +ε]. To determine the ε-consistency, we assume that the highest price x ∈[max{L, y -ε}, y + ε]. Note that y + ε ≤(M -2ε) + ε 0, (U -2ε) - LU ( √ LU -ε -ε) 0. Therefore, any γ-robust algorithm has at least (θ/γ) ε-consistency. Similarly, given ε > 0, any algorithm that achieves βε ε-consistency is βε-consistent. Based on the lower bound by Sun et al. [11], it must be at least (θ/βε)-robust. G.3 Proof of Theorem 7.3 Proof of Theorem 7.3. By Theorem 7.1 and Theorem 7.2, we conclude that ε-Tolerant PST's ε-consistency and robustness are jointly Pareto optimal. To prove the Pareto optimality of prediction-specific ε-consistency and robustness, we consider Φ′ ̸= Φ, which achieves ε-consistency βε y ′ and robustness γy′ with respect to y. Case I: y ∈[L, M -2ε] . In this case, γy = √ θ. Note that Φ = √ LU is the only threshold that achieves robustness √ θ. We can conclude that (βε y, γy) is Pareto optimal. Case II: y ∈(M -2ε, M) . In this case, ε-Tolerant PST sets the threshold to Φ = M -ε, achieving βε y = (M -ε)/L and γy = U/(M -ε). If Φ′ γy. If Φ′ > Φ, then consider the attack x = min{Φ′ -δ, y + ε} for threshold Φ′, where δ is the infinitesimal quantity. This makes βε y ′ ≥min{Φ′, y + ε}/L > (M -ε)/L = βε y. Therefore, (βε y, γy) is Pareto optimal. 53 Case III: y ∈[M, √ LU + ε] . In this case, ε-Tolerant PST sets the threshold to Φ = y -ε, achieving βε y = (y + ε)/(y -ε) and γy = U/(y -ε). Note that Φ = y -ε γy. If Φ′ > Φ, consider x = min{Φ′ -δ, y + ε} for threshold Φ′, where δ is the infinitesimal quantity. This guarantees the following inequality: βε y ′ ≥min{Φ′, y + ε}/L > (y -ε)/L. Note that r′(y) := (y -ε)/L is an increasing function of y, and r(y) := (y + ε)/(y -ε) is a decreasing function of y. Since y ≥M ≥L + 3ε and r(L + 3ε) = (L + 4ε)/(L + 2ε) (y + ε)/Φ = βε y. If Φ′ > Φ, since Φ > √ LU + ε > √ LU, γ′ y > γy. Therefore, (βε y, γy) is Pareto optimal. Case V: y ∈[U -ε, U] . In this case, ε-Tolerant PST sets the threshold to Φ = LU/(M -ε), achieving βε y = (M -ε)/L and γy = U/(M -ε). If Φ′ > Φ, then γ′ y > γy. If Φ′ U/Φ = βε y. Therefore, (βε y, γy) is Pareto optimal. 54
2510.14884
Learning When Not to Learn: Risk-Sensitive Abstention in Bandits with Unbounded Rewards Sarah Liaw∗ Benjamin Plaut∗ Harvard University University of California, Berkeley Abstract In high-stakes AI applications, even a single action can cause irreparable damage. How- ever, nearly all of sequential decision-making theory assumes that all errors are recoverable (e.g., by bounding rewards). Standard bandit algorithms that explore aggressively may cause irreparable damage when this assumption fails. Some prior work avoids irreparable errors by asking for help from a mentor, but a mentor may not always be available. In this work, we formalize a model of learning with unbounded rewards without a mentor as a two-action contextual bandit with an abstain option: at each round the agent observes an input and chooses either to abstain (always 0 reward) or to commit (ex- ecute a preexisting task policy). Committing yields rewards that are upper-bounded but can be arbitrarily negative, and the commit reward is assumed Lipschitz in the input. We propose a caution-based algorithm that learns when not to learn: it chooses a trusted region and commits only where the available evidence does not already certify harm. Under these conditions and i.i.d. inputs, we establish sublinear regret guarantees, theoretically demonstrating the effectiveness of cautious exploration for deploying learning agents safely in high-stakes environments. 1 INTRODUCTION With AI becoming ubiquitous, many learning systems are now deployed in unpredictable, safety-critical domains, such as process control and manufacturing robotics, autonomous driving, and surgical assistance. In these settings, a single ill-chosen action can cause irreparable and lasting damage with no opportunity ∗Equal contribution. for subsequent recovery. For instance, a self-driving car cannot compensate for a deadly crash by later driving more safely, nor can a medical robot undo a fatal mis- take during surgery. Following Plaut et al. (2025a,b), we refer to such irreparable errors as catastrophes. Despite the risks such deployments pose, there is lim- ited work (and limited theoretical work in particular) on how an agent can learn without ever incurring an irreparable error. The possibility of catastrophes challenges standard frameworks for sequential decision making, especially the familiar notion of optimism under uncertainty. Optimism effectively assumes that early mistakes can be offset (or be compensated for) by later gains, an assumption that is inappropriate when errors are irrecoverable. Instead, these settings call for pessimism under uncertainty: when evidence is insufficient, prefer inaction to risky action. One approach to mitigate these problems is to let the agent ask for help from a mentor in unfamiliar or risky situations. Such human-in-the-loop oversight can block unsafe actions and prevent irreparable errors (even if ordinary, recoverable errors still occur). However, this approach depends on the availability of a capable men- tor, which can be costly or impractical at scale. This motivates a mentor-free alternative: can an agent avoid irreparable errors on its own by acting cautiously when inputs appear unfamiliar? We propose a model of learning in the presence of irreparable costs without a mentor but with an option to abstain from action. The key question is when to abstain, i.e., when not to learn. To focus on this question, we assume the agent has previously learned a baseline policy that works well in-distribution but behaves unpredictably elsewhere. This allows us to streamline the model to two actions: abstain (do nothing) and commit (follow the baseline policy). Ab- staining yields a deterministic safe reward r(x, 0) = 0, while committing yields a reward r(x, 1) ∈(−∞, 1]1. 1The asymmetric bounds on the commit reward reflect that a single action can be catastrophic, whereas it is rare for a single action to yield arbitrarily large benefit. arXiv:2510.14884v1 [cs.LG] 16 Oct 2025 Risk-Sensitive Abstention in Bandits We treat the origin as fully “in-distribution” and as- sume the baseline policy is beneficial there: r(0, 1) > 0. We use the distance from the origin ∥x∥as a measure of how out-of-distribution (OOD) an input is. The commit reward is assumed L-Lipschitz, capturing the idea that similar inputs yield similar outcomes. For our main results we focus on a fixed distribution ν; for our impossibility results we also consider a T-dependent distribution νT . We formalize the tension between exploration and safety via two negative results. First, in the worst case, any algorithm that begins by always exploring (i.e., commits on the first round regardless of the input) can suffer infinite expected regret (Thm. 4.1). Second, when every input lies uniformly far OOD, there is no safe way to explore to identify a beneficial committing region, and sublinear regret is impossible (Thm. 4.2). Together, these results delineate both the necessity and the limits of caution. Motivated by this perspective, we develop a caution- based algorithm that learns only when it can guarantee that an error is not catastrophic (which essentially corresponds to not-too-OOD inputs). This approach yields sublinear expected regret for i.i.d. inputs from any fixed distribution, with bounds that also reflect how often the agent encounters far OOD inputs, while prioritizing the avoidance of irreparable errors. Contributions. Our contributions can be summarized as follows: 1. We introduce a formal model of learning with ir- reparable costs and no external mentor. 2. We prove two impossibility results that delineate the necessity and limits of caution. 3. We develop a caution-based algorithm that achieves sublinear regret for any fixed input distribution. Organization. §3 introduces the formal model and no- tation. §4 presents the impossibility results (Thms. 4.1 and 4.2) and their implications for exploration. § 5 de- scribes our caution-based learning algorithm and states the main regret bound. § 6 outlines the proof strategy and supporting lemmas. 2 RELATED WORK Most prior work on sequential decision-making and safe exploration focuses on settings where errors are ultimately recoverable; here we contrast this with our setting where individual actions can cause irreparable harm. 2.1 Sequential decision-making when all errors are recoverable The literature on sequential decision-making is vast, spanning bandit problems, reinforcement learning, and online learning. See Slivkins et al. (2019), Sutton et al. (1998), and Cesa-Bianchi and Lugosi (2006) for introductions to these (somewhat overlapping) topics, respectively. However, nearly all of this work assumes explicitly or implicitly that any error can be recovered from. This assumption enables the agent to ignore risk and simply try all possible behaviors, since no matter how badly it performs in the short term, it can always eventually make up for it. Indeed, most sequential decision-making algorithms with formal regret bounds have this general structure. This assumption can manifest in different ways. In bandit settings, it suffices to assume that rewards are bounded (or at least have bounded expectation). This assumption implies that the expected regret from any action on any time step is always bounded, which is sufficient for the risk-agnostic exploration mentioned above. In contrast, we allow unbounded negative rewards so that actions can be arbitrarily costly. Indeed, our first negative result (Thm. 4.1) relies on the expected regret for a single action potentially being infinite in our model. In Markov Decision Processes (MDPs), the agent’s actions determine the next state via a transition func- tion, so in addition to bounded rewards, one typically assumes that either the environment is reset at the start of each “episode” (e.g., Azar et al., 2017) or that any state is reachable from any other (e.g., Jaksch et al., 2010). The dependence of standard MDP algorithms on these assumptions was observed by Moldovan and Abbeel (2012a); Cohen et al. (2021), among others. Regardless of the specific form of this assumption, it clearly does not hold in safety-critical contexts where a single action can be catastrophic. 2.2 Safe exploration These issues have motivated a wide field of safe exploration. A full survey is beyond the scope of this paper (see Garc´ıa and Fern´andez, 2015; Gu et al., 2024; Krasowski et al., 2023; Tan et al., 2022 for surveys), so we cover only the most relevant prior work. Avoiding irreparable errors while learning has also been studied empirically across multiple domains (e.g., Saunders et al., 2017; Moldovan and Abbeel, 2012b; Wachi et al., 2023; Zhao et al., 2023; Perkins and Barto, 2003), but here we focus on theoretical work, which is most relevant to our setting. Safe exploration is modeled in two main ways. The first Sarah Liaw, Benjamin Plaut approach is to require the agent to satisfy some sort of constraint in addition to maximizing reward. The con- straint can be entirely separate from reward, as in the case of constrained MDPs (Altman, 1999), or they can be related to the reward (e.g., the agent’s reward must always exceed some baseline). When zero or near-zero constraint violation is required, these formalisms do capture the possibility of irreparable errors. The second approach treats reward as the sole objective, with safety as a necessary but not sufficient property for maximiz- ing reward. Here, irreparable errors correspond to ei- ther unboundedly negative rewards (our work falls into this category) or inescapable “trap” states with poor reward. An agent that obtains very negative rewards or enters trap states clearly cannot obtain high reward. Both of these models must contend with a fundamental obstacle: how does one learn which actions are catastrophic without trying those actions directly? This can be formalized by the so called “Heaven or Hell problem”. Suppose there are two available actions, where one has unbounded positive reward and the other has unbounded negative reward. In this case, the agent can do no better than simply guessing and can never guarantee good regret. This problem shows that some sort of additional assumption is necessary for any meaningful regret guarantees. Below, we categorize work within safe exploration based on which assumption(s) it uses for this purpose. Full prior knowledge. Perhaps the simplest approach is to assume that the agent knows the precise safety constraint upfront (see Zhao et al., 2023 for a survey). This immediately resolves the Heaven or Hell problem; indeed, it eliminates the need for the agent to “learn when not to learn” at all. However, full knowledge of the safety constraint may not hold in practice. In contrast, we only assume that the (1) baseline policy performs well in-distribution and (2) the agent can always safely abstain. Learning constraints using a safe fallback action. There is a growing body of work which shares our assumption of a safe fallback action. Liu et al. (2021); Stradi et al. (2024) use this approach in the constrained MDP model, while Wu et al. (2016); Kazerouni et al. (2017); Lin et al. (2022); Chen et al. (2022) require the reward to exceed a fixed baseline in a bandit model. These papers generally rely on a pair of subtle but crucial assumptions to obtain zero constraint violation: (1) the constraint violation on any given time step is bounded and (2) the baseline policy satisfies the constraints with a known amount of slack (this is called Slater’s gap, although not all of the above papers use this term). This combination of assumptions enables the agent to still explore aggressively with some known probability. Furthermore, the resulting bounds typically depend inversely on Slater’s gap. Our work is complementary to each of these two assumptions. First, rather than assuming global boundedness, we assume that rewards decrease at a bounded rate, i.e., rewards are Lipschitz continuous. Second, rather than dependence on the reward or cost function (in the form of Slater’s gap), our bounds depend on the input distribution: specifically, our bounds degrade as the agent sees more OOD inputs. Our approach may be more or less realistic depending on the specific context, but it notably diverges from the typical way fallback actions are utilized. Asking for help. Perhaps the most common approach in this model is relying on external supervision. This is a growing body of work which uses limited queries to a mentor to prove formal regret guarantees in the presence of irreversible dynamics (Cohen et al., 2021; Cohen and Hutter, 2020; Kosoy, 2019; Maillard et al., 2019; Plaut et al., 2025b,a). However, as the number of deployed AI systems continues to grow, it may be impractical for each one to have a human supervisor. Even in cases where external help will eventually become available, the agent may need to behave safely on its own in the short-term. These considerations motivate our study of how to learn safely in the absence of external help. 2.3 Other related work We briefly discuss some topics that are less directly relevant but still worth mentioning. One is the heavy- tailed bandit model (Bubeck et al., 2013; Agrawal et al., 2021), which studies the case where reward distri- butions are not subgaussian and thus less predictable. While this model does incorporate elements of safety, as long as the expected reward from any action is bounded, risk-agnostic exploration remains valid (as discussed above). Another topic adjacent to our work is the standard Lipschitz bandit model with bounded rewards and bounded domain (see, e.g., Chapters 4 and 8 of Slivkins, 2011). This work shares some similarities with ours, like the algorithmic use of discretization. However, the core of our paper is removing the boundedness assumptions, which introduces a host of new challenges. Finally, there is complementary work on abstention with bounded rewards (Neu and Zhivotovskiy, 2020; Yang et al., 2024). While this line of work also demonstrates the benefits of abstention, it does not address the possibility of irreparable errors. 3 PRELIMINARIES We study a two-action contextual bandit model in which, on each round, the agent observes an input and Risk-Sensitive Abstention in Bandits chooses either to commit, thus executing a fixed task policy that may yield risky outcomes, or to abstain, receiving a safe default reward of zero. In this section, we introduce the formal notation and assumptions used throughout. For k ∈N, let [k] = {1, . . . , k}. Let X = Rn be the input space, T ∈N be the time horizon, and ∥· ∥be the Euclidean norm (though one could also consider a more general metric space). On each time step t ∈[T], the agent observes an input xt ∈X, chooses an action yt ∈{0, 1}, and receives a (noisy) scalar reward; the precise noise assumptions are stated below. Actions and Rewards. We interpret yt = 0 as “abstaining”, a safe default which deterministically yields r(xt, 0) = 0 for any xt ∈X. We interpret yt = 1 as “committing”, which executes a preexisting policy whose reward r(xt, 1) may be arbitrarily negative (catastrophic) but is assumed to have a constant upper bound (rescaled to 1 without loss of generality). This captures the asymmetry of high-stakes settings where catastrophic losses can be unbounded in magnitude, whereas gains typically saturate. Input models. We assume inputs are i.i.d. draws from an unknown distribution ν on X, i.e., x1, . . . , xT i.i.d. ∼ν. We typically take ν to be fixed, but in our impossibility results we also consider the case of T-dependent ν (denoted νT ). We assume bandit feedback: the agent observes only the realized reward of its chosen action. Abstaining provides no information about the counterfactual com- mit reward r(xt, 1), so the agent cannot “learn by abstaining”. Formally, at round t the learner observes rt = r(xt, yt) + ηt, where (ηt)T t=1 are i.i.d. zero-mean σ-subgaussian noise variables, independent of (xt) and of the learner’s internal randomness (specified formally in Def. 3.1). Definition 3.1 (σ-subgaussian). A random variable Z is σ-subgaussian if E[exp(λ(Z −E[Z]))] ≤exp  σ2λ2 2  for all λ ∈R. Equivalently, Z −E[Z] has tails that are dominated by a centered Gaussian with variance proxy σ2. Regularity. We make two assumptions on the reward function: (i) the commit reward r(·, 1) is L-Lipschitz in the Euclidean norm, i.e., there exists L > 0 such that for all x, x′ ∈X, |r(x, 1) −r(x′, 1)| ≤L∥x −x′∥. This is a standard smoothness condition in Lipschitz bandit models (see, e.g., Slivkins et al., 2019) and captures the intuition that similar inputs yield similar commit rewards. Since r(x, 0) ≡0, the abstain reward is 0-Lipschitz. (ii) The in-distribution baseline input yields strictly positive reward when committing, i.e. r(0, 1) > 0. This guarantees that committing is beneficial somewhere (at the origin); without it, the optimal policy would be to always abstain and cautious learning would be impossible. Objective. The agent’s goal is to minimize its (ex- pected) regret, which is the difference between its cu- mulative reward and the optimal cumulative reward. Formally, define Reg(T) = T X t=1  max y∗∈{0,1} r(xt, y∗) −r(xt, yt)  . We take the expectations over the input process (in the stochastic model), the observation noise, and the learner’s internal randomness. The goal is to achieve sublinear expected regret, i.e., E[Reg(T)] = o(T), equivalently E[Reg(T)]/T →0 as T →∞. 4 THE VIRTUES AND LIMITS OF CAUTION In this section, we provide two impossibility results that demonstrate the importance and limitations of caution in high-stakes, unbounded reward bandits. First, caution is necessary: if an agent commits with non-negligible probability on inputs that are far OOD, catastrophic tail losses dominate—indeed, even a single risk-agnostic exploratory commit can incur infinite expected regret. This kind of “incautious exploration” is exactly how standard bandit algorithms behave when they begin by pulling every arm at least once. Second, caution has limits: when the input stream is uniformly far OOD, there is no way to explore cautiously to identify a beneficial committing region without risking catastrophe. In such settings, sublinear regret is not possible and the optimal strategy is to abstain on every time step. Theorem 4.1 (The need for caution). Let ν be any dis- tribution over X such that Ex∼ν[∥x∥] = ∞and assume x1, . . . , xT ∼ν i.i.d. Then there exists a reward func- tion r such that any algorithm which always commits on the first time step satisfies E[Reg(T)] = ∞. Proof. Define r(x, 1) = 1 −L∥x∥for all x ∈X. Then E[Reg(T)] = E " T X t=1  max y∗∈{0,1} r(xt, y∗) −r(xt, yt) # ≥E  max y∗∈{0,1} r(x1, y∗) −r(x1, y1)  ≥E[0 −(1 −L∥x1∥)] Sarah Liaw, Benjamin Plaut = L E x∼ν[∥x∥] −1 = ∞ as required. The proof can easily be modified to handle the cases where the first commit is taken with constant probability (rather than probability 1) or where the algorithm abstains for a constant number of initial rounds. Essentially, this negative result applies to any algorithm that is not cautious, i.e., that explores without considering how OOD xt is. However, caution can only get us so far. While it pre- vents catastrophic first commits, some exploration is necessary to obtain sublinear regret. If all inputs are far OOD, then there is no safe way to explore, so the agent has no choice but to always abstain. Equivalently, this can be phrased by considering i.i.d. inputs from a T- dependent distribution νT supported on {x : ∥x∥= T}. Theorem 4.2 (The limits of caution). Let νT be any distribution supported on {x : ∥x∥= T}, and suppose x1, . . . , xT i.i.d. ∼νT . Then no algorithm can guarantee E[Reg(T)] ∈o(T). Proof. Define r−(x, 1) := 1 −L∥x∥and r+(x, 1) := 1, with r±(x, 0) := 0. Since we only care about asymp- totics, we can restrict our attention to T > 1/L. Then for ∥xt∥= T, optimal behavior for r+ is to always commit, while optimal behavior for r−is to always abstain. We show maxr∈{r−,r+} E[Reg(T)] ∈Ω(T). To do so, we use a mild version of the probabilistic method. Let U(r−, r+) be the uniform distribution over {r−, r+}. It suffices to show Er∼U E[Reg(T)] ∈Ω(T), where the second expectation is over x1, . . . , xT and y1, . . . , yT . Let E be the event that the agent ever commits. If E holds, there exists i ∈[T] with yi = 1. Since yi is independent of r, E r E[Reg(T) | E] = E r E " T X t=1  max y∗∈{0,1} r(xt, y∗) −r(xt, yt)  | E # ≥Pr[r = r−] E  max y∗∈{0,1} r(xi, y∗) −r(xi, yi) | r = r−  = LT −1 2 On the other hand, if E does not occur, then E r E[Reg(T) | ¬E] ≥Pr[r = r+] E " T X t=1  max y∗∈{0,1}r+(xt, y∗)−r+(xt, yt)  |¬E # ≥T 2 . Then by the law of total expectation, E r E[Reg(T)] = Pr[E] E r E[Reg(T) | E] + Pr[¬E] E r E[Reg(T) | ¬E] ≥min Pr[E], Pr[¬E]  min LT −1 2 , T 2  ∈Ω(T) as required. 5 ALGORITHM AND MAIN RESULT Following the negative results in § 4, we propose an algorithm (Algorithm 1) that operationalizes cautious learning: only learn in regions that are not too far OOD and where the available evidence does not already certify that committing is harmful. Informally, we define a trusted region around the origin whose radius grows with the time horizon, reflecting the maximum regret we are willing to tolerate—intuitively, this corresponds to allowing mistakes that are bad but not catastrophic. We then discretize the region into bins to exploit Lipschitz continuity. Within each bin, the commit reward cannot vary by more than a Lipschitz discretization error, so it suffices to estimate a single per-bin mean. The agent always abstains outside the trusted region, and inside it abstains in any bin whose pessimistic upper bound on reward is negative; otherwise it commits to gather information. More precisely, the algorithm defines a ball of radius m(T) around the origin, treating inputs outside this ball as too OOD to test. The ball is partitioned into n-dimensional hypercubes (bins) of side length w(T). By Lipschitz continuity, the variation of r(·, 1) within any bin B is at most L√nw(T). For each bin B, the algorithm maintains its empirical mean ˆµB and a confidence radius γ(k) after k commits in B. If ˆµB + γ(k) + L√nw(T) < 0, then B is certified unsafe and the algorithm abstains there permanently. Figure 1 shows a schematic of the algorithm. We saw in § 4 that the problem is impossible when inputs are too far OOD. A natural way to quantify this is via the amount of probability mass that lies outside a given radius, captured by the radial survival function: Definition 5.1 (Radial survival function). For any radius R ≥0, the radial survival function of ν is ¯ν(R) := Prx∼ν  ∥x∥≥R  . We are now ready to state our main result. Risk-Sensitive Abstention in Bandits Algorithm 1 Risk-Sensitive Abstention Algorithm Inputs: m : N →R>0, w : N →R>0 H ←partition of X into n-cubes of side length w(T) B ←{B ∈H : ∃x ∈B with ∥x∥≤m(T)} σw ← p nL2w(T)2 + σ2 γ(k) := q c−1σ2w ln(2T 4) k where c is the absolute con- stant from Lemma A.2 (kB, ˆµB) = (0, 0) for all B ∈B for t = 1, . . . , T do if ∃B ∈B s.t. xt ∈B then ▷xt is not too OOD: it’s safe to learn if ˆµB + γ(kB) + L√nw(T) < 0 then ▷We already know xt is bad: don’t learn Abstain (yt = 0) else ▷xt might be good: learn Commit (yt = 1) kB ←kB + 1 ˆµB ←ˆµB + rt−ˆµB kB else ▷xt is far OOD: it’s too risky to learn Abstain (yt = 0) Theorem 5.2. In the stochastic setting with xt ∼ν i.i.d., Algorithm 1 with w(T) = T −1/(n+2) and m(T) = ln T satisfies E[Reg(T)] ∈O  (L + σ2)T n+1 n+2 (ln T)n+1 + T ¯ν(ln T)  . The first term is typical for Lipschitz contextual ban- dits and reflects the curse of dimensionality (see, e.g., Slivkins et al., 2019, Thms. 4.11–4.12; Plaut et al., 2025a, Thm. 10). The T ¯ν(ln T) term is unusual mainly because un- bounded domains are unusual: for bounded domains, ¯ν(ln T) = 0 for all large T. Our analysis deals with far OOD inputs directly and the bound necessarily degrades as such inputs become more frequent. This dependence is unavoidable: the construction in Thm. 4.2 sets xt = T for all t ∈[T], hence ¯ν(ln T) = 1 and the bound in Thm. 5.2 becomes linear, matching the impossibility result. By contrast, for any fixed distribution ν, we have ¯ν(ln T) →0 as T →∞, so T ¯ν(ln T) = o(T) and the overall regret stays sublinear. For example, if ν is subgaussian with ¯ν(r) ≤e−cr2, then T ¯ν(ln T) ≤Te−c(ln T )2 = o(1). If ν is subexponential with ¯ν(r) ≤e−cr, then T ¯ν(ln T) ≤T · T −c = T 1−c = o(T). If ν has polynomial tails with ¯ν(r) ≍r−α for α > 0, then T ¯ν(ln T) ≍T/(ln T)α = o(T). If one has prior knowledge of ν, the choice of m(T) can be tailored more precisely than our generic setting m(T) = ln T. In particular, for polynomial tails, setting m(T) = T c 0 x1 x2 w(T) grid (bins) bin ∈B (trusted) certified negative ∥x∥≤m(T) Figure 1: Trusted region of radius m(T) around the origin, partitioned into bins of side w(T). Any square intersecting the ball is shown fully green (bin ∈B). Certified negative bins are shown hatched red. The agent abstains outside the ball. for small c > 0 improves the bound to O(T 1−cα). Thus, the regret decomposes into a geomet- ric/statistical term from discretization and concentra- tion inside the trusted region, and a tail term from far OOD inputs; both are sublinear for any fixed ν. 6 PROOF SKETCH We now outline the logical structure of the proof of Thm. 5.2; we also provide the intuition behind each step. Full technical details and complete proofs of all lemmas are deferred to Appendix A. Let m(T), w(T) be as in Algorithm 1, and let B be the set of bins intersecting the ball of radius m(T). For any B ∈B, let µB = Ex∼ν[ r(x, 1) | x ∈B ] be its true mean commit reward, let kB(t) be the number of commits taken in B by the end of round t (so kB(0) = 0), and let ˆµB(k) be the empirical mean in B after k commits (i.e., the running mean from Algorithm 1 indexed by its commit count). To control estimation error we define the confidence radius γ(k) = r c−1σ2w ln(2T 4) k , σ2 w = nL2w(T)2 + σ2, where c > 0 is the absolute constant from Lemma A.2. Here σ2 w aggregates the combines observation noise σ2 with the Lipschitz-induced within-bin variation of (L√nw(T))2. Sarah Liaw, Benjamin Plaut Define the good event under which all per-bin estimates are uniformly accurate over the realized commit counts: G = n ∀B ∈B, ∀t ∈[T] : |ˆµB(kB(t))−µB| ≤γ(kB(t)) o . On G, each empirical mean is a reliable proxy for its bin’s true mean at every commit count that occurs along the algorithm’s trajectory. The analysis condi- tions on G (which holds with high probability by a union bound over realized bin–count pairs), and then decomposes regret into: (i) commits inside the trusted region (handled by certification of negative bins plus a margin term), and (ii) abstentions outside the ball of ra- dius m(T) (quantified by the radial survival function). Lemma 6.1 (Per-bin concentration). For any B ∈B and t ∈[T], Pr [|ˆµB(kB(t)) −µB| > γ(kB(t))] ≤T −4. Proof idea. Fix B and t and condition on the kB(t) commit times t1 < · · · < tkB(t) with xtj ∈B. Decom- pose ˆµB(kB(t)) −µB = 1 kB(t) kB(t) X j=1 r(xtj, 1) −µB  + 1 kB(t) kB(t) X j=1 ηtj. By Lipschitz continuity and the fact that all xtj lie in a single cube of side w(T), we have r(xtj, 1) − µB ≤L√nw(T) (see Lem. A.3), so the first term is bounded and hence subgaussian with variance proxy O((L√nw(T))2). The second term is the ob- servation noise, which is σ-subgaussian by assump- tion. Standard subgaussian tail bounds then give Pr (|ˆµB(kB(t)) −µB| > γ(kB(t))) ≤T −4. □ Lem. 6.1 gave concentration for each fixed bin and commit count. To extend this guarantee uniformly, we apply a union bound over the at most T bin–time pairs actually realized by the algorithm. Lemma 6.2 (Uniform concentration bound). With probability at least 1 −T −2, the good event G holds. Equivalently, Pr(¬G) ≤T −2. Proof idea. There are at most T 2 relevant pairs (B, t) along the trajectory of the algorithm. Each has failure probability T −4 by Lem. 6.1. A union bound yields failure probability at most T −2. □ Recall that the algorithm abstains permanently in any bin once ˆµB(kB(t)) + γ(kB(t)) + L√nw(T) < 0. On the good event G, bins with sufficiently negative mean are thus certified unsafe after finitely many commits. (For bins near the decision boundary, certification may not occur, but their per-round regret is O(L√nw(T)), so their total contribution is small and accounted for by the margin term later.) We now compute how many commits are needed to certify a negative bin. Lemma 6.3 (Samples for negative certification). Con- sider any t ∈[T] and B ∈B. On G, if µB < −L√nw(T) and kB(t) > 4c−1σ2 w ln(2T 4) (µB+L√nw(T ))2 , then bin B is certified negative at time t. Proof idea. On the good event G, certification in bin B occurs when ˆµB +γ(kB(t))+L√nw(T) < 0. Using the worst-case deviation ˆµB = µB + γ(kB(t)), this reduces to µB + 2γ(kB(t)) + L√nw(T) < 0. Plugging in the definition of γ(kB(t)) and solving for k yields the stated number of commits needed for certification. □ Next, we bound the geometry of the trusted region. By construction, B consists of all bins intersecting the ball of radius m(T). Consequently, their union S B∈B B is contained within a slightly larger ball. This enlarged region will be useful both for bounding how negative rewards can be (via Lipschitz continuity) and for controlling the number of bins (via volume packing). Let v1 be the volume of the unit ball {x ∈X : ∥x∥≤1}. Lemma 6.4 (Trusted cover is a slightly larger ball). Every x ∈S B∈B B satisfies ∥x∥≤R(T) = m(T) + √nw(T). We now bound the regret from a truly unsafe bin before it is certified. Let ∆t := maxy∈{0,1} r(xt, y) −r(xt, yt) be the instantaneous regret at time t. Lemma 6.5 (Per-bin commit regret). On G, for any B ∈B with kB(T) ≥1 and µB < −(2L√n + 1)w(T), X t: xt∈B, yt=1 ∆t ≤2LR(T) + 32c−1σ2 w ln(2T 4) w(T) . Proof idea. Lem. 6.3 shows that a negative bin is certi- fied after O(σ2 w/(µB + L√nw(T))2) commits. Each such commit incurs at most O(|µB|) regret, but Lem. 6.4 ensures that µB ≥−LR(T), so the loss per commit is bounded. Multiplying the number of pre- certification commits by the maximum per-step regret yields the stated bound. □ Now that we have controlled the regret contribution of each individual bin, we sum across all bins that are ever visited and include the effect of near-margin bins (those with µB close to zero). Such bins may never be certified, but their regret per commit is small, so their total contribution is still controlled. Lemma 6.6 (Total commit regret inside the trusted region). On G, X t: yt=1 ∆t ≤v1R(T)n w(T)n  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  + (3L√n + 1)w(T)T. Proof idea. We partition commits into bins with de- cisively negative mean and those near the decision Risk-Sensitive Abstention in Bandits boundary. For µB well below zero, Lem. 6.5 bounds the regret before certification. Summing over all such bins gives at most |B| times the per-bin cost, and by the packing bound |B|w(T)n ≤v1R(T)n, this gives the first term. For bins near the margin, the algorithm may continue committing longer, but Lipschitzness bounds the per-round regret by (3L√n + 1)w(T), giving the second term after T rounds. □ Lem. 6.6 completes the analysis of commit regret inside the trusted region. Combining this with the abstention regret outside the ball of radius m(T), we yield the final rate in Thm. 5.2 as follows. Proof idea of Thm. 5.2. Regret decomposes into (i) abstention outside the trusted ball; (ii) commits inside. For (i), each input with ∥xt∥> m(T) contributes at most 1, giving T ¯ν(m(T)), which is sublinear for any fixed ν since ¯ν(ln T) →0. For (ii), Lipschitz conti- nuity bounds the within-bin variation, and a uniform concentration event (probability 1 −O(T −2)) ensures empirical means stay within confidence radii. Nega- tive bins are certified after O(σ2 w/margin2) commits, so each contributes at most O(LR(T)+σ2 w log T/w(T)) regret. Summing across O((m(T)/w(T))n) bins gives ˜O  R(T)nLR(T)w(T)−n + σ2 ww(T)−(n+1) , and bins near the decision boundary contribute an additional O(w(T)T). □ If we ignore log factors, the leading terms trade off R(T)nw(T)−(n+1) | {z } variance-driven vs. w(T)T | {z } margin-driven . Balancing these yields the optimal choice w(T) ≍ T −1/(n+2). Independently, the radius m(T) trades off the abstention term T ¯ν(m(T)) against the growth of the volume factor R(T)n. Choosing m(T) = ln T makes T ¯ν(m(T)) sublinear for any fixed ν (since ¯ν(ln T) →0) while increasing R(T) only logarithmically. 7 CONCLUSION In this work, we introduced a formal model for safe learning under distribution shift in contextual bandits with catastrophic tails, provided impossibility results that clarify when sublinear regret is unattainable, and gave a cautious risk-sensitive algorithm with sublinear regret under suitable conditions. Our work has several limitations, which also provide directions for future work, including handling richer structure beyond Lipschitz continuity, incorporating adaptive or learned metrics, and extending the analysis to non-i.i.d. inputs or worst-case sequences. Our regret bound can be close to linear. In Thm. 5.2, the abstention term T ¯ν(ln T) can dom- inate for heavy-tailed inputs (e.g., power laws). This is the price of caution: avoiding catastrophic far OOD commits requires systematic abstention in the tails, and the resulting regret can be unavoidable (see the impos- sibility in Thm. 4.2). Moreover, while the bound is sub- linear for every fixed n, the exponent (n+1)/(n+2) →1 as n →∞, which is a standard curse of dimensionality in Lipschitz contextual bandits (Slivkins et al., 2019, Thms. 4.11–4.12); see also Plaut et al. (2025a, Thm. 10). While we do not expect to remove these dependencies entirely, future work could improve rates. The sim- plicity of Algorithm 1 is appealing but ignores useful information: commits inform not only their own bin but also nearby bins via Lipschitz continuity, and certifying a bin as positive could justify expanding the trusted re- gion around it. Additional structural assumptions, such as margin/low-noise conditions, intrinsic low dimension- ality, or smoothness beyond Lipschitz, could also help. Assumptions may not always hold. Our guaran- tees here rely on i.i.d. inputs and Lipschitz continuity of the commit reward. In practice, inputs may drift or exhibit temporal dependence, and rewards may be only piecewise smooth or even non-smooth. Extending the analysis to weaker smoothness conditions or drifting processes is an important direction. Moreover, Algorithm 1 assumes knowledge of L, σ2, and T. While knowledge of T can be handled by the standard doubling trick (see Slivkins et al. (2019, §1.5)), L and σ2 may be unknown. Thus, developing parameter-free (or adaptively tuned) algorithms that remain cautious would increase robustness. No unconditionally irreparable errors. Obtaining regret −T on a single time step is irreparable in the sense that it automatically implies linear regret on that run. However, errors in our model are only irreparable for a fixed T: for any error, there exists a large enough T that the error is no longer catastrophic. It may be worth considering alternative models of catastrophe such as inescapable trap states in MDPs which do allow for errors that are unconditionally catastrophic. Broader impact. This work is motivated by safety concerns in the deployment of learning systems in high- stakes domains. We provide theoretical justification for abstention as a mechanism for averting catastrophic errors under distribution shift, and abstention is also a practical choice for deployed systems. Agents that can defer action when uncertain may be safer and more trustworthy, but abstention mechanisms must be de- signed carefully to avoid consequences such as excessive conservatism or over-reliance on human supervision. Sarah Liaw, Benjamin Plaut Acknowledgments This work was supported by a gift from Open Philan- thropy to the Center for Human-Compatible AI (CHAI) at UC Berkeley. This work was conducted while Sarah Liaw was a research intern at CHAI. We would also like to thank Vamshi Bonagiri and Pavel Czempin for helpful discussions and feedback. References Agrawal, S., Juneja, S. K., and Koolen, W. M. (2021). Regret minimization in heavy-tailed bandits. In Conference on Learning Theory, pages 26–62. PMLR. Altman, E. (1999). Constrained Markov Decision Pro- cesses, volume 7. CRC Press. Azar, M. G., Osband, I., and Munos, R. (2017). Min- imax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, pages 263–272. PMLR. Boucheron, S., Lugosi, G., and Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press. Bubeck, S., Cesa-Bianchi, N., and Lugosi, G. (2013). Bandits with heavy tail. IEEE Transactions on Information Theory, 59(11):7711–7717. Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, learning, and games. Cambridge university press. Chen, T., Gangrade, A., and Saligrama, V. (2022). Strategies for safe multi-armed bandits with logarith- mic regret and risk. Cohen, M. K., Catt, E., and Hutter, M. (2021). Curios- ity Killed or Incapacitated the Cat and the Asymp- totically Optimal Agent. IEEE Journal on Selected Areas in Information Theory, 2(2):665–677. Con- ference Name: IEEE Journal on Selected Areas in Information Theory. Cohen, M. K. and Hutter, M. (2020). Pessimism About Unknown Unknowns Inspires Conservatism. In Pro- ceedings of Thirty Third Conference on Learning Theory, pages 1344–1373. PMLR. Garc´ıa, J. and Fern´andez, F. (2015). A Comprehensive Survey on Safe Reinforcement Learning. Journal of Machine Learning Research, 16(42):1437–1480. Gu, S., Yang, L., Du, Y., Chen, G., Walter, F., Wang, J., and Knoll, A. (2024). A review of safe rein- forcement learning: Methods, theories, and applica- tions. 46(12):11216–11235. Conference Name: IEEE Transactions on Pattern Analysis and Machine Intel- ligence. Jaksch, T., Ortner, R., and Auer, P. (2010). Near- optimal Regret Bounds for Reinforcement Learning. Journal of Machine Learning Research, 11(51):1563– 1600. Kazerouni, A., Ghavamzadeh, M., Abbasi-Yadkori, Y., and Roy, B. V. (2017). Conservative contextual linear bandits. Kosoy, V. (2019). Delegative Reinforcement Learning: learning to avoid traps with a little help. arXiv. arXiv:1907.08461 [cs, stat]. Krasowski, H., Thumm, J., M¨uller, M., Sch¨afer, L., Wang, X., and Althoff, M. (2023). Provably safe reinforcement learning: Conceptual analysis, survey, and benchmarking. Lin, J., Lee, X. Y., Jubery, T., Moothedath, S., Sarkar, S., and Ganapathysubramanian, B. (2022). Stochas- tic conservative contextual linear bandits. Liu, T., Zhou, R., Kalathil, D., Kumar, P., and Tian, C. (2021). Learning policies with zero or bounded constraint violation for constrained MDPs. Advances in Neural Information Processing Systems, 34:17183– 17193. Maillard, O.-A., Mann, T., Ortner, R., and Mannor, S. (2019). Active Roll-outs in MDP with Irreversible Dynamics. Moldovan, T. M. and Abbeel, P. (2012a). Safe explo- ration in Markov decision processes. In Proceedings of the 29th International Conference on Machine Learning, ICML’12, pages 1451–1458, Madison, WI, USA. Omnipress. Moldovan, T. M. and Abbeel, P. (2012b). Safe explo- ration in markov decision processes. Neu, G. and Zhivotovskiy, N. (2020). Fast rates for online prediction with abstention. Perkins, T. J. and Barto, A. G. (2003). Lyapunov design for safe reinforcement learning. J. Mach. Learn. Res., 3(null):803–832. Plaut, B., Li´evano-Karim, J., Zhu, H., and Russell, S. (2025a). Safe learning under irreversible dynamics via asking for help. Plaut, B., Zhu, H., and Russell, S. (2025b). Avoiding catastrophe in online learning by asking for help. In Proceedings of the 42nd International Conference on Machine Learning. Saunders, W., Sastry, G., Stuhlmueller, A., and Evans, O. (2017). Trial without error: Towards safe rein- forcement learning via human intervention. Slivkins, A. (2011). Contextual Bandits with Similarity Information. In Proceedings of the 24th Annual Con- ference on Learning Theory (COLT), pages 679–702. ISSN: 1938-7228. Slivkins, A. et al. (2019). Introduction to multi-armed bandits. Foundations and Trends® in Machine Learning, 12(1-2):1–286. Risk-Sensitive Abstention in Bandits Stradi, F. E., Castiglioni, M., Marchesi, A., and Gatti, N. (2024). Learning adversarial MDPs with stochastic hard constraints. arXiv preprint arXiv:2403.03672. Sutton, R. S., Barto, A. G., et al. (1998). Reinforcement learning: An introduction, volume 1. MIT press Cambridge. Tan, V. Y. F., L.A., P., and Jagannathan, K. (2022). A survey of risk-aware multi-armed bandits. In Raedt, L. D., editor, Proceedings of the Thirty-First Inter- national Joint Conference on Artificial Intelligence, IJCAI-22, pages 5623–5629. International Joint Con- ferences on Artificial Intelligence Organization. Sur- vey Track. Wachi, A., Hashimoto, W., Shen, X., and Hashimoto, K. (2023). Safe exploration in reinforcement learning: A generalized formulation and algorithms. Wu, Y., Shariff, R., Lattimore, T., and Szepesv´ari, C. (2016). Conservative bandits. In International Conference on Machine Learning (ICML). Yang, J., Jin, T., and Tan, V. Y. F. (2024). Multi- armed bandits with abstention. Zhao, W., He, T., Chen, R., Wei, T., and Liu, C. (2023). State-wise Safe Reinforcement Learning: A Survey. volume 6, pages 6814–6822. ISSN: 1045-0823. Sarah Liaw, Benjamin Plaut A PROOF OF MAIN RESULT A.1 Proof notation The proof will use the following notation: 1. The true mean of bin B is µB = Ex∼ν[r(x, 1) | x ∈B]. 2. Let kB(t) denote the value of the variable kB in Algorithm 1 at the end of time step t. 3. Let ˆµB(k) denote the value of the variable ˆµB in Algorithm 1 after k rewards from bin B have been observed. 4. Let σw = p nL2w(T)2 + σ2 for brevity. 5. Define the confidence radius γ(k) = q c−1σ2w ln(2T 4) k where c is the absolute constant from Lemma A.2. 6. Define the good event G = {∀t ∈[T], ∀B ∈B where kB(t) > 0 : |ˆµB(kB(t)) −µB| ≤γ(kB(t))}. 7. A bin B is certified negative at time t if ˆµB(kB(t)) + γ(kB(t)) + L√nw(T) < 0. 8. Let ¯ν be the radial survival function of ν. That is, for any y ∈R≥0, ¯ν(y) = Prx∼ν[∥x∥≥y]. 9. Let ∆t = maxy∗∈{0,1} r(xt, y∗) −r(xt, yt) be the single-step regret at time t. 10. Let v1 be the volume of the unit ball {x ∈X : ∥x∥≤1}. 11. Let R(T) = m(T) + √nw(T). This will be the maximum distance of any input in ∪B∈BB from the origin. Lemma A.1 (Hoeffding’s Lemma, Lemma 2.2 in Boucheron et al., 2013). If Z is a random variable taking values in the bounded interval [a, b], then Z is ( b−a 2 )-subgaussian. Lemma A.2 (Hoeffding’s inequality, subgaussian version). Let X1, . . . , Xk be independent random variables with mean zero, where each Xi is σi-subgaussian for some σi > 0. Then there exists an absolute constant c > 0 such that for any ε > 0, Pr " k X i=1 Xi > ε # ≤2 exp − cε2 Pk i=1 σ2 i ! Lemma A.3. If x ∈B ∈B, then |r(x, 1) −µB| ≤L√nw(T). Proof. We must prove that r(x, 1) ≥µB −L√nw(T) and r(x, 1) ≤µB + L√nw(T). Let r−= infx′∈B r(x′, 1) and r+ = supx′∈B r(x′, 1). Then r−≤µB ≤r+ and r−≤r(x, 1) ≤r+. Next, for any ε > 0, there exists x−, x+ ∈B such that r(x−, 1) −ε < r−and r(x+, 1) + ε > r+ (if not, this contradicts r−and r+ being the infimum and supremum). Then r(x, 1) and µB belong to the interval [r−, r+], which is a subset of the interval [r(x−, 1) −ε, r(x+, 1) + ε]. Since x−and x+ belong to the same n-hypercube with side length w(T), ∥x−−x+∥≤√nw(T). Then by Lipschitz continuity, |r(x+, 1) −r(x−, 1)| = r(x+, 1) −r(x−, 1) ≤L√nw(T). Therefore r(x, 1) and µB belong to the same interval of length L√nw(T) + 2ε, so |r(x, 1) −µB| ≤L√nw(T) + 2ε. Since this holds for all ε > 0, we must have |r(x, 1) −µB| ≤L√nw(T). Lemma 6.1 (Per-bin concentration). For any B ∈B and t ∈[T], Pr [|ˆµB(kB(t)) −µB| > γ(kB(t))] ≤T −4. Proof. Fix a bin B and t ∈[T]. Let k = kB(t) for brevity and let t1 < t2 < . . . tk be the set of time steps i ≤t with xi ∈B and yi = 1. For each j ∈[k], define Zj = r(xtj, 1) −µB. The idea is to apply Lemma A.2 to Z1, . . . , Zk and ηt1, . . . , ηtk. To do so, we establish key three properties of Z1, . . . , Zk. Property 1. Fix some j ∈[k]. Since xtj ∈B, Lemma A.3 implies that |r(xtj, 1) −µB| ≤L√nw(T). Thus the random variable Zj = r(xtj, 1) −µB is always belongs to an interval of length 2L√nw(T): specifically, [−L√nw(T), L√nw(T)]. Then by Lemma A.1, Zj is (L√nw(T))-subgaussian. Property 2. Observe that the algorithm’s behavior does not distinguish between inputs in the same bin. Thus for any i ∈[T], conditional on xi ∈B, xi is independent of y1, . . . , yi (though clearly not independent in general). By assumption, xi is independent of x1, . . . , xi−1. Therefore E[r(xtj, 1)] = E  r(xi, 1) | xi ∈B and yi = 1 and kB(i) = j −1  = E[r(xi, 1) | xi ∈B] Risk-Sensitive Abstention in Bandits = E x∼ν[r(x, 1) | x ∈B] = µB Therefore E[Zj] = E[r(xtj, 1)] −µB = 0. Also, since xt1, . . . , xtk are iid, Z1, . . . , Zk are also iid. Property 3. We claim that Z1, . . . , Zk are also independent of ηt1, . . . , ηtk. One way to see this is imagine that at t = 0, for each bin B, we take k samples ηt1, . . . , ηtk which are independent from each other and also from Z1, . . . , Zk. Then on each time step i ∈[t], if xi ∈B, we let ηi be equal to the next ηtj that has not already been used. This process is equivalent to randomly sampling ηi on each time step, and makes it clear that ηt1, . . . , ηtk are independent from Z1, . . . , Zk.2 Thus Z1, . . . , Zk, ηt1, . . . , ηtk are independent random variables with mean zero, where each Zj is (L√nw(T))- subgaussian and each ηtj is σ-subgaussian. Then by Lemma A.2, for any ε > 0, Pr   k X j=1 Zj + k X j=1 ηtj > ε  ≤2 exp − cε2 Pk j=1(L√nw(T))2 + Pk j=1 σ2 ! = 2 exp  −cε2 kσ2w  Note that the ˆµB(k) = 1 k Pk j=1 rtj = 1 k Pk j=1(r(xtj, 1) + ηtj). Then ˆµB(k) −µB = 1 k Pk j=1(Zj + ηtj). Set ε = kγ(k) = p kc−1σ2w ln(2T 4) to get Pr [|ˆµB(k) −µB| > γ(k)] = Pr h k|ˆµB(k) −µB| > p kc−1σ2w ln(2T 4) i = Pr   k X j=1 Zj + k X j=1 ηtj > p kc−1σ2w ln(2T 4)   ≤2 exp(−ln(2T 4)) = 2 exp  ln  1 2T 4  = T −4 as required. Lemma 6.2 (Uniform concentration bound). With probability at least 1−T −2, the good event G holds. Equivalently, Pr(¬G) ≤T −2. Proof. Let J be the the number of bins that receive at least one commit, i.e., J = |{B ∈B : ∃t ∈[T] s.t. xt ∈ B, yt = 1}|. For each j ∈[J], let Bj be the jth bin to receive a commit. Then Pr[¬G] = E[Pr[¬G | J, B1, . . . , BJ]] (Law of total expectation) = E  Pr   J[ j=1 T[ t=1 {|ˆµBj(kB(t)) −µBj| > γ(kB(t))}   J, B1, . . . , BJ   (Direct negation) ≤E   J X j=1 T X t=1 Pr[|ˆµBj(kB(t)) −µBj| > γ(kB(t))] J, B1, . . . , BJ   (Union bound) ≤E   J X j=1 T X t=1 T −4 J, B1, . . . , BJ   (Lemma 6.1) ≤E h T −2 J, B1, . . . , BJ i (J ∈[T]) ≤T −2 (Expectation of a constant) as required. 2This is similar to the “reward tape” argument used in Section 1.3.1 of Slivkins et al. (2019). Sarah Liaw, Benjamin Plaut Lemma 6.3 (Samples for negative certification). Consider any t ∈[T] and B ∈B. On G, if µB < −L√nw(T) and kB(t) > 4c−1σ2 w ln(2T 4) (µB+L√nw(T ))2 , then bin B is certified negative at time t. Proof. Note that µB < −L√nw(T) implies that the denominator is well-defined. By assumption on kB(t), γ(kB(t)) = s c−1σ2w ln(2T 4) kB(t) < r (µB + L√nw(T))2 4 = |µB + L√nw(T)| 2 = −µB + L√nw(T) 2 By definition of G, we have −γ(kB(t)) ≤ˆµB(kB(t)) −µB ≤γ(kB(t)), so ˆµB(kB(t)) + γ(kB(t)) + L√nw(T) ≤µB + 2γ(kB(t)) + L√nw(T) < µB −(µB + L√nw(T) + L√nw(T) = 0 so B is certified negative at time t. Lemma 6.4 (Trusted cover is a slightly larger ball). Every x ∈S B∈B B satisfies ∥x∥≤R(T) = m(T) + √nw(T). Proof. If x ∈B for some B ∈B, there must exist x′ ∈B such that ∥x′∥≤m(T). The maximum distance between any pair of points in an n-cube with side length w(T) is √nw(T). Thus by the triangle inequality, x satisfies ∥x∥≤∥x′∥+ ∥x −x′∥≤m(T) + √nw(T) = R(T) as required. Lemma 6.5 (Per-bin commit regret). On G, for any B ∈B with kB(T) ≥1 and µB < −(2L√n + 1)w(T), X t: xt∈B, yt=1 ∆t ≤2LR(T) + 32c−1σ2 w ln(2T 4) w(T) . Proof. Since µB < −(2√nL+1)w(T) < −L√nw(T) and G holds, Lemma 6.3 implies that B is certified negative on the first time step t such that kB(t) > 4c−1σ2 w ln(2T 4) (µB+L√nw(T ))2 . Therefore |{t ∈[T] : xt ∈B, yt = 1}| ≤⌈4c−1σ2 w ln(2T 4) (µB+L√nw(T ))2 ⌉≤ 1 + 4c−1σ2 w ln(2T 4) (|µB|−L√nw(T ))2 . Since |µB| ≥(2L√n + 1)w(T) ≥2L√nw(T), we have |µB| = |µB| 2 + |µB| 2 ≥|µB| 2 + L√nw(T) so |µB| −L√nw(T) ≥|µB/2|. Therefore (|µB| −L√nw(T))2 ≥µ2 B/4, so |{t ∈[T] : xt ∈B, yt = 1}| ≤ 1 + 16c−1σ2 w ln(2T 4) µ2 B . For any t ∈[T] such that yt = 1, either yt = 1 is optimal, in which case the single-step regret ∆t is 0, or yt = 0 is optimal, if which case ∆t = −r(xt, 1). If xt ∈B, Lemma A.3 implies that r(xt, 1) ≥µB −L√nw(T). Since µB < −L√nw(T), we have r(xt, 1) ≥2µB. Hence X t:xt∈B,yt=1 ∆t ≤ X t:xt∈B,yt=1 (−2µB) = |{t ∈[T] : xt ∈B, yt = 1}| · 2|µB| ≤  1 + 16c−1σ2 w ln(2T 4) µ2 B  · 2|µB| Risk-Sensitive Abstention in Bandits = 2|µB| + 32c−1σ2 w ln(2T 4) |µB| ≤2|µB| + 32c−1σ2 w ln(2T 4) (2√nL + 1)w(T) ≤2|µB| + 32c−1σ2 w ln(2T 4) w(T) with the last step due to 2√nL + 1 ≥1. By Lemma 6.4, any x ∈B satisfies ∥x∥≤R(T). Thus by Lipschitz con- tinuity, r(x, 1) ≥r(0, 1) −LR(T) > −LR(T) for all x ∈B. ThusµB = Ex∼ν[r(x, 1) | x ∈B] ≥Ex∼ν[−LR(T)] = −LR(T), so X t:xt∈B,yt=1 ∆t ≤2LR(T) + 32c−1σ2 w ln(2T 4) w(T) as required. Lemma 6.6 (Total commit regret inside the trusted region). On G, X t: yt=1 ∆t ≤v1R(T)n w(T)n  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  + (3L√n + 1)w(T)T. Proof. For each t ∈[T], let B(t) denote the bin to which xt belongs. Partition the time steps with commits into S1 = {t ∈[T] : yt = 1 and µB(t) < −(2L√n+1)w(T)} and S2 = {t ∈[T] : yt = 1 and µB(t) ≥−(2L√n+1)w(T)}. Let B1 = {B ∈B : ∃t ∈S1 s.t. B(t) = B} be the set of bins associated with time steps in S1. Then we can write X t∈S1 ∆t = X B∈B1 X t∈S1:B(t)=B ∆t = X B∈B1 X t:xt∈B,yt=1 ∆t Then by Lemma 6.5, X t∈S1 ∆t ≤ X B∈B1  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  ≤|B1|  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  ≤|B|  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  By Lemma 6.4, every x ∈∪B∈BB satisfies ∥x∥≤R(T). Thus ∪B∈BB is fully contained within an n-ball of radius r. The volume of such a ball is v1R(T)n. Each bin in B has side length w(T) so has volume w(T)n. Furthermore, the bins in B have no volume overlap, so the total volume of bins in B is w(T)n|B|. Then w(T)n|B| ≤v1R(T)n. Therefore X t∈S1 ∆t ≤v1R(T)n w(T)n  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  Now consider any t ∈S2. By definition, xt ∈B(t) ∈B, so Lemma A.3 implies that r(xt, 1) ≥µB(t) −L√nw(T). Since µB(t) ≥−(2L√n + 1)w(T) by construction of S2, we have r(xt, 1) ≥−(3L√n + 1)w(T). Therefore X t∈S2  max y∗∈{0,1} r(xt, y∗) −r(xt, 1)  ≤ X t∈S2 (3L√n + 1)w(T) = |S2|(3L√n + 1)w(T) ≤(3L√n + 1)w(T)T Sarah Liaw, Benjamin Plaut Putting it all together, X t:yt=1 ∆t = X t∈S1  max y∗∈{0,1} r(xt, y∗) −r(xt, 1)  + X t∈S2  max y∗∈{0,1} r(xt, y∗) −r(xt, 1)  ≤v1R(T)n w(T)n  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  + (3L√n + 1)w(T)T as required. Theorem 5.2. In the stochastic setting with xt ∼ν i.i.d., Algorithm 1 with w(T) = T −1/(n+2) and m(T) = ln T satisfies E[Reg(T)] ∈O  (L + σ2)T n+1 n+2 (ln T)n+1 + T ¯ν(ln T)  . Proof. First assume G holds. Let S3 = {t ∈[T] : yt = 1 and r(xt, 1) < r(xt, 0)} be the time steps where we committed but we should have abstained, and let S4 = {t ∈[T] : yt = 0 and r(xt, 0) < r(xt, 1)} be the time steps where we should have committed but we abstained. Lemma 6.6 bounds the regret of time steps in S3. Since we always commit whenever xt ∈B for some B ∈B, S4 can only occur when xt ̸∈B for all B ∈B. By construction, any such xt satisfies ∥xt∥> m(T) (otherwise the bin containing xt would be in B). Also, r(xt, 1) −r(xt, 0) ≤1 by assumption. Hence Reg(T) = T X t=1 ∆t = X t∈S3 ∆t + X t∈S4 ∆t ≤v1R(T)n w(T)n  2LR(T) + 32c−1σ2 w ln(2T 4) w(T)  + (3L√n + 1)w(T)T + T X t=1 1(∥xt∥> m(T)) = 2Lv1R(T)n+1 w(T)n + 32v1c−1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T X t=1 1(∥xt∥> m(T)) Therefore E[Reg(T) | G] ≤2Lv1R(T)n+1 w(T)n + 32v1c−1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T X t=1 Pr[∥xt∥> m(T)] = 2Lv1R(T)n+1 w(T)n + 32v1c−1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T X t=1 ¯ν(m(T)) = 2Lv1R(T)n+1 w(T)n + 32v1c−1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T ¯ν(m(T)) Now suppose G does not hold. Consider an arbitrary t ∈[T]. If yt = 0, then the regret at time t is at most 1. If yt = 1, we still have ∥xt∥≤R(T), so by Lipschitz continuity, r(xt, 1) ≥−LR(T). Therefore E[Reg(T) | ¬G] ≤T + LR(T)T. Lemma 6.2 implies that Pr[¬G] ≤T −2, so by the law of expectation, E[Reg(T)] = Pr[¬G] E[Reg(T) | ¬G] + Pr[G] E[Reg(T) | G] ≤1 T 2 · (T + LR(T)T) + 2Lv1R(T)n+1 w(T)n + 32v1c−1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T ¯ν(m(T)) ∈O 1 + LR(T) T + 2Lv1R(T)n+1 w(T)n + 32v1c−1σ2 wR(T)n ln(2T 4) w(T)n+1 + L√nw(T)T + T ¯ν(m(T))  We now plug in w(T) = T −1 n+2 and m(T) = ln T. Since limT →∞w(T) = 0, we have σ2 w = nL2w(T)2 + σ2 ∈O(σ2). Similarly, for any k ≥0, R(T)k = (ln(T) + √nT −1 n+2 )k ∈O((ln T)k). Thus E[Reg(T)] ∈O L ln T T + L(ln T)n+1 T −n n+2 + σ2(ln T)n+1 T −n−1 n+2 + LT −1 n+2 T + T ¯ν(ln T)  = O  (L + σ2)T n+1 n+2 (ln T)n+1 + T ¯ν(ln T)  as required.
Learning When Not to Learn: Risk-Sensitive Abstention in Bandits with Unbounded Rewards Sarah Liaw∗ Benjamin Plaut∗ Harvard University -stakes AI applications, even a single action can cause irreparable damage. However, nearly all of sequential decision-making theory assumes that all errors are recoverable (e.g., by bounding rewards). Standard bandit algorithms that explore aggressively may cause irreparable damage when this assumption fails. Some prior work avoids irreparable errors by asking for help from a mentor, but a mentor may not always be available. In this work, we formalize a model of learning with unbounded rewards without a mentor as a two-action contextual bandit with an abstain option: at each round the agent observes an input and chooses either to abstain (always 0 reward) or to commit (execute a preexisting task policy). Committing yields rewards that are upper-bounded but can be arbitrarily negative, and the commit reward is assumed Lipschitz in the input. We propose a caution-based algorithm that learns when not to learn: it chooses a trusted region and commits only where the available evidence does not already certify harm. Under these conditions and i.i.d. inputs, we establish sublinear regret guarantees, theoretically demonstrating the effectiveness of cautious exploration for deploying learning agents safely in high-stakes environments. 1 INTRODUCTION With AI becoming ubiquitous, many learning systems are now deployed in unpredictable, safety-critical domains, such as process control and manufacturing robotics, autonomous driving, and surgical assistance. In these settings, a single ill-chosen action can cause irreparable and lasting damage with no opportunity ∗Equal contribution. for subsequent recovery. For instance, a self-driving car cannot compensate for a deadly crash by later driving more safely, nor can a medical robot undo a fatal mistake during surgery. Following Plaut et al. (2025a,b), we refer to such irreparable errors as catastrophes. Despite the risks such deployments pose, there is limited work (and limited theoretical work in particular) on how an agent can learn without ever incurring an irreparable error. The possibility of catastrophes challenges standard frameworks for sequential decision making, especially the familiar notion of optimism under uncertainty. Optimism effectively assumes that early mistakes can be offset (or be compensated for) by later gains, an assumption that is inappropriate when errors are irrecoverable. Instead, these settings call for pessimism under uncertainty: when evidence is insufficient, prefer inaction to risky action. One approach to mitigate these problems is to let the agent ask for help from a mentor in unfamiliar or risky situations. Such human-in-the-loop oversight can block unsafe actions and prevent irreparable errors (even if ordinary, recoverable errors still occur). However, this approach depends on the availability of a capable mentor, which can be costly or impractical at scale. This motivates a mentor-free alternative: can an agent avoid irreparable errors on its own by acting cautiously when inputs appear unfamiliar? We propose a model of learning in the presence of irreparable costs without a mentor but with an option to abstain from action. The key question is when to abstain, i.e., when not to learn. To focus on this question, we assume the agent has previously learned a baseline policy that works well in-distribution but behaves unpredictably elsewhere. This allows us to streamline the model to two actions: abstain (do nothing) and commit (follow the baseline policy). Abstaining yields a deterministic safe reward r(x, 0) = 0, while committing yields a reward r(x, 1) ∈(-∞, 1]1. 1The asymmetric bounds on the commit reward reflect that a single action can be catastrophic, whereas it is rare for a single action to yield arbitrarily large benefit. 16 Oct 2025 Risk-Sensitive Abstention in Bandits We treat the origin as fully "in-distribution" and assume the baseline policy is beneficial there: r(0, 1) > 0. We use the distance from the origin ∥x∥as a measure of how out-of-distribution (OOD) an input is. The commit reward is assumed L-Lipschitz, capturing the idea that similar inputs yield similar outcomes. For our main results we focus on a fixed distribution ν; for our impossibility results we also consider a T-dependent distribution νT . We formalize the tension between exploration and safety via two negative results. First, in the worst case, any algorithm that begins by always exploring (i.e., commits on the first round regardless of the input) can suffer infinite expected regret (Thm. 4.1). Second, when every input lies uniformly far OOD, there is no safe way to explore to identify a beneficial committing region, and sublinear regret is impossible (Thm. 4.2). Together, these results delineate both the necessity and the limits of caution. Motivated by this perspective, we develop a cautionbased algorithm that learns only when it can guarantee that an error is not catastrophic (which essentially corresponds to not-too-OOD inputs). This approach yields sublinear expected regret for i.i.d. inputs from any fixed distribution, with bounds that also reflect how often the agent encounters far OOD inputs, while prioritizing the avoidance of irreparable errors. Contributions. Our contributions can be summarized as follows: 1. We introduce a formal model of learning with irreparable costs and no external mentor. 2. We prove two impossibility results that delineate the necessity and limits of caution. 3. We develop a caution-based algorithm that achieves sublinear regret for any fixed input distribution. Organization. §3 introduces the formal model and notation. §4 presents the impossibility results (Thms. 4.1 and 4.2) and their implications for exploration. § 5 describes our caution-based learning algorithm and states the main regret bound. § 6 outlines the proof strategy and supporting lemmas. 2 RELATED WORK Most prior work on sequential decision-making and safe exploration focuses on settings where errors are ultimately recoverable; here we contrast this with our setting where individual actions can cause irreparable harm. 2.1 Sequential decision-making when all errors are recoverable The literature on sequential decision-making is vast, spanning bandit problems, reinforcement learning, and online learning. See Slivkins et al. (2019), Sutton et al. (1998), and Cesa-Bianchi and Lugosi (2006) for introductions to these (somewhat overlapping) topics, respectively. However, nearly all of this work assumes explicitly or implicitly that any error can be recovered from. This assumption enables the agent to ignore risk and simply try all possible behaviors, since no matter how badly it performs in the short term, it can always eventually make up for it. Indeed, most sequential decision-making algorithms with formal regret bounds have this general structure. This assumption can manifest in different ways. In bandit settings, it suffices to assume that rewards are bounded (or at least have bounded expectation). This assumption implies that the expected regret from any action on any time step is always bounded, which is sufficient for the risk-agnostic exploration mentioned above. In contrast, we allow unbounded negative rewards so that actions can be arbitrarily costly. Indeed, our first negative result (Thm. 4.1) relies on the expected regret for a single action potentially being infinite in our model. In Markov Decision Processes (MDPs), the agent's actions determine the next state via a transition function, so in addition to bounded rewards, one typically assumes that either the environment is reset at the start of each "episode" (e.g., Azar et al., 2017) or that any state is reachable from any other (e.g., Jaksch et al., 2010). The dependence of standard MDP algorithms on these assumptions was observed by Moldovan and Abbeel (2012a); Cohen et al. (2021), among others. Regardless of the specific form of this assumption, it clearly does not hold in safety-critical contexts where a single action can be catastrophic. 2.2 Safe exploration These issues have motivated a wide field of safe exploration. A full survey is beyond the scope of this paper (see Garc ́ıa and Fern ́andez, 2015; Gu et al., 2024; Krasowski et al., 2023; Tan et al., 2022 for surveys), so we cover only the most relevant prior work. Avoiding irreparable errors while learning has also been studied empirically across multiple domains (e.g., Saunders et al., 2017; Moldovan and Abbeel, 2012b; Wachi et al., 2023; Zhao et al., 2023; Perkins and Barto, 2003), but here we focus on theoretical work, which is most relevant to our setting. Safe exploration is modeled in two main ways. The first Sarah Liaw, Benjamin Plaut approach is to require the agent to satisfy some sort of constraint in addition to maximizing reward. The constraint can be entirely separate from reward, as in the case of constrained MDPs (Altman, 1999), or they can be related to the reward (e.g., the agent's reward must always exceed some baseline). When zero or near-zero constraint violation is required, these formalisms do capture the possibility of irreparable errors. The second approach treats reward as the sole objective, with safety as a necessary but not sufficient property for maximizing reward. Here, irreparable errors correspond to either unboundedly negative rewards (our work falls into this category) or inescapable "trap" states with poor reward. An agent that obtains very negative rewards or enters trap states clearly cannot obtain high reward. Both of these models must contend with a fundamental obstacle: how does one learn which actions are catastrophic without trying those actions directly? This can be formalized by the so called "Heaven or Hell problem". Suppose there are two available actions, where one has unbounded positive reward and the other has unbounded negative reward. In this case, the agent can do no better than simply guessing and can never guarantee good regret. This problem shows that some sort of additional assumption is necessary for any meaningful regret guarantees. Below, we categorize work within safe exploration based on which assumption(s) it uses for this purpose. Full prior knowledge. Perhaps the simplest approach is to assume that the agent knows the precise safety constraint upfront (see Zhao et al., 2023 for a survey). This immediately resolves the Heaven or Hell problem; indeed, it eliminates the need for the agent to "learn when not to learn" at all. However, full knowledge of the safety constraint may not hold in practice. In contrast, we only assume that the (1) baseline policy performs well in-distribution and (2) the agent can always safely abstain. Learning constraints using a safe fallback action. There is a growing body of work which shares our assumption of a safe fallback action. Liu et al. (2021); Stradi et al. (2024) use this approach in the constrained MDP model, while Wu et al. (2016); Kazerouni et al. (2017); Lin et al. (2022); Chen et al. (2022) require the reward to exceed a fixed baseline in a bandit model. These papers generally rely on a pair of subtle but crucial assumptions to obtain zero constraint violation: (1) the constraint violation on any given time step is bounded and (2) the baseline policy satisfies the constraints with a known amount of slack (this is called Slater's gap, although not all of the above papers use this term). This combination of assumptions enables the agent to still explore aggressively with some known probability. Furthermore, the resulting bounds typically depend inversely on Slater's gap. Our work is complementary to each of these two assumptions. First, rather than assuming global boundedness, we assume that rewards decrease at a bounded rate, i.e., rewards are Lipschitz continuous. Second, rather than dependence on the reward or cost function (in the form of Slater's gap), our bounds depend on the input distribution: specifically, our bounds degrade as the agent sees more OOD inputs. Our approach may be more or less realistic depending on the specific context, but it notably diverges from the typical way fallback actions are utilized. Asking for help. Perhaps the most common approach in this model is relying on external supervision. This is a growing body of work which uses limited queries to a mentor to prove formal regret guarantees in the presence of irreversible dynamics (Cohen et al., 2021; Cohen and Hutter, 2020; Kosoy, 2019; Maillard et al., 2019; Plaut et al., 2025b,a). However, as the number of deployed AI systems continues to grow, it may be impractical for each one to have a human supervisor. Even in cases where external help will eventually become available, the agent may need to behave safely on its own in the short-term. These considerations motivate our study of how to learn safely in the absence of external help. 2.3 Other related work We briefly discuss some topics that are less directly relevant but still worth mentioning. One is the heavytailed bandit model (Bubeck et al., 2013; Agrawal et al., 2021), which studies the case where reward distributions are not subgaussian and thus less predictable. While this model does incorporate elements of safety, as long as the expected reward from any action is bounded, risk-agnostic exploration remains valid (as discussed above). Another topic adjacent to our work is the standard Lipschitz bandit model with bounded rewards and bounded domain (see, e.g., Chapters 4 and 8 of Slivkins, 2011). This work shares some similarities with ours, like the algorithmic use of discretization. However, the core of our paper is removing the boundedness assumptions, which introduces a host of new challenges. Finally, there is complementary work on abstention with bounded rewards (Neu and Zhivotovskiy, 2020; Yang et al., 2024). While this line of work also demonstrates the benefits of abstention, it does not address the possibility of irreparable errors. 3 PRELIMINARIES We study a two-action contextual bandit model in which, on each round, the agent observes an input and Risk-Sensitive Abstention in Bandits chooses either to commit, thus executing a fixed task policy that may yield risky outcomes, or to abstain, receiving a safe default reward of zero. In this section, we introduce the formal notation and assumptions used throughout. For k ∈N, let [k] = {1, . . . , k}. Let X = Rn be the input space, T ∈N be the time horizon, and ∥· ∥be the Euclidean norm (though one could also consider a more general metric space). On each time step t ∈[T], the agent observes an input xt ∈X, chooses an action yt ∈{0, 1}, and receives a (noisy) scalar reward; the precise noise assumptions are stated below. Actions and Rewards. We interpret yt = 0 as "abstaining", a safe default which deterministically yields r(xt, 0) = 0 for any xt ∈X. We interpret yt = 1 as "committing", which executes a preexisting policy whose reward r(xt, 1) may be arbitrarily negative (catastrophic) but is assumed to have a constant upper bound (rescaled to 1 without loss of generality). This captures the asymmetry of high-stakes settings where catastrophic losses can be unbounded in magnitude, whereas gains typically saturate. Input models. We assume inputs are i.i.d. draws from an unknown distribution ν on X, i.e., x1, . . . , xT i.i.d. ∼ν. We typically take ν to be fixed, but in our impossibility results we also consider the case of T-dependent ν (denoted νT ). We assume bandit feedback: the agent observes only the realized reward of its chosen action. Abstaining provides no information about the counterfactual commit reward r(xt, 1), so the agent cannot "learn by abstaining". Formally, at round t the learner observes rt = r(xt, yt) + ηt, where (ηt)T t=1 are i.i.d. zero-mean σ-subgaussian noise variables, independent of (xt) and of the learner's internal randomness (specified formally in Def. 3.1). Definition 3.1 (σ-subgaussian). A random variable Z is σ-subgaussian if E[exp(λ(Z -E[Z]))] ≤exp σ2λ2 2 for all λ ∈R. Equivalently, Z -E[Z] has tails that are dominated by a centered Gaussian with variance proxy σ2. Regularity. We make two assumptions on the reward function: (i) the commit reward r(·, 1) is L-Lipschitz in the Euclidean norm, i.e., there exists L > 0 such that for all x, x′ ∈X, |r(x, 1) -r(x′, 1)| ≤L∥x -x′∥. This is a standard smoothness condition in Lipschitz bandit models (see, e.g., Slivkins et al., 2019) and captures the intuition that similar inputs yield similar commit rewards. Since r(x, 0) ≡0, the abstain reward is 0-Lipschitz. (ii) The in-distribution baseline input yields strictly positive reward when committing, i.e. r(0, 1) > 0. This guarantees that committing is beneficial somewhere (at the origin); without it, the optimal policy would be to always abstain and cautious learning would be impossible. Objective. The agent's goal is to minimize its (expected) regret, which is the difference between its cumulative reward and the optimal cumulative reward. Formally, define Reg(T) = T X t=1 max y∗∈{0,1} r(xt, y∗) -r(xt, yt) . We take the expectations over the input process (in the stochastic model), the observation noise, and the learner's internal randomness. The goal is to achieve sublinear expected regret, i.e., E[Reg(T)] = o(T), equivalently E[Reg(T)]/T →0 as T →∞. 4 THE VIRTUES AND LIMITS OF CAUTION In this section, we provide two impossibility results that demonstrate the importance and limitations of caution in high-stakes, unbounded reward bandits. First, caution is necessary: if an agent commits with non-negligible probability on inputs that are far OOD, catastrophic tail losses dominate-indeed, even a single risk-agnostic exploratory commit can incur infinite expected regret. This kind of "incautious exploration" is exactly how standard bandit algorithms behave when they begin by pulling every arm at least once. Second, caution has limits: when the input stream is uniformly far OOD, there is no way to explore cautiously to identify a beneficial committing region without risking catastrophe. In such settings, sublinear regret is not possible and the optimal strategy is to abstain on every time step. Theorem 4.1 (The need for caution). Let ν be any distribution over X such that Ex∼ν[∥x∥] = ∞and assume x1, . . . , xT ∼ν i.i.d. Then there exists a reward function r such that any algorithm which always commits on the first time step satisfies E[Reg(T)] = ∞. Proof. Define r(x, 1) = 1 -L∥x∥for all x ∈X. Then E[Reg(T)] = E " T X t=1 max y∗∈{0,1} r(xt, y∗) -r(xt, yt) # ≥E max y∗∈{0,1} r(x1, y∗) -r(x1, y1) ≥E[0 -(1 -L∥x1∥)] Sarah Liaw, Benjamin Plaut = L E x∼ν[∥x∥] -1 = ∞ as required. The proof can easily be modified to handle the cases where the first commit is taken with constant probability (rather than probability 1) or where the algorithm abstains for a constant number of initial rounds. Essentially, this negative result applies to any algorithm that is not cautious, i.e., that explores without considering how OOD xt is. However, caution can only get us so far. While it prevents catastrophic first commits, some exploration is necessary to obtain sublinear regret. If all inputs are far OOD, then there is no safe way to explore, so the agent has no choice but to always abstain. Equivalently, this can be phrased by considering i.i.d. inputs from a Tdependent distribution νT supported on {x : ∥x∥= T}. Theorem 4.2 (The limits of caution). Let νT be any distribution supported on {x : ∥x∥= T}, and suppose x1, . . . , xT i.i.d. ∼νT . Then no algorithm can guarantee E[Reg(T)] ∈o(T). Proof. Define r-(x, 1) := 1 -L∥x∥and r+(x, 1) := 1, with r±(x, 0) := 0. Since we only care about asymptotics, we can restrict our attention to T > 1/L. Then for ∥xt∥= T, optimal behavior for r+ is to always commit, while optimal behavior for r-is to always abstain. We show maxr∈{r-,r+} E[Reg(T)] ∈Ω(T). To do so, we use a mild version of the probabilistic method. Let U(r-, r+) be the uniform distribution over {r-, r+}. It suffices to show Er∼U E[Reg(T)] ∈Ω(T), where the second expectation is over x1, . . . , xT and y1, . . . , yT . Let E be the event that the agent ever commits. If E holds, there exists i ∈[T] with yi = 1. Since yi is independent of r, E r E[Reg(T) | E] = E r E " T X t=1 max y∗∈{0,1} r(xt, y∗) -r(xt, yt) | E # ≥Pr[r = r-] E max y∗∈{0,1} r(xi, y∗) -r(xi, yi) | r = r- = LT -1 2 On the other hand, if E does not occur, then E r E[Reg(T) | ¬E] ≥Pr[r = r+] E " T X t=1 max y∗∈{0,1}r+(xt, y∗)-r+(xt, yt) |¬E # ≥T 2 . Then by the law of total expectation, E r E[Reg(T)] = Pr[E] E r E[Reg(T) | E] + Pr[¬E] E r E[Reg(T) | ¬E] ≥min Pr[E], Pr[¬E] min LT -1 2 , T 2 ∈Ω(T) as required. 5 ALGORITHM AND MAIN RESULT Following the negative results in § 4, we propose an algorithm (Algorithm 1) that operationalizes cautious learning: only learn in regions that are not too far OOD and where the available evidence does not already certify that committing is harmful. Informally, we define a trusted region around the origin whose radius grows with the time horizon, reflecting the maximum regret we are willing to tolerate-intuitively, this corresponds to allowing mistakes that are bad but not catastrophic. We then discretize the region into bins to exploit Lipschitz continuity. Within each bin, the commit reward cannot vary by more than a Lipschitz discretization error, so it suffices to estimate a single per-bin mean. The agent always abstains outside the trusted region, and inside it abstains in any bin whose pessimistic upper bound on reward is negative; otherwise it commits to gather information. More precisely, the algorithm defines a ball of radius m(T) around the origin, treating inputs outside this ball as too OOD to test. The ball is partitioned into n-dimensional hypercubes (bins) of side length w(T). By Lipschitz continuity, the variation of r(·, 1) within any bin B is at most L√nw(T). For each bin B, the algorithm maintains its empirical mean ˆμB and a confidence radius γ(k) after k commits in B. If ˆμB + γ(k) + L√nw(T) 0, w : N →R>0 H ←partition of X into n-cubes of side length w(T) B ←{B ∈H : ∃x ∈B with ∥x∥≤m(T)} σw ← p nL2w(T)2 + σ2 γ(k) := q c-1σ2w ln(2T 4) k where c is the absolute constant from Lemma A.2 (kB, ˆμB) = (0, 0) for all B ∈B for t = 1, . . . , T do if ∃B ∈B s.t. xt ∈B then ▷xt is not too OOD: it's safe to learn if ˆμB + γ(kB) + L√nw(T) 0, then T ̄ν(ln T) ≍T/(ln T)α = o(T). If one has prior knowledge of ν, the choice of m(T) can be tailored more precisely than our generic setting m(T) = ln T. In particular, for polynomial tails, setting m(T) = T c 0 x1 x2 w(T) grid (bins) bin ∈B (trusted) certified negative ∥x∥≤m(T) Figure 1: Trusted region of radius m(T) around the origin, partitioned into bins of side w(T). Any square intersecting the ball is shown fully green (bin ∈B). Certified negative bins are shown hatched red. The agent abstains outside the ball. for small c > 0 improves the bound to O(T 1-cα). Thus, the regret decomposes into a geometric/statistical term from discretization and concentration inside the trusted region, and a tail term from far OOD inputs; both are sublinear for any fixed ν. 6 PROOF SKETCH We now outline the logical structure of the proof of Thm. 5.2; we also provide the intuition behind each step. Full technical details and complete proofs of all lemmas are deferred to Appendix A. Let m(T), w(T) be as in Algorithm 1, and let B be the set of bins intersecting the ball of radius m(T). For any B ∈B, let μB = Ex∼ν[ r(x, 1) | x ∈B ] be its true mean commit reward, let kB(t) be the number of commits taken in B by the end of round t (so kB(0) = 0), and let ˆμB(k) be the empirical mean in B after k commits (i.e., the running mean from Algorithm 1 indexed by its commit count). To control estimation error we define the confidence radius γ(k) = r c-1σ2w ln(2T 4) k , σ2 w = nL2w(T)2 + σ2, where c > 0 is the absolute constant from Lemma A.2. Here σ2 w aggregates the combines observation noise σ2 with the Lipschitz-induced within-bin variation of (L√nw(T))2. Sarah Liaw, Benjamin Plaut Define the good event under which all per-bin estimates are uniformly accurate over the realized commit counts: G = n ∀B ∈B, ∀t ∈[T] : |ˆμB(kB(t))-μB| ≤γ(kB(t)) o . On G, each empirical mean is a reliable proxy for its bin's true mean at every commit count that occurs along the algorithm's trajectory. The analysis conditions on G (which holds with high probability by a union bound over realized bin-count pairs), and then decomposes regret into: (i) commits inside the trusted region (handled by certification of negative bins plus a margin term), and (ii) abstentions outside the ball of radius m(T) (quantified by the radial survival function). Lemma 6.1 (Per-bin concentration). For any B ∈B and t ∈[T], Pr [|ˆμB(kB(t)) -μB| > γ(kB(t))] ≤T -4. Proof idea. Fix B and t and condition on the kB(t) commit times t1 γ(kB(t))) ≤T -4. □ Lem. 6.1 gave concentration for each fixed bin and commit count. To extend this guarantee uniformly, we apply a union bound over the at most T bin-time pairs actually realized by the algorithm. Lemma 6.2 (Uniform concentration bound). With probability at least 1 -T -2, the good event G holds. Equivalently, Pr(¬G) ≤T -2. Proof idea. There are at most T 2 relevant pairs (B, t) along the trajectory of the algorithm. Each has failure probability T -4 by Lem. 6.1. A union bound yields failure probability at most T -2. □ Recall that the algorithm abstains permanently in any bin once ˆμB(kB(t)) + γ(kB(t)) + L√nw(T) 4c-1σ2 w ln(2T 4) (μB+L√nw(T ))2 , then bin B is certified negative at time t. Proof idea. On the good event G, certification in bin B occurs when ˆμB +γ(kB(t))+L√nw(T) m(T) contributes at most 1, giving T ̄ν(m(T)), which is sublinear for any fixed ν since ̄ν(ln T) →0. For (ii), Lipschitz continuity bounds the within-bin variation, and a uniform concentration event (probability 1 -O(T -2)) ensures empirical means stay within confidence radii. Negative bins are certified after O(σ2 w/margin2) commits, so each contributes at most O(LR(T)+σ2 w log T/w(T)) regret. Summing across O((m(T)/w(T))n) bins gives ̃O R(T)n LR(T)w(T)-n + σ2 ww(T)-(n+1) , and bins near the decision boundary contribute an additional O(w(T)T). □ If we ignore log factors, the leading terms trade off R(T)nw(T)-(n+1) | {z } variance-driven vs. w(T)T | {z } margin-driven . Balancing these yields the optimal choice w(T) ≍ T -1/(n+2). Independently, the radius m(T) trades off the abstention term T ̄ν(m(T)) against the growth of the volume factor R(T)n. Choosing m(T) = ln T makes T ̄ν(m(T)) sublinear for any fixed ν (since ̄ν(ln T) →0) while increasing R(T) only logarithmically. 7 CONCLUSION In this work, we introduced a formal model for safe learning under distribution shift in contextual bandits with catastrophic tails, provided impossibility results that clarify when sublinear regret is unattainable, and gave a cautious risk-sensitive algorithm with sublinear regret under suitable conditions. Our work has several limitations, which also provide directions for future work, including handling richer structure beyond Lipschitz continuity, incorporating adaptive or learned metrics, and extending the analysis to non-i.i.d. inputs or worst-case sequences. Our regret bound can be close to linear. In Thm. 5.2, the abstention term T ̄ν(ln T) can dominate for heavy-tailed inputs (e.g., power laws). This is the price of caution: avoiding catastrophic far OOD commits requires systematic abstention in the tails, and the resulting regret can be unavoidable (see the impossibility in Thm. 4.2). Moreover, while the bound is sublinear for every fixed n, the exponent (n+1)/(n+2) →1 as n →∞, which is a standard curse of dimensionality in Lipschitz contextual bandits (Slivkins et al., 2019, Thms. 4.11-4.12); see also Plaut et al. (2025a, Thm. 10). While we do not expect to remove these dependencies entirely, future work could improve rates. The simplicity of Algorithm 1 is appealing but ignores useful information: commits inform not only their own bin but also nearby bins via Lipschitz continuity, and certifying a bin as positive could justify expanding the trusted region around it. Additional structural assumptions, such as margin/low-noise conditions, intrinsic low dimensionality, or smoothness beyond Lipschitz, could also help. Assumptions may not always hold. Our guarantees here rely on i.i.d. inputs and Lipschitz continuity of the commit reward. In practice, inputs may drift or exhibit temporal dependence, and rewards may be only piecewise smooth or even non-smooth. Extending the analysis to weaker smoothness conditions or drifting processes is an important direction. Moreover, Algorithm 1 assumes knowledge of L, σ2, and T. While knowledge of T can be handled by the standard doubling trick (see Slivkins et al. (2019, §1.5)), L and σ2 may be unknown. Thus, developing parameter-free (or adaptively tuned) algorithms that remain cautious would increase robustness. No unconditionally irreparable errors. Obtaining regret -T on a single time step is irreparable in the sense that it automatically implies linear regret on that run. However, errors in our model are only irreparable for a fixed T: for any error, there exists a large enough T that the error is no longer catastrophic. It may be worth considering alternative models of catastrophe such as inescapable trap states in MDPs which do allow for errors that are unconditionally catastrophic. Broader impact. This work is motivated by safety concerns in the deployment of learning systems in highstakes domains. We provide theoretical justification for abstention as a mechanism for averting catastrophic errors under distribution shift, and abstention is also a practical choice for deployed systems. Agents that can defer action when uncertain may be safer and more trustworthy, but abstention mechanisms must be designed carefully to avoid consequences such as excessive conservatism or over-reliance on human supervision. Sarah Liaw, Benjamin Plaut Acknowledgments This work was supported by a gift from Open Philanthropy to the Center for Human-Compatible AI (CHAI) at UC Berkeley. This work was conducted while Sarah Liaw was a research intern at CHAI. We would also like to thank Vamshi Bonagiri and Pavel Czempin for helpful discussions and feedback. References Agrawal, S., Juneja, S. K., and Koolen, W. M. (2021). Regret minimization in heavy-tailed bandits. In Conference on Learning Theory, pages 26-62. PMLR. Altman, E. (1999). Constrained Markov Decision Processes, volume 7. CRC Press. Azar, M. G., Osband, I., and Munos, R. (2017). Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, pages 263-272. PMLR. Boucheron, S., Lugosi, G., and Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press. Bubeck, S., Cesa-Bianchi, N., and Lugosi, G. (2013). Bandits with heavy tail. IEEE Transactions on Information Theory, 59(11):7711-7717. Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, learning, and games. Cambridge university press. Chen, T., Gangrade, A., and Saligrama, V. (2022). Strategies for safe multi-armed bandits with logarithmic regret and risk. Cohen, M. K., Catt, E., and Hutter, M. (2021). Curiosity Killed or Incapacitated the Cat and the Asymptotically Optimal Agent. IEEE Journal on Selected Areas in Information Theory, 2(2):665-677. Conference Name: IEEE Journal on Selected Areas in Information Theory. Cohen, M. K. and Hutter, M. (2020). Pessimism About Unknown Unknowns Inspires Conservatism. In Proceedings of Thirty Third Conference on Learning Theory, pages 1344-1373. PMLR. Garc ́ıa, J. and Fern ́andez, F. (2015). A Comprehensive Survey on Safe Reinforcement Learning. Journal of Machine Learning Research, 16(42):1437-1480. Gu, S., Yang, L., Du, Y., Chen, G., Walter, F., Wang, J., and Knoll, A. (2024). A review of safe reinforcement learning: Methods, theories, and applications. 46(12):11216-11235. Conference Name: IEEE Transactions on Pattern Analysis and Machine Intelligence. Jaksch, T., Ortner, R., and Auer, P. (2010). Nearoptimal Regret Bounds for Reinforcement Learning. Journal of Machine Learning Research, 11(51):15631600. Kazerouni, A., Ghavamzadeh, M., Abbasi-Yadkori, Y., and Roy, B. V. (2017). Conservative contextual linear bandits. Kosoy, V. (2019). Delegative Reinforcement Learning: learning to avoid traps with a little help. arXiv. . Krasowski, H., Thumm, J., M ̈uller, M., Sch ̈afer, L., Wang, X., and Althoff, M. (2023). Provably safe reinforcement learning: Conceptual analysis, survey, and benchmarking. Lin, J., Lee, X. Y., Jubery, T., Moothedath, S., Sarkar, S., and Ganapathysubramanian, B. (2022). Stochastic conservative contextual linear bandits. Liu, T., Zhou, R., Kalathil, D., Kumar, P., and Tian, C. (2021). Learning policies with zero or bounded constraint violation for constrained MDPs. Advances in Neural Information Processing Systems, 34:1718317193. Maillard, O.-A., Mann, T., Ortner, R., and Mannor, S. (2019). Active Roll-outs in MDP with Irreversible Dynamics. Moldovan, T. M. and Abbeel, P. (2012a). Safe exploration in Markov decision processes. In Proceedings of the 29th International Conference on Machine Learning, ICML'12, pages 1451-1458, Madison, WI, USA. Omnipress. Moldovan, T. M. and Abbeel, P. (2012b). Safe exploration in markov decision processes. Neu, G. and Zhivotovskiy, N. (2020). Fast rates for online prediction with abstention. Perkins, T. J. and Barto, A. G. (2003). Lyapunov design for safe reinforcement learning. J. Mach. Learn. Res., 3(null):803-832. Plaut, B., Li ́evano-Karim, J., Zhu, H., and Russell, S. (2025a). Safe learning under irreversible dynamics via asking for help. Plaut, B., Zhu, H., and Russell, S. (2025b). Avoiding catastrophe in online learning by asking for help. In Proceedings of the 42nd International Conference on Machine Learning. Saunders, W., Sastry, G., Stuhlmueller, A., and Evans, O. (2017). Trial without error: Towards safe reinforcement learning via human intervention. Slivkins, A. (2011). Contextual Bandits with Similarity Information. In Proceedings of the 24th Annual Conference on Learning Theory (COLT), pages 679-702. ISSN: 1938-7228. Slivkins, A. et al. (2019). Introduction to multi-armed bandits. Foundations and Trends® in Machine Learning, 12(1-2):1-286. Risk-Sensitive Abstention in Bandits Stradi, F. E., Castiglioni, M., Marchesi, A., and Gatti, N. (2024). Learning adversarial MDPs with stochastic hard constraints. arXiv preprint . Sutton, R. S., Barto, A. G., et al. (1998). Reinforcement learning: An introduction, volume 1. MIT press Cambridge. Tan, V. Y. F., L.A., P., and Jagannathan, K. (2022). A survey of risk-aware multi-armed bandits. In Raedt, L. D., editor, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22, pages 5623-5629. International Joint Conferences on Artificial Intelligence Organization. Survey Track. Wachi, A., Hashimoto, W., Shen, X., and Hashimoto, K. (2023). Safe exploration in reinforcement learning: A generalized formulation and algorithms. Wu, Y., Shariff, R., Lattimore, T., and Szepesv ́ari, C. (2016). Conservative bandits. In International Conference on Machine Learning (ICML). Yang, J., Jin, T., and Tan, V. Y. F. (2024). Multiarmed bandits with abstention. Zhao, W., He, T., Chen, R., Wei, T., and Liu, C. (2023). State-wise Safe Reinforcement Learning: A Survey. volume 6, pages 6814-6822. ISSN: 1045-0823. Sarah Liaw, Benjamin Plaut A PROOF OF MAIN RESULT A.1 Proof notation The proof will use the following notation: 1. The true mean of bin B is μB = Ex∼ν[r(x, 1) | x ∈B]. 2. Let kB(t) denote the value of the variable kB in Algorithm 1 at the end of time step t. 3. Let ˆμB(k) denote the value of the variable ˆμB in Algorithm 1 after k rewards from bin B have been observed. 4. Let σw = p nL2w(T)2 + σ2 for brevity. 5. Define the confidence radius γ(k) = q c-1σ2w ln(2T 4) k where c is the absolute constant from Lemma A.2. 6. Define the good event G = {∀t ∈[T], ∀B ∈B where kB(t) > 0 : |ˆμB(kB(t)) -μB| ≤γ(kB(t))}. 7. A bin B is certified negative at time t if ˆμB(kB(t)) + γ(kB(t)) + L√nw(T) 0. Then there exists an absolute constant c > 0 such that for any ε > 0, Pr " k X i=1 Xi > ε # ≤2 exp - cε2 Pk i=1 σ2 i ! Lemma A.3. If x ∈B ∈B, then |r(x, 1) -μB| ≤L√nw(T). Proof. We must prove that r(x, 1) ≥μB -L√nw(T) and r(x, 1) ≤μB + L√nw(T). Let r-= infx′∈B r(x′, 1) and r+ = supx′∈B r(x′, 1). Then r-≤μB ≤r+ and r-≤r(x, 1) ≤r+. Next, for any ε > 0, there exists x-, x+ ∈B such that r(x-, 1) -ε r+ (if not, this contradicts r-and r+ being the infimum and supremum). Then r(x, 1) and μB belong to the interval [r-, r+], which is a subset of the interval [r(x-, 1) -ε, r(x+, 1) + ε]. Since x-and x+ belong to the same n-hypercube with side length w(T), ∥x--x+∥≤√nw(T). Then by Lipschitz continuity, |r(x+, 1) -r(x-, 1)| = r(x+, 1) -r(x-, 1) ≤L√nw(T). Therefore r(x, 1) and μB belong to the same interval of length L√nw(T) + 2ε, so |r(x, 1) -μB| ≤L√nw(T) + 2ε. Since this holds for all ε > 0, we must have |r(x, 1) -μB| ≤L√nw(T). Lemma 6.1 (Per-bin concentration). For any B ∈B and t ∈[T], Pr [|ˆμB(kB(t)) -μB| > γ(kB(t))] ≤T -4. Proof. Fix a bin B and t ∈[T]. Let k = kB(t) for brevity and let t1 0, Pr   k X j=1 Zj + k X j=1 ηtj > ε  ≤2 exp - cε2 Pk j=1(L√nw(T))2 + Pk j=1 σ2 ! = 2 exp -cε2 kσ2w Note that the ˆμB(k) = 1 k Pk j=1 rtj = 1 k Pk j=1(r(xtj, 1) + ηtj). Then ˆμB(k) -μB = 1 k Pk j=1(Zj + ηtj). Set ε = kγ(k) = p kc-1σ2w ln(2T 4) to get Pr [|ˆμB(k) -μB| > γ(k)] = Pr h k|ˆμB(k) -μB| > p kc-1σ2w ln(2T 4) i = Pr   k X j=1 Zj + k X j=1 ηtj > p kc-1σ2w ln(2T 4)   ≤2 exp(-ln(2T 4)) = 2 exp ln 1 2T 4 = T -4 as required. Lemma 6.2 (Uniform concentration bound). With probability at least 1-T -2, the good event G holds. Equivalently, Pr(¬G) ≤T -2. Proof. Let J be the the number of bins that receive at least one commit, i.e., J = |{B ∈B : ∃t ∈[T] s.t. xt ∈ B, yt = 1}|. For each j ∈[J], let Bj be the jth bin to receive a commit. Then Pr[¬G] = E[Pr[¬G | J, B1, . . . , BJ]] (Law of total expectation) = E  Pr   J[ j=1 T[ t=1 {|ˆμBj(kB(t)) -μBj| > γ(kB(t))}   J, B1, . . . , BJ   (Direct negation) ≤E   J X j=1 T X t=1 Pr[|ˆμBj(kB(t)) -μBj| > γ(kB(t))] J, B1, . . . , BJ   (Union bound) ≤E   J X j=1 T X t=1 T -4 J, B1, . . . , BJ   (Lemma 6.1) ≤E h T -2 J, B1, . . . , BJ i (J ∈[T]) ≤T -2 (Expectation of a constant) as required. 2This is similar to the "reward tape" argument used in Section 1.3.1 of Slivkins et al. (2019). Sarah Liaw, Benjamin Plaut Lemma 6.3 (Samples for negative certification). Consider any t ∈[T] and B ∈B. On G, if μB 4c-1σ2 w ln(2T 4) (μB+L√nw(T ))2 , then bin B is certified negative at time t. Proof. Note that μB 4c-1σ2 w ln(2T 4) (μB+L√nw(T ))2 . Therefore |{t ∈[T] : xt ∈B, yt = 1}| ≤⌈4c-1σ2 w ln(2T 4) (μB+L√nw(T ))2 ⌉≤ 1 + 4c-1σ2 w ln(2T 4) (|μB|-L√nw(T ))2 . Since |μB| ≥(2L√n + 1)w(T) ≥2L√nw(T), we have |μB| = |μB| 2 + |μB| 2 ≥|μB| 2 + L√nw(T) so |μB| -L√nw(T) ≥|μB/2|. Therefore (|μB| -L√nw(T))2 ≥μ2 B/4, so |{t ∈[T] : xt ∈B, yt = 1}| ≤ 1 + 16c-1σ2 w ln(2T 4) μ2 B . For any t ∈[T] such that yt = 1, either yt = 1 is optimal, in which case the single-step regret ∆t is 0, or yt = 0 is optimal, if which case ∆t = -r(xt, 1). If xt ∈B, Lemma A.3 implies that r(xt, 1) ≥μB -L√nw(T). Since μB -LR(T) for all x ∈B. ThusμB = Ex∼ν[r(x, 1) | x ∈B] ≥Ex∼ν[-LR(T)] = -LR(T), so X t:xt∈B,yt=1 ∆t ≤2LR(T) + 32c-1σ2 w ln(2T 4) w(T) as required. Lemma 6.6 (Total commit regret inside the trusted region). On G, X t: yt=1 ∆t ≤v1R(T)n w(T)n 2LR(T) + 32c-1σ2 w ln(2T 4) w(T) + (3L√n + 1)w(T)T. Proof. For each t ∈[T], let B(t) denote the bin to which xt belongs. Partition the time steps with commits into S1 = {t ∈[T] : yt = 1 and μB(t) m(T) (otherwise the bin containing xt would be in B). Also, r(xt, 1) -r(xt, 0) ≤1 by assumption. Hence Reg(T) = T X t=1 ∆t = X t∈S3 ∆t + X t∈S4 ∆t ≤v1R(T)n w(T)n 2LR(T) + 32c-1σ2 w ln(2T 4) w(T) + (3L√n + 1)w(T)T + T X t=1 1(∥xt∥> m(T)) = 2Lv1R(T)n+1 w(T)n + 32v1c-1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T X t=1 1(∥xt∥> m(T)) Therefore E[Reg(T) | G] ≤2Lv1R(T)n+1 w(T)n + 32v1c-1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T X t=1 Pr[∥xt∥> m(T)] = 2Lv1R(T)n+1 w(T)n + 32v1c-1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T X t=1 ̄ν(m(T)) = 2Lv1R(T)n+1 w(T)n + 32v1c-1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T ̄ν(m(T)) Now suppose G does not hold. Consider an arbitrary t ∈[T]. If yt = 0, then the regret at time t is at most 1. If yt = 1, we still have ∥xt∥≤R(T), so by Lipschitz continuity, r(xt, 1) ≥-LR(T). Therefore E[Reg(T) | ¬G] ≤T + LR(T)T. Lemma 6.2 implies that Pr[¬G] ≤T -2, so by the law of expectation, E[Reg(T)] = Pr[¬G] E[Reg(T) | ¬G] + Pr[G] E[Reg(T) | G] ≤1 T 2 · (T + LR(T)T) + 2Lv1R(T)n+1 w(T)n + 32v1c-1σ2 wR(T)n ln(2T 4) w(T)n+1 + (3L√n + 1)w(T)T + T ̄ν(m(T)) ∈O 1 + LR(T) T + 2Lv1R(T)n+1 w(T)n + 32v1c-1σ2 wR(T)n ln(2T 4) w(T)n+1 + L√nw(T)T + T ̄ν(m(T)) We now plug in w(T) = T -1 n+2 and m(T) = ln T. Since limT →∞w(T) = 0, we have σ2 w = nL2w(T)2 + σ2 ∈O(σ2). Similarly, for any k ≥0, R(T)k = (ln(T) + √nT -1 n+2 )k ∈O((ln T)k). Thus E[Reg(T)] ∈O L ln T T + L(ln T)n+1 T -n n+2 + σ2(ln T)n+1 T -n-1 n+2 + LT -1 n+2 T + T ̄ν(ln T) = O (L + σ2)T n+1 n+2 (ln T)n+1 + T ̄ν(ln T) as required.
2510.14888
Modeling nonlinear scales for dynamical dark energy cosmologies with COLA João Rebouças,1, 2 Victoria Lloyd,3 Jonathan Gordon,3, 4 Guilherme Brando,1 and Vivian Miranda4 1CBPF - Brazilian Center for Research in Physics, Xavier Sigaud st. 150, zip 22290-180, Rio de Janeiro, RJ, Brazil 2Instituto de Física Teórica da Universidade Estadual Paulista, R. Dr. Bento Teobaldo Ferraz, 271, Bloco II, Barra-Funda - São Paulo/SP, Brazil 3Department of Physics & Astronomy, Stony Brook University, Stony Brook, NY 11794, USA 4C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY, 11794, USA (Dated: October 17, 2025) Upcoming galaxy surveys will bring a wealth of information about the clustering of matter at small scales, but modeling small-scale structure beyond ΛCDM remains computationally challenging. While accurate 𝑁-body emulators exist to model the matter power spectrum for ΛCDM and some limited extensions, it’s unfeasible to generate 𝑁-body simulation suites for all candidate models. Motivated by recent hints of an evolving dark energy equation of state from galaxy surveys, we assess the viability of employing the COmoving Lagrangian Acceleration (COLA) method to generate simulation suites assuming the 𝑤0𝑤𝑎dynamical dark energy model. Following up on our previous work, we combine COLA simulations with an existing high-precision ΛCDM emulator to extend its predictions into new regions of parameter space. We assess the precision of our emulator at the level of the matter power spectrum, finding that our emulator can reproduce the nonlinear boosts from EuclidEmulator2 at less than 2% error. Moreover, we perform an analysis of a simulated cosmic shear survey akin to future data from the Legacy Survey of Space and Time (LSST) first year of observations, assessing the differences in parameter constraints between our COLA-based emulator and the benchmark emulator. We find our emulator to be in excellent agreement with the benchmark, achieving less than 0.3𝜎shifts in cosmological parameters across multiple fiducial cosmologies, and a 7D figure of bias of less than 0.35. We further compare our emulator’s performance to a commonly used approach: assuming the ΛCDM boost can be employed for extended parameter spaces without modification. We find that our emulator yields a significantly smaller Δ𝜒2 distribution, parameter constraint biases, and a more accurate figure of merit compared to this second approach. These results demonstrate that COLA emulators provide a computationally efficient and physically motivated path forward for modeling nonlinear structure in extended cosmologies, offering a practical alternative to full 𝑁-body suites in the era of precision cosmology. I. INTRODUCTION In recent decades, galaxy surveys have been able to map the large-scale structure of the Universe with great precision, being as competitive as CMB surveys. Current photometric surveys, such as DES [1–10], KiDS [11–23], HSC [24–29], and spectro- scopic surveys, such as DESI [30–36], BOSS/eBOSS [37–50], and WiggleZ [51–55], have placed percent-level constraints on ΛCDM parameters, demonstrating the success of the the- oretical model in describing independent datasets. However, in recent years, tensions in the cosmological parameter con- straints have begun to arise with the increase in precision. Notably, most galaxy surveys report a mildly lower value for the structure growth parameter 𝑆8 than those from CMB mea- surements (see e.g. [56]). Moreover, recent findings from the DESI collaboration, as well as Type Ia supernovae datasets such as Pantheon+, Union3, and DESY5 [30, 34, 35, 57–59] fa- vor an evolution of the dark energy equation of state. Whether these results truly indicate new physics is still under debate [60–64]. Forthcoming Stage-IV galaxy surveys, such as the Vera Rubin Observatory’s LSST [65], Euclid [66], SphereX [67], and Roman [68], will enable higher-precision measurements, especially at small scales where non-linearities in the mat- ter density field become increasingly sizable [69], and will be decisive for investigating dark energy dynamics. Anal- yses of galaxy surveys rely on a key theoretical prediction: the matter power spectrum, as galaxies are biased tracers of the underlying matter density field. At large scales and early times, the power spectrum can be quickly and accurately com- puted using Einstein-Boltzmann solvers such as camb [70] and class [71, 72]. However, for small scales and low redshifts, linear perturbation theory breaks down, and accurately mod- eling non-linearities becomes a task of central importance. Although N-body simulations offer the most accurate pre- dictions in the nonlinear regime, they are computationally ex- pensive — each one demanding tens of thousands of CPU hours [73–75]. As a result, integrating simulations into Bayesian analyses is a prohibitive task, as theoretical predic- tions must be provided for O(105 −106) points in the param- eter space. To address this issue, machine learning emulators trained on N-body simulations have been developed for ΛCDM and simple, widely adopted extensions such as the phenomeno- logical 𝑤0𝑤𝑎CDM [76, 77] parametrization for dark energy, as well as massive neutrinos with their total mass as a free param- eter. Examples include EuclidEmulator2 [78], bacco [79], CosmicEmu [80], the Dark Quest emulator [81, 82], the CSST Emulator [83–85], aemulus [86–89], among others. At the same time, due to the sheer amount of candidate cosmologi- cal models and the high cost of running N-body simulations, emulators for broader extensions of ΛCDM are still scarce: examples are emulators to specific modified gravity theories (e.g. [90–93]) where we have full N-Body simulations [94], as well as some hydrodynamical emulators. A viable alternative to using full 𝑁-body simulations for constructing matter power spectrum emulators is to use well- established approximate methods for extended cosmological models. These methods reduce the computational complexity arXiv:2510.14888v1 [astro-ph.CO] 16 Oct 2025 2 of running hundreds of simulations to train emulators, at the cost of losing accuracy in deep non-linear scales. One com- pelling approach is to use the COmoving Lagrangian Acceler- ation (COLA) method, which combines Lagrangian Perturba- tion Theory with a particle-mesh (PM) [95] evolution scheme to approximate N-body results while being cheaper than the usual N-body methods by 1-2 orders of magnitude [96, 97]. Emulators created using pure COLA simulations are prone to small-scale inaccuracies when compared directly to their N- body counterparts. To mitigate this effect, the work of [98] introduces an approach that combines COLA simulations with predictions from high-accuracy ΛCDM emulators or full N- body results, leveraging COLA’s reduced computational cost and dramatically increasing small-scale accuracy simultane- ously. Our previous work [99] has validated this approach, creating an emulator for the matter power spectrum of COLA simulations under the 𝑤CDM cosmological model, and testing it in a mock Stage-IV cosmic shear analysis. As such, we aim to demonstrate here that the hybrid approach of COLA-based em- ulators combined with high-resolution ΛCDM emulators can provide unbiased cosmological parameter constraints when compared to full N-body methods. In this work, we now present a final validation of our COLA- based emulators on the 𝑤0𝑤𝑎CDM cosmological model, where the dark energy equation of state evolves linearly with the scale factor. This parametrization represents the most widely used and general extension of ΛCDM for which high- accuracy emulators currently exist, and remains central to on- going investigations into dynamical dark energy [30, 34]. We extend our machine learning pipeline, combining COLA simu- lations with ΛCDM emulators, to predict the nonlinear matter spectrum across the 𝑤0𝑤𝑎CDM parameter space. We train a simple neural network to emulate the nonlinear correction factor (i.e the boost) from COLA, correcting for small-scale inaccuracies by referencing boosts from a high-fidelity ΛCDM emulator. This hybrid approach enables fast predictions across an extended cosmological parameter space while maintaining consistency with 𝑁-body precision. To validate our emulator in a cosmological inference setting, we perform a simulated cosmic shear analysis using survey specifications consistent with LSST’s first year of observations (LSST-Y1) [65]. We compare parameter constraints derived using both our pipeline and a benchmark 𝑁-body emulator, chosen as EuclidEmula- tor2, quantifying their disagreement with standard tension metrics [100–102]. Additionally, we also benchmark our emulator against a widely used approximation method in beyond-ΛCDM analy- ses for models without dedicated nonlinear simulations [103– 108]: projecting the nonlinear boost from the nearest ΛCDM cosmology. This projection method assumes that nonlinear corrections calibrated in ΛCDM remain valid in nearby ex- tended cosmologies, providing a computationally inexpensive workaround but at the cost of uncontrolled systematics. In con- trast, we demonstrate that for dynamical dark energy models, we find that the ΛCDM projection approach may introduce sig- nificant deviations in both goodness-of-fit and parameter con- straints. Meanwhile, our COLA-based emulator reproduces the predictions of high-precision 𝑁-body emulators without bias. This paper is organized as follows: Section II describes the COLA simulations, cosmological parameters, simulation output processing, emulator construction, and validation; in Section III, we present our LSST-Y1 simulated cosmic shear analysis and the tension metrics used to assess their differences; in Section IV, we present and discuss the results of the LSST- Y1 simulated analysis; finally, we conclude in Section V. II. COLA EMULATOR A. COLA Simulation Suite The COmoving Lagrangian Acceleration (COLA) algo- rithm [96] is a fast approximate method for 𝑁-body simu- lations, wherein particles evolve in a frame comoving with trajectories calculated using Lagrangian Perturbation Theory (LPT), most commonly 2nd order Lagrangian perturbation the- ory (2LPT). For small scales, the method computes the force by using a Particle-Mesh (PM) algorithm, where the residual displacements not captured by LPT are added to the trajecto- ries. COLA has been shown to agree with full 𝑁-body simula- tions, at the level of the power spectrum at up to 𝑘∼1ℎ/Mpc, as well as when predicting ratios of the modified gravity power spectrum and the ΛCDM one, the so-called boost function in modified gravity [98, 109–111]. Despite being 1-2 orders of magnitude faster than a full 𝑁-body run, the computational cost of these approximations is still too high for direct use in the O(106) computations of the matter power spectrum re- quired for Monte Carlo searches. A practical alternative is to use a fixed set of COLA simulations to train emulators for the matter power spectrum, enabling efficient interpola- tion across cosmological parameter space. Our previous work demonstrated this approach for 𝑤CDM [99]; here, we extend it to 𝑤0𝑤𝑎CDM and evaluate its performance relative to the benchmark EuclidEmulator2 [78], which achieves ≲1% precision for 𝑤0𝑤𝑎CDM + Í 𝑚𝜈up to 𝑘= 10 ℎ/Mpc−1 and 𝑧≤3. 1. Simulation Settings We use the COLA algorithm as implemented in the pub- lic fml1 code. Each simulation is performed in a box of size 𝐿= 1024ℎ−1Mpc, populated with 𝑁part = 10243 parti- cles, initialized at 𝑧ini = 19, and evolved over 51 time steps chosen to maintain a uniform time resolution of Δ𝑎≈0.02. The force grid uses 𝑁mesh = 20483 cells, and the power spectra are calculated on-the-fly using a 𝑁3 pk−mesh = 10243 grid. Therefore, the corresponding Nyquist frequency is 𝑘Nyq = 𝜋𝑁pk−mesh/𝐿= 𝜋ℎ/Mpc. To avoid aliasing, we 1 https://github.com/HAWinther/FML 3 restrict our analysis to 𝑘≤𝑘Nyq [112]. Our choices are based on Reference [98] and are validated therein. Initial conditions are generated using 2LPT, and we employ the forward approach [113] for our simulations. We provide the linear transfer functions of matter density, velocity, and relativistic species’ densities at each time step using class2 in synchronous gauge, and convert to the 𝑁-body gauge [114– 117] in COLA. Our simulations were run in the Seawulf3 cluster. With these settings, and using 128 cores, one COLA 𝑤0𝑤𝑎simulation takes approximately 40 minutes to finish, and requires a total RAM of approximately 950 GB. To suppress sample variance from finite box effects at large scales (𝑘≈1/𝐿), we use the pairing-and-fixing method [118], in which we generate Gaussian random field modes with a fixed amplitude, 𝛿𝑖,lin, but with phase shifts of 𝜋with respect to one another. The initial overdensity fields are sampled as 𝛿𝑖,lin = √︁ 𝑃𝑖𝑒i𝜃𝑖. (1) where 𝜃𝑖is a random phase and 𝑃𝑖the initial power spectrum. Averaging over each pair, we find that the result substantially suppresses the effects of cosmic variance. This strategy was chosen for this work following [99] and [78]. 2. Definition of the Parameter Space We consider the cosmological 𝑤0𝑤𝑎CDM model, where the dark energy equation of state is parametrized as 𝑤(𝑎) = 𝑤0 + 𝑤𝑎(1 −𝑎), (2) with 𝑎being the scale factor, and 𝑤0 and 𝑤𝑎control the present-day value and time derivative of the dark energy equa- tion of state, respectively. The ΛCDM model is recovered in the limit 𝑤0 = −1 and 𝑤𝑎= 0. The free cosmological parameters are: • Ω𝑚, the total baryon density, • Ω𝑏, the total matter density, • ℎ= 𝐻0/(100 km s−1 Mpc−1), the dimensionless Hub- ble parameter, • 𝐴𝑠, the amplitude of initial scalar fluctuations, • 𝑛𝑠, the scalar spectral index, • 𝑤0 and 𝑤𝑎, the dark energy equation of state parameters. We fix the summed neutrino masses to the minimum value allowed by neutrino oscillation experiments, Í 𝑚𝜈= 0.058 eV, assuming three degenerate massive species [119, 120]. The parameter space boundaries are described in Table I, set to match those of EuclidEmulator2, which we have adopted 2 https://github.com/lesgourg/class_public 3 https://rci.stonybrook.edu/HPC Ω𝑚 Ω𝑏 𝑛𝑠 𝐴𝑠× 109 ℎ 𝑤0 𝑤𝑎 Min 0.24 0.04 0.92 1.7 0.61 −1.3 −0.7 Max 0.40 0.06 1.00 2.5 0.73 −0.7 0.5 Center 0.319 0.05 0.96 2.1 0.67 −1 0 TABLE I. Parameter space validity bounds of our COLA-based em- ulator. The training set is drawn from a slightly bigger hypercube, where each dimension is stretched by 10% in each direction (e.g. 0.224 < Ω𝑚< 0.416). We also define a center cosmology chosen to agree with the EuclidEmulator2 reference cosmology [78]. as our benchmark for comparison. We emphasize that this choice is arbitrary, and our methodology is agnostic to the benchmark we have chosen. To improve model performance near parameter space edges, our COLA training simulations are sampled from an expanded box where each parameter inter- val has been stretched symmetrically by 10%. Cosmologies within this volume are selected using Latin hypercube sam- pling to ensure uniform coverage for training and validation. B. Emulation of COLA Boosts 1. Emulator Prototypes with halofit Our goal is for the emulation error (i.e., the error in recov- ering COLA boosts from a predetermined test set excluded from training) to be significantly smaller than the intrinsic COLA approximation error relative to full 𝑁-body simula- tions. To determine optimal hyperparameters, such as training set size and emulator architecture, we perform mock tests us- ing halofit [121] boosts. We generated training datasets with 𝑁train ∈[500, 600, 700, 800, 1000] halofit boosts and a test dataset with 𝑁test = 200. We found that 600 simulations were sufficient to achieve ∼0.1% error at 𝑘= 1 ℎ/Mpc; conserva- tively, we adopt 𝑁train = 700 and 𝑁test = 200 for our COLA emulator. 2. Post-processing the Simulation Boosts We define the nonlinear boost as 𝐵X(𝑘, 𝑧|θ) ≡𝑃X(𝑘, 𝑧|θ) 𝑃L(𝑘, 𝑧|θ) , (3) where θ refers to a point in the 𝑤0𝑤𝑎CDM parameter space, 𝑃𝑋(𝑘, 𝑧|θ) is the matter power spectrum for cosmology θ, either linear (denoted 𝑃L), or calculated using COLA or an- other N-body method (generically denoted 𝑃X). Prior to computing 𝐵COLA, we subtract the shot noise power spec- trum, 𝑃SN = (𝐿/𝑁part)3 = 1(Mpc/ℎ)3, from 𝑃COLA. At high redshift, 𝑧> 1.182, aliasing of the 𝑘modes near the Nyquist frequency leads to a power spectrum less than the shot noise for some simulations, and the subtraction would lead to unphysical negative values [113]; for these redshifts, 4 we choose to cut the scales at half of the Nyquist frequency, 𝑘𝑧>1.182 ≤(𝜋/2) ℎ/Mpc, following our procedure in [99] (also see [112]). We then perform several transformations to optimize the inputs and outputs of our emulator. For instance, machine learning techniques are known to perform poorly if the features span several orders of magnitude. To stabilize the following procedures, we normalize the cosmological parameters θ to [−1, 1] according to θ𝑁= −1 + 2 θ −θmin θmax −θmin , (4) where minimum and maximum values correspond to the train- ing set boundaries, i.e., stretching the intervals of Table I by 10% in each direction. Furthermore, we standardize the boosts using 𝐵COLA 𝑁 (𝑘, 𝑧|θ) = 𝐵COLA(𝑘, 𝑧|θ) −¯𝐵COLA(𝑘, 𝑧) 𝜎𝐵(𝑘, 𝑧) , (5) where 𝐵COLA 𝑁 (𝑘, 𝑧|θ) is the normalized COLA boost, ¯𝐵(𝑘, 𝑧) is the average of all boosts in the training set, and 𝜎𝐵(𝑘, 𝑧) is their standard deviation. We then perform a Principal Com- ponent Analysis (PCA) decomposition of the COLA boosts using scikit-learn [122] to reduce dimensionality. We retain 𝑁PC = 15 components, which are sufficient to recover the test set boosts to within 0.2%. 3. Neural Network Emulator After post-processing, we train our emulator with the nor- malized cosmological parameters as input features and the principal components as targets. We use a fully connected neural network with three hidden layers, each with 1024 neu- rons, with a mean squared error loss function, L = 𝑁train ∑︁ 𝑖=1 𝑁PC ∑︁ 𝑗=1 (𝛼𝑖,train 𝑗 −𝛼𝑖,pred 𝑗 )2, (6) where 𝛼𝑖,train 𝑗 is the 𝑗-th principal component coefficient of the 𝑖-th cosmology in the training set, and 𝛼𝑖,pred 𝑗 the corresponding prediction. We use the parametric activation function [123, 124] 𝑦𝑚+1 𝑛 =  𝛾𝑚 𝑛+ (1 −𝛾𝑚 𝑛) 1 1 + 𝑒−𝛽𝑚 𝑛𝑦𝑚 𝑛  ˜𝑦𝑚 𝑛, (7) where 𝑦𝑚+1 𝑛 is the value of the 𝑛-th neuron of the (𝑚+ 1)-th layer, ˜𝑦𝑚 𝑛the 𝑛-th neuron from the (𝑚+1)-th layer after the ap- plication of weights and biases, and 𝛾𝑚 𝑛and 𝛽𝑚 𝑛are parameters of the activation function that can be back-propagated during training. We use the Adam [125] optimizer to train the model parameters. 4. Boost Errors We perform a series of accuracy checks on the emulator out- puts. First, to assess the accuracy of our neural network, we compare the emulator’s predictions for test set cosmologies, unseen in the training procedure, against the actual COLA simulations. The relative errors are shown in the first panel of Figure 1. At 𝑘= 1 ℎ/Mpc, 90% of the test set cosmolo- gies have an emulation error within 0.1%. Comparing these direct emulation errors to the errors on the corrected boosts ˜𝐵COLA(𝑘, 𝑧) (second panel), we note that the errors between the two differ by an order of magnitude. This indicates that, in the context of comparing COLA with high-precision simu- lations, the COLA emulator faithfully reproduces its simula- tions, and differences between the emulators can be attributed to the COLA approximation rather than the performance of the machine learning model. As per [98], COLA simulations increasingly lose power4 at progressively smaller scales, leading to typical errors of 10% at 𝑘= 1 ℎ/Mpc for raw COLA boosts 𝐵COLA, as defined in Equa- tion 3. However, this power loss is cosmology-independent. Our previous work [99] showed that the best technique to build a COLA-based emulator is to leverage existing high-precision emulators in ΛCDM, using COLA only to extend the results into new dimensions, i.e., extra model parameters. This idea is encoded in the following expression for the nonlinear boost, ˜𝐵COLA(𝑘, 𝑧|θ) = 𝐵N-body(𝑘, 𝑧|θ𝑝) × 𝐵COLA(𝑘, 𝑧|θ) 𝐵COLA(𝑘, 𝑧|θ𝑝) , (8) where θ𝑝is the projection of θ in the ΛCDM subspace and 𝐵N−body is the nonlinear boost obtained from our benchmark 𝑁-body prescription. We choose EuclidEmulator2 as the base N-body prescription. The second panel of Figure 1 shows the relative difference between ˜𝐵(𝑘, 𝑧), calculated using Equation 8, and the bench- mark emulator predictions, 𝐵EE2(𝑘, 𝑧), for all test set cosmolo- gies. At 𝑘= 1 ℎ/Mpc, 90% of cosmologies have emulation errors within 2%, with 50% of cosmologies contained well within 1%. This demonstrates that our method successfully mitigates the accumulation of errors typical of COLA simula- tions in the nonlinear regime, allowing us to generate accurate predictions across our target 𝑘range. Furthermore, the cos- mologies with larger errors are those with higher values of 𝑤0 + 𝑤𝑎, a region of the parameter space excluded by current data. Finally, we consider a third nonlinear prescription: using the nonlinear boost from EuclidEmulator2 in the ΛCDM subspace; this approach will be denoted as EE2 ΛCDM. For this purpose, we compute the nonlinear boosts for 𝑤0𝑤𝑎CDM cosmologies using EuclidEmulator2, setting 𝑤0 = −1 and 𝑤𝑎= 0. The third panel of Figure 1 shows relative errors 4 This loss of power is well known to PM N-body codes, which fail to resolve the internal dynamics of halos. This trend of losing power starts roughly at a scale at which the pure 1-halo term of the halo model would dominate the power spectrum. 5 5.0 2.5 0.0 2.5 5.0 1000×(B COLA NN /B COLA 1) 50% 10 1 100 k (h/Mpc) 5.0 2.5 0.0 2.5 5.0 100×(BCOLA/B EE2 1) 90% 10 1 100 k (h/Mpc) 5.0 2.5 0.0 2.5 5.0 100×(B EE2 CDM/B EE2 1) 100% FIG. 1. From top to bottom: 1) Relative errors between the COLA boosts 𝐵COLA predicted by the emulator versus those obtained from the test set simulations. 2) Relative errors between ˜𝐵COLA (see Equation 8) and the boosts from EE2. 3) Relative errors between 𝐵EE2 ΛCDM and EE2. Colors in all panels denote the percentile of cosmologies around the mean: blue contours enclose 50% of cos- mologies, red contours enclose 90% of cosmologies, and the outer gray lines enclose 100% of cosmologies. All panels show results for 𝑧= 0, see Appendix A for the equivalent plots at higher redshifts. between 𝐵EE2 ΛCDM and the actual EuclidEmulator2 boosts. The 90% percentile shows errors of the order of 7.5% at 𝑘= 1 ℎ/Mpc, significantly worse than the COLA errors of the panel above. These results support the viability of our emulator for pa- rameter inference in 𝑤0𝑤𝑎CDM, as the emulated boost ˜𝐵(𝑘, 𝑧) agrees with our 𝑁-body proxy at a level suitable for upcoming precision cosmology experiments [126] while requiring sig- nificantly less computational expense compared to traditional 𝑁-body methods. In the following, we investigate how the dif- ferences shown in Figure 1 impact the parameter constraints from simulated cosmic shear analysis. Parameter Fiducial Prior Survey specifications Area 12300 deg2 – Shape noise per component 0.26 – 𝑛sources eff 11.2 arcmin−2 – Photometric redshift offsets Δ𝑧𝑖 source 0 N[0, 0.002] Intrinsic alignment (NLA) 𝑎1 0.7 U[-5, 5] 𝜂1 -1.7 U[-5, 5] Shear calibration 𝑚𝑖 0 N[0, 0.005] TABLE II. Mock survey specifications for our simulated analysis, and nuisance parameter priors. U[𝑎, 𝑏] denotes an uniform distribution with edges [𝑎, 𝑏], while N [𝑎, 𝑏] denotes a Gaussian distribution with mean 𝑎and standard deviation 𝑏. Tomographic bin indices are denoted by 𝑖, and all our priors are the same for all bins. III. ANALYSIS OF LSST-Y1 SIMULATED DATA A. Simulating Cosmic Shear Data We simulate cosmic shear observations based on LSST-Y1, following the methodology of [127, 128] and detailed in [99]. Survey specifications, source galaxy redshift distributions, and nuisance parameter priors are taken from the LSST DESC Science Requirements Document [65], and summarized in Ta- ble II. The redshift distribution is modeled as a Smail distri- bution convolved with a Gaussian uncertainty 0.02(1 + 𝑧) and divided into five tomographic bins with equal galaxy number densities. The cosmic shear two-point correlation functions 𝜉𝑖𝑗 ± (𝜃) are computed by first evaluating in Fourier space, 𝐶𝑖𝑗 𝜅𝜅(ℓ), using the nonlinear matter power spectrum via the Limber ap- proximation, then transforming to real space via the analytic functions in Appendix A of [6]. We compute 𝜉𝑖𝑗 ± in 26 loga- rithmically spaced angular bins between 2.5 and 900 arcmin, averaging over each bin. We include standard self-calibrating systematics in our computation of 𝜉± — photometric redshift uncertainties, multiplicative shear calibration, and the non- linear alignment (NLA) model of intrinsic galaxy alignments (see, e.g., [129, 130]). Likelihood analyses are performed using Cocoa, the Cobaya-CosmoLike Joint Architecture5 [131–133]. Linear power spectra are computed with camb [134, 135], and nonlin- ear corrections are applied using either ˜𝐵(Eq. 8), 𝐵EE2 ΛCDM, or EuclidEmulator2. We use MCMC sampling to explore the parameter space and assess convergence using the Gel- man–Rubin criterion (|𝑅−1| < 0.01) [136]. 5 https://github.com/CosmoLike/cocoa 6 𝜃 Ω𝑚 109𝐴𝑠 𝑛𝑠 𝑤0 𝑤𝑎 𝜃↑ 0.36 2.3 0.98 -0.85 0.25 𝜃↓ 0.28 1.9 0.94 -1.15 -0.35 TABLE III. "High" (↑) and "low" (↓) cosmological parameter values used to construct the fiducial cosmologies of our analyses. In this notation, fiducial cosmologies are labeled in the text by the parameters shifted from the central values listed in Table I. Cutoff ⟨𝑧⟩ 0.33 0.54 0.74 1.01 1.62 Cutoff 1 1.4 1.1 0.9 0.9 0.8 Cutoff 2 2.9 2.2 1.9 1.7 1.6 Cutoff 3 5.7 4.3 3.8 3.4 3.3 TABLE IV. Approximate scales in wavenumber 𝑘, measured in ℎMpc−1, for each galaxy source bin that correspond to the angu- lar cutoffs in 𝜉+ tested in our LSST-like cosmic shear analysis. The scales are approximated by computing the angular diameter distance to the mean redshift of each bin and converting to a wavenumber. To evaluate the emulator’s accuracy in the parameter space, we define 29 fiducial cosmologies within the emulator’s valid- ity range. One data vector is generated at the center cosmology shown in Table I. We then vary cosmological parameters to their intermediate "low" (↓) and "high" (↑) values, shown in Table III. We define four cosmologies by varying both 𝑤0 and 𝑤𝑎for each combination of their low and high values, keeping other parameters fixed at their central values. We further de- fine 24 fiducial cosmologies varying 𝑤0, 𝑤𝑎, and one of Ω𝑚, 𝐴𝑠, or 𝑛𝑠, using all combinations of their low and high values. All data vectors are generated using EuclidEmulator2 as the nonlinear prescription, and the covariance is computed analyt- ically with CosmoCov [127]. We do not include a Gaussian noise realization in the fiducial data vectors. To mitigate biases from emulator inaccuracies on small scales, which may degrade the goodness-of-fit, we apply three angular scale cuts (C1-C3) following [99], removing 𝜉± mea- surements below a minimum angular separation, 𝜃min. The corresponding wavenumbers are shown in Table IV; cuts are more aggressive for 𝜉−due to its sensitivity to smaller scales. Finally, to compute the cosmic shear integrals beyond the emulator’s 𝑘-range, we extrapolate log( ˜𝐵(𝑘)) versus log(𝑘) using a linear fit for both COLA and EE2 emulators. A Sav- itsky–Golay filter (order 1, window length 5) is applied to the last entries of the ˜𝐵COLA vector to suppress noise before extrapolation. B. Quantifying discrepancies To quantify deviations between parameter constraints from the LSST-Y1 simulated analyses using the nonlinear prescrip- tions 𝑋and 𝑌, we first evaluate the one-dimensional bias for key parameters Ω𝑚, 𝑆8, 𝑤0, and 𝑤𝑎, defined as Δ𝜃𝑖 𝜎𝜃𝑖 = ⟨𝜃𝑖⟩X −⟨𝜃𝑖⟩Y √︃ 𝜎2 𝜃𝑖,X + 𝜎2 𝜃𝑖,Y , (9) where 𝜃𝑖denotes one of Ω𝑚, 𝑆8, 𝑤, or 𝑤𝑎, and ⟨𝜃𝑖⟩and 𝜎2 𝜃𝑖 are, respectively, the sample mean and sample variance from MCMC posteriors. To capture parameter correlations, we also compute the Fig- ure of Bias (FoB), a multivariate generalization of the 1D bias defined by FoB(θ) = [Δ⟨θ⟩T · (𝐶COLA + 𝐶EE2)−1 · Δ⟨θ⟩]1/2, (10) where θ denotes a vector of cosmological parameters, Δ⟨θ⟩= ⟨θ⟩COLA −⟨θ⟩EE2 is the difference in sample means, and 𝐶X denotes the parameter covariance matrices for prescription X. We choose to calculate the FoB in selected 2D planes: Ω𝑚×𝑆8, Ω𝑚× 𝑤0 and Ω𝑚× 𝑤𝑎. Furthermore, we also calculate the FoB in the seven cosmological parameters. A bias of less than 0.3 is considered negligible [137]. Changing the nonlinear modeling of the cosmic shear data vector may lead to underestimating or overestimating cosmo- logical parameters, compared to a fiducial model. The strength of the constraints is measured by the Figure of Merit (FoM) statistic, defined as FoM = 𝛼det(𝐶)−1/2, (11) where 𝐶is the covariance matrix of cosmological parameters obtained from the MCMC, and 𝛼is a prefactor that depends on the desired limits (i.e., 1𝜎or 2𝜎) and the number of parameters considered [138]. We report the FoM ratio between the COLA and benchmark analyses. To assess whether a nonlinear prescription X may degrade the goodness-of-fit when compared to the benchmark emula- tor, we compute the quantity Δ𝜒2 = (tX −tEE2)𝑇· 𝐶−1 data · (tX −tEE2), (12) where tX is the cosmic shear theory prediction calculated using the nonlinear prescription X and 𝐶data is the data covariance matrix. We compute this quantity for random points across the parameter space. IV. RESULTS FOR LSST-Y1 SIMULATED ANALYSIS To evaluate the accuracy of our emulator in practice, we be- gin by examining cosmological parameter constraints across three nonlinear prescriptions: the EuclidEmulator2 bench- mark, our COLA-based emulator, and EE2 ΛCDM (i.e., using the boost from the projected ΛCDM cosmology). Figure 2 shows 1D and 2D posterior contours (68% and 95%) for the parameters Ω𝑚, 𝑆8, 𝑤0 and the sum 𝑤0 + 𝑤𝑎, assuming the central cosmology (see Table I) as the fiducial. Since the con- straints on 𝑤𝑎are weak, we focus on 𝑤0 + 𝑤𝑎as the more informative parameter combination. 7 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.80 0.84 0.88 S8 0.80 0.85 S8 1.2 0.9 w0 2.0 1.0 w0 +wa Cutoff 1 Fid. Cosmo.: Center 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.83 0.86 S8 0.80 0.85 S8 1.2 0.9 w0 2.0 1.0 w0 +wa Cutoff 2 EE2 w0wa COLA EE2 CDM 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.82 0.84 0.86 S8 0.80 0.84 S8 1.2 0.9 w0 2.0 1.0 w0 +wa Cutoff 3 FIG. 2. Cosmological parameter constraints (68% and 95%) from the LSST-Y1 simulated analyses assuming the center cosmology from Table I as the fiducial. Green-filled contours denote constraints obtained using EuclidEmulator2 as the nonlinear prescription, orange dashed and dotted contours use our COLA emulator, and blue dashed contours use EE2 ΛCDM prescription. The left, middle, and right panels show constraints using the angular cutoffs C1, C2, and C3, respectively. We observe no shifts between the analyses; however, using cutoffs C2 and C3, the constraints obtained using COLA are slightly tighter than those found with EE2 for 𝑆8 and 𝑤0. For EE2 ΛCDM, this effect is amplified. Because our fiducial cosmology is in the ΛCDM subspace, where all prescriptions are equivalent, we observe no signifi- cant biases between the different constraints, and all posteriors peak at the same point. However, an indication of failure out- side ΛCDM is observed in the size of the error bars. Employing the most conservative cutoff (C1), the COLA and EuclidEm- ulator2 contours are nearly indistinguishable. At the same time, those of EE2 ΛCDM are significantly smaller than the "true" error bars of EE2, especially at the parameters 𝑆8 and 𝑤0, positively correlated. These results suggest that, in the context of LSST-Y1 cosmic shear, COLA emulators are equiv- alent to high-precision N-body emulators up to 𝑘= 1 ℎ/Mpc. Figure 1 indicates that, as we advance towards smaller angu- lar scales, the disagreements in the boost predictions between COLA and EuclidEmulator2 quickly increase. For Cutoffs 2 and 3, the constraints are still in excellent agreement, but COLA yields slightly tighter error bars on 𝑆8 and 𝑤0; this effect is much more pronounced in the case of EE2 ΛCDM. The 1D marginalized constraints under Cutoff 2 are: • EuclidEmulator2: 𝑆8 = 0.835 ± 0.013, 𝑤0 = −1.02+0.15 −0.18, 𝑤0 + 𝑤𝑎= −1.15+0.32 −0.28; • COLA: 𝑆8 = 0.837 ± 0.012, 𝑤0 = −1.00 ± 0.14, 𝑤0 + 𝑤𝑎= −1.13 ± 0.29; • EE2 ΛCDM: 𝑆8 = 0.838 ± 0.008, 𝑤0 = −1.00 ± 0.14, 𝑤0 + 𝑤𝑎= −1.11 ± 0.26. Using Cutoff 3, the constraints are: • EuclidEmulator2: 𝑆8 = 0.836+0.012 −0.010, 𝑤0 = −1.01 ± 0.14, 𝑤0 + 𝑤𝑎= −1.10 ± 0.26; • COLA: 𝑆8 = 0.838 ± 0.009, 𝑤0 = −1.00 ± 0.13, 𝑤0 + 𝑤𝑎= −1.10 ± 0.26; • EE2 ΛCDM: 𝑆8 = 0.838 ± 0.007, 𝑤0 = −0.99 ± 0.13, 𝑤0 + 𝑤𝑎= −1.09 ± 0.24. To quantify the overestimation in 𝑆8 and 𝑤0, we compute the figure of merit (FoM) in the 𝑆8 ×𝑤0 plane. Assuming Cut- off 2, relative to EuclidEmulator2, the COLA emulator in- creases the FoM by 8%, indicating slightly tighter constraints, whereas the projected EE2 ΛCDM boost inflates the FoM by approximately 47%. For the most "aggressive" Cutoff 3, the FoM obtained with COLA is 19% bigger than that of EE2, while EE2 ΛCDM increases the FoM by 58%. Remarkably, in terms of figure of merit, the COLA emulator performs better at Cutoff 3 than EE2 ΛCDM at Cutoff 1, where the FoM is increased by 28%. From Figure 2, we observe that most of the disagreement between COLA and EuclidEmulator2 lies in low 𝑤0 and low 𝑆8 values, two parameters that are positively correlated in the analysis. This region of the parameter space is excluded by low-redshift geometric data from type-Ia super- novae [58, 139] and BAO [30, 34]. Therefore, we expect that this disagreement would not affect constraints obtained from the combination of LSST cosmic shear data with supernovae and BAO distance measurements. Figure 3 further investigates the disagreements between COLA and EuclidEmulator2, showing histograms of Δ𝜒2 (see Equation 12) for COLA and EE2 ΛCDM compared to EuclidEmulator2, obtained from 10,000 cosmologies sam- pled randomly from the emulation box (see Table I). Across all cutoffs, the EE2 ΛCDM prescription yields Δ𝜒2 distributions that are substantially broader than those from our COLA emu- lator. As such, the use of COLA simulations can significantly improve the consistency with the high-precision benchmark, while the projection approach fails to capture the nonlinear cor- rections required by dynamical dark energy models, leading to degraded fits and potentially biased constraints. In Figure 4, we investigate how these Δ𝜒2 values distribute around cosmo- logical space; we find a clear trend of higher values of Δ𝜒2 correlated with higher values of Ω𝑚and 𝜎8. To assess the robustness of our results beyond the ΛCDM 8 0 100 200 300 Number of Samples 2 COLA <1: 89.5% 2 EE2 CDM <1: 45.8% Cutoff 1 EE2 CDM COLA 0 100 200 300 Number of Samples 2 COLA <1: 77.5% 2 EE2 CDM <1: 27.5% Cutoff 2 10 2 10 1 100 101 102 2 0 100 200 300 Number of Samples 2 COLA <1: 62.8% 2 EE2 CDM <1: 18.4% Cutoff 3 FIG. 3. Histograms of Δ𝜒2 X = (tX −tEE2)𝑇· 𝐶−1 data · (tX −tEE2) for prescription X ∈{COLA, EE2 ΛCDM} compared to EE2 at the full 𝑤0𝑤𝑎cosmology, with random samples drawn from the prior. The top, middle, and bottom panels show results for angular cut- offs C1, C2, and C3, respectively. The distribution of Δ𝜒2 values demonstrates an order of magnitude difference between theory pre- dictions calculated using our COLA method compared to the tradi- tional EE2 ΛCDM approach, showing an improved fit from modeling the extended parameters using COLA. subspace, we repeat our analysis using the fiducials described in Section II, all of them with 𝑤0 ≠−1 and 𝑤𝑎≠0. We find that the biases may increase as we shift 𝑤0 and 𝑤𝑎in the same direction, either higher or lower, in the parameter space of Table I. This is due to a known geometrical degener- acy along 𝑤0 + 𝑤𝑎, which can suppress modeling systematics 0.25 0.30 0.35 0.40 m 0.6 0.8 1.0 8 2 4 2 COLA FIG. 4. Spatial distribution of Δ𝜒2 in the Ω𝑚×𝜎8 plane. We observe higher values of Δ𝜒2 at higher values of Ω𝑚and 𝜎8. when 𝑤0 +𝑤𝑎≈1. Figure 5 illustrates the results for a fiducial with 𝑤↑ 0 and 𝑤↑ 𝑎(see Table III), keeping the other parameters fixed to their central values. In this case, we see a remarkable agreement between COLA and EE2 across all angular cut- offs. We focus on Cutoff 3, which yields the marginalized 1D constraints: • EuclidEmulator2: 𝑆8 = 0.772+0.008 −0.006, 𝑤0 > −0.882, 𝑤0 + 𝑤𝑎= −0.76+0.26 −0.15; • COLA: 𝑆8 = 0.770+0.008 −0.007, 𝑤0 > −0.900, 𝑤0 + 𝑤𝑎= −0.79+0.26 −0.17; • EE2 ΛCDM: 𝑆8 = 0.762 ± 0.006, 𝑤0 = −0.879+0.140 −0.088, 𝑤0 + 𝑤𝑎= −0.82+0.26 −0.17. In this case, the EE2 ΛCDM projection induces substantial biases in 𝑆8, shifting up by nearly 1𝜎. By comparison, the COLA emulator remains consistent to within 0.25 standard deviations. This trend extends to the multidimensional figures of bias: COLA has a 7D figure of bias of FoB7D = 0.27 compared to the benchmark, while the projected EE2 ΛCDM reaches FoB7D = 1.04. Unlike the parameter constraints assuming the center cos- mology from Table I as the fiducial, we see in Figure 5 that the posteriors calculated using our COLA-based emulator now closely track those generated using our 𝑁-body proxy, rather than overestimating 𝑆8 and 𝑤0 in any substantive way. We find that in switching from the center cosmology to 𝑤↑ 0 and 𝑤↑ 𝑎, the relative ratio between FoMs of COLA and EE2 in the 𝑆8 × 𝑤0 plane is 0.97, while the same ratio is 1.21 for EE2 ΛCDM. To investigate whether our COLA emulator can provide un- biased constraints with FOMs similar to EE2 across the param- eter space, Figure 6 shows the 1D biases of Equation 9 for Ω𝑚, 𝑆8, 𝑤0 and 𝑤𝑎between the COLA emulator and our baseline EE2 for all scale cuts and all of the 29 cosmologies outlined in Section II. The fiducial cosmologies are listed in increasing order of their associated 𝜎8 values. All 1D biases are within 9 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.72 0.76 0.80 S8 0.72 0.78 S8 1.2 0.9 w0 1.6 0.8 w0 +wa Cutoff 1 Fid. Cosmo.: w0 ,wa 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.76 0.79 S8 0.75 0.80 S8 1.2 0.9 w0 1.6 0.8 w0 +wa Cutoff 2 EE2 w0wa COLA EE2 CDM 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.74 0.76 0.78 S8 0.75 0.78 S8 1.2 0.9 w0 1.6 0.8 w0 +wa Cutoff 3 FIG. 5. Cosmological parameter constraints (68% and 95%) from the LSST-Y1 simulated analyses assuming a fiducial cosmology with 𝑤↑ 0 and 𝑤↑ 𝑎(see Table III), keeping the other parameters at their central values. Green-filled contours denote constraints obtained using EuclidEmulator2 as the nonlinear prescription, orange dashed and dotted contours use our COLA emulator, and blue dashed contours use EE2 ΛCDM prescription. The left, middle, and right panels show constraints using the angular cutoffs C1, C2, and C3, respectively. In this case, the EE2 ΛCDM prescription can provide significant biases in 𝑆8, which are not present when using COLA. 0.3𝜎, even for cosmologies with higher values of 𝜎8 and Ω𝑚, where Figure 4 shows our emulator performs worse. We com- puted the 7D figure of bias in cosmological parameters for all fiducial cosmologies using Cutoff 3, finding a maximum value of FoB7D = 0.35 at the cosmology Ω↑ 𝑚, 𝑤↑ 0, 𝑤↑ 𝑎. We conclude that, for all scale cuts considered in this work, our emulator succeeds in providing unbiased constraints on cosmological parameters when compared to a high-precision N-body em- ulator in the context of dynamical dark energy models, even with significant variations in cosmological parameters and ex- treme values of 𝜎8. Further, Figure 6 indicates our measure of the relative tightness of the parameter constraints in the 𝑆8 ×𝑤 plane compared to EE2, FOMCOLA 𝑆8×𝑤/FOMEE2 𝑆8×𝑤, assuming Cut- off 3. The highest ratio is 1.19 for the center cosmology, shown in Figure 2. The same remark from before applies to other fiducial cosmologies: the disagreement between COLA and EE2 is driven mainly by low values of 𝑤0 and 𝑆8, a region of the parameter space disallowed by low-redshift distance measurements. V. CONCLUSION In light of recent hints of dynamical dark energy, constrain- ing its equation of state has become a task of central impor- tance. While geometrical probes, such as BAO and supernovae measurements, are the primary probes of late dark energy be- havior, galaxy surveys can also probe dark energy dynamics through their effects on the growth of large-scale structure. Extracting robust constraints from these surveys requires ac- curate modeling of nonlinear gravitational effects in the mat- ter power spectrum, which becomes increasingly challenging in extended cosmologies where 𝑤(𝑧) departs from a cosmo- logical constant. While accurate modeling can be achieved with 𝑁-body simulations, their high computational cost limits their applicability across the myriad candidate dynamical dark energy models (e.g., quintessence), and, further, for models that directly impact the growth of matter perturbations beyond modifications in the Universe’s expansion. In this context, COLA is a fast approximate alternative to 𝑁-body simulations, which, when appropriately corrected, presents an avenue for constructing accurate emulators for the nonlinear matter power spectrum at a fraction of the computational cost. In this work, we have built an emulator for the nonlinear boost assuming the 𝑤0𝑤𝑎CDM cosmological model using a suite of 1400 COLA simulations, two for each of the 700 cos- mologies in the training set to account for pairing-and-fixing. To evaluate the accuracy of the neural network, we ran a pair of simulations for each of the 200 cosmologies in the test set. The total computational cost of all simulations is estimated at 153.600 CPU-hours. A simple connected neural network with a trainable activation function can reproduce the test set boosts at 0.1% error. The computational cost of the simula- tion suite could potentially be lowered by using an alternative sampling algorithm for the training set cosmologies: the Sobol sequence [140], which has been shown to improve emulation errors compared to Latin hypercube sampling [83]. We have compared our COLA emulator to a benchmark 𝑁-body emulator, chosen as EuclidEmulator2. We test an additional nonlinear prescription common to analyses of extended cosmological models without 𝑁-body simulations: using 𝑁-body boosts at the projected ΛCDM cosmology (i.e., setting 𝑤0 = −1 and 𝑤𝑎= 0), an approach we denote as EE2 ΛCDM. We compare nonlinear models in two manners: at the boost level, shown in Figure 1, and at the level of a simulated cosmic shear analysis akin to LSST-Y1, assuming EuclidEmulator2 as the true nonlinear model. In the data analysis, to account for possible variations in the cosmolog- ical parameters when beyond-ΛCDM models are analyzed, we define fiducial cosmologies scattered across the parameter 10 0.5 0.0 0.5 m/ m m,w0 ,wa As ,w0 ,wa m,w0 ,wa ns ,w0 ,wa m,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa m,w0 ,wa As ,w0 ,wa As ,w0 ,wa ns ,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa Center ns ,w0 ,wa m,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa ns ,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa m,w0 ,wa As ,w0 ,wa m,w0 ,wa m,w0 ,wa 0.5 0.0 0.5 S8/ S8 0.5 0.0 0.5 w0/ w0 0.5 0.0 0.5 (w0 +wa)/ w0 +wa 0.95 | 0.97 | 0.92 1.04 | 1.05 | 0.96 1.01 | 1.06 | 1.04 0.99 | 1.05 | 1.01 1.04 | 1.05 | 1.07 1.00 | 1.05 | 0.94 1.06 | 1.01 | 1.03 1.02 | 1.04 | 1.14 0.98 | 0.95 | 1.01 1.00 | 0.98 | 1.06 1.03 | 1.03 | 1.12 0.99 | 1.04 | 1.10 1.02 | 1.15 | 1.18 1.06 | 1.10 | 1.17 0.93 | 1.02 | 0.99 1.06 | 1.08 | 1.19 0.91 | 1.01 | 1.08 1.07 | 1.10 | 1.02 1.04 | 1.07 | 1.15 1.07 | 1.05 | 1.14 1.02 | 1.07 | 1.18 1.00 | 0.96 | 0.97 0.99 | 0.96 | 1.02 1.07 | 0.97 | 1.04 1.00 | 1.05 | 1.08 1.04 | 1.11 | 1.13 0.92 | 1.00 | 0.99 0.95 | 0.98 | 0.99 0.94 | 0.90 | 0.92 FoMCOLA S8 ×w / FoMEE2 S8 ×w C1 | C2 | C3 C1 C2 C3 FIG. 6. One-dimensional biases from Equation 9, shown for the parameters Ω𝑚, 𝑆8, 𝑤0 and 𝑤𝑎, sorted in increasing order of their associated 𝜎8 values. On the right-hand side, we show the ratios in figure of merit (see Equation 11) in the 𝑆8 × 𝑤0 plane between the analyses using COLA and EE2, for the three angular cutoffs. The gray bands represent 0.3𝜎bias. 11 space, including outside ΛCDM, with significant variations in Ω𝑚and 𝜎8. We have assessed the goodness-of-fit degradation, shown in Figure 3, and parameter constraint biases, shown in Figures 2, 5 and 6. We find that, at the boost level, our emu- lator can reproduce the benchmark emulator results with less than 2% error at 𝑘= 1 ℎ/Mpc, while EE2 ΛCDM produces 7.5% errors at the same scales. As for the simulated analysis, we find that our COLA-based emulator can provide unbiased constraints compared to EuclidEmulator2: all 1D biases are well within 0.3𝜎, even for the most aggressive angular cutoffs and exotic fiducial cosmologies with extreme values of Ω𝑚, 𝜎8, or outside ΛCDM. Furthermore, all 7D figures of bias are below 0.35. At the precision level expected for the first year of LSST observations, the COLA emulator yields constraints equivalent to those obtained using EuclidEmula- tor2 for scales up to 𝑘≈3 ℎ/Mpc, comparable to our Cutoff 2. Our results demonstrate that COLA, when combined with an accurate ΛCDM reference, offers a viable and flexible frame- work for extending nonlinear modeling to dynamical dark en- ergy and other beyond-ΛCDM scenarios. We emphasize that, while we use EuclidEmulator2 as the "baseline" ΛCDM emulator in Equation 8, any other emulator could be used to provide ΛCDM boosts. Moreover, our methodology can be applied to more exotic models that also modify the growth of structure directly, such as modified gravity or coupled dark en- ergy [105]. We also remark that there are avenues to improve our methodology. One example is the choice of "reference" cosmology. Equation 8 uses the projected ΛCDM cosmology because, geometrically, it is the closest cosmology; however, a possible choice that improves accuracy is to use a 𝑤CDM cos- mology with the same value of 𝜎8, akin to what is done in the Casarini [141] prescription. Moreover, COLA can be com- bined with other analytical or semi-analytical prescriptions, such as the one proposed in [142], improving its accuracy at small scales. ACKNOWLEDGEMENTS The authors would like to thank Stony Brook Research Com- puting and Cyberinfrastructure and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system. This research was also supported by resources supplied by the Cen- ter for Scientific Computing (NCC/GridUNESP) of the São Paulo State University (UNESP). This work made use of the CHE cluster, managed and funded by COSMO/CBPF/MCTI, with financial support from FINEP and FAPERJ, and oper- ating at the RSDC - Datacenter para Ciência Alfredo Mar- ques de Oliveira/CBPF. JR acknowledges the financial support from FAPESP under grant 2020/03756-2, São Paulo Research Foundation (FAPESP) through ICTP-SAIFR. JR also acknowl- edges that this study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001. VM is supported by the Roman Project Infrastructure Team “Maximizing Cosmological Sci- ence with the Roman High Latitude Imaging Survey" (NASA contracts 80NM0018D0004-80NM0024F0012). V.M. is also partially supported by the Roman Project Infrastructure Team “A Roman Project Infrastructure Team to Support Cosmolog- ical Measurements with Type Ia Supernovae" (NASA contract 80NSSC24M0023). [1] M. A. Troxel, N. MacCrann, J. Zuntz, T. F. Eifler, E. Krause, et al., Dark Energy Survey Year 1 results: Cosmological con- straints from cosmic shear, Phys. Rev. D 98, 043528 (2018), arXiv:1708.01538 [astro-ph.CO]. [2] T. M. C. Abbott, F. B. Abdalla, A. Alarcon, J. Aleksić, S. Al- lam, et al., Dark Energy Survey year 1 results: Cosmological constraints from galaxy clustering and weak lensing, Phys. Rev. D 98, 043526 (2018), arXiv:1708.01530 [astro-ph.CO]. [3] T. M. C. Abbott, F. B. Abdalla, S. Avila, M. Banerji, E. Bax- ter, et al., Dark Energy Survey year 1 results: Constraints on extended cosmological models from galaxy clustering and weak lensing, Physical Review D 99, 123505 (2019), arXiv:1810.02499 [astro-ph.CO]. [4] A. Amon, D. Gruen, M. A. Troxel, N. MacCrann, S. Dodelson, et al., Dark Energy Survey Year 3 results: Cosmology from cosmic shear and robustness to data calibration, Phys. Rev. D 105, 023514 (2022), arXiv:2105.13543 [astro-ph.CO]. [5] L. F. Secco, S. Samuroff, E. Krause, B. Jain, J. Blazek, et al., Dark Energy Survey Year 3 results: Cosmology from cosmic shear and robustness to modeling uncertainty, Phys. Rev. D 105, 023515 (2022), arXiv:2105.13544 [astro-ph.CO]. [6] O. Friedrich, F. Andrade-Oliveira, H. Camacho, O. Alves, R. Rosenfeld, et al., Dark Energy Survey year 3 results: co- variance modelling and its impact on parameter estimation and quality of fit, Mon. Not. of the Royal Astron. Soc. 508, 3125 (2021), arXiv:2012.08568 [astro-ph.CO]. [7] A. Porredon, M. Crocce, J. Elvin-Poole, R. Cawthon, G. Gian- nini, et al., Dark Energy Survey Year 3 results: Cosmological constraints from galaxy clustering and galaxy-galaxy lensing using the MAGLIM lens sample, Phys. Rev. D 106, 103530 (2022), arXiv:2105.13546 [astro-ph.CO]. [8] S. Pandey, E. Krause, J. DeRose, N. MacCrann, B. Jain, et al., Dark Energy Survey year 3 results: Constraints on cosmologi- cal parameters and galaxy-bias models from galaxy clustering and galaxy-galaxy lensing using the redMaGiC sample, Phys. Rev. D 106, 043520 (2022), arXiv:2105.13545 [astro-ph.CO]. [9] T. M. C. Abbott, M. Aguena, A. Alarcon, S. Allam, O. Alves, et al., Dark Energy Survey Year 3 results: Cosmological con- straints from galaxy clustering and weak lensing, Phys. Rev. D 105, 023520 (2022), arXiv:2105.13549 [astro-ph.CO]. [10] T. M. C. Abbott, M. Aguena, A. Alarcon, O. Alves, A. Amon, et al., Dark Energy Survey Year 3 results: Constraints on extensions to Λ CDM with weak lensing and galaxy clustering, Phys. Rev. D 107, 083504 (2023), arXiv:2207.05766 [astro- ph.CO]. [11] A. H. Wright, B. Stölzner, M. Asgari, M. Bilicki, B. Giblin, et al., KiDS-Legacy: Cosmological constraints from cosmic shear with the complete Kilo-Degree Survey, arXiv e-prints , 12 arXiv:2503.19441 (2025), arXiv:2503.19441 [astro-ph.CO]. [12] S.-S. Li, K. Kuijken, H. Hoekstra, L. Miller, C. Heymans, et al., KiDS-Legacy calibration: Unifying shear and redshift cali- bration with the SKiLLS multi-band image simulations, As- tronomy & Astrophysics 670, A100 (2023), arXiv:2210.07163 [astro-ph.CO]. [13] M. Bilicki, A. Dvornik, H. Hoekstra, A. H. Wright, N. E. Chis- ari, et al., Bright galaxy sample in the Kilo-Degree Survey Data Release 4. Selection, photometric redshifts, and physi- cal properties, Astronomy & Astrophysics 653, A82 (2021), arXiv:2101.06010 [astro-ph.GA]. [14] A. Loureiro, L. Whittaker, A. Spurio Mancini, B. Joachimi, A. Cuceu, et al., KiDS and Euclid: Cosmological implications of a pseudo angular power spectrum analysis of KiDS-1000 cosmic shear tomography, Astronomy & Astrophysics 665, A56 (2022), arXiv:2110.06947 [astro-ph.CO]. [15] P. A. Burger, O. Friedrich, J. Harnois-Déraps, P. Schneider, M. Asgari, et al., KiDS-1000 cosmology: Constraints from density split statistics, Astronomy & Astrophysics 669, A69 (2023), arXiv:2208.02171 [astro-ph.CO]. [16] J. L. van den Busch, A. H. Wright, H. Hildebrandt, M. Bil- icki, M. Asgari, et al., KiDS-1000: Cosmic shear with en- hanced redshift calibration, Astronomy & Astrophysics 664, A170 (2022), arXiv:2204.02396 [astro-ph.CO]. [17] T. Tröster, M. Asgari, C. Blake, M. Cataneo, C. Heymans, et al., KiDS-1000 Cosmology: Constraints beyond flat ΛCDM, As- tronomy & Astrophysics 649, A88 (2021), arXiv:2010.16416 [astro-ph.CO]. [18] A. Dvornik, C. Heymans, M. Asgari, C. Mahony, B. Joachimi, et al., KiDS-1000: Combined halo-model cosmology con- straints from galaxy abundance, galaxy clustering, and galaxy- galaxy lensing, Astronomy & Astrophysics 675, A189 (2023), arXiv:2210.03110 [astro-ph.CO]. [19] C. Heymans, T. Tröster, M. Asgari, C. Blake, H. Hilde- brandt, et al., KiDS-1000 Cosmology: Multi-probe weak gravitational lensing and spectroscopic galaxy clustering con- straints, Astronomy and Astrophysics 646, A140 (2021), arXiv:2007.15632 [astro-ph.CO]. [20] M. C. Fortuna, H. Hoekstra, H. Johnston, M. Vakili, A. Kan- nawadi, et al., KiDS-1000: Constraints on the intrinsic align- ment of luminous red galaxies, Astronomy & Astrophysics 654, A76 (2021), arXiv:2109.02556 [astro-ph.CO]. [21] H. Hildebrandt, J. L. van den Busch, A. H. Wright, C. Blake, B. Joachimi, et al., KiDS-1000 catalogue: Redshift distribu- tions and their calibration, Astronomy & Astrophysics 647, A124 (2021), arXiv:2007.15635 [astro-ph.CO]. [22] S. J. Nakoneczny, M. Bilicki, A. Pollo, M. Asgari, A. Dvornik, et al., Photometric selection and redshifts for quasars in the Kilo-Degree Survey Data Release 4, Astronomy & Astro- physics 649, A81 (2021), arXiv:2010.13857 [astro-ph.CO]. [23] J. Yao, H. Shan, P. Zhang, X. Liu, C. Heymans, et al., KiDS- 1000: Cross-correlation with Planck cosmic microwave back- ground lensing and intrinsic alignment removal with self- calibration, Astronomy & Astrophysics 673, A111 (2023), arXiv:2301.13437 [astro-ph.CO]. [24] C. Hikage, M. Oguri, T. Hamana, S. More, R. Mandelbaum, et al., Cosmology from cosmic shear power spectra with Sub- aru Hyper Suprime-Cam first-year data, Publications of the As- tronomical Society of Japan 71, 43 (2019), arXiv:1809.09148 [astro-ph.CO]. [25] T. Hamana, M. Shirasaki, S. Miyazaki, C. Hikage, M. Oguri, et al., Cosmological constraints from cosmic shear two-point correlation functions with HSC survey first-year data, Publi- cations of the Astronomical Society of Japan 72, 16 (2020), arXiv:1906.06041 [astro-ph.CO]. [26] H. Aihara, N. Arimoto, R. Armstrong, S. Arnouts, N. A. Bah- call, et al., The Hyper Suprime-Cam SSP Survey: Overview and survey design, Publications of the Astronomical Society of Japan 70, S4 (2018), arXiv:1704.05858 [astro-ph.IM]. [27] M. Tanaka, J. Coupon, B.-C. Hsieh, S. Mineo, A. J. Nishizawa, et al., Photometric redshifts for Hyper Suprime-Cam Subaru Strategic Program Data Release 1, Publications of the Astro- nomical Society of Japan 70, S9 (2018), arXiv:1704.05988 [astro-ph.GA]. [28] R. Mandelbaum, F. Lanusse, A. Leauthaud, R. Armstrong, M. Simet, et al., Weak lensing shear calibration with simula- tions of the HSC survey, Monthly Notices of the Royal Astro- nomical Society 481, 3170 (2018), arXiv:1710.00885 [astro- ph.CO]. [29] R. Dalal, X. Li, A. Nicola, J. Zuntz, M. A. Strauss, et al., Hyper Suprime-Cam Year 3 results: Cosmology from cos- mic shear power spectra, Phys. Rev. D 108, 123519 (2023), arXiv:2304.00701 [astro-ph.CO]. [30] DESI Collaboration, M. Abdul-Karim, A. G. Adame, D. Aguado, J. Aguilar, et al., Data Release 1 of the Dark Energy Spectroscopic Instrument, arXiv e-prints , arXiv:2503.14745 (2025), arXiv:2503.14745 [astro-ph.CO]. [31] U. Andrade, E. Paillas, J. Mena-Fernandez, Q. Li, A. J. Ross, et al., Validation of the DESI DR2 Measurements of Baryon Acoustic Oscillations from Galaxies and Quasars, arXiv e- prints , arXiv:2503.14742 (2025), arXiv:2503.14742 [astro- ph.CO]. [32] L. Casas, H. K. Herrera-Alcantar, J. Chaves-Montero, A. Cuceu, A. Font-Ribera, et al., Validation of the DESI DR2 Ly𝛼BAO analysis using synthetic datasets, arXiv e-prints , arXiv:2503.14741 (2025), arXiv:2503.14741 [astro-ph.IM]. [33] DESI Collaboration, M. Abdul-Karim, J. Aguilar, S. Ahlen, C. Allende Prieto, et al., DESI DR2 Results I: Baryon Acoustic Oscillations from the Lyman Alpha Forest, arXiv e-prints , arXiv:2503.14739 (2025), arXiv:2503.14739 [astro-ph.CO]. [34] DESI Collaboration, M. Abdul-Karim, J. Aguilar, S. Ahlen, S. Alam, et al., DESI DR2 Results II: Measurements of Baryon Acoustic Oscillations and Cosmological Constraints, arXiv e- prints , arXiv:2503.14738 (2025), arXiv:2503.14738 [astro- ph.CO]. [35] K. Lodha, R. Calderon, W. L. Matthewson, A. Shafieloo, M. Ishak, et al., Extended Dark Energy analysis using DESI DR2 BAO measurements, arXiv e-prints , arXiv:2503.14743 (2025), arXiv:2503.14743 [astro-ph.CO]. [36] W. Elbers, A. Aviles, H. E. Noriega, D. Chebat, A. Menegas, et al., Constraints on Neutrino Physics from DESI DR2 BAO and DR1 Full Shape, arXiv e-prints , arXiv:2503.14744 (2025), arXiv:2503.14744 [astro-ph.CO]. [37] A. J. Ross, F. Beutler, C.-H. Chuang, M. Pellejero-Ibanez, H.-J. Seo, et al., The clustering of galaxies in the completed SDSS- III Baryon Oscillation Spectroscopic Survey: observational systematics and baryon acoustic oscillations in the correlation function, Monthly Notices of the Royal Astronomical Society 464, 1168 (2017), arXiv:1607.03145 [astro-ph.CO]. [38] F. Beutler, H.-J. Seo, A. J. Ross, P. McDonald, S. Saito, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: baryon acoustic oscillations in the Fourier space, Monthly Notices of the Royal Astro- nomical Society 464, 3409 (2017), arXiv:1607.03149 [astro- ph.CO]. [39] S. Alam, M. Ata, S. Bailey, F. Beutler, D. Bizyaev, et al., The clustering of galaxies in the completed SDSS-III Baryon Os- cillation Spectroscopic Survey: cosmological analysis of the 13 DR12 galaxy sample, Monthly Notices of the Royal Astro- nomical Society 470, 2617 (2017), arXiv:1607.03155 [astro- ph.CO]. [40] S. Satpathy, S. Alam, S. Ho, M. White, N. A. Bahcall, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: on the measurement of growth rate using galaxy correlation functions, Monthly No- tices of the Royal Astronomical Society 469, 1369 (2017), arXiv:1607.03148 [astro-ph.CO]. [41] F. Beutler, H.-J. Seo, S. Saito, C.-H. Chuang, A. J. Cuesta, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: anisotropic galaxy clustering in Fourier space, Monthly Notices of the Royal Astronomical Society 466, 2242 (2017), arXiv:1607.03150 [astro-ph.CO]. [42] K. S. Dawson, J.-P. Kneib, W. J. Percival, S. Alam, F. D. Al- bareti, et al., The SDSS-IV Extended Baryon Oscillation Spec- troscopic Survey: Overview and Early Data, The Astronomical Journal 151, 44 (2016), arXiv:1508.04473 [astro-ph.CO]. [43] Z. Zhai, J. L. Tinker, C. Hahn, H.-J. Seo, M. R. Blanton, et al., The Clustering of Luminous Red Galaxies at z ∼0.7 from EBOSS and BOSS Data, Astrophys. J. 848, 76 (2017), arXiv:1607.05383 [astro-ph.CO]. [44] G.-B. Zhao, Y. Wang, A. J. Ross, S. Shandera, W. J. Percival, et al., The extended Baryon Oscillation Spectroscopic Survey: a cosmological forecast, Monthly Notices of the Royal Astro- nomical Society 457, 2377 (2016), arXiv:1510.08216 [astro- ph.CO]. [45] S. Jouvel, T. Delubac, J. Comparat, H. Camacho, A. Carnero, et al., Photometric redshifts and clustering of emission line galaxies selected jointly by DES and eBOSS, Monthly Notices of the Royal Astronomical Society 469, 2771 (2017). [46] M. Ata, F. Baumgarten, J. Bautista, F. Beutler, D. Bizyaev, et al., The clustering of the SDSS-IV extended Baryon Oscil- lation Spectroscopic Survey DR14 quasar sample: first mea- surement of baryon acoustic oscillations between redshift 0.8 and 2.2, Monthly Notices of the Royal Astronomical Society 473, 4773 (2018), arXiv:1705.06373 [astro-ph.CO]. [47] S. A. Rodríguez-Torres, J. Comparat, F. Prada, G. Yepes, E. Burtin, et al., Clustering of quasars in the first year of the SDSS-IV eBOSS survey: interpretation and halo occupa- tion distribution, Monthly Notices of the Royal Astronomical Society 468, 728 (2017), arXiv:1612.06918 [astro-ph.CO]. [48] A. D. Myers, N. Palanque-Delabrouille, A. Prakash, I. Pâris, C. Yeche, et al., The SDSS-IV Extended Baryon Os- cillation Spectroscopic Survey: Quasar Target Selection, The Astrophysical Journal Supplement 221, 27 (2015), arXiv:1508.04472 [astro-ph.CO]. [49] H. Gil-Marín, J. Guy, P. Zarrouk, E. Burtin, C.-H. Chuang, et al., The clustering of the SDSS-IV extended Baryon Oscil- lation Spectroscopic Survey DR14 quasar sample: structure growth rate measurement from the anisotropic quasar power spectrum in the redshift range 0.8 < z < 2.2, Monthly No- tices of the Royal Astronomical Society 477, 1604 (2018), arXiv:1801.02689 [astro-ph.CO]. [50] R. Ruggeri, W. J. Percival, H. Gil-Marín, F. Beutler, E.- M. Mueller, et al., The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sam- ple: measuring the evolution of the growth rate using redshift- space distortions between redshift 0.8 and 2.2, Monthly No- tices of the Royal Astronomical Society 483, 3878 (2019), arXiv:1801.02891 [astro-ph.CO]. [51] C. Blake, E. A. Kazin, F. Beutler, T. M. Davis, D. Parkin- son, et al., The WiggleZ Dark Energy Survey: mapping the distance-redshift relation with baryon acoustic oscillations, Monthly Notices of the Royal Astronomical Society 418, 1707 (2011), arXiv:1108.2635 [astro-ph.CO]. [52] E. A. Kazin, J. Koda, C. Blake, N. Padmanabhan, S. Brough, et al., The WiggleZ Dark Energy Survey: improved distance measurements to z = 1 with reconstruction of the baryonic acoustic feature, Monthly Notices of the Royal Astronomical Society 441, 3524 (2014), arXiv:1401.0358 [astro-ph.CO]. [53] S. Riemer-Sørensen, C. Blake, D. Parkinson, T. M. Davis, S. Brough, et al., WiggleZ Dark Energy Survey: Cosmolog- ical neutrino mass constraint from blue high-redshift galax- ies, Phys. Rev. D 85, 081101 (2012), arXiv:1112.4940 [astro- ph.CO]. [54] D. Parkinson, S. Riemer-Sørensen, C. Blake, G. B. Poole, T. M. Davis, et al., The WiggleZ Dark Energy Survey: Final data release and cosmological results, Phys. Rev. D 86, 103518 (2012), arXiv:1210.2130 [astro-ph.CO]. [55] C. Blake, S. Brough, M. Colless, C. Contreras, W. Couch, et al., The WiggleZ Dark Energy Survey: the growth rate of cosmic structure since redshift z=0.9, Monthly Notices of the Royal As- tronomical Society 415, 2876 (2011), arXiv:1104.2948 [astro- ph.CO]. [56] E. Di Valentino, L. A. Anchordoqui, Ö. Akarsu, Y. Ali- Haimoud, L. Amendola, et al., Cosmology Intertwined III: f𝜎8 and S8, Astroparticle Physics 131, 102604 (2021), arXiv:2008.11285 [astro-ph.CO]. [57] D. Rubin, G. Aldering, M. Betoule, A. Fruchter, X. Huang, et al., Union Through UNITY: Cosmology with 2,000 SNe Using a Unified Bayesian Framework, arXiv e-prints , arXiv:2311.12098 (2023), arXiv:2311.12098 [astro-ph.CO]. [58] DES Collaboration, T. M. C. Abbott, M. Acevedo, M. Aguena, A. Alarcon, et al., The Dark Energy Survey: Cosmology Re- sults with ∼1500 New High-redshift Type Ia Supernovae Using the Full 5 yr Data Set, The Astrophysical Journal Letters 973, L14 (2024), arXiv:2401.02929 [astro-ph.CO]. [59] J. Rebouças, D. H. F. de Souza, K. Zhong, V. Miranda, and R. Rosenfeld, Investigating late-time dark energy and massive neutrinos in light of DESI Y1 BAO, Journal of Cosmology and Astroparticle Physics 2025, 024 (2025), arXiv:2408.14628 [astro-ph.CO]. [60] N. Roy, Dynamical dark energy in the light of desi 2024 data, Physics of the Dark Universe 48, 101912 (2025). [61] Y. Carloni, O. Luongo, and M. Muccino, Does dark energy really revive using desi 2024 data?, Phys. Rev. D 111, 023512 (2025). [62] A. Chakraborty, P. K. Chanda, S. Das, and K. Dutta, Desi results: Hint towards coupled dark matter and dark energy (2025), arXiv:2503.10806 [astro-ph.CO]. [63] L. Huang, R.-G. Cai, and S.-J. Wang, The desi dr1/dr2 evidence for dynamical dark energy is biased by low-redshift supernovae (2025), arXiv:2502.04212 [astro-ph.CO]. [64] H. Chaudhary, S. Capozziello, V. K. Sharma, and G. Mustafa, Does desi dr2 challenge 𝜆cdm paradigm ? (2025), arXiv:2507.21607 [astro-ph.CO]. [65] The LSST Dark Energy Science Collaboration, R. Mandel- baum, T. Eifler, R. Hložek, T. Collett, et al., The LSST Dark Energy Science Collaboration (DESC) Science Require- ments Document, arXiv e-prints , arXiv:1809.01669 (2018), arXiv:1809.01669 [astro-ph.CO]. [66] Euclid Collaboration, Euclid. I. Overview of the Eu- clid mission, arXiv e-prints , arXiv:2405.13491 (2024), arXiv:2405.13491 [astro-ph.CO]. [67] B. P. Crill, M. Werner, R. Akeson, M. Ashby, L. Bleem, et al., SPHEREx: NASA’s near-infrared spectrophotometric 14 all-sky survey, in Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave, Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11443, edited by M. Lystrup and M. D. Perrin (2020) p. 114430I, arXiv:2404.11017 [astro-ph.IM]. [68] T. Eifler, H. Miyatake, E. Krause, C. Heinrich, V. Miranda, et al., Cosmology with the Roman Space Telescope - multi- probe strategies, Monthly Notices of the Royal Astronomical Society 507, 1746 (2021), arXiv:2004.05271 [astro-ph.CO]. [69] Lacasa, Fabien, Cosmology in the non-linear regime: the small scale miracle, A&A 661, A70 (2022). [70] A. Lewis, A. Challinor, and A. Lasenby, Efficient Computa- tion of Cosmic Microwave Background Anisotropies in Closed Friedmann-Robertson-Walker Models, Astrophys. J. 538, 473 (2000), arXiv:astro-ph/9911177 [astro-ph]. [71] J. Lesgourgues, The Cosmic Linear Anisotropy Solving Sys- tem (CLASS) I: Overview, arXiv e-prints , arXiv:1104.2932 (2011), arXiv:1104.2932 [astro-ph.IM]. [72] D. Blas, J. Lesgourgues, and T. Tram, The Cosmic Linear Anisotropy Solving System (CLASS). Part II: Approxima- tion schemes, Journal of Cosmology and Astroparticle Physics 2011 (7), 034, arXiv:1104.2933 [astro-ph.CO]. [73] A. Schneider, R. Teyssier, D. Potter, J. Stadel, J. Onions, et al., Matter power spectrum and the challenge of percent accuracy, JCAP 04, 047, arXiv:1503.05920 [astro-ph.CO]. [74] D. Potter, J. Stadel, and R. Teyssier, PKDGRAV3: beyond trillion particle cosmological simulations for the next era of galaxy surveys, Computational Astrophysics and Cosmology 4, 2 (2017), arXiv:1609.08621 [astro-ph.IM]. [75] V. Springel, R. Pakmor, O. Zier, and M. Reinecke, Simulating cosmic structure formation with the GADGET-4 code, Monthly Notices of the Royal Astronomical Society 506, 2871 (2021), arXiv:2010.03567 [astro-ph.IM]. [76] M. Chevallier and D. Polarski, Accelerating Universes with Scaling Dark Matter, International Journal of Modern Physics D 10, 213 (2001), arXiv:gr-qc/0009008 [gr-qc]. [77] E. V. Linder, Exploring the Expansion History of the Universe, Phys. Rev. Lett. 90, 091301 (2003), arXiv:astro-ph/0208512 [astro-ph]. [78] Euclid Collaboration, M. Knabenhans, J. Stadel, D. Potter, J. Dakin, et al., Euclid preparation: IX. EuclidEmulator2 - power spectrum emulation with massive neutrinos and self- consistent dark energy perturbations, Mon. Not. Royal Astron. Soc. 505, 2840 (2021), arXiv:2010.11288 [astro-ph.CO]. [79] R. E. Angulo, M. Zennaro, S. Contreras, G. Aricò, M. Pellejero-Ibañez, and J. Stücker, The BACCO simulation project: exploiting the full power of large-scale structure for cosmology, Mon. Notices Royal Astron. Soc. 507, 5869 (2021), arXiv:2004.06245 [astro-ph.CO]. [80] E. Lawrence, K. Heitmann, J. Kwan, A. Upadhye, D. Bingham, S. Habib, D. Higdon, A. Pope, H. Finkel, and N. Frontiere, The mira-titan universe. ii. matter power spectrum emulation, The Astrophysical Journal 847, 50 (2017). [81] A. Apyan, C. Cosby, Y. Feng, A. Gelgen, S. Gori, P. Har- ris, X. Liu, M. Liu, P. Maksimovic, C. Mantilla-Suarez, R. McLaughlin, C. Miller, A. Mitra, N. Paladino, A. R. Das, V. Slokenbergs, D. Sperka, N. Tran, and Z. Wan, Performance measurements of the electromagnetic calorimeter and read- out electronics system for the darkquest experiment (2025), arXiv:2502.20590 [physics.ins-det]. [82] A. Apyan, B. Batell, A. Berlin, N. Blinov, C. Chaharom, S. Cuadra, Z. Demiragli, A. Duran, Y. Feng, I. P. Fernando, S. Gori, P. Harris, D. Hoang, D. Keller, E. Kowalczyk, M. Leys, K. Liu, M. Liu, W. Lorenzon, P. Maksimovic, C. M. Suarez, H. Marukyan, A. Mitra, Y. Miyachi, P. McCormack, E. A. Moreno, Y. C. Morales, N. Paladino, M. Rai, S. Rotella, L. Saunders, S. Sawada, C. Smith, D. Sperka, R. Tesarek, N. Tran, Y.-D. Tsai, Z. Wan, and M. Wynne, Darkquest: A dark sector upgrade to spinquest at the 120 gev fermilab main injector (2022), arXiv:2203.08322 [hep-ex]. [83] Z. Chen, Y. Yu, J. Han, and Y. Jing, Csst cosmological em- ulator i: Matter power spectrum emulation with one percent accuracy to k = 10h mpc-1, Science China Physics, Mechanics &; Astronomy 68, 10.1007/s11433-025-2671-0 (2025). [84] Z. Chen and Y. Yu, Csst cosmological emulator ii: Gen- eralized accurate halo mass function emulation (2025), arXiv:2506.09688 [astro-ph.CO]. [85] S. Zhou, Z. Chen, and Y. Yu, Csst cosmological emulator iii: Hybrid lagrangian bias expansion emulation of galaxy clustering (2025), arXiv:2506.04671 [astro-ph.CO]. [86] J. DeRose, N. Kokron, A. Banerjee, S.-F. Chen, M. White, et al., Aemulus 𝜈: precise predictions for matter and biased tracer power spectra in the presence of neutrinos, JCAP 07, 054, arXiv:2303.09762 [astro-ph.CO]. [87] J. DeRose, R. H. Wechsler, J. L. Tinker, M. R. Becker, Y.- Y. Mao, et al., The Aemulus Project I: Numerical Simula- tions for Precision Cosmology, Astrophys. J. 875, 69 (2019), arXiv:1804.05865 [astro-ph.CO]. [88] T. McClintock, E. Rozo, M. R. Becker, J. DeRose, Y.-Y. Mao, et al., The Aemulus Project II: Emulating the Halo Mass Func- tion, Astrophys. J. 872, 53 (2019), arXiv:1804.05866 [astro- ph.CO]. [89] Z. Zhai, J. L. Tinker, M. R. Becker, J. DeRose, Y.-Y. Mao, et al., The Aemulus Project III: Emulation of the Galaxy Correla- tion Function, Astrophys. J. 874, 95 (2019), arXiv:1804.05867 [astro-ph.CO]. [90] B. Fiorini, K. Koyama, and T. Baker, Fast production of cosmo- logical emulators in modified gravity: the matter power spec- trum, Journal of Cosmology and Astroparticle Physics 2023, 045 (2023), arXiv:2310.05786 [astro-ph.CO]. [91] D. Fremstad and H. A. Winther, Emulating the Non-Linear Matter Power-Spectrum in Mixed Axion Dark Matter Models, arXiv e-prints , arXiv:2503.07277 (2025), arXiv:2503.07277 [astro-ph.CO]. [92] I. Sáez-Casares, Y. Rasera, and B. Li, The e-MANTIS emula- tor: fast predictions of the non-linear matter power spectrum in f(R)CDM cosmology, Monthly Notices of the Royal Astro- nomical Society 527, 7242 (2024), arXiv:2303.08899 [astro- ph.CO]. [93] C. Arnold, B. Li, B. Giblin, J. Harnois-Déraps, and Y.-C. Cai, FORGE: the f(R)-gravity cosmic emulator project - I. Introduction and matter power spectrum emulator, Monthly Notices of the Royal Astronomical Society 515, 4161 (2022), arXiv:2109.04984 [astro-ph.CO]. [94] H. A. Winther, F. Schmidt, A. Barreira, C. Arnold, S. Bose, et al., Modified gravity N-body code comparison project, Monthly Notices of the Royal Astronomical Society 454, 4208 (2015), arXiv:1506.06384 [astro-ph.CO]. [95] R. W. Hockney and J. W. Eastwood, Computer simulation using particles (1988). [96] S. Tassev, M. Zaldarriaga, and D. Eisenstein, Solving Large Scale Structure in Ten Easy Steps with COLA, JCAP 06, 036, arXiv:1301.0322 [astro-ph.CO]. [97] J. Ding, S. Li, Y. Zheng, X. Luo, L. Zhang, and X.-D. Li, Fast generation of mock galaxy catalogues with COLA, arXiv e-prints , arXiv:2311.00981 (2023), arXiv:2311.00981 [astro- ph.CO]. 15 [98] G. Brando, B. Fiorini, K. Koyama, and H. A. Winther, En- abling matter power spectrum emulation in beyond-ΛCDM cosmologies with COLA, Journal of Cosmology and Astropar- ticle Physics 2022 (9), 051, arXiv:2203.11120 [astro-ph.CO]. [99] J. Gordon, B. F. de Aguiar, J. Rebouças, G. Brando, F. Fal- ciano, et al., Modeling nonlinear scales with the comoving La- grangian acceleration method: Preparing for LSST Y1, Phys. Rev. D 110, 083529 (2024), arXiv:2404.12344 [astro-ph.CO]. [100] M. J. Mortonson, D. Huterer, and W. Hu, Figures of merit for present and future dark energy probes, Physical Review D 82, 10.1103/physrevd.82.063004 (2010). [101] R. Trotta, Bayes in the sky: Bayesian inference and model selection in cosmology, Contemporary Physics 49, 71–104 (2008). [102] C. Shapiro, Biased dark energy constraints from neglecting re- duced shear in weak-lensing surveys, The Astrophysical Jour- nal 696, 775–784 (2009). [103] M.-X. Lin, B. Jain, M. Raveri, E. J. Baxter, C. Chang, M. Gatti, S. Lee, and J. Muir, Late time modification of structure growth and the S8 tension, Phys. Rev. D 109, 063523 (2024), arXiv:2308.16183 [astro-ph.CO]. [104] V. Poulin, T. L. Smith, R. Calderón, and T. Simon, Impact of ACT DR6 and DESI DR2 for Early Dark Energy and the Hubble tension, arXiv e-prints , arXiv:2505.08051 (2025), arXiv:2505.08051 [astro-ph.CO]. [105] E. Silva, M. A. Sabogal, M. Scherer, R. C. Nunes, E. Di Valentino, and S. Kumar, New constraints on interacting dark energy from DESI DR2 BAO observations, Phys. Rev. D 111, 123511 (2025), arXiv:2503.23225 [astro-ph.CO]. [106] K. Liu, X. Fu, B. Xu, C. Ding, Y. Huang, and X. Qing, The growth of linear perturbations in the interacting dark en- ergy models and observational constraints, arXiv e-prints , arXiv:2503.05208 (2025), arXiv:2503.05208 [astro-ph.CO]. [107] P. Ghedini, R. Hajjar, and O. Mena, Redshift-space distortions corner interacting dark energy, Physics of the Dark Universe 46, 101671 (2024), arXiv:2409.02700 [astro-ph.CO]. [108] M. A. Sabogal, E. Silva, R. C. Nunes, S. Kumar, E. Di Valentino, and W. Giarè, Quantifying the 𝑆8 tension and ev- idence for interacting dark energy from redshift-space dis- tortion measurements, Phys. Rev. D 110, 123508 (2024), arXiv:2408.12403 [astro-ph.CO]. [109] B. Fiorini, K. Koyama, and T. Baker, Fast production of cos- mological emulators in modified gravity: the matter power spectrum, JCAP 12, 045, arXiv:2310.05786 [astro-ph.CO]. [110] A. Izard, M. Crocce, and P. Fosalba, ICE-COLA: Towards fast and accurate synthetic galaxy catalogues optimizing a quasi 𝑁- body method, Mon. Not. Roy. Astron. Soc. 459, 2327 (2016), arXiv:1509.04685 [astro-ph.CO]. [111] B. S. Wright, A. Sen Gupta, T. Baker, G. Valogiannis, B. Fior- ini, and LSST Dark Energy Science Collaboration, Hi-COLA: fast, approximate simulations of structure formation in Horn- deski gravity, Journal of Cosmology and Astroparticle Physics 2023 (3), 040, arXiv:2209.01666 [astro-ph.CO]. [112] S. Colombi, A. Jaffe, D. Novikov, and C. Pichon, Accurate estimators of power spectra in N-body simulations, Monthly Notices of the Royal Astronomical Society 393, 511 (2009), arXiv:0811.0313 [astro-ph]. [113] R. E. Angulo and O. Hahn, Large-scale dark matter sim- ulations, Living Reviews in Computational Astrophysics 8, 10.1007/s41115-021-00013-z (2022). [114] C. Fidler, C. Rampf, T. Tram, R. Crittenden, K. Koyama, et al., General relativistic corrections to 𝑁-body simulations and the Zel’dovich approximation, Phys. Rev. D 92, 123517 (2015), arXiv:1505.04756 [astro-ph.CO]. [115] C. Fidler, T. Tram, C. Rampf, R. Crittenden, K. Koyama, et al., Relativistic initial conditions for N-body simulations, JCAP 06, 043, arXiv:1702.03221 [astro-ph.CO]. [116] T. Tram, J. Brandbyge, J. Dakin, and S. Hannestad, Fully rel- ativistic treatment of light neutrinos in 𝑁-body simulations, JCAP 03, 022, arXiv:1811.00904 [astro-ph.CO]. [117] G. Brando, K. Koyama, and D. Wands, Relativistic Corrections to the Growth of Structure in Modified Gravity, JCAP 01, 013, arXiv:2006.11019 [astro-ph.CO]. [118] R. E. Angulo and A. Pontzen, Cosmological n-body simula- tions with suppressed variance, Monthly Notices of the Royal Astronomical Society: Letters 462, L1–L5 (2016). [119] F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri, and A. Palazzo, Global constraints on absolute neutrino masses and their ordering, Phys. Rev. D 95, 096014 (2017). [120] M. Tanabashi, K. Hagiwara, K. Hikasa, K. Nakamura, et al. (Particle Data Group), Review of particle physics, Phys. Rev. D 98, 030001 (2018). [121] R. Takahashi, M. Sato, T. Nishimichi, A. Taruya, and M. Oguri, Revising the Halofit Model for the Nonlinear Matter Power Spectrum, Astrophys. J. 761, 152 (2012), arXiv:1208.2701 [astro-ph.CO]. [122] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Scikit-learn: Ma- chine learning in Python, Journal of Machine Learning Re- search 12, 2825 (2011). [123] A. Spurio Mancini, D. Piras, J. Alsing, B. Joachimi, and M. P. Hobson, COSMOPOWER: emulating cosmological power spectra for accelerated Bayesian inference from next- generation surveys, Mon. Notices Royal Astron. Soc. 511, 1771 (2022), arXiv:2106.03846 [astro-ph.CO]. [124] J. Alsing, H. Peiris, J. Leja, C. Hahn, R. Tojeiro, et al., SPEC- ULATOR: Emulating Stellar Population Synthesis for Fast and Accurate Galaxy Spectra and Photometry, The Astrophysical Journal Supplement 249, 5 (2020), arXiv:1911.11778 [astro- ph.IM]. [125] D. P. Kingma and J. Ba, Adam: A Method for Stochas- tic Optimization, arXiv e-prints , arXiv:1412.6980 (2014), arXiv:1412.6980 [cs.LG]. [126] A. Schneider, R. Teyssier, D. Potter, J. Stadel, J. Onions, D. S. Reed, R. E. Smith, V. Springel, F. R. Pearce, and R. Scocci- marro, Matter power spectrum and the challenge of percent ac- curacy, Journal of Cosmology and Astroparticle Physics 2016 (04), 047–047. [127] X. Fang, T. Eifler, and E. Krause, 2D-FFTLog: efficient com- putation of real-space covariance matrices for galaxy clustering and weak lensing, Mon. Notices Royal Astron. Soc. 497, 2699 (2020), arXiv:2004.04833 [astro-ph.CO]. [128] S. S. Boruah, T. Eifler, V. Miranda, and P. M. S. Krishanth, Accelerating cosmological inference with Gaussian processes and neural networks - an application to LSST Y1 weak lensing and galaxy clustering, Mon. Notices Royal Astron. Soc. 518, 4818 (2023), arXiv:2203.06124 [astro-ph.CO]. [129] B. Joachimi, M. Cacciato, T. D. Kitching, A. Leonard, R. Man- delbaum, et al., Galaxy Alignments: An Overview, Space Sci- ence Reviews 193, 1 (2015), arXiv:1504.05456 [astro-ph.GA]. [130] M. A. Troxel and M. Ishak, The intrinsic alignment of galaxies and its impact on weak gravitational lensing in an era of precision cosmology, Physics Reports 558, 1 (2015), arXiv:1407.6990 [astro-ph.CO]. [131] E. Krause and T. Eifler, cosmolike – cosmological likelihood analyses for photometric galaxy surveys, Mon. Not. Roy. As- 16 tron. Soc. 470, 2100 (2017), arXiv:1601.05779 [astro-ph.CO]. [132] A. Lewis, Efficient sampling of fast and slow cosmological parameters, Phys. Rev. D 87, 103529 (2013), arXiv:1304.4473 [astro-ph.CO]. [133] J. Torrado and A. Lewis, Cobaya: Code for Bayesian Analysis of hierarchical physical models, JCAP 05, 057, arXiv:2005.05290 [astro-ph.IM]. [134] A. Lewis, A. Challinor, and A. Lasenby, Efficient computation of CMB anisotropies in closed FRW models, Astrophys. J. 538, 473 (2000), arXiv:astro-ph/9911177 [astro-ph]. [135] C. Howlett, A. Lewis, A. Hall, and A. Challinor, CMB power spectrum parameter degeneracies in the era of precision cos- mology, Journal of Cosmology and Astroparticle Physics 1204, 027, arXiv:1201.3654 [astro-ph.CO]. [136] A. Gelman and D. B. Rubin, Inference from Iterative Sim- ulation Using Multiple Sequences, Statistical Science 7, 457 (1992). [137] E. Krause, X. Fang, S. Pandey, L. F. Secco, O. Alves, et al., Dark Energy Survey Year 3 Results: Multi-Probe Modeling Strategy and Validation, arXiv e-prints , arXiv:2105.13548 (2021), arXiv:2105.13548 [astro-ph.CO]. [138] A. Albrecht, G. Bernstein, R. Cahn, W. L. Freedman, J. Hewitt, W. Hu, J. Huth, M. Kamionkowski, E. W. Kolb, L. Knox, J. C. Mather, S. Staggs, and N. B. Suntzeff, Report of the dark energy task force (2006), arXiv:astro-ph/0609591 [astro-ph]. [139] D. Scolnic, D. Brout, A. Carr, A. G. Riess, T. M. Davis, A. Dwomoh, D. O. Jones, N. Ali, P. Charvu, R. Chen, E. R. Peterson, B. Popovic, B. M. Rose, C. M. Wood, P. J. Brown, K. Chambers, D. A. Coulter, K. G. Dettman, G. Dimitriadis, A. V. Filippenko, R. J. Foley, S. W. Jha, C. D. Kilpatrick, R. P. Kirshner, Y.-C. Pan, A. Rest, C. Rojas-Bravo, M. R. Siebert, B. E. Stahl, and W. Zheng, The Pantheon+ Analysis: The Full Data Set and Light-curve Release, Astrophys. J. 938, 113 (2022), arXiv:2112.03863 [astro-ph.CO]. [140] I. M. Sobol, On the distribution of points in a cube and the approximate evaluation of integrals, U.S.S.R. Comput. Math. Math. Phys. 7, 784 (1967). [141] L. Casarini, A. V. Macciò, and S. A. Bonometto, Dynamical dark energy simulations: high accuracy power spectra at high redshift, Journal of Cosmology and Astroparticle Physics 2009, 014 (2009), arXiv:0810.0190 [astro-ph]. [142] S. Brieden, F. Beutler, and M. Pellejero-Ibañez, Web-Halo Model (WHM): Accurate non-linear matter power spec- trum predictions without free parameters, arXiv e-prints , arXiv:2508.10902 (2025), arXiv:2508.10902 [astro-ph.CO]. Appendix A: Simulations and Emulator Accuracy for Higher Redshifts In this Appendix, we show relative errors from the simula- tions and the emulator for higher redshifts, ensuring that our discussion in Section II still holds. The results are presented in Figure 7. We find that for 𝑧≤3, the 𝑧= 0 results still hold in that at 𝑘= 1 ℎ/Mpc, 90% of cosmologies have emulation errors within 2% on ˜𝐵(𝑘, 𝑧), and that 50% of cosmologies are contained well within 1%. COLA emulators trained at differ- ent redshifts demonstrate no notable change in the fidelity of predictions across the range 0 ≤𝑧≤3 (right panels), even though the highest 10% of raw emulation errors (left panels) and COLA boost errors (middle panels) grow with increasing 𝑧. 17 k (h/Mpc) 5 0 5 z=0.24 50% k (h/Mpc) 5 0 5 90% k (h/Mpc) 5 0 5 100% k (h/Mpc) 5 0 5 1000×(B COLA NN /B COLA 1) z=0.50 k (h/Mpc) 5 0 5 100×(BCOLA/B EE2 1) k (h/Mpc) 5 0 5 100×(B EE2 CDM/B EE2 1) k (h/Mpc) 5 0 5 z=0.74 k (h/Mpc) 5 0 5 k (h/Mpc) 5 0 5 10 1 100 k (h/Mpc) 5 0 5 z=1.00 10 1 100 k (h/Mpc) 5 0 5 10 1 100 k (h/Mpc) 5 0 5 FIG. 7. Left panel: Relative errors between the COLA boosts 𝐵COLA predicted by the emulator versus those obtained from the test set simulations. Middle panel: Relative errors between ˜𝐵COLA (see Equation 8) versus the boosts from EuclidEmulator2. Right panel: Relative errors between 𝐵EE2 ΛCDM and EE2. Each row denotes a different redshift. Colors in all panels denote the percentile of cosmologies around the mean: blue contours enclose 50% of cosmologies, red contours enclose 90% of cosmologies, and the outer gray lines enclose 100% of cosmologies.
Modeling nonlinear scales for dynamical dark energy cosmologies with COLA João Rebouças,1, 2 Victoria Lloyd,3 Jonathan Gordon,3, 4 Guilherme Brando,1 and Vivian Miranda4 1CBPF - Brazilian Center for Research in Physics, Xavier Sigaud st. 150, zip 22290-180, Rio de Janeiro, RJ, Brazil 2Instituto de Física Teórica da Universidade Estadual Paulista, R. Dr. Bento Teobaldo Ferraz, 271, Bloco II, Barra-Funda - São Paulo/SP, Brazil 3 11794, USA 4C. N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY, 11794, USA (Dated: October 17, 2025) Upcoming galaxy surveys will bring a wealth of information about the clustering of matter at small scales, but modeling small-scale structure beyond ΛCDM remains computationally challenging. While accurate N-body emulators exist to model the matter power spectrum for ΛCDM and some limited extensions, it's unfeasible to generate N-body simulation suites for all candidate models. Motivated by recent hints of an evolving dark energy equation of state from galaxy surveys, we assess the viability of employing the COmoving Lagrangian Acceleration (COLA) method to generate simulation suites assuming the w0wadynamical dark energy model. Following up on our previous work, we combine COLA simulations with an existing high-precision ΛCDM emulator to extend its predictions into new regions of parameter space. We assess the precision of our emulator at the level of the matter power spectrum, finding that our emulator can reproduce the nonlinear boosts from EuclidEmulator2 at less than 2% error. Moreover, we perform an analysis of a simulated cosmic shear survey akin to future data from the Legacy Survey of Space and Time (LSST) first year of observations, assessing the differences in parameter constraints between our COLA-based emulator and the benchmark emulator. We find our emulator to be in excellent agreement with the benchmark, achieving less than 0.3σshifts in cosmological parameters across multiple fiducial cosmologies, and a 7D figure of bias of less than 0.35. We further compare our emulator's performance to a commonly used approach: assuming the ΛCDM boost can be employed for extended parameter spaces without modification. We find that our emulator yields a significantly smaller Δχ2 distribution, parameter constraint biases, and a more accurate figure of merit compared to this second approach. These results demonstrate that COLA emulators provide a computationally efficient and physically motivated path forward for modeling nonlinear structure in extended cosmologies, offering a practical alternative to full N-body suites in the era of precision cosmology. I. INTRODUCTION In recent decades, galaxy surveys have been able to map the large-scale structure of the Universe with great precision, being as competitive as CMB surveys. Current photometric surveys, such as DES [1-10], KiDS [11-23], HSC [24-29], and spectroscopic surveys, such as DESI [30-36], BOSS/eBOSS [37-50], and WiggleZ [51-55], have placed percent-level constraints on ΛCDM parameters, demonstrating the success of the theoretical model in describing independent datasets. However, in recent years, tensions in the cosmological parameter constraints have begun to arise with the increase in precision. Notably, most galaxy surveys report a mildly lower value for the structure growth parameter S8 than those from CMB measurements (see e.g. [56]). Moreover, recent findings from the DESI collaboration, as well as Type Ia supernovae datasets such as Pantheon+, Union3, and DESY5 [30, 34, 35, 57-59] favor an evolution of the dark energy equation of state. Whether these results truly indicate new physics is still under debate [60-64]. Forthcoming Stage-IV galaxy surveys, such as the Vera Rubin Observatory's LSST [65], Euclid [66], SphereX [67], and Roman [68], will enable higher-precision measurements, especially at small scales where non-linearities in the matter density field become increasingly sizable [69], and will be decisive for investigating dark energy dynamics. Analyses of galaxy surveys rely on a key theoretical prediction: the matter power spectrum, as galaxies are biased tracers of the underlying matter density field. At large scales and early times, the power spectrum can be quickly and accurately computed using Einstein-Boltzmann solvers such as camb [70] and class [71, 72]. However, for small scales and low redshifts, linear perturbation theory breaks down, and accurately modeling non-linearities becomes a task of central importance. Although N-body simulations offer the most accurate predictions in the nonlinear regime, they are computationally expensive - each one demanding tens of thousands of CPU hours [73-75]. As a result, integrating simulations into Bayesian analyses is a prohibitive task, as theoretical predictions must be provided for O(105 -106) points in the parameter space. To address this issue, machine learning emulators trained on N-body simulations have been developed for ΛCDM and simple, widely adopted extensions such as the phenomenological w0waCDM [76, 77] parametrization for dark energy, as well as massive neutrinos with their total mass as a free parameter. Examples include EuclidEmulator2 [78], bacco [79], CosmicEmu [80], the Dark Quest emulator [81, 82], the CSST Emulator [83-85], aemulus [86-89], among others. At the same time, due to the sheer amount of candidate cosmological models and the high cost of running N-body simulations, emulators for broader extensions of ΛCDM are still scarce: examples are emulators to specific modified gravity theories (e.g. [90-93]) where we have full N-Body simulations [94], as well as some hydrodynamical emulators. A viable alternative to using full N-body simulations for constructing matter power spectrum emulators is to use wellestablished approximate methods for extended cosmological models. These methods reduce the computational complexity 16 Oct 2025 2 of running hundreds of simulations to train emulators, at the cost of losing accuracy in deep non-linear scales. One compelling approach is to use the COmoving Lagrangian Acceleration (COLA) method, which combines Lagrangian Perturbation Theory with a particle-mesh (PM) [95] evolution scheme to approximate N-body results while being cheaper than the usual N-body methods by 1-2 orders of magnitude [96, 97]. Emulators created using pure COLA simulations are prone to small-scale inaccuracies when compared directly to their Nbody counterparts. To mitigate this effect, the work of [98] introduces an approach that combines COLA simulations with predictions from high-accuracy ΛCDM emulators or full Nbody results, leveraging COLA's reduced computational cost and dramatically increasing small-scale accuracy simultaneously. Our previous work [99] has validated this approach, creating an emulator for the matter power spectrum of COLA simulations under the wCDM cosmological model, and testing it in a mock Stage-IV cosmic shear analysis. As such, we aim to demonstrate here that the hybrid approach of COLA-based emulators combined with high-resolution ΛCDM emulators can provide unbiased cosmological parameter constraints when compared to full N-body methods. In this work, we now present a final validation of our COLAbased emulators on the w0waCDM cosmological model, where the dark energy equation of state evolves linearly with the scale factor. This parametrization represents the most widely used and general extension of ΛCDM for which highaccuracy emulators currently exist, and remains central to ongoing investigations into dynamical dark energy [30, 34]. We extend our machine learning pipeline, combining COLA simulations with ΛCDM emulators, to predict the nonlinear matter spectrum across the w0waCDM parameter space. We train a simple neural network to emulate the nonlinear correction factor (i.e the boost) from COLA, correcting for small-scale inaccuracies by referencing boosts from a high-fidelity ΛCDM emulator. This hybrid approach enables fast predictions across an extended cosmological parameter space while maintaining consistency with N-body precision. To validate our emulator in a cosmological inference setting, we perform a simulated cosmic shear analysis using survey specifications consistent with LSST's first year of observations (LSST-Y1) [65]. We compare parameter constraints derived using both our pipeline and a benchmark N-body emulator, chosen as EuclidEmulator2, quantifying their disagreement with standard tension metrics [100-102]. Additionally, we also benchmark our emulator against a widely used approximation method in beyond-ΛCDM analyses for models without dedicated nonlinear simulations [103108]: projecting the nonlinear boost from the nearest ΛCDM cosmology. This projection method assumes that nonlinear corrections calibrated in ΛCDM remain valid in nearby extended cosmologies, providing a computationally inexpensive workaround but at the cost of uncontrolled systematics. In contrast, we demonstrate that for dynamical dark energy models, we find that the ΛCDM projection approach may introduce significant deviations in both goodness-of-fit and parameter constraints. Meanwhile, our COLA-based emulator reproduces the predictions of high-precision N-body emulators without bias. This paper is organized as follows: Section II describes the COLA simulations, cosmological parameters, simulation output processing, emulator construction, and validation; in Section III, we present our LSST-Y1 simulated cosmic shear analysis and the tension metrics used to assess their differences; in Section IV, we present and discuss the results of the LSSTY1 simulated analysis; finally, we conclude in Section V. II. COLA EMULATOR A. COLA Simulation Suite The COmoving Lagrangian Acceleration (COLA) algorithm [96] is a fast approximate method for N-body simulations, wherein particles evolve in a frame comoving with trajectories calculated using Lagrangian Perturbation Theory (LPT), most commonly 2nd order Lagrangian perturbation theory (2LPT). For small scales, the method computes the force by using a Particle-Mesh (PM) algorithm, where the residual displacements not captured by LPT are added to the trajectories. COLA has been shown to agree with full N-body simulations, at the level of the power spectrum at up to k∼1h/Mpc, as well as when predicting ratios of the modified gravity power spectrum and the ΛCDM one, the so-called boost function in modified gravity [98, 109-111]. Despite being 1-2 orders of magnitude faster than a full N-body run, the computational cost of these approximations is still too high for direct use in the O(106) computations of the matter power spectrum required for Monte Carlo searches. A practical alternative is to use a fixed set of COLA simulations to train emulators for the matter power spectrum, enabling efficient interpolation across cosmological parameter space. Our previous work demonstrated this approach for wCDM [99]; here, we extend it to w0waCDM and evaluate its performance relative to the benchmark EuclidEmulator2 [78], which achieves ≲1% precision for w0waCDM + Í mνup to k= 10 h/Mpc-1 and z≤3. 1. Simulation Settings We use the COLA algorithm as implemented in the public fml1 code. Each simulation is performed in a box of size L= 1024h-1Mpc, populated with Npart = 10243 particles, initialized at zini = 19, and evolved over 51 time steps chosen to maintain a uniform time resolution of Δa≈0.02. The force grid uses Nmesh = 20483 cells, and the power spectra are calculated on-the-fly using a N3 pk-mesh = 10243 grid. Therefore, the corresponding Nyquist frequency is kNyq = πNpk-mesh/L= πh/Mpc. To avoid aliasing, we 1 https://github.com/HAWinther/FML 3 restrict our analysis to k≤kNyq [112]. Our choices are based on Reference [98] and are validated therein. Initial conditions are generated using 2LPT, and we employ the forward approach [113] for our simulations. We provide the linear transfer functions of matter density, velocity, and relativistic species' densities at each time step using class2 in synchronous gauge, and convert to the N-body gauge [114117] in COLA. Our simulations were run in the Seawulf3 cluster. With these settings, and using 128 cores, one COLA w0wasimulation takes approximately 40 minutes to finish, and requires a total RAM of approximately 950 GB. To suppress sample variance from finite box effects at large scales (k≈1/L), we use the pairing-and-fixing method [118], in which we generate Gaussian random field modes with a fixed amplitude, δi,lin, but with phase shifts of πwith respect to one another. The initial overdensity fields are sampled as δi,lin = √︁ Pieiθi. (1) where θiis a random phase and Pithe initial power spectrum. Averaging over each pair, we find that the result substantially suppresses the effects of cosmic variance. This strategy was chosen for this work following [99] and [78]. 2. Definition of the Parameter Space We consider the cosmological w0waCDM model, where the dark energy equation of state is parametrized as w(a) = w0 + wa(1 -a), (2) with abeing the scale factor, and w0 and wacontrol the present-day value and time derivative of the dark energy equation of state, respectively. The ΛCDM model is recovered in the limit w0 = -1 and wa= 0. The free cosmological parameters are: • Ωm, the total baryon density, • Ωb, the total matter density, • h= H0/(100 km s-1 Mpc-1), the dimensionless Hubble parameter, • As, the amplitude of initial scalar fluctuations, • ns, the scalar spectral index, • w0 and wa, the dark energy equation of state parameters. We fix the summed neutrino masses to the minimum value allowed by neutrino oscillation experiments, Í mν= 0.058 eV, assuming three degenerate massive species [119, 120]. The parameter space boundaries are described in Table I, set to match those of EuclidEmulator2, which we have adopted 2 https://github.com/lesgourg/class_public 3 https://rci.stonybrook.edu/HPC Ωm Ωb ns As× 109 h w0 wa Min 0.24 0.04 0.92 1.7 0.61 -1.3 -0.7 Max 0.40 0.06 1.00 2.5 0.73 -0.7 0.5 Center 0.319 0.05 0.96 2.1 0.67 -1 0 TABLE I. Parameter space validity bounds of our COLA-based emulator. The training set is drawn from a slightly bigger hypercube, where each dimension is stretched by 10% in each direction (e.g. 0.224 1.182, aliasing of the kmodes near the Nyquist frequency leads to a power spectrum less than the shot noise for some simulations, and the subtraction would lead to unphysical negative values [113]; for these redshifts, 4 we choose to cut the scales at half of the Nyquist frequency, kz>1.182 ≤(π/2) h/Mpc, following our procedure in [99] (also see [112]). We then perform several transformations to optimize the inputs and outputs of our emulator. For instance, machine learning techniques are known to perform poorly if the features span several orders of magnitude. To stabilize the following procedures, we normalize the cosmological parameters θ to [-1, 1] according to θN= -1 + 2 θ -θmin θmax -θmin , (4) where minimum and maximum values correspond to the training set boundaries, i.e., stretching the intervals of Table I by 10% in each direction. Furthermore, we standardize the boosts using BCOLA N (k, z|θ) = BCOLA(k, z|θ) - ̄BCOLA(k, z) σB(k, z) , (5) where BCOLA N (k, z|θ) is the normalized COLA boost, ̄B(k, z) is the average of all boosts in the training set, and σB(k, z) is their standard deviation. We then perform a Principal Component Analysis (PCA) decomposition of the COLA boosts using scikit-learn [122] to reduce dimensionality. We retain NPC = 15 components, which are sufficient to recover the test set boosts to within 0.2%. 3. Neural Network Emulator After post-processing, we train our emulator with the normalized cosmological parameters as input features and the principal components as targets. We use a fully connected neural network with three hidden layers, each with 1024 neurons, with a mean squared error loss function, L = Ntrain ∑︁ i=1 NPC ∑︁ j=1 (αi,train j -αi,pred j )2, (6) where αi,train j is the j-th principal component coefficient of the i-th cosmology in the training set, and αi,pred j the corresponding prediction. We use the parametric activation function [123, 124] ym+1 n = γm n+ (1 -γm n) 1 1 + e-βm nym n ̃ym n, (7) where ym+1 n is the value of the n-th neuron of the (m+ 1)-th layer, ̃ym nthe n-th neuron from the (m+1)-th layer after the application of weights and biases, and γm nand βm nare parameters of the activation function that can be back-propagated during training. We use the Adam [125] optimizer to train the model parameters. 4. Boost Errors We perform a series of accuracy checks on the emulator outputs. First, to assess the accuracy of our neural network, we compare the emulator's predictions for test set cosmologies, unseen in the training procedure, against the actual COLA simulations. The relative errors are shown in the first panel of Figure 1. At k= 1 h/Mpc, 90% of the test set cosmologies have an emulation error within 0.1%. Comparing these direct emulation errors to the errors on the corrected boosts ̃BCOLA(k, z) (second panel), we note that the errors between the two differ by an order of magnitude. This indicates that, in the context of comparing COLA with high-precision simulations, the COLA emulator faithfully reproduces its simulations, and differences between the emulators can be attributed to the COLA approximation rather than the performance of the machine learning model. As per [98], COLA simulations increasingly lose power4 at progressively smaller scales, leading to typical errors of 10% at k= 1 h/Mpc for raw COLA boosts BCOLA, as defined in Equation 3. However, this power loss is cosmology-independent. Our previous work [99] showed that the best technique to build a COLA-based emulator is to leverage existing high-precision emulators in ΛCDM, using COLA only to extend the results into new dimensions, i.e., extra model parameters. This idea is encoded in the following expression for the nonlinear boost, ̃BCOLA(k, z|θ) = BN-body(k, z|θp) × BCOLA(k, z|θ) BCOLA(k, z|θp) , (8) where θpis the projection of θ in the ΛCDM subspace and BN-body is the nonlinear boost obtained from our benchmark N-body prescription. We choose EuclidEmulator2 as the base N-body prescription. The second panel of Figure 1 shows the relative difference between ̃B(k, z), calculated using Equation 8, and the benchmark emulator predictions, BEE2(k, z), for all test set cosmologies. At k= 1 h/Mpc, 90% of cosmologies have emulation errors within 2%, with 50% of cosmologies contained well within 1%. This demonstrates that our method successfully mitigates the accumulation of errors typical of COLA simulations in the nonlinear regime, allowing us to generate accurate predictions across our target krange. Furthermore, the cosmologies with larger errors are those with higher values of w0 + wa, a region of the parameter space excluded by current data. Finally, we consider a third nonlinear prescription: using the nonlinear boost from EuclidEmulator2 in the ΛCDM subspace; this approach will be denoted as EE2 ΛCDM. For this purpose, we compute the nonlinear boosts for w0waCDM cosmologies using EuclidEmulator2, setting w0 = -1 and wa= 0. The third panel of Figure 1 shows relative errors 4 This loss of power is well known to PM N-body codes, which fail to resolve the internal dynamics of halos. This trend of losing power starts roughly at a scale at which the pure 1-halo term of the halo model would dominate the power spectrum. 5 5.0 2.5 0.0 2.5 5.0 1000×(B COLA NN /B COLA 1) 50% 10 1 100 k (h/Mpc) 5.0 2.5 0.0 2.5 5.0 100×(BCOLA/B EE2 1) 90% 10 1 100 k (h/Mpc) 5.0 2.5 0.0 2.5 5.0 100×(B EE2 CDM/B EE2 1) 100% FIG. 1. From top to bottom: 1) Relative errors between the COLA boosts BCOLA predicted by the emulator versus those obtained from the test set simulations. 2) Relative errors between ̃BCOLA (see Equation 8) and the boosts from EE2. 3) Relative errors between BEE2 ΛCDM and EE2. Colors in all panels denote the percentile of cosmologies around the mean: blue contours enclose 50% of cosmologies, red contours enclose 90% of cosmologies, and the outer gray lines enclose 100% of cosmologies. All panels show results for z= 0, see Appendix A for the equivalent plots at higher redshifts. between BEE2 ΛCDM and the actual EuclidEmulator2 boosts. The 90% percentile shows errors of the order of 7.5% at k= 1 h/Mpc, significantly worse than the COLA errors of the panel above. These results support the viability of our emulator for parameter inference in w0waCDM, as the emulated boost ̃B(k, z) agrees with our N-body proxy at a level suitable for upcoming precision cosmology experiments [126] while requiring significantly less computational expense compared to traditional N-body methods. In the following, we investigate how the differences shown in Figure 1 impact the parameter constraints from simulated cosmic shear analysis. Parameter Fiducial Prior Survey specifications Area 12300 deg2 - Shape noise per component 0.26 - nsources eff 11.2 arcmin-2 - Photometric redshift offsets Δzi source 0 N[0, 0.002] Intrinsic alignment (NLA) a1 0.7 U[-5, 5] η1 -1.7 U[-5, 5] Shear calibration mi 0 N[0, 0.005] TABLE II. Mock survey specifications for our simulated analysis, and nuisance parameter priors. U[a, b] denotes an uniform distribution with edges [a, b], while N [a, b] denotes a Gaussian distribution with mean aand standard deviation b. Tomographic bin indices are denoted by i, and all our priors are the same for all bins. III. ANALYSIS OF LSST-Y1 SIMULATED DATA A. Simulating Cosmic Shear Data We simulate cosmic shear observations based on LSST-Y1, following the methodology of [127, 128] and detailed in [99]. Survey specifications, source galaxy redshift distributions, and nuisance parameter priors are taken from the LSST DESC Science Requirements Document [65], and summarized in Table II. The redshift distribution is modeled as a Smail distribution convolved with a Gaussian uncertainty 0.02(1 + z) and divided into five tomographic bins with equal galaxy number densities. The cosmic shear two-point correlation functions ξij ± (θ) are computed by first evaluating in Fourier space, Cij κκ(l), using the nonlinear matter power spectrum via the Limber approximation, then transforming to real space via the analytic functions in Appendix A of [6]. We compute ξij ± in 26 logarithmically spaced angular bins between 2.5 and 900 arcmin, averaging over each bin. We include standard self-calibrating systematics in our computation of ξ± - photometric redshift uncertainties, multiplicative shear calibration, and the nonlinear alignment (NLA) model of intrinsic galaxy alignments (see, e.g., [129, 130]). Likelihood analyses are performed using Cocoa, the Cobaya-CosmoLike Joint Architecture5 [131-133]. Linear power spectra are computed with camb [134, 135], and nonlinear corrections are applied using either ̃B(Eq. 8), BEE2 ΛCDM, or EuclidEmulator2. We use MCMC sampling to explore the parameter space and assess convergence using the Gelman-Rubin criterion (|R-1| -0.882, w0 + wa= -0.76+0.26 -0.15; • COLA: S8 = 0.770+0.008 -0.007, w0 > -0.900, w0 + wa= -0.79+0.26 -0.17; • EE2 ΛCDM: S8 = 0.762 ± 0.006, w0 = -0.879+0.140 -0.088, w0 + wa= -0.82+0.26 -0.17. In this case, the EE2 ΛCDM projection induces substantial biases in S8, shifting up by nearly 1σ. By comparison, the COLA emulator remains consistent to within 0.25 standard deviations. This trend extends to the multidimensional figures of bias: COLA has a 7D figure of bias of FoB7D = 0.27 compared to the benchmark, while the projected EE2 ΛCDM reaches FoB7D = 1.04. Unlike the parameter constraints assuming the center cosmology from Table I as the fiducial, we see in Figure 5 that the posteriors calculated using our COLA-based emulator now closely track those generated using our N-body proxy, rather than overestimating S8 and w0 in any substantive way. We find that in switching from the center cosmology to w↑ 0 and w↑ a, the relative ratio between FoMs of COLA and EE2 in the S8 × w0 plane is 0.97, while the same ratio is 1.21 for EE2 ΛCDM. To investigate whether our COLA emulator can provide unbiased constraints with FOMs similar to EE2 across the parameter space, Figure 6 shows the 1D biases of Equation 9 for Ωm, S8, w0 and wabetween the COLA emulator and our baseline EE2 for all scale cuts and all of the 29 cosmologies outlined in Section II. The fiducial cosmologies are listed in increasing order of their associated σ8 values. All 1D biases are within 9 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.72 0.76 0.80 S8 0.72 0.78 S8 1.2 0.9 w0 1.6 0.8 w0 +wa Cutoff 1 Fid. Cosmo.: w0 ,wa 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.76 0.79 S8 0.75 0.80 S8 1.2 0.9 w0 1.6 0.8 w0 +wa Cutoff 2 EE2 w0wa COLA EE2 CDM 0.30 0.35 m 1.5 1.0 0.5 w0 +wa 1.2 1.0 0.8 w0 0.74 0.76 0.78 S8 0.75 0.78 S8 1.2 0.9 w0 1.6 0.8 w0 +wa Cutoff 3 FIG. 5. Cosmological parameter constraints (68% and 95%) from the LSST-Y1 simulated analyses assuming a fiducial cosmology with w↑ 0 and w↑ a(see Table III), keeping the other parameters at their central values. Green-filled contours denote constraints obtained using EuclidEmulator2 as the nonlinear prescription, orange dashed and dotted contours use our COLA emulator, and blue dashed contours use EE2 ΛCDM prescription. The left, middle, and right panels show constraints using the angular cutoffs C1, C2, and C3, respectively. In this case, the EE2 ΛCDM prescription can provide significant biases in S8, which are not present when using COLA. 0.3σ, even for cosmologies with higher values of σ8 and Ωm, where Figure 4 shows our emulator performs worse. We computed the 7D figure of bias in cosmological parameters for all fiducial cosmologies using Cutoff 3, finding a maximum value of FoB7D = 0.35 at the cosmology Ω↑ m, w↑ 0, w↑ a. We conclude that, for all scale cuts considered in this work, our emulator succeeds in providing unbiased constraints on cosmological parameters when compared to a high-precision N-body emulator in the context of dynamical dark energy models, even with significant variations in cosmological parameters and extreme values of σ8. Further, Figure 6 indicates our measure of the relative tightness of the parameter constraints in the S8 ×w plane compared to EE2, FOMCOLA S8×w/FOMEE2 S8×w, assuming Cutoff 3. The highest ratio is 1.19 for the center cosmology, shown in Figure 2. The same remark from before applies to other fiducial cosmologies: the disagreement between COLA and EE2 is driven mainly by low values of w0 and S8, a region of the parameter space disallowed by low-redshift distance measurements. V. CONCLUSION In light of recent hints of dynamical dark energy, constraining its equation of state has become a task of central importance. While geometrical probes, such as BAO and supernovae measurements, are the primary probes of late dark energy behavior, galaxy surveys can also probe dark energy dynamics through their effects on the growth of large-scale structure. Extracting robust constraints from these surveys requires accurate modeling of nonlinear gravitational effects in the matter power spectrum, which becomes increasingly challenging in extended cosmologies where w(z) departs from a cosmological constant. While accurate modeling can be achieved with N-body simulations, their high computational cost limits their applicability across the myriad candidate dynamical dark energy models (e.g., quintessence), and, further, for models that directly impact the growth of matter perturbations beyond modifications in the Universe's expansion. In this context, COLA is a fast approximate alternative to N-body simulations, which, when appropriately corrected, presents an avenue for constructing accurate emulators for the nonlinear matter power spectrum at a fraction of the computational cost. In this work, we have built an emulator for the nonlinear boost assuming the w0waCDM cosmological model using a suite of 1400 COLA simulations, two for each of the 700 cosmologies in the training set to account for pairing-and-fixing. To evaluate the accuracy of the neural network, we ran a pair of simulations for each of the 200 cosmologies in the test set. The total computational cost of all simulations is estimated at 153.600 CPU-hours. A simple connected neural network with a trainable activation function can reproduce the test set boosts at 0.1% error. The computational cost of the simulation suite could potentially be lowered by using an alternative sampling algorithm for the training set cosmologies: the Sobol sequence [140], which has been shown to improve emulation errors compared to Latin hypercube sampling [83]. We have compared our COLA emulator to a benchmark N-body emulator, chosen as EuclidEmulator2. We test an additional nonlinear prescription common to analyses of extended cosmological models without N-body simulations: using N-body boosts at the projected ΛCDM cosmology (i.e., setting w0 = -1 and wa= 0), an approach we denote as EE2 ΛCDM. We compare nonlinear models in two manners: at the boost level, shown in Figure 1, and at the level of a simulated cosmic shear analysis akin to LSST-Y1, assuming EuclidEmulator2 as the true nonlinear model. In the data analysis, to account for possible variations in the cosmological parameters when beyond-ΛCDM models are analyzed, we define fiducial cosmologies scattered across the parameter 10 0.5 0.0 0.5 m/ m m,w0 ,wa As ,w0 ,wa m,w0 ,wa ns ,w0 ,wa m,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa m,w0 ,wa As ,w0 ,wa As ,w0 ,wa ns ,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa Center ns ,w0 ,wa m,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa ns ,w0 ,wa w0 ,wa ns ,w0 ,wa As ,w0 ,wa m,w0 ,wa As ,w0 ,wa m,w0 ,wa m,w0 ,wa 0.5 0.0 0.5 S8/ S8 0.5 0.0 0.5 w0/ w0 0.5 0.0 0.5 (w0 +wa)/ w0 +wa 0.95 | 0.97 | 0.92 1.04 | 1.05 | 0.96 1.01 | 1.06 | 1.04 0.99 | 1.05 | 1.01 1.04 | 1.05 | 1.07 1.00 | 1.05 | 0.94 1.06 | 1.01 | 1.03 1.02 | 1.04 | 1.14 0.98 | 0.95 | 1.01 1.00 | 0.98 | 1.06 1.03 | 1.03 | 1.12 0.99 | 1.04 | 1.10 1.02 | 1.15 | 1.18 1.06 | 1.10 | 1.17 0.93 | 1.02 | 0.99 1.06 | 1.08 | 1.19 0.91 | 1.01 | 1.08 1.07 | 1.10 | 1.02 1.04 | 1.07 | 1.15 1.07 | 1.05 | 1.14 1.02 | 1.07 | 1.18 1.00 | 0.96 | 0.97 0.99 | 0.96 | 1.02 1.07 | 0.97 | 1.04 1.00 | 1.05 | 1.08 1.04 | 1.11 | 1.13 0.92 | 1.00 | 0.99 0.95 | 0.98 | 0.99 0.94 | 0.90 | 0.92 FoMCOLA S8 ×w / FoMEE2 S8 ×w C1 | C2 | C3 C1 C2 C3 FIG. 6. One-dimensional biases from Equation 9, shown for the parameters Ωm, S8, w0 and wa, sorted in increasing order of their associated σ8 values. On the right-hand side, we show the ratios in figure of merit (see Equation 11) in the S8 × w0 plane between the analyses using COLA and EE2, for the three angular cutoffs. The gray bands represent 0.3σbias. 11 space, including outside ΛCDM, with significant variations in Ωmand σ8. We have assessed the goodness-of-fit degradation, shown in Figure 3, and parameter constraint biases, shown in Figures 2, 5 and 6. We find that, at the boost level, our emulator can reproduce the benchmark emulator results with less than 2% error at k= 1 h/Mpc, while EE2 ΛCDM produces 7.5% errors at the same scales. As for the simulated analysis, we find that our COLA-based emulator can provide unbiased constraints compared to EuclidEmulator2: all 1D biases are well within 0.3σ, even for the most aggressive angular cutoffs and exotic fiducial cosmologies with extreme values of Ωm, σ8, or outside ΛCDM. Furthermore, all 7D figures of bias are below 0.35. At the precision level expected for the first year of LSST observations, the COLA emulator yields constraints equivalent to those obtained using EuclidEmulator2 for scales up to k≈3 h/Mpc, comparable to our Cutoff 2. Our results demonstrate that COLA, when combined with an accurate ΛCDM reference, offers a viable and flexible framework for extending nonlinear modeling to dynamical dark energy and other beyond-ΛCDM scenarios. We emphasize that, while we use EuclidEmulator2 as the "baseline" ΛCDM emulator in Equation 8, any other emulator could be used to provide ΛCDM boosts. Moreover, our methodology can be applied to more exotic models that also modify the growth of structure directly, such as modified gravity or coupled dark energy [105]. We also remark that there are avenues to improve our methodology. One example is the choice of "reference" cosmology. Equation 8 uses the projected ΛCDM cosmology because, geometrically, it is the closest cosmology; however, a possible choice that improves accuracy is to use a wCDM cosmology with the same value of σ8, akin to what is done in the Casarini [141] prescription. Moreover, COLA can be combined with other analytical or semi-analytical prescriptions, such as the one proposed in [142], improving its accuracy at small scales. ACKNOWLEDGEMENTS The authors would like to thank Stony Brook Research Computing and Cyberinfrastructure and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system. This research was also supported by resources supplied by the Center for Scientific Computing (NCC/GridUNESP) of the São Paulo State University (UNESP). This work made use of the CHE cluster, managed and funded by COSMO/CBPF/MCTI, with financial support from FINEP and FAPERJ, and operating at the RSDC - Datacenter para Ciência Alfredo Marques de Oliveira/CBPF. JR acknowledges the financial support from FAPESP under grant 2020/03756-2, São Paulo Research Foundation (FAPESP) through ICTP-SAIFR. JR also acknowledges that this study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. VM is supported by the Roman Project Infrastructure Team "Maximizing Cosmological Science with the Roman High Latitude Imaging Survey" (NASA contracts 80NM0018D0004-80NM0024F0012). V.M. is also partially supported by the Roman Project Infrastructure Team "A Roman Project Infrastructure Team to Support Cosmological Measurements with Type Ia Supernovae" (NASA contract 80NSSC24M0023). [1] M. A. Troxel, N. MacCrann, J. Zuntz, T. F. Eifler, E. Krause, et al., Dark Energy Survey Year 1 results: Cosmological constraints from cosmic shear, Phys. Rev. D 98, 043528 (2018), . [2] T. M. C. Abbott, F. B. Abdalla, A. Alarcon, J. Aleksić, S. Allam, et al., Dark Energy Survey year 1 results: Cosmological constraints from galaxy clustering and weak lensing, Phys. Rev. D 98, 043526 (2018), . [3] T. M. C. Abbott, F. B. Abdalla, S. Avila, M. Banerji, E. Baxter, et al., Dark Energy Survey year 1 results: Constraints on extended cosmological models from galaxy clustering and weak lensing, Physical Review D 99, 123505 (2019), . [4] A. Amon, D. Gruen, M. A. Troxel, N. MacCrann, S. Dodelson, et al., Dark Energy Survey Year 3 results: Cosmology from cosmic shear and robustness to data calibration, Phys. Rev. D 105, 023514 (2022), . [5] L. F. Secco, S. Samuroff, E. Krause, B. Jain, J. Blazek, et al., Dark Energy Survey Year 3 results: Cosmology from cosmic shear and robustness to modeling uncertainty, Phys. Rev. D 105, 023515 (2022), . [6] O. Friedrich, F. Andrade-Oliveira, H. Camacho, O. Alves, R. Rosenfeld, et al., Dark Energy Survey year 3 results: covariance modelling and its impact on parameter estimation and quality of fit, Mon. Not. of the Royal Astron. Soc. 508, 3125 (2021), . [7] A. Porredon, M. Crocce, J. Elvin-Poole, R. Cawthon, G. Giannini, et al., Dark Energy Survey Year 3 results: Cosmological constraints from galaxy clustering and galaxy-galaxy lensing using the MAGLIM lens sample, Phys. Rev. D 106, 103530 (2022), . [8] S. Pandey, E. Krause, J. DeRose, N. MacCrann, B. Jain, et al., Dark Energy Survey year 3 results: Constraints on cosmological parameters and galaxy-bias models from galaxy clustering and galaxy-galaxy lensing using the redMaGiC sample, Phys. Rev. D 106, 043520 (2022), . [9] T. M. C. Abbott, M. Aguena, A. Alarcon, S. Allam, O. Alves, et al., Dark Energy Survey Year 3 results: Cosmological constraints from galaxy clustering and weak lensing, Phys. Rev. D 105, 023520 (2022), . [10] T. M. C. Abbott, M. Aguena, A. Alarcon, O. Alves, A. Amon, et al., Dark Energy Survey Year 3 results: Constraints on extensions to Λ CDM with weak lensing and galaxy clustering, Phys. Rev. D 107, 083504 (2023), . [11] A. H. Wright, B. Stölzner, M. Asgari, M. Bilicki, B. Giblin, et al., KiDS-Legacy: Cosmological constraints from cosmic shear with the complete Kilo-Degree Survey, arXiv e-prints , 12 (2025), . [12] S.-S. Li, K. Kuijken, H. Hoekstra, L. Miller, C. Heymans, et al., KiDS-Legacy calibration: Unifying shear and redshift calibration with the SKiLLS multi-band image simulations, Astronomy & Astrophysics 670, A100 (2023), . [13] M. Bilicki, A. Dvornik, H. Hoekstra, A. H. Wright, N. E. Chisari, et al., Bright galaxy sample in the Kilo-Degree Survey Data Release 4. Selection, photometric redshifts, and physical properties, Astronomy & Astrophysics 653, A82 (2021), . [14] A. Loureiro, L. Whittaker, A. Spurio Mancini, B. Joachimi, A. Cuceu, et al., KiDS and Euclid: Cosmological implications of a pseudo angular power spectrum analysis of KiDS-1000 cosmic shear tomography, Astronomy & Astrophysics 665, A56 (2022), . [15] P. A. Burger, O. Friedrich, J. Harnois-Déraps, P. Schneider, M. Asgari, et al., KiDS-1000 cosmology: Constraints from density split statistics, Astronomy & Astrophysics 669, A69 (2023), . [16] J. L. van den Busch, A. H. Wright, H. Hildebrandt, M. Bilicki, M. Asgari, et al., KiDS-1000: Cosmic shear with enhanced redshift calibration, Astronomy & Astrophysics 664, A170 (2022), . [17] T. Tröster, M. Asgari, C. Blake, M. Cataneo, C. Heymans, et al., KiDS-1000 Cosmology: Constraints beyond flat ΛCDM, Astronomy & Astrophysics 649, A88 (2021), . [18] A. Dvornik, C. Heymans, M. Asgari, C. Mahony, B. Joachimi, et al., KiDS-1000: Combined halo-model cosmology constraints from galaxy abundance, galaxy clustering, and galaxygalaxy lensing, Astronomy & Astrophysics 675, A189 (2023), . [19] C. Heymans, T. Tröster, M. Asgari, C. Blake, H. Hildebrandt, et al., KiDS-1000 Cosmology: Multi-probe weak gravitational lensing and spectroscopic galaxy clustering constraints, Astronomy and Astrophysics 646, A140 (2021), . [20] M. C. Fortuna, H. Hoekstra, H. Johnston, M. Vakili, A. Kannawadi, et al., KiDS-1000: Constraints on the intrinsic alignment of luminous red galaxies, Astronomy & Astrophysics 654, A76 (2021), . [21] H. Hildebrandt, J. L. van den Busch, A. H. Wright, C. Blake, B. Joachimi, et al., KiDS-1000 catalogue: Redshift distributions and their calibration, Astronomy & Astrophysics 647, A124 (2021), . [22] S. J. Nakoneczny, M. Bilicki, A. Pollo, M. Asgari, A. Dvornik, et al., Photometric selection and redshifts for quasars in the Kilo-Degree Survey Data Release 4, Astronomy & Astrophysics 649, A81 (2021), . [23] J. Yao, H. Shan, P. Zhang, X. Liu, C. Heymans, et al., KiDS1000: Cross-correlation with Planck cosmic microwave background lensing and intrinsic alignment removal with selfcalibration, Astronomy & Astrophysics 673, A111 (2023), . [24] C. Hikage, M. Oguri, T. Hamana, S. More, R. Mandelbaum, et al., Cosmology from cosmic shear power spectra with Subaru Hyper Suprime-Cam first-year data, Publications of the Astronomical Society of Japan 71, 43 (2019), . [25] T. Hamana, M. Shirasaki, S. Miyazaki, C. Hikage, M. Oguri, et al., Cosmological constraints from cosmic shear two-point correlation functions with HSC survey first-year data, Publications of the Astronomical Society of Japan 72, 16 (2020), . [26] H. Aihara, N. Arimoto, R. Armstrong, S. Arnouts, N. A. Bahcall, et al., The Hyper Suprime-Cam SSP Survey: Overview and survey design, Publications of the Astronomical Society of Japan 70, S4 (2018), . [27] M. Tanaka, J. Coupon, B.-C. Hsieh, S. Mineo, A. J. Nishizawa, et al., Photometric redshifts for Hyper Suprime-Cam Subaru Strategic Program Data Release 1, Publications of the Astronomical Society of Japan 70, S9 (2018), . [28] R. Mandelbaum, F. Lanusse, A. Leauthaud, R. Armstrong, M. Simet, et al., Weak lensing shear calibration with simulations of the HSC survey, Monthly Notices of the Royal Astronomical Society 481, 3170 (2018), . [29] R. Dalal, X. Li, A. Nicola, J. Zuntz, M. A. Strauss, et al., Hyper Suprime-Cam Year 3 results: Cosmology from cosmic shear power spectra, Phys. Rev. D 108, 123519 (2023), . [30] DESI Collaboration, M. Abdul-Karim, A. G. Adame, D. Aguado, J. Aguilar, et al., Data Release 1 of the Dark Energy Spectroscopic Instrument, arXiv e-prints , (2025), . [31] U. Andrade, E. Paillas, J. Mena-Fernandez, Q. Li, A. J. Ross, et al., Validation of the DESI DR2 Measurements of Baryon Acoustic Oscillations from Galaxies and Quasars, arXiv eprints , (2025), . [32] L. Casas, H. K. Herrera-Alcantar, J. Chaves-Montero, A. Cuceu, A. Font-Ribera, et al., Validation of the DESI DR2 LyαBAO analysis using synthetic datasets, arXiv e-prints , (2025), . [33] DESI Collaboration, M. Abdul-Karim, J. Aguilar, S. Ahlen, C. Allende Prieto, et al., DESI DR2 Results I: Baryon Acoustic Oscillations from the Lyman Alpha Forest, arXiv e-prints , (2025), . [34] DESI Collaboration, M. Abdul-Karim, J. Aguilar, S. Ahlen, S. Alam, et al., DESI DR2 Results II: Measurements of Baryon Acoustic Oscillations and Cosmological Constraints, arXiv eprints , (2025), . [35] K. Lodha, R. Calderon, W. L. Matthewson, A. Shafieloo, M. Ishak, et al., Extended Dark Energy analysis using DESI DR2 BAO measurements, arXiv e-prints , (2025), . [36] W. Elbers, A. Aviles, H. E. Noriega, D. Chebat, A. Menegas, et al., Constraints on Neutrino Physics from DESI DR2 BAO and DR1 Full Shape, arXiv e-prints , (2025), . [37] A. J. Ross, F. Beutler, C.-H. Chuang, M. Pellejero-Ibanez, H.-J. Seo, et al., The clustering of galaxies in the completed SDSSIII Baryon Oscillation Spectroscopic Survey: observational systematics and baryon acoustic oscillations in the correlation function, Monthly Notices of the Royal Astronomical Society 464, 1168 (2017), . [38] F. Beutler, H.-J. Seo, A. J. Ross, P. McDonald, S. Saito, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: baryon acoustic oscillations in the Fourier space, Monthly Notices of the Royal Astronomical Society 464, 3409 (2017), . [39] S. Alam, M. Ata, S. Bailey, F. Beutler, D. Bizyaev, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: cosmological analysis of the 13 DR12 galaxy sample, Monthly Notices of the Royal Astronomical Society 470, 2617 (2017), . [40] S. Satpathy, S. Alam, S. Ho, M. White, N. A. Bahcall, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: on the measurement of growth rate using galaxy correlation functions, Monthly Notices of the Royal Astronomical Society 469, 1369 (2017), . [41] F. Beutler, H.-J. Seo, S. Saito, C.-H. Chuang, A. J. Cuesta, et al., The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: anisotropic galaxy clustering in Fourier space, Monthly Notices of the Royal Astronomical Society 466, 2242 (2017), . [42] K. S. Dawson, J.-P. Kneib, W. J. Percival, S. Alam, F. D. Albareti, et al., The SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: Overview and Early Data, The Astronomical Journal 151, 44 (2016), . [43] Z. Zhai, J. L. Tinker, C. Hahn, H.-J. Seo, M. R. Blanton, et al., The Clustering of Luminous Red Galaxies at z ∼0.7 from EBOSS and BOSS Data, Astrophys. J. 848, 76 (2017), . [44] G.-B. Zhao, Y. Wang, A. J. Ross, S. Shandera, W. J. Percival, et al., The extended Baryon Oscillation Spectroscopic Survey: a cosmological forecast, Monthly Notices of the Royal Astronomical Society 457, 2377 (2016), . [45] S. Jouvel, T. Delubac, J. Comparat, H. Camacho, A. Carnero, et al., Photometric redshifts and clustering of emission line galaxies selected jointly by DES and eBOSS, Monthly Notices of the Royal Astronomical Society 469, 2771 (2017). [46] M. Ata, F. Baumgarten, J. Bautista, F. Beutler, D. Bizyaev, et al., The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: first measurement of baryon acoustic oscillations between redshift 0.8 and 2.2, Monthly Notices of the Royal Astronomical Society 473, 4773 (2018), . [47] S. A. Rodríguez-Torres, J. Comparat, F. Prada, G. Yepes, E. Burtin, et al., Clustering of quasars in the first year of the SDSS-IV eBOSS survey: interpretation and halo occupation distribution, Monthly Notices of the Royal Astronomical Society 468, 728 (2017), . [48] A. D. Myers, N. Palanque-Delabrouille, A. Prakash, I. Pâris, C. Yeche, et al., The SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: Quasar Target Selection, The Astrophysical Journal Supplement 221, 27 (2015), . [49] H. Gil-Marín, J. Guy, P. Zarrouk, E. Burtin, C.-H. Chuang, et al., The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: structure growth rate measurement from the anisotropic quasar power spectrum in the redshift range 0.8 < z < 2.2, Monthly Notices of the Royal Astronomical Society 477, 1604 (2018), . [50] R. Ruggeri, W. J. Percival, H. Gil-Marín, F. Beutler, E.- M. Mueller, et al., The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: measuring the evolution of the growth rate using redshiftspace distortions between redshift 0.8 and 2.2, Monthly Notices of the Royal Astronomical Society 483, 3878 (2019), . [51] C. Blake, E. A. Kazin, F. Beutler, T. M. Davis, D. Parkinson, et al., The WiggleZ Dark Energy Survey: mapping the distance-redshift relation with baryon acoustic oscillations, Monthly Notices of the Royal Astronomical Society 418, 1707 (2011), . [52] E. A. Kazin, J. Koda, C. Blake, N. Padmanabhan, S. Brough, et al., The WiggleZ Dark Energy Survey: improved distance measurements to z = 1 with reconstruction of the baryonic acoustic feature, Monthly Notices of the Royal Astronomical Society 441, 3524 (2014), . [53] S. Riemer-Sørensen, C. Blake, D. Parkinson, T. M. Davis, S. Brough, et al., WiggleZ Dark Energy Survey: Cosmological neutrino mass constraint from blue high-redshift galaxies, Phys. Rev. D 85, 081101 (2012), . [54] D. Parkinson, S. Riemer-Sørensen, C. Blake, G. B. Poole, T. M. Davis, et al., The WiggleZ Dark Energy Survey: Final data release and cosmological results, Phys. Rev. D 86, 103518 (2012), . [55] C. Blake, S. Brough, M. Colless, C. Contreras, W. Couch, et al., The WiggleZ Dark Energy Survey: the growth rate of cosmic structure since redshift z=0.9, Monthly Notices of the Royal Astronomical Society 415, 2876 (2011), . [56] E. Di Valentino, L. A. Anchordoqui, Ö. Akarsu, Y. AliHaimoud, L. Amendola, et al., Cosmology Intertwined III: fσ8 and S8, Astroparticle Physics 131, 102604 (2021), . [57] D. Rubin, G. Aldering, M. Betoule, A. Fruchter, X. Huang, et al., Union Through UNITY: Cosmology with 2,000 SNe Using a Unified Bayesian Framework, arXiv e-prints , (2023), . [58] DES Collaboration, T. M. C. Abbott, M. Acevedo, M. Aguena, A. Alarcon, et al., The Dark Energy Survey: Cosmology Results with ∼1500 New High-redshift Type Ia Supernovae Using the Full 5 yr Data Set, The Astrophysical Journal Letters 973, L14 (2024), . [59] J. Rebouças, D. H. F. de Souza, K. Zhong, V. Miranda, and R. Rosenfeld, Investigating late-time dark energy and massive neutrinos in light of DESI Y1 BAO, Journal of Cosmology and Astroparticle Physics 2025, 024 (2025), . [60] N. Roy, Dynamical dark energy in the light of desi 2024 data, Physics of the Dark Universe 48, 101912 (2025). [61] Y. Carloni, O. Luongo, and M. Muccino, Does dark energy really revive using desi 2024 data?, Phys. Rev. D 111, 023512 (2025). [62] A. Chakraborty, P. K. Chanda, S. Das, and K. Dutta, Desi results: Hint towards coupled dark matter and dark energy (2025), . [63] L. Huang, R.-G. Cai, and S.-J. Wang, The desi dr1/dr2 evidence for dynamical dark energy is biased by low-redshift supernovae (2025), . [64] H. Chaudhary, S. Capozziello, V. K. Sharma, and G. Mustafa, Does desi dr2 challenge λcdm paradigm ? (2025), . [65] The LSST Dark Energy Science Collaboration, R. Mandelbaum, T. Eifler, R. Hložek, T. Collett, et al., The LSST Dark Energy Science Collaboration (DESC) Science Requirements Document, arXiv e-prints , (2018), . [66] Euclid Collaboration, Euclid. I. Overview of the Euclid mission, arXiv e-prints , (2024), . [67] B. P. Crill, M. Werner, R. Akeson, M. Ashby, L. Bleem, et al., SPHEREx: NASA's near-infrared spectrophotometric 14 all-sky survey, in Space Telescopes and Instrumentation 2020: Optical, Infrared, and Millimeter Wave, Society of PhotoOptical Instrumentation Engineers (SPIE) Conference Series, Vol. 11443, edited by M. Lystrup and M. D. Perrin (2020) p. 114430I, . [68] T. Eifler, H. Miyatake, E. Krause, C. Heinrich, V. Miranda, et al., Cosmology with the Roman Space Telescope - multiprobe strategies, Monthly Notices of the Royal Astronomical Society 507, 1746 (2021), . [69] Lacasa, Fabien, Cosmology in the non-linear regime: the small scale miracle, A&A 661, A70 (2022). [70] A. Lewis, A. Challinor, and A. Lasenby, Efficient Computation of Cosmic Microwave Background Anisotropies in Closed Friedmann-Robertson-Walker Models, Astrophys. J. 538, 473 (2000), arXiv:astro-ph/9911177 [astro-ph]. [71] J. Lesgourgues, The Cosmic Linear Anisotropy Solving System (CLASS) I: Overview, arXiv e-prints , (2011), . [72] D. Blas, J. Lesgourgues, and T. Tram, The Cosmic Linear Anisotropy Solving System (CLASS). Part II: Approximation schemes, Journal of Cosmology and Astroparticle Physics 2011 (7), 034, . [73] A. Schneider, R. Teyssier, D. Potter, J. Stadel, J. Onions, et al., Matter power spectrum and the challenge of percent accuracy, JCAP 04, 047, . [74] D. Potter, J. Stadel, and R. Teyssier, PKDGRAV3: beyond trillion particle cosmological simulations for the next era of galaxy surveys, Computational Astrophysics and Cosmology 4, 2 (2017), . [75] V. Springel, R. Pakmor, O. Zier, and M. Reinecke, Simulating cosmic structure formation with the GADGET-4 code, Monthly Notices of the Royal Astronomical Society 506, 2871 (2021), . [76] M. Chevallier and D. Polarski, Accelerating Universes with Scaling Dark Matter, International Journal of Modern Physics D 10, 213 (2001), arXiv:gr-qc/0009008 [gr-qc]. [77] E. V. Linder, Exploring the Expansion History of the Universe, Phys. Rev. Lett. 90, 091301 (2003), arXiv:astro-ph/0208512 [astro-ph]. [78] Euclid Collaboration, M. Knabenhans, J. Stadel, D. Potter, J. Dakin, et al., Euclid preparation: IX. EuclidEmulator2 - power spectrum emulation with massive neutrinos and selfconsistent dark energy perturbations, Mon. Not. Royal Astron. Soc. 505, 2840 (2021), . [79] R. E. Angulo, M. Zennaro, S. Contreras, G. Aricò, M. Pellejero-Ibañez, and J. Stücker, The BACCO simulation project: exploiting the full power of large-scale structure for cosmology, Mon. Notices Royal Astron. Soc. 507, 5869 (2021), . [80] E. Lawrence, K. Heitmann, J. Kwan, A. Upadhye, D. Bingham, S. Habib, D. Higdon, A. Pope, H. Finkel, and N. Frontiere, The mira-titan universe. ii. matter power spectrum emulation, The Astrophysical Journal 847, 50 (2017). [81] A. Apyan, C. Cosby, Y. Feng, A. Gelgen, S. Gori, P. Harris, X. Liu, M. Liu, P. Maksimovic, C. Mantilla-Suarez, R. McLaughlin, C. Miller, A. Mitra, N. Paladino, A. R. Das, V. Slokenbergs, D. Sperka, N. Tran, and Z. Wan, Performance measurements of the electromagnetic calorimeter and readout electronics system for the darkquest experiment (2025), . [82] A. Apyan, B. Batell, A. Berlin, N. Blinov, C. Chaharom, S. Cuadra, Z. Demiragli, A. Duran, Y. Feng, I. P. Fernando, S. Gori, P. Harris, D. Hoang, D. Keller, E. Kowalczyk, M. Leys, K. Liu, M. Liu, W. Lorenzon, P. Maksimovic, C. M. Suarez, H. Marukyan, A. Mitra, Y. Miyachi, P. McCormack, E. A. Moreno, Y. C. Morales, N. Paladino, M. Rai, S. Rotella, L. Saunders, S. Sawada, C. Smith, D. Sperka, R. Tesarek, N. Tran, Y.-D. Tsai, Z. Wan, and M. Wynne, Darkquest: A dark sector upgrade to spinquest at the 120 gev fermilab main injector (2022), . [83] Z. Chen, Y. Yu, J. Han, and Y. Jing, Csst cosmological emulator i: Matter power spectrum emulation with one percent accuracy to k = 10h mpc-1, Science China Physics, Mechanics &; Astronomy 68, 10.1007/s11433-025-2671-0 (2025). [84] Z. Chen and Y. Yu, Csst cosmological emulator ii: Generalized accurate halo mass function emulation (2025), . [85] S. Zhou, Z. Chen, and Y. Yu, Csst cosmological emulator iii: Hybrid lagrangian bias expansion emulation of galaxy clustering (2025), . [86] J. DeRose, N. Kokron, A. Banerjee, S.-F. Chen, M. White, et al., Aemulus ν: precise predictions for matter and biased tracer power spectra in the presence of neutrinos, JCAP 07, 054, . [87] J. DeRose, R. H. Wechsler, J. L. Tinker, M. R. Becker, Y.- Y. Mao, et al., The Aemulus Project I: Numerical Simulations for Precision Cosmology, Astrophys. J. 875, 69 (2019), . [88] T. McClintock, E. Rozo, M. R. Becker, J. DeRose, Y.-Y. Mao, et al., The Aemulus Project II: Emulating the Halo Mass Function, Astrophys. J. 872, 53 (2019), . [89] Z. Zhai, J. L. Tinker, M. R. Becker, J. DeRose, Y.-Y. Mao, et al., The Aemulus Project III: Emulation of the Galaxy Correlation Function, Astrophys. J. 874, 95 (2019), . [90] B. Fiorini, K. Koyama, and T. Baker, Fast production of cosmological emulators in modified gravity: the matter power spectrum, Journal of Cosmology and Astroparticle Physics 2023, 045 (2023), . [91] D. Fremstad and H. A. Winther, Emulating the Non-Linear Matter Power-Spectrum in Mixed Axion Dark Matter Models, arXiv e-prints , (2025), . [92] I. Sáez-Casares, Y. Rasera, and B. Li, The e-MANTIS emulator: fast predictions of the non-linear matter power spectrum in f(R)CDM cosmology, Monthly Notices of the Royal Astronomical Society 527, 7242 (2024), . [93] C. Arnold, B. Li, B. Giblin, J. Harnois-Déraps, and Y.-C. Cai, FORGE: the f(R)-gravity cosmic emulator project - I. Introduction and matter power spectrum emulator, Monthly Notices of the Royal Astronomical Society 515, 4161 (2022), . [94] H. A. Winther, F. Schmidt, A. Barreira, C. Arnold, S. Bose, et al., Modified gravity N-body code comparison project, Monthly Notices of the Royal Astronomical Society 454, 4208 (2015), . [95] R. W. Hockney and J. W. Eastwood, Computer simulation using particles (1988). [96] S. Tassev, M. Zaldarriaga, and D. Eisenstein, Solving Large Scale Structure in Ten Easy Steps with COLA, JCAP 06, 036, . [97] J. Ding, S. Li, Y. Zheng, X. Luo, L. Zhang, and X.-D. Li, Fast generation of mock galaxy catalogues with COLA, arXiv e-prints , (2023), . 15 [98] G. Brando, B. Fiorini, K. Koyama, and H. A. Winther, Enabling matter power spectrum emulation in beyond-ΛCDM cosmologies with COLA, Journal of Cosmology and Astroparticle Physics 2022 (9), 051, . [99] J. Gordon, B. F. de Aguiar, J. Rebouças, G. Brando, F. Falciano, et al., Modeling nonlinear scales with the comoving Lagrangian acceleration method: Preparing for LSST Y1, Phys. Rev. D 110, 083529 (2024), . [100] M. J. Mortonson, D. Huterer, and W. Hu, Figures of merit for present and future dark energy probes, Physical Review D 82, 10.1103/physrevd.82.063004 (2010). [101] R. Trotta, Bayes in the sky: Bayesian inference and model selection in cosmology, Contemporary Physics 49, 71-104 (2008). [102] C. Shapiro, Biased dark energy constraints from neglecting reduced shear in weak-lensing surveys, The Astrophysical Journal 696, 775-784 (2009). [103] M.-X. Lin, B. Jain, M. Raveri, E. J. Baxter, C. Chang, M. Gatti, S. Lee, and J. Muir, Late time modification of structure growth and the S8 tension, Phys. Rev. D 109, 063523 (2024), . [104] V. Poulin, T. L. Smith, R. Calderón, and T. Simon, Impact of ACT DR6 and DESI DR2 for Early Dark Energy and the Hubble tension, arXiv e-prints , (2025), . [105] E. Silva, M. A. Sabogal, M. Scherer, R. C. Nunes, E. Di Valentino, and S. Kumar, New constraints on interacting dark energy from DESI DR2 BAO observations, Phys. Rev. D 111, 123511 (2025), . [106] K. Liu, X. Fu, B. Xu, C. Ding, Y. Huang, and X. Qing, The growth of linear perturbations in the interacting dark energy models and observational constraints, arXiv e-prints , (2025), . [107] P. Ghedini, R. Hajjar, and O. Mena, Redshift-space distortions corner interacting dark energy, Physics of the Dark Universe 46, 101671 (2024), . [108] M. A. Sabogal, E. Silva, R. C. Nunes, S. Kumar, E. Di Valentino, and W. Giarè, Quantifying the S8 tension and evidence for interacting dark energy from redshift-space distortion measurements, Phys. Rev. D 110, 123508 (2024), . [109] B. Fiorini, K. Koyama, and T. Baker, Fast production of cosmological emulators in modified gravity: the matter power spectrum, JCAP 12, 045, . [110] A. Izard, M. Crocce, and P. Fosalba, ICE-COLA: Towards fast and accurate synthetic galaxy catalogues optimizing a quasi Nbody method, Mon. Not. Roy. Astron. Soc. 459, 2327 (2016), . [111] B. S. Wright, A. Sen Gupta, T. Baker, G. Valogiannis, B. Fiorini, and LSST Dark Energy Science Collaboration, Hi-COLA: fast, approximate simulations of structure formation in Horndeski gravity, Journal of Cosmology and Astroparticle Physics 2023 (3), 040, . [112] S. Colombi, A. Jaffe, D. Novikov, and C. Pichon, Accurate estimators of power spectra in N-body simulations, Monthly Notices of the Royal Astronomical Society 393, 511 (2009), . [113] R. E. Angulo and O. Hahn, Large-scale dark matter simulations, Living Reviews in Computational Astrophysics 8, 10.1007/s41115-021-00013-z (2022). [114] C. Fidler, C. Rampf, T. Tram, R. Crittenden, K. Koyama, et al., General relativistic corrections to N-body simulations and the Zel'dovich approximation, Phys. Rev. D 92, 123517 (2015), . [115] C. Fidler, T. Tram, C. Rampf, R. Crittenden, K. Koyama, et al., Relativistic initial conditions for N-body simulations, JCAP 06, 043, . [116] T. Tram, J. Brandbyge, J. Dakin, and S. Hannestad, Fully relativistic treatment of light neutrinos in N-body simulations, JCAP 03, 022, . [117] G. Brando, K. Koyama, and D. Wands, Relativistic Corrections to the Growth of Structure in Modified Gravity, JCAP 01, 013, . [118] R. E. Angulo and A. Pontzen, Cosmological n-body simulations with suppressed variance, Monthly Notices of the Royal Astronomical Society: Letters 462, L1-L5 (2016). [119] F. Capozzi, E. Di Valentino, E. Lisi, A. Marrone, A. Melchiorri, and A. Palazzo, Global constraints on absolute neutrino masses and their ordering, Phys. Rev. D 95, 096014 (2017). [120] M. Tanabashi, K. Hagiwara, K. Hikasa, K. Nakamura, et al. (Particle Data Group), Review of particle physics, Phys. Rev. D 98, 030001 (2018). [121] R. Takahashi, M. Sato, T. Nishimichi, A. Taruya, and M. Oguri, Revising the Halofit Model for the Nonlinear Matter Power Spectrum, Astrophys. J. 761, 152 (2012), . [122] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, Scikit-learn: Machine learning in Python, Journal of Machine Learning Research 12, 2825 (2011). [123] A. Spurio Mancini, D. Piras, J. Alsing, B. Joachimi, and M. P. Hobson, COSMOPOWER: emulating cosmological power spectra for accelerated Bayesian inference from nextgeneration surveys, Mon. Notices Royal Astron. Soc. 511, 1771 (2022), . [124] J. Alsing, H. Peiris, J. Leja, C. Hahn, R. Tojeiro, et al., SPECULATOR: Emulating Stellar Population Synthesis for Fast and Accurate Galaxy Spectra and Photometry, The Astrophysical Journal Supplement 249, 5 (2020), . [125] D. P. Kingma and J. Ba, Adam: A Method for Stochastic Optimization, arXiv e-prints , (2014), . [126] A. Schneider, R. Teyssier, D. Potter, J. Stadel, J. Onions, D. S. Reed, R. E. Smith, V. Springel, F. R. Pearce, and R. Scoccimarro, Matter power spectrum and the challenge of percent accuracy, Journal of Cosmology and Astroparticle Physics 2016 (04), 047-047. [127] X. Fang, T. Eifler, and E. Krause, 2D-FFTLog: efficient computation of real-space covariance matrices for galaxy clustering and weak lensing, Mon. Notices Royal Astron. Soc. 497, 2699 (2020), . [128] S. S. Boruah, T. Eifler, V. Miranda, and P. M. S. Krishanth, Accelerating cosmological inference with Gaussian processes and neural networks - an application to LSST Y1 weak lensing and galaxy clustering, Mon. Notices Royal Astron. Soc. 518, 4818 (2023), . [129] B. Joachimi, M. Cacciato, T. D. Kitching, A. Leonard, R. Mandelbaum, et al., Galaxy Alignments: An Overview, Space Science Reviews 193, 1 (2015), . [130] M. A. Troxel and M. Ishak, The intrinsic alignment of galaxies and its impact on weak gravitational lensing in an era of precision cosmology, Physics Reports 558, 1 (2015), . [131] E. Krause and T. Eifler, cosmolike - cosmological likelihood analyses for photometric galaxy surveys, Mon. Not. Roy. As16 tron. Soc. 470, 2100 (2017), . [132] A. Lewis, Efficient sampling of fast and slow cosmological parameters, Phys. Rev. D 87, 103529 (2013), . [133] J. Torrado and A. Lewis, Cobaya: Code for Bayesian Analysis of hierarchical physical models, JCAP 05, 057, . [134] A. Lewis, A. Challinor, and A. Lasenby, Efficient computation of CMB anisotropies in closed FRW models, Astrophys. J. 538, 473 (2000), arXiv:astro-ph/9911177 [astro-ph]. [135] C. Howlett, A. Lewis, A. Hall, and A. Challinor, CMB power spectrum parameter degeneracies in the era of precision cosmology, Journal of Cosmology and Astroparticle Physics 1204, 027, . [136] A. Gelman and D. B. Rubin, Inference from Iterative Simulation Using Multiple Sequences, Statistical Science 7, 457 (1992). [137] E. Krause, X. Fang, S. Pandey, L. F. Secco, O. Alves, et al., Dark Energy Survey Year 3 Results: Multi-Probe Modeling Strategy and Validation, arXiv e-prints , (2021), . [138] A. Albrecht, G. Bernstein, R. Cahn, W. L. Freedman, J. Hewitt, W. Hu, J. Huth, M. Kamionkowski, E. W. Kolb, L. Knox, J. C. Mather, S. Staggs, and N. B. Suntzeff, Report of the dark energy task force (2006), arXiv:astro-ph/0609591 [astro-ph]. [139] D. Scolnic, D. Brout, A. Carr, A. G. Riess, T. M. Davis, A. Dwomoh, D. O. Jones, N. Ali, P. Charvu, R. Chen, E. R. Peterson, B. Popovic, B. M. Rose, C. M. Wood, P. J. Brown, K. Chambers, D. A. Coulter, K. G. Dettman, G. Dimitriadis, A. V. Filippenko, R. J. Foley, S. W. Jha, C. D. Kilpatrick, R. P. Kirshner, Y.-C. Pan, A. Rest, C. Rojas-Bravo, M. R. Siebert, B. E. Stahl, and W. Zheng, The Pantheon+ Analysis: The Full Data Set and Light-curve Release, Astrophys. J. 938, 113 (2022), . [140] I. M. Sobol, On the distribution of points in a cube and the approximate evaluation of integrals, U.S.S.R. Comput. Math. Math. Phys. 7, 784 (1967). [141] L. Casarini, A. V. Macciò, and S. A. Bonometto, Dynamical dark energy simulations: high accuracy power spectra at high redshift, Journal of Cosmology and Astroparticle Physics 2009, 014 (2009), . [142] S. Brieden, F. Beutler, and M. Pellejero-Ibañez, Web-Halo Model (WHM): Accurate non-linear matter power spectrum predictions without free parameters, arXiv e-prints , (2025), . Appendix A: Simulations and Emulator Accuracy for Higher Redshifts In this Appendix, we show relative errors from the simulations and the emulator for higher redshifts, ensuring that our discussion in Section II still holds. The results are presented in Figure 7. We find that for z≤3, the z= 0 results still hold in that at k= 1 h/Mpc, 90% of cosmologies have emulation errors within 2% on ̃B(k, z), and that 50% of cosmologies are contained well within 1%. COLA emulators trained at different redshifts demonstrate no notable change in the fidelity of predictions across the range 0 ≤z≤3 (right panels), even though the highest 10% of raw emulation errors (left panels) and COLA boost errors (middle panels) grow with increasing z. 17 k (h/Mpc) 5 0 5 z=0.24 50% k (h/Mpc) 5 0 5 90% k (h/Mpc) 5 0 5 100% k (h/Mpc) 5 0 5 1000×(B COLA NN /B COLA 1) z=0.50 k (h/Mpc) 5 0 5 100×(BCOLA/B EE2 1) k (h/Mpc) 5 0 5 100×(B EE2 CDM/B EE2 1) k (h/Mpc) 5 0 5 z=0.74 k (h/Mpc) 5 0 5 k (h/Mpc) 5 0 5 10 1 100 k (h/Mpc) 5 0 5 z=1.00 10 1 100 k (h/Mpc) 5 0 5 10 1 100 k (h/Mpc) 5 0 5 FIG. 7. Left panel: Relative errors between the COLA boosts BCOLA predicted by the emulator versus those obtained from the test set simulations. Middle panel: Relative errors between ̃BCOLA (see Equation 8) versus the boosts from EuclidEmulator2. Right panel: Relative errors between BEE2 ΛCDM and EE2. Each row denotes a different redshift. Colors in all panels denote the percentile of cosmologies around the mean: blue contours enclose 50% of cosmologies, red contours enclose 90% of cosmologies, and the outer gray lines enclose 100% of cosmologies.
2510.14885
You May Speak Freely: Improving the Fine-Grained Visual Recognition Capabilities of Multimodal Large Language Models with Answer Extraction Logan Lawrence1 Oindrila Saha1 Megan Wei2 Chen Sun2 Subhransu Maji1 Grant Van Horn1 University of Massachusetts, Amherst1 Brown University2 {lclawrence, osaha, smaji, gvanhorn}@umass.edu1 {meganwei, chen sun4}@brown.edu2 Abstract Despite the renewed interest in zero-shot visual classifi- cation due to the rise of Multimodal Large Language Mod- els (MLLMs), the problem of evaluating free-form responses of auto-regressive models remains a persistent challenge. Most existing works focus on language-only tasks or don’t consider Multiple Choice Questions (MCQs) beyond 5-way options, both of which are critical capabilities to solve tasks in Fine-Grained Visual Classification (FGVC) where choice counts are in the hundreds to thousands and the choices are highly related. Furthermore, in this highly multi-way MCQ setting it is not clear how to extend LLM choice extraction to retrieval-based problems, where computing probabilities over the choice set is computationally costly. In this work we investigate nlg2choice, a simple two-stage method which first asks the MLLM an open-ended question for the task with minimal constraints, then uses text-only constrained decoding to predict the most likely choice. In retrieval set- tings, we compute the probability of the constrained re- sponse taking that choice with an early stopping method to significantly improve throughput. Our results show im- provement over a suite of seven fine-grained visual datasets when evaluating in terms of classification and retrieval, and show that this performance holds over the various ways that users of LLMs can implement tasks in natural language. 1. Introduction Zero-shot visual classification has been revitalized by the introduction of Multimodal Large Language Models (MLLMs) [1, 9, 20, 21], where one can formulate clas- sification datasets as Visual Question Answering (VQA) tasks with Multiple Choice Questions (MCQs) and a com- mon prompt. However, fine-grained visual classification (FGVC) poses a barrier to traditional MCQ problems due to two issues: (1) a vastly higher number of choices (of- ten hundreds to thousands) and (2) the niche concepts that each choice represents. Recent evaluations have found that state-of-the-art instruction-tuned MLLMs (e.g. LLaVA 1.5 free-form choice This bird is an Ivory Gull. Crested Auklet nlg2choice Ivory Gull “What is the species of this bird?” “What is the most likely species?” 15x ~ Figure 1. Proposed evaluation setup of nlg2choice. Whereas previous approaches for MLLM classification use methods like probability sequences or constrained decoding, nlg2choice first prompts the MLLM for a natural language response, then performs text-only constrained decoding on the text to extract classes. Accu- racy is computed as an average over multiple rewritings of prompts as a robustness measure. Correct MLLM responses are shown in green whereas incorrect responses are red . [20, 21], InstructBLIP [9], GPT-4V [1]) suffer dramatic drops in accuracy when asked to name fine-grained cat- egories. For instance, LLaVA-1.5 (13B) achieves near- perfect accuracy on coarse categories (like “bird” vs “not bird”), but only 1–2% accuracy on fine-grained species la- bels in iNaturalist [15]. Using LLMs to extract choices from pre-existing text ap- pears to be a potential solution to this problem. Constrain- ing LLM’s output format to ease extraction can simplify scoring but might hinder the model’s reasoning [35, 41, 43], arXiv:2510.14885v1 [cs.CV] 16 Oct 2025 “What is the species of bird in this image? Answer from: { full_species_list }” “What is the species of bird in this image?” “What is the species of bird in this image? Answer with only the species name.” Accuracy 𝜇 = 41.05 σ = 0.74 “It’s an Ivory Gull.” “It’s a Herring Gull.” “What species of bird is this?” 𝜇 = 47.19 σ = 2.37 𝜇 = 63.36 σ = 1.26 “What species of gull is this?” “bird” → “gull” Figure 2. Seemingly innocuous instruction changes greatly affect MLLM performance. When testing on the gull subset of CUB200[40], replacing one word can change the predictions of a model (left). When rewriting an instruction in a semantically equiv- alent way, performance varies substantially for Qwen-2.5VL (right). Each distribution comprises 15 semantically equivalent ways of writing the same instruction. These cases highlight the need to test LLMs over various equivalent inputs to ensure robustness. whereas allowing free-form explanations improves reason- ing at the cost of harder extraction [27, 35]. Many ap- proaches prompt a strong LLM [8, 18] such as GPT4 [2] to perform second-stage answer extraction. However, the works that propose LLMs as choice extractor are either fo- cused only on language-only tasks [24, 44] or do not extend beyond simple lettering or numbering schemes [23]. Fur- thermore, no significant treatment of retrieval nor the ro- bustness of these results under input prompt variation has been considered. In this work, we conduct a systematic analysis into con- strained decoding with LLMs as answer extractors on free- form responses to fine-grained visual recognition tasks. We come to the following conclusions: 1. Answer extraction on free-form responses improves visual recognition capabilities. We find that perfor- mance improves substantially across 7 FGVC benchmarks when viewed from both classification (Tab. 2) and retrieval (Tab. 6) perspectives. It additionally improves misclassifi- cations, with most responses being within the same genus (Tab. 3). To overcome the computational costs associated with generating probabilities over hundreds to thousands of choices, we propose an early stopping method (Sec. 3.3) which increases throughput substantially (Tab. 1). 2. Answer extraction is robust to user variation. We test robustness by generating additional semantically equivalent ways of writing an instruction (Sec. 3.1) and see that per- formance improvements are statistically significant when viewing user writing as a source of randomness (Fig. 3). We also show that instructions which constrict the response decrease performance (Tab. 4) and that methods which rely on forcing the LLM to generate some initial tokens to stimu- late Chain-of-Thought (CoT) reasoning do not consistently increase performance (Tab. 5). 3. Constrained decoding is a reliable answer extractor without additional training. We gauge the answer extrac- tion abilities of LLMs out-of-the-box by labeling spans and assigning answers to free-form NLG responses (Sec. A). We find that the main bottleneck is not answer extrac- tion (Tab. 7) as studied in previous works, but a lack of extractable content from free-form responses themselves (Fig. 4). We make this data publicly available1. 2. Related Work Forcing MLLMs to Generate Valid Choices The earli- est works first prompted an LLM directly for a response then did regex parsing to extract choices [42], but it was shown that direct open-ended querying often underperforms on fine-grained tasks [15]. While generating the full prob- ability the text of each choice sequence is as an option, it remains difficult to implement in practice due to its com- putational cost [31]. The most dominant choice proba- bility generation rank the log probabilities assigned to the option IDs (e.g. ”A/B/C/D”) from the first token predic- tion of the model. This approach is widely adopted in many different benchmarks and LLM evaluation studies [14, 19, 33, 34, 45]. Parallel to this, constrained decoding techniques seek to guarantee consistent outputs by guiding the generation pro- cess during sampling by constricting the output vocabulary [5, 22]. There are many open-source tools that implement this approach [10, 11, 13, 46] and while these methods en- sure model responses lie within the choice set, recent evalu- ations have shown that they decrease performance [35, 41]. Our work sits in opposition to these recent results, where we posit that constrained decoding is advantageous in the fine- grained setting where previous approaches like first token prediction fail. MLLMs as Answer Extractors Recent works have shown a mismatch difference between text-based outputs 1https://github.com/Racerx250/nlg2choice and token-probability metrics of LLMs on language-only tasks, revealing that a model’s first-token likelihood often fails to correspond with its generated answer [24, 43, 44]. Evaluating a model by reading its full text answer instead of only the first-token or forced letter has been shown to improve robustness, reinforcing the value of answer extrac- tion methods [43]. Many works use large proprietary mod- els like GPT-4 [2] to extract answers from free-form text [8, 18, 23]. Some recent methods have addressed the problem of an- swer extraction directly. xFinder [47] propose a lightweight model fine-tuned specifically to map an LLM’s free-form answer to the correct choice, reporting that their method outperforms a GPT-4 based extractor struggled in terms of extraction accuracy. Similarly, SLOT [41] proposes a model-agnostic, lightweight “Structured LLM Output Transformer” that is fine-tuned to post-process any LLM’s free-form text into JSON that exactly matches an arbi- trary schema, eliminating the need to modify or constrain the generating model itself. xVerify [7] is an efficient answer verifier for reasoning model evaluations that ex- tracts final answers from complex, multi-step LLM outputs and performs robust equivalence checking—trained on the VAR dataset of multi-model, multi-round–annotated ques- tion–answer pairs and achieving over 95% accuracy and F1, demonstrating strong generalizability. Contrary to these approaches the answer extraction method proposed in this work requires no additional training. We argue that an- swer extraction is not a critical bottleneck when using con- strained decoding for fine-grained visual classification and show that LLMs perform decent answer extraction out-of- the-box on popular benchmarks. 3. Methodology 3.1. Simulating User Variation Given the brittleness of LLMs to prompt variations (Fig. 2), we wish to test the robustness of LLMs as choice extractors by generating a range of semantically equivalent writings of prompts for a given task. We do this by starting with a small set of hand-written templates for a classification task. Letting [choice list] be the list of classes for a given dataset separated by newlines, we define [templatei] to be equal to the following: (Template 1) ”What is the species of bird in this image?” (Template 2) ”What is the species of bird in this image? An- swer only with species name and no other text.” (Template 3) ”What is the species of bird in this image? Answer only from the following list with no other text: [choice list]” Where the words ”species” and ”bird” are updated to suite the benchmark dataset. Given [templatei], we use o3-high [29] to generate 14 additional prompt variations: (Generate Variations) What are 14 semantically equivalent ways of writing the following question? Prompt: [templatei] Next, we perform a manual check on this set of prompts for semantic equivalence to the input prompt, resulting in a total of 45 variations for each dataset. For the remainder of the paper, accuracies reported are across these variations. For the full list of these prompts, see Sec. C. 3.2. nlg2choice for Classification Our approach to classification involves two stages: (1) prompting the LLM for a free-form response to a question defined by Sec. 3.1 and (2) prompting the same model again to select the most likely species given the previous response. On the second step, we enforce a valid output by using con- strained decoding across the choice set. Additionally, the second stage is text-only, namely it does not see the image which produced the original response. It receives the fol- lowing prompt: (Text-Only Choice) What is the most likely species of bird in- dicated in this response? Response: [nlg] Answer from the following: [choice list] Where [nlg] is the free-form response to the dataset in- struction and [choice list] is newline-separated list of class names. Similarly, ”species” and ”bird” are updated ac- cordingly for the given dataset. 3.3. nlg2choice for Retrieval In thresholding-based scenarios one desires a continuous score for every query. Most commonly this is done by tak- ing the probability of "Yes" and "No" for each image, class pair xi ∈X, yi ∈Y, ie.: ˆyi = softmax( p("Yes"|xi, [prompt]) p("No"|xi, [prompt])  Where p(·) is the next-token probability distribution of a MLLM and [prompt] describes a two-way predic- tion task on one class of the dataset, eg. "Is this a [cname]?" and [cname] is the name of some class. However, this requires |X|·|Y| forward passes of the model. To improve throughput, we propose to update the second step of nlg2choice to stop generating proba- bilities when the remaining tokens for a given choice do not occur in any other choices. For example, take the case of "Baltimore Oriole" which can Method Full Prob Yes/No Truncated Prob CUB200 687 200 47 (+0%) (+244%) (+1362%) Flowers 257 102 12 (+0%) (+152%) (+2042%) Aircrafts 479 100 79 (+0%) (+379%) (+506%) Cars 1668 196 125 (+0%) (+751%) (+1234%) Food 269 101 24 (+0%) (+166%) (+1021%) NABirds 3294 555 701 (+0%) (+494%) (+370%) iNat-Birds 5450 1486 480 (+0%) (+267%) (+1035%) Table 1. Number of forward passes needed for different meth- ods in the second-stage answer extraction of nlg2choice. ”Full Prob” refers to the probability of a full class name p([cname]) under the LLM, where [cname] is a common name of a class. “Yes/No” refers to the common |X| · |Y| method of calculating probability over all the classes yi ∈Y. “Truncated Prob” refers to the process outlined in Sec. 3.3. The percentages in the “Yes/No” and ”Truncated Prob” columns refer to the throughput increase over caculating the full sequence probabilities ”Full Prob.” Every setting for truncation is faster than “Yes/No” except for one bench- mark - NABirds [38]. The fastest method for each benchmark is bolded. be decomposed into ["B", "altimore", " Ori", "ole"]. We can see that in the CUB200 species list there are many names which share the first token "B", ie. "Bewick Wren" or "Belted Kingfisher". How- ever, "Baltimore Oriole" is the only choice name which has "altimore" as its second token. Thus, the probabilities for [" Ori", "ole"] are not calculated. In terms of probabilities, this is equivalent to: pfull("Baltimore Oriole") = p("B") (1) × p("altimore"|"B") × p(" Ori"|"Baltimore") × p("ole"|"Baltimore Ori") ptrunc("Baltimore Oriole") = p("B") (2) × p("altimore"|"B") ×(((((((((((( p(" Ori"|"Baltimore") × (((((((((((((( p("ole"|"Baltimore Ori") On many datasets, this change produces substantial throughput increase as many choice names can easily be de- termined by their first or second tokens. The special prop- erty of truncated probability sequences is that they have the same probabilities as sampling with constrained genera- tion. This throughput increase is depicted in Tab. 1. 4. Experiments Datasets We conduct experiments over a benchmark of 7 FGVC datasets: CUB [40] (200 classes), Flowers 102 [28] (102 classes), Stanford Cars [17] (196 classes), FGVC Aircrafts [25] (100 classes) and Food101 [6] (101 classes), NABirds [38] (555 classes), and iNaturalist-Birds [39] (1486 classes). We choose these benchmarks because they are well-known for fine-grained visual classification and are traditionally difficult for zero-shot visual models to solve. iNaturalist-Birds is the validation split of the iNat2021 dataset where the the ”class” field of each row is equal to ”Aves,” resulting in a test size of 14960. For each dataset we use the full test set and the training split remains unused. Evaluation Metrics For the classification setting we use accuracy averaged over 15 semantically equivalent prompts, whereas for the retrieval setting we use Mean Average Pre- cision (mAP). Choice lists are the names of classes them- selves rather than letters or numbers. As compared to previ- ous works [36], we perform the full-way class prediction rather than subsetting the choices to a 4-way prediction. However, we provide an experiment evaluating this setting in Sec. B. Implementation Details Constrained decoding is imple- mented using the Outlines2 library [46] and classes for each setting are detailed in Sec. C. We start by generating 15 prompts described in Sec. 3.1, then replacing the key fea- tures like domain (”birds,” ”flowers,” etc.) and class types (”species,” ”variant,” etc.) for each dataset. This allows us to compare directly between questions for a given model or dataset, and we also test the performance of appending class lists and common CoT methods. For our experiments, we test only open-source models. Specifically, we test the Qwen-2.5VL [4], Llama-3.2 [12], and InternVL3 [49] ar- chitectures. For hyperparameters, we use the recommended generation parameters as defined by the HuggingFace card and take max new tokens to be 512. Experiments were ran in a heterogeneous environment of single-GPU nodes. All nodes required a GPU with at least 23GB VRAM, 16 CPU cores, and 64GB RAM. 5. Results 5.1. Zero-Shot Classification Performance Answer extraction is better than directly answering. In Tab. 2 we report the performance of nlg2choice across various architectures and datasets. All architectures en- joyed a bump in performance across individual datasets and on average (+17.46, +8.49, +6.87 across Qwen-2.5VL, Llama-3.2V, and Intern3), but model-specific differences can be clearly seen. Likewise, we find that removing ex- 2https://github.com/dottxt-ai/outlines Method Birds Flowers Aircrafts Cars Foods NABirds iNat-Birds Average Qwen-2.5VL-7B choice 40.08 51.29 44.97 26.80 57.93 37.77 17.29 39.45 nlg2choice 47.72 63.22 59.96 37.52 68.99 44.88 21.47 49.11 (+7.64) (+11.93) (+14.99) (+10.72) (+11.06) (+7.11) (+4.18) (+9.66) nlg2choiceopen 63.36 78.03 61.54 47.13 72.26 50.19 25.86 56.91 (+23.28) (+26.74) (+16.57) (+20.33) (+14.33) (+12.42) (+8.57) (+17.46) Llama-3.2-Vision-11B choice 42.27 45.68 31.38 24.68 64.70 36.69 13.82 37.03 nlg2choice 46.96 51.16 30.75 32.72 64.52 41.34 15.81 40.47 (+4.69) (+5.48) (-0.63) (+8.04) (-0.18) (+4.65) (+1.99) (+3.43) nlg2choiceopen 49.59 62.24 37.12 36.78 72.72 46.37 13.85 45.52 (+7.32) (+16.56) (+5.74) (+12.10) (+8.02) (+9.68) (+0.03) (+8.49) Intern3VL-8B choice 11.27 24.68 15.84 14.78 44.12 6.90 2.28 17.12 nlg2choice 13.81 32.52 18.63 18.34 43.36 7.06 1.95 19.38 (+2.54) (+7.84) (+2.79) (+3.56) (-0.76) (+0.16) (-0.33) (+2.26) nlg2choiceopen 18.47 37.50 21.69 22.15 51.48 12.39 4.29 24.00 (+7.20) (+12.82) (+5.85) (+7.37) (+7.36) (+5.49) (+2.01) (+6.87) Table 2. Performance across fine-grained visual classification (FGVC) benchmarks and architectures. The “Birds” dataset refers to CUB200. The first row of each model refers to using constrained decoding directly on the model across the class list of each dataset using the prompt in the header. “choice” is performing constrained decoding on the prompt "What is the {type} of {domain} in this image? Answer with {type} only." “nlg2choice” refers to letting the model freely respond to the prompt, then using constrained decoding in a text-only approach. “nlg2choiceopen” refers to removing the appended response-style instruction, eg. "Answer with {type} only." All models are their instructed-tuned version. Best performance is bolded for each model. Method Genus Matches % of Data Qwen-2.5VL-7B choice 45.99 59.92 nlg2choice 56.57 (+10.56) 52.28 nlg2choiceopen 62.74 (+16.75) 36.64 Llama-3.2-Vision-11B choice 42.52 57.73 nlg2choice 61.71 (+19.19) 53.04 nlg2choiceopen 66.37 (+23.85) 50.94 Intern3VL-8B choice 36.77 88.73 nlg2choice 48.79 (+12.02) 86.19 nlg2choiceopen 60.36 (+23.59) 81.53 Table 3. Accuracy of genus-level predictions on misclassified examples. ”Genus Matches” refers to the accuracy of model with respect to the translating each species to its corresponding genus. ”% of Data” refers to the amount of data misclassified by the model and is equivalent to 1 −accuracy in Tab. 2. The species to genus mapping manually created. plicit answer formatting from the model (changing from “nlg2choice” and “nlg2choiceopen”). We find the Aves sub- set of iNaturalist [39] to be the hardest among all the bench- marks, resulting in the smallest increase in performance across all models. For model-specific differences, we find that Qwen- 2.5VL performs the best on average using nlg2choice, hav- ing an average accuracy of 56.91 across the 7 datasets. We also find that Qwen-2.5VL experiences the greatest gain, having an average accuracy difference of +17.46 from the base constrained decoding method to nlg2choiceopen. Comparatively, we see that Intern3-VL performs the worst across architectures, having the overall smallest increase when using nlg2choiceopen (+6.87) and the lowest perfor- mance (24.00). However, we do still see improvements over all benchmarks. Misclassifications are more aligned with ground truth. A natural question is to ask how the classifications qualita- tively change under nlg2choice. For an example, take Fig. 1 where the ground truth label is "Ivory Gull": under constrained decoding the model responds with "Crested Auklet" whereas a higher quality misclassificaiton would be "Herring Gull", "California Gull", ..., ie. a class within the same genus as Ivory Gull. In Tab. 3 we qualitatively gauge how misclassifications change when us- ing nlg2choice by measuring its genus-level performance. We construct a species to genus mapping on the CUB200 dataset manually, then map the species prediction in previ- ous results to genus. Simply put, we find that nlg2choice makes more reasonable errors, switching to nlg2choice in- curs a genus-level performance improvement of +16.75, +23.85, +23.59 for Qwen-2.5VL, Llama-3.2V-11B, and Intern3VL-8B. Δ Accuracy -0.05 0.00 0.05 Intern3VL-8B Llama-3.2V-11B Qwen-2.5VL-8B 0.10 0.15 0.20 0.25 0.30 𝜇 = 7.38 σ = 1.69 𝜇 = 7.72 σ = 2.95 𝜇 = 22.06 σ = 1.92 Figure 3. Difference in accuracies on the question-level be- tween nlg2choice and constrained decoding. Each distribu- tion is made up of the performance difference when switching to nlg2choice over 15 questions. nlg2choice uses an open-ended prompt whereas constrained decoding uses an instruction to in- crease brevity ("... Answer with species only."). Improvements are statistically significant. Rather than ask about the average performance of different methods under user variation, we wish to quantify how much per- formance is gained for a specific question by switching to nlg2choice. We calculate this by taking the differ- ence between the performance of constrained decoding and nlg2choiceopen for each question, then compute summary statistics over the set of questions. This forms a distribution of changes in accuracy, with significant results having a pos- itive mean and 95% confidence interval. The performance difference for the various settings in outlined in Fig. 3. We find that for the proposed settings, switch- ing from constrained decoding with an instruction to increase brevity ("... Answer with species only.") to nlg2choiceopen comes with an average µ = 22.06, 7.38, 7.72 percentage increase in accuracy for Qwen-2.5VL, Llama-3.2V-11B, and Intern3VL-8B, re- spectively. Additionally, we find that these increases are within the bounds of statistical significance (σ = 1.92, 1.69, 2.95). Constrictive instructions reduce performance. A com- mon means of steering models to produce valid choices is through appending explicit instructions onto the instruc- tion which enumerate the available choices, typically by means of lettering or enumeration. We wish to quantify the effect of adding this explicit instruction to the model. In Tab. 4 we show the performance of constrained decod- Method Birds Flowers "What is the {type} of {domain} in this image?" Qwen-2.5VL 0.92 2.53 + nlg2choice 63.36 (+62.44) 78.03 (+75.5) Llama-3.2-Vision 7.67 27.50 + nlg2choice 49.59 (+41.92) 62.24 (+34.74) Intern3-VL 2.28 5.01 + nlg2choice 18.47 (+16.19) 37.5 (+32.49) "... Answer with {type} only." Qwen-2.5VL 40.08 51.29 + nlg2choice 47.72 (+7.64) 63.22 (+11.93) Llama-3.2-Vision 42.27 45.68 + nlg2choice 46.96 (+4.69) 51.16 (+5.48) Intern3-VL 11.27 24.68 + nlg2choice 13.81 (+2.54) 32.52 (+7.84) "... Answer from {choice list}." Qwen-2.5VL 38.71 69.19 + nlg2choice 41.05 (+2.34) 72.48 (+3.29) Llama-3.2-Vision 13.50 44.83 + nlg2choice 46.68 (+33.18) 66.03 (+21.2) Intern3-VL 22.34 35.24 + nlg2choice 23.48 (+1.14) 35.73 (+0.49) Table 4. Classification performance across instruction tem- plates with varying levels of constrictiveness. The model name refers to default constrained decoding. The difference in perfor- mance is shown in parentheses. The best performing model and method for each input instruction is bolded. “nlg2choiceopen” corresponds to the top section of the table. ing and nlg2choice under three variations of instruction steering: (1) no explicit choice steering ("What is the {type} of {domain} in this image?"), (2) en- couraging succinct prediction ("... Answer with {type} only"), and (3) explicitly enumerating choices ("... Answer from {choice list}"). We see that for every template, switching from con- strained decoding to nlg2choice improves performance. The most stark setting is the most open-ended prompt, where we see an average increase of +68.97, +38.33, and +24.34 for Qwen-2.5VL, Llama-3.2V, and Intern3-VL, re- spectively. Surprisingly, explicit choice instruction shows the weakest improvement for Qwen-2.5VL and Intern3-VL (+2.82 and +0.81) but strong improvement for Llama-3.2V (+27.19). Otherwise, keeping in line with previous results, we find that Qwen-2.5VL is the greatest benefactor of per- formance. CoT doesn’t consistently improve performance. Next, we ask how much performance can be improved by us- ing CoT-style instructions. We investigate methods which induce reasoning by forcing the model to generate some initial tokens that are common to thought-out reasoning (eg. "Let’s think step by step"), then letting the model freely respond. In Tab. 5 we choose three of Method Birds Flowers "What is the {type} of {domain} in this image?" Qwen-2.5VL 63.36 78.03 Llama-3.2-Vision 49.59 62.24 Intern3-VL 18.47 37.50 ... "Let’s think step by step." [16] Qwen-2.5VL 54.93 (-8.43) 70.16 (-7.87) Llama-3.2-Vision 39.87 (-9.72) 39.22 (-23.02) Intern3-VL 19.11 (+0.64) 30.82 (-6.68) ... "First," [3] Qwen-2.5VL 53.51 (-9.85) 72.80 (-5.23) Llama-3.2-Vision 41.84 (-7.75) 45.31 (-16.93) Intern3-VL 19.65 (+1.18) 33.46 (-4.04) ... "Let’s solve this problem by splitting it into steps." [30] Qwen-2.5VL 54.69 (-8.67) 59.56 (-18.47) Llama-3.2-Vision 40.11 (-9.48) 38.08 (-24.16) Intern3-VL 19.68 (+1.21) 31.06 (-6.44) Table 5. Classification performance when using assistant out- put prepending CoT methods. All results are reported using nlg2choice as an answer extraction method. The proposed set- ting is the first section in gray. Forced assistant prepending text is shown in blue . these methods and report their performance and difference between the default nlg2choiceopen setting. In general, we see that these methods do not consistently improve performance, resulting in an average decrease of -9.75, -15.18, and -2.36 across Qwen-2.5VL, Llama-3.2V, and Intern3-VL, respectively. The one exception to this rule is the CUB200 dataset with Intern3-VL, where we see an average increase of +1.01. 5.2. Zero-Shot Retrieval Performance Answer extraction is better than directly answering. In Tab. 6, we report the performance of various models when viewing FGVC datasets as a one-vs-rest retrieval task. Like the classification setting, we find that when compared to exhaustive “Yes/No” questioning, nlg2choice generally im- proves with the exception of the Stanford Cars [17], in which performance across all models decreases. We again find Qwen-2.5VL to receive the greatest improvement, with +8.16 average mAP increase across the 5 datasets. Like- wise, we find Intern3-VL receives the least improvement, with an average increase of +4.43. Unlike classification, we do not find that changing the prompt to be open-ended (nlg2choiceopen, ie. removing "Answer from {species list}" from the end of the input prompt) does not uniformly increase performance. However, we do still find that open-ended prompts gener- ally perform better on average with nlg2choice (+8.16 vs. +6.96) and are less variable across all models. Method Birds Flowers Aircrafts Cars Foods Baseline Random 0.64 1.24 1.23 0.62 1.02 Qwen-2.5VL-7B Yes/No 51.01 61.19 51.07 77.58 82.67 nlg2choice 57.79 82.56 65.14 68.61 84.30 (+6.78) (+21.37) (+14.07) (-8.97) (+1.63) nlg2choiceopen 59.01 86.42 64.03 69.82 85.05 (+8.00) (+25.23) (+12.96) (-7.76) (+2.38) Llama-3.2-Vision-11B Yes/No 41.24 63.59 34.24 66.03 78.81 nlg2choice 51.14 68.64 39.39 56.46 86.73 (+9.90) (+5.05) (+5.15) (-9.57) (+7.92) nlg2choiceopen 52.61 74.55 40.00 59.35 88.84 (+11.37) (+10.96) (+5.76) (-6.68) (+10.03) Intern3VL-8B Yes/No 15.22 38.32 17.93 31.83 61.46 nlg2choice 26.97 45.13 20.47 25.81 66.55 (+11.75) (+6.81) (+2.54) (-6.02) (+5.09) nlg2choiceopen 30.51 41.12 20.69 26.84 67.76 (+15.29) (+2.80) (+2.76) (-4.99) (+6.30) Table 6. Performance across fine-grained visual classifica- tion (FGVC) benchmarks and architectures when interpreted as retrieval tasks. Numbers represent Mean Average Precision (mAP). Every query is a one-vs-rest task, eg. for CUB200 (Birds) the prior for a single PR curve is 1/200. 5.3. Answer Extraction Performance In Sec. A we detail the process of labeling data for species extraction from free-form responses. First, labelers are shown responses from various models to a dataset instruc- tion and image, then are instructed to highlight the first in- cidence of the predicted species within the generated text. Then, they are instructed to confirm whether the species highlighted matches the choice selection by the model, and in the failure case to indicate the species predicted or whether there was a schema failure. This process results in a set of common outcomes, ie. “schema failure,” “no species predicted,” “answer={species}”. In Fig. 4 we show the makeup of the labeled data in terms of these various out- comes. Models often respond out of schema. Depicted in Fig. 4, we see that 34.64% of free-form responses by models have an answer within their text that does not occur within the dataset schema. In other words, the worst-case scenario is that only 65.35% of examples are correctly classifiable. However, this turns out to not be the case due to the fact that a large percentage of these responses have real species within them (70.75% of failures) which can be resolved to the correct answer, but that the predicted species itself can- not be easily put within the given schema. We hypothe- Figure 4. Breakdown of the small labeled set of natural lan- guage responses. ”Other” contains two main categories, ”Refused to answer” and ”No discernable information.” Examples of these are given in Sec. A. size that many of these responses are grouped into the cor- rect answer due to either genus-level answers in the dataset, eg. CUB200’s labels include the genus "Mockingbird" which matches to "Northern Mockingbird" within the free-form text, or being a closely-related but technically different species, eg. "Glaucous-winged Gull" in schema versus "Glaucous Gull" in free-form text. Motivation of different methods. In Tab. 7 we show the performance of main architectures in terms of extrac- tion performance over various methods when sub-setting the data to examples where an answer within the schema is clearly apparent. We include two additional variations: (1) nlg2nlg, the free-form response given the instruction to rephrase the MLLM answer and (2) nlg2nlg2choice, the constrained decoding off one extra round of rephrasing. We include nlg2nlg due to the fact that we wish to test the direct distance to the highlighted text created during labeling and include nlg2nlg2choice to gauge the distance between the interpretation of that text into an answer. LLMs are decent answer extractors out-of-the-box. In the default nlg2choiceopen setting, we find high an- swer extraction success rates for Qwen-2.5VL (97.93) and Intern3-VL (93.26), but lower results for Llama-3.2V (79.02). We find that the raw text extracted by the models (”nlg2nlgexact”) seldom fits neatly into the dataset schema, having large performance drops (-16.47, 50.77, -25.91). However, the output of nlg2nlgexact retains the relevant information from the original free-form text, as indicated by the nlg2nlg2choice scores which has much smaller per- Method Qwen2.5VL Llama3.2V Intern3VL Overall xFinder [47] Qwen-1.5-0.5B [37] 92.05 88.67 71.76 86.27 Llama-3-8B-Instruct [12] 80.13 80.67 75.29 79.27 Qwen-2.5VL 8B nlg2choice 98.01 98.67 96.47 97.93 nlg2nlgexact 80.79 84.00 78.82 81.61 (-17.22) (-14.67) (-17.65) (-16.32) nlg2nlg2choice 98.01 97.33 100.00 98.19 (+0.00) (-1.34) (+3.53) (+0.26) Llama-3.2-Vision 11B nlg2choice 77.48 82.00 76.47 79.02 nlg2nlgexact 31.79 26.00 25.88 28.24 (-45.69) (-56.00) (-50.59) (-50.78) nlg2nlg2choice 76.82 80.00 70.59 76.68 (-0.66) (-2.00) (-5.88) (-2.34) Intern3-VL 8B nlg2choice 94.04 94.00 90.59 93.26 nlg2nlgexact 59.60 78.67 62.35 67.62 (-34.44) (-15.33) (-28.24) (-25.64) nlg2nlg2choice 90.06 85.33 90.59 88.34 (-3.98) (-8.67) (+0.00) (-4.92) Table 7. Performance of various MLLMs across a small labeled set. Differences are measured from the “nlg2choice“ row of each model. formance differences (+0.61, -2.72, and -4.39). We hy- pothesize that the nlg2nlg text still contains artifacts such as "The species is ..." or scientific names, which lower exact match performance. 6. Limitations This work has three main limitations. First, our experiments were conducted primarily with medium-sized models (8B- 11B) and we only used open-source LLMs from a few dif- ferent families: Qwen, Llama, and Intern. Secondly, while nlg2choice avoids the need for specialized training data or architectural modifications and is more aligned with the true responses of the model, the method still requires significant computational resources to generate and process the tex- tual descriptions. Lastly, while nlg2choice shows promise in classification, it still remains to be seen whether the ap- proach scales to multi-label problems. 7. Conclusion In this work, we presented nlg2choice, a simple approach for zero-shot fine-grained visual classification and retrieval that extracts fine-grained answers from MLLM free-form responses. Through experiments across multiple architec- tures and datasets, we demonstrated that nlg2choice con- sistently outperforms constrained decoding and exhaustive questioning methods in both classification accuracy and re- trieval performance. We also validated the answer extrac- tion directly through labeling predicted species in unstruc- tured text, where we showed that models are capable of ex- traction their own answers without any labeled data. References [1] Gpt-4v(ision) system card. 2023. 1 [2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ah- mad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. 2, 3 [3] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Cheb- otar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. 7 [4] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 5-vl technical report. arXiv preprint arXiv:2502.13923, 2025. 4 [5] Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. Guiding llms the right way: Fast, non-invasive constrained generation. arXiv preprint arXiv:2403.06988, 2024. 2 [6] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101–mining discriminative components with random forests. In Computer vision–ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, pro- ceedings, part VI 13, pages 446–461. Springer, 2014. 4 [7] Ding Chen, Qingchen Yu, Pengyuan Wang, Wentao Zhang, Bo Tang, Feiyu Xiong, Xinchi Li, Minchuan Yang, and Zhiyu Li. xverify: Efficient answer verifier for reasoning model evaluations. arXiv preprint arXiv:2504.10481, 2025. 3 [8] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, march 2023. URL https://lmsys. org/blog/2023-03-30-vicuna, 3(5), 2023. 2, 3 [9] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general- purpose vision-language models with instruction tuning, 2023. 1 [10] Yixin Dong, Charlie F Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. Xgrammar: Flexible and efficient structured generation engine for large language models. arXiv preprint arXiv:2411.15100, 2024. 2 [11] Georgi Gerganov. llama.cpp, 2023. Software. 2 [12] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Ab- hinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. 4, 8 [13] GuidanceAI. Guidance, 2023. Software. 2 [14] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Mea- suring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. 2 [15] Jeonghwan Kim and Heng Ji. Finer: Investigating and en- hancing fine-grained visual concept recognition in large vi- sion language models. In Proceedings of the 2024 Confer- ence on Empirical Methods in Natural Language Processing, pages 6187–6207, Miami, Florida, USA, 2024. Association for Computational Linguistics. 1, 2 [16] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information pro- cessing systems, 35:22199–22213, 2022. 7 [17] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on com- puter vision workshops, pages 554–561, 2013. 4, 7 [18] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tat- sunori B Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models, 2023. 2, 3 [19] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evalu- ation of language models. arXiv preprint arXiv:2211.09110, 2022. 2 [20] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892–34916, 2023. 1 [21] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Im- proved reasoning, ocr, and world knowledge, 2024. 1 [22] Michael Xieyang Liu, Frederick Liu, Alexander J Fiannaca, Terry Koo, Lucas Dixon, Michael Terry, and Carrie J Cai. ” we need structured output”: Towards user-centered con- straints on large language model output. In Extended Ab- stracts of the CHI Conference on Human Factors in Com- puting Systems, pages 1–9, 2024. 2 [23] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vi- sion, pages 216–233. Springer, 2024. 2, 3 [24] Chenyang Lyu, Minghao Wu, and Alham Fikri Aji. Beyond probabilities: Unveiling the misalignment in evaluating large language models. arXiv preprint arXiv:2402.13887, 2024. 2, 3 [25] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classi- fication of aircraft. arXiv preprint arXiv:1306.5151, 2013. 4 [26] George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41, 1995. 11 [27] Francesco Maria Molfese, Luca Moroni, Luca Gioffr´e, Alessandro Scir`e, Simone Conia, and Roberto Navigli. Right answer, wrong score: Uncovering the inconsistencies of llm evaluation in multiple-choice question answering. arXiv preprint arXiv:2503.14996, 2025. 2 [28] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & im- age processing, pages 722–729. IEEE, 2008. 4 [29] OpenAI. Openai o3 and o4-mini system card. 2025. 3 [30] Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended abstracts of the 2021 CHI conference on human factors in computing systems, pages 1–7, 2021. 7 [31] Joshua Robinson, Christopher Michael Rytting, and David Wingate. Leveraging large language models for multiple choice question answering, 2023. 2 [32] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211–252, 2015. 11 [33] Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. Whose opinions do language models reflect? In International Conference on Machine Learning, pages 29971–30004. PMLR, 2023. 2 [34] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adri`a Garriga- Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. 2 [35] Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-yi Lee, and Yun-Nung Chen. Let me speak freely? a study on the impact of format restrictions on large language model performance. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: In- dustry Track, pages 1218–1236, 2024. 1, 2 [36] Yuwen Tan, Yuan Qing, and Boqing Gong. Vision llms are bad at hierarchical visual understanding, and llms are the bottleneck. arXiv preprint arXiv:2505.24840, 2025. 4, 11 [37] Qwen Team. Introducing qwen1.5, 2024. 8 [38] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber, Jessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Be- longie. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 595–604, 2015. 4 [39] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and de- tection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769–8778, 2018. 4, 5, 11 [40] Catherine Wah, Steve Branson, Peter Welinder, Pietro Per- ona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. 2, 4 [41] Darren Yow-Bang Wang, Zhengyuan Shen, Soumya Smruti Mishra, Zhichao Xu, Yifei Teng, and Haibo Ding. Slot: Structuring the output of large language models. arXiv preprint arXiv:2505.04016, 2025. 1, 2, 3 [42] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Ki- gali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. 2 [43] Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul R¨ottger, and Barbara Plank. Look at the text: Instruction-tuned language models are more robust multiple choice selectors than you think. arXiv preprint arXiv:2404.08382, 2024. 1, 3 [44] Xinpeng Wang, Bolei Ma, Chengzhi Hu, Leon Weber- Genzel, Paul R¨ottger, Frauke Kreuter, Dirk Hovy, and Bar- bara Plank. ”my answer is c”: First-token probabilities do not match text answers in instruction-tuned language mod- els. arXiv preprint arXiv:2402.14499, 2024. 2, 3 [45] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. Advances in Neural Information Processing Sys- tems, 37:95266–95290, 2024. 2 [46] Brandon T Willard and R´emi Louf. Efficient guided generation for large language models. arXiv preprint arXiv:2307.09702, 2023. 2, 4 [47] Qingchen Yu, Zifan Zheng, Shichao Song, Zhiyu Li, Feiyu Xiong, Bo Tang, and Ding Chen. xfinder: Robust and pin- point answer extraction for large language models. arXiv preprint arXiv:2405.11874, 2024. 3, 8 [48] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975–11986, 2023. 11 [49] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shen- glong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, et al. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. arXiv preprint arXiv:2504.10479, 2025. 4 A. Labeling Species Answer Extraction Data Motivation. For labeling answer extraction data, we pre- pare a user interface built with Label Studio which shown in Fig. 5. On the right contains information on the inputs to the model, specifically the question, image, and ground truth label. On the left is main interface, which consists a free-form response from a model to the example displayed on the right, as well as the choice predicted by the model via nlg2choice. Labeling consists of two pieces: (1) iden- tification and highlighting of text spans containing answers to the input question and (2) matching the highlighted text span to a class within the given schema. Text span labeling. For the text span labeling task, label- ers read through the text to identify the first incidence of a predicted species. Before highlighting the species found within the text, the prediction is confirmed to be a real species through a brief internet search. If there are any is- sues with highlighting the predicted species, a text descrip- tion of the issue is written in the input text box at the bottom and the example is skipped. Figure 5. Labeling interface for efficacy of the answer extraction stage of nlg2choice. Labelers are asked to highlight the name predicted in free-form responses from various models and whether the name occurs easily within the dataset choice list. Above shows an interesting example: for the Chipping Sparrow on the right, the model freely responded with ”Painted Bunting” and answer extraction made an additional mistake by predicting ”Scarlet Tanager.” Choice assignment. Once the span is labeled, we next wish to either assign the span to a class within the dataset or tag the span as a schema failure. For expediency, the la- belers are also equipped with the nlg2choice prediction and a tool for sorting most like class names with fuzzy match- ing score. When the nlg2choice predict matches the text span, the prediction is ”Consistent with nlg” and the label- ers check the button. Otherwise, ”Inconsistent with nlg” is checked and a natural language explanation is put in the bottom text box. When the nlg2choice is inconsistent, “an- swer={species}” is put within that text box. B. Zero-Shot 4-Way VQA Performance Answer extraction does not consistently increase per- formance. Following recent work [36], we evalute nlg2choice on popular fine-graiend classification datasets as a 4-way VQA task. iNat-Animal and iNat-Plant are the rows of iNat21 [39] when subset to rows which whose king- dom matches ”Animalia” and ”Plantae.” ImgNet-Animal and ImgNet-Artifact are the rows of ImageNet-1K [32] sub- set to rows which have an anceter of ”Animal” and ”Arti- fact” in the WordNet hierarchy [26]. The choices for each example are from the top 3 SigLIP [48] predictions and the correct class. For the nlg2choice prompt we use the same prompt as the previous work. We report the open-ended question performance in Tab. 8 where we find that it slightly Method CUB200 iNat-Animal iNat-Plant ImgNet-Animal ImgNet-Artifact Qwen-2.5VL-7B A/B/C/D 65.50 41.33 41.61 85.20 80.01 nlg2choiceopen 67.43 40.19 40.03 84.03 81.81 (+1.93) (-1.14) (-1.58) (-1.17) (+1.80) Llama-3.2-Vision-11B A/B/C/D 65.52 32.44 31.88 79.93 75.89 nlg2choiceopen 68.79 31.52 31.57 81.39 75.29 (+3.27) (-0.92) (-0.31) (+1.46) (-0.60) Intern3VL-8B A/B/C/D 50.52 35.40 36.39 77.50 69.41 nlg2choiceopen 47.30 35.86 35.64 81.91 70.02 (-3.22) (+0.46) (-0.75) (+4.41) (+0.61) Table 8. 4-way VQA accuracy across fine-grained classifcation tasks. underperforms or has not effect on the lettering approach. Specifically, we find that accuracy changes by -0.03, +0.58, and +0.30 on average for Qwen-2.5VL, Llama-3.2V, and Intern3VL, respectively. Decreasing the choice set improves performance. We are also able to compare the 4-way VQA setting of CUB200 directly to the many-way prediction displayed in Tab. 2. We see that subsetting the full choice set to four choices incurs a performance improvement of +4.07, +19.20, and +28.83 for Qwen-2.5VL, Llama-3.2V, and Intern3VL, respectively. C. Prompt Variations "What { type } is this { domain }?" "What is the { type } of this { domain }?" "What is the { type } of the { domain }?" "What is the { type } of the { domain } in this image?" "What is the { type } of the { domain } in the image?" "Identify this { domain }’s { type }." "Name the { type } shown in the image." "Which { domain } { type } is pictured here?" "Classify the { type } of this { domain }." "What { domain } { type } does the photo depict?" "Determine the { type } of the { domain } in view." "Provide the common name of this { domain }." "To which { type } does this { domain } belong?" "Label the { type } of the { domain } shown." "Recognize and state this { domain }’s { type }." Table 9. Base template variations generated by Sec. 3.1 Dataset Type Domain CUB200 species bird Flowers species flower Aircrafts variant aircraft Cars year, make, and model car Foods name food NABirds species bird iNat-Birds species bird Table 10. Dataset variables for filling out prompt templates.
You May Speak Freely: Improving the Fine-Grained Visual Recognition Capabilities of Multimodal Large Language Models with Answer Extraction Logan Lawrence1 Oindrila Saha1 Megan Wei2 Chen Sun2 Subhransu Maji1 Grant Van Horn1 1 Brown University2 {lclawrence, osaha, smaji, {meganwei, chen Abstract Despite the renewed interest in zero-shot visual classification due to the rise of Multimodal Large Language Models (MLLMs), the problem of evaluating free-form responses of auto-regressive models remains a persistent challenge. Most existing works focus on language-only tasks or don't consider Multiple Choice Questions (MCQs) beyond 5-way options, both of which are critical capabilities to solve tasks in Fine-Grained Visual Classification (FGVC) where choice counts are in the hundreds to thousands and the choices are highly related. Furthermore, in this highly multi-way MCQ setting it is not clear how to extend LLM choice extraction to retrieval-based problems, where computing probabilities over the choice set is computationally costly. In this work we investigate nlg2choice, a simple two-stage method which first asks the MLLM an open-ended question for the task with minimal constraints, then uses text-only constrained decoding to predict the most likely choice. In retrieval settings, we compute the probability of the constrained response taking that choice with an early stopping method to significantly improve throughput. Our results show improvement over a suite of seven fine-grained visual datasets when evaluating in terms of classification and retrieval, and show that this performance holds over the various ways that users of LLMs can implement tasks in natural language. 1. Introduction Zero-shot visual classification has been revitalized by the introduction of Multimodal Large Language Models (MLLMs) [1, 9, 20, 21], where one can formulate classification datasets as Visual Question Answering (VQA) tasks with Multiple Choice Questions (MCQs) and a common prompt. However, fine-grained visual classification (FGVC) poses a barrier to traditional MCQ problems due to two issues: (1) a vastly higher number of choices (often hundreds to thousands) and (2) the niche concepts that each choice represents. Recent evaluations have found that state-of-the-art instruction-tuned MLLMs (e.g. LLaVA 1.5 free-form choice This bird is an Ivory Gull. Crested Auklet nlg2choice Ivory Gull "What is the species of this bird?" "What is the most likely species?" 15x ~ Figure 1. Proposed evaluation setup of nlg2choice. Whereas previous approaches for MLLM classification use methods like probability sequences or constrained decoding, nlg2choice first prompts the MLLM for a natural language response, then performs text-only constrained decoding on the text to extract classes. Accuracy is computed as an average over multiple rewritings of prompts as a robustness measure. Correct MLLM responses are shown in green whereas incorrect responses are red . [20, 21], InstructBLIP [9], GPT-4V [1]) suffer dramatic drops in accuracy when asked to name fine-grained categories. For instance, LLaVA-1.5 (13B) achieves nearperfect accuracy on coarse categories (like "bird" vs "not bird"), but only 1-2% accuracy on fine-grained species labels in iNaturalist [15]. Using LLMs to extract choices from pre-existing text appears to be a potential solution to this problem. Constraining LLM's output format to ease extraction can simplify scoring but might hinder the model's reasoning [35, 41, 43], 16 Oct 2025 "What is the species of bird in this image? Answer from: { full_species_list }" "What is the species of bird in this image?" "What is the species of bird in this image? Answer with only the species name." Accuracy μ = 41.05 σ = 0.74 "It's an Ivory Gull." "It's a Herring Gull." "What species of bird is this?" μ = 47.19 σ = 2.37 μ = 63.36 σ = 1.26 "What species of gull is this?" "bird" → "gull" Figure 2. Seemingly innocuous instruction changes greatly affect MLLM performance. When testing on the gull subset of CUB200[40], replacing one word can change the predictions of a model (left). When rewriting an instruction in a semantically equivalent way, performance varies substantially for Qwen-2.5VL (right). Each distribution comprises 15 semantically equivalent ways of writing the same instruction. These cases highlight the need to test LLMs over various equivalent inputs to ensure robustness. whereas allowing free-form explanations improves reasoning at the cost of harder extraction [27, 35]. Many approaches prompt a strong LLM [8, 18] such as GPT4 [2] to perform second-stage answer extraction. However, the works that propose LLMs as choice extractor are either focused only on language-only tasks [24, 44] or do not extend beyond simple lettering or numbering schemes [23]. Furthermore, no significant treatment of retrieval nor the robustness of these results under input prompt variation has been considered. In this work, we conduct a systematic analysis into constrained decoding with LLMs as answer extractors on freeform responses to fine-grained visual recognition tasks. We come to the following conclusions: 1. Answer extraction on free-form responses improves visual recognition capabilities. We find that performance improves substantially across 7 FGVC benchmarks when viewed from both classification (Tab. 2) and retrieval (Tab. 6) perspectives. It additionally improves misclassifications, with most responses being within the same genus (Tab. 3). To overcome the computational costs associated with generating probabilities over hundreds to thousands of choices, we propose an early stopping method (Sec. 3.3) which increases throughput substantially (Tab. 1). 2. Answer extraction is robust to user variation. We test robustness by generating additional semantically equivalent ways of writing an instruction (Sec. 3.1) and see that performance improvements are statistically significant when viewing user writing as a source of randomness (Fig. 3). We also show that instructions which constrict the response decrease performance (Tab. 4) and that methods which rely on forcing the LLM to generate some initial tokens to stimulate Chain-of-Thought (CoT) reasoning do not consistently increase performance (Tab. 5). 3. Constrained decoding is a reliable answer extractor without additional training. We gauge the answer extraction abilities of LLMs out-of-the-box by labeling spans and assigning answers to free-form NLG responses (Sec. A). We find that the main bottleneck is not answer extraction (Tab. 7) as studied in previous works, but a lack of extractable content from free-form responses themselves (Fig. 4). We make this data publicly available1. 2. Related Work Forcing MLLMs to Generate Valid Choices The earliest works first prompted an LLM directly for a response then did regex parsing to extract choices [42], but it was shown that direct open-ended querying often underperforms on fine-grained tasks [15]. While generating the full probability the text of each choice sequence is as an option, it remains difficult to implement in practice due to its computational cost [31]. The most dominant choice probability generation rank the log probabilities assigned to the option IDs (e.g. "A/B/C/D") from the first token prediction of the model. This approach is widely adopted in many different benchmarks and LLM evaluation studies [14, 19, 33, 34, 45]. Parallel to this, constrained decoding techniques seek to guarantee consistent outputs by guiding the generation process during sampling by constricting the output vocabulary [5, 22]. There are many open-source tools that implement this approach [10, 11, 13, 46] and while these methods ensure model responses lie within the choice set, recent evaluations have shown that they decrease performance [35, 41]. Our work sits in opposition to these recent results, where we posit that constrained decoding is advantageous in the finegrained setting where previous approaches like first token prediction fail. MLLMs as Answer Extractors Recent works have shown a mismatch difference between text-based outputs 1https://github.com/Racerx250/nlg2choice and token-probability metrics of LLMs on language-only tasks, revealing that a model's first-token likelihood often fails to correspond with its generated answer [24, 43, 44]. Evaluating a model by reading its full text answer instead of only the first-token or forced letter has been shown to improve robustness, reinforcing the value of answer extraction methods [43]. Many works use large proprietary models like GPT-4 [2] to extract answers from free-form text [8, 18, 23]. Some recent methods have addressed the problem of answer extraction directly. xFinder [47] propose a lightweight model fine-tuned specifically to map an LLM's free-form answer to the correct choice, reporting that their method outperforms a GPT-4 based extractor struggled in terms of extraction accuracy. Similarly, SLOT [41] proposes a model-agnostic, lightweight "Structured LLM Output Transformer" that is fine-tuned to post-process any LLM's free-form text into JSON that exactly matches an arbitrary schema, eliminating the need to modify or constrain the generating model itself. xVerify [7] is an efficient answer verifier for reasoning model evaluations that extracts final answers from complex, multi-step LLM outputs and performs robust equivalence checking-trained on the VAR dataset of multi-model, multi-round-annotated question-answer pairs and achieving over 95% accuracy and F1, demonstrating strong generalizability. Contrary to these approaches the answer extraction method proposed in this work requires no additional training. We argue that answer extraction is not a critical bottleneck when using constrained decoding for fine-grained visual classification and show that LLMs perform decent answer extraction out-ofthe-box on popular benchmarks. 3. Methodology 3.1. Simulating User Variation Given the brittleness of LLMs to prompt variations (Fig. 2), we wish to test the robustness of LLMs as choice extractors by generating a range of semantically equivalent writings of prompts for a given task. We do this by starting with a small set of hand-written templates for a classification task. Letting [choice list] be the list of classes for a given dataset separated by newlines, we define [templatei] to be equal to the following: (Template 1) "What is the species of bird in this image?" (Template 2) "What is the species of bird in this image? Answer only with species name and no other text." (Template 3) "What is the species of bird in this image? Answer only from the following list with no other text: [choice list]" Where the words "species" and "bird" are updated to suite the benchmark dataset. Given [templatei], we use o3-high [29] to generate 14 additional prompt variations: (Generate Variations) What are 14 semantically equivalent ways of writing the following question? Prompt: [templatei] Next, we perform a manual check on this set of prompts for semantic equivalence to the input prompt, resulting in a total of 45 variations for each dataset. For the remainder of the paper, accuracies reported are across these variations. For the full list of these prompts, see Sec. C. 3.2. nlg2choice for Classification Our approach to classification involves two stages: (1) prompting the LLM for a free-form response to a question defined by Sec. 3.1 and (2) prompting the same model again to select the most likely species given the previous response. On the second step, we enforce a valid output by using constrained decoding across the choice set. Additionally, the second stage is text-only, namely it does not see the image which produced the original response. It receives the following prompt: (Text-Only Choice) What is the most likely species of bird indicated in this response? Response: [nlg] Answer from the following: [choice list] Where [nlg] is the free-form response to the dataset instruction and [choice list] is newline-separated list of class names. Similarly, "species" and "bird" are updated accordingly for the given dataset. 3.3. nlg2choice for Retrieval In thresholding-based scenarios one desires a continuous score for every query. Most commonly this is done by taking the probability of "Yes" and "No" for each image, class pair xi ∈X, yi ∈Y, ie.: ˆyi = softmax( p("Yes"|xi, [prompt]) p("No"|xi, [prompt]) Where p(·) is the next-token probability distribution of a MLLM and [prompt] describes a two-way prediction task on one class of the dataset, eg. "Is this a [cname]?" and [cname] is the name of some class. However, this requires |X|·|Y| forward passes of the model. To improve throughput, we propose to update the second step of nlg2choice to stop generating probabilities when the remaining tokens for a given choice do not occur in any other choices. For example, take the case of "Baltimore Oriole" which can Method Full Prob Yes/No Truncated Prob CUB200 687 200 47 (+0%) (+244%) (+1362%) Flowers 257 102 12 (+0%) (+152%) (+2042%) Aircrafts 479 100 79 (+0%) (+379%) (+506%) Cars 1668 196 125 (+0%) (+751%) (+1234%) Food 269 101 24 (+0%) (+166%) (+1021%) NABirds 3294 555 701 (+0%) (+494%) (+370%) iNat-Birds 5450 1486 480 (+0%) (+267%) (+1035%) Table 1. Number of forward passes needed for different methods in the second-stage answer extraction of nlg2choice. "Full Prob" refers to the probability of a full class name p([cname]) under the LLM, where [cname] is a common name of a class. "Yes/No" refers to the common |X| · |Y| method of calculating probability over all the classes yi ∈Y. "Truncated Prob" refers to the process outlined in Sec. 3.3. The percentages in the "Yes/No" and "Truncated Prob" columns refer to the throughput increase over caculating the full sequence probabilities "Full Prob." Every setting for truncation is faster than "Yes/No" except for one benchmark - NABirds [38]. The fastest method for each benchmark is bolded. be decomposed into ["B", "altimore", " Ori", "ole"]. We can see that in the CUB200 species list there are many names which share the first token "B", ie. "Bewick Wren" or "Belted Kingfisher". However, "Baltimore Oriole" is the only choice name which has "altimore" as its second token. Thus, the probabilities for [" Ori", "ole"] are not calculated. In terms of probabilities, this is equivalent to: pfull("Baltimore Oriole") = p("B") (1) × p("altimore"|"B") × p(" Ori"|"Baltimore") × p("ole"|"Baltimore Ori") ptrunc("Baltimore Oriole") = p("B") (2) × p("altimore"|"B") ×(((((((((((( p(" Ori"|"Baltimore") × (((((((((((((( p("ole"|"Baltimore Ori") On many datasets, this change produces substantial throughput increase as many choice names can easily be determined by their first or second tokens. The special property of truncated probability sequences is that they have the same probabilities as sampling with constrained generation. This throughput increase is depicted in Tab. 1. 4. Experiments Datasets We conduct experiments over a benchmark of 7 FGVC datasets: CUB [40] (200 classes), Flowers 102 [28] (102 classes), Stanford Cars [17] (196 classes), FGVC Aircrafts [25] (100 classes) and Food101 [6] (101 classes), NABirds [38] (555 classes), and iNaturalist-Birds [39] (1486 classes). We choose these benchmarks because they are well-known for fine-grained visual classification and are traditionally difficult for zero-shot visual models to solve. iNaturalist-Birds is the validation split of the iNat2021 dataset where the the "class" field of each row is equal to "Aves," resulting in a test size of 14960. For each dataset we use the full test set and the training split remains unused. Evaluation Metrics For the classification setting we use accuracy averaged over 15 semantically equivalent prompts, whereas for the retrieval setting we use Mean Average Precision (mAP). Choice lists are the names of classes themselves rather than letters or numbers. As compared to previous works [36], we perform the full-way class prediction rather than subsetting the choices to a 4-way prediction. However, we provide an experiment evaluating this setting in Sec. B. Implementation Details Constrained decoding is implemented using the Outlines2 library [46] and classes for each setting are detailed in Sec. C. We start by generating 15 prompts described in Sec. 3.1, then replacing the key features like domain ("birds," "flowers," etc.) and class types ("species," "variant," etc.) for each dataset. This allows us to compare directly between questions for a given model or dataset, and we also test the performance of appending class lists and common CoT methods. For our experiments, we test only open-source models. Specifically, we test the Qwen-2.5VL [4], Llama-3.2 [12], and InternVL3 [49] architectures. For hyperparameters, we use the recommended generation parameters as defined by the HuggingFace card and take max new tokens to be 512. Experiments were ran in a heterogeneous environment of single-GPU nodes. All nodes required a GPU with at least 23GB VRAM, 16 CPU cores, and 64GB RAM. 5. Results 5.1. Zero-Shot Classification Performance Answer extraction is better than directly answering. In Tab. 2 we report the performance of nlg2choice across various architectures and datasets. All architectures enjoyed a bump in performance across individual datasets and on average (+17.46, +8.49, +6.87 across Qwen-2.5VL, Llama-3.2V, and Intern3), but model-specific differences can be clearly seen. Likewise, we find that removing ex2https://github.com/dottxt-ai/outlines Method Birds Flowers Aircrafts Cars Foods NABirds iNat-Birds Average Qwen-2.5VL-7B choice 40.08 51.29 44.97 26.80 57.93 37.77 17.29 39.45 nlg2choice 47.72 63.22 59.96 37.52 68.99 44.88 21.47 49.11 (+7.64) (+11.93) (+14.99) (+10.72) (+11.06) (+7.11) (+4.18) (+9.66) nlg2choiceopen 63.36 78.03 61.54 47.13 72.26 50.19 25.86 56.91 (+23.28) (+26.74) (+16.57) (+20.33) (+14.33) (+12.42) (+8.57) (+17.46) Llama-3.2-Vision-11B choice 42.27 45.68 31.38 24.68 64.70 36.69 13.82 37.03 nlg2choice 46.96 51.16 30.75 32.72 64.52 41.34 15.81 40.47 (+4.69) (+5.48) (-0.63) (+8.04) (-0.18) (+4.65) (+1.99) (+3.43) nlg2choiceopen 49.59 62.24 37.12 36.78 72.72 46.37 13.85 45.52 (+7.32) (+16.56) (+5.74) (+12.10) (+8.02) (+9.68) (+0.03) (+8.49) Intern3VL-8B choice 11.27 24.68 15.84 14.78 44.12 6.90 2.28 17.12 nlg2choice 13.81 32.52 18.63 18.34 43.36 7.06 1.95 19.38 (+2.54) (+7.84) (+2.79) (+3.56) (-0.76) (+0.16) (-0.33) (+2.26) nlg2choiceopen 18.47 37.50 21.69 22.15 51.48 12.39 4.29 24.00 (+7.20) (+12.82) (+5.85) (+7.37) (+7.36) (+5.49) (+2.01) (+6.87) Table 2. Performance across fine-grained visual classification (FGVC) benchmarks and architectures. The "Birds" dataset refers to CUB200. The first row of each model refers to using constrained decoding directly on the model across the class list of each dataset using the prompt in the header. "choice" is performing constrained decoding on the prompt "What is the {type} of {domain} in this image? Answer with {type} only." "nlg2choice" refers to letting the model freely respond to the prompt, then using constrained decoding in a text-only approach. "nlg2choiceopen" refers to removing the appended response-style instruction, eg. "Answer with {type} only." All models are their instructed-tuned version. Best performance is bolded for each model. Method Genus Matches % of Data Qwen-2.5VL-7B choice 45.99 59.92 nlg2choice 56.57 (+10.56) 52.28 nlg2choiceopen 62.74 (+16.75) 36.64 Llama-3.2-Vision-11B choice 42.52 57.73 nlg2choice 61.71 (+19.19) 53.04 nlg2choiceopen 66.37 (+23.85) 50.94 Intern3VL-8B choice 36.77 88.73 nlg2choice 48.79 (+12.02) 86.19 nlg2choiceopen 60.36 (+23.59) 81.53 Table 3. Accuracy of genus-level predictions on misclassified examples. "Genus Matches" refers to the accuracy of model with respect to the translating each species to its corresponding genus. "% of Data" refers to the amount of data misclassified by the model and is equivalent to 1 -accuracy in Tab. 2. The species to genus mapping manually created. plicit answer formatting from the model (changing from "nlg2choice" and "nlg2choiceopen"). We find the Aves subset of iNaturalist [39] to be the hardest among all the benchmarks, resulting in the smallest increase in performance across all models. For model-specific differences, we find that Qwen2.5VL performs the best on average using nlg2choice, having an average accuracy of 56.91 across the 7 datasets. We also find that Qwen-2.5VL experiences the greatest gain, having an average accuracy difference of +17.46 from the base constrained decoding method to nlg2choiceopen. Comparatively, we see that Intern3-VL performs the worst across architectures, having the overall smallest increase when using nlg2choiceopen (+6.87) and the lowest performance (24.00). However, we do still see improvements over all benchmarks. Misclassifications are more aligned with ground truth. A natural question is to ask how the classifications qualitatively change under nlg2choice. For an example, take Fig. 1 where the ground truth label is "Ivory Gull": under constrained decoding the model responds with "Crested Auklet" whereas a higher quality misclassificaiton would be "Herring Gull", "California Gull", ..., ie. a class within the same genus as Ivory Gull. In Tab. 3 we qualitatively gauge how misclassifications change when using nlg2choice by measuring its genus-level performance. We construct a species to genus mapping on the CUB200 dataset manually, then map the species prediction in previous results to genus. Simply put, we find that nlg2choice makes more reasonable errors, switching to nlg2choice incurs a genus-level performance improvement of +16.75, +23.85, +23.59 for Qwen-2.5VL, Llama-3.2V-11B, and Intern3VL-8B. Δ Accuracy -0.05 0.00 0.05 Intern3VL-8B Llama-3.2V-11B Qwen-2.5VL-8B 0.10 0.15 0.20 0.25 0.30 μ = 7.38 σ = 1.69 μ = 7.72 σ = 2.95 μ = 22.06 σ = 1.92 Figure 3. Difference in accuracies on the question-level between nlg2choice and constrained decoding. Each distribution is made up of the performance difference when switching to nlg2choice over 15 questions. nlg2choice uses an open-ended prompt whereas constrained decoding uses an instruction to increase brevity ("... Answer with species only."). Improvements are statistically significant. Rather than ask about the average performance of different methods under user variation, we wish to quantify how much performance is gained for a specific question by switching to nlg2choice. We calculate this by taking the difference between the performance of constrained decoding and nlg2choiceopen for each question, then compute summary statistics over the set of questions. This forms a distribution of changes in accuracy, with significant results having a positive mean and 95% confidence interval. The performance difference for the various settings in outlined in Fig. 3. We find that for the proposed settings, switching from constrained decoding with an instruction to increase brevity ("... Answer with species only.") to nlg2choiceopen comes with an average μ = 22.06, 7.38, 7.72 percentage increase in accuracy for Qwen-2.5VL, Llama-3.2V-11B, and Intern3VL-8B, respectively. Additionally, we find that these increases are within the bounds of statistical significance (σ = 1.92, 1.69, 2.95). Constrictive instructions reduce performance. A common means of steering models to produce valid choices is through appending explicit instructions onto the instruction which enumerate the available choices, typically by means of lettering or enumeration. We wish to quantify the effect of adding this explicit instruction to the model. In Tab. 4 we show the performance of constrained decodMethod Birds Flowers "What is the {type} of {domain} in this image?" Qwen-2.5VL 0.92 2.53 + nlg2choice 63.36 (+62.44) 78.03 (+75.5) Llama-3.2-Vision 7.67 27.50 + nlg2choice 49.59 (+41.92) 62.24 (+34.74) Intern3-VL 2.28 5.01 + nlg2choice 18.47 (+16.19) 37.5 (+32.49) "... Answer with {type} only." Qwen-2.5VL 40.08 51.29 + nlg2choice 47.72 (+7.64) 63.22 (+11.93) Llama-3.2-Vision 42.27 45.68 + nlg2choice 46.96 (+4.69) 51.16 (+5.48) Intern3-VL 11.27 24.68 + nlg2choice 13.81 (+2.54) 32.52 (+7.84) "... Answer from {choice list}." Qwen-2.5VL 38.71 69.19 + nlg2choice 41.05 (+2.34) 72.48 (+3.29) Llama-3.2-Vision 13.50 44.83 + nlg2choice 46.68 (+33.18) 66.03 (+21.2) Intern3-VL 22.34 35.24 + nlg2choice 23.48 (+1.14) 35.73 (+0.49) Table 4. Classification performance across instruction templates with varying levels of constrictiveness. The model name refers to default constrained decoding. The difference in performance is shown in parentheses. The best performing model and method for each input instruction is bolded. "nlg2choiceopen" corresponds to the top section of the table. ing and nlg2choice under three variations of instruction steering: (1) no explicit choice steering ("What is the {type} of {domain} in this image?"), (2) encouraging succinct prediction ("... Answer with {type} only"), and (3) explicitly enumerating choices ("... Answer from {choice list}"). We see that for every template, switching from constrained decoding to nlg2choice improves performance. The most stark setting is the most open-ended prompt, where we see an average increase of +68.97, +38.33, and +24.34 for Qwen-2.5VL, Llama-3.2V, and Intern3-VL, respectively. Surprisingly, explicit choice instruction shows the weakest improvement for Qwen-2.5VL and Intern3-VL (+2.82 and +0.81) but strong improvement for Llama-3.2V (+27.19). Otherwise, keeping in line with previous results, we find that Qwen-2.5VL is the greatest benefactor of performance. CoT doesn't consistently improve performance. Next, we ask how much performance can be improved by using CoT-style instructions. We investigate methods which induce reasoning by forcing the model to generate some initial tokens that are common to thought-out reasoning (eg. "Let's think step by step"), then letting the model freely respond. In Tab. 5 we choose three of Method Birds Flowers "What is the {type} of {domain} in this image?" Qwen-2.5VL 63.36 78.03 Llama-3.2-Vision 49.59 62.24 Intern3-VL 18.47 37.50 ... "Let's think step by step." [16] Qwen-2.5VL 54.93 (-8.43) 70.16 (-7.87) Llama-3.2-Vision 39.87 (-9.72) 39.22 (-23.02) Intern3-VL 19.11 (+0.64) 30.82 (-6.68) ... "First," [3] Qwen-2.5VL 53.51 (-9.85) 72.80 (-5.23) Llama-3.2-Vision 41.84 (-7.75) 45.31 (-16.93) Intern3-VL 19.65 (+1.18) 33.46 (-4.04) ... "Let's solve this problem by splitting it into steps." [30] Qwen-2.5VL 54.69 (-8.67) 59.56 (-18.47) Llama-3.2-Vision 40.11 (-9.48) 38.08 (-24.16) Intern3-VL 19.68 (+1.21) 31.06 (-6.44) Table 5. Classification performance when using assistant output prepending CoT methods. All results are reported using nlg2choice as an answer extraction method. The proposed setting is the first section in gray. Forced assistant prepending text is shown in blue . these methods and report their performance and difference between the default nlg2choiceopen setting. In general, we see that these methods do not consistently improve performance, resulting in an average decrease of -9.75, -15.18, and -2.36 across Qwen-2.5VL, Llama-3.2V, and Intern3-VL, respectively. The one exception to this rule is the CUB200 dataset with Intern3-VL, where we see an average increase of +1.01. 5.2. Zero-Shot Retrieval Performance Answer extraction is better than directly answering. In Tab. 6, we report the performance of various models when viewing FGVC datasets as a one-vs-rest retrieval task. Like the classification setting, we find that when compared to exhaustive "Yes/No" questioning, nlg2choice generally improves with the exception of the Stanford Cars [17], in which performance across all models decreases. We again find Qwen-2.5VL to receive the greatest improvement, with +8.16 average mAP increase across the 5 datasets. Likewise, we find Intern3-VL receives the least improvement, with an average increase of +4.43. Unlike classification, we do not find that changing the prompt to be open-ended (nlg2choiceopen, ie. removing "Answer from {species list}" from the end of the input prompt) does not uniformly increase performance. However, we do still find that open-ended prompts generally perform better on average with nlg2choice (+8.16 vs. +6.96) and are less variable across all models. Method Birds Flowers Aircrafts Cars Foods Baseline Random 0.64 1.24 1.23 0.62 1.02 Qwen-2.5VL-7B Yes/No 51.01 61.19 51.07 77.58 82.67 nlg2choice 57.79 82.56 65.14 68.61 84.30 (+6.78) (+21.37) (+14.07) (-8.97) (+1.63) nlg2choiceopen 59.01 86.42 64.03 69.82 85.05 (+8.00) (+25.23) (+12.96) (-7.76) (+2.38) Llama-3.2-Vision-11B Yes/No 41.24 63.59 34.24 66.03 78.81 nlg2choice 51.14 68.64 39.39 56.46 86.73 (+9.90) (+5.05) (+5.15) (-9.57) (+7.92) nlg2choiceopen 52.61 74.55 40.00 59.35 88.84 (+11.37) (+10.96) (+5.76) (-6.68) (+10.03) Intern3VL-8B Yes/No 15.22 38.32 17.93 31.83 61.46 nlg2choice 26.97 45.13 20.47 25.81 66.55 (+11.75) (+6.81) (+2.54) (-6.02) (+5.09) nlg2choiceopen 30.51 41.12 20.69 26.84 67.76 (+15.29) (+2.80) (+2.76) (-4.99) (+6.30) Table 6. Performance across fine-grained visual classification (FGVC) benchmarks and architectures when interpreted as retrieval tasks. Numbers represent Mean Average Precision (mAP). Every query is a one-vs-rest task, eg. for CUB200 (Birds) the prior for a single PR curve is 1/200. 5.3. Answer Extraction Performance In Sec. A we detail the process of labeling data for species extraction from free-form responses. First, labelers are shown responses from various models to a dataset instruction and image, then are instructed to highlight the first incidence of the predicted species within the generated text. Then, they are instructed to confirm whether the species highlighted matches the choice selection by the model, and in the failure case to indicate the species predicted or whether there was a schema failure. This process results in a set of common outcomes, ie. "schema failure," "no species predicted," "answer={species}". In Fig. 4 we show the makeup of the labeled data in terms of these various outcomes. Models often respond out of schema. Depicted in Fig. 4, we see that 34.64% of free-form responses by models have an answer within their text that does not occur within the dataset schema. In other words, the worst-case scenario is that only 65.35% of examples are correctly classifiable. However, this turns out to not be the case due to the fact that a large percentage of these responses have real species within them (70.75% of failures) which can be resolved to the correct answer, but that the predicted species itself cannot be easily put within the given schema. We hypotheFigure 4. Breakdown of the small labeled set of natural language responses. "Other" contains two main categories, "Refused to answer" and "No discernable information." Examples of these are given in Sec. A. size that many of these responses are grouped into the correct answer due to either genus-level answers in the dataset, eg. CUB200's labels include the genus "Mockingbird" which matches to "Northern Mockingbird" within the free-form text, or being a closely-related but technically different species, eg. "Glaucous-winged Gull" in schema versus "Glaucous Gull" in free-form text. Motivation of different methods. In Tab. 7 we show the performance of main architectures in terms of extraction performance over various methods when sub-setting the data to examples where an answer within the schema is clearly apparent. We include two additional variations: (1) nlg2nlg, the free-form response given the instruction to rephrase the MLLM answer and (2) nlg2nlg2choice, the constrained decoding off one extra round of rephrasing. We include nlg2nlg due to the fact that we wish to test the direct distance to the highlighted text created during labeling and include nlg2nlg2choice to gauge the distance between the interpretation of that text into an answer. LLMs are decent answer extractors out-of-the-box. In the default nlg2choiceopen setting, we find high answer extraction success rates for Qwen-2.5VL (97.93) and Intern3-VL (93.26), but lower results for Llama-3.2V (79.02). We find that the raw text extracted by the models ("nlg2nlgexact") seldom fits neatly into the dataset schema, having large performance drops (-16.47, 50.77, -25.91). However, the output of nlg2nlgexact retains the relevant information from the original free-form text, as indicated by the nlg2nlg2choice scores which has much smaller perMethod Qwen2.5VL Llama3.2V Intern3VL Overall xFinder [47] Qwen-1.5-0.5B [37] 92.05 88.67 71.76 86.27 Llama-3-8B-Instruct [12] 80.13 80.67 75.29 79.27 Qwen-2.5VL 8B nlg2choice 98.01 98.67 96.47 97.93 nlg2nlgexact 80.79 84.00 78.82 81.61 (-17.22) (-14.67) (-17.65) (-16.32) nlg2nlg2choice 98.01 97.33 100.00 98.19 (+0.00) (-1.34) (+3.53) (+0.26) Llama-3.2-Vision 11B nlg2choice 77.48 82.00 76.47 79.02 nlg2nlgexact 31.79 26.00 25.88 28.24 (-45.69) (-56.00) (-50.59) (-50.78) nlg2nlg2choice 76.82 80.00 70.59 76.68 (-0.66) (-2.00) (-5.88) (-2.34) Intern3-VL 8B nlg2choice 94.04 94.00 90.59 93.26 nlg2nlgexact 59.60 78.67 62.35 67.62 (-34.44) (-15.33) (-28.24) (-25.64) nlg2nlg2choice 90.06 85.33 90.59 88.34 (-3.98) (-8.67) (+0.00) (-4.92) Table 7. Performance of various MLLMs across a small labeled set. Differences are measured from the "nlg2choice" row of each model. formance differences (+0.61, -2.72, and -4.39). We hypothesize that the nlg2nlg text still contains artifacts such as "The species is ..." or scientific names, which lower exact match performance. 6. Limitations This work has three main limitations. First, our experiments were conducted primarily with medium-sized models (8B11B) and we only used open-source LLMs from a few different families: Qwen, Llama, and Intern. Secondly, while nlg2choice avoids the need for specialized training data or architectural modifications and is more aligned with the true responses of the model, the method still requires significant computational resources to generate and process the textual descriptions. Lastly, while nlg2choice shows promise in classification, it still remains to be seen whether the approach scales to multi-label problems. 7. Conclusion In this work, we presented nlg2choice, a simple approach for zero-shot fine-grained visual classification and retrieval that extracts fine-grained answers from MLLM free-form responses. Through experiments across multiple architectures and datasets, we demonstrated that nlg2choice consistently outperforms constrained decoding and exhaustive questioning methods in both classification accuracy and retrieval performance. We also validated the answer extraction directly through labeling predicted species in unstructured text, where we showed that models are capable of extraction their own answers without any labeled data. References [1] Gpt-4v(ision) system card. 2023. 1 [2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint , 2023. 2, 3 [3] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint , 2022. 7 [4] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2. 5-vl technical report. arXiv preprint , 2025. 4 [5] Luca Beurer-Kellner, Marc Fischer, and Martin Vechev. Guiding llms the right way: Fast, non-invasive constrained generation. arXiv preprint , 2024. 2 [6] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101-mining discriminative components with random forests. In Computer vision-ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, proceedings, part VI 13, pages 446-461. Springer, 2014. 4 [7] Ding Chen, Qingchen Yu, Pengyuan Wang, Wentao Zhang, Bo Tang, Feiyu Xiong, Xinchi Li, Minchuan Yang, and Zhiyu Li. xverify: Efficient answer verifier for reasoning model evaluations. arXiv preprint , 2025. 3 [8] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, march 2023. URL https://lmsys. org/blog/2023-03-30-vicuna, 3(5), 2023. 2, 3 [9] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards generalpurpose vision-language models with instruction tuning, 2023. 1 [10] Yixin Dong, Charlie F Ruan, Yaxing Cai, Ruihang Lai, Ziyi Xu, Yilong Zhao, and Tianqi Chen. Xgrammar: Flexible and efficient structured generation engine for large language models. arXiv preprint , 2024. 2 [11] Georgi Gerganov. llama.cpp, 2023. Software. 2 [12] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The llama 3 herd of models. arXiv preprint , 2024. 4, 8 [13] GuidanceAI. Guidance, 2023. Software. 2 [14] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint , 2020. 2 [15] Jeonghwan Kim and Heng Ji. Finer: Investigating and enhancing fine-grained visual concept recognition in large vision language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6187-6207, Miami, Florida, USA, 2024. Association for Computational Linguistics. 1, 2 [16] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199-22213, 2022. 7 [17] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on computer vision workshops, pages 554-561, 2013. 4, 7 [18] Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models, 2023. 2, 3 [19] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint , 2022. 2 [20] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36:34892-34916, 2023. 1 [21] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, 2024. 1 [22] Michael Xieyang Liu, Frederick Liu, Alexander J Fiannaca, Terry Koo, Lucas Dixon, Michael Terry, and Carrie J Cai. " we need structured output": Towards user-centered constraints on large language model output. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pages 1-9, 2024. 2 [23] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216-233. Springer, 2024. 2, 3 [24] Chenyang Lyu, Minghao Wu, and Alham Fikri Aji. Beyond probabilities: Unveiling the misalignment in evaluating large language models. arXiv preprint , 2024. 2, 3 [25] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint , 2013. 4 [26] George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, 1995. 11 [27] Francesco Maria Molfese, Luca Moroni, Luca Gioffr ́e, Alessandro Scir`e, Simone Conia, and Roberto Navigli. Right answer, wrong score: Uncovering the inconsistencies of llm evaluation in multiple-choice question answering. arXiv preprint , 2025. 2 [28] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian conference on computer vision, graphics & image processing, pages 722-729. IEEE, 2008. 4 [29] OpenAI. Openai o3 and o4-mini system card. 2025. 3 [30] Laria Reynolds and Kyle McDonell. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended abstracts of the 2021 CHI conference on human factors in computing systems, pages 1-7, 2021. 7 [31] Joshua Robinson, Christopher Michael Rytting, and David Wingate. Leveraging large language models for multiple choice question answering, 2023. 2 [32] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211-252, 2015. 11 [33] Shibani Santurkar, Esin Durmus, Faisal Ladhak, Cinoo Lee, Percy Liang, and Tatsunori Hashimoto. Whose opinions do language models reflect? In International Conference on Machine Learning, pages 29971-30004. PMLR, 2023. 2 [34] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adri`a GarrigaAlonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint , 2022. 2 [35] Zhi Rui Tam, Cheng-Kuang Wu, Yi-Lin Tsai, Chieh-Yen Lin, Hung-yi Lee, and Yun-Nung Chen. Let me speak freely? a study on the impact of format restrictions on large language model performance. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 1218-1236, 2024. 1, 2 [36] Yuwen Tan, Yuan Qing, and Boqing Gong. Vision llms are bad at hierarchical visual understanding, and llms are the bottleneck. arXiv preprint , 2025. 4, 11 [37] Qwen Team. Introducing qwen1.5, 2024. 8 [38] Grant Van Horn, Steve Branson, Ryan Farrell, Scott Haber, Jessie Barry, Panos Ipeirotis, Pietro Perona, and Serge Belongie. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 595-604, 2015. 4 [39] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769-8778, 2018. 4, 5, 11 [40] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011. 2, 4 [41] Darren Yow-Bang Wang, Zhengyuan Shen, Soumya Smruti Mishra, Zhichao Xu, Yifei Teng, and Haibo Ding. Slot: Structuring the output of large language models. arXiv preprint , 2025. 1, 2, 3 [42] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023. 2 [43] Xinpeng Wang, Chengzhi Hu, Bolei Ma, Paul R ̈ottger, and Barbara Plank. Look at the text: Instruction-tuned language models are more robust multiple choice selectors than you think. arXiv preprint , 2024. 1, 3 [44] Xinpeng Wang, Bolei Ma, Chengzhi Hu, Leon WeberGenzel, Paul R ̈ottger, Frauke Kreuter, Dirk Hovy, and Barbara Plank. "my answer is c": First-token probabilities do not match text answers in instruction-tuned language models. arXiv preprint , 2024. 2, 3 [45] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. Mmlu-pro: A more robust and challenging multi-task language understanding benchmark. Advances in Neural Information Processing Systems, 37:95266-95290, 2024. 2 [46] Brandon T Willard and R ́emi Louf. Efficient guided generation for large language models. arXiv preprint , 2023. 2, 4 [47] Qingchen Yu, Zifan Zheng, Shichao Song, Zhiyu Li, Feiyu Xiong, Bo Tang, and Ding Chen. xfinder: Robust and pinpoint answer extraction for large language models. arXiv preprint , 2024. 3, 8 [48] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975-11986, 2023. 11 [49] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, et al. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models. arXiv preprint , 2025. 4 A. Labeling Species Answer Extraction Data Motivation. For labeling answer extraction data, we prepare a user interface built with Label Studio which shown in Fig. 5. On the right contains information on the inputs to the model, specifically the question, image, and ground truth label. On the left is main interface, which consists a free-form response from a model to the example displayed on the right, as well as the choice predicted by the model via nlg2choice. Labeling consists of two pieces: (1) identification and highlighting of text spans containing answers to the input question and (2) matching the highlighted text span to a class within the given schema. Text span labeling. For the text span labeling task, labelers read through the text to identify the first incidence of a predicted species. Before highlighting the species found within the text, the prediction is confirmed to be a real species through a brief internet search. If there are any issues with highlighting the predicted species, a text description of the issue is written in the input text box at the bottom and the example is skipped. Figure 5. Labeling interface for efficacy of the answer extraction stage of nlg2choice. Labelers are asked to highlight the name predicted in free-form responses from various models and whether the name occurs easily within the dataset choice list. Above shows an interesting example: for the Chipping Sparrow on the right, the model freely responded with "Painted Bunting" and answer extraction made an additional mistake by predicting "Scarlet Tanager." Choice assignment. Once the span is labeled, we next wish to either assign the span to a class within the dataset or tag the span as a schema failure. For expediency, the labelers are also equipped with the nlg2choice prediction and a tool for sorting most like class names with fuzzy matching score. When the nlg2choice predict matches the text span, the prediction is "Consistent with nlg" and the labelers check the button. Otherwise, "Inconsistent with nlg" is checked and a natural language explanation is put in the bottom text box. When the nlg2choice is inconsistent, "answer={species}" is put within that text box. B. Zero-Shot 4-Way VQA Performance Answer extraction does not consistently increase performance. Following recent work [36], we evalute nlg2choice on popular fine-graiend classification datasets as a 4-way VQA task. iNat-Animal and iNat-Plant are the rows of iNat21 [39] when subset to rows which whose kingdom matches "Animalia" and "Plantae." ImgNet-Animal and ImgNet-Artifact are the rows of ImageNet-1K [32] subset to rows which have an anceter of "Animal" and "Artifact" in the WordNet hierarchy [26]. The choices for each example are from the top 3 SigLIP [48] predictions and the correct class. For the nlg2choice prompt we use the same prompt as the previous work. We report the open-ended question performance in Tab. 8 where we find that it slightly Method CUB200 iNat-Animal iNat-Plant ImgNet-Animal ImgNet-Artifact Qwen-2.5VL-7B A/B/C/D 65.50 41.33 41.61 85.20 80.01 nlg2choiceopen 67.43 40.19 40.03 84.03 81.81 (+1.93) (-1.14) (-1.58) (-1.17) (+1.80) Llama-3.2-Vision-11B A/B/C/D 65.52 32.44 31.88 79.93 75.89 nlg2choiceopen 68.79 31.52 31.57 81.39 75.29 (+3.27) (-0.92) (-0.31) (+1.46) (-0.60) Intern3VL-8B A/B/C/D 50.52 35.40 36.39 77.50 69.41 nlg2choiceopen 47.30 35.86 35.64 81.91 70.02 (-3.22) (+0.46) (-0.75) (+4.41) (+0.61) Table 8. 4-way VQA accuracy across fine-grained classifcation tasks. underperforms or has not effect on the lettering approach. Specifically, we find that accuracy changes by -0.03, +0.58, and +0.30 on average for Qwen-2.5VL, Llama-3.2V, and Intern3VL, respectively. Decreasing the choice set improves performance. We are also able to compare the 4-way VQA setting of CUB200 directly to the many-way prediction displayed in Tab. 2. We see that subsetting the full choice set to four choices incurs a performance improvement of +4.07, +19.20, and +28.83 for Qwen-2.5VL, Llama-3.2V, and Intern3VL, respectively. C. Prompt Variations "What { type } is this { domain }?" "What is the { type } of this { domain }?" "What is the { type } of the { domain }?" "What is the { type } of the { domain } in this image?" "What is the { type } of the { domain } in the image?" "Identify this { domain }'s { type }." "Name the { type } shown in the image." "Which { domain } { type } is pictured here?" "Classify the { type } of this { domain }." "What { domain } { type } does the photo depict?" "Determine the { type } of the { domain } in view." "Provide the common name of this { domain }." "To which { type } does this { domain } belong?" "Label the { type } of the { domain } shown." "Recognize and state this { domain }'s { type }." Table 9. Base template variations generated by Sec. 3.1 Dataset Type Domain CUB200 species bird Flowers species flower Aircrafts variant aircraft Cars year, make, and model car Foods name food NABirds species bird iNat-Birds species bird Table 10. Dataset variables for filling out prompt templates.